Update Proxy config watchdog implementation

I’ve got apt-cacher-ng set up for all Linux boxes on a specific LAN segment, so whenever I’m connected to that network, I’d like to use it.

The goal is to downgrade all HTTPS repos to HTTP and drop in a proxy config file whenever a specific wifi network is joined. For the time being, the detection is a simple health check using curl.

I’ve slapped together this primitive script as a proof of concept on Debian templates. Yes, it’s ugly and overly complicated; I plan on cleaning it up later.

#!/bin/bash

# Configuration
CACHER_HOST="your-apt-cacher-ng-server"
CACHER_PORT=3142
PROXY_URL="http://${CACHER_HOST}:${CACHER_PORT}"
APT_PROXY_CONF="/etc/apt/apt.conf.d/99proxy"
SOURCES_LIST="/etc/apt/sources.list"
SOURCES_DIR="/etc/apt/sources.list.d"
APT_CONF_DIR="/etc/apt/apt.conf.d"
BACKUP_DIR="/var/tmp/apt_sources_backup"
CHANGED_FILES_LOG="/var/tmp/apt_sources_changed.log"
INFO_LOG="/var/log/proxy_watchdog.log"
TEMP_DIR="/var/tmp/apt_sources_temp"

# Ensure root privileges
[ "$(id -u)" -ne 0 ] && { echo "Error: Must run as root" >&2; exit 1; }

# Validate CACHER_HOST
[ "$CACHER_HOST" = "your-apt-cacher-ng-server" ] && { echo "Error: CACHER_HOST not set" >&2; exit 1; }

# Create and verify directories
for dir in "$BACKUP_DIR" "$TEMP_DIR" "$APT_CONF_DIR" "$SOURCES_DIR" "$(dirname "$INFO_LOG")"; do
    mkdir -p "$dir" || { echo "Error: Failed to create $dir" >&2; exit 1; }
    [ -w "$dir" ] || { echo "Error: $dir not writable" >&2; exit 1; }
done

# Refuse to run on a read-only mount and require some free space
# (1 MiB is plenty for these small files)
findmnt -no OPTIONS --target "$APT_CONF_DIR" | grep -qE '(^|,)ro($|,)' && { echo "Error: $APT_CONF_DIR is read-only" >&2; exit 1; }
for dir in "$BACKUP_DIR" "$TEMP_DIR"; do
    [ "$(df --output=avail "$dir" | tail -n 1)" -ge 1024 ] || { echo "Error: Insufficient disk space for $dir" >&2; exit 1; }
done

# Helper for file operations
do_file_op() {
    local op="$1" src="$2" dst="$3"
    "$op" "$src" "$dst" || { echo "Error: $op $src to $dst failed" >&2; return 1; }
}

# Check if apt-cacher-ng is responsive
check_cacher_status() {
    curl -s -I --connect-timeout 5 --max-time 10 "$PROXY_URL/" &>/dev/null
}

# Clean up duplicate backups
cleanup_duplicates() {
    local checksum_file checksum file
    checksum_file=$(mktemp)
    find "$BACKUP_DIR" -type f -exec sha256sum {} \; | sort >"$checksum_file"
    declare -A seen_checksums
    while read -r checksum file; do
        if [ -n "${seen_checksums[$checksum]}" ]; then
            # Duplicate content: keep whichever copy is newer
            if [ "$file" -nt "${seen_checksums[$checksum]}" ]; then
                rm "${seen_checksums[$checksum]}"
                seen_checksums[$checksum]="$file"
            else
                rm "$file"
            fi
        else
            seen_checksums[$checksum]="$file"
        fi
    done <"$checksum_file"
    rm "$checksum_file"
}

# Process a file (sources or proxy)
process_file() {
    local file="$1" type="$2"
    local backup_file="$BACKUP_DIR/$(basename "$file").bak"
    local temp_file="$TEMP_DIR/$(basename "$file").temp"

    [ "$type" = "proxy" ] && [ "$file" = "$APT_PROXY_CONF" ] && return 1
    do_file_op cp "$file" "$temp_file" || return 1
    case "$type" in
        sources)
            # Comment out each https entry and append an http twin below it;
            # handles deb/deb-src and option blocks like [arch=amd64]
            sed -E '/^#/! { /tor\+/! s/^(deb(-src)?([[:space:]]+\[[^]]*\])?[[:space:]]+)https(.*)$/#[Proxy Watchdog] \1https\4\n\1http\4/ }' "$temp_file" >"$temp_file.new" || {
                echo "Error: sed failed for $temp_file" >&2
                rm "$temp_file.new" 2>/dev/null
                return 1
            }
            ;;
        proxy)
            # Comment out any existing http/https proxy directives
            sed -E 's/^(Acquire::https?::Proxy.*)$/#[Proxy Watchdog] \1/' "$temp_file" >"$temp_file.new" || {
                echo "Error: sed failed for $temp_file" >&2
                rm "$temp_file.new" 2>/dev/null
                return 1
            }
            echo "Acquire::http::Proxy \"$PROXY_URL\";" >>"$temp_file.new"
            ;;
    esac
    do_file_op mv "$temp_file.new" "$temp_file" || return 1

    if ! cmp -s "$file" "$temp_file"; then
        do_file_op cp "$file" "$backup_file" || return 1
        do_file_op mv "$temp_file" "$file" || return 1
        chmod 644 "$file"
        echo "$(date '+%Y-%m-%d %H:%M:%S')|$file|$backup_file" >>"$CHANGED_FILES_LOG"
        echo "$(date '+%Y-%m-%d %H:%M:%S') Updated $file" >>"$INFO_LOG"
        return 0
    fi
    rm "$temp_file" 2>/dev/null
    return 1
}

# Enable proxy and update sources
enable_proxy() {
    cleanup_duplicates
    if [ -f "$CHANGED_FILES_LOG" ]; then
        grep -qE '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\|[^|]*\|[^|]*$' "$CHANGED_FILES_LOG" || {
            : >"$CHANGED_FILES_LOG"
            echo "$(date '+%Y-%m-%d %H:%M:%S') Cleared malformed $CHANGED_FILES_LOG" >>"$INFO_LOG"
        }
    fi

    # Cache proxy and source files
    local proxy_files=() source_files=()
    if compgen -G "$APT_CONF_DIR/*.conf" >/dev/null; then
        for file in "$APT_CONF_DIR"/*.conf; do
            grep -qE 'Acquire::(http|https)::Proxy' "$file" && proxy_files+=("$file")
        done
    fi
    [ -f "$SOURCES_LIST" ] && source_files+=("$SOURCES_LIST")
    compgen -G "$SOURCES_DIR/*.list" >/dev/null && source_files+=("$SOURCES_DIR"/*.list)

    # Process proxy files
    for file in "${proxy_files[@]}"; do
        process_file "$file" proxy
    done

    # Create main proxy config
    echo "Acquire::http::Proxy \"$PROXY_URL\";" >"$APT_PROXY_CONF" && chmod 644 "$APT_PROXY_CONF" || {
        echo "Error: Failed to create $APT_PROXY_CONF" >&2
        return 1
    }
    echo "$(date '+%Y-%m-%d %H:%M:%S') Proxy config updated to: $PROXY_URL" >>"$INFO_LOG"

    # Process source files
    for file in "${source_files[@]}"; do
        process_file "$file" sources
    done
}

# Disable proxy and revert changes
disable_proxy() {
    if [ -e "$APT_PROXY_CONF" ]; then
        [ -f "$APT_PROXY_CONF" ] || { echo "Error: $APT_PROXY_CONF is not a regular file" >&2; return 1; }
        rm "$APT_PROXY_CONF" || { echo "Error: Failed to remove $APT_PROXY_CONF" >&2; return 1; }
    fi
    echo "$(date '+%Y-%m-%d %H:%M:%S') Proxy disabled" >>"$INFO_LOG"

    if [ -f "$CHANGED_FILES_LOG" ] && [ -s "$CHANGED_FILES_LOG" ]; then
        local valid_log
        valid_log=$(mktemp)
        grep -E '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\|[^|]*\|[^|]*$' "$CHANGED_FILES_LOG" >"$valid_log" || {
            echo "$(date '+%Y-%m-%d %H:%M:%S') No valid entries in $CHANGED_FILES_LOG" >>"$INFO_LOG"
            rm "$valid_log"
            return 0
        }
        local restore_errors=()
        while IFS='|' read -r timestamp original backup; do
            [ -z "$original" ] || [ -z "$backup" ] || [ ! -f "$backup" ] && {
                echo "Warning: Skipping invalid backup entry: $timestamp|$original|$backup" >&2
                continue
            }
            if do_file_op cp "$backup" "$original" && chmod 644 "$original"; then
                echo "$(date '+%Y-%m-%d %H:%M:%S') Restored $original" >>"$INFO_LOG"
            else
                restore_errors+=("$original:$backup")
            fi
        done <"$valid_log"

        if [ ${#restore_errors[@]} -eq 0 ]; then
            while IFS='|' read -r timestamp original backup; do
                rm "$backup" 2>/dev/null
            done <"$valid_log"
            rm "$CHANGED_FILES_LOG" 2>/dev/null
            rm "$valid_log"
        else
            echo "Error: Restore failures, backups retained:" >&2
            for err in "${restore_errors[@]}"; do echo "  - $err" >&2; done
            rm "$valid_log"
            return 1
        fi
    fi
    cleanup_duplicates
}

# Main logic
if check_cacher_status; then
    enable_proxy || exit 1
else
    disable_proxy || exit 1
fi

rm -rf "$TEMP_DIR" 2>/dev/null

What’s the appropriate way to slap this on all templates and HVMs? Is that even the right way? It would be much less painful across Qubes deployments if I could script that too. Any help would be appreciated.

I think you are looking for salt?

You can use your script, but from my (admittedly limited) understanding, salt lets you simplify a lot of things like permissions management, file management, etc.

Salt is definitely on the short list of deployment methods. I’m working on a dom0 bash script, as that’s more in line with my experience. The Qubes Global Config doesn’t offer much customization of the Update Proxy configuration, and the avahi advertisements from apt-cacher-ng aren’t helpful here.
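
For the dom0 side, I’m picturing something like this (untested sketch; it assumes the watchdog is saved as proxy-watchdog.sh in dom0 and that every template should get a copy):

#!/bin/bash
# dom0 sketch: push the watchdog into each template via qvm-run --pass-io
# (qvm-run will start a template if it isn't already running)
for tpl in $(qvm-ls --raw-list --class TemplateVM); do
    qvm-run --pass-io -u root "$tpl" \
        'cat > /usr/local/sbin/proxy-watchdog.sh' < proxy-watchdog.sh
    qvm-run -u root "$tpl" 'chmod 755 /usr/local/sbin/proxy-watchdog.sh'
done

Wiring it to a timer or a network-change hook inside each template is still an open question.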

Also, I have @unman’s cacher build on my production Qubes machine. Works great, just not suited to my use case.

Just to be clear: you can of course keep the bash script while using a very simple salt state like this:

deploy-cacher:
  cmd.script:
    - source: salt://your-bash-script.sh
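
Assuming that lives at /srv/salt/deploy-cacher.sls in dom0 (the path is my guess at a sensible spot), you’d apply it to your templates with something like:

sudo qubesctl --skip-dom0 --targets=debian-12 state.apply deploy-cacher

where debian-12 is whichever template(s) you care about, comma-separated.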

I think it’d be better to:

  • persistently change the repo links from https:// to http://HTTPS/// (apt-cacher-ng’s pass-through syntax for TLS mirrors, e.g. deb http://HTTPS///deb.debian.org/debian bookworm main)
  • create a cacher qube and set it as the updates proxy
  • create a qrexec service that lets sys-net trigger, in the cacher qube, the switch between the local and the remote apt-cacher-ng:
    For local apt-cacher-ng, start the apt-cacher-ng service in the cacher qube (default)
    For remote apt-cacher-ng, stop the apt-cacher-ng service and start a socat redirect from the local updates-proxy port to the remote apt-cacher-ng host
  • in sys-net, create a trigger that uses a qrexec call to switch the apt-cacher-ng mode when you connect to a specific wifi AP (rough sketches below)
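
To make that concrete, here are rough sketches of the two halves. Every name is an assumption on my part: the service name my.cacher.Switch, the SSID, the remote host, and that apt-cacher-ng in the cacher qube listens directly on the updates-proxy port 8082.

#!/bin/bash
# /etc/qubes-rpc/my.cacher.Switch in the cacher qube
# Reads one word from the caller: "local" or "remote"
read -r mode
case "$mode" in
    local)
        pkill -f 'socat TCP-LISTEN:8082' 2>/dev/null
        systemctl start apt-cacher-ng
        ;;
    remote)
        systemctl stop apt-cacher-ng
        # Forward the local updates-proxy port to the on-prem cacher
        socat TCP-LISTEN:8082,fork,reuseaddr TCP:your-apt-cacher-ng-server:3142 &
        ;;
esac

The service would be allowed from sys-net by a dom0 policy line such as:

my.cacher.Switch * sys-net cacher allow

and driven from sys-net by a NetworkManager dispatcher hook:

#!/bin/bash
# /etc/NetworkManager/dispatcher.d/50-cacher-switch in sys-net
# "HomeLAN" is a placeholder for the SSID with the on-prem cacher
[ "$2" = "up" ] || exit 0
ssid=$(nmcli -t -f active,ssid dev wifi | awk -F: '$1 == "yes" {print $2}')
if [ "$ssid" = "HomeLAN" ]; then
    echo remote | qrexec-client-vm cacher my.cacher.Switch
else
    echo local | qrexec-client-vm cacher my.cacher.Switch
fi

Keep in mind /etc isn’t persistent in an AppVM like sys-net, so the dispatcher script has to be installed via bind-dirs or recreated from /rw/config/rc.local.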

That is a creative approach. I had some issues with @unman’s salt recipe in 4.1; might be time to give it another look. Thanks!

My original thought on attempting something similar was syncing the cache between the cacher qube and the on-prem apt-cacher-ng box. I never considered sys-net qrexec calls… so obvious.
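
For reference, the sync I had in mind was something crude like this, run inside the cacher qube (host and user are placeholders, and I never verified that apt-cacher-ng is happy with a cache imported this way):

# One-way pull of the on-prem cache, then let apt-cacher-ng rescan it
rsync -a --delete admin@your-apt-cacher-ng-server:/var/cache/apt-cacher-ng/ \
    /var/cache/apt-cacher-ng/
systemctl restart apt-cacher-ng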

I’ve heard others have had great success with squid proxy in multi-distro environments. Anyone have experience going down that route? Qubifying it wouldn’t be too hard, methinks.
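
From what I’ve read (hearsay, not a tested config), the squid approach mostly comes down to raising the object size cap and pinning package files as fresh, along these lines in squid.conf:

# Illustrative values only
maximum_object_size 1024 MB
cache_dir aufs /var/spool/squid 20000 16 256
refresh_pattern \.(deb|rpm)$ 129600 100% 129600 refresh-ims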