Apt-cacher-ng and debian-12 testing

I’m trying to upgrade all my debian-11-minimal templates to bookworm, in place.
Everything has gone smoothly so far.
I have also upgraded cacher’s debian-11-minimal-firewall-template (that’s what I named it; it’s cloned from the template for sys-firewall).
Again, everything smooth.
During the upgrade, I was asked whether to keep acng.conf, which I kept.

acng.conf
#
# IMPORTANT NOTE:
#
# THIS FILE IS MAYBE JUST ONE OF MANY CONFIGURATION FILES IN THIS DIRECTORY.
# SETTINGS MADE IN OTHER FILES CAN OVERRIDE VALUES THAT YOU CHANGE HERE. GO
# LOOK FOR OTHER CONFIGURATION FILES! CHECK THE MANUAL AND INSTALLATION NOTES
# (like README.Debian) FOR MORE DETAILS!
#

# This is a configuration file for apt-cacher-ng, a smart caching proxy for
# software package downloads. It's supposed to be in a directory specified by
# the -c option of apt-cacher-ng, see apt-cacher-ng(8) for details.
# RULES:
# Letter case in variable names does not matter, names and values should be
# separated with colons. For boolean variables, zero number is considered false,
# non-zero considered true. If a default value is not explicitly mentioned in
# the description, the commented value assignments mostly represent the default
# values of the particular variables.

# Storage directory for downloaded data and related maintenance activity.
#
CacheDir: /var/cache/apt-cacher-ng

# Log file directory, can be set empty to disable logging
#
LogDir: /var/log/apt-cacher-ng

# A place to look for additional configuration and resource files if they are not
# found in the configuration directory
#
SupportDir: /usr/lib/apt-cacher-ng

# TCP server port for incoming http (or HTTP proxy) connections.
# Can be set to 9999 to emulate apt-proxy. Value of 0 turns off TCP server
# (SocketPath must be set in this case).
#
Port:8082

# Addresses or hostnames to listen on. Multiple addresses must be separated by
# spaces. Each entry must be an exact local address which is associated with a
# local interface. DNS resolution is performed using getaddrinfo(3) for all
# available protocols (IPv4, IPv6, ...). Using a protocol specific format will
# create binding(s) only on protocol specific socket(s), e.g. 0.0.0.0 will
# listen only to IPv4.
#
# Default: listens on all interfaces and protocols
#
# BindAddress: localhost 192.168.7.254 publicNameOnMainInterface

# The specification of another HTTP proxy which shall be used for downloads.
# It can include user name and password but see the manual for limitations.
#
# Default: uses direct connection
#
# Proxy: http://www-proxy.example.net:3128
# Proxy: https://username:proxypassword@proxy.example.net:3129

# Repository remapping. See manual for details.
# In this example, some backend files might be generated during package
# installation using information collected on the system.
# Examples:
#Remap-debrep: https://deb.debian.org http://deb.debian.org  file:deb_mirrors.gz /debian ; file:backends_debian # Debian Archives
Remap-debrep: https://deb.debian.org http://deb.debian.org  file:deb_mirrors.gz /debian 
Remap-uburep: file:ubuntu_mirrors /ubuntu ; file:backends_ubuntu # Ubuntu Archives
Remap-cygwin: file:cygwin_mirrors /cygwin # ; file:backends_cygwin # incomplete, please create this file or specify preferred mirrors here
#Remap-sfnet:  file:sfnet_mirrors # ; file:backends_sfnet # incomplete, please create this file or specify preferred mirrors here
Remap-alxrep: file:archlx_mirrors /archlinux # ; file:backend_archlx # Arch Linux
Remap-fedora: file:fedora_mirrors # Fedora Linux
Remap-epel:   file:epel_mirrors # Fedora EPEL
Remap-slrep:  file:sl_mirrors # Scientific Linux
Remap-gentoo: file:gentoo_mirrors.gz /gentoo ; file:backends_gentoo # Gentoo Archives
Remap-secdeb: security.debian.org ; security.debian.org deb.debian.org/debian-security
Remap-centos: file:centos_mirrors /centos ; file:backends_centos # CentOS Linux
#Remap-qubes: http://fake.qubes ; file:backends_qubes

# Virtual page accessible in a web browser to see statistics and status
# information, i.e. under http://localhost:3142/acng-report.html
# NOTE: This option must be configured to run maintenance jobs (even when used
# via acngtool in cron scripts). The AdminAuth option can be used to restrict
# access to sensitive areas on that page.
#
# Default: not set, should be set by the system administrator
#
ReportPage: acng-report.html

# Socket file for accessing through local UNIX socket instead of TCP/IP. Can be
# used with inetd (via bridge tool in.acng from apt-cacher-ng package).
#
# Default: not set, UNIX socket bridge is disabled.
#
# SocketPath:/var/run/apt-cacher-ng/socket

# If set to 1, makes log files be written to disk on every new line. Default
# is 0, buffers are flushed after the client disconnects. Technically,
# it's a convenience alias for the Debug option, see below for details.
#
# UnbufferLogs: 0

# Enables extended client information in log entries. When set to 0, only
# activity type, time and transfer sizes are logged.
#
# VerboseLog: 1

# Don't detach from the starting console.
#
# ForeGround: 0

# Store the pid of the daemon process in the specified text file.
# Default: disabled
#
# PidFile: /var/run/apt-cacher-ng/pid

# Forbid outgoing connections and work without an internet connection or
# respond with 503 error where it's not possible.
#
# Offlinemode: 0

# Forbid downloads from locations that are directly specified in the user
# request, i.e. all downloads must be processed by the preconfigured remapping
# backends (see above).
#
# ForceManaged: 0

# Days before considering an unreferenced file expired (to be deleted).
# WARNING: if the value is set too low and particular index files are not
# available for some days (mirror downtime) then there is a risk of removal of
# still useful package files.
#
ExThreshold: 4

# If the expiration is run daily, it sometimes does not make much sense to do
# it because the expected changes (i.e. removal of expired files) don't justify
# the extra processing time or additional downloads for expiration operation
# itself. This discrepancy might be especially pronounced if the local client
# installations are small or are rarely updated but the daily changes of
# the remote archive metadata are heavy.
#
# The following option enables a possible trade-off: the expiration run is
# suppressed until a certain amount of data has been downloaded through
# apt-cacher-ng since the last expiration execution (which might indicate that
# packages were replaced with newer versions).
#
# The number can have a suffix (k,K,m,M for Kb,KiB,Mb,MiB)
#
# ExStartTradeOff: 500m

# Stop expiration when a critical problem appears, such as a failed update
# of an index file in the preparation step.
#
# WARNING: don't set this option to zero or empty without considering possible
# consequences like a sudden and complete cache data loss.
#
# ExAbortOnProblems: 1

# Number of failed nightly expiration runs which are considered acceptable and
# do not trigger an error notification to the admin (e.g. via daily cron job)
# before the (day) count is reached. Might be useful with whacky internet
# connections.
#
# Default: a guessed value, 1 if ExThreshold is 5 or more, 0 otherwise.
#
# ExSuppressAdminNotification: 1

# Modify file names to work around limitations of some file systems.
# WARNING: experimental feature, subject to change
#
# StupidFs: 0

# Experimental feature for apt-listbugs: pass-through SOAP requests and
# responses to/from bugs.debian.org.
# Default: guessed value, true unless ForceManaged is enabled
#
# ForwardBtsSoap: 1

# There is a small in-memory cache for DNS resolution data, expired by
# this timeout (in seconds). Internal caching is disabled if set to a value
# less than zero.
#
# DnsCacheSeconds: 1800

###############################################################################
#
# WARNING: don't modify thread and file matching parameters without a clear
# idea of what is happening behind the scenes!
#
# Max. count of connection threads kept ready (for faster response in the
# future). Should be a sane value between 0 and average number of connections,
# and depend on the amount of spare RAM.
# MaxStandbyConThreads: 8
#
# Hard limit of active thread count for incoming connections, i.e. operation
# is refused when this value is reached (below zero = unlimited).
# MaxConThreads: -1
#
# Pigeonholing files (like static vs. volatile contents) is done by (extended)
# regular expressions.
#
# The following patterns are available for the purposes detailed, where
# the latter takes precedence over the former:
# - «PFilePattern» for static data that doesn't change silently on the server.
# - «VFilePattern» for volatile data that may change like every hour. Files
#   that match both PFilePattern and VfilePattern will be treated as volatile.
# - Static data with file names that match VFilePattern can be exempted from
#   volatile treatment by making them match the special static data pattern,
#   «SPfilePattern».
# - «SVfilePattern» or the "special volatile data" pattern is for the
#   convenience of specifying any exceptions to matches with SPfilePattern,
#   for cases where data must still be treated as volatile.
# - «WfilePattern» specifies a "whitelist pattern" for the regular expiration
#   job, telling it to keep the files even if they are not referenced by
#   others, like crypto signatures with which clients begin their downloads.
#
# Each pattern exists in two versions. The base variables mentioned above
# should not be set without good reason, because they would override the
# built-in defaults (which might impact updates to future versions of
# apt-cacher-ng). There are also versions of those patterns ending with Ex,
# which may be modified by the local administrator. They are evaluated in
# addition to the regular patterns at runtime.
#
# To see examples of the expected syntax, run: apt-cacher-ng -p debug=1
#
PfilePatternEx: .*xml.zck$|.*yaml.gz$|.*fedora.*arch=x86_64$|.*f30&arch=x86_64|.*29&arch=x86_64
# VfilePatternEx:
#VfilePattern = (^|.*?/)(Index|Packages(\.gz|\.bz2|\.lzma|\.xz)?|InRelease|Release|Release\.gpg|Sources(\.gz|\.bz2|\.lzma|\.xz)?|release|index\.db-.*\.gz|Contents-[^/]*(\.gz|\.bz2|\.lzma|\.xz)?|pkglist[^/]*\.bz2|rclist[^/]*\.bz2|/meta-release[^/]*|Translation[^/]*(\.gz|\.bz2|\.lzma|\.xz)?|MD5SUMS|SHA1SUMS|((setup|setup-legacy)(\.ini|\.bz2|\.hint)(\.sig)?)|mirrors\.lst|repo(index|md)\.xml(\.asc|\.key)?|directory\.yast|products|content(\.asc|\.key)?|media|filelists\.xml\.gz|filelists\.sqlite\.bz2|repomd\.xml|packages\.[a-zA-Z][a-zA-Z]\.gz|info\.txt|license\.tar\.gz|license\.zip|.*\.db(\.tar\.gz)?|.*\.files\.tar\.gz|.*\.abs\.tar\.gz|metalink\?repo|.*prestodelta\.xml\.gz|repodata/.*\.(xml|sqlite)\.(gz|bz2))$|/dists/.*/installer-[^/]+/[^0-9][^/]+/images/.*
VfilePatternEx: ^/\?release=[0-9]+&arch=.*|.*/RPM-GPG-KEY.*|.*\?repo=fedora
#VfilePatternEx: .*metalink?repo=fedora*
# SPfilePatternEx:
# SVfilePatternEx:
# WfilePatternEx:
#
###############################################################################

# A bitmask type value declaring the logging verbosity and behavior of the
# error log writing. A non-zero value triggers at least faster log file flushing.
#
# Some higher bits only working with a special debug build of apt-cacher-ng,
# see the manual for details.
#
# WARNING: this can write a significant amount of data into the apt-cacher.err logfile.
#
# Default: 0
#
# Debug:3

# Usually, general purpose proxies like Squid expose the IP address of the
# client user to the remote server using the X-Forwarded-For HTTP header. This
# behaviour can be optionally turned on with the Expose-Origin option.
#
# ExposeOrigin: 0

# When logging the originating IP address, trust the information supplied by
# the client in the X-Forwarded-For header.
#
# LogSubmittedOrigin: 0

# The version string reported to the peer, to be displayed as HTTP client (and
# version) in the logs of the mirror.
#
# WARNING: Expect side effects! Some archives use this header to guess
# capabilities of the client (i.e. allow redirection and/or https links) and
# change their behaviour accordingly but ACNG might not support the expected
# features.
#
# Default:
#
# UserAgent: Yet Another HTTP Client/1.2.3p4

# In some cases the Import and Expiration tasks might create fresh volatile
# data for internal use by reconstructing them using patch files. This
# by-product might be recompressed with bzip2 and with some luck the resulting
# file becomes identical to the *.bz2 file on the server which can be used by
# APT when requesting a complete version of this file.
# The downside of this feature is higher CPU load on the server during
# the maintenance tasks, and the outcome might have not much value in a LAN
# where all clients update their data often and regularly and therefore usually
# don't need the full version of the index file.
#
# RecompBz2: 0

# Network timeout for outgoing connections, in seconds.
#
# NetworkTimeout: 60

# Sometimes it makes sense to not store the data in cache and just return the
# package data to client while it comes in. The following DontCache* parameters
# can enable this behaviour for certain URL types. The tokens are extended
# regular expressions which the URLs are evaluated against.
#
# DontCacheRequested is applied to the URL as it comes in from the client.
# Example: exclude packages built with kernel-package for x86
# DontCacheRequested: linux-.*_10\...\.Custo._i386
# Example use case: exclude popular private IP ranges from caching
# DontCacheRequested: 192.168.0 ^10\..* 172.30
#
# DontCacheResolved is applied to URLs after mapping to the target server. If
# multiple backend servers are specified then it's only matched against the
# download link for the FIRST possible source (due to implementation limits).
#
# Example use case: all Ubuntu stuff comes from a local mirror (specified as
# backend), don't cache it again:
# DontCacheResolved: ubuntumirror.local.net
#
# The DontCache directive sets (overrides) both DontCacheResolved and
# DontCacheRequested. Provided for convenience; see those directives for
# details.
#
# Example:
# DontCache: .*.local.university.int

# Default permission set of freshly created files and directories, as octal
# numbers (see chmod(1) for details).
# Can be limited by the umask value (see umask(2) for details) if it's set in
# the environment of the starting shell, e.g. in apt-cacher-ng init script or
# in its configuration file.
#
# DirPerms: 00755
# FilePerms: 00664

# It's possible to use apt-cacher-ng as a regular web server with a limited
# feature set, i.e. directory browsing, downloads of any files, Content-Type
# based on /etc/mime.types, but without sorting, CGI execution, index page
# redirection and other funny things.
# To get this behavior, mappings between virtual directories and real
# directories on the server must be defined with the LocalDirs directive.
# Virtual and real directories are separated by spaces, multiple pairs are
# separated by semi-colons. Real directories must be absolute paths.
# NOTE: Since the names of these directories share the same namespace as
# repository names (see Remap-...), it is the administrator's job to avoid
# conflicts between them or to create them explicitly.
#
# LocalDirs: woo /data/debarchive/woody ; hamm /data/debarchive/hamm
LocalDirs: acng-doc /usr/share/doc/apt-cacher-ng

# Precache a set of files referenced by specified index files. This can be used
# to create a partial mirror usable for offline work. There are certain limits
# and restrictions on the path specification, see manual and the cache control
# web site for details. A list of (maybe) relevant index files could be
# retrieved via "apt-get --print-uris update" on a client machine.
#
# Example:
# PrecacheFor: debrep/dists/unstable/*/source/Sources* debrep/dists/unstable/*/binary-amd64/Packages*

# Arbitrary set of data to append to request headers sent over the wire. Should
# be well-formatted HTTP header lines including newlines (DOS style) which
# can be entered as escape sequences (\r\n).
#
# RequestAppendix: X-Tracking-Choice: do-not-track\r\n

# Specifies the IP protocol families to use for remote connections. Order does
# matter, first specified are considered first. Possible combinations:
# v6 v4
# v4 v6
# v6
# v4
# Default: use native order of the system's TCP/IP stack, influenced by the
# BindAddress value.
#
# ConnectProto: v6 v4

# The regular expiration algorithm finds package files which are no longer
# listed in any index file and removes them after a safety period.
# This option allows keeping more versions of a package in the cache after
# the safety period is over.
#
# KeepExtraVersions: 0

# Optionally uses TCP access control provided by libwrap, see hosts_access(5)
# for details. Daemon name is apt-cacher-ng.
#
# Default: guessed on startup by looking for explicit mention of apt-cacher-ng
# in /etc/hosts.allow or /etc/hosts.deny files.
#
# UseWrap: 0

# If many machines from the same local network attempt to update index files
# (apt-get update) at nearly the same time, the known state of these index file
# is temporarily frozen and multiple requests receive the cached response
# without contacting the remote server again. This parameter (in seconds)
# specifies the length of this period before these (volatile) files are
# considered outdated.
# Setting this value too low transfers more data and increases remote server
# load, setting this too high (more than a couple of minutes) increases the
# risk of delivering inconsistent responses to the clients.
#
# FreshIndexMaxAge: 27

# Usually the users are not allowed to specify custom TCP ports of remote
# mirrors in the requests, only the default HTTP port can be used (as a
# workaround, the proxy administrator can create Remap- rules with custom ports).
# This restriction can be disabled by specifying a list of allowed ports or 0
# for any port.
#
# AllowUserPorts: 80

# Normally the HTTP redirection responses are forwarded to the original caller
# (i.e. APT) which starts a new download attempt from the new URL. This
# solution is ok for client configurations with proxy mode but doesn't work
# well with configurations using URL prefixes in sources.list. To work around
# this the server can restart its own download with a redirection URL,
# configured with the following option. The downside is that this might be used
# to circumvent download source policies by malicious users.
# The RedirMax option specifies how many such redirects the server is allowed
# to follow per request, 0 disables the internal redirection.
# Default: guessed on startup, 0 if ForceManaged is used and 5 otherwise.
#
# RedirMax: 5

# There are some broken HTTP servers and proxy servers in the wild which don't
# support the If-Range header correctly and return incorrect data when the
# contents of a (volatile) file changed. Setting VfileUseRangeOps to zero
# disables Range-based requests while retrieving volatile files, using
# If-Modified-Since and requesting the complete file instead. Setting it to
# a negative value removes even If-Modified-Since headers.
#
# VfileUseRangeOps: 1

# Allow data pass-through mode for certain hosts when requested by the client
# using a CONNECT request. This is particularly useful to allow access to SSL
# sites (https proxying). The string is a regular expression which should cover
# the server name with port and must be correctly formatted and terminated.
# Examples:
# PassThroughPattern: private-ppa\.launchpad\.net:443$
# PassThroughPattern: .* # this would allow CONNECT to everything
PassThroughPattern: ^(.*):443$
#
# Default: ^(bugs\.debian\.org|changelogs\.ubuntu\.com):443$
# PassThroughPattern: ^(bugs\.debian\.org|changelogs\.ubuntu\.com):443$

# It's possible that an evil client requests a volatile file but does not
# retrieve the response and keeps the connection effectively stuck over
# many hours, blocking the particular file for other download attempts (which
# leads to not reporting file changes on the server side to other users). The
# workaround is the use of alternative file descriptors inside of apt-cacher-ng,
# however this might cost some extra download traffic due to worse cache usage.
# The ResponseFreezeDetectTime value specifies when a file descriptor in the
# mentioned state is to be considered defective and will require special handling.
# Default time is 500 seconds.
#
# ResponseFreezeDetectTime: 500

# Keep outgoing connections alive and reuse them for later downloads from
# the same server as long as possible.
#
# ReuseConnections: 1

# Maximum number of requests sent in a batch to remote servers before the first
# response is expected. Using higher values can greatly improve average
# throughput depending on network latency and the implementation of remote
# servers. Makes most sense when also enabled on the client side, see apt.conf
# documentation for details.
#
# Default: 10 if ReuseConnections is set, 1 otherwise
#
# PipelineDepth: 10

# Path to the system directory containing trusted CA certificates used for
# outgoing connections, see OpenSSL documentation for details.
#
# CApath: /etc/ssl/certs
#
# Path to a single trusted CA certificate used for outgoing
# connections, see OpenSSL documentation for details.
#
# CAfile:

# There are different ways to detect that an upstream proxy is broken and turn
# off its use and connect directly. The first is through a custom command -
# when it returns successfully, the proxy is used, otherwise not and the
# command will be rerun only after a specified period.
# Another way is to try to connect to the proxy first and detect a connection
# timeout. The connection will then be made without HTTP proxy for the life
# time of the particular download stream and it may also affect other
# parallel downloads.
# NOTE: these operation modes are still experimental and are subject to change!
# Unwanted side effects may occur with multiple simultaneous user connections
# or with specific per-repository proxy settings.
#
# Shell command, default: not set. Executed with the default shell and
# permissions of the apt-cacher-ng's process user. Examples:
# /bin/ip route | grep -q 192.168.117
# /usr/sbin/arp | grep -q 00:22:1f:51:8e:c1
#
# OptProxyCheckCommand: ...
#
# Check interval, in seconds.
#
# OptProxyCheckInterval: 99
#
# Connection timeout in seconds, default: negative, meaning disabled.
#
# OptProxyTimeout: -1

# It's possible to limit the processing speed of download agents to set an
# overall download speed limit. Unit: KiB/s, Default: unlimited.
#
# MaxDlSpeed: 500

# In special corner cases, download clients attempt to download random chunks
# of a file's header, i.e. the first kilobytes. The "don't get client stuck"
# policy usually converts this to a 200 response starting the body from the
# beginning, but that confuses some clients. When this option is set to a
# certain value, it modifies the behaviour and allows starting a file
# download where the distance between the available data and the specified
# range lies within those bounds. This can look like random lag to the user
# but should be harmless apart from that.
#
# MaxInresponsiveDlSize: 64000

# In mobile environments with an ad-hoc connection that redirects to some
# id verification site, this redirect might damage the cache since the data is
# involuntarily stored as package data. There is a mechanism which attempts to
# detect such a situation and mitigate the mentioned effects by not storing the
# data and also dropping the DNS cache. The trigger is the occurrence of a
# specific SUBSTRING in the content type field of the final download target
# (i.e. the auth web site) and at least one followed redirection.
#
# BadRedirDetectMime: text/html

# When a BUS signal is received (typically on IO errors), a shell command can be
# executed before the daemon is terminated.
# Example:
# BusAction: ls -l /proc/$PPID/ | mail -s SIGBUS! root

# Only set this value for debugging purposes. It disables SSL security checks
# like strict host verification. 0 means no; any other value can have a
# different meaning in the future.
#
# NoSSLChecks: 0

# Setting this value means: on file downloads from/via cache, tag relevant
# files. And when acngtool runs the shrink command, it will look at the day
# when the file was retrieved from cache last time (and not when it was
# originally downloaded).
#
# TrackFileUse: 0

# Controls preallocation of file system space where this feature is supported.
# This might reduce disk fragmentation and therefore improve later read
# performance. However, write performance can be reduced which could be
# exploited by malicious users.
# The value defines a size limit of how much to report to the OS as expected
# file size (starting from the beginning of the file).
# Set to zero to disable this feature completely. Default: one megabyte
#
# ReserveSpace: 1048576

Restarted the apt-cacher-ng service

user@cacher:~$ sudo service apt-cacher-ng restart

Checked that the service is active

user@cacher:~$ sudo systemctl status apt-cacher-ng
● apt-cacher-ng.service - Apt-Cacher NG software download proxy
     Loaded: loaded (/lib/systemd/system/apt-cacher-ng.service; enabled; preset>
     Active: active (running) since Mon 2022-10-17 10:44:03 EDT; 3h 12min ago
   Main PID: 1426 (apt-cacher-ng)
      Tasks: 3 (limit: 388)
     Memory: 4.4M
        CPU: 32ms
     CGroup: /system.slice/apt-cacher-ng.service
             └─1426 /usr/sbin/apt-cacher-ng -c /etc/apt-cacher-ng ForeGround=1

Oct 17 10:44:03 cacher systemd[1]: Starting Apt-Cacher NG software download pro>
Oct 17 10:44:03 cacher apt-cacher-ng[1426]: WARNING: No configuration was read >
Oct 17 10:44:03 cacher apt-cacher-ng[1426]: WARNING: No configuration was read >
Oct 17 10:44:03 cacher systemd[1]: Started Apt-Cacher NG software download prox>
user@cacher:~$

Yet, trying to update any qube, including cacher itself, outputs the same error

Err:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
  403  Configuration error (confusing proxy mode) or prohibited port (see AllowUserPorts) [IP: 127.0.0.1 8082]

I have never messed with AllowUserPorts, and it’s still commented out in acng.conf, so any idea how to trace this would be appreciated.

If you upgrade to a testing release you expect things to break.
This has broken.
Look at the difference between the two config files. That may help.

No regrets. Where would civilization be without testing and failing? I consider it a basic legacy of mankind.
I checked the files before I posted; that’s why I opened a new topic.
Thanks.

1 Like

I’m getting this here

user@cacher:~$ wget 127.0.0.1:8082
--2022-10-18 14:06:33--  http://127.0.0.1:8082/
Connecting to 127.0.0.1:8082... connected.
HTTP request sent, awaiting response... 406 Usage Information
2022-10-18 14:06:33 ERROR 406: Usage Information.

As far as I know, a 406 error can be related to Accept-Encoding and Accept-Charset, among others. I mention these since in the new acng.conf posted above I noticed multiple strange characters, although commented:

# - «PFilePattern» for static data that doesn’t change silently on the server.

Correcting them didn’t resolve the issue, but maybe this is the first trace…

The version in testing is buggy - you need to explicitly allow https
AllowUserPorts: 80 443

Thanks, that was it.

Before opening a topic, I tried to explicitly allow port 80 considering the error, but it didn’t occur to me to add 443, especially since it is already specified and allowed in

PassThroughPattern: ^(.*):443$

Everything seems to work as expected as of now.

PassThroughPattern - if apt-cacher-ng sees this traffic it will pass it
through without caching.
AllowUserPorts - apt-cacher-ng will allow users to specify custom
ports in the repo definitions and will cache traffic to those ports.
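The distinction can be sketched in a few lines of Python. This is an illustrative model, not apt-cacher-ng's actual code: the two checks apply to different request types, so the PassThroughPattern matching 443 does not help a cached fetch that lands on port 443.

```python
import re

# Hypothetical sketch of the two access checks described above.
PASS_THROUGH = re.compile(r"^(.*):443$")   # PassThroughPattern from this thread
ALLOW_USER_PORTS = {80, 443}               # AllowUserPorts: 80 443 (the fix)

def connect_allowed(host_port: str) -> bool:
    """CONNECT tunnels (https pass-through) are matched against PassThroughPattern."""
    return PASS_THROUGH.match(host_port) is not None

def user_port_allowed(port: int) -> bool:
    """Cached fetches to a user-specified mirror port are checked against AllowUserPorts."""
    return port in ALLOW_USER_PORTS

# A CONNECT to port 443 is tunneled through, uncached:
print(connect_allowed("deb.debian.org:443"))   # True
# A cached fetch that is redirected out to port 443 is governed by
# AllowUserPorts instead - hence the 403 until 443 was added there:
print(user_port_allowed(443))                  # True
print(user_port_allowed(3142))                 # False
```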

Use of http://HTTPS/// means that apt-cacher-ng sees traffic to
port 80, http, but redirects it out to 443 https.
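For reference, this is roughly what a client's sources.list line looks like in such a setup (the suite and components here are illustrative; the exact form depends on your cacher configuration):

```
# /etc/apt/sources.list in a client template, with the updates proxy
# pointing at the cacher on 127.0.0.1:8082
deb http://HTTPS///deb.debian.org/debian bookworm main
```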

The issue you saw is a bug, and will be fixed in transition to stable
bookworm.

Yes, I see. It still doesn’t work with fedora (or non-apt distros, more specifically). Thanks for clarifying the former.

Ping, I still have this issue on bookworm stable.

1 Like
I edited the title slightly
  • fixed a typo in apt-cacher-ng to make sure folks find the topic when searching for that
  • removed the [solved] tags because the forum is configured to identify solved topics already (little checked box next to the title in lists, solution highlight and quick access link in the first post). Edit to clarify: so it should already be clear that the topic is solved :slightly_smiling_face:

For me it is still tagged as solved.

I also have this problem for deb12 however deb11 works fine.

The cacher is the one linux qube on my system that is still based on debian-11-minimal. I ran into some sort of problem (I don’t remember exactly what), but in essence trying to use the deb12m version defeated me.

It’s on my list of things to try out again.

I’m not sure what problem you have - I’ve been running cacher on a
bookworm system for most of the year.
After a system dist-upgrade, the only change required was that
identified earlier in the thread - Set AllowUserPorts: 80 443 in
/etc/apt-cacher-ng/acng.conf
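Concretely, the change amounts to uncommenting and extending the shipped default:

```
# /etc/apt-cacher-ng/acng.conf
# Shipped default (commented out):
# AllowUserPorts: 80
# Changed to:
AllowUserPorts: 80 443
```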

1 Like

I decided, on reading your response, to give it a go. And it worked! Thanks!

OK, having said that I’ll now give an example of why my relationship with salt is love-hate rather than pure love.

I did a file.replace to change #AllowUserPorts: 80 to AllowUserPorts: 80 443. Obviously I didn’t want to do this before having acng.conf in the (otherwise) correct state, so I did a requires: referencing the file.managed.

The problem is in this case the tag is require:. There’s no rhyme or reason to it; sometimes a state uses requires: and sometimes it’s require:. I didn’t remember this. The .sls file I referenced to find an example of file.replace didn’t happen to have one with a require clause in it, so I made the mistake of looking for a different sample require (actually in this case requires) clause in the file I was adding the state to.

The damn thing said “OK” when I ran it, but in fact JSON had thrown up. Basically if there’s a Jinja or salt syntax error, it will often report “OK” in spite of actually having done nothing. (It will invariably do so if Jinja failed to render.) This is a damned lie, but it’s salt. Usually when this happens, it’s fast and I know to check when it runs too fast; in this case it took about ten minutes so I didn’t think to check the log file. (To add to the irony, if you apply an .sls file that’s syntactically correct but did nothing (perhaps because an onlyif wasn’t true), it DOES throw an error, even if that’s the desired behavior. Which is just stupid.)

So after spending a bunch of time wondering why the fix didn’t work, I finally decided to check the log file. Which did, thank heavens for small favors, tell me exactly what the error was [a big surprise in and of itself; it usually just says failed to render]. And so I got to do the whole thing over again.

After some testing today I found that my problem is related to whonix, since updating without whonix works fine. For whonix-16, however, I get

Failed to fetch tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian/dists/bookworm/InRelease  503  Host not found [IP: 127.0.0.1 8082]

A side note, since it might be helpful to somebody else:

Just creating and configuring a deb12-cacher template and replacing the deb11-cacher template broke the apt-cacher-ng service for me. Creating a new deb12-cacher-vm fixed that issue for me. However, deb12-cacher remains broken as a whonix netvm.

I’m more impressed at the fact that your (incorrect) use of requires
worked sometimes.
The key word is require - this will work in every state.
See the docs
Use the right requisite statement, and love will be restored.
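For instance, the pair of states described above might look like this, using the correct require: form (the state IDs and the salt:// source path are hypothetical):

```yaml
acng-conf:
  file.managed:
    - name: /etc/apt-cacher-ng/acng.conf
    - source: salt://cacher/acng.conf   # hypothetical source path

allow-https-ports:
  file.replace:
    - name: /etc/apt-cacher-ng/acng.conf
    - pattern: '^#?\s*AllowUserPorts:.*$'
    - repl: 'AllowUserPorts: 80 443'
    - require:
      - file: acng-conf
```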

3 Likes

Time for a mass global search and replace! Thanks!

Edit (probably within the time limit for mail):

Actually it’s entirely possible that “requires” did not work. If it happens to execute the states in the right order, it will be fine regardless of whether it recognized “requires” as being “require”.

On the other hand, it didn’t complain about an unknown tag, either, and probably ought to have!

1 Like

As it happens there was exactly ONE instance of “requires” in all of my sls files…and I just happened to pick that one as a model.

Anyway, now there are none.