Issues with apt-cacher-ng

@unman: I agree that this approach is not the best, but as I have said, neglecting Whonix is not the way to go either, unless Qubes OS offers an alternative to sys-whonix on its next install media (or your tor recipe becomes a dependency of cacher and is also deployed by default). The real solution (an implementation detail) is outlined under

This would require @adrelanos to implement his template proxy check against a real Tor URL instead of depending on a fixed proxy advertising Tor support. There is nothing more I can do here, but let us at least agree on today's reality: Whonix is deployed in most installations (it is the default), sys-whonix is ticked as the update proxy by most users, and cacher is expected to be usable as a true drop-in replacement, not one that only partly caches things.
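For illustration, a minimal sketch of such a check (my assumption of how it could look, not Whonix's actual implementation), using the Tor Project's real check API through the updates proxy forwarded at 127.0.0.1:8082:

#!/bin/sh
# Hypothetical torified-proxy check: instead of trusting a fixed marker
# page, ask check.torproject.org whether traffic really exits through Tor.
if curl --silent --proxy http://127.0.0.1:8082 \
    https://check.torproject.org/api/ip | grep -q '"IsTor":true'; then
    echo "updates proxy is torified"
else
    echo "updates proxy is NOT torified" >&2
    exit 1
fi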
On temporary software installation in app qubes:

I have problems with that, which is why I'm writing about it here. It works flawlessly with app qubes where the updates-proxy-setup service is activated: network traffic goes through whatever netvm the user prefers, while packages are downloaded through cacher, the same way the app qube's template is configured to use cacher.

On a running app qube, configured to use cacher as netvm without further modification (and simulating that the cacher RPC service is not available):
1-

user@heads-tests:~/heads$ sudo systemctl stop qubes-updates-proxy-forwarder.socket
user@heads-tests:~/heads$ sudo apt update
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Err:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
  Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (111: Connection refused)
Err:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
  Unable to connect to 127.0.0.1:8082:
Err:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
  Unable to connect to 127.0.0.1:8082:
Err:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
  Unable to connect to 127.0.0.1:8082:
Reading package lists... Done
E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease  Could not connect to 127.0.0.1:8082 (127.0.0.1). - connect (111: Connection refused)
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease  Unable to connect to 127.0.0.1:8082:
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease  Unable to connect to 127.0.0.1:8082:
E: Failed to fetch http://HTTPS///updates.signal.org/desktop/apt/dists/xenial/InRelease  Unable to connect to 127.0.0.1:8082:
E: Some index files failed to download. They have been ignored, or old ones used instead.
user@heads-tests:~/heads$ wget perdue.org
--2022-08-29 14:47:31--  http://perdue.org/
Resolving perdue.org (perdue.org)... 216.40.34.37
Connecting to perdue.org (perdue.org)|216.40.34.37|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: ‘index.html’

index.html                                [ <=>                                                                    ]   6.00K  --.-KB/s    in 0.04s   

2022-08-29 14:47:32 (143 KB/s) - ‘index.html’ saved [6141]

Or

2- Deactivating the proxy for AppVMs and restarting the app qube:

[user@dom0 ~]$ sudo vim /etc/qubes/policy.d/30-user.policy
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher
[user@dom0 ~]$ sudo systemctl restart qubesd
[user@dom0 ~]$ qvm-features heads-tests | grep proxy
service.updates-proxy-setup 
[user@dom0 ~]$ qvm-prefs heads-tests | grep netvm
netvm                 -  cacher

And starting heads-tests to check apt download capabilities:

user@heads-tests:~$ sudo apt update
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Err:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
  Could not resolve 'HTTPS'
Err:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
  Could not resolve 'HTTPS'
Err:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
  Could not resolve 'HTTPS'
Err:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
  Could not resolve 'HTTPS'
Reading package lists... Done
E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///updates.signal.org/desktop/apt/dists/xenial/InRelease  Could not resolve 'HTTPS'
E: Some index files failed to download. They have been ignored, or old ones used instead.

And setting the proxy to localhost doesn't work as expected either:

user@heads-tests:~$ export https_proxy=http://127.0.0.1:8082
user@heads-tests:~$ sudo apt update
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Ign:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
Ign:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
Ign:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
Ign:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
Err:1 http://HTTPS///deb.debian.org/debian bookworm InRelease
  Could not resolve 'HTTPS'
Err:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
  Could not resolve 'HTTPS'
Err:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
  Could not resolve 'HTTPS'
Err:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
  Could not resolve 'HTTPS'
Reading package lists... Done
E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///updates.signal.org/desktop/apt/dists/xenial/InRelease  Could not resolve 'HTTPS'
E: Some index files failed to download. They have been ignored, or old ones used instead.
user@heads-tests:~$

And setting the proxy address to cacher's IP gives the same results, since cacher is used as netvm here, not as a proxy:

  Could not resolve 'HTTPS'
Err:2 http://HTTPS///deb.debian.org/debian-security bookworm-security InRelease
  Could not resolve 'HTTPS'
Err:3 http://HTTPS///deb.qubes-os.org/r4.1/vm bookworm InRelease
  Could not resolve 'HTTPS'
Err:4 http://HTTPS///updates.signal.org/desktop/apt xenial InRelease
  Could not resolve 'HTTPS'
Reading package lists... Done
E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease  Could not resolve 'HTTPS'
E: Failed to fetch http://HTTPS///updates.signal.org/desktop/apt/dists/xenial/InRelease  Could not resolve 'HTTPS'
E: Some index files failed to download. They have been ignored, or old ones used instead.

@unman: unless I skipped or misunderstood something, I do not understand how this is supposed to work. Of course cacher has its firewall open (accept) on port 8082, as per the normal deployment.

I also reread your notes under notes/apt-cacher-ng.md at 6e752c5b80b4d581266b452720a4e87505437c16 · unman/notes · GitHub, which didn't help me resolve the current issue: updating app qubes via apt-cacher-ng.

The package doesn’t neglect Whonix - it works with Whonix as currently
implemented, and respects the requirements of Whonix.

Agreed

For cacher with IP 10.137.0.39
Take a qube with the http://HTTPS/// rewriting in repository
definitions.
Set netvm to cacher
Write standard apt proxy config in /etc/apt/apt.conf.d/10Proxy:
Acquire::http::Proxy "http://10.137.0.39:8082";

This is standard use of a proxy in dpkg/apt.
If you think you will need this often you can keep a simple script to
change the netvm and drop that file in place.
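For instance, a minimal dom0 sketch of such a script (the qube name is a placeholder, the qube must be running, and cacher's IP is the example one above):

#!/bin/sh
# Hypothetical dom0 helper: point a qube at cacher and drop the standard
# apt proxy config in place inside it.
QUBE="$1"
qvm-prefs "$QUBE" netvm cacher
qvm-run --user root "$QUBE" \
    "echo 'Acquire::http::Proxy \"http://10.137.0.39:8082\";' > /etc/apt/apt.conf.d/10Proxy"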

For Fedora, add “proxy=http://10.137.0.39:8082/” to /etc/dnf/dnf.conf

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.

To be honest, I'm not sure how this is less intrusive than setting, one time:

[user@dom0 ~]$ sudo vim /etc/qubes/policy.d/30-user.policy 
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

(Note that I have the userinfo.html file modified under template-cacher, so I can have @type:TemplateVM above include whonix templates…)

And then, for app qubes in which I intend to install software from repositories without persistence across reboots, I only need to:

  • add “updates-proxy-setup” service under app-qube’s service tab once or
  • type qvm-service App-qube-Name updates-proxy-setup on from dom0 once

and never have to deal with it again, since the policy above permits app qubes to talk to the proxy through RPC without further modification, thanks to its qubes.UpdatesProxy * @type:AppVM @default allow target=cacher statement, which lets app qubes download updates through the proxy.

Salt recipe gurus, I would love your input on how to make cacher-whonix available; see https://github.com/tlaurion/shaker/pull/1

It is not at all ready for production yet, but I still think that once mature, this cacher-whonix rpm and its salt recipes should be available, so that sys-whonix serves as a caching update proxy for those who chose at install time to deploy Whonix for privacy purposes and decided to make sys-whonix the update proxy - guaranteeing that everything goes through Tor from sys-whonix, with its leak-prevention guarantees.

@unman @adrelanos: this would require collaboration, as said, but I think that for people using Whonix this should be the way to go, so that packages are cached across the board; cacher without Whonix already proves how much bandwidth is needed otherwise.

I deployed cacher on August 18th.
Here are the stats for my caching proxy, proving to anyone who doubted it that it is needed: without it, each clone or derivative of a template increases the download requirement, and the time spent on a slow connection accordingly:
[screenshot: 2022-08-31-133644]

I cannot update Fedora-based dispVMs, nor Whonix-based dispVMs - only Debian-based dispVMs. When all are offline, of course. To my surprise, Fedora dvm templates can be updated (as can Debian dvm templates, of course), but Whonix dvm templates cannot, complaining:

E: Failed to fetch tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm/dists/bullseye-securitytesting/InRelease Read error - read (104: Connection reset by peer) Reading the greet back from SOCKS proxy socks5h://127.0.0.1:9050 failed [IP: 127.0.0.1 9050]

All templates, including Whonix, can be updated normally.

So, I’m not sure what in your post addresses my issue?

Well, the answer I provided works as explained there for app qubes, if

qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher

is in the policy (followed by sudo systemctl restart qubesd so that dom0 enforces it).

And then

I do not know how to be clearer than that: the dispvm template needs the updates-proxy-setup service activated for cacher to be accessible through RPC.

A way to test this is to not have @anyvm set in the dom0 policy, and to have the updates-proxy-setup service added and ticked in the disp-template before starting the dispvm. Doing so, you will get a bunch of error popups in dom0 saying that access to the proxy was denied per policy. Otherwise, if @anyvm is in the policy, it will work (a quick sketch of the test follows).
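In dom0 terms, using a placeholder disp-template name:

# Enable the service on the disposable template (name is hypothetical),
# then start a dispvm from it and run an update; without an @anyvm rule,
# expect policy-denial notifications in dom0.
qvm-service my-disp-template updates-proxy-setup on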

For your dispvm use case (outside of your Tor onion hidden service issue, see below), that would require @anyvm (at this point you seem to want everything to have access to cacher) with something like the following:

[user@dom0 ~]$ cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @anyvm  @default  allow target=cacher
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

and sudo systemctl restart qubesd

I just tested it, and was able to download updates through cacher for fedora, debian and whonix (with the hack specified before), BUT not for Tor onion hidden URLs. That cannot work unless sys-whonix were actually the cacher itself; otherwise there is something I am not getting. I am also not sure how you managed to stop the Whonix templates refusing to update without sys-whonix defined as the update proxy. From my current understanding, you are able to update Whonix because
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=sys-whonix is defined somewhere in your policies - but then those updates are not going through cacher?
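A quick way to check which rules could be matching, from dom0 (covering both the current and the legacy policy locations):

# List every UpdatesProxy rule in the 4.1 policy directory and in the
# legacy 4.0-style file, if the latter still exists.
grep -rn UpdatesProxy /etc/qubes/policy.d/ /etc/qubes-rpc/policy/ 2>/dev/null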

Then, regarding your Tor onion hidden service errors, I'm not sure I understand your current configuration. I thought your whonix templates were successfully updating through sys-whonix, not cacher. But then I do not get your error:

Can you clarify what you are using right now as a configuration? Things have changed since your last post.

What is your current policies content?

  • /etc/qubes-rpc/policy/qubes.UpdatesProxy
  • /etc/qubes/policy.d/30-user.policy

So Whonix disposable templates are enforcing a SOCKS proxy at 9050.

This isn’t true - if you have the onion repositories defined and cacher
has netvm set to a Tor proxy, then cacher will work just fine.
Of course, those packages won't be drawn from the same pool as clearnet
repositories. Actually, it should be straightforward to do this by
setting the redirection target in the remap definitions, so that all
requests to mapped debian repositories are redirected to the onion
addresses.
That’s a nice part of the configurability.
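For illustration, a sketch of that redirection, assuming the stock Debian apt-cacher-ng packaging (where acng.conf maps /debian to the targets listed in backends_debian) and using the Debian onion mirror quoted later in this thread:

# acng.conf (stock Debian packaging) already carries a line like:
#   Remap-debrep: file:deb_mirror*.gz /debian ; file:backends_debian
# so pointing all mapped Debian requests at the onion mirror is just:
echo "http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian" \
    | sudo tee /etc/apt-cacher-ng/backends_debian
sudo systemctl restart apt-cacher-ng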

I was able to substitute a clone of my whonix-ws-16 template in place of debian-minimal, following the @unman tutorial, and successfully update all of my Debian-based templates using the .onion repos. However, I do not invoke apt-transport-tor in my list of sources (deb http://address.onion instead of deb tor+http://address.onion); see the two forms below.
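In sources.list terms, the difference between the two forms looks like this (using the Qubes onion mirror address quoted elsewhere in this thread):

# via apt-transport-tor (apt itself talks to the SOCKS proxy):
#deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye main
# plain http to the onion address, relying on the netvm to route it over Tor:
deb [arch=amd64] http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye main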

My caches seem much smaller than those of @insurgo though

user@host:~$ du -chs /var/cache/apt-cacher-ng/* |sort -h
48K /var/cache/apt-cacher-ng/contrib.qubes-os.org
116K /var/cache/apt-cacher-ng/2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion
160K /var/cache/apt-cacher-ng/_xstore
536K /var/cache/apt-cacher-ng/deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion
96M /var/cache/apt-cacher-ng/brave-browser-apt-release.s3.brave4u7jddbv7cyviptqjc7jusxh72uik7zt6adtckl5f4nwy2v72qd.onion
276M /var/cache/apt-cacher-ng/5ajw6aqf3ep7sijnscdzw77t7xq4xjpsy335yb2wiwgouo7yfxtjlmid.onion
373M total

I used tags to define the updating policy as follows:

[user@dom0]$ cat /etc/qubes/policy.d/30-user.policy
qubes.UpdatesProxy * @tag:debian-updatevm @default allow target=cacher

[user@dom0]$ cat /etc/qubes-rpc/policy/qubes.UpdatesProxy
$type:TemplateVM $default allow,target=sys-whonix
$tag:whonix-updatevm $default allow,target=sys-whonix

I haven’t tried caching either whonix or fedora templates.

I can't help feeling that this isn't the right way to go.
If a user has a caching proxy installed and then decides to run it over
Tor, they’ll lose the benefit of previous caching. The same if they decide to
stop running updates over Tor, but in that case they will have to
install some new package to get the benefit of caching.

I have a POC that runs on changes to the netvm - it runs wget against
the Qubes onions and checks the signature on the Release file. If good,
then it sets a parameter which Whonix could read: if not, unset.
The downside is that if it’s not running through Tor there’s a DNS
request for an onion address. But since Qubes leaks use of Tor in any
case I don’t think this is a great issue. (I wouldn’t use this where my
life or liberty depended on it, but I wouldn’t use Tor then either.)

You can do this but you are making it easy for users to do the wrong
thing.

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.

You can use onions on a debian-minimal based cacher, if you set the
netvm to a tor proxy.

The caches will grow as you run updates.

Is this possible while using sys-whonix as the netvm for cacher? I played with this possibility, without success, but I didn’t want to mess around with tor for fear of not implementing it correctly. I’d much rather use a minimal template for cacher and be able to cache both debian and whonix packages.


@unman: cacher’s package cache growing indefinitely is fine for now, to show cacher’s benefits to newcomers to the project, but I’m not sure that caching packages for more than a week makes sense for a Qubes repository cacher: it leads to disk-space problems and vm-pool over-provisioning problems in the long run.

Reading the Maintenance documentation doesn’t seem to reveal a good way to clean-sweep/prune the cache content so that all space isn’t consumed one day; the goal of apt-cacher-ng is to give organizations a central cache for all package download requests and to economize bandwidth forever. The documentation does specify how the cache is automatically cleaned: if upstream repositories delete files, the local cache removes those packages too. For cacher this means packages will most probably always be there, and the cache will grow indefinitely.

Do you have ideas for a proper implementation, so that the Qubes use case doesn’t amount to mirroring every downloaded package forever? I do not expect any of the packages currently in cacher’s cache to disappear before my cacher private LVM volume fills up, with users expected to grow that volume, or newer 4.1 defaults growing those volumes until the vm-pool explodes. Some kind of conservative/aggressive safeguard should be applied to at least reflect the general Qubes use case, or the worst-case scenario where cacher caches packages for every supported template (nor am I familiar with Arch’s update behavior and its package-download disk-space costs, since I do not use Arch).
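One built-in knob that might help (a sketch, not a tested recommendation): apt-cacher-ng’s expiration task can delete cached files that are no longer referenced by any repository index after a configurable number of days; the expiration run itself is triggered from the acng-report.html maintenance page:

# /etc/apt-cacher-ng/acng.conf (or any *.conf drop-in in that directory):
# during expiration, remove files unreferenced by any index for 4+ days.
# (The key really is spelled "ExTreshold" in apt-cacher-ng's config.)
ExTreshold: 4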

[screenshot: 2022-09-01-122345]

3.7G today - roughly 300MB a day, the culprit being debian-12, which is a testing repo, with the package-update rhythm that testing repositories imply.

At the current rhythm (caching debian-11, debian-12, fedora-36 and whonix since the 18th of August), I expect cacher’s default 18.6G private volume to fill up within six months at most. Maybe some deletion magic should be implemented as a monthly cron job (delete files older than X days (30?) and restart apt-cacher-ng?) - a sketch follows.
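A minimal sketch of that cron job, assuming the stock cache location and the 30-day guess above (the script path is hypothetical):

#!/bin/sh
# Hypothetical /etc/cron.monthly/prune-apt-cacher-ng: delete cached files
# not modified for 30+ days, then restart the daemon.
find /var/cache/apt-cacher-ng -type f -mtime +30 -delete
systemctl restart apt-cacher-ng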


Same here, @unman. It seems this works with your unpublished tor shaker recipe, but from my hands-on experience with Whonix templates (and consequently sys-whonix): once again, without userinfo.html being hacked into template-cacher, Whonix templates will refuse to use cacher as an update proxy. And qubes cannot transparently talk to onion addresses without additional configuration. This is why I insist on working around the issue. Having Debian use Tor repositories is untested on my side, but again, I do not understand why/how onion addresses would be resolved and used transparently (at least this doesn’t work out of the box with Whonix unless a Whonix workstation is used, preconfigured with the correct proxy ports (uwt wrappers), once again expecting sys-whonix to be the net qube providing networking - not cacher->sys-whonix as in the current scenario, which doesn’t work. Please test.)

  • Agreed that hacking Whonix templates is not a solution (nor is deactivating the tor check on configured templates), since switching the update proxy to something else would then break current guarantees.

  • Agreed that hacking the cacher template so that apt-cacher-ng provides a fake guarantee that the update proxy enforces Tor weakens the contract expected when using Whonix templates: that updates are torified.

But if we reject the idea of creating a cacher-whonix alternative shaker recipe book - aimed at configuring the whonix-gw template to implement apt-cacher-ng properly, disabling tinyproxy and enabling apt-cacher-ng with a modified userinfo.html so that the contract expected by Whonix templates is properly honored - then I am seriously out of ideas for a proper apt-cacher-ng implementation that could one day make its way into the Qubes repositories (without users needing to know implementation details), or even be offered one day as an option at OS install. A rough sketch of what such a recipe might do follows.
And this is the future I wish for cacher.
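To make the idea concrete, a sketch of the steps inside a whonix-gw based cacher template - untested, with the service names and the 8082 port assumed from the stock Whonix/Qubes setup, and the marker page left as the open question discussed above:

# Hypothetical cacher-whonix recipe steps (sketch only):
sudo systemctl disable --now tinyproxy        # the stock Whonix update proxy
sudo apt install apt-cacher-ng
# apt-cacher-ng reads every *.conf in /etc/apt-cacher-ng; make it answer
# on the port Whonix templates expect instead of the default 3142:
echo 'Port: 8082' | sudo tee /etc/apt-cacher-ng/zz_port.conf
sudo systemctl enable --now apt-cacher-ng
# Open problem: serve whatever marker page the Whonix torified proxy
# check expects (the userinfo.html question discussed above).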

Those are the ultimate goals, aren’t they? Qubes pushes users to specialize templates, and users do so when guided. Cacher prevents installing stupid things, and installing repos will soon become an option with extrepo. But when users specialize templates, the bandwidth requirement explodes. I don’t know about you, @unman, but my customer base is mostly end users around the world, most of them without unlimited bandwidth, relying on poor internet connections, often from public hotspots, parks or fast-food restaurants. Their home connections are not fast, and the update process (the update widget: one template at a time, re-downloading the same packages for each specialized template) takes forever without caching - caching literally changes the whole experience. They start updates at night, leaving their laptop unattended with disks decrypted, and they are worried. Even though this is clearly not extendable to the whole community, I think it represents a lot of users (I will not vouch for unknown users, but for the known situations of people wanting to use Qubes for its out-of-the-box security/privacy (Whonix) benefits).

This is why I have pushed for, and tried to get funding for, [Contribution] qubes-updates-cache · Issue #1957 · QubesOS/qubes-issues · GitHub for a really long time; the implementation was too manual and hacky to interest any grant application reviewer - until now. The question really is how to do this properly, and I think it needs more thorough reflection.

And… @unman: it would require you to install Whonix templates and test them at least a little for the cacher use case.

If you want to discuss this a bit more off-forum, I am more than willing to, and more seriously. I still think this project could get funding if properly packaged, if you want to go that direction. My foreseen goal is for cacher to cache everything update-related, including dom0. (As said before, the future projection for Qubes, as I understand it, is for dom0 to be mainstream Fedora; even if dom0’s package deployment is minimal, the cost is not minimal if we consider downloading the repository metadata for a separate Fedora version today, so bandwidth will be saved when that goal is achieved.) Leaving Whonix aside should not be an option.

In my past corporate life, I used to develop network fingerprinting technology based on Windows update requests and responses. That was back when Windows talked to its update servers over http. If we are lucky, Windows still permits those registry keys to use http, and maybe apt-cacher-ng compatibility as well - unknown to me now. The question there is whether we can deploy a salt runner in Windows, and whether it even makes sense to cache those packages. (Windows is currently used as a StandaloneVM; deploying it as a pure template broke a while ago because of bugs in Qubes Windows Tools, with the private volume not being properly initialized. That should be fixed by now, and newer versions of the Windows tools will also support Windows 11, with the LTSC version landing properly and one day supported by Elliot Killick’s script.) My intuition, based on customer requests, forum discussions, and the Tabit-pro work happening behind the scenes and soon enough landing at least in the Qubes testing repos, is that Qubes will finally get proper Windows support and compartmentalization, with a PR going into qvm-create-windows-qube addressing Alternate qubes-windows-tools available · Issue #15 · ElliotKillick/qvm-create-windows-qube · GitHub

Long story short (as always, sorry - I try to be brief but it’s not my forte): if cacher is scoped properly, the cacher proxy will be publicized, most probably deployed in the Qubes community repo at some point, signed and deployed in the main stable Qubes repository one day, and maybe widely used and deployed as a default (and, funnily enough, itself cached under cacher’s update proxy on all Qubes computers one day…)

@deeplow a lot of the replies here are not related to “updating app qubes” but more about cacher’s current usage possibilities and coverage expectations. I am not sure those replies are relevant here. This post definitely is not, and tries to address the current and future expected scope of the project. If you move this post, I will edit it with proper references.

Sorry, I lack the contextual knowledge to split this thread (I haven’t played around with the cacher yet). Perhaps @fsflover can give a hand (if that’s something they’re familiar with)?


I admit I’m totally “lost in translation” with this topic’s continuation. I want to change its title, but that’s not allowed anymore. @deeplow, may I kindly ask you to change it to:
How to update non-template qubes over qrexec while using apt-cacher-ng as netVM

Thank you in advance, but my topic definitely isn’t about cacher

Anyway


… I can confirm that in my qubes.UpdatesProxy I set

$tag:whonix-updatevm $default allow,target=cacher
$type:TemplateVM $default allow,target=cacher
$type:AppVM $default allow,target=cacher
$tag:whonix-updatevm $anyvm deny

and in my 30-default.policy I set

# HTTP proxy for downloading updates
# Upgrade Whonix TemplateVMs through sys-whonix.
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=cacher
# Deny Whonix TemplateVMs using UpdatesProxy of any other VM.
qubes.UpdatesProxy * @tag:whonix-updatevm @anyvm deny
# Upgrade all TemplateVMs through sys-whonix or cacher.
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
# Upgrade all AppVMs through sys-whonix or cacher.
qubes.UpdatesProxy * @type:AppVM @default allow target=cacher
# Default rule for all TemplateVMs - direct the connection to sys-net-dvm or cacher
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
qubes.UpdatesProxy * @anyvm @anyvm deny

and after

sudo systemctl restart qubesd

when running an update in the Whonix template, I got

user@host:~$ sudo apt update && sudo apt full-upgrade
Get:1 tor+http://5ajw6aqf3ep7sijnscdzw77t7xq4xjpsy335yb2wiwgouo7yfxtjlmid.onion/debian-security bullseye-security InRelease [48.4 kB]
Hit:2 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:3 Index of /deb/r4.1/vm/ bullseye InRelease
Hit:4 Index of /deb/r4.1/vm/ bullseye-testing InRelease
Hit:5 tor+https://deb.whonix.org bullseye InRelease
Get:6 tor+http://5ajw6aqf3ep7sijnscdzw77t7xq4xjpsy335yb2wiwgouo7yfxtjlmid.onion/debian-security bullseye-security/main amd64 Packages [180 kB]
Hit:7 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye InRelease
Hit:8 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:9 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease
Get:10 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-updates InRelease [44.1 kB]
Get:11 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports InRelease [49.0 kB]
Get:12 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages.diff/Index [63.3 kB]
Get:13 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages T-2022-09-01-2027.29-F-2022-09-01-2027.29.pdiff [19.0 kB]
Get:13 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages T-2022-09-01-2027.29-F-2022-09-01-2027.29.pdiff [19.0 kB]
Fetched 404 kB in 25s (16.2 kB/s)
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
1 package can be upgraded. Run ‘apt list --upgradable’ to see it.
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
libboost-program-options1.74.0 zsh zsh-common
Use ‘sudo apt autoremove’ to remove them.
The following packages will be upgraded:
libvchan-xen
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,916 B of archives.
After this operation, 1,024 B of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
user@host:~$

Btw, my cache is 11GB at the moment.

And yet everything in those policies talks about cacher :thinking:

I am now really confused. In my tests, if the Whonix templates were already launched with sys-whonix configured as the updates proxy, I had the same results as you (but not over Tor): your output above shows that cacher is actually not used now, since there is no http/https URL hit there, only tor+http. I have not tested that. Restarting that Whonix template and retrying would be interesting, but as I said, I do not understand what is happening here, since nothing passes through apt-cacher-ng (cacher), and your URLs are not apt-cacher-ng compatible URLs like:
deb [arch=amd64] http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye main


And you are right, @enmus @unman: uncommenting the tor+http onion links works for the quick test I’ve just done (again with userinfo.html put into template-cacher to fool the Whonix template’s proxy check - which, as of now, I do not understand how @enmus is getting around: how are his Whonix templates able to use cacher with the configuration above?).

From the current testing setup, passing everything to cacher, with cacher faking a Tor-supporting update proxy through the modified userinfo.html exposed by apt-cacher-ng under template-cacher:

user@host:~$ grep -R onion /etc/apt/sources.list.d/ | grep -v "#" | grep -v bak
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye main
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing main
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting main
user@host:~$ sudo apt update | grep onion

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Hit:1 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:2 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:3 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease

Shutting down the whonix template.
Modifying template-cacher to remove my faked Tor-compatible update proxy:

[user@dom0 ~]$ qvm-run --user root template-cacher "xterm" 
Running 'xterm' on template-cacher

Removing my hack. Saving. Restarting cacher. Starting the whonix template.
cacher is still using sys-whonix as its netvm.

user@host:~$ sudo apt update
WARNING: Execution of /usr/bin/apt prevented by /etc/uwt.d/40_qubes.conf because no torified Qubes updates proxy found.
Please make sure Whonix-Gateway (commonly called sys-whonix) is running.

- If you are using Qubes R3.2: The NetVM of this TemplateVM should be set to Whonix-Gateway (commonly called sys-whonix).

- If you are using Qubes R4 or higher: Check your _dom0_ /etc/qubes-rpc/policy/qubes.UpdatesProxy settings.

_At the very top_ of that file you should have the following:

$tag:whonix-updatevm $default allow,target=sys-whonix

To see if it is fixed, try running in Whonix TemplateVM:

sudo systemctl restart qubes-whonix-torified-updates-proxy-check

Then try to update / use apt-get again.

For more help on this subject see:
https://www.whonix.org/wiki/Qubes/UpdatesProxy

If this warning message is transient, it can be safely ignored.

Nope. Whonix cannot talk through cacher at all if cacher doesn’t expose itself as a Tor-enabled proxy per the Whonix proxy check, so I cannot understand why your setup works.

On my side I have only the following policy for my test:

[user@dom0 ~]$ sudo cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @anyvm  @default  allow target=cacher
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

since

[user@dom0 ~]$ ls /etc/qubes-rpc/policy/qubes.UpdatesProxy
ls: cannot access '/etc/qubes-rpc/policy/qubes.UpdatesProxy': No such file or directory

/etc/qubes-rpc/policy/qubes.UpdatesProxy should not exist anymore, as flagged to @adrelanos, as of 4.1 (the v5.0 policy format).
I am not sure whether it is @marmarek’s or @adrelanos’s job to remove that file, nor which policy is consulted first, but I know /etc/qubes-rpc/policy/qubes.UpdatesProxy is deprecated and using it is misleading.

Putting back my userinfo.html under template-cacher. Shutting down template-cacher, then shutting down cacher (again via xterm started from dom0 with qvm-run --user root).
Starting the whonix template with cacher now letting the Whonix update proxy check succeed via the userinfo.html hack - and then the mix and match of tor+http (onion) and http://HTTPS/// URLs works:

user@host:~$ sudo apt update
Hit:1 tor+http://HTTPS///deb.debian.org/debian bullseye InRelease
Hit:2 tor+http://HTTPS///deb.debian.org/debian bullseye-updates InRelease
Hit:3 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:4 tor+http://HTTPS///deb.debian.org/debian-security bullseye-security InRelease
Hit:5 tor+http://HTTPS///deb.debian.org/debian bullseye-backports InRelease
Hit:6 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:7 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease
Get:8 tor+http://HTTPS///fasttrack.debian.net/debian bullseye-fasttrack InRelease [12.9 kB]
Hit:9 tor+http://HTTPS///deb.whonix.org bullseye InRelease
Fetched 12.9 kB in 8s (1,548 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
4 packages can be upgraded. Run 'apt list --upgradable' to see them.

Short version: it is not possible to have Whonix templates use cacher as their update proxy through RPC policies unless the hack is implemented - or sys-whonix is simply kept as the update proxy.

Proof that cacher was used for a Tor onion hidden service and that the result was cached? It is in the cacher app qube’s package cache, second line:
[screenshot: 2022-09-01-184950]

Forum logistics

@deeplow, @fsflover
I feel this thread has diverged hugely from anything that might be
useful in User Support.
You could name this thread: “Caching updates for non-templates
using apt-cacher-ng”

I don’t know if you can cherry pick posts, or if you have to actually
split the thread, and move everything below a certain post to a new
thread.
If it’s the latter, I think you should split at Insurgo post Mon, 29 Aug 2022 19:08:41 +0000
that starts (after a quote):
“@unman: I agree that this approach is not the best…”

That new thread should be in General Discussion, and could be titled
“Issues with apt-cacher-ng”, or “Thoughts on apt-cacher-ng”, or similar.

This isn’t ideal, but neither is the current mess.

Forum logistics

Possible, but that requires reading the whole thread from start to end to make good decisions - something I am willing to do, but lack the time for at the very moment.

This can be done very easily and is probably a good first step. Later, when there is more time, I (or another mod/leader) can come and fine-tune the result and return some posts to the original thread, should they have been misplaced here.

Done.

Forum logistics

Can you change title to “Caching updates for non-templates using
apt-cacher-ng”
