Issues with apt-cacher-ng

Is this possible while using sys-whonix as the netvm for cacher? I played with this possibility, without success, but I didn’t want to mess around with tor for fear of not implementing it correctly. I’d much rather use a minimal template for cacher and be able to cache both debian and whonix packages.


@unman: cacher’s package cache growing indefinitely is fine for now, since it lets newcomers to the project observe cacher’s benefits, but I’m not sure caching packages for more than a week makes any sense for a Qubes repository cacher; in the long run it leads to space problems and vm-pool over-provisioning problems.

Reading the Maintenance documentation, there doesn’t seem to be a good way to clean-sweep/prune the cache content to prevent all space from being consumed one day; apt-cacher-ng’s goal is to give organizations a central way to cache all package download requests and to economize bandwidth forever. The documentation does specify how the cache is cleaned automatically: if upstream repositories delete files, the local cache removes those packages too. For cacher this means packages will most probably always be there and the cache will grow indefinitely.
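For reference, the knob governing that behaviour lives in apt-cacher-ng’s main configuration; a minimal sketch, assuming the Debian package’s default path (the key spelling differs between apt-cacher-ng versions, so check the acng.conf you actually have):

# /etc/apt-cacher-ng/acng.conf (excerpt)
# Days to keep a file after the daily expiration task first finds it no longer
# referenced by any repository index (spelled ExThreshold or, historically,
# ExTreshold depending on the version).
ExThreshold: 4

Note that this only triggers once upstream stops referencing a package, which is exactly why the cache keeps growing in practice.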

Do you have ideas for a proper implementation, so that the Qubes use case here doesn’t amount to mirroring every downloaded package forever? I do not expect any of the packages currently in cacher’s cache to ever disappear before my cacher private LVM volume fills up; users are expected to let that volume grow, or newer 4.1 defaults will let those volumes grow until the vm-pool explodes. Some kind of conservative/aggressive safeguard should be applied, at least reflecting the general Qubes use case, or the worst case where all supported templates have cacher caching packages for them (I am not familiar with Arch’s update behavior and the disk space cost of its package downloads, since I do not use Arch).

(screenshot, 2022-09-01)

3.7G today. Roughly ~300MB a day, the culprit being debian-12, which is a testing repository, and the pace of package updates in testing repositories.

At the current rhythm (caching debian-11, debian-12, fedora-36 and whonix since the 18th of August) I expect cacher’s 18.6G default private volume to be filled within six months at most. Maybe some deletion magic should be implemented in a monthly cron job (delete files older than X days (30?) and restart apt-cacher-ng?)
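A minimal sketch of that idea, assuming the Debian default cache location /var/cache/apt-cacher-ng inside the cacher qube and a 30-day cutoff (adjust both, and test against a copy of the cache before trusting it):

#!/bin/sh
# /etc/cron.monthly/prune-cacher  (mark it executable)
# Delete cached files untouched for more than 30 days, drop empty directories,
# then restart apt-cacher-ng so it rebuilds its view of the cache.
find /var/cache/apt-cacher-ng -type f -mtime +30 -delete
find /var/cache/apt-cacher-ng -type d -empty -delete
systemctl restart apt-cacher-ng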


Same here, @unman. It seems this works with your unpublished tor shaker recipe, but from my hands-on experience with whonix templates (and consequently sys-whonix), once again, without userinfo.html being hacked into template-cacher, whonix templates will refuse to use cacher as an update proxy. And qubes cannot transparently talk to onion addresses without additional configuration. This is why I insist on working around the issue. Having debian use tor repositories is untested on my side, but again, I do not understand why/how hidden onion addresses would be resolved and used transparently (at least, this doesn’t work out of the box with Whonix unless a Whonix workstation is used and preconfigured to use the correct proxy ports (uwt wrappers), which once again expects sys-whonix to be the net qube providing network. Not cacher->sys-whonix as in the current scenario, which doesn’t work. Please test.)

  • Agreed that hacking Whonix templates (e.g., deactivating the tor check on configured templates) is not a solution, since switching the update proxy to something else would break the current guarantees.

  • Agreed that hacking the cacher template so that apt-cacher-ng provides a fake guarantee that its update proxy enforces tor weakens the contract expected when using whonix templates: that updates are torified.

But if we reject the idea of creating a cacher-whonix alternative shaker recipe, aimed at configuring the whonix-gw template to implement apt-cacher-ng properly (disabling tinyproxy and enabling apt-cacher-ng with a modified userinfo.html, so that the contract expected by whonix templates is properly enforced), then I am seriously out of ideas for an apt-cacher-ng implementation that could one day make its way into the Qubes repositories (without users needing to know implementation details) or even be offered as an option at OS install.
And that is the future I wish for cacher.

Those are the ultimate goals, aren’t they? Qubes pushes users to specialize templates. Users do it when guided. Cacher prevents installing stupid things. Installing repos will soon become an option with extrepo. But when users specialize templates, the bandwidth requirement explodes. I don’t know about you, @unman, but my customer base is mostly end users around the world, most of them without unlimited bandwidth, most of the time relying on shitty internet connections from public hotspots, parks or fast food restaurants; their home connection is not fast either, and the update process (Update widget: one template at a time, re-downloading packages for each specialized template) takes forever without caching, while caching literally changes the whole experience. They start the updates at night, leaving their laptop unattended with disks decrypted. And they are worried. Even though this is clearly not extendable to the whole community, I think it represents a lot of users (I will not vouch for unknown users, but for the known situations of people wanting to use Qubes for its out-of-the-box security/privacy (whonix) benefits).

This is why I have pushed for, and tried to get funding for, [Contribution] qubes-updates-cache · Issue #1957 · QubesOS/qubes-issues · GitHub for a really long time; the implementation was too manual and hacky to interest any grant application reviewer. Until now. The real question is how to do this properly, and I think it needs more thorough reflection.

And… @unman: it would require you to install the Whonix templates and test them at least a little for the cacher use case.

If you want to discuss this a bit more off-forum, I am more than willing to, and more seriously. I still think this project could get funding if properly packaged, if you want to go in that direction. My foreseen goal is for cacher to cache everything update-related, including dom0 (as said before, the future projection, as I understand it, is for Qubes dom0 to follow mainstream Fedora; even if dom0’s package deployment is minimal, the cost is not negligible if we look at downloading repository metadata alone for a different Fedora version today, and caching will economize bandwidth once that goal is achieved). Leaving Whonix aside should not be an option.

In my past corporate life, I used to develop network fingerprinting technology based on Windows update requests and responses. That was back when Windows talked to its update servers over http. If we are lucky, Windows still honours the registry keys to use http, and maybe that permits apt-cacher-ng compatibility as well; unknown to me now. The question there is whether we can deploy a salt runner in Windows, and whether it even makes sense to cache those packages (Windows is currently used as a StandaloneVM, since deploying it as a pure Template broke a while ago because of bugs in Qubes Windows Tools and the private volume not being properly initialized. That should be fixed by now, and newer versions of the Windows tools will also support Windows 11, with the LTSC version landing properly and one day supported by Elliot Killick’s script). My intuition, based on customer requests, the discussions on this forum, and the Tabit-pro work happening behind the scenes that should soon land at least in the Qubes testing repos, is that Qubes will finally get proper Windows support and compartmentalization, with a PR going into Killick’s repo and addressing Alternate qubes-windows-tools available · Issue #15 · ElliotKillick/qvm-create-windows-qube · GitHub

Long story short (as always, sorry, I try to be brief but it’s not my forte): if cacher is scoped properly, the cacher proxy will be publicized, most probably deployed in the Qubes community repo at some point, the cacher package signed and deployed in the main stable Qubes repository one day, and maybe widely used and deployed as a default (and, funnily enough, itself cached under cacher’s update proxy on all Qubes computers one day…)

@deeplow a lot of the replies here are not related to “updating app qubes” but rather to cacher’s current usage possibilities and coverage expectations. Not sure those replies are relevant here. This post definitely isn’t, and tries to address the current and future expected scope of the project. If you move this post, I will edit it with proper references.

Sorry, I’m lacking the contextual knowledge to split this thread (I haven’t played around with cacher yet). Perhaps @fsflover can give a hand (if that’s something they’re familiar with)?


I admit I’m totally “lost in translation” with this topic’s continuation. I want to change its title, but that’s not allowed anymore. @deeplow, may I kindly ask you to change it to:
How to update non-template qubes over qrexec while using apt-cacher-ng as netVM

Thank you in advance, but my topic definitely isn’t about cacher

Anyway

[quote=“Insurgo, post:33, topic:10548”]

… I can confirm that in my qubes.UpdatesProxy I set

$tag:whonix-updatevm $default allow,target=cacher
$type:TemplateVM $default allow,target=cacher
$type:AppVM $default allow,target=cacher
$tag:whonix-updatevm $anyvm deny

and in my 30-default.policy I set

# HTTP proxy for downloading updates
# Upgrade Whonix TemplateVMs through sys-whonix.
qubes.UpdatesProxy * @tag:whonix-updatevm @default allow target=cacher
# Deny Whonix TemplateVMs using UpdatesProxy of any other VM.
qubes.UpdatesProxy * @tag:whonix-updatevm @anyvm deny
# Upgrade all TemplateVMs through sys-whonix or cacher.
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
# Upgrade all AppVMs through sys-whonix or cacher.
qubes.UpdatesProxy * @type:AppVM @default allow target=cacher
# Default rule for all TemplateVMs - direct the connection to sys-net-dvm or cacher
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
qubes.UpdatesProxy * @anyvm @anyvm deny

and after

sudo systemctl restart qubesd

when running update in Whonix template, I got

user@host:~$ sudo apt update && sudo apt full-upgrade
Get:1 tor+http://5ajw6aqf3ep7sijnscdzw77t7xq4xjpsy335yb2wiwgouo7yfxtjlmid.onion/debian-security bullseye-security InRelease [48.4 kB]
Hit:2 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:3 Index of /deb/r4.1/vm/ bullseye InRelease
Hit:4 Index of /deb/r4.1/vm/ bullseye-testing InRelease
Hit:5 tor+https://deb.whonix.org bullseye InRelease
Get:6 tor+http://5ajw6aqf3ep7sijnscdzw77t7xq4xjpsy335yb2wiwgouo7yfxtjlmid.onion/debian-security bullseye-security/main amd64 Packages [180 kB]
Hit:7 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye InRelease
Hit:8 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:9 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease
Get:10 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-updates InRelease [44.1 kB]
Get:11 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports InRelease [49.0 kB]
Get:12 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages.diff/Index [63.3 kB]
Get:13 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages T-2022-09-01-2027.29-F-2022-09-01-2027.29.pdiff [19.0 kB]
Get:13 tor+http://2s4yqjx5ul6okpp3f2gaunr2syex5jgbfpfvhxxbbjwnrsvbk5v3qbid.onion/debian bullseye-backports/main amd64 Packages T-2022-09-01-2027.29-F-2022-09-01-2027.29.pdiff [19.0 kB]
Fetched 404 kB in 25s (16.2 kB/s)
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
1 package can be upgraded. Run ‘apt list --upgradable’ to see it.
Reading package lists… Done
Building dependency tree… Done
Reading state information… Done
Calculating upgrade… Done
The following packages were automatically installed and are no longer required:
libboost-program-options1.74.0 zsh zsh-common
Use ‘sudo apt autoremove’ to remove them.
The following packages will be upgraded:
libvchan-xen
1 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 8,916 B of archives.
After this operation, 1,024 B of additional disk space will be used.
Do you want to continue? [Y/n] n
Abort.
user@host:~$

Btw, my cache is 11GB at the moment.

And then everything in the policies talks about cacher :thinking:

I am now really confused. In my tests, if the whonix templates had already been launched with sys-whonix configured as the updates proxy, I got the same results as you (but not over tor); your output above shows that cacher is actually not being used now, since there are no http/https URL hits there, only tor+http. I have not tested that. Restarting that whonix template and retrying would be interesting, but as I said, I do not understand what is happening here, since nothing passes through apt-cacher-ng (cacher) and your URLs are not apt-cacher-ng-compatible URLs like:
deb [arch=amd64] http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye main


And you are right, @enmus @unman: uncommenting the tor+http onion links works for the quick test I’ve just done (again with userinfo.html put into template-cacher to fool the whonix template’s proxy check, which I still do not understand how @enmus is getting around as of now: how are whonix templates able to use cacher with the configuration above?).
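For anyone trying to reproduce the hack being referred to, here is a rough sketch, run inside template-cacher. Two assumptions to verify first: that Debian ships apt-cacher-ng’s usage page at /usr/lib/apt-cacher-ng/userinfo.html, and that the qubes-whonix torified-proxy check is satisfied by a response containing the “tor proxy” application-name meta tag that sys-whonix’s tinyproxy normally serves:

# Append the meta tag the proxy check looks for to apt-cacher-ng's usage page.
echo '<meta name="application-name" content="tor proxy"/>' | sudo tee -a /usr/lib/apt-cacher-ng/userinfo.html
# Then shut down template-cacher and restart the cacher qube so it picks up the change.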

From my current testing setup, passing everything to cacher, with cacher faking a tor-capable update proxy through the modified userinfo.html exposed by apt-cacher-ng in template-cacher:

user@host:~$ grep -R onion /etc/apt/sources.list.d/ | grep -v "#" | grep -v bak
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye main
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing main
/etc/apt/sources.list.d/qubes-r4.list:deb [arch=amd64] tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting main
user@host:~$ sudo apt update | grep onion

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

Hit:1 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:2 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:3 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease

Shutting down the whonix template.
Modifying template-cacher to remove my faked tor-compatible update proxy.

[user@dom0 ~]$ qvm-run --user root template-cacher "xterm" 
Running 'xterm' on template-cacher

Removing my hack. Saving. Restarting cacher. Starting whonix template.
cacher still using sys-whonix as netvm.

user@host:~$ sudo apt update
WARNING: Execution of /usr/bin/apt prevented by /etc/uwt.d/40_qubes.conf because no torified Qubes updates proxy found.
Please make sure Whonix-Gateway (commonly called sys-whonix) is running.

- If you are using Qubes R3.2: The NetVM of this TemplateVM should be set to Whonix-Gateway (commonly called sys-whonix).

- If you are using Qubes R4 or higher: Check your _dom0_ /etc/qubes-rpc/policy/qubes.UpdatesProxy settings.

_At the very top_ of that file you should have the following:

$tag:whonix-updatevm $default allow,target=sys-whonix

To see if it is fixed, try running in Whonix TemplateVM:

sudo systemctl restart qubes-whonix-torified-updates-proxy-check

Then try to update / use apt-get again.

For more help on this subject see:
https://www.whonix.org/wiki/Qubes/UpdatesProxy

If this warning message is transient, it can be safely ignored.

Nope. Whonix cannot talk through cacher at all if cacher doesn’t expose itself as a tor-enabled proxy to the whonix proxy check, so I cannot understand why your setup works.

On my side I have only the following policy for my test:

[user@dom0 ~]$ sudo cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @anyvm  @default  allow target=cacher
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

since

[user@dom0 ~]$ ls /etc/qubes-rpc/policy/qubes.UpdatesProxy
ls: cannot access '/etc/qubes-rpc/policy/qubes.UpdatesProxy': No such file or directory

/etc/qubes-rpc/policy/qubes.UpdatesProxy should not exist anymore, as flagged to @adrelanos for 4.1 (v5.0 policy format).
Not sure whether removing that file is @marmarek’s or @adrelanos’s job, nor which policy runs first, but I know /etc/qubes-rpc/policy/qubes.UpdatesProxy is deprecated and that relying on it is misleading.

Putting back my userinfo.html under template-cacher. Shutting down template-cacher, then shutting down cacher (via an xterm started from dom0 with qvm-run --user root).
Starting the whonix template, with cacher now letting the whonix update proxy check succeed thanks to the userinfo.html hack; a mix of tor+http (onion) and http://HTTPS/// URLs then works:

user@host:~$ sudo apt update
Hit:1 tor+http://HTTPS///deb.debian.org/debian bullseye InRelease
Hit:2 tor+http://HTTPS///deb.debian.org/debian bullseye-updates InRelease
Hit:3 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye InRelease
Hit:4 tor+http://HTTPS///deb.debian.org/debian-security bullseye-security InRelease
Hit:5 tor+http://HTTPS///deb.debian.org/debian bullseye-backports InRelease
Hit:6 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-testing InRelease
Hit:7 tor+http://deb.qubesosfasa4zl44o4tws22di6kepyzfeqv3tg4e3ztknltfxqrymdad.onion/r4.1/vm bullseye-securitytesting InRelease
Get:8 tor+http://HTTPS///fasttrack.debian.net/debian bullseye-fasttrack InRelease [12.9 kB]
Hit:9 tor+http://HTTPS///deb.whonix.org bullseye InRelease
Fetched 12.9 kB in 8s (1,548 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
4 packages can be upgraded. Run 'apt list --upgradable' to see them.

Short version: it is not possible to have whonix templates use cacher as their update proxy through RPC policies alone; it only works with the hack implemented, or if sys-whonix is still used as the update proxy.

Proof that cacher is being used and that the tor hidden (onion) service content has been cached? It is in the cacher app-vm’s package cache, second line:
(screenshot, 2022-09-01)


@deeplow, @fsflover
I feel this thread has diverged hugely from anything that might be useful in User Support.
You could name this thread: “Caching updates for non-templates using apt-cacher-ng”.

I don’t know if you can cherry-pick posts, or if you have to actually split the thread and move everything below a certain post to a new thread.
If it’s the latter, I think you should split at the Insurgo post of Mon, 29 Aug 2022 19:08:41 +0000 that starts (after a quote):
“@unman: I agree that this approach is not the best…”

That new thread should be in General Discussion, and could be titled “Issues with apt-cacher-ng”, “Thoughts on apt-cacher-ng”, or similar.

This isn’t ideal, but neither is the current mess.


Possible, but that requires reading the whole thread from start to finish to make good decisions. Something I am willing to do, but I lack the time at the very moment.

This can be done very easily and is probably a good first step. Later, when there is more time, I (or another mod/leader) can come and fine-tune the result and return some posts to the original thread, should they have been misplaced here.

Done.


Can you change the title to “Caching updates for non-templates using apt-cacher-ng”?


Thank you @Sven


Done


Sorry, I was bamboozled by the fork.

I wanted the General Discussion thread to be “Issues with apt-cacher-ng” and the User Support thread to be “Caching updates for non-templates using apt-cacher-ng”.


It shall be so. :wink:

@Insurgo, @unman,

In /etc/apt/apt.conf.d/90whonix I found this:

## apt caching proxy:
## apt-cacher-ng
##
## If you want to use apt-cacher-ng,
## you have to disable the apt-get uwt wrapper:
##     create a file /etc/uwt.d/50_user.conf with
##     uwtwrapper["/usr/bin/apt-get"]="0"
##

So I did it.

In the same file I found this:

## Please do not edit 90whonix file! This is because with next
## Whonix update, this file may get replaced with an improved version.
## That is why you created your own file /etc/apt/apt.conf.d/50user instead.

So I created it with these options enabled

## Working.
Acquire::http { Proxy "http://127.0.0.1:8082"; };
##
## Untested.
Acquire::http::Proxy "http://127.0.0.1:8082/";
Acquire::https::Proxy "http://127.0.0.1:8082/";
Acquire::ftp::Proxy "ftp://127.0.0.1:8082/";
Acquire::tor::Proxy "http://127.0.0.1:8082/";

According to /etc/uwt.d/30_uwt_default.conf I created 50_uwt_user.conf and specifically disabled

uwtwrapper["/usr/bin/apt"]="0"
uwtwrapper["/usr/bin/apt-get"]="0"

I have changed all the sources lists in Whonix to point to cacher by adding http://HTTPS/// to each link. Onion addresses aren’t working because I point to cacher anyway, and from there to tor via sys-whonix, so I commented them out.
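For illustration, one rewritten line might look like the following (a hypothetical entry; keep the suites and components of your existing line, the change is only the http://HTTPS/// prefix that apt-cacher-ng maps back to https):

deb tor+http://HTTPS///deb.debian.org/debian bullseye main contrib non-free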

In dom0, in my 30_user.policy, I have changed the whonix VMs to point to cacher:

# Upgrade Whonix TemplateVMs through sys-whonix.
qubes.UpdatesProxy      *   @tag:whonix-updatevm    @default    allow target=cacher
# Deny Whonix TemplateVMs using UpdatesProxy of any other VM.
qubes.UpdatesProxy      *   @tag:whonix-updatevm    @anyvm      deny
qubes.UpdatesProxy	* vault @anyvm deny
...
...

Started update in whonix template

user@host:~$ sudo apt update
Hit:1 tor+http://HTTPS///deb.debian.org/debian bullseye InRelease
Hit:2 http://HTTPS///contrib.qubes-os.org/deb/r4.1/vm bullseye InRelease
Get:3 tor+http://HTTPS///deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
Hit:4 tor+http://HTTPS///deb.debian.org/debian-security bullseye-security InRelease
Hit:5 http://HTTPS///contrib.qubes-os.org/deb/r4.1/vm bullseye-testing InRelease
Get:6 tor+http://HTTPS///deb.debian.org/debian bullseye-backports InRelease [49.0 kB]
Hit:7 http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye-testing InRelease
Hit:8 tor+http://HTTPS///fasttrack.debian.net/debian bullseye-fasttrack InRelease
Hit:9 http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye-securitytesting InRelease
Hit:10 tor+http://HTTPS///deb.whonix.org bullseye InRelease                    
Get:11 tor+http://HTTPS///deb.debian.org/debian bullseye-updates/main amd64 Packages.diff/Index [12.8 kB]
Get:12 tor+http://HTTPS///deb.debian.org/debian bullseye-backports/main amd64 Packages.diff/Index [63.3 kB]
Get:13 tor+http://HTTPS///deb.debian.org/debian bullseye-updates/main amd64 Packages T-2022-10-15-2035.13-F-2022-10-15-2035.13.pdiff [286 B]
Get:13 tor+http://HTTPS///deb.debian.org/debian bullseye-updates/main amd64 Packages T-2022-10-15-2035.13-F-2022-10-15-2035.13.pdiff [286 B]
Get:14 tor+http://HTTPS///deb.debian.org/debian bullseye-backports/main amd64 Packages T-2022-10-15-2035.13-F-2022-10-15-2035.13.pdiff [387 B]
Get:14 tor+http://HTTPS///deb.debian.org/debian bullseye-backports/main amd64 Packages T-2022-10-15-2035.13-F-2022-10-15-2035.13.pdiff [387 B]
Fetched 170 kB in 1min 10s (2,420 B/s)                                         
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
1 package can be upgraded. Run 'apt list --upgradable' to see it.
user@host:~$

Can someone please confirm this, and also whether there is anything security-unwise in these actions?

Hmm.

I guess this is under whonix templates.

My understanding is that this is another way of disabling the whonix check for a torified updates proxy.

For the moment I have no opinion on this; the implementation discussion is happening under shaker at

@unman : I am still confused on some of the internals.

Wanting to reply on a forum thread, and not having installed anything under my disposable sys-net for a while, I tried to install wireshark there in the current session, along with additional tools I needed just for that session…

And fell into another rabbit hole.

First, as I am normally able to do with the current setup on other qubes:

[user@sys-net ~]$ sudo dnf update
Fedora 36 - x86_64                                                                                                                       11  B/s | 547  B     00:48    
Errors during downloading metadata for repository 'fedora':
  - Status code: 500 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-36&arch=x86_64&protocol=http&protocol=http (IP: 127.0.0.1)
Error: Failed to download metadata for repo 'fedora': Cannot prepare internal mirrorlist: Status code: 500 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-36&arch=x86_64&protocol=http&protocol=http (IP: 127.0.0.1)

Hmm?

[user@dom0 ~]$ qvm-service sys-net 
clocksync            on
qubes-update-check   on
updates-proxy-setup  on

Ok…
Let’s compare with sys-firewall, which works:

[user@sys-firewall ~]$ sudo dnf update
Fedora 36 openh264 (From Cisco) - x86_64                                                                                                0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                                                                                                             15 MB/s |  28 MB     00:01    
^C^C^C^C^C^C^C^C^C^CKeyboardInterrupt: Terminated.
[user@dom0 ~]$ qvm-service sys-firewall
qubes-update-check   on
updates-proxy-setup  on

Any explanation on this?

[user@dom0 ~]$ sudo cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @anyvm  @default  allow target=cacher
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher
[user@sys-net ~]$ wget 127.0.0.1:8082
--2022-10-17 14:03:14--  http://127.0.0.1:8082/
Connecting to 127.0.0.1:8082... connected.
HTTP request sent, awaiting response... 403 Filtered
2022-10-17 14:03:14 ERROR 403: Filtered.
[user@sys-firewall ~]$ wget 127.0.0.1:8082
--2022-10-17 14:03:42--  http://127.0.0.1:8082/
Connecting to 127.0.0.1:8082... connected.
HTTP request sent, awaiting response... 406 Usage Information
2022-10-17 14:03:42 ERROR 406: Usage Information.

I would have expected to have the same behavior between sys-firewall and sys-net.

Another question: how to change the default of a qvm-service? I recently created a new qube, and that qube doesn’t have updates-proxy-setup by default.
As we discussed before, I do not really see how a qube with the qubes-update-check service on, which depends on cacher to provide the package lists used to notify dom0, could do so without updates-proxy-setup also being enabled.

Consequently, for the way I use cacher, which lets me install software I sporadically need into sys-net or another disposable, I want my qubes to be able to report available updates for their templates when I use them, and I also want any qube I use to be able to install software, knowing full well, for my use case, that those changes won’t survive a reboot. For that, I would love to have updates-proxy-setup enabled by default unless I deactivate it manually.

How to accomplish this?

From https://dev.qubes-os.org/projects/core-admin-client/en/latest/manpages/qvm-service.html:

`--default`, `-D`, `--delete`, `--unset`
Reset service to its default state (remove from the list). Default state means “lets VM choose” and can depend on VM type (NetVM, AppVM etc).

@unman: do I understand correctly that updates-proxy-setup would need to be enabled by default in the templates / dispvm templates?
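Not an answer to changing the default itself, but as a per-qube workaround from dom0, here is a sketch assuming a qube named my-new-qube (the second command sets the underlying service.* feature that qvm-service manipulates):

[user@dom0 ~]$ qvm-service --enable my-new-qube updates-proxy-setup
[user@dom0 ~]$ qvm-features my-new-qube service.updates-proxy-setup 1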

I’m getting this here

user@cacher:~$ wget 127.0.0.1:8082
--2022-10-18 14:06:33--  http://127.0.0.1:8082/
Connecting to 127.0.0.1:8082... connected.
HTTP request sent, awaiting response... 406 Usage Information
2022-10-18 14:06:33 ERROR 406: Usage Information.

Your “here” is linked to something unsupported, from what I quickly surveyed of that post. I would suggest using shaker/cacher so that the project can improve from its deployed settings, instead of @unman or the community trying to figure out errors that would not occur when following known-to-work recipes.

I won’t comment on that thread’s own content, but this off-topic comment, referring to a thread about an in-place upgrade gone wrong, will be misleading in this thread.

In other circumstances, receiving usage information in a working setup would be good news. It means the web service is ready to receive direct connections and lets the user manage the cache settings from a web browser. Under correct circumstances, if cacher is set up as the netvm of a qube, that qube can connect directly to cacher’s IP address and port, or, if the updates-proxy-setup service is activated for a qube, it can talk to 127.0.0.1:8082 to reach the cache configuration settings directly.
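For example (hypothetical addresses; acng-report.html is apt-cacher-ng’s built-in report and maintenance page):

http://<cacher-ip>:8082/acng-report.html     (from a qube using cacher as its netvm)
http://127.0.0.1:8082/acng-report.html       (from a qube with the updates-proxy-setup service enabled)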

Offtopic

The fix was trivial, or rather quick. I linked to the other topic not knowing whether it was something about apt-cacher-ng starting not to work regardless of the template, especially the 403s. I hope I have clarified my reasoning now.