EDIT:
- A working but undesired implementation of the cacher proxy, which advertises itself as a guaranteed tor proxy even when that is not true, is at: https://github.com/unman/shaker/issues/10#issuecomment-1221116802
- The only way whonix template updates could be cached like all other templates (with repo definitions modified to comply with apt-cacher-ng requirements) would be to create a cacher-whonix variant that deactivates whonix-gw's tinyproxy and replaces it with apt-cacher-ng, so that sys-whonix becomes both the update proxy and the cache proxy (see the sketch below).
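A rough sketch of what such a cacher-whonix variant might do inside the whonix-gw template (unit and package names are my assumptions, not taken from any existing package):
```
# Hypothetical "cacher-whonix" adaptation -- sketch only, names to verify:
# stop the tinyproxy-based updates proxy and run apt-cacher-ng in its place,
# so that sys-whonix acts as both the torified update proxy and the cache.
sudo systemctl disable --now qubes-updates-proxy   # assumed unit name for the tinyproxy instance
sudo apt install apt-cacher-ng
# apt-cacher-ng would also have to listen where templates expect the proxy
# (port 8082, see below), e.g. via the Port: directive in /etc/apt-cacher-ng/acng.conf.
sudo systemctl enable --now apt-cacher-ng
```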
----
When cacher is activated, the whonix-gw and whonix-ws templates cannot be updated anymore, since both templates run a check at boot through the systemd service qubes-whonix-torified-updates-proxy-check.
~Also, cacher overrides whonix recipes applied at salt install from qubes installation, deployed when the user specifies that he wants all updates to be downloaded through sys-whonix.~
~The standard place where qubes defines, and applies policies on what to use as update proxies to be used is under `/etc/qubes-rpc/policy/qubes.UpdatesProxy` which contains on standard install:~
Whonix still has its policies deployed at the Qubes 4.0 standard place:
```
$type:TemplateVM $default allow,target=sys-whonix
$tag:whonix-updatevm $anyvm deny
```
while cacher writes its rules at the standard Q4.1 place:
https://github.com/unman/shaker/blob/3f59aacbad4e0b10e69bbbcb488c9111e4a784bc/cacher/use.sls#L7-L9
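For reference, the resulting Q4.1-format rule (also shown in the steps further down) ends up in dom0 as:
```
# /etc/qubes/policy.d/30-user.policy, as deployed by cacher
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
```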
~First thing first, I think both cacher and whonix should agree on where UpdatesProxy settings should be prepended/modified, which I think historically (and per Qubes documentation as well) it should be under `/etc/qubes-rpc/policy/qubes.UpdatesProxy` for clarity and not adding confusion.~
Whonix policies need to be applied per the Q4.1 standard under Qubes. Not the subject of this issue.
@unman @adrelanos @fepitre
---------
The following applies proper tor+cacher settings:
https://github.com/unman/shaker/blob/3f59aacbad4e0b10e69bbbcb488c9111e4a784bc/cacher/change_templates.sls#L5-L13
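Concretely, this rewrites the templates' repository definitions to apt-cacher-ng's HTTPS passthrough scheme; a representative before/after for a single sources.list line (suite and components assumed here) would be:
```
# before (torified https, assumed original form in the whonix template):
deb tor+https://deb.debian.org/debian bullseye main
# after (apt-cacher-ng HTTPS passthrough, matching the apt output further down):
deb tor+http://HTTPS///deb.debian.org/debian bullseye main
```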
Unfortunately, whonix templates implement a sys-whonix usage check which prevents the templates from using cacher.
This is documented at https://www.whonix.org/wiki/Qubes/UpdatesProxy and is the result of the `qubes-whonix-torified-updates-proxy-check` systemd service started at boot.
Source code of the script can be found at https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check
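To see whether that check has already succeeded in a running template, one can inspect the service and the flag file it creates (the same flag file is used in step 3 below):
```
# inside the whonix template
systemctl status qubes-whonix-torified-updates-proxy-check
ls /run/updatesproxycheck/whonix-secure-proxy-check-done 2>/dev/null \
  || echo "proxy check has not succeeded yet"
```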
Hacking around the current internals of both projects, one can temporarily disable cacher's override so that torified-updates-proxy-check succeeds and sets its success flag, which persists for the life of that booted TemplateVM. We can then reactivate cacher's UpdatesProxy override, restart qubesd, and validate that cacher is able to handle tor+http->cacher->tor+https on Whonix TemplateVMs:
1- Deactivate cacher's override of the qubes.UpdatesProxy policy:
```
[user@dom0 ~]$ cat /etc/qubes/policy.d/30-user.policy
#qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
```
2- Restart qubesd
```
[user@dom0 ~]$ sudo systemctl restart qubesd
[user@dom0 ~]$
```
3- Manually restart whonix template's torified-updates-proxy-check (here whonix-gw-16)
`user@host:~$ sudo systemctl restart qubes-whonix-torified-updates-proxy-check`
We can see that whonix has set its success flag, as implemented at:
https://github.com/Whonix/qubes-whonix/blob/98d80c75b02c877b556a864f253437a5d57c422c/usr/lib/qubes-whonix/init/torified-updates-proxy-check#L46
```
user@host:~$ ls /run/updatesproxycheck/whonix-secure-proxy-check-done
/run/updatesproxycheck/whonix-secure-proxy-check-done
```
4- Manually re-enable cacher's override and restart qubesd:
```
[user@dom0 ~]$ cat /etc/qubes/policy.d/30-user.policy
qubes.UpdatesProxy * @type:TemplateVM @default allow target=cacher
[user@dom0 ~]$ sudo systemctl restart qubesd
```
5- Check that downloading tor+https through cacher works from the whonix template:
```
user@host:~$ sudo apt update
Hit:1 http://HTTPS///deb.qubes-os.org/r4.1/vm bullseye InRelease
Hit:2 tor+http://HTTPS///deb.debian.org/debian bullseye InRelease
Hit:3 tor+http://HTTPS///deb.debian.org/debian bullseye-updates InRelease
Hit:4 tor+http://HTTPS///deb.debian.org/debian-security bullseye-security InRelease
Hit:5 tor+http://HTTPS///deb.debian.org/debian bullseye-backports InRelease
Get:6 tor+http://HTTPS///fasttrack.debian.net/debian bullseye-fasttrack InRelease [12.9 kB]
Hit:7 tor+http://HTTPS///deb.whonix.org bullseye InRelease
Fetched 12.9 kB in 7s (1,938 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
```
The problem with this is that the Qubes update mechanism will start templates and try to apply updates unattended, and this manual workaround obviously cannot be performed unattended.
The question is then: how can whonix templates run a functional test to see that torified updates **are possible**, instead of whonix assuming it is the only one providing the service? The code seems to implement a curl check, but it does not work even when cacher is exposed as a tinyproxy replacement listening on 127.0.0.1:8082. Still digging, but in the end we either need a mitigation (disable the Whonix check) or proper functional testing from Whonix, which should detect that the torified repositories are actually accessible.
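A rough sketch of what such a functional test could look like, assuming the update proxy (whoever provides it) answers on 127.0.0.1:8082 and reusing the flag file path shown in step 3 above; this is only an illustration, not the existing Whonix check:
```
#!/bin/sh
# Hypothetical check: consider torified updates possible if a torified
# repository is reachable through the local update proxy, regardless of
# whether tinyproxy or apt-cacher-ng is behind it.
if curl --silent --fail --max-time 60 \
        --proxy http://127.0.0.1:8082 \
        http://deb.whonix.org/dists/bullseye/InRelease >/dev/null; then
    mkdir -p /run/updatesproxycheck
    touch /run/updatesproxycheck/whonix-secure-proxy-check-done
fi
```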
-----
How to fix this?
Some hints:
1- cacher and whonix should modify the policy in the same place, to ease troubleshooting and understanding of what is modified on the host system, especially where dom0 is concerned. I think cacher should prepend its rule to `/etc/qubes-rpc/policy/qubes.UpdatesProxy`, for example as sketched below.
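A sketch of what the merged 4.0-format file could then look like (an assumption only, not the current behaviour of either project):
```
# /etc/qubes-rpc/policy/qubes.UpdatesProxy -- sketch only.
# First match wins, so with this ordering whonix templates would also be
# directed to cacher, which is exactly the behaviour discussed above.
$type:TemplateVM $default allow,target=cacher
$type:TemplateVM $default allow,target=sys-whonix
$tag:whonix-updatevm $anyvm deny
```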
2- Whonix seems to have thought of a proxy check override:
https://github.com/Whonix/qubes-whonix/blob/685898472356930308268c1be59782fbbb7efbc3/etc/uwt.d/40_qubes.conf#L15-L21
@adrelanos: not sure this is the best option, and I haven't found where to trigger that override so that the check is bypassed?
3- At Qubes OS install time, torified updates and torifying all network traffic (setting sys-whonix as the default gateway) are two different things, the latter not being enforced by default. Salt recipes are available to force updates through sys-whonix when that option is selected at install, and dom0 still uses them after cacher deployment:

So my setup picked sys-whonix as the default gateway for cacher, since I configured my install to use the sys-whonix proxy by default; this is what permits tor+http/HTTPS to go through after applying the manual mitigations above. That would not necessarily be the case for default deployments (this needs verifying), where sys-firewall is the default unless changed.
@unman: on that, I think the configure script should handle this corner case and make sure sys-whonix is the default gateway for cacher when whonix is deployed (a minimal sketch follows the spec link below). Or maybe your solution is meant to work independently of Whonix altogether (#6), but then there would be a discrepancy between the Qubes installation options, what most users use, and what is available out of the box after installing cacher from the rpm:
https://github.com/unman/shaker/blob/3f59aacbad4e0b10e69bbbcb488c9111e4a784bc/cacher.spec#L50-L60
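A minimal dom0 sketch of that corner-case handling (assuming the qube is named `cacher`, as in the policy above; not part of the current configure script):
```
# dom0: if Whonix is deployed, route cacher's traffic through sys-whonix so
# that tor+http://HTTPS/// repositories keep working.
if qvm-check --quiet sys-whonix 2>/dev/null; then
    qvm-prefs cacher netvm sys-whonix
fi
```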
4- Also, cacher cannot currently be used for dom0 updates. Assigning dom0 updates to cacher gives the following error from cacher:
`sh: /usr/lib/qubes/qubes-download-dom0-updates.sh: not found`
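If the missing script is the only problem, the qube serving as dom0's UpdateVM presumably needs the package that ships it; my assumption is that this is qubes-core-agent-dom0-updates, installed in the template cacher is based on:
```
# inside the template cacher is based on (Debian family) -- assumption:
# qubes-download-dom0-updates.sh is shipped by qubes-core-agent-dom0-updates
sudo apt install qubes-core-agent-dom0-updates
```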
So when using cacher + sys-whonix, sys-whonix would still be used by dom0 (where caching would not necessarily make sense, since dom0 does not share the same Fedora version as the templates, though I understand this is expected to change in the near future).
The point being: sys-whonix would still offer its tinyproxy service, sys-whonix would still be needed to torify whonix template updates, and dom0 would still depend on sys-firewall, not cacher, on a default install (without whonix installed). Maybe a little more thought needs to be given to the long-term approach of pushing this amazing caching proxy forward, so as not to break things for willing testers :)
@fepitre: adding you in case you have additional recommendations; feel free to tag Marek later if you feel it is needed. This caching proxy is a really long-awaited feature (https://github.com/QubesOS/qubes-issues/issues/1957) which would be a life changer if salt recipes, cloning from debian-11-minimal and specializing templates for different use cases, and bringing a salt store are the desired outcome of all this.
Thank you both. Looking forward to a cacher that can be deployed as "rpm as a service" on default installations.
We are close to that, but as of now it still doesn't work out of the box, and it seems to need a bit of collaboration from you both.
Thanks guys!