3isec-qubes-cacher-1.10-1 breaks Debian/Fedora TemplateVM updates

I installed the cacher package according to the steps outlined at https://qubes.3isec.org/tasks.html on my Qubes 4.1.2 setup.

The installation of 3isec-qubes-cacher-1.10-1.fc32.x86_64.rpm via dom0 went well, but afterward, none of my Debian or Fedora TemplateVMs could update via apt/dnf update.

Debian 12

E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Some index files failed to download. They have been ignored, or old ones used instead.

Fedora 38

Errors during downloading metadata for repository ‘fedora’:
Curl error (52): Server returned nothing (no headers, no data) for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-38&arch=x86_64&protocol=http [Empty reply from server]
Error: Failed to download metadata for repo ‘fedora’: Cannot prepare internal mirrorlist: Curl error (52): Server returned nothing (no headers, no data) for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-38&arch=x86_64&protocol=http [Empty reply from server]

My cacher AppVM is connected to the same netVM which is successfully providing network to other AppVMs. However, when I try to ping out from the cacher AppVM I get

ping: connect: Network is unreachable

ip link indicates eth0 is down. I brought it up via ip link set eth0 up but still can’t ping out from the cacher AppVM.

What am I missing here?

Usually when I see messages like that in a VM, I find that the VM is set up to try to use the proxy, but the qubes.UpdatesProxy policy is pointing to sys-firewall, not to sys-cacher (or vice versa, if it seems to be having trouble fetching https://mirrors… URLs without HTTPS in them).

In other words, you may need to fix your policy to match what your VM is trying to do (that’s easier than fixing the VM to conform to the policy).

I’ll assume the former case here since it would be consistent with what you report.

It works a bit differently in 4.2 versus 4.1. In 4.2, look at the file /etc/qubes/policy.d/50-config-updates.policy: near the end there should be a line that contains @type:TemplateVM and ends with target=sys-cacher. If it instead says something like target=sys-firewall, you may be able to fix your issue by editing that line to read target=sys-cacher.

In 4.1 the line will appear in a different file (grep for UpdatesProxy); I’m not sure which one because my 4.1 install has been tinkered with a lot.
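A sketch of that check, runnable anywhere: the policy line below is an assumption based on the standard Qubes 4.2 policy syntax (verify against your real /etc/qubes/policy.d/50-config-updates.policy); it is written to a temp file so the grep is the same one you would run against the real policy directory.

```shell
# Assumption: this line follows the standard Qubes 4.2 policy syntax.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=sys-cacher
EOF
# On a real system you would search the policy directory instead:
#   grep -rn UpdatesProxy /etc/qubes/policy.d/
found=$(grep -c 'target=sys-cacher' "$tmp")
echo "$found"   # 1 when the line points at sys-cacher
rm -f "$tmp"
```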

On the other hand your sys-cacher doesn’t seem to have network connectivity, so this probably isn’t the problem, or at least it’s not the ONLY problem.

On 4.1, I haven’t encountered this problem.
It seems to me that you need to address the networking problem first.
Why is the cacher offline when you have a netvm set? This looks
seriously wrong.
Check that you have qubes networking set up with apt list --installed qubes-core-agent-networking

systemctl status apt-cacher-ng will tell you if the service is running,
but until you resolve the network problem that’s no help.

Thanks @unman.

It looks like the template-cacher TemplateVM was created based on my Global default template (fedora-38-minimal) and therefore did not install apt-cacher-ng (obviously) or qubes-core-agent-networking (less obviously).

Even after manually installing those packages, my Fedora TemplateVMs error out when updating over clearnet (not Tor or VPN) connections.

Errors during downloading metadata for repository ‘updates’:
Status code: 403 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f38&arch=x86_64&protocol=http (IP: 127.0.0.1)
Error: Failed to download metadata for repo ‘updates’: Cannot prepare internal mirrorlist: Status code: 403 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f38&arch=x86_64&protocol=http (IP: 127.0.0.1)

I then tried to uninstall 3isec-qubes-cacher-1.10-1 from dom0 via sudo dnf remove 3isec-qubes-cacher-1.10-1.fc32.x86_64 so I could reinstall it with debian-11 set as the Global default template. The PREUN scriptlet appeared to run, but failed:

error: %preun(3isec-qubes-cacher-1.10-1.fc32.x86_64) scriptlet failed, exit status 20
Error in PREUN scriptlet in rpm package 3isec-qubes-cacher
Verifying : 3isec-qubes-cacher-1.10-1.fc32.x86_64 1/1
Failed:
3isec-qubes-cacher-1.10-1.fc32.x86_64

Is the 403 error still a known issue? And if so, how do we “clear the cacher before trying another update”?
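For reference, one plausible reading of "clear the cacher" is emptying apt-cacher-ng's cache directory. On Debian that defaults to /var/cache/apt-cacher-ng (an assumption; the authoritative value is the CacheDir setting in /etc/apt-cacher-ng/acng.conf), so a sketch of clearing it from inside the cacher AppVM would be:

```shell
# Assumption: default Debian cache location; check CacheDir in
# /etc/apt-cacher-ng/acng.conf before running this.
sudo systemctl stop apt-cacher-ng
sudo rm -rf /var/cache/apt-cacher-ng/*
sudo systemctl start apt-cacher-ng
```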

How should cacher be uninstalled? I found this unanswered forum post while trying to troubleshoot my unsuccessful uninstall and reinstall of cacher.

Update: I was able to uninstall 3isec-qubes-cacher from dom0 by running:

sudo dnf remove 3isec-qubes-cacher-1.10-1
sudo rpm --noscripts -e 3isec-qubes-cacher

And then reinstalled it after setting my Global template to debian-12 and running the following from dom0:

sudo dnf install ./3isec-qubes-cacher-1.10-1.fc32.x86_64.rpm -y

The reinstallation threw these errors:

whonix-ws-16: ERROR (exit code 20, details in /var/log/qubes/mgmt-whonix-ws-16.log)
whonix-gw-16: ERROR (exit code 20, details in /var/log/qubes/mgmt-whonix-gw-16.log)
warning: %post(3isec-qubes-cacher-1.10-1.fc32.x86_64) scriptlet failed, exit status 20
Error in POSTIN scriptlet in rpm package 3isec-qubes-cacher
Verifying : 3isec-qubes-cacher-1.10-1.fc32.x86_64 1/1
Installed:
3isec-qubes-cacher-1.10-1.fc32.x86_64

I noticed that in the template-cacher TemplateVM, sudo systemctl status apt-cacher-ng indicates apt-cacher-ng is installed as a masked service (expected), but qubes-core-agent and qubes-core-agent-networking were not installed (unexpected).

I noticed that the cacher AppVM doesn’t seem to have the /etc/apt-cacher-ng/ directory or the apt-cacher-ng service as per systemctl status apt-cacher-ng’s output:

Unit apt-cacher-ng.service could not be found.

I verified cacher has network connectivity via ping. However, now neither debian nor fedora TemplateVMs can update through cacher.

debian-12

E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Failed to fetch http://HTTPS///deb.debian.org/debian-security/dists/bookworm-security/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Failed to fetch http://HTTPS///deb.qubes-os.org/r4.1/vm/dists/bookworm/InRelease Connection failed [IP: 127.0.0.1 8082]
E: Some index files failed to download. They have been ignored, or old ones used instead.

fedora-38

Fedora 38 - x86_64 0.0 B/s | 0 B 00:01
Errors during downloading metadata for repository ‘fedora’:
Curl error (1): Unsupported protocol for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-38&arch=x86_64&protocol=http [Received HTTP/0.9 when not allowed]
Error: Failed to download metadata for repo ‘fedora’: Cannot prepare internal mirrorlist: Curl error (1): Unsupported protocol for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-38&arch=x86_64&protocol=http [Received HTTP/0.9 when not allowed]

It is still not clear to me how these cacher VMs are being built (in general, and why they are missing some of these packages in particular) or how this package should be uninstalled and reinstalled.

I don't see how this could happen - the clone.sls should ensure that a
debian template is installed, and that the template used by cacher is
cloned from debian-X-minimal.
But apparently it has happened. (Feel the need for a baffled emoji here)

How do you remove the package? Usually I would just remove the package
with dnf remove 3isec-qubes-cacher, but as your install is completely
borked, do it manually.

First roll back any changes made to the templates - there’s a salt state at
/srv/salt/cacher/restore_templates.sls that should do this:
sudo qubesctl --skip-dom0 --show-output --targets=templates state.apply cacher.restore_templates

Then edit /etc/qubes/policy.d/30-user.policy and remove any reference
to qubes.UpdatesProxy.

Finally:

sudo dnf remove 3isec-qubes-cacher
sudo rpm --noscripts -e 3isec-qubes-cacher

should clean everything up on the rpm side.

Check that /srv/salt/cacher is deleted.

I attempted to remove the package from dom0 via

sudo dnf remove 3isec-qubes-cacher-1.10-1

But it errored out toward the end:

error: %preun(3isec-qubes-cacher-1.10-1.fc32.x86_64) scriptlet failed, exit status 20
Error in PREUN scriptlet in rpm package 3isec-qubes-cacher
Verifying : 3isec-qubes-cacher-1.10-1.fc32.x86_64 1/1
Failed:
3isec-qubes-cacher-1.10-1.fc32.x86_64
Error: Transaction failed

That’s why I ended up also running the following from dom0:

sudo rpm --noscripts -e 3isec-qubes-cacher

I reverted the changes to dom0’s /etc/qubes/policy.d/30-user.policy and deleted /srv/salt/cacher via

sudo rm -r /srv/salt/cacher

This command did not return any output (I had not run this command before):

sudo qubesctl --skip-dom0 --show-output --targets=templates state.apply cacher.restore_templates

Re-installing cacher via dom0 (sudo dnf install ./3isec-qubes-cacher-1.10-1.fc32.x86_64.rpm) results in this error (I don’t recall any errors the first time I installed this):

warning: %post(3isec-qubes-cacher-1.10-1.fc32.x86_64) scriptlet failed, exit status 20
Error in POSTIN scriptlet in rpm package 3isec-qubes-cacher
Verifying : 3isec-qubes-cacher-1.10-1.fc32.x86_64 1/1
Installed:
3isec-qubes-cacher-1.10-1.fc32.x86_64

Interestingly, this time I was able to update debian-12, but fedora-38 continues to error out with the 403 error as before:

Errors during downloading metadata for repository ‘updates’:
Status code: 403 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f38&arch=x86_64&protocol=http (IP: 127.0.0.1)
Error: Failed to download metadata for repo ‘updates’: Cannot prepare internal mirrorlist: Status code: 403 for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f38&arch=x86_64&protocol=http (IP: 127.0.0.1)

Also, I noticed this time the cacher AppVM shows the apt-cacher-ng service and has more files in its /etc/apt-cacher-ng/ directory. Not sure if that qubesctl command did something different.

UPDATE: It seems like there’s some sort of conflict between apt-cacher-ng 3.6.4-1 and tinyproxy 1.10.0-5 on Debian 11 where the apt-cacher-ng service can’t bind to port 8082.

  • After upgrading to Qubes 4.2.1 and installing the 3isec-qubes-cacher.x86_64 v1.16-1.fc37 package, which created a debian-11 TemplateVM running apt-cacher-ng 3.6.4-1 and tinyproxy 1.10.0-5, I found that my Debian TemplateVMs throw the following error:

    E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease 500 Unable to connect [IP: 127.0.0.1 8082]

  • I inspected cacher’s /rw/config/rc.local and noticed two ineffective commands: an iptables command (useless because /sbin/iptables didn’t exist) and an nft command (similarly useless because systemctl status nftables indicated the nftables service wasn’t running)
    • I temporarily enabled the nftables service on cacher by enabling it on template-cacher, but Debian TemplateVMs continued to throw the same error
  • I inspected the apt-cacher-ng service on cacher via systemctl status apt-cacher-ng and saw

      Couldn’t bind socket: Address already in use
      Port 8082 is busy, see the manual (Troubleshooting chapter) for details.

    • The Troubleshooting chapter suggested ferreting out the conflicting process via fuser -4 -v -n tcp 8082, which revealed three instances of tinyproxy running
      • I uninstalled tinyproxy from cacher and restarted the apt-cacher-ng service
      • Now both the cacher AppVM and my debian TemplateVMs throw DNS errors
  • A related curiosity: What in the fedora update process in particular uses JSON files?
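The bind conflict can be reproduced in miniature without apt-cacher-ng or tinyproxy: two sockets cannot bind the same address and port. This sketch uses a stand-in port (18082) so it does not clash with a real service.

```shell
# Minimal reproduction of "Couldn't bind socket: Address already in use".
result=$(python3 - <<'EOF'
import socket
a = socket.socket()
a.bind(("127.0.0.1", 18082))      # first listener (plays the role of tinyproxy)
a.listen(1)
b = socket.socket()
try:
    b.bind(("127.0.0.1", 18082))  # second listener (plays apt-cacher-ng)
except OSError as e:
    print("bind failed:", e.strerror)
EOF
)
echo "$result"   # bind failed: Address already in use
```

This is why fuser (or ss -tlnp) is the right tool here: it names the process already holding the port.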

There’s something bizarre here.

  1. cacher v1.16-1 clones a debian-12-minimal template, not a debian-11 template
  2. The package does not install any iptables command in /rw/config/rc.local
  3. The nft command in /rw/config/rc.local runs as expected to open port 8082 to incoming traffic for network-connected qubes.
  4. There is no need to enable the nftables service - in fact you should not do this.
  5. tinyproxy is masked to avoid any conflict with apt-cacher-ng

None of the issues you report match 3isec-qubes-cacher.x86_64 v1.16-1.

You don't say what sort of upgrade you performed - did you upgrade in
place and try to use an existing cacher qube? And then try to install
v1.16-1 over an existing installation? That's the best I can come up
with.

I can't account for your networking problem.

I never presume to speak for the Qubes team. When I comment in the Forum I speak for myself.

Thanks for getting back to me @unman.

I had performed an in-place upgrade of Qubes 4.1.2 to 4.2.
I uninstalled cacher during Stage 4 of the in-place upgrade by running this from Dom0

sudo dnf remove 3isec-qubes-cacher

After the in-place upgrade completed (and after I updated /etc/yum.repos.d/3isec-dom0.repo’s baseurl with fc37) I re-installed cacher by running this from Dom0

sudo qubes-dom0-update 3isec-qubes-cacher

I just now uninstalled and re-installed cacher again, using these commands in Dom0

sudo dnf remove 3isec-qubes-cacher
sudo rpm --noscripts -e 3isec-qubes-cacher
sudo qubes-dom0-update 3isec-qubes-cacher

I observed the following:

  • My Debian-12-minimal TemplateVM’s apt update continues to error out:

    E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease Connection failed [IP: 127.0.0.1 8082]

  • template-cacher TemplateVM’s qvm-features indicates it is now based on debian-12-minimal

  • cacher’s nftables service is disabled (as per systemctl status nftables)

  • cacher’s tinyproxy service could not be found (as per systemctl status tinyproxy; sounds like this is expected because of the masking)

  • cacher’s apt-cacher-ng service could not be found as per systemctl status apt-cacher-ng OR apt list --installed|grep apt-cacher

    • I also found that I could not run apt update from cacher (https://…) or template-cacher (http://HTTPS///) - both throw the same error as above after ignoring all sources, or:

      E: Failed to fetch https://deb.qubes-os.org/r4.2/vm/dists/bookworm/InRelease Reading from proxy failed - select (115: Operation now in progress) [IP: 127.0.0.1 8082]

      • I cloned template-cacher to see if toggling the source proxy prefix (https:// vs. http://HTTPS///) would make a difference

        sed -i 's^http://HTTPS///^https://^' /etc/apt/sources.list
        sed -i 's^http://HTTPS///^https://^' /etc/apt/sources.list.d/*.list

        • It did not; it ignored all of the URLs and errored out as before.
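That substitution can be sanity-checked in isolation (the sample line below is illustrative, not taken from the actual template):

```shell
# The same sed expression, applied to a sample sources.list line.
line='deb http://HTTPS///deb.debian.org/debian bookworm main'
rewritten=$(echo "$line" | sed 's^http://HTTPS///^https://^')
echo "$rewritten"   # deb https://deb.debian.org/debian bookworm main
```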

Interestingly, before I uninstalled and re-installed cacher tonight, I could update my TemplateVMs if I prefixed my sources with https://. The cacher AppVM would boot, but apt-cacher-ng wouldn’t log any activity. Now I can’t update my TemplateVMs at all. I confirmed Dom0’s /etc/qubes/policy.d/30-user.policy and /etc/qubes/policy.d/50-config-updates.policy files had the expected configurations.

This is your problem. My guess is that you had a broken proxy, so
that when you were trying to install into template-cacher, the install
failed.
The fact that using “https” worked shows that you were not using
apt-cacher-ng.

Here’s what you could do.

  1. Remove any reference to qubes.UpdatesProxy from /etc/qubes/policy.d/30-user.policy
  2. Change the entry in /etc/qubes/policy.d/50-config-updates.policy from cacher to sys-net
  3. sudo dnf remove 3isec-qubes-cacher
  4. sudo qubes-dom0-update 3isec-qubes-cacher

It’s important that your caching proxy has network access. If the
package install failed then it will not, because it is based on a minimal
template which does not have qubes-core-agent-networking installed.
Similarly, if you have set the default netvm to none, cacher will not
have network access, and you will need to assign a netvm.

Things to check:

  1. The install in step 4 completes without error.
  2. cacher has network access.
  3. The apt-cacher-ng service is running in cacher
  4. /etc/qubes/policy.d/50-config-updates.policy contains a reference to cacher
  5. The repository definitions in a sample template do not contain https:// entries

When you update in a template you will see an entry in cacher
under /var/log/apt-cacher-ng/apt-cacher-ng.log - once the system
is installed you rarely need to attend to this.

I never presume to speak for the Qubes team. When I comment in the Forum I speak for myself.

Thank you for your guidance @unman

I followed the steps and apt-cacher-ng got installed in template-cacher; however, I now get this error when I try to update my debian-12 TemplateVMs:

E: Failed to fetch http://HTTPS///deb.debian.org/debian/dists/bookworm/InRelease 503 Resource temporarily unavailable [IP: 127.0.0.1 8082]

systemctl status apt-cacher-ng on template-cacher seems to indicate it is ok now:

Jul 08 10:25:23 cacher systemd[1]: Starting apt-cacher-ng.service - Apt-Cacher NG software download proxy…
Jul 08 10:25:23 cacher systemd[1]: Started apt-cacher-ng.service - Apt-Cacher NG software download proxy…

To your point, the Dom0 sudo qubes-dom0-update 3isec-qubes-cacher command throws this error every time (I included the lines before and after for context):

template-cacher: OK
whonix-gateway-17: ERROR (exit code 20, details in /var/log/qubes/mgmt-whonix-gateway-17.log)
whonix-workstation-17: ERROR (exit code 20, details in /var/log/qubes/mgmt-whonix-workstation-17.log)
warning: %post(3isec-qubes-cacher-1.16-1.fc37.x86_64) scriptlet failed, exit status 20
Error in POSTIN scriptlet in rpm package 3isec-qubes-cacher
Verifying : 3isec-qubes-cacher-1.16-1.fc37.x86_64 1/1
Installed:
3isec-qubes-cacher-1.16-1.fc37.x86_64

How can I view the POSTIN scriptlet? I’d like to try to run it line-by-line to see what’s breaking.
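For reference, rpm can print a package's scriptlets directly, which avoids digging through the spec file (run in dom0; the .rpm filename is the one from earlier in this thread):

```shell
# Print the pre/post install and uninstall scriptlets of the installed package:
rpm -q --scripts 3isec-qubes-cacher
# Or inspect the .rpm file itself without installing it:
rpm -qp --scripts ./3isec-qubes-cacher-1.16-1.fc37.x86_64.rpm
```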

Thank you.

I’m not a Whonix user.
What does it say in the log files that are cited?

You can see the source here - the
scripts are in cacher.spec, but these ERRORS come in the section that
attempts to update the repository definitions, using
/srv/salt/cacher/change_templates.sls - there’s a test in that file
that prevents changes in Whonix templates by testing for a nodename of
'host' - has that changed?

You could run this with:
sudo qubesctl --skip-dom0 --show-output --targets=whonix-workstation-17 state.apply cacher.change_templates

I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.

Neither of the cited logs (/var/log/qubes/mgmt-whonix-gateway-17.log or /var/log/qubes/mgmt-whonix-workstation-17.log) contained any errors, Result: False, or exit code 20.

However, /var/log/qubes/mgmt-cacher.log DOES have a Result: False

ID: /rw/config/qubes-bind-dirs.d/50_user.conf
Function: file.managed
Result: False
Comment: Unable to manage file: none of the specified sources were found
Started: 21:54:12.347846
Duration: 5.814 ms
Changes:
Summary for cacher
Succeeded: 2 (changed=2)
Failed: 1
Total states run: 3
Total run time: 44.849 ms
exit code: 20

While researching these errors, I found forum threads like “PSA: apt-cacher-ng is a buggy pile of shit”, which seemed to capture the general sentiment, and a nearly 12-year-old thread that makes me wonder whether there are bigger problems here.

My Dom0’s /srv/salt/cacher/change_templates.sls still has the test line

{% if grains['nodename']|lower != 'host' %}

Running sudo qubesctl --skip-dom0 --show-output --targets=whonix-workstation-17 state.apply cacher.change_templates from Dom0 doesn’t return any problems:

whonix-workstation-17:
Summary for whonix-workstation-17
Succeeded: 0
Failed: 0
Total states run: 0
Total run time: 0.000 ms

What should I try next?

XKCD "Automation"