The new template requires changes in the source list
The package list is not updated after 2.
This is puzzling -
A. Assuming cacher is the name of the caching proxy, then this will have
brought in debian-11-minimal.
A post-install step of that package rewrites the sources lists.
Possible failure modes:
- cacher is not the name of a caching proxy.
- cacher was not installed using the 3isec-qubes-cacher package.
- The post-install sources rewriting failed.
- The debian-11-minimal template was removed, or was restored to its original state before installation of the openvpn package.
B. The sources rewriting was successful, but the sources were not then updated.
This would explain why the packages were not located.
pkg.uptodate was run, and that state explicitly calls for a refresh. The
result was "True".
This could be a bug in salt - I've already raised similar bugs.
A possible workaround would be to explicitly call apt-get update prior to pkg.uptodate.
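A minimal sketch of that workaround as a Salt state - the state IDs here are illustrative, not taken from the actual 3isec package:

```yaml
# Run an explicit "apt-get update" first, rather than relying solely on
# pkg.uptodate's own refresh.
refresh-apt-lists:
  cmd.run:
    - name: apt-get update

upgrade-all-packages:
  pkg.uptodate:
    - refresh: True
    - require:
      - cmd: refresh-apt-lists
```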
The only significant issue in A is the naming of cacher - this is too
embedded at the moment for me to consider changing.
Any failure in the rewriting script would result in templates failing
to update, and there have been no reports of that.
For B, implementing that workaround should be sufficient (assuming there
is a caching proxy).
It’s worth noting that this is the only report I have so far of this package
failing to install.
Still puzzling -
I reset the minimal template prior to installing the openvpn package.
The sources in the cloned template were rewritten on package installation.
pkg.uptodate did refresh the packages list.
The necessary packages were installed.
It is now queued to be included in extrepo-offline-data in the coming days, meaning that automatic updates of debian-12 templates will pick up the element.io repository with the next update of the installed extrepo-offline-data package, making element easily installable through extrepo.
I think this is where our energies should be invested, so that one day extrepo and extrepo-offline-data can be installed from the Debian repositories, and users (Qubes salt states, packages preinstalled in templates) can easily add additional repositories without worrying about GPG key downloads and repository definition errors that would lead to template updates failing.
I would advise users here to repeat my process: use that issue as a template to help the extrepo-offline-data project find the upstream installation instructions directly, so they can include your desired repository in their project.
love this project.
I had an issue with the cacher installation, now resolved. On the first try, template-cacher did not install the necessary packages, so the cacher was left unconfigured and the install failed. I uninstalled it, changed my default net-vm to my update-vm, and ran the installation again; this time it was successful. It still reported an error, but it works great. I believe the error reported is that certain VMs did not receive the necessary modification to their sources to work over cacher - easily fixed by manually editing the sources.
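For reference, the kind of edit involved can be sketched with sed. This assumes the HTTPS/// passthrough convention that apt-cacher-ng supports; the exact form the installer writes may differ:

```shell
# Sketch only: rewrite a Debian sources line so apt fetches through the
# cacher, using apt-cacher-ng's HTTPS/// passthrough convention.
# The exact rewrite performed by the 3isec package may differ.
echo 'deb https://deb.debian.org/debian bullseye main' \
  | sed 's|https://|http://HTTPS///|'
# -> deb http://HTTPS///deb.debian.org/debian bullseye main
```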
@unman if you let me know where to find the logs for the installation, I can report back where it failed, if that would be helpful.
edit: it would also appear that my one Fedora qube is now unable to fetch metadata. I verified that the sources have been edited by the script to target cacher. Maybe the grouping wasn't applied properly in cacher? Here is the output of sudo dnf update:
Fedora 36 openh264 (From Cisco) - x86_64 0.0 B/s | 0 B 00:01
Errors during downloading metadata for repository 'fedora-cisco-openh264':
- Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates 61 kB/s | 479 kB 00:07
Errors during downloading metadata for repository 'updates':
The first error is only a warning, but that repo will obviously never be reached, nor its packages updated.
For the second error I have the annoying workaround of removing unreferenced files from the apt-cacher-ng web GUI. But is there a tweak to keep the files in sync? Reading the apt-cacher-ng documentation, I do not understand how fetching the package list fails to update the files it points to.
The MullvadVPN qube is also excellent, @unman - outperforming my qubes-tunnel VMs by quite a bit. One thing though: the menu item asks for my "wireshark" config files… I think you meant wireguard xD
edit2: in a second attempt at running "Setup Mullvad VPN", it seems that the firewall rules didn't automatically update from a private wireguard config. Not sure why. Is this only meant for Mullvad configs specifically?
edit3: AHA! It was a syntax error in the config file. But note: it added the new firewall rule and did not replace the previous one.
I'm not getting any network throughput on sys-pihole. I can ping websites from the VM, but not from behind it. Also, the pihole command is not recognized, so clearly something is wrong - but I do see a folder labeled pihole in /root. No errors were reported on install. Is there a missing step undocumented in the qubes-task info?
edit: the network I first tried to run the pihole installer on was blocking the pi-hole server, which is needed to run the basic-install.sh script. In this case it might be nice for the qubes-task to return an error, for future users.
@unman sorry, forgot you are on the email list and maybe you don't see edits - the network I first tried to run the pihole installer on was blocking the pi-hole server, which is needed to run the basic-install.sh script. In this case it might be nice for the qubes-task to return an error, for future users.
One more thing I'm noticing: it seems as though I am not getting qubes-update notifications anymore after installing and configuring cacher. When I manually check for updates, they are definitely available. And dom0 updates, which don't use the cacher, are still coming through.
This has been discussed before.
The issue is that the update notification mechanism depends on qubes
being able to check if updates are available.
qubes that are offline, or are restricted to specific IP addresses, will
not be able to do this.
qubes that do not have qrexec will not be able to do this.
qubes that inherit repository definitions that have been altered to work
with cacher will not be able to do this.
This mechanism makes sense if you have many qubes sharing a template.
Then it doesn't matter if an offline qube can't check for updates,
because one of the other qubes using that template should be able to.
Once you begin to use specific templates, the mechanism starts
to break down.
There are a number of possible approaches:
- Ignore notifications and update all templates.
- Change the repo definitions back to normal in qubes.
- Set cacher upstream of most qubes and set the proxy definition to access cacher by IP.
As you say, that won't work with template changes between distros.
Also, it will lose template updates that you do want - e.g. a
dist-upgrade of a template won't be reflected in the qube, although all
the other packages would be updated.
That sounds like a nightmare.
I think your approach is good.
One thing you can do is identify dependencies between templates:
if debian-11-minimal -> A -> B, then try to update B. If there are updates
there, then it's likely that there will be some in A and deb-11-min too.
And the other way: if deb-11-min has updates, then you MUST update A and B.
You can use these simple tests to minimise the number of unnecessary
update attempts. (Since you are using cacher, the pain of updating will be reduced.)
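The heuristic above can be sketched in shell. The probe here is stubbed with sample apt output so the sketch stays self-contained; in practice the probe would be something like running apt-get -s dist-upgrade inside template B and looking for "Inst" lines:

```shell
# Stubbed sketch of the dependency shortcut. check_updates counts "Inst"
# lines, which is what a real dry-run probe (apt-get -s dist-upgrade run
# inside the template) prints once per pending upgrade.
check_updates() {
  grep -c '^Inst' <<'EOF'
Inst libssl1.1 [1.1.1n-0+deb11u3] (1.1.1n-0+deb11u5 Debian-Security:11)
Conf libssl1.1 (1.1.1n-0+deb11u5 Debian-Security:11)
EOF
}

if [ "$(check_updates)" -gt 0 ]; then
  echo "B has updates - expect updates in A and debian-11-minimal too"
fi
```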
It's interesting that I do not get this error on every update attempt. It makes me wonder whether, for my own use case, it would make sense to run cacher as a disposable qube: that would resolve the ever-growing pool usage problem and get rid of this particular error that randomly hits me, given that I rarely install template updates across different boot sessions of Qubes OS and that I mainly use cacher to save bandwidth when downloading the same packages across specialized templates.
Let me try to update Fedora now:
[user@fedora-36-current ~]$ sudo dnf update
Fedora 36 openh264 (From Cisco) - x86_64 0.0 B/s | 0 B 00:01
Errors during downloading metadata for repository 'fedora-cisco-openh264':
- Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates 52 kB/s | 7.6 kB 00:00
Qubes OS Repository for VM (updates) 959 B/s | 833 B 00:00
Ignoring repositories: fedora-cisco-openh264
Nothing to do.
Mental note to come back here when I get the error again in the future. From the network speed above, I can tell that the package list was downloaded from the upstream servers, not from cacher's cache - so the error didn't happen this time. The zchunk errors happen when there is a discrepancy between the cached content downloaded from cacher and what is kept locally in the Fedora template.
Doing a sudo dnf clean all in the template sometimes helps.
Purging cacher's unreferenced files (you have to tick the option in the web UI) normally resolves the issue when combined with the previous step.
But doing so removes what was cached, unless you tick each zchunk file individually. Even after reading the apt-cacher-ng documentation, I am still not sure why there would be a discrepancy between the Fedora template's cache (clean all wipes it), cacher's cache, and the upstream repositories. My hypothesis is that a different mirror, containing different files, is being hit, and apt-cacher-ng just serves that cached content to the template, where it doesn't match. Why? No clue!