Tool: Simple Set-up of New Qubes and Software

This has been discussed before.
The issue is that the update notification mechanism depends on qubes
being able to check whether updates are available. This will not work for:

  • qubes that are offline, or restricted to specific IP addresses;
  • qubes that do not have qrexec;
  • qubes that inherit repository definitions that have been altered to
    work with cacher.

This mechanism makes sense if you have many qubes sharing a template:
then it doesn’t matter if an offline qube can’t check for updates,
because one of the other qubes using that template should be able to
check. Once you begin to use qube-specific templates, the mechanism
starts to break down.

There are a number of possible approaches:

  1. Ignore notifications and update all templates.
  2. Change the repo definitions back to normal in qubes.
  3. Set cacher upstream of most qubes, and set the proxy definition to
     access cacher by IP.
  4. Set up proxy over qrexec in qubes.
  5. Probably others…

I favour 1.
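
For illustration, a minimal dom0 sketch of approach 1: loop over every
template and update it, ignoring notifications entirely. The qvm-ls
field names are those used in 4.1; the salt-based updater is one option,
and running the package manager in each template would work too.

# dom0 sketch: update every template, ignoring update notifications.
# qvm-ls --raw-data separates fields with '|'; CLASS is TemplateVM for templates.
for tpl in $(qvm-ls --raw-data --fields NAME,CLASS | grep 'TemplateVM' | cut -d'|' -f1); do
    sudo qubesctl --skip-dom0 --targets="$tpl" state.sls update.qubes-vm
done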

hmmmm. ok, i have some thoughts-- how about:

  1. add /etc/apt/ to bind-dirs in vm’s before cacher modifies repo definitions (?)

wouldn’t work for new qubes based on already cacher-ized templates. and i’m not 100% on how bind-dirs works once the template changes. but could maybe kind of work

i think what i might do though is write a script that updates my most commonly used vm’s templates, and run it daily. then update the rest weekly or as-needed.

thanks for the explanation!

As you say, that won’t work with template changes between distros.
Also, it will lose template updates that you do want - e.g. a
dist-upgrade in a template won’t be reflected in the qube, although all
the other packages would be updated.
That sounds like a nightmare.

I think your approach is good.
One thing you can do is identify dependencies between templates:
if debian-11-minimal -> A -> B, then try to update B. If there are
updates there, then it’s likely that there will be updates in A and in
debian-11-minimal too.
And the other way round: if debian-11-minimal has updates, then you MUST
update A and B.
You can use these simple tests to minimise the number of unnecessary
update attempts. (Since you are using cacher, the pain of updating will
be reduced.)
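
A minimal sketch of that heuristic as a dom0 script, assuming the
hypothetical chain debian-11-minimal -> tpl-A -> tpl-B; the
updates-available feature is one real signal dom0 keeps per qube, but
the helper below is otherwise illustrative.

# dom0 sketch: check the leaf of the chain first; a hit there implies
# the parents almost certainly need updating too.
has_updates() {
    [ "$(qvm-features "$1" updates-available 2>/dev/null)" = "1" ]
}

if has_updates tpl-B; then
    # the leaf needs updates, so update the whole chain
    for tpl in debian-11-minimal tpl-A tpl-B; do
        sudo qubesctl --skip-dom0 --targets="$tpl" state.sls update.qubes-vm
    done
fi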

Could you please elaborate on your workaround for the second error? I’m getting it too, but removing unreferenced files didn’t do the trick (assuming I did it correctly).

@unman

It’s interesting that I do not have this error at each update attempt. It makes me wonder if, for my own use case, it would make sense to have cacher be a disposable qube: that would resolve the problem of the pool’s ever-increasing usage and get rid of that particular error that randomly hits me. I barely ever install template updates in different boot sessions of Qubes OS, and I mainly use cacher to economize bandwidth when downloading the same packages across specialized templates.

Let me try to update Fedora now:
[user@fedora-36-current ~]$ sudo dnf update

Fedora 36 openh264 (From Cisco) - x86_64        0.0  B/s |   0  B     00:01    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                     52 kB/s | 7.6 kB     00:00    
Qubes OS Repository for VM (updates)            959  B/s | 833  B     00:00    
Ignoring repositories: fedora-cisco-openh264
Dependencies resolved.
Nothing to do.
Complete!

Mental note to come back here when I get the error again in the future. From the network speeds above, I can tell that the package lists were downloaded from upstream servers, not from cacher’s cache, so the error didn’t happen. The zchunk errors happen when there is a discrepancy between the cache content downloaded from cacher and what is kept locally in Fedora’s template.

From memory:

  • Doing a sudo dnf clean all in the template sometimes helps.
  • Purging cacher’s unreferenced files (you have to tick the option in the web UI) normally resolves the issue if combined with the previous step.

But doing so removes what was cached, unless you tick each zchunk file individually. Even after having read the apt-cacher-ng documentation, I’m still not sure why there would be a discrepancy between Fedora’s template cache (clean all wipes it), cacher’s cache, and the upstream repositories. My hypothesis is that a different mirror, containing different files, is being hit, and apt-cacher-ng just serves its cache to the template, which doesn’t match. Why? No clue!
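
If mirror rotation really is the cause, one hedged workaround would be
to pin the template to a single mirror, so that apt-cacher-ng always
sees a consistent set of files. A sketch of the relevant stanza in
/etc/yum.repos.d/fedora-updates.repo (the baseurl is one example mirror,
not a recommendation, and this is untested speculation rather than a
confirmed fix):

[updates]
name=Fedora $releasever - $basearch - Updates
# comment out the metalink so dnf stops rotating between mirrors...
#metalink=https://mirrors.fedoraproject.org/metalink?repo=updates-released-f$releasever&arch=$basearch
# ...and pin one mirror instead
baseurl=https://dl.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/
enabled=1
gpgcheck=1

Under cacher the baseurl would additionally need rewriting into the
http://HTTPS/// form that cacher’s repo definitions use for TLS
upstreams.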

Cleaning cache instantly created the issue:

[user@fedora-36-current ~]$ sudo dnf clean all
24 files removed
[user@fedora-36-current ~]$ sudo dnf update
Fedora 36 - x86_64                                                                                                                      2.2 MB/s |  81 MB     00:36    
Fedora 36 openh264 (From Cisco) - x86_64                                                                                                0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                                                                                                            163  B/s | 8.1 kB     00:50    
Errors during downloading metadata for repository 'updates':
  - Curl error (28): Timeout was reached for http://ohioix.mm.fcix.net/fedora/linux/updates/36/Everything/x86_64/repodata/repomd.xml [Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds]
  - Downloading successful, but checksum doesn't match. Calculated: 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512) 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512) 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
  - Downloading successful, but checksum doesn't match. Calculated: ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512) ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512) ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
  - Downloading successful, but checksum doesn't match. Calculated: 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512) 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512) 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
Error: Failed to download metadata for repo 'updates': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried


Doing a scan and/or expiration in cacher’s web UI shows some .zck files
being automatically tagged for removal. Selecting “Delete selected
files” and then “Delete now” purges them.

Going back to Fedora-36 Template:

[user@fedora-36-current ~]$ sudo dnf update
Fedora 36 - x86_64                                                                                                                       15 MB/s |  81 MB     00:05    
Fedora 36 openh264 (From Cisco) - x86_64                                                                                                0.0  B/s |   0  B     00:01    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                                                                                                            2.2 MB/s |  30 MB     00:13    
Qubes OS Repository for VM (updates)                                                                                                     80 kB/s | 159 kB     00:02    
Ignoring repositories: fedora-cisco-openh264
Last metadata expiration check: 0:00:01 ago on Mon Nov  7 13:02:49 2022.
Dependencies resolved.
========================================================================================================================================================================
 Package                                            Architecture                   Version                                        Repository                       Size
========================================================================================================================================================================
Upgrading:
 exfatprogs                                         x86_64                         1.2.0-1.fc36                                   updates                          86 k
 firefox                                            x86_64                         106.0.4-1.fc36                                 updates                         109 M
 ghostscript                                        x86_64                         9.56.1-5.fc36                                  updates                          37 k
 ghostscript-tools-fonts                            x86_64                         9.56.1-5.fc36                                  updates                          12 k
 ghostscript-tools-printing                         x86_64                         9.56.1-5.fc36                                  updates                          12 k
 gtk4                                               x86_64                         4.6.8-1.fc36                                   updates                         4.8 M
 java-17-openjdk-headless                           x86_64                         1:17.0.5.0.8-2.fc36                            updates                          40 M
 keepassxc                                          x86_64                         2.7.4-1.fc36                                   updates                         7.5 M
 libfido2                                           x86_64                         1.10.0-4.fc36                                  updates                          94 k
 libgs                                              x86_64                         9.56.1-5.fc36                                  updates                         3.5 M
 mpg123-libs                                        x86_64                         1.31.1-1.fc36                                  updates                         341 k
 mtools                                             x86_64                         4.0.42-1.fc36                                  updates                         211 k
 osinfo-db                                          noarch                         20221018-1.fc36                                updates                         268 k
 python-srpm-macros                                 noarch                         3.10-20.fc36                                   updates                          23 k
 thunderbird                                        x86_64                         102.4.1-1.fc36                                 updates                         101 M
 thunderbird-librnp-rnp                             x86_64                         102.4.1-1.fc36                                 updates                         1.2 M
 tzdata                                             noarch                         2022f-1.fc36                                   updates                         427 k
 tzdata-java                                        noarch                         2022f-1.fc36                                   updates                         149 k
 vim-common                                         x86_64                         2:9.0.828-1.fc36                               updates                         7.2 M
 vim-data                                           noarch                         2:9.0.828-1.fc36                               updates                          24 k
 vim-enhanced                                       x86_64                         2:9.0.828-1.fc36                               updates                         2.0 M
 vim-filesystem                                     noarch                         2:9.0.828-1.fc36                               updates                          19 k

Transaction Summary
========================================================================================================================================================================
Upgrade  22 Packages

Total download size: 277 M
Is this ok [y/N]:

@unman: from the test and report above, we can see that cacher actually gets in the way of receiving updates without manual action.

Manually running sudo dnf update in Fedora-36, without first cleaning the local template’s cache and then cacher’s cache, reported no updates available, which was a false negative.

Cleaning Fedora-36’s cache caused the zck mismatch error, which then required cleaning cacher’s cache.

What can be done so that Fedora templates correctly see available updates under cacher’s default apt-cacher-ng configuration, without these manual interventions?

Off-topic: I open Firefox only in disposables, even when accessing cacher’s web UI.

Also, not sure if I’m late to the party, but the fedora-cisco-openh264 error is apparently related to repository GPG verification. It also happens, at least for me, if you add other repositories (VSCodium, for example). Setting repo_gpgcheck=0 in the repo file seems to fix it, but I guess this is not the right way to address the issue.

EDIT: I only tried repo_gpgcheck=0 for the VSCodium repo, since it was in a low-security template.
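
For concreteness, a sketch of what that tweak looks like in a .repo
file. The VSCodium stanza below is illustrative (check your actual file
under /etc/yum.repos.d/), and, as noted above, disabling repo_gpgcheck
weakens verification, so this is a workaround rather than a fix:

[vscodium]
name=VSCodium
baseurl=https://download.vscodium.com/rpms/
enabled=1
gpgcheck=1        # still verify signatures on individual packages...
repo_gpgcheck=0   # ...but skip signature checks on the repo metadata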

apt-cacher-ng isn’t a natural fit for Fedora, as you can tell from the
manual.
I’ll try to look at this in the morning.

Incidentally, it would be far better if this thread had been kept
on topic with specific issues raised in their own threads. Easier for
other users to follow and find answers.
I did suggest this. That ship has sailed.

Also, I find it incredibly hard to follow screeds of quoted text, and
in most cases it’s unnecessary. Include the full text as an attachment if
you must, and include the salient point in your message. Much easier all
round.


As far as knowing when to update:

I am thinking of setting up an “update canary”: a clone of my basic template (deb11m-sys-base) that I just leave up and running, with no network access. When it lights up as needing an update, I update everything. Of course, if some piece of software that gets added later (like, for instance, a browser) needs an update, this won’t catch it. So it’s probably a good idea to run updates once a week regardless of whether the canary… dies, or whatever the right metaphor is.
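
For what it’s worth, the signal dom0 keeps when a qube reports pending
updates is a per-qube feature, so the canary could be polled from a
dom0 script. A minimal sketch, using the template name from the post
above:

# dom0 sketch: poll the canary's template for the update flag
if [ "$(qvm-features deb11m-sys-base updates-available 2>/dev/null)" = "1" ]; then
    echo "canary says: update everything"
fi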

For me the real problem is my vault. It is truly offline (specifically, the updates-proxy-setup service is disabled), and it is the sole AppVM based on its template, which is otherwise never started.

So, these two will never get update notifications, and I have to update them manually when I remember.


OK, you’re willing to let dom0 update that template, but you’re not willing to have dom0 check to see if it needs updating? I’m not sure what security hole you might be filling that way.

And as far as I know, AppVMs don’t need updating anyway; it’s just the template. Or are you installing software into an AppVM (presumably in the /home/user or /rw areas, or it’s a bit pointless)?

No, no, it’s about how dom0 gets notified about which templates need updating.


The mitigation to the problem here is to have the vault’s template shared with sys-firewall or other qubes in active use. I think there is a generalization happening here where we go a bit too crazy on compartmentalization, mixing up where it is useful and where it is not.

In my use case, and I understand this might differ in yours, my vault’s template has nothing really special, and it can be (and is) shared with my sys-firewall.

sys-firewall has the updates-proxy-setup service enabled (qvm-service sys-firewall updates-proxy-setup true), and by default it reports template updates to dom0.

As discussed previously, though, cacher’s current design decision is not to have updates-proxy-setup activated by default, which prevents qubes from downloading package lists and therefore from reporting updates to dom0. This follows from qubes needing to talk through apt-cacher-ng in order to report to dom0 that updates are available: the repo definitions are modified to talk through, and only through, apt-cacher-ng, so if the updates-proxy-setup service is not activated for a qube, that qube cannot reach the repositories at all.
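
To make the moving parts concrete, a sketch of the two pieces involved.
The Debian source line is the rewritten form cacher uses for TLS
upstreams, if I read its repo definitions correctly, and the qube name
is a placeholder:

# in the template, cacher rewrites sources so all traffic goes via apt-cacher-ng:
#   deb http://HTTPS///deb.debian.org/debian bullseye main
# in dom0, the per-qube switch that lets a qube reach that proxy at all:
qvm-service -e <qube> updates-proxy-setup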

@unman said that the expected use case for cacher is for users to manually update templates at boot, before using their qubes. It is also supposed to be clear enough to users that this is a behaviour change they need to implement in their routines.

I personally think that this is a regression compared to the standard deployment, where it is possible to ephemerally install anything needed (qubes overlay a volatile volume over /, so users can install things for a single boot, including in disposable qubes). But again, that would require updates-proxy-setup to be activated by default on all qubes, both so that template updates are reported to dom0 and so that the package lists can be downloaded.

I agree, absolutely. But my way is: when I don’t have enough knowledge to feel confident about a matter, I do things manually, step by step and in a more restrictive way, trying to compensate for the lack of knowledge until I reach the level needed to feel confident.

@unman, first of all thanks for the amazing salt stack and qubes-task-manager. They are truly a blessing.

I am using sys-agent-ssh. How do I get it to automatically perform ssh-add on boot?

I’ve tried putting ssh-add in everything, but to no avail:

  • ~/.xinitrc
  • ~/.profile
  • ~/.bashrc
  • /rw/config/rc.local

It only works if I open a terminal in the sys-ssh-agent qube; then I get the notification: Identity added: ...
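
Not an authoritative answer, but one thing worth checking:
/rw/config/rc.local runs as root, while the agent (and the notification
you see from a terminal) belongs to user. A minimal sketch of running
ssh-add as user from rc.local, assuming a passphrase-less key and a
hypothetical agent socket path; adjust both to your setup:

#!/bin/sh
# /rw/config/rc.local sketch: rc.local runs as root at qube start,
# so switch to user and point ssh-add at the user's agent socket.
# Socket and key paths below are placeholders.
runuser -l user -c \
    'SSH_AUTH_SOCK=/run/user/1000/ssh-agent.sock ssh-add /home/user/.ssh/id_ed25519'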

@unman – a suggestion for the cacher vm installation:
add an option to remove qubes-update-check in each vm as part of the process, or as part of a separate available process.
since it is now a useless service, i have gone through each vm (except those that don’t use cacher, like whonix) and deactivated the service manually with:
qvm-service -d VMNAME qubes-update-check
but a clever way to automate this for all cacher-updated vm’s would be welcome.
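
for illustration, a dom0 sketch of one way to automate that, assuming
whonix qubes are the only exclusions:

# dom0 sketch: disable qubes-update-check everywhere except dom0 and
# whonix-based qubes (adjust the exclusions to your setup)
for vm in $(qvm-ls --raw-list); do
    case "$vm" in
        dom0|*whonix*) continue ;;
    esac
    qvm-service -d "$vm" qubes-update-check
done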

i also have to say, as an aside, that cacher has improved my workflow, by necessity. since i don’t get update notifications i now just run a bash script fixed to a hotkey, typically at startup once a day, that updates my net-vm’s, shuts them down, then updates my daily-driver app-vms, and starts all my common-use applications up. and with kde putting them each in their own workspace by default, it makes my startup each day that much smoother, too.
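
a minimal sketch of what such a dom0 script might look like; all qube
and template names below are hypothetical placeholders, and the update
command assumes Debian-based templates:

#!/bin/bash
# dom0 sketch of the daily routine described above (names are placeholders)
for tpl in tpl-net tpl-daily; do
    # qvm-run auto-starts the template, updates it, then we shut it down;
    # cacher must be reachable from the template for this to succeed
    qvm-run -u root --pass-io "$tpl" 'apt-get update && apt-get -y dist-upgrade'
    qvm-shutdown --wait "$tpl"
done
qvm-start sys-net sys-firewall
# relaunch common applications in their qubes
qvm-run personal firefox &
qvm-run work thunderbird &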

thank you for all your contributions here!

Can you share the bash script? I am also looking for some automation.

Thanks @weyoun.six
I’m glad you find cacher useful. KDE is, of course, excellent in Qubes.

Rather than go through each qube individually, I could ship a batch
file to disable the service.
I do this myself, but how best to deal with updates in qubes with
cacher installed is still being hashed over here in the Forum.


I’ve added some new packages to provide:

  • a central sys-git to hold repositories - a policy file controls
    access by qube and repository;
  • a template with software useful for text-to-speech use;
  • split-gpg;
  • a split monero wallet;
  • syncthing - this includes a syncthing qube with net access, and a
    qrexec service to run syncthing from other qubes, including those
    with no netvm.

More details are available in the Qubes Task Manager.
Sources are available on GitHub.

It would be more useful if this topic were kept for installation
problems of the Task Manager, and suggestions for packages to be
included, or contributions.
Issues with individual packages are best dealt with in a topic under a
clear title in User Support.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.