Tool: Simple Set-up of New Qubes and Software

Cleaning the cache immediately triggered the issue:

[user@fedora-36-current ~]$ sudo dnf clean all
24 files removed
[user@fedora-36-current ~]$ sudo dnf update
Fedora 36 - x86_64                                                                                                                      2.2 MB/s |  81 MB     00:36    
Fedora 36 openh264 (From Cisco) - x86_64                                                                                                0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                                                                                                            163  B/s | 8.1 kB     00:50    
Errors during downloading metadata for repository 'updates':
  - Curl error (28): Timeout was reached for http://ohioix.mm.fcix.net/fedora/linux/updates/36/Everything/x86_64/repodata/repomd.xml [Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds]
  - Downloading successful, but checksum doesn't match. Calculated: 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512) 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512) 480fdb1f22d8831cfac19f1efe3005eccbfbbf79a4d4d3cc3bd238345d6418abc6022157ba74d07f7b0cd219bbe47cc2f081f65221b76fcfedcd99a52948572d(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
  - Downloading successful, but checksum doesn't match. Calculated: ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512) ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512) ffa3d254134aa8811d80816fc33bbb9ea89e531794773457c0cd0b7a28f67bde1d86208c94408051f6c157e80737c337e06bc320a1401cb0df74542f472b4e3b(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
  - Downloading successful, but checksum doesn't match. Calculated: 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512) 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512) 63dcd7af35db555497c38e9e25693dcc3873b38994906de1271fdf17354b4de0dedd68b5cddef499bd3da5d8fb6f607cc4808d5cf1849241228c1568d36e8778(sha512)  Expected: 51e6aae77c3e848e650080c794985684750e4db9d3b11e586682d14d7cc94fa9f8b37a7a19a53b8251fdced968f8b806e3a7b1a9ed851c0c8ff4e1f6fb17c68f(sha512) 986e21b3c06994ab512b3f576610ebeff0492c46f15f4124c7ed2ed6f3745c2ad2f9894d06e00849f7465cb336b1f01764c31e3002c9288b1a9525253d3bf3af(sha512) 608ef25404a5194cf1a2f95fd74a4dafd5f3ad69755ddab21ffefe00bcaf432df803fa477c2bf323247f96abe7b71a778314b2aee8aec537e01756a9a274d26f(sha512) 
Error: Failed to download metadata for repo 'updates': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried


Running a "Scan and/or Expiration" in the cacher web UI shows some zck files automatically tagged for removal. I selected "Delete selected files":

And then "Delete now".
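For reference, roughly the same cleanup can be done from a terminal in the cacher qube. This is only a sketch: it assumes the upstream default apt-cacher-ng cache directory and service name, which may differ in cacher's packaging, and the web UI remains the supported way to do this.

# In the cacher qube: stop apt-cacher-ng, drop the stale Fedora metadata, restart.
# The cache path below is an assumption; adjust it to your setup.
sudo systemctl stop apt-cacher-ng
sudo find /var/cache/apt-cacher-ng -name 'repomd.xml*' -delete
sudo find /var/cache/apt-cacher-ng -name '*.zck*' -delete
sudo systemctl start apt-cacher-ng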

Going back to the Fedora-36 template:

[user@fedora-36-current ~]$ sudo dnf update
Fedora 36 - x86_64                                                                                                                       15 MB/s |  81 MB     00:05    
Fedora 36 openh264 (From Cisco) - x86_64                                                                                                0.0  B/s |   0  B     00:01    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (56): Failure when receiving data from the peer for https://codecs.fedoraproject.org/openh264/36/x86_64/os/repodata/repomd.xml [Received HTTP code 403 from proxy after CONNECT]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Fedora 36 - x86_64 - Updates                                                                                                            2.2 MB/s |  30 MB     00:13    
Qubes OS Repository for VM (updates)                                                                                                     80 kB/s | 159 kB     00:02    
Ignoring repositories: fedora-cisco-openh264
Last metadata expiration check: 0:00:01 ago on Mon Nov  7 13:02:49 2022.
Dependencies resolved.
========================================================================================================================================================================
 Package                                            Architecture                   Version                                        Repository                       Size
========================================================================================================================================================================
Upgrading:
 exfatprogs                                         x86_64                         1.2.0-1.fc36                                   updates                          86 k
 firefox                                            x86_64                         106.0.4-1.fc36                                 updates                         109 M
 ghostscript                                        x86_64                         9.56.1-5.fc36                                  updates                          37 k
 ghostscript-tools-fonts                            x86_64                         9.56.1-5.fc36                                  updates                          12 k
 ghostscript-tools-printing                         x86_64                         9.56.1-5.fc36                                  updates                          12 k
 gtk4                                               x86_64                         4.6.8-1.fc36                                   updates                         4.8 M
 java-17-openjdk-headless                           x86_64                         1:17.0.5.0.8-2.fc36                            updates                          40 M
 keepassxc                                          x86_64                         2.7.4-1.fc36                                   updates                         7.5 M
 libfido2                                           x86_64                         1.10.0-4.fc36                                  updates                          94 k
 libgs                                              x86_64                         9.56.1-5.fc36                                  updates                         3.5 M
 mpg123-libs                                        x86_64                         1.31.1-1.fc36                                  updates                         341 k
 mtools                                             x86_64                         4.0.42-1.fc36                                  updates                         211 k
 osinfo-db                                          noarch                         20221018-1.fc36                                updates                         268 k
 python-srpm-macros                                 noarch                         3.10-20.fc36                                   updates                          23 k
 thunderbird                                        x86_64                         102.4.1-1.fc36                                 updates                         101 M
 thunderbird-librnp-rnp                             x86_64                         102.4.1-1.fc36                                 updates                         1.2 M
 tzdata                                             noarch                         2022f-1.fc36                                   updates                         427 k
 tzdata-java                                        noarch                         2022f-1.fc36                                   updates                         149 k
 vim-common                                         x86_64                         2:9.0.828-1.fc36                               updates                         7.2 M
 vim-data                                           noarch                         2:9.0.828-1.fc36                               updates                          24 k
 vim-enhanced                                       x86_64                         2:9.0.828-1.fc36                               updates                         2.0 M
 vim-filesystem                                     noarch                         2:9.0.828-1.fc36                               updates                          19 k

Transaction Summary
========================================================================================================================================================================
Upgrade  22 Packages

Total download size: 277 M
Is this ok [y/N]:

@unman: from the test and report above, we can see that cacher is actually getting in the way of receiving updates without manual intervention.

Having Fedora-36 manually run sudo dnf update, without first cleaning the local template's cache and then cacher's cache, reported no updates available, which was a false negative.

Cleaning the Fedora-36 cache then caused the zck checksum mismatch error, which in turn required cleaning cacher's cache.

What can be done so that Fedora templates correctly get available updates under cacher, without manually tweaking the apt-cacher-ng configuration?

Off topic:

I open Firefox only in disposables, even when accessing the cacher web UI.

Also, not sure if I'm late to the party, but the fedora-cisco-openh264 error is apparently related to repository GPG verification. It also happens, at least for me, if you add other repositories (VSCodium, for example). Setting repo_gpgcheck=0 in the repo file seems to fix it, but I guess this is not the right way to address the issue.

EDIT: I only tried repo_gpgcheck=0 for the VSCodium repo since it was in a low-security template
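To illustrate the workaround (again, only in a low-security template): the .repo filename below is an assumption and depends on how the repository was added.

# Hypothetical example: relax only the repository metadata signature check.
sudo sed -i 's/^repo_gpgcheck=1/repo_gpgcheck=0/' /etc/yum.repos.d/vscodium.repo
grep -n gpgcheck /etc/yum.repos.d/vscodium.repo   # should now show repo_gpgcheck=0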

apt-cacher-ng isn't a natural fit for Fedora, as you can tell from the
manual.
I’ll try to look at this in the morning.

Incidentally, it would be far better if this thread had been kept
on topic with specific issues raised in their own threads. Easier for
other users to follow and find answers.
I did suggest this. That ship has sailed.

Also, I find it incredibly hard to follow screeds of quoted text, and
in most cases it’s unnecessary. Include the full text as an attachment if
you must, and include the salient point in your message. Much easier all
round.


As far as knowing when to update:

I am thinking of setting up an "update canary": a clone of my basic template (deb11m-sys-base) that I just leave up and running, with no network access. When it lights up as needing an update, I update everything. Of course, if some piece of software that gets added later (like, for instance, a browser) needs an update, this won't catch it. So it's probably a good idea to run updates once a week regardless of whether the canary… dies, or whatever the right metaphor is.
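A rough sketch of that setup from dom0 (the clone name is made up, and this assumes the canary's update notifications still reach dom0 in your configuration):

# dom0: clone the base template as an update canary and leave it running, network-less
qvm-clone deb11m-sys-base deb11m-canary
qvm-prefs deb11m-canary netvm ''
qvm-start deb11m-canary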

For me the real problem is my vault. It is truly offline (I specifically disabled the updates-proxy-setup service), and it is the sole AppVM based on its template, which is itself never started.

So these two will never get update notifications, and I have to update them manually when I remember.
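For context, the per-qube switch mentioned above is just this, run from dom0 ("vault" being my qube's name):

# dom0: stop the vault qube from using the updates proxy
qvm-service -d vault updates-proxy-setup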


OK, you’re willing to let dom0 update that template, but you’re not willing to have dom0 check to see if it needs updating? I’m not sure what security hole you might be filling that way.

And as far as I know, AppVMs don’t need updating anyway; it’s just the template. Or are you installing software into an AppVM (presumably in the /home/user or /rw areas, or it’s a bit pointless)?

No, no, it’s about how dom0 gets notifications about which templates need updating.


The mitigation to the problem here is to share the vault’s template with sys-firewall and other commonly used qubes. I think there is a generalization happening here where we go a bit overboard on compartmentalization and mix up where it is useful and where it is not.

In my use case, and I understand this might differ in yours, my vault’s template has nothing really special in it, so it can be, and is, shared with my sys-firewall.

sys-firewall has the updates-proxy-setup service enabled (qvm-service -e sys-firewall updates-proxy-setup), and by default it reports template updates to dom0.

As discussed previously, though, cacher’s current design decision is not to have updates-proxy-setup activated by default, which prevents qubes from downloading package lists and therefore from reporting available updates to dom0. This is a consequence of qubes needing to talk through apt-cacher-ng to be able to report to dom0 that updates are available: the repository definitions are modified to talk through, and only through, apt-cacher-ng, so if the updates-proxy-setup service is not activated for a qube, that qube cannot reach the repositories at all.

@unman said that the expected use case for cacher is for users to manually update templates at boot, prior to using their qubes. It is also supposed to be clear enough to users that this is a behavior change they need to implement in their routines.

I personally think that this is a regression compared to the standard deployment, where it is possible to ephemerally install anything needed (the volatile volume overlays / in qubes, so users can install things for a single boot, including in disposable qubes). But again, that would require updates-proxy-setup to be activated by default on all qubes, so that template updates are reported to dom0 and those package lists can be downloaded.

I agree, absolutely. But my way is: when I don’t have enough knowledge to feel confident on a matter, I do things manually, step by step and in a more restrictive way, trying to compensate for the lack of knowledge until I reach the level needed to feel confident.

@unman, first of all thanks for the amazing salt stack and qubes-task-manager. They are truly a blessing.

I am using sys-agent-ssh. How do I get it to automatically perform ssh-add on boot?

I’ve tried putting ssh-add in all of the following, but to no avail:

  • ~/.xinitrc
  • ~/.profile
  • ~/.bashrc
  • /rw/config/rc.local

It only works if I open a terminal in the sys-ssh-agent qube; then I get the message: Identity added: ...

@unman – a suggestion for the cacher vm installation:
add an option to remove qubes-update-check in each vm as a part of the process, or as part of a separate available process.
since it is now a useless service, i have gone through each VM (except those that don’t use cacher, like whonix) and deactivated the service manually with:
qvm-service -d VMNAME qubes-update-check
but a clever way to automate this for all cacher-updated VMs would be welcome.
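one possible sketch from dom0 (untested; the whonix filter is just an example, and the list of exceptions will differ per setup):

# dom0: disable the now-redundant update check in every qube except dom0 and whonix ones
for vm in $(qvm-ls --raw-list | grep -v '^dom0$' | grep -vi whonix); do
    qvm-service -d "$vm" qubes-update-check
done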

i also have to say, as an aside, that the cacher has improved my workflow, by necessity. since i don’t get update notifications i now just run a bash script bound to a hotkey, typically at startup once a day, that updates my net-vms, shuts them down, then updates my daily driver app-vms, and starts all my common-use applications up. and with kde putting them each in their own workspace by default, it is making my startup each day that much smoother, too.
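roughly, the routine looks something like the sketch below. every template, qube and application name in it is a placeholder rather than my real setup, and the "update" steps assume updating the underlying templates:

#!/bin/bash
# dom0: daily refresh - update templates, restart net + app qubes, launch the usual apps
TEMPLATES="debian-11 fedora-36"       # placeholder template names
APP_QUBES="work personal"             # placeholder app qube names

# update and power off each template
for tpl in $TEMPLATES; do
    qvm-run -u root -p "$tpl" \
        'if command -v dnf >/dev/null; then dnf -y upgrade; else apt-get update && apt-get -y dist-upgrade; fi'
    qvm-shutdown --wait "$tpl"
done

# restart the network chain and the daily drivers so they pick up the fresh templates
qvm-shutdown --wait $APP_QUBES
qvm-shutdown --wait sys-firewall
qvm-shutdown --wait sys-net
qvm-start sys-net
qvm-start sys-firewall
for vm in $APP_QUBES; do qvm-start "$vm"; done

# launch the common applications
qvm-run -q work firefox &
qvm-run -q personal thunderbird &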

thank you for all your contributions here!

Can you share the bash script? I am also looking for some automation.

Thanks @weyoun.six
I’m glad you find cacher useful. KDE is, of course, excellent in Qubes.

Rather than go through each individually, I could ship a batch file to
disable the service.
I do this myself, but how best to deal with updates in qubes with the
cacher installed is still being hashed over here in the Forum.


I’ve added some new packages to provide:
a central sys-git to hold repositories - policy file controls access by
qube and repository;
a template with software useful for text to speech use;
split-gpg;
a split monero wallet;
syncthing - this includes a syncthing qube with net access, and a qrexec
service to run syncthing from other qubes, including those with no netvm.

More details are available in the Qubes Task Manager.
Sources are available on GitHub.

It would be more useful if this topic were kept for installation
problems of the Task Manager, and suggestions for packages to be
included, or contributions.
Issues with individual packages are best dealt with in a topic under a
clear title in User Support.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.

That’s fantastic!
I’ve been (unsuccessfully) trying to create a sys-git for quite a while. Have you gotten around to publishing the process in your notes?

Generally you create a simple systemd service like this, to start the
agent, and load the key:

[Unit]
Description=SSH agent

[Service]
Type=oneshot
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK
ExecStart=/usr/bin/ssh-add [key]

[Install]
WantedBy=default.target

You’ll need to adapt this to the socket value you use for the agent - you
can see this in (e.g.) work.agent.sh - and I haven’t tested it in any
way.
It won’t do where you have passwords on the keys.

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.


No. It’s a little niche for the notes, and it’s basically documented here.
All the work is done by qubes.Git and git-qrexec, which I’ve
generalised, and slightly adapted to allow for granular policy control.
My contribution is minimal.


I had to change this slightly (note the ExecStart => ExecStartPost for the ssh-add line):

[Unit]
Description=SSH agent

[Service]
Type=oneshot
Environment=SSH_AUTH_SOCK=%t/ssh-agent.socket
ExecStart=/usr/bin/ssh-agent -D -a $SSH_AUTH_SOCK
ExecStartPost=/usr/bin/ssh-add [key]

[Install]
WantedBy=default.target

Works like a charm. Thanks!
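In case it helps anyone replicating this: one way (among others) to install the unit is as a systemd user service in the agent qube, so it survives AppVM restarts. The file name and location below are just assumptions, not anything prescribed:

# in the ssh agent qube, assuming the unit above was saved as ssh-agent.service
mkdir -p ~/.config/systemd/user
cp ssh-agent.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now ssh-agent.service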


For anyone who was as confused as me, certain of these setups assume that you have already set up the cacher. Once I did that they worked fine, but before that they failed. So try that if you’re having problems with them!