I ask because I updated dom0 using the Qubes Updater (due to issues with the CLI command), so I have no idea what was updated, or even whether anything was updated at all, even with the ‘details’ box open. All I see when everything finishes is a tick next to dom0 in the updater and an uninformative report in the ‘details’ box.
After I restarted my PC, my WireGuard connections no longer work. I’ve tried a whole lot of different fixes, but none work. I even deleted and re-created that Qube. I can confirm it’s not an issue on the provider’s end, and I made no changes before the routine update, so I have to suspect that dom0 was updated with something before the restart, and whatever it was broke my WireGuard connections.
Is there a way to make the Qubes Updater more verbose? It really should at least inform users what was updated and whether there was an update. Even a “No update is available” would be nice. This is one of the reasons I used qubes-dom0-update and not the Qubes Updater, until this bug came to light and a huge warning sign had to be slapped on the documentation warning people away from the CLI version.
Am I the only one feeling dom0 updates aren’t in a good place right now? As an added bonus, I get a whole bunch of errors when I try to update my templates via the updater.
You’ll get more information from your own logs - have you checked them?
I think the current issue arose because the security fix was poorly handled and rolled out. I don’t think the full implications were considered beforehand.
Apart from that, I don’t see a particular issue with any updates, dom0 or template. “A whole bunch of errors” isn’t particularly illuminating about your situation.
I have no idea where the dom0 update log is or how to access it–I should’ve mentioned that I’m not technical.
The error message for each template is the same:
Error on updating [template]: Command '['sudo', 'qubesctl', \
'--skip-dom0', '--targets=[template]', '--show-output', \
'state.sls', 'update.qubes-vm']' returned non-zero exit status 1
[template]: ERROR (exception list index out of range)
Update: I played around with the firewall settings and managed to make things work (cannot go into specifics here, but it had to do with DNS resolution). Whatever was in the update isn’t compatible with my previous setup and prevents the firewall from obtaining IP addresses needed to filter, I think.
The issue with template updates using the Qubes Updater is still here though. In case it’s relevant, under Global Settings I have ensured that ‘Check for dom0 updates’ and ‘Check for qube updates by default’ are both checked, and have clicked on the ‘Enable checking for updates for all qubes’.
Edit: relevant GitHub threads for anyone interested:
You might be replying to the wrong thread–not sure how it’s a Whonix issue as I’m talking about the firewall settings of my VPN Qube (which doesn’t pass through Whonix-GW).
Or wait, were you referring to the Qubes Updater issue? That one displays the same error messages (same as before) while going through both VPN and Tor.
Look at the dates of the packages at: Index of /r4.0/current/dom0/fc25/rpm/. The dates can be misleading, though, since packages are built weeks in advance: they are first uploaded to current-test and, after a bake-in period, pushed to current, but a package in current retains the date it was pushed to current-test. Even so, this still indicates the latest version of a package and when it was first made available, which one can verify against locally.
Those will show what has been published. This assumes the date on the server hosting the packages is correct…
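For anyone who would rather script that first check than eyeball the page, here is a rough, offline sketch that parses a saved copy of such an index page for package names and dates. The sample HTML line and the patterns are assumptions modeled on typical directory-index pages, not the exact output of the Qubes server.

```shell
# Offline sketch: given a saved copy of the repo index page (fetched
# manually from the URL above), list the .rpm names and dates it shows.
# The sample line below is an assumption based on typical autoindex
# HTML, written to a temp file so this snippet is self-contained.
index=$(mktemp)
cat > "$index" <<'EOF'
<a href="rpm-4.14.2.1-5.fc25.x86_64.rpm">rpm-4.14.2.1-5.fc25.x86_64.rpm</a>  05-May-2021 13:37  1.2M
EOF

# Package file names (deduplicated, since each appears twice per line):
names=$(grep -o '[0-9A-Za-z._+-]*\.rpm' "$index" | sort -u)
# Dates in the DD-Mon-YYYY style the index uses:
dates=$(grep -o '[0-9][0-9]-[A-Za-z][a-z]*-20[0-9][0-9]' "$index")

echo "$names"   # rpm-4.14.2.1-5.fc25.x86_64.rpm
echo "$dates"   # 05-May-2021
rm -f "$index"
```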
…so to take the paranoia level up a notch, and get a more granular view, look into the updates-status GitHub repo issues. What we care about are issues with the r4.0-dom0-stable label: Issues · QubesOS/updates-status · GitHub
There you can see when a package is built, pushed to current-test and then subsequently pushed to current.
As an example:
repomd.xml has a date of 18-May-2021 on the yum site as of today (May 18 2021).
rpm-4.14.2.1-5.fc25.x86_64.rpm has a date of 05-May-2021 on the yum site.
The same issue shows “Package for dom0 was uploaded to stable repository” on May 18, after marmarek sent a PGP-signed comment indicating the package could be pushed to r4.0 current, assuming the SHA256 hash matches.
A qubes-dom0-update today has installed a new version of rpm in dom0.
rpm -qi rpm shows the Signature info which has date signed and key id: RSA/SHA256, Tue 04 May 2021 06:45:18 PM EDT, Key ID 1848792f9e2795e9
The last 8 digits in the key id show it’s a trusted key (according to rpm): rpm -qi gpg-pubkey-9e2795e9 | grep Packager shows Qubes OS Release 4 Signing Key
The last command also spits out the PGP key itself where one can verify the key against other public sources.
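To make the key-id step a bit more concrete, here is a small sketch. The key ID is the one quoted above in this thread; the rpm commands themselves are shown only as comments, since they make sense only inside dom0.

```shell
# Full key ID as reported by `rpm -qi rpm` in dom0 (quoted above):
keyid="1848792f9e2795e9"

# rpm names imported keys after the last 8 hex digits of the key ID,
# so that suffix gives the gpg-pubkey package name to query:
short=$(printf '%s' "$keyid" | tail -c 8)
echo "gpg-pubkey-${short}"   # gpg-pubkey-9e2795e9

# In dom0 one would then run (not executed here):
#   rpm -qi "gpg-pubkey-${short}" | grep Packager
# which, per the thread, should report the Qubes OS Release 4
# Signing Key.
```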
Bottom line - nobody wants to go through all of that. Testing what is already in the upgrade process and providing feedback is still helpful.
Very informative and helpful, thank you. Like you mentioned, this is a lot to do, even for technical users.
I wish there were some way for lay users to easily check when an update has occurred and what it did. A webpage would do, but something built into the Qubes Updater would be even better (not a feature request at the moment, since I think the devs are running around putting out fires due to updating issues).
But nobody should feel even the slightest need to do that. They should only do it if they want to (e.g., for “fun”). Right?
Hmm. Well, the Qubes Updater already tells you when an update is available and prompts you to install it (if you have that notification enabled), so you must mean something else. I infer that by “when an update has occurred” you mean updates that have already been installed, and I infer that by “what has occurred” you mean what exactly the update did when it was installed – in other words, a history of past updates. This actually already exists and is not Qubes-specific! In any Fedora VM (including dom0), type sudo dnf history to get the history and sudo dnf history info <number> (where <number> is an entry from the history log output by the previous command) to get details about that specific transaction. For Debian-based VMs, check out /var/log/apt/history.log. Further details can be found by searching for how to do such things in those distros independently of Qubes.
However, these package manager logs will not include anything that happened outside of them. In particular, if any actions are performed via Salt that are Qubes-specific, they will generally not be reflected in the package manager logs. Such actions are quite rare relative to routine package installations, but it would still be nice to have a log of them. I don’t know whether such a log already exists. Perhaps @unman knows. I’m also not sure how comprehensible they would be to users who don’t speak Salt. Of course, one of the main uses of such actions is patching in relation to QSBs, and in those cases you can simply read the QSB to find out what’s going on.
A general update: it turns out the Qubes Updater does display what has been updated–it’s just that I had never actually seen it happen, since dom0 updates are fairly irregular and none ever occurred just before I tested the Updater. It would be much less confusing if a simple “No updates were found” were printed when no update is found, alongside or in place of the empty template.
@adw: What I wrote earlier was phrased poorly. I meant a list of updates to dom0 packages and the dates they were pushed to current and other repos. That way, if I want to know whether the Qubes Updater should have detected something (i.e., whether updates are being blocked), I could quickly check the website instead of going through the many steps @icequbes1 listed. In my scenario, I thought the Updater doesn’t notify users of the packages installed, for the reasons mentioned earlier, which is why I felt the need to look for a dom0 update calendar. This also raises the question of why my firewall settings had to be fixed for WireGuard to work if it wasn’t a dom0 update, but that’s for another thread.
Regarding logs–this is helpful; thanks. I have a habit of treating dom0 as though it were some special OS, and I keep forgetting that it’s just a Fedora Linux system, so most things that apply to Fedora still apply to it.
I agree. I think this is highly desirable for a security-oriented OS. I have no idea what Qubes-specific Salt actions there are, but if they modify the throne room (dom0), logging them should be important (though the argument that an intruder who makes it to dom0 would manipulate the log also applies).
Please submit an issue for this (if not already covered by an existing issue).
I understand why you want this, but I believe it would be the wrong solution to the problem. Instead, the entire update system should be set up in such a way that you don’t feel the need to manually double-check its work. The fact that a user feels the need to make sure that the system is doing its job is a sign that there is something wrong with the system (or possibly with the user’s expectation). The solution is to fix that problem (or realign the user’s expectations), not turn the user into a micromanager of mundane machine monotony.
Fair point. I wouldn’t be anxious about dom0 updates if it weren’t for the metadata issues (both for Fedora and Qubes repos), along with other RPM issues.
To anyone reading this: I don’t have a GitHub account, which is why I can’t submit things. I’m reluctant to open one and would appreciate it if someone submitted this issue I typed up. I don’t mind if some change is made, as long as the reporting of the core issue remains intact.
Following the template:
Qubes OS version R4.0
Affected component(s) or functionality
Updating via Qubes Updater
Brief summary
When there are no dom0 updates available, the Qubes Update tool prints:
Updating dom0
local:
----------
This doesn’t provide enough information and can be confusing for new users, especially those switching from the CLI qubes-dom0-update (due to the current situation with metadata): they don’t know what to expect, and may not encounter an update for a long time given the relatively sparse and irregular updates that R4.0’s dom0 receives. The lack of reporting can lead to confusion, as in this post. Simply adding “No updates were found” to the printed message would be enough.
How Reproducible
100%
To Reproduce
Run Qubes Update tool for dom0 after all packages have been updated.
Expected Behavior
Inform the user that no updates were found.
I see why you’d think this might have been included, but the point the other user is making is much broader and isn’t as timely as the point I’m making, so I think some overlap should be tolerated if it means a focused spotlight is shone on the issue. Droves of qubes-dom0-update users will be migrating to the Updater due to the current situation and they will likely have fully updated dom0s, leading to the situation I described.
I think Salt’s false-positive reporting (leading to a ‘tick’ in the Updater GUI) creates further complications.
I think what’s happening here is that #6299 is actually too big of an issue. It violates our rule that every issue must be about a single, actionable thing. Seeing this, your intuition is that your issue should be a separate one, not because it’s not subsumed by #6299, but because you can see that #6299 has so much other stuff packed in that the distinct features of your specific issue (e.g., “timeliness”) are not accurately reflected by the aggregate. You probably also have the sense (accurately, I think), that #6299, being so big and so general, will probably become one of those bike-shed “discussion” issues where nothing ever actually gets done. For all these reasons, I think it’s appropriate to close #6299. I will open a separate issue for your specific thing.
I’m flattered that you attributed so much to me, but I didn’t go as far as to think that #6299 would be an unproductive discussion, since I was just focused on getting this issue through. I was more concerned about the “no overlap” mindset when combined with allowing broad, umbrella issues, since it would mean the general exclusion of focused and actionable issues.
I agree with your assessment that #6299 is far too broad to be productive. Maybe a new guideline reflecting this insight should be added to the GitHub submission guide, if it isn’t already there.