Are there release notes for dom0 updates?

Are there typically release notes available for each dom0 update? I was hoping that there would be a context menu in the Qubes updater, but I don’t see one.

Are issues like this under GitHub updates-status effectively the release notes?

I typically apply VM updates almost immediately but sometimes defer dom0 updates for a week, to see if any issues are reported. My main nervousness is around kernel version changes and how they may impact Btrfs, as I don’t have full backups in place yet (yeah, bad idea, I’m working on it…)

Thx :slight_smile:

There was one, but it was not kept up to date, so I deleted it from Qube Manager:

And the text file will be changed to refer to online documents:

FWIW I’ve been using the dom0 kernel-latest package for as long as I can remember, and it has never caused me any trouble with Btrfs. The regular kernel package (i.e., the current long-term support kernel) should be even more conservative.
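
If you want to double-check which kernel packages you actually have, a quick sketch (run in a dom0 terminal; this assumes a default install, so the package names may differ on your system):

# list installed dom0 kernel packages and their versions
$ rpm -qa 'kernel*' | sort

# show which kernel is currently running
$ uname -r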

IMHO, a (good) solution is to follow the update status for the r4.2-host-stable label:

read the git commit titles since the last package release, and check the related linked issues.

Note that there are a lot of labels to match your needs; r4.2-host-stable is for the 4.2 dom0 stable repository.

Note also that updates are pushed first to the current-testing repository and, if no problems have been detected, pushed to the stable repository around a week later.
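
If you prefer the command line over the web UI, something like this works as a rough sketch (assuming the GitHub gh CLI is installed and authenticated in a networked qube; the repo and label names are the ones mentioned above):

# list recent updates-status issues carrying the 4.2 dom0 stable label
$ gh issue list --repo QubesOS/updates-status --label r4.2-host-stable --state all --limit 20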

Same here, except for the recent issue with loop devices and the direct I/O flag. Though I’m not 100% sure I would have noticed this in any release notes anyway: by definition these changes aren’t obviously breaking, so it’s probably a long shot to expect that reading them would have prevented something from breaking. They may help in identifying the cause after something breaks, of course.

There may not be any actual corruption occurring in the case of this issue. From what I understand, it is (effectively) a race with the checksum operation, so the checksum comes out “wrong” while the data comes out “right”. But I’m seeing these messages in dmesg, so it’s impacting me in some way, even if it’s just freaking me out :slight_smile:

In my case, these messages coincide with some disk performance issues I’m having in Windows that I hadn’t noticed before.

It took me a while to realize the direct I/O thing was an issue at all, which is an indicator that I ought to follow qubes-issues more closely. The Btrfs messages were buried amongst other messages in dmesg, and it took me even longer to consider that they might be related to the disk performance problem (maybe they aren’t?).
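
In case anyone wants to check their own logs, this is roughly how I’d surface them (a sketch; the exact wording of the messages varies by kernel version, so the filter below is just a guess at something useful):

# surface Btrfs checksum complaints from the kernel log
$ sudo dmesg | grep -iE 'btrfs.*(csum|checksum)'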

Just saying that, in theory, having a nice condensed version of the changes would be nice. I’m not saying I expect anyone to enhance how changes are currently communicated; I’m fine with the answer being “check xyz page before applying the update(s)”. I will at least check qubes-issues in the future, for the rare situation where some breakage happens, so that I’ll have some familiarity with what else is causing problems for users.

In other words, I’m not really complaining, nor am I suggesting there is any deficiency. I’m just looking for the best way to ensure I’m informed.

Perfect, thanks!

Was it really the kernel version, though, that made a difference for you with that? If so, do you remember the last good version where the issue didn’t yet occur with Windows VMs + Btrfs + direct I/O enabled (Qubes R4.2+)?

I don’t think so; I think it was mainly to do with using the direct I/O flag with losetup. But, ya know, past performance is not necessarily an indicator of future results :wink:

Seriously though, that’s why I’m not indignant or making outrageous demands or criticisms. I just kinda thought it would be nice for the future.

EDIT: Regarding the question about the last version that was OK: that may not actually be a productive question in this case, because I noticed this behavior after recently enabling the PV drivers in a Windows VM. The issue may have existed latently across many kernel and userland versions and only been triggered by the use of the PV drivers.
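
For reference, checking whether direct I/O is actually enabled on your loop devices looks something like this (a sketch using util-linux’s losetup; /dev/loop0 below is just a placeholder for whichever device you care about):

# show each loop device, its backing file, and whether direct I/O is on
$ sudo losetup --list --output NAME,BACK-FILE,DIO

# toggle direct I/O on an existing loop device (example device name)
$ sudo losetup --direct-io=off /dev/loop0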

BTW, I’m less concerned about this now, as I finally fixed the Btrfs pool by setting a proper subvolume for the main pool. That allowed the Qubes backup to function yesterday without any hassle. I did have a tarball set aside previously, but there’s something to be said for native point-and-click backup/restore.

(The Btrfs fixup may have been with help from one of your posts, as I’m new to Btrfs. Or I might be confusing that with another post, but I thought I saw your name somewhere in it.)
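
For anyone following along, the kind of layout check I mean is roughly this (a sketch; whether /var/lib/qubes is its own subvolume depends on how you partitioned, so treat the path as an example):

# list the subvolumes on the root filesystem to see how the pool is laid out
$ sudo btrfs subvolume list /

# succeeds with details if the pool path is a subvolume, errors if it is not
$ sudo btrfs subvolume show /var/lib/qubes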

Actually, while I might have your attention: are you familiar with the /var/lib/qubes/portables subvolume? I noticed it exists but don’t recall creating it myself. I forget things sometimes, though, so who knows…

It’s for some systemd thing:

$ rpm -qf /var/lib/portables
systemd-251.19-1.fc37.x86_64

Probably related to https://systemd.io/PORTABLE_SERVICES/
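
If you want to confirm that nothing is actually using it, systemd ships a matching tool (a sketch; I’d expect the list to be empty on a stock install, since the directory is just a mount point for this mechanism):

# list portable service images known to systemd
$ portablectl list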

What’s the fixup? I’m surprised to hear that the Qubes OS backup system would be affected by the subvolume layout.

^— possibly needed due to my own mistake rather than Qubes, because I had to custom-partition for md RAID 0 + LUKS2 + Btrfs

Oh okay, if you’re using Wyng for backups (like in the quoted post), then the subvolume layout might make a difference. I’m not familiar with it.

Hrm, yeah, that’s weird. I’m not actually using Wyng (yet, at least), but creating the subvolume fixed a vague fatal error I was getting from the vanilla Qubes backup application. Unfortunately I don’t recall what it was and can’t reproduce it now without mucking around with the subvolume, but it essentially picked up the pool as being 0 bytes and exited after about a minute.

Maybe it was pilot error, though it’s a pretty simple click-and-you’re-done operation.

Anyway, Qubes backup works fine now. I’m considering trying out Wyng at some point, but not any time soon.

The official release notes are here:

Edit: Oh, wait. You’re asking about every single individual dom0 update? In that case, no.

This is still helpful, thanks :slight_smile: