I had a 4.3 upgrade Stage 3 failure where an old kernel-qubes-vm package couldn’t be removed. I fixed that issue and manually removed the package. I then manually ran the remaining commands (systemctl preset-all and systemctl enable) from the Stage 3 part of the upgrade script.
I rebooted into Qubes 4.3 to complete Stages 4-6. After rebooting, none of my qubes are able to start. If I manually start a qube, I see this error:
Error: Check of pool qubes_dom0/vm-pool failed (status:64). Manual repair required!
Aborting. Failed to locally activate thin pool qubes_dom0/vm-pool.
I successfully ran lvconvert --repair qubes_dom0/vm-pool. While running, it printed a warning (WARNING: Sum of all thin volume sizes exceeds the size of thin pools and the size of whole volume group), but it still completed.
I rebooted and my qubes started normally. I continued with Stages 4-6 from the upgrade with no issues. I rebooted again and no issues.
I’m concerned about the warning since I also saw it earlier in the upgrade during the dom0 snapshot. It’s unclear if I should do something to address the warning or ignore it.
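In case it helps anyone deciding whether to worry: the pool name below assumes the default Qubes 4.x layout (qubes_dom0/vm-pool), and these are read-only reporting commands, so they're safe to run while deciding whether the warning matters:

```shell
# Show each LV's virtual size plus how full the thin pool actually is.
# data_percent / metadata_percent on the vm-pool row are the numbers
# that matter; the warning only compares the virtual sizes.
sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0

# Overall free space in the volume group, for comparison.
sudo vgs qubes_dom0
```

If data_percent and metadata_percent on vm-pool are well below 100, the pool itself is healthy and the warning is only about oversubscription.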
I had previously run into this error during stage 3 of the upgrade process and followed the advice suggested by unman which involved resizing the space allocated for dom0.
I am not sure whether that was related to the "failed to locally activate thin pool" error you describe occurring later on, but I was able to fix it with your suggested method!
I got the warning as well and it also said:
WARNING: Sum of all thin volume sizes (<1.1 TiB) exceeds the size of thin pools and the size of whole volume group (<475 GiB).
WARNING: LV qubes_dom0/vm-pool_meta0 holds a backup of the unrepaired metadata. Use lvremove when no longer required.
I am not sure how the sum of all thin volume sizes could be that large, as my SSD is not even 1 TiB. Furthermore, I don't know how or on what to use lvremove, but just as in your case, everything ended up working even without doing this.
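On the lvremove part: the warning says qubes_dom0/vm-pool_meta0 is a backup of the pre-repair pool metadata, kept in case the repair made things worse. Once the qubes start reliably again, it can be removed to reclaim its space. A hedged sketch (the LV name is taken from the warning above; confirm it with lvs before deleting anything):

```shell
# Confirm the metadata backup LV still exists and note its size.
sudo lvs qubes_dom0

# Remove the pre-repair metadata backup (only after everything works again).
sudo lvremove qubes_dom0/vm-pool_meta0
```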
I would appreciate some further insight however, if anyone understands or knows more about this!
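From what I've since read, the size mismatch is expected: LVM thin pools are overprovisioned by design, so the sum of the volumes' *virtual* sizes can legitimately exceed the physical disk; space is only consumed as blocks are actually written. With the approximate numbers from the warning above, the oversubscription works out like this:

```shell
# Approximate figures from the warning quoted above:
# ~1.1 TiB of thin volume virtual sizes in a ~475 GiB volume group.
thin_sum_gib=1126   # sum of all thin volume virtual sizes, in GiB
vg_size_gib=475     # physical size of the volume group, in GiB

# Overprovisioning ratio: how many times the pool is oversubscribed.
ratio=$(awk "BEGIN { printf \"%.1f\", $thin_sum_gib / $vg_size_gib }")
echo "overprovisioning ratio: ${ratio}x"
```

So the warning is informational: it only becomes a real problem if the volumes' actual usage approaches the pool's physical size.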
Hello everybody,
I am facing the same issue right now (not during an upgrade; it just popped up today for some reason).
I ran the journalctl cleaning commands and extended the filesystem as described in the advice above.
As before, when running
sudo lvconvert --repair qubes_dom0/vm-pool
I still get an error:
Transaction id 15231 from pool "qubes_dom0/vm-pool" does not match repaired transaction id 15230 from /dev/qubes_dom0/lvol0_pmspare.
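In case it helps others who hit this mismatch: one workaround I've seen discussed is to hand-edit the volume group metadata so the pool's transaction_id matches what the repair expects. This is dangerous, the ids below are the ones from my error and will differ on other systems, and you should have a full backup first; treat it as a sketch, not a recipe:

```shell
# DANGEROUS: hand-edits volume group metadata. Back everything up first.
# Dump the current VG metadata to a text file.
sudo vgcfgbackup -f /tmp/qubes_dom0.vg qubes_dom0

# Change the pool's transaction_id to the value the repair expects
# (15231 -> 15230 in my case; your numbers will differ).
sudo sed -i 's/transaction_id = 15231/transaction_id = 15230/' /tmp/qubes_dom0.vg

# Write the edited metadata back (vgcfgrestore may insist on --force
# for a VG containing thin pools), then retry the repair.
sudo vgcfgrestore --force -f /tmp/qubes_dom0.vg qubes_dom0
sudo lvconvert --repair qubes_dom0/vm-pool
```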
This solved it! Thank you very very much
The repair command's output said that my sum of all thin volume sizes is 6.53 terabytes, which is crazy! No idea how that happened, heh.