Qubes was working as expected, but one day I logged in and, for some reason, none of the VMs started.
If I try something like qvm-start personal, here's what I get:
user@dom0:~/Desktop$ qvm-start personal
Traceback (most recent call last):
  File "/usr/lib/python3.13/site-packages/qubesadmin/app.py", line 361, in qubesd_call
    sock.recv(4)
FileNotFoundError: [Errno 2] No such file or directory

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/qvm-start", line 5, in <module>
    sys.exit(main())
  File "/usr/lib/python3.13/site-packages/qubesadmin/tools/qvm_start.py", line 175, in main
    args = parser.parse_args(args, app=app)
  File "/usr/lib/python3.13/site-packages/qubesadmin/__init__.py", line 431, in parse_args
    action.parse_qubes_app(args)
  File "/usr/lib/python3.13/site-packages/qubesadmin/__init__.py", line 192, in parse_qubes_app
    namespace.domains = (app.domains,)
  File "/usr/lib/python3.13/site-packages/qubesadmin/app.py", line 107, in __getattr__
    if not self.app.blind_mode and attr not in self.__dict__:
  File "/usr/lib/python3.13/site-packages/qubesadmin/app.py", line 143, in __contains__
    self.refresh_cache()
  File "/usr/lib/python3.13/site-packages/qubesadmin/app.py", line 73, in refresh_cache
    vm_list_data = self.app.qubesd_call("dom0", "vm-list")
  File "/usr/lib/python3.13/site-packages/qubesadmin/app.py", line 863, in qubesd_call
    raise qubesadmin.exc.QubesDaemonCommunicationError(
qubesadmin.exc.QubesDaemonCommunicationError: Failed to connect to qubesd service: [Errno 2] No such file or directory
user@dom0:~/Desktop$
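For what it's worth, that traceback just means the qvm-* tools could not reach the qubesd admin daemon: its socket is missing, so every management command fails the same way. A few dom0 commands to see why it died (a sketch; the socket path is the usual one on 4.x, verify on your install):

```shell
# qvm-start failed because the qubesd admin socket is gone; check the daemon.
systemctl status qubesd.service   # is the admin daemon running at all?
journalctl -b -u qubesd.service   # why did it fail on this boot?
ls -l /var/run/qubesd.sock        # the socket the qvm-* tools connect to
```

If qubesd failed because the storage pool would not activate (as the sys-net error below suggests), fixing the pool should bring qubesd back on the next boot.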
And if I try to start sys-net:
Error: Check of pool qubes_dom0/vm-pool failed (status:64). Manual repair required!
Aborting. Failed to locally activate thin pool qubes_dom0/vm-pool.
I had similar error codes and ended up doing a fresh reinstall. However, I think the problem with mine (still investigating) was that my sys-net had a PCI device which no longer existed.
R4.3 renamed my Ethernet PCI device. Whenever I tried to start sys-net, I got similar error codes. On my new install of 4.3, I made sure to disable the old PCI device (5c:00.0? ******* unknown vendor), which does not exist anymore.
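If a stale PCI assignment is the culprit, it can be removed from dom0 with qvm-pci (a sketch; the 5c:00.0 address is the example from this post, substitute the address of your own vanished device):

```shell
# List PCI devices and their VM assignments, then detach the stale one.
qvm-pci                              # show host PCI devices and assignments
qvm-pci detach sys-net dom0:5c_00.0  # drop the device that no longer exists
```

Note the BDF address uses an underscore (5c_00.0) in qvm-pci device identifiers.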
Try adding 'qubes.skip_autostart' to the GRUB kernel line on boot and see if the problem persists.
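Concretely, that flag goes on the kernel command line for a single boot (a sketch for a GRUB2/Xen setup, which is the common Qubes layout):

```shell
# One-off: at the GRUB menu press 'e', find the line that loads the dom0
# kernel (on Xen it is the "module2 /vmlinuz-..." line) and append:
#     qubes.skip_autostart
# then boot with Ctrl-x. No qube will autostart, so you can investigate
# the storage problem without sys-net failing on every login.
```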
Edit your backup config file (sudo nano /etc/lvm/backup/qubes_dom0): search for the transaction ID number, replace it with the correct ID (e.g. change 21472 to 21471), and save.
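The edit itself is a one-line substitution. A sketch, demonstrated on a scratch file so nothing real is touched; on the actual system the target is /etc/lvm/backup/qubes_dom0 (edited with sudo), and the IDs 21472/21471 are the example values from this post:

```shell
# Demonstrate the transaction_id fix on a scratch copy of the metadata.
tmp=$(mktemp)
printf 'vm-pool {\ntransaction_id = 21472\n}\n' > "$tmp"  # fake excerpt
sed -i 's/transaction_id = 21472/transaction_id = 21471/' "$tmp"
grep 'transaction_id' "$tmp"   # now shows the corrected ID
rm -f "$tmp"
```

The recorded transaction_id must match what the kernel thin-pool metadata expects, which is why an off-by-one edit here can let the pool activate again.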
user@dom0:~/Desktop$ lvchange -an qubes_dom0
WARNING: Running as a non-root user. Functionality may be unavailable.
/dev/mapper/control: open failed: Permission denied
Failure to communicate with kernel device-mapper driver.
Incompatible libdevmapper 1.02.199 (2024-07-12) and kernel driver (unknown version).
/run/lock/lvm/V_qubes_dom0:aux: open failed: Permission denied
Can't get lock for qubes_dom0.
Cannot process volume group qubes_dom0
user@dom0:~/Desktop$ sudo lvchange -an qubes_dom0
Logical volume qubes_dom0/root contains a filesystem in use.
Logical volume qubes_dom0/swap in use.
Device qubes_dom0-root-pool_tmeta (253:1) is used by another device.
Device qubes_dom0-root-pool_tdata (253:2) is used by another device.
user@dom0:~/Desktop$ vgcfgbackup qubes_dom0 -f /etc/lvm/backup/qubes_dom0
WARNING: Running as a non-root user. Functionality may be unavailable.
/dev/mapper/control: open failed: Permission denied
Failure to communicate with kernel device-mapper driver.
Incompatible libdevmapper 1.02.199 (2024-07-12) and kernel driver (unknown version).
/run/lock/lvm/V_qubes_dom0:aux: open failed: Permission denied
Can't get lock for qubes_dom0.
Cannot process volume group qubes_dom0
user@dom0:~/Desktop$ vgcfgrestore --force qubes_dom0 -f /etc/lvm/backup/qubes_dom0
WARNING: Running as a non-root user. Functionality may be unavailable.
/dev/mapper/control: open failed: Permission denied
Failure to communicate with kernel device-mapper driver.
Incompatible libdevmapper 1.02.199 (2024-07-12) and kernel driver (unknown version).
WARNING: Failed to check for active volumes in volume group "qubes_dom0".
/run/lock/lvm/P_global:aux: open failed: Permission denied
user@dom0:~/Desktop$ sudo vgcfgrestore --force qubes_dom0 -f /etc/lvm/backup/qubes_dom0
Volume group qubes_dom0 has active volume: root.
Volume group qubes_dom0 has active volume: root-pool.
Volume group qubes_dom0 has active volume: root-pool_tdata.
Volume group qubes_dom0 has active volume: root-pool_tmeta.
Volume group qubes_dom0 has active volume: swap.
WARNING: Found 5 active volume(s) in volume group "qubes_dom0".
Restoring VG with active LVs, may cause mismatch with its metadata.
Do you really want to proceed with restore of volume group "qubes_dom0", while 5 volume(s) are active? [y/n]: y
WARNING: Forced restore of Volume Group qubes_dom0 with thin volumes.
Restored volume group qubes_dom0.
user@dom0:~/Desktop$ sudo lvchange -ay qubes_dom0
Check of pool qubes_dom0/vm-pool failed (status:64). Manual repair required!
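When the check still fails after the metadata restore, the standard answer to "Manual repair required" on a thin pool is lvconvert --repair, run while the pool is inactive. A sketch, not a verbatim fix from this thread; back up the metadata first, and since root lives in the same VG it is safest to do this from a rescue environment:

```shell
# Sketch of a manual thin-pool repair (dom0 or a rescue shell).
sudo vgcfgbackup qubes_dom0 -f /root/qubes_dom0.vgcfg  # save current metadata
sudo lvchange -an qubes_dom0/vm-pool                   # pool must be inactive
sudo lvconvert --repair qubes_dom0/vm-pool             # rebuild tmeta from tdata
sudo lvchange -ay qubes_dom0                           # reactivate and re-check
```

lvconvert --repair writes a repaired copy of the pool metadata to a spare LV and swaps it in; the old metadata is kept (as vm-pool_meta0 or similar) so you can inspect it afterwards.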
This is happening more often in 4.3. In 4.2, it almost never happened. But now it’s happening after nearly every improper shutdown.
Someone experienced it here:
I then experienced it myself and posted a summary and the solution. After that, it happened to me two more times due to improper shutdowns.
This issue should be investigated to determine why it is occurring so frequently in 4.3. During the entire two years of 4.2, it never happened to me, and a similar issue occurred only once in 4.1.