Qubes desktop not reacting

Hi, after restarting a previously well-functioning Qubes install, the desktop no longer reacts properly.

(management template is fedora-43-xfce)

  • The Q in the upper left corner doesn’t react.
  • In the panel there are no icons for network, disk, USB connections, or the dom0 clipboard. The audio, user, and battery icons are there.

Right-clicking the desktop gives the normal menu and I can see its contents as usual, but I cannot run anything except dom0 entries. Neither the Qube Manager nor the Global Config can be started.

qvm-run [appVM] and qvm-remove [appVM] don’t work.
With the help of AI:

sudo lvs -a

  • none of the Data% or Meta% values is close to 100%

sudo journalctl -u qubesd -n 40 --no-pager

  • qubes.exc.QubesValueError: Invalid length of interfaces='None' (is 4, expected multiple of 7)
  • Failed to start qubesd.service - Qubes OS daemon

sudo systemctl restart qubesd

  • Job for qubesd.service failed because the control process exited with error code.

Seems like the /var/lib/qubes/qubes.xml is damaged?

I’ve run out of ideas. Any idea how to solve this?
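Since the qubesd traceback complains about a value it read from qubes.xml, one quick first check is whether /var/lib/qubes/qubes.xml is even well-formed XML. A minimal sketch, assuming nothing beyond python3 (which is present in dom0); the check_qubes_xml helper name is my own:

```shell
# check_qubes_xml FILE -- report whether FILE parses as XML at all.
# Uses python3's stdlib parser, since xmllint may not be installed in dom0.
check_qubes_xml() {
    if python3 -c 'import sys, xml.etree.ElementTree as ET; ET.parse(sys.argv[1])' "$1" 2>/dev/null; then
        echo "well-formed"
    else
        echo "corrupted"
    fi
}

# In dom0 you would run:
#   check_qubes_xml /var/lib/qubes/qubes.xml
```

A parse failure would confirm file-level damage; note the reverse doesn’t hold, since a file can be well-formed XML and still carry a bad value like the interfaces one above.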

I have your exact same issue. It happened today after an unexplained forced shutdown (or “unsafe shutdown”) that I did not trigger, which itself came after a bunch of VMs were shutting down for no apparent reason. Except that qubesd.service doesn’t fail for the reason you listed; instead I only get that it failed due to a timeout. The systemd logs also warn me: “Found left over process 3904 (lvm) in control group while starting unit. Ignoring.” “This usually indicates unclean termination of a previous run, or service implementation deficiencies.”
However, from a dom0 terminal I can manually start qubesd.service. But starting a qube with qvm-run then fails with “Thin pool qubes_dom0-vm–pool-tpool (253:9) transaction_id is 18540, while expected 18542.”

Just like you, sudo lvs -a shows plenty of free space, and smartctl -a shows that my SSD is fine.

Also, this is going to be hard to sort out, as I can only interact here from a phone and all debug logs must be typed in manually, since this is my only working laptop.

This doesn’t look like a random issue. We must have done something that triggered this state.

I am actually in real trouble, because this is my main laptop and it is in a coma. And I have no idea what to do with it now.

What I did that could be a bit unusual:

  1. I created a new sys-usb based on fedora-43-xfce with network access, so I could use my Trezor with Chromium and the trezor-common package installed in the underlying template. I know this breaks the Qubes OS security model, but with 4.3 there is some issue with the 4.2-based guides.

  2. I started using a Monero pruned node, isolated from the offline monero-wallet, this way:

What touched dom0:

sudo nano /etc/qubes/policy.d/30-user-monero.policy

  • add this line to the file:
    qubes.ConnectTCP +18081 monero-wallet-ws @default allow target=monerod-ws
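For reference, here is what the whole policy file could look like; the qube names monero-wallet-ws and monerod-ws are the ones from my setup, so adjust them to yours:

```
# /etc/qubes/policy.d/30-user-monero.policy
# Let the offline wallet qube reach monerod's RPC port (18081)
# over qrexec, without giving the wallet qube any network access.
qubes.ConnectTCP +18081 monero-wallet-ws @default allow target=monerod-ws
```

Inside the wallet qube, the forwarded port is then bound locally with something like qvm-connect-tcp 18081:@default:18081 before pointing the wallet at 127.0.0.1:18081.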

No idea how to proceed further. I would greatly appreciate some help here, or at least some guide on how to back up the VMs or something.

In my case I changed absolutely nothing: same config as when I first upgraded to R4.3. I just logged into my Qubes like normal, and then this happened after the ‘unsafe shutdown’.

One thing that has changed for me, though: for some reason the ‘Q’ icon is now responsive again, and qubesd manages to start, but only after logging in. I still have the LVM pool mismatch problem, so I think I will need to start a new thread, as my problem might now be different from yours.

What did you do to get the Q working again?

What worries me is that there are no comments under this issue here. It is basically game over for me and I am stuck with a system in a coma.

It worked on its own, I didn’t change anything. It’s really weird.

You can still restore the system with a live USB, which is what I ended up doing, but it was very tricky for me due to the metadata corruption, specifically the LVM pool mismatch. Here is an outline of how I was able to do it:

Caution: Please double-check my instructions, as I might have forgotten something in the process. Also, my instructions only worked for my particular situation and problem, which involved: Thin pool qubes_dom0-vm--pool-tpool (252:11) transaction_id is 18540, while expected 18542.

  1. Boot into a Linux live USB (I used an Ubuntu 24.04 USB stick I had lying around).
  2. Connect to a network and make sure these packages are installed:
sudo apt update
sudo apt install lvm2 thin-provisioning-tools
  3. Run sudo lsblk and determine which partition holds your Qubes install. In my case it was /dev/nvme0n1p3.
  4. Unlock your Qubes partition by navigating to it in the file manager and entering the passphrase (or do it manually in the terminal with cryptsetup luksOpen).
  5. Now we need to make LVM recognize the now-decrypted drive (it’s normal if the last command gives errors about the LVM pool, since it’s broken). Make a note of the expected transaction_id number from that error, as we’ll need it later for the repair:
sudo vgscan
sudo vgchange -ay
  6. Identify the pool names (so we can back up the metadata pools): sudo lvs -a -o name,size,segtype,devices
  7. Force-activate the metadata volume, since it didn’t appear:
sudo lvchange -ay -K qubes_dom0/vm-pool_tmeta
ls /dev/mapper/ # check that it's appearing now
  8. After mounting an external hard drive, back up the LVM metadata pool:
sudo vgcfgbackup -f "/media/ubuntu/ExternalHDD/qubes_config.bkp" qubes_dom0
sudo dd if=/dev/mapper/qubes_dom0-vm--pool_tmeta of="/media/ubuntu/ExternalHDD/vm-pool_tmeta.img" bs=1M status=progress
  9. Just in case, I also backed up the root-pool metadata:
sudo lvchange -ay -K qubes_dom0/root-pool_tmeta
sudo dd if=/dev/mapper/qubes_dom0-root--pool_tmeta of="/media/ubuntu/ExternalHDD/root-pool_tmeta.img" bs=1M status=progress
  10. Now edit the backed-up metadata to fix the LVM thin pool mismatch: sudo nano "/media/ubuntu/ExternalHDD/qubes_config.bkp", search for vm-pool and scroll until you see transaction_id. Change the number there to the one the error in step 5 said it expected, then save the file.
  11. Deactivate the volume group for now (it should say “0 logical volume(s) in volume group qubes_dom0 now active”):
sudo vgchange -an qubes_dom0
  12. Restore the configuration: sudo vgcfgrestore --force -f "/media/ubuntu/ExternalHDD/qubes_config.bkp" qubes_dom0
  13. To make sure it worked, run sudo vgchange -ay qubes_dom0; you should see “XX logical volume(s) in volume group “qubes_dom0” now active”.
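The manual nano edit of the backed-up metadata can also be scripted. This is a sketch only (the bump_txid helper and its file naming are mine, and 18540/18542 are the numbers from my error message), so diff the output against the original by hand before running vgcfgrestore on it:

```shell
# bump_txid FILE POOL OLD NEW
# Rewrite "transaction_id = OLD" inside POOL's stanza of an LVM
# vgcfgbackup file, writing the result to FILE.fixed.
bump_txid() {
    file=$1 pool=$2 old=$3 new=$4
    awk -v pool="$pool" -v old="$old" -v new="$new" '
        $1 == pool && $2 == "{" { inpool = 1 }   # entering the pool stanza
        inpool && $1 == "transaction_id" {
            sub(old, new)                        # bump the id on this line only
            inpool = 0
        }
        { print }
    ' "$file" > "$file.fixed"
}

# Example matching the error above:
#   bump_txid qubes_config.bkp vm-pool 18540 18542
```

Restricting the substitution to the vm-pool stanza matters because root-pool has its own transaction_id that must stay untouched.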

After rebooting at this point I was able to boot into Qubes OS and everything worked normally.