Thin pool running out of disk space (SSD)

Unfortunately, I reacted too late to running out of space, and now the vm-pool metadata is 100% full. I can still reboot, but after booting no VMs will start. I can still use the dom0 console, so I hope someone can help me. It seems as if I only have read permissions.
I have already read a lot about this issue online, e.g. (Disk Troubleshooting | Qubes OS), but I just don't understand much of it.
I took some screenshots so you can easily see where I am at.
I have an external drive, so once I can free up some space and start some VMs again, I will be able to pull things off the different VMs. I am also open to deleting some VMs if that would help.

Thank you

Here is how it looks with some commands (I tried deleting):

Hi @fresher24 ,

As the df -h output shows, dom0 itself isn't out of disk space.

BUT the disk widget in the tray bar shows that your VM space (vm-pool in your case) is full.

So you should work out which VMs use a lot of disk space and free some up (remove unused VMs?).

So, the ways forward:

  • understand the output of the qvm-ls, sudo lvs and lsblk commands
  • understand and apply the documentation you already found: Disk Troubleshooting Guide
  • try to delete unused VMs with qvm-remove
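For the first step, something like this in dom0 should give you the overview (the qvm-ls disk format is an assumption on my part, so check qvm-ls --help on your system):

```shell
# Per-VM disk usage overview (run in dom0)
qvm-ls --format disk

# Thin pool usage: the Data% and Meta% columns of vm-pool are the ones to watch
sudo lvs

# Block device layout, to see how the pool maps onto the SSD
lsblk
```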

And, as always, the key is making backups before the problems appear :wink:


This should help. In 4.1, specify qubes_dom0/vm-pool instead of qubes_dom0/pool00

Be sure to make a backup before making any changes to your disk
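For reference, the commands from that guide, adapted for an R4.1 vm-pool, might look roughly like this (the exact LV names and sizes are assumptions; confirm them with sudo lvs -a first):

```shell
# Inspect the pool and its hidden metadata/data sub-volumes first
sudo lvs -a qubes_dom0

# Grow the thin pool's metadata volume by 1 GiB
# (assumes free extents exist in the volume group)
sudo lvextend -L +1G qubes_dom0/vm-pool_tmeta

# Optionally grow the data volume as well
sudo lvextend -L +1G qubes_dom0/vm-pool
```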


Hi kommuni,
Thanks for the link.
I entered all the commands as described and used vm-pool instead of pool00.

After that I tried to start a VM, but I got the error message “Thin pool qubes_dom0-vm–pool-tpool (253:8) transaction_id is 8237, while expected 8238. Failed to suspend qubes_dom0/vm-pool with queued messages.”
Then I restarted the PC, and it got stuck booting several times at the same point:

I waited around half an hour but nothing happened…
Can you help me? I found “Fixing Transaction ID Mismatch”
Do you think this will help?
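For context, the “Fixing Transaction ID Mismatch” procedure boils down to restoring the volume group metadata with a corrected transaction_id. A rough sketch of it, as I understand the guide (the file path is arbitrary, and the target ID 8238 comes from the error message above; double-check against the guide before running anything):

```shell
# Dump the current LVM metadata for the volume group to a text file
sudo vgcfgbackup -f /tmp/qubes_dom0.vgcfg qubes_dom0

# Edit the file and set the thin pool's transaction_id to the value
# the kernel expects (8238 in the error message above)
sudo nano /tmp/qubes_dom0.vgcfg

# Write the corrected metadata back
sudo vgcfgrestore --force -f /tmp/qubes_dom0.vgcfg qubes_dom0
```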

lvconvert --repair will need a few free extents to run, so use lvresize to shrink qubes_dom0/swap first.
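A sketch of that step (the 1 GiB size is illustrative, and swap must not be in use while shrinking it):

```shell
# Turn swap off so the LV can be shrunk safely
sudo swapoff /dev/qubes_dom0/swap

# Shrink swap by 1 GiB to free extents in the volume group
sudo lvresize -L -1G qubes_dom0/swap

# Repair the thin pool's metadata using the freed space
sudo lvconvert --repair qubes_dom0/vm-pool
```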

But before that, I would like you to check the value of locking_type in /etc/lvm/lvm.conf. You can check it with the following command:
grep locking_type /etc/lvm/lvm.conf | grep -v '#'

If the value is 4, change it to 1, save, and reboot.
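If you'd rather not open an editor, a one-liner like this could make the change (it assumes the line reads exactly "locking_type = 4" and is not commented out):

```shell
# Keep a backup of the original config, then flip locking_type from 4 to 1
sudo cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
sudo sed -i 's/locking_type = 4/locking_type = 1/' /etc/lvm/lvm.conf

# Verify the result
grep locking_type /etc/lvm/lvm.conf | grep -v '#'
```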

If you cannot run the command from dom0, live-boot Qubes OS from a USB stick, go to TTY2 (Ctrl-Alt-F2), and mount the disk. Also see the docs: Mounting and Decrypting Qubes Partitions from Outside Qubes
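A rough outline of the unlocking step from a live environment (device names like /dev/sda2 and the mapper name "qubes" are assumptions; identify the real encrypted partition with lsblk first):

```shell
# Identify the encrypted Qubes partition (usually the largest one)
lsblk

# Unlock the LUKS container (the mapper name 'qubes' is arbitrary)
sudo cryptsetup open /dev/sda2 qubes

# Scan for and activate the LVM volume group inside it
sudo vgscan
sudo vgchange -ay qubes_dom0
```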