No free space in vm-pool HOW DO I GET MY SYSTEM BACK?

I’m basically having this issue but the thread is 2 years old and doesn’t have a solution: LVM metadata problem - Free space in thin pool reached threshold - #3 by user-a9dc9a0d

I can’t launch my VMs because there’s no free space in vm-pool. I tried removing some old qubes I don’t use anymore and even moved some to my other pool, but I still can’t do anything with the qubes left on vm-pool. Did I permanently lose my main system drive somehow…?

Could you share a bit about your setup first and share some logs about the error?

Okay, this is weird. Now it looks like I can use my system error-free when I first boot it, but I eventually run into this problem when I try to launch a qube. I have journalctl open now and am waiting for it to happen again so I can copy-paste the exact error… although with my luck the error will only show in a tooltip where I can’t copy-paste.

Do you know how I can check free space and/or delete things in vm-pool? That’s where the problem is: it keeps running out of room for some stupid reason.

Alright so here’s the error:

device-mapper: thin: 253:8: reached low water mark for data device: sending event
Insufficient free space: 4193 extents needed, but only 1375 available
Failed command for qubes_dom0-vm--pool-tpool

It keeps going but it’s just variations of those three lines. HOW DO I FIX THIS CRAP??

EDIT: this isn’t the error I get when it fails. When there is a failure to start a qube it just shows an error in a tooltip and doesn’t say anything in journalctl… FFS qubes wtf did I do to deserve this??

I’m not sure if this could be the source of the problem, but Qubes OS creates temporary disks upon boot to allow reverting changes made during that boot, which requires extra space.

Maybe you could check using qvm-volume info QUBENAME:private to see if you have too many revisions? There should be a way to remove a revision or merge it into the disk to free a lot of space.

At the very least, just booting a qube and stopping it will create a new checkpoint and delete the oldest one, which could free a lot of space if you previously deleted files in that qube.
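If you want to check every qube at once (including ones you might not think of, like the management disposables), a small dom0 loop over qvm-ls works. This is just a sketch using the standard qvm-ls and qvm-volume tools; the grep pattern assumes the property names shown in qvm-volume info output:

```shell
# Run in a dom0 terminal: print revisions_to_keep and current usage
# for every qube's private volume. qvm-ls --raw-list prints one qube
# name per line, which makes it easy to loop over.
list_private_revisions() {
    for vm in $(qvm-ls --raw-list); do
        echo "== $vm =="
        qvm-volume info "$vm:private" | grep -E 'revisions_to_keep|usage'
    done
}
list_private_revisions
```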

I have revisions_to_keep set to zero on all the qubes I checked. How do I check this on -mgmt qubes?

I run into this same problem every once in a while. By default, Qubes sets the threshold for auto-extending the thin pool to 90% usage. If you can’t open any qubes because you’ve run out of space in the thin pool, you can temporarily increase the threshold to 99%, move or delete excess files to reclaim space, and then reduce the threshold back to 90%. Here is the process:

  1. Open the dom0 terminal.
  2. Open the /etc/lvm/lvm.conf file in vim or nano (whichever one you prefer):
sudo vim /etc/lvm/lvm.conf
  3. Increase the thin_pool_autoextend_threshold setting from 90 to 99:
thin_pool_autoextend_threshold = 99
  4. Start a qube and delete unused files to reclaim space. When done, shut down the qube.
  5. Start the same qube and shut it down 2 more times. Qubes will delete the old backup volumes for that qube and replace them with smaller ones. You can use the Qube Disk Space Monitor tool on the XFCE panel to check the disk usage afterwards.
  6. Decrease the thin_pool_autoextend_threshold setting back to 90:
thin_pool_autoextend_threshold = 90
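If you’d rather not open an editor, the threshold change itself can be done with a sed one-liner. The snippet below is only a sketch: it demonstrates the substitution on a scratch copy of the config line so it’s safe to paste anywhere; on a real system you would run the sed commands with sudo against /etc/lvm/lvm.conf.

```shell
# Demonstrate the autoextend-threshold edit on a scratch copy of the
# config line; in dom0 you would target /etc/lvm/lvm.conf with sudo.
conf=$(mktemp)
echo 'thin_pool_autoextend_threshold = 90' > "$conf"

# Raise the threshold to 99 so qubes can start while you clean up:
sed -i 's/thin_pool_autoextend_threshold = 90/thin_pool_autoextend_threshold = 99/' "$conf"
grep thin_pool_autoextend_threshold "$conf"

# ...start a qube, delete unused files, cycle it a couple of times...

# Then lower the threshold back to 90:
sed -i 's/thin_pool_autoextend_threshold = 99/thin_pool_autoextend_threshold = 90/' "$conf"
grep thin_pool_autoextend_threshold "$conf"
rm -f "$conf"
```

While cleaning up, running sudo lvs qubes_dom0 in dom0 shows the Data% column for the thin pool, so you can watch the space coming back.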

Let me know if it works for you, and if not I can help you troubleshoot it a bit more.
