When trying to start up a qube, I just ran into this problem:
```
Qube xxx has failed to start: Cannot create new thin volume, free space in thin pool qubes_dom0/vm-pool reached threshold.
```
I found several forum posts that suggest, as a quick fix, setting `thin_pool_autoextend_threshold = 100` in `/etc/lvm/lvm.conf`, and this of course works.
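For reference, this is roughly what I changed (just a sketch of the relevant excerpt; I'm assuming the setting belongs in the `activation` section, as in the stock file):

```
# /etc/lvm/lvm.conf (excerpt)
activation {
    # Quick fix from the forum posts: stop the threshold check from
    # blocking creation of new thin volumes. The underlying lack of
    # free space in the pool is of course still there.
    thin_pool_autoextend_threshold = 100
}
```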
But the underlying problem of having too little free space is still something I need to fix, so I started deleting some old qubes that I no longer need from the Qube Manager. This does not solve the problem, though: the vm-pool disk usage shown in the Qube Disk Space Monitor is unchanged.
What is the reason for that? What do I need to do to make the freed space usable again?
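In case it helps, this is how I've been checking the pool from dom0 (a sketch; I'm taking the pool name `qubes_dom0/vm-pool` from the error message, and `myqube` is a placeholder for one of the deleted qubes):

```
# dom0: overall thin pool usage (the Data% column)
sudo lvs qubes_dom0/vm-pool

# dom0: look for leftover volumes of a deleted qube, including -back revisions
sudo lvs -o lv_name,data_percent qubes_dom0 | grep myqube
```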
Apart from that, I tried some disk trimming because I use an SSD. I ran `fstrim -av` in multiple qubes, which trimmed a significant amount of data, and then in dom0, where 0 bytes were trimmed. The disk usage of the individual qubes shown in the Qube Manager also doesn't drop by nearly as much as was trimmed. I tried restarting them a couple of times to get rid of larger revisions, but also without effect.

Why does this not work either? I'm on Qubes 4.1, so as far as I understand the additional steps for a LUKS setup described here: Disk Trimming are not necessary. Is there anything else I need to do?
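For completeness, this is how I've been checking the effect of the trim afterwards from dom0 (again just a sketch; `myqube` stands for one of the trimmed qubes):

```
# dom0: see whether old revisions of the qube's private volume are still kept
qvm-volume info myqube:private

# dom0: check how full the qube's volumes remain after trimming and shutdown
sudo lvs --units g -o lv_name,lv_size,data_percent qubes_dom0 | grep myqube
```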