[solved] When is Disk Space Returned to the Thin Pool after Deleting Data in an AppVM?

My Qubes 4.2 laptop was running fine until yesterday, when I copied about 40 GiB into an AppVM and then tried to start another AppVM. The latter was denied, because the copying had pushed vm-pool/data to 92.0% and total disk usage to 90.7%. So I deleted (not just trashed) about 14 GiB of the copied data again and issued an fstrim -v /home in that AppVM, which reported trimming only 795.1 MiB, though. :thinking: However, pool and total disk usage in the Qubes Disk Space Monitor did not change at all, not even by that 800 MiB. :face_with_raised_eyebrow:

So, when is no-longer-used disk space returned to the thin pool?
Will that happen once I shut down the AppVM? I have not done that yet: since I cannot launch any AppVM at the moment, I fear I would be stuck in dom0, unable to start the AppVM again to delete further data.

While searching the forum I found someone on Qubes 4.0 who had to use a live system to repair the pool. I hope such methods are no longer necessary with Qubes 4.2. Someone else was able to add disk space to the pool, but my SSD is already completely partitioned for Qubes. Raising the autoextend limit as described in this thread might be a short-term option to start another AppVM, but without the ability to reclaim space freed in AppVMs back to the pool, it is not an actual solution. The Disk Troubleshooting Guide recommends deleting LV backups (snapshots), and this indeed reduces pool usage, but not enough to get below the threshold for starting AppVMs again.
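For reference, the autoextend limit mentioned above is an LVM setting in dom0. A minimal sketch of the relevant lvm.conf fragment, assuming the default Qubes thin-pool setup; the values shown are illustrative, not a recommendation:

```
# /etc/lvm/lvm.conf (dom0) -- illustrative values only
activation {
    # Refuse/start autoextending the thin pool at this usage (%).
    # 100 disables autoextending entirely.
    thin_pool_autoextend_threshold = 92

    # How much to grow the pool (in %) each time the threshold is hit.
    # Only matters if there is unallocated space left in the volume group.
    thin_pool_autoextend_percent = 20
}
```

Note that autoextending only helps if the volume group actually has free extents; on a fully partitioned disk it merely moves the point at which starting AppVMs is refused.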

I guess backing up, deleting, and then restoring the AppVM would shrink its LVs to their actual size after deleting the excess data? Am I right? But isn’t there an easier way to return disk space to the pool?

I’m a bit worried, since not being able to start any AppVM also affects sys-net and sys-usb, I guess. Without them, I would be unable even to make a backup before eventually deleting the AppVM :fearful: So, any help would be really appreciated!

Qubes OS keeps revisions of your qubes’ volumes, so the data you removed may still exist in one of the old revisions:

You need to restart the qube a few times for the old revisions to be discarded.
Or you can try changing revisions_to_keep to 0:

qvm-volume config vmname:private revisions_to_keep 0

Maybe it’ll remove your old revisions after you shut down the qube.
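Before changing it, you can also check a volume’s current setting and its kept revisions from a dom0 terminal; a sketch, with “vmname” as a placeholder for the qube’s actual name:

```
# dom0 terminal -- "vmname" is a placeholder
qvm-volume info vmname:private    # shows revisions_to_keep and any listed revisions
qvm-volume config vmname:private revisions_to_keep 0
# Then shut the qube down so the old revisions can be dropped.
```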

OK, solved it myself. I temporarily raised the autoextend limit to 92% as described in the thread mentioned above, so I could start another AppVM. When I deleted some data there (data I had on another PC anyway), the size of its LV as displayed via sudo lvs in a dom0 terminal shrank immediately. I then shut down this AppVM and deleted the two backup snapshots of its LV using lvremove, which reduced pool usage to 85%, so I’m fine now.
Then I noticed that the “Data%” column for the LV of the AppVM that originally caused the issue was also lower than that of its backup snapshots, so it did shrink!
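For anyone following along, the snapshot cleanup looked roughly like this in a dom0 terminal. The volume and VM names below are hypothetical examples; check your own lvs output carefully before removing anything:

```
# dom0 terminal -- names are hypothetical; verify with lvs before removing
sudo lvs                # the "Data%" column shows per-LV thin-pool usage

# Qubes names a volume's backup snapshots "vm-<name>-private-<timestamp>-back".
# Remove one only after confirming it is a -back snapshot you no longer need:
sudo lvremove qubes_dom0/vm-work-private-1700000000-back
```

lvremove will ask for confirmation; removing the wrong LV (e.g. the live private volume instead of a -back snapshot) destroys that qube’s data, so double-check the name.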

So the answer to my question above is: actually, immediately. You don’t even need to issue an fstrim. However, the freed space might still be held by a backup snapshot – and that’s why the Qubes Disk Space Monitor does not go down. :man_facepalming:

PS: I just saw your reply, @apparatus. Thank you, you identified the same issue! :hugs: I understand these backup snapshots save people’s data on a daily basis, so I’m glad they exist by default, even though they caused me trouble. I hope this thread helps others to understand the issue and solve it, too. :slightly_smiling_face:


I wish disk revisions were more visible, or at least had a GUI for them. This bites everyone: it’s a cool feature, but since few users are aware of it, it goes unused. In addition, the command-line tool for manipulating revisions is not the simplest CLI in Qubes OS :confused:
