Magical Terminal Commands To Reclaim Disk Space?

Something is taking up disk space, and I just can’t find it.

I’ve been cloning, testing, deleting plenty of qubes.

I’ve been messing with file permissions, symlinks, and binds…

…And in the process, more and more disk space has gone missing somewhere in the filesystem. Everything I’ve tried so far hasn’t reclaimed it.

I’m about to pull the trigger on another reinstall/restore. Before I do, are there some secret Qubes utilities somewhere that find/reclaim disk space?

What command output(s) show you are using more disk space than you believe you are?

I ask this because in the default install, thin volumes are used, so it is not clear if you are noting that the filesystem is more full, the thin pool is more full, or the volume group is more full than expected.
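A quick way to check all three layers at once — sketched here assuming the default R4.x names (`qubes_dom0` volume group, `pool00` thin pool); adjust if yours differ:

```shell
# Layer 1: how full is the root *filesystem*?
df -h /

# Layer 2: how full is the *thin pool*? (watch the Data% column)
# Layer 3: how full is the *volume group*? (VFree shows unallocated space)
# "|| true" so the sketch degrades gracefully on systems without LVM
sudo lvs qubes_dom0 2>/dev/null || true
sudo vgs qubes_dom0 2>/dev/null || true
```

Each layer can fill independently, which is why the answer to “where did my space go” depends on which one you are looking at.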


I was making veracrypt vaults in dom0, and testing making them under different file permissions, user privileges, hidden, and overt…and in less than kosher spaces on the hd.

Some of those vaults ended up with circular binds… and just simply disappeared.

I think those vaults are the problem. My dom0 space is all but gone… despite clearing out any unnecessary data.

What command indicates your dom0 space is all but gone?

Veracrypt reports available space each time I make/test a vault. So far, deleting the vaults, clearing the trash, and rebooting doesn’t give Veracrypt access to the freed storage. The disk usage GUI doesn’t jibe with Veracrypt.

Assuming we’re talking about dom0 filesystem space, I’m expecting to see answers similar to:

[admin@dom0 ~]$ df
Filesystem                  1K-blocks    Used Available Use% Mounted on
devtmpfs                       954384       0    954384   0% /dev
tmpfs                          970320       0    970320   0% /dev/shm
tmpfs                          970320    2088    968232   1% /run
tmpfs                          970320       0    970320   0% /sys/fs/cgroup
/dev/mapper/qubes_dom0-root 452235888 8013396 421180444   2% /
tmpfs                          970320     272    970048   1% /tmp
xenstore                       970320    1040    969280   1% /var/lib/xenstored
/dev/sda1                      999320  429348    501160  47% /boot
tmpfs                          194064      12    194052   1% /run/user/1000
[admin@dom0 ~]$ 


[admin@dom0 ~]$ sudo lvs|grep -v vm-
  LV                                                        VG           Attr       LSize   Pool               Origin                                                    Data%  Meta%  Move Log Cpy%Sync Convert
  pool00                                                    qubes_dom0   twi-aotz-- 439.18g                                                                              82.03  57.58                           
  root                                                      qubes_dom0   Vwi-aotz-- 439.18g pool00                                                                       3.62                                   
  swap                                                      qubes_dom0   -wi-ao----   9.58g                                                                                                                     
[admin@dom0 ~]$ 

df shows:
/dev/mapper/qubes_dom0-root 20511312 13814560 5631792 72% /

There is no way that much disk space is being used by dom0?

So root is ~20GB, ~14GB used, ~6GB free?
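For reference, `df` defaults to 1K blocks, so 20511312 blocks is roughly 19.6 GiB; the `-h` flag does the conversion for you:

```shell
# Human-readable sizes for the root filesystem only
df -h /
```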

Coming from my Mac days, I had to run some command (I forget which) to reclaim deleted encrypted disk space.

Is there something similar for LUKS?

root-pool qubes_dom0 twi-aotz-- 24.00g 6.33 38.05

Work our way down from the top:

[admin@dom0 ~]$ cd /
[admin@dom0 /]$ sudo du -d 1 -h 2> /dev/null
0	./proc
888K	./srv
4.0K	./opt
16K	./lost+found
2.1M	./run
0	./sys
4.0K	./mnt
0	./dev
182M	./root
4.0K	./media
99M	./home
4.0K	./ephemeral
33M	./etc
4.4G	./usr
417M	./boot
8.0K	./tmp
3.0G	./var
8.0G	.
[admin@dom0 /]$ 

Then look for lines with “G” (GB) or “M” (MB) larger than expected.

Navigate down to that directory and use the command again.

In my case, /usr is large, but that’s expected. Most of that is /usr/lib, expected.

In my case, /var is large. Is that expected? Again, most of that is /var/lib, expected.

/var/log tends to be where my dom0 filesystem grows. Even with the more recent qubes work to trim logs, it grows faster than I’d like, so I have scripts to clean it out.

You can use this method to search and destroy your space wasters.
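The whole drill-down can also be done in one pass from the top. One flag worth adding here, assuming GNU du as shipped in dom0’s Fedora base, is `-x`, which keeps the scan on the root filesystem instead of descending into /boot or other mounts:

```shell
# -x: stay on the root filesystem (skip /boot, /dev, and other mounts)
# -d 1: one level deep, mirroring the manual walk above
sudo du -xh -d 1 / 2>/dev/null
```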


Do you have those scripts on a git somewhere I can pick them up?

This should work to help track down consumed space without installing software in dom0:

cd /
sudo du -sh *

Find which subdirectory looks more bloated than expected, then:

cd {bloated directory name}
sudo du -sh *

Repeat the process until you find it.

Alternatively, instead of sudo du -sh *,
sudo du -s * | sort -n
can be useful (putting them in order of size)
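One caveat with that pairing: sort -n needs du’s raw block counts, so you lose the human-readable sizes. GNU sort (present in dom0’s Fedora base) also has -h, which understands the suffixed sizes that du -sh prints:

```shell
# Human-readable sizes, sorted smallest to largest
sudo du -sh -- * 2>/dev/null | sort -h
```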


This is interesting to me. Your dom0 root filesystem is ~20GB, but the thin pool for dom0 is 24GB. Is there more than one volume in the root-pool? If not, perhaps auto-extension occurred, but the filesystem has not been extended?

I can’t see one, and I didn’t set one up.
I’ve been hacking away at plenty of things just to learn Qubes… so not too big of a loss if I have to reinstall. But for future it would be nice to know what I did to cause this anomaly (particularly the disappearing vaults).

Nope, but here’s a sanitized version - this is for R4.0, probably needs a review for R4.1.


# remove all logs and detritus

sudo sh -c "rm -rf /var/log/xen/console/guest-*"
sudo sh -c "rm -rf /var/log/xen/console/hypervisor.log-*"

sudo sh -c "rm -rf /var/log/libvirt/libxl/*"

sudo sh -c "rm -rf /var/log/qubes/guid.*"
sudo sh -c "rm -rf /var/log/qubes/pacat.*"
sudo sh -c "rm -rf /var/log/qubes/qrexec.*"
sudo sh -c "rm -rf /var/log/qubes/qubesdb.*"
sudo sh -c "rm -rf /var/log/qubes/vm-*"
sudo sh -c "rm -rf /var/log/qubes/mgmt-*"
# sudo sh -c "rm -rf /var/log/qubes/qmemman.log" # does not seem to get recreated until after reboot - skipping to be safe
# sudo sh -c "rm -rf /var/log/qubes/qubes.log"   # does not seem to get recreated until after reboot - skipping to be safe

sudo journalctl --vacuum-files=1 1> /dev/null 2> /dev/null

sudo fstrim -av 1> /dev/null 2> /dev/null

echo ""
echo "Completed cleanup log files and similar."
echo ""
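To see how much the journal portion is actually worth before and after a run, journalctl can report its own footprint — a quick check, assuming systemd’s journald as used in dom0:

```shell
# How much space the systemd journal currently occupies
sudo journalctl --disk-usage 2>/dev/null || true

# And the log tree as a whole
sudo du -sh /var/log 2>/dev/null || true
```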



Do you have one for KDE logs as well?

I do not use KDE.

They just did a major update, I hear… and so far, I haven’t seen any bugs that would make me want to go back to XFCE.
Give it a spin… (you’ll thank me later :wink:)

Were you able to use du to iterate down your large directories to find the source of the space usage?