Qube disk usage difference in qube / dom0

Hello!

I have a question that I can't seem to find an answer to.

From time to time I get a qube disk-full error in the “Disk usage” widget. The standard one:

QUBES_NAME: volume private is XX.X% full.

This appears when XX.X% is greater than 90%. Usually the warning correctly reflects the situation, and I either increase the disk size in the qube's settings and/or delete some files in the qube.

But for several particular qubes, utilities inside the qube show different usage information. For example, df -h reports only 54% used:

user@QUBE_NAME:~/$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.5G   11G  41% /
none                 20G  7.5G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G  7.6M 1017M   1% /dev/shm
tmpfs               187M  480K  187M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb            18G  9.5G  8.2G  54% /rw
tmpfs                94M   92K   94M   1% /run/user/1000

Also, for this qube there is a “Disk usage” widget warning which says the disk is 93.3% full. I believe this info is taken from the lvs command in dom0, which reports 93.32% usage:

[user@dom0 ~]$ sudo lvs
  LV                                                VG         Attr       LSize   Pool      Origin                                            Data%  Meta%  Move Log Cpy%Sync Convert
  root                                              qubes_dom0 Vwi-aotz--  20.00g root-pool                                                   51.17                                  
  root-pool                                         qubes_dom0 twi-aotz--  24.00g                                                             42.64  19.83                           
  swap                                              qubes_dom0 -wi-a-----  <3.93g                                                                                                       
  vm-QUBE_NAME-private                              qubes_dom0 Vwi-a-tz--  18.00g vm-pool   vm-QUBE_NAME-1747847796-back                      93.32                                  
  vm-QUBE_NAME-private-1747829970-back              qubes_dom0 Vwi-a-tz--  18.00g vm-pool                                                     92.35                                  
  vm-QUBE_NAME-private-1747847796-back              qubes_dom0 Vwi-a-tz--  18.00g vm-pool   vm-QUBE_NAME-private-1747829970-back              93.03                                  
  vm-QUBE_NAME-private-snap                         qubes_dom0 Vwi-aotz--  18.00g vm-pool   vm-QUBE_NAME-private                              94.25                                  
  vm-QUBE_NAME-root-snap                            qubes_dom0 Vwi-aotz--  20.00g vm-pool   vm-TEMPLATE-root-1747908038-back                  50.70                                  
  vm-QUBE_NAME-volatile                             qubes_dom0 Vwi-aotz--  10.00g vm-pool                                                     0.02
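
As a side note, the same number can also be read per volume in dom0 with qvm-volume (a sketch; QUBE_NAME is a placeholder as above):

[user@dom0 ~]$ qvm-volume info QUBE_NAME:private
# prints pool, size, usage and the *-back revisions kept for qvm-volume revert;
# as far as I can tell, "usage" here is the same figure the widget reports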

Currently, as a workaround, I mount the private image in a new qube and copy the /home/user directory over to it, which “fixes” this notification.
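
For completeness: when the old qube still starts, the copy step can also be done without mounting anything, via the normal inter-qube file copy (a sketch; NEW_QUBE stands in for whatever fresh qube is created):

user@QUBE_NAME:~$ qvm-copy-to-vm NEW_QUBE /home/user
# the files arrive under /home/user/QubesIncoming/QUBE_NAME/ in NEW_QUBE,
# and only the live data gets written to its fresh private volume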

I’ve tried to follow this topic’s advice on issuing the fstrim command, but that does not help.

Does anyone happen to know a more permanent fix for this situation?

I would expect sudo fstrim /rw to help; I don’t know why it doesn’t…
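
Roughly what I would check, to see whether discards from the qube reach the thin pool at all (a sketch, assuming the usual ext4 /rw):

user@QUBE_NAME:~$ findmnt -no OPTIONS /rw    # is "discard" among the mount options?
user@QUBE_NAME:~$ sudo fstrim -v /rw         # -v prints how many bytes were trimmed
[user@dom0 ~]$ sudo lvs | grep QUBE_NAME     # compare Data% before and after the trim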

Thank you for your answer. That command does indeed decrease the usage reported by lvs somewhat.

Here is an example from another qube.
df -h reports 15% used:

user@ANOTHER_QUBE:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.4G   11G  41% /
none                 20G  7.4G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G     0  1.0G   0% /dev/shm
tmpfs                69M  428K   69M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb           2.0G  277M  1.7G  15% /rw
tmpfs                35M   88K   35M   1% /run/user/1000

Whereas sudo lvs shows:

[user@dom0 ~]$ sudo lvs
  LV                                                   VG         Attr       LSize   Pool      Origin                                               Data%  Meta%  Move Log Cpy%Sync Convert
  root                                                 qubes_dom0 Vwi-aotz--  20.00g root-pool                                                      51.18                                  
  root-pool                                            qubes_dom0 twi-aotz--  24.00g                                                                42.65  19.83                           
  swap                                                 qubes_dom0 -wi-a-----  <3.93g                                                                                                       
  vm-ANOTHER_QUBE-private                              qubes_dom0 Vwi-a-tz--   2.00g vm-pool   vm-ANOTHER_QUBE-private-1747912047-back              81.54
  vm-ANOTHER_QUBE-private-1747894595-back              qubes_dom0 Vwi-a-tz--   2.00g vm-pool                                                        81.93
  vm-ANOTHER_QUBE-private-1747912047-back              qubes_dom0 Vwi-a-tz--   2.00g vm-pool   vm-ANOTHER_QUBE-private-1747894595-back              81.64

Here I issued the fstrim command between those two snapshots. There is a slight decrease, from 81.93% to 81.54%.

One thing those two qubes have in common is that in both I am using a Chromium-based browser: for Slack in one qube and for Telegram in the other. It is not the same site, though, so I guess it might have something to do with how those messengers are implemented (perhaps they use the same technology?).

What about the private-snap volume? Also not affected much?

Not much. It also decreased by less than 1%, on par with private.

Another thing I’ve discovered: the browser I am using is installed via Flatpak.

There is a .var directory in /home/user. Removing this directory has a significant effect on the used space reported by lvs, even though the directory itself is not that big according to du run inside the qube.

user@ANOTHER_QUBE:~$ du -hs .var/
271M	.var/
user@ANOTHER_QUBE:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.4G   11G  41% /
none                 20G  7.4G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G     0  1.0G   0% /dev/shm
tmpfs                69M  400K   69M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb           2.0G  277M  1.7G  15% /rw
tmpfs                35M   32K   35M   1% /run/user/1000
user@ANOTHER_QUBE:~$ du -hs .var/
271M	.var/
user@ANOTHER_QUBE:~$ rm -rf .var/
user@ANOTHER_QUBE:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.4G   11G  41% /
none                 20G  7.4G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G     0  1.0G   0% /dev/shm
tmpfs                69M  400K   69M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb           2.0G  5.9M  1.9G   1% /rw
tmpfs                35M   32K   35M   1% /run/user/1000
[user@dom0 ~]$ sudo lvs | grep ANOTHER_QUBE
  ANOTHER_QUBE-private                  qubes_dom0 Vwi-a-tz--   2.00g vm-pool   ANOTHER_QUBE-private-1747917433-back  9.18                                   
  ANOTHER_QUBE-private-1747917433-back  qubes_dom0 Vwi-a-tz--   2.00g vm-pool   ANOTHER_QUBE-private-1747917254-back  81.74
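
For anyone trying to reproduce this, one quick check to rule out sparse files as the explanation (a sketch, nothing exhaustive):

user@ANOTHER_QUBE:~$ du -hs .var/ ; du -hs --apparent-size .var/
# if the allocated and apparent sizes roughly agree, the "missing" space is not
# inside the files but in blocks the thin LV still holds from earlier writes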

Edit

Interestingly, copying this directory, removing the old one, and renaming the copy yields a similar result.

user@ANOTHER_QUBE:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.4G   11G  41% /
none                 20G  7.4G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G     0  1.0G   0% /dev/shm
tmpfs                69M  400K   69M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb           2.0G  277M  1.7G  15% /rw
tmpfs                35M   32K   35M   1% /run/user/1000
user@ANOTHER_QUBE:~$ cp -R .var .var1
user@ANOTHER_QUBE:~$ rm -rf .var
user@ANOTHER_QUBE:~$ mv .var1/ .var
user@ANOTHER_QUBE:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
/dev/mapper/dmroot   20G  7.4G   11G  41% /
none                 20G  7.4G   11G  41% /usr/lib/modules
devtmpfs            4.0M     0  4.0M   0% /dev
tmpfs               1.0G     0  1.0G   0% /dev/shm
tmpfs                69M  400K   69M   1% /run
tmpfs               5.0M     0  5.0M   0% /run/lock
/dev/xvdb           2.0G  276M  1.7G  15% /rw
tmpfs                35M   32K   35M   1% /run/user/1000
user@ANOTHER_QUBE:~$ 

[user@dom0 ~]$ sudo lvs | grep ANOTHER_QUBE
  ANOTHER_QUBE-private                  qubes_dom0 Vwi-a-tz--   2.00g vm-pool   ANOTHER_QUBE-private-1747918323-back  28.61                                  
  ANOTHER_QUBE-private-1747918323-back  qubes_dom0 Vwi-a-tz--   2.00g vm-pool                                         81.74
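
Something like this makes it easier to watch the relevant volumes from dom0 while experimenting, instead of re-running the full lvs each time (a sketch; field names as in lvs(8), adjust the name pattern):

[user@dom0 ~]$ watch -n 5 "sudo lvs -o lv_name,lv_size,data_percent,pool_lv -S 'lv_name=~ANOTHER_QUBE' qubes_dom0"
# refreshes every 5 s and lists only the private volume and its *-back revisions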