Persistent False Storage Warning After Volume Resize - 97% Full Despite 34% Actual Usage

Hi everyone, I intended to create a bug report, but I thought I should at least post here first, so I’m looking forward to your support. You can check the details below:

Qubes Release:
Qubes OS release 4.2.4 (R4.2)

Affected component(s) or functionality: Storage monitoring, Qubes Manager widget, vm-pool threshold warnings

Brief summary

My Archlinux standalone VM persistently shows “97% full” warnings in Qubes notifications despite the actual filesystem being only 34% used (25GB of 78GB). This false-positive storage warning persists even after freeing up disk space and, later, a successful volume resize, partition expansion, filesystem growth, and multiple VM restarts. The issue appears to be related to a stale storage monitoring cache in qubesd that doesn’t refresh properly after storage operations.

Steps to reproduce the behavior

  1. Create a standalone Archlinux VM with default 75GB storage
  2. Use the VM until approaching storage capacity (triggers legitimate warning)
  3. Increase VM storage through Qube Manager: 75GB → 80GB → 86GB (the full command sequence is sketched just after this list)
  4. Successfully resize partition using growpart /dev/xvda 2
  5. Successfully resize filesystem using resize2fs /dev/xvda2
  6. Restart VM multiple times
  7. Observe that storage warning persists despite successful resize
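
For reference, this is roughly the command sequence behind steps 3–5 (the Archlinux qube name and the 86GB target are from my setup; adjust as needed):

# In dom0: grow the root volume
qvm-volume resize Archlinux:root 86GiB

# Inside the Archlinux VM: grow the partition, then the filesystem
sudo growpart /dev/xvda 2
sudo resize2fs /dev/xvda2
df -h /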

Expected behavior

After successful volume resize, partition expansion, and filesystem growth, the storage warning should disappear since the VM now has adequate free space.

Actual behavior

  • Qubes notification: Persistent “Archlinux: volume root is 97% full” warning
  • Actual filesystem status: /dev/xvda2 78G 25G 49G 34% /
  • Qube Manager shows: System storage max size: 80GB (correctly updated)
  • Dom0 monitoring shows: VM appears at 97% in Disk Space Monitor widget

The warning is a false positive - the filesystem has 49GB of free space (66% available), but the Qubes monitoring system shows it as 97% full.

Steps I’ve tried

I have systematically attempted multiple solutions:

Storage Operations:

  • Extended Qubes volume from 75GB → 80GB → 86GB using qvm-volume resize
  • Expanded partition using sudo growpart /dev/xvda 2
  • Resized filesystem using sudo resize2fs /dev/xvda2
  • Verified success: lsblk shows 79.1G partition, df -h shows 78G filesystem with 49G free

LVM Configuration:

  • Configured thin pool autoextend: thin_pool_autoextend_threshold = 90 (at one point I had to increase it to 100)
  • Set autoextend percentage: thin_pool_autoextend_percent = 20
  • Enabled LVM monitoring: sudo lvchange --monitor y qubes_dom0/vm-pool
  • Restarted monitoring services: systemctl restart lvm2-monitor dm-event
  • Used lvremove a couple of times on the vm-Archlinux*-back volumes.
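
For anyone comparing configs: the autoextend settings above live in the activation section of /etc/lvm/lvm.conf in dom0; the values below are simply the ones I used, not a recommendation:

# /etc/lvm/lvm.conf (dom0), activation section
activation {
    thin_pool_autoextend_threshold = 90
    thin_pool_autoextend_percent = 20
}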

Qubes System Refresh Attempts:

  • Multiple VM shutdowns and restarts
  • Storage resize operations through Qube Manager GUI
  • Volume refresh commands in Dom0
  • System reboot
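
For completeness, a quick way to see the numbers qubesd itself reports for the volume from dom0 (qube name as above):

# In dom0: show size, usage and revisions as qubesd sees them
qvm-volume info Archlinux:root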

Current System Status:

Dom0 Volume Group Status:

VG         #PV #LV #SN Attr   VSize   VFree  
qubes_dom0   1  81   0 wz--n- 231.28g <20.74g

VM Filesystem Status:

/dev/xvda2    78G  25G  49G  34% /

LVM Volumes Status:

LV                     VG         Attr       LSize  Pool    Origin                 Data%
vm-Archlinux-root      qubes_dom0 Vwi-a-tz-- 80.09g vm-pool vm-Archlinux-root-back 97.67
vm-Archlinux-root-snap qubes_dom0 Vwi-aotz-- 80.09g vm-pool vm-Archlinux-root      97.67

Important: LVM reports 97.67% usage while filesystem shows 34% usage - a clear disconnect between storage monitoring and actual filesystem state.
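
For reference, the two figures come from different layers, so they can be compared side by side:

# In dom0: usage as the LVM thin pool sees it
sudo lvs -o lv_name,lv_size,data_percent qubes_dom0/vm-Archlinux-root

# Inside the VM: usage as the filesystem sees it
df -h /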

Enable discard for your qube’s partitions and run fstrim:
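
In case it helps, a rough sketch of what that means in practice (the thin pool keeps counting blocks the qube has written until they are discarded, so deleted files still show as used at the LVM layer until a trim runs):

# Inside the qube: one-off trim of the root filesystem
sudo fstrim -v /

# Or enable periodic trimming
sudo systemctl enable --now fstrim.timer

# Alternatively, add "discard" to the root entry in /etc/fstab so freed
# blocks are discarded continuously, e.g.:
# /dev/xvda2  /  ext4  defaults,discard  0  1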

I can’t believe that I did everything, I mean everything, and I missed doing that; it just didn’t cross my mind at all.

Thanks for sharing that. The issue for the HVM didn’t appear again, but the one related to vm-pool is still there.

I just have one small question about how to manage the vm-pool: what is a usual/normal percentage for the vm-pool data? And is it reasonable to see two “vm-Archlinux-root-****-back” volumes of 80GB each in addition to the main “vm-Archlinux-root” of 80GB?

Now vm-pool data is 88%

It’s described in the linked topic as well:

Change revisions_to_keep to 0 for Qubes OS 4.2. For Qubes OS 4.3 there is also an option to set revisions_to_keep to -1:
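
Assuming the qube is named Archlinux as above, the setting can be changed per volume from dom0, roughly like this:

# In dom0: stop keeping old revisions of the root volume
qvm-volume config Archlinux:root revisions_to_keep 0

# Verify
qvm-volume info Archlinux:root | grep revisions_to_keep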

UPDATE: I tried the fstrim solution and it did work - it freed 52.1 GiB of space from my Archlinux VM and should have resolved the false storage warning (I ran it inside Arch Linux and inside dom0).
However, this revealed that the underlying partition hadn’t been properly expanded to match the 86GB volume resize I had done earlier, and I got the same message again.

I then faced a failed boot attempt with Arch Linux and got this message:

  • [FAILED] Failed to start Adjust root filesystem size

To fix this, I booted from a live CD and used parted to expand partition 2 to use the full disk space (it went from ~80GB to 96GB; I had already expanded the volume from dom0 beforehand). The partition resize completed successfully.
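
The live-CD step was roughly the following (the disk appeared as /dev/xvda in my case; device names may differ):

# From the live CD: grow partition 2 to the end of the disk
sudo parted /dev/xvda resizepart 2 100%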

Same Problem Persists: Now my Archlinux VM won’t boot normally. I get these errors:

  • [FAILED] Failed to start Adjust root filesystem size
  • [TIME] Timed out waiting for device /dev/xvdc1
  • [DEPEND] Dependency failed for Swaps
  • [FAILED] Failed to start Update time from ClockVM

After these errors, the login screen appears, but when I enter my password, the screen goes black and returns to login (authentication/display corruption).

I tried to boot by editing GRUB to add the single parameter, but that didn’t give me root shell access, so that was a bust.

The VM is accessible through the live CD, but I can’t seem to connect networking or USB to get a backup of everything. The data isn’t lost, but I need a way to take the backup, and I don’t know if there is a way to do it through other means (dom0, another AppVM, etc.).
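
One idea I haven’t tried yet (a rough sketch based on the volume names above, assuming the VM stays shut down; the target qube and file path are placeholders) would be to loop-mount the root volume read-only in dom0 and copy files out to another qube - is something like this safe, or is there a better supported way?

# In dom0, with the Archlinux VM shut down
# (activate the LV first if needed: sudo lvchange -ay qubes_dom0/vm-Archlinux-root)
sudo losetup -f --show -P /dev/qubes_dom0/vm-Archlinux-root   # prints e.g. /dev/loop0
sudo mount -o ro /dev/loop0p2 /mnt
qvm-copy-to-vm other-qube /mnt/home/user/some-file            # placeholder qube name and path
sudo umount /mnt
sudo losetup -d /dev/loop0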