Hi everyone, I intended to create a bug report, but I thought I should at least post here first. I'm looking forward to your support; the details are below:
Qubes OS release: 4.2.4 (R4.2)
Affected component(s) or functionality: Storage monitoring, Qubes Manager widget, vm-pool threshold warnings
Brief summary
My Archlinux standalone VM persistently shows “97% full” warnings in Qubes notifications even though the filesystem is actually only 34% used (25GB of 78GB). This false-positive storage warning persists even after freeing up space on the disk and, later, a successful volume resize, partition expansion, filesystem growth, and multiple VM restarts. The issue appears to be related to a stale storage-monitoring cache in qubesd that doesn't refresh properly after storage operations.
Steps to reproduce the behavior
- Create a standalone Archlinux VM with default 75GB storage
- Use the VM until approaching storage capacity (triggers legitimate warning)
- Increase VM storage through Qube Manager: 75GB → 80GB → 86GB
- Successfully resize the partition using `growpart /dev/xvda 2`
- Successfully resize the filesystem using `resize2fs /dev/xvda2`
- Restart the VM multiple times
- Observe that the storage warning persists despite the successful resize (a consolidated command sketch follows this list)
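For reference, the in-VM part of the resize was roughly the following (a minimal sketch; it assumes the root filesystem is ext4 on the second partition of /dev/xvda, as in my default layout):

```
# Inside the Archlinux standalone VM, after growing the volume from dom0
sudo growpart /dev/xvda 2     # extend partition 2 into the newly added space
sudo resize2fs /dev/xvda2     # grow the ext4 filesystem to fill the partition
df -h /                       # confirm the new size and free space
```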
Expected behavior
After successful volume resize, partition expansion, and filesystem growth, the storage warning should disappear since the VM now has adequate free space.
Actual behavior
- Qubes notification: Persistent “Archlinux: volume root is 97% full” warning
- Actual filesystem status: `/dev/xvda2  78G  25G  49G  34%  /`
- Qube Manager shows: System storage max size: 80GB (correctly updated)
- Dom0 monitoring shows: the VM at 97% in the Disk Space Monitor widget
The warning is a false positive: the filesystem has 49GB of free space (66% available), but the Qubes monitoring system reports the volume as 97% full.
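To make the mismatch concrete, this is the kind of comparison I keep making (a sketch; `Archlinux` is my qube's name and the numbers are from my system):

```
# In dom0: size and usage as the Qubes storage layer reports them
qvm-volume info Archlinux:root

# Inside the VM: what the filesystem itself reports
df -h /
# /dev/xvda2   78G   25G   49G   34%   /
```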
Steps I’ve tried
I have systematically attempted multiple solutions:
Storage Operations:
- Extended the Qubes volume from 75GB → 80GB → 86GB using `qvm-volume resize` (CLI sketch after this list)
- Expanded the partition using `sudo growpart /dev/xvda 2`
- Resized the filesystem using `sudo resize2fs /dev/xvda2`
- Verified success: `lsblk` shows a 79.1G partition, `df -h` shows a 78G filesystem with 49G free
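The dom0 side of the resize was done with `qvm-volume`; a sketch of the CLI equivalent for the final step (86GB) would be something like:

```
# In dom0 (sketch): grow the standalone VM's root volume, then start it again
qvm-shutdown --wait Archlinux
qvm-volume resize Archlinux:root 86g
qvm-start Archlinux
```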
LVM Configuration:
- Configured thin pool autoextend: `thin_pool_autoextend_threshold = 90` (at one point I had to raise it to 100; verification sketch after this list)
- Set autoextend percentage: `thin_pool_autoextend_percent = 20`
- Enabled LVM monitoring: `sudo lvchange --monitor y qubes_dom0/vm-pool`
- Restarted monitoring services: `systemctl restart lvm2-monitor dm-event`
- Used `lvremove` a couple of times on the `vm-Archlinux*-back` volumes
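To double-check that those LVM settings are actually in effect, this is roughly what I run in dom0 (a sketch; the values in the comments are from my config):

```
# In dom0: confirm the autoextend settings LVM is actually using
sudo lvmconfig activation/thin_pool_autoextend_threshold   # 90 here (temporarily 100 at one point)
sudo lvmconfig activation/thin_pool_autoextend_percent     # 20 here

# Current state of the pool and the Archlinux volumes
sudo lvs -o lv_name,lv_size,data_percent,pool_lv qubes_dom0
```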
Qubes System Refresh Attempts:
- Multiple VM shutdowns and restarts
- Storage resize operations through Qube Manager GUI
- Volume refresh commands in dom0 (roughly the commands sketched after this list)
- System reboot
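To be concrete, the dom0 refresh attempts were roughly along these lines (a sketch, not an exact transcript; the explicit qubesd restart is just the stand-alone equivalent of what a full reboot does):

```
# In dom0 (sketch): shut the qube down cleanly and start it again
qvm-shutdown --wait Archlinux
qvm-start Archlinux

# Re-check what the storage layer now reports for the volume
qvm-volume info Archlinux:root

# A full dom0 reboot also restarts qubesd; restarting just the daemon would be:
sudo systemctl restart qubesd
```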
Current System Status:
Dom0 Volume Group Status:
VG #PV #LV #SN Attr VSize VFree
qubes_dom0 1 81 0 wz--n- 231.28g <20.74g
VM Filesystem Status:
/dev/xvda2 78G 25G 49G 34% /
LVM Volumes Status:
vm-Archlinux-root qubes_dom0 Vwi-a-tz-- 80.09g vm-pool vm-Archlinux-root-back 97.67
vm-Archlinux-root-snap qubes_dom0 Vwi-aotz-- 80.09g vm-pool vm-Archlinux-root 97.67
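Those two lines are `lvs` output with the header stripped; as far as I can tell, the trailing 97.67 is the Data% column. A sketch of a command that shows the same fields with headers:

```
sudo lvs -o lv_name,vg_name,lv_attr,lv_size,pool_lv,origin,data_percent qubes_dom0
```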
Important: LVM reports 97.67% data usage while the filesystem shows 34% usage, a clear disconnect between storage monitoring and the actual filesystem state.
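To narrow down whether that 97.67% tracks allocated thin-pool blocks rather than a stale cache, a minimal check I can run (assuming fstrim and discard support inside the VM, which is an assumption on my part):

```
# Inside the Archlinux VM: ask the filesystem to discard blocks it no longer uses
sudo fstrim -v /

# In dom0: see whether the thin volume's Data% drops afterwards
sudo lvs -o lv_name,data_percent qubes_dom0/vm-Archlinux-root
```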