Bricked: cannot run any qube => free space in thin pool reached threshold

I was moving some large files around in a qube. Everything worked normally; I got no indication that anything was wrong. Then I shut the laptop off for the day.

The next day, when I boot Qubes 4.1, none of the qubes that start on boot are running, not even sys-usb. So I try to start it manually and get this message:

Qube Status: sys-usb
sys-usb has failed to start: Cannot create new thin volume, free space in thin pool VolGroup0/vm-pool0 reached threshold.

I click on the “disk” icon and see that vm-pool0 is at 91.x% usage.
In the end I had to boot from a LiveUSB to manually delete some files out of a qube’s LV.
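For anyone hitting the same wall: the pool’s fill level can also be read with plain LVM tools from a dom0 terminal (or from the LiveUSB, after activating the volume group). The names below match the error message above; adjust them to your setup:

[dom0 ~]$ sudo lvs -o lv_name,data_percent,metadata_percent VolGroup0/vm-pool0

Data% is the number to watch; if Meta% fills up first, the pool can fail even earlier.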

Back in the days of Qubes 4.0, I remember getting a soft warning at 90%, and at 95% those annoying warnings every time a qube was started. But nothing like this! No warning… bam, Qubes is a brick.

How do I change the thin pool threshold in Qubes 4.1 so it behaves like Qubes 4.0, with these differences:

  • soft warning at 50 GB free space (instead of at 10% in Q4.0)
  • hard warning at 10 GB free space (instead of at 5% in Q4.0)
  • brick at 0 GB free space (instead of at 10% in Q4.1)

Additionally, looking at /etc/lvm/lvm.conf in dom0, why is thin_pool_autoextend_threshold = 90?
I mean… Qubes is a single-user OS that deliberately makes multi-booting very difficult… so naturally, during install, I chose: sudo lvcreate -T -l +100%FREE -c 256K VolGroup0/vm-pool0

Is this thin_pool_autoextend_threshold = 90 what turned my Qubes into a brick? Are there negative consequences if I change this value to thin_pool_autoextend_threshold = 100? (Shouldn’t it be 100 by default?)
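For context, here is my understanding of the relevant knobs in the activation section of lvm.conf (the second setting is LVM’s stock default, not something quoted in this thread). When a monitored thin pool crosses the threshold, dmeventd tries to grow the pool by the given percentage; with the pool already created as +100%FREE there is no space left to grow into, so in Qubes the threshold seems to act only as the hard stop on creating new thin volumes that the error message above reports:

thin_pool_autoextend_threshold = 100    # 100 disables automatic extension
thin_pool_autoextend_percent = 20       # how much to grow by when triggered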


Changing that value is exactly what you should do in order to avoid what happened to you! There are no negative consequences, as far as I know. I even set mine to 65%: I can always raise it in 5% steps if needed, and that way I won’t forget about it but stay constantly aware of it.


Thanks for confirming @enmus

Just did a test, filling a new qube with data…
fallocate -l 260G test260G.img did not work, though: sure, Qube Manager indicated a large qube was present, but the “total disk usage” icon in Xfce did not move.
But dd if=/dev/zero bs=100G count=1 of=/home/user/bigfile3 did do the trick.
I’m at 96.4% disk usage now and got a warning in the top-right corner, but everything works as normal.
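That difference is most likely explained by thin provisioning: fallocate only reserves extents in the qube’s filesystem without writing data, so almost nothing reaches the virtual disk and dom0’s thin pool allocates no new chunks, while dd writes actual zeros and forces the pool to allocate. Note also that bs=100G asks dd to buffer 100 GiB in memory in one go; if anyone repeats this test, a smaller block size with a count does the same fill without the huge buffer (bigfile is just an illustrative name):

[user@testvm ~]$ dd if=/dev/zero of=/home/user/bigfile bs=1M count=102400 status=progress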

Can someone please inform the developers?*) When newbies brick their system like this, they might not know how to un-brick it.

[dom0 ~]$ nano /etc/lvm/lvm.conf
change:

thin_pool_autoextend_threshold = 90

into:

thin_pool_autoextend_threshold = 100
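To confirm the value LVM actually ends up using (after an edit, or if you suspect it was reverted), lvmconfig from the standard LVM2 tools prints the effective setting:

[dom0 ~]$ sudo lvmconfig activation/thin_pool_autoextend_threshold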

*) I know… report an issue via GitHub… but the thing is, GitHub does not accept new accounts using @guerrillamail.com… Why does everything need an account anyway?

The same thing recently happened to me. I was also moving large files between qubes; my vm-pool went over 90% and I couldn’t start any qubes anymore. The fix above helped.

Indeed, it would be helpful to change the default, since ordinary users may not be able to fix this easily.


What happens when the pool fills up and the threshold is already at 100%? Wasn’t that 90% threshold actually your savior?
I’m not sure; I just want to be.


I received this error after rebooting after stage 4 of the upgrade from Qubes 4.0 to 4.1, and this config change fixed it for me. Thank you!


Is it possible that this setting,
thin_pool_autoextend_threshold = 100,
somehow got overwritten?
I haven’t made any upgrades recently, but suddenly my qubes did not start because of the “reached threshold” issue. Some time ago I had the same problem and fixed it by setting the value to 100. It turned out the threshold had somehow been reverted to 90, and that caused the trouble again.
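One way to investigate, assuming dom0’s lvm2 is the usual Fedora package: rpm can report whether /etc/lvm/lvm.conf still differs from the packaged version (a surviving manual edit shows up flagged, e.g. with a 5 for a changed checksum), and an update that preserved your edit would normally leave an .rpmnew copy behind instead of replacing the file:

[dom0 ~]$ rpm -V lvm2 | grep lvm.conf
[dom0 ~]$ ls /etc/lvm/lvm.conf.rpmnew

If rpm -V prints nothing for lvm.conf, the file matches the package default, i.e. your edit is gone.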