Why doesn't Qubes 4.1 use full disk space (LVM)?

Hello,

I am currently migrating to Qubes OS 4.1.

One thing I have noticed: the Qubes Disk Space Monitor shows me a disk size of ~160 GB instead of the available 250 GB (vm-pool data column). In addition, pvs reports ~50 GB of free disk space.
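For reference, this is roughly how I checked those numbers (pool name taken from my setup, adjust as needed):

sudo pvs                                                        # PFree shows ~50 GB unallocated
sudo lvs qubes_dom0/vm-pool -o lv_name,lv_size,data_percent     # pool size and fill level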

The issue has been raised in test scenarios, but it is still not clear to me:

  1. Is there any good reason for Qubes not to use the full disk space, or is this considered a bug / current limitation in 4.1?
  2. I am not an expert on LVM, hence: is this just cosmetic in nature and LVM will auto-extend the pool once a certain threshold is reached, or is a manual fix needed?

A different post mentions a manual workaround with lvextend:

lvextend -L +99.97GiB /dev/mapper/qubes_dom0-vm--pool -r
  1. Is this fix still valid for the current 4.1 RC2, and is it considered safe?

For the purpose of reclaiming all free space, I would also propose a slight change:

sudo lvextend -l +100%FREE /dev/mapper/qubes_dom0-vm--pool -r
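To verify the result afterwards, I assume something like this should be enough (PFree dropping to about 0 and vm-pool showing the new size):

sudo pvs               # remaining free space in the volume group
sudo lvs qubes_dom0    # new size of the vm-pool LV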

Appreciate any clarifications, thanks!


(Note for administrators: My previous post has been marked as spam by Akismet, I don’t know why. Hence the new post)

1 Like

IIRC the LVM config was changed for 4.1:

  1. dom0 LVs and VM LVs are now in different thinpools under 4.1. Problems with the VM thinpool are now much less likely to prevent booting into dom0 (this allows better recovery options).
  2. the total space asked for at install is less than 100%.
  3. however, the thinpools have both been set to autoextend into unused space in the VG.

By not claiming all the space in the VG, 4.1 will grow each thinpool into unused space as needed, at least until all space is allocated to thinpools. Presumably, this has the benefit of allowing the dom0 pool to grow, should a large number of kernel versions/initramfs images exceed the default config, assuming the VM thinpool hasn’t grown to use all space.
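If you want to see this on your own system, something along these lines should show both thinpools and the unallocated space left in the VG (field names may need adjusting):

sudo vgs qubes_dom0 -o vg_name,vg_size,vg_free                         # unallocated space available for autoextend
sudo lvs qubes_dom0 -o lv_name,lv_size,data_percent,metadata_percent   # the thinpools and their fill levels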

Brendan

1 Like

Yes, there is one. It’s a safety margin if you run out of space in either pool data or metadata. The autoextend feature is enabled, so when either of them is almost full (90% by default), it will be automatically extended.
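The relevant lvm.conf settings can be inspected with lvmconfig, e.g.:

sudo lvmconfig activation/thin_pool_autoextend_threshold   # fill level (in %) that triggers an extension
sudo lvmconfig activation/thin_pool_autoextend_percent     # how much (in %) the pool grows each time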

3 Likes

@brendanhoar @marmarek Thank you very much for clarifications.

I am not sure if there is already a feature request: it would then make sense to have the Qubes Disk Space Monitor show the absolute disk size instead of the reserved one, for a better user experience.

1 Like

I just changed my SSD to a bigger one. I cloned my old drive onto the new one, resized the LUKS volume, then resized the LVM physical volume, and now I want to increase vm-pool. How much space should I leave free, and how do I determine it? For now I use +90%FREE, but I think this is not an ideal approach.
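By +90%FREE I mean something like this (pool path taken from the earlier posts):

sudo lvextend -l +90%FREE /dev/mapper/qubes_dom0-vm--pool
sudo vgs qubes_dom0 -o vg_free   # check how much is still left unallocated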