Is it possible to install VMs on btrfs subvolumes?

I’ve installed Qubes with all the defaults. Everything is fine, and it behaves excellently for common tasks. However, when I try to compile a big project, the free space runs out. I know that I can resize the logical volume, but if it’s possible to install VMs on Btrfs subvolumes, I would prefer that, to save myself from those resize operations. :wink:

It’s not possible.

While you can use Btrfs with the ‘file-reflink’ Qubes OS storage driver (instead of LVM with the ‘lvm_thin’ storage driver), all storage drivers provide storage to the VM in the form of virtual block devices. Providing a dom0 filesystem subtree (e.g. a Btrfs subvolume) to a VM would be much more complex, and too hard to do securely.
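For reference, a Btrfs-backed ‘file-reflink’ pool can be added as a secondary storage pool with `qvm-pool`. A sketch, assuming a Btrfs filesystem is already mounted at `/var/lib/qubes-btrfs` (hypothetical mount point, and pool name `poolbtrfs` is just an example); note that older Qubes releases used the `qvm-pool --add` syntax instead:

```shell
# Assumes a Btrfs filesystem is already mounted at /var/lib/qubes-btrfs
# (hypothetical path -- adjust to your setup). Run in dom0.
qvm-pool add poolbtrfs file-reflink -o dir_path=/var/lib/qubes-btrfs

# Create a new VM directly in the new pool:
qvm-create -P poolbtrfs --label red mynewvm
```

The VM's disk images then live as regular files on the Btrfs filesystem, with copy-on-write snapshots done via reflinks instead of LVM thin volumes.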

Thanks a lot for your reply. I’ll try ‘file-reflink’ later. Actually, I have a 1 TB SSD on which Qubes OS is installed. There are also four unused 2 TB SSDs and several HDDs. Could you give me some advice on how to use those disks? Would it be better to make a RAID-0 lvm_thin pool and install Qubes OS on it (if possible)? Or should I just make a secondary storage pool?

That’s a lot of disks, not bad.

I’ve never tried creating a RAID with the installer. Theoretically it should be possible, but the installer is often a bit of a fight once you step outside its automatic installation layouts in any way.

Other than that, I guess it’s the usual tradeoff: RAID-0 would be more convenient to use than managing one or even several independent secondary storage pools and having to think about where to put which VMs, but it’s a bigger mess if one of the disks fails.

That’s true, it’s a difficult choice. I was wondering: could I set the RAID pool as the default vm-pool and then remove the original one?

Yes, although to be able to remove the original storage pool you’d have to migrate all the VMs in it to the new storage pool, or rather clone them and fix up qrexec policy etc. with the new VM names. There’s no automated way to do this.

A third option would be to add the other disks to the existing LVM volume group. UNTESTED:

  1. Have working backups, and be prepared to reinstall the system if something goes wrong

  2. `cryptsetup luksFormat` each new disk with the original LUKS passphrase (so you don’t have to enter multiple passphrases at boot). There’s no need to partition the disks

  3. Add a `rd.luks.uuid=...` parameter for each new LUKS UUID (in lowercase) to `GRUB_CMDLINE_LINUX` in `/etc/default/grub`, then regenerate `grub.cfg`

  4. Reboot and check that the new LUKS devices have become available as e.g. `/dev/mapper/luks-u-u-i-d-x`, `/dev/mapper/luks-u-u-i-d-y`, …

  5. `vgextend qubes_dom0 /dev/mapper/luks-u-u-i-d-x /dev/mapper/luks-u-u-i-d-y ...` (which will run `pvcreate` automatically) to add them to the volume group

  6. I think you also have to extend the thin pool? Something like `lvextend -l +90%FREE qubes_dom0/vm-pool`

  7. It might be necessary to `systemctl restart qubesd.service` and `killall qui-disk-space` to immediately refresh the disk space information
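Taken together, the untested steps above would look something like this in dom0. The device names `/dev/sdb` and `/dev/sdc` are placeholders for your new disks, and the `grub.cfg` path may differ on EFI systems; double-check everything before running, since `luksFormat` destroys all data on the disk:

```shell
# UNTESTED sketch of the steps above; /dev/sdb and /dev/sdc are
# placeholder device names. luksFormat DESTROYS all data on the disk.
sudo cryptsetup luksFormat /dev/sdb   # use the same passphrase as the boot disk
sudo cryptsetup luksFormat /dev/sdc

# Find the new LUKS UUIDs (lowercase) for the kernel command line:
sudo cryptsetup luksUUID /dev/sdb
sudo cryptsetup luksUUID /dev/sdc

# Append a rd.luks.uuid=<uuid> entry per disk to GRUB_CMDLINE_LINUX in
# /etc/default/grub, then regenerate grub.cfg (path may differ under EFI):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# After a reboot the devices should appear as /dev/mapper/luks-<uuid>.
# Add them to the volume group (runs pvcreate automatically) and grow
# the thin pool:
sudo vgextend qubes_dom0 /dev/mapper/luks-<uuid-x> /dev/mapper/luks-<uuid-y>
sudo lvextend -l +90%FREE qubes_dom0/vm-pool

# Refresh Qubes' view of the available disk space:
sudo systemctl restart qubesd.service
killall qui-disk-space
```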


Thanks for all your patient replies. I think ‘file-reflink’ is still the best solution for me at the moment.
