Pools and varlibqubes in Qubes OS 4.3

Hello. Does varlibqubes still use the file driver in the new version? fstrim doesn’t work there.

The file driver still exists, but it is not used by default if you chose the default thin-provisioned LVM + ext4 partitioning scheme, or a sane modern filesystem (such as Btrfs) which supports the file-reflink driver.
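
If you are not sure which drivers your installation offers, or how a given pool is configured, you can check in dom0. As far as I know qvm-pool has had drivers and info subcommands for a while, but run qvm-pool --help if your version differs:

qvm-pool drivers
qvm-pool info varlibqubes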

Please confirm what you have with:

qvm-pool list

In dom0. The driver should be lvm_thin, file-reflink, or zfs for varlibqubes. If it is file, consider backing up your qubes and doing a complete re-installation.

$ qvm-pool list
NAME          DRIVER
varlibqubes   file
linux-kernel  linux-kernel
vm-pool       lvm_thin

Could you please tell me where and how I should specify a new driver for dom0 in a fresh installation? I would like both dom0 and the vm-pool to use the lvm_thin driver. In other words, I’d like Qubes to look like a standard installation, but with dom0 set up like vm-pool (with trim support). I’m not a very experienced Qubes user and I’m not confident when it comes to partitioning. Or does the new ISO use a different driver for varlibqubes during the default installation with automatic partitioning? I installed Qubes 4 months ago.

Not good.

Please elaborate. Did you choose the default partitioning during installation, or did you select custom partitioning? This is important. If the automatic partitioning did that, a fix is necessary.

I chose default automatic partitioning. I didn’t do it manually - just selected my SSD and entered the LUKS password.

And which pool are the templates using, the varlibqubes one or the vm-pool one? You can find this out with:

qvm-volume list TEMPLATE_NAME
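
For example, on a system where the templates live in the lvm-thin pool, the output would look roughly like this (illustrative only; fedora-42 is just an example name, and the columns may differ on your version):

$ qvm-volume list fedora-42
POOL          VMNAME     VOLUME_NAME  REVERT_POSSIBLE
vm-pool       fedora-42  private      Yes
vm-pool       fedora-42  root         Yes
vm-pool       fedora-42  volatile     No
linux-kernel  fedora-42  kernel       No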

And what is the default pool? You can check it with:

qubes-prefs | grep pool

In dom0
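
If the defaults are intact, the output should look something like this (illustrative only; the exact properties and flags can vary between releases):

$ qubes-prefs | grep pool
default_pool           D  vm-pool
default_pool_kernel    D  linux-kernel
default_pool_private   D  vm-pool
default_pool_root      D  vm-pool
default_pool_volatile  D  vm-pool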

I’m using Fedora 42, Debian 12, and Debian 13 templates. The default pool is vm-pool, but I created some AppVMs in varlibqubes and I can’t run fstrim on them.

@marmarek Is this normal? The user used the default partitioning, and varlibqubes is using the file driver.

It might be possible to do this:

qvm-clone -P vm-pool PROBLEMATICVM NEWVM

This will clone the problematic AppVM into a pool that supports trim. Then you check it, and then you delete the old one.
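
A minimal sketch of that check, run in dom0 (qvm-run and qvm-remove are standard tools; -u root runs the command as root inside the qube):

qvm-volume list NEWVM
qvm-run -u root NEWVM 'fstrim -av'
qvm-remove PROBLEMATICVM

The first command should show all volumes except kernel in vm-pool, the second should report trimmed bytes instead of failing, and only then would I remove the old qube.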

But please always have backups. And then I would recommend waiting for better advice from Marek.

Yes, thank you. I have to do it this way - clone my qube into vm‑pool, run fstrim, then clone it back into varlibqubes. I hope this inconvenience can be fixed :slightly_smiling_face:

No. varlibqubes will not support trim. It is cursed :confused:

I mean, if I want to keep a qube in varlibqubes and still trim it, I have to clone it to vm-pool, run fstrim, and then clone it back - every time I need to trim that particular qube.
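
For the record, that round trip would look roughly like this in dom0 (mybox being a placeholder for my qube’s name; as I understand it, the qube has to be shut down before each clone):

qvm-clone -P vm-pool mybox mybox-tmp
qvm-run -u root mybox-tmp 'fstrim -av'
qvm-shutdown --wait mybox-tmp
qvm-remove mybox
qvm-clone -P varlibqubes mybox-tmp mybox
qvm-remove mybox-tmp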

Now that I think about this again: varlibqubes is a dummy pool (on lvm-thin systems) which represents the space reserved for dom0, so that you cannot fill up the dom0 volume and make the system unbootable. You may look at this thread:

The varlibqubes pool is always created, but it is not the default pool if LVM is selected, and it should not really be used. It’s mostly there to include dom0 space monitoring in the disk space widget.

Nevertheless, I’d like to clear the cache and shrink the qube size. I like having 2 pools in LVM. I’m not sure whether it’s possible to have 2 pools in Btrfs.

Yes. It is possible
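
If I remember the secondary storage docs correctly, adding a second pool on Btrfs goes something like this in dom0 (the directory path is just an example; it should sit on a filesystem with reflink support, and qvm-pool add --help will show the exact option names):

qvm-pool add pool2 file-reflink -o dir_path=/var/lib/qubes-pool2
qvm-pool list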

Thank you for the response. Does varlibqubes have to use the file driver on LVM, or could that be fixed in the future?

I don’t think I’ve seen any guide on how to do it. Could you please write it up when you have some free time? :pray:

Sure. But first:
A note to the forum @moderators
Could you kindly move this (relatively) long conversation on pools and varlibqubes to another topic, so we can follow it more easily?

So that the second pool isn’t tied to dom0 (like vm-pool in LVM).