Experienced ZFS user converting to Qubes

Hello, trying again here as my previous message didn't seem to go through on the blog?
Thank you for your website and all the interesting posts there!
About Qubes and ZFS, I'm wondering if there is a way to install Qubes on full ZFS (2x 2TB NVMe) like I did for OPNsense, or do I have to install Qubes first, then move part of it (e.g. the VM pool) to a ZFS partition, then move the rest to ZFS?
If so, then what happens if I crash and have to reinstall Qubes? Will I have to reformat to EXT2 (not EXT4, skipping journaling to spare the NVMe), reinstall, then re-migrate to ZFS?
Thank you !
Érica vH

Yes, regrettably.

3 Likes

I back up everything using Borg so this doesn’t happen to me.

You can also elect to back up using zfs send and receive using an intermediary VM and qvm-run -p to tunnel the data.
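As a sketch of what that tunnel might look like — pool, dataset, snapshot, and qube names below are all hypothetical, and I'm assuming the zpool lives in dom0:

```shell
# In dom0: snapshot the dataset, then stream it through qvm-run -p
# (--pass-io) into a qube that writes it to its own storage.
# "tank/data" and "backup-vm" are placeholder names.
sudo zfs snapshot tank/data@2024-06-01
sudo zfs send tank/data@2024-06-01 | \
    qvm-run -p backup-vm 'cat > /home/user/data-2024-06-01.zfs'

# Restoring goes the other way:
qvm-run -p backup-vm 'cat /home/user/data-2024-06-01.zfs' | \
    sudo zfs receive tank/data-restored
```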

2 Likes

Glad to see this conversation is still moving.

I got a Dell Precision 7740 with the intent of running Qubes on it, but it proved faulty. Then one of my elderly HP workstations started acting up. I received a pair of Apple laptops last summer and haven't touched Qubes-capable gear much since then.

I acquired an HP EliteDesk G5 in January and was surprised to find this platform has been on the HCL since version G1. I recently moved, have about triple the desk space I did, traded for a 32" Samsung monitor, and I got a nice four port KVM. I opened up the G5 last night thinking I’d pull the 240GB Windows 11 NVMe and install a 1TB in its place for Qubes. I was thwarted by a tiny Torx head screw and my toolkit went missing in the move, so it’ll be a bit before I get to this.

The G5 will run Qubes but doesn't have the internal capacity to experiment with mirrored drives - just one NVMe slot. I retired all my 3.5" disks, and my smaller supply of 2.5" drives is spread among Macs and a Raspberry Pi 5.

Has the state of ZFS progressed to the point where it’s ready for much more experimentation/writing about it? If so, I do have a dual 2.5"/3.5" USB drive carrier, will have to round up a couple small disks so I can dig into it.

1 Like

TY :slight_smile:
That's way above my level and most certainly calls for failure, so I'll stay away for now and wait for it to be integrated eventually …

It’s getting there.

As of February, you can now install ZFS on your Qubes system natively from the repos. This should be all you need to install (I have not tested it): qubes-dom0-update zfs zfs-dkms. From there, create your zpool and then designate it as a Qubes pool with qvm-pool.

On a fresh install, I recommend not migrating anything, but creating the smallest possible initial pool on LVM, where only your sys-qubes will live, and then making a big ZFS pool for everything you actually care about (and setting it as the default pool).
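Put together, a rough (untested) sequence might look like this — the device paths and pool name are placeholders, and the qvm-pool option is my assumption about the ZFS driver's interface:

```shell
# In dom0: install the ZFS packages from the repos.
sudo qubes-dom0-update zfs zfs-dkms

# Create the pool (here a mirror across two placeholder devices).
sudo zpool create qubespool mirror \
    /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2

# Register it with Qubes and make it the default pool for new qubes.
qvm-pool add poolname zfs -o container=qubespool
qubes-prefs default_pool poolname
```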

3 Likes

Thank you !
It is indeed a fresh install; I've just been postponing, week after week, the moment I actually do it (see post here) on a stock P15 Gen 2: i7-11850, 8 cores (16 threads), 24MB cache; 128GB DDR4 (maxed out); Intel UHD iGPU + NVIDIA RTX A2000 12GB GDDR6; 802.11ax Wi-Fi 6, 2.5GbE.
So I would have:

2GB USB key

SDD1: 1GB for /boot in EXT4 (non-encrypted)
SDD2: 750MB /boot/efi in ExFAT (non-encrypted)
SDD3: /key1 (16MB) to hold an encryption key
SDD4: /key2 (16MB) to hold an encryption key (now useless if all use a single key)
=> Boot sequence set to USB before drives, so USB-key-in = boot to Qubes (SDD1), USB-key-out = boot to Win10 (SDA1)

NVMe #1 512GB (478GB)

SDA1: 332GB for C:
SDA2: 1GB WinRE
SDA3: 40GB for D:
SDA4: /LVM (48GB) (since this NVMe is smaller = cheaper than the other big two, I put the heavy R/W here)
… L.Swap (16GB) for Qubes (knowing I have 128GB RAM)
… L.TMP (32GB) for Qubes
SDA5: /W.Swap (16GB) for Win
SDA6: /W.TMP (32GB) for Win

NVMe#2+NVMe#3 (RAID1) 1863GB

SDB1: 64GB for / in EXT2
Includes: /servicevms; /appvms; /updates; /vm-kernels; /backups
SDB2: 460GB (332+128) in ZFS for /vm-templates
SDB3: 1300GB in ZFS for /appvms

Would that work ?

It’s great to see ZFS supported by Qubes.

Before I move my zpool disks to my Qubes system, I have two questions.

  1. Installing ZFS on Qubes will let me use all of the normal [Fedora-based] commands, yes?

  2. The zpool will be a data (block) device that may be attached to the desired qube, yes?

1 Like

@quantum ,

When people talk about using zfs with Qubes, there are two entirely distinct uses they could be talking about.

  1. Installing zfs tools in dom0 and creating a zpool on which qubes volumes will be stored.
  2. Using zfs tools within a qube to work with (often external) zpools.

What’s being discussed in this thread is the 1st case.

But unless I misunderstand, it sounds like you want to attach zpool-containing devices to your Qubes system, and then attach and access the zpool from within a qube, which is the 2nd case. The 2nd case is not something that needs Qubes support; for that you probably want to install zfs in a qube template where the guest OS is either Fedora or Debian. Since zfs is installed as a dkms (kernel) module, any qube that runs zfs tools needs its kernel set to either pvgrub2-pvh or (provided by qube), and you need to make sure the dkms module is built for the specific kernel in your qube.
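A sketch of that setup, assuming a Debian-based template and a qube named "zfs-work" (both names are made up, and I haven't verified the exact package set):

```shell
# In the Debian-based template: install ZFS as a DKMS module.
sudo apt install zfs-dkms zfsutils-linux

# In dom0: make the qube boot a kernel the DKMS module can match.
# An empty kernel value means "kernel provided by the qube":
qvm-prefs zfs-work kernel ''
# ...or, alternatively, the pvgrub2-pvh kernel if it's available:
# qvm-prefs zfs-work kernel pvgrub2-pvh
```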

1 Like

The zpool and its contents do not occupy mount points in your filesystem (there is no "/vm-templates"). ZFS-backed qubes pools contain only zvols (block devices) to be used as the volumes for your qubes. The creation and assignment of the zvols to individual qubes is handled automatically by Qubes OS's built-in ZFS qube-pool driver.

As for what you're calling SDB, I don't see any reason to split it into multiple partitions/zpools. If you want that kind of logical separation between templates, appvms, etc., do it within the zpool itself rather than at the partition-table level.

I also recommend against putting ZFS on top of RAID; that's what ZFS mirror vdevs are for.
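Concretely, that might look like this — pool and dataset names are hypothetical:

```shell
# Redundancy via a ZFS mirror vdev, not md/hardware RAID underneath:
sudo zpool create tank mirror \
    /dev/disk/by-id/nvme-EXAMPLE-1 /dev/disk/by-id/nvme-EXAMPLE-2

# Logical separation inside the pool, via datasets instead of partitions:
sudo zfs create tank/templates
sudo zfs create tank/appvms
```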

1 Like

@likeafox

Thank you. Yes, I am looking at case #2.

Perhaps it’s the odd case, but my Qubes are typically not that large and I attach specific drives for specific functions. I think I’ll follow your lead, as the drives I use are typically only attached, as needed, to one specific Qube.

1 Like

I do something similar. Or at least I can; I have the capability to mount ZFS drives onto some of my VMs. (I can’t get ZFS to get along with wifi, only with ethernet. Fortunately, I don’t need to do both in the same VM anyway.)

It's unfortunate that I cannot simply set my Debian qubes' kernel to a Debian kernel, because the Fedora ones that are offered are incompatible with ZFS when used on a Debian template. Likeafox has pointed you in the proper direction to work around this.

Thank you for the info. I’m using zfs on a separate Debian machine and will move the zpool devices to my [big] Qubes machine this weekend, using Debian-based Qubes to manage them. I never touch the internet with these devices, so no worries about ethernet-only access.

2 Likes

Hello,
I've read all three guides and I think I understood it all (well, most of it).
Since I have two 2TB NVMe drives, it will be easy to create one system (non-encrypted) on the first, then clone it to a ZFS one on the second (and alter the boot accordingly).

Now what if I want to use the first (with initial sys) as a mirror of the second (with new sys) ?

And what about my third NVMe (for vg_runtime: swap, /tmp, /log, /var/tmp)?
Should I put it on ZFS (not mirrored) too, or doesn't it matter much?

And then what if (when!) I crash the system and need to reinstall it? Will I have to go through all these steps again (including the mirroring) and then restore my backup?

Thank you again, that’s a big (and great) job !

Really? It is that simple?
Once I've created the zpool on the second drive, I just have to zpool attach mypool /dev/disk1 /dev/disk2 et voilà?
How does ZFS know I want it as a mirror and not just an extension?

You decide the type of pool during creation:

  1. striped pool (extended)
zpool create "test-striped-pool" /dev/sda1 /dev/sda2 /dev/sda3
  2. mirror pool
zpool create "test-mirror-pool" mirror /dev/sda1 /dev/sda2
  3. raidZ2 pool
zpool create "test-raidz2-pool" raidz2 /dev/sda1 /dev/sda2 /dev/sda3

You cannot change/convert a pool's layout/type after creation (the one exception: zpool attach can turn a single-disk vdev into a mirror).

PS: it's best to use the device ID instead of the device name, since names can change between boots while the ID comes from the device firmware

ls -l /dev/disk/by-id/

the first (blue) part of each entry is the device ID
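So a creation command using those IDs might look like this (the IDs below are made up):

```shell
zpool create "test-mirror-pool" mirror \
    /dev/disk/by-id/nvme-ExampleVendor_SSD_SERIAL1 \
    /dev/disk/by-id/nvme-ExampleVendor_SSD_SERIAL2
```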

PS: check the status and layout of the created pool

zpool status "testpool"
1 Like

Thank you !
So there IS an arg. to make sure it is a mirror :slight_smile:

But in this case, since the install needs to happen in two steps, I would need to make a zpool and then attach the second disk.
So I would either have a "normal" pool, then turn it into a mirror pool,
OR make a mirror pool on only 1 disk, and then attach the second to make it an actual mirror?

That's where I get stuck :confused:

You can make a mirror from 2 disks (you can't create a mirror from 1 disk), and you can detach one disk so the mirror becomes incomplete, but any disk that you attach to a mirror gets erased and resilvered from the existing data.
So you can't install on a single disk, then attach that disk to a mirror pool, and have the installation survive.
And if you can't create a ZFS pool during the install, how do you plan to attach an LVM/EXT4/BTRFS disk to ZFS?

Your only option is to install on BTRFS on 2 small SSDs in a BTRFS RAID1, complete the installation, log in to the system, install OpenZFS in dom0, create a ZFS pool, and attach it to Qubes OS to use as the VM pool.

But I don't know how. Manual partitioning was too much for me when I installed Qubes OS, and now I won't be experimenting on my working machine.

1 Like

Thank you. From what I understood of @Rudd-O's guides, I should go like this:

  • NVMe0n1
    … 64GB swap swap SWAP
    … 32GB EXT2 /tmp TEMP1
    … 32GB EXT2 /var/tmp TEMP2
    … 110GB EXT2 /var/log LOG
  • NVMe1n1
    => Install Qubes with minimum size
    … 640MB xFAT /boot/efi (and an empty 640MB on NVMe2)
    … 1024MB EXT4 /boot (in soft. RAID1 with NVMe2)
    … 80GB EXT4 / (in soft. RAID1 with NVMe2)
    … 1.7TB ZFS /var/lib/qubes (with NVMe2n1 in the ZFS pool)
  • NVMe2n1
    … 640MB mirror of /boot/efi, empty
    … 1024MB mirror of /boot
    … 80GB mirror of /
    … 1.7TB ZFS pool for /var/lib/qubes
    Then transfer the qubes to the ZFS pool,
    Then make it the default qubes location

Am I right ?

ZFS encryption seems to be aes-256-gcm by default.

  • How do I set it up (while creating a mirror of two NVMe drives for /var/lib/qubes)?
  • How is it integrated into Qubes? Will I get a prompt before boot (I don't think so), or will I have to boot Qubes and then, once inside, mount/open the ZFS pool, meaning a break in integration, with VM qubes not loaded at startup (I'm worried about the sys VMs)?

Other Q: Is there any benefit in ZFS'ing my single 256GB NVMe which holds the swap, temp, and logs?

And finally, does my 8TB backup USB HDD need to be ZFS as well? (knowing that only my /var/lib/qubes will be ZFS)

Thank you !