Simple ZFS mirror on Qubes 4.3

I have a laptop with a couple of small, equally sized drives and I want to experiment with Qubes on ZFS. I’m murky on the best way to do this (I’m murky on Qubes’ storage architecture in general, honestly). Like, some high-level questions:

  • Presumably Qubes will still be using LVM; in a sensible storage configuration, how do ZFS and LVM work together?
  • LUKS is still involved and should be handling encryption rather than using ZFS encryption, right?
  • Do I need to give special consideration to booting, under coreboot + Heads?
  • Do I need to shell out to a virtual terminal at some point during Qubes installation for zfs/zpool intervention? Or is the new Qubes 4.3 installer ‘ZFS aware’?

A year or two ago I browsed @Rudd-O’s guides:

I suspect these have become a little out of date. Are there other ZFS-on-Qubes resources I should look to?

Thanks!

Answering my questions:

LVM not needed unless you choose to use it. ZFS can do its own volume management. Perhaps you use LVM to hold (1) your dom0 root, (2) your swap, (3) your ZFS pool holding your VM pool. Or perhaps you don’t use LVM, and your ZFS pool has three datasets: (1) your dom0 root, (2) your swap, (3) your VM pool.
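The second layout can be sketched roughly like this. This is illustrative only: the pool name `qubes`, the device paths, and the sizes are placeholders, not what the guides necessarily use.

```shell
# Mirrored pool from two equally sized drives (device paths are examples):
zpool create -o ashift=12 qubes \
    mirror /dev/disk/by-id/drive1 /dev/disk/by-id/drive2

# Datasets for dom0 root and the VM pool:
zfs create -o mountpoint=/ qubes/root
zfs create qubes/vm

# Swap as a fixed-size zvol (compression off is the usual advice for swap):
zfs create -V 8G -b 4096 -o compression=off qubes/swap
mkswap /dev/zvol/qubes/swap
```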

ZFS encryption is no longer promoted by ZFS evangelists. It’s easy to create a LUKS partition around a ZFS pool.
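For a mirror, “LUKS around the pool” in practice means one LUKS container per disk, with the pool built on the mapper devices. A hedged sketch (mapper names `luks-a`/`luks-b` are arbitrary):

```shell
# One LUKS container per underlying disk:
cryptsetup luksFormat /dev/disk/by-id/drive1
cryptsetup luksFormat /dev/disk/by-id/drive2
cryptsetup open /dev/disk/by-id/drive1 luks-a
cryptsetup open /dev/disk/by-id/drive2 luks-b

# The mirror then lives entirely inside the encrypted containers:
zpool create qubes mirror /dev/mapper/luks-a /dev/mapper/luks-b
```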

Not for coreboot + Heads specifically, but yes special consideration to booting if you migrate dom0 root to ZFS. I needed to install @Rudd-O’s grub-zfs-fixer package for grub2-mkconfig to succeed during migration.
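With that package in place, the usual config regeneration is what needs to succeed; something along these lines (the output path depends on whether your system boots via BIOS or EFI):

```shell
# Regenerate the GRUB config with dom0 root on ZFS;
# this is the step that fails without the grub-zfs-fixer package:
grub2-mkconfig -o /boot/grub2/grub.cfg
```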

All of the ZFS setup happens post-installation: you need to install the zfs package via qubes-dom0-update (it pulls in zfs-dkms), plus zfs-dracut if you migrate dom0 root to ZFS.
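Roughly, from a dom0 terminal (the qvm-pool invocation assumes a ZFS pool driver is available and that `qubes/vm` is the dataset you created for VMs; exact option names may differ between driver versions):

```shell
sudo qubes-dom0-update zfs           # includes zfs-dkms
sudo qubes-dom0-update zfs-dracut    # only if dom0 root will live on ZFS

# Register a ZFS-backed Qubes storage pool on an existing dataset:
qvm-pool add mypool zfs -o container=qubes/vm
```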

I more-or-less followed the process in the guides above. Maybe I’ll post again with a summary of my notes.

ZFS likes a loooooot of memory for daily operations, even with a mirror. And it’s constantly reading from and writing to the pool. If you have 12 GB for dom0 alone and ZFS on spinners, then it’s fabulous. I have TrueNAS Core with 2×8 TB WD server spinners, and ZFS takes up to 80% of its 16 GB of system memory when idle.

The ARC / read cache will expand to use as much free memory as is available, up to a settable threshold, with the justification that free RAM is just dormant silicon, while a read cache is potentially useful. But the memory is reclaimable. (It doesn’t show as available in `free`, though.)
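If the default ceiling is too aggressive for a Qubes dom0, the threshold is the `zfs_arc_max` module parameter. A sketch, assuming a 2 GiB cap (the value is bytes; pick your own):

```shell
# Persistent cap, applied when the zfs module loads:
echo "options zfs zfs_arc_max=2147483648" | sudo tee /etc/modprobe.d/zfs.conf

# Or adjust it live:
echo 2147483648 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Inspect current ARC size (bytes):
awk '/^size/ {print $3}' /proc/spl/kstat/zfs/arcstats
```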

But ZFS constantly reads sectors and rewrites old ones to prevent bit rot.
So, for one, it’s bad for flash storage, and two, it needs more memory for operations.

It’s correct that ZFS needs more memory to operate. How much is up to the user, but definitely more than a traditional filesystem like ext4.

Scrubbing is a data integrity feature, and a good thing; AFAIK it’s more about reading than writing (writing, yes, when there is recoverable corruption). Bad for flash? I’m not sure it’s so simple. Maybe so. But for one, compression – on by default, and generally pretty effective – means less disk space used, hence less writing, hence better for flash.
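You can check both claims on a live dataset; here assuming the `qubes/vm` dataset name from earlier in the thread:

```shell
# Confirm compression is enabled and see how well it's actually doing:
zfs get compression,compressratio qubes/vm

# Scrub status (reads scanned vs. data repaired) after a scrub:
zpool status qubes
```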