Experienced ZFS user converting to Qubes

I have been using Linux since Red Hat 3.0.3 and I’ve periodically looked at Qubes since version 2.0. The combination of the advances in the OS and what I’m doing work-wise makes now seem like the right time to shift at least some of what I do with a mix of Ubuntu, VirtualBox, and Proxmox to Qubes.

I have two workstations in my office, both HP Z420s. The production machine has Ubuntu Budgie 22.04 and an Nvidia GTX 1060. It has a slightly less capable sibling with a Radeon RX460 that just got Qubes 4.1.2 installed this morning. I was excited to see that Authy, Maltego, and Signal were easily installed, and I did a clumsy but tolerable OpenVPN setup. This is enough that some of what is in VirtualBox can be moved to Qubes.

I have been looking here and Googling, and what I see thus far is that Qubes does not yet do what I’d want in terms of storage. The systems I have get data-center-grade SSDs with ext4 for startup, with partition space left free for ZFS cache duties, and I run pairs of NAS drives for storage. Since ZFS doesn’t care if a cache drive craters, I’ve been running consumer-grade PCIe NVMe storage in some systems.

My desktop has a Seagate Nytro SATA system drive that will take three drive writes per day, an HP Z-Turbo PCIe card is handling caching, there is a mirrored pair of 1TB WD Red 2.5" NAS drives holding VMs + /home, and I just swapped out a 4TB IronWolf for a 2.5" Barracuda 5TB. I don’t mind the single consumer 5TB in this role since what’s on it is replicated in a couple of places; cool and quiet were the key drivers for choosing it. Worst case, I would be annoyed for a day or two waiting for a replacement drive.

Is my norm of a stout boot drive and a pair of NAS drives for the important stuff something that’s already documented and I’ve just not found it? Or am I going to be replacing 4.1.2 with 4.2.0rc5 and writing a howto once I’m happy with my setup?

And if ZFS is really so new for Qubes that it’s only just appearing in 4.2, I suspect the management rituals to take advantage of it are also not mature. I’m quite comfortable mirroring drives, adding/removing cache and log volumes, moving drives between systems, and I love the snapshot function. I have some badly behaved stuff I have to support and the lightning quick snapshot/rollback of ZFS permits me to just wade in swinging on a VM, knowing putting it back to original condition will only take a few seconds.
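For anyone who hasn’t used ZFS that way, the ritual I mean is roughly this (pool and dataset names made up for illustration):

```
# Snapshot the dataset backing a test VM before wading in...
zfs snapshot tank/vms/testbox@known-good
# ...and put it back to original condition in a few seconds:
zfs rollback tank/vms/testbox@known-good
```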

If there is some secret garden of ZFS wisdom, I would love to have the link to it. If this does not yet exist, that would also be useful knowledge to have.

You can check @Rudd-O’s notes:

After much digging around on here, I found the ZFS install formula on @Rudd-O’s site.

But I see other things here indicating ZFS isn’t included in 4.2, and overall it doesn’t look as baked in as ZFS support is for Proxmox - in that environment you can just add a ZFS pool, then point, click, and stuff will move out of LVM and into ZFS.

I see mention of BTRFS, which I’ve not used at all - it’s in Proxmox, but labeled something like “technology preview”.

The manner in which I use ZFS with VirtualBox VMs is as a security blanket - it’s a snap to roll a VM back to a known good state. I think Qubes already has built-in rituals for this. The other things I value are the ability to mirror drives and the cache functions. Pairs of large, helium-filled NAS drives feel like flash if you dedicate 1% to 2% of their size in SSD cache space.
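Roughly what I mean, with made-up device names:

```
# Mirrored pair of NAS drives as the pool...
zpool create tank mirror \
    /dev/disk/by-id/ata-WDC_WD10JFCX_AAAA \
    /dev/disk/by-id/ata-WDC_WD10JFCX_BBBB
# ...plus a small SSD/NVMe partition added as L2ARC read cache
zpool add tank cache /dev/disk/by-id/nvme-HP_Z_Turbo-part4
```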

Given the Qubes focus on security through virtualization, rather than trying to shore up security after virtualizing, which is the norm with Proxmox and VirtualBox, I probably have some wants that originate in my service provider engineer background that are not needs for the Qubes user base overall.

Even so, the system is SO much smoother than the last time I tried to use it. I may end up with a quick change drive bay in my backup workstation so I can switch hit between Proxmox and Qubes as needed.

haha whoops, I found it on my own, then focused enough to read your post and link. I plead late night and poking around here using one hand while fooling with my new Qubes install with the other :slight_smile:

There seems to be a licensing problem:

This says:

Yes, that’s right — a ZFS-backed qube storage driver will be available in Qubes OS 4.2 as a storage option for Qubes OS qubes.

Read the whole thread, cringing all the way, because it’s an interesting problem and stirs the compulsive system tuner in me. If I wade in, I’d be replicating things being done by others who are much more qualified than I, at the price of less attention on things that won’t get done in this world unless I do them.

I’m going to go have a look at what’s going on with Proxmox and supported file systems. I like the idea of ZFS everywhere, but if both Proxmox and Qubes have solid support for BTRFS, it may be worth climbing the learning curve.

Western Digital Red 1TB NVMe drives are $85 on Amazon. This may become a problem that even I can afford to simply throw money at, if data-center-grade storage is really that cheap.

For @Rudd-O: I think this might need a correction:

zpool create laptop /dev/disk/by-id/dm-uuid-*-luks-`blkid -s UUID -o value /dev/sdb3`

Is there a reason this command does not have -o ashift=12 included?
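For reference, ashift is fixed per vdev at creation time and can’t be changed afterwards; forcing 4 KiB alignment would look something like this (same command, just with the property added):

```
# Newer OpenZFS usually auto-detects the sector size, but drives that
# report 512e sectors can fool the detection, hence forcing ashift=12.
zpool create -o ashift=12 laptop \
    /dev/disk/by-id/dm-uuid-*-luks-$(blkid -s UUID -o value /dev/sdb3)
```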

It’s late, I’m tired, but didn’t want to walk away without closing the loop on some things while they’re still top of mind for me.

What I see from @Rudd-O looks like using ZFS as a full featured file system.

What I saw in the developer chatter thread on Github seemed to be focused on using ZFS zvols, the virtual block device feature. If ZFS is used purely in that fashion, wouldn’t the zvol require something like LVM layered over it in order to be used? And if that’s the case, what benefit is there in doing that rather than just giving LVM the bare partition?

I’ve used zvols for the sake of building contraptions - like exploring Ceph without dedicating physical hardware to the process. I can’t envision a production use for zvols, not with any of the systems I use.
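For the curious, that kind of contraption is only a couple of commands - names made up:

```
# A zvol is just a block device ZFS exports under /dev/zvol/,
# so it can stand in for a physical disk when experimenting.
zfs create -s -V 20G tank/scratch/osd0      # sparse 20 GiB volume
ls -l /dev/zvol/tank/scratch/osd0           # appears as an ordinary block device
```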

So, now that 4.2 is out, is ZFS a thing? I don’t see it mentioned anywhere in the release notes.

There isn’t any ZFS in the base 4.2.0 but there are good instructions from @Rudd-O on how to do it.

I anticipate having to steer others using Qubes, so I’ve not done that yet, since it’s a bit involved. BTRFS will suffice for the moment.

I think the actual ZFS pool driver is included in 4.2.0:
https://github.com/QubesOS/qubes-core-admin/pull/522
What @apparatus posted is an older PR that has been superseded by PR #522.

And core-admin has included the ZFS driver since 4.2.7: core-admin v4.2.7 (r4.2) · Issue #3643 · QubesOS/updates-status · GitHub

Does this mean we should wait for 4.2.7 and not expect it in 4.2.0?

And 4.2.7 is an internal version number for one tool, not the full release ID, yes?

BTRFS has proven a tolerable alternative thus far for a single drive system. I have some equipment coming out of service that’s going to permit me to build a Qubes box with a boot SSD and a pair of 1TB spindles. We’ll see how that goes …

Yes, that’s right.

This thread is about an experienced ZFS user converting to Qubes. I have zero experience with ZFS. I have read nice things about it, as well as some worrying ones with regard to significant SSD write amplification (additional links in the article itself):

Being experienced in all that, what would you say about it?

I think it’s best to avoid ZFS on root. Put the ZFS pool on a separate disk, pass that drive through to a VM, and mount it there. Any issues?

By the way, it’s a long shot, but can you pass through different ZFS datasets on a separate drive to different VMs? How about partitioning a separate drive and passing through different partitions containing different ZFS pools to different VMs? Something like the sketch below is what I’m imagining.
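(Device, pool, and qube names here are invented, and the qube’s template would need the ZFS tools installed:)

```
# In dom0: hand one partition to one qube and another to a second qube.
qvm-block attach work dom0:sdb1
qvm-block attach vault dom0:sdb2
# Inside each qube: import the pool that lives on the attached device,
# which typically shows up as /dev/xvdi.
sudo zpool import -d /dev/xvdi workpool
```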

That’s correct. I avoid consumer-grade SSDs; I trashed some Samsung stuff before I switched to Seagate Nytro drives rated for three drive writes per day. They no longer make the small SATA drives I favored. I think the future for me will be Seagate IronWolf NVMe SSDs, but they’re limited to 0.7 DWPD. Even so, that should be OK - I got out of the ArangoDB/Elasticsearch business when the free Twitter API died, and that’s where the bulk of the usage arose.

Thanks for the feedback.