Does Qubes use the Cloudflare dm-crypt patches?

In 2020, Cloudflare published a series of patches to dm-crypt that brought it to near-native speeds. Given that Qubes uses the same full-disk encryption scheme, were these patches ever incorporated into Qubes OS? They’re stable enough to have made it into the Linux kernel, and any concerns about architecture are moot given that Qubes is x86-only.


The new flags aren’t used by default, although R4.2+ ships a new enough cryptsetup to support cryptsetup refresh [--persistent] --perf-no_read_workqueue --perf-no_write_workqueue. I haven’t tried it.
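If anyone wants to experiment, roughly the commands I mean (luks-xxxx is a placeholder; find your actual mapping name with sudo dmsetup ls, and expect refresh to prompt for the passphrase):

```shell
# These flags need cryptsetup >= 2.3.4 and a LUKS2 header
cryptsetup --version

# "luks-xxxx" is a placeholder -- list your mapping with: sudo dmsetup ls
# --persistent writes the flags into the LUKS2 header so they survive reboots;
# without it, the change only lasts until the device is closed
sudo cryptsetup refresh --persistent \
  --perf-no_read_workqueue \
  --perf-no_write_workqueue \
  luks-xxxx
```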


Can confirm: I’ve been using one (or both; I don’t recall which I ended up keeping) of the workqueue flags in dom0 for a while now, maybe a year or more.


Oh nice, did you notice any performance changes or any problems? And what kind of storage pool do you use (LVM/Btrfs/XFS/ZFS)?

It should be visible in sudo dmsetup table.
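Something like this (the exact mapping names will differ per install):

```shell
# Look at the crypt target line; if the flags are active, the optional
# parameter list at the end includes no_read_workqueue / no_write_workqueue
sudo dmsetup table | grep crypt
```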


I’m using Btrfs.

I do recall benchmarking and seeing a non-negligible difference, but unfortunately I no longer have the data.

I’ve been meaning to do a bunch of benchmarking (and measurements of real use) but haven’t had the time lately. When I do, I’ll try to record everything and maybe start a thread.
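For that benchmarking, something like fio is what I have in mind (the file path, sizes, and runtimes below are arbitrary placeholders, and fio has to be installed wherever you test):

```shell
# Sequential throughput: large blocks, direct I/O to bypass the page cache
fio --name=seqread --rw=read --bs=1M --size=4G \
    --ioengine=libaio --direct=1 --runtime=30 --time_based \
    --filename=/var/tmp/fio.test

# Random 4k reads with some queue depth: closer to perceived "snappiness"
fio --name=randread --rw=randread --bs=4k --size=4G --iodepth=32 \
    --ioengine=libaio --direct=1 --runtime=30 --time_based \
    --filename=/var/tmp/fio.test

# Clean up the test file afterwards
rm /var/tmp/fio.test
```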

I’ve been experimenting with various parameters for my NVMe drives (as well as related settings, like pinning cores); you might be interested in the results of that as well. I’d be happy to get ideas and feedback from you on this, as you’re much more knowledgeable about Btrfs than I am, and probably more knowledgeable on related topics.

What are your thoughts on using Btrfs RAID? I decided to use md raid0 because of my history with it and my lack of experience with Btrfs.

I was very pleased to find that on Qubes OS, a modern high-performance system with DDR5 RAM, PCIe 5.0, fast cores, and mid/high-end NVMe drives in raid0 really is quite an amazing experience, especially with the AVX-512 improvements to LUKS2 from the last 6-9 months, along with those workqueue patches. I was afraid the extra money would go to waste, but I don’t feel that way now.

This isn’t a great benchmark for various reasons, but maybe it’s a helpful data point: while running Btrfs filesystem defrag on a two-disk md raid0, it consistently reaches 4-5 GB/s (combined read+write) for sustained periods, averaging closer to 2 GB/s. I’m guessing that’s a limitation of the number of threads rather than anything to do with disk I/O. I think that’s pretty good, but this being my first time using Btrfs, I can’t say for sure whether that’s remarkable or just “not bad”.
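For anyone wanting to reproduce that kind of observation, roughly what I did (the path and device names are examples; note that defrag rewrites data and can unshare reflinked copies and snapshots):

```shell
# Recursively defragment, printing each file as it is processed
sudo btrfs filesystem defragment -r -v /path/to/subvolume

# In another terminal: per-device throughput in MB/s, refreshed every second
iostat -xm 1 nvme0n1 nvme1n1
```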

I’m trying to get hold of a couple of Optane DC P5800X drives to see how the ultra-snappy random reads feel. So far I’ve only tested Samsung 990s, 970s, and a pair of Micron T705s. I’m testing two of the new Samsung PCIe 5.0 x4 drives (Samsung 9100) next.

Oops, sorry to hijack the thread :grimacing:


Could be very interesting! There’s also a ticket about benchmarking storage drivers, with a bit of recent progress:

Ha, I’m the least qualified person for that. I’ve never used RAID, neither Btrfs’s nor mdadm’s, and my storage setup is pretty boring. (Maybe that’s the reason Btrfs has never ever eaten my data…)


I must be mixing you up with someone else then. Oops!

Thank you for the links
