This is due to a difference in what the storage drivers (lvm_thin vs. file-reflink) consider to be a volume’s disk usage, which then leads to weird-looking results when Qube Manager unconditionally sums up all volumes of a VM. But it’s “only” cosmetic.
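If you want to see the per-volume numbers each driver reports (before Qube Manager sums them up), qvm-volume can show them directly; a quick check, with the qube name as an example:

```
# Per-volume usage as reported by the storage driver (qube name is an example)
qvm-volume info work:root
qvm-volume info work:private
```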
If you mean the size prediction in the GUI backup tool’s VM selection screen, that’s a different cosmetic bug. It shouldn’t affect the actual backup size.
Sorry to wade into this a bit late, but you’re quite right about the default LUKS sector size… seems sub-optimal.
However, the Thin LVM chunk size has a minimum of 64KB and is usually larger, depending on the pool LV size at the time of creation. My main system uses 64KB despite having a large pool; I assume this enhances random write performance, but I haven’t tested it. #write_amplification
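If you want to check (or pin) the chunk size yourself, something like this should work with standard LVM tooling (the VG and pool names are examples):

```
# Report the chunk size of thin pools in the VG
sudo lvs -o lv_name,lv_attr,chunk_size qubes_dom0

# Pin a 64KB chunk size when creating a new thin pool
sudo lvcreate --type thin-pool -L 100G --chunksize 64k -n pool01 qubes_dom0
```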
On the ‘cost’ of Thin LVM snapshots: making snapshots is essentially free, but deleting (and, oddly enough, renaming) snapshots takes a significant amount of time. Those operations are processed by the kernel in a single-threaded fashion, and I usually see 80-100% CPU for >5s when Qubes or Wyng deletes a large snapshot.
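If anyone wants to reproduce that observation, a minimal sketch (VG and LV names are hypothetical):

```
# Creating a thin snapshot is near-instant...
sudo lvcreate -s -n testsnap qubes_dom0/vm-work-private

# ...but removing a large snapshot can pin a kernel thread for seconds
time sudo lvremove -y qubes_dom0/testsnap
```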
Btrfs - My understanding is that it is extent-based but has a settable sector size via mkfs.btrfs, with a default of 4096. I think a good basis for comparison would be LUKS set to 4096, Btrfs at its default of 4096, and a Thin LVM pool at 64KB.
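For the LUKS and Btrfs halves of that comparison, the relevant knobs would be something like this (device paths are examples; the thin-pool chunk size was shown above):

```
# LUKS2 with 4K sectors (--sector-size requires LUKS2)
sudo cryptsetup luksFormat --type luks2 --sector-size 4096 /dev/nvme0n1p3
sudo cryptsetup open /dev/nvme0n1p3 testluks

# Btrfs at its default 4096B sector size, stated explicitly here
sudo mkfs.btrfs --sectorsize 4096 /dev/mapper/testluks
```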
@Demi don’t get me wrong on the tone here, but there were a lot of regressions in 4.1 compared to the 4.0 stability experience.
My point here is that:
Is not enough. I’m following GitHub - QubesOS/updates-status: Track packages in testing repository as closely as I can, and I see no vmm-xen to be tested, nor fixes for suspend/resume to be tested, with PRs taking way too long to land even in the unstable repo. I would expect things to be much more verbose under the testing section of this forum, and my guess is that there is a lot of confusion among even willing testers about what is to be tested, and whether those things ever reach them.
How we can improve that should be discussed under the testing section, not here, but this subject makes a good quotation to justify testing discussions, which is why I’m writing it here. No blame or anything here, but I see a lot of room for improvement through better communication and appropriate pointers.
Configuration 4 definitely works, at least in the sense that it doesn’t produce an error, and losetup -l shows the intended result of direct I/O + 512B sectors even though the underlying dm-crypt block device is 4K. I’ve been using this configuration (except with the default checksum function) for months.
It’s XFS (configuration 2) and ext4 that are not so flexible.
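To reproduce that check, the relevant losetup flags are as follows (the backing file path is an example):

```
# Attach a backing file with direct I/O and 512B logical sectors
sudo losetup --direct-io=on --sector-size 512 --find --show /var/lib/qubes/pool.img

# Verify: the DIO column should read 1 and LOG-SEC should read 512
losetup -l
```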
I decided to do a Btrfs install with the recommendations from this and the “SSD maximal performance” threads, and settled on the idea of formatting a two-device Btrfs fs with the options -O no-holes --csum xxhash for better efficiency, all on top of a 4K-aligned LUKS partition, using GPT/gdisk.
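The intended mkfs invocation would have looked roughly like this (the device-mapper names are examples):

```
# Two-device Btrfs with the no-holes feature and xxhash checksums
sudo mkfs.btrfs -O no-holes --csum xxhash /dev/mapper/luks-a /dev/mapper/luks-b
```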
The Btrfs part turned out to be a fool’s errand, as neither anaconda nor kickstart seems to support passing custom options to mkfs, and anaconda/blivet insists on not installing into an existing fs.
So instead of doing a full dom0 root-plus-everything Btrfs setup, I installed Qubes with a 25GB XFS partition and am now configuring custom Btrfs partitions to hold all domU stuff. If qvm-pool cooperates, I’ll be sitting pretty on my test system, also with Linux kernel 6.1 or 6.2, which have Btrfs optimizations that should greatly improve large-file access.
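The qvm-pool part would presumably go along these lines (the pool name, mount point, and mapper name are hypothetical):

```
# Mount the custom Btrfs fs and register it as a file-reflink pool
sudo mkdir -p /var/lib/qubes-btrfs
sudo mount /dev/mapper/luks-domu /var/lib/qubes-btrfs
qvm-pool add btrfs-pool file-reflink -o dir_path=/var/lib/qubes-btrfs

# New qubes can then be placed in that pool
qvm-create -P btrfs-pool --label red testvm
```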
Blivet said it wouldn’t accept my Btrfs-on-LUKS partition unless I let it re-format it. If I let it re-format, I lose no-holes and xxhash.
I’m not sure, but I think in some cases Blivet or Anaconda custom partitioning will also re-format the LUKS layer in addition to the fs. It seemed like my 4096B-sector LUKS once changed back to 512B. I think in that case, when you click ‘Done’, it asks you for a passphrase even though you’ve already unlocked your existing LUKS.
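A quick way to check whether the installer silently re-created the LUKS header (device path is an example):

```
# LUKS2 prints the data-segment sector size in its header dump
sudo cryptsetup luksDump /dev/nvme0n1p3 | grep -i sector
```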
As for the advantage of having root on the custom fs, I don’t think there is much. The good part is that Qubes lets me set up a VM pool the way I want.