Ext4 vs. Btrfs performance on Qubes OS installs

It was time to do my quarterly disaster recovery drill, which involves bootstrapping my entire system from scratch using my scripts and backups. Since I had the opportunity anyway, I wanted to put some hard numbers to my previous observations regarding ext4 vs. Btrfs performance on my T430 running Qubes OS R4.1.1.

So I did two rounds: in the first I installed with ext4, restored everything, and ran @wind.gmbh’s qube start benchmark and @oijawyuh’s SSD write test; then I reinstalled and restored again, this time with Btrfs, and ran the same tests. Both times I made sure dom0 was fully up to date and at the same version as in the previous test.
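For anyone who doesn’t want to dig up the linked posts, the measurements boil down to something like the sketch below. This is only an approximation, not the actual scripts; the qube name “testvm” and the dd parameters are placeholders.

# dom0: time ten cold starts of a test qube ("real" is the start time)
for i in $(seq 1 10); do
    time qvm-start testvm
    qvm-shutdown --wait testvm   # shut it down again before the next run
done

# inside a qube: sequential write throughput (dd prints MB/s at the end);
# oflag=direct bypasses the page cache so the SSD is actually measured
dd if=/dev/zero of=~/ddtest bs=1M count=1024 oflag=direct
rm -f ~/ddtest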

Qubes release: 4.1.1 (R4.1)
Brand/Model: Lenovo ThinkPad T430 (23477C8)
BIOS: CBET4000 Heads-v0.2.0-1246-g9df4e48
Xen: 4.14.5
Kernel: 5.15.63-1
RAM: 16340 MB
CPU: i7-3840QM @ 2.80GHz
SCSI: Samsung SSD 860 Rev: 2B6Q
HVM: Active
I/O MMU: Active
HAP/SLAT: Yes
TPM: Device present
Remapping: yes

So the following numbers are from a controlled environment: the same hardware, the same software versions, the same data restored. Nothing differs except the file system. I think it is justified to call the delta “significant”.

        start          write
ext4    9.35 s         466.8 MB/s
Btrfs   7.74 s         765.3 MB/s
Δ       1.61 s (21%)   298.5 MB/s (64%)

Actual measurements:

ext4 start = 9.09, 9.44, 9.51, 9.64, 9.37, 9.24, 9.42, 9.14, 9.42, 9.25 s
ext4 write = 383, 424, 463, 467, 501, 446, 425, 522, 521, 516 MB/s

Btrfs start = 7.70, 7.73, 7.67, 7.68, 7.71, 7.58, 7.82, 7.83, 7.82, 7.83 s
Btrfs write = 760, 768, 773, 758, 768, 765, 750, 761, 780, 770 MB/s
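The values in the table are just the arithmetic means of these ten runs, e.g. for the ext4 start times:

[user@dom0 ~]$ echo 9.09 9.44 9.51 9.64 9.37 9.24 9.42 9.14 9.42 9.25 \
    | awk '{ for (i = 1; i <= NF; i++) s += $i; printf "%.2f s\n", s / NF }'
9.35 s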

Maybe others can do similar measurements when reinstalling and share them in this thread. I’d be very interested to see if these performance gains are general in nature or somehow specific to my hardware. If general in nature, maybe a case should be made to the development team to adopt Btrfs as a default.


Edit to original post: it’s obviously seconds not milliseconds – sorry

Also want to note that 4k sectors significantly boost performance, with LVM+XFS providing the fastest boot times while Btrfs performs decently. Having a built-in option to take full advantage of 4k drives in the future would be great.
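For anyone curious what their drive reports before going down that road (the device path is just an example):

[user@dom0 ~]$ lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sda

A 512e drive shows LOG-SEC 512 with PHY-SEC 4096; only after reformatting to native 4K sectors (where the drive supports it) do both columns report 4096.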


@fiftyfourthparallel thank you for pointing to that post. However, I can’t follow your conclusion. The numbers I see are:

                 LVM-XFS        LVM+XFS        Btrfs+blake2b
boot (best)      22.3 s         22.8 s         20.5 s
boot (avg)       22.7 s         24.7 s         20.9 s
VM boot (best)   4.68 s         4.59 s         4.48 s
VM boot (avg)    5.14 s         4.82 s         4.78 s
cryptsetup       526.8 MiB/s    646.6 MiB/s    718.2 MiB/s [1]

Then there is the part about 4kn in the case of “LVM+XFS”, showing a consistent advantage of ~1 s in VM boot. However, this measurement was only done with “LVM+XFS”.

  • Does this mean it’s only possible in that setup?
  • What about all the other “normal” results that show “Btrfs+blake2b” being faster?
  • Wouldn’t one assume that “Btrfs+blake2b” AND 4kn (if such a combination is possible) would be even faster?

@51lieal?

Obviously I haven’t consumed all the linked threads yet, but I shall do so. This is fascinating.


  1. In this case two measurements were given: “directio” = 718.2 MiB/s and “no directio” = 646.6 MiB/s (the latter being the same value as LVM+XFS).

You’re right that the only consistent comparisons involve Fedora 34 512b templates. The faster LVM+XFS speeds involve 4kn, as you said, but, confusingly, at the bottom of the Fedora 35 4kn list there is an entry for ‘#full-512b’ with the fastest speeds.

To be clear though: I’m not invested in any particular filesystem, as the gains from switching to a 4kn system (including templates) overshadow any gains from choosing the better of the two. And as I’ve often stressed, I’m not a technically trained person, so I’m not equipped to rigorously parse such information.

As you pointed out in the other thread, I am a member of the vocal minority that prefers Ivy Bridge CPUs due to the ability to remove (most of) the ME. That’s why I am so interested in the performance gains I could get by switching to Btrfs, and why I am super intrigued by what you pointed to.

I’ll dig in and get more numbers (over time). I too am not invested in any particular format, nor in small performance gains. But if something promises to deliver substantial gains, as 4kn appears to, it is worth a look.


After having read the whole thread, I don’t think this is worth my time at this stage. Too much manual configuration and too much to learn: definitely interesting, but too much of a time sink at odds with my current priorities. It deserves to be called a “rabbit hole” at this point.

It may be too much work for a single user to justify, but would you say it’s worth integrating into Qubes as an option?

Or is there something wrong with the numbers or testing methodology I should know about?

I don’t know.

This is too complex for me to learn and do in 2-3 hours, and that’s all the time I’m willing to spend on it right now. Maybe over the winter holidays or something.


@Sven just to be clear.

This is lvm+btrfs instead of lvm+ext4?

I selected ‘Btrfs’ in the installer and accepted all defaults. So yes, my understanding is that it’s LVM+Btrfs. I give “/dev/sda2 /dev/sda3” to Heads / the TPM to unseal. I believe you helped me figure this out initially. There are screenshots in the Heads slack.

I uploaded them here too:




Hmmm.
Off the top of my head, I thought it was two distinct drives. If you pass /dev/sda2 /dev/sda3 to the disk unlock setup, that means two distinct LUKS partitions are passed; I’m not sure that would be LVM.

Can you do me a favor and run:

[user@dom0 ~]$ sudo pvs
  PV                                                    VG         Fmt  Attr PSize   PFree  
  /dev/mapper/luks-a4ba89e4-65a3-48f9-a045-d128c2a422cc qubes_dom0 lvm2 a--  464.74g <12.63g
[user@dom0 ~]$ sudo vgs
  VG         #PV #LV #SN Attr   VSize   VFree  
  qubes_dom0   1 207   0 wz--n- 464.74g <12.63g

Heads only cares about /boot (normally /dev/sda1) and about knowing which drives need an additional LUKS keyslot (/dev/sda2 /dev/sda3), but it doesn’t touch the content of those LUKS partitions. As of today, even LVM cannot be manipulated under Heads, since thin LVM was not supported until now (there is still a PR for it, not merged because of firmware space constraints).
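To illustrate (this is not the actual Heads code, and the key file path is a placeholder), adding such an extra keyslot boils down to:

[user@dom0 ~]$ sudo cryptsetup luksAddKey /dev/sda2 /path/to/disk-unlock-keyfile
[user@dom0 ~]$ sudo cryptsetup luksAddKey /dev/sda3 /path/to/disk-unlock-keyfile
[user@dom0 ~]$ sudo cryptsetup luksDump /dev/sda2   # check that the extra keyslot is there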

So that does not answer whether it is LVM+Btrfs, but from my understanding Btrfs is not on top of LVM here; instead Btrfs itself handles snapshots, reverts, etc.

I am interested because Btrfs is reported to be faster, whereas LVM is known to be stuck at 512 b sectors per that thread, for which a PR was created for Qubes OS to abstract that: https://forum.qubes-os.org/t/ssd-maximal-performance-native-sector-size-partition-alignment

I highly doubt Btrfs is on top of LVM, even less thin LVM, as per your screenshot.

(EDIT: With the LVM/thin-LVM setup (the default), this is one big fat LUKS container with different LVM pools underneath, not two LUKS partitions.)

[user@dom0 ~]$ sudo pvs
[user@dom0 ~]$ sudo vgs
[user@dom0 ~]$ 

Awfully quiet. As I am sure you can tell, file systems are not an area of my expertise (yet).

Yeah, the options were LVM, LVM Thin, or Btrfs.


So just to make things clear here.

If this is not on top of LVM, it would mean your performance comparison is actually comparing apples and oranges. It is not simply comparing ext4 and Btrfs, but different pools, a different partition scheme, a different sector size and, as you said in the other thread, a whole rabbit hole to dig into. I am still interested in digging into that, but like you, I do not have the deep knowledge necessary to offer insights beyond the ones I dropped in my initial thread about sector size/alignment. A PR was created there, but I have not patched an ISO to test it on a separate laptop to see what would happen if LUKS+LVM were aligned, and only then whether performance would be faster (it definitely should be, since it makes no sense that we still use a 512-byte sector size nowadays; Btrfs is reported not to have that problem because Qubes uses a different volume management path to create the volumes that are passed to qubes, hence the different performance).

To dig into that, I think it should again happen under https://forum.qubes-os.org/t/ssd-maximal-performance-native-sector-size-partition-alignment rather than here.

But yes: I think this thread is comparing oranges and apples, unfortunately.


You just confirmed there is no VG (volume group, sitting on top of one or more PVs (physical volumes)) nor any LVs (logical volumes, which would be used by the qubes).

Hence, you are using Btrfs on top of LUKS, which explains why you have two LUKS partitions. This is interesting though. You are the first person I know of using Heads to boot Qubes with Btrfs (hear me out here: you are simply kexec’ing into multiboot from Heads, booting xen+kernel+initrd from the unencrypted /boot, and passing the decryption key unsealed from the TPM to Qubes, which uses it instead of prompting for the LUKS passphrases; plural, because that would otherwise be the case, which is why you passed /dev/sda2 and /dev/sda3).
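A quick way to see that stack (generic illustration, not taken from your machine):

[user@dom0 ~]$ lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT

On a Btrfs install the rows go disk → part → crypt with FSTYPE btrfs and no ‘lvm’ rows appear; the default install shows ‘lvm’ rows for the thin volumes instead.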


One day I will have to read all the docs…

[user@dom0 ~]$ sudo qvm-pool
NAME          DRIVER
varlibqubes   file
linux-kernel  linux-kernel
vm-pool       lvm_thin

I don’t even know how to interact with that tool directly as of today, but from what I think I understood from the XFS/ZFS/Btrfs threads I dug into, it seems that they fall into the file / linux-kernel pools.
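If I read qvm-pool --help correctly (I have not double-checked this), inspecting them directly would look like:

[user@dom0 ~]$ qvm-pool drivers            # storage drivers available on this system
[user@dom0 ~]$ qvm-pool info varlibqubes   # configuration of a single pool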

@brendanhoar @Rudd-O @rustybird: could you please ELI5? Or jump into https://forum.qubes-os.org/t/ssd-maximal-performance-native-sector-size-partition-alignment to correct the facts? Sorry if I tagged you wrongly, but from what I read elsewhere you seem to understand much better than us what is going on here and what impacts Qubes’ performance today. That would be highly beneficial.

The important post from the other thread is here: SSD maximal performance : native sector size, partition alignment - #30 by rustybird.

Alright, from your perspective that might be true. From my perspective I compared the Qubes OS default vs. choosing Btrfs in the installer, and I might have chosen the thread title poorly.

I made this one change and got a substantial increase in performance. I’d like to understand and if possible have others enjoy the same benefit.


So @Sven :thinking:
This seems to mean that the Btrfs storage is the varlibqubes qvm-pool.

Agreed. This is important. But I would love to understand why thin-LVM pools are stuck at 512-byte sectors, and how to get maximum performance out of thin-LVM pools (which are the standard setup, and the only pool type supported by wyng-backup today).

As far as I understand, LVM2 thin pools are beginning to get bad press, and people are starting to want to move away from them because of the default misconfiguration: @Rudd-O pushes for ZFS, others push for XFS, and here we push for Btrfs. I think that if the LVM pools and LUKS containers were configured in an optimized way, we might not all want to get away from them :slight_smile:
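One way to make the 512-byte question concrete (the device and LV names are just examples):

[user@dom0 ~]$ sudo blockdev --getss --getpbsz /dev/sda
[user@dom0 ~]$ sudo blockdev --getss /dev/qubes_dom0/vm-work-root

--getss prints the logical sector size and --getpbsz the physical block size; if the LV still reports 512 while the disk is physically 4096, the 512-byte emulation is carried all the way up the stack.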

[user@dom0 ~]$ sudo qvm-pool
NAME          DRIVER
varlibqubes   file-reflink
linux-kernel  linux-kernel

While I don’t have any deep understanding of this topic, I do have two identical laptops. One is my daily driver and the other is a ‘standby’. So if there are concrete things you want me to try (even if they are potentially destructive), I am happy to try them out on the ‘standby’ T430. I might learn something in the process.

Any recommendations on what to read to get a grip on LVM, LUKS, pools, etc.?