Is it a good idea to use btrfs?

Is it a good idea to use btrfs file system instead of ext4 + LVM on a SSD?

Since the SSD of my laptop is a bit small, I like the compression and after-the-fact de-duplication offered by btrfs. But I really don’t know if btrfs can work with Qubes OS 4.0 and if btrfs is reliable now.

  • Is btrfs slow?
  • Do I have to manually re-balance the btrfs every week?
  • Would btrfs randomly lose data?
  • What happens if there is a sudden power failure?
  • Can the btrfs fill up?
  • Do I need to set any flag in /etc/fstab to make the most out of the SSD?
  • How often should I defrag the btrfs?
  • Does backing up a qube count as creating a block-level copy? Would that trigger the possible problems described here?
  • Can I use bind_dir of Qubes OS?

One of my qubes actually runs a SQL server. I’ve heard that btrfs may not perform well when a server frequently overwrites a file in place, unless I disable copy-on-write and compression. A Reddit post says btrfs with compression can cause “write amplification” on frequently updated files, which can wear out an SSD in a server setting.
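For anyone in the same situation, the usual workaround discussed for databases on btrfs is to mark the data directory NOCOW before any data files are created, since the attribute only takes effect on newly created files. A sketch, assuming the database files live in /var/lib/mysql inside the qube (the path and the service name are just examples, not something Qubes-specific):

```
# Disable copy-on-write for a database directory on btrfs.
# The +C attribute must be set while the directory is still empty:
# files that already exist keep their old CoW/compression behaviour.
sudo systemctl stop mysql          # stop the service first (unit name is an example)
sudo mkdir -p /var/lib/mysql
sudo chattr +C /var/lib/mysql      # new files inside inherit NOCOW
lsattr -d /var/lib/mysql           # should now list the 'C' attribute
```

Note that NOCOW files are also excluded from compression and data checksumming, so this trades btrfs’s integrity features for steadier write behaviour on that directory.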

This is being considered for Qubes 4.1.

See discussion here: [RFC] R4.1: switch default pool from LVM to BTRFS-Reflink · Issue #6476 · QubesOS/qubes-issues · GitHub

Maybe it is possible in Qubes 4.0, but I don’t know.


Using Btrfs is not a bad idea, and it is already supported by Qubes 4.0. The trick is to navigate the anaconda installer’s disk configuration so that you can select the Btrfs+encryption combination.

Btrfs can be pretty slow with disk images, however, so the impact on performance might be noticeable to you. Balancing and reliability won’t be an issue for non-RAID setups.

For now, LVM + ext4 seems to reserve a lot of space for metadata (about 20% of the SSD). Does btrfs have to reserve a similar amount of space for its metadata?

What flags should I use when mounting the btrfs partition? Should I use ssd, nodatacow, or autodefrag for an SSD?
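For reference, this is roughly what a commonly suggested /etc/fstab entry for btrfs on an SSD looks like (the device path is a placeholder, discard=async needs kernel 5.6 or newer, and I haven’t verified any of this on Qubes specifically):

```
# Example /etc/fstab entry - btrfs on an SSD (device path is a placeholder)
# noatime        - avoid a metadata write on every file read
# compress=zstd  - transparent compression, good speed/ratio default
# ssd            - SSD allocation heuristics (usually auto-detected anyway)
# discard=async  - asynchronous TRIM; alternatively enable fstrim.timer instead
/dev/mapper/luks-xxxx  /  btrfs  defaults,noatime,compress=zstd,ssd,discard=async  0  0
```

Two caveats on the flags named above: nodatacow as a mount option applies to the entire filesystem and silently disables compression and data checksums, so it is better applied per-directory with chattr; and autodefrag can amplify writes on large VM images, so many people leave it off for this workload.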

Yes, Btrfs does have similar metadata issues. I’ve had Btrfs filesystems go offline because they ran out of metadata space. All filesystems have metadata issues to some extent, but LVM and Btrfs do it in a way that intrudes on the user’s awareness.

A good example of a CoW-type filesystem with capabilities similar to Btrfs, but one that doesn’t make the user directly responsible for metadata issues, is actually Windows NTFS. It has a similar need for lots of metadata, but allocates data in a pattern that allows it to automatically expand metadata space as needed. The downside of the NTFS method is that as the volume fills up, slowdowns from fragmentation increase dramatically. This seems like a good trade-off to me, as the volume keeps functioning, with reduced performance, until the user upgrades the storage or removes unneeded files.