Does installing Qubes OS on a btrfs file system offer any advantage?

  1. While a VM is running, Qubes OS does not take snapshots; it only saves a snapshot when the VM is shut down [1].
  2. Because of 1, I think the no-copy-on-write (no-COW) benchmark case is the more relevant one. In a 2021 random read/write benchmark (PostgreSQL), ext4 is faster than btrfs [2].
  3. The btrfs wiki says copy-on-write (COW) is a poor fit for storing virtual machine images, and suggests that setting nodatacow on the directory that stores VM images might help [3].
  4. The official btrfs documentation says: “if nodatacow or nodatasum are enabled, compression is disabled.” [4]
  5. The official btrfs documentation also says: “nodatacow (ie. also nodatasum).” [4] So nodatacow implies no checksums and therefore “no self-healing”.

If the above points are all correct, then btrfs on Qubes OS means no compression, slower random I/O, and no self-healing.
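To make points 3–5 concrete, here is a sketch of the two mount-option trade-offs as hypothetical /etc/fstab entries (the UUID and mount point are placeholders, not taken from a real Qubes install):

```shell
# Default btrfs behavior: COW, checksums, optional compression.
# UUID=<fs-uuid>  /var/lib/qubes  btrfs  compress=zstd:1  0 0

# nodatacow: avoids COW fragmentation for VM images, but per the
# btrfs docs it also implies nodatasum (no checksums, hence no
# self-healing) and disables compression.
# UUID=<fs-uuid>  /var/lib/qubes  btrfs  nodatacow  0 0
```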

Does installing Qubes OS on a btrfs file system offer any advantage?


  1. Volume backup and revert | Qubes OS
  2. Benchmark of Ext4, XFS, Btrfs, ZFS With PostgreSQL - Tech Blog
  3. Gotchas - btrfs Wiki
  4. btrfs-man5(5) — BTRFS documentation

A problem with btrfs's autodefrag, reported in 2022:


A 2016 post on the mailing list says that thin provisioning causes fragmentation, which can slow down ext4 + LVM thin.

“Tradeoffs between btrfs, lvm, and lvm thin provisioning.”

The post also mentions that btrfs snapshots potentially use less space, because btrfs snapshots happen per file while LVM snapshots happen per volume. But I think btrfs had this advantage only in older Qubes OS versions; newer Qubes OS stores each virtual machine's image in its own volume instead of in a file.

Here is a copy of the post:

Both btrfs and (thin) lvm do similar things with copy-on-write, though I
have not seen direct comparisons of speed between them. Btrfs is more
flexible by far, though, and it's what I use for Qubes. Regular lvm is
just a hassle and IMO only good for snapshots that are immediately
created and destroyed for backup procedures.

Qubes will automatically use reflinks whenever it clones vm disk images,
which is a COW copy that happens instantly and only allocates extra data
blocks when blocks are changed in either copy. It allows a lot of
experimentation to be done at virtually no cost in disk space, for
instance. This happens on a per-vm (actually, per-file) basis, without
having to do whole-filesystem snapshots as in lvm. This makes btrfs
potentially much more space efficient than the rest if you make use of
cloning. Btrfs also has compression.

It's worth noting that vm images suffer from logical fragmentation
because writing to disk image files behaves like random database
updates. Because btrfs does COW on every write, it does make it slower
than ext4 but on Qubes not noticeably so. Qubes vm images are sparse, so
whenever deletions occur this fragments ext4 filesystems and slows them
down also.

The fastest filesystem to use in dom0 is probably ext4 without lvm.
Turning off ‘discard’ in the vm’s /rw volume may prevent some
sparse-related slowdown (at a cost in disk space). With lvm, ext4 will
probably become slower than btrfs as soon as you start making snapshots
and updates.

Beyond that, I think it's possible to assign a raw block device to a vm,
though I haven’t explored that yet. Ext4 on a raw block device would be
the fastest, but not flexible or space efficient.
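The reflink behavior described in the quoted post can be tried directly with coreutils `cp`. A minimal sketch; `--reflink=auto` falls back to a regular copy on filesystems without reflink support (e.g. ext4), so it is safe to run anywhere:

```shell
tmpdir=$(mktemp -d)
# Create a small stand-in for a VM image.
dd if=/dev/zero of="$tmpdir/image.img" bs=1M count=4 status=none
# On btrfs this clone is instant and shares all data blocks with
# the original until either file is modified.
cp --reflink=auto "$tmpdir/image.img" "$tmpdir/clone.img"
cmp -s "$tmpdir/image.img" "$tmpdir/clone.img" && echo "identical"
rm -r "$tmpdir"
```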


On a btrfs file system, we can mark a folder with chattr +C to exempt it from copy-on-write. According to this 2019 comment on Unix & Linux Stack Exchange, taking a snapshot of that folder only temporarily re-activates copy-on-write:
taking snapshots of a BTRFS volume mounted with nodatacow?
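For reference, marking a directory this way looks like the following. It requires root and a btrfs filesystem, the directory path here is only an example, and the attribute affects only files created after it is set:

```shell
mkdir -p /mnt/btrfs/vm-images
# +C disables COW for files subsequently created in this directory.
chattr +C /mnt/btrfs/vm-images
# lsattr -d shows a 'C' among the attribute flags if it took effect.
lsattr -d /mnt/btrfs/vm-images
```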


I tried to find a benchmark of ext4 + LVM2 thin vs. btrfs. The closest I could find is a 2015 benchmark…

In 2015, a PostgreSQL benchmark showed ext4 + LVM thin performing far better than btrfs:

A March 2022 benchmark in Qubes OS issue #7300:

Hmm. What does that benchmark actually mean? Btrfs seems better in it, but that is surprising, because Qubes OS doesn't use direct I/O for pools that use a loop device (described in issue #7332).
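Whether a given loop device actually uses direct I/O can be checked with util-linux `losetup` (the `DIO` column); toggling it requires root, and the device name below is only an example:

```shell
# List loop devices with their backing file and direct-I/O status.
losetup --list --output NAME,BACK-FILE,DIO
# Enable direct I/O on a specific device (needs root):
# losetup --direct-io=on /dev/loop0
```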


Curious about this discussion, too. I would like to know whether Qubes OS is able to utilize btrfs features. If not, I think the default Qubes OS setup (ext4) is better suited.