Btrfs and LUKS 4096 - slower than LVM with big qubes?

I have a qube that is 700 GB with a big database. I used to run LVM qubes but thought I’d try 4K LUKS and Btrfs, since the latter is supposed to handle big qubes better than LVM and ext4.

The qube starts fast but takes 5 minutes to shut down (spinning red in the qui-domains widget). In dom0, qubesd and kworker spike to 70% each. I have 8 cores, but it still causes the system to slow down and freeze on mouse input. The NVMe drive temperature goes from 30°C to 45+°C. During those 5 minutes, the qube log just says “system halted” with no further info.

On my SATA SSD system, it would take 15 seconds to shut down.

Everything else on the system is faster: the LUKS unlock is 2 seconds faster, and other VMs start up faster. I’ve tested this extensively on another machine and everything was great until I added the 700 GB qube.

I could try redoing the install without 4K LUKS, or switch back to LVM.

Any ideas on diagnosing the issue?

Probably fragmentation. Check with filefrag:
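Something like this in a dom0 root shell (a sketch - the path assumes the default file-reflink pool location, and `<name>` is a placeholder for your qube’s name):

```shell
# Count the extents of the qube's volume images.
# /var/lib/qubes/appvms/<name>/ is the default file-reflink pool
# location; replace <name> with the qube's name.
filefrag /var/lib/qubes/appvms/<name>/*.img
```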

To defragment:

cd into the /var/lib/qubes/appvms/<name> directory in a dom0 root shell and run the defragmentation from there.
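For example (a sketch; the exact flags are a judgment call, and the `-t 128M` target extent size here is just one reasonable choice):

```shell
cd /var/lib/qubes/appvms/<name>

# Defragment the volume image(s):
#   -v         list each file as it is processed
#   -f         flush data to disk afterwards
#   -t 128M    try to merge extents up to this target size
#              (an assumption - pick what suits you)
btrfs filesystem defragment -v -f -t 128M *.img
```

Note that on Btrfs, defragmenting unshares reflinked/snapshotted extents, so it can temporarily increase disk usage.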


Ok, I checked and it has 945,000 extents. Would that cause an issue? I just transferred the files over on a USB 3 drive and haven’t opened the application yet. I didn’t make any major changes or install/update templates during this time. I’d be surprised if it got fragmented in a day?

I guess I’ll try running the btrfs defrag command you quoted at the end of the thread. Thanks

Wait, I’ve just pieced all the parts together in my previous reply here :slight_smile:

It’s high but not crazy high. Maybe some unfortunate interaction with the SSD hardware?

Hmm, do you have enough free space in dom0? If it’s rather tight, that might make fragmentation worse.
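You can check from a dom0 shell, e.g. (assuming dom0’s root is the Btrfs filesystem in question):

```shell
# Show how the Btrfs filesystem backing dom0's root is laid out:
# device size, unallocated space, and free space within the
# already-allocated data/metadata chunks.
sudo btrfs filesystem usage /
```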

Ahhh, haha thanks for writing that out! I’ll report back


Yes, I have 1.1 TiB of free space in dom0 according to Thunar


Success! Thank you very much!

I had my doubts, but the shutdown now takes 10 seconds. The defrag took about 45 minutes, and the NVMe temp got much higher than I’ve ever seen (60+°C). Extents went from 950,000 to 9,200. Unlike in the xfs thread, running the command again wasn’t necessary… all the reduction occurred in the first pass
