I noticed the test is heavily influenced by the initial memory setting of the template.
I tried increasing the initial memory to 16 GB, which added more than 10 seconds to the start time, but even small changes can shift the start time by 1-2 seconds. The lower the memory, the faster the template starts.
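For anyone who wants to reproduce this, a rough measurement loop along these lines should work from dom0 (qvm-prefs/qvm-start/qvm-shutdown are the standard Qubes tools; the template name and memory values here are just examples, substitute your own):

```shell
#!/bin/bash
# Sketch: time template startup at different initial-memory settings.
VM=debian-11-minimal               # example template name

for mem in 400 4000 16000; do      # initial memory in MB
    qvm-shutdown --wait "$VM" 2>/dev/null
    qvm-prefs "$VM" memory "$mem"
    time qvm-start "$VM"           # the number this thread is comparing
done
qvm-shutdown --wait "$VM"
```

Run it a few times per setting and with the rest of the system quiet, since background VM activity skews the numbers.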
I noticed that recently I could not reproduce the numbers in my past posts. Boot time, in general, has increased by about one second without mem-balancing, and two seconds or so with mem-balancing. I’m not sure what the cause is, but it made me realize that this thread could also double as a way to track performance across versions of infrastructure software (Xen, libvirt, etc.), which might help drive performance improvements.
I booted up my debian-11-minimal a few times and it came in at 10 seconds each time, compared to 6.1 s the last time I checked. This was without shutting everything down first, though, so it’s far from conclusive. I’ll do another round of tests once I find the time.
Generally speaking I’ve found Qubes less stable and more sluggish than before.
Augsch, if you want to add a few more columns to the table, feel free to, but be sure to mark them as optional.
I’m using the performance governor (as hinted by @renehoj in this post) with 6 dom0 vCPUs, pinned to E-cores. Using the ‘ondemand’ governor is a tad slower (+0.5 s).
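For reference, the governor is set with xenpm (ships with Xen in dom0), and the dom0 vCPU count and pinning come from the Xen command line. Which core IDs are the E-cores is board-specific, so take this as a sketch rather than a recipe:

```shell
# Set the 'performance' cpufreq governor on all pCPUs
xenpm set-scaling-governor performance

# Verify what is active
xenpm get-cpufreq-para | grep -i governor

# dom0 vCPU count and pinning are Xen boot options (GRUB's Xen cmdline):
#   dom0_max_vcpus=6 dom0_vcpus_pin
# Which physical cores those vCPUs land on can then be steered with xl, e.g.:
#   xl vcpu-pin 0 all <e-core-ids>
```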
A 5.6 s startup time is great compared to my old T450s, which has been running Qubes for the past 7 years, but it’s 50+% more than @renehoj’s (who I assume ran the test on an MSI Z690-A Pro DDR4, same as mine). I doubt it’s only down to the CPU (12th-gen i9 vs 13th-gen i5); it might be that Btrfs also has an edge over LVM’s volume operations when starting the VM.
[edit: forgot to add that disabling memory balancing as per @augsch ’s advice shaves ~0.5 s; total = 5.1 s]
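(For anyone wanting to try the same: the “include in memory balancing” checkbox in Qube Settings maps to the maxmem property, and setting maxmem to 0 is what turns balancing off, if I remember the semantics right.)

```shell
# Take a qube out of memory balancing: maxmem=0 means static allocation
qvm-prefs debian-11-minimal maxmem 0

# Revert by giving it a ballooning ceiling again (value in MB)
qvm-prefs debian-11-minimal maxmem 4000
```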
I’m fine with variations as long as they’re clearly marked so people can compare, like what @51lieal did with SSD-4k.
In fact, it’s nice to see this topic slowly morph into a thread on how to min-max qube startup times. I think this metric will become increasingly important as Qubes evolves, since the logical conclusion of its compartmentalization-via-VM approach to security is a fresh VM for every little task, so how quickly a computer can spin up a new instance will be key. Even short of that extreme, the usability of Qubes (as your daily driver, at least) is strongly tied to how fast your computer can spin up VMs.
So a big thank you to all who have contributed so far; please keep at it! I’d contribute if I had anything to give, but this isn’t my field.
I got interesting results running R4.2 Alpha on Btrfs and a 4kn drive. The R4.2 installer ships a cryptsetup new enough to automatically select the appropriate sector size, so I just needed to switch my NVMe drive to 4kn mode (by selecting an LBA format with a 4K sector size).
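For anyone wanting to do the same switch: it’s done with nvme-cli, it destroys all data on the namespace, and the format index differs per drive model, so this is only a sketch:

```shell
# List the namespace's supported LBA formats; look for the entry
# reporting "Data Size: 4096 bytes"
nvme id-ns -H /dev/nvme0n1 | grep -i 'LBA Format'

# DESTRUCTIVE -- reformats the namespace to the 4K format.
# Replace --lbaf=1 with whichever index reported 4096 bytes above.
nvme format /dev/nvme0n1 --lbaf=1
```

Back everything up first; after the format the drive presents empty 4096-byte sectors and needs a fresh install.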
So, booting debian-11-minimal without memory balancing, with its memory set to 400 MB, takes ~2.5 s. This is better than any of my previous results with different NVMe drives on this machine. Specifically, it’s ~0.6 s faster than another Btrfs R4.2 install on an identical drive, but without 4kn.
Now, the time between me clicking the application launcher and Chromium’s window showing up is less than 5 seconds. And my laptop’s specs are far from “powerful”!
So for anyone interested in testing R4.2 and Btrfs: enabling 4kn mode may well speed up your VM boot times!
Btrfs optimizations are landing in kernel 6.3, and I’m hoping for another leap in performance.
Thanks for your guide! I’d love to try 4kn+lvm, but I just restored a ton of VMs (to fully transition from 4.1 to 4.2 and enjoy the lightning speed), so maybe I’ll try 4kn+lvm when I get a spare NVMe drive.
Anyway, my previous install was LVM without 4kn, and its VM boot time is 1.5 seconds longer than my Btrfs install without 4kn. So I doubt that 4kn+lvm will have performance gains compared to 4kn+btrfs.
I’ll build 4kn templates to test whether they boot even faster. I read somewhere that Btrfs treats the underlying filesystem’s sector size differently than LVM does. Given that 4kn templates do help with boot speed on LVM (per your posts), it’s worth investigating whether 4kn templates boot faster than legacy templates on Btrfs.
I did some benchmarks on my main laptop before I converted it from tLVM to Btrfs-on-4k-LUKS. They are crude in that the only startup time measured is for Debian VM start + Firefox. However, I also recorded VM shutdown times, and in my experience these really affect system performance when you’re busily working on your system… the tLVM snapshot operations at shutdown often hammer the dom0 storage layer for 7+ seconds for medium-sized volumes.
My plan is to re-do the benchmarks before long, while the restored VM volumes are still similar enough to make a fair comparison.
@tasket Great to see you again! Sorry for (temporarily) hijacking the thread, but I’m a long-time user of the tools you’ve written for Qubes, e.g. halt-vm-by-window, system-stats-xen, and, more importantly, vm-boot-protect. I recommend the various scripts in this GitHub repo to just about anyone looking for a more fluid Qubes experience, especially halt-vm-by-window.
Thank you very much for making tools that I use tens, maybe hundreds, of times a day.
Anyway, the reason I’m reaching out is that I’ve noticed that, since R4.1, vm-boot-protect tends to fail at boot. I know I’m not supposed to use it with disposables, but I had since R4.0 without issue… until R4.1. Now there’s a high probability that a terminal window appears warning that vm-boot-protect was triggered and that the bad private files are in /mnt/xdvb (not accurate; recalling from memory).
I’d hugely appreciate it if you updated your tools for R4.1 (or even R4.2).
With the work you and others are doing, a sub-1-second boot seems to be a fast-approaching reality. Hopefully, with some template-side tweaks (or even a natively specialized template like debian-15-quick), this should be attainable within a few years.
Maybe I should start a thread asking people for ideas about what could be done with this, i.e. how to leverage the almost-negligible cost of booting (and shutting down) a VM to make Qubes more secure and more usable.
@fiftyfourthparallel Hi FFP! Feel free to open an issue when something like this happens. FWIW, when I upgraded to 4.1, I just kept using the same tools that are already there. Qubes-VM-hardening did get an update to the dom0 instructions.
The idea of preloaded dispVMs is nice, but both threads seem to indicate that it isn’t actively being worked on.
Either way, the thread I’m proposing is another one of my idea threads where people just throw things out there and hopefully something inspires someone, kind of like my “Now you’re thinking with Qubes” thread. Might as well go ahead and post it instead of navel-gazing.
I measured 3+ seconds of savings in boot time on a gen8 i7 CPU using the default 4G-maxmem qube. Slower CPUs will benefit more than faster ones.
The “don’t include in memory balancing” seems to be related to this:
[ 2.327810] xen:balloon: Waiting for initial ballooning down having finished.
[ 3.058150] xen:balloon: Initial ballooning down finished.
So you basically skip this initial ballooning and get the savings from its init. Somebody could look into what the init does and actually optimize it, I guess. Again, slower CPUs benefit more here, but other factors seem to have a big impact too. Free Xen RAM?
[ 1.309210] xen:balloon: Waiting for initial ballooning down having finished.
[ 7.153167] xen:balloon: Initial ballooning down finished.
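Those two timestamps make the cost easy to extract. A small helper (the function name is mine) that pulls the delta out of dmesg-style input:

```shell
# Print seconds spent in initial ballooning, given dmesg output on stdin
balloon_delta() {
    awk -F'[][]' '
        /Waiting for initial ballooning/   { start = $2 }
        /Initial ballooning down finished/ { end = $2 }
        END { if (end != "") printf "%.3f\n", end - start }
    '
}

# Usage:  dmesg | balloon_delta
```

On the second pair of lines above, that works out to roughly 5.8 seconds spent ballooning before the VM is usable.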