While looking through various Qubes forums, I noticed that prospective Qubes users tend to worry about whether their systems are powerful enough. I've also wondered about the impact of CPU power on Qubes OS' operating speed. There aren't any good resources on this, and since I don't have access to a bunch of computers to experiment on myself, I came up with a standardized test we can use to create a handy reference table.
tl;dr - This is a crude test that aims to measure the impact of CPU power on VM start time.
How to submit data
Install a new copy of the latest Debian minimal template. The template can be installed by entering qvm-template install debian-x-minimal in dom0 (rename your existing installation if necessary). Do not update or modify this template before the test.
Shut down all other VMs using qvm-shutdown --all (if needed, --exclude sys-usb)
Run time qvm-start debian-12-minimal and note the real time returned. We recommend running the test multiple times for a more reliable result. @wind.gmbh wrote a script that's short enough to enter manually into dom0:
Enter your details into the table below, with the time rounded to one decimal place. If you ran multiple tests, enter the average. This is a wiki post, so click "edit" below.
We'll assume you're using the latest Debian for your release unless otherwise stated. E.g. if posting for R4.1 today, it's assumed you're using Debian 11; if for R4.2, Debian 12. There are holes in this method (e.g. someone who didn't upgrade, or who is on a later release), but it should be a good-enough approximation and takes care of the issue without adding new columns.
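Putting the steps above together, a dom0 timing loop might look like the following sketch. The VM name and run count here are assumptions (adjust them to your template), and this is not necessarily the script @wind.gmbh posted:

```shell
# Helper: elapsed START END -> difference in seconds, 3 decimals.
elapsed() { awk -v a="$1" -v b="$2" 'BEGIN { printf "%.3f", b - a }'; }

# Only attempt the loop where the Qubes tools exist (i.e. in dom0).
if command -v qvm-start >/dev/null 2>&1; then
  VM=debian-12-minimal   # adjust to your template
  for i in 1 2 3 4 5; do
    s=$(date +%s.%N)
    qvm-start "$VM"
    e=$(date +%s.%N)
    echo "run $i: $(elapsed "$s" "$e") s"
    qvm-shutdown --wait "$VM"
  done
fi
```

Timing the whole qvm-start invocation with date instead of the time builtin avoids having to parse time's output format, and qvm-shutdown --wait returns only once the VM has actually halted, so each run starts from the same state.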
What is more important: disk type (NVMe, SSD…), disk encryption, or CPU? I have a laptop with an SSD; hdparm -Tt --direct /dev/sda gives me 435 MB/sec, and my SSD is fully encrypted. If I run the test 3 times, it boots in 7.3 s - 8.7 s. Kernel is 5.10.3-1, CPU is i7-4810MQ, R4.1.
You're right; that slipped my mind. I was assuming that interested people's computers were new and therefore came with SSDs. Thanks for pointing that out.
According to some tests @unman did, disk type seems to have the greatest effect. This test was created under the assumption that everyone's using SSDs.
I am afraid that the measurements might give a false impression of accuracy. I upgraded the kernel from 5.4.88-1 to 5.10.8-1 and repeated the test, arriving at 9.3 s, which at first hinted at worse performance from the newer kernel. Repeating the test with this new kernel, however, resulted in values between 6.9 s and 11.4 s. If the values differ that much, there must be factors influencing them much more than kernel version or CPU speed, especially as my CPU is supposedly much slower than that of @fiftyfourthparallel.
So to get any meaning out of these tests, it is probably necessary to perform 10 tests or so and take the average, which is 8.97 s for my configuration (an HP EliteBook 840-G4 with internal SAMSUNG SSD 850, using kernel 5.10.8-1).
Did you shut down all other VMs using qvm-shutdown --all? I remember performing multiple tests on R4.1, and the real times returned were all basically identical to one another.
I'm not saying you're wrong (as I've often stressed, I'm not technical), but I feel there might be something else going on, since a Qubes system with just Xen and dom0 running shouldn't show such a large range, with the upper end more than 50% higher.
I can't shut down my VMs right now, so I'll try this again later and get back to you with the results.
I shut down all VMs, so there must be something else in the background which I didn't see. During one of the tests, however, sys-net started again for no apparent reason, and I had to shut it down again.
I'll try again with R4.1, but I think the times will be longer then, because my R4.1 is running from a USB SSD.
I checked now with R4.1 and kernel 5.10.13-1 and made sure that no other VM was running. The times are still different: 9.2 s, 8.9 s, 12.5 s, 9.3 s and 8.9 s. So most measurements are pretty close, but one is exceptional. It seems that something else must be running in the background (in dom0, perhaps?).
During my previous tests on R4.0.3 I had the Qube Manager running, and thinking that it might be taking time, I switched it off during my R4.1 tests, but this does not seem to explain the times.
My R4.1 machine is occupied, so I still can't shut down all VMs (even though the test should be conducted on R4.0). I pulled out an airgapped R4.0 machine and ran the test using an outdated dom0 and a not-pristine (but close enough) debian-10-minimal.
What I noticed confirmed what I had felt before: my 14nm i5-10210U starts VMs more quickly than my 10nm i7-1065G7 (the former has slightly higher clock speeds: 1.6/4.2 GHz vs the i7's 1.3/3.9 GHz).
VM start times were closely clustered:
| Test # | Time (s) |
|--------|----------|
| 1      | 5.166    |
| 2      | 4.854    |
| 3      | 4.744    |
| 4      | 4.863    |
| 5      | 4.869    |
| 6      | 4.808    |
| 7      | 4.820    |
| 8      | 4.908    |
The first run might be anomalous because it took place soon after system startup.
I still get a scattering of values, now testing R4.0.3 with all VMs shut down and the Qube Manager closed as well. After shutting down the debian-10-minimal template, I waited some time and checked via the Qubes widget that this VM was no longer running, and that no other VM was running.
Here are my times, with the i5-7200U CPU and the internal disk, using kernel 5.10.13-1:
| Test # | Time (s) |
|--------|----------|
| 1      | 7.998    |
| 2      | 7.800    |
| 3      | 7.995    |
| 4      | 7.198    |
| 5      | 8.476    |
| 6      | 8.475    |
| 7      | 5.517    |
| 8      | 8.807    |
| 9      | 8.012    |
| 10     | 8.761    |
So there is still no clear picture, especially with test 7, which has a much lower value.
I have no idea why you're getting large anomalies. If you include your earlier numbers, your range is between 5.5 and 12.9 seconds, which seems to indicate instability to me (assuming you've followed the instructions). But then again, I'm no technician. Have you tried turning your PC on and off again?
It's good that it's short enough to be manually typed into dom0, so we won't have to ask people to paste things into it. I'll add this as an option in the instructions.
Edit: I also went ahead and added your results while I was at it.
It turns out my previous test using the i5-10210U wasn't based on a 5.x kernel but on a 4.19.147-1 kernel.
I've redone the test using a 5.6.16-1 kernel and @wind.gmbh's handy script. The results are as follows:
| Test # | Time (s) |
|--------|----------|
| 1      | 5.25     |
| 2      | 5.41     |
| 3      | 6.19     |
| 4      | 5.99     |
| 5      | 4.40     |
| 6      | 4.32     |
| 7      | 5.90     |
| 8      | 6.22     |
| 9      | 6.29     |
| 10     | 4.47     |
Median: 5.7
Mean: 5.4
Range: 2.0
Manual testing:
| Test # | Time (s) |
|--------|----------|
| 1      | 5.681    |
| 2      | 6.054    |
| 3      | 4.324    |
| 4      | 4.531    |
| 5      | 4.681    |
| 6      | 4.506    |
| 7      | 4.523    |
| 8      | 5.707    |
| 9      | 6.912    |
| 10     | 6.551    |
Median: 5.2
Mean: 5.3
Range: 2.6
Boot times exhibited wider variance than with the 4.19 kernel. I also did manual testing (though not in a strictly consistent way) to see whether the script affected times significantly; it does not seem to. Based on this, I can say that kernel 4.19 starts VMs more quickly and with less variance than 5.6.
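The medians, means, and ranges quoted in these posts can be reproduced in dom0 with a small shell helper. This is a sketch, assuming only sort and awk (both present on a default install); the input times below are the scripted-run results from the 5.6.16-1 kernel test:

```shell
# stats TIME... -> prints median, mean, and range of the given times,
# rounded to one decimal place as in the posts above.
stats() {
  printf '%s\n' "$@" | sort -n | awk '
    { v[NR] = $1; sum += $1 }
    END {
      n = NR
      median = (n % 2) ? v[(n + 1) / 2] : (v[n / 2] + v[n / 2 + 1]) / 2
      printf "Median: %.1f  Mean: %.1f  Range: %.1f\n", \
             median, sum / n, v[n] - v[1]
    }'
}

stats 5.25 5.41 6.19 5.99 4.40 4.32 5.90 6.22 6.29 4.47
# prints: Median: 5.7  Mean: 5.4  Range: 2.0
```

Sorting the values first makes both the median lookup and the range (max minus min) trivial in the awk END block.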
I finally got rid of R4.1 on my i7-1065G7 since the R4.0.4 installer works with its updated BIOS. I redid the tests and the results were much snappier:
debian-10-minimal (fresh download, not updated, kernel 5.10.8-1, using a modified version of wind.gmbh's script):
| Test # | Time (s) |
|--------|----------|
| 1      | 4.08     |
| 2      | 4.18     |
| 3      | 4.16     |
| 4      | 3.89     |
| 5      | 7.21     |
| 6      | 4.14     |
| 7      | 3.92     |
| 8      | 7.53     |
| 9      | 4.05     |
| 10     | 4.01     |
Mean: 4.7
Median: 4.1
Range: 3.6
R4.0.4 is clearly faster than R4.1 when using the same kernel. There are some outliers, probably from underlying processes, but otherwise the start times are nearly half those of R4.1. Does anyone know what might be causing this?