Survey: CPU and VM boot time

Submitted data (Debian only)

Current release

| User | CPU | VM kernel ver. | VM boot time (s) | Storage type | Qubes OS ver. | Modifications |
|---|---|---|---|---|---|---|
| @fiftyfourthparallel | i7-1065G7 | 6.1.62 | 4.6 | NVMe (btrfs) | R4.2.0 | |
| @augsch | R5-5600U | 6.1.x | 4.5 | NVMe (btrfs) | R4.2 Alpha | |
| @augsch | R5-5600U | 6.1.62 | 3.5 | NVMe (btrfs) | R4.2.0 | |
| @renehoj | i9-13900K | 6.1.62 | 3.4 | NVMe (btrfs) | R4.2.0 | |
| @solene | i7-1260P | 6.1.56 | 4.6 | NVMe | R4.2-RC4 | |
| @DVM | i9-12900K | 6.6.8 | 3.6 | NVMe (btrfs) | R4.2.0 | |
| @johnboy | R7 5700G | 6.1.62 | 4.0 | NVMe | R4.2.0 | |
| @kenosen | i5-8350U | 6.1.62 | 6.9 | NVMe | R4.2.0 | |
| @neoniobium | i7-3840QM | 6.6.2 | 7.5 | NVMe | R4.2.0 | |
| @lywarkel | i5-8250U | 6.1.62-1 | 6.1 | NVMe | R4.2.0 | |
| @lywarkel | i5-8250U | 6.1.62-1 | 5.7 | NVMe | R4.2.0 | systemd-binfmt |

 

Modifications

| Name | Creator |
|---|---|
| systemd-binfmt | @renehoj |

 

Past releases

R4.1
| User | CPU | VM kernel ver. | VM boot time (s) | Storage type | Qubes OS ver. |
|---|---|---|---|---|---|
| @fiftyfourthparallel | i7-1065G7 | 5.10 | 7.9 | SSD | R4.1 Alpha |
| @fiftyfourthparallel | i7-1065G7 | 5.16 | 6.1 | SSD | R4.1.0 |
| @fiftyfourthparallel | i7-1065G7 | 6.02 | 6.7 | SSD | R4.1.1 |
| @GWeck | i5-7200U | 5.10 | 9.2 | USB-SSD | R4.1 Alpha |
| @GWeck | i5-7200U | 5.11 | 9.1 | USB-SSD | R4.1 Alpha |
| @GWeck | i5-7200U | 5.11 | 7.4 | USB-SSD | R4.1 Alpha with xen-pi-processor |
| @augsch | R5-5600U | 5.10 | 4.75 | SSD (btrfs) | R4.1 |
| @augsch | R5-5600U | 5.10 | 4.1 | SSD-4k? | R4.1 |
| @wind.gmbh | V1605B | 5.10 | 6.5 | SSD | R4.1 Alpha |
| @johnboy | R5 2400G | 5.10 | 5.16 | SSD (SATA, btrfs) | R4.1 |
| @johnboy | R7 5700G | 5.15 | 4.2 | SSD (NVMe, btrfs) | R4.1.2 |
| @Raphael_Balthazar | AMD A10-5750M | 5.4 | 11.6 | SSD | R4.1 Alpha |
| @Sname | i7-9750H | 5.10 | 4.0 | SSD | R4.1 |
| @51lieal | i7-10750H | 5.16 | 3.4 | SSD-4k | R4.1 |
| @stachrom | AMD Ryzen 7 5700G | 5.17 | 5.2 | SSD | R4.1 |
| @Sven | i7-3840QM | 5.10 | 8.3 | SATA-SSD (btrfs) | R4.1 |
| @BenT | i7-1260P | 6.1.11 | 8.3 | NVMe SSD | R4.1.1 |
| @taradiddles | i5-13600K | 6.2.10 | 5.6 | NVMe SSD | R4.1.2 |
R4.0
| User | CPU | VM kernel ver. | VM boot time (s) | Storage type | Qubes OS ver. |
|---|---|---|---|---|---|
| @fiftyfourthparallel | i7-1065G7 | 5.10 | 4.7 | SSD | R4.0 |
| @fiftyfourthparallel | i5-10210U | 4.19 | 4.8 | SSD | R4.0 |
| @fiftyfourthparallel | i5-10210U | 5.6 | 5.4 | SSD | R4.0 |
| @GWeck | i5-6600 | 5.4 | 7.3 | SATA-SSD | R4.0 |
| @GWeck | i5-7200U | 5.4 | 7.4 | SATA-SSD | R4.0 |
| @GWeck | i5-7200U | 5.4 | 7.4 | USB-SSD | R4.0 |
| @GWeck | i5-7200U | 5.10 | 7.1 | USB-SSD | R4.0 |
| @GWeck | i5-7200U | 5.11 | 7.2 | USB-SSD | R4.0 |
| @augsch | E3-1231v3 | 4.9 | 8.0 | HDD | R4.0 |
| @wind.gmbh | i7-3520M | 5.4 | 8.2 | SSD | R4.0 |
| @johnboy | R5 2400G | 5.4 | 5.2 | SSD (SATA, btrfs) | R4.0.4 |
| @den1ed | i5-8350U 1.70GHz | 5.10 | 13.1 | HDD | R4.0.4 |
| @beto | Xeon® Silver 4114 | 5.4 | 8.5 | SSD | R4.0 |
| @beto | i5-7260U | 5.4 | 6.9 | SSD | R4.0 |
| @sergiomatta | AMD FX-8300 | 5.4 | 9.2 | HDD | R4.0 |

 


 

While looking through various Qubes forums, I noticed that prospective Qubes users tend to worry about whether their systems are powerful enough. I’ve also wondered how much impact CPU power has on Qubes OS’ operating speed. There aren’t any good resources on this, and since I don’t have access to a bunch of computers to experiment on myself, I came up with a standardized test we can use to create a handy reference table.

From this thread.

tl;dr - This is a crude test that aims to find out the impact of CPU power on VM start time

 

How to submit data

  1. Install a new copy of the latest Debian minimal template. The template can be installed by entering qvm-template install debian-x-minimal into dom0, where x is the Debian version number (rename your existing installation first if necessary). Do not update or modify this template before the test

  2. Shut down all other VMs using qvm-shutdown --all (adding --exclude sys-usb if needed)

  3. Run time qvm-start debian-12-minimal and make note of the real time returned. We recommend you run this test multiple times to get a more reliable result. @wind.gmbh wrote a script that’s short enough to manually enter into dom0:

  • dom0 script
    #!/usr/bin/bash
    
    qube="debian-12-minimal"
    
    get_real_time() {
      realtime="$(/usr/bin/time -f "%e" qvm-start -q ${qube})"
      qvm-shutdown --wait -q "${qube}"
      echo $realtime 
    }
    
    benchmark() {
      qvm-shutdown --all --wait -q
      for ((i = 0; i < 10 ; i++)); do
        sleep 15
        echo "$(get_real_time)"
      done
    } 
    
    benchmark
    
  4. Enter your details into the table below, with the time rounded to one decimal place. If you ran multiple tests, enter the average (a quick averaging sketch follows this list). This is a wiki post, so click ‘edit’ below

    • We’ll assume you’re using the latest Debian for your release unless otherwise stated. For example, if posting for R4.1 today, it’s assumed you’re using Debian 11; if for R4.2, Debian 12. There are holes in this method (e.g. someone who didn’t upgrade, or used a later release), but it should be a good-enough approximation and takes care of the issue without adding new columns
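
If you ran multiple tests and want a quick average, you can pipe the recorded times through awk in dom0. This is only a convenience sketch, not part of the official instructions; the file name times.txt is hypothetical (put one measured time per line in it).

    # Hypothetical helper: times.txt holds one boot time per line,
    # e.g. the ten values printed by the benchmark script above.
    # Prints the mean rounded to one decimal place for the table.
    awk '{ sum += $1; n++ } END { if (n > 0) printf "%.1f\n", sum / n }' times.txt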

Would adding a column with additional notes make sense? Someone could put in more information if they want, like “x220 HDD”.


What is more important: disk type (NVMe, SSD…), disk encryption, or CPU? I have a laptop with an SSD; hdparm -Tt --direct /dev/sda gives me 435 MB/sec / 435 MB/sec, and my SSD is fully encrypted. If I run the test 3 times, it boots in 7.3 s to 8.7 s. The kernel is 5.10.3-1, the CPU is an i7-4810MQ, on R4.1.
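
For what it’s worth, one way to separate raw disk speed from encryption overhead might be to run the same hdparm test against both the raw device and the dm-crypt/LUKS mapping. This is just a rough sketch, not part of the survey instructions; the device names are only examples and the mapper name is hypothetical.

# Rough comparison sketch (adjust device names to your system):
# raw disk throughput
sudo hdparm -Tt --direct /dev/sda
# throughput read through the LUKS mapping (hypothetical mapper name)
sudo hdparm -Tt --direct /dev/mapper/luks-xxxx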


You’re right–that slipped my mind. I was assuming the computers that interested people were new and therefore came with SSDs. Thanks for pointing that out.

According to some tests @unman did, disk type seems to have the greatest effect. This test was created under the assumption that everyone’s using SSDs.

https://www.mail-archive.com/qubes-users@googlegroups.com/msg30816.html


I am afraid that the measurements might give a false impression of accuracy. I upgraded the kernel from 5.4.88-1 to 5.10.8-1 and repeated the test, getting 9.3 s, which at first hinted at worse performance from the newer kernel. Repeating the test with this new kernel, however, resulted in values between 6.9 s and 11.4 s. If the values differ that much, there must be factors influencing them much more than kernel version or CPU speed, especially as my CPU is supposedly much slower than that of @fiftyfourthparallel.

So to get any meaning out of these tests, it is probably necessary to perform 10 tests or so and take the average, which is 8.97 s for my configuration (an HP EliteBook 840-G4 with internal SAMSUNG SSD 850, using kernel 5.10.8-1).


Did you shut down all other VMs using qvm-shutdown --all? I remember performing multiple tests on R4.1, and the real times returned were all basically identical to one another.

I’m not saying you’re wrong (as I’ve often stressed, I’m not technical), but I feel there might be something else going on, since a Qubes system with just Xen and dom0 running shouldn’t have such a large range, with the upper end being more than 50% higher than the lower.

I can’t shut down my VMs right now so I’ll try this again later and get back to you with the results.

I shut down all VMs, so there must be something else in the background which I didn’t see. During one of the tests, however, sys-net started again without apparent reason, and I had to shut it down again.

I’ll try again with R4.1, but I think the times will be longer then, because my R4.1 is running from a USB SSD.


I checked now with R4.1 and kernel 5.10.13-1 and made sure that no other VM was running. The times are still different: 9.2 s, 8.9 s, 12.5 s, 9.3 s and 8.9 s. So most measurements are pretty close, but one is exceptional. It seems that there must be something else running in the background (in dom0, perhaps?).

During my previous tests in R4.0.3, I had the Qube Manager running, and as I thought that maybe that was taking time, I switched it off during my R4.1 tests, but this does not seem to explain the times.


My R4.1 machine is occupied so I still can’t shut down all VMs (even though the test should be conducted on R4.0). I pulled out an airgapped R4.0 and ran the test using an outdated dom0 and a not-pristine (but close enough) debian-10-minimal.

What I noticed confirmed what I had felt before: my 14 nm i5-10210U starts VMs quicker than my 10 nm i7-1065G7 (the former has slightly higher clock speeds, 1.6/4.2 GHz vs the i7’s 1.3/3.9 GHz).

VM start times were closely clustered:

Test # Time (s)
1 5.166
2 4.854
3 4.744
4 4.863
5 4.869
6 4.808
7 4.820
8 4.908

The first one might be anomalous because it was closer to startup.

I still get scattered values, now testing R4.0.3 with all VMs turned off and the Qube Manager turned off as well. After shutting down the debian-10-minimal template, I also waited some time and checked via the Q widget that this VM was no longer running, and that no other VM was running.

Here are my times, with the i5-7200U CPU and the internal disk, using kernel 5.10.13-1:

Test # Time (s)

  1. 7.998
  2. 7.800
  3. 7.995
  4. 7.198
  5. 8.476
  6. 8.475
  7. 5.517
  8. 8.807
  9. 8.012
  10. 8.761

So there is still no clear picture, especially with test 7, which has a much lower value.

My times, CPU E3-1231v3, 7200 RPM HDD, kernel 4.19.152-1:

Time (s)

7.998
8.943
7.907
7.620
7.604
7.731
8.163
8.094
7.714

I have no idea why you’re getting large anomalies. If you include your earlier numbers, your range is between 5.5 and 12.9 seconds, which seems to indicate instability to me (assuming you’ve followed the instructions). But then again, I’m no technician. Have you tried turning your PC on and off again?

I performed the tests on a system that was just started - no actions before the tests.

System

Dom0 Kernel: 5.10.8-1
CPU: AMD Ryzen Embedded V1605B (4 cores / 8 threads @ 2.0 GHz, Turbo @ 3.6 GHz)
Storage: Samsung 970 Evo Plus (M.2 NVMe PCIe 3.0)
RAM: 32 GB DDR4 2400 MHz

Results

6.15
6.32
6.50
6.17
6.29
7.86
6.59
6.54
6.29
6.17
----------
Median: 6.305
Mean: 6.488
Variance: 0.258

Method

I used a little script to obtain the values.

#!/usr/bin/bash

qube="debian-10-minimal"

get_real_time() {
  realtime="$(/usr/bin/time -f "%e" qvm-start -q ${qube})"
  qvm-shutdown --wait -q "${qube}"
  echo $realtime 
}

benchmark() {
  qvm-shutdown --all --wait -q
  for ((i = 0; i < 10 ; i++)); do
    sleep 15
    echo "$(get_real_time)"
  done
}

benchmark
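
If you want to reproduce the median/mean/variance figures above from the script’s raw output, a short sort/awk pipeline like the one below should work. This is only a convenience sketch (the file names benchmark.sh and stats.sh are just examples), and it assumes the variance quoted above is the sample (n-1) variance, which appears to match the figures.

# Sketch: pipe the ten times printed by the benchmark script through this,
# e.g.  bash benchmark.sh | bash stats.sh   (script names are examples)
sort -n | awk '{
  v[NR] = $1; sum += $1
} END {
  if (NR == 0) exit 1
  mean = sum / NR
  for (i = 1; i <= NR; i++) ss += (v[i] - mean) ^ 2
  median = (NR % 2) ? v[(NR + 1) / 2] : (v[NR / 2] + v[NR / 2 + 1]) / 2
  printf "Median: %.3f\nMean: %.3f\nVariance: %.3f\n", median, mean, ss / (NR - 1)
}'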


Thanks for sharing the script!

It’s good that it’s short enough to be manually typed into dom0 so we won’t have to ask people to paste things into it. I’ll add this as an option in the instructions.

Edit: I also went ahead and added your results while I was at it


My more senior “stable” machine:

System

Dom0 Kernel: 5.4.88-1
CPU: Intel Core i7-3520M (2 cores / 4 threads @ 2.9 GHz, Boost @ 3.6 GHz)
Storage: Samsung SATA-III SSD
RAM: 8 GB DDR3 1600 MHz

Results

7.05
8.37
8.99
6.51
8.95
7.84
7.34
9.11
8.12
9.35
----------
Median: 8.245
Mean: 8.163
Variance: 0.934

It turns out my previous test using i5-10210U wasn’t based on a 5.x kernel but a 4.19.147-1 kernel.

I’ve redone the test using a 5.6.16-1 kernel and @wind.gmbh’s handy-dandy script. The results are as follows:

# s
1 5.25
2 5.41
3 6.19
4 5.99
5 4.40
6 4.32
7 5.90
8 6.22
9 6.29
10 4.47

Median: 5.7
Mean: 5.4
Range: 2.0

 

Manual testing:

# s
1 5.681
2 6.054
3 4.324
4 4.531
5 4.681
6 4.506
7 4.523
8 5.707
9 6.912
10 6.551

Median: 5.2
Mean: 5.3
Range: 2.6

Boot times exhibited wider variance than with the 4.19 kernel. I also did manual testing (though not in a strictly consistent way) to see if the script affected times significantly; it does not seem to. Based on this I can say that kernel 4.19 loads quicker and with less variance than 5.6.


Qubes 4.1
CPU: AMD A10-5750M
kernel-5.4.98-1.fc32.qubes.x86_64
SSD

11.76
11.80
11.63
11.77
11.57
11.50
11.43
11.45
11.58
11.60


I finally got rid of R4.1 on my i7-1065G7 since the R4.0.4 installer works with its updated BIOS. I redid the tests and the results were much snappier:

debian-10-minimal (fresh download, not updated, kernel 5.10.8-1, using a modified version of wind.gmbh’s script):
4.08
4.18
4.16
3.89
7.21
4.14
3.92
7.53
4.05
4.01

Mean: 4.7
Median: 4.1
Range: 3.6

R4.0.4 is clearly faster than R4.1 when using the same kernel. There are some outliers, probably from some underlying processes, but otherwise the start times are nearly half those of R4.1. Anyone know what might be causing this?
