Survey: CPU and VM boot time

I glanced through your thread and wasn’t able to parse out what I was looking for:

Is your test method based on the real output of dom0’s built-in time? I’m asking because I can’t wrap my head around how your CPU is starting even Debian with such speed. Did you overclock it, and by how much? Is the clock speed even the main difference here?

Basically the thread was opened by @Insurgo to tell us:

Most of the SSDs that I bought and know of still use lbaf 0, which is 512b.
In the thread it’s explained that one needs to change to lbaf 1, which uses 4kB.

For further reference, see: Umstieg auf Advanced-Format-Festplatten mit 4K-Sektoren | Seagate Deutschland (“Switching to Advanced Format hard drives with 4K sectors”, Seagate Germany)
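To check which lbaf a drive is currently on, something like the following nvme-cli sketch can help. The device path and the sample output below are illustrative, not taken from the thread:

```shell
# Sketch: find which LBA format is marked "in use", assuming nvme-cli.
# The sample text mimics `nvme id-ns -H` output; on real hardware you
# would pipe `nvme id-ns -H /dev/nvme0n1` instead (device path hypothetical).
sample='LBA Format  0 : Metadata Size: 0   bytes - Data Size: 512 bytes - Relative Performance: 0x2 Good (in use)
LBA Format  1 : Metadata Size: 0   bytes - Data Size: 4096 bytes - Relative Performance: 0x1 Better'

in_use=$(printf '%s\n' "$sample" | grep 'in use')
echo "$in_use"

# Switching to lbaf 1 (4096-byte sectors) ERASES the namespace:
#   nvme format --lbaf=1 /dev/nvme0n1
```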

After reading and some testing, I posted in the same thread a workaround for running Qubes with 4k support; changing the lbaf alone already made a significant performance improvement.

And in this benchmark I share the difference between a default setup (Qubes OS + the 512b template provided by Qubes) and Qubes OS on a 4k drive with a 4k template built by me.

I can’t say this is overclocking but it is tweaking.

It’s the same script we use in this thread; the only difference is that I ran it only 5 times.


Thanks for taking the time to explain this to me. Since your setup is somewhat special I’ll go ahead and add your results and mark it SSD-4k.

All I know about SSD performance is SLC, MLC, QLC, etc. Maybe one day I can look forward to seeing ‘4K’ labels being abused on SSD packaging much like how everything had ‘HD’ labels at one point.


I’m astonished by the performance gained from using a 4k SSD with 4kn templates, and I’m willing to test it myself…

Could you please share details on how to build a 4kn debian template? I have looked at qubes-builder but found no information on how to change the template’s sector size. Thanks in advance.

BTW, I don’t know why 512b fedora-35 is also booting so fast, almost the same as 4kn fedora-35, whereas 512b fedora-34 boots slower.

# Get the qubes builder
git clone https://github.com/QubesOS/qubes-builder
cd qubes-builder
# Use the base config already in the repo
cp example-configs/templates.conf builder.conf

# Append custom settings to builder.conf
# Better to update values in-place, but this is the 60 second version
cat >>builder.conf <<EOF

# Build Fedora 35; fc35+minimal or other flavors could be given
DISTS_VM = fc35+minimal

# Use the Qubes online repo for Qubes packages since we are not building them
USE_QUBES_REPO_VERSION = \$(RELEASE)

# Use the 'current' repo; not 'current-testing'
USE_QUBES_REPO_TESTING = 0

# Don't build dom0 packages
TEMPLATE_ONLY = 1

# Since this is TEMPLATE_ONLY, list the only components needed
TEMPLATE = builder-rpm linux-template-builder

EOF

# install build dependencies
make install-deps

# Download and verify sources (only Qubes git repos)
make get-sources-git

# Apply this patch (it adds 4k sector support to the template builder);
# the component path assumes the default qubes-src layout
wget https://github.com/51lieal/qubes-linux-template-builder/commit/fe45b4f0c462590c95030f353bf0b32a26722466.patch
git -C qubes-src/linux-template-builder am fe45b4f0c462590c95030f353bf0b32a26722466.patch

# Build the template
make template

You may want to edit prepare_image.sh and change xfs to ext4.
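If it helps, that xfs-to-ext4 change is a one-line substitution. The exact mkfs invocation inside prepare_image.sh is an assumption here, so the sketch below demonstrates the edit on a stand-in file; review the real script before changing it:

```shell
# Demonstrate the substitution on a stand-in copy first; note that
# mkfs.xfs uses -f to force, while mkfs.ext4 uses -F.
printf 'mkfs.xfs -f "$IMG"\n' > /tmp/prepare_image_demo.sh
sed -i 's/mkfs\.xfs -f/mkfs.ext4 -F/' /tmp/prepare_image_demo.sh
cat /tmp/prepare_image_demo.sh

# The equivalent edit on the real tree would be something like:
#   sed -i 's/mkfs\.xfs -f/mkfs.ext4 -F/' qubes-src/linux-template-builder/prepare_image.sh
```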


I got some interesting results.

On an SSD that only supports a 512-byte LBA sector size (meaning it cannot be switched to 4kn mode), I got a better start time than with my previous installation.

The new installation is on a 4k-sector xfs volume, which I created following the guide @51lieal kindly provided. I cannot start any VM using the created “vm” lvm-thin pool, so I set the default pool back to varlibqubes. My templates are 512b; I haven’t installed my 4kn templates yet.

My previous installation was on btrfs, using all default settings. 512b sector size, 512b templates, etc.

The specs of both SSDs are mostly the same, except that my previous SSD has slightly better overall performance and supports two LBA formats: 512 (default) and 4096 (unused).


I feel that while the presented numbers help describe the relative speed of different hardware, they are going to be deceptive for prospective Qubes users worried about whether their systems are powerful enough for Qubes. What they will actually care about is the human wait time to open a browser (e.g. via Whonix), which normally takes 30 seconds to several minutes.
Could you add some mention of how these numbers will be underestimates of their actual wait times?

30 seconds to several minutes ???
Even on my pedestrian x220s I don’t see anything like that wait time.
Is this Whonix specific?

Take a look at an old post of mine where I did some real world testing:
https://www.mail-archive.com/qubes-users@googlegroups.com/msg30816.html


i7-3840QM running 5.10.112-1.fc32.qubes.x86_64 on SSD/BTRFS
min=8.02; mean=8.3; max=8.81

actual results of all 10 attempts

8.81
8.28
8.20
8.26
8.28
8.30
8.25
8.34
8.26
8.02
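
For anyone re-running the script, the summary line can be recomputed from the raw samples with a short awk one-liner (nothing Qubes-specific):

```shell
# Recompute min/mean/max from the ten samples above.
samples='8.81 8.28 8.20 8.26 8.28 8.30 8.25 8.34 8.26 8.02'
stats=$(printf '%s\n' $samples | awk '
    NR==1 { min=$1; max=$1 }
    { sum+=$1; if ($1<min) min=$1; if ($1>max) max=$1 }
    END { printf "min=%.2f; mean=%.2f; max=%.2f", min, sum/NR, max }')
echo "$stats"   # min=8.02; mean=8.30; max=8.81
```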

I had the same reaction, but maybe @fiftyfourthparallel means from power on to browser open (including booting the entire system)?

The results are for starting a minimal template with no other qubes
running: just starting.

@ddevz seems to be talking about opening a browser - presumably from
selecting from the menu to the browser opening, but it isn’t clear what
other qubes might be involved in that process. (sys-net? sys-firewall?
Whonixes?)
Whatever they might be, I’d recommend identifying the hardware so we can
maintain a blacklist as well as your useful whitelist.

You probably mean me, not fiftyfourthparallel, and no, I mean:

Prep the system: select a disposable Whonix browser from the start button, wait for sys-whonix to start, and after the browser appears, close it (this just makes sure we’re not waiting for other VMs to start during the timed test).

Do the timed test: start the timer as you select a disposable whonix browser from the start button, stop the timer the moment the web browser appears.
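The stopwatch half of that procedure can be scripted so the numbers aren’t hand-timed. A minimal sketch, assuming GNU coreutils `date` (for `%N`) and awk; the `sleep` stands in for the real launch command (e.g. a `qvm-run` of the disposable browser, exact command and VM name depending on your setup), and note that timing a command only captures until it exits, not until the window actually paints:

```shell
# Crude wall-clock stopwatch around a launch command.
start=$(date +%s.%N)
sleep 0.2        # stand-in for the real launch command
end=$(date +%s.%N)
elapsed=$(awk -v a="$start" -v b="$end" 'BEGIN { printf "%.2f", b - a }')
echo "browser wait time: ${elapsed}s"
```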

I’ve seen it across multiple systems. Here’s what I’ve seen:

Normal system with a HDD, on either qubes 4.0 or 4.1:
browser wait time: several min

A system specifically bought just to run Qubes faster than a normal system:

  • Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
  • VM kernel version: 5.10
  • M.2 module (i.e. solid state)
  • Qubes 4.1
  • Memory: 128 Gigs

browser wait times:
24.5 seconds
24.5 seconds
25.3 seconds

doing the “debian-11 minimal” procedure:
7.2 seconds
7.4 seconds
7.6 seconds
7.2 seconds

same thing but with an up-to-date debian-11 image:
6.5 seconds
6.4 seconds
6.7 seconds

Assuming those are in seconds, your browser opens are more than 2 min for HDD and 33+ seconds for SSD.
However, your text above the results implies you are shutting down sys-whonix between opens and forcing sys-whonix to restart for each test. I would expect this to slow my 25-second results down.

I would expect my system to be faster than yours, so it sounds like I need to investigate the configuration I’ve been using to set systems up.

OK, I see. In my case (i7-3840QM, 16GB RAM, SSD) …

  • 8.3 seconds for the debian-11-minimal procedure
  • ~19 seconds for google-chrome to appear when launching a new disp vm (debian-11-minimal based)

I can see how with whonix this might take a bit longer, so I suppose this is a whonix related thing.


I can also see how at first glance this sounds like a big deal: 19 seconds to start the browser! That is, until one realizes that an entire virtual machine (with 1 GB RAM in my case) got created and launched, and the browser launched too, and that as a result visiting any website is now safer than it could ever be on a bare-metal install.

I am not saying that there is anything wrong with wanting faster hardware and better performance, just trying to keep it all in perspective.


I don’t see a need to append anything since the test is clearly labeled as ‘very crude’. There’s no need to complicate things by referencing or adding boot times including apps and apps on heavier distros (Whonix), as what we have should give a good enough indication of real-world performance. If someone wants to make a Qubes performance benchmark that does include this data, it would make me (and I imagine Sven) happy, but this very crude test isn’t the place.

Also, I know that older machines are popular among the really hardcore infosec enthusiasts, mainly due to perceptions about ME vulnerabilities (why the AMD counterpart, the PSP, is so often overlooked is beyond me). However, I think they are a vocal minority among Qubes users, if the hardware surveyed here and in the HCL is any indication. Over time they’ll inevitably become an even smaller minority. I’d wager that for most users the gap between a VM launching and its app launching is usually small enough.

I noticed the test is heavily influenced by the initial memory setting of the template.

I tried increasing the initial memory to 16 GB, which added more than 10 sec to the start time, but even small changes can shift the start time by 1-2 sec. The lower the memory, the faster the template starts.


I would test this observation but I’m not free at the moment. Hopefully someone will post some numbers and if they’re consistent enough I’ll consider adding another dimension to this test.

I wonder if VM disk size also has an impact.

I noticed that recently I could not reproduce the numbers in my past posts. Boot time, in general, has increased by about one second without mem-balancing, and by two seconds or so with mem-balancing. I’m not quite sure what the cause is, but the phenomenon suggests this thread could also be used to track performance across versions of infrastructure software (Xen, libvirt, etc.), more or less helping drive performance improvements.


I booted up my debian-11-minimal a few times and it came back as 10 seconds each, compared to 6.1 last time I checked. This was without shutting everything down though, so this is far from conclusive. I’ll do another round of tests once I find time.

Generally speaking I’ve found Qubes less stable and more sluggish than before.

@augsch, if you want to add a few more columns to the table, feel free to, but be sure to mark them as optional.


I did actual tests and the result came back at around 6.7s, which is nowhere near as slow as when everything was running, but 10% slower than earlier. Updated the wiki.


[added my measurements]

I’m using the performance governor (as hinted by @renehoj in this post) with 6 dom0 vcpus, pinned to E cores. Using the ‘ondemand’ governor is a tad slower (+0.5sec).

A 5.6s startup time is great compared to my old T450s that’s been running Qubes for the past 7 years, but it’s 50+% more than @renehoj’s (who I assume ran the test on an MSI Z690-A Pro DDR4, same as mine). I doubt it’s only down to the CPU (i9 12th gen vs i5 13th gen); it might be that btrfs also gives an edge over lvm operations when starting the VM.

[edit: forgot to add that disabling memory balancing, as per @augsch’s advice, shaves ~0.5s; total = 5.1s]