Survey: CPU and VM boot time

Not sure if it's the CPU, but 3.x-second VM startup times are impressive. I'm inclined to think it's because you're using Fedora, since my i7-1065G7 is somewhat similar to yours. Are you planning on running the same test using debian-11-minimal for comparison?


Impressive indeed. @51lieal do you think you can add these results to the table on the first post?

The same thought occurred to me, but the test is only for Debian, so putting Fedora results there would make things confusing.

First output is with all standard VMs running.
Second output is after the shutdown all command.

Lenovo Legion Y540-17IRH-PG0
Processor: i7-9750H, 6 cores / 12 threads (2.6 GHz - 4.5 GHz), 12 MB cache, Intel UHD Graphics 630
Chipset / memory: Mobile Intel HM370 chipset; 32 GB max, 2666 MHz DDR4, dual-channel capable, two DDR4 SO-DIMM sockets
Storage: 256 GB NVMe M.2 2280 SSD, PCIe 3.0 x4, 32 Gb/s

(screenshot: Screenshot_2022-04-07_19-37-01)

Some thoughts: do you have memory ballooning enabled for dom0 or the domUs?
What if you set mem=maxmem for dom0 and/or the Debian template, to avoid it?
Do you use VCPU pinning? On my non-Qubes dom0s all the CPUs run at stock speed, never at turbo (but that may just be my settings, I don't know).
Also, unman mentioned that the SSD is the component most sensitive for speed, so monitoring with iotop or even atop during the tests could help.
Maybe VCPU scheduling or memory/disk caching is at play.
Monitoring with xentop and watch -n 1 xl vcpu-list during the tests may also give pointers; see the sketch below.
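
A minimal dom0 monitoring setup along those lines might look like this (xentop and xl ship with dom0; iotop may need to be installed there):

# In one dom0 terminal: per-domain CPU and memory usage
xentop

# In another: VCPU-to-physical-CPU placement, refreshed every second
watch -n 1 xl vcpu-list

# Disk I/O pressure during the test (-o shows only processes currently doing I/O)
sudo iotop -o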

Are you interested in any tests, or only bare metal ones?
I run Qubes virtualized and can change any setting with ease. Damn, I just deleted the 4.0.4 domU…
Also, are you interested in a comparison with a "normal"/non-Qubes Xen install (but with a Debian dom0)?
I can run the tests with a Debian netinst to compare.

Left is Debian, top right is 4kn Fedora 35 minimal, updated.

I'll try to build a Debian 4kn template and we'll see the difference.

I glanced through your thread and wasn’t able to parse out what I was looking for:

Is your test method based on the real output of dom0’s built-in time? I’m asking because I can’t wrap my head around how your CPU is starting even Debian with such speed. Did you overclock it, and by how much? Is the clock speed even the main difference here?

Basically, the thread was opened by @Insurgo to tell us about:

Most of the SSDs that I've bought and know of still use LBA format 0, which is 512 bytes.
In the thread it's explained that one needs to change to LBA format 1, which uses 4096-byte sectors.

For further reference see here: Switching to Advanced Format hard drives with 4K sectors | Seagate Deutschland

After reading and some testing, I posted in the same thread a workaround for running Qubes with 4k support; changing the LBA format alone already made a significant performance improvement.
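
For anyone who wants to check their own drive, here is a rough sketch using nvme-cli (the device name is only an example, and nvme format erases the drive, so back everything up first):

# List the LBA formats the drive supports; "(in use)" marks the current one
sudo nvme id-ns /dev/nvme0n1 --human-readable

# Switch to the 4096-byte format, assumed here to be lbaf 1 -- THIS WIPES THE DRIVE
sudo nvme format /dev/nvme0n1 --lbaf=1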

And in this benchmark I share the difference between the default setup (Qubes OS + the 512b template provided by Qubes) and Qubes OS on a 4k drive + a 4k template built by me.

I can’t say this is overclocking but it is tweaking.

It's the same script we use in this thread; the only difference is that I set it to run only 5 times.
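
Roughly, the idea is a loop like this (a simplified sketch of that kind of timing script, not the exact one; the template name and run count are illustrative):

# Time qvm-start for a few runs, shutting the VM down in between
VM=debian-11-minimal
for i in $(seq 1 5); do
    /usr/bin/time -f "%e" qvm-start "$VM"
    qvm-shutdown --wait "$VM"
done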


Thanks for taking the time to explain this to me. Since your setup is somewhat special I’ll go ahead and add your results and mark it SSD-4k.

All I know about SSD performance is SLC, MLC, QLC, etc. Maybe one day I can look forward to seeing ‘4K’ labels being abused on SSD packaging much like how everything had ‘HD’ labels at one point.


I’m astonished by the performance gained from using a 4k SSD with 4kn templates, and I’m willing to test it myself…

Could you please share details on how to build a 4kn debian template? I have looked at qubes-builder but found no information on how to change the template's sector size. Thanks in advance.

BTW, I don’t know why 512b fedora-35 is also booting so fast, almost the same as 4kn fedora-35, whereas 512b fedora-34 boots slower.

# Get the qubes builder
git clone https://github.com/QubesOS/qubes-builder
cd qubes-builder
# Use the base config already in the repo
cp example-configs/templates.conf builder.conf

# Append custom settings to builder.conf
# Better to update values in-place, but this is the 60 second version
cat >>builder.conf <<EOF

# Build the Fedora 35 minimal template; other DISTS_VM values or flavors could be given
DISTS_VM = fc35+minimal

# Use the Qubes online repo for Qubes packages since we are not building them
USE_QUBES_REPO_VERSION = \$(RELEASE)

# Use the 'current' repo; not 'current-testing'
USE_QUBES_REPO_TESTING = 0

# Don't build dom0 packages
TEMPLATE_ONLY = 1

# Since this is TEMPLATE_ONLY, list only the components needed
COMPONENTS = builder-rpm linux-template-builder

EOF

# install build dependencies
make install-deps

# Download and verify sources (only Qubes git repos)
make get-sources-git

# Apply this patch to the linux-template-builder component:
# https://github.com/51lieal/qubes-linux-template-builder/commit/fe45b4f0c462590c95030f353bf0b32a26722466.patch
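
# One way to fetch and apply it (assuming get-sources checked the component
# out into qubes-src/linux-template-builder; adjust paths as needed):
curl -LO https://github.com/51lieal/qubes-linux-template-builder/commit/fe45b4f0c462590c95030f353bf0b32a26722466.patch
git -C qubes-src/linux-template-builder am fe45b4f0c462590c95030f353bf0b32a26722466.patch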

# Build the template
make template

You may want to edit prepare_image.sh and change xfs to ext4.


I got some interesting results.

On an SSD that only supports a 512-byte LBA sector size (which means it cannot be switched to 4kn mode), I got a better start time than with my previous installation.

The new installation is on a 4k-sector-sized xfs volume. I created it following the guide @51lieal kindly provided. I cannot start any VM using the created "vm" lvm-thin pool, so I set the default pool back to varlibqubes. My templates are 512b; I haven't installed my 4kn templates yet.
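
In case it helps anyone reproducing this, switching the default pool back is done in dom0 with qubes-prefs (pool name as in a stock install; yours may differ):

# Show the current default storage pool
qubes-prefs default_pool

# Point newly created qubes back at the file-based varlibqubes pool
qubes-prefs default_pool varlibqubes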

My previous installation was on btrfs, using all default settings. 512b sector size, 512b templates, etc.

The specs of both SSDs are mostly the same, except that my previous SSD has slightly better overall performance and supports two LBA formats: one is 512 (default), the other 4096 (unused).


I feel that while the presented numbers help describe the relative speed of different hardware, they are going to be deceptive for prospective Qubes users worrying about whether their systems are powerful enough for Qubes, since what they will actually care about is the human wait time for things like opening a browser (i.e. Whonix), which normally takes 30 seconds to several minutes.
Could you add some mention of how these numbers will be underestimates of their actual wait times?

30 seconds to several minutes ???
Even on my pedestrian x220s I don’t see anything like that wait time.
Is this Whonix specific?

Take a look at an old post of mine where I did some real world testing:
https://www.mail-archive.com/qubes-users@googlegroups.com/msg30816.html


i7-3840QM running 5.10.112-1.fc32.qubes.x86_64 on SSD/BTRFS
min=8.02; mean=8.3; max=8.81

actual results of all 10 attempts

8.81
8.28
8.20
8.26
8.28
8.30
8.25
8.34
8.26
8.02
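
For anyone who wants to double-check the summary line, it can be recomputed from the raw numbers with an awk one-liner (purely illustrative):

# Recompute min / mean / max from the ten samples above
printf '%s\n' 8.81 8.28 8.20 8.26 8.28 8.30 8.25 8.34 8.26 8.02 |
awk 'NR==1{min=max=$1} {s+=$1; if($1<min)min=$1; if($1>max)max=$1} END{printf "min=%.2f; mean=%.2f; max=%.2f\n", min, s/NR, max}'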

I had the same reaction, but maybe @fiftyfourthparallel means from power on to browser open (including booting the entire system)?

The results are for starting a minimal template with no other qubes running: just starting.

@ddevz seems to be talking about opening a browser - presumably from selecting it in the menu to the browser opening, but it isn't clear what other qubes might be involved in that process. (sys-net? sys-firewall? Whonixes?)
Whatever they might be, I'd recommend identifying the hardware so we can maintain a blacklist as well as your useful whitelist.

You probably mean me, not fiftyfourthparallel, and no, I mean:

Prep the system: (i.e. select a disposable Whonix browser from the start button, wait for sys-whonix to start, and after the browser appears, close the browser; this just makes sure we're not waiting for other VMs to start during the timed test)

Do the timed test: start the timer as you select a disposable Whonix browser from the start button; stop the timer the moment the web browser appears.

I’ve seen it across multiple systems. Here’s what I’ve seen:

Normal system with an HDD, on either Qubes 4.0 or 4.1:
browser wait time: several minutes

A system specifically bought just to run Qubes faster than a normal system:

  • Intel(R) Xeon(R) Gold 6240R CPU @ 2.40GHz
  • VM kernel version: 5.10
  • M.2 module (i.e. solid state)
  • Qubes 4.1
  • Memory: 128 GB

browser wait times:
24.5 seconds
24.5 seconds
25.3 seconds

doing the “debian-11 minimal” procedure:
7.2 seconds
7.4 seconds
7.6 seconds
7.2 seconds

same thing but with an up-to-date debian-11 image:
6.5 seconds
6.4 seconds
6.7 seconds

Assuming those are in seconds, your browser opens take more than 2 minutes for HDD and 33+ seconds for SSD.
However, your text above the results implies you are shutting down sys-whonix between opens and forcing sys-whonix to restart for each test. I would expect this to slow my 25-second results down.

I would expect my system to be faster than yours, so it sounds like I need to investigate the configuration I've been using to set systems up.

OK, I see. In my case (i7-3840QM, 16GB RAM, SSD) …

  • 8.3 seconds for the debian-11-minimal procedure
  • ~19 seconds for google-chrome to appear when launching a new disp vm (debian-11-minimal based)

I can see how with Whonix this might take a bit longer, so I suppose this is a Whonix-related thing.

some context

I can also see how at first glance this sounds like a big deal: 19 seconds to start the browser! … that is, until one realizes that an entire virtual machine (with 1 GB RAM in my case) got created and launched, and the browser launched too, and that as a result visiting any website is now safer than it could ever be on a bare metal install.

I am not saying that there is anything wrong with wanting faster hardware and better performance, just trying to keep it all in perspective.


I don't see a need to append anything, since the test is clearly labeled as 'very crude'. There's no need to complicate things by referencing or adding boot times that include apps, or apps on heavier distros (Whonix); what we have should give a good enough indication of real-world performance. If someone wants to make a Qubes performance benchmark that does include this data, it would make me (and I imagine Sven) happy, but this very crude test isn't the place.

Also, I know that older machines are popular among the really hardcore infosec enthusiasts, mainly due to perceptions about ME vulnerabilities (why the AMD counterpart, the PSP, is so often overlooked is beyond me). However, I think they are a vocal minority among Qubes users, if the hardware surveyed here and in the HCL is any indication. Over time they'll inevitably become an even smaller minority. I'd wager that for most users the gap between a VM launching and its app launching is small enough.