Qubes vCPU Limit

For those of you who don’t know about this, there is a limit on how many vCPUs you can give a single qube. Take an 8th-gen i7 as an example: my CPU can only handle about 20 vCPUs per qube. So here is the question: what’s your maximum number of vCPUs per qube?
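In case it helps anyone reading along, the per-qube vCPU count can be inspected and changed from dom0 with `qvm-prefs` (a sketch; `personal` is a placeholder qube name, substitute your own):

```shell
# Run in dom0. "personal" is a placeholder qube name.
qvm-prefs personal vcpus      # show the qube's current vCPU count
qvm-prefs personal vcpus 4    # cap the qube at 4 vCPUs (applies on next start)
```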

Giving a VM more vCPUs than exist on your hardware reduces computing efficiency. Therefore, on a four-core laptop, the maximum I use is 4 vCPUs.


Unpopular opinion:

Here is the thing: you said that giving more vCPUs reduces performance. Let’s say this: if a piece of software needs some more computing power, then are we just creating some sort of machine learning by using our own hardware?

The number of vCPUs acts like a divider for the total number of cycles available on your hardware. By increasing the number of vCPUs you simply decrease the number of cycles each can use. This is useful if you run programs that can be parallelized and each execution unit has wait times caused by something other than computing (e.g. disk I/O).
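To put toy numbers on the “divider” point (a rough model that ignores scheduler overhead, I/O wait, and cache effects — the numbers below are illustrative, not measured):

```shell
# Toy model: total host cycles get split across all busy vCPUs.
# 4 physical cores at 3 GHz = 12e9 cycles/sec total (illustrative figure).
host_cycles=12000000000

# With 4 busy vCPUs, each gets a full core's worth of cycles:
echo $((host_cycles / 4))    # 3000000000

# With 20 busy vCPUs on the same host, each gets only a fifth of that:
echo $((host_cycles / 20))   # 600000000
```

So over-committing doesn’t create capacity; it just slices the same capacity thinner, which only pays off when vCPUs spend much of their time waiting rather than computing.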


As far as I am aware, there is no gen8 i7 with 20 cores or even 20 threads (I think 6 cores is the max?).

OP is over-committing VCPUs, which is known to lead to performance decreases for various reasons.


That is the way I perceive it too. But what about dom0? I have read this Xen Project wiki and it states

"By default dom0 gets as many vCPUs as CPUs on the physical host. This might be a good idea if your host only has 4 CPUs, but as systems get bigger there’s no reason to assign that many vCPUs to Dom0, so reducing it to something sensible is interesting for performance."

I have 10 physical cores on my Xeon, and xentop shows me that my dom0 currently has 10 vCPUs assigned. According to the Xen wiki, it seems that dom0 is over-provisioned in my case.

My question to the group is, assuming I pin dom0 only to the first 4 physical cores and prohibit my 10-20 guest VMs from touching those 4 cores:

  1. Will the performance of the guest VMs improve over the current provisioning?
  2. Will dom0 performance suffer with only 4 vCPUs serving that many guest VMs?
  3. Has anyone here experimented with this type of provisioning in Qubes specifically?
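For reference, here is how this kind of provisioning is usually attempted on vanilla Xen (a sketch, not Qubes-specific advice; the boot options and `xl` subcommands are standard Xen, but `myguest` is a placeholder domain name and the 10-core/CPUs-0-3 split is just this thread’s example):

```shell
# 1) Limit and pin dom0's vCPUs at boot, via the Xen command line
#    (e.g. GRUB_CMDLINE_XEN in /etc/default/grub):
#        dom0_max_vcpus=4 dom0_vcpus_pin
#    This gives dom0 exactly 4 vCPUs, each pinned to one physical CPU.

# 2) Keep guests off dom0's CPUs at runtime (assumes a 10-core host
#    with CPUs 0-3 reserved for dom0):
xl vcpu-pin Domain-0 all 0-3    # dom0 vCPUs stay on CPUs 0-3
xl vcpu-pin myguest  all 4-9    # guest vCPUs use the remaining CPUs
```

Pinning can also be made persistent per guest with a `cpus=` line in the domain config rather than re-running `xl vcpu-pin` after each start.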


@brendanhoar I agree with you. My post was meant to answer the OP and support your initial answer.

I can see, reading it again now, that this was lost in the final version I posted (around 2 AM my time :wink:).

I’ve done this on some machines, and frankly see almost no performance change. YMMV.
4 vCPUs is likely excessive, and you can go lower without issue.
There is some security benefit in reserving cores for dom0.

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.

Thank you @unman, this will be very helpful.

I’m not yet a Qubes user; for now I’m only using vanilla Xen with a somewhat Qubes-like setup, so my remarks may be out of line as I don’t fully grasp every aspect of Qubes!
I also pin 4 vCPUs to dom0 and forbid other domains from using them, leaving 12 vCPUs for the guests (Ryzen 1700X, 8c/16t).
When downloading big files at high speed (100+ MB/s), the 4 dom0 vCPUs see quite heavy use! Does Qubes have performance optimizations compared to vanilla Xen?

Can you elaborate, please? Does it have to do with L2/L3 cache sharing?

Thx!