Qube cores?

I am running an Intel i5-1235U, which has 10 cores in total. Since I started using Qubes I was under the impression that when working out qube core counts I had to divide those 10 cores between qubes. So with 4 qubes open, I had to either give 2 cores to qube 1, 2 cores to qube 2, 2 cores to qube 3, and 4 cores to qube 4, OR I could select up to 10 cores on each qube and the system would dynamically handle the resources between them. But then I realised that I could actually select up to 99 cores. I don’t understand this, as it would be impossible for my computer to run 99 cores when it physically only has 10. So what is the deal with qube cores and how they are distributed?

I have tried to search for information regarding this but couldn’t find anything that I could understand.

You can assign more than one vCPU per physical core; you can think of a vCPU as a process thread. Your total number of vCPUs across all qubes is likely many times higher than your physical core count.

If the scheduling of CPU time is round-robin, the more vCPUs you give to a qube, the more time slices it gets on the CPU. The other qubes will get less time on the CPU, and Xen will have to work harder to manage the scheduling.
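
As a rough illustration (this is a toy model, not how Xen’s credit scheduler actually works), here is a sketch of a round-robin pass where each vCPU gets one time slice, so a qube with more vCPUs ends up with a proportionally larger share of CPU time:

```python
from collections import Counter

# Toy round-robin model: one time slice per vCPU per pass.
# Not Xen's real credit scheduler, just an illustration of why a qube
# with more vCPUs ends up with a larger share of CPU time overall.
qubes = {"qube1": 2, "qube2": 2, "qube3": 2, "qube4": 4}  # name -> vCPU count

slices = Counter()
for _ in range(100):                 # 100 scheduling passes
    for name, vcpus in qubes.items():
        slices[name] += vcpus        # one slice per vCPU per pass

total = sum(slices.values())
for name, n in slices.items():
    print(f"{name}: {n / total:.0%} of CPU time")
# qube4 (4 vCPUs) gets twice the share of each 2-vCPU qube.
```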


The CPU cores exposed to each virtual machine (to each qube) are virtualized. They don’t map to any specific physical CPU core, and any workload assigned to one such virtual CPU core will be done by any physical CPU core as time permits. Generally, though, performance will be best if the total number of virtual CPU cores you have assigned is fewer than the number of physical CPU cores.
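
If you want to see how that adds up on your own machine, here is a minimal sketch (run in dom0; xl info may need root, the qubesadmin Python module is assumed to be available, and vcpus is the same property qvm-prefs shows) that compares the assigned vCPUs of running qubes against the host CPU count:

```python
import subprocess
from qubesadmin import Qubes  # dom0 admin API; assumed available here

# Host CPU count as Xen sees it (nr_cpus counts logical CPUs, i.e. threads).
xl_info = subprocess.check_output(["xl", "info"], text=True)
nr_cpus = next(int(line.split(":")[1]) for line in xl_info.splitlines()
               if line.startswith("nr_cpus"))

app = Qubes()
running = [vm for vm in app.domains if vm.name != "dom0" and vm.is_running()]

print(f"host CPUs (xl info nr_cpus): {nr_cpus}")
total = 0
for vm in running:
    print(f"{vm.name}: {vm.vcpus} vCPUs")
    total += vm.vcpus
print(f"total vCPUs across running qubes: {total}")
```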

What is a round-robin CPU schedule? Is it right to give low-resource qubes like sys-usb and sys-net just 1 core each, while giving a qube that has to run heavy programs 99 cores?

So should I view it as having 99 virtual cores in total that I have to split between all the qubes? Have I been hurting my performance by setting the virtual core count to only 10 on a qube that has to run heavy programs?

I don’t know. I wouldn’t have guessed there is such a limitation though.

You cannot improve performance further by setting it higher than the total number of physical CPU cores. You can destroy CPU cache performance by doing that though, so I would recommend against it.
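
If you script your qube settings, a minimal sketch like this (dom0, qubesadmin module assumed, "work" is just a hypothetical qube name) can clamp the requested count so it never exceeds the host CPU count; the new value normally takes effect the next time the qube starts:

```python
import os
from qubesadmin import Qubes  # dom0 admin API; assumed available here

HOST_CPUS = os.cpu_count()    # good enough if dom0 sees all host CPUs

app = Qubes()
vm = app.domains["work"]                 # hypothetical qube name
vm.vcpus = min(16, HOST_CPUS)            # requested 16, clamped to the host
print(f"{vm.name}: vcpus set to {vm.vcpus}")
```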


I’ve observed that in normal circumstances 1 vCPU per service qube is enough, but… I have a NAS, and when transferring things, 1 vCPU for sys-net and sys-firewall makes them choppy when refreshing status, and maybe with 1G transfers too, so now they have 2 vCPUs each.
sys-audio is happy with 1 vCPU when the audio device is on PCI - I don’t know about when the audio device is on USB. Bluetooth headphones work with no problems on 1 vCPU.
sys-usb works fine with 1 vCPU unless you attach a USB 3 drive - I haven’t tested that yet.
sys-vpn qubes are happy with 1 vCPU each.
Browser qubes work fine with 2 vCPUs, the YT qube with 4 vCPUs. app-vault (KeePass) works fine with 1 vCPU, app-email with 2 vCPUs. The 4K playback qube has 4 vCPUs.

PS: i5-1245U, 10 cores
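
If you want to apply per-qube counts like these in one go, here is a minimal sketch (run in dom0; it assumes the qubesadmin Python module; the qube names are just examples and have to match your own setup, and each change normally takes effect the next time that qube starts):

```python
from qubesadmin import Qubes  # dom0 admin API; assumed available here

# Example per-qube vCPU counts, roughly matching the post above.
# The qube names are placeholders - adjust them to your own setup.
vcpu_plan = {
    "sys-net": 2, "sys-firewall": 2, "sys-audio": 1, "sys-usb": 1,
    "sys-vpn": 1, "browser": 2, "yt": 4, "vault": 1, "email": 2,
}

app = Qubes()
for name, count in vcpu_plan.items():
    try:
        app.domains[name].vcpus = count
        print(f"{name}: vcpus set to {count}")
    except KeyError:
        print(f"{name}: no such qube, skipped")
```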


What about more intense programs like KiCad? I know that Qubes OS isn’t really ideal for KiCad due to not having direct GPU access, but as long as you don’t use the 3D function it works OK, at least for me when I select 10 virtual cores. I am wondering whether selecting more cores would improve performance, or whether, as ryrona says, it would ruin cache performance. Also, if I give the qube running KiCad 10 virtual cores and I have a browser qube open with 4 virtual cores, that totals 14 cores, which is more than my CPU has, so what is actually happening then?

These aren’t cores. They are virtual CPUs. The best comparison would be to a program - a vCPU is not a CPU core. You can run whatever you want, and the Xen hypervisor will try to spread it evenly across the real cores. When one VM has more vCPUs than there are physical cores, there is a lot of shuffling between cache and RAM - a vCPU and its cache have to be loaded onto a core and then unloaded again to load another one.
If that happens between VMs it’s not a big problem, because there is a schedule for loading and unloading and very often it doesn’t collide between VMs, so one VM is unloaded while another one is loaded and it works almost smoothly.
But if one VM has more vCPUs than cores, then within a single cycle of that VM Xen needs to load and unload vCPUs onto the cores, which impairs that VM’s performance.
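
To put rough numbers on that, here is a toy model (not Xen’s actual scheduler) of how many extra vCPU load/unload swaps a single VM forces per pass once its vCPU count exceeds the physical core count:

```python
# Toy model: once a single VM has more vCPUs than physical cores,
# every extra vCPU forces a load/unload of some core (and its cache)
# just to let all of that VM's vCPUs run once. Not Xen's real scheduler.
PHYSICAL_CORES = 10

def extra_swaps_per_pass(vcpus: int, cores: int = PHYSICAL_CORES) -> int:
    """Extra vCPU swaps needed so every vCPU of the VM runs once."""
    return max(0, vcpus - cores)

for vcpus in (4, 10, 14, 99):
    print(f"{vcpus:>2} vCPUs -> {extra_swaps_per_pass(vcpus)} extra swaps per pass")
```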

Some insight into how vCPUs are managed:
https://umhau.github.io/throttle-a-xen-vcpu/