Qubes OS and CPU affinity

Is it currently possible to assign a static set of CPUs to all domU domains? I'd like to prevent them from using the cores I pinned to dom0.

You can set a global vcpu affinity mask in /etc/xen/xl.conf:

# Specify global vcpu hard affinity masks. See xl.conf(5) for details.
#vm.cpumask="0-7"
#vm.pv.cpumask="0-3"
#vm.hvm.cpumask="3-7"

/etc/xen/xl.conf - XL Global/Host Configuration
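
For example, assuming dom0 is pinned to CPUs 0 and 1 on a 16-CPU host (the range below is just an illustration), you would uncomment the mask and then check a guest's affinity after (re)starting it:

# in /etc/xen/xl.conf
vm.cpumask="2-15"

# after (re)starting a guest
xl vcpu-list vmname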

I've actually tried that before, but it doesn't take effect. I'm not sure if it's Qubes-related or due to the bug(s) the Xen documentation mentions. Another option would be adding the cores to a new cpupool and somehow persistently assigning it to the domains, but I don't know how to do that on Qubes.
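
For reference, outside of Qubes I'd expect the cpupool route to look roughly like this (untested; the pool name, scheduler and CPU list below are made up, see xl(1) and xlcpupool.cfg(5) for the exact syntax):

# free the CPUs from the default pool first
xl cpupool-cpu-remove Pool-0 2-15

# domU-pool.cfg containing:
#   name = "domU-pool"
#   sched = "credit2"
#   cpus = "2-15"
xl cpupool-create domU-pool.cfg

# move a domain into the new pool
xl cpupool-migrate vmname domU-pool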

Maybe this works

That does work; I'm already doing that for one of my templates. But what about non-static DispVMs, is it possible to achieve the same result for them? The global vcpu mask would be ideal, and I'd love to hear if it works for anyone on Qubes.

I guess it doesn't work because a CPU pool is in use:

Due to bug(s), these options may not interact well with other options concerning CPU affinity. One example is CPU pools. Users should always double check that the required affinity has taken effect.

So edit the CPU pool instead, for example to remove CPUs 0 and 1:
xl cpupool-cpu-remove Pool-0 0,1

Cpupools Howto - Xen
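
Afterwards you can check which CPUs are left in each pool with:

xl cpupool-list -c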

I've removed the cores I assigned to vm.cpumask from the cpupool, but it still doesn't take effect.
xl vcpu-list still shows the domains' affinity as "all", with the exception of the cores I've removed from the pool. Also, after rebooting, all cores go back to the pool.

I think the vm.cpumask option will be ignored if CPU pools are used.
I can only suggest creating a new CPU pool for dom0, moving dom0 into it, leaving all other VMs in the default Pool-0, and removing dom0's CPUs from Pool-0.

I get this error when I try to move dom0 to another pool (with xl cpupool-migrate):

libxl: error: libxl_cpupool.c:437:libxl_cpupool_movedomain: Error moving domain to cpupool

What about the cpupools going back to the default after rebooting? Do you know what causes that to happen?

It doesn't work for dom0; I didn't read the man page about it, so that's my bad:

cpupool-migrate domain-id cpu-pool
Moves a domain specified by domain-id or domain-name into a cpu-pool. Domain-0 can’t be moved to another cpu-pool.

The cpupool changes are not persistent, so you need to apply them on boot with a systemd service or some other way.
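
Something like this oneshot unit would be one way to do it (just a sketch: the unit name, the After= ordering and the xl path are my assumptions, and the CPU list needs to match your setup):

# /etc/systemd/system/cpupool-setup.service
[Unit]
Description=Remove dom0 CPUs from Pool-0
After=xenconsoled.service

[Service]
Type=oneshot
ExecStart=/usr/sbin/xl cpupool-cpu-remove Pool-0 0,1

[Install]
WantedBy=multi-user.target

Then enable it with systemctl enable cpupool-setup.service.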

Actually, vm.cpumask seems to work if you run vcpu-pin on the domain:
xl vcpu-pin vmname all all
But for some reason the global masks don't apply when a guest is created. Maybe it's a Qubes-specific problem with how it manages the domains.
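
If you want to re-apply the mask to domains that are already running, a loop over xl list should do it (untested sketch; it skips dom0 so its own pinning is left alone):

# re-pin every running domain so vm.cpumask takes effect
xl list | tail -n +2 | awk '{print $1}' | while read -r dom; do
    [ "$dom" = "Domain-0" ] && continue
    xl vcpu-pin "$dom" all all
done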


You can use a hack and pin the CPUs in the template /usr/share/qubes/templates/libvirt/xen.xml:
For example, to restrict guests to run only on CPUs 2-15, change:
<vcpu placement="static">{{ vm.vcpus }}</vcpu>
to
<vcpu placement="static" cpuset="2-15">{{ vm.vcpus }}</vcpu>
But this doesn't apply to the HVMs' device-model stubdoms (the vmname-dm domains), and they will still run on all CPUs.
There's an emulatorpin option in libvirt, but it seems the stubdom emulator doesn't support it.

UPD:
It's better to override the Xen template by making changes in /etc/qubes/templates/libvirt/xen-user.xml, as described here:
Custom libvirt config — core-admin mm_bcdf5771-0-gbcdf5771-dirty documentation

$ cat /etc/qubes/templates/libvirt/xen-user.xml
{% extends 'libvirt/xen.xml' %}
{% block basic %}
        <vcpu placement="static" cpuset="2-15">{{ vm.vcpus }}</vcpu>
        {{ super() }}
{% endblock %}
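
The override is only read when a domain is started, so after restarting a qube you can confirm its affinity with:

xl vcpu-list vmname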

Thank you so much for taking the time to find a solution. As for the HVM device-model stubdoms, I'll use xl vcpu-pin vmname-dm all all with vm.cpumask set, unless there's a reason I shouldn't?

You can automate this by using a libvirt hook:

# cat /etc/libvirt/hooks/libxl
#!/bin/bash
# libvirt passes the guest name and the operation as the first two arguments
guest_name="$1"
libvirt_operation="$2"

if [ "$libvirt_operation" = "started" ]; then
    (
        # detach from libvirt's file descriptors so the hook doesn't block it
        exec 0</dev/null
        exec 1>/dev/null
        exec 2>/dev/null
        # only HVM qubes have a vmname-dm stubdom to pin
        if [ "$(qvm-prefs "$guest_name" virt_mode)" = "hvm" ]; then
            xl vcpu-pin "$guest_name-dm" all all
        fi
    ) & disown
fi
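
One detail worth mentioning (based on how libvirt hooks work in general, not anything Qubes-specific): the hook file needs to be executable, and libvirtd has to be restarted (or dom0 rebooted) before a newly created hook is picked up:

chmod +x /etc/libvirt/hooks/libxl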

I think it’s ok to do this.


Thank you, that's very handy. Do you also know how to assign different cores to DispVMs?

{% extends 'libvirt/xen.xml' %}
{% block basic %}
    {% if (vm.klass == 'DispVM' and vm.name.startswith("disp")) %}
        <vcpu placement='static' cpuset='2-3'>{{ vm.vcpus }}</vcpu>
    {% else %}
        <vcpu placement='static' cpuset='4-15'>{{ vm.vcpus }}</vcpu>
    {% endif %}
    {{ super() }}
{% endblock %}
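
To confirm it's applied, you can start a disposable and check its affinity while it's running, e.g. (assuming the default dispXXXX naming):

xl vcpu-list | grep '^disp'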