CPU Pinning Alder Lake

it’s only the cores dom0 is using that have C2; all other cores only have C0 and C1.

FWIW I don’t see that behavior: whatever dom0’s pinning (or lack thereof), xenpm reports states C0, C1, and C2 for P-cores and C0 and C1 for E-cores.

$ xl vcpu-list Domain-0

How many cores does that command say dom0 is using?

I have dom0_max_vcpus=6 as a boot option, so I have 6 cores.

$ xl vcpu-list Domain-0

Name                                ID  VCPU   CPU State   Time(s) Affinity (Hard / Soft)
Domain-0                             0     0    6   -b-     358.5  6 / 0-13
Domain-0                             0     1    7   -b-     297.0  7 / 0-13
Domain-0                             0     2    8   r--     306.1  8 / 0-13
Domain-0                             0     3    9   -b-     313.1  9 / 0-13
Domain-0                             0     4   10   -b-     290.9  10 / 0-13
Domain-0                             0     5   11   -b-     281.0  11 / 0-13
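
For reference, a hard-affinity map like the 6-11 one above can be set with xl vcpu-pin; a minimal sketch of one way to do it from dom0 (the exact commands used weren't shown in the post):

$ for i in 0 1 2 3 4 5; do sudo xl vcpu-pin Domain-0 $i $((i+6)); done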

P-core no. 1:

$ xenpm get-cpuidle-states 0

All C-states allowed

cpu id               : 0
total C-states       : 3
[...]

E-core no. 1:

$ xenpm get-cpuidle-states 6

All C-states allowed

cpu id               : 6
total C-states       : 2
[...]

Using the Xen option cpufreq=xen:hwp changes the scaling driver to hwp-cpufreq.

I don’t know if this is better than using the ACPI driver, but the hwp driver seems to be aware of the big and small cores.
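
For reference, a sketch of how the option is set on a standard Qubes install (variable name assumed; not spelled out in the original post):

$ sudo nano /etc/default/grub    # append cpufreq=xen:hwp to GRUB_CMDLINE_XEN_DEFAULT
$ sudo grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg

After a reboot, the per-core parameters below (presumably from xenpm get-cpufreq-para) compare a P-core (cpu 0) against an E-core (cpu 16):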

cpu id               : 0
affected_cpus        : 0
cpuinfo frequency    : base [3200000] turbo [5200000]
scaling_driver       : hwp-cpufreq
scaling_avail_gov    : hwp-internal
current_governor     : hwp-internal
hwp variables        :
  hardware limits    : lowest [1] most_efficient [11]
                     : guaranteed [41] highest [65]
  configured limits  : min [1] max [255] energy_perf [128]
                     : activity_window [0 hardware selected]
                     : desired [0 hw autonomous]
turbo mode           : enabled
cpu id               : 16
affected_cpus        : 16
cpuinfo frequency    : base [3200000] turbo [3900000]
scaling_driver       : hwp-cpufreq
scaling_avail_gov    : hwp-internal
current_governor     : hwp-internal
hwp variables        :
  hardware limits    : lowest [1] most_efficient [15]
                     : guaranteed [24] highest [39]
  configured limits  : min [1] max [255] energy_perf [128]
                     : activity_window [0 hardware selected]
                     : desired [0 hw autonomous]
turbo mode           : enabled

Also, I think moving to hwp prevented my CPU from being constantly underclocked – or at least being reported as such.

Thank you for this excellent guide.

Can you please edit your post to add that xen-user.xml goes in /usr/share/qubes/templates/libvirt/? (That makes it easier to follow.)

Here is what I use with a mobile processor (2 P-cores / 8 E-cores); I only need disposables to be able to play back videos, so they get every thread (0-11) while all other qubes are confined to the E-cores (4-11). :)
{% extends 'libvirt/xen.xml' %}
{% block basic %}
  <name>{{ vm.name }}</name>
  <uuid>{{ vm.uuid }}</uuid>
  {% if ((vm.virt_mode == 'hvm' and vm.devices['pci'].persistent() | list) or vm.maxmem == 0) -%}
    <memory unit="MiB">{{ vm.memory }}</memory>
  {% else -%}
    <memory unit="MiB">{{ vm.maxmem }}</memory>
  {% endif -%}
  <currentMemory unit="MiB">{{ vm.memory }}</currentMemory>
  {% if vm.name.startswith('disp') -%}
    <vcpu cpuset="0-11">{{ vm.vcpus }}</vcpu>
  {% else -%}
    <vcpu cpuset="4-11">{{ vm.vcpus }}</vcpu>
  {% endif -%}
{% endblock %}
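
After a qube is restarted with the template in place, the effective cpuset can be verified from dom0 (a quick check I'd suggest; not from the original post):

$ xl vcpu-list <qube-name>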

First observation: my battery life has increased from ~3 hours to probably ~4 hours.

There is a lot here to absorb, but maybe someone can tell me where I am in following this. I’m starting at the beginning: I edited GRUB in the dom0 terminal, and now I am trying to move dom0 to the E-cores. When I did that, the message I got back was “vcpu hard affinity map is empty.” When I run xl vcpu-list after a reboot, Domain-0 still shows a hard/soft affinity of 0, 2, 4, 6, 8-15 for both. Shouldn’t it be the even numbers 16-23 for the E-cores instead? And can someone explain, at a higher level, what pinning adds in terms of security or function? Dom0 doesn’t require performance, but a qube running heavy applications would? Are there security benefits as well?

You have 12 cores (16 threads): 0-7 are the P-core threads, 8-15 are the E-cores; only 0, 2, 4, 6 are active because Qubes OS disables SMT by default.

Thank you. Yes, the result of cat /proc/cpuinfo agrees. You have 16; I have 12 cores (12 threads per core according to lscpu). OK, so I need to pin differently: vcpu-pin Domain-0 8-15 [why isn’t it 6/8-12 instead, with 0-5 being the P-cores?] should be the right way. What should I look for to verify that I have dom0 on the E-cores?

You could look up your processor’s model on Intel ARK and see how many cores it has.

lscpu tells me I have an i5-1235U, which brings me to Intel ARK.

There it states I have 2 performance cores and 8 E-cores, which makes cores 0-1 the P-cores and 2-9 the E-cores (keep in mind cores start counting at 0 for the first core).


I, for one, have SMT enabled because my threat model allows me to be lax with some mitigations, so my P-cores are numbered 0-3, including the virtual thread cores 1 & 3, and the regular E-cores are 4-11…

I hope I didn’t mix it up myself, and that it helps you figure out how many cores of which type you have.
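
One way to double-check the numbering from dom0 (my suggestion, not from the original posts) is Xen's own topology dump; with SMT on, the two threads of each P-core share a core number, while each E-core has its own:

$ xl info -n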

ARK states 12 cores (4P + 8E), 16 threads,
so I should input Domain-0 0 4-12 for dom0 assigned to the E-cores, right?

Command not implemented, so I have to undo the xl pinning for 20-23 so that it will apply 4-12, or not (requires sudo su).

…And the primary purpose of all this is to have the performance cores available and affined for qubes that have an intensive demand for processing power. Then I go to the terminal in that qube and assign it to pools (next step).

If I do sudo nano /etc/default/grub in the dom0 terminal and add dom0_max_vcpus=6 to GRUB_CMDLINE_XEN, then there will be 2 E-cores left over for qubes not requiring the 4 P-cores. So I could then assign, by CPU pool, the 2 E-cores to the sys-* service qubes and the 4 P-cores to the work and anon-whonix qubes. This would be a good design, right? The per-core memory caches save time and power with this optimization and tuning, correct?


But on reboot the Affinity is not saved…

Trying the cpupool method…


On the first command from the first post: multiple errors with all other qubes shut down. Answers?

If E-cores come first in the series and then P-cores,
/usr/sbin/xl cpupool-cpu-add pcores 8,9,10,11
… ecores 2,3,4,5,6,7
is the correct command for the i5-1240P, right?

When you edit GRUB you need to run grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg; that is why the changes have no effect.
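
Putting the steps together, a minimal sketch (assuming a standard Qubes EFI install, where the variable in /etc/default/grub is GRUB_CMDLINE_XEN_DEFAULT):

$ sudo nano /etc/default/grub    # e.g. add dom0_max_vcpus=6 to GRUB_CMDLINE_XEN_DEFAULT
$ sudo grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg
$ sudo reboot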

The cpupool error is because you are trying to remove CPUs that aren’t in the pool; I’m guessing you already ran the command and it failed the second time.

The P-cores are 0-7 and the E-cores are 8-15; with smt=off, only 0, 2, 4, 6 of the P-core threads are active. You can’t change the order of the cores.
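
For the i5-1240P that implies something like the following sketch (pool names as used earlier in the thread; double-check the cpupool-create syntax against xl(1) for your Xen version):

$ sudo xl cpupool-cpu-remove Pool-0 8,9,10,11,12,13,14,15
$ sudo xl cpupool-create name=\"ecores\" sched=\"credit2\"
$ sudo xl cpupool-cpu-add ecores 8,9,10,11,12,13,14,15

The active P-core threads (0, 2, 4, 6 with smt=off) can stay in Pool-0 or be moved to a dedicated pcores pool the same way.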


Thanks for the clarification.

How does 12 = “0(1)-7” + “8-15”? That’s 12 = 8 + 8.
…Or are the 16 “threads”, P or E, spread across the 12 physical CPU cores, and it is the threads that are being pinned to the P or E set of the 12 physical cores?

How would you make that part of the cpupool add command specific to the i5-1240P?

I changed the grub.cfg and now qubes other than Dom0 don’t start.

But when I switched back from EFI to legacy GRUB2, all the sys qubes boot again and everything works. GRUB2 boot is also faster. What does this tell us? That EFI boot can’t work with legacy components?

That means adding that line, starting with an underscore, not dom0, to GRUB_CMDLINE in grub.cfg, between ucode and smt? Trying with or without dom0, qubes do not start.

Right now, on 4.2rc2 with Xen 4.17, I struggle a little to get xen-user.xml to do its job. Pinning Domain-0 manually (and via a login shell script) works, but xen-user.xml doesn’t work for me.

Does anybody already have a working config for Qubes 4.2? Did anything change, or did I do something wrong?

I’m using the Admin API to move qubes between CPU pools, and that works the same way in 4.2.

One thing I have noticed is that you can’t move VMs between pools that use the credit and credit2 schedulers; it crashes Xen and reboots the system. It used to work in 4.1. The documentation says pools should be able to use different schedulers, so I’m not sure why it crashes the system.
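
For reference, the plain xl way of moving a running qube between pools (not the Admin API mechanism described above) is cpupool-migrate, e.g.:

$ sudo xl cpupool-migrate <qube-name> pcores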

Thanks for the hint, I now use Admin events.

@renehoj, do you mind if I put the information about the service install next to that part of your initial post? I recently learned that the community guides category is editable collaboratively. I don’t want to overstep, but I would like to update your post. 🙂

Sure, just edit it, that is why the posts are in wiki mode.
