Why are updates forcing the GRUB smt=off option?

I have noticed this happening multiple times now.

After a Qubes update, the line GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT smt=off" is added to the GRUB config.

This happens regardless of smt=on being set in the GRUB config, and it disables not only smt but also dom0_max_vcpus and dom0_vcpus_pin.


I was wrong about it changing the pinning; it only disables smt and doesn’t change the pinning.

It changes how the soft affinity is shown, which is why I thought dom0 pinning had been disabled.

This is one of the mitigations for speculative execution CPU bugs. It was done as part of QSB-043 (qubes-secpack/qsb-043-2018.txt in the QubesOS/qubes-secpack repo on GitHub), but it applies to several other later bugs too. Generally, it is not safe to re-enable SMT unless you really understand the consequences and apply additional measures relevant to your use case.

In any case, if you really know what you are doing, you can switch it back on by keeping the existing line unchanged and adding another similar one with smt=on. But note that such a configuration is not security-supported: there may be cases where some issue (current or future) is mitigated by smt=off on Qubes, and if you have changed it, you’ll be vulnerable. If you do this, you need to keep track of such issues yourself, according to your threat model. That said, it may be perfectly fine in some cases where isolation between VMs isn’t really necessary for the given use case (e.g., a purely testing/demo system).
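For illustration, a minimal sketch of what the end of /etc/default/grub in dom0 could then look like (the surrounding options on your system will differ; as I understand it, Xen takes the last occurrence of an option on its command line, which is why keeping the original line works):

# Line (re)added by Qubes updates - leave it in place:
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT smt=off"
# Your own line after it, so smt=on ends up later on the Xen command line:
GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT smt=on"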


I already wrote here that it doesn’t actually work.

What I did to definitively set it off was to add it to my GRUB line:

GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M dom0_mem=max:1536M ucode=scan smt=off gnttab_max_frames=2048

Obviously, you would want to set it on instead and try it.
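If you edit /etc/default/grub like this, remember to regenerate the GRUB config afterwards; a sketch for a standard Qubes dom0, where the output path depends on whether you boot via legacy BIOS or EFI:

# Legacy BIOS boot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# EFI boot (the exact path may differ per install):
sudo grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg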

I’m willing to take the risk of enabling smt. I’m not a high-profile target, and I think it’s very unlikely someone is going to target me with a 0-day CPU exploit.

Do you know if the Alder Lake E-cores, which don’t have the ability to hyper-thread, can be considered isolated even with smt enabled?

With smt=off my CPU affinity is set to 0,2,4,6,8,10,12,14,16-23. Can I enable smt and use cpuset in libvirt/xen.xml to set the same affinity and achieve the same protection, or does smt=off do more than set the affinity?

I’m currently using a naming scheme for my qubes that allows me to start them on the P- or E-cores. I’m wondering if it would be possible to disable smt on only a few of the P-cores and use that area for higher-risk qubes.

There is a lot of confusion in that thread; most (if not all) of the info there applies to plain Linux without Xen. With Xen, it’s Xen that is responsible for scheduling (including using HT/SMT or not).

You may be able to do that with pinning, but see below.

smt=off prevents using sibling threads at all (those threads are simply offline), regardless of what affinity you set.
You can try setting the affinity in the libvirt XML for specific qubes, but double-check that xl vcpu-list reports what you requested - not all options in libvirt are implemented, and it silently ignores the ones that aren’t.
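For example (a sketch; the qube name my-qube is made up):

# In dom0: check the affinity Xen actually applied to a running qube
xl vcpu-list my-qube

# Affinity can also be changed at runtime with xl, bypassing libvirt:
xl vcpu-pin my-qube all 1-8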


Everything looks fine with xl vcpu-list, thanks for the help.

I’m wondering: is there a way to pin dom0 to only ever use core 0, and to restrict all other VMs from using core 0?

Running “xl vcpu-list”, it seems like my “dom0_vcpus_pin” only sets a “soft” affinity.


You can use dom0_max_vcpus=1 dom0_vcpus_pin and make a xen-user.xml that excludes the use of core 0 with cpuset="1-8", or however many cores you have.

It will work for all VMs except the -dm VMs spawned by HVMs; for some reason they are not configured by xen.xml.
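For reference, a sketch of the corresponding Xen options in /etc/default/grub in dom0 (appended to whatever GRUB_CMDLINE_XEN_DEFAULT you already have):

GRUB_CMDLINE_XEN_DEFAULT="$GRUB_CMDLINE_XEN_DEFAULT dom0_max_vcpus=1 dom0_vcpus_pin"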


Thank you.

I have not yet figured out what “-dm” in a VM’s name means, or where to put such a xen-user.xml file, or what to put in it. An example would do wonders.

What I really would like is for resume to yield something other than a black, unresponsive screen after an apparently-successful suspend. This might just mean getting dom0 to “rmmod amdgpu” before it suspends, but I have not discovered where to configure that. (Anyway, “mem_sleep_default=deep” does not suffice.) Getting dom0 into a state where “rmmod amdgpu” can work might be a whole other movie.

Make the file /etc/qubes/templates/libvirt/xen-user.xml

with the following content:

{% extends 'libvirt/xen.xml' %}
{% block basic %}
        <vcpu cpuset="1-8">{{ vm.vcpus }}</vcpu>
        {{ super() }}
{% endblock %}

If you have more than 8 cores, you need to adjust the cpuset.

Try starting a qube before rebooting, to make sure everything is working.

You can use the command xl vcpu-list to check that the affinity is set to 1-8.

Thank you. The notes about testing were especially helpful.

I’m actually working on my first real coding project, and part of it is a libvirt libxl hook script which manages, among other things, setting up cpupools and migrating VMs to them, pinning CPU cores, applying nftables rules, applying and removing Qubes policy rules, and invoking services like qubes.ConnectTCP in qubes, all based on startup/shutdown events and VM name information.
It’s configurable with a Tk GUI (maybe switching to YAD, even though Tk is smaller and included with Qubes).
It kept growing and growing, and now I’m probably just going to end up making an O’s Qubes Toolkit (OQT): my personal single folder to throw onto any fresh Qubes OS installation to set up, personalize, and add tools for all my workflows, including a backup button that spits out a fresh OQT folder with a backup of my personal data, to be restored on the next run of Init-OQT-Fresh.
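As a rough illustration of the hook mechanism (not my actual script): libvirt runs an executable at /etc/libvirt/hooks/libxl, passing the guest name and the operation as arguments, so a minimal dispatcher could look like this. The e- name prefix and the ecores cpupool are made-up examples; the pool would have to be created beforehand (e.g. with xl cpupool-create).

#!/bin/sh
# /etc/libvirt/hooks/libxl
# libvirt calls this as: libxl <guest_name> <operation> <sub-operation> <extra>
guest="$1"
op="$2"

case "$guest" in
  e-*)
    # Hypothetical naming scheme: qubes prefixed "e-" get moved to the E-core pool
    if [ "$op" = "started" ]; then
      xl cpupool-migrate "$guest" ecores || true
    fi
    ;;
esac
exit 0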

Another part of OQT will be an extra final setup menu for Qubes when running Init-OQT-Fresh, which will include enabling/disabling SMT (reconfigurable later by running Settings-OQT).

I’ll share it on GitHub (with any personal data stripped and only example config files, of course).

I don’t know if my GitHub is linked here; I’ll edit this message later. It should be IOSucu, though I haven’t used it for much of anything.