This is one of the mitigations for speculative execution CPU bugs. It was done as part of QSB-043 (qsb-043-2018.txt in the QubesOS/qubes-secpack repository on GitHub), but applies to several other later bugs too. Generally, it is not safe to re-enable SMT unless you really understand the consequences and apply additional measures relevant to your use case.
In any case, if you really know what you are doing, you can switch it on by keeping the existing line unchanged, but adding another similar one with smt=on. But note, such a configuration is not security supported - there may be cases where some issue (current or future) is mitigated by smt=off on Qubes, but if you have it changed, you'll be vulnerable. If you do this, you need to keep track of such issues yourself, according to your threat model. That said, it may be perfectly fine in some cases, where isolation between VMs isn't really necessary for a given use case (like a purely testing/demo system).
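To make the "keep the existing line, add another one" step concrete, here is a sketch for dom0. The file path and variable name are the usual ones on a Qubes dom0, but verify against your own /etc/default/grub; the exact grub.cfg output path can differ on EFI systems.

```shell
# Sketch only - check the real contents of /etc/default/grub on your system.
# Leave the existing smt=off line as-is and add a second, similar line with
# smt=on after it; the later assignment wins when the file is sourced:
#
#   GRUB_CMDLINE_XEN_DEFAULT="... smt=off ..."
#   GRUB_CMDLINE_XEN_DEFAULT="... smt=on"
#
# Then regenerate the GRUB config (output path may differ on EFI setups):
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```

After rebooting, you can confirm what Xen actually booted with via `xl info` / `xl dmesg` in dom0.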
I’m willing to take the risk of enabling SMT; I’m not a high-profile target, and I think it’s very unlikely someone is going to target me with a 0-day CPU exploit.
Do you know if the Alder Lake E-cores, which don’t have the ability to hyper-thread, can be considered isolated even with SMT enabled?
With smt=off my CPU affinity is set to 0,2,4,6,8,10,12,14,16-23. Can I enable SMT and use cpuset in libvirt/xen.xml to set the same affinity and achieve the same protection, or does smt=off do more than set the affinity?
I’m currently using a naming scheme for my qubes that allows me to start them on the P- or E-cores. I’m wondering if it would be possible to only disable SMT on a few of the P-cores, and use that area for higher-risk qubes.
There is a lot of confusion in that thread; most (if not all) of the info there applies to plain Linux without Xen. With Xen, it’s Xen that is responsible for scheduling (including using HT/SMT or not).
You may be able to do that with pinning, but see below.
smt=off prevents using sibling threads at all (those threads are simply offline), regardless of what affinity you set.
You can try setting affinity in the libvirt XML for specific qubes, but double-check whether xl vcpu-list reports what you requested - not all libvirt options are implemented, and unimplemented ones are silently ignored.
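A rough sketch of what that could look like, untested on my side. The override path follows the Qubes custom libvirt config mechanism, `<vcpu cpuset="...">` is standard libvirt domain XML, and "personal" is just a placeholder qube name - adapt all of these to your setup and compare against your own xen.xml template.

```shell
# Assumption: per-qube libvirt overrides live under
# /etc/qubes/templates/libvirt/xen/by-name/<qube>.xml (Qubes custom-config docs).
qube=personal   # placeholder name
sudo mkdir -p /etc/qubes/templates/libvirt/xen/by-name
# Your override file goes there; it should extend libvirt/xen.xml and replace
# the <vcpu> element with something like (standard libvirt syntax):
#   <vcpu placement="static" cpuset="0,2,4,6,8,10,12,14,16-23">{{ vm.vcpus }}</vcpu>
# Then restart the qube and verify that Xen actually applied the pinning,
# since unimplemented libvirt options are silently ignored:
xl vcpu-list "$qube"
```

If the CPU Affinity column of `xl vcpu-list` doesn’t match what you put in the XML, the option didn’t take effect.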
I have not figured out yet what “-dm” on a VM’s name means, or where to put such a xen-user.xml file, or what to put in it. An example would do wonders.
What I really would like is for resume to yield something other than a black, unresponsive screen after an apparently-successful suspend. This might just mean getting dom0 to “rmmod amdgpu” before it suspends, but I have not discovered where to configure that. (Anyway, “mem_sleep_default=deep” does not suffice.) Getting dom0 into a state where “rmmod amdgpu” can work might be a whole other movie.
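For the "rmmod amdgpu before suspend" part, systemd runs hook scripts around suspend, which is one place this could be configured. Here is an untested sketch; whether rmmod succeeds at that point (nothing may hold the module, which is the hard part the post alludes to) is a separate question.

```shell
#!/bin/sh
# Untested sketch: a systemd sleep hook for dom0. Install as e.g.
# /usr/lib/systemd/system-sleep/amdgpu-unload.sh and mark it executable.
# systemd invokes these hooks with $1 = pre|post and $2 = suspend|hibernate|...
case "$1" in
    pre)
        # Unload the GPU driver before suspending; this will fail if the
        # module is still in use (e.g. by the display server).
        rmmod amdgpu || true
        ;;
    post)
        # Reload it on resume.
        modprobe amdgpu
        ;;
esac
```

Getting dom0 into a state where the module isn’t busy (so rmmod can actually succeed) is, as noted above, a whole other movie.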