Qubes Window Boost: Dynamic, window focus-based CPU pinning

Hey there. Inspired by other threads on CPU pinning in Qubes (linked below), I’ve created a solution that dynamically pins VMs’ vCPUs based on which VM is currently focused.

Check the project out at: GitHub - Atrate/qubes-window-boost: Dynamically pin Qubes domains to vCPUs based on the focused window . Contributions welcome :).

TLDR:

This script is designed for QubesOS. It automatically determines which VM the currently focused window belongs to and pins that VM to all CPU cores (or to the performance cores only; that needs just a one-line change). The default assumption is that all VMs are pinned to E-cores by default on asymmetric CPUs, and the focused VM is given access to all cores for additional snappiness and performance. See CPU Pinning Alder Lake for information on how to set up the prerequisite core pinning required for this script to run well.
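
For illustration only, the core idea can be sketched roughly like this (a simplified sketch, not the actual script; it assumes dom0 tags qube windows with the _QUBES_VMNAME X property and that the focused window id can be read from _NET_ACTIVE_WINDOW):

# Find the qube owning the currently focused window and widen its pinning.
win_id=$(xprop -root _NET_ACTIVE_WINDOW | awk '{print $NF}')
vm_name=$(xprop -id "$win_id" _QUBES_VMNAME 2>/dev/null | cut -d'"' -f2)

# dom0 windows carry no _QUBES_VMNAME property; skip those.
if [[ -n "$vm_name" && "$vm_name" != *"not found"* ]]
then
    # Give the focused qube access to all cores (or a P-core range instead).
    xl vcpu-pin "$vm_name" all all
fi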


This sounds great.

What happens if you switch from one qube to another multiple times per minute? Wouldn’t the overhead of calling xl be large?

I haven’t noticed the overhead mattering. Pinning/unpinning takes only a fraction of a second (at least in userspace), and watching the pins confirms that there doesn’t seem to be a noticeable delay:
watch -d -n 1 "xl vcpu-list | tr -s ' ' | cut -d' ' -f 1,7 | uniq | column -t"

To anyone using the script: I’ve updated the script on GitHub to fix a memory leak. I did not even know it was possible to create a memory leak in bash (this easily) :slight_smile: but I realised it after 1 GB of memory had been eaten.

That’s a neat idea. I like it.

@Talkabout

This actually looks extremely promising and would be a universal solution regardless of the window manager used. Thank you for taking the time to find this! If you want to, I’ll gladly accept a PR that changes the get_focused_domid() function into this more generic version.

If you want to make a PR, please make sure to follow these guidelines:

# Get window focused/window closed events. Format: for window focus changes,
# output is just the name of the VM that got focus. For window close events,
# output is one line of "window_closed" and the VM's name on the next line.
# Could be handled more elegantly, but I don't think many people would name
# their VMs "window_closed". Could be adapted to work with any other WM/DE.
# --------------------------------------------------------------------------

For this, you’d also need to parse events of the form _NET_ACTIVE_WINDOW: window id # 0x0 and substitute them with the window_closed string in the output. A simple sed call, or maybe even built-in bash string substitution, will probably work well.
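
A rough sketch of what I mean, assuming the event stream comes from xprop -spy -root _NET_ACTIVE_WINDOW (the exact output format may differ slightly):

# Replace the "no window focused" event (window id 0x0) with the
# window_closed marker; all other events pass through unchanged.
xprop -spy -root _NET_ACTIVE_WINDOW \
    | sed -u 's/^_NET_ACTIVE_WINDOW.*window id # 0x0$/window_closed/'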

I’d prefer all of this to be in one pipeline for performance reasons, but if that’s not possible, an if clause handling the two cases (window close and window switch) is also fine.

If you don’t have time, I’ll add this functionality this or next week when I have some free time.

I think it makes sense for you to take care of it, as I am not that experienced with bash development and also do not want to spoil your script, since I can clearly see that you are pretty familiar with all the bash concepts :slight_smile: Currently I am using a very stripped-down version completely tailored to my use cases, but I am very eager to reuse your nice script when it is ready.


I’ve integrated your solution into the script. I’ll do some testing, and when I’m satisfied, I’ll release a newer version :slight_smile:

@Talkabout I’ve updated the script to work with XFCE as well (any other X11 DE/WM should work, too). Let me know if it works for you now!

Hi @Atrate,

very nice that you adjusted the script, thanks! Before testing it myself, one question: is there a reason why you are using hard pinning instead of soft pinning? I am running my custom script (which is not even close to yours when it comes to logic) with soft pinning and am not facing any performance issues. Reading through comments, I was under the impression that one should use soft pinning whenever possible, to avoid situations where processors are “busy” and processes get stuck because of hard pinning. Would it make sense to also allow soft pinning in your script?

I think it would make sense to make a version that uses soft pinning (maybe switchable via a variable in the script). I personally use hard pinning because it suits my use case better: I want the qubes that are not pinned to E-cores to have absolute priority over the rest (e.g. for better gaming performance), and I want to reduce laptop fan noise (I do not want background qubes to ever use P-cores, as they generate a lot of noise through heat).

I agree that for most users, soft affinity would probably be safer.

I modified your script slightly to make it work on my system. I made the following changes:

  • use soft pinning instead of hard pinning. This requires modifying the xl vcpu-pin commands, but also the grep command for the current pins (see the sketch after this list)
  • as I want to run it as a systemd service, I had to remove the “tee --output-error=warn /dev/fd/2” part, as this path is not available within systemd units
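
For reference, the difference between the two pinning calls looks roughly like this (illustrative only; the variable name and core range here are placeholders, not the script’s actual ones):

# Hard pinning (original behaviour): the qube may only run on these cores.
xl vcpu-pin "$vm_name" all 0-15

# Soft pinning: leave the hard affinity untouched ("-") and only set the
# soft affinity, so the scheduler prefers these cores but can still fall
# back to others under load.
xl vcpu-pin "$vm_name" all - 0-15

# When reading back the current pins, look at the soft column of
# "xl vcpu-list" (the "Affinity (Hard / Soft)" column) instead of the hard one.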

One thing that would be a good improvement is to allow multiple values for “IGNORE_PIN”. Otherwise the result looks good so far. I will test a little more over the next few days.
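
One possible way to do that, assuming IGNORE_PIN currently holds a single VM name (just a sketch, not a patch against the actual script):

# Make IGNORE_PIN an array and check membership before re-pinning.
IGNORE_PIN=("sys-net" "sys-usb")

should_ignore()
{
    local vm ignored
    vm="$1"
    for ignored in "${IGNORE_PIN[@]}"
    do
        [ "$vm" = "$ignored" ] && return 0
    done
    return 1
}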

Thanks again!

I run the command

watch -d -n 0.5 "xl vcpu-list | tr -s ' ' | cut -d' ' -f 1,7- | uniq | column -t -l 2"

But the output looks like:

Name          Affinity (Hard / Soft)
Domain-0      all / all
sys-net       all / all
sys-usb       all / all
sys-firewall  all / all

Not sure if something is going wrong; the output is just all / all.

This means that your Qubes are not pinned to E-cores. You need to make sure that your VMs use E-cores by default before using this script. One way of doing this is:
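
A dom0 libvirt hook that pins every newly started qube to the E-cores. Below is a minimal sketch of that approach, assuming E-cores are CPUs 8-15 and that the hook script is installed (and made executable) at /etc/libvirt/hooks/libxl; the linked CPU Pinning Alder Lake thread describes the full setup, so adjust the core range and operation name to your system:

#!/bin/bash
# Called by libvirt for every domain lifecycle event; the first two
# arguments are the guest name and the operation.
guest="$1"
operation="$2"

if [ "$operation" = "start" ] && [ "$guest" != "dom0" ]
then
    # Restrict the new qube's vCPUs to the E-cores by default.
    xl vcpu-pin "$guest" all 8-15 &
fi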