GPU Passthrough and VR Setup for Gaming

I’ve accomplished what is probably the most niche and silliest use case for Qubes possible: VR gaming.
I don’t like dual booting, and I like using Qubes for everything I do, so I set this up.
I don’t know how this affects the security of Qubes, but it works.

To do this you’re going to need multiple GPUs on your machine, and you’re also going to need multiple USB hubs. (Two monitors and a USB KVM switch also help a lot and make this very usable.)

My Specs:

  • Dasharo Coreboot MSI PRO Z690

  • Intel i9-12900K

  • Nvidia RTX 4080

  • Sonnet Allegro Pro USB Hub

  • Valve Index

For my use case, I like to use the integrated GPU on my Intel processor for the Qubes display, since its driver is open source and well supported on Linux.
This is done by plugging my displays into the motherboard outputs and making Xorg use the Intel integrated GPU through the following configuration in dom0:

/etc/X11/xorg.conf.d/20-intel.conf

Section "Device"
  # Bind dom0's X server to the integrated GPU so the Nvidia card stays free for passthrough
  Identifier    "Intel Graphics"
  Driver        "intel"
  Option        "DRI" "3"
EndSection

After doing this, my LightDM and KDE session run on the integrated GPU, leaving my Nvidia GPU free to be used by a gaming HVM.
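If you want to sanity-check that dom0 really ended up on the iGPU (and that the Nvidia card is left alone), something along these lines should work from a dom0 terminal; the exact output will vary by system:

# List the providers Xorg is using; only the Intel iGPU should show up
xrandr --listproviders

# Show VGA/3D-class PCI devices and which kernel driver dom0 bound to each (i915 for the iGPU)
lspci -k | grep -EA3 'VGA|3D'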

Since SteamVR does not work well on Linux, I created a Windows 10 Pro HVM using qvm-create-windows-qube (GitHub: ElliotKillick/qvm-create-windows-qube, “Spin up new Windows qubes quickly, effortlessly and securely on Qubes OS”). As of 11/25/23, to use this tool you need to downgrade qubes-windows-tools using the following command:

sudo qubes-dom0-update qubes-windows-tools-4.1.69

However, this comes with big security risks as discussed in this thread: QWT (support for Windows in Qubes OS) is not available anymore. When will this be solved? - #20 by Timewulf
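If you want to confirm which version actually ended up installed after the downgrade, a quick package query in dom0 does it:

# Show the installed qubes-windows-tools package and its version
rpm -q qubes-windows-tools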

After successfully installing Windows, I attached my secondary USB hub and Nvidia GPU to the VM using the “Devices” section in the Qubes Settings of my Windows VM.
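If you prefer the command line, the same attachment can be done with qvm-pci in dom0. This is the Qubes 4.1-style syntax, and the BDF addresses below are placeholders for my GPU, its HDMI audio function, and the USB controller; substitute whatever qvm-pci lists on your machine:

# List PCI devices and note the BDFs you want to pass through
qvm-pci

# Attach them persistently; permissive/no-strict-reset are commonly needed for GPU
# passthrough but weaken isolation, so only use them for this qube
qvm-pci attach --persistent --option permissive=true --option no-strict-reset=true win10 dom0:01_00.0
qvm-pci attach --persistent --option permissive=true --option no-strict-reset=true win10 dom0:01_00.1
qvm-pci attach --persistent win10 dom0:04_00.0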

I followed this guide to successfully pass through my GPU. However, with my hardware no configuration was needed except for patching xen.xml using Method 2. After patching xen.xml, I was able to boot into Windows.
After booting, my GPU wasn’t detected until I installed the Nvidia drivers from their website.
Once the drivers were installed, I was able to run another DisplayPort cable from my GPU to my monitor and successfully have Windows 10 output through the GPU. Also, since my USB KVM switch was plugged into both my host USB hub and the passed-through USB hub, I was able to use one keyboard and mouse and switch between the two easily with one button.

(Alder Lake P/E Core CPUs) (Optional)
From there, I noticed huge lag spikes and an overall poor experience using the Windows HVM. This was remedied by following this guide and correctly pinning my physical CPU cores to the HVM. I did this by allocating 8 vCPUs to my gaming HVM (the number of P cores my CPU has) and running the following command in dom0:

for i in {0..7}; do xl vcpu-pin win10 $i 0-15; done

This forces the vCPUs to run on P cores (0-15 being the logical CPUs that belong to my 8 physical P cores), giving me much better performance. This command needs to be run every time the gaming HVM is booted unless you create a custom xen-user.xml as outlined in the original guide by renehoj.
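To check that the pinning actually took effect, xl can print the affinity of each vCPU:

# Show each vCPU of the HVM, the pCPU it is currently running on, and its affinity
# (the affinity column should read 0-15 after the loop above)
xl vcpu-list win10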

From here, my experience was very smooth, and I was able to download SteamVR and play VR games easily by plugging my Valve Index into the passed-through USB hub. I get very good performance and am surprised that it runs as well as it does. I have not run into any issues.


I fixed the links.


I hid the GPU from dom0 and used the first method for patching the stubdom, then attached the GPU to my Windows HVM (named gpu_2G_windows, since 3.5G doesn’t work). When I boot into the HVM, its display appears on my main screen rather than on the dGPU. After I install the AMD drivers, each subsequent boot blue screens on the main monitor and gives a black screen on the dGPU; on Linux there is similar behavior.
Am I missing something?

A recent test on an i7-1260P showed that the (disabled) SMT siblings were listed as 1,3,5,7 while the P cores were 0,2,4,6. Could you check whether they also appear like this for you? If you create a qube with only a few vCPUs and try to assign cores 8-15, it will fail if those are disabled SMT siblings.
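One way to check the mapping on your own machine is to dump the topology Xen sees from dom0 (the column layout may differ slightly between Xen versions):

# Print the CPU -> core/socket mapping as Xen sees it; with smt=off the sibling threads are parked
xenpm get-cpu-topology

# Quick sanity check of the counts Xen reports
xl info | grep -E 'nr_cpus|cores_per_socket|threads_per_core'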

If I understand correctly, the purpose of patching the stubdom is so that you can boot your virtual machine with the GPU attached while using a RAM allocation above roughly 1-3 GB (the exact cutoff varies). Otherwise, it will just say something along the lines of “No boot devices found.”

If you used the first method to patch stubdom, and you still can’t boot with enough RAM allocated, then the second method for patching it should most likely work for you. When I initially tried this, the first method didn’t work for me either, and I instead opted for modifying xen.xml as shown in the second method from the guide.
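For what it’s worth, the RAM side of this is just the qube’s normal memory settings; the patch is what lets the larger values actually boot with the GPU attached. Something like the following, where the numbers are only examples and the qube name is the one you mentioned:

# Give the HVM a fixed allocation and turn off memory balancing
# (qubes with PCI devices can't use dynamic memory management anyway).
# Without the stubdom/xen.xml patch, values much above ~3 GB tend to hit
# the "No boot devices found" error.
qvm-prefs gpu_2G_windows memory 8192
qvm-prefs gpu_2G_windows maxmem 0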

Yes, this was the case for me as well, and still is. Your GPU will be attached to the VM, but you will still see a Qubes-style window of the Windows HVM on your desktop. After installing drivers, you will see your GPU attached to the HVM in the devices section of Task Manager and also notice two monitors in the display settings: one being the Qubes window, and the other being your actual monitor. I usually disable the Qubes window and only use my monitor. However, you could actually use the GPU without a second monitor at all, albeit with a lot of screen latency.

I’m not exactly sure why your VM breaks after installing drivers. Does it only break after rebooting? Also, when you say there is similar behavior on Linux, are you installing drivers there too? With AMD, the drivers should already be baked into the kernel by default.
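If you want to check what the Linux qube is doing with the card, listing kernel drivers from inside that qube should show whether amdgpu actually bound to it:

# Inside the Linux HVM with the GPU attached: show each PCI device and the driver in use
# (the dGPU should list amdgpu under "Kernel driver in use")
lspci -k

# Any driver initialization errors usually show up in the kernel log
sudo dmesg | grep -i amdgpu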

Pinning any of my vCPUs to cores 8-15 with SMT disabled doesn’t fail and works fine.
For my CPU, that range covers the P cores 8, 10, 12, and 14.
I’m guessing it’s just using the cores that actually exist in that range, instead of trying to use non-existent threads.

I also experimented with pinning each individual vCPU to its own P core using the following command:

for i in {0..7}; do xl vcpu-pin win10 $i $(expr $i + $i); done

so that vCPU i lands on logical CPU 2i, with the 8 P cores matching the 8 vCPUs allocated to the win10 qube.

However, I didn’t notice any performance difference, leading me to believe it doesn’t matter if you use a range. I’m assuming providing a range automatically does this, based on my experiment rather than on reading the source code (lol).


With Dasharo you can’t disable hyperthreading; it’s enabled by default in the firmware and there is no option to turn it off. You can only use the Xen smt=off option, which means Xen will always see all 16 logical CPUs on the P cores, but it will not use the sibling threads.

I don’t know if pinning 0-15 will allow you to use all 16 logical cores, but pinning 0,2,4,6,8,10,12,14 will set the correct affinity for the P cores with the sibling threads “disabled”.
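In command form, that would be the same loop as earlier in the thread, just with an explicit CPU list instead of a range (win10 being whatever your gaming HVM is called):

# Pin every vCPU to the primary thread of each P core, skipping the hyperthread siblings
for i in {0..7}; do xl vcpu-pin win10 $i 0,2,4,6,8,10,12,14; done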
