I’ve accomplished what is probably the most niche and silliest use case for Qubes possible: VR gaming.
I don’t like dual booting, and I like using Qubes for everything I do. For that reason, I have set this up.
I don’t know how this affects the security of Qubes, but it works.
To do this, you’re going to need multiple GPUs in your machine, as well as multiple USB hubs. (Two monitors and a USB KVM switch also help a lot and make this much more usable.)
My Specs:
- Dasharo Coreboot MSI PRO Z690
- Intel i9-12900K
- Nvidia RTX 4080
- Sonnet Allegro Pro USB Hub
- Valve Index
For my use case, I like to use the integrated GPU on my Intel processor for the Qubes display, since its driver is open source and better supported on Linux.
This is done by plugging my displays into the motherboard's video outputs and making Xorg use the Intel integrated GPU through the following configuration in dom0:
/etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
Identifier "Intel Graphics"
Driver "Intel"
Option "DRI" "3"
EndSection
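If you want to sanity-check that the session really is running on the integrated GPU, you can look at which drivers are bound and what the running X server actually loaded from a dom0 terminal (the log path may differ depending on how your display manager starts X):

# Show each GPU and the kernel driver currently bound to it
lspci -k | grep -EA3 'VGA|3D'

# See which driver the running X server picked up
grep -iE 'intel|modeset|nvidia' /var/log/Xorg.0.log | head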
After doing this, my LightDM and KDE session start on the integrated GPU, leaving my Nvidia GPU free for a gaming HVM.
Since SteamVR does not work well on Linux, I created a Windows 10 Pro HVM using ElliotKillick's qvm-create-windows-qube (GitHub: ElliotKillick/qvm-create-windows-qube). As of 11/25/23, to use this tool you need to downgrade qubes-windows-tools using the following command in dom0:
sudo qubes-dom0-update qubes-windows-tools-4.1.69
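If you want to confirm which version actually ended up installed, a quick check in dom0 (purely a sanity check, not required):

# Should report the downgraded 4.1.69 build
rpm -q qubes-windows-tools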
However, this comes with significant security risks, as discussed in this thread: "QWT (support for Windows in Qubes OS) is not available anymore. When will this be solved?" (post #20 by Timewulf).
After the installation completed, I attached my secondary USB hub and my Nvidia GPU to the VM using the "Devices" tab in the Qubes Settings of my Windows VM.
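For reference, the same attachment can be done from a dom0 terminal with qvm-pci instead of the GUI. The PCI addresses below are made-up placeholders; substitute whatever qvm-pci lists for your GPU and USB controller on your own board:

# List PCI devices known to dom0 and note the addresses of the GPU and USB controller
qvm-pci

# Persistently attach the Nvidia GPU (placeholder address 01_00.0) and the
# secondary USB controller (placeholder address 04_00.0) to the Windows qube.
# GPU passthrough often needs the permissive option (and sometimes no-strict-reset).
qvm-pci attach --persistent --option permissive=true win10 dom0:01_00.0
qvm-pci attach --persistent win10 dom0:04_00.0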
I followed this guide to pass through my GPU. With my hardware, however, no configuration was needed except for patching xen.xml using Method 2. After patching xen.xml, I was able to boot into Windows.
After booting, my GPU wasn't detected until I installed the Nvidia drivers from Nvidia's website.
Once the drivers were installed, I was able to run another DisplayPort cable from my GPU to my monitor and successfully have Windows 10 output through the passed-through GPU. Also, since my USB KVM switch was plugged into both my host USB hub and the passed-through USB hub, I was able to use one keyboard and mouse and switch between the two easily with one button.
(Optional: CPU pinning for Alder Lake P/E-core CPUs)
From there, I noticed huge lag spikes and an overall poor experience using the Windows HVM. This was remedied by following this guide and correctly pinning my HVM's vCPUs to physical CPU cores. I did this by allocating 8 vCPUs to my gaming HVM (the number of P cores my CPU has) and running the following command in dom0:
for i in {0..7}; do xl vcpu-pin win10 $i 0-15; done
This forces the vCPUs to run on the P cores (logical CPUs 0-15 being the 16 hyper-threads of my 8 physical P cores, with the E cores numbered after them), giving me much better performance. This command needs to be run every time the gaming HVM is booted, unless you create a custom xen-user.xml as outlined in the original guide by renehoj.
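To verify the pinning actually took effect, you can list the vCPU affinities in dom0; the hard affinity column should show 0-15 for every vCPU of the HVM:

# Show where each vCPU of the win10 HVM is allowed to run
xl vcpu-list win10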
From here, my experience was very smooth, and I was able to download SteamVR and play VR games easily by plugging my Valve Index into the passed-through USB hub. I get very good performance and am surprised that it runs as well as it does. I have not run into any issues.