Create a Gaming HVM

Hardware

To have an ‘HVM’ for gaming, you must have

  • A dedicated GPU. By dedicated, I mean a secondary GPU, not the GPU used to display dom0. As of 2023, NVIDIA and AMD GPUs work. Not tested with Intel GPUs. External GPUs using Thunderbolt work (Create a Gaming HVM - #8 by solene).

  • A screen available for the gaming ‘HVM’. (It can be a dedicated physical monitor, or a second cable connected to your existing monitor so you can switch between input sources.)

  • Dedicated gaming mouse and keyboard.

  • A lot of patience. GPU passthrough is not trivial, and you will need to spend time debugging.

IOMMU Group

You need to check which devices are in the same IOMMU group as the GPU you want to pass through. You can’t see your IOMMU groups while you are using Xen (the information is hidden from dom0). So boot a live Linux distribution, enable IOMMU in the GRUB options (iommu=1 iommu_amd=on), and then display the folder structure of /sys/kernel/iommu_groups:

#!/bin/bash
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU Group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done
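The output should look something like this (illustrative values only; your group numbers, bus IDs, and device names will differ). The goal is to confirm that the GPU functions (VGA and its audio device) share a group that does not also contain unrelated devices:

IOMMU Group 22:
	0a:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 [1002:731f]
	0a:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Navi 10 HDMI Audio [1002:ab38]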

GRUB modification

You must hide your secondary GPU from dom0. To do that, you have to modify the GRUB configuration. In a dom0 terminal, type:

qvm-pci

Then find the device IDs for your secondary GPU. In my case, they are dom0:0a_00.0 and dom0:0a_00.1. Edit /etc/default/grub and add the PCI hiding:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0,0a:00.1 "

Then regenerate the GRUB configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg

If you are using UEFI, the file to override with grub2-mkconfig is /boot/efi/EFI/qubes/grub.cfg.
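For example, on a UEFI system (assuming the default Qubes EFI path mentioned above):

grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg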

Note: if after this step you get stuck at the Qubes OS startup when you reboot, it means dom0 is trying to use the GPU you just hid. Check your BIOS options. Also check the cables: the BIOS may prioritize GPUs based on the type of cable. For example, DisplayPort can be favoured over HDMI.

Once you have rebooted, in dom0, type sudo lspci -vvn. You should see “Kernel driver in use: pciback” for the GPU you just hid.
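For example, to check only the hidden GPU functions (using the example IDs from above; substitute your own):

sudo lspci -vvn -s 0a:00.0 | grep "Kernel driver in use"
sudo lspci -vvn -s 0a:00.1 | grep "Kernel driver in use"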

Configuring the parameter “max-ram-below-4g”

Important note: below, we set the “max-ram-below-4g” parameter to “3.5G”.
For some GPUs this value needs to be “2G” (discovered here: Quick howto: GPU passthrough with lots of RAM). It is not currently well understood why the value needs to be exactly “2G” or exactly “3.5G”, or perhaps some other value for GPUs/configurations we have not seen yet. More investigation is required to understand what is going on with this parameter.

Method 1: Patching stubdom-linux-rootfs.gz

github.com/QubesOS/qubes-issues/issues/4321

Copy-paste of the comment:

This is caused by the default TOLUD (Top of Low Usable DRAM) of 3.75G provided by qemu not being large enough to accommodate the larger BARs that a graphics card typically has. The code to pass a custom max-ram-below-4g value to the qemu command line does exist in the libxl_dm.c file of xen, but there is no functionality in libvirt to add this parameter. It is possible to manually add this parameter to the qemu commandline by doing the following in a dom0 terminal. (I modified the code so it works with 4.1 and remove one of the original limitations by restricting the modification to VM with a name starting with “gpu_”)

mkdir stubroot
cp /usr/libexec/xen/boot/qemu-stubdom-linux-rootfs stubroot/qemu-stubdom-linux-rootfs.gz
cd stubroot
gunzip qemu-stubdom-linux-rootfs.gz
cpio -i -d -H newc --no-absolute-filenames < qemu-stubdom-linux-rootfs
rm qemu-stubdom-linux-rootfs
nano init

Before the line

# $dm_args and $kernel are separated with \n to allow for spaces in arguments

add:

# Patch 3.5 GB limit
vm_name=$(xenstore-read "/local/domain/$domid/name")
# Apply the patch only if the qube name starts with "gpu_"
if [ $(echo "$vm_name" | grep -iEc '^gpu_' ) -eq 1 ]; then
 dm_args=$(echo "$dm_args" | sed -n '1h;2,$H;${g;s/\(-machine\nxenfv\)/\1,max-ram-below-4g=3.5G/g;p}')
fi

Then execute:

find . -print0 | cpio --null -ov \
--format=newc | gzip -9 > ../qemu-stubdom-linux-rootfs
sudo mv ../qemu-stubdom-linux-rootfs /usr/libexec/xen/boot/

Note that this will apply the change only to HVMs with a name starting with “gpu_”. So you need to name your gaming HVM “gpu_SOMETHING”.

Method 2: Patching xen.xml

Instead of patching stubdom-linux-rootfs, you could inject the option directly into the configuration template. It is the file “templates/libvirt/xen.xml” in the “qubes-core-admin” repository. In dom0, this file is at “/usr/share/qubes/templates/libvirt/xen.xml”.

See below the part that has been modified to add the needed “max-ram-below-4g” option.

<!-- server_ip is the address of stubdomain. It hosts it's own DNS server. -->
<emulator
 {% if vm.features.check_with_template('linux-stubdom', True) %}
 type="stubdom-linux"
 {% else %}
 type="stubdom"
 {% endif %}
 {% if vm.netvm %}
 {% if vm.features.check_with_template('linux-stubdom', True) %}
 cmdline="-qubes-net:client_ip={{ vm.ip -}}
 ,dns_0={{ vm.dns[0] -}}
 ,dns_1={{ vm.dns[1] -}}
 ,gw={{ vm.netvm.gateway -}}
 ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
 {% else %}
 cmdline="-net lwip,client_ip={{ vm.ip -}}
 ,server_ip={{ vm.dns[1] -}}
 ,dns={{ vm.dns[0] -}}
 ,gw={{ vm.netvm.gateway -}}
 ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
 {% endif %}
 {% else %}
 cmdline="-machine xenfv,max-ram-below-4g=3.5G"
 {% endif %}

I haven’t personally tested this alternative, but it should work, and some users reported that it works. This method is less tested than patching stubdom-linux-rootfs, so I recommend patching stubdom-linux-rootfs.
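If you would rather not edit the global template, a per-qube override may also work (a sketch, relying on the by-name override directory listed in the debugging notes later in this guide; the qube name “gpu_gaming” is illustrative):

sudo mkdir -p /etc/qubes/templates/libvirt/xen/by-name/
sudo cp /usr/share/qubes/templates/libvirt/xen.xml /etc/qubes/templates/libvirt/xen/by-name/gpu_gaming.xml
# then add max-ram-below-4g to the copied file as shown above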

Preparing the guest

As of 2023, I recommend using a Linux guest instead of a Windows guest.

Windows

Install a Windows VM; you can use qvm-create-windows-qube.

Linux

Create a new standalone Qube based on the template of your choice.

You must run the kernel provided by the guest distribution, because we will use some non-default kernel modules for the GPU driver. Just follow the doc: managing-vm-kernel.
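A minimal sketch of these two steps from dom0 (the qube name, label, and template are illustrative; the “gpu_” prefix only matters if you used Method 1 above):

qvm-create --class StandaloneVM --template fedora-38 --label red gpu_gaming
qvm-prefs gpu_gaming virt_mode hvm
# empty kernel = boot the kernel installed inside the qube
qvm-prefs gpu_gaming kernel ''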

Install the GPU drivers you need.

Pass the GPU

In the Qubes settings for the HVM, go to the ‘devices’ tab and pass the IDs corresponding to your GPU.

You may or may not need to add the option “permissive” or “no-strict-reset”.

Some words about the security implications of those parameters.

qvm-pci attach gpu_gaming_archlinux dom0:0a_00.0 -o permissive=True -o no-strict-reset=True
qvm-pci attach gpu_gaming_archlinux dom0:0a_00.1 -o permissive=True -o no-strict-reset=True

Starting the guest

This is where you will have a lot of issues to debug.

For Linux guests, run ‘sudo dmesg’ to see the kernel logs, which will tell you if there is an issue with your GPU driver. For some hardware, the MSI calls won’t work. You can work around that using, for example, pci=nomsi or NVreg_EnableMSI=0 or something else. Check your driver options. Check if alternative drivers exist (amdgpu, nvidia, nouveau, nvidia-open, drivers from the official website, …). Check multiple kernel versions.
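For example, disabling MSI for the proprietary NVIDIA driver could look like this inside the guest (a sketch; check the exact option name documented by your driver version, and rebuild the initramfs if your distribution requires it):

# /etc/modprobe.d/nvidia-msi.conf
options nvidia NVreg_EnableMSI=0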

Some links that could help you to debug the issues you will have

For Windows guests you will probably have the same issues, but they will be harder to debug. I recommend using the drivers from Windows Update instead of the official drivers from the manufacturer’s website.

Some things that may be useful for debugging:

  • Virsh (start, define, …)

  • /etc/libvirt/libxl/

  • xl

  • /etc/qubes/templates/libvirt/xen/by-name/

  • /usr/lib/xen/boot/

  • virsh -c xen:/// domxml-to-native xen-xm /etc/libvirt/libxl/…

Issues with the drivers could be related to ‘qubes-vmm-xen-stubdom-linux’, ‘qubes-vmm-xen’, and the Linux kernel you will be using.
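For instance, dumping the libvirt definition that Qubes generated for the qube and looking at the Xen hypervisor log can help confirm whether your options were applied (a sketch; the qube name is illustrative):

sudo virsh -c xen:/// dumpxml gpu_gaming
sudo xl dmesg | tail -n 50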

Linux guest — Integration with QubesOS

Xorg

Now Xorg. From XKCD:

[XKCD comic]

Things you need to install:

  • The Xorg input driver to support your mouse and keyboard

  • Your favorite window manager

In my case, it is:

archlinux version:

pacman -S xorg i3

debian version:

apt install xserver-xorg-input-kbd xserver-xorg-input-libinput xserver-xorg-input-mouse i3

Then create an Xorg configuration file for your GPU and screen. My file is named ‘AOC.conf’:

Section "ServerLayout"
Identifier "Gaming"
Screen 0 "AMD AOC" Absolute 0 0
EndSection

Section "Device"
Identifier  "AMD"

# name of the driver to use. Can be "amdgpu", "nvidia", or something else
Driver      "amdgpu"

# The BusID value will change after each qube reboot. 
BusID       "PCI:0:8:0"
EndSection

Section "Monitor"
Identifier "AOC"
VertRefresh 60
# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN. 
Modeline "1920x1080" 172.80 1920 2040 2248 2576 1080 1081 1084 1118
EndSection

Section "Screen"
Identifier "AMD AOC"
Device     "AMD"
Monitor    "AOC"
EndSection

We can’t know the correct BusID before the qube is started, and it changes after each reboot. So let’s write a script, named “xorgX1.sh”, that updates this configuration file with the correct value and then starts a binary on Xorg X screen n°1.

#!/bin/bash

binary=${1:?binary required}

# Find the correct BusID of the AMD GPU, then set it in the Xorg configuration file
pci=$(lspci | grep "VGA" | grep "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i "s/PCI:0:[0-9]:0/PCI:0:$pci:0/g" /home/user/AOC.conf

# Start the Xorg server for the X screen number 1.
# The X screen n°0 is already used for QubesOS integration
sudo startx "$binary" -- :1 -config /home/user/AOC.conf
Deprecated: old way of doing it

#!/bin/bash

binary=${1:?binary required}

# Find the correct BusID of the AMD GPU, then set it in the Xorg configuration file
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i "s/PCI:0:[0-9]:0/PCI:0:$pci:0/g" /home/user/AOC.conf

# Pulseaudio setup
sudo killall pulseaudio
sudo sed -i "s/load-module module-vchan-sink./load-module module-vchan-sink domid=$(qubesdb-read -w /qubes-audio-domain-xid)/" /etc/pulse/qubes-default.pa
sudo rm /home/user/.pulse/client.conf
start-pulseaudio-with-vchan
sleep 5 && sudo chmod -R 777 /root/ &
sleep 5 && sudo cp /root/.pulse/client.conf /home/user/.pulse/client.conf && sudo chown -R user:user /home/user/.pulse/client.conf &

# Start the Xorg server for the X screen number 1.
# The X screen n°0 is already used for QubesOS integration
sudo startx "$binary" -- :1 -config /home/user/AOC.conf

Audio

  • Delete any packages related to pulseaudio and install pipewire:

sudo pacman -Rdd qubes-vm-pulseaudio pulseaudio
sudo pacman -S pipewire-{jack,alsa,pulse} pipewire-qubes

  • Enable the “pipewire” service for this qube, either using Qubes Manager or from dom0 (see the sketch at the end of this subsection)

  • Create a script to launch your favorite window manager and force it to use a specific pulse server.
    Example “i3.sh”:
#!/bin/bash
setxkbmap fr
/bin/sudo -u user PULSE_SERVER=unix:/run/user/1000/pulse/native /usr/bin/i3

And launch it:

sudo ./xorgX1.sh ./i3.sh
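If you prefer enabling the pipewire service from dom0 rather than through Qubes Manager, something like this should be equivalent (a sketch; the qube name is illustrative):

qvm-features gpu_gaming service.pipewire 1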
Deprecated: old way of doing it

So you need to configure pulseaudio for Xorg multiseat. The Arch Linux documentation explains that very well: Xorg multiseat. Use the option without the system-mode daemon and adapt it to the qube: add the following line to /etc/pulse/qubes-default.pa

load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1

Then add this config for root:

mkdir /root/.pulse
echo "default-server = 127.0.0.1" > /root/.pulse/client.conf

The sound was buggy/laggy on my computer, so I tried to find a workaround by playing with pulseaudio settings. It was more or less random trial and error, so I can’t really explain it. In /etc/pulse/daemon.conf add the following lines:

default-fragments = 60
default-fragment-size-msec = 1
high-priority = no
realtime-scheduling = no
nice-level = 18

In /etc/pulse/qubes-default.pa change

load-module module-udev-detect

to

load-module module-udev-detect tsched=0

You can launch your favorite window manager like this:

sudo ./xorgX1.sh /usr/bin/i3

References


This document was migrated from the qubes-community project
  • Page archive
  • First commit: 18 Jan 2023. Last commit: 18 Jan 2023.
  • Applicable Qubes OS releases based on commit dates and supported releases: 4.1
  • Original author(s) (GitHub usernames): neowutran
  • Original author(s) (forum usernames): @neowutran
  • Document license: CC BY 4.0

This guide seems to draw heavily on this article by @neowutran; should some credits be added? (Asking you directly, @neowutran, since you’re in the forum.)

https://neowutran.ovh/qubes/articles/gaming_linux_hvm.html

Unless I’ve got this backwards and it’s the other way around! @taradiddles, if I understand the migration correctly, you created the guide in the qubes-community project on GitHub?

I published it both on my website and in the qubes-community project on GitHub; you can check in the footer of the original post that I am the author:

"

  • Original author(s) (GitHub usernames): neowutran
  • Original author(s) (forum usernames): @neowutran
    "

everything is ok :slight_smile:


I didn’t realize there was such useful info in the details of the migration! :bulb::bulb::bulb:


Pretty cool guide! Did someone ever try with a thunderbolt external GPU?

@neowutran I got this working more easily with pipewire. And since there was recent work by @Demi in adding support, I think it’s a good bet.

(Disclaimer: this was on Manjaro, not pure Arch.)
Here are the instructions:

Getting audio to work

In my case I was having issues with audio. It was stuttering, so I removed the qubes-vm-pulseaudio package and installed pipewire as a replacement, and it seemed to do the job:

sudo pacman -Rdd qubes-vm-pulseaudio pulseaudio
sudo pacman -S pipewire-{jack,alsa,pulse}

After restarting the machine, the audio was working


I just tried it, but it doesn’t seem to work with multiseat Xorg. Can you confirm that it works for you even with multiseat Xorg?
(Audio works well on my ":0" Xorg session, but there is no sound on my ":1" Xorg session. I removed all the configuration/modifications related to pulseaudio.)

Update:
I got it working on my side with pipewire. I needed to use this custom script instead of launching i3 directly (as in my example), to force it to connect to the audio daemon:

i3.sh 
#!/bin/bash
/bin/sudo -u user PULSE_SERVER=unix:/run/user/1000/pulse/native /usr/bin/i3

I just got an HVM to display GNOME on my external GPU :+1:

I did that (I’ll try again from a cleaner environment to get a more reproducible guide)

  • hide the PCI devices (there is no IOMMU issue with the external GPU; with the AMD card it’s super clear that the AMD VGA and AMD audio devices are part of the same group)
  • pass the devices to an HVM
  • use the qube provided kernel
  • use X :1 -configure to generate an xorg.conf file, then remove everything extra in it; keeping just the screen/monitor/device sections related to the VGA card itself was good enough. Check that you are using nvidia for NVIDIA cards and amdgpu for AMD cards (see the sketch after this list)
  • move that file in /etc/X11/xorg.conf.d/99-xorg.conf, and start your display manager, e.g. systemctl start gdm.service
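A rough sketch of those last two steps inside the HVM (assuming X -configure writes xorg.conf.new in root’s home directory and that GDM is your display manager):

# as root inside the HVM
X :1 -configure                      # generates /root/xorg.conf.new
# trim the generated file down to the Device/Monitor/Screen sections of the passed-through GPU
cp /root/xorg.conf.new /etc/X11/xorg.conf.d/99-xorg.conf
systemctl start gdm.service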

Tested on OpenSUSE tumbleweed with an AMD RX 480 and NVIDIA 1060 :ok_hand:
Tested on OpenSUSE tumbleweed with an NVIDIA 1060 but in discrete mode (display in dom0 but uses the GPU) :ok_hand:

Tested on Debian 12 using the template with an NVIDIA 1060 :ok_hand:

I used a Razer Core X external GPU case connected in thunderbolt 3 on a Lenovo T470, unfortunately it’s almost useless in this case because the CPU is way too slow to do anything meaningful with a GPU in Qubes OS :smiley:


I’m sure using QubesOS doesn’t improve CPU performance, and mobile CPUs are not great for gaming to start with, but don’t you also run into some serious PCIe bandwidth issues?

Can you get more than 4 PCIe lanes? I also think TB3 and TB4 only use PCIe gen 3, and it’s not the fast CPU lanes but the slower chipset lanes.

Not really, on Linux I can emulate Switch games using Yuzu or play games like Death Stranding or Control (CPU bound). The laptop has an i5-7300U, it would take a while before being limited by thunderbolt bandwidth. Maybe if you use a 4k screen this would saturate faster?

If you use the eGPU as a discrete GPU, the bandwidth is limiting because the data has to go both ways; if you use it with an external display, you have more bandwidth because the rendering doesn’t need to go back through Thunderbolt.

Yes, it works if you just use it for the external display, but you could do the same with a TB dock without the GPU.

I thought you wanted to connect the GPU and use it to play games that need accelerated graphics.

that’s exactly what I’ve done, and it’s working.