Create a Gaming HVM

Hardware

To build an HVM for gaming, you need:

  • A dedicated GPU. By dedicated, I mean a secondary GPU that is not the one used to display dom0. As of 2023, Nvidia and AMD GPUs work; Intel GPUs have not been tested. External GPUs connected over Thunderbolt also work (Create a Gaming HVM - #8 by solene).

  • A screen available for the gaming HVM. (It can be a dedicated physical monitor, or a second cable plugged into your existing monitor so you can switch between input sources.)

  • A dedicated gaming mouse and keyboard.

  • A lot of patience. GPU passthrough is not trivial, and you will need to spend time debugging.

IOMMU Group

Goal

What

The goal of this step is to retrieve the default IOMMU groups (VFIO - “Virtual Function I/O” — The Linux Kernel documentation) of your hardware.

Why

It can help you understand potential issues with your setup (which devices live in the same IOMMU group as your GPU) and find potential workarounds.
If you feel lucky, skip this step.

How

You can’t see your IOMMU groups while you are running Xen (the information is hidden from dom0).

  • Boot a live Linux distribution.
  • In GRUB, enable the IOMMU: add the parameters amd_iommu=on (or intel_iommu=on for Intel CPUs) to the Linux command line.
  • Once you have logged in to your live Linux distribution, retrieve the folder structure of /sys/kernel/iommu_groups.
    You can use the following script to do that:
#!/bin/bash
# List every IOMMU group and the PCI devices it contains
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
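
Run it as root and look for the group that contains your secondary GPU; the GPU’s video and audio functions often share a group and must be passed through together. A hypothetical example of the relevant output (IDs and group numbers will differ on your machine):

IOMMU Group 15:
	0a:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP104 [GeForce GTX 1080] [10de:1b80]
	0a:00.1 Audio device [0403]: NVIDIA Corporation GP104 High Definition Audio Controller [10de:10f0]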

GRUB modification

You must hide your secondary GPU from dom0. To do that, you have to modify the GRUB configuration. In a dom0 terminal, type:

qvm-pci

Then find the device ID of your secondary GPU. In my case, it is dom0:0a_00.0. Edit /etc/default/grub and add the PCI hiding:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0 "

Then regenerate the GRUB configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg

If you are using UEFI and Qubes OS 4.1 or earlier, the file to override with grub2-mkconfig is /boot/efi/EFI/qubes/grub.cfg.
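
In that case, the command becomes:

grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg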

Note: if, after this step, you get stuck during the Qubes OS startup when you reboot, it means dom0 is trying to use the GPU you just hid. Check your BIOS options. Also check the cables: the BIOS may prioritize GPUs based on the cable type; for example, DisplayPort can be favoured over HDMI.

Once you have rebooted, type sudo lspci -vvn in dom0. You should see “Kernel driver in use: pciback” for the GPU you just hid.
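
To check just that device, you can filter on its ID (a sketch, reusing the 0a:00.0 example from above):

sudo lspci -vvn -s 0a:00.0 | grep -i "driver in use"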

Configuring the parameter “max-ram-below-4g”

Since the release of Xen 4.17.2-8 in Qubes (R4.2, 2024-01-03), no additional configuration is required.

Remove any existing “max-ram-below-4g” workaround.
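
To check whether one of the old workarounds is still in place, you can grep for the parameter in the two files that the methods below modify (a sketch, to be run in dom0):

grep -i "max-ram-below-4g" /usr/share/qubes/templates/libvirt/xen.xml
zcat /usr/libexec/xen/boot/qemu-stubdom-linux-rootfs | cpio -i --to-stdout init 2>/dev/null | grep -i "max-ram-below-4g"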

If you are using an older version

Why do we need to do that?

github.com/QubesOS/qubes-issues/issues/4321

Copy-paste of the comment:

This is caused by the default TOLUD (Top of Low Usable DRAM) of 3.75G provided by qemu not being large enough to accommodate the larger BARs that a graphics card typically has. The code to pass a custom max-ram-below-4g value to the qemu command line does exist in the libxl_dm.c file of xen, but there is no functionality in libvirt to add this parameter. It is possible to manually add this parameter to the qemu commandline by doing the following in a dom0 terminal.


( “max-ram-below-4g” is not related to the amount of RAM you can pass to the VM, and it is not related to VRAM either. It concerns WHAT is available in the 32-bit addressable memory space and WHAT is stored in that memory space on 64-bit systems; usable RAM is only a part of what needs to be mapped in the memory space. )

Finding the correct value for this parameter

Below, we set the “max-ram-below-4g” parameter to “3.5G”.

For some GPUs this value needs to be “2G” (discovered here: Quick howto: GPU passthrough with lots of RAM). It is not currently well understood why the value needs to be exactly “2G” or exactly “3.5G”, or perhaps some other value for GPUs/configurations we have not seen yet. ( AppVM with GPU pass-through crashes when more than 3.5 GB (3584MB) of RAM is assigned to it · Issue #4321 · QubesOS/qubes-issues · GitHub )
More investigation is required to understand what is going on with this parameter.

The current best guess is to run this command in dom0:
lspci -vvs GPU_IDENTIFIER | grep Region, for example: lspci -vvs 0a:00.0 | grep Region.
If the largest [size=XXXX] value is 256MB, try “3.5G” for max-ram-below-4g. If the largest value is bigger, try “2G”.
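
For illustration, a hypothetical output (the sizes will differ per card); here the largest BAR is 256M, so “3.5G” would be the first value to try:

	Region 0: Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
	Region 1: Memory at c0000000 (64-bit, prefetchable) [size=256M]
	Region 3: Memory at d0000000 (64-bit, prefetchable) [size=32M]
	Region 5: I/O ports at e000 [size=128]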

Update: I think I discovered the reason (AppVM with GPU pass-through crashes when more than 3.5 GB (3584MB) of RAM is assigned to it · Issue #4321 · QubesOS/qubes-issues · GitHub, Xen project mailing list). If you have the skills required to compile the Xen package and want to, try applying this patch (Fix guest memory corruption caused by hvmloader by neowutran · Pull Request #172 · QubesOS/qubes-vmm-xen · GitHub) and confirm whether it works as expected. With this patch, the “Patching stubdom-linux-rootfs.gz” section below is not needed.

Patching stubdom-linux-rootfs.gz

I modified the original code to:

  • make it work with Qubes R4.1/R4.2
  • remove one of the original limitations by restricting the modification to VMs whose name starts with “gpu_”
  • add a way to set the “max-ram-below-4g” value per VM. For example, if you specifically want to use 2G for “max-ram-below-4g”, name the VM “gpu_2G_YOURNAME”; if you specifically want 3.5G, name it “gpu_3n5G_YOURNAME”

mkdir stubroot
cp /usr/libexec/xen/boot/qemu-stubdom-linux-rootfs stubroot/qemu-stubdom-linux-rootfs.gz
cd stubroot
gunzip qemu-stubdom-linux-rootfs.gz
cpio -i -d -H newc --no-absolute-filenames < qemu-stubdom-linux-rootfs
rm qemu-stubdom-linux-rootfs
nano init

Before the line

# $dm_args and $kernel are separated with \n to allow for spaces in arguments

add:

vm_name=$(xenstore-read "/local/domain/$domid/name")
# Only modify the qemu arguments for VMs whose name starts with "gpu_"
if [ $(echo "$vm_name" | grep -iEc '^gpu_' ) -eq 1 ]; then
 # Extract an optional value from the name, e.g. "gpu_2G_foo" -> "2G", "gpu_3n5G_foo" -> "3n5G"
 max_ram_below_4g=$(echo "$vm_name" | grep -iEo "_([0-9]*(n[0-9]*)?)._" | cut -d '_' -f 2)
 if [[ -z "$max_ram_below_4g" ]];
 then
  max_ram_below_4g="3.5G"
 fi
 # "n" stands for "." since dots are not allowed in qube names
 max_ram_below_4g=$(echo "$max_ram_below_4g" | sed 's/n/./g')
 # Append ",max-ram-below-4g=..." to the "-machine xenfv" qemu argument
 dm_args=$(echo "$dm_args" | sed -n '1h;2,$H;${g;s/\(-machine\nxenfv\)/\1,max-ram-below-4g='"$max_ram_below_4g"'/g;p}')
fi

Then execute:

find . -print0 | cpio --null -ov \
--format=newc | gzip -9 > ../qemu-stubdom-linux-rootfs
sudo mv ../qemu-stubdom-linux-rootfs /usr/libexec/xen/boot/

Note that this only applies the change to HVMs whose name starts with “gpu_”, so you need to name your gaming HVM “gpu_SOMETHING”.

Alternatively, the following dom0 script “patch_stubdom.sh” does all the previous steps:

#!/bin/bash 

patch_rootfs(){

 filename=${1?Filename is required}

 cd ~/

 sudo rm -R "patched_$filename"
 mkdir "patched_$filename"

 cp /usr/libexec/xen/boot/$filename "patched_$filename/$filename.gz"
 cp /usr/libexec/xen/boot/$filename "$filename.original"

 cd patched_$filename
 gunzip $filename.gz
 cpio -i -d -H newc --no-absolute-filenames < "$filename"
 sudo rm $filename

 grep -i "max-ram-below-4g" init && echo "!!ERROR!! The thing is already patched ! EXITING ! " >&2 && exit

patch_string=$(cat <<'EOF'


vm_name=$(xenstore-read "/local/domain/$domid/name")
if [ $(echo "$vm_name" | grep -iEc '^gpu_' ) -eq 1 ]; then
 max_ram_below_4g=$(echo "$vm_name" | grep -iEo "_([0-9]*(n[0-9]*)?)._" | cut -d '_' -f 2)
 if [[ -z "$max_ram_below_4g" ]];
 then
  max_ram_below_4g="3.5G"
 fi
 max_ram_below_4g=$(echo "$max_ram_below_4g" | sed 's/n/./g')
 dm_args=$(echo "$dm_args" | sed -n '1h;2,$H;${g;s/\\(-machine\\nxenfv\\)/\\1,max-ram-below-4g='"$max_ram_below_4g"'/g;p}')
fi
\# $dm_args and $kernel

EOF
)

awk -v r="$patch_string" '{gsub(/^# \$dm_args and \$kernel/,r)}1' init > init2
cp init /tmp/init_$filename
mv init2 init
chmod +x init

 find . -print0 | cpio --null -ov \
--format=newc | gzip -9 > ../$filename.patched
 sudo cp ../$filename.patched /usr/libexec/xen/boot/$filename

 cd ~/


}

grep -i "max-ram-below-4g" /usr/share/qubes/templates/libvirt/xen.xml && echo "!!ERROR!! xen.xml is patched ! EXITING ! " >&2 && exit
patch_rootfs "qemu-stubdom-linux-rootfs"
patch_rootfs "qemu-stubdom-linux-full-rootfs"

echo "The stubdoms have been patched."
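
To apply it, save the script in dom0 and run:

chmod +x patch_stubdom.sh
./patch_stubdom.sh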

OUTDATED - DO NOT USE - Other method: Patching xen.xml instead of stubdom

Instead of patching stubdom-linux-rootfs, you could inject the command directly into the configuration template. It is the file “templates/libvirt/xen.xml” in the “qubes-core-admin” repository. In dom0, this file is at “/usr/share/qubes/templates/libvirt/xen.xml”.

See below the part that has been modified to add the needed “max-ram-below-4g” option.

<!-- server_ip is the address of stubdomain. It hosts it's own DNS server. -->
<emulator
 {% if vm.features.check_with_template('linux-stubdom', True) %}
 type="stubdom-linux"
 {% else %}
 type="stubdom"
 {% endif %}
 {% if vm.netvm %}
 {% if vm.features.check_with_template('linux-stubdom', True) %}
 cmdline="-qubes-net:client_ip={{ vm.ip -}}
 ,dns_0={{ vm.dns[0] -}}
 ,dns_1={{ vm.dns[1] -}}
 ,gw={{ vm.netvm.gateway -}}
 ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
 {% else %}
 cmdline="-net lwip,client_ip={{ vm.ip -}}
 ,server_ip={{ vm.dns[1] -}}
 ,dns={{ vm.dns[0] -}}
 ,gw={{ vm.netvm.gateway -}}
 ,netmask={{ vm.netmask }} -machine xenfv,max-ram-below-4g=3.5G"
 {% endif %}
 {% else %}
 cmdline="-machine xenfv,max-ram-below-4g=3.5G"
 {% endif %}

A better patch for xen.xml is available here: AppVM with GPU pass-through crashes when more than 3.5 GB (3584MB) of RAM is assigned to it · Issue #4321 · QubesOS/qubes-issues · GitHub

I haven’t personally tested this alternative, but it should work, and some users have reported that it works. This method is less tested than patching stubdom-linux-rootfs, so I recommend patching stubdom-linux-rootfs.

Preparing the guest

As of 2023, I recommend using a Linux guest instead of a Windows guest.

Windows

Install a Windows VM; you can use qvm-create-windows-qube for that.

Linux

Create a new standalone Qube based on the template of your choice.

You must run the kernel provided by the guest distribution, because we will use some non-default kernel modules for the GPU driver. Just follow the doc: managing-vm-kernel.
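
A minimal sketch of these two steps, assuming a Debian template named debian-12 and the qube name gpu_gaming (both are placeholders; keep a “gpu_” prefix if you rely on the stubdom patch above):

qvm-create --class StandaloneVM --template debian-12 --label red gpu_gaming
# HVM mode is required for PCI passthrough
qvm-prefs gpu_gaming virt_mode hvm
# An empty kernel means the qube boots the kernel installed inside the guest
qvm-prefs gpu_gaming kernel ''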

Install the GPU drivers you need.

An automated way to do that is proposed here: Create a Gaming HVM

Pass the GPU

In the Qubes settings for the HVM, go to the ‘Devices’ tab and pass the ID corresponding to your GPU. Equivalently, from a dom0 terminal:

qvm-pci attach gpu_gaming_archlinux dom0:0a_00.0 --persistent

You may or may not need to add the options “permissive” or “no-strict-reset”.
You may or may not need to pass through additional devices, depending on the results of the IOMMU script (Create a Gaming HVM).

Read up on the security implications of those parameters before enabling them:

qvm-pci attach gpu_gaming_archlinux dom0:0a_00.0 -o permissive=True -o no-strict-reset=True --persistent

Starting the guest

This is where you will have a lot of issues to debug. Prepare for intense suffering.

For Linux guests, run sudo dmesg to see the kernel logs, which will tell you if there is an issue with your GPU driver. For some hardware, MSI interrupts won’t work. You can work around that using, for example, pci=nomsi or NVreg_EnableMSI=0 or something else. Check your driver’s options. Check whether alternative drivers exist (amdgpu, nvidia, nouveau, nvidia-open, drivers from the official website, …). Try multiple kernel versions.
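
As an illustration of those driver-option workarounds (a sketch; the file name is arbitrary, and NVreg_EnableMSI applies to the proprietary NVIDIA driver):

# Inside the guest: /etc/modprobe.d/gpu-workarounds.conf
# Disable MSI for the proprietary NVIDIA driver
options nvidia NVreg_EnableMSI=0

Alternatively, add pci=nomsi to the guest kernel command line to disable MSI system-wide.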

For NVIDIA GPUs, I recommend using the “nvidia-open” drivers instead of “nvidia”.

Some links that could help you to debug the issues you will have:

For Windows guests, you will probably have the same issues, but they will be harder to debug. I recommend using the drivers from Windows Update instead of the official drivers from the manufacturer’s website.

Some things that may be useful for debugging:

  • Virsh (start, define, …)

  • /etc/libvirt/libxl/

  • xl

  • /etc/qubes/templates/libvirt/xen/by-name/

  • /usr/lib/xen/boot/

  • virsh -c xen:/// domxml-to-native xen-xm /etc/libvirt/libxl/…
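
For example (a sketch, assuming the qube is named gpu_gaming_archlinux):

# Dump the libvirt configuration generated for the qube
virsh -c xen:/// dumpxml gpu_gaming_archlinux
# Translate it to the native xl format
virsh -c xen:/// domxml-to-native xen-xm /etc/libvirt/libxl/gpu_gaming_archlinux.xml
# Read the Xen hypervisor log
sudo xl dmesg | tail -n 50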

Issues with the drivers could be related to ‘qubes-vmm-xen-stubdom-linux’, ‘qubes-vmm-xen’, and the Linux kernel you will be using.

Linux guest — Integration with QubesOS

Xorg

Now Xorg. From XKCD:

(XKCD comic image)

Things you need to install:

  • The Xorg input driver to support your mouse and keyboard

  • Your favorite window manager

In my case, it is:

Arch Linux version:

pacman -S xorg i3

Debian version:

apt install xserver-xorg-input-kbd xserver-xorg-input-libinput xserver-xorg-input-mouse i3

Then create an Xorg configuration file for your GPU and screen. My file is named ‘xorg.conf’:

Section "ServerLayout"
Identifier "Passthrough Layout"
Screen 0 "Passthrough Screen" Absolute 0 0
EndSection

Section "Device"
Identifier  "Passthrough GPU"
# name of the driver to use. Can be "amdgpu", "nvidia", or something else
Driver      "driver"
Option "Coolbits" "4"
# The BusID value will change after each qube reboot. 
BusID       "PCI:0::0"
EndSection

Section "Monitor"
Identifier "Passthrough Monitor"
EndSection

Section "Screen"
Identifier "Passthrough Screen"
Device     "Passthrough GPU"
Monitor    "Passthrough Monitor"
EndSection

We can’t know the correct BusID before the qube is started, and it changes after each reboot. So let’s write a script — named “xorgX1.sh” — that updates this configuration file with the correct value, then starts a binary on Xorg X screen number 1.

#!/bin/bash
binary=${1:?binary required}

# Pick the right Driver line based on the GPU vendor
lspci | grep "VGA" | grep -E "NVIDIA" && sed -i 's/^Driver .*/Driver "nvidia"/g' /opt/xorg.conf
lspci | grep "VGA" | grep -E "AMD/ATI" && sed -i 's/^Driver .*/Driver "amdgpu"/g' /opt/xorg.conf

# Find the current BusID of the passthrough GPU, then set it in the Xorg configuration file
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i 's/"PCI:[^"]*"/"PCI:0:'$pci':0"/g' /opt/xorg.conf

# Start the Xorg server on X screen number 1.
# The X screen n°0 is already used for Qubes OS integration.
sudo startx "$binary" -- :1 -config /opt/xorg.conf

Deprecated: old way of doing it

#!/bin/bash

binary=${1:?binary required}

# Find the correct BusID of the AMD GPU, then set it in the Xorg configuration file
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i "s/PCI:0:[0-9]:0/PCI:0:$pci:0/g" /home/user/AOC.conf

Pulseaudio setup

sudo killall pulseaudio
sudo sed -i "s/load-module module-vchan-sink.*/load-module module-vchan-sink domid=$(qubesdb-read -w /qubes-audio-domain-xid)/" /etc/pulse/qubes-default.pa
sudo rm /home/user/.pulse/client.conf
start-pulseaudio-with-vchan
sleep 5 && sudo chmod -R 777 /root/ &
sleep 5 && sudo cp /root/.pulse/client.conf /home/user/.pulse/client.conf && sudo chown -R user:user /home/user/.pulse/client.conf &

# Start the Xorg server on X screen number 1.
# The X screen n°0 is already used for Qubes OS integration.
sudo startx "$binary" -- :1 -config /home/user/AOC.conf

Audio

  • Create a script to launch your favorite window manager and force it to use a specific PulseAudio server.
    Example “i3.sh”:
#!/bin/bash
sleep 5 && sudo setxkbmap -display :1 fr & 
/bin/sudo -u user PULSE_SERVER=unix:/run/user/1000/pulse/native bash -c 'sudo xhost + local:;/usr/bin/i3'

And launch it:

sudo ./xorgX1.sh ./i3.sh
Deprecated: old way of doing it
  • Delete any packages related to pulseaudio and install pipewire
sudo pacman -Rdd qubes-vm-pulseaudio pulseaudio
sudo pacman -S pipewire-{jack,alsa,pulse} pipewire-qubes
  • Enable the “pipewire” service for this qube using Qubes Manager

Deprecated: old way of doing it (2)

So you need to configure PulseAudio for Xorg multiseat. The Arch Linux documentation explains that very well: Xorg multiseat. Use the option without the system-mode daemon and adapt it to the qube: add the following line to /etc/pulse/qubes-default.pa

load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1

Then add this config for root:

mkdir /root/.pulse
echo "default-server = 127.0.0.1" > /root/.pulse/client.conf

The sound was buggy/laggy on my computer, so I tried to find a workaround by playing with PulseAudio settings. It was more or less random trial and error, so I can’t really explain it: in /etc/pulse/daemon.conf add the following lines:

default-fragments = 60
default-fragment-size-msec = 1
high-priority = no
realtime-scheduling = no
nice-level = 18

In /etc/pulse/qubes-default.pa change

load-module module-udev-detect

to

load-module module-udev-detect tsched=0

You can launch your favorite window manager like this:

sudo ./xorgX1.sh /usr/bin/i3

Automating most of that

I am trying to write a script to automate all the previous steps.
It is available here:
https://git.sr.ht/~yukikoo/gpu_template

Please be careful and read the code; not many tests have been done.

References


This document was migrated from the qubes-community project
  • Page archive
  • First commit: 18 Jan 2023. Last commit: 18 Jan 2023.
  • Applicable Qubes OS releases based on commit dates and supported releases: 4.1
  • Original author(s) (GitHub usernames): neowutran
  • Original author(s) (forum usernames): @neowutran
  • Document license: CC BY 4.0

This guide seems to draw heavily on this article by @neowutran; should some credits be added? (Asking you directly, @neowutran, since you’re in the forum.)

https://neowutran.ovh/qubes/articles/gaming_linux_hvm.html

Unless I’ve got this backwards and it’s the other way around! @taradiddles, if I understand the migration correctly, you created the guide in the qubes-community project on GitHub?

I published it both on my website and in the qubes-community project on GitHub. You can check in the footer of the original post that I am the author:

"

  • Original author(s) (GitHub usernames): neowutran
  • Original author(s) (forum usernames): @neowutran
    "

everything is ok :slight_smile:


I didn’t realize there was such useful info in the details of the migration! :bulb::bulb::bulb:


Pretty cool guide! Did someone ever try with a thunderbolt external GPU?

@neowutran I got this working more easily with pipewire. And since there was recent work by @Demi in adding support, I think it’s a good bet.

(Disclaimer: this was on Manjaro, not pure Arch.)
Here are the instructions:

Getting audio to work

In my case I was having issues with audio. It was stuttering, so I removed the qubes-vm-pulseaudio package and installed pipewire as a replacement, and it seemed to do the job:

sudo pacman -Rdd qubes-vm-pulseaudio pulseaudio
sudo pacman -S pipewire-{jack,alsa,pulse}

After restarting the machine, the audio was working.


I just tried it, but it doesn’t seem to work with multiseat Xorg. Can you confirm that it works for you even with multiseat Xorg?
(Audio works well on my “:0” Xorg session, but there is no sound on my “:1” Xorg session. I removed all the configuration/modifications related to PulseAudio.)

Update:
I made it work on my side with pipewire. I needed to use this custom script instead of launching i3 directly (as in my example) to force it to connect to the audio daemon:

i3.sh 
#!/bin/bash
/bin/sudo -u user PULSE_SERVER=unix:/run/user/1000/pulse/native /usr/bin/i3

I just got an HVM to display GNOME on my external GPU :+1:

I did that (I’ll try again from a cleaner environment to get a more reproducible guide)

  • hide the PCI devices (there is no IOMMU grouping issue on the external GPU; even with the AMD card it’s super clear that the AMD VGA and AMD audio devices are part of the same group)
  • pass the devices to an HVM
  • use the qube-provided kernel
  • use X :1 -configure to generate an xorg.conf file, then strip it down to keep only the screen/monitor/device sections related to the VGA card itself; that was good enough. Check that you are using nvidia for NVIDIA cards and amdgpu for AMD cards
  • move that file to /etc/X11/xorg.conf.d/99-xorg.conf and start your display manager, e.g. systemctl start gdm.service (see the sketch below)
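
A sketch of those last two steps (run as root inside the HVM; X -configure writes the generated config to /root/xorg.conf.new):

X :1 -configure
mv /root/xorg.conf.new /etc/X11/xorg.conf.d/99-xorg.conf
# trim the generated file as described above, then:
systemctl start gdm.service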

Tested on openSUSE Tumbleweed with an AMD RX 480 and an NVIDIA 1060 :ok_hand:
Tested on openSUSE Tumbleweed with an NVIDIA 1060 but in discrete mode (display in dom0, but using the GPU) :ok_hand:

Tested on Debian 12 using the template with an NVIDIA 1060 :ok_hand:

I used a Razer Core X external GPU case connected via Thunderbolt 3 to a Lenovo T470. Unfortunately, it’s almost useless in this case because the CPU is way too slow to do anything meaningful with a GPU in Qubes OS :smiley:


I’m sure using Qubes OS doesn’t improve the CPU performance, and mobile CPUs are not great for gaming to start with, but do you not also run into some serious PCIe bandwidth issues?

Can you get more than 4x PCIe lanes? I also think TB3 and TB4 only use PCIe gen 3, and it’s not the fast CPU lanes but the slower chipset lanes.

Not really. On Linux I can emulate Switch games using Yuzu or play games like Death Stranding or Control (CPU bound). The laptop has an i5-7300U; it would take a while before being limited by Thunderbolt bandwidth. Maybe if you use a 4K screen this would saturate faster?

If you use the eGPU as a discrete GPU, the bandwidth is limiting because the data has to go both ways; if you use it with an external display, you have more bandwidth because the rendering doesn’t need to go back through Thunderbolt.

Yes, it works if you just use it for the external display, but you could do the same with a TB dock without the GPU.

I thought you wanted to connect the GPU and use it to play games that need accelerated graphics.

That’s exactly what I’ve done, and it’s working.


Great guide but I still have some questions.

For the IOMMU group, do I have to do everything inside the GRUB command line of the USB Linux live distro (do I have to paste the #!/bin/bash part there)?
And if yes, how can I deal with my OS being LUKS-encrypted?

Also, what’s the deal with max-ram-below-4g?
From what I understood, it means only up to 2 GB of RAM will be allowed for your GPU passthrough HVM qube, but then I don’t understand how people in other threads have straight-up 4090s working. (And what about the VRAM?)

Bump, I am also interested.

“For the IOMMU group, do I have to do everything inside the GRUB cmd of the USB Linux live distro (I have to paste the #!/bin/bash part)?”: No. Boot into any Linux live distro and use a standard terminal emulator to create the script and execute it.

“And if yes how can I deal with my OS being LUKS encrypted?”: Irrelevant; you don’t need to access anything on your OS for this step.

“From what I understood this means this will allow up to 2 GB of ram to your gpu passthrough HVM qube…”: No, you have access to all the RAM you want, and there is no limitation on VRAM either.

“Also, what’s the deal with max-ram-below-4g?”: You can read the different threads here:

or search the internet. Not everything is understood about what is going on with TOLUD.


Btw, I did a presentation about my video editing setup on Qubes, which has GPU passthrough on an NVIDIA 4090, if anyone is interested.

Timestamp 5h34m30s


The lines from the original post on patching xen.xml have changed because the original patch interfered with audio VMs.
Maybe update the lines and link to the thread in case more progress is made?

I am going to remove the information about xen.xml patching from this guide.

  • The two methods (xen.xml / stubdom-linux-rootfs.gz) are currently not doing exactly the same thing.
  • The difference between the two methods seems to be causing confusion; I see a lot of mistakes in forum posts, and it makes troubleshooting harder.
  • I don’t personally use the xen.xml method, and I am not willing to spend time updating the xen.xml patching method to have the exact same behavior as the other method.

Are you still having to keep stubdom downgraded to apply the stubdom patch, or did they resolve that?

I am using the latest version of everything available in the official repositories.