Create a Gaming HVM

Maybe there is some issue when you try to pass through the GPU that is considered the primary one.
Try changing the GPU order in the BIOS, or swap the GPUs between PCI slots on the motherboard.

The 690 alone on dom0 was required in order for it to function; it works now. The current issue is that I’m unable to get the Fedora 39 HVM to display anything but a black screen, and despite searching around I was unable to find a solution. xrandr only finds dummy outputs instead of the display. The NVIDIA drivers are installed, and the system detects the GPU and loads the drivers.

It appears impossible: the NVIDIA drivers work on both the Fedora 39 XFCE and Debian 12 XFCE templates, but I cannot get a display signal, and xrandr cannot see the display either. Surprisingly, when Debian 12 XFCE is run without any graphics driver installed, it provides a display signal (a black screen) and the screen shows up in xrandr; yet as soon as a graphics driver is added, the display disappears. Is there anything else to do that is not in the guide?
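
For anyone hitting the same symptom, the kind of checks I ran inside the HVM were roughly these (generic diagnostics, nothing guide-specific):

$ nvidia-smi                # confirms the driver is loaded and sees the GPU
$ xrandr --listproviders    # which GPUs X knows about
$ xrandr --query            # connected outputs (only dummy outputs show up here)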

Just wanted to let everyone know I’m gaming in a windows hvm now with a 4090 and nobody can stop me

1 Like

For people who don’t otherwise use Windows: don’t forget to get qubes-windows-tools; I don’t think it was listed in the steps of qvm-create-windows-qube. Also, on MSI motherboards the PCI/integrated options are PEG and IGD; you probably want IGD. Also, PCI blocking wasn’t working in GRUB, so I had to unload nouveau (sudo modprobe -r nouveau). Also, qvm-usb kind of sucked: the mic was cutting out, the input was lagging while I was downloading games, and the Xbox controller was lobotomized, so I had to buy and pass through a USB PCIe card, which worked GREAT (but needed a KVM switch so I can easily switch my input between the two monitors). Bluetooth passthrough dropped connections every so often, so I gave up. Finally, my last problem, which remains unsolved, is that attaching block devices doesn’t work, so I can’t expose my big hard drives to Windows; still working on a solution to this. During this process I broke my GRUB, which was a whole can of worms, but I managed to fix it in rescue. Anyway, if you’re doing Windows and reading this, feel free to message me if you have questions.
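
For reference, attaching the extra USB controller as a PCI device was roughly this in dom0 (the 03:00.0 address and the "windows" qube name are placeholders; list your own devices with qvm-pci first, and you may need the permissive or no-strict-reset options depending on the card):

$ qvm-pci
$ qvm-pci attach --persistent windows dom0:03_00.0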

Tested on:
i5-13600k
RTX 4090
MSI PRO Z690-a WIFI
GMS-PI52 USB (JESWO)

1 Like

Hey,

For those who, like me, would have appreciated a more step-by-step explanation of how to find the IOMMU group of their PCI devices, here is how I did it:

I had a USB stick with Ventoy and multiple live ISOs.

I restarted my laptop, pressed F12 to get to the boot menu (it might be a different key on your laptop), and chose my USB stick.

I chose Ubuntu (any Linux distro would do).

Then it asks whether you want to try or install Ubuntu; obviously, choose TRY, not install…

There, in Terminal I did :
$ sudo nano /etc/default/grub

There I found the line which says:
GRUB_CMDLINE_LINUX=""

and changed it to read (this is for an AMD CPU; on Intel the equivalent is intel_iommu=on)
GRUB_CMDLINE_LINUX="amd_iommu=on iommu=pt"

Not sure this is required but I did it. Then, saved it (Ctrl+X, then Y).
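
If you want to double-check that the IOMMU is actually active, the kernel log usually mentions it (exact wording varies by kernel and vendor):

$ sudo dmesg | grep -i -e DMAR -e IOMMU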

Then, I created myself a Script folder :
$ mkdir script

I then went back to the regular file GUI thing, went into my newly created script folder, and created a text file where I added the script provided by @taradiddles:

#!/bin/bash
# List every PCI device, grouped by IOMMU group
shopt -s nullglob
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU Group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

Save as : group_pci.sh (name it as you wish, just make sure it’s .sh at the end)

I went back in terminal, put myself in my script folder ($ cd script)

Changed the permissions on the .sh file to make it executable (otherwise you’ll get "permission denied"):

$ chmod +x group_pci.sh

Then run the script : $ ./group_pci.sh

And it listed all the IOMMU Groups for all my PCI devices (YEAH !!)

Both my NVIDIA GPU and the audio device (managed through NVIDIA) are in the same group, and they are the only PCI devices in that group.
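
For illustration, the relevant part of the output looked roughly like this (the group number is just an example, and your bus addresses and IDs will differ):

IOMMU Group 12:
    01:00.0 VGA compatible controller [0300]: NVIDIA Corporation Device [10de:28a1]
    01:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:22be]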

Restart the laptop, remove the USB, and tada, back to Qubes for the next steps :slight_smile:

Any experts are welcome to improve these steps or point out where they’re incorrect or unsafe.

1 Like

Let’s just pretend I did something wrong (which is very possible) and I have a black screen. I understand I could type ‘e’ to edit, but when I tested it, I restarted my laptop and pressed ‘e’ on the Qubes line; there I see Xen stuff, not GRUB stuff…

So how could I access the Qubes GRUB config at startup (before it potentially affects/hides the wrong GPU)?

Here is an example:

OK, I saw that, but… it says at the end: “The boot will proceed as usual from here, except that no qube will be autostarted.”

Great. But how will I be able to update the grub file? For example, let’s say I did something wrong when I wanted to hide the PCI device, and I want to remove (or update) the line:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0 "

And put it back to

GRUB_CMDLINE_LINUX="" for example?

Will I still be able to do this if needed, using that stop-the-autostart option?

I only have Qubes on my laptop (so no multi boot or anything)

The autostart kernel option is only an example; you don’t need to use it.
When you add kernel options in GRUB with GRUB_CMDLINE_LINUX, they end up on the grub module2 line.
So if you have:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0 "

then when you boot Qubes OS and press e in GRUB, you’ll see the rd.qubes.hide_pci=0a:00.0 string in the line starting with module2.
If you remove rd.qubes.hide_pci=0a:00.0 from that line and start Qubes OS with Ctrl+X, it will boot without hiding the PCI device.
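
Roughly, the edit screen shows something like this (paths and versions are placeholders, yours will differ):

multiboot2 /xen.gz placeholder console=none dom0_mem=min:1024M ...
module2 /vmlinuz-<version> root=/dev/mapper/qubes_dom0-root ... rd.qubes.hide_pci=0a:00.0 ...

Keep in mind an edit made this way only affects the current boot; to make a change permanent you still edit /etc/default/grub in dom0 and regenerate the config (e.g. sudo grub2-mkconfig -o /boot/grub2/grub.cfg, or the grub.cfg under /boot/efi/EFI/qubes/ on an EFI install).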

1 Like

OK… Moving on… Slowly… As mentioned, I have the NVIDIA GPU and its audio device (managed through NVIDIA) in the same group.

  • 01:00.0 : VGA compatible controller : NVIDIA Corporation Device [10de:28a1]
  • 01:00.1 : Audio device : NVIDIA Corporation Device [10de:22be]

Am I right in thinking I need to hide them both from dom0? And then pass both through?

yes

Hiding the audio device from dom0 is not necessary. Or at least not necessary on some systems (I personally don’t hide the NVIDIA audio device from dom0).

Same thing for the passthrough part: it is not necessary to pass through the audio device (at least on some systems).
For example, on my system the NVIDIA GPU is passed through to the “games” HVM, and the NVIDIA audio device is passed through to the “sys-audio” HVM.
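
In dom0 that split looks roughly like this (the 01:00.x addresses are the example ones from earlier in the thread; use your own qvm-pci output, and the permissive / no-strict-reset options may or may not be needed depending on the GPU):

$ qvm-pci
$ qvm-pci attach --persistent games dom0:01_00.0 -o permissive=true -o no-strict-reset=true
$ qvm-pci attach --persistent sys-audio dom0:01_00.1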

For what it’s worth, on my machine they are in the same IOMMU group; I can’t assign just one of them to an HVM without the qube failing to start.

Sadly, AFAIK, that part is hardware specific. And the things that my system (motherboard/BIOS/…) allows me to do are probably not common.

Could you share more about this? I’m just curious :smiley:

Passthrough successful !!
Took me quite a few (a lot of) tries and failures, but I finally made it :slightly_smiling_face:

A few things:

  1. I’m doing this on a laptop (Acer Nitro V15)

  2. I used a Fedora HVM, painful as well, but… I managed to get a stable version. The challenge comes when resizing the screen, but I’ll do some more tests later on.

  3. The PCI hiding didn’t work (at all): not only was dom0 still seeing the NVIDIA PCI device, it was also saying the driver in use was nouveau when I had actually blacklisted it :astonished:

  4. Although I could still see the NVIDIA PCI device in dom0, I moved on and attached the NVIDIA PCI devices (VGA and Audio → both in the same IOMMU group; I haven’t yet tried to dissociate them) to the HVM (using the Devices tab).

  5. I used this page (a thousand thanks !!) to upgrade Fedora and install the NVIDIA open driver (I had previously checked that my GPU was included in the compatibility list). I went for the CUDA install, which went smoothly.

  6. When I loaded the HVM, at first I thought it didn’t work because I had a black screen, but strangely, using the console (right-click on the HVM qube in the Qube Manager), I could actually log in properly. Checking from the console, everything looked fine except that it couldn’t detect the display. I went in many (many) directions, and then remembered I needed another screen attached to the HDMI port (doh !!) and tada !! When restarting the HVM, the second screen, attached to the HDMI port, was there. I tested a video on the net, and the sound came out of the other screen as expected (YEAH !!)

So to summarize: no PCI hiding, no complex Xorg setup for the second screen; I just attached the NVIDIA devices, blacklisted nouveau, and installed the NVIDIA drivers in the HVM.
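
For the nouveau part, inside the HVM it was roughly the usual recipe (a sketch, not my exact commands; double-check against the driver install page you follow):

$ sudo tee /etc/modprobe.d/blacklist-nouveau.conf <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
$ sudo dracut --force    # regenerate the initramfs so nouveau stays out
$ sudo reboot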

To be fine tuned :

  1. For now, I haven’t fully tested the NVIDIA card: I know it’s working, but I don’t know how « well » (or not so well) it works.

  2. The HVM screen on the laptop remains black (only the screen plugged into the HDMI port works). I can still see all the other VMs, etc.; just the HVM screen remains black. I’m guessing this may have something to do with the fact that the NVIDIA drivers aren’t installed in dom0, so the ‘display’ is not ‘passed’ from dom0 to the HVM (like it is for the VMs), or maybe something with Xorg authorization (see below)?

  3. Probably related: in the Fedora settings, it detects the other screen (the laptop screen), but as « unknown », and even if I activate it, it doesn’t display anything. Based on my investigations, I’m guessing this may have something to do with Xorg and the « authorization » to use the laptop display: checking xhost in dom0 shows:

access control enabled, only authorized clients can connect
SI:localgroup:qubes
SI:localuser:my_local_user_name

So I’m wondering if, technically, it would be possible to add HVM(s) to a group which is authorized in the Xorg configuration to use the display?

I may be oversimplifying this, and maybe it’s not a good idea for the HVM to be given access to the display through Xorg, since that is in dom0… I really don’t know enough to evaluate the possibility/risk of such an authorization (plus, I’m absolutely not sure it would even work).

  4. Another strange thing is that I don’t have access to the usual menu (at the top right, for those who know): so I can’t power on/off, and I need to either go into the settings or use the terminal to activate the wired connection or anything sound-related… And I have to shut it down using the terminal. Not a big deal, but I may investigate at some point.

Voilà !! Happy :slight_smile:

2 Likes

I haven’t investigated why I can do that with my hardware. It’s probably related to my motherboard’s support for virtualization, plus possibly some parameter in the BIOS; maybe the audio and VGA devices are not in the same IOMMU group, who knows.

My happiness in life decreases every time I open this damn BIOS setup page. And every time I have to deal with hardware things, I start ranting about how this damn thing should just work out of the box, before remembering that nothing ever “works” or “does the right thing”: not software, not hardware, not virtualization, not anything. Just a stack of childish lies we tell ourselves to avoid having nightmares about drowning in a bottomless pit of complexity every time we think about a computer.

So for the moment I don’t want to try to understand why it works, but it works. It would probably require multiple weeks of deep diving into hardware / firmware / hardware manufacturer oddities / IOMMU / virtualization and other unholy related things. Maybe one day, when I find the courage to inflict that on myself.


@Korben gg !
As a note, “rd.qubes.hide_pci” is not really about hiding things from dom0.
I won’t go into the details, but you are expected to still see the PCI device in dom0; however, the driver in use should be “pciback”.
It is something that I need to rewrite / explain better in my guide, when I have some time.
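
A quick way to check that in dom0 (the address is the example one from earlier in the thread; adjust to yours):

$ lspci -k -s 01:00.0

The output should end with "Kernel driver in use: pciback" rather than nouveau or nvidia.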

2 Likes

Thanks for a wonderful guide, but it is really hard (for me) to track any progress starting from the “Starting the guest” section.

What is the result of successfully starting the guest? Is it just a working VM with a GPU passed to it and recognized, or should I have some output? If so, should that be a working desktop, or will that only come in the “Qubes Integration” chapter (assuming I am not starting from a headless system)?

Hi,

Do you already have a working HVM (without passthrough)? Because you mentioned

vm with GPU passed”

and this guide is about passing it to an HVM (not sure it would be possible with a regular VM, since some “stuff” isn’t persistent, by design)…
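
If you don’t, a minimal standalone HVM to start from can be created in dom0 along these lines (the qube name, label and template are just examples; a standalone is used so the driver install persists, and the empty kernel makes it boot its own kernel):

$ qvm-create --class StandaloneVM --template fedora-39-xfce --property virt_mode=hvm --label red gaming
$ qvm-prefs gaming kernel ''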