Create a Gaming HVM

I also made an upgrade from 4.1 → 4.2 following the upgrade guide.
And I also have xen-hvm-stubdom-legacy.x86_64 in there.
I guess that could be a reason I get some “resizing bar0” problems.

Is there something, besides a clean install of 4.2, I can do to not have stubdom-legacy in there without breaking the system?

Try installing stubdom 4.2.9-1 from the testing repo

I tried, but it just said that the latest version was already installed. I ended up just running `dnf remove` on the legacy package. Nothing seemed to happen.

After many hours of tinkering, I eventually reached my goal: getting the hashcat benchmark to work. My goal was never to get it working with a monitor, just to get GPU passthrough working on a headless machine.
I have a Radeon RX 6600 GPU and I made these steps to get it to work:

  • Upgrade from Qubes OS 4.1 to 4.2 (I don’t know if this step was actually needed, but I would have done it anyway)
  • Enable “SR-IOV” in the BIOS
  • Add all PCI devices in the same IOMMU group as the GPU to “rd.qubes.hide_pci”
  • Install an Ubuntu 22.04 server standalone guest
  • qvm-pci attach only the GPU and the corresponding audio device, both with “permissive=True”
  • Add “amdgpu.runpm=0 pci=nomsi” to the standalone guest’s GRUB_CMDLINE_LINUX_DEFAULT
  • Download and install the amdgpu-install 23.40.1 jammy deb
  • amdgpu-install --usecase=rocm
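For anyone reproducing this, the dom0 side of the steps above looks roughly like the sketch below. The PCI addresses (0a:00.0 / 0a:00.1) and the VM name “ubuntu-gpu” are placeholders, not my actual values; check yours with `qvm-pci` first.

```shell
# In dom0: list PCI devices and note the GPU and its HDMI audio function
qvm-pci

# Hide the GPU's whole IOMMU group from dom0: add the addresses to the
# kernel cmdline in /etc/default/grub, e.g.
#   rd.qubes.hide_pci=0a:00.0,0a:00.1
# then regenerate the GRUB config and reboot.

# Attach both functions to the standalone HVM in permissive mode:
qvm-pci attach --persistent --option permissive=True ubuntu-gpu dom0:0a_00.0
qvm-pci attach --persistent --option permissive=True ubuntu-gpu dom0:0a_00.1
```

Inside the guest, after adding “amdgpu.runpm=0 pci=nomsi” to GRUB_CMDLINE_LINUX_DEFAULT, run `sudo update-grub` and reboot before installing the driver.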

I think that was all. I hope this information is helpful for someone.

It worked for me to correct the “no bootable device” error for my Debian HVM with more than 4000 MB of RAM.

For those not sure how to install the package, this is what I entered into the dom0 terminal. I don’t know the difference between the stubdom and the stubdom-full packages, so perhaps only one of them was needed.

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing xen-hvm-stubdom-linux-4.2.9-1.x86_64 xen-hvm-stubdom-linux-full-4.2.9-1.x86_64


I don’t understand anything about the IOMMU part. What do I have to do? So far, I’ve tried booting into my NixOS GRUB and pressing “c” for the command line. However, I’m not sure if I misunderstood something. Can someone please explain it to me in a simple, step-by-step manner? I’m absolutely clueless.

I rewrote this section; check if you understand it better now.
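In case a concrete sketch helps: every PCI device belongs to an IOMMU group, and the whole group that contains the GPU has to be hidden from dom0 via “rd.qubes.hide_pci”. From a Linux shell you can list the groups through sysfs; the helper below is my own sketch, not something from the guide.

```shell
# List IOMMU group membership under a given sysfs root (defaults to /sys).
# Prints nothing if the kernel booted without IOMMU support.
list_iommu_groups() {
    root=${1:-/sys}
    for dev in "$root"/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue
        group=${dev#"$root"/kernel/iommu_groups/}   # strip prefix -> "7/devices/..."
        group=${group%%/*}                          # keep only the group number
        printf 'IOMMU group %s: %s\n' "$group" "$(basename "$dev")"
    done | sort -V
}

list_iommu_groups
```

Every address that shares a group with the GPU then goes into the “rd.qubes.hide_pci=” option on dom0’s kernel command line.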


My HVMs aren’t able to connect to the internet after I converted them to HVMs.
I’ve tried manually changing the IPv4 settings to match the net qube’s settings, but still no success.
Any idea what could cause this and how to fix it?

This seems unrelated to this topic. If you need help, please also provide information about how you created your HVM, what it is, etc. An HVM based on the templates provided by Qubes OS should not have this issue.
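As a starting point, you can compare what Qubes assigned to the VM with what the guest actually configured (the VM name “my-hvm” is a placeholder):

```shell
# In dom0: what Qubes expects the HVM to use
qvm-prefs my-hvm ip       # the IP Qubes assigned to the VM
qvm-prefs my-hvm netvm    # which net qube it is connected to

# Inside the HVM: what the guest actually configured
ip addr                   # should show the assigned IP
ip route                  # default route should point at the net qube's gateway
```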

Is anyone familiar with Windows kernel debugging / Windows driver debugging?
I would like to set a breakpoint somewhere inside nvlddmkm.sys to investigate the internal state of the driver before it crashes. I tried the WinDbg option to attach to the local kernel, but no success so far.

I now use remote kernel debugging instead of trying local kernel debugging.
For the Linux NVIDIA driver, I was able to reverse engineer a patch.
For the Windows NVIDIA driver, I am having more difficulties: the driver itself is more complex, and I am less familiar with Windows kernel internals than with Linux ones.
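For anyone wanting to try the same setup: network kernel debugging is enabled with bcdedit in an elevated prompt on the Windows target; the host IP, port, and key below are placeholders.

```shell
REM On the debug target (elevated prompt); values are placeholders
bcdedit /debug on
bcdedit /dbgsettings net hostip:192.168.1.10 port:50000 key:1.2.3.4
REM Reboot the target, then attach from the host with:
REM   windbg -k net:port=50000,key=1.2.3.4
```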


Qubes 4.2
Tuxedo Computers Gemini 2 with an NVIDIA RTX 4060
I confirmed with support that the GPU is directly connected to the HDMI port, which is plugged into an external monitor.
I have hidden the NVIDIA GPU and audio device from dom0.
I have tried Arch, Debian, and Fedora.

So far none of them work, and I am at a loss for what to try or troubleshoot next. Perhaps I have missed something; I’d appreciate any tips.

update: I was able to get a Debian VM working with the external display. I have the steps, but they need to be cleaned up, if anyone is interested. Still working out some kinks in the gdm display.

update update: I can get gdm to load on the external monitor but when I try to login I get:

WARNING(gpu-debian-12-test): This VM has attempted to create a very large window in a manner that would have prevented you from closing it and regaining the control of Qubes OS’s graphical user interface.
As a protection measure, the “override_redirect” flag of the window in question has been unset. If this creates unexpected issues in the handling of this VM’s windows, please set “override_redirect_protection” to “false” for this VM in /etc/qubes/guid.conf to disable this protection measure and restart the VM.
This message will only appear once per VM per session. Please click on this notification to close it.

Searching around doesn’t yield much information. I tried disabling the override but that made dom0 unusable.

What have you done that is different from the guide? What are the things that didn’t work? Did you understand why they didn’t work?


After some time I was able to get i3 to load, but gdm and lightdm just load to a black screen, so I assume something is wrong with the gdm and lightdm configurations if it works with i3. I’m just not sure where to go from here. :slight_smile:

Yeah, I expect them not to work; you probably didn’t configure them to use the correct Xorg configuration.
If you want to use GNOME, replace “i3” in my scripts with something like “gnome-session”. I haven’t tested it, but it should work.
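If you want a concrete starting point, launching GNOME manually on the second X server (the way the i3 setup does it) would look roughly like this; the display number is an assumption, use whatever the guide’s scripts set up:

```shell
# Hypothetical sketch: run gnome-session instead of i3 on a second X display.
# Assumes Xorg is already configured (per the guide) to use the passed-through GPU.
startx /usr/bin/gnome-session -- :1
```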

I tried it with gnome-session and get a black screen also.

If I just start gdm with “systemctl start gdm.service”, it loads up to the login screen, so it must have something to do with GNOME.

update: finally got GNOME to load, but as root, so I need to figure out how to run it as a regular user.

update update: got it running as a user; now I need to figure out why there is no sound :slight_smile:

By “the GPU”, do you mean the one that displays dom0? Are you saying it can be in any socket on the board?
A lot of what I hear says you HAVE to leave the PRIMARY GPU alone and only use a GPU in the SECONDARY slot…
Would this be accurate?
Or is this completely false?

Hey guys. Having an expensive gaming laptop, I never risked getting rid of UEFI in favor of coreboot. I’m afraid of poor thermal management, losing CPU/GPU performance optimizations, the MUX switch, and the shortcut keys I use to manage fan speeds. Also, I read there is a risk of bricking when removing Intel ME.

I wanted to know your take on coreboot + Heads for gaming laptops. Has it caused any issues?