Create a Gaming HVM

So these are based on templates, not standalone systems based on ISOs.

You can create a standalone qube from an existing template or an ISO.

Arch is a harder OS than Debian, and Debian is more compatible out of the box with many applications. Would a Debian-12 template work just as well?

Yes, it would work using a Debian template or a Fedora template or …; it is just my personal preference to use Arch Linux for this kind of thing, since it has all the latest bleeding-edge drivers.

So for this to work, there need to be two virtual machines, one template that gets graphics processing and one template that provides graphics processing, and both can be based on templates.

No. Only one qube receives the passed-through GPU (the gaming HVM); nothing “provides” graphics from another VM. dom0 keeps driving your normal display with the remaining GPU.

Are you saying I have to run a live Linux system to get system information and share that with dom0 because I can’t get that information in dom0? I don’t understand the IOMMU group and why it matters. Are you saying a live Linux system, like burning live Parrot or live Debian or live Fedora and just booting it on my system to get the information, or do you mean a live template? I do not understand this part.

You have to run a live Linux system to check for yourself whether it should theoretically work without specific issues. You don’t “share” the result with dom0 afterwards; it is just for you, for your understanding of how your hardware is wired. For what an IOMMU group is, you can check here and contribute: IOMMU groups – security impact
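
If it helps, here is a minimal sketch for listing the groups from the live system (plain sysfs plus lspci, nothing Qubes-specific). Ideally the GPU sits in its own group, or shares it only with its own audio function:

# run from the live Linux system
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done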

I have edited GRUB and now have

Kernel driver in use: pciback

for my GPU when running lspci -vvn
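
For anyone following along, that GRUB edit is presumably something along these lines (0a:00.0 stands in for this poster’s GPU address, so substitute your own; the grub.cfg path differs on UEFI installs):

# /etc/default/grub in dom0 -- hide the GPU from dom0 so pciback claims it
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0"

# then regenerate the config and reboot:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg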

This is really hard to do. I’ve barely done any of this and already want to cry.

I don’t understand what is going on with Xorg, what it means, or why it’s even needed. I also don’t understand why I need a 2nd monitor. Can’t I just patch this directly to a VM?

You’ve given a lot of hints for Arch. I have no idea what to put in Debian. I’m on the verge of trying to just do this with Arch even though I don’t understand pacman or arch at all.

If I made a standalone VM with PopOS and NVIDIA drivers, or a standalone Windows machine with NVIDIA drivers, and just attached the unused GPU to the VM in dom0, will it not work without Xorg?

qvm-pci attach Windows-gaming dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

or

qvm-pci attach PopOS-gaming dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

Is that going to fail without Xorg? Can anyone explain why or why not?

Why didn’t you tell us this would require patience and debugging and include graphs to warn us all how hard this would be?

So I tried running PopOS with NVIDIA drivers and did a passthrough in dom0, but I think it didn’t work.

sudo qvm-pci attach PopOS dom0:01_01.0 -o permissive=True -o no-strict-reset=True
Got empty response from qubesd. See journalctl in dom0 for details.
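
The details it mentions can usually be pulled up in dom0 with something like:

# in dom0: show the most recent qubesd log entries
sudo journalctl -u qubesd -e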

In PopOS it says I am running X11 as the windowing system, virtualization is listed as Xen, and graphics are llvmpipe (256 bits), which is software rendering.

It probably isn’t working at all and is just running like a normal standalone PopOS.

I ran nvidia-settings in PopOS and it showed the drivers weren’t loading. It said the X server isn’t available.

Back to Xorg hell.

So I went back to just using a Debian standalone template. I changed it to HVM since PVH won’t allow me to attach anything.
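
(For anyone else doing this, switching the mode is a one-liner in dom0; debian-gaming is a placeholder qube name:)

# in dom0: PCI passthrough requires HVM, so switch the qube's virt mode
qvm-prefs debian-gaming virt_mode hvm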

I installed nvidia-detect and it indicates I have an NVIDIA graphics card. I installed the NVIDIA drivers. I am still not sure how the second monitor, Xorg, and another keyboard factor into all of this.
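
For reference, the Debian side of that was roughly the following (assuming contrib and non-free are enabled in the apt sources):

# inside the Debian standalone
sudo apt install nvidia-detect   # identifies the card and suggests a driver package
sudo apt install nvidia-driver   # the metapackage it suggests for most modern cards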

Possibly related

It looks like I do need a different monitor and Xorg to pass through the GPU with HVM.

Does anyone know why I can’t just do this on the same screen? I don’t understand why there needs to be different hardware. Is this a technical requirement of Qubes (having a different monitor), or has no one figured out how to make it work?

Can I just apply your Xorg settings to something like Debian with Nvidia installed or are modifications necessary?

I don’t understand the Monitor part in the Xorg configuration. Can I copy the monitor section just like that?

I commented the parts of the file that need to be modified.

# name of the driver to use. Can be "amdgpu", "nvidia", or something else
# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN. 
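
Put together, the sections in question look roughly like this; the Modeline below is just a generic CVT line for 1920x1080 at 60 Hz (what cvt 1920 1080 60 prints), so generate one for your own screen, e.g. with the site above:

Section "Device"
	Identifier "gpu"
	# name of the driver to use. Can be "amdgpu", "nvidia", or something else
	Driver "nvidia"
EndSection

Section "Monitor"
	Identifier "monitor"
	# Modeline for YOUR screen -- https://arachnoid.com/modelines/ .
	# IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN.
	Modeline "1920x1080_60.00" 173.00 1920 2048 2248 2576 1080 1083 1088 1120 -hsync +vsync
EndSection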

I have never tried it, but supposedly this guide offers a solution for doing it on a single screen:

I’m going to try it. Not sure if I am good enough to figure it out.

I tried GPU passthrough on a different laptop than last time (T470 vs NV41), but when I plug in my external GPU over Thunderbolt, the device names don’t appear in lspci or qvm-pci; they are still named “thunderbolt controller” :thinking:

In case someone has an idea.

edit²: Well, on an Ubuntu live CD, the thunderbolt card doesn’t appear either :confused: I wonder if this is because I have a Thunderbolt 3 GPU case and it’s a Thunderbolt 4 port :woman_shrugging:

edit³: tried again on Ubuntu; it turns out I didn’t plug the thunderbolt cable in correctly :woman_facepalming: . It shows up on Ubuntu but not Qubes OS now.

Here is the lspci -vnn output for the thunderbolt-related devices (GPU is connected :woman_shrugging: )

00:0d.0 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 USB Controller [8086:461e] (rev 02) (prog-if 30 [XHCI])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: medium devsel
	Memory at 80940000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: xhci_pci

00:0d.2 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 NHI #0 [8086:463e] (rev 02) (prog-if 40 [USB4 Host Interface])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: fast devsel, IRQ 17
	Memory at 80900000 (64-bit, non-prefetchable) [size=256K]
	Memory at 80970000 (64-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: thunderbolt

In the qube with passthrough, the thunderbolt controller is working:

bash-5.2# boltctl 
 ● Razer Core X
   ├─ type:          peripheral
   ├─ name:          Core X
   ├─ vendor:        Razer
   ├─ uuid:          c5030000-0080-7708-234c-b91c5c413923
   ├─ generation:    Thunderbolt 3
   ├─ status:        connected
   │  ├─ domain:     728c8780-c09b-710e-ffff-ffffffffffff
   │  ├─ rx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  ├─ tx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  └─ authflags:  none
   ├─ connected:     dim. 07 janv. 2024 13:03:30
   └─ stored:        no

How can I check whether I have it installed? I can’t find a package named vmm-xen in dom0 :thinking:


My mistake, it is the name of the GitHub repo that compiles Xen.
In dom0: dnf list | grep -E ^xen


Just updated to 4.2 myself and am now unable to boot any HVM with over ~3GB of memory (Linux and Windows); I get the “No bootable device” error. I tried reapplying your stubdom patch but reverted back to the original since I couldn’t get it working with that.

It looks like I’m on 4.17.2-8, though the stubdom has a different version number; does this match your system?

xen.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-legacy.x86_64                2001:4.13.0-1.fc32                @anaconda         
xen-hvm-stubdom-linux.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64            4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                              2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                          2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                           2001:4.17.2-8.fc37                @qubes-dom0-cached

xen-hvm-stubdom-legacy

should not even exist; how did you switch to 4.2?


EDIT: never mind, I didn’t see it was about the legacy one.


On an install started with 4.2-RC3, I also have these packages installed but I never tried GPU passthrough because this desktop can’t receive any external GPUs :sweat_smile:

Maybe it’s installed by default?

[solene@dom0 ~]$ dnf list --installed | grep xen
libvirt-daemon-xen.x86_64                         1000:8.9.0-6.fc37                 @anaconda         
python3-xen.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached
qubes-libvchan-xen.x86_64                         4.2.1-1.fc37                      @anaconda         
xen.x86_64                                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-linux.x86_64                      4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                             2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                               2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached

In my case, the BIOS (Dasharo) doesn’t provide options to prioritize one GPU over another. So what I did was fix the issue from the failed Qubes startup by switching to another TTY (Ctrl+Alt+F2). There was an annoying issue where it kept switching back to TTY1, but that was fortunately solvable.

It turns out that X11 crashing was the reason I couldn’t see the login screen (I saw that in the lightdm logs).

Making dom0 Xorg work with an Nvidia Graphics Card

I don’t understand why this is even an issue if the GPU is supposedly hidden, but for some reason Xorg tries to accommodate the NVIDIA graphics card instead of the Intel graphics. I fixed this with a config in /etc/X11/xorg.conf.d/00-user.conf:

Section "ServerLayout"
	Identifier "layout"
	Screen 1 "integrated" 0 0
EndSection
	  
Section "Device"
	Identifier "integrated"
	Driver "intel"
EndSection
	  
Section "Screen"
	Identifier "integrated"
	Device "integrated"
EndSection
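
Dropping that file in place and then restarting the display manager (e.g. sudo systemctl restart lightdm in dom0, or simply rebooting) should make Xorg come up on the integrated GPU.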

New problem: No bootable device

Now I’m stuck with an issue where the HVM can’t boot if any PCI-e device is attached. I see some others are also encountering this. I documented this issue in this other thread:


What do you mean by that?


  • Can you check if there is any remaining custom code in xen.xml or stubdom (anything related to max-ram-below-4g)?
  • Can you provide the log files and your configuration, and possibly the Xen logs (sudo xl dmesg) when you start the HVM (add the GRUB parameters ‘loglvl=all’ and ‘guest_loglvl=all’ to the host beforehand, as sketched below)?
  • Can you provide the result of sudo dnf list | grep -E ^xen in dom0, and the result of sudo lspci -vvv -s YOUR_GPU in dom0?
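
For the log-level part, the idea is roughly this (Xen’s own options live in a separate GRUB variable, and as usual the grub.cfg path may differ on UEFI installs):

# /etc/default/grub in dom0 -- raise Xen's log verbosity
GRUB_CMDLINE_XEN_DEFAULT="... loglvl=all guest_loglvl=all"

# regenerate and reboot, then start the HVM and capture the hypervisor log:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
# sudo xl dmesg > xen-dmesg.txt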

I upgraded from 4.1 with the dist upgrade (first with --all-pre-reboot, then rebooted, then --all-post-reboot).
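
For reference, that is the standard qubes-dist-upgrade flow:

# in dom0
sudo qubes-dist-upgrade --all-pre-reboot
# reboot, then:
sudo qubes-dist-upgrade --all-post-reboot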

xen-hvm-stubdom-legacy is also present on my T480 which was upgraded from 4.1 to 4.2.

I’ll play around with it a little and see if I can get it to work; worst case, I think a clean install is a safe bet. Thankfully I have that down to a pretty good process at this point.

Thanks for the info!