Create a Gaming HVM

I ran nvidia-settings in PopOS and it showed the drivers weren’t loading. It said the X server isn’t available.

Back to Xorg hell.

So I went back to just using a Debian standalone template. I changed it to HVM since PV won’t allow me to attach anything.

I installed nvidia-detect and it indicates I have an Nvidia graphics card. I installed the Nvidia drivers. I am still not sure how the second monitor, Xorg, and another keyboard factor into all of this.

Possibly related

It looks like I do need a different monitor, and to use Xorg to pass the GPU through with HVM.

Does anyone know why I can’t just do this on the same screen? I don’t understand why there needs to be different hardware. Is this a technical requirement of Qubes (having a different monitor), or has no one figured out how to make it work?

Can I just apply your Xorg settings to something like Debian with Nvidia installed, or are modifications necessary?

I don’t understand the Monitor part in the Xorg configuration. Can I copy the monitor section just like that?

I commented the parts of the file that need to be modified.

# name of the driver to use. Can be "amdgpu", "nvidia", or something else
# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN. 
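For what it’s worth, here is a sketch of where those two comments sit in a minimal xorg.conf. The identifiers are made up, and the Modeline shown is the standard CVT timing for a 1920x1080 @ 60 Hz panel; generate one for your actual screen with `cvt` or the arachnoid.com tool, since a wrong Modeline gives a blank or out-of-range display:

```
Section "Device"
	Identifier "gpu"
	# name of the driver to use. Can be "amdgpu", "nvidia", or something else
	Driver "nvidia"
EndSection

Section "Monitor"
	Identifier "external"
	# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN.
	# Example timing generated with `cvt 1920 1080 60`:
	Modeline "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
EndSection

Section "Screen"
	Identifier "screen"
	Device "gpu"
	Monitor "external"
EndSection
```

So the Monitor section cannot be copied as-is between machines: the Modeline is per-screen, while the Device section only changes if the driver changes.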

I have never tried it, but supposedly this guide offers a solution for doing it on a single screen:

I’m going to try it. Not sure if I am good enough to figure it out.

I tried GPU passthrough on a different laptop than last time (T470 vs NV41), but when I plug in my external GPU over Thunderbolt, the device names don’t appear in lspci or qvm-pci; they are still named “thunderbolt controller” :thinking:

In case someone has an idea.

edit²: Well, on an Ubuntu live CD, the Thunderbolt card doesn’t appear either :confused: I wonder if this is because I have a Thunderbolt 3 GPU enclosure on a Thunderbolt 4 port :woman_shrugging:

edit³: I tried again on Ubuntu; it turns out I didn’t plug in the Thunderbolt cable correctly :woman_facepalming: . It shows up on Ubuntu but not on Qubes OS now.

Here is the lspci -vnn output for the Thunderbolt-related devices (GPU is connected :woman_shrugging: )

00:0d.0 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 USB Controller [8086:461e] (rev 02) (prog-if 30 [XHCI])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: medium devsel
	Memory at 80940000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: xhci_pci

00:0d.2 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 NHI #0 [8086:463e] (rev 02) (prog-if 40 [USB4 Host Interface])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: fast devsel, IRQ 17
	Memory at 80900000 (64-bit, non-prefetchable) [size=256K]
	Memory at 80970000 (64-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: thunderbolt

In the qube with passthrough, the Thunderbolt controller is working:

bash-5.2# boltctl 
 ● Razer Core X
   ├─ type:          peripheral
   ├─ name:          Core X
   ├─ vendor:        Razer
   ├─ uuid:          c5030000-0080-7708-234c-b91c5c413923
   ├─ generation:    Thunderbolt 3
   ├─ status:        connected
   │  ├─ domain:     728c8780-c09b-710e-ffff-ffffffffffff
   │  ├─ rx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  ├─ tx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  └─ authflags:  none
   ├─ connected:     dim. 07 janv. 2024 13:03:30
   └─ stored:        no

How can I check that I have it installed? I can’t find a package named vmm-xen in dom0 :thinking:


My mistake, it is the name of the GitHub repo that compiles Xen.
In dom0: dnf list | grep -E ^xen


I just updated to 4.2 myself and am now unable to boot any HVM with over ~3 GB of memory (Linux and Windows), getting the “No bootable device” error. I tried reapplying your stubdom patch but reverted to the original since I couldn’t get it working with that.

It looks like I’m on 4.17.2-8, though stubdom has a different version number; does this match your system?

xen.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-legacy.x86_64                2001:4.13.0-1.fc32                @anaconda         
xen-hvm-stubdom-linux.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64            4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                              2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                          2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                           2001:4.17.2-8.fc37                @qubes-dom0-cached

xen-hvm-stubdom-legacy

should not even exist; how did you switch to 4.2?


EDIT: nevermind, I didn’t see it was about the legacy one


On an install started with 4.2-RC3, I also have these packages installed but I never tried GPU passthrough because this desktop can’t receive any external GPUs :sweat_smile:

Maybe it’s installed by default?

[solene@dom0 ~]$ dnf list --installed | grep xen
libvirt-daemon-xen.x86_64                         1000:8.9.0-6.fc37                 @anaconda         
python3-xen.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached
qubes-libvchan-xen.x86_64                         4.2.1-1.fc37                      @anaconda         
xen.x86_64                                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-linux.x86_64                      4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                             2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                               2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached

In my case, the BIOS (Dasharo) doesn’t provide options to prioritize one GPU over another. So what I did was fix the issue from the failed Qubes startup by switching to another TTY (Ctrl+Alt+F2). There was an annoying issue where it kept switching back to TTY1, but that was fortunately solvable.

It turns out that X11 crashing was the reason I couldn’t see the login screen (I saw that in the lightdm logs).

Making dom0 Xorg work with an Nvidia Graphics Card

I don’t understand why this is even an issue if the GPU is supposedly hidden, but for some reason Xorg tries to accommodate the Nvidia graphics card instead of the Intel graphics. I fixed this with a config in /etc/X11/xorg.conf.d/00-user.conf:

Section "ServerLayout"
	Identifier "layout"
	Screen 1 "integrated" 0 0
EndSection

Section "Device"
	Identifier "integrated"
	Driver "intel"
EndSection

Section "Screen"
	Identifier "integrated"
	Device "integrated"
EndSection

New problem: No bootable device

Now I’m stuck with an issue where the HVM can’t boot if any PCI-e device is attached. I see some others are also encountering this. I documented this issue in this other thread:


What do you mean by that?


  • Can you check if there is any remaining custom code in xen.xml or stubdom (anything related to max-ram-below-4g)?
  • Can you provide the log files and your configuration, and possibly the Xen logs (sudo xl dmesg) when you start the HVM? (Add the GRUB parameters ‘loglvl=all’ and ‘guest_loglvl=all’ to the host beforehand.)
  • Can you provide the result of sudo dnf list | grep -E ^xen in dom0, and the result of sudo lspci -vvv -s YOUR_GPU in dom0?

I upgraded from 4.1 with the dist upgrade (first with --all-pre-reboot, then rebooted, then --all-post-reboot).

xen-hvm-stubdom-legacy is also present on my T480 which was upgraded from 4.1 to 4.2.

I’ll play around with it a little and see if I can get it to work; worst case, I think a clean install is a safe bet. Thankfully I have that down to a pretty good process at this point.

Thanks for the info!

Hello,

I just got a powerful laptop, installed Qubes 4.2, and read all day about how to create a video card passthrough in Qubes. I’ve read most everything from the 2020 articles to the present. I also bought a separate HDMI monitor, keyboard, and mouse. My laptop has Intel integrated graphics as well as a dedicated Nvidia card.

I have to start by saying that I am a pretty big newb when it comes to unix/commands/analyzing logs.

Before I even attempted anything, I wanted to write down what the steps I need to take are and see if I got anything wrong. Your answers will help me tremendously as well as any others who may attempt this and are not on an advanced technical level.

  1. Are there any Qubes bricking risks associated with attempting this? I spent about a week fully customizing my new laptop and Qubes, and now want to attempt this Windows install with the GPU passthrough. But there are modifications and reboots involved. Is there a risk that the main Qubes OS gets bricked and may require a fresh reinstall? Would be useful to have a better understanding of this risk.

  2. Someone was saying that GPU passthrough on laptops is almost impossible because display devices are hard-wired to the same outputs, and they can’t be used at the same time.

https://forum.qubes-os.org/t/perfect-qubesos-hardware-with-a-dedicated-gpu-for-passthrough/18707/2

Is there a way for me to determine whether my Intel and Nvidia cards are hard wired to the same outputs and can or can’t be used at the same time?

Based on this PDF https://device.report/manuals/precision-7770-external-display-connection-guide mine should work if I enable Hybrid Mode in BIOS. I should be able to use the laptop screen plus a separate monitor over HDMI for the HVM.

  3. I see that it all starts with the IOMMU groups. If I see my Intel card as well as my Nvidia card with unique device IDs when running “qvm-pci”, is it still necessary to reboot using a live distro and edit GRUB?

If yes, can anyone be more specific about how to add that text in the live distro’s GRUB? I have a PartedMagic live disk but I don’t see where I could add this, or where I would be able to see the IOMMU information I need.

  4. Under GRUB modification you say to hide the secondary GPU from dom0, but then you specify that if the laptop gets stuck at startup, it means I am trying to use the GPU I tried to hide.

Is this a recoverable situation? Or if you make that mistake, is a fresh install needed?

I will pause here, these are the only points I need clarification before doing anything. Thank you very much and I apologize for the noob questions.

1:

I recommend making a backup using the Qubes Backup tool.
Theoretically you can recover from anything you could break with the correct option or command line.
But since you are new to this, make a backup.

2: I’ll let someone else answer that; I’ve never tried with a laptop.
3: It is better if you reboot using a live distro to gather the information. It can help to understand why something doesn’t work.
4: It is a recoverable situation; in this case, once you reach GRUB, press “e” to edit the GRUB command line and remove what you added previously to hide the GPU.


I believe you will need a laptop with a mux switch and to blacklist the dGPU as you described. If the HDMI port on your laptop is directly connected to your dGPU, you are good to go.

The issue is getting the patches (or now, the new xen version) to work on your laptop if you have very new hardware.

Mine was working fine on 4.2-RC3, but for some reason the passthrough no longer works on newer versions of Qubes, and it results in the VMs crashing at boot.

Thank you for your reply.

3: It is better if you reboot using a live distro to gather the information. It can help to understand why something doesn’t work.

I got the latest Kali live ISO, but how do I boot it so that I can enter those commands to see the information? Can I bother you for some step-by-step, for-dummies commands?

Can I bypass checking the IOMMU on reboot? I think everything is fine; I have a high-end laptop built with virtualization and multiple monitors in mind, and I don’t think my GPU is grouped with anything else.

What can happen if I proceed without checking this IOMMU that’s got me scratching my head for the past 2 days?

Or can I not proceed without the device ID that I need to get?

This will make a lot of sense to me after I see it working and analyze what I had to do for that, but right now I am very confused as to what my next steps are.

How to find the IOMMU groups, and how to hide the GPU from dom0: these are the two big ones that I don’t seem to be able to get past.
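For the hiding part, Qubes uses the rd.qubes.hide_pci kernel parameter. A sketch, assuming (hypothetically) that lspci shows the Nvidia card at 01:00.0; substitute your card’s actual address:

```
# dom0: /etc/default/grub
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=01:00.0"
```

Then regenerate the GRUB config with sudo grub2-mkconfig -o /boot/grub2/grub.cfg (EFI installs use /boot/efi/EFI/qubes/grub.cfg instead) and reboot. If it worked, sudo lspci -vvv -s 01:00.0 in dom0 should report “Kernel driver in use: pciback”, like the outputs earlier in this thread.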

I am willing to pay someone with BTC, Zelle, or PayPal if they can take one hour of their time to make this part as for-dummies as possible. Name your price. I really want to get this done ASAP.

Thank you!