Create a Gaming HVM

How can I check that I have it installed? I can’t find a package named vmm-xen in dom0 :thinking:

1 Like

My mistake, that is the name of the GitHub repo that compiles Xen. To check what is installed, run this in dom0:

dnf list | grep -E ^xen

2 Likes

Just updated to 4.2 myself and am now unable to boot any HVM with over ~3GB of memory (Linux and Windows); I get the “No bootable device” error. I tried reapplying your stubdom patch but reverted to the original since I couldn’t get it working with that.

It looks like I’m on 4.17.2-8, though the stubdom packages have a different version number. Does this match your system?

xen.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-legacy.x86_64                2001:4.13.0-1.fc32                @anaconda         
xen-hvm-stubdom-linux.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64            4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                              2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                          2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                           2001:4.17.2-8.fc37                @qubes-dom0-cached

2 Likes
xen-hvm-stubdom-legacy

should not even exist. How did you switch to 4.2?

1 Like

EDIT: never mind, I didn’t see it was about the legacy one.


On an install started with 4.2-RC3, I also have these packages installed, but I never tried GPU passthrough because this desktop can’t take an external GPU :sweat_smile:

Maybe it’s installed by default?

[solene@dom0 ~]$ dnf list --installed | grep xen
libvirt-daemon-xen.x86_64                         1000:8.9.0-6.fc37                 @anaconda         
python3-xen.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached
qubes-libvchan-xen.x86_64                         4.2.1-1.fc37                      @anaconda         
xen.x86_64                                        2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-hvm-stubdom-linux.x86_64                      4.2.8-1.fc37                      @qubes-dom0-cached
xen-hvm-stubdom-linux-full.x86_64                 4.2.8-1.fc37                      @qubes-dom0-cached
xen-hypervisor.x86_64                             2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-libs.x86_64                                   2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-licenses.x86_64                               2001:4.17.2-8.fc37                @qubes-dom0-cached
xen-runtime.x86_64                                2001:4.17.2-8.fc37                @qubes-dom0-cached

In my case, the BIOS (Dasharo) doesn’t provide an option to prioritize one GPU over another. So what I did was debug the failed Qubes startup by switching to another TTY (Ctrl+Alt+F2). There was an annoying issue where the console kept switching back to TTY1, but that was fortunately solvable.

It turns out the reason I couldn’t see the login screen was that X11 was crashing (I saw that in the lightdm logs).

Making dom0 Xorg work with an Nvidia Graphics Card

I don’t understand why this is even an issue if the GPU is supposedly hidden, but for some reason Xorg tries to accommodate the Nvidia graphics card instead of the Intel graphics. I fixed this with a config in /etc/X11/xorg.conf.d/00-user.conf:

Section "ServerLayout"
	Identifier "layout"
	Screen 1 "integrated" 0 0
EndSection
	  
Section "Device"
	Identifier "integrated"
	Driver "intel"
EndSection
	  
Section "Screen"
	Identifier "integrated"
	Device "integrated"
EndSection
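
After writing that file, restarting the display manager (or simply rebooting) should make Xorg pick it up; a minimal sketch, assuming lightdm as in my setup:

sudo systemctl restart lightdm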

New problem: No bootable device

Now I’m stuck with an issue where the HVM can’t boot if any PCI-e device is attached. I see some others are also encountering this. I documented this issue in this other thread:

3 Likes

What do you mean by that?


  • Can you check whether there is any remaining custom code in xen.xml or the stubdom (anything related to max-ram-below-4g)?
  • Can you provide the log files and your configuration, and possibly the Xen logs (sudo xl dmesg) from when you start the HVM? Add the GRUB parameters loglvl=all and guest_loglvl=all to the host first (a sketch of how follows below).
  • Can you provide the output of sudo dnf list | grep -E ^xen in dom0, and the output of sudo lspci -vvv -s YOUR_GPU in dom0?
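
For reference, a minimal sketch of adding those Xen log levels on a default Qubes dom0 GRUB setup (an assumption on my side; the generated config lives elsewhere on EFI installs):

# /etc/default/grub in dom0 — append to the Xen options line:
GRUB_CMDLINE_XEN_DEFAULT="<existing options> loglvl=all guest_loglvl=all"

# Regenerate the config, then reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg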

I upgraded from 4.1 with the dist-upgrade tool (first with --all-pre-reboot, then rebooted, then --all-post-reboot).
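
For reference, that was along the lines of the following (a sketch, assuming the standard qubes-dist-upgrade tool):

sudo qubes-dist-upgrade --all-pre-reboot
# reboot dom0, then:
sudo qubes-dist-upgrade --all-post-reboot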

xen-hvm-stubdom-legacy is also present on my T480 which was upgraded from 4.1 to 4.2.

I’ll play around with it a little and see if I can get it to work. Worst case, I think a clean install is a safe bet; thankfully I have that down to a pretty good process at this point.

Thanks for the info!

Hello,

I just got a powerful laptop, installed Qubes 4.2, and read all day about how to set up video card passthrough in Qubes. I’ve read most everything from the 2020 articles to the present. I also bought a separate HDMI monitor, keyboard, and mouse. My laptop has Intel integrated graphics as well as a dedicated Nvidia card.

I have to start by saying that I am a pretty big newb when it comes to unix/commands/analyzing logs.

Before attempting anything, I wanted to write down the steps I need to take and see if I got anything wrong. Your answers will help me tremendously, as well as anyone else who attempts this without an advanced technical background.

  1. Are there any Qubes-bricking risks associated with attempting this? I spent about a week fully customizing my new laptop and Qubes, and now want to attempt this Windows install with GPU passthrough. But there are modifications and reboots involved. Is there a risk that the main Qubes OS gets bricked and requires a fresh reinstall? It would be useful to have a better understanding of this risk.

  2. Someone was saying that GPU passthrough on laptops is almost impossible because display devices are hard-wired to the same outputs, and they can’t be used at the same time.

https://forum.qubes-os.org/t/perfect-qubesos-hardware-with-a-dedicated-gpu-for-passthrough/18707/2

Is there a way for me to determine whether my Intel and Nvidia cards are hard-wired to the same outputs and can or can’t be used at the same time?

Based on this PDF https://device.report/manuals/precision-7770-external-display-connection-guide mine should work if I enable Hybrid Mode in the BIOS. I should be able to use the laptop screen plus a separate monitor connected via HDMI for the HVM.

  3. I see that it all starts with the IOMMU groups. If I see my Intel card as well as the Nvidia card with unique device IDs when running qvm-pci, is it still necessary to reboot into a live distro and edit GRUB?

If yes, can anyone be more specific about how to add that text in the live distro’s GRUB? I have a PartedMagic live disk, but I don’t see where I could add this, or where I would be able to see the IOMMU information I need.

  4. Under the GRUB modification you say to hide the secondary GPU from dom0, but then you specify that if the laptop gets stuck at startup, it means it is trying to use the GPU I tried to hide.

Is this a recoverable situation, or is a fresh install needed if you make that mistake?

I will pause here; these are the only points I need clarified before doing anything. Thank you very much, and I apologize for the noob questions.

1: I recommend you make a backup using the Qubes backup tool. Theoretically you can recover from anything you might break with the right option or command line, but since you are new to this, make a backup first.
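
For what it’s worth, a minimal command-line sketch from dom0 (the destination path and qube names are placeholders; the graphical backup tool does the same thing):

qvm-backup /run/media/user/backup-drive work personal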

2: I’ll let someone else answer that; I’ve never tried with a laptop.
3: It is better if you reboot using a live distro to gather the information; it can help you understand why something doesn’t work. (See the sketch after this list for one way to read the groups.)
4: It is a recoverable situation. In that case, once you reach GRUB, press “e” to edit the command line and remove what you previously added to hide the GPU.
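
For point 3, a minimal sketch of listing the IOMMU groups from a live distro shell (assuming the live kernel boots with the IOMMU enabled, e.g. intel_iommu=on added to its kernel command line):

# Print every PCI device together with its IOMMU group number:
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    group=$(basename "$(dirname "$(dirname "$dev")")")
    printf 'IOMMU group %s: %s\n' "$group" "$(lspci -nns "$(basename "$dev")")"
done | sort -V

Ideally your GPU sits in its own group (possibly together with its audio function).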

1 Like

I believe you will need a laptop with a MUX switch, and to blacklist the dGPU as you described. If the HDMI port on your laptop is wired directly to your dGPU, you are good to go.

The issue is getting the patches (or now, the new xen version) to work on your laptop if you have very new hardware.

Mine was working fine on 4.2-rc3, but for some reason the passthrough no longer works on newer versions of Qubes, and it results in the VMs crashing at boot.

Thank you for your reply.

3: It is better if you reboot using a live distro to gather the information; it can help you understand why something doesn’t work.

I got the latest Kali live ISO, but how do I boot it so I can enter those commands and see the information? Can I bother you for some step-by-step, for-dummies commands?

Can I skip checking the IOMMU groups? I think everything is fine: I have a high-end laptop built with virtualization and multiple monitors in mind, and I don’t think my GPU is grouped with anything else.

What can happen if I proceed without checking this IOMMU business that’s had me scratching my head for the past two days?

Or can I not proceed without the device IDs that I need to get?

This will make a lot of sense to me after I see it working and analyze what I had to do for that, but right now I am very confused as to what my next steps are.

How to find the IOMMU groups, and how to hide the GPU from dom0: these are the two big ones I can’t seem to get past.

I am willing to pay someone with BTC, Zelle, or PayPal if they can take an hour of their time to make this part as for-dummies as possible. Name your price; I really want to get this done ASAP.

Thank you!

Thank you for your reply.

My HDMI port is connected directly to the Nvidia card, and the laptop screen to the integrated one. The laptop has multiple modes, but in hybrid mode this is the configuration, as described in this article:

https://device.report/manuals/precision-7770-external-display-connection-guide

The issue is getting the patches (or now, the new xen version) to work on your laptop if you have very new hardware.

I haven’t even gotten there; I’m taking it one step at a time. Right now I am still trying to get past the reboot-and-figure-out-the-IOMMU-groups headache, then hiding the GPU from dom0, rebooting, and hopefully it will work.

But I am very confused about whether the hiding-from-dom0 part is done from a dom0 terminal with Qubes booted, or from GRUB before boot.

A step-by-step for dummies would be greatly appreciated and, as I said in my previous post, even monetarily rewarded if that is a motivator for anyone.

I am getting a real-life headache from trying to figure this out, having read everything for the past few days and still having no clue how to begin.

Thank you again for your reply.

Your specs are similar to mine, so I think you are good to go.

Regarding your other questions, the guide above lays them out fairly well.
I would flash Ubuntu and ‘test’ it to get your IOMMU groups, although in my experience most devices have separate IOMMU groups on modern hardware, especially the GPU devices.
Regarding hiding the PCI device, you need to modify the file /etc/default/grub in dom0 as outlined above (a sketch follows below).
Then, after you’ve done that, apply the stubdom patch until we get Qubes 4.2.1.
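
For reference, a minimal sketch of that dom0 GRUB change (01:00.0 is a placeholder for your GPU’s BDF; rd.qubes.hide_pci is the Qubes option that hides a device from dom0):

# /etc/default/grub in dom0 — append to the kernel options line:
GRUB_CMDLINE_LINUX="<existing options> rd.qubes.hide_pci=01:00.0"

# Regenerate the config, then reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg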

I mean every step I did in the following post up to the “no bootable device”:


Sure! And thanks for the help, btw!

This time I started off with Qubes 4.2-rc3 and tried the stubdomain patch (following this tip, which unfortunately didn’t work). If needed, I can try again some other time with 4.2 fully updated (the testing Xen version) and without the patch, but I have tried that before and the results were the same.

Here’s a high-level rundown of what I did:

  1. Installed Qubes 4.2-rc3 (without the graphics card).
  2. Added a GRUB entry with my xorg.conf workaround to get the X server to start.
  3. Started Qubes with the graphics card attached (it boots normally).
  4. Applied your script here, which as I understand it applies only to qubes whose names start with gpu_.
  5. Created a qube called gpu_manjaro and attached the graphics card (permissive=True and no-strict-reset=True).
  6. Booted from the Manjaro (Arch Linux) ISO.
  7. Then the error: “no bootable device”.

Then I upgraded dom0 to the latest stable version and rebooted.

  8. Tried to start gpu_manjaro from the Manjaro ISO; again, the same error: no bootable device.

Lastly, I tried the testing packages, which include xen 4.17.2-8, which is supposedly patched. This is the version I obtained the logs from. (The usual repo command is sketched after the list below.)

  9. Renamed gpu_manjaro to manjaro so the patch wouldn’t apply.
  10. Tried to start manjaro from the Manjaro ISO; again, the same error: no bootable device.
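
For reference, the testing packages normally come from the dom0 testing repo (a sketch of the standard command, not something specific to this setup):

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing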

The following are the logs with loglvl=all and guest_loglvl=all (hopefully I applied them correctly).

xl-dmesg.log (114.0 KB)
guest-manjaro-dm.log (40.9 KB)
guest-manjaro.log (38 Bytes)
lspci.log (1.4 KB)
dnf-list.log (721 Bytes)

1 Like

Isn’t this the known issue with booting from an ISO while a PCI device is attached?

Try first installing the OS from the ISO in the HVM without any PCI devices, then attach the PCI devices after the OS is installed; a sketch of the attach step follows.
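
For reference, a minimal sketch of the attach step from dom0 once the OS is installed (the qube name and the BDF dom0:01_00.0 are placeholders; the options match the ones used earlier in the thread):

# Find the backend:BDF identifier of the GPU:
qvm-pci

# Attach it persistently with the needed options:
qvm-pci attach --persistent gpu_manjaro dom0:01_00.0 -o permissive=True -o no-strict-reset=True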

1 Like

Indeed. Maybe it’s not passthrough-related at all. I had actually commented on that very issue 4 days ago.

I will try installing that way, but I suspect it won’t be able to boot after I attach the device. Booting from an ISO should be no different from booting from a disk.

I see two interesting lines in your logs:

[2024-01-12 14:55:15] pci 0000:00:00.0: can't claim BAR 6 [mem 0x000c0000-0x000dffff pref]: address conflict with Reserved [mem 0x000a0000-0x000fffff]
[2024-01-12 14:55:15] pcifront pci-0: Could not claim resource 0000:00:00.0/6! Device offline. Try using e820_host=1 in the guest config.
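
For context, e820_host is a guest-config boolean that exposes the host’s E820 memory map to the guest so the guest’s PCI hole matches the host’s. In a plain xl config it would be a one-line setting (a sketch; the xl documentation describes it for PV guests, so whether it applies to this stubdom setup, and how to wire it through Qubes’ libvirt templates, is uncertain):

# xl guest config (sketch):
e820_host = 1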
1 Like

What is the list of PCI devices you are trying to pass to this HVM?

Thanks for taking a look. I have seen some e820_host references in xen.xml. Do you think I should explore enabling this?

It’s two: the ones listed in lspci.log.