Create a Gaming HVM

Does anyone have any ideas as to what might be causing it? Would be great to get it ironed out for 4.2 as well :slight_smile:

@AWhite your question is, in fact, unclear. You mentioned a primary and a secondary slot. What do you mean by that? PCI slots in a desktop?

The vast majority of users in this forum are using laptops. Most often these have only one integrated GPU. Some high-end or gaming laptops also have a discrete GPU. Hence users are asking you to clarify how primary / secondary maps to integrated / discrete.

Having a discussion here means that participants will have different levels of understanding. It can be difficult to judge whether the person writing is confused, lazy, or maybe even knows a bit more than oneself. Hence it’s good policy to assume that one’s own position was perhaps communicated less clearly than intended, instead of assuming laziness or incompetence on the part of others.

I hope this thread can continue in a productive and friendly way without the moderation team needing to take action.

6 Likes

As the Original Post says, the Primary GPU is for Qubes and not for passthrough, and it refers to the fact that the GPU in the primary slot of the system is the Primary GPU.
The secondary GPUs are not in that slot.
The Primary GPU is often designated as such in the BIOS (or so the details say).

So that is one reason why I am asking. I have 2 GPUs in my system, and it honestly doesn’t matter whether they are integrated or discrete, because you can pass either one through as a device.

But the details said Primary; does it HAVE to be?

And anyway, even if you only have ONE GPU, you can pass it through and pass it back when the guest shuts down.
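For example, a minimal sketch using qvm-pci in dom0 (the BDF dom0:2b_00.0 and the qube name gaming-hvm are placeholders, and the exact flags may differ between Qubes releases):

```bash
# List PCI devices known to Qubes (BDFs use underscores, e.g. dom0:2b_00.0)
qvm-pci

# Attach the GPU to the HVM; permissive / no-strict-reset are often needed
# for GPUs but weaken isolation, so use them deliberately
qvm-pci attach --persistent \
    -o permissive=True -o no-strict-reset=True \
    gaming-hvm dom0:2b_00.0

# Detach again once the guest has shut down
qvm-pci detach gaming-hvm dom0:2b_00.0
```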

I assume they have not read it if they have not responded to the query, given that the query was stated as clearly as it was. There are other posts in this forum where people have responded to me in a way that shows they have not actually read the original post or any subsequent post.

I hope so too Sven, but my question was NOT unclear.

It was clear, concise, and to the point.
Anyone with some semblance of PC knowledge who responded to the question would have understood and read it correctly, since there is no hidden meaning or anything.

So if people had read the whole thing, and not just the one message, they would understand the question and be able to answer it appropriately.

1 Like

I’m running Qubes currently. I want to build a gaming HVM, but I would like to change my BIOS to coreboot + Heads first. Just wondering if that will prevent me from modifying my fan speeds using shortcuts, or cause issues with thermal management, etc.

Regarding the issue with the official NVIDIA driver, I wrote an article on my website: https://web.neowutran.ovh/qubes/articles/nvidia.pdf

5 Likes

( discussion related to this issue and article on github: Windows 10 crashes while installing nvidia drivers · Issue #9003 · QubesOS/qubes-issues · GitHub )

3 Likes

I’ll try the passthrough again once I’ve applied the patches merged a few hours ago \o/

2 Likes

I’m able to load the NVIDIA driver on Linux; now I need to figure out a proper method to enable the discrete graphics within the qube :smiley:

On Windows 10, it’s still not able to load the module.
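As a quick sanity check on the Linux side, a minimal sketch, assuming the GPU is already attached to the qube:

```bash
# Inside the Linux qube: see which kernel driver claimed the GPU
lspci -k | grep -A 3 -i 'vga\|3d'

# If the proprietary NVIDIA driver loaded, nvidia-smi should list the card
nvidia-smi
```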

3 Likes

Unfortunately I still couldn’t get the Windows driver loaded, and I’m pretty sure I didn’t go wrong anywhere, but great work anyway!

Do you have logs/information/… about what is wrong with your NVIDIA driver in Windows?

To my knowledge, the NVIDIA laptop GPU driver checks for a battery in order to run, and that is why Windows Device Manager throws an error for the driver.
See this guide for creating a fake battery (and the associated XML to pass it through to the VM). (That guide targets KVM, however; I don’t know how you’d do this with Xen.) Since both use QEMU, I assume it’s possible, maybe with a custom XML.
Note that I haven’t tried this and might have missed/misunderstood something.
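For reference, a rough sketch of the KVM-side trick (untested with Xen; ssdt-battery.dsl stands in for the fake-battery table source from the linked guide, and all paths are placeholders):

```bash
# Compile the fake-battery ACPI table from its ASL source
iasl ssdt-battery.dsl            # produces ssdt-battery.aml

# QEMU can then hand the extra table to the guest firmware, e.g.:
qemu-system-x86_64 ... -acpitable file=ssdt-battery.aml
```

With libvirt on KVM this is usually wired up via a qemu:commandline element in the domain XML; whether the Xen/libvirt stack in Qubes exposes an equivalent hook is exactly the open question.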

Hi, after successfully hiding the passthrough GPU from dom0, both GPUs now show pciback as their driver and dom0 is only accessible via tty. Is there any way to fix this? Thanks.

You can follow the same steps as in the GRUB modification section, but remove the PCI hiding option instead:
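Something like this (a sketch; 2b:00.0/2b:00.1 are the IDs from your setup, and the grub.cfg path depends on legacy vs. EFI boot):

```bash
# In dom0, edit /etc/default/grub and trim the unwanted device from the
# rd.qubes.hide_pci= entry in GRUB_CMDLINE_LINUX, keeping only the GPU
# you actually want hidden:
#   GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=2b:00.0,2b:00.1 ..."

# Then regenerate the GRUB config:
grub2-mkconfig -o /boot/grub2/grub.cfg             # legacy boot
# grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg   # EFI boot
```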

I am not sure what you mean, could you please clarify? It seems like the only step to do in GRUB is to hide the PCI devices.

Did you hide both GPUs from dom0, or just the one that you wanted to pass through, and the second GPU ended up hidden as well for some reason?
I thought you’d mistakenly hidden both GPUs in the GRUB config, so to fix it I suggested removing the hiding option for one of the GPUs from the GRUB config.

No, I only hid the passthrough GPU, but the dom0 GPU is also hidden for some reason; that’s the problem :smiley:

Set down exactly what steps you took to hide it.
It would also be useful to know the exact details of the PCI hardware.

I never presume to speak for the Qubes team. When I comment in the Forum I speak for myself.

It is a bit unclear what “retrieve the folder structure” of an IOMMU group means.
Can you please specify what, in practice, I should do in Qubes OS after executing this script in a live distro? Is it needed just to get the device IDs of the whole VGA IOMMU group in order to proceed with this guide, or should I somehow recreate the same subfolder structure (with links) in dom0 under /sys/kernel/iommu_groups/?

I’ve done everything exactly as described in the GRUB chapter. An Nvidia 2080 is used as the passthrough GPU, and a GTX 690 is used as the dom0 GPU.
The option used was “rd.qubes.hide_pci=2b:00.0,2b:00.1”.

However, both GPUs are now on pciback and neither uses the Nouveau driver. Could it be that the currently installed Nouveau driver is not compatible with the much older GTX 690?

This. E.g. you need to find out that your GPU consists of 2 devices: the video device 2b:00.0 and the audio device 2b:00.1. And you need to pass through both of them.
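On the earlier question about the IOMMU folder structure: the point is only to learn which devices share a group with the GPU, not to recreate anything in dom0. A minimal sketch (run in dom0 or the live distro):

```bash
# Print every IOMMU group and the devices it contains. Any device that
# shares a group with the GPU (typically its HDMI audio function) must
# be hidden and passed through together with it.
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done
```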

1 Like