I’m offering a bounty of 0.7 XMR for a solution to run a standalone VM that can work seamlessly in both HVM and PVH virtualization modes with NVIDIA drivers.
Current situation
I have a standalone VM with:
- The akmod-nvidia-open driver installed (the open kernel modules are preferred for modern GPUs per NVIDIA's documentation)
- nvidia-smi and other GPU-dependent tools working in HVM mode
The problem
While the VM can boot in both HVM and PVH modes after NVIDIA driver installation, in PVH mode the graphics subsystem fails completely. This means:
- No GUI applications will launch (browsers, file managers, terminals)
- I cannot use the same VM in both modes without issues
Required solution
I need a method that allows:
- A single standalone VM that works with NVIDIA drivers in both modes
- The ability to simply switch between HVM and PVH via Qubes Manager
- In PVH mode, preferably the kernel provided by dom0 (not pvgrub2-pvh mode or "none"); see the qvm-prefs sketch after this list
- Fully functional graphics in both virtualization modes
- Preferred VM OS: fedora-41-xfce (a bonus if it also works on Fedora 42)
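For reference, a minimal sketch of how those kernel options map to qvm-prefs in dom0; the qube name work-gpu and the kernel version string are placeholders:

```bash
# dom0: qube name and kernel version are examples only
qvm-prefs work-gpu kernel 6.6.48-1.qubes.fc37  # dom0-provided kernel (what I prefer)
qvm-prefs work-gpu kernel pvgrub2-pvh          # boot the kernel installed inside the VM
qvm-prefs work-gpu kernel ''                   # "none": the qube boots its own bootloader
```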
Implementation notes
- I've discovered that dmidecode can be used inside the VM to detect the virtualization mode before the graphics subsystem starts (a sketch follows this list)
- I'm open to ANY method of switching between modes within the VM
- Creating multiple full root filesystem disks and switching between them is NOT a valid solution if both disks must store complete copies of the root
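A minimal sketch of that dmidecode check, assuming that Xen HVM exposes SMBIOS data with the product name "HVM domU" while PVH exposes no SMBIOS at all (so dmidecode returns nothing there); the Xorg config paths are hypothetical:

```bash
#!/bin/bash
# Run inside the VM (as root) before the display manager starts.
# Xen HVM guests report "HVM domU" in SMBIOS; PVH guests have no SMBIOS,
# so dmidecode produces no output and we fall through to the else branch.
if dmidecode -s system-product-name 2>/dev/null | grep -q 'HVM domU'; then
    # HVM: enable the NVIDIA Xorg snippet (paths/names are examples)
    ln -sf /etc/X11/nvidia.conf.hvm /etc/X11/xorg.conf.d/90-nvidia.conf
else
    # PVH: make sure Xorg never tries to load the NVIDIA driver
    rm -f /etc/X11/xorg.conf.d/90-nvidia.conf
fi
```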
Has anyone found a way to make this work? Any guidance or solutions would be greatly appreciated.
My apologies, I forgot to mention this: the GPU is not required in PVH mode. With all PCI devices removed, the VM should work as well as it does in HVM with the GPU. PCI passthrough under PVH is not yet supported by Xen, so it is not required for solving this problem within this bounty.
The required workflow is this: in HVM mode with the GPU attached, I open a terminal, and nvidia-smi and everything else related to the NVIDIA GPU works there.
Then I shut the VM down, remove all PCI devices from it, switch it from HVM to PVH mode, and launch, for example, Chrome. Chrome starts and works, and a window appears.
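For what it's worth, that switch can also be scripted from dom0 instead of clicking through Qubes Manager; a sketch, where the qube name work-gpu and the GPU's BDF 0a:00.0 are examples:

```bash
# dom0: from HVM+GPU to plain PVH
qvm-shutdown --wait work-gpu
qvm-pci detach work-gpu dom0:0a_00.0     # remove the passed-through GPU
qvm-prefs work-gpu virt_mode pvh
qvm-start work-gpu

# dom0: and back again
qvm-shutdown --wait work-gpu
qvm-prefs work-gpu virt_mode hvm
qvm-pci attach --persistent work-gpu dom0:0a_00.0
qvm-start work-gpu
```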
Installing akmod-nvidia (or akmod-nvidia-open) currently breaks Xorg in PVH mode.
Maybe I'm misunderstanding what you want, but it works for me out of the box with Debian 12 if I also switch the kernel.
HVM uses a kernel provided by the qube, since it needs custom kernel modules for the NVIDIA driver; PVH is started with a kernel provided by Qubes OS.
Maybe this is a Debian-specific feature? On Fedora it causes X to crash when running in PVH mode. I can connect via the emergency console from a dispvm, but no matter how many logs I collect and how much I try to understand them, on my own, together with AI, and so on, the problem seems to be this: the NVIDIA drivers replace many standard libraries, which then depend on the NVIDIA driver. That driver cannot be used with the kernel from dom0 for architectural reasons, so nvidia-drm crashes on startup, the libraries depend on it, and the X server depends on those libraries.
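For anyone who wants to reproduce the diagnosis, these are the kinds of checks I mean, run inside the VM from the emergency console (the package name is RPM Fusion's and may differ):

```bash
# Inside the VM, booted in PVH mode with the dom0 kernel:
lsmod | grep -i nvidia                  # nvidia/nvidia_drm modules (absent here)
journalctl -b -u qubes-gui-agent        # GUI agent / Xorg crash messages
ldconfig -p | grep -i nvidia            # NVIDIA userspace libs in the linker cache
rpm -qf /usr/lib64/libEGL.so.1          # which package owns the EGL dispatcher
rpm -ql xorg-x11-drv-nvidia-libs 2>/dev/null | head   # files the driver installed
```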
Unfortunately, even if this works, I cannot accept it as a solution, since Debian is not suitable for my task. No matter how hard I tried, it is completely unsuitable for anything more complex than a browser. Maybe it's because of bookworm, maybe something else, but that's how it is.
However, you gave me an idea, and I will try the solution that just came to mind.
Update: the idea of using the in-VM kernel in PVH mode (via pvgrub2-pvh) was original but unfortunately unsuccessful. X still crashes together with qubes-gui-agent. It's now Fedora 42, but despite the upgrade the problem remains the same. If I understand the logs correctly, literally everything crashes: X, liblzma, libtinfo, libxml2, libEGL, and almost a hundred other libraries. It seems that Debian does not use akmod-nvidia, or akmods at all, and that is exactly what is needed for the NVIDIA GPU to work in a Fedora VM. I checked everything else (albeit on Fedora 41), but apart from destroying the VM it accomplished nothing.
I tried to do something similar to get DaVinci Resolve working (to try to get it to detect both the iGPU and the GPU), without much success.
Why are you trying to do this - and why not just run it in HVM mode?