I’m offering a bounty of 0.7 XMR for a solution that makes a single standalone VM work seamlessly, with NVIDIA drivers, in both HVM and PVH virtualization modes.
Current situation
I have a standalone VM with:
- Installed the `akmod-nvidia-open` driver (preferred for modern GPUs per NVIDIA documentation)
- Installed the `xorg-x11-drv-nvidia-cuda` package
- Followed the guide “Salt: automating NVIDIA GPU passthrough: fedora 41” for using the NVIDIA driver on HVM (the guide is better now)
- `nvidia-smi` and other GPU-dependent tools work in HVM mode
The problem
While the VM can boot in both HVM and PVH modes after NVIDIA driver installation, in PVH mode the graphics subsystem fails completely. This means:
- No GUI applications will launch (browsers, file managers, terminals)
- I cannot use the same VM in both modes without issues
Required solution
I need a method that allows:
- A single standalone VM that works with NVIDIA drivers in both modes
- Ability to simply switch between HVM and PVH via Qubes Manager
- When in PVH mode, preferably using the kernel from dom0 (not pvgrub2-pvh mode or “none”)
- Fully functional graphics in both virtualization modes
- Preferred VM OS: fedora-41-xfce (a bonus if it also works on Fedora 42)
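For reference, switching between the two modes can also be done from the dom0 command line, equivalent to the Qubes Manager setting. This is only a sketch: `gpu-vm` is a hypothetical VM name, and the commands are printed rather than executed so you can review them first.

```shell
#!/bin/sh
# Sketch (dom0): switch a VM between PVH and HVM from the CLI.
# "gpu-vm" is a hypothetical VM name; replace it with your own.
VM=gpu-vm
MODE=pvh   # or: hvm

CMDS="qvm-shutdown --wait $VM
qvm-prefs $VM virt_mode $MODE
qvm-start $VM"

# Dry run: print the commands. In dom0, pipe to sh to execute them.
echo "$CMDS"
```

Note that `qvm-prefs $VM kernel` is the related property controlling whether the VM boots the dom0-provided kernel, which matters for the PVH requirement above.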
Implementation notes
- I’ve discovered that `dmidecode` can be used within the VM to detect the virtualization mode before starting the graphics subsystem
- I’m open to ANY method for switching between modes within the VM
- Creating multiple full root filesystem disks and switching between them is NOT a valid solution if each disk must store a complete copy of the root
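The `dmidecode` idea above can be sketched as a boot-time check. The assumption (not verified here) is that a PVH guest exposes no SMBIOS tables, so `dmidecode` fails, while an HVM guest returns a product name; run it as root. The `lightdm` reference is a guess for a fedora-41-xfce template and is left as a no-op.

```shell
#!/bin/sh
# Sketch: detect HVM vs PVH inside the VM before the graphics subsystem starts.
# Assumption: PVH exposes no SMBIOS tables, so dmidecode exits non-zero there.
if dmidecode -s system-product-name >/dev/null 2>&1; then
    MODE=hvm
else
    MODE=pvh
fi
echo "Detected virtualization mode: $MODE"

# Example use: only bring up the display manager under HVM.
# (lightdm is an assumed display manager; ':' keeps this a no-op in the sketch.)
if [ "$MODE" = hvm ]; then
    : systemctl start lightdm
fi
```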
Has anyone found a way to make this work? Any guidance or solutions would be greatly appreciated.