Would this be the same for a dual AMD gpu setup?
Of course with the identifier and driver changed to AMD I guess.
I don't have an AMD GPU to test right now, but I'd assume it would work like this:
/etc/X11/xorg.conf.d/20-amdgpu.conf
Section "Device"
Identifier "AMDGPU1"
Driver "amdgpu"
BusID "PCI:1:0:0" # Replace this with the actual BusID of your GPU for Qubes
EndSection
The driver may be something else on Qubes, in which case you can check with lspci -vnn
and look for the "VGA compatible controller" line for your AMD device. The driver name will be in the "Kernel modules" or "Kernel driver in use" fields.
For the BusID, you'll need to decide which of the AMD GPUs you want to keep for Qubes (dom0); its bus address is the number at the start of that device's "VGA compatible controller" line.
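One gotcha worth knowing: lspci prints the slot in hex (e.g. 0a:00.0), while the xorg BusID uses decimal (PCI:10:0:0). A small sketch of the conversion, assuming a hypothetical slot of 0a:00.0 (substitute your own from lspci):

```shell
# Hypothetical slot copied from the start of your GPU's lspci line.
slot="0a:00.0"

# lspci shows bus/device in hex; xorg's BusID wants decimal.
bus=$((16#${slot%%:*}))     # "0a" -> 10
rest=${slot#*:}
dev=$((16#${rest%%.*}))     # "00" -> 0
fn=${rest#*.}               # function is a single digit, no conversion needed

echo "PCI:${bus}:${dev}:${fn}"   # -> PCI:10:0:0
```

For low bus numbers (like 01) hex and decimal happen to match, which is why configs copied verbatim often still work.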
Nice! However, in this case can you still pass it through to a VM? I'd suspect not, since it's in use by dom0, but I may be wrong.
I somehow managed to boot with rd.qubes.hide_pci, but the problem now is that when I start the gpu_HVM I get: "Guest has not initialized the display (yet)". I've looked into this, and someone claimed I should take an older firmware .bin file from 4.1 and replace the one in 4.2, but those files are identical, so the behavior doesn't change.
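For anyone following along, hiding the GPU from dom0 is usually done via the kernel command line; a sketch, where 01:00.0 is a placeholder for your passthrough GPU's bus address and the grub.cfg path may differ on EFI installs:

```
# /etc/default/grub  (01:00.0 is a placeholder BusID)
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=01:00.0"

# then regenerate the grub config, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```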
Happy to hear. You don't happen to know what made it work, do you?
Yes I can, and I go from seeing the Qubes startup log on the monitor to seeing my HVM whenever I boot it. I'm surprised it works, but it does haha. Might just be a case of me being lucky and my GPU model just working for some reason.