Create a Gaming HVM

I was able to change the memory by checking the memory balancing box, changing the RAM, then unchecking memory balancing again.
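For reference, the command-line equivalent should be roughly this (the qube name is a placeholder; as far as I know, maxmem 0 is what the memory-balancing checkbox toggles):

qvm-prefs gaming-hvm maxmem 0     # stop including the qube in memory balancing
qvm-prefs gaming-hvm memory 8192  # fixed RAM in MiB once balancing is off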

I've continued the guide to include the Xorg and audio configurations, but now the standalone won't even boot to a terminal.

Prior to installing Xorg, I think I got my GPU to work, although SuperTuxKart wouldn't run. Steam and some GPU benchmark programs did work, however.

I've made backups before installing Xorg. Maybe I did some configuration incorrectly. What would be the recommended option: restore from backup, or try to debug it?

You can try my script to create a GPU-ready template; maybe it can help you.

I’ve tried your script and it worked, with one note: after installing GRUB in the template, Xorg crashes if booted with the kernel provided by dom0, so I had to change the template to use the kernel provided by the qube in HVM mode.
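For anyone hitting the same crash, switching the template to its own kernel is done in dom0 roughly like this (the template name is a placeholder):

qvm-prefs gpu-template virt_mode hvm
qvm-prefs gpu-template kernel ''   # empty value = boot the kernel installed inside the qube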

Hey,

I’m encountering some issues with certain Linux distributions regarding GPU passthrough. I realize this may not be the best venue for distro-specific queries, but my concerns are centered around HVM GPU passthrough.

For context, I have a 4-core CPU, which might contribute to some of the challenges I’m facing, though I lack the expertise to pinpoint the specifics.

When attempting to use distributions such as Nobara, CachyOS, and HoloISO, I experience significant slowdowns in my virtual machine. “Slowdowns” may not be the most accurate term; it feels more like the performance deteriorates sporadically. My instinct suggests this issue might be linked to the fact that these distributions use non-standard kernels. Could that be the case?

Additionally, when I work with Fedora, I’ve noticed that the display manager defaults to the Bochs driver seemingly at random: out of 10 boots, it comes up 8 times with Bochs as the display driver, even though it recognizes that my GPU is connected.

If anyone could shed light on the underlying causes of these problems or provide an explanation, I would greatly appreciate it. Thank you!

I’ve followed the guide and everything works fine for Linux HVMs. However, on Windows it sees my AMD card but gives Error 43 (“Windows has stopped this device because it has reported problems.”).

If I disable Qubes’ video-model so it only has the GPU, it gets stuck on a black screen with these two lines:

0x0000:0x0a:0x00.0x0: ROM: 0xf000 bytes at 0x6493a018
0x0000:0x0a:0x00.0x0: ROM: 0xae00 bytes at 0x64856018
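For reference, I disabled the emulated video adapter in dom0 roughly like this (the qube name is a placeholder):

qvm-features windows-gaming video-model none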

I installed Windows manually with an ISO and have not installed Qubes Windows Tools. This happens on both Windows 11 and Windows 10.

Does anyone have an idea what is causing this and/or how I might fix it?

Good day,

I have never had to deal with GRUB before, so I need some clarification on the instructions.
I am determined to get this setup working!

So you mention booting a live distro; by this, do you mean booting into a different Linux distro, like, say, Tails or live-mode Kicksecure?
Also, it is my understanding that GRUB can be accessed by holding the Shift key when booting, and that its config files can also be edited via the command line, so I'm not sure which approach I am supposed to take.
From what I understand of this read, we are supposed to boot into a live distro, access GRUB, add the parameters which will enable IOMMU(?), and then pull the folder structure with the script?
This part is not quite clear to me; is this something I just do by starting a qube?
Also, apologies if I missed it, but I don't see where this is then used later on?

I am aware this is for more advanced Linux users, so I do apologize for all the questions.

Use some standard Linux live distro: Fedora/Ubuntu/Debian/etc. Tails/Kicksecure may work as well, or maybe they won't because of some of the hardening there.

Add the kernel command line options temporarily in the GRUB menu.
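For example: highlight the live distro's entry in the GRUB menu, press e, and append the options to the end of the line starting with linux, then press Ctrl+X (or F10) to boot. These are the typical options (the exact set depends on your CPU; the edit is not persistent):

intel_iommu=on iommu=pt   # Intel CPUs
amd_iommu=on iommu=pt     # AMD CPUs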

Yes.

No, you need to boot the Live Linux OS on your hardware instead of Qubes OS, not inside the qube.

Hi, thanks so much for clarifying!

I will try Ubuntu, since I believe I already have it flashed on a USB and don't want to have to do everything again due to the issues with the hardening present in Kicksecure.
So, just to clarify the process to access GRUB: would I just select the boot option of the live distro, then press or hold Shift as it attempts to boot, and add the parameters?

Also, regarding pulling the file structure, I still don't understand the point of the script. My thought is that the purpose of booting into another distro and pulling the file structure with the changes is to import it somewhere in Qubes; however, I don't see that happening, which indicates a lack of understanding on my part. The other possibility would be that any change made with GRUB affects the other systems, but then I still ask myself why run the script. Perhaps I simply do not understand enough.

Once I get up to setting up the standalone after hiding the GPU from dom0: I would assume that if one wanted multiple standalones, each standalone VM would need drivers installed, and that each one could only be used one at a time, by manually allocating the GPU to whichever of those VMs requires it?

Kernel/KernelBootParameters - Ubuntu Wiki

You need to check whether there are other devices in the same IOMMU group or not.
If there are other devices in the same IOMMU group, then you may need to pass all of them through to the qube for it to work.
Or, if it works when you pass through just the GPU without the other devices in its IOMMU group, then you need to understand the possible security implications of that setup.
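A minimal sketch for listing the groups from the live distro (run after booting with the IOMMU options enabled):

# Print every PCI device, grouped by IOMMU group
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done

If your GPU shares a group with anything other than its own audio function, keep the note above about security implications in mind.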

Yes.
You can also create a template and app qubes based on it instead of multiple standalones, if that fits your case.
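A rough sketch of that flow (all names are placeholders, and the --persistent flag is for Qubes 4.1; device handling may differ on other releases):

qvm-clone fedora-40 gpu-template                  # clone some base template with the drivers
qvm-create --class AppVM --template gpu-template --label red gaming
qvm-pci attach --persistent gaming dom0:03_00.0   # BDF as listed by qvm-pci, with underscore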

Hi,

So I used the command to hide the GPU after getting its device ID, and I changed the priority output to my integrated graphics in the NB configs, since originally after reboot it wouldn't get past startup; that of course fixed the issue.
Now, after running sudo lspci -vvn, I can see that the GPU is showing amdgpu instead of pciback as the driver in use.

I did everything correctly beforehand, and there was an issue after reboot as expected, so I'm not sure why it's doing this?

What’s the output of this command in dom0 (you can remove the UUID)?

cat /proc/cmdline

What’s the BDF of your GPU that you want to hide in the output of lspci?
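You can find it like this, for example:

lspci -nn | grep -Ei 'vga|3d|display'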

I'm uncertain how to remove the UUID (if one even should?).

Output:
placeholder root=/dev/mapper/qubes_dom0-root ro rd.luks.uuid=luks-213769-7181-46bc-80ca-4ec9947c5c662 rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles 6.6.48.qubes.fc37.x86_64 rhgb quiet usbcore.authorize_default=0

Well, according to the guide, the kernel driver in use should state pciback instead of the amdgpu it currently states. Based on the guide, this indicates to me that it did not work correctly, i.e. the GPU is not properly hidden from dom0, if at all. That is contrary to my experience of hitting the expected issue after hiding it and rebooting, which then needed fixing.

You don’t have rd.qubes.hide_pci, which should hide your GPU from dom0, in the kernel command line options.
Did you add it in /etc/default/grub and regenerate the GRUB config?
What’s the content of /etc/default/grub in dom0?

Hi,

I did indeed execute the following commands with the device ID of the GPU:

GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0 "

then regenerated the GRUB config:

grub2-mkconfig -o /boot/grub2/grub.cfg

I'm not certain in regard to that directory, as the commands provided don't indicate it.

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=false
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="rd.luks.uuid=luks-213769c9-7181-46bc-80ca-4ec9447c5c62 rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles 6.6.48-1.qubes.fc37.x86_64 rhgb quiet"
GRUB_DISABLE_RECOVERY="true"
GRUB_THEME="/boot/grub2/themes/qubes/theme.txt"
GRUB_CMDLINE_XEN_DEFAULT="console=none dom0_mem=min:1024M dom0_mem=max:4096M ucode=scan smt=off gnttab_max_frames=2048 gnttab_max_maptrack_frames=4096"
GRUB_DISABLE_OS_PROBER="true"
GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX usbcore.authorized_default=0"

You need to add this line at the end of /etc/default/grub file in dom0:

GRUB_CMDLINE_LINUX="$GRUB_CMDLINE_LINUX rd.qubes.hide_pci=0a:00.0 "

But change 0a:00.0 to the BDF of the GPU that you want to hide, as shown in the lspci output in dom0.
Then run this command in dom0:

grub2-mkconfig -o /boot/grub2/grub.cfg
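After rebooting, you can verify the result with the same BDF you hid:

lspci -k -s 0a:00.0   # 'Kernel driver in use' should now say pciback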

My apologies, I do see the mention of the file now.

Okay, so the output when regenerating:

/etc/default/grub: line 12: GRUB_CMDLINE_LINUXrd.luks.uuid=luks-213769c9-7181-46bc-80ca-4ec9447c5c62 rd.lvm.lv=qubes_dom0/root rd.lvm.lv=qubes_dom0/swap plymouth.ignore-serial-consoles 6.6.48-1.qubes.fc37.x86_64 rhgb quiet usbcore.authorized_default=0 rd.qubes.hide_pci=03_00.0: No such file or directory

So I thought okay, perhaps it should be 03:00.0 instead, as indicated in the Qubes Manager, compared to the dom0:03_00.0 shown when executing qvm-pci; however, I get this error either way.

Change 0a:00.0 to 03:00.0.

I have done this. However, as mentioned, it gives me the output shown above.
That's why I thought the problem was how I input the device ID; changing it to 03:00.0 still led to the same output, just with that minor change.

Update:
I did some trial and error, and after adding a space between the two lines it worked (I don't know how or why) and stated that it added the new changes. Basically no errors; however, after reboot the kernel driver in use again states amdgpu instead of pciback.

Post the output of this command in dom0:

cat /proc/cmdline

And the content of the /etc/default/grub file in dom0.
You can copy from dom0 like this:
How to copy from dom0 | Qubes OS