Create a Gaming HVM

Not really. On Linux I can emulate Switch games using Yuzu, or play games like Death Stranding or Control (CPU bound). The laptop has an i5-7300U; it would take a while before being limited by Thunderbolt bandwidth. Maybe with a 4K screen it would saturate faster?

If you use the eGPU as a discrete GPU, the bandwidth is limiting because the data has to travel both ways; if you use it with an external display, you have more bandwidth because the rendered output doesn’t need to go back through Thunderbolt.

Yes, it works if you just use it for the external display, but you could do the same with a TB dock without the GPU.

I thought you wanted to connect the GPU and use it to play games that need accelerated graphics.

That’s exactly what I’ve done, and it’s working.


Great guide but I still have some questions.

For the Iommu Group do I have to do everything inside the grub cmd of the usb linux live distro (I have to paste the #!/bin/bash part)?
And if yes how can I deal with my OS being LUKS encrypted?

Also, what’s the deal with the max-ram-below-4g?
From what I understood, this means it will allow up to 2 GB of RAM for your GPU passthrough HVM qube, but then I don’t understand how people in other threads have straight up 4090s working. (And what about the VRAM?)

bump, I am also interested

“For the Iommu Group do I have to do everything inside the grub cmd of the usb linux live distro (I have to paste the #!/bin/bash part)?”: No. Boot into any Linux live distro, use the standard terminal emulator to create a script, and execute it.
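For reference, the script in question is usually a variant of the well-known IOMMU-group listing loop. A sketch (assumptions: the live system is booted with `intel_iommu=on` or `amd_iommu=on` on the kernel command line, and `lspci` is available; `/sys/kernel/iommu_groups` is the standard sysfs path):

```shell
#!/bin/bash
# List every IOMMU group and the PCI devices it contains.
# Takes an optional base path argument so it can be tested against a fake tree.
list_iommu_groups() {
    local base="${1:-/sys/kernel/iommu_groups}"
    shopt -s nullglob
    for g in "$base"/*; do
        echo "IOMMU Group ${g##*/}:"
        for d in "$g"/devices/*; do
            # lspci resolves the PCI address to a readable name;
            # fall back to the raw address if lspci finds nothing.
            local name
            name=$(lspci -nns "${d##*/}" 2>/dev/null)
            echo -e "\t${name:-${d##*/}}"
        done
    done
}

list_iommu_groups "$@"
```

For passthrough to be straightforward, the GPU (and its HDMI audio function) should sit in its own group, not bundled with other devices.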

“And if yes how can I deal with my OS being LUKS encrypted?”: Irrelevant, you don’t need to access anything on your OS for this step.

“From what I understood, this means it will allow up to 2 GB of RAM for your GPU passthrough HVM qube, but then I don’t understand how people in other threads have straight up 4090s working. (And what about the VRAM?)”: No, you have access to all the RAM you want, and there is no limitation on VRAM.

“Also, what’s the deal with the max-ram-below-4g?”: You can read different threads here:

or search on the internet. Not everything about what is going on with TOLUD is understood.
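For context on what the option does outside Qubes: max-ram-below-4g is a property of QEMU’s pc machine type that caps how much guest RAM is mapped below the 4 GiB boundary, enlarging the 32-bit PCI MMIO hole that a large-VRAM GPU’s BARs need; the rest of the RAM is relocated above 4 GiB, so it is not a 2 GB RAM cap. A hypothetical plain-QEMU invocation (not the Qubes stubdom integration, where the patch injects the argument for you) would look like:

```shell
# Hypothetical example, not the Qubes setup: the guest gets 16 GiB of RAM,
# but only 2 GiB of it is mapped below 4 GiB, leaving a large 32-bit MMIO
# hole for the passed-through GPU. The remaining 14 GiB lives above 4 GiB.
qemu-system-x86_64 \
  -machine pc,max-ram-below-4g=2G \
  -m 16G \
  -enable-kvm
```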


Btw, I did a presentation about my video editing setup on Qubes, which uses GPU passthrough with an NVIDIA 4090, if anyone is interested.

Timestamp 5h34m30s


The lines in the original post on patching xen.xml have changed because the original patch interfered with audio VMs.
Maybe update the lines and link to the thread in case more progress is made?

I am going to remove the information about xen.xml patching from this guide.

  • The two methods (xen.xml / stubdom-linux-rootfs.gz) are currently not doing exactly the same things
  • The difference between the two methods seems to be causing confusion; I see a lot of mistakes in forum posts, and it makes troubleshooting harder
  • I don’t personally use the xen.xml method and I am not willing to spend time updating the xen.xml patching method to have exactly the same behavior as the other method

Are you still having to keep stubdom downgraded to apply the stubdom patch or did they resolve that?

I am using the latest version of everything available in the official repositories

I don’t understand what this means.

I want to be able to use a GPU in order to learn about open-source LLMs. It looks like many of those on git are Windows only.

Why do you recommend using a Linux guest? I also don’t know what “Linux Guest” means. Does this mean a graphical environment like what you would see if you install directly from an ISO into a StandAlone VM and not a template?

There is also mention of creating a Standalone Qube based on your choice of template. I think only Pop!OS has NVIDIA drivers built in, and I have never heard of anyone using Pop!OS inside Qubes.

I know this is an advanced topic for those with advanced linux backgrounds. I am going to try to do this despite not being advanced and I will hope I do not brick.

I have an extra monitor and an extra mouse and keyboard. Is the best way to go about this to follow this guide? Pop!OS is so easy but I feel very unsafe when I am not using Qubes.

Why do you recommend using a Linux guest? I also don’t know what “Linux Guest” means. Does this mean a graphical environment like what you would see if you install directly from an ISO into a StandAlone VM and not a template?

“Guest” as opposed to “Host”. You could call it a “Linux qube” or “Linux VM”.
I recommend using a Linux guest because it is easier to debug/modify/patch than Windows.
It is also better integrated with Qubes OS, better supported, and has better performance than a Windows guest.

There is also mention of creating a Standalone Qube based on your choice of template. I think only Pop!OS has NVIDIA drivers built in, and I have never heard of anyone using Pop!OS inside Qubes.

You can install the NVIDIA driver on any distribution (Debian, Fedora, Arch Linux, …).
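For example (package names below are the common ones for each distro, and they assume the relevant repositories are already enabled in the template):

```shell
# Debian (assumes the non-free / non-free-firmware components are enabled)
sudo apt install nvidia-driver

# Fedora (assumes the RPM Fusion non-free repository is enabled)
sudo dnf install akmod-nvidia

# Arch Linux (open kernel modules, for recent GPUs)
sudo pacman -S nvidia-open
```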

I have an extra monitor and an extra mouse and keyboard. Is the best way to go about this to follow this guide? Pop!OS is so easy but I feel very unsafe when I am not using Qubes.

Only you can answer this question. I am able to run ML models using the setup described in this guide. If you want to follow this guide, it is up to you.

I still don’t get it.

Are you talking about taking a template, like Fedora-38 or Arch, and just using it to create a new qube? Or are you talking about a standalone ISO with its own graphical interface, like if I took PopOS and used an ISO to create a standalone VM?

My personal example (for Create a Gaming HVM):

I installed the Arch Linux Qubes template from the community template repository (qvm-template-gui).
The template qube is called “archlinux”.

I then created a new standalone qube, named “gpu_archlinux”, based on the “archlinux” template.
I followed the doc to use my own kernel for gpu_archlinux, instead of the kernel provided by Qubes OS.
And finally, in “gpu_archlinux”, I installed the nvidia drivers using the command: sudo pacman -S nvidia-open
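Sketched as dom0 commands, the steps above would look roughly like this (the command names are real Qubes tools, but the repository name, flags, and qube names should be adapted to your setup):

```shell
# In dom0: install the community Arch Linux template
# (qvm-template-gui does the same thing graphically).
qvm-template install --enablerepo=qubes-templates-community archlinux

# Create a standalone qube based on that template.
qvm-create --class StandaloneVM --template archlinux --label red gpu_archlinux

# Use a kernel provided by the qube itself instead of one from dom0
# (see the Qubes documentation on managing VM kernels).
qvm-prefs gpu_archlinux virt_mode hvm
qvm-prefs gpu_archlinux kernel ''

# Then, inside gpu_archlinux: install the NVIDIA driver.
sudo pacman -S nvidia-open
```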

Curious if anyone else who has done this has run into performance issues where the GPU isn’t being utilized fully? We have another thread going here, but I’m for sure a little lost on what to do; both the CPU and GPU seem to sit at about half utilization, except for Starfield for some reason.

@neowutran I’m curious about you using a template-based approach for building your HVM. I installed both of my Linux ones with the ISO file and just used the “monitor” that displays in Qubes for setup, then installed NVIDIA drivers and disabled it. I’m curious if I’m missing some component that is preventing it from working effectively, though the issue is also present in my Windows 10 HVM created with qvm-create-windows-qube.


If you want, choose a benchmark that works well on Linux and we can both run it and compare the results.

Oh, I was just curious if there was a specific reason, is all; a few others and I have been battling performance issues where the GPU doesn’t appear to be getting utilized fully, so I wasn’t sure if that was a possibility. I have been playing around with xen.xml and finally managed to double my framerates in Cyberpunk, so I’m pretty happy about that; I will have to see if it applies to the Linux guest as well!


So these are based on templates, not standalone systems based on ISOs.

Arch is a harder OS than Debian and Debian is more compatible out of the box with many applications. Would a Debian-12 template work just as well?

So for this to work, there needs to be two virtual machines, one template that gets graphics processing and one template that provides graphics processing, and both can be based off templates.

Are you saying I have to run a live Linux system to get system information and share that with dom0 because I can’t get that information in dom0? I don’t understand the IOMMU Group and why it matters. Are you saying a live Linux system, like burning a live Parrot or live Debian or live Fedora and just booting it on my system to get the information, or do you mean a live template? I do not understand this part.

So these are based on templates, not standalone systems based on ISOs.

You can create a standalone qube from an existing template or an ISO.
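Both paths, sketched as dom0 commands (the qube names, label, and ISO path here are just examples):

```shell
# From an existing template: the standalone gets a one-time copy of the
# template's root filesystem and evolves independently afterwards.
qvm-create --class StandaloneVM --template debian-12 --label red my-standalone

# From an ISO: create an empty HVM and boot the installer from the ISO,
# installing like on a normal machine.
qvm-create --class StandaloneVM --property virt_mode=hvm \
           --property kernel='' --label red my-standalone
qvm-start --cdrom=some-qube:/home/user/installer.iso my-standalone
```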

Arch is a harder OS than Debian and Debian is more compatible out of the box with many applications. Would a Debian-12 template work just as well?

Yes, it would work using a Debian template or Fedora template or …; it is just my personal preference to use Arch Linux for this kind of thing, since it has all the latest bleeding-edge drivers.

So for this to work, there needs to be two virtual machines, one template that gets graphics processing and one template that provides graphics processing, and both can be based off templates.

no

Are you saying I have to run a live Linux system to get system information and share that with dom0 because I can’t get that information in dom0? I don’t understand the IOMMU Group and why it matters. Are you saying a live Linux system, like burning a live Parrot or live Debian or live Fedora and just booting it on my system to get the information, or do you mean a live template? I do not understand this part.

You have to run a live Linux system to check for yourself whether it should theoretically work without specific issues. You don’t “share” the result with dom0 afterwards; it is just for you, for your understanding of how your hardware is wired. For what an IOMMU group is, you can check here and contribute: IOMMU groups – security impact