Create a Gaming HVM

I don’t understand what this means.

I want to be able to use a GPU in order to learn about open-source LLMs. It looks like many of those on git are Windows only.

Why do you recommend using a Linux guest? I also don’t know what “Linux guest” means. Does this mean a graphical environment like what you would see if you installed directly from an ISO into a standalone VM and not a template?

There is also mention of creating a standalone qube based on your choice of template. I think only Pop!OS has NVIDIA drivers built in, and I have never heard of anyone using Pop!OS inside Qubes.

I know this is an advanced topic for those with advanced Linux backgrounds. I am going to try to do this despite not being advanced, and I hope I do not brick anything.

I have an extra monitor and an extra mouse and keyboard. Is the best way to go about this to follow this guide? Pop!OS is so easy but I feel very unsafe when I am not using Qubes.

Why do you recommend using a Linux guest? I also don’t know what “Linux guest” means. Does this mean a graphical environment like what you would see if you installed directly from an ISO into a standalone VM and not a template?

“Guest” as opposed to “Host”. You could call it a “Linux qube” or “Linux VM”.
I recommend using a Linux guest because it is easier to debug/modify/patch than Windows.
It is also better integrated with Qubes OS, better supported, and has better performance than a Windows guest.

There is also mention of creating a standalone qube based on your choice of template. I think only Pop!OS has NVIDIA drivers built in, and I have never heard of anyone using Pop!OS inside Qubes.

You can install the NVIDIA driver on any distribution (Debian, Fedora, Arch Linux, …).
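
For reference, the usual driver packages per distribution look roughly like this (these package names are the common ones for each distro, not something specific to this guide; Debian needs the non-free components enabled and Fedora needs the RPM Fusion non-free repository):

# Debian
sudo apt install nvidia-driver firmware-misc-nonfree

# Fedora (RPM Fusion non-free)
sudo dnf install akmod-nvidia

# Arch Linux (or "nvidia-open" for the open kernel modules)
sudo pacman -S nvidia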

I have an extra monitor and an extra mouse and keyboard. Is the best way to go about this to follow this guide? Pop!OS is so easy but I feel very unsafe when I am not using Qubes.

Only you can answer this question. I am able to run ML models using the setup described in this guide. If you want to follow this guide, it is up to you.

I still don’t get it.

Are you talking about taking a template, like Fedora-38 or Arch, and just using it to create a new qube? Or are you talking about a standalone ISO with its own graphical interface, like if I took PopOS and used an ISO to create a standalone VM?

My personal example (for Create a Gaming HVM):

I installed an Arch Linux Qubes template from the community template repository (qvm-template-gui).
The template qube is called “archlinux”.

I then created a new standalone qube, named “gpu_archlinux”, based on the “archlinux” template.
I followed the documentation to use my own kernel for gpu_archlinux, instead of the kernel provided by Qubes OS.
And finally, in “gpu_archlinux”, I installed the NVIDIA drivers using the command: sudo pacman -S nvidia-open
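
In case it helps, a rough sketch of the corresponding dom0 commands (the template and qube names come from the post above; 0a_00.0 is a placeholder PCI address, use your own as listed by qvm-pci):

# dom0: install the community Arch Linux template
qvm-template --enablerepo=qubes-templates-community install archlinux

# dom0: create a standalone qube based on that template, in HVM mode
qvm-create gpu_archlinux --class StandaloneVM --template archlinux --label red
qvm-prefs gpu_archlinux virt_mode hvm

# dom0: use the qube's own kernel instead of the one provided by dom0
qvm-prefs gpu_archlinux kernel ''

# dom0: attach the GPU (add --persistent if it should survive restarts)
qvm-pci attach gpu_archlinux dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

# inside gpu_archlinux: install the NVIDIA driver
sudo pacman -S nvidia-open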

Curious if anyone else who has done this has run into performance issues where the GPU isn’t being fully utilized? We have another thread going here, but I’m a little lost on what to do: both the CPU and GPU seem to sit at about half utilization, except for Starfield for some reason.
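
For comparing notes, one way to watch utilization on both sides is something like this (assuming the NVIDIA driver is loaded in the guest; xentop is part of the Xen tools in dom0):

# inside the gaming qube: live GPU utilization, memory and clocks
watch -n 1 nvidia-smi

# dom0: per-VM CPU usage as seen by Xen
sudo xentop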

@neowutran I’m curious about your template-based approach for building your HVM. Both of my Linux ones I installed from an ISO file and just used the “monitor” that displays in Qubes for setup, then installed the NVIDIA drivers and disabled it. I’m curious if I’m missing some component that is preventing it from working effectively, though the issue is also present in my Windows 10 HVM created with qvm-create-windows-qube.


If you want, choose a benchmark that works well on Linux and we can both run it and compare the results.
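
For example (glmark2 is just a suggestion, not something agreed on in this thread; it is packaged on most distributions):

# inside the GPU qube (Arch: sudo pacman -S glmark2, Debian: sudo apt install glmark2)
glmark2 --fullscreen
# prints a per-scene FPS table and a final score that is easy to compare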

Oh, I was just curious if there was a specific reason, is all. Myself and a few others have been battling performance issues where the GPU doesn’t appear to be getting fully utilized, so I wasn’t sure if that was a possibility. I have been playing around with xen.xml and finally managed to double my framerates in Cyberpunk, so I am pretty happy about that. I will have to see if it applies to the Linux guest as well!
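
For anyone looking for the file being referred to: the stock libvirt template shipped by Qubes and the documented locations for user overrides are listed below (the per-VM file name is just an example; see the Qubes “custom libvirt config” documentation for the jinja blocks you can override):

# dom0: stock template, do not edit (replaced on updates)
/usr/share/qubes/templates/libvirt/xen.xml

# dom0: override applied to all qubes
/etc/qubes/templates/libvirt/xen-user.xml

# dom0: override for a single qube, e.g. a gaming HVM named Windows-gaming
/etc/qubes/templates/libvirt/xen/by-name/Windows-gaming.xml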


So these are based on templates, not standalone systems based on ISOs.

Arch is a harder OS than Debian, and Debian is more compatible out of the box with many applications. Would a Debian 12 template work just as well?

So for this to work, there need to be two virtual machines, one that gets graphics processing and one that provides graphics processing, and both can be based off templates.

Are you saying I have to run a live Linux system to get system information and then share that with dom0, because I can’t get that information in dom0? I don’t understand the IOMMU group and why it matters. Do you mean a live Linux system, like burning live Parrot or live Debian or live Fedora and just booting it on my system to get the information, or do you mean a live template? I do not understand this part.

So these are based on templates, not standalone systems based on ISOs.

You can create a standalone qube from an existing template or an ISO.

Arch is a harder OS than Debian, and Debian is more compatible out of the box with many applications. Would a Debian 12 template work just as well?

Yes, it would work using a Debian template or a Fedora template or …; it is just my personal preference to use Arch Linux for this kind of thing, since it has all the latest bleeding-edge drivers.

So for this to work, there need to be two virtual machines, one that gets graphics processing and one that provides graphics processing, and both can be based off templates.

No. Only one qube is needed: the standalone qube that the GPU gets attached to.

Are you saying I have to run a live Linux system to get system information and then share that with dom0, because I can’t get that information in dom0? I don’t understand the IOMMU group and why it matters. Do you mean a live Linux system, like burning live Parrot or live Debian or live Fedora and just booting it on my system to get the information, or do you mean a live template? I do not understand this part.

You have to run a live Linux system to check for yourself whether it should theoretically work without specific issues. You don’t “share” the result with dom0 afterwards; it is just for you, for your understanding of how your hardware is wired. For what an IOMMU group is, you can check here and contribute: IOMMU groups – security impact
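
A common way to inspect the groups from a live Linux system (a generic snippet, nothing specific to this guide) is:

#!/bin/bash
# list every IOMMU group and the PCI devices it contains
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo "    $(lspci -nns "${d##*/}")"
    done
done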

I have edited GRUB and now have

Kernel driver in use: pciback

for my GPU when running lspci -vvn
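
For reference, this is the GRUB step as I understand it (the PCI address 0a:00.0 is a placeholder, use your own; the grub.cfg path also differs between EFI and BIOS installs):

# dom0: /etc/default/grub — hide the GPU (and its audio function) from dom0
GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0,0a:00.1"

# dom0: regenerate the config (EFI path shown; BIOS installs use /boot/grub2/grub.cfg)
sudo grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg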

This is really hard to do. I’ve barely done any of this and already want to cry.

I don’t understand what is going on with Xorg, what it means, or why it’s even needed. I also don’t understand why I need a second monitor. Can’t I just pass this directly to a VM?

You’ve given a lot of hints for Arch. I have no idea what to put in Debian. I’m on the verge of trying to just do this with Arch, even though I don’t understand pacman or Arch at all.

If I made a standalone VM with PopOS and NVIDIA drivers, or a standalone Windows machine with NVIDIA drivers, and just attached the unused GPU to the VM in dom0, will it not work without Xorg?

qvm-pci attach Windows-gaming dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

or

qvm-pci attach PopOS-gaming dom0:0a_00.0 -o permissive=True -o no-strict-reset=True

Is that going to fail without Xorg? Can anyone explain why or why not?

Why didn’t you tell us this would require patience and debugging and include graphs to warn us all how hard this would be?

So I tried running PopOS with NVIDIA drivers and did a passthrough in dom0, but I think it didn’t work.

sudo qvm-pci attach PopOS dom0:01_01.0 -o permissive=True -o no-strict-reset=True
Got empty response from qubesd. See journalctl in dom0 for details.

In PopOS it says I am running X11 as the windowing system, virtualization is listed as Xen, and graphics are llvmpipe (256 bits).

It probably isn’t working at all and is just running like a normal standalone PopOS.

I ran nvidia-settings in PopOS and it showed the drivers weren’t loading. It said the X server isn’t available.
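
A few generic checks inside the guest can narrow down whether the driver or only Xorg is the problem (standard NVIDIA debugging, nothing Qubes-specific):

# is the GPU visible on the guest's PCI bus at all?
lspci -nn | grep -i nvidia

# did the kernel module load, and what did it complain about?
lsmod | grep nvidia
sudo dmesg | grep -iE 'nvidia|nvrm'

# what did Xorg do with it?
grep -iE 'nvidia|\(EE\)' /var/log/Xorg.0.log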

Back to Xorg hell.

So I went back to just using a Debian standalone based on a template. I changed it to HVM, since PVH won’t allow me to attach anything.

I installed nvidia-detect and it indicates I have an NVIDIA graphics card. I installed the NVIDIA drivers. I am still not sure how the second monitor, Xorg, and another keyboard factor into all of this.
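
For the Debian side (since most of the hints are for Arch), the usual sequence looks roughly like this, assuming the non-free components are enabled in the APT sources:

# inside the Debian standalone
sudo apt install nvidia-detect
nvidia-detect                          # suggests which driver package fits the card
sudo apt install nvidia-driver firmware-misc-nonfree
sudo reboot                            # reboot the qube so the module can load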

Possibly related

It looks like I do need a different monitor and to use Xorg to pass the GPU through with HVM.

Does anyone know why I can’t just do this inside the screen I already have? I don’t understand why there needs to be different hardware. Is this a technical requirement of Qubes (having a different monitor), or has no one figured out how to make it work?

Can I just apply your Xorg settings to something like Debian with NVIDIA installed, or are modifications necessary?

I don’t understand the Monitor part in the Xorg configuration. Can I copy the monitor section just like that?

I commented the parts of the file that need to be modified.

# name of the driver to use. Can be "amdgpu", "nvidia", or something else
# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN. 
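
To make the structure concrete, a minimal xorg.conf for this kind of setup generally looks like the sketch below. The BusID and the Modeline are placeholders: use your own GPU address as seen inside the qube (lspci) and a modeline generated for your monitor (e.g. with the arachnoid.com tool or cvt):

Section "Device"
    Identifier  "GPU0"
    # name of the driver to use. Can be "amdgpu", "nvidia", or something else
    Driver      "nvidia"
    # PCI address of the passed-through GPU as seen inside the qube
    BusID       "PCI:0:5:0"
EndSection

Section "Monitor"
    Identifier  "Monitor0"
    # generated with: cvt 1920 1080 60
    Modeline    "1920x1080_60.00"  173.00  1920 2048 2248 2576  1080 1083 1088 1120 -hsync +vsync
EndSection

Section "Screen"
    Identifier  "Screen0"
    Device      "GPU0"
    Monitor     "Monitor0"
EndSection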

I have never tried it, but supposedly this guide offers a solution for doing it on a single screen:

I’m going to try it. Not sure if I am good enough to figure it out.

I tried GPU passthrough on a different laptop than last time (T470 vs NV41), but when I plug in my external GPU over Thunderbolt, the device names don’t appear in lspci or qvm-pci; they are still named “thunderbolt controller” :thinking:

In case someone has an idea.

edit²: Well, on an Ubuntu live CD, the Thunderbolt card doesn’t appear either :confused: I wonder if this is because I have a Thunderbolt 3 GPU case and it’s a Thunderbolt 4 port :woman_shrugging:

edit³: I tried again on Ubuntu, and it turns out I didn’t plug the Thunderbolt cable in correctly :woman_facepalming: It shows up on Ubuntu but not on Qubes OS now.

Here is the lspci -vnn output for the Thunderbolt-related devices (GPU is connected :woman_shrugging:):

00:0d.0 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 USB Controller [8086:461e] (rev 02) (prog-if 30 [XHCI])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: medium devsel
	Memory at 80940000 (64-bit, non-prefetchable) [size=64K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: xhci_pci

00:0d.2 USB controller [0c03]: Intel Corporation Alder Lake-P Thunderbolt 4 NHI #0 [8086:463e] (rev 02) (prog-if 40 [USB4 Host Interface])
	Subsystem: CLEVO/KAPOK Computer Device [1558:4041]
	Flags: fast devsel, IRQ 17
	Memory at 80900000 (64-bit, non-prefetchable) [size=256K]
	Memory at 80970000 (64-bit, non-prefetchable) [size=4K]
	Capabilities: <access denied>
	Kernel driver in use: pciback
	Kernel modules: thunderbolt

In the qube with passthrough, the Thunderbolt controller is working:

bash-5.2# boltctl 
 ● Razer Core X
   ├─ type:          peripheral
   ├─ name:          Core X
   ├─ vendor:        Razer
   ├─ uuid:          c5030000-0080-7708-234c-b91c5c413923
   ├─ generation:    Thunderbolt 3
   ├─ status:        connected
   │  ├─ domain:     728c8780-c09b-710e-ffff-ffffffffffff
   │  ├─ rx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  ├─ tx speed:   40 Gb/s = 2 lanes * 20 Gb/s
   │  └─ authflags:  none
   ├─ connected:     dim. 07 janv. 2024 13:03:30
   └─ stored:        no

How can I check that I have it installed? I can’t find a package named vmm-xen in dom0 :thinking:
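
My understanding is that vmm-xen is the name of the Qubes source component, and the binary packages installed in dom0 are simply named xen* (worth double-checking), so something like this should show them:

# dom0: list the installed Xen packages and their versions
rpm -qa 'xen*' | sort

# or, equivalently
dnf list installed 'xen*'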
