Seamless GPU passthrough on Qubes OS with VirtualGL

I see. Looks like I misunderstood the objective. I was assuming that the qube with the GPU was being used as a sys-gpu qube, and that VirtualGL was being used to let client qubes access the GPU's accelerated functions to speed up their graphics (I have been interested in VirtualGL for this potential since I first saw it mentioned).

Does someone want to add the phrase “(i.e. using a 2nd GPU without a 2nd monitor)” to the title? The original title is correct, but adding that might help the next person. (Reading the original post from this perspective, it suddenly makes a lot more sense.)

Would I be correct that if someone put the virtualGL communication between qubes, that they could get 3D acceleration in other qubes? For example putting the GPU and the first half of virtualGL in sys-gpu, and the other half of virtualGL on a qube called “insecure-debian-3d-games-qube” as well as putting that half on a qube called “insecure-fedora-3d-games-qube”. Or am I misunderstanding how virtualGL works?

I see what you are referring to now. Unfortunately, I can’t really comment because I don’t know enough about virGL, virtualGL, or Venus to be able to say if the virtualGL and virGL/Venus attack surfaces are similar, or if they operate in different ways.

It does exactly what the title says, it allows you to do seamless passthrough, as in it works in “qubes” windows.

I have only quickly looked through the documentation, but I don’t think you can “share” access to the GPU. You can run the application on one device and have the rendered frames display on another device using networking.

Which is kinda pointless in Qubes OS, you might as well just display them in the qube running the application.

My problem is that when I attach the GPU to the virtual machine (in the virtual machine's settings) and then start the virtual machine, the whole thing crashes immediately.

I can’t even get into the command line to run vglrun.

I am using an AppVM and attaching the GPU to the AppVM in the settings. The template does not have the GPU attached.

I am doing something wrong.

Is this happening because I am not using a standalone VM?

I think that’s the error I made.

I am able to attach the GPU to the Standalone VM and get a command line running. But even though I based it on a virtualgl template, the applications are not there.

I tried to open Thunar, and it opens and shows graphics but shuts down as soon as I click on anything.

I will try installing the virtualgl files in the standalone, and then I hope it will work.

@dispuser, are you using Qubes OS 4.2 with the latest updates?
There was a Xen bug that caused a system crash when attaching the GPU to a VM with more than a certain amount of RAM; it seems to be fixed in a recent update.
In my 4.2 with the latest updates, the passthrough seems to be working both with a Standalone VM and an appVM.

I made a game qube today with passthrough+wine/lutris+vulkan+virtualgl, works just fine.

I don’t know if full-screen works, I wasn’t able to get it to work. Allowing full-screen in the qubes settings didn’t seem to do anything, the game kept running in a window.

I was having noticeable input lag, my guesstimate would be ~200ms, so games that are very lag-sensitive probably aren’t going to be the best experience.

Wine/Lutris complained a lot about the LD_PRELOAD library being 64-bit, but the game was using a 64-bit prefix and was able to use VirtualGL. I tried to install the 32-bit VirtualGL binary, but it prevented libGL from loading; maybe games that use a 32-bit prefix need the 32-bit version.

If you edit the Lutris desktop file, you can start Lutris with VirtualGL from the qubes menu.

Exec=bash -c 'export VGL_DISPLAY=egl&&vglrun lutris %U'
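For context, that Exec line would slot into a desktop entry roughly like this; everything except the Exec line is an illustrative sketch, not taken from the post:

```ini
[Desktop Entry]
Type=Application
Name=Lutris (VirtualGL)
Exec=bash -c 'export VGL_DISPLAY=egl&&vglrun lutris %U'
Terminal=false
Categories=Game;
```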

If you own an Nvidia GPU, you should use PRIME instead of VirtualGL, that’s far more efficient :+1:

That’s great! Can you tell us what your setup is? Like:

  • what version of qubes did you use? 4.2? Were your updates recent as of your 2024-02-09 post?
  • What did you use for a template? Debian 12? Debian 11? Fedora 38? or something else?
  • What kind of card did you pass through? Was it AMD or Nvidia?

Cool. I’ll try that (if I ever get it working).

Are you using an AMD card? I get a similar phenomenon after I change the “nomsi” setting in grub.

Qubes R4.2, with all updates.
Debian 12 minimal, but should work with full as well.
Nvidia 4060, using the nvidia-kernel-open-dkms driver.

I played around with it a bit more, and it doesn’t work with all games.

I’m having the same issue with an Nvidia 3060.

Were you able to get it working correctly?

I’m still unsure how to do this. I have the GPU disabled in dom0 and can attach it, but I get stuck there.

There are things I can’t install in the template because Debian just doesn’t have them. I wish I knew what I was doing wrong.

Could you describe how to use Prime? I don’t know what this is and I have a lower intelligence and knowledge level than you. (I am being serious. I am not an experienced linux user.)

If you can simplify Prime a bit for someone slightly less intelligent and less knowledgeable, I would like to try this.

I really don’t want to have to delete Qubes and start using Pop just for the NVidia drivers, but I am almost at that point. Arch is harder for me to use and some stuff just isn’t available for Arch or you have to convert it.

Once you have GPU passthrough working and the nvidia module loaded, your user should be in the group video; then you need to run commands with these environment variables set:

__NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia
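A minimal sketch of how those variables get used, assuming GPU passthrough and the nvidia driver are already working; the wrapper name prime_run is my own invention, not part of any tool:

```shell
# Hypothetical wrapper: run a command with Nvidia PRIME render offload enabled
prime_run() {
  __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia "$@"
}

# Show that the variables reach the child process; in practice you would run
# something like `prime_run glxinfo | grep "OpenGL renderer"` to confirm the
# Nvidia GPU is the one doing the rendering
prime_run env | grep -E "__NV_PRIME_RENDER_OFFLOAD|__GLX_VENDOR_LIBRARY_NAME"
```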

Wouldn’t you also need to configure xorg with the nvidia device?

How did you get the source and sink to work without the dummyqbs driver crashing?

How do I know if GPU passthrough is working and the Nvidia module is loaded? How do I know if my user is in the video group?

I disabled the GPU in dom0 and rebuilt things and did that correctly.

I can run the debian-xfce-gl template. I can even do that with the GPU attached as long as I am only in terminal mode.

I am trying right now to get virtualgl installed in the template and I’m getting package incompatibility errors so I’m upgrading the template, which I made as a separate template.

Are you saying I can do this instead of virtualgl? I don’t even know what Prime is. I’m not a computer science or programming expert like you. I’ve read your blog a bit, you’re more advanced than I am.

I still would like to keep using Qubes and be able to use a GPU.

Configure xorg? I think I never did this. I may have skipped this step?

I don’t know how to do this for debian.

I don’t think I understand this step.

Then create a XORG configuration file for your GPU and screen. My file named ‘AOC.conf’:

Section "ServerLayout"
Identifier "Gaming"
Screen 0 "AMD AOC" Absolute 0 0
EndSection

Section "Device"
Identifier  "AMD" [I would change to nvidia]

# name of the driver to use. Can be "amdgpu", "nvidia", or something else
Driver      "amdgpu" [I would change to nvidia]

# The BusID value will change after each qube reboot. 
BusID       "PCI:0:8:0"
EndSection

[I don't know if I need to change this]

Section "Monitor"
Identifier "AOC"
VertRefresh 60
# https://arachnoid.com/modelines/ .  IMPORTANT TO GET RIGHT. MUST ADJUST WITH EACH SCREEN. 
Modeline "1920x1080" 172.80 1920 2040 2248 2576 1080 1081 1084 1118
EndSection

[I don't know if I need to change this or what it even means]

Section "Screen"
Identifier "AMD AOC"
Device     "AMD"
Monitor    "AOC"
EndSection

[I don't know what this means or what it is]

We can’t know the correct BusID before the qube is started, and it changes after each reboot. So let’s write a script, named “xorgX1.sh”, that updates this configuration file with the correct value, then starts a binary on Xorg X screen n°1.


I don't know where to put this file or how to modify it. I am using Nvidia. Do I need to modify this file based on the bus that Qubes assigns each time?

How do I create this? Just go into the template and sudo nano /AOC.conf? I have a feeling this would be the wrong directory.



Then there's this part:

#!/bin/bash

binary=${1:?binary required}

# Find the correct BusID of the GPU, then set it in the Xorg configuration file
pci=$(lspci | grep "VGA" | grep -E "NVIDIA|AMD/ATI" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
sed -i 's/"PCI:[^"]*"/"PCI:0:'$pci':0"/g' /home/user/AOC.conf

# Start the Xorg server for the X screen number 1.
# The X screen n°0 is already used for QubesOS integration
sudo startx "$binary" -- :1 -config /home/user/AOC.conf
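For what it's worth, here is what that chain of cut commands extracts, run on a hypothetical lspci line; note the pipeline assumes the GPU sits on a single-digit device slot like 00:08.0, which is the usual case inside a qube:

```shell
# Hypothetical lspci line as seen inside the qube (illustrative, not real output)
sample="00:08.0 VGA compatible controller: NVIDIA Corporation AD107 [GeForce RTX 4060]"

# Same pipeline as the script above: take "00:08.0", keep the device number "08",
# then strip the leading zero
pci=$(echo "$sample" | cut -d " " -f 1 | cut -d ":" -f 2 | cut -d "." -f 1 | cut -d "0" -f 2)
echo "$pci"
```

The sed line in the script then rewrites the BusID to "PCI:0:8:0". If the device ever landed on a two-digit slot, this pipeline would need adjusting.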


I know how to create a script using nano and just cutting and pasting. I don't know where I would put this. Should I just create it in /rw/config and then start the script using rc.local each time?


Also I have no idea what Xorg is.

I think it’s possible I didn’t format that correctly.