Seamless GPU passthrough on Qubes OS with VirtualGL

We are presenting a guide to GPU passthrough, created in collaboration with peers.

Qubes OS already has extensive documentation about GPU passthrough for 3D-accelerated tasks, but the existing guides all require in-depth configuration, extra displays, and extra input devices. With VirtualGL (https://virtualgl.org) one can take advantage of the existing Qubes OS audio/display framework by offloading only the OpenGL rendering onto the secondary graphics card. This document mostly works off of the Neowutran Qubes OS article, with a few tweaks.

Casual applications will work, but some games require capturing the mouse cursor, which Qubes OS does not support, so mouse-driven camera control will not work with the emulated input device. An extra gamepad or similar will probably be desired.

Device compatibility:
Nvidia: Works, but lower FPS has been reported on the Nouveau/Mesa driver; the proprietary driver is fine.
AMD: Works with Mesa after disabling MSI in the template kernel command line with the parameter pci=nomsi (see the sketch below). Conflicts occurred when Nvidia drivers were installed alongside the AMD card: VirtualGL reports no EGL devices while they are installed. A possible fix is preloading the proper libraries.
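
For the AMD case, a sketch of the template-side change (the template's own GRUB applies because the kernel is provided by the qube, as set up in the instructions below):

    # /etc/default/grub inside the template
    GRUB_CMDLINE_LINUX_DEFAULT="... pci=nomsi"
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg    # Fedora; on Debian: sudo update-grub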

Both graphics cards seem to support Xen's memory balancing, but only up to the amount of memory initially allocated at boot.

Instructions:
Edit the GRUB configuration in /etc/default/grub to hide the secondary graphics card from dom0 with the rd.qubes.hide_pci kernel option. Regenerate the GRUB configuration with grub2-mkconfig -o /boot/grub2/grub.cfg, then reboot. A sketch is shown below.
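
A minimal sketch of the dom0 change, assuming the secondary GPU sits at PCI address 0a:00.0 (substitute the BDF that lspci reports for your card):

    # /etc/default/grub in dom0: hide the secondary GPU from dom0
    GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=0a:00.0"
    sudo grub2-mkconfig -o /boot/grub2/grub.cfg
    # after rebooting, confirm the device is bound to the pciback driver:
    lspci -k -s 0a:00.0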

Create a new template to hold graphics drivers and VirtualGL configuration.

Follow the appropriate instructions to install the desired graphics drivers for the specific OS template.

Download VirtualGL from its SourceForge repository and install it in the template.
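
Package file names vary by release; for example (names illustrative):

    # Fedora-based template
    sudo dnf install ./VirtualGL-3.1.x86_64.rpm
    # Debian-based template
    sudo apt install ./virtualgl_3.1_amd64.deb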

Run vglserver_config as root and configure it for the EGL back end; respond "no" to the prompt about restricting framebuffer device access.
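
Roughly (exact menu wording differs between VirtualGL versions; in recent releases the EGL-only back end is option 3):

    sudo vglserver_config
    # choose the EGL back end, then answer "no" when asked whether to
    # restrict framebuffer device access to the vglusers group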

Edit /etc/profile and append export VGL_DISPLAY=egl to the bottom.
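
Equivalently, from a terminal in the template:

    echo 'export VGL_DISPLAY=egl' | sudo tee -a /etc/profile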

Shut down the template and create a new VM based on it. Set the virtualization mode to HVM and the kernel to "provided by qube". Attach the secondary graphics card to it. Boot, and prefix a command with vglrun to get accelerated graphics. Wrapping a shell also works (vglrun bash) to avoid having to type it every time, but there is no obvious way to automate this at startup. The dom0 side is sketched below.
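
A sketch of the dom0 commands, assuming a template named gpu-template and a GPU at 0a:00.0 (both hypothetical; adjust to your system):

    qvm-create --template gpu-template --label red gpu-vm
    qvm-prefs gpu-vm virt_mode hvm
    qvm-prefs gpu-vm kernel ''    # empty value = kernel provided by the qube
    qvm-pci attach --persistent gpu-vm dom0:0a_00.0 -o permissive=True    # permissive may not be needed for every device

Then, inside gpu-vm:

    vglrun glxgears    # or: vglrun bash, and launch applications from that shell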

From the VirtualGL page on ArchWiki:

Tip: vglrun is actually just a shell script that (temporarily) sets some environment variables before running the requested application - most importantly it adds the libraries that provide all the VirtualGL functionality to LD_PRELOAD. If it better suits your workflow, you could just set those variables yourself instead. The following command lists all environment variables that vglrun would set for your particular set-up:

comm -1 -3 <(env | sort) <(vglrun env | grep -v '^\[' | sort)

Donations:
We are no longer accepting donations and have moved on to other projects. We will not be able to adequately respond to further questions. The follow-up information provided by users appears to be sufficient for now.

13 Likes

Hello, thank you for this guide!

I already have GPU passthrough set up on one of my qubes, and I mostly use it to run hashcat or John the Ripper. Hardware acceleration is working just fine. Can you expand on what would be improved by using VirtualGL? I am definitely not familiar with it!

Hello.

VirtualGL is a program designed to offload OpenGL processing to remote graphics devices, although "remote" in this case can also just mean a separate device in the same system. The main benefit for Qubes would be eliminating the need for a secondary display for output in applications that use OpenGL: they render 2D through the Qubes display driver and OpenGL on the graphics device, with VirtualGL compositing the result back onto the 2D display. This also eliminates the need to configure a separate audio and seat setup.
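
With the setup from the guide above, a quick way to check which device is actually rendering (glxinfo ships in mesa-utils on most distributions; the exact renderer string depends on the driver):

    vglrun glxinfo | grep -i 'opengl renderer'
    # should name the passed-through GPU rather than llvmpipe/softpipe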

Programs like the ones you described use the GPU for compute, so it makes sense that they work headlessly. The main user group we had in mind for this guide was people using casual 3D and CAD applications such as Blender and FreeCAD, though games have been tested and work.

I was also not familiar with it, but it is something my peer has been working with for a while. The guide was a 50/50 effort, with me providing extra hardware testing.

3 Likes

Update: AMD confirmed working with Mesa after disabling MSI in template kernel command line with parameter pci=nomsi.

2 Likes

Whenever I try to start the AppVM (based on the VGL template) to which I added my dGPU, it reboots my computer.
Do you have any solutions or understanding of why this might be happening?

The graphics device may be in a shared IOMMU group; the linked Neowutran post has more details on how to check this.

Before disabling MSI in the template GRUB, passthrough of the AMD device was unstable and caused system halts. Also, if the device is not properly hidden from dom0, it may cause a system reboot.

1 Like

Thanks for the help! I was using the wrong command to regenerate the GRUB config.
Now the GPU VM no longer crashes on loading, so I assume the GPU has been hidden successfully from dom0.

I suspect that something is still not passed through properly, however, as when I benchmark the HVM it performs about as well as I'd expect from the CPU.
It may have something to do with either the open source Nvidia drivers not being disabled properly (I tried this on a Fedora template) or my inexperience using OpenGL. (Executing vglrun gives me a bunch of potential options I could add, but I am not sure whether any of them are needed, or whether I need to target a specific application.)

The kernel must be set to "provided by qube" so the graphics modules packaged in the template are used, and vglrun is invoked with the application to be accelerated as its argument, as below.
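
For example, with a hypothetical VM named gpu-vm, in dom0:

    qvm-prefs gpu-vm kernel ''    # empty value = kernel provided by the qube

and then inside the VM:

    vglrun blender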

1 Like

That’s exciting if you were able to get Blender to work in this way.

What qubes template did you start from when you successfully tested it? fedora36? debian11? or something else?

Also, what version of VirtualGL did you have your success with? Was it VirtualGL 3.1?

How stable was it? Were you able to get actual work done? :)

I would love to give this a whirl, but your instructions are somewhat unclear. You state that you are mostly working off of the Neowutran guide, but I don’t know which steps I am supposed to follow from your guide, where your instructions pick up and Neowutran’s stop, etc. Could you provide some further detail?

Thanks!

1 Like

A few more specifics about my confusion:

  • Am I wrong in my assumption that this guide sets up a VirtualGL server qube that hosts the secondary GPU and provides access to it for other AppVMs?
  • Assuming that is correct, I completed the Neowutran guide up to the section “Linux guest - Integration with QubesOS.” Am I supposed to apply those configurations to my VirtualGL server or to my other AppVMs in order to give them access to the secondary GPU for rendering?

If that is NOT correct, I would like to explore an alternative/modification to this guide. Would it be possible to create a VirtualGL server with GPU passthrough and connect AppVMs to it using VGL transport? (See the vglconnect sketch after this list.) I assume this requires allowing inter-AppVM networking. This could be achieved a few ways:

  • Use the existing Qubes network to enable networking between VMs. However, this could create security issues, since the local VirtualGL traffic would be handled by the same sys-net/sys-firewall that connects to the internet.
  • Use the Qubes network server project to create a secondary internal-only sys-net/sys-firewall system so AppVMs and the VirtualGL server can communicate. This would be a little more secure by at least isolating the two network stacks. (Issue: is it possible to connect AppVMs to more than one network?)
  • This likely doesn’t exist, but if there was a way to use the VirtualGL server as the “guivm” for AppVMs and send the VGL traffic over some internal protocol (sockets or something?) instead of a TCP network stack, that would be even better. This would basically be the same idea as above only using communication at the Xen/dom0/Qubes level to send traffic between the AppVMs and the VirtualGL server
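
For reference, this is roughly what VGL transport looks like between two ordinary networked hosts (illustrative only; whether it maps onto inter-qube networking is exactly the open question above):

    # on the machine with the display (the 2D X server):
    vglconnect user@render-host    # wraps ssh with X forwarding and starts vglclient
    # then, in the resulting shell on render-host:
    vglrun -c jpeg glxgears    # -c jpeg selects VGL transport image compression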

Does that make sense? Is any of that possible?

1 Like

Hey, if you’re still paying attention to this post, I have a bit of a question:

When you run 'vglrun [command]' from the qube with the second GPU attached, how does one then integrate that back into Qubes?

I currently am able to start X11 with my secondary GPU, but that requires switching to a different monitor/input setup, which I think is the thing this post is attempting to streamline. So I have the GPU qube up and running and typed 'vglrun MyProgram'… but where do we go from there? Can I 'shunt' this program through to a different AppVM somehow?

Thanks for the guide, hope to get this working soon :D

I am getting a “connection to the VM failed” remark before everything shuts down.

I created a template called debian-12-xfce-nonfree and installed the Nvidia drivers.

I installed VirtualGL by adding it to apt and updating the template (which worked). When I ran the configuration command for EGL, I selected option 3.

I already disabled the Nvidia card in dom0, reconfigured GRUB, and tested that.

I edited /etc/profile with sudo nano and added export VGL_DISPLAY=egl at the bottom of the long list.

I then created a new VM based on debian-12-xfce-nonfree, and that’s when I get the message about the VM not connecting.

It said something like “failed to start”

and sigchld_parent_handlr: Connection to the VM failed

what am I doing wrong?

Is this because I chose Debian? Should I try this in Fedora or Arch?

Did you figure out how to do this? I don’t get it.

Just wanted to confirm that this still compromises security, correct? I was reading documentation elsewhere saying Qubes will never officially support VirtualGL because it greatly increases the attack surface and is not a security boundary. It’s a little hard to keep on top of what is still a current issue and what’s not.

So not only do I not know how to do this but also if I am able to do it then it compromises my security?

I consider this a hugely important post. Unfortunately I suspect that we read this too late and he does not log in anymore.

He seems to be the only one who got this procedure to work. It also looks like he cross-posted this to reddit. Someone want to try to hit him up on reddit and see if he still logs in there? https://www.reddit.com/r/Qubes/comments/15olxfh/seamless_gpu_passthrough_on_qubes_os_with/

If anyone is able to replicate this, please post more detailed instructions here.

What I have read about the Qubes security concern with GPUs (assuming I remember correctly) was that the GPU could become an attack vector if you “unplugged” it from one VM and plugged it into another VM without it being fully reset.

Since VirtualGL would keep the GPU bound to just one qube (like sys-usb and sys-audio), that particular issue may not be a problem anymore.

It looks like the correct place to document the specific possible security issues relating to using a GPU in Qubes, and why they would or wouldn’t be a potential problem, is:

Following the guide worked for me…

You set up a qube with passthrough, install VirtualGL and run vglserver_config, export VGL_DISPLAY=egl, and start a 3D application with vglrun.
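
As a compact recap (commands run inside the qube; vglserver_config prompts vary by VirtualGL version):

    sudo vglserver_config    # choose the EGL back end
    export VGL_DISPLAY=egl    # or append it to /etc/profile as in the guide
    vglrun glxgears    # any 3D application works here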

All it does is loop the output back to the Qubes display, allowing you to use the output without having a physical display connected to the GPU.

That’s actually the thread where I read it. Further down in the discussion there’s a comment from Aug 3, 2023 that states:

"@covert8 Its funny you asked, because I was about to file an issue for this!

VirGL and Venus run the userspace driver (OpenGL and Vulkan respectively) on the host. This means that they provide a hardware-independent API to the guest, but also means that the entire userspace stack becomes guest-accessible attack surface. This attack surface is very large, and Linux graphics developers have stated on IRC that it is not a security boundary. Therefore, @marmarek has decided that Qubes OS will never use VirGL or Venus and I agree with his decision.

virtGPU native contexts, on the other hand, expose the kernel ioctl API to the guest. This API is accessible to unprivileged userspace processes, which means it is a supported security boundary. It is also much smaller than the OpenGL or Vulkan APIs, which means that its attack surface is vastly smaller. As a bonus, native contexts offer near-native performance, which should be enough even for games and other demanding tasks.

The kernel ioctl API (also known as the userspace API or uAPI) is hardware-dependent, so virtGPU native contexts are only supported on a subset of hardware. Currently, Intel, AMD, and Adreno GPUs are supported using the upstream i915, amdgpu, and freedreno drivers.

Xen supports grant-based virtio, so virtio-GPU should not be incompatible with running QEMU in a stubdomain. The virtio-GPU emulator will need to run in dom0, but its job is much simpler than that of QEMU (it is only emulating a single device) and so the attack surface should (hopefully!) be acceptable."