Seamless GPU passthrough on Qubes OS with VirtualGL

Here is a guide to GPU passthrough, written in collaboration with peers.

Qubes OS already has extensive documentation about GPU passthrough for 3D-accelerated tasks, but the existing approaches all require in-depth configuration, extra displays, and extra input devices. With VirtualGL, one can instead take advantage of the existing Qubes OS audio/display framework by offloading OpenGL rendering onto the secondary graphics card. This document mostly works off of Neowutran's guide (Qubes OS article) with a few tweaks.

Casual applications will work, but some games require capturing the mouse cursor, which Qubes OS doesn't support, so mouse-look camera controls will not work with the emulated input device. An external gamepad or similar will probably be desired.

Device compatibility:
Nvidia: Works, but lower FPS is reported with the Nouveau/Mesa driver; the proprietary driver is fine.
AMD: Works with Mesa after disabling MSI in the template kernel command line with the parameter pci=nomsi. Conflicts occurred when Nvidia drivers were installed alongside the AMD card: VirtualGL reports no EGL devices while they are installed. Preloading the proper libraries is possibly a simple fix.
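As a sketch of the AMD workaround, assuming the template boots its own kernel via GRUB (file paths and the regeneration command vary by distribution):

```shell
# Inside the AMD template — add pci=nomsi to the guest kernel command line.
# Edit /etc/default/grub so that GRUB_CMDLINE_LINUX includes the parameter:
#   GRUB_CMDLINE_LINUX="... pci=nomsi"

# Then regenerate the GRUB configuration:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # Fedora; on Debian: sudo update-grub
```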

Both graphics cards seem to support Xen’s memory balancing, but only up to the amount of initial memory at boot.

Edit the GRUB configuration in /etc/default/grub to hide the secondary graphics card from dom0 with the rd.qubes.hide_pci kernel option. Regenerate the GRUB configuration with grub2-mkconfig -o /boot/grub2/grub.cfg, then reboot.
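As a concrete sketch, in dom0 (the PCI addresses 03:00.0/03:00.1 are examples; substitute the ones lspci reports for your card and its audio function):

```shell
# Find the BDF addresses of the secondary graphics card and its HDMI audio:
lspci | grep -Ei 'vga|audio'

# In /etc/default/grub, append the hide option to GRUB_CMDLINE_LINUX, e.g.:
#   GRUB_CMDLINE_LINUX="... rd.qubes.hide_pci=03:00.0,03:00.1"

# Regenerate the GRUB configuration and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```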

Create a new template to hold graphics drivers and VirtualGL configuration.

Follow the appropriate instructions to install the desired graphics drivers for the template's specific OS.

Download VirtualGL from its SourceForge repository and install it to the template.
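For example, after downloading the package matching the template's distribution from the VirtualGL SourceForge files page (the file names below are illustrative):

```shell
# Inside the template — install the downloaded package:
sudo dnf install ./VirtualGL-*.x86_64.rpm      # Fedora templates
# or, on Debian templates:
# sudo apt install ./virtualgl_*_amd64.deb
```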

Run vglserver_config as root, configure it for EGL, and answer no to the prompt about framebuffer device access.

Edit /etc/profile and append export VGL_DISPLAY=egl to the bottom.
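The two steps above sketched as commands (the vglserver_config prompts are paraphrased; /opt/VirtualGL/bin is where the upstream packages install the script):

```shell
# Inside the template, as root — configure VirtualGL for the EGL back end:
sudo /opt/VirtualGL/bin/vglserver_config
#   select the EGL-only configuration option, and answer "No" to the
#   prompt about restricting framebuffer device access

# Make vglrun use the EGL back end by default for all users:
echo 'export VGL_DISPLAY=egl' | sudo tee -a /etc/profile
```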

Shut down the template and create a new qube from it. Set its virtualization mode to HVM and its kernel to "provided by qube". Attach the secondary graphics card to it. Boot it, then run vglrun with a command to get accelerated graphics. Wrapping a shell also works (vglrun bash), avoiding having to type it every time, but there is no obvious way to automate this at startup.
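A sketch of those dom0 steps (the qube and template names are examples, as is the PCI address; permissive mode may or may not be needed for your card):

```shell
# In dom0 — create the qube and configure it for passthrough:
qvm-create --class AppVM --template gpu-template --label red gpu-qube
qvm-prefs gpu-qube virt_mode hvm
qvm-prefs gpu-qube kernel ''     # empty = kernel provided by the qube

# Attach the hidden GPU (the BDF uses underscores in qvm-pci syntax):
qvm-pci attach --persistent gpu-qube dom0:03_00.0 -o permissive=true
```

Inside the running qube, vglrun glxinfo should then report the passed-through card as the OpenGL renderer.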



Hello, thank you for this guide!

I already have GPU passthrough set up on one of my qubes, and I mostly use it to run hashcat or John the Ripper. Hardware acceleration is working just fine. Can you expand on what would be improved by using VirtualGL? I am definitely not familiar with it!


VirtualGL is a program designed to offload OpenGL processing to remote graphics devices, although "remote" in this case can also just mean a separate device in the same system. The main benefit for Qubes is eliminating the need for a secondary display for applications that use OpenGL: they can render 2D through the Qubes display driver and OpenGL on the graphics device, with VirtualGL compositing the result onto the 2D display. This also eliminates the need for separate audio configuration.

Programs like the ones you described use the GPU for compute, so it makes sense that they work headlessly. The main user group we had in mind for this guide was people using casual 3D applications such as Blender and FreeCAD, though games have been tested and work too.

I was also not familiar with it, but it is something my peer has been working with for a while. The guide was a 50/50 effort, with me providing extra hardware testing.


Update: AMD confirmed working with Mesa after disabling MSI in the template kernel command line with the parameter pci=nomsi.


Whenever I try to start the AppVM (based on the vgl template) to which I added my dGPU, it reboots my computer.
Do you have any solutions or understanding of why this might be happening?

The graphics device may be in a shared IOMMU group; there are more details on how to check in the linked Neowutran post.

Before disabling MSI in the template's GRUB configuration, passthrough of the AMD device was unstable and caused system halts. Also, if the device is not properly hidden from dom0, it may cause a system reboot.


Thanks for the help! I was using the wrong command to regenerate the GRUB config.
Now the GPU VM no longer crashes on loading, so I assume the GPU has been hidden from dom0 successfully.

I suspect that something is still not passed through properly, however, as when I benchmark the HVM it performs about as well as I'd expect from the CPU.
It may have something to do with either the open-source Nvidia drivers not being disabled properly (I tried this on a Fedora template) or my inexperience with OpenGL. (Running vglrun by itself lists a bunch of possible options, but I am not sure whether any were needed or whether I had to target a specific application.)

The kernel must be set to "provided by qube" so the packaged graphics modules are used, and vglrun is invoked with the application to be accelerated as its argument.
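For example (glxinfo and glxgears are illustrative test programs from the distribution's GLX utilities package, not part of VirtualGL):

```shell
vglrun glxinfo | grep 'OpenGL renderer'   # should name the passed-through GPU
vglrun glxgears                           # minimal accelerated test
vglrun blender                            # wrap any OpenGL application
```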


That’s exciting if you were able to get blender to work in this way.

What qubes template did you start from when you successfully tested it? fedora36? debian11? or something else?

Also, what version of VirtualGL did you have your success with? Was it VirtualGL 3.1?

How stable was it? Were you able to get actual work done?

I would love to give this a whirl, but your instructions are somewhat unclear. You state that you are mostly working off of the Neowutran guide, but I don’t know what steps I am supposed to use with your guide, where your instructions pick up and Neowutran’s stop, etc. Could you provide some further detail?


A few more specifics about my confusion:

  • Am I wrong in my assumption that this guide is setting up a VirtualGL server to be used to host secondary GPUs and provide access to other AppVMs?
  • Assuming that is correct, I completed the Neowutran guide up to the section “Linux guest - Integration with QubesOS.” Am I supposed to apply those configurations to my VirtualGL server or in my other AppVMs in order to give them access to the secondary GPU for rendering?

If that is NOT correct, I would like to explore an alternative/modification to this guide. Would it be possible to create a VirtualGL server with GPU passthrough and connect AppVMs using VGL transport? I assume this requires allowing inter-AppVM networking. This could be achieved a few ways:

  • Use the existing Qubes network to enable networking between VMs. However, this could create security issues since the local VirtualGL traffic would be handled by the same sys-net/sys-firewall that connects to the internet
  • Use the Qubes network server project to create a secondary internal-only sys-net/sys-firewall system so AppVMs and the VirtualGL server can communicate. This would be a little more secure by at least isolating the two network stacks. (Issue: is it possible to connect AppVMs to more than one network?)
  • This likely doesn’t exist, but if there was a way to use the VirtualGL server as the “guivm” for AppVMs and send the VGL traffic over some internal protocol (sockets or something?) instead of a TCP network stack, that would be even better. This would basically be the same idea as above only using communication at the Xen/dom0/Qubes level to send traffic between the AppVMs and the VirtualGL server

Does that make sense? Is any of that possible?

Hey if you’re still paying attention to this post I had a bit of a question -

When you run ‘vglrun [command]’ from the qube with the second GPU attached, how does one then integrate that back into Qubes?

I currently am able to start X11 with my secondary GPU, but that requires switching to a different monitor/input setup, which I think is the thing this post is attempting to streamline. So I have the GPU_Qube up and running, typed ‘vglrun MyProgram’… but where do we go from there? Can I ‘shunt’ this program through to a different App VM Qube somehow?

Thanks for the guide, hope to get this working soon!