Getting accelerated 3D on Qubes is probably one of the biggest pain points. Even if everyone could dedicate a second GPU to a single domain using PCI passthrough (leaving aside the difficulties involved), that GPU could not serve 2 domains simultaneously, and there would still be security implications even when 2 VMs use it non-simultaneously.
But there is another approach that (hopefully) could overcome all of this: streaming the OpenGL calls from individual domains to a “GPU server” in sys-gui-gpu.
I played with the idea, happy that prior art existed - although trying that original codebase (written for rendering on Android, and itself based on a codebase for rendering on a Raspberry Pi, no shit) requires some amount of motivation.
I’m not quite at the point of getting this to run in sys-gui-gpu, partly because I don’t have one running yet, but mostly because the original code is incomplete, only works when compiled as 32-bit code (and thus only with 32-bit apps), uses an insecure network protocol that shares and accepts raw pointers, and other fun stuff.
Nevertheless, I’ve started to poke at it as an experiment, to the point that I’ve been able to run es2gears. That is, a 32-bit version of it, with the “GLES server” running in the same domain with Mesa software rendering. And the small window that runs at 1000fps with plain Mesa runs at an anemic 70fps through the stream. That may not sound so promising, but given the state of the stack there is quite some room for improvement (something around 220fps looks within reach, which is finally not so bad), and I’m submitting this to your thoughts.
Edit: a simple removal of apparently-arbitrary throttling already turned the original 15x slowdown into a mere 5x slowdown. Stay tuned for hopefully better stuff.