Making Nvidia CUDA available

I have a business need to do some things with the LM Studio AI software. My workstation has an older Xeon, lacking AVX2 capabilities, and its GPU is a 6GB Nvidia 1060, so it’s a nonstarter.

The new laptop I just got for Qubes has a Xeon 2176M and an 8GB Nvidia P4200. That's still marginal for large models, but it's here and ready to go today. I would like to keep it on Qubes, but the business work will pay for a new workstation, so I've got to get moving.

Everything I’ve read thus far involves using HVMs with GPU passthrough and it’s typically done for gaming. If I just need CUDA access do I need an HVM, or is there a way to get CUDA running for one of the included templates?
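For anyone in a similar spot: before going down the passthrough rabbit hole, it's worth a quick check of whether the NVIDIA driver is even visible from inside the VM you're testing. This is a minimal sketch, not Qubes-specific, and it assumes the standard `nvidia-smi` tool ships with whatever driver package you install:

```python
import shutil
import subprocess

def cuda_gpu_visible():
    """Return True if nvidia-smi exists and reports a working GPU.

    If this returns False inside a template-based qube, CUDA code
    won't run there regardless of what toolkit packages are installed,
    since the VM can't see the physical GPU without passthrough.
    """
    # No driver tools installed at all -> definitely no CUDA here.
    if shutil.which("nvidia-smi") is None:
        return False
    try:
        # nvidia-smi exits nonzero when it can't talk to the driver/GPU.
        result = subprocess.run(["nvidia-smi"], capture_output=True)
        return result.returncode == 0
    except OSError:
        return False

print(cuda_gpu_visible())
```

In a default PVH-based AppVM this should print `False`, since the GPU isn't exposed to the VM at all; that's the quick way to confirm you really do need an HVM with passthrough rather than just missing packages.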

That seemed like a lot of stuff to wade through with no certainty that I could pull it off, or that there wouldn't be some weird hardware issue. So the Dell Precision 7730 now has Ubuntu Budgie, and it'll go back to Qubes once I replace my twelve-year-old workstation with something that's only nine or ten years old :slight_smile: