I have a business need to do some work with the LM Studio AI software. My workstation has an older Xeon without AVX2 support, and its GPU is a 6GB Nvidia 1060, so it’s a nonstarter.
The new laptop I just got for Qubes has a Xeon 2176M and an 8GB Nvidia P4200. That’s still marginal for large models, but it’s here and ready to go today. I would like to keep running Qubes, and the business work will eventually pay for a new workstation, but I’ve got to get moving now.
Everything I’ve read so far involves HVMs with GPU passthrough, and it’s typically done for gaming. If I only need CUDA access, do I need an HVM, or is there a way to get CUDA working in one of the included templates?
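For context, here’s the kind of sanity check I’d run inside a qube to see whether the GPU is even visible there; this is just a sketch using the standard library, and it assumes the NVIDIA driver (which provides `nvidia-smi`) is installed in the template:

```python
# Hypothetical sanity check: is an NVIDIA GPU usable inside this qube?
# In a qube without passthrough (or without the driver) this prints False.
import os
import shutil
import subprocess


def cuda_visible() -> bool:
    """Best-effort check for an NVIDIA GPU and a working driver."""
    # /dev/nvidia0 only appears when the GPU is attached to this VM
    # and the kernel module is loaded.
    if not os.path.exists("/dev/nvidia0"):
        return False
    # nvidia-smi ships with the driver and exits 0 when it can
    # actually talk to the GPU.
    smi = shutil.which("nvidia-smi")
    if smi is None:
        return False
    return subprocess.run([smi], capture_output=True).returncode == 0


if __name__ == "__main__":
    print("CUDA-capable GPU visible:", cuda_visible())
```

Right now this reports False in every qube on my setup, which is what prompted the question about whether passthrough to an HVM is the only route.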