AI with Qubes

Is anybody running LLMs locally in Qubes (Ollama, ComfyUI, etc.)?
If so, what method did you use, and what system requirements do those VMs need?

I like using LM Studio; you can also use Ollama, etc., of course.
Not sure what you mean by "method", it is no different from running it on any other Linux system.
Just note that you won't get GPU utilization unless you attempt passthrough, so inference will only use CPU and RAM.
I think LM Studio will be best for you, as it tells you which models will fit based on your system resources.
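As a rough back-of-envelope version of that "will it fit" check (this is not LM Studio's actual logic, just a common sizing heuristic: parameter count times bytes per quantized weight, plus some overhead for context and runtime), you can estimate whether a model fits in the RAM you've assigned to the VM:

```python
def model_mem_gb(params_billions: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough RAM estimate for a quantized model.

    params_billions: model size in billions of parameters (e.g. 7 for a 7B model)
    bits: quantization level (4 for Q4, 8 for Q8, 16 for fp16)
    overhead: fudge factor for KV cache, runtime buffers, etc. (assumed value)
    """
    bytes_per_param = bits / 8
    return params_billions * bytes_per_param * overhead


def fits_in_vm(params_billions: float, vm_ram_gb: float, bits: int = 4) -> bool:
    """Check the estimate against the RAM assigned to the Qubes VM,
    leaving ~2 GB headroom for the OS and other processes (assumed)."""
    return model_mem_gb(params_billions, bits) <= vm_ram_gb - 2.0


print(f"7B @ Q4: ~{model_mem_gb(7):.1f} GB")   # ~4.2 GB
print("Fits in a 16 GB VM:", fits_in_vm(7, 16))
```

So a 7B model at 4-bit quantization wants roughly 4 to 5 GB, which means the VM needs that much RAM on top of whatever the system itself uses; CPU-only inference will also be noticeably slower than GPU, so smaller quantized models are usually the practical choice in Qubes.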

Thanks, I will look into that.
I tried Jan, but it kept failing to download models, so I thought I'd ask whether someone else is running one.
I will reply after trying.