AI with Qubes OS?

I seriously doubt you are going to be able to fine-tune a model without a GPU.

PyTorch runs just fine in Qubes OS with CPU-only inference, but it's painfully slow, and even with a GPU, fine-tuning is time-consuming.
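
To give an idea, here is a rough sketch of what CPU-only PyTorch looks like inside a qube. It assumes you've installed torch with pip in the AppVM; the layer sizes are arbitrary and only illustrate the CPU fallback:

```python
# Minimal sketch: check for a GPU and time a forward pass on CPU in a qube.
# Assumes PyTorch was installed in the AppVM (e.g. `pip install torch`);
# the layer sizes are arbitrary, just to show the device fallback.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")  # in a default qube this will be "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(4096, 4096),
    torch.nn.ReLU(),
    torch.nn.Linear(4096, 4096),
).to(device)

x = torch.randn(32, 4096, device=device)
start = time.time()
with torch.no_grad():
    model(x)
print(f"Forward pass took {time.time() - start:.3f}s on {device}")
```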

It’s not hard to get it running in Qubes OS. See these threads:
Running local LLMs
Running local txt2img
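
As a rough example, CPU-only inference with a small local model via the transformers library looks something like the sketch below. "distilgpt2" is just a small example model; larger models work the same way but are much slower on CPU:

```python
# Minimal sketch of CPU-only text generation inside a qube, assuming the
# transformers library is installed (`pip install transformers torch`).
# "distilgpt2" is only an example of a small model.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)  # -1 = CPU
out = generator("Qubes OS isolates workloads by", max_new_tokens=40)
print(out[0]["generated_text"])
```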

You can probably find guides on fine-tuning and LoRA training, but I'm not sure you'd want to attempt it using only a CPU.
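
For reference, a LoRA setup with the peft library looks roughly like this (the base model and target modules are only examples, and actually running a training loop on CPU would take a very long time):

```python
# Hedged sketch of a LoRA setup with the peft library, assuming
# `pip install peft transformers torch`. Base model and target modules
# are examples only; this does not include the training loop itself.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("distilgpt2")

lora_cfg = LoraConfig(
    r=8,                        # low-rank dimension
    lora_alpha=16,              # scaling factor
    target_modules=["c_attn"],  # attention projection in GPT-2-style models
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```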
