AI with Qubes OS?

is there an AI model that can be run locally that has a good base?

I’m looking for one that can be easily trained. I want to feed it data from a forum so it can answer questions based on that data. What I’m looking for is a base model without restrictions, so I can feed it data on my own and make use of it, one that isn’t politically correct like ChatGPT, which I think is not so good at answering questions it’s not “meant” to respond to. I don’t own a GPU, so this has to run on the CPU. Hope this is not too much to ask for, just inquiring for information here.

thanks in advance
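One note on the “feed it forum data” part: what you describe is usually done with retrieval (RAG) rather than retraining, and the retrieval half needs no GPU at all. A minimal sketch, with made-up example posts and only the standard library; the final hand-off to a local model is left as a comment:

```python
# Minimal retrieval sketch over hypothetical forum posts: score each post by
# word overlap with the question, pick the best match, and pass it to a local
# LLM as context. Pure stdlib; the model call itself is just a placeholder.
from collections import Counter
import math

posts = [
    "To install a template in Qubes, run qvm-template install fedora-39",
    "You can limit a qube's RAM in the Qube Settings advanced tab",
]

def score(question, post):
    # Count shared words, lightly normalized for post length.
    q, p = Counter(question.lower().split()), Counter(post.lower().split())
    overlap = sum((q & p).values())
    return overlap / math.sqrt(len(post.split()) or 1)

def best_context(question):
    return max(posts, key=lambda p: score(question, p))

context = best_context("How do I install a new template?")
# prompt = f"Answer using this forum post:\n{context}\n\nQuestion: ..."
# ...then feed `prompt` to whichever local model you end up running.
```

Real setups replace the word-overlap scoring with embeddings, but the shape is the same: retrieve relevant posts, then let the model answer from them.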

if you find something, it’s going to be utterly slow :confused:

since it’s solely CPU-based, right? What do you suggest here? I like AI a lot, it has been useful for me, but I want more ‘freedom’ in getting the answers I need, I don’t feel like I’m in control… you know

I hope this is on topic for Qubes, but I remote from Qubes to a Minisforum EM680 on my local network. It’s roughly twice as fast running it on the EM680. My Qubes machine has DDR4-3200 and the EM680 has LPDDR5-6400, which suggests memory bandwidth is the most important factor for CPU LLMs. With my Qubes setup I get ~3 tokens/sec and on the EM680 I get ~5 tokens/sec on a 13B model. The speeds double for 7B models.
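Those numbers line up with a common back-of-envelope rule: generating each token streams essentially all the model’s weights from RAM, so tokens/sec is bounded by roughly bandwidth divided by model size. A quick sketch (the bandwidth and model-size figures below are approximations, not measurements from this thread):

```python
# Back-of-envelope: CPU token generation is memory-bound, so the ceiling on
# tokens/sec is roughly (memory bandwidth) / (bytes of weights read per token).
def est_tokens_per_sec(bandwidth_gb_s, model_size_gb):
    return bandwidth_gb_s / model_size_gb

# Dual-channel DDR4-3200 is ~51 GB/s theoretical; a 13B model quantized to
# 4 bits is roughly 7 GB in RAM (both figures approximate).
print(round(est_tokens_per_sec(51, 7), 1))  # theoretical ceiling, ~7 tok/s
```

Real throughput lands well below the theoretical ceiling, but the ratio explains why faster RAM roughly scales the speed and why halving the model size (13B → 7B) roughly doubles it.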

I seriously doubt you are going to be able to fine-tune a model without a GPU.

PyTorch runs just fine in Qubes OS with CPU-only inference, but it’s painfully slow, and even with a GPU, fine-tuning is time-consuming.

It’s not hard to get it running in Qubes OS.
Running local LLMs
Running local txt2img

You can probably find some guides on fine-tuning and LoRA training, but I don’t know if you want to do it only using a CPU.
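For a sense of why LoRA is the usual fallback when hardware is limited: instead of updating a full weight matrix, it trains a low-rank update ΔW = B·A, which shrinks the trainable parameter count enormously. A quick arithmetic sketch (the hidden size and rank below are typical assumed values, not from any specific model):

```python
# LoRA replaces a full d_out x d_in weight update with two thin matrices
# B (d_out x r) and A (r x d_in), so trainable parameters shrink drastically.
def full_params(d_out, d_in):
    return d_out * d_in

def lora_params(d_out, d_in, r):
    return d_out * r + r * d_in

d = 4096  # typical hidden size for a ~7B model (assumption)
r = 8     # a commonly used LoRA rank
print(full_params(d, d))     # 16777216
print(lora_params(d, d, r))  # 65536
```

That’s a ~256x reduction in trainable parameters per matrix, which is what makes fine-tuning even thinkable on modest hardware; it still doesn’t make CPU-only training fast, though.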


Here are threads about how to use gpt4all too. Feeding it all the forum posts, and if possible related internet articles, is a good idea. Hopefully there will be a Qubes AI qube one day that can answer all kinds of OS-related questions properly.

Can this be made unbiased, then?

my own bias, to be exact

I use dolphin-mixtral with the ollama program on my local computer, which has 64 GB of RAM. It is good at generating programming/coding answers.
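For anyone who wants to try the same setup, the basic ollama workflow looks like this (a hypothetical session; it assumes ollama is already installed in the qube, and the download size depends on which quantization ollama serves):

```shell
# Fetch the model once (a large download, tens of GB for a Mixtral variant)
ollama pull dolphin-mixtral

# Run it interactively, or pass a prompt directly on the command line
ollama run dolphin-mixtral "Write a Python function that reverses a list"
```

With 64 GB of RAM the whole model fits in memory, which is what makes CPU-only inference workable at all.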

The training part is quite hardware-intensive even outside Qubes OS. For training you need at least four RTX 4090 cards locally. That’s why most people rent this hardware in the cloud (best paid with Monero) and train their AI models there.


So, you are sure that “tomorrow” you won’t offer your services to anyone else, paid or not (including family and friends, who all have their own biases, I guess, most probably different from yours)?

I genuinely find this approach dangerous. I can’t avoid interacting with the outside world, and I’m not sure how using “my truth” would benefit me. Sincerely. I find it more useful to train myself to ask questions properly, so I can get politically incorrect answers, if that is the point. I’m already succeeding at this: I made Bing Chat answer me directly that the Earth isn’t overpopulated. Next is training myself to make it answer that human-caused climate change is BS.

I’d rather see this as a matter of privacy and security, and Qubes is perfect for that if I use it to train different personas.

I won’t make any more noise, sorry.

which service do you recommend?

Have a look at this video: This new AI is powerful and uncensored… Let’s run it - Invidious