GPT for Qubes

I don’t understand what you wrote, but I am interested in learning more. I do not want to use LinkedIn because of their registration requirements and invasive JavaScript.

On Qubes, you can run GPT4All in a VM that has a GPU attached, using an HVM with PCI passthrough. I don’t understand what you meant, but I am slower than many of the people on this forum.
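For anyone who wants to try this, here is a minimal sketch using the gpt4all Python package inside such an HVM. The model filename is just an example, and `device="gpu"` assumes the GPU has already been passed through to the VM (e.g. with qvm-pci in dom0):

```python
# Minimal sketch: run a local GPT4All model inside a Qubes HVM
# that already has a GPU attached via PCI passthrough.
from gpt4all import GPT4All

# Example model file; any GGUF model supported by GPT4All should work.
model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", device="gpu")

with model.chat_session():
    print(model.generate("Explain Qubes OS in one paragraph.", max_tokens=200))
```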


As long as you index your documents and have the patience to wait a few seconds for a response, it’s fine. I would like to see your HVM handle even 10 simultaneous requests (provided the stack you use supports parallel access) across thousands of documents. From the tests I’m doing, dedicated GPUs are what you need if you want to do something serious.
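To make "index your documents" concrete, here is one hypothetical way to do it with the gpt4all package’s Embed4All embedder and numpy. The document strings are invented, and this only illustrates the idea (embed chunks once, then answer queries by similarity search), not how any particular tool implements it:

```python
# Sketch of document indexing: embed text chunks once, then find the
# chunks most similar to a query with cosine similarity.
import numpy as np
from gpt4all import Embed4All

# Invented example corpus; in practice these would be chunks of your files.
docs = [
    "Qubes OS isolates workloads in separate qubes (VMs).",
    "A GPU can be attached to an HVM with qvm-pci in dom0.",
    "GPT4All runs local language models on CPU or GPU.",
]

embedder = Embed4All()  # downloads a small embedding model on first use
index = np.array([embedder.embed(d) for d in docs])  # one vector per chunk

def search(query: str, k: int = 2) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = np.array(embedder.embed(query))
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [docs[i] for i in np.argsort(scores)[::-1][:k]]

print(search("How do I give a VM a GPU?"))
```

The retrieved chunks are then pasted into the model’s prompt, which is roughly what GPT4All’s LocalDocs feature automates.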


I still don’t understand whether you are offering a guide to building a cluster of GPUs that someone could use in Qubes, or a way to remotely access GPUs and other resources from Qubes.

I am new to LLMs and not as smart as some of the other developers and engineers who use Qubes, but I still really like Qubes.

When you say "index your documents," I am not sure if you mean LocalDocs, and I don’t fully understand the purpose of LocalDocs yet.