llm
| Topic | Replies | Views | Activity |
|---|---|---|---|
| Secure AI Inference with Qubes OS: A GPU Passthrough & Ollama Guide | 5 | 364 | March 31, 2026 |
| Recommend a small local LLM model trained specifically on Qubes + Whonix documentation? | 8 | 569 | November 25, 2025 |
| Unable to run LLMs with an AMD GPU with ROCm | 5 | 543 | February 4, 2025 |
| Running local LLMs with or without GPU acceleration | 10 | 6436 | November 23, 2024 |