Ideally the app would run locally, offline.
Has anyone found one accurate enough to make it functional?
Kaldi is well known in the ASR space. What you are really looking for are well-trained models (a machine-learning term).
OpenAI’s Whisper runs relatively well on Qubes and works offline: GitHub - openai/whisper: Robust Speech Recognition via Large-Scale Weak Supervision. I don’t know if that’s what you’re looking for, though.
I am unable to install openai-whisper on QubesOS. My attempts to install it with pipx in an AppVM fail with the following error message:
user@Whisper:~$ pipx install openai-whisper
Fatal error from pip prevented installation. Full pip output in file:
/home/user/.local/state/pipx/log/cmd_2026-01-01_20.22.57_pip_errors.log
pip seemed to fail to build package:
MarkupSafe>=2.0
Some possibly relevant errors from pip install:
ERROR: Could not install packages due to an OSError: [Errno 28] No space left on device
Error installing openai-whisper.
But there really is enough storage space available:
user@Whisper:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/dmroot 197G 16G 173G 9% /
none 197G 16G 173G 9% /usr/lib/modules
devtmpfs 4,0M 0 4,0M 0% /dev
tmpfs 1,0G 4,0K 1,0G 1% /dev/shm
tmpfs 69M 728K 68M 2% /run
tmpfs 5,0M 0 5,0M 0% /run/lock
tmpfs 1,0M 0 1,0M 0% /run/credentials/systemd-journald.service
tmpfs 1,0G 4,0K 1,0G 1% /tmp
/dev/xvdb 89G 262M 88G 1% /rw
tmpfs 35M 116K 35M 1% /run/user/1000
tmpfs 1,0M 0 1,0M 0% /run/credentials/getty@tty1.service
tmpfs 1,0M 0 1,0M 0% /run/credentials/serial-getty@hvc0.service
Somehow I’m stuck right now. If anyone can spot the flaw in my thinking, I would be very grateful!
Would you be so kind as to share how to install OpenAI Whisper? I recently moved to Qubes OS from Mac, where it was easy to install using brew. What are the instructions? Did you install it in the template or the AppVM?
Also, you may need to increase the /tmp size. Do let me know if that solves the problem (and how - the exact terminal instructions).
I recall running into similar issues. I increased /tmp with mount -o remount,size=2G /tmp. Other than that, it ran just fine on a laptop with the small models, even without a GPU. (update: command had a typo)
Thank you very much, @deeplow! That was the crucial step I was missing. @alannis, since you asked: here are my notes on the topic, so I can look it up in a few months when I’ve forgotten everything but need to set it up again on a different system.
OpenAI Whisper via the package manager pip
Some programs are best installed using pip, such as the OpenAI Whisper tool, which can be used to create amazingly good transcripts of audio and video files (even without a GPU). I do this as follows:
First, I install the ffmpeg and pipx packages in a Debian template. Then I create an AppVM based on this template with 90 GB private storage, 8192 MB max memory, and 12 VCPUs (or as many as your computer actually has). Depending on the use case, I then select a suitable net qube for the AppVM’s internet access.
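As a quick sanity check before creating the AppVM, you can verify that both packages are actually available. This is just a sketch; the apt hint assumes a Debian-based template as described above:

```shell
# Check that ffmpeg and pipx are on the PATH; collect one status line each.
# (Assumes a Debian-based template, hence the apt suggestion.)
status=""
for tool in ffmpeg pipx; do
  if command -v "$tool" >/dev/null 2>&1; then
    status="$status$tool: found\n"
  else
    status="$status$tool: missing (install in the template with: sudo apt install $tool)\n"
  fi
done
printf "%b" "$status"
```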
Now I start the terminal in the AppVM and increase the temporary storage with mount -o remount,size=5G /tmp. (Otherwise Whisper cannot be downloaded; for smaller packages than openai-whisper, this step can be skipped.) Please note: this command must be re-entered after each restart of the VM!
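To see whether the remount is actually needed (or took effect), a quick look at the free space on /tmp helps. A small sketch; the 2 GiB threshold is taken from the size that worked upthread, while the notes use 5G to be safe:

```shell
# Report free space on /tmp in KiB and warn when it is below ~2 GiB,
# the size that was reported to be sufficient earlier in this thread.
avail_kb=$(df --output=avail /tmp | tail -n 1 | tr -d ' ')
if [ "$avail_kb" -lt $((2 * 1024 * 1024)) ]; then
  echo "/tmp has only ${avail_kb} KiB free - run: sudo mount -o remount,size=5G /tmp"
else
  echo "/tmp has ${avail_kb} KiB free - enough for the install"
fi
```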
We can now install whisper with pipx install openai-whisper && pipx ensurepath.
Then transcribe a first file, for example with whisper audiofile.mp3 --model turbo --language German. Whisper downloads the necessary model the first time it transcribes.
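Since the first run also downloads the model, a guarded invocation avoids a confusing failure when whisper is not on the PATH yet. A sketch; audiofile.mp3 is a placeholder name, and turbo/German are the model and language from the example above:

```shell
# Transcribe only if the whisper CLI is actually installed;
# otherwise print what is missing. audiofile.mp3 is a placeholder.
if command -v whisper >/dev/null 2>&1; then
  whisper audiofile.mp3 --model turbo --language German
  msg="transcription attempted"
else
  msg="whisper not found - run: pipx install openai-whisper && pipx ensurepath"
fi
echo "$msg"
```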
Now we can shut down the AppVM and deny it internet access. In the future, simply start it when needed, increase /tmp again, and transcribe whatever comes in from another qube.
Of course, everything else that pip offers can also be installed in the same way!
If you prefer the original python3-pip or pip3 over pipx, it is best to use a Fedora template as a basis, as this saves you from having to set up Python virtual environments manually. But be careful: in my experiments with pip on Fedora, Firefox no longer started in the qubes based on the same template. So it is best (as always) to use a separate template for this experiment.
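For reference, what pipx automates on Debian is roughly this manual virtual-environment setup. A sketch only; the venv path is arbitrary, and the final pip install is shown as a hint rather than run here because it downloads PyTorch and friends:

```shell
# Manual alternative to pipx: a dedicated virtual environment.
# The directory is a throwaway location just for this sketch.
venv_dir="$(mktemp -d)/whisper-venv"
python3 -m venv "$venv_dir"
echo "next step: $venv_dir/bin/pip install openai-whisper"
```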