GPT4ALL Flatpak Won't Run

This is running in a debian-12-xfce template with solene's flatpak hack applied.

flatpak run app/io.gpt4all.gpt4all/x86_64/stable
Failed to load libllamamodel-mainline-cuda.so: dlopen: libcuda.so.1: cannot open shared object file: No such file or directory
Failed to load libllamamodel-mainline-cuda-avxonly.so: dlopen: libcuda.so.1: cannot open shared object file: No such file or directory
constructGlobalLlama: could not find Llama implementation for backend: cuda
constructGlobalLlama: could not find Llama implementation for backend: cuda
[Warning] (Fri Mar 28 21:17:50 2025): WARNING: Could not download models.json synchronously
[Warning] (Fri Mar 28 21:17:50 2025): qrc:/gpt4all/qml/AddModelView.qml:139:13: QML AddHFModelView: Detected anchors on an item that is managed by a layout. This is undefined behavior; use Layout.alignment instead.
[Debug] (Fri Mar 28 21:17:50 2025): deserializing chats took: 0 ms
[Fatal] (Fri Mar 28 21:17:51 2025): Cannot mix incompatible Qt library (6.8.2) with this library (6.8.3)
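
The CUDA lines are only warnings (libcuda.so.1 isn't present in the qube); the run actually dies on the [Fatal] Qt mismatch, where two Qt copies (6.8.2 vs 6.8.3) end up loaded together, usually because the app and its Flatpak runtime have drifted out of sync. A rough sketch of what could be checked, assuming that's the cause here (not a confirmed fix):

flatpak info io.gpt4all.gpt4all      # shows which runtime branch the app expects
flatpak list --runtime               # shows which runtimes are actually installed
flatpak update io.gpt4all.gpt4all    # refresh the app
flatpak update                       # refresh runtimes so both sides match

If the app and runtime are already current on both sides, the mismatch would be baked into the published flatpak itself and would need a rebuild upstream.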

@solene

Seems like an issue in the flatpak package :woman_shrugging: