As I continue my journey with Qubes, I’m still unsure whether I should be using flatpak --user within domU AppVMs, or installing Flatpak system-wide on a template, as described in @solene’s guide. I definitely don’t want to install the dozen or so packages I regularly use onto a single template, but instead prefer to keep them isolated (hence using Flatpak).
I’m also pretty stubborn about using Salt or Ansible for configuration management. My goal is to keep config data (under version control), stateful data, and stateless data—like binary images and signed packages—as separate as possible, each managed appropriately and with different trust levels.
Since package installation is fast (especially when using a Local-First source like pulp-qube or NuGet via Gitea for Windows qubes), I was wondering if there are any hooks I can use to launch dispVMs and handle roles or profiles—perhaps through Salt or Ansible—to deploy or install the required packages for a dispVM on-demand, just-in-time, and ephemerally/idempotently?
I’m not sure what caveats exist with this sort of CI/CD-like setup, or how well it would integrate with @unman’s cacher. I was also considering using Trivy to scan anything that enters the local mirror, but first I wanted to hear from the experts here: How terrible of an idea is this? Let me know.
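To make the idea concrete, here is a minimal sketch of what such a just-in-time flow might look like from dom0, assuming Salt is available inside the disposable template. The disposable-template name, the disposable name, and the state name are all hypothetical placeholders:

```shell
# dom0: start a fresh disposable from a named disposable template
# ("dvm-work" is a placeholder)
qvm-run --dispvm=dvm-work xterm &

# dom0: apply a role-specific Salt state to the running disposable
# ("disp1234" and "roles.dev-python" are placeholders; --skip-dom0
# keeps the highstate off dom0 itself)
qubesctl --skip-dom0 --targets=disp1234 state.apply roles.dev-python
```

Whether this counts as "on-demand, just-in-time, and ephemeral" in practice depends on how fast the state applies against a warm cacher.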
Thank you! I personally quite like the idea. apt-cacher-ng, or a local package cache in general, finally clicked for me; I had come across caching before but never gave it a proper look.
A single cacher qube seems promising to me, but I would like to hear others' views too.
I think running Trivy with per-template / per-named-disposable CVE severity thresholds that return warnings for the user's review could be useful, and could theoretically be combined with further automation using Salt.
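A minimal sketch of the per-template threshold idea, assuming Trivy is installed in the qube being checked. The severity values and the warning handling are illustrative, not a fixed policy:

```shell
#!/bin/sh
# Scan the qube's root filesystem. With --exit-code 1, trivy returns
# non-zero when findings at or above the listed severities exist, so a
# Salt wrapper can turn that exit status into a warning for review.
SEVERITY="HIGH,CRITICAL"   # could in principle be set per template via qvm-features
trivy rootfs --severity "$SEVERITY" --exit-code 1 / \
  || echo "trivy: findings at $SEVERITY or above; review before trusting this image"
```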
Caching might just make it simpler to have more fine-grained isolation without worrying as much about update hygiene, time, or bandwidth, resulting in better overall compartmentalization and more minimal templates at the same time. It could even allow having fewer templates if one relies on JIT disposables.
When it comes to spinning up new disposables with custom, selectable packages on-premises, I'm not sure I would do that personally, but I think it's a good idea.
Within a few days I might try it and experiment with different air-gapping flows/architectures, to see how easy each is to implement while reducing points of failure and inaccessibility.
I also need to set up firewall rules for the fetcher disposable, to allow it to fetch through apt-cacher-ng only; more importantly, the cache storage will stay offline and be used locally only.
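A sketch of the firewall side in dom0, assuming the cacher qube sits at a fixed internal IP and apt-cacher-ng listens on its default port. The VM name, IP, and port are placeholders to adapt:

```shell
# dom0: drop the default allow-all rule, then permit traffic from the
# fetcher disposable only to the cacher, and deny everything else
qvm-firewall fetcher-dvm del --rule-no 0
qvm-firewall fetcher-dvm add accept dsthost=10.137.0.10 proto=tcp dstports=3142
qvm-firewall fetcher-dvm add drop
```

Note this only constrains network traffic; if the fetching instead goes over qrexec (as with @unman's cacher), the relevant control point is qrexec policy rather than the firewall.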
If I recall correctly, qvm-features can assign custom metadata to VMs as key-value pairs, and I would use them with Salt, but I need to try it first within the next few days (it would require Salt in the disposable template).
The VM would be able to access the value of its own "roles" metadata and so on, but I've got an idea to encrypt that custom, user-added VM metadata if you consider it sensitive.
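For the metadata side, a sketch of what that could look like, relying on the Qubes convention that features prefixed with `vm-config.` are exposed to the qube through QubesDB. The feature name and value here are hypothetical:

```shell
# dom0: attach a role to a qube as a key-value feature
qvm-features my-dvm vm-config.role 'dev-python'

# inside the qube: read the role back from QubesDB,
# e.g. for a Salt state or rc.local script to branch on
qubesdb-read /vm-config/role
```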
You can create an AppVM (or disposable template) with a script in /rw/config/rc.local which installs your required packages in a dispVM on boot, making the installation VM-specific and volatile.
I have such a setup for one of my VMs. Obviously, that is not practical for very large packages.
If it should be non-volatile, then the package must either reside in /rw (e.g. a portable version) or be otherwise accessible (e.g. by mounting, at boot, an external volume where it is installed).
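A minimal sketch of such an rc.local, assuming a Debian-based template. The package names are illustrative, and you would point apt at your cacher if you use one:

```shell
#!/bin/sh
# /rw/config/rc.local -- runs on every boot of the qube (must be executable).
# Packages installed here land in the volatile root volume, so they
# disappear again at shutdown; /rw itself persists in an AppVM.
apt-get update -qq
apt-get install -y --no-install-recommends ripgrep jq
```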