How I learned to love Liteqube (and why you should, too, even if you have enough RAM)

The early installation scripts had glitches, and I can understand why people are hesitant to use Liteqube. However, I assure you that it is very much worth it, even if you have 64 GB of RAM like I do.

The biggest advantage of Liteqube is not the reduced RAM usage, but rather the more organized and secure template system for service qubes. It’s faster, more secure, conserves resources, and is more reliable. There are fewer moving parts, and those that remain are expertly configured in a fail-safe way. Personally, I love the read-only rootfs on disposable qubes, which shows that things are done correctly.

But I run sys-* from minimal template DVMs already; there is little to be done beyond that!

Wrong. The “minimal” templates are actually quite bloated, they lack the dedicated security features and elegance of Liteqube, and the way you fine-tune them for a specific purpose is barbaric.

Furthermore, some people criticize the lack of essential features in core-* services. However, I haven’t yet found anything that I would really miss. There are no notification icons from the service qubes, and for good reason: this should be done completely differently, definitely not by running a dependency-heavy GUI app from within the service qube. Regular core-* qubes are fully headless, and that is a good thing. Someday those icons will be implemented via a core-xorg qube and RPC calls, as they should be, just not right now.
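Purely as a sketch of what such a design might look like one day (nothing like this ships yet; the RPC name my.TrayUpdate and the qube names are invented for illustration), the dom0 side could be a single qrexec policy rule:

# Hypothetical example only: my.TrayUpdate is an invented service name.
# The idea: a headless core-* qube pushes tray-icon state to core-xorg over
# qrexec instead of running a GUI app inside the service qube itself.
echo 'my.TrayUpdate * core-net core-xorg allow' | sudo tee /etc/qubes/policy.d/30-tray-example.policy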

The advantages of a secure design and organized installation scripts go far beyond your current threat model. In practical terms, 99% of the attack surface (excluding OPSEC-dependent factors, and once we take your mobile phone out of scope) is your browser, not your sys-net. Nevertheless, we all like having fewer things to worry about, and Liteqube provides peace of mind in this area. Additionally, it won’t let you shoot yourself in the foot by mindlessly installing applications into your “minimal” template, while still retaining all the essential functionality and providing a nice framework for the things you really need!

Ah, the updates proxy. Updates are now cached in tmpfs, so you no longer have to wonder why your updatevm suddenly stopped working as intended or which filesystem ran out of space. Why hasn’t this been done before?
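I have not checked which directory Liteqube actually redirects, so treat the following as a generic illustration of the idea rather than its real configuration: inside the update qube, back the package cache with tmpfs so it is wiped on every shutdown and can never fill a persistent volume.

# Illustration only; the cache path and size are assumptions, not Liteqube’s config.
echo 'tmpfs /var/cache/apt/archives tmpfs defaults,size=2G 0 0' | sudo tee -a /etc/fstab
sudo mount /var/cache/apt/archives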

When I looked at Liteqube, the installation was indeed buggy. One reason why I have not looked further is that the developer has not (I think) brought it forward to the “official” side as an enhancement or community contribution. That seems odd.

I never presume to speak for the Qubes team. When I comment in the Forum or in the mailing lists I speak for myself.
As of now, there is one known issue with “chmod” (I tried to figure out what’s wrong with it and did not understand it either). You can just comment out the problematic line and the rest works flawlessly; I tried the installation several times and never ran into problems with the recent version.

Also, liteqube/Contrib.SmartCard at main · arkenoi/liteqube · GitHub

redacted

Why would that be an issue for debian-core? The only thing that is needed in service qubes is up-to-date hardware support, which is seemingly good enough (there were minor issues with fido2, but those are long gone).

Let me rephrase my question:

  • are you using Liteqube as your daily driver?
  • are all sys-* using debian-core as their template?
  • are you using other templates like Kali, Parrot or Fedora for assignments?
  • are you using a GUI (sometimes)?

are you using Liteqube as your daily driver?

Yes

are all sys-* using debian-core as their template?

They are called core-*, yes, except sys-whonix, which I previously managed to substitute with core-tor, but now it does not work so well with the anon-whonix workstation.

are you using other templates like Kali, Parrot or Fedora for assignments?

Yes

are you using a GUI (sometimes)?

core-* VMs are headless and do not run X11 programs. To get a terminal on them, a separate core-xorg qube is created.
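(For orientation only: Liteqube’s own route is the core-xorg qube, but the stock dom0 tools can also reach a headless qube; the qube name core-net below is just an example.)

qvm-run --pass-io --no-gui core-net 'uname -a'    # run a one-off command with no GUI involved
qvm-console-dispvm core-net                       # attach to its serial console via a disposable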

Repo updated with latest release.

Ah, just tried a fresh install and it turned out to be a bumpy road again. Please submit bug reports by opening issues on GitHub! I will do my best to handle them.

Damn! The python3-fido2 issue also re-emerged somehow; I needed to enable the Debian 12 testing repo to work around it. Why does no one report bugs to me? :))

I’m guessing most people are not ready to spend so much effort in order to achieve this next level of security, even those using Qubes OS (including me). But I would like to say thank you for the interesting path forward to an even more secure configuration. Hopefully soon it will be easier to achieve for “lazy” users.

I am willing to test and develop some more (would need to move from bash to Python, though) once 4.3 is released. Lots of changes are coming.

Mostly Salt. Unfortunately, Salt does not offer much for handling state transitions and rollbacks per se; you still need to do that manually :confused:

Now testing with 4.3; it works more or less so far.

I don’t see an issue tracker on your GitHub page. Here’s an issue I’m debugging right now; I’ll post it on your issue tracker once you let me know how:

When running the script, all goes well until:

e2fsck 1.46.5 (30-Dec-2021)
e2fsck: No such file or directory while trying to open /dev/mapper/qubes_dom0-vm--debian--core--private
Possibly non-existent device?
$ ls /dev/mapper

lists only control and luks devices.

This is on BTRFS.

This happens because it’s using the reflink driver. So the filesystems are stored in /var/lib/qubes/.

You would need to modify the script to do

sudo kpartx -av /var/lib/qubes/vm-templates/debian-core/root.img

and the third field of the third output line will be the loop device, e.g. loop12p3.
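To make that concrete, here is a rough sketch of how the lookup could be scripted; the awk field extraction and the cleanup step are my additions, not part of the existing script, and the path assumes the default debian-core template name:

IMG=/var/lib/qubes/vm-templates/debian-core/root.img
# Map the partitions inside root.img and grab the third field of the third
# output line, i.e. the device-mapper name for partition 3 (the root filesystem).
DEV=$(sudo kpartx -av "$IMG" | awk 'NR==3 {print $3}')
sudo e2fsck -f "/dev/mapper/$DEV"
# Remove the mappings again once finished.
sudo kpartx -dv "$IMG"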

But why are we going to all this effort to shrink them anyway? The space allocated to a VM is not actually taking up any storage. The code to shrink it should be removed, or at least moved to a separate ‘optimizations’ script. The idea is that you want the base install script to be as robust and simple as possible to attract users.

You are probably right. I attached some sentimental value to the shrink part, like “see how small it is now!”, but the practical reason behind it is close to zero these days.

Also, it now breaks on 4.2 because new Debian templates use an obscure e2fs feature not supported by the Fedora 37 dom0. And it was the wrong way to do it anyway, since dom0 is not actually protected from malformed data on the guest filesystem; the secure, recommended way is to attach the block devices to disposable VMs for all manipulations.
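For illustration, a minimal sketch of that safer pattern using stock Qubes tools, assuming a file-based pool and a running disposable qube named disp-fsck (both are assumptions on my part, not Liteqube’s actual code):

# In dom0: expose the template image as a loop device instead of fsck'ing it here.
LOOP=$(sudo losetup -f --show /var/lib/qubes/vm-templates/debian-core/root.img)
# Hand the block device to the disposable qube, so dom0 never parses the guest filesystem.
qvm-block attach disp-fsck "dom0:${LOOP#/dev/}"
# Inside disp-fsck the volume shows up as /dev/xvdi; run e2fsck -f /dev/xvdi there.
# Afterwards, detach and release the loop device.
qvm-block detach disp-fsck "dom0:${LOOP#/dev/}"
sudo losetup -d "$LOOP"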

My bad, fixed now.