What would you like to see improved in Qubes OS?

I would like named disposables to show up as needing a restart after an update to their template in the Qubes popup menu (the one that shows running VMs), just as AppVMs do. (I asked about this elsewhere, and the point was made that restarting a disposable is effectively impossible, but that reasoning applies mainly to unnamed (disp1234) disposables. Almost every VM I run is a named disposable.)

Likewise, anything else that shows as needing to be updated.

2 Likes

For example sys-usb and sys-firewall?

1 Like

I still think that a label indicating that a qube is behind on updates is important, even for unnamed disposables.

That is not what the “restart flag” means; it just indicates that the template has changed.

Every time you start and stop a template, all qubes using that template are flagged for restart. That might not be ideal for disposables.

I looked at the code in Qube Manager yesterday. It actually does this: if any of the qube’s volumes (root/private) has the is_outdated property set to True, it shows the outdated flag. So there is no magic needs_restart property or feature. And the implementation of is_outdated is highly filesystem-specific (a method on an abstract class); here is the LVM implementation, for example.
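For illustration, here is a minimal sketch of that check against the qubesadmin client library. It assumes each Volume exposes an is_outdated() method mirroring the core-admin one; the exact client-side API may differ between releases, so treat this as a sketch, not the actual Qube Manager code.

```python
# Minimal sketch of the "outdated" check described above, using the
# qubesadmin client library. Assumption: Volume.is_outdated() is exposed
# client-side the same way as in core-admin.
import qubesadmin

app = qubesadmin.Qubes()
for vm in app.domains:
    if vm.klass == 'AdminVM':   # dom0 has no snapshotted volumes; skip it
        continue
    try:
        outdated = any(vol.is_outdated() for vol in vm.volumes.values())
    except AttributeError:
        outdated = False        # storage driver without is_outdated support
    if outdated:
        print(f'{vm.name}: volume outdated, would be flagged for restart')
```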

If this is to be solved properly, it should be solved in multiple places across the core/core-admin/manager repos. So it takes some work and effort (and unit tests).

In the end, the GUI Updater advises the user to restart the disposable service qubes (sys-usb, sys-firewall). That is not a perfect workaround, but it is a workaround. Given the cost/benefit, this issue may not rank high for full-time core developers, or for hobbyist community devs who may be working on other issues.
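For completeness, a sketch of that workaround as it could be scripted from dom0, using the standard qvm-shutdown/qvm-start tools (the qube name is the one from the post):

```python
# Sketch: restart a named disposable service qube from dom0 so it picks
# up the updated template. Uses the standard dom0 qvm-* CLI tools.
import subprocess

def restart_qube(name: str) -> None:
    # --wait blocks until the qube has actually shut down
    subprocess.run(['qvm-shutdown', '--wait', name], check=True)
    subprocess.run(['qvm-start', name], check=True)

# Note: shutting down sys-firewall is refused while other running qubes
# still use it as their NetVM; shut those down (or re-point them) first.
restart_qube('sys-firewall')
```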

1 Like

The most frequent reason for a template change is updates. Even apart from that, it’s usually important to know that template changes have not yet been applied to your qube.

I’m surprised to hear this isn’t working with lvm_thin? It works with file-reflink (Btrfs etc.). To be clear, Qube Manager can also show the 🔁 State-column icon for unnamed disp1234 qubes, because (despite the opinionated tooltip) the actual meaning of that icon is simply “one of the volumes is outdated”, in the sense that the source volume was modified after this volume was snapshotted from it. Which IMO is good to know even if you can’t remedy it by restarting the qube.

Edit: Never mind, you were talking about the Qubes Domains widget, not Qube Manager.

1 Like

If a disposable (statically named or not) is supposed to get an indication when its template changes, then it makes sense for it to get one when its disposable template (an AppVM) changes, too.

2 Likes
  • sys-vpn by default
  • sys-i2p by default
  • sys-net with BSD
  • Alpine and Android templates by default
  • A cloud AppVM by default, attachable to any other qube, with basic cloud services like MEGAsync, Cozy Drive, S3, s3fs… and tools like gocryptfs, Cryptomator, SiriKali
  • Tails OS running fully in RAM
  • VeraCrypt hidden volumes for AppVMs and reliable plausible deniability
  • Easy integration of AppImages with Bwrap rules
  • Wayland
  • LibreWolf by default
  • A portable system (portable GRUB and dracut errors)
  • An ISO like the Debian netinstall, with options such as Journalist, Academic, Designer, Maker, or Developer AppVMs with basic tools.
2 Likes

This is practically out of scope for Qubes, since it’s about the templates. More discussion: Is Firefox really an appropriate default browser for Qubes?

1 Like

A sudo password should be enabled by default. It’s a good idea to allow users to set a password for each AppVM, for this important reason:

The primary reason to even consider any kind of root account isolation within Qubes AppVMs is to make it more difficult for attackers to launch attacks against Xen (most, or perhaps all of the XSAs that affected Qubes required root in the VM).

1 Like

This will also show the outdated sign for unnamed (disp*) disposables, won’t it? That might make the user believe that its name (or, even worse, its private volume) might somehow survive the restart.

I don’t see how a tiny sign can make a user believe that a disposable can survive a restart. I mean, it’s even in the name of the qube. This thinking is overprotective and harms important features.

I disagree. The option to enable a root password/prompt already exists. You can harden your system if you want. Qubes OS is already complicated enough for new users.

The lack of root passwords makes my life much easier with Qubes. Also, actual practical attacks against Xen allowing a real VM escape are extremely rare. The price of a sudo vulnerability is much lower than that of a Xen attack, and if an attacker can pull off the latter, the former is not hard.

It’s a balance between reasonable security and reasonable usability. It’s also a reminder that Qubes protects your VMs from each other, which is its main strength. Intra-VM protections are much weaker, and you should not rely on them for strong security.

3 Likes

The original request was only for named disposables (e.g. sys-firewall, sys-usb, sys-audio, …). But if the sign is going to be shown for all disposables, so be it; personally, that is fine with me.

Correct. I should have been clearer, but I wasn’t on my Qubes system and couldn’t remember the name of the widget. Qube Manager works ideally.

Actually, I originally wanted this for all disposables (wherever it was that I brought it up). Someone (possibly you?) made the point that it hadn’t been done because restarting a disposable gives you a different VM anyway. They were thinking of unnamed (disp1234) disposables, and that’s a decent point, but I run almost entirely on named disposables and would like to know when they need restarting. (And as someone upstream (fslover, I think) pointed out, sys-net and the like are usually named disposables that you’d probably want to restart as soon as practicable.)

So right now I don’t care much whether unnamed disposables are flagged in the Domains widget… though they actually are flagged in Qube Manager, so for consistency’s sake they probably should be. But I really want named disposables flagged in the Domains widget.
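To make the named/unnamed distinction concrete, here is a sketch of what the widget would have to do, assuming (as in current Qubes releases) that a named disposable is a DispVM with auto_cleanup set to False:

```python
# Sketch: flag only *named* disposables (DispVMs that persist across
# shutdown, i.e. auto_cleanup == False) whose volumes are outdated --
# roughly what the Domains widget would need. Uses qubesadmin; see the
# is_outdated() caveat in the earlier sketch.
import qubesadmin

app = qubesadmin.Qubes()
for vm in app.domains:
    if vm.klass != 'DispVM':
        continue
    if getattr(vm, 'auto_cleanup', True):   # disp1234-style qube, skip it
        continue
    if any(vol.is_outdated() for vol in vm.volumes.values()):
        print(f'{vm.name} needs a restart to pick up template changes')
```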

1 Like

I don’t understand why there are so many anti-sudo-password comments in the Qubes community.
I don’t want to belabor it, but this sudo topic has come up constantly ever since Qubes became well known.
Developers should build mechanisms that make password-protected sudo easily available without security bugs.
With such options in place, users could still choose to have sudo without a password.
By the way, we use Qubes for security and privacy. It’s sad that Qubes shipped without root account isolation by default.

I’m just copy-pasting this 2017 comment by Joanna Rutkowska:

I’ve been recently talking about this with Solar Designer of Openwall (a person who probably knows more about Linux security model than most of us together), and below I try to summarize the outcomes of our discussion:

The primary reason to even consider any kind of root account isolation within Qubes AppVMs is to make it more difficult for attackers to launch attacks against Xen (most, or perhaps all of the XSAs that affected Qubes required root in the VM).

This "protect Xen" goal is significantly more important than any kind of in-VM isolation, e.g. ability to run different apps as different user accounts. Indeed, current Xorg doesn't make this feasible architecturally even, and in fact it's been one of the Qubes fundamental goals to fix this (long before anybody started even talking about Wayland that apparently also tries to fix this).

The obvious problem with using any kind of control mechanism for sudo, such as the one proposed in this ticket, is that once we open a root console in the VM, it still runs under the same Xorg that is being used by non-root apps in that VM and these other apps can launch a number of attacks against this already opened root console, such as keystroke injection to name the most obvious. Again, one of the goals of Qubes is to fix this problem by... introducing the concept of an AppVM. However, given that our goal is currently: not allowing attackers to get root in the VM, we need something else...

A solution proposed by Solar is to start another Xorg in the VM -- as root -- whenever the user decides he or she wants to start a root shell in the VM. While in principle there is nothing that should prevent this, in practice there will likely be lots of minor PITAs with this. E.g. both our GUI agent and daemon have been written with the assumption that for each AppVM there is only one daemon (in dom0) and one corresponding GUI agent (in the VM), which results in some sockets/vchan ports being hardcoded in the code.

A potentially alternate solution might be to not use GUI virtualization for performing any root operations in the VM. Indeed, one can use qvm-run -u root -p to get "raw", shellcode-like access to the VM. But not having a real PTY means this cannot be used e.g. to run vim or any other curses-based app. And of course WE DO NOT WANT to pipe the output of qubes.VMShell (which is what qvm-run uses) to an actual PTY-implementing code, for this would likely be a security disaster.

So, it seems that the option with starting the 2nd Xorg for the root user seems the most secure solution (+ disabling sudo for user). Unfortunately at this moment we (ITL core team) do not have resources to work on this...

Still, it might be worthwhile to enable this sudo qrexec-authorization by default for our default template, after all. This is because, for many AppVMs, there will never be a need for the user to start a root shell. Indeed, if we think about why a user might want a root shell (see below), then it might turn out that often there should never be a need for starting a root terminal in the VM, and this could then be easily achieved by what is proposed in this ticket.

So, why might a user want to start a root terminal in a VM? Here is an initial list:

  • customize some scripts in /rw/config (typically in "devel" VMs)
  • run docker in a VM (typically only in "devel" VMs, where builds run)
  • run gdb, tcpdump, nmap, etc. (also typically in "devel" or "admin" VMs)
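A minimal sketch (not from the quoted comment) of the qvm-run path mentioned above, driven from dom0; the qube name "work" is a placeholder:

```python
# Sketch of the "raw" root access path from the quote: qvm-run with
# --user root and --pass-io, i.e. no GUI virtualization and no PTY.
import subprocess

def run_as_root(qube: str, command: str) -> str:
    """Run one command as root in the given qube and return its stdout."""
    result = subprocess.run(
        ['qvm-run', '--user', 'root', '--pass-io', qube, command],
        capture_output=True, text=True, check=True)
    return result.stdout

print(run_as_root('work', 'id'))   # expect: uid=0(root) gid=0(root) ...
```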