What would you like to see improved in Qubes OS?

Done @solene. The thread was fairly long and the topics sometimes entangled; I did the best I could so that both threads still make sense.

6 Likes

Better hardware compatibility. I have three laptops, all made from 2019 onward, from different manufacturers, and I have only been able to get one of them to work with Qubes (and even then with no external monitors). Too bad it’s the only laptop I have that is powerful enough to play games on… I could dual-boot or get a new system, but I’d either need to buy more storage or buy a new laptop.
That being said - I also understand the incompatibilities are a result of the very specialized requirements. I just really like the idea of Qubes and would love to try and use it as my main OS.

2 Likes

With Qubes it isn’t a question of whether Qubes wants to support the hardware. It’s not that there are no drivers yet for these devices; it’s that some devices have features, by design, that don’t fit the Qubes security philosophy and findings.
You don’t buy hardware first and then discover Qubes; it has to be the other way around.

3 Likes

sys-usb speed, huh? I tested USB block devices and they seem to run at full USB 3.2 speed. I did not have a chance to try USB 4.0, however.

1 Like

What’s your CPU model, and what’s the load on sys-usb? I suspect it uses a lot of CPU, and maybe slower CPUs are a bottleneck for sys-usb speed.

1 Like

Template-wise I would love to see:

3 Likes

A couple of minor things:

  1. Yes, this has to do with backup, but it belongs here. When you back up one qube, the dang thing insists on printing a list of the qubes you AREN’T backing up. I’d like a flag that suppresses that. I’m using backup as a means of moving qubes between machines, and this message is really distracting (with the number of qubes I have, the meaningful info flies off the top of my screen).

  2. Many of the qvm utilities will print a message instructing you on how to use the command if you type a non-existent VM as the VMNAME parameter. How about just printing “no such vm” instead of assuming the user doesn’t know the basic command syntax? In some cases I’m looking for the status, or checking from inside a script whether the qube even exists; an error message telling me I don’t know how to use the command is completely inappropriate there.

2 Likes

I would like to have a video training course and a book available. The help provided in this forum is not always easy for everyone to follow.

6 Likes

I’ll throw out a variant: a qvm-type command to test for the existence of a qube that won’t throw an error if it doesn’t exist. qvm-check throws an error and writes out a “you idiot, you got the command syntax wrong” error message.

The lack of such a thing is really messing up my salt scripts. I want to do something ONLY IF a qube exists, and it’s NOT in fact an error if the qube does not exist… but salt pukes up an error anyway. (There are “only if” qualifiers in some salt states, but as far as I know there’s no way to check the existence of a qube in them… again, without erroring out.)

1 Like

It seems to exist already:

[solene@dom0 ~]$ qvm-check backup
qvm-check: backup: exists
[solene@dom0 ~]$ echo $?
0

[solene@dom0 ~]$ qvm-check backu
usage: qvm-check [--verbose] [--quiet] [--help] [--all] [--exclude EXCLUDE]
                 [--running] [--paused] [--template] [--networked]
                 [VMNAME [VMNAME ...]]
qvm-check: error: no such domain: 'backu'
[solene@dom0 ~]$ echo $?
2

edit: I didn’t read completely at first, but qvm-check exits with status 2 if a qube doesn’t exist, so you can discard its output and just look at the exit code :slight_smile:

3 Likes

That’s basically what I was doing.

I was simply trying qvm-check other-qube 2> /dev/null && run-some-command, but salt was still detecting the exit status of 2.

I just now hit on the answer, which was to wrap the command: if qvm-check other-qube 2> /dev/null; then run-some-command; fi. That hides the return status successfully. (This is a cmd.run state in salt.)

Still, I’d like a simple test for qube existence and/or non-existence that doesn’t write a long error message if the qube doesn’t exist. Just write “does not exist” instead of “exists”. I’m constantly having to wrestle with gratuitous error output when doing various things (this is only one example among many), and --quiet options don’t shut it off as one might expect.
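
In the meantime, a tiny wrapper along these lines gives me roughly what I want; qube_exists is just my own name for it, and --quiet is the flag shown in qvm-check’s usage message above:

# quiet existence test: branch on the exit code alone, discard all output
qube_exists() { qvm-check --quiet "$1" > /dev/null 2>&1; }

if qube_exists some-qube; then echo "exists"; else echo "does not exist"; fi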

2 Likes

qvm-check xxx > /dev/null 2>&1

2 Likes

Without knowing what the something you want to do is, it’s difficult to help.
In the simplest case, using require and qvm.exists is fine.
For example, exists.sls has:

testit:
  qvm.exists:
    - name: QUBE

create.sls has:

include:
  - exists

test:
  qvm.present:
    - name: test
    - label: purple
    - require:
        - sls: exists

This will create the qube test only if QUBE exists.
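
To try it, assuming both files live under /srv/salt in dom0, something like this should apply the state (the exact qubesctl invocation may vary by release):

sudo qubesctl state.apply create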

3 Likes

unman, thank you for your response!

I did not realize you could make an entire sls a “require” (and that you could link such a thing with include); thus far I had only used “require” for other states in the same sls.

Given my knowledge level, I did try something similar to this… basically I put the “exists” state in the same sls file as the command (which was to make Qube A the default dispvm of Qube B). I want to do this both where Qube A is created and where Qube B is created, to ensure it gets done regardless of the order in which the two are created; but of course each sls must check for the existence of the qube created by the other sls before making the attempt.

The problem, at least with the single-file solution, is that when the state to run the command executes, it returns false because the prerequisite is missing, and I have an apparent failure of the entire sls file to run properly. It’s not a failure, though; I simply want to run the command “onlyif” the other qube is already present.

And since your suggestion uses “require”, I suspect it would behave similarly… though I haven’t tried it yet.

I like to use “exists” to check preconditions for even starting to run the states in an sls file. For example, if I am going to delete sys-net (and its dvm template, and the dvm template’s template) and then rebuild it, I damn well want to make sure the clone I made to temporarily handle sys-net duties actually exists! In that case, yes, I do want my sls file to report an error to me; using exists as a “require” for the first actual command in the sls file is perfect for that.

But in my current case, both outcomes are correct; it’s not an error I must stop and “correct” if the other qube doesn’t exist (yet). A similar thing to what I am looking for is cmd.run’s onlyif qualifier, which I have seen used to check for the existence of a file before editing it. That allows the check to be made in the same state as the command, so either outcome is “valid” (provided there isn’t some other bona fide error, that is) and not a reason to throw red messages on my screen and report error 20 at the end.

In fact, if there were a way to check qube existence in an onlyif it would be nearly ideal; I can run qvm-prefs on a command line. (In fact that’s what I did do, for some reason.) If the qvm.prefs state had onlyif, that would be even better; I could even get rid of a lot of ugly jinja in some places. However it doesn’t appear to be an option, according to the full list of qubes salt states on github.

In any case I did solve this… by running qvm-prefs as a command line, inside an if block. So the command itself becomes my atomic “do this only if the qube exists” operation. As I reported elsewhere here, I ended up doing a cmd.run on if qvm-check other-qube 2> /dev/null; then qvm-prefs ....; fi. I did have a bit of trouble hitting on that exact command; other things I tried had mysterious syntax errors, but only in the salt file, or would leak the qvm-check error to the state, which would then report False. But it’s past me now.
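
For anyone following along, here is roughly what that state looks like, with hypothetical qube names standing in for my real ones (the actual qvm-prefs arguments are the ones I elided above):

set-default-dispvm:
  cmd.run:
    - name: if qvm-check --quiet qube-a 2> /dev/null; then qvm-prefs qube-b default_dispvm qube-a; fi

Standard Salt cmd.run also accepts an onlyif argument, so - onlyif: qvm-check --quiet qube-a ought to give the same skip-without-error behavior, though I haven’t verified that under qubesctl.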

It’s clunky but it seems to be working.

Thanks again for your response. I hope I’ve given some food for thought for improvements to some of the “qvm” states in salt.

1 Like

Now it exists: GitHub - jamke/qubes-completion: Bash completion for Qubes OS

2 Likes

I would like to see a “kill networking” option to “unplug” a qube from all networking. I believe this can be accomplished with Xen’s command-line tools, but for other users, having this option in the GUI may be better. I would also like to see an option to “freeze” a qube and save its RAM contents, to inspect its state for possible intrusion. It would be great to see management options for a qube in xfwm’s “window menu”.

I very much like what I see @Rudd-O has been doing with ZFS for Qubes, and I look forward to seeing full native integration of ZFS and Qubes.

4 Likes

@de_dust2 said:

I would like to see a “kill networking” option to “unplug” a qube from all networking. I believe this can be accomplished with Xen’s command line tools, but for other users having this option in GUI may be better for them.

It’s possible in the GUI already: simply set the qube’s network VM to none. This works even if the qube is running. In the GUI: open the Qube Manager, select the qube you want to cut off, right-click, and there’s an option to change the network. Running qubes that provide network will be listed (you cannot use a qube that isn’t running), but “none” is also available.

For others reading this, on the command line it’s qvm-prefs <qubename> netvm '' (I’m going from memory, so the command syntax may not be quite right, but at least now you know where to look to find out.)
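
For a hypothetical qube named personal, that would look like this from dom0 (the empty string sets the property to none):

qvm-prefs personal netvm ''            # cut the qube off from all networking
qvm-prefs personal netvm sys-firewall  # reconnect it later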

Other than the filesystem, what advantages does ZFS bring to Qubes?

2 Likes

This is great to know. It sounds like if a user needs to “unplug” a VM ASAP, the right move is to switch to a workspace where qubes-qube-manager is already open, right-click on the target qube, move the cursor to the “Network” sub-menu (which does not have a hotkey assigned), then select “none”. I’ll test whether the effects of clicking “none” are immediate.

I think what would be best for most users, especially new users, is to have options like “unplug networking” and “freeze qube” right in xfwm’s window menu.

Data integrity with checksums, and self-healing with mirrored disks, are major benefits. I have been disrupted enough by data corruption in the past that I see mirrored disks and ZFS’ self-healing as something of a must-have. ZFS’ incremental snapshots and incremental replication are ideal for efficient backups. ZFS has a copy-on-write design and seems to handle snapshots and clones much more gracefully than LVM. On a side note, I would like to maintain a history of TemplateVM snapshots so that I can roll back if necessary or examine them later (I’m sure LVM can do this too).

ZFS also makes upgrading to faster, higher-capacity, or just fresh disks (SSDs do wear) very easy: swap one disk of a mirrored vdev, let ZFS resilver, rinse and repeat.
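
Concretely, the two workflows I mean look roughly like this; pool, dataset, snapshot, and host names are all placeholders:

# incremental replication: send only the delta between two snapshots
zfs snapshot tank/data@snap-new
zfs send -i tank/data@snap-old tank/data@snap-new | ssh backup-host zfs receive backup/data

# swap one disk of a mirrored vdev, then wait for the resilver; repeat for the other disk
zpool replace tank old-disk new-disk
zpool status tank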

An approach to ZFS that has been working for me for a while is to attach the disks to a qube, let’s call it sys-zfs. For whatever reason Qubes picks up on what is in /dev/zvol/$zpool/ in sys-zfs, and I have been able to attach a zvol, say one named bar, to another qube, say one named baz. The qube baz sees a /dev/xvdi, which is formatted and opened as a LUKS volume with cryptsetup luksOpen /dev/xvdi foo, providing /dev/mapper/foo when decrypted. I would prefer to have sys-zfs serve an already-decrypted LUKS volume, i.e. have /dev/mapper/foo opened in sys-zfs, rather than serving the raw /dev/zvol/$zpool/bar from sys-zfs. Still, the current arrangement has been working well for now.

I wanted transparent compression for the application I was going to run, so in qube baz I created a zpool backed by /dev/mapper/foo, mounted a dataset at /home/user/data/, and symlinked ~/.config and other directories to paths under /home/user/data.
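
Spelled out, the moving parts look roughly like this; the zd0 device identifier is a guess (qvm-block in dom0 shows the real one), and the other names match the examples above:

# dom0: attach the zvol's block device from sys-zfs to baz
qvm-block attach baz sys-zfs:zd0

# in baz: open the LUKS layer, then build a compressed pool on top of it
sudo cryptsetup luksOpen /dev/xvdi foo
sudo zpool create -O compression=lz4 -O mountpoint=/home/user/data datapool /dev/mapper/foo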

3 Likes

Basically all of this is filesystem. Which is not to denigrate it, far from it! I was primarily wondering if there was something else as well.

I’ve managed, over the past year or so, to get myself into a state where almost no data actually lives in my qubes. It mostly lives on a NAS which runs… ZFS. On the downside, what actually lives on ZFS is encrypted containers, and alas, inside those containers the filesystem is ext-something or (sometimes) NTFS (gaaack!).

I agree a ZFS template would be a good option to have. And now you have me wondering if I can possibly put ZFS into a VeraCrypt container. (It’s not one of the readily available options.) And… I should work on installing at least the ability to read ZFS in my qubes. Thank you!

1 Like

By ZFS standards, yes. Btrfs does some of what ZFS does, but I don’t want to bet my data on it. bcachefs looks really cool but is still in development. Other filesystems such as ext4 don’t do what ZFS does as far as disk management, snapshots, and healing (checksums) go.

This is good. ZFS is the only thing that makes sense to have on a bare disk. Whatever is on top of ZFS is still going to fsync() to whatever is underneath, and if what is underneath is ZFS, that’s ideal. In industrial settings I have seen Windows run better in VMs on ZFS with compression enabled.

1 Like