Can somebody clarify how the qubes-update-check service works and how dom0 gets update notifications for TemplateVMs that are never powered on?

The AppVM checks for the available updates.

In the AppVM, five minutes after boot, qubes-update-check.timer launches qubes-update-check.service, which checks for available updates and then notifies dom0 whether updates are available for the TemplateVM.

Example with the test-f35 AppVM based on the fedora-35 TemplateVM:
[user@test-f35 ~]$ sudo systemctl list-units| grep update
  systemd-update-utmp.service                       loaded active exited    Record System Boot/Shutdown in UTMP
  qubes-update-check.timer                          loaded active waiting   Periodically check for updates
  unbound-anchor.timer                              loaded active waiting   daily update of the root trust anchor for DNSSEC

[user@test-f35 ~]$ systemctl cat qubes-update-check.timer
# /usr/lib/systemd/system/qubes-update-check.timer
[Unit]
Description=Periodically check for updates
ConditionPathExists=/var/run/qubes-service/qubes-update-check

[Timer]
OnBootSec=5min
OnUnitActiveSec=2d

[Install]
WantedBy=multi-user.target

[user@test-f35 ~]$ systemctl cat qubes-update-check.service
# /usr/lib/systemd/system/qubes-update-check.service
[Unit]
Description=Qubes check for VM updates and notify dom0
ConditionPathExists=/var/run/qubes-service/qubes-update-check
After=qubes-qrexec-agent.service

[Service]
Type=oneshot
ExecStart=-/usr/lib/qubes/upgrades-status-notify

[user@test-f35 ~]$ cat /usr/lib/qubes/upgrades-status-notify
#!/usr/bin/bash

set -e

upgrades_installed="$(/usr/lib/qubes/upgrades-installed-check)"
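
# The two qrexec calls below notify dom0 via the qubes.NotifyUpdates service,
# which reads a number from the call's output: 0 means no updates pending,
# a value greater than 0 means updates are available for this VM's template.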

if [ "$upgrades_installed" = "true" ]; then
    /usr/lib/qubes/qrexec-client-vm dom0 qubes.NotifyUpdates /bin/sh -c 'echo 0'
elif [ "$upgrades_installed" = "false" ]; then
    /usr/lib/qubes/qrexec-client-vm dom0 qubes.NotifyUpdates /bin/sh -c 'echo 1'
fi

[user@test-f35 ~]$ cat /usr/lib/qubes/upgrades-installed-check
#!/usr/bin/bash

## `echo`s:
## * 'true' - if all upgrades have been installed
## * 'false' - if there are pending upgrades
## * nothing - if apt-get is currently locked
##
## Forwards the exit code of the package manager.

if [ -e /etc/system-release ]; then
    ## Fedora
    if command -v dnf >/dev/null; then
        yum=dnf
    else
        yum=yum
    fi
    # shellcheck disable=SC2034
    yum_output="$($yum -yq check-update)"
    exit_code="$?"
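    # dnf/yum "check-update" exit codes: 100 = updates available, 0 = none, 1 = error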
    [ "$exit_code" -eq 100 ] && echo "false" && exit 0
    [ "$exit_code" -eq 0 ] && echo "true"
elif [ -e /etc/debian_version ]; then

[...]

fi

exit "$exit_code"


Thanks @ludovic !

So…

By doing the above, I actually turned off update detection, so available updates are no longer detected by AppVMs and reported to dom0.

@unman: also, it seems that if AppVMs do not have updates-proxy-setup enabled, and the update proxy is not made available to AppVMs by RPC policy, then no updates can be reported to dom0 either.

I can test this and report back. But if cacher is made available only to templates and not to AppVMs, as per previous discussions (AppVMs trying to access the http://HTTPS/// URLs and failing through their netvm), no available updates will be reported to dom0. This is definitely a bug/blind spot to be covered, and another reason to permit AppVMs to talk to repositories through the proxy, not the netvm.
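
To check which qubes.UpdatesProxy policy currently applies, a quick look from dom0 is enough (a sketch; the two paths below cover the new policy format and the legacy one, and may differ on a given install):

[user@dom0 ~]$ sudo grep -rs UpdatesProxy /etc/qubes/policy.d/ /etc/qubes-rpc/policy/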

@ludovic seems not to have fully answered your question. How exactly is the update check done? We only got an answer about how the template gets the notification. Do AppVMs have to be online?

So, do I understand correctly: when templates are cloned on purpose so that only offline AppVMs are created based on them, those cloned templates would never get update notifications either? If offline AppVMs go online for the update check, how exactly are they offline then? I don’t want my vault qube to go online to check for updates for its template.

So, it’s not about the cacher, but about being online?

Unfold and read my “Example with the test-f35 AppVM based on the fedora-35 TemplateVM” above. Everything is there; you will see: dnf -yq check-update (for Fedora).

No, dom0 gets the notification.

Only one AppVM needs to be online for a given TemplateVM.

For a given TemplateVM, if all the AppVMs are offline, you can follow the available updates with updates-status and manually launch Qubes Update as described in the 2nd section of the OP’s post.
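
If it helps, the same information can be followed from dom0 (a sketch; I am assuming the feature written by qubes.NotifyUpdates is named updates-available, with an empty result meaning no pending notification):

[user@dom0 ~]$ qvm-features fedora-35 updates-available
1
[user@dom0 ~]$ qvm-ls --raw-data --fields NAME,CLASS | grep TemplateVM | awk -F "|" '{print $1}' | while read tpl; do echo "$tpl: $(qvm-features "$tpl" updates-available 2>/dev/null)"; done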

Maybe both; do the tests yourself, all the commands are in my first post.


While waiting for some updates to come up so I can test whether I understand your previous post and post the outputs here (and I guess I do understand, which is exactly why I posted my question as an addition to the OP’s case), thanks for your response, but it basically obscured the point of my post, which is:

  • Templates that aren’t run and don’t have qubes based on them
  • Templates that aren’t run, and have qubes based on them, but they’re not online*,

will never be updated, unless done manually.

(this is the point; it’s not about the workflow where dom0 gets a notification, the update launcher shows up, etc., etc.)

And I’m not aware of this being documented anywhere, and I could bet most users are unaware of the fact, which could be considered a security risk per se.

Even if no updates have been detected, you can use this tool to check for updates manually at any time by selecting “Enable updates for qubes without known available updates,” then selecting all desired items from the list and clicking “Next.”

And that’s all. This definitely needs terms such as “have to” or “should” (I guess now “community-made docs” are coming up ahead. But how, when the community was unaware of it?)

*the meaning of “online” is what I’d like to test, as stated from the beginning of the post.


Why is it a security risk? If the template is never run and there are no qubes based on it, then the fact that it is never updated causes no harm. It doesn’t need to be updated, because it is never used.

“Ah,” you might say, “But what if I want to start using it someday? Then it’ll be terribly out-of-date!”

True, but the same can be said of a fresh template that you install in dom0 from the official repo. For example, if you install the Debian 10 template from the official repo in dom0 right now, you’ll have a template that hasn’t been updated in over two years! You will then simply update it yourself before you start using it, which the documentation already says to do. (From what I understand after asking the devs, this is not a problem so long as there have not been any bugs discovered in the template OS’s update mechanism since then.)

There is an issue here in that I don’t make use of the update check -
it’s disabled by default in my standard configuration. I simply don’t
want qubes spewing known traffic of their own accord.
I should make this clear, or change this public package to replace
sys-firewall as stock netvm. (The default for updates is to apply them
to all templates.)
If you have a template and all the qubes using it have no netvm set,
then that template will never show available updates. The only work
round for this is to attempt updates for all templates.
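
For the record, "attempt updates for all templates" can also be done from
the dom0 command line (a sketch, assuming the salt-based updater shipped
with 4.1; the GUI equivalent is ticking "Enable updates for qubes without
known available updates" in Qubes Update):

sudo qubesctl --skip-dom0 --templates state.sls update.qubes-vm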

The other issue with the current implementation (actually the only one
that will work despite the obvious flaws) is that it provides
notification after a qube has been used. It isn’t hard to conceive of
scenarios where this is at best unfortunate.

It would be possible to circumvent these issues by disabling the per
qube check, and replacing it with a per-template check. (Effectively
this is what running qubes-update against all templates does, but that
actually processes the updates.)
I’ve toyed with this - you can limit the checks to some extent by taking
advantage of the fact that a cloned template shows the name of the
originating base template in qvm-features. An initial check of the base
can often reduce the number of checks required.
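
Something like this, as a sketch - assuming the originating template is
exposed through the template-name feature that rpm-installed templates
carry and that clones inherit (my-cloned-template is just a placeholder
name):

qvm-features my-cloned-template template-name
fedora-35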

Perhaps this discussion should be merged into the other thread on use
of the cacher, rather than splitting discussion.

I never presume to speak for the Qubes team.
When I comment in the Forum or in the mailing lists I speak for myself.

Ah, I can see why it would be desirable to update templates that are actively used but only for offline qubes. In fact, I have some of these myself and do keep them updated, as I’ve been in the habit of updating all of my templates as a big batch since the olden days. I can understand why newer users might not have this habit if they’re just responding to update notifications, so I suppose it is worth stating this explicitly in the documentation. Added:


@adw @unman

The conclusion is that the update check is enabled by default but depends on either updates-proxy-setup or internet access through the netvm to download repository data. This means that the more app qubes are running, the more Packages files and repository data are downloaded in parallel if the default tinyproxy is used instead of apt-cacher-ng properly configured to cache downloaded data, which can then be reused by every app qube of the same parent TemplateVM to report available updates to dom0.

We will take a corner-case example, but you will get what I mean, and maybe it should be linked to UX, @ninavizz. Let’s take the case of offline qubes without any netvm connected (vault, and any other specialized templates created for vault-related, separate, unconnected AppVMs, for which the software used is still expected to be kept updated for safety. We can then extend that to sys-net, sys-usb, sys-gui, sys-log, sys-xxx).

Update checks depend on two possible qubes services, addressing different cases: app qubes having a netvm assigned (1) and app qubes not having a netvm assigned (2, plus 3 required in the case of cacher/apt-cacher-ng usage), so that update notifications appear in dom0 on a regular basis:

  1. To check update availability for parent templates: qubes-update-check
    • This normally happens through the qube’s netvm for networked qubes.
    • In the case of apt-cacher-ng, the update check will fail silently since the repository URLs are modified: a networked qube won’t be able to resolve the http://HTTPS/// URLs through its netvm.
  2. To be able to talk to the update proxy (tinyproxy/apt-cacher-ng) and get the package update lists even with no netvm: updates-proxy-setup
  3. For the update proxy to be reachable from AppVMs (vault/sys-usb, etc.), the Qubes RPC policy needs to be modified so that any qube can talk to the update proxy. In my use case, so that all app qubes can report update notifications if the updates-proxy-setup service is activated on a template’s child qubes:
[user@dom0 ~]$ sudo cat /etc/qubes/policy.d/30-user.policy 
qubes.UpdatesProxy  *  @anyvm  @default  allow target=cacher
#qubes.UpdatesProxy  *  @tag:whonix-updatevm    @default    allow target=sys-whonix
#qubes.UpdatesProxy  *  @type:AppVM  @default  allow target=cacher
#qubes.UpdatesProxy  *  @type:TemplateVM  @default  allow target=cacher

Notes:
For service qubes, updates-proxy-setup is expected to be activated on the qube so it can talk through the update proxy. This applies to sys-usb/vault and any other user-defined “air-gapped” qubes. As we know, tinyproxy is totally dumb here and will permit anything on app qubes to talk through tinyproxy to access the internet. For apt-cacher-ng, file patterns are defined and only URLs matching http://HTTPS///, as defined under the parent Templates, will be permitted to go through the update proxy.
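
For a single service qube, that would look something like this from dom0 (a sketch; sys-usb is just an example):

[user@dom0 ~]$ qvm-service sys-usb updates-proxy-setup on
[user@dom0 ~]$ qvm-service sys-usb updates-proxy-setup
on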

In my current testing:

  • shaker/cacher is safer, but requires AppVMs to have updates-proxy-setup enabled and the dom0 policy to be changed in order to get update notifications.

For UX:

  • The Qubes update GUI should show the last update timestamp (currently missing) so that users have an idea of when they last updated their templates.
  • The Qubes update GUI should show when the last “updates available” notification was received so that users have an idea of when they last received an update notification.

This is a case where an update of the documentation alone leaves users vulnerable.

To diagnose the status of the current system:

  1. Obtain the status of the qubes-update-check service for all non-template qubes (i.e., see which AppVMs will check for updates and report to dom0):
    qvm-ls --raw-data --fields NAME,CLASS | grep -v TemplateVM | awk -F "|" '{print $1}' | while read qube; do if [ -n "$qube" ]; then echo "$qube:qubes-update-check"; qvm-service "$qube" "qubes-update-check"; fi; done
  2. Obtain the status of the updates-proxy-setup service for all non-template qubes (i.e., see which AppVMs will download/report updates through the defined update proxy [important if using cacher, since otherwise app qubes cannot talk to the repositories]):
    qvm-ls --raw-data --fields NAME,CLASS | grep -v TemplateVM | awk -F "|" '{print $1}' | while read qube; do if [ -n "$qube" ]; then echo "$qube:updates-proxy-setup"; qvm-service "$qube" "updates-proxy-setup"; fi; done

In the case of offline qubes based on specialized templates, users that have apt-cacher-ng might want to activate the updates-proxy-setup service on those qubes to be notified of available updates. The offline qube will then be able to talk to the repositories defined in its Template through the http://HTTPS/// URLs and report update availability to dom0. Otherwise, it won’t be able to report update notifications to dom0. Also note that the dom0 policy needs to permit @anyvm to talk to the proxy, otherwise the result is RPC-denied error notifications in dom0.

@unman: Let’s remember here that when using cacher, apt-cacher-ng manages its own cache expiration, caches package lists, and serves them to anyone asking for package updates to be pushed to dom0 as an update notification. So yes: the first app qube asking for a package list will actually have cacher download that package list. Other app qubes asking for the same package list will download it from cacher, not the internet, so no additional external traffic is generated. Basically, any app qube depending on a specialized Template will of course have additional repository information downloaded through cacher; but again, that package list will be downloaded only once for each specialized template. After all, this is an expected advantage of cacher for users: download once for all specialized templates.

Here again, please correct me:
EDIT: qubes-update-check is effectively activated by default, but requires internet access, or updates-proxy-setup to be activated in app qubes so they use the update proxy (important to reduce the bandwidth of multiple app qubes checking for updates of the same Template through cacher).

  • A default Qubes installation has qubes-update-check enabled by default for all qubes. The defaults also activate the service under Qubes Global Settings:
    [screenshot of the Qubes Global Settings update check option]
    @adw: this might mean (please confirm) that package lists are downloaded and re-downloaded (since they are not cached by tinyproxy) by all qubes through the update proxy, even for a non-connected app qube like vault. The use of cacher reduces the bandwidth needed here, with or without specialized-template usage.

EDIT: The following was false and was hidden:

Summary

Consequently, and important to know: vault is actually a source of dom0 update notifications, since it refreshes package lists through tinyproxy on a default installation, as does any other supposedly "air-gapped" qube.

EDIT: the truth is: vault and air-gapped qubes, having no netvm, would depend on the updates-proxy-setup service being enabled, which it is not by default.

@unman: Also note that the cacher app qube itself must not have the updates-proxy-setup service activated, otherwise all update proxy requests will fail with a dom0 RPC policy denial (since cacher would try to redirect the requests, which obviously fails).

So if one attempts to enable updates-proxy-setup on all defined templates/AppVMs, they must make sure that cacher does not enable the service for itself:

USE WITH CAUTION: THIS TURNS ON THE updates-proxy-setup SERVICE FOR EVERY QUBE ON THE SYSTEM EXCEPT CACHER ITSELF (grep -v "^cacher" exclusion):
qvm-ls --raw-data --fields NAME,CLASS | grep -v "^cacher" | awk -F "|" '{print $1}' | while read qube; do if [ -n "$qube" ]; then qvm-service "$qube" "updates-proxy-setup" on; fi; done
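
To later revert every qube back to the default for this service, the same loop can be run with the unset flag instead (a sketch; -D resets a service to its default, as clarified further down in this thread):

qvm-ls --raw-data --fields NAME,CLASS | grep -v "^cacher" | awk -F "|" '{print $1}' | while read qube; do if [ -n "$qube" ]; then qvm-service "$qube" "updates-proxy-setup" -D; fi; done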

@adw: This needs to be confirmed, as said in the previous post, but as of today, I think vault (an air-gapped qube, right?) is actually downloading package lists for the repositories defined in its parent Template through its defined update proxy, and is therefore a dom0 update notifier.

This needs confirmation. I might have the time to check it, while still a bit confused by the qvm-service report, since it reports deviations from the defaults only, and the defaults are defined under Qubes Global Settings (as far as I recall, from memory) to check updates on all qubes.

Thank you for taking this seriously and talking to devs.
Thanks to @unman for clarifying the second bulleted point from my previous post, and consequently that user (and now obviously wider) unawareness of the case is a security risk per se. So: the unawareness is the risk per se, not the not-updated template. That is why I really think the few words @adw will eventually add to the docs would definitely increase liability, if nothing else.
Using an outdated KeePassXC, with bugs yet to be discovered, in my offline vault is something not only I but anyone would want to avoid at all costs, and if that’s not a risk, then we should actually turn off updates completely.
Thanks to @Insurgo for elaborating on my idea of testing @ludovic’s point about only one AppVM per given TemplateVM needing to be online.
Since there will be inexperienced users who blindly implement the idea of updating over qrexec, and considering that even experienced users aren’t clear on the terms offline and online:

From post 5 in topic 14096:

Then, you say “offline Debian mini-pc” which is “cable tether”-ed

Whoops, terminology failure, my bad. By offline I meant “not directly connected to the internet”, after a short trip to an online dictionary I now realize that’s not exactly what offline means.

It would probably be a good idea for this to be emphasized somewhere in the docs, too.

Thanks to @Insurgo for pointing out vault’s specifics. In my setup only 3 VMs have netvms set: sys-fwall, sys-whonix and cacher, while updates-proxy-setup is used on all non-templates except vault and vault-alikes; on the latter I have specifically added the updates-proxy-setup service in Settings and deselected it, explicitly disabling the service that way, thus preventing vault from going online should a future decision change the actual default state of the service.
But according to @Insurgo’s terrifying conclusion, even this isn’t enough, so I’m not clear whether disabling the update check in vault would have implications for the overall system in every possible way imagined.

EDIT: massively and repeatedly edited.
EDIT 2: wrong conclusions here! As corrected below by @AndyZ:

Corrected under Can somebody clarifies how qubes-update-check service works and how dom0 gets update notifications for TemplateVMs that are never powered on? - #17 by Insurgo

Summary

This is wrong and was hidden so others don’t take it as true.

As said, qvm-features reports only deviations from the defaults.
The conclusion is that qubes-update-check and qubes-proxy-setup are off by default. @adw @enmus: Sorry for the noise.

This can be confirmed by setting an AppVM back to default (qvm-service AppVM ServiceName -d) and then checking its status (qvm-service AppVM ServiceName):

[user@dom0 ~]$ qvm-service untrusted qubes-update-check -d
[user@dom0 ~]$ qvm-service untrusted qubes-update-check
off
[user@dom0 ~]$ qvm-service untrusted qubes-proxy-setup -d
[user@dom0 ~]$ qvm-service untrusted qubes-proxy-setup
off

The help for qvm-service says that -d (lowercase) is “disable service (same as setting “off” value)”
It is -D (uppercase) that is used to “unset service (default to VM preference)”

I have tried this, and then qvm-service vm-name gives no output, neither on nor off, but qubes-update-check is present in /var/run/qubes-service (implying this is on?).
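
For illustration, the difference looks like this (a sketch; untrusted is just an example qube, and the lack of output after -D matches what I am seeing):

[user@dom0 ~]$ qvm-service untrusted qubes-update-check -d    # lowercase: explicitly disable ("off")
[user@dom0 ~]$ qvm-service untrusted qubes-update-check
off
[user@dom0 ~]$ qvm-service untrusted qubes-update-check -D    # uppercase: unset, fall back to the default
[user@dom0 ~]$ qvm-service untrusted qubes-update-check
[user@dom0 ~]$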

No services are listed in the vm settings services tab (though they are listed in the “Select a service:” drop down).

However, trying sudo dnf check-update -v gives…

error: Curl error (6): Couldn't resolve host name for https://mirrors.fedoraproject.org/metalink?repo=fedora-36&arch=x86_64 [Could not resolve host: mirrors.fedoraproject.org]

This seems consistent with updates-proxy-setup being set to off.

Also, even if updates-proxy-setup and qubes-update-check are both on, I am finding that sudo dnf check-update -v is failing if there is no allow policy set for qubes.UpdatesProxy.

The error is slightly different…

error: Curl error (56): Failure when receiving data from the peer for https://mirrors.fedoraproject.org/metalink?repo=fedora-36&arch=x86_64 [Recv failure: Connection reset by peer]

and the GUI notifications show Denied: qubes.Updatesproxy


I’m so sorry for the confusion here; @AndyZ is totally right: -d turns the service off, -D puts the service back to its default status.

Thanks to @ludovic for his awesome explanation, giving implementation details:


Now let’s test with vault (an example of an offline (air-gapped) qube), which is expected not to go online at all:

[user@dom0 ~]$ qvm-service vault
[user@dom0 ~]$

No customized services enabled.

Let’s check journals for service traces:

[user@vault ~]$ sudo journalctl --boot  0 | grep -e updates
Oct 12 11:04:21 vault systemd[1]: Started qubes-update-check.timer - Periodically check for updates.
Oct 12 11:04:21 vault systemd[1]: qubes-updates-proxy-forwarder.socket - Forward connection to updates proxy over Qubes RPC was skipped because of a failed condition check (ConditionPathExists=/var/run/qubes-service/updates-proxy-setup).
Oct 12 11:04:21 vault systemd[1]: qubes-updates-proxy.service - Qubes updates proxy (tinyproxy) was skipped because all trigger condition checks failed.

Update check enabled, but no proxy setup enabled.
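
The same can be seen from the service flag files inside the qube (a sketch consistent with the journal above: the qubes-update-check flag is present, the updates-proxy-setup flag is not):

[user@vault ~]$ ls /var/run/qubes-service/ | grep -e update -e proxy
qubes-update-check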

Using shaker/cacher here (but the default tinyproxy is just a different update proxy exposed to the AppVM, so just ignore the http://HTTPS/// prefix in the URLs below):

[user@vault ~]$ sudo dnf update
Fedora 36 openh264 (From Cisco) - x86_64        0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Fedora 36 - x86_64 - Updates                    0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'updates':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'updates': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]

So with the default service settings (qubes update check enabled but update proxy setup disabled), the update check fails (no network access succeeds, and no proxy socket is enabled for the update check to succeed).

Now let’s change only the update proxy setup service and see the outcome:

[user@dom0 ~]$ qvm-shutdown vault
[user@dom0 ~]$ qvm-service vault updates-proxy-setup off
[user@dom0 ~]$ qvm-service vault
updates-proxy-setup  off
[user@dom0 ~]$ qvm-run --pass-io --no-gui vault "sudo dnf update"
Fedora 36 openh264 (From Cisco) - x86_64        0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Fedora 36 - x86_64 - Updates                    0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'updates':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http&countme=3 [Could not resolve host: HTTPS]
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'updates': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
[user@dom0 ~]$ qvm-shutdown vault
[user@dom0 ~]$ qvm-service vault updates-proxy-setup -D
[user@dom0 ~]$ qvm-service vault
[user@dom0 ~]$ 
[user@dom0 ~]$ qvm-run --pass-io --no-gui vault "sudo dnf update"
Fedora 36 openh264 (From Cisco) - x86_64        0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'fedora-cisco-openh264':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'fedora-cisco-openh264': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=fedora-cisco-openh264-36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
Fedora 36 - x86_64 - Updates                    0.0  B/s |   0  B     00:00    
Errors during downloading metadata for repository 'updates':
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]
  - Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http&countme=3 [Could not resolve host: HTTPS]
Error: Failed to download metadata for repo 'updates': Cannot prepare internal mirrorlist: Curl error (6): Couldn't resolve host name for http://HTTPS///mirrors.fedoraproject.org/metalink?repo=updates-released-f36&arch=x86_64&protocol=http&protocol=http [Could not resolve host: HTTPS]

All good. But if vault were an app qube of a Template with no other app qube getting online to have a successful update check reporting updates to dom0, that Template would never get update notifications pushed to dom0.

Behavior of default services is:

So the defaults are good and the documentation update is good.
BUT: users might never manually update a Template for which no update notification is available, leading them to use outdated software without knowing it.

Documentation is good, but warning the user would be better than requiring the user to read everything to be aware of everything. A UX enhancement would be welcome.

My prior recommendation still applies for UX improvements of the Qubes Update widget (@adw @ninavizz @marmarek @deeplow @Sven, please link wherever that applies; a documentation update is not enough for the UX experience and a deeper understanding of what is truly happening when users specialize templates):

  • The Qubes Update widget should tell the user the last time dom0 received an update notification for a specific template, directly in the Qubes Update widget.
    • Adding a “Last received update notification” timestamp column, stating the last time an update notification was received by dom0, would most probably be enough?
  • The user should be able to easily notice that dom0 has not received update notifications for other greyed-out templates since various dates, while updates were available for other templates.
    • That should be enough to entice users into ticking “Enable updates for qubes without known available updates”, which will give the real picture. If there are still no updates, and no errors, when manually attempting to update software in a Template, an end user should be worried and look for EOL notices, or at least minimally investigate the potential issue (unless they use a really old template intentionally for development purposes, there should not be any reason to use deprecated and unmaintained software in daily use. Maybe that Template should even be deleted. My point is that it should be as visible as possible when unmaintained Templates are installed and potentially used by, and only by, offline qubes, which don’t warn users of available updates since offline qubes don’t notify dom0 of available updates.)
    • Adding an additional “Last update check” column would let the user notice EOL Templates easily, since the timestamp for “Last received update notification” would still be old even if “Last update check” is recent.

Let’s remember that the cacher from @unman’s shaker is a life changer here and deserves attention to improve the UX of software installation and the update time/bandwidth needed when using Qubes, and to ease the installation experience, combined with involvement in the upstream extrepo-data project with the goal of easing GPG public key installation and repository information addition in users’ Templates.

@enmus : I just wanted to make sure that you read Can somebody clarifies how qubes-update-check service works and how dom0 gets update notifications for TemplateVMs that are never powered on? - #17 by Insurgo

I spoke too quickly and mistook qvm-service AppQubeName ServiceName -d for qvm-service AppQubeName ServiceName -D.

I have consequently edited, and hidden, the content with my original, mistaken conclusions under Can somebody clarifies how qubes-update-check service works and how dom0 gets update notifications for TemplateVMs that are never powered on? - #15 by Insurgo so nobody else gets confused.


@Insurgo I have now created an issue for that, quoting your reply


@deeplow: thank you for that.

Edited previous post where I said I would.
The important conclusions here still are:

@adw: On bandwidth requirements:


Thanks @Insurgo for taking care of this. At the beginning I said I’d like to test some things, but many of you were so quick that you spared me from doing what you did (testing and explaining).

So, what is left for me, from a user’s perspective, is to strongly suggest developing an additional dialog box that would appear on launching the updater tool, with something like this:

Besides the other regularly updated templates automatically marked for updating, the templates listed below were last updated at least ______ ago. We strongly suggest you select them here so they can be updated too. For more details please see https://www.qubes-os.org/doc/how-to-update/

So, it would be possible to select those templates for updating in that additional window.

I just think offline, air-gapped qubes shouldn’t be allowed to go online in order to check for updates. They’re simply more important than their templates. What I would agree with is that they could check only whether there are updates in cacher’s cache, not ask cacher to check the repositories. And the procedure proposed above would be a life-saver for vault-alikes.


I guess you mean “trying to download,” right? If it’s an offline qube, it can’t download anything from the internet. I don’t see why that’s a security problem, as long as it isn’t trying to exfiltrate your data via a covert channel or anything. I suppose it would be slightly inefficient to have an offline qube vainly trying to check for updates when it has no network access, but it’s probably just a very minor waste of CPU activity at most, right?

(Also, regarding the portion of my post that you quoted, keep in mind that if you have a template, and you’re using a vault qube that’s based on that template, then you do have an actively-used qube based on that template.)

I am confused, as I already added the words before you posted this. Did you miss that, or is this a way of saying that what I added did not cover it?

I could be missing something, but I’m not sure why that would be a significant security risk by itself. If KeePassXC itself were compromised (e.g., package or source code), then it wouldn’t matter whether it’s up-to-date or not. (Updating to newer malicious code would probably just help the attacker.) And if Qubes VM separation as a security boundary were violated, then KeePassXC being patched would provide only minimal protection if KeePassXC were still being unlocked and used for stuff. You certainly wouldn’t want to rely on that, so this definitely doesn’t seem like something that’s important to “avoid at all costs.”

I don’t understand the suggestion, and this is too vague to be actionable for me. If something is unclear in the docs, please provide, at minimum:

  1. An exact quotation of the unclear passage currently in the docs
  2. Why you think it’s unclear

If you think something is missing, please provide, at minimum:

  1. Example text of what you think should be added (or at least the start of it)
  2. Why you think it should be added (e.g., the motivation for adding it, who it would help, or what problem it solves)

Alternatively, feel free to open a doc PR yourself.

Also, casual asides in forum posts are easily missed, so if there is actually something important that needs to be updated or fixed in the docs, the best way is to either open a doc PR or open an issue.
