Thanks @Insurgo for taking care of this. At the beginning I said I’d like to test some things, but many of you were so quick that you spared me the work (the testing and explaining).
So, from a user’s perspective, what’s left for me is to strongly suggest developing an additional dialog box, raised when the updater tool is launched, with something like this:
Besides the regularly updated templates automatically marked for updating, the templates listed below were last updated at least ______ ago. We strongly suggest you select them here so they can be updated too. For more details please see https://www.qubes-os.org/doc/how-to-update/
It would then be possible to select those templates for updating in that additional window.
I just think offline, air-gapped qubes shouldn’t be allowed to go online in order to check for updates. They’re simply more important than their templates. What I would agree to is letting them check only whether there are updates already in cacher’s cache, without asking cacher to query the repositories. And the procedure proposed above would be a life-saver for vault-alikes.
I guess you mean “trying to download,” right? If it’s an offline qube, it can’t download anything from the internet. I don’t see why that’s a security problem, as long as it isn’t trying to exfiltrate your data via a covert channel or anything. I suppose it would be slightly inefficient to have an offline qube vainly trying to check for updates when it has no network access, but it’s probably just a very minor waste of CPU activity at most, right?
(Also, regarding the portion of my post that you quoted, keep in mind that if you have a template, and you’re using a vault qube that’s based on that template, then you do have an actively-used qube based on that template.)
I am confused, as I already added the words before you posted this. Did you miss that, or is this a way of saying that what I added did not cover it?
I could be missing something, but I’m not sure why that would be a significant security risk by itself. If KeePassXC itself were compromised (e.g., package or source code), then it wouldn’t matter whether it’s up-to-date or not. (Updating to newer malicious code would probably just help the attacker.) And if Qubes VM separation as a security boundary were violated, then KeePassXC being patched would provide only minimal protection if KeePassXC were still being unlocked and used for stuff. You certainly wouldn’t want to rely on that, so this definitely doesn’t seem like something that’s important to “avoid at all costs.”
I don’t understand the suggestion, and this is too vague to be actionable for me. If something is unclear in the docs, please provide, at minimum:
An exact quotation of the unclear passage currently in the docs
Why you think it’s unclear
If you think something is missing, please provide, at minimum:
Example text of what you think should be added (or at least the start of it)
Why you think it should be added (e.g., the motivation for adding it, who it would help, or what problem it solves)
Also, casual asides in forum posts are easily missed, so if there is actually something important that needs to be updated or fixed in the docs, the best way is to either open a doc PR or open an issue.
I will explain again. That was in my draft, and you were too quick for me. The draft was created while your doc PR was in progress, which is obvious since in my last post I already referred to it.
I strongly disagree. Theoretically, it could be compromised in a way that silently sets a netVM for the vault and sends the clipboard (at best, or whatever) to a specified IP address. That theoretical bug could be fixed in a new version, but updating wouldn’t be possible. I’m sure there are better and more realistic examples than this one. But, whatever. I have my routine of regularly updating all qubes, and this isn’t about me, but about users unaware of the issue.
If a qube does not have a net qube (i.e., its netvm is set to None), then that qube is offline. It is disconnected from all networking.
This is simply not true, and it is dangerous. It should be:
If an update fixes whatever was compromised, then KeePassXC itself is no longer compromised, in which case my conditional statement is still true.
Thank you for the clear and specific example. I think we should actually just remove that sentence, as it’s now outdated and is not essential to the definition of the term “net qube” anyway:
I think the important part here is for users to understand that templates still have network access even though their netvms are set to None (or n/a). Thankfully, this is already documented, specifically here and here. I have also just added pointers to these sections from the “How to update” and “Templates” doc pages in order to make it easier for users to find this information:
You can search for any term in quotation marks to find exact hits:
This is a tricky question, asking for an answer from a belief standpoint more than from a factual, empirical standpoint. That is, until proof is given through extensive reverse engineering to refute the claim that something malicious is there, it is either there or not there until it can be verified that it is not there. That is the whole definition of a backdoor injected through intentional wrongdoing, negligence, or both.
There is awesome reverse-engineering work that has been done in the past on older CPUs, showing proof that hidden instructions were added into x86:
Remember the names on those papers and dig down that rabbit hole if the subject interests you.
But keep in mind that you cannot heavily modify what is on the die; you can only patch it, so introducing new instruction sets is not thought to be possible. Then again, what was once thought impossible is sometimes dismantled years later and proven untrue.
So here again, in the absence of open source code that is readable and understandable by many, we rely on the reverse analysis of the few and decide to place our trust (belief) in one of the possibilities (trustworthy/untrustworthy) while waiting for the evidence.
I’ve read through this entire thread several times, but can’t seem to figure out how/why only my Fedora templates detect updates, except that they are the only ones that bypass cacher.
According to qvm-service on dom0, only cacher had the qubes-updates-proxy service. None of the other TemplateVMs, AppVMs, or AppVM-as-a-DispVM-Templates had that service or the updates-proxy-setup service listed or enabled.
All of the AppVMs and App-VM-as-a-DispVM-Template had the qubes-update-check.timer and the qubes-update-check.service enabled and active.
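For anyone who wants to reproduce these checks, this is roughly what I looked at (qube names are from my setup, and output will vary):

# In dom0: list the services enabled on a qube
qvm-service cacher
# expect something like: qubes-updates-proxy  on

# Inside an app qube: confirm the periodic update check is running
systemctl status qubes-update-check.timer qubes-update-check.service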
It seems like I am missing something basic here. Can someone point me in the right direction?
Did you read the topic?
The updates check is done in the app qubes based on the template, using their own net qubes; it does not go through the template’s updates proxy, and it does not happen in the template itself.
When you use apt-cacher-ng, the repository definitions are rewritten so that https:// becomes http://HTTPS///. This is so that the caching proxy can see the request, and then forward it on encrypted as TLS traffic.
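To illustrate, here is roughly what that rewrite looks like in a Debian template’s /etc/apt/sources.list (the specific repository line is only an example):

# Before: a normal TLS entry that the proxy cannot inspect
deb https://deb.debian.org/debian bookworm main

# After the cacher rewrite: plain HTTP to the proxy, which re-establishes TLS upstream
deb http://HTTPS///deb.debian.org/debian bookworm main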
When you use a template-based qube, the repository definitions are taken from the template, and so use that same http://HTTPS/// format.
Unless you have taken steps to use the caching proxy from the qube, by setting a proxy setting to the IP address of the cacher qube, your qube will try to connect directly to the repository. But the DNS server will not recognise HTTPS/// as an address format, and will return an error. This is why the Debian qubes do not report any available updates: the Fedora qube, where the repository definitions have not been rewritten, can access repositories and so will report when updates are available.
I never presume to speak for the Qubes team.
When I comment in the Forum I speak for myself.
That was the helpful boost I needed to test it and get it working.
To clarify, it appears most non-TemplateVMs need the following (sketched as commands just below this list):
The updates-proxy-setup service enabled via the Qube Manager GUI
A qvm-tag so Dom0’s /etc/qubes/policy.d/30-user.policy can route update requests to cacher
An entry in Dom0’s /etc/qubes/policy.d/30-user.policy to route update requests from the specified qvm-tag to cacher
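As a rough sketch, and assuming a tag named use-cacher, a proxy qube named cacher, and an app qube named my-appvm (substitute your own names), those pieces would look something like:

# dom0: enable the proxy setup service (CLI equivalent of the Qube Manager toggle)
qvm-service --enable my-appvm updates-proxy-setup

# dom0: tag the qube so the policy can match it
qvm-tags my-appvm add use-cacher

# dom0: /etc/qubes/policy.d/30-user.policy
qubes.UpdatesProxy * @tag:use-cacher @default allow target=cacher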
I say most because any qube running the qubes-updates-proxy service (like cacher and sys-net) seems to need:
An /rw/config/rc.local entry to revert the cacher template mods
### Revert cacher mods for update detection
sed -i 's^http://HTTPS///^https://^' /etc/apt/sources.list
sed -i 's^http://HTTPS///^https://^' /etc/apt/sources.list.d/*.list
A qvm-tag so Dom0’s /etc/qubes/policy.d/30-user.policy can route update requests to sys-net
An entry in Dom0’s /etc/qubes/policy.d/30-user.policy to route update requests from the specified qvm-tag to sys-net
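Again as an assumption-laden sketch (the no-cacher tag name is made up), that last entry might read:

# dom0: /etc/qubes/policy.d/30-user.policy
qubes.UpdatesProxy * @tag:no-cacher @default allow target=sys-net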
Alternatively, I suppose I could revert the cacher repo modifications in all non-TemplateVMs via /rw/config/rc.local and tag them to bypass cacher in Dom0’s /etc/qubes/policy.d/30-user.policy.
Is there a better way or is this just a matter of preference?
OK, I did something that should be equivalent to what SensibleBurrito did.
Similar situation: a bunch of Debian 12 minimal-based templates that use the cacher (they go to http://HTTPS///). Other templates are not set up to use the cacher (they go to https://). The “normal” system configuration has the policy files, etc., using the cacher for everything (or NOT using it for anything). To upgrade all templates, I have to upgrade the ones that use the cacher, switch the global settings to reference sys-net-wifi, then upgrade the others. Unfortunately, the only way updates could be checked on the debian-12-minimal based templates was to either run the templates themselves (or clones of them), or to just preemptively try to update them whether they needed it or not.
I have switched from having 30-user.policy (and 50-config-updates.policy) that will try to use the cacher for everyone, to a setup that has cacher used only for templates with a “use-cacher” tag. I can run update on the templates without any chance of failing (and no need to switch back and forth with the policy files) now; the tag is smart enough to cause the right proxy to be used.
However, auto-checking for updates by AppVMs still doesn’t work, even if I make sure they have the use-cacher tag. I made sure that the cacher itself does not try to use itself for updates (as SensibleBurrito said); I explicitly enabled updates-proxy-setup in my AppVM.
If I shut down the cacher and run my AppVM, sys-cacher will start up eventually; that indicates to me the AppVM is actually trying to check for updates. [If I am not running the AppVM, sys-cacher won’t start up at all.] However, nothing is found. I deliberately used a template that I know needs an update, though the system doesn’t realize it. (Judicious cloning of a template I hadn’t checked, then updating the original template, let me do that: the original had updates, therefore the clone needs updates even though the system doesn’t know that yet.)
So I am still missing something here, probably in the cacher or its template.
This works for me:
Create a new mytest app qube, with its net qube set to none, based on a template that was last checked for updates and updated a month ago.
Set the policy in dom0 to allow qubes.UpdatesProxy to use the cacher qube.
In /etc/qubes/policy.d/30-user.policy add a rule along these lines (names as in the steps above):
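# allow the offline mytest qube to reach cacher's updates proxy
qubes.UpdatesProxy * mytest @default allow target=cacher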
Start the qube, then wait about 5 minutes for the update check service to trigger.
After this I can see in the Qubes Update tool that new updates are available for this qube’s template.
I’m doing something basically equivalent to that. If I shut down sys-cacher (you named yours cacher) and then start mytest, I will see sys-cacher start (which tells me the service is on, because the AppVM did try to look for updates), but no updates are reported.
If, on the other hand, I shut down my cacher, clone the template, and start the clone, then the cacher will start and it will detect that updates are needed; that’s how I know my template needs updates without actually having the template check for updates.
So I’m still stuck with needing to run templates to see if they need updates. I can mitigate the security risk by instead running a clone of my actual templates (then I just have to remember to update both the clone and the original); I name those clones Update-Canary-.
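For reference, making such a canary is just a clone (the template name here is illustrative):

# dom0: a throwaway clone whose update check won't touch the real template
qvm-clone debian-12-minimal Update-Canary-debian-12-minimal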
Well, I found the issue. THANK YOU for suggesting I run that command (which needs ‘sudo’, by the way). It’s something peculiar to my setup (meaning most people can just stop reading here). And this is a bit involved.
As it happens, I install a lot of scripts and even executables that I myself wrote into my templates. In order to manage THAT mess, I actually build packages for them. My salt template creator loads the packages AND a “Packages” file that describes them into /home/user/repository; and then over in /etc/apt/sources.list.d/user.list I add a reference to /home/user/repository/Packages (marking it “trusted”). Then salt actually installs the packages.
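For context, that reference in user.list is a flat local-repository entry, something like the following (the exact options are from memory and may differ):

# /etc/apt/sources.list.d/user.list
deb [trusted=yes] file:/home/user/repository ./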
Of course this stuff is not visible in the appvm, because the /home/user directory in that is different from the one in the template (the whole persistent/non-persistent thing). I actually did that deliberately; I didn’t want to mistakenly try to install or reinstall these packages in the AppVM, and if they’re not there, I can’t do that.
apt update, of course, complained that /home/user/repository/Packages was not present, for the very good reason that it is, in fact, not present!
So…I copied those files from the template to the AppVM. Running sudo apt update again gave me a different, much less significant error message (meaning that I had fixed this issue) but did NOT cause the updater to see updates (in hindsight this is not surprising). Shutting down the AppVM and restarting it, however (after shutting down the cacher, so that I could see it start again and know it had checked for updates), DID cause the system to detect that the template needs updates.
Now that I know what’s wrong, I have two possible courses of action: 1) Don’t put the packages in that directory; rather, find one that doesn’t get over-mounted in an AppVM (and I know that /usr/local is not it!). 2) Alter /etc/apt/sources.list.d/user.list to not reference the files once I am done with qube setup. This latter option makes some sense, since I don’t expect qvm-update (etc.) to manage the updates to these things (I generally just re-run the salt files if I need to force templates to update my own stuff; in fact I don’t even bother incrementing version numbers).
In any case now that it works, I can do some experimenting. (For instance is it really necessary to make sure cacher doesn’t try to use itself for updates?)
Thank you again for asking the question you did; it enabled me to troubleshoot usefully!
One thing I have discovered is that your line to use the cacher for qubes with a specific tag can be moved into 50-config-updates (where the “normal” system setting is placed).
The Global Config GUI will detect it as an “exception” and handle it properly; in other words, it will show that you want sys-net to be the update proxy, except for qubes with the tag set, which will use cacher, and all of this shows up in the GUI!
Since this is integrated with the “standard” way of doing things, I think it’s better.
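As a sketch of what that file might end up containing (rule order and names assumed from my setup; the default TemplateVM line is normally managed by the GUI):

# dom0: /etc/qubes/policy.d/50-config-updates.policy
# exception first: tagged qubes use the cacher
qubes.UpdatesProxy * @tag:use-cacher @default allow target=cacher
# the "normal" system-wide setting for everything else
qubes.UpdatesProxy * @type:TemplateVM @default allow target=sys-net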
That’s a neat find, @SteveC. I tend to agree with you about integrating into the standard way of doing things; I forgot where I picked up creating the 30-user.policy in dom0.
Does this mean we no longer need a default deny rule?
@SteveC,
Glad to hear you got it working. I was following along to see what the output of your apt update would be, because that was one of my main diagnostics for troubleshooting this.
Additionally, I wanted to chime in to say that the only time I’ve had to sudo an apt update was when I was logged into the VM as user (vs. root). If you’re having to sudo for apt update, it might be another peculiarity of your setup to keep in mind.
Let me know if you’re able to get cacher to use itself for updates. That is annoying.