How can Qubes OS protect itself from Malicious Code Contributions?

This was what I wanted to find out at the start of all this. This makes me feel better about Qubes.

If this is the case though, does Qubes qualify as ‘open source’? Or is it more ‘transparent source’? Maybe my understanding of what it means to be ‘open source’ is wrong, and the ‘open’ simply means that the public can freely read the source code without it needing to be alterable (in other words, similar to what I mean by ‘transparent source’).

 


Anyways, I feel this is a sprawling topic with many excellent but lengthy contributions, so it might help to condense each of our main themes for anyone just tuning in. Nuances can’t quite be conveyed in short summaries, so keep this in mind, but correct me if I’m wrong and I’ll update the summaries:

 

@Sven’s theme is that open source is the alternative to code written by profit-oriented corporations, which usually (but not always) generate closed, proprietary, subpar code, especially when code isn’t the main focus of the company (e.g. Boeing, Equifax).

I can see where the wariness and cynicism regarding corporations come from, especially in light of the effects of rampant, unchecked capitalism (a good example would be the healthcare system in the US, which is, in some ways, cancerous).

 

@gonzalo-bulnes’s theme is perceptions–in particular, trust. After all, we are all subjective creatures who need to make decisions on whether to adopt certain software, and we ultimately do it on the basis of trust, especially when we don’t have the time and/or ability to personally verify every piece of software we use as well as all their updates. Open source, in his view, generates trust that leads to adoption, but this is only one piece, as security doesn’t then come automatically, and can in fact be harmed by the openness.

A thought that popped into my head (keep in mind I’m not experienced with open source) is that once a commitment to go open is made for the sake of trust, you either go fully open or people start questioning the blobs you keep closed. This is especially true with the types of people who end up using Qubes.

 

@Zrubi’s input is that being open source is critical to earning his trust, as it allows him to take charge of the code for his purposes; but he also adopts a seemingly negative outlook regarding the security of these projects, recognizing that security is not guaranteed by the openness and that adversaries tend to have more resources and an incentive not to share.

Like I said earlier, I get where the anti-corporation sentiment comes from, but I wouldn’t take such an absolutist stance on it. The luxury of being able to audit code comes to few (an ultra-minuscule fraction of the world if you include non-programmers like myself), so trade-offs need to be made.


It may sound negative, until you compare it with closed-source stuff, where you must trust it blindly…

There’s nothing in either the OSI definition or the four freedoms of libre software to say that the originator has to accept any contribution at all.

The four freedoms say that the user has the right to run the software, study it, change it, and redistribute it with or without changes. There’s no requirement that the original author accept changes; they can choose to or not.

The OSI definition doesn’t require the author to accept any changes. It requires them to accept that changed versions can be distributed, but that isn’t the same thing.

We don’t see huge numbers of forked free software programs, but we could. Again, the person who makes changes isn’t obliged to give back to the original project; they can choose to do so or not. As a matter of fact, most users contribute back improvements which the original author(s) accept, but it doesn’t have to be like this.
So yes, Qubes really is free software.


‘open source’ vs ‘transparent source’

This is a fascinating topic unto itself.

@Sven’s theme

In addition to your excellent summary, there was also a personal experience that has nothing to do with security. I didn’t care much about FOSS until 2015 and just used “whatever does the job best”.

I had spent 5+ years using a personal note-taking and knowledge-management application at the time. It is proprietary software, but I gladly paid for it. Then the vendor decided to drop the Linux version because not enough users were on that platform. A sound business decision, but for me it meant I could either abandon all the work I had put into it, switch to another OS, or keep working with the old version forever. Which I still do, because migrating away from it is just too much effort/pain and there is no FOSS application that does the same thing (three-dimensional mind map).

This changed my perspective on ‘proprietary’ vs. ‘free software’ forever. It made me think about who really ends up ‘owning’ my work in a very real sense if I use proprietary software and formats. Suddenly the FSF didn’t look so hippie to me anymore.

Again, this has nothing to do with security, but it may help explain my sentiments.

trust

Scenario A) A billion-dollar defense contractor creates a secure operating system primarily meant to be used by nation states (military, intelligence, governmental) … those are the big customers. But it’s also sold for civil use by corporations and individuals. The security must be very high due to the stringent requirements of the primary customers.

How much trust do you have that this for-profit corporation has your best interest in mind?

Scenario B) A well-known and respected security researcher assembles a group of subject-matter experts and starts work on a secure OS. Yes, they have a company too and sell services, but the core product is under the GPL. The core team’s main motivation is clearly not monetary; there would be far easier ways for them to make a lot more of it. This product is endorsed by the leading whistleblowers, cryptologists, and free-press activists on the planet.

How much trust do you have that this group of idealists has your best interest in mind?

absolutist stance […] being able to audit code […] trade-offs
need to be made.

Speaking for myself, I readily admit that I tend to fall into absolutist stances quite easily and am very thankful for anyone giving me a hand to get out of them. :wink:

Reading the discussion about creating software in a hostile environment and perhaps being forced to add backdoors or such, I stumbled on this very high-level discussion with many good arguments for (and against) using FOSS, and about the security of projects like Qubes.

In my opinion, there is still one aspect missing in this discussion about who makes changes to a project and how review ensures that no malicious code is introduced into the software. If we look at large monolithic systems like Windows and, yes, also Linux, there is almost no chance of finding a well-hidden bad new part. For this reason, some fifty years ago, the concept of a Trusted Computing Base (TCB) was invented - something that Joanna mentioned already years ago. “Trusted” here does not mean that you would like to trust some part of the software, but that you have to trust it, because if this part is wrecked, it’s GameOver. So careful verification of this part of the software is needed in order to rely on it.

From this proposition, two important properties directly follow:

  1. The TCB must be as small and simple as possible, to have at least a chance of understanding it and possibly finding any bad parts.

  2. Any other software outside the TCB must not be able to influence or change it in any way.
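
To put property 1 in perspective: a crude but common proxy for auditability is simply how much code there is to read. Below is a minimal sketch that counts source lines under a local checkout; the path and file extensions are hypothetical placeholders, and real TCB accounting would also have to include firmware and the toolchain:

```python
#!/usr/bin/env python3
"""Rough lines-of-code count as a crude proxy for auditability (TCB size).

A minimal sketch: the path and extensions below are hypothetical, and a
real TCB measurement would also need to cover firmware, toolchain, etc.
"""
from pathlib import Path


def count_lines(root: str, suffixes=(".c", ".h", ".py")) -> int:
    """Sum the line counts of source files under `root`."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            with path.open(errors="replace") as f:
                total += sum(1 for _ in f)
    return total


if __name__ == "__main__":
    # Hypothetical local checkout of a TCB component:
    print(count_lines("xen"))
```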

Qubes is well suited to use this concept, as its structure relies on compartmentalization, the critical compartments being

  • as small and simple as possible (dom0, sys-net, sys-usb, etc.)
  • non-privileged if possible (sys-net, sys-usb, and - coming - sys-audio and sys-gpu).

This leaves dom0 and the Xen hypervisor as the relevant parts of the TCB. As Xen is an upstream component, the TCB part of Qubes is just dom0, together with the modules needed for system control. (Correct me if I see it too simply here.)
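
To see this compartment structure on a running system, here is a minimal sketch using the qubesadmin Python API (the Admin API client shipped with Qubes 4.x; assumed to run in dom0 or a management qube with appropriate policy), listing each qube with its class and network source:

```python
#!/usr/bin/env python3
"""List qubes with their class and NetVM to visualize compartmentalization.

A minimal sketch assuming the qubesadmin Admin API client of Qubes 4.x;
intended to run in dom0 (or a qube granted Admin API access by policy).
"""
import qubesadmin

app = qubesadmin.Qubes()

for vm in app.domains:
    # 'klass' distinguishes AdminVM (dom0), TemplateVM, AppVM, etc.
    # dom0 has no netvm property, hence the getattr fallback.
    netvm = getattr(vm, "netvm", None)
    print(f"{vm.name:20} class={vm.klass:12} netvm={netvm}")
```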

So this rather limited part of Qubes is the part that absolutely has to be kept clean, meaning that any commits to the dom0 repositories have to be thoroughly reviewed and controlled. If this is done - as it currently is - by the small group of leading developers and the security team, the situation is already as good as it can be made!

This does not block other participants in the development from contributing valuable software, even if software outside the TCB is not scrutinized in that depth. If malicious code were introduced there, its consequences would be much smaller or even none at all. Thus the chances that Qubes will stay operationally clean are very good, even if software from sources outside central control is accepted and built into the system. This is especially true for any software introduced at the template level, which I will write about in a second post, in order to avoid making this one too long. :wink:


In Qubes, dom0 is just the system control unit, while the real operational work is done in qubes, i.e. VMs of different classes (StandaloneVM, TemplateVM, AppVM, DispVM). So now we have to consider how much the security of these VMs matters for the security of Qubes as a whole. (See the current discussion on “I have been hacked”, where probably not Qubes but some VMs were hacked, if anything was.)

While Standalone VMs play a somewhat special role, standing more or less outside the high-security environment of Qubes, we mainly have to look at Template security. How are Templates introduced, maintained, and updated? The central Templates like Fedora and Debian are maintained by ITL and subject to the same strict controls as dom0. This is necessary, as the central parts of these Templates, i.e. their kernel and the Qubes modules providing the interface to the rest of the system, can be regarded as local TCBs of the VMs. If any of these are compromised, all AppVMs based on them are compromised. Here we have a well-documented chain of trust: any updates are accepted only if they are signed with the appropriate keys, so we can rely on the fact that we have strict quality control for them.
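
As a concrete illustration of such a chain of trust, here is a minimal sketch of checking a detached GPG signature before trusting a downloaded package. The file names are hypothetical, and on a real Qubes system dnf/apt perform an equivalent check automatically against the distribution’s signing keys:

```python
#!/usr/bin/env python3
"""Verify a detached GPG signature before trusting a downloaded update.

A minimal sketch: the file names are hypothetical placeholders, and on a
real Qubes system the package managers (dnf/apt) perform this check
automatically against the distribution's signing keys.
"""
import subprocess
import sys


def verify(signature: str, payload: str) -> bool:
    """Return True iff gpg confirms the signature over the payload."""
    result = subprocess.run(
        ["gpg", "--verify", signature, payload],
        capture_output=True, text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    ok = verify("update.rpm.sig", "update.rpm")  # hypothetical files
    print("signature OK" if ok else "VERIFICATION FAILED - do not install")
    sys.exit(0 if ok else 1)
```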

Anyhow, even if one of these Templates gets compromised, it’s not the end of the Qubes system holding it. Spreading this compromise to dom0 should be nearly impossible (barring any serious flaws in Xen). AppVMs built on other Templates are unaffected by such a compromise, and the compromised Template can be replaced rather easily by a clean one.

For other Templates, the amount of control (and therefore, trust) may vary. Other Linux distributions differ in their upstream control and quality, but they are useful components that may be used for tasks at a somewhat lower trust level. Some of them are provided by the community, and for these you may rely on the trust you have in their provider. As an alternative, you may create them yourself using the documented build procedure and check that they are built from trustworthy modules.

If we look at Windows VMs, however, we are in a quite different world. First, we have essentially no trust in the installation media provided by the manufacturer, because - as far as I can see - there are no checksums or signatures based on an unbroken chain of trust. So there is always the possibility that you may install a manipulated system from a seemingly correct download server. This has the direct consequence that a Windows system can never be part of a Trusted Computing Base - simply because you cannot trust it - regardless of the discussion of open vs. closed software.
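
For contrast, here is the kind of check one would want to be able to run on installation media. A minimal sketch with hypothetical placeholders for the ISO path and the reference digest; note that a checksum only helps if it was obtained through a channel you trust more than the download itself (ideally a signed statement):

```python
#!/usr/bin/env python3
"""Compare an installer image against a published SHA-256 checksum.

A minimal sketch: the ISO path and the reference digest are hypothetical
placeholders. A checksum only helps if obtained over a channel you trust
more than the download itself (e.g. a signed release statement).
"""
import hashlib

ISO_PATH = "installer.iso"    # hypothetical download
PUBLISHED_SHA256 = "0123..."  # hypothetical reference value


def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large ISOs don't need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


if sha256_of(ISO_PATH) == PUBLISHED_SHA256:
    print("checksum matches the published value")
else:
    print("MISMATCH - do not install from this image")
```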

This is made still worse by the fact that this system, over the years, has become quite unstructured and thus has no part that could be regarded as its TCB. (This is quite different from its beginnings: an early version, Windows NT 3.51, which was largely based on the architecture of the high-reliability system OpenVMS, had a clear separation of its highly privileged parts from the rest of the system. But, alas, this was given up in order to provide more functionality and better graphics support.)

For Qubes, the necessary consequence is that Windows qubes have to be treated as untrusted VMs, and Qubes has to shield itself from any negative aspects of these VMs, which is quite well implemented by Qubes’ architecture.

What does this mean for Qubes Windows Tools (QWT), which provide the interface to Qubes itself? Well, this is a software component running in an untrusted environment and thus can never expect to be trusted. The current version is very reliable and provides most of the needed functionality. Running in this environment, the focus has to be on it being as bug-free as possible, but there is, in my opinion, no greater risk to Qubes’ security: Qubes has to shield itself from any malicious action coming from a Windows qube, and it is quite irrelevant whether such an attack comes from QWT or some other part of the Windows system.

So, to sum up these two posts, I think that the current way Qubes development handles the threat of malicious code being introduced is very effective. The critical parts, dom0 as Qubes’ TCB and the creation and maintenance processes of critical Templates, are very well structured (and documented!), and these parts are, by compartmentalization, very well shielded against possible attacks from less trustworthy components. It’s mainly the users’ task to configure the system according to their functional and security needs, and Qubes leaves them a wide range of options for doing so.

One should not underestimate the value of a clear, security-oriented architecture for such a complex system! In my opinion, this is much better than relying on formal software verification, which has all too often failed to keep its promise. (At least, that is my experience from having worked for more than 20 years with OpenVMS, which gets much of its robustness from its architecture.)


Thank you for the fantastic, in-depth post that even I found easy to read. This level of discussion is what I hope becomes the norm in this locked forum.

I understand the concept of TCB and why, at this point, if you are using a default installation of Qubes, ITL and Xen provide some level of guarantee. Earlier in the year I quipped that computer security is actually risk management in disguise, since there can’t be any real guarantees that the ultra-complex systems that form our computing base are free of vulnerabilities, and the best you can do is manage the risk. In this view, Qubes is the undisputed king of single-device risk management, IMO. While, yes, you are putting your eggs in the ITL and Xen baskets (i.e. if either is compromised or malicious, it’s GameOver), in my opinion there’s no way to avoid putting eggs in someone’s basket.

That said, I feel that the discussion so far hasn’t addressed a key concern I have:

One point I brought up last year (here or elsewhere) is how one can introduce lots of small pieces of code that don’t look like bugs or vulnerabilities, but when put together with other similar pieces, become a backdoor. Think of it like this: A metal detector at an airport doesn’t react to every single piece of metal, but does so for pieces that pass some detection threshold. It is possible to smuggle in small, undetectable, innocent-looking parts that can be assembled into a weapon. You can do it alone over time (stash what you’ve smuggled), or have a bunch of people each carrying one piece.
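
To make the “small innocent pieces” idea concrete, here is a contrived Python sketch (entirely hypothetical; every name in it is made up and it is not drawn from any real project or PR) of two changes that each look defensible in review, but compose into a verification bypass:

```python
"""Entirely hypothetical illustration - not from any real project or PR.

Two commits that each look defensible in review, but that combine into
a signature bypass.
"""
import os

# Commit 1 ("allow integration tests to run without signing keys"):
# an environment-driven escape hatch around verification. On its own,
# this reads like routine test plumbing.
ALLOW_UNSIGNED = os.environ.get("PKG_TEST_MODE") == "1"


def crypto_verify(package: bytes, signature: bytes) -> bool:
    """Stand-in for a real cryptographic check; always fails here."""
    return False


def verify_signature(package: bytes, signature: bytes) -> bool:
    if ALLOW_UNSIGNED:  # the innocuous-looking escape hatch
        return True
    return crypto_verify(package, signature)


# Commit 2, months later and plausibly from a different contributor
# ("fix flaky CI"), sets PKG_TEST_MODE=1 in a startup script that also
# ends up running in production. Neither change screams "backdoor" on
# its own; together, unsigned packages are silently accepted.
if __name__ == "__main__":
    print(verify_signature(b"payload", b"not-a-real-signature"))
```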

What I’m describing is probably state-level, considering the time, resources, and know-how needed to pull it off, but I’d wager it’s been done before, and the end result looks just like a natural bug and has ironclad plausible deniability. If I were some sort of mastermind, this would be my method of choice.

The ITL team may have control over the Qubes TCB, but the point becomes moot, since this is a vector that is hard, if not impossible, for any open (or even closed) source project to guard against. The best you can do is ensure contributors are free of malicious influence and intent, to repeat my point that HUMINT is an integral part of most complex SIGINT projects.

Now, I realize that what I’m describing might come across as far-fetched (and I wouldn’t expect non-security distros to have any answers to it), but for a distro for the techno-paranoid (who, I’d say, are justified in their paranoia), I think this is a reasonable issue to raise.

 

P.S. I think the Windows VM section of your second post could be split off into its own post regarding the risks (though somewhat contained) of running Windows VMs on Qubes.

 


Not technically trained; consume with salt


I am not quite sure what you envision there. Lie detector tests, background checks, complete surveillance, financial audits, 24/7 bodyguards/minders, semi-regular interviews with family and friends? … that’s how the military-industrial complex does it. Not sure the FOSS world would appreciate it. :wink:

Moderation comment (this belongs in public section of the forum)

These posts (and IMHO whole discussion) are offtopic in All Around Qubes, because they are explicitly about Qubes OS.

I didn’t say that’s the recommended solution for FOSS projects, but some degree of vetting (though usually not as invasive as you described) is one of the more obvious solutions to this general problem. This is obviously highly simplified.

I am not smart or motivated enough to come up with a better solution, but I will point out that there’s a reason why organizations, with those in the military industrial complex being prime examples, reduce risk by vetting their personnel.

You’re very right about the remaining risks staying with the developers in control of the TCB. With Qubes, there is one more, rather funny, aspect to this: as it is an open-source project distributed over many countries, the Qubes developers themselves are somewhat compartmentalized. This might make it rather difficult for an attacker to force a developer to add a backdoor or the like, because there is a high probability that, during review, some other developer(s) would find and close the loophole. Forcing all developers to insert such malicious code may prove (nearly) impossible, as the attacker has practically no chance of reaching them all in their respective countries. And even if one developer were forced to write a backdoor, it would be quite easy to do it in such a way as to alert the other developers while keeping that hidden from the attacker. Fortunately, this may considerably reduce the risks to the Qubes developers themselves.

Looking at it from the perspective of an attacker, attacks on Xen development might be much lower-hanging fruit, but this I really don’t know. Anyhow, from Qubes’ perspective, this is an upstream problem.

Moderation comment (moved topic to #general-discussion)

Well, it got split off to ‘general discussion’ because it became general, but then later returned to Qubes OS.

I don’t think it would be helpful at this point to split/merge/redirect this thread further.

Update: after a private conversation with @fsflover and consulting with @deeplow this thread was renamed and moved into general discussion.

Yes, this.

If you (like me) sometimes “stan” the Qubes OS developers’ sub-project pull requests (PRs) on GitHub, you will see that they do an excellent job of reviewing changes and withholding approval until necessary/suggested changes are made to the PR before merging it. And they do it to each other, not just to external PRs from “outside” contributors.

And they do it in public.

Now, granted, the project is not a democracy and @marmarek, as lead, appears to have final say.

However, considering that none of the other core co-developers (whether ITL or not) have ever publicly spoken ill of @marmarek

…if you don’t trust the intentions and skill of the lead of this project, I can’t really imagine a project lead you’d trust.

And the processes (such as those above, plus signing, chain of trust, etc.) the team put in place are some of the most mature I have encountered.
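
For anyone who wants to exercise that chain of trust themselves, here is a minimal sketch of verifying a signed git tag before building from source. The repository path and tag name are hypothetical, and it assumes the relevant Qubes signing keys are already imported into your GPG keyring, as described in the Qubes verification documentation:

```python
#!/usr/bin/env python3
"""Check the GPG signature on a signed git tag before building from it.

A minimal sketch: the repository path and tag name are hypothetical, and
it assumes the relevant signing keys are already in your GPG keyring.
"""
import subprocess


def verify_tag(repo: str, tag: str) -> bool:
    """Return True iff `git verify-tag` accepts the tag's signature."""
    result = subprocess.run(
        ["git", "-C", repo, "verify-tag", tag],
        capture_output=True, text=True,
    )
    return result.returncode == 0


if __name__ == "__main__":
    # Hypothetical local checkout and tag name:
    ok = verify_tag("qubes-core-admin", "v4.2.0")
    print("tag signature OK" if ok else "tag did NOT verify")
```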

B


How can Qubes OS protect itself from Malicious Code Contributions?

One step to take would be working on this ticket:
