How can Qubes OS protect itself from Malicious Code Contributions

Just came across relevant news that might be a point of discussion (and maybe the seed of a new thread):

Hacker News: “They introduce kernel bugs on purpose”

Apparently a university research team was making malicious commits to the Linux kernel and, when caught, said it was part of an experiment. Just as there's no good reason to eliminate the possibility of a lab leak, the thought that a nation state might employ just such a tactic to insert bugs, vulnerabilities, and backdoors should not be ruled out.

Quoting user ‘cutemonster’:

I wonder about this too.

To me, it seems to indicate that a nation-state-supported evil hacker org (maybe posing as an individual) could place their own exploits in the kernel. Let's say they contribute 99.9% useful code, solve real problems, build trust over some years, and only rarely write an evil, hard-to-notice exploit bug. And then everyone thinks that obviously it was just an ordinary bug.

Maybe they can pose as 10 different people, in case some of them get banned.


The massive discussion this incident has spawned is enlightening for anyone interested in the security of open source projects.

Here’s one argument from the other side of the security-versus-openness conundrum, for anyone who might be interested (via user ‘ufmace’ on that thread):

It’s occurred to absolutely everyone. What doesn’t seem to have occurred to many people is that there is no such thing as a review process robust enough to prevent malicious contributions. Have you ever done code review for code written by mediocre developers? It’s impossible to find all of the bugs without spending 10x more time than it would take to just rewrite it from scratch yourself. The only real alternative is to not be open source at all and only allow contributions from people who have passed much more stringent qualifications.

There is no such thing as a process that can compensate for trust mechanisms. Or if you want to view it that way, ignoring the university’s protests and blanket-banning all contributions made by anybody there with no further investigation is part of the process.

I could probably look this up, but what’s the Qubes team’s philosophy on this?


On what? Reviewing contributions, or banning contributions/issues from some users/groups?

I should flesh my question out a bit:

Given that this is a security-focused project that seeks to protect at-risk groups, sometimes from adversaries like state actors, I was wondering about the overall precautions the Qubes team takes when dealing with contributions to the open-source OS, especially in light of this recent incident with the Linux kernel. An untampered-with, deployed Qubes OS is hard to attack, so if I were an adversary I’d want to spend my resources undermining the OS itself instead of finding ways to attack instances of it.

If there are precautions, they are usually formed by following an underlying philosophy/doctrine. Would you happen to know what the Qubes team’s doctrine is when it comes to these sorts of security risks?


You can look at GitHub to see the amount of thought and argument that
goes in to developing new features/approaches, and at most PRs to see
the level of scrutiny that is applied.


@Deeplow: Do you think this thread is worth splitting? I think the security of open source projects (e.g. Qubes) is a timely topic and might even qualify as a regular (unrestricted) thread.


That’s up to @sven. This category is out of my moderation scope.


Got it. Might be worth asking ahead of his decision: would you accept this topic as an unrestricted thread (i.e. in your scope) since this is about Qubes? This might be the time to set a precedent for splitting restricted threads into unrestricted threads.

Also, I ask because I can see an argument for keeping this out of the public space.


@deeplow, @fiftyfourthparallel I agree this should be split. Once I return to my computer I will learn how to do this.

What I will do:

  • Take the conversation between @fiftyfourthparallel and @unman and split it into an “open vs. closed source from a security perspective” thread.
  • I’ll also leave an announcement of the split here and kick off the split in the new thread with an explanation, just like @deeplow does.

It’s definitely another topic, and one that has implications far beyond Qubes. I’ll ask everyone to please have the general conversation in the “all about Qubes” category. If in that conversation some argument or issue is identified that is unique to Qubes OS, an issue can be raised on GitHub.

I hope the general discussion can be had here at a high level of quality and respect without inviting the entire internet to join the forum just for this topic. I’d rather avoid this conversation in the “general section” because it is extremely unlikely we will find any issue that hasn’t been discussed and addressed by the core team years ago, while at the same time it’s almost guaranteed to attract a mob of half-informed first-time posters leaving a trail of FUD for later consumption.

Maybe I’m too negative. Let me know if you disagree.


It’s not really ‘open versus closed source’; it’s about the security challenges of open source projects. I’m not advocating a switch to closed source.

As one of the half-informed people leaving a trail of FUD (while constantly signalling that I’m half-informed), I fully agree that restrictions might be wise. My earlier posts show that I just wanted someone to point me to that core team discussion (if publicly available), since I now see this issue as very important to the OS’ overall security and would like to know their philosophy on it. Since I didn’t get a response I want to expand the discussion to increase my chances.


I’m not sure if you are serious, but just in case: your comments and questions are not FUD. They are legitimate. It’s the half-informed answers I worry about. :wink:

Also, sorry for not having done the split yet. This will be my first time; I couldn’t figure out how to do it on my phone, and it might be Wednesday before I get back to my computer and can catch up.


I freely admit that as a non-technical person I’m missing a critical chunk of background information for most discussions here. At the same time, I feel entitled to be outspoken and opinionated here, because what would the point of this forum be otherwise? I balance things out by consistently mentioning my lack of technical training so people will rightly take my comments with a grain of salt.

There’s no real hurry for this, so please take your time.

Split complete. The option simply didn’t show on my mobile device.


@fiftyfourthparallel … I know you are not advocating this, but I do want to respond to the statement you quoted. I do have 20+ years of professional software development behind me and can only shake my head when I read stuff like this.

  1. People intentionally introducing weaknesses is a possibility, but it is dwarfed by the issues unintentionally introduced by perfectly honest contributors.

  2. Any non-trivial software is at a level of complexity that most people simply cannot imagine; there are standards, tools, and approaches to help tackle this, but all of them are imperfect.

  3. Never-ending review by as many eyes as possible, pen-testing/red-teaming, bug bounties, and architectural counter-measures (as in Qubes OS) are the ONLY ways to improve security.

  4. The idea of “people who have passed much more stringent qualifications” is laughable. If a malicious actor wants one of your people to insert something, they have a thousand different ways of making them do it, wittingly or unwittingly, or they simply compromise them and use their credentials … you get the idea.

Yes, FOSS doesn’t mean the code will be better or more reviewed or any of those things. It does mean, however, that anybody COULD review and anybody COULD fix. With closed source, all you have is a company with a small team of people motivated by the stock price.

And this is where it really starts to stink: security and quality DO NOT MAKE SENSE for a profit-oriented organization. Instead they will attempt to cut costs and hire the lowest-cost coders they can find (see Boeing, SolarWinds, etc.).

Then when you-know-what hits the fan, they get a lot of mentions in the press (no publicity is bad publicity) and the government will send them experts to analyze and fix the issue. Outcome? They are better known, got a little fine, and everybody now thinks that their product is the one that got the most scrutiny and is therefore probably the most secure (and that might even be the case).

We won’t fix security, nor privacy, nor anything else for that matter, by relying on quarterly-driven, profit-maximizing companies. They are simply incapable.

Another way to look at it: the only way to have secure encryption is when the method is open and publicly scrutinized by as many cryptographers as possible, and the only factor that influences the security is the key. Same with code. APTs have enormous resources and the best people on the planet to find and exploit issues; our only weapon against them is complete openness and working together.


You’ve already noted that it’s not my quote, but to be even more clear, I quoted that statement to introduce another perspective to the discussion. Since you took the effort to lay out your position, I feel like I should lay mine out too.

I should start by making clear what I mean when I say ‘open source’: I mean the ability for anyone to read and submit alterations to the source code of a piece of software. My emphasis is on the latter part (alterations), as I have no issue with the former.

I believe in Kerckhoffs’s principle: a truly secure system is one that holds up even when every last detail of its blueprint is available to the adversary. Systems with accessible source code work well with this if they get feedback (like via bug bounty programs), though they’re still dependent on the security and integrity of the platforms on which they’re built (e.g. Spectre, Intel ME) as well as any blobs they might depend on; this is a rabbit hole/stack of turtles that we have another thread for.

A lot of large projects depend on a large number of contributors and reviewers to stay afloat, and this in turn makes them dependent on trust. While I’m not advocating for closed source (1), I’m also not keen on the idea of a security-oriented system being openly alterable, for this reason: trust is abusable, and this is doubly true when the environment makes it hard to tell malice from stupidity, and when the project owners depend on that trust. It’s tempting to apply Hanlon’s razor to these errors, but when instances of the system are robust and compartmentalized enough, a determined adversary would decide its resources are better spent attacking the source code rather than the deployed code, so the core assumption of that razor vanishes. This is why I think these sorts of collaborative security projects are exercises in human organization and human intelligence as much as they are exercises in digital organization and signals (counter)intelligence.

It can be said that open source is similar to democracy in that it’s the worst form of collaboration except for all the others that have been tried from time to time (paraphrasing Churchill, who was actually quoting someone else). This applies to the vast majority of software, but for national security apparatuses and security-sensitive software (or the security-sensitive portions of it), while a high degree of transparency is usually beneficial, the ability for anyone to submit alterations poses a high degree of risk. This is why I was asking earlier if there was such a thing as ‘transparent-source’ software (which @unman helpfully provided a few examples of, such as Mega). I’m not against transparency and accountability, but I worry about the myriad human-intelligence methods one can use to slip in little innocent-looking pieces here and there that add up to a vulnerability only its mastermind knows of.
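To make that worry concrete, here’s a small, entirely hypothetical sketch (the function names and the path are invented for illustration, not taken from any real project) of how an innocent-looking check can hide a hole. A reviewer skimming a large patch could easily wave the first version through:

```python
import os.path

BASE = "/srv/data/"

def is_allowed_naive(path: str) -> bool:
    # Looks perfectly reasonable in review: only paths under BASE pass.
    return path.startswith(BASE)

def is_allowed_robust(path: str) -> bool:
    # Resolve ".." segments (and symlinks) BEFORE comparing prefixes.
    resolved = os.path.realpath(path)
    return resolved.startswith(os.path.realpath(BASE) + os.sep)

evil = "/srv/data/../../etc/passwd"
print(is_allowed_naive(evil))   # True: the "check" is bypassed
print(is_allowed_robust(evil))  # False: ".." is resolved first
```

A malicious contributor only needs one such “oversight” to survive review; everything else in the patch can be genuinely useful code that builds trust.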


(1) I find the argument that profit-seeking corporations and code quality are fundamentally incompatible somewhat shaky, but that’s not the focus of this post. I agree that the incentives are often poorly aligned, but not incompatible to the degree stated. Then again, I might be naive. Sure, there are the Boeings, SolarWinds, and Equifaxes, but there are also the Googles and Cloudflares (whatever your stance on Google’s privacy issues, you have to admit they have a strong security program).


Hi @fiftyfourthparallel,

I’m glad we agree on Kerckhoffs’s principle.

When it comes to Qubes OS, @unman already pointed out that you can review the pull requests to see the level of scrutiny and thought that goes into accepting a patch. There are very few people with commit rights to Qubes OS.

Publicly traded companies will always prioritize what makes them more money. When it comes to security/privacy/safety, the calculation will always be: how much will it cost if something happens (loss, settlements/fines) versus how much will it cost to prevent it (time, money, opportunity)? Whatever costs less is what they will do. In fact, you could sue the executives if they didn’t. The bug is in the system/law.


This trade-off does not always mean that the product will be less
secure, of course.
I have worked with some developers where the security team had an
absolute right of veto over the product. That software received far
greater scrutiny than most open source projects, although the code was
closed.
It’s arguable that the open source model leads to less secure outcomes
because of the emphasis on encouraging participation, and the (often
false) assumption that many eyes will be reviewing the outcome.


This trade-off does not always mean that the product will be less
secure, of course.


I have worked with some developers where the security team had an
absolute right of veto over the product. That software received far
greater scrutiny than most open source projects, although the code
was closed.

From this and other posts of yours I suspect your daily work is a whole lot more exciting than mine. :wink:

It’s arguable that the open source model leads to less secure
outcomes because of the emphasis on encouraging participation, and
the (often false) assumption that many eyes will be reviewing the
outcome.

That’s @fiftyfourthparallel’s point too. My judgement might be clouded by a mix of frustration with the outcomes unchecked capitalism has produced in the country I live in, and wanting FOSS and similar models to be part of the solution. You both make good arguments that I have to agree with.


Hi everyone :wave:

The original question in this topic is centered on security. I wonder if it could make sense to consider the interactions of security and trust for open-source projects, and if maybe that would help make sense of some of the mixed feelings expressed in this thread (e.g. in @Sven’s last post).

Let me try to explain why.

We want the software we use to be secure. Yet that’s not its main purpose: we use it for something, and we want it to be secure in addition to that. (A piece of software that is perfectly secure but useless would be… useless, and I don’t think anyone would care much that it’s secure.) Please bear with me and hold that thought.

If a team, or someone, (whoever) does a good job and creates a piece of software that is secure, then publishing its source code does not make it more secure (at that moment at least, please bear with me). However, publishing its source code may make it easier to trust. Trust is based on the perception we have of security. Trust benefits from transparency.

Now, back to the first idea: if I won’t use a piece of software (for whatever reason, but for example because I don’t trust it) then whether it is or not secure doesn’t matter much because I’m not using it.

I think we often say “transparency is important when writing secure software” because we’re working to solve people’s problems. People need to do things securely, and for that they need to be able to do things in the first place. We need to write software they can use (trustworthy software) that is also secure. I think open-sourcing is one way (not the only one) to address the first part, and sometimes, but not always, a way to help with the second part. And sometimes it hurts the second part; it becomes a trade-off.

And I believe most teams who care about the security of their users make trade-offs along those lines: open-sourcing brings some benefits to the trustworthiness of their product and may bring some benefits to its security (if everything goes well and people review and contribute to it). It may also bring risks on the trustworthiness side (if people perceive the product as more secure than it is, putting themselves at risk) and on the security side (people may find vulnerabilities to exploit when looking at the sources). Keeping the sources closed brings the corresponding benefits and risks as well, and how those compare depends a lot on the context in which the software is written, reviewed, and used. Ultimately a subtle balance must be found to build a product that is secure and trustworthy enough to be useful in the first place. Neither “secure” nor “perceived as secure” is enough by itself; we need some extent of both. (Outside, maybe, of some very theoretical contexts.)

Does this make sense?

TL;DR: In practice, I find it difficult to evaluate the impact of being open-source on the security of a piece of software without also considering the impact that open-sourcing has on its trustworthiness for the people who need it, because the risks that people mitigate or take (security) by using the software or not using it (trust) depend on both. To the point: I wonder if distinguishing these two aspects explicitly would help when discussing those important trade-offs. That’s my 2 cents :slightly_smiling_face:


My personal view on this:

I do not trust ANY profit-oriented company in general.
So for me, the only way for them to earn my trust is to open-source their stuff.

I also agree that this will not make it secure by any means :wink: - but it allows me to:

  • properly audit their code
  • patch/modify it to my needs
  • not depend on the company itself.
    (like if they abandon their product, I’m still able to use/maintain the code)

The security part may come to any open-sourced project if:

  • there is a significant user base
  • they have security minded contributors
  • they accept security-related patches.

At the same time, the ‘bad guys’ will NOT share their findings; they will sell them for profit.
And they always have more funds → more resources to do their ‘work’.