Qubes code audit

I’m not sure that a clear definition has been given.

The flexible nature of a community project like this is valuable, but structure is also valuable. I want to emphasize again the benefits of creating user profiles that are of interest to the community. They would help people keep realistic threats in mind. Someone focused on a narrower scope might otherwise overlook a problem that falls outside it, but being primed with the full range of threats the community cares about would make it more likely that they notice such problems. And I like to think that most people would responsibly report a problem they see even if it doesn’t impact them directly.

I also want to note that I would typically love to participate more in a process like this, but I’m currently creating a Guix template for QubesOS while also building an academic portfolio (so that I’ll actually get admitted to school the next time around), all on top of working a full-time job so I can pay the bills… so I don’t have much spare time at the moment. =(


One possibility would be to write an “Introduction to code auditing” guide that gives full examples of going in and investigating pieces of Qubes code (including things like where to download the Qubes source from), with a little bit of context so readers understand exactly what they are auditing. Possibly even write it as a Wikiversity book.

This could get more people into the audit group, even if at a basic level. And once they learn more, they might add what they’ve learned to the book.

We may also want to think through how people should report “hey, I found this… is it a problem?” to other auditors before flagging it as an actual problem. (In short, it may be necessary to thin out the false alarms before they get passed to the Qubes team.)
One option for reporting would be to have people report it on GitHub, so that the report can point to the exact line of code in the exact version of the exact package. Another option would be something more customized, where people could record which chunks they have audited and whether they found anything. Then you could compute summary statistics like “how many of the files that I said were fine were later discovered by someone else to have contained a problem?” That would be actual feedback for people who are just learning code auditing.
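That last statistic is straightforward to compute once audit reports are recorded in a structured form. Here’s a minimal sketch in Python, assuming a hypothetical report format of `(auditor, file, flagged_problem)` tuples plus a set of files where problems were later confirmed (the names and schema below are illustrative, not an actual Qubes tool):

```python
# Sketch of a "missed bug rate" feedback statistic for auditors.
# The report schema here is hypothetical: (auditor, file, flagged_problem).
from collections import defaultdict

audit_reports = [
    ("alice", "core-admin/qubes/vm.py",       False),
    ("alice", "core-admin/qubes/firewall.py", False),
    ("bob",   "core-admin/qubes/vm.py",       True),
    ("bob",   "gui-daemon/xside.c",           False),
]

# Files where a problem was later confirmed (by anyone).
confirmed_buggy = {"core-admin/qubes/vm.py"}

def missed_bug_rate(reports, buggy):
    """For each auditor: of the files they marked as fine, what fraction
    were later discovered to contain a problem?"""
    cleared = defaultdict(list)
    for auditor, path, flagged in reports:
        if not flagged:
            cleared[auditor].append(path)
    return {
        auditor: sum(path in buggy for path in paths) / len(paths)
        for auditor, paths in cleared.items()
    }

print(missed_bug_rate(audit_reports, confirmed_buggy))
# alice marked vm.py as fine, but it was later confirmed buggy: 1 of 2 files
```

The same records could also answer coordination questions like “which files has nobody looked at yet?”, which matters for dividing up a large codebase among volunteers.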

Just some ideas.


I think the forum would be a reasonable place for this kind of discussion to happen. We don’t want to flood the developers with a large number of reports of varying quality. That would end up causing more problems than it solves, because it would take away time that could be spent addressing issues and a dedicated attacker might be able to find valuable reports before the team has a chance to respond to them. I could definitely help with the filtering process and summarizing reports into a concise problem/impact description. This is essentially what I did for most of my ~4 years as a consultant.

I do have a concern about confidentiality. The QubesOS team prefers to keep security issues confidential until they are fixed, but that is in tension with the very idea of having a large community-based audit. I don’t think it would be practical to keep everything confidential and find success in this kind of effort. So… I would feel more comfortable if someone representing the QubesOS team made a statement on that aspect of it before we started.

This sounds like a really valuable tool but also one that would take a lot of work to build. I agree that it would help a lot with coordinating this kind of audit in a decentralized fashion. I also think it would help us find organizational problems (for example, it might turn out that this kind of audit is good at finding bugs within the logic of a single module but consistently misses cross-module bugs) and help newcomers learn from prior work before they get started.

I already asked the team whether they think that this community audit is a good idea, and they said they do. They are, of course, aware that a security vulnerability could be discovered in any kind of code audit. If that were to happen during this community audit, then the auditors should do their best to respect the guidelines for reporting security issues in Qubes OS, but I don’t think anyone expects or wants a community audit to be conducted in secret. I agree that would be both impractical and undesirable.


There has to be some degree of secrecy. What were you expecting to do if you found a security-busting, double-0-day RCE exploit? Just post about it on the forum so we can all get our qubes ransacked at once by script kiddies? :grimacing:

Actually, on second thought, yeah let’s do just that. Wild west rule. :horse_racing: :partying_face: If you don’t keep up with community bulletins you get hacked and lose all the loot in your inventory.

My reasoning: anything we find with a community audit is likely to already have been found by their state-funded counterparts (not because we’re bad but because they have money cheatcodes), in which case it is already being employed against the people. I don’t know about you but I’d rather take an L for the team to a script kiddie if that’s what it takes to patch a hole that is otherwise being used by god knows who to spy on Qubes users daily.

Throw Qubes OS into the forge. It can only come out stronger and better.


By necessity it wouldn’t be secret: as soon as we include any communication channels in the audit design, we can’t place much trust in them, because we have no sensible way of checking that community volunteers aren’t state-sponsored or otherwise malicious. I might be wrong, but I think that in the Qubes context, shouting louder about a vulnerability won’t get it fixed faster, so there’s no reason to shout from the rooftops once the information is technically discoverable. If someone in the audit realizes a vulnerability is sensitive before they’ve drawn the group’s attention to it, then it makes sense to report it in the usual way without opening any internal discussion.

What could be bad is the auditors openly discussing for weeks whether a vulnerability is serious, without ever informing the team. If in doubt, it’s better to send a report than not.


Let’s imagine two possible examples:

  1. One of the participants in the community audit discovers a vulnerability, and the other participants don’t know about it yet. In this case, the discoverer can responsibly disclose the vulnerability to the Qubes security team without informing anyone else and without announcing it publicly.

  2. In the course of a public discussion, the participants in the community audit realize that they have discovered a vulnerability. Since their discussion is already public, all they can do is inform the Qubes security team of what they’ve discovered, including the fact that it’s already public.

I took @skyvine to be asking about the first sort of scenario. It doesn’t make sense to ask about the second sort of scenario, because once you’ve said something publicly on this forum, for example, you can’t snatch it back from other people’s inboxes (or from their minds).

By that logic, allowing the code to be open source is also “wild west rule,” because it allows bad actors to discover vulnerabilities more easily. Should we also keep the source code secret for the sake of security? Of course not, because that’s just the old security-through-obscurity fallacy.

There’s currently no such thing as “community bulletins” (unlike security bulletins, which do exist), but let’s imagine for a moment that the community were to start issuing their own community bulletins, perhaps in conjunction with this community audit. Let’s imagine they’re like a community-run version of QSBs. There are several things to keep in mind:

  1. QSBs on their own don’t do anything to your system. It’s the security patches released alongside QSBs that actually patch your system and protect it. The QSB itself is just a text document. It’s purely informational.

  2. Sometimes, QSBs contain instructions for special user actions that are required to address security vulnerabilities. This is relatively rare, but it does happen from time to time. In this case, it’s users actually following the instructions and taking those actions on their own systems that does something.

  3. It’s unlikely that community security bulletins would be accompanied by community security patches. Not only would this require considerable expertise (to actually create working patches), but it would probably be very difficult for most Qubes users to trust the patches. It would probably also be non-trivial for many users to actually install the patches, since the community doesn’t have access to official Qubes signing keys.

  4. This means that the utility of community bulletins would probably be limited to providing instructions for users to follow in order to make modifications to their own systems. This is risky for most users, since following such instructions (e.g., entering commands one doesn’t understand in dom0) could compromise the system.

  5. This also assumes that there wouldn’t be an official QSB (and possibly accompanying patches) addressing the same security vulnerability. If there’s both an official QSB and an unofficial community bulletin for the same vulnerability, then there’s no reason to follow the community bulletin. It’s more risk for no added benefit.