Hi everyone,
The original question in this topic is centered on security. I wonder if it could make sense to consider the interaction of security and trust for open-source projects, and whether that might help make sense of some of the mixed feelings expressed in this thread (e.g. in @Sven’s last post).
Let me try to explain why.
We want the software we use to be secure. Yet that’s not its main purpose: we use it for something, and we want it to be secure in addition to that. (A piece of software that is perfectly secure but useless would be… useless, and I don’t think anyone would care much that it’s secure.) Please bear with me and hold that thought.
If a team, or an individual (whoever), does a good job and creates a piece of software that is secure, then publishing its source code does not make it more secure (at that moment, at least; please bear with me). However, publishing its source code may make it easier to trust. Trust is based on the perception we have of security. Trust benefits from transparency.
Now, back to the first idea: if I won’t use a piece of software (for whatever reason, for example because I don’t trust it), then whether or not it is secure doesn’t matter much, because I’m not using it.
I think we often say “transparency is important when writing secure software” because we’re working to solve people’s problems. People need to do things securely, and for that they need to be able to do things in the first place. We need to write software they can use (trustworthy software) that is also secure. I think open-sourcing is one way (not the only one) to address the first part, and it sometimes, but not always, helps with the second part. And sometimes it hurts the second part; it becomes a trade-off.
And I believe most teams who care about the security of their users make trade-offs along those lines: open-sourcing brings some benefits to the trustworthiness of their product and may bring some benefits to its security (if everything goes well and people review and contribute to it). It may also bring risks on the trustworthiness side (if people perceive the product as more secure than it is, putting themselves at risk) and on the security side (people may find vulnerabilities to exploit when looking at the sources). Keeping the sources closed brings the corresponding benefits and risks as well, and how those compare depends a lot on the context in which the software is written, reviewed and used. Ultimately, a subtle balance must be found to build a product that is secure and trustworthy enough to be useful in the first place. Neither “secure” nor “perceived as secure” is enough by itself; we need some extent of both. (Outside, maybe, of some very theoretical contexts.)
Does this make sense?
TL;DR: In practice, I find it difficult to evaluate the impact of being open-source on the security of a piece of software without also considering the impact that open-sourcing has on its trustworthiness for the people who need it, because the risks that people mitigate or take (security) by using the software or not using it (trust) depend on both. To the point: I wonder if distinguishing these two aspects explicitly would help when discussing those important trade-offs. That’s my 2 cents.