Security of a Homemade Qubes

What major security advantages of Qubes would I be missing if I installed Xen with a (perhaps hardened) Debian dom0 and created a few gnu-libre Linux VMs for regular browsing, banking, Whonix, etc.?
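For concreteness, each of those VMs would basically just be a standard xl guest config, roughly along these lines (the names, paths, and sizes here are only placeholder examples for a PVH Debian guest booted directly from a kernel/initrd kept in dom0):

    # /etc/xen/banking.cfg (hypothetical per-task guest)
    name    = "banking"
    type    = "pvh"
    vcpus   = 2
    memory  = 2048
    # Debian kernel/initrd copied into dom0 for direct boot
    kernel  = "/var/lib/xen/boot/vmlinuz-banking"
    ramdisk = "/var/lib/xen/boot/initrd-banking"
    extra   = "root=/dev/xvda1 ro"
    # LVM volume used as the guest's root disk
    disk    = [ 'phy:/dev/vg0/banking,xvda,w' ]
    # Attach to a dom0 bridge for networking
    vif     = [ 'bridge=xenbr0' ]

I’d start it with something like xl create /etc/xen/banking.cfg, with one such config per task (browsing, banking, Whonix, etc.).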

I don’t use my computers for anything security-critical beyond banking, and I think I’m at low risk for malware since I don’t download anything besides distro updates, PDFs, images, and audio files, I use only Linux, and 95% of my browsing is done in low-tech browsers with JavaScript disabled. I don’t connect any peripherals besides a USB mouse/keyboard, a monitor, and sometimes flash drives, which generally have only documents on them.

Also, it seems safer to have to trust only the Debian, Xen, and Whonix devs rather than the Debian, Xen, and Whonix devs plus the Qubes devs.

While I believe you have the competence to build your own system, it needs to be said that this is hard. It’s easy to make a mistake, and you’ll also need to trust yourself that no mistakes have been made.
And when it comes to security, it may be better to use a ready-made product that many other people use and that gets regular updates rather than to do it all manually.

Also, Qubes is more than just Xen. It has its own advantages, which you can read about in this thread.



Who is more likely to make a mistake that results in a gaping security hole, you or the Qubes devs?

Without any type of peer review, I think it’s going to be extremely difficult to get the same level of quality/security.


You should read this article:

https://www.schneier.com/blog/archives/2011/04/schneiers_law.html

Here’s my favorite part:

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

And that’s the point I want to make. It’s not that people believe they can create an unbreakable cipher; it’s that people create a cipher that they themselves can’t break, and then use that as evidence they’ve created an unbreakable cipher.

The Qubes devs (esp. incl. emeriti) became famous for “breaking” (i.e., discovering and publishing novel vulnerabilities in) things like Intel TXT, Intel VT-d, Xen, and much more since the early 2000s. After they made a name for themselves by showing how the security of so many things is so broken, they decided they wanted to put their skills to work building something secure, and that’s how we got Qubes.


I’m not trying to get the same level. I’m deliberately doing something simpler, which could potentially lead to fewer security holes. The question, like the OP says, is whether I’d be missing some crucial security feature, particularly given my rather low-risk computer usage.

Honestly, I’d really be doing almost nothing other than following the online guides for installing and configuring Xen, Debian, Whonix, etc. Most of the mistakes I could potentially make would stem from problems with the info on their websites, and I’m already trusting the devs by installing and using their software, both in Qubes and in the distros on my other computers, so that wouldn’t be a unique security risk.

It has nothing to do with me. I never said I was trying to create the ultimate security system; the extra security would come from isolation (that’s why I called my project a “homemade qubes”), and anyone who’s managed to install and use Qubes can install Xen and some Linux VMs, aka recreate the basic isolation. It does not require much faith in one’s abilities: most if not all of us have basically done it before.

Sounds like you are most comfortable running a system you fully understand, and you don’t have the skills to personally audit the open source projects you are using.

Don’t know where that will lead you, but I seriously doubt it’s going to result in fewer security holes.

That’s like saying, “I’m deliberately making a cipher simpler than AES-256, which could potentially lead to fewer security holes.” That’s simply not how it works. Reality begs to differ.

What you’re missing is that “you don’t know what you don’t know.” In other words, you can only plug the holes of which you are aware. If there are a bunch of other holes you don’t even know about (and there are), then you won’t think to plug those. It’s not that you don’t want to plug them or even that you’re not capable of plugging them (in principle, after learning how to do so). It’s simply that you can’t plug a hole if you don’t know where the hole is, or even that it exists.

In situations like these, the unique security risk is always the human. You are the new variable being introduced in the equation. Everything now rides on how you implement and interact with the system. You can no longer simply rely on following guides for what to do and not to do as a user, because you are the architect of your custom OS. You are now the one who’s responsible for writing guides for what users of your system should and should not do. This is why you need to be a security expert if you’re going to create your own secure (or, at least, not wildly insecure) operating system.

The “extra security” wouldn’t come from anywhere, because there wouldn’t be any.

Just as the practice of cooking a meal is not simply a matter of mixing raw ingredients together, the development of Qubes is not simply a matter of combining pieces of upstream software together. You can prove this to yourself by reading through all the source code written by the Qubes devs specifically for Qubes. This code is not from Xen or any other upstream project. Make sure you understand why all that code exists before you decide you don’t need any of it to achieve a similar result. Ask yourself why a team of world-class security engineers would have worked on Qubes for over a decade if there was so little work that needed to be done.