It’s not that I want to act in bad faith, but this is not proof of anything. Sometimes vulnerabilities are so complicated to exploit that it’s hard to tell whether exploiting them in a meaningful way is actually possible.


I’m not trying to prove anything, I’m just saying people won’t think switching to Qubes is worth the inconvenience until it is as hard to hack as a paper notebook locked in a safe. Just my opinion of course.

I don’t think that pointing out the difference between a vulnerability and an exploit has anything to do with bad faith.

Limited analogy using doors and windows

I know folks who lock the door of their house so that others don't get in while they're out. Each of their house's windows is a vulnerability. They're made of a material chosen for its transparency, but that turns out to be very easy to break. Thankfully, that vulnerability doesn't get exploited very often, so they keep choosing to live (happily) in well-lit rooms. :potted_plant:

It’s also probably worth noting that the lock on their door is vulnerable to picking (and a number of other less gracious methods of entry), but what’s a vulnerability for some doesn’t seem to be exploitable by, or worth exploiting to, their neighbors. Wisely, they choose not to lose sleep over impractical vulnerabilities. Instead, they know what they’re trying to protect, and from whom, and assess the likelihood of meaningful exploits according to their own threat model. :slightly_smiling_face:


The security of an operating system is always a moving target. What was a secure procedure or software system five years ago may not be today.

Anytime anyone wants to complain about something that does not seem like a good idea in Qubes, they should first ask themselves, “Why did the Qubes (or Linux OS) developers do things that way?” As best as I can see, the developers have made a lot of very good, intelligent decisions.

I want to focus on “worth the inconvenience.”

Are you suggesting that Qubes users should only be those who, most likely, took several organized Linux courses in college?

Suppose I am working for a company with an executive who travels to some of the different parts of the world where our company has offices.

I go to him: “While you are traveling, our corporate information is at risk. Not only do our competitors want to know our corporate plans, but some people in our own company would use some of our corporate communications as leverage against others’ best interests. I have a solution for how to protect that information while you travel, and still have it available to show to those you need or want to show it to. It is called Qubes.”

So I give him a laptop with the latest stable version of Qubes OS on it. I show him the little trick for how the laptop indicates whether it was tampered with, and give him a few days to get used to it before he leaves. After a few days, he is getting ready to leave on his trip.

He gives the laptop back to me: “I spent a bunch of hours with this. I read a lot of documentation. I need a computer system that I spend more time using, rather than discovering all the clever ways I can create more qubes to utilize it.”

Just like the person who recently wrote on the forum, recounting the tens of hours he spent with Qubes and finishing with, “I am outta here.”

As I said elsewhere, the developers have made great decisions about where to stop developing. They have provided us with the equivalent of multiple computers on one piece of hardware, composed of multiple, different operating systems that are carefully stitched together and crafted so they do not leak information without my permission, provided I practice good operations security (OpSec).

Adding more programs to the operating system, often third-party programs, risks security compromises and complicates both the security and the functionality of Qubes. I need to use the operating system for surfing the web, receiving information, writing replies, and sending information back out onto the internet.

My suggestion: allow a beginner’s install of Qubes, in which:

  1. The user is not asked to open a template to the internet.
  2. The user is not asked to put anything into dom0.
  3. The user is not asked to use a terminal.

I want the first install of Qubes to be useful and more ready to be used, even for someone who has only used Windows.

The developers might choose to facilitate this by putting into the Qubes 4.2 final release a key to authenticate the extra qubes, which would come from another site, provided that those who make the decisions in Qubes feel that the maintainer of that site is trustworthy.

While I am eager to implement the split qubes that have been written, given the amount of time and thinking a newcomer might need to properly install and then use them, they are not at the top of my list of extra qubes needed.

I believe the first extra qube needed is an alternate sys-net VPN qube. I want users to be able to click-click install it, then enter the credentials they purchased for the VPN provider they chose. I am not sure which method should be used to insert the sys-net qube into the main stream of the full OS, or how (or whether) it should do updates.

A sys-net VPN install is time consuming for a newcomer, who probably, desperately, needs a VPN. I would rather the user be able to see all the options offered by the VPN provider he chooses: all the entry points, which one is running faster right now, and so on. I am not sure whether doing so is possible while remaining secure.
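For what it’s worth, the plumbing for such a VPN qube already exists in the standard qvm-* tools in dom0; a click-click installer would essentially wrap something like the sketch below. The qube and template names are placeholders, and configuring the VPN client software inside the qube (OpenVPN, WireGuard, etc.) is a separate step not shown here.

```shell
# Sketch only (dom0): create a qube that sits between sys-net and app qubes
# and provides network, so traffic from downstream qubes goes through the VPN.
# "sys-vpn" and "fedora-39-xfce" are placeholder names; adjust to your install.
qvm-create --class AppVM \
    --template fedora-39-xfce \
    --label orange \
    --property netvm=sys-net \
    --property provides_network=True \
    sys-vpn

# Point an app qube at the VPN qube instead of sys-firewall:
qvm-prefs personal netvm sys-vpn
```

The key property is `provides_network=True`, which lets other qubes select sys-vpn as their NetVM; a newcomer-friendly installer would mainly need to hide these two commands behind a dialog that also collects the VPN credentials.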

I feel that anyone who has recognized the ways a single operating system can be compromised, and who has used the design of Qubes even a little, would not choose to go back to the risks of a single-computer operating system.


slightly off-topic musings about doors and windows

Hi @gonzalo-bulnes, I see this a bit differently. There is a reason it is called “breaking and entering”. Legally, if you enter a house through an open door or window, you are not breaking any laws (unless there is a posted sign telling you not to enter, which would then be trespassing).

The function of the closed door or window is not to prevent anyone from entering, which technically is still trivial as you pointed out, but to make entry an act of “breaking and entering”, which is prohibited by law and therefore opens the perpetrator to prosecution.

Actual security measures would be iron bars on the windows and large bolts on a steel door, paired with detection mechanisms like watchdogs, alarm systems, and cameras to ease identification of the intruder… armed guards, etc.

Many of us just happen to live in societies in which the vast majority of people have agreed to be peaceful and law-abiding. Traveling in countries where this is not the case will quickly disabuse you of many assumptions. I’d liken using an internet-connected computer more to taking a stroll through Libya.

But even in our societies, it’s not really a strategy to call 911 and wait for the police to arrive if someone enters your home at night. At that point, you’d better have a loaded gun ready. Anything else is just make-believe, gambling, hoping for the best.


But nothing is unhackable. That’s impossible to achieve in the real world.

I think that this line of thinking is misguided for two main reasons:

  1. As others have already pointed out, there’s a big difference between a theoretical vulnerability and a practical exploit. Many things are vulnerable in theory that are infeasible or impossible to exploit in practice. Several conditions must be met in order for a vulnerability to be of practical significance for you:

    A. It must be possible to exploit the vulnerability in your actual system.
    B. Your adversary must be aware of the vulnerability.
    C. Your adversary must know how to exploit the vulnerability.
    D. Your adversary must have whatever resources (e.g., time, money, technology, people) are required to exploit the vulnerability.
    E. Your adversary must be willing to expend the required resources in an attempt to exploit this vulnerability. (In other words, your adversary must stand more to gain than to lose from the attempt, or at least believe this to be the case.)

    Even then, success is not guaranteed. Another way of putting this is to say that everyone’s threat model is different, and not every theoretical vulnerability is relevant to yours.

  2. Your statement is about reported vulnerabilities, which puts the focus in the wrong place. It’s easy to reduce the number of reported vulnerabilities: Stop reporting them. Of course, the Qubes OS Project can’t stop external security researchers or upstream projects from reporting vulnerabilities, but a significant proportion of reported Qubes vulnerabilities are discovered and reported by the Qubes developers themselves. If you make the project’s goal “reduce the number of reported vulnerabilities,” that gives the project a perverse incentive: Keep self-discovered vulnerabilities secret. To be clear, the Qubes OS Project would never do this, but the dangers of bad incentives should not be underestimated (also see Goodhart’s law).
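The five conditions in point 1 are conjunctive: a single failed condition is enough to make a vulnerability practically irrelevant to you. As a toy sketch (the function and parameter names are purely illustrative, not part of any real tool):

```python
def vulnerability_is_relevant(system_exploitable: bool,
                              adversary_aware: bool,
                              adversary_knows_how: bool,
                              adversary_has_resources: bool,
                              adversary_willing: bool) -> bool:
    # Conditions A-E above must ALL hold simultaneously; any single
    # False makes the vulnerability practically insignificant for you.
    return all([system_exploitable, adversary_aware, adversary_knows_how,
                adversary_has_resources, adversary_willing])
```

For example, a vulnerability that is exploitable and well known, but whose exploitation would cost more than your adversary stands to gain (`adversary_willing=False`), evaluates as irrelevant under this model, which is the point about everyone’s threat model being different.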

It’s not that hard to “hack” a paper notebook locked in a safe, though. Any safe can be brute forced given enough time and resources. In fact, the longest attack duration tested for UL safe certifications is only 60 minutes, and the required tools are probably much easier to come by than, e.g., zero-day exploits worth hundreds of thousands or millions.


Well, at least unhackable by any three-letter agency or by something like NSO Group. If that is unachievable with modern software, then there is something fundamentally wrong with modern computers and OSes.

But seeing that Apache (as an example) had no remote code execution exploits discovered in 2023 allows me to hope that maybe, as Qubes OS matures, there will be at least one software-plus-hardware configuration that will achieve that status.

Well, at least it is (probably) impossible to do it remotely… I guess it depends on everyone’s threat model.

PS: I know it is not easy but I’m not just waiting for things to happen, I’m working actively to try and find a solution to this problem.

Sometimes I wish there’d be a facepalm icon right next to the heart I could hit when people try really hard not to understand what was just explained to them.


Not sure who that comment was directed at, but what I am saying is that modern software should have a system that prevents zero-days before they happen.
It is not unreasonable to think a system can exist that will not do anything besides the purpose it was built for.
Watching past recordings makes me believe Joanna Rutkowska thought that this could be achieved using hardware virtualization, but it is clear today that more is needed.

You might be interested in this blog post of hers:

Preventing zero-days before they happen sounds like “security by correctness,” whereas the Joanna/Qubes approach with virtualization is “security by isolation” (aka “security by compartmentalization”), which assumes that such bugs are inevitable and, rather than trying to prevent them, tries to limit the damage they can do instead.

Indeed, but that would be more like a single-function appliance than a general-purpose computer capable of running arbitrary programs.


I should add that this still wouldn’t eliminate all risk, because many security flaws are, in fact, design flaws. Even a system that does only what it was designed to do can still be problematic if its design allows for outcomes that the designers did not intend or failed to foresee. Consider Nick Bostrom’s classic example:

Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.

The paperclip-maximizing AI perfectly satisfies your condition: It is a system that will not do anything besides the purpose it was built to do (namely, make as many paperclips as possible). But, as you can see, being built for a good (or even a seemingly-innocuous) purpose does not guarantee that the system will function the way we intended or that the outcome will be desirable.



I was about to issue a challenge to name a single physical object that cannot be abused in a way that is clearly not its intended purpose.


Yes, I read her blog posts a few years ago.

Security by isolation can limit the damage that bugs in programs can cause, but not if the bugs are in the virtualization software itself (or in the mechanism that handles communication between containers; see the qrexec-daemon memory corruption bug above).
By breaking functionality into parts small enough that unintended behaviour is nearly impossible, and isolating all the parts from each other, maybe we can limit the bugs to the complex software running inside the app containers.

This is really a philosophical discussion. Yes, one can make exploitation harder and more costly, but never impossible. I think that’s ultimately also what you want and are looking for. Unless you can find a perfect human writing perfect code compiled by perfect tools running on a perfect machine… it’s all a question of degrees.


I believe the folks working on the seL4 microkernel are attempting this with varying degrees of success.

Also, having every VM’s memory and CPU registers encrypted, with a different key per VM, should protect processes from malicious hardware access.
Combining this with Qubes would, in theory, achieve the goal of preventing most zero-days.

A post was merged into an existing topic: Ready to use qubes with third party software

Why was this closed @Sven ? Was this an accidental side-effect of moving posts around?

Felt like the right thing at the time.