A Usability Evaluation of Qubes OS (2017, pdf)

How had I not found this before? O.o

Thanks for sharing :slight_smile:

Nice find. I don’t recall ever seeing this before either.

Saved the bookmark but forgot to save the PDF at the time, and now it’s timing out. @fsflover, if you have it, would you be so kind as to send it to my email deeplower at protonmail.com or via forum message?

Weird, I had looked through the Internet Archive and it gave me some weird error. Thanks!

@ninavizz pinging you about this in case you haven’t seen it already.

Thx for the @, @deeplow! Yes, I have seen it before. Unfortunately, I find it to be really problematic, namely because it does not specify who the users are that it is evaluating usability for, nor what their needs are. It feels like a very boilerplate, early-learning academic approach to a heuristics review. Heuristic evaluations need to be done with real users, and whitepapers need to qualify the skills and profiles of those users so they can be considered against the analysis. That’s the whole point of the “white-paper” format: to document the complete trail that brings an author to their hypothesis.

Example: in his “Methodology” section, he should be speaking to research methodologies. Instead, he describes Qubes concepts and the technical setup of the machine he’s running the tests on.

He also did the analysis of Qubes in a VirtualBox environment on a Mac, which creates an incredibly flawed premise. Qubes is a difficult artifact to test with users, because it really needs to be running as the core OS on the machines it is tested on; and when seeking feedback from current users, it needs to be tested on hardware configs identical to what they typically work with (or on their own machines).

This report was also created by a developer evaluating Qubes against his own speculation about user needs. For many reasons, that’s problematic: he never did any actual work with users to inform his findings, and he’s not qualified (via experience with users) to offer an “expert” analysis.

“You are not the user!” is a frequent adage in UX. It is really important to de-personalize what “usability” can mean. I often see users on GitHub and here on the forums assert that because something makes total sense to them, it should make sense to “others” (and thus qualifies as “usable” in their thinking). That’s regrettably not how it works, as I’m sure you know. Yeah, we’ve got a lot of leveling-up to do as a contributor community around UX goals and short- vs. long-term expectations.

“In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators.”

That quote is from a good article on how heuristic evaluations can be most effective: Heuristic Evaluation: How-To: Article by Jakob Nielsen

This is a more recent article that reflects the updated mental model of “heuristics” as speaking more to general design principles than to an evaluative process: 10 Usability Heuristics for User Interface Design

I think the above was what the author was speaking to in his paper… but the problem identified earlier, that it’s a flawed premise for any one individual to determine with authority what all the problems might be for all users, feels like the bigger issue at hand to me.

Thanks a lot for contextualizing this work and highlighting some of the problems with it!
