How had I not found this before? O.o
Thanks for sharing
Nice find. I don't recall ever seeing this before either.
Saved the bookmark but forgot to save the PDF at the time, and now it's timing out. @fsflover, if you have it, would you be so kind as to send it to my email deeplower at protonmail.com or via forum message?
Weird, I had looked through internet archive and it gave me some weird error. Thanks!
Thx for the @, @deeplow! Yes, I have seen that before. Unfortunately, I find it to be really problematic, mainly because it does not specify who the users are whose usability it is evaluating, nor what their needs are. It feels like a very boilerplate, early-learning academic approach to a heuristics review. Heuristic evaluations need to be done with real users, and whitepapers need to qualify the skills and profiles of those users so they can be considered against the analysis. That's the whole point of the exclusive "white paper" format: to document the complete trail that brings an author to their hypothesis.
Example: in his "Methodology" section, he should be speaking to research methodologies. Instead, he describes Qubes concepts and the technical setup of the machine he's running the tests on.
He also did the analysis of Qubes in a VirtualBox environment on a Mac, which creates an incredibly flawed premise. Qubes is a difficult artifact to test with users, because it really needs to be running as the core OS on the machines it is tested with; and when seeking feedback from current users, it needs to be tested on hardware configs identical to what they typically work with (or on their own machines).
This report was also created by a developer evaluating Qubes against his own speculation about user needs. For many reasons, that's problematic: chiefly, he never did any actual work with users to inform his findings, and he's not qualified (via experience with users) to offer an "expert" analysis.
"You are not the user!" is a frequent adage in UX. It is really important to de-personalize what "usability" can mean. I often see users on GitHub and here on the forums assert that because things make total sense to them, they should make sense to "others" (and thus qualify as "usable" within their thinking). That's regrettably not how it works, as I'm sure you know. Yeah, we've got a lot of leveling-up to do as a contributor community around UX goals and short- vs. long-term expectations.
"In general, heuristic evaluation is difficult for a single individual to do because one person will never be able to find all the usability problems in an interface. Luckily, experience from many different projects has shown that different people find different usability problems. Therefore, it is possible to improve the effectiveness of the method significantly by involving multiple evaluators."
The quote above is from a good article on how heuristic evaluations can be most effective: Heuristic Evaluation: How-To: Article by Jakob Nielsen
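To give a rough sense of why multiple evaluators matter, Nielsen's work models the aggregate share of usability problems found by a panel of evaluators. A minimal sketch (the ~35% single-evaluator detection rate is the average Nielsen reports for heuristic evaluation; the function name is my own, not from the article):

```python
def proportion_found(evaluators: int, single_rate: float = 0.35) -> float:
    """Expected share of usability problems found by a panel.

    Nielsen models this as 1 - (1 - L)^i, where L is the share of
    problems a single evaluator finds (~0.35 on average in his data)
    and i is the number of independent evaluators.
    """
    return 1 - (1 - single_rate) ** evaluators

# A lone evaluator misses most problems; a small panel catches most of them.
for i in (1, 3, 5):
    print(f"{i} evaluator(s): ~{proportion_found(i):.0%} of problems found")
```

Under that model, one evaluator finds about a third of the problems, while five evaluators together find close to ninety percent, which is exactly the "involve multiple evaluators" point the quote is making.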
This is a more recent article that reflects the updated mental model of "heuristics" as speaking more to general design principles than to an evaluative process: 10 Usability Heuristics for User Interface Design
I think the above was what the author was speaking to in his paper… but the earlier-identified problem, that it's a flawed premise for any one individual to determine with authority what all the problems might be for all users, feels like the bigger issue at hand to me.
Thanks a lot for contextualizing this work and highlighting some of the problems with it!