Heuristic Evaluations Frequently Asked Questions
What are heuristic evaluations?
Heuristic evaluation is a review or “inspection” of an interface or product by experts, using a set of formal or informal heuristics to guide the review and document issues.
What are heuristics?
Heuristics are guidelines of good practice. The most widely used formal heuristics were developed by Jakob Nielsen and Rolf Molich (1990) and updated by Nielsen (1994). These 10 heuristics were developed for evaluating software, but they can be adapted for websites, apps, and more.
Heuristic Evaluations vs Expert Review
There is no clear distinction between these two ways of characterizing an inspection. Some practitioners use “expert review” to mean a review by a single expert and “heuristic evaluation” to mean a review by several experts. Other practitioners use “heuristic evaluation” to mean a review guided by a clearly defined set of heuristics, such as Nielsen’s. In contrast, these practitioners call the inspection an expert review when the heuristics have been internalized through familiarity and practice.
We use the terms interchangeably, and we adapt the methodology to the client’s needs with regard to the choice of heuristics (or none) and the number of reviewers.
How many experts do you need for an effective heuristic evaluation?
Nielsen, who is credited with combining a small set of evaluators for heuristic evaluation and a small set of users for usability testing into a method he called “discount” usability engineering, found that the best cost-benefit ratio for heuristic evaluation is achieved with 3-5 evaluators.
That’s a lot of experts! Most practitioners use fewer experts for reviews, with some doing reviews by a single expert. We advocate using 2-3 reviewers for effective results.
What’s the methodology used in a heuristic evaluation?
In formal practice, each evaluator does an independent inspection of the interface, documents violations of the heuristics, writes notes, and then joins a meeting with the other evaluators to compare notes and arrive at consensus findings.
Severity ratings are often assigned to the findings to prioritize the need to address issues uncovered.
Although not part of Nielsen’s original guidelines, practitioners today often conduct their review around a typical scenario of use for a specific type of user or persona.
We conduct this type of persona-based, scenario-driven review.
What’s the difference between heuristic evaluation and usability testing?
Usability testing provides feedback from users while they engage in typical tasks. Heuristic evaluation provides feedback from usability experts (who may or may not be domain experts), based on anticipated issues users will face.
Which is better? Heuristic evaluation or usability testing?
Both tools are widely used, and both produce useful findings. So, one is not better than the other. However, they do produce different kinds of findings:
- Heuristic evaluations tend to find more of the minor issues that may not emerge in usability studies.
- Usability testing tends to find some issues that experts miss in heuristic evaluation.
Recent research indicates that both methods tend to find roughly the same number of important findings.
If you can only choose one method, which one should it be?
If your budget does not allow for using both methods, we strongly recommend choosing usability testing over any other method. Nothing beats learning directly from users.
If you can use both methods, which one comes first?
The one-two punch of heuristic evaluation and usability testing is the best combination of tools for most situations.
The results tend to overlap in some ways and diverge in others, producing findings that substantiate user experience from two different perspectives: experts and users.
With clients, we typically start with a heuristic evaluation, if an interface or product exists, so that we can identify usability issues early. Depending on the timeline, we advocate one of the following approaches:
- If time permits, the issues identified in the evaluation can be addressed before testing to improve user experience during testing and get at some of the larger issues.
- If there isn’t time to make changes to the interface, the issues identified can become the basis for tasks and scenarios to be used in testing.