When developing new products or enhancing current products with new or improved features, it’s never too early to get user insights to inform your designs. That way, user experience can be built into the product before it goes to market. This process is called iterative design.
Usability testing provides the means to gather those insights. With usability testing, you learn what your target users like and don't like, what pleases or frustrates them, and what supports or hinders their goals with your product.
Usability testing can be moderated or unmoderated.
In-person, face-to-face moderated usability testing provides the advantage of real-time engagement with your users, but it takes time to set up. If you use a recruiting company, you generally need to give them two weeks to screen, recruit, and schedule your participants. If you are recruiting and scheduling participants yourself from your company's user panel, you can speed up this process a bit, but you still probably need a week to get underway. In addition to the benefit of seeing what users do as they perform tasks with your product, face-to-face moderated sessions let you read your users' body language as you engage in the flow of natural conversation.
Remote moderated usability testing also takes time, since you must schedule participants into sessions on collaborative software platforms such as Zoom or Teams, but the timeline can be accelerated substantially if you have access to a user panel and can use a calendar program like Calendly to let participants choose their own session times. Remote testing has the advantage of engaging with users anywhere, not just in your location, so your geographic distribution can cover your whole user base. In addition, you get to see the context in which your users use your product, most typically on their own computer or smartphone.
The advantage of moderated usability testing, whether face-to-face or remote, is that you get to engage with your users while they are engaging with your product. During the session, you can ask questions, probe for insights, and adapt your plan to suit the needs of the situation.
Unmoderated usability testing means that users interact with your product without the presence or engagement of a moderator. What you gain in unmoderated testing is speed. Although you still have to take the time to set up the study, popular platforms like UserTesting or UserZoom make setup easy with templates for creating your screener questions, tasks, and post-task questions. You select your date range, launch the study, and users get screened quickly. You can then select the participants you want or allow everyone who passes the screener to participate on a first-come, first-served basis, and you typically begin getting recorded sessions within the hour.
The advantages of unmoderated testing are not only the speed of results but also the generally lower cost compared to moderated testing. Another big advantage is that you can run many more participant sessions in less time than moderated testing allows.
The disadvantage of unmoderated testing is that you can’t engage with the user, and you can’t adjust the tasks or scenarios once the study is underway.
If you want fast results, choose unmoderated usability testing. If you want the ability to explore your observations with your users as you observe their engagement with your product, choose moderated usability testing.
The ideal scenario is to choose both options, moving between moderated and unmoderated studies as your product moves through development in an iterative design process that builds usability into design decisions.
A/B testing, also called "split testing," presents one of two versions of a page, screen, or marketing message to current users: one group gets A (the baseline version) and the other group gets B (the comparative version). Over the period of the study, analytics are used to determine which version leads to more clicks, longer dwell times, or higher conversion rates.
Google Optimize is one example of an A/B testing tool; in Google's case, it ties directly to Google Analytics. Companies use the results of their A/B tests to determine whether incremental changes in their designs improve user experience, resulting in increased user satisfaction and measures of success such as greater conversions. This method is generally done at large scale on live websites, using web traffic analytics to determine which design feature performs better.
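To make the comparison concrete, here is a minimal sketch of how an A/B result might be scored once the traffic numbers are in. The visitor and conversion counts are hypothetical, and the two-proportion z-test shown is one common way, not the only way, to judge whether B's lift over A is more than noise.

```python
# A minimal sketch of scoring an A/B test from conversion counts.
# All numbers are hypothetical, for illustration only.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return the z statistic and two-sided p-value for two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)               # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return z, 2 * (1 - NormalDist().cdf(abs(z)))           # two-sided p-value

# Hypothetical split: 5,000 visitors each; A converts 400 (8.0%), B converts 460 (9.2%).
z, p = two_proportion_z_test(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(f"z = {z:.2f}, p = {p:.3f}")   # p < 0.05 suggests B's lift is probably real
```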
But A/B testing can also be done early when design features are still in development.
For instance, what if you have a single design question: "Which is better?" How do you get a fast answer to a preference question like that, with something that works like A/B testing? There are many options for pre-live user testing: UsabilityHub, UserZoom, Maze, UserTesting, and Poll the People. Any of these platforms can answer design questions early in development.
This need is increasingly important for designers who use prototyping tools like Figma or Sketch, which give them great creative flexibility but also give rise to many design questions.
Because the designs look and feel fully functional, feedback from users provides a way for designers to select the best of these designs to send to engineering. And with the popularity of Agile development cycles, you can get the answer to a design question really fast.
A multinational fast-food chain wanted to test different designs to understand their customers' preferences for applying rewards to purchases in the mobile app. They had several key concepts they wanted to test to learn which designs gave users a better understanding of how to use rewards when making purchases.
They knew that many users of the app were building up rewards points but not redeeming them, and they wanted to understand why. Was it a confusing design? A miscommunication of how to use points to make a purchase? A lack of awareness that rewards were available to redeem?
So, they decided to conduct remote moderated usability testing and asked us to conduct the research. Using one of the popular platforms for this kind of testing, we set up the screener to recruit participants who had made purchases using the app, so that we knew the participants were current customers. We also counterbalanced the scenarios: everyone started with the same first task, but for the second task, which compared two mid-fidelity Figma design screens, half the participants went to the A design first and then the B design, while the other half started with B and then went to A. In total, we tested the two versions with 16 participants.
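For readers who want to see the mechanics, here is a minimal sketch of that kind of counterbalanced assignment, assuming participants are simply alternated between the two task orders as they are scheduled; the participant IDs are placeholders.

```python
# A minimal sketch of counterbalanced task-order assignment.
participants = [f"P{i:02d}" for i in range(1, 17)]   # the study's 16 participants

orders = (("A", "B"), ("B", "A"))                    # the two possible task orders

assignments = {
    pid: orders[i % 2]       # even-indexed participants see A first, odd see B first
    for i, pid in enumerate(participants)
}

for pid, (first, second) in assignments.items():
    print(f"{pid}: design {first}, then design {second}")
```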
The result? Inconclusive. There was no noticeable difference in user experience between the two prototypes. So, what were the designers to do? They had to make a choice when user testing indicated no clear winner.
What if they had used a platform like Poll the People to get feedback about user preferences for designs in development? They could have run a quick study and gathered many more than the 16 responses we got in the moderated remote usability testing study. For a small investment of time and money, they could have requested 100 responses or more, with results in minutes, or at most hours. And if those results still weren't conclusive, they could have polled another 20 or 30 participants, or whatever number it took to pick a winner.
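Some rough arithmetic shows why the larger sample matters. Here is a minimal sketch using an exact two-sided binomial test against a 50/50 "no preference" null; the vote counts are hypothetical, but they illustrate how roughly the same preference share that is noise at 16 responses becomes a clear signal at 100.

```python
# A minimal sketch: exact two-sided binomial test against a 50/50 null.
# Vote counts are hypothetical, for illustration only.
from math import comb

def two_sided_binom_p(k: int, n: int) -> float:
    """Exact two-sided p-value for k 'prefer A' votes out of n under p = 0.5.
    The null is symmetric, so the two-sided p is twice the larger tail."""
    tail = sum(comb(n, i) for i in range(max(k, n - k), n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

print(f"10 of 16 prefer A:  p = {two_sided_binom_p(10, 16):.3f}")   # ~0.45: inconclusive
print(f"62 of 100 prefer A: p = {two_sided_binom_p(62, 100):.3f}")  # ~0.02: a likely winner
```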
Poll the People would also have allowed them to isolate and test specific elements that might have contributed to the drop-off in rewards redemptions. Any webpage is like an orchestra: a multitude of elements (buttons, titles, animations, and so on) work in harmony to accomplish a greater goal. By isolating each of the elements of designs A and B, this fast-food company could have understood their users' pain points and gotten a sense of which element would perform better.
Perhaps the problem was the placement of the user's total accumulated points. While this figure is the central component of most rewards programs, it might have been lost in the fray of a busy interface or other confusing details.
By asking the simple question, “Which do you prefer?”, the design team might have gotten insight into whether or not this central component was overlooked. If this wasn’t the issue, they could then launch another quick user test to pinpoint the issue. At this point, a moderated usability test would have likely worked out well for them.
This raises the question: should this technique replace moderated or unmoderated usability testing?
Of course not, but it can certainly add value to support concept testing and decision-making when the question to users is: Which do you prefer?
As UX researchers, we don't have to choose just one tool or technique for gaining insights into user experience. Our toolkit has dozens of tools, and variations on tools, that we can match to the specific needs of your timeline, budget, and research questions. It's great to have options that range from a single-question pre-live A/B test to a full-scale usability study.
Carol brings her academic background and years of teaching and research to her work with clients to deliver the best research approaches that have proven to produce practical solutions. Carol’s many publications (6 books and more than 50 articles) have made a substantial contribution to the body of knowledge in the UX field. The 2nd edition of her award-winning handbook Usability Testing Essentials is now available.