The biggest flaw of usability testing and how to fix it
Colin Hynes
Partner, Experience Strategy & Research
After running 1,000+ usability studies over the last 25 years and observing about as many, I've come to realize what's wrong with traditional usability testing: it's fake. Contrived scenarios in an artificial setting, with participants trying to give you your money's worth by over-pleasing, nitpicking, or body-slamming the experience. The result? A bunch of red herrings swimming around a project team, leading them to make unnecessary changes or to miss the real issues entirely.
So how do we fix this? At every turn we need to ask: How do we make the process as true to life as possible? Here are some tips I've found that do just that.
#1. Go blind
We all act differently depending on the context. With close friends we act one way, with casual acquaintances another, with coworkers different again. What we say, how we say it, and how forthcoming we are all vary.
While more nuanced, the same goes for research. The participant is trying to feel out who you represent, what you're really looking for, and how they can do their 'job'. That can result in a sort of 'confound soup'--a mix of filters and biases that stands between you, the participant, and the truth.
One way to reduce this issue is to keep the study blind--meaning do not disclose who is sponsoring the research. This can be especially difficult when testing a unique and scarce sample of existing customers. Sometimes the only way to find these users is to let them know who is behind the study (and where you got their name in the first place). This is typical in B2B research, like a study we recently ran for a biotech company that wanted to find a certain type of scientist--who happened to be an existing customer--at a narrowly defined company size in a small region of the world. With only a handful of those people on the planet, we had to reveal who we were researching for. In fact, we needed the salespeople they dealt with to open the door and get the conversation started.
However, in many situations that is not the case, and the people you need are abundant enough--especially with all the great web-based recruiting services out there these days. Even if you're looking for existing brand users, there are ways to set up the screener questions to keep even the most curious of participants in the dark as to who is paying for the study (a rough sketch follows below). The reward is a participant who comes to the conversation with no preconceived ideas and no way to 'study up' beforehand. That results in a 'cleaner' conversation that sets you up to draw conclusions without the cloud of unintended prejudice. And that's a great start toward reliable feedback.
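To make that idea concrete, here is a minimal sketch of a masked screener question, written as code purely for illustration. The brand names and the qualifying rule are made up; the point is that the sponsor sits unremarked in a list of peers, so nothing signals which answer the recruiter is 'looking for'.

```typescript
// Hypothetical screener sketch -- "Acme" stands in for the study sponsor.
interface ScreenerQuestion {
  prompt: string;
  options: string[];
  qualifies: (selected: string[]) => boolean; // recruiter-side rule, never shown
}

const brandUsage: ScreenerQuestion = {
  prompt: "Which of these services have you used in the past 6 months?",
  // The sponsor is buried among competitors so no single brand stands out.
  options: ["BrandA", "BrandB", "Acme", "BrandC", "None of these"],
  // Qualify on the sponsor's brand without ever naming it as the focus.
  qualifies: (selected) => selected.includes("Acme"),
};

// Example: a participant who selected Acme and BrandB would qualify.
console.log(brandUsage.qualifies(["Acme", "BrandB"])); // true
```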
#2. Match learning goals to actual goals
I've lost count of how many times early in my career a participant would say, "Yeah, but that's not something I would really do." To which I would usually respond, "I understand, but let's pretend that today you really do want to." And then we'd play the game of pretend while they tried to find a product or piece of information or some other thing that had never crossed their mind prior to our session.
It can be hard to find participants who match the exact scenarios you're testing. But it's essential that you find people who truly reflect the aspirations and goals of the persona you are building the solution for. It may sound like segmentation screening 101, but it's easy to fall into the trap of relaxing this due to the constraints of qualitative research. Be stubborn on this point. Otherwise, you will spend your time having a really interesting fictional conversation that yields nothing reliable--and plenty of questions from the sponsors of the research (especially those who may not embrace testing or the conclusions that come from it).
#3. Open it up
One thing I’ve learned from having kids is that if you tell them specifically what to do and how to do it, they usually get it done (even if begrudgingly). But they also don’t learn as much as when they feel ownership over the path and get to a solution on their own. It may take them longer and not result in what you expected (or even wanted) but the outcome is more authentic.
The same thing happens when you give a user testing participant a limited set of options and specific rules for what's needed. They try to follow the path you gave them and get the job done so they can move on to the next thing. It's tactical, results-oriented, and narrow when, in fact, the way they normally approach that task is more circuitous and exploratory.
If the users you've brought in have a goal that matches what you want to learn from the research, for goodness' sake let them fly free! Start with an open task where you say something like, "I understand from the recruiting process that you're looking to 'cut the cord' and rethink how you buy your entertainment. So let's do exactly that, the way you would if you came to this app on your phone without me here." Then set them loose to do what they do, how they would naturally do it.
And you might say, "Well, we test low-fidelity prototypes that can't allow every possible path someone might take." That's certainly a reality, especially in agile and design-thinking-driven processes. However, the important part is not that they completely accomplish their natural goal but that you at least see how they start that process. And, depending on how far you get with #5 on this list, you may be able to give them a lot of latitude and get some very reliable feedback as a result.
#4. Make it remote
I remember when I headed up the usability team at Staples and we were building our first usability lab. One of our main concerns was getting the lab on the first floor, or at least really close to a door on the second floor, so we would not waste a ton of precious session time in elevators, walking a maze of hallways, or going up and down stairs. However, what we should have been concerned about was how this foreign environment--our mouse, our computer, our chair, our temperature setting, that funky fake fern--would impact the results of our research.
From academic studies in human factors, we knew about the Hawthorne Effect and how being observed changes how people behave. Pile on top of that all these environmental influences, making participants feel like strangers in a strange land, and you have all the ingredients of a frappe of faux findings. Not very appetizing from a research-reliability perspective.
With the advances in screen sharing, there is very little need for in-person testing. We conduct about 90% of our studies via Zoom. Of course, when we're testing things like kiosks, paper materials, and new form factors, we still run those sessions in person. But that is a rarity these days. With remote testing, the user is in control--their space, their equipment, their HVAC, their fern. It all adds up to them being more comfortable, which dials up the 'real' scale and gets us closer to that nirvana of authentic feedback.
#5. Test real stuff
The rule of 'you get what you pay for' doesn't just apply to washing machines and drive-through tacos. The same goes for low-fidelity design artifacts like clickable wireframes and superficial prototypes. Going low-fi is common in the early stages of a design process, and it may be not only necessary but appropriate, depending on your learning goals.
However, really push yourselves to see if you can add in that little extra that makes it feel more real. At ZeroDegrees we're big fans of the prototyping tool Axure because, once you've mastered it, it provides a level of fidelity that has been a game changer for us in testing. One prototype we built in Axure looked like a live site, yet we put it together in about two weeks after a whiteboarding workshop with our clients.
In these prototypes, we typically add the overtly functional pieces based on requirements: type-ahead search, data passing between screens in a flow, in-page changes based on selections. But we also include some of the nuances that make it feel like an experience that could become real: an automatic tooltip appearing and disappearing on the first page load, an accordion smoothly shifting up and down, a menu rollover delay matching best practices. These touches move the experience closer to real, but they are not superfluous throwaways or Easter eggs for our designers to enjoy: they are deliberate pieces of a larger puzzle that tell the user, "We're making this real so you can try real things and give real feedback."
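As one concrete example of that last nuance, here is a minimal sketch of a menu rollover delay. In Axure you would build this visually rather than in code, and the 200ms value and the class names here are assumptions for illustration, but the timing logic is the same: only open the menu if the pointer actually lingers.

```typescript
// Sketch of a hover-intent delay for a navigation menu (illustrative only;
// the 200ms value and the class names are assumptions, not a spec).
const OPEN_DELAY_MS = 200;
let openTimer: ReturnType<typeof setTimeout> | null = null;

const trigger = document.querySelector<HTMLElement>(".nav-item");
const submenu = document.querySelector<HTMLElement>(".nav-item .submenu");

trigger?.addEventListener("mouseenter", () => {
  // Open only if the pointer lingers, so brief pass-overs
  // don't flash submenus open and shut.
  openTimer = setTimeout(() => submenu?.classList.add("open"), OPEN_DELAY_MS);
});

trigger?.addEventListener("mouseleave", () => {
  if (openTimer) clearTimeout(openTimer); // cancel a pending open
  submenu?.classList.remove("open");
});
```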
Conclusion
With the digital world changing more quickly than ever, we work in a constant state of 'good enough'. So, while the steps above are something to aspire to, they come with the understanding that quick-and-dirty may be your reality. Checking all five boxes on every usability test does not have to be the goal, and falling short does not make your study useless.
However, do consider that every time something artificial is introduced into the process, it creates a ripple that needs to be accounted for when drawing conclusions. Knowing that, and accounting for it, is more than half the battle.