Usability Testing
Beware of the Screen-Lickers (Photo: Woman licking window)


I hadn't intended to write an article; then, in a moment of reciprocity, I was invited by LinkedIn to provide my perspective on this topic... only to be limited to 4 responses to a 7-part invitation.

Instead of wasting the momentum, I'm offering the full 7 parts (+ a bonus section) here. Enjoy!


LinkedIn's advice column:

Here's how you can effectively gather feedback from users as a usability tester.

Let me take a stab at giving my perspective on each of these.

1. Define Goals

LinkedIn says:

Before diving into usability testing, it's essential to define what you want to achieve. Are you looking to improve navigation, increase conversion rates, or ensure the content is easily understandable? Having clear objectives will help you create focused tasks for participants and interpret feedback effectively. Remember, the more specific your goals, the more actionable your findings will be. Keep these objectives in mind throughout the testing process as they will guide your analysis and help you stay on track.

My 2p's worth:

Brutal honesty for any UX Researcher / Tester (outside academia):

It's important to understand for whom you are conducting the research (key stakeholders), and what it is they are looking to understand.

It's often easy to find fault with a solution, yet the client has no intention of fixing the fundamental problems. All too often they are looking for validation; at worst, simple answers and ways to mitigate rework.

Often a simple usability heuristic evaluation / expert evaluation, or even some guerrilla user testing, will tell you what's going to go wrong long before you involve real users.

Make sure you define goals that not only give the client fair warning of what's coming, but also allow them to find quick wins.


2. Choose Participants

LinkedIn says:

Selecting the right participants is critical to obtaining relevant feedback. Aim for a diverse group that represents your user base, considering factors like age, tech-savvy, and familiarity with your product. It's not just about quantity; the quality of your participants can make or break the insights you gain. Also, consider compensating them for their time to encourage thoughtful responses and commitment to the process. The feedback from these users will be a reflection of your target audience's needs and challenges.

My advice:

Rule of Three (3)

Hopefully you are recruiting participants who match the personas you have already researched and prepared for the project. If not, you'll need to at least give the basics some thought before recruiting.

The rule of 3 exists because 2 could be a fluke - a third will determine whether it's a pattern. If you do this as a minimum for each persona, you're on your way to clear guidance.

I tend to tell customers that we don't need more than 12 participants. We are extremely likely to see patterns occurring, and plenty of repetition.
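For the curious, this squares with the classic Nielsen/Landauer problem-discovery model. A minimal sketch, assuming the oft-quoted average of a ~31% chance that any single participant hits a given problem (your product's figure will differ):

```python
# Nielsen/Landauer problem-discovery model: found(n) = 1 - (1 - p)^n,
# where p is the chance that one participant encounters a given problem.
# p = 0.31 is the commonly cited cross-study average, not a law.

def proportion_found(n: int, p: float = 0.31) -> float:
    """Expected share of usability problems uncovered by n participants."""
    return 1 - (1 - p) ** n

for n in (3, 5, 12):
    print(f"{n:>2} participants: ~{proportion_found(n):.0%} of problems found")
# ->  3 participants: ~67%,  5: ~84%,  12: ~99%
```

Which is exactly why 12 is where I stop: the curve has long since flattened, and each extra participant is mostly telling you what you already know.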

Also consider, when you're recruiting, hiring 'stand-by' participants. These are people you pay to sit in the waiting room in case someone doesn't show - especially if your client is observing the tests.

Whilst you should pay participants for their time, do look to exclude anyone who has participated in user research in the past 6 months (you do get people who have a bit of a side-hustle going on - attending user tests, filling out surveys and participating in studies). Also exclude anyone who is themselves a researcher - unless, of course, that is the persona you're after.
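To apply those exclusions consistently, the screener can live in code. A minimal sketch, assuming your recruiter exports candidates with these (purely illustrative) fields:

```python
from datetime import date, timedelta

def eligible(is_researcher: bool, last_study_date: date | None,
             today: date | None = None) -> bool:
    """Screen out fellow researchers and 'professional participant' side-hustlers."""
    today = today or date.today()
    if is_researcher:
        return False  # unless 'researcher' happens to be the persona you're after
    if last_study_date and last_study_date > today - timedelta(days=182):
        return False  # took part in research within the past ~6 months
    return True

print(eligible(False, date(2024, 1, 5), today=date(2024, 4, 1)))  # False: too recent
print(eligible(False, None, today=date(2024, 4, 1)))              # True
```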

Accessibility

Please be considerate and invite people with varying disabilities, and accommodate their needs. All too often, user testing facilities are not designed to be accessible, or accessibility aids are not available. Also consider people who are neurodivergent, including those with autism or ADHD.

Diversity

Please consider a range of cultural and ethnic backgrounds, genders, races and ages. All too often, recruiters will pick whoever is easiest to find near the testing facility. Push them (or yourself). You learn nothing additional from having 3 to 12 people who all look, think and behave in the same way.

Inclusivity

Do consider minorities. Try to have one or more people from the LGBTQIA+ community.


3. Craft Scenarios

LinkedIn says:

Crafting realistic scenarios for participants to work through is a key step in usability testing. These scenarios should mimic actual tasks that users would perform with your product, providing a context that elicits natural interactions. Make sure they're clear and concise to avoid any confusion, which could skew your results. By observing how users navigate these scenarios, you'll gain insights into where they encounter difficulties and what aspects of your product are most intuitive.

My response:

Real-world Scenarios

It's very easy to overcomplicate instructions or lead the user. Consider exactly why your persona would be sat in front of a solution / entering your client's world. What's THEIR goal?

Prime the user so they are comfortable, with just enough information that they would have in the real world - e.g. an example advert that would have brought them to the starting point of what you are testing.

Give them any information that they might need and would typically have in the real world, e.g. if you're testing credit card transactions, give them a dummy credit card number to use.

Determine prompts in advance. Some participants may not be able to continue without one. Consider what those might be.

Be VERY careful not to lead.
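One way to keep all of this honest is to write each scenario down in a fixed shape before the sessions: the persona's goal, the priming material, the dummy data, and the prompts ordered least-leading first. A sketch (the structure and field names are my own, not any standard):

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    """One test scenario, written from the persona's point of view."""
    goal: str                 # what THEY want - not what we want to test
    priming: str              # real-world context handed over up front
    materials: dict[str, str] = field(default_factory=dict)  # dummy data they'd have
    prompts: list[str] = field(default_factory=list)         # pre-agreed, least leading first

checkout = Scenario(
    goal="Pay for the items already in your basket",
    priming="You clicked through from this example advert (hand over printout).",
    materials={"card": "4111 1111 1111 1111 (dummy test number)"},
    prompts=[
        "What are you thinking right now?",              # neutral nudge
        "Is there anything on screen that could help?",  # only if truly stuck
    ],
)
```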


4. Collect Data

LinkedIn says:

When conducting usability tests, collecting data systematically is crucial. Use a mix of qualitative data, like user comments and expressions, and quantitative data, such as task completion times. Take notes or record sessions (with consent) for later analysis. This data will give you a well-rounded view of user experience, highlighting both the successes and pain points of your product. Be sure to pay attention to non-verbal cues as well; they can be just as telling as verbal feedback.

Matt says:

Know what you want to record before you start testing.

When it comes to any data collection or analysis, you need to have thought about what it is you want to know before you start. Then make provisions to collect that data.

Always record EVERYTHING. Have participants sign that they are happy for you and your client to review recordings. Always anonymise and aggregate data and remove all personal data ASAP. It's not required. They represent a persona. That's it.

There is so much that people do and say as they are testing a solution. Do not try to note it all down yourself as a facilitator. Digitally record it, use cameras, have a notetaker. The facilitator's job is to ensure the user is comfortable and following the scenarios, that's it.

(LinkedIn restricts replies to 750 characters, and with my limited brainpower I landed on 'that's it' to end a paragraph twice! In case you were wondering why... That's it!)
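In practice, 'know what you want to record before you start' means fixing the columns of your session log before participant one arrives. A minimal sketch of such a log, mixing quantitative and qualitative fields (the column names are illustrative, not a standard):

```python
import csv

# Columns decided before the first session - if it isn't in the plan,
# it won't be captured consistently across participants.
FIELDS = ["participant", "task", "completed", "seconds",
          "prompts_used", "verbatim_quote", "nonverbal_note", "video_timestamp"]

with open("session_log.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow({
        "participant": "P07",            # anonymised code, never a name
        "task": "checkout",
        "completed": True,
        "seconds": 142.0,                # quantitative: task completion time
        "prompts_used": 1,
        "verbatim_quote": "Where's the pay button gone?",
        "nonverbal_note": "leans in, squints at total",
        "video_timestamp": "00:14:32",   # pointer back into the recording
    })
```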


From this point on, you'll only find the answers to these here! Thanks to LinkedIn's own conflicting policies. Perhaps they need some help with their usability...


5. Analyse Feedback

LinkedIn says:

Analyzing the feedback from usability tests requires careful consideration. Look for patterns and recurring issues that could indicate larger problems with your product. Don't dismiss outlier feedback either; sometimes unique insights can lead to significant improvements. Prioritize the issues based on their impact on the user experience and align them with your predefined goals. This analysis will help you determine which areas of your product need refinement.

My response:

Prepare for Reporting

If you've prepared well enough, you'll be able to get through the analysis quickly. You should know exactly what you are looking for, and with good note-taking and the use of good tools, you will be able to feed back the results quickly.

What takes time is the hidden behaviours - not so hidden when you know what to look for: facial expressions, where they look, where the mouse pointer is positioned, the type of language they use. Some clients are less interested in this, and will be happy with your 'observations and recommendations'.

In honesty, I often look to spend twice as long reviewing recordings as they took to record. I also try to do this between tests, so I can make adjustments if needed.
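When it comes to spotting the patterns, I tally issues by distinct participants affected and worst observed severity; frequency times severity is a common triage heuristic rather than a formal method. A sketch, with made-up observations:

```python
from collections import defaultdict

# (participant, issue, severity 1-4) tuples pulled from session notes.
# The severity scale is illustrative: 1 = cosmetic ... 4 = blocker.
observations = [
    ("P01", "missed 'continue' button", 3),
    ("P02", "missed 'continue' button", 3),
    ("P04", "missed 'continue' button", 4),
    ("P03", "jargon in error message", 2),
]

seen_by = defaultdict(set)
severity = defaultdict(int)
for participant, issue, sev in observations:
    seen_by[issue].add(participant)           # rule of three: count distinct people
    severity[issue] = max(severity[issue], sev)

# Rank by (distinct participants affected) x (worst severity observed).
for issue in sorted(seen_by, key=lambda i: len(seen_by[i]) * severity[i], reverse=True):
    n = len(seen_by[issue])
    flag = "PATTERN" if n >= 3 else "watch"
    print(f"[{flag}] {issue}: {n} participants, severity {severity[issue]}")
```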

?. Playback Findings

LinkedIn's AI says:

Nothing. The system's AI missed out a step - 'Playback Findings' - albeit blending it into no. 6: "...It's important to communicate these changes to stakeholders and explain how they're informed by user feedback. This not only validates the usability testing process but also ensures that everyone understands the rationale behind the decisions made."

My NS* says

Stakeholders often don't attend all of the user tests (if any). Do try to make sure they attend at least 3. It is quite painful when a stakeholder attends only one session, and then walks away making decisions purely on what they observed, despite the rest of the tests panning out a different way.

As soon as you can, gather your analysis, prepare a presentation of your findings, and arrange a playback session.

Where possible include video clips and verbatim quotes. Do not rely on stakeholders to interpret the results. Show examples of what's been discovered.

You may have to manage egos and disappointment, so do consider how best to convey the points in a way that your stakeholders can act on them, rather than discredit the user test because it doesn't align with their objectives for it.

*Natural Stupidity

6. Implement Changes

LinkedIn says:

Finally, take action based on the feedback collected. Prioritize changes that will have the most significant impact on user experience and align with your goals. It's important to communicate these changes to stakeholders and explain how they're informed by user feedback. This not only validates the usability testing process but also ensures that everyone understands the rationale behind the decisions made. Implementing these changes is a critical step in enhancing the usability of your product.

The author says:

Implementing changes is a challenge, as it is open to interpretation. You are guaranteed to be encouraged to make the button BIGGER, the font bolder, move it ^^ higher up the screen... Experience will tell you that none of these will work. Changing the text of the button might; or reconsidering why the user is on the screen in the first place, and what they expect to find to help them navigate towards their goal.

The best way to implement changes is to update the prototype and retest. In my experience, user testing is left too late and you only have the one shot. The next time your solution is going to see users is when it goes live. The real test.

Be forewarned: typically, if the fix isn't simple, then a) it's not going to be fixed; b) it's relative to the whole journey - the user is getting lost. This could be alleviated with better signposting (easy-ish fix), or by remapping the journey (back to square one - not so easy-ish to fix).

If you are afforded the chance to sit with the developers prior to presenting the results of the user test, ask them what they anticipate are the quickest options to deliver, and prioritise these. You will also buy yourself some kudos, which should turn into an opportunity to make a major fix with more willing support.


7. Here's what else to consider

LinkedIn closes with:

This is a space to share examples, stories, or insights that don’t fit into any of the previous sections. What else would you like to add?

Matt says:

My parting wisdom (sic) is to 'expect the unexpected'.

An old client used to call users 'screen-lickers'. He couldn't believe 'users' were so 'stupid', questioning whether they chose to lick the screen to perform tasks. It is not (entirely) true that users are somehow subpar in comparison to the stakeholder's own intellect. What is true is that the user is not privy to all the background knowledge gained by the stakeholder. The user is not an insider; they don't share the same internal jargon; they haven't seen the screens or the process beyond the point they're currently looking at. They have no idea what's going to come up next, until it is shown.

I've prepared, run and sat through hundreds of user tests over the years. No matter what Usability Heuristics, WCAG guidelines, or countless experts tell you to expect... you will always be surprised by a 'screen-licker'.


Have you any lessons to share with the group?


About the author

Matt Jones is the managing director of Uservox, an Experience Consultancy, and founder of Joyall, a start-up Marketing Platform using the power of Games.

Matt has over 30 years of experience in marketing, user experience, behaviour, service and product design, working in the retail, finance, insurance, pharma and games sectors. He has worked across the world, supporting FTSE 100 companies and start-ups, and is himself a serial entrepreneur.

Matt believes in inclusivity, embracing diversity, seeking equality and promotes individual empowerment.

