Two main scenarios for UX Research + Analytics
Following up on my earlier post about how UX research remains relevant in the age of A/B testing, I wanted to describe several situations I've encountered where qualitative UX research and analytics worked well hand in hand.
As a disclaimer, I am relatively comfortable in the world of analytics, having had formal training in databases, development, and quantitative analysis, and having practiced usage analytics on and off as part of my UX research practice for the past five years. These posts are meant to describe how data analytics and user experience research are greatly complementary.
Forming hypotheses from analytics data
In my experience, forming hypotheses from large datasets (like product usage data) most commonly happens opportunistically. While looking at the data to inform another question, one might stumble upon an insight and, in turn, conduct further analysis to inspect the phenomenon. You may have sat in meetings where engineers, product managers, and executives review metrics for a product (monthly active users, for instance), only to notice something out of the ordinary in the data. In my experience, that observation then leads to another round of analytics investigation.
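To make this concrete, here is a minimal sketch in Python with pandas of the kind of quick check that surfaces an out-of-the-ordinary reading in a monthly active users series. The numbers, column names, and 10% threshold are purely illustrative assumptions, not taken from any specific product.

```python
import pandas as pd

# Illustrative monthly active user counts; in practice these would come
# from the product's analytics warehouse.
mau = pd.DataFrame(
    {"month": pd.period_range("2015-01", periods=8, freq="M"),
     "active_users": [120_000, 124_000, 123_500, 126_000,
                      127_500, 98_000, 129_000, 131_000]}
)

# Compare each month to the trailing three-month average and flag
# months that deviate from that trend by more than 10%.
mau["trailing_avg"] = (
    mau["active_users"].rolling(window=3, min_periods=3).mean().shift(1)
)
mau["pct_vs_trend"] = (
    (mau["active_users"] - mau["trailing_avg"]) / mau["trailing_avg"]
)
anomalies = mau[mau["pct_vs_trend"].abs() > 0.10]

print(anomalies[["month", "active_users", "pct_vs_trend"]])
```

A dip like the one flagged here answers "what happened" but not "why", which is exactly where the qualitative follow-up described below comes in.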
More often than not, I have seen the investigation generate more questions than it can answer, in particular the question of why something is happening. To answer these questions, it is often desirable, if not outright required, to use qualitative research and talk to users.
I think there are many reasons for this. One of them is that even with the best instrumentation, you cannot know user intent or motivations. Those are essential to enable product makers to know what to work on, iterate on, or throw away.
A second reason is that instrumentation is costly to implement and maintain. As a result, the data point one needs might not be instrumented, or not instrumented in a way that allows analysts to answer the question. In this situation, I often discuss the benefits of adding the correct instrumentation before making big decisions about the product's direction. However, when the decision to be made is small and the usefulness of the extra data is likely to be short-lived, I often advocate talking to users or observing them as a more cost-effective alternative.
In either case, observational studies and/or interviews are likely to greatly help in understanding either how a specific usage metric is indicative of a specific behavior and need, or how to best capture a given user behavior. Those qualitative insights can then be consolidated with additional analytics that contrast and size them.
Contrasting and sizing qualitative insights
Another interaction I commonly encounter between user research and analytics is the need to verify or generalize qualitative insights. I typically seek to uncover an "a-ha" moment, where observing a user leads to a breakthrough in understanding how people interact with a product or technology, or how they approach a situation. This is typically done at a small scale, by talking to or observing a dozen people or so, preferably more, sometimes less. In any case, those participants are unlikely to be fully representative of the overall population of users a solution needs to cater to.
To contextualize how many people in the target population share similar behaviors, I then look at analytics for signals that might be indicative of a given behavior. Not all behaviors are reflected in usage data, and even when they are, the relevant usage may not be instrumented. When it is, though, analytics can be a powerful way to confirm or challenge whether an insight is a fluke or a widespread phenomenon.
For instance, one of my former colleagues, Gerry Chu, once noted that users in his diary study obtained no results for their search query on the product and were then at a loss as to how to pivot to a better query. They often left the site there and then. When the designer and PM looked into the analytics, they found that many people encountered five search results or fewer. As a result, the team conducted a design sprint to provide a better experience for searches with small or empty result sets.
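As a hypothetical sketch of how that sizing step might look, the snippet below counts the share of search queries ending in a small or empty result set. The event log schema and column names are assumptions for illustration, not the team's actual instrumentation, and in practice this would more likely be a SQL query against a data warehouse.

```python
import pandas as pd

# Hypothetical search event log: one row per query, with the number of
# results the product returned for it.
searches = pd.DataFrame(
    {"query_id": range(1, 9),
     "result_count": [0, 42, 3, 0, 17, 5, 210, 1]}
)

# Size the diary-study insight: how often do searches end in a small
# (five or fewer) or completely empty result set?
small = searches["result_count"] <= 5
empty = searches["result_count"] == 0

print(f"Share of searches with five or fewer results: {small.mean():.0%}")
print(f"Share of searches with no results:            {empty.mean():.0%}")
```

If those shares turn out to be large, the qualitative observation is likely a widespread phenomenon worth a design sprint; if they are tiny, it may be a fluke.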
Complementing practices
While the two disciplines of user experience research and data analysis remain largely separate, there are many signs that complementary practices are emerging. Many of my colleagues come out of school trained in log analysis, and some go back to school to add it to their portfolio. I am left to wonder to what extent business intelligence and data analysts are trained to think about complementary practices.
I'd like to hear your stories about when the two disciplines worked well together, or did not work so well together.
Thanks Zachary Sam Zaiss for thoughts and edits on this!