Bridging the gap between data and insight

With the holidays now behind us, I'm reflecting on a digital interaction that many of us may have experienced in the last month. Maybe you were planning to drive a few hours to see family, or you were looking into flight tickets, or you had relatives visiting from out of town, state, or country… and you opened your phone's weather app and checked the forecast.

Should I drive over in the morning or the afternoon? Friday or Saturday? What are the chances it's snowing when I do? How likely is it that Aunt Linda's flight gets delayed due to weather? When will be a good time to take my parents on a walking tour of the neighborhood?

[Image: A horse-drawn sleigh travels through the snow]
Over the river and through the woods, to Grandmother's house we go

Whatever questions were floating in your head, the weather app had an easy answer ready. Meanwhile, though, on the other side of that interaction is a meteorologist busy predicting the weather trends for the next several weeks in your region. She's looking at large swathes of data collected from satellites, radar, surface maps, and more, comparing it to historical patterns, and collating and condensing all of it into simple answers to be displayed to users like you and me on our weather apps.

All of that meteorological data she's working with is valuable – but in a practical sense, that immense value only manifests once the meteorologist has analyzed and interpreted it into an accessible weather report.

If we instead opened our weather apps to find satellite readouts, radar maps, wind speed tables, and decades-spanning historical climatic datasets, 99% of us would be no better prepared to plan that long drive to Grandma's than we were before.


So what's the point?

You might be thinking by now that this is an article about the value of experts. But even though I agree unequivocally about experts' enormous value, that's not where I'm going with this.

My point is about data itself. We are in an age now where the amount of data we have access to is staggering. There is so much data around us today that we're practically drowning in it.

[Image: Illustration from The Rime of the Ancient Mariner]
Water, water everywhere and nary a drop to drink...

You might think that the world runs on data. But that's not exactly the case. The world runs on the actionable insights gleaned from data.

Without doing the work to truly understand what our data is telling us, it's all but useless.


Finding diamonds in the rough

I have no doubt that everyone reading this has at least a few data collection tools that they regularly use in their work. In the User Experience space, we rely on many different kinds of data. You probably also know that you have to analyze that data to find the proverbial diamonds in the rough – I know that it's not a groundbreaking assertion.

[Image: A diamond in the rough]
Diamonds are a decision-maker's best friend

What I want to do is talk more frankly about the problems with the data we collect in our space. In my view, there are 3 big ones:

  1. Quantity: The amount of data collected far outstrips the time we have to analyze it – and only some of it even contains the answers we’re looking for.
  2. Bias: The very question of which datasets to collect is subject to our biases about what's important, what we expect to find, and what problems we do or don't already know about.
  3. Siloing: With different datasets available in different tools used by different teams, we're often missing out on the complete picture.


The problem of quantity is an obvious one, and one you've probably faced yourself. Deciding where to dedicate your time as you perform your analysis can make the difference between finding or bypassing that diamond in the rough.

[Image: Still frame from Disney's The Lion King]
♪ There's more to see than can ever be seen... ♪

The question of bias in the datasets we collect is often overlooked – but it's an important one.

Imagine this: you're preparing a usability test to get feedback on your website's user journey. You write a series of tasks leading the participants from A, to B, to C. You run the test and hear all sorts of opinions about what could be improved. But the A-B-C flow you tested was based on the ideal-case user journey that your web design intends for users to follow, when in reality the most common pathway starts at C, and then goes A-B-A-D-Z.

The fact is, it's not easy to know what steps actually constitute "our user journey." It's not easy to know where the root of a problem actually lies. Not to mention the obvious difficulty in collecting data for a problem you don't know exists.
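
As a quick illustration of that gap, here is a minimal sketch (in Python, using a made-up data shape rather than any particular analytics export) of how you might reconstruct the pathways users actually take from raw page-view records:

```python
from collections import Counter

def top_pathways(pageviews, n=5):
    """Reconstruct each session's navigation path from raw page-view
    records and return the n most common paths.

    `pageviews` is assumed to be an iterable of
    (session_id, timestamp, page) tuples -- a simplification of what a
    real analytics export would contain.
    """
    sessions = {}
    for session_id, timestamp, page in pageviews:
        sessions.setdefault(session_id, []).append((timestamp, page))

    path_counts = Counter()
    for visits in sessions.values():
        # Order each session's page views by time, then join into a path string
        ordered = [page for _, page in sorted(visits)]
        path_counts[" > ".join(ordered)] += 1

    return path_counts.most_common(n)

# Toy example: the path most users actually take isn't the A-B-C flow we scripted
events = [
    ("s1", 1, "C"), ("s1", 2, "A"), ("s1", 3, "B"),
    ("s2", 1, "C"), ("s2", 2, "A"), ("s2", 3, "B"),
    ("s3", 1, "A"), ("s3", 2, "B"), ("s3", 3, "C"),
]
print(top_pathways(events))  # [('C > A > B', 2), ('A > B > C', 1)]
```

Even a rough count like this can tell you whether the flow you're about to script for a usability test is the flow people actually follow.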

[Image: Grain silos]
Doesn't the silo on the left look kind of like Google Analytics?

This brings us to the problem of siloing – because sometimes, the answers we need are actually available somewhere – just not to us. Maybe the marketing team has access to data about top entry pages and user pathways that would have helped with that test script. Maybe a PM from a different team has a tool that flags rage clicks and errors, but the issues they're prioritizing don't overlap with yours, and so no one notices issues that are afflicting your product.

How many designers would benefit from some of the traffic / behavioral data on Google Analytics, but don't have access to it? How many have access to it, but just don't have the advanced Google Analytics knowledge needed to drill down and find the relevant information? (Let's face it, it's not an easy platform to figure out.)


All 3 of these problems lead back to our overarching problem: How do we turn data into insight? Overabundance of data fuels inefficiency in finding insights. Bias in the collection process skews the reliability of our insights. And siloed data leaves us to fill in the blanks in our insights with assumptions.


Better integration, better insights

What I'm hoping to do in this article, really, is to (1) identify in concise terms a trend that is already beginning to take form in our space; and (2) reflect on the best ways forward.

The trend I'm speaking of is the drive for data integration. One of the clearest reflections of this trend, in my view, is the rise of research repositories as an important tool for UX, product, and marketing teams. (You may have noticed that several of the leading usability testing providers, ourselves included, have recently built or acquired repository tools. The same is happening in the nearby market research space.)

Repositories are a powerful instrument for helping to solve our data problem. But they're not the whole solution. With a repository, your data is only united at the end of the collection process. To truly improve our relationship with data, the integration must begin much earlier.

We're already starting to see success with the combined digital experience insights approach we've recently undertaken here at Trymata.

For those who aren't in the loop, in October of last year we began offering a Product Analytics suite to complement our Usability Testing tools. With web analytics data comparable to Google Analytics and behavioral data on par with Hotjar or Fullstory, it's allowed us (and our customers) to start experimenting with an iterative research process where all of these datasets inform and enhance each other.

[Image: Two trees whose limbs have grown together. Source: https://twitter.com/FloraBaker/status/933754609390292999/photo/1]
Maybe we should coin the term "data inosculation"?

1. Solving the problem of bias

By starting with web analytics data, we learn what's happening in key user flows at a macro level: we can objectively evaluate what those flows actually look like and how they change over time. We can measure performance and set benchmarks for our existing goals using a variety of real usage-based KPIs, and observe improvements over time as we fix known issues.

Importantly, we can also learn about patterns we didn't know about. These may be common pathways we didn't know users were taking, or clusters of frustration indicators like rage clicks or dead clicks. It may be exit rates or bounce rates that are higher than we'd like. It could be device or language trends among groups of users. Any of these could be a hint for us to dive deeper into the relevant behavioral data, or to run a usability test to learn more.
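
To give a flavor of how one of those frustration signals can be surfaced, here is a small, hypothetical rage-click heuristic. The event shape, function name, and thresholds are my own illustrative assumptions, not Trymata's actual detection logic:

```python
from datetime import timedelta
from itertools import groupby

def find_rage_clicks(clicks, min_clicks=4, window_seconds=2):
    """Flag bursts of rapid, repeated clicks on the same element.

    `clicks` is assumed to be a time-ordered list of dicts like
    {"ts": datetime, "element": "#checkout-button"} from a behavioral
    event feed. Thresholds are illustrative guesses, not product defaults.
    """
    window = timedelta(seconds=window_seconds)
    flagged = []
    # Group consecutive clicks that hit the same element
    for element, group in groupby(clicks, key=lambda c: c["element"]):
        run = list(group)
        start = 0
        for end in range(len(run)):
            # Slide a time window over the run and count the clicks inside it
            while run[end]["ts"] - run[start]["ts"] > window:
                start += 1
            if end - start + 1 >= min_clicks:
                flagged.append((element, run[start]["ts"]))
                break  # report each burst of clicks only once
    return flagged
```

The point of a heuristic like this isn't to render a verdict on its own; it only surfaces candidate moments worth reviewing more closely in the behavioral data.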

Starting from the analytics frees us from allocating research efforts based only on our roadmap priorities, pet projects/hunches, and known UX issues. We can open our eyes to research opportunities both known and unknown, and weigh all of them to strategize a more unbiased set of research goals and testing plans.

2. Solving the problem of quantity

Once a problem is identified within the traffic data (or picked from the list of known issues), we can take that same data and start viewing it through layers of filters and segments that eliminate as much of the noise as possible, leaving us with just the most relevant user and session data.
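
As a rough sketch of what that layering of filters and segments can look like in practice (using pandas and invented file and column names, not any specific product's data model):

```python
import pandas as pd

# Hypothetical session-level export: one row per session, with columns for
# entry page, device type, and whether the user reached checkout
sessions = pd.read_csv("sessions.csv", parse_dates=["started_at"])

# Layer filters/segments to strip out the noise and keep only the sessions
# relevant to the question under investigation
relevant = sessions[
    (sessions["entry_page"] == "/pricing")   # the flow we're studying
    & (sessions["device"] == "mobile")       # where the anomaly showed up
    & (~sessions["reached_checkout"])        # the drop-off we care about
]

print(f"{len(relevant)} of {len(sessions)} sessions remain after segmenting")
```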

Once we've reached a scale of data that's manageable – and on-topic – it becomes much more feasible to start working with behavioral data: for example, examining individual users' event logs and session replay videos. This kind of analysis is undoubtedly "nitty-gritty," but it's also what enables us to analyze and understand issues from a human perspective.

This is the point where we start reaching a deeper level of insight into our users, and what we can do to improve their experience (and our performance); but it only happens after we're able to effectively eliminate or ignore huge amounts of the data that's been collected, in favor of the few handfuls most likely to answer our research question.

3. Solving the problem of siloing

At this point, of course, the other thing we can do is to run small-scale, targeted usability tests on the flow we're investigating.

We've identified the issue(s) we want to explore and explain, and we've learned about which users are relevant to the issue and how they're acting. Depending on what's come out of the behavioral data, though, we may or may not yet understand why they're behaving the way they are.

This is where we jump to the usability testing side of the platform, to set up a scenario and tasks that will replicate the twists and turns of the user journey we've observed already, and get really solid qualitative feedback about what's going on in people's heads throughout that experience.

That's not where the idea of de-siloing ends, though. Once the user tests have been run, the data analyzed, the necessary design changes proposed and executed, we can go right back to our web analytics data and start watching what happens.

If we optimized our designs to decrease abandonment, do we actually observe a decrease post-release? If we were trying to optimize an inefficient navigational pattern, do we actually observe users following the intended pathway at higher rates?
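
A minimal way to sanity-check that kind of before/after question, again with hypothetical column names and a made-up release date:

```python
import pandas as pd

# Hypothetical export of checkout-flow sessions, one row per session
sessions = pd.read_csv("checkout_sessions.csv", parse_dates=["started_at"])
RELEASE_DATE = pd.Timestamp("2024-02-01")  # made-up release date

before = sessions[sessions["started_at"] < RELEASE_DATE]
after = sessions[sessions["started_at"] >= RELEASE_DATE]

def abandonment_rate(df):
    # Share of sessions that entered the flow but never completed it
    return 1 - df["completed_checkout"].mean()

print(f"Abandonment before release: {abandonment_rate(before):.1%}")
print(f"Abandonment after release:  {abandonment_rate(after):.1%}")
```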

By integrating these qualitative and quantitative sides of the equation together, we can create a constant iterative loop between observation and investigation that's accessible to everyone involved.


TL;DR...

Altogether, a workflow like this helps to eliminate the 3 major problems of data and insights in our industry in multiple, layered ways.

  • By starting with all of the web data, you can reduce the bias of where you're looking, and what you're running user tests on.
  • Segmenting your product analytics data and collecting small-batch usability testing feedback allows you to focus on a manageable quantity of data to solve a specific problem.
  • Doing it all together in a platform designed for product and marketing people to use without a high level of expertise makes it easy to get a holistic understanding without leaving out a key piece of the puzzle.

While our vision of a seamless digital experience research workflow is still in its early stages, we are continuing to develop our platform to support an integrated workflow that's easy to execute and helps you turn your data into insights more successfully.


None of the ideas I've discussed here are brand-new, but I do hope they've been presented in a way that's useful and constructive to our relationship with the data we collect in UX/DX/marketing. Do you agree about the 3 major problems I identified? I'd love to hear any other takes, or whether there are any major problems missing from my diagnosis!



About the author: Hi, I'm the CEO and Co-founder of Trymata (formerly TryMyUI). We provide tools for design, product, and marketing teams to learn more about their users, with powerful suites for user testing and product analytics. I'm interested in all things #customerresearch, #digitaloptimization, #UX, science, and philosophy!

Mahesch Marapalli

Lead Android Engineer

2y

Great analysis Ritvij

Timothy Rotolo

Co-Founder & CGO at Trymata

2y

Love this, Rit! The bias problem is such a big one for me, it's something I'm always trying to be aware of in testing. (I'm also so glad I no longer have to rely on Google Analytics for all my site data...)
