8 things I’ve learned building a UX knowledge repository

It’s been 4 years since I created a process called Atomic UX Research, which aimed to make it possible to have a self-managing, scalable, and networked system of knowledge.

Here are a few notable things I’ve learned in that time:

#1 - Context is everything

Context is 99% of a research repository.

In fact, the first challenge we were trying to overcome when we created the Atomic process was extracting insights from reports without losing their context.

In a report, you have all the context you need: the reason for the study, the experiment design, results, discussion, conclusions, and so on. Unfortunately, reports tend to suffer from tunnel vision. You will see reports where the data suggests something that doesn't make sense if you look at the wider picture. The context is so focused it becomes exclusionary.

It is often the case that people find it hard to zoom out.

Most relevant to what I was trying to solve: the insights are trapped in reports and hard to extract, especially those shiny nuggets that weren't necessarily what you were studying.

A little anecdote on that subject:

After creating the Atomic research process, I found a handful of companies — also struggling with controlling their knowledge — who were willing to test the process and help me refine it.

One of these companies — a giant fashion brand in the UK — started going through their older research and connecting up the insights. Doing so, they realised that there was a common issue that was being brought up by customers again and again.

Because of the Atomic process, we were able to spot this and get it sorted. It turned out to be an issue that would have ruined their biggest trading period, when they made 70% of their revenue. They told me it would probably have folded the company. Atomic saved a company's life on its first outing!

In summary, reports suffer by trapping insights and obfuscating them.

The traditional insight repositories that we tested as an alternative to reports had the opposite problem: they are fundamentally coding tools designed for smaller projects, so they really struggle in companies with lots of projects or teams, and even more so when multiple products or brands all want to share data with each other.


For illustration, let's take a group like Virgin, which has multiple brands such as Virgin Holidays and Virgin Media. If they had a tag for 'connection', it could mean a plane or train connection as well as a telephone or internet connection. And if they started a dating site (Virgin Dating, say), 'connection' could mean the connection between two people.

These are extreme examples, but I hope you can see how this lack of context makes it really hard for even a small company to use most repositories.

I’ll discuss a little more why Atomic Research doesn’t suffer as much with shared tags.


Another example of context is how we show connections between facts, insights, and recommendations.

The first version of Gleanly was basically a digital version of what we were doing on whiteboards. However, the lines became a tangle of spaghetti for any large project. Frankly, it was just not fit for purpose and we ended up chucking 80% of the UI in the bin.

Instead, we moved to a 'context-switching' UI that shows the cards from the point of view of the selected column.

Even the search filters change context as we go through each column.

#2 - Quantity is more important than quality

I often tell clients that it is better to get poorly coded research into your repository than risk losing it. After all, if it's not there at all, it will never be found.

For traditional insight repositories that rely heavily on tagging and coding, this is a problem: if something isn't tagged, it doesn't exist as far as the system is concerned.

However, when we connect a fact to an insight or an insight to a recommendation we are creating a relationship between the two items. You could say we’re coding via stealth.

Connecting cards can be more useful than just tagging because:

There is more context

When I select a customer's quote and add a tag, I’m categorizing that quote and making it discoverable, but I’m not given the opportunity to explain why I believe there to be a relationship.


In the example above, the researcher has been pretty good at coding their insights and recommendations, but the facts have not been tagged at all. To make it worse, they all use different wording.

Ideally, they would have used the shared term our organisation decided to use for this subject (in this case 'Fit'). Perhaps they were in a rush, hadn't been properly onboarded, or were just feeling lazy. And really, it doesn't matter: if I find any of the 5 cards shown above, or any of the (at least 13) others connected in the other experiment, I will find all of these assets.

We can connect negatively

In the same way that we can connect evidence that supports our insight, we can connect evidence that contradicts it, or at least provides a wider understanding. This is important for creating a holistic view of what we know and making better decisions.

You simply can’t tag something negatively.
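To make this idea of 'coding via stealth' concrete, here is a minimal sketch of connection-based retrieval. The card names, fields, and structure are my own illustration of the concept, not Gleanly's actual data model: connections carry a relation type ('supports' or 'contradicts') plus a rationale explaining why we believe the relationship exists, and finding any one card lets us walk the connections to every related asset, with no shared tag vocabulary needed.

```python
from collections import defaultdict

# Illustrative sketch only: hypothetical card IDs and fields,
# not Gleanly's real implementation.

class Repository:
    def __init__(self):
        self.cards = {}                 # id -> (kind, text)
        self.edges = defaultdict(list)  # id -> [(other_id, relation, rationale)]

    def add_card(self, card_id, kind, text):
        self.cards[card_id] = (kind, text)

    def connect(self, a, b, relation, rationale):
        """Link two cards. 'relation' can be 'supports' or 'contradicts';
        'rationale' records WHY we believe the relationship exists."""
        self.edges[a].append((b, relation, rationale))
        self.edges[b].append((a, relation, rationale))

    def connected_assets(self, start):
        """From any one card, walk the connections to find every
        related asset. No shared tag vocabulary is required."""
        seen, stack = set(), [start]
        while stack:
            card = stack.pop()
            if card in seen:
                continue
            seen.add(card)
            stack.extend(other for other, _, _ in self.edges[card])
        return seen

repo = Repository()
repo.add_card("F1", "fact", "8/10 participants said the jacket 'ran small'")
repo.add_card("F2", "fact", "Returns data shows 'fit' is the top return reason")
repo.add_card("I1", "insight", "Our sizing does not match customer expectations")
repo.add_card("R1", "recommendation", "Add a fit guide to product pages")
repo.connect("F1", "I1", "supports", "Direct user feedback on sizing")
repo.connect("F2", "I1", "supports", "Behavioural data agrees with interviews")
repo.connect("I1", "R1", "supports", "A fit guide addresses the expectation gap")

# Finding ANY one of these cards surfaces all of them:
print(sorted(repo.connected_assets("F2")))  # ['F1', 'F2', 'I1', 'R1']
```

Note how a contradicting piece of evidence would simply be another `connect(...)` call with relation `'contradicts'`, something a flat tag can never express.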

It’s more human than tags

Most people's experience of tags comes from social media, where they're rarely used for classification purposes. Connecting a line between two items, however, feels natural and obvious.

As it becomes more common for non-researchers to be doing research, a system that doesn't rely on good coding is crucial. There is less pressure to memorise and use a taxonomy, let alone know what a taxonomy is.

#3 - Terminology is not as important as you might think

It seems to me that when UX people get together, they spend a lot of time discussing terminology. The most popular debate is what the term 'UX' even means!

So, of course, I’ve spent the last 4 years discussing the terminology of Atomic UX Research in detail as well.

We even ended up changing one term officially: 'recommendations' used to be called 'conclusions', but many felt that sounded too final. I agreed (research is never complete), so we changed it.

In general, I wanted the terminology to sound quite scientific. Other than 'conclusions', the term I'm least satisfied with is 'experiments'. I like it because it makes clear that the data is always experimental and ready to be disproved, but I often worry it's too formal a term. Gleanly has a tool that lets you change the terminology of anything, and we find that 'experiments' is the term changed most often. A few common choices are 'studies' or 'activities', both of which I really like.

The point is: don't get too caught up in terminology. If what works for your organisation is different then that is what you should use.

I've even seen people adding extra layers into their process. Though I've never seen an addition I think is actually all that useful, I fundamentally don't have a problem with this either: every organisation is different, and Atomic can and should be moulded to work for your situation.

#4 - A lot of ‘insights’ are actually ‘facts’

Atomic provides quite a specific definition of an insight, and it's one that I don't think is that common…



We say an insight is our opinion on the data — the missing link between what the research has shown and what we will do next.

Often I see insights that are facts.

A good example I often use is from a client of ours, a French fashion house: They had survey data showing a strong colour preference for their customers. Let’s say it had shown that the majority of their customers preferred the colour green. They were referring to this finding as an ‘insight’ but I disagreed:

If we trust our data (and if we’re using it, we should), it is not our opinion, therefore this is a FACT!

So what is the insight? We might have some hypotheses right off the bat: our logo is green, so that might be a factor.


But we can look for more data from other sources to give us a better understanding. Do we see this preference reflected in sales data? What is it about our brand that attracts green lovers? Where we see gaps in knowledge we can look to run experiments to fill them.

As we build and connect these different nodes of knowledge we build a picture that not only helps us understand the cause but also the effect: What does it mean to us that so many of our customers prefer the colour green?

Should we look to promote our brand to a more diverse range of colour preferences or should we lean into it and start making more green clothes?

What that client considered an insight was actually a fact, and just the beginning of an exciting journey of discovery.

#5 - Atomic UX Research reduces the need for reporting… but doesn’t remove it entirely

Because atomic particles are connected, they make extremely shareable assets for stakeholders.

When we started building Gleanly, I listed stakeholders disliking the atomic format, or finding it overwhelming, as the largest risk factor. I couldn't have been more wrong.

PMs and POs particularly love having such good access to the data. In curated reports, the research findings are often in one section, discussion in another, and recommendations at the end, so it can be hard to understand the connection between what is being suggested and what evidence supports it.

As we discussed in the first point, reports are often missing context and are blind to other research outside of that particular experiment. Whereas with the Atomic UX method, that evidence is right there, with literal lines drawn to connect it. This allows decision-makers to dive into the data to the depth they need to be confident enough to make the decision.

This is great for the decision-maker, but it is great for the researcher as well: we don't have to spend days creating custom reports for small decisions.

The feedback we’ve got shows that most in-house teams reduce their reporting by around 60–70%. I've spoken to a few customers that have done away with external reports entirely and just use atomic assets, but this is very rare.

There are certainly still times when a report is necessary and important. Atomic particles can be intricately connected, but they are focused on research learnings and can be missing the wider context (there's that word again!) of what is happening in the business*. A report can allow more depth and discussion around a subject.

* Technically there is no reason these can’t be recorded in the atomic format, but it’s certainly not common practice.

One of the biggest pain points around creating reports, other than the time they take, is that they very quickly go out of date. Taking information out of a living repository makes it dead, and we are left with assets that can be out of step and, at worst, wildly misleading.

We are working on a solution that we believe solves this problem: it gives space to the wider discussion and allows evidence to be frozen at a point in time (which is also important for understanding why decisions were made), while staying connected directly to the central source of truth and therefore always up to date. We hope to deliver this in the next couple of months.

#8 - A research repository is not just for UX

A big surprise for me was that Atomic UX Research isn't just for UX research. It is for all knowledge.

In fact, one of our very first customers mentioned on a catch-up call that their business analyst team had started using it, and that they were onboarding a few other teams, such as marketing. This was a bit worrying for me, as I wasn't sure how UX research and other research, such as business strategy and marketing data, would work together in the same environment.

I feel silly now, because obviously it works: UX tends to touch most parts of a company, at least as far as the product or service goes, so why wouldn't it be possible to mix in knowledge from the whole company too?

We've had several organisations that aren't related to UX at all use Gleanly and find value in being able to combine and connect varying sources of knowledge. A few investment firms, for example.


By far the strangest use of Atomic is a criminal investigations bureau studying murders. They needed a way to combine lots of very different evidence to build a picture, from physical crime-scene evidence to verbal interview statements. Increasingly, they also have a great deal of digital evidence from smart devices to help them work out whodunnit.

Definitely not an area where I was expecting this work to be useful!

Knowledge affects all aspects of an organisation

One of the most common goals I see with clients is a desire to de-silo: to break down the walls between departments. So we very often get UX, marketing, brand, sales, customer support, and of course senior management all collaborating and sharing their knowledge.

If we want everyone in an organisation to pull in the same direction, they must work together. And though it was a bit of a surprise to find that UX isn't as unique as I might have thought, it's wonderful to see how a repository can be the focal point for an organisation to gather around and become aligned.



I think that’s enough for now. There are a few others that have come to mind whilst writing this, but I’ll save them for another time.
