I thought AI was coming for my job… but now I know I’m safe. For now.
Our Head of Design Research, Dr. Alexa Haynes, on whether AI-assisted research tools are a help or a hindrance
Back in 2023, with headlines flying around claiming AI was about to replace 60% of jobs, I was worried. And with good reason – near the top of the list, in black and white, was ‘Researcher’.
I was very happy in my role. Were all my years of hard work about to be undone by some code that isn’t much more than autocomplete on steroids? I had to find out.
Over the last few months, I’ve spent time investigating and testing the features of the new breed of research tools that are enhanced with AI.
They all claim to make analysis faster, better and more accurate – and they all claim they’ll make my job easier. But do the claims stand up?
I came at the task from a place of anxiety and, to some extent, scepticism. I had all sorts of questions whirling around my head:
…and, most importantly, will I be replaced?
What I found surprised me. There’s actually a lot of variation in how AI is being applied within research tools and the resulting quality of their output. At best, they did speed up some processes, and I can see how they’d fit into the researcher’s toolkit. At worst, though, some of the tools provided misleading summaries and outputs – which is not just unacceptable, but could result in serious reputational impact.
In order to assess the products on a level playing field, I decided to create a mock data set. I enlisted some of my fellow Nilers to create a series of research interview videos, based loosely on a challenge being faced by a real client.
My investigation focused on tools that would assist me with the qualitative workflow. I looked at where these tools could help me right across the research process – from planning to reporting. My biggest interest was at the analysis phase, as that’s where everyone says it will have the biggest impact.
Where I was pleasantly surprised
The best tools offered up informative summaries of each individual session and then of all the sessions as a whole. I cross-checked their summaries against my own and was happy with the accuracy. They also offered a handful of other genuinely useful features.
Where I was disappointed
However, it wasn’t all rosy. Some of the tools I evaluated had significant deficiencies.
One offered up summaries that attributed one type of behaviour to participants when in fact it was another, which would have changed how we interpreted their motivations. For example, the AI suggested that our participants preferred to use debit cards online and Apple Wallet in store; however, in our mock data set, while three people did mention using debit cards online, they all said they preferred the convenience of Apple or Google Pay in most situations, whether online or offline.
In addition, some of the worse-performing tools fell short in a number of other ways.
Stand-out AI-assisted tools
I won’t name and shame, but I will name the tools that I found did a great job. From my evaluation, the stand-out tool is one called Marvin. This isn’t an endorsement – in fact, I’m still playing around with it – but having now tried its survey analysis capabilities too, so far it’s looking good. My open-ended analysis has been streamlined significantly, with clear citations and evidence provided.
There are also some great tools available for usability and evaluative-type research, such as Maze, which can be used for AI-assisted capture and analysis of behavioural data.
There have also been some big steps forward in AI-supported survey tools, usually with AI embedded to help suggest questions and to flag biased, closed or leading ones. Some of the more advanced survey packages even support live, dynamic follow-up questions based on respondents’ earlier answers. Among these, I’m excited about Listen Labs, which I’m exploring to achieve a more conversational, almost qualitative experience with surveys.
Finally, an area where AI is purported to help researchers is sentiment analysis; however, AI doesn’t always pick up on the nuances of the English language. Through manual spot-checking, I found multiple inaccuracies across several different sentiment analysis tools – these still have some way to go.
Where do we go from here?
I’m a champion of ethical design research, and the way I approach my general practice also applies to how I use AI in research. While I was experimenting with these new AI tools, a few guiding principles jumped out at me.
Reflections on AI for researchers
I am just at the beginning of my exploration of AI and what it can do for researchers, but I’m cautiously optimistic, with a dash of scepticism to keep me grounded.
For me, AI is currently best placed as a thinking partner: something you can test your first-pass ideas on, use as a jumping-off point, or ask to cast a critical eye over your thinking against a set of known rules. It’s also helpful for taking care of some of the more mundane tasks, but it struggles to generate meaningful insights (if any insight at all).
AI has a place in analysis, but it will not do it for you – yet. The pace of change is, of course, relentless, with some predicting that a future GPT-5 might work at the level of a postgraduate or even a PhD researcher.
Right now, even the makers of the apps themselves will tell you not to rely solely on AI for research. This is a time for cautious exploration – but one always backed by an intimate knowledge of your data set and by double-checking the outputs. The human factor is – for the time being – essential.
So, for now, I’m reassured that this very human Alexa is here to stay.
For now.
To get insights from the Nile team a week earlier than they appear here, subscribe to The Navigator on Substack.