AI-led user interviews enable researchers to capture more insights
We’ve built an AI-led user interviewing and analysis tool that approaches human-level ability both at interviewing research participants to extract useful information and at analyzing the resulting transcripts for insights.
Wondering is an AI-led user research platform purpose-built to help companies build better products by scaling their user research. We’ve developed a new user research methodology, AI-led user interviewing, in which an AI system moderates text- and voice-based user interviews with participants and analyzes the resulting transcripts for insights.
In our internal benchmarking study, we compare expert-level user researchers’ ability to extract insights from interviews with research participants recruited from a panel against our AI-led user interviewing methodology. We find that, in this small-scale study, AI-led user interviewing can capture and identify many (56%) of the same insights as an expert-level user researcher when interviewing users about a recent experience (grocery shopping). Moreover, we find that it can also capture and identify additional insights (29% of all insights identified) that the user researcher misses in their analysis, and that it can complete and analyze user interviews faster than a user researcher, and at a fraction of the cost.
In our pilot study, participants recruited from a research panel were interviewed about their most recent grocery shopping experience. Each participant was interviewed twice, once by an expert-level user researcher and once through an AI-led user interview. Half of the participants were first interviewed by the user researcher, and the other half were first interviewed through the AI-led user interview.
After completing the user interviews, the user researcher analyzed the 8 user interviews they had conducted for insights into how each participant’s most recent grocery shopping experience could have been improved, creating a list of insights (Researcher Insights). The AI-led user interviews were similarly analyzed automatically through Wondering’s AI analysis feature, creating a separate list of insights (AI Insights). Neither the user researcher nor Wondering’s AI analysis feature was exposed to the contents of the user interviews conducted by the other.
Finally, a second user researcher, who hadn’t been exposed to the contents of any of the user interviews, evaluated the Researcher Insights and the AI Insights to determine how many insights appeared in each and how many overlapped.
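To make the counterbalanced design above concrete, here is a minimal Python sketch of how eight participants could be randomly assigned so that half complete the researcher-led interview first and half the AI-led interview first. This is purely illustrative and is not part of Wondering’s platform; the names and structure are our own assumptions.

```python
import random

# Hypothetical sketch of the counterbalanced design described above: every
# participant completes both interview types, and the order is randomized so
# that half start with the researcher-led interview and half with the AI-led one.
participants = [f"P{i}" for i in range(1, 9)]  # the 8 panel participants
random.shuffle(participants)

half = len(participants) // 2
schedule = {p: ("researcher-led", "AI-led") for p in participants[:half]}
schedule.update({p: ("AI-led", "researcher-led") for p in participants[half:]})

for participant in sorted(schedule):
    first, second = schedule[participant]
    print(f"{participant}: {first} interview first, then {second}")
```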
Across the Researcher Insights and the AI Insights, 65 distinct insights were identified. The Researcher Insights contained 46 insights, and the AI Insights contained 45 insights.
26 of the insights (40% of all insights) were found in both the Researcher Insights and the AI Insights, meaning the AI Insights contained 56% of the insights captured in the Researcher Insights. An additional 19 of the insights (29% of all insights) were found in the AI Insights but not in the Researcher Insights. 20 of the insights (31% of all insights) were found in the Researcher Insights but not the AI Insights.
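For readers who want to check the arithmetic behind these percentages, here is a minimal Python sketch that reproduces the figures above from the raw counts. The variable names are ours for illustration only and are not part of Wondering’s tooling.

```python
# Sanity-check of the overlap figures reported above, using the counts from this study.
researcher_total = 46  # insights in the Researcher Insights list
ai_total = 45          # insights in the AI Insights list
shared = 26            # insights that appear in both lists

distinct = researcher_total + ai_total - shared  # 65 distinct insights overall
ai_only = ai_total - shared                      # 19 insights only the AI surfaced
researcher_only = researcher_total - shared      # 20 insights only the researcher surfaced

print(f"Distinct insights: {distinct}")
print(f"Shared: {shared} ({shared / distinct:.1%} of all insights)")            # 40.0%
print(f"AI coverage of Researcher Insights: {shared / researcher_total:.1%}")   # 56.5%, reported as 56%
print(f"AI-only: {ai_only} ({ai_only / distinct:.1%} of all insights)")         # 29.2%, reported as 29%
print(f"Researcher-only: {researcher_only} ({researcher_only / distinct:.1%})") # 30.8%, reported as 31%
```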
So, how does this impact user research more broadly? Software engineering teams spend roughly half of their time reworking avoidable product mistakes that fail to meet user acceptance. Yet most companies don’t conduct any user research at all, and even fewer do it well. AI-led user research lowers the barrier for teams to start incorporating user insights and testing into the product development process. At the same time, it enables expert-level user research teams to accomplish more in less time, lowering the cost and risk of product development and increasing the chances you’ll build a product customers love.
As you adopt AI-led research methods in your own research, it's worth keeping current limitations in mind. In this study we compared how AI-led user interviewing performs against a user researcher on the specific tasks of interviewing users on a predefined topic and analyzing each interview for insights. There are, of course, many other tasks involved in the user research process, including gathering data through other methodologies, other forms of analysis, deciding what to research and how to structure research projects, storytelling, communicating and negotiating with stakeholders, and more. When adopting AI-led research methods in your user research and discovery, it’s important to remember that AI-led user interviewing is a methodology that can extend user research and product teams, helping them scale their research and have more impact, but it is not a replacement for them.
We hope these findings inspire researchers and product development teams to integrate AI-led research methods into their ongoing research efforts to drive even more impact, regain time, and scale their research and product discovery programmes. Check out Wondering for free to try AI-led user interviews in your own user research.