Building Your Company's Qualitative Research Capability in 8,670 hours.
TL;DR: Tough. Anyone who tells you building a qualitative research capability is going to show you how to do it in a "quick and easy" way with "four easy steps" is full of it, and is trying to be the naked man selling you the shirt off his back. Sometimes, you need to just tuck in and eat a big meal...so relax and enjoy the build, in a matter of courses.
* * *
I want to write something that companies big and small can pick up, ten or twenty years from now, and say, "This is tried and true, let's build this." Maybe I should write a book. Let's start with an article and then answer questions.
For me, this article starts after being laid off by my company at the beginning of October. I'd just spent the better part of a year with a genius of a strategic thinker named Cody Fleischfresser, and his boss Sandy Long, building the only dedicated qualitative research capability in the whole of a company of 300K employees with revenue somewhere around 226.2 Billion (with a B) dollars. The mission? Build a standalone capability and process, with assessments and archives, that could be nimble and cost effective, and that hosted a robust library of written and video archives of all research synthesis across the entire enterprise - particularly as it pertains to qualitative research - and through that, identify gaps in research and carry out the qualitative work to fulfill them. And through doing all of that, fundamentally change an entire company culture so that qualitative research, in combination with quantitative research, drives design to center back on the users themselves. Especially in a massive enterprise like my former employer.
Well, we built it. And it works. The assessments, the process, the library, the synthesized reporting, all of it. It's the culture change that is the hardest part...it's like trying to change the direction of a glacier by swimming against it. And so, rather than market this as mine and mine alone and sell myself as the only one on earth who can build one of these capabilities (how wrong and narcissistic would that be?), I'm putting it out there for free - if you're running a company and thinking about building a capability from scratch where once there was none - this is how you do it! (or at least one version of how) It is going to be different for everyone, but at the very least, I can provide the structural foundation from which you can bolt on your own components. I believe that the needs of the user in the long run outrank my ego and need to have a job (which I do, so hire me, dammit).
I should also mention - there are some amazing vendors far better equipped in recruiting and in toys that will happily do all of the qualitative work for you. I have great relationships with many of them. And I would be remiss if I didn't mention that they can do an amazing job for you. A lot of my contacts and mentors who work for these vendors are friends, and they are just as much masters of their craft as I am. But they carry the caveat of price, and not every small business can scrape together the coin to fund a robust study. If you're the owner of a huge enterprise company and don't mind paying 5-6 figures for one study that takes one to two months (or more) to get you the answers you seek from users, then I completely understand. I just don't believe that one user research study should cost more than I make in an entire year to deliver one answer.
Start With "Why?"
It dawned on me as I was looking for new roles in User Research that there are so few, and at first I thought, "Man, our field is really clicking along - tons of jobs for designers and almost none for researchers..." but then it dawned on me: the reason there aren't that many jobs for qualitative researchers is that most American companies aren't there yet. They don't realize the importance of, and the crucial insights that come from, talking to actual users about the actual products and services they produce. For me, I will let other writers justify the essential need for qualitative research as a component and partner of quantitative research. There is no shortage of articles here and in other places as to why you need a robust qualitative research capability - this is about the build itself.
What Cody & I built together was the idea that the Qualitative User Experience Researcher is not just the person who talks to the user, but is indeed the intelligence analyst for an entire design system. A CIA for R&D teams, if you will. Gathering insight and actionable intelligence from across the entire organization, then synthesizing it into one single report that goes back to design teams - all on behalf of the one who is MOST important: the user. The prevailing wisdom in the company itself was that because the company lived on a steady diet (read: a gluttonous intake) of numbers and real time data - there was zero need for a standalone qualitative research department...and I'm pleased to say that over time, we began changing that viewpoint.
I. Be the Research Sherpa
More often than not, the articles I read around building a qualitative capability have the department as something that only talks to your users. This is, in my view, an incorrect model for what a qualitative researcher should be doing with their day-to-day. What Cody and I built was a system by which the UX Researcher was the center conduit - the revolving gate - as it were between design teams and information throughout the entire organization (also evidence of why I shouldn't be allowed to draw freehand):
It starts at the top with a problem.
Let me say that again: This begins with a problem. The problem is found through user interviews that seek to determine what the user's problem is. All too often, I see enterprises trying to solution problems that never existed. As though VPs have these strikes of lightning...or perhaps Oracles that live in their bathtubs whom they consult to come up with these grandiose ideas for their design teams to create. The result is a collection of useless crap (sorry, but it is) no user ever wants to use, fundamentally because no one bothered to ask what their problem was in the first place. You cannot begin to solution a user's problem until you find out, from the user, what the problem is that needs a solution. /end of rant.
Once the design team is given a problem by the user, the design team issues their understanding of that problem to the qualitative UX Researcher and the conduit process begins. The qualitative researcher begins going to the various sources around the organization and asking the very same question around the problem that the design team has put before the qualitative research capability.
The best way for this conduit to work is for the User Researcher to have one (1) single point of contact within all of these capabilities. One person who can act as ambassador and provider of information back into the cycle.
When all of these elements from across the organization are received, the qualitative user researcher can then compile, analyze, and report back to the design teams what evidence already exists within the organization as it applies to solutioning the user's problem. (Cody's version of this process, pictured above) From that process, one finds that not all elements of the user's problem are answered by the existing evidence - in my former company, this had much to do with gaps in the quantitative record and in market/competitive research. These gaps then become identifiers of where qualitative research must be done 1:1 with the users themselves.
This synthesis and delivery is intended to keep individual design teams from working on the same thing at the same time (silo-ed design) or from reinventing something that's already been done (which in the industry I was formerly in, was all too common). It also served to clarify and refine a user's problem into something that could be, perhaps, a better premise upon which to design from.
II. Keep Everyone In the Loop: Research Is A Team Sport!
No boss is going to just let all of this happen across functioning teams right? We all report to someone, right? So we built that process out as well:
Notice in step 4, step 7, and step 12 - there are three moments where a sitrep needs to go to the research lead/director. The real trick to being an effective User Researcher - especially when leading a capability - is to be a Zen Master of plates spinning wobbly in the air, and to keep them spinning in a calm and efficient manner. More on that further down: how a Zen master keeps those plates spinning while others look on nervously, waiting for one of them to fall.
But in addition to being a conduit for resources across the organization, one must also be a conduit of communication across the organization, and should be willing and able to give anyone who comes calling a briefing on where this project or that is spinning, and on which stage of research a given project is currently spooling through. We must, as qualitative user researchers, afford everyone within the organization the same freedom, empathy and stewardship as we do our users.
Now the above process chart is essentially the CRUX of it all (see what I did there?) but let's drill down on some elements of this. There is no shortage of articles around qualitative research methodologies, so I'll not sit here and entertain you by beating the dead horse deader - a simple Google search will turn up all manner of points of view on the efficacy of all the methods. So I'm not going to redo their fine work.
Let's talk about the very last box - the Knowledge Library.
III. The Library of Alexandria
So the knowledge library was built in SharePoint (brutal). Each design subject had its folder. Within each folder there were more folders containing user interviews, secondary (read: Google) research, scientific studies, market research, competitive research, a full User Experience and Human Factors glossary of terms, and more PowerPoint decks than you could ever possibly imagine. It was created as a mother/child directory, but it also contained links to the Video On Demand capability so one could, if one so chose, sit and watch hours of videos of users being interviewed and using the company's products. These videos could be cut down to the important points (no one has time to watch 20 hours straight of users "um" and "uh" through a test), so there were snippets. And artifacts, both past and present.
The idea behind the knowledge library was similar to that of the Library of Alexandria in ancient times: when better information around the user experience was found, it could replace that which already existed. So that the library became a living, breathing archive consistently and constantly respiring itself so that anyone in the entire organization that possessed the link could go within its folders and learn, copy, link and indeed contribute to other people around the organization the findings of the august conduit of CR:UX.
The power of qualitative user research is not just talking to users on any given day about their problems or potential solutions or validation testing. The power of qualitative research is the archive - the trail of the whole of the research - that can change the culture of an organization around qualitative research. Without the library, qualitative research is just passing lessons on to design teams without the efficacy or justification for doing more. In other words, the way to justify to directors and stakeholders the need for more qualitative research is to have something to point to, as if to say, "Here's why we need to go further on the road of design with the user!" - and the track record to back it up. This is the power of creating and maintaining the Knowledge Library. It should be the job of the Qualitative User Researcher to create, build, and maintain this Knowledge Library. Being a good archivist is as essential as being a good moderator.
IV. Keeping the Plates Spinning
So, beyond keeping everyone in the loop and beyond building and keeping an archive of records, there is one more activity that a Qualitative UX Researcher should be responsible for building and maintaining: tracking where all the design teams are in the process of doing research to drive user-centered design. Cody came up with this brilliant idea of creating charts called "Research Assessments" which were an at-a-glance vision of where a design team was in doing their research, as well as next steps/recommendations from the Qualitative UXR as to where the team should look to go next. What it came down to was a begrudging love affair with what was both my nemesis and favorite tool: the Harvey Ball.
The efficacy of this system is sound. It is a visual representation of what qualitative research looks like in situ, without having to interrupt the designer/PM/VP/agile team's day for a sitrep on where things stand (those folks are busy...come ask me!). I will apologize for the blurriness of the image, but what the assessment says doesn't matter - what is important are those three headings and the boxes they cover:
- Know Context: This includes market research, quantitative surveys, and scientific/secondary source work that the Qualitative User Experience Researcher gathers and synthesizes. This is a good way to tell where that cycle stands in the process; because not everyone has the intelligence you're seeking on hand, it has to be sifted through, sorted through, then found and sent.
- Know People: This is where you want to talk to your users to fill in gaps around the first three above. Pain points, pleasure points, empathy maps, journey maps, ideal states, sorting exercises, market baskets, and ACBC (that's Adaptive Choice-Based Conjoint) all go here.
- Know Solution and Validation: This is the point at which we start talking about the answer to the first six boxes. This is where the rubber meets the road, so to speak, where the design team comes up with a solution and presents it to the CR:UX to test with users. Under this bracket are things like where in the iteration testing the design is, where in the detailed design testing is taking place and where in the backlog the design team is in fulfilling the updates on the product.
- And of course...the bloody Harvey Balls. For the purposes of ease, the balls were designed in increments of quarters. Ultimately an extra box was added apart from those bracketed that gave an overall assessment of the project based on the average filling of the balls o' Harvey across the whole of the research process.
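As an illustration only (the actual assessments were slides, not code), the quarter-increment Harvey Ball scoring and the averaged overall box described above can be sketched as follows. The glyphs, function names, and the snap-to-nearest-quarter rule are my assumptions, not the original tool:

```python
# Hypothetical sketch of the Harvey Ball assessment scoring described above.
# Assumption: each box holds a quarter-increment fill (0, 1/4, 1/2, 3/4, 1),
# and the overall box is the plain average, snapped back to a quarter.

HARVEY_GLYPHS = {0.0: "○", 0.25: "◔", 0.5: "◑", 0.75: "◕", 1.0: "●"}

def overall_assessment(fills):
    """Average the quarter-filled balls and snap to the nearest quarter."""
    avg = sum(fills) / len(fills)
    snapped = round(avg * 4) / 4  # back to a representable quarter increment
    return snapped, HARVEY_GLYPHS[snapped]

# Example: context work done, people work halfway, validation not started.
fills = [1.0, 1.0, 0.5, 0.5, 0.25, 0.0]
score, glyph = overall_assessment(fills)
print(score, glyph)  # 0.5 ◑
```

The snap step keeps the overall box drawable with the same five glyphs as the individual boxes, which is what makes the chart readable at a glance.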
V. The Dirty Details of Devilish Decisiveness
Somewhere in the back of my head Rod Roddy is yelling into the mic, "But wait, there's more!" Yes, it would be lovely if it were just process/library/assessments, but there are things in building this that seem like minutiae that will come back to bite you in the butt if one doesn't pay attention to them. Rapt attention.
A. Legal
Folks, I cannot stress enough how big of a player your legal team is going to be in building all of this out. Cody and I learned quite early on that in a Fortune 5 company, no one makes a move without legal knowing about it and signing off on it. Nothing will make your blood run cold quite like an unsolicited call from your legal team because you tripped a trigger. We were super lucky in this respect though, because the lawyer assigned to the build of the User Experience Research program was a privacy lawyer who understood completely what we were attempting to do. I won't hash through all of the things we discussed...rather, there are two documents you need to have from your legal team to carry out this build successfully:
1) A legal release form: This is signed by the user before any other document, before the test begins. It basically says that there is the chance that personal information might be discussed, that user interviews and videos will be used for internal use only, and that we will keep people anonymous. (Side note on anonymity: a random character sequence generator will become your fast friend - rather than being the user interview of Judy Smith, it becomes the user interview of 843_&*fju87! or whatever.) This document also empowers the user to contact the research capability and have the entirety of their work with us pulled and destroyed if they so need or choose.
2) I had our legal team clear the consent form in the back of Steve Krug's "Rocket Surgery Made Easy" for use with our users. This was important to me, because where the release is written in legal-ese and has that very formal taste to it, the consent form in the back of RSME - combined with a form someone had built based on the GDPR in Europe - made for, voila, a consent form that is both human-sounding and user-friendly, and easier to read. I felt like we needed two documents that said basically similar things so that the user covered us twice in permissions. It would have been all too easy for a user to say, "Well I didn't really understand the legal document they had me sign." Like in film - coverage is everything.
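On the anonymity side note in the release-form discussion above, here is a minimal sketch of a random-alias generator. The alphabet, alias length, and the separate name-to-alias mapping are illustrative assumptions; the original text names no specific tool:

```python
# A minimal sketch of the "random character sequence generator" mentioned
# above, used to replace participant names on interview artifacts.
# Assumptions: a letters-and-digits alphabet and a 10-character alias.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def make_alias(length=10):
    """Generate a cryptographically random participant alias."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Keep this mapping in a separate, access-controlled location so the
# artifacts themselves (videos, transcripts) only ever carry the alias.
alias_map = {"Judy Smith": make_alias()}
print(alias_map)
```

Using `secrets` rather than `random` matters here: aliases that protect identity should not be guessable from a seed. The separate mapping also makes the "pull and destroy everything" request practical - look up the alias, then purge every artifact carrying it.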
B. Procurement:
If your enterprise already has vendors that do research for you, do your very best not to tick them off. Yes, you are building your own capacity to do user research, but you want to do so transparently, because a ton of help in building out our own capability came in the form of suggestions from vendors, who were incredibly cool and very helpful when asked. We were fortunate to have a stable of six vendors that we could work with on any given project, but we wanted more. Ultimately, we wanted to work with them all, and form a bidding process for any project that might be too large for the CR:UX to handle alone. We wanted the maximum number of tools we could use to build up any qualitative research any team could want, and to do it in a time frame that would suit the design team's needs.
One thing we never wanted to do, if we could help it, was recruiting. Recruiting is the most loathsome part of any qualitative user research capability. One will spend weeks just to get a decent sample population, and then there's scheduling, and honorariums, and on and on. Luckily, our vendors had the robust capacity to do all of the recruiting, so that is what we relied on them for most. It made the build of the CR:UX that much easier. It would have set the entire build back six months, easily, if we had needed to build a recruiting team, and we had neither the luxury of time nor the capital to do such a thing, so vendors really came to our rescue.
In bringing new vendors into the company fold, one must work within the quagmire of legal brambles that is procurement. Luckily, after some searching we were assigned two rock stars of the process who helped us bring on more and more tools into the toolbox. We also instituted a program of regular "Lunch n' Learns" around new vendors where the vendors would present decks to entire design teams about what they were capable of bringing to their project. These sessions were recorded then placed into the Video-on-Demand section of the Knowledge Library so that a design team could go back and look at vendors and then request something specific that we would then reach out and coordinate with whichever vendor the team was most keen on working with.
The point of all of this section is to stress to you the need to build your capability with transparency in mind. Nothing flows through a conduit if it's locked up in territory, turf wars, and secretive operations. Operate with integrity and whatever your capability says is the truth of the user, can be trusted because the capability is built on the same transparency of truth.
VI. How Can We Save A Buck?
Recall at the top that this capability was built out to do things dirt cheap but still yield the same high-impact studies as the vendors who charge high 5's, low 6's for just one study, and that the capability was built out so that the largest expense would end up being travel to the user. This is the secret sauce - what the qualitative user researcher for CR:UX actually does once gaps in the research are identified and the recruits are secured after the final screener (Frames 8, 9, 10, & 11) is completed. Ideally by this point, the CR:UX and the Design Team have decided what methodology they wish to use and, more importantly, the setting: lab, ethnography, man on the street, etc. What constitutes the majority of most vendors' cost are the mechanisms of a technical nature that many UXR capabilities do not know how to build out: filming, lighting, editing, transcription, and so forth. Thus, the qualitative researcher for CR:UX must know these elements in order for the enterprise to save money on doing research.
A. Do Your Own Filming, Transcription, and Editing
I was extremely fortunate to have attended film school (though I am STILL paying off the loans for it almost 20 years after the fact) prior to finding and falling madly in love with Human Factors/Ergonomics Psychology. I hadn't a clue when I started in HF/E that my film school lessons would come to bear as heavily as they did. Our capability saved thousands per project because I knew how to set lights for an interview or an ethnography, how to get in position to get the right shots, and which cameras to use for an HD experience for the viewer. Even if we were forced into using a lab, I would request smaller rooms within the facility so that I wouldn't have to deal with those pesky fluorescent lights that flicker a green tint on everything.
We are incredibly fortunate in this day and age to have studio quality video editing platforms both for cost and open source. One saves a bundle in time and money by knowing how to set markers in the middle of an interview (a quick jot of the time on the clock when something crucial is said by the user) so that it can be cut later on in the editing process.
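The marker habit described above - jotting the clock time when a user says something crucial - can be sketched as a tiny session timer. The class and method names here are my own invention, not any editing platform's API:

```python
# A hedged sketch of the "mark the clock time" habit: a small session
# timer that records elapsed timestamps with a note, producing a list
# the editor can jump to later instead of scrubbing the whole recording.
import time

class MarkerLog:
    def __init__(self):
        self.start = time.monotonic()  # clock starts when recording does
        self.markers = []

    def mark(self, note):
        """Record an HH:MM:SS elapsed timestamp with a short note."""
        elapsed = int(time.monotonic() - self.start)
        stamp = f"{elapsed // 3600:02d}:{elapsed % 3600 // 60:02d}:{elapsed % 60:02d}"
        self.markers.append((stamp, note))
        return stamp

# During the interview, a single keystroke-worth of effort per moment:
log = MarkerLog()
log.mark("user struggles with checkout flow")
for stamp, note in log.markers:
    print(stamp, note)
```

Even on paper, the same idea holds: a timestamped note taken in the moment is far cheaper than re-watching hours of footage to find the quote you half-remember.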
Lighting is easy and cheap - one can get near perfect lighting for digital video on Amazon - a three light set up for far cheaper than one might expect. And cameras - I mostly just used two or three Logitech 1080p web cameras for interviews, placed strategically around the user, and GoPros for ethnographies where the emphasis was being a mobile, fly on the wall observer (where I couldn't be tethered to a laptop). Logitech has decent capture software that is easily converted and edited with, but there are others as well.
Transcript services can easily run $150.00 or more an hour. But I type pretty fast, and can mark time codes when the most crucial portions are said.
Nowhere are these cost savings more important than when one is doing weekly/iterative hybrid research with teams in agile sprints, where research and reporting is done on the fly, both on the road and in the office. A quality user researcher must be ready at any point to present the progress of those spinning plates to teams who are in the rush of a sprint and need crucial, actionable insights from the user.
B. The Closed Loop: The Ultimate Cost Saver
By far one of the most powerful tools we had in cost savings was the closed loop. A closed loop is the mechanism by which one recruits users from one's own user base. One example might be a researcher set up inside a shoe store who talks with customers about why they bought the pair of shoes they did, and how they made choices when looking for shoes in the store - the process to recruit and incentivize costs nearly nothing.
Same thing goes for large enterprises. You and I both know that every call that comes in is recorded and archived for VOC research. Nearly every call is transcribed and marked on a dashboard, and metrics and quantitative data are recorded for later use. Along with those calls comes metadata, including the ability for an enterprise to reach out and contact that user directly. If one can do it and keep users anonymous, the perfect closed loop process can be created - if, and only if, the legal team of your company green lights it. From there, it is just a matter of scheduling some 1:1 time and the closed loop is complete.
The benefit of a closed loop is that you've got users that will then act as stakeholders and allow you to give them a seat at the design table itself. This also means a design team can utilize the same user as a touch point to design around, if that user meets the requirements of the user the design team is creating around. This is truly user centrism done in a very literal sense. Closed loops are a tool that no vendor can ever really do within your own company, chiefly because they are an outside vendor. They can do a version of it, but nothing beats the cost effectiveness of the real thing. Remember: Research is a team sport, and no design team is complete without the user being at the table when doing excellent design work - it is like custom building a house with the homeowner in mind as it is being built (oh what I would give for taller counters sometimes!)
VII. Why can't we just do video interviews?
This is a great question, and it is a veritable ongoing debate within the qualitative User Research community. Most large vendors prefer to do 1:1 interviews through video, as it saves money on travel, but I feel that as a qualitative researcher, much is lost in the process. It's a watered-down 1:1 interview. Here's why I say this. I've moderated and performed countless 1:1 interviews both in person and on video. For me, it's a matter of control over the interview itself: I can control the lighting and how the cameras are placed; I can capture body language and facial expressions. I can capture the scene the user puts themselves in (this is especially crucial in ethnography), be it home, office, or any other place they could be using the product. I can use document cameras to show them working with the product with their own hands, and a slew of other little things that matter more to me and the design team than they probably do to the user. The point is that I have coverage, and coverage helps a ton when cutting together a video presentation for a design team or stakeholders.
With video interviews, you get a face. Maybe you get face and shoulders; floating in space. Imagine at your next board meeting if everyone sat around the table and you could only see their head and shoulders, staring straight ahead. What does that do to your perception of the feedback they're giving you? Now imagine you're a design team that wants to understand their user fully - heads and shoulders tell you next to nothing, other than verbal explanations and cues. One is missing 3/4 of the rest of the show. Imagine sitting in a theater watching your favorite film and you can only see the left 25% of the screen...how entertaining is that movie now? How effective are those rain boots with 25% of the foot portion missing?
VIII. Conclusion
My mother told me once, never perform a service unless you're getting paid for it. I have, with this writing, gone completely against that (sorry mom) and offered you the entire CR:UX build for free. Imagine what you might have had to pay me in consulting fees to set this up for you? Yeesh. I'm an altruistic sort...a bleeding heart, as it were, and so, baring my soul and my methods, I hope you find this all quite helpful. There is no guide that I know of that sets this up for you to the extent that I've depicted here.
There is a Buddhist koan whose punchline is, "Ping-ting comes for fire!" and what it means (to me) is that all of what I've relayed to you, you've known all your time on this path; it only needed to be voiced. So in actuality, there's nothing new that I've revealed or imparted, just words that were inside your mind already as to how to build this; they only needed unlocking. So consider them unlocked. Now go and build it...and I'm more than happy to answer questions and mentor you on your way to building the very robust and very efficacious qualitative User Experience Research capability that your company is capable of building. Just as I built this one with transparency, so too will I help guide you, answer questions, and mentor you with the very same transparency.
* * *
A note on the photos: The slides from decks were created entirely by me (and the bad drawing on them...sorry again); the blurry photos of the assessment and the library are mine also. The photos scattered throughout of mindbendingly Uncomfortable designs come by way of Katerina Kamprani - who has in her genius created what HF/E/Usability folks regard the way a layperson might regard single panel comics. Absolutely brilliant work, and incredibly funny for those of us in the trade. The entire collection of things I'll build into my house one day, just to mess with people at their expense, can be found here: https://www.theuncomfortable.com/ I have no idea who did the comic of the horse and the man at the water (can't read the signature), but I can say that in a single frame, he/she/they encapsulated every line from every stakeholder bemoaning users for not adopting brilliant design ideas without the benefit of user research to drive design. Seriously, when in doubt...go ask the horse!
Special thanks also goes to Don Norman - the Godfather of HF/E and UX R&D - for writing a little book in 1988 called "The Design of Everyday Things," a book that, even today, makes me re-fall in love with my trade. It is the book, given to me by Dr. James Bliss at Old Dominion University, through which I found my calling and constant obsession. Thank you to both!