The Product Thinking Playbook in Action (Remotely): Card Sorting
This was originally published on the Connected blog. Connected was acquired by Thoughtworks in April 2022.
The Product Thinking Playbook is our adaptable, customizable way of designing project plans to build better products. Consisting of various tactics, techniques, and milestones borrowed from Design Thinking, Agile Development, Lean Product Strategy, and Jobs-To-Be-Done Theory, the Product Thinking Playbook helps facilitate conversations around what you will and (just as importantly) will not do to achieve your product goals.
On a recent project, we were engaged as an end-to-end product development partner to design a web experience for a global workforce, helping employees discover and connect with the employer-provided health and wellness services most relevant to their well-being journeys. This article is the third in a series that highlights specific Playbook tactics and techniques used during the project, with an emphasis on how we adapted them to work effectively in a remote context.
In the previous articles in this series, we looked at the Research Planning and Concept Evaluation tactic cards. Today, we are diving into the last technique card, Card Sorting…
Objective: What are we trying to achieve at this stage in the project?
By gaining an understanding of the users' jobs, pains, and gains, we equipped ourselves to conduct a concept generation workshop with our client stakeholders. The workshop was a success, leaving us with more than 100 feature concepts. Using the four product risk areas as criteria, the team distilled the concepts in a Concept Evaluation workshop, ultimately landing on 10 feature concepts to validate with users and arrive at the final feature set for the most desirable product (MDP). The team used the evaluative user research technique of Card Sorting to evaluate the desirability of our feature concepts and inform the evolution of the design and the user experience.
To successfully run the remote testing sessions and arrive at the final feature set, we needed an interactive framework that made it easy for participants to understand each feature concept and select the keyword and score that best captured their reaction. After the 1:1 sessions, the team had a few days to synthesize the data and decide on the features that, based on the user data, we believed should carry through to the MDP.
Approach: How do we action this Playbook tactic and adapt it to a remote setting?
a. Planning for the facilitation of the 1:1s
For the facilitation of the concept testing and card sorting sessions, it was important to choose an interactive tool that our participants would find engaging. Unlike in-person user testing sessions, where participants can jump right into the activities, a remote session is not as straightforward. Getting to know a new tool and learning the agenda for the session can increase the cognitive load on the user, so we had to be mindful of the learning curve so as not to overwhelm our participants. We used Jamboard to showcase our low-fidelity wireframes, allowing users to fill in the scorecard and select the keyword for each feature concept. The tool has its limitations, and the team faced a few challenges when setting it up prior to the sessions; however, it is used in the ecosystem of our client's workspace and was therefore familiar to the participants. It is not as robust a tool as Miro, but it got the job done.
For each session, we had a facilitator and a note taker. For each feature concept, the facilitator set the scenario for the participant and revealed the low-fidelity wireframe. After taking a few moments to absorb the concept, the participant selected the word from the list of keywords that best described their experience (e.g., informative, connected, confused) and a number out of 10 indicating how important the feature was to them.
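To make the scoring framework concrete, here is a minimal sketch of what one participant's scorecard amounts to: one record per feature concept, holding the chosen keyword and a 1–10 importance score. The field and concept names are hypothetical and purely illustrative; the actual scorecard lived in Jamboard, not code.

```python
from dataclasses import dataclass

# Illustrative only: the real scorecard was a Jamboard frame, not code.
# Field and concept names below are invented for this sketch.
@dataclass
class ConceptResponse:
    concept: str      # the feature concept shown as a low-fidelity wireframe
    keyword: str      # the keyword that best described the experience
    importance: int   # 1-10 score of how important the feature felt

# One participant's scorecard: one response per feature concept shown.
participant_scorecard = [
    ConceptResponse("Feature concept A", "informative", 8),
    ConceptResponse("Feature concept B", "confused", 3),
]
```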
There were a few firsts in this experience. It was the first time the team would use Jamboard to conduct the sessions. It was the first time we would use this framework to score each feature concept. And it was the first time we would conduct a session like this remotely. Lots of firsts! Hence, we knew we needed a few dry-runs to refine the session before presenting it to our participants.
b. Designing the visuals for each feature concept
We used the design tool Figma to create low-fidelity wireframes of the feature concepts. Figma allowed asynchronous work to flow in harmony. Even though we divided the user flows among us, we were able to see each other's work in the same digital space, almost better than if we had been in the same physical space. This workflow called for fewer sync-ups, since we communicated in Figma via comments and could track our progress live.
At this stage of the project, the team was under a tight timeline, so we had to allocate the majority of our time to executing the work and less to sync-ups and discussions. Throughout the project, Figma played a crucial role in allowing all team members to contribute and give their input on the designs. As the Designers produced the wireframes, the Product Manager, Design Researcher, and Software Engineers reviewed the work and left comments, ensuring that we were accurately incorporating the research findings and technical constraints into our design decisions.
c. Synthesizing the user data
As in previous stages of the project, the team split up to independently synthesize and analyze chunks of the user data. We then regrouped to gather feedback and ensure alignment across the team on the next steps. Throughout the project, we relied on Miro for any activity involving post-its. Miro simulates a whiteboard wonderfully well, and I would argue that it's better than the real thing, since it has infinite space and on-demand commenting.
The decision-making process was always collaborative to ensure that we were taking into account all four product risk areas (e.g., is a particular change to a feature technically feasible?). Our cross-disciplinary team members each brought their voice and expertise to the table, with every voice given equal weight in our discussions. When it came to deciding on the features to include in the MDP and the changes to be made, the team held a two-hour working session on Zoom. By the end of the session, we were all in agreement on the feature concepts to move forward with in the design stage.
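As a rough illustration of the kind of roll-up this synthesis produces, the sketch below ranks feature concepts by their average importance score and surfaces the most common keyword for each. The data, field names, and ranking rule are assumptions made for the example; they are not the team's actual synthesis output or method.

```python
from collections import Counter
from statistics import mean

# Hypothetical card-sorting results: participant -> list of responses,
# each with the chosen keyword and a 1-10 importance score.
sessions = {
    "participant_1": [
        {"concept": "Feature concept A", "keyword": "informative", "importance": 8},
        {"concept": "Feature concept B", "keyword": "confused", "importance": 3},
    ],
    "participant_2": [
        {"concept": "Feature concept A", "keyword": "connected", "importance": 9},
        {"concept": "Feature concept B", "keyword": "informative", "importance": 5},
    ],
}

# Group responses by feature concept across all participants.
by_concept = {}
for responses in sessions.values():
    for r in responses:
        by_concept.setdefault(r["concept"], []).append(r)

# Rank concepts by average importance and report the most common keyword;
# high-scoring concepts with positive keywords are MDP candidates.
for concept, rs in sorted(
    by_concept.items(),
    key=lambda item: mean(r["importance"] for r in item[1]),
    reverse=True,
):
    avg = mean(r["importance"] for r in rs)
    top_keyword, _ = Counter(r["keyword"] for r in rs).most_common(1)[0]
    print(f"{concept}: avg importance {avg:.1f}, most common keyword '{top_keyword}'")
```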
Areas of Improvement: What can we do to ensure continuous improvement and progress?
Before the WFH shift, most of us on the team interviewed users in person and never needed to run sessions remotely. We learned a lot along the way, from optimizing the one hour we had with each participant to making the sessions engaging and interactive. When we go back to a hybrid work setting, remote interviews will still be valuable, as they allow us to increase our sample size of participants.
Working in a remote setting also taught us how to collaborate efficiently in an asynchronous manner. We found ourselves requiring more time for feedback, whether from our internal team members or from the client. However, we could handle only so much Zoom time, so instead of booking another meeting or a long presentation as an extension to our workshop, we used asynchronous commenting. Thanks to the robust commenting features of our digital tools (e.g., Google Suite, Miro, Figma), we found that, far from having to "settle" for comments, asynchronous feedback was actually a huge time-saver.
The Card Sorting card is an important step and one of the last in the discovery phase. Before moving on to the stages of detailed designs, fleshed-out user flows, and implementation (all of which are time-consuming), project teams need to be confident in the feature set they are committing to. As in many stages of the product development process, we turn to our users and rely on the data they provide to make key product decisions with confidence. This is how we can be certain that we're always building better products.
The statements and opinions expressed in this article are those of the author(s) and do not necessarily reflect the positions of Thoughtworks.