Affinity Mapping and Feature Ranking
Greg Daigle
Experienced Design Manager, e-Learning Manager, Director of Customer Success Services/Quality Assurance
One of the many methods used in design research and design thinking is Affinity Mapping. During my time as Director of Quality Assurance at Allen Interactions for their ZebraZapps e-learning software authoring tool, I led the build process through the initial product launch in 2011 and the first half dozen major releases until 2013.
We had employed Atlassian’s Jira as our agile tool, and the system was accessible to all engineers, QA testers, and product management team members. However, after beta testing had been completed and product launch was being readied, I identified a need for a post-launch method to open the process to an important in-house group without access to Jira: our internal studios of e-learning teams, including developers, instructional designers, graphic artists, and producers.
Although the product management team regularly prioritized features, I wanted to create a more open and dynamic way to view and comment on feature priorities, similar to the traditional Card Sort method. As a former manager of design research and an advocate of Design Thinking, I had used card sorting both in practice and as an educator. It is a tried-and-true method for gaining collaborative group feedback, though oversight of the process is less controlled than through Jira. My goal was to visually open up the prioritization process and give those most familiar with using the product a tool for making their preferences known objectively and explicitly.
Instead of a Card Sort, I chose Affinity Mapping, a method that employs the familiar Post-its but also invites participant commentary, questions, research links, and brainstorming ideas. We started by posting the existing Jira tickets in our system. Contributors were asked to "vote" for features but were also encouraged to add personal observations, commentary, and experiences. Rankings were determined with removable colored dots.
I first printed out about 350 Feature, Feature Request, and Change Request tickets with a status of Open, In Progress, Approved for Development, or In Development for Current Milestones. Upper management had already identified these as the most important features for consideration, though other archived tickets were also available, as discussed below.
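At the time this was simply a filtered view printed from Jira, but for readers who want to reproduce the filtering step programmatically, the sketch below shows one way it could be done against Jira's REST search endpoint. The server URL, credentials, and the exact issue-type and status names are placeholders standing in for our project's values, not the originals.

```python
# Minimal sketch: pull Feature, Feature Request, and Change Request tickets
# in the statuses considered for the first round of voting.
# The base URL, credentials, and the issue-type/status names below are
# hypothetical placeholders; adjust them to your own Jira project.
import requests

JIRA_BASE = "https://jira.example.com"        # hypothetical server
AUTH = ("qa_lead", "api-token-or-password")   # hypothetical credentials

jql = (
    'issuetype in ("Feature", "Feature Request", "Change Request") '
    'AND status in ("Open", "In Progress", "Approved for Development", '
    '"In Development for Current Milestones") '
    'ORDER BY priority DESC'
)

resp = requests.get(
    f"{JIRA_BASE}/rest/api/2/search",
    params={"jql": jql, "fields": "summary,issuetype,status", "maxResults": 500},
    auth=AUTH,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    fields = issue["fields"]
    # One line per ticket -- roughly what each printed wall page carried.
    print(issue["key"], fields["issuetype"]["name"],
          fields["status"]["name"], fields["summary"])
```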
The following instructions were given. The wall colors refer to two long hallway walls in the office: one, painted blue, wrapped around the office’s central core (see images above), and the opposite wall was painted green:
INSTRUCTIONS
- On the blue hallway wall you will find pages representing the current Features for ZebraZapps as listed in Jira. Your mission is to vote for the Features you think are the most important for the next generation of the product by placing a colored dot on that Feature's page.
- The number of dots in aggregate will be used to denote that Feature's ranking. Ranking will be used to guide the Release Date for those Features.
- The pages are attached to the wall with colored Post-its. Those pages held up by yellow Post-its are Features. At the end of the Features are other pages held up by Post-its of other colors. Those pages held up by blue Post-its are Feature Requests. Those pages held up by red Post-its are Change Requests. The pages continue around the corner.
- You have 24 removable dots to place on any of the Features you think are the most important. The studio developers and designers get the blue dots. The QA testers get the green dots. Management gets the yellow dots.
- In addition, upper management gets a very limited number of red dots designating top rank (regardless of other votes for that page). Think of red as a trump card to be used sparingly. You may place dots without any identifier or you may put your initials on the dots to help us come back to you for clarification. If you find that a red dot (trump) is on a page you had voted for and feel that your dot "vote" is now unnecessary, feel free to move your dot elsewhere.
- If there is a Feature, Feature Request or Change Request you believe should be retired because it has been superseded or made moot, place it on the green wall across from the blue wall. Any dot placed on a page on the green wall will result in its being moved back to the blue wall.
RESULTS
Tickets were arranged on the wall, organized horizontally by software function (e.g. transitions, wiring, text, message centers, arenas) and separated by green Post-its. See the images above, which show the ticket pages before voting began.
Tickets that received no votes, as indicated by dots, were moved to the Milestone Deferred section at the end of the blue wall: about 202 in all. The remaining 145 tickets with dots were then assigned to the lead engineer according to the ranking results so that he could ask individual programmers to determine requirements for each.
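The tally itself was done by eye on the wall, but the ranking rule is simple enough to state precisely: red dots trump everything, otherwise a ticket's rank follows its aggregate dot count, and undotted tickets are deferred. The sketch below is one hypothetical way to tabulate that rule; the ticket keys and dot counts are illustrative only, not the actual results.

```python
# Illustrative tally of the dot voting, assuming one record per ticket.
# Blue = studio developers/designers, green = QA, yellow = management;
# red is the upper-management "trump" that outranks any dot count.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    key: str
    dots: dict = field(default_factory=dict)  # e.g. {"blue": 3, "green": 1}

    @property
    def votes(self) -> int:
        # Total of ordinary dots (blue, green, yellow).
        return sum(n for color, n in self.dots.items() if color != "red")

    @property
    def red(self) -> int:
        return self.dots.get("red", 0)

def rank(tickets):
    # Red dots trump everything; among red-dotted tickets, more reds rank
    # higher. Remaining tickets are ordered by aggregate dot count.
    return sorted(tickets, key=lambda t: (t.red, t.votes), reverse=True)

if __name__ == "__main__":
    wall = [
        Ticket("ZZ-101", {"blue": 5, "green": 3}),            # 8 votes, no red
        Ticket("ZZ-214", {"blue": 4, "green": 2, "red": 1}),   # red trump
        Ticket("ZZ-087", {"yellow": 2, "red": 2}),             # two red trumps
        Ticket("ZZ-160", {}),                                  # no dots -> deferred
    ]
    voted = [t for t in wall if t.votes or t.red]        # go to the lead engineer
    deferred = [t for t in wall if not (t.votes or t.red)]  # Milestone Deferred
    for t in rank(voted):
        print(t.key, "reds:", t.red, "votes:", t.votes)
    print("deferred:", [t.key for t in deferred])
```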
After this first round, a second round was initiated using a different set of filters for archived tickets listed as lower priorities under Rough Design, Approved for Design, or Requirements for Milestones. That represented an additional 486 tickets. Subsequent filters identified another 83 tickets and, later, 19 more, for a second-round total of 588 tickets. Key respondents were asked to strike those no longer relevant for development and to retain those still possessing merit (i.e., features that had not been made moot by another ticket). A full analysis of the round-two results was not undertaken.
CONCLUSION
Opening ticket prioritization to in-house developers and team members without access to Jira gave the product management team new insights. Several features previously considered lower priority turned out to be higher priorities for hands-on users. This led to some shifting of work priorities for engineering and QA and identified issues that might otherwise have become sources of dissatisfaction for users, both in-house and external.
Of the 145 tickets in the first round, only 22 received "trump" red dots from upper management. Of those, 1 feature received 3 red dots and 2 others received 2 red dots; these became our highest priorities. Interestingly, the feature ticket with the highest number of votes (8) did not receive a red dot. The distribution of the most-voted features was:
- 7 dots: two features (one also receiving a red dot)
- 6 dots: three features
- 5 dots: eight features
- 4 dots: nine features (one also receiving a red dot)
- 3 dots: twenty-two features
Of the 45 features receiving 3-8 dots, only 5 also received a red dot, and only two of the 23 features scoring 4-8 dots got a red dot from upper management.
Hands-on in-house users identified 21 highly desirable features for the next generation of the software that upper management had not placed at the highest level (red dot). As a result, new feature priorities were added, and hands-on user preferences that had previously gone unrecognized were acknowledged. There were 68 features that received only 1 vote from hands-on users yet also received red dots from upper management. However, since users could move their dot once a red dot was placed on the same item, some of those items may originally have had a higher vote total before a dot was moved.