Consumer Software UX & NPS Benchmarks (2025)
Jeff Sauro, PhD | Jim Lewis, PhD

When we think about software, Artificial Intelligence has gotten a lot of attention lately. But putting an AI label on a product doesn’t necessarily make people use the software more frequently or more effectively.

How do we know how usable and useful a product is?

Understanding and predicting usage, adoption, and growth starts with measuring usability and usefulness.

Our research has shown that perceptions of usability and usefulness predict both intention to use and recommendation intentions. Both of these predict future usage.

To understand the current consumer software landscape, in January 2025 we conducted a large-scale retrospective benchmark with 1,896 U.S. respondents on 40 popular consumer software products, similar to what we’ve done for the last 10+ years. Respondents tended to be younger (50% below the age of 35), most (85%) had at least some college education, and a bit over half (56%) were female.

The 40 software products included a mix of popular productivity, storage, security, music streaming, language learning, and generative AI software:

  1. Adobe Illustrator
  2. Adobe Photoshop
  3. Adobe Premiere
  4. Adobe Reader
  5. Amazon Music
  6. Babbel
  7. ChatGPT
  8. Claude
  9. Credit Karma
  10. Dropbox
  11. Duolingo
  12. Firefox
  13. Gemini
  14. GIMP
  15. Gmail
  16. Google Calendar
  17. Google Chrome
  18. Google Docs
  19. Google Drive
  20. Google G Suite
  21. Google Meet
  22. Google Sheets
  23. Google Slides
  24. iCloud
  25. McAfee Antivirus
  26. Microsoft Edge
  27. Microsoft Excel
  28. Microsoft Office 365
  29. Microsoft Outlook
  30. Microsoft PowerPoint
  31. Microsoft Word
  32. Norton Antivirus
  33. OneDrive
  34. Pandora
  35. Quicken
  36. Rosetta Stone
  37. Safari
  38. Spotify
  39. TurboTax
  40. Yahoo Mail

Participants were asked to reflect on their most recent experiences with the software and answer several items, including the System Usability Scale (SUS), the UX-Lite, the standard likelihood-to-recommend (LTR) question used to compute the Net Promoter Score (NPS), and the Technical Activities Checklist (TAC-10). The full details are available in the report. Here are some highlights.

Read the full article on MeasuringU's Blog


Summary and Discussion

The most popular method to measure loyalty is the Net Promoter Score (NPS). It’s calculated using an eleven-point (0 to 10) likelihood-to-recommend question, with the NPS computed by subtracting the percent of detractors (0–6) from the percent of promoters (9–10).
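The NPS arithmetic described above can be sketched in a few lines of Python (a minimal illustration, not code from the report; the function name and example ratings are ours):

```python
def nps(ltr_scores):
    """Net Promoter Score from 0-10 likelihood-to-recommend ratings.

    Promoters rate 9-10, passives 7-8, detractors 0-6.
    NPS = % promoters - % detractors, so it ranges from -100 to +100.
    """
    n = len(ltr_scores)
    promoters = sum(1 for s in ltr_scores if s >= 9)
    detractors = sum(1 for s in ltr_scores if s <= 6)
    return 100.0 * (promoters - detractors) / n

# 5 promoters, 3 passives, 2 detractors out of 10 respondents:
ratings = [10, 9, 9, 10, 9, 8, 7, 8, 5, 6]
print(nps(ratings))  # 30.0
```

Note that passives (7–8) affect the score only by enlarging the denominator; they are counted in neither group.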

Across the 40 products, the average Net Promoter Score was 24%, ranging from −12% to 56%. Firefox (56%), Google G Suite (55%), and Google Docs (52%) had the highest scores; the lowest were for Adobe Photoshop (−9%) and Microsoft Edge (−12%).

We used the popular System Usability Scale (SUS) to measure the perceived usability of the 40 products. SUS is a ten-item questionnaire with possible scores ranging from 0 to 100. The average SUS score from over 500 products (including websites, consumer software, and business software) is 68. The average SUS score from this group of consumer products was 77, with a low score of 57 and a high score of 87. A raw SUS score of 77 compared to our SUS norms translates to the 80th percentile (well above the average 50th percentile).
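For readers unfamiliar with how ten Likert items become a 0–100 SUS score, here is the standard scoring procedure as a short sketch (our illustration, assuming the conventional 1–5 response scale with alternating positive/negative item tone):

```python
def sus_score(responses):
    """SUS score from ten responses on a 1-5 agreement scale.

    Odd-numbered items are positively worded: contribution = response - 1.
    Even-numbered items are negatively worded: contribution = 5 - response.
    The summed contributions (0-40) are multiplied by 2.5 to give 0-100.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5

# Best possible answers (agree with odd items, disagree with even items):
print(sus_score([5, 1] * 5))  # 100.0
```

The 2.5 multiplier is what stretches the raw 0–40 sum onto the familiar 0–100 scale, which is why SUS scores are not percentages despite the range.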

In the report, we also present findings for the UX-Lite. It’s becoming an important benchmark for many organizations as part of using a single (succinct) score to quantify software acceptance and satisfaction. It’s related to earlier research that produced the Technology Acceptance Model (TAM), but it has only one item each for ease (how easy is the software to use) and usefulness (how well do the features meet users’ needs). In aggregate, it provides a quick measure of technology acceptance (a mini-TAM).
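As a rough illustration of the mini-TAM idea, the two UX-Lite items can be combined into one 0–100 score by rescaling each item and averaging. The sketch below assumes a 1–5 response scale and a simple linear rescaling; consult the report for the exact item wording and scoring used:

```python
def ux_lite(ease, usefulness, scale_points=5):
    """Illustrative UX-Lite-style score: average of two items
    (ease of use, usefulness), each linearly rescaled from a
    1-to-scale_points response scale onto 0-100.
    """
    def rescale(x):
        return (x - 1) / (scale_points - 1) * 100
    return (rescale(ease) + rescale(usefulness)) / 2

print(ux_lite(4, 5))  # (75 + 100) / 2 = 87.5
```

Because there are only two items, the score is quick to collect in a benchmark survey while still separating the ease and usefulness components for diagnosis.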

Retrospective UX benchmarking is an important tool for investigating attitudes toward constructs, such as usability and usefulness, and their relationships to behavioral intentions such as the likelihood to reuse and recommend. With this information, UX researchers have estimates of how different products are doing relative to their competitors and some high-level diagnostic information that can guide additional research to understand the “why” behind the numbers.



Benchmarking in MUiQ

The MeasuringU Intelligent Questioning Platform (MUiQ) is an unmoderated UX testing platform geared towards benchmarking the user experience. It has built-in templates that make it easy to set up complex, competitive benchmarks and an analysis dashboard that compares standardized metrics across each condition so your team can quickly gather insights from the research.

MUiQ supports the following study types:

  • Competitive Benchmarking
  • Think-Aloud Usability Testing
  • IA Navigation (Card Sort, Click Test, Tree Test)
  • Large Scale Surveys
  • Advanced Surveys (Kano, Top Tasks, MaxDiff)
  • Prototype Testing
  • Moderated Interviews

With results presented in an easy-to-understand analysis dashboard, MUiQ provides all the tools you need to collect meaningful UX insights.

Reach out today to learn more about MUiQ.
