Harnessing the Power of KPIs and UX Metrics — Big Guide

Today, integrating business acumen and UX expertise is not just a luxury but a necessity. As organizations increasingly recognize the significance of user experience in driving business success, the role of UX Designers extends beyond aesthetics and usability, evolving into that of Product Designers.

It has already become essential to speak the language of business through the lens of team-wide design Key Performance Indicators (KPIs) and UX metrics, leveraging UX benchmarking tools like SUS, SUPR-Q, UMUX-LITE, and CES to garner unbiased, statistically reliable results. By embedding these practices, designers & managers can ensure that design decisions are not mere guesses or personal preferences but intentional, deliberate, and quantifiable actions.

Implementation of Design KPIs (simplified)

Step 1: Identify Relevant KPIs

Using frameworks like Google’s HEART, determine which aspects of UX (Happiness, Engagement, Adoption, Retention, Task success) are most critical for your product.

Step 2: Align Metrics with Business Goals

Ensure that the UX metrics you choose to track are directly tied to key business objectives.

Step 3: Setting Benchmarks and Goals

Define initial benchmarks and set specific, measurable goals for each KPI.

Step 4: Combine Quantitative and Qualitative Data

Quantitative data gives the ‘what,’ while qualitative data explains the ‘why’ behind user behaviors.

Step 5: Regular Monitoring and Analysis

Employ tools like Hotjar, Baremetrics, and Google Analytics for consistent tracking and analysis of these KPIs.

Step 6: Use a Mix of Tools

Employ a combination of analytical tools and user feedback mechanisms to gather comprehensive data.

Some UX Metrics and How to Use Them (long list)

Brace yourself…

  1. Top Tasks Success > 80% (For Critical Tasks)
  2. Time to Complete Top Tasks < 60s (For Critical Tasks)
  3. Time to First Success < 90s (For Onboarding)
  4. Time to Candidates < 120s (Navigation and Filtering in eCommerce)
  5. Time to Top Candidate < 120s (For Feature Comparison)
  6. Time to Hit the Limit of Free Tier < 7d (For Upgrades)
  7. Presets/Templates Usage > 80% per user (To Boost Efficiency)
  8. Filters Used per Session > 5 per user (Quality of Filtering)
  9. Feature Adoption Rate > 80% (Usage of a New Feature Per User)
  10. Time to Pricing Quote < 2 weeks (For B2B Systems)
  11. Application Processing Time < 2 weeks (Online Banking)
  12. Default Settings Correction < 10% (Quality of Defaults)
  13. Relevance of Top 100 Search Requests > 80% (For Top 3 Results)
  14. Service Desk Inquiries < 35/week (Poor Design leads to More Inquiries)
  15. Form Input Accuracy ≈ 100% (User Input in Forms)
  16. Frequency of Errors < 3/visit (Mistaps, Double-Clicks)
  17. Password Recovery Frequency < 5% per user (For Auth)
  18. Fake Email Addresses < 5% (For Newsletters)
  19. Helpdesk Follow-Up Rate < 4% (Quality of Service Desk Replies)
  20. Turn-Around Score < 1 week (Turning Frustrated Users into Happy Users)
  21. Environmental Impact < 0.3g/page request (Sustainability)
  22. Frustration Score < 10% (AUS + SUS/SUPR-Q + Lighthouse)
  23. System Usability Scale > 75 (Overall Usability)
  24. Accessible Usability Scale (AUS) > 75 (Accessibility)
  25. Core Web Vitals ≈ 100% (Performance)

Kudos to Vitaly Friedman for the list.

…let's go.

Top Tasks Success > 80% (For Critical Tasks)

Top Tasks Success is a pivotal metric in UX design, focusing on users’ success rate when completing critical tasks within an application or website. This metric is particularly crucial because it directly correlates with user satisfaction and the design’s overall effectiveness.

The benchmark of greater than 80% ensures that most users can complete essential tasks without significant difficulty, reflecting a user-centric design.

Why 80%?

  • Standard of Excellence. Achieving more than an 80% success rate is considered a standard of excellence in UX design. It indicates that the majority of users find the interface intuitive and efficient.
  • Balance Between Ideal and Practical. While a 100% success rate is ideal, it’s often unattainable due to the diversity of user abilities and expectations. Over 80% strikes a balance, offering high usability while accounting for inevitable variations in user experience.

Measuring Top Tasks Success

1. Identify Critical Tasks

Collaborate with stakeholders and use data analytics to identify the most critical tasks to user success and business objectives.

2. User Testing

Conduct user testing sessions through moderated sessions, unmoderated remote tests, or in-field observations, where participants are asked to complete these tasks.

3. Data Collection and Analysis

Collect data on the success rate of these tasks. Success is typically defined as the user completing the task accurately without external assistance.
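
As a rough illustration of this step, here is a minimal Python sketch that turns per-participant pass/fail results into a success rate with an adjusted-Wald confidence interval, one common way to report uncertainty for small usability samples; the sample data and the 95% confidence level are assumptions for the example.

```python
import math

def task_success_rate(results, z=1.96):
    """Success rate plus an adjusted-Wald confidence interval.

    results: list of booleans, one per participant (True = task completed
    without assistance). z=1.96 gives an approximately 95% interval.
    """
    n = len(results)
    successes = sum(results)
    rate = successes / n

    # Adjusted-Wald: add z^2/2 successes and z^2 trials before computing the interval.
    p_adj = (successes + z**2 / 2) / (n + z**2)
    margin = z * math.sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return rate, max(0.0, p_adj - margin), min(1.0, p_adj + margin)

# Example: 14 of 16 participants booked an appointment unaided.
rate, low, high = task_success_rate([True] * 14 + [False] * 2)
print(f"Success rate: {rate:.0%} (95% CI roughly {low:.0%}–{high:.0%})")
```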

4. Quantitative and Qualitative Insights

While the quantitative data will give you the success percentage, qualitative insights from user feedback can provide context and understanding of the barriers faced by those who failed.

Improving the Success Rate

  • Iterative Design. Use the insights gained from testing to improve the design iteratively. Focus on simplifying interfaces, clarifying instructions, and removing obstacles that prevent task completion.
  • Personalization and Adaptability. Consider adaptive UIs that can cater to different user segments, especially if your user base is diverse regarding tech-savviness or cultural background.

Application

In a healthcare app, critical tasks might include booking an appointment or accessing test results. If data shows that only 70% of users are successfully completing these tasks, the UX team might streamline the appointment booking process or make test results more accessible. Following these changes, if the success rate improves to 85%, it would indicate that the UX enhancements are effectively supporting users in completing these essential tasks.

Time to Complete Top Tasks < 60 seconds (For Critical Tasks)

Time to Complete Top Tasks is a critical UX metric focusing on the efficiency and effectiveness of user interactions with a product, particularly for crucial tasks.

Setting a benchmark like “less than 60 seconds for critical tasks” is a strategic approach to ensuring that essential user tasks are not only achievable but also efficient. This metric is particularly relevant in environments where time is of the essence, such as in emergency services applications, high-frequency trading platforms, or any user interface where rapid response is crucial.

Implementation

1. Identify Critical Tasks

The first step is to pinpoint what constitutes a ‘critical task’ in the context of your product. These are typically actions that users frequently perform and are vital to their experience and the product’s core functionality.

2. Set a Clear Benchmark

In this case, the benchmark is completing these tasks in under 60 seconds. This threshold should be based on user needs and business goals, ensuring it’s realistic and relevant.

3. Measure and Analyze

Utilize user testing methods and analytics tools to measure the actual time users take to complete these tasks. Tools like Hotjar or Google Analytics can provide insights into user behavior and task completion times.
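
For illustration, a small sketch that summarizes raw completion times for one critical task against the 60-second benchmark; the choice of median and 90th percentile as summary statistics, and the sample times, are assumptions rather than the output of any specific analytics tool.

```python
from statistics import median, quantiles

def summarize_task_times(times_sec, benchmark=60):
    """times_sec: completion times in seconds for one critical task."""
    times = sorted(times_sec)
    p90 = quantiles(times, n=10)[8]          # 90th percentile
    within = sum(t <= benchmark for t in times) / len(times)
    return {
        "median_s": median(times),
        "p90_s": p90,
        "share_within_benchmark": within,
    }

print(summarize_task_times([42, 55, 61, 48, 95, 38, 58, 72, 50, 44]))
```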

4. Iterative Design and Testing

Based on the data collected, iterate on the design to streamline the user experience. Continuously test these changes to ensure that the modifications lead to a reduction in task completion time.

5. Feedback Loops

Utilize direct user interviews, surveys, or usability testing sessions to incorporate user feedback to understand any hurdles they face in completing tasks quickly.

Application

Consider an online banking application where a critical task is to transfer funds. Initially, user data shows that completing this task takes an average of 90 seconds. The design team then iterates on the user interface to simplify the process, perhaps by reducing the number of steps or enhancing the clarity of instructions. Post-implementation data shows that the task now takes an average of 55 seconds, meeting the set benchmark.

Time to First Success < 90s (For Onboarding)

The Time to First Success metric in the context of onboarding measures the duration it takes for a new user to achieve their first successful interaction with a product.

Setting a benchmark such as “less than 90 seconds for onboarding” emphasizes the importance of a swift and effective introduction to the product. This metric is crucial, especially for applications or services where immediate user engagement and quick value realization are vital to retaining users.

Implementation

1. Define ‘First Success’

Clearly identify what constitutes the ‘first success’ for a user in the onboarding process. This could be completing a profile setup, making a first transaction, or understanding the product’s core functionality.

2. Establish the Benchmark

In most scenarios, the goal is for users to reach this first success within 90 seconds, though the target should be based on an understanding of user capabilities and the complexity of the task.

3. Track and Measure

Use analytics tools to track the time users spend from the beginning of the onboarding process to the point of first success. Tools like Mixpanel or Amplitude can be particularly useful in tracking these user journeys.
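
A minimal sketch of how this could be computed from exported event data, assuming a simple log of (user, event, timestamp) rows; the event names "signup" and "project_created" are hypothetical placeholders for whatever marks the start of onboarding and the first success in your product.

```python
from datetime import datetime

def time_to_first_success(events, start_event="signup", success_event="project_created"):
    """events: iterable of (user_id, event_name, ISO timestamp) tuples,
    e.g. exported from your analytics tool. Returns seconds per user."""
    starts, firsts = {}, {}
    for user, name, ts in events:
        t = datetime.fromisoformat(ts)
        if name == start_event:
            starts.setdefault(user, t)          # keep the first start event
        elif name == success_event and user not in firsts:
            firsts[user] = t                    # keep the first success event
    return {u: (firsts[u] - starts[u]).total_seconds()
            for u in firsts if u in starts}

events = [
    ("u1", "signup", "2024-05-01T10:00:00"),
    ("u1", "project_created", "2024-05-01T10:01:20"),
    ("u2", "signup", "2024-05-01T11:00:00"),
    ("u2", "project_created", "2024-05-01T11:02:30"),
]
print(time_to_first_success(events))  # {'u1': 80.0, 'u2': 150.0}
```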

4. Design for Quick Wins

Optimize the onboarding process to guide users rapidly to their first success by simplifying steps, providing clear instructions, or using progressive disclosure to manage information flow.

5. Iterate Based on Data and Feedback

Use the data gathered to refine the onboarding experience continuously. Pay attention to where users might be getting stuck and adjust accordingly. User feedback can be invaluable in this process, offering direct insights into user experiences and perceptions.

Application

Consider a project management tool where ‘first success’ is defined as creating the first project. Initially, data might show that new users take an average of 120 seconds to reach this point. The UX team could streamline the process by introducing a guided setup or pre-filled templates. After these changes, new data could reveal that users are now able to create their first project in an average of 85 seconds, successfully meeting the benchmark.

Time to Candidates < 120 seconds (For Navigation and Filtering in eCommerce)

Time to Candidates is a UX metric particularly relevant in eCommerce contexts, focusing on the time it takes for users to find potential product options (‘candidates’) through navigation and filtering.

Setting a goal like “less than 120 seconds” underscores the necessity of an efficient, user-friendly browsing experience. This metric is critical in eCommerce platforms where users expect quick access to products that meet their criteria, ensuring a seamless and satisfying shopping experience.

Implementation

1. Defining ‘Candidates’

Clearly establish what constitutes a ‘candidate’ in the context of your eCommerce platform — reaching a product detail page, viewing a filtered list of products, or adding an item to the cart.

2. Setting the Benchmark

Here, the target is for users to find suitable product candidates within 120 seconds. This timeframe should be realistic, considering the complexity of the product range and the user interface.

3. Measurement and Tracking

Employ web analytics tools to track the duration from the moment a user begins navigating or using filters to the point they identify a candidate product. Google Analytics, for example, can track user flow and time spent on specific tasks.

4. Optimizing Navigation and Filtering

Enhance the UX design to enable users to reach their product candidates more efficiently. This might involve streamlining navigation menus, improving filter options, or ensuring that search results are relevant and well-organized.

5. Iterative Improvement Based on Data

Use the collected data to refine the navigation and filtering experience continuously. Look for patterns where users spend too much time or drop off and adjust the interface accordingly.

6. User Feedback Integration

Gathering direct feedback from users through usability testing, surveys, or feedback forms can provide insights into their experiences with navigation and filtering.

Application

In an online clothing store, ‘Time to Candidates’ could be measured from the moment a user starts searching or filtering for a specific type of clothing (like summer dresses) to when they land on a product page that fits their criteria. If initial data shows an average time of 150 seconds, the UX team might simplify the filtering options or improve search result relevance. Post-implementation, a reduction in this time to an average of 115 seconds would indicate a successful optimization against the set benchmark.

Time to Top Candidate < 120 seconds (For Feature Comparison)

The ‘Time to Top Candidate’ metric in UX design, especially in the context of feature comparison, measures how long it takes for users to identify the best option or choice among various alternatives.

Setting a benchmark like “less than 120 seconds” is particularly relevant in platforms or applications where comparison and choice are key aspects of the user journey, such as in e-commerce, software selection tools, or any service where users must weigh different features to make a decision.

Implementation

1. Define ‘Top Candidate’

It’s crucial to clearly identify what constitutes a ‘top candidate’ in your specific context. This could be the best-priced option, the most feature-rich product, or the most suitable service based on user preferences.

2. Benchmark Establishment

Aim for users to identify their top candidate within 120 seconds. This goal should be challenging yet attainable, considering the complexity of the options and the user interface.

3. Measurement Techniques

Utilize analytics tools to track how long users spend from initiating a feature comparison to selecting their top candidate. This might involve tracking clicks, page navigation times, and interactions with comparison tools.

4. Enhancing Comparison Features

Focus on UX improvements that facilitate quicker and more effective comparison. This could include more apparent feature listings, effective use of visual aids like charts or tables, and intuitive navigation between options.

5. Iterative Design Based on Data

Use the data gathered to refine the comparison features. Look for trends in user behavior that indicate confusion or inefficiency and adjust the design accordingly.

6. Incorporating User Feedback

Direct feedback from users can provide invaluable insights. Consider conducting usability tests or surveys explicitly focusing on the feature comparison aspect to gather targeted insights.

Application

For example, on a travel booking website, ‘Time to Top Candidate’ might be measured from the moment a user begins comparing different flight options to the time they select the flight that best fits their criteria. If initial data shows an average time above 120 seconds, the UX team might work on optimizing the comparison interface, perhaps by improving filter options or presenting information more clearly. A reduction in this time in subsequent measurements would indicate a successful enhancement of the UX.

Time to Hit the Limit of Free Tier < 7 days (For Upgrades)

The metric Time to Hit the Limit of Free Tier is particularly relevant for products that offer a freemium model. This metric measures the duration it takes for a user to reach the usage limit of the free version of a product or service, prompting consideration for an upgrade.

Setting a target of “less than 7 days” is strategic for products aiming to quickly demonstrate value to the user, thereby encouraging them to move to a paid tier.

Implementation

1. Define the Free Tier Limit

Clearly outline what the ‘limit’ of the free tier encompasses. This could be a data cap, feature limit, or usage frequency.

2. Set an Optimal Time Frame

In this case, the aim is for users to encounter this limit within 7 days. This time frame should be set to balance providing enough value to engage users while also encouraging them to consider the benefits of upgrading.

3. Track Usage Patterns

Utilize analytics tools to monitor how users interact with the product and how quickly they approach the free tier limit. This tracking should focus on user engagement patterns and feature usage.

4. Optimize User Journey Towards Upgrade

Design the user experience to guide users towards the limits of the free tier naturally. This might involve highlighting the premium features during the initial user journey or sending notifications as the user approaches the limit.

5. Data-Driven Iteration

Use gathered data to fine-tune the freemium model. Ensure the limit is not too restrictive to turn users away nor too lenient to delay upgrades.

6. Feedback and Adjustment

Collect user feedback specifically around their experience with the free tier and their perceptions of the paid offerings. Adjust the model based on this feedback to find the right balance.

Application

Consider a cloud storage service with a free tier limit of 5GB. If analytics reveal that most users take more than 10 days to hit this limit, the UX team might introduce features like high-resolution media storage that encourage quicker utilization of the free space. If subsequent data shows users reaching this limit in an average of 6 days, with an accompanying increase in upgrades, the target is effectively met.

Presets/Templates Usage > 80% per user (To Boost Efficiency)

The metric Presets/Templates Usage measures the percentage of users who utilize built-in presets or templates within a product. Aiming for a usage rate of “greater than 80% per user” is particularly relevant for software or platforms where efficiency and speed are enhanced through the use of these tools. This target underscores the importance of providing effective, time-saving solutions that users readily adopt.

Implementation

1. Define Presets/Templates

Clearly identify what constitutes a preset or template within your product. These could be pre-designed layouts, automated settings, workflow templates, or any feature that simplifies user processes.

2. Set a High Usage Benchmark

Targeting over 80% usage per user indicates a strong reliance on these features for efficiency. This goal should reflect the utility and accessibility of these presets or templates in enhancing user workflows.

3. Measure Utilization Rates

Use analytics tools to track how frequently users engage with presets or templates. This can involve monitoring selection rates, usage frequency, and user preferences in template choices.

4. Optimize Accessibility and Awareness

Ensure that users are aware of these features and can easily access them. This might involve UI improvements, better onboarding processes, or educational content highlighting these tools’ benefits.

5. Iterative Improvement Based on Data

Utilize the data collected to refine and improve the design and functionality of presets and templates. Focus on user feedback to understand what works and what needs enhancement.

6. Encourage Usage through Design

Design the user experience to naturally guide users towards utilizing presets and templates through recommendations, highlighting efficiency gains, or showcasing use cases.

Application

In a graphic design tool, presets might include pre-made design templates or color schemes. If initial data shows that only 60% of users are leveraging these presets, the design team might work on making these options more prominent and educating users about their benefits. After these changes, an increase in usage rate to 85% would indicate success in meeting the benchmark.

Filters Used per Session > 5 per user (Quality of Filtering)

The Filters Used Per Session metric quantifies the average number of different filters a user applies in a single session. Setting a target like “more than 5 filters per user” can be instrumental in contexts where the quality and efficacy of filtering options are crucial, such as in e-commerce platforms, data analysis tools, or content libraries.

A higher number of filters used typically indicates that the filters are relevant, user-friendly, and effective in helping users refine their searches or selections.

Implementation

1. Clarify the Role of Filters

Define the purpose and types of filters available in your product. These could range from basic categorical filters to advanced, custom search options.

2. Establish a Usage Benchmark

Aiming for users to utilize more than 5 filters per session suggests that the filters are engaging and effective in helping users find what they need efficiently.

3. Tracking Filter Engagement

Implement analytics to monitor how users interact with filters during a session. This includes which filters are used, the sequence of their application, and the frequency of their use.
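
A small sketch of how this aggregation might look once filter events are exported from analytics, assuming each record is a (session id, filter name) pair; the session ids and filter names are made up for the example.

```python
from collections import defaultdict

def filters_per_session(filter_events):
    """filter_events: iterable of (session_id, filter_name) pairs, one per
    filter application. Returns the average number of distinct filters used
    per session."""
    sessions = defaultdict(set)
    for session_id, filter_name in filter_events:
        sessions[session_id].add(filter_name)
    counts = [len(filters) for filters in sessions.values()]
    return sum(counts) / len(counts)

events = [
    ("s1", "price"), ("s1", "size"), ("s1", "color"),
    ("s2", "price"), ("s2", "brand"), ("s2", "rating"),
    ("s2", "size"), ("s2", "color"), ("s2", "material"),
]
print(f"Average filters per session: {filters_per_session(events):.1f}")  # 4.5
```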

4. Enhance Filter Visibility and Usability

Ensure that filters are easily accessible and understandable. This might involve UI/UX improvements like better placement, clearer labeling, or providing tooltips.

5. Iterative Design Based on Data

Use the data to continuously refine the filtering options. Pay attention to less-used filters to understand if they are less relevant or harder to use.

6. User Education and Encouragement

Educate users about the benefits of using filters through onboarding guides, tooltips, or contextual help. Encourage exploration and use of different filters to enhance their experience.

Application

In an online bookstore, if the data shows that users, on average, use only 2–3 filters per session, the UX team might reevaluate the filter design. This could involve adding more relevant filter categories, improving filter visibility, or simplifying the filtering process. Subsequent data showing an increase to an average of 6 filters used per session would indicate a successful enhancement in the quality and usability of the filtering system.

Feature Adoption Rate > 80% (Usage of a New Feature Per User)

The Feature Adoption Rate metric measures the percentage of users who use a new feature within a product or service. Setting a target like “greater than 80% usage of a new feature per user” indicates an aspiration for high engagement and utilization of newly introduced features.

This metric is crucial for product teams to evaluate how effectively new features are being received and utilized by the user base, which is essential for ongoing product development and user satisfaction.

Implementation

1. Define Feature Adoption

Clearly articulate what constitutes ‘adoption’ of the new feature. This might involve users actively using the feature, incorporating it into their regular workflows, or using it a certain number of times within a given period.

2. Set an Ambitious Adoption Benchmark

Aiming for more than an 80% adoption rate sets a high standard for feature engagement and relevance. This goal should reflect the feature’s importance and its alignment with user needs.

3. Monitor Usage Patterns

Implement tracking mechanisms to monitor how and when users interact with the new feature. Analytics tools can provide insights into usage frequency, duration, and user demographics.
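
As a sketch, the adoption rate could be computed from exported usage counts like this, assuming you already know which users were active in the measurement window; the one-use threshold and the sample ids are illustrative assumptions.

```python
def feature_adoption_rate(active_users, feature_users, min_uses=1):
    """active_users: set of user ids active in the measurement window.
    feature_users: dict mapping user id -> times the new feature was used.
    A user counts as an adopter after `min_uses` uses."""
    adopters = {u for u, n in feature_users.items()
                if u in active_users and n >= min_uses}
    return len(adopters) / len(active_users)

active = {"u1", "u2", "u3", "u4", "u5"}
usage = {"u1": 3, "u2": 1, "u3": 0, "u5": 7}
print(f"Adoption rate: {feature_adoption_rate(active, usage):.0%}")  # 60%
```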

4. Promote Feature Awareness and Benefits

Ensure users are aware of the new feature through effective onboarding processes, educational content, and marketing efforts. Highlight the feature’s benefits and how it enhances the user experience.

5. Iterative Improvement Based on Feedback

Collect user feedback on the new feature to understand its reception and areas for improvement. Use this feedback for iterative design and feature enhancements.

6. Analyze Adoption Drivers

Understand what drives feature adoption by analyzing user behavior. Identify if the feature solves a specific problem, improves efficiency, or adds significant value to the user experience.

Application

For instance, if a social media platform introduces a new photo-editing feature, the ‘Feature Adoption Rate’ would measure how many users utilize this feature. If initial data shows a 60% adoption rate, the platform might enhance the feature’s visibility, simplify its use, or educate users about its benefits through tutorials or highlighted posts. If subsequent measurements show an increase to 85%, it indicates successful user adoption and positive reception of the feature.

Time to Pricing Quote < 2 weeks (For B2B Systems)

The Time to Pricing Quote metric is particularly relevant for B2B (Business-to-Business) systems where providing timely and accurate pricing quotes is a critical part of the sales process.

Setting a target of “less than 2 weeks” underscores the importance of efficiency in generating pricing quotes, which can significantly impact customer satisfaction and decision-making in the B2B sector.

Implementation

1. Define the Pricing Quote Process

Clearly outline the steps involved in generating a pricing quote in your B2B system. This could include initial customer inquiries, data gathering, quote calculations, and final delivery of the quote.

2. Establish a Timeframe Goal

Aim for the entire process of generating a pricing quote to be completed in under 2 weeks. This timeframe should be realistic, considering the complexity of the products or services and the necessary internal processes.

3. Track the Quote Generation Process

Implement tracking systems to monitor the duration of each step in the pricing quote process. This can involve using CRM (Customer Relationship Management) systems, project management tools, or custom analytics.

4. Optimize the Process for Speed and Efficiency

Analyze each step of the quote generation process to identify bottlenecks or inefficiencies. Streamline workflows, automate where possible, and ensure clear communication channels both internally and with the customer.

5. Iterative Improvement Based on Data

Use data collected to refine the process continuously. Regularly review the average time taken for quote generation and make adjustments to improve efficiency.

6. Customer Feedback Integration

Collect feedback from customers regarding their experience with the quote process. Understanding their perspective can provide valuable insights for further optimization.

Application

Consider a B2B software company that customizes solutions based on client needs. If tracking reveals that the average time to deliver a pricing quote is currently 3 weeks, the company might look into automating parts of the data gathering and quote calculation process. After implementing these changes, if the time to deliver a quote reduces to an average of 1.5 weeks, it indicates successful optimization against the set target.

Application Processing Time < 2 weeks (Online Banking)

The Application Processing Time metric measures the duration from when a customer submits an application (e.g., for a loan, account opening, or credit card) in an online banking system to when the application is fully processed and a decision is communicated.

Setting a target of “less than 2 weeks” highlights the importance of efficiency and responsiveness in the application process, a critical aspect of customer experience in the competitive online banking sector.

Implementation

1. Define the Application Process

Clearly outline the stages involved in processing an application, from submission to decision. This process might include initial data entry, document submission, verification checks, and final approval or rejection.

2. Set a Realistic Timeframe Goal

Aiming for a processing time of under 2 weeks should balance operational capabilities with customer expectations. This goal is crucial for enhancing customer satisfaction and trust in the online banking service.

3. Measure and Monitor Processing Times

Implement systems to track the duration of each stage of the application process. This can involve using banking software analytics, CRM tools, or custom tracking mechanisms.

4. Optimize Process Workflows

Analyze the current application processing workflow to identify and eliminate bottlenecks. This could involve automating specific steps, improving internal communication, or enhancing data processing technologies.

5. Iterative Improvement Based on Data

Regularly review the processing time data to identify areas for improvement. Make continuous adjustments to streamline the process and meet the set timeframe.

6. Gather Customer Feedback

Collect customer feedback regarding their experience with the application process. Understanding their perspective can offer insights for further improvements.

Application

For instance, in an online bank, the average time for processing a personal loan application might initially be 3 weeks. The bank could reduce processing times by introducing automated credit checks and improving document upload interfaces. If subsequent data shows an average processing time of 1.5 weeks, this would indicate success in achieving the target, improving customer satisfaction and competitive advantage.

Default Settings Correction < 10% (Quality of Defaults)

The ‘Default Settings Correction’ metric evaluates the percentage of users who modify the default settings in a product. Setting a target of “less than 10%” suggests that the default settings are well-tailored to meet the needs of the majority of users, indicating a high quality of the initial configuration.

This metric is particularly significant in software and applications where default settings play a crucial role in user experience and efficiency.

Implementation

1. Define Default Settings

Clearly identify what constitutes the default settings within your product. These could be initial configurations, preset options, or standard layouts that are provided to users upon first use.

2. Set a Low Correction Benchmark

Aiming for less than 10% of users to change these settings indicates confidence that the defaults meet the needs of most users. This target should reflect a deep understanding of user preferences and behaviors.

3. Track Default Setting Adjustments

Implement analytics tools to monitor how often and to what extent users change the default settings. This can help identify which defaults are being changed most frequently and why.

4. Optimize Default Settings Based on User Behavior

Analyze user interactions with the default settings to identify patterns or common changes. Use this data to refine the default configurations to better align with user needs.

5. Iterative Improvement Based on Data

Continuously review and adjust the default settings based on user feedback and usage data. Aim to reduce the need for users to make changes to the defaults.

6. User Feedback and Testing

Collect direct feedback about users’ satisfaction with the default settings. Consider conducting A/B testing to evaluate different default configurations.

Application

In a photo editing app, if analytics indicate that 20% of users frequently adjust the default brightness and contrast settings, the UX team might consider altering these defaults in the next update. If subsequent analysis shows that only 8% of users make these adjustments after the update, it would suggest that the new default settings are more closely aligned with user preferences and needs.

Relevance of Top 100 Search Requests > 80% (For Top 3 Results)

The Relevance of Top 100 Search Requests metric measures the percentage of times the top 3 search results are relevant to the user’s query. Setting a target of “greater than 80%” for this metric highlights the importance of accurate and relevant search functionality, especially for platforms where search is a key feature.

This metric is essential in contexts like e-commerce, content libraries, or data-driven applications, where users rely heavily on search to find what they need.

Implementation

1. Identify Top 100 Search Requests

Determine the most frequently made search requests on your platform. This involves analyzing search query data to identify the top 100 queries.

2. Set a High Relevance Benchmark

Aiming for more than 80% relevance for the top 3 search results sets a standard for search accuracy and user satisfaction. It reflects the system’s ability to understand and effectively respond to user queries.

3. Measure Search Result Relevance

Implement a system to evaluate the relevance of the top 3 results for each of the top 100 search queries through automated relevance scoring or manual assessments, possibly involving user feedback.
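
One reasonable reading of this metric is the share of judged-relevant results among the top 3 for each top query; a minimal sketch under that assumption, with hypothetical queries and hand-made relevance judgments, might look like this.

```python
def top3_relevance(judgments):
    """judgments: dict mapping each top search query to a list of booleans
    for its top 3 results (True = judged relevant). Returns the share of
    (query, result) pairs judged relevant."""
    relevant = total = 0
    for query, top3 in judgments.items():
        relevant += sum(top3[:3])
        total += len(top3[:3])
    return relevant / total

judgments = {
    "summer dress": [True, True, False],
    "linen shirt": [True, True, True],
    "rain jacket": [True, False, False],
}
print(f"Top-3 relevance: {top3_relevance(judgments):.0%}")  # 67%
```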

4. Optimize Search Algorithms

Refine the search algorithms and indexing methods based on the relevance data. Focus on improving natural language processing capabilities, query interpretation, and result ranking mechanisms.

5. Iterative Improvement Based on Data

Continuously monitor the relevance metric and adjust the search functionality. Stay responsive to changes in user search behavior and trends.

6. User Feedback Integration

Collect feedback from users regarding the effectiveness of the search feature, which can provide insights into user expectations and areas where the search could better align with user needs.

Application

For an online bookstore, if analysis shows that only 70% of the top 3 results for the top 100 search queries are relevant, the UX team might work on enhancing the search algorithm, perhaps by improving keyword matching or context understanding. After these updates, if the relevance rate increases to 85%, it indicates that the search functionality is more effectively meeting user needs.

Service Desk Inquiries < 35 per week (Poor Design leads to More Inquiries)

The metric Service Desk Inquiries measures the number of inquiries or support requests a service desk receives in a given time frame. Setting a target of “less than 35 inquiries per week” can indicate the overall effectiveness and clarity of a product’s design. In many cases, a high volume of service desk inquiries can signal issues in product design, such as lack of intuitiveness, poor usability, or insufficient information.

Implementation

1. Define Service Desk Inquiry Scope

Clearly outline what types of inquiries are counted in this metric. This typically includes technical support requests, usability questions, and feature-related queries.

2. Set an Inquiry Reduction Benchmark

Aiming for fewer than 35 weekly inquiries sets a standard for product clarity and user self-sufficiency. This goal should be based on historical data and the product’s complexity.

3. Track Inquiries and Categorize Them

Monitor the number and nature of inquiries received by the service desk. Categorizing these inquiries can help identify common user challenges or confusion points in the product.

4. Analyze Inquiry Causes and Correlations

Examine the correlation between inquiry topics and specific aspects of your product design. This analysis can reveal which product areas generate the most questions or problems.

5. Iterative Design Improvements Based on Data

Use insights from service desk inquiries to inform design improvements. Focus on enhancing areas of the product that are causing confusion or difficulties for users.

6. Proactive User Education and Support

Implement strategies to reduce the need for service desk inquiries, such as improving in-app guidance, offering comprehensive FAQs, or creating tutorial videos.

Application

In SaaS platforms, if the service desk receives an average of 50 inquiries per week, primarily about navigation and feature usage, the UX team might redesign the navigation interface and improve in-app guidance. If the number of weekly inquiries drops to an average of 30 following these changes, it indicates a successful reduction in user confusion and an improvement in the product’s overall design.

Form Input Accuracy ≈ 100% (User Input in Forms)

The Form Input Accuracy metric measures the percentage of accurate and correctly completed inputs in forms by users. Setting a target close to “100%” accuracy underscores the importance of designing forms in a manner that encourages correct and complete user input.

This metric is particularly crucial in applications or services where form filling is a key component, such as online banking, e-commerce checkouts, or registration processes. High accuracy rates indicate clear, user-friendly form design and effective guidance provided to users.

Implementation

1. Define Accurate Form Input

Clearly specify what constitutes accurate input for each field in a form. This could include correctly formatted data, complete information, and valid entries.

2. Set a High Accuracy Benchmark

Aiming for an accuracy rate close to 100% sets a high standard for form design and user interaction. This goal reflects the importance of reducing user errors and ensuring that the data collected is reliable and valid.

3. Measure and Monitor Input Accuracy

Implement systems to track the accuracy of user inputs in forms; this can include validation error rates, corrections made by users, and instances of incomplete submissions.
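
A rough sketch of a field-level accuracy calculation, assuming you can replay submitted values against the same validation rules your form applies; the validators and field names here are simplified stand-ins, not production-grade validation.

```python
import re

# Illustrative per-field validators; in practice these come from your
# form library's validation rules.
VALIDATORS = {
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "postcode": lambda v: re.fullmatch(r"\d{5}", v) is not None,
    "amount": lambda v: re.fullmatch(r"\d+(\.\d{1,2})?", v) is not None,
}

def form_input_accuracy(submissions):
    """submissions: list of dicts of raw field values, one per submitted form.
    Returns the share of individual field values that pass validation."""
    valid = total = 0
    for form in submissions:
        for field, value in form.items():
            check = VALIDATORS.get(field)
            if check is None:
                continue
            total += 1
            valid += check(value)
    return valid / total

sample = [
    {"email": "ada@example.com", "postcode": "10115", "amount": "120.50"},
    {"email": "bob[at]example.com", "postcode": "1011", "amount": "75"},
]
print(f"Field-level input accuracy: {form_input_accuracy(sample):.0%}")  # 67%
```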

4. Optimize Form Design for Clarity and Ease of Use

Enhance the UX design of forms to minimize user errors: simplify form layouts, provide clear instructions, use appropriate input field types, and offer real-time validation feedback.

5. Iterative Improvement Based on Data

Regularly review accuracy metrics and user behavior data to identify areas for improvement in form design. Make continuous adjustments to reduce error rates and enhance user experience.

6. User Education and Support

Provide users with guidance on correctly filling out forms, such as tooltip explanations, examples of correctly formatted data, or contextual help.

Application

In an online tax submission portal, if the initial data shows that only 85% of form inputs are accurate, the UX team might revise the form design to include clearer instructions, error messages, and examples of correctly filled fields. If subsequent measurements show an improvement in form input accuracy to 98%, it indicates a successful enhancement in form design and user guidance.

Frequency of Errors < 3 per visit (Mistaps, Double-Clicks)

The Frequency of Errors metric quantifies the average number of user errors per visit, such as mistaps, double-clicks, or incorrect selections. Setting a target of “less than 3 errors per visit” aims to ensure a smooth, error-free user experience. This metric is particularly relevant in mobile applications, web platforms, and interactive software where user interaction is frequent and pivotal to the overall experience.

Implementation

1. Define User Errors

Clearly identify what constitutes an error in the context of your product. Common examples include mistaps on mobile devices, accidental double-clicks, erroneous form entries, or misclicks on web interfaces.

2. Set an Error Reduction Benchmark

Targeting fewer than 3 errors per visit indicates a commitment to a highly intuitive and user-friendly interface. This goal should be based on the complexity of the interface and the nature of user interactions.

3. Monitor and Track Errors

Implement analytics tools to track user errors. This can involve logging mistaps, tracking repeated clicks, and monitoring incorrect form submissions or navigation paths.

4. Analyze Error Causes and Patterns

Investigate the root causes of frequent errors. Determine if they are due to design flaws, interface complexity, user misunderstandings, or technical issues.

5. Iterative UX/UI Improvements

Based on the error analysis, make targeted improvements to the UX/UI design. This could involve redesigning buttons for better accessibility, simplifying navigation paths, or enhancing form validations and feedback.

6. User Testing and Feedback

Conduct user testing sessions to observe how real users interact with the product and where they encounter errors. Incorporate user feedback to further refine the interface.

Application

For a mobile banking app, tracking might reveal that users frequently mistap on certain buttons or have difficulty navigating the menu. If UX adjustments are made to increase button sizes and simplify the menu layout, a subsequent decrease in the average number of errors per visit to below 3 would indicate an improvement in the user interface and interaction design.

Password Recovery Frequency < 5% per user (For Auth)

The Password Recovery Frequency metric tracks the percentage of users who utilize the password recovery feature in an authentication system. Setting a target of “less than 5% per user” aims to ensure that the majority of users can successfully remember and use their passwords without needing recovery assistance.

This metric is crucial for platforms with login systems, as frequent password recovery attempts can indicate issues with the password creation process, user memory burden, or overall user experience.

Implementation

1. Define Password Recovery Use

Specify what qualifies as a password recovery attempt. This typically includes actions like clicking a “forgot password” link, using a password reset email, or contacting support for password assistance.

2. Set a Low Recovery Frequency Benchmark

Aiming for a recovery frequency of less than 5% per user reflects a balance between secure, memorable password policies and user convenience. This target should be based on historical user behavior and industry standards.

3. Monitor Password Recovery Attempts

Implement tracking to monitor the frequency of password recovery attempts. This can involve analytics on the “forgot password” feature usage, support ticket analysis, and tracking password reset emails.

4. Analyze Causes of High Recovery Rates

If the password recovery rate is high, investigate potential causes. It could be due to complex password requirements, unclear password guidelines, or a lack of password management tools.

5. Improve Password System Design

Based on the findings, make targeted improvements to the password system. This might involve simplifying password requirements, enhancing the clarity of password creation guidance, or introducing features like password strength indicators.

6. Educate Users and Promote Best Practices

Offer users tips and best practices for creating memorable yet secure passwords. Consider implementing educational content or reminders during the password creation process.

Application

In an e-commerce platform, if the data shows that 10% of users are using the password recovery feature regularly, the platform might revisit its password policy, perhaps by easing overly strict requirements or by implementing a more user-friendly password creation interface. If subsequent measurements indicate that the password recovery rate drops to 4%, it suggests an improvement in the password system’s user-friendliness and efficacy.

Fake Email Addresses < 5% (For Newsletters)

The Fake Email Address Frequency metric measures the percentage of fake or invalid email addresses entered by users in newsletter sign-up forms.

Setting a target of “less than 5%” aims to ensure the authenticity and quality of the email list, which is crucial for effective email marketing and communication strategies. High rates of fake email submissions can indicate user distrust, privacy concerns, or usability issues with the sign-up process.

Implementation

1. Define Fake Email Addresses

Clearly identify what constitutes a fake email address. This typically includes addresses that are syntactically incorrect, use disposable email domains, or fail to pass email verification processes.

2. Set a Low Fake Email Address Benchmark

Aiming for less than 5% fake email addresses indicates a commitment to acquiring genuine user engagement. This target should reflect a realistic expectation based on the nature of the newsletter content and the audience.

3. Monitor Email Address Validity

Implement systems to check the validity of email addresses at the point of sign-up. This can include real-time email verification checks and the use of CAPTCHA to prevent automated fake entries.
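
A minimal sketch of such a check at sign-up time, combining a basic syntax test with a tiny disposable-domain blocklist; the regular expression and the domain list are deliberately simplified assumptions, and a real implementation would rely on a maintained list or a verification service.

```python
import re

# A tiny illustrative blocklist; real implementations use maintained lists
# of disposable-email domains or a verification service.
DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.com", "10minutemail.com"}
EMAIL_RE = re.compile(r"^[^@\s]+@([^@\s]+\.[^@\s]+)$")

def looks_fake(address: str) -> bool:
    """Flag addresses that are syntactically invalid or use a disposable domain."""
    match = EMAIL_RE.match(address.strip().lower())
    if match is None:
        return True
    return match.group(1) in DISPOSABLE_DOMAINS

def fake_email_rate(signups):
    return sum(looks_fake(a) for a in signups) / len(signups)

signups = ["ada@example.com", "x@mailinator.com", "not-an-email", "bob@shop.io"]
print(f"Fake email rate: {fake_email_rate(signups):.0%}")  # 50%
```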

4. Analyze Causes of Fake Submissions

If the rate of fake email submissions is high, investigate potential reasons. Consider factors such as perceived value of the newsletter, clarity of communication regarding privacy policies, and the user’s perceived effort versus benefit ratio.

5. Optimize Sign-Up Process

Based on the analysis, refine the sign-up process to encourage genuine subscriptions. This could involve simplifying the form, clearly communicating the benefits of the newsletter, and reassuring users about data privacy and usage.

6. Educate and Incentivize Users

Provide clear information about what users will receive by subscribing and consider offering incentives for genuine sign-ups, such as exclusive content, discounts, or entry into a contest.

Application

For a tech blog’s newsletter, if analytics show that 10% of the email addresses provided are fake, the blog might revise its sign-up form to include clearer benefits of the newsletter and implement a double opt-in process. If subsequent data shows a reduction in fake email addresses to 4%, it indicates an improvement in user trust and the quality of sign-ups.

Helpdesk Follow-Up Rate < 4% (Quality of Service Desk Replies)

The Helpdesk Follow-Up Rate metric measures the percentage of service desk inquiries that require follow-up interactions. Setting a target of “less than 4%” aims to ensure that the majority of inquiries are resolved satisfactorily on the first interaction.

A low follow-up rate is indicative of effective, clear, and comprehensive initial responses by the helpdesk team, which is crucial for user satisfaction and operational efficiency.

Implementation

1. Define Helpdesk Follow-Up

Clarify what constitutes a follow-up interaction. This typically includes additional contacts from the user seeking further clarification, reporting unresolved issues, or requiring more information after the initial response.

2. Set a Low Follow-Up Rate Benchmark

Targeting a follow-up rate of less than 4% reflects a commitment to high-quality, first-contact resolution. This goal should be based on current performance metrics and industry standards.

3. Monitor and Track Follow-Up Interactions

Implement a system to accurately track when users re-contact the helpdesk after an initial inquiry. This can involve tagging follow-up emails, calls, or support tickets in the helpdesk system.

4. Analyze Reasons for Follow-Ups

Regularly review cases that required follow-ups to understand the causes. Identify if issues relate to the clarity of responses, completeness of information provided, or specific user misunderstandings.

5. Improve Helpdesk Training and Resources

Based on the analysis, enhance helpdesk training programs to focus on areas that frequently lead to follow-ups. Update internal knowledge bases and resources to provide comprehensive and clear information.

6. Iterative Improvement Based on User Feedback

Collect feedback from users regarding their satisfaction with helpdesk responses. Use this feedback to continually refine the approach and quality of service provided.

Application

In a software company, if the helpdesk follow-up rate is initially 6%, the company might implement additional training for support staff on complex issues and update their response templates to include more detailed information. If subsequent data shows that the follow-up rate decreases to 3%, it indicates an improvement in the quality of the initial responses and overall user satisfaction.

Turn-Around Score < 1 week (Turning Frustrated Users into Happy Users)

The Turn-Around Score measures the effectiveness of converting frustrated users into satisfied ones, with a focus on the time frame within which this transformation occurs. Setting a target of “less than 1 week” indicates a commitment to swiftly addressing user concerns and improving their overall experience.

This metric is especially valuable in service-oriented sectors where user satisfaction is critical and directly impacts brand loyalty and user retention.

Implementation

1. Define ‘Turn-Around’ in User Satisfaction

Clarify what constitutes a successful ‘turn-around’ of a user from being frustrated to satisfied. This could involve resolving a complaint, providing effective solutions to problems, or significantly improving the user experience based on feedback.

2. Set a Swift Resolution Benchmark

Aiming for a turn-around time of less than 1 week demonstrates a proactive approach to user satisfaction. This goal should be realistic, considering the nature of user issues and the capacity of the response team.

3. Track and Monitor User Satisfaction Changes

Implement a system to identify and track users who have reported dissatisfaction. Monitor the actions taken to address their concerns and the time taken to resolve these issues.

4. Analyze Turn-Around Processes

Regularly review cases where users have shifted from dissatisfaction to satisfaction. Understand the effectiveness of different strategies used and the time taken for resolution.

5. Improve Response Mechanisms and Resources

Based on insights, enhance customer support procedures, training, and resources to enable quicker and more effective resolutions.

6. Gather Feedback and Iterate

Collect detailed feedback from users who have experienced the turn-around process. Use this feedback to continuously refine strategies and improve the user experience.

Application

For a streaming service, if data shows that users frequently express frustration over technical glitches, the company might focus on rapid problem resolution and user communication. If the average time from a reported issue to its resolution and subsequent positive feedback drops from two weeks to under one week, it indicates a successful improvement in turning frustrated users into satisfied ones.

Environmental Impact < 0.3g/Page Request (Sustainability)

The Environmental Impact metric quantifies the carbon footprint of digital products, measured in terms of grams of CO2 emitted per page request. Setting a target of “less than 0.3g per page request” emphasizes the commitment to sustainability and the reduction of the environmental impact of digital services.

This metric is increasingly relevant as awareness grows about the carbon footprint of digital operations, especially for websites and online platforms.

Implementation

1. Define Environmental Impact Measurement

Establish how the environmental impact will be measured, focusing on CO2 emissions related to page requests. This includes the energy consumption of servers, data transmission, and end-user devices.

2. Set a Sustainable Impact Benchmark

Aiming for less than 0.3g of CO2 per page request sets a standard for environmental responsibility in digital operations. This goal should be challenging yet attainable, considering current technological capabilities and best practices in sustainable web design.

3. Monitor and Calculate Impact

Implement tools or partner with services that can accurately calculate the carbon footprint of your digital operations. This might involve analyzing server energy consumption, data transfer efficiency, and optimization of front-end resources.
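
As a back-of-the-envelope sketch, emissions per page request can be approximated from the page weight and two conversion factors: energy per gigabyte transferred and grid carbon intensity. The factors below are illustrative assumptions roughly in line with published sustainable-web-design estimates; a real audit should use a dedicated calculator or provider data.

```python
def co2_per_page_request(page_weight_bytes,
                         kwh_per_gb=0.81,        # assumed energy intensity of data transfer
                         grid_g_co2_per_kwh=442  # assumed average grid carbon intensity
                         ):
    """Rough grams of CO2 per page request based on transferred bytes.

    Both conversion factors are illustrative assumptions; published
    methodologies and your hosting provider's energy mix will give
    different numbers.
    """
    gigabytes = page_weight_bytes / 1e9
    return gigabytes * kwh_per_gb * grid_g_co2_per_kwh

# A 1.2 MB page under these assumptions:
print(f"{co2_per_page_request(1_200_000):.2f} g CO2 per request")  # ~0.43 g
```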

4. Optimize for Reduced Environmental Impact

Focus on reducing the energy consumption and emissions associated with your digital product. Techniques can include optimizing images and videos, reducing data transfer, using energy-efficient hosting solutions, and minimizing the use of resource-intensive scripts and frameworks.

5. Iterative Improvement Based on Data

Continuously monitor the environmental impact and make improvements to reduce emissions. Stay informed about new technologies and strategies that can further reduce the carbon footprint.

6. Educate Users and Stakeholders

Raise awareness among users and stakeholders about the environmental impact of digital services and the steps being taken to minimize this impact. Promote sustainable practices within the industry.

Application

For a large e-commerce platform, initial assessments might reveal an average of 0.5g of CO2 per page request. By implementing server-side optimizations, reducing image sizes, and using more efficient code, they could potentially lower this figure to 0.25g per page request, meeting the sustainability target.

Frustration Score < 10% (AUS + SUS/SUPR-Q + Lighthouse)

The Frustration Score metric is a comprehensive measure of user frustration, derived from a combination of user experience (UX) evaluation tools such as the Accessible Usability Scale (AUS), the System Usability Scale (SUS) / Standardized User Experience Percentile Rank Questionnaire (SUPR-Q), and Google Lighthouse.

Setting a target of “less than 10%” for this score aims to ensure a high level of user satisfaction and minimal frustration with the product or service.

Implementation

1. Define Frustration Score Components

Clearly outline how the Frustration Score is calculated, incorporating elements from AUS, SUS/SUPR-Q, and Lighthouse. Each of these tools offers unique insights into different aspects of user experience, such as usability, satisfaction, and technical performance.

2. Set a Low Frustration Benchmark

Aiming for a frustration score below 10% indicates a strong commitment to delivering a positive and seamless user experience. This target should be ambitious, reflecting a high standard of user interface design and functionality.

3. Collect and Analyze Data

Implement surveys and tools to gather data from users (AUS and SUS/SUPR-Q) and perform technical audits (Lighthouse) to assess performance aspects contributing to user frustration.

4. Integrate Feedback into UX Design

Use the insights gained from these tools to identify areas of the product that are causing frustration. Focus on improving these areas, whether they relate to usability, satisfaction, or technical performance.

5. Iterative Improvement Based on Scores

Regularly review the Frustration Score and make targeted improvements to the product. Prioritize changes that have the potential to significantly reduce user frustration.

6. Educate and Communicate with Users

Keep users informed about improvements and updates made to address their frustrations. Transparency in addressing user feedback can further enhance user satisfaction.

Application

For a mobile banking app, if the initial Frustration Score is 15%, indicating a higher level of user dissatisfaction, the app developers might focus on redesigning complex interfaces, improving load times, and enhancing the overall usability. Upon re-evaluation, if the score reduces to 8%, it would suggest that the changes made have successfully reduced user frustration and enhanced the overall user experience.

System Usability Scale > 75 (Overall Usability)

The System Usability Scale (SUS) is a widely recognized tool for measuring the usability of a product or system. Setting a target score of “greater than 75” on the SUS scale indicates a high level of usability, reflecting a user-friendly, efficient, and accessible product.

The SUS score, ranging from 0 to 100, is determined through a standardized questionnaire that evaluates various aspects of user experience.

Implementation

1. Understand the SUS Framework

The SUS consists of a 10-item questionnaire with five response options, ranging from “Strongly agree” to “Strongly disagree.” The questions are designed to assess the overall usability of the system, including aspects of learnability and user satisfaction.

2. Set a High Usability Benchmark

Aiming for a SUS score above 75 sets a standard for superior usability. This target is ambitious and indicates that the system is more than just acceptable, providing a good user experience.

3. Conduct SUS Surveys

Administer the SUS questionnaire to a representative sample of users after they have had sufficient interaction with the product. Ensure that the sample size is large enough to provide statistically significant results.

4. Analyze and Interpret SUS Scores

Calculate the SUS score based on user responses. Remember that each item’s score contribution ranges from 0 to 4, with the total score then multiplied by 2.5 to convert it to a 100-point scale.
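
A small sketch of that scoring rule in Python, assuming responses are recorded 1–5 in the standard SUS item order (odd-numbered items positively worded, even-numbered items negatively worded); the sample responses are invented.

```python
def sus_score(responses):
    """responses: ten answers on a 1-5 scale (1 = strongly disagree,
    5 = strongly agree), in standard SUS item order."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)  # each item contributes 0-4
    return total * 2.5                               # scale to 0-100

def average_sus(all_responses):
    scores = [sus_score(r) for r in all_responses]
    return sum(scores) / len(scores)

participants = [
    [5, 2, 4, 1, 5, 2, 4, 2, 5, 1],
    [4, 2, 4, 2, 4, 1, 5, 2, 4, 2],
]
print(f"Average SUS: {average_sus(participants):.1f}")  # 83.8
```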

5. Use SUS Data for UX Improvements

Analyze the results to identify areas where the product excels in usability and areas that need improvement. Pay attention to specific items in the questionnaire where scores are low.

6. Iterative Design Process Based on SUS Feedback

Use insights from SUS surveys to inform UX design decisions. Focus on enhancing aspects of the system that are critical to usability and overall user satisfaction.

Application

In an e-commerce platform, an initial SUS survey might reveal a score of 70. Based on specific feedback, the UX team might improve the checkout process and enhance the search functionality. After implementing these changes, a subsequent SUS survey showing a score of 78 would indicate a successful improvement in the system’s usability.

Accessible Usability Scale (AUS) > 75 (Accessibility)

The Accessible Usability Scale (AUS) is a metric designed to evaluate the accessibility aspects of a system or product in conjunction with its usability.

Setting a target score of “greater than 75” on the AUS scale emphasizes the importance of creating a product that is not only user-friendly but also accessible to people with a wide range of abilities, including those with disabilities. A high AUS score indicates that a product is both usable and accessible, aligning with inclusive design principles.

Implementation

1. Understand the AUS Framework

Similar to the System Usability Scale (SUS), the AUS is a questionnaire-based tool, but it is tailored to assess accessibility in addition to usability. It includes questions that address key aspects of accessibility, such as ease of use for users with disabilities, availability of assistive features, and compliance with accessibility standards.

2. Set a High Accessibility and Usability Benchmark

Targeting an AUS score above 75 signifies a strong commitment to creating a product that is both usable and accessible to a diverse user base, including those with disabilities.

3. Conduct AUS Surveys

Administer the AUS questionnaire to users, including those who use assistive technologies or have varying abilities. Ensure that the survey reaches a diverse and representative sample of your user base.

4. Analyze and Interpret AUS Scores

Calculate the AUS score from user responses. Like the SUS, the AUS typically involves scoring responses on a Likert scale and converting the total to a 100-point scale.

5. Use AUS Data for Inclusive Design Improvements

Analyze the survey results to pinpoint areas where the product excels or lacks in accessibility. Pay close attention to feedback from users with disabilities or those who use assistive technologies.

6. Iterative Design Process Based on AUS Feedback

Implement design changes based on insights from the AUS survey. Focus on enhancing accessibility features, improving compliance with accessibility guidelines (like WCAG), and ensuring that the product is usable by people with a variety of needs and abilities.

Application

For a digital learning platform, an initial AUS survey may yield a score of 70. To improve this score, the platform might enhance keyboard navigability, improve screen reader compatibility, and add more alternative text for images. If a subsequent AUS survey shows an improved score of 78, it indicates successful enhancements in both usability and accessibility.

Core Web Vitals ≈ 100% (Performance)

Core Web Vitals are a set of specific factors defined by Google that are crucial to a website’s overall user experience. These vitals focus on three aspects: loading performance (Largest Contentful Paint, LCP), interactivity (First Input Delay, FID), and visual stability (Cumulative Layout Shift, CLS).

Aiming for a score close to “100%” in Core Web Vitals indicates striving for optimal website performance, which significantly impacts user satisfaction and engagement.

Implementation

1. Understand Core Web Vitals Components:

  • Largest Contentful Paint (LCP): Measures the time taken for the largest content element on the page to load.
  • First Input Delay (FID): Measures the time from a user’s first interaction with the page (e.g., clicking a link) to the time when the browser is able to respond to that interaction.
  • Cumulative Layout Shift (CLS): Measures the amount of unexpected layout shift of visual page content.

2. Set a High Performance Benchmark

Targeting near-perfect scores in these vitals reflects a commitment to providing a fast, responsive, and stable browsing experience. This is critical for user retention and SEO rankings.
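
One practical way to express “≈ 100%” is the share of real-user visits where all three vitals fall in the “good” range. A sketch under that assumption is below, using the commonly published thresholds (LCP ≤ 2.5 s, FID ≤ 100 ms, CLS ≤ 0.1); verify both the thresholds and the pass rule against Google’s current guidance.

```python
# Widely published "good" thresholds at the time of writing; check Google's
# current guidance before relying on them.
GOOD_THRESHOLDS = {"lcp_ms": 2500, "fid_ms": 100, "cls": 0.1}

def good_visit_share(visits):
    """visits: list of dicts with 'lcp_ms', 'fid_ms' and 'cls' field data,
    e.g. collected via a real-user-monitoring script. Returns the share of
    visits where all three metrics fall in the 'good' range."""
    def is_good(v):
        return all(v[k] <= GOOD_THRESHOLDS[k] for k in GOOD_THRESHOLDS)
    return sum(is_good(v) for v in visits) / len(visits)

visits = [
    {"lcp_ms": 1800, "fid_ms": 40, "cls": 0.04},
    {"lcp_ms": 3200, "fid_ms": 60, "cls": 0.02},
    {"lcp_ms": 2100, "fid_ms": 90, "cls": 0.08},
]
print(f"Visits with all vitals 'good': {good_visit_share(visits):.0%}")  # 67%
```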

3. Monitor and Track Core Web Vitals

Use tools like Google’s PageSpeed Insights, Lighthouse, or the Web Vitals Chrome extension to measure these metrics. Regular monitoring is essential for maintaining high performance.

4. Optimize Website Based on Vital Scores:

  • For LCP, focus on optimizing loading times by compressing images, leveraging browser caching, and improving server response times.
  • To improve FID, reduce JavaScript execution time and minimize main thread work.
  • To enhance CLS, ensure visual elements are stable and don’t shift during page loading.

5. Iterative Improvement Based on Performance Data

Regularly review Core Web Vitals scores and implement changes to address performance issues. This might involve optimizing content delivery, refining code, and modifying the design to improve stability.

6. Educate Teams on Best Practices

Ensure that all teams involved in website development understand the importance of Core Web Vitals and are equipped with best practices for optimizing these metrics.

Application

A news website initially scores 70% in Core Web Vitals, with issues primarily in LCP and CLS. By optimizing image sizes, implementing lazy loading, and stabilizing ad placements, the website’s Core Web Vitals score improves to 95%, indicating a significantly enhanced user experience.

You survived this far!

The strategic use of KPIs and UX metrics is pivotal in making design decisions that are not only creative but also deliberate, measurable, and aligned with business goals. In essence, these metrics provide a lens through which the impact of UX can be viewed and evaluated in concrete, business terms. By embracing this approach, design teams can demonstrate their value more effectively and contribute to the broader business objectives in a meaningful way.


If you like my work and want to support my efforts, you can buy me a coffee on Ko-fi: https://ko-fi.com/outmn

More to read:

outmn.medium.com

