How Long Are Typical Unmoderated UX Tasks?
Jeff Sauro, PhD | Jim Lewis, PhD

A common logistical consideration when planning a task-based usability study is how much time to allow for each task.

Many usability studies (especially benchmark studies) suffer from trying to do too many things. That includes asking participants to attempt too many tasks. It’s understandable why tasks get packed in—even low-cost usability testing takes time and money, so you want to make the most of the effort. This is especially the case when participants are difficult or expensive to recruit.

You want to cover as many tasks as possible, but if you include too many, participants won’t get through them in the allotted time. When planning the study, it helps to know how long typical tasks usually take to complete.

Of course, task duration depends on the context. The application being tested, participants’ roles, research goals, research protocols (e.g., think-aloud, non-think-aloud, tree test), and the data collection mode (moderated or unmoderated) will all play a role.

We had a similar challenge a few years ago when we investigated the “average” task completion rate, but task times seem even more context dependent than completion rates.

In this case, though, we’re analyzing completion times not to set a performance benchmark (which is highly context dependent) but rather to get an idea about typical task duration for planning purposes. This will help researchers quickly calculate how many tasks they may plan to include given a study’s time constraints.

We narrowed our focus to unmoderated studies because participants can get stuck or flounder for a long time without a moderator. We also limited ourselves to traditional usability tasks with small numbers of precisely defined completion goals (e.g., book a hotel room on a specified date in a specified location within a specified price range). In other words, these are pragmatic tasks (where shorter task times reflect more efficient design) rather than hedonic activities (where longer times reflect more engagement as users interact with a product).


Discussion and Summary

Using our MUiQ® platform to conduct unmoderated UX studies, we collected data from 1,222 tasks, as shown in Table 1. These tasks were composed of think-aloud (TA), non-TA, and tree test tasks across 112 different studies (a mix of desktop and mobile websites, mobile apps, and prototypes) conducted from 2021 to 2023.

The key takeaways from our analysis of 1,222 unmoderated task times are:

Median times for different research protocols are significantly different. The task time distributions for the three research protocols are different enough that researchers should use the estimates in Tables 2 and 3 when planning a TA, non-TA, or tree test study rather than the overall median and interquartile percentiles.
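
If you have logged task times from your own past unmoderated studies, you can derive the corresponding statistics for your context directly from that data. Below is a minimal Python sketch; the task times shown are hypothetical illustrative values, not values from our dataset. Because task-time distributions are typically positively skewed, percentiles are more useful than means for this kind of planning.

    import numpy as np

    # Hypothetical task times (in seconds) from a previous unmoderated non-TA study.
    task_times = np.array([38, 41, 47, 52, 55, 63, 71, 78, 85, 94, 110, 132, 180, 240])

    median_time = np.percentile(task_times, 50)  # typical (middle) task time
    p75_time = np.percentile(task_times, 75)     # planning estimate: most tasks finish faster

    print(f"Median: {median_time:.0f} s, 75th percentile: {p75_time:.0f} s")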

Use the 75th percentile for planning. When you plan an unmoderated study and lack historical data about the task, we generally recommend using the 75th percentile for the planned research protocol. This means most tasks will take less than these times by task type:

  • Tree Tests: 20 seconds
  • Non-TA Tasks: ~90 seconds
  • TA Tasks: ~120 seconds

These estimates don’t include pre-task or post-task questions. The task times in the analysis are only the times participants spend attempting a task and don’t include reading instructions or answering post-task questions. Both activities will add some time to an overall study depending on the length and complexity of instructions and the number of questions.
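
To turn these numbers into a task budget, some quick arithmetic is enough. The sketch below uses the 75th-percentile times from the list above; the per-task overhead for reading instructions and answering post-task questions and the session length are illustrative assumptions, not values from our data, so adjust them to your own study.

    # Rough planning sketch: how many tasks fit in a session budget?
    # The 75th-percentile times come from the list above; the overhead and
    # session length are illustrative assumptions, not measured values.

    P75_SECONDS = {"tree_test": 20, "non_ta": 90, "ta": 120}

    def max_tasks(protocol: str, session_minutes: float, overhead_seconds: float = 45) -> int:
        """Estimate how many tasks of a given protocol fit in the session budget."""
        per_task_seconds = P75_SECONDS[protocol] + overhead_seconds
        return int(session_minutes * 60 // per_task_seconds)

    # Example: a 20-minute unmoderated think-aloud session
    print(max_tasks("ta", session_minutes=20))  # -> 7 tasks under these assumptions

With the same assumptions, a 20-minute tree test budget would allow roughly 18 tasks (20 × 60 / (20 + 45) ≈ 18.5), which is usually far more than you would want participants to attempt anyway.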

There are no moderated data in these estimates. This analysis did not include any data from moderated studies (tasks with an attending researcher). We included only data from unmoderated studies collected using the MUiQ platform.

The dataset is not necessarily broadly representative. Although we created a large dataset of task times from unmoderated TA, non-TA, and tree test studies, a broader dataset could likely include longer times. We created our dataset from the types of UX studies we typically conduct, which may be different from other UX research contexts where pragmatic tasks are more complex (e.g., coding an error-free speech recognition app) or there is more of a focus on hedonic activities (e.g., “spend as much time as you want to browse the website to see if you find anything interesting”).

These data do not define “good” task times. Be careful not to extrapolate these times into benchmarks. An unmoderated task that takes 40 seconds is shorter than the median time in our dataset, but that doesn’t mean it’s necessarily a fast or efficient task experience. For example, clicking a login button or entering a search string should take much less than 40 seconds.

Read the full article on MeasuringU's Blog


MUiQ Feature Highlight: Unmoderated Think-Aloud Tasks

Our MUiQ Platform allows researchers to easily collect qualitative insights by asking participants to think aloud during tasks.

For think-aloud studies, MUiQ records one video of the entire participant experience, capturing the participant’s session from pre-study questions all the way through post-study questions.

Researchers can add think-aloud tasks to any usability study and customize the various recording options, including:

  • URLs/Clicks
  • Participant Audio
  • Participant Video (optional)
  • Privacy Blur (optional)

Think-aloud videos are all housed in a custom MUiQ results dashboard that shows each participant’s video along with a clip editor for quickly downloading video clips of key insights.

Reach out today for more information on how your team can use MUiQ!

