Surveys with "Repetitive Questions"? are BAD!

Surveys with "Repetitive Questions" are BAD!

According to a new study from the University of California, Riverside, surveys that ask too many similarly structured questions tire respondents and produce less accurate results.

According to the study, people tire of questions that vary only slightly and tend to give similar answers to all of them as the survey goes on. Marketers, legislators, and researchers who use long surveys to anticipate consumer or voter behavior would get more accurate results if they crafted surveys to elicit trustworthy, original responses, the researchers say.

"We wanted to know if collecting more data in surveys is always better, or if asking too many questions causes respondents to provide less relevant responses as they adjust to the survey," said first author Ye Li, an assistant professor of management at UC Riverside. "Could this, paradoxically, lead to more questions being asked but worse results?"


While it's natural to think that more data is always better, the authors questioned whether the decision-making processes respondents use to answer a series of questions might change over time, especially when the questions all share the same format.

The study looked at quantitative surveys like those used in market research, economics, and public policy research to figure out what individuals think about various topics. A large number of structurally similar questions are frequently asked in these surveys.

The researchers analyzed four experiments in which participants answered questions about choice and preference.

Respondents in the surveys adapted their decision-making as they answered more repetitive, similarly structured choice questions, a process the authors call "adaptation." This means they processed less information, learned to weigh certain attributes more heavily, or adopted mental shortcuts for combining attributes.

In one of the studies, respondents were asked about their preferences for varying configurations of laptops. These were the sort of questions marketers use to determine whether customers are willing to sacrifice a bit of screen size in return for increased storage capacity, for example.

"When you're asked questions over and over about laptop configurations that vary only slightly, the first two or three times you look at them carefully but after that maybe you just look at one attribute, such as how long the battery lasts. We use shortcuts. Using shortcuts gives you less information if you ask for too much information," said Li.

While humans are known to adapt to their environment, most behavioral-research methods used to measure preferences have underappreciated this fact.

"In as few as six or eight questions people are already answering in such a way that you're already worse off if you're trying to predict real-world behavior," said Li. "In these surveys if you keep giving people the same types of questions over and over, they start to give the same kinds of answers."


The findings suggest some tactics that can increase the validity of data while also saving time and money. Process-tracing, a research methodology that tracks not just the quantity of observations but also their quality, can be used to diagnose adaptation and identify when it threatens validity. Adaptation could also be reduced or delayed by varying the format of the task or by adding filler questions or breaks. Finally, the research suggests that to maximize the validity of preference-measurement surveys, researchers should combine multiple means of measurement, such as questions that involve choosing between options available at different times, matching questions, and a variety of contexts.
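As a rough sketch of how a couple of those tactics might look in practice, the snippet below mixes two question formats and inserts a filler item after every few substantive questions. It is an illustrative outline only, not the authors' procedure; the question texts, filler items, and spacing are invented for the example.

```python
import random

# Hypothetical item pools (illustrative; not from the paper).
choice_questions = [f"Laptop choice question {i}" for i in range(1, 9)]
matching_questions = [f"Price-matching question {i}" for i in range(1, 5)]
filler_items = ["How is your day going so far?", "Take a 30-second break before continuing."]

def build_survey(choice_qs, matching_qs, fillers, filler_every=3):
    """Shuffle the two question formats together and insert a filler item
    after every few substantive questions to break up the repetition."""
    pool = list(choice_qs) + list(matching_qs)
    random.shuffle(pool)
    survey = []
    for i, question in enumerate(pool, start=1):
        survey.append(question)
        if i % filler_every == 0:
            survey.append(random.choice(fillers))
    return survey

random.seed(0)
for item in build_survey(choice_questions, matching_questions, filler_items):
    print(item)
```

The design choice here is simply to keep any one question format from running uninterrupted long enough for respondents to settle into a shortcut.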

"The tradeoff isn't always obvious. More data isn't always better. Be cognizant of the tradeoffs," said Li. "When your goal is to predict the real world, that's when it matters."

Li was joined in the research by Antonia Krefeld-Schwalb, Eric J. Johnson, and Olivier Toubia at Columbia University; Daniel Wall at the University of Pennsylvania; and Daniel M. Bartels at the University of Chicago. The paper, "The more you ask, the less you get: When additional questions hurt external validity," is published in the Journal of Marketing Research.
