A critique of consumer neuroscience in ad testing

Here's a brief excerpt from an important critique that we wrote on best practices in applying consumer neuroscience to ad testing.

A few weeks ago, I was telling a colleague about the findings from a recent study, published in the Journal of Marketing Research, on “neuromarketing” techniques applied to advertising testing (Venkatraman et al., 2015).

I told him that they tested five techniques (including implicit association techniques) and concluded that only fMRI added predictive value above and beyond traditional research methods in predicting ad success. Given what he has seen Sentient produce on the additive predictive accuracy of implicit measures over the past decade, he scoffed: “How did they test the technique?”

“Get this,” I said. “They took a ‘salient’ image from each of the 30-second spots and used it as a representation of the ad. Then they captured the implicit valence associated with that specific image and used it as a measure of the ‘desirability’ of the ad itself.”

“That’s a lot of weight placed on one image from an ad,” he said. “And beyond that, it’s not even a brand impression impact variable! And they expected that to be predictive of the success of the ad?”

“Yes. Can you believe it? And this is a peer-reviewed article!”

“Makes you wonder who the ‘peers’ are reviewing the article.”

“Exactly. I think this is a case where the scientist-practitioners, who have been applying these techniques for years, know more about the appropriate application of behavioral science techniques to business than the pure-play academics.”

“You know,” he said, “when you think about it, it’s akin to testing a set of explicit questions that you’ve created, finding that they are not predictive of some behavior of interest, and concluding that the method of ‘questionnaire’ does not have additive predictive value beyond other measures.”

“Exactly,” I laughed. “Wouldn’t you wonder whether you were asking the right questions first, before concluding that the entire approach had no added value? But it does speak volumes about where we are as an industry in applying these new methods, as well as where academia is in identifying the appropriate peers to recruit for designing and reviewing these studies.”

“I wonder how long it will take for that peer review paradigm to shift,” he reflected.

“I’m not sure,” I said, “but I do know that continuing to publish applied validation studies, and focusing on scientific integrity in our methods rather than trying to make a quick buck on ‘shiny-new-object’ trends with pseudo-scientific techniques, is surely the path to long-lasting impact on our industry and the advancement of applied behavioral science.”

“To be fair to the authors,” I continued, “this is a really important study. It represents the first real foray into designing a study that evaluates multiple non-conscious methods on their ability to predict real-world behavioral metrics. For that, it should be lauded. And naturally, we should expect to find some failings in the design and an over-eagerness in the conclusions drawn. I’ve done that myself on many occasions.”

To advance the industry, we need to recognize the vision of researchers like Venkatraman et al. while simultaneously challenging their work. We attempt to do both in our full critique.

Free access to the full critique on our blog: https://bit.ly/implicitadtesting 

Laura Zaikauskaitė, PhD

UX Researcher | Psychologist | Scientist

It's a very interesting conclusion, and I absolutely agree it's no small challenge to compare results from different methodologies this way! From my point of view, EEG was not used to its full advantage, for a few reasons:

1. Spectral analysis (alpha waves) is a very broad measure; why were ERPs not taken into account?

2. It measured cognitive input (alpha waves in the occipital lobe) but affective output (frontal asymmetry), meaning that the last part of the cognition-affect-cognitive processing link was assumed, while the existence of that link was analysed with the fMRI data (the amygdala was a mediator).

3. The results section comparing the traditional methodology to the EEG data is missing.

Thanks for sharing, Aaron; that was an exceptional read.
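For context on the frontal asymmetry measure mentioned in the comment: frontal alpha asymmetry is commonly computed as the difference in log alpha-band power between a right and a left frontal electrode. The sketch below is a minimal illustration of that convention, not the analysis pipeline used in Venkatraman et al. (2015); the channel names (F3/F4), the 8-13 Hz band, the sampling rate, and the synthetic data are illustrative assumptions.

```python
# Minimal sketch: frontal alpha asymmetry as ln(alpha power, right frontal)
# minus ln(alpha power, left frontal). Illustrative only; band, channels,
# and sampling rate are assumptions, not the study's actual pipeline.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean power spectral density in the alpha band for a single EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    """ln(right) - ln(left), conventionally F4 minus F3."""
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Illustrative usage on synthetic data: 10 s of noise at 256 Hz per channel.
fs = 256
rng = np.random.default_rng(0)
f3 = rng.standard_normal(10 * fs)
f4 = rng.standard_normal(10 * fs)
print(frontal_alpha_asymmetry(f3, f4, fs))
```

The log-difference form is one common convention; in practice the asymmetry score is usually computed over artifact-cleaned, baseline-referenced epochs rather than raw signals as in this toy example.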

