A critique of consumer neuroscience in ad testing
Here's a brief excerpt from an important critique that we wrote on best practices in applying consumer neuroscience to ad testing.
A few weeks ago, I was telling a colleague about the findings from a recent study, published in the Journal of Marketing Research, on “neuromarketing” techniques applied to advertising testing (Venkatraman et al., 2015).
I told him that the authors tested five techniques (including implicit association techniques) and concluded that only fMRI added predictive value beyond traditional research methods in forecasting ad success. Given what he has seen Sentient produce on the additive predictive accuracy of implicit measures over the past decade, he scoffed: “How did they test the technique?”
“Get this,” I said. “They took a ‘salient’ image from each of the 30-second spots and used it as a representation of the ad. Then they captured the implicit valence associated with that specific image and used it as a measure of the ‘desirability’ of the ad itself.”
“That’s a lot of weight placed on one image from an ad,” he said. “And beyond that, it’s not even a brand impression impact variable! And they expected that to be predictive of the success of the ad?”
“Yes. Can you believe it? And this is a peer-reviewed article!”
“Makes you wonder who the ‘peers’ are reviewing the article.”
“Exactly. I think this is a case where the scientific-practitioners, who have been applying these techniques for years, know more about the appropriate application of behavioral science techniques to business than the pure-play academics.”
“You know,” he said, “when you think about it, it’s akin to testing a set of explicit questions that you’ve created, finding that they are not predictive of some behavior of interest, and concluding that the method of ‘questionnaire’ does not have additive predictive value beyond other measures.”
“Exactly,” I laughed. “Wouldn’t you wonder whether you were asking the right questions before concluding that the entire approach had no added value? But it does speak volumes about where we are as an industry in applying these new methods, as well as where academia is in identifying the appropriate peers to recruit for designing and reviewing these studies.”
“I wonder how long it will take for that peer review paradigm to shift,” he reflected.
“I’m not sure,” I said. “But I do know that continuing to publish applied validation studies and focusing on scientific integrity in our methods, rather than trying to make a quick buck on ‘shiny-new-object’ trends with pseudo-scientific techniques, is surely the path to long-lasting impact on our industry and the advancement of applied behavioral science.”
“To be fair to the authors,” I continued, “this is a really important study. It represents the first real foray into designing a study that evaluates the ability of multiple non-conscious methods to predict real-world behavioral metrics. For that, it should be lauded. And naturally, we should expect to find some failings in the design and an over-eagerness in the conclusions drawn. I’ve done that myself on many occasions.”
To advance the industry, we need to recognize the vision of researchers like Venkatraman et al. while simultaneously challenging their work. We attempt to do both in our full critique.
Free access to the full critique on our blog: https://bit.ly/implicitadtesting
Comment from a reader (UX Researcher | Psychologist | Scientist):

It’s a very interesting conclusion, and I absolutely agree it’s no small challenge to compare results from different methodologies this way! From my point of view, EEG was not used to its full advantage, for a few reasons:
1. Spectral analysis (alpha waves) is a very broad measure; why were ERPs not taken into account?
2. It measured cognitive input (alpha waves in the occipital lobe) but affective output (frontal asymmetry), meaning the last part of the cognition-affect-cognitive processing link was assumed, while the existence of that link was analysed with the fMRI data (the amygdala was a mediator).
3. The results section comparing traditional methodology to EEG data is missing.

Thanks for sharing, Aaron. That was an exceptional read.