Predicting the unpredictable.
Pollsters were wrong about the outcome of yesterday's election in Quebec. Why would they be any better at predicting election results than market researchers are at predicting market success?
Economists look at what people do, not what they say, at how they behave, not how they feel. They argue that people don't always understand their own true desires and feelings, and, even if they do, they might not give a straight answer.
Unlike economists, pollsters tracking voting intentions can't make predictions based on what people do. They need to ask how people intend to vote and then allocate the undecided and those who refuse to answer. They are generally pretty good at this. One of these pollsters recently declared his firm was "the most accurate of major Canadian polling firms".
That was before yesterday's election in Quebec, where the winning CAQ got six points more than most polls predicted and the Liberals got six points less. Today, pollsters are struggling to explain why they got it so wrong. Some say they were the least wrong, that people changed their minds in the voting booth, or that the low turnout (66%, compared to 71% in 2014) is partly responsible. The most candid answer comes from Université de Montréal sociology professor Claire Durand, who told La Presse that "it's the worst polling error since the beginning of polling." [The translation is mine.] She added that there are no simple explanations and that it will take an independent study to understand what happened.

Observers such as UQAM communications professor Bernard Motulsky rightly point out in La Presse today that "there is what you say and what you do. A healthy skepticism towards polls is likely, and that's a good thing, because the future can't be predicted with precision." [The translation is mine.]
Marketers should also take note. There are very sophisticated market research tools that claim to predict market success. In my experience, they rarely do. There are always factors that simply cannot be accounted for. Simulation is never the real thing.
- Three decades ago, I worked on a new brand of instant coffee. It required a great deal of product development work to make it taste like roast and ground coffee, and a significant investment to launch it. The client invested $50K in what was then the most sophisticated pre-testing tool available, the ASSESSOR Pre-Test Market Evaluation System. Its promoters argued that it had helped cut the failure rate of new products in test market almost in half and saved the firms that used it an estimated $120 million. The new instant coffee tested very well and its estimated market share met the action standard. It was launched in a test market and it bombed.
- Advertising pre-testing is very useful. You learn about the proposed creative's ability to break through, to tell its story and to meet its communication objectives. You can tweak it before production, or drop the idea altogether if it falls significantly short of norms. Yet even when a spot has tested well in rough or finished form, it can fail to be recalled or to communicate its intended message once in market. Whenever that's happened, I've been told how other factors not accounted for in the pre-test affected the spot's in-market performance.
Predictive tools are useful, but they are based on what people say they'll do, not on what they'll actually do. It's a distinction economists make that pollsters and marketers should keep in mind.