Integrating experiments and MMM in marketing

Last Thursday we continued our Marketing Science Institute Insights Exchange on "Econometric Modeling vs. Field Experiments: A Deep-Dive into Marketing Performance Measurement," with panelists Karen Chisholm (Director of Analytics Transformation, Pernod Ricard), Brian Hill (Advanced Analytics, Altria), Milad Doostan (Senior Data Scientist, Pinterest), and professors Brett Gordon (Northwestern University) and yours truly (Northeastern University), covering the entire northern US! I was happy to see my advisor Mike Hanssens Zooming in from California:



I loved the interactive session, with all participants showing themselves on camera and offering cool questions, discussion, and answers to the three main questions:

1. Experiments: When do they work well for you? When not? What are your constraints? What if they’re not significant? How do you use experiments to help drive a decision?

2. Econometrics (marketing mix models): When do they work well for you? When not? How do they fit into your portfolio of methods? How do you build support for them?

3. Integration: How and in which order to combine experimentation and data modeling to create the best iteration?


Serious business with thoughtful practitioners & practical academics at the Marketing Science Institute

On experiments, two managers commented that they work well for more tactical decisions: which campaign, which audience, which publisher/platform? Moreover, they are needed when marketing mix models (MMM) have issues with multicollinearity or insufficient spending in past data. For instance, small brands often fail to spend enough to show up in an MMM, so a geo-test helps to estimate the lift in ad-exposed vs. non-exposed regions.
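At its simplest, such a geo-test lift estimate is a difference in means between exposed and holdout regions. A minimal sketch, where the region-level sales figures are made up purely for illustration:

```python
from statistics import mean

# Hypothetical weekly sales per region (illustrative numbers only)
exposed = [102.0, 98.5, 110.2, 105.7, 99.3]   # regions shown the ads
holdout = [95.1, 92.4, 101.0, 97.8, 94.6]     # matched regions held out

# Relative lift: how much higher are exposed-region sales vs. holdout?
lift = mean(exposed) / mean(holdout) - 1.0
print(f"Estimated incremental lift: {lift:.1%}")  # → 7.2%
```

In practice you would also want matched regions (similar size and trend) and a pre-period check that exposed and holdout regions tracked each other before the test.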

In our poll, 57% of participants use experiments in their marketing measurement:


Why and when not? The constraints are both ethical and practical: choosing not to expose consumers or regions to potentially beneficial messages, and the cost of running experiments, which yield insignificant results most of the time. At other times, the experiment yields advertising effects that are way too high because of omitted variables (e.g., a BOGO discount running concurrently). Overall, experiments require very careful control, and often need large sample sizes to pick up small effects for different segments. One major retailer used rolling control groups:

(1) lifetime,

(2) 2 years,

(3) 1 year,

(4) 6 months,

(5) 3 months,

(6) one month.

Every experiment was checked against one or more of these groups to assess cross-contamination.


On econometric modeling, all presenting managers use these models for more strategic decisions: moving millions across channels, countries, and brands. A continuously running MMM operates as a 'red flashing light': it shows when something is doing horribly or great, and thus where further scrutiny (e.g., with an experiment) is warranted. Indeed, my own work on metrics dashboards demonstrated that not only poor performance but also exceptional performance should be highlighted; you'll recall my 'blue' addition to the red-yellow-green color scheme of typical dashboards.


https://www.amazon.com/Its-Not-Size-Data-How/dp/0814433952


The next poll question in our MSI webinar demonstrated that most stakeholders in the organization trust econometric models such as MMM:



This MMM preference is reflected in a recent survey by eMarketer: 61.4% of US marketers who spend $500,000 or more per year on digital advertising want better/faster media mix modeling (MMM) to upgrade their measurement strategies.


https://www.emarketer.com/content/mmm-marketers-measurement

The offered reasons are consistent with my experience: “MMM appeals to marketers in an environment where last-click attribution doesn’t show a holistic picture of the customer journey and privacy is paramount. Moreover, modern MMMs are more granular, faster, and easier to implement.” For instance, at MMM Labs, we estimate several models, including Google's and Meta’s open-source models, to get the best results:


https://www.mmmlabs.ai/

But how do you know when the MMM results are good enough? Face validity and validation came up as the panel's key answers. One manager commented that, after passing the standard econometric validation checks, 90 to 95% of the MMM results are “consistent with brands in which I want to invest several more millions of dollars.”

Moreover, I always validate and explore MMM findings, as in the Practice Prize paper in which we showed a small firm that 70% of its marketing spend was unprofitable. A deep dive showed the message and audience were on point, but diminishing returns had set in, spectacularly. So we switched the spending around in a field experiment, validated the MMM findings, and re-analyzed to show the firm's marketing was now profitable.


https://marketingandmetrics.com/practice-prize-paper-marketings-profit-impact-quantifying-online-and-offline-funnel-progression-2/


Integration: this brought the webinar to the final point of integrating methods and educating decision makers. We all shared stories of exaggerated expectations of what marketing communication, a weak force in marketing, can do for a brand. Digital (last-click) attribution may have anchored advertisers to expect enormous returns that are simply not real.

In experiments, a 5% lift suffices for one panelist, 15% for another. So curb your enthusiasm! The problem with exaggerated expectations is that advertisers severely underpower their experiments (as you recall from Measurement 101: the smaller the expected lift, the more consumers you need for the test to be conclusive).
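To see how fast the required sample grows, here is a standard two-proportion sample-size approximation (5% two-sided significance, 80% power); the 2% baseline conversion rate is an illustrative assumption, not a figure from the webinar:

```python
import math

def n_per_group(p_base, lift, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-proportion z-test
    at 5% two-sided significance and 80% power."""
    p1, p2 = p_base, p_base * (1 + lift)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * var / (p2 - p1) ** 2)

base = 0.02  # assumed 2% baseline conversion rate, illustrative only
print(n_per_group(base, 0.15))  # consumers per arm to detect a 15% lift
print(n_per_group(base, 0.05))  # a 5% lift needs roughly 9x more consumers
```

Because the required n scales with the inverse square of the effect size, cutting the expected lift by a third roughly multiplies the needed sample by nine.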



If you combine experiments with econometric modeling, which sequence is best? That was my polling question, and the most divisive one!


The argument for MMM first is that it is more holistic (any media spending, many more variables) and relatively inexpensive, so it can help you pinpoint where experiments are needed. As one participant commented: “If you trust your model, you get good answers cheaper and faster than with anything else.” Once you have run the experiment, I recommend using econometric modeling on that data, which now shows wonderful exogenous variation, to decide on the next experiment. This model-experiment-model-experiment loop is the MEME sequence Mike and I recommended in Demonstrating the Value of Marketing.

Figure 3 in Demonstrating the Value of Marketing

In contrast, other panelists prefer to run experiments first and then use econometric modeling to scale the incrementality insights. Also, experiments help optimize tactics, after which MMM can advise on the more strategic decisions:

"A danger of experiments is that they focus you on the short term, while long-term effects are important in most industries."


https://marketingandmetrics.com/wp-content/uploads/2020/06/59.-Long-term-marketing-effectiveness-is-a-high-priority-research-topic-for-managers-and-emerges-from-the.pdf

How can you capture long-term effects?

1) Long-term econometric modeling: see my work, visualized above, on quantifying wear-in, wear-out, and persistent baseline effects of marketing;

2) Include mindset metrics (proxies for long-term sales) as outcome variables:


https://marketingandmetrics.com/wp-content/uploads/2020/06/56.Consumer-Attitude-Metrics-for-Guiding-Marketing-Mix.pdf
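In long-term econometric modeling, carryover is commonly approximated with a geometric adstock transform of spend before it enters the model. A minimal sketch; the 0.6 decay rate is an illustrative assumption, not a recommended value:

```python
def adstock(spend, decay=0.6):
    """Geometric adstock: each period retains a fraction `decay`
    of the previous period's accumulated advertising stock."""
    carried, out = 0.0, []
    for x in spend:
        carried = x + decay * carried  # today's spend + decayed carryover
        out.append(carried)
    return out

weekly_spend = [100, 0, 0, 0, 50, 0]
print([round(a, 1) for a in adstock(weekly_spend)])
# → [100.0, 60.0, 36.0, 21.6, 63.0, 37.8]
```

A higher decay rate stretches advertising's effect further into the future, which is exactly the wear-out and persistence behavior a short-horizon experiment cannot see.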

The wide agreement was that experiments and econometric models help each other. Moreover, market research beyond these two methods helps you make the case, as a three-legged stool, and gain executive trust through triangulation:


Finally, a wonderful participant concluded that the choice of method depends on:

1) What is your industry (e.g., Coke, Kodak, AT&T, and Siemens should differ)?

2) What type of data do you have (related to industry and country/brand size)?

3) What business question are you aiming to solve?

I couldn’t agree more, as reflected in my consultancy and communication about method choice across continents, industries, and brand sizes:



Joel Rubinson

President, Rubinson Partners, Inc.; MTA expert advisor, Mobile Marketing Assoc.; NYU adjunct faculty member

3 hours ago

a bit surprised that the idea of using experiments as a Bayesian prior on MMM did not come up. Not really seeing an integration of methods here...


Excellent recap! The poll results were surprising and questions clearly coming from people doing the work!
