Attribution Accelerator Conference: Summary of Presentations
On October 12 I attended a one-day conference in New York City dedicated to multi-touch attribution (MTA). The conference brought together some of the big vendors of MTA modeling and the big customers who use it to determine how their marketing works. Among the big users of MTA in attendance were AT&T, Kellogg, IBM, Pfizer, Procter & Gamble, and L’Oreal. The big media companies were also there: Facebook, CBS, CNN, Disney, etc. Time Inc. hosted the conference in its beautiful building in lower Manhattan; Big Media is very interested in helping advertisers measure the return on advertising. The best-represented group at the conference was, of course, the providers of attribution modeling services, yours truly included.
Let’s start with the bad news. The Mobile Marketing Association shared some data from a survey of users of attribution modeling services. The Net Promoter Score with attribution modeling is a whopping … minus 29. Yes, minus 29, which basically means that the users are not happy at all with multi-touch attribution. I take full responsibility for this sordid state of affairs and pledge to work tirelessly to fix it. Now to the good stuff.
It seems that there is a clear differentiation between marketing mix modeling (MMM) which has been around for a while and multi-touch attribution (MTA).
- MMM is strategic; MTA is tactical
- MMM is high-level; MTA is granular
- MMM tells you what happened 3-6 months ago; MTA tells you what happened two weeks ago (theoretically, at least)
- MMM allocates outcomes (web visits, revenue, etc.) to different ad vehicles (TV, radio, social media, etc.); MTA may go further, using logistic (or other) regression to estimate each individual's propensity to buy (a minimal sketch of such a model follows this list).
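For the curious, here is a minimal sketch of what a propensity-style MTA model can look like. The data, the column names, and the single logistic regression are hypothetical simplifications of mine, not any vendor's actual methodology.

```python
# Minimal sketch of an MTA-style propensity model (hypothetical data and
# column names; real people-level attribution is far more involved).
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Each row is one person: binary exposure flags per channel plus an outcome.
touches = pd.DataFrame({
    "saw_tv":      [1, 0, 1, 1, 0, 0, 1, 0],
    "saw_display": [0, 1, 1, 0, 1, 0, 1, 1],
    "saw_social":  [1, 1, 0, 0, 0, 1, 1, 0],
    "purchased":   [1, 0, 1, 0, 0, 0, 1, 0],
})

X = touches[["saw_tv", "saw_display", "saw_social"]]
y = touches["purchased"]

model = LogisticRegression().fit(X, y)

# Coefficients give a rough read on how each exposure shifts purchase propensity.
for channel, coef in zip(X.columns, model.coef_[0]):
    print(f"{channel}: {coef:+.2f}")
```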
The last point above takes us to one of the current hot topics in attribution modeling: people-based attribution. People-based attribution tries to deal with the fact that the typical consumer path to purchase goes through a maze of devices, web browsers, apps, etc. How does a modeler cobble this path together? Visual IQ (now part of Nielsen) tries to connect the dots by piecing together disparate online, device, and offline data, reconciling multiple identifiers into a single anonymous identifier for each individual. People identification is one of the three pillars of people-based attribution; the other two are building consumer profiles and creating audience segments. Why do you need people identification if you have cookies? Because if you use cookies as your sole identifiers, you are likely to overstate advertising reach by 54% and understate frequency by 135%, according to Dan Creekmore from Facebook. Basically, cookies are best left for a late afternoon snack. In attribution modeling we want to identify people, not cookies.
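To make the identity-resolution idea concrete, here is a toy sketch that merges identifiers observed together into a single anonymous person ID. It is a plain union-find exercise of my own, not Visual IQ's or Nielsen's actual approach, and all identifiers are made up.

```python
# Toy illustration of identity resolution: merge cookies, device IDs, and
# hashed emails that are observed together into one anonymous person ID.
class IdentityGraph:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path compression
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        """Record that two identifiers were seen together (same login, etc.)."""
        self.parent[self._find(id_a)] = self._find(id_b)

    def person_of(self, identifier):
        return self._find(identifier)

graph = IdentityGraph()
graph.link("cookie:abc", "device:iphone-123")      # same app login
graph.link("device:iphone-123", "email_hash:9f2")  # same checkout
graph.link("cookie:xyz", "email_hash:9f2")         # same newsletter click

# All four identifiers now resolve to one anonymous person.
print({i: graph.person_of(i) for i in
       ["cookie:abc", "cookie:xyz", "device:iphone-123", "email_hash:9f2"]})
```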
How often do you update your MMM and MTA models? MMM used to be updated annually. Now Jeff Doud from Kellogg says that they do it every month. Kellogg presents its updated models and recommendations every 6-7 weeks. If you hear from some overly eager attribution vendors that they can do this every week, make sure to remind them that some of the most knowledgeable users of attribution modeling find this hard to believe.
Dave Poltrack from CBS and Leslie Wood from Nielsen delivered one of the more memorable presentations that reminded us why the machines are NOT taking over marketing any time soon. They presented two meta-studies about how revenue is allocated to the different advertising elements:
- Recency of advertising accounts for 5% of revenue
- Reach accounts for 22%
- Brand accounts for 15%
- Proper targeting delivers 9% of revenue
- And now, the star of the show: Creatives deliver 47%.
For all the data crunching and modeling we can do, the quality of creatives still accounts for about half of the revenue delivered by advertising. Until we teach the machines how to make funny and memorable ads, we will have to rely on protein-based life forms to do the heavy lifting of creating effective ad messages.
Another interesting finding from the CBS/Nielsen presentation was that while the quality of TV ads is quite consistent, the quality of digital ads is all over the place. Truly great digital ads are forced to coexist with truly awful digital ads that lurk far too often on your screens. Given this, it must be good news that digital ads are not used to expand advertising reach but rather to reach the same people who have already seen offline ads. Digital advertising adds disproportionately little to the overall unduplicated reach.
Matt Krepsik from Nielsen confirmed the sad fact that 52% of US display ads are not “viewable”. This does not mean that you, as an advertiser, don’t pay for them. You wish. He also told us that on average, across many businesses and industries, $0.26 is spent on marketing for each dollar of revenue it delivers. These 26 cents are divided as follows:
- 2 cents for digital advertising
- 2 cents for “other” media
- 4 cents for TV advertising
- And … drumroll again: 18 cents for “incentives”, otherwise known as discounts.
At the same time, companies spend much more time and money on understanding how their advertising works than on how their pricing works. Any pricing researcher (and I’ll be the first among them) can tell you that companies greatly underappreciate the impact of pricing and promotions on their revenues and profits.
Michael Kaushansky delivered the unwelcome news that programmatic media cannot predict the weather. Why? Because in many cases above-the-line, upper-funnel media is much more important and impactful than programmatic media. He also shared a very useful framework for doing proper attribution modeling (a toy sketch of these checks follows the list):
- Start by determining the “first touch”: what is most often the first marketing touch?
- Then determine the unique unduplicated reach of your media.
- Then determine the typical sequence of marketing touch points.
- Then run your attribution model, if it still makes sense. The study he shared found that 70%+ of customers had only one touch before purchasing; in such cases there may not be a real need for marketing mix or attribution modeling.
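Here is a rough sketch of the first three pre-modeling checks above, run on made-up path data; the channel names and paths are purely illustrative.

```python
# Rough sketch of the pre-modeling checks on hypothetical path data.
from collections import Counter

# Each path is the ordered list of marketing touches one person saw.
paths = [
    ["tv"],
    ["search"],
    ["tv", "display", "search"],
    ["social"],
    ["tv", "search"],
    ["email"],
]

first_touches = Counter(path[0] for path in paths)          # step 1: first touch
reach = Counter(ch for path in paths for ch in set(path))   # step 2: people reached per channel
sequences = Counter(tuple(path) for path in paths)          # step 3: typical sequences
single_touch_share = sum(len(p) == 1 for p in paths) / len(paths)

print("Most common first touch:", first_touches.most_common(1))
print("People reached per channel:", dict(reach))
print("Most common sequences:", sequences.most_common(3))
print("Share of single-touch paths:", single_touch_share)
```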
As a modeler, I hate to admit this, but in some rare instances you can be fine using one of the following methods (a toy implementation follows the list):
- First touch allocation: allocate the revenue to the first marketing touch point
- Even credit: credit the marketing touchpoints equally for each sale
- Recent credit: give the most credit to the last touch point
- Intuitive credit: ask your gut to tell you how your marketing performs.
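For illustration, here is a toy implementation of the first three rules of thumb, assuming each sale comes with an ordered list of touch points; the fourth rule resists automation.

```python
# Toy implementations of the rule-of-thumb credit allocations above.
from collections import defaultdict

def allocate(path, revenue, rule):
    """Split revenue across an ordered touchpoint path under a simple rule."""
    credit = defaultdict(float)
    if rule == "first_touch":
        credit[path[0]] += revenue
    elif rule == "last_touch":
        credit[path[-1]] += revenue
    elif rule == "even":
        for touch in path:
            credit[touch] += revenue / len(path)
    else:
        raise ValueError(f"unknown rule: {rule}")  # "intuitive credit" is left to your gut
    return dict(credit)

path = ["tv", "display", "search"]
for rule in ["first_touch", "even", "last_touch"]:
    print(rule, allocate(path, 100.0, rule))
```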
Now, before you get too giddy with last-click attribution, you should know that, according to Dan Creekmore from Facebook, if you use last-click attribution you are likely to get it wrong up to 54% of the time. In other words, roughly half the time you will allocate revenue to the wrong touch point.
Sean Muller from iSpot.tv reminded us that TV attribution is a hard thing to do because even the proper counting of TV commercials' airings can be a challenge. He also advised us not to use “spike” analysis to associate TV advertising with online behavior. Until recently it was considered state of the art to determine the impact of TV advertising by counting the spikes in web visits immediately after a TV commercial is aired. If you are still doing this, you should stop. Why? Because according to Sean only 3%-8% of web visits that are generated by a TV ad occur in the first 5 minutes after the ad is aired.
In the name of brevity, I skipped or glossed over many worthy presentations from this conference, but I left the best for last. For me, the most intellectually challenging presentation was delivered by Elea Feit, a professor at Drexel University and the Wharton School of Business. She kind of implied (without really saying it) that attribution modeling is still quite problematic and offered an alternative: using randomized experiments to determine how the different marketing vehicles work. You may know this as experimental design or, in popular parlance, A/B testing. I have written about this before: while experimental design is a great approach, the way it is often implemented in the practice of A/B testing is far from perfect. For anybody who wants to understand how far the practice of A/B testing is from sound experimental design, I recommend the book “Testing 1-2-3” by Johannes Ledolter and Arthur J. Swersey. For people who don’t read books anymore, I recommend the article by Elea Feit et al.: “Measuring Multichannel Advertising Performance”.
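To show what the experimental alternative looks like in its simplest form, here is a sketch of measuring incremental lift from a randomized ad holdout. The conversion rates, sample size, and the two-proportion z-test are my own illustrative assumptions, not anything from Feit's talk or article.

```python
# Minimal sketch of the experimental alternative: withhold ads from a random
# control group and read advertising impact as incremental lift.
import math
import numpy as np

rng = np.random.default_rng(42)

n = 100_000
exposed = rng.random(n) < 0.5                       # random assignment to see the ad
base_rate, true_lift = 0.020, 0.003                 # hypothetical conversion rates
converted = rng.random(n) < (base_rate + true_lift * exposed)

p_test = converted[exposed].mean()
p_ctrl = converted[~exposed].mean()
print(f"test {p_test:.4f} vs control {p_ctrl:.4f}, lift {p_test - p_ctrl:.4f}")

# Two-proportion z-test: is the observed lift distinguishable from noise?
n_test, n_ctrl = exposed.sum(), (~exposed).sum()
p_pool = converted.mean()
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_test + 1 / n_ctrl))
z = (p_test - p_ctrl) / se
p_value = math.erfc(abs(z) / math.sqrt(2))          # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.4g}")
```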
I hope this was useful for you. If you have any questions, don’t hesitate to send me an email at [email protected] or [email protected].
Cheers and Happy Attribution Modeling!