There’s more information than you think in your ad experiments

When you conduct a test of a particular advertising treatment, did you ever consider that you are really running multiple experiments at once?

For example, let’s say you are testing the effectiveness of a particular online video tactic (length/type/publisher/creative). Let’s assume this tactic has been running continuously, but you want to test how effective it is. So, you run an experiment comparing conversion rates among those who are exposed vs. those who are not exposed (either by creating a randomized suppression list or because they just weren’t reached but are a twin for those exposed). So, what are you really testing in this scenario?
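As a quick aside on the mechanics of that randomized suppression list, here is a minimal Python sketch of one way the holdout could be carved out of the targetable audience before the campaign runs. The holdout rate, salt, and user IDs are illustrative assumptions, not anything this article prescribes.

```python
import hashlib

HOLDOUT_RATE = 0.10  # assumed holdout share; size it to whatever your power math supports

def assign_cell(user_id: str, salt: str = "video_tactic_test") -> str:
    """Deterministically assign a user to the exposed or suppressed (holdout) cell.

    Hashing user_id + salt gives a stable pseudo-random split, so the same user
    always lands in the same cell for the life of the test.
    """
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform value in [0, 1]
    return "suppressed" if bucket < HOLDOUT_RATE else "exposed"

# Example: build the suppression list to hand to the ad server
audience = ["u_1001", "u_1002", "u_1003"]  # placeholder IDs
suppression_list = [u for u in audience if assign_cell(u) == "suppressed"]
```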

For experiments like this, the exposed cell is receiving media weight that corresponds to 100% reach…much higher than you are actually achieving (unless you are at Geico levels of spending)! This is also called “per protocol” testing. So that cell is a lift or "upside elasticity" cell. Now, the control cell is testing something too…it is testing what happens if you take your run rate of advertising from prior months and cut it to zero. Some call this a “turn-off” experiment…you are turning off that ad unit. So, consider the following table of outcomes.

A: conversion rate of the exposed cell (media weight equivalent to 100% reach)
B: conversion rate of the suppressed control cell (0% reach…the tactic turned off)
C: conversion rate under business-as-usual conditions (the run-rate level of reach the tactic was actually delivering)

A vs. B is the typical comparison. That maximizes the impact of that tactic because it reflects the difference between 100% and 0% reach. It is unrealistic in a way, because you will never have 100% reach, but it is also a fair way of doing apples-to-apples comparisons across ad treatments from multiple tests.
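As an illustration, here is a minimal Python sketch of the A vs. B lift calculation with a simple two-proportion z-test; the conversion counts are made-up placeholders, not results from any real test.

```python
from math import sqrt

# Hypothetical cell results (placeholders, not real data)
exposed = {"conversions": 1_250, "n": 50_000}   # cell A: 100% reach
control = {"conversions": 1_000, "n": 50_000}   # cell B: 0% reach (suppressed)

cr_a = exposed["conversions"] / exposed["n"]
cr_b = control["conversions"] / control["n"]

abs_lift = cr_a - cr_b        # percentage-point lift, 100% vs. 0% reach
rel_lift = abs_lift / cr_b    # relative lift vs. the turn-off cell

# Two-proportion z-test on the difference in conversion rates
pooled = (exposed["conversions"] + control["conversions"]) / (exposed["n"] + control["n"])
se = sqrt(pooled * (1 - pooled) * (1 / exposed["n"] + 1 / control["n"]))
z = abs_lift / se

print(f"A vs. B: {cr_a:.2%} vs. {cr_b:.2%}, relative lift {rel_lift:.1%}, z = {z:.2f}")
```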

However, there is also information in A vs. C and B vs. C that you might not be capitalizing on.

A vs. C gives you the lift in conversion rate if you were to spend a lot more on that tactic. It provides an upside elasticity (controlling for other factors if necessary).

B vs. C gives you downside elasticity…what if I cut all spending on that tactic? Conversion rate B is sometimes confused for being a baseline for the brand under normal conditions; no…it should be lower because you have subtracted one type of ad support. However, if there is no difference in the conversion rates, B vs. C, this is an important finding…you can turn off that tactic with no loss in performance! Did you just eliminate waste? Or is there another interpretation…possibly that the tactic is more about long-term effects? (Although let me editorialize…I believe the published findings from others that if advertising has no short-term effect, it probably has little to no long-term effect either.)
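Here is a minimal sketch of how the upside and downside readings come out of the same three cells; the conversion rates below are hypothetical placeholders.

```python
# Hypothetical conversion rates for the three cells (placeholders, not real data)
cr_a = 0.025   # A: exposed, ~100% reach
cr_b = 0.020   # B: suppressed, 0% reach (tactic turned off)
cr_c = 0.022   # C: business-as-usual, run-rate reach

upside_lift = (cr_a - cr_c) / cr_c     # A vs. C: what spending up to full reach could add
downside_lift = (cr_b - cr_c) / cr_c   # B vs. C: what cutting the tactic entirely would cost

print(f"Upside (A vs. C): {upside_lift:+.1%} relative change in conversion rate")
print(f"Downside (B vs. C): {downside_lift:+.1%} relative change in conversion rate")

# If downside_lift is statistically indistinguishable from zero, the turn-off
# reading applies: the run-rate spend on this tactic is not moving conversions.
```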

Here is another way to extract value from A/B testing. Often a test is conducted because you want to adjudicate conflicting answers from different methods, such as marketing mix modeling and MTA. Typically, there is one parameter associated with a particular tactic or channel, and the difference between upside and downside elasticity is embedded in the shape of the curve. So, in effect, A, B, and C give you three data points on what should be the same curve. That means you have three pieces of information, not one, to tell you which model appears to have the more correct parameter and model shape for that ad tactic.
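To illustrate using the three cells as three points on one response curve, here is a minimal Python sketch that scores two hypothetical candidate curves (say, one implied by an MMM and one by an MTA model) against the observed A, B, and C conversion rates. The reach levels, conversion rates, parameter values, and the saturating-exponential curve shape are all assumptions made for the sake of the example.

```python
import numpy as np

# Observed points: (reach level, conversion rate) for cells B, C, A
# Reach levels and rates are hypothetical placeholders.
reach = np.array([0.0, 0.35, 1.0])          # B = 0%, C = run rate (~35%), A = 100%
observed_cr = np.array([0.020, 0.022, 0.025])

def response_curve(reach, base, gain, k):
    """Diminishing-returns conversion response: base + gain * (1 - exp(-k * reach))."""
    return base + gain * (1.0 - np.exp(-k * reach))

# Hypothetical parameterizations implied by two competing models
candidates = {
    "MMM-implied curve": dict(base=0.020, gain=0.006, k=1.5),
    "MTA-implied curve": dict(base=0.020, gain=0.010, k=0.4),
}

for name, params in candidates.items():
    predicted = response_curve(reach, **params)
    sse = float(np.sum((predicted - observed_cr) ** 2))
    print(f"{name}: predicted CRs {np.round(predicted, 4)}, SSE vs. test = {sse:.2e}")

# The curve whose predictions land closer to all three experimental points is the
# more credible description of this tactic's shape, not just its average lift.
```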

As you build up a database of both upside and downside elasticity results from testing, you may even see recurring patterns emerge. Is there a generalization that one direction always provides more movement than the other, adjusting for spending? What does wear-out look like? Can you detect saturation or under-investment? Are downside ROAS calculations systematically different from the more typical upside ROAS calculations?
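On that last question, here is a sketch of how upside and downside ROAS might be computed from the same three cells; the audience size, revenue per conversion, and spend figures are all hypothetical.

```python
# Hypothetical inputs (placeholders, not real campaign figures)
audience_size = 1_000_000
revenue_per_conversion = 80.0

cr_a, cr_b, cr_c = 0.025, 0.020, 0.022   # exposed, turned off, run rate
run_rate_spend = 250_000.0               # current spend on the tactic
full_reach_spend = 600_000.0             # assumed cost of pushing to ~100% reach

# Upside ROAS: incremental revenue per incremental dollar when spending up (C -> A)
upside_roas = ((cr_a - cr_c) * audience_size * revenue_per_conversion) / (full_reach_spend - run_rate_spend)

# Downside ROAS: revenue given up per dollar saved when turning the tactic off (C -> B)
downside_roas = ((cr_c - cr_b) * audience_size * revenue_per_conversion) / run_rate_spend

print(f"Upside ROAS (spend up to full reach): {upside_roas:.2f}")
print(f"Downside ROAS (revenue lost per dollar cut): {downside_roas:.2f}")
```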

The smart marketer will use all of these tools…modeling, experiments, and more…to improve their ad effectiveness over time. Even TV effectiveness can be tested with ID lists by using CTV.

I’d be interested in readers’ thoughts and findings.

Jerome Samson

Content Marketing | Thought-Leadership | Research & Analysis

2y

A good reminder that the real world comes with 'baggage': prior exposure.

Daniel Lederman

Marketing Analytics, Ad Experimentation, Data Science, Growth / Performance

2y

I love this idea of using incrementality tests to understand elasticity. It's a great idea. Do you have examples you can share of results and impact from real campaigns?
