Geo-Lift vs. User Level Conversion Lift Tests

The Right Choice for Incrementality Measurement

It is a welcome change that more and more advertisers are recognizing the importance of incrementality studies for understanding the true impact of their marketing spend. Measurement companies like Measured and LiftLab offer incrementality measurement solutions based on Marketing Mix Models (MMMs) and geo-experiments to help advertisers on this journey.

Geo-experiments are useful when user-level studies aren't feasible, for example for TV or OOH campaigns, or for understanding the combined effect of all channels. However, geo-experiments are often wrongly recommended even when user-level tests are feasible. This article explains why user-level lift studies are generally a better choice than geo-experiments.

Feasibility of User-Level Lift Studies: A common myth is that user-level lift studies are no longer feasible due to privacy restrictions such as Apple's App Tracking Transparency (ATT). For Android and web conversions, tracking has remained largely unchanged over the last 5-6 years, so there are no significant concerns there.

While Meta and Google temporarily paused lift tests for iOS campaigns after ATT, they have since developed alternative solutions. Recent versions of SKAdNetwork (SKAN) and the Conversions API (CAPI) now enable effective conversion lift tests even with privacy constraints.

Why are user-level lift studies better than geo-lift studies?

1. User-level lift studies are more robust than geo-lift studies

Geo-experiments rely on the assumption that the relationship between test and control groups established during the pre-study period remains consistent throughout the study period. This assumption can be compromised if other factors that influence conversions or sales vary significantly in any of the test or control regions. From my experience setting up geo-experiments during my tenure at Google, this happens very frequently.

If a competitor offers aggressive discounts in Texas (a test region), lowering sales for the measured brand, baseline test conversions go down and the lift will be underestimated. If the same happened in California (a control region), the lift would be overestimated.

User-level lift studies, on the other hand, randomize at the individual level. Thanks to the law of large numbers, they are not susceptible to such regional variations: with millions of users in a study, local fluctuations even out across the test and control groups, keeping them balanced and the results unbiased even if conditions change during the study period.
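To make this concrete, here is a minimal simulation sketch (not from the article). It assumes a hypothetical 2% baseline conversion rate, a true incremental lift of 0.2 percentage points, and a competitor promotion that depresses conversions by 0.3 percentage points in the test geo only. The geo comparison absorbs the shock, while user-level randomization within the shocked region cancels it out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical numbers, not from the article: 1M users per side,
# 2% baseline conversion rate, +0.2pp true incremental lift from the
# campaign, and a competitor promotion that depresses conversions by
# 0.3pp in the test geo during the study.
n = 1_000_000
base, true_lift, regional_shock = 0.020, 0.002, -0.003

# Geo experiment: everyone in the test geo is exposed to both the
# campaign and the shock; the control geo sees neither.
test_geo = rng.binomial(1, base + true_lift + regional_shock, n)
ctrl_geo = rng.binomial(1, base, n)
geo_estimate = test_geo.mean() - ctrl_geo.mean()

# User-level test: users within the SAME (shocked) region are
# randomized, so the shock hits test and control equally and cancels.
test_usr = rng.binomial(1, base + regional_shock + true_lift, n)
ctrl_usr = rng.binomial(1, base + regional_shock, n)
user_estimate = test_usr.mean() - ctrl_usr.mean()

print(f"true lift:           {true_lift:.4f}")
print(f"geo estimate:        {geo_estimate:.4f}   # biased by the shock")
print(f"user-level estimate: {user_estimate:.4f}   # shock cancels out")
```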

2. User-level lift studies are more precise than geo-lift tests

Geo-experiments have higher variance in their lift estimates, and hence lower statistical power, than user-level studies. This is due to dilution: only a small percentage of people in the test group actually receive the treatment.

Consider a geo-experiment on a Meta campaign with 10 million people in both the test and control regions. Only 1 million people are reached by the campaign in the test geo, yet all 10 million individuals in each group must be included in the study because data is available only at the geo level. The lift, if any, comes from the 1 million people reached; the remaining 9 million only add noise and variance. Ideally, we would compare the 1 million people reached in the test group to a comparable 1 million people in the control group.

User-level lift studies on digital platforms like Meta and Google avoid this dilution. Not all 10 million users are included in the study; only the 1 million people reached and a matched 1 million in the control group are included, which improves the signal-to-noise ratio.

In this hypothetical case, the standard deviation of the lift estimate from the geo-experiment is roughly √10 ≈ 3 times that of the user-level lift test, because the noise grows with the square root of the population included while the incremental signal stays the same.
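A back-of-the-envelope check of that factor, assuming a simple binomial model and a hypothetical 2% baseline conversion rate (an assumption for illustration, not a figure from the article):

```python
import math

# Hypothetical inputs: 10M users per side in the geo test, of whom
# only 1M are actually reached; 2% baseline conversion rate.
p = 0.02                      # baseline conversion rate (assumed)
n_geo = 10_000_000            # users per side in the geo test
n_user = 1_000_000            # reached users per side in the user-level test

# Standard error of the estimated *total* incremental conversions,
# treating each side as a simple binomial sample.
se_geo = n_geo * math.sqrt(2 * p * (1 - p) / n_geo)
se_user = n_user * math.sqrt(2 * p * (1 - p) / n_user)

print(f"SE (geo, diluted): {se_geo:.0f} conversions")
print(f"SE (user-level):   {se_user:.0f} conversions")
print(f"ratio:             {se_geo / se_user:.2f}  # ~ sqrt(10) = 3.16")
```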

3. User-level lift studies are simpler to execute

Geo-experiments are relatively complicated, require statistical expertise to design and analyze, and are error-prone. All test regions recommended by the test designer must be included, and control regions must be excluded. Restrictions must also be placed on channels not being measured to prevent unbalanced changes between test and control regions, which can be challenging in larger organizations. In user-level tests, the process is much simpler, and a feasibility analysis can be done with basic mathematics or simple Excel templates (a sketch follows below). For managed advertisers, simply contact your account team at Google or Meta and they will assist with the setup.
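As an illustration of that feasibility math, here is a minimal sketch using the standard two-proportion sample-size approximation. The baseline conversion rate and lift target below are hypothetical placeholders, and the platforms' own planning tools should be preferred where available.

```python
import math

def required_users_per_arm(base_rate, relative_lift, alpha=0.05, power=0.80):
    """Approximate users needed per arm to detect a given relative lift
    with a two-sided two-proportion z-test."""
    z_alpha, z_beta = 1.96, 0.84          # two-sided 5% alpha, 80% power
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 2% baseline conversion rate and a 5% relative lift target
print(required_users_per_arm(0.02, 0.05))   # roughly 315k users per arm
```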

4. Trust & Validation: Are Meta and Google grading their own homework?

For platforms like Google and Meta, there is much to lose and little to gain from manipulating lift results for short-term benefit. The lift tools are managed by central teams that do not benefit directly from advertisers increasing or decreasing their account spend. For advertisers who want to validate the results themselves, clean room solutions such as Meta Advanced Analytics and Google Ads Data Hub (ADH) are available. Advertisers can bring their own conversion data, match it with test and control cohort data from the platforms, and calculate lift through simple queries to verify the reported numbers until they build trust in the results.
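As a rough illustration of what that validation could look like once cohort and conversion data are matched, here is a minimal sketch in pandas. The table layout, column names, and toy numbers are hypothetical; the actual clean room environments use their own query interfaces (typically SQL).

```python
import pandas as pd

# Hypothetical matched output: one row per user with their assigned
# cohort and whether the advertiser observed a conversion.
cohorts = pd.DataFrame({
    "user_group": ["test"] * 6 + ["control"] * 6,
    "converted":  [1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0, 0],
})

# Conversion rate per cohort, then absolute and relative lift.
rates = cohorts.groupby("user_group")["converted"].mean()
absolute_lift = rates["test"] - rates["control"]
relative_lift = absolute_lift / rates["control"]

print(rates)
print(f"absolute lift: {absolute_lift:.3f}")
print(f"relative lift: {relative_lift:.1%}")
```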

5. User-level lift studies are free

There is no additional cost involved in running conversion lift studies on the platforms. In contrast, geo-experiments require significant analytical expertise, so only organizations with sophisticated marketing analytics teams or hired measurement agencies can conduct them.

