Defining Metrics for Data Analytics in Marketing Campaigns
Over the course of a year, thousands of marketing campaigns are executed. Since we work closely with the Data Science team and the CRM team, we act as the middleman who understands the mechanics, which lets us design experiments for testing the effectiveness of each campaign.
Here are the questions I ask when designing the campaign mechanic:
- How is the target audience selected?
- What are we testing, or better, what are we trying to achieve?
- How do we measure the success of this campaign?
Target Audience
A list of people who are to receive marketing messages tends to fall into one of these groups:
- Those who are eligible by certain criteria. This is also known as rule-based targeting, where the rules are set by the Product Specialist. Certain assumptions are made, for example, that people who already own a specific product are likely to buy the product we advertise.
- Recommendations from Data Science modelling. There is a wide variety of predictive models, and having some knowledge of Data Science helps the user understand the impact of choosing a particular type of model. For instance, breaking down conversion across many dimensions might not make sense for a Random Forest algorithm.
- Random or mass targeting. The definition is straightforward: we loosen the criteria to reach a larger target audience.
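The three selection approaches above can be sketched in a few lines of Python. This is a toy illustration, not our production pipeline; the field names (`has_savings_account`, `model_score`) and the 0.5 score cutoff are hypothetical.

```python
import random

# Toy customer base; in practice this would come from the CRM.
customers = [
    {"id": 1, "has_savings_account": True,  "model_score": 0.91},
    {"id": 2, "has_savings_account": False, "model_score": 0.35},
    {"id": 3, "has_savings_account": True,  "model_score": 0.12},
    {"id": 4, "has_savings_account": False, "model_score": 0.78},
]

# 1) Rule-based: eligibility criteria set by the Product Specialist.
rule_based = [c for c in customers if c["has_savings_account"]]

# 2) Model-based: customers above a cutoff on a predictive model's
#    score (the 0.5 threshold is an arbitrary example).
model_based = [c for c in customers if c["model_score"] >= 0.5]

# 3) Mass target: loosen the criteria to reach a larger group,
#    e.g. sample broadly from the whole base.
random.seed(42)
mass_target = random.sample(customers, k=3)

print([c["id"] for c in rule_based])   # ids passing the rule
print([c["id"] for c in model_based])  # ids passing the score cutoff
```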
Designing Experiments and Setting Goals - IMPORTANT!
People tend to jump to the design before thinking through the goal of the project. Always ask why. Why are we doing this project? Are we trying to acquire more subscriptions? Do we want to expose customers to a certain product? Are we offering something in exchange for something non-monetary, e.g. engagement?
After the team is aligned on the goal, we look at which metrics would help measure performance. The basic ones are click rate, conversion, and average ticket size.
Click rate: are we the only party sending ads for this product? Are there any variations in what people see? Should we run an A/B test to see which ad performs better?
Conversion and average ticket size: careful, go back to the goal and measure exactly what we are trying to achieve. Is there a minimum amount the customer is required to spend after subscribing for it to count?
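The three basic metrics are simple ratios. Here is a minimal sketch; all the counts and revenue figures are made-up illustration numbers, not real campaign data.

```python
# Made-up campaign outcomes for illustration only.
sent = 10_000          # messages delivered
clicks = 800           # recipients who clicked the ad
conversions = 120      # recipients who completed the goal action
revenue = 540_000.0    # total revenue from those conversions

click_rate = clicks / sent               # did the ad get attention?
conversion_rate = conversions / sent     # did it drive the goal action?
avg_ticket_size = revenue / conversions  # how much each converter spent

print(f"click rate:      {click_rate:.2%}")
print(f"conversion rate: {conversion_rate:.2%}")
print(f"avg ticket size: {avg_ticket_size:,.0f}")
```

Note that conversion here is measured against messages delivered; depending on the goal, you might instead measure it against clicks.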
A/B Testing
Although rule-based lists and models have been shown, to some extent, to produce promising results (e.g. through backtesting), we always conduct A/B testing, since consumer behavior changes over time. Simply put, people change.
To run an A/B test, we carve a control group out of our target group. Fundamentally, we A/B test the following; the control simply gets the opposite treatment:
- Communication channel <> control does not receive the message
- Model <> control is randomly chosen; make sure the groups are comparable
- Artwork <> compare with other variations
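Carving out a control group can be as simple as a random holdout taken before sending, so that test and control stay comparable. A minimal sketch, assuming a channel test (control receives no message) and an arbitrary 20% holdout share:

```python
import random

random.seed(7)
target_ids = list(range(1, 101))   # 100 targeted customers (toy data)
random.shuffle(target_ids)         # randomize before splitting

control_share = 0.2                # hold out 20% as control (example)
cut = int(len(target_ids) * control_share)
control = target_ids[:cut]         # receives no message
test = target_ids[cut:]            # receives the campaign message

print(len(test), len(control))
```

Because the split is random, any difference in outcomes between the two groups can be attributed to the treatment rather than to how the groups were chosen.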
Monetary Contribution
Finally, we would like to know how much money the model has helped us generate, in other words, the uplift in revenue. This means a control group is necessary to establish the baseline of not having a model or eligibility criteria.
Unfortunately I cannot share our proprietary formula, but as a thank-you for reading to the end of this post, here is a hint.
The model's contribution can be calculated similarly to uplift: multiply the difference in conversion rates by the total revenue generated from the model's group. The rest is easy to figure out.
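As one generic reading of that hint (NOT the proprietary formula), a textbook uplift calculation values the incremental conversions from the test group at the average ticket size. All numbers below are invented for illustration.

```python
# Made-up test/control outcomes for illustration only.
test_size = 8_000
test_conversions = 640
control_size = 2_000
control_conversions = 100
test_revenue = 2_880_000.0

conv_test = test_conversions / test_size            # 8.0% with the model
conv_control = control_conversions / control_size   # 5.0% baseline
avg_ticket = test_revenue / test_conversions        # revenue per conversion

# Conversions we would NOT have had without the model,
# valued at the average ticket size.
incremental_conversions = (conv_test - conv_control) * test_size
uplift_revenue = incremental_conversions * avg_ticket
print(round(uplift_revenue))
```

This is a sketch of the standard uplift idea only; the actual formula used by the team may weight or adjust these terms differently.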
Siravich K
Feel free to visit my LinkedIn or My Website
PS we have open positions for Marketing Campaign Analytics, DM me!