Causality vs. Signals
"Signals" are new MQL...at least according to the self-styled gurus busy pitching it as the next new hotness in B2B marketing.
But as marketers, what we should care about is "causality" – whether specific marketing activities are "causing" specific buyer outcomes.
Looking at "signals" can be a small part of this, but "signals" alone can tell you NOTHING about whether one action caused a second action.
This is the root of multi-touch attribution's failure: it cannot shed light on whether some action caused a change in buyer behavior.
It's not about Math...It's about THINKING!
This isn't about doing a bunch of math; it's about a way of thinking about the world and what data can (and cannot) tell us about the world.
There are Causal Analytics tools on the market that can help with the math, but they cannot teach you how to think about data or causality.
Those are important tools. But while a hammer is needed to build a house, a hammer is not a house! Building the house still requires a trained carpenter who knows how to wield the hammer.
This is why it's important to learn how to "think about data" rather than naively accepting the surface "answers" that are almost always wrong.
An Example of Signals and False Causality
Let's start with a simple example – "Our data shows that 92% of people who bought from us had previously read one or more of our blog articles."
Question – Does the data show that having read blog articles increases the chance of becoming a buyer?
Most marketers would say, "YES...the data clearly says so!", but they'd be wrong!
Let's use a slightly different example – "Our data shows that 92% of people who had the Measles had a high fever when sick."
Question – Does the data show that having a fever "proves" that you have the Measles?
Of course not...that would be an absurd conclusion. Most "signals" (in this case, having a fever) CANNOT determine the underlying cause. It's just a correlation!
Assuming the earlier data showed that reading more blog articles will cause people to become buyers is exactly the same mistake.
Maybe that's true; maybe reading more blog articles WILL lead to more people buying, but the data presented cannot be used to determine that.
You cannot simply work backward from the desired effect (closed-won deals, demo requests, etc.) to see what "signals" those people exhibited and be able to say ANYTHING about causality!
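To see the trap clearly, it helps to compute both directions of the conditional probability. Here's a minimal sketch in Python; all of the counts are made up purely for illustration:

```python
# Hypothetical counts -- illustrative only, not real data.
buyers_total = 100
buyers_who_read_blog = 92     # the "92% of buyers read the blog" stat
blog_readers_total = 10_000   # the blog may have MANY readers who never buy

# P(read blog | bought): the stat quoted in the example -- looks impressive.
p_read_given_bought = buyers_who_read_blog / buyers_total        # 0.92

# P(bought | read blog): closer to the question we care about -- tiny here.
p_bought_given_read = buyers_who_read_blog / blog_readers_total  # 0.0092

print(f"P(read blog | bought) = {p_read_given_bought:.2%}")   # 92.00%
print(f"P(bought | read blog) = {p_bought_given_read:.2%}")   # 0.92%
```

The quoted stat and the question being asked are two different conditional probabilities, and even the second one, on its own, is still just a correlation, not evidence of causation.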
A SHORT CASE STUDY IN SIGNALS & CAUSALITY
Let's work through a mini-case-study on how to think about "causality" in a sea of "signals."
What Does It Mean for Something to "Cause" Something Else?
What "caused" someone to make a demo request or to become a closed-won deal?
The answer might be, "We ran Paid Search Ads, and someone clicked on the ad and filled out a demo request form on the Landing Page."
Here, the "effect" was filling out a demo request form, and the "signal" was clicking on our Paid Search Ad.
Did that "signal" (the paid search ad) actually cause that "effect" (the demo request)?
Let's say that the data shows that 55% of people who filled out a Demo Request came to that Landing Page from our Paid Search Ad.
Seems pretty convincing...right? Actually not!
We can diagram the possible cause-and-effect relationships here as a DAG.
Aside: "DAG Diagrams" are the standard way to diagram cause-and-effect relationships. The technical term is "Directed Acyclic Graphs (DAG)," which you absolutely don't need to know (it comes from the mathematics of Graph Theory); just call it a DAG, and you're good to go!
Possible Scenarios
So, what are the possible relationships between someone clicking on our Search Ad and later filling out a Demo Request form?
Several things might have occurred (all three scenarios are sketched in the code example below):
The 1st scenario – the Paid Search Ad directly "caused" the Demo Request.
The 2nd scenario – what matters is the Search itself; the same result would have occurred whether the click was paid or organic (common with Paid Branded Search Ads).
The 3rd scenario – something else (prior Brand Awareness) is what "caused" the prospect to click on the search result, and Search was simply the last step in an earlier causal chain leading to the Demo Request.
It was no more "causal" than driving into the parking lot of the store you were already headed to.
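Here's a minimal sketch of the three candidate DAGs, expressed with Python's networkx library (the node names are my own shorthand, not a standard):

```python
import networkx as nx

# Scenario 1: the Paid Search Ad directly causes the Demo Request.
dag1 = nx.DiGraph([("PaidSearchAd", "DemoRequest")])

# Scenario 2: Search itself is the cause; paid vs. organic is incidental.
dag2 = nx.DiGraph([("Search", "SearchClick"), ("SearchClick", "DemoRequest")])

# Scenario 3: prior Brand Awareness drives the search and the click;
# the ad click is just the last step in an earlier causal chain.
dag3 = nx.DiGraph([
    ("BrandAwareness", "Search"),
    ("Search", "PaidAdClick"),
    ("PaidAdClick", "DemoRequest"),
])

for name, dag in [("Scenario 1", dag1), ("Scenario 2", dag2), ("Scenario 3", dag3)]:
    # "Acyclic" (the A in DAG) just means causation never loops back on itself.
    assert nx.is_directed_acyclic_graph(dag)
    print(name, "->", list(dag.edges))
```

All three DAGs are consistent with the same observed data, which is exactly the problem discussed next.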
Under-determination
Here, we have an "Event," a "Signal," and THREE separate possible conclusions about what actually happened.
The technical term for this is "under-determination of theory by data" (UTD), and it happens with 99% of multi-touch attribution (MTA) analyses.
UTD is the idea that the evidence available (or how that data is being analyzed) is insufficient to determine what beliefs we should hold about cause and effect.
This is a problem with much of what passes as "data analysis" in marketing!
How do we fix this problem?
The short answer is to add more data to the analysis. But that alone is seldom helpful unless it's the right data.
Specifically, we need to bring in data that helps answer the question – "Compared to what?"
The technical term for this is a "counterfactual statement," which is just a fancy way of saying – "What would have happened if we had NOT been running that Google Ad?"
Would a "different version of the past" have created the same future or a different future? For example –
These are "counterfactual statements" that can be compared to the original statement, "We got that demo request solely BECAUSE we ran that Google Ad."
Btw...the "technical term" for that first counterfactual (that the demo request would have occurred anyway) is called the "Null Hypothesis"...a term you really don't need to know, but if some data geeks throws out the term, they just mean that the one thing didn't cause the other thing (but with fancy sounding words).
So how do we get data on these "counterfactuals"?
You likely already have all the data you need; you're just not using it. And if you don't have it, you can usually get it.
In this specific case, we can take all traffic to the Landing Page and split it into four quadrants: traffic source (Paid Search vs. Organic Search) crossed with outcome (Demo Request vs. no Demo Request).
This can be mapped into a Quadrant Diagram; a minimal sketch of the same split follows.
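Here's one hedged way to build that quadrant split in Python with pandas; the visit counts are hypothetical, chosen only to match the conversion rates used later in this example:

```python
import pandas as pd

# Hypothetical landing-page visits: traffic source x outcome.
visits = pd.DataFrame({
    "source":    ["paid_search"] * 200 + ["organic_search"] * 300,
    "converted": [True] * 110 + [False] * 90      # paid:    110/200 = 55%
               + [True] * 156 + [False] * 144,    # organic: 156/300 = 52%
})

# The four quadrants: (paid, organic) x (converted, not converted).
print(pd.crosstab(visits["source"], visits["converted"]))

# Conversion rate per source -- the answer to "compared to what?"
print(visits.groupby("source")["converted"].mean())
```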
See...all this data was already in your system; it just wasn't being properly used.
Now, this ONLY covers the case of whether the Search Ad was directly the "cause" and doesn't yet address whether prior Brand Awareness caused the Search Ad to be seen and clicked. We will get to that one later in the article.
We started with only this information – Traffic to the Landing Page from Paid Search Ads converted to Demo Requests at a 55% rate.
Now let's say we analyzed ALL the data and ended up with this instead – Traffic from Paid Search converted at 55%, and traffic to the same Landing Page from Organic Search converted at 52%.
This indicates only a ~6% relative difference between the Paid and Organic conversion rates (55% vs. 52%, i.e., 3 percentage points), with an error term that might close that gap to zero depending on how many data points you've collected.
Clearly, this is a very marginal difference (or maybe no difference at all). Now that you're armed with this additional information on the "counterfactual," you're in a better position to decide whether the Paid Search budget is likely to be worth the cost.
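On that "error term" point: a quick two-proportion z-test shows how easily a 55% vs. 52% gap can be pure noise at modest sample sizes. A minimal sketch using statsmodels, with the same hypothetical counts as above:

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: paid 110/200 = 55%, organic 156/300 = 52%.
conversions = [110, 156]
visitors = [200, 300]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

relative_diff = (0.55 - 0.52) / 0.52
print(f"relative difference: {relative_diff:.1%}")  # ~5.8%
print(f"p-value: {p_value:.2f}")  # ~0.51, far above 0.05: "no difference" can't be ruled out
```

At these sample sizes, the gap is statistically indistinguishable from zero; with many more visitors, the same 3-point gap might turn out to be a real (if small) effect.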
It may be worth continuing the Paid Ad campaign, depending on context, but now you can make an informed decision rather than just making a random guess on partial information.
For example, you may have poor SEO and no organic ranking for any of those keywords, so Paid is the only way to have any presence. But you know that this is paying to correct a prior failure (poor SEO), rather than paying because it's inherently effective.
Scenario Three – "Brand Awareness" Drove the Search Traffic
The 3rd scenario is more complex. Some baseline portion of the Paid Search traffic is driven by true discovery in the search results, but those results have been boosted by prior Brand Awareness (the Scenario 3 DAG sketched earlier).
So how would Brand Marketing impact a "Performance Marketing" tactic like Paid Search?
How Google Search is Like the Cereal Aisle at the Grocery Store
Search is the B2B equivalent of shelf-space merchandising in B2C retail.
When you walk down the cereal aisle of a large grocery store, there will be ~250 unique cereal brands.
But what you see is NOT 250 brands; what you see is ~5 to 10 brands that trigger awareness and memory recall, and 240+ blocks of noise.
Our eyes and conscious brain see what triggers familiarity and memory. This is why long-term brand marketing is so powerful; it literally changes what we see from noise that's simply ignored to a "signal" that we recognize.
This is why prior Brand Awareness can significantly increase paid and organic search effectiveness. People click on things that trigger recognition and recall.
This is why "Performance Marketing" works much better if it's done in conjunction with long-term Brand Marketing!
How to Measure Brand Impact on Search (Scenario III)
So if there's been effective prior Brand Marketing, then some portion of those Search results is natively generated by "search discovery" and some portion by "aided recall" of prior brand-awareness memories.
Untangling this can be difficult!
There are basically two approaches (there are more, but these are the easiest) –
The first is to dig through historical data (again, simply stuff you already have) and look at search response rates before and after the company started doing active Brand Awareness marketing.
The second approach is to pick geographies or audience segments and temporarily turn off brand marketing to those groups.
Because brand awareness has a decay curve, much of your audience forgets it within roughly 1 to 12 weeks for B2B (much shorter for B2C). So turning off brand marketing to a small portion (10% to 20%) of your audience for a month or two will start to surface effects.
If you see a drop in search performance (outside the normal week-to-week variance), then you can develop an estimate for how much of your "Performance Marketing" search results are actually being driven by prior Brand Marketing and how much is native to search discovery.
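A minimal sketch of how the holdout numbers might be read, using a simple difference-in-differences comparison (the region labels and demo counts are entirely hypothetical, and a real analysis would also account for seasonality and normal variance):

```python
# Hypothetical avg weekly search-driven demo requests, before vs. during the pause.
# "holdout" regions had brand marketing paused; "control" regions did not.
baseline = {"holdout": 120, "control": 480}
during = {"holdout": 96, "control": 475}

# Difference-in-differences: change in holdout vs. change in control.
holdout_change = during["holdout"] / baseline["holdout"] - 1   # -20.0%
control_change = during["control"] / baseline["control"] - 1   # ~ -1.0%
brand_driven_share = control_change - holdout_change           # ~19 points

print(f"holdout change: {holdout_change:+.1%}")
print(f"control change: {control_change:+.1%}")
print(f"search performance attributable to brand: ~{brand_driven_share:.0%}")
```

If the holdout regions drop well beyond the control regions' normal variance, that gap is your estimate of how much of your "Performance Marketing" was really riding on prior Brand Marketing.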
Reader Comments

Thanks for the share, Ashley Konson...and the constant reminder that interrogating your marketing dashboard for these "assumptions" is vital. A few marketers I know who are rigorous in this area: Sean Donnelly, Rob Assimakopoulos, Alfredo C. M. Tan, Rachel Fairley.
Strategic Director | Brand Guy | Strategist | Nice to chat with (says my mom) (2 months ago):
As a brand guy, I am used to digging through various analytics tools for brand data, and there is always a lot of interesting stuff to find. Take our own website, for example. We run Google Search Ads for all sorts of transactional keywords and phrases, like "brand consultant" or "corporate design agency". But most of the conversions on our site come from organic. And why is that? Because of our brand. We are a top-10 agency in Germany, and people search for our brand name. It would still be very stupid to turn off the ads. We believe they are a valuable touchpoint during agency research, because people spend a lot of time reading our cases, as we can see from the data. Can we measure how important these ads are for future sales? No. But when we meet with prospects, we very often learn that they read our cases and saw us as a possible partner because of the way we present our approach to branding and our way of thinking. So the ads that we thought were transactional actually aren't at all. They are top of funnel... The assumptions people make about their ads are very often faulty, as this very simple example from a small company like ours shows.
Helping international brands transform into the digital future, today. // Digital Strategist // Digital Due Diligence Advisor (6 months ago):
I really like the step-by-step approach you took to go through the examples. I would add just one comment; hopefully it's useful for those who want not only to avoid wrong conclusions but also to find answers (to the right questions). Your company likely has even more data than the one in the example. If you add a few more dimensions (the type of articles people were reading, time spent on articles, keywords they arrived from, time of year if your product has some seasonality, etc.; try to find the most relevant dimensions for your situation), then you will be able to see that some articles drive engagement but, due to their topic, have nothing to do with your demo requests. Which is not entirely bad. I've seen examples where the impact of a podcast was invisible in bottom-funnel KPIs for 17 months, and then data started appearing: "we heard about this method in a podcast".
Founder of Jewel Content Marketing Agency | Truths & Memes | Content Strategy, Thought Leadership, Copywriting, Social Media 'n' Stuff for B2B & Tech (6 months ago):
True, data always requires the context of other data to be useful. We get led astray when we focus on a single number and ignore the others.
B2B Product Marketing and Customer Marketing Leader (6 months ago):
Nicely said! Another way to think about it is to put yourself in the position of the buyer. How often do you, as a buyer, click a social post for something you're not familiar with and just request a demo? Chances are it's close to zero. Common sense says your buyers are not that different from you.