The New Dichotomy in Digital Analytics
When it comes to ease of use versus sophistication, not even the little girl from the taco ad can bring these camps together.


This article was inspired by two events: Contentsquare entering a definitive agreement to acquire Heap, a leading product analytics platform, and an Amplitude article on why they don't provide auto-tracking. The latter sure reads like a targeted piece, even though it was written before the announcement, since both Contentsquare and Heap use and champion auto-tracking.

Without taking a side, let's examine why this is even a debate, and cut through some of the marketing jargon to understand the potential risks and upsides.


If you're in the orbit of digital analytics, then the decommissioning of Google Analytics 3 (Universal Analytics) and the forced adoption of Google Analytics 4 has probably been a talking point - often a less-than-positive one.


The transition from UA to GA4 has been, and remains, a step change in how we think about capturing data on websites and apps. While the overall theme is one of moving from simple and user-friendly to more robust and scalable for modern digital stacks, what I'd like to explore is a shift happening in parallel (and intertwined with it): the growth of auto-tracking and auto-capture analytics tools, juxtaposed with the increasing adoption of upstream data capture.


This is a story of battling philosophies in digital analytics: ease of use versus robustness, activation versus storage, and business users versus engineers. We'll speed through the history of digital analytics and into the current battle lines.


The Beginning – Urchin.js

You've used Google Analytics, and so have most - or all - of your colleagues. Back in 2005, Google acquired a budding web analytics company, Urchin Software Corporation - the source of the original urchin.js tracking script - and shortly thereafter birthed the web's most commonly used free analytics product.


For those of you who remember those heady days (I don't, by the way), Google Analytics was nothing like the Google Analytics 3 we knew by 2022. Painting with a broad brush - since we're not here to wax lyrical about GA - over time Google Analytics spent a lot of energy making the product easy for business users, on the thinking that, aside from web operations, marketers, product owners, ecommerce managers and more would be diving into Google Analytics and needing to find what they need.


The core method of capturing information on websites didn't change much: there were tags you'd drop for the key goals on your site, and the rest was covered by Google. All the metrics we know and love are there because they've been grandfathered in from that legacy: bounce rate, time on page, pageviews, sessions and users.


Competing with Google then (and still) was SiteCatalyst - or as you might better know it now, Adobe Analytics. A common refrain about Adobe Analytics I've heard from 2016 through to this year is that it's like a Lamborghini, though one that arrives with a set of blueprints and all the parts for you to assemble and learn to drive yourself. The complexity that comes with Adobe allows for a much greater depth of data capture and utilisation, though at a higher implementation and ownership cost.


The starting blocks: how digital analytics became another industry where sophistication battles usability.

The Shake Up - Auto Tracking vs Data Stack

Keeping our story simple, we'll focus on the rise of two very different protagonists that shake things up - both cut from the same cloth as the Google vs Adobe dichotomy.


On one side, the introduction of auto-capture analytics tools. You've heard of, and probably use, these tools. The most common are Hotjar, Microsoft Clarity and Crazy Egg, while enterprise solutions like Contentsquare, Heap and Fullstory provide much greater functionality at a higher cost.


These vendors take the same philosophy as Google Analytics 3 to its extreme - make analytics as easy as possible for end users. They do this by installing one tag and capturing everything that happens on the page. For some of these tools, that means every click, hover or element view, without needing a developer or specialist to implement new tracking.


On the other side is the rise of the data unifiers. These are tools that usually sit in between where data is created and where it's captured by a traditional analytics tool like GA. The promise here is bigger picture: you can create consistent, clear data signals that are captured and sent to every location you'd like them to go, and nowhere they shouldn't. The added benefits come with a more complicated set-up, but allow much greater flexibility and adaptation to needs. Examples you've probably heard of are Segment Connections, Snowplow Behavioural Data Platform, and Tealium iQ and EventStream.


These vendors are extreme examples of the same philosophy as Adobe Analytics - make analytics as scalable and rich as possible.


Each of these two groups of tools - auto-capture and data unifiers - represents a new wave of analytics vendors, naturally extending the extremes of how analytics has historically served simple and sophisticated use cases.

How our continuum has expanded over time in both directions, providing more options for sophisticated data management, and easier tracking for insight.

Auto-tracking & User Friendly Analytics

It sounds great, right? 'Just one tag…' and never needing to worry about implementation again?! There are a few genuinely powerful strengths to auto-capture, so before any bubbles are burst, let's call them out.


Auto-capture means better historical data validity

  • Most products that use auto-capture collect data en masse, and rely on configuration or definition to report on key goals or events. For example, imagine we've been capturing all page views and visits for a website for 6 months. If your team decides to define a new event based on a page view (such as Order Confirm for a transaction), then as soon as it's defined, you'd immediately have data available for the previous 6 months.
  • With manual tracking, the same change would require a new event tag, which only captures data from the date of implementation - with no history at all!
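The retroactive-definition idea above can be sketched in a few lines. This is a minimal illustrative model, not any vendor's real API: the store keeps every raw interaction, and an "event" is just a definition applied to that history after the fact.

```python
from datetime import date

# Minimal sketch of retroactive event definition (illustrative names only):
# auto-capture stores every raw interaction, and "events" are definitions
# applied to that stored history after the fact.
class AutoCaptureStore:
    def __init__(self):
        self.raw = []  # every pageview, captured from day one

    def capture(self, kind, url, when):
        self.raw.append({"kind": kind, "url": url, "when": when})

    def define_event(self, name, predicate):
        # Applying a new definition to stored history yields data immediately.
        return [{**e, "event": name} for e in self.raw if predicate(e)]

store = AutoCaptureStore()
store.capture("pageview", "/checkout", date(2024, 1, 5))
store.capture("pageview", "/order/confirm", date(2024, 1, 5))
store.capture("pageview", "/order/confirm", date(2024, 4, 2))

# Six months in, the team decides "Order Confirm" is a key event. Because raw
# pageviews were captured all along, the history is available at once.
order_confirms = store.define_event(
    "Order Confirm",
    lambda e: e["kind"] == "pageview" and e["url"] == "/order/confirm",
)
# A manually tagged event created today would have no rows before today.
```

The contrast with manual tracking falls out of the model: a new tag only appends to `raw` from the day it ships, while a new definition reads everything already there.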


Auto-capture means easier maintenance (most of the time)

  • Ever had a tag break? Yep, it's a constant process to keep an analytics instance maintained and in fine health. Auto-capture gets around some of the tedium here by keeping on top of basic tracking - things like button and click tracking.


Auto-capture is faster and easier to set up

  • It's not just marketing - it really is much easier and faster to set up analytics with auto-capture! It also means that you don't need a developer (usually), and less technical business users can sometimes configure effective analytics themselves.


The Details: A Partial Solve

There are valid criticisms of auto-tracking, but to understand them we first need to appreciate the technical limitations of this method.


Auto-tracking almost exclusively works through a tag implemented on websites, or an SDK for apps. This is a significant call-out because many critical events that occur on a website - successful payment processing, order confirmation, customer profile information from logging in, or product SKU information - are sent to the website front-end by back-end events pushing to the data layer. That might sound too technical (or not technical enough?), so let's break it down.


Using a simple example to illustrate this difference, let's imagine a banking website where customers can apply for credit cards. When a customer applies for a credit card, after initially putting in financial information, each applicant is assessed for approval. Sometimes approval is granted immediately; other times more information may be required, or the application may be rejected outright.


Auto-tracking would capture the user as they navigate through the application - including the form fields they fill out and the next page they see - but it wouldn't capture the outcome of the assessment itself. It would only see the result rendered for the user, such as a page saying, "Please provide more information".


It's up to the engineers to surface these back-end events to the website in the data layer, such as a flag saying "assessment=info", which can then be configured to be read by an auto-tracking product (though it is not, itself, automatically tracked - see how this is getting away from us?).
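A toy model of that hand-off, with a Python list standing in for the browser's data layer. The `assessment` flag and every name here are hypothetical, mirroring the banking example above rather than any real tool's schema:

```python
# Toy model of the back-end-to-front-end hand-off described above. A plain
# list stands in for the page's data layer; all names are hypothetical.
data_layer = []  # stand-in for e.g. window.dataLayer on the page

def backend_assessment(application):
    # Server-side decision: on its own, invisible to page-level auto-capture.
    outcome = "info" if application["docs_missing"] else "approved"
    # Engineers surface the outcome to the front-end as a data-layer flag...
    data_layer.append({"assessment": outcome})
    return outcome

def autocapture_configured_read(layer):
    # ...which the auto-capture product can then be *configured* to read.
    # Note this step is manual configuration, not automatic tracking.
    return [entry for entry in layer if "assessment" in entry]

backend_assessment({"docs_missing": True})
captured = autocapture_configured_read(data_layer)
# 'captured' now holds the back-end outcome, but only because engineers
# pushed it to the data layer and someone configured the tool to read it.
```

The point of the sketch is the two manual steps: nothing about the assessment reaches the analytics tool until an engineer pushes it and a specialist maps it.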


In other words, as long as you only care about page events like clicks, what's in a URL, or the components on the page, then auto-tracking works a treat. Add a little complexity, and you're back to creating your tracking manually!


One other call-out: in more complicated tech stacks, there are many sources of data besides just a website or app! Decisioning engines, profile augmentation, product telemetry and others all generate data. Auto-capture doesn't focus on these other sources, except to include them (mostly manually); the focus is instead on digital user interfaces.


Auto-capture is about being simpler, easier, and focused on speed for business users.

Data Unifiers & The Modern Data Stack

Experts in these solutions will probably dislike me painting with too broad a brush, especially since data unification platforms are marketed more to engineers and data specialists - both groups that love to be technically correct (note: as a data specialist, I can say that). Painting broadly anyway: data unifiers act as an intermediary layer to collate and arrange your data, so that it's easier to understand what you're capturing and using.


A good example is Segment. If you haven't heard of it or used it before, the simple explanation is that instead of creating your events in Google Analytics or Adobe Analytics, you create them in Segment, then create rules to 'pass' those events through to your downstream analytics tools. So instead of dropping new tags, Segment acts as your data capture layer and allows a pageview event to be identical when it's passed through to multiple destinations.
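That fan-out pattern can be sketched in miniature. This is loosely modelled on Segment-style track calls; the destination names and mapping shapes are hypothetical, not either vendor's real schema:

```python
# Sketch of the unifier pattern: one canonical event, fanned out to each
# destination through a mapping rule. Destination shapes are hypothetical.
class Unifier:
    def __init__(self):
        self.destinations = {}

    def add_destination(self, name, transform):
        # Each destination registers a rule for reshaping the canonical event.
        self.destinations[name] = transform
        return self

    def track(self, event, properties):
        # One canonical event definition...
        canonical = {"event": event, "properties": properties}
        # ...delivered to every downstream tool from the same source.
        return {name: fn(canonical) for name, fn in self.destinations.items()}

unifier = (
    Unifier()
    .add_destination("ga4", lambda e: {
        "name": e["event"].lower().replace(" ", "_"),
        "params": e["properties"],
    })
    .add_destination("adobe", lambda e: {
        "eventName": e["event"],
        "contextData": e["properties"],
    })
)

delivered = unifier.track("Page View", {"path": "/pricing"})
# Both tools received the same pageview from one source, so counts reconcile.
```

Because every destination is fed from one `track` call, a pageview can never be defined one way in GA and another way in Adobe - which is exactly the reconciliation problem described next.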


The problem this solves is one you're probably familiar with: why isn't my data lining up between GA3 and GA4? When events that should be the same don't count the same, users lose trust in the data, and it becomes harder to reconcile between multiple platforms. It also unifies anonymous digital identifiers for users, so they aren't platform-specific.


Use cases for data unifiers range from the very simple: 'implement Google Analytics using the data from Segment, so I can switch to Adobe later without changing much.'

To the highly complicated: 'use a data unifier so I can unify signals from potentially dozens of sources into my comprehensive data stack.'


While the pros for data unifiers aren't as snappy in marketing, the benefits really are significant:

Consistent data capture and user profiles

  • Your data warehouse, CDP, email marketing tools and website reporting all share the same data source, so they match and are much more easily interoperable.

Ability to integrate data from almost any source

  • Want to bring in basic digital analytics? Easy! Want to layer in custom product signals? No worries! What about a host of other real-time triggered data points? All in a day's work with the right configuration.

Creates a 'cleaner' data ecosystem

  • When using data unifiers, it means that the data feeding your user profiles and decisioning is easier to work with, and usually requires less cleaning and fixing to activate. This is especially true for use cases with a Customer Data Platform (CDP), where it may avoid months of data onboarding and cleaning from multiple sources.


The Details: Costs and Bandwidth

Just like the over-used Lambo analogy for Adobe Analytics, data unification tools need the right ingredients to work: dedicated bandwidth to set them up correctly and maintain the implementation, plus specialist support in the form of engineers, data specialists and often product specialists (such as a Tealium-certified developer).


The costs for these tools are almost always worn in the maintenance and upkeep of the set-up, and as such are also prone to the same issues that block software implementations the world over:

  • Legacy technology can massively limit the overall value achieved from data unifiers, as it means that some parts of the tech stack cannot be included.
  • Operational barriers mean teams that need to capture data cannot easily access the data unifier (or vice versa), so businesses don't get the use they need from the tool.
  • It takes specialists, and those can be hard to find in market (and expensive to outsource), which can limit the extent to which a data unifier is used - moving away from the key use cases around customisation and unification of your unique data sources.


Data unifiers add complexity to help businesses better manage complexity - and set a scalable architecture for technical users.

Reality: Templatising and Looking Inward

I'm painting this as a story of A vs B when both auto-capture and data unifiers are beneficiaries of the templatised practices we have because of a decades-long legacy of digital analytics use cases. Auto-capture only works because business users know what metrics they want to track consistently across industries, and data unifiers only work because the thousands of vendors that use digital data signals all know what to expect when a business is collecting digital data.


While both represent the extreme ends of the (made-up) philosophical continuum of digital analytics, features from both camps are showing up with regularity across the board. Google Analytics 4 does auto-capture! Out-of-the-box implementation and Enhanced Measurement provide tracking with no work for the end user. At the same time, vendors like Contentsquare, Fullstory and Glassbox are madly expanding their lists of technology partners, growing the connectivity and interoperability of their solutions to better fit into a broader data ecosystem.


This dichotomy between user-focused analytics that can be up and running quickly and scalable analytics set-ups is based more on the capability that exists within a business than on any innate benefit of one approach over the other. If your tech and data teams have a lot of clout, and they're already great at getting data into people's hands, then you're probably best served by a data unifier. If developers are a scarce resource and you never feel like you have the time or specialists on hand to make good decisions, then auto-tracking solutions are likely a better fit.


When you're thinking about what is going to give you the best data, it's best to first look inward.

Your team's capability is going to be a bigger determinant of which path you tread. A simple question is: do it fast, or do it right?

Future: Automatically Unified?

Prediction time! Hard mode if you can't talk about AI.

Moving forward, I see 3 main trends that are likely to bring together a lot of what I've discussed and unify this separation long-term.


  • Auto-tracking will become more commonplace to reduce cost of ownership

Current iterations of auto-tracking focus on capturing everything according to a vendor-defined schema, though that schema could instead become customer-managed. The proposition then becomes less 'let us do analytics for you', and more 'remove the day-to-day grind of analytics upkeep and management', which is something we can all get behind.


  • Data unifiers go to market with more and more 'pre-defined' schemas

The needs of businesses are always going to have wrinkles, so there will always be some level of customisation. However, given the breadth of market penetration of data unifiers, they will come to market more and more with 'best practices' that guide businesses in a simpler implementation and configuration, which will in turn reduce the ongoing management costs.


  • Everything is product analytics

Our terms of 'data unifiers' and 'auto-capture' aren't in vogue. In fact, the market is moving away from 'web analytics' as a whole. As such, in 5 years the bet here is that the categories of digital experience analytics, digital intelligence, web analytics and app analytics will all be integrated into product analytics. This is a good change, and a shared language will also further help the templatisation and speed of delivery predicted above. Let's speak about digital product data and product analytics, and hopefully our lives will be simpler.


Evan Rollins is the co-founder of Drumline Digital, a digital partner committed to scaling experimentation for growth, activation and lifetime value. Evan cares about making digital better for customers through easier, more exciting experiences. His background in data science means he's not much fun at parties, but loves figuring out cool ways of using data to understand and design for customers.

