The 3 types of data to rule them all (when it comes to CRO)
Photo by William Warby on Unsplash


CRO. Conversion Rate Optimisation. Conversion Optimisation. UX Analysis. Customer Value Optimisation. Digital Experience Optimisation.


There are so many names... for the same (or similar) thing/s!


But there are a lot of optimisation impostors positioning themselves as CRO Kings & Queens. To save you some future pain, I've written a quick article to flesh out answers to the questions below (not necessarily answered in order):

  • How should a CRO programme look and feel?
  • What should I expect of a CRO agency or internal team?
  • What do I need to do to give an expert team the licence to do their jobs the right way?
  • And lastly...
  • What are the 3 types of data to rule them all?


So, what should a CRO programme feel like? What should we be looking at? How do we organise the insight and prioritise experiments (AB tests)?

But first, who am I, and why could my insight be helpful?


I'm Sean Clanchy, Managing Director - APAC of award-winning international ecommerce agency Swanky (a Shopify Plus agency).

I've been working in digital now for a little over a decade, and supporting brands on Shopify Plus since 2016. I have experience running CRO programmes for a range of brands and businesses turning over 8 and 9 figures annually, and have multiple $1m winning experiments validated to a 95% (or higher) statistical significance. I'm supported by a team of intelligent, creative and technically brilliant strategists, analysts, marketers, designers and developers who empower me to appear far cleverer than I actually am!

Many of my opinions have been borrowed from them and enriched by their shared experience and knowledge. To quote one of my favourite expressions (purloined from a colleague): "strong opinions, loosely held."


First, a few CRO tenets -

  • No HiPPOs - CRO is about data, experimentation and validation, not the Highest Paid Person's Opinion. If your team and/or agency don't set expectations and rules to mitigate unfounded opinion, or if they bow to pressure based on "I'm sure", "in my experience" or "that's a no-brainer"... run far, run fast!
  • At least 2 data sets - when reviewing any ecommerce store, it's hard to have all of your data point to the perfect test or conversion blocker. But when you put forward any hypothesis for improvement, have at least 2 data points indicating that your idea has legs (examples to come).
  • Set the goal that matches the test - not every test directly affects checkout completion, so don't always use that as your measure of success.


Now then. A CRO programme tends to have 3 core components - Analysis, Prioritisation, and Experimentation.

These are executed in pursuit of Insight, which can then be shared across teams to inform improvements to owned comms, paid marketing campaigns, and in some cases fulfilment, customer service and even product development.


Analysis

Analysing an ecommerce business, or indeed any digital experience, is like medical triage.

  • Is there a problem?
  • Where is the problem?
  • Which limb/page/device type/browser/people are affected?
  • What is the outcome of the problem?
  • What do we think the problem could be?
  • What is the problem?

In a digital context, how do we establish answers to these questions?


1. Quantitative Analysis

We analyse traffic to establish answers to:


"Is there a problem?"

"Where is the problem?"

"Which audiences are affected, and, what is the outcome/impact of the problem?"

Typically, traffic analysis tools like Google Analytics, Adobe Analytics or Matomo will help us understand whether our conversion rate has dropped, our average order value has fallen off a cliff, or page bounce/exit rates have surged on a particular page, set of pages or device type. We can also get an indication of which particular audience or audiences are affected, to help us start to identify the scale of the problem and the value potential of a solution.

Sometimes we may delve deeper into particular cohort analyses - cohorts by order value, subscription cycle count and purchased product type for example.

Ultimately, if we can identify who is affected, what the change in outcome is and where within the journey users are coming into strife, we're a little like Dr Watson and Mr Holmes... we have a clue!
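
To make that concrete, here's a minimal sketch (in Python/pandas) of the kind of first-pass quantitative check described above. The CSV export and column names (date, device, converted, exited_on_landing) are hypothetical stand-ins for whatever your analytics tool actually gives you; the point is simply to compare conversion and exit rates period on period, split by audience.

```python
import pandas as pd

# Hypothetical session-level export from your analytics tool:
# one row per session with date, device, converted (0/1), exited_on_landing (0/1).
sessions = pd.read_csv("sessions_last_120_days.csv", parse_dates=["date"])

# Split the data into the last 60 days vs the prior 60 days.
cutoff = sessions["date"].max() - pd.Timedelta(days=60)
sessions["period"] = sessions["date"].gt(cutoff).map({True: "last_60d", False: "prior_60d"})

# Conversion rate and landing-page exit rate by device and period.
summary = (
    sessions.groupby(["device", "period"])
    .agg(sessions=("converted", "size"),
         conversion_rate=("converted", "mean"),
         exit_rate=("exited_on_landing", "mean"))
    .round(4)
)
print(summary)

# Period-on-period change points to where (and for whom) the problem lives.
change = summary["conversion_rate"].unstack("period")
change["pop_change"] = change["last_60d"] - change["prior_60d"]
print(change.sort_values("pop_change"))
```

Swap "device" for landing page, channel or any other audience split you care about.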


2. Qualitative Analysis

My conversion rate has dropped. I've identified that my bounce and exit rates are up on pages A, B and Z.

My checkout has higher funnel abandonment on the Shipping page.

Wholesale enquiry form completions have jumped off a cliff.

Now that I've identified these symptoms (to continue the appropriation of medical terminology!), I can start to take a closer look at the why.

"Sore throat? Say 'ah' and stick your tongue out."

This is where we move from data analysis to proactive evaluation: the problem appears to be somewhere here... but what exactly is it?

At this point, your agency or in-house team should be leaning into UX research tools for things like heat-mapping, scroll-mapping and session recording to try to understand your consumers' behaviour. If you have access to a more powerful UX tool like FullStory or Hotjar, you may even be able to segment your recordings and behavioural maps to understand the difference in experience for your impacted cohort, or filter session recordings by rage clicks. These are great ways to quickly identify a problem navigational element (if that is the problem!).

And while we can talk about all these new-fangled tools - don't be afraid to leverage the basics! Take a screenshot of the core page/s that appear to be affected, apply a 15% Gaussian blur and ask someone unrelated to the project/site to circle the 3 things that stand out most in the blurred image. You'd be amazed how often a drop in conversion is down to calls-to-action losing their prominence through a design change.
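
If you'd like to automate that squint test, here's a tiny sketch using Pillow. The file names are placeholders, and I've interpreted "15%" as a blur radius proportional to the image width - an assumption, so tune the factor until only the genuinely prominent elements survive.

```python
from PIL import Image, ImageFilter

def blur_screenshot(path: str, out_path: str, strength: float = 0.015) -> None:
    """Apply a heavy Gaussian blur so only high-contrast, prominent elements survive."""
    img = Image.open(path)
    radius = img.width * strength  # blur radius as a fraction of image width
    img.filter(ImageFilter.GaussianBlur(radius=radius)).save(out_path)

# Blur a (hypothetical) PDP screenshot, then ask a fresh pair of eyes
# to circle the 3 things that stand out most.
blur_screenshot("pdp_screenshot.png", "pdp_screenshot_blurred.png")
```

If the circled elements aren't your calls-to-action, you have a clue.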


3. Voice of Customer

(what I like to call quantitative qualitative data, because for larger-traffic sites you can build a VoC dashboard and compare how feedback changes over time)


A lot of brands run NPS surveys that provide a benchmark but no insight - scoring 7 out of 10 tells me 9 things I'm not and 1 thing I am. It doesn't tell me why I'm a 7.

In the case of CRO (and ongoing CX maintenance/optimisation) you should be considering 2 streams of Voice of Customer data - passive and prompted.

Passive data comes from things like customer service tickets (easily exported from tools like Gorgias, Zendesk, Help Scout and Re:amaze by GoDaddy), as well as positive and negative product reviews (Yotpo, Trustpilot, Bazaarvoice). Prompted data typically comes from surveys - either on site via pop-ups or driven by campaigns using things like Typeform, Google Forms, SurveyMonkey or Survicate - and customer interviews. I do NOT believe in curated user testing. I DO believe in customer-feedback-driven insight.
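
For the larger-traffic sites mentioned above, here's a very rough sketch of how that "quantitative qualitative" VoC dashboard could start life. The CSV export, column names and keyword themes are all hypothetical - swap in whatever your helpdesk and review tools actually export, and whatever tags your CS team already use.

```python
import pandas as pd

# Hypothetical export of support tickets / reviews: created_at, body.
feedback = pd.read_csv("support_tickets.csv", parse_dates=["created_at"])

# Crude keyword-based themes - a starting point, not a substitute for
# actually reading the tickets.
THEMES = {
    "sizing_filters": ["size filter", "can't find my size", "filter by size"],
    "shipping_cost": ["free shipping", "shipping cost", "delivery fee"],
    "imagery": ["zoom", "close up", "image quality"],
    "compatibility": ["compatible", "fit my", "work with my"],
}

def tag_theme(text: str) -> str:
    text = str(text).lower()
    for theme, keywords in THEMES.items():
        if any(k in text for k in keywords):
            return theme
    return "other"

feedback["theme"] = feedback["body"].apply(tag_theme)
feedback["month"] = feedback["created_at"].dt.to_period("M")

# Theme counts per month - the backbone of a simple VoC trend dashboard.
print(feedback.pivot_table(index="month", columns="theme",
                           values="body", aggfunc="count", fill_value=0))
```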


A quick shout-out to one of our technology partners, Okendo, who have consolidated both prompted and passive feedback mechanisms into a single platform - check out Okendo.io if you're in the Shopify-centric ecom space!


Passive data, in my opinion, is the most powerful data of all, because often (though not always) it allows your customers to identify not only the problem but the ideal solution.

On the occasions as a CRO practitioner when you discover a large sample of common feedback similar to the examples listed below, you're usually on track for a series of winners in succession!


"Your site is crap! I've been looking for a size X in this colour for 5 minutes - how hard is it to just have decent size filters???"


"Your most expensive product is $25, as if I'm going to spend $200 for FREE Shipping!!!"


"I can't zoom on my mobile - there's no way I'm spending $1200 on a new XYZ if I can't even look at a close up!"


"I can't work out if this is compatible with my Samsung S20... Help?"


Now that we have explored data types and tenets of CRO success, let's talk about hypothesis prioritisation and expectation setting.


Hypothesis prioritisation - how to get on our priority list

To avoid putting forward irrational tests and investing time, effort and traffic in them, we have a rule:

Nothing moves on to our prioritisation Kanban until it has at least 2 data points indicating it may be the solution to a commercial problem.


So, what might constitute those data points?


Example 1 - an acceptable level of data validation for a hypothesis, though not a great one

  • Add to cart rate has dropped 20% period on period, 60d on 60d. (quantitative data)
  • UX analysis and a review of site changes show that we've recently rolled out a new PDP feature flagging 6 USPs (Unique Selling Points) with icons and plain text alongside the typical buying elements - price, add-to-cart button, variant selector and quantity field - on the product page. (UX review/qualitative - there is a tangible change indicating cause and effect in the affected location.)


Example 2

I have changed manufacturer for my products to increase quality and ensure a sustainable, ethical manufacturing process - I've put my prices up to reflect this increase in cost.

  • My CS team have flagged an increase in complaints about cost: "I love your products but how can you justify such a big jump in price -> I'm going elsewhere!" (quant/qual - Voice of Customer)
  • My add-to-carts and conversion rate have dropped (quantitative data)
  • On UX review of the Home and Product pages (and outbound marketing comms), we haven't adapted our USPs or communication to highlight this change in sourcing and tell the story of the product's higher value. (qualitative)


Actual Test prioritisation

Ice, Ice, Baby


We leverage a variation of the ICE prioritisation framework developed by Sean Ellis (the founder of Qualaroo and the original "Growth Hacker").


Impact x Confidence x Ease

We score each proposed test on three dimensions: impact - how big the change is (the level of visual change to the user experience); confidence - how confident we are, based on our data, that the test will result in a significant change in performance; and ease - of both test development and reporting. If we're designing something that will require a huge amount of future effort to maintain, the ease score will be poor.


Let's say we are going to test this hypothesis:


"By reducing USP clutter on the PDP for Brian's Bubblegum, we expect to see a 10% increase in Add to Cart button submissions during our test."


Now, if there's heaps of clutter around the PDP and we reduce it significantly, that's a visually impactful change, so we might score it a 5 out of 5.


5 x C x E


Are we confident that it's going to drive up Add to Cart submissions? I'm going to mark this one as a 3 (for the purpose of this example), because while layout clean-up and clutter tests do often have a fairly solid strike rate, Brian's Bubblegum is Organic, Produced in a Sweat-Free Shop, uses the Tears of Unicorns as a key ingredient, and 120% of Brian's profits are donated to charity... i.e. maybe his USPs are critical to the average consumer on his store when making a purchase decision.


So, 5 x 3 x E


Ease on this one - super easy! No new code to write or feature to deploy; let's just pencil in some CSS to hide the elements we're removing and, whizzbang, hey presto - the test is ready to roll. We score this a 5.


So in this case, our ICE is 5 x 3 x 5 = 75 (out of a potential 125)
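
If it helps to see the mechanics spelled out, here's a minimal sketch of ICE scoring over a small backlog. The test names and scores are made up for illustration; the 1-5 scale per dimension mirrors the worked example above.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    impact: int      # 1-5: how big/visible the change is
    confidence: int  # 1-5: how strongly the data suggests a win
    ease: int        # 1-5: how easy it is to build, report on and maintain

    @property
    def ice(self) -> int:
        return self.impact * self.confidence * self.ease

# Made-up backlog for illustration only.
backlog = [
    TestIdea("Declutter PDP USPs", impact=5, confidence=3, ease=5),
    TestIdea("Rebuild size filters", impact=4, confidence=4, ease=2),
    TestIdea("Add free-shipping threshold banner", impact=3, confidence=4, ease=5),
]

# Highest ICE score first - that's your testing order (ties broken below).
for idea in sorted(backlog, key=lambda t: t.ice, reverse=True):
    print(f"{idea.ice:>3}  {idea.name}")
```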


If, as we find on some sites, you have a huge number of addressable ideas to incorporate into your testing roadmap, you'll often end up with multiple tests on the same score.


In that case, our tie-breaker mechanisms are: first, how much traffic is going through the page/experience you're considering testing? The higher the volume, the faster you can potentially reach a statistically significant outcome.

And second, where in the journey are the users? If you have a site with millions of sessions and you can afford to (and depending on what the business needs), you may prioritise tests towards the bottom of the conversion funnel to unlock revenue faster, before moving back up the funnel to higher-volume pages - looking for further insight and improving funnel throughput.
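
On that traffic tie-breaker: if you want a rough feel for how page volume translates into test duration, here's a sketch using a standard two-proportion power calculation (via statsmodels). The baseline rate, target lift and weekly session numbers are placeholders, not recommendations, and real tests rarely split traffic quite this cleanly.

```python
from math import ceil
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

baseline = 0.03           # current conversion rate of the page (placeholder)
relative_lift = 0.10      # minimum lift worth detecting (10%)
weekly_sessions = 40_000  # sessions hitting this page per week, split 50/50

# Sample size per variant for 95% confidence and 80% power.
effect = proportion_effectsize(baseline, baseline * (1 + relative_lift))
per_variant = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                           power=0.8, alternative="two-sided")

weeks = ceil(2 * per_variant / weekly_sessions)
print(f"~{per_variant:,.0f} sessions per variant -> roughly {weeks} week(s) to run")
```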


What key things are most important in developing a winning testing culture?

  • Test to learn - don't test for the sake of testing (it's not about the volume of tests for tests' sake - have a solid, data-based hypothesis)
  • But do maintain testing velocity - while it's important to maintain the brand look and feel of a site, being a stickler for perfection in test design delays testing. As long as the test adheres to your brand guidelines (unless that's what you are testing) and clearly addresses your hypothesis, forgive a design that's out by a pixel in the interest of learning faster about your customers.
  • No HiPPOs! Opinions based on data - not experience! The world is changing, your customers are changing and their expectations are too. If you don't have data to support your opinion, defer to those in your team who do.
  • Celebrate wins and share knowledge - a lot of CRO teams operate in a relative bubble. What's to stop a team that's just run a series of significant, winning tests from sharing their learnings with the owned and earned media teams, lifecycle marketing teams and customer service crew, to ensure they are realising the value of uncovered knowledge? It's all about improving the customer experience, right?


My last point - don't be disgruntled by losing tests.

If you find a test loses significantly against the control - be happy!

You have confirmed that wherever you were testing and whatever you changed IS IMPORTANT to your customers. You may find a winner testing this element/hypothesis in a future test but at least you have developed some insight.

In the eyes of a CRO practitioner, nothing is more frustrating than flat test outcomes without significant change.
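
And for completeness, a quick sketch of how you might check whether a finished test really did move significantly against control - a standard two-proportion z-test with made-up numbers.

```python
from statsmodels.stats.proportion import proportions_ztest

conversions = [540, 460]      # control, variant (made-up results)
sessions = [18_000, 18_100]

stat, p_value = proportions_ztest(count=conversions, nobs=sessions)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Significant difference - win or lose, you've learned something.")
else:
    print("Flat result - the most frustrating outcome of all.")
```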


If you've found this article helpful, have feedback, or would like to chat to our team about improving your ecommerce performance, please don't hesitate to drop me a DM or jump across to the Swanky site to read more about our Data Analytics and Optimisation services.


Thanks for reading.

Sean
