Actionable Tips to Get Best out of your Conversion Rate Optimization Program


Did you know Amazon's Prime members convert at 74%, while non-Prime visitors convert at just 13%?

When I share this stat, it takes people a while to digest. So how is your online business doing? Chances are your reply is, "I wish I could reach even one-fourth of Amazon's non-Prime performance." But you are not alone, my friend: on average, the 2-3% conversion rate benchmark holds true for a typical online retail site.

This clearly shows the scope of work for owners of an optimization program in creating customer-centric experiences and maximizing revenue for their company. So how can we build a best-in-class optimization program and challenge the status quo? This post focuses on how to optimize your experimentation program and make it more effective, data-driven, and ROI-focused.

1. Audit the current CRO process & performance

Just as we clearly define an objective behind individual tests and day-to-day analysis, look at the bigger picture: the optimization program in its entirety. Establish key performance indicators to assess your current CRO process around Velocity, Quality, and Business Impact. If you are working for a client, you can draft a questionnaire and have a detailed discussion around the same.

Velocity: Evaluate how many test ideas get added to your optimization roadmap every month, and how many of those ideas convert into live tests. This gives context around the gaps: are we short of ideas? Or are we accommodating every request without prioritizing the ideas that have higher potential?

Quality: This ties back to the last point: once a hypothesis is selected with due diligence, how many tests reach statistical significance? Research from VWO shows a 14% win rate across all the tests that went live for their clients, so this can serve as an external benchmark to compare against and optimize. I have seen companies with win rates of 50 to 60%. Remember, a "win" here is not only a result where the variation beat the control, but any experiment that reached statistical significance and provided a learning.

Business Impact: Measure the average conversion lift for all statistically significant tests in terms of revenue per visitor and other KPIs. Extrapolate the winners' impact in revenue terms, but account for factors like seasonality. And ultimately, if you can gauge the ROI of the whole optimization program, that is a number to be proud of.

In short, you should do a detailed review across this CRO performance funnel:
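As a rough sketch of this audit, the three program-level KPIs can be rolled up from a handful of counts. All numbers and field names below are illustrative, not from any real program:

```python
from dataclasses import dataclass

@dataclass
class ProgramSnapshot:
    ideas_added: int          # ideas added to the roadmap this month
    tests_launched: int       # ideas that became live tests
    tests_significant: int    # live tests that reached statistical significance
    baseline_rpv: float       # revenue per visitor before testing
    winner_rpv: float         # average revenue per visitor across significant winners

def program_kpis(s: ProgramSnapshot) -> dict:
    """Roll the CRO funnel up into Velocity, Quality, and Business Impact."""
    return {
        "velocity": s.tests_launched / s.ideas_added,        # share of ideas that go live
        "quality": s.tests_significant / s.tests_launched,   # win rate (significant tests)
        "impact_lift": (s.winner_rpv - s.baseline_rpv) / s.baseline_rpv,
    }

snapshot = ProgramSnapshot(ideas_added=40, tests_launched=10,
                           tests_significant=4, baseline_rpv=2.00, winner_rpv=2.14)
kpis = program_kpis(snapshot)
# quality here is 4/10 = 40%, well above VWO's 14% external benchmark
```

Tracking these three ratios month over month makes it obvious which stage of the funnel (idea sourcing, launch rate, or win rate) needs attention first.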


2. Ensure KPIs, Strategies & Tactics are aligned:

Get rid of the random ideas that pollute the quality of the optimization program. Identify business objectives and the metrics that represent them, assess where you currently stand, and compare against the goals set for the year. Understand business priorities and challenges, including online revenue goals, paid channels, and funnels. Sit with the sales and marketing teams to discuss the buyer's journey from both B2C and B2B perspectives, buyer personas, and content strategy across personas and sales stages.

Use the above information to create a goal tree. Optimizely has created great resources for the goal tree and goal mapping exercise, so extend those to build one for your own business; this ensures KPIs, strategy, and tactics stay aligned. Check out E-commerce and Retail, Subscription-based Model, Media (Publishing), and B2B and Lead Generation: pick the template for your industry vertical and extend it for your business. The goal tree and goal mapping exercise builds natural momentum, inspiring us to experiment with a new tactic the moment analytics surfaces an issue, and vice versa. Prioritize KPIs that are closest to revenue, followed by metrics that align with cost savings, like CAC and CPL.
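In code, a goal tree is just a small nested structure linking one objective to its KPIs and each KPI to candidate tactics. This is a minimal sketch with made-up objective, KPI, and tactic names, not a template from Optimizely:

```python
# A goal tree: one business objective, the KPIs that represent it,
# and the tactics expected to move each KPI. All names are illustrative.
goal_tree = {
    "objective": "Grow online revenue 20% YoY",
    "kpis": [
        {"name": "revenue_per_visitor", "priority": 1},      # closest to revenue
        {"name": "checkout_completion_rate", "priority": 2},
        {"name": "cost_per_lead", "priority": 3},            # cost-saving metric
    ],
    "tactics": {
        "revenue_per_visitor": ["personalized product recommendations"],
        "checkout_completion_rate": ["simplify checkout form",
                                     "remove global nav in checkout"],
    },
}

def kpis_by_priority(tree: dict) -> list:
    """Return KPI names ordered so revenue-adjacent metrics come first."""
    return [k["name"] for k in sorted(tree["kpis"], key=lambda k: k["priority"])]
```

Keeping the tree in a machine-readable form like this makes it easy to check that every live test maps back to a KPI, and every KPI back to the objective.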

3. Leverage Consumer Psychology & Research on Cognitive Biases:

While you are brainstorming optimization tactics to drive the KPIs and strategy defined in the goal map, a great source of enrichment is aligning them with cognitive biases and persuasion principles. This helps you connect with prospects and influence their choices and decision-making. Additionally, it provides more context and story behind a test's final results. A good place to start is the 6 Principles of Persuasion by Dr. Robert Cialdini and the conversion heuristic by MECLABS.

Being thoughtful about cognitive biases while crafting a new hypothesis gives you an edge in anticipating user responses. This might sound black hat, but as long as your offering has real value, it is a legitimate tool to start a conversation or move a prospect deeper into the funnel. When I put this forward to people who are new to the optimization world, they consider it fluff, but trust me, it is not: a lift without context turns out to be meaningless.

The big idea behind experimentation is learning; that 5% increase in conversion that gives us a dopamine hit should not limit our thought process to conversion myopia.

Convertize has done an exceptional job of compiling over 250 persuasion tactics and absorbing them into its platform, something missing in other key players like Optimizely, Adobe Target, or VWO.

4. Enrich & Validate Ideas Roadmap with multiple sources

Experiment velocity is your top-of-the-funnel metric and focuses on quantity. To improve it, get a sense of how you are sourcing new test ideas and building the optimization roadmap. Include as many data sources as possible, both to validate your hypotheses and to come up with new ideas that address the customer's objections and challenges:

a. Issues identified from user testing

b. Issues identified from mouse tracking and heat maps

c. Issues identified from user session recordings

d. User funnel analysis from web analytics reporting

e. Online surveys

f. Online chat history

g. User feedback

h. Ideas from competitor sites: not a huge fan here, but sometimes your leadership will pay attention if you show them a competitor doing something inspirational.

i. The sales and client services teams' perspective on the buyer's top objections and challenges.

5. Create a Prioritization Framework to ensure best ideas bubble up:

Traditional prioritization approaches like PIE and ICE have been somewhat debatable, so it is a good idea to take a hybrid approach and assign a score to each idea based on specific, measurable parameters; refer to the Optimizely and ConversionXL frameworks. You can build your own scoring framework, but the bottom line is that every test idea/hypothesis needs to be scored and eventually prioritized, so that the test-quality parameter we discussed at the program level stays in good shape. You can start with scoring factors that are self-explanatory and move to more behavioral factors once you reach a certain maturity in understanding the impact of cognitive biases, user motivation, etc.

Let's take an example from the Google Merchandise Store to walk through the test scoring framework.

Idea/Question: Do we need global navigation in the checkout process?

Hypothesis: If we remove the global header from the checkout stage of the funnel (opening it in a new tab instead), it will remove unwanted friction and decrease drop-off at the Billing & Shipping, Payment, and Review stages of the funnel; the data below clearly suggests a major drop-off.

Now let's score this hypothesis:

a. Persuasion tactic (remove cognitive friction): 1

b. Ease of execution (custom/native): native: 1

c. Change location (top/middle/bottom): top: 3

d. Supported by analytics: 1

e. Traffic (high/medium/low): high: 2

f. Issue seen in user testing (yes/no): no: 0

g. Direct impact on revenue (yes/no): yes: 1

h. Includes paid traffic (yes/no): no: 1

Total score: 10

I also prefer to include the MECLABS conversion heuristic elements in the scoring when they align with the hypothesis, by simply adding the multiplier of the relevant element. For example, in the case above we have understood the user's intent or motivation to purchase, and our hypothesis reduces friction; so from the heuristic C = 4m + 3v + 2(i - f) - 2a we can add 4 + 2 = 6 points to the hypothesis score above.
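The scoring above is simple enough to automate across a whole roadmap. A minimal sketch, using the factor scores from the example and the multipliers from the MECLABS heuristic (the factor names are my own shorthand):

```python
# Factor scores for the "remove global nav in checkout" hypothesis,
# matching the list above.
factor_scores = {
    "persuasion_tactic": 1,       # removes cognitive friction
    "ease_of_execution": 1,       # native change, no custom code
    "change_location": 3,         # top of page
    "supported_by_analytics": 1,
    "traffic": 2,                 # high-traffic pages
    "user_testing_issue": 0,      # not observed in user testing
    "direct_revenue_impact": 1,
    "includes_paid_traffic": 1,
}

# Multipliers taken from the MECLABS heuristic C = 4m + 3v + 2(i - f) - 2a.
MECLABS_MULTIPLIERS = {"motivation": 4, "value": 3, "incentive": 2,
                       "friction": 2, "anxiety": 2}

def hypothesis_score(factors: dict, heuristic_elements: list) -> int:
    """Base factor score plus the multipliers of aligned heuristic elements."""
    return sum(factors.values()) + sum(MECLABS_MULTIPLIERS[e] for e in heuristic_elements)

# This hypothesis taps user motivation (4) and reduces friction (2): 10 + 6 = 16.
score = hypothesis_score(factor_scores, ["motivation", "friction"])
```

Scoring every hypothesis this way lets you sort the backlog and launch the highest-scoring ideas first, which feeds directly into the program-level quality metric.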

6. Don't stop a test just because you saw 95% significance:

When your test is pushed live, you get excited and keep an eye on it daily, but remember there is a lot of noise in the data collected in the initial days, so you might fall into the trap of a false-positive result.

To avoid this situation, it is a good idea to estimate how many visitors you need for each variation. Use a free tool like Optimizely's sample size calculator: you just input the baseline conversion rate, the minimum detectable effect (recommended 5% or above), and the statistical significance level (recommended 95% or above). This gives you an idea of the overall traffic required and how many days the test should run before declaring a winner or a learning from a statistically significant result. You can also do the sample size calculation with tools from Adobe or Evan Miller.

Let's take an example:

If you are seeing results like the above two days after the experiment launch, don't jump to conclusions. At around 11K users per day, it would take roughly 18 to 20 days to conclude the results, even if the confidence level reached 96% on day 2. So no peeking, please!
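The arithmetic behind such calculators can be sketched with the classic two-proportion z-test formula. Note this is a fixed-horizon approximation; commercial stats engines (e.g. Optimizely's sequential testing) will produce different numbers, and the inputs below are purely illustrative:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variation(baseline_cr: float, mde_relative: float,
                              alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variation for a two-sided two-proportion z-test."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + mde_relative)       # conversion rate we want to detect
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * (2 * pooled * (1 - pooled)) ** 0.5
                 + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p2 - p1) ** 2)

def days_to_run(n_per_variation: int, variations: int, daily_visitors: int) -> int:
    """Calendar days needed to fill every variation at the given traffic level."""
    return ceil(n_per_variation * variations / daily_visitors)

# Illustrative inputs: 3% baseline conversion, 20% relative MDE,
# 11,000 visitors/day split across 2 variations.
n = sample_size_per_variation(0.03, 0.20)
days = days_to_run(n, 2, 11_000)
```

Computing the duration up front, before launch, is what makes "no peeking" enforceable: the stop date is decided by the math, not by an exciting day-2 dashboard.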


7. Experimentation & Personalization are BFFs:

One of the most significant pieces of the personalization puzzle is the first-, second-, and third-party data about users that can form a meaningful segment. Segments can be based on gender, age, browser, device, campaign source, industry, company size, location, time-of-day and day-of-week parting, new vs. returning visitors, historical transactions, etc. But the new experience tied to each segment needs to be validated in terms of business outcomes; otherwise it is only as good as a test hypothesis. So all the legwork you have done in building a robust experimentation program, like goal tree creation, goal mapping, hypothesis prioritization, and sample size calculation, also enriches the personalization strategy and roadmap for your website. I will have a separate post on personalization, so stay tuned.
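A segment is ultimately just a rule over visitor attributes. A minimal sketch (the segment names, attribute keys, and rules are all made up for illustration):

```python
def assign_segment(visitor: dict) -> str:
    """Map visitor attributes to a personalization segment (illustrative rules).

    Rules are checked in priority order; the first match wins.
    """
    if visitor.get("transactions", 0) > 0:
        return "returning_customer"
    if visitor.get("campaign_source") == "paid_search":
        return "paid_prospect"
    if visitor.get("device") == "mobile":
        return "mobile_first_timer"
    return "default"

segment = assign_segment({"device": "mobile", "transactions": 0})
# Each segment's tailored experience should itself be validated
# against business outcomes, exactly like any other test hypothesis.
```

Once segments are explicit rules like this, every personalized experience can be run through the same prioritization and sample-size discipline as a regular A/B test.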

8. Create a knowledge management repository for all significant results:

One of the best parts of an optimization program is that from every statistically significant result you either win a lift or at least gain a learning; you never fail. So it is imperative to document these golden nuggets in a standard format, so that stakeholders from marketing and product management can refer to them at any time and apply them to new sections, similar pages, applications, or a website revamp. No matter how well you send out test results and learnings, people tend to forget and start from scratch for every new website initiative, so such a knowledge doc can rescue you from reinventing the wheel and cut down duplicated effort.
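The "standard format" can be as lightweight as one structured record per experiment, serialized so any team can search it. A sketch, with fields and values I have chosen for illustration:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ExperimentRecord:
    """One standardized entry in the experiment knowledge repository."""
    name: str
    hypothesis: str
    persuasion_principle: str   # the bias/heuristic the test leaned on
    segment: str
    result: str                 # "win", "loss", or "flat" -- all are learnings
    lift_pct: float             # observed lift (negative for losses)
    learning: str

record = ExperimentRecord(
    name="Checkout: remove global navigation",
    hypothesis="Removing the global header in checkout reduces friction and drop-off",
    persuasion_principle="friction reduction",
    segment="all visitors",
    result="win",
    lift_pct=5.2,
    learning="Distraction-free checkout improved completion; retest on mobile",
)

# Serialize to JSON so marketing and product teams can search past results.
entry = json.dumps(asdict(record), indent=2)
```

A folder of such JSON entries (or a simple spreadsheet with the same columns) is enough to stop teams from rerunning tests the program has already answered.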

We can structure experimentation in 7 steps with specific tasks, as shown in the process flow diagram below, which summarizes this entire post in one structured framework:

In their book "Be Like Amazon," Bryan Eisenberg and Jeffrey Eisenberg mention that Continuous Optimization is one of the four cornerstones of the Amazon brand; the other three are:

1. Customer Centricity

2. Culture of Innovation

3. Corporate Agility

So there is a lot more to be done beyond experimentation and personalization to achieve results as good as Amazon's. Your turn now: share your ideas on how to get the best out of your CRO program, the challenges you face, solutions, etc. Please share your insights by commenting below or email me.

Thank you.
