Startup Guide: Growth Hacking to Achieve Breakthroughs Using High-Tempo Testing
Source: GrowthEurope

In a startup, you frequently need to identify both the right problem and the right solution to that problem. Both parts are equally important and need different approaches. Further, this is not a one-time exercise but a continuous experimental mode and investment that a company must sustain. You have to keep forming hypotheses, test them, prove them right or wrong, and then repeat the cycle as fast as possible so that you eventually reach bigger successes.

I'm writing this article to share our learnings on how we set up the growth hacking culture, mindset, and execution engine at TravelTriangle (TT), leading to high-tempo testing across our multiple product lines to either validate a problem or find the right solution to it. One major source of inspiration for me during this journey has been Hacking Growth, written by Morgan Brown & Sean Ellis.

To start with and give you a little peek first: we were running 80+ experiments in parallel in a month through our growth POD. This POD has played a critical role in taking TT to its known scale and heights. A few of the major achievements of this POD at different points in time have been:

  • Jump in visitor-to-lead (V2L) conversion by 100% through a combination of progressive forms, a chatbot, and exit intent
  • Jump in V2L by 50% by optimizing marketing landing pages
  • Growing lead-to-conversion (L2C) by 60% by tweaking the funnel management workflow
  • Growing revenue by 30% by experimenting with our revenue instruments
  • Growing review % by 66% using the right combination of channel + message + time of notification
  • and many more...

All this was done with minimal tech members, with 100% confidence in the numbers reached within less than a month of picking up a problem/idea. However, the journey was not as easy as it looks at the later stages, and it took us time to build such a holistic experimental ecosystem running high-tempo tests. Let me lay down a 3-step guide on how you can set up your growth hacking ecosystem and then scale it up to a high-tempo testing environment.

Step 1: Closing the pre-requisites

  • Robust Data Infrastructure: If you can't measure it, you can't improve it. The first pre-requisite for a growth hacking ecosystem is to start measuring data everywhere and set up a robust data infrastructure for it in the company, generating D-7, D-1, and/or real-time reports wherever needed to draw insights and take timely action. This also removes the problem of multiple data sources, leaving one single source of truth for everything.
  • CEO buy-in: The journey will not succeed without your CEO's buy-in. The CEO is the person who aligns all stakeholders and commits to this strategic investment until the point it starts bearing fruit, and even beyond. Also, I mean buy-in here more in the sense of belief, not just approval from the CEO. :)
  • One great generalist leader accountable for this POD: In TT, we followed the RACI framework. It was very clear from the start that multiple accountable people mean no accountability. So you have to choose ONE great leader, quite a generalist, to be the accountable person, connect the dots across all departments, and steer this POD through unknowns/conflicts within the POD or with other stakeholders outside it (trust me, there are going to be a lot in the beginning). I'd recommend having a product/tech co-founder lead this role at the start, to set up the system/path for a future leader, if any.
  • Real-time tracker to review experiment results & cross-impact: This is needed to avoid spending a lot of time and effort just consolidating/formatting results, and to instead focus on insights and outcomes, moving quickly to shut down, scale, or tweak variation(s) in a timely fashion. Further, experiment owners should check the cross-impact on other metrics whenever the experiment metric improves, so as to clearly establish a bottom-funnel/material outcome and not just some pseudo-jump in a top metric. An exhaustive tracker ensures such things don't get missed; a minimal sketch of this cross-impact check follows below.
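
Below is a minimal Python sketch of such a cross-impact check. The metric names, values, and the guardrail threshold are illustrative assumptions, not our actual schema; the point is that a jump in the experiment metric is trusted only when the downstream metrics hold up.

```python
# A minimal sketch of the cross-impact check: trust a jump in the
# experiment metric only if downstream metrics hold up. Metric names,
# values, and the guardrail threshold are illustrative assumptions.
def lift(control, variant):
    """Relative change versus control, e.g. 0.20 means +20%."""
    return (variant - control) / control

def review_experiment(primary, metrics, guardrail_drop=-0.02):
    """metrics maps a metric name to a (control, variant) pair."""
    primary_lift = lift(*metrics[primary])
    regressions = {name: round(lift(c, v), 3)
                   for name, (c, v) in metrics.items()
                   if name != primary and lift(c, v) < guardrail_drop}
    if primary_lift > 0 and regressions:
        return f"pseudo-jump: primary {primary_lift:+.1%}, but {regressions}"
    return f"primary lift {primary_lift:+.1%}, no cross-impact regressions"

# V2L looks better, but lead-to-conversion quietly degrades downstream.
print(review_experiment("V2L", {
    "V2L": (0.040, 0.048),   # visitor-to-lead improves 20%
    "L2C": (0.120, 0.110),   # lead-to-conversion drops ~8%
}))
```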

Step 2: Prioritization / Methodology

  • Separate out product/ideas (AARRR Funnel): There are five stages of the user flow that need to be growth hacked to keep finding S-curves for the company. Acquisition (customer acquisition channels) needs a growth hacking team with expertise in the marketing and product domains. Activation (engaging users and leading them to a lead/conversion), Retention (getting users back again and again), and Referral (making people tell others about you) need product experts to hack growth. Revenue (revenue streams from new or returning users) needs a team with a combination of business and product expertise.
  • Prioritize ideas using the ICE (Impact, Confidence, Effort) framework: It's not about throwing ideas at the wall as fast as you can and seeing what sticks. The more focussed the approach at the start, the more intentional your experiment, and hence the bigger the impact. Also, don't be afraid of what-ifs: flip whole funnels and change the variables of the game to experiment with radical ideas across each metric too. Assign a weightage to each of the ICE criteria, say 40 for Impact, 35 for Confidence, and 25 for Effort. Score each bucket as 1, 3, or 5 and calculate the final weighted average; a 5 means the highest impact or confidence but the lowest effort (see the first sketch after this list). The main intent of this exercise is simply to place ideas in a relative ordering by probability of timely success, so that you can pick from the top. Impact should have the highest weightage since it is the most important factor. Confidence is kept slightly higher than Effort, but not by much, as I believe a tie between confidence and effort should be resolved in favour of the higher-confidence idea, since that increases the probability of success.
  • Decide success/failure criteria as well as a stipulated time to close the experiment before starting it: It is quite critical to decide upfront when to call an experiment a failure or a success, and by when. Decide lead and lag metrics with a projected outcome range to determine whether the experiment has succeeded or failed. Further, you should fix the sample size proactively, dividing your user traffic/pages accordingly, so as to reach a significant p-value within the stipulated time rather than letting the experiment run forever in hope. There are many online tools available to determine the amount of traffic/users to be pushed to the experiment and to expedite its closure accordingly; the second sketch below shows the calculation behind them.
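
To make the ICE ordering concrete, here's a first minimal Python sketch using the 40/35/25 weights above. The idea names and scores are made up for illustration; remember that Effort is reverse-scored, so 5 means the lowest effort.

```python
# A minimal sketch of ICE scoring with the 40/35/25 weights from the text.
# Idea names and scores are illustrative. Effort is reverse-scored:
# 5 means the LOWEST effort, so a higher total is always better.
WEIGHTS = {"impact": 0.40, "confidence": 0.35, "effort": 0.25}

def ice_score(idea):
    # Weighted average of the three 1/3/5 scores.
    return sum(idea[criterion] * weight for criterion, weight in WEIGHTS.items())

ideas = [
    {"name": "exit-intent popup",  "impact": 5, "confidence": 3, "effort": 5},
    {"name": "rebuild onboarding", "impact": 5, "confidence": 3, "effort": 1},
    {"name": "CTA copy test",      "impact": 1, "confidence": 5, "effort": 5},
]

# Relative ordering only: pick experiments from the top.
for idea in sorted(ideas, key=ice_score, reverse=True):
    print(f"{ice_score(idea):.2f}  {idea['name']}")
```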

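And here's the second sketch: the sample-size calculation behind those online tools, using only the Python standard library. The baseline and target conversion numbers are illustrative.

```python
# A minimal sample-size sketch for a conversion A/B test (two-sided
# two-proportion z-test), using only the Python standard library.
# Baseline (4.0%) and target (4.8%) conversion rates are illustrative.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Users needed per variant to detect a move from p1 to p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# Detecting a 4.0% -> 4.8% V2L jump needs ~10,300 visitors per variant;
# divide by the traffic you can allocate to estimate days to closure.
print(sample_size_per_variant(0.040, 0.048))
```
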
Step 3: Scaling up teams, execution, processes & systems

  • Right POD members: Create a growth POD with people bringing the right mix of tech, product, marketing & business expertise to thrash out ideas, implement them quickly, and validate them, so they can be shut down or scaled as permanent features of the product. This POD should be skilled in working with fewer details, deploying smart & quick solutions, and improvising/iterating quickly.
  • Mindset, Culture & KPIs of the POD: This is very much needed to align POD members and set them on common goals/missions. Lag KPIs of this POD are the % delta found in the respective metric, the # of experiments closed (should be higher), and the closure SLA (should be on the lower side). Lead KPIs are the # of impactful ideas on the board pending execution, the implementation SLA, as well as the closure SLA once the experiment is live. There needs to be a high level of teamwork at each step, even when testing the experiment before making it live. Further, the mindset should be to find a smart solution to a problem (even if it is a process problem). For example, we set up guidelines to launch an experiment to just a small % of users first and then do a phased rollout like 1%, 5%, 10%, 40/50% (the last number depending on the needed sample size), to smartly detect micro-bugs in the live experiment, pausing it immediately if needed, instead of testing it exhaustively upfront (a sketch of this bucketing follows this list).
  • KANBAN execution: Ideas can be segregated as "to be picked", "work in progress", "live & running", and lastly "closed". Control the # of ideas in each bucket as well as the SLAs for moving idea(s) from one bucket to another, so as to achieve the right outcome in the expected period of time. Manpower needs to be allocated accordingly to optimize for success here.
  • Weekly or bi-weekly sync-ups: Do periodic weekly/bi-weekly meetings. Review and track the above-mentioned KPIs, including the velocity of experiments getting launched, the churn-out ratio, as well as the SLA of experiment success/failure. Possible reasons for slow experimentation fall into three buckets: not enough impactful ideas coming out (idea scarcity); ideas being implemented slowly because the right architecture or solution is absent (slow implementation); and lastly, experiments not getting closed/shut down fast enough (slow discard). An experiment running too long is also a recipe for failure, hindering the team from moving on to the right, and probably actually successful, experiment.
  • If there is idea scarcity, the team needs to analyze data deeply to find funnel leakages, or do more customer surveys and research to put more ideas on the table and prioritize accordingly. If the POD is still not coming up with ideas, you need to assign/change people for better thinking or better problem identification. And if that too doesn't help, then you are doomed :)
  • If the velocity of experiments is slowing due to slow implementation, everyone will believe it can be solved by adding more people to the POD, but in reality you need to revisit your tech system/architecture and implement it right to enable fast implementation. Slow implementation for a single experiment also means that even if the first variation goes live in time, launching another variation based on the initial results again takes time. We started experimentation with basic cosmetic and UI changes on frontend pages, which then extended to dynamic pages/sections using React, which further extended to a more configurable event-driven and rule-based flow, and even further evolved to A/B testing the business with different predictive models. After reaching quite a scale of parallel experiments, reporting became a point of friction, with one analytics person needed to make each experiment live, so our team made all experiment reporting auto-generated, based on which funnel the experiment belongs to. Such measures helped us keep the POD size contained while increasing POD efficiency multi-fold.
  • Lastly, if the issue is slow discard, it is either because the sample traffic (needed to reach a significant p-value) is not being reached, or because the team is so attached to the experiment that they start trying to make it succeed instead of finding out whether it would succeed. For the former, either divide/plan traffic proactively to reach critical mass sooner or, if needed, take a calculated risk to ramp traffic up as quickly as possible, extrapolating missing data, if any, and hedging the negative effect. For the latter, always be prepared with the next hypothesis to move on to if the experiment fails. This helps you avoid getting attached to one experiment; you always have something next to move on to, and you keep failing fast.
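
Here's a minimal Python sketch of the hash-based bucketing behind such a phased rollout. The exact scheme is an illustrative assumption (our own stack evolved over time, as described above); the property that matters is that a user's bucket is stable, so widening the rollout only adds users and never reshuffles the results collected so far.

```python
# A minimal sketch of hash-based bucketing for a phased rollout.
# The hashing scheme is an illustrative assumption, not our exact stack:
# the property that matters is that a user's bucket is stable, so widening
# the rollout from 1% to 5% to 10% only ADDS users, never reshuffles them.
import hashlib

def bucket(user_id, experiment):
    """Deterministic bucket in [0, 100) for a user/experiment pair."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def in_experiment(user_id, experiment, rollout_pct):
    return bucket(user_id, experiment) < rollout_pct

# Phase the rollout 1% -> 5% -> 10% -> 40/50%, pausing if micro-bugs show up.
print(in_experiment("user-42", "progressive-form-v2", rollout_pct=5))
```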

Avoid falling into traps / be open to learnings

  • Data will tell you “what” but not “why”, so include traveler surveys and direct conversations to connect things from first principles.
  • If tests are neither positive nor negative, control always wins (see the decision-rule sketch after this list).
  • You need to track the lead metrics along with the lag ones, to confirm that the impact is coming from the solution and not from some other variable.
  • Don't bring personal attachment or bias to an experiment, as then instead of validating it, you start trying to make it work.
  • Do not get tempted to scale an experiment to 100% as-is for the immediate gain. Experiment solutions are often built for idea validation, not for scale.
  • Experiments never fail; hypotheses are proven wrong.
  • A few more: the most common hypothesis-testing mistakes.
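
To make the “control always wins” rule concrete, here's a minimal decision sketch using a standard two-proportion z-test. The counts are illustrative; the key behaviour is that a neutral result (p-value above alpha) keeps control, and only a significant positive lift ships the variant.

```python
# A minimal decision-rule sketch for "control always wins": ship the
# variant only on a significant POSITIVE result; a neutral result keeps
# control. Counts are illustrative; standard two-proportion z-test.
from math import sqrt
from statistics import NormalDist

def decide(conv_c, n_c, conv_v, n_v, alpha=0.05):
    p_c, p_v = conv_c / n_c, conv_v / n_v
    p_pool = (conv_c + conv_v) / (n_c + n_v)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_c + 1 / n_v))
    z = (p_v - p_c) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided
    if p_value < alpha and p_v > p_c:
        return f"ship variant (p = {p_value:.3f})"
    return f"keep control (p = {p_value:.3f})"

# Neither clearly positive nor negative -> control wins by default.
print(decide(conv_c=400, n_c=10_000, conv_v=430, n_v=10_000))
```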

AHA Moment

A lot of the time, companies have found their aha moment while analyzing their data or experiment results, discovering a large number of users taking advantage of a feature buried almost laughably deep within the site. So keep an eye out for what is working, and for whom. You often don’t know what you’re looking for until you find it. Examples: Yelp (pivoted from friends asking friends for recommendations to general business reviews), Facebook (getting to 10 friends in 14 days), Twitter (following 30 users), Slack (2,000 messages exchanged), Instagram (had too many features, but only photo sharing was the aha), Groupon (started as a funding campaign for causes and groups and found that only campaigns enabling users to buy better deals succeeded), YouTube (started as a video dating site and turned into what it is now after finding that people like sharing videos more). [Source: Hacking Growth]

For TravelTriangle, it was X conversions in Y days for our agents to stick almost forever. [Can't release the numbers here due to confidentiality]

With that, I conclude my 3-step guide. I can only insist that each point in each of the 3 steps carries quite a depth, and needs continuous iteration/oiling of the execution engine to make it smoother and faster, leading to a high-tempo testing environment that paves the path to finding the aha moment for your product as well as hacking growth across the AARRR funnel.

I'd like to hear the growth hack stories of other startups and entrepreneurs, and I'm happy to share more details, if required, with fellow entrepreneurs trying to set up their own growth hacking ecosystem.
