How to backlog growth hypotheses for a SaaS in the pre-product-market-fit phase

Preparing for 2024, the Claspo marketing team set out to establish a growth hypothesis testing process and find levers to speed up our learning cycle. To do this, we decided to switch to OKRs and start keeping a backlog of our hypotheses. Here are some learnings from that process.

Troubleshooting RICE

We had trouble comparing growth hypotheses for our SaaS using the RICE methodology. The Reach and Impact metrics made it hard to compare hypotheses fairly, even though they all targeted user acquisition. It felt like comparing apples to oranges: website reach and PR reach simply aren't the same in essence.

So, we had to figure out a way to score them more accurately.

Here is how we approached it.

We divided the hypotheses into streams according to their goals:

  1. Boost and optimize the existing user acquisition funnel.
  2. Prioritize backlog epics from the perspective of user acquisition.
  3. Experiment with brand metrics.

This helped us obtain comparable values in the RICE score within each stream and plan for the first quarter of 2024.
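For context, RICE scores each hypothesis as Reach × Impact × Confidence ÷ Effort. Here is a minimal sketch of stream-scoped scoring; the fields and sample numbers are hypothetical illustrations, not our real backlog:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    stream: str        # "funnel", "epics", or "brand" -- our three streams
    reach: float       # people affected per quarter; same unit within a stream
    impact: float      # 0.25 = minimal ... 3 = massive
    confidence: float  # 0.0 ... 1.0
    effort: float      # person-weeks

    @property
    def rice(self) -> float:
        # Standard RICE formula: (Reach * Impact * Confidence) / Effort
        return self.reach * self.impact * self.confidence / self.effort

backlog = [
    Hypothesis("Hero-section value message", "funnel", 35_000, 1.0, 0.8, 2),
    Hypothesis("Sign-up form copy test", "funnel", 35_000, 0.5, 0.5, 1),
    Hypothesis("Influencer outreach pilot", "brand", 5_000, 2.0, 0.5, 4),
]

# Rank hypotheses only against others in the same stream,
# so the Reach values stay comparable.
for stream in sorted({h.stream for h in backlog}):
    ranked = sorted((h for h in backlog if h.stream == stream),
                    key=lambda h: h.rice, reverse=True)
    print(stream, [(h.name, round(h.rice)) for h in ranked])
```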

Overall, the structure of our growth hypotheses backlog turned out as follows:

  • Problem group name – the user's high-level problem. For example, "Annoyance Safeguarding".
  • Funnel stage – a tag for grouping by marketing problem (the metric that needs improvement).
  • Problem hypothesis – a description of the problem. For example, "Some users abandon the Sign-Up process because they don't know how Claspo deals with pop-ups negatively affecting their journey."
  • Experiment – the testable statement. For example, "If we add a message to the hero section explaining the value of Claspo's settings that let marketers build an unintrusive user experience, we can increase the sign-up conversion rate by 25%."
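Put together, a single backlog entry can be captured as a plain record. This sketch simply restates the example fields above:

```python
backlog_entry = {
    "problem_group": "Annoyance Safeguarding",
    "funnel_stage": "Sign-Up",  # the metric that needs improvement
    "problem_hypothesis": (
        "Some users abandon the Sign-Up process because they don't know "
        "how Claspo deals with pop-ups negatively affecting their journey."
    ),
    "experiment": (
        "If we add a hero-section message explaining Claspo's unintrusive-UX "
        "settings, we can increase the sign-up conversion rate by 25%."
    ),
}
```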

Lessons we learned

Consider the stages of the Customer Journey Map (CJM) and channels.

Some hypotheses require verification in different ways. This is particularly true across channels and CJM stages.

During the last quarter, we tested several messaging hypotheses in PPC and on the website. Messages that generated +15% sign-ups on the website weren't successful in PPC.

Tag hypotheses by user problem, not by solution or marketing task.

Focusing on the problem helps us sequence experiments: we first verify that the problem is significant, and only then move on to other attributes. Formalizing problems in the Jobs-to-be-Done (JTBD) format also keeps marketing aligned with product.

We have learned to distinguish between key assumptions, tasks that should be carried out regardless of the experiment's outcome, and tasks that can only begin once the experiment reaches its target metrics.

Example 1 - Write blog posts highlighting competitor product issues regardless of whether the traffic-purchase experiment delivers enticing funnel metrics.

Example 2 - Before creating any prebuilt campaigns, assess whether they would increase conversion rates on existing traffic. Our product can help us test more than we initially thought.

We can validate most user acquisition hypotheses with Claspo

Once we formalized the Reach metric, our website became a platform for validating hypotheses faster (and sometimes cheaper). 35K monthly visits is enough traffic to run experiments on conversion-rate levers.
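As a rough sanity check, a standard rule-of-thumb sample-size formula shows why that traffic level is workable. All numbers below are illustrative assumptions, not Claspo's actual figures:

```python
import math

def visitors_per_variant(p_base: float, rel_lift: float) -> float:
    """Rule-of-thumb A/B sample size per variant (~80% power, ~95% confidence):
    n ≈ 16 * p * (1 - p) / delta^2, where delta is the absolute lift to detect."""
    delta = p_base * rel_lift
    return 16 * p_base * (1 - p_base) / delta ** 2

# Illustrative assumptions: 5% baseline sign-up conversion, +25% relative lift.
n = visitors_per_variant(0.05, 0.25)
days = math.ceil(2 * n / 35_000 * 30)  # two variants at 35K visits/month
print(f"~{n:,.0f} visitors per variant, roughly {days} days of traffic")
# -> ~4,864 visitors per variant, roughly 9 days of traffic
```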

In the third quarter of last year, we ran a few experiments and increased our activation metric by 25%, focusing solely on widgets from the Claspo template gallery.

This means that with just 30 minutes of setup time, you can have valid proof within a few days or a week.

This year, our goal is to increase blog subscriptions and stimulate trials and conversions to paid subscriptions. I'll keep you updated on our progress.

We are still thinking about how to solve some open problems:

  1. Planning experiments with PR & influencers

Some experiments do not have a clear measure of success from the beginning. In such cases, the objective is to establish initial benchmarks; this applies to PR, influencers, and viral content. We may not know how much reach into a particular audience is needed to achieve the expected increase in brand search.

  2. Planning experiments with paid social (Meta, X, etc.)

We lack experience with paid-social experiments and are unsure how the platforms' learning algorithms behave and what budgets they require.

We have allocated 10% of our budget for tests and experiments in Google Ads. However, we are unsure how much money is required to test one hypothesis in a paid social channel. Currently, we estimate the cost of reaching 10 target actions from average CPC benchmarks and conversion rates. This approach is not very cost-efficient, and we need a better strategy.
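For clarity, this is the kind of back-of-envelope estimate we mean. The CPC and conversion benchmarks below are placeholders, not our actual figures:

```python
def test_budget(cpc: float, conversion_rate: float, target_actions: int = 10) -> float:
    """Estimated spend to collect `target_actions` conversions:
    clicks needed = target_actions / conversion_rate; budget = clicks * CPC."""
    return target_actions / conversion_rate * cpc

# Placeholder benchmarks: $1.50 average CPC, 2% click-to-signup rate.
budget = test_budget(cpc=1.50, conversion_rate=0.02)
print(f"~${budget:,.0f} to test one hypothesis")  # -> ~$750
```

At a 2% conversion rate, even 10 target actions already cost roughly $750 per hypothesis under these placeholder benchmarks, which illustrates why we call the approach cost-inefficient.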

We are open to advice and exploring different perspectives to improve.

Lloyd Yip

Helping B2B Organizations Put Their Lead Gen On Autopilot By Building Systems | CEO @ Attract & Scale

10 months ago

I love your proactive approach! Aligning goals and leveraging OKRs is crucial for growth.

Great insights! Setting up a process for growth hypothesis testing is key to driving success.

Yassine Fatihi

Crafting Audits, Processes, Automations | FULL REMOTE Only | Founder & Tech Creative | 30+ Companies Guided

10 months ago

Love the strategic approach and growth mindset! Can't wait to hear more about your learnings.

Corey Preston

Founder of Mental Health Simplified - Leveraging Lived Experience - Transformational Coach | Speaker - I COACH professionals through the DARKEST moments of their life.

10 months ago

Regarding your PPC messaging tests, did you find any surprising trends in user behavior?

Papa bara Gueye

Student in Economics and Management Sciences

10 months ago

The concept of tagging hypotheses by user problem is a game-changer. It brings clarity to what we're actually trying to solve.
