The Emotional Dangers of Getting to MVP

Most of the teams that I’ve worked with have struggled with successfully executing on MVP. The idea of identifying and creating a Minimum Viable Product is compelling - it implies a level of precision and scientific thought that can make a team feel smart about the way they’re deploying their resources. And yet… so many teams completely fail at achieving their goals with an MVP approach.

Why? There could be many reasons - but one we don’t talk about often is how emotion influences the roadmap and feature planning process.


A Feeling of Accomplishment


Some teams put a lot of emphasis on the Minimum - what is the smallest thing we can build? This is a very emotionally satisfying approach for people who like to watch the backlog burn down fast. Look how much we got done! Look how fast we are! How rapid, how iterative! We can point to the number of releases we accomplished this quarter and rationalize - hey, even if the KPIs aren’t improving, we really knocked the ball out of the park in execution!

Of course what happens here more than anything else is that you only kind of built a feature. You didn’t actually build the feature itself. For example, if you were planning to build a recommendation system, maybe you have these parts:

  1. Content gallery which can be controlled by the algorithm per user.
  2. Hooks for gathering engagement data from which we can feed “reward” data back into the system to allow it to personalize and improve.
  3. Ways to collect Day 0 personalization data to have a good shot of matching new users with content they will like.

(This is of course a very rough generalization of a recommendation system - real ones are way more complex than this!)
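To make the dependency between these three parts concrete, here is a minimal sketch - hypothetical names, not a real implementation - of how they might fit together. Note what happens when only the gallery exists:

```python
from collections import defaultdict

class SimpleRecommender:
    """Toy sketch of the three pieces above - not a real recommender."""

    def __init__(self, catalog):
        self.catalog = catalog            # Part 1: content the gallery can show
        self.rewards = defaultdict(dict)  # Part 2: per-user engagement "rewards"
        self.day0_tastes = {}             # Part 3: Day 0 personalization signals

    def record_engagement(self, user_id, item_id, reward):
        # Part 2: hooks feed engagement back in so rankings can improve over time.
        self.rewards[user_id][item_id] = self.rewards[user_id].get(item_id, 0.0) + reward

    def set_day0_tastes(self, user_id, tags):
        # Part 3: onboarding data for brand-new users (e.g. genres they picked).
        self.day0_tastes[user_id] = set(tags)

    def rank_gallery(self, user_id):
        # Part 1: the gallery asks for a per-user ordering of the catalog.
        scores = self.rewards.get(user_id, {})
        tastes = self.day0_tastes.get(user_id, set())
        if not scores and not tastes:
            # Ship only the gallery and every user lands here:
            # the same unpersonalized list, so the KPIs have no reason to move.
            return list(self.catalog)

        def score(item):
            return scores.get(item["id"], 0.0) + len(tastes & set(item["tags"]))

        return sorted(self.catalog, key=score, reverse=True)
```

Shipping only the content gallery means every user hits that fallback branch, which is exactly why the KPIs stay flat until the other two parts exist.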

Sometimes the valid urge to “break stories down into the smallest possible pieces” completely loses sight of how the feature is actually going to work. In our example, we decide that we’re going to release it in three separate parts. After all - “build small, release often,” right? So maybe we decide to build the content gallery before we do anything else. We are going to release it first, and as fast as possible. After all, that way we can get data quickly, right?

So - what happens? We release the new content gallery, but… the KPIs don’t change. When we think about it logically, why would they? Maybe we have a new UI for our content in the gallery, but unless it was a UI problem that was getting in the way of conversion, that KPI will not change. There is no personalized data for existing users and none for Day 0 users. We have released a small piece of the big feature in an effort to be iterative, but it literally tells us nothing.

What happens next depends on the maturity of your organization and how aligned you are on the goals you’re working on. Some teams understand that you didn’t release the full feature and upper management is willing to trust that you’ll start getting results in just one or two more releases. Other teams who are more nervous about where their resources are being allocated in an aggressive and tight competitive environment might look at the data and ask “why in the world are we continuing to invest in this feature which is clearly a dud?” I have literally had projects killed for this reason - as much as everyone wants to be iterative, the patience of management does not always align with the iterative process.

So when it comes to the size of a feature, minimum for minimum’s sake can be dangerous and useless. But that’s not the only thing to look out for when trying to build the smallest successful feature you can.


A Feeling of Impatience

Another emotion that can drive poor planning is impatience. Sometimes we try to test only the things we don’t already know. It makes sense - we already know that users want to do X as a part of the feature, but what we really need to test is Y. Let’s only focus on building Y and then test it!

This is actually a really common fallacy in games. I was about to start testing a game to see if we were going to get the KPIs we needed to take it to the next phase of its development. To speed things up, the team decided to release the game in beta test countries without a tutorial. We already know what effect tutorials have, they reasoned; we can apply that assumption to the KPIs we get, and this will at least give us some helpful directional data. Tutorials are infamously hard to make and have to be constantly changed, so everyone always wants to do them last. So, what insight did we get from the test?

That this game needs a tutorial.

It’s obvious in hindsight (and to some of us even at the time) that there is a bare minimum of things needed in order to see the effect your MVP may be having. Sometimes you can be clever and take a shortcut to the result you want, but more often than not you’re overlooking the fundamental impact another part of the ecosystem might have. This is especially true in game systems, which have intricate game loops where features depend on each other to create the KPI effects you want.


A Feeling of Fear and Denial

So, am I saying you should always build the feature 100% as envisioned? Make it big the first time, because you might not get another chance to finish it? Don’t release until all systems are ready?

Absolutely not. Then you no longer have any concept of MVP at all - you’ve just skipped to P. You’re not even sure it’s VP (a viable product) because you haven’t tested it yet, even though maybe you spent six months making it. What emotion does this represent? It’s fear - and then later on, denial.

First, fear: “If I release this feature and it doesn’t perform, people will think I’m incompetent.” So you try to stack the deck. Make sure this feature has everything it needs to perform according to your research and hypothesis. Leave nothing to chance. Because if it doesn’t perform, you might have to spend the next two months with the Eye of Sauron on you as you scramble to figure out how to iterate your way to success and prove that all that development time was worth it.

I have certainly been guilty of this flaw before. I once spent months on a feature for a non-game app that made absolutely perfect sense to increase monetization, one that my game experience told me should juice conversion nicely. When it launched, I realized I had missed some very important motivations in how users interacted with pay conversion in this product - but I had already poured so much time into the project that any political goodwill I had to try to correct it afterwards was gone. I could have made a smaller version of this feature instead, but I was worried that anything lesser than the huge feature I made wouldn’t show the KPI increase I was looking for. Well, I made it big, wasted a lot of time, and still ended up with no KPI improvements.

What’s worse is what comes after: Denial. No, the feature is sound. It’s just this one part we need to fix. Oh, that didn’t fix it? Then maybe it’s this other part. It’s the sunk cost fallacy compounded by the fear the PM has about their reputation in the company. You could burn more development time on a lost cause that you might have been able to learn about months before with a smaller launch.


No Emotion - Just Logic

Sometimes you have to sacrifice the things you love to get the result you need.


Product Management is not just a science. To be a great PM you need to be creative. You need to look at data, look at competitors and come up with the right solution to move your product forward, ideally a fantastic solution no one has thought of before. You also need to be ruthlessly logical. You can’t afford to fall in love with your ideas. You have to have a method, and adhere to it without compromise.

So what method is most effective for determining the right scope for an MVP? I’ve tried many things over my career - I’ve done RICE prioritization. I’ve left the interpretation to feature owners, thinking accountability would drive precision. I’ve deferred to stakeholders to determine what to build. There are lots of other methods, but the one I have found to work the best, at least for me, is very simple. Product Management 101.

Problem. Cause. Solution.

This is great not just for prioritization, but also for determining MVP. In all the emotional examples I listed before, failure came from losing focus on what was most important in favor of chasing a feeling. This approach, on the other hand, is brutally simple and impossible to fool, if you actually have a good grasp of your product.

You’re doing this feature for a reason. You want to solve a problem. This could be a metric problem (our Day 1 retention is 10% lower than we need it to be!) or an opportunity, which is just a different kind of problem (we could be improving Day 0 conversion by 10% if we do this!).

Looking at it this way is great because it strips solutions out of the equation. Instead of your roadmap being a list of solutions…

  1. Add a new subscription tier
  2. Rebalance reward systems
  3. Remove the login requirement

…your roadmap becomes a list of problems to solve:

  1. We need to increase ARPPU for subscribers.
  2. 90% of users don’t complete the first mission.
  3. Day 0 content engagement is too low, which is driving up D1-7 churn.

This is good because it provides direction for your MVP. Deciding what to do and not do is all about what is important to solve the problem and what’s not. This helps you keep your approach focused.

So, then: Cause. Why is this problem happening? There are variations on this question, like The Five Whys, but I like to keep it simple. One cause at a time - though you may find that each cause could be a problem in itself, and you might have to solve a lot more problems before you can properly address the cause you started with.

If the problem is that Day 0 content engagement isn’t as good as we want it to be, what is the cause of that? The answer is likely found in the data. Maybe click-through from the content page is low - if so, the hypothesis could be that our recommendation system isn’t matching people with the content they want. Maybe there is too much friction in our registration process - if that’s true, we’ll see above-average churn in our registration funnel. Or it could be something you would never guess, which a round of customer surveys will reveal.
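As a rough illustration of what “found in the data” might look like - with made-up step names and counts - a quick funnel pass like this shows whether the friction hypothesis holds up:

```python
# Hypothetical registration funnel: users who reached each step on Day 0.
funnel = [
    ("landing",       10_000),
    ("signup_form",    6_200),
    ("email_verify",   3_100),
    ("profile_setup",  2_900),
    ("first_content",  2_750),
]

# Step-to-step conversion; an unusually low value flags where the friction lives.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step:>13} -> {next_step:<13} {next_count / count:6.1%}")
```

If one step converts far worse than the others (or than a comparable benchmark), that step becomes the leading candidate for the Cause; if the funnel looks healthy, the friction hypothesis weakens and the recommendation-matching one deserves more attention.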

You then have to choose which cause you think is most likely central to the problem - though it could be that they all are. One problem can have many causes. Then, for each cause: what is the solution?

So here is where the MVP takes shape. The solution cannot contain anything that does not directly solve the problem. And after you cut the feature down for scope it must still solve the problem.

So if a proposed solution is to remove login registration from the onboarding funnel, it might be a good idea to place limits on pre-reg content consumption, or to have some call-to-action or incentive to register at some point on Day 0. Otherwise your registration rates could crater, and a registered user is usually stickier than an unregistered one.

If you were to remove the registration call-to-action to make the MVP smaller, you have to ask if you’re still solving the problem. You might be, at least partially - Day 0 content engagement might improve - but you might actually increase D1-7 churn. So no - you’re no longer solving the problem. Therefore this part of the feature is a requirement for MVP.

Conversely, if you want to add a video at the beginning of the registration funnel that shows all the awesome content you have, that might not be necessary to solve the problem. Remember in this hypothesis, the Cause is that there is too much friction. The video might have been an MVP solution if we thought the Cause was that people don’t see how fun our content is early enough. But in this case, we don’t think that’s the Cause of our content engagement problem. So we should cut this idea to make a streamlined MVP.


The Feeling of Joy

The only emotions we should be concerned with are the ones we help the user to feel. MVPs are just an efficiency tool - they are not ends in themselves. We cannot let our own emotions dictate our methodology. By focusing only on making sure that the user feels as much joy as possible while using the product, you might find that your logically thought-out MVP has a much bigger effect than you thought!
