My Product Management Methodology

I find there are so many product management frameworks out there that we are spoilt for choice. They are all useful, they all add value, but no single framework covers all needs. This is a “short” article that covers the frameworks I have found to be beneficial over the years, as both a practicing product manager and a leader of product managers. Each framework is named, briefly described, and its benefits explained. I have provided links where you can learn more about how to actually use the frameworks.

My product management background is B2B and internal technology products, or T4T, as I like to call it (technology for technologists). Still, there is no reason my methodology cannot also be used for B2C product management. Additionally, I need to call out that this methodology represents my opinion, based on my experience, and not that of any of my present or former employers. However, it has been battle-tested by myself and my product management teams over the years.

My product management methodology has a simple flow, as illustrated in the header, but should not be mistaken for ‘waterfall’ just because it flows left to right. As every product manager knows after a few months of experience: You are doing every step simultaneously. It is not a unique diagram by any means, but it is a simpler view of a common Gartner visual of the product development flow that you can find all over the web.

At a high level, the flow is:

  1. Discover - Before deciding on what product to build, you need to gain an understanding of your target users, market segment, existing competition, and the opinions of key internal stakeholders. Hopefully, you are in the situation where your stakeholders are amenable to the outcome of your discovery, as opposed to looking for confirmation of an existing idea.
  2. Ideate - As useful as discovery is, it is a jumbled mess of information that cannot be leveraged easily in its raw form. I like to summarize the current state of discovery findings into a more manageable set of potential problems and insights. From these, I tend to ideate several possible solutions that could become part of a product, or even individual products themselves. For me, this step helps declutter the mind from a myriad of possibilities to a smaller number of realistic ideas.
  3. Align - Aligning on what problem(s) should be attacked, and what value should be delivered first, can be a significant stumbling block. But it is a stumbling block that must be addressed. If you blow right past this step and dive straight into building, it is highly unlikely you, your team and your key stakeholders will be on the same page. You will all think you should be building something different. It is far, far better, to get alignment before you start building.
  4. Roadmap - Once you have alignment on the problem to attack and the value to deliver first, you are on a much better footing to begin crafting your roadmap and planning what should be prioritized early, and what can wait (potentially forever). However, a poorly used roadmap can hurt the reputation of your product and your team, but I will leave that discussion until later.
  5. Build - Does this even need an explanation? Probably.
  6. Release - Once features are ready for production, at some point, they have to be released. Your target market can have a major impact upon how you release features into production. Often with B2B SaaS offerings, your customers' ability to adopt new features will slow down your release (or activation) frequency, especially if your product is embedded in theirs. You might be able to get away with simple release notes, or you may have to do a full dog and pony show to drive any significant adoption of new features.
  7. Measure - If you cannot measure the impact of a feature release on one or more product KPIs or operational metrics, why did you decide to work on it in the first place? I am fine with qualitative feedback from end-users if quantitative metrics are not available. But to this day, I do not understand why some engineering teams actively push back against instrumenting a product. The excuse is usually “higher priority features” and “we know we are delivering value”... and a simple question of “prove it, show me the numbers” is generally met with a change of subject, silence, or an even more staunch repetition of the original reasoning.
  8. Learn - This is one of the most critical steps if you ask me. Here, the product manager or the product team must decide what to do next. I argue that there are seven fundamental choices here, depending on what you have learned. The details of which I leave for later.

Discover

<Rant> As product managers, rather than diving straight into solving a problem, we start by breaking the problem down into smaller problems. The goal is to understand the problem more deeply and to identify which sub-problems need answering first. This way, you can reduce some of the risk and uncertainty by ensuring you have a good understanding of the right problems to solve before you dive into writing any code. Furthermore, if the problem you are trying to address is driving a new product or a new venture, you need to identify the riskiest components of the larger problem and attack those elements first. If you have to kill a product, perhaps due to an insurmountable risk, it is best to do it early while investment is still small.

Breaking down and clarifying the problem first is the focus of research work by Professor Corey Phelps, at McGill University. Phelps is also the author of a recent book on problem-solving: Cracked it!: How to solve big problems and sell solutions like top strategy consultants. As Phelps shows in his study, clarifying the problem first is a much better way to go than diving straight into solutioning. The HBR Ideacast episode “The right way to solve complex business problems” goes into more detail.

I highly recommend that every product manager listens to this podcast. In the first few minutes, Phelps states that the approach of jumping straight to building a solution because you “know it will work,” and then crafting a coherent story around it - generally speaking - is the wrong way to go. Instead, you need to slow down and be more deliberative in understanding the problem on a deeper level. This way, you can be more effective at solving it.

Early on in the interview, Phelps describes a common symptom of organizational pressures around efficiency and productivity that create an enormous incentive to jump straight to solutioning. Phelps states a common excuse is, “I don’t have time to carefully define and analyze the problem. I have to implement a solution as quickly as possible.” To add to this notion, Phelps says “You can then weave a very coherent story that makes sense, and you can use that story to jump very quickly to a solution that you just know will work.” </Rant>

Unsurprisingly, my favorite approaches for discovery are the staples of empathy interviews and user ride-alongs. Their benefits should be obvious - they are the best way to understand your users and your stakeholders, as qualitative data provides so much content and context, far in excess of what any survey can do.

Ideate

As you progress through your discovery, you need to start summarizing your findings somewhere. I like to use “single page” frameworks for this, as I find the space constraints force you to be really circumspect about what you put on that page. Each of the frameworks below forces a different perspective, to help you think about the problem in a different way. Like I said at the start of the article, there is no single framework that meets all of your needs. It is up to you how many, if any, to use.

Value Proposition Canvas (VPC) - Book: Value Proposition Design: How to Create Products and Services Customers Want - There are two parts to this framework, the “value proposition” and the “customer profile”. The “customer profile” piece helps you understand what your customers value and need. The focus is on summarizing the jobs they are trying to get done, the pains they currently encounter trying to do those jobs, and the gains that they would hope to realize from being able to complete those jobs.

Since you are building a “customer profile” based on several empathy interviews, you will likely come across items you need additional clarification or insights into, which you can leverage in your next round of empathy interviews. Additionally, do not be surprised if you start to see the need for two or more “customer” profiles as you continue your discovery. Or 'personas' if you prefer that term.

Customer Journey Map (CJM) - Medium.com: How to create a Customer Journey Map - One of the major weaknesses of the VPC is that it ignores the journey that the users go through trying to get their jobs done. This is where the CJM comes in useful. With this framework, you identify the steps that a user goes through to complete their job, and map where friction exists. Because it focuses on the steps a user goes through, it can help you identify which part(s) of the process could be the best to target first, due to their level of frustration.

Furthermore, as you actually build your product and start accumulating technical debt, you can expand your CJM into a Service Blueprint (SB). This way, you start to understand how your product's architecture and workflow support the customer's journey, and where technical debt may be introducing too much risk to your product’s customer success.

Now that you have a picture forming of both your users' needs, and the journey they go through, I like to ideate on the “value proposition” component of the VPC. The idea is to come up with “products and services” that will act as “pain relievers” and “gain creators” to what you identified in the “customer profile(s)” earlier. Hopefully this way, you can ideate a clear mapping of the intent of your product, over to the needs of your customers.

If you have an existing product, then you should start by putting your product's capabilities into the “value proposition”, and evaluating how it maps into the “customer profile(s)”. Do not be surprised, if you are adopting product management for the first time, that you discover a mismatch between your current product and one or more “customer profiles” (e.g. just as Professor Phelps commented - “I don’t have time to carefully define and analyze the problem. I have to implement a solution as quickly as possible.”). The VPC can help you understand your product-problem gap, if one exists.

Lean Canvas (LC) - Book: Running Lean: Iterate from Plan A to a Plan That Works - The VPC is not an easily digestible product idea on its own: it has no information as to how you would measure progress or success, nor how you would go to market and differentiate yourself from alternatives. To put more shape around product ideas I like to use the LC, created and popularized by Ash Maurya in his book “Running Lean”. I really, really like the LC as it forces you, on a single page, to be really clear about (1) the problem you want to solve, (2) the customer segments impacted by that problem, (3) how you would solve the problem, (4) your value proposition, (5) how you would measure progress and success, and other key aspects of a product. The LC focuses on the product-problem fit, and the single page constraint prevents a product idea from being too big. Instead, you end up creating multiple LCs for different candidate products, which is an awesome resource to refer back to later.

The LC has proven so beneficial that many other books have been published on how to leverage it for success. The best two that I have come across are The Lean Product Playbook by Dan Olsen (how to apply the LC in your work) and Lean Analytics by Alistair Croll (how your success metric can, and should, change over time). Both of these books are awesome.

Business Model Canvas (BMC) - Book: Business Model Generation: A Handbook for Visionaries, Game Changers, and Challengers - Despite the obvious bias I have towards the LC, it does have a weakness when it comes to exploring your business model options. Where the LC focuses on the product-problem fit, the Business Model Canvas focuses on the product-market fit and what your monetization options are. No canvas is perfect. The LC and BMC are each strong where the other is weak.

Align

Amazon Press Release and Frequently Asked Questions (PRFAQ) - Medium.com: PR FAQs for Product Documents - By now, you should have a small number of good product ideas in the form of LC/BMC, and everyone involved is chomping at the bit to start building, but I think you should hold your horses a bit longer. The risk of everyone thinking about implementation now, is that you are likely not on the same page as to the value intended to be delivered to your customers. Very much like the blind men and the elephant. This is where the PRFAQ comes into play.

A PRFAQ is a great tool to get everyone on the same page in terms of the problem you intend to solve and the direction you plan to head in. It works because it forces you to be customer-centric, document your assumptions, address concerns from multiple stakeholders, and avoid solutioning too soon.

The PR is a half to one and a half page narrative in the form of a “press release” that you would like to be able to make about your product, and focuses on the problem being solved, and pseudo-quotes on what you would want customers to say of the product once released. The narrative should not talk about the technology or architecture intended to be used.

The FAQ is a list of questions that are likely to arise in response to the PR. They can add clarity to the problem, and touch on initial thoughts for architecture and technology, go-to-market, and customer onboarding options. They are not intended to be deep dive responses, but should provide direction, manage expectations, and clearly call out where additional research or experimentation is required.

The challenge with adopting the PRFAQ framework, is how big of a vision do you put into the narrative? If your organization is not used to leading with a 3-year vision, then don’t use a 3-year vision for your first PRFAQ. You will probably fail because the narrative will be too intangible and fraught with risk, compared to the time horizon you normally operate with.

Instead, start off targeting the next major release or MVP for your product. This way the narrative of the PRFAQ is much more tangible as it matches a time horizon your organization is comfortable with. The challenge then becomes just the PRFAQ process itself, and your likelihood of successful adoption increases. The longer term vision for the product can then be included in the FAQ as “possible directions we could take post MVP” to show that you have given consideration to the longer term, but have not set it in stone yet.

As your organization becomes more comfortable with using the PRFAQ framework, the time horizon of the narrative PR can grow, and the target of the MVP or next major release definition can be moved to the FAQ. Depending on your preferences.

Roadmap

Roadmaps - Book: Product Roadmaps Relaunched: How to Set Direction While Embracing Uncertainty - Assuming that the needs of your customer change over time, often with each and every feature release… A traditional roadmap built around features and dates can make you look like a fool or a liar. If you deliver on your 12 month roadmap, you can be seen as a fool for ignoring how the needs of your users changed in that timeframe. If you adjust your roadmap each quarter to accommodate your learnings, some stakeholders will see you as a liar for not delivering on something you “committed to 9 months ago” - No matter how many “subject to change” caveats you put around a date driven roadmap, some people will take them as commitments.

I will be the first to admit that I have a controversial viewpoint, despite the constant restatement of roadmaps I am sure you are all familiar with. Personally, I am a big fan of using a “Now, Next, Later” roadmap structure that offers much more flexibility on priority changes, as opposed to a rolling 4 quarters. If you want to see some examples in the wild that use a similar approach, a couple of great ones are Slack’s public roadmap and AWS Elastic Beanstalk. No dates. However, you may still need to include some date driven roadmap items due to the seasonal nature of your product - releasing a new feature for TurboTax is of no help in May…

Additionally, I like to focus on problems, not solutions (thematic roadmaps) as I do not like to solution too early. Rather than the roadmap stating “build X”, I would rather say something like “optimize the creation of N”, and allow for the combined talents of my designers and engineers to determine if we should build X, Y or Z. I believe focusing on problems allows for more ideation and experimentation in addressing them. Unfortunately, this change in perspective can be hard to adopt if your organization is more solution focused. From my perspective, when you focus on solutions too soon, you turn your roadmap into a glorified backlog.
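To make the “Now, Next, Later” and thematic ideas concrete, here is a minimal sketch, in Python, of such a roadmap as plain data. The theme names are invented purely for illustration - note that there are no dates, and every entry is a problem to attack rather than a solution to build:

```python
# A minimal sketch of a thematic "Now, Next, Later" roadmap as plain data.
# All theme names below are hypothetical, phrased as problems, not solutions.
roadmap = {
    "now": [
        "Optimize the creation of new reports",
        "Reduce onboarding friction for new tenants",
    ],
    "next": [
        "Improve discoverability of shared dashboards",
    ],
    "later": [
        "Explore self-service data export",
    ],
}

def promote(roadmap, theme):
    """Move a theme one bucket closer to 'now' when priorities shift."""
    order = ["later", "next", "now"]
    for nearer, farther in zip(order[1:], order[:-1]):
        if theme in roadmap[farther]:
            roadmap[farther].remove(theme)
            roadmap[nearer].append(theme)
            return

# Priorities changed: pull the export theme forward one bucket.
promote(roadmap, "Explore self-service data export")
```

The `promote` helper illustrates why this structure is so flexible: when priorities change, a theme simply moves one bucket closer to “now”, and there are no dates to restate or apologize for.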

I am not going to go into detail on prioritization of work; instead I will just list my two current go-to frameworks for prioritization: RICE: Simple prioritization for product managers, and Weighted Shortest Job First. They are both reasonably data driven and reasonably logical, without being overly burdensome.
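Both frameworks boil down to simple formulas: RICE scores a candidate as (Reach × Impact × Confidence) / Effort, while WSJF divides the cost of delay by the job size (duration). A hedged sketch in Python, where the feature names and numbers are invented purely for illustration:

```python
# Sketch of the two prioritization formulas. The candidate features
# and their scores below are hypothetical, for illustration only.

def rice(reach, impact, confidence, effort):
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

def wsjf(cost_of_delay, job_size):
    """Weighted Shortest Job First: Cost of Delay / Job Size (duration)."""
    return cost_of_delay / job_size

candidates = [
    # (name, reach per quarter, impact 0.25-3, confidence 0-1, effort in person-months)
    ("Bulk import", 500, 2.0, 0.8, 4),
    ("Dark mode", 2000, 0.5, 1.0, 2),
]

# Rank the candidates by RICE score, highest first.
ranked = sorted(candidates, key=lambda c: rice(*c[1:]), reverse=True)
```

Note how the two formulas pull in the same direction: both reward high value and penalize large effort, which is why either works as a reasonably data driven tiebreaker without much ceremony.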

Build

I am going to be relatively brief on the “build” step, except for a few key points:

  • Use a consistent work breakdown structure that is supported by your tooling of choice. For example, I really like the structure of Theme, Initiative, Epic and Story offered in JIRA, but each agile management product has its own equivalent.
  • Give heavy consideration to adopting a thematic roadmap, built around on problems, not solutions.
  • Ensure your acceptance criteria are measurable, rather than a simple checklist. For example, since I have FIOS at home, I expect an “average of 750+ Mbps download speed over 24 hours” to be the norm, as opposed to “able to connect to CNET.com”. The latter is binary, and would be satisfied by 56Kbps (dial up modem speed), despite being a miserable user experience in 2020.
  • Refine your definition of done (DoD) to be more than just an agile workflow and unit and integration testing checklist. I like to include SLA-like metrics for performance expectations such as “all UI’s should render in an average of 5 seconds or less when the system is <75% loaded, and the minimum spec’d client is being used”. This way, you are forced to introduce stress testing into your development process, and will reduce the risk of releasing performance problems into production.
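The measurable acceptance criteria and SLA-like DoD metrics above can be turned directly into automated checks. A minimal sketch in Python, where the sample render times are hypothetical stress-test measurements:

```python
# Minimal sketch of an SLA-like acceptance check. The sample render
# times below are hypothetical stress-test measurements in seconds.

def meets_render_sla(render_times_s, max_avg_s=5.0):
    """True if the average UI render time is within the SLA-like target."""
    return sum(render_times_s) / len(render_times_s) <= max_avg_s

# Samples collected while the system was loaded below 75% capacity.
samples = [3.2, 4.8, 5.5, 2.9, 4.1]
print(meets_render_sla(samples))  # average is about 4.1s, within the 5s target
```

The point is that the check is numeric and repeatable, unlike a binary “able to connect” criterion that even a dial-up modem would satisfy.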

Release

At some point, preferably sooner rather than later, you will take your product to market or you will be releasing new features to your existing users. Your go-to-market (GTM) strategy is very dependent upon your market, and the level of competition present. The B2C GTM is much more cutthroat than B2B. B2B depends on which domain or business process you are targeting. Whilst internal T4T products may seem like they could be easier, the politics can be potentially much harder.

Several of the books linked so far have good GTM coverage, and there is a great article on Medium on why a killer GTM always wins over a great product. I can highly recommend that read. If you want a much more meaty understanding of market competition, there is Michael Porter’s Competitive Strategy book, which is an MBA staple. Whilst it can be a dry read at times, it will give you a really good understanding of market competition, how to stake out your territory, maintain your position, and build barriers to entry. Great book, dry read.

However, regardless of your product domain, “marketing” and “evangelism” of your product is critical. To help you with this, Amazon accidentally sent out their email template, and it is an awesome template to leverage for your own email product announcements. Awesome.

Measure

Repeating my original comment about measurement:

If you cannot measure the impact of a feature release on one or more product KPIs or operational metrics, why did you decide to work on it in the first place? I am fine with qualitative feedback from end-users if quantitative metrics are not available. But to this day, I do not understand why some engineering teams actively push back against instrumenting a product. The excuse is usually “higher priority features” and “we know we are delivering value”... and a simple question of “prove it, show me the numbers” is generally met with a change of subject, silence, or an even more staunch repetition of the original reasoning.

I am a firm believer that you need to measure the impact of all work done for a product, on the product KPI’s, operational metrics, or organizational OKR's it is intended to impact. How else are you going to know whether it was worth the effort? If you ask me, if you cannot identify what a roadmap item will impact, it should not be on your roadmap. Period.

If you are wondering where to start, or what metrics to target, go back to your LC and see what you listed there for how you would measure progress and success. Another resource is the previously mentioned Lean Analytics by Alistair Croll, and its accompanying One Metric That Matters (OMTM) blog post. The main message from Alistair is two-fold: (1) You should always be trying to impact one key metric at a time, focusing your efforts to make a larger positive impact in one place, as opposed to dividing your efforts and making minor increments in several places. (2) The metric you target should be chosen carefully, based on your product’s current context and business model.

This is a skill that will differentiate you from the majority of product managers out there: If you are able to proactively leverage the OMTM framework, have a good track record in understanding the magnitude of change you can drive, and know when to change to another OMTM based on the current context, you will be in at least the top 50% of product managers. (Caveat: for the purpose of this comment, I am excluding product managers who focus on growth hacking, as it is very, very metric driven. However, growth hacking is about attracting more users and converting them; it does not care about how to impact the value realized by users through the use of the product itself.)

Learn

As you start to see the impact of your releases on your metrics, and continue to conduct empathy interviews for qualitative feedback, you need to start asking yourself what you and your team should work on next. I always ask myself the following questions:

  • Should we just continue to improve the current product? Where is my product with respect to the current roadmap and our PRFAQ? Are we delivering what we expected to? Are we confident we are heading in a good direction and that we should continue to execute on the current roadmap? Do we need to make any roadmap tweaks based on recent learnings that don’t invalidate the PRFAQ?
  • Do we think we have an opportunity to expand the capabilities of the product? Based on our learnings with our current product, does it look like we have a good opportunity to expand our offering by incorporating one of the other LC ideas we put together earlier? Should we write a new PRFAQ to include the expanded capabilities?
  • Do we need to evolve our product in a different direction? Do our metrics indicate a lackluster reception of our product? Are we moving in the wrong direction? Should we go back to discovery and re-evaluate what we are targeting, and pivot our product to something different?
  • Is there an opportunity to evolve a second product? Has the use of our product identified another set of problems amongst our customers that could be a second product opportunity? Is there some unrealized value in our product's data exhaust that we could repackage to drive additional business?
  • Should the product be killed? Not all products are worth continuing to expand. If a product no longer adds value, it should be considered for end-of-life. And sometimes a product should be euthanized in order to avoid a negative ROI driven by such things as maintenance costs exceeding revenue or “value delivered” - this scenario often involves replacing the old and costly products with a thoroughly rearchitected, more efficient, new product.
  • Can we update our CJM and SB? You don’t want to be making product decisions based on an out of date CJM or SB. Things can get really messed up.
  • Do we need to refresh our product KPI’s and SLA’s? For where our product is now, are our KPI’s and SLA’s still relevant? Do we need to adjust our targets? Do we need to adjust our roadmap in order to improve performance against one of our metrics?

Keeping Your Methodology Fresh

You should inspect and adapt your product management methodology: What worked? What didn’t work? What framework have you read about of late that you would like to experiment with? Your methodology should not be set in stone, else it will become a millstone and a source of frustration. Quite likely, if you are leading a portfolio of products there will be minor differences in the way each product manager operates, which is fine, so long as you get together and regularly share learnings with each other. But, efforts should be made to not diverge too far from a common model, else it becomes hard to compare the maturity of product managers across products.

So, what have been the two most recent changes to my methodology?

  1. VPC - It only took one workshop exercise with the VPC, and then observing a product team using it in earnest, to see the value that the VPC brings. The VPC is a fantastic framework for mapping customer profile needs, to product capabilities, real or proposed. It helps you put shape around your MVP, or understand the gaps in your current offerings.
  2. PRFAQ - My first use of the PRFAQ was to put shape around a new product targeting a new market segment, and to bring alignment to engineering, product and the C-suite. It worked great, amongst 6 people. Subsequent ‘first’ attempts with larger groups of people (20+) ran into problems due to using an unfamiliar time horizon - too intangible, too big of a vision, too much risk. Hence my recommendation to start with a more tangible target such as the next major version.

Methodology Lite?

I was asked recently how I would skinny up my methodology to work in a more SOW driven engagement. Consulting essentially. This is an interesting challenge. If the SOW in question is basically a requirements document with no wiggle room on what needs to be delivered, then I would argue that very little of this methodology fits: With a firm set of explicit deliverables, traditional project management is a better fit.

If, however, the SOW is more flexible and written to target a specific list of problems, with data driven acceptance criteria, then several components of my methodology become applicable. My assumptions are that (i) the project is more of a partnership to solve a problem, (ii) there is flexibility in the specifics of the deliverable, (iii) deliverables will in part be driven by data discovered during the project, and (iv) some form of milestone or stage-gate framework will be adopted to ensure that the project moves forward appropriately, within its agreed timeframe.

The first combination I would experiment with is below, where “experiment” is the important word to remember: Having not applied a product management methodology to SOW driven work, I do not know if the items below are a good mix or a bad mix. The only way to find out is to experiment.

  • Empathy interviews - To dive into the needs and problems of the users and stakeholders more deeply.
  • VPC - To map the jobs/pains/gains of each customer profile, and ideate on what the product or service would need to offer to address them.
  • CJM / SB - Critical to understand the user and stakeholder feedback in the context of time and space.
  • Roadmap - You still need a plan to drive the engagement.
  • Build - Release - Measure - Learn - To ensure that you are “releasing” and learning frequently, due to the more flexible nature of the SOW.

I am reasonably confident that this mix could work, but it would be heavily dependent on the level of flexibility in the SOW itself. If anyone has experience with this, I would love to have a conversation.

Additional Frameworks

This is not everything, not by a long way, but this article is long enough already. If I included any more, I would have been better off writing a book. The frameworks mentioned so far are the core of my methodology, but there are plenty of other frameworks that I leverage when needed. If you decide to read through the links below, their usage and place should become clear.

John Biasi

Human-Centered Design researcher and strategist focused on practical outcomes

4y

Curious about your perspective on iterating and testing with customers during ideation, way before building. Some product managers and owners use measuring after release as testing. That’s concerning for two reasons: 1) How do we know we have the KPIs and measurements that matter; and 2) How do we know our idea and solution hypotheses resonate with customers? Placing the term iteration after the release could lead some product development teams down a tech debt path. Iteration best belongs in ideation. Improve, expand, evolve best belong where you have it, after release.

Neil Malpass

Product Person | Perpetual learner | Proud generalist

4y

This is a great Product Management overview. Thanks for sharing, Andrew. I'm curious about two things: 1. How have the non-date driven roadmaps worked in scenarios where multiple teams need to be aligned to deliver? 2. I didn't see anything related to quarterly planning e.g. OKRs mentioned here. Do you have any preferred methodologies in this space?

David Schofield

Product-first data executive, strategist, and architect. #dataiq100

4y

Thoughts on the behavioral facets of ideation, i.e. what behavior are we trying to change?

Settu Ganesh

Co-Founder at Rise. Building Wellness Games on Wearables Data.

4y

Thanks for sharing.
