Methods and Reasons to Measure and Evaluate the Output of Software Teams


My thoughts on “metrics madness”, value-based success factors, and data-driven vs. intuitive decision making


What are you or your team being measured on? If your stakeholders, investors, or company leadership took an objective look at your performance right now... what would define the success factors and influence perceptions of achievement? Do you set goals and track your progress against them? Does the team have OKR/SLA/metric targets? If you think about these kinds of questions, you'll arrive at a mixture of facts and feelings, and plenty of controversy and disagreement over what measures and targets are best. Many kinds of creative, collaborative, and complicated knowledge work are difficult to quantify. Software development, perhaps ironically, is one of these areas.

Aspirational goals leave room for interpretation because they can't always be fully achieved (“shoot for the moon and you'll end up in the stars”). Making the goals too easily achievable risks output that is unimpressive and boring, and it reduces future opportunities for the team. Hard things that are worth doing take grit and sustained commitment, not magical process optimizations or universally applicable frameworks. I'm sharing some of my thoughts on navigating “metrics madness” and focusing effort on what really matters to your organization and your customers.

Measuring progress against lofty goals can be motivating and can also highlight when a course correction or adjustment to the original plan is necessary. What metrics are most important, though? You'll find as much consensus on this as you would when asking about the best ingredients, steps, and tools to make the perfect soup. Not only does every gourmet have different tastes and preferences... but the size of the kitchen, the chef's team, and their complement of available tools and ingredients will influence their processes and outputs. The software development team is cooking up dishes for their stakeholders within their own unique constraints and with the tools at their disposal. The goal is not just to finish cooking something so the hungry stakeholder can finally eat, but to delight them with a thoughtful meal presented with care and accounting for their specific needs and preferences. Maybe you love this analogy, maybe you think it's nonsense. Maybe you have nothing to do with software development but have always had an interest in learning more about how software teams do their work. Either way, let's get to it!

Understanding Productivity and Its Measurement

The most commonly used metrics to measure software development team productivity/output include:

Source code based productivity metrics (a small extraction sketch follows this list):

* Number of Commits to the Repo/Version Control, or pull requests (PRs) submitted for review

* Lines of Code (LOC) added or removed

* PR reviews completed/accepted/rejected

* Merge conflicts created/resolved
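
As a rough sketch (and not a suggestion to rank individuals by these numbers), here is how a couple of the source-code metrics above could be pulled straight out of any Git repository. The repo path and time window are placeholders; PR and review counts would come from your hosting platform's API (GitHub, GitLab, etc.) rather than from `git` itself.

```python
import subprocess
from collections import Counter

def commit_counts(repo_path: str, since: str = "30 days ago") -> Counter:
    """Commits per author over a time window, via plain `git log`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--pretty=%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return Counter(line for line in out.splitlines() if line)

def loc_churn(repo_path: str, since: str = "30 days ago") -> tuple[int, int]:
    """Total lines added and removed over the window, via `git log --numstat`."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", f"--since={since}", "--numstat", "--format="],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        parts = line.split("\t")
        # numstat lines look like "12<TAB>3<TAB>path"; binary files show "-" and are skipped.
        if len(parts) == 3 and parts[0].isdigit() and parts[1].isdigit():
            added += int(parts[0])
            removed += int(parts[1])
    return added, removed

if __name__ == "__main__":
    print(commit_counts("."))  # e.g. Counter({'Dana': 14, 'Alex': 9})
    print(loc_churn("."))      # e.g. (1203, 876)
```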

Agile tool based productivity metrics (a small calculation sketch follows this list):

* Number of issues completed (user stories or bugs) per sprint/iteration/release

* Enhancements/refactors completed vs new bugs introduced

* Burndown - Are chunks of work being consistently completed, or is everything finishing right at the end of the sprint?

* New enhancement requests per quarter (indicates demand and can justify growth - represent as a cumulative flow diagram)

* Planned stories/bugs/points vs. completed/actuals

* Hours estimated vs logged (yuck!)

* Points (everybody's favorite productivity measure! *not*)
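
None of these require a fancy dashboard; once the issues are exported from whatever tracker you use (Jira, Azure DevOps, a spreadsheet...), the arithmetic is trivial. A minimal sketch, assuming nothing more than a list of issues with a sprint name, a point estimate, and a done flag (the field names here are made up and will differ per tool):

```python
from dataclasses import dataclass

@dataclass
class Issue:
    sprint: str
    points: int
    done: bool  # completed within the sprint?

def planned_vs_completed(issues: list[Issue]) -> dict[str, tuple[int, int]]:
    """Planned vs. completed points per sprint."""
    totals: dict[str, tuple[int, int]] = {}
    for issue in issues:
        planned, completed = totals.get(issue.sprint, (0, 0))
        totals[issue.sprint] = (planned + issue.points,
                                completed + (issue.points if issue.done else 0))
    return totals

# Hypothetical data, just to show the shape of the calculation.
issues = [
    Issue("2024-S1", 5, True), Issue("2024-S1", 8, False),
    Issue("2024-S2", 3, True), Issue("2024-S2", 5, True),
]
for sprint, (planned, completed) in planned_vs_completed(issues).items():
    print(f"{sprint}: {completed}/{planned} points ({completed / planned:.0%})")
```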

Production support based metrics (a small calculation sketch follows this list):

* Incident resolution SLAs - How fast do we resolve issues?

* Uptime % - Over time this can show trends in stability

* MTTR - Mean time to recovery - Average outage duration

* Release cadence - How often is production updated? How long do users wait for fixes and upgrades?

* Regressions/bugs released to production - Escaped defects - How often do we miss things in testing?
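
For illustration, a tiny sketch of how MTTR and uptime % fall out of a list of incident start/end times. The incidents below are invented; in practice they would come from your paging or monitoring tool.

```python
from datetime import datetime, timedelta

# Hypothetical production outages: (start, end) of each incident in the window.
incidents = [
    (datetime(2024, 5, 2, 9, 0), datetime(2024, 5, 2, 9, 45)),
    (datetime(2024, 5, 20, 14, 0), datetime(2024, 5, 20, 16, 30)),
]
period = timedelta(days=30)  # the reporting window

downtime = sum((end - start for start, end in incidents), timedelta())
mttr = downtime / len(incidents) if incidents else timedelta()
uptime_pct = 100 * (1 - downtime / period)

print(f"MTTR: {mttr}, uptime: {uptime_pct:.3f}% over {period.days} days")
```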

The Strengths and Limitations of Productivity Metrics

These common metrics are all tools for measuring how we are doing from a productivity perspective, but they do not tie directly into our organization's business goals. Without insight into productivity, though, it can be difficult to know if anything is wrong until deadlines are missed, team members complain about workload pressure, or the business outcomes become noticeably unsatisfactory. This could be a decrease in usage, revenue, subscriptions... whatever business drivers are important for your software. So we monitor productivity/output metrics and look at the trends to see if there's a thread to pull that leads to a root cause we can address before it impacts business outcomes.

There are some downsides that can come about from focusing too much on any one of the productivity metrics above:

  • The team may shift focus to primarily satisfying that metric as it has become the yardstick for their success - This is inevitable to some extent because feedback highlights what's being looked at. Combat this by looking at multiple measures holistically rather than a single productivity metric.

  • Negative competitive behaviors or feelings associated with delivering less than others, or questioning why a member of the team isn't pulling their weight - The key here is that these should be team metrics, and can't always be broken out into individual measurements in an equitable fashion (team member X may spend time on non-measured but critical tasks that team member Y has no visibility into).

Different metrics may be more relevant or valuable depending on the team's goals, the product or service they are working on, and the industry or domain they operate in. All in all, it's tough to pick out the right mix of metrics to measure progress against your team's goals. We want to do it anyway because it helps us with prioritization and justification for change, both of which are hard things for any organization to do, but critical to achieving success. So how do we *hopefully* define a sound method for tracking against our goals without succumbing to “metrics madness”?

"Measuring programming progress by lines of code is like measuring aircraft building progress by weight." - Bill Gates


Go Beyond The Productivity Metrics and Implement a Tailored Approach with Value-Based Measurement

Sit down with the team and align on the desired business outcomes of what they are working on. These should be plain-language “reasons for being” for the team. They do not need to cover all the value the team creates, or every activity the team performs. They should, however, succinctly cover the biggest problem being solved by the team's work. Some examples:

  • Keep customers using our product for as long as possible - maximize session length
  • Drive sales / subscriptions
  • Enable [customer/user] to do [task] as efficiently as possible

You might have a North Star Metric that is easy to tie to data and business outcomes. If so, you can zero in on that. Often, though, the situation is not as clear-cut.

For each of the “reason for being” business outcomes, discuss what kinds of data we have access to now, or could get access to soon, in order to track whether we are moving in the right direction. Then look at productivity metrics that could get us there. Think through how to tie things like lines of code, bugs fixed in production, or planned vs. actual work directly into these business goals. Let me know if you are able to do that.

There's a disconnect there, right? It would be like a hospital board trying to measure the effectiveness of doctors and nurses by how many IVs were placed, or how quickly operating rooms were turned over between surgeries. Those are important metrics to have access to from a business administration perspective, but they won't lead directly to healthier patients. Having a team that is aware of the desired business outcomes and focused on achieving them... working with a group of people they genuinely enjoy spending working days with? That's magic. The team will be able to come up with a set of measurable ways to decompose and track completion of the technical tasks required. While it is difficult to tie them together directly, sometimes there is math you can do that connects the productivity, support, and value-based metrics (more on those next) to business outcomes. Run the numbers, and validate any assumptions with the business. Backtest those assumptions if you can (was our customer satisfaction, revenue, [insert metric here] higher/better when we were releasing code to production every 2 weeks instead of every 2 months?).
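
Backtesting doesn't have to be elaborate. As a sketch with obviously invented numbers: line up a productivity metric (say, releases per month) against a business metric (say, monthly CSAT) and check whether they move together. Correlation is not causation, but it is a cheap first filter on an assumption before you present it to the business.

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical monthly figures pulled from release history and a survey tool.
releases_per_month = [1, 1, 2, 2, 4, 4, 5, 6]
csat_per_month     = [3.6, 3.5, 3.8, 3.9, 4.1, 4.2, 4.2, 4.4]

r = correlation(releases_per_month, csat_per_month)
print(f"Correlation between release cadence and CSAT: {r:.2f}")
# A strong positive r supports (not proves!) the "ship more often" assumption;
# confounders like team size or seasonality still need a human sanity check.
```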

Value-Based Metrics to Consider in Addition to the Productivity Metrics

Calling a metric value-based means focusing on the value delivered to customers and the business, rather than just the output or activity of the development team (which is what the productivity metrics above capture). Value-based metrics aim to ensure that the output of the work aligns with business goals and customer needs, ultimately driving organizational success. Not all of them will fit your product/team/organization. For example, it might not be possible to talk to or survey your customers/users. Maybe you don't have access to analytics about usage of your product (fix that first if you can).

Here are some key value-based metrics:

Customer/User Satisfaction (surveys/interviews required):

  • Customer Satisfaction (CSAT): How your actual customers/users rate their experience using your software, usually on a 1-10 or 0-5 star scale across various factors that roll up into an overall satisfaction score.

  • Net Promoter Score (NPS): A little different from CSAT, NPS measures how likely a user is to recommend your software to other potential users. Someone can be very satisfied but not think your product would be a good fit for others; NPS differentiates that admittedly odd case.

Note: Both CSAT and NPS require the ability to engage with customers/users, which not all teams can readily do. But take the initiative and do it, even if it goes against the grain in your organization. See if you can do some customer research/surveys, or implement a feedback mechanism into your software. Tracking CSAT and NPS over time is one of the best ways to know if the team's work is moving things in the right direction and achieving the ultimate goal of happy, excited customers.
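
Once you have raw survey responses, both scores are one-liners. A minimal sketch (the response lists are made up, and the “top box” CSAT convention shown is just one of several in common use):

```python
def nps(scores: list[int]) -> float:
    """Standard NPS on a 0-10 scale: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

def csat(scores: list[int], satisfied_threshold: int = 4) -> float:
    """One common CSAT convention: % of responses at or above 'satisfied' on a 1-5 scale."""
    return 100 * sum(s >= satisfied_threshold for s in scores) / len(scores)

# Hypothetical survey exports.
print(f"NPS:  {nps([10, 9, 9, 7, 6, 3, 10, 8]):.0f}")   # 25
print(f"CSAT: {csat([5, 4, 4, 3, 5, 2, 4]):.0f}%")      # 71%
```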

Product Usage and Adoption Metrics:

  • Feature Usage: This metric tracks how often and how extensively new features are used by customers, indicating the value and relevance of the features developed. Lesser-used features might need more attention, or maybe they aren't a good fit at all and should be left alone or removed.

  • Adoption Rate: This measures the rate at which new users begin using the software or trying new features the team introduced, reflecting the product or feature fit and the value being delivered to the entire user base by any given enhancement or fix. It could also indicate that more communication about new features is needed.

  • Customer Retention Rate: This metric measures the percentage of customers who continue to use the product over a given period, indicating long-term value and satisfaction. Many analytics platforms measure new/churned/unique users for you (a small sketch of the raw calculation follows this list).
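
Most analytics platforms report retention directly, but the underlying calculation is simple if all you can export is a set of active user IDs per period. A small sketch with invented identifiers:

```python
def retention_rate(previous_users: set[str], current_users: set[str]) -> float:
    """% of last period's active users who were still active this period."""
    if not previous_users:
        return 0.0
    return 100 * len(previous_users & current_users) / len(previous_users)

# Hypothetical active-user sets exported from an analytics tool.
april = {"u1", "u2", "u3", "u4", "u5"}
may = {"u2", "u3", "u5", "u6"}
print(f"April-to-May retention: {retention_rate(april, may):.0f}%")  # 3 of 5 -> 60%
```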

Revenue Impact Metrics:

  • Revenue Growth: This metric tracks the increase in revenue attributed to the product or specific features, directly linking development efforts to financial performance. This is hard to do outside of an app that takes orders/payments; if you can use this one, lucky you!

  • Cost Savings: This measures the reduction in costs achieved through the implementation of new features or technology, reflecting the economic value delivered. Does a new productivity feature reduce the need for overtime? That saves the company money.

Quality Metrics:

  • Escaped Defects: This metric tracks the number of defects found in production, indicating the quality of the software delivered and its impact on customer satisfaction.

  • Change Failure Rate: This measures the percentage of changes that result in failures/rollbacks, reflecting the quality and reliability of the software updates. A small sketch of both calculations follows.
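
Both quality metrics reduce to simple ratios once the counts exist; the hard part is agreeing on what counts as a “failure” or an “escaped” defect. A sketch with made-up quarterly counts:

```python
def change_failure_rate(deployments: int, failed_deployments: int) -> float:
    """DORA-style change failure rate: % of deployments causing a failure or rollback."""
    return 100 * failed_deployments / deployments if deployments else 0.0

def escaped_defect_rate(defects_in_prod: int, defects_total: int) -> float:
    """% of all defects found in a period that escaped to production."""
    return 100 * defects_in_prod / defects_total if defects_total else 0.0

# Hypothetical quarterly counts from the deployment pipeline and bug tracker.
print(f"Change failure rate: {change_failure_rate(40, 3):.1f}%")  # 7.5%
print(f"Escaped defects:     {escaped_defect_rate(6, 48):.1f}%")  # 12.5%
```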

By focusing on these value-based metrics, teams can better align their software development efforts with business objectives and customer needs, ensuring that the software delivered provides meaningful and measurable value.

The Human Factor: Beyond the Numbers

"To focus on the visible at the expense of the essential is irresponsible." -Bertrand Meyer

So now we've done some thinking about the different kinds of metrics for measuring the output and value of a software development team, and why we should measure at all. I would feel like I left out something really important, though, if I didn't write this paragraph. Just like many other disciplines/businesses/fields... software development in a team environment is a people business. As I mentioned in another article about product innovation, the single most important factor in software development team success is a shared motivation to collaborate in building something awesome. That requires vulnerability, transparency, a growth mindset, positive conflict, and at least a little fun.

Studies show that productivity is higher among teams that avoid burnout, reject busy work, and focus on delivering useful working software that excites users. This requires a special mix of leadership, team and culture building, personal relationship development, outcome ownership/accountability, and a problem that really does need solving plus someone willing to pay for a solution (a.k.a. product-market fit).

Conclusion - a.k.a. TLDR - “Too Long; Didn't Read”

Measuring and evaluating the output of software teams is a difficult but worthwhile objective, one that requires awareness of the business context for the solution itself, as well as the “intangibles” of the team that works on it. While traditional productivity metrics provide valuable insight into the team's output and efficiency, it is crucial to complement them with value-based metrics that align directly with business goals, customer needs, and ongoing team health and happiness.

The true measure of success lies in delivering high-quality software that delights its audience and drives growth in some business metric (revenue, stickiness, marketplace ranking, etc.). Furthermore, success is being able to do this consistently over time without burning people out. By embracing a holistic approach and being willing to adjust as necessary, software development teams can navigate the challenges of “metrics madness” and focus on what truly matters: delivering innovative solutions that make a positive impact. It's worth keeping in mind, as well, that outcomes often have as much or more to do with things that cannot be measured, and must instead be felt, heard, or inspired.

“It would be nice if all of the data which sociologists require could be enumerated because then we could run them through IBM machines and draw charts as the economists do. However, not everything that can be counted counts, and not everything that counts can be counted.” - William Bruce Cameron, “Informal Sociology: A Casual Introduction to Sociological Thinking,” c. 1963

Thank you for reading!

Sources I reviewed for this - check them out for more detail:

https://www.semanticscholar.org/paper/Dimensions-in-Measuring-Performance-of-Agile-A-Beh-Jusoh/f3a357a59fa14cbea8a7f650aeaaf77a80d68566

https://stackoverflow.blog/2021/11/29/the-four-engineering-metrics-that-will-streamline-your-software-delivery/

https://fortegrp.com/insights/how-you-should-be-measuring-agile/

https://www.sealights.io/software-development-metrics/10-powerful-agile-metrics-and-1-missing-metric/

https://www.planview.com/resources/articles/agile-metrics-for-leadership/

https://www.pluralsight.com/blog/software-development/software-engineering-metrics

https://www.sciencedirect.com/science/article/pii/S0950584922002257
