Agile team metrics – why, and which?

W. Edwards Deming (1900 – 1993) is widely considered “the father of the quality movement” in modern manufacturing. Trained as an engineer and mathematician, he pioneered statistical techniques that are still widely used today. His management advice to post-war Japanese industrialists – Toyota Motor Company, in particular – laid the groundwork for the Lean Manufacturing revolution.

But as much as he championed statistical measures, Deming also had a strong warning for management:

“I should estimate that in my experience most troubles and most possibilities for improvement add up to the proportions something like this: 94% belongs to the system (responsibility of management); 6% special.”

In his 1982 book Out of the Crisis, as well as in countless articles and seminars, Deming made the point that management tended to focus on employees as the source of problems when, in fact, outcomes are controlled by the systems people work in. This misconception often led business leaders to measure the wrong things, to the detriment of the people in their organizations.

Over 30 years of research on teams has confirmed that the same principles apply to knowledge-based enterprises. If we want to help teams improve, we have to adopt solid principles when we consider metrics. In the spirit of Deming, here are some guidelines I follow:

  • The focus of metrics should be on understanding the system. Any measure of people should only be taken as an indicator of the effect the system is having on them.
  • In a system that relies on teamwork, any attempt to measure individual performance is counterproductive. Just don’t.
  • Always ask, “If we measure this, what will we do with the information?” A useful metric is one that contributes to good decision-making.
  • Team-based measures are for the benefit of the team. With rare exceptions, they shouldn’t be shared outside the team. Otherwise, the temptation for management to make unsupported comparisons among teams is almost irresistible.

An important key is for the team to own the metrics, and to find low-overhead ways to consistently capture them. If metrics are forced on them from outside, their adoption and tracking will be half-hearted. (Or worse, they’ll be motivated to game the system without gaining any benefit.)

From that perspective, here are some metrics I’m likely to recommend to Agile teams (and their management).

Team morale

Nothing predicts long-term workplace success better than a team’s collective culture and outlook. Are they working well together? Do they feel productive? Are they doing meaningful work? The answers to these questions will ultimately determine a team's outcomes.

Teams can easily build a “mood meter” into their work cycle, as part of retrospectives or even daily stand-ups. Whether the collective mood is rising or falling, it can prompt valuable discussions – either “What’s going well that we should enhance?” or “What’s dragging us down, and how should we address it?”

(To review some of the most compelling research about successful teams, see Nine Lies About Work by Buckingham & Goodall, and Google’s publications about effective teams, based on their Project Aristotle.)
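For teams that like to keep this lightweight, a mood meter can be as simple as a shared spreadsheet or a few lines of script. Here is a minimal sketch (the scores, scale, and window size are purely illustrative): at each stand-up the team logs an average 1-to-5 mood score, and a short rolling average shows whether the trend is rising or falling:

```python
from statistics import mean

def rolling_mood(scores, window=5):
    """Rolling average of team mood scores; the last value is the current trend."""
    return [round(mean(scores[max(0, i - window + 1):i + 1]), 2)
            for i in range(len(scores))]

# Hypothetical data: one averaged 1-to-5 score per stand-up.
daily_scores = [4.0, 4.2, 3.8, 3.1, 2.9, 2.7]
trend = rolling_mood(daily_scores, window=3)
# A falling tail is the prompt for "What's dragging us down,
# and how should we address it?"
```

The exact mechanics matter far less than the conversation a rising or falling trend prompts.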

Escaped defects

If a bug makes it into production, it does double harm: it reduces customer satisfaction, and it’s costly for the team. Bugs aren’t benign; they’re one of the greatest sources of waste in a development cycle:

  1. First, time is invested in creating the defect. (Creating problematic code takes about the same effort as writing quality code.)
  2. This is followed by the attendant downstream impact, which could range from minor user annoyance to hugely expensive downtime for a major system.
  3. Then, the team must invest more time to identify and correct the defect.
  4. And, that effort comes at the expense of other valuable work in the backlog that is delayed. (This opportunity cost is often overlooked.)

For many teams, the most immediate way to improve their throughput is to focus on process improvements to minimize defects. This has the added benefit of reducing frustration for both the developers and their customers.

Work item cycle time

Usually measured with a control chart, cycle time offers two valuable insights for a team:

  • First, understanding the mean cycle time helps a team realistically forecast their rate of delivering value.
  • Second, analyzing outliers from the average can pinpoint sources of delay, such as wait times for other teams or impediments to deploying on demand.

By analyzing the factors that shape their cycle time, teams can quickly home in on the improvements that will make the most difference.

(For a deeper dive into cycle time, check out Rob Redmond's excellent article on The Power of Cycle Time.)
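As a rough illustration of what a control chart does, here is a sketch that computes the mean cycle time and flags outliers beyond an upper control limit. The data is made up, and "mean plus two standard deviations" is just one common convention for the limit:

```python
from statistics import mean, stdev

def cycle_time_outliers(days, sigmas=2):
    """Return (mean, upper control limit, outliers) for a list of
    per-item cycle times, flagging items beyond the limit."""
    avg = mean(days)
    limit = avg + sigmas * stdev(days)
    return avg, limit, [d for d in days if d > limit]

# Hypothetical cycle times (in days) for recently completed items.
times = [2, 3, 2, 4, 3, 2, 14, 3, 2, 4]
avg, limit, outliers = cycle_time_outliers(times)
# outliers -> [14]: the one item worth a root-cause conversation.
# Was it waiting on another team, or blocked from deploying?
```

The mean supports forecasting; the outliers point at the delays worth investigating – the two insights described above.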

Activity ratio

A measure of overall efficiency, this ratio is based on two values:

  1. Processing time, the amount of time spent actively working on an item. Usually this isn’t simply the time a work item is “In Progress.” Meetings, interruptions, and waiting all take away from actual processing time.
  2. Lead time, the elapsed time from the moment a work item is added to the queue until it is deployed.

Most teams are shocked to learn that of the total time to deliver an item, often 5% or less is actually spent working on it. The bulk of the lead time is consumed by handoffs, dependencies on other groups, decision latency, interruptions, and other delays. Analyzing the root causes of these delays is a powerful tool for improving efficiency and reducing waste.

Implementation note: Some intrepid teams are willing to use timers to track actual work time; others may rely on a daily post-hoc estimate. I coach that a team is better off finding a way to capture rough estimates cheaply and often, rather than adding a lot of overhead for more precise tracking.
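The arithmetic itself is trivial; the hard part is capturing honest inputs. A sketch, with hypothetical numbers:

```python
def activity_ratio(processing_hours, lead_time_hours):
    """Fraction of total lead time spent actively working on an item."""
    return processing_hours / lead_time_hours

# Hypothetical: 6 hours of actual work on an item that took 15
# working days (~120 hours) from entering the queue to deployment.
ratio = activity_ratio(6, 120)
# ratio -> 0.05, i.e. 5% active work and 95% waiting.
```

A ratio this low is typical, and it redirects improvement energy away from "work faster" and toward the queues and handoffs where the other 95% goes.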

Code test coverage

“Test coverage” is a broad term that covers a variety of techniques: unit tests, functional tests, static code analysis, and other tools. These can be applied at the level of individual lines, complete statements, or entire branches. Often teams will begin at the unit-test level and expand as their collective experience and level of automation grow.

Wherever teams begin, they’ll find that test coverage is usually a leading indicator of release quality. (Of course, this assumes the team is sincere about what they’re measuring. It’s easy to inflate unit-test numbers, which is typically the response to an external mandate to reach specific thresholds.) Steady, incremental improvements in coverage will pay big dividends.
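To make the line-versus-branch distinction concrete, here is a toy sketch (not a real coverage tool) that computes a line-coverage percentage from the set of lines a test run executed:

```python
def line_coverage(executed_lines, executable_lines):
    """Percentage of executable lines exercised by a test run."""
    covered = executed_lines & executable_lines
    return 100.0 * len(covered) / len(executable_lines)

# Hypothetical: static analysis finds 40 executable lines, and the
# test suite executes 34 of them.
executable = set(range(1, 41))
executed = set(range(1, 35))
pct = line_coverage(executed, executable)
# pct -> 85.0. Note that 100% line coverage can still miss branches:
# "return a if cond else b" is a single fully executed line even when
# the tests only ever take one of its two branches.
```

Real tools (coverage.py, JaCoCo, and the like) do this instrumentation for you, including branch-level tracking; the point here is only what the percentage means.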

Technical debt

Development teams must constantly balance the effort they invest in meeting immediate business demands against the work required to maintain a stable and resilient code base. Stakeholders often just want to see new functionality, and don’t much care about “under the covers” work that shows no obvious benefit. However, it doesn’t take many “quick and dirty” solutions to undermine a system architecture, increase the likelihood of defects, and drive up support costs.

Tools are available to help assess the health of a program, such as by measuring cyclomatic complexity. But most developers are close enough to their code to give a valid measure on a scale from “pure as snow” to “rewrite the whole thing!” A quick check at the end of every iteration keeps the team alert to declining quality.
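For teams curious what such tools actually measure, here is a rough, illustrative sketch of the usual McCabe approximation of cyclomatic complexity: one baseline path, plus one for each branch point. (Real tools handle many more constructs; this is just to show the idea.)

```python
import ast

def cyclomatic_complexity(source):
    """Approximate cyclomatic complexity of Python source:
    1 + the number of branch points (if/for/while/except,
    conditional expressions, and boolean operators)."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, (ast.If, ast.For, ast.While,
                             ast.ExceptHandler, ast.IfExp)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            complexity += len(node.values) - 1
    return complexity

snippet = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(snippet))  # 3: one base path + two branches
```

Higher numbers mean more paths to test and more places for defects to hide – which is why the metric correlates with the team's gut-level "rewrite the whole thing!" rating.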

I hope these ideas provide some helpful guidance. There are few absolutes in this area, but applying these concepts should get you and your teams off to a good start. And as always… “inspect and adapt!”

What has your experience been in applying metrics? Share the good and the bad; I’d like to hear from you.



Jon Tobey

Agile Transformation Lead | Agile Coach | Certified Scrum Master | Design Thinking Practitioner. Unleashing the power of Agile to elevate performance, foster innovation, and achieve lasting success.

1 week

I agree with all of these things. At Starbucks the only metric we reported outside of the team was Team Health. As team health went up, so did productivity. One metric on your list I do question though. Lead time doesn't seem relevant to me if anybody can add any story to the backlog at any time. Doesn't giving all stories the same clock imply that they all have the same value?

John Gorski

Technology Leader, Innovator, Change Agent

4 months

Well written, Barry! You barely touch on the delays caused by "dependencies on other teams". In my experience, this is usually the biggest cause of wasted time, and escaped defects. I have found Value Stream Mapping to be an effective tool for getting additional measures, identifying root cause, and coming up with a prioritized list of targeted solutions.

Elizabeth Abidakun

Senior Scrum Master

5 months

Fantastic analysis on agile metrics! This piece effectively connects teamwork cycles with work delays, offering valuable insights for optimizing performance. It’s a holistic approach.


An article on metrics without equations or numbers. I say bravo! Too many people read articles searching for a magic bullet: "If I use this formula, my project will be successful." This will make people think.

Brian Anderson

Senior Scrum Master at aPriori Technologies

5 months

Wonderfully written, Barry. I’ll quote you from our time together that completely changed my understanding: “Metrics belong to the team.” I have learned that as a team evolves, so should the data. This nets two things: continuous team engagement and a revolving view of performance. Keep it fresh. Keep it relevant.


More articles by Barry L Smith
