The Case Against Measuring Cycle Time
This is the latest issue of my newsletter. Each week I cover research, opinion, or practice in the field of developer productivity and experience. This week is an article I wrote about cycle time.
Many organizations use cycle time as a measure of their engineering performance. But this is something I recommend against.
One problem with cycle time is that it is only useful in the extremes. Take, for example, a cloud service team that has an average cycle time of 3 months—reducing this team's cycle time is probably a good idea. Now instead, take a more typical team that has an average cycle time of 4 days. Would further optimizing cycle time provide any benefit?
Although perpetually reducing cycle times may feel like improvement, if all you are doing is rearranging work into smaller pieces, then you are not increasing net value delivered to customers. To put it another way: delivering 2 units of software twice per week isn't necessarily better than delivering 4 units of software once per week, especially if breaking down work unnecessarily causes more toil for developers. We can visualize this effect as an inverted J curve.
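To make that arithmetic concrete, here is a minimal sketch (the numbers are illustrative, not drawn from any real team) showing that halving batch size while doubling cadence leaves net delivery unchanged:

```python
# Illustrative numbers only: net delivery per week is batch size x deliveries per week.
def weekly_throughput(units_per_delivery, deliveries_per_week):
    return units_per_delivery * deliveries_per_week

once_per_week = weekly_throughput(units_per_delivery=4, deliveries_per_week=1)
twice_per_week = weekly_throughput(units_per_delivery=2, deliveries_per_week=2)

print(once_per_week, twice_per_week)  # 4 4 -- cycle time halved, but customers get no more
```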
Another problem with focusing on cycle time is that the metric doesn't mean anything without accounting for batch size. To use an analogy, in cycling, your speed is a function of cadence (how fast you pedal) multiplied by power (the amount of force in each pedal stroke). In software development, cycle time tells us the cadence of delivery, but not how much work gets delivered in each cycle (which is famously difficult to measure with software).
When leaders push teams to accelerate cycle times, teams may end up pedaling faster without actually delivering more. And when leaders compare cycle times across teams without factoring in batch size, they may derive false conclusions about teams' performance.
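As a hypothetical illustration (team names and numbers invented for this sketch), factoring batch size into the comparison can flip the ranking that raw cycle times suggest:

```python
# Hypothetical teams: by raw cycle time alone, Team A looks "faster",
# but delivery rate (batch size / cycle time) tells a different story.
teams = {
    "Team A": {"cycle_time_days": 2, "units_per_cycle": 1},
    "Team B": {"cycle_time_days": 5, "units_per_cycle": 4},
}

for name, t in teams.items():
    rate = t["units_per_cycle"] / t["cycle_time_days"]  # units delivered per day
    print(f"{name}: cycle time {t['cycle_time_days']}d, delivery rate {rate:.2f} units/day")

# Team A: cycle time 2d, delivery rate 0.50 units/day
# Team B: cycle time 5d, delivery rate 0.80 units/day
```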
To sum it up: Cycle time can help with spotting potentially low-performing teams in some situations. But it is otherwise a poor signal for measuring or improving engineering performance. To actually improve performance, leaders should focus on the daily experiences of developers and on smoothing the friction that slows them down.
Turning ideas into working software so you can focus on your business.
The title says "measuring", but the argument is against "accelerating". I agree that continually pushing for acceleration leads to diminishing returns. But that doesn't mean we shouldn't measure cycle time, watch for changes, and understand why when it changes. That's how I've used it, and I think it may be what you're arguing for: aim for a consistent cycle time and volume of work per cycle, and watch for anomalies that may signal something unhealthy in the system.
Striving for happiness @ Ikigai Digital
Kind of struggling with this take.
1. Efficiency (aka output) is "value-free" while effectiveness (aka outcome) is "value-full" (see Ackoff, https://medium.com/@marciosete/epistemology-of-effectiveness-9a1d51b728c8). One can have the most efficient process yet deliver zero value to the customer by focusing on the wrong things. Hence, correlating efficiency with value is misleading at best.
2. From a product development perspective, the "north star" is to have empowered product teams that solve problems rather than execute tickets. Hence, these teams should be assessed by outcomes/effectiveness rather than output/efficiency. So mandating company-wide efficiency targets is very smelly to begin with, regardless of which specific metric we are talking about. These metrics should be used purely by the team itself to monitor the health of its internal process and to make adjustments when it deteriorates.
3. As others have said, there is a vast difference between measuring something for diagnostics/investigation and treating a metric as a target.
4. IMHO, the main benefit of shorter cycle times/smaller batches is getting feedback faster and therefore killing non-differentiating stories. Again, effectiveness > efficiency.
Tribe Coach at Danske Bank | Focused on Strategic Execution, Process Optimization, and People Development in Financial Services & Technology
Great insights! Goodhart's law applies to cycle time as well. Moreover, relying solely on cycle time as a performance metric can be limiting. Like any other metric, it provides only a partial picture on its own. To gain a comprehensive assessment of team productivity, teams need to use multiple metrics in combination.
Impactful Product Development Teams + Org Designs that scale without having to make another painful reorg.
If all you are doing is rearranging work into smaller pieces, then you haven't understood how to identify and reduce waste in the process.
Connecting work to value with data.
Abi Noda, your objections are not wrong, but you are missing the big picture. Cycle time (assuming you've decided to measure it in some specific way) is most useful when you use it to improve flow. The absolute value of any cycle time metric is not as interesting as the *relationship* between the *average* cycle time and three other averages: throughput, WIP, and age of WIP during the period the cycle time measurement was taken. Little's Law governs the relationship between them, showing how far away your process is from stable flow (it usually isn't stable) and, more importantly, what you need to do to get it there. Little's Law is a rarity: an actionable mathematical theorem for improving flow systematically, if you understand how to use it. Think of it this way: to safely land a plane you need to understand the relationship between at least four variables: airspeed, attitude, altitude, and vertical speed. If you look at only one and ignore the rest, it's quite likely you'll be a pancake. The same idea applies to flow, and you can actually measure this stuff for real and use it to improve ticket flow, pull request flow, etc. It does work if you use it properly. https://smarter-engineering.exathink.com/iron-triangle-of-flow/
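For readers unfamiliar with the relationship this comment leans on, here is a minimal sketch of Little's Law (average WIP = throughput x average cycle time) using hypothetical numbers; it is one way to gauge the gap between observed cycle time and the cycle time implied by WIP and throughput, not a reproduction of the linked article's method:

```python
# Little's Law for a stable system: avg_wip = throughput * avg_cycle_time.
# If the measured averages are far apart, the process is not in stable flow.

avg_wip = 12.0               # hypothetical: average items in progress over the period
throughput = 2.0             # hypothetical: items completed per day
observed_cycle_time = 9.0    # hypothetical: measured average cycle time, in days

implied_cycle_time = avg_wip / throughput        # 6.0 days if flow were stable
gap = observed_cycle_time - implied_cycle_time   # 3.0 days between observed and implied

print(f"implied {implied_cycle_time:.1f}d vs observed {observed_cycle_time:.1f}d (gap {gap:.1f}d)")
```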