Agile Metrics: Fake it, Fake it … till you … realize you would never make it!
Arman Kamran
Enterprise Scaled Agile Value Delivery | Multi-Agentic / Swarm Intelligence Solutions in Generative AI | Harvard Business Advisory Council | Professor of Emerge Tech at the University of Texas | Amazon CX Council Advisor
The School of “Fake it!”
"Fake it till you make it" has traditionally been the motto of start-ups and pioneering teams, and one of the most common pieces of advice passed from gurus to rookies in the industry.
It apparently used to work, at least to some extent, which led to it being passed on as advice to the next generation.
It kept bringing results, until it ran into this new thing called Agile.
From then on, trying to cook the metrics and fake performance in Agile proved to be one of the worst anti-pattern ideas you could bring upon yourself, with epic negative impacts on your team, your reputation, and your organization.
Not to unload all the blame on the Agile practitioners (Scrum Masters, Kanban Flow Masters, and so on): in most cases, the reason behind such manipulation is a matter of survival under suffocating pressure from upper management on the teams to show steady growth in performance.
A Scrum Master under constant pressure from senior management to show an ever-rising Velocity, or a Kanban Flow Master being pressed for constant improvement in the team's Delivery Rate (or a continued drop in Cycle Time), can be forced into that self-damaging, team-destroying practice out of despair.
Turning that illogical expectation into a personal performance metric, bashing those who cannot dance to that tune, and rewarding those who put up the best show for it soon creates a downward spiral in code quality, code delivery, and value creation for the customers.
This is a direct outcome of inadequate corporate training, awareness, and understanding of how Agile works, delivers, and improves, and of how to stay on the level with transparency combined with inspection and adaptation.
Training a Good Cook (of Agile Metrics)
As per the well-known law attributed to Charles Goodhart, a British economist, former member of the Bank of England's Monetary Policy Committee, and emeritus professor at the London School of Economics:
"When a measure becomes a target, it ceases to be a good measure."
(Image source: https://www.sketchplanations.com/post/167369765942/goodharts-law-when-a-measure-becomes-a-target)
This so-called law was originally formulated as "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes."
As clear as that was to statisticians and economists, it didn't have the same zing in other fields, so it was restated in the snappier form quoted above.
This refers to the fact that once an ever-improving metrics report becomes the goal set by senior management, and that goal hardens into the do-or-die policy of the department, or worse, the organization, the teams start finding ways to game the report and look good. This milking practice continues until reality catches up with the organization and lands it somewhere nobody wants to be.
Take Velocity, for example, which is expected to represent the amount of work (measured in Story Points) that a team is able to complete and deliver during a Sprint.
To smooth out Sprint-to-Sprint noise, Velocity is typically calculated as a rolling average over the past few Sprints.
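As a minimal sketch of that rolling-average calculation (the three-Sprint window and the sample numbers are illustrative assumptions, not a prescribed standard):

```python
def rolling_velocity(completed_points, window=3):
    """Velocity as the average of the last `window` Sprints' completed Story Points."""
    recent = completed_points[-window:]
    return sum(recent) / len(recent) if recent else 0.0

# Completed Story Points per Sprint for a hypothetical team
history = [21, 25, 30, 28, 31]
print(rolling_velocity(history))  # average of the last 3 Sprints: (30 + 28 + 31) / 3
```

The window keeps one unusually good or bad Sprint from dominating the number, which is exactly why a single gamed Sprint must be repeated to keep the average rising.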
Naturally, as the team grows in skill, more of their work is automated, their functions become reusable APIs or services, and repetitive work is reused, their performance improves until it reaches a plateau around the team's highest achieved number and stays rather flat at that level.
Unless something major changes the team's capacity to deliver (a change in the number of highly skilled team members, a solution that expedites the team's ability to deliver, new work that turns out to be impossibly hard, and so on), it stays pretty much the same.
Now, if Velocity turns into your team's lifeline and they feel threatened by its number at the end of the Sprint, they will start overestimating the work so it appears they are doing more.
Once the work comes out, the business will scratch their heads in bewilderment over where all that constantly growing achievement, as reported by the team's ever-rising Velocity, went without materializing in the actual Product being released.
This continues until it is escalated to management, whose in-depth investigation reveals the puffed-up estimates and the under-delivery of work, and most likely ignites a finger-pointing battle that can destroy the team (or go as far as the dissolution of the entire department).
And if it is not caught in time, it results in market losses that may lead to the ruin of the organization.
The same problem appears in Kanban teams that game the WIP (Work In Progress) limits of the columns (stages) by splitting Stories into half-stories that are then counted as two separate items, artificially raising the team's Delivery Rate.
Further manipulation can be done by breaking stages into many sub-stages to show a high Throughput for each. It also hides issues on the Cumulative Flow Diagram, as the divergence or shrinkage of one former column is spread across multiple half-stage ones.
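The splitting trick can be made concrete with a toy example (the Story names and the front/back split are hypothetical, chosen only to show the arithmetic):

```python
def throughput(stories):
    """Delivery Rate measured naively: number of work items completed in a period."""
    return len(stories)

# Four real Stories, each delivering one unit of customer value
real = ["login", "search", "checkout", "profile"]

# The same work, gamed by splitting each Story into two half-stories
gamed = [part for s in real for part in (s + "-front", s + "-back")]

print(throughput(real))   # 4 items delivered
print(throughput(gamed))  # 8 items "delivered": same value shipped, double the Delivery Rate
```

Counting items instead of delivered value is precisely what makes this metric gameable.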
Here are some of the common cases:
- Breaking down a small Story into several Stories, each holding only one piece of the puzzle, and estimating each one as a full Story.
- Allowing Stories that don't meet the "Definition of Done" to be marked as Complete, then creating new Stories to finish the work, each with its own fresh estimate.
- Allowing Stories that have failed QA to be marked as Complete, then creating separately estimated QA Stories to finish the work.
- Taking credit for Stories developed by other teams for a joint release.
- Adding fake Stories that need no Dev or QA, only to close them as completed work.
- Allocating capacity for BAU (Business-As-Usual) work, then estimating those items and adding them to the Sprint Backlog, consuming the non-BAU capacity meant for other purposes.
- Changing a Story's estimate mid-Sprint without adjusting the remaining Sprint Backlog in a meeting with the Product Owner.
- Over-estimating Stories with ball-park, yet puffed-up, numbers.
- Using partial Story Points for incomplete Stories in one Sprint and spilling them into the next Sprint with fresh estimates.
- Creating Stories for meetings and ceremonies and estimating them as delivery work.
- Using more efficient technology tools (faster infrastructure, automated testing, Cloud setup, purchasing and modifying a 3rd-party solution, etc.) while keeping the estimates from the inefficient, manual-work days.
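Some of these patterns leave a detectable trail. The double-counting of spillover Stories, for instance, shows up whenever the same Story ID is estimated and counted toward Velocity in more than one Sprint. A hedged sketch of such a check, with entirely hypothetical Sprint names, Story IDs, and data shape:

```python
from collections import defaultdict

def flag_reestimated_spillovers(sprints):
    """Flag Story IDs that received a fresh estimate in more than one Sprint.

    `sprints` maps a Sprint name to {story_id: story_points}. A Story that is
    estimated (and counted toward Velocity) in two Sprints is being double-counted.
    """
    seen = defaultdict(list)  # story_id -> [(sprint, points), ...]
    for sprint, estimates in sprints.items():
        for story, points in estimates.items():
            seen[story].append((sprint, points))
    return {story: hits for story, hits in seen.items() if len(hits) > 1}

# Hypothetical data: STORY-7 spilled over and was re-estimated from 5 to 3 points
sprints = {
    "Sprint 12": {"STORY-5": 8, "STORY-7": 5},
    "Sprint 13": {"STORY-7": 3, "STORY-9": 2},
}
print(flag_reestimated_spillovers(sprints))
# {'STORY-7': [('Sprint 12', 5), ('Sprint 13', 3)]}
```

A check like this is no substitute for the cultural fixes discussed below, but it makes one specific gaming pattern visible in the raw data.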
Scrum Master / Kanban Flow Master: Part of the Solution or Problem?
Since Scrum Masters and Kanban Flow Masters are responsible for promoting Agile values and for ensuring the team follows them as closely as possible, they are on the hook for gamed metrics, whether they participated, stayed quiet, or were tricked all the way.
An experienced Scrum Master should be able to notice the issue during Backlog Refinement and Sprint Planning sessions.
A savvy Kanban Flow Master should be able to tell when, during the Replenishment feedback loop (roughly a combination of Scrum's Refinement and Planning ceremonies), the team and the Product Owner are breaking Stories apart into too much granularity, leading to an artificially puffed-up number of Stories to finish.
Beyond that, the report built from the metrics may itself be gamed by the Scrum Master (or Kanban Flow Master) to give their own work a face-lift and to balloon their team's apparent performance.
In some cases, middle management may also participate in the scheme, either by promoting the cooking practice, ignoring the details behind the puffed-up numbers while enjoying the fake credit they produce, or by directly encouraging the misrepresentation.
Conclusion:
Keeping Agile Teams honest is a cultural movement.
Since Agile teams, at least at the time of this writing, are all comprised of humans (no robots or AI yet), they should be treated with the considerations Agile teams need, the most important of which is self-organization.
Once Agile teams are trained and brought to a functional level, they need to be allowed to practice self-organization, and through it, ownership of what they commit to deliver, and then to improve on that.
They also need to be trusted in their commitment to continued improvement and learning.
If management feels, or is not sure, that the team is improving as it should, then instead of asking for better metrics, they should walk a mile in the team's shoes to see and feel how the day-to-day work is proceeding. They should check whether the teams are dealing with high pressure and stress, and why they are experiencing it. Only then can management step in and help resolve the factors behind unsatisfactory performance improvement and a lack of acceleration in productivity.
Humans are smart and adaptable. If you push them through a period of impossible expectations, they will come up with creative ways to soften your pressurized management, and none of those approaches will lead to a better "true" performance or productivity level.
We would all do a lot better to step in, assist, and participate in improvement as an organization, and to enjoy the shared victory across the teams, with a real gain in customer satisfaction and, through that, expansion in the market.
Cheers
Arman Kamran