- Rely on Activity Metrics and Promote the Idea that More Activity is More Valuable. Focusing on activity metrics (e.g., hours worked, commits made) reinforces the false notion that quantity equals quality. This approach neglects the outcomes and impact of activities, encouraging busyness over value creation and risking burnout without meaningful improvement in software quality or delivery.
- Use a Broad Metric "Framework" That Cultivates Competing Perspectives and Justifies the Proliferation of Incompatible Improvement Efforts. Broad frameworks that encompass many metrics but lack focus invite conflicting interpretations of what "improvement" truly means. Without a clear, aligned understanding of the end goals, teams end up with divergent perspectives, leading to conflict and fragmented efforts that limit the impact of improvement initiatives.
- Create Confusing Dashboards with Disconnected Data/Trends. Dashboards that display a multitude of disconnected data points and trends create confusion, hindering teams from recognizing progress or areas needing attention. Without clarity and coherence, these dashboards undermine executive support, obscure progress, and make it harder to communicate the details that justify continued investment.
- Abandon Improvement Efforts That Don’t “Just Show Up” in Dashboard Trends Shortly After Being Made. Non-trivial improvements in software development, especially those requiring learning, continual investment, or collaboration, often take time to manifest in measurable macro trends. Expecting immediate, dramatic results leads to prematurely discarding valuable initiatives, especially those with long-term benefits. This reactive approach can prevent the foundational changes that require patience and consistent focus over time to succeed.
Success depends instead on causally connected metrics: metrics that support modeling and clear justification, and that guide ongoing decisions and expertise toward progress aligned with organizational goals.
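What "causally connected" means in practice can be sketched with a toy model. The metric names and numbers below are hypothetical, chosen only to illustrate the idea: when an outcome metric (lead time) is explicitly composed of driver metrics a team can act on (review turnaround, CI duration), the model can predict and later justify the effect of an improvement, whereas an activity metric like commit count appears nowhere in the chain and so cannot guide or defend the investment.

```python
from dataclasses import dataclass

@dataclass
class DeliveryModel:
    """Hypothetical causal model: driver metrics compose into an outcome metric."""
    coding_hours: float             # time spent writing a change
    review_turnaround_hours: float  # driver metric: waiting for review
    ci_duration_hours: float        # driver metric: pipeline time

    def lead_time_hours(self) -> float:
        # The outcome is causally composed of its drivers, so we can
        # estimate the effect of an improvement before making it.
        return (self.coding_hours
                + self.review_turnaround_hours
                + self.ci_duration_hours)

baseline = DeliveryModel(coding_hours=8, review_turnaround_hours=24,
                         ci_duration_hours=2)
# Model a proposed improvement: cut review turnaround in half.
improved = DeliveryModel(coding_hours=8, review_turnaround_hours=12,
                         ci_duration_hours=2)

print(baseline.lead_time_hours())  # 34.0
print(improved.lead_time_hours())  # 22.0
# Commit count is absent from this chain entirely, which is exactly why
# it cannot justify continued investment the way the drivers above can.
```

A real model would be richer (queueing effects, rework loops), but even this minimal form gives a team a shared, falsifiable expectation to check dashboard trends against, rather than abandoning an initiative when a macro trend lags.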