Why 'Practices only' Agile Maturity Assessments are a crock…
I’ve been struggling for a while to pull together my thoughts on the assessment of large-scale Agile transformations.
The problem for me is the paradox that Agile, if it is done well, requires teams to constantly challenge themselves to find better ways to do things. And that might mean, God forbid, that they need to adapt their Practices to suit their environment. So how can Agile maturity assessments be Practices-based? The short answer is that in isolation they can’t, and there is a serious risk of self-delusion if you trust your Practices-based assessment. Mindless application of Practices is dangerous. And people will be driven by whatever they are measured on; if you only measure Practices, that’s what you will get.
So I have been working on developing some methods to assess what is really important.
The question is: when faced with a large Agile transformation, how do you do it if not by rolling out Practices?
Well exactly, and that is the challenge that I’m writing about today.
Three ways to measure
Practices
OK sure, you cannot throw Practices-based assessment out completely. It is the Practices that enable and drive the behaviours, so yes, they are still important, but they absolutely need to be applied with common sense and adapted where necessary. Time and time again I see the application of Practices taking precedence over the original intent. This is where the Agile community is often failing.
Principles
Far more important. Look behind the Practices and understand the intent. Why have a wall? Because it’s visual and tactile, and that works because it changes the behaviours of humans. Why stand up daily? Because it drives an accountable conversation, it’s transparent, and it removes blockages in real time.
If you keep the underlying Principles in mind and have the ability to tweak them to ensure that the intent is preserved, you will get far better outcomes than from a mindless, dogmatic application of the Practices.
If you can’t spot the following in an Agile team, it ain’t Agile, no matter how mature your Practices are.
- Clarity
- Visualisation
- Shared understanding
- Regular structured disciplined conversation
- Short cycles with real time blocker removal
- Open honest risk identification and management
- Genuine collaboration
- Embedded customer
- Team driven continuous improvement
- High personal and team accountability
- High levels of engagement, within the team, and with key stakeholders from the top to the bottom
I won’t elaborate on these here; I’ve written much about them in the past. Suffice it to say, if you visit a stand-up, you will be able to tell whether your program has effectively embedded these Principles…
Advocacy
Here is your acid test, and this is where most are missing the boat.
There are two groups of people who determine whether you are truly successful with your Agile transformation: The practitioners and the customers of the process.
To find out whether you are actually achieving small ‘a’ agility, put all the often self-promoting and self-justifying assessments to one side, and find out whether it is really working for your practitioners and customers. If your practitioners think it’s a crock, then it is a crock. If your customers see no material difference, then you haven’t succeeded.
There is a backlash against Agile. I’ve written much about this in the past. We need to start driving integrity and accountability into our Agile rollouts.
Many Executives and Leaders won't like this. A LOT of coaches won’t like this. But it needs to happen if we are to restore the credibility of the discipline of Agile, and to find new and inventive ways to apply these principles to more and more diverse technology and business delivery problems.
The Shibusa Cycle
So how do you build capability? The key is to recognise that it is a cycle. You can’t roll out Agile without the application, training and coaching of practices. Obviously. So it needs to be understood that there is a cycle to developing true Agile maturity, one that moves through Practices, Principles and Advocacy. I’ve named it The Shibusa Cycle.
Step 1 - Practices
Evaluate the Business that is being ‘Agiled’ and, for each area, define a set of practices that suit the nature, style and capability of the area being addressed. For large-scale deployments you will end up with different practices for different areas. DON’T OVER-ENGINEER… A simple subset of practices, if well designed, will make a difference from day 1. Apply the principles of Lean experimentation here.
Educate, and get the teams into it as soon as possible. Support them with pragmatic coaching.
Step 2 – Assess Maturity against the Principles
Forget about the Practices for the time being. See whether you can confidently tick off all of the Principles.
At the start you won’t be, so it’s back to Step 1. Which Practices aren’t supporting the Principles? It might be a lack of discipline in the application of the Practices you have chosen, it might be that the Practices themselves need to be tweaked for the environment, or maybe you have picked the wrong Practices for the environment. If Clarity is a problem, maybe the Definitions of Done aren’t well articulated. Does the project team have a clear definition of success for the work package? Are the story cards clear on outcomes? If there is no common shared understanding of the project’s Definition of Done (its success criteria) at the Executive Sponsorship level, review the engagement model.
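To make the Step 1 / Step 2 loop concrete, here is a minimal sketch of one way a coach might record it: map each Principle to the Practices expected to support it, note which Principles the team is actually demonstrating, and flag the gaps to revisit back in Step 1. The mapping and the principle names used here are illustrative assumptions, not a prescribed set.

```python
# Illustrative sketch only: the principle-to-practice mapping is an assumption,
# not a prescribed model. The point is to tie each observed gap back to the
# Practices that should be supporting it.

PRINCIPLE_TO_PRACTICES = {
    "Clarity": ["Definition of Done", "story card outcomes"],
    "Visualisation": ["wall / board"],
    "Short cycles with real time blocker removal": ["daily stand-up"],
    "Team driven continuous improvement": ["retrospective"],
}

def gaps(observed):
    """observed: dict of principle -> True/False from walking the floor.
    Returns the principles not yet demonstrated and the practices to revisit."""
    return {p: PRINCIPLE_TO_PRACTICES.get(p, [])
            for p, ok in observed.items() if not ok}

if __name__ == "__main__":
    observed = {
        "Clarity": False,
        "Visualisation": True,
        "Short cycles with real time blocker removal": True,
        "Team driven continuous improvement": False,
    }
    for principle, practices in gaps(observed).items():
        print(f"Not yet demonstrated: {principle} -> revisit {', '.join(practices)}")
```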
Rinse and repeat.
When you are comfortable that the team is consistently demonstrating the Principles, or, as I say, ‘humming’, move to Step 3.
Step 3 – Test Advocacy
Companies that crack this will set the benchmark for the successful application of Agile.
The raging philosophical debates, the zealotry, the dogmatic application of practices and the claims of great success that are more smoke and mirrors than real substance all get filtered out by a genuine, well-thought-out, independently applied ‘test of advocacy’ with practitioners and customers.
And this approach is a natural fit with the original intent of these practices: it is a data-driven approach to continuous improvement, and it biases towards outcomes.
So how do you do it?
These are the characteristics of a good embedded advocacy test:
- It needs to be periodic but not too frequent; I would suggest no more often than every six months
- It needs to be anonymous
- It needs to be super, super simple
- It needs to be carefully designed for the environment
- It needs to be incorruptible, not open to gaming
- It needs to identify and theme the key drivers of success or failure as the case may be
- It needs to be designed, and administered independently of the Agile rollout team
- The processes, data analysis, communications strategy and responses to the survey results, and how you will use the results to measure the performance of teams and coaches, all need to be well thought out and in place well before you go to market with your survey (the survey is the easy bit)
I have been using the principles of Net Promoter Score to design assessment systems with these characteristics. But be warned: the simplicity of the questions makes it easy to bang something out very quickly; the trick is to make sure that you have the systems and processes to analyse and respond to the feedback in a structured way.
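As a rough illustration of how little the scoring itself matters compared with the follow-up, here is a minimal sketch of the standard Net Promoter Score calculation (promoters score 9–10, detractors 0–6) plus a crude theming of free-text drivers. The question wording, the free-text field and the keyword theming are my assumptions for the example, not a prescribed survey design.

```python
from dataclasses import dataclass

# Standard NPS convention: 0-6 detractor, 7-8 passive, 9-10 promoter.
# The question wording and the 'driver' free-text field are illustrative
# assumptions, not a prescribed survey design.

@dataclass
class Response:
    score: int          # e.g. "How likely are you to recommend this way of working?" (0-10)
    driver: str = ""    # optional free text: "What is the main reason for your score?"

def net_promoter_score(responses):
    """Return NPS as % promoters minus % detractors, rounded to whole points."""
    if not responses:
        raise ValueError("No responses to score")
    promoters = sum(1 for r in responses if r.score >= 9)
    detractors = sum(1 for r in responses if r.score <= 6)
    return round(100 * (promoters - detractors) / len(responses))

def theme_drivers(responses):
    """Crude keyword theming of the free-text drivers so results can be acted on."""
    themes = {}
    for r in responses:
        for word in r.driver.lower().split():
            if len(word) > 4:                      # skip short filler words
                themes[word] = themes.get(word, 0) + 1
    return sorted(themes.items(), key=lambda kv: kv[1], reverse=True)[:10]

if __name__ == "__main__":
    sample = [
        Response(9, "clear priorities"),
        Response(6, "too many ceremonies"),
        Response(10, "blockers removed quickly"),
        Response(3, "ceremonies without purpose"),
    ]
    print(net_promoter_score(sample))   # 0 for this sample: 2 promoters, 2 detractors
    print(theme_drivers(sample))        # 'ceremonies' surfaces as the dominant theme
```

The calculation is the easy bit, as the paragraph above says; the theming, the communications plan and the response process around it are where the real design effort belongs.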
If this last step is done well, it will bring real data and focus to the course corrections that drive real value into your Agile rollout.
Don’t overuse it, but do use it regularly. It will drive out the noise and hold everyone accountable, from the Executives responsible for delivery to the Agile Coaches and Leaders responsible for driving the transformation.