Let's start by talking about cycle time
Justin Tomlinson - Rilla Network
FTSE 30 C-Level Executive. 20+ years maximising value from technology in global household brands. Currently rewiring the content economy
So, in a recent post I left you with two of my standard opening questions. I originally used them to find my way in a new job, and later to help assess clients, because they lead you to how well a company does digital product delivery. If you are reading this I will assume you know a thing or two, or have Google, so for brevity's sake I have left out some definitions. To be fair, this is far from an exhaustive set of related questions and capabilities, but it will give you some insight into how I have personally assessed where organisations and teams are at. I hope it is of some use to you.
Turned around slightly to be inquisitive, the questions were:
- What is your cycle time from concept to backlog?
- What is your cycle time from backlog to showcase?
A disclaimer first: the answers to these two questions are not a silver bullet for agility, nor any form of holistic capability and maturity assessment, but they can tell you a lot.
So what (other questions and information) does question 1 lead you to:
- Is there a Why, a clear vision driving the company and the work we do here?
- Has this "Why" permeated the consciousness of the team so that they all say the same thing?
- Have we got a set of outcomes or measures that would tell us we have the right direction of travel?
- What filters do we pass work/concepts through so that we might compare them?
- How do we take a big idea and express it as something fairly uniform we can assess?
- What pieces of data must those concepts/work descriptions contain?
- How do we assess those big things and decide what’s important?
- Who comes together, and how often, to discuss concepts described and filtered in this way and decide what we work on?
- How do those people each commit to clarifying the parts of a concept that we feel are essential to relative prioritisation?
- How would we know when to test a hypothesis related to these items and when to just deliver some piece of it?
- What tools and outputs do we use to describe the work, priorities, journeys and outcomes of these things?
- When something is added to our work list how do we treat what we previously thought warranted working on?
- How might we allow the team to select and pull work from a high level backlog?
- How long do we give ourselves to evaluate any given idea and get to the point where we can express it as a journey/epic/concept/test to be executed?
- Is the governance and leadership style that of servant leadership and empowerment with empathy over command and control?
- How do we weigh the importance of learning against that of revenue?
Everything between the question and this paragraph relies on the inference that to know what is important or valuable, either in isolation or in relative terms, we must have some mechanism for knowing the ultimate vision, objectives and key results we want as outcomes.
Hence I am a big fan of a clear Vision/"Why" statement/BHAG, as well as OKRs and hypothesis-based delivery for anything requiring validation. A few things really don't need much validation, but most things do, and they raise two questions: how do we test this, or hone how real customers will experience it? And how valuable is it, where value is an equation comparing cost and benefit?
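To illustrate the "fairly uniform" shape mentioned in the list above, here is a minimal sketch of a hypothesis-led concept record in Python. The fields and the example idea are purely hypothetical, an illustration of the kind of data a concept might be required to carry before it can be compared, not a prescribed template.

```python
from dataclasses import dataclass

@dataclass
class ConceptCard:
    """One uniform shape a big idea must fit before we can compare it."""
    name: str
    hypothesis: str        # "We believe <change> for <audience> will <outcome>"
    measure: str           # the observable signal that would validate it
    okr_link: str          # which objective or key result it serves
    benefit_estimate: str  # rough and relative: how much, and for whom
    cost_estimate: str     # rough and relative: effort or spend to test it

# A hypothetical concept, filled in just enough to be assessed
idea = ConceptCard(
    name="One-click reorder",
    hypothesis="We believe one-click reorder for returning customers will lift repeat purchases",
    measure="Repeat purchase rate over 30 days versus a control group",
    okr_link="KR2: grow repeat revenue by 10%",
    benefit_estimate="High: touches roughly 40% of orders",
    cost_estimate="Low: one squad, under a sprint to A/B test",
)
```

However you record it, the point is that every big idea carries the same fields, so the people who meet to prioritise are comparing like with like.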
This cycle time measure is about taking a big idea (anything from a big new feature to a new product or service) and getting to a point where we can test it in some way or deliver some defined valuable part of it.
A cycle time here of more than two weeks to get a concept to a point where you can execute on it in some way is the longest I'd ideally want to tolerate. This will take practice, though. Be prepared to kill things and to see them come back again in a new dress. If the dress has a new killer look, entertain testing it again. Some things are definitely neither "stop" nor "continue" but "do differently". If you are used to this way of working, look at adding cost of delay as a primary value-filtering mechanism; a sketch of how that might look follows below.
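Here is a minimal sketch of cost-of-delay filtering in Python, using the CD3 heuristic (cost of delay divided by duration), which is one common way to turn the cost-and-benefit value equation above into a ranking. The concepts, numbers and field names are hypothetical, and in practice the estimates are rough and relative rather than precise.

```python
from dataclasses import dataclass

@dataclass
class Concept:
    name: str
    cost_of_delay: float  # estimated value lost per week of delay
    duration: float       # estimated weeks to test or deliver

    @property
    def cd3(self) -> float:
        # CD3: cost of delay divided by duration; higher means do it sooner
        return self.cost_of_delay / self.duration

# A hypothetical shortlist of concepts with rough, relative estimates
shortlist = [
    Concept("New checkout flow", cost_of_delay=50, duration=4),
    Concept("Pricing experiment", cost_of_delay=20, duration=1),
    Concept("Back-office rewrite", cost_of_delay=30, duration=12),
]

# Highest CD3 first: quick, high-cost-of-delay items float to the top
for c in sorted(shortlist, key=lambda c: c.cd3, reverse=True):
    print(f"{c.name}: CD3 = {c.cd3:.1f}")
```

Notice how the one-week pricing experiment outranks the big rewrite even though the rewrite has a higher absolute cost of delay; that is the whole point of dividing by duration.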
The next step is to plan the test, or the execution of some part of the user journey, into stories which produce an iterative, incremental, releasable piece of value. This leads you into the world of delivering from your backlog, where you should expect to see working product in production, or experiment results with data.
So what (other questions and information) does question 2 lead you to:
- Do we have a continuous delivery rig running?
- Can we release working product to production when we want?
- Have we solved integration and release?
- Have we solved code management, branching, merging?
- Have we got lots of the right automation running?
- Do we really do TDD? Coverage of all types of test?
- Do we get cloud, virtualization, containerisation?
- Is our CI sausage machine tip-top? Can we run and test multiple variants?
- Do we know build health?
- Have we got the right UX, UI, Dev, Test, Product Ownership, Organiser, Data and Digital Marketing skills in the factory?
- Are these people cooperating and in sync?
- Are we aware of each individual's superpowers?
- Do we plan our work well into sprints?
- Are we pragmatic on methodology and techniques (e.g. pairing)?
- Do we know how to quickly experiment, research and measure without writing code?
- Are we used to running design and test sprints well in concert with development sprints?
- Are we good at breaking stuff down?
- Do we constantly re-prioritise without a big pause to get permission, because the skills to do this truly sit in the team and are trusted in the product ownership?
- How good is our relative value filtering of high-level things against the reality of delivery?
- Do we agree on what "done" means?
- Can we see the work in a system or on boards?
- Are we good at experience, journey and story definition?
- Can we identify data, system and process gaps and plug them quickly?
- Does the work in the sprints tie back to the OKRs and the vision?
- Are we tracking data driven outcomes and our experiment results in dashboards?
- Do we feel the buzz this creates in the team?
- Are people working with the values of mutual respect, altruism and integrity required in high performing teams?
- Do we genuinely conduct open-minded retrospectives to learn from, and do we track people's happiness and engagement?
Being able to showcase something working very soon after our delivery sausage machine is set up is critical, and from then on we should be seeing live things, or validated tests and outcomes, on a sprintly basis.
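If you want this measure to be more than a feeling, it is easy to compute from work-item timestamps. Here is a minimal sketch in Python, assuming you can export when each item entered the backlog and when it was showcased or went live; the items and dates are hypothetical.

```python
from datetime import date
from statistics import median

# Hypothetical export: (item, date pulled onto the backlog, date showcased)
items = [
    ("Story A", date(2023, 3, 1), date(2023, 3, 9)),
    ("Story B", date(2023, 3, 2), date(2023, 3, 16)),
    ("Story C", date(2023, 3, 6), date(2023, 3, 13)),
]

# Cycle time in days per item, from backlog entry to showcase
cycle_times = [(done - started).days for _, started, done in items]

print(f"median cycle time: {median(cycle_times)} days")
print(f"worst case: {max(cycle_times)} days")
```

The same calculation answers question 1: just swap the timestamps for when an idea was first raised and when it reached the backlog ready to pull.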
The inference between both questions and this paragraph is that if you create a clear vision and a set of measures of what good looks like, and if you trust people with the autonomy to pursue them with the mastery of implementing the things above, then usually things ain't so bad. In my experience you can't get away without most of these things if you are truly being "agile", and these things tend to arrive with skilled, disciplined and emotionally mature people. This is partly why it's very hard to get right, and I have surely missed quite a few. I can say that if you can draw a line of sight from vision to user story or experiment, then you usually have solid engagement and motivation in the team. This is especially true if you embrace learning and experiments and let the team adapt rather than reverting to command and control.
I have used these two questions to great effect everywhere from single value streams right up to embedded software on hardware devices with teams of many hundreds. While my time at AOL was a real eye-opener on what was possible, my time running software delivery on broadband- and satellite-connected set-top boxes at BSkyB makes anything web, mobile, CRM or back-office seem like a sunny vacation, and where we took it was one of the proudest delivery achievements of any team in my career.
One thing of note: in both the teams I have mentioned, we didn't have protracted debates about the relative or independent value of Scrum or Kanban; we simply let the team experiment and use the approaches they found helpful. Some of the terms hadn't even been invented. When I think about SAFe, I think we were lucky, and it doesn't come much more scaled than 7 million paying customers on a low-spec, multi-connected hardware device running millions of lines of code, talking to a remote control and lots of live back-end systems! My nightmares are still filled with thoughts of driver code, abstraction layers, middleware modules, EPGs in Java and countless APIs that hadn't got the memo on their function! My hypothesis is that these teams put individuals and interactions first, and you'll note that I generally try to avoid the regular methodology terms I hear a lot these days. The fact was, we got that stack to a point previously considered impossible, without the hoopla of a massive digital transformation programme or, as I recall, a single external consultant. Again, if you were in that team, you know who you were. Thank you.
I’m still passionate about these things but these days, and especially where I live and work now, I mostly engage with people who need no convincing of these traits, who know what good really looks like through experience and who want to work on really interesting propositions that are good for the world. These are the things that get me out of bed now and I’ve worked with some pretty amazing people by pursuing these rules. I always seem to regret it when I break them too.
I hope there's something for everyone here. I'm still learning every day, and I can honestly say that the answer is different enough for every delivery team that no single way of working suits them all. The things not to tolerate are incompetence, laziness and obstructiveness, and the best time you can spend as a leader is in being really clear on the desired outcomes, understanding the needs of the individuals in a team, tending to them with kindness, and getting out of the way.
Good luck.
JT