The value in splitting by value: a risk management perspective (EN)

By Martina Gallo and Matteo Regazzi

(You can also read this article in Italian)

When working as an Agile Coach for a consulting company, one of the likely ambitions is to help a big company, one that considers itself important, adopt a way of working inspired by the four statements written twenty years ago in the Manifesto for Agile Software Development.

For that way of working to settle in, becoming above all a way of being and thinking, we have known for some time that adopting specific practices for writing software is necessary but not sufficient: you have to cross the boundaries of the single team first, and then those of software production.

Agile has thus become a term used to describe a way of organizing a company, from customer relations to how new products are designed: the two almost overlap, and in between there is everything else, from budget management to the coding of a test. In this complex environment we risk forgetting that a large part of the change in the way of working must occur precisely in how software is developed, and it is common to meet teams who write code based on what they have learned by reading code written by those who, before them, worked in a traditional way (the chain may be longer, but the meaning is the same).

It is as if we gave the Business the keys to a Ferrari and a pat on the back and, at the first corner, they went straight on because, turning the steering wheel, they realized the wheels were not responding. A bit like when software systems were built in the traditional way, using that process that Winston W. Royce, just over 50 years ago, said did not work (but he said it on page 2 of his white paper [1]). Those were the times when the steering lock was intentionally engaged.

How did we get to the current situation? Let's take a quick leap back to the end of the last millennium: successful IT projects (on time, on scope, on budget) were rare. You could find yourself working on a project where it was not even clear who the user was, and you still had to hurry. What solution was being adopted? Obviously, spending more time in the requirement-gathering phase, and then in the design phase, because all the possible directions of evolution had to be identified and managed in advance. And off we went: all the best minds busy producing tablecloth-sized UML diagrams (mostly of the static kind), in the hope that in the meantime some magical tool would appear that would actually turn those tablecloths into executable code.

Luckily, a group of developers arrived and made us realize that the cost-of-change curve we were struggling against could even turn out to be our ally [2]. It took us a while to understand this: at first we were simply fascinated by that image in the Extreme Programming white paper stating that the curve could flatten out quickly if particular techniques (practices) were in place, but above all if a new way of managing risk was adopted.

J.B. Rainsberger explains it very well in a talk [3]: if we look only at the process, the obvious difference between agile and Waterfall (the one with the arrows looping back between phases) is just the length of the iterations. But this enables a completely different way of managing a project's exposure to the risk of failure.

Risk is defined as the possibility of suffering harm, economically a loss, connected to more or less foreseeable circumstances [Treccani].

So change, which represents a cost when it occurs and is not easily foreseeable, can be treated as a risk.

Reasoning predictively through risk analysis makes it possible to mitigate possible threats, but the outcome of this analysis will be verifiable only in the final stages of the process chain.

Risk management, whatever the approach, can be influenced from the very beginning, starting with the planning phase, which determines the level of exposure. But in areas where variation is very likely, such as software development, producing a long, very detailed plan with the aim of sticking to it as closely as possible throughout the production phase can itself become a decisive factor in increasing risk.

Organizing activities so as to obtain continuous feedback that reduces uncertainty, one of the factors by which risk is measured, therefore decreases the probability of heading in a potentially wrong direction. Furthermore, dividing the work so that the risk is spread out and managed in small doses allows you to act promptly and contain possible threats. We therefore set up moments dedicated to collecting feedback, treating them as necessary to acquire knowledge and reduce uncertainty, accelerating the cycle of learning and adaptation.

As J.B. Rainsberger points out in [3], if we calculate exposure to risk by multiplying the probability that each risk becomes a problem by the cost of that potential problem, then to reduce the exposure we can act on two factors. In the Waterfall model, everything is done to reduce the probability (long requirement-gathering phases, obsessive anticipatory design), while in agile we work to reduce the cost.
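
A back-of-the-envelope sketch of those two levers, with invented numbers chosen purely for illustration:

    # Exposure as described above: probability that a risk becomes a
    # problem, multiplied by the cost of that problem if it occurs.
    def exposure(probability: float, cost: float) -> float:
        return probability * cost

    # Illustrative numbers only.
    # Waterfall-style mitigation: long up-front analysis tries to push
    # the probability of a late change down, but the cost of absorbing
    # that change at the end of the project stays high.
    waterfall = exposure(probability=0.25, cost=100_000)

    # Agile-style mitigation: the change is still quite likely, but
    # short iterations keep the cost of reacting to it small.
    agile = exposure(probability=0.75, cost=4_000)

    print(waterfall)  # 25000.0
    print(agile)      # 3000.0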

Thinking of the total cost of change as the sum of several smaller costs allows a breakdown that reduces the overall risk, not only because the cost is not incurred all at once, but also because it lets you act exactly when the need arises.
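
Again with invented numbers, a minimal sketch of what that breakdown does to the amount actually at stake at any one time:

    # Illustrative only: one large delivery puts the whole cost of a wrong
    # direction at stake at once; splitting it into increments means that,
    # at any moment, only the cost of the current increment is exposed,
    # because feedback on each increment arrives before the next one starts.
    big_bang_cost = 120_000          # everything delivered, and judged, at the end
    increments = [10_000] * 12       # the same work split into twelve slices

    at_risk_big_bang = big_bang_cost       # worst case: the whole amount
    at_risk_incremental = max(increments)  # worst case: one slice at a time

    print(at_risk_big_bang)      # 120000
    print(at_risk_incremental)   # 10000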

If risk management makes risk the element to be considered, assessed and mitigated, agile considers risk an integral part of change. We could say that risk management is to risk as agile is to change. Therefore, setting up work that is value driven, whose goodness is measured not by compliance with the initial requirements but by the satisfaction of the end user, necessarily presupposes a new, more flexible mindset, accepting that we live in a VUCA world where change is the only certainty.

In practice, things get difficult, and that is the good fortune of agile coaches, who are guaranteed work, at least for a while. We must deliberately set up a continuous improvement cycle focused on learning. A useful model to look at is Emergent Learning [4], where team and client together continuously formulate hypotheses, challenge them through experiments, accumulate experience, adapt and formulate new hypotheses.
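
One hypothetical way to make that cycle tangible is to keep an explicit log of each loop; the field names and the example below are ours, not prescribed by the Emergent Learning model in [4]:

    from dataclasses import dataclass

    # A hypothetical record of one pass through the learning cycle described
    # above: team and client state a hypothesis, design an experiment,
    # record what actually happened, and decide how to adapt.
    @dataclass
    class LearningLoop:
        hypothesis: str    # what we believe will create value
        experiment: str    # the smallest thing we can deliver to test it
        observation: str   # the feedback we actually collected
        adaptation: str    # the new hypothesis or change of direction

    log = [
        LearningLoop(
            hypothesis="Users abandon checkout because registration is mandatory",
            experiment="Offer guest checkout to 10% of traffic for two weeks",
            observation="Completed purchases rose noticeably in the test group",
            adaptation="Roll out guest checkout; next, simplify the payment form",
        ),
    ]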

The higher the speed and precision we want or need, the higher the frequency of this learning cycle must be. We can even read the feedback directly on the user's face, as when testing UX prototypes. This is the feedback that interests us most: a team that can frequently let a user "try" the fruit of its work will be able to make better choices.

As with pair programming, we could think of this constant exchange as the work that driver and navigator do in a rally. Any modern organization can only aim to have teams that generate value quickly, reducing as much as possible the space-time between receiving a new need and obtaining feedback on a possible solution, that is, the Customer Lead Time.
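
Measured this way, Customer Lead Time is simply the distance between two dates; a minimal sketch, with invented dates:

    from datetime import date

    # Customer Lead Time as described above: the time between receiving a
    # new need and putting a possible solution in front of the user for feedback.
    def customer_lead_time(need_received: date, feedback_collected: date) -> int:
        return (feedback_collected - need_received).days

    # Illustrative dates only.
    print(customer_lead_time(date(2021, 3, 1), date(2021, 3, 19)))  # 18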

The first step is to have teams able to carry out, from conception to delivery, the production of a piece of functionality important enough to be appreciated and evaluated by a user. Teams that are not simply cross-functional, but real Feature Teams [5], or end-to-end teams. To pursue this kind of organization, an indispensable tool is Value Stream Mapping. Organizations that work on very complex systems (deliberately or accidentally complex), built by integrating several complex subsystems, can struggle to reach a configuration in which teams of a reasonable size (reasonable from a learning point of view, e.g. 10 members) deliver value at the end of each short iteration, so a strategy for reviewing the organization cannot ignore a review of the architectures.
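
For a first analysis, a Value Stream Map can be reduced to the list of stages a need passes through and the time spent in each; the stage names and numbers below are purely illustrative:

    # A minimal, invented value stream: for each stage, the time actually
    # spent working on the item and the time it waits before the next stage.
    # Flow efficiency = value-adding time / total lead time.
    stages = {
        "analysis":    {"work_days": 2, "wait_days": 10},
        "development": {"work_days": 5, "wait_days": 15},
        "integration": {"work_days": 3, "wait_days": 20},
        "release":     {"work_days": 1, "wait_days": 5},
    }

    work = sum(s["work_days"] for s in stages.values())
    total = sum(s["work_days"] + s["wait_days"] for s in stages.values())

    print(work, total, round(work / total, 2))  # 11 61 0.18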

Second step: once the first teams have been identified, the user needs to be satisfied must no longer be cut by component but kept consistent in terms of user value. If the needs are not INVEST [6], how can we expect the teams to write, and work on, user stories that are?
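
As an invented example of the difference, here is the same need cut by component and cut by user value:

    # The same need, cut in two different ways (illustrative example).
    # Component cuts: no item alone delivers anything a user can try.
    split_by_component = [
        "Add payment fields to the orders database",
        "Expose a payment endpoint in the backend",
        "Build the payment form in the frontend",
    ]

    # Value cuts: each item is small, but a user can exercise it end to end,
    # so each one can generate feedback on its own (closer to INVEST [6]).
    split_by_value = [
        "Pay an order with a credit card",
        "Pay an order with a saved card",
        "Receive a receipt by email after paying",
    ]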

The third step is the continuous search for the feedback needed to keep the system evolving. An existing backlog is useful as a trace, but only when our evolutionary cycle directly impacts that backlog can we say we are really applying the risk management strategy based on reducing the cost of change.

These three steps are longer than any stride; to take them we must reduce the size of each step and repeat, perhaps adopting a continuous improvement approach such as Lean Change Management [7], which allows us to keep a systemic view of a transformation that will no longer be a point of arrival but a persistent state.

Acknowledgements
