Digital Transformations and Agile Approaches: One Thing to Watch Out For
Pierluca Riminucci
CTO-Europe Accounts @Infosys | Chief Architect @HSBC | Group CTO @Prada
Nowadays digital transformation programmes are invariably associated with Agile. Organizations want to become Agile because they aim to increase speed, efficiency, effectiveness, and so on, and Agile promises to deliver all that.
However, what typically happens on the ground is a speedy adoption of a number of Agile ceremonies and tools: standing crowds gathered at the corners of a big open-plan office, yellow stickers on every available wall or glass panel, Kanban boards, the ever-present Jira, and so on. At the same time, what also seems to happen is the removal of any deeper analysis of organizational performance and of any open-minded, ongoing monitoring of progress.
I have been involved in many extensive transformation programmes across Europe and beyond and, out of this first-hand experience, I have noticed remarkable similarities. Indeed, most organizations never get anywhere near the core of the Agile message: they just adopt its choreography in a rather mindless way.
The overall consequence often materializes in a number of recurring anti-patterns that are remarkably repetitive across organizations and invariably introduce even more inefficiency and waste.
The purpose of this brief article is to provide some insight into why this happens and what to watch out for.
Basically, far from being derogatory of Agile tout court, this article simply aims to point out its many fake implementations and, in doing so, it truly follows one of Agile's key principles: the one that preaches paying attention to lessons learned and incorporating remediation actions going forward.
Below is the relevant principle quoted verbatim: “At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly”. [1]
So why not reflect on how Agile is typically deployed in organizations that are embarking on digital transformations for real?
The first and foremost observation is that a so-called digital transformation almost invariably involves the construction of a distributed solution, or a set of interoperating solutions. That is, a solution made of a number of components required to interoperate seamlessly: some to be built from scratch, some already existing and needing modification or enhancement.
In essence, most of the time there will be the need to build a UI layer (whether native mobile, hybrid or web-responsive does not really matter) that connects to an ecosystem of APIs (also typically yet to be built, in large part) which, in turn, provide access to a layer of legacy systems of record, typically via a number of different types of connectors. Some modifications are usually required to this last layer too.
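As a minimal illustration of that shape, the three layers can be sketched as plain functions: a UI handler calling an API, which reaches a legacy system of record through a connector. All names and data here are hypothetical, invented purely to show the layering; real implementations would of course involve networked services, not in-process calls.

```python
# Hypothetical sketch of the typical digital-transformation stack:
# UI layer -> API ecosystem -> connector -> legacy system of record.

def legacy_system_lookup(customer_id: str) -> dict:
    """Stand-in for a legacy system of record (e.g. a mainframe query)."""
    records = {"C001": {"name": "Acme Ltd", "balance": 1250.0}}
    return records[customer_id]

def account_connector(customer_id: str) -> dict:
    """Connector translating the legacy record format for the API layer."""
    raw = legacy_system_lookup(customer_id)
    return {"customerName": raw["name"], "accountBalance": raw["balance"]}

def get_account_api(customer_id: str) -> dict:
    """API consumed by the UI layer (mobile, web or hybrid alike)."""
    return account_connector(customer_id)

def render_account_screen(customer_id: str) -> str:
    """UI layer: turns API data into something shown on a screen."""
    data = get_account_api(customer_id)
    return f"{data['customerName']}: {data['accountBalance']:.2f}"
```

Even in this toy form, the point stands: each layer has its own contract, and the whole chain has to interoperate seamlessly before any single screen works end-to-end.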
So far, no rocket science. However, the scope of what needs to be done is typically far larger than what a pizza-size team can crack in a self-contained and self-organized mode. And that implies a couple of cornerstone Agile tenets break down right from the start.
Further to that, such a big and complex scope requires, as a means of technical coordination, an upfront elaboration of the architecture (i.e., an engineering-focused distributed solution design), which in turn requires some clarity and comprehensiveness about requirements. As always, the devil is in the details: requirements are no exception.
Here, on the requirements elaboration front, is where the first and very evident failure typically happens. It is often the most important failure and, at the same time, the most widely occurring. However, it is very rarely recognised, since Agile is highly iterative and teams start to code almost immediately... they start to code something, alas! This often masks the real problem and disguises it as a technical failure: implementation appears to be taking forever. What is really happening under the hood, however, is that a conceptualization of what needs to be produced has not been achieved to any meaningful level of detail.
Because of its extreme importance and recurrence, I will analyse this aspect in the remaining part of this article.
Agile is almost invariably associated with user stories as the format adopted to collect requirements. And most of the time what gets produced is a set of high-level, hollow statements, almost never mutually exclusive nor collectively exhaustive, that nobody is really able to make sense of, least of all the architects or developers, who very rarely bother to read these small sentences in their hundreds.
And indeed, in fairness, Agile preaches that user stories are not meant to describe functionalities [2], but rather to serve as placeholders for the self-contained, self-organizing team to pick up and further elaborate (via team communication), producing all the required details while progressing with their implementation. Something that can work only in a small, self-contained team with negligible external dependencies.
So, the first and foremost failure usually happens on the requirements side. I have seen, or heard of, programmes that after many months had not yet managed to produce any satisfactory description (i.e., with a meaningful level of detail) of the WHAT, in other words of the functional requirements end-to-end.
Meanwhile the architecture teams, sometimes organized into separate work-streams, each focused on a specific 'view' of the whole distributed architecture (data, security, integration, etc.), engage in rather lengthy and unfocused debates on potentially needed new technical capabilities, failing, though, to converge on a real solution design that the engineering team can read, understand and implement.
This brief article is meant to report experiences collected from the trenches; hence it is not my intention here to delve into a detailed analysis of the various remediations I myself have put forward or seen adopted with success.
Its goal is rather to highlight a recurring anti-pattern, with the aim of raising awareness of possible pitfalls, especially among those non-technical stakeholders who are often the decision makers, so as to help them reliably recognize whether their programme has fallen into this anti-pattern.
And believe you me, the level of noise that typically reaches an executive (business or IT) from their own organization is such that it is very difficult for them to understand what is really going on at ground level, despite the many reports usually produced, all looking very professional, polished and often really impressive from a graphical point of view.
So what is the main symptom of such not-so-uncommon anti-pattern? In other terms: how do you recognise that your programme has got stuck at the rather basic level of requirements gathering and elaboration? What do you need to probe to assess whether that is really the case?
Naturally enough, you would need to check the artefact(s) used to document the agreed requirements and read them to assess whether they describe, with enough clarity and comprehensiveness, what the solution will do for its specified categories of users. The latter used to be called actors, and still are within the UML formalism.
In other, more mundane terms, and assuming, as is typically the case, that your distributed solution starts with a UI (either mobile or web), you should check that the requirements clearly describe the sequence of screens together with their logical information flows, back and forth from the associated sink or source of that data, which typically resides in the middle and backend layers.
All that has to be provided at a logical level, making sure that all the information-flow complexity has been properly addressed. There is no need, therefore, to tediously list all the fields or specify the look and feel of the screens. However, any piece of data that is architecturally relevant needs to be elaborated and analysed, along with its integration/communication path, at a semantic level.
Also, very importantly, look to see whether the relevant server-side calls are described for every event or user action that may happen on a specific screen.
A server-side call should be documented with a logical name, together with its input and output parameters (again documented at a logical level) and a brief description of what it is required to do from a UI (or front-end) perspective, again in business terms.
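To make that concrete, here is a minimal sketch of how such a logical call description could be captured as structured data rather than free prose. The record type and every name and field in it are invented for illustration; the point is only that the documentation stays at the business level, with no endpoints, payloads or technology choices.

```python
from dataclasses import dataclass

@dataclass
class LogicalCall:
    """A server-side call documented at logical (business) level."""
    name: str             # logical name, not an endpoint URL
    inputs: dict          # parameter name -> business meaning
    outputs: dict         # field name -> business meaning
    ui_description: str   # what the UI needs it for, in business terms

# Hypothetical example entry for an account overview screen.
retrieve_balance = LogicalCall(
    name="RetrieveAccountBalance",
    inputs={"customerId": "the customer viewing the screen"},
    outputs={"balance": "current account balance, in account currency"},
    ui_description="Invoked when the account overview screen loads, "
                   "to display the customer's balance.",
)
```

A reader (or an independent judge) can then walk screen by screen and check that every user action maps to one such documented call, with no gaps.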
There is nothing technical here. A requirement is about describing what the UI should be doing from a user's point of view, both in terms of screen interactions and of the calls it generates to perform business operations. The reading should be sequentially friendly: the document should lend itself to being read from the first to the last page, and, page after page, the reader should grow ever more convinced that the ground is covered comprehensively and exhaustively, with no gaps or vagueness, and that it makes sense.
This suggestion might look trivial and old-fashioned. However, if after months there is no such description available (with the quality attributes I have just alluded to), it means requirements gathering is still all over the place and your programme has fallen into that not-so-uncommon anti-pattern.
Better still: the same exercise can be organized by leveraging independent judges, with clearly defined instructions on how to perform the assessment, so as to ensure as much objectivity as possible.
And if a requirements template had been defined beforehand with extreme care and attention (as I often recommend), this kind of programme progress measurement would be far easier to perform with 'objective' reliability.
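Assuming the requirements are captured in some structured form against an agreed template, even a trivial completeness check gives the independent judges an objective starting point. The template fields below are entirely hypothetical, chosen to mirror the screen/action/call structure discussed above:

```python
# Hypothetical template: fields every documented requirement must carry.
REQUIRED_FIELDS = {"screen", "user_action", "server_call", "inputs", "outputs"}

def missing_fields(requirement: dict) -> set:
    """Return the template fields absent from one documented requirement."""
    return REQUIRED_FIELDS - requirement.keys()

def template_coverage(requirements: list[dict]) -> float:
    """Share of requirements fully matching the template (0.0 to 1.0)."""
    if not requirements:
        return 0.0
    complete = sum(1 for r in requirements if not missing_fields(r))
    return complete / len(requirements)
```

A coverage figure computed this way is crude, but it is reproducible by anyone given the same template, which is precisely what makes the assessment objective rather than anecdotal.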
Assessing whether the requirements elaboration process has produced the expected results is important, though often overlooked. Requirements are often where most digital transformation programmes risk being wrecked, though that is not widely recognised.
So, before accepting esoteric explanations as to why my complex digital transformation programme is taking so unexpectedly long to deliver, as the executive responsible for its delivery I should be looking, with enough time and patience, at whether sufficient clarity has been achieved at the requirements level.
Yes, that is often where implementation delays originate, since the various cross-functional teams are progressing in a kind of short-sighted fashion, looking no further than the end of their current sprint.
[1] Agile Principle #12, https://agilemanifesto.org/principles.html
[2] See for instance: Dean Leffingwell, Scaling Software Agility: Best Practices for Large Enterprises, Addison-Wesley