Dysfunctional Agile, Part 2: Waterfalling Agile

This is the continuation of the article started here.

What do I mean by "Waterfalling Agile"?

Many programs implementing Agile methodologies end up making a simple but profound mistake. While concentrating on setting up development teams and basic Scrum, Kanban, or Extreme Programming (XP) tactics, they forget about the primary goal of Agile: delivering value as early as possible. In a nutshell, the whole point of Agile is to exercise the entire delivery process end to end (with integration and testing) as soon as possible. Only then can we discover impediments early, learn from the mistakes, improve, and repeat the improved process.

Organizations spend tons of money implementing agile tactics, only to find themselves completing each of the delivery phases separately. On the surface, everything looks good. The User Stories are moving across the Scrum or Kanban boards. The Scrum Masters run standup meetings every day. Velocity and story points are tracked within each sprint. It seems that the Agile adoption is going well.

Yet, at the end of the cycle, the stories are not converted into a potentially shippable product. Instead, the work is done only from the perspective of a single functional team. Completing a story and moving it to the section of the Scrum/Kanban board named "Done" only means that the task is ready to be picked up by the next team, e.g., the data integration team, then handed over to the testing team, then the release team, then Operations. In other words: a typical waterfall. To soften the reality, clever integrators sometimes do not call it waterfalled agile, using the name "hybrid approach" instead.

The value of such an approach is mediocre at best. To understand why, it makes sense to step back and look at the genesis of the problem.

How did we get here?

Unfortunately, it is difficult to change the habits of people who have been delivering projects for years using the traditional waterfall methodology. Especially in the IT world, splitting the work into distinct phases, plan->design->build infrastructure->build applications->build data management->integrate->secure->test->release->operate, was the delivery mantra for a long time. Entire companies organized themselves around this construct, with separate teams responsible for each of the steps. The concept of organizing teams by the function they performed (e.g., architecture, development, security, infrastructure, operations) seemed to be the obvious choice. It was easier to manage the skills, and the responsibilities were clearly delineated. As Conway observed, the organizational structure matched the technical blueprint. Who is responsible for security? The security department. Who is responsible for operations? Operations. How about testing? You can guess that it would be the role of the testing department.

In the world of lean industrial engineering, the dominant idea was that handing over the work between separate, high-performing functional teams would maximize process efficiency. Such a factory model worked well, for instance, in the construction business, focused on building houses, where it would be difficult to redo floor plans after completing the roof and painting the walls. Similarly, the waterfall concept worked for the industrial engineering and manufacturing of semiconductors or mechanical equipment. In other words, such an approach worked well in highly repetitive, well-known processes with precise, up-front requirements.

To illustrate this concept, consider an example in which we are asked to make 100 wooden boxes. The work would require cutting the wood, screwing the walls together, painting them, and maybe constructing and screwing or gluing on legs. If we knew the process well, it would make sense to do the work in a waterfall sequence, optimizing cutting, painting, etc. in separate phases.

The situation would be drastically different if we were not sure what the process looks like, or if the approver of the boxes wasn't sure what would work for the client. We wouldn't want to cut the walls for all the boxes only to learn, after the first assembly, that the height or size of the box doesn't fit into the desired space. Similarly, we wouldn't want to paint them all, only to find out that the color needs to be changed. What if the owner changed her mind and asked us to make each box different? In such a situation, we would probably opt for completing each box separately.
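The intuition behind the box example can be sketched as a toy calculation. This is purely illustrative (the function names and the assumption that a sizing flaw only surfaces at first assembly are mine, not from any real process), but it shows why batching a phase across all units multiplies the rework cost:

```python
# Toy comparison of rework cost when a sizing flaw is only discovered
# at first assembly (an illustrative assumption, not real process data).
BOXES = 100

def waterfall_waste(boxes: int) -> int:
    """Cut walls for every box up front; the flaw surfaces when box #1 is
    assembled, so every pre-cut set of walls must be scrapped and redone."""
    walls_cut_up_front = boxes  # phase 1 batches the cutting for all boxes
    return walls_cut_up_front

def iterative_waste(boxes: int) -> int:
    """Complete one box end to end before starting the next; only the first
    box's walls are wasted when the flaw is found."""
    return 1

print(waterfall_waste(BOXES))  # 100 wall sets scrapped
print(iterative_waste(BOXES))  # 1 wall set scrapped
```

The difference between the two numbers is exactly the value of exercising the whole process early on a single unit before committing to the batch.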

Why doesn't the waterfall work well in the IT world?

Unfortunately, the waterfall approach does not work well when the process is more ambiguous, as it is in the development of any new functionality, especially when the features require creating new software. Such attempts to deliver new capabilities require a high level of cross-team collaboration, design thinking, a trial-and-error approach, and iterative development.

In the IT world, the continually increasing complexity of the problems being addressed ensures that software development has nothing to do with a well-known, repetitive process. Users usually are not sure what exactly they want. Even if they were, it would be impractical to expect the developers and architects to fully understand the complexity of the implementation. In such a world, it probably makes more sense to think of each IT program as if it were developing a prototype, something that has never been created before.

For example, let's take a look at the relatively well-known concept of the computer operating system. What version do you use? Windows 10? macOS Mojave? Did you also check the release number? What about the drivers' versions? Did you see something that looks like 25.20.100.6374? The development work never ends. Why? Because the environment changes continuously. New devices show up. New interfaces emerge. New functionality is developed. New security risks are discovered. Everything has to be re-written and re-tested again and again.

In such a world, believing that requirements and assumptions will not change during the process is wishful thinking. The nature of the problem demands a cycle of constant adjustment. It requires accommodating ever-changing circumstances. The trial and error of the PDSA (Plan-Do-Study-Act) cycle becomes essential to deliver the functionality. High ambiguity is the nature of the process: the Product Owners create hypotheses of what clients would like to have, then translate these hypotheses into User Stories, which are then interpreted by developers who have never created this exact solution before.

Everything here is unique and not repeatable. The belief that such an environment can be meticulously planned and that all changes can be predicted upfront is irrational. In such a process, delivering value early and checking the results, learning from the mistakes, adjusting, and repeating the process becomes one of the most crucial guiding principles.

Yet, many stakeholders are still kidding themselves that unique systems can be thoroughly planned upfront and delivered on time, within budget, and with high quality. Such thinking is an attempt to defy gravity, ignoring the fact that the creation of any unique, highly complex solution inherently carries a very high risk of delayed delivery caused by factors unknown during the initial planning. Even in the construction business, many had to learn this lesson the hard way (see the construction of the Sydney Opera House).

What delivery model works for software development?

In the software development world, with its increasingly short delivery cycles, reality demands constant interaction between all functions required to deliver the solution. In such a process, the handover time and misunderstandings between the functional groups become the key impediments. The cycle represented by the waterfall approach needs to be repeated many times to deliver the desired outcome. The need to repeat these cycles quickly creates challenges if the teams are not ready to accommodate constant cross-functional handovers.

On the positive side, quick execution of the entire plan-build-test-release-run delivery cycle brings several huge benefits. Experiencing the whole process allows the teams to discover many potential impediments early. For example, organizations do not need to wait for the end of the project to learn from the steps performed towards the end of the cycle (like integration testing). Early learning is precious!

Also, the need to repeat the process of integration and testing (usually highly cumbersome and labor-intensive) creates a demand to automate it. Welcome to the world of automated CI/CD pipelines! Such automation results in the ability to quickly develop new functionalities without increasing delivery risk!
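The essence of such a pipeline can be sketched in a few lines. This is a minimal, hypothetical model (the stage names and functions are illustrative, not the API of any real CI/CD tool): stages run automatically and in order, and a red stage stops the pipeline before anything ships.

```python
# Minimal sketch of a CI/CD gate (illustrative names, not a real tool's API):
# stages run in order, and the first failure halts the pipeline, so broken
# code never reaches the release stage.
from typing import Callable, List, Tuple

Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> List[str]:
    """Run each stage in order; stop at the first failure and return the log."""
    log: List[str] = []
    for name, step in stages:
        ok = step()
        log.append(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            break  # gate: nothing downstream runs after a red stage
    return log

# Toy stages standing in for real build/test/release commands.
stages: List[Stage] = [
    ("build", lambda: True),
    ("unit-tests", lambda: True),
    ("integration-tests", lambda: False),  # a failing test gates the release
    ("release", lambda: True),
]

print(run_pipeline(stages))
# The release stage is never reached because integration-tests failed.
```

Because every commit pays the full cost of integration and testing automatically, repeating the cycle many times per day becomes cheap rather than cumbersome, which is precisely what the iterative model requires.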

Different approach to planning

In the Agile methodology, planning is essential. A properly executed Agile process requires more planning adjustments than a waterfall approach. The difference between these two concepts is that instead of doing the majority of the planning upfront (waterfall), the planning activity in Agile is continuous, repeated, and updated many times during each cycle.

Quick learning about the process and the ability to validate early which architectural assumptions work and which do not allow for a more practical approach to planning. For instance, the agile process allows making some final technical decisions as late as possible to take advantage of the knowledge gained in earlier iterations. Such thinking (see, e.g., set-based design) represents a drastic contrast to the more traditional planning approach, which demands making all critical decisions upfront based on assumptions.

Why is it so challenging to avoid waterfalling agile?

Defining a well-working agile delivery process requires a massive amount of organizational effort. Many decisions need to be made at the CxO level. Some of the top obstacles include:

Creating cross-functional teams. Such a step is required to replace handovers between the groups. In the waterfall world, the transfers happen through formal processes and sign-offs. In Agile, many of these handovers occur in the form of interactions between members of the same team.

Creating cross-functional teams requires collaboration across multiple managers from different service lines, alignment of HR processes, metrics, logistics, and existing PMO processes, the creation of new roles, and a shift of responsibilities from the line managers to the Product Owners and Scrum Masters. This effort is substantial, and it requires well-thought-out organizational change management.

Enforcing governance around the collaboration between the Product Owner and the Product Architect. The partnership is needed to balance two opposing forces: the need to develop new, client-facing functionality (represented by the Product Owner) vs. the need to ensure scalability, operability, security, and all the other non-functional requirements, usually described by the Architect.

Agile methodologies provide many best practices helping these two roles succeed. The details are way beyond the scope of this article, but it is essential to remember that the organization must clearly define governance and guardrails around processes guiding Product Owners and Architects to succeed in the transition.

Defining user stories correctly. User Stories should represent functionality relevant to the Product Owner (PO) or Product Architect (PA). POs or PAs are supposed to understand and accept the deliverable. A correctly written user story, to be truly completed, requires the completion of the entire delivery cycle.

Unfortunately, in many organizations, the user stories are tasks written by the team members (e.g., development, data, or security). The creation of such tasks usually does not require cross-functional collaboration. One could say that the concept of "User Stories" is replaced by "Team Tasks." The definition of done for these tasks is difficult for the POs or PAs to verify, as, in many cases, they do not fully understand the tactical details.

Investing in the cloud and CI/CD pipeline automation. Investment in tools and processes helping with source control, integration, testing, and release is needed to ensure these steps can be fast, repeatable, and consistent. Many articles focus on the details of CI/CD tools. At the same time, it is usually less clear how to successfully operationalize these tools and how to ensure that the typically significant investment in DevOps tools brings the desired benefits.

On the one hand, we want to enable the developers to quickly spin up an environment, execute thousands of tests in the blink of an eye, get quick feedback, and eventually integrate the working functionality with the efforts of other developers, creating a potentially shippable product.

On the other hand, the amount of work and budget needed to develop such automation goes above and beyond the control of a single program. Automation of the software delivery pipeline is one of many strategic, CxO-level decisions that need to be supported by full organizational commitment and budget allocations.

To be continued...
