Outdated belief #2: A carefully designed architecture is critical
In the early 2000s, I was one of those people preaching the importance of careful design and analysis of a system’s architecture before starting development. The belief was that non-functional requirements in particular, such as performance and robustness, are hard to ‘bolt on’ to a system once development is underway. So the software architecture community, me included, developed all kinds of tools and techniques to structure the architecting process: systematic means to make architecture design decisions, assess the architecture’s ability to meet non-functional requirements such as performance and maintainability, and ensure alignment between the requirements and the architecture.
Although these techniques were, and still are, extremely valuable, there are several challenges with this view of the world. The first is that the original viewpoint starts from a stable and complete set of requirements for the system. As pointed out when we discussed the first outdated belief, requirements are far from stable and tend to evolve at about 1 percent per month for most systems. That means the requirements form a moving target and can’t be used as a stable foundation for the development of a system.
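To make the 1-percent-per-month figure concrete, here is a back-of-the-envelope sketch (my own illustration, not from the original text) of how that churn compounds over a system’s lifetime, assuming the rate applies to the whole requirements set each month:

```python
def cumulative_churn(monthly_rate: float, months: int) -> float:
    """Fraction of the original requirements set that has changed,
    assuming the monthly churn rate compounds (illustrative model only)."""
    return 1.0 - (1.0 - monthly_rate) ** months

# At ~1% change per month, the "stable foundation" erodes steadily:
for years in (1, 3, 5):
    changed = cumulative_churn(0.01, 12 * years)
    print(f"after {years} year(s): ~{changed:.0%} of requirements have changed")
```

Under this simple model, roughly a tenth of the requirements have churned after one year and close to half after five, which is why treating them as a fixed input to an up-front architecture effort is unrealistic.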
The second challenge is the assumption that we have a greenfield context and are starting from scratch. Although new systems are built from scratch, the vast majority of development effort goes into systems that have been around for a while. And with the emergence of minimum viable product (MVP) style thinking, the greenfield period for any new system is intentionally shortened as much as possible.
The third challenge with the original view is that it assumes that we only need architecture work at the beginning of the project. Once the architecture is ‘done,’ we can move on to development and testing and no further architecture work is required. Anyone who has ever worked on a real-world system realizes how wrong that assumption is.
Finally, the traditional way of thinking is that it’s all about the architecture and not about the architects. Although the architecture establishes the guidelines, structures and boundaries for the system, it’s the architects who define these and then evolve them over time. So the architecture is only one of the relevant artifacts that we benefit from during development. It’s also about the underlying principles, the design decisions that led to the architecture, the connection to the business drivers, and so on.
Today, it’s clear that a system’s architecture evolves continuously, e.g., in response to new requirements, evolving technologies or changing user behavior. The architecture isn’t a static artifact, cast in stone, but rather a continuously evolving, organic entity that responds to the needs of the various stakeholders. It tends to evolve more slowly than the system’s features and functionality and as such provides structure, but evolve it does.
That brings us to the outdated belief we’re discussing here. The architecture of a new system starts small, largely undefined and not well understood, covering only the needs of the MVP. If the MVP is successful and we want to grow it, the architecture needs to evolve to meet requirements like scalability and performance, as well as the system’s growing functionality needs. And it often needs integrations with other systems, demanding interfaces where we never expected to need them in the first place.
There are three principles that I believe are critical for working successfully with software architecture. First, it’s about architects, not architecture. The architecture is a constantly evolving artifact, and any architecture documentation is like a milestone along the way: accurate when created but outdated when published. Much more important are the people driving this constant evolution, the architects. They need to bridge the gap between business and technology.
Second, as I already alluded to several times in this post, architecture is continuously evolving in response to a constantly evolving and changing world. Rather than trying to resist and slow down the change, we should lean into the future and proactively evolve the architecture based on the needs we can see appearing. Our research shows no correlation between the age of an architecture and the need for technical debt management. Hence, we need to start evolving the architecture from the start.
Third, most professionals don’t realize this, but architecture is the incarnation of the future business strategy. In virtually all situations, the pace at which the architecture can change and evolve is very slow. That means that the architecture design decisions made today make certain business strategies very easy to realize in one or a few years, and others prohibitively expensive. Therefore, the architecture evolution initiated today defines the range of realistic business strategies down the line. In practice, it’s not the business leaders but rather the architects who determine business strategy.
Software architecture isn’t a one-time effort where we pour the concrete for the foundation of the system to never change it again. Instead, it’s a constantly evolving artifact that happens to evolve more slowly than the features and functionality in the system. Consequently, it’s about doing just enough architecture work at the start of a new system to get the MVP in place and then growing the architecture with the system in response to the needs of the business supported by the architecture. We need to welcome change and lean into the future as not doing so will cause the system to become outdated and irrelevant. And writing off many person-years of effort and losing vast amounts of codified domain knowledge is wasteful in ways that few things are. And who wants that?
To get more insights earlier, sign up for my newsletter at [email protected] or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).
Comments

A reader (headline: “The next big breakthrough in manufacturing will be an autonomous scheduling system.”), 3 years ago:
Hello Jan - The problem starts when fundamental rules (natural laws and mathematical identities) are not clearly separated from user requirements. Both get mixed up, creating the lack of changeability you refer to. The software must represent and encapsulate the fundamental, invariable rules while providing the user a means to define the variable rules. Take, for example, spreadsheets or database management systems: the software only provides a framework, and the user is free to use the blank spreadsheet or SQL any way they need. Even a programming language falls in this category. I have followed the same approach while developing a factory manufacturing system with an autonomous scheduling engine at its core. The software provides the natural-law / mathematical-identity framework valid for any factory and a means for the user to self-configure it to suit his or her factory by writing, in a structured way, what I call the factory rules. Needless to say, this software too may prove as revolutionary as a spreadsheet or a database management engine. Do get in touch if you wish to know more or wish to be associated with the next big revolution.
A reader (headline: “Bespoke Generative AI for Engineering & Manufacturing (PLM, MES, ERP) | Cloud Native | Air Gapped | System Integration | Concepts, Technologies, Execution”), 3 years ago:
That’s a rare case where I somewhat disagree with Jan Bosch. Yes, sure, system architecture may evolve, but it should still follow a certain path or pattern. System architecture (as part of the technical spec) is subservient to functional requirements (user scenarios), which in turn come from required capabilities, which in turn come from what “done” looks like. Yes, you may start with a simple NodeJS-based service and end up with an Apache Camel-based enterprise service bus, but it is still the same carefully thought-out architecture for multiparty data exchange. You may have small changes in the requirements all the time, but the architecture should be able to accommodate them, because your required capabilities remain the same. If your required capabilities change, that is indeed a problem, and someone did a bad job defining them. Using dogs as an example: when you come to a puppy owner and choose the desired type of dog, no amount of system-architecture manipulation will reconcile a poodle with a Great Dane. Either there is a confusion in terms or I am missing something here. Yes, there is ongoing maintenance for bugs and new features; third-party frameworks evolve and you may even be forced to replace them altogether (React vs. Vue?). Yes, you may need to adapt certain pieces of the system to new circumstances. But even then, if you did not think about making the system scalable in advance, you may never be able to stretch it to new performance goals; you will need a brand-new system. Here is a good discussion of Twitter’s infrastructure evolution: https://news.ycombinator.com/item?id=17147404
A reader (headline: “Lead Architect Product Engineering Framework”), 3 years ago:
Hi Jan, from my experience I can confirm your new insights. And in my experience, what you describe has always been the reality, and often in conflict with the formal ways of working. One other aspect I find important is that “architecture”, i.e., the conceptual intentions and the related, integrated, product-descriptive elements at different abstraction levels, is needed throughout the whole organisation and can’t be limited to the work of a few specific persons. BR Klas Niste
A reader (headline: “Early Adopter of Agile Learning & Didactics - Wants to spread Agile Mindset and connect communities. Founding member of #swcommunity and #siemensdevelopmentsystem”), 3 years ago:
Dear Jan, I will discuss that with our #SeniorSoftwareArchitects at Siemens. Maybe you would like to join the discussion? Frances Paulisch
A reader (headline: “Business Development Consultant @ Sandab | New Business Development”), 3 years ago:
Good article! Vital when dealing with cybersecurity!