CIOs can reduce CO2 emissions by simplifying architecture portfolios
by Andreas Diebold, Tim Howaldt, Tobias M?hl, Benjamin Zeller (alphabetical order, equal contribution)
Many thanks to Peter McElwaine-Johnn and Philipp Nützi for their invaluable contributions. Without their eagle eyes our thoughts and concepts would not be what they are. Also, many thanks to Dr. Marina Zeller for her scalpel-sharp proofreading; her linguistic discipline and publishing experience put the spice in our text.
Introduction
A major demand on CIOs is to reduce the complexity of their application portfolio. Nonetheless, the benefit often remains controversial in the managing board and funding is uncertain, so that projects need to “sneak in” architectural improvements on the side. It is almost impossible for CIOs to get funding for projects that focus on simplifying the architecture, even though the cost-benefit evaluation is positive, if indirect. To get funding for projects that simplify the architecture, CIOs need a direct business case that saves cost and increases benefit.
We have identified an inherent correlation between the simplification of application portfolios and CO2 reduction, which enables substantial cost savings and revenue increase potential. In this article, we introduce a novel five-step approach, pioneered by Principal Director Peter McElwaine-Johnn of Accenture UK in his learning series “The Art of EA”, that provides CIOs with the transparency and the decision-making basis (see Figure 1) to bring architecture simplification to the top of the C-level agenda.
In the current global situation, with an escalating climate crisis in which energy and CO2 certificate prices spike and put a strain on budgets, CIOs can make a positive business case for projects focused on simplifying architecture, funded by the cost reduction realized through lower CO2 emissions.
Figure 1: Five-step approach to provide CIOs with fact-based decision-making fundamentals
Drivers of Architecture Complexity and its Quantification
When we talk about architecture simplification, we first need to establish a common ground on how architecture complexity is defined. Architects could probably argue with one another until eternity about what defines architecture complexity and how to quantify it. All sorts of KPIs spring immediately to mind: the degree of customization, redundancy of functionality, compliance with technology standards, frequency and effort of change. The list could continue until your forefinger gets sore from scrolling. Isn’t there a strategic way to quantify complexity? One that does not require 18 months of assessment and an army of architects, application owners and business users?
Yes, there is. Of course, it comes at the cost of abstraction, but that is exactly what CIOs need to make decisions efficiently in today’s fast-paced business and technology environment. CIOs want an effective approach to quantify the complexity of their portfolio’s architecture fast.
Let us introduce you to Glass’s Law. Robert Glass was a software engineer at Boeing who built on the work of Scott Woodfield and defined a law on how to measure system complexity, condensing it to the two main contributors: the number of functions a system provides and the number of interfaces a system has (see “Facts and Fallacies of Software Engineering”, Addison-Wesley, 2002). We can already hear you objecting. Yes, we know: there are many more contributors to architecture complexity, and you are right. But according to Glass, the combination of these two contributors drives up complexity the most, so that the others can be marginalized. The bottom line of Glass’s Law is: each time we make one system dependent on another by introducing a new functionality or a new interface, the complexity increases exponentially, because the whole construct gets harder to change at a later point in time without breaking one of the dependent systems.
We utilize Glass’s Law as a strategic and lightweight assessment approach to quantify the complexity of each system in a portfolio. As an example, see the application of Glass’s Law in Figure 2: the complexity score decreases by ~80% when a simple monolith with nine point-to-point interfaces is replaced with three microservices with three point-to-point interfaces each.
Figure 2: Visualization of the Glass’s Law algorithm, as well as an example showing the reduction in complexity units (CU) when replacing a monolithic point-to-point architecture with point-to-point microservices
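To make the scoring concrete, here is a minimal Python sketch of a Glass-style complexity score. It assumes a power-law exponent derived from Glass’s observation that roughly 25% more functionality doubles solution complexity; the scoring function, the function counts and the resulting percentage are illustrative assumptions, not the calibration behind the ~80% figure shown above.

```python
import math

# Glass's observation: ~25% more functionality roughly doubles solution
# complexity, i.e. complexity grows as size**K with K = log 2 / log 1.25.
K = math.log(2) / math.log(1.25)  # ~3.1

def complexity_units(functions: int, interfaces: int) -> float:
    """Score one system from its two main complexity drivers."""
    return (functions + interfaces) ** K

# Monolith exposing nine functions over nine point-to-point interfaces ...
monolith = complexity_units(functions=9, interfaces=9)
# ... versus three microservices with three functions / three interfaces each.
microservices = 3 * complexity_units(functions=3, interfaces=3)

print(f"Complexity reduction: {1 - microservices / monolith:.0%}")
```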
Differentiation between necessary and unnecessary complexity
Now that we have identified the complexity of each system across the portfolio, we need to differentiate necessary from unnecessary complexity. Necessary complexity is determined by the business: certain market structures, product characteristics, organizational models or regulatory requirements may result in a level of complexity that can hardly be reduced without impacting the business. Necessary complexity should be limited to the market-differentiating business capabilities and needs to provide a competitive advantage to be justified.
The larger remainder is unnecessary complexity that originates from flawed design decisions in the past or has grown historically. Examples are: technical debt from M&A transactions, incomplete or abandoned renewal activities, technology proliferation, lack of enterprise architecture governance, poor solutioning processes, misaligned project portfolio management, lost knowledge and skills, and irreplaceable components. Architects pretty much know unnecessary complexity when they see it. Spotting it is a very common skill among architects; it is neither really an art nor a science, it is just what good architects do best - it is their sweet spot.
The CIO knows the complexity of each system - that’s the first step. The architects differentiate necessary from unnecessary complexity for each system - that’s the second step. The subsequent focus will be on eliminating unnecessary complexity.
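As a minimal sketch of how the outcome of these two steps could be captured, the following assumes each system carries a complexity score from step one and an architect-assessed share of necessary complexity from step two; the system names and all numbers are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class System:
    name: str
    complexity: float       # Glass's Law score from step one
    necessary_share: float  # architects' assessment from step two, 0.0..1.0

    @property
    def unnecessary(self) -> float:
        return self.complexity * (1.0 - self.necessary_share)

portfolio = [
    System("billing", complexity=7_900.0, necessary_share=0.4),
    System("crm", complexity=780.0, necessary_share=0.9),
]

# Focus simplification on the largest pools of unnecessary complexity.
for s in sorted(portfolio, key=lambda s: s.unnecessary, reverse=True):
    print(f"{s.name}: {s.unnecessary:,.0f} unnecessary complexity units")
```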
Architecture patterns to eliminate unnecessary complexity
The next task is every architect’s favorite activity: balancing the levels of fragmentation, consolidation, layering and modularization by identifying and applying architecture patterns that reduce complexity. Examples are decoupling components, merging functionality, externalizing integration, introducing separation of concerns, dependency injection, and applying patterns such as facade, strategy, strangler, vertical partitioning, and command query responsibility segregation (CQRS).
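As an illustration, here is a minimal sketch of the strangler pattern in Python: a facade routes each request either to the legacy monolith or to an already-extracted microservice, so functionality can migrate incrementally. All handler and operation names are hypothetical.

```python
# Minimal strangler-pattern sketch: the facade is the single entry point,
# and operations are moved from the monolith to microservices one by one.

def legacy_monolith(request: dict) -> str:
    return f"monolith handled {request['operation']}"

def invoicing_service(request: dict) -> str:
    return f"invoicing microservice handled {request['operation']}"

# Operations already carved out of the monolith.
MIGRATED = {"create_invoice": invoicing_service}

def strangler_facade(request: dict) -> str:
    handler = MIGRATED.get(request["operation"], legacy_monolith)
    return handler(request)

print(strangler_facade({"operation": "create_invoice"}))  # new path
print(strangler_facade({"operation": "close_ledger"}))    # legacy path
```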
This is the third step: the CIO knows how unnecessary complexity can be eliminated from the portfolio, enriched with effort, duration and cost estimates as well as scores for implementation risk and user impact to enable holistic decision making. These decision metrics enable CIOs to prioritize architecture simplification activities based on a cost-benefit assessment, where benefit at this stage is quantified as complexity reduction.
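A minimal sketch of such a prioritization, assuming a simple gain/pain ratio that divides the complexity reduction by cost weighted with risk and user-impact scores; the weighting scheme and all numbers are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Initiative:
    name: str
    complexity_reduction: float  # benefit: complexity units removed
    cost_eur: float              # estimated implementation cost
    risk: int                    # implementation risk score, 1 (low) to 5 (high)
    user_impact: int             # user impact score, 1 (low) to 5 (high)

    def priority(self) -> float:
        # Gain/pain ratio: benefit over cost, discounted by risk and impact.
        return self.complexity_reduction / (self.cost_eur * self.risk * self.user_impact)

initiatives = [
    Initiative("split billing monolith", 7_100.0, 400_000, risk=3, user_impact=2),
    Initiative("retire legacy CRM", 700.0, 80_000, risk=1, user_impact=1),
]

for i in sorted(initiatives, key=Initiative.priority, reverse=True):
    print(f"{i.name}: priority {i.priority():.2e}")
```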
Correlation of architecture complexity and CO2 emissions
We have identified an inherent correlation between the architecture simplification of application portfolios and CO2 reduction. The assumption is that the complexity of a system correlates with the infrastructure footprint required to run it. This correlation may not always be direct. Often the infrastructure footprint required to run a system is determined rather by aspects such as required performance, the number of concurrent reading and writing users, data throughput, parallel failover deployments for resilience, and high-availability requirements. Nonetheless, unnecessary architecture complexity is generally the result of functionality sitting in the wrong place, both logically and physically, as well as an unnecessarily high number of interfaces. This eventually leads to higher CPU and RAM utilization, higher storage consumption and higher network traffic than necessary.

Even more importantly, this typically has a negative impact on the capability to flexibly scale the infrastructure according to actual load patterns. Unnecessary architecture complexity tends to keep the entirety of a system up and running instead of just the modules required to handle the actual requests and load. This is especially true for cloud deployments, where architectures need to be built to exploit flexible scalability. It also affects on-premises deployments, which could handle load patterns more efficiently if the architecture complexity allowed for flexible scaling; data centers would then not need to be equipped with enough infrastructure capacity to handle the maximum load of all deployed systems at the same time. Simply put, a complex system requires more infrastructure capacity than a less complex one.
We defined a set of five standard infrastructure platform archetypes with increasing complexity, from very simple to very complex. While very simple systems can run on a single low-cost server that may even be shared with other systems, very complex systems typically have high performance and high availability requirements, resulting in a whole armada of servers hosting redundant databases and horizontally scaled application instances. We created a range-of-magnitude mapping from the ranges of architecture complexity to these infrastructure platform archetypes. This is the fourth step: the CIO knows which systems require which level of infrastructure capacity.
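A minimal sketch of such a range-of-magnitude mapping; the thresholds and archetype labels below are illustrative assumptions, as the actual calibration is not published here.

```python
import bisect

# Complexity-score thresholds separating the five archetypes (assumed).
THRESHOLDS = [100, 1_000, 10_000, 100_000]
ARCHETYPES = ["very simple", "simple", "medium", "complex", "very complex"]

def archetype(complexity: float) -> str:
    """Map a system's complexity score to an infrastructure archetype."""
    return ARCHETYPES[bisect.bisect_right(THRESHOLDS, complexity)]

print(archetype(7_900))  # -> "medium"
```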
Quantifying CO2 emissions reduction by reducing architecture complexity
Based on the correlation of architecture complexity and infrastructure capacity, we can finally calculate the level of CO2 emissions. Different types of data center and cloud providers, with different levels and mixes of renewable and conventional energy, result in different levels of CO2 emissions. There is more than enough statistical data available from various sources to calculate an average CO2 emission level per infrastructure element (e.g., a typical standard multipurpose VM with balanced CPU, RAM and storage ratios). These average CO2 emission levels per infrastructure element enable us to calculate the average CO2 emissions that each infrastructure platform archetype generates, which in turn yields a correlation of CO2 emission level to architecture complexity range. This is the fifth step: the CIO knows how much CO2 is generated by each range of architecture complexity.
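The following sketch shows the shape of that calculation under assumed emission factors; the per-VM value and the VM counts per archetype are placeholders, not published statistics.

```python
# Assumed average emission of one standard multipurpose VM per year.
VM_CO2_KG_PER_YEAR = 300

# Assumed infrastructure size of each platform archetype, in VM equivalents.
VMS_PER_ARCHETYPE = {
    "very simple": 0.5,  # shared low-cost server
    "simple": 1,
    "medium": 4,
    "complex": 16,
    "very complex": 64,  # redundant databases, horizontally scaled instances
}

def annual_co2_kg(archetype_name: str) -> float:
    """Average yearly CO2 emissions of one system of the given archetype."""
    return VMS_PER_ARCHETYPE[archetype_name] * VM_CO2_KG_PER_YEAR

print(f"{annual_co2_kg('very complex') / 1000:.1f} t CO2 per year")
```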
Making the case
The CIO knows the CO2 emission levels and the respective reduction potential. The current price of CO2 certificates can then be assigned to the projected decrease in emissions. Current market research clearly indicates a substantial price increase for CO2 certificates: by 2030, prices are projected to increase by a factor of 10, from the current 30 EUR to 300 EUR per ton. Assuming this continuous increase of CO2 certificate prices, our approach has shown that it is relatively easy to fund architecture complexity reduction activities with the corresponding cost savings realized by simply having to purchase fewer CO2 certificates. That saving will typically amount to between 2% and 7% of overall IT cost, depending on the complexity of the current portfolio.
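A back-of-the-envelope version of that business case, using the certificate prices quoted above and an assumed portfolio-wide reduction of 500 tons of CO2 per year:

```python
# Certificate prices from the article's projection.
PRICE_TODAY_EUR_PER_T = 30
PRICE_2030_EUR_PER_T = 300

# Assumed portfolio-wide emission reduction from simplification.
CO2_REDUCTION_T_PER_YEAR = 500

saving_today = CO2_REDUCTION_T_PER_YEAR * PRICE_TODAY_EUR_PER_T
saving_2030 = CO2_REDUCTION_T_PER_YEAR * PRICE_2030_EUR_PER_T
print(f"Annual certificate saving: {saving_today:,} EUR today, "
      f"{saving_2030:,} EUR at projected 2030 prices")
```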
Figure 3: Illustrative “Pain/Gain” matrix showing which architecture complexity reduction initiatives should be given the highest priority
On top of that, realized CO2 savings can be leveraged for other purposes such as marketing, customer acquisition and retention, and in that way contribute to revenue increase, because nowadays a leaner CO2 footprint is a competitive advantage.
With the quantified CO2 savings, we have created the missing piece for showing tangible benefits of architecture complexity reduction initiatives. The CIO can use this to prioritize initiatives and develop a roadmap for such activities both short- and long-term.
The simple verdict: substantial CO2 reduction can be achieved by reducing the complexity of an architecture landscape, resulting both in a decrease in IT operating cost and in an increase in revenue and profit.
Key sources
The above-mentioned steps one to three are based on work pioneered by Principal Director Peter McElwaine-Johnn of Accenture UK.
Glass’s Law is published in the book “Facts and Fallacies of Software Engineering”, Addison-Wesley, 2002.