A short note on Dynamic Capabilities and Enterprise Architecture
[Header image: Inverse Conway's Law, as depicted in https://ea.rna.nl/2020/02/11/a-tipping-point-in-the-information-revolution/]


TL;DR

A popular idea in strategic circles is 'dynamic capabilities', roughly the idea that you need to set up your organisation in such a way that it can change its core competences to follow the 'outside' (e.g. the market in the case of the private sector). An important part of the framework is to let 'small innovations' grow into competitive advantages.

In an organisation, 'direction' is set top-down/outside-in, while 'agility' and 'robustness' are in reality enabled bottom-up/inside-out. Capabilities at the enterprise level are 'top'; capabilities at the 'solution' level are more 'bottom'.

With the growth of massive amounts of (brittle and ever harder to change) machine logic in the world, bottom-up becomes more and more important, because the top-down gets limited by the lack of agility or robustness of the bottom. So, if you want 'dynamic capabilities' when seen from the 'top', you need to address the ever increasing inertia and brittleness from massive machine logic landscapes (IT) at the bottom. And there, outside-in is of limited value, but some architecture is your friend.

Dynamic Capabilities

In the 1990s the concept of 'dynamic capabilities' was introduced, e.g. in the paper Dynamic capabilities and strategic management by Teece et al. From the abstract:

"[...] the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing internal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival’s costs, and excludes new entrants"

'Dynamic capabilities' more or less says that you need to organise so that an opportunity can grow from a small beginning into a more structural competitive advantage, instead of trying to focus on where that advantage will be coming from. It is historically part of the 'resource-based' approach to strategy, though 'resource' is a bit of a misleading word: it is meant to include not just 'stuff' (including 'human resources' — don't you hate that phrase?) but also more intangible assets such as a trade secret, a behavioural skill such as engineering skill, an efficient and flexible process, or maybe even a culture.

The setup from Teece et al. is roughly hierarchical:

  • 'Resource'
  • Competence (assembling resources in processes and the like)
  • Core Competence (the key differentiators from the previous category)
  • Dynamic Capability (changing your core competences)

Creating new capabilities or changing existing ones is seen as working on 'strategic capabilities'. So far so good.

Top-down/outside-in versus bottom-up/inside-out

But then something happens that is a good example of how most management theories work mainly top-down and outside-in, even the 'resource-based' ones. Because, according to Teece et al., one needs to answer the question of when something is 'strategic' (and when it is not). They write:

"To be strategic, a capability must be honed to a user need (so there is a source of revenues), unique (so that the product/services produced can be priced without too much regard to competition) and difficult to replicate (so profits will not be competed away)."

Ignoring the narrow for-profit bias here (what about the public sector — our societies — and their capabilities that need to change in a changing world?), the key point is that 'strategic' is coupled to an 'outside need', as seen from the top of the organisation (the organisation's survival depends on being able to produce something that others need — stated in this way, the public sector is again in play, by the way).

A second key element is the idea that you need to 'start small'. This comes from the correct observation that most learning is 'local', so — with a lot of caveats — a small unit can do a sort of trial and error in providing something to the outside world and success then breeds its own market, growing into some sort of competitive advantage.

In other words: 'dynamic capabilities' is (a) a mostly top-down/outside-in framework (like most strategy frameworks, as they are (1) generally directed at being successful in a market and (2) the domain of upper management) and (b) focused on short-term advantageous action at the 'bottom' that can grow into long-term advantage.

In complex IT landscapes, starting small and growing is at the same time necessary and risky. The complexity of our landscapes is now so great that we've lost most of our capacity to do up-front design at larger scales. So, yes, we must start small and grow. This is what Agile/DevOps are doing to organisations; we're in a world of Inverse Conway's Law now.

But architects also know that the sum of many choices made in 'small' settings often leads to horrible landscapes, and that 'refactoring' and the like is often a dream for which few resources and little sympathy are available. That has to do with that strong outside-in focus: the very direct link that organisations tend to prefer between any activity in the organisation and a tangible result for the outside 'market'. This is the natural efficiency drive of organisations, but it can lead to risky situations (e.g. a lack of life cycle management and an abundance of technical 'debt', both coming with sizeable security risks — recall the Log4Shell scare of late 2021).

In my view, architecture is not about using 'short term' opportunities to grow into 'long term advantages'; because of the effects of the IT revolution, it is about paying enough attention to the 'long term' at the bottom, regardless of short-term advantage or top-down desires (unless that desire is strategic agility). After all, the average life span of key elements/choices in your IT landscape is about 15 years; the average life span of a strategy is a lot less, maybe as low as 4.

The — generally ignored — paradox is that while IT technology may change rapidly, IT landscapes often do not.

In terms of fixed cost and variable cost: architecture behaves more like a fixed cost, and is more a function of the size of your landscape than of what you want as business outcomes. That fixed cost is going up in IT, because the more voluminous and complex our machine logic landscapes get, the more you need to pay attention to agility and robustness, regardless of your markets or other stakeholders.

You either pay attention to architecture, or you pay a price consisting of a lack of agility (dynamism), robustness, and efficiency.

Enterprise Architecture has historically been top-down and outside-in (even before IT existed and before it was labelled EA), just like the management it was there to support. EA has been struggling to adapt to the emerging situation of massive inertia. Management is sometimes even further behind the curve, as their idea of dynamic capabilities may still simply be 'start small and grow', ignoring the — still accumulating — iron ball of their massive IT landscape that they unavoidably have to carry around.

And no, you are not getting rid of that massive landscape or its complexity by 'moving to the cloud'. Sorry. Complexity growth will be with us for some time, following the law of complexity-capacity exhaustion:

Capabilities deployed to lessen the impact of complexity on the human capacity to manage the landscape result in the deployment of more complexity until the limit of the capacity of humans to manage the landscape has again been reached.

In other words: every time we reduce complexity, we increase volume until we humans are at the same level of complexity management again. The brilliant XKCD webcomic illustrates this with an example:

[XKCD webcomic image]

Repeat. Until we run into the 'complexity crunch', that is. But I digress. As usual.
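To make that rebound dynamic concrete, here is a minimal toy sketch (my own illustration, not from any source; the capacity, volume, and complexity numbers are made-up assumptions): each 'simplifying' capability lowers the management effort per unit of landscape, and the freed headroom is immediately spent on growing the landscape until the human capacity limit is hit again.

    # Toy model (illustrative assumptions only): humans can manage a fixed
    # "load", defined as landscape volume times complexity per unit.
    # Each new simplifying capability cuts complexity per unit, and the
    # landscape then grows until the human limit is reached again.
    CAPACITY = 100.0           # total load humans can manage (arbitrary units)
    volume = 50.0              # size of the machine-logic landscape
    complexity_per_unit = 2.0  # management effort per unit of landscape

    for step in range(1, 6):
        complexity_per_unit *= 0.8                # a new tool/abstraction simplifies things...
        volume = CAPACITY / complexity_per_unit   # ...and the landscape grows to fill the headroom
        print(f"after capability {step}: volume = {volume:.0f}, "
              f"load = {volume * complexity_per_unit:.0f} (back at the limit)")

The only point of the sketch is that the total load never goes down: every reduction in complexity is converted into more volume, which is exactly the 'exhaustion' in the law above.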

I'll be giving the overall closing keynote of the colocated Enterprise Architecture Conference Europe and Business Process Management Conference Europe on 12 October 2022, London, UK. Keynote title: "Digital Culture: It's not what you think it is". I will also be giving a related but different talk at the 5th Annual Enterprise Architecture for Financial Institutions (29-30 September 2022, Frankfurt, Germany). Title: "Supporting Digital Culture with Enterprise Architecture". See you at one of these, maybe?

Stephen F. Heffner

President / Owner at XTRAN, LLC

2 yrs

Gerben -- one answer to the conundrums you describe lies in what I call "cocktail-shaker" design -- top-down functional decomposition with concurrent bottom-up primitives identification, meeting in the middle with the results of the functional decomp being implemented using the primitives, combined with Entity Relationship analysis on the data to get it properly normalized. Then as-needed refactoring to keep the architecture (code and data) clean. The result is robust and agile (small "a") software with no technical debt, which is easily extended to incorporate new functionality. Dominique -- "automation" -- yes! That's what we provide; see WWW.XTRAN-LLC.com for more info, including a wealth of examples.

Zsolt Balog

Global IT Enterprise Architect at Nemetschek Group

2 yrs

Bullseye. But I don't think that complexity or additional abstraction would be the root cause. Pick any real cloud platform, like OpenStack. It's insanely complex under the hood, but for the user it's "flexibility as a service right at the moment of need". I think the technical debt is the real root cause, which grows exponentially through the layers (one bad decision in a core component makes the whole structure brittle and rigid).
