Technology at business core
Summary
Technology deployment takes us to Productivity Frontier
Digital technologies are the primary way to stay at the productivity frontier of any industry. This is the neutral position – not moving the frontier but not falling behind either.
Productivity = value/effort. The numerator translates to customer value and customer experience. On the denominator side, operational efficiency is the main factor. In a nutshell, productivity is about doing more with less.
Digital disruption works on both sides of the equation. And beyond: it often changes the definition of the industry itself. Determining your competitive landscape becomes tricky with new digital natives popping up here and there. Staying alert and agile is essential.
Innovate to stay competitive – or get marginalised
The productivity frontier is fundamentally about the competitive landscape. Leading players move the productivity frontier with innovative use of digital technologies. However, innovation is hard. With the right kind of operating model design, innovation and renewal become possible.
Competitiveness builds on innovation. Innovation is enabled by the right kind of operating model design.
Laggards fall behind the productivity frontier. They fail to innovate. They fail in deploying digital technologies for higher customer value, the best possible customer experience and more efficient operations. They get marginalised. Eventually, when the money runs out, they are eliminated thru creative destruction.
Digital Strategy dualism
Digital Strategy has two roles. First, strategy is needed to outline how digital technologies are to be deployed for productivity gains. That is, how they are deployed for higher customer value, better customer experience and enhanced operational efficiency. In concrete terms, the rubber meets the road in products, services, processes and business functions. For example, how analytics/AI use cases are utilised to improve customer experience thru digital services.
Second, digital strategy needs to outline how enabling capabilities are to be built and organised. The decision on the operating model is one of the most crucial ones, as it dictates how effective innovation and renewal will be.
Distribution as operating model design principle
The merits of distribution have been discussed at some length in earlier articles:
Business Domain is the most essential element in distributed operating model
This article builds on the earlier ones by taking a closer look at the most essential distributed module: the Business Domain with its in-built digital capabilities. The concept of business domain introduced in Data Mesh is now extended to cover all digital capabilities, not just data. In doing so, we choose the same philosophical foundation: Domain-Driven Design.
Business Domain as bounded context
Historically, Domain-Driven Design has been the remedy for massively increasing software complexity. With DDD, application monoliths have been broken into microservices, making them easier to develop and maintain. Data Mesh has adopted DDD with the same rationale: as a remedy to ever-increasing complexity and scale.
The Data Mesh article touched gently on the idea of high-performance domains where business, IT, software and data are organised as one digital productivity engine. Now is the time to take things further.
With DDD comes the concept of Bounded Context, which is both helpful and beneficial. Helpful in understanding the core idea of federating digital ownership to autonomous business domains. Beneficial in making domains effective with tight-knit cross-disciplinary teams organised around a shared semantic understanding of value creation essentials.
Bounded Context is the foundation for high-performance domains where business applications, IT, software and data are organised as one digital productivity engine.
At its core, the concept of bounded context is simple: we become domain-oriented by recognising that context matters. Meaning is context-dependent. For example, “customer” means very different things in Sales than in Customer Care, let alone in Production. By recognising this context dependency, designing the optimal operating model becomes possible.
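To make the idea concrete, here is a minimal sketch (illustrative class and field names only, not taken from any particular system) of how the same term maps to different models in different bounded contexts:

```python
# Illustrative sketch: the same term "customer" modelled differently
# in two bounded contexts. Names are hypothetical, for illustration only.
from dataclasses import dataclass


# --- Sales bounded context: "customer" is a prospect in a pipeline ---
@dataclass
class SalesCustomer:
    customer_id: str
    account_owner: str          # responsible sales rep
    pipeline_stage: str         # e.g. "lead", "negotiation", "closed-won"
    expected_deal_value: float


# --- Customer Care bounded context: "customer" is a support relationship ---
@dataclass
class CareCustomer:
    customer_id: str
    open_tickets: int
    service_level: str          # e.g. "gold", "silver"
    satisfaction_score: float   # latest CSAT score


# No shared "master Customer" class is forced on both contexts;
# each domain owns the model that matches its own meaning of the term.
```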
By respecting the semantic dependency on context, we make no attempt to achieve a single source of truth for the “customer” definition. We no longer collect data into centralised repositories where semantic understanding is lost forever, leading to a data swamp nobody wants to touch.
In a distributed operating model, the context is the business domain. We strive to identify and define business domains with context as the binding force. Depending on situational factors, domains may or may not follow the traditional split into business functions.
DDD and bounded context are foundational for Data Mesh. But it doesn’t end there. The very same logic and needs apply equally across all digital technologies and to the way they are leveraged for productivity gains.
Business domains as bounded contexts are the way to keep things meaningful, tangible and understandable for all parties – for them to be able to contribute effectively.
For people to contribute effectively, they need shared meaning and semantics. When company success depends on digital innovation for higher customer value, better customer experience and enhanced operational efficiency, facilitating this becomes a Critical Success Factor.
The root cause of the age-old “split between business and IT” is the failure to recognise the importance of shared meaning. A centralised operating model does not facilitate closing the gap. Ever. Adding data to the picture just adds two more gaps.
Effective digital innovation has become a Critical Success Factor. Operating model design needs to facilitate that.
Distribution and ownership federation to business domains is just the starting point. A relatively small step, in fact. The follow-up question becomes: how are business domains to grow into autonomous owners of their digital assets, accountable for their digital capabilities?
In other words, what will it take for business domains to take full ownership of their business applications, IT, software development and data? What is the right model to organise their operations?
DevOps way of organising work
DevOps is an established way to improve software development responsiveness, time-to-market and quality. The concept is based on the profound realisation that operations (or production) cannot be detached from development, and vice versa. It is the realisation that understanding customer needs builds on a close feedback loop between development and deployment, including continuous delivery of new software releases.
“DataOps” represents the application of those modern software development practices to data. Furthermore, data products for Data Mesh need software too. This makes DevOps and DataOps indistinguishable. In this article, the term DevOps covers all aspects of software- and data-related work.
Let’s take one more step by defining the DevOps operating model to mean a tight-knit cross-disciplinary team consisting of subject matter experts, software developers, data product developers, IT/data engineers, platform engineers, data scientists, product managers, and so on. This is where digital innovation takes place.
The DevOps operating model builds on a tight-knit cross-disciplinary team. This is where digital innovation takes place.
To ensure shared semantic understanding and tight-knit operation, each business domain has a DevOps team of its own. Sometimes digital innovation requires participation from elsewhere, e.g. to come up with new analytics/AI use cases.
DevOps methods and tools consist of an extensive software and data development stack covering areas from development to testing and from delivery to maintenance. Infrastructure as Code is a way to capitalise on a cloud-based development framework by ensuring that development, testing and production environments are identical. Application development is ideally based on a microservices architecture. Containers serve a dual purpose: microservices deployment as well as code portability and reuse for applications and data products alike.
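As a tool-agnostic illustration of the Infrastructure as Code idea (hypothetical structure and names, not tied to any specific IaC product), the point is that every environment is generated from the same declarative definition, so development, testing and production differ only in explicit parameters:

```python
# Minimal, tool-agnostic illustration of the Infrastructure-as-Code idea:
# one declarative definition, three environments generated from it.
# Names and structure are hypothetical, for illustration only.
BASE_STACK = {
    "runtime": "container",          # everything ships as containers
    "services": ["orders-api", "orders-db", "orders-events"],
    "region": "europe-north",
}

ENVIRONMENT_OVERRIDES = {
    "dev":  {"replicas": 1, "instance_size": "small"},
    "test": {"replicas": 2, "instance_size": "small"},
    "prod": {"replicas": 6, "instance_size": "large"},
}


def render_environment(name: str) -> dict:
    """Produce a full environment spec: identical base, explicit overrides only."""
    return {**BASE_STACK, **ENVIRONMENT_OVERRIDES[name], "environment": name}


if __name__ == "__main__":
    for env in ("dev", "test", "prod"):
        print(render_environment(env))
```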
The criteria applied include things like workflow automation, continuous delivery, short time-to-market and quality assurance. Operational overhead and cognitive load on developers are to be minimised thru extensive automation and state-of-the-art computing and data platforms.
One might think that DevOps methods and tools are of interest only to developers and engineers. Not so. They are critically important for a business domain to take full ownership of its digital assets and capabilities while minimising the investments needed – not to mention their central role as enablers of digital innovation itself.
DevOps methods and tools enable business domain ownership while minimising overheads
A distributed operating model implies that everything takes place within business domains. That is, no digital capabilities are located outside domains. Well, yes and no. The hub-and-spokes way of organising work is still applicable. A strong centralised hub is often needed initially to get the ball rolling in business domains. Eventually, most of the centralised resources and capabilities are to be allocated to business domains. But there are exceptions. For example, the Data Mesh platform team remains a centralised entity for obvious synergy reasons.
Network Effect makes data different
Contrary to most digital assets, making data products interwork is not primarily about system integration. Not even about data connectivity. Rather, the ultimate goal is value explosion thru the Network Effect.
Network Effect leads to data-driven value explosion
In reality, a single analytics/AI use case can be based on several networked and chained data products, but Metcalfe’s Law provides a good approximation.
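As a back-of-the-envelope sketch of that approximation (illustrative numbers only): if each of n data products serves only its own use case, value grows linearly; once they can be freely combined, the number of potential pairwise combinations grows as n(n-1)/2, i.e. roughly with the square of n:

```python
# Back-of-the-envelope illustration of Metcalfe's Law as an approximation
# of the data product network effect. Numbers are illustrative only.

def standalone_value(n_products: int, value_per_product: float = 1.0) -> float:
    """Value if each data product only serves its own, isolated use case."""
    return n_products * value_per_product


def networked_value(n_products: int, value_per_link: float = 1.0) -> float:
    """Metcalfe-style approximation: value grows with the number of
    potential pairwise combinations of data products, n * (n - 1) / 2."""
    return n_products * (n_products - 1) / 2 * value_per_link


if __name__ == "__main__":
    for n in (2, 5, 10, 20):
        print(f"{n:>2} products: standalone {standalone_value(n):>5.0f}, "
              f"networked {networked_value(n):>5.0f}")
```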
Data product innovation and design need to target data reuse, leading to the network effect. In addition to designers, this is something for Data Product Management to look after, as the impact on value creation is direct. Something has gone wrong if a newly launched data product can only support a single analytics/AI use case. The Agile Lab article explores modelling data products and gives valuable guidelines.
The first thing to note about data products in Data Mesh is that they are not mere datasets. Rather, they have an architecture of their own, consisting of input and output ports and, most of all, software code doing data transformation within the data product itself. These are the data product characteristics that ultimately enable the network effect and value explosion.
Data product architecture and functional characteristics enable network effect
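As a minimal sketch (hypothetical structure and names, not a reference implementation of any particular Data Mesh platform), a data product bundles its input ports, its output ports and the transformation code in between:

```python
# Minimal sketch of a data product as more than a dataset: input ports,
# output ports and transformation code bundled together.
# Structure and names are hypothetical, for illustration only.
from typing import Callable, Dict, List


class DataProduct:
    def __init__(self, name: str,
                 input_ports: List[str],
                 transform: Callable[[Dict[str, list]], list],
                 output_ports: List[str]):
        self.name = name
        self.input_ports = input_ports      # upstream systems or other data products
        self.transform = transform          # code living inside the product
        self.output_ports = output_ports    # datasets served to consumers

    def run(self, inputs: Dict[str, list]) -> Dict[str, list]:
        """Apply the product's own transformation to data read from its input ports."""
        result = self.transform(inputs)
        return {port: result for port in self.output_ports}


# Example: a source-aligned data product exposing cleaned order events.
orders_product = DataProduct(
    name="orders",
    input_ports=["erp.order_events"],
    transform=lambda inputs: [e for e in inputs["erp.order_events"] if e.get("valid")],
    output_ports=["orders.cleaned_events"],
)

print(orders_product.run({"erp.order_events": [{"id": 1, "valid": True},
                                               {"id": 2, "valid": False}]}))
```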
Taking Data Mesh beyond organisational boundaries enables direct monetisation of data assets.
Data product development takes place in the environment defined by the DevOps operating model and DevOps methods and tools – side by side with software-intensive application and product development.
TINA: Modular business applications running on cloud
With regard to business applications in the context of a distributed operating model, There Is No Alternative, really. The monolithic single-vendor ERP has become a non-starter.
From the article Architecting digital capabilities: ”Postmodern ERP brings shift from single vendor application megasuite towards loosely coupled enterprise applications, now increasingly delivered as SaaS. At the same time monolithic transforms to distributed and federated – from single vendor control to ecosystem play. Postmodern ERP signifies move to modularity, distribution and federation.”
Rethinking business applications in that way enables flexible mapping between “IT for business process” and business domains. Hence, this kind of flexibility and modularity becomes an essential requirement for business system vendors.
The need goes beyond business domain ownership and control of digital assets. It is more concrete than that. The need is functional. As discussed in the modelling data products article, a “Source-aligned Data Product is ingesting data from an operational system”. Yes, business applications deal with data too. We sometimes call it transactional data to differentiate it from analytical data. This distinction worked fine when analytics was dominated by BI use cases. No more.
With analytics and AI use cases penetrating all aspects of value creation, it is becoming increasingly difficult to keep the transactional and analytical data “planes” separate. In other words, many operational and business-critical processes rely increasingly on analytics. For example, Machine Learning assisted customer interaction and industrial process optimisation are becoming pretty basic stuff.
The conclusion? Operating model design needs to support data asset utilisation independently of where the data happens to originate or be stored. The merging of transactional and analytical planes makes the hybrid model unfeasible: distributing analytical data while keeping business applications centralised. TINA: modular and distributed business applications running on cloud – and taking those applications to business domains.
Many business-critical transactional processes are increasingly dependent on analytics. The planes are merging. The merge is best managed within business domains.
IT requirements
For business domains to capitalise on their digital assets with maximum effectiveness, they need strong support from IT. Cloud for computing and storage alike is foundational for many reasons: for the architectural flexibility to allow distribution in the first place, but also to facilitate modern software development practices.
Furthermore, a self-serve platform with Infrastructure, Product and Mesh planes is an essential enabler for Data Mesh, as described in the earlier article: “Business domains taking full ownership of data products leads to significant increase in workload, skills required and operational overhead. Data mesh platform is in central role in alleviating these challenges by hiding complexities and reducing cognitive load. The key objective is to make domain teams autonomous: being able to fully take charge of data products over their entire life cycle without need for outside support.”
For all legacy IT systems involved, there’s the multitenancy requirement: digital assets and capabilities may be physically centralised, but independent ownership by each business domain must be supported. If not, the IT system is not aligned with the distributed operating model and will need to go.
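As a minimal illustration of the requirement (hypothetical names and data), every asset in a shared, centralised system carries exactly one owning business domain, and change rights are resolved per domain:

```python
# Minimal illustration of the multitenancy requirement: assets may live in a
# shared system, but each one has exactly one owning business domain.
# Names and data are hypothetical, for illustration only.

SHARED_SYSTEM_ASSETS = {
    "customer_360_view":    {"owner_domain": "customer-care"},
    "order_pipeline_data":  {"owner_domain": "sales"},
    "production_telemetry": {"owner_domain": "production"},
}


def can_modify(domain: str, asset: str) -> bool:
    """Only the owning domain may change an asset, even in a centralised system."""
    return SHARED_SYSTEM_ASSETS[asset]["owner_domain"] == domain


assert can_modify("sales", "order_pipeline_data")
assert not can_modify("sales", "customer_360_view")
```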
Minimising architectural entropy is essential. Due to sunk costs, legacy IT systems may need to be tolerated for a while. But evolution towards the target architecture needs to be actively managed. Temporary architectural arrangements with centralised elements need to be killed off eventually; their end-of-life ramp-down must be taken care of.
Outsourcing options
Creating digital capabilities in each business domain is a major undertaking. The effort needed is significant in terms of ramping up and establishing the capabilities. Hence it makes sense to selectively outsource some parts of the total effort.
For example, parts of the operating model definition and deployment can be purchased as a build-up service. Correspondingly, a portion of the software and data product development capabilities can be outsourced.
By default, all essential core capabilities will eventually be needed in-house. However, outsourcing may be used as an intermediate gap filler, especially with regard to software and data product development capabilities.
Lightweight option
The operating model for Technology at business core has a lightweight option where a major part of the domain-specific investments is not needed. This option is based on engineering, development and platform outsourcing as a permanent arrangement.
However, there’s a set of prerequisites for the lightweight option to work:
Lightweight option builds on Bounded Context, Concept Model and strong Product Management
Build-up orchestration
Technology at business core capability build-up calls for holistic, systematic and disciplined orchestration. Holistic thru covering all aspects of change management, operating model design and deployment, software and data engineering, business applications and IT systems. Systematic thru keeping track of all build-up areas. Disciplined thru applying change management best practices.
Orchestration is about planning, communication, management and leadership thru multiple build-up phases. It covers in-house and outsourced activities alike. Orchestration focuses on progress and outcome and is thus agnostic to outsourcing decisions. Orchestration is not about micromanagement but about making sure that the resulting whole stays coherent and consistent.
Build-up orchestration ensures coherent and consistent system solution
Build-up assessment
Build-up assessment complements orchestration and provides the ultimate yardstick for the overall build-up program. Assessment deals with two areas: progress and productivity.
Assessing build-up progress: What has been achieved. What part of the targeted operating model is up and running. What are the missing pieces. What is the current performance level. What are the perceived challenges and risks going forward.
Assessing productivity gains: How is the operating model with distributed capabilities actually bringing productivity gains. For example, what is the dynamic volume of networked data product utilisation by various analytics/AI use cases across the company – be they about increased customer value, improved customer experience or enhanced operational efficiency.
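As a minimal sketch of one such yardstick (hypothetical metric definition and example data), data product reuse across analytics/AI use cases can be tracked directly:

```python
# Illustrative sketch of a build-up assessment metric: reuse of data products
# across analytics/AI use cases. Data and metric definition are hypothetical.
from collections import Counter

# Which data products each use case consumes (example data only).
USE_CASE_INPUTS = {
    "churn_prediction":     ["customer_profile", "support_tickets"],
    "next_best_offer":      ["customer_profile", "order_history"],
    "process_optimisation": ["production_telemetry"],
    "demand_forecast":      ["order_history", "production_telemetry"],
}


def data_product_reuse(use_case_inputs: dict) -> Counter:
    """Count how many use cases consume each data product."""
    return Counter(p for products in use_case_inputs.values() for p in products)


if __name__ == "__main__":
    reuse = data_product_reuse(USE_CASE_INPUTS)
    avg_reuse = sum(reuse.values()) / len(reuse)
    print(reuse)                      # per-product utilisation
    print(f"average reuse: {avg_reuse:.1f} use cases per data product")
```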
Loop closes: Build-up assessment verifies productivity gains resulting from investments in Technology at business core