Architecting Digital Distribution
Image by Antti Pikkusaari

Digital Distribution is the path to data-driven productivity at scale. Architecting Digital Distribution reveals what we need from digital capabilities.

With the company’s future competitiveness at stake, architecture cannot be reduced to a mere technology matter. Instead, architecture should be used to identify requirements for the operating model, enterprise applications, software and data engineering, and platforms of various types. Understanding these requirements is a prerequisite for the ability to lead the change.

The merits of decentralization are extensive, spanning from scalability to quality. However, there’s no free lunch in Digital Distribution: the expectations placed on business domains are equally extensive. Striking the optimum balance between centralized and decentralized capabilities becomes critically important.

Value creation and digital innovation within a business domain build on understanding the relationship between analytics/ML models and use cases, data products and enterprise applications – serving business processes, digital services and connected products. This is a handful!

This article provides tools to build the necessary capabilities and to lead the change. It does so with two complementary architectural perspectives: Capabilities and Functionality. The former is about centralized and decentralized digital capabilities and their relationship. The latter is about pinpointing what embedding analytics and AI in all aspects of value creation means in practice.

The article concludes with an itemized draft list of requirements for the distributed operating model, enterprise applications, software and data engineering, and platforms of all types. The purpose of the list is to catalyze and guide further exploration.

Recap: Towards data-driven productivity

The article series Towards data-driven productivity kicked off with findings that can now be used to guide the work on architecture:

  • To stay competitive in the digital age is to embed data, analytics and AI in all aspects of value creation
  • The sextet of scalability, speed, agility, quality, innovation and managed complexity as overall design criteria leads to distribution as the overarching design principle
  • Digital Distribution is the last phase in the evolution that started from software development with Domain-Driven Design and expanded to data management with Data Mesh – the context changes but the key principles and underlying drivers stay the same
  • The business domain, with semantic understanding through bounded context, becomes the most important building block in Digital Distribution
  • A large number of data products utilised by an even larger number of analytics and AI use cases is the pinnacle of data-driven productivity
  • Operational and analytical planes are merging, with analytics and AI getting embedded in business processes, services and products

Purpose

The ultimate purpose of Digital Distribution architecture is to make the vision of data-driven productivity real. Diving one layer deeper, that translates into the need for structures, practices and tools that allow us to embed data, analytics and AI in all aspects of value creation; the need to understand what the merging of operational and analytical planes entails; and the need to understand the chain of data products, analytics/AI use cases and business operations.

Once we gain an understanding of those, it becomes possible to identify requirements for the operating model, enterprise applications, engineering, and platforms. The operating model in terms of organizational structure, processes and governance. Enterprise applications with regards to enhancements through analytical data and algorithms. Engineering in terms of how it enables application, data product and machine learning model development with state-of-the-art practices and methods. And platforms with regards to their crucial role in making life in business domains so much easier through automation, reduced cognitive load and lower operational overheads.

Digital Distribution architecture helps to identify requirements for the operating model, enterprise applications, engineering and platforms

Architecture is the necessary intermediate step between vision and implementation. Obviously, it is a two-way street, with exploration of all the rest feeding back into architectural design. Learning through incremental discovery, feedback and iteration is what this article series is all about.

Role

Architecture plays a pivotal role in planning, communicating, leading and assessing the digital capabilities build-up. It belongs firmly in the strategic change management toolbox.

However, set ideals and situational reality do not always meet. The gap may be significant, and closing it may take a lot of time, money, patience and discipline. This leads to yet another role: architecture used as a benchmark – an agreed ideal with shared understanding of shortcomings and remaining effort.

Distribution as paradigm shift

For data management, Data Mesh signifies a true paradigm shift. The change compared to the legacy way of doing data management is vast. Data Mesh book review and beyond discusses this in depth – and then comes to the fundamental conclusion: “Data Mesh is the only game in town for companies aiming at embedding analytics and AI in all aspects of their customer value creation”.

However, the very same claims and conclusions apply to digital capabilities in general. Removal of centralised bottlenecks is crucial in realizing the vision of ubiquitous AI.

The merits of distribution are easy to point out. Scalability is the most evident, but the other sextet members soon follow. With business domains taking ownership of digital capabilities, speed, agility, quality, innovation and managed complexity are all significant benefits beyond scale.

However, there’s no such thing as a free lunch. Digital Distribution as a paradigm shift is not only about benefits but also about requirements: capabilities that business domains need to acquire and develop themselves, combined with centralized capabilities needed to help business domains in their task.

Basic rules

In distribution design, the rule of thumb is simple: Allocate everything to business domains – unless there are good reasons not to do so. The same rule applies both as a generic principle and as situational guidance. The former is about long-term principles, while the latter is about temporary adaptation to practical limitations.

Allocate everything to business domains – unless there are good reasons not to do so

The latter is also about compromises that sometimes cannot be avoided – while observing the words of warning from the Data Mesh book review: “During transitory period architecture may not be perfect. The important thing is to maintain clarity on what is transitory vs. target – and apply discipline in moving decisively from the former towards the latter. The author has a wonderful concept for this: architectural entropy that is to be reduced with each evolutionary step.”

Don’t accept compromises for too long – push for architectural entropy reduction

What to decentralize to business domains

The business domain, with shared semantic understanding enabled by bounded context, is the starting point in deciding which digital capabilities are needed there. That is, what the domain needs for sufficient autonomy and digital innovation.

In the case of data management, the ownership decision is relatively straightforward: To maintain semantic understanding, each business domain takes full ownership of its data and transforms it into data products to be shared across the whole organization and beyond. No data gets dumped into a centralized repository that would lead to a data swamp with lost semantic understanding and ownership.

Not allowed: Dumping data into centralized repository leading to lost semantic understanding
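
To make that ownership concrete, here is a minimal sketch of what a domain-owned data product contract could look like. Python is used for illustration; the field names and structure are assumptions, not a standard.

```python
from dataclasses import dataclass, field

@dataclass
class DataProductContract:
    """Minimal, illustrative contract for a domain-owned data product."""
    name: str                  # e.g. "campaign-performance"
    owner_domain: str          # the business domain accountable for it
    description: str           # semantics, in the domain's own language
    output_schema: dict        # column name -> type: the published interface
    freshness_sla_hours: int   # how stale the data is allowed to get
    consumers: list = field(default_factory=list)  # known users across the mesh

# Marketing owns its data and publishes it as a product for the whole mesh:
campaign_product = DataProductContract(
    name="campaign-performance",
    owner_domain="marketing",
    description="Daily performance per campaign, deduplicated and validated",
    output_schema={"campaign_id": "str", "date": "date",
                   "spend": "float", "conversions": "int"},
    freshness_sla_hours=24,
)
```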

In the case of enterprise applications, the logic is similar: taking ownership of the applications needed to make the business domain’s own processes, services and products better – enhanced through analytics models and data products.

After data products and enterprise applications come the capabilities needed to create machine learning and other analytical models. The synergies are two-fold. First, analytics use case and model design benefits from the proximity of data, applications and semantic understanding of business domain specific concepts. For example, Marketing owns marketing data but also understands the intricacies of marketing operations. Second, Marketing runs and maintains the business applications used to boost those operations – and now these applications are to be augmented with analytics and AI.

The triangle of applications, data products and analytical models is the key. When combined with shared semantic understanding, the business domain turns into a digital innovation engine.

Digital innovation emerges from the triangle of enterprise applications, data products and analytical models in close proximity within business domain bounded context

So it would appear that a business domain would benefit greatly from its own data science capabilities. In practice, this may be a tall order given the scarcity of data science know-how. Some kind of hub-and-spokes organizational setup may be the necessary compromise within the overall distributed operating model.

Next on the list are engineering capabilities: the practices, tools and skills needed to create, develop and maintain all of the above. Software, data and machine learning engineers equipped with the necessary technology stack. Business domain autonomy builds on engineering capabilities – something that cannot be fully outsourced.

Finally, all this is to be organized as tight-knit cross-disciplinary DevOps teams consisting of subject matter experts, developers, scientists and product managers, glued together by a shared semantic understanding of the business domain and its needs. This is where knowledge, activities and creation come together. This is where digital innovation takes place.

DevOps teams are glued together with shared semantic understanding of business domain concepts and needs

What to develop as centralized capability

For business domains to succeed in their data-driven mission, they need a lot of support. In terms of centralized capabilities, that comes in two forms: platforms and platform teams.

A platform team is a Center of Excellence with skilled professionals, like the data science team mentioned above. A platform team is a matter of practicality when scale does not justify distribution to business domains. Over time, with increased scale, a platform team may very well cease to exist, with the know-how moving to business domains.

Digital platforms come in many forms, from computing to storage and from development aid to runtime operations. Cloud computing modularity would basically allow full distribution to business domains with nothing remaining centralized. However, very often there’s a platform team right next to the platform – for example, a data mesh platform accompanied by a platform team.

Platforms and platform teams have a critically important role in enabling business domains to perform well. Their contribution to the reduction of operational overhead and cognitive load is significant. They bring operational efficiencies through automation. Platforms are an efficient way to standardize processes and operations and thus help onboard new business domains.

Platforms’ contribution to reduction of operational overhead and cognitive load is significant

From the digital assets and capabilities perspective, support functions like HR are treated as business domains rather than centralized entities. There is potential for confusion, as from other business domains’ perspective support functions do appear as centralized capabilities. For now, this conflict is disregarded: support functions are treated like any other business domain in the Digital Distribution architecture.

What about Data Fabric?

Data Fabric addresses many of the same problems that data mesh addresses: scalability, speed, quality and discoverability. The way it addresses them, however, is very different from the distribution-based data mesh. Data fabric leverages elaborate mechanisms like active metadata, knowledge graphs, machine learning and semantics in order to improve data integration.

In many contexts, including enterprise application evolution, data fabric is presented as the default way of doing data management. Because of this, the questions about data fabric have to be addressed here and now – specifically, “Why?” and “What’s wrong with that?”.

The Why part seems clear. Gartner goes to great lengths in downplaying the differences. Apart from defending its favourite data management model, the motivation is benign and is captured in a single sentence: “A data fabric begins as a metadata observer and analysis solution and is specifically not active in replacing or altering existing solutions.”

What’s wrong with that? Put shortly, data fabric is a technical overlay solution to a problem that is beyond technical by nature. Data fabric operates solely within data management, trying to solve a problem that is much bigger than that. By omitting architecture and operating model as key solution elements, data fabric ends up bringing a knife to a gunfight.

Data fabric is a technical overlay solution to a problem that is beyond technical in nature

By itself, incremental would be fine. But a solution falling short of addressing the problem is not. The biggest data fabric shortcoming relates to ownership and shared semantic understanding through bounded context. By sticking to the centralized data management paradigm, data fabric does not give us business domains that would emerge as digital innovation and productivity engines. It does not deliver on the vision of embedding analytics and AI in all aspects of value creation.

Analytics boosted enterprise applications

Enterprise applications have established themselves as a cornerstone of business operations. Traditionally, they have had the clear purpose of making “their business process” more efficient. Today, enterprise applications are much more than that.

Understanding the nature of enterprise applications calls for a quick recap of ERP evolution. For a long time, ERP was a synonym for a massively complex piece of software that took care of every operational aspect within a business. The selected ERP vendor was happy to sell multiple modules within a single monolithic architecture.

The introduction of Postmodern ERP broke down the monolith with the emergence of loosely coupled enterprise applications, often delivered as SaaS. This marked a decisive move towards distributed architecture, from single-vendor control to ecosystem play.

Postmodern ERP marked decisive move from centralized monolith towards distributed architecture

Composable ERP as conceptual cornerstone

However, to really get enterprise applications off the ground with analytics and AI, we need more: Composable ERP. Introduced as a concept in 2020, composability is the last missing ingredient. Composable ERP takes things beyond business processes and operational efficiency: it covers all applications, including the ones linked with digital services and connected products.

Furthermore, the focus on composability brings flexibility and agility to application design and implementation – just what we need to embed analytics and AI. For the first time ever, we have the option to source and customize any application for higher customer value and better customer experience – in addition to operational efficiency.

Composability brings flexibility to application implementation needed to embed analytics and AI

Three types of enterprise applications

Enterprise applications come in three types. Operational applications serve business-critical uninterrupted operations related to production and delivery. Administrative applications are business support systems that are essential for long-term business continuity but not critical from an uninterrupted-operations perspective. Finally, there are applications for digital services and connected products.

Not surprisingly, gains in operational efficiency map mostly to operational applications and to a lesser degree to administrative applications. Correspondingly, advances in customer value and experience build on digital services and connected products and to a lesser degree on operational applications. Analytics and AI boost all of them.

Analytics and AI use cases

Discussion on analytics and AI use cases is too often technology-driven rather than business-oriented. The emphasis tends to be on What it can do rather than on What could we use it for.

So, analytics and AI can describe, diagnose, predict and prescribe. Basic analytics uses statistical analysis and focuses on describing and diagnosing past phenomena. When the machine learning aspect gets added, the name changes to AI and the capabilities grow to cover prediction of future events. Building on this, it may be tempting to identify and categorize use cases from the What it can do angle.
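
To make the describe-vs-predict distinction concrete, here is a minimal Python sketch: descriptive analytics summarizes what has happened with plain statistics, while a predictive model fitted to the same history estimates what comes next. The data and model choice are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative history: monthly sales for the past six months
months = np.array([[1], [2], [3], [4], [5], [6]])
sales = np.array([100.0, 104.0, 110.0, 113.0, 121.0, 125.0])

# Descriptive: summarize what has happened
print(f"mean={sales.mean():.1f}, std={sales.std():.1f}")

# Predictive: fit a simple model and estimate next month
model = LinearRegression().fit(months, sales)
print(f"forecast for month 7: {model.predict([[7]])[0]:.1f}")
```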

However, in this context the better alternative is the business-oriented angle: identifying and innovating use cases for business functions and business domains. The guiding question transforms into: How can we enhance our Marketing, Sales, Production, Logistics, Customer Service or HR with analytics and AI? What are the use cases with the biggest impact? The same questions need to be asked for digital services and connected products.

The McKinsey global survey The state of AI in 2022 takes the business perspective on use cases. According to the survey, the TOP3 most commonly adopted AI use cases by business function are Service operations optimization, Creation of new AI-based products, and Customer service analytics.

However, the biggest survey discovery may not even be the use cases themselves. The most important finding seems to be this: “Software engineers emerged as the AI role that survey responses show organizations hired most often in the past year, more often than data engineers and AI data scientists. This is another clear sign that many organizations have largely shifted from experimenting with AI to actively embedding it in enterprise applications.”

Based on that, the slogan used to kick off the article series Towards data-driven productivity is worth repeating: The era of analytics point-solutions is over. Welcome to the era of ubiquitous AI enabled by data products in a mesh.

Aligning that with the signal detected by the McKinsey survey would result in something along these lines:

The era of analytics and AI experimentation is over. Welcome to the era of ubiquitous AI embedded in all enterprise applications.

The McKinsey survey’s merits are significant, spanning from business-driven use cases to outlining the capabilities needed, including human capital. What it lacks is an architectural perspective with a structured way to identify requirements for capability elements like operating model, enterprise applications, engineering and platforms.

Guidance on embedding AI in applications is a good start. But exactly how to do that, and in what kind of architectural and operating model context, makes all the difference when scale, speed, agility, quality, innovation and managed complexity become the essential design criteria.

Platform as mother’s little helper

The assignment to business domains is clear cut:

  • take a firm grip of your data assets and turn them into well-managed data products
  • innovate analytics and AI use cases that maximally leverage your productivity
  • create versatile analytics models for descriptive, diagnostic, predictive and prescriptive purposes
  • make sure to use all available data products in the mesh for the best possible use cases
  • deploy use cases in business processes, digital services and connected products by integrating analytics models and data products with enterprise applications
  • make sure to have access to the necessary engineering capabilities for all of this

For all of the above, there’s a common denominator: platforms.

Assignment to business domains is challenging – platforms help to meet the challenge

Platforms come in many shapes and forms. The ones discussed in this article series include:

  • Data mesh platform – Platform that facilitates efficient data mesh operations. Comes with three distinct feature groups: data infrastructure, product experience and mesh experience planes.
  • Data storage platform – General-purpose storage for structured, unstructured and streaming data, with Big Data and stream processing capability.
  • Integration platform – Multipurpose infrastructure, methods and tools for various types of system integration. May be provided as integration platform-as-a-service (iPaaS) or as an operating system (OS). Integration of enterprise applications and data sources for reliable and secure connectivity. Data integration with data collection, processing and storing (extract, transform, load – see the sketch after this list). Data product and analytics model portability and reuse across organizational and technological boundaries.
  • Enterprise application platform – Extensive set of capabilities used for building, deploying, managing and running applications of all types. Covers infrastructure, methods and tools for engineering to create, test and deploy enterprise-grade applications, with continuously improving analytics and AI capabilities including Machine Learning as a Service (MLaaS). All major cloud computing providers are players in this space.
  • Application composition platform – Platform type closely associated with the Composable ERP concept. Provides a low-code option for easy and fast custom application creation with limited engineering expertise.
  • DevOps, DataOps and MLOps platforms – or XOps platform in short. Other infrastructure, methods and tools that complement the platforms listed above in facilitating state-of-the-art engineering practices.
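
To illustrate the extract, transform, load pattern mentioned under the integration platform above, here is a deliberately minimal Python sketch. The source file, transformation rules and target database are hypothetical placeholders; a real integration platform would add scheduling, monitoring and error handling.

```python
import csv
import sqlite3

def extract(path: str) -> list[dict]:
    """Extract: read raw rows from a source system export (here, a CSV file)."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict]) -> list[tuple]:
    """Transform: clean and normalize, keeping only valid records."""
    return [(r["order_id"], float(r["amount"]))
            for r in rows if r.get("amount")]

def load(rows: list[tuple], db_path: str) -> None:
    """Load: write into the target store (here, a local SQLite database)."""
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL)")
        conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

load(transform(extract("orders.csv")), "warehouse.db")
```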

Platform as a capability is not only about technology but very much about people and skills too. While business domains have to be in control of driving digital capabilities, that does not mean they would not need help from centralized platform teams.

In this context, middleware software delivered as OS or a similar capability is considered a platform too – for example, to facilitate application and data integration. The actual computing may then take place on a standard cloud platform with the middleware component present.

Digital Distribution Capability Architecture

Overview

Architecting digital assets and capabilities revolves around business domain ownership, responsibility and autonomy. Value creation builds on business processes, digital services and connected products. Aiming at higher customer value, better customer experience and enhanced operational efficiency involves enterprise applications of all types. That is, to embed analytics and AI in value creation is to integrate data products and analytics models with the enterprise applications serving those processes, services and products. Digital Distribution architecture outlines the capabilities business domains need to do just that.

To do this in a scalable manner enabling speed, agility, quality, innovation and managed complexity, we have chosen distribution as the overarching design principle. This leads to the rule of thumb: decentralize everything you can, with business domains taking ownership. New questions emerge: How to enable that in an optimum manner? What are the capabilities within business domains vs. the capabilities needed to support them in their digital mission? The capability architecture is an attempt to provide the answers.

Picture: Digital Distribution Capability Architecture, version 1.1

The capability architecture has three distinct layers: top, middle and bottom. Each has a specific role in embedding analytics and AI in all aspects of value creation.

Top: Digital innovation within business domains

Digital innovation within a business domain builds on understanding the relationship between data products, analytics models, analytics and AI use cases, and enterprise applications – serving business processes, digital services and connected products. This is a handful – even when complexity and cognitive load are significantly reduced through bounded context. Doing this in the centralized paradigm would be to attempt it without that context simplification!

Digital innovation is hard – even within bounded context

In terms of application types, the goal of digital innovation varies a lot. For operational applications it is mostly about enhancements in operational efficiency while securing business-critical operations. Services- and products-related applications benefit from differentiation through higher customer value and better customer experience. Administrative applications fall in between: differentiation is not the primary focus, but efficiency and internal customer experience are important. For them, analytics enhancements would ideally be available as an off-the-shelf offering.

Data asset ownership is about turning data into data products with life cycle management. The same basic principle applies to analytics models – they too become digital assets owned by the business domain.

Data products are configured in a mesh: they are reused extensively across the whole company and beyond. The concept of reuse applies to analytics models too: each model can, in principle, be ported to another context serving another use case. In practice, there are limitations that make analytics models different from data products in this respect. MLaaS creates an option to completely outsource some of the analytics models.

Scale up with data product and analytics model reuse

Analytics models can be of any type – descriptive, diagnostic, predictive or prescriptive – with or without a machine learning component. The common characteristic of all of them: an algorithm that works on data and runs as software code in an execution environment.
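
One way to act on that common characteristic is to give every model, whatever its type, the same thin software interface so it can be versioned and deployed like any other digital asset. The sketch below is an assumption about what such an interface could look like, not an established standard.

```python
from abc import ABC, abstractmethod
from typing import Any

class AnalyticsModel(ABC):
    """Common shape for descriptive, diagnostic, predictive or prescriptive models."""

    version: str = "0.1.0"  # models are versioned digital assets, like data products

    @abstractmethod
    def run(self, data: Any) -> Any:
        """Apply the algorithm to input data and return a result."""

class SeasonalityDiagnostic(AnalyticsModel):
    """Illustrative diagnostic model: flags months that deviate from the mean."""

    def run(self, data: list[float]) -> list[int]:
        mean = sum(data) / len(data)
        return [i for i, v in enumerate(data) if abs(v - mean) > 0.2 * mean]

print(SeasonalityDiagnostic().run([100, 104, 180, 113, 121, 125]))  # -> [2]
```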

Middle: Organizing human capital for collaboration and innovation

The middle part is basically all about human capital and how to organize and manage it. While the default is to allocate everything to business domains, there are significant exceptions to the rule. Platform teams can be many things, from a team focused on a specific platform to a centralized data science team. Overall, the balance between centralization and decentralization depends on many situational factors that need to be taken into account in operating model design and implementation.

XOps is short for DevOps, DataOps and MLOps. Each comes with a unique emphasis, while they all share the most important feature: work organized as tight-knit cross-disciplinary teams that form the beating heart of each business domain. This is where digital innovation takes place.

Because so much is at stake, how to organize, facilitate and manage XOps and platform teams becomes the single most important area in the overall digital capability build-up effort. Despite all the surrounding technology, it boils down to people being able to collaborate and innovate to make the change.

Optimizing human capital as operating model is the most important area in the digital capability build-up

Bottom: Platforms to make life in business domains so much easier

Platforms play a crucial role in enabling business domains to perform well in terms of operational excellence, value creation and digital innovation. In relation to platforms, business domains’ needs are very similar. Therefore, collecting and managing requirements for platforms stays a centralized activity. Full business domain autonomy in this space would lead to suboptimized point-solutions at best and very expensive chaos at worst. However, the objective of each business domain knowing its needs remains.

Business domains must know their platform requirements

Despite what the visualization might imply, platforms are not fully centralized capabilities. Rather, in the context of a distributed operating model, platforms become hybrids: centralized design and management combined with multi-tenancy features that enable the distributed operating model. These features include things like a business domain specific instance of the platform with its own configuration and customization, data isolation to enable full data asset ownership, fine-grained access control, and billing and usage tracking.

Platforms must have multi-tenancy features that enable distributed operating model
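
As a sketch of what such multi-tenancy features could look like at the configuration level – the names and structure below are illustrative assumptions, not a product feature list:

```python
from dataclasses import dataclass

@dataclass
class DomainTenant:
    """Illustrative per-domain tenant configuration on a shared platform."""
    domain: str               # business domain owning this tenant
    storage_prefix: str       # data isolation: the domain writes only under its prefix
    allowed_roles: list       # fine-grained access control within the domain
    cost_center: str          # billing and usage tracking
    config_overrides: dict    # domain-specific configuration and customization

marketing = DomainTenant(
    domain="marketing",
    storage_prefix="tenants/marketing/",
    allowed_roles=["data-engineer", "data-scientist", "analyst"],
    cost_center="CC-1042",
    config_overrides={"retention_days": 365},
)
```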

Digital Distribution Functional Architecture

Overview

Going beyond mere capabilities is to study how things actually work – to outline the functionality of a system. In this case, the focus is on runtime operation rather than on development or maintenance activities. This is the best way to demonstrate the merging of operational and analytical planes discussed in Towards data-driven productivity. This is where the rubber meets the road with regards to deploying analytics and AI for higher customer value, better customer experience and enhanced operational efficiency.

Focus on runtime operation demonstrates merging of operational and analytical planes

The selected focus means that capabilities not directly and explicitly contributing to runtime operation need not be drawn out. This is beneficial, as it simplifies the functional architecture and allows attention on the most interesting system features and characteristics. Any real-life implementation would be significantly more complicated, but for the purposes of this article series, this seems to be an adequate granularity level.

Picture: Digital Distribution Functional Architecture, version 1.0

Not visible: processes, services or products

Here, the enterprise application is the multipurpose player. It can be used to boost business processes, digital services and connected products alike – augmented with the analytics and AI use cases discussed above. For now, there’s no need to draw processes, services or products and complicate the picture.

Correspondingly, implementation details are not visible either, but it is safe to assume that in the context of Domain-Driven Design and distribution as the overarching design principle, microservices play a central role. It is equally safe to assume containers and container orchestration are in play as the underlying computing technology that offers an unparalleled mechanism for code reuse and portability, and for overall solution scalability.

With regards to implementation, it’s safe to assume microservices, containers and container orchestration
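
As a minimal sketch of that assumption: an analytics model exposed as a microservice that would typically be packaged into a container image and run under an orchestrator. The framework choice (FastAPI) and the endpoint are illustrative only.

```python
# Minimal model-serving microservice; in practice this file would be packaged
# into a container image and deployed via container orchestration.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    month: int  # illustrative single feature

@app.post("/predict")
def predict(features: Features) -> dict:
    # Placeholder for a real trained model loaded at startup
    forecast = 95.0 + 5.0 * features.month
    return {"forecast": forecast}

# Run with: uvicorn service:app --port 8000  (assuming this file is service.py)
```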

Lead actors

What is visible, however, are all the lead-role actors: applications, analytics models, data products, key runtime platforms, and the APIs needed for interworking and connectivity.

As before, analytics model refers to any algorithmic model with or without a machine learning aspect. Two implementation options are presented here: the analytics model embedded directly into the application itself as a software library, or accessed through an API resulting in a less tight integration. The choice between the two depends on many situational factors, low-latency real-time performance being one.
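
A minimal sketch of the two options from the application’s point of view; the model package, endpoint URL and payload below are hypothetical.

```python
import requests  # for the API-based option

# Option 1: model embedded in the application as a software library.
# Tightest integration, lowest latency; the model ships with the application.
# from churn_model import score  # hypothetical in-process model package
# risk = score(customer_features)

# Option 2: model accessed through an API, a looser integration.
# The model is deployed and versioned independently of the application.
def score_via_api(customer_features: dict) -> float:
    resp = requests.post("https://models.example.com/churn/score",  # hypothetical endpoint
                         json=customer_features, timeout=2.0)
    resp.raise_for_status()
    return resp.json()["risk"]
```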

The data product appears in many different roles, enabled by its API-based input and output ports and by the data transformation performed by its internal software code. First, an enterprise application can access the data product directly as is. Second, the data product can be an aggregate that combines data from elsewhere in the data mesh – in this case, using data products owned by another business domain. Third, the data product can serve an analytics model with high-quality, preprocessed and use-case-tailored data. Finally, the data product may utilize an analytics model itself to do data transformation.
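
The sketch below illustrates those roles with a hypothetical aggregate data product: input ports read from data products owned by other domains, internal code transforms the data, and an output port serves consumers, including analytics models. All names are illustrative.

```python
class CustomerValueDataProduct:
    """Illustrative aggregate data product with input and output ports."""

    def __init__(self, orders_port, marketing_port):
        # Input ports: other data products in the mesh, possibly owned
        # by other business domains, accessed through their output ports.
        self.orders_port = orders_port
        self.marketing_port = marketing_port

    def _transform(self, orders, campaigns):
        # Internal code: combine and reshape the inputs into new information.
        spend = {c["customer_id"]: c["spend"] for c in campaigns}
        return [{"customer_id": o["customer_id"],
                 "revenue": o["revenue"],
                 "acquisition_spend": spend.get(o["customer_id"], 0.0)}
                for o in orders]

    def output_port(self):
        # Output port: the published interface consumed by applications
        # and by analytics models needing use-case-tailored data.
        return self._transform(self.orders_port(), self.marketing_port())
```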

For all elements shown, the enterprise application platform offers the runtime environment. In practice, this would be Azure, AWS, GCP or similar. Behind the scenes, the integration platform plays a significant role in facilitating interworking and connectivity across all software and data components.

Finally, all analytical data used throughout the system would be stored in the data storage platform. Correspondingly, this would be an incarnation of Azure Blob Storage, AWS S3, Google Cloud Storage, Apache Hadoop or similar – with multi-tenancy features as discussed above.
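
As a sketch of how a data product snapshot could land in such a storage platform while respecting the tenant isolation discussed above – the bucket name and key convention are assumptions, with AWS S3 via boto3 as the example backend:

```python
import json
import boto3  # AWS SDK; Azure and GCP offer equivalent clients

s3 = boto3.client("s3")

def publish_output(domain: str, product: str, rows: list) -> None:
    """Write a data product snapshot under its owning domain's prefix."""
    key = f"tenants/{domain}/products/{product}/latest.json"  # isolation by prefix
    s3.put_object(Bucket="company-data-platform",  # hypothetical shared bucket
                  Key=key,
                  Body=json.dumps(rows).encode("utf-8"))

publish_output("marketing", "campaign-performance",
               [{"campaign_id": "c-1", "spend": 1200.0, "conversions": 48}])
```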

Requirements for capability areas

Architectural design with a dualistic approach has significant merits. The combination of capability and functional perspectives reveals requirements that would not surface with a single perspective alone. Now we are ready to point out the most essential requirements for the operating model, enterprise applications, software and data engineering, and platforms.

But before that, let’s do a quick recap of the overall assignment to business domains. The job of centralized capabilities like platforms and platform teams is to help business domains succeed in making this happen.

Assignment to business domains:

  • take a firm grip of your data assets and turn them into well-managed data products
  • innovate analytics and AI use cases that maximally leverage your productivity
  • create versatile analytics models for descriptive, diagnostic, predictive and prescriptive purposes
  • make sure to use all available data products in the mesh for the best possible use cases
  • deploy use cases in business processes, digital services and connected products by integrating analytics models and data products with enterprise applications
  • make sure to have access to the necessary engineering capabilities for all of this
  • utilize centralized human capital and platforms of all types to maximum effect

Basically all requirements can be inferred from this assignment. In the context of the distributed operating model, all capabilities are there to make business domains perform to maximum effect.

All requirements can be inferred from the assignment to business domains

However, our journey is about learning through incremental discovery. Therefore, it is almost a given that additional clarity on requirements will emerge with further exploration of each capability area. When it comes to requirements, this is not the end. This is the starting point.

This is not the end. This is the starting point.

Requirements for the operating model:

  • how to organize data management in a distributed operating model context with business domain ownership and autonomy
  • how to establish data product and analytics model management with lifetime ownership, and maximum value from data with extensive data product and analytics model reuse
  • how to build XOps with the organizational structure, roles, skills, practices and governance for maximum digital innovation and efficiency
  • how to build platform teams with the structure, roles, skills, practices and governance for smooth alignment with business domains
  • how to strike the optimum human capital balance between business domains and platform teams
  • what is the operating model needed to transform the company into a digital value creation machine that hums with internal consistency and coherence

Requirements for enterprise applications:

  • integrating analytics and AI in enterprise applications: what, where, how
  • how to choose enterprise applications to be boosted with analytics and AI use cases for maximum gains in customer value, customer experience and operational efficiency
  • what are the business domain specific needs and how do they vary between domains
  • how to prioritize analytics and AI use cases
  • what are the needs, requirements and limitations for data product and analytics model integration
  • how functional and performance requirements vary as per application type and use
  • how to design application composability for maximum flexibility, agility and speed
  • how to tell where differentiation is needed and when standard is enough

Requirements for engineering:

  • how to bring applications, data products and analytics models together with maximum innovation and minimum cognitive load
  • how to establish modern engineering practices, methods and tools for maximum scalability, speed, agility and quality
  • how to establish microservices capabilities for application, data product and analytics model development and deployment
  • how to automate repetitive tasks for speed, quality and minimum operational overhead
  • how to build continuous delivery capability
  • how to design for functionality and performance as per application and use case specific needs
  • how to maximize code reuse and portability for application software, data products and analytics models alike
  • how to support smooth and effective collaboration between IT, software, data and analytics specialists in relation to the XOps operating model
  • how to support effective dialogue between business and engineering
  • how to share best practices with other business domains and with platform teams
  • how to collaborate with platform teams to source critical skills and to gain maximum support
  • how to maximally capitalize on platform capabilities across all types of platforms

Requirements for platforms:

  • how multi-tenancy features are implemented to enable distributed operating model
  • how platforms support striking optimum balance between distributed and centralized capabilities; for example, data mesh platform as a combination of data product and data mesh experience planes
  • how platforms provide data storage with scale, versatility and flexibility
  • how platforms enable smooth collaboration between business domains and platform teams
  • how platforms help to reduce cognitive load and provide options for outsourcing
  • how platforms enable integration of applications, data and analytics
  • how platforms facilitate XOps with all tools and methods needed by modern engineering
  • how platforms enable code reuse and portability
  • how platforms enable application composability combined with extensive boost from analytics and AI integration and with low-code option
  • how platforms provide runtime environment needed by different types of enterprise applications from standard administrative applications to real-time business critical applications

Future areas of exploration

The article series on data-driven productivity continues with new areas of exploration. Now that architecture has helped us identify requirements for the key capability areas, it is time to investigate how each of them answers the call.

So, perhaps not surprisingly, areas of exploration will include:

  • Digital Distribution operating model
  • analytics boosted enterprise applications
  • engineering for data-driven productivity
  • platforms for data-driven productivity
  • Digital Distribution as strategic option
  • digital strategy
  • digital capability build-up
  • and many more

The overall objective stays: maintain the strategic business perspective while diving deep enough into individual digital capabilities. Combined with that, another key objective is to provide structure and tools for effective change management. That is done by showing the dependencies between capability areas through identifying further requirements and how those requirements are satisfied.
