Data Mesh book review and beyond
Summary
Zhamak Dehghani has designed a game-changing way to do data management. Data Mesh is not a mere idea or suggestion. Instead, in her book Data Mesh – Delivering Data-Driven Value at Scale she not only provides compelling argumentation on why we need to up our game but also describes in detail how to do it.
In this article, “book review” is about observations on Data Mesh the book – focusing on its key messages and points of high importance. Correspondingly, “beyond” is about conclusions and extrapolations on Data Mesh the concept. The beyond part builds on the book’s contents but takes things a bit further.
Highlights
The book is an exceptionally well-designed, complete package. It is coherent, consistent and complete; 330 pages with no slack, no redundancy, no unnecessary repetition. It takes some effort to read and digest, but the prize for the time invested is significant: you not only get to understand what Data Mesh is at a detailed conceptual level, you also come to realise what Data Mesh represents as a paradigm shift and why you should pay attention – why the time to rethink old ways of doing data management has now come.
The author’s long experience shows on every page. However, the amount of detail is not the thing – clarity of thought is. The focus is on concepts rather than technology. This is an important choice. By staying away from physical realisation and instead focusing on the conceptual side of things, the author manages to minimize complexity. So, instead of vendors, data platforms or tools, she describes the essential elements of Data Mesh and their detailed characteristics. Trust me, there’s plenty of detail to go around.
The book is much more than a collection of ideas or mere food for thought. A reader who seeks random tips and pointers will no doubt find them – but would miss the true gem: Data Mesh as a system consisting of mutually reinforcing components. Make no mistake: it is the holistic thinking behind Data Mesh that makes it strong. That is what elevates it to the paradigm shift category, not the isolated insights or discoveries.
“Holistic thinking elevates Data Mesh to paradigm shift category”
The book is put together with an extraordinarily clever structure and flow. It starts with an inspiring and illuminating case that is later revisited repeatedly to make new concepts concrete and clear. Next, the book outlines the What with the Data Mesh key principles. Only after that – when the reader has a rough understanding of what we are talking about – does the book explore the motivational side: the Why. Then we dive into significantly more detail to gain an understanding of the How: Data Mesh and data product architecture and characteristics. As the final step, the author explores in some detail how Data Mesh can be implemented for a real business and organisation.
Overall, the book covers a lot of ground, from key concepts to motivational factors, and from detailed characteristics to change management – all of this without dropping the ball even once, without missing a beat. This is an amazing achievement for a single author. Something very rare.
What’s missing? Not much, really. A technology-oriented implementer may remain unsatisfied with the very few references to data stack vendors or tools, or instructions on how to deploy them. For the rest of us, this is absolutely the book’s strength rather than its weakness. With some stretch, I can put my finger on one thing that could have benefited from more in-depth handling: business domains and their identification and design. As it stands, the book provides somewhat light-touch guidance on this topic.
Who is this book for? I would recommend it to everybody whose work is related to data in any way, including business managers seeking ways to boost competitiveness with analytics and AI. By staying away from (most) technical details, the book remains readable by almost anybody. Some chapters dive somewhat deeper into mesh characteristics and architecture details, but the reader is given the choice to skip those parts. The main prerequisite is readiness for conceptual thinking; without that, grasping the book’s central messages may fall a bit short.
A void filled
Data Mesh is delightful on a very personal level. During spring 2021, based on an earlier white paper, I wrote an article series on digital business renewal focusing on competitiveness amid digital disruption. The articles on architecture, cloud platform, operating model, and the data vs. software relationship shared multiple inspirational sources with Data Mesh: federated distribution, Domain-Driven Design, Bounded Context, microservices, modern software engineering methodology and its cloud computing enablers, and so on. Alas, at the time, I knew Data Mesh only by its name – and failed to connect the dots!
“With Data Mesh, everything clicked”
Consequently, I did not manage to fully integrate data into the overall storyline. This is most visible in the article discussing architecture. Looking back, this is revealing: attempts to integrate centralised data architecture failed. It did not “feel” right and got dropped completely. But that didn’t stop it bothering me. Later, getting to know Data Mesh, everything clicked: for federated, distributed data management, Data Mesh is the name of the game. The dots got connected. The void is now filled.
Four cornerstones
Data Mesh builds on four cornerstones: Business domains, Data product, Self-serve platform and Federated computational governance. Each has an essential role in the overall concept.
Domains are about data ownership and management by distributed business domains rather than a centralised data team. The data product is the new “architectural quantum” that Data Mesh operation is fundamentally based on. Not only is the data product part of the mesh architecture, it has an internal architecture of its own.
The self-serve platform enables domains to take ownership of data products by hiding most of the related complexity and reducing the cognitive load on the team. Minimizing operational costs depends heavily on platform capabilities and their maturity. Governance in the distributed Data Mesh is very different from the traditional way, as policy enforcement now gets automated.
“Calling Data Mesh “architecture” misses other aspects of equal importance”
From a slightly different perspective, Data Mesh is a combination of 1) an operating model consisting of domain-driven organisational structure, distributed governance and policy enforcement, and practices for data product ownership and development, 2) an architectural design for domains, data products and the data mesh platform, and 3) platform technology, including modern software development methodology enabled by cloud computing. It is not uncommon to see Data Mesh referred to as “architecture”. This is misleading, as it misses the other two aspects of equal importance.
Domain-Driven Design as foundation
Domain-Driven Design is the philosophical foundation of Data Mesh. Everything else is just a tailwave: almost like implementation details needed to make DDD work. If you don’t buy the idea of DDD, don’t bother doing anything else. Or as the author puts it: “Data mesh, at its core, is founded in decentralization and distribution of data responsibility to people who are closest to the data.”
“DDD is the revolution – everything else is just a tailwave”
Historically, DDD has been the remedy for the increasing complexity of application design that results from business digitalisation. Application monoliths have been broken into microservices. Combined with DevOps, software and IT have been embedded in value creation within the domains.
Data Mesh is basically just suggesting: as we face the very same complexity and scalability challenge in data, why don’t we make our lives a bit easier by copying best practices from software development? It seems to have worked just fine on the operational (transactional) plane – why wouldn’t the analytical plane follow suit? As an upside, we’ll end up integrating business, software and data into a single high-performance domain. (see Data Mesh as strategic option)
With DDD comes the key concept of Bounded Context, which is very helpful in understanding the Data Mesh core idea of federated ownership and accountability. Together, DDD and Bounded Context carry a significant message: ownership and accountability belong with the domain that knows the data’s context and semantics.
As mentioned, the book gives only light-touch guidance on identifying and defining domains – and the guidance it does give is somewhat mixed. In essence, it is: either start with the existing organisational structure or use the domains defined earlier for microservices. This may not be enough. Domains deserve more attention and a deeper thought process.
Allocating ownership and accountability to distributed domains is not a small thing. The rest of Data Mesh the book, and Data Mesh the concept, are basically about enabling just that!
Data Product as architectural element, mindset and capability
If the business domain is the philosophical foundation of Data Mesh, then the data product is its architectural foundation. Data Mesh requires a very specific kind of data product: a product with an internal architecture. That architecture then enables its functional role within the Data Mesh.
Beyond the purely technical product characteristics come the elements that the newly given accountability mandates: product design principles and organisational capabilities. Here’s a very compact introduction to each of them, starting with the technical side of things – in other words, what the data product is as a Data Mesh architectural element, what data-as-a-product is as an overall mindset, and finally, what kinds of capabilities are needed to take full ownership of data products. Data Mesh implementation builds heavily on all three.
“Data Product is more than data. Data-as-a-product is a mindset. Ownership calls for capabilities. Data Mesh builds on all three.”
Picture: Data product architectural elements (subset)
The data product’s architectural elements include input and output data ports, discovery and control interfaces, the transformation code, and the product’s own data, metadata and policies. Correspondingly, its main functionality elements cover consuming source data, transforming it, serving the results, and supporting discovery, observability and life cycle management.
“Bounded Context makes data pipeline obsolete”
So this is where the ETL pipeline went?! Maybe not. Considering data product internals as the ETL pipeline’s new incarnation is prone to cause confusion and can be seen as unnecessarily muddying the waters. There are no architectural rules to emulate centralised ETL with a data product’s internal implementation. Personally, I would drop the term ETL completely; the book does not mention ETL here. In any case, the context is now bounded – and that makes all the difference. (see Modern Times on analytics plane)
Important: the book does not provide implementation details for the architectural components or functional elements. That is intentional – it adds clarity and gives data product developers and platform providers free hands to do their best. On the conceptual level, however, the guidance is solid, leading to a consistent and coherent overall architecture and operating model, and providing a foundation for data connectivity and interoperability.
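Since the book stays deliberately conceptual, any concrete rendering is an interpretation. As a purely illustrative sketch in Python – every class, field and name below is my own assumption, not the book’s specification – a data product as an “architectural quantum” might be described like this:

```python
from dataclasses import dataclass, field
from typing import Dict, List

# Hypothetical descriptor of one data product as an "architectural quantum":
# data, code, metadata and policies packaged and deployed together.
# Every name and field here is an illustrative assumption, not the book's spec.

@dataclass
class Port:
    name: str
    mode: str         # access mode of the port, e.g. "stream" or "batch"
    schema_ref: str   # reference to the port's published schema

@dataclass
class DataProduct:
    domain: str                                              # owning business domain
    name: str
    input_ports: List[Port] = field(default_factory=list)   # where source data enters
    output_ports: List[Port] = field(default_factory=list)  # where consumers read
    transformation: str = ""                                 # code that derives the outputs
    policies: Dict[str, str] = field(default_factory=dict)  # embedded, machine-readable policies

# Example: a hypothetical "play-events" product owned by a listener domain.
play_events = DataProduct(
    domain="listener",
    name="play-events",
    input_ports=[Port("raw-plays", "stream", "schemas/raw_plays_v1")],
    output_ports=[Port("daily-plays", "batch", "schemas/daily_plays_v2")],
    transformation="jobs/aggregate_daily_plays.py",
    policies={"pii": "masked", "retention": "365d"},
)
print(play_events.name, [p.name for p in play_events.output_ports])
```

The particular fields don’t matter; the packaging does – data, code, metadata and policies travel together as one unit of the mesh.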
Data product ownership implies a set of design principles – the baseline usability attributes of data-as-a-product: a data product should be discoverable, addressable, understandable, trustworthy, natively accessible, interoperable, valuable on its own, and secure.
Finally, data product ownership requires new organisational roles, most notably the data product owner and the data product developer, embedded in the domain team.
When assessing Data Mesh as a strategic option, the gap between current capabilities and the minimum set of required capabilities forms a key part of the analysis. Most of the new requirements, by far, come with enabling full ownership of data products. (see Capabilities bar set high, Data Mesh as strategic option)
Self-serve platform as essential enabler
Business domains taking full ownership of data products leads to a significant increase in workload, required skills and operational overhead. The data mesh platform plays a central role in alleviating these challenges by hiding complexities and reducing cognitive load. The key objective is to make domain teams autonomous: able to fully take charge of data products over their entire life cycle without the need for outside support. In addition, the data mesh platform serves needs beyond the business domain, including data product consumers and governance operations.
Picture: Data mesh platform architecture (overview)
Again, the author chooses to stick to a conceptual description rather than attempting to describe a physical platform with its constituents. This adds clarity and makes the book easy to read and understand. It’s OK for the reader to make mental mappings between concepts and (future) physical products, but there’s one thing to avoid: don’t map things onto a single vendor monolith.
The data mesh platform’s main architectural elements are the three planes: one for infrastructure and two for product- and mesh-level experiences. This split works really well throughout the book and helps in understanding the platform’s role in the overall Data Mesh.
The infrastructure utility plane maps fairly closely to currently available software and data engineering platform services – those related to the management and utilisation of computing and storage resources and tools. When a computing or data platform vendor claims Data Mesh compliance, there’s a high likelihood that this is where the claim is rooted. But it’s one thing to do it in a centralised manner and another to do it in a genuinely distributed way – so watch out for those improvised marketing claims. (see Implementation risks and their mitigation)
However, it is the two experience planes that really start to make a difference in hiding complexity and reducing operational overheads – and in differentiating one platform vendor from another. For the time being, there are no native data mesh platforms with everything available. But the race is on – this is the space to monitor closely in the coming months and years. (see Component maturity)
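To make the plane split concrete, here’s a toy model in Python of how an experience plane hides the utility plane. Every class and method name is my own illustrative assumption – no vendor’s actual API looks like this:

```python
# A toy model of the platform plane split -- all names are illustrative
# assumptions, not any vendor's API. The point: the data product experience
# plane hides the infrastructure utility plane from the domain team.

class InfrastructureUtilityPlane:
    """Low-level, resource-by-resource operations."""
    def provision_storage(self, product: str) -> str:
        return f"s3://mesh/{product}"            # placeholder for real provisioning
    def provision_compute(self, product: str) -> str:
        return f"cluster-{product}"
    def schedule_job(self, cluster: str, script: str) -> None:
        print(f"scheduled {script} on {cluster}")

class DataProductExperiencePlane:
    """Product-level, declarative operations built on top of the utility plane."""
    def __init__(self) -> None:
        self.infra = InfrastructureUtilityPlane()
    def create_data_product(self, domain: str, name: str, transformation: str) -> None:
        product = f"{domain}/{name}"
        storage = self.infra.provision_storage(product)   # hidden from the domain team
        cluster = self.infra.provision_compute(product)
        self.infra.schedule_job(cluster, transformation)
        print(f"data product {product} ready at {storage}")

# The domain team only ever sees the one declarative call:
DataProductExperiencePlane().create_data_product(
    domain="listener", name="play-events",
    transformation="jobs/aggregate_daily_plays.py",
)
```

The cognitive-load reduction lives in exactly this difference: one declarative, product-level operation instead of many resource-level ones.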
Data Mesh would not survive anarchy
The fourth and final Data Mesh cornerstone is what the author calls “federated computational governance”. The name is justified and fitting, as the point is to make data governance work in a distributed environment – and that wouldn’t work without a high level of automation through computation.
For a seasoned and somewhat control-oriented data governance professional, Data Mesh may appear as a recipe for anarchy – with business domains given unlimited autonomy to screw things up. In reality, Data Mesh needs good governance, just implemented in a totally new manner. In fact, in areas like connectivity, interoperability and trust creation, more standardisation, cross-domain agreement and, yes, governance are needed to make Data Mesh operationally viable.
The basic principle is simple: global policies – on matters like interoperability, security and privacy – are defined by a federated team of domain representatives and platform experts, and then enforced computationally, embedded as code in every data product. See the sketch below.
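A minimal sketch of what “computational” can mean in practice – the policy names, metadata fields and checks are my own illustrative assumptions, not the book’s mechanism:

```python
from typing import Callable, Dict, List

# Illustrative sketch of computational policy enforcement: global policies are
# plain functions, evaluated uniformly against every data product's metadata.
# Policy names and metadata fields are assumptions for illustration only.

def policy_pii_masked(metadata: Dict[str, str]) -> bool:
    """Global policy: PII in output ports must be masked."""
    return metadata.get("pii") == "masked"

def policy_schema_published(metadata: Dict[str, str]) -> bool:
    """Global policy: every output port must publish a schema."""
    return bool(metadata.get("schema_ref"))

GLOBAL_POLICIES: List[Callable[[Dict[str, str]], bool]] = [
    policy_pii_masked,
    policy_schema_published,
]

def enforce(product: str, metadata: Dict[str, str]) -> List[str]:
    """Run every federated policy against one data product; return violations."""
    return [p.__name__ for p in GLOBAL_POLICIES if not p(metadata)]

print(enforce("listener/play-events",
              {"pii": "masked", "schema_ref": "schemas/daily_plays_v2"}))
# -> [] : all global policies pass for this product
```

The policies are decided once, federatively; enforcement then scales automatically with the number of data products instead of with the number of governance meetings.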
Systemic excellence
By now it’s probably clear that Data Mesh is not a random collection of capabilities but an exceptionally well-designed system of coherent, consistent and mutually reinforcing components – deep thought backed up by best practices from field-tested software designs, and loads of hands-on experience supported by highly skilled sparring partners.
No committee would ever come up with Data Mesh. For sure, everybody would contribute good ideas, possibly with deep insight, but there’s zero chance of that resulting in something coherent and consistent – something with systemic excellence. Data Mesh is different: it comes with built-in coherence and consistency.
Picture: Committee-designed horse, a.k.a. compromised data mesh
But not all data mesh implementations are created equal. For example, a data mesh with a heavy, centralised data-lake-like element is almost sacrilegious – certainly not to be used as a benchmark. The same goes for datasets renamed as data products. If systemic excellence is what you are after, don’t settle for cheap copies – go for the real thing. (see post on systemic excellence)
Data Excellence
Another type of excellence is also useful in assessing Data Mesh: Data Excellence – defined through five strategic measures: Scale, Speed, Agility, Quality and Value (see article on Data Excellence).
The book does not use the term. However, the individual measures are all over the place, explicitly and implied.
To me, having gone through the earlier thought process on Data Excellence, Data Mesh appears even more impressive: delivering where it matters most.
Traditional data management shortcomings
The author does an excellent job in highlighting the Critical Failure Factors and smaller shortcomings of the centralised data management paradigm. She does it with grace, by merely pointing out facts visible to anybody willing to take a look.
Rather than starting from architectural or technology details, let’s look at the results first. Let’s start from culture, which Zhamak so eloquently puts in writing: “Culture represents the language, values, beliefs, and norms that an organization embodies. Today, many organizations value the volume of data they are capturing. They believe that data collection centrally can deliver value. The norm is to externalize data responsibilities to someone else, the centralized data team. Concepts such as data pipeline, canonical models, and single source of truth are ubiquitously present in the language.”
There it is in a nutshell: what has gone wrong with the whole data management setup – the result of technology-driven, mechanistic evolution without anybody stopping to think where systemic excellence is supposed to emerge. Well, it didn’t.
There never was a chance. Systemic excellence has to be designed as, well, a system – with organisational structure, architecture, technology and governance planned as a whole. Holistically.
Or maybe it’s simpler than that. Before the author, nobody really questioned the industry-wide consensus that an architecture built to serve BI needs would work fine for hundreds of AI use cases interwoven into the company fabric.
The technology-driven evolution from data warehouse to data lake may be marketed as the new architecture that solves the earlier problems by serving data scientists better. As we now know, way too often the lake has turned into a swamp. With lost context and semantics, determining whether data is even suitable becomes a time-consuming exercise, with serious compromises to data science productivity.
It is fairly easy to predict that draining the swamp will not happen through some massive data cleansing project nobody is willing or even able to do. Rather, it will happen when the emerging data products of the newly established Data Mesh gradually make the data lake, as a centralised entity, obsolete.
Applying the Data Excellence strategic measures to this setup is revealing: the centralised paradigm struggles with every one of them.
But exactly how bad can the centralised data pipeline be? Let’s have a look.
Modern Times on analytics plane
Something beyond the book – but certainly inspired by it.
Many reports indicate that data engineers are suffering from burnout symptoms and that many are considering a career change.
Is this their own fault? No. It is an indication that the work itself has been organised wrong.
Picture: Modern Times centralised data pipeline
Chaplin’s Modern Times is a fun classic, but all joking aside, the fact is that we organise, manage and measure innovation work applying Tayloristic principles – while expecting agility and business renewal. Amazing.
Working at the pipeline leads to a situation where data consumers turn to the pipeline worker with their data needs and, very often, with their data quality problems. The pipeline worker lacks visibility into the context of the data’s origin, and hence into its semantics – leading to the near-impossible task of finding a lasting remedy to quality problems. Responsibility without the ability to influence is poisonous. What we are witnessing is a manifestation of the fundamental design flaw in centralised data management.
This is not sustainable – not from the individual’s perspective and not from the business perspective. From capital’s perspective, this marks a gradual move towards “Pipeline workers of the world, unite!” No, capital doesn’t want this either.
With the centralised data pipeline comes a vicious circle: use case proliferation leading to continuously worsening problems on the factory floor.
Data Mesh represents a complete overhaul of all this. With accountability and responsibility come ownership, autonomy and empowerment. With Bounded Context comes semantic understanding. With business domain focus comes high-performance teamwork. With Data Mesh, the pipeline from hell ceases to exist.
Component maturity
In terms of maturity, we have to differentiate between conceptual and component maturity. As a concept, Data Mesh is already more mature than centralised data management has ever been. I trust that the evidence for this has become clear by now.
"As concept, Data Mesh is more mature than previous paradigm has ever been"
However, when it comes to the maturity of the architectural components needed for Data Mesh implementation, we are still in a very early phase. The book is fully transparent in this respect and makes no claims to the contrary. In fact, the author is kind enough to itemise the current status, giving us a full table of all key components with their respective maturity.
Table: Key components and their current maturity (small subset)
The table makes one thing plain: adopting Data Mesh today means building on components at very different maturity levels – and planning accordingly.
Achilles’ heel candidates
After such an appraisal, the necessary next question emerges: are there fundamental weaknesses – even Achilles’ heels – in the conceptual design itself?
According to Wikipedia, “An Achilles' heel is a weakness in spite of overall strength, which can lead to downfall.” From that we can conclude that mere complexity, implementation difficulty or component immaturity do not count. When managed right, they do not constitute a path to downfall.
For me, the first real candidate emerges on page 208 (not before), with the discussion on bitemporal data. Martin Fowler’s Bitemporal History provides a great introduction to the topic. Picking the key message from it: “Bitemporal history is a useful way of framing history when we have to deal with retroactive changes. However we don't see it used that often, partly because many people don't know about the technique, but also because we can often get away without it. One way to avoid it is to not support retroactive changes.”
Not support retroactive changes? No doubt that would make life easier. But Data Mesh does not take shortcuts. Instead, with the introduction of dual timestamps – actual time and processing time – the author describes, at some length, a way to do bitemporal history on Data Mesh. As a consequence, mesh engineering just got a bit more challenging.
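To see what the dual timestamps buy us, here’s a toy bitemporal log in Python. The actual/processing naming follows the book’s framing; the record structure, data and query helper are my own illustrative assumptions:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

# Toy bitemporal log: each fact carries the time it happened (actual time) and
# the time the mesh learned about it (processing time). The naming follows the
# book's dual-timestamp framing; the record and query helper are my own sketch.

@dataclass
class Fact:
    key: str
    value: int
    actual_time: datetime       # when the event truly occurred
    processing_time: datetime   # when it was recorded into the data product

log = [
    Fact("plays:song-42", 100, datetime(2022, 3, 1), datetime(2022, 3, 1)),
    # Retroactive change: on March 5 we learn March 1's true count was 90.
    Fact("plays:song-42", 90, datetime(2022, 3, 1), datetime(2022, 3, 5)),
]

def as_of(log: List[Fact], key: str, actual: datetime, known_at: datetime) -> int:
    """What did we believe at `known_at` about the value at `actual`?"""
    candidates = [f for f in log
                  if f.key == key
                  and f.actual_time <= actual
                  and f.processing_time <= known_at]
    return max(candidates, key=lambda f: (f.actual_time, f.processing_time)).value

print(as_of(log, "plays:song-42", datetime(2022, 3, 1), datetime(2022, 3, 2)))  # 100
print(as_of(log, "plays:song-42", datetime(2022, 3, 1), datetime(2022, 3, 6)))  # 90
```

Trivial on a single store; the challenge the book tackles is doing this consistently across autonomous, distributed data products.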
Would this be a genuine Achilles’ heel? The point is not the dynamic nature of bitemporal data itself; the point is doing it in a distributed mesh environment rather than on a centralised data platform. Despite some personal history in distributed real-time computing – the Nokia DX 200 switching system – I will not attempt to call this one out. Let the jury decide. For now, this is just a candidate.
Another candidate emerges with data composability. Chapter 13 introduces the topic: “Composing new data from intersections and aggregations of multiple existing data is a basic function necessary for all data work. Data mesh introduces the ability to compose multiple data products in a decentralized fashion without creating tightly coupled data models that become bottlenecks for change.” And a bit later: “Powerful analytical use cases require correlating and connecting data across multiple data products and between different domains.”
What follows is a somewhat lengthy description of how this would or could be implemented on Data Mesh. As before, there’s no attempt to hide away from complexities. On the contrary, the book is fully transparent in pointing out the essential requirements: the ability to compose data across different modes of access and topologies, to discover and learn what is relatable decentrally, to seamlessly link relatable data, and to relate data temporally. Note that the final one ties data composability to bitemporal history and Data Mesh dynamics.
Overall, data composability is similar to the case of bitemporal data: not a particularly strong candidate but – due to the distributed and dynamic nature of Data Mesh – something to monitor closely. The principle of transparency emerges again in the author’s remark: “Note that at the time of writing, data mesh’s approach to composability is an area of development.”
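As a toy illustration of temporal composition across domains – the product names, columns and the pandas-based mechanics are entirely my own assumptions, not the book’s mechanism – correlating two data products could look like this:

```python
import pandas as pd

# Toy composition of two hypothetical data products from different domains,
# joined on a shared entity id and aligned temporally with an as-of join.
# Product names, columns and the pandas mechanics are illustrative only.

plays = pd.DataFrame({            # output port of a "listener/play-events" product
    "artist_id": [7, 7],
    "ts": pd.to_datetime(["2022-03-01", "2022-03-08"]),
    "plays": [100, 140],
})
rates = pd.DataFrame({            # output port of an "artist/payout-rates" product
    "artist_id": [7],
    "ts": pd.to_datetime(["2022-02-25"]),
    "rate": [0.004],
})

# For each play count, pick the latest payout rate known at that point in time.
composed = pd.merge_asof(
    plays.sort_values("ts"),
    rates.sort_values("ts"),
    on="ts",
    by="artist_id",
)
composed["estimated_payout"] = composed["plays"] * composed["rate"]
print(composed)
```

Easy on two small frames in one process; the open question is doing exactly this across decentralised products with different access modes and topologies – hence the author’s “area of development” remark.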
Important: put things into perspective. If you are looking for reasons to reject Data Mesh, you may have landed on something useful. But if your primary concern is the future success of your company, I’d suggest not getting hung up on a couple of technical details – even if they appear as Achilles’ heel candidates at this point in time. A solution-oriented mind opts for monitoring instead of rejection.
Capabilities bar set high
When it comes to recommending Data Mesh to individual companies, the author is very cautious and down-to-earth. She is the polar opposite of a snake oil salesman pushing a cure for all ills. If anything, she is a bit too cautious – depending on the perspective we choose to take.
The book comes with an extremely useful self-assessment method for scanning across the essential capabilities for Data Mesh adoption.
Picture: Adoption readiness criteria – you need to be on red
The key observation and interpretation: at its core, it’s about the business domains’ ability to take full ownership of data products. Relatively speaking, everything else is much easier. The lion’s share of that ability is about modern software development practices: things like cloud computing utilisation to the max, Agile, DevOps and Continuous Delivery. In a nutshell, from my article Software and data – it takes the two to tango: “In terms of productivity, software development has reached maturity but its data analytics sibling is just coming of age. It would be foolish not to take heed of older sister’s hard-earned wisdom.”
Implementation risks and their mitigation
Data Mesh implementation signifies transformational change. Exactly how extensive depends on the starting point: the initial capabilities discussed above. No matter how you slice it, solid change management practices are needed. That does not eliminate implementation-related risks, but it provides the tools to actively mitigate them.
Change management is not the book’s focus area, but it still provides a good set of concrete, practical guidance on the topic. The overall framework suggested contains four elements: business-driven, end-to-end, iterative and evolutionary.
Business-driven covers things like continuous delivery and demonstration of value and outcome, and rapid feedback from the consumers. So the approach is not “build it and they will come”. In fact, that wouldn’t even be possible with Data Mesh, which needs to be built into the business domains themselves – contrary to the old data management paradigm, which might suggest starting with centralised capabilities like a data team and a data lake.
Again, the author’s experience shows as she warns against being too business-driven and reactive, with the pitfall of narrowly focused point solutions. As a counter-measure, we should apply product thinking and ownership: when you are assigned product ownership, you start thinking about long-term value creation.
Together, end-to-end and iterative aim at a virtuous circle of more and more data-augmented products, services and processes, leading to demand for a continuously enriched Data Mesh – with more data products, better platform services and higher coverage of computational policies.
The evolutionary model for change management covers a lot of ground. Its basic premise is that the transformation should be seen through the evolutionary phases of exploration, expansion and sustain. For each phase, a different set of characteristics, activities and measures is to be applied across domain ownership, data as a product, the self-serve platform and computational governance. To measure progress, the book suggests using fitness functions rather than KPIs – as the latter may not measure Data Mesh’s true maturity.
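A fitness function in this sense is simply an executable measure of the architecture’s health. A minimal sketch – the specific metric, the share of consumption served through data products rather than legacy stores, is my own illustrative choice, not the book’s list:

```python
from typing import List

# Minimal fitness-function sketch: an executable measure of architectural
# health, evaluated after each evolutionary step rather than reported as a KPI.
# The chosen metric -- the share of consumption served through data products
# instead of legacy stores -- is my own illustrative example, not the book's.

def mesh_adoption_fitness(product_reads: int, legacy_reads: int) -> float:
    """Fraction of reads served through data products (0.0 .. 1.0)."""
    total = product_reads + legacy_reads
    return product_reads / total if total else 0.0

def is_monotonically_improving(history: List[float]) -> bool:
    """Evolutionary check: the fitness value should not regress between steps."""
    return all(a <= b for a, b in zip(history, history[1:]))

history = [0.12, 0.19, 0.31]  # fitness measured after earlier evolutionary steps
history.append(mesh_adoption_fitness(product_reads=4_200, legacy_reads=5_800))
print(round(history[-1], 2), is_monotonically_improving(history))  # 0.42 True
```

The same pattern fits the migration discussion below: a fitness function like this is one concrete way to verify that architectural entropy is actually decreasing with each evolutionary step.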
“Centralised entities do not belong to Data Mesh”
Migration from legacy architecture and technology is at the heart of the transformation and requires special attention. Keeping an eye on the ball is essential: centralised entities do not belong in Data Mesh (except in camel variants). However, many practical reasons – sunk cost being one – dictate that during the transitory period the Data Mesh architecture may not be perfect. The important thing is to maintain clarity on what is transitory vs. target – and to apply discipline in moving decisively from the former towards the latter. That is, the movement has to be managed. Zhamak has a wonderful concept for this: architectural entropy, which is to be reduced with each evolutionary step. She then proceeds to give very useful tips on how data warehouse and data lake co-existence with the emerging Data Mesh can be organised. Ideally, legacy systems have multitenant capability: configuration to allocate and access storage based on data product boundaries. Basically, data storage without a multitenancy option has its days numbered.
Finally, with any change – let alone transformational change – organisational inertia is a given. To manage it, the book suggests focusing on values. And not just documenting existing values – whatever they happen to be – but defining what the values will need to be: “If an organization adopts data mesh in its entirety, these values are shared by all the teams and are expected from each other. These values are uncompromisable. They drive action.” This is what leadership looks like. This is the mentality and approach that a Data Mesh build-up calls for.
“Leverage values to push for change”
The book proceeds to provide a strong set of candidate values, for example Analytical data is everybody’s responsibility and Delight data users. The examples come with a note that “you need to contextualise them to your organisation”. I’d like to add: “…as long as you don’t water them down.”
Data strategy
The book provides hands down the best compact data strategy seen so far. It comes with an itemised structure showing how data strategy maps to strategic initiatives and use cases, leading to intelligent applications and touchpoints that build on data products and platform capabilities. In short, it shows in a crisp, crystallized way how Data Mesh brings value and competitiveness.
However, the essence of the data strategy comes in textual form: “To understand how data mesh fits into a larger data strategy, let’s look at our hypothetical company, Daff, Inc. Daff’s data strategy is to create an intelligent platform that connects artists and listeners through an immersive artistic experience. This experience is continuously and rapidly improved using data and ML embedded in each and every interaction of the listeners and artists on the platform. This implies capturing data from every single touchpoint on the platform and augmenting every platform feature using ML derived from the engagement data.” – Keywords: continuous improvement, embedded ML, every touchpoint.
Daff is basically a Spotify-like content streaming service. The business and organisation are digital-native to the core. But that does not imply that Daff is an exception – just that it is a useful case for a book about Data Mesh. The same keywords apply to every company in every industry, digital-native or not. This is what every company will need to do to stay competitive: embedding ML into all aspects of value creation, across all customer journey touchpoints, while continuously improving its products, services and processes. The details are context-dependent, but on the macro level it is that simple.
Now we are ready to take the final step: Assessing Data Mesh as strategic option.
Data Mesh as strategic option
Going beyond the book – but applying everything the author has taught us.
General Electric CEO Jeffrey Immelt famously proclaimed that “every industrial company will become a software company”. McKinsey’s Executive’s guide to software development takes notice of GE frontman’s foresight but continues with a somber note on software’s position in strategic decision making: “Despite the mission-critical nature of software, it gets surprisingly little attention in the C-suite…software executives are rarely given a seat at the table of top management, and software strategy is often determined three to five layers down the hierarchy…”
McKinsey then suggests the correct way: “To make software an advantage, executives need to be fluent in leading software development practices and carefully determine how software is integrated into the organization. Most important for executives to get right from the start, however, is making software development a strategic priority, not an afterthought.”
Based on what we know about the role of analytics and AI in modern business, everything above applies to data. Furthermore, to truly “integrate data into the organization”, Data Mesh is the only game in town. Centralised data management does not facilitate that.
That takes us to Data Mesh as a strategic option – something not to be “determined three to five layers down the hierarchy”. Not only because of its strategic importance, but also because of its systemic nature spanning organisational structure, architecture, technology and governance. Indeed, the decision on Data Mesh is for the management team to take – and it wouldn’t hurt to get the board aligned too.
The difference between implementation risk (discussed above) and strategic risk is that the former maps to action while the latter often lives in inaction. Passivity during a data management paradigm shift is prone to increase strategic risk. Further, festering strategic risks have a tendency to turn into existential risks. Exaggeration? No. Digital disruption constitutes a minefield of risks for incumbents unable to digitalise their business in order to stay at the productivity frontier of their respective industries.
"Passivity during paradigm shift creates strategic risks"
The proper way to digitalise a business is to integrate software and data into the organisation. In other words, taking the data strategy learnings: embedding ML into all aspects of value creation, across all customer journey touchpoints, while continuously improving products, services and processes. This can only be done with distributed data management – and by ensuring systemic excellence.
OK, Data Mesh needs to be handled as a strategic option. What then?
Luckily, the author has given us almost everything needed to assess the option: the capability self-assessment, the data strategy framing, the component maturity overview, and the implementation risks with their mitigations.
A decision to take the option does not mandate heavy upfront investments. By default, the approach is business-driven, iterative and evolutionary.
Alternatively, the early target setting can be even lower. The initial decision may not be about implementation at all, but about learning to mitigate strategic risk. A systematic assessment of the strategic option is a great learning opportunity in itself.
Let’s do it.