Solving the 'Many to Many' Problem
Sana Remekie
CEO, Conscia | Thought Leader in Composable Architecture, Omnichannel Personalization, Top 10 Influential Women in Tech, Public Speaker
Before we begin, let’s first define the ‘many to many’ problem in a Composable architecture. By definition, ‘Composable’, among other things, assumes that you’re building an architecture with ‘many’ technologies. However, these connected composable technologies must also be able to serve ‘many’ channels, both to future-proof the architecture and to create a consistent experience throughout the customer’s journey with a brand or organization. That is where the other ‘many’ in ‘many-to-many’ comes from.
Recently, various software vendors have introduced solutions that attempt to solve this challenge. Some belong to traditional categories such as CMSes, ETL tools, search platforms, and iPaaS vendors, while others define brand-new categories such as Content Federation, Experience Data Platform, DXO, and DXC. I’d like to break these down and provide an analysis of each approach.
Headless CMS and Commerce Engines: These were wonderful innovations that allowed content and data to be distributed to various channels (not just the website), so they solved one end of the problem. However, they don’t gracefully tackle the other side: connecting to many backends such as offer engines, ERPs, PIMs, DAMs, etc. I’ll caveat that by saying that many CMSes and commerce engines offer integrations to these other vendors through marketplaces; however, these connections are limited to very opinionated, point-to-point integrations that require heavy coding, which, frankly, leads to building another monolith. Also, without the introduction of an ETL layer, there is no way for these tools to work with the legacy systems that are very much a reality in the enterprise space, such as ERPs, CRMs, databases, and file systems.
Content Orchestration: Some headless CMSes claim to be doing ‘content orchestration’, but what they actually mean is that they allow content to be referenced between their own spaces and stacks. It’s actually quite remarkable to see how many CMS implementations force customers to create internal silos within a single CMS, let alone address the external data silos within the organization. There is another approach that some more advanced CMSes offer, often called ‘external references’, ‘custom fields’, or ‘remote fields’. I would say that this deserves the term ‘orchestration’ to a degree. They provide discoverability of data that sits outside of their own CMS and offer APIs that deliver content beyond what resides in the CMS. They’re also not simply copying and pasting the data into their own CMS through some scheduled or event-based syncing mechanism. The challenge here is the assumption that the data sitting outside the CMS is related to data in the CMS. What if the data needs to be related based on the real-time context of the user? Still, I would say that this solves a fairly important problem and should be taken seriously.
Content Federation / Data Unification Layer / Experience Data Platform: This is an interesting one. Here, you move data from multiple backend systems into an operational repository or cache that can be accessed by any frontend via APIs. This works very well when your only problem is query performance. However, in more complex use cases, this alone is not sufficient. Here are the problems I see with this approach: 1) You are copying and pasting data from source systems, and it’s hard to keep that data in sync; a lot of these content federation tools don’t provide capabilities to move data between systems, so you rely on an ETL tool to get the data into them. 2) You can sync content and data, but you can’t sync business logic. For example, what about the case where one system holds promotions, such as Talon.One? You can’t really move those promotions into another repository, can you? Also, what if you need to leverage the real-time image transformation and optimization capabilities of Cloudinary?
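A toy sketch makes the copy-and-sync problem concrete. Everything below is hypothetical: in-memory maps stand in for a CMS, a PIM, and the operational cache, where a real setup would pull from each system’s API, usually via an ETL tool. The point it illustrates is staleness: between sync runs, the cache serves the old value.

```typescript
// Toy illustration of content federation's copy-and-sync pattern.
// All system names and records are made up for the sketch.

type Entry = { id: string; body: string };

// Mock source systems.
const cms = new Map<string, Entry>([["a", { id: "a", body: "hello" }]]);
const pim = new Map<string, Entry>([["b", { id: "b", body: "sku data" }]]);

// The unified operational cache that frontends query.
const cache = new Map<string, Entry>();

// Scheduled sync: copy every record from every source into the cache.
function sync(): void {
  for (const source of [cms, pim]) {
    for (const [id, rec] of source) cache.set(id, { ...rec });
  }
}

sync();
cms.set("a", { id: "a", body: "hello, world" }); // source changes after sync
const stale = cache.get("a")!.body;              // cache still serves "hello"
sync();                                          // next scheduled run
const fresh = cache.get("a")!.body;              // now "hello, world"
```

Note what the sketch cannot express at all: a promotion engine’s rules or an image service’s transformations are behavior, not records, so no amount of copying brings them along.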
Search: I love this one, especially because that is where I spent most of my career. You can build wonderful browse and discovery experiences with enterprise search platforms, and they do a wonderful job of delivering data to the frontend with great performance. However, they share many of the same challenges as Content Federation and Data Unification Layers.
Digital Experience Composition (DXC): This one is probably the most controversial. On the surface, it offers some promise because it connects to various backends. But let’s talk about the other side of the equation: the channels that we need to serve in a many-to-many problem. DXCs are, by nature, focused on web applications built on a JavaScript framework. These tools provide great capability for business and non-technical users to lay out pages and control the visual aspects of the frontend via WYSIWYG controls. However, you rely on JavaScript SDKs to do complex transformations and data stitching before binding the data to UI components. This means that you are using your frontend as your integration layer.
Content/Knowledge Graph: This is, in many ways, similar to the Content Federation solution, except that the underlying data is modeled in a graph database and connected via graph-based relationships. This means that it can be queried in ways that CMSes and other Content Federation tools simply don’t allow. It is a great solution for content that needs to be discoverable by humans and where you need to build complex relationships between your content records. Some of these solutions allow you to query this data with real-time APIs; in other cases, the data needs to be published into a search index or another operational data store.
API Orchestration: This involves connecting to any backend system via an API, chaining API calls based on their dependencies, responding to real-time customer context, executing business logic based on the query context, stitching data from various API responses in real time, and shaping the response for the client. Tons of commerce and non-commerce use cases require this, e.g., personalization, checkout, and payment orchestration. Note, however, that the assumption here is that the data in the system of record is ready to be consumed by downstream systems, i.e., it doesn’t require further metadata and semantic enrichment, classification, tagging, or cleansing.
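The chain-stitch-shape flow described above can be sketched in a few lines. This is a hedged illustration: the “CRM”, “offer engine”, and “PIM” below are plain mock functions with made-up fields, standing in for what would be real-time HTTP calls in an actual orchestration layer (where independent calls would also run in parallel).

```typescript
// Minimal sketch of API orchestration: chained calls, business logic,
// stitching, and response shaping. All backends and fields are hypothetical.

type Customer = { id: string; segment: string };
type Offer = { sku: string; discount: number };

// Mock CRM: resolve the customer's segment from real-time context.
function fetchCustomer(id: string): Customer {
  return { id, segment: id.startsWith("vip") ? "vip" : "standard" };
}

// Mock offer engine: depends on the CRM response, so it is a chained call.
function fetchOffers(segment: string): Offer[] {
  return segment === "vip"
    ? [{ sku: "SKU-1", discount: 0.25 }]
    : [{ sku: "SKU-1", discount: 0.05 }];
}

// Mock PIM: independent of the customer lookup.
function fetchProduct(sku: string) {
  return { sku, name: "Espresso Machine", listPrice: 100 };
}

// Orchestrate: chain the dependent calls, apply business logic, stitch the
// responses together, and shape a single payload for the client.
function personalizedProduct(customerId: string, sku: string) {
  const customer = fetchCustomer(customerId);
  const product = fetchProduct(sku);
  const offer = fetchOffers(customer.segment).find((o) => o.sku === sku);
  return {
    sku: product.sku,
    name: product.name,
    price: product.listPrice * (1 - (offer?.discount ?? 0)),
  };
}

const vipResult = personalizedProduct("vip-42", "SKU-1");
```

The frontend receives one shaped payload; the dependency between the CRM lookup and the offer lookup, and the pricing logic, live in the orchestration layer rather than in glue code on the client.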
GraphQL, BFFs, and API Gateways: Instead of starting from scratch, I’ll simply copy my explanation from another article I wrote recently:
The idea behind a BFF is to provide the data and content to a specific frontend in the form that it needs to deliver the experience to the end user, so that the frontend does not have to perform any business or integration logic. This avoids writing the dreaded glue code that I keep going on about. In this architecture, when the frontend application needs to fetch or manipulate data, it sends a request to the appropriate endpoint on the BFF. The BFF then handles this request, which could involve aggregating data from various underlying services, making sequential API calls, processing the data, and then responding with data in a format specifically designed for that frontend application to consume.
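As a rough sketch of this per-frontend shaping, the following mocks one canonical product record (all field names are illustrative) and shows a web BFF and a mobile BFF returning differently shaped payloads from the same underlying data, so neither frontend performs integration logic:

```typescript
// Hypothetical BFF shaping: one canonical record, two channel-specific views.

type Product = {
  id: string;
  title: string;
  description: string;
  priceCents: number;
};

// Stand-in for data already aggregated from underlying services.
const product: Product = {
  id: "p1",
  title: "Espresso Machine",
  description: "A 15-bar pump espresso machine with a milk frother.",
  priceCents: 12999,
};

const formatPrice = (cents: number) => `$${(cents / 100).toFixed(2)}`;

// Web BFF endpoint: rich payload for a desktop product page.
function webView(p: Product) {
  return {
    id: p.id,
    title: p.title,
    description: p.description,
    price: formatPrice(p.priceCents),
  };
}

// Mobile BFF endpoint: trimmed payload for a small screen and slow network.
function mobileView(p: Product) {
  return { id: p.id, title: p.title, price: formatPrice(p.priceCents) };
}

const web = webView(product);
const mobile = mobileView(product);
```

The cost this sketch hides is the one discussed in the article: every channel needs its own set of these shaping endpoints, built and maintained by engineers.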
GraphQL on its own is just a query language for APIs and a runtime for executing those queries with your existing data. It allows the client to specify exactly what data it needs, which can greatly reduce the amount of data that needs to be sent to the frontend and simplify the process of aggregating data from multiple sources. However, for the client to reap the benefits of GraphQL, engineers have to build GraphQL servers, which is a whole lot of code. Setting up a GraphQL server to act as a Backend for Frontend (BFF) involves several steps, including implementing resolvers (functions that implement the schema required by the calling application), error handling, testing, deployment, monitoring, etc. This process is a ‘build’, contrary to commonly held beliefs. Besides this, you own and manage the GraphQL servers, which requires expensive DevOps resources.
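To make the resolver idea concrete without pulling in a real GraphQL server library, here is a toy resolver map and executor. To be clear, this is not the graphql-js API; it is only an illustration of where the ‘build’ effort goes: each resolver wraps a call to an underlying service, and the client’s field list controls the response shape.

```typescript
// Toy resolver map and executor (NOT real graphql-js). Each resolver
// encapsulates a call to an underlying service, mocked here with literals.

type Args = Record<string, unknown>;
type Resolvers = Record<string, (args: Args) => any>;

const resolvers: Resolvers = {
  product: ({ id }) => ({ id, title: "Espresso Machine" }), // would call a PIM
  inventory: ({ id }) => ({ id, inStock: 7 }),              // would call an ERP
};

// Run only the fields the client asked for, as a GraphQL server would:
// the client controls the payload shape, and unrequested fields cost nothing.
function execute(query: Array<{ field: string; args: Args }>) {
  const data: Record<string, any> = {};
  for (const { field, args } of query) {
    const resolve = resolvers[field];
    if (!resolve) throw new Error(`Unknown field: ${field}`);
    data[field] = resolve(args);
  }
  return { data };
}

// The client asked only for "product", so "inventory" is never fetched.
const result = execute([{ field: "product", args: { id: "p1" } }]);
```

Even this toy version hints at the surface area a real server adds on top: schema validation, nested selections, error handling, batching, deployment, and monitoring, which is exactly the ‘build’ being pointed out above.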
So, yes, you can ‘build’ your own middle layer with GraphQL, and it does, to a degree, solve the many-to-many problem.
Automation and ETL Platforms: These are tools like Workato and Zapier that move data between systems using webhooks and triggers. Although this is an efficient way to move data around, we’re not truly solving the many-to-many problem here. We’re just allowing data to be synced with one of the headless repositories that will eventually serve the many channels.
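A trigger-and-sync recipe of this kind boils down to very little, which is the point. In the sketch below, the webhook payload and the in-memory ‘headless repository’ are both hypothetical; a Workato- or Zapier-style recipe amounts to: on trigger, map the fields and write the record to the destination.

```typescript
// Sketch of the trigger-and-sync pattern behind automation platforms.
// Event shape and destination store are made up for illustration.

type WebhookEvent = { system: string; id: string; payload: { name: string } };

// Stand-in for the headless repository that will serve the channels.
const headlessRepo = new Map<string, { name: string }>();

// The whole "recipe": on trigger, map/clean fields and write the record.
function onWebhook(event: WebhookEvent): void {
  headlessRepo.set(`${event.system}:${event.id}`, {
    name: event.payload.name.trim(),
  });
}

onWebhook({ system: "erp", id: "42", payload: { name: "  Espresso Machine " } });
const synced = headlessRepo.get("erp:42");
```

Notice that the many-to-many problem is untouched: the data has merely moved one hop, and serving it to each channel in the right shape is still someone else’s job.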
After all, I’m the CEO of Conscia, and it would be strange not to throw in a shameless plug for our own solution, so, last but not least…
Digital Experience Orchestration (DXO)
Conscia has defined a new category in the Composable space to solve the many-to-many problem: Digital Experience Orchestration. It offers digital teams zero-code API and data orchestration, offloading point-to-point integrations to the orchestration layer, simplifying frontend code, and eliminating the need to build custom BFFs for every channel. With the combination of the DX Graph (for data unification, discoverability, enrichment, and API delivery) and the DX Engine (composable experience management and API orchestration), the DXO embraces both legacy and modern backends, allowing it to act as the bridge between any backend and any frontend.