CRM, Customer Repository, Customer Base, 360 Customers, MDM Customers, what works?
The Tower of Babel (1563), oil on oak panel, Pieter Brueghel the Elder (1526/1530-1569), 114 cm x 155 cm.


Before working directly with structured data (the kind found in standardized models, 'old school' BI models and data lakes), I used to install CRMs (Customer Relationship Management: tools that turn prospects into customers and then manage the commercial exchanges with them, mainly used for long sales cycles in the Business-to-Business market) and ERPs (Enterprise Resource Planning: multifunctional software packages that manage practically everything, from the product manufacturing cycle to logistics; in short, large 'contraptions' deployed across plants, sites and factories, or within service companies).

And here is where the Ca-ta-strophe struck when it came to the Customer:

  • Customer Service spoke of Customer Files (on average 3 to 30 files per customer!);
  • Accounting spoke of Customer Invoices (on average 10 to 100 invoices per customer per year!);
  • Management Control spoke of Customer Accounts (on average 10 to 20 accounts per customer!);
  • Logistics spoke of Customer Address Sites (on average 1 to 10 address sites per customer!).

The myth of Babel was in full effect: no two departments spoke the same language, granularities were not aligned, reporting was not consistent, Managers 'banged' on their neighbors, and the Big Boss sometimes took... radical measures (especially when the Bankers were at the door)... "And from there the Lord scattered them over the face of all the earth." Book of Genesis (Gn 11:1-9).

I'll always remember the first time I set up a "Single Customer Number": it wasn't easy, and it required a lot of psychological lobbying within the company.

This service company specializing in payroll and human resources generated thousands of paper slips per month, and had decided to implement a CRM. This was in the 90s, just before the two towers of the World Trade Center in New York were blown to smithereens...

The IT world of the time was wilder and more instinctive; we were just starting to talk methodology, and TOGAF was being born...

We implemented the CRM quickly, and it was soon recognized as powerful and .... structuring ;-)

The problem was that this vision of the customer was not physically disseminated, and we soon struggled to distinguish customers from... prospects, and dead customers... from living ones. The gamble of the famous 'single customer number' might have worked, but the data store lacked the legitimacy of a real customer database, for want of an outgoing interface with the rest of the world...

At the time, I understood one of the best practices we now implement in our Data Governance policies and guidelines for our customers: "Data is shareable"...

Later, I joined a 'Mammoth' company, a 'Monster' of logistics, imposing in its size and human and industrial capacities, and very rigorous in the design of its software factories (historically internalized: heritage of President Charles De Gaulle's "Plan Calcul").

Merise was our bedside book...

We created centralized repositories for the 'Retail' branch, whose 'Customer-Partner' repository contained 'small-pro' customers and small independent distributors.

A curious mix of genres, dictated by the laws of the 'third normal form' dear to data 'modeling' ... and a perceptive vision of 'Third Party' databases responsible for containing all legal entities, whether 'Customer' or 'Supplier'.

The gains over the previous experience were (at company scale):

  • Centralized storage in a single, legitimate zone of truth;
  • Independence from data sources;
  • Data sharing (naturally imposed by the architecture in place) and dissemination.

Our architecture was beautiful: both functional and urbanized (we had read the 'Longépé' and spoke it fluently within the team).

On the other hand, this 'Customer Repository' was dedicated to a single business line, whereas there were... 5 national ones and 1 international one, each made up of several companies.

I can't imagine the group's efforts to consolidate accounting and financial data...

We were still a long way from a single, shared 'Customer Database', a highly 'political' subject in those days!

How could we merge all this customer information? What obstacles to overcome? What levers to activate?

This is where the famous MDM comes into play, MDM for 'Master Data Management' (not to be confused with 'Mobile Device Management', which has been very much in vogue over the past 4 years), the main driver of which will be the strategy implemented at Group, Headquarter or even Holding level.

Indeed, at some point in its growth, a Group may feel the need to structure itself better in order to avoid exposure to major tax, social and financial risks. What's more, lenders only lend up to a certain risk limit, so the more fragmented the company, the greater the risk exposure.

MDM technologies are one of the answers for companies that have grown organically or through acquisitions, and wish to rationalize their data assets before transforming them.

In fact, it's a technology developed by the major ETL (Extract, Transform & Load) players (Informatica etc.): powerful tools capable of collecting data from a variety of Database Management Systems, or DBMSs (Oracle, Sybase, Teradata, Microsoft SQL Server on the proprietary side; MySQL, PostgreSQL, etc. in the 'free' software world), in order to reformat it for business intelligence systems.
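As a toy illustration of that Extract-Transform-Load pattern, here is a short Python sketch. The source names, schemas and column mappings below are all hypothetical; real ETL tools do this at scale with connectors to the DBMSs mentioned above.

```python
# Minimal ETL sketch: extract rows from two hypothetical sources with
# different schemas, transform them into one common format, load into a list.

# "Extract": raw rows, as each (hypothetical) source system exposes them
oracle_rows = [{"CUST_NAME": "ACME SA", "CUST_TEL": "0102030405"}]
mysql_rows = [{"name": "acme sa", "phone": "+33 1 02 03 04 05"}]

def transform(row, mapping):
    """Rename source columns to the target schema using a mapping dict."""
    return {target_col: row[source_col] for source_col, target_col in mapping.items()}

# "Load": the BI-facing store, a plain list for the purposes of the sketch
warehouse = []
warehouse += [transform(r, {"CUST_NAME": "name", "CUST_TEL": "phone"}) for r in oracle_rows]
warehouse += [transform(r, {"name": "name", "phone": "phone"}) for r in mysql_rows]

print(len(warehouse))  # 2 rows, now in one common schema
```

The whole value of the tool lies in those mapping dictionaries: once every source is mapped to the target schema, downstream consumers only ever see one format.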

First of all, it's important not to confuse MDM architecture with Enterprise Application Integration (EAI) architecture, which is designed to enable all of a company's applications to communicate via data buses in as "real-time" a manner as possible.

MDM, on the other hand, is dedicated to the strategic centralization of the company's most stable and mutualizable data in a single database (and its re-distribution, of course), known as the 'well of truth' or 'Repository'.

It has a layered architecture, with each layer dedicated to a specific set of actions performed on the data:

  • First, the layer that receives the data, known as the 'Landing' area: this area hosts the data in its raw form, as received, very much like the basic functions of an ETL. Data is usually stored here for a short period; in some architectures it is kept longer for archiving or troubleshooting purposes, or historized in some bimodal architectures (corresponding to Gartner's vision of hybrid project management, sometimes agile and sometimes V-cycle);
  • Then there's the 'Staging' area, which literally 'breaks' and 'maps' the data into a 'target' model, and which also ensures data 'Standardization' (reformatting into a defined format, such as national and international telephone numbers), 'Cleansing' (rejecting rotten data) and finally 'Transcoding' (Reference Data or 'Nomenclature'). Transcoding is by far the most time-consuming task (requiring collaboration, consensus and validation), as it ensures the fine meshing and precise grafting between the data from each source;
  • Loading into the target model can now begin in the 3rd layer: this layer reconciles data from heterogeneous sources and merges them ('match & merge'), which then enables 'Consolidation', the distinctive feature of MDM software packages. It is in this zone that we find the famous cross-references (known as Cross Ref or 'XREF'), i.e. all the prior history of the merged data passed through the filter of the source prioritization matrix (a matrix designed with the Business). For example: the date of birth coming from such and such an Oracle source will be judged more reliable than the other sources in the prioritization matrix, the postal address from such and such a Sybase database fresher, and so on.
  • The "Consolidation" or "Target" zone, containing all the "Objects" in their purest, cleanest format. This is where you'll find the famous "Golden Records", real little nuggets of data!

This architecture is demanding and requires:

  • An acceptance strategy including: unit testing, data integration testing, non-regression testing and end-user testing;
  • A deployment strategy, as data sources are often not the same in the various subsidiaries and/or countries of a holding company;
  • A data migration strategy, as data is often voluminous when first loaded and needs to be "sliced and diced" in order to pass through the pipes. In addition, the sources that feed it have different refresh rates.
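The Landing, Staging and Match & Merge layers described above can be sketched in a few lines of Python. Everything here is an assumption for illustration purposes: the source names, the fields, and the priority order in the matrix; a real MDM package does this at scale, with far richer matching rules.

```python
# Hypothetical MDM pipeline sketch: Landing -> Staging -> Match & Merge.
# All source names, fields and priorities are illustrative assumptions.

landing = [  # 'Landing' area: raw rows, as received
    {"source": "crm_oracle", "cust_id": "A1", "birth_date": "1970-01-02",
     "phone": "01 02 03 04 05", "country": "FR"},
    {"source": "billing_sybase", "cust_id": "A1", "birth_date": "1970-02-01",
     "phone": "+33102030405", "country": "FRA"},
]

COUNTRY_CODES = {"FR": "FR", "FRA": "FR"}   # Transcoding / reference data
PRIORITY = {                                # Source prioritization matrix,
    "birth_date": ["crm_oracle", "billing_sybase"],   # designed with the
    "phone": ["billing_sybase", "crm_oracle"],        # Business: first wins
}

def standardize_phone(raw):
    """Standardization: reformat any French number to international form."""
    digits = "".join(ch for ch in raw if ch.isdigit())
    if digits.startswith("33"):
        digits = "0" + digits[2:]
    return "+33" + digits[1:]

def stage(row):
    """Staging: standardize and transcode one raw row."""
    return {**row,
            "phone": standardize_phone(row["phone"]),
            "country": COUNTRY_CODES[row["country"]]}

# Cleansing: reject rows with no customer key, then stage the rest
staged = [stage(r) for r in landing if r["cust_id"]]

def merge(rows):
    """Match & merge: build one golden record, field by field, following
    the prioritization matrix; the XREF keeps the merged rows' history."""
    by_source = {r["source"]: r for r in rows}
    golden = {"cust_id": rows[0]["cust_id"], "country": rows[0]["country"]}
    for field, sources in PRIORITY.items():
        winner = next(s for s in sources if s in by_source)
        golden[field] = by_source[winner][field]
    return golden, by_source  # (golden record, cross-references)

golden, xref = merge(staged)
print(golden)  # birth_date won by crm_oracle, phone by billing_sybase
```

Note how the golden record takes its date of birth from one source and its phone number from another: that field-by-field arbitration, driven by the matrix agreed with the Business, is exactly what distinguishes Consolidation from a naive "last write wins" load.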

On the other hand, compared to the other architectures mentioned above, a well-designed MDM produces results that have a positive impact on the business ... provided that only master data is used (and not transactional data)!


In fact, unfortunately, this is where it's tempting - very tempting - to implement the famous '360 Customer' ...

In other words, a vision consolidating all the events and interactions carried out closely or remotely with or for the same customer.

In this case, the data multiplication factor is potentially 1000 per customer!

Imagine that for a given customer, over a period of 5 years, you could potentially have:

  • 1 to 5 workshop visits;
  • 10 to 20 incoming calls to customer services;
  • 5 to 10 sales reminders;
  • Hundreds of data probes for 1 connected product purchased (imagine if several products were purchased by the same customer);
  • Dozens of visits to your website and/or mobile applications;
  • Dozens of commercial e-mails and SMS messages;
  • A few letters;
  • Mailings of sales brochures;
  • Thousands of pieces of data from the social networks of your dozens or even hundreds of brands.
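A quick back-of-the-envelope calculation makes the point. The exact figures are assumptions (I've pegged "hundreds" at 500 and "thousands" at 2,000), but the order of magnitude is what matters:

```python
# Rough upper bounds (assumed) for one customer's events over 5 years,
# to illustrate the '360 Customer' data multiplication factor.
events_upper_bound = {
    "workshop_visits": 5,
    "incoming_calls": 20,
    "sales_reminders": 10,
    "iot_probes": 500,        # "hundreds" per connected product (assumed: 500)
    "web_mobile_visits": 50,  # "dozens" (assumed: 50)
    "emails_sms": 50,         # "dozens" (assumed: 50)
    "letters": 5,
    "brochure_mailings": 10,
    "social_network_data": 2000,  # "thousands" (assumed: 2000)
}
total = sum(events_upper_bound.values())
print(total)  # 2650 events for a single customer: roughly a 1000x factor
```

Multiply that by millions of customers and the master-data pipeline described earlier is clearly the wrong vehicle for it.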

In short, do you think an MDM is designed to ingest and disseminate so much information in real time on the scale of a large group's ecosystem?

Aren't we dealing here with something bigger, something more 'Big Data'?

Personally, I think it's more interesting and realistic to create a 360 in another form, one that may seem less ambitious at first glance, but which nevertheless "rocks"!

Indeed, by 360 customer I mean 'Consolidation of referential data', i.e. a consolidated vision of the referential data around the customer that links:

  • The Who (sales channel, salesperson)?
  • The What (product, service, contract, guarantees, policies sold)?
  • The Where (business line, processing center, branch, store)?
  • The How (the analytical accounting code used to qualify the sales or management operation)?
  • The To Whom (which customer? which product segment? which market?)

In this case, the MDM will be used to manage each of these referential dimensions independently and in distinct instances, to ultimately produce interconnected repositories in a multi-business, urbanized vision.
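A minimal sketch of what such interconnected repositories look like (the keys, table names and values are all hypothetical): each referential dimension lives in its own repository, managed in its own MDM instance, and a sale record simply carries the keys that link them.

```python
# Sketch of interconnected referential repositories (hypothetical data):
# one small dict per dimension, keyed by its own repository identifier.
who =     {"S01": {"channel": "direct", "salesperson": "J. Doe"}}
what =    {"P10": {"product": "payroll service", "contract": "annual"}}
where =   {"B02": {"business_line": "retail", "branch": "Lyon"}}
how =     {"C70": {"analytic_code": "70-SALES"}}
to_whom = {"CU1": {"segment": "small-pro", "market": "FR"}}

# A transactional sale record only carries the keys of each dimension
sale = {"who": "S01", "what": "P10", "where": "B02",
        "how": "C70", "to_whom": "CU1"}

# Resolve each key against its repository to build the consolidated view
view = {dim: repo[sale[dim]] for dim, repo in
        [("who", who), ("what", what), ("where", where),
         ("how", how), ("to_whom", to_whom)]}
print(view["to_whom"]["segment"])  # 'small-pro'
```

The transactional data stays where it belongs; only the clean referential keys travel with it, which is what makes the later consolidation (profitability, market view, financial figures) fast and unambiguous.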

This '360 Master Data' will be the consolidation of all these referential data, and will be used to:

  • Analyze customer profitability;
  • Identify the best salespeople, sales areas and business lines;
  • Understand the market;
  • Simply consolidate financial figures more rapidly.

All this is achieved by providing clean, universally-recognized data that can then be manually linked to transactional sales or management data.

Does this vision speak to you? Would you like to add to it? Do you see others?

If you've already had one or more of these experiences, what answers have you come up with?

Do you now see the benefits of Master Data Management?

In short, don't hesitate to post your comments and make your contribution!
