Overview of Monetised Data Packages
We often talk about treating data as an asset, but monetisation takes this further and treats data as a product. How do we go about this process?
Defining packages of data – self-contained groupings of data (often cloud-based) – that can be marketed as a standard offering is a cornerstone of data monetisation. Each package should be self-sufficient in terms of data availability, data quality, visibility of metadata and purpose.
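To make this a little more concrete, a package can be thought of as a small, self-describing record. The sketch below is purely illustrative – the field names and example values are assumptions, not a standard – and is written in Python:

```python
from dataclasses import dataclass, field

@dataclass
class DataPackage:
    """A minimal, hypothetical descriptor for a monetisable data package."""
    name: str                                                 # marketable name of the offering
    purpose: str                                              # business outcome the package supports
    datasets: list[str] = field(default_factory=list)         # data sets included in the package
    quality_checks: list[str] = field(default_factory=list)   # e.g. completeness and freshness rules
    metadata_visible: bool = True                              # whether descriptive metadata is published to buyers

# Illustrative example of a simple, standard package
vehicle_usage = DataPackage(
    name="Vehicle Usage (Standard)",
    purpose="Benchmark fleet utilisation",
    datasets=["trips", "odometer_readings"],
    quality_checks=["no missing vehicle IDs", "refreshed daily"],
)
print(vehicle_usage)
```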
Generally, the more standard a package is, the less complex its data and services are, and the lower its operational margins; it is these lower margins that make mass marketing viable. This does not make standard packages any less useful than highly tailored or sophisticated ones.
This article shows that the simplest solutions can often be the best.
Sourcing
The first step is determining which data to source. This isn’t a technology question but rather one driven by business outcomes: finding the right “domains” of interest, and the right content for those domains. A domain is something like “vehicle data” and includes all useful – and hence monetisable – data for the target of the domain.
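As a purely illustrative sketch (the domain names and content listed here are assumptions), a first pass at mapping domains to candidate content might look like this:

```python
# Hypothetical domains and candidate content for each; names are illustrative only.
domains = {
    "vehicle data": ["telematics", "service history", "fuel consumption"],
    "customer data": ["demographics", "purchase history", "support interactions"],
}

for domain, content in domains.items():
    print(f"{domain}: {', '.join(content)}")
```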
Purpose
But how do we determine what data could be useful to our customers? This is a creative process and, again, it is business-led. Perhaps the easiest way to generate ideas is to think about what we currently do with our data that is useful to us, and then to imagine: if we extended it with other information, broadened its scope, deepened its precision, or simply made it more accurate, could we achieve a better outcome that we could sell? You may be surprised by how much data is already there – if only we could structure it better!
Standard or not?
Part of this discussion involves thinking about how standard a product we are offering (perhaps mass market and low cost/margin) versus how bespoke and tailored it is. A good example is offering packages of data grouped by who drives the content, and by the structure and complexity of that content. If you have a data governance organisation in place, you will find your data stewards are well placed to help with this.
An example of such a structure is shown below.
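As a rough, hypothetical illustration (the tier names and groupings below are assumptions rather than a prescribed taxonomy), packages might be grouped like this:

```python
# Hypothetical grouping of packages by who drives the content and how complex
# (and hence how bespoke) that content is. Tiers and examples are illustrative only.
package_tiers = [
    {"tier": "Standard",   "content_driven_by": "provider",
     "complexity": "low",    "example": "pre-defined data sets, mass market, lower margin"},
    {"tier": "Configured", "content_driven_by": "provider and client",
     "complexity": "medium", "example": "standard data filtered or enriched for a client segment"},
    {"tier": "Bespoke",    "content_driven_by": "client",
     "complexity": "high",   "example": "tailored data and derived insights built to a client brief"},
]

for tier in package_tiers:
    print(f"{tier['tier']:<10} | driven by {tier['content_driven_by']:<19} | complexity: {tier['complexity']}")
```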
Form
Once ideas for the products have been brainstormed, the nature or “form” of each product should be established. Are we simply managing raw data for our clients, or are we doing more with it, such as curating it or deriving new data from it? Perhaps we are developing insights for them.
Technology is a big factor here. Big data platforms allow for aggregation and profiling of heterogeneous data sets, and cloud-based solutions allow for easier customer deployment as well as elasticity. But you should always ask: is this technology making the product better, and is it appropriate to the form?
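To give a feel for what moving beyond raw data can mean, here is a small, illustrative sketch (using pandas, with made-up values) that takes raw trip records and curates them into a per-vehicle profile that could underpin an insight:

```python
import pandas as pd

# Illustrative only: raw trip records with made-up values
raw = pd.DataFrame({
    "vehicle_id": ["V1", "V1", "V2", "V2"],
    "trip_km":    [12.4, 30.1, 5.2, 44.0],
})

# Curated / derived form: aggregate and profile usage per vehicle
curated = raw.groupby("vehicle_id")["trip_km"].agg(["count", "sum", "mean"])
print(curated)
```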
Specification
Irrespective of these choices, we need to define and specify the packages. This involves knowing the data needed internally (and ensuring its quality), but also allowing the product to be “discovered” and presented to clients and potential clients. This process of discovery is like the “packaging” a prospective buyer sees before taking something out of the store. We need to describe clearly (and attractively) the metadata, the services available, and non-functional requirements such as speed, frequency and availability.
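A specification of this kind might, purely as an illustration (the schema and values below are assumptions, not a standard), look something like this:

```python
# A hypothetical package specification: the "packaging" a prospective buyer sees.
# The schema and values are assumptions for illustration, not a standard.
package_spec = {
    "name": "Vehicle Usage (Standard)",
    "description": "Daily, quality-checked vehicle utilisation data.",
    "metadata": {
        "fields": ["vehicle_id", "date", "trip_km"],
        "owner": "Data Product Team",
    },
    "services": ["REST API", "bulk file export"],
    "non_functional": {
        "refresh_frequency": "daily",
        "availability": "99.5%",
        "max_query_latency_ms": 500,
    },
}
print(package_spec["non_functional"])
```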
Deployment
The final step in this process is building the products themselves. Here, time to market and agility are important, as products can rapidly become obsolete. Small, incremental deliveries, presented to client focus groups, will always be better than longer waterfall implementations.
Culture
Thinking about data not as operational or management-information silos, but as a useful and discoverable service, can produce a profound shift in how organisations culturally regard their data, not just externally but internally.
This openness and focus on deployment can have strong synergies with both Cloud and Data Mesh.
At this point we should also ask: when do we involve our clients in this process? At the ideas stage, or at final implementation?