Self-Service Data Piloting: Collaborative, Frictionless.

In the information economy, where data has become every organisation's primary asset, data-driven strategies are now a competitive imperative across all business units. Data ultimately serves decision-making, whether the goal is business performance, financial results, or customer satisfaction. Forms of data piloting have evolved over time, and leaders must now find the best approach for their business among the different models that exist.


Centralised model:

In the past, organisations implemented highly centralised approaches to create data hubs. This approach relied on a small team of highly experienced data professionals armed with well-defined methodologies. To apply it to an enterprise data warehouse, for example, one must first define a central data model to collect and reconcile the data deemed relevant. That data is then remodelled into subsets (data marts) to match a domain or business problem, and remodelled again in a business intelligence tool, which provides a semantic layer, such as a data catalog, to be integrated into predefined reports. Only then can the data be used for analysis.
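The flow above can be sketched in miniature. This is a hypothetical illustration, not any particular product's pipeline: the table and column names (`sales_central`, `mart_revenue_by_region`, the regions and amounts) are invented, and an in-memory SQLite database stands in for the warehouse.

```python
# Hypothetical sketch of the centralised flow: raw data is conformed to a
# central model first, then remodelled into a domain data mart, and only
# then consumed through a predefined report.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# 1. Central model: one agreed-upon schema for the data deemed relevant.
cur.execute("CREATE TABLE sales_central (region TEXT, product TEXT, amount REAL)")
raw = [("EMEA", "widget", 120.0), ("EMEA", "gadget", 80.0), ("APAC", "widget", 200.0)]
cur.executemany("INSERT INTO sales_central VALUES (?, ?, ?)", raw)

# 2. Data mart: a subset remodelled for one business question (revenue by region).
cur.execute("""
    CREATE TABLE mart_revenue_by_region AS
    SELECT region, SUM(amount) AS revenue
    FROM sales_central
    GROUP BY region
""")

# 3. Only now is the data used for analysis, via a predefined report.
for row in cur.execute("SELECT region, revenue FROM mart_revenue_by_region ORDER BY region"):
    print(row)
# ('APAC', 200.0) then ('EMEA', 200.0)
```

The point of the sketch is the ordering: every step, schema first, mart second, report last, must exist before anyone can analyse anything, which is exactly where the resource bottleneck described below comes from.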


The problem with this centralised model lies in the lack of resources to make data available, quickly and accurately, to everyone who needs it. The other challenge is meeting the growing demand from end-users for new types of data.

"Social networks" model:

With the advent of big data, a much more agile approach to data management emerged: the data lake. Where the first model starts with data modelling and governance and only then explores the real data, in a top-down fashion, data lakes take the exact opposite approach. A data lake relies on raw data, which can be ingested with minimal initial implementation costs, usually on basic file systems. It is not necessary to know the content of the data upfront, because a structure can be created on top of it later. Data quality checks, security rules, filters, and so on can then be added.
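A minimal schema-on-read sketch makes this inversion concrete. Everything here is assumed for illustration: the file name, the record fields (`user`, `amount`), and the two quality rules are invented, with JSON-lines files on a local directory standing in for the lake.

```python
# Hypothetical schema-on-read sketch: raw records land in the "lake"
# untouched; structure and quality rules are applied only when reading.
import json
import os
import tempfile

lake_dir = tempfile.mkdtemp()
raw_path = os.path.join(lake_dir, "events.jsonl")

# Ingestion accepts everything as-is: no upfront model, minimal cost.
with open(raw_path, "w") as f:
    f.write('{"user": "ada", "amount": 42}\n')
    f.write('{"user": "bob"}\n')        # incomplete record is still accepted
    f.write("not even json\n")          # so is malformed input

def read_with_schema(path):
    """Impose structure at read time; quality checks added after the fact."""
    with open(path) as f:
        for line in f:
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue                # quality rule 1: drop malformed rows
            if "user" in rec and "amount" in rec:
                yield rec               # quality rule 2: require both fields

print(list(read_with_schema(raw_path)))
# → [{'user': 'ada', 'amount': 42}]
```

Note that the two bad records were ingested without complaint; the cost of dealing with them was merely deferred to read time, which is the trade-off the next paragraph describes.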

This more agile model can handle a larger volume of data sources and use cases. It also adapts to all audiences, although only the most experienced people can work with the raw data. It has multiple advantages over the previous model: it adapts to new data sources, use cases, and end-users; raw data can be ingested as you go; and changes are easier to implement. Nevertheless, it requires stricter data quality control measures and governance: because quality control is not built in, it must be added after the fact, as organisations expand to new use cases and end-users.

"Wikipedia" model:

What the data lake lacks is the ability to take control of data as it enters enterprise-grade systems, rather than after the fact. In addition, incoming data sources, introduced by different people in the same organisation, keep multiplying.

The middle way, then, is to take a collaborative approach to data piloting from the start, so that the most knowledgeable users within the organisation become content providers and curators. Working with data as a team from the initial phase is essential with this approach; otherwise, the work required to validate the reliability of the data becomes too time-consuming.

Organisations can take a Wikipedia-like approach, where anyone can collaborate on data curation, as long as they adhere to well-defined principles. This strategy commits the entire organisation to contribute to the process of transforming raw data into a reliable, documented, shareable asset.

Organisations can build a trusted, scalable system by leveraging intelligent, workflow-oriented self-service tools with built-in data quality controls.
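One way to picture "built-in" quality controls is a gate at the point of entry rather than a cleanup afterwards. The sketch below is purely illustrative and assumes everything it names: the fields (`email`, `age`), the validation rules, and the alert list are invented stand-ins for whatever rules an organisation's curators would declare.

```python
# Hypothetical sketch of built-in quality controls with alerting: every
# record passes through declared rules on entry, and failures raise an
# alert immediately instead of being cleaned up after the fact.
rules = {
    "email": lambda v: isinstance(v, str) and "@" in v,
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
}

alerts = []   # stand-in for notifying the data's curators

def ingest(record, store):
    failed = [field for field, check in rules.items() if not check(record.get(field))]
    if failed:
        alerts.append((record, failed))   # alert: which rules were broken
    else:
        store.append(record)              # only trusted data enters the system

store = []
ingest({"email": "ada@example.com", "age": 36}, store)
ingest({"email": "not-an-email", "age": 210}, store)

print(len(store), len(alerts))
# → 1 1
```

Because the check runs inside `ingest`, bad data never reaches the shared store, and the curators named by the collaborative model get a concrete, immediate signal about what to fix.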

Some highly regulated processes, such as the aggregation of risk data in financial services, and some specific data, such as consumer credit card information, require special attention. In these cases a bottom-up approach will not be sufficient, but the "Wikipedia" model can complement, rather than completely replace, a top-down approach, creating a hybrid model. While technology can help implement a collaborative approach to data piloting, enterprises must have the discipline to organise their data at a steady pace.

It is impossible to succeed in today's digital age without knowing what data you have and whether it is correct and reliable. Without the right framework, organisations lose control of their data, and bad data follows. The result is an inability to innovate, an inability to adapt to changing customer behaviour and needs, failed digital transformation initiatives, and much more, ultimately meaning decreased competitiveness and lost revenue.

In today's era, Self-Service Data Piloting is an essential step to automatically create a single source of truth and maintain it in real time. Vokse offers a Wikipedia-like data collaboration platform with built-in data quality controls and alerting.

It allows you to manage data actively and collaborate via an intelligent, workflow-oriented self-service framework. It empowers business and data teams to regain control of their data and, more importantly, make sense of it without relying on IT teams. Zero Deployment, Zero Friction, Zero Risk.


While defining the data piloting model to be applied is the responsibility of leaders, successful implementation must be a team sport that requires collaboration across organisations.


Learn more by visiting www.vokse.eu | www.dhirubhai.net/company/vokse-dpa OR give me a follow!
