Data Analytics 3.0

Is there a Data Analytics 3.0?

For those of us who have spent years studying big data, there has been no major breakthrough in the underlying technologies; we have seen only minor performance tweaks and new priorities set by the business.

What we do understand is that as large numbers of companies begin capitalizing on vast new sources of unstructured, fast-moving information (big data), we need to plan for better solutions.

Some of us might call it Data Analytics 3.0: a new resolve to apply powerful data-gathering, online data streaming, and high-speed analysis methods not just to a company's operations but also to the experience of both internal and external customers.

It is evidence that a new era is dawning. When a new way of thinking about and applying a strength begins to take hold, businesses are challenged to respond in many ways. 

Change comes fast to every business. New players emerge, competitive positions shift, new technologies must be mastered, and talent is drawn toward an exciting new era.

Those of us who can connect the dots understand that competing on data analytics is being rethought. We need to perceive the general direction of change toward Data Analytics 3.0 to be best positioned to drive that change.

The Evolution of Data Analytics

Data Analytics 1.0 for business intelligence

Data Analytics 1.0 was about gaining an objective understanding of business phenomena and giving the business the fact-based comprehension to go beyond intuition when making decisions. Data about production processes, sales, customer interactions, and more was recorded, aggregated, and analyzed.

This was the era of the enterprise data warehouse, used to capture information, and of business intelligence software, used to query and report it. Readying a data set for inclusion in a warehouse was difficult. Analysts spent much of their time preparing data for analysis and relatively little time on the analysis itself.

The great majority of business intelligence activity addressed only what had happened in the past; it offered no explanations or predictions.

Data Analytics 2.0 for big data

Data was no longer generated purely by internal transaction systems. It was also sourced externally: from the internet, IoT sensors of various types, public data initiatives such as the Human Genome Project, and captures of audio and video recordings.

Big data could not fit or be analyzed fast enough on a single server, so it was processed with Hadoop, an open source software framework for fast batch data processing across parallel servers. To deal with relatively unstructured data, companies turned to a new class of databases known as NoSQL. Much information was stored and analyzed in public or private cloud-computing environments. 
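As a hedged illustration of that batch, divide-and-conquer style of processing, here is a classic word count written for Hadoop Streaming, which lets plain Python scripts act as the map and reduce steps; the script names are illustrative, and this is a minimal sketch rather than a production job.

    # mapper.py -- Hadoop Streaming map step: emit a (word, 1) pair per token.
    import sys

    for line in sys.stdin:
        for word in line.strip().split():
            print(f"{word}\t1")

    # reducer.py -- Hadoop Streaming reduce step: sum the counts per word.
    # Hadoop sorts mapper output by key, so all lines for a word arrive together.
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip("\n").split("\t", 1)
        if word == current_word:
            current_count += int(count)
        else:
            if current_word is not None:
                print(f"{current_word}\t{current_count}")
            current_word, current_count = word, int(count)
    if current_word is not None:
        print(f"{current_word}\t{current_count}")

Each script is a separate file; Hadoop distributes them across the cluster's servers, which is exactly the parallel batch pattern described above.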

Other technologies introduced during this period include “in-memory” and “in-database” analytics for fast number crunching. Machine-learning methods were used to rapidly generate models from fast-moving data. It became possible to embed analytics and optimization into everyday operational business decisions.
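As a hedged sketch of this kind of rapid, in-memory model generation, the following uses scikit-learn on synthetic data held entirely in RAM; the features and values are invented purely for illustration.

    # Fit a simple classifier entirely in memory on synthetic data.
    # All features and values here are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(seed=42)
    X = rng.normal(size=(10_000, 5))                  # five numeric features
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=10_000) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    model = LogisticRegression().fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")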

The quantitative analysts of this era came to be called data scientists, possessing both computational and analytical skills. Data scientists are now needed to work on new product offerings and to help shape the business.

Data Analytics 3.0 for streaming data

The essence of Analytics 3.0 is not only to improve internal business decisions but also to add more value-added products and services.

Businesses will need to recognize a host of related challenges and respond with new capabilities, positions, and priorities.

Organizations will need to integrate large and small volumes of data from internal and external sources, in structured and unstructured formats, to yield new insights through predictive and prescriptive models.
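As a hedged sketch of that integration step, here is a toy pandas example that joins an internal structured table with crude features extracted from unstructured external text; every table, column, and value is an invented assumption.

    # Join structured internal data with features derived from external text.
    # All tables, columns, and values are invented for illustration.
    import pandas as pd

    orders = pd.DataFrame({                     # internal, structured
        "customer_id": [1, 2, 3],
        "total_spend": [120.0, 640.0, 87.5],
    })
    reviews = pd.DataFrame({                    # external, unstructured
        "customer_id": [1, 2, 3],
        "review_text": ["great service", "slow delivery, poor support", "ok"],
    })

    # Crude feature extraction from the free text.
    reviews["review_words"] = reviews["review_text"].str.split().str.len()
    reviews["negative_hint"] = reviews["review_text"].str.contains("poor|slow")

    features = orders.merge(reviews.drop(columns="review_text"), on="customer_id")
    print(features)

In a real predictive or prescriptive model the text features would be far richer, but the join-then-model shape stays the same.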

Businesses needing real-time data feeds and quick decisions will need better and faster technologies. To complement them, new analytical methods and machine-learning techniques are being used to produce insights at a much faster rate. The challenge in the 3.0 era is to adapt operational, product development, and decision processes to take advantage of what the new technologies and methods can bring forth.
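One hedged sketch of such a real-time pipeline uses Spark Structured Streaming to aggregate events as they arrive; the socket source, host, and port here are illustrative stand-ins for a production feed such as Kafka.

    # Count words arriving on a socket in near real time.
    # The socket source, host, and port are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import explode, split

    spark = SparkSession.builder.appName("StreamingSketch").getOrCreate()

    lines = (spark.readStream
                  .format("socket")
                  .option("host", "localhost")
                  .option("port", 9999)
                  .load())

    counts = (lines.select(explode(split(lines.value, " ")).alias("word"))
                   .groupBy("word")
                   .count())

    (counts.writeStream
           .outputMode("complete")
           .format("console")
           .start()
           .awaitTermination())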

Businesses are embedding analytics into fully automated systems through scoring algorithms and analytics-based rules. Integrating analytics into systems and processes offers greater speed and makes it easier for decision makers to keep using data analytics.
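A hedged sketch of what such embedding can look like: a decision function where hard business rules run first and an analytical score decides the rest; the model, thresholds, and field names are all invented for illustration.

    # An automated decision combining analytics-based rules with a model score.
    # The thresholds, fields, and model interface are illustrative assumptions.
    def credit_decision(applicant: dict, model) -> str:
        # Hard business rules short-circuit the model entirely.
        if applicant["age"] < 18:
            return "reject"
        if applicant["existing_customer"] and applicant["arrears"] == 0:
            return "approve"

        # Otherwise a scoring algorithm (e.g. a fitted classifier) decides.
        score = model.predict_proba([applicant["features"]])[0][1]
        if score >= 0.8:
            return "approve"
        if score >= 0.5:
            return "manual_review"
        return "reject"

Because the rules and the score live in one function, the decision runs in milliseconds and every outcome is auditable, which is the speed benefit described above.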

To develop products and services, businesses need discovery platforms for data exploration, along with the requisite skills and processes. New data discovery solutions make it possible to determine the essential features of a data set without a lot of preparation.
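As a hedged example of that first discovery pass, a few lines of pandas can profile an unfamiliar data set before any heavy preparation; the file name is a hypothetical placeholder.

    # Profile an unfamiliar data set before heavy preparation.
    # The file name is a hypothetical placeholder.
    import pandas as pd

    df = pd.read_csv("new_source.csv")

    print(df.shape)                                        # rows x columns
    print(df.dtypes)                                       # inferred types
    print(df.isna().mean().sort_values(ascending=False))   # null share per column
    print(df.describe(include="all").T)                    # summary statistics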

When data analytics becomes an important aspect of the business, the Chief Data Officer's role is to oversee the building and use of analytical capabilities.

Predictive models use past data to forecast the future; prescriptive models go further, specifying optimal behaviours and actions. Prescriptive models involve large-scale testing and optimization and are a means of embedding analytics into key processes and behaviours. They provide a high level of operational benefit but require high-quality planning and execution in return.
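A hedged sketch of the step from predictive to prescriptive: fit a demand model on past prices, then search a grid of candidate prices for the revenue-maximizing action; all data and numbers are synthetic.

    # Predictive step: learn demand from past prices (synthetic data).
    # Prescriptive step: pick the price that maximizes predicted revenue.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(7)
    prices = rng.uniform(5, 20, size=200).reshape(-1, 1)
    demand = 400 - 15 * prices.ravel() + rng.normal(0, 10, size=200)

    model = LinearRegression().fit(prices, demand)      # predictive model

    grid = np.linspace(5, 20, 151).reshape(-1, 1)       # candidate actions
    revenue = grid.ravel() * model.predict(grid)        # expected outcome
    print(f"recommended price: {grid[revenue.argmax()][0]:.2f}")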

Data Analytics 3.0 also represents the ability to run a single analytics engine for large-scale processing of both batch and streaming data, accessible from common languages such as Java, Scala, Python, and R. Such engines should be able to run on standard Hadoop clusters, in the cloud, on Kubernetes, and on newer platforms.
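Apache Spark fits this description closely, and as a hedged sketch, the same DataFrame code can serve a batch source or, with readStream, a streaming one; the path and column names are illustrative assumptions.

    # One engine for batch and streaming: the same aggregation applies to
    # spark.read (batch) or spark.readStream (streaming, schema required).
    # The path and column names are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import avg

    spark = SparkSession.builder.appName("UnifiedSketch").getOrCreate()

    events = spark.read.json("s3://bucket/events/")     # batch source
    # events = spark.readStream.schema(events_schema).json(...)  # streaming variant

    summary = events.groupBy("region").agg(avg("latency_ms"))
    summary.show()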

With the data analysis that Data Analytics 3.0 enables, the age of AI, supported by technologies such as 5G, edge computing, GPUs, and intelligent IoT sensors, will become reality.

Businesses that want to prosper in the new data economy must fundamentally consider how the analysis of data can create value for themselves and their customers. 
