Data Management & Analytics predictions for 2021 and beyond
Top three predictions that may extend beyond 2021:

  • Data analytics shall become an indispensable part of every business meeting; hence, PowerPoint shall be replaced with interactive dashboards and AI-generated stories.
  • Knowledge Graphs shall enable ubiquitous analytics over all data types and formats, e.g. structured, unstructured, images, video, IoT feeds, etc.
  • The trailblazing done by Data Scientists shall be organized and harmonized by Knowledge Architects.

Main trends for 2021:

  • The new approach to data architecture: a multi-phase, multi-speed data pipeline. This replaces Data Warehouse/Data Lake–centric data platforms, where data accumulation, integration, augmentation and packaging for consumption were all done in the same monolithic structure. Data Lakes may still be part of the pipeline where they are the best fit: schema-less “as is” data accumulation, i.e. an exact copy of the ingested data. Those who prefer to stay in the SQL world can still use a Data Warehouse (for accumulation only), adhering to the same main principle: storing data “as is”, without any transformations.
  • Master, Reference and Metadata Management (hereinafter referred to as “MDM”) shall become more and more important, as it provides the base and the definitions for data integration and augmentation. Properly done, MDM can also produce the core of an Enterprise Knowledge Graph.
  • Since analytical use cases grow exponentially in “Volume”, change with increasing “Velocity” and appear in ever-expanding “Variety”, it is fair to say that we are entering the era of “Big Analytics”.
  • Big Analytics can no longer rely on SQL schema-based data integration; instead, data must be integrated, augmented and packaged for a variety of use cases, and, in the fast lane, this has to happen in near real time. Hence the need for explicit, machine-readable configurations that perform integration, augmentation and packaging in memory, rather than in persisted database structures.
  • Data engineering, therefore, shall shift from building “brick-and-mortar” data warehouses towards managing smart enterprise configurations for data aggregation and augmentation (often referred to as “Knowledge Graphs”).
  • The overwhelming majority of data users (whose numbers are skyrocketing with the trend towards self-service BI and citizen data science) are reluctant to learn too many technologies; they demand “WYSIWYG” interfaces or nicely presented SQL.
  • SQL shall remain the main data access/interface technology (until quantum computing becomes mainstream), but more and more vendors shall follow Snowflake's and Vertica's example: using NoSQL technologies under the covers while exposing the resulting data via a SQL interface.
  • Big Analytics shall also require a much higher degree of automation in data management (DM) pipelines; therefore, DM technologies with embedded AI and the ability to drive DM processes from Knowledge Graphs are becoming hot sellers (subject to an acceptable degree of maturity). This is the only way for current Data Management technology vendors to remain relevant in the market. Those not following the AI-driven Data Management trend risk being ousted by the growing number of BI and Analytics vendors who offer Data Management / Data Preparation modules as part of their integrated (i.e., very convenient to use) offerings.
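To make the configuration-driven idea above concrete, here is a minimal, purely illustrative sketch of in-memory integration and augmentation driven by a declarative configuration rather than a persisted database schema. All names (the datasets, the `run_pipeline` function, the derived field) are hypothetical and invented for this example; real implementations would use a proper pipeline or graph engine.

```python
# Sketch: the integration logic (which datasets to join, on what key,
# which derived fields to add) lives in a machine-readable config,
# so changing the pipeline means editing data, not code.

def run_pipeline(config, sources):
    """Join two in-memory datasets and add derived fields per config."""
    key = config["join_key"]
    # Index the left-hand dataset by the join key.
    joined = {row[key]: dict(row) for row in sources[config["left"]]}
    # Merge matching right-hand rows in memory.
    for row in sources[config["right"]]:
        target = joined.get(row[key])
        if target is not None:
            target.update(row)
    results = list(joined.values())
    # Augmentation: compute derived fields declared in the config.
    for name, fn in config["derived"].items():
        for row in results:
            row[name] = fn(row)
    return results

customers = [{"id": 1, "name": "Acme"}, {"id": 2, "name": "Globex"}]
orders = [{"id": 1, "total": 120.0}, {"id": 2, "total": 80.0}]

config = {
    "left": "customers",
    "right": "orders",
    "join_key": "id",
    "derived": {"total_with_tax": lambda r: round(r["total"] * 1.13, 2)},
}

enriched = run_pipeline(config, {"customers": customers, "orders": orders})
```

The same pattern scales conceptually: swap the `config` dict for an enterprise Knowledge Graph describing entities, relationships and derivations, and the hand-written join for a graph-aware engine.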