The changing data landscape: where we’ve been and where we’re headed
Big data: undoubtedly one of the biggest buzzwords of the 21st century and an asset that’s now worth more than gold itself.
As companies across the globe scramble to compete in a rapidly changing consumer landscape, data – and the ability to effectively collect, mine, secure and analyse it – has become a key focus and also a prime challenge for organisations keen to remain on the cutting edge in their industry.
Over the past 20 years, the data landscape has expanded dramatically as business leaders discovered its incredible potential for sharpening market focus and targeting new offerings, while paving the way for increased efficiency and productivity.
Data is now widely accepted as the best tool we have to discover what is and isn’t working in a business, so we can eliminate the former and double down on the latter. The speed and breadth at which we can use data to do this has also changed considerably over time, in line with advances in technology.
Let’s take a brief walk down memory lane and see how the data landscape has evolved over the past two decades and where it’s headed now…
The rise of the data warehouse
From the earliest days of bartering through to the industrial revolution, the ability to track and evaluate business data has been an essential element of success. However, the work was painstakingly manual, which limited its potential.
As technology advanced, customer touchpoints exploded and so too did the data opportunities.
We began to ask:
· How much data can we collect?
· Where do we store all our data?
· How do we protect data to preserve customer privacy?
· Can we link data to tell a story?
· How do we use data to drive decision making?
With the introduction of on-premises data warehouses, organisations could now collect data from various business areas and collate it for reporting.
Opportunities were still extremely limited, though. Data needed to be highly structured, and it could only be interpreted by technical experts who often needed weeks to prepare it for reporting.
Data lakes and the expansion of cloud
As demand for data grew, so too did the speed at which business areas demanded it. This led to a rise in specialist data roles, such as data scientists, who could address the demand while helping organisations get better equipped for a data-driven future.
It’s also around this time that we saw on-premises data lakes become an integral part of the data architecture. Where data warehouses could only store structured data, data lakes could hold unstructured data too – a key capability in the big data landscape for capturing available data that may be needed in the future.
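To make that distinction concrete, here is a minimal, illustrative Python sketch (not from the original article) contrasting the two patterns: a schema-on-write load into a warehouse-style SQL table, versus landing a raw, semi-structured event in a lake-style folder. The table, file and field names are all hypothetical, with SQLite and a local directory standing in for a real warehouse and lake.

```python
import json
import sqlite3
from pathlib import Path

import pandas as pd

# --- Warehouse-style storage: schema-on-write -------------------------
# Structured rows must fit a predefined schema before they can be loaded.
warehouse = sqlite3.connect("warehouse.db")
warehouse.execute(
    "CREATE TABLE IF NOT EXISTS sales (order_id INTEGER, region TEXT, amount REAL)"
)
orders = pd.DataFrame(
    [
        {"order_id": 1, "region": "APAC", "amount": 120.50},
        {"order_id": 2, "region": "EMEA", "amount": 89.99},
    ]
)
orders.to_sql("sales", warehouse, if_exists="append", index=False)

# --- Lake-style storage: schema-on-read --------------------------------
# Raw, semi-structured events are landed as-is; structure is applied later,
# only when (and if) the data is needed for analysis.
lake = Path("lake/raw/clickstream")
lake.mkdir(parents=True, exist_ok=True)
event = {"user": "u-42", "action": "view", "payload": {"page": "/pricing", "ms": 734}}
(lake / "event-0001.json").write_text(json.dumps(event))
```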
This is where many organisations find themselves now: with an on-premises data warehouse and lake as the foundation of their architecture.
Currently, the biggest questions we see facing these businesses are:
· How do we scale data?
· How do we govern data?
· How do we democratise and facilitate self-service of data?
· How do we create “safe zones” for data scientists to play and innovate?
· How do we meet all our compliance obligations?
It’s these questions that technology leaders like Informatica and Microsoft are working together to answer, and we’re doing it by leveraging the latest advances in technology: cloud and artificial intelligence.
Partnerships are paving the way
Cloud and AI open up enormous opportunities for the collection, storage, security, governance and analysis of data.
For more than 15 years, Informatica and Microsoft have been delivering best-of-breed technology that does the heavy lifting for our customers. Now, with the release of Microsoft’s new Azure Synapse technology, the silo between the data warehouse and the data lake dissolves – giving you a comprehensive, rich data story that unifies operational and analytical intelligence.
With such extensive capability, data management and governance are absolutely critical for the enterprise.
For example, how do you find that crucial balance between democratised and controlled data so various personas can get the data they need without accessing confidential information?
How do you develop and enforce policies for data sharing between different business areas or projects, while meeting your privacy obligations?
And what role can AI and machine learning play in data governance, and in helping to build 360-degree customer profiles for enriched CX?
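As a rough illustration of that first governance question, the toy Python sketch below shows persona-based column masking over a customer table. The personas, columns and policy are entirely hypothetical, and in practice this enforcement would live in the data platform’s governance layer rather than in application code.

```python
import pandas as pd

# Hypothetical policy: which personas may see which confidential columns.
COLUMN_POLICY = {
    "email": {"data_steward"},
    "credit_limit": {"data_steward", "finance_analyst"},
}

def serve_view(df: pd.DataFrame, persona: str) -> pd.DataFrame:
    """Return a copy of df with confidential columns masked for this persona."""
    view = df.copy()
    for column, allowed in COLUMN_POLICY.items():
        if column in view.columns and persona not in allowed:
            view[column] = "***masked***"
    return view

customers = pd.DataFrame(
    [
        {"customer_id": 1, "segment": "retail", "email": "a@example.com", "credit_limit": 5000},
        {"customer_id": 2, "segment": "business", "email": "b@example.com", "credit_limit": 25000},
    ]
)

# A marketing analyst sees segments but not emails or credit limits;
# a data steward sees everything.
print(serve_view(customers, "marketing_analyst"))
print(serve_view(customers, "data_steward"))
```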
We will be discussing these questions and many more at our upcoming APAC Data Strategy Forum on Wednesday November 11, 2020. To join us you can register here.