A Revolution in Analytical Technology

It’s been 10 years since Jeanne Harris and I published our book, Competing on Analytics, and we’ve just finished updating it for early-fall (2017) re-publication. We realized during this process that there have been a lot of changes in the world of analytics, although some things have remained the same. The timeless issues of analytical leadership, change management, and culture haven’t evolved much in 10 years, and in many cases those remain the toughest problems to address.

But analytical data, technology, and the people who use them have changed a lot. I must confess that I didn’t anticipate how difficult updating the book would be (so please buy it when it comes out to make the effort worthwhile!). I thought that perhaps a few global Replace commands in Microsoft Word would do the trick—changing terabytes to petabytes, for example, and quantitative analysts to data scientists. Although we did make a few such easy changes, there were many others that required more than a simple word substitution.

I won’t go into all of them here, but below is an annotated list of some of the most impactful changes over the past decade to the world of analytical technology.

 Big Data – Interest in this term started around 2011, according to Google Trends. Of course, it began to take off earlier in Silicon Valley, with the rise of Internet behavior (clickstream) data. These new data sources led to a variety of new hardware offerings involving distributed computing. And the need to store and process this data in new ways led to a whole new raft of software, such as:

 Hadoop and the open-source revolution – Hadoop was necessary to store and do basic processing on Big Data, along with such scripting languages as Pig, Hive, and Python. Since then we’ve seen other open-source tools rise in popularity, such as Spark for streaming data and R for statistics. Acquiring and using open-source software is a pretty big change in and of itself, but each of these specific software offerings brought a set of new capabilities.

 Data lakes – Data lakes are Hadoop-based repositories of data. These don’t perform analytics themselves, but they are a great way to store different types of data—big and small, structured and unstructured—until they need to be analyzed (a minimal “store now, structure later” sketch follows this list).

 Operational analytics – Many organizations want and need to integrate analytics with their production systems—for evaluating customers, suppliers, and partners in real time, and for making real-time offers to customers. This requires a good deal of work to integrate analytics into databases and legacy systems.

 Componentization and micro-services – Integration with production systems is much easier when the analytics are performed by small, component-based applications or APIs (see the scoring-service sketch after this list). Even proprietary vendors like SAS are moving in this direction.

 Streaming analytics – The Internet of Things and other streaming data sources have made it increasingly desirable to analyze data as it streams into an organization. This often requires integration with some sort of event-processing technology (a small rolling-window sketch follows this list).

 Grid/In-memory analytics – Another big change in analytics has come from the hardware environment on which analytical computations run: grid architectures and in-memory processing. The payoff is speed, often order-of-magnitude reductions in the time it takes to run analytical calculations on data. In many organizations, the idea of submitting your analytics job and getting it back hours later is a distant historical memory.

 Cognitive technology – I’ve saved some of the most important technologies for last. A key assumption behind analytics in the past was that results were prepared for human decision-makers. But cognitive technologies take the next step and actually make the decision or take the recommended action (one of the sketches after this list shows a model acting on its own score). They are actually a family of technologies, including machine and deep learning, natural language processing, robotic process automation, and more. Most of these technologies have some form of analytics at their core and, to me, they have more potential for changing how we do analytics than any other technology.
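To make the data-lake idea concrete, here is a minimal sketch of the “store now, structure later” pattern. It is only an illustration: a local folder (lake/clickstream, a hypothetical path) stands in for an HDFS or S3 repository, and pandas imposes structure only when the data is actually analyzed.

import json
import pathlib
import pandas as pd

# A local folder stands in for an HDFS or S3 path; "lake/clickstream" is hypothetical.
lake = pathlib.Path("lake/clickstream")
lake.mkdir(parents=True, exist_ok=True)

# Land raw events exactly as they arrive, with no upfront schema ("schema on read").
events = [
    {"user": "u1", "page": "/home", "ms_on_page": 3200},
    {"user": "u2", "page": "/pricing", "referrer": "ad-campaign-7"},  # extra field is fine
]
(lake / "2017-04-24.json").write_text("\n".join(json.dumps(e) for e in events))

# Structure is imposed only when an analyst actually needs the data.
raw = [json.loads(line) for line in (lake / "2017-04-24.json").read_text().splitlines()]
df = pd.DataFrame(raw)               # missing fields simply become NaN
print(df.groupby("page").size())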
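The component-based approach can be as simple as wrapping a model behind a small HTTP service. The sketch below assumes Flask is installed, and churn_score() is a hypothetical placeholder for a real trained model; once the service is running, any production system can call the /score endpoint.

from flask import Flask, request, jsonify

app = Flask(__name__)

def churn_score(customer: dict) -> float:
    # Placeholder scoring rule; in practice this would call a persisted, trained model.
    return min(1.0, 0.1 * customer.get("support_calls", 0) + 0.02 * customer.get("days_inactive", 0))

@app.route("/score", methods=["POST"])
def score():
    customer = request.get_json()
    return jsonify({"customer_id": customer.get("id"), "churn_risk": churn_score(customer)})

if __name__ == "__main__":
    app.run(port=5000)  # any production system can now POST a customer record to /score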
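For streaming analytics, the essence is computing on events as they arrive rather than in batch. In this minimal sketch, a sensor_stream() generator is a hypothetical stand-in for a Kafka topic or an IoT feed; a sliding window is kept in memory and an alert fires when the rolling mean drifts out of range.

import random
from collections import deque

def sensor_stream(n=200):
    # Hypothetical stand-in for a Kafka topic or an IoT device feed.
    for i in range(n):
        yield {"device": "pump-7", "reading": random.gauss(70, 5) + (15 if i > 150 else 0)}

window = deque(maxlen=30)            # sliding window held in memory
for event in sensor_stream():
    window.append(event["reading"])
    rolling_mean = sum(window) / len(window)
    if len(window) == window.maxlen and rolling_mean > 80:
        print(f"ALERT {event['device']}: rolling mean {rolling_mean:.1f} exceeds threshold")
        break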
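And for cognitive technology, the key difference is that the score drives the action directly, with no analyst in the loop. The toy example below uses scikit-learn with illustrative data and an illustrative threshold; the point is simply that the prediction triggers a block-or-approve decision rather than a report.

from sklearn.linear_model import LogisticRegression

# Toy training data: [amount_in_dollars, is_foreign] -> 1 means the transaction was fraudulent.
X = [[20, 0], [35, 0], [50, 0], [900, 1], [1200, 1], [40, 1], [1500, 0], [25, 0]]
y = [0, 0, 0, 1, 1, 0, 1, 0]
model = LogisticRegression().fit(X, y)

def handle_transaction(amount, is_foreign):
    risk = model.predict_proba([[amount, is_foreign]])[0][1]
    # The system acts on the score directly instead of handing it to a human decision-maker.
    return "BLOCK" if risk > 0.5 else "APPROVE"

print(handle_transaction(1100, 1))   # likely BLOCK
print(handle_transaction(30, 0))     # likely APPROVE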

Just keeping up with these technologies and updating our analytics infrastructures to accommodate them is a full-time job. In addition, users need to be educated on which technologies make the most sense for their business problems.

And the old technologies haven’t gone away. Companies still use basic statistics packages, spreadsheets, data warehouses and marts, and all sorts of small data. In almost every organization, one can make a case that it’s a combination of the new and old analytical technologies that makes the most sense. In data storage, for example, the structured data that needs lots of security and access control can reside in warehouses, while the unstructured/prestructured data swims in a data lake. And if you want to understand your customers, you certainly need a combination of Big Data and small data—and the combination of methods and tools to analyze them.
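As a small illustration of that combination, the sketch below joins a governed customer table, as it might come from a warehouse, with raw clickstream records, as they might sit in a lake, to build one view of the customer. The tables are hypothetical and pandas stands in for the actual warehouse and lake tooling.

import pandas as pd

# Structured, governed data as it might come from a warehouse or mart (hypothetical table).
customers = pd.DataFrame({
    "user": ["u1", "u2", "u3"],
    "segment": ["gold", "silver", "gold"],
})

# Semi-structured events as they might sit in a data lake (hypothetical records).
clicks = pd.DataFrame([
    {"user": "u1", "page": "/pricing"},
    {"user": "u1", "page": "/checkout"},
    {"user": "u3", "page": "/home"},
])

# One view of the customer, built from both worlds.
activity = clicks.merge(customers, on="user", how="left")
print(activity.groupby("segment").size())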

This much change in 10 years surely constitutes a revolution. And the combination of new and old analytical technologies in most firms requires that we add more resources to manage them. That’s the downside of this revolution, but the upside is that we have a better understanding of our business environment than at any other time in history. Let’s hope we use it to make some great decisions, take informed actions, and introduce great new products and services based on data and analytics.

* This article was originally published by Data Informed on April 24, 2017.

Tom Davenport, the author of several best-selling management books on analytics and big data, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Initiative on the Digital Economy, co-founder of the International Institute for Analytics, and an independent senior adviser to Deloitte Analytics. He also is a member of the Data Informed Board of Advisers.

Emmanuel Okoh

Data Science Specialization at COGNIXIA COLABERA LEARNING SOLUTION

7y

Love to have a copy.

Ed Evans FBCS

Data Consultant @ Open Data Institute | Data and business, Data Strategy

7y

Thanks for the article, Tom. As you alluded to, the biggest barrier to seeing the benefits is the competency gap in organisations. Big Data has to solve business problems to be of value, so organisations need to have a realistic and evolving view of what can be achieved today and what it is going to take. Organisations - develop your competence in this area!

Debashish Sarkar

Enterprise Data Architecture Leadership

7y

Data security is equally important in any of its states: whether at rest or in motion, whether structured or semi-structured, whether in a data lake or in a data warehouse. Having a gap in data security or governance in the data lake, as implied in this article, is a sure recipe for a disaster that could completely immobilize an organization. Fortunately, mature solution/service providers like AWS, GC and MS Azure have tons of capabilities and compliance with PCI, HIPAA, etc. However, the solution owners need to understand this and even build additional capabilities, because the responsibility lies with the solution owners.

Joysy John MBE

EdTech Advisor | Innovation & Transformation Consultant | Top 100 BAME leaders influencing the tech sector

7y

Great article! What if we had a tool that could automatically map out the legacy and new systems? What if we could not only show the structure but also the behaviour of the system in a visual way to make big data transformation projects easier?

Angel Torres, MSF, MBA, ChFC®

Lead Financial Advisor | CFO Strategic Services

7y

I am looking forward to seeing your updated book on Competing on Analytics. Please let me know when it is published.
