What we’ve learned from Hadoop
The spring cleaning of dormant Hadoop projects touched a nerve. It’s been fashionable to say that “Hadoop is dead” for some time – at least since Gartner published studies showing declining use as of 2015. ZDNet colleague Andrew Brust’s post on the project purge went positively viral. Against that narrative, a few weeks back, we put out a post on Hadoop’s legacy. Credit Hadoop with dispelling our fears of what we used to term big data and triggering a virtuous cycle of innovation that has produced the rich landscape of data-oriented analytic services we have today.
So what have we learned from the history of Hadoop? It's a lesson in the trials and tribulations of open source, and in the fact that even when innovation hits dead ends, we still learn from our mistakes, experiment some more, and move on. We'll start with the obvious: community-based open source enabled Hadoop to go viral, but it was also the source of Hadoop's obstacles. Community-based open source is not always a clean process, especially when rivalries emerge that bring distractions. Nothing's ever perfect and the world ain't always pretty, but there's little question today that this decade of innovation yielded a vast panoply of options for analyzing data with what we used to call the three V's (volume, velocity, and variety), and made AI and machine learning possible.
So what's the biggest lesson we learned? Click here for our take.