Business (NOT) at the Speed of Thought
Alan Turing, Bubbles and Bees
20 years on from Bill Gates' Business @ the Speed of Thought, a ground-breaking read heralding a new age in commerce, it's an appropriate time to wonder: how far have we come...? Remember those high and heady days, the infamous dotcom bubble? A time of mania for all things Internet. Among the many catch-cries, there would no longer be travel agents, stock brokers or even banks. These were "non value adding intermediaries" who would be shed from the ecosystem as surplus baggage. The acronym generator went crazy... we had dot Coms, dot Cams and dot Bams, we had "stickiness" and "WAP", started putting an "e" in front of everything and hallucinated our way into the "new economy" in the "web-based world". It was almost a full-time job keeping up with the techno jargon. In Australia we even had a mineral exploration company that overnight claimed it was becoming a "dotcom" and would now be "an internet business", whatever that meant.
Heady days indeed. In the industry I'm most familiar with, there would no longer be Freight Service Providers (Freight Forwarders), because of the rise of online exchanges and marketplaces where buyers would deal directly with carriers and simply cut out the middle-man. (But it turns out not all middle-men are created equal... some actually do add value and do fulfil a purpose.) Suddenly we had a whole new arsenal of technology to finally solve those age-old problems of "Supply Chain Visibility", to go paperless and create a "seamless flow" of information across the whole international supply chain. Happy days.
Perhaps the pointy end of all this, though, was the money markets themselves. The fever and frenzy with which the dotcom fire spread was simply breathtaking. Again, in Australia we had many market darlings, among them One.Tel, Solution 6, Sausage Software and so on. These captains of the digital economy would generate rivers of pure gold for investors as they streamlined and automated their way to e-business utopia, crushing the old and obsolete competition or anyone else in their path. The only thing to fear was fear itself, and of course missing out on the next once-in-a-lifetime investment opportunity. Perhaps the one other thing to fear was missing the latest edition of the "Rivkin Report".
Hype much?
On global markets, particularly the NASDAQ, things were, in a word... INSANE. (Fed Chairman Greenspan was more restrained, citing "irrational exuberance".) Companies were literally changing their names to claim to be dotcoms. "Get big fast" and "growth over profits" were the new market fundamentals, and even "huge" just wasn't big enough. AOL and Time Warner got hitched early in 2000 (a deal that has long since been universally ridiculed) and "slow growth" or "old economy" stocks (ironically the ones that still believed in profit) were dumped in order to get on the bandwagon. Y2K came and went without a hitch and markets had only one direction... upward. In a rising market every fool can make money, but markets cannot rise forever (especially those fueled by hype)... sooner or later the devil calls to collect his dues...
After the party was over, and the thumping hangover of cold hard reality set in, it was a picture of devastation. By October of 2002, following 9/11 and the accounting scandals of Enron and WorldCom, some 5 trillion USD had been wiped off the market. 5 trillion! That's about half of the entire US GDP for that year, vaporised. The NASDAQ fell almost 80% from its peak and the new reality of the "New Economy" started to sink in. Comparable or notable falls were experienced in most developed countries.
What we had all witnessed was not the dawning of a glorious new age of technology, but sadly just another massive hype cycle. A bubble the size and scale of which was unprecedented. The significance of the technology and the potential it held was lost in the melee, and the technology itself became a casualty. More than 50% of all the new-breed dotcom companies "failed forward quickly" (went bust), and those that were left found their cause set back years if not decades. Once again it turns out hype cycles and bubbles are not good for business. Gates was right about a number of things (albeit over a longer timeline). But the title of the book is fair game, and even as a cute metaphor it is dead wrong. We were never going to do business at the speed of thought, or anything remotely like it. Never have done and probably never will. The notion was simply the spirit of the times and is both a product of, and agent for, the hype of the day. In fact Microsoft itself was found guilty of serious anti-trust violations in 2001. You could be forgiven for wondering how much of their stellar success was due to true innovation and how much might have been good old-fashioned American monopolism... (1)
Not many voices called hype on the front side of the whole saga; the naysayers were few and far between. There were a few isolated warnings, drowned out by the echo chamber of hype which had mesmerised the stampeding herd.
Fast forward 20 years and we are at a nexus of emerging technologies again, each with its own bubble of hype. Search "Blockchain hype cycle" or "IoT hype" just for starters. In fact Bitcoin, the poster child for blockchain, and the wider crypto movement were the standout hype meltdown event of 2018. The blurring of the lines between what is actually possible with new technologies and what is being claimed or marketed is at full stretch. Look up "AI Winter" or "AI hype cycle" and it's an even more compelling example.
I think, therefore I am: AI then and now
It is almost Freudian that Gates chose the speed of "thought" as the descriptor here. It turns out computing and thought have long been bedfellows; there's a curious symbiosis that's been there from the very beginning. The visionary Alan Turing, widely regarded as the father of modern computing and AI (and on whom the blockbuster film The Imitation Game was based), in his seminal paper of 1950 posed the game-changing question "can machines (meaning programs) think?". What is almost universally missed in that same paper is a single sentence about halfway through: "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." Confirming that in Turing's own mind it was absurd to seriously contemplate machines thinking in any human sense. That statement is made in the direct context of separating facts from conjecture (read: hype) in the future development of computing. Prophetic indeed. Marvin Minsky, another leading figure and part of the famous pioneering 1956 Dartmouth AI conference group, was more sanguine, writing in 1961 that "within our lifetime machines may surpass us in general intelligence". What has pretty much followed since then is a pattern of moving the goal posts, simply extending the timelines further out as each claimed milestone was missed. By 1982, Minsky himself had revised his previously optimistic viewpoint, saying, "I think the AI problem is one of the hardest science has ever undertaken". Another more recent example: "Artificial Intelligence will reach human levels by around 2029" – Ray Kurzweil, 2014. That's not to say there's been no progress, on the contrary, there's been great progress, but as always we need to separate fact from fiction, hype from reality. All the while of course this has been the stuff of movie fantasy, from films like '2001: A Space Odyssey' back in 1968 through to documentaries like 'The Man vs The Machine' and 'AlphaGo' more recently.
In 1997 IBM's Deep Blue beat then world chess champion Garry Kasparov. What you were never told, however, was that Deep Blue had been "trained" by several Grandmasters in preparation. All the moves, strategies, combinations and sequences of possibly scores of the best chess players in the world were fed programmatically into a machine. That machine then used brute-force processing, or sheer computing power, to defeat Kasparov (not a surprising outcome). "Deep Blue's success was essentially due to considerably better engineering and processing 200 million moves per second". If we want to call that intelligence, fine, but for the record that same intelligence could not distinguish red from blue or a dog from a cat... unless we "re-programmed" it. Following that tournament of course IBM's share price jumped, and Kasparov demanded a rematch, but it never came. (2)
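Purely to give a flavour of what "brute force" means in code, here is a minimal sketch of depth-limited game-tree search, played out on a toy take-the-last-stick game rather than chess. It is emphatically not Deep Blue: the real system added alpha-beta pruning, custom chess hardware and hand-tuned evaluation functions, none of which appear here.

```python
# Minimal depth-limited minimax: exhaustively search every move to a fixed
# depth and pick the best one. This is the "brute force" idea, stripped bare.

def minimax(state, depth, maximising, moves, apply_move, evaluate):
    """Return (score, best_move) for the player to move at `state`."""
    legal = moves(state)
    if not legal:                         # terminal: the player to move has lost
        return (-1 if maximising else 1), None
    if depth == 0:                        # out of search budget: fall back to a heuristic
        return evaluate(state), None
    best_move = None
    best_score = float("-inf") if maximising else float("inf")
    for move in legal:
        score, _ = minimax(apply_move(state, move), depth - 1,
                           not maximising, moves, apply_move, evaluate)
        if (maximising and score > best_score) or (not maximising and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy game: players alternately take 1, 2 or 3 sticks; taking the last stick wins.
moves = lambda sticks: [n for n in (1, 2, 3) if n <= sticks]
apply_move = lambda sticks, n: sticks - n
evaluate = lambda sticks: 0               # neutral score at the depth cut-off

score, best = minimax(10, 10, True, moves, apply_move, evaluate)
print(f"With 10 sticks left, the search says take {best} (predicted outcome {score})")
```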
2011, and IBM's Watson takes on and beats the world's best Jeopardy (quiz game) champions. Again, what hasn't been talked about is that the equivalent of the entire Wikipedia database, plus all of the previous Jeopardy quiz questions and answers as well as several other sources, were loaded into Watson beforehand. For the record, IBM Watson had 90 servers, each using an eight-core processor with four threads per core (a total of 2,880 processor threads) and 16 terabytes of RAM. Meaning Watson could process some 500 gigabytes, or about a million books, per second. For anyone who knows this stuff, that's enough hardware and firepower to practically run a small country. Is it so surprising that it won a quiz game? I bet an Excel spreadsheet can calculate faster than I can too!
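Just to sanity-check those figures, here is the back-of-the-envelope arithmetic. The 500 KB of text per book is my own assumption for illustration, not a figure from IBM.

```python
# Back-of-the-envelope check on the Watson hardware figures quoted above.
servers, cores_per_server, threads_per_core = 90, 8, 4
threads = servers * cores_per_server * threads_per_core
print(f"Processor threads: {threads}")              # 2,880, as quoted

bytes_per_second = 500 * 10**9                      # ~500 GB/s claimed throughput
avg_book_bytes = 500 * 10**3                        # assumed ~500 KB of text per book
print(f"Books per second: {bytes_per_second / avg_book_bytes:,.0f}")   # ~1,000,000
```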
In 2016 Google DeepMind's AlphaGo defeated the long-standing world champion of the popular Chinese board game Go. This was notable in that this particular game is arguably more complex and sophisticated, and thus more challenging for a machine, than the previous examples. Again the win must be put into the context of the sheer computing and modelling power brought to bear on the challenge. AlphaGo was loaded with 30 million different board positions from 160,000 real-life games, taken from a Go database. Overall, training of literally tens of millions of games went into the tuning of AlphaGo in preparation. As much as the more hardline supporters in the AI camp want to emphasise that Go is more a game of intuition and judgement, the bottom line is it's still about pattern recognition and mathematical probability, and superior processing speed and power will win every time. For all its intelligence, AlphaGo could only do one thing... play Go. In fact if the game board in the experiment had been switched to literally anything other than the standard 19 x 19 configuration, AlphaGo would have been dead in the water. As a footnote to this, DeepMind went on to develop a later version called AlphaGo Zero (with different algorithms) which, just to give a sense of scale, after playing 4.9 million games against itself was able to beat the original AlphaGo 100-0. Certainly brings a whole new meaning to another popular buzzword... gamification. (3)
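To give a feel, and only a feel, for what "learning by playing millions of games against itself" means, here is a toy self-play loop that learns a value table for the same trivial stick game, purely from its own games. AlphaGo Zero couples deep neural networks with Monte Carlo tree search at a vastly larger scale; the game, learning rate and exploration rate below are my own illustrative choices.

```python
# Toy self-play learning: play against yourself, record the positions visited,
# and nudge each position's estimated value toward the eventual game result.
import random
from collections import defaultdict

value = defaultdict(float)            # estimated value of a position for the player to move
ALPHA, EPSILON, GAMES = 0.1, 0.2, 20000

def play_one_game():
    sticks, history = random.randint(5, 20), []
    while sticks > 0:
        legal = [n for n in (1, 2, 3) if n <= sticks]
        if random.random() < EPSILON:
            move = random.choice(legal)                           # explore
        else:
            move = min(legal, key=lambda n: value[sticks - n])    # leave the opponent the worst spot
        history.append(sticks)
        sticks -= move
    return history                     # whoever made the last move took the last stick and won

for _ in range(GAMES):
    history = play_one_game()
    outcome = 1.0                      # +1 for the side that moved last, alternating backwards
    for position in reversed(history):
        value[position] += ALPHA * (outcome - value[position])
        outcome = -outcome

# After training, positions that are multiples of 4 should look bad for the player to move.
print({n: round(value[n], 2) for n in range(1, 13)})
```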
These examples are all double-edged. On the one hand they demonstrate real progress with computing and AI. Yet at the same time, they are little more than public spectacles and marketing stunts. Further, the current state of AI and the current crop of AI systems need to be understood in terms of modern hardware capability. The vastly more powerful and lower-cost processors of today have allowed computer arrays that can handle the enormous amounts of data and the sheer volume of processing tasks required for these systems to function. But even this will reach a threshold soon, with Moore's Law (processor improvement) set to peak within a few years, meaning that the ability to effectively double processing power and speed every couple of years will cease.
AI: where to from here?
To state the obvious, we are not doing business at the speed of thought, and we are not teaching machines to think. Nor will we in the foreseeable future. So apart from winning the odd board game, what is actually happening in the world of AI?
If we keep in mind the enormous resources and scale that go into making AI work, it's clear these systems are not a silver bullet for every situation. For companies like Google and Amazon, with vast pools of data to play with, AI can tease out predictions and correlations that were previously obscure. But in the real world, such data-rich environments are the exception rather than the rule.
In the field of complex medical diagnostics, another system in the US named "Deep Patient" (an unsupervised deep learning network) has had remarkable and unexpected success. Not without anomalies though. Researchers provided Deep Patient with data covering hundreds of variables (e.g., medical history, test results, doctor visits, drugs prescribed) for about 700,000 patients. The system was able to discover patterns in the hospital data that indicated who was likely to get liver cancer soon. Somehow it could also largely anticipate the onset of psychiatric disorders like schizophrenia, which is notoriously difficult to predict even for psychiatrists. On face value this sounds amazing, but there is a catch. Commenting later on the workings of Deep Patient, lead researcher Joel Dudley sadly remarked, "We can build these models, but we don't know how they work." (4)
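For readers wondering what an "unsupervised deep learning network" over patient records even looks like in rough outline, here is a minimal denoising-autoencoder sketch (one common flavour of unsupervised network) on synthetic data. It is emphatically not the Deep Patient model; the synthetic records, layer sizes and learning rate are assumptions for illustration only.

```python
# Minimal denoising autoencoder in NumPy: compress noisy "patient records"
# into a small learned code, by training the network to reconstruct the
# original record. Downstream models would then work on those codes.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_features, n_hidden = 1000, 50, 8      # stand-ins for records, variables, code size
X = rng.random((n_patients, n_features))            # synthetic, normalised patient data

W_enc = rng.normal(0, 0.1, (n_features, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_features))
lr = 0.01
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    noisy = X * (rng.random(X.shape) > 0.2)          # randomly drop 20% of inputs (the "denoising" part)
    H = sigmoid(noisy @ W_enc)                       # encode to a compact representation
    X_hat = sigmoid(H @ W_dec)                       # reconstruct the original record
    err = X_hat - X
    # Backpropagate the reconstruction error through both layers.
    delta = err * X_hat * (1 - X_hat)
    grad_dec = H.T @ delta / n_patients
    grad_enc = noisy.T @ ((delta @ W_dec.T) * H * (1 - H)) / n_patients
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

codes = sigmoid(X @ W_enc)                           # the learned representation per patient
print(codes.shape)                                   # (1000, 8): compact, but hard to interpret
```

The last comment is the point of the Dudley quote: the compressed codes can be predictive while remaining opaque even to the people who built the model.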
If we follow the constant barrage of headlines only, and get caught up in the marketing speak, of course AI is on the cusp of solving everything from this week's Lotto numbers to time travel. But if we pay close attention to those who work in the field, we hear a different story: a very real concern about unrealistic expectations of AI, and a strong grasp of the current limitations on almost every research front. There are countless pilots happening across many industries and many projects. The common theme is that we have a long way to go, and these are very complex problems to solve.
In fact the greatest concerns in the AI research domain centre on the damage that the current hype cycle is doing, and on the risk that the rush to commercialise these technologies ends up hindering the progress required to bring them to market in the first place. Remember that dotcom meltdown we spoke of earlier, or even the crypto crash more recently? We must be clear, hype is bad bad bad for business! It sets back genuine research and development. It hijacks the normal course of progress to bring technologies to viability. It creates wildly unrealistic expectations which eventually crash and result in funding vacuums. 20 years on from the dotcom fiasco we really should know better...
Navigating a better way
Can we not do better? Can we not harness the incredible power and supercomputing potential of AI for more than party tricks and high-frequency trading? Can we not move beyond this boom-bust cycle of hype? And in so doing move closer to fulfilling Turing's and others' original vision? One obstacle, it seems, is the name itself. The word intelligence is already misleading because of its human-brain connotations. Perhaps "machine-aided processing" or "machine-supported" would be better. But the perceptual die is cast and it's likely too late to rebrand AI.
In addition to the promising work with autonomous vehicles, voice recognition and visual systems, perhaps there are far greater prizes to be discovered through AI.
We are yet to unlock any of the deeper mysteries that may hold such extraordinary and broad potential. Surprisingly, insect navigation may well be a crucial subject area. What is known as path integration in ants, for example, remains a mystery and yet may hold the keys to future positioning and navigation systems. (5)
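Stripped right back, path integration is essentially dead reckoning: keep a running vector sum of every step taken, and the negative of that sum points straight home. The mystery is how a brain the size of a pinhead does this so robustly; the arithmetic itself is simple, as this minimal sketch shows (the outbound path is invented purely for illustration).

```python
# Path integration (dead reckoning): accumulate each step as a vector, and the
# reverse of the accumulated vector gives the direct bearing and distance home.
import math

outbound = [(30, 2.0), (75, 1.5), (120, 3.0), (200, 1.0)]   # (heading in degrees, distance)

x = y = 0.0
for heading_deg, distance in outbound:
    x += distance * math.cos(math.radians(heading_deg))
    y += distance * math.sin(math.radians(heading_deg))

home_bearing = math.degrees(math.atan2(-y, -x)) % 360
home_distance = math.hypot(x, y)
print(f"Home vector: bearing {home_bearing:.1f} degrees, distance {home_distance:.2f}")
```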
Further, navigation in bees has profound significance. It turns out the humble bee performs astounding feats, nothing short of miraculous, on a daily basis as it learns its routes from day zero, forages over large distances, and performs amazing calculations in highly variable conditions, supposedly using dynamic landmarks and non-linear navigation. (We desperately need to know this already, just for global pollination.) What kind of complex computational systems are at work here and what could we learn from them? How could we turn this kind of analysis into models that may well unlock new dimensions for route planning and optimisation, for travel and transportation in confined, complex and highly congested spaces? What might the implications be even for urban planning and utilisation of public spaces? These may well be part of the big leap forward for AI, and currently research is just scratching the surface. (6)
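As one hedged illustration of how that route-planning framing gets modelled, bee "traplining" is often described as trial-and-error shortening of a round trip between flowers, which looks a lot like a classic travelling-salesman heuristic. The sketch below uses a standard 2-opt improvement on random flower coordinates; it is an analogy for the optimisation problem, not a claim about how bees actually think.

```python
# Toy trapline-style route improvement: keep reversing segments of the visit
# order while the round trip keeps getting shorter (a standard 2-opt heuristic).
import math, random

random.seed(1)
flowers = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(12)]

def tour_length(order):
    return sum(math.dist(flowers[order[i]], flowers[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(flowers)))
improved = True
while improved:
    improved = False
    for i in range(1, len(order) - 1):
        for j in range(i + 1, len(order)):
            candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            if tour_length(candidate) < tour_length(order):
                order, improved = candidate, True

print(f"Visit order: {order}, round trip length ~ {tour_length(order):.1f}")
```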
Endeavours such as these give us scope to develop and use, to test and perfect, AI networks all at the same time, networks which could be revolutionary both now and into the future. (7)
20 years on from those early dotcom days, and so far, there are still travel agents, stock brokers and even banks. Personally I'm thankful there are still freight forwarders (though it doesn't keep me awake at night). What actually happened in the intervening period was more like an evolution. Despite all the claims, one thing didn't suddenly stop, with another newer thing suddenly replacing it. There was adjustment, adaptation, things made room and cohabited, things bent and flexed and absorbed, new models came about over time. Ultimately things blended. Yes, the advent of the internet changed many things, some quite radically, others not so much. If we want to get a sense of where things might be headed in the future, perhaps the recent past gives a guide. Certainly change is coming, it will involve adaptation over time, and it will be a blend. In the meantime practise hard at your chess, quiz games and Go, because you never quite know who you might be up against!
And avoid the Hype...
(1) Wikipedia, "Dot-com bubble". https://en.wikipedia.org/wiki/Dot-com_bubble
(2) Levy, Steven. "What Deep Blue Tells Us About AI in 2017", Wired Backchannel, May 2017. https://www.wired.com/2017/05/what-deep-blue-tells-us-about-ai-in-2017/
(3) Miller, Ron. "Artificial Intelligence Is Not as Smart as You (or Elon Musk) Think", TechCrunch, July 2017. https://techcrunch.com/2017/07/25/artificial-intelligence-is-not-as-smart-as-you-or-elon-musk-think/
(4) Knight, Will. "The Dark Secret at the Heart of AI", MIT Technology Review, April 2017. https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/
(5) Dasgupta, Sakyasingha. "Bug brains help AI solve navigation challenges", Medium, September 2017. https://medium.com/@DSakya/bug-brains-help-ai-solve-navigation-challenges-2611da7e7a61
(6) Nott, George. "How a brain the size of a sesame seed could change AI forever", Computerworld, September 2018. https://www.computerworld.com.au/article/647401/how-brain-size-sesame-seed-could-change-ai-forever/
(7) Bergstein, Brian. "The Great AI Paradox", MIT Technology Review, December 2017. https://www.technologyreview.com/s/609318/the-great-ai-paradox/
Good reads
https://www.theatlantic.com/technology/archive/2012/11/noam-chomsky-on-where-artificial-intelligence-went-wrong/261637/
https://futurism.com/artificial-intelligence-hype