Business: The Artificial Intelligence Hype Cycle
Audience: Leadership, decision makers, AI enthusiasts
We previously discussed the hype cycle as applied to artificial intelligence, and took note of a YouTuber's video pointing out multiple instances of exaggerated or outright false claims on the part of prominent leaders in AI. We intend to stay clear of accusations, but don't mind passing on the concerns of others on this subject. Adoption of cutting-edge technology is a risky business, and we feel compelled to remind readers of the inherent risks involved.
It is clear that we are in an inflated hype cycle with respect to artificial intelligence; one need only scan a LinkedIn feed to see countless posts on AI: ads, cheat sheets, analyses, predictions, promises, and more, yet very little genuinely innovative, probative, or even usable content. The big-name AI providers have, regrettably, mired themselves in the hype quicksand. We have considerable reason to believe, based on direct experience with multiple AIs, that nearly all of the more extravagant claims are nonsense, and that the rest contain more than a little hot air.
After thinking on this topic for some weeks, we took the Gartner Hype Cycle curve shown above and simply added the red dashed line visible in the image, just above the label "Peak of Inflated Expectations". That line is our conceptual guess at the real impact of artificial intelligence. It is merely a guess, based mostly on the large amount of work we have encountered in applying local AI to our own automation project.
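For readers who would rather see the idea than imagine it, the short Python sketch below reproduces the concept: a stylized hype curve with our red dashed "real impact" line beneath it. Every curve shape and parameter here is our own illustrative assumption, chosen purely to draw the picture, not fitted to Gartner's data or anyone else's.

```python
# A stylized hype cycle versus a slower "real impact" curve.
# All shapes and parameters are illustrative assumptions.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 10, 500)

# Hype: a sharp early peak of inflated expectations, followed by
# a slow logistic climb toward the plateau of productivity.
peak = np.exp(-((t - 2.0) ** 2) / 0.5)
plateau = 0.5 / (1.0 + np.exp(-(t - 6.0)))
expectations = peak + plateau

# Our guess at real impact: a steadier adoption curve that never
# reaches the heights promised at the peak.
real_impact = 0.45 / (1.0 + np.exp(-(t - 5.5)))

plt.plot(t, expectations, label="Expectations (hype cycle)")
plt.plot(t, real_impact, "r--", label="Real impact (our guess)")
plt.xlabel("Time")
plt.ylabel("Expectations")
plt.legend()
plt.show()
```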
In this respect, our estimate is independent, and conceptual rather than data-driven. There is no pressing need for data on this topic yet; if we find or construct reasonable yardsticks for such measurements, we will certainly collect and later publish the resulting data. We're fairly certain someone out there is already counting the number of articles and posts on the subject and plotting them over time; that would be about as close to measuring expectations as one can get. We won't spend our time on that, since developing the technology for real use is far more important, and quite time-consuming enough.
In effect, we are following the red dashed line trajectory, and deliberately so. The work is difficult and exacting, as any reader can see by perusing our writings on the topic. We also apply every available shortcut, particularly using several AIs as digital concierges. The need to check the answers supplied by these helper AIs attenuates any value we extract from them, often severely so, and we are nearly certain that all other honest toilers in the vineyard are experiencing the exact same phenomenon.
We see a relatively slow baseline AI adoption curve, accompanied by predictable and often public overshoots, where companies bet their futures on AI benefits that fail to materialize. We know this is already happening, thanks to the widespread, and in our opinion inadvisable, adoption of applicant tracking systems (ATS) as a fundamental component of corporate hiring. We'll repeat our conclusion, since it appears solid and bears close examination: we believe an ATS might be effective at filtering out unqualified candidates for a given open position, but we do not believe any ATS in existence today, nor any conceivable future system, can possibly select the top ten percent most qualified candidates.
This job is so fundamentally and uniquely human, and requires such intense attention to detail and piercing, thoughtful intuition, that we are frankly surprised decision makers trust machines to tell them who is best for a job. AI understands human language only statistically, and cannot make reliable decisions very far from its pre-training. When it comes to hiring for emerging technology positions, where a candidate's suitability is difficult to determine even under favorable conditions, the notion that a machine can pick the top 20 or so candidates out of perhaps 200 or more applicants is simply ridiculous. We're not sure a human can do it. How did the corporate world come to believe it could trust machines with such a mission-critical task?
Now, perhaps readers in a decision-making capacity can see how the hype cycle operates. We have a definite, common problem: too many applicants for too few open positions. Makers of AI systems designed to alleviate this problem claim their products can solve it, produce data that appears to back up their claims, and then, either deliberately or haplessly, exaggerate those claims beyond reasonable expectations. Customers of such products are typically innocent of the detailed, extensive AI experience that might otherwise counsel caution. They are in business, with deadlines, management expectations, and goals that must be met, and so might be forgiven for buying into the hype.
Nevertheless, AI technology, like all technologies, has limitations. We are highly motivated, for example, to use AI for limited, pre-programmed decision making, but only for decisions that are routine and well understood in advance. We would never trust any AI to innovate, never trust any AI to make new decisions at any frontier, and would never trust AI to make hiring decisions for us. The notion is risible, unworthy of serious consideration. Yet exactly this phenomenon plays out every day in the corporate world.
We believe what we have seen: AI has strictly limited utility, and will, over time, follow an adoption curve much like the one we have proposed above with our red dashed line. Hapless decision makers who forget the lessons of the past, like the overconfident 24-year-old CEOs who littered the now-blasted heath of the Internet boom twenty-five years ago, will find, to their consternation, that the hyped promises of the current AI boom are no better than the overheated fever dreams of the past generation of Internet hucksters. Their results will be similar, and deservedly so.
We are quite confident that AI has present but limited utility; we wouldn't put so much effort into the subject if we weren't. At the same time, we run experiments and apply common sense. We know, for example, that the AIs we are now using are essentially black boxes to us. The content of their pre-training is unknown and difficult to determine, which motivates us to construct a comprehensive, relatively rapid exam we can give our AI bots to learn something about how they were trained.
Once this exam has been formulated and tested, we can use it both to compare various bots and to measure our efforts at improving and training them. Without such a yardstick, we fear we might very well inhale our own exhaust and hallucinate non-existent benefits in the technology we develop. No thanks, been there, done that, still have the coffee cup.
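To make the idea concrete, here is a minimal Python sketch of such a yardstick. The sample questions, the crude keyword scoring rule, and the ask() callable interface are all illustrative assumptions, not our actual exam, which remains under construction.

```python
# A minimal sketch of an exam harness for comparing AI bots.
# Questions, scoring rule, and interface are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ExamItem:
    prompt: str
    expected_keywords: List[str]  # crude stand-in for a real rubric

EXAM = [
    ExamItem("In what year did Apollo 11 land on the Moon?", ["1969"]),
    ExamItem("Which SQL clause filters grouped rows?", ["having"]),
]

def grade(ask: Callable[[str], str], exam=EXAM) -> float:
    """Score any bot exposed as a prompt -> answer callable."""
    correct = 0
    for item in exam:
        answer = ask(item.prompt).lower()
        if all(k in answer for k in item.expected_keywords):
            correct += 1
    return correct / len(exam)

# The same exam applied to two bots yields directly comparable scores:
#   score_a = grade(bot_a_generate)
#   score_b = grade(bot_b_generate)
```

The essential design point is that the bot under test is reduced to a plain prompt-in, answer-out callable, so the identical exam can be applied to any model, local or hosted, before and after our training efforts.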
We hope this article reminds our readers to exercise healthy skepticism when it comes to AI. Indeed, with the recent advent of GPT4All, and possibly other AI ecosystems we haven't yet evaluated, anyone can download and run their own local AIs and see for themselves what the tech can and cannot do, something we heartily recommend.
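For the curious, getting started takes only a few lines of Python with the gpt4all package. The model file named below is one published example and is an assumption on our part; substitute whatever model you prefer to download.

```python
# A minimal local run with the gpt4all Python package
# (pip install gpt4all). The model name is one published example.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # fetched on first use
with model.chat_session():
    reply = model.generate(
        "In two sentences, what is the Gartner hype cycle?",
        max_tokens=120,
    )
    print(reply)
```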
We have a clear business interest in economically healthy customers. Over-reliance on false or exaggerated claims made by purveyors of AI products can compromise that health. We prefer to err on the side of caution and good common sense: In God we trust; all others must bring data.
Remember the hype cycle. It's very real, and has teeth. It can bite.
More information about Overlogix can be found at Welcome to Overlogix! Our online portfolio can be found at our master index. Our articles on applied AI are indexed at our Applied AI index, business topics at our B2B Business Index, and our TL;DR series of rapid introductions to AI and related topics lives at our AI Mini-Wiki. AI news, including our list of curated links to articles of interest, can be found at our AI News page.
Happy reading! More to come!