A Reality Check on AI
While computers have only been around since the 1950s, our fascination with artificial intelligence goes all the way back to early recorded human history. As a species, we can’t help but want to project human qualities onto non-human things.
The first known robot was built in ancient Greece, an automatic maid that would pour wine when a cup was placed in its free hand, all powered by a clever system of tubes and gears housed in its body. At the 1939 World’s Fair, Westinghouse unveiled a mechanical man named Elektro that wowed audiences with its ability to walk, talk, and even distinguish between colors. And today, the most valuable technology companies in the world – Microsoft, Apple, Alphabet, and Amazon – have poured billions into digital assistants that respond to sophisticated voice commands and even anticipate our needs, as real assistants would.
It's hardly a surprise, then, that the release of ChatGPT – a chat interface that generates human-like responses with surprising accuracy – sparked so much excitement. Modern AI has created what the late Charlie Munger called a Lollapalooza effect – in this instance, the effect of combining our craving for robot intelligence with the animal spirits that drive markets up and down. AI-related stocks have soared and were the belle of the ball in 2023. CEOs have been quick to announce investments in AI, even when it's unclear to what end. Economists, too, have been caught up in the mania, predicting that AI will drive huge leaps in labor productivity and economic output.
To be sure, we’re excited for what AI can bring. New forms of automation could help overcome the worldwide problem of aging demographics and a shrinking workforce. In medicine, predictive models that simulate drug trials – traditionally a cumbersome and expensive process – could shorten the development timeline for cures to diseases like cancer and Alzheimer’s. The possibilities feel endless, and one would be hard-pressed to think of an industry that won’t eventually be touched by AI in some way.
But the market’s zeal has also brought the overuse of terms like “neural networks” and “singularity,” as if to imply that AI is all-knowing and all-powerful – a portrayal that feels misleading. In examining the current state of the technology and considering the history of innovation cycles, we sense that the speed and magnitude of AI’s impact could fall short of expectations. We also think it’s too early to declare who the winners and losers of AI will be. To make the case, we’ll explore some economic realities that we believe have largely gone overlooked.
#1. Businesses have been using AI for decades, just under a different name.
At the center of buzz terms like machine learning and natural language processing is a familiar discipline: statistics. We’ve long applied statistics to business problems to draw relationships between inputs (like advertising expenses) and outputs (like sales growth) to make better decisions. These models have evolved to power modern tools like traffic navigation software, credit card fraud detection, and voice-to-text typing, but even the most sophisticated algorithms require some degree of human validation to be useful. Take the example of image recognition software. By their nature, computers only see ones and zeroes and thus need human judgment to define the arrangement of pixels that qualifies an image as a house, bird, or waterfall. In the same vein, AI projects require well-defined goals in order to deliver value to companies. Said another way, technology remains a supplement to, not a replacement for, effective leadership. Yet many starry-eyed business leaders would have us believe that AI can deliver immediately actionable insights if only one were willing to make the investment. This suggests to us a misplaced faith in AI, resulting in vague goals that risk dead-ending projects.
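To make the statistics framing concrete, here is a minimal, purely illustrative sketch in Python of the kind of input-output modeling described above – a least-squares line relating a hypothetical advertising budget to sales growth. The figures, variable names, and the $70k prediction point are all invented for demonstration; nothing here comes from a real dataset or from any particular company’s workflow.

```python
# Illustrative sketch only: toy figures relating a hypothetical advertising
# budget (the input) to sales growth (the output), the kind of statistical
# modeling described above. All numbers are invented for demonstration.
import numpy as np

# Hypothetical monthly data: ad spend in $ thousands, sales growth in percent
ad_spend = np.array([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
sales_growth = np.array([1.2, 2.1, 2.8, 3.9, 4.6, 5.4])

# Fit a simple least-squares line: sales_growth ~ slope * ad_spend + intercept
slope, intercept = np.polyfit(ad_spend, sales_growth, 1)

# A person still has to judge whether this relationship is plausible and
# whether a prediction outside the observed range means anything at all.
predicted = slope * 70.0 + intercept
print(f"Fitted line: growth = {slope:.3f} * spend + {intercept:.2f}")
print(f"Estimated growth at $70k of spend: {predicted:.1f}%")
```

Even in a toy example like this, a human has to choose the inputs, supply the labeled outputs, and sanity-check the fitted relationship – exactly the kind of validation and goal-setting the paragraph above describes.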
#2. History tells us that technology takes longer to integrate than you would think.
Who could argue against innovation? It’s brought us sanitation systems, leading-edge medicine, and the modern electrical grid. But when it comes to implementing new technology, inertia can be extremely difficult to overcome. Consider that it took decades for the telephone to reach just 50 percent household penetration because of the cultural adoption and network investment it required, or that 3D printing has not – and may never – reach the heights that many predicted despite its costs falling dramatically.
While it’s not difficult to imagine awesome use cases for AI, history tells us to expect obstacles to adoption. They could take the form of workers resistant to change, or of businesses simply unprepared to integrate AI, which requires large, robust data sets – data that can take many years to collect and organize – before it can draw meaningful conclusions. We’d also expect regulators to make implementation more difficult, especially as AI raises questions around ethics and intellectual property rights.
#3. Technology doesn’t always pave a straight path to better productivity.
Consider for a moment how many indispensable tools are available to us now that didn’t exist 30 years ago. The modern internet is an obvious one in how it’s removed so much friction from everyday life. The internet has put answers to just about any question at our fingertips, created online marketplaces that connect buyers and sellers from all over the world, and given us back the time that we used to spend flipping through an Encyclopedia Britannica or charting a road trip using a Thomas Guide. You might be astonished to learn, then, that productivity growth has been stuck near multi-decade lows.
One factor likely driving lackluster productivity growth is that technology brings unintended consequences. For example, the invention of email promised to facilitate better communication and faster decision-making. But studies argue that email has weakened our ability to focus on a single task for extended periods. In the same way, AI may bring uneven results. It could unleash a surge in fraud and identity theft, requiring expensive prevention efforts. It’s also not difficult to imagine a world in which a firehose of AI-generated entertainment options distracts us from better uses of our time.
#4. It’s far easier to spot a trend than to identify winners of a trend.
From automobiles to personal computers to web search, history is rife with examples of early movers that eventually fell behind the pack. At the center of AI mania have been GPUs, or graphics processing units, which are essential to AI models. But their standing within the industry is hardly secure. Though GPUs have an unbeatable architecture in these early days of AI model “training,” those models will eventually need to be deployed at scale, and it’s in this later stage that more conventional CPUs are purportedly more cost-effective. Consider that in the 1990s, Sun Microsystems – a long-forgotten name – had the best server hardware that money could buy as its shares reached sky-high valuations. Eventually, work was distributed across x86 workstations that were less powerful but far cheaper, opening the door to competition in the open network era. This is all to say that while we think AI will continue to shift the business landscape, history tells us that it’s too early to crown any champions.
AI is in a class of its own: awe-inspiring, but alarming in its potential; embraced by the most admired companies in the world, yet still very fuzzy as to what it can actually do. While we fully expect AI to bring changes to the workplace and what we consume, when the dust settles, history may show that it turned out to be another important installment in mankind’s march toward progress – an achievement worth celebrating even if it isn’t yet the stuff of science fiction.
Private Investor
9 months ago: Good thoughts. I would take issue, though, with point #2. The internet allows almost instant adoption of technologies, and the dominant companies have already acquired these eyeballs. New technologies that are just lines of code can be ported and go viral very quickly, rendering “history” perhaps not quite as instructive. AI stocks may be ahead of themselves, and we may find out more today with the Nvidia print, but past analogies from a linear world may not compare to a non-linear world. Thanks for the article!
It's a fascinating reflection, David Lin, CFA. Our instinct to humanize technology speaks volumes about our drive to understand and shape the world around us. At Intel, we're continually inspired by this as we advance AI to new heights – always with a touch of human imagination.