AI in 2025: A Forecast for Business and Markets

Looking back on the first two years of consumer LLM chatbots, the era could have been scripted as sci-fi. Given the unprecedented risks unleashed on society, a creative screenwriter would have many pathways to choose from: a thriller like The Matrix (1999), or the evil AGI bot Proteus in the 1977 horror film Demon Seed.

However, a more appropriate film for the current AI environment might be less sci-fi than finance. One example might be American Psycho, which sheds light on “the surreal world inhabited by the financial industry's elite class, and the utter disconnect they have with reality”; another film from 2000, Boiler Room, is about pump-and-dump schemes. Let’s hope none of the major investment banks are facing a Margin Call like the 2011 film of the same name, as that movie was about the collapse of Lehman Brothers.

Back here in the real world, below are three plausible scenarios for 2025 for AI in business and markets.

Scenario number one: The LLM/Big Tech bubble deflates

Given recent warnings about the AI bubble from the ECB on systemic risk, and from Vanguard (the second-largest asset manager, with roughly $10T under management) on the risk of a correction, one would think the hype-storm in LLMs and the $8T bubble in Big Tech might be deflating. But irrational markets behave irrationally until they don’t. In my experience, irrational markets tend to behave irrationally during the correction phase as well, when unfortunately the same herding (mass hysteria) that creates bubbles tends to throw out the baby with the bathwater. Hence the long history of AI winters, the dot-com bubble and bust, the financial crisis, and so on.

Although the Big Tech bubble fueled by LLM and GenAI spending is in record territory, bubbles are certainly not new; they have recurred for centuries. Will the bubble burst in 2025, and if so, how severely?

Timing a burst with any confidence is impossible given the uncertainty around triggers. However, we can identify several that could deflate the bubble abruptly in 2025. Here are the five I consider most probable from my current perspective:

1. The Trump administration may find ways to reduce the power of Big Tech. Trump and several of his agency heads are openly critical of the Big Techs’ market power, and those agencies have significant power to reshape markets.

2. Either a court ruling or a law making it illegal to train on data owned by others without a license would likely trigger a sharp market correction in LLM and Big Tech valuations.

Although a SCOTUS ruling in 2025 may be premature given the court’s slow process, an order from a lower court could accelerate the timeline. For example, I’ve frequently asked a logical question:

Why hasn’t a judge ordered an injunction to stop training on the work of others while the courts sort this out? A ban on training without explicit permission in the form of a license would solve most of the problems caused by LLMs, including the extreme asset bubbles now weighing on the U.S. economy, not least in venture capital.

3. A geopolitical trigger. The most obvious example would be a Chinese attack on Taiwan, where the majority of advanced AI chips are still manufactured. Trade tension combined with economic pressure inside China is one scenario defense analysts worry about. Geopolitical risks are unusually high as we enter 2025, and many triggers are possible, including unforeseeable ones.

4. A purely financial event. We have yet to experience a correction of the mega asset bubble that formed during the pandemic. Experts in both the public and private sectors are warning about leverage in non-bank financial institutions (NBFIs), which now hold nearly half of the world’s wealth, and much of the leverage creating bubbles is due to lending from NBFIs.

“Vulnerabilities in NBFI, including pockets of hidden or excess leverage, remain a potential source of systemic risk,” the FSB wrote. “Combined with rich asset valuations in some markets, there is the potential for sharp price corrections in the event of a shock. Policy approaches need to be combined with improved monitoring to mitigate vulnerabilities.”

5. Investors wake up. This admittedly seems unlikely at the moment, with recent new valuation highs at Nvidia and LLM firms, but Microsoft’s stock has been relatively flat for most of the year, and the charts suggest that peak valuation may have occurred in July.

Since Microsoft enabled the LLM bubble with its investment in OpenAI, has been the largest investor in related infrastructure, and sells related services to customers, the company serves as a good leading indicator. So far, the results have been much less impressive than management expected or promoted.

"I think what investors are missing is that for every year Microsoft overinvests - like they have this year - they're creating a whole percentage point of drag on margins for the next six years," said Gil Luria, head of technology research at D.A. Davidson.

Scenario number two: A perfect landing

Any chart of Big Tech valuations over the past two years will confirm the take-off. The best case for the Big Techs and their LLM proxies would be continued abdication of responsibility by the U.S. federal government, including Congress, combined with a ruling by SCOTUS that in my view would be unconstitutional.

However, both are possible, even if seemingly unlikely. If regulators apply only a soft brush and investors continue to buy promises of massive future ROI in the GenAI hype-storm, the Big Techs could presumably continue to invest unprecedented levels of capital in what is still mostly an LLM bubble propped up by predatory capital from a small oligopoly. If everything goes the Big Techs’ way, their bubble might extend into 2026 and beyond.

Scenario number three: Rules-based AI scales

I think the most likely scenario is a combination of many small events that deflates the GenAI/LLM bubble, including Big Tech valuations, while rules-based AI systems scale. As I often say, we live in a rules-based civilization for very good reasons. Without rules that are actually enforced, our complex society rapidly moves toward anarchy. Unless LLMs are restricted and controlled, they will accelerate what has arguably already been a trend toward anarchy; many argue LLMs are already having that effect.

LLMs are inherently flawed from a security and human rights perspective, including private property and privacy, both of which are protected in the U.S. Constitution. SCOTUS has ruled that privacy is an implied right in several amendments and property rights are protected in several parts of the U.S. Constitution, even if those rights have been eroded over time.

Our approach at KYield

In our flagship system, the KOS, governance and security were designed in from inception, including access controls and protection of property and human rights. As I described in a private message recently, we didn’t jump out of an airplane before attempting to sew together a patchwork quilt for a parachute. Certain elements are required to achieve rules-based AI with levels of security approaching what is required of other industries. One of those elements is precision data management, hence the core of our patented AI system. Consumer LLM chatbots cannot provide the level of accuracy or security the majority of organizations require, at least not unless they run inside a system like our KOS that provides the necessary security and rules.
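To make the rules-based principle concrete, below is a minimal, hypothetical sketch. It is my illustration of the general pattern only, not the KOS itself, whose internals are proprietary: licensing and access rules are enforced in code before any generative model ever touches a record.

```python
# Hypothetical sketch of a rules-based gate in front of a generative model.
# Illustrates the general principle only; this is not KYield's KOS.

from dataclasses import dataclass

@dataclass
class Record:
    owner: str        # who owns the data
    licensed: bool    # is its use explicitly licensed?
    sensitivity: str  # "public", "internal", or "restricted"

@dataclass
class Request:
    user: str
    clearance: str    # "public", "internal", or "restricted"

CLEARANCE_ORDER = ["public", "internal", "restricted"]

def rules_allow(req: Request, rec: Record) -> bool:
    """Enforce the rules before the model sees the data: unlicensed data
    is never used, and sensitivity must not exceed the user's clearance."""
    if not rec.licensed:
        return False  # no license, no training or inference on it
    return (CLEARANCE_ORDER.index(rec.sensitivity)
            <= CLEARANCE_ORDER.index(req.clearance))

def answer(req: Request, rec: Record) -> str:
    if not rules_allow(req, rec):
        return "Denied: request violates data-governance rules."
    # The model is only reached after every rule passes.
    return f"Model output grounded in {rec.owner}'s licensed data"

print(answer(Request(user="analyst", clearance="internal"),
             Record(owner="Acme", licensed=True, sensitivity="internal")))
```

The design choice the sketch highlights is ordering: the governance check is a precondition of model access, not a filter applied to model output after the fact.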

We recently announced a new automotive division, led by Robert Hegbloom, that we believe provides a great example of how AI systems should scale in 2025 (Albuquerque Business First recently published a story about the new division). We’ve been collaborating with leaders in the auto industry for most of the year on an industry-specific version of the KOS (the first universal EAI OS), including integration with industry software and data. Strong governance, security, and the eight functions within the KOS, combined with industry- and customer-specific data, make a powerful combination, and we provide the KOS at a fraction of what most large companies are spending on GenAI alone today.

Since the KOS is focused on high-quality data tailored to each organization and individual, the GenAI/chatbot function is less generalized in the near term, but it’s far more relevant, therefore more valuable, and much less expensive. The other seven functions actually deliver the bulk of the ROI, not GenAI. We plan to use the auto division as a template for other industries (we’ve done deep dives in all major industries).

Bottom line: I have a very high level of confidence in our approach and very little confidence in the consumer LLM chatbot approach, despite the massive support from its strategic partners. If that strategic support, clearly intended to break through the Big Tech scale ceiling, doesn’t meet the needs of customers in a sustainable manner (as our KOS clearly does), then it will ultimately fail, and hundreds of billions of dollars will be burned, just as many investors fear.

It may seem counterintuitive in the current market, but it would be better for the Big Techs in the long term for more efficient and financially sustainable systems to scale. No question it would be much better for everyone else, not least customers and the greater economy.
