The Brief on AI #18

The meteoric rise of infrastructural AI and the market's struggle to adapt

Recent developments

Quote of the week

The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.

Edsger W. Dijkstra

What to Make of This

Two years ago, Nvidia was worth $400bn. Now it stands at $3.3tn. While AI-driven companies surge in value, traditional sectors are struggling. What's happening in the market?

Investors are pouring money into AI infrastructure companies, while other companies are being penalized for not yet capitalizing on AI's promised benefits. The market appears to be overestimating how quickly companies can use AI to reduce costs and boost revenues. Investors want to see results now, but adopting this new technology is trickier than most think. It's going to be a gradual process, and it's going to take much longer than everyone is betting on. Three main factors are compounding the issue.

1. Building a layer on top of foundation models is necessary but risky

Investment in early-stage generative AI nosedived in Q1 2024. Many of the startups funded in Q3 and Q4 of 2023 arguably shouldn't have been, as they became obsolete after new frontier model releases. Companies like these have been dubbed 'thin wrappers'. Early-stage investors are now holding back to see how the foundation models evolve. This is an unfortunate development, because a user-friendly layer on top of foundation models is exactly what companies need to benefit from AI.

2. Limitations of LLMs are still unclear

In the context of limitations, there is a lot of talk about hallucinations. Let us stop using this word and be more specific about the unwanted results we are getting from machines. The recently published Stanford study on the reliability of leading AI legal research tools did a great job of this. The paper talks about 'groundedness' when the researchers ask whether the key factual propositions make valid references to relevant legal documents. It talks of 'correctness' when determining whether a response is factually correct and relevant. This is a great first step, but if we truly want to uncover what's holding us back in AI, we need to:

  1. Define what human intelligence is
  2. Match the capabilities of the human brain with what machines can do
  3. Describe and explain the differences
  4. Devise strategies to bridge these gaps

Erik J. Larson's The Myth of Artificial Intelligence: Why Computers Can't Think the Way We Do and Neil D. Lawrence's The Atomic Human both do a terrific job on the first three points above. The last one, devising strategies to bridge the gaps, is where the opportunity lies for new AI companies to add value. Which leads me to the third and maybe biggest issue of them all.

3. AI companies tell lies

To raise funding and land new clients as an AI company at the application level, you need a clear-cut answer to a problem that existing AI companies can't solve at the foundation level. And here's the thing: most of these companies are coming up with solutions that only partially solve a problem. They might be aware of this, but they can't get away with saying: "here is a solution that helps a little bit."

To understand what I mean, let's dive into Stanford's paper on legal research some more. We discussed how groundedness and correctness are important. But to get good results from a research tool, it also needs to synthesize facts while keeping the appropriate legal context in mind, select relevant documents based on factors other than text (e.g. legal hierarchy), and give more than one clear-cut answer to a legal prompt. Even if legal research tools were able to eliminate ungrounded and incorrect answers, which they clearly are not, we would still be left with these other crucial elements of the research process that currently can only be done by humans. That's a really hard sell to investors and potential new clients. So what happens? AI companies tell an oversimplified story. The Stanford paper gives three examples of this oversimplified story, set against an overview of what is actually happening in practice.
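To make the document-selection point above more concrete, here is a minimal, hypothetical sketch of ranking retrieved authorities by a blend of text similarity and legal hierarchy. The hierarchy weights, the blending parameter, and all names are illustrative assumptions on my part, not a description of how any existing tool works.

```python
# Hypothetical sketch: rank retrieved legal authorities by combining
# semantic similarity with their place in the legal hierarchy.
# All weights and names are illustrative assumptions.
from dataclasses import dataclass

# Toy hierarchy weights; a real system would need a far richer model of
# jurisdiction, precedential status, and recency.
HIERARCHY_WEIGHT = {
    "supreme_court": 1.0,
    "appellate_court": 0.7,
    "district_court": 0.4,
    "administrative_ruling": 0.2,
}

@dataclass
class Authority:
    citation: str
    court_level: str
    text_similarity: float  # e.g. cosine similarity from a retriever, in [0, 1]

def rank_authorities(candidates: list[Authority], alpha: float = 0.6) -> list[Authority]:
    """Blend text similarity with legal hierarchy; alpha is an assumed weight."""
    def score(a: Authority) -> float:
        hierarchy = HIERARCHY_WEIGHT.get(a.court_level, 0.1)
        return alpha * a.text_similarity + (1 - alpha) * hierarchy
    return sorted(candidates, key=score, reverse=True)

if __name__ == "__main__":
    candidates = [
        Authority("District Court X v. Y (2021)", "district_court", 0.91),
        Authority("Supreme Court A v. B (1998)", "supreme_court", 0.78),
    ]
    for a in rank_authorities(candidates):
        print(a.citation)
```

Even a toy example like this shows why "just retrieve the most similar text" is not enough: the highest-similarity document is not necessarily the most authoritative one.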

Benchmarks and patience

The Stanford paper obviously stirred up significant discussion and debate within the academic and technological communities. The Artificial Lawyer recently shared that Thomson Reuters and LexisNexis are going to create "…a consortium of stakeholders to work together to develop and maintain industry standard benchmarks across a range of legal use cases." This is exactly the direction we should be going in as an industry. We need to understand the limitations of the status quo, create solutions at the application level, and measure those solutions scientifically and objectively. This will take years, decades even. And as we progress there will be gradual change, because from what I've learned from Larson's and Lawrence's books, we are far from replicating the magical machine between our ears.
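As a thought experiment on what such benchmarks might measure, here is a minimal sketch that aggregates human labels for correctness and groundedness, loosely following the Stanford paper's framing. The data model, labels, and scoring below are my own assumptions for illustration; the consortium's actual benchmarks are still to be defined.

```python
# Hypothetical sketch of a benchmark harness that turns human labels on a
# tool's answers into aggregate correctness, groundedness, and
# hallucination rates. Loosely inspired by the Stanford paper's framing;
# all names and the scoring rule are assumptions.
from dataclasses import dataclass

@dataclass
class LabelledResponse:
    query: str
    tool_answer: str
    is_correct: bool    # human label: factually correct and responsive
    is_grounded: bool   # human label: propositions supported by the cited sources

def score_benchmark(responses: list[LabelledResponse]) -> dict[str, float]:
    """Aggregate per-response labels into benchmark-level rates."""
    n = len(responses)
    if n == 0:
        return {"correctness": 0.0, "groundedness": 0.0, "hallucination_rate": 0.0}
    correct = sum(r.is_correct for r in responses)
    grounded = sum(r.is_grounded for r in responses)
    # Roughly in the paper's spirit: an answer that is wrong or not backed
    # by its citations counts against the tool.
    hallucinated = sum((not r.is_correct) or (not r.is_grounded) for r in responses)
    return {
        "correctness": correct / n,
        "groundedness": grounded / n,
        "hallucination_rate": hallucinated / n,
    }

if __name__ == "__main__":
    sample = [
        LabelledResponse("Is clause X enforceable?", "Yes, per case A.", True, True),
        LabelledResponse("Who bears the burden of proof?", "The defendant, per case B.", False, True),
    ]
    print(score_benchmark(sample))
```

The hard part, of course, is not this arithmetic but agreeing on the labelling protocol and keeping it stable across vendors and use cases, which is exactly why a consortium and a lot of patience are needed.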
