FOD#61: AI Fall: Time to Build

Let's get even more practical

Hi there! We’re back with our Monday news digest, which we call 'Froth on the Daydream.' After reviewing 150+ newsletters, we deliver an analysis of what’s happening and what’s worth paying attention to – for smart practitioners.

Forward it to your friends and colleagues if you find it useful, or share it via social networks with the buttons above.

Next Week in Turing Post:

  • Thursday, a guest post: Your infrastructure shouldn’t live in a black box
  • Friday, AI Infra Unicorns: A Deep Dive into Graphcore


The main topic: AI Fall

This Monday saw significant market drops across stocks, cryptocurrencies, and oil due to growing concerns over a rapidly slowing U.S. economy. Criticisms of the Fed’s pace on rate adjustments are intensifying, fueling fears of a potential recession. Investors are on edge, closely watching for what’s next.

Is it the right time to talk about an AI bubble or winter? A lot of people think so, but this topic has been surfacing for the last year and a half. Exactly a year ago, we discussed the AI hype, likening it to historical bubbles such as the dot-com and ICO crazes. With massive investments flowing into generative AI (GenAI), some experts warned back then of an impending bubble that could lead to another AI winter; others argued that AI's tangible benefits and established industry presence might prevent such a crash. Last week, Ben Thompson drew parallels with the 1990s tech boom, driven not by the necessity of building but by the fear of missing out. That fear is pushing investors to focus on the risks of underbuilding rather than the potential dangers of excess.

This frenetic pace of development actually begs for an AI Fall (and a few players will fall) – a period of reflection and sustainable growth. A moment to gather the crops and see what bore fruit and what never made it past the sprout stage. The industry is transitioning from hype to building practical tools that will integrate AI more deeply into our lives. The next phase will determine whether we're on the brink of an AI winter or at the dawn of a transformative era.

What Are We Really Building?

The question we must ask ourselves is: what are we truly aiming to achieve by pumping trillions of dollars into AI, particularly large language models (LLMs) and multimodal foundation models? Are we blindly chasing bigger models and more data, even when the internet itself may not provide enough raw material for meaningful expansion? How much more capable will GPT-5 or GPT-6 be? They might be better at answering questions, but that still doesn't answer the bigger question: what are we building at the end of the day? Even Sam Altman, in a recent interview with Joe Rogan, shared that when he started OpenAI, he believed AI would take on the heavy lifting for him. But what are we really automating? Are we addressing genuine needs, or are we caught in a loop of creating increasingly complex systems without a clear purpose?

Challenges

Indeed, despite ongoing investments, the industry faces significant hurdles: imbalanced growth, unproven revenue models, and increasing skepticism from financial heavyweights like Goldman Sachs and Sequoia Capital.

As the AI arms race intensifies, so does the debate over capital expenditures. David Cahn recently argued that the current debate isn't just about whether AI CapEx is too high, but whether the speed and necessity of infrastructure buildout are warranted. The competition among major cloud providers like Microsoft, Amazon, and Google is driving rapid expansion, but at what cost? Smaller players are being squeezed, and today's investments could become obsolete if AI progress outpaces the physical infrastructure being built.

The Shift from AGI Dreams to Practical AI Tools

But again, what is it that we are building? Despite the concerns, AI has already achieved a lot and is delivering real value: it's an amazingly useful tool, with plenty of potential still untapped. In this context, Nicholas Carlini's reflections on the value of LLMs are telling. Despite their limitations, these models are already making a tangible impact on productivity – Carlini himself reports a 50% improvement in his own work. This suggests that while we may not yet be at the AGI level, the benefits of AI are very real and growing.

Mass adoption doesn't happen overnight, but generative AI is already democratizing the use of AI tools, saving time, and improving productivity. A new wave of practitioners is on the rise, poised to build more tools and help corporations integrate AI into their operations. We’re in a building phase, not just a training or bubbling phase.

I don't believe in an AI Winter, the same way I don't believe in reaching AGI (anytime soon). As for the first: we've already built too many useful tools across industries, from medicine to journalism. As for the second: we haven't gotten any closer to understanding what intelligence is. It's a time for careful consideration, strategic investments, and, perhaps most importantly, a clear-eyed understanding of what we truly want AI to achieve. Even if some question whether we need ever-larger models right now, the industry has made tangible progress. It's time to roll up our sleeves and start developing the case studies that will push progress further. It will not be AGI; it will be us, equipped with our super cool AI tools.

Cheers, to the AI Fall.


In partnership with

SciSpace is a next-gen AI platform for researchers: effortlessly browse 280 million+ papers, conduct literature reviews, and chat with, understand, and summarize PDFs with its AI copilot – and much more.

If you love it, get 40% off an annual subscription with code TP40 or 20% off a monthly subscription with code TP20.

Try SciSpace today:


Announcements

We're back with a few announcements:

  • Starting this week, every FOD includes 'Weekly recommendations from an AI practitioner' – 2-3 links from someone who builds with AI. Never sponsored, just what works.
  • We’re announcing Turing Post Korea! Yes, Turing Post is now available in Korean, thanks to our initial reader and now collaborator, Byoungchan (Ben) Eum. Read the full announcement here.
  • In the latest FOD, we asked you what we should cover. Two topics are the absolute leaders: AGI and Agents. We’ve decided to highlight Superintelligence/AGI (what we prefer to refer to as human-level intelligence) on Fridays on our Twitter. But Agents, oh Agents, that’s a truly amazing topic to tackle from both historical and practical perspectives. In August, we will publish fewer articles because we are actively working on a series about AI agents. Stay tuned.


NEW! Weekly recommendations from an AI practitioner:

  • Cursor and Aider – they are similar (Aider is free, Cursor is paid); that's what Copilot should have been.
  • Superwhisper – if you get it into your flow, then ah, that's what a voice assistant was supposed to be!


Twitter Library

Check our latest collections:


News from The Usual Suspects

  • Google crushing it this week:

  1. Gemini 1.5 Pro outperforms competitors: GPT-4 and Claude 3.5 trail it on benchmarks.
  2. Gemma 2 2B – a smaller, safer, more transparent model. With ShieldGemma for content moderation and Gemma Scope for model interpretability, Google’s open-source push could redefine how we trust and understand AI. And it’s all wrapped in a neat, research-friendly package – talk about a gem!
  3. Google hires talent from Character.AI: the move follows a familiar trend – Inflection AI, Adept AI, Stability (in a sense), and now Character.AI. Who's next?
  4. Google Cloud expands database portfolio with new AI features, including graph and vector search in Spanner SQL.
  5. Google unveils three AI features for Chrome: Google Lens integration, natural-language search across your browsing history, and Tab Compare. Looks useful!


  • GitHub Plays Hardball with AI

GitHub's new beta, GitHub Models, brings AI experimentation directly to developers’ fingertips. With Meta’s Llama 3.1 and OpenAI’s GPT-4o on tap, it’s a one-stop shop for AI model comparisons. By embedding AI tools seamlessly into its ecosystem, GitHub is aiming to outshine platforms like Hugging Face, making AI development as smooth as a single commit.
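For a sense of what this looks like in practice, here is a minimal sketch of side-by-side model comparison, assuming an OpenAI-compatible inference endpoint authenticated with a GitHub token – the endpoint URL and model identifiers below are illustrative assumptions, not confirmed API details:

```python
# Minimal sketch: querying two hosted models through an OpenAI-compatible
# endpoint and comparing their answers. The endpoint URL, token variable,
# and model IDs are assumptions for illustration only.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://models.inference.ai.azure.com",  # assumed GitHub Models endpoint
    api_key=os.environ["GITHUB_TOKEN"],                # assumed: a personal access token
)

PROMPT = "In two sentences, when would you pick a small model over a large one?"

for model in ("gpt-4o", "meta-llama-3.1-70b-instruct"):  # illustrative model IDs
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=120,
    )
    print(f"--- {model} ---")
    print(response.choices[0].message.content)
```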

  • Contamination Crisis in NLP

The 2024 CONDA Data Contamination Report uncovers a major issue: AI models like GPT-4 and PaLM-2 unknowingly feasting on evaluation data, leading to misleadingly high scores. With 91 sources contaminated, this report pushes for transparency and stricter evaluation methods in the NLP community. Consider it the AI world’s version of a doping scandal.
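For intuition, many contamination checks boil down to measuring n-gram overlap between a benchmark and a training corpus. Below is a deliberately simplified sketch of that idea – not the CONDA methodology itself; the 8-gram size and whitespace tokenization are arbitrary assumptions:

```python
# Simplified contamination check: flag eval examples whose word 8-grams
# also appear in the training corpus. Real reports use far more careful
# normalization and matching; this only illustrates the core idea.
def ngrams(text: str, n: int = 8) -> set:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contaminated(eval_examples, train_corpus, n: int = 8):
    train_grams = set()
    for doc in train_corpus:
        train_grams |= ngrams(doc, n)
    return [ex for ex in eval_examples if ngrams(ex, n) & train_grams]

# Any eval example sharing an 8-gram with the training data gets flagged.
train = ["the quick brown fox jumps over the lazy dog near the old river bank"]
evals = ["the quick brown fox jumps over the lazy dog today", "a totally novel question"]
print(contaminated(evals, train))  # flags the first example
```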

  • Nvidia in the Hot Seat

Nvidia is in a bind, juggling antitrust probes and chip delays while secretly training robots with Apple’s Vision Pro. Their latest AI chip design flub might slow them down, but Nvidia’s influence in the tech world keeps growing – just not without some bumps along the way.

  • Groq - congrats!

  • From Stability to Flux

Former Stability.ai developers have founded Black Forest Labs and announced the FLUX.1 suite, which is free and on par with Midjourney and DALL-E 3. The startup has secured $31 million in seed funding led by Andreessen Horowitz and plans to release text-to-video models next.


In other newsletters:

  1. Nvidia's Blackwell Reworked - Shipment Delays & GB200A Reworked Platforms by SemiAnalysis
  2. Building A Generative AI Platform by Chip Huyen
  3. Chips for Peace: how the U.S. and its allies can lead on safe and beneficial AI by Cullen O'Keefe
  4. Llama 3.1 launched and it is gooooood! by MLOps


The freshest research papers, categorized for your convenience:

