FOD#61: AI Fall: Time to Build
TuringPost
Newsletter about AI and ML. Sign up for free to get your list of essential AI resources.
Let's get even more practical
Hi there! We’re back with our Monday news digest, which we call 'Froth on the Daydream.' After reviewing 150+ newsletters, we deliver an analysis of what’s happening and what’s worth paying attention to – for smart practitioners.
Forward it to your friends and colleagues if you find it useful, or share via social networks with the buttons above.
Next Week in Turing Post:
The main topic: AI Fall
This Monday saw significant market drops across stocks, cryptocurrencies, and oil due to growing concerns over a rapidly slowing U.S. economy. Criticisms of the Fed’s pace on rate adjustments are intensifying, fueling fears of a potential recession. Investors are on edge, closely watching for what’s next.
Is it the right time to talk about the AI bubble/winter? A lot of people think so. But this topic has been surfacing for the last year and a half. Exactly a year ago, we already discussed the AI hype, likening it to historical bubbles such as the dot-com and ICO crazes. With massive investments in generative AI (GenAI), some experts back then warned of an impending bubble that could lead to another AI winter. However, others argued that AI's tangible benefits and established industry presence might prevent such a crash. Last week, Ben Thompson also drew parallels with the 1990s tech boom, driven not by the necessity of building, but by the fear of missing out. This fear is pushing investors to focus on the risks of underbuilding rather than the potential dangers of excess.
This frenetic pace of development actually begs for an AI Fall (and a few will fall) – a period of reflection and sustainable growth. A moment to gather crops, see what bore fruit, and what failed to pass the sprout stage. The industry is transitioning from hype to building practical tools that will integrate AI more deeply into our lives. The next phase will determine whether we’re on the brink of an AI winter or at the dawn of a transformative era.
What Are We Really Building?
The question we must ask ourselves is: What are we truly aiming to achieve by pumping trillions of dollars into AI, particularly large language models (LLMs) and multimodal foundation models? Are we blindly chasing bigger models and more data, even when the internet itself may not provide enough raw material for meaningful expansion? How much more capable will GPT-5 or 6 be? They might be better at answering questions, but that doesn’t answer the bigger question: What are we building at the end of the day? Even Sam Altman himself, in a recent interview with Joe Rogan, shared that when he started OpenAI, he believed AI would take on the heavy lifting for him. But what are we really automating? Are we addressing genuine needs, or are we caught in a loop of creating increasingly complex systems without a clear purpose?
Challenges
Indeed, despite ongoing investments, the industry faces significant hurdles: imbalanced growth, unproven revenue models, and increasing skepticism from financial heavyweights like Goldman Sachs and Sequoia Capital.
As the AI arms race intensifies, so does the debate over capital expenditures. David Cahn recently argued that the current debate isn't just about whether AI CapEx is too high, but whether the speed and necessity of infrastructure buildout are warranted. The competition among major cloud providers like Microsoft, Amazon, and Google is driving rapid expansion, but at what cost? Smaller players are being squeezed, and today's investments could become obsolete if AI progress outpaces the physical infrastructure being built.
The Shift from AGI Dreams to Practical AI Tools
But again, what is it that we are building? AI has already achieved a lot, and despite the concerns, it is delivering real value: an amazingly useful tool with much potential still untapped. In this context, Nicholas Carlini's reflections on the value of LLMs are telling. Despite their limitations, these models are already making a tangible impact on productivity – Carlini himself reports a 50% improvement in his own work. This suggests that while we may not yet be at the AGI level, the benefits of AI are very real and growing.
Mass adoption doesn't happen overnight, but generative AI is already democratizing the use of AI tools, saving time, and improving productivity. A new wave of practitioners is on the rise, poised to build more tools and help corporations integrate AI into their operations. We’re in a building phase, not just a training or bubbling phase.
I don’t believe in an AI Winter, just as I don’t believe we’ll reach AGI anytime soon. For the first, we've already built too many useful tools across industries, from medicine to journalism. As for the second, we haven’t gotten any closer to understanding what intelligence is. It's a time for careful consideration, strategic investments, and perhaps most importantly, a clear-eyed understanding of what we truly want AI to achieve. Even if some question whether we need ever-larger models right now, the industry has made tangible progress. It’s time to roll up our sleeves and start developing the case studies that will push progress further. It will not be AGI; it will be us, equipped with our super cool AI tools.
Cheers to the AI Fall.
In partnership with
SciSpace is a next-gen AI platform for researchers where you can browse 280 million+ papers, conduct effortless literature reviews, chat with, understand, and summarize PDFs with its AI copilot, and so much more.
If you love it, get 40% off an annual subscription with code TP40 or 20% off a monthly subscription with code TP20.
Try SciSpace today:
Announcements
We’re back with a few announcements:
NEW! Weekly recommendations from an AI practitioner:
Twitter Library
Check our latest collections:
News from The Usual Suspects
GitHub's new beta, GitHub Models, brings AI experimentation directly to developers’ fingertips. With Meta’s Llama 3.1 and OpenAI’s GPT-4o on tap, it’s a one-stop shop for AI model comparisons. By embedding AI tools seamlessly into its ecosystem, GitHub is aiming to outshine platforms like Hugging Face, making AI development as smooth as a single commit.
The 2024 CONDA Data Contamination Report uncovers a major issue: AI models like GPT-4 and PaLM-2 unknowingly feasting on evaluation data, leading to misleadingly high scores. With 91 sources contaminated, this report pushes for transparency and stricter evaluation methods in the NLP community. Consider it the AI world’s version of a doping scandal.
Nvidia is in a bind, juggling antitrust probes and chip delays while secretly training robots with Apple’s Vision Pro. Their latest AI chip design flub might slow them down, but Nvidia’s influence in the tech world keeps growing – just not without some bumps along the way.
Former Stability.ai developers have founded Black Forest Labs and announced the Flux.1 suite, which is free and on par with Midjourney and DALL-E 3. The startup has secured $31 million in seed funding led by Andreessen Horowitz and plans to release text-to-video models next.
In other newsletters:
The freshest research papers, categorized for your convenience: