Is there a Generative AI bubble?
Midjourney v6.1, by Matt Wallace

Evaluating Claims of a Bubble

I was inspired to discuss this after reading a more comprehensive analysis of "Are we in an #AI bubble?" What I'd say first is: when evaluating any such claim, pro or con, remember to look at the quality of the evidence. TL;DR: no, with ample evidence. https://kelvinmu.substack.com/p/ai-are-we-in-another-dot-com-bubble

My news feed is often inundated with people who, for whatever reason, appear to have an agenda. The pro-AI agenda is usually the more obvious one: they are AI founders, like myself, out to build things that enable a new era of productivity and creativity; or enthusiasts who are parlaying their knowledge into revenue via influence, education, and advisory work.

The "con" side negatives are more difficult to understand; I see the same names - people I am not connected with and do not follow - who make it onto my feed algorithmically, sometimes helped along by my network. Their points are often the same:

(Ed. note: here I am using "AI" to refer to recent GenAI technology, as the posts often do; everyone should understand that classic predictive AI/ML, like fitting linear equations for prediction, is neither controversial nor revolutionary, although even that technology was under-adopted relative to its value.)

Looking at Specific Claims

- AI is some form of IP theft and there will be a reckoning

- AI is not useful because AI is stupid

- AI is in a bubble and investors and companies will crash and burn apocalyptically

I don't find these compelling; but of course, like all good negative arguments, there are grains of truth.

- Some AI products were trained on copyrighted data and there may be damages

- AI does have weak areas, is hard to deploy naively for many use cases, and knowledge of what it can do, why, and how to avoid its shortcomings is poorly distributed

- AI valuations are higher than those of contemporary peers in other sectors, and many companies receiving large investments will fail miserably; in fact, many investors clearly have trouble tracking the value chain of B2C and B2B developments and how those will impact industry sectors, workflows, and macroeconomics, which probably adds volatility

And yet, even with these shortcomings, the counters are clear:

- Some AI products are trained on relatively clean data; if we assume that reading any random publicly viewable web page is infringement, this may not be true, but the question of "can I transformatively use a public web page?" has been answered many times over. The clamor over this with respect to AI reminds me of the early days of Google's success, except Google's usage was, to my mind, far closer to actual infringement than any AI use

- AI does have weaknesses, but it has incredible strengths. I use it all day long and my productivity is breathtakingly high. I truly cannot believe how fast I can learn, do, experiment, communicate, understand. (This post was written 100% by hand.)

- Imagine you are in a casino, and there is only one game. You bet on a number from 1-10. If you win, your bet pays 30:1. If you lose, you lose. Only so many rounds will be available and the casino will close. If you were managing funds, you would want to bet as much as you could, naturally. You certainly might want to distribute your bets. But for every $10 you bet, you expect to have $30 on average afterwards (see the quick expected-value sketch below). I see AI investment like this at the moment, except the numbers are probably more extreme. The classic wisdom is that something like 65% of VC investments will return less than the invested capital, many complete losses; 30% will break even or return modest amounts; and finally, a few "winners" will return wildly outsized gains.
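A quick sketch of that arithmetic, using the casino numbers above and a toy VC-style portfolio. The 0.3x/1x/20x multiples are illustrative placeholders, not real fund data:

```python
# Casino game from above: 1-in-10 chance of winning, 30:1 payout.
p_win, payout = 0.10, 30
ev_per_dollar = p_win * payout          # 3.0 -> every $10 bet is worth ~$30 on average

# Toy VC-style portfolio mirroring the "classic wisdom" split (multiples are made up).
buckets = [
    (0.65, 0.3),   # ~65% return less than invested capital
    (0.30, 1.0),   # ~30% roughly break even
    (0.05, 20.0),  # a few winners return wildly outsized gains
]
portfolio_multiple = sum(p * m for p, m in buckets)

print(f"Casino EV per $1 bet:   ${ev_per_dollar:.2f}")       # $3.00
print(f"Toy portfolio multiple: {portfolio_multiple:.2f}x")  # ~1.50x
```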

Given that VC math, it's important to note that GenAI failures can be evidence of a healthy environment in which risky investment is made and innovation therefore gets funded. If VC were only willing to invest in obvious winners, we'd only have the equivalent of the "Big Hit IV: Hit Harder!" summer movie; and just like in the movie world, those tend to be increasingly less interesting.

Some Insight from Technology Waves of the Past

Curiosity and a certain restlessness with the status quo have led me to be involved in basically every single technology megatrend, in a hands-on-keys way. E.g.:

  • I started a major BU at the world's first large hosting provider (Exodus, which had >50% of the top 100 websites under its roof); before that, in my very first non-intern job, I worked at the first "all you can eat" dial-up ISP! The Internet was the thing that got me into technology irrevocably, so these tech trends have truly shaped my entire life.
  • Later, I was heavily involved in building responsive web apps, built microservices (before they were called that), used cloud (in 2007, before AWS even offered EBS volumes as a product), and so on. I had an iPhone 1 and wrote two mobile apps.
  • I was deeply involved in VMware's response to cloud: both enabling their service provider program and then helping to architect their first 'public cloud' product. (Which was ultimately well-engineered and an utterly incorrect response to customer/market needs; it led me into the product space too.)

So I have a lot of experience reading certain tea leaves. I did not get particularly interested in or work in some of the other waves - Blockchain and Metaverse things, in particular. (I'll note I think both cryptocurrency and blockchain in general can be extremely useful, but it was easy to observe that, much like cab drivers wanting to share stock tips in 1999, the hype had outgrown reality by a large margin; and I think human adoption of both AR and VR is simply a given.)

So I've seen waves of hyped up technologies - Internet, Mobile, Cloud, Blockchain, Metaverse - and I suspect many have a hard time discerning between those which are fundamentally transformative and, as a class, will have a huge impact, and those where the value and future are more tenuous or less impactful.

I'll add that I think of these things, not as an investor, but through a lens an investor would approve of: net present future value. I view the present value of a tech trend as the likelihood it will be successful and practical, multiplied by the intensity of the impact, discounted for the time in the future.

Future Value of a Technology Trend

NPFV = (P × I) / (1 + r)^t

  • NPFV = Net Present Future Value
  • P = Likelihood of success and practicality (expressed as a probability between 0 and 1)
  • I = Intensity of the impact (a measure of the potential impact, which can be quantified in monetary terms or other relevant units)
  • r = Discount rate (reflecting the time value of money and the risk associated with the tech trend)
  • t = Time in the future (in years)
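As a concrete (and entirely hypothetical) illustration of how that weighting works, here is a minimal sketch; the inputs are made-up numbers, not estimates of any real technology:

```python
def npfv(p: float, impact: float, discount_rate: float, years: float) -> float:
    """Net present future value of a tech trend: probability-weighted impact,
    discounted back from t years in the future."""
    return (p * impact) / ((1 + discount_rate) ** years)

# Hypothetical comparison: a near-certain, near-term trend vs. a speculative, distant one.
print(npfv(p=0.9, impact=1_000, discount_rate=0.10, years=2))    # ~743.8
print(npfv(p=0.3, impact=10_000, discount_rate=0.10, years=10))  # ~1156.6
```

The point of the toy numbers: a large enough impact can dominate even a heavy discount and a modest probability, which is roughly how I think about the two megatrends below.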

There are two megatrends under way right now that will dwarf all past megatrends by orders of magnitude:

  • AI, including generative AI
  • Robotics

The State of AI/GenAI

First, not all AI is GenAI. There are useful incremental things going on in classic AI - platforms like Snowflake and Databricks, or even smaller platforms like Savvi, try to make this accessible to end users. The value is often simple and incremental: it takes data and automates the process of extracting correct signal from noise. Training a model in classic AI (ML, really) allows you to take a large number of potentially useful "signals" and then learn how to use them to make correct predictions -- without "intuition", just using math, at large scale. Classic ML to me is industrialized math: coming up with equations (by introducing features to models) and then fitting data from the past to make useful predictions about the future. I'll add that this is one niche, there are many other useful techniques, and the line between "AI", "ML", and "Deep Learning" can be fuzzy and is not consequential to my point.
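If "industrialized math" sounds abstract, here is a minimal sketch of what I mean, using scikit-learn and synthetic data (the features and numbers are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Two made-up "signals" (say, ad spend and site traffic) and an outcome they drive.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Fit on the "past", then score predictions on held-out "future" data.
model = LinearRegression().fit(X[:150], y[:150])
print(model.coef_)                    # recovers roughly [3.0, 1.5]
print(model.score(X[150:], y[150:]))  # R^2 close to 1.0 on the hold-out
```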

Second, GenAI is in its infancy. I often talk about this tweet from Andrej Karpathy (an OpenAI cofounder and, for years, the lead of Tesla's FSD program) as an example. This is after ChatGPT came out:

And the almost immediate response from MosaicML, prior to their acquisition by Databricks:


Casual tweets that propagate into optimizations that can save millions of dollars are one funny example of the almost comedic immaturity of the space. This is not a negative; it is just a way of pointing out that by the time GPT-4 came out in March of 2023, the entire space was casually terrible from a "final optimization" standpoint.

I can contrast that very much with the Internet. In 1997, a construction crew cut through fiber optic cable and severed the lines of MCI WorldCom and Sprint, causing the most significant disruption of Internet traffic ever, given they carried major backbone traffic. Exodus, where I worked at the time, had just turned up major capacity and backhauled a huge amount of the Internet's backbone traffic. It was in some ways a coup for Exodus's reputation - because it substantially mitigated the impact - but it also points to how fragile the Internet was at the time, as millions of dollars of investment were pouring into startups all over the place. (Exodus went public in March of 1998, the next year, and the stock appreciated >300x from the offer price over the next 2 years or so - so believe you me, I have seen a bubble from ground zero.)

The past 18 months of research have shown some unbelievably cool areas of exploration: BitNet (1-bit/ternary quantization); Mamba (transformer-less LLMs); and, broadly, the incredible evolution in the quality, quantity, and sourcing of data. That's mostly about training and inference. On the consumption side, you saw a myriad of papers about AI agents, like "Generative Agents: Interactive Simulacra of Human Behavior" (or, as I call it, the "AI town" paper), and about prompting techniques - ReAct, CoT, LATS, Reflexion, and, more lately, various papers applying graph-like techniques to LLM problem solving.

Where's the Beef?

So we get back to the original linked post - https://kelvinmu.substack.com/p/ai-are-we-in-another-dot-com-bubble - from Kelvin Mu. Here's one of the most powerful lines:

Even though we are still in the early phases, AI applications have already generated significantly higher revenue. We estimate that OpenAI alone is expected to generate $3-4 billion of revenue in only its third year of commercialization. This is more than the combined revenue of Google, Amazon, and Facebook in their respective third years. Other AI companies with notable revenue traction include Anthropic (~$200 million run rate), Writer ($100 million), Cohere ($35 million), Glean ($50 million+), Heygen ($20 million+), Perplexity ($20 million+) and more.

But I'd like to make this personal. (If you recall, this is a red flag for you in evaluating claims!) I have more decades of experience now than I want to admit, and I have had a lifelong habit of pursuing new things. I am using a bunch of tools that are transformative, whether AI is "the product" or just "what makes the product good." Examples:

  • ChatGPT and Claude: oracular answerbots which, while certainly imperfect, give me 10x-100x velocity depending on the task. Some of those tasks are in the vein of "useful, but I'd never bother to do it myself"
  • Perplexity: a research assistant extraordinaire. It is, like the others, hilariously wrong occasionally; thankfully, those misses often fail the sniff test. On the other hand, Perplexity, especially Pro, powered by the big foundational models, is able to synthesize huge amounts of information. Just their Chrome plugin and the 'summarize' button is worth its weight in gold.
  • Cursor: a great AI-first IDE (a fork of Visual Studio Code) that is often a 10x gain for me; not on every task, but it not only autocompletes code with astonishing accuracy (even converting pseudocode to real code), at times I have pasted in a prompt and refactored a dozen files at once, as when converting the KamiwazaAI UI from vanilla React/Backbone to Material UI.

This isn't "hard" work exactly, but things like this taking seconds instead of minutes/hours saves a lot of time for other things.

  • Descript: Descript is more than the sum of its parts. Its defining innovation was to transform video editing into script editing; you record, you see a transcript, you edit the transcript, and the video is edited. Combine that with world-class sound cleanup, and I had a tool that did days of editing work in hours (which became more like minutes when I upgraded to an M2 Max!)
  • Rewind: I was able to find a screenshot from work done months ago and drag it out by querying Rewind; I use it to find papers, Slack conversations, and more all the time. As much as any of these tools, it exemplifies Jobs's "bicycle for the mind". It's occasionally practically useful - I've grabbed screenshots of code I wrote on a temporary host and forgot to push into a git repo, and then used ChatGPT to reconstruct the OCR into usable code in seconds. As it has a ChatGPT integration, I can ask it to write summaries based on my own work across apps - what did I do? How would I describe X?
  • Midjourney: from the needful (the header image for this article) to the practical (see some stencils/icons below), to the fun, Midjourney is to my mind a bit of the "ChatGPT of image generation"; their platform, by generating 4 images and getting users to "upscale" one to get a full version, acted as a world-class information gathering machine. This data provided the feedback mechanism to iterate the model. I've also added a comparison below of v4, released in November of 2022 (the same time as ChatGPT), and v6.1, released this week. The difference is striking. (Including v1-3, which came out March-July of 2022, would arguably just be unfair; I first used v3 in September of 2022, and while fun to toy with, the results were generally awful.)

Stencil Icons in Midjourney (v5.2)


Midjourney v4 vs v6.1, prompt: "A technology worker, sitting like David's Thinker, on an urban promontory, contemplating the unbelievable pace of technology innovation looking over the city"


If you're interested in regular quantification of GenAI productivity, I recommend Ethan Mollick, who is a hands-on practitioner but brings an academic lens. But speaking from a personal level, I can confidently say that there are times when I've seen things that might have been 1-week jobs for my team in past lives (1 man-week, not the whole team!) that I've been able to handle in less than 2 minutes. On one particularly memorable occasion, I was on a call with a customer who was checking off boxes from a POC; they highlighted something that, while not in the POC acceptance criteria, they had hoped to validate. I suggested that we had strong evidence that the thing they were interested in was compatible, and then, while letting them talk, I was able to actually do it; leveraging ChatGPT, I turned up a resource on AWS Glue, proved compatibility with a workflow, and showed it on the screen in less than 90 seconds. Customer satisfied. Not long ago, that would have been something like a Jira ticket, a discussion about requirements, an exploration through documents, and finally some tests and an outcome document.

On the Quality of AI Work and Work Augmentation

One of the things I coach customers, colleagues, and people in general on is that you need to be mindful about use cases where AI can be used on its own vs. when it needs a human sign-off vs. when it needs a human "partially in the loop" (e.g., supervising an accelerated process). It's fashionable to poke at the hilariously bad answers from AI - the most recent viral one being the results from "What is larger: 9.9 or 9.11?"

And yet AI:

I find that most of the studies related to productivity gains also do a poor job of understanding that the state of the art is changing extremely rapidly. Cost for inference is one metric that is fairly easily understood - what is the cost of a token?

There's a lot of great analysis at Artificial Analysis, and their charts on the progress of models are fantastic. Using their tools, I selected specific models. Here is a smattering of recent models alongside older ones:

I've included GPT-4 in this first graphic just to show how incredibly expensive it was and, relative to today's state of the art, how "bad". Bear in mind original GPT-4 -- which we marveled at -- cost $30 per million output tokens, and 2x that if you needed a 32k context window. GPT-4o-mini, by contrast, is 200x cheaper, has a 128k input context by default, and the output is higher quality. It was 15 months between those releases.

In what other industry do we see higher quality products on multiple dimensions with a 200x cost reduction in 15 months?
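To put rough numbers on that claim: the GPT-4 price comes from the paragraph above, and the GPT-4o-mini figures are its launch list prices as I recall them (about $0.15 per million input tokens and $0.60 per million output tokens), so treat the exact ratios as approximate:

```python
# Approximate list prices in USD per million tokens (GPT-4o-mini figures assumed).
gpt4_output_per_m       = 30.00   # original GPT-4 output price (8k context)
gpt4o_mini_input_per_m  = 0.15
gpt4o_mini_output_per_m = 0.60

tokens = 10_000_000  # say, a month of heavy usage

print(f"GPT-4 output:       ${gpt4_output_per_m       * tokens / 1e6:,.2f}")      # $300.00
print(f"GPT-4o-mini output: ${gpt4o_mini_output_per_m * tokens / 1e6:,.2f}")      # $6.00
print(f"Output price ratio: {gpt4_output_per_m / gpt4o_mini_output_per_m:.0f}x")  # 50x
print(f"Input price ratio:  {gpt4_output_per_m / gpt4o_mini_input_per_m:.0f}x")   # 200x
```

Depending on whether you compare against input or output pricing, the drop is somewhere between roughly 50x and 200x; either way, the direction and speed of the curve is the point.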

https://artificialanalysis.ai/



If we eliminate the "older stuff" that was included just for historical contrast, we can "dial up" the fidelity on the SOTA models:

https://artificialanalysis.ai/


Here we see a much more interesting contrast. You can choose "the best model" (where, for your use case, it is likely GPT-4o, Claude 3.5 Sonnet, or Llama 3.1 405B), you can pick the cheapest (Llama 3.1 8B, Claude Haiku, or GPT-4o-mini), or you can aim for "really good and also really cheap", meaning Llama 3.1 70B or GPT-4o-mini.

GPT-4o-mini stands out here as ridiculously good for its cost.

A word about benchmarks: I don't actually trust the eval of GPT-4o-mini here; I haven't used it enough yet for a vibe check, and in the few cases where I've used it thinking the task was easy and speed would be nice, I switched back to GPT-4o; granted, this is a sample size of 2-3, "relatively" hard tasks, and unoptimized toolchains. That said, I can say from experience that GPT-4o and Claude 3.5 Sonnet both deserve their crowns as kings of the model hill, with Llama 3.1 405B still in my queue - the buzz is very favorable, and the community has a history of improving substantially on the Llama models. (Worth noting, however, that the hardware required to improve 405B is obviously substantially larger than for 70B, so there may be more friction to such improvement; TBD, as even very large GPU costs have been sponsored by organizations of all stripes.) I don't think standard MMLU or MT-Bench is often a great view of quality at the SOTA level now; and Chatbot Arena, which was a pretty good reflector of the "vibes" (that is, of the subjective evaluations of practitioners), may be having trouble because GPT-4o-mini appears to have a uniquely strong way of producing good outputs for typical prompts while not being as strong as its rating would suggest on hard prompts. To be clear, the magnitude of this effect is possibly not that large, as 4o-mini only drops a few places in Chatbot Arena when isolating for "hard prompts" and "coding".

So I fully expect GPT-4o-mini to be better than Claude Haiku, which was already noted to "break the mold" for being extremely cheap while delivering quite high quality.

Regardless, we see a rapidly advancing competitive ecosystem here that is actually "hitting the ceiling" on various benchmarks.

Where AI is Going

It's important, then, in the context of "is AI in a bubble", to look forward, not back. After all, the value of GenAI investments is not "can I generate piles of free cash flow in 2024"; it is largely -- especially at the frontier model layer -- a bet on the future value that comes from:

  • Expectations of further advancements from leaders in model performance, up to and including AGI (which as I've said elsewhere, depending on the form it takes, has an NPV that at the limit is infinity)
  • Adoption we can expect in the future with whatever improvements we do get (even if AGI remains decades or centuries away, though centuries seems absurdly pessimistic at this point), based on improvements in:
    - Practitioner knowledge and technique; it cannot be overstated how few people are extremely well versed in the tools and techniques
    - Improved hardware (this could have its own post, but see fierce competition from Groq, Qualcomm, Intel, AMD, and Ampere, all putting out products to try to chip away at nVidia's dominance)
    - Improved toolchains: across the board, this is data indexing, understanding, and retrieval; deployment and management of models; fine-tuning (whether full or parameter-efficient); toolkits at the app layer for quality and consistency (e.g., DSPy), routing (e.g., LiteLLM), and prompting workflows (CoT, GoT, LATS); and, perhaps most importantly as we focus the lens past 6-12 months, AI agent frameworks. Agentic toolsets that empower models to see, process, record, act, etc., with the models acting in different capacities (coordinator, planner, coder, creative thinker, supervisor, etc.) and with hundreds to thousands of inference calls and tool uses per user requirement, are a likely evolution in the short-to-medium term (a minimal sketch of such an agent loop follows this list).
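To make "agentic toolsets" concrete, here is a minimal, illustrative sketch of the loop such frameworks run. Nothing here is a real framework API: `call_llm` is a scripted stand-in for a model call, and the tools are hypothetical.

```python
import json

def call_llm(messages: list[dict]) -> dict:
    """Stand-in for a real model call (OpenAI, Anthropic, a local Llama, etc.).
    Scripted here so the loop runs end to end: first pick a tool, then answer."""
    user_turns = sum(1 for m in messages if m["role"] == "user")
    if user_turns == 1:
        return {"tool": "search_docs", "args": {"query": "AWS Glue compatibility"}}
    return {"answer": "Yes -- the connector supports that workflow."}

TOOLS = {
    "search_docs": lambda query: f"(top passages for {query!r})",  # hypothetical tools
    "run_tests":   lambda path: "(test results)",
}

def run_agent(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "system", "content": "Plan, call tools as needed, then answer."},
                {"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_llm(messages)
        if "answer" in decision:                                   # the model is done
            return decision["answer"]
        observation = TOOLS[decision["tool"]](**decision["args"])  # act...
        messages.append({"role": "user",                           # ...and observe
                         "content": json.dumps({"observation": observation})})
    return "(step budget exhausted)"

print(run_agent("Is our pipeline compatible with AWS Glue?"))
```

Real frameworks add planning, memory, parallel sub-agents, and guardrails around this skeleton, but the core pattern - model decides, tool executes, observation feeds back - is the same.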

The future of GenAI is proactive

"Generative AI" is a deceptive term. The typical user experience with it is "I provide a prompt of what I want, and I get an answer, of varying quality". Power users employ many techniques, from custom instructions or Custom GPTs to prompt templates, to get better results. But the paradigm of "I go to the AI to get something" is merely a phase 1 play.

Phase 2 is proactive AI. This means systems are empowered with the "thinking" (intelligence is the wrong word, and again, that deserves an entire article of its own), the tools, and the integrations to understand what you are doing personally and professionally, and to empower you. We should expect in the future that:

  • Every email, every Slack message, every Jira ticket has almost unbounded context attached. I picture many tools adopting a pattern like a scroll wheel, where you "zoom in" on detail in any part of what you are doing to the desired level of context: from "Ticket PROD-123 was closed as COMPLETE by Joe Coder" to the diff, an explanation of the diff, and context about the design meeting Joe had with which colleagues, ready and waiting. I expect AI to let us dial our abstraction level from high to fine-grained quickly and instantly. This is not "when you ask"; this is "get all the data ready for me so my experience is that I can shift context as fast as I can scroll a mouse wheel." Even this process can be extended to feedback on the content at any level so the AI can adapt to individual user preferences.
  • For every overarching objective we have, professionally and personally, if we choose to have more information or ideas, they will be waiting. Whether you want to run a marathon or figure out your product strategy for the next year, AI should be continuously gathering the latest information, contextualizing it against your own plans, desires, and opinions, while politely providing counterpoints or red flags that may change your mind.

I can't understate how absurdly far this can go. Imagine you are a product manager in charge of a SaaS product. A competitor comes up with an interesting feature. You sit at your desk, and your AI apprises you that the feature was released, early buzz is very positive; it would appear to threaten to take share. The AI has farmed out a prototype design to other AIs, and it shows you an updated version of your own product and how putting that feature into it would possibly look. It has already written some prototype code and tested it gently in a sandbox, so if you want to prioritize it over other work, it can create the user stories, refine and attach the design, and create a starter branch in the codebase and commit what it worked on so the dev team can pick it up.

But wait, why is this waiting for your desk? Maybe you get a text message and it shows you their feature, the mockup, and lets you know it is generating a draft. More details await you. You respond, "Let's proceed, this will be a high priority I think - but we will want to tie it into <strategic feature>, because it is a lot more useful to users with that." The AI gets your reply, adjusts its approach, rethinking the design, mockup, and prototype code. By the time you are at your desk, it's a much better iteration for your org.

Sound preposterous? It's not. It's exceedingly hard to put a timeline on such things, but this technology exists in broad strokes right now. There is no wild innovation needed for this experience; it is just money, time, and engineering. Let's assume that by the end of 2025, GPT-5o-mini is much smarter than GPT-4o-mini, has an imperfect but extremely strong "gut feel" for how its conclusions follow as it generates, leading to much better outcomes and stronger agents - and, by the way, it natively integrates into any computer and can see any browser, IDE, etc., and work them with a virtual keyboard and mouse, so it can act as you, with permission, in any tool your eyes and hands can use on a computer - and, by the way, it is 200x cheaper than GPT-4o-mini. Meaning the outputs are $0.75 per BILLION tokens.

The trillion inference world

I used AI to estimate the total words in all the books ever published - 1.4 trillion words. In English, this is about ~1.8T tokens. Meaning you could generate output equivalent to all the words ever put in print in human history for about $1,350 of usage costs.
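The arithmetic behind that figure, using the speculative $0.75-per-billion-token price from the previous section and a common rule of thumb of roughly 1.3 tokens per English word:

```python
words_in_all_books = 1.4e12        # AI-assisted estimate quoted above
tokens_per_word = 1.3              # rough rule of thumb for English text
price_per_billion_tokens = 0.75    # speculative end-of-2025 price from the prior section

tokens = words_in_all_books * tokens_per_word        # ~1.8 trillion tokens
cost = tokens / 1e9 * price_per_billion_tokens
print(f"{tokens:.2e} tokens -> about ${cost:,.0f}")  # ~1.82e+12 tokens -> about $1,365
```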

That may seem absurdly unnecessary - but it isn't. The human brain is an unbelievably marvelous machine. Not only are we creative, with unbelievable memory capacity and intelligence (the ability to apply what we know to learn new things and skills rapidly), but the human brain runs on the equivalent of 20 watts of power. That's 1/5th of what an old-school incandescent light bulb used - thousands of times more efficient than state-of-the-art nVidia hardware, or more.

And models are, indeed, "dumb". What we home in on almost instantly - quickly and intuitively eliminating bad answers, handling rote things essentially without thought - models may need hundreds of thousands of tokens or more to get through. They will not "intuitively home in on things". They will have to exhaustively articulate ideas, reframe them, re-evaluate them, possibly test them, transform them formally into code or predicate calculus or some other formal representation, and then refine, eliminate, and iterate to get an answer we find acceptably good. To which I say: who cares? I care about the answer. If $10 of tokens saves me weeks or months of time by generating billions of tokens, let's go.

Certainly we should expect that any rote, repeatable, or patterned task can be heavily reduced.

By analogy, I am not a great mathematician, so when faced with certain classes of problems, rather than try to apply math to the situation, I have written Monte Carlo simulations as a way of "brute forcing" the problem. A computer could solve the discrete math version in a micro-fraction of the time; and indeed, I've occasionally presented results only to be told, "You know, the value of <situation> is <formula>" - my results match, but I could have saved myself time and the computer a lot of effort.
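As a toy illustration of that kind of brute force (my own examples were messier), here is a Monte Carlo estimate of a question with a well-known closed form - the chance that at least two people in a room of 23 share a birthday:

```python
import random

def shared_birthday(n_people: int = 23) -> bool:
    """One simulated room: does any birthday repeat?"""
    birthdays = [random.randrange(365) for _ in range(n_people)]
    return len(set(birthdays)) < n_people

trials = 200_000
estimate = sum(shared_birthday() for _ in range(trials)) / trials
print(f"Monte Carlo estimate: {estimate:.3f}")          # ~0.507

# The closed form someone eventually points out over your shoulder:
p_all_unique = 1.0
for i in range(23):
    p_all_unique *= (365 - i) / 365
print(f"Exact answer:         {1 - p_all_unique:.3f}")  # 0.507
```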

In many ways, I expect the early future of AI to be like that. Faced with spending minutes, hours, or days on various things, we will instead deploy increasingly cheap thought at problems. We will, in essence, be brute-forcing. The winning products will likely hide much of this complexity from end users and expose it in a very visible, tunable way for software people; but nonetheless, we will absolutely spend $0.10 of tokens - which by the end of 2025 might be 125 million tokens, or, in other words, about 100x the total words in the original seven Harry Potter books - on a problem or set of problems.

So - is AI in a bubble?

There are clearly many subjective and objective ways to evaluate this question. Back to the impetus for writing here and the original link - great analysis on many fronts by Kelvin Mu. My conclusions are similar:

  • There will be many overvalued losers; many will be the first to solve an easy problem in a non-sticky way, and they will get crushed by platforms, SaaS, or well-monied but more visionary competitors; there will also be plenty of also-ran competitors that take well-funded runs at big incumbents, validating those category creators while they bleed to death
  • There will be some overvalued winners (e.g., they succeed, but valuations were too high, either absolutely or relative to NPV); a classic from the Internet era would be Cisco, at one time the most valuable company on Earth, but ultimately not positioned to capture the larger value stream; incidentally, there is a very real chance that ALL of today's foundational model builders end up in this bucket
  • There will be some home runs; and you'll likely learn about them as the ball is sailing over the outfield; OpenAI may well be the first (contrary to what I just said!) - and of course, as we established earlier, the NPV of true AGI is ~infinity, and that is more of a sociopolitical problem than an economic or technological one
  • Investment in generative AI will fluctuate, with bankruptcies to be expected. Zynga is a case study from an earlier era: they popularized social gaming with FarmVille, peaked in 2012, but struggled to transition to mobile gaming, leading to an acquisition by Take-Two. Similarly, Rovio thrived with Angry Birds but declined and was bought by Sega. Zynga may have failed by being insufficiently mobile, and Rovio by being insufficiently social. The gaming segment overall has grown and thrived substantially.
  • There may be a trough of GPU orders if the tools don't enable end-user value across enough areas fast enough to fuel foundational model development at this pace

On the other hand, I feel ultimately confident we "ain't seen nothin' yet". I continue to be amazed at the progress.

One Exception - or Lesson

I would be cautious about certain classes of things - in particular, long-term assets tied to AI growth. There was an unbelievable fiber buildout associated with the Internet boom; the Internet drove it, and it was, from most vantage points, absolutely necessary. However, innovation in and adoption of DWDM and later phased optics made it possible to get much more bandwidth out of existing fiber, which made the investment look unneeded.

I don't necessarily see that happening here, and GPUs are a much shorter-term asset than long-haul fiber or datacenters; but it's worth bearing in mind.

And the OTHER elephant in the room

As Brett Adcock says, human labor is a $42T market. Companies like Tesla and Adcock's Figure are deep into building humanoid robots. It goes without saying that anything that can credibly talk about carving out a portion of a $42T TAM is earth-shaking. Brett has discussed how progress in AI has "moved up" Figure's timelines. A few things about this:

  • Robots having "brains" because foundational models allow them to tap into more knowledge to understand what something is, how things work, what actions might lead to a desired result, etc, is a big deal for robots. Figure taking 0.1% of its TAM ($420B!) more than justifies an investment much, much larger than OpenAI has had - and Figure has demo'd robot interactions powered by OpenAI; e.g, when the human asks to be fed and the robot decides to hand him an apple. It's a cheesy, "easy" thing. ("Why did you hand me that?" "You wanted food, and it was the only food.") But in principle, this is huge; and it's not theoretical, because you can send ChatGPT a picture yourself and say "I'm hungry, what can I eat?" and it can find food in your picture.
  • Robots interacting in the world can become an unbelievably huge source of data for learning. Something Yann LeCun has pointed out repeatedly is how small "all the text of the internet" actually is, in contrast with the richness of the human experience. And of course our brains are much more powerful than nVidia supercomputers, and we train in so many dimensions at once. We constantly learn not just how to speak and what things are - we understand cause and effect and temporality; which, incidentally, we know is learned, as small children truly do not understand object permanence. Robots as embodied AI provide an avenue for all kinds of training.
  • (There is, incidentally, a corollary in the quality of Tesla's FSD; they, of course, had hundreds of thousands and then millions of cars on the road whose owners willingly supply video and telemetry to train on.) (And there's another corollary in the use of synthetic data for LLMs and the adoption of tools like Wayve's GAIA-1 to generate synthetic video to help train cars to drive well.)
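For the "what can I eat?" example, here is a minimal sketch of that kind of multimodal call using the OpenAI Python SDK; the model name and image URL are placeholders, and the exact interface may differ by SDK version:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",  # any vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "I'm hungry -- what in this photo could I eat?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/kitchen-counter.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)  # e.g., "There's an apple on the counter..."
```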


To me, AGI is an "it happens, not sure when"; I don't care when, because we don't need AGI for GenAI to deliver unbelievable value.

Robotics is an "absolutely happening, and soon". Home robots are a thing now, but mass-market creation and adoption should probably arrive in less than 5 years for early adopters and less than 10 for the mass market. No matter what happens with GenAI, robotics is going to completely flip the world on its ear.

But also, AI, GenAI, and robotics have a very symbiotic relationship that will drive all of them to innovate faster.


Conclusion

Are we in a bubble? No. We are in a chrysalis.



Jeffrey Caldwell

Software Developer | Pioneering Generative AI Solutions | C#, .NET, Blazor, SQL, Python

1 month ago

I have never seen anyone put into words the feeling I've had since interacting with gpt the first time. Let alone break it all down so well. An amazing post. I think my only disagreement might be the part about handing it off to the dev team. Two of my predictions: on-demand books, movies and games. By that I mean you say you like a genre and you want to watch a new movie kind of like the matrix and the ai builds you a new movie, different plot and characters from scratch but it gives you the same feeling. Humans will still produce movies but there's only so much time in a human life and llms will do it just as well faster and much cheaper. This isn't a fantasy, it's just matter of time. A web interface generated completely on the fly for you. With your preferences, loading your favorites, anticipating your needs and voice controlled. None of that is fantasy,someone could put it together today if they wanted to. I completely agree this is a new era. Possibly bigger than the web in terms of impact on society.

Avinash Singh

Building Next Gen IT services company powered by Gen AI | AI Automation | Conversation Design | Gen AI Consulting & Development. #GenAI

2 months ago

There are hurdles that stand between businesses and successful AI adoption, and i guess those hurdles make us feel of AI as bubble. AI efforts need to be directed toward the right use cases to see it’s impact with defined goals/metrices of success. Businesses needs to pace this marathon and not sprint it. Then i guess it won’t look like a bubble.

Jake Kaldenbaugh

Strategic Value Acceleration Leader: CorpDev & Venture | FinOps - DataPlane - Cloud - Cybersecurity

2 months ago

Overestimated in the short-term; underestimated in the long-term.

Don 春沈 Li 李

Idea Man | Entrepreneur | Technologist (past)

2 months ago

I might be opinionated but here's my blunt observation of AI nay-sayers (yes, I'm also referring to GenAI for AI here): a) dumb (unable to use AI effectively) b) lazy (unwilling to spend some time to learn how to produce better results from AI)

Mark Hinkle

I help business users succeed with AI. I share my knowledge via The Artificially Intelligent Enterprise newsletter.

2 months ago

Great post Matthew, I love the "chrysalis" simile.
