How to ride an exponential wave
Well, it's almost a centaur.

It’s been a year since ChatGPT was launched. As with all new, disruptive things, it’s had unexpected impacts as well as disappointments where it wasn’t as magical as we wanted it to be. We are, as always, becoming accustomed to it, so even though a lot is still changing in the world of AI, there is a temptation to settle into the “new normal”.

But that’s a mistake. We are still very much on an exponential curve in terms of the capabilities of LLMs (and likely other kinds of models). Exponential curves are hard to understand - we tend to think linearly by default, so it’s hard to really visualize something that is moving faster than that. There’s an additional challenge, too - at low values (that is, in the early stages), a linear curve can actually sit above an exponential one. This is the reason for the famous observation that technology is overestimated in the near term but underestimated in the long term. Our mental model of impact is linear, which runs ahead of the exponential at first but falls far behind it eventually.
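To make that crossover concrete, here is a tiny sketch in Python (the growth rates are invented purely for illustration, not a forecast of anything): a “linear” trend that adds a fixed amount each year against an “exponential” one that doubles every year.

    # Illustrative only: made-up growth rates, just to show the crossover
    # described above. The linear trend adds 5 units a year; the
    # exponential trend doubles every year.

    def linear(year: int) -> int:
        return 5 * year        # fixed gain each year

    def exponential(year: int) -> int:
        return 2 ** year       # doubles each year

    for year in range(1, 11):
        lin, exp = linear(year), exponential(year)
        leader = "linear ahead" if lin > exp else "exponential ahead"
        print(f"year {year:2d}: linear={lin:4d}  exponential={exp:5d}  ({leader})")

With these particular (arbitrary) numbers the linear trend is ahead through year four and hopelessly behind by year ten - the point is the shape of the curves, not the values.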

So given that a lot has changed and even more will change, what do we do? How do you ride this wave? The best thing to do, other than just doing your best to stay familiar with, and educated about, what is happening, is to look for invariants. What isn’t likely to change, or will change in degree but not in nature, as this space evolves?

I can think of some things. Models will continue to get smarter and fill in the gaps they have now: planning, hallucinations, token windows, performance, and cost will all keep improving. It’s hard to find a new thing to do - the “0 to 1” problem - but much easier to apply a lot of effort in parallel to optimize it once you’ve found it. So anything that can be incrementally improved (even if that increment is very expensive in capital) is likely to be.

The landscape of models will also continue to get more complex. There will be lots of choices for developers - some at the frontier, but more and more serving particular niches, places where some particular mix of data, cost, latency, and quality is better served by a different approach. This will continue to drive the need for tools to manage that complexity.

We are likely to continue to spend more and more time in front of these models, probably in some kind of “assistant” interface. It’s likely that ChatGPT isn’t the end state of that, and that we will continue to get richer multi-modal interactions: voice, image, video, gesture. If this goes far enough, we will get models that are able to generate user interfaces on the fly (and perhaps consume them as well, operating GUI applications for the user). Even if we don’t get that far, it’s very clear that users will spend more and more time starting with, and working with, a model - if your business doesn’t accommodate this, it’s likely to be disrupted.

Finally, what do we do, as people, to stay relevant and valuable? That’s a complex answer, and we aren’t going to know all of it just yet (imagine explaining the software business to a farmer in 1860 - telling them “don’t worry, farms will be automated, there will be only a few percent as many farm jobs as there are now, but your kids and grandkids will have plenty to do”). But one answer probably comes from the world of chess, where ‘centaurs’, teams composed of humans in conjunction with models, can be very competitive. In fact, in that world, there are humans who are fairly weak chess players on their own, but who are very good at managing the model’s ability to explore, and who are very effective as a result.

This seems to be a constant: there will always be some advantage or value in being able to work with a model. In that world, the premium is not on learning facts (which can be looked up or explained by the model) but on learning reasoning. How does the world work? What is Occam’s razor, or a falsifiable hypothesis, or a Fermi question? How do you explore a space you don’t know anything about, without being fooled or lost? What are the basic behaviors of physics, societies, laws, politics?

We are moving to a world where the value isn’t so much in being able to answer the question as in being able to ask the right one. Be a centaur! As the models get better, you will too.

Soham Mehta

Founder & CPTO at Interview Kickstart & AI Training

1 yr

In some of our non technical roles, we're changing our interview process from "Do this assignment" to "Do this assignment using ChatGPT and send us your prompt history". To your point, that shifts the demonstration of skills in interviews from answering questions to asking the right ones. It is incredibly telling.

Amit Jha

Principal SW Engineering Manager at Microsoft

1 yr

Thanks for sharing this, Sam. This is great. Philosophy, i.e. epistemology, seems to be becoming more relevant in leveraging these growing models.

Ryan Whalen

Security Leader

1 yr

This is great. Thanks Sam!

Grin Lord

Founder | Psychologist | Expanding Human Understanding with AI

1 yr

“Be a centaur!” I’m going to use this!

Jennifer Grossman

CEO @ Oliver NYC | BA English, Psychology

1 yr

Agreed. It’s always about asking the “right” question!
