How do we make sense of all of this?
[Header image generated by DALL-E]

There is a lot going on in the world right now, with an explosion of AI news hitting all of us constantly. I spend a fair bit of my day job reading and thinking about this, and I think I'm a pretty good pattern-matcher, and even I'm having trouble keeping up.

One big problem is that so much of what's happening has at least the potential to be very disruptive. The implications aren't incremental: they touch how we work, create, communicate, transact, and more. Hard questions are being raised about the nature of work, the role intelligence will play in creating value, and the harms and risks we need to pay attention to.

There aren't going to be easy answers, at least for a while, and there are going to be false leads, early bad takes, all the usual "fog of war" stuff. I think there are three things we can look to right now that can help us navigate this.

The first thing is, fittingly, first principles. There are all kinds of patterns and behaviors that seem to show up repeatedly. Picking one you believe in and applying it to the current situation can give you a clear framing. I talk about first principles often in my writing, so you can find more if you look back through earlier letters, but a few that I like in this moment are:

  • People are lazy. Look beyond "cool" to how much easier a new tool or tech makes someone's life. Convenience always wins.
  • We reject new things by default, so try to ask “what if” instead of “why not” as much as you can.
  • We tend to be bad at understanding exponential curves, so we overestimate impact in the near term and underestimate it in the long term.
  • All complex systems have scale constraints. When these are relaxed, the system reconfigures. Look to which constraints are being relaxed in this moment.

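To make the exponential-curve point concrete, here is a toy sketch (illustrative numbers only, not a claim about any real capability curve) comparing a linear extrapolation against true exponential growth:

```python
# Toy illustration: why linear intuition fails on exponential curves.
# Suppose some quantity doubles every year; a linear extrapolation
# fitted to the first year looks fine early and absurdly low later.

def exponential(year, start=1.0, doubling_time=1.0):
    """True exponential growth: doubles every `doubling_time` years."""
    return start * 2 ** (year / doubling_time)

def linear_extrapolation(year, start=1.0, doubling_time=1.0):
    """Linear guess using the slope observed between year 0 and year 1."""
    slope = exponential(1, start, doubling_time) - start
    return start + slope * year

for year in (1, 5, 10):
    print(f"year {year:2d}: exponential={exponential(year):8.1f}, "
          f"linear guess={linear_extrapolation(year):5.1f}")
# At year 1 the two agree; by year 10 the exponential is ~100x the linear guess.
```

The gap between the two lines is exactly the "underestimate it in the long term" failure mode.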
The last one leads me to the second tool we have: math. This is a good moment to set aside emotion and look at hard data where possible. I've seen people take things like the Chinchilla scaling paper and apply the math there in all kinds of useful ways that help us understand the cost of scaling and the likely curve of improvement. Math is a good cure for getting fooled by your linear intuition in a non-linear moment. Sometimes you can combine the first two of these and apply some math to a first-principle idea (for example, people have been trying to quantify job efficiency improvements with AI).
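As one concrete instance of "doing the math," the Chinchilla paper's headline result reduces to a back-of-the-envelope rule: training compute is roughly C ≈ 6·N·D FLOPs, and the compute-optimal token count D is roughly 20× the parameter count N. A minimal sketch, using these rounded coefficients rather than the paper's exact fitted values:

```python
# Back-of-the-envelope Chinchilla-style compute allocation.
# Assumptions (rounded rules of thumb, not exact paper coefficients):
#   training compute C ~ 6 * N * D FLOPs
#   compute-optimal data D ~ 20 * N tokens

def optimal_allocation(compute_flops, tokens_per_param=20.0):
    """Split a compute budget into parameters N and tokens D.

    With D = k*N and C = 6*N*D, we get C = 6k * N^2, so N = sqrt(C / 6k).
    """
    n_params = (compute_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# A Chinchilla-scale budget of ~5.76e23 FLOPs:
n, d = optimal_allocation(5.76e23)
print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")
```

Plugging in a Chinchilla-scale budget recovers roughly the paper's actual configuration of about 70B parameters trained on about 1.4T tokens, which is a nice sanity check on the rule of thumb.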

The last tool is analogy. Sometimes we can look back at earlier technical transformations and draw parallels. This is the weakest of the three and has to be used carefully, since it's easy to find false analogies. Analogy is most useful for finding a starting point that you can then apply more rigorous analysis to, per above. But sometimes it's very useful for giving a good sense of what's likely.

The big challenge with analogy is dimensional reduction. LLMs are incredibly complex, high-dimensional objects - they are like humans who have been reading for millennia. We don't have good mental models for something with that degree of complexity, and some of their behaviors fool us into thinking they are more like human minds than they really are. When we project these complex, high-dimensional objects down into a simpler analogy, we are doing dimensional reduction, which loses information and can be misleading if we don't do it carefully. So take any analogy with a grain of salt.
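The projection metaphor can be made literal with a toy example: drop coordinates from two distinct points and they become indistinguishable, which is exactly how a lossy analogy can make genuinely different things look the same.

```python
# Toy illustration of dimensional reduction losing information:
# project 3-D points to 2-D by keeping only the first two coordinates.

def project(point, keep=2):
    """Project a point down by keeping only its first `keep` coordinates."""
    return point[:keep]

a = (1.0, 2.0, 5.0)   # two distinct 3-D points...
b = (1.0, 2.0, -3.0)

# ...that the projection cannot tell apart:
print(project(a) == project(b))  # True
```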

It can be challenging to keep up with everything that's going on, but it's not impossible. There are patterns and methods of analysis that can help us work through all of the change thoughtfully.

Sam Schillace