How do we make sense of all of this?
[Header image: DALL-E, of course]

There is a lot going on in the world right now, with the explosion of AI news hitting all of us constantly. I spend a fair bit of my time reading and thinking about this as my day job, I think I'm a pretty good pattern-matcher, and even I'm having trouble keeping up.

One big problem is that so much of what's happening has at least the potential to be very disruptive. The implications aren't incremental; they bear on how we work, create, communicate, transact, and more. Hard questions are being raised about the nature of work, the role intelligence will play in creating value, and the harms and risks we need to pay attention to.

There aren't going to be easy answers, at least for a while, and there are going to be false leads, early bad takes, and all the usual "fog of war" stuff. I think there are three things we can look to right now that help us navigate this.

The first thing is, fittingly, first principles. There are all kinds of patterns and behaviors that seem to show up repeatedly. Picking a principle you believe in and applying it to the current situation can give you a clear framing. My writing is largely about first principles, so you can find some by looking back through earlier posts, but a few that I like in this moment are:

  • People are lazy. Look beyond "cool" to how much easier a new tool or technology makes someone's life. Convenience always wins.
  • We reject new things by default, so try to ask “what if” instead of “why not” as much as you can.
  • We tend to be bad at understanding exponential curves, so we overestimate impact in the near term and underestimate it in the long term (see the short sketch after this list).
  • All complex systems have scale constraints. When these are relaxed, the system reconfigures. Look to which constraints are being relaxed in this moment.
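
On the exponential point, here is a tiny numerical sketch of my own (the 3% monthly improvement rate is made up purely for illustration) showing why linear intuition breaks down: over a few months, linear extrapolation and compounding look almost identical, but over a few years the compounded curve pulls far ahead.

    # Hypothetical illustration: a capability improving 3% per month (made-up rate).
    monthly_gain = 0.03

    for months in (6, 24, 60):
        linear = 1 + monthly_gain * months          # what linear intuition extrapolates
        compounded = (1 + monthly_gain) ** months   # what the exponential actually does
        print(f"{months:3d} months: linear x{linear:.2f} vs compounded x{compounded:.2f}")

At six months the two are nearly indistinguishable; at five years the compounded curve is roughly double the linear estimate, which is exactly the long-term underestimate the principle warns about.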

The last one leads me to the second tool we have: math. This is a good moment to set aside emotion and look at hard data where possible. I've seen people take things like the Chinchilla scaling paper and apply the math there in all kinds of useful ways that help us understand the costs of scaling and the likely curve of improvement. Math is a good cure for getting fooled by your linear intuition in a non-linear moment. Sometimes you can combine the first two of these and apply some math to a first-principle idea (for example, people have been trying to quantify job efficiency improvements with AI).
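As one concrete illustration of what that kind of math looks like, here is a minimal back-of-the-envelope sketch (my own, not from the paper's authors) built on two widely cited Chinchilla-style rules of thumb: training compute is roughly 6 × N × D FLOPs for N parameters and D training tokens, and the compute-optimal ratio is roughly 20 tokens per parameter. The constants are approximations, so treat the output as order-of-magnitude only.

    # Back-of-the-envelope sketch using two rough Chinchilla-style rules of thumb:
    #   training compute  C ~= 6 * N * D   (FLOPs; N = parameters, D = training tokens)
    #   compute-optimal   D ~= 20 * N      (roughly 20 tokens per parameter)
    # Substituting D = 20N into C = 6ND gives C = 120 N^2, so N = sqrt(C / 120).

    def chinchilla_optimal(flop_budget: float) -> tuple[float, float]:
        """Return (parameters, tokens) that roughly spend flop_budget compute-optimally."""
        n_params = (flop_budget / 120) ** 0.5
        n_tokens = 20 * n_params
        return n_params, n_tokens

    for budget in (1e21, 1e23, 1e25):  # hypothetical FLOP budgets
        n, d = chinchilla_optimal(budget)
        print(f"{budget:.0e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")

Even this crude version is useful: it tells you that a 100x increase in compute only buys you about a 10x larger compute-optimal model, which is the kind of non-linear result intuition alone tends to miss.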

Finally, the last thing is analogy. Sometimes we can look back at earlier technical transformations and draw parallels. This is the weakest of the three and has to be used carefully, since it's easy to find false analogies. Analogy is most useful for finding a starting point that you can then apply more rigorous analysis to, per the above. But sometimes it can give a very good sense of what's likely.

The one challenge with analogy is dimensional reduction. LLMs are incredibly complex, high-dimensional objects - they are like humans who have been reading for millennia. We don't have good mental models for something with that degree of complexity, and some of their behaviors fool us into thinking they are more like human minds than they really are. When we project these complex, high-dimensional objects down into a simpler analogy, we are doing dimensional reduction, which loses information and can be misleading if we aren't careful. So take any analogy with a grain of salt.

It can be challenging to keep up with everything that's going on, but it's not impossible. There are patterns and methods of analysis that can help us work through all of this change thoughtfully.

