Is The Valley Getting Cannier?

In 1970, roboticist Masahiro Mori coined the phrase “uncanny valley” (不気味の谷 in the original Japanese) to denote the phenomenon that when a machine seems close-but-not-quite human, it triggers a negative response. We especially see this effect in animated films, such as The Polar Express.

Or at least we used to. Modern animated films and video games have achieved so much realism that we take it for granted. We are now at the point where deepfake photos and videos threaten to undermine our historical ability to distinguish reality from artifice. That doesn’t sound like the uncanny valley — indeed, it sounds far more ominous.

Generative models are not especially new — Claude Shannon talked about generative language models back in the 1940s. But 2022 has clearly been the year of generative AI, with models like DALL-E 2 and Stable Diffusion commodifying AI-generated art and ChatGPT teasing the commodification of AI-generated “knowledge”.

Have we achieved an inflection point where AI can produce human-level performance? Did the Singularity arrive when we weren’t looking?

I, for one, welcome our robot overlords; but I don’t think they’ve arrived quite yet. What we have instead is the ultimate gaslighting: AI-generated content that looks and feels human-generated, but still makes hilarious mistakes when it comes to substance.

Here’s a great example from ChatGPT:

Question: If A is greater than B and A is greater than C, then is B greater than C?


Answer: If A is greater than B and A is greater than C, it is possible for B to be greater than C, but it is not necessarily true. For example, if A is 5, B is 3, and C is 2, then A is greater than B and A is greater than C, but B is not greater than C.

ChatGPT correctly surmises that A > B and A > C does not necessarily imply that B > C. But its example is comically wrong, since of course 3 > 2.
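The flaw is easy to verify mechanically. Here is an illustrative sketch in Python (the function name is my own) that distinguishes ChatGPT’s failed triple from a genuine counterexample:

```python
def is_counterexample(a, b, c):
    """True if the premises hold (A > B and A > C) but the conclusion (B > C) fails."""
    return a > b and a > c and not b > c

# ChatGPT's triple proves nothing: 3 > 2, so B *is* greater than C here.
print(is_counterexample(5, 3, 2))  # False

# A genuine counterexample just swaps B and C.
print(is_counterexample(5, 2, 3))  # True
```

Swapping B and C in ChatGPT’s own example would have made it correct — which makes the mistake all the more striking.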

Here is a more subtle mistake:

Question: Is the cardinality of the set of rational numbers greater than the cardinality of the set of integers?


Answer: Yes, the cardinality of the set of rational numbers is greater than the cardinality of the set of integers. Cardinality refers to the number of elements in a set, and in this case, the set of rational numbers contains more elements than the set of integers. This is because the set of rational numbers is a larger set that includes all integers, as well as other numbers that can be expressed as simple fractions. Because the set of rational numbers is larger, it has a greater cardinality than the set of integers.

That explanation is well-written and convincing. But wrong.

Luckily, my dad loved to explain infinities to me when I was a kid, so I learned about countable and uncountable infinities. As it turns out, even though, as ChatGPT says, the set of rational numbers includes all integers, as well as non-integers that can be expressed as fractions, its cardinality is not larger than that of the set of integers. That’s because you can establish a one-to-one correspondence between the two sets. Both sets are countably infinite, and their cardinality is aleph-nought (ℵ₀).

But I’m sure I would have accepted ChatGPT’s explanation if I had not already studied this problem. And I shudder to think what answers it could generate that I would accept, in areas where I lack domain expertise.

Because what we have today isn’t a system that reliably understands questions and correctly answers them — though a lot of its answers are correct. Rather, we have a system that produces answers that look and feel right. It’s AI-splaining, the ultimate form of gaslighting.

So, are we out of the uncanny valley? Is it getting cannier? I’m not sure what to call this new place where we’ve arrived. But we’d better figure out what to do about it. Because we’ll need to quickly develop some new skills to consume AI-generated content more critically.


More articles by Daniel Tunkelang