Can AI Reason?


Can AI reason? You may have seen several recent articles about an Apple study questioning whether LLMs can reason. There are many similar articles, and many confident pundits loudly proclaiming that it is impossible for AI to think logically or creatively: such things are solely the purview of humanity, and no mere pile of 1s and 0s can really think.

I would ask why. Why can’t AI reason? To answer this, I think the first thing we need to do is define what ‘reasoning’ actually means and how it differs from the simple pattern recognition we all know AI is capable of. I confess that I don’t know the answer. Apple created a test of symbolic word problems that an 8th grader should be able to solve, but is that what reasoning is? I’m not sure that knowing LLMs sometimes fail at solving word problems and brainteasers means anything at all.

Here’s a fun example from the Apple study: Liam wants to buy some school supplies. He buys 24 erasers that now cost $6.75 each, 10 notebooks that now cost $11.0 each, and a ream of bond paper that now costs $19. How much should Liam pay now, assuming that due to inflation, prices were 10% cheaper last year?

This was a question that even the most up-to-date version of ChatGPT had problems with. But I would posit that many people would also struggle with it. While I’m sure most people could reason out the answer, we also generally look for patterns and context, and might miss the fact that the final clause is completely irrelevant. It would be really interesting to see a performance comparison between these LLM models and a group of undergrads.
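
If you strip away the misdirection, the arithmetic is simple: Liam pays today's prices, so the inflation clause changes nothing. A quick check (my own illustrative sketch, not code from the study):

```python
# Liam pays the *current* prices; the "10% cheaper last year"
# clause is a red herring and never enters the calculation.
erasers = 24 * 6.75  # 162.00
notebooks = 10 * 11.00  # 110.00
paper = 19.00
total = erasers + notebooks + paper
print(total)  # 291.0
```

A model (or a person) that pattern-matches on "10% cheaper" and starts discounting is exactly the failure mode the study was probing.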

How about another fun example: You work in a 100-story building, and you have two eggs. You need to figure out the highest floor from which an egg can be dropped without breaking. How would you do this in the fewest tries?

You might have heard of this as a ‘Google interview question’. It was not in the Apple study, but I was actually asked it in an interview. I’m fairly sure I got it wrong and said to start at the 50th floor. I’ve since looked up how to get to the right answer (you work backward from the worst case), and I know the answer is 14. Since I can recognize patterns well enough, I believe I could also answer a question about dropping two Ming vase replicas from a building with 57 floors. However, I don’t think this would help me answer another purported Google question: how much should I charge to wash all the windows in Seattle?
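
The logic behind that 14 can be sketched in a few lines. The standard argument: if your first drop is from floor n, your next from floor n + (n−1), and so on, then n drops can cover n(n+1)/2 floors, so you need the smallest n where that sum reaches 100. This is my own sketch of that reasoning, not anything from the study:

```python
def min_drops_two_eggs(floors: int) -> int:
    """Fewest worst-case drops to find the breaking floor with 2 eggs.

    If the first drop is from floor n, then n-1 higher, then n-2, ...,
    the worst case is always n total drops, and n drops can cover
    n*(n+1)/2 floors. Find the smallest n whose coverage >= floors.
    """
    n = 0
    covered = 0
    while covered < floors:
        n += 1
        covered += n  # n drops cover 1 + 2 + ... + n floors
    return n

print(min_drops_two_eggs(100))  # 14
```

The same function answers the Ming-vase variant: for a 57-floor building it returns 11, since 10 drops cover only 55 floors but 11 cover 66.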

I would posit that most of my reasoning is really pattern-based application of rules. I know the rules of calculus and recognize the situations in which they apply. If I had never heard of the subject, could I reason out its laws? Theoretically, yes; Leibniz and Newton famously both discovered calculus independently. But do I believe that I, personally, am able to derive new mathematical theory? Likely not. Does that mean I can’t reason? Were Leibniz and Newton actually reasoning, or did they just find new patterns to which they could apply existing rules? After all, they didn’t discover calculus in a vacuum; there were thousands of years of mathematical rules and patterns from which they could draw.

Many statements on AI reasoning are hyperbolic. My favorite was Apple’s own summary article, which invents quotes, misunderstands result sets, and leads with a headline claiming proof. Maybe this is simply clickbait, but I see a lot of similar statements when people discuss AI: definitive proclamations like “It’s impossible for AI to reason” or “It won’t ever be able to generate actual creativity”. People seem threatened by AI’s more humanlike abilities. Is this because we’re worried about losing our jobs to the machines, or is it a more existential worry about the nature of our souls? As a philosophical materialist, I may be particularly well suited to a career in AI.
