If you give ChatGPT a cookie...
Image created with Midjourney, altered by the author.

You're likely familiar with the classic children's book "If You Give a Mouse a Cookie" (written by Laura Numeroff and illustrated by Felicia Bond), in which a series of increasingly ridiculous hypotheticals begins with, "If you give a mouse a cookie, he's going to want a glass of milk."

As I recall, the goal was to teach kids to think through consequences, but in hindsight it also reads as a study in the slippery slope logical fallacy, though I can't tell you whether it was meant to caution against it or not.

Now, a few decades later, I find myself presented with a very different version of that mouse: Generative AI. The pattern shows up for me most often when I'm working on a coding project. It goes something like this:

  1. I ask ChatGPT to write a bit of code.
  2. It does so, but the code fails because it calls a function that was never defined; the model hallucinated something that sure would be convenient... if it existed (a sketch of this failure follows the list).
  3. I give ChatGPT the error message and indicate that the function doesn't exist, and that we should try another approach.
  4. ChatGPT tries to solve the problem by simply writing that missing function! Neat!
  5. That function fails, as it uses another library... which is also a hallucination. I ask it to try again.
  6. ChatGPT tries to write the missing library...
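
To make the loop concrete, here's a minimal, entirely hypothetical sketch in Python. pandas is a real library, but smart_parse_dates and dateutil_extras are invented stand-ins for the kind of convenient-but-nonexistent names a model tends to produce.

    # A hypothetical replay of the loop. pandas is real; the "smart"
    # helper and the "extras" library below are invented.
    import pandas as pd

    df = pd.read_csv("orders.csv")

    # Step 2: plausible name, convenient signature... nonexistent.
    df = pd.smart_parse_dates(df, column="order_date")
    # AttributeError: module 'pandas' has no attribute 'smart_parse_dates'

    # Step 4: ChatGPT "fixes" it by writing the missing function itself...
    def smart_parse_dates(df, column):
        # Step 5: ...built on top of a library that also doesn't exist.
        import dateutil_extras
        # ModuleNotFoundError: No module named 'dateutil_extras'
        df[column] = dateutil_extras.flexible_parse(df[column])
        return df

Each individual step looks like progress, which is exactly why the loop is so easy to fall into.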

This, for now, is actually a vital contribution that the human user still brings to a project: direction. This very loop is one of the biggest barriers to fully automating development, because the bot doesn't know when to throw its hands up and say, "screw it, I'm going to try something entirely different." And that, amusingly, is something humans are pretty good at.

AI continues to be a tool (albeit an increasingly capable and adaptable one), and there are emerging soft skills that human users can pick up to get to their desired outcome faster or with higher quality. Even if you're not planning on picking up any programming projects, this pattern of behavior is one to be aware of when you're pitched "AI-infused" solutions. Not everything should be AI-driven, even if it's becoming a necessary mechanism for managing overhead costs and scale.

There should always be a human driving the AI project toward a well-understood destination. Otherwise, the inability to interrogate AI about its decisions means you're building... but possibly not toward anything real.

I like the way Adam Savage (the Mythbuster) put it when talking about AI in the context of art. To paraphrase, he says that art is intended to convey a point of view, a perspective the creator is trying to share. For now, AI doesn't have that—but a human using AI may, and that's still where the value sprouts.

And if ChatGPT asks to hang its drawing on the refrigerator, GET OUT OF THERE! THE LOOP IS RESTARTING...

Amanda Berlin

Marketing Leadership | Fractional CMO for Service-Based B-to-B Businesses

6 days ago

Roy, it's a reminder that even with all its capabilities, AI still needs human guidance to avoid going down rabbit holes.

Simsan Mallick

IT Consultant | Expert in Software Outsourcing, IT Staff Augmentation, and Offshore Office Expansion | Delivering High-Quality Web & Mobile Application Solutions

2 months ago

An interesting example of a fallacy that both AI and users can fall into is confirmation bias: focusing on information that confirms our preconceptions while ignoring evidence to the contrary. How do you ensure your team stays vigilant against such biases in decision-making?

Woodley B. Preucil, CFA

Senior Managing Director

2 months ago

Roy Steves Very insightful. Thank you for sharing.

Theresa Gollini

Digital Account Manager | Lean Six Sigma Yellow Belt, Diversity, Equity and Inclusion

2 个月

Love this
