When Good Prompts Turn Into Garbage

While it’s nice to call ChatGPT an artificial intelligence, it’s not quite accurate.

The term has drifted from science into branding, which is why most people are confused about the difference between ChatGPT and Skynet.

If you’re not getting the results you want when working with ChatGPT, this misconception is probably the reason.

What is Garbage?

This is when you, the user, make a mistake.

It is called operator error or pilot error in other industries.

If you ask a broken question, you get a broken answer.

If I want to know 1+1, but I enter 1+2 into the calculator…I get the wrong answer.

The calculator is correct.

But it is correct for the wrong question.


How Computers Handle Mistakes

If you fire up the terminal on your Mac, you enter the most unforgiving computing environment.

This is what I saw the first time I turned on a computer in the early 1980s.

If you misspell a single word, you have to start your command all over again.

There is no room for error.

Computers operate on the assumption that the user is infallible and would never enter a command by accident.

We added a feature called the confirmation prompt: one extra check when you try to do something permanent.

Are you sure you want to delete all your data?

Y/N

Unfortunately, we get so used to hitting that “yes” button that it becomes muscle memory.

We sometimes hit it by accident.

And lose all that data.
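To make that single check concrete, here is a minimal sketch in Python of a Y/N confirmation guarding a destructive action. The delete_all_data function is a hypothetical stand-in for anything permanent, not a real command.

```python
def delete_all_data():
    # Hypothetical stand-in for any permanent, irreversible action.
    print("All data deleted.")


def confirm(question: str) -> bool:
    """Ask a Y/N question and return True only for an explicit yes."""
    answer = input(f"{question} Y/N ").strip().lower()
    return answer in ("y", "yes")


if confirm("Are you sure you want to delete all your data?"):
    delete_all_data()
else:
    print("Nothing was deleted.")
```

That one check is all that stands between muscle memory and the delete.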


How ChatGPT Handles Mistakes

ChatGPT is far more forgiving than the command line.

You can misspell words and use bad grammar and usually ChatGPT will figure it out.

This is really what it means to say that ChatGPT is an A.I.

It can accept a wider range of garbage and recycle it into something useful.

This is really great as I don’t have to fix my grammar and spelling before submitting a prompt.


You Can Still Make Garbage

ChatGPT is by no means infallible.

It is still programmed with the core belief that the user is infallible.

While it can figure out what you mean when you use the wrong word or grammar, you can sabotage yourself without realizing it.

I was recently asking ChatGPT to help me remember a video game that I liked on the Nintendo Switch.

It could not find the answer and I was very frustrated.

Until I removed “Nintendo Switch” from my prompt.

It was a PlayStation game that I was looking for.

In its native mode, ChatGPT is not allowed to go outside the bounds of what I asked.

It is not allowed to ask for additional information…unless I give it permission. (I’ll show you in a moment.)

ChatGPT will not tell you that your question is broken or that you’ve provided incomplete data.


The Danger of Big Prompts

Before we get into the solution, I want to point out why this problem can become insidious.

You buy a really long prompt from someone and over time, you make little tweaks to it.

You copy and paste from conversation to conversation without realizing that you have drifted from the original prompt.

Like a photocopy of a photocopy of a photocopy, the prompt is no longer perfect.

This can become even more dangerous with a multi-step prompt.

If I give you instructions to make a sandwich, but forget step seven, you are going to fail.

But.

You are going to spend a bunch of time getting to the failure point.


The Perfect Prompt

The solution is simple: give ChatGPT permission to ask you questions.

I was recently reading a science fiction novel called Beachhead by FX Holden (a pen name for Tim Slee). One line stuck with me:

“I’ll tell you our objective, and you can keep asking questions until you think you have enough information to start proposing some ideas for how we might achieve it.”

This prompt is so perfect. It’s an elegant way of turning my one-step Master Prompt into a two-step prompt that is closer to natural conversation.

When you start the conversation this way, you all but eliminate the chance of garbage sneaking in.
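If you work with ChatGPT through the API instead of the chat window, the same question-first pattern looks roughly like this. This is a minimal sketch using the OpenAI Python SDK; the model name, the objective, and the follow-up answer are illustrative placeholders, not the Master Prompt itself.

```python
# Sketch of the "ask me questions first" pattern over the OpenAI API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "user",
        "content": (
            "I'll tell you our objective, and you can keep asking questions "
            "until you think you have enough information to start proposing "
            "some ideas for how we might achieve it. "
            "Objective: help me remember the name of a puzzle game I loved."
        ),
    }
]

# Step 1: the model responds with clarifying questions instead of guesses.
first = client.chat.completions.create(model="gpt-4o", messages=messages)
print(first.choices[0].message.content)

# Step 2: answer its questions, then let it propose ideas.
messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "It was a co-op puzzle game, and I'm not sure which console it was on."})
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```

The point is the shape of the exchange: one turn for the objective, at least one turn for its questions, and only then the proposals.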


Cooperation is the Solution

Most of “prompting” is an attempt to translate your goal into a language that the computer will understand.

If you make a mistake in translation, this is where your results start to go off track.

This comes from the command-obey mindset that is prevalent in how most prompt engineers teach prompting.

When you give ChatGPT permission to ask you questions, you move into a cooperative adventure.

You can work together toward the same goal and now ChatGPT feels comfortable sharing its own ideas.

This is the logic behind my recent article explaining why you only need to learn one prompt.

You can read it right here.

https://www.dhirubhai.net/pulse/perfect-chatgpt-prompt-jonathan-green-1j7hc/

- Jonathan
