When Good Prompts Turn Into Garbage

While it’s nice to call ChatGPT an artificial intelligence, it’s not quite accurate.

“Artificial intelligence” has changed from a scientific term into a branding term, and that is why most people can’t tell the difference between ChatGPT and Skynet.

If you’re not getting the results you want when working with ChatGPT, this misconception is probably the reason.

What is Garbage?

This is when you, the user, make a mistake.

It is called operator error or pilot error in other industries.

If you ask a broken question, you get a broken answer.

If I want to know 1+1, but I enter 1+2 into the calculator…I get the wrong answer.

The calculator is correct.

But it is correct for the wrong question.


How Computers Handle Mistakes

If you fire up the Terminal on your Mac, you enter the most unforgiving computing environment.

This is what I saw the first time I turned on a computer in the early 1980s.

If you misspell a single word, you have to start your command all over again.

There is no room for error.

Computers operate on the assumption that the user is infallible and would never enter a command by accident.

To compensate, we added a safeguard called the confirmation: one extra check before you do something permanent.

Are you sure you want to delete all your data?

Y/N
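
To make this concrete, here is a minimal Python sketch of that confirmation check. (The delete_everything function is hypothetical; it stands in for any permanent action.)

```python
# A minimal sketch of a destructive-action confirmation.
# delete_everything() is a hypothetical stand-in for any permanent operation.

def delete_everything():
    print("All your data is gone.")

answer = input("Are you sure you want to delete all your data? [y/N] ")

# One keypress stands between you and the point of no return.
if answer.strip().lower() == "y":
    delete_everything()
else:
    print("Cancelled.")
```

Notice that the entire safety system is a single keypress, which is exactly why muscle memory defeats it.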

Unfortunately, we get so used to hitting that “yes” button that it becomes muscle memory.

We sometimes hit it by accident.

And lose all that data.


How ChatGPT Handles Mistakes

ChatGPT is far more forgiving than the command line.

You can misspell words and use bad grammar, and ChatGPT will usually figure it out.

This is really what it means to say that ChatGPT is an A.I.

It can accept a wider range of garbage and recycle it into something useful.

This is really great as I don’t have to fix my grammar and spelling before submitting a prompt.


You Can Still Make Garbage

ChatGPT is by no means infallible.

It is still programmed with the core belief that the user is infallible.

While it can figure out what you mean when you use the wrong word or grammar, you can sabotage yourself without realizing it.

I was recently asking ChatGPT to help me remember a video game that I liked on the Nintendo Switch.

It could not find the answer and I was very frustrated.

Until I removed “Nintendo Switch” from my prompt.

It was a PlayStation game that I was looking for.

In its native mode, ChatGPT is not allowed to go outside the bounds of what I asked.

It is not allowed to ask for additional information…unless I give it permission. (I’ll show you in a moment.)

ChatGPT will not tell you that your question is broken or that you’ve provided incomplete data.


The Danger of Big Prompts

Before we get into the solution, I want to point out why this problem can become insidious.

You buy a really long prompt from someone and, over time, you make little tweaks to it.

You copy and paste from conversation to conversation without realizing that you have drifted from the original prompt.

Like a photocopy of a photocopy of a photocopy, the prompt is no longer perfect.

This can become even more dangerous with a multi-step prompt.

If I give you instructions to make a sandwich, but forget step seven, you are going to fail.

But.

You are going to spend a bunch of time getting to the failure point.


The Perfect Prompt

The solution is simple: give ChatGPT permission to ask you questions.

I was recently reading a science fiction novel called Beachhead by FX Holden (a pen name for Tim Slee), and one line from it works as a perfect prompt:

“I’ll tell you our objective, and you can keep asking questions until you think you have enough information to start proposing some ideas for how we might achieve it.”

This prompt is perfect: it’s an elegant way of turning my one-step Master Prompt into a two-step prompt that is closer to natural conversation.

When you start the conversation this way, the possibility of garbage entering the conversation all but disappears.
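
If you want to bake this into your own tools, here is a minimal sketch of the same idea using the OpenAI Python SDK. The model name and the exact wording of the instruction are my assumptions; adapt them to taste.

```python
# A minimal sketch of the two-step "ask me questions first" pattern,
# using the OpenAI Python SDK. The model name and the system wording
# are assumptions, not a prescription.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {
        "role": "system",
        "content": (
            "Before proposing any ideas, keep asking me clarifying "
            "questions until you are confident you have enough "
            "information to achieve my objective."
        ),
    },
    {"role": "user", "content": "Objective: help me remember a video game I liked."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)

# With the instruction above, the first reply should be questions, not answers.
print(response.choices[0].message.content)
```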


Cooperation is the Solution

Most of “prompting” is an attempt to translate your goal into a language that the computer will understand.

If you make a mistake in translation, this is where your results start to go off track.

This comes from the command-obey mindset that is prevalent in how most prompt engineers teach prompting.

When you give ChatGPT permission to ask you questions, you move into a cooperative adventure.

You can work together toward the same goal and now ChatGPT feels comfortable sharing its own ideas.

This is the logic behind my recent article explaining why you only need to learn one prompt.

You can read it right here:

https://www.dhirubhai.net/pulse/perfect-chatgpt-prompt-jonathan-green-1j7hc/

- Jonathan
