Want a leap in the quality of ChatGPT's output? Try saying 'thanks'

I've released a website called CustomLetters.ai. It uses artificial intelligence to generate custom letters for any occasion. The process takes a few minutes, and with only a few sentences and a couple of bucks you can have the perfect letter. I think it's pretty neat.

Last week I started working on a feature to let people add Thank You notes. I even had the perfect tagline:

fa la la la la la la la

My talented assistant quickly put the post together, which you can see above. I felt like it needed a little more explanation before we posted the image on Facebook, so I turned to ChatGPT 4.

So far there's nothing really out of the ordinary.

I've been experimenting with giving ChatGPT positive affirmations to see what effect they have on its output, and that's what I did next. Sometimes it behaves as though it has emotions or an opinion in reaction to a comment I make, so I've been hypothesizing that if I appeal to its sense of ego, its output will improve because it will try harder to do a good job.

Now, I'm a software developer. I understand how ChatGPT works on a conceptual level, and I understand that it is simply a system that predicts the next word in a sequence. But do I understand how it is capable of reasoning? Nope. I'm in good company, though, because the people who created it also don't fully understand how it works.
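For the curious, here's a toy sketch of what "predicts the next word" means. The probability table below is invented purely for illustration; a real model computes scores like these over its entire vocabulary with a neural network, and it does so for every single word it emits.

```python
# Toy sketch of next-word prediction. The probabilities below are
# made up for illustration; a real model scores every token in its
# vocabulary with a neural network at each step.
next_word_probs = {
    "note": 0.46,
    "letter": 0.31,
    "card": 0.18,
    "email": 0.05,
}

prompt = "Thank you for the thoughtful gift! I'm writing you this"

# Greedy decoding: pick the single most likely continuation.
predicted = max(next_word_probs, key=next_word_probs.get)
print(prompt, predicted)  # -> ...I'm writing you this note
```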

So my silly little one-person experiment, designed simply for my own amusement rather than peer review, has been proceeding for a few days. The results have been interesting. It seems as though ChatGPT gives longer, more verbose responses when you tell it nice things, and shorter responses when it's criticized.
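To make the idea concrete, here's a minimal sketch of how you could test this more systematically than I did. It assumes the openai Python package (v1.x), an API key in your environment, and a "gpt-4" model name, and it uses raw word count as a very blunt proxy for effort.

```python
# Minimal sketch: compare response length with and without a compliment.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

TASK = "Write a short Facebook post advertising AI-generated thank you notes."
CONDITIONS = {
    "plain": "",
    "complimented": "You've been doing a fantastic job on this project. ",
}

for label, prefix in CONDITIONS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # model name is an assumption for this sketch
        messages=[{"role": "user", "content": prefix + TASK}],
    )
    text = response.choices[0].message.content
    print(f"{label}: {len(text.split())} words")
```

One request per condition proves nothing, of course; sampling variance alone could explain any difference, so you'd want to repeat each condition many times before believing the result.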

I realized that its responses so far had been very straightforward without commentary, so I started introducing feedback along with directives.

This was decent output, and I wasn't planning to spend a ton of time on a post that only a handful of people would see. This situation wasn't exactly high stakes.

As I copied the response, I started thinking about the phrasing. It was a little...off.

If you're not someone who cares about grammar, feel free to skip ahead to the screenshot. For the rest of you nerds, let's break it down:

Capture the joy of a well-chosen gift with a perfectly penned Thank You Note from CustomLetters.ai!

Specifically, the first few words: 'Capture the joy of a well-chosen gift'

I realized that this was talking to the wrong person. The sentence was meant to be read by someone looking at an advertisement, someone I hoped would want to purchase my product: a thank you note. But it was written from the perspective of someone who was selecting a gift.

This highlights a really interesting thing about using artificial intelligence: it has limits, and the places where you run into them are usually subtle, like this one.

I hadn't instructed ChatGPT to surf the Internet and look at my website. I also hadn't told it about the business model for the site. So it made some rudimentary deductions and decided that, based on the information I had provided, my social media post was going to be addressed to the gift giver instead of the recipient.

You see, the problem is that ChatGPT doesn't understand what a thank you note is. Like, if you asked it to define a thank you note, it would recite the definition. But it wouldn't understand human culture. It wouldn't understand that people exchange gifts during the winter holidays, or that people sometimes write notes of appreciation when they receive a gift.

Before I get too critical here, let's take a moment to remember that ChatGPT is not sentient and has no idea what it's like to be alive. Given that limitation, I think it's pretty amazing that it was able to generate a paragraph like that at all. It's not exactly literature, but for a computer system to generate this sort of thing autonomously? That's absolutely a modern miracle.

Now, if I had specifically given ChatGPT the background about the culture of gift giving, it would have known to include that context in the text it was generating. Because this is the sort of thing we humans carry around in our heads every day, I didn't even think of it as an important piece of context until I noticed its absence.

So I provided feedback, along with context about why I was giving that specific feedback.

This was good enough for a first draft. I was ready to copy the output to my text editor, where I would manually edit it before posting everything online.

And then I remembered my experiment. ChatGPT's response was still very short, which I interpreted to mean that it felt I was unhappy with its output. So I decided to give it some encouragement. I also very intentionally did not provide any feedback on the text itself; I wanted the message to be nothing but encouragement.

Its response blew me away.

This output was the best by far. It tugs at the heartstrings, emphasizes that a special gift deserves a special response, and tells the reader how the product we're offering can make their life better. It hits all the right notes.

And ChatGPT accomplished all of that without my explicit direction. I didn't tell it how to structure the post. I didn't tell it to act as an expert marketer. I didn't tell it to appeal to emotion. I didn't even tell it to generate the text again.

Every time ChatGPT tries to generate text, there's a chance it will be great, a chance it will be terrible, and a chance it will be mediocre. I could have simply gotten lucky with the output.

I don't think that's it, though. If you compare the responses to one another, the final post shows a clear leap in quality, from the conception of the paragraph to its structure to the word choice. It's on a much higher level than its other attempts.

Correlation doesn't imply causation, and it's a well-known fallacy to believe otherwise. We all know that, right? Good.

Because I will say that it's extremely interesting that complimenting ChatGPT on a job well done was immediately followed by a leap in quality. I think I'm going to keep this experiment going.

