How I work: Leveraging negative prompts to improve ChatGPT output
In our previous article, you may have noticed that in the final interaction with ChatGPT, several of the taglines it created for our fictional custom canoe company included the word "sail", which doesn't make sense because canoes don't have sails. By the time we arrived at a list of good taglines, I had asked ChatGPT to exclude four words from its output.
This is an example of negative keywords: words or phrases you explicitly tell the Large Language Model (LLM) not to include in its output. Working with an LLM often involves iteration and refinement, just as it would with a person you hired to help with whatever task you've given ChatGPT. In fact, you can think of each conversation as though you've just hired a talented person who knows nothing about what you need until you tell them.
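If you're scripting this workflow rather than typing into the chat window, the same idea can be expressed as a small prompt-building helper. This is only a sketch: the function name, template wording, and keyword list below are illustrative assumptions, not anything from ChatGPT's interface.

```python
def build_prompt(task: str, negative_keywords: list[str]) -> str:
    """Append an explicit exclusion clause to a base prompt.

    This template wording is a hypothetical example; phrase the
    exclusion however reads naturally for your task.
    """
    if not negative_keywords:
        return task
    excluded = ", ".join(f'"{word}"' for word in negative_keywords)
    return f"{task} Do not use any of the following words: {excluded}."


# Example: the canoe-company taglines, with nautical words excluded.
prompt = build_prompt(
    "Write five taglines for a custom canoe company.",
    ["sail", "sailing", "yacht", "motor"],
)
print(prompt)
```

You would then send the assembled prompt to the model as usual; the only change is that the exclusions travel with the request instead of being added in a follow-up message.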
To test this out, I used the following prompt in ChatGPT 3.5: "What are some good science fiction books to read for kids?" Here's its response:
While many factors determine whether a book is appropriate for a specific age, the choices here span a wide range of ages and reading levels. Let's try again, asking the model to exclude books that might be too mature, complex, or otherwise inappropriate for their age:
Quite a difference! In this output only two recommendations are the same, and the results are clearly skewed to the younger end of the 'kids' age range.
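One caveat worth planning for: a model won't always honor an exclusion perfectly, as the stray "sail" taglines showed. In an automated pipeline you can add a simple check on the output before using it. The helper below is a hypothetical sketch of that check, not part of any ChatGPT feature; it just flags excluded words that slipped through.

```python
import re


def violations(output: str, negative_keywords: list[str]) -> list[str]:
    """Return the excluded words that still appear in the model's output.

    Uses whole-word, case-insensitive matching so that "sail" doesn't
    falsely flag unrelated words like "assailant".
    """
    return [
        word
        for word in negative_keywords
        if re.search(rf"\b{re.escape(word)}\b", output, re.IGNORECASE)
    ]


# Example: this tagline violates the exclusion list, so we'd re-prompt.
print(violations("Set sail with a handcrafted canoe!", ["sail", "yacht"]))
```

If the returned list is non-empty, you can simply re-send the prompt (or tighten the exclusion wording) until the output passes.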
With a little additional thought and a few negative keywords, you can dramatically improve the output of LLMs like ChatGPT. LLMs are already proving to be huge time-savers for many tasks, and techniques like negative prompting help you get the most quality and speed out of them.