How to write great AI prompts
Mark Topps
Social Care Leader | Regional Business Manager | Co-Founder of The Caring View | Blogger | Award-Winning Care Mentor for Business Development, Mental Health, and Work-Life Balance
You can do amazing things with artificial intelligence — if you know how to write (and rewrite) great AI prompts.
I came across two great guides that I've used to inform this piece: one from Notion, written in collaboration with AI engineer Theo Bleier, and one from ZDNET.
Let’s start by thinking about how LLMs actually work
Large language models (LLMs) like Notion AI and ChatGPT use datasets comprising vast amounts of language — the equivalent of millions of books, web pages, and so on. They turn each word (or parts of a word) in these billions of sentences into a token, then rank every token according to how often it’s used next to every other token in that dataset.
When you prompt an AI model, it uses these rankings to study your request and send back what it considers an ideal response. For simple prompts, this process seems fairly straightforward.
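If you're curious what tokens actually look like, here's a rough sketch using OpenAI's open-source tiktoken library. It's purely illustrative: it only shows how a sentence gets split into tokens, not how the model ranks or uses them.

```python
# A rough look at tokenisation, using OpenAI's tiktoken library.
# pip install tiktoken
import tiktoken

# "cl100k_base" is one common encoding used by recent OpenAI models.
enc = tiktoken.get_encoding("cl100k_base")

text = "How can I prepare for a marathon?"
token_ids = enc.encode(text)

print(token_ids)                              # a short list of integers
print([enc.decode([t]) for t in token_ids])   # the text piece behind each token
```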
Talk to your model like it’s human
Speak normally
Generative AI models aren’t like Siri or Google Assistant, which only respond effectively to exact phrases. Having been trained on mountains of conversational dialogue, your language model knows all the nuances of how people converse with and text each other. Speak to it like you’d speak to a human and you’ll get a better (more human) response.
Don't be afraid to ask multi-step questions. Ask a question, get a response. Based on that response, ask another question. I've personally done this 10-20 times in a row and gotten very powerful results. And it fits with the "talking to a friend" analogy. You wouldn't just ask one question to a friend and then walk away. You'd have a conversation. Do the same with the AI.
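If you prefer scripting to a chat window, the same back-and-forth pattern applies in code: you keep appending the earlier messages so the model can see the whole conversation. Here's a minimal sketch using the OpenAI Python client; the model name and questions are just placeholders.

```python
# A multi-turn conversation: each follow-up includes the earlier messages,
# so the model answers in the context of the whole exchange.
# pip install openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
messages = [{"role": "user", "content": "How can I prepare for a marathon?"}]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
answer = reply.choices[0].message.content
print(answer)

# Follow up based on the first answer, just as you would with a friend.
messages.append({"role": "assistant", "content": answer})
messages.append({"role": "user", "content": "I only have six months. How should I adjust that plan?"})

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```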
Be concise
Make your prompt as simple as you can while still explaining your request in all relevant detail (more on that later). The clearer your language, the less likely it is that the model will misinterpret your words (more on that later, too).
Writing a prompt is more than just asking a one-sentence question. It often involves providing relevant background information to set the context of the query.
Let's say that you want to prepare for a marathon. You could ask:
How can I prepare for a marathon?
But you'll get a far more nuanced answer if, instead, you tell the AI that you're training for your first marathon. The answers you get will be more focused on your needs, as in:
I am a beginner runner and have never run a marathon before, but I want to complete one in six months. How can I prepare for a marathon?
Don’t use negative phrases like "Don't use negative phrases"
When you say, “Do not...”, an LLM might focus on “Do” while ignoring the “not,” and thus take the exact action you think you’ve instructed it to avoid. So:
BAD: Do not include incomplete lists.
GOOD: Only include complete lists.
Tell your model everything it needs to know
Now that we’ve discussed how to talk to our LLM, let’s get into what we’re going to talk about. I’ve chosen a research project a typical market analyst might want help with, but you can ask AI about anything you want, from schoolwork to how to put together a great menu for a dinner party on New Year’s Eve. The same principles apply.
Let’s say you’re a market analyst for a sporting goods company and you need to write a report on the best U.S. cities in which to launch a new line of camping gear. How should you ask?
Give your model an identity
One of AI's coolest features is that it can write from the point of view of a specific person or profession. It could write as a pirate, Shakespeare, a journalist, a marketing executive, a teacher, or anyone else you want it to consider.
Want your model to do the work of a market analyst? Start by telling it so: "You are a market analyst." Some people also swear by adding "take a deep breath" before the request.
Yeah, it's weird, but it works. LLMs train on human language. Tell your model to assume it's a market analyst and it will emphasise token patterns that are linked to actual market analysts. When you think of it in those terms, giving your model an identity isn't all that weird. Telling it to take a deep breath before it responds to your prompt really is weird, and apparently that works too.
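In a chat window, you can simply open your prompt with the identity. If you're calling a model from code, the usual home for an identity is the system message. A minimal sketch, again using the OpenAI Python client with placeholder wording:

```python
# Giving the model an identity via the system message.
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a market analyst for a sporting goods company. Take a deep breath before answering."},
        {"role": "user", "content": "Which U.S. cities look most promising for launching a new line of camping gear, and why?"},
    ],
)
print(reply.choices[0].message.content)
```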
Be specific
Language models understand language, but you can't assume your model will interpret a vague request correctly. AI has a tendency to go off the rails, lose track of the discussion, or completely fabricate answers.
There are a few techniques you can use to keep it on track and help keep it honest.
One of my favourite things to do is ask ChatGPT to justify its responses. I'll use phrases like "Why do you think that?" or "What evidence supports your answer?" Often, the AI will simply apologise for making stuff up and come back with a new answer. Other times, it might give you some useful information about its reasoning path.
If you have a fairly long conversation with ChatGPT, you'll start to notice that the AI loses the thread. This isn't unique to AIs: in a long enough conversation with most friends, family, and coworkers, someone is bound to lose the thread. That said, when you're in a conversation with ChatGPT you can use the same techniques you use with friends. Gently guide the AI back on track, reminding it what the topic is and what you're trying to explore.
Don't be afraid to play and experiment
One of the best ways to up your skill at this craft is to play around with what the chatbot can do.
Try feeding your AI a variety of interesting prompts to see what it will do with them. Then change them up, and see what happens.
Pay attention not only to what the AI generates, but how it generates what it does, what mistakes it makes, and where it seems to run into limits. All of that detail will help you expand your prompting horizons.
More prompt-writing tips
And of course we’re all just getting started. What wonders will tomorrow’s AI be able to perform? The sky’s the limit