Five ways to level up your AI / LLM prompting skills
Benjamin Weiss
Product Management Leadership. Helping companies grow and transform using Digital and AI solutions.
Applications like OpenAI’s ChatGPT and Google’s Bard, and the large language models (LLMs) that underpin them, are important new tools, not unlike what “mobile” was back in 2007-2008. Early in my career, I recognized that mobile would fundamentally reshape computing and unlock vast opportunity for businesses and consumers alike. As a Product Management leader, I’ve had the honor and pleasure of doing just that for leading brands — helping them navigate digital, and especially mobile, transformations. And while AI certainly isn’t new, and narrow use cases have been put to great use for well over a decade now, it’s these LLMs and their brain-like reasoning abilities that have me (and so many others) convinced that we’re entering the next great paradigm shift in computing.
It’s my firm belief that those who learn to write great prompts and master LLM tools early will have a significant advantage over those who deliberately shun them. That’s true whether you’re looking to harness these tools as an individual or as a developer of software products powered by AI.
Now, let me quickly get this out of the way — there’s no question that we should all be concerned about AI alignment and safety issues, and push for the creation of legal and regulatory frameworks to govern AI use. But that also shouldn’t stop us from learning to put these tools to good use. That’s what I’d like to focus on today.
Let me also start by saying that I don’t pretend to be an expert on these tools - let’s be honest, most of us have only been working with them for about six months now. With that said, I’d like to share a bit of my own journey, and the tips I’ve learned for writing effective LLM prompts.
Learning 1: Set LOTS of context, and be more specific than you think
LLMs need a text input (prompt) to generate a text output (inference), and the more specific you are with your prompt, the better job the model will do with the inference it provides. It’s natural to think of LLMs as an alternative to search, applying the learned habits of search to AI, but that’s not quite right.
With years and years of experience with web search, we’ve all been conditioned to be as brief as possible with our queries, focusing largely on the unique keywords most likely to return the results we expect to see. LLMs are different, though. They’re not performing a lookup or document retrieval the way traditional search does. LLMs are modeled (loosely) on the way human brains work and learn (at the neuron and synapse level), and because of that distinction, prompting an LLM is much more akin to conversing with a friend or colleague.
You need to set context. You need to describe, in detail, what it is that you expect and would like to see. You need to tell the model what you DON’T want, just as much as you might tell it what you DO want. Give the model some rough examples, just as you would a friend or colleague. This is perhaps the hardest thing to initially wrap your mind around — that the model exhibits genuinely intelligent behavior, with an apparent ability to think and reason. It does this not through biological mechanisms, but by loosely approximating them computationally. Therefore, the more detailed and specific you are in your prompt, the better your inference will tend to be.
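As a concrete sketch of the difference, compare a terse, search-style query with a context-rich prompt for the same task (all of the wording below is illustrative, not from any particular tool or API):

```python
# Contrast a terse, search-style query with a context-rich prompt.
# Every string here is an illustrative placeholder.

search_style = "inflation prices relationship"

context_rich = "\n".join([
    "You are helping me prepare a briefing for non-economist executives.",
    "Explain the relationship between inflation and prices.",
    "Audience: business leaders with no economics background.",
    "DO include: one concrete example using a household good.",
    "DO NOT include: equations or academic jargon.",
    "Length: roughly 200 words, neutral and professional in tone.",
])

print(context_rich)
```

Note how the longer prompt spells out the audience, the desired example, the exclusions, and the length — exactly the details you would give a colleague.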
Learning 2: Tell the model HOW it should think
Because LLMs like the ones underpinning OpenAI’s ChatGPT and Google’s Bard are trained on incredible amounts of data, from books to articles to deep social media troves like Reddit, these models can say almost anything and behave like almost anyone. That, too, is a hard thing to wrap your mind around initially, but it’ll quickly cement itself once you begin to instruct the model HOW it should approach a particular prompt and observe how those subtle changes affect the inferences generated.
Do you want the model to think and respond as an expert in a particular field of study, as a layman, as a child? After ChatGPT’s public release late last year, users quickly found great humor in adding “as a pirate” to their ChatGPT prompts (and still do, let’s be honest).
There’s wisdom in this “as a pirate” prompt, though, and it is a context detail you need to incorporate into any great prompt, often at the very start. Try asking one of these models a question about economics (“describe the relationship between inflation and prices”). Then, start a new session, but first tell the model “you’re a Nobel prize-winning economist who shares valuable advice with any person who asks for it.” You’ll be surprised to see how much these details matter in generating more detailed and, often, more accurate inferences. I’ve even found that these details can help the model avoid hallucinating facts and information that are completely false. When researchers tested GPT-4 on advanced exams like the bar, their prompts for any single question were often several hundred words, and included instructions about how the model should think.
This extends even deeper, into things like tone of voice and personality. You can tell the model to be witty, to be passive-aggressive, to be optimistic, to use only legalese, or Pig Latin, and so on. It’ll happily oblige, providing you with a very different output based on these prompt instructions.
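As a sketch of how this is done in practice, chat-style APIs conventionally accept the persona as a first, privileged instruction — often a message with a "system" role. The `{"role": ..., "content": ...}` shape below follows that common convention; treat the field names and the prompt wording as assumptions, not a guaranteed schema:

```python
# Build a chat payload whose first message sets persona and tone.
# The {"role": ..., "content": ...} shape follows a common chat-API
# convention; the field names and wording are assumptions.

def build_chat(persona: str, question: str) -> list:
    """Prepend a persona instruction so the model adopts that voice."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": question},
    ]

messages = build_chat(
    persona=("You are a Nobel prize-winning economist who shares valuable "
             "advice with anyone who asks. Keep the tone witty and optimistic."),
    question="Describe the relationship between inflation and prices.",
)
```

Swapping only the persona string — expert, layman, child, pirate — changes the character of every inference that follows.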
Learning 3: Iterate through a problem with the model, give it feedback along the way
One of the other search habits we need to break free of is the notion that we must accomplish our task with just one prompt. With chat-optimized LLM tools like ChatGPT and Bard, it’s not just the last prompt that gets fed to the model to generate the next inference. Rather, the entire conversation, including the model’s previous inferences, is fed back to generate the latest one. This is not only how the model seems to “know” what was said a few moments ago; it also means that we have an opportunity to iterate through a particular problem — by observing the model’s inferences along the way, we can give it feedback that gets us closer and closer to our desired output.
In other words, if your first prompt didn’t give you exactly what you wanted, that’s fine. Keep going. Tell it what it did that you didn’t want, nudge it, and start to add those context details to help iterate toward the right output.
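A minimal sketch of that loop is below. `send_to_model` is a hypothetical stand-in for a real API call — no actual model is invoked — but it shows the key mechanic: the full history, including the model’s own prior replies, is resent on every turn.

```python
# Iterative refinement: the whole conversation so far, including the
# model's previous replies, is resent each turn, so feedback accumulates.

def send_to_model(history):
    # Hypothetical stand-in: a real implementation would pass `history`
    # to an LLM API and return the model's reply message.
    return {"role": "assistant",
            "content": f"(draft based on {len(history)} messages)"}

history = [{"role": "user", "content": "Draft a product launch email."}]
history.append(send_to_model(history))  # first draft

# Feedback turn: nudge the model instead of starting over.
history.append({"role": "user",
                "content": "Good start, but cut the jargon and add a clear call to action."})
history.append(send_to_model(history))  # revised draft sees the full history
```

Each nudge becomes permanent context for every later turn, which is why iterating beats restarting from scratch.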
Learning 4: Give the model an outline
I like to think that the greatest productivity wins from LLMs come from their ability to go from 0 to 90% in about three seconds. What I mean is that most of our time is often spent authoring that first draft, starting from a blank page. LLMs are great tools for authoring a first draft, whether it’s a blog post, an email, a script, a resume, a speech, or more.
Just like humans, LLMs tend to write better first drafts when they’ve got an outline to get them started. So, provide the model with your outline. It doesn’t need perfect formatting — it could be a couple of sentences, notes, or thoughts. Just enough context to make certain points come to life in the first draft. And going back to Learning 1, be specific and add context. Who is the audience? What do you want them to walk away with? All the key details you’d supply to a human author are necessary inputs for the model. Don’t even have that much time? Ask the model to provide an outline based on a couple-sentence description of what you want, then ask it to build an article from the outline it just provided.
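A quick sketch of turning rough notes into an outline-driven draft prompt (every string below is illustrative placeholder content, not a recommended template):

```python
# Turn loose notes into an outline-driven first-draft prompt.
# All wording is an illustrative placeholder.

outline = [
    "Hook: most writing time is lost on the blank-page first draft",
    "LLMs can get you from 0 to 90% in seconds",
    "Close: iterate from the draft; don't expect perfection",
]

prompt = (
    "Write the first draft of a short blog post for product managers.\n"
    "Goal: convince them to try LLMs for drafting.\n"
    "Follow this outline:\n"
    + "\n".join(f"{i}. {point}" for i, point in enumerate(outline, 1))
)
print(prompt)
```

The audience and goal lines carry the Learning 1 advice; the numbered outline gives the draft its skeleton.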
Learning 5: The model isn’t just your worker, it can also be your coach
While the model is there to do your bidding, you’ll quickly realize that its learned knowledge is significantly more vast than your own (perhaps not in every topic, but generally it will be across a broad array of topics). On the one hand this can feel pretty defeating, but you can also use this fact to your advantage.
When you’re feeling stuck writing a prompt, and you aren’t sure what context clues might be necessary to refine your ask, simply ask the model something like, “What questions should I be asking about topic X?” This trick can quickly yield a trove of information you can then use to add the necessary context details to your next prompt. Think of the model not just as your worker, but also as your coach.
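The coach-then-worker pattern can be sketched as two prompts, where the first one exists only to improve the second (the topic and all prompt wording below are illustrative):

```python
# "Coach first, worker second": ask the model what to ask, then fold
# its answers into the real request. All wording is illustrative.

topic = "negotiating a SaaS contract renewal"

coach_prompt = (
    f"What questions should I be asking about {topic}? "
    "List the five most important ones and why each matters."
)

# After reviewing the model's answer, build the refined "worker" prompt
# in the same conversation so the questions remain in context:
worker_prompt = (
    f"Now, acting as an experienced advisor, help me with {topic}. "
    "Address the questions we just identified, and be specific about "
    "risks and points of leverage."
)
```

Because the coach turn happens in the same conversation, its output becomes context for the worker turn automatically (see Learning 3).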
I hope these tips prove useful to you as you dive into this exciting new world of large language models. It’s clear that these digital brains are going to have profound impacts on the way we all work and live in the coming months and years, and I encourage everyone to invest the time now to learn how these powerful tools can augment your own abilities, help you do more in less time, and enable you to automate routine tasks so that you spend more time on the things that bring you joy.
Happy prompting.
Data Science Consultant | Machine Learning Engineer | AWS Certified Machine Learning Specialty | AWS Certified Solutions Architect Associate | CISSP | MBA
1y · Interesting points you made here. I use GPT-4 a lot as my professional writing assistant, coach, consultant, and more. As a consultant it isn’t perfect, but it is as good as many subject matter experts I have encountered who occasionally make inaccurate statements. I like to think of my interactions with it as managing a team of consultants where the responsibility to review and approve its work product falls in my lap. I recently completed a short one-hour course created by OpenAI and DeepLearning.AI on prompt engineering. It covers how to leverage GPT’s API to build powerful machine learning APIs in a fraction of the time it used to take. Even though it is targeted toward developers, I highly recommend it even for non-technical people: https://www.deeplearning.ai/short-courses/chatgpt-prompt-engineering-for-developers/
Chief Technology Officer
1y · Very well written as always, Benjamin Weiss! You should rebrand it “LLM Prompting for Dummies by Ben Weiss” since I would not be the target audience for it :-) But extremely accurate from my personal experience.
former CEO at Redbubble
1y · Very well-written and actionable -- thank you Ben!
Senior Director of Media Strategy & Planning at Walgreens | ex. Walmart Inc, Sears, Omnicom, Publicis, and a few start ups :)
1y · Been using it for company research and industry research. Usually really good at summarizing key points.
Product Management Leadership. Helping companies grow and transform using Digital and AI solutions.
1y · For the record, I know Bard and ChatGPT aren’t the LLM models themselves… (that’s GPT-3.5, GPT-4, LaMDA, PaLM 2, etc.); they’re the applications wrapping these models. But for the sake of simplicity/accessibility, I’m generally referring to the applications as the models, since that’s the way most users are thinking about them at this stage. The over-corrector in me feels the need to make this disclaimer!