AI Prompt Mastery: Learn Science-backed Techniques for LLM Success
TEAM International
Global IT consulting company focused on transforming business outcomes through agile and innovative IT solutions
As you've likely figured out for yourself, LLMs (Large Language Models) are fantastic tools for productivity, learning, and creative support. Whether it's writing Python scripts, drafting emails, or deciphering confusing jargon, LLMs are an incredible resource.
However, LLMs are also quite new, and most people haven't mastered the proper way to use them. Firing off one haphazard question after another, which is what most of us do, is a surefire way to get disappointing results.
Instead, why not learn the proper techniques to supercharge your LLM outputs?
This article will show you exactly how! We'll break down four of the best prompting techniques and use real-life LLM prompts and responses to show you what they look like in action.
Get ready to elevate your LLM mastery!
Few-Shot Prompting
Few-shot prompting is an LLM technique where a small number of examples (the "shots") show the model what the task is and what a relevant output looks like.
This technique relies on minimal example data and is best utilized when little data is available, or when a user is experimenting with new problems and prefers to iterate quickly.
Let's use a variation of an example from a research paper (Brown et al., 2020) to see how it works:
Prompt:
"Have you ever heard of a 'glingle'? It's supposed to be a tiny, brightly colored bird with a melodic song, native to the Amazon rainforest. An example of a sentence using 'glingle' could be: 'Hiking through the rainforest, we were enchanted by the sweet melodies of the glingles.'
Now, what about 'frooping'? Apparently, it means skipping along while humming happily. Can you think of a sentence using 'frooping'?"
Output:
"After receiving the good news, she couldn't help but froop down the street, humming a little tune."
The example is a simple one, but it demonstrates the logic behind the technique quite well. By providing a clear example of the expected output along with the definition of "frooping," the LLM was able to provide an appropriate response.
Now let's consider a slightly more complicated scenario.
For instance, let's imagine a user who wants to create a Python script but has only one properly coded script as a reference. They could present that script as an example and provide additional instructions for generating the new script they're looking for.
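To make the pattern concrete, here is a minimal sketch of how a few-shot prompt can be assembled programmatically. The function name and the "Input:/Output:" formatting are illustrative conventions, not part of any particular LLM API:

```python
def build_few_shot_prompt(examples, task):
    """Assemble a few-shot prompt: worked examples first, then the new task.

    `examples` is a list of (input, output) pairs. The labels and layout
    here are one plausible convention, not a fixed standard.
    """
    parts = []
    for source, target in examples:
        parts.append(f"Input: {source}\nOutput: {target}")
    # End with the new task and an open "Output:" for the model to complete.
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)


# One "shot" based on the glingle/frooping example above.
examples = [
    ("'Glingle' means a tiny, brightly colored Amazonian songbird. "
     "Use it in a sentence.",
     "Hiking through the rainforest, we were enchanted by the sweet "
     "melodies of the glingles."),
]
prompt = build_few_shot_prompt(
    examples,
    "'Frooping' means skipping along while humming happily. "
    "Use it in a sentence.",
)
print(prompt)
```

The assembled string would then be sent to the model of your choice; the single worked example gives it the shape of the answer you expect.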
Chain-of-Thought Prompting
Chain-of-thought (CoT) prompting is a technique that seeks to mimic the sequential nature of human reasoning in LLMs.
By providing step-by-step reasoning, or by prompting the LLM to "describe its reasoning step by step," users can enhance a model's problem-solving capabilities. In essence, CoT prompting is all about breaking down complex problems into manageable pieces, so it is best used for complex tasks that require several logical steps.
Here's a quick example:
Prompt:
"A library has 4 red books for every 5 blue books. If there are 180 books in total, how many red books are there? Describe your reasoning step by step."
Output:
"Imagine the books are grouped together. We know there's a pattern: every group (let's call it a 'bundle') has 4 red books and 5 blue books. We want to find the red books, but we're not given the exact numbers directly.
The key is the total number of books (180). It needs to be a multiple of the bundle size (4 red + 5 blue = 9 total). If 180 divides evenly by 9, then we know there are whole bundles of books without any leftover books.
Since 180 is divisible by 9 (180/9 = 20), there are 20 bundles. Each bundle has 4 red books, so there's a total of 20 bundles * 4 red books/bundle = 80 red books."
When a regular prompt fails to produce a correct response, CoT prompts are an excellent option, as they encourage a more thorough reasoning process.
And creating them is quite simple: all a user needs to do is prompt the LLM to explain its reasoning step by step. For some incredibly complicated tasks, they might need to spell out the logic as well, but for most problems, the former will do.
However, there is one thing users should keep in mind about CoT prompting. Research has described it as an emergent ability of large models, meaning that smaller models might not be able to reason this way.
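Since the zero-shot variant of CoT amounts to appending a trigger phrase, it can be sketched in a couple of lines. The helper name is hypothetical, and the closing lines simply sanity-check the arithmetic from the library example above:

```python
COT_TRIGGER = "Describe your reasoning step by step."

def with_chain_of_thought(question):
    """Turn a plain question into a zero-shot CoT prompt by appending
    the step-by-step instruction used in the article's example.
    (Helper name is illustrative, not a standard API.)"""
    return f"{question.strip()} {COT_TRIGGER}"

prompt = with_chain_of_thought(
    "A library has 4 red books for every 5 blue books. "
    "If there are 180 books in total, how many red books are there?"
)

# Sanity-check the model's arithmetic from the example output:
bundles = 180 // (4 + 5)   # 180 books / 9 books per bundle = 20 bundles
red_books = bundles * 4    # 20 bundles * 4 red books each = 80
```

For harder problems, the same helper could instead prepend a worked, fully reasoned example (few-shot CoT) rather than relying on the trigger phrase alone.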
Prompt Chaining
Prompt chaining is a technique where, instead of a single prompt, the user provides a series of prompts that build upon each other to steer the LLM's response in a particular direction.
Each prompt in the sequence serves to refine the output or guide the LLM's understanding toward a specific topic or line of thought. Often, the output of one prompt becomes part of the next. By chaining related prompts together, users can guide the model to generate coherent, relevant responses that explore a topic in depth or address a complex problem.
This technique is particularly useful for exploring complex or multifaceted topics, brainstorming ideas, or engaging in conversational exchanges where the flow of conversation evolves naturally through a series of prompts and responses.
Due to the nature of this prompting technique, we'll provide a different sort of example to keep things concise.
Imagine a user who needs to write an essay on the history of the United States and is using an LLM to craft a first draft. Here is what a prompt chain could look like in such a situation:
Prompt 1: "Suggest a basic structure for an essay on the history of the United States and its importance as an academic subject."
Prompt 2: "Change the order of the fourth and fifth paragraphs, then add another one on the importance of the Civil War after the second."
Prompt 3: "Now write the essay."
Prompt 4: "The conclusion feels too weak. Add more closing remarks on the importance of understanding the history of the United States, and make sure to reference events we've talked about throughout the essay."
Prompt 5: "Revise the essay for tone. Go for academic diction but make sure it remains engaging and clear."
In this example, we can see how the prompts built on one another to guide the LLM toward the finished output.
Keep in mind that, for successful prompt chaining, users should provide adequate instructions for the desired output, since altering and refining the response is an important part of what makes prompt chaining work.
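If you drive the model through an API rather than a chat window, a prompt chain is essentially a loop that threads each response back into the conversation. The sketch below assumes a generic `call_llm(history)` callable standing in for whatever client you actually use (OpenAI, Anthropic, a local model); the stub at the bottom exists only so the example runs:

```python
def run_prompt_chain(prompts, call_llm):
    """Run a sequence of prompts, feeding each response back as context.

    `call_llm` is a placeholder for a real client call: it receives the
    full conversation so far (a list of role/content dicts) and returns
    the model's reply as a string.
    """
    history = []
    response = ""
    for prompt in prompts:
        history.append({"role": "user", "content": prompt})
        response = call_llm(history)
        history.append({"role": "assistant", "content": response})
    return response  # the final, refined output


# A stub "LLM" for illustration: it just reports the turn count.
fake_llm = lambda history: f"draft after {len(history)} message(s)"

final = run_prompt_chain(
    ["Suggest a basic structure for an essay on U.S. history.",
     "Now write the essay.",
     "Revise the essay for tone."],
    fake_llm,
)
```

The key design point is that `history` accumulates: each new prompt is interpreted in light of everything the model has already produced, which is exactly what makes the refinement steps in the essay example work.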
Directional Stimulus Prompting
Unlike regular prompting, directional stimulus (DS) prompting introduces an element called a "directional stimulus," which acts as a hint or clue to guide the LLM's output generation.
DS prompting is best utilized when several different outputs could potentially satisfy the user's prompt. For instance, if a user asks for help with a movie review of "Avatar," there is a vast number of things the LLM could write about in its response. Conversely, DS prompting is not suitable for something like arithmetic questions, where there is only one correct answer.
Let's take up the example above and utilize a DS prompt, so we can steer the LLM toward the type of output we're looking for.
Prompt:
"Generate a 100-word review of the movie Avatar based on the prevailing responses by critics.
(Directional Stimulus) Focus on the stunning visual effects, the direction style, and the predictable, archetypal nature of the plot."
Output:
"Avatar dazzles with groundbreaking CGI, transporting viewers to a vividly imagined alien world teeming with bioluminescent flora and fauna. James Cameron's direction is masterful, immersing audiences in Pandora's beauty and peril. However, the plot follows a familiar hero's journey with predictable beats. Despite some derivative storytelling, Avatar's visual spectacle and immersive world-building make it a must-see for sci-fi fans."
And voila! We can see from the output how the directional stimulus guided the response toward the topics we wished to discuss.
DS prompting is a magnificent technique for personalizing LLM responses and making them feel less generic. And remember: the more in-depth and specific you make your directional stimulus, the more the LLM will have to work with.
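Because a directional stimulus is just structured text appended to the base prompt, it is easy to generate programmatically, for example from keywords extracted elsewhere. This sketch mirrors the Avatar example; the function name and "Hint:" label are illustrative choices, not a fixed format:

```python
def with_directional_stimulus(base_prompt, hints):
    """Append a directional stimulus (a list of hint topics) to a base
    prompt, mirroring the Avatar-review example. The 'Hint:' label is
    one plausible convention, not a standard."""
    stimulus = ", ".join(hints)
    return f"{base_prompt}\nHint: focus on {stimulus}."

prompt = with_directional_stimulus(
    "Generate a 100-word review of the movie Avatar "
    "based on the prevailing responses by critics.",
    ["the stunning visual effects",
     "the direction style",
     "the predictable, archetypal nature of the plot"],
)
print(prompt)
```

Keeping the hints as a list makes it easy to reuse the same base prompt with different stimuli and compare how each one steers the output.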
Conclusion
When dealing with LLMs, it can be easy to forget that a systematic machine logic, not some magic computer genie, provides your answers. In the end, efficient prompting is all about understanding that logic and working with it.
LLM outputs will only be as good as the prompts you provide, so take a second to think about the type of prompt that best suits your desired response. If your first attempt doesn't work out, don't give up right away. Try a different technique, provide more instructions, or consider using a more capable model. But most of all, keep calm and make sure to check back with TEAM International for the latest LLM news and all the best prompting techniques!