Art of Prompt Engineering – Unleashing the Complete Potential of LLMs
Raghuveeran Sowmyanarayanan
Passionate about adding value to customers with actionable business insights driven through AI & Analytics
AI has been a remarkable revolution over the decades, and Gen AI has been an unimaginable innovation over the last 10 months. Its ability to imitate human intelligence and perform jobs once considered exclusively human domains still astonishes most of us. However, there is always room for improvement, and prompt engineering is clearly one of the ways to improve and achieve maximum value from LLMs.
In this article, let us delve into the art of prompt engineering, exploring why it is critical, the strategies behind it, and practical examples to help master this essential craft.
What is Prompt Engineering
It is all about giving better instructions to LLMs so that they understand the job, execute it correctly, and provide the best possible output.
Why Prompt Engineering
Prompt engineering acts as the language that bridges the gap between human input and the model’s output. It is through the skilful crafting of prompts that we can channel the model’s immense capabilities toward our desired goals.
· Enhanced Productivity: High-quality prompts lead AI models to generate more accurate and relevant responses, so we spend less time on corrections and more time leveraging AI’s capabilities.
· Cost Efficiency: Training AI models is resource-intensive. Prompt engineering can reduce the need for retraining by optimizing model performance through better prompts.
· Versatility: A well-crafted prompt can make AI models more versatile, allowing them to tackle a broader range of tasks and challenges.
Understanding Prompt Engineering
Prompt engineering involves constructing precise and context-rich instructions to guide the language model’s output. It requires breaking down the problem, providing clear objectives, and considering the specific context in which the model operates.
Strategies for Effective Prompt Engineering
One key principle emphasizes the importance of providing clear and specific instructions to the model. By specific instructions, I mean providing further detailed information about the desired outcome.
Employing the three tactics below helps achieve clarity and specificity in prompts.
#1. Using Delimiters for Text Inputs
Writing clear and specific instructions can be as easy as using delimiters to indicate distinct parts of the input. This tactic is especially useful when the prompt includes pieces of user-supplied text.
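For instance, a minimal Python sketch might assemble such a prompt; the summarization task and the "###" delimiter here are my own illustrative choices, not prescriptions.

```python
# A minimal sketch of delimiter-based prompting; any clear marker
# (###, triple quotes, XML-style tags) works equally well.
user_text = (
    "Prompt engineering helps language models produce more accurate and "
    "relevant output by giving them clearer, more specific instructions."
)

prompt = f"""Summarize the text delimited by ### in a single sentence.

###{user_text}###"""

print(prompt)
```

Fencing the user text this way also makes it obvious to the model which part is instruction and which part is data to operate on.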
#2. Asking for a Structured Output
To make parsing model outputs easier, it helps to ask for a concrete structured output, such as JSON or HTML.
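A minimal sketch, assuming a made-up book-recommendation task and field names of my own choosing, shows how a schema stated in the prompt makes the reply machine-parsable:

```python
import json

# A minimal sketch of requesting structured output; the task and the
# field names are illustrative assumptions.
prompt = (
    "Generate three made-up book recommendations. Respond ONLY with a JSON "
    'list in which each object has the keys "title", "author", and "genre".'
)

# Suppose `reply` holds the model's response. Because the prompt pinned
# the schema down, parsing the output becomes a one-liner.
reply = '[{"title": "The Silent Loom", "author": "A. Rao", "genre": "Mystery"}]'
books = json.loads(reply)
print(books[0]["title"])  # -> The Silent Loom
```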
#3. Few-Shot Prompting
Zero-shot prompting refers to asking the model to perform a task without providing any examples. Few-shot prompting, by contrast, presents the LLM with a few examples of the desired output alongside the actual question.
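A minimal sketch in the chat-message format could look like the following; the sentiment task, the labels, and the commented-out OpenAI SDK call are illustrative assumptions, not the article's own example.

```python
# A minimal sketch of few-shot prompting: a few worked input/output
# pairs first, then the real question for the model to answer.
messages = [
    {"role": "system",
     "content": "Classify the sentiment of each review as Positive or Negative."},
    {"role": "user", "content": "Review: The battery lasts for days."},
    {"role": "assistant", "content": "Positive"},
    {"role": "user", "content": "Review: The screen cracked within a week."},
    {"role": "assistant", "content": "Negative"},
    # The actual query; the model answers by imitating the examples above.
    {"role": "user",
     "content": "Review: Setup was effortless and support was friendly."},
]

# One way to send this, using the OpenAI Python SDK (v1.x); the model
# name is an assumption, and any chat-completion model would do.
# from openai import OpenAI
# client = OpenAI()  # expects OPENAI_API_KEY in the environment
# resp = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
# print(resp.choices[0].message.content)  # expected: "Positive"
```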
The next key principle emphasizes the importance of providing well-structured instructions to the model and giving the model time to think. It encourages users to rephrase the prompt to request a sequence of relevant reasoning steps, forcing the model to compute these intermediate steps.
#1. Specify the Intermediate Steps to do the Task
One simple way to guide the model is to provide a list of the intermediate steps needed to obtain the correct answer.
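As a minimal sketch, assuming an illustrative translate-then-summarize task of my own choosing, the intermediate steps can be enumerated directly in the prompt:

```python
# A minimal sketch of spelling out intermediate steps in the prompt.
text = "Les modèles de langage suivent mieux les instructions détaillées."

prompt = f"""Perform the following actions on the text delimited by ###:
Step 1 - Translate the text into English.
Step 2 - Summarize the translation in five words or fewer.
Step 3 - Output a JSON object with the keys "translation" and "summary".

###{text}###"""

print(prompt)
```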
#2. Chain of Thought prompting
Chain of Thought prompting directs the LLM to process information sequentially: we first show how to solve an example problem with multi-step reasoning, then send the model our real task, expecting it to emulate a comparable chain of thought when responding to the actual query. Essentially, it merges sequential thinking with few-shot prompting.
#3. Automatic Chain of Thought prompting
One question that might come to mind: do I have to devise a sequential way of solving the problem every time I want to query an LLM?
As the example below shows, it is enough to provide a prompt with a first problem and a sequential explanation of how to solve it.
Standard Prompting
Q: Ram has 5 shuttlecocks. He buys 2 more packs, each containing 3 shuttlecocks. How many shuttlecocks does he have now?
A: The answer is 11. (Correct)
Q: The cafeteria has 23 oranges. If they used 20 to make juice and bought 6 more, how many oranges do they have?
A: The answer is 27. (Wrong)
Chain of Thought Prompting
Q: Ram has 5 shuttlecocks. He buys 2 more packs, each containing 3 shuttlecocks. How many shuttlecocks does he have now?
A: Ram starts with 5 shuttlecocks. 2 packs of 3 shuttlecocks each is 6 shuttlecocks. 5 + 6 = 11. The answer is 11. (Correct)
Q: The cafeteria has 23 oranges. If they used 20 to make juice and bought 6 more, how many oranges do they have?
A: The cafeteria starts with 23 oranges. They use 20 to make juice, leaving 23 - 20 = 3. They buy 6 more, so 3 + 6 = 9. The answer is 9. (Correct)
Then, we can send our input for the actual problem and see how the LLM applies the chain-of-thought principle automatically!
<user> If John has 5 apples, then eats 2, buys 5 more, then gives 3 to his friend, how many apples does he have? Let’s think step by step.
<AI agent> Let’s go step by step. John starts with 5 apples. He eats 2 apples, so he has 5 - 2 = 3 apples left. Then, he buys 5 more apples, so he now has 3 + 5 = 8 apples. Finally, he gives 3 apples to his friend. Therefore, he has 8 - 3 = 5 apples left. So, after all these steps, John has 5 apples.
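Putting the pieces together, a minimal sketch (with illustrative wording of my own) pairs one worked exemplar with the zero-shot trigger phrase "Let’s think step by step":

```python
# A minimal sketch combining the ideas above: one exemplar that spells
# out its reasoning (few-shot CoT), then the real question with the
# zero-shot chain-of-thought trigger appended.
exemplar = (
    "Q: Ram has 5 shuttlecocks. He buys 2 more packs, each containing 3 "
    "shuttlecocks. How many shuttlecocks does he have now?\n"
    "A: Ram starts with 5. Two packs of 3 add 2 x 3 = 6. 5 + 6 = 11. "
    "The answer is 11.\n\n"
)
question = (
    "Q: If John has 5 apples, then eats 2, buys 5 more, then gives 3 to his "
    "friend, how many apples does he have?\n"
    "A: Let's think step by step."
)
print(exemplar + question)
```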
#4. General Knowledge prompting
A common prompt-engineering practice is to augment a query with additional knowledge before sending the final API call to the LLM.
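A minimal sketch illustrates the pattern; the fetch_background helper is a hypothetical stand-in for a real retrieval step, and the facts in it are invented for illustration.

```python
# A minimal sketch of knowledge-augmented prompting. In practice the
# background might come from a database, a search API, or a prior
# model call rather than this hard-coded lookup.
def fetch_background(topic: str) -> str:
    knowledge = {
        "returns policy": "Orders can be returned within 30 days with a receipt.",
    }
    return knowledge.get(topic, "")

question = "Can I return an order I placed three weeks ago?"
background = fetch_background("returns policy")

prompt = f"""Use the background information to answer the question.

Background: {background}

Question: {question}"""

print(prompt)
```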
Way Forward
Prompt engineering is an art that empowers us to tap into the vast potential of large language models. By iterating on and refining prompts with precision, creativity, and an understanding of the task, context, and desired outcome, we can steer the interaction between human input and AI output more directly, obtaining more accurate and contextually relevant responses while saving time and resources. Mastering prompt engineering unlocks the full potential of LLMs: the ability to extract accurate and insightful responses from language models propels us toward transformative solutions across domains.
About the Author
Raghuveeran Sowmyanarayanan is an Artificial Intelligence & Analytics Leader heading the Differentiating Delivery Office at Cognizant. He previously headed the AI&A Healthcare practice and has personally led very large and complex Enterprise Data Lake and AI/ML implementations. He can be reached at [email protected]