Issue 27: Understanding Prompt Engineering
Sharmilli Ghosh
Author: Bindu Thota
Introduction
Large Language Models (LLMs) such as ChatGPT are pretrained transformer models trained to predict the next token given some input text. These LLMs are very flexible and can perform a variety of tasks: they can summarize documents, complete sentences, answer questions, and translate languages. For a given user input, the model predicts the best output based on what it learned during training.
These powerful models have made it possible for everyone to interact with them using natural language. The answers we get from an LLM depend on the quality of the questions we ask of it. These questions are called prompts. Getting the prompts right is necessary to harness the LLM's enormous potential and promise, and this need led to the rise of prompt engineering. Given the importance of prompts, I want to demystify prompts and prompt engineering today.
Prompt engineering is the art and science of crafting precise instructions (prompts) for generative AI models. These prompts guide the model's behavior, enabling it to generate relevant responses or perform specific tasks. Think of it as the language we use to communicate with AI systems.
What is a Prompt?
A prompt is an instruction to an LLM. If you have used ChatGPT, you have used a prompt. A prompt elicits a response from the LLM.
Prompts can be used in various ways to work with LLMs. The quality and usefulness of the response depend on the quality of the prompt. Not every type of input generates helpful output; generative AI systems require context and detailed information to produce accurate and relevant responses. Systematically designed prompts get more meaningful and usable responses. Here are examples of how prompts change the responses from LLMs.
Unclear and vague prompt: How should I increase my customer reach?
Response:
Utilize social media: Social media platforms can be powerful tools for connecting with potential customers. Create engaging content, interact with your audience, and use targeted ads to reach a wider audience. Use Search Engine Optimization (SEO): ...
Clear and specific prompt: I run an ecommerce app targeting seniors in the 50-70 age group. How can I increase my reach to this target group?
Response (part of the response removed for brevity):
What is Prompt Engineering?
Prompt engineering involves crafting effective queries or instructions that help AI models understand not only the language but also the nuance and intent behind a query. It helps LLMs better comprehend and respond to a wide range of queries, from the simple to the highly technical.
Prompt engineering bridges the gap between end-user input and the LLM. It takes the user input and converts it into a clear, specific prompt, with context and do's and don'ts as guidance for the LLM. Prompt engineers typically identify scripts and templates that users can customize and complete to get the best results from the language models. They experiment with different types of inputs to build a prompt library that application engineers can reuse in different scenarios. App developers typically encapsulate open-ended user input inside a prompt before sending it to the LLM, translating the user input into a prompt that is specific, clear, and contextual so that it produces appropriate results for the application.
For example, a user may go to a retail website and ask "where can I buy a dress?" The app code will likely translate this into a prompt like: "You are a sales agent for xxx company. Here is where you can find the inventory and stores for the company. A user based in Seattle, Washington is asking where to purchase a dress. Respond with the store locations that are within 15 miles of Seattle and have this dress in inventory, along with the store hours." The chatbot will respond with the relevant information for this prompt.
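A minimal sketch of this wrapping step, in Python: the template text and function names here are illustrative assumptions, not any real framework's API, but they show how app code can turn open-ended user input into a specific, contextual prompt.

```python
# Hypothetical prompt template; everything in braces is filled in by app code.
STORE_PROMPT_TEMPLATE = (
    "You are a sales agent for {company}. "
    "Inventory and store data source: {inventory_source}. "
    "A user based in {city} is asking: '{user_input}'. "
    "Respond only with store locations within {radius} miles of {city} "
    "that have the item in inventory, along with the store hours."
)

def build_store_prompt(user_input: str, company: str, city: str,
                       inventory_source: str, radius: int = 15) -> str:
    """Translate raw user input into a clear, specific, contextual prompt."""
    return STORE_PROMPT_TEMPLATE.format(
        company=company, inventory_source=inventory_source,
        city=city, user_input=user_input, radius=radius,
    )

prompt = build_store_prompt("where can I buy a dress?", "Acme Retail",
                            "Seattle", "internal store database")
```

The resulting `prompt` string, not the raw user question, is what gets sent to the LLM.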
A common technique is to construct prompts from a well-defined set of components, as shown in the following diagram. The components are instructions, context, examples, history, and current input, which together guide a high-quality, contextual, well-formed response.
Prompts typically contain components that provide static context and task-specific information. Static information can include content and format instructions, database schema information, or any other contextual information relevant to the task; it establishes the style of the conversation (formal or conversational), the format of the response (through examples), and the overall app context. Task-specific information involves retrieving relevant knowledge from data sources and incorporating it into the prompt to improve the LLM's performance.
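The component structure described above can be sketched as a small assembly function. The section names and layout here are one plausible convention, not a standard:

```python
def compose_prompt(instructions, context, examples, history, current_input):
    """Assemble a prompt from the standard components: instructions,
    context, examples, conversation history, and the current input.
    Empty components are simply omitted."""
    sections = [
        ("Instructions", instructions),
        ("Context", context),
        ("Examples", "\n".join(examples)),
        ("Conversation history", "\n".join(history)),
        ("Current input", current_input),
    ]
    return "\n\n".join(f"### {name}\n{body}" for name, body in sections if body)

prompt = compose_prompt(
    instructions="Answer in a formal tone, in at most three sentences.",
    context="You are a support assistant for an online bookstore.",
    examples=["Q: Do you ship abroad? A: Yes, to over 40 countries."],
    history=["User: Hi", "Assistant: Hello! How can I help?"],
    current_input="What is your return policy?",
)
```

Keeping the components separate like this makes it easy to swap the static parts (instructions, context) independently of the task-specific parts (history, current input).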
Use cases
Here are some use cases where prompt engineering techniques are used to generate high-quality results.
Subject matter expertise
Prompt engineering is key when we require LLMs to respond with subject matter expertise. The right prompt guides the LLM to the specific subject matter context. For example, prompt engineering for an ecommerce app can ensure that user queries are answered for that company, constrained to the products it offers and the inventory it currently holds.
A medical professional app can take generic user inputs such as patient symptoms and other details. Prompt engineering takes this input and creates prompts that guide the LLM to first look at the list of diseases associated with the symptoms and then narrow it down further based on the other details.
Critical Thinking
Critical-thinking apps need LLMs to solve complex problems by analyzing data from multiple angles, evaluating pros and cons, and making or recommending decisions. Prompt engineering guides the LLM through this process to produce the analysis.
Creativity
Prompt engineering is also useful for enhancing an LLM's creative abilities. It can guide the LLM to use a specific tone, take inspiration from specific sources, and so on. A graphic designer can use an app to create a storyboard from a story using specific design themes; a user can respond to an email in their own voice. In all these cases, prompt engineering adds the context and information necessary to guide the LLM.
Advanced techniques
Prompt engineers employ advanced techniques like the following to improve model understanding and response quality. There are more advanced techniques not covered here, but that is a topic for another day.
Example: Generate a title for my story.
Example: Generate a title for my story about adventures. Here are some examples: My experiments with dangers, My walk to freedom, How I survived my childhood.
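The two example prompts above contrast a bare task with a task accompanied by sample outputs (commonly called zero-shot versus few-shot prompting). A minimal sketch, with illustrative helper names:

```python
def zero_shot_prompt(task: str) -> str:
    # Zero-shot: the task alone, with no examples of the desired output.
    return task

def few_shot_prompt(task: str, examples: list) -> str:
    # Few-shot: the task plus a handful of examples that show the
    # style or format the model should imitate.
    formatted = "\n".join(f"- {e}" for e in examples)
    return f"{task} Here are some examples:\n{formatted}"

prompt = few_shot_prompt(
    "Generate a title for my story about adventures.",
    ["My experiments with dangers", "My walk to freedom",
     "How I survived my childhood"],
)
```

The examples anchor the model's output to a style the user wants, without any retraining.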
Best practices for prompt engineering
Here are some best practices for crafting good-quality prompts that elicit the best responses.
Unambiguous prompts
Prompts should clearly define the desired response to avoid misinterpretation by the AI. For instance, if you are asking for a novel's summary, state that clearly in the prompt. This helps the AI focus on the request and provide the summary rather than, say, a detailed analysis.
Adequate context within the prompt
Prompts should provide adequate context and include output requirements in the prompt input, confining the response to a specific format. For instance, to get a list of the most popular movies of the 1990s in a table, explicitly state how many movies should be listed and ask for table formatting.
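For the movie example, a prompt with explicit count and format requirements might look like the following (the wording is illustrative; only the prompt text is shown, not the model's response):

```python
# A prompt that states how many items to list and what format to use,
# so the model is not left to guess.
movie_prompt = (
    "List the 5 most popular movies of the 1990s. "
    "Return the result as a table with columns 'Rank', 'Title', and 'Year'. "
    "Do not add any commentary."
)
```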
Balance between information and desired response
Prompts should balance simplicity and complexity to avoid vague, unrelated, or unexpected answers. A prompt that is too simple may lack context, while a prompt that is too complex may confuse the AI or reduce creativity.
Experiment and refine
It is essential to experiment with different ideas and test the prompts to see the results. Continuous testing and iteration reduce the prompt size and help the model generate better output. There are no fixed rules for how the AI outputs information, so flexibility and adaptability are essential.
How does prompt engineering work?
Prompt engineering is a systematic process, typically done in several distinct steps, to optimize communication with AI models. Here is a structured approach:
1. Analyze the problem
The first crucial step is to thoroughly understand the problem. This involves not only identifying what you want the AI model to do but also grasping the complexities and nuances of the task. A deep dive into the task helps in foreseeing how the model might interpret different prompts, which is essential for crafting effective prompts. For example, in a text generation task, understanding whether the desired output is a story, an article, or a poem is vital for guiding the AI's language and style.
2. Craft the initial prompt
Once the problem is clearly defined and understood, the next step is to craft the initial prompt, formulated to provide clear and direct instructions to the AI model. The prompt serves as a roadmap for the AI, steering it towards the desired output. The craftsmanship here lies in balancing the need for specificity (to guide the AI accurately) with the need for openness (to allow the AI to generate creative and diverse responses).
3. Evaluate the model response
After deploying the initial prompt, the next step involves evaluating the AI model's response. This determines the effectiveness of the prompt and the AI's interpretive capacity. If the output does not align with the desired goal, it is essential to identify why and how the AI misinterpreted the prompt, which could be due to a lack of clarity, insufficient context, or the complexity of the task.
4. Iterate on the prompt
Based on the evaluation, the prompt is then refined. This stage is iterative and may require several rounds of adjustments. The refinement process involves altering the language of the prompt, adding more context, or restructuring the query to make it more explicit. Each iteration brings the prompt closer to an optimal state where the AI's response aligns with the task's objectives.
5. Test across models
This step helps in understanding the versatility and adaptability of the prompt. Different models, due to their unique training and structures, may interpret the same prompt in varied ways. Testing across models ensures that the prompt is robust and effective in diverse AI environments, leading to more reliable and generalizable results.
6. Scale and automate
The final step involves scaling and automating the prompt. Once a prompt is refined to a point where it consistently produces desirable results, it can be scaled to address broader contexts or automated for efficiency. Scaling might involve adapting the prompt for related tasks or automating the generation process for high-volume tasks. This step marks the transition from development to deployment, where the prompt is utilized in practical, real-world applications.
Challenges and limitations of prompt engineering
While prompt engineering is useful for improving the accuracy and relevance of LLM responses, it has substantial limitations.
Future
Prompt engineering is a new and fast-evolving field. There is a lot of activity, with individual developers and companies creating prompt libraries and prompt hubs. It will be exciting to see how this space evolves as we all learn how to get the best out of LLMs.