Prompt Engineering Best Practices

Hello everyone, it's been a while since I wrote anything. I recently enrolled in a course on Coursera called "Google AI Essentials," which I found incredibly interesting and insightful. This course is particularly beneficial for anyone looking to understand the fundamentals of AI. Now that I'm almost through with the course, I thought it would be a great idea to share some of the best practices I've learned about prompt engineering.

Prompt engineering is crucial when working with Large Language Models (LLMs) because the quality of the input directly impacts the usefulness of the output. By following these best practices, you can create effective prompts that help LLMs perform at their best, providing you with the most valuable responses.

So, without further ado, let's dive into these best practices and explore how to get the most out of LLMs. Let's begin…

Specify the task

LLMs are trained on large volumes of data. To get a targeted output, be explicit about the outcome you are aiming for: give the model enough criteria to know exactly what you want it to accomplish. Use simple language and organize your request logically so the model understands it clearly. There is no single perfect setup; just write naturally and pay attention to clarity.

Example: Write an informal email to my manager asking for a leave of absence.
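
If you are calling an LLM from code, the same principle applies: the prompt string itself should state the task explicitly. Below is a minimal sketch of sending such a prompt; the OpenAI Python client and the model name are just illustrative stand-ins, and any chat-capable LLM API would work the same way.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# A specific, clearly scoped task rather than a vague request
prompt = "Write an informal email to my manager asking for a leave of absence."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)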

Provide necessary context

Context influences how LLMs respond to a prompt by offering key information about your expectations. When provided with relevant context, LLMs are more likely to produce valuable output.

Include important details about your request to give LLMs the information they need to produce useful output. When creating an effective prompt, consider the following questions:

  • Who is the target audience? Identify relevant characteristics of the audience, such as their age, profession, or level of knowledge on the subject.
  • What tone should the model use? Specify the voice and style that will best convey the message. For example, you might prefer a casual and friendly tone when communicating with a peer, or a more professional and persuasive tone for client interactions.
  • How should LLMs organize the output? Define the format in which the information should be presented. You can provide guidance on the length or request a specific layout, such as a bulleted list or a table.
  • What is the goal of the output? Clearly state what you want LLMs to achieve with the given prompt. For instance, if your prompt asks the model to explain a concept, the goal might be to ensure that beginners in the field gain a solid understanding of the topic. Providing an LLM with a specific goal will help tailor the output to your needs.

Example: Write a warm email to my colleague expressing gratitude for their collaboration on a recent project, making sure they understand that their contributions were truly invaluable.
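
One way to keep these questions in mind is to assemble the prompt from its context pieces. The sketch below is only an illustration of that habit; the field names and wording are assumptions, not a required format.

# A sketch of building a prompt from explicit context: audience, tone,
# format, and goal. The dictionary keys and values are placeholders.
context = {
    "audience": "a colleague who collaborated with me on a recent project",
    "tone": "warm and appreciative",
    "format": "a short email of no more than three paragraphs",
    "goal": "make clear that their contributions were truly invaluable",
}

prompt = (
    f"Write an email to {context['audience']}. "
    f"Use a {context['tone']} tone and keep it to {context['format']}. "
    f"The goal is to {context['goal']}."
)
print(prompt)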

Provide references

Supplying LLMs with reference materials that align with your goals or resemble the desired outcome can lead to more effective outputs. Whether you include your own work, other sources, or both, it’s important to clearly explain how these references connect to your prompt to achieve the best results.

Example: Draft a list of potential campaign slogans for an airline company in the writing style of 1990s billboard advertisements.
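
When the reference material is text, it can simply be pasted into the prompt along with a note on how it should be used. The sketch below uses a couple of made-up placeholder slogans; the point is the explicit instruction about how the references relate to the task.

# A sketch of attaching reference material to a prompt and stating how it
# should be used. The slogans below are made-up placeholders.
reference_slogans = """\
Fly beyond the horizon.
Your weekend starts at the gate.
"""

prompt = (
    "Draft a list of potential campaign slogans for an airline company "
    "in the writing style of 1990s billboard advertisements. "
    "Use the slogans below only as a reference for tone and rhythm, "
    "not as text to copy:\n\n" + reference_slogans
)
print(prompt)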

Evaluate your output

Every model has a distinct training set, uses different programming methods, and is developed at a particular time. Consequently, some LLMs may know more about certain topics than others or may have a knowledge cutoff. Additionally, models may sometimes generate inaccurate information, which is known as hallucination. Hallucinations are AI outputs that are not true.

Factors that can contribute to hallucinations

  • Quality of an LLM’s training data.
  • Phrasing of the prompt.
  • Method an LLM uses to analyze text and predict the next word in a sequence.

Before using an AI-generated output, evaluate it critically to ensure it meets your standards and is useful for you. This might require some additional research after the LLM produces its output. When assessing the output, consider the following questions:

  • Is this response accurate? Verify that the information is current and accurate.
  • Is this response unbiased? Assess whether the response is balanced and impartial, accurately represents different groups, and avoids favoritism towards specific individuals or groups.
  • Does this response include sufficient information? Make sure it offers a thorough and satisfactory answer to your question.
  • Is this response relevant to what I need? Ensure that the output is relevant to your prompt and matches the context, topic, and task you specified.
  • Is this response consistent? Confirm that it aligns with other outputs. If uncertain, try prompting the LLM in different ways to ensure the responses provide similar information.

If you determine an output is unacceptable, try to add more context to the initial prompt to generate a more focused response:

Example: The output from a prompt like What’s a conditional? might be broad, varied, or irrelevant to your needs, since that term has different meanings in various contexts.

Iteration: Instead, a prompt like Explain 'conditionals' to a beginner coder the way a textbook would is likely to produce a more targeted, useful output, since it specifies the audience, tone, and discipline.
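
The consistency check mentioned above can also be done programmatically: send the same question phrased in a few different ways and compare the answers. This is a minimal sketch assuming the OpenAI Python client; the model name and phrasings are illustrative.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

# Prompt the model in different ways and compare the responses for consistency.
phrasings = [
    "Explain 'conditionals' to a beginner coder the way a textbook would.",
    "What is a conditional statement in programming? Explain it to a beginner.",
]

for prompt in phrasings:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("-" * 40)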

Take an iterative approach

An LLM might not produce the desired result on the first try, but you can still achieve your goal with some iteration. Refine your initial prompt, issue follow-up requests, or provide feedback to guide the LLM towards the desired outcome.

To effectively revise a prompt, retain the elements that worked and make adjustments from there. You might tweak the wording (such as changing a command to a question), rearrange the prompt’s components (like placing an example at the beginning or end), or add more context to help focus the LLM's responses.

Provide examples. By including specific examples in your prompt, you help the LLM better understand your expectations and the type of output you're looking for. For instance, if you're asking the model to write an email, you might include a sample email that matches the tone and structure you want. This additional context guides the LLM toward more accurate and relevant responses that closely match your needs.

In prompt engineering, the term "shot" is commonly used as a synonym for "example." This terminology comes from the concepts of "one-shot" and "few-shot" prompting, where the number of "shots" refers to how many examples are provided to the model to guide its response. So, when someone mentions a "shot," they're talking about an example used to help the LLM understand the desired outcome.

Example: Summarize the following notes on How to use AI in a Responsible way.

Iteration: Summarize the following notes on How to use AI in a Responsible way and identify key takeaways.

Further iteration: Summarize the following notes on How to use AI in a Responsible way, identify key takeaways, and list the most important items.

Further iteration with Example: Summarize the following notes on How to use AI in a Responsible way, identify key takeaways, list the most important items, and arrange the important items by following this example.
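
In code, that last iteration is simply a one-shot prompt: the example layout is embedded in the prompt text itself. Here is a minimal sketch of that idea; the example table, the notes placeholder, and the exact wording are all assumptions.

# A sketch of one-shot prompting: the prompt embeds a single worked example
# ("shot") of the layout the output should follow. The table and the notes
# variable are placeholders.
example_layout = (
    "Item | Why it matters\n"
    "Be transparent about AI use | Builds trust with your audience\n"
    "Review outputs before sharing | Catches inaccuracies and bias\n"
)

notes = "..."  # paste the notes on responsible AI use here

prompt = (
    "Summarize the following notes on how to use AI in a responsible way, "
    "identify key takeaways, and arrange the most important items "
    "following this example:\n\n"
    + example_layout
    + "\nNotes:\n"
    + notes
)
print(prompt)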

To effectively issue follow-up requests, ask the model to make adjustments without restating the original prompt, similar to a back-and-forth conversation. LLMs can build on previous exchanges within the same conversation, allowing you to focus on making precise, individual tweaks until you achieve the desired outcome.

Example: Summarize the following notes on How to use AI in a Responsible way.

Follow-up: What were the key takeaways from this note?

Second follow-up: List the most important items from this note.

Third follow-up: Arrange the important items in this note in a table, following the example provided below.
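
Follow-up requests like these rely on the model seeing the earlier turns of the conversation. When working through an API, that usually means keeping a running message history and appending each new request to it. The sketch below assumes the OpenAI Python client and an illustrative model name.

from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

notes = "..."  # paste the notes on responsible AI use here

# Keep a running conversation so each follow-up builds on earlier exchanges.
messages = [
    {"role": "user", "content": "Summarize the following notes on how to use AI in a responsible way:\n" + notes},
]

reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# Follow-up request, issued without restating the original prompt
messages.append({"role": "user", "content": "What were the key takeaways from this note?"})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)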

In conclusion, we've covered a lot about prompt engineering, emphasizing its importance for getting useful results from Large Language Models (LLMs). Key practices include specifying the task clearly, providing relevant context, and offering reference materials. We also discussed the need to critically evaluate outputs for accuracy and relevance. Finally, using specific examples and iterative adjustments can significantly improve the results.

Thank you for reading this article. Be sure to like and recommend this article if you found it helpful and insightful.

References

Google AI Essentials course, Coursera

