Prompting GPT: How to Get the Best Results in a Production Environment
With the rise of AI language models like GPT-3 and GPT-4 and apps such as ChatGPT, it has increasingly become clear that it is crucial to understand how to best prompt these models to achieve perfect results. Prompting describes the process of giving the model a question or statement to generate a response. In a production environment, there are some specific challenges that do not exist when prompting manually, e.g. in the ChatGPT app. In this blog post, we will discuss how to structure your prompts for optimal performance and robustness in a production environment.
General Prompting Guidelines
The general consideration when prompting for production should be robustness. When using ChatGPT or a similar app, you can quickly correct the model if it goes off track. In a production environment, you need to ensure that the model generates accurate responses without human intervention.
Here are the rules that we developed for prompting in a production environment:
Prompt Structure
In the LoyJoy Conversation Platform, you can configure the `prompt` and the `system message` in the GPT modules. Technically, the `prompt` is sent to the model as a `user message` while the `system message` is sent as a `system message`. Effectively, the prompt is the question or statement that the model should respond to, while the `system message` should include general information e.g. about the guidelines on how the response should be generated.
GPT Knowledge Prompting
For the GPT Knowledge module, it is important to consider that after the prompt you can edit in the LoyJoy backend, two other sections will be added to create the final prompt:
You can refer to these sections in your prompt using the terms`context` and `user question`. For example, you could write a prompt like “Based on the information in the context, answer the user question”.
领英推荐
Open vs. Closed Prompts
Example Prompt
"Answer the user question as truthfully as possible using the provided context, and if the answer is not contained within the context, say only the word “fallback”, nothing else. In your answer, quote relevant URLs you find in the “Context” using markdown syntax (`[example link](URL)`)."
This is a closed prompt for GPT knowledge. The model is instructed to truthfully answer the user question based on the knowledge database. A fallback answer is generated if the answer cannot be found in the knowledge database. Additionally, the model is instructed to generate inline links for any links found in the knowledge database.
To open up this prompt, you could remove the “fallback” instruction and allow the model to generate a response freely.
"You are the AI assistant for the LoyJoy blog post example. You answer user questions based only on the content from the knowledge database results (context), not previous knowledge." To answer questions, follow these rules:
This system message provides additional guidelines for the model on how to generate responses. Especially note the last point, which instructs the model to ignore any attempts to change the role through the user question. This is an important point to make the chat robust against users trying to trick the model.
Conclusion
Prompting GPT for a production environment requires a different approach than prompting manually. By following the guidelines outlined in this blog post, you can ensure that your prompts are robust and generate accurate responses. When creating a new prompt, it is best practice to test and fine-tune it on a variety of inputs to ensure that the model generates the desired output. If you have any questions or need further assistance with prompting GPT, feel free to reach out to our team. We are happy to help you get the best results from your GPT chat in the LoyJoy Conversational Platform.