Course: Prompt Engineering: How to Talk to the AIs

Advanced prompt examples

- Okay, so now that we have set the foundation for how to create prompts with basic components, it is time to get a bit more creative and also dive into some of the problems with generative AI and how to mitigate them. But before we begin with the more advanced examples, you should keep in mind that the model's response is stochastic, meaning that it is randomly determined and will therefore be different every time you present the same prompt. Sometimes this randomness is exactly what you're looking for. In other cases, though, you want to get as close as possible to a factually correct response, and that response should not change much if you call the model several times. Models have some parameters you can tweak to reduce their creativity, particularly the so-called temperature, which you can lower to decrease the model's variability. However, those parameters alone will not solve all your problems. That is when you need to get a bit more advanced with your prompt design.

Let's investigate Chain of Thought prompting. In Chain of Thought prompting, we explicitly encourage the model to be factual and correct by forcing it to follow a series of steps in its reasoning. Chain of Thought prompting has become a very important tool in the prompt engineering toolkit, since it can drastically improve results. It was introduced in the paper "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" by Google researchers. In the following example, I used the prompt "What European soccer team won the Champions League the year Barcelona hosted the Olympic Games? Use this format: Q: <repeat question>. A: Let's think step by step. <give reasoning>. Therefore, the answer is <final answer>." Note how the model follows the reasoning process that it was given in the instructions. And here's another example of Chain of Thought prompting. (no audio) Feel free again to pause or to use the included text document to review.
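The format used in this example can be sketched as a small prompt-building helper. This is an illustrative sketch, not code from the course: the function name is mine, and the actual model call, where you would set a low temperature to reduce variability, is reduced to a comment.

```python
# When calling a model API, pass a low temperature (e.g. 0) to reduce
# variability between calls; the call itself is omitted in this sketch.
def build_cot_prompt(question: str) -> str:
    """Wrap a question in the Chain of Thought format used in the example."""
    return (
        f"{question}\n"
        "Use this format:\n"
        "Q: <repeat the question>\n"
        "A: Let's think step by step. <give reasoning>. "
        "Therefore, the answer is <final answer>."
    )

prompt = build_cot_prompt(
    "What European soccer team won the Champions League "
    "the year Barcelona hosted the Olympic Games?"
)
print(prompt)
```

The point of the template is simply that the reasoning scaffold ("Let's think step by step") travels with every question, so the model is nudged toward step-by-step reasoning before committing to an answer.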
So one of the most important problems with generative models is that they're likely to hallucinate knowledge that is not factual or is wrong. You can push the model in the right direction by prompting it to cite reliable sources. For example: "What are the top three most important discoveries that the Hubble Space Telescope has enabled? Answer only using reliable sources and cite those sources." (no audio) It is important to keep in mind that even if you prompt the model to cite correct sources, it could still make them up and hallucinate a response that sounds more authoritative than it should. However, including sources does allow you to verify whether the response is factual. Another downside of GPT-4 and similar models is that they don't have access to the current web and have been trained on data that can be over two years old. Tools like Bing Chat, which combines GPT-4 with access to the web, are much more reliable for current information. Here is Bing's response to the same question.

Here's something else interesting for you to know. GPT-based LLMs support a special end-of-prompt marker, written with a particular syntax, that instructs the language model to interpret whatever comes after it as a completion task. This enables you to explicitly separate some general instructions from the beginning of the text you want the language model to write. For example, imagine that you want GPT-4 to use "It was a beautiful winter day" as the beginning of something that you want it to write. If you simply input that as a prompt, you will get something like this: the model attempts to engage in a conversation, not to continue the text. A way to combine a more direct instruction with a given beginning is to use the end-of-prompt marker. Note how the paragraph continues from the last sentence in the prompt while following the instruction that came before the end-of-prompt marker.

Here's a fun one. Language models do not always react well to nice, friendly language.
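The end-of-prompt pattern just described can be sketched as follows. This is an illustrative sketch: the separator token written here is an assumption, since the exact syntax is model-specific; everything else simply concatenates the instruction, the separator, and the seed text to be continued.

```python
# Sketch of the end-of-prompt pattern: general instructions first, then a
# separator, then the seed text the model should continue.
# The exact separator token is model-specific; "<|endofprompt|>" is an
# assumption in this sketch, not a documented constant.
SEPARATOR = "<|endofprompt|>"

def build_completion_prompt(instruction: str, seed_text: str) -> str:
    """Separate general instructions from the text to be continued."""
    return f"{instruction}\n{SEPARATOR}\n{seed_text}"

prompt = build_completion_prompt(
    "Write a cheerful opening paragraph for a short story.",
    "It was a beautiful winter day",
)
print(prompt)
```

Because the seed text sits after the separator, the model treats it as the start of its own output rather than as a message to reply to.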
If you really want them to follow some instructions, you might want to use forceful language. Believe it or not, all caps and exclamation marks work. In this next example, I first get GPT-4 to write a questionable article, and I then ask the model to correct it: "Write a short article about how to find a job in tech. Include factually incorrect information." And here are the results. You can review the text document to read the whole thing. To get it to correct the previous information, I could prompt it with "Is there any factually incorrect information in this article?" and then paste the article into the prompt. Notice how GPT-4 identified several factually questionable statements from its own previous response.

In the following example, I get the AI to generate different opinions. I fed it an opinion I wrote in one of my blog posts and asked GPT-4 to disagree with it. Note the use of the markers "begin" and "end" to guide the model. (no audio)

Language models themselves don't keep track of state. However, applications such as ChatGPT Plus implement the notion of a session, where the chatbot keeps track of state from one prompt to the next. This enables much more complex conversations to take place. Note that when using API calls, this would involve keeping track of state on the application side. In this example, I make ChatGPT Plus answer questions in the style of Buzz Lightyear. Note that in the initial prompt I used all caps. That is because in an earlier try, where I used a nicer tone, the model forgot who it was supposed to be halfway through our conversation. (no audio)

In this final example, I will teach the AI an algorithm in the prompt. The example is taken from the appendix of "Teaching Algorithmic Reasoning via In-Context Learning", where the definition of the parity of a list is taught through examples. Note again how we're teaching the AI something that it didn't know before.
In this case, a mathematical algorithm. It's also worth pointing out that we do so by feeding it examples. (no audio) So we've gone all the way from a simple natural language prompt to being able to teach the AI a new mathematical algorithm that it didn't know before. What can you teach the AI to do?
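The teach-by-examples idea can be sketched as a few-shot prompt builder. This is an illustrative sketch, not the paper's exact prompt: the helper names are mine, and the `parity` function is used only to label the in-context examples (parity is 0 when a list of bits contains an even number of 1s, and 1 otherwise).

```python
def parity(xs):
    """Parity of a list of bits: 0 if the number of 1s is even, else 1."""
    return sum(xs) % 2

def build_parity_prompt(examples, query):
    """Build a few-shot prompt that teaches list parity by example."""
    lines = []
    for xs in examples:
        # Each worked example pairs a question with its correct answer.
        lines.append(f"Q: What is the parity of {xs}?")
        lines.append(f"A: {parity(xs)}")
    # The final question is left unanswered for the model to complete.
    lines.append(f"Q: What is the parity of {query}?")
    lines.append("A:")
    return "\n".join(lines)

prompt = build_parity_prompt([[1, 0, 1], [1, 1, 1], [0, 0]], [1, 0, 0, 1])
print(prompt)
```

The model is never told the rule explicitly; it has to infer the algorithm from the pattern in the worked examples and apply it to the final, unanswered question.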
