Course: Build Prompt Flows with Azure AI Foundry

Create a chat flow

- Let's look at how to create a chat-based prompt flow in Azure AI Foundry. Here's the Azure AI Foundry portal. In our project, click Prompt Flow in the menu. Click Create to create a new flow. You can choose from three flow types. A standard flow provides the general functions of a prompt flow. A chat flow builds on top of the standard flow, adding a chat interface and support for chat history. And an evaluation flow measures how well the output matches expected criteria and goals. In this demo, we'll create a chat flow to answer customer questions about California tours. Provide a folder name to store the flow code files. Click Create; this opens the prompt flow authoring page. We can see a graph of a simple chat flow. It has inputs, outputs, and a chat node. Click a node of the chat flow, and we can see its configuration in the editor. For the flow inputs, there are two input fields: the chat history and the user's input question. For the flow output, there's one output field, answer, which gets its value from the chat node's output. Now let's set up the chat node. This node uses a large language model, so we need to configure the LLM settings. In the previous video, we deployed a GPT-4o model. Here I will choose its Azure OpenAI service connection. Select the chat API. Choose the deployment name, gpt-4o. I can further configure the model parameters, such as temperature, which controls the randomness of the AI responses, and the maximum tokens. In the prompt section, we can use the Jinja template language to define the prompt elements, such as a system message and a user question. For example, to answer customer questions about California tours, I'll replace the default system message with: "You are a travel advisor specializing in California tours. Your job is to provide helpful information on California attractions, so your users can plan their trips effectively. Please include the location, interesting facts, and the website in your response."
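As a rough sketch, the chat node's Jinja prompt template might look like the following. The field names `chat_history` and `question` follow the flow inputs described above; the loop structure mirrors the default chat template that prompt flow generates, though the exact default in your portal may differ.

```jinja
# system:
You are a travel advisor specializing in California tours. Your job is to
provide helpful information on California attractions, so your users can plan
their trips effectively. Please include the location, interesting facts, and
the website in your response.

{% for item in chat_history %}
# user:
{{ item.inputs.question }}
# assistant:
{{ item.outputs.answer }}
{% endfor %}

# user:
{{ question }}
```

The `# system:`, `# user:`, and `# assistant:` markers tell the LLM tool which chat role each section belongs to, and the loop replays earlier turns so the model sees the conversation so far.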
Click Save in the top menu to save our changes. To run the chat flow, we will click Start compute session. Once the session status becomes Running, we can click Chat to open a chat interface. In the prompt bar, enter "Hi." The chat flow returns a greeting message. Enter "Tell me about Santa Monica Pier." The LLM tool configured in the chat flow generates a response based on my prompt instructions.
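Conceptually, each time you send a message, the chat node expands its template into a list of chat-completion messages: the system message, the prior turns from the chat history, then the new question. This hypothetical helper (not part of the portal steps, just an illustration) shows that assembly:

```python
def build_messages(system_message, chat_history, question):
    """Assemble the chat-completion message list a chat node's template
    produces: system message, prior turns, then the new user question.

    chat_history follows prompt flow's shape: a list of dicts with
    "inputs" (the earlier question) and "outputs" (the earlier answer).
    """
    messages = [{"role": "system", "content": system_message}]
    for turn in chat_history:
        messages.append({"role": "user", "content": turn["inputs"]["question"]})
        messages.append({"role": "assistant", "content": turn["outputs"]["answer"]})
    messages.append({"role": "user", "content": question})
    return messages


history = [{"inputs": {"question": "Hi"},
            "outputs": {"answer": "Hello! How can I help you plan your trip?"}}]
msgs = build_messages("You are a travel advisor specializing in California tours.",
                      history,
                      "Tell me about Santa Monica Pier.")
# msgs now holds 4 entries: system, user, assistant, user
```

This is why the flow needs both a `chat_history` input and a `question` input: the history keeps the conversation coherent across turns.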
