Do more with AI Prompts – validate the answers in Copilot Studio

Conversation orchestration in Copilot Studio is a powerful way to achieve the desired functionality and discussion flow with your end user, and it's a skill worth mastering. In my experience, sooner or later you will find places and use cases where the standard generative AI answers and messages are no longer enough. You could always offload the processing to Power Automate or an API, but other options exist.

Now Microsoft has published new capabilities for AI Prompts by bringing the o1 model into use, unlocking reasoning capabilities for your AI-driven workflows. I hope to see even more possibilities in the future.

Supercharge your Agents and Flows with AI Prompts - Microsoft Power Platform Blog

Original post in my blog: Do more with AI Prompts – validate the answers in Copilot Studio – Mikko Koskinen

Why AI Prompts?

AI Prompts offer structured guidelines that direct large language models (LLMs) to carry out specific tasks. This method, known as instruction tuning, enables businesses to refine AI-generated responses for greater accuracy, relevance, and usefulness.

  • Customizable Instructions: Users can tailor prompts to address unique business needs, ensuring AI-generated content aligns with specific workflows and objectives (see the sketch after this list).

  • Dynamic Input Handling: AI Prompts can integrate input variables and contextual data, making responses more relevant and adaptable based on real-time information.

  • Knowledge with Dataverse Data: AI Prompts can leverage knowledge retrieval to contextualize prompts with relevant data from Dataverse, ensuring more informed and accurate AI responses.

  • Document and Image Processing: AI Prompts can handle document and image inputs, enabling AI-driven workflows to extract insights, summarize content, and interpret visual data.
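To make the first two points concrete, here is a minimal, hypothetical prompt sketch. The bracketed names [UserQuestion] and [PolicyExcerpt] are placeholder input variables I made up for illustration, not product defaults; you would add the real ones through the prompt builder:

    You are an assistant for internal company instructions.
    Answer the question below concisely and professionally,
    using only the provided context.

    Question: [UserQuestion]
    Context: [PolicyExcerpt]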

Use-Case: Check the relevance of the GenAI answer

In one of my projects, we encountered a situation in which the agent's knowledge source didn't necessarily offer all the relevant information regarding the use case.

We built an agent on top of the company's internal instructions to help users find information and ease the support team's workload.

  • When the support team reviewed the agent's answers, they often noticed details that should have been included.
  • Some information was simply missing from the instructions; in other cases, tacit knowledge filled the gaps.

This tacit knowledge is hard to capture and not always something you can write down in documents or instructions. We decided to give the support team a tool for gathering this additional information into a QA knowledge base. We then used this information to ensure the agent's answers were always aligned with the best available knowledge.
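As an illustration, such a QA knowledge base could be a simple Dataverse table along these lines. The table and column names are my own assumptions for this example, not the schema from the actual project:

    QAKnowledgeBase (Dataverse table), illustrative columns:
      Topic          (text): subject area used to match incoming questions
      Question       (text): the recurring user question
      ExampleAnswer  (text): the support team's vetted answer
      ExtraDetails   (text): tacit knowledge missing from the official instructions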

For simplicity, I will show the approach for a single topic, but you can use the same approach for other use cases as well.

  1. Let's build a controlled topic and get the answer from the knowledge source with a Generative answers node first.
  2. Make sure you turn off Send a message in the node's settings.
  3. Save the bot response to a variable for later use.

Now, we can build the important part by adding a Prompt action.

The logic behind the Prompt

  • The Prompt receives the original user prompt (the question) and the answer generated by the Generative answers node as parameters.
  • We add the Dataverse table holding the KB details as a knowledge source for the Prompt.
  • The first step is to analyze the question to identify its topic. We use this finding to query the KB database.
  • Then we query the KB table in Dataverse for a possible example answer or additional details related to the topic.
  • In the next two steps, the prompt analyzes, checks, and rewrites the answer with the necessary details.
  • The new, modified answer is then sent to the agent user (see the instruction sketch after this list).
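Here is a hedged sketch of how the step-by-step instructions inside such a Prompt could look. The bracketed names ([UserQuestion] for the original question, [DraftAnswer] for the generative answer) and the QAKnowledgeBase table reuse the illustrative placeholders from above, not actual product names; in the prompt builder you would insert the real input variables and the Dataverse source via Add:

    1. Read the question in [UserQuestion] and identify its main topic.
    2. Look up rows in the QAKnowledgeBase table whose Topic matches that topic.
    3. Compare [DraftAnswer] against the ExampleAnswer and ExtraDetails columns of the matching rows.
    4. If relevant details are missing, rewrite [DraftAnswer] to include them; otherwise keep it as is.
    5. Return only the final answer text, with no extra commentary.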

More details and instructions here: Prompts overview | Microsoft Learn

When writing the instructions, it is best to split the process into multiple steps that each perform a small, specific task. I especially like how easy it is to add the parameters in the correct places and make the necessary connections to the database.

  1. In the desired place in the instructions, click Add.
  2. The available parameters and data sources appear in a list.
  3. Select the desired data source to add the field used in the query.
  4. Then, in the prompt form, use the dropdown to select the correct column.

One last tip! As always, remember that you can use Copilot to help with prompt instructions. I tend to write lengthy, overly complex instructions, so I wrote my process idea the way I liked and then let Copilot shape the final version of the Prompt.

Here is a comparison of the answers from the basic GenAI version and the prompt-finetuned version.

I was hoping to test the new o1 model, but at the time of writing, it had not yet been activated in my tenant in North Europe.

Notes About the Costs

Be aware that using the prompt action increases the agent's message consumption. During the preview, the action didn't cost anything extra, but message consumption becomes active at the beginning of April. The consumption is calculated on top of the other events in the conversation flow.

In my test case, the message consumption would be:

  1. 2 messages from the Generative answers action
  2. 1.5 messages from the Prompt action
  3. 1 message from the final classic answer
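In total, that is 2 + 1.5 + 1 = 4.5 messages for a single run through this topic.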

Note - AI prompts don't consume AI Builder credits in Copilot Studio.

The consumption rate depends on the selected LLM and is calculated per 1,000 tokens: 4o-mini prompts bill as the basic feature, 4o prompts as the standard feature, and o1 prompts as the premium feature.

Comments

Petchpaitoon Krungwong

Helping organizations transform the Modern Workplace and improve Employee Experience

1 week

Useful tips

Chris Burns

AI & Automation Architect | AI Evangelist | Helping Businesses Harness AI to Innovate, Automate & Scale | Expert in AI Strategy, Intelligent Automation & Digital Transformation | Kaizenova & Nasstar

2 weeks

Thanks for sharing this… Food for thought…. Jack Fisher

Peter Charquero Kestenholz

PPM, xPM & GenAI Innovation | Founder | Microsoft MVP

2 weeks

Good points and idea, Mikko. I just got a dev environment spun up in North America. 4o-1 works great, but when it's a "credits" game, consumption goes up dramatically.

Mahmoud Hassan

Microsoft MVP | Empower enterprises to thrive with Microsoft Copilot & Modern Workplace AI solutions

2 weeks

Mikko Koskinen thanks for sharing! Using the prompt action to check the relevance of the generative answers response is a really smart idea! I just have a single comment regarding the "query against the KB database in Dataverse to find a possible example answer or more details related to the topic." I don't think this feature is available yet; currently, if you don't use the OOB filters, the prompt action returns all the rows (I think up to 1,000 rows). Having data retrieval calls within an AI prompt is very powerful, but I don't think we're there yet. Henry Jammes, what do you think? Maybe I am wrong and this powerful feature is already available?! Henry Jammes, I also think AI prompts are one of the areas that need more logs/tracing information to know exactly what is happening, as currently this is a black box.

Maurizio Ceccacci

AI | CRM | Digital Transformation | Autonomous Agents

2 weeks

Mikko Koskinen thanks for sharing. It was a very inspiring read. I got a couple of ideas for my projects.
