Do more with AI Prompts – validate the answers in Copilot Studio
Conversation orchestration in Copilot Studio is a powerful way to achieve the desired functionality and discussion flow with your end user, and it is a skill worth mastering. In my experience, sooner or later you will find places and use cases where the standard generative AI answers and messages are no longer enough. You could always move the processing to Power Automate or an API, but other options exist.
Now Microsoft has published new capabilities for AI Prompts by bringing the o1 model into use, unlocking reasoning capabilities for your AI-driven workflows. I'm hoping to see even more possibilities in the future.
Original post in my blog: Do more with AI Prompts – validate the answers in Copilot Studio – Mikko Koskinen
Why AI Prompts?
AI Prompts offer structured guidelines that direct large language models (LLMs) to carry out specific tasks. This method, commonly known as prompt engineering, enables businesses to refine AI-generated responses for greater accuracy, relevance, and usefulness.
Use-Case: Check the relevance of the GenAI answer
In one of my projects, we encountered a situation in which the agent's knowledge source didn't necessarily offer all the relevant information regarding the use case.
We had built an agent on top of the company's instructions to help users find information and ease the support team's workload.
Much of the missing information was tacit knowledge: hard to capture and not always something you can write down in documents or instructions. We decided to give the support team a tool to gather this additional information into a QA knowledge base. Then, we used this information to ensure the agent's answers were always aligned with the best available knowledge.
For simplicity, I will demonstrate this for only a single topic, but the same approach works for other use cases as well.
Now, we can build the important part by adding a Prompt action.
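To make the idea of the prompt action concrete, here is a minimal sketch of the validation flow in Python. All names are illustrative, not a real Copilot Studio API, and the naive keyword overlap merely stands in for the LLM relevance check the prompt action performs.

```python
def validate_answer(question: str, draft_answer: str,
                    qa_pairs: list[tuple[str, str]]) -> str:
    """Return the draft generative answer, refined with any matching QA entry.

    qa_pairs holds (question, answer) rows from the support team's
    QA knowledge base.
    """
    for qa_question, qa_answer in qa_pairs:
        # Naive keyword overlap stands in for the LLM relevance check.
        overlap = (set(question.lower().split())
                   & set(qa_question.lower().split()))
        if len(overlap) >= 2:
            # A relevant QA row exists: extend the draft with it.
            return f"{draft_answer}\n\nSupport team note: {qa_answer}"
    # Nothing relevant found: pass the generative answer through unchanged.
    return draft_answer


refined = validate_answer(
    "how to reset my vpn password",
    "Contact IT.",
    [("how do I reset my vpn password", "Use the self-service portal.")],
)
# the draft answer now carries the support team's note appended
```

In the real agent, the prompt action plays the role of `validate_answer`: it receives the user's question and the draft generative answer as input parameters and compares them against the QA knowledge base.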
The logic behind the Prompt
More details and instructions are available here: Prompts overview | Microsoft Learn
When writing the instructions, the best approach is to split the process into multiple small steps, each performing one particular task. I especially like how easy it is to add the parameters in the correct places and make the necessary connections to the database.
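As an example of such step-by-step instructions, here is an illustrative prompt template. The wording is my own, not taken from the post; the placeholders mark where Copilot Studio would insert the input parameters.

```python
# Illustrative prompt template: each numbered step performs one small task,
# and the placeholders mark where the input parameters are inserted.
PROMPT_TEMPLATE = """\
1. Read the user's question: {question}
2. Read the draft generative answer: {draft_answer}
3. Compare the draft answer with the QA knowledge base rows: {qa_rows}
4. If a row covers the same topic, correct or extend the draft answer with it.
5. Return only the final answer text, with no extra commentary.
"""

# Fill the placeholders the way Copilot Studio would at runtime
# (the values below are made-up sample data).
prompt = PROMPT_TEMPLATE.format(
    question="How do I claim travel expenses?",
    draft_answer="Submit the expense form in the HR portal.",
    qa_rows="Q: travel expenses? A: Use the Concur app, not the HR portal.",
)
```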
One last tip! As always, remember that you can use Copilot to help with the prompt instructions. I tend to write lengthy, overly complex instructions, so I wrote my process idea the way I liked and then let Copilot format the final version of the prompt.
Here is a comparison of the answers from the basic generative AI version and the prompt-finetuned version.
I was hoping to be able to test the new o1 model, but at the time of writing this, the model had not yet been activated in my tenant in North Europe.
Notes About the Costs
Be aware that using a prompt action will increase the agent's message consumption. Earlier, in preview, the action didn't cost anything extra, but message consumption becomes active at the beginning of April. The consumption is calculated on top of the other events in the conversation flow.
In my test case, the message consumption would be as shown in the screenshot in the original post.
Note - AI prompts don't consume AI Builder credits in Copilot Studio.
The consumption rate is calculated per 1,000 tokens and depends on the selected LLM model: 4o-mini prompts use the basic feature, 4o prompts use the standard feature, and o1 prompts use the premium feature.
Comments

Helping organizations transform the Modern Workplace and improve Employee Experience
1 week ago: Useful tips

AI & Automation Architect | AI Evangelist | Helping Businesses Harness AI to Innovate, Automate & Scale | Expert in AI Strategy, Intelligent Automation & Digital Transformation | Kaizenova & Nasstar
2 weeks ago: Thanks for sharing this… Food for thought… Jack Fisher

PPM, xPM & GenAI Innovation | Founder | Microsoft MVP
2 weeks ago: Good points and idea, Mikko. I just got a dev environment spun up in North America. 4o-1 works great, but when it's a "credits" game, consumption goes up dramatically.

Microsoft MVP | Empower enterprises to thrive with Microsoft Copilot & Modern Workplace AI solutions
2 weeks ago: Mikko Koskinen, thanks for sharing. Using the prompt action to check the relevance of the generative answers is a really smart idea! I just have a single comment regarding the "query against the KB database in Dataverse to find a possible example answer or more details related to the topic." I don't think this feature is available yet; currently, if you don't use the OOB filters, the prompt action returns all the rows (I think up to 1,000 rows). Having data-retrieval calls within an AI prompt is very powerful, but I don't think we're there yet. Henry Jammes, what do you think? Maybe I am wrong and this powerful feature is already available? Henry Jammes, I also think AI prompts are one of the areas that needs more logs/tracing information to know exactly what is happening, as currently this is a black box.

AI | CRM | Digital Transformation | Autonomous Agents
2 weeks ago: Mikko Koskinen, thanks for sharing. It was a very inspiring read. I got a couple of ideas for my projects.