Build your own private GPT agents in Teams


With a good prompt, top-tier models (GPT-4, Mistral Large…) can beat the best doctors and lawyers out there.

I often ask myself: if those models can beat these people, they could surely beat a Microsoft Power Platform Technical Specialist.

The catch is that if I want to build a "Technical Specialist" agent with ChatGPT, it will be handling Microsoft and customer proprietary information. I can't use my personal OpenAI ChatGPT account for that.

I could code a ChatGPT-like app from scratch on my tenant, but I am not a professional developer and, more importantly, I don't have the expertise to build a fully secure enterprise application.

In this article, we build a private ChatGPT experience in Teams.

We are going to do it in low code. Let me take that back: we are going to use no code for the drudgery and pro code when needed.

Microsoft Copilot Studio, Dataverse, and Power Automate will be the no-code part of the solution. From Power Automate, we'll call the Azure OpenAI API through an HTTP POST request, which will be the pro-code part of our solution.

With that setup, everything stays private on our tenant... Let's get started!

Our data model: agent, conversation, and message tables

Our first table is the agent table. It hosts the agents with their name, their system prompt, and a little intro message we'll use when the agent joins the conversation.

Our first agent, ChatGPT, with its simple prompt

The following two tables work together. Every conversation will have one or more messages. Every time the agent or the user sends a message, we'll store it.

Example of a conversation history with ChatGPT agent
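
To make the data model concrete, here is a minimal sketch of the three tables as Python dataclasses. The real tables live in Dataverse; the field names below (agent_number, system_prompt, intro_message, and so on) are illustrative assumptions, not the actual Dataverse column names.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Agent:
    """One row of the agent table: who the agent is and how it behaves."""
    agent_number: int      # used later by the /agent command
    name: str              # e.g. "ChatGPT"
    system_prompt: str     # the agent's "brain"
    intro_message: str     # greeting posted when the agent joins the conversation

@dataclass
class Conversation:
    """One row of the conversation table: a chat session with one agent."""
    conversation_id: str
    agent_number: int
    started_at: datetime

@dataclass
class Message:
    """One row of the message table: a single turn from the user or the agent."""
    conversation_id: str
    role: str              # "user" or "assistant"
    content: str
    sent_at: datetime
```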

Columbo, the bot to orchestrate the agents

We introduce Columbo, the bot that orchestrates the conversation with our agents. For that, we'll have inner dialogs with Columbo. Those dialogs won't be stored in the conversation with our agents.

Example of an inner dialog with Columbo

Chatting with Columbo, Midjourney style

Chatting with Columbo works Midjourney style: every command starts with a "/" and accepts parameters if needed. To help distinguish conversations with agents from inner dialogs with Columbo, we'll use italics when Columbo responds.
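
Copilot Studio handles this with trigger phrases and conditions, but the command convention itself is easy to sketch. The regular expression and helper names below are assumptions for illustration, not the expressions used in the actual bot.

```python
import re

# A Columbo command is "/name" optionally followed by a parameter, e.g. "/agent 2".
COMMAND_PATTERN = re.compile(r"^/(?P<command>\w+)(?:\s+(?P<argument>.+))?$")

def parse_user_message(text: str):
    """Return (command, argument) for a slash command, or (None, text) otherwise."""
    match = COMMAND_PATTERN.match(text.strip())
    if match:
        return match.group("command").lower(), match.group("argument")
    return None, text

def as_columbo(text: str) -> str:
    """Columbo's inner-dialog replies are italicized (Teams renders Markdown)."""
    return f"*{text}*"

print(parse_user_message("/agent 2"))   # ('agent', '2')
print(parse_user_message("Hello!"))     # (None, 'Hello!')
```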

Using a fallback topic to chat with the agent

In conversational AI, we traditionally first try to capture the user's intent and then kick off a scripted topic from there. If none of the authored topics triggers, we fall back to a "catch-all" or "fallback" topic.

Here, the fallback topic is our core topic. All it does is pick up all the conversation messages, call Azure OpenAI, save the new messages in the message table, and keep going.

List of Columbo topics - The "Chat" topic triggered On Unknown Intent
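
Conceptually, the fallback topic is a small loop: append the user's message to the history, send everything to the model, append the reply, and repeat. Here is a minimal Python sketch of that loop, with an in-memory dictionary standing in for the Dataverse message table and a call_model callable standing in for the Power Automate flow covered in the next section.

```python
# In-memory stand-in for the Dataverse message table; in the real solution these
# rows are written and read by Power Automate.
MESSAGES: dict[str, list[dict]] = {}

def handle_unknown_intent(conversation_id: str, system_prompt: str,
                          user_text: str, call_model) -> str:
    """Sketch of the fallback ("Chat") topic: store the user's message, send the
    whole history plus the agent's system prompt to the model, store the reply."""
    history = MESSAGES.setdefault(conversation_id, [])
    history.append({"role": "user", "content": user_text})

    reply = call_model(system_prompt, history)   # the Azure OpenAI call, see below

    history.append({"role": "assistant", "content": reply})
    return reply
```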

Calling the Azure OpenAI API in Power Automate

In its simplest form, the Azure OpenAI chat API takes three logical inputs:

  • The "model" deployment we want to use (GPT-4 or GPT-3)
  • The system prompt (the "brain")
  • The "dialog": every message exchanged since the conversation started

The one key output is the response from the model as a string.

In our flow, all we do is pick up the agent's system prompt and list all the messages for the conversation we are in. We concatenate them into an array to pass in the body of the API call. The model variable comes from the user dynamically choosing which model they want to use.

Calling the Azure OpenAI API from Power Automate
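
For reference, here is the same call expressed in Python. This is a hedged sketch: the environment variable names, the temperature, and the api-version value are assumptions to adapt to your own Azure OpenAI resource; in the solution itself, the equivalent request is built in a Power Automate HTTP action.

```python
import os
import requests

# Assumed configuration: point these at your own Azure OpenAI resource.
AZURE_OPENAI_ENDPOINT = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://my-resource.openai.azure.com
AZURE_OPENAI_KEY = os.environ["AZURE_OPENAI_KEY"]
API_VERSION = "2023-05-15"

def call_azure_openai(deployment: str, system_prompt: str, dialog: list[dict]) -> str:
    """POST the system prompt plus the stored conversation to a chat deployment
    (a GPT-4 or GPT-3 deployment) and return the model's reply as a string."""
    url = (f"{AZURE_OPENAI_ENDPOINT}/openai/deployments/{deployment}"
           f"/chat/completions?api-version={API_VERSION}")

    # System prompt first, then every stored message in chronological order.
    messages = [{"role": "system", "content": system_prompt}] + dialog

    response = requests.post(
        url,
        headers={"api-key": AZURE_OPENAI_KEY, "Content-Type": "application/json"},
        json={"messages": messages, "temperature": 0.7},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```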

Instructing Columbo to switch agents

To switch between agents in our bot, we implement a scripted topic, /agent, that accepts AgentNumber as a parameter.

Capturing the instruction parameter

One way to trigger such a topic in Copilot Studio is to use a trigger condition that programmatically tests the user's message.

Once triggered, we extract the variable (here AgentNumber) from the user's message string and call a Power Automate flow to retrieve the new agent's information.
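
Outside of Copilot Studio, the logic of the /agent topic boils down to pulling the number out of the message and looking up the matching row in the agent table. The sketch below is illustrative only: the agent records are made up, and in the real solution the lookup is a Power Automate flow querying Dataverse.

```python
import re

def extract_agent_number(user_message: str):
    """Pull the AgentNumber parameter out of a "/agent 2"-style message."""
    match = re.match(r"^/agent\s+(\d+)$", user_message.strip())
    return int(match.group(1)) if match else None

def switch_agent(agents: dict[int, dict], user_message: str):
    """Return the new agent's record (name, system prompt, intro), or None if
    the number is missing or unknown."""
    number = extract_agent_number(user_message)
    return agents.get(number) if number is not None else None

# Illustrative agent table rows (the real rows live in Dataverse).
AGENTS = {
    1: {"name": "ChatGPT", "system_prompt": "You are a helpful assistant.",
        "intro": "Hi, ChatGPT here."},
    2: {"name": "Power Platform Discovery", "system_prompt": "Ask questions only.",
        "intro": "Let's scope your problem."},
}
print(switch_agent(AGENTS, "/agent 2")["name"])   # Power Platform Discovery
```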

Switching between GPT-4 and GPT-3 to manage AI cost

If you have called the OpenAI chat API before, you know you are just one more variable away in your code from switching between the GPT-3 and GPT-4 models.

All we have to do is offer the user two simple scripted topics to switch between these two models.

Switching to GPT-4

Control your AI cost while empowering your users

Give your users a monthly GPT-4 token budget. Track it for them automatically, and from there they can manage their AI budget by switching back and forth between the GPT-3 and GPT-4 models. If a user reaches the monthly token budget, offer only GPT-3 access and block GPT-4 until the next month.
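
Here is a minimal sketch of that budget check, assuming a hypothetical monthly token counter kept in Dataverse and updated from the usage field that each chat completion response returns; the budget figure and the deployment names are placeholders, not recommendations.

```python
GPT4_MONTHLY_TOKEN_BUDGET = 500_000   # illustrative figure only

def pick_deployment(requested: str, tokens_used_this_month: int) -> str:
    """Honor the user's GPT-4 choice only while budget remains; otherwise fall
    back to the cheaper deployment until next month."""
    if requested == "gpt-4" and tokens_used_this_month >= GPT4_MONTHLY_TOKEN_BUDGET:
        return "gpt-35-turbo"   # assumed name of the cheaper deployment
    return requested

print(pick_deployment("gpt-4", 120_000))   # gpt-4
print(pick_deployment("gpt-4", 600_000))   # gpt-35-turbo
```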

Chatting with our Power Platform virtual Technical Specialist

Now that we can chat with our agents privately on our tenant, let's test it out and create our virtual Power Platform Technical Specialist.

The goal here is to help with customer discovery (understanding the problem) and, whenever we are ready, to iterate on a Power Platform solution to the customer's problem.

Technically, we'll create two agents: a Power Platform Discovery agent and a Power Platform Solution agent.

Counterintuitively, discovery is often the hardest part of a technical specialist's job. It requires experience to ask the right questions.

Example of a system prompt for a virtual Power Platform GBB during discovery

Flipping GPT on its head

ChatGPT is by design programmed to provide solutions. With our Power Platform Discovery agent, we are making the questions themselves ChatGPT's solution.
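
The actual discovery prompt is the one shown in the screenshot above; the string below is only an illustrative sketch of the idea of instructing the model to ask questions rather than propose solutions.

```python
# Illustrative sketch of a question-first system prompt, not the prompt from the
# screenshot above.
DISCOVERY_SYSTEM_PROMPT = """\
You are a Power Platform Technical Specialist running a discovery session.
Do NOT propose a solution yet. Ask one concise question at a time to understand
the customer's business problem: current process, people involved, data sources,
volumes, pain points, and success criteria. Acknowledge each answer briefly,
then ask the next most useful question."""
```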

This approach leaves less room for model hallucinations and allows us to carry longer dialogs.

In spirit, it is harder to argue that someone is hallucinating when they are asking a question.

And the more questions we answer and the more we write about it, the better our problem will be scoped. Playing with it feels like taking powerful notes mixed with a touch of gamification. You feel understood and always want to answer one more question!

Once we are done with (and tired of) answering questions, all we do is bring in our Power Platform Solution agent to reason over the entire dialog (and a well-scoped problem) and provide a potential solution we can iterate on.
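
The handoff itself is nothing more than replaying the stored dialog under the Solution agent's system prompt. A hypothetical sketch, reusing the call_model stand-in from the fallback-topic sketch above:

```python
def hand_off_to_solution_agent(solution_prompt: str, history: list[dict],
                               call_model) -> str:
    """Send the full discovery dialog to the Solution agent: same messages, but
    with the Solution agent's system prompt and a closing instruction."""
    history = history + [{
        "role": "user",
        "content": "Based on everything above, propose a Power Platform "
                   "solution we can iterate on.",
    }]
    return call_model(solution_prompt, history)
```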

Chat with a virtual Power Platform GBB to solve driver onboarding for a shipping company

Voilà! I am sure we can do way better; I have plenty more ideas, but this is a good stopping point for now. I have a working tool I can push and mess with for a while.

Feel free to like, comment, and share, or just send me a note with feedback.

Copilot Studio, Power Automate, and Dataverse are part of Microsoft Power Platform. If you have not already, check out Power Platform: it is a terrific way to deploy enterprise-grade AI securely, and in no time, on top of a legacy tech stack.

Lastly, a BIG thank you to my fusion team for providing super valuable feedback and debugging services, and for keeping me grounded with my ideas.

JJ Worthen, Mark Hodge, Rémi Dyon, Robert Pindar, Satish Paul... Thank YOU!

Jake Jones

Co-founder - AI Agents for Legal

11 months ago

GPT-4 cannot beat doctors or lawyers, and that's not the point of present applications of this technology.

Kyle Bahr

Legal AI Strategy Lead & Senior Business Attorney | ex-Legal Tech AI PM, F200 Commercial Litigation & Contracts/Legal Ops & Tech Attorney, Global AmLaw100 Attorney & Paralegal, and Federal Law Clerk

11 months ago

Hi Nico, thanks for the deep dive! But where's the part in which it beats a lawyer or a doctor at a legal or medical task?

Jeffrey Bennett

Navigating a software-powered world.

11 months ago

Great article!

Jerome Knyszewski

Publisher of ValiantCEO magazine. We interview thought leaders. Contributor at Entrepreneur.com , Founders Mag, Authority Magazine, and WellnessVoice.com

11 months ago

Amazing
