Demystifying Microsoft 365 Copilot
In the ever-evolving landscape of digital tools, the M365 Copilot has emerged as a game-changer for enhancing productivity and streamlining workflows. However, with its rise in popularity, several rumors have surfaced about its functionality and capabilities. In this article, we will dispel these rumors and shed light on the true nature of the M365 Copilot.
Defining the Microsoft 365 Copilot
The M365 Copilot is an experience using generative AI to assist humans with complex cognitive tasks. You can think of it as an advanced AI-powered assistant integrated into the Microsoft 365 suite. It is designed to assist users by automating repetitive tasks, providing intelligent suggestions, and enhancing overall productivity. The Copilot works seamlessly across various Microsoft 365 applications, including Word, Excel, PowerPoint, and Teams, to name a few.
The elements of Microsoft 365 Copilot
It is important to understand that M365 Copilot is not an LLM or a single application. It is an extensible orchestration engine consisting of the following elements:
User Experience: The user experience with Microsoft 365 Copilot is designed to be seamless and intuitive. Its user interface is integrated directly into Microsoft 365 applications like Word, Excel, PowerPoint, Outlook, and Teams. It leverages data from these applications, as well as from more than 1,200 plugins, including Adobe, SAP, and Workday. This integration allows Copilot to assist with various tasks, from drafting emails to generating reports, all while maintaining a consistent and user-friendly interface.
Orchestrator (Orchestration Engine): The orchestration engine in Microsoft 365 Copilot is responsible for overseeing and synchronizing Copilot's operations. It ensures that the right skills and actions are executed based on the user's input (prompts). The orchestrator manages the flow of information and tasks, making sure that Copilot provides relevant and actionable responses (see the simplified sketch after this list).
Knowledge (Grounding & Memory): Microsoft 365 Copilot uses grounding to improve the specificity of user prompts. Grounding involves using data from Microsoft Graph, emails, chats, documents, and other sources that the user has permission to access. This ensures that the responses are contextually relevant and actionable. Copilot does not access data that the user does not have permission to view, maintaining strict data access controls.
Skills (Action, Triggers & Workflows): Copilot's skills include understanding, summarizing, predicting, recalling, translating, and generating content. These skills are powered by actions, triggers, and workflows. Actions are specific tasks that Copilot can perform, such as sending emails or updating records. Triggers are events that initiate these actions, and workflows are sequences of actions that automate complex processes. This combination allows Copilot to handle a wide range of tasks efficiently.
Foundation Models: Microsoft 365 Copilot uses a combination of foundation models, such as GPT-4 Turbo, to match the specific needs of each feature. These models are continuously evaluated and aligned with the capabilities required for different tasks, ensuring that Copilot can provide accurate and contextually relevant responses. The foundation models are the backbone of Copilot's ability to understand and generate human-like text.
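Copilot's internal code is not public, but the way these elements fit together can be pictured with a deliberately simplified sketch. Every name below (the skill registry, retrieve_grounding, call_foundation_model) is hypothetical and exists only to show how an orchestrator might route a prompt through grounding, skill selection, and a foundation model call.

```python
# Hypothetical sketch of an orchestration loop -- not Microsoft's implementation.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Prompt:
    user_id: str
    text: str
    grounding: list[str] = field(default_factory=list)  # snippets the user may access

# A "skill" is just a named action the orchestrator can invoke.
SkillFn = Callable[[Prompt], str]
SKILLS: dict[str, SkillFn] = {}

def skill(name: str):
    """Register a skill (action) under a name the orchestrator can look up."""
    def register(fn: SkillFn) -> SkillFn:
        SKILLS[name] = fn
        return fn
    return register

def retrieve_grounding(user_id: str, text: str) -> list[str]:
    """Placeholder for permission-trimmed retrieval from Microsoft Graph."""
    return [f"(snippet relevant to '{text}' that {user_id} is allowed to see)"]

def call_foundation_model(instruction: str, grounding: list[str]) -> str:
    """Placeholder for the LLM call inside the Microsoft 365 service boundary."""
    return f"[model output for {instruction!r} with {len(grounding)} grounding snippet(s)]"

@skill("summarize")
def summarize(prompt: Prompt) -> str:
    # In reality this would be handled by a foundation model such as GPT-4 Turbo.
    return call_foundation_model(f"Summarize:\n{prompt.text}", prompt.grounding)

def orchestrate(user_id: str, text: str, skill_name: str = "summarize") -> str:
    """The orchestrator: ground the prompt, pick a skill, return the response."""
    prompt = Prompt(user_id=user_id, text=text)
    prompt.grounding = retrieve_grounding(user_id, text)  # Knowledge
    action = SKILLS[skill_name]                           # Skills
    return action(prompt)                                 # Foundation model

print(orchestrate("megan@contoso.com", "last week's project status emails"))
```

The point of the sketch is the division of labor: the orchestrator itself does no language generation, it only decides which knowledge to fetch and which skill (and therefore which model call) to run.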
How Microsoft 365 Copilot works and its logical architecture
Before we start, it is important to understand the Microsoft 365 service boundary in the context of M365 Copilot. It refers to the limits within which Copilot operates to ensure data security, privacy, and compliance: prompts, responses, and the data used for grounding stay within your tenant's existing Microsoft 365 compliance boundary and are handled under the same privacy and security commitments as your other Microsoft 365 data.
For more detailed information, you can visit the "Security and Compliance" section of the Microsoft Trust Center. But now let's get started.
Stage 1: User prompt
M365 Copilot starts working when it receives an input prompt from a user in a Microsoft 365 app, like Teams, Word or PowerPoint. Note: because security is a top priority, all data flow is encrypted in transit via HTTPS and secure WebSockets (wss://).
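Copilot's actual service endpoints are internal, but the essential point, that the prompt only ever leaves the client over an encrypted channel, can be pictured with a small sketch. The URL below is invented; the HTTPS POST is what guarantees TLS encryption in transit.

```python
import requests  # widely used HTTP client library

# Hypothetical endpoint -- Copilot's real service URLs are not public.
COPILOT_ENDPOINT = "https://example.contoso.internal/copilot/prompt"

def submit_prompt(user_token: str, text: str) -> dict:
    """Send a user prompt over HTTPS; TLS encrypts the payload in transit."""
    response = requests.post(
        COPILOT_ENDPOINT,
        headers={"Authorization": f"Bearer {user_token}"},
        json={"prompt": text},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()
```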
Stage 2: Copilot pre-processes the input prompt using grounding.
Grounding improves the specificity of the prompt and helps you get answers that are relevant and actionable for the specific task. The prompt can include text from input files or other content Copilot discovers.
Copilot only accesses data that an individual user is authorized to access, based on, for example, existing Microsoft 365 role-based access controls. Copilot doesn't access data that the user doesn't have permission to access.
To learn more, see Data, Privacy, and Security for Microsoft 365 Copilot.
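To make the permission trimming concrete, here is a deliberately simplified sketch. The helpers search_graph and user_can_access are hypothetical stand-ins for Microsoft Graph search and the existing role-based access controls mentioned above.

```python
# Hypothetical sketch of permission-trimmed grounding -- illustration only.
def search_graph(query: str) -> list[dict]:
    """Stand-in for a Microsoft Graph search across mail, chats, and documents."""
    return [
        {"id": "doc-1", "snippet": "Q3 budget summary...", "allowed_users": {"megan@contoso.com"}},
        {"id": "doc-2", "snippet": "Confidential reorg plan...", "allowed_users": {"ceo@contoso.com"}},
    ]

def user_can_access(user_id: str, item: dict) -> bool:
    """Stand-in for existing Microsoft 365 role-based access controls."""
    return user_id in item["allowed_users"]

def ground_prompt(user_id: str, prompt: str) -> list[str]:
    """Return only snippets the requesting user is already permitted to see."""
    candidates = search_graph(prompt)
    return [item["snippet"] for item in candidates if user_can_access(user_id, item)]

print(ground_prompt("megan@contoso.com", "budget"))  # doc-2 is filtered out
```

The key property is that the filter runs on the user's own identity: grounding never surfaces content that the user could not already open themselves.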
Stage 3: Grounding data and modified prompts are sent to the Large Language Models. Please be aware of the following about the LLMs inside M365 Copilot:
1. The LLM components are inside the Microsoft 365 service boundary.
2. Prompts, responses, and grounding data are NOT used to train the foundation models. Microsoft does not use them for advertising or share customer data with third parties.
3. This private instance of Azure OpenAI is maintained by Microsoft. OpenAI does NOT have access to your data or to these models.
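Copilot's model calls happen entirely on Microsoft-operated infrastructure and are not something you invoke yourself. Purely to illustrate the shape of such a call, here is a sketch using the public Azure OpenAI Python SDK; the endpoint, key, and deployment name are placeholders and do not represent how Copilot is wired up internally.

```python
# Illustration only: the shape of a chat-completion call against a private
# Azure OpenAI deployment. All identifiers below are placeholders.
from openai import AzureOpenAI

def generate_response(question: str, grounding: list[str]) -> str:
    client = AzureOpenAI(
        azure_endpoint="https://contoso-private.openai.azure.com",  # hypothetical
        api_key="<key held by the service, never shared with OpenAI>",
        api_version="2024-02-01",
    )
    messages = [
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": question + "\n\nContext:\n" + "\n".join(grounding)},
    ]
    completion = client.chat.completions.create(
        model="gpt-4-turbo-deployment",  # hypothetical deployment name
        messages=messages,
    )
    return completion.choices[0].message.content
```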
Stage 4: Retrieval-augmented generation
Retrieval-augmented generation (RAG) allows Copilot to provide exactly the right type of information as input to an LLM, combining user data with other inputs, such as information retrieved from knowledge base articles, to improve the prompt. Stages 2, 3 and 4 may iterate several times before proceeding to the next stage, post-processing.
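Conceptually, RAG just means the retrieved, permission-trimmed snippets are stitched into the prompt before the model is called. A minimal, self-contained sketch with invented helper functions:

```python
# Minimal RAG sketch -- hypothetical helpers, not Copilot's internal code.
def retrieve(query: str) -> list[str]:
    """Stand-in for permission-trimmed retrieval from Graph, files, and plugins."""
    return ["Knowledge base article: resetting MFA requires admin approval."]

def call_llm(prompt: str) -> str:
    """Stand-in for the foundation model call inside the service boundary."""
    return f"[model answer grounded in {prompt.count('- ')} retrieved snippet(s)]"

def answer(user_question: str) -> str:
    snippets = retrieve(user_question)
    augmented_prompt = (
        "Answer the question using only the context below.\n"
        "Context:\n" + "\n".join(f"- {s}" for s in snippets) +
        f"\n\nQuestion: {user_question}"
    )
    return call_llm(augmented_prompt)

print(answer("How do I reset MFA for a user?"))
```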
Stage 5: Post-processing
Post-processing includes Responsible AI (RAI) checks (which are automated and do not involve human review at Microsoft), additional calls to Microsoft Graph for security, compliance, and privacy reviews, and command generation. Tenant admins have access to Microsoft Purview capabilities. To enable that, Copilot applies sensitivity labels to the content and responses passing through, and records information for auditing, eDiscovery, and related scenarios.
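None of these post-processing steps are exposed as public APIs; the sketch below is purely conceptual, with invented function names and label values, to show the order of operations described above.

```python
# Hypothetical post-processing pipeline -- no real Purview or RAI APIs are used.
from dataclasses import dataclass

@dataclass
class DraftResponse:
    text: str
    sensitivity_label: str | None = None
    audit_logged: bool = False

def responsible_ai_check(draft: DraftResponse) -> bool:
    """Stand-in for automated Responsible AI content checks (no human review)."""
    return "blocked phrase" not in draft.text.lower()

def apply_sensitivity_label(draft: DraftResponse, source_labels: list[str]) -> None:
    """Carry the most restrictive label from the source content onto the response."""
    order = ["Public", "General", "Confidential", "Highly Confidential"]
    if source_labels:
        draft.sensitivity_label = max(source_labels, key=order.index)

def record_audit_event(draft: DraftResponse) -> None:
    """Stand-in for writing an audit/eDiscovery record."""
    draft.audit_logged = True

def post_process(text: str, source_labels: list[str]) -> DraftResponse:
    draft = DraftResponse(text=text)
    if not responsible_ai_check(draft):
        draft.text = "The response was withheld by automated content checks."
    apply_sensitivity_label(draft, source_labels)
    record_audit_event(draft)
    return draft

print(post_process("Here is the summary you asked for.", ["General", "Confidential"]))
```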
Stage 6: Response
In stage 6 the user prompt, the response sent back to the user, and the citation links (or files that were accessed to generate that response) are stored. Note that this storage works similarly to how and where your mailbox data and other Microsoft 365 data, like Teams chats, are stored. The data at rest is encrypted per Microsoft's M365 data commitments. Storing the user utterance and Copilot's response enables features like Sessions/History that improve your user experience.
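As a hedged illustration, the stored record might conceptually look like the following; the field names and structure are invented, since the actual storage format and location are not documented at this level.

```python
# Hypothetical record of a stored Copilot interaction -- field names are invented.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CopilotInteraction:
    user_id: str
    prompt: str
    response: str
    citations: list[str] = field(default_factory=list)  # links/files used for grounding
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

history: list[CopilotInteraction] = []  # the kind of data that powers Sessions/History

history.append(CopilotInteraction(
    user_id="megan@contoso.com",
    prompt="Summarize last week's project status.",
    response="Here is the summary...",
    citations=["https://contoso.sharepoint.com/sites/project/StatusReport.docx"],
))
```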
In summary, Copilot iteratively processes and orchestrates these sophisticated services to produce results that are relevant to your business, because they are contextually grounded in your organization's data, the latest information from the web, and third-party extensions, all available from your favorite client canvas or apps, right at your fingertips.
And to tie it all back together: your data is encrypted at rest and in transit to keep it safe and compliant, and to give your admins the capability to manage your tenant's data securely.
Conclusion:
I hope this article adds the needed transparency on how the M365 Copilot works. It is clear that the M365 Copilot is designed with a robust orchestration engine at its heart, ensuring that user data is handled securely and compliantly. As organizations continue to navigate the complexities of digital transformation, the M365 Copilot will undoubtedly play a key role in streamlining workflows and driving productivity.