Copilot for Microsoft 365 Responsible AI Layers – What You Need to Know and Do

Responsible AI refers to a set of principles that guide the design, development, deployment, and use of artificial intelligence in a way that is safe, trustworthy, and ethical. It involves considering the broader societal impact of AI systems and taking measures to align these technologies with stakeholder values, legal standards, and ethical principles. The goal is to embed ethical principles into AI applications and workflows to mitigate risks and negative outcomes, while maximizing positive outcomes.

You might wonder: how can I make sure that I use a software-as-a-service solution like Copilot for Microsoft 365 in a responsible AI way? The answer is that you can do a lot!

In this post I will talk about the Copilot for Microsoft 365 responsible AI layers (the Large Language Models (LLM) layer, the Copilot for Microsoft 365 layer, and the Use Cases/Scenarios layer) from my point of view. I will also provide some examples, especially for the Use Cases/Scenarios layer, as this is the layer you need to pay attention to when using Copilot for Microsoft 365 to ensure that responsible AI principles are followed from start to finish.

Finally, I will cover how to enforce and monitor the Copilot for Microsoft 365 responsible AI usage at your organization.

Now, let's explore in more detail the different responsible AI layers that we have.

Copilot for Microsoft 365 Responsible AI Layers

The Large Language Models (LLM) Layer

Since the OpenAI GPT models are the main models that Copilot for Microsoft 365 relies on, it is important to understand that OpenAI cares deeply about the ethical creation and use of its models.

Their approach is multifaceted, focusing on safety, transparency, and ethical considerations. OpenAI’s Charter guides every aspect of their work, ensuring that the development of AI prioritizes beneficial outcomes and mitigates potential risks.

They engage in red-teaming, stress-testing, and impact assessments to identify and prioritize potential harms, followed by systematic testing to measure the frequency and severity of these harms.

OpenAI also collaborates with industry leaders and policymakers to foster AI systems that are aligned with human intentions and values. Their best practices encourage transparency and responsible innovation, aiming to ensure that the benefits of AI are widespread and equitable.

Example OpenAI Responsible AI Practices

Safety Teams: They have specialized teams such as the Safety Systems team, Superalignment team, and Preparedness team, each focusing on different aspects of AI safety challenges at OpenAI.

OpenAI Charter: The OpenAI Charter guides every aspect of their work to ensure that they prioritize the development of safe and beneficial AI.


The Copilot for Microsoft 365 Layer

Microsoft has been very proactive in integrating responsible AI principles into the development of Copilot for Microsoft 365. The approach to responsible AI encompasses several key principles:

  • Fairness: Ensuring AI systems treat all people fairly and do not discriminate.
  • Reliability and Safety: Creating AI systems that are reliable and safe in all scenarios.
  • Privacy and Security: Protecting the privacy and security of users by implementing strong data governance.
  • Inclusiveness: Building AI systems that empower everyone and do not exclude any groups.
  • Transparency: Being transparent about how AI systems work and the decisions they make.
  • Accountability: Holding themselves accountable for the performance and impact of their AI systems.

These efforts reflect Microsoft’s commitment to building AI systems that are not only robust and innovative but also align with ethical standards and societal values.

Microsoft makes it clear that all Copilots, which are Microsoft’s generative AI (GAI) flagship products, must address existing product readiness (accessibility, global readiness, freedom of expression, and interoperability policies), security, responsible AI, privacy, and other compliance commitments, including policies, standards, contractual obligations, legal obligations, and regulatory obligations.

Microsoft makes enterprise promises in this space:

  • You’re in control of your data – this has been true long before Copilot and remains true in the era of AI
  • Prompts, responses, and data accessed through Microsoft Graph aren't used to train foundation LLMs, including those used by Microsoft Copilot for Microsoft 365
  • Your data is protected at every step by the most comprehensive compliance and security controls in the industry

Example Microsoft Responsible AI Practices

Copilot Copyright Commitment [Accountability]: The Copilot Copyright Commitment is Microsoft's pledge to protect customers using its commercial Copilot services from intellectual property infringement claims. It offers indemnity support, promising to defend and cover any adverse judgments or settlements related to copyright, patent, trademark, trade secrets, or right of publicity claims. This initiative aims to give customers the confidence to innovate using generative AI outputs without the fear of IP infringement liability.

Abuse Monitoring [Safety]: Abuse monitoring for Copilot for Microsoft 365 occurs in real time using Azure AI Content Safety. Azure AI Content Safety is a comprehensive platform that helps ensure safety across AI and human interactions. It can filter a wide range of potentially harmful content, including violence, hate speech, sexual content, self-harm, and harassment. It also provides protection against prompt injection attacks and can detect ungrounded or hallucinated content that may be misleading or false. Additionally, the service can identify copyrighted or owned content, helping to prevent the unauthorized use of protected material. These capabilities are designed to support the creation of safer AI applications by mitigating the risks associated with the generation and distribution of inappropriate or harmful content.
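To make this more concrete, below is a minimal sketch of how the standalone Azure AI Content Safety service analyzes a piece of text against its built-in harm categories. This illustrates the underlying service rather than Copilot's internal pipeline (which Microsoft operates for you), and the endpoint, key, and sample prompt are placeholders you would replace with your own values.

```python
# Minimal sketch: screening text with Azure AI Content Safety.
# Requires: pip install azure-ai-contentsafety
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholder resource values - use your own Content Safety endpoint and key.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a sample prompt against the built-in harm categories
# (hate, sexual, violence, self-harm).
response = client.analyze_text(AnalyzeTextOptions(text="A sample user prompt"))

# Each result carries a category and a severity score; higher severity
# indicates more harmful content, which your app can then block or flag.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```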

Microsoft Responsible AI Core Principles. Image from Microsoft's public documentation

The Use Cases/Scenarios Layer

While Copilot for Microsoft 365 is a very flexible SaaS solution that can be applied across various use cases and scenarios, it is crucial to adhere to responsible AI principles when choosing where you will use it.

This means ensuring that the usage of Copilot within your use cases/scenarios is ethical, transparent, and fair. It involves being mindful of data privacy, securing informed consent when necessary, and avoiding biases that could lead to unfair outcomes.

Let's review some examples now.

Copilot Enriched Hiring Workflow

Using Copilot to augment and enrich the hiring workflow is a completely valid use case, but you should pay attention to where it is appropriate and ethical to use it and where it is not.

  • Valid Usage: Create a job description, create interview questions, summarize interview meeting notes.
  • Invalid Usage [Fairness]: You should never let Copilot make the final hiring decision based on the input information. This is where you need a person who is qualified to make such a decision, someone who can prevent things like unconscious bias and pay attention to things like potential and culture fit. By doing this, you avoid biased LLM decisions and keep the process fair.
  • Invalid Usage [Transparency]: You should never use Copilot in an interview to summarize or collect interview meeting notes without announcing it to the candidate and making sure he/she agrees. This way, you ensure that the use of AI/Copilot is transparent for everyone in the process.

Copilot Enriched Performance Management Process

You could use Copilot to augment and enrich various tasks in the performance management process, but you should pay attention to where it is appropriate and ethical to use it and where not:

  • Valid Usage: An employee sets their annual goals using personal preferences and company goals as input; a manager uses Copilot to suggest training courses based on the goals agreed with each team member.
  • Invalid Usage [Fairness]: You should never let Copilot decide who gets a promotion or define the salary raise percentage based on the input information. Again, this ensures that you have a fair process.

Enforce Responsible AI on the Use Cases/Scenarios Layer

Copilot for Microsoft 365 applies real-time abuse monitoring using Azure AI Content Safety, which can filter a wide range of potentially harmful content, including violence, hate speech, sexual content, self-harm, and harassment. However, there is no technical control that prevents Copilot for Microsoft 365 from responding to a specific use case/scenario. To overcome this limitation, the only way to ensure responsible AI usage of Copilot for Microsoft 365 is to enforce the approved usage by asking users to sign a “Terms of Use” document.

The "Terms of Use" document for Copilot for Microsoft 365 outlines the legal agreements and conditions for using the service. It typically includes information on usage rights, restrictions, privacy policies, and other legal considerations.

Monitor Responsible AI on the Use Cases/Scenarios Layer

You can use Microsoft Purview Communication Compliance to analyze interactions (prompts and responses) entered into Copilot for Microsoft 365 to detect inappropriate or risky interactions or the sharing of confidential information. You can extend this to also monitor for invalid responsible AI usage.

You can take advantage of all communication compliance features when you create a communication compliance policy that detects invalid responsible AI usage in Copilot for Microsoft 365 interactions, including:

  • Keywords
  • Trainable classifiers
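The policy conditions themselves are configured in the Microsoft Purview compliance portal rather than in code, but conceptually each Copilot prompt and response is evaluated against the conditions you define. The purely illustrative sketch below approximates keyword-style matching; the keyword list and sample interactions are hypothetical, not a real policy definition.

```python
# Purely illustrative: approximating the keyword condition of a
# communication compliance policy. The keywords and samples are hypothetical.
RESPONSIBLE_AI_KEYWORDS = [
    "who should we hire",
    "decide the promotion",
    "rank the candidates",
    "set the salary raise",
]

def flag_interaction(text: str) -> list[str]:
    """Return the policy keywords found in a Copilot prompt or response."""
    lowered = text.lower()
    return [kw for kw in RESPONSIBLE_AI_KEYWORDS if kw in lowered]

interactions = [
    "Summarize the interview meeting notes from today.",
    "Based on these CVs, who should we hire?",
]

for text in interactions:
    hits = flag_interaction(text)
    status = f"FLAGGED ({', '.join(hits)})" if hits else "ok"
    print(f"{status} -> {text}")
```

In a real policy, flagged interactions are routed to designated reviewers in the Purview portal, where keywords can be combined with trainable classifiers for broader coverage.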

Microsoft Purview Communication Compliance policy detecting Copilot for Microsoft 365 interactions. Image from Microsoft's public documentation

Summary

Adopting responsible AI principles is paramount in the development and deployment of artificial intelligence systems like Copilot for Microsoft 365. By adhering to these principles, we prioritize safety, trustworthiness, and ethical considerations throughout the entire AI lifecycle.

OpenAI's commitment to ethical creation and use of models, coupled with Microsoft's proactive integration of responsible AI principles into Copilot for Microsoft 365, underscores the importance of aligning AI technologies with societal values.

As users, it is crucial to be mindful of where and how we employ Copilot, ensuring that its usage remains ethical, transparent, and fair. It is imperative to stay informed, to engage with Copilot in a way that respects and upholds ethical considerations, and to enforce and monitor compliance with responsible AI guidelines. By doing so, we can harness the power of Copilot to drive innovation and productivity, while also safeguarding our values and ensuring a positive impact.

The future of AI is not just about technological advancements but also about the wisdom with which we integrate these tools into our daily lives and work. Let’s embrace this future with a responsible and conscientious approach, ensuring that Copilot for Microsoft 365 serves as a model of ethical AI use in the digital age.

Sharing Is Caring!

#MicrosoftCopilotTips #ModernWorkplaceAI #CopilotForMicrosoft365
