Our inaugural Responsible AI transparency report

If you were to come across an image of two farmers using an AI system in a corn field, it might not be immediately evident that this image was generated with AI.

But if this image were created with Microsoft Designer, it would have Content Credentials embedded in the metadata, labeling the image as AI-generated and noting the exact date and time of creation. Because these Content Credentials are cryptographically signed and sealed, they’re also tamper-evident. The provenance, or origin, of Designer-created images can be checked on multiple websites, including Microsoft’s Content Integrity Check tool.
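The tamper-evidence described above can be illustrated with a minimal sketch. This is not the actual Content Credentials implementation (real credentials follow the C2PA standard and use public-key certificate signatures); here a hypothetical shared signing key and an HMAC stand in, purely to show why any edit to the image bytes or the embedded metadata invalidates the seal.

```python
import hashlib
import hmac
import json

# Hypothetical key for this sketch only; C2PA uses public-key signatures.
SIGNING_KEY = b"demo-signing-key"

def embed_credentials(image_bytes: bytes, created_at: str) -> dict:
    """Attach provenance metadata plus a signature over image + metadata."""
    manifest = {"generator": "AI", "created_at": created_at}
    payload = image_bytes + json.dumps(manifest, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"manifest": manifest, "signature": signature}

def verify_credentials(image_bytes: bytes, credentials: dict) -> bool:
    """Recompute the signature; any edit to image or metadata breaks it."""
    payload = image_bytes + json.dumps(
        credentials["manifest"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credentials["signature"])

image = b"\x89PNG...fake image bytes"
creds = embed_credentials(image, "2024-05-01T12:00:00Z")
assert verify_credentials(image, creds)             # untouched: verifies
assert not verify_credentials(image + b"x", creds)  # tampered: fails
```

Because the signature covers both the pixels and the manifest, changing either one (for example, rewriting the creation date) makes verification fail, which is what makes the credentials tamper-evident rather than merely informational.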

This is just one example of how Microsoft is approaching the development and deployment of AI responsibly, with our broader approach and more product examples included in our recently released Responsible AI Transparency Report. We’re committed to promoting transparency in AI across public and private sectors, as well as contributing to the growing body of public knowledge.

To advance AI responsibly, we need transparency in many different forms – including transparency about the technology underpinning AI systems, who’s involved in making decisions, and where AI is deployed. Our Responsible AI Transparency Report is a step toward more clarity around our AI governance processes and focuses specifically on how we build generative AI systems responsibly, how we make decisions about releasing these systems, and how we learn and thus evolve our Responsible AI program.

In the report, we describe our responsible AI work using the AI Risk Management Framework developed by the National Institute of Standards and Technology (NIST). Building on our cross-company governance to establish roles and responsibilities, we map potential risks of generative AI applications through AI impact assessments, privacy and security reviews, and red teaming. We establish metrics to measure risks such as groundedness and content risks, as well as ways to evaluate the effectiveness of potential mitigations. Then we manage these identified risks, and any new risks that arise, through platform- and application-level mitigations like content filters, appropriate human oversight, and ongoing monitoring.

An image showing how we govern AI responsibly by mapping, measuring, and managing risks.
Map. Measure. Manage.

One example of this process in action involves Copilot Studio, which integrates generative AI into Microsoft 365 to enable customers without programming skills to build their own copilots. During the map, measure, manage process, our engineering team identified the risk of these copilots giving “ungrounded” answers in response to prompts, meaning the copilot’s outputs contained information that wasn’t present in the input sources. Our engineering team took steps to manage this risk, including improving groundedness filtering and introducing citations. These approaches helped improve the accuracy of copilot responses to topically appropriate questions from 88.6% to 95.7%.
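To make “ungrounded” concrete, here is a toy groundedness check – a hypothetical illustration, not Copilot Studio’s actual filter, which is far more sophisticated. It flags answer sentences whose content words never appear in any of the input sources, the simplest signal that a claim may have been invented rather than grounded.

```python
# Toy groundedness check: flag answer sentences with zero content-word
# overlap against the source documents. Real systems use semantic
# entailment models, not bag-of-words overlap.
def content_words(text: str) -> set:
    stopwords = {"the", "a", "an", "is", "are", "was", "in", "of", "to", "and"}
    return {w.strip(".,").lower() for w in text.split()} - stopwords

def ungrounded_sentences(answer: str, sources: list) -> list:
    """Return answer sentences sharing no content words with any source."""
    source_vocab = set()
    for doc in sources:
        source_vocab |= content_words(doc)
    flagged = []
    for sentence in answer.split(". "):
        words = content_words(sentence)
        if words and not (words & source_vocab):
            flagged.append(sentence)
    return flagged

sources = ["Refunds are processed within 5 business days."]
answer = "Refunds are processed within 5 business days. Shipping costs $9"
print(ungrounded_sentences(answer, sources))  # flags the shipping claim
```

A filter like this can block or rewrite flagged sentences, while citations address the same risk from the other direction: each grounded claim is linked back to the source passage that supports it, so users can verify it themselves.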

Some high-risk AI systems require a level of attention and oversight beyond what’s laid out here, which is why we established our Sensitive Uses team in 2017. As our transparency report explores, the Sensitive Uses team has received over 900 submissions since 2019, including 300 in 2023 alone.

In the past few years, we’ve also released 30 responsible AI tools that include more than 100 features to help our customers ensure that their own AI systems are designed, developed, and deployed responsibly. We use metrics from these tools to inform our own decision-making about how effective our risk mitigations are and whether a product is ready for launch.

There is no finish line for responsible AI.

We’ve learned over the years that a human-centered approach to building AI systems results in not just a more responsible product, but a better product overall. At Microsoft, we have a team of more than 400 people working on responsible AI, half of whom do so full-time. Distributing Responsible AI Champions throughout the company helps ensure that the job of advancing AI responsibly doesn’t fall solely to a single team, but rather is a function of every team across the organization.

People remain at the center of our responsible AI progress.

While technology sector-led initiatives are an important force in advancing responsible AI, we know that we can’t do it on our own. Governments play an important role in charting the path forward on AI. Microsoft has long said that we need laws regulating AI that protect people’s fundamental rights while allowing for positive uses of the technology to continue. Governments should continue to convene stakeholders to develop best practices and contribute to the development of standards.

We also understand that regulation can and should be context-specific: When AI systems conceived in advanced economies are used in developing ones, these systems either may not work or may cause harm. In 2023, we worked with more than 50 internal and external groups to better understand how AI innovation may impact regulators and individuals in developing countries. We remain committed to these and other efforts to ensure that we maximize AI’s potential societal benefits while minimizing potential harms.

AI can be a powerful tool that transforms how we live and work. Today, radiologists are using AI to detect breast cancer up to four years before it develops, and students are using AI to tailor stories to their reading level. What happens tomorrow is our collective choice. By designing systems with people in mind, working in collaboration with a broad range of stakeholders, and iterating as we go, we can advance AI responsibly – for the benefit of us all.

Read the full report here: Responsible AI Transparency Report.
