Responsible AI Superheros Get An Upgrade to Their Toolkit
Yesterday, I got the chance to meet with the amazing women and men behind the Responsible AI program at Microsoft . They were uncovering a new set of features that were just announced today, and it will make building responsible, inclusive applications easier than ever. Thanks to Sarah Bird , Natasha Crampton and their amazing team! Including my guide Suzana Ili? and friends Minsoo Thigpen Kristen Laird Catherine Brown Katelyn Rothney Mallory M. Andrew Gully Thank you all!
One of the biggest challenges to building more responsible AI solutions at scale is knowing what to do and how to do it.
Today Microsoft released an exciting update to a product that my team uses every day as we help customers learn how to red team AI solutions, Azure AI Content Safety.
At the AI Leadership Institute , we leverage a framework that is a documented best practice from the RAI team, and it revolves around the risk lifecycle. We teach workshops around AI Red Teaming and Building RAI at Scale.
During the workshop with this team I got to use the new tools and features that were just announced. There were 2 that stood out to me, but many more!
First, is the availability of prompt shields. You can learn more about them here. Prompt Shields allow you to detect and block prompt injection attacks, including a new model for identifying indirect prompt attacks before they impact your model, coming soon and now available in preview in Azure AI Content Safety.?
Every organization as they scale from playground to production will have to solve for direct and indirect prompt injection. In the tool you can now see if prompt injection is coming from indirect channels, like being embedded in a PDF you are processing or a dataset you are using. And you will be able to do this detection before you hit your LLM!
领英推荐
The second feature I am excited about is groundedness detection . While it will not be available yet, this feature allows us to detect “hallucinations” in model outputs, something that every organization is worried about even with grounded LLM deployments.
I was able to test out this feature, that will release later this year, and it allowed me not only to detect ungrounded outputs from an LLM but also get granular level visibility to what parts of the output were impacted. When an ungrounded claim is detected, customers can take one of numerous mitigation steps:?
While many companies are still in the early stages of integrating AI into their businesses, we now know how critical it is to plan for responsible scale early in these discussions and pilots.
BONUS: ok I know I said I would only share a couple of things from the announcement, but I don't want you to miss this! Safety evaluations were also announced today.
While Microsoft currently supports pre-built quality evaluation metrics such as groundedness, relevance, and fluency in preview, the new safety evaluations provide support for additional pre-built metrics related to content and security risks. This capability harnesses learnings and innovation from Microsoft Research and are available in preview in Azure AI Studio
These tools are helpful but it all starts with planning how your organization will build it's AI Safety Systems that will protect all of your AI deployments.
Get started today with our Generative AI Upskilling Guide.
Exciting times ahead for AI innovation! ?? Can't wait to see the positive impact of these transformative features. ?? Noelle R.
It was a pleasure to meet you, Noelle. Thanks for being a champion of responsible and inclusive AI. I'm looking forward to seeing these tools be put to work!
Juxtaposing with a purpose. Cynical optimist. Innovation kinesiologist. Focused on making the aspirational operational at the intersection of energy, mobility, development, and international relations. DOTMLPFer
8 个月Trish Martinelli Amanda H.
Juxtaposing with a purpose. Cynical optimist. Innovation kinesiologist. Focused on making the aspirational operational at the intersection of energy, mobility, development, and international relations. DOTMLPFer
8 个月Awesome! Pam Ennis Molly Barr James Gray Cameron Conger