Getting started: putting Responsible AI to work for your AI-powered solutions
Philippe Beraud
Chief Technology and Security Advisor, Responsible AI Lead at Microsoft | Innovation and Trust
(updated: September 2022)
Responsible innovation is top of mind. The tech industry, as well as a growing number of organizations of all kinds undergoing digital transformation, is being called upon to develop and deploy Artificial Intelligence (AI) technologies and Machine Learning (ML)-powered systems (products or services) and/or features (all referred to as AI systems below) more responsibly. And yet many organizations implementing such AI systems report being unprepared to address AI risks and failures, and struggle with new challenges in terms of governance, security, and compliance, notably with the forthcoming EU regulatory framework on AI.
Why Responsible AI?
Let’s start with that simple question: Why responsible AI? And what does it mean to be responsible?
The more powerful the tool, the greater the benefit or damage it can cause…
Technology innovation is not going to slow down. The work to manage it needs to speed up.
-Brad Smith, President and Chief Legal Officer, Microsoft
Advancements in AI are indeed different from other technologies, because of the pace of innovation (hundreds of research papers have been published every year in the past few years), but also because of its proximity to human intelligence, impacting us at a personal and societal level.
There are a number of challenges and questions raised by the use of AI technologies. We refer to the related impacts as socio-technical impacts. To name a few topics in today's debate: facial recognition, fairness, corporate responsibility, deepfakes, human rights, meaningful human control, contact tracing, consent, unintended consequences, disproportionate impact, model fragility, algorithmic auditing, etc.
All of these have given rise to an industry debate about how the world should or shouldn't use these new capabilities. Just because you can do something doesn't mean you should.
To quote the book "Tools and Weapons": "When your technology changes the world, you bear a responsibility to help address the world you have helped create." We have thus, de facto, embarked on a journey towards Responsible AI.
To meet these challenges (see the report Organizations must address ethics in AI to gain public's trust and loyalty), Microsoft is striving to adopt a human-centered approach to AI, designing and building technologies that benefit people and society while also mitigating potential harms. This includes understanding human needs and using these insights to drive development decisions from beginning to end.
Our journey towards Responsible AI began nearly six years ago, with Satya Nadella penning an article in Slate titled "The Partnership of the Future", in which our CEO explores how humans and AI can work together to solve society's greatest challenges. The article introduced the concepts of transparency, efficiency (but not at the expense of people's dignity), intelligent privacy, algorithmic accountability, and protection against bias. While many efforts have been deployed in this direction since then, we cannot stress enough that we are just at the beginning of this journey.
What about you? Are You Overestimating Your Responsible AI Maturity?
A newly released workshop to introduce the subject of Responsible AI in practice
One should indeed acknowledge that the vast majority of organizations that believe in the importance of Responsible AI are still unsure how to cross what is commonly referred to as the "Responsible AI Gap" between principles and tangible actions in the day-to-day product development lifecycle of their AI systems, partly because they lack clarity on how to make their principles operational.
Additionally, while we do recognize that turning such principles into practice must consider engineering realities, we also collectively need new kinds of engineering tools aimed at helping to better understand and refine AI technologies.
In this context, in June 2022 we publicly shared Microsoft's Responsible AI Standard, a framework to guide how we build AI systems, along with our Impact Assessment template and guide. This is an important step in our journey to develop better, more trustworthy AI. We are releasing our latest Responsible AI Standard to share what we have learned, invite feedback from others, and contribute to the discussion about building better norms and practices around AI.
To further this journey, I also have the pleasure of announcing a new workshop: the Responsible AI Workshop (https://github.com/microsoft/responsible-ai-workshop).
On this occasion, I would also like to deeply thank @Abderrahmane Lazraq and @Riad El Otmani for their involvement and their great contributions to this content as part of their data scientist internships in my team.
This project currently contains the following tutorials and walkthroughs (more relevant considerations and aspects will be added in the forthcoming months):
Each of the above tutorials and walkthroughs consists of a series of modules for data engineers, data scientists, ML developers, ML engineers, and other AI practitioners, as well as anyone else interested, given the wide range of socio-technical aspects involved in the subject.
This open-source project available on GitHub is an attempt to introduce and illustrate the use of:
It is thus designed to help you, or your "customers" whoever they are, put Responsible AI to work, i.e., into practice, for your AI-powered solutions throughout their development lifecycle. Such a lifecycle is typically organized according to the following key phases, while recognizing that AI product development often cycles through them iteratively:
In terms of prerequisites, the workshop is meant to be hands-on, so a basic knowledge of Python is expected. It is also assumed that you have prior experience training machine learning (ML) models with Python and open-source frameworks like Scikit-Learn, PyTorch, and TensorFlow.
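As a rough gauge of that expected background, if you can comfortably read and run a short training script like the following sketch, you should be well equipped. The dataset and model choices here are purely illustrative and are not taken from the workshop materials.

```python
# A minimal sketch of the prerequisite skill level assumed by the workshop:
# training and evaluating a classifier with scikit-learn on a built-in toy
# dataset. (Illustrative only; not part of the workshop itself.)
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Load a small built-in dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Scale the features, then fit a simple logistic regression model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# Evaluate on the held-out split.
accuracy = accuracy_score(y_test, model.predict(X_test))
print(f"Test accuracy: {accuracy:.2f}")
```

If this kind of train/evaluate loop is familiar, the workshop modules will mostly add the Responsible AI considerations and tooling around it rather than the ML basics.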
One should also note that the following Microsoft Learn learning paths can serve as an introduction to this workshop:
To go beyond
From holistically transforming industries to addressing critical issues facing humanity, AI is already solving some of our most complex challenges and redefining how humans and technology interact.
You can visit our Responsible AI resource center, where you will find tools, guidelines, and additional resources to help you create a (more) responsible AI solution:
I hope you will enjoy the workshop's content to (further) improve your AI-powered solutions and make them (even) more responsible.
Thanks, Philippe