Responsible AI, or RAI – Why is it important?
Ivana Tilca
Lead Manager @ Allata | Microsoft MVP in Artificial Intelligence | Technology Advocate | Speaker | World Traveler
So I’ve been reading a lot about AI and the responsibility it involves.
We hear a lot about the potential for AI to help people and society, but also about its potential for harm. There’s a lot of discussion today about ethics in AI. I think it’s because we’re starting to see some of the ramifications of the systems we’re putting out into the real world.
While Hollywood movies and science fiction novels show AI as human-like robots that take over the world, the current evolution of AI technologies isn’t that scary – or quite that smart. Instead, AI has evolved to provide many specific benefits in every industry.
That’s when we need to start thinking about the term Responsible AI, or RAI.
What is RAI?
We need to think of RAI as a set of socio-technical problems.
Even though the math is designed to do what you tell it to do – to do its best under the given circumstances – problems remain. Even if the math is doing what it’s supposed to, it’s operating over data, and that data can be limited; how you present that information to the user can cause problems and failures.
AI systems can see things that people can’t. And vice versa.
What do we need?
We need experts – people who understand human-computer interaction, user research methodologies, and AI systems – to think deeply about new methodologies that enable rapid prototyping and iteration, as well as methodologies for evaluating, testing, and building AI systems.
Some companies are already publishing what they call AI Principles, committing not to design or deploy AI that does not comply with them.
Google Principles
1 – Be socially beneficial.
2 – Avoid creating or reinforcing unfair bias.
3 – Be built and tested for safety.
4 – Be accountable to people.
5 – Incorporate privacy design principles.
6 – Be made available for uses that accord with these principles.
You can read more here.
Microsoft Principles
In 2016, Microsoft established the Aether Committee, which serves in an advisory role to the company’s senior leadership on emerging questions, challenges, and opportunities in the development and fielding of AI technologies.
Examples of the efforts of this committee and its working groups include its deliberation and input on Microsoft’s decisions around sensitive uses of AI, such as applications of facial recognition; its work on developing tools for detecting and addressing bias; recommended guidelines for human-AI interaction; and policies and methods for making AI recommendations more understandable.
So their principles involve:
1 – Fairness – AI systems should treat all people fairly.
2 – Inclusiveness – AI systems should empower everyone and engage people.
3 – Reliability and Safety – AI systems should perform reliably and safely.
4 – Transparency – AI systems should be understandable.
5 – Privacy and Security – AI systems should be secure and respect privacy.
6 – Accountability – AI systems should have algorithmic accountability.
They also provide guidelines for responsible bots. You can read more about them all here.
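To make the bias-detection idea above a bit more concrete: one common fairness check is to compare a model’s positive-decision (selection) rates across demographic groups, a metric often called the demographic parity difference. Here is a minimal, self-contained Python sketch – the data, group labels, and threshold of “fairness” below are purely illustrative, not any company’s actual tooling:

```python
from collections import defaultdict

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest selection rates across groups.

    predictions: iterable of 0/1 model decisions.
    groups: iterable of group labels, aligned with predictions.
    Returns 0.0 for perfect parity; larger values mean larger disparity.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Illustrative example: a model approves 3 of 4 applicants from group A
# but only 1 of 4 from group B.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(preds, groups)
print(f"Selection-rate gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

In practice, teams typically rely on open-source libraries such as Fairlearn, which implement this and many other fairness metrics and mitigation techniques, rather than hand-rolling the computation.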
In conclusion
The benefits and consequences of AI are still unfolding.
AI, like any other technology, can have virtuous effects as well as much less desirable consequences. AI as a research field cannot be blamed for the latter. The specific historical, social, and economic context of a deployment can make an AI machine “a Dr Jekyll or a Mr Hyde”. To build responsible AI, never forget: always keep humans in the loop.