Building Responsible AI

Building Responsible AI

A recent tweet from Soft Linden illustrated the importance of strong responsible AI, governance and testing frameworks for organizations deploying public-facing machine learning applications.

Following a search for “had a seizure now what”, the tweet showed that Google’s “featured snippet” highlighted actions that a University of Utah healthcare site explicitly advised readers NOT to take.?

The Google search summary vs the actual page
No alt text provided for this image
No alt text provided for this image

Of course, it’s not exactly rare to see unhelpful examples of featured snippets, but this case was particularly egregious due to the injuries that could be experienced by those acting on this harmful advice.

While the accuracy of models is often a primary concern while they are being developed, there remains a lot of work to be done — both in development and in production — to ensure their safe and responsible use.

Some of this work will be aided by tools. Companies like Arthur AI, Credo AI, Fiddler Labs,? Parity, TruEra, and WhyLabs are all young companies in the AI model performance management and governance space. But as Abeba Birhane reminded us in our interview last year, we need to be wary? of “techno-solutionism,” assuming that technology is the solution to every problem.

There are some basic steps all organizations should take to lay the foundation for the responsible use of AI. First and foremost, organizations need to define what responsible AI looks like for them and the products they create. Once that definition is in place, companies need to commit to responsible AI creation and maintenance according to that definition. This will require an investment of time and resources to build knowledge, teams and processes in support of this commitment.

Of course, with any new and complex technology, particularly ones based on opaque probabilistic models, there is always the possibility of unexpected behavior. Rigorous testing and an established framework for responding to problems are essential for ameliorating issues when they do pop up.

In the case of featured snippets at Google, it seems that a willingness to slow down the pace of innovation to ensure responsible use may be at issue. After all, as Deb Raji pointed out in a follow-up tweet, the algorithm Google uses to generate featured snippets? — BERT — has known issues with identifying negation. This suggests that issues like the one seen in this case were foreseeable.

Each organization’s approach to responsible AI will differ in accordance with its business, leadership, products, culture, and many other factors. I’d love to hear how your organization has defined and operationalized responsible AI. Comment below to share your practices and learnings.

Daria Mikhaylenko

Social Media Manager WORLD AI FESTIVAL at CORP - AGENCY

2 年

Have you seen? The event was just postponed. Indeed, WAICF - World AI Cannes Festival will be held from April 14 to 16, 2022 at the Palais des Congrès et des Festivals in Cannes under the same conditions as currently planned.

回复
Daria Mikhaylenko

Social Media Manager WORLD AI FESTIVAL at CORP - AGENCY

2 年

Exactly! Let's talk about it & about other subjects at WAICF - World AI Cannes Festival!

要查看或添加评论,请登录

社区洞察

其他会员也浏览了