Be ye afrAId?

There are lots of reasons for the public to be afraid of AI. We need to help them manage that fear.

Fox News reported on a Pew Research survey which found that 52% of U.S. citizens are more fearful than optimistic about AI. It’s not difficult to see why they might be concerned. Stories abound of harms caused by AI, whether it’s policing applications that discriminate against people of colour, hiring applications that discriminate against women, or grade calculators with high levels of error. Then there are concerns about how AI might eliminate jobs. Add to that the fact that not even the experts can tell us where the technology is headed. Geoffrey Hinton famously predicted in 2016 that AI would replace radiologists. Not only has that not happened, but an article on the University of Alabama site describes a radiologist shortage in the U.S. Fear is a natural response to this level of uncertainty. What can we do to help?

Think about a product that you researched before you bought it. A car, a television, a toaster. Maybe you already knew the features you wanted, or maybe you looked at some comparison websites. Either way, you had the means to compare options and choose the one that suited you. What’s the AI equivalent of that? How empowered is the average consumer to navigate the landscape of AI products? AI’s invisibility complicates the question: most AI applications are designed to be hidden from the user’s view, which makes their characteristics hard to perceive.

What criteria might we use to compare AI systems? Some obvious ones spring to mind: the kind of data that’s collected, and how it’s stored and secured. I can also imagine more complex criteria relating to the AI’s behaviour. As AI applications become capable of more complex reasoning, it may become necessary to specify the kinds of inference we’re comfortable with. The GDPR’s purpose limitation provision aims to address this by restricting the use of data to the purpose for which it was collected, but this will likely not be enough: broad interpretations of ‘purpose’ may not align with specific user preferences. You may generally agree to monitoring of your behaviour on a social media website, yet object to the AI inferring your mental state. Then there’s the question of how involved humans are in AI-enabled decision making. If you submit data to an AI for a decision that affects your life, such as a job or a loan, you should receive assurances that the AI’s reasoning will be checked by a human.
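To make the idea concrete, here is a minimal sketch in Python of what a machine-readable “comparison label” for an AI system might record. Everything in it, from the field names to the AIDisclosureLabel class itself, is hypothetical, invented for illustration rather than taken from any existing standard, but it captures the criteria above: data collected, storage and security, human oversight, and purpose-limited consent with specific exclusions.

from dataclasses import dataclass, field
from enum import Enum


class HumanOversight(Enum):
    """How involved a human is in decisions the AI makes."""
    NONE = "fully automated"
    REVIEW_ON_APPEAL = "human reviews only if the subject appeals"
    REVIEW_ALWAYS = "a human checks every decision"


@dataclass
class InferenceConsent:
    """A purpose-limited consent record: what the user allows the AI to infer."""
    purpose: str              # e.g. "content recommendation"
    allowed: bool             # whether the user consents to this purpose at all
    # Specific inferences the user objects to, even within an
    # otherwise-permitted purpose (e.g. "mental state").
    excluded_inferences: list[str] = field(default_factory=list)


@dataclass
class AIDisclosureLabel:
    """A hypothetical consumer-facing label for comparing AI systems."""
    data_collected: list[str]         # categories of personal data gathered
    storage_location: str             # where the data is held
    security_measures: list[str]      # e.g. encryption at rest, access controls
    oversight: HumanOversight         # human involvement in decisions
    consents: list[InferenceConsent]  # purpose-limited inference permissions

    def permits(self, purpose: str, inference: str) -> bool:
        """Check whether a given inference is allowed for a given purpose."""
        for c in self.consents:
            if c.purpose == purpose:
                return c.allowed and inference not in c.excluded_inferences
        return False  # no recorded consent means no permission


# The social media example from above: consent to behaviour monitoring
# for recommendations, but not to inferences about mental state.
label = AIDisclosureLabel(
    data_collected=["browsing activity", "likes and shares"],
    storage_location="EU data centre",
    security_measures=["encryption at rest", "role-based access"],
    oversight=HumanOversight.REVIEW_ON_APPEAL,
    consents=[
        InferenceConsent(
            purpose="content recommendation",
            allowed=True,
            excluded_inferences=["mental state"],
        )
    ],
)

assert label.permits("content recommendation", "topic interests")
assert not label.permits("content recommendation", "mental state")

The point of the sketch is the shape of the record, not the implementation: a label like this would let a consumer compare AI products on the same criteria they use for a car or a toaster.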

People need practical advice for how to make AI work for them. Until they have that, we can expect plenty more fear.

Stephen Redmond

AI Visionary | Head of Data Analytics and AI at BearingPoint Ireland. Delivering real business value to our clients by harnessing the transformative power of data and AI.

1y

Interesting to consider whether the casual, and widely reported, “AI will replace radiologists” comment had an impact on people looking ahead at the long path to qualification and reconsidering their careers. Seven or so years later, radiologists retire and are not replaced because of lower numbers qualifying. There are potential unknown consequences to all actions, not just AI!
