Inadvertent harm is just as dangerous as deliberate harm

In last week’s article I talked about the potential risks associated with ChatGPT. I’d like to elaborate on the nature of those risks.

The potential harms of ChatGPT are a well-discussed subject. Typing ‘ChatGPT risks’ into DuckDuckGo yields multiple results, including the threat posed by spam emails created with ChatGPT, the risks posed to children online, and the use of ChatGPT to create malware and malicious code. Deliberate harms are obviously a concern with ChatGPT, as with all AI and indeed many forms of technology. But at least as important is accidental harm.

Deliberate harm is easier to perceive than accidental harm. While it’s hard to imagine all the different ways malicious actors may attack, we understand that they will attack. We also generally understand why they attack, whether it’s for money or simply to cause harm. Accidental harm is much trickier to grasp. Part of this is understanding how harm can be caused by applications that were created with the best of intentions. Another part is that understanding accidental harm means examining ourselves. It’s easy to see how a faceless actor could act with malicious intent; it’s not so easy to see how we could do just as much harm unintentionally.

Consider some of the most prominent examples of AI harm: recruitment tools that discriminate against women, policing applications that discriminate against people of colour, grade calculation algorithms with high levels of error. In these cases, and many more besides, the applications were intended to improve lives and ultimately had the opposite effect. This is what makes accidental harm so dangerous: it’s often perpetrated by large organisations (governments, businesses etc.) with broad reach, and it can occur in any area of society.

When I think about the risks associated with AI, I like to think about the automobile industry. Like AI, cars were treated with a great deal of fear and suspicion when they were first created. The role of cars in today’s society is similar to the role the tech industry wants for AI: complete ubiquity. So it makes sense to take lessons in AI safety from automobile safety. Many automobile safety measures (e.g. airbags, seatbelts, and speed bumps) focus on preventing accidents. Public information campaigns against drink driving, texting while driving and the like focus on accidental misuse. Deliberate misuse is a known problem with automobiles, but it’s not the focus.

We need to be as mindful about the harm we cause accidentally as we are about the harm another person causes deliberately.
