Inadvertent harm is just as dangerous as deliberate harm
Aonghus McGovern, PhD.
Using data and analytics to help keep HubSpot and its customers safe.
In last week’s article I talked about the potential risks associated with ChatGPT. I’d like to elaborate on the nature of those risks.
The potential harms of ChatGPT are a well-discussed subject. Typing ‘ChatGPT risks’ into DuckDuckGo yields multiple results, including the threat posed by spam emails created with ChatGPT, the risks posed to children online, and the use of ChatGPT to create malware and malicious code. Deliberate harms are obviously a concern with ChatGPT, as with all AI and indeed many forms of technology. But at least as important is accidental harm.
Deliberate harm is easier to perceive than accidental harm. While it’s hard to imagine the different ways malicious actors may attack, we understand that they will attack. We also generally understand why they attack, whether it’s for money or simply to cause harm. Accidental harm is much trickier to grasp. Part of this is understanding how harm can be caused by applications that were created with the best of intentions. Another part is that understanding accidental harm means probing ourselves. It’s easy to see how a faceless actor can act with malicious intent; it’s not so easy to see how we could do just as much harm unintentionally.
Consider some of the most prominent examples of AI harm. Recruitment tools that discriminate against women. Policing applications that discriminate against people of colour. Grade calculation algorithms with high levels of error. In these cases, and many more besides, the applications were intended to improve lives and ultimately had the opposite effect. This is what makes accidental harm so dangerous: it’s often perpetrated by large organisations (governments, businesses etc.) with broad reach, and it can occur in any area of society.
When I think about the risks associated with AI, I like to think about the automobile industry. Like AI, cars were treated with a great deal of fear and suspicion when they were first created. The role of cars in today’s society is similar to the role the tech industry wants for AI: complete ubiquity. So it makes sense to take lessons in AI safety from automobile safety. Many automobile safety measures (e.g. airbags, seatbelts, and speed bumps) focus on preventing or mitigating accidents. Public information campaigns against drink driving, texting while driving, etc. focus on accidental misuse. Deliberate misuse is a known problem with automobiles, but it’s not the focus.
We need to be as mindful about the harm we cause accidentally as we are about the harm another person causes deliberately.