The possibilities of ChatGPT are limitless. So are the risks.
Aonghus McGovern, PhD.
Using data and analytics to help keep HubSpot and its customers safe.
The potential benefits of Generative AI are obvious. Its harms are harder to imagine, which is one reason they’re so dangerous.
An article in the Oxford Mail quotes UK Secretary of State for Science Michelle Donelan discussing the possibilities of ChatGPT for British society: ‘I think these types of technology are going to create a whole new section of jobs and in areas that we haven’t even thought of, and where this leads us is limitless. We need to tap into that’. Donelan goes on to state that regulation and safeguards will be necessary to mitigate possible harms. The phrase ‘these types of technology’ likely refers to Generative AI, a collection of technologies that consume data and produce new data with similar characteristics. ChatGPT is a text-based example of Generative AI, while DALL-E 2 is an image-based one.
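To make ‘consume data and produce new data with similar characteristics’ concrete, here is a toy sketch: a word-level Markov chain that learns which words follow which in a sample text, then generates new text by sampling from those patterns. This is not how ChatGPT works (it relies on a vastly larger transformer model), but it illustrates the same underlying principle at a small scale; the corpus and function names below are purely illustrative.

```python
import random
from collections import defaultdict

def train(corpus: str) -> dict:
    """Map each word to the list of words observed to follow it."""
    words = corpus.split()
    transitions = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions: dict, start: str, length: int = 10) -> str:
    """Walk the chain, sampling each next word from its observed followers."""
    word, output = start, [start]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:  # dead end: no observed continuation
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny training corpus; real systems consume billions of documents.
corpus = "the cat sat on the mat the dog sat on the rug"
model = train(corpus)
print(generate(model, "the"))  # e.g. "the dog sat on the mat ..."
```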
It’s all well and good to say we need appropriate regulation and safeguards, but how do we create them for a technology whose risks aren’t fully understood even by its own creators? In its Risks and Limitations document for the preview of DALL-E 2, OpenAI warn of a host of risks they identified when testing the technology. These include “particularly biased or sexualized results in response to prompts requesting images of women” and “images that tend to overrepresent people who are White-passing and Western concepts generally”. The document is extensive, indicating that a substantial amount of testing was performed before the preview was released. And yet it acknowledges that the list is not exhaustive, stating that limited access is of utmost importance ‘as we learn more about the risk surface’.
To get a sense of the harm Generative AI could cause, we can look at the harm already being caused by less advanced AI. Take the case of Chicago man Robert McDaniel. The Conversation reports that one day McDaniel opened his door to find police officers and a social worker telling him that an AI had determined he would be involved in a shooting at some point in his life. The AI couldn’t tell whether he would be the perpetrator or the victim, only that he would be involved. Because of this prediction, McDaniel was subject to regular mandatory visits from the police and social workers. This led to rumours spreading that he was a police informant. He was ultimately shot twice: once in 2017 and again in 2020. Whether the algorithm’s output was a correct prediction or a proximate cause of this outcome is anyone’s guess.
Stories like McDaniel’s are not uncommon. They make it clear that we still don’t fully understand how to apply AI responsibly. Generative AI is substantially more complicated than the predictive models behind cases like his, and its failure modes will be correspondingly harder to anticipate. What will be the Generative AI equivalent of Robert McDaniel’s case?