The A.I. Dilemma
I've been reading a lot lately about AI ethics (because AI is dependent on data, and I've always been a "data nerd"). One of the folks I track is Ravit Dotan, PhD; she shared a post earlier today about Sam Altman (OpenAI CEO) holding two different opinions on AI ethics. The face Altman showed to Congress was positive, recommending regulation and oversight, but when faced with the EU's implementation of exactly that, his response was that he'd rather take his ball and go home than play.
That's important, but not what I really want to discuss right now.
One of the commenters on Dotan's post responded that the push to release new generalized AI language models was turning into a kind of "arms race" and shared a link to a YouTube video titled "The A.I. Dilemma". If that sounds familiar, you might be thinking of the 2020 Netflix documentary "The Social Dilemma"... and you'd be right - the same people are behind both: Tristan Harris and Aza Raskin (founders of the Center for Humane Technology). I would strongly recommend you watch the video, and don't be alarmed that it's an hour long - instead, be alarmed at the content and context they share. If you're not familiar with AI, they provide insight into the business and application of these models, as well as a good high-level overview of how they came to be; if you're already familiar with LLMs, you'll understand their points immediately.
The main point I took away is that there is a great need for ethics, regulation, and a set of standards and practices in the development and release of these models. Early in the presentation, the authors point out that the introduction of a powerful new technology sets off a race, and if the release isn't coordinated, it does not end well. The release of social networks, as shown in The Social Dilemma, was a "race to the bottom of the brain stem" - the networks pushing engagement to grab and keep your attention, regardless of safety or even accuracy. The release of these complex AI models ("Gollems" - Generative Large Language Multi-Modal Models) is a "race to intimacy" - the first tool that fully integrates itself into your life "wins." Think "Her" - the 2013 movie with Joaquin Phoenix and Scarlett Johansson - and you'll start to see what they mean.
The implications of unrestricted deployment are frightening, IMO. At 47:07 into the talk, they ask (rhetorically), "But we would never actively put this in front of our children?" That example resonated with me - my kids are on Snapchat daily - and it's only one of several issues Harris and Raskin raise. The authors aren't claiming to have the answers; they're insisting that these discussions need to happen now, before we find ourselves losing this race to corporations the same way we did with social media. Consider that they gave this talk only two months ago - prior to OpenAI's release of GPT-4, a dramatic leap forward from ChatGPT (GPT-3.5) - and you can see how urgently these discussions are needed.