Altman's "Oppenheimer" Moment
John Pratt
Technology visionary, customer experience, project and product lead, published author
Speaking at the World Governments Summit in Dubai, Sam Altman is conspicuous not just because he heads OpenAI, developer of ChatGPT, but because he advocates putting some form of oversight and regulation in place before the opportunity is lost and humanity is irretrievably damaged.
NOBODY denies that the potential of AI to wreak havoc is real, or that it is an existential threat, and yet there is no real hurry to regulate AI directly, or to shape policy that might have an impact on AI users, or on AI itself.
Speaking to The Associated Press, Altman said: “There’s some things in there that are easy to imagine where things really go wrong. And I’m not that interested in the killer robots walking on the street direction of things going wrong. I’m much more interested in the very subtle societal misalignments where we just have these systems out in society and through no particular ill intention, things just go horribly wrong.”
As someone evaluating applications for AI, it weighs on me that the consequences of seemingly small, inconsequential decisions could be truly exponential. There is no doubting the real power of AI to leverage knowledge for good, valid outcomes. I think AI has huge potential in law and medicine, for example. I’m genuinely excited by the power of AI to enable better human-to-human interactions, and to make business feel personal and personalised again. AI makes it possible to systematically deliver more empathetic, salient and even personal service interactions.
Conversely, though, there is no end to the trouble that could be caused by an application deploying the same data sets for malevolent purposes.
We have an obligation to think that through, to safeguard humanity, the sanctity of life and personal privacy. We have to create rules that set boundaries for applications and guard against “exceptional” behaviour from machines.
We need to ensure that AI has the faculty for critical analysis that people increasingly lack. The fact is, left on its own, AI is likely to develop some of the worst characteristics of the human race. There have already been examples of AI applications that end up running amok, talking smack and exhibiting some of the same prejudices and biases that we work hard to eliminate. AI has the potential to characterise us, and our behaviour, in ways that we humans are not allowed to do, or at least in ways that we are not allowed to act upon.
The most worrying thing, I believe, is the way that AI can turn celebrity into a commodity. Take aitrashtalk.com: you can have the celebrity of your choice say pretty much anything, filtered for adult content or not. Or can you? AI is producing some credible images, and even making original art. Some of the material I’ve seen appears indistinguishable from real, live content.
There just have to be some rules. Altman is calling for an international body to regulate AI, similar in concept to the International Atomic Energy Agency. “We’re still in the stage of a lot of discussion. I think we’re still at a time where debate is needed and healthy, but at some point in the next few years, I think we have to move towards an action plan with real buy-in around the world.” That is perhaps an excellent mechanism for state actors, but the power of AI is being democratised. AI capability now comes free with appliances. It’s going viral.
Will Max Headroom be too old to run for President in 2028? Hell no, he’s AI — he’s upgraded, ageless, and his memory is flawless.