Are we scared of AI Yet?

In 2018 three people – Dr. Geoffrey Hinton, Yann LeCun, and Yoshua Bengio – shared the Turing Award for their contributions to the development of machine learning and AI.

Since then, one of these “Godfathers of AI,” Dr. Hinton, has left his job at Google to become one of the most vocal critics of AI. While many are concerned with the implications of AI for everything from cybersecurity to job losses, Dr. Hinton’s concerns are grander: he sees it as an existential threat to humanity.

His co-Godfathers disagree, and the ongoing philosophical debate between them has been fascinating to watch play out.

This debate is nothing new, of course. Even before ChatGPT launched and took the world by storm, there were concerns about the looming risks of AI. Back in 2014, while Dr. Hinton, LeCun and Bengio were still developing the technology that would earn them the accolades, Dr. Stephen Hawking said “the development of full artificial intelligence could spell the end of the human race.”

So now that we’re seeing the rapid rate at which AI is disrupting how we work and live, the question needs to be asked: do these geniuses have a point, and should we be scared yet?

Skynet Is Fiction

The moment you start talking about AI as an existential threat, the first thing that comes to mind is the AI network at the centre of the Terminator films. And indeed, though those films are fiction, there is a kernel of truth in their warning. Much like many accuse AI developers of doing now, Skynet came into being because its researchers were too focused on the innovation of what they were doing to consider its implications.

Will AI become self-aware, take over global nuclear arsenals and declare war on humanity? That is, of course, not going to happen. However, perhaps it doesn’t need to be that dramatic, and perhaps we as a society should be interpreting these existential warnings slightly differently.

What Does AI Mean To Humanity?

The reason there is such an appetite for AI is that the benefits it delivers are real and tangible. Thanks to AI, it’s possible to work more safely and efficiently. AI is allowing us to communicate better by offering real-time translations into every language on the planet. It’s helping doctors diagnose cancers right at the start, when they’re most treatable, and it’s assisting them to provide services to regional areas that would otherwise be unable to access such expertise.

Across just about every major problem that humanity faces – be it climate change, resource scarcity, or conflict and security – AI will be critical to our efforts.

It’s hard to see those applications of AI as anything but a positive development.

But there’s another side to AI: it is also being used to write books, compose music, generate art and even replace humans in performances. Earlier this year, both the screenwriters and actors of Hollywood went on strike, out of a very real concern that producers were looking to AI as a way of cutting them out of the process. This is where questions arise about just how benign AI really is.

The arts, broad as they are, cut to the core of what makes us human – creativity is the “soul” of our collective species. It is through the arts that we conceptualise and share our great ideas, preserve records of ourselves as individuals and societies, and inspire and fill our lives with joy.

AI can have a role in that – just as Photoshop and, before that, the camera were tools that artists used to enhance their art, so too can AI be a tool that helps a creative person realise their vision.

But we’re also seeing a section of the community start to rely on AI not just as a tool to assist them, but to do both the creative and the mechanical work for them. People are using AI to generate the images, write the text and create the music, and then, with only minimal edits beyond that, publishing the result.

Insofar as AI is a threat to humanity, the threat lies there: in ceding control of our collective soul and allowing AI to dominate our creativity. In an ideal world, humanity will use AI in such a way that we can all work less and dedicate more time to bringing more human creativity into the world.

What we’re at risk of doing, and where Dr. Hawking’s and Dr. Hinton’s warnings come in, is this: AI might force us to work harder (to keep our jobs and maintain our relevance in the face of unlimited productivity), and in doing so we might start relying on AI to do the things we no longer have time for, like thinking and creating.

If that happens, the world will become a soulless husk, little better than what Skynet did to it in Terminator.
