OpenAI CEO on the dangers of A.I.
Image credit: New Yorker

Sam Altman shocked the world when OpenAI released the groundbreaking A.I.-powered ChatGPT late last year, but his ambitions go far beyond creating simple chatbots.

The A.I. powering tools like ChatGPT is considered narrow A.I., meaning it is trained to perform one very specific task. Altman, however, has his sights set on Artificial General Intelligence (AGI): an A.I. capable of performing any task that a human can.

In a blog post titled “Planning for AGI and beyond,” Sam Altman gave the world a preview of what he has planned for the future.

“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.”

Yet there are also potential outcomes of A.I. that are far less rosy.

“The bad case — and I think this is important to say — is like lights out for all of us,” said Altman.

Many have expressed similar concerns about the technology, including Elon Musk, who recently said that A.I. has the potential to be more dangerous than nuclear weapons.

To help ensure that we don’t end up with a Terminator-themed future, Altman has laid out three principles guiding OpenAI’s development of AGI:

  1. We want AGI to empower humanity to maximally flourish in the universe. We don’t expect the future to be an unqualified utopia, but we want to maximize the good and minimize the bad, and for AGI to be an amplifier of humanity.
  2. We want the benefits of, access to, and governance of AGI to be widely and fairly shared.
  3. We want to successfully navigate massive risks. In confronting these risks, we acknowledge that what seems right in theory often plays out more strangely than expected in practice. We believe we have to continuously learn and adapt by deploying less powerful versions of the technology in order to minimize “one shot to get it right” scenarios.

Effectively, Altman is saying that while the potential of A.I. is nearly limitless, minimizing its dangers requires rapidly iterating on new versions of the technology so that problems can be addressed as they arise.

Yet as people become aware of the power of A.I., many have grown increasingly concerned about the technology’s rapid evolution, leading to calls for regulation to prevent dangerous missteps.

“Tech companies have not fully prepared for the consequences of this dizzying pace of next-generation AI technology,” NYU professor Gary Marcus said recently.

“The global absence of a comprehensive policy framework to ensure AI alignment — that is, safeguards to ensure an AI’s function doesn’t harm humans — begs for a new approach.”

Given Silicon Valley’s unofficial motto, “move fast and break things,” it’s hard to predict how this will all play out.

This moment is reminiscent of the early days of nuclear weapons testing, when some scientists worried that an atomic blast might ignite the atmosphere, yet proceeded with the tests anyway.

“Successfully transitioning to a world with superintelligence is perhaps the most important—and hopeful, and scary—project in human history,” Altman wrote. “Success is far from guaranteed, and the stakes (boundless downside and boundless upside) will hopefully unite all of us.”

Want to stay up to date with the latest A.I. news? Join the 50,000+ subscribers of our free daily newsletter by signing up at neonpulse.ai

#ai #tech #chatgpt
