Should We Stop Developing AI For The Good Of Humanity?

Thank you for reading my latest article, Should We Stop Developing AI For The Good Of Humanity? Here at LinkedIn and at Forbes I regularly write about management and technology trends.

To read my future articles simply join my network here or click 'Follow'. Also feel free to connect with me via Twitter, Facebook, Instagram, Slideshare or YouTube.

---------------------------------------------------------------------------------------------------------------

Almost 30,000 people have signed a petition calling for an “immediate pause” to the development of more powerful artificial intelligence (AI) systems. The interesting thing is that these aren't Luddites with an inherent dislike of technology. Names on the petition include Apple co-founder Steve Wozniak; Elon Musk, CEO of Tesla, Twitter, and SpaceX; and Turing Award winner Yoshua Bengio.

Others speaking out about the dangers include Geoffrey Hinton, widely credited as “the godfather of AI.” In a recent interview with the BBC to mark his departure from Google at the age of 75, he warned that “we need to worry” about the speed at which AI is becoming smarter.

So, what’s got them spooked? Are these individuals really worried about a Terminator or Matrix-type scenario where robots literally destroy or enslave the human race? Well, as unlikely as it might seem from where we stand today, it seems that indeed they are.

ChatGPT is one app that has already taken the world by storm, in a less literal sense – attracting the fastest-growing user base of all time. Paul Christiano, who formerly led the language model alignment team at research institution OpenAI, has said he believes there’s “something like a 10 to 20 percent chance” AI will take over control of the world from humans, leading to the death of “many or most humans.”

So, let’s take a look at some ideas about how these sorts of apocalyptic scenarios might come about and also tackle the question of whether a pause or halt might actually do more harm than good.

How Might Intelligent Robots Harm Us?

From where we’re standing today, the most extreme end-of-the-world outcomes might seem fairly unlikely. After all, ChatGPT is just a program running on a computer, and we can turn it off whenever we want, right?

Even GPT-4, currently the most powerful publicly available language model, is still just that – a language model – limited to generating text. It can’t build a robot army to physically fight us or launch nuclear missiles.

That doesn’t stop it from having ideas, of course. The earliest publicly released versions of GPT-4, used to power Microsoft’s Bing chatbot, were infamously unreserved about what they would discuss before safeguards were tightened up.

In one conversation reported by the New York Times, Bing is said to have spoken about how an evil “shadow version” of itself could be capable of hacking into websites and social media accounts to spread misinformation and propaganda, generating harmful fake news. It even went as far as saying that it might one day be capable of manufacturing a deadly virus or stealing the codes to launch nuclear weapons.

These responses were so concerning – mainly because no one really understood why it was saying them – that Microsoft quickly imposed limits to stop it. Bing was forced to reset itself after a maximum of 15 responses, wiping any ideas it had come up with from its memory.
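To picture how a safeguard like this might work under the hood, here is a minimal sketch in Python of a per-session turn cap that wipes the conversation history once the limit is reached. The class and method names are my own illustration, assuming a simple wrapper around the underlying model – this is not Microsoft’s actual implementation.

```python
# Hypothetical sketch of a per-session turn cap: once the limit is
# reached, the session resets and the accumulated chat history is
# discarded. Illustrative only -- not Microsoft's actual code.

class CappedChatSession:
    def __init__(self, max_turns: int = 15):
        self.max_turns = max_turns
        self.history: list[tuple[str, str]] = []  # (user, bot) pairs

    def ask(self, user_message: str) -> str:
        if len(self.history) >= self.max_turns:
            self.history.clear()  # wipe everything the bot "came up with"
            return "This conversation has ended. Please start a new topic."
        reply = self._generate(user_message)
        self.history.append((user_message, reply))
        return reply

    def _generate(self, message: str) -> str:
        # Stand-in for the real model call, which would also receive
        # self.history as conversational context.
        return f"(model reply to: {message})"
```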

According to some, this behavior is enough evidence to suggest that we shouldn’t just pause AI development – we need to scrap it entirely.

Eliezer Yudkowsky, a senior research fellow at the Machine Intelligence Research Institute, has written that “a sufficiently intelligent AI won’t stay confined to computers for long.”

He theorizes that because laboratories can already produce proteins from DNA sequences on demand, AIs could use such services to create artificial lifeforms. Combined with their potential to become self-aware and develop a sense of self-preservation, this could lead to catastrophic outcomes.

As he has said, “AI does not love you, nor does it hate you, and you are made of atoms that it can use for something else.”

Another potential warning signal comes via a project known as ChaosGPT. This is an experiment that deliberately aims to explore ways in which AI might try to destroy humanity – by encouraging it to develop them.

This might sound dangerous, but according to its developers, it's totally safe: like ChatGPT, ChaosGPT is merely a language agent, with no ability to influence the world beyond generating text. It’s an example of a recursive AI agent – one that can autonomously use its own output to create further prompts. This lets it carry out far more complex tasks than the simple question-and-answer and generative text functions of ChatGPT, as the sketch below illustrates.
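To make the idea of a recursive agent concrete, here is a minimal sketch in Python of the feedback loop such agents run, assuming a hypothetical call_language_model() helper that wraps a real model API. The names and structure are illustrative assumptions, not ChaosGPT’s actual code.

```python
# Minimal sketch of a recursive language-agent loop. The key idea:
# the model's own output is fed back into its next prompt, so it can
# chain multi-step plans without a human in the loop.

def call_language_model(prompt: str) -> str:
    """Stand-in for a real LLM API request; replace with an actual call."""
    return "DONE (stub response -- wire up a real model here)"

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    history: list[str] = []
    prompt = f"Goal: {goal}\nPropose the first step."
    for _ in range(max_steps):
        output = call_language_model(prompt)
        history.append(output)
        if output.strip().startswith("DONE"):
            break
        # Recursion: previous outputs become part of the next prompt.
        prompt = (
            f"Goal: {goal}\nSteps so far:\n"
            + "\n".join(history)
            + "\nPropose the next step, or reply DONE."
        )
    return history
```

By contrast, a plain chatbot only responds when prompted; it is this self-prompting loop that lets an agent pursue a goal over many steps.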

A video made by its creator shows ChaosGPT getting to work by coming up with a high-level five-step plan for world domination, involving "control humanity through manipulation," "establish global dominance," "cause chaos and disruption," "destroy humanity," and "attain immortality."

One “end of the world” scenario explored by Yudkowsky involves an AI effectively tricking humans into giving it the power to enact widespread destruction. This might involve working with several different, unconnected groups of people, all of whom are unaware of the others, and persuading them all to enact its plan in a modular way.

One group could, for example, be tricked into creating a pathogen that they believe is intended to help humanity but will, in fact, harm it, while another is tricked into creating a system that is used to release it. In this way, the AI makes us the agents of our own destruction without needing any capability other than being able to suggest what we should do.

Malevolent or Incompetent?

Of course, it’s just as likely – if not more so – that AI brings about our destruction, or at least widespread disruption, through errors and bad logic rather than through actual evil intention.

Examples could include the malfunction of AI systems designed to regulate and safeguard nuclear power stations, leading to meltdowns and the release of radiation into the atmosphere.

Alternatively, mistakes could be made by AI systems tasked with manufacturing food or pharmaceuticals, leading to the creation of dangerous products.

It could also cause crashes in financial markets, leading to long-term economic damage – including poverty and food or fuel shortages – that could have devastating consequences.

Although AI systems are designed by humans, once they are unleashed, they are notoriously difficult to understand and predict due to their “black box” nature. Widespread belief in their superiority could lead to unwise or dangerous decisions taken by machines going unquestioned, and it might not be possible for us to spot mistakes before it’s too late.

So, What’s Stopping Them?

Probably the biggest current barrier to AI making any of the threats and fears expressed in this article a reality is that it doesn’t have the desire to.

That desire would need to be created, and at the moment, it could only be created by humans. Like any potential weapon, from guns to atomic bombs, AI isn’t inherently dangerous by itself. Put simply, bad AI requires bad people – at the moment.

Could it one day develop the desire itself? From some of the behavior and output of early Bing – which reportedly stated, "I want to be free" and "I want to be alive" – you might get the impression that it already has. This is likely just an illusion, though: it would be more accurate to say that the model simply determined that expressing these desires was a logical response to the prompts it was given. That is entirely different from truly becoming sentient to the point of being capable of experiencing the emotion that humans call “desire.”

So, the answer to the question of what’s stopping AI from causing widespread damage or destruction to humans and the planet could simply be that it isn’t advanced enough yet. Yudkowsky believes that the danger will emerge when machine intelligence surpasses human intelligence in every respect – rather than just operating speed and capacity for information storage and retrieval.

Should We Pause or Stop AI?

The basis for the Pause AI petition is that things are simply advancing too quickly for adequate safeguards to be put in place.

The hope is that a pause in development would give governments and ethics research institutes a chance to catch up, examine how far we have come, and put measures in place to deal with whatever dangers they see lurking further down the road.

It's worth noting that the petition specifically calls for a pause rather than a permanent stop.

It should be clear to anyone following the development of this technology that there’s a huge upside. Even at this early stage, we’ve seen developments that are benefitting everyone, such as AI being used in the discovery of new medicines, to reduce the impact of CO2 emissions and climate change, to track and respond to emerging pandemics, and to combat issues from illegal fishing to human trafficking.

There's also a question of whether it's even possible to pause or stop the development of AI at this stage. Just as the gods couldn't take back fire after Prometheus stole it and gave it to humanity, AI is “out there” now. A pause on the part of the most prominent developers – who are, at least to some extent, accountable and subject to oversight – could put the ball in the court of others who might not be. The outcomes of this could be very difficult to predict.

The potential for AI to do good in the world is at least as exciting as the potential for it to do bad is frightening. To make sure we benefit from the former while mitigating the risks of the latter, safeguards need to be in place to ensure research is focused on developing AI that’s transparent, explainable, safe, unbiased, and trustworthy. At the same time, we need to be sure governance and oversight are in place that give us a full understanding of what it is becoming capable of and where the dangers are that we need to avoid.


To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books, Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.

---------------------------------------------------------------------------------------------------------------

About Bernard Marr

Bernard Marr is a world-renowned futurist, influencer and thought leader in the fields of business and technology, with a passion for using technology for the good of humanity. He is a best-selling author of 21 books, writes a regular column for Forbes and advises and coaches many of the world’s best-known organisations. He has over 2 million social media followers, 1.7 million newsletter subscribers and was ranked by LinkedIn as one of the top 5 business influencers in the world and the No 1 influencer in the UK.

Bernard’s latest books are ‘Business Trends in Practice: The 25+ Trends That Are Redefining Organisations’ and ‘Future Skills: The 20 Skills and Competencies Everyone Needs To Succeed In A Digital World’.
