Does AI pose an existential threat to humans?
"Spooky Datacenter" - DALL·E 2 (Prompted by Brian Evergreen)

Hello, and welcome to Future Solving, my newsletter for leaders and managers navigating what it means to lead in the era of AI.

Today, we're going to talk about one of the most heated topics surrounding AI: whether AI poses an existential threat to humanity.

This conversation has grown considerably more complex in the past year with the launches of DALL·E 2 and ChatGPT, which recaptured global attention for the field of AI. In the ensuing digital fog, five groups of leaders have formed around particular lines of belief and action regarding how we should handle AI.

Group 1 - AI Leaders Who Consider AI to be an Existential Risk

Until recently, fears about AI taking over the world were largely relegated to those outside the technology community. Now, leaders in AI such as Geoffrey Hinton, Sam Altman, and Yoshua Bengio have voiced their concerns, going so far as to call AI an existential risk for humanity on the level of pandemics and nuclear war.

Group 2 - AI Leaders Who Do Not Consider AI to be an Existential Risk

Meanwhile, other leaders in the field of artificial intelligence, such as Andrew Ng, Yann LeCun, and Fei-Fei Li, urge that we focus our attention on the risk of humans using AI for harm, intentionally or unintentionally, rather than on the science-fiction narrative of runaway intelligence.

Group 3 - AI "Researchers" who Have not Worked in AI

A third group call themselves AI researchers but have never worked in the field of AI; they tend to have backgrounds in philosophy and/or physics. Their "research" begins with a leap of faith: the assumption that machines can achieve sentience. From there, they pontificate on and document the various risks and possibilities, and prescribe remedies to mitigate the danger a sentient superintelligence would present. Examples in this group are Nick Bostrom (Philosopher and Physicist), Eliezer Yudkowsky (no formal education), and Max Tegmark (Physicist and Cosmologist).

Group 4 - Leaders from Other Fields Weighing in

Leaders from fields both adjacent to and distant from AI have spoken up about artificial intelligence, including Stephen Hawking, Elon Musk, and figures ranging from politics to other branches of science.

Group 5 - Leaders Quietly Making Decisions and Investments

As tensions rise between these groups, the majority of leaders across fields have chosen to remain publicly silent, whether for fear of backlash, to stay focused on harnessing the potential of these technologies, or for other reasons. While contributing to the broader public debate is not necessary, employees are looking to their leaders for their perspectives on what these technologies will mean for the future (more on that later).

Three Logical Fallacies

Amidst all of this public debate and discussion, three logical fallacies have been a steady undercurrent over the past year, and once you see the pattern, it's hard to unsee in articles and discussions on AI.

  • The Slippery Slope fallacy - an argument in which a party asserts or assumes that a small first step leads to a chain of related events culminating in some significant (usually negative) effect. Example: "If GPT-4 can write such elegant poetry, GPT-5 or 6 will surely be sentient and subsequently take over and either rule or destroy humans."
  • The Non Sequitur fallacy - an argument in which a conclusion does not follow logically from what preceded it. Example: "If a machine (neural network) has more nodes than a human brain has synapses, then the machine is more intelligent than the human."
  • The Appeal to Authority fallacy - the argument that if a credible source believes something, it must be true. Example: "This person has a PhD in Physics and works at or graduated from xyz school, so they must be an expert on AI."

Why This is Important

The question of whether AI presents an existential threat to humanity is important because it relates to a deeper question: what will this mean for humanity, for the future of work, and for me?

The answer to that question is something that we bear the responsibility of collectively defining as leaders, and it will begin with a shared understanding of the risks and opportunities presented by this technology.

My Take

  1. I have yet to see a compelling argument for how a software program written in Python or some other language will become sentient, begin developing its own independent goals, and "escape" the bounds of its program, let alone acquire new skills along the way.
  2. We do not yet fully understand the human brain, nor do we have an equation for "consciousness" or sentience. In my opinion, it is therefore logically flawed to believe that we will accidentally create sentience in machines.
  3. It is important to note the nonlinear nature of these systems. Discussions about the path of AI often refer to its growth as if a single, central system has been gaining "skills" one by one and now possesses a host of them, or as if separate systems share skills like a hive mind (e.g., "Machines are now better than we are at x, y, and z"). ChatGPT can't beat humans at chess the way Deep Blue beat the world champion in 1997. They are completely different systems written in different languages, and Deep Blue is now offline and on display in a museum. Think instead of each of these computer science achievements (AlphaGo, ChatGPT, etc.) as an expedition up Mt. Everest: you can't stack the expeditions and say that people can now climb to the moon. It would take an entirely different system to take us to the moon, just as it would take a fundamentally different kind of system, paired with multiple scientific breakthroughs (particularly in quantum computing), for us to even conceptually approach the creation of Artificial General Intelligence and reach the singularity (the point at which artificial intelligence surpasses human intelligence).
  4. Machines can't code. Machines can't write. Rather, machines apply math quickly to generate output that conforms to the patterns on which they were trained. John Searle's Chinese Room thought experiment explains this phenomenon well. If a person were placed in a room and passed Mandarin characters under the door, which they then referenced against a lookup table on the wall, sliding the corresponding characters back under the door, the person outside would believe they were having a conversation with a Mandarin speaker, and might even infer that the person inside loves them, likes them, or wants to help them based on what comes back under the door. The person in the room, however, is completely ignorant of the topic of the conversation, or of anything beyond the lookup table. A conversation with a chatbot can be a convincing version of this experiment, but the chatbot doesn't understand our language, let alone the concepts we are discussing; it is effectively running back and forth between the words in our prompts and a very complex set of correlations to other words. If we ask it for a cocktail recipe, it will avoid putting ketchup in that recipe not because it has a sense of taste and understands that ketchup would taste bad in a cocktail, but because ketchup isn't in the lookup table (a toy version of this appears in the sketch after this list).
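To make the lookup-table analogy concrete, here is a minimal Python sketch of Searle's room. It is deliberately trivial, and everything in it (the table entries, the room_reply function) is invented for illustration; real chatbots use billions of learned statistical correlations rather than a literal table, but the epistemic point carries over: the program produces plausible replies without any representation of meaning.

```python
# A toy "Chinese Room": replies come from symbol matching,
# not from any understanding of the conversation.
LOOKUP_TABLE = {
    "how are you?": "I'm doing well, thank you for asking!",
    "do you like me?": "Of course! I enjoy our conversations.",
    "give me a cocktail recipe": "Mix 2 oz gin, 1 oz lime juice, and top with soda.",
}

def room_reply(message: str) -> str:
    # The "person in the room" only matches symbols against the table;
    # the meaning of those symbols never enters into the process.
    return LOOKUP_TABLE.get(message.lower().strip(), "Could you rephrase that?")

if __name__ == "__main__":
    print(room_reply("Do you like me?"))  # Sounds warm; understands nothing.
```

Swap the three hand-written entries for correlations learned from the internet and the replies get dramatically better, but nothing about that upgrade adds understanding.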

Takeaway

The biggest risk we face with regard to AI is misuse by humans. If you are implementing AI within your organization for chatbots, quality control, and the like, you do not need to worry about those forms of AI accidentally becoming superintelligent. Instead, consider what those systems have access to and how they could be used for harm if a human with bad intentions were able to access them.

As for the larger AI players, I agree with one of Max Tegmark's recommendations (which traces back to Isaac Asimov's Three Laws of Robotics), even though we diverge on the source of the risk. Larger-scale AI systems should be developed such that any question or action that would in any way harm a human, or a system on which humans depend (such as infrastructure), is as impossible for that system as breaking the laws of physics. This would mean longer development cycles, but I think it's worth it to limit the potential for humans to harness AI against other humans.
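As a rough illustration of what "as impossible as breaking the laws of physics" could look like in software, here is a toy Python sketch of a hard action gate. This is not Tegmark's actual proposal; the target names and the execute_action interface are invented for illustration. The structural idea is that the constraint is enforced outside the model, so no output the model produces can route around it.

```python
# Toy sketch of a hard action gate: every proposed action is checked
# against an immutable denylist of protected targets before execution.
PROTECTED_TARGETS = frozenset({"power_grid", "water_supply", "hospital_network"})

class ForbiddenActionError(Exception):
    """Raised when an action would touch a protected system."""

def execute_action(action: str, target: str) -> str:
    # The check lives outside the model: the model cannot "talk its way
    # around" a prohibition enforced by the surrounding system.
    if target in PROTECTED_TARGETS:
        raise ForbiddenActionError(f"{action!r} on {target!r} is not permitted.")
    return f"Executed {action!r} on {target!r}."

if __name__ == "__main__":
    print(execute_action("read_status", "factory_sensor"))
    try:
        execute_action("shut_down", "power_grid")
    except ForbiddenActionError as err:
        print("Blocked:", err)
```

A real system would require far more than a denylist (verified constraints, tamper-proof deployment, and much else), but the structure is the point: the prohibition lives in a layer the AI cannot modify.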

*

I’d love to hear your thoughts on this complex and hotly debated topic, as well as whether it's openly discussed at your workplace.

Thanks for reading,

Brian


Brian Evergreen is the author of Autonomous Transformation (a Next Big Idea Club "Must-Read" shortlisted for the Thinkers50 2023 Breakthrough Idea Award), an executive advisor, Senior Fellow at The Conference Board, and the host of the Future Solving Podcast.

Brian Evergreen This inspired us to write an article as a response. We really cannot dismiss the probability of #ExistentialThreat - Here is our response: https://www.dhirubhai.net/pulse/response-does-ai-pose-existential-threat-humans-quranai-djwze

Desam Sudhakar Reddy

Author & Creator, Audiovisual Representation of Chemistry, AVC | Global 28COE Awardee | Excellence in Education Awardee 2022 | Innovative Teaching Methods | Edupreneur | Science Popularization | eLearning


AI DRIVEN PROGRESS: https://www.dhirubhai.net/posts/desam-sudhakar-reddy-557a934b_artificial-intelligence-driven-progress-ugcPost-7118957899099361280-Xx7I/ Please refer to article page No. 10. Article: https://www.ijitr.com/index.php/ojs/article/view/2818/pdf Automatic monitoring of IoT devices to track and improve industrial equipment performance and maintenance. Sensor data can reveal production pipeline performance, quality, and bottlenecks. Industrial electronics can use AI to warn of potential issues, reducing downtime and maintenance costs. Home electronics, smart logistics, and supply chain optimization also benefit from AI-driven product design. AI-powered assistants are revolutionizing manufacturing by providing detailed records and insights. They are transforming engineering, supply chain management, production, maintenance, quality assurance, and logistics. AI is also aiding in SDG 9's goals of resilient infrastructure, inclusive industrialization, and innovation, while ensuring ethical use and avoiding inequality.

Braden Hamm

Technology Specialist | Life Long Learner, Builder, Configurator, and Explorer | Curious Fixer, Documentarian, and Troubleshooter | Together, we can do anything XaaS! IAM | OIM | OIC | GCP | Azure


I enjoyed your realist approach and would have loved to see a little bit of fly fishing. Not a lot. Just a little. I too have yet to see something of sentient design but I would say that the needle is definitely moving. We are moving it. The need for cyber security evaluation as well as improvements is at an all-time high. We as the people have to adopt and support it. Turning on Two Factor is a great start. I truly look forward to watching what AI technology continues to do for issues such as human trafficking and securing our autonomy not only digitally but in analog throughout all parts of our human construct. Thank you for your worthwhile contributions. I look forward to seeing more in the future.

Greg Walters

Helping You Make Sense.


...good stuff...
