Does AI pose an existential threat to humans?
Brian Evergreen
Author of Autonomous Transformation (Wiley), Senior Advisor, Researcher, and Keynote Speaker
Hello, and welcome to Future Solving, my newsletter for leaders and managers navigating what it means to lead in the era of AI.
Today, we're going to talk about one of the most heated topics around AI today: whether AI poses an existential threat to humanity.
This conversation has grown considerably more complex in the past year with the launches of DALL·E 2 and ChatGPT, which recaptured global attention for the field of AI. In the ensuing digital fog, five groups of leaders have formed around particular lines of belief and action for how we should handle AI.
Group 1 - AI Leaders Who Consider AI to be an Existential Risk
Until recently, fears about AI taking over the world were relegated to those outside the technology community, but now leaders in AI such as Geoffrey Hinton, Sam Altman, and Yoshua Bengio have voiced concerns about AI, going so far as to call it an existential risk for humanity on the level of pandemics and nuclear war.
Group 2 - AI Leaders Who do not Consider AI to be an Existential Risk
Meanwhile, other leaders in the field of artificial intelligence, such as Andrew Ng, Yann LeCun, and Fei-Fei Li, urge that we focus our attention on the risk of humans using AI for harm, intentionally or unintentionally, rather than on the science-fiction narrative of runaway intelligence.
Group 3 - AI "Researchers" who Have not Worked in AI
A third group call themselves AI Researchers but have never worked in the field of AI; they tend to have a background in philosophy and/or physics. Their "research" begins with the leap of faith that machines can achieve sentience, and from there they document the various risks and possibilities and prescribe remedies to mitigate the danger a sentient superintelligence would present. Examples in this group are Nick Bostrom (Philosopher and Physicist), Eliezer Yudkowsky (no formal education), and Max Tegmark (Physicist and Cosmologist).
Group 4 - Leaders from Other Fields Weighing in
Leaders from fields both adjacent to and distant from AI have spoken up about the topic of artificial intelligence, including Stephen Hawking, Elon Musk, and figures ranging from politics to other branches of science.
Group 5 - Leaders Quietly Making Decisions and Investments
As tensions rise between these groups, the majority of leaders across fields have chosen to remain publicly silent, whether for fear of backlash, to stay focused on harnessing the potential of these technologies, or for other reasons. While contributing to the broader public debate is not necessary, employees are looking to their leaders for their perspectives on what these technologies will mean for the future (more on that later).
Three Logical Fallacies
Amidst all of this public debate and discussion, three logical fallacies have been a steady undercurrent over the past year, and once you see the pattern, it's hard to unsee it when you read articles or engage in discussions on AI.
Why This is Important
The question of whether AI presents an existential threat to humanity is important because it relates to the deeper question: what will this mean for humanity, the future of work, and for me?
The answer to that question is something that we bear the responsibility of collectively defining as leaders, and it will begin with a shared understanding of the risks and opportunities presented by this technology.
My Take
The biggest risk we face with regard to AI is misuse by humans. If you are implementing AI within your organization for chatbots, quality control, and the like, you do not need to worry about those forms of AI accidentally becoming superintelligent. Instead, you should consider what those systems have access to and how they could be used for harm if a human with bad intentions gained access to them.
As for the larger AI players, I agree with one of Max Tegmark's recommendations (which traces back to Isaac Asimov's Three Laws of Robotics), even though we diverge on the source of the risk. Larger-scale AI systems should be developed such that any question or action that would in any way harm a human, or a system on which humans depend (such as infrastructure), is as impossible for that system as breaking the laws of physics. This would mean longer development cycles, but I think it's worth it to limit the potential for humans to harness AI against other humans.
*
I’d love to hear your thoughts on this complex and hotly debated topic, as well as whether it's openly discussed at your workplace.
Thanks for reading,
Brian
Brian Evergreen is the author of Autonomous Transformation (a Next Big Idea Club "Must-Read" shortlisted for the Thinkers50 2023 Breakthrough Idea Award), an executive advisor, Senior Fellow at The Conference Board, and the host of the Future Solving Podcast.