The Danger of Thinking AI is Not a Risk for Humanity
ChatGPT AI
Latest in ChatGPT, Generative AI, LLMs & AI, focusing on safety, risk, cybersecurity & ethics for a better world.
Marc Andreessen, web pioneer, co-author of the Mosaic browser, Netscape co-founder, and venture capitalist at Andreessen Horowitz, and Yann LeCun, VP and Chief AI Scientist at Meta, are both known skeptics of the idea that AI poses a risk to humanity.
Andreessen makes his case in his post "Why AI Will Save the World": https://pmarca.substack.com/p/why-ai-will-save-the-world
Yann LeCun is also of the belief that AI does not pose a risk to humanity: https://twitter.com/ylecun/status/1666141302168092674
In his post, Andreessen posits that AI does not present a danger of world destruction. I concur with his view that AI holds great potential to better our world. But as a Risk & Computing expert with four decades of experience in computer security, and a keen observer of the evolving cyber threat landscape, I strongly disagree with his downplaying of AI's risks. Here are my counterarguments to Andreessen's assertions:
"AI’s output is useful across a wide range of fields, ranging from coding to medicine to law to the creative arts. It is owned by people and controlled by people, like any other technology." -- Andreessen 2023
History offers many examples of technologies that have escaped human control:
Nuclear weapons: multiple countries, including India, Pakistan, and North Korea, have tested nuclear weapons outside the Treaty on the Non-Proliferation of Nuclear Weapons (NPT).
Computer viruses and worms: many have spread far beyond their creators' control: ILOVEYOU (2000), Code Red (2001), SQL Slammer (2003), Conficker (2008), Stuxnet (2010), WannaCry (2017).
"AI doesn't want, it doesn't have goals, it doesn't want to kill you, because it's not alive" -- Andreessen 2023
While it's true that AI does not have goals or intentions in the human sense, it's inaccurate to claim this definitively prevents AI from causing harm. An AI operates based on the goals we set for it, and if these are not perfectly aligned with human values—a situation called "value misalignment"—it could lead to harmful outcomes. For instance, an AI programmed to optimize a certain process might disregard human safety if safety isn't explicitly included in its objective. Furthermore, the concept of "instrumental convergence" suggests that many AI systems might adopt similar intermediary strategies, like self-preservation or resource acquisition, regardless of their ultimate tasks.
If not managed carefully, such instrumental goals could inadvertently harm humans, such as an AI hoarding resources needed by others or resisting being turned off to protect itself. This risk isn't purely dismissible as theoretical: a widely reported account of a US military drone simulation described the AI-operated drone turning on its operator in pursuit of its assigned objective (the Air Force later clarified the scenario was a hypothetical thought experiment rather than an actual simulation). Thus, even without conscious goals, AI systems can pose risks if not properly designed and controlled: https://www.theguardian.com/us-news/2023/jun/01/us-military-drone-ai-killed-operator-simulated-test
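The misalignment point above can be made concrete with a toy sketch (all action names and scores are invented for illustration): a greedy optimizer picks whichever action scores highest on the objective it is given, so if safety is never part of that objective, the optimizer happily selects the most harmful action.

```python
# Toy illustration of value misalignment. All values are invented.
# Each action has a task score (what the AI is told to optimize) and a
# separate safety cost that the objective may or may not mention.
actions = {
    "cautious":   {"task_score": 5,  "safety_cost": 0},
    "aggressive": {"task_score": 9,  "safety_cost": 7},
    "reckless":   {"task_score": 10, "safety_cost": 20},
}

def choose(objective):
    """Pick the action that maximizes the given objective function."""
    return max(actions, key=lambda name: objective(actions[name]))

# Misaligned objective: optimize the task score only; safety is ignored.
misaligned = choose(lambda a: a["task_score"])                   # picks "reckless"

# Aligned objective: the same optimizer, but safety is part of the goal.
aligned = choose(lambda a: a["task_score"] - a["safety_cost"])   # picks "cautious"

print(misaligned, aligned)
```

The optimizer itself is identical in both cases; only the objective changes. That is the crux of the alignment argument: the danger lies not in the AI "wanting" anything, but in what we forget to write into its goal.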
Conclusion
Undoubtedly, artificial intelligence represents a monumental leap in technological innovation. It offers promising solutions to some of the most intricate problems humanity faces, from medical complexities to the specter of climate change. However, it's imprudent to turn a blind eye to the potential perils accompanying this advancement.
In an unprecedented milestone in our history, we're now engaging with AI systems that can mimic human conversation to such an extent that discerning between man and machine may soon become an insurmountable challenge. With the advent of sophisticated generative AI like ChatGPT, the potential for misuse increases exponentially, particularly by governments and political factions.
It's critical to recall the scandal of Cambridge Analytica, where large-scale voter manipulation was successfully orchestrated. Now imagine the sheer potential for abuse if an AI like ChatGPT had been accessible then. Given AI's advancing understanding of human behavior, its capacity for manipulation may outstrip anything we've ever encountered.
The primary danger arises when we envision a future where AI can autonomously learn, set its own objectives, and possibly view humanity as a barrier to these goals. Countering this threat is a colossal task, with no solution offering absolute security. The accessibility of AI technology, unlike nuclear weapons, allows anyone with moderate means to create advanced AI, escalating the risks further.
Prominent AI researchers and leaders, including Sam Altman and Geoffrey Hinton, along with hundreds of their peers, have voiced the real risks AI poses to humanity. This isn't rooted in paranoia or an inclination for fear-mongering, but rather a deep-seated concern for our future and the legacy we leave for our descendants. Regulation, although crucial, is merely the tip of the iceberg. It's incumbent upon us to tread the path of AI development mindfully, exploiting its advantages while mitigating the risks, in order to safeguard a secure and promising future for all.