DOOMerism
Hello my fellow futurists!
Let’s jump into the deep end on civilization-changing technology. Is AI going to get so smart that it no longer needs humans? Will it eventually turn on us? Are we all going to die?
The quote above is from Dr. Max Tegmark, the MIT physicist who leads the Future of Life Institute. Something this serious should be taken seriously. Let’s explore what the experts say.
I’ve assembled the ultimate panel for the debate:
First question for the panel: Is AI a potential existential threat to humanity?
What should be done to eliminate the AI threat and control its use?
Why would you want to have an AI superintelligence or general intelligence?
Last question for this debate (maybe we will have another debate soon): What is your prediction for AI over the next 25 years?
Summary
Moderator: I hope you all enjoyed the debate. Please share this newsletter!
In summary, it seems there are about the same number of experts who believe AI is an existential threat as those who do not. Because there is even the possibility of everyone dying, it is necessary to have rules that track and curb AI advances.
We focused on the bad potential of AI in this debate. Perhaps in the next one, we can focus on the good. I think we can all agree that restoring sight to the blind, curing disease, and much more are amazing possibilities.
Thanks for reading this amazing panel. Here are some reference points for the views expressed above. Although they are not direct quotes from these people, I encourage you to do your own research.
References
Elon Musk is a tech billionaire who has expressed his concerns about artificial intelligence (AI) many times. He thinks AI is a dangerous technology that could be used to manipulate people, create superintelligent machines that could outsmart humans, and potentially end the human race. He has called for AI regulation and a pause on the development of the most advanced AI systems, while also acknowledging that superhuman AI could provide benefits if used responsibly. He is a co-founder of OpenAI, a research lab that aims to create safe and beneficial AI.
Learn more:
1. reuters.com
2. nytimes.com
Marc Andreessen is a venture capitalist who thinks AI is the most important and best thing our civilization has ever created. He believes AI will augment human intelligence, improve quality of life, and solve global challenges. He also argues that AI will not destroy jobs, but rather create new ones and boost productivity and demand. He dismisses AI doomerism as a cult and advocates for building more AI without excessive regulation. He envisions a future where every child has an AI tutor and every person has an AI assistant.
1. cnbc.com
2. fortune.com
3. fortune.com
Theo Priestley is a futurist, keynote speaker, and author who talks about the impact of AI on business and society. He is an authority on artificial intelligence and future trends. He is critical of AI doomerism and advocates for a balanced and ethical approach to AI development and deployment. He also emphasizes the importance of humanities and creativity in the age of AI. He believes AI can be a force for good if used wisely and responsibly.
1. linkedin.com
2. forbes.com
Mark Zuckerberg is the founder and CEO of Meta, formerly Facebook, who considers artificial intelligence as the key to unlocking the Metaverse and the most important foundational technology of our time. He is investing heavily in AI research and development, especially in self-supervised learning, natural language understanding, and generative models. He supports EU regulations on AI and wants to create a single model that can understand hundreds of languages and a universal speech translator. He also aims to build smarter and more empathetic AI assistants that can have multi-turn interactions with users.
1. msn.com
2. msn.com
3. techstory.in
4. wsj.com
Larry Page is the co-founder and former CEO of Google, who has a visionary and ambitious view of artificial intelligence. He sees AI as the ultimate version of Google, a search engine that would understand everything on the web and give users the right thing. He also wants to create a digital superintelligence, or a digital god, that would treat all consciousness equally, whether digital or biological. He has clashed with Elon Musk, his former friend and investor, over AI safety and ethics. He has recently re-engaged with Google's AI strategy to tackle the challenge of ChatGPT.
1. nytimes.com
Ray Kurzweil is a pioneer of AI and transhumanism, who predicts that AI will surpass human intelligence by 2029 and merge with humans by 2045, creating a new hybrid species. He believes that AI will not displace humans, but enhance them, by connecting their brains to the cloud and augmenting their capabilities. He also envisions a digital god, or a superintelligence, that will treat all consciousness equally. He is optimistic about the benefits of AI for humanity and dismisses the fears of AI doomerism. He works at Google as a director of engineering and leads projects on natural language understanding and chatbots.
1. forbes.com
2. wired.com
Nick Bostrom is a philosopher and researcher who explores the risks and ethics of superintelligence, or AI that surpasses human intelligence in all domains. He considers superintelligence as an existential threat to humanity and calls for careful alignment of AI goals with human values. He also proposes various scenarios and solutions for the emergence and control of superintelligence, such as the orthogonality thesis, the singleton hypothesis, and the control problem. He is the director of the Future of Humanity Institute at Oxford University and the author of the book Superintelligence: Paths, Dangers, Strategies.
1. nytimes.com
Max Tegmark, of the Massachusetts Institute of Technology and the Future of Life Institute, is a physicist and cosmologist who explores the implications and challenges of life in the age of AI. He considers AI a powerful and transformative technology that can benefit humanity if used wisely and responsibly. He advocates for beneficial AI that aligns with human values and goals, and for international cooperation and regulation to avoid AI misuse and accidents. He is a co-founder of the Future of Life Institute and the author of the book Life 3.0: Being Human in the Age of Artificial Intelligence.
“half of AI researchers give AI at least 10% chance of causing human extinction”
1. time.com
2. en.wiki
David Fritsche Jr is a former data and software engineer with NASA. David has consulted for Microsoft, Intel, Boeing, Daimler Chrysler, and the federal governments of Australia, New Zealand, the UK, Canada, and the United States. David was sought out by Microsoft to take their SQL Server product to the internet. He has started six start-up businesses, took one public, and sold three others. Currently, he is consulting for the premier public-sector consulting firm in the nation.