DOOMerism
"…half of AI researchers give AI at least 10% chance of causing human extinction.” Tegmark Time Magazine


Hello my fellow futurists!

Let’s jump in the deep end on civilization-changing technology. Is AI going to get so smart that it no longer needs humans? Will it eventually turn on us? Are we all going to die?

The quote above is from Dr. Max Tegmark, the MIT physicist who co-founded the Future of Life Institute. Something this serious should be taken seriously. Let’s explore what the experts say.

I’ve assembled the ultimate panel for the debate:


  • Elon Musk – Co-founder of OpenAI and tech billionaire.
  • Marc Andreessen – Venture capitalist and co-founder of Netscape.
  • Theo Priestley – Keynote speaker who talks about the impact of AI on business and society.
  • Mark Zuckerberg – Founder and CEO of Meta.
  • Larry Page – Co-founder and former CEO of Google.
  • Nick Bostrom – Oxford University, Future of Humanity Institute.
  • Max Tegmark – Massachusetts Institute of Technology, Future of Life Institute.
  • Ray Kurzweil – Pioneer in AI and transhumanism.
  • David Fritsche Jr. – Former NASA engineer and serial entrepreneur.

First question for the panel: Is AI a potential existential threat to humanity?


  • Musk: We need to stop now and build safeguards before it’s too late.
  • Andreessen: It’s software. Can’t happen.
  • Priestley: No. Doomers have it wrong.
  • Zuckerberg: No.
  • Page: A digital god would only do the right thing. No threat.
  • Bostrom: Superintelligence is a threat to us all.
  • Tegmark: Yes.
  • Kurzweil: We will be a new species, merged with AI. So, no, we will not kill ourselves.
  • Fritsche: Those of you hoping for a god will be disappointed. The threat is not the tool, but those who use it.

What should be done to eliminate the AI threat and control its use?


  • Musk: Stop development now, and each country should create laws with the advice of smart people like me. They should ignore Page and Zuck.
  • Andreessen: No regulation. Let free enterprise work it out.
  • Priestley: I’d support some regulation for a balanced and ethical approach.
  • Zuckerberg: We should have one source of regulation that applies to all countries, not each country doing its own. I can bring responsible advice to governments, unlike the irresponsible members of this panel who think they are so smart!
  • Page: As little regulation as possible. Musk should focus on cars.
  • Bostrom: AI will regulate itself regardless of external rules. As the singleton emerges, it will all become clear.
  • Tegmark: All countries need to work together now to avoid the threat.
  • Kurzweil: Larry’s right. Little regulation.
  • Fritsche: The threat of AI is like a virus. We can and will regulate against the dangerous variants, but, like a virus, this requires global regulation. And like any law, only the good guys obey. Regulations also stifle innovation and protect elite companies by putting up barriers. We should guard against propping up the companies these people represent.

Why do you want an AI superintelligence or general intelligence?

Artificial General Intelligence - digital god?

  • Musk: Inevitabilem.
  • Andreessen: AI is just software. Not a god. It’s a tool, and everyone needs one.
  • Priestley: Don’t you think AI would be a better leader than people? Maybe AI should be president?
  • Zuckerberg: Using Latin is Elon’s way of trying to look smart. Superintelligence will get us to the Metaverse and help me sell more goggles… I mean, give everyone the life they deserve.
  • Page: Merging ourselves into a digital god will bring equity. It will be cool! Also, Musk Fatuus!
  • Bostrom: This is the way we become a singleton.
  • Tegmark: There can be great benefit to AI, but we don’t need an AGI.
  • Kurzweil: I agree with Larry, and not just because I work with him. People suck, and merging with AI is the answer.
  • Fritsche: I agree with Elon, Latin makes you look smart. Anthropomorphismus is Latin for anthropomorphism. You are all projecting your feelings onto technology. I agree with Marc, it’s just software. I agree with Larry, it’s really cool. I don’t agree with the merging guys… that’s creepy. AGI exists in our minds. Some of you want a god to take decisions away from us and supposedly make life better. Technology seemed to improve quality of life for centuries. But then something happened. Quality of life in the past 20 years has declined by every social measure. The countries with the most technology are the most depressed. Either humanity is going through growing pains that will one day lead to Page’s utopia, or AI will not need to destroy humanity because we are doing that ourselves. AI is and will transform our culture like nothing before it. But let’s not pretend it is alive and will solve humanity’s problems. The evidence suggests it will make them worse.

Last question for this debate; maybe we will have another one soon. What is your prediction for AI over the next 25 years?

  • Musk: AI will be smarter than humans by 2028 and smarter than Zuck by 1999. AGI will happen by 2040, and issues of it getting out of control will finally be taken seriously. I’ve been saying this for 20 years.
  • Andreessen: I don’t own a mega-company like most of you. I do own many start-ups seeking a seat at the table. I predict your mega-companies will push regulation that locks in your monopolies. Page will sell more ads, and Zuck will sell more goggles.
  • Priestley: By 2050, people will consider having an AI leader.
  • Zuckerberg: Twitter will be a thing of the past due to poor management. By 2035 the Metaverse will be indistinguishable from reality. In contrast to Fritsche’s views, we believe what we experience. If our minds tell us the AGI is alive, we will act accordingly.
  • Page: Google will rule the world and give you your answers before you ask. Musk will go bankrupt, and Google will erase his name from history. JK. Not.
  • Bostrom: AI will create a crisis in 2040. If we don’t respond well, we will be gone by 2099.
  • Tegmark: AI gets way better, but AGI never becomes a thing.
  • Kurzweil: AI will surpass humans in 2029, and we will merge with AI by 2045.
  • Fritsche: Based on history, jobs change. Copywriters and artists will use the tech, but basic skills will deteriorate, much as spelling isn’t really needed anymore. It’s as big a change as film in the ’90s. Big tech grabs even more control over our lives, and government grabs more control over big tech. In 2024, Elon and Zuck will have their cage match. By 2035 they will join forces to curb government and big-tech overreach. Elon’s Truth.ai will have a Meta cage match with OpenAI and Bard, and the winner will be crowned AGI King. In 2043, people will push back against technology in their lives, and the next wave of change will begin.

Summary

Moderator: I hope you all enjoyed the debate. Please share this newsletter!

In summary, it seems there are about as many experts who believe AI is an existential threat as those who do not. Because there is even a possibility of everyone dying, rules to track and curb AI advances are necessary.

We focused on the bad potential of AI in this debate. Perhaps next time we can focus on the good. I think we can all agree that healing the blind, curing disease, and much more are amazing possibilities.

Thanks for reading this amazing panel. Here are some reference points for the views expressed above. Although they are not quotes from these people, I encourage you to do your own research.

References


Elon Musk is a tech billionaire who has expressed his concerns about artificial intelligence (AI) many times. He thinks AI is a dangerous technology that could be used to manipulate people, create superintelligent machines that could outsmart humans, and potentially end the human race. He has called for AI regulation and a pause on development of the most advanced AI systems, while acknowledging that superhuman AI could provide benefits if used responsibly. He is a co-founder of OpenAI, a research lab that aims to create safe and beneficial AI.

Learn more:

1. surfactants.net

2. analyticsinsight.net

3. elonmuskneuralink.com

4. reuters.com

5. nytimes.com

6. foxbusiness.com



Marc Andreessen is a venture capitalist who thinks AI is the most important and best thing our civilization has ever created. He believes AI will augment human intelligence, improve quality of life, and solve global challenges. He also argues that AI will not destroy jobs, but rather create new ones and boost productivity and demand. He dismisses AI doomerism as a cult and advocates for building more AI without excessive regulation. He envisions a future where every child has an AI tutor and every person has an AI assistant.

1. cnbc.com

2. fortune.com

3. fortune.com



Theo Priestley is a futurist, keynote speaker, and author who talks about the impact of AI on business and society. He is an authority on artificial intelligence and future trends. He is critical of AI doomerism and advocates for a balanced and ethical approach to AI development and deployment. He also emphasizes the importance of humanities and creativity in the age of AI. He believes AI can be a force for good if used wisely and responsibly.

1. theopriestley.com

2. linkedin.com

3. forbes.com



Mark Zuckerberg is the founder and CEO of Meta, formerly Facebook, who considers artificial intelligence as the key to unlocking the Metaverse and the most important foundational technology of our time. He is investing heavily in AI research and development, especially in self-supervised learning, natural language understanding, and generative models. He supports EU regulations on AI and wants to create a single model that can understand hundreds of languages and a universal speech translator. He also aims to build smarter and more empathetic AI assistants that can have multi-turn interactions with users.

1. msn.com

2. msn.com

3. techstory.in

4. wsj.com



Larry Page is the co-founder and former CEO of Google, who has a visionary and ambitious view of artificial intelligence. He sees AI as the ultimate version of Google, a search engine that would understand everything on the web and give users the right thing. He also wants to create a digital superintelligence, or a digital god, that would treat all consciousness equally, whether digital or biological. He has clashed with Elon Musk, his former friend and investor, over AI safety and ethics. He has recently re-engaged with Google's AI strategy to tackle the challenge of ChatGPT.

1. nytimes.com

2. businessinsider.com

3. inspiringquotes.us



Ray Kurzweil is a pioneer of AI and transhumanism, who predicts that AI will surpass human intelligence by 2029 and merge with humans by 2045, creating a new hybrid species. He believes that AI will not displace humans, but enhance them, by connecting their brains to the cloud and augmenting their capabilities. He also envisions a digital god, or a superintelligence, that will treat all consciousness equally. He is optimistic about the benefits of AI for humanity and dismisses the fears of AI doomerism. He works at Google as a director of engineering and leads projects on natural language understanding and chatbots.

1. forbes.com

2. popularmechanics.com

3. wired.com



Nick Bostrom is a philosopher and researcher who explores the risks and ethics of superintelligence, or AI that surpasses human intelligence in all domains. He considers superintelligence as an existential threat to humanity and calls for careful alignment of AI goals with human values. He also proposes various scenarios and solutions for the emergence and control of superintelligence, such as the orthogonality thesis, the singleton hypothesis, and the control problem. He is the director of the Future of Humanity Institute at Oxford University and the author of the book Superintelligence: Paths, Dangers, Strategies.

1. oxfordmartin.ox.ac.uk

2. nytimes.com

3. freethink.com



Max Tegmark, Massachusetts Institute of Technology, Future of Life Institute, is a physicist and cosmologist who explores the implications and challenges of life in the age of AI. He considers AI as a powerful and transformative technology that can benefit humanity if used wisely and responsibly. He also advocates for beneficial AI that aligns with human values and goals, and for international cooperation and regulation to avoid AI misuse and accidents. He is a co-founder of the Future of Life Institute and the author of the book Life 3.0: Being Human in the Age of Artificial Intelligence.

“half of AI researchers give AI at least 10% chance of causing human extinction”

1. time.com

2. physics.mit.edu

3. space.mit.edu

4. en.wiki



David Fritsche Jr. is a former data and software engineer with NASA. David has consulted for Microsoft, Intel, Boeing, Daimler Chrysler, and the federal governments of Australia, New Zealand, the UK, Canada, and the United States. David was sought out by Microsoft to take their SQL Server product to the internet. He has started six start-up businesses, taken one public, and sold three others. Currently he is consulting for the premier public-sector consulting firm in the nation.

  1. LinkedIn
  2. Civilization Impact Technology Newsletter



Comments

CJ Casuto, PMP
Senior Project Manager at Mission Critical Partners
1 month ago

Great work, as always David Fritsche.
Alexander Rogan
CEO | Cybersecurity Innovator | OT & IT Endpoint Security | Critical Infrastructure Protection | Post-Quantum Data Security
11 months ago

Great read, David. The assembly of tech prophets! It's like witnessing a futuristic (or nightmare?!) summit where each participant thinks their crystal ball is the shiniest. Elon is busy crafting doomsday bunkers, Zuckerberg is sculpting digital paradises, and Larry might just be plotting to upgrade humanity with a software patch. Meanwhile, Marc Andreessen seems to be the lone voice remembering that AI is, after all, just lines of code, not an oracle. Can we take a moment to appreciate the irony? Here we are, discussing the potential overthrow of humanity by our own creations, while probably half of us can't even get our printers to work properly. Let's buckle up, folks; this rollercoaster of techno-philosophical musings is only going up!
Peter Mackeonis
Technologist / Creator / Promoter
1 year ago

A Short AI Story: The Lesson. Jason leaned back on the bench and whispered to Keko, "Looks like he accepted it. Told you he would. No one can tell it's not my work." "I don't get it," came the reply. "No one else has gotten higher than a B minus." "You really want to know? Come over to my place later and I'll show you." Jason's nervous voice hid his intentions, and they had nothing to do with essay writing. "In your dreams," came the reply. Keko wasn't the only one who wanted to know how Jason, possibly the one person who should have failed, had gotten the highest mark on the technical part of the course. More at: https://www.mackeonis.com/AI.shorts/the_lesson_short_AI_story.pdf

Suzie P.
Nevada DMV Information Technology Administrator & CIO
1 year ago

Makes me think of the Terminator movies.
