From AI to Armageddon: Steps to Prevent a Digital Disaster
Hey folks, in this week's BotZilla AI newsletter, I discuss the latest open letter from the Center for AI Safety (CAIS), which states that,
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
It's been signed by a host of AI experts from the tech industry as well as AI professors, including Yi Zeng, a prominent Chinese academic.
There are also some notable missing signatories, though, including Andrew Ng, Yann LeCun, Gary Marcus and Elon Musk.
In the newsletter, I break down the eight AI risks identified by CAIS and spend time discussing how AI might become conscious, or at least simulate consciousness, using "system-2" type prompting, with inspiration drawn from the book "Thinking, Fast and Slow" (Amazon affiliate link) by Daniel Kahneman.
As an aside, just this week my newsletter passed 1,000 weekly readers since I started it in February, and I'd like to thank everyone who has contributed, messaged and encouraged me to keep going!
Thank you!
If you'd like to join them and read this week's newsletter on AI risks, hit the link here.
Have a great weekend!