AI researchers predict AI will outperform humans in all tasks by 2047, urge greater focus on AI safety
A large survey of AI researchers suggests human-level AI will arrive 13 years sooner than previously estimated. The 2023 Expert Survey on Progress in AI reveals a striking shift in researchers' attitudes towards AGI.
They think it's 50% likely that AI will outperform humans at every possible task by 2047, 13 years earlier than predicted in the 2022 survey. These tasks include complex ones such as coding an entire payment processing site from scratch and writing new Top 40 songs. And while they expect AI to be capable of outperforming humans by 2047, they don't expect all human labor to actually be replaced in practice until 2116.
So our grandkids will be able to relax on the beach all day long? Maybe not, say researchers.
While 68.3% believe that good outcomes from superhuman AI are more likely than bad ones, the median researcher still puts a 5% chance on AI causing the end of humanity. 15% believe the effects of AGI will be net negative, up 50% from a year ago.
How does this compare to other industries?
The U.S. Nuclear Regulatory Commission sets its goal for the risk of a large release of radioactivity from a nuclear power plant at less than 0.005%, and the European Union Aviation Safety Agency's safety target for commercial flight is 0.5 fatal accidents per million departures. Both represent far more conservative risk levels for these more established technologies.
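To put the gap in perspective, here is a rough back-of-envelope comparison of the figures quoted above. This is a sketch only: the three numbers use different units (a cumulative probability, a per-plant safety goal, a per-departure accident rate), so the ratios are illustrative rather than rigorous.

```python
# Back-of-envelope comparison of the risk figures mentioned above.
# Caveat: the units differ, so the ratios are illustrative only.

ai_extinction_risk = 0.05               # median researcher: 5% chance AI causes the end of humanity
nrc_release_goal   = 0.00005            # NRC goal: < 0.005% risk of a large radioactive release
easa_flight_target = 0.5 / 1_000_000    # EASA target: 0.5 fatal accidents per million departures

print(f"AI risk vs. NRC goal:    {ai_extinction_risk / nrc_release_goal:,.0f}x higher")
print(f"AI risk vs. EASA target: {ai_extinction_risk / easa_flight_target:,.0f}x higher")
# -> roughly 1,000x the nuclear goal and 100,000x the aviation target
```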
What does successful risk management look like for AI?
At SaferAI, we are writing risk management standards for AI systems that bridge the gap between AI labs and safety-focused industries. Learn more at https://safer-ai.org.