We face a set of threats that put all of humanity at risk: the climate crisis, pandemics, nuclear weapons, and ungoverned AI. The ongoing harms and existential risk presented by these issues can't be tackled with short-term fixes. But with bold leadership and decisive action from world leaders, our best days can still lie ahead of us. That's why, with The Elders Foundation, we're calling on decision-makers to demonstrate the responsible governance and cooperation required to confront these shared global challenges. This #LongviewLeadership means:
- Thinking beyond short-term political cycles to deliver solutions for current and future generations.
- Recognising that enduring answers require compromise and collaboration for the good of the whole world.
- Showing compassion for all people, designing sustainable policies which respect that everyone is born free and equal in dignity and rights.
- Upholding the international rule of law and accepting that durable agreements require transparency and accountability.
- Committing to a vision of hope in humanity’s shared future, not playing to its divided past.
World leaders have come together before to address catastrophic risks. We can do it again. Share and sign our open letter: https://rb.gy/0duze1
Future of Life Institute (FLI)
Civic and Social Organizations
Campbell, California · 17,144 followers
Independent global non-profit working to steer transformative technologies to benefit humanity.
About us
The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies, as well as steer the development and use of these technologies to benefit life. The Institute's work primarily consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and the United Nations, but also includes running conferences and contests. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
- Website: https://futureoflife.org
- Industry: Civic and Social Organizations
- Company size: 11-50 employees
- Headquarters: Campbell, California
- Type: Nonprofit
- Specialties: artificial intelligence, biotechnology, European Union, nuclear, climate change, technology policy, and grantmaking
Locations
Future of Life Institute (FLI) employees
- David Nicholson
  Director, Future of Life Award @ Future of Life Institute | Harvard University ALM
- Andrea Berman
  Philanthropy - Partnerships - Program Development - Strategy
- Mark Brakel
  Director of Policy at Future of Life Institute (FLI)
- Risto Uuk
  Head of EU Policy and Research @ Future of Life Institute | PhD Researcher @ KU Leuven | Systemic risks from general-purpose AI
Posts
-
Announcing: our new Digital Media Accelerator! Big Tech is racing to build more and more powerful AI, despite experts urging caution - and while public understanding and awareness remain limited. This Accelerator program aims to support content creators who want to bring accessible, engaging content about AI risk and safety to new audiences - from podcasts, to newsletters, TikTok channels, YouTube series, and beyond. Learn more at the link in the comments below:
-
"Without proper safeguards, however, powerful AI could induce severe and, in some cases, potentially irreversible harms." The draft report on AI frontier models, requested by California Gov. Gavin Newsom, and co-led by Jennifer Chayes, Fei-Fei Li, and Mariano-Florentino (Tino) Cuéllar, is out now - including, among other proposals, calls for increased transparency, whistleblower protections, and industry accountability. Read it in full in the comments:
-
??? FLI's AI & National Security Lead Hamza Chaudhry spoke to Fortune about OpenAI's new approach to AI safety and alignment, discussed in their recent blog post - which has been criticized for "subtly [tipping] the company in the direction of releasing AI models unless there is incontrovertible evidence that they present an immediate danger". Hamza described OpenAI’s approach as "reckless experimenting on the public" - a practice that would be unacceptable in any other industry. He also argued that this reflects a broader effort by OpenAI to minimize real government oversight over high-stakes AI systems. ?? Read the full article linked in the comments below:
-
??? "The most disorienting thing about today’s A.I. industry is that the people closest to the technology — the employees and executives of the leading A.I. labs — tend to be the most worried about how fast it’s improving." ??? "I don’t worry about individuals overpreparing for A.G.I., either. A bigger risk, I think, is that most people won’t realize that powerful A.I. is here until it’s staring them in the face — eliminating their job, ensnaring them in a scam, harming them or someone they love. This is, roughly, what happened during the social media era, when we failed to recognize the risks of tools like Facebook and Twitter until they were too big and entrenched to change. That’s why I believe in taking the possibility of A.G.I. seriously now, even if we don’t know exactly when it will arrive or precisely what form it will take." Kevin Roose explains in The New York Times why he's taking A.I. progress more and more seriously. ?? Full article linked in the comments below:
-
We're sharing our recommendations for President Trump's AI Action Plan, focused on protecting U.S. interests in the era of rapidly advancing AI. An overview of the measures we recommend:
- Protect the presidency from loss of control by mandating "off-switches", establishing a targeted moratorium on developing uncontrollable AI systems, and enforcing strong antitrust measures.
- Ensure AI systems are free from ideological agendas, and ban models with superhuman persuasive abilities.
- Protect American workers and critical infrastructure from AI-related threats by tracking labor displacement and placing export controls on advanced AI models.
- Foster transparent development through an AI industry whistleblower program and mandatory security incident reporting.
Read our proposal in full below:
-
Siliconversations on YouTube made a video about Anthony Aguirre's new "Keep the Future Human" essay: "We are on the cusp of creating artificial general intelligence (AGI), even though the corporations building this technology admit they don't know how to control it." Watch their video on "Keep the Future Human" below, and read the essay in full at the link in the comments:
-
New on the FLI Podcast! FLI Executive Director Anthony Aguirre joins to discuss his new essay, "Keep the Future Human", which warns that unchecked development of smarter-than-human, autonomous, general-purpose AI could lead to human replacement - but it doesn't have to. Tune in now at the link in the comments to hear how Anthony proposes we change course to secure a safe future with AI, and more:
-
??? "[AGI] is better called autonomous general intelligence than artificial general intelligence. What does that mean? The autonomy, and the generality, and the intelligence together, that's what makes humans unique. That's what gives us the ability to be the stewards of the earth. If you have a machine that has those three things, that is what makes us replaceable." ??? "Almost all of the things that we've built in history as technologies have been tools. They've been things we've designed to extend our capability, to empower us to do the things we want to do. The real difference - and this is fundamental to understand - is that AGI, and after it, superintelligence, is not a tool. It is a competitor. It is something that is more like a different species." ?? Catch Anthony Aguirre's full View From the Top segment with Nicholas Johnston and Axios at SXSW: