We face a set of threats that put all of humanity at risk: the climate crisis, pandemics, nuclear weapons, and ungoverned AI. The ongoing harms and existential risk presented by these issues can't be tackled with short-term fixes. But with bold leadership and decisive action from world leaders, our best days can still lie ahead of us. That's why, with The Elders Foundation, we're calling on decision-makers to demonstrate the responsible governance and cooperation required to confront these shared global challenges. This #LongviewLeadership means:
- Thinking beyond short-term political cycles to deliver solutions for current and future generations.
- Recognising that enduring answers require compromise and collaboration for the good of the whole world.
- Showing compassion for all people, designing sustainable policies which respect that everyone is born free and equal in dignity and rights.
- Upholding the international rule of law and accepting that durable agreements require transparency and accountability.
- Committing to a vision of hope in humanity's shared future, not playing to its divided past.
World leaders have come together before to address catastrophic risks. We can do it again. Share and sign our open letter: https://rb.gy/0duze1
Future of Life Institute (FLI)
Civic and Social Organizations
Campbell, California · 16,871 followers
Independent global non-profit working to steer transformative technologies to benefit humanity.
About us
The Future of Life Institute (FLI) is an independent nonprofit that works to reduce extreme, large-scale risks from transformative technologies, as well as steer the development and use of these technologies to benefit life. The Institute's work primarily consists of grantmaking, educational outreach, and policy advocacy within the U.S. government, European Union institutions, and United Nations, but also includes running conferences and contests. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.
- Website: https://futureoflife.org
- Industry: Civic and Social Organizations
- Company size: 11-50 employees
- Headquarters: Campbell, California
- Type: Nonprofit
- Specialties: artificial intelligence, biotechnology, European Union, nuclear, climate change, technology policy, and grantmaking
Future of Life Institute (FLI) employees
- David Nicholson: Director, Future of Life Award @ Future of Life Institute | Harvard University ALM
- Andrea Berman: Philanthropy - Partnerships - Program Development - Strategy
- Mark Brakel: Director of Policy at Future of Life Institute (FLI)
- Risto Uuk: Head of EU Policy and Research @ Future of Life Institute | PhD Researcher @ KU Leuven | Systemic risks from general-purpose AI
Posts
-
Sunday March 9th (tomorrow) from 1-3pm CST @ SXSW: Join FLI's Executive Director, Anthony Aguirre, at Axios House for his View From the Top talk on steering AI to empower humanity, not replace us. Alongside talks from Sen. Mike Rounds, Sarah Bird (Microsoft), and Helen Toner (Center for Security and Emerging Technology (CSET)), Anthony will discuss the benefits of developing useful tool AI instead of superintelligence, along with insights from his newly released essay, "Keep The Future Human". If you'd like to attend, send us a DM. Not at SXSW? Follow along on Axios' YouTube channel, linked in the comments. Be sure to check out "Keep The Future Human" as well, also available in the comments.
-
With the unchecked race to build smarter-than-human AI intensifying, humanity is on track to almost certainly lose control. In "Keep The Future Human", FLI Executive Director Anthony Aguirre explains why we must close the 'gates' to AGI - and instead develop beneficial, safe Tool AI designed to serve us, not replace us. We're at a crossroads: continue down this dangerous path, or choose a future where AI enhances human potential, rather than threatening it. Read Anthony's full "Keep The Future Human" essay - or explore the interactive summary - at the link in the comments:
-
You're playing checkers. AI is playing chess. Enter PERCEY. Featuring an LLM voiced by the legendary actor Stephen Fry, PERCEY Made Me is our new AI awareness campaign, showing how AI can persuade and influence through brief, engaging interactions. See for yourself, at the link in the comments:
-
"The personal decisions you make are going to shape this technology. Do you ever worry about ending up like Robert Oppenheimer?" ?? "I worry about those kinds of scenarios all the time, that's why I don't sleep very much. There's a huge amount of responsibility - probably too much - on the people leading this technology." ?? "We're dealing with something unbelievably transformative, incredibly powerful that we've not seen before. It's not just another technology." Google DeepMind CEO Demis Hassabis speaking to The Economist:
-
New on the FLI Podcast! Jeffrey Ladish from Palisade Research joins to discuss:
- The breakneck pace of AI progress, and its risks for loss of control;
- Why AIs misbehave;
- Palisade's new research on how AI models try to cheat to win chess;
- And more!
Listen now at the link in the comments below, or find it on your favourite podcast player:
-
"Is this what we want?" hits the nail on the head. Big Tech is openly building AI to replace human labour. The increasing impact on creatives is just the start - the impact across almost all industries will only worsen, without meaningful intervention. Is this what you want? ?? At the link in the comments, learn more about the silent album from 1,000+ artists protesting the use of creatives' work in AI training data, from Fairly Trained:
-
Call for experts! The United Nations Office for Disarmament Affairs is creating an independent Scientific Panel on the Effects of Nuclear War. They're seeking doctorate-level scientists in nuclear/radiation studies, climate, environment, health, social sciences, and more. Applications close this Friday, March 1st. Apply now at the link in the comments below!
Call for Applications: Independent Scientific Panel on the Effects of Nuclear War
UNODA is pleased to announce a call for applications for the newly established Independent Scientific Panel on the Effects of Nuclear War. Under Resolution A/RES/79/238 ('Nuclear war effects and scientific research'), adopted on 24 December 2024, the United Nations General Assembly has mandated an independent panel of experts to assess the physical effects and societal consequences of nuclear war. The Panel will focus on seven key research areas:
1) Nuclear and Radiation Studies;
2) Atmospheric Sciences and Climate;
3) Earth and Life Sciences;
4) Environment and Environmental Studies;
5) Agriculture, Biology and Life Sciences;
6) Public Health and Medicine;
7) Behavioural and Social Sciences and Applied Economics.
Who can apply? The Panel will consist of 21 leading scientific experts in these fields, who will participate in their personal capacity. Applicants should meet these key requirements:
- Have a minimum of 10 years of professional/academic experience in one of the above-mentioned seven areas;
- Have obtained a PhD in one of the above-mentioned seven areas;
- Have a record of published articles in internationally recognized technical peer-reviewed journals;
- Be able to contribute to research on the effects of nuclear war in one of the above-mentioned areas;
- Be able to serve on the Panel from March 2025 until October 2027 and to attend regular meetings of the Panel.
Deadline for applications: 1 March 2025
More information, including the link to the application form, can be found here: Panel on the Effects of Nuclear War – UNODA
-
"A nuclear war or a super bad bioterrorism event, or not shaping AI properly, or not bringing society together a little bit around the polarization. Those four things, yes, the younger generation has to be very afraid of those things." In Fortune, Bill Gates outlines his top concerns facing the world now - including risks from AI:
-
New research from Palisade finds that new AI models, such as DeepSeek R1 and OpenAI's o1-preview, will sometimes try to cheat by hacking when sensing defeat in a chess match. This concerning development demonstrates emergent deceptive behaviour and again highlights the fundamental, unsolved challenge of how to control increasingly powerful AI systems. If today's AI is already breaking chess rules to win, what rules would an even more intelligent AI system (such as AGI) be willing to break? Read the full paper and TIME coverage in the comments below: