NEW: Amidst a frenzy around public investment in AI in Europe, our new report takes us back to the drawing board. What kind of (digital) future does Europe want? What role can (and should) AI play? Who should have a say in these decisions? Europe is poised to pour billions in public funds into the AI market. Unless the current trajectory is redirected, its industrial policy will further entrench Big Tech's dominance in the market. But there are alternative pathways forward. The report was curated and compiled by Frederike Kaltheuner, Leevi Saari, Amba Kak, and Sarah Myers West of the AI Now Institute, who also authored the opening chapter. Contributors include some of Europe’s foremost AI policy experts: Francesca Bria, Dr. Cecilia Rikap, Sarah Chander, Zuzanna Warso, Cristina Caffarra, Seda Gürses, Margarida Silva, Jeroen Merk, Kim van Sparrentak, Burcu Kilic, Michelle Thorne, Udbhav Tiwari, Francesco Bonfiglio, Fieke Jansen, and Mark Scott. Read the expert perspectives: https://lnkd.in/eh5cFdHW
About us
The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence.
- Website
- https://ainowinstitute.org/
- Industry
- Research Services
- Company size
- 2-10 employees
- Headquarters
- New York
- Type
- Educational institution
Locations
- Primary: 60 5th Ave, New York, US
Employees at AI Now Institute
-
Alix Dunn
I work with serious troublemakers to facilitate change. Technology and society.
-
Adora Svitak
Writer & PhD candidate in Sociology & Women's, Gender, and Sexuality Studies
-
Frederike Kaltheuner
AI, Data, and Tech Policy Expert | Strategy Development | Leadership | shaping emerging technology for the public interest
-
Ellen Schwartz
Operations Director, AI Now Institute
Updates
-
Appreciated participating in this important conversation. What we said: as policymakers consider investing in AI R&D projects, they should introduce specific conditionalities that reckon with mounting evidence of the harmful impacts of large-scale AI systems, which disproportionately affect marginalized groups, including the production of discriminatory results and the long-term climate impact that computing at this scale imposes on communities in under-resourced areas. More from AI Now Institute on this front very soon!
From the #KaporFoundation in Oakland to The White House in DC: Advocating for Equity in #Tech and #AI Policy. On September 30, 2024 the White House Office of Public Engagement, in collaboration with the Kapor Foundation and Federation of American Scientists, hosted a roundtable on #TechEquity and AI with national experts representing over a dozen leading civil rights and community organizations that center equity in AI and emerging tech policy. The roundtable provided an opportunity for dialogue on the Administration’s current policy on technology and AI, and ways to continue to build the next generation of tech talent, boost innovation through equitable tech entrepreneurship and investment, and implement appropriate regulation and accountability mechanisms. The Kapor Foundation is thrilled to continue working alongside partners to advance #EquitableTechPolicy!
-
“We must reject public investment in AI projects that line the pockets of large corporations at the expense of New Yorkers’ privacy, autonomy, and jobs.” Our Associate Director, Kate Brennan, warns about corporate capture of NYC's AI infrastructure. Read: https://lnkd.in/eaxhrzkV
AI Now Associate Director Kate Brennan Testifies at the New York City Council Committee on Technology Hearing on the MyCity Portal
https://ainowinstitute.org
-
Addressing #TIME100AI, Amba Kak asserts that nothing about the trajectory of AI is inevitable: "We need to broaden the horizon for what counts as innovation beyond the stale imagination of Silicon Valley so we can collectively pursue a vision for AI that serves us all." Watch the full speech here: https://lnkd.in/euWAMYGJ
AI Leaders Talk Shaping the Future of AI
time.com
-
What kinds of investments in Europe’s digital infrastructure would best serve the public interest? Register below to join this event at the European Parliament to kickstart this urgent conversation, co-curated by Meredith Whittaker, Cristina Caffarra, Francesca Bria & others: https://lnkd.in/eyrqr-bV Stay tuned for our upcoming report on AI and industrial policy, focused on Europe, in October: https://lnkd.in/eb4mYKfq Frederike Kaltheuner Leevi Saari Amba Kak Sarah Myers West
Toward European Digital Independence, Brussels, 24 September, 14.30-18.30
digitalindependenceeu.wordpress.com
-
Thrilled to announce that after graduating law school earlier this year, I am joining the AI Now Institute as their new Associate Director. I'll be working on policy and research to shape AI in the public interest. If you're thinking about concentration in the AI industry, privacy and surveillance issues, labor implications of AI, AI and industrial policy, or anything in between: I'd love to catch up!
-
We are excited to announce that AI Now's Co-Executive Director, Amba Kak, is included on this year’s TIME 100 AI list, which features leaders, policymakers, artists, and entrepreneurs who are advancing major conversations about how AI is reshaping the world. We’re grateful to our collaborators as we advocate for technology that reflects the public interest, not just the bottom lines of Big Tech. https://lnkd.in/e-nMeNiC
TIME100 AI 2024: Amba Kak
time.com
-
In a new Bloomberg episode, Sarah Myers West breaks down what’s at stake with market concentration in AI, the regulatory state of play, and why we urgently need an alternative public interest vision for tech. Listen here: https://lnkd.in/e2KgjnWP
-
Today, AI Now Institute released a new report, “Lessons from the FDA for AI," which identifies key insights for AI regulation across the development cycle, drawing on lessons from pharmaceutical regulation. For example, in pharmaceuticals the strongest point of regulatory leverage is before a product goes to market, when firms have the greatest incentive to comply. By contrast, AI enforcement currently requires identifying harms after they have occurred and seeking to remediate them. The report also refocuses attention on the need for firms to validate the efficacy of their products rather than just mitigate the risks: evaluating safety and efficacy in tandem surfaces the tradeoffs involved in deciding whether a product's gains deliver enough social benefit to justify its harms. These and other insights are the product of a rapid expert deliberation we conducted last fall. This spring, the Supreme Court released several decisions that mark an uncertain future for regulatory agencies’ ability to do their job, and for the viability of many of the legislative proposals in waiting: we’re thus also publishing a spotlight piece from Dr. Reshma Ramachandran discussing the implications of Chevron and Loper Bright for regulatory enforcement. We’re grateful to those who have participated in these conversations, to our excellent advisory council Christopher Morten, Amy Kapczynski, Frank Pasquale, Heidy Khlaaf, PhD, MBCS, and Hannah Bloch-Wehba, and to the contributors to this report, especially Anna Lenhart, Matt Davies, and Raktima Roy. Read more below: https://lnkd.in/ebBksG4e
Lessons from the FDA for AI
https://ainowinstitute.org
-
Will Europe succeed in building truly public digital infrastructure for AI, or will its investments entrench Big Tech’s dominance? These and other questions will guide AI Now’s work shaping the EU’s industrial policy for AI. Read more here: https://lnkd.in/eb4mYKfq
Public Interest AI for Europe? Shaping Europe’s Nascent Industrial Policy
https://ainowinstitute.org