
About us

The AI Now Institute produces diagnosis and actionable policy research on artificial intelligence.

Website
https://ainowinstitute.org/
Industry
Research Services
Company size
2–10 employees
Headquarters
New York
Type
Educational institution

Locations

Employees at AI Now Institute

Updates

  • This is a timely and critical moment to discuss how AI is being used to affect care work - to the detriment of patient health outcomes. Registration link below!

    Join us March 5 in DC to discuss how algorithmic management technologies are shaping one of the largest labor sectors in the country — healthcare — and what happens when nurses use AI to bid against each other for shifts. Learn how the surveillance technologies that are uprooting care work are the same tools that threaten consumer prices. Explore another case of Big Tech’s multi-pronged assault on the regulatory state. And (maybe) build consensus about what comes next. Veena Dubal, David Seligman, Mark Graham, Funda Ustek Spilda, and Chenjerai Kumanyika will guide us through the new tech wilderness and how to make sense of findings in the Fairwork US 2025 Report. Register here: https://lnkd.in/eCsiWVye

  • AI Now Institute reposted

    View Frederike Kaltheuner's profile

    AI, geopolitics, global tech policy | Strategic Advisor

    Transatlantic ruptures, AI investment spectacles, and Europe’s governance crossroads—where do we go from here?

    1. AI Investment: Who’s Really Paying? Leevi Saari took a closer look at the reality behind the billions in investments announced last week.

    2. The US-EU Rift Is Widening. Polite debates over multilateralism in the belle époque palaces seem ill-equipped to grapple with the merging of corporate power and state authority we are witnessing across the Atlantic. The European response? Policymakers are openly questioning the US as a reliable ally. Some even call the US an adversary, a sentiment that could reshape tech policy as a whole. Key consequences:
    - Europe’s digital sovereignty is now seen as a policy imperative, yet the region remains structurally dependent on US tech giants for key infrastructure.
    - Enforcing EU digital laws is riskier than ever. The US is signaling potential retaliation. Will Brussels hold its ground?
    - Regulating AI vs. regulating platforms: a double standard? The EU has taken Big Tech to task over platform accountability, but is strangely passive when it comes to AI market concentration.

    3. Deregulation as a Competitiveness Strategy? AI regulation in Europe is now universally framed as a barrier to innovation. Macron and key EU figures are pushing for regulatory “simplification,” while the US administration threatens Europe over its digital rules. We think this debate is a distraction from the real question: it robs Europe of the ability to actively shape AI’s future trajectory in a highly concentrated market. The “regulation yes/no” debate also sidesteps the urgent question of what kind of regulation and enforcement is needed to shape AI in the public interest.

    Read our full analysis here: https://lnkd.in/eu2NsCM5

  • AI Now Institute reposted

    View Amba Kak's profile

    Executive Director, AI Now Institute

    “We warn against an approach that under the banner of security would apply piecemeal or superficial scrutiny that gives these systems a clean chit before they are ready. These issues cannot be easily fixed or patched, and require independent safety critical evaluation which must be insulated from industry partnerships.” - Heidy Khlaaf, PhD, MBCS, AI Now Institute

    I’ll bring these reactions on the UK’s newly branded Security Institute (and what it signifies in relation to intensifying AI arms race dynamics) to the Munich Security Conference today.

    View the profile of Heidy Khlaaf, PhD, MBCS

    Chief AI Scientist, AI Now Institute

    Our AI Now Institute statement on the UK AI Safety Institute’s transition to the UK AI Security Institute:

    AISI’s partnership with the Defence Science and Technology Laboratory, the Ministry of Defence’s science and technology organisation, signals the UK government turning its attention to frontier AI use within the defense and national security apparatus. This comes on the heels of a slew of recent announcements that major AI companies are integrating their frontier AI models into national security use cases.

    As our research has demonstrated, these systems carry significant risks, including threats to national security, given the cyber vulnerabilities inherent to frontier AI models and the possibility that the sensitive data on which they may be trained can be extracted by adversaries (https://lnkd.in/gmdvr4qJ).

    While we welcome AISI’s signals that it may investigate these risks amidst heightened “AI race” dynamics, we warn against an approach that, under the banner of security, would apply piecemeal or superficial scrutiny that gives these systems a clean chit before they are ready. These issues cannot be easily fixed or patched; they require independent safety-critical evaluation that must be insulated from industry partnerships (https://lnkd.in/e66fRhwh).

    If our leaders barrel ahead with their plans to implement frontier AI for defense use, they risk undermining our national security. This is a trade-off that AI’s purported benefits cannot justify.

  • AI Now's Chief AI Scientist Heidy Khlaaf, PhD, MBCS warns that remediating AI security flaws requires independent safety-critical evaluation that must be insulated from industry partnerships. Read more: https://lnkd.in/dQQCjs7g


  • Listen to this clear-eyed, candid conversation about the AI Action Summit, and how to look ahead to building the power necessary to disrupt the terms on which it took place. We were also grateful to co-host the happy hour afterwards and hold space for this community!

    View Alix Dunn's profile

    I work with serious troublemakers to facilitate change. Host of the Computer Says Maybe podcast.

    Want to watch 4 amazing women debrief on the Paris Action Summit? Our first Computer Says Maybe live show, and the AI Action Summit, are a wrap! We cover:
    - what makes civil society optimistic and cynical about the Summit
    - why it's so hard to advocate against concentration of power
    - what we can learn from countries that have embraced digital 'sovereignty', and why tech nationalism leads to more corporate consolidation

    Thanks to Abeba Birhane, Nabiha Syed, Amba Kak, and Astha Kapoor for supporting collective catharsis after days of choreography and superficial conversation at the Summit.

    Also: people talk a lot about community. This week has been a doozy. Days of intense work with colleagues launching something huge at the AI Action Summit. I walked into this first live show exhausted. Rather than being stressed out, I felt totally at home with ~80 friends and had the absolute best time. Thank you to AI Now Institute for co-hosting, Mozilla for sponsoring, and to all our friends who made the trek to be with one another at the end of a mad week. And to friends from near and far who came to support, you made it very special. Julia Keseru, Fanny Hidvégi, Camille François, Marin Bergman, Cora Bauer, Amélie Baudot, Nishant Lalwani, Frederike Kaltheuner, Soizic Pénicaud, Jake Slater, Astha Kapoor, Nabiha Syed, Udbhav Tiwari, Andrew Strait, Farzana Dudhwala, Tariq Khokhar, Emrys Schoemaker, Sandor Lederer, Diana Spehar, Vidushi Marda, Martin Tisné, Raegan MacDonald, Ania Calderon, Brian J. Chen, Charles Johnson, Andrea Dehlendorf, Daniel Stone, Damini Satija, Rebecca Finlay, Ami Fields-Meyer, Anna Tumadóttir. To others: I have for the first time exceeded the maximum mention limit :)

    Missed it? This is a problem. Good news is, you can watch it on YouTube below, and we'll stream the audio on the podcast feed on Friday. https://lnkd.in/ep2sBUfz

    Thanks also to OxygenStream and The Morrison Group for the amazing support on the live stream of the event.

  • AI Now Institute reposted

    View Amba Kak's profile

    Executive Director, AI Now Institute

    A final (!) hot take on the Summit as we look ahead. (As I said yesterday on Alix Dunn's Computer Says Maybe live podcast, let's shift focus to how we build public power to challenge the terms on which it eventually transpired: https://lnkd.in/gdTFDxv6)

    I spoke to BBC News about the mixed messages from the French Summit: a (hard-fought) public interest AI banner against the backdrop of an unabashed scale-driven accelerationist agenda, and governments desperately seeking to prove that they are "open for business" (and ready to deregulate to prove it). Also, our take on the limits of the Deepseek/open-source-as-disruption discourse. Sarah Myers West AI Now Institute
