The Armilla Review #102
TOP STORY
NIST Removes 'AI Safety' and 'Fairness' from Research Guidelines Amid Trump Administration Shift
In a significant pivot, the National Institute of Standards and Technology (NIST) has revised guidelines for AI researchers, removing references to "AI safety," "responsible AI," and "fairness," and now prioritizing the reduction of "ideological bias" to enhance economic competitiveness and "human flourishing." This shift aligns closely with Trump administration priorities and coincides with Elon Musk's critique of OpenAI and Google's AI models for perceived ideological biases. Researchers are expressing concerns that deprioritizing fairness and safety could expose everyday users, particularly minority and economically disadvantaged groups, to discriminatory and unsafe AI outcomes. Observers warn of potential long-term harm, including unchecked misinformation and systemic bias in AI applications.
THE HEADLINES
OpenAI Pushes U.S. to Ease Restrictions on AI Training with Copyrighted Material
OpenAI is asking for an exception at the training phase, but the outcome will also shape the law at the generation stage (which is where most companies using AI operate). If you make material use of Cursor (which I suspect is a lot of us), you have copyrighted or derivative material in your code. The sales script you ask AI to write is likely a derivative work of someone's sales course or book. If you're using AI to draft legal contracts or terms of service, the generated document is likely built upon agreements written by law firms. I can see this sparking a new wave of copyright trolls. For example, if you are an AI real estate brokerage and use AI to generate market updates for clients, you don't want to get sued or have to pay a royalty to every real estate company/MLS/blogger/etc. because the model that generated the report was trained on articles those entities previously wrote.
My opinion: Sam Altman may be right. The prevailing argument will be that the financial compensation owed to content creators is not worth falling behind in the AI race or slowing innovation at the application level. I also think we can't look at IP through its traditional lens anymore. Any mechanism that charges for the use of copyrighted material is just going to invite abuse and distract or kill businesses. There is a reason massive legislative changes were made to stop patent trolls. Could this be the end of IP as we know it? Traditional IP rights as we know them (outside of maybe hard tech) aren't as relevant as they used to be.
If the US government goes along with this plan, could it prompt companies to sue the government instead of the massive AI companies? The Trump administration may be ready for that. This may end up being argued before the Supreme Court for years to come.
New EU Draft of General-Purpose AI Code Emphasizes Transparency and Systemic Risk Mitigation
The General-Purpose AI Code of Practice has entered its final drafting stage with the release of its third iteration, featuring clearer and more streamlined commitments tailored for industry compliance. The refined draft notably focuses on transparency obligations for all general-purpose AI models and introduces specific safety measures only for those classified as posing systemic risks. Accompanying the draft is an interactive website designed to facilitate stakeholder feedback, guiding the finalization of the Code by May to support the EU's AI Act compliance framework. These developments highlight the EU's efforts to establish robust governance around AI safety and responsible deployment.
Could Insurance Be the Key to Keeping AI Companies Accountable?
At SXSW, Harvard Law Professor Lawrence Lessig introduced an innovative proposal to regulate AI through insurance requirements, suggesting market-driven incentives could encourage responsible AI development. Drawing parallels to existing practices in automotive insurance, Lessig argued that companies would be financially motivated to prioritize safety, fairness, and transparency if their insurance rates depended on the inherent risks of their technologies. Panelists echoed concerns over inadequate oversight and transparency in AI, advocating for a regulatory approach akin to the rigorous evaluation processes found in other industries, like pharmaceuticals. Lessig emphasized that smart regulation doesn't stifle innovation but ensures safer and more trustworthy AI systems.
AI Risks Dominate Discussions at PLUS D&O Symposium
At the recent Professional Liability Underwriting Society’s D&O symposium, experts highlighted AI's evolving risk profile, particularly focusing on concerns like privacy violations, bias, and IP issues. The term "AI washing," describing companies misrepresenting their AI capabilities to investors, emerged as a significant concern. Panelists underscored the necessity for human oversight and proactive governance to mitigate the unique risks AI introduces, with insurance firms called upon to deepen their understanding and evaluation methods. Experts also urged corporate boards to become proactive in managing AI risks, noting that fewer than 14% of boards currently engage regularly with AI governance.
Moscow-Based Propaganda Network Successfully Infiltrates Western AI Systems
The Moscow-based disinformation network "Pravda" has successfully embedded pro-Kremlin propaganda into the outputs of major Western generative AI tools, according to a recent NewsGuard audit. By strategically publishing false narratives, Pravda has manipulated search engines and web crawlers to ensure that pro-Kremlin misinformation is fed into AI training data, influencing roughly 33% of outputs from top generative AI tools. The audit confirms concerns raised earlier by American fugitive John Mark Dougan, who openly discussed this strategy as a novel form of global information warfare. Experts warn this infiltration could seriously undermine the reliability of AI-generated information worldwide.
China Issues Measures for Identifying AI-Generated Synthetic Content
The Chinese government has formally issued new measures to regulate synthetic content created by artificial intelligence, aiming to protect user rights and public interests. These measures mandate clear labeling of AI-generated content, both explicitly and implicitly, including text, images, audio, and videos. Internet service providers must adhere to strict identification guidelines, embed digital watermarks, and maintain comprehensive metadata to prevent misuse or deception. China's regulatory move signifies a decisive step toward standardizing AI content, potentially setting global benchmarks for transparency and responsibility in AI-generated media.
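To make the dual labeling requirement concrete, here is a minimal sketch of how a provider might attach both an explicit (user-visible) label and an implicit (machine-readable) metadata record to generated text. The field names, label wording, and the `label_ai_text` helper are illustrative assumptions, not the format the Chinese measures actually mandate.

```python
import json
import hashlib
from datetime import datetime, timezone

# Illustrative label text only; the measures prescribe their own wording.
EXPLICIT_LABEL = "AI-generated content"

def label_ai_text(text: str, provider: str, model: str) -> dict:
    """Attach an explicit, user-visible label and implicit metadata
    to a piece of AI-generated text (hypothetical scheme)."""
    # Explicit label: a visible notice appended to the content itself.
    labeled_text = f"{text}\n\n[{EXPLICIT_LABEL}]"

    # Implicit label: machine-readable metadata kept alongside the content,
    # with a content hash so downstream tampering can be detected.
    metadata = {
        "generated_by_ai": True,
        "provider": provider,
        "model": model,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(text.encode("utf-8")).hexdigest(),
    }
    return {"content": labeled_text, "metadata": metadata}

if __name__ == "__main__":
    result = label_ai_text("Example market summary...", "ExampleCo", "example-model-1")
    print(result["content"])
    print(json.dumps(result["metadata"], indent=2))
```

In practice the implicit label would more likely be an embedded watermark or file-format metadata rather than a sidecar JSON record, but the split between a visible notice and a machine-readable trail is the core of the requirement.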
Toward a Robust Science of Evaluating Generative AI
The integration of generative AI into critical sectors like medicine, law, and education has underscored the need for a more robust and systematic evaluation framework, moving beyond limited benchmarks to real-world applicability and iterative refinement. Industry experts argue for the development of an "evaluation science" for generative AI, akin to safety frameworks used in aerospace or pharmaceuticals, to reliably measure and improve the technology's performance and safety. Advocates stress the importance of iterative evaluation methods, sustained institutional investment, and adaptive regulatory frameworks that anticipate and mitigate real-world risks proactively. Without such comprehensive frameworks, AI’s potential benefits remain overshadowed by uncertainty and public distrust.
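As a rough illustration of what "iterative evaluation" means in practice, here is a minimal Python sketch of a loop that scores a model against a small case set and reports a pass rate. The `EvalCase` structure, the substring-based rubric, and the pass-rate metric are all simplifying assumptions for illustration, not an established evaluation-science standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    must_contain: str  # a crude stand-in for a real grading rubric

def evaluate(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the fraction of cases whose output satisfies the rubric."""
    passed = sum(
        1 for c in cases
        if c.must_contain.lower() in model(c.prompt).lower()
    )
    return passed / len(cases)

if __name__ == "__main__":
    cases = [
        EvalCase("What is 2 + 2?", "4"),
        EvalCase("Name the capital of France.", "paris"),
    ]
    # Stand-in model for demonstration; a real harness would call an API.
    fake_model = lambda p: "4" if "2 + 2" in p else "Paris"
    print(f"pass rate: {evaluate(fake_model, cases):.0%}")
    # Iteration step: inspect failures, refine the case set or the model,
    # and re-run until results are stable across revisions.
```

Real evaluation frameworks of the kind the article calls for would add statistical confidence measures, adversarial cases, and domain-expert grading on top of a loop like this.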
The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia. It's free to subscribe.