Armilla Review: Weekly AI Digest #69
TOP STORY
New York State Issues Guidelines for AI and Data Use in Insurance Underwriting
The New York State Department of Financial Services has issued Insurance Circular Letter No. 7, detailing guidelines for the use of artificial intelligence systems (AIS) and external consumer data and information sources (ECDIS) in insurance underwriting and pricing. This final version follows a period of public comment and addresses themes such as definitions, proxy assessments, quantitative assessments, governance, and third-party vendor oversight. The letter emphasizes fairness, transparency, and risk management, requiring insurers to demonstrate that their use of AIS and ECDIS does not result in unfair or unlawful discrimination. Insurers must also ensure appropriate governance, document their processes, and provide clear disclosures to consumers about the use of these technologies.
FEATURED
Providing insurance coverage for artificial intelligence may be a blue ocean opportunity
Great to see Armilla's work on AI/LLM evaluation and warranty solutions featured in a report from Deloitte Insights on the growing market for AI insurance!
As AI technology becomes increasingly integral to business operations, the insurance industry has a unique opportunity to develop safeguards against AI-related risks. With projections indicating that insurers could write up to $4.7 billion in annual AI insurance premiums by 2032, the need for robust risk management is more pressing than ever. Regulatory pressure and the scale of potential losses are likely to drive demand for AI insurance.
THE HEADLINES
European Parliament Pushes for Inclusive AI Rule Making
Members of the European Parliament have called for greater inclusion of civil society and diverse stakeholders in drafting the codes of practice for general-purpose AI models. Concerns arose after the European Commission planned to initially involve only AI model providers, prompting fears of industry self-regulation. The Commission's ambiguous language on stakeholder participation has led to worries about Big Tech's potential dominance in rule-making. Additionally, high-risk AI products such as cybersecurity components are expected to be classified under the AI Act, setting a precedent for other sectors. The upcoming codes of practice are seen as a crucial bridge to formal standards, reflecting the Act's intentions on health, safety, and fundamental rights.
Source: The EU AI Act Newsletter
Supreme Court Ruling Complicates AI Legislation for Congress
The Supreme Court's recent decision to weaken federal regulatory power creates new challenges for Congress in setting rules for artificial intelligence. The ruling, which overturns the Chevron doctrine, requires lawmakers to draft more detailed and specific bills, a demanding task given their limited technical expertise and divided opinions. The shift places significant pressure on Congress to keep pace with rapidly evolving AI technology while balancing the need for flexible yet precise legislation. Doubts are growing about the legislative body's ability to respond swiftly and adequately to technological advancements under these new constraints.
Source: Bloomberg Government
OpenAI Whistleblowers Allege Illegal NDA Practices
Whistleblowers from OpenAI have filed a complaint with the SEC, alleging that the company illegally barred employees from reporting safety risks associated with its AI technology. According to a letter obtained by The Washington Post, OpenAI's restrictive non-disclosure agreements prevented staff from alerting regulators about potential dangers, undermining federal whistleblower protections. The complaint highlights concerns that OpenAI prioritized profit over safety, with employees fearful of retaliation for raising issues. OpenAI, however, stated that their policies protect employees' rights to make protected disclosures and that they have made changes to remove nondisparagement terms.
Source: The Washington Post
Microsoft Steps Down from OpenAI Board Observer Role
Microsoft has relinquished its observer seat on OpenAI's board, citing significant progress and confidence in the company's direction over the past eight months. The decision follows Microsoft's instrumental role in reinstating CEO Sam Altman and restructuring OpenAI's board, which now includes notable figures such as Bret Taylor and Larry Summers. The move signals Microsoft's trust in the reformed board's capability to lead OpenAI independently. OpenAI, meanwhile, is developing new ways to engage strategic partners and investors, ensuring continued collaboration without board-level oversight from Microsoft.
Source: Axios
OpenAI Introduces Scale to Measure Progress Toward AGI
OpenAI has unveiled a new five-level classification system to track its progress toward developing artificial general intelligence (AGI). This system, shared with employees during an all-hands meeting, ranges from current AI capabilities (Level 1) to AI that can manage organizational tasks (Level 5). Currently, OpenAI believes it is at the first level and nearing the second, known as “Reasoners,” which describes AI systems capable of human-like problem-solving without tools. The new classification aims to enhance understanding of AI safety and future advancements, with plans to refine the levels based on feedback from employees, investors, and the board.
Source: Bloomberg
OpenAI's leaked AGI roadmap
PEOPLE & AI
Our latest episode of the People and AI podcast dives into the innovative work happening at Genmo AI with co-founders Paras Jain and Ajay Jain, Ph.D. We covered advancements and the future landscape of AI-driven video generation.
Apple podcasts: https://lnkd.in/ga4t4WuZ
Spotify: https://lnkd.in/gBzmKsDE
AROUND THE OFFICE
The Armilla Review is a weekly digest of important news from the AI industry, the market, government and academia. It's free to subscribe.