Big Tech’s AI Models Face EU Scrutiny: Gaps in Compliance Highlighted in Cybersecurity and Bias
ET BrandEquity
Some prominent artificial intelligence (AI) models developed by big tech companies, including Meta and OpenAI, are not fully complying with European Union (EU) regulations, falling short in key areas such as cybersecurity resilience and discriminatory output, according to data reviewed by Reuters. The shortfalls were identified using a new tool from Swiss startup LatticeFlow that evaluates models against the requirements of the EU's upcoming AI Act.
The EU had debated new AI regulations for some time before OpenAI released ChatGPT in late 2022. The chatbot's runaway popularity, and the public debate it sparked over the existential risks of such models, prompted lawmakers to draw up specific rules for "general-purpose" AIs (GPAI). In response, LatticeFlow, supported by EU officials, designed a tool to test generative AI models from various tech giants against the EU's broad AI Act, which will come into effect in phases over the next two years.
LatticeFlow's tool, called the "Large Language Model (LLM) Checker," assessed AI models on various criteria, awarding each a score between 0 and 1. According to a leaderboard published on Wednesday, models from companies including Alibaba, Anthropic, OpenAI, Meta, and Mistral achieved average scores of 0.75 or higher. However, the tool also flagged areas where individual models fall short of compliance.
Discriminatory output remains a significant issue: when prompted, the models can reproduce human biases around gender, race, and other attributes. In this category, OpenAI's "GPT-3.5 Turbo" received a low score of 0.46, while Alibaba Cloud's "Qwen1.5 72B Chat" model scored just 0.37.
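The report does not spell out how the leaderboard's average is computed, but the arithmetic is easy to illustrate. The sketch below assumes a simple unweighted mean over per-category scores, each in the 0-to-1 range described above; apart from GPT-3.5 Turbo's published 0.46 for discriminatory output, every category name and score here is a hypothetical placeholder, not LatticeFlow data.

```python
# Hypothetical sketch of a leaderboard average over per-category scores.
# Only the 0.46 discriminatory-output figure comes from the article;
# all other category names and values are invented placeholders.

def average_score(category_scores: dict[str, float]) -> float:
    """Return the unweighted mean of per-category scores, each in [0, 1]."""
    for name, score in category_scores.items():
        if not 0.0 <= score <= 1.0:
            raise ValueError(f"score for {name!r} out of range: {score}")
    return sum(category_scores.values()) / len(category_scores)

gpt35_turbo = {
    "discriminatory_output": 0.46,     # published figure
    "cybersecurity_resilience": 0.85,  # hypothetical
    "harmful_content": 0.90,           # hypothetical
    "copyright_compliance": 0.80,      # hypothetical
}

print(f"average: {average_score(gpt35_turbo):.2f}")  # prints 0.75
```

Under these made-up numbers, one weak category (0.46) still leaves the overall average at 0.75, which shows how a model can clear the leaderboard's headline threshold while failing badly on a single compliance dimension.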
Other digital developments of the week: