Ethical AI Policies and Their Unintended Consequences

The imposition of AI policy across global jurisdictions has disrupted product launches at some of the world's largest corporations. European regulators are curbing the use of AI in circumstances with far-reaching societal implications that demand human oversight. However, regulation has not yet addressed the misuse of personal data by developers of large language models and AI-enabled applications.

Impact of the Digital Markets Act

On October 28, 2024, Apple released its latest mobile operating system, iOS 18.1, featuring the heavily anticipated generative AI capabilities of Apple Intelligence. However, the feature is not yet available in the European Union (EU) due to the Digital Markets Act (DMA), which requires designated gatekeepers to make their operating systems interoperable with apps and services developed by third parties. Rather than comply immediately, Apple deferred the rollout of Apple Intelligence in the EU, citing potential data privacy breaches and device security incidents that could affect 450 million prospective users. However, consumers in the EU have access to a limited beta version of Apple Intelligence on Mac devices running macOS Sequoia 15.1 with an M1 chip or more advanced hardware, which meets the mandated interoperability requirements. As of January 23, 2025, Apple Intelligence is enabled by default on the most current operating system releases.

Impact of the EU AI Act

In August 2024, the European political bloc's EU AI Act entered into force, establishing risk assessment and compliance standards for AI applications used in the region. The landmark law classifies systems by risk level, with requirements ranging in severity from general-purpose and limited-risk AI systems to high-risk and prohibited AI systems. Every developer whose AI systems are offered in the EU must comply with the applicable requirements by their respective deadlines to avoid hefty fines, unless exemptions apply.


Criteria and requirements for AI systems of varying risk levels, as defined by the EU AI Act. The legislation permits reasonable exceptions in certain circumstances.

Deadlines to comply with the EU AI Act are now taking effect. Prohibited AI systems must cease operation by February 2025. AI developers are required to publish codes of practice by May 2025. General-purpose and limited-risk AI systems have to meet their legal obligations by August 2025. High-risk AI systems must comply with the regulations between August 2026 and August 2027, depending on their qualifying criteria.
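
For readers who want to track these dates in code, here is a minimal sketch of the timeline described above. The milestone names, the month-level precision, and the helper function are illustrative assumptions rather than official terminology from the Act.

```python
from datetime import date

# Illustrative mapping of EU AI Act compliance milestones to the deadlines
# summarized above (month-level approximations; milestone names are informal).
AI_ACT_DEADLINES = {
    "prohibited_systems_cease": date(2025, 2, 1),
    "codes_of_practice_published": date(2025, 5, 1),
    "general_purpose_and_limited_risk": date(2025, 8, 1),
    "high_risk_earliest": date(2026, 8, 1),
    "high_risk_latest": date(2027, 8, 1),
}

def deadline_passed(milestone: str, today: date | None = None) -> bool:
    """Return True if the given compliance milestone has already passed."""
    return (today or date.today()) >= AI_ACT_DEADLINES[milestone]

for milestone, deadline in AI_ACT_DEADLINES.items():
    print(f"{milestone}: {deadline.isoformat()} (passed: {deadline_passed(milestone)})")
```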

Noncompliant developers incur whichever penalty is higher: a fixed amount or a percentage of revenue generated during the prior fiscal year. Violating the prohibitions can result in fines of up to €35 million or 7% of global annual turnover. Breaching requirements for high-risk AI systems can cost up to €15 million or 3% of worldwide annual turnover. Supplying incorrect, incomplete, or misleading information can lead to fines of up to €7.5 million or 1% of worldwide annual turnover.
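
To make the "whichever is higher" rule concrete, the sketch below computes the applicable ceiling for each violation category against a hypothetical worldwide annual turnover. The fixed caps and percentages come from the tiers above; the turnover figure, tier labels, and function name are made up for illustration.

```python
# Illustrative only: EU AI Act fine ceilings are the higher of a fixed cap
# or a percentage of worldwide annual turnover (figures from the tiers above).
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),    # up to €35M or 7% of turnover
    "high_risk_requirements": (15_000_000, 0.03),  # up to €15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.01),    # up to €7.5M or 1% of turnover
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the turnover share."""
    fixed_cap, turnover_share = FINE_TIERS[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# Hypothetical example: a company with €2 billion in worldwide annual turnover.
print(max_fine("prohibited_practices", 2_000_000_000))   # 140000000.0 (7% exceeds the €35M cap)
print(max_fine("incorrect_information", 2_000_000_000))  # 20000000.0 (1% exceeds the €7.5M cap)
```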

Meta also suspended the launch of its multimodal model in Europe amid the unpredictable regulatory environment. At the request of the Irish Data Protection Commission, the social media giant stopped training its models on public Facebook and Instagram data from 2007 to the present. Products and services that depend on Llama 3's audio and visual outputs can no longer be deployed on the continent. As an alternative, Meta announced that a larger, text-only version of the model would eventually become generally available in the EU.

Legacy of CA SB 1047

Democratic California Senator Scott Wiener filed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act – also known as SB 1047. The bill sought to regulate foundation models trained using more than 10^26 integer or floating-point operations of computing power at a cost exceeding $100,000,000, such as those provided by OpenAI, Anthropic, Google, and Meta. The State Senate and Assembly eventually passed SB 1047, which attempted to establish the following requirements for training such a model (a brief sketch of the threshold test follows the list):

  • Implement the capability to promptly enact a complete shutdown of the model and its derivatives
  • Incorporate a written safety and security protocol, subject to annual review and modification according to industry best practices
  • Keep records and dates of updates or revisions to the model
  • Conduct an independent, third-party audit annually to evaluate compliance with provisions and produce an audit report starting January 1, 2026
  • Retain an unredacted copy of the protocol and audit reports for five years beyond the time the model is commercially, publicly, or foreseeably operational
  • Grant the Attorney General access to the unredacted safety and security protocol, audit report, statement of compliance, and disclosure of AI safety incidents that affect the model and its derivatives
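
To make the bill's compute and cost thresholds concrete, here is a minimal sketch of the covered-model test described above. The constant and function names and the example inputs are hypothetical, and SB 1047's actual definition contained further nuances not captured here.

```python
# Thresholds from SB 1047's covered-model definition as described above.
COMPUTE_THRESHOLD_OPS = 10**26    # integer or floating-point operations used in training
COST_THRESHOLD_USD = 100_000_000  # training cost in dollars

def would_have_been_covered(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model would have met SB 1047's covered-model thresholds (illustrative)."""
    return training_ops > COMPUTE_THRESHOLD_OPS and training_cost_usd > COST_THRESHOLD_USD

# Hypothetical examples: a frontier-scale training run versus a smaller specialized model.
print(would_have_been_covered(3e26, 250_000_000))  # True
print(would_have_been_covered(5e24, 20_000_000))   # False
```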

Although U.S. lawmakers have previously introduced bills demanding the disclosure of sources and prohibiting deepfakes that may influence elections, the federal government has yet to enact legislation regulating AI safety and security. Dozens of states and territories have enacted policies to address the ambiguity and lack of governance. SB 1047 was the most recent attempt, which California Governor Gavin Newsom ultimately vetoed. Newsom justified his decision by declaring that the statute focused “only on the most expensive and large-scale models,” noting that “smaller, specialized models may emerge as equally or even more dangerous.” Future proposals are expected to adopt more dynamic, empirical standards to evaluate public safety risks without compromising research and development, as Newsom suggested.

