Artificial Intelligence Act

The European Union's Artificial Intelligence Act, initially proposed in April 2021, is finally reaching a conclusion after several months of intense negotiations that have been ongoing since June. It represents a significant step in regulating the use of AI technologies and is recognized as the world's first comprehensive legal framework in this area.

At Strata Analytics Group, we actively integrate the EU's new AI Act regulations into our services and our new Ethical AI Practice. Our commitment is to provide ethical, compliant, and cutting-edge AI solutions, ensuring trust and reliability in all our offerings.

Here's a summary of its key aspects:


Risk-Based Approach:

The AI Act categorizes AI systems based on their potential risk levels. It prohibits AI systems that pose an "unacceptable risk" and imposes varying obligations on systems categorized as "high risk" or "limited risk".
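For illustration, the short Python sketch below shows how an organization might record its AI use cases against these tiers in an internal compliance inventory. This is a hypothetical sketch, not part of the Act or of any official tooling: the tier names follow the categories above (plus a catch-all "minimal" tier assumed here), and the example use cases and their assignments are assumptions for illustration only.

```python
from enum import Enum

class RiskTier(Enum):
    # Tiers from the Act's risk-based approach; MINIMAL is a catch-all
    # assumed for this sketch to cover everything else.
    UNACCEPTABLE = "prohibited"
    HIGH = "high risk"
    LIMITED = "limited risk"
    MINIMAL = "minimal risk"

# Hypothetical internal inventory mapping AI use cases to risk tiers.
inventory = {
    "social scoring of individuals": RiskTier.UNACCEPTABLE,  # banned practice
    "credit scoring model (banking)": RiskTier.HIGH,          # assumption: high-risk sector
    "customer-facing chatbot": RiskTier.LIMITED,              # assumption: transparency duties
    "spam filter": RiskTier.MINIMAL,                          # assumption: no specific obligations
}

for use_case, tier in inventory.items():
    print(f"{use_case}: {tier.value}")
```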


Banned Applications:

The Act bans specific AI applications due to threats to citizens' rights and democracy. These include biometric categorization systems using sensitive characteristics, untargeted scraping of facial images for recognition databases, emotion recognition in workplaces and educational institutions, social scoring based on behavior or personal characteristics, and AI systems designed to manipulate human behavior or exploit vulnerabilities.


Law Enforcement Exemptions:

There are narrow exceptions for law enforcement use of biometric identification systems in public spaces, subject to strict conditions like prior judicial authorization and targeted searches for specific serious crimes.


High-Risk Systems Obligations:

High-risk AI systems, which could significantly impact health, safety, fundamental rights, the environment, or democracy, are subject to strict obligations. These include mandatory fundamental rights impact assessments, a requirement that also applies to sectors such as insurance and banking. AI systems used in elections or to influence voter behavior are also classified as high-risk.


General-Purpose AI Systems:

General-purpose AI systems, and the models they are based on, must adhere to transparency requirements, such as providing technical documentation and complying with EU copyright law. High-impact general-purpose AI models with systemic risk must conduct model evaluations, assess and mitigate systemic risks, and ensure cybersecurity.


Innovation and SME Support:

The Act promotes regulatory sandboxes and real-world testing, especially for SMEs, so that innovative AI can be developed and trained before being placed on the market.


Sanctions:

Non-compliance can lead to significant fines, ranging from €7.5 million or 1.5% of turnover to €35 million or 7% of global turnover, depending on the infringement and company size.
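To make the scale of these caps concrete, here is a minimal Python sketch of the arithmetic, assuming (as is typical for EU-style fines, though the Act's final text governs the details) that the higher of the fixed amount and the turnover percentage applies; the €2 billion turnover figure is purely hypothetical.

```python
def fine_cap(turnover_eur: float, fixed_cap_eur: float, pct_of_turnover: float) -> float:
    """Upper bound of a fine, assuming the higher of a fixed amount and a
    percentage of global annual turnover applies (an assumption for this
    sketch; the final legal text governs)."""
    return max(fixed_cap_eur, pct_of_turnover * turnover_eur)

# Hypothetical company with €2 billion global annual turnover.
turnover = 2_000_000_000

# Most serious infringements: €35 million or 7% of global turnover.
print(f"Top tier cap:    €{fine_cap(turnover, 35_000_000, 0.07):,.0f}")   # €140,000,000

# Lowest tier cited above: €7.5 million or 1.5% of turnover.
print(f"Lowest tier cap: €{fine_cap(turnover, 7_500_000, 0.015):,.0f}")   # €30,000,000
```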


Application and Coverage:

The Act will apply to providers and deployers of in-scope AI systems used in the EU, irrespective of their establishment location. It covers a broad range of AI systems, with specific exclusions for military/defense purposes, research and innovation, and non-professional use.


Timeline:

The Act is expected to come into effect in 2026, following a two-year transition period after its formal adoption, with some provisions being implemented later.
