EU AI ACT: Shaping Governance for Tomorrow’s Innovation
Rupa Singh
Founder and CEO at 'The AI Bodhi' and 'AI-Beehive' | Author of "AI ETHICS with BUDDHIST PERSPECTIVE" | Top 20 Global AI Ethics Leader | Thought Leader | Expert Member at Global AI Ethics Institute
The rapid pace of technological advancement and innovation puts governance and regulatory mechanisms to the test. New and innovative regulatory approaches are essential if governments are to integrate these technologies into our societies in a sustainable, beneficial, and just manner.
Among the many innovations under scrutiny, Artificial Intelligence stands out as one of the most debated.
Questions about how, when, and by whom AI should be regulated are becoming central to our discussions. As these debates continue, AI is finding widespread use within a spectrum of regulatory contexts, encompassing established frameworks, evolving paradigms, and regulations tailored specifically to AI.
The EU is Setting Global Standards for AI:
The EU is taking a strong stance on regulating AI products and services before they can access the European market.
The GDPR is a prominent example. When it came into force, it had far-reaching implications for companies around the world that process the data of EU citizens. The GDPR (2016) and the Data Protection Directive (1995) before it have not only raised the bar for privacy and data protection, but also exemplified how EU legislation can influence global regulatory practices and standards.
The GDPR, which has become a blueprint for privacy, data protection, and data sovereignty, serves as an example of how the EU’s approach to regulation can have a significant impact beyond its borders.
On Wednesday, 13 March 2024, the European Union took a significant step forward when the European Parliament passed the EU AI Act with an overwhelming 523 votes in favor, signalling a commitment to shaping the future of AI in Europe and beyond.
Risk Categorization:
The EU AI Act takes a risk-based approach to the regulation of AI systems. The framework consists of four risk tiers:
1. Unacceptable Risk
2. High Risk
3. Limited Risk
4. Minimal Risk
These tiers are defined by the potential risks that AI systems pose to the health, safety, and fundamental rights of individuals.
The classification system is designed to ensure that the level of regulation and oversight is commensurate with the level of risk posed by a particular AI system. AI systems that pose an unacceptable risk to health, safety, or fundamental rights are prohibited outright.
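To make that proportionality idea concrete, here is a minimal, purely illustrative Python sketch (my own shorthand, not language from the Act) mapping each tier to the obligation it carries, with a few paraphrased example triggers:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act, mapped to their obligation in plain words."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment before market access"
    LIMITED = "transparency obligations"
    MINIMAL = "voluntary codes of conduct"

# Paraphrased, non-exhaustive example triggers, only to show how oversight
# scales with risk; a real legal assessment is far more nuanced.
EXAMPLE_TRIGGERS = {
    RiskTier.UNACCEPTABLE: ["social scoring by public authorities",
                            "harmful subliminal manipulation"],
    RiskTier.HIGH: ["safety component of a medical device",
                    "CV screening for hiring decisions"],
    RiskTier.LIMITED: ["customer-facing chatbot", "deepfake generator"],
    RiskTier.MINIMAL: ["spam filter", "video-game AI"],
}

def oversight_for(tier: RiskTier) -> str:
    """Return the regulatory obligation attached to a given tier."""
    return tier.value

print(oversight_for(RiskTier.HIGH))  # conformity assessment before market access
```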
CATEGORY 1: Unacceptable Risk: Prohibited
Scope:
1. Subliminal techniques or exploiting vulnerabilities of specific populations in ways that cause harm
2. “Social scores” used by public authorities or on their behalf
3. Real-time remote biometrics in public spaces used by law enforcement (with some exceptions)
Requirements: These uses are prohibited.
SANCTIONS: Fines up to 7% of global revenue or 30mn euros, whichever is higher.
CATEGORY 2: High-Risk Systems: Conformity Assessment
Scope:
1. AI systems that are products or safety components of products, including medical devices, toys, and machinery
2. Remote biometric identification and categorisation of natural persons (e.g. a system classifying the number of people of different skin tones walking down a street)
3. Management and operation of critical infrastructure (road traffic and the supply of water, gas, heating, and electricity)
4. Education and vocational training, where systems are used for, e.g., admission and grading
5. Employment, worker management, and access to self-employment opportunities, including systems that make or inform decisions about hiring, firing, and task allocation
6. Access to and enjoyment of essential private services and public services and benefits
7. Specific uses in law enforcement
8. Specific uses in migration, asylum, and border control management
9. Administration of justice and democratic processes, in particular when systems are used to research and establish facts or to apply the law to a concrete set of facts
REQUIREMENTS:
Providers of high-risk systems must perform a conformity assessment to demonstrate that they comply with requirements including the following (a checklist sketch follows this category):
1. Risk management system
2. Data requirements
3. Technical documentation
4. Record-keeping
5. Transparency on the system’s functioning
6. Human oversight
7. Accuracy, robustness, and cybersecurity
8. Post-market monitoring
SANCTIONS: Fines up to 4% of global revenue or 20mn euros, whichever is higher, for everything except the data requirements, where the same fines apply as for the prohibited systems.
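As a rough illustration of how a provider might track these obligations internally, here is a hypothetical self-assessment checklist in Python; the field names paraphrase the requirement areas above and are not terms defined by the Act:

```python
# Hypothetical internal checklist a provider of a high-risk system might keep
# while preparing a conformity assessment; purely illustrative.
from dataclasses import dataclass, fields

@dataclass
class ConformityChecklist:
    risk_management_system: bool = False
    data_requirements: bool = False
    technical_documentation: bool = False
    record_keeping: bool = False
    transparency_on_functioning: bool = False
    human_oversight: bool = False
    accuracy_robustness_cybersecurity: bool = False
    post_market_monitoring: bool = False

    def outstanding(self) -> list[str]:
        """List requirement areas that are not yet evidenced."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

checklist = ConformityChecklist(technical_documentation=True, record_keeping=True)
print(checklist.outstanding())  # everything still to be addressed before assessment
```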
领英推čŤ
CATEGORY 3: Limited Risk: Transparency Obligations
Scope:
1. AI systems interacting with natural persons
2. Emotion recognition systems or biometric categorisation systems
3. AI systems that generate or manipulate image, audio, or video content that appears real
Requirements: Notify the user that they are engaging with an AI system
SANCTIONS: Fines up to 4% of global revenue or 20mn euros, whichever is higher.
CATEGORY 4: Minimal Risk: Voluntary Codes of Conduct
SCOPE: All AI systems that are neither prohibited nor high-risk.
REQUIREMENTS: Providers can choose to comply with voluntary codes of conduct. The Commission and Member States will encourage the creation of, and voluntary compliance with, such codes.
SANCTIONS: Not applicable as there are no requirements.
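The “whichever is higher” rule in the sanction clauses above is simple arithmetic. The sketch below encodes the ceilings exactly as quoted in this article (the final text of the Act may use different figures), purely for illustration:

```python
# Illustrative only: fine ceilings as quoted in this article, per tier,
# expressed as (share of global revenue, fixed floor in euros).
FINE_CEILINGS = {
    "unacceptable": (0.07, 30_000_000),  # 7% or €30m
    "high":         (0.04, 20_000_000),  # 4% or €20m
    "limited":      (0.04, 20_000_000),  # 4% or €20m
}

def max_fine(tier: str, global_revenue_eur: float) -> float:
    """Upper bound of the fine: percentage of revenue or the floor, whichever is higher."""
    pct, floor_eur = FINE_CEILINGS[tier]
    return max(pct * global_revenue_eur, floor_eur)

# A firm with €2bn in global revenue facing a prohibited-use violation:
print(f"€{max_fine('unacceptable', 2_000_000_000):,.0f}")  # €140,000,000
```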
New Governance Architecture for AI Regulation:
To ensure proper enforcement of the AI Act, new governing bodies have been established, including an AI Office within the European Commission and an AI Board composed of Member State representatives.
The Final Green Light for the First Worldwide Rules on AI: With the Council of the EU's final approval, this monumental development could reshape the landscape of AI not just for the EU but across the globe. The groundbreaking legislation is the first of its kind worldwide, setting a new standard for AI regulation.
The AI Act's risk-based approach ensures that the potentially harmful applications of AI face the highest level of scrutiny and control.
The aim here is two-fold:
1. To foster the development and uptake of safe and trustworthy AI systems across the EU’s single market, by both private companies and public entities.
2. To ensure respect for the fundamental rights of EU citizens, while also stimulating investment and innovation in AI across Europe.
So, what areas does the AI Act cover? It applies across all areas within the scope of EU law, with some notable exemptions: systems used exclusively for military and defence purposes, as well as those used for research, are not covered by the new legislation.
The AI Act sets a precedent for other regions. By prioritizing fundamental rights and safety, Europe is demonstrating that it's possible to innovate responsibly. This could encourage other countries to adopt similar frameworks, leading to a more harmonized global approach to AI regulation.
The European Commission said, “The AI office aims at enabling the future development, deployment, and use of AI in a way that fosters societal and economic benefits and innovation, while mitigating risks.”
A 140-member AI Office will be established within the Commission.
This multi-layered governance architecture aims to ensure the effective enforcement of AI regulations across the EU. By incorporating diverse expertise and facilitating cooperation among different entities, the framework seeks to promote a balanced and comprehensive approach to AI governance. Responsibilities include issuing guidance on definitions and prohibitions, coordinating codes of practice for general-purpose AI models, and overseeing enforcement across Member States.
ESMA’s Guidance on AI Responsibility in Financial Services:
On 30 May 2024, ESMA made a significant announcement, making it clear that banks and investment firms must take full responsibility for the protection of customers when using AI. The statement outlines how financial organizations can use AI without violating the EU’s Markets in Financial Instruments Directive (MiFID).
According to ESMA, the decisions made by financial firms remain the responsibility of management bodies, regardless of whether those decisions are made by humans or AI-based tools.
Another critical point from ESMA’s statement is the commitment to act in the best interest of clients. This requirement applies no matter what tools the firm uses: whether it is AI or traditional methods, the client’s best interest must always come first.
And it does not stop there. ESMA’s guidelines also cover the use of third-party AI technologies, such as ChatGPT and Google Bard. Financial firms must ensure that these tools are used appropriately.
One of the challenges financial firms may face when integrating AI into their operations under the new guidelines is ensuring that AI systems are transparent and that their decision-making processes can be audited. Firms will need to invest in robust compliance frameworks to monitor AI activities and ensure they align with regulatory requirements and ethical standards. Additionally, there will be a need for continuous training and updates for both the AI systems and the human oversight mechanisms.
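As a rough sketch of what such a compliance framework might record, the snippet below logs each AI-assisted decision together with the human reviewer and the final call; the record format and function are hypothetical and not prescribed by ESMA:

```python
# Minimal sketch of an audit trail for AI-assisted decisions; the field names
# and file format are illustrative assumptions, not regulatory requirements.
import json
import datetime

def log_ai_assisted_decision(path: str, *, client_id: str, model: str,
                             recommendation: str, human_reviewer: str,
                             final_decision: str) -> None:
    """Append one auditable record: what the model suggested, who reviewed it,
    and what the firm ultimately decided (responsibility stays with humans)."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "client_id": client_id,
        "model": model,
        "ai_recommendation": recommendation,
        "human_reviewer": human_reviewer,
        "final_decision": final_decision,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_assisted_decision("decisions.jsonl", client_id="C-1042",
                         model="third-party-llm-v1", recommendation="rebalance",
                         human_reviewer="j.doe", final_decision="rebalance")
```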
Promoting AI Innovation in the EU
The EU AI Act is not just about regulation; it is also about fostering innovation.
Measures in Support of Innovation:
1. Innovation-friendly Legal Framework: The AI Act promotes a legal environment conducive to innovation and evidence-based regulatory learning.
2. AI Regulatory Sandboxes: These controlled environments enable the development, testing, and validation of innovative AI systems. They allow for real-world testing, ensuring that new AI solutions are practical and effective.
What’s Next?
Legislative Act Publication: After being signed by the presidents of the European Parliament and of the Council, the Act will be published in the EU’s Official Journal in the coming days and will enter into force twenty days later.
Regulation Application: The new regulation will apply two years after its entry into force, with some exceptions for specific provisions.
Organizational Changes and Timelines:
- June 16: Organizational changes outlined in the AI Act will take effect.
- End of June: The first meeting of the AI Board is scheduled.
- 6 Months After Entry into Force: The AI Office will issue guidelines on AI system definitions and prohibitions.
- 9 Months After Entry into Force: The AI Office will coordinate the creation of codes of practice for general-purpose AI models.
This is a pivotal moment for AI in the EU, balancing regulation with robust support for innovation.
Thank you for reading this article. How do you think these regulations will impact the global AI landscape? Let's discuss in the comments!