EU AI Act & Future of AI Governance

Many nations and organisations are now in the process of regulating and governing Artificial Intelligence (AI). I was recently involved in the Australian government consultation process for positioning Australia as a leader in digital economy regulation (automated decision-making and AI regulation). For more information about my contribution, download my recommendation paper here, which focuses on Responsible AI and good governance of AI through ethics. The European Union (EU) AI Act is a similar effort toward future AI risk management. For more information, reach out to [email protected]

In April 2021, the European Commission proposed the AI Act (EC Proposal). Later, in December 2022, the Council of the EU adopted its Common Position on the proposed AI Act, marking a milestone on the EU's path towards comprehensive regulation of AI systems. The AI Act categorizes systems by risk and imposes highly prescriptive requirements on high-risk systems, with a broadly extraterritorial scope covering foreign systems whose products enter the EU market. The European Parliament will arrive at its position this year, after which the Council, Commission and Parliament will negotiate the final text of the legislation in a process called the trilogue. There are still disputes over how to supervise law enforcement use of AI systems and how much latitude governments will have to use biometric recognition systems for national security. In the meantime, countries and businesses should begin considering whether their internal AI systems or AI-enabled products and services will be covered by the AI Act, and take the necessary steps towards compliance.

Figure: The trilogue. Source: https://www.eshre.eu/Europe/Governance-of-MAR-in-Europe

The European Commission has proposed a comprehensive and balanced regulatory framework on AI that seeks to ensure that AI systems are safe and respect fundamental rights, while also providing legal certainty for investment and innovation in AI, enhancing governance and enforcement of existing laws, and preventing market fragmentation (Common Position, Section 1.1 - Reasons for and objectives of the proposal). The regulatory approach is risk-based, future-proof, and flexible, allowing for adaptation as technology evolves and new risks emerge. The proposed legal framework also ensures the harmonisation of rules and a high level of protection of public interests such as health, safety, and fundamental rights, supporting the EU's objective of being a global leader in developing secure, trustworthy, and ethical AI while ensuring the protection of ethical principles. Understanding and adopting this framework will help other nations develop similar strategies to excel in AI transformation and Industry 4.0 without hindering technological development or unduly restricting trade.

The proposed regulation sets the requirements for trustworthy AI and imposes proportionate obligations on all value-chain participants to protect human dignity (Article 1), respect for private life and protection of personal data (Articles 7 and 8), non-discrimination (Article 21), and equality between women and men (Article 23).

The proposed regulation on AI follows a risk-based approach which classifies AI systems into three categories: (i) those that create an unacceptable risk, (ii) those that create a high risk, and (iii) those that create a low or minimal risk (Common Position, Sections 5.2.2 & 5.2.3 - prohibited and high-risk AI practices). The list of prohibited practices includes all AI systems whose use is deemed unacceptable, such as those that violate fundamental rights. Prohibited practices cover manipulative or exploitative techniques that have the potential to harm vulnerable groups, such as children or persons with disabilities, as well as the use of AI-based social scoring for general purposes by public authorities. The use of 'real-time' remote biometric identification systems in publicly accessible spaces for law enforcement is also prohibited unless certain exceptions apply. The proposed regulation allows high-risk AI systems on the European market subject to compliance with mandatory requirements and an ex-ante conformity assessment. The classification of an AI system as high-risk is based on its intended purpose, and its placement on the market depends on compliance with specific legal requirements related to data governance, documentation, transparency, human oversight, robustness, accuracy, and security.
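To make the tiered structure concrete, here is a minimal sketch in Python. It assumes a simplified three-tier model; the `RiskTier` enum, the `AISystem` record, and the obligations checklist are illustrative inventions based on the paragraph above, not definitions from the Act itself.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"        # e.g. general-purpose social scoring
    HIGH = "allowed with obligations"  # ex-ante conformity assessment required
    LOW_OR_MINIMAL = "largely unregulated"

@dataclass
class AISystem:
    name: str
    intended_purpose: str  # classification as high-risk hinges on this
    tier: RiskTier

def market_entry_requirements(system: AISystem) -> list[str]:
    """Return a simplified, illustrative checklist for EU market entry."""
    if system.tier is RiskTier.UNACCEPTABLE:
        raise ValueError(f"{system.name}: prohibited practice, no market entry")
    if system.tier is RiskTier.HIGH:
        return ["data governance", "technical documentation", "transparency",
                "human oversight", "robustness, accuracy and security",
                "ex-ante conformity assessment"]
    return []  # low/minimal risk: at most voluntary codes of conduct

print(market_entry_requirements(
    AISystem("CV screening tool", "employment decisions", RiskTier.HIGH)))
```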

In order to prevent or minimise risks to health, safety, or fundamental rights, high-risk AI systems must be designed and developed to allow effective human oversight during their use, including through appropriate human-machine interface tools (Common Position, Article 14 - Human oversight). This human oversight is necessary especially in cases where risks persist despite other requirements being met, and must be maintained during the intended use of the system or under reasonably foreseeable misuse. To ensure this, providers must incorporate human oversight measures into the high-risk AI system, where technically feasible, prior to placing it on the market or putting it into service. Alternatively, providers may identify appropriate human oversight measures for the user to implement before using the system. Individuals assigned human oversight of these systems should be able to fully understand the system's capacities and limitations, monitor its operation, and detect and address anomalies, dysfunctions, and unexpected performance. They should also be aware of automation bias, correctly interpret the system's output, be able to decide not to use it or to override its output, and intervene in or stop the system's operation when necessary. In addition, for certain high-risk AI systems, no action or decision should be taken based on the system's identification without verification and confirmation by at least two natural persons.
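As a toy illustration of the "at least two natural persons" rule, consider the following sketch; the `Identification` class and its reviewer-tracking logic are my own assumptions, not the Act's wording.

```python
from dataclasses import dataclass, field

@dataclass
class Identification:
    """An output of a (hypothetical) high-risk identification system."""
    subject_id: str
    confirmations: set[str] = field(default_factory=set)  # reviewer names

    def confirm(self, reviewer: str) -> None:
        self.confirmations.add(reviewer)

    def actionable(self) -> bool:
        # Article 14-style four-eyes rule: at least two distinct natural
        # persons must verify and confirm before any action is taken.
        return len(self.confirmations) >= 2

match = Identification("subject-042")
match.confirm("officer_a")
assert not match.actionable()  # one confirmation is not enough
match.confirm("officer_b")
assert match.actionable()      # two distinct reviewers: action may proceed
```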

The effective implementation of the proposed AI regulations relies on competent national authorities for AI governance and a robust monitoring and evaluation mechanism (Common Position, Section 5.1 - Monitoring, evaluation and reporting). The EU is planning to establish a commission and a public, EU-wide database for registering high-risk AI applications, which will enable competent authorities, users, and other interested parties to verify compliance with the requirements of the proposal. AI providers will be obliged to provide meaningful information about their systems and the conformity assessments carried out on those systems. Additionally, AI providers will be required to report any serious incidents or malfunctioning that breaches fundamental rights obligations to the national competent authorities, who will investigate and regularly transmit the information to the Commission. The Commission will complement this information with a comprehensive analysis of the overall AI market. The involvement of national competent authorities is crucial in ensuring the effective implementation and oversight of the proposal.
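A minimal sketch of what a database registration entry and an incident report might carry, assuming hypothetical field names (the Act itself specifies the information actually required):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class HighRiskRegistration:
    """Illustrative entry in an EU-wide database of high-risk AI systems."""
    provider: str
    system_name: str
    intended_purpose: str
    conformity_assessment_ref: str  # reference to the ex-ante assessment
    registered_on: date

@dataclass(frozen=True)
class SeriousIncidentReport:
    """Illustrative report from a provider to a national competent authority."""
    registration: HighRiskRegistration
    description: str
    breaches_fundamental_rights: bool  # triggers investigation and onward
    reported_on: date                  # transmission to the Commission

entry = HighRiskRegistration("Acme AI", "CV screening tool",
                             "employment decisions", "CA-2023-001",
                             date(2023, 3, 1))
```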

The Common Position emphasizes the importance of supporting innovation through the creation of AI regulatory sandboxes (Common Position, Article 53 - AI regulatory sandboxes). These sandboxes will provide a controlled environment for developing, testing, and validating innovative AI systems under the supervision of regulatory authorities before commercial deployment. The sandboxes will allow testing under real-world conditions, with protections in place to prevent harm, including informed consent by participants and provisions for the effective reversal or blocking of predictions, recommendations, or decisions made by the tested system. The AI regulatory sandboxes will not affect the supervisory and corrective powers of the competent authorities, and any identified risks to health, safety, or fundamental rights will result in immediate mitigation, or in suspension of the development and testing process until such mitigation takes place. Participants in an AI regulatory sandbox will remain liable under applicable legislation for any harm inflicted on third parties as a result of the experimentation taking place in the sandbox. The competent authorities that have established AI regulatory sandboxes will coordinate their activities and submit annual reports to the European Artificial Intelligence Board and the Commission on the results of those schemes, including good practices, lessons learned, and recommendations on their setup and application. The modalities and conditions of the operation of the AI regulatory sandboxes will be set out in implementing acts. Microenterprise providers of high-risk AI systems are also exempted from certain requirements relating to quality management systems.
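The sandbox lifecycle described above can be sketched as a small state machine; the states and transitions below are my own simplification, not the Act's terms.

```python
from enum import Enum, auto

class SandboxState(Enum):
    DEVELOPING = auto()
    TESTING = auto()    # real-world testing with informed consent
    SUSPENDED = auto()  # identified risk: halted until mitigation
    EXITED = auto()     # leaves the sandbox for normal conformity procedures

class RegulatorySandbox:
    """Illustrative sandbox lifecycle (states and transitions are assumptions)."""

    def __init__(self, system_name: str):
        self.system_name = system_name
        self.state = SandboxState.DEVELOPING

    def start_testing(self, informed_consent: bool) -> None:
        if not informed_consent:
            raise PermissionError("real-world testing requires informed consent")
        self.state = SandboxState.TESTING

    def report_risk(self) -> None:
        # Identified risks to health, safety or fundamental rights lead to
        # immediate suspension until mitigation takes place.
        self.state = SandboxState.SUSPENDED

    def mitigate(self) -> None:
        if self.state is SandboxState.SUSPENDED:
            self.state = SandboxState.TESTING

sandbox = RegulatorySandbox("triage assistant")
sandbox.start_testing(informed_consent=True)
sandbox.report_risk()  # risk found -> SUSPENDED
sandbox.mitigate()     # testing resumes after mitigation
```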

Emerging Global AI Regulation

In addition to the EU's effort on AI risk management, governance and regulation, in the USA the National Institute of Standards and Technology (NIST) has just released (on January 26, 2023) its Artificial Intelligence Risk Management Framework (AI RMF), developed in collaboration with the private and public sectors to better manage the risks associated with AI for individuals, organizations, and society. The framework is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems. The framework was developed through a consensus-driven, open, transparent, and collaborative process that included a request for information, multiple draft versions for public comment, workshops, and other opportunities to provide input. It is intended to build on, align with, and support AI risk management efforts by others. NIST also published a companion playbook, roadmap, crosswalk, video explainer and various perspectives to support the framework. For more information, please refer to the article.

The Australian initiative on positioning Australia as a leader in digital economy regulation (automated decision-making and AI regulation) is also underway. The industry has already been consulted. I contributed to that process from a Responsible AI perspective; the recommendation paper can be downloaded here.

Figure: AI Governance and Ethics Framework for Sustainable AI and Sustainability. A submission in response to the Australian Government AI regulation consultation process. Source: https://doi.org/10.48550/arXiv.2210.08984

AI governance and regulation have only just begun. Many countries will look into regulating AI since it can cause significant risks despite its substantial economic benefits. Research has found that 85% of AI projects fail due to bias in data, algorithms, or the teams responsible for managing them. Further, there are many emerging AI risks for humanity, such as autonomous weapons, automation-spurred job loss, socio-economic inequality, bias caused by data and algorithms, privacy violations, and deepfakes. These risks are fundamentally caused by unethical AI, which needs to be regulated. The following are some other initiatives to regulate AI towards Responsible AI:

United Kingdom

Canada

Brazil

Netherlands

Responsible AI can orchestrate the relevant strategies, stakeholders and resources towards sustainability and make a social impact. That encompasses both the consequentialist and utilitarian perspectives of human ethics towards sustainability. As per the research, AI can support 79% of the UN Sustainable Development Goals (SDGs) targets.

Figure: KITE abstraction framework, presented at the United Nations World Data Forum to support the UN SDGs, ESG and Responsible AI. Source: https://unstats.un.org/unsd/undataforum/blog/KITE-an-abstraction-framework-for-reducing-complexity-in-ai-governance/

In such a Responsible AI framework, leaders can focus on the key dimensions of

  1. AI,
  2. Organisation,
  3. Society,
  4. Sustainability,

to understand the stakeholders, strategy, social justice and sustainable impact. As shown in the figure, the KITE abstraction framework analyses the synergy and social impact of AI from organisational, social and sustainability perspectives. These interdependent perspectives enable the evaluation of motivations for AI ethics and good governance, AI for good, AI for sustainability, and social diversity and inclusion in AI strategies and initiatives. In our experience, this framework enables organisations to systematically engage with the community, volunteers and partners to collaborate towards ethical and sustainable AI for social justice. It hides the application-specific complexities of AI and generalizes the key success factors (KSFs) of AI initiatives so that stakeholders can easily understand their responsibilities for sustainability and social justice. These key success factors include, but are not limited to, social DEI (Diversity, Equity and Inclusion), the SDGs (Sustainable Development Goals), strategy, ethics and governance in AI. Moreover, this framework supports mitigating AI risks related to biases in various aspects, including bias in data, algorithms, and the people involved in AI.
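Purely as an illustration of how the four KITE dimensions could be operationalised in a governance checklist, here is a short sketch; the mapping of key success factors to dimensions is my own assumption, not taken from the KITE paper.

```python
from enum import Enum

class KiteDimension(Enum):
    AI = "AI"
    ORGANISATION = "Organisation"
    SOCIETY = "Society"
    SUSTAINABILITY = "Sustainability"

# Hypothetical mapping of key success factors (KSFs) to KITE dimensions.
KEY_SUCCESS_FACTORS: dict[KiteDimension, list[str]] = {
    KiteDimension.AI: ["ethics", "governance", "bias mitigation"],
    KiteDimension.ORGANISATION: ["strategy", "stakeholder engagement"],
    KiteDimension.SOCIETY: ["DEI", "social justice"],
    KiteDimension.SUSTAINABILITY: ["SDG alignment", "ESG reporting"],
}

def review_initiative(ksfs_addressed: set[str]) -> dict[KiteDimension, list[str]]:
    """Report which key success factors an AI initiative has not yet addressed."""
    return {dim: [k for k in ksfs if k not in ksfs_addressed]
            for dim, ksfs in KEY_SUCCESS_FACTORS.items()}

print(review_initiative({"ethics", "strategy", "SDG alignment"}))
```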

Conclusion

The breakthroughs of AI present significant opportunities for innovation and growth across industries, but they also pose significant risks to humanity and society. With increasing concerns over bias, discrimination, privacy violations, and potential job losses, it is clear that the need for AI regulation is pressing. Organizations and governments must act responsibly and proactively by focusing on analyzing, developing, and adopting voluntary or regulatory AI frameworks. Failure to do so could result in irreversible damage to people's lives and the economy. Therefore, it is essential that businesses and governments work together to create and implement effective AI regulation that prioritizes human well-being and protects society's fundamental values. By doing so, they can ensure that AI technology is used safely and ethically, while also reaping its full benefits for the economy and society.

For more information, reach out to [email protected]

Emmanuel R. Goffi, PhD

AI Ethicist | Professor of Ethics | Ethics Sherpa and Consultant | International Public Speaker

Thanks for this very informative piece for those who do not know about the AI Act, dear Dr Mahendra Samarawickrama (GAICD, MBA, SMIEEE, ACS(CP)). Regarding global governance, it seems rather compromised. First, we are experiencing a multiplication of norms (legal, ethical, standards, national, transnational, horizontal, vertical, binding, non-binding, private and public...) that are weakening any attempt to regulate AI globally. Second, most norms are either issued by Western countries with no consideration for cultural particularism (which might be a concern for nations such as Australia), or are mere copy-pasted instruments based on a Western perspective (see Bahrain). Basically, any attempt to set a global normative frame will be to the detriment of some cultures, be it intentional or unintentional. I am not sure it is even something desirable. My point here is as follows: never copy-paste or use a normative frame (ethical or legal) that is not deeply rooted in your culture and aimed at protecting specifically your interests. As an illustration, it is worth looking at New Zealand and the way they embedded Maori ethics in their regulation. Australia needs first and foremost a native reflection on the subject.
