Responsible AI in Focus: Regulatory Developments and Emerging Best Practices


On November 14, 2023, CIPL hosted a roundtable with representatives from CIPL member companies, data protection authorities, and experts in the field to discuss recent regulatory developments and emerging best practices in the field of Artificial Intelligence.

The roundtable explored the latest global developments in the regulation of AI (particularly the EU's AI Act), examined how long-established data protection practices may offer synergies or create tensions with AI, and shared concrete examples of current best practices to address risks and challenges.


Background

Global policy, legislative, and regulatory initiatives to regulate AI are emerging around the world, and organizations face growing challenges in making sense of developments such as the EU AI Act, the AI Liability Directive, the U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the Council of Europe's Framework Convention on the Design, Development, and Application of Artificial Intelligence Systems. Additionally, organizations must adhere to already existing legislation (e.g., the GDPR and the DSA) and ensure alignment with their existing or emerging internal protocols for responsible AI development and deployment.


Key Takeaways from the Roundtable Discussion

1. Cooperation and knowledge sharing between industry and regulators are key to fostering understanding of AI technology and the success of regulatory efforts

The success of AI regulatory efforts relies on cooperation between industry and regulators. Enforcement of any future AI regulatory framework will require a deep understanding of the underlying technology, its use cases, and its potential risks and benefits. It is imperative for regulators and organizations to engage actively in a constructive and transparent manner, fostering an environment of trust. Organizations should approach regulatory bodies openly, providing meaningful insight into their experience, approaches, and risk assessments, and integrating regulatory considerations into the early stages of product design. Regulatory initiatives such as the ICO's innovation advice service and the EU AI Act's regulatory sandboxes are instrumental for open engagement and the mutual development of a sustainable approach to AI regulation. Cooperation and knowledge exchange will be particularly important given the limited pool of talent specialized in AI currently available and the growing competition for that talent.


2. Organizations can successfully leverage work previously carried out in their data protection compliance program to develop and deploy accountable, responsible and trustworthy AI

Organizations can capitalize on their existing data protection management and compliance programs to meet obligations related to AI development and deployment. While certain apparent tensions between AI requirements and data protection principles must be navigated (for example, concerning data minimization, purpose limitation, or the concept of controllership), there are important synergies available, such as accountability and transparency practices in the form of existing documentation.[1] CIPL has built on its Accountability Framework in its Ten Recommendations for Global AI Regulation,[2] which places organizational accountability at the very core of the AI regulatory framework.

Additionally, a well-established culture that goes beyond legal compliance will be instrumental when it comes to integrating AI governance across the necessary teams. AI requires a cross-disciplinary approach, and organizations that have previously developed a strong compliance culture will find it easier to translate their internal practices into actionable organizational measures and procedures that support their efforts to develop or deploy AI in a compliant and responsible manner.


3. To effectively address AI challenges, organizations should ensure close cooperation across teams and disciplines

The widespread deployment of AI will be a game changer for organizations and cannot be addressed by a single team alone. Privacy, legal, compliance, and IT teams will have to cooperate and contribute, depending on the complexity of the AI systems that an organization is deploying or developing, to ensure a holistic approach.

This may be achieved by creating multidisciplinary teams, fostering cooperation between existing teams, and relying on internal or external AI boards.[3] No one-size-fits-all approach exists, and organizations will have to identify what works best for them based on factors such as their size, use of AI, the risk of their activities, business needs, and company culture.


4. AI regulation must be addressed at the supranational/international level, and initiatives to find a broad supranational/international agreement are welcomed

International initiatives to establish minimum standards for AI regulation, such as those arising from the OECD, the G7, the Council of Europe, and the UK AI Safety Summit, are welcome and necessary developments. Fragmented regulation based on a national-only approach may create added difficulties in the development and deployment of AI, as well as possible societal dangers resulting from diverging or even incompatible standards of security and fundamental rights protection around the world.

Finally, AI regulation must be risk-based and implemented in a manner that truly distinguishes and addresses AI according to its actual risk. Qualifying all or most technology as high risk, without taking the more nuanced deployment context into consideration, will result in a horizontal and inefficient allocation of resources for AI compliance instead of a focus on severity and likelihood.[4]



[1] Centre for Information Policy Leadership, Artificial Intelligence and Data Protection in Tension, available at https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_first_ai_report_-_ai_and_data_protection_in_tension__2_.pdf.

[2] Centre for Information Policy Leadership, Ten Recommendations for Global AI Regulation, available at https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_ten_recommendations_global_ai_regulation_oct2023.pdf.

[3] Centre for Information Policy Leadership, Hard Issues and Practical Solutions, section F – Wide Range of Available Tools, available at https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_second_report_-_artificial_intelligence_and_data_protection_-_hard_issues_and_practical_solutions__27_february_2020_.pdf.

[4] CIPL has discussed the importance of careful consideration to effectively differentiate and address AI technologies based on their actual risk levels in Recommendations on Adopting a Risk-Based Approach to Regulating AI in the EU, available at https://www.informationpolicycentre.com/uploads/5/7/1/0/57104281/cipl_risk-based_approach_to_regulating_ai__22_march_2021_.pdf.
