DIFC Regulation 10: Facilitating Interoperability and Innovation

Regulation 10 on Personal Data Processed Through Autonomous and Semi-Autonomous Systems (the Regulation) was issued and published by the Dubai International Financial Centre (DIFC) Commissioner of Data Protection in September 2023 under the DIFC Data Protection Law 2020. It sets out requirements for businesses using autonomous or semi-autonomous systems such as Artificial Intelligence (AI), and emphasises integrating those requirements into data protection compliance programmes to protect personal data and ensure transparency.

Regulation 10 aims to position DIFC as a hub for integrating diverse guidelines and principles governing AI technology. It fosters an environment in which principles from different governments and organizations, including the EU and Chinese regulations among others cited in Regulation 10, can be applied to AI development. It ensures that AI systems developed under DIFC's framework can align with a wide range of global standards and practices, making it easier for businesses to meet differing regulatory requirements.

Scope and Application

Regulation 10 addresses the processing of personal data by “Autonomous and Semi-Autonomous systems,” including those mimicking natural persons and virtual personas. It extends responsibilities to deployers and operators, aligning their compliance with those of controllers and processors. This is the first regulation of its kind in the MEASA region and establishes boundaries for organizations that deploy or operate AI systems handling personal data without directly regulating their algorithms or functions.

Obligations of Deployers and System Operators

The deployers and system operators of autonomous and semi-autonomous systems are expected to comply with the general requirements of legitimate and lawful data processing in a manner similar to any other controller or processor.

Notice

Regulation 10.2.2(a) mandates transparency in AI systems by requiring clear notices about data processing, risks, and system details. It aims to protect user rights through detailed disclosures and risk assessments, enabling informed decisions and minimizing potential negative impacts from advanced AI technologies.

Human-Defined and Self-Defined Purposes

The Regulation distinguishes between “human-defined” and “self-defined” purposes for processing personal data in AI systems, with human-defined purposes taking precedence. Systems capable of generating their own purposes must remain grounded in pre-defined, hard-coded, human-defined principles.
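To make the purpose hierarchy concrete, the sketch below shows one way a deployer might enforce it in code: self-generated purposes are accepted only if they fall within a hard-coded, human-defined set. This is purely illustrative; the Regulation does not prescribe an implementation, and all names here are hypothetical.

```python
# Hypothetical sketch of the purpose hierarchy in Regulation 10: human-defined
# purposes are hard-coded and take precedence; self-defined purposes generated
# by the system must be grounded in (i.e. validated against) that set.

# Hard-coded, human-defined purposes. Frozen so they cannot be extended at
# runtime by the system itself.
HUMAN_DEFINED_PURPOSES = frozenset({
    "fraud_detection",
    "service_personalisation",
    "regulatory_reporting",
})


def approve_purpose(proposed: str) -> bool:
    """Accept a self-defined processing purpose only if it falls within
    the hard-coded human-defined set."""
    return proposed in HUMAN_DEFINED_PURPOSES


def filter_self_defined(proposed_purposes: list[str]) -> list[str]:
    """Drop any self-generated purpose not grounded in a human-defined one."""
    return [p for p in proposed_purposes if approve_purpose(p)]
```

In this design, the precedence rule is structural: the system can propose new purposes, but nothing it proposes can widen the human-defined set.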

Certification

Regulation 10.2.2(c) adopts a permissive certification regime for processing personal data through AI, with future guidance expected from the Commissioner to establish certification requirements, particularly for high-risk processing (HRP) activities. For HRP activities, the Regulation intends that only systems fully compliant with the published accreditation and certification requirements may be used.

Fundamental Principles

The Regulation establishes fundamental principles, such as fairness, ethical compliance, transparency, security, and accountability, with which the design of AI systems must comply. These principles apply not only to developers of the systems but also to deployers and operators.

Autonomous Systems Officer (ASO)

The ASO is expected to perform functions similar to those of a Data Protection Officer (DPO), apart from the requirement to conduct DPO Controller Assessments. The ASO will also be expected to fulfil any other obligation directed by law or arising from globally accepted best practices, including conducting Data Protection Impact Assessments (DPIAs), reviewing risks and processing activities with senior management, and making recommendations to improve compliance.

Conclusion

Regulation 10 fosters innovation and interoperability by aligning DIFC with global standards, creating a flexible framework for AI systems. The risk and outcomes-based approach to regulation provides a suitable environment for building innovative technology. It supports ethical development through transparency and accountability while simplifying compliance across jurisdictions, thus encouraging responsible AI innovation and enhancing global regulatory coherence.

If you're an organization handling large volumes of data, visit www.tsaaro.com.

News of the Week

1. Federal Judge Rules New York City's Customer Data Sharing Law Unconstitutional

On Tuesday, U.S. District Judge Analisa Torres in Manhattan struck down a New York City law that required food delivery companies to share customer data with restaurants. The ruling came in favor of DoorDash, Grubhub, and Uber Eats, with the judge declaring that the law violated the First Amendment by unlawfully regulating commercial speech. The law, implemented in the summer of 2021, was introduced as part of a series of measures aimed at helping restaurants recover from the impact of the COVID-19 pandemic.

https://theprint.in/tech/judge-declares-nyc-law-on-sharing-food-delivery-customers-data-unconstitutional/2282817/

2. Court Ruling Indicates AI Prompts and Outputs May Be Discoverable in Litigation

A recent federal district court ruling indicates that generative AI prompts and outputs could be subject to discovery during litigation, including those used during pre-suit investigations. While the guidelines for using such evidence remain unclear, litigants should carefully consider issues such as privilege, spoliation, reliability, and authentication when dealing with AI-generated materials, as these concerns are likely to arise in legal proceedings.

https://www.reuters.com/legal/legalindustry/rules-use-ai-generated-evidence-flux-2024-09-23/

3. NOYB Files Complaint Against Mozilla Over Alleged User Tracking Without Consent

On Wednesday, Vienna-based advocacy group NOYB filed a complaint with the Austrian data protection authority, accusing Mozilla of tracking user behavior on websites without their consent. The complaint, initiated by the digital rights group founded by privacy activist Max Schrems, claims that Mozilla's Firefox browser enabled a feature called "privacy preserving attribution," which effectively turned the browser into a tracking tool without informing users directly.

https://timesofindia.indiatimes.com/technology/tech-news/mozilla-firefox-faces-privacy-complaint-for-alleged-user-tracking/articleshow/113657349.cms

4. FTC Takes Action Against Companies for Misuse of AI in Deceptive Practices

On Wednesday, the U.S. Federal Trade Commission (FTC) took action against five companies for using artificial intelligence in misleading and unfair ways. Three cases involved businesses that falsely promised to help consumers generate passive income through e-commerce storefronts. Additionally, the FTC settled with DoNotPay, a company that claimed to offer automated legal services, and Rytr, an AI writing tool accused of providing a feature that allowed users to create fake content.

https://www.reuters.com/technology/artificial-intelligence/ftc-announces-crackdown-deceptive-ai-claims-schemes-2024-09-25/

5. Biden Administration to Host Global AI Safety Summit in November

The Biden administration will hold a global summit on artificial intelligence (AI) safety in November to address the rapidly advancing technology and ways to mitigate its risks. The two-day event, scheduled for November 20-21 in San Francisco, will be co-hosted by Secretary of Commerce Gina Raimondo and Secretary of State Antony Blinken. The summit will include government scientists and AI experts from at least nine countries and the European Union, according to a Wednesday announcement from the Commerce Department.

https://thehill.com/policy/technology/4885996-biden-administration-artificial-intelligence-summit/
