Privacy and AI #15


In this edition of Privacy and AI:

- Generative AI and EU Institutions (EDPS)

- Supervision of AI systems in the EU (NL DPA)

- Data protection authorities and the use of algorithms (NL DPA)

- Colorado landmark AI law

- ETSI TR 104 225 Privacy aspects of AI/ML systems (ETSI)

- The AI Reality: IT Pros Weigh in on Knowledge Gaps, Policies, Jobs Outlook and More (ISACA)

- Hedge funds’ use of AI in trading

- Legal Issues and Business Considerations When Using GenAI in Digital Advertising (IAB)

- Generative AI Technologies and Their Commercial Applications (US GAO)

- Transparency for ML-Enabled Medical Devices: Guiding Principles

- Lex Fridman interview with Aravind Srinivas, CEO of Perplexity AI

- The Ethics of AI by Michael Sandel



Privacy and AI reviewed in the Journal of Data Protection & Privacy

Many thanks to Steve Wilkinson for the review and to Ardi Kolah for the invitation.


Link here


Generative AI and EU Institutions

The European Data Protection Supervisor (EDPS) issued guidance for ensuring data protection compliance when using generative AI systems.

Guidance issued by the EDPS is particularly important given the specific role the EDPS plays under the AI Act.

According to the AI Act the EDPS will:

- have the power to establish AI regulatory sandboxes for EUIs (art 57(3) AIA)

- participate as an observer in the European AI Board (art 65(2) AIA)

- act as the competent authority for the supervision of EUIs falling within the AIA (art 70(9) AIA)

- act as the market surveillance authority when EUIs fall within the AIA (art 74(9) AIA)

- have the power to impose administrative fines on EUIs (art 100 AIA)

This means that the EDPS's interpretation of, and guidance on, the development and use of AI systems will have important consequences for other AI operators.

The guidance issued is far from revelatory, and it could have been more concrete and precise given the EDPS's role and the previous guidance from national authorities.

For instance:

- Legal basis: “the use of web scraping techniques to collect data from websites and their use for training purposes might not comply with relevant data protection principles, including data minimisation and the principle of accuracy, insofar as there is no assessment on the reliability of the sources.”

So is web scraping compliant or not?

- Accuracy: “When EUIs use a generative AI system or training, testing or validation datasets provided by a third party, contractual assurances and documentation must be obtained on the procedures used to ensure the accuracy of the data used for the development of the system”

ChatGPT was trained in part on a dataset called Common Crawl, which contains approximately 3 billion pages of text. How can accuracy be ensured at that scale? Microsoft Copilot is powered by GPT-4 to improve its responses; can Microsoft Copilot be used? And so on.

In any case, it is good guidance for a first overview of the most important challenges of GenAI.

Link here



Supervision of AI systems in the EU

Two Dutch supervisory authorities, the Autoriteit Persoonsgegevens (Dutch DPA, AP) and the Authority for Digital Infrastructure (RDI), advised the Dutch government on how supervision of AI systems should be conducted and by whom.

Some highlights:

- HRAIS in products that already require CE marking: supervision of these HRAIS should remain with the existing supervisory authorities, in alignment with existing regulations

- HRAIS for which no CE marking is currently required: supervision should largely lie with the Dutch Data Protection Authority (the AP should be considered the “market surveillance authority”)

- Exceptions: HRAIS in

– the financial sector (Authority for the Financial Markets and De Nederlandsche Bank)

– critical infrastructures (Human Environment and Transport Inspectorate and Authority for Digital Infrastructure)

– judicial uses (TBD)

Authorities supervising HRAIS

Authorities supervising forbidden AI practices

It remains to be seen what position other member states will take on supervising HRAIS as required by the AI Act. I expect many of them to follow an approach similar to that of the Dutch authorities that drafted this report.

Link here



Data protection authorities and the use of algorithms

The Dutch DPA Chairman calls for transparency in the use of algorithms, in particular when public bodies use them to deliver public or essential services.

“A first step towards more transparency is for the government to let it be known when algorithms have played a role in a decision taken”, and this can be done “in the letter of explanation accompanying the decision”. Judicial bodies are not exempt from this transparency principle.

Judges have additional responsibilities when it comes to the use of AI systems: judges should “actively assess the impact of algorithms on government decisions” and “look beyond the dispute” and should be “extra alert to any suspicion that the algorithm has been used” for decision making.

Link here



Colorado landmark AI law

Colorado passed a law concerning consumer protections in interactions with AI systems.

The law applies to "High-Risk AI Systems" (HRAIS), i.e. those that make, or are a substantial factor in making, a consequential decision.

"Consequential decisions" are those that have a material legal or similarly significant effect on the provision or denial, or the cost or terms, of education, employment, financial services, government services, healthcare, housing, insurance, or legal services (a closed list).

Developers of HRAIS

- Developers have a duty to avoid algorithmic discrimination. To this end, the law requires them to produce the following documentation: reasonably foreseeable uses, a high-level summary of the training dataset, purpose, benefits, assessments, data governance measures, mitigations, and instructions for use. They must also publish summary information on their website or in a public use case inventory.

Deployers of HRAIS

- Establish a Risk Management Policy and Program (RMPP), based for instance on the NIST AI RMF or ISO 42001 [NB: ISO 42001 is an AI management system standard; for AI risk management, ISO published ISO 23894]. The RMPP may cover multiple high-risk AI systems.

- Complete an AI Impact Assessment (AIIA) annually, including purpose, use case, deployment context, algorithmic discrimination risks and mitigations, categories of data, data used in “customization”, metrics, transparency measures, and post-deployment monitoring.

- Comply with transparency obligations to consumers when using HRAIS and on their website.

Developers and deployers must generally comply with these requirements from February 2026.

Link here



ETSI TR 104 225 Privacy aspects of AI/ML systems

The AI Act establishes a presumption of conformity when AI providers comply with EU-harmonised standards (art 40).

A harmonised standard is a European standard developed by a recognised European Standards Organisation: CEN, CENELEC, or ETSI.

ETSI (European Telecommunications Standards Institute), via its Technical Committee Securing Artificial Intelligence (TC SAI), is developing technical specifications to mitigate threats arising from the deployment of AI, as well as threats to AI systems themselves, whether from other AI systems or from conventional sources.

ETSI recently released ETSI TR 104 225, Privacy aspects of AI/ML systems.

Some highlights:

“While AI technology could be interpreted as capable of creating a privacy risk violation, AI does not itself violate privacy”

Building AI models more securely: federated learning. Federated learning means that multiple participants, each with their own training data set, construct a joint model by training local models on their own data while periodically exchanging model parameters, updates to those parameters, or partially constructed models with the other participants.
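
To make the mechanics concrete, here is a minimal federated averaging sketch in Python. It is illustrative only: the linear model, the plain gradient step, and names such as local_step and fed_avg are my assumptions, not anything specified in the ETSI report.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient-descent step on a participant's private data (linear model)."""
    grad = X.T @ (X @ weights - y) / len(y)  # mean-squared-error gradient
    return weights - lr * grad

def fed_avg(client_data, rounds=10, dim=3):
    """Each round, clients train locally and only model weights are averaged."""
    global_w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_step(global_w.copy(), X, y) for X, y in client_data]
        global_w = np.mean(local_ws, axis=0)  # raw data never leaves a client
    return global_w

# Three participants, each holding a private dataset
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
print(fed_avg(clients))
```

The privacy gain is that only parameters cross organisational boundaries, although parameter updates can themselves leak information, which is why the remediation approaches below still matter.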

AI Privacy Remediation Approaches

The suggested approaches to remediate AI-related privacy attacks are:

- Differential privacy: to counter membership inference, property inference, reconstruction, and model extraction (see the sketch after this list)

- Homomorphic encryption

- Privacy preserving measurement
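
As a minimal sketch of how differential privacy counters membership inference, consider a count query protected with the Laplace mechanism. The function name dp_count, the epsilon values, and the example data are illustrative assumptions, not drawn from the ETSI report.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Epsilon-DP count via the Laplace mechanism; a count query's sensitivity is 1."""
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.default_rng().laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [23, 37, 41, 29, 52, 61]
# The noisy answer fluctuates around the true value (3), bounding what an
# attacker can infer about whether any single record is in the dataset.
print(dp_count(ages, lambda a: a > 35, epsilon=0.5))
```

Lower epsilon means more noise and stronger privacy; the same idea, applied to gradients during training, underlies differentially private model training.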

Some recommendations from ETSI

- Review internal data transformations of AI algorithms; this is crucial given the ability of AI systems to detect proxies for privacy-related parameters like race or gender (e.g. postal code, language, religious affiliation). See the sketch after this list

- Improve AI algorithm transparency

- Test and evaluate the impact on consumers

- Have an appropriate legal review
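
One simple way to act on the first recommendation is to test whether an apparently innocuous input encodes a protected attribute. The sketch below uses Cramér's V on a contingency table; the variable names, the choice of test, and the toy data are my assumptions for illustration.

```python
import numpy as np

def cramers_v(x, y):
    """Association strength (0..1) between two categorical variables."""
    cats_x, cats_y = np.unique(x), np.unique(y)
    table = np.array([[np.sum((x == a) & (y == b)) for b in cats_y]
                      for a in cats_x], dtype=float)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1
    return np.sqrt(chi2 / (n * k)) if k else 0.0

postal = np.array(["A", "A", "B", "B", "A", "B", "A", "B"])
group  = np.array(["x", "x", "y", "y", "x", "y", "x", "y"])
print(cramers_v(postal, group))  # 1.0: postal code fully encodes group membership
```

A value near 1 flags the feature as a likely proxy that deserves review before the model is deployed.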

Link here




The AI Reality: IT Pros Weigh in on Knowledge Gaps, Policies, Jobs Outlook and More (ISACA)

ISACA surveyed more than 3,000 IT professionals on these topics. The results are as follows:

- More training needed: 40% of the organizations offer no AI training at all

- No AI policies: only 15% of organizations have a formal, comprehensive AI policy

- Use of GenAI: while 42% of organizations allow the use of GenAI, it is believed that 70% of employees are using AI and 60% are using GenAI

- AI most frequent use cases:

1) increase productivity

2) repetitive tasks automation

3) written content creation

- AI risk management: only 35% say that AI risks are an immediate priority for their organization

- Job market:

-- the vast majority of respondents believe that many jobs will be modified due to AI

-- 85% say that they will need to increase their skills and knowledge in AI within 2 years to advance or even retain their jobs

Link here



Hedge funds’ use of AI in trading

The Chair of the US Senate Homeland Security and Governmental Affairs Committee released a report examining how hedge funds (HF) use AI to inform trading decisions and the potential impact this could have on market stability.

Findings

1) HF use different terms to name and define their AI-based systems. Algorithmic trading, high-frequency trading, and quantitative trading do not rely on ML, but AI-based trading does.

2) HF do not have uniform requirements or a shared understanding of when human review is necessary in trading decisions, and none defined a specific point at which that intervention must occur.

3) Existing and proposed regulations concerning AI in the financial sector fail to classify technologies based on their associated risk levels

Risks arising from the use of AI by hedge funds in trading

Unique risks

- Explainability

- Disparate outcomes and bias: “Investment advisors have a duty to act in their client’s best interests and must be able to explain to their clients how they make investment decisions. Using AI to inform these trading decisions complicates this duty if investment advisors are not able to fully explain how a decision was made or if the decision was made in such a way that contributes to increased bias.”

- Accountability: “If humans become less involved in the operations of AI systems for trading, the ability of investment managers to ensure ‘a continuous chain of human responsibility across the entire AI project lifecycle’, including ‘conceptualization, design, development, deployment, and retirement’, and to demonstrate that through audits will be critical.”

Amplification of risks

- Disclosure to clients and conflict of interests

- Herding behaviour

- Manipulation and influence

- Market stability

4) Independent regulatory agencies, like the SEC and CFTC, are exempt from the requirements of EO 14110.

5) Regulators have begun to examine regulations for potential gaps in authority, but have not sufficiently clarified how current regulations apply to HF's use of AI in trading decisions.

6) AI’s inherent complexity and lack of explainability can frustrate compliance obligations, including the ability to provide adequate disclosures to clients.

Qualifying HF must make certain disclosures to regulators and clients about their trading decisions, yet it is difficult for HF to fully explain trading decisions informed by AI.

7) HF perform accuracy and safety reviews at different points and do not disclose to investors how or when they perform these reviews.

While HF disclose to investors some information regarding their use of AI technologies, these disclosures are high-level and do not include details on how systems are reviewed, or may not convey when and how AI systems are developed, employed and tested for safety and accuracy.

Recommendations

1) Create common definitions for HF's systems that utilize AI

2) Create AI operational baselines and establish a system for accountability in AI deployment

3) Require internal risk assessments that identify levels of risk for various use cases (SEC and CFTC should develop a risk assessment framework, adhering to principles in NIST AI RMF)

4) Codify EO 14110 and OMB Guidance and extend to independent agencies

5) Clarify the authority of current regulations: the SEC and CFTC should clarify the application of existing regulations to AI-related technologies

6) Disclose necessary information on use and reliability of AI technologies

7) Require standardized audits of AI trading systems and audit trail disclosures for investors

Link here



Legal Issues and Business Considerations When Using GenAI in Digital Advertising (IAB)

IAB published a white paper addressing the legal and business issues concerning the creation, training, and implementation of generative AI in digital advertising.

While the white paper focuses on US law, it provides a great overview of the most important use cases and challenges of using AI in the context of digital advertising.

AI digital advertising use cases

- audience segmentation and targeting

- budget and performance optimization

- campaign measuring

- audience intelligence

- testing ad creatives

- chatbots

- content generation

Legal issues regarding the use of AI to create content

- no copyright protection for works created by non-humans (USA)

- while there is a proliferation of lawsuits concerning the collection and use of third-party data and content to train AI models, no cases have been reported in which an end user of a third-party AI tool was sued for copyright infringement based on the output generated in response to the user's prompts

- quality and accuracy risks (hallucinations, misinformation and defamation)

- biases

- creation of offensive content

- transparency and disclosure of the use of AI

It also evaluates the legal issues concerning the training of AI models, with a focus on copyright, the assessment of the fair use defense, and the problems of scraping.

Link here



Generative AI Technologies and Their Commercial Applications

The US Government Accountability Office prepared a report about GenAI.

Highlights:

- GenAI differs from other AI systems in its ability to create novel content, in the vast volumes of data it requires for training, and in the greater size and complexity of its models.

- GenAI systems employ several model architectures, or underlying structures. These systems, referred to as neural networks, are modeled loosely on the human brain and recognize patterns in data (see the toy sketch after this list).

- Commercial developers have created a wide range of generative AI models that produce text, code, image, and video outputs. Developers have also created products and services that enhance existing products or support customized development and refinement of models to meet customer needs. Their benefits and risks remain unclear for many applications.
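
For readers unfamiliar with the term, here is a toy forward pass through a two-layer neural network, just to show mechanically what "layers that recognize patterns" means. The sizes, names, and random data are illustrative assumptions; real generative models are vastly larger and trained on the data volumes the report describes.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(x, w1, b1, w2, b2):
    """Two-layer network: linear -> ReLU -> linear."""
    hidden = np.maximum(0, x @ w1 + b1)  # the nonlinearity lets it capture patterns
    return hidden @ w2 + b2

x = rng.normal(size=(4, 8))               # a batch of 4 inputs with 8 features each
w1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
w2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)
print(forward(x, w1, b1, w2, b2).shape)   # (4, 1): one output per input
```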


An important point about computing power

Article 51 AI Act

Classification of general-purpose AI models as general-purpose AI models with systemic risk

2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in FLOPs is greater than 10^25.
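
To see what the 10^25 FLOPs threshold means in practice, one can use the common heuristic that transformer training compute is roughly 6 × parameters × training tokens. The heuristic and the example figures below are assumptions for illustration, not part of the AI Act text.

```python
THRESHOLD_FLOPS = 1e25  # art 51(2) AIA: presumption of systemic-risk capabilities

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > THRESHOLD_FLOPS}")
# 8.40e+23 FLOPs -> systemic risk presumed: False
```

Under this rough estimate, a model of that hypothetical size would still sit roughly an order of magnitude below the threshold.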

Link here



Transparency for ML-Enabled Medical Devices: Guiding Principles

In 2021, three national health agencies (the US FDA, Health Canada, and the UK MHRA) identified 10 guiding principles for good machine learning practice (GMLP).

In June, they further identified guiding principles for transparency for machine learning-enabled medical devices (MLMDs). These principles build upon the previously identified GMLP principles.

The guiding principles for transparency of MLMDs consider:

- who (relevant audiences)

Transparency is relevant to all parties involved in a patient’s health care, including those intended to:

- use or receive health care with the device.

- make decisions about the device to support patient outcomes.

- why (motivation)

Transparency supports:

- safe and effective use.

- patient-centered care.

- identification and evaluation of risks and benefits of a device.

- informed decision-making and risk management.

- device maintenance and detection of errors or performance degradation.

- health equity through identification of bias.

- increased fluency and confidence in MLMD use, and increased adoption of the technology.

- what (relevant information)

Enabling an understanding of the MLMD includes sharing relevant information on:

- device characterization and intended use.

- how the device fits into health care workflow, including the intended impact on the judgment of a health care professional.

- device performance.

- device benefits and risks.

- product development and risk management activities across the lifecycle.

- logic of the model, when available.

- device limitations, including biases, confidence intervals and data characterization gaps.

- how safety and effectiveness are maintained across the lifecycle.

- where (placement of information)

Maximizing the utility of the software user interface can:

- make information more responsive.

- allow information to be personalized, adaptive and reciprocal.

- address user needs through a variety of modalities.

- when (timing)

Timely communication can support successful transparency, such as:

- considering information needs at different stages of the total product lifecycle.

- providing notifications of device updates.

- providing targeted information when it’s needed in the workflow.

- how (methods used to support transparency)

Human-centered design principles can support transparency.

Link here



Lex Fridman interview with Aravind Srinivas, CEO of Perplexity AI




The Ethics of AI by Michael Sandel

A 37-minute discussion on AI ethics led by Prof. Michael Sandel.



Transparency note: GenAI tools

  1. Has any text been generated using AI? NO
  2. Has any text been improved using AI? This might include an AI system like Grammarly offering suggestions to reorder sentences or words to increase a clarity score. NO
  3. Has any text been suggested using AI? This might include asking ChatGPT for an outline, or having the next paragraph drafted based on previous text. NO
  4. Has the text been corrected using AI and – if so – have suggestions for spelling and grammar been accepted or rejected based on human discretion? YES, the Grammarly app was used for typos and grammar
  5. Has GenAI been used in another way? YES, Google Translate was used to translate materials (e.g. Dutch to English)


Unsubscribing

You can unsubscribe from this newsletter at any time. Follow this link to learn how.


ABOUT ME

I'm a senior privacy and AI governance consultant currently working for White Label Consultancy. I previously worked for other data protection consulting companies.

I'm specialised in the legal and privacy challenges that AI poses to the rights of data subjects and how companies can comply with data protection regulations and use AI systems responsibly. This is also the topic of my PhD thesis.

I have an LL.M. (University of Manchester) and a PhD (Bocconi University, Milan).

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation“ and "Privacy and AI". You can find the books here

