Privacy and AI #21
In this edition of Privacy and AI
- Swedish Data Protection Authority publishes guidance on GenAI and GDPR
- Commission Guidelines on prohibited AI practices
- Dutch DPA on emotion recognition in the workplace
- Virginia Senate Approves High-Risk AI Bill
- Guidelines for Evaluating Impacts of Generative AI on Vulnerable and Marginalized Communities (California Gov)
- SDAIA Risk Assessment for Transferring Personal Data Outside of the KSA
- CNIL Transfer Impact Assessment template
- FDPIC Swiss guidance on Data Breaches
- AI Playbook for the UK Government
- AI: Italy's Guidelines for the Public Administration
- The most powerful LLM is Chinese (DeepSeek V3)
Swedish Data Protection Authority publishes guidance on GenAI and GDPR
It is a great summary of the main issues surrounding the use of GenAI by public authorities, most of which apply equally to private organisations.
Link here
Commission Guidelines on prohibited AI practices
Some important points for almost every company to consider
Prohibition: using AI systems to infer emotions in the workplace
1) Workplace includes
- Online and offline
- Employees and self-employed workers
- Processing during the recruitment process
2) The AI system must infer emotions from biometric data (a facial biometric template, for instance)
- AI systems inferring emotions without using biometric data are outside the scope of the prohibition (even if they are HRAIS)
3) Inferring emotions in the workplace using GPAI systems
- General-purpose AI systems are also included (as long as they infer emotions from biometrics). The example in para 41 should be read in conjunction with paras 250-251
- This means that, for instance, inputting a meeting transcript into a prompt so that a GenAI system like ChatGPT infers emotions is NOT forbidden (see the sketch after this list).
See para 251: “An AI system inferring emotions from written text content (sentiment analyses) to define the style or the tone of a certain article is not based on biometric data and therefore does not fall within the scope of the prohibition.”
4) Incidental inference of employees' emotions
- Incidental inference based on the biometrics of employees in a commercial context (ie where customers, crowds, attendees, or other individuals are targeted) does not fall under this prohibition. The deployer must nonetheless implement safeguards ensuring that its own employees are not adversely affected by the use (para 270)
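To make the biometric/text boundary concrete, here is a minimal sketch of text-only emotion inference, my own illustration rather than anything from the guidelines, assuming the Hugging Face transformers library and its default sentiment model:

```python
# Minimal sketch: inferring sentiment from written text only.
# Because the input is text rather than biometric data, a system like
# this falls outside the Art. 5 prohibition per para 251, even when run
# on workplace communications (it may still raise ordinary GDPR issues).
from transformers import pipeline

# Default sentiment-analysis model (assumption: transformers is
# installed; any text classifier would illustrate the same point).
classifier = pipeline("sentiment-analysis")

transcript = "I'm not sure this deadline is realistic, but I'll try."
print(classifier(transcript))  # e.g. [{'label': 'NEGATIVE', 'score': 0.99}]
```

Swap the text input for a facial image or a voice recording and the same inference becomes biometric-based, and therefore prohibited in the workplace.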
Link here
Emotion recognition in the workplace
The Dutch DPA published the summary of feedback from the consultation on the AI Act prohibition on emotion recognition in the workplace.
One fair comment is that the prohibited practice does not expressly require emotion recognition to be derived from biometric data.
This has important implications: if a manager copy-pastes a chat conversation into a generative AI system, the system might infer the employee's emotions.
While this is a fair point from respondents, I think that only emotion recognition via biometrics should be captured, in line with the opinion of the AP (and the EC's latest guidance).
Otherwise we could end up in a situation where inferring emotions from text is not even considered a high-risk AI system outside the workplace or education, yet is prohibited when used in the workplace or education.
Virginia Senate Approves High-Risk AI Bill
The Virginia Senate has passed House Bill 2094, establishing regulations for the development and use of high-risk AI systems. The bill includes compliance requirements and civil penalties for violations. It now awaits potential approval from Governor Glenn Youngkin. If approved, it will be effective from July 1, 2026.
Link here
Guidelines for Evaluating Impacts of Generative AI on Vulnerable and Marginalized Communities (California Gov)
California has launched guidelines to evaluate the impact of GenAI on vulnerable populations. These guidelines help public servants consider the potential impacts a GenAI tool can have on vulnerable communities, with a particular focus on safe and equitable outcomes in the deployment and implementation of high-risk use cases.
The guidelines also recommend steps to evaluate equity impacts:
1) Identify skill sets your organization needs to assess the equity impacts of GenAI on vulnerable communities (eg program leaders, information officers, data experts, legal counsel, and equity leads).
2) Review the Equity Considerations before GenAI Procurement recommendations. For this, consider:
- communities served
- communities at disproportionate risk of GenAI impacts (eg marginalised or underrepresented groups, or groups overrepresented in public datasets)
- degree and scale of impact on any communities identified at disproportionate risk of GenAI impacts
3) Consider whether mitigations such as problem statement refinement, project rescoping, or data quality improvement could resolve any identified equity impacts of the proposed GenAI tool.
For this, the government drafted the GenAI Equity Evaluation Checklist.
SDAIA Risk Assessment for Transferring Personal Data Outside of the KSA
Link here
Transfer Impact Assessment template (CNIL final)
The CNIL has finalised its TIA template.
One year ago I made a post criticising the CNIL for its timing: such a template would have been far more useful before the DPF came into force (post in comments).
We don't know whether the changes proposed by the current US administration will lead to the invalidation of the DPF, which would make TIAs necessary again for EU-US transfers.
Swiss guidance on Data Breaches
The Federal Data Protection and Information Commissioner (FDPIC) released guidance on the requirements for notifying data breaches.
The basics
- The Swiss Federal Data Protection Act (FADP) requires controllers to report data breaches “as soon as possible after becoming aware of it” (the GDPR, by comparison, sets a maximum of 72 hours)
High risk or likely high risk
- the report is mandatory if the breach is likely to result in a high risk to the data subjects' rights (under the GDPR, the report to the authority is not mandatory only if the breach is unlikely to result in a risk)
- high risk: a combination of severity of harm and likelihood
- likely high risk: this must be assessed without taking into account measures that the controller merely plans, announces or initiates after the data security breach; however, immediate measures the controller was able to take before submitting the report in good time may be considered where they demonstrably excluded or minimised the anticipated effects of the breach
- Where a high risk cannot be sufficiently excluded, controllers must not delay fulfilling their reporting obligation.
Companies applying GDPR standards will not face challenges in this regard.
Freedom of Information Act
- Data breach notifications to the FDPIC (and FDPIC official documents produced in this regard) are subject to the Swiss Freedom of Information Act, and so are in principle accessible to the public under the FoIA.
Remedies
- Failure to notify a data breach is not sanctioned with a fine, but the FDPIC can order the controller to report if it becomes aware of the breach
- A data security breach can be considered a criminal offence if, for example, the controller has not complied with the minimum data security requirements (Article 61 letter c & Art. 8 letter c FADP). Fine of up to CHF 250,000
Attached are the guidance on data breaches (Feb 2025) and the guidance on TOMs (Jan 2024).
These TOMs do not constitute the measures referred to in Art. 8 letter c FADP (ie failure to comply with them does not necessarily result in a fine under Art. 61 letter c FADP)
Link here
AI Playbook for the UK Government
Yesterday the UK Government launched the AI Playbook to provide departments and public sector organisations with accessible technical guidance on the safe and effective use of AI.
It is critical for civil servants to gain an understanding of what AI can and cannot do, how it can help, and the potential ethical, legal, privacy, sustainability and security risks it poses.
What I find interesting
How to create the AI support structure (p37)
- AI strategy and adoption plan
- AI principles
- AI governance board
- AI comms strategy
- AI sourcing and partnership strategy
- AI training
How to deploy AI securely (p74-77)
It is critical to understand the different ways AI can be found in the organisation (a minimal sketch of two of these patterns follows the list)
- public AI applications and web services (ChatGPT, Claude)
- embedded AI applications (Copilot)
- public AI APIs (eg OpenAI)
- privately hosted AI models (own private cloud infrastructure)
- managed ML model hosting platforms (AWS Bedrock, Azure)
- running AI models locally (eg on device)
- working with your organisational data (fine-tuning or RAG)
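As a minimal sketch of how two of these patterns differ in practice (my illustration, not from the Playbook), the same client code can target a public AI API or a privately hosted model behind an OpenAI-compatible endpoint; the internal URL below is hypothetical:

```python
# Minimal sketch: one client interface, two deployment patterns.
from openai import OpenAI

# Public AI API: prompts and data leave your estate, so contracts,
# DPAs and data-handling terms matter (reads OPENAI_API_KEY from env).
public_client = OpenAI()

# Privately hosted model: many self-hosted inference servers expose an
# OpenAI-compatible API; the URL here is a hypothetical internal host.
private_client = OpenAI(
    base_url="https://llm.internal.example/v1",
    api_key="not-needed-internally",
)

def summarise(client: OpenAI, model: str, text: str) -> str:
    """Send the same request regardless of where the model is hosted."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": f"Summarise: {text}"}],
    )
    return resp.choices[0].message.content
```

The interface is identical, but the data protection posture is not: with the public endpoint the text is processed by a third party, while the private endpoint keeps it inside your own infrastructure.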
And many more. Link here
Artificial Intelligence: Italy's Guidelines for the Public Administration
The Agency for Digital Italy (AGID) released for consultation the first AI guidelines for the public administration
There are some interesting things, but I suggest taking a look at the performance indicators (which can be useful for companies too) at pp. 115 ff.
Link here
The most powerful LLM is Chinese (DeepSeek V3)
Salient facts
- it outperformed Meta’s Llama 3.1, OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 in a number of well-known benchmarks relating to coding, math and English
- it was developed in two months, using less powerful chips, and cost a fraction of what similarly performing models cost (only USD 6M)
- it will put pressure on US tech firms that are investing massively in computing power to build next-generation GenAI
CNBC interviewed Perplexity CEO Aravind Srinivas on this matter and on US dominance in AI.
DeepSeek simply explained
ABOUT ME
I'm working as AI Governance Manager at Informa.
Previously, I worked as a senior privacy and AI governance consultant at White Label Consultancy, and before that for other data protection consulting companies.
I'm specialised in the legal and privacy challenges that AI poses to the rights of data subjects and how companies can comply with data protection regulations and use AI systems responsibly. This is also the topic of my PhD thesis.
I have an LL.M. (University of Manchester) and a PhD (Bocconi University, Milan).
I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation” and “Privacy and AI”. You can find the books here