Privacy and AI weekly - Issue 7
This week in Privacy and AI weekly
Privacy
- Guidelines 3/2022 on Dark patterns in social media platform interfaces
- Digital Markets Act (DMA): agreement between the Council and the European Parliament
- Public competition for IT professionals at the Garante per la Protezione dei Dati Personali
Artificial Intelligence
- Are Human Rights Impact Assessments for AI systems mandatory?
This week's participations
- IAPP KnowledgeNet Romania
- ISACA Greater Houston Chapter: Saturday Privacy Sessions (Saturday, 26th March)
DATA PROTECTION
Guidelines 3/2022 on Dark patterns in social media platform interfaces
The European Data Protection Board published for public consultation the Guidelines 3/2022 on Dark patterns in social media platform interfaces: How to recognise and avoid them
The dark patterns addressed within the Guidelines were divided into the following categories and subcategories:
- Overloading: users are confronted with a large quantity of requests, information, options or possibilities in order to prompt them to share more data or unintentionally allow the processing of personal data against the data subject's expectations.
- Skipping: designing the interface or user experience in a way that users forget, or do not think about, all or some of the data protection aspects.
- Stirring: affecting the choices users would make by appealing to their emotions or using visual nudges.
- Hindering: obstructing or blocking users in their process of getting informed or managing their data by making the action hard or impossible to achieve.
- Fickle: the design of the interface is inconsistent and unclear, making it hard for users to navigate the different data protection control tools and to understand the purpose of the processing.
- Left in the dark: the interface is designed to hide information or data protection control tools, or to leave users unsure of how their data is processed and what kind of control they might have over it regarding the exercise of their rights.
Download the guidelines here
Digital Markets Act: agreement between the Council and the EU Parliament
The Council and the Parliament today reached a provisional political agreement on the Digital Markets Act (DMA), which aims to make the digital sector fairer and more competitive. Final technical work will make it possible to finalise the text in the coming days.
Who are considered gatekeepers?
To be designated as a gatekeeper, a platform must have an annual turnover of at least €7.5 billion within the EU in the past three years or a market valuation of at least €75 billion, as well as at least 45 million monthly end users and 10,000 business users established in the EU. The platform must also control one or more core platform services in at least three member states. These core platform services include marketplaces and app stores, search engines, social networking, cloud services, advertising services, voice assistants and web browsers.
Obligations for gatekeepers
They can no longer, among other things: rank their own products or services more favourably than those of other market participants (self-preferencing); pre-install certain software applications; require app developers to use certain of the gatekeeper's services (such as payment systems or identity providers) in order to appear in app stores; or keep the data gathered through their different services.
More information
Public competition for IT professionals at the Garante per la Protezione dei Dati Personali
A public competition, based on qualifications and examinations, for 10 (ten) trainee positions with an IT-technological profile: 2 (two) in the operational career track at the initial level of the operational salary scale with an IT-technological profile at the Garante per la protezione dei dati personali, and 8 (eight) in the operational career track at level 1 of the operational salary scale with an IT-technological profile at the Autorità Nazionale Anticorruzione.
Agreement in principle on a new framework for transatlantic data flows
Shameless self-advertising
Virtual Romania KnowledgeNet
This week I had the honour to share a panel with two outstanding privacy professionals, Rosario Murga Ruiz and Adrian Munteanu, at the Virtual Romania KnowledgeNet (IAPP - International Association of Privacy Professionals).
We discussed the relationship between cybersecurity and privacy, in particular:
- Cybersecurity & Privacy essentials
- Cyber Risk Controls
- PETs
ISACA Greater Houston Chapter: Saturday Privacy Sessions
On Saturday, 26th March I will take part in the ISACA Greater Houston Chapter's Saturday Privacy Sessions. Together with Harvey Nusz, we'll discuss some recent developments in the area. The event will start at 9:00 am CST.
You can register here
Many thanks to Harvey Nusz for the kind invitation!
ARTIFICIAL INTELLIGENCE
Are Human Rights Impact Assessments for AI systems mandatory? According to the Slovak Constitutional Court, the answer is yes.
In Europe, data protection authorities have taken significant steps to ensure that providers and users of AI systems respect fundamental rights, in particular, the right to data protection and to privacy (see for instance, decisions issued by the Italian DPA against Clearview AI, Deliveroo or Foodinho).
Despite the efforts made by data protection authorities, a question is worth asking: is the current data protection framework enough to deal with the challenges posed by #AI systems? The simple answer is no.
One of the most important tools included in data protection laws to evaluate the effects of processing activities and to mitigate the risks to individuals is the data protection impact assessment. However, DPIAs (or PIAs) are mostly concerned with risks related to the privacy of individuals.
What about #humanrights impact assessments (HRIAs)?
An HRIA analyses the effects that the activities of public administrations or private actors have on rights-holders such as workers, local community members, consumers and others. HRIAs are not limited to data protection rights and cover the whole range of fundamental rights that individuals enjoy.
Should providers or users of AI systems undertake HRIA?
For some high-risk AI systems, an HRIA, while not mandatory, is recommended as a best practice.
Are HRIAs mandatory?
According to the Slovak Constitutional Court, users of AI systems must, under certain circumstances, carry out an HRIA.
In December 2021, the Slovak Constitutional Court held in the eKasa case that where public administrations deploy automated decision-making systems, the impact assessment must focus on the overall human rights impact on individuals.
Crucially, the SKCC relied heavily on Council of Europe Recommendation CM/Rec(2020)1.
Key takeaways:
- While the public administration should perform a DPIA for these processing operations, the impact assessment MUST focus on the overall human rights impact of automated systems on individuals.
- The same conditions of transparency must be met where the state procures the AI solution from a vendor. IP rights cannot be a reason to deny access to the information.
- There must be independent collective control over the use of such a system, operating both ex-ante and ex-post.
- Control must concern the quality of the system, its components, errors and imperfections before and after deployment (e.g. via audits, quality review of decisions, reporting and statistics). The more complex the system, the deeper the control must be.
At Qubit Privacy we can help you evaluate whether you should conduct an HRIA for your AI system and assist you in its elaboration.
About Qubit Privacy
Qubit Privacy is a boutique consultancy firm that provides data protection and AI governance services. Qubit Privacy helps your organization stay compliant with privacy regulations such as the GDPR, protect itself against cyber-attacks and data breaches, and manage and assess algorithmic risks through a range of affordable professional solutions.
Federico Marengo is the founder of Qubit Privacy. He is a PhD student (in data protection and AI) and the author of "Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation".
For inquiries, feedback or collaborations, please contact me at [email protected]