Privacy and AI weekly - Issue 7

This week in Privacy and AI

Privacy

• Guidelines 3/2022 on Dark patterns in social media platform interfaces

• Digital Markets Act (DMA): agreement between the Council and the European Parliament

• Public competition for IT professionals at the Garante per la Protezione dei Dati Personali

Artificial Intelligence

• Are Human Rights Impact Assessments for AI systems mandatory?

This week's participations

• IAPP KnowledgeNet Romania

• ISACA Greater Houston Chapter: Saturday Privacy Sessions (Sat 26th March)


DATA PROTECTION

Guidelines 3/2022 on Dark patterns in social media platform interfaces

The European Data Protection Board (EDPB) published for public consultation the Guidelines 3/2022 on Dark patterns in social media platform interfaces: How to recognise and avoid them.


The Guidelines divide dark patterns into the following categories and subcategories (summarised in the sketch after the list):

Overloading: users are confronted with a large quantity of requests, information, options or possibilities, prompting them to share more data or to unintentionally allow processing against the data subject's expectations.

  • Overloading includes: continuous prompting, privacy maze, and 'too many options'

Skipping: designing the interface or user experience in a way that the users forget or do not think about all or some of the data protection aspects.

  • Skipping includes: deceptive snugness and 'look over there'

Stirring: affecting the choices users would make by appealing to their emotions or using visual nudges.

  • Stirring includes: emotional steering and 'hidden in plain sight'

Hindering: an obstruction or blocking of users in their process of getting informed or managing their data by making the action hard or impossible to achieve.

  • Hindering includes: dead-end, 'longer than necessary' and misleading information

Fickle: the design of the interface is inconsistent and not clear, making it hard for users to navigate the different data protection control tools and to understand the purpose of the processing.

  • Fickle includes: lacking hierarchy and decontextualising

Left in the dark: an interface is designed in a way to hide information or data protection control tools or to leave users unsure of how their data is processed and what kind of control they might have over it regarding the exercise of their rights.

  • Left in the dark includes: language discontinuity, conflicting information and ambiguous wording or information
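
Since the taxonomy is a simple two-level hierarchy, it can be handy to keep it around as a lookup structure when auditing an interface. Here is a minimal Python sketch; the `DARK_PATTERNS` dict and the `categorise` helper are my own illustrative names, not anything defined in the Guidelines:

```python
# Illustrative only: the EDPB taxonomy as a two-level lookup,
# mapping each category to its subcategories (the dark pattern types).
DARK_PATTERNS = {
    "overloading": ["continuous prompting", "privacy maze", "too many options"],
    "skipping": ["deceptive snugness", "look over there"],
    "stirring": ["emotional steering", "hidden in plain sight"],
    "hindering": ["dead-end", "longer than necessary", "misleading information"],
    "fickle": ["lacking hierarchy", "decontextualising"],
    "left in the dark": ["language discontinuity", "conflicting information",
                         "ambiguous wording or information"],
}

def categorise(pattern: str) -> "str | None":
    """Return the EDPB category a given dark pattern belongs to, if any."""
    for category, subtypes in DARK_PATTERNS.items():
        if pattern.lower() in subtypes:
            return category
    return None

print(categorise("Privacy maze"))  # -> overloading
```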

Download the guidelines here


Digital Markets Act: agreement between the Council and the EU Parliament


This week the Council and the European Parliament reached a provisional political agreement on the Digital Markets Act (DMA), which aims to make the digital sector fairer and more competitive. Final technical work will make it possible to finalise the text in the coming days.

Who is considered a gatekeeper?

  1. the company must either have had an annual turnover of at least €7.5bn within the EU in the past three years or have a market valuation of at least €75bn; and
  2. it must have at least 45m monthly end users and at least 10,000 business users established in the EU.

The platform must also control one or more core platform services in at least three member states. These core platform services include marketplaces and app stores, search engines, social networking, cloud services, advertising services, voice assistants and web browsers.
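
To make the cumulative logic of these thresholds concrete, here is a minimal Python sketch; the `Platform` fields and the `is_gatekeeper` helper are my own illustrative names, and the DMA text itself remains the authoritative formulation:

```python
from dataclasses import dataclass

@dataclass
class Platform:
    eu_turnover_bn: float        # annual EU turnover, in € billions (past three years)
    market_cap_bn: float         # market valuation, in € billions
    monthly_end_users_m: float   # monthly end users in the EU, in millions
    eu_business_users: int       # business users established in the EU
    member_states_with_cps: int  # member states where it controls a core platform service

def is_gatekeeper(p: Platform) -> bool:
    """Rough encoding of the DMA quantitative criteria (illustrative only)."""
    financial = p.eu_turnover_bn >= 7.5 or p.market_cap_bn >= 75   # alternatives
    user_base = p.monthly_end_users_m >= 45 and p.eu_business_users >= 10_000
    presence = p.member_states_with_cps >= 3
    return financial and user_base and presence                    # cumulative

# Example: a platform clearing every threshold
print(is_gatekeeper(Platform(8.0, 80.0, 50.0, 12_000, 5)))  # True
```

Note how the turnover and market-cap tests are alternatives, while the user-base and member-state tests must both be met on top of them.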

Obligations for gatekeepers

  • ensure that users have the right to unsubscribe from core platform services under similar conditions to subscription
  • for the most important software (e.g. web browsers), not require this software by default upon installation of the operating system
  • ensure the interoperability of their instant messaging services’ basic functionalities
  • allow app developers fair access to the supplementary functionalities of smartphones (e.g. NFC chip)
  • give sellers access to their marketing or advertising performance data on the platform
  • inform the European Commission of their acquisitions and mergers

They can no longer

  • rank their own products or services higher than those of others (self-preferencing)
  • reuse private data collected during a service for the purposes of another service
  • establish unfair conditions for business users
  • pre-install certain software applications
  • require app developers to use certain services (e.g. payment systems or identity providers) in order to be listed in app stores

More information


Public competition for IT professionals at the Garante per la Protezione dei Dati Personali


A public competition, based on qualifications and examinations, for 10 (ten) probationary positions with an IT-technology profile: 2 (two) in the operational career track, at the initial level of the operational salary scale, at the Garante per la protezione dei dati personali, and 8 (eight) in the operational career track, at level 1 of the operational salary scale, at the Autorità Nazionale Anticorruzione.


Agreement in principle on a new framework for transatlantic data flows

European Commission President Ursula von der Leyen and US President Joe Biden announced that the EU and the US have reached an agreement in principle on a new Trans-Atlantic Data Privacy Framework for transatlantic data flows.



Shameless self-advertising

Virtual Romania KnowledgeNet

This week I had the honour to share a panel with two outstanding privacy professionals, Rosario Murga Ruiz and Adrian Munteanu, at the Virtual Romania KnowledgeNet (IAPP - International Association of Privacy Professionals).

We discussed the relationship between cybersecurity and privacy, in particular:

- Cybersecurity & Privacy essentials

- Cyber Risk Controls

- PETs (privacy-enhancing technologies)



ISACA Greater Houston Chapter: Saturday Privacy Sessions

On Saturday 26th March I will take part in the ISACA Greater Houston Chapter's Saturday Privacy Sessions, where Harvey Nusz and I will discuss some recent developments in the area. The event starts at 9:00 am CST.

You can register here

Many thanks Harvey Nusz for the kind invitation!


ARTIFICIAL INTELLIGENCE

Are Human Rights Impact Assessments for AI systems mandatory? According to the Slovak Constitutional Court, the answer is yes.

In Europe, data protection authorities have taken significant steps to ensure that providers and users of AI systems respect fundamental rights, in particular the rights to data protection and privacy (see, for instance, the decisions issued by the Italian DPA against Clearview AI, Deliveroo or Foodinho).

Despite the efforts made by data protection authorities, a question is worth answering: is the current data protection framework enough to deal with the challenges posed by #AI systems? The simple answer is no.

One of the most important tools included in data protection laws to evaluate the effects of processing activities and to mitigate risks to individuals is the data protection impact assessment. However, DPIAs (or PIAs) are mostly concerned with risks to the privacy of individuals.

What about #humanrights impact assessments (HRIAs)?

An HRIA analyses the effects that the activities of public administrations or private actors have on rights-holders such as workers, local community members, consumers and others. It is not limited to data protection rights and covers the whole range of fundamental rights that individuals enjoy.

Should providers or users of AI systems undertake an HRIA?

For some high-risk AI systems, while not mandatory, it is recommended as best practice.

Are HRIAs mandatory?

According to the Slovak Constitutional Court, users of AI systems must, under certain circumstances, carry out an HRIA.


In December 2021, the Slovak Constitutional Court held in the eKasa case that where public administrations deploy automated decision-making systems, the impact assessment must focus on the overall human rights impact on individuals.

Crucially, the SKCC relied heavily on the Council of Europe Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems.

Key takeaways:

• While the public administration should perform a DPIA for these processing operations, the impact assessment MUST focus on the overall human rights impact of automated systems on individuals.

• The same conditions of transparency must be met where the state procures the AI solution from a vendor; IP rights cannot be a reason to deny access to the information.

• There must be independent collective control over the use of such a system, operating both ex ante and ex post.

• Control must cover the quality of the system, its components, errors and imperfections before and after deployment (e.g. via audits, quality review of decisions, reporting and statistics). The more complex the system, the deeper the control must be.

More information

At Qubit Privacy we can help you evaluate whether you should conduct an HRIA for your AI system and assist you in preparing it.



About Qubit Privacy

Qubit Privacy is a boutique consultancy firm that provides data protection and AI governance services. Qubit Privacy helps your organization stay compliant with privacy regulations like the GDPR, protect itself against cyber-attacks and data breaches, and manage and assess algorithmic risks through a range of affordable professional solutions.

Federico Marengo is the founder of Qubit Privacy. He is a PhD student (in data protection and AI) and the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation”.

For inquiries, feedback or collaborations, please contact me at [email protected]

Jose Antonio Sanchez Duran

Senior Technology Advisor - Data Privacy Professional - Cybersecurity Professional - ISO 27001 Lead Auditor - ISO 22301 SGCN


A great compilation job on privacy and AI. Thank you very much for the contribution.

Marco Galli

Data Protection, Tech & IP Lawyer | PedersoliGattai | PhD, LLM | CIPP/E


Always very interesting news (especially President Von der Leyen's statement!), thanks for sharing, Federico.

Rosario Murga Ruiz

Data Protection Consultant | External DPO | Health | CIPP/E Trainer


I look forward to hearing more about HRIA and the mitigation measures to reduce the impact of AI on people. Especially for vulnerable individuals, this assessment is essential. Great issue this week, Federico!

Huan Nguyen

Business and Human Rights | Sustainability ESG | Responsible Business | Responsible Technology


The reach of HRIAs encompasses almost everything, including the tech field, where many people still think that law is too much for innovation. Well, a balance has to be struck.

Rohit Arora

Competition/Antitrust Lawyer | Supporting National & International Clients with Indian Competition Law | Litigation & Compliance Solutions | Comprehensive Antitrust Advisory


Thank you for sharing!
