Privacy and AI weekly - Issue 6

In this issue:

Privacy

• Alternatives to Google Analytics

• Europe is worried about Kaspersky

• DPC imposes a €17m fine on Meta (FB) for data breaches

AI

• CoE's Guidelines on the Human Rights Impacts of Algorithmic Systems (2020)

• AIA draft: JURI draft report open for comments

• NIST Report on Identifying and Managing Bias in AI

• NIST AI Risk Management Framework draft (!!!)


DATA PROTECTION

Alternatives to Google Analytics

After NOYB’s complaints against the use of Google Analytics, the Austrian SA and the CNIL (Commission Nationale de l'Informatique et des Libertés) issued decisions finding that the transfers made by Google to the USA did not satisfy the requirements set by the CJEU in #SchremsII.

The EDPS also criticised the use of GA by EU Institutions.

The Belgian and the Dutch SAs are investigating similar issues.

Last year, the CNIL issued recommendations concerning the use of analytics on websites.

CNIL recommendations (edited automated translation)

To respect data subject (DS) rights:

• users must be informed of the implementation of these trackers, e.g. via the privacy policy of the site or the mobile application

• the lifespan of the #trackers is limited to a period allowing a relevant comparison of audiences over time, as is the case for 13 months, and is not automatically extended during new visits (see the sketch after this list)

• the information collected through these trackers is kept for a maximum period of 25 months

• the lifespan and retention periods mentioned above are periodically reviewed and limited to what is strictly necessary
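As a minimal sketch of the lifespan point above, assuming Matomo's standard JavaScript tracker (one of the CNIL-listed tools), here is how a site could cap the visitor cookie at roughly 13 months. The tracker URL and site ID are placeholders, not real values:

```typescript
// A minimal sketch, assuming Matomo's standard JS tracker command queue.
// The tracker URL and site ID below are placeholders.
declare global {
  interface Window {
    _paq?: unknown[][];
  }
}

// Matomo reads commands pushed onto the global _paq queue.
const _paq = (window._paq = window._paq ?? []);

// CNIL: the tracker's lifespan should not exceed 13 months.
// setVisitorCookieTimeout takes a value in seconds.
const THIRTEEN_MONTHS_SECONDS = 13 * 30 * 24 * 60 * 60; // ~33.7M seconds
_paq.push(["setVisitorCookieTimeout", THIRTEEN_MONTHS_SECONDS]);

_paq.push(["trackPageView"]);
_paq.push(["enableLinkTracking"]);
_paq.push(["setTrackerUrl", "https://analytics.example.com/matomo.php"]); // placeholder
_paq.push(["setSiteId", "1"]); // placeholder

export {}; // keeps this file a module so `declare global` compiles
```

Note that the 25-month retention cap and the "no automatic extension on new visits" condition are typically server-side vendor settings rather than in-page tracker calls, so they should be verified with the vendor rather than assumed from a snippet like this.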

A vendor may provide a comparative audience measurement service to multiple publishers if:

• the data is collected, processed and stored independently for each publisher; and

• the trackers are totally independent from each other and from any other tracker.

In practice

• Some audience measurement services do not fall within the scope of the exemption (art. 82 French DP law: first-party anonymised audience measurement), in particular when their suppliers indicate that they reuse the data on their own account (it may be possible to configure these tools to disable data reuse)

• Check with your vendor whether the tool reuses the data collected

• Consider any data transfers outside the EU that may be carried out by your provider

How to have an audience measurement solution evaluated?

Interestingly, the CNIL launched a program to identify solutions that can be configured to fall within the scope of the exemption from obtaining consent (art. 82).

So, subject to the conditions mentioned by the CNIL, it identified the following providers of audience measurement services that can be used instead of #googledataanalytics:

- Analytics Suite Delta by AT Internet

- SmartProfile by Net Solution Partner

- Wysistat Business by Wysistat

- Piwik PRO Analytics Suite by Piwik PRO

- Abla Analytics by Astra Porta

- BEYABLE Analytics by BEYABLE

- etracker Analytics (Basic, Pro, Enterprise) by etracker

- Retency Web Audience by Retency

- Nonli by Nonli

- CS Digital by Contentsquare

- Matomo Analytics by Matomo

- Wizaly by Wizaly SAS

- Compass by Marfeel Solutions

- Statshop by Web2Roi

- Eulerian by Eulerian Technologies

See more


Europe is worried about Kaspersky

The Bundesamt für Sicherheit in der Informationstechnik (BSI) issued a warning concerning the software solutions provided by the Russian anti-virus company Kaspersky.

According to the German Federal Office for Information Security, “A Russian IT manufacturer can conduct offensive operations itself, be forced to attack target systems against its own will, or be spied on without its knowledge as a victim of a cyber operation, or be used as a tool for attacks against its own customers.”

Public institutions and critical infrastructures

• It is to be expected that state institutions, critical infrastructures, companies in the special public interest, the manufacturing industry and important areas of society may be affected.

• Companies and authorities with special security interests/framework conditions and critical infrastructure facilities are particularly at risk.

Private users

• Private users without important functions in the state, economy and society may be the least in focus, but in the event of a successful attack they can also become victims of collateral effects.

Recommendation

• Antivirus software from the company Kaspersky should be replaced with alternative products.

German BSI note (EN)

Italian authorities are particularly worried about this situation. It’s been reported that more than 2,000 public institutions have procured software solutions from Kaspersky, including Palazzo Chigi (seat of the Council of Ministers), the Farnesina (Ministry of Foreign Affairs) and the Viminale (Ministry of the Interior).


More news

• Data Protection Commission announces decision in Meta (Facebook) inquiry

More information

• CNIL releases GDPR practical guide for DPOs

More information


ARTIFICIAL INTELLIGENCE

Council of Europe's Guidelines on the Human Rights Impacts of Algorithmic Systems (2020)

On April 8, 2020, the Committee of Ministers of the Council of Europe (CoE) released the Recommendation on the human rights impacts of algorithmic systems.

In this document, the CoE called on its Member States to take a precautionary approach to the development and use of AI systems and adopt legislation, policies and practices that fully respect fundamental rights.

Crucially, it issued a set of guidelines to address the Human Rights Impacts of AI Systems.

What should private parties developing or using AI systems know about them? When and how should private actors conduct a human rights impact assessment (HRIA)?

Chapter C outlines the responsibilities of private sector actors with respect to human rights and fundamental freedoms in the context of algorithmic systems.

The measures apply to every organisation, irrespective of its categorisation (SME or not) or domain.

It demands due diligence with respect to human rights, proactive and reactive steps to avoid human rights violations, and documentation of these efforts.

But it also requires:

• ongoing review: human rights impacts should be evaluated at regular intervals and throughout the entire AI system lifecycle (C.1.2)

• democratic participation and public awareness: include the views of relevant stakeholders in the evaluation of the AIS, and promote knowledge about the opportunities and challenges of AIS (B.1.3)

• informational self-determination: organisations must inform individuals in advance that they are interacting with an AIS. Individuals should be permitted to avoid being identified by automated systems, in accordance with the law (B.2.1)

• computational experimentation: where computational experimentation may impair fundamental rights, it should only be performed after an HRIA (B.3.1)

• testing: periodic assessment of the AIS against relevant standards should be integrated into the evaluation routine, especially where AIS function and generate outputs in real time (B.3.3)

• identifiability of ADS: AI systems must be identifiable and traceable (B.4.2) (see the sketch after this section)

• AI systems should not produce discriminatory outputs (C.1.4)

• where personal data is used, individuals should be informed and their consent obtained (C.2.1), unless another legal basis applies

• data minimisation and default opt-in for tracking (C.2.2)

• ensure high-quality data is used, free from errors and bias (C.3.1)

• datasets should be representative of the relevant populations (C.3.2)

• implement security measures to ensure confidentiality, integrity and availability (C.3.3)

• organisations should provide information about the potential human rights impacts and give an opportunity to challenge the use of the AIS (C.4.1); end-users should be given the opportunity to have decisions reviewed by humans (C.4.2), as well as effective remedies before an impartial and independent reviewer (C.4.4)

• organisations should disclose the number and type of complaints received concerning the AIS (C.4.3) and engage in a consultation process for the design, development and use of the AIS (C.4.5)

• establish and document internal procedures to guarantee that the development and use of the AIS is continuously monitored (C.5.1)

• HRIA: stakeholders should be involved in the HRIA and risk mitigation techniques should be implemented (C.5.3); the staff conducting the HRIA should be trained (C.5.2), and the HRIA should be reviewed at regular intervals (C.5.4)

Also related to the HRIA, but in the context of public institutions: “For algorithmic systems carrying high risks to human rights, impact assessments should include an evaluation of the possible transformations that these systems may have on existing social, institutional or governance structures, and should contain clear recommendations on how to prevent or mitigate the high risks to human rights” (B.5.2).

Therefore, the HRIA is an important accountability tool that AI providers and AI users should start considering in order to successfully harness trustworthy AI.
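Guideline B.4.2 (identifiability and traceability, flagged in the list above) is perhaps the most directly translatable into engineering practice. As a purely hypothetical sketch of what it could look like in code, with invented names throughout (the CoE guidelines prescribe no implementation): every automated decision gets a citable identifier, the producing system and model version, and a timestamp.

```typescript
// Hypothetical sketch only: every name here (DecisionRecord, recordDecision,
// the example system and IDs) is illustrative, not from the CoE guidelines.
import { randomUUID } from "node:crypto";

interface DecisionRecord {
  decisionId: string;   // unique, citable identifier for this decision
  systemId: string;     // which AI system produced the output
  modelVersion: string; // exact model version, so outputs can be traced back
  timestamp: string;    // when the decision was made (ISO 8601)
  input: unknown;       // what the system was given (minimised as appropriate)
  output: unknown;      // what it decided
}

function recordDecision(
  systemId: string,
  modelVersion: string,
  input: unknown,
  output: unknown
): DecisionRecord {
  const record: DecisionRecord = {
    decisionId: randomUUID(),
    systemId,
    modelVersion,
    timestamp: new Date().toISOString(),
    input,
    output,
  };
  // An append-only audit log is assumed; console.log stands in for it here.
  console.log(JSON.stringify(record));
  return record;
}

// Example: a (hypothetical) screening system logs one traceable decision.
recordDecision("applicant-screener", "v2.3.1", { applicantId: "A-123" }, { shortlisted: false });
```

A record that can be cited by ID is also what makes the human review (C.4.2) and complaint handling (C.4.3) duties operational: there is a concrete decision to challenge.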


More news

• AIA draft: JURI draft report open for comments

You can send your feedback to [email protected] or post it on LinkedIn :)

Link to the draft

• NIST Releases Report on Identifying and Managing Bias in AI

NIST noted that the report was part of a larger effort to support the development of trustworthy and responsible AI.

More information

• NIST AI Risk Management Framework draft available for comments

More information



About the author

Federico Marengo is the founder of Qubit Privacy, a boutique consultancy firm that provides data protection and AI governance services.

He is the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation”. As a PhD researcher, he studies the potential and the challenges of the General Data Protection Regulation in protecting data subjects against the adverse effects of Artificial Intelligence.

For inquiries, feedback or collaborations, please contact me at [email protected]
