Privacy and AI weekly - Issue 15

This Friday on Privacy and AI weekly

Privacy

• Bavarian SA publishes a guide on risk analysis and DPIA

• Data Governance Act and its privacy implications

• Officine Dati webinar: Ricerca sì, ricerca no, ricerca boh?! La Terra dei dati (IT)

• American Data Privacy and Protection Act (ADPPA)

Artificial Intelligence

• Understanding Bias in AI for Marketing (IAB 2021)

• Responsible AI in practice (Nissewaard's AI audit)



PRIVACY

Bavarian SA publishes a guide on risk analysis and DPIA

The Bavarian data protection authority (BayLfD) published a guide on risk analysis and #DPIA addressed to Bavarian public bodies.

The paper "#Risk analysis and data protection impact assessment" presents the method and building blocks of a data protection risk analysis, explains the development of technical and organizational measures, and provides practical tips for carrying out risk analyses.

It attaches particular importance to the idea of scaling: risk analyses do not always have to be complex; depending on the occasion, different "expansion stages" are possible. This is illustrated using several concrete use cases.


The supervisory authority also made available tools and modules (in German) to perform the risk analysis and the DPIA:

Module 1: Description of a processing activity

Module 2: DPIA necessity test

Module 3: DPIA report for a processing activity

Module 4: Resources "IT personnel management system"

Module 5: "Video conferencing system" equipment

Module 6: "Display workstation" resources

Link to the post (EN machine translation) and original documents (risk analysis and DPIA in DE)


Data Governance Act and its privacy implications

The #DGA promotes the availability of data and builds a trustworthy environment to facilitate their use for research and the creation of innovative new services and products.


It enters into force on 23 June 2022 and will apply in full from 24 September 2023.

Key features

1) Making public-sector data available for re-use in situations where such data is subject to the rights of others

The DGA creates a mechanism to enable the safe re-use of certain categories of public-sector data that are subject to the rights of others (e.g. personal data). Public-sector bodies allowing this type of re-use will need to be properly equipped, in technical terms, to ensure that privacy and confidentiality are fully preserved.

For privacy pros: the DGA does not derogate from the GDPR. The DGA establishes, for example, that before its transmission personal data should be fully anonymised, or that a secure processing environment should be used. Additionally, personal data should only be transmitted for re-use to a third party where a legal basis allows such transmission.
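As a purely illustrative sketch of the kind of technical preparation mentioned above (not a method prescribed by the DGA), one common building block before transmitting records is pseudonymisation of direct identifiers, here via keyed hashing. The key name and record fields are hypothetical; note that pseudonymised data remains personal data under the GDPR, so full anonymisation requires considerably more than this.

```python
import hashlib
import hmac

# Hypothetical secret key: it stays with the data holder, so recipients
# cannot recompute or reverse the identifier-to-token mapping.
SECRET_KEY = b"keep-this-key-with-the-data-holder"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Example record with hypothetical fields; only the direct identifier
# is replaced before the record leaves the secure environment.
record = {"name": "Jane Doe", "postcode": "80331", "diagnosis": "J45"}
shared = {**record, "name": pseudonymise(record["name"])}
```

Because HMAC is deterministic, the same person maps to the same token across datasets, which preserves linkability for research while removing the plain identifier.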

2) A new business model for data intermediation.

The DGA creates a framework to foster a new business model – data intermediation services – that will provide a secure environment in which companies or individuals can share data.

It allows personal data to be used with the help of a 'personal data-sharing intermediary' designed to help individuals exercise their rights under the GDPR.

Such providers focus exclusively on personal data and seek to enhance individual agency and individuals' control over the data pertaining to them, assisting them in exercising their rights under the GDPR, e.g. managing consent to processing, access, portability or deletion requests.

For privacy pros: new opportunities to advise data intermediation service providers.

3) Allowing data use on altruistic grounds.

The DGA fosters the creation of data repositories. It allows companies that seek to support purposes of general interest by making available relevant data based on a data altruism model to register in a Data Altruism register.

Data subjects in this respect would consent to specific purposes of data processing, but could also consent to data processing in certain areas of research or in parts of research projects, since it is often not possible to fully identify the purpose of personal-data processing for scientific research at the time of data collection.

For privacy pros: data altruism organisations recognised in the Union are able to collect relevant data directly from individuals or to process data collected by others. Data altruism would rely on the consent of data subjects (art. 6(1)(a), art. 7 and art. 9(2)(a) #GDPR), and processing for this purpose should also comply with art. 5(1)(b) and art. 89(1) GDPR concerning further processing (compatibility test). #research

Organisations should put in place effective TOMs (technical and organisational measures) to allow data subjects to modify or withdraw their consent at any time.
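As a minimal sketch of what such a measure could look like in practice (all class and field names here are hypothetical, not from the DGA or GDPR), one option is an append-only consent ledger: every grant or withdrawal is recorded as an event, the latest event wins, and the full history remains auditable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentEvent:
    """One consent decision by a data subject for a given purpose."""
    purpose: str   # e.g. "medical-research"
    granted: bool  # True = consent given, False = consent withdrawn
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

class ConsentLedger:
    """Append-only log of consent events; nothing is ever overwritten."""

    def __init__(self) -> None:
        self._events: list[ConsentEvent] = []

    def record(self, purpose: str, granted: bool) -> None:
        self._events.append(ConsentEvent(purpose, granted))

    def is_consented(self, purpose: str) -> bool:
        """Latest event for the purpose wins; no event means no consent."""
        for event in reversed(self._events):
            if event.purpose == purpose:
                return event.granted
        return False

ledger = ConsentLedger()
ledger.record("medical-research", granted=True)
ledger.record("medical-research", granted=False)  # data subject withdraws
```

Keeping events immutable, rather than flipping a boolean flag, means the organisation can demonstrate when consent was valid, which supports the accountability principle.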


American Data Privacy and Protection Act (ADPPA)

For those who haven't had time to read about the new US federal proposal, here are two shortcuts to getting an overview of it:

ADPPA Section by Section Summary

and

IAPP webinar "A Viable U.S. Federal Privacy Bill? Could this be the one?"



Officine Dati webinar: Ricerca sì, ricerca no, ricerca boh?! La Terra dei dati (IT)


The protection of personal data in scientific research and clinical trials has always been a highly relevant topic: the balance between confidentiality and scientific and technological progress rests on a weighing of interests that is difficult to strike and even harder to keep up to date over time. The digitalisation of health data brings extraordinary potential and, at the same time, an inevitable increase in the risks to the rights and freedoms of data subjects.

To what extent does the current legal framework allow us to fully seize the great opportunities that progress makes available? What are the main problems, and what are the potential answers for reaching a win-win solution? Purpose-compatibility assessments, secondary use, pseudonymisation and anonymisation, and the need for specific consent, transparency and safeguards for data subjects' rights.

We will discuss this and more together with:

• Guido Scorza | Member of the Italian Data Protection Authority (Garante per la protezione dei dati personali)

• Arianna Greco | SVP Head Global Commercial Legal, Alnylam Pharmaceuticals

• Roberto Benedetto | Lawyer, DPO of Fondazione Santa Lucia

• Silvia Stefanelli | Lawyer, Studio Legale Stefanelli&Stefanelli

• Francesco Curtarelli | Senior Legal Consultant, Partners4Innovation

Moderators:

• Rosario Imperiali d'Afflitto | President of Officine Dati

• Anna Cataleta | Vice-President of Officine Dati

No registration is required to attend. The event will be streamed live on the Officine Dati LinkedIn page and YouTube channel.

For more information, write to [email protected] or visit the website https://www.officinedati.org/


ARTIFICIAL INTELLIGENCE


Understanding Bias in AI for Marketing (IAB 2021)

This paper provides insight for business executives, technologists, legal and compliance officers, and platform users to understand their responsibilities in process development, deployment, and ongoing management of an AI-driven solution.


The guidance gives an overview of the most important aspects relevant stakeholders should consider when using #AI systems for marketing purposes to avoid unfair biases.

In addition to considering the conscious and unconscious assumptions, intentions, and the proposed applications of AI, companies should consider:

• Data volume

• Data quality

• Computing power

• Data #privacy and security risks

• Legal and regulatory risks

• Public and reputational risks

It also explains important concepts to consider and provides a checklist and a list of questions for each phase of an AI system's lifecycle to help identify and mitigate #bias:

• Phase: Awareness and Discovery

• Phase: Exploration, Solutions, and Design

• Phase: Development, Tuning, and Testing

• Phase: Activation, Optimization, and Remediation

Link to the report:


https://www.iab.com/news/understanding-bias-in-ai-what-is-your-role-and-should-you-care/


Responsible AI in practice

Nissewaard is a Dutch municipality near Rotterdam. It planned to use an AI solution to support decisions concerning access to social benefits.


The Municipality of Nissewaard asked a company to audit the algorithm and the investigation was completed in June 2021.

Through this audit, the municipality wanted to provide additional transparency to residents and ensure that the system was functioning properly while respecting all legal and ethical values.

The investigation conducted by the private company showed that the algorithm had technical defects. After the first signals from the auditor about the malfunctioning, the municipality of Nissewaard decided to stop using this #algorithm.

The municipality communicated that no citizen had been harmed in any way.

Link to post and audit


About me

I'm a data protection consultant currently working for White Label Consultancy. I previously worked for TNP Consultants and Data Business Services. I have an LL.M. (University of Manchester), and I'm a PhD candidate (Bocconi University, Milano). My PhD research deals with the potential of, and the challenges to, the General Data Protection Regulation in protecting data subjects against the adverse effects of Artificial Intelligence. I also serve as a teaching assistant in two courses at Bocconi University.

I'm the author of "Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation", an e-book released in 2021. You can find the book here.
