Privacy and AI #1
credits: see below


In this January edition of Privacy and AI:

PRIVACY

• Data Protection Day

• Finnish DPA decision on Schrems II

• Data accuracy: IT DPA fines electricity company €1m

• Company fined due to a data breach and lack of a DPA

• CNIL fines TikTok €5m for infringing regulations on cookies

• CNIL fines Apple €8m for tracking visitors without consent

• Bundeskartellamt statement of objections issued against Google's data processing terms

• ISACA Privacy in Practice 2023 Report

• FTC Order - Chegg Lax Security

• NIST Retires SHA-1 Cryptographic Algorithm

ARTIFICIAL INTELLIGENCE

• Data Protection Authorities place increasing importance on AI

• NIST AI Risk Management Framework

• ISO Guidance on AI Risk Management (ISO/IEC 23894:2023)


PRIVACY

Data Protection Day

Data Protection Day commemorates the opening for signature of the Council of Europe Data Protection Convention (Convention 108) on 28 January 1981.

[Image: Convention 108+ cover]

Convention 108 was updated in 2018 to address new challenges in data protection and to align with current regulations (in particular the GDPR), resulting in Convention 108+.

The importance of this convention:

• it protects personal data in a legally binding instrument,

• it links data protection to the safeguarding of rights and fundamental freedoms in general, and

• it also connects data protection to privacy and private life.

It provided fertile ground for the adoption and updating of national data protection legal frameworks in EU countries.

The US Congress also expressed support for the designation of January 28, 2009, as "National Data Privacy Day".


The Congress supports the designation of January 28, 2009, as National Data Privacy Day and also encourages:

(1) state and local governments to promote data privacy awareness;

(2) privacy professionals and educators to discuss data privacy and protection issues with teens in high schools; and

(3) individuals to be aware of data privacy concerns and to take steps to protect their personal information online.

More information here



Finnish DPA decision on Schrems II


The Finnish DPA issued a decision against the cities of Helsinki, Espoo, Vantaa and Kauniainen for the use of tracking technologies on their website.

The Helmet[.]fi website of the capital region's libraries used cookies and other tracking technologies.

Helmet libraries used, for example, the Google Analytics tool and the Google Tag Manager service on the Helmet.fi website.

Personal data collected on the Helmet website were transferred to the United States without adequate supplementary protection measures, and users were not properly informed about the transfers.

The DPA ordered Helmet libraries:

- to destroy the personal data collected through website tracking technologies for those data subjects whose personal data were stored or used unlawfully after collection;

- to inform users about the processing of personal data as required by data protection legislation. See here

Helmet libraries issued a press release to inform that they have updated their tracking practices.


The Italian, French and Austrian supervisory authorities have already issued decisions against the use of Google Analytics (not fines), and the Danish SA against Google Workspace (not a fine). Only the Portuguese DPA has issued a fine regarding data transfers.



Data accuracy: IT DPA fines electricity company €1m

Thousands of energy users mistakenly classified as defaulters were unable to switch to other energy suppliers.

A client of Areti SpA lodged a complaint with the GPDP (the Italian DPA), and the authority found that the unlawful processing involved thousands of other customers.

Users were unable to change supplier because of the processing of inaccurate and outdated data, caused by a misalignment between the company's internal systems. This resulted in incorrect reports to a database of defaulting clients, which suppliers usually consult before signing a new contract (sector regulations allow this consultation).

From December 2016 to January 2022, due to a series of technical and application errors, various queries that extracted information from Areti's systems attributed a non-payment status to end customers even though they had paid their bills on time.

Due to these inaccuracies, the company denied activation of the energy supply to over 47,000 customers.
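To illustrate the data-quality failure at the heart of this decision, here is a purely hypothetical Python sketch (my own illustration, not Areti's actual systems): before anyone is reported to a shared defaulter database, the flags produced by downstream extracts are reconciled against the billing system of record, and disagreements are routed to manual review instead of being reported.

```python
def reconcile(extract_flags: dict, billing_paid: dict):
    """Split extract-flagged customers into 'report' and 'manual review'.

    extract_flags: non-payment flags produced by downstream queries
    billing_paid:  authoritative billing record (True = bills paid)
    """
    report, review = set(), set()
    for cid in (c for c, flagged in extract_flags.items() if flagged):
        if billing_paid.get(cid, False):
            review.add(cid)   # sources disagree: never auto-report
        else:
            report.add(cid)   # both sources agree the customer is unpaid
    return report, review

# Customer A1 paid on time, but a stale extract still flags them.
report, review = reconcile({"A1": True, "B2": True}, {"A1": True, "B2": False})
assert report == {"B2"} and review == {"A1"}
```

A consistency check of this kind is one way to operationalise the accuracy principle (Article 5(1)(d) GDPR) when several internal systems feed an externally consulted database.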

See here




Data breach + no DPA = fine

After a data breach, the Polish SA started investigations about the privacy practices of a cultural centre.

The controller used a processor to outsource the maintenance of accounting books, records, preparation of reports and storage of documentation.

The Polish SA found out that the controller:

• failed to verify whether the processor provided sufficient guarantees to implement appropriate TOMs (technical and organisational measures), and

• had not signed a DPA (data processing agreement) as required by Article 28 GDPR.

The SA imposed a EUR 550 (PLN 2,500) fine on the controller.

See here


CNIL fines TikTok €5m for infringing regulations on cookies

The CNIL imposed a EUR 5m fine on TikTok.


During an investigation of the tiktok[.]com website (not the app), the authority determined that users:

1) could not refuse cookies as easily as they could accept them

While TikTok offered a button allowing immediate acceptance of cookies, it did not put in place an equivalent solution (a button or otherwise) allowing users to refuse cookies just as easily. Several clicks were required to refuse all cookies, as opposed to just one to accept them. A "reject all" button was only added in February 2022.

2) were not informed in a sufficiently precise manner of the purposes of the different cookies

The authority considered that the terms « améliorer votre expérience sur nos sites web » ("improve your experience on our websites") and « à des fins d'analyse et de marketing » ("for analysis and marketing purposes") were particularly imprecise.

See here


CNIL fines Apple €8m for tracking visitors without consent

Under iOS 14.6, an older version of the iPhone operating system, when a user visited the App Store, identifiers used for several purposes, including personalization of ads on the App Store, were by default automatically read on the device without obtaining consent.


The CNIL considered that, due to their advertising purpose, these identifiers are not strictly necessary for the provision of the service (the App Store) and must not be read and/or deposited without the user's prior consent. However, the advertising targeting settings available from the "Settings" icon of the iPhone were pre-checked by default.

Moreover, the user had to perform a large number of actions to deactivate this setting, since the option was not included in the initialization process of the phone. These elements meant that users' prior consent was not validly collected.

See here


Statement of objections issued against Google’s data processing terms

The Bundeskartellamt (German Federal Cartel Office) sent Google its preliminary legal assessment in the proceeding initiated over Google's data processing terms.

According to Google's data processing terms, Google can:

- combine a variety of data from various services and use them (e.g. to create very detailed user profiles which the company can exploit for advertising and other purposes, or to train functions provided by services)

- collect and process data across services (e.g. services owned by Google such as Google Search, YouTube, Google Play, Google Maps and Google Assistant, but also third-party websites and apps, and Google's background services)

The Bundeskartellamt's preliminary conclusion is that:

- users are not given sufficient choice as to whether and to what extent they agree to this far-reaching processing of their data across services

- the choices offered are not sufficiently transparent and are too general

- general and indiscriminate data retention and processing across services without a specific cause, as a preventive measure (including for security purposes), is not permissible

Also, according to the authority:

- users should be able to limit the processing of data to the specific service used and to differentiate between the purposes for which the data are processed

- the choices offered must not make it easier for users to consent to the processing of data across services than to refuse it

The authority also evaluated the potential applicability of the Digital Markets Act to Google’s situation.

Google now has the opportunity to comment on the preliminary conclusions.

While this case relates to competition law, it has close connections to data protection, and the conclusions reached are readily applicable in the privacy context.

See here



ISACA Privacy in Practice 2023 Report


The following are key survey findings:

• Technical privacy roles are slightly more likely to be somewhat or significantly understaffed than legal/compliance privacy roles, although both types of roles are impacted by staff shortages.

• Technical privacy roles are significantly more likely than legal/compliance privacy roles to have increased demand in the next year.

• Experience is considered the most important factor in determining if a privacy-position candidate is qualified.

• The demand for privacy professionals is expected to increase over the next year for technical privacy professionals and legal/compliance privacy professionals.

• Privacy teams interact most frequently with information security, legal/compliance and risk management teams.

• Enterprises that practice privacy by design are more likely to:

  • Have adequately staffed privacy teams
  • Believe that their board of directors appropriately prioritizes enterprise privacy
  • Require documented privacy policies, procedures and standards
  • Use more privacy controls overall than are legally required
  • Feel their privacy budget is appropriately funded


Privacy Program Obstacles

However, respondents indicated that there are obstacles to forming a privacy program, with the top three being:

  • Lack of competent resources (42%)
  • Lack of clarity on the mandate, roles, and responsibilities (40%)
  • Lack of executive or business support (39%)


Actions taken by companies to remediate staffing shortages:

• training non-privacy staff to move into privacy roles (49%)

• increasing the use of contract employees or outside consultants (38%)


Causes of privacy failures

• lack of training (49%)

• data breaches (42%)

• not practicing privacy by design (42%)

Remediations

• providing privacy awareness training for employees (85%)

• revising privacy awareness training at least annually



FTC Order - Chegg Lax Security

Chegg's careless security practices exposed the data of millions of customers and employees.


The company stored users’ personal data on its cloud storage databases in plain text and, until at least 2018, employed outdated and weak encryption to protect user passwords.

Chegg experienced four data breaches that exposed the personal information of about 40 million users and employees, including users' email addresses and sensitive scholarship data such as their dates of birth, sexual orientation and disabilities, as well as financial and medical information about Chegg employees.

FTC’s order requires Chegg to implement a comprehensive information security program, limit the data the company can collect and retain, offer users MFA to secure their accounts, and allow users to request access to and deletion of their data.
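On password storage specifically, current practice is to store only salted digests produced by a memory-hard key derivation function, so that a database leak does not directly reveal passwords. Here is a generic Python sketch of that technique (my illustration, not Chegg's systems or the FTC's prescribed implementation):

```python
import hashlib
import hmac
import os

def hash_password(password: str):
    """Derive a salted password hash with scrypt, a memory-hard KDF."""
    salt = os.urandom(16)  # unique random salt, stored alongside the hash
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive and compare in constant time; the plaintext is never stored."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```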



NIST Retires SHA-1 Cryptographic Algorithm

SHA-1, “secure hash algorithm,” has been in use since 1995 as part of the Federal Information Processing Standard (FIPS) 180-1.


Because today's increasingly powerful computers are able to attack the algorithm, the National Institute of Standards and Technology (NIST) is announcing that SHA-1 should be phased out by December 31, 2030, in favor of the more secure SHA-2 and SHA-3 groups of algorithms.

NIST had previously announced that federal agencies should stop using SHA-1 in situations where collision attacks are a critical threat, such as the creation of digital signatures.

SHA-1 has served as a building block for many security applications, such as validating websites — so that when you load a webpage, you can trust that its purported source is genuine. It secures information by performing a complex math operation on the characters of a message, producing a short string of characters called a hash.

Today’s more powerful computers can create fraudulent messages that result in the same hash as the original, potentially compromising the authentic message. These “collision” attacks have been used to undermine SHA-1 in recent years.
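To make the hashing idea concrete, here is a minimal sketch using only Python's standard library (the message is just an illustration):

```python
import hashlib

message = b"Privacy and AI newsletter"

# SHA-1 produces a 160-bit digest. Collisions can now be manufactured,
# which is why NIST is phasing it out by the end of 2030.
print(hashlib.sha1(message).hexdigest())      # 40 hex characters

# SHA-2 (here SHA-256) and SHA-3 are the recommended replacements.
print(hashlib.sha256(message).hexdigest())    # 64 hex characters
print(hashlib.sha3_256(message).hexdigest())  # 64 hex characters
```

The property at stake is collision resistance: it should be computationally infeasible to find two different messages that produce the same digest, and for SHA-1 that is no longer the case.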



ARTIFICIAL INTELLIGENCE

Data Protection Authorities place increasing importance on AI

There are many examples showing the increasing importance that Data Protection Authorities are placing on AI systems.


1) Dutch DPA to supervise algorithms

From January 2023, the Autoriteit Persoonsgegevens (Dutch DPA) will start supervising algorithms for transparency, discrimination and fairness.

In the initial stage, the focus will be on the identification of high-risk algorithms, knowledge gathering and establishing the foundations for collaboration.

See here


2) A Guide to ICO Audit - AI Audits

The UK DPA developed a framework for auditing AI, focusing on best practices for data protection compliance.

The audits are "voluntary" and completed by the ICO's Assurance department.

An audit typically aims to:

• ensure that appropriate policies and procedures are in place and are followed;

• test the adequacy of controls in place;

• detect potential breaches; and

• recommend changes.

See here


3) CNIL (FR) AI service

CNIL will create a service to evaluate the privacy risks posed by AI systems.

Main missions:

- facilitate the understanding of the functioning of AI systems within the CNIL, but also for professionals and individuals;

- consolidate the expertise of the CNIL in the knowledge and prevention of privacy risks related to the implementation of these systems;

- prepare for the entry into application of the AI Act (AIA)

See here



NIST AI RMF

NIST has developed a framework to manage risks to individuals, organizations, and society associated with artificial intelligence. The NIST AI Risk Management Framework (AI RMF) is intended for voluntary use and to improve the ability to incorporate trustworthiness considerations into the design, development, use, and evaluation of AI products, services, and systems.


The Core of the NIST AI RMF is composed of four functions:

  • GOVERN
  • MAP
  • MEASURE
  • MANAGE


Each of these high-level functions is broken down into categories and subcategories, which are in turn subdivided into specific actions and outcomes (these can be found in the Playbook; I hope the PDF version of the Playbook is released soon).


Below is one example for each function:

  • GOVERN

GOVERN 1: Policies, processes, procedures and practices across the organization related to the mapping, measuring and managing of AI risks are in place, transparent, and implemented effectively.

GOVERN 1.1: Legal and regulatory requirements involving AI are understood, managed, and documented.

Suggested action: Maintain awareness of the legal and regulatory considerations and requirements specific to industry, sector, and business purpose, as well as the application context of the deployed AI system.

Transparency and Documentation: Organizations can document the following: To what extent has the entity defined and documented the regulatory environment—including minimum requirements in laws and regulations?

  • MAP

MAP 4: Risks and benefits are mapped for all components of the AI system including third-party software and data.

MAP 4.2: Internal risk controls for components of the AI system including third-party AI technologies are identified and documented.

Suggested Actions: Apply traditional technology risk controls – such as procurement, security, and data privacy controls – to all acquired third-party technologies.

Transparency and Documentation: Organizations can document the following: Are mechanisms established to facilitate the AI system’s auditability (e.g. traceability of the development process, the sourcing of training data and the logging of the AI system’s processes, outcomes, positive and negative impact)?

  • MEASURE

MEASURE 1: Appropriate methods and metrics are identified and applied.

MEASURE 1.2: Appropriateness of AI metrics and effectiveness of existing controls are regularly assessed and updated, including reports of errors and impacts on affected communities.

Suggested Actions: Assess effectiveness of existing metrics and controls on a regular basis throughout the AI system lifecycle.

Transparency and Documentation: Organizations can document the following: What metrics has the entity developed to measure performance of the AI system?

  • MANAGE

MANAGE 3: AI risks and benefits from third-party entities are managed.

MANAGE 3.1: AI risks and benefits from third-party resources are regularly monitored, and risk controls are applied and documented.

Suggested Actions: Apply organizational risk tolerance to third-party AI systems.

Transparency and Documentation: If your organization obtained datasets from a third party, did your organization assess and manage the risks of using such datasets?
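Because each function decomposes into identifiable subcategories with expected documentation, a team can track its coverage in a simple register. Here is a hypothetical Python sketch (the function/category/subcategory structure mirrors the framework; the code and names are mine, not NIST's):

```python
from dataclasses import dataclass, field

@dataclass
class Subcategory:
    ident: str      # e.g. "GOVERN 1.1"
    outcome: str    # outcome statement quoted from the AI RMF
    evidence: list = field(default_factory=list)  # links to internal documentation

# A tiny excerpt of the Core, using two of the examples quoted above.
rmf_core = {
    "GOVERN": [Subcategory("GOVERN 1.1",
        "Legal and regulatory requirements involving AI are understood, "
        "managed, and documented.")],
    "MAP": [Subcategory("MAP 4.2",
        "Internal risk controls for components of the AI system including "
        "third-party AI technologies are identified and documented.")],
}

def open_items(core: dict) -> list:
    """Return subcategories that still lack documented evidence."""
    return [s.ident for subs in core.values() for s in subs if not s.evidence]

print(open_items(rmf_core))  # -> ['GOVERN 1.1', 'MAP 4.2']
```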


ISO Guidance on AI Risk Management

ISO/IEC 23894:2023 provides guidance on how organisations that develop, produce, deploy or use systems and services that utilize AI can manage risks specifically related to AI.



The Risk Management Process section includes:

- general considerations

- communication and consultation

- scope, context and criteria

- risk assessment

- risk treatment

- monitoring and review

- recording and reporting


Really interesting times regarding risk management of AI systems. These two frameworks constitute authoritative references and provide useful assistance to those working on a daily basis with AI systems.

NIST has already published a crosswalk between the AI RMF and ISO/IEC 23894.





About the cover image

I decided to temporarily change the cover image of the newsletter not only to refresh the image but also to test DALL-E 2, which is an AI system created by OpenAI (the company that also developed ChatGPT) that is capable of generating images from a natural language description.

I signed in to the app and provided DALL-E with many "prompts" (natural language descriptions of what I wanted the app to generate for me).

So, after many attempts, and using the prompt "A man writing a newsletter about privacy and data protection, digital art", this was the result.
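For those who prefer scripting to the web app, the same kind of request could be made through OpenAI's API. A minimal sketch using the pre-1.0 openai Python package that was current at the time (the API key is a placeholder, and the exact interface may have changed since):

```python
import openai  # legacy (pre-1.0) openai package

openai.api_key = "sk-..."  # placeholder: your API key

# The API equivalent of typing a prompt into the DALL-E web app.
response = openai.Image.create(
    prompt="A man writing a newsletter about privacy and data protection, "
           "digital art",
    n=1,               # number of images to generate
    size="1024x1024",  # DALL-E 2 supports 256x256, 512x512 and 1024x1024
)
print(response["data"][0]["url"])  # temporary URL of the generated image
```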

I liked it and I decided to include it in this issue of Privacy and AI. I hope you like it too.

What strikes me is the sheer complexity of tasks that publicly available AI software can currently perform. And since these activities are not without risks, risk management frameworks can also assist those in charge of designing, developing and deploying these applications.

[Image generated by DALL-E]



ABOUT ME

I'm a data protection consultant currently working for White Label Consultancy. I previously worked for TNP Consultants and Data Business Services. I have an LL.M. (University of Manchester), and I'm a PhD candidate (Bocconi University, Milano). As a PhD researcher, my research deals with the potential and challenges of the General Data Protection Regulation to protect data subjects against the adverse effects of Artificial Intelligence. I also served as a teaching assistant in two courses at Bocconi University.

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation”, an e-book released in 2021. You can find the book here



