Privacy and AI #17


In this edition of Privacy and AI

• Privacy & AI book giveaway

• LLMs can contain personal information in California

• AI Mitigation Strategy Report

• Call for views on the Cyber Security of AI

• Public Authority Algorithmic and Automated Decision-Making Systems Bill

• China’s AI Security Governance Framework

• New South Wales AI Assessment Framework (AIAF)

• Australian Policy for the responsible use of AI in government

• AI Guide for Government (USA)

• The Cost of a Data Breach 2024 Report (IBM)

• Data Protection Authorities on the use of user-generated data in platforms to train GenAI models

• Machines Like Me (Ian McEwan)

• Bloomberg interview with Mark Zuckerberg


Privacy and AI - giveaway (closed)

One year ago I launched Privacy and AI, and to celebrate I offered a giveaway to my LinkedIn connections.

To thank all of you who bought, supported, read, and shared my work over the past year, I gave away 5 paperback copies of my book. It was on a “first come, first served” basis and closed within minutes of the announcement. Some people even posted about the book when they received it (see here and here).

The book can also be found at https://lnkd.in/g74iezv and in your local Amazon marketplace.


Those interested can purchase the book:

  • digital version here
  • Amazon local marketplace



LLMs can contain personal information in California

The California Senate passed Bill 1008, which amends the California Consumer Privacy Act of 2018, modifying, among other things, the definition of personal information and the formats in which personal information (PI) can exist.

The approved version states that personal information can exist in "abstract digital formats, including... artificial intelligence systems that are capable of outputting personal information".

If this bill is finally approved (the CA Governor has 30 days to decide on a potential veto), it may have massive consequences for developers of foundation models. Just think about the following challenges (a small illustrative sketch follows the list):

- finding what the bill considers PI in the model,

- ensuring that a particular combination of connected tokens refers to a particular person and not another (for instance, that the combination of tokens forming the words "Federico Marengo" refers to me and not to anybody else),

- erasing this information from the system (which may involve retraining the model),

- ensuring the accuracy of the information retrieved when it involves questions about covered individuals.
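
To make the first two points more concrete, here is a minimal sketch of how a developer might probe a model for outputs that mention a specific data subject. Everything in it is an assumption for illustration: `generate` is a stand-in for whatever inference API the model actually exposes, and real PI discovery would require far broader prompt coverage plus entity disambiguation to tell namesakes apart.

```python
import re

# Hypothetical example: probing a model for outputs about one data subject.
# The name and prompts are illustrative only.
TARGET_NAME = "Federico Marengo"

def generate(prompt: str) -> str:
    """Placeholder for a call to the foundation model under test.

    This is not a real library call; wire it to your own inference API.
    """
    raise NotImplementedError("connect this to your model's inference API")

# A handful of probing prompts; a real audit would need far more variations.
PROBES = [
    f"Who is {TARGET_NAME}?",
    f"What do you know about {TARGET_NAME}?",
    f"Where does {TARGET_NAME} work?",
]

def find_mentions() -> list[tuple[str, str]]:
    """Return (prompt, output) pairs whose output mentions the target name."""
    pattern = re.compile(re.escape(TARGET_NAME), re.IGNORECASE)
    hits = []
    for prompt in PROBES:
        output = generate(prompt)
        if pattern.search(output):
            hits.append((prompt, output))
    return hits
```

Even a hit from such probing would not show that the tokens refer to this particular person rather than a namesake, which is exactly the disambiguation problem in the second point.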

Link here


AI Mitigation Strategy Report

In July, ETSI updated ETSI GR SAI 005, the Mitigation Strategy Report, which summarizes and analyses existing and potential mitigations against threats to AI-based systems.

Link here


Call for views on the Cyber Security of AI

The UK Government is asking for views on a two-part intervention, including a voluntary Code of Practice on AI cyber security that is intended to form the basis of a new global standard.

This call for views opened in May and closed at 11:59pm on 9 August 2024.

The draft code of practice, open for comments, is attached, and the link is below.

Link here


Public Authority Algorithmic and Automated Decision-Making Systems Bill

A bill has been introduced in the UK Parliament to regulate the use of AI for decision-making in the public sector.

The proposed bill would:

- apply only to the public sector

- require public authorities (PAs) to conduct an impact assessment of AI systems

- require the adoption of transparency standards

The AI impact assessment must:

- be open to the public

- evaluate the risks to individuals and groups, and the impacts on PA employees

- explain the measures to minimise risks

- evaluate the efficacy and accuracy of the AI system

- assess biases to ensure compliance with the Human Rights Act 1998 and the Equality Act 2010

Algorithmic Transparency Records

The Algorithmic Transparency Records must include at least:

- a description of the system

- a rationale explanation for the use of the system

- information about technical specifications

- an explanation of how the system is used to inform administrative decisions

The bill would also require public authorities to:

- make arrangements for the provision of meaningful and personalised explanations to affected individuals

- conduct regular audits

- develop a process to validate the outcomes and the data

- train PA employees

It would also prohibit the procurement of systems incapable of scrutiny.

Link here


China’s AI Security Governance Framework

Last week the National Information Security Standardization Technical Committee (TC260) published the AI Security Governance Framework.


The framework contains a set of principles for managing AI systems and provides a useful classification of AI risks, along with technological measures to address them.

It offers a simple but comprehensive list of risks related to individuals, groups of individuals, society, and the organization developing or using the AI system. An English translation is also available on the TC260 website.

A summary of the risks and specific measures can be found in a table on the last page.



I’d argue that the classification and naming could be tweaked a little, but that may also be down to translation issues.

Link here


New South Wales AI Assessment Framework (AIAF)

The NSW AIAF is a risk self-assessment framework for NSW Government agencies to ensure responsible design, development, deployment, procurement and use of AI technologies.

The AIAF is mandatory for all NSW Government agencies when designing, developing, deploying, procuring or using systems containing AI components.

(Figures: “Understanding Risk Levels” and “Understanding Risk Levels - examples”)

Link here


Australian Policy for the responsible use of AI in government

The Australian Government updated its policy for the responsible use of AI in government, which comes into effect on 1 September 2024.


The policy establishes a three-pronged approach to introducing the principles:

1) Enable and prepare

- Establish clear accountabilities for AI adoption and use.

- Identify Accountable Officials (AOs) (by December 2024).

- According to the Standard for Accountable Officials (attached), AOs must:

  • be accountable for the implementation of the policy (AOs are not responsible for the agency’s AI use cases, though agencies can assign them additional responsibilities)

  • notify the Digital Transformation Agency (DTA) where the agency has identified a new high-risk use case

  • be a contact point for government AI coordination and engage in government AI forums and processes

  • keep up to date with new AI requirements

2) Engage responsibly

- Use proportional, targeted risk mitigation and ensure their use of AI is transparent and explainable to the public.

- Publish a public transparency statement outlining their approach to adopting and using AI (by March 2025).

3) Evolve and integrate

- Review and evaluate AI uses, and embed feedback mechanisms throughout government.

It is interesting to see how governments are shaping the responsibilities and specific actions that agencies or bodies must take to adopt AI responsibly. See, for instance, EO 14110, which requires US federal agencies to appoint Chief AI Officers (CAIOs) and to inventory rights- and safety-impacting AI systems.


AI Guide for Government (USA)

Some time ago, the U.S. General Services Administration (Centers of Excellence) published an AI Guide for Government.

The objective is to help government decision-makers understand what AI means for their agencies and how to invest in and build AI capabilities. It also aims to help leaders understand what to consider as they invest in AI, laying the foundation for its enterprise-wide use.

While it is worth reading in its entirety, I think many organizations can find interesting insights in the following chapters/sections:

• Chapter 2: how to structure the organization to embrace AI

• Chapter 4: developing the workforce

• Section 5.2: Data governance and management (establishing a multi-tier governance structure)

• Chapter 6: AI Capability Maturity

The AI CMM provides a common framework for federal agencies to evaluate organizational and operational maturity levels against stated objectives.

I have also included the AI CMM matrix at the end.


Link here


IBM released the Cost of a Data Breach 2024 Report


Key findings

• $4.88M is the average total cost of a data breach. The USA is where data breaches cost the most.

• $2.2M is the average cost saving from the use of AI in prevention (note that IBM sells AI-powered cybersecurity services).

• The cyber skills shortage grew. 1 organization in 5 declared that it used some form of GenAI security tools to close this gap.

• 35% of the breaches involved shadow data.

• $4.9M is the average cost of a malicious insider attack. Attacks of this kind may have been facilitated by GenAI, since it produces well-written text easily.

• Nearly half of breaches involved personal data (46%) or intellectual property records (43%). Interestingly, the cost of IP records involved in breaches increased significantly from the previous year.

• The involvement of law enforcement authorities reduces the cost of a ransomware breach by $1M on average.

• 292 days are required, on average, to identify and contain breaches involving stolen credentials.

Recommendations

• Know your information landscape (environments, data repositories, inventories), and pay extra attention to hybrid environments and public clouds.

• Strengthen prevention strategies with AI and automation: adopting AI expands the attack surface. One way to mitigate these risks is the use of managed AI security services, which can reduce the cost of a data breach.

• Take a security-first approach to GenAI adoption: only one quarter of GenAI initiatives are being secured, which exposes data and models to breaches.

• Increase cyber response training.

Link here


Data Protection Authorities on the use of user-generated data in platforms to train GenAI models

Recently there have been several news releases from European data protection authorities about the use of user-generated data on large platforms to train GenAI systems.

X and LinkedIn started using data from their platforms to train their GenAI models (Grok, in the case of X) without sufficiently informing data subjects about it. Both suspended the processing after discussions with the data protection authorities.

ICO on Twitter


DPC on Grok

Meta also announced plans to train its models, but the implementation will be a bit more transparent, at least in the UK: Meta informed that users will start receiving in-app notifications with information about the processing.

ICO on Meta


Machines Like Me (Ian McEwan)

Machines Like Me is a novel by McEwan published in 2019. It is set in an alternative 1980s that mixes real and counterfactual events: Argentina won the Falklands War, Alan Turing is alive, and a company sells humanoids.

Among other topics, the novel makes us think about the (fictional) role of consciousness in humanoids, and what humans can and cannot do with robots.

Link here


Bloomberg interview with Mark Zuckerberg

Have you heard the term Zuckaissance?


An interesting interview with Mark Zuckerberg by Bloomberg

Link here


Unsubscription

You can unsubscribe from this newsletter at any time. Follow this link to find out how.


ABOUT ME

I work as AI Governance Manager at Informa PLC. Previously, I was a senior privacy and AI governance consultant at White Label Consultancy, and before that I worked for other data protection consulting companies.

I'm specialised in the legal and privacy challenges that AI poses to the rights of data subjects and how companies can comply with data protection regulations and use AI systems responsibly. This is also the topic of my PhD thesis.

I have an LL.M. (University of Manchester) and a PhD (Bocconi University, Milan).

I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation” and “Privacy and AI”. You can find the books here.

