Twelve AI Privacy Risks
A recent paper on AI Privacy Risks discusses twelve privacy risks inherent to the use of artificial intelligence.
If you don’t want to read further, there is one key takeaway.
Effective AI governance will include privacy harm-envisioning techniques performed early in design. I encourage having privacy engineering perform this work.
These harm-envisioning techniques should be led by privacy engineers well-versed in privacy harms and AI-enabled system privacy risks. It is not sufficient to apply the typical Privacy by Design measure, the Privacy Impact Assessment questionnaire, after design. Instead, engage privacy engineering during initial project ideation. It is also important for the privacy engineer to consider the full range of potential privacy risks, as advances in AI may meaningfully change or exacerbate any of them.
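As a concrete illustration, here is a minimal sketch of how a team might encode such a harm-envisioning checklist for use at project ideation. The risk categories follow the taxonomy summarized below; everything else (the function, field names, and example feature) is hypothetical.

```python
# Illustrative sketch of a harm-envisioning checklist run at ideation.
# The categories and risks follow the taxonomy summarized below; the
# structure and names are hypothetical, not from the paper.

TAXONOMY = {
    "Data Collection": ["Surveillance"],
    "Data Processing": ["Identification", "Aggregation",
                        "Phrenology/Physiognomy", "Secondary Use",
                        "Exclusion", "Insecurity"],
    "Data Dissemination": ["Exposure", "Distortion", "Disclosure",
                           "Increased Accessibility"],
    "Invasion": ["Intrusion"],
}

def harm_envisioning_review(feature_description: str) -> list[dict]:
    """Walk every risk in the taxonomy and record an assessment slot.

    Intended to run before any model training, so unresolved risks can
    reshape the design rather than be patched afterward.
    """
    findings = []
    for category, risks in TAXONOMY.items():
        for risk in risks:
            findings.append({
                "feature": feature_description,
                "category": category,
                "risk": risk,
                "assessment": "TODO: record likelihood, severity, mitigation",
            })
    return findings

if __name__ == "__main__":
    for finding in harm_envisioning_review("voice-enabled shopping assistant"):
        print(f'{finding["category"]:>20} | {finding["risk"]}')
```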
Now for a summary of this excellent paper, presenting the risks in the order they are discussed.
1. Data Collection risks
Surveillance: Watching, listening to, or recording an individual's activities without their knowledge or consent. AI systems exacerbate the potential human harm due to their scale and ubiquity.
2. Data Processing risks
Identification: Linking specific data points to an individual's identity. AI capabilities create new, scalable identification threats.
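To make the identification risk concrete, here is a minimal linkage-attack sketch: a "de-identified" dataset is re-joined to a public roster on quasi-identifiers. All names, columns, and records are fabricated for illustration.

```python
# Minimal sketch of a linkage (re-identification) attack: joining a
# "de-identified" dataset to a public one on quasi-identifiers.
# All data here is fabricated for illustration.
import pandas as pd

deidentified = pd.DataFrame({
    "zip": ["47401", "47401", "94105"],
    "birth_date": ["1980-02-01", "1975-06-12", "1990-09-30"],
    "sex": ["F", "M", "F"],
    "diagnosis": ["asthma", "diabetes", "hypertension"],  # sensitive attribute
})

public_roster = pd.DataFrame({
    "name": ["Alice Example", "Bob Sample", "Carol Test"],
    "zip": ["47401", "47401", "94105"],
    "birth_date": ["1980-02-01", "1975-06-12", "1990-09-30"],
    "sex": ["F", "M", "F"],
})

# The join re-attaches names to the "anonymous" medical records.
reidentified = public_roster.merge(deidentified, on=["zip", "birth_date", "sex"])
print(reidentified[["name", "diagnosis"]])
```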
Aggregation: Combining various pieces of data about a person to make inferences beyond what is explicitly captured. Forecasting behavior and inferring end-user attributes are defining capabilities of AI.
Phrenology/Physiognomy: Inferring personality, social, and emotional attributes about an individual from their physical attributes. AI may learn correlations between arbitrary inputs and outputs that are based on debunked pseudoscience.
Secondary Use: Using personal data collected for one purpose for a different purpose without end-user consent.
Exclusion: Failing to provide end-users with notice and control over how their data is being used. This lack of agency and control allows powerful AI systems to be built without individuals being able to exclude their information from them.
Insecurity: Carelessness in protecting collected personal data from leaks and improper access. Insufficient security controls, such as a lack of end-to-end encryption, could give an AI model unexpected access to personal data. AI models may also be attacked to reveal their training data, causing leaks.
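As an illustration of that training-data leakage risk, here is a toy sketch of a loss-threshold membership inference test, assuming scikit-learn is available; the data, model, and threshold are all fabricated. Records a model was trained on tend to receive lower loss than records it never saw, and an attacker can exploit that gap.

```python
# Toy sketch of a loss-threshold membership inference test: records
# the model saw during training tend to get lower loss than held-out
# records. Data, model, and threshold are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)
X_train, y_train = X[:100], y[:100]   # members (used for training)
X_out, y_out = X[100:], y[100:]       # non-members (never seen)

model = LogisticRegression().fit(X_train, y_train)

def per_example_loss(model, X, y):
    # Negative log-likelihood of the true label for each record.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

threshold = 0.5  # attacker-chosen cutoff; illustrative only
flagged_train = per_example_loss(model, X_train, y_train) < threshold
flagged_out = per_example_loss(model, X_out, y_out) < threshold
print("flagged as members (training set):", flagged_train.mean())
print("flagged as members (held-out set):", flagged_out.mean())
```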
3. Data Dissemination risks
Exposure: Revealing sensitive private information that people typically conceal. Generative AI may reconstruct censored or redacted content, or infer and expose sensitive information, preferences and intentions.
Distortion: Disseminating false or misleading information about people.
Disclosure: Revealing and improperly sharing individuals' personal data. AI expands the disclosure risk as it may infer additional information beyond what was captured in the initial data.
Increased Accessibility: Making potentially sensitive information accessible to a wider audience. Widely available AI LLM chatbots are making such information easier than ever to reach.
4. Invasion risks
Intrusion: Actions that disturb one's solitude or encroach on personal space. AI enables ubiquitous and centralized surveillance infrastructures.
Addressing AI-enabled system privacy risks
The researchers found that in almost all cases, privacy risks were exacerbated by the system's AI enablement, and that typical approaches to these risks are insufficient. Consider the following privacy-enhancing technologies and procedures (a concrete differential-privacy sketch follows at the end of this section):
AI-specific privacy guidance is required so that these risks are evaluated early in design, prior to AI model training. Care should also be taken, as this research may not have uncovered all potential privacy risks: four subcategories in Solove’s taxonomy had no relevant incidents among those reviewed, namely Interrogation, Blackmail, Breach of Confidentiality, and Decisional Interference.
Effective AI governance will include privacy harm-envisioning techniques performed early in design by privacy professionals well-versed in AI-enabled system privacy risks.
This is "early in design" not the typical "after design" questionnaires used for privacy impact assessments.
The paper is available at: https://arxiv.org/pdf/2310.07879.pdf
AI Advisor & Researcher | AI Privacy & Security | AI Risk & Safety | Tech & Legal | PLOT4AI author | ENISA Data Protection Engineering advisor | Expert @CEN/CENELEC JTC21 developing AI European standards
11 个月Eric Lybeck , maybe interesting for you: do you know PLOT4ai? It is a threat modeling library containing 86 AI threats. It is open source and in a couple of weeks gets an update that includes GenAI, third party related threats, and adaptation to the last AI Act text. https://www.plot4.ai It contains an online tool and it is also available in card deck format at Agile Stationery
Autodidacte & Polymathe ? Chargé d'intelligence économique ? AI hobbyist ethicist - ISO42001 ? éditorialiste & Veille stratégique - Muse? & Times of AI ? Techno humaniste & Techno optimiste ?
11 个月AI Muse? Grenoble
Ph.D. in Information Science Candidate at Indiana University Bloomington
11 个月Highly recommended to read our paper here for conversational chatbots: https://arxiv.org/pdf/2402.09716.pdf#:~:text=These%20studies%20emphasized%20the%20following,Manipulation.
Data Protection & Governance dude | Founding member of Data Protection City | unCommon Sense "creative" | Proud dad of 2 daughters
11 个月Right,but at least 10 of these 12 risks are present in other products/ services/ systems, even without AI. Moreover, even if these are known for some time, little was done to mitigate them.
Technology, AI, & Data Privacy Attorney | Manager, Responsible AI @Accenture
11 个月Looking forward to taking a look!