2024-W24: Apple announced Apple Intelligence, Google's chief privacy officer to exit without replacement, new EDPS guidelines and more
Eli Atanasov, CIPP/E, PhD
I help businesses and their DPOs put privacy compliance on autopilot, saving them time and money in the process.
Hi privacy navigators,
Here is the latest from the Privacy Navigator - your one-stop destination for everything privacy. Another week full of news and resources has passed. Here are the highlights:
Apple Announces Apple Intelligence and Private Cloud Compute
At its WWDC conference on Monday, Apple unveiled its generative AI initiative, Apple Intelligence. This new technology will be integrated across Apple’s ecosystem, including iPhones, Macs, Mail, Messages, and Photos, offering a more personalized and data-aware experience than broad-based AI systems like ChatGPT or Google's AI Overview.
Launching this fall, Apple Intelligence will be available on the iPhone 15 Pro, iPads, and Macs with M1 series chips or newer. Significant updates will revamp Siri, making it more natural and responsive, allowing follow-up questions, interruptions, and typed text responses. Siri will also be able to leverage ChatGPT for specific requests.
Apple claims that data is processed on-device whenever possible and that cloud-based models (Private Cloud Compute) are employed only when necessary, with user data used exclusively to fulfill the request and remaining inaccessible to Apple.
For even more complex prompts, Siri will ask the user whether they want to share the data with ChatGPT. Users don't need an OpenAI account to do this, but they can link an existing paid subscription.
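Based on Apple's description, the request flow can be pictured roughly as in the minimal Python sketch below. This is a hypothetical illustration of the routing and consent pattern, not Apple's implementation; all function names, heuristics, and the consent prompt are assumptions.

```python
# Hypothetical sketch of the routing pattern Apple describes: on-device
# first, Private Cloud Compute for heavier requests, ChatGPT only after
# the user explicitly agrees. All names and heuristics are illustrative.

def fits_on_device(prompt: str) -> bool:
    # Stand-in heuristic: short, personal-context requests stay local.
    return len(prompt) < 200

def needs_world_knowledge(prompt: str) -> bool:
    # Stand-in heuristic for requests Siri might offer to hand to ChatGPT.
    return "world" in prompt.lower() or "news" in prompt.lower()

def user_consents_to_chatgpt() -> bool:
    # In the flow Apple describes, the user is asked before each hand-off.
    return input("Share this request with ChatGPT? [y/N] ").strip().lower() == "y"

def handle_request(prompt: str) -> str:
    if fits_on_device(prompt):
        return "handled on-device (data never leaves the device)"
    if not needs_world_knowledge(prompt):
        return "handled by Private Cloud Compute (no data retained, per Apple)"
    if user_consents_to_chatgpt():
        return "shared with ChatGPT with the user's consent"
    return "not processed externally (no consent given)"

print(handle_request("Summarise the notes I took yesterday"))
```

The notable design choice is the consent gate: nothing goes to a third party unless the user agrees to that specific request.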
This is a first big step towards more private AI. Hopefully, other big players will join Apple in developing more secure and private ways to use modern AI technologies.
This week's edition is sponsored by?Conformally
AVIS - Your AI Privacy Research Assistant
Imagine having an AI trained on high-quality privacy resources that can help you with any task. Wouldn't this be amazing? That's exactly what we are building.
Our ultimate goal is to create the best privacy research tool possible.
It will take some time and money for us to develop it and that's why we need to know whether professionals like you are interested.
Waitlist subscribers will get early access, can influence the development, and will receive three free months after the official release, plus 50% off the regular price of any subscription.
CNIL Releases First AI Development Recommendations to Ensure GDPR Compliance
On 7 June, the Commission Nationale de l'Informatique et des Libertés (CNIL), the French data protection authority, published its first recommendations for the development of artificial intelligence (AI) systems. These guidelines aim to help professionals innovate responsibly while respecting individuals' privacy rights. The recommendations come after a comprehensive public consultation involving various stakeholders and are designed to align AI development with GDPR requirements.
The CNIL's analysis indicates that AI development can be harmonized with privacy protection. The focus is on creating ethical AI systems, tools, and applications that uphold European values and ensure public trust. The recommendations provide clear, practical guidance to help actors in the AI ecosystem make informed strategic decisions.
There are seven key steps (a short illustrative sketch follows the list):
- Define a Purpose: Specify the AI system's objective.
- Determine Responsibilities: Outline the roles and duties of all involved parties.
- Establish a Legal Basis: Identify the legal grounds for processing personal data.
- Check Data Re-Use: Verify if existing personal data can be re-used.
- Minimize Data Use: Limit the use of personal data to what is necessary.
- Set a Retention Period: Define how long personal data will be retained.
- Conduct a DPIA: Perform a Data Protection Impact Assessment as needed.
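To make these steps concrete, here is a minimal, hypothetical Python sketch of how a DPO might record them for a single AI project. The class and field names are illustrative assumptions and are not part of the CNIL recommendations.

```python
# Illustrative record of the CNIL's seven steps for one AI project.
# The class and all field names are assumptions made for this sketch.
from dataclasses import dataclass

@dataclass
class AIProjectRecord:
    purpose: str                        # 1. the AI system's objective
    controller: str                     # 2. who decides purposes and means
    processors: list[str]               # 2. other parties and their roles
    legal_basis: str                    # 3. e.g. "consent", "legitimate interest"
    data_reuse_checked: bool            # 4. re-use of existing data verified
    data_categories: list[str]          # 5. only what is strictly necessary
    retention_period_months: int        # 6. how long training data is kept
    dpia_required: bool                 # 7. whether a DPIA is needed
    dpia_reference: str | None = None   # link to the completed DPIA, if any

record = AIProjectRecord(
    purpose="Train a support-ticket classifier",
    controller="Example GmbH",
    processors=["Example Cloud Ltd"],
    legal_basis="legitimate interest",
    data_reuse_checked=True,
    data_categories=["ticket text", "product area"],
    retention_period_months=24,
    dpia_required=True,
    dpia_reference="DPIA-2024-007",
)
print(record)
```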
In the coming months, CNIL will release additional how-to guides focusing on the legal basis of legitimate interest, management of data subject rights, information for data subjects, and data annotation and security during AI development. These further recommendations will also be subject to public consultation, ensuring ongoing engagement and refinement.
See the CNIL’s recommendations here.
Google’s Chief Privacy Officer to Exit Without Replacement
Google’s long-standing Chief Privacy Officer, Keith Enright, will leave the company this fall after 13 years of service, with no plans for a replacement. Enright, who has led Google’s efforts in safeguarding user data and implementing security policies, announced his departure in a LinkedIn post, expressing pride in his team's accomplishments. His exit is scheduled for September.
The news of Enright's departure came as a shock to Google employees, as reported by Forbes. Google stated that this move is part of a broader restructuring effort, shifting privacy responsibilities to individual product teams to enhance regulatory compliance.
This announcement follows a series of privacy breaches and security concerns. Leaked internal documents from 404 Media revealed privacy issues at Google between 2013 and 2018, such as improper collection of children’s voice data and recorded license plate numbers. Google confirmed these incidents were resolved and the data was purged.
Additionally, Google recently faced a leak of around 2,500 documents related to its search algorithm, containing information that contradicted previous company statements about search rankings. Google confirmed the documents' authenticity but claimed they lacked context.
Moreover, in December, Google settled a $5 billion lawsuit over allegations of improperly tracking personal data in Chrome's incognito mode. The company's decision not to replace Enright amid these challenges raises questions about its commitment to user privacy and security as it undergoes significant internal changes.
Noyb Calls for Immediate Halt to Meta's AI Data Use Without Consent
The European Center for Digital Rights (Noyb) has filed complaints in 11 European countries to stop Meta's plan to use personal data for undefined AI technology without user consent. Meta’s new privacy policy, set to take effect on June 26, 2024, involves using years of personal posts, images, and tracking data without proper user consent, claiming "legitimate interest" as its legal basis. This policy affects around 4 billion users, including dormant accounts, and allows data sharing with unspecified third parties.
According to the complaints, Meta's policy lacks transparency and violates GDPR requirements, such as the right to be forgotten. Max Schrems, a privacy advocate, criticizes Meta's approach, stating that it overrides user rights and lacks clear legal limits. Meta also makes it difficult for users to opt out, shifting the burden onto them, which Schrems calls absurd and non-compliant with GDPR.
The Irish Data Protection Commission (DPC), Meta’s EU regulator, has been accused of making deals with Meta, enabling these practices. Previous "urgency decisions" by the European Data Protection Board (EDPB) against Meta and the DPC highlight ongoing compliance issues.
Given the policy’s impending implementation, noyb has requested an urgency procedure under Article 66 GDPR. This would allow DPAs to issue preliminary halts and involve the EDPB in an EU-wide decision. The Norwegian DPA has already expressed doubts about the legality of Meta’s approach.
Meta’s inability to distinguish between data from users in the EU/EEA and non-EU countries, or between sensitive and non-sensitive data, exacerbates the problem. Noyb’s complaints highlight violations of multiple GDPR articles, urging immediate regulatory action to prevent widespread data misuse.
EDPS Publishes Guidelines for Generative AI Data Protection in EU Institutions
On 3 June 2024, the European Data Protection Supervisor (EDPS) released preliminary guidelines to ensure compliance with data protection regulations for the use of generative AI by EU institutions. These guidelines aim to help EU bodies process personal data safely and uphold individual rights.
Key Points from the EDPS Guidelines:
- Generative AI Lifecycle: Define the use case, train the model, ensure human oversight, evaluate accuracy, and monitor regularly.
- Checking Personal Data Processing: Institutions must verify claims from AI providers about data processing, especially with web scraping techniques, to ensure compliance with data protection principles.
- Role of Data Protection Officers (DPOs): DPOs must understand AI systems' design and assist in conducting data protection impact assessments (DPIAs) and maintaining transparency.
- Conducting DPIAs: DPIAs are crucial for identifying and mitigating risks to personal data before processing operations begin.
- Lawful Data Processing: Ensure processing is based on clear legal grounds, protect against excessive data use, and adhere to principles like data minimization and accuracy.
- Data Minimization: Limit data processing to what is necessary for specific purposes, use high-quality datasets, and maintain proper data governance (a brief sketch of this point follows the list).
- Data Accuracy: Ensure data is accurate and up-to-date, with regular checks and human oversight to prevent inaccuracies and "hallucinations."
- Fair Processing: Address biases in training data to avoid harm to underrepresented groups and ensure accountability through thorough documentation.
- Data Security: Implement technical and organizational measures to mitigate security risks, including known vulnerabilities and advanced risk assessment techniques.
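As a small illustration of the data minimization point above, the hypothetical Python sketch below keeps only the fields the stated purpose requires and pseudonymizes the identifier before records reach a training pipeline. It is not taken from the EDPS guidelines; the field names and salt handling are assumptions.

```python
# Hypothetical data minimization step before model training: keep only
# the fields the stated purpose requires and pseudonymize identifiers.
# All field names and the salt handling are illustrative assumptions.
import hashlib

ALLOWED_FIELDS = {"ticket_text", "category"}  # what the purpose actually needs

def pseudonymize(user_id: str, salt: str = "rotate-and-store-securely") -> str:
    # One-way hash so records can still be linked without the raw identifier.
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def minimize(record: dict) -> dict:
    reduced = {k: v for k, v in record.items() if k in ALLOWED_FIELDS}
    reduced["user_ref"] = pseudonymize(record["user_id"])
    return reduced

raw = {
    "user_id": "jane.doe@example.com",
    "ticket_text": "My invoice shows the wrong amount.",
    "category": "billing",
    "home_address": "1 Example Street",  # not needed for the purpose, dropped
}
print(minimize(raw))
```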
These guidelines aim to provide EU institutions with practical advice on using generative AI while protecting personal data and complying with GDPR.
See the EDPS Guidelines for Generative AI Data Protection in EU Institutions here.
Privacy Navigator
We have added new resources to the Privacy Navigator. You can enjoy:
- Statement on the interplay between the AI Act and the GDPR for the private sector by the Austrian DPA
- Guideline on the massive collection of personal data from the web for the training of generative artificial intelligence (GenAI) models by the Italian Data Protection Authority (Garante)
- Exploring the ethical, technical and legal issues of voice assistants by CNIL
- Opinion 11/2024 on the use of facial recognition to streamline airport passengers' flow (compatibility with Articles 5(1)(e) and (f), 25 and 32 GDPR) by the EDPB
- AI Development Recommendations to Ensure GDPR Compliance by CNIL
- Orientations for ensuring data protection compliance when using Generative AI systems by EDPS
That's all for now, see you next week!
Eli
email: eli@conformally.com