Privacy and AI #3
In this edition of Privacy and AI?
PRIVACY
? Privacy in the product design lifecycle ICO guidance
? Iowa to become the 6th state with comprehensive privacy law
? Colorado finalizes Privacy Act Regulations?
? Digital Health Platforms and the Impacts of Pixel Tracking, GoodRx & BetterHelp
? DPC Imposes €750,000 Fine on BOI for Inadequate Data Security Measures
? Argon Medical Devices was fined €220.000 for failure to notify the Norwegian DPA within 72 hours
ARTIFICIAL INTELLIGENCE
? Cyber-security risks of large language models (NCSC)
? The ICO updated the Guidance on AI and Data Protection
? Cybersecurity of AI and Standardisation (ENISA)
? Chatbots, deepfakes, and voice clones: AI deception for sale (FTC)
PRIVACY
Privacy in the product design lifecycle ICO guidance
Considering privacy from the design and development of products
Building the case for privacy: application of privacy laws to the product. Risks to individuals and potential business impacts?
Privacy in the design stage: mapping data needs, choosing the right moments, obtaining valid consent, and communicating privacy information
Privacy in the development stage: defining the appropriate amount of PD required, exploring technical solutions that enhance privacy, and protecting PD in development environments.
Privacy in the launch phase: conducting pre-release checks, factoring privacy into rollout plans, and deciding how best to communicate changes.
Privacy in the post-launch phase: monitoring and triaging fixes, reappraising expectations and norms
The full guidance can be accessed here
Iowa to become the 6th state with comprehensive privacy law
It will enter into force in Jan 2025
Some points to consider
- Personal data does not include?
a) Publicly available information (defined term)
b) Aggregate data: information that relates to a group or category of consumers, from which individual consumer identities have been removed, that is not linked or reasonably linkable to any consumer
- Sensitive data includes precise geolocation data
- To process consumers’ sensitive data it requires only to provide notice and an opportunity to opt-out, instead of opt-in consent
- Response to DSR in 90 days
- No right to rectification
- No need to conduct impact or risk assessments
Link here
Colorado finalizes Privacy Act Regulations
Colorado AG finalization of the Colorado Privacy Act Rules, which implement the Colorado Privacy Act (CPA). Both the CPA and the CPA Rules will enter into effect on July 1, 2023.?
The CPA Rules provide guidance on several aspects addressed in the CPA.?
The official link is broken (see here), but Luis Alberto Montezuma, as usual, was able to obtain the a copy of it (see his post here)
Digital Health Platforms and the Impacts of Pixel Tracking, GoodRx & BetterHelp
The FTC took enforcement action against GoodRx and BetterHelp, two digital healthcare platforms, for sharing user PHI with third parties for advertising.?
Both cases highlighted the use of third-party tracking pixels, which enable platforms to amass, analyze, and infer information about user activity.
The remedies in GoodRx and BetterHelp include strong provisions like bans that place strict, comprehensive limits on whether and how certain user information may be disclosed for advertising.?
FTC banned GoodRx and BetterHelp from sharing PHI for any advertising purposes and also banned BetterHelp from disclosing other PII for re-targeting.
Privacy concerns
Widespread usage of invisible pixels with no way for consumers to avoid. Blocking third party cookies does not entirely prevent PII collection from pixels?
Lack of clarity around data collection and use. With pixels, any type of personal and identifying information can be collected and shared (e.g. to identify social media profiles through matching information such as a user's email address that automatically connect a user to their social media account on the platform)
Personal information may not be effectively removed. E.g., some tracking pixels “hash” personal information to scramble PII such as names or email, which may be inadequate in some cases, because hashes can be reversed or used to link data across different databases.
Link here
DPC Imposes €750,000 Fine on BOI for Inadequate Data Security Measures
The inquiry was commenced after BOI notified the DPC of a series of 10 data breaches relating to the BOI365 banking app. The data breach notifications concerned individuals gaining unauthorised access to other people’s accounts via the BOI365 app.
After investigation, the decision found that BOI had infringed its obligations under Articles 5(1) and 32(1) GDPR as the technical and organisation measures in place at the time were not sufficient to ensure the security of the personal data processed on the BOI365 app.
On security: while there were policies and procedures in place, there were no additional controls to minimise the possibility of human error.
On training: training needs to be frequent, regular and appropriate to the activities being carried out. Training should also be informed by the risks arising from the processing activities.
On organisational measures: 9/10 data breaches were informed by third parties, instead of internal discovery. This indicates no proper testing or organisational measures in place.
More here
Argon Medical Devices was fined €220.000 for failure to notify the Norwegian DPA within 72 hours
The company suffered a cyber-attack which resulted in the attacker gaining unauthorized access to the personal data of all employees in Europe, including a Norwegian employee.
Argon became aware of the personal data breach in question at least on 19 July 2021, and it notified the breach to Datatilsynet 67 calendar days after that date, thus well beyond the statutory deadline imposed by Article 33(1) GDPR for personal data breach notifications.
It took the company over one month to confirm that personal data had been affected by the breach and over three months elapsed from the time the security incident was first detected by Argon in June 2021 to the moment when Argon submitted its notification to Datatilsynet in September 2021.
The DPA considered the delay unjustifiable since the controller did not notify the breach promptly, despite the fact that it was aware that the attacker had “accessed” personal data “subject to a greater degree of sensitivity” such as salary and benefits personal data, and that at that point Argon was unable to confirm the extent to which the broader emails within the affected mailbox account had in fact been accessed.
The DPA further said that in that case, the worse scenario should have been considered and the risks should have been assessed accordingly, including in terms of notification measures.
The EDPB has made clear that it is within the 72 hours after such an awareness that “the controller should assess the likely risk to individuals in order to determine whether the requirement for notification has been triggered”.
If after this short timeframe, the controller is still “uncertain about the specifics of the illegitimate access, the worse scenario should be considered and the risk should be assessed accordingly”, hence in these circumstances an initial notification must be promptly submitted to the competent supervisory authority, without prejudice to the possibility of updating “the supervisory authority if a follow-up investigation uncovers evidence that the security incident was contained and no breach actually occurred.”
Decision here.
ARTIFICIAL INTELLIGENCE
Cyber-security risks of large language models
The National Cyber Security Centre (UK) issued a timely blog post with evaluating risks and making recommendations for the use of Large Language Models (LLM), including ChatGPT.
Flaws identified
? provision of incorrect information (‘hallucinate’ incorrect facts)
? biased, are often gullible (in responding to leading questions, for example)
? they require huge compute resources and vast data to train from scratch
? they can be coaxed into creating toxic content are prone to ‘injection attacks’
While currently data inserted is not used to feed the model for others to query, this information will be processed by the LLM provider and may incorporate the information provided in future versions.
Cybercriminals may use this tools for new attacks, for instance, attackers may
? produce more convincing phishing emails as a result of LLMs
? try techniques they didn't have familiarity with previously
Recommendations
? not to include sensitive information in queries to public LLMs
? not to submit queries to public LLMs that would lead to issues were they made public
Link here
Cybersecurity of AI and Standardisation
European Union Agency for Cybersecurity (ENISA) considered that general-purpose standards (like ISO 27001/27002 and ISO 9001) can contribute to mitigating many of the risks faced by AI.
However there are two remaining questions
1) the extent to which general-purpose standards should be adapted to the specific AI context for a given threat
While AI has some specificities, it is in its essence software; therefore, what is applicable to software can be applied to AI. However, there are still gaps concerning clarification of #ai terms and concepts
? Shared definition of AI terminology and associated trustworthiness concepts (definition of AI is not consistent)
? Guidance on how standards related to the cybersecurity of software should be applied to AI (e.g. data poisoning and data manipulation)
2) whether existing standards are sufficient to address the cybersecurity of AI or they need to be complemented
There are concerns about insufficient knowledge of the application of existing techniques to counter threats and vulnerabilities arising from AI
? The notion of AI can include both technical and organisational elements not limited to software, such as hardware or infrastructure, which also need specific guidance
? The application of best practices for quality assurance in software might be hindered by the opacity of some AI models
? Compliance with ISO 9001 and ISO 27001 is at organisation level, not at system level and determining appropriate security measures relies on a system-specific analysis
? The support that standards can provide to secure AI is limited by the maturity of technological development
? The traceability and lineage of both data and AI components are not fully addressed
? The inherent features of ML are not fully reflected in existing standards
Finally, these are the most obvious aspects to be considered in existing/new standards:
? AI/ML components may be associated with hardware or other software components to mitigate the risk of functional failure, thus changing the cybersecurity risks associated with the resulting set-up?
? Reliable metrics can help a potential user detect a failure
? Testing procedures during the development process can lead to certain levels of accuracy/precision
Link here
The ICO updated the Guidance on AI and Data Protection
The Guidance on AI and Data Protection has been updated after requests from UK industry to clarify requirements for fairness in AI
Some of the updates:
Updated guidance here
Chatbots, deepfakes, and voice clones: AI deception for sale (FTC)
Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones.?
The FTC has clarified that “The FTC Act’s prohibition on deceptive or unfair conduct can apply if you make, sell, or use a tool that is effectively designed to deceive – even if that’s not its intended or sole purpose”.?
Questions posed by the FTC
Original source here
I will discuss these topics more in depth in the ISACA GHC Saturday morning, March 26, 9 - 12 CST, ISACA Greater Houston Chapter with Harvey Nusz, CIPM, CRISC, CGEIT, CISA
ABOUT ME
I'm a data protection consultant currently working for?White Label Consultancy. I previously worked for TNP Consultants and Data Business Services. I have an LL.M. (University of Manchester), and I'm a PhD candidate (Bocconi University, Milano). As a PhD researcher, my research deals with the potential and challenges of the General Data Protection Regulation to protect data subjects against the adverse effects of Artificial Intelligence. I also serve as a teaching assistant in two courses at Bocconi University.
I'm the author of “Data Protection Law in Charts. A Visual Guide to the General Data Protection Regulation“, e-book released in 2021. You can find the book?here
Privacy / Cybersecurity / GRC Evangelist Leading Cross-functional Teams, Working with Legal< Audit & Vendors to Securely Deliver Data Protection by Operationalizing Processes and Controls that Meet Regulatory Standards.
1 年Federico, you did a great job as always and I thank you, very much! You add so much to our discussions, not only our discussion of privacy law, but also for privacy law and newsworthy events in the United States and in North America.
Autodidacte ? Chargé d'intelligence économique ? AI hobbyist ethicist - ISO42001 ? Polymathe ? éditorialiste & Veille stratégique - Times of AI ? Techno-optimiste ?
1 年AI Muse? Grenoble ???? ????
Technology // Data // ML // Competition // Litigation // Travel & Hospitality Industry // Co-host @RegInt: Decoding AI Regulation | Co-author of AI Act compact
1 年Federico, what a time to be alive, right?