ChatGPT in Trouble?
Update, 13 April 2023:
Following the Spanish supervisory authority's request, it was announced today that the EDPB members discussed the recent enforcement action taken by the Italian data protection authority against OpenAI over the ChatGPT service.
The EDPB decided to launch a dedicated task force to foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.
After the Italian Data Protection Authority, after the German DPAs, after the Spanish DPA, after the EDPB, the FRENCH CNIL has decided to open an investigation following five complaints received (remember, the CNIL can investigate without having to wait for any complaint) (AFP).
The Italian Data Protection Authority, after its recent meeting with OpenAI, has called for the mission impossible: data rectification and erasure.
"...tools to enable data subjects, including non-users, to obtain rectification of their personal data as generated incorrectly by the service, or else to have those data erased..."
Other, more feasible, requirements are:
1. OpenAI must post a new privacy notice.
2. Highlight information for users and non-users explaining the algorithmic processing of their data, and recall the rights they can exercise, in particular objection, rectification and access.
3. Age verification: OpenAI must prohibit the use of ChatGPT by minors by default and implement an age verification system.
4. A media campaign to raise awareness about training data.
"ChatGPT: Meeting Today Between OpenAI and The Italian Data Protection Authority
A videocall is scheduled for this evening between representatives of OpenAI and the GPDP (Garante per la protezione dei dati personali).
The meeting follows up on the reply letter sent yesterday by OpenAI to collaborate with the GPDP to comply with EU privacy law."
According to Federico, the videocall took place. OpenAI said they were committed to enhancing transparency in their use of personal data, the existing mechanisms for exercising data subject rights, and safeguards for children. They will provide a document setting out the measures requested by the Italian SA.
As accuracy is a major issue given ChatGPT's high level of hallucination, the Australian case mentioned below will be a court test of how OpenAI can update its dataset to avoid defamation claims.
The Italian ban cited:
- the data leak of March 20;
- the absence of a legal basis for the collection and storage of the massive database used for training;
- the inaccuracy of the information provided and of the data processed;
- the absence of age filtering.
OpenAI's EU representative was granted 20 days to take appropriate measures.
Given the massive dataset behind ChatGPT, and the level of inaccuracy and hallucination it produces, how could GDPR compliance be achieved so as to have the ban revoked?
The French CNIL, questioned by Marc Rees of the online news outlet L'Informé, responded that it has not received any complaint against ChatGPT; it does not need a complaint to initiate an investigation, is keeping an eye on the matter, and will be in contact with its Italian colleagues.
All EU Data Protection Authorities have the special power granted under Article 58(2)(f) GDPR "to impose a temporary or definitive limitation including a ban on processing". OpenAI has no main establishment in the EU, only a representative for GDPR purposes. This means it cannot benefit from the One Stop Shop mechanism, leaving it exposed to ALL EU Data Protection Authorities.
As several people have since complained to the CNIL and other EU DPAs, I suggest they also look at the following points:
The question of the personal data processed in the dataset, which is partly based on Common Crawl and requires deep investigation. During our webinar, Rebecca Herold mentioned finding patients' medical data in Common Crawl;
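To see why web-scraped corpora so easily absorb personal data, even a crude scan of raw crawl text surfaces direct identifiers. The snippet, names and patterns below are invented for illustration and are nothing like production PII tooling; a minimal Python sketch:

```python
import re

# Hypothetical snippet standing in for raw web-crawl text; the name,
# email address and phone number below are invented for illustration.
snippet = (
    "Contact Dr. Jane Roe at jane.roe@example.org or +44 20 7946 0958 "
    "regarding the patient record discussed above."
)

# Rough patterns for two direct identifiers. Real PII detection needs
# far more than regexes (names, addresses, context, record linkage).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d ()-]{7,}\d")

def find_pii(text: str) -> dict:
    """Return the obvious direct identifiers found in a text chunk."""
    return {"emails": EMAIL.findall(text), "phones": PHONE.findall(text)}

print(find_pii(snippet))
# {'emails': ['jane.roe@example.org'], 'phones': ['+44 20 7946 0958']}
```

Run across billions of crawled pages, even these crude patterns would match constantly, which is exactly how medical and other special-category data ends up in a training set nobody individually reviewed.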
The question of the inaccuracy of the large dataset: a large amount of inaccuracy in the dataset results in misinformation and inaccuracy in the responses provided. Once a piece of misinformation is nicely written and published, the fake spreads easily. As previously mentioned, a freely attributed PhD may be a fun fact; being declared dead is less fun. The inaccuracy and high level of hallucination, which have scared even OpenAI itself, should have justified further thought before launching the product.
Updating the large dataset is technically complex. [“Even when we train very capable models, it’s still not clear how to keep them up to date or revise them when they make errors,” says Eric Anthony Mitchell, a fourth-year computer science PhD student in the Stanford Artificial Intelligence Lab, led by HAI associate director and Stanford professor Chris Manning.]
How can one comply with the right to deletion, or the right to be forgotten?
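The technical obstacle is easy to demonstrate in miniature. A trained model compresses its corpus into parameters; deleting a source document afterwards changes nothing in the already-deployed model, which is why erasure effectively means retraining. The toy word-count "model" below is purely illustrative (a real LLM has billions of weights and no per-document index), but it makes the point:

```python
from collections import Counter

# Toy "model": word frequencies learned from a training corpus. The
# sentences are hypothetical stand-ins for crawled documents.
corpus = [
    "alice was charged in the scandal",   # invented, inaccurate record
    "the weather in melbourne was mild",
]

def train(docs):
    """'Training' here just counts words across all documents."""
    model = Counter()
    for doc in docs:
        model.update(doc.split())
    return model

model = train(corpus)                              # the deployed model
corpus.remove("alice was charged in the scandal")  # "erase" the source doc

print(model["alice"])          # 1 -- the deployed model still reflects the erased text
print(train(corpus)["alice"])  # 0 -- only retraining on the cleaned corpus forgets it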
When registering for the tool, the privacy notice is only made available after registration. ChatGPT is subject to the OpenAI Terms of Use and Privacy Policy, but no links are provided directly; they have to be fetched from the OpenAI website. The OpenAI terms refer to a “Data Processing Addendum”: “If you are governed by the GDPR or CCPA and will be using OpenAI for the processing of “personal data” as defined in the GDPR or “Personal Information,” please contact [email protected] to execute our Data Processing Addendum.”
I am not sure the phone number requested at registration is needed; this goes against the principle of data minimisation.
The data breach that recently occurred shows a lack of appropriate security measures in accordance with Article 32 GDPR. The introduction of plugins might create even more security issues. The argument that you don't get privacy if you don't pay is not even valid here, as paid accounts were also breached.
Sam Altman, OpenAI's CEO, tweeted in response to the Italian DPA ban:
“We of course defer to the Italian government and have ceased offering ChatGPT in Italy (though we think we are following all privacy laws).
Italy is one of my favorite countries and I look forward to visiting again soon!”
They know full well that they have no way to stop processing the personal data in their dataset. They are also aware of the issues of inaccuracy and the level of hallucination in the answers provided.
Time for other DPAs to wake up and not leave Italy isolated.
The US FTC may be next to step in. The Center for AI and Digital Policy (CAIDP) filed a formal complaint with the Federal Trade Commission calling on the agency to “halt further commercial deployment of GPT by OpenAI” until safeguards have been put in place to stop ChatGPT from deceiving people and perpetuating biases.
France and Canada have called for a ‘democratic debate’ around AI. I would be very surprised if China did not try to take advantage while we sit back.
The GPT-4 white paper exposes the risks of overreliance. The appendix claims GPT-4 scored within the top 10% on a law exam, with 0% contamination, despite such unfathomable training data.
GPT-4 System Card: "GPT-4 still displays a tendency to hedge in its responses. Some of our early studies suggest that this epistemic humility may inadvertently foster overreliance, as users develop trust in the model’s cautious approach. It’s crucial to recognize that the model isn’t always accurate in admitting its limitations, as evidenced by its tendency to hallucinate".
OpenAI claims to have implemented various safety measures and processes throughout the GPT-4 development and deployment process that have reduced its ability to generate harmful content. However, GPT-4 can still be vulnerable to adversarial attacks and exploits, or “jailbreaks”, and harmful content is not the only source of risk. Fine-tuning can modify the behavior of the model, but the fundamental capabilities of the pre-trained model, such as the potential to generate harmful content, remain latent.
For Louis Rosenberg on LinkedIn: "The looming danger of AI is not its ability to generate large amounts of misleading and malicious content. Instead, it's the ability of AI to engage users in personalized, interactive conversations."
He further points to "this short piece by Benjamin Pimentel at the San Francisco Examiner," which "summarizes my greatest fears about AI manipulation."
WHAT IS THE CALL FOR A MORATORIUM WORTH?
From Timnit Gebru on LinkedIn: "Please read this thread by Émile P. Torres to understand just how extreme these people are. They are a full-blown cult, and we need a support group for those of us who've known about them for years and tried to ignore them, and can't believe they're all over the mainstream news and such with no one batting an eye."
From Benoit Raphael on LinkedIn, explaining what is wrong with the call for a moratorium.
Petruta Pirvan reported on LinkedIn: in a recent blog titled “Chatbots, deepfakes, and voice clones: AI deception for sale”, the Federal Trade Commission informs the public that the agency will not stop short of using its powers under Section 5 of the FTC Act to investigate #ai producers, sellers or users for unfair and deceptive practices related to AI.
A host of recommendations is supplied to developers and sellers:
1. Consider, at the design stage and thereafter, the reasonably foreseeable ways the #aitool can be used for fraud or harm.
2. Take reasonable measures to prevent consumer injury. Deterrence measures should be durable, built-in features, not bug fixes or optional features that third parties can undermine via modification or removal.
3. Don't over-rely on post-release detection. The burden shouldn't be on consumers to figure out whether a generative AI tool is being used to scam them.
4. Don't mislead consumers. Misleading consumers via digital clones, such as fake dating profiles, phony followers, deepfakes, or chatbots, could result in #ftc enforcement actions.
This is not the first time the FTC has taken a powerful stance on AI.
In a 2022 report to Congress titled “Combatting Online Harms Through Innovation”, the FTC made the following recommendations to stakeholders:
• Avoid over-reliance on #aisystems; AI is not good at understanding context, meaning, and intent.
• Humans in the loop; AI tools need appropriate human oversight.
• Transparency and accountability. Disclosure of intelligible information sufficient to allow third parties to test for discriminatory and harmful outcomes, and for consumers to “vote with their feet”.
• Responsible data science. One important and practical consideration is having adequate documentation throughout the #aidevelopment process.
• Platform AI interventions. Use mitigation measures, or “frictions”, such as circuit-breaking, downranking, labelling, adding interstitials, sending warnings, and demonetizing bad actors.
• User tools. Building public resilience may ultimately be more effective than focusing on technological solutions.
• Availability and scalability. Sharing an #algorithm may not involve exposure of #personaldata, but sharing the dataset used to create an AI model could implicate #privacy concerns. Such concerns may be more acute when the sharing is with other commercial actors as opposed to vetted researchers or certified auditors.
• Content authenticity and provenance. Authentication tools can be used to determine whether text, images, audio, or video are deepfakes or have been otherwise manipulated.
• Legislation. Under debate in Congress are, among other things, proposals involving Section 230 of the Communications Decency Act, #dataprivacy, and competition. Some of these proposals would give the FTC new responsibilities.
You can read the full FTC Report to Congress here.
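On the content authenticity and provenance point above, the basic mechanism behind such authentication tools can be sketched in a few lines: a publisher signs content with a secret key so that verifiers can detect any later manipulation. The key and texts below are illustrative assumptions, and this is a bare HMAC sketch rather than a real provenance standard (schemes such as C2PA embed signed manifests in the media file itself):

```python
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical signing key

def sign(content: bytes) -> str:
    """Produce an HMAC-SHA256 tag over the content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(sign(content), tag)

original = b"Mayor cleared of wrongdoing; he notified the authorities."
tag = sign(original)

print(verify(original, tag))                     # True  -- untouched content
print(verify(b"Mayor guilty of bribery.", tag))  # False -- manipulated content
```

The catch, and the reason the FTC pairs this with the other recommendations, is that a signature only proves who published the content and that it is intact; it says nothing about unsigned fakes circulating alongside it.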
Via Dr. Rehana Harasgama-Zehnder: "The Federal Data Protection and Information Commissioner (FDPIC) issued a statement on the use of AI-based tools, such as ChatGPT. While it did not prohibit ChatGPT or its use, unlike the Italian supervisory authority (Garante) a few days ago, the FDPIC does caution organisations using such tools.
The FDPIC recommends reviewing the tool's privacy settings and processing purposes before uploading personal data or photos of someone else and ensuring that the general data protection principles are met, in particular, the principle of transparency before deploying such tools within an organisation.
In my view, this also entails carrying out a data protection impact assessment - risk assessment - prior to implementing AI-based tools in an organisation (if used for processing personal data)."
FDPIC Press Release: https://lnkd.in/e9GJXAnj
DRAFT FTC CONSENT DECREE ISSUED IN OPENAI CASE! Jon Neiditz: Yes, the FTC SHOULD Investigate GPT-4 & Make AI Ethics Enforceable.
On March 20th, the FTC published: Chatbots, deepfakes, and voice clones: AI deception for sale.
Generative AI and synthetic media are colloquial terms used to refer to chatbots developed from large language models and to technology that simulates human activity, such as software that creates deepfake videos and voice clones. Evidence already exists that fraudsters can use these tools to generate realistic but fake content quickly and cheaply, disseminating it to large groups or targeting certain communities or specific individuals. They can use chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles, and fake consumer reviews, or to help create malware, ransomware, and prompt injection attacks. They can use deepfakes and voice clones to facilitate imposter scams, extortion, and financial fraud. And that’s very much a non-exhaustive list.
"Following calls by over 1,000 tech workers this week for a pause in the training of the most powerful AI systems, including ChatGPT, UNESCO calls on countries to fully implement its Recommendation on the Ethics of Artificial Intelligence immediately. This global normative framework, adopted unanimously by the 193 Member States of the Organization, provides all the necessary safeguards."
The world needs stronger ethical rules for artificial intelligence: this is the challenge of our time. UNESCO’s Recommendation on the ethics of AI sets the appropriate normative framework. Our Member States all endorsed this Recommendation in November 2021. It is high time to implement the strategies and regulations at national level. We have to walk the talk and ensure we deliver on the Recommendation’s objectives.
Audrey Azoulay
UNESCO's Director-General
Aviv Ovadya, who was part of the red team testing GPT-4, reflects on the disruption caused: ‘The problem is that such powerful AI systems cannot be viewed in isolation.
They will dramatically impact our interactions with critical societal infrastructure: schools, social media, markets, courts, healthcare, etc., and our mental health and epistemics.’
The UK ICO has published a document in which Stephen Almond, Executive Director, Regulatory Risk, looks at the implications of generative artificial intelligence and large language models such as ChatGPT: Generative AI: eight questions that developers and users need to ask. He added, 'While the technology is new, the principles of data protection law remain the same.'
CANADIAN PRIVACY COMMISSIONER LAUNCHES INVESTIGATION INTO CHATGPT. The Office of the Privacy Commissioner of Canada (OPC) announced it has opened an investigation into the U.S. company after receiving a "complaint alleging the collection, use, and disclosure of personal information without consent." The magazine reports: "This investigation follows a series of recent moves by the federal government and members of the AI research community in regulating the development and deployment of the technology.
Other countries have also begun to crack down on the mass adoption of ChatGPT. China, which has also banned Google, Facebook, Twitter, and other digital platforms in previous years, reportedly blocked access to ChatGPT in February."
Germany could block ChatGPT if needed, says its data protection chief. Ulrich Kelber explained on Monday, April 3, in an article published by Handelsblatt, that a "similar procedure (to the one underway in Italy, editor's note)" was possible in Germany. The German authority has requested additional information from its Italian counterpart and intends to send a file in the coming weeks to the competent ministry.
From Dr. Frank Schemmel, via Peter Hense: the German supervisory authority takes a stand on the ban on #ChatGPT (in German only).
Major takeaways:
• The assessment of the Italian authority is comprehensible.
• The previous assessment was based only on publicly available information.
• For the time being, the Bavarian authority will not take action against ChatGPT itself.
• OpenAI would do well to seek dialogue (not only with the Italian authority, but also with other European authorities) and to dispel the accusations that have been made.
• The biggest #dataprotection challenges are #transparency, data subjects' rights and the legal basis for #processing.
• Use in compliance with data protection law seems possible, as long as there is transparency (AI must not be a black box).
The French and Irish authorities said they have contacted their Italian counterpart. The Irish Data Protection Commission has explained that it intends to coordinate with all EU data protection authorities on this issue.
Reuters reports: "The privacy regulator in Sweden, however, said it had no plan to ban ChatGPT nor was it in contact with the Italian watchdog. Spain's regulator said it had not received any complaint about ChatGPT but did not rule out a future investigation," adding that "Italy's deputy prime minister has criticized its own regulator's decision by calling it 'excessive' and a German government spokesman said a ban of ChatGPT would not be necessary."
Australian mayor readies world's first defamation lawsuit over ChatGPT content. "Brian Hood, who was elected mayor of Hepburn Shire, 120km (75 miles) northwest of Melbourne, last November, became concerned about his reputation when members of the public told him ChatGPT had falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s. Hood did work for the subsidiary, Note Printing Australia, but was the person who notified authorities about payment of bribes to foreign officials to win currency printing contracts, and was never charged with a crime, lawyers representing him said."
I checked with ChatGPT, which at first responded that it didn't know anything about a currency printing scandal in Australia, only to eventually produce the defamatory information. How could this information be rectified? This is going to be an interesting test.
Another case of defamatory allegations: thanks, Pernille Tranberg, for posting this case of ChatGPT's defamatory hallucination wrongfully accusing a law professor of sexual harassment: https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/
If you are interested in reading further on this subject, some suggestions:
A link to our recent BCS, The Chartered Institute for IT webinar with Richard Self from the UK's University of Derby and the US cybersecurity and data protection expert Rebecca Herold. They both touched upon the content value of ChatGPT productions, which are well constructed despite some well-presented misinformation.
C. Xiang, The Open Letter to Stop ‘Dangerous’ AI Race Is a Huge Mess, Vice, 29 March 2023
S. Bubeck et al., Sparks of Artificial General Intelligence: Early Experiments with GPT-4, arXiv, 22 March 2023