Google’s Gemini Scandal: A Wake-Up Call for an Algorithmic Impact Assessment on AI systems
Giulio Coraggio
Solving Legal Challenges of the Future | Head of Intellectual Property & Technology | Partner @ DLA Piper | IT, AI, Privacy, Cyber & Gaming Lawyer
Google’s recent Gemini scandal and the model’s potential biases have thrust the importance of a rigorous Algorithmic Impact Assessment (AIA) for artificial intelligence (AI) systems into the spotlight.
Google’s Gemini Scandal
As reported by the press, when Google’s brand-new Large Language Model, Gemini, began generating images of racially diverse Nazis, it showcased not only tastelessness but also historical inaccuracy, sparking widespread internet outrage and a PR crisis of monumental proportions. Google Senior VP Prabhakar Raghavan’s admission of the system’s failure to discriminate appropriately in content generation highlighted a significant oversight in AI development.
The controversy escalated with further revelations, such as the chatbot’s refusal to compare Adolf Hitler with contemporary figures, underscoring the system’s biased and offensive responses. Google CEO Sundar Pichai’s response to the crisis, emphasizing the unacceptable nature of the outcomes and the company’s commitment to rectifying these issues, reflects the broader challenges facing AI development in terms of ethics and accuracy.
What Should Companies That Want to Exploit AI Learn From the Scandal?
In an age where AI systems shape critical facets of our society, ensuring these technologies are deployed ethically and responsibly is paramount.
The AI Act obliges providers to train, validate, and test high-risk AI systems on data sets subject to data governance and management practices appropriate for the intended purpose of the system. Those practices include, among others, relevant design choices, data collection processes, data preparation operations (such as annotation, labelling, cleaning, and enrichment), and the examination and mitigation of possible biases.
In this context, performing an Algorithmic Impact Assessment (AIA) becomes paramount to limit the risk of potential challenges. There is no “official” methodology for running an AIA, but the following may be a valid approach:
1. Initiation:
2. Completion of the AIA Questionnaire:
3. Scoring:
4. Impact Level Determination:
5. Identify and implement mitigation measures to address the risks identified through the AIA:
6. Documentation and Transparency:
7. Review and Update:
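To make the scoring and impact-level steps above (steps 3 and 4) more concrete, here is a minimal, hypothetical Python sketch. The question identifiers, weights, and impact-level thresholds are illustrative assumptions only; they are not drawn from the AI Act or from any official AIA methodology.

```python
# Hypothetical sketch of AIA scoring and impact-level determination.
# Question IDs, weights, and thresholds are illustrative assumptions,
# not an official methodology.

IMPACT_LEVELS = [
    (0.25, "Level I - little to no impact"),
    (0.50, "Level II - moderate impact"),
    (0.75, "Level III - high impact"),
    (1.01, "Level IV - very high impact"),
]


def aia_score(answers: dict, weights: dict) -> float:
    """Weighted average of questionnaire answers, normalized to 0..1.

    `answers` maps question IDs to a 0-4 risk rating;
    `weights` maps the same IDs to their relative importance.
    """
    total_weight = sum(weights[q] for q in answers)
    raw = sum(answers[q] * weights[q] for q in answers)
    return raw / (4 * total_weight)  # 4 is the maximum rating per question


def impact_level(score: float) -> str:
    """Map a normalized score to the first impact level whose threshold exceeds it."""
    for threshold, label in IMPACT_LEVELS:
        if score < threshold:
            return label
    return IMPACT_LEVELS[-1][1]


if __name__ == "__main__":
    # Example questionnaire: bias risk rated high, moderate automation,
    # highly sensitive data. All values are made up for illustration.
    answers = {"bias_risk": 3, "automation_degree": 2, "data_sensitivity": 4}
    weights = {"bias_risk": 2.0, "automation_degree": 1.0, "data_sensitivity": 1.5}
    score = aia_score(answers, weights)
    print(round(score, 3), "->", impact_level(score))
```

In practice the questionnaire, weights, and thresholds would come from the organization’s own risk framework, and the resulting impact level would drive which mitigation measures (step 5) and documentation obligations (step 6) apply.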
Does this process sound complex to follow? That’s why we enriched our PRISCA AI Compliance tool with a module for performing an Algorithmic Impact Assessment, helping organizations carry out this intricate but essential analysis. You can watch a presentation video on PRISCA AI Compliance HERE and read an article on the AI Act and the changes it introduces HERE.
Italy’s Largest GDPR Fine Against ENEL Highlights Flaws in DPAs’ Enforcement Procedures
The Garante issued the largest-ever GDPR fine in Italy against ENEL Energia, a decision that nonetheless exposes deficiencies in DPAs’ enforcement procedures that should be improved to better serve privacy-related values across Europe. Read more
Loot Boxes: Government announces its intention to introduce new regulation in Spain
The Ministry of Social Rights, Consumption, and Agenda 2030 of Spain, under the leadership of Pablo Bustinduy, is poised to enact significant changes in the regulatory landscape of the video gaming sector, with a particular emphasis on the contentious issue of loot boxes. Read more
Prisca AI Compliance
Prisca AI Compliance is a turn-key solution to assess the maturity of artificial intelligence systems against the AI Act, privacy regulations, intellectual property rules, and much more, providing a compliance score and identifying corrective actions to be undertaken. Read more
Transfer - DLA Piper legal tech solution to support Transfer Impact Assessments
This presentation shows DLA Piper’s legal tech tool named "Transfer", which supports our clients in performing a transfer impact assessment after the Schrems II case. Read more
DLA Piper Turnkey solution on NFT and Metaverse projects
You can have a look at DLA Piper’s capabilities and practice areas for NFT and Metaverse projects. Read more
Futurist | Advisor | Speaker | Author | Educator Generative AI - AI Governance - Human Centered AI - Quantum ML - Quantum Cryptography - Quantum Robotics - Neuromorphic Computing - Space Innovation - Blockchain
12 months ago: Absolutely crucial topic to address and reflect on. Thanks for shedding light on this issue!
Project Manager at Wipro
12 months ago: Rigorous Algorithmic Impact Assessments are indeed crucial for ensuring unbiased AI systems.
Lawyer (Avvocato) - Data Protection Officer (DPO - RPD) - Privacy
12 months ago: Thank you Giulio for these updates, you are truly invaluable!
Data & AI
12 months ago: Wow, this sure got my attention! Giulio Coraggio It's crazy how AI biases can slip through. This post makes a great point about the need for Algorithmic Impact Assessment, especially with the EU #AIAct in play. We gotta ensure our AI systems play fair and square!
Senior Patent Counsel
12 months ago: The Gemini scandal's shockwaves reverberate as a deafening wake-up call, a clarion demand for AI's evolution to be stringently governed by rigorous Algorithmic Impact Assessments. No longer can we permit biases and inaccuracies to fester unchecked within these increasingly omnipotent systems shaping our reality. AIAs stand as the ethical supervisors, auditing each line of code for pernicious prejudices and fallacies that risk eroding society's bedrock values. Embracing this vital process cements our defiance against the existential perils of unconstrained artificial intelligence's descent into ethical chaos.