Google’s Gemini Scandal: A Wake-Up Call for an Algorithmic Impact Assessment on AI systems

The recent scandal over Google’s Gemini and its potential biases has thrust the importance of a rigorous Algorithmic Impact Assessment (AIA) for artificial intelligence (AI) systems into the spotlight.

Google’s Gemini Scandal

As reported by the press, when Google’s brand-new large language model, Gemini, began generating images of racially diverse Nazis, it showcased not only tastelessness but also historical inaccuracy, sparking widespread internet outrage and a PR crisis of monumental proportions. Google Senior VP Prabhakar Raghavan’s admission of the system’s failure to discriminate appropriately in content generation highlighted a significant oversight in AI development.

The controversy escalated with further revelations, such as the chatbot’s refusal to compare Adolf Hitler with contemporary figures, underscoring the system’s biased and offensive responses. Google CEO Sundar Pichai’s response to the crisis, emphasizing the unacceptable nature of the outcomes and the company’s commitment to rectifying these issues, reflects the broader challenges facing AI development in terms of ethics and accuracy.

What Should Companies Looking to Exploit AI Learn From the Scandal?

In an age where AI systems shape critical facets of our society, ensuring these technologies are deployed ethically and responsibly is paramount.

The AI Act obliges providers to ensure that training, validation and testing data sets are subject to data governance and management practices appropriate for the intended purpose of the high-risk AI system. Those practices shall include, among others,

  • examination in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations; and
  • appropriate measures to detect, prevent and mitigate possible biases identified according to the process outlined above.
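
As a purely illustrative example of what the bias examination described above might look like in practice, the following sketch checks whether positive-outcome rates in a tabular training data set differ markedly across a protected attribute. The column names, the pandas-based approach and the 0.8 threshold are assumptions made for illustration, not requirements of the AI Act.

```python
# bias_check.py — illustrative sketch only: column names, the 0.8 rule of thumb
# and the pandas-based approach are assumptions, not AI Act requirements.
import pandas as pd


def disparate_impact_ratio(df: pd.DataFrame, protected: str, outcome: str) -> float:
    """Ratio between the lowest and highest positive-outcome rate across groups."""
    rates = df.groupby(protected)[outcome].mean()
    return rates.min() / rates.max()


if __name__ == "__main__":
    # Hypothetical training data: loan approvals broken down by gender.
    data = pd.DataFrame({
        "gender":   ["F", "F", "F", "M", "M", "M", "M", "F"],
        "approved": [1,   0,   1,   1,   1,   1,   0,   0],
    })
    ratio = disparate_impact_ratio(data, protected="gender", outcome="approved")
    # A common (but not legally mandated) rule of thumb flags ratios below 0.8.
    print(f"Disparate impact ratio: {ratio:.2f}",
          "-> review and mitigation needed" if ratio < 0.8 else "-> no disparity flagged")
```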

In this context, performing an Algorithmic Impact Assessment (AIA) becomes paramount to limit the risk of potential challenges. There is no “official” methodology to run an AIA, but the following might be a valid method:

1. Initiation:

  • Start the AIA at the design phase of a project to understand the scope and nature of the automated decision system;
  • Gather comprehensive information about the project, including the decision to be automated, the system’s design, and the data to be used.

2. Completion of the AIA Questionnaire:

  • Answer a series of questions that assess various risk and mitigation factors related to the project. These questions cover areas such as project details, system capabilities, algorithm transparency, decision-making impact, data sourcing, and mitigation measures.

3. Scoring:

  • Each response contributes to a risk or mitigation score, weighted according to the level of impact or risk mitigation it represents.
  • The final score is calculated based on the responses, providing a measure of the project’s potential impact.

4. Impact Level Determination:

  • Based on the scoring, determine the project’s impact level, ranging from Level I (little to no impact) to Level IV (very high impact).
  • This step helps in understanding the severity and scope of the impact that the automated decision-making system might have (a minimal sketch of the scoring and level mapping is shown after this list).

5. Mitigation and Consultation:

  • Identify and implement mitigation measures to address the risks identified through the AIA;
  • Consult with internal and external stakeholders, including privacy and legal advisors, digital policy teams, and subject matter experts.

6. Documentation and Transparency:

  • Document the AIA process, findings, and decisions made based on the assessment.

7. Review and Update:

  • Regularly review and update the AIA following any significant changes to the system functionality or scope of use.
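
To make steps 3 and 4 more concrete, here is a minimal sketch of how questionnaire responses could be turned into a weighted score and mapped to impact Levels I to IV. The questionnaire items, the weights, the 15% mitigation discount and the level thresholds are hypothetical assumptions chosen for illustration; they do not reflect any official AIA methodology.

```python
# aia_scoring.py — illustrative sketch of AIA steps 3 (scoring) and 4 (impact level).
# Items, weights, the 15% mitigation discount and the thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class Answer:
    question: str        # questionnaire item
    score: int           # points contributed by the selected response
    max_score: int       # maximum points the item can contribute
    is_mitigation: bool  # True for mitigation items, False for risk items


def impact_level(answers: list[Answer]) -> tuple[float, str]:
    """Compute a percentage impact score and map it to Levels I-IV."""
    risks = [a for a in answers if not a.is_mitigation]
    mitigations = [a for a in answers if a.is_mitigation]

    raw_risk = sum(a.score for a in risks)
    max_risk = sum(a.max_score for a in risks) or 1  # avoid division by zero
    mitigation_ratio = (sum(a.score for a in mitigations)
                        / (sum(a.max_score for a in mitigations) or 1))

    # Hypothetical rule: strong mitigation (>= 80% of the achievable
    # mitigation points) reduces the effective risk score by 15%.
    if mitigation_ratio >= 0.80:
        raw_risk *= 0.85

    pct = 100 * raw_risk / max_risk
    if pct <= 25:
        return pct, "Level I (little to no impact)"
    if pct <= 50:
        return pct, "Level II (moderate impact)"
    if pct <= 75:
        return pct, "Level III (high impact)"
    return pct, "Level IV (very high impact)"


if __name__ == "__main__":
    answers = [
        Answer("Does the system make decisions about individuals?", 3, 4, False),
        Answer("Does the system use personal or sensitive data?",   4, 4, False),
        Answer("Is the training data audited for bias?",            2, 2, True),
        Answer("Is there human review of the system's outputs?",    1, 2, True),
    ]
    score, level = impact_level(answers)
    print(f"Impact score: {score:.1f}% -> {level}")
```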

Does this process sound complex to follow? That’s why we enriched our PRISCA AI Compliance tool with a module on the performance of an Algorithmic Impact Assessment, to help organizations carry out such an intricate but essential analysis. You can watch a presentation video of PRISCA AI Compliance HERE and read an article on the AI Act and the changes it introduces HERE.


Italy’s Largest GDPR Fine Against ENEL Highlights Flaws in DPAs’ Enforcement Procedures

The Garante issued the largest GDPR fine ever in Italy against ENEL Energia; the decision, however, shows deficiencies in the enforcement procedures that should be improved to benefit privacy-related values across Europe. Read more

Loot Boxes: Spanish government announces its intention to introduce new regulation

The Ministry of Social Rights, Consumption, and Agenda 2030 of Spain, under the leadership of Pablo Bustinduy, is poised to enact significant changes in the regulatory landscape of the video gaming sector, with a particular emphasis on the contentious issue of loot boxes. Read more


Prisca AI Compliance

Prisca AI Compliance is a turnkey solution to assess the maturity of artificial intelligence systems against the AI Act, privacy regulations, intellectual property rules and much more, providing a compliance score and identifying corrective actions to be undertaken. Read more

Transfer - DLA Piper legal tech solution to support Transfer Impact Assessments

This presentation shows DLA Piper’s legal tech tool named "Transfer", which supports our clients in performing a transfer impact assessment following the Schrems II case. Read more

DLA Piper Turnkey solution on NFT and Metaverse projects

You can have a look at DLA Piper’s capabilities and areas of expertise for NFT and Metaverse projects. Read more

Richard Parr

Futurist | Advisor | Speaker | Author | Educator Generative AI - AI Governance - Human Centered AI - Quantum ML - Quantum Cryptography - Quantum Robotics - Neuromorphic Computing - Space Innovation - Blockchain

Absolutely crucial topic to address and reflect on. Thanks for shedding light on this issue!

Arabind Govind

Project Manager at Wipro

Rigorous Algorithmic Impact Assessments are indeed crucial for ensuring unbiased AI systems.

Avv. Benedetta De Luca

Avvocato - Data Protection Officer (DPO - RPD) - Privacy

Thank you Giulio for these updates, you are truly invaluable!

Wow, this sure got my attention! Giulio Coraggio It's crazy how AI biases can slip through. This post makes a great point about the need for Algorithmic Impact Assessment, especially with the EU #AIAct in play. We gotta ensure our AI systems play fair and square!

Venkatesh Haran

Senior Patent Counsel

The Gemini scandal's shockwaves reverberate as a deafening wake-up call - a clarion demand for AI's evolution to be stringently governed by rigorous Algorithmic Impact Assessments. No longer can we permit biases and inaccuracies to fester unchecked within these increasingly omnipotent systems shaping our reality. AIAs stand as the ethical supervisors, auditing each line of code for pernicious prejudices and fallacies that risk eroding society's bedrock values. Embracing this vital process cements our defiance against the existential perils of unconstrained artificial intelligence's descent into ethical chaos.
