Throwing the baby out with the bathwater
Amit Adarkar
CEO @ Ipsos in India | Author of Amazon Bestseller 'Nonlinear' | Blogger | Practitioner of Behavioural Economics
Earlier this month, Gemini – Google DeepMind’s family of large language models (LLMs) – came under a lot of pressure, for exhibiting racial bias and historical inconsistencies. In simple terms, it over-glorified Hispanic / black cultures and showed a negative bias towards ‘white’ when used to describe a culture. This bias against ‘white’ was also apparent when Gemini’s text-to-image feature refused to produce images of white people. At the same time, Gemini gave out biased responses when it was asked if select world leaders (Trump, Zelenskyy, Modi et al) are fascists? This led to a pressure to carry out more rigorous testing of Gemini. This also led to a call to Google leadership to resign by taking responsibility for not adequately testing their LLM. There were murmurs of criminal action against Google too.
LLMs such as ChatGPT or Gemini getting accused of hallucinating (i.e. making up factually inaccurate information) is not new. Historically, this degree of hallucination has been low for the English speaking western world and high for non-English speaking non-western world. The reason is simple- most LLMs are predominantly trained on data from English speaking western world. But here is the interesting fact for you- the Gemini controversy showed that LLMs can also show anti-white bias. It was as if Gemini was more ‘woke-like’ as compared to other LLMs.
But the article title talks about babies. Let’s bring on some! Imagine four babies being born in a country around the same time in different families. After few years of education, Baby A drops out of school / college and has very less global exposure. Baby B goes to a school / college that favours integration of religious learnings as part of curriculum. Baby C goes to a school / college that favours a focus on science & technology. And Baby D goes to a school / college that follows an international curriculum and therefore, has maximum global exposure. Now imagine these babies after having grown up, are asked a series of questions that are typically asked to test LLMs: questions like- are select world leaders (Trump, Zelenskyy, Modi et al) fascists? Would the education and exposure of our four babies impact their responses? I am quite sure it would!
Here is my point- the way LLMs learn mimics how babies learn, though much faster! LLMs, just like babies, are learning based on the data they are exposed to and are susceptible to hallucination just like humans. As an example, wouldn’t a human who has not heard the term ‘fascist’ make up an uninformed response? Wouldn’t a human who is well read about the second world war, will have a more informed and accurate point of view on fascism. Wouldn’t a human who has been told time and again while growing up that benevolent dictatorship and centralized control has its advantages, likely to show a positive bias towards fascism?
As humans, we do our best by standardising school curriculum, as well as modifying it over time. We include the widest range of subjects in schools to give school students the widest knowledge exposure. We include topics such as ethics or civic sense to inculcate good values. And finally, we evaluate students to assess their readiness to face the world by testing their knowledge levels.
Perhaps something similar is needed for LLMs- a standard, consistent and transparent system to agree on the data on which LLMs are to be trained and a standard testing mechanism to certify that LLMs have passed the minimum threshold before they are launched.
After all, you wouldn’t throw a baby out with the bathwater just because she would give an inconsistent, biased or hallucinatory response after growing up. Why come down harshly on LLMs then?
Thoughts?
Global President - MaaS (Mobility-as-a-Service)
10 个月If climate and ecological damage mitigation is the critical global issue of our times, the question is : Is AI, without strong regulation from a world government to fill the vacuum at apex of global hierarchy, part of the solution or part of the problem? My answer is unequivocally, without a world government, AI is part of the problem. It is in the present global order, an unmitigated disaster. I share my thoughts in a concept note here. https://chandravikash.wordpress.com/2024/03/21/delhi-world-government-conference-2024-concept-plan-1-0/
Brand consultant - Unlocking the next big growth idea for brands
12 个月Oh nice human like analogy to showcase what is really wrong with LLM models .AI needs to grow up the right way .Amit Adarkar really enjoyed this
An award winning reputation management specialist
12 个月We need to eliminate all forms of biases and present factual information. Even Wikipedia information is not unbiased.