Starting with a clean slate
Amit Adarkar
CEO @ Ipsos in India | Author of Amazon Bestseller 'Nonlinear' | Blogger | Practitioner of Behavioural Economics
We are all creatures of habit. For me, Saturday mornings are reserved for reading or writing with a steaming cup of coffee. But today, I felt like starting with a clean slate. I am listening to one of my all-time favourite bands, Pink Floyd, with a cup of green tea. Bliss! One of the stanzas in the song “Lost for Words” from their album “The Division Bell” goes:
So I opened my door to my enemies
And I ask - could we wipe the slate clean
But they tell me to please go **** myself
You know you just can’t win
Gets you thinking, doesn’t it? If we list all of today’s world problems, the ongoing wars, proxy wars, trade conflicts and so on, we can always trace each one back to a point in human history where the problem started or got blown out of proportion. For India and China, it could be the 1962 war. For Russia and Ukraine, it could be the 1991 dissolution of the USSR. If people approached these conflicts with a clean-slate mentality, things might actually improve. Instead, we add more hate, complication and hegemony to an already full slate, and the problems worsen.
Don’t worry, my intent is not to discuss politics and wars. Let’s leave those to the world’s able politicians! But let’s talk about something that is equally concerning to all of us: Large Language Models, or LLMs. Recently, the media was abuzz with DeepSeek, the Chinese LLM that has taken the world by storm due to its frugal development and energy footprint. There were reports of DeepSeek avoiding explicit answers when probed about where Covid originated. After all, DeepSeek is trained on ‘Chinese data’, as the media explained.
There are also articles about Indian-developed, Indian-data-trained LLMs being just around the corner. I wonder how those Indian LLMs will respond to uncomfortable questions about India!
My point is that LLMs will exhibit the same biases as the data on which they are trained. A Chinese-data-trained LLM will not give the same responses as an Indian-data-trained LLM, even to the same probe. We are not far from a future where each of us will have our own custom LLM, trained on our individual data and carried in our pocket (i.e. on our cell phone). These custom LLMs will continuously tap into what we do and say and train themselves on it. So my custom LLM will think and speak like me, and yours like you.
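If you like to see the idea in code, here is a tiny, toy illustration, nothing more: the two “corpora” below are pure invention, and the little bigram model is only a stand-in for a real LLM. Train the same simple model on two slanted datasets and it will continue the same prompt in two different directions, which is the bias effect in miniature.

```python
# Toy sketch: two "models" trained on different (invented, deliberately slanted)
# corpora answer the same prompt differently, because each can only echo the
# statistics of its own training data.
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Build a bigram table: word -> list of words that follow it in the corpus."""
    words = corpus.lower().split()
    table = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        table[current_word].append(next_word)
    return table

def generate(table: dict, prompt: str, length: int = 8, seed: int = 0) -> str:
    """Continue the prompt by repeatedly sampling the next word from the table."""
    rng = random.Random(seed)
    out = prompt.lower().split()
    for _ in range(length):
        candidates = table.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Two tiny corpora with opposite slants (purely made up for illustration).
corpus_a = "the conflict started because the neighbour provoked us and the neighbour must apologise first"
corpus_b = "the conflict started because we were provoked and so we deserve an apology first"

model_a = train_bigram_model(corpus_a)
model_b = train_bigram_model(corpus_b)

prompt = "the conflict started because"
print("Model A:", generate(model_a, prompt))
print("Model B:", generate(model_b, prompt))
```

Same prompt, two different continuations; scale the corpora up to national or personal data and you have the custom-LLM future in caricature.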
In Gen AI parlance, hallucination happens when an LLM gives responses that are inconsistent with the data on which it was trained. We always think of hallucination as a bad thing. But imagine a scenario where my custom LLM talks to your custom LLM (did you see that cute video of two chatbots talking to each other?) and both of them decide that humans are full of biases. They then decide to wipe the slate clean and come back with more sensible solutions to the world’s problems. Are they hallucinating, or are they actually being sensible? Too much?
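For the curious, that bot-to-bot chat is easy enough to sketch. The snippet below assumes the OpenAI Python SDK and an API key in the OPENAI_API_KEY environment variable; the model name and the two personas are placeholders for illustration, not a recipe.

```python
# Hedged sketch of two "personal" assistants talking to each other.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY are available.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # any chat-completion model would do here

PERSONA_A = "You are Amit's personal assistant and share his habits and views."
PERSONA_B = "You are the reader's personal assistant, with the reader's own biases."

def reply(persona: str, last_message: str) -> str:
    """One turn: answer, in persona, to the other bot's last line."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": persona},
            {"role": "user", "content": last_message},
        ],
    )
    return response.choices[0].message.content

message = "Could we wipe the slate clean on the world's problems?"
for turn in range(4):
    speaker_is_a = (turn % 2 == 0)
    message = reply(PERSONA_A if speaker_is_a else PERSONA_B, message)
    print(("A: " if speaker_is_a else "B: ") + message + "\n")
```

Whether what comes out of such a loop counts as hallucination or as sense is, of course, the question above.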
I think I should stick to my good old coffee-and-reading/writing routine instead of wiping the slate clean every now and then. What do you think?
Associate Director - Data Science at Nielsen India
19 hours ago
Too much? Or just the right amount of mind-expanding thought! Love this perspective.
Brand Strategy, Creative, Social Media Strategy, Content Creation
1 day ago
Your insights on LLMs and their inherent biases are spot on. One additional angle to consider is the ethical responsibility of developers and users in shaping these models. As we move towards more personalized LLMs, it is crucial to establish guidelines and frameworks to ensure these tools promote constructive dialogue and understanding rather than reinforcing existing prejudices. The role of interdisciplinary collaboration involving ethicists, sociologists and technologists will be vital in navigating this complex landscape. Let's hope that with the right approach, LLMs can indeed contribute to a more harmonious world.