Understanding how LLMs like ChatGPT work, and implications for humans

I remember being absolutely shocked when ChatGPT came out. And nervous.

But after understanding how it works, my fears subsided. When I recently saw global celebrity authors spouting nonsensical fears around AI, I thought it might be a good idea to share how LLMs (Large Language Models) like ChatGPT really work - for non-techies - and to share my ideas on the implications for humans. So we are able to temper both our awe and our fears.

So what's really happening?

Without thinking too much, complete this sentence:

"I love ______ "

While there can be a million responses (I love ice cream, I love my nation, I love my kid), chances are the first word that came to mind for MOST people is 'you'. 'I love you'. Many other words could come next, but the HIGHEST PROBABILITY word is 'you'.

Now complete this one:

Two people in a Fintech networking event meet each other for the first time at a stall. One asks the other: "Hi there. My name is Thomas. May I know your _______"

While the word could be 'age', 'sun sign' or even 'marital status' - you can see that the highest probability is 'name'.

What about this one?

Two people Thomas and Rita meet at a pub. They have a really interesting conversation and really connect well. It feels like they have known each other for ages. They are just about to leave. Thomas says "I had a really great time. If you don't mind, may I know your _________ "

Many possible answers - but the highest probability is 'phone number'.

Context

Both examples above ended with "May I know your _______". But you answered 'name' in one and 'phone number' in the other. Why? It is because of the CONTEXT that preceded it. If I were to reduce it to strings and words, I would say: "The next word is a function of the string of words that precedes it." The possibilities narrow down. But then how do you decide exactly which word?

Training data

If I feed in millions of human conversations as training data, then it is possible to see that when words like 'networking event', 'first time', 'my', 'name', 'may', 'I', 'know', 'your' come together - USUALLY the next word is 'name'.

Whereas when 'Two people', 'pub', 'interesting conversation', 'connect', 'ages', 'leave', 'really great time', 'don't mind', 'may', 'I', 'know', 'your' come together - USUALLY the next word is 'phone'...and then 'number'. And if I push it further - the next few words might be 'I', 'would', 'like', 'to', 'call', 'you', 'sometime'.
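To make this concrete, here is a toy Python sketch. The four 'training' sentences and the counting helper are invented purely for illustration - a real model learns from billions of sentences and far subtler statistics - but the principle of counting what usually follows is the same:

```python
from collections import Counter

# Toy "training data": four invented sentences standing in for the
# millions of real conversations a model would actually be fed.
training_sentences = [
    "we met at a networking event may i know your name",
    "meeting you for the first time may i know your name",
    "we had a really great time at the pub may i know your phone",
    "loved our chat at the pub if you don't mind may i know your phone",
]

def words_after_your(context_word, sentences):
    """Count which word follows 'your' in sentences that also
    contain the given context word."""
    counts = Counter()
    for sentence in sentences:
        words = sentence.split()
        if context_word not in words:
            continue
        for i in range(len(words) - 1):
            if words[i] == "your":
                counts[words[i + 1]] += 1
    return counts

print(words_after_your("networking", training_sentences))  # Counter({'name': 1})
print(words_after_your("pub", training_sentences))         # Counter({'phone': 2})
```

Change the context word, and the most frequent next word changes with it - that is the whole trick.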

So how do LLMs work?

This is all LLMs do. They take in a lot of training data. And keep calculating the probability of the next word - given the words that came before it. The 'prompt' you give it is the trigger. The 'response' is not 'Here is an intelligent response based on the question you asked'. It is really 'Here is the next bunch of probable words that could follow the set of words you just entered'. As you keep 'conversing', i.e. feeding in more words, all the previous words also get stored as 'context'. So the next set of predicted words seems 'relevant to the current conversation'. It feels 'context specific'. Seems to 'make sense'. Assuming a humongous amount of conversations has been fed in as training data - chances are the predictions will be pretty 'accurate', i.e. 'most expected'.
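As a sketch of that loop, here is a toy word-by-word generator in Python. The probability table is hand-written with invented numbers, and it looks only at the single previous word (a real LLM learns its probabilities from training data and conditions on the ENTIRE context so far), but the predict-append-repeat loop is the same idea:

```python
import random

# A hand-built table of next-word probabilities, standing in for what a
# real LLM learns from billions of sentences (all numbers invented here).
next_word_probs = {
    "i":      {"love": 0.6, "had": 0.4},
    "love":   {"you": 0.7, "ice": 0.3},
    "ice":    {"cream": 1.0},
    "had":    {"a": 1.0},
    "a":      {"really": 1.0},
    "really": {"great": 1.0},
    "great":  {"time": 1.0},
}

def generate(prompt, max_words=10):
    """Predict the next word given the last one, append it to the
    growing context, and repeat - the loop an LLM runs at vastly
    larger scale."""
    context = prompt.split()
    while len(context) < max_words:
        options = next_word_probs.get(context[-1])
        if options is None:  # no idea what usually follows - stop
            break
        words, probs = zip(*options.items())
        context.append(random.choices(words, weights=probs)[0])
    return " ".join(context)

print(generate("i"))  # e.g. "i love you" or "i had a really great time"
```

Each predicted word is appended to the context and becomes part of the input for the next prediction. That is all the 'conversation' is.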

If you say something to a device, and it spouts something out that seems context specific, makes sense, and is what you most expected - the device can seem genuinely 'intelligent'. Much like polished but unintelligent consultants do.

But is it intelligent?

At some level, the truth is - most humans behave exactly this way. Our emotions, needs, thoughts and reactions are conditioned by past experiences. We are predictable and mostly a function of our past. We hear something too many times in a certain context - it becomes our 'truth'. Our opinions are a function of what we have heard from most people - rather than anything original.

And as long as we give the expected response in expected situations - and expected sophisticated responses in expected sophisticated situations - the expected response is often confused with the 'right' response - which makes it good enough to survive. Even to earn a lot of money and be labelled 'successful'.

Most humans, unlike the LLMs, haven't even read that many books. Even if they have, they have not really 'examined' the content. Our 'understanding' of the world comes from what 'Top best sellers', or '50 books that will change your life as per a random guy on the internet', say. If the quantity of knowledge accumulated - and the degree of contextually relevant, logically coherent and socially agreeable responses constructed - is the benchmark for intelligence we are exposed to all our lives, it is possible that we think 'Regardless of how the damn thing works, it seems highly intelligent.' After all, even the first version of ChatGPT was trained on 570 GB of text. That's almost 570,000 books! Maybe that's not intelligent enough to solve global hunger - but intelligent enough to take away most jobs and take over the world?

Examining the "intelligence" of LLMs

Here are 2 thought experiments to help explore this question:

If I say "Seventeen into three equals fifty one" - you would say 'It is correct'. What if a 3-year-old toddler who listens to her parents reciting the tables also spouts "Seventeen into three equals fifty one" - what would you say? 'It is still correct', yes. But the important thing is that you know WHY it is correct - the toddler does not. If she had heard her parents saying "Seventeen into three is chocolate", she would have spouted the same thing. What she says is purely a function of what she is 'trained' to say (aka training data). She does not have the capability to 'examine' this input. She is always 100% sure she is 'correct'. And she is - as far as what she is trained on goes. But WHERE does the intelligence to determine the correctness of her response reside? In the listener! If the listener did not have an understanding of how multiplication or logic works - if the listener was another toddler - he might take the answer at face value. Would this be 'knowledge transfer' then?

But what if the toddler was trained on the 'correct' training data? You can perhaps remember tables till 20, and manually calculate any multiplication in a minute. But what if she has enough capability to memorize and respond to any such multiplication question correctly in 1 second? Does it make her intelligent and an 'expert in multiplication tables'?

To drive home this point further, imagine the toddler is fed training data on multiplication tables - in Swahili as well. Now you don't even have a clue what she is saying (because you do not understand Swahili) - but she is still giving the 'correct' answers, and in Swahili at that!! Does that make her not only a 'multiplication expert' but also superior to you because she can 'speak Swahili'?

The answer is clearly 'No'. The toddler is 'dumb'.
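To see just how little is going on inside, here is the toddler written as a few lines of Python - a lookup table of memorized answers (the entries are invented; a 'better trained' toddler would simply have a bigger table):

```python
# The toddler as a lookup table: every answer memorized, none computed.
memorized_answers = {
    "seventeen into three": "fifty one",
    "two into two": "four",
    "five into six": "thirty",
}

def toddler(question):
    # Perfectly "correct" on anything she was trained on,
    # and clueless one step outside it.
    return memorized_answers.get(question, "chocolate")

print(toddler("seventeen into three"))  # fifty one - sounds like an expert
print(toddler("nineteen into four"))    # chocolate - nothing behind the answers
```

Within her 'training data' she is always 'correct'. One step outside it, and you get 'chocolate'. Swapping the strings for Swahili ones would change nothing about what she understands.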

But can this 'dumbness' still be useful?

An LLM is basically someone who can say: "When you say or ask me something, I can refer to the entire documented knowledge on the planet (domain training data) and tell you whatever words have USUALLY been uttered when such a question was raised (context) - in coherent words (English training data) - without having any clue myself what those words mean." Can this be useful? Most certainly yes. Especially in scenarios where there is no one right answer, e.g. content creation.

How are humans different?

But we would need a human whenever there is a need to:

A) Grasp the intangible relationships between tangible entities/facts first hand (current context forming), as opposed to being told what the context is by another human

B) Imagine the future relationships between these entities/facts (Future context envisioning)

C) Reason out and evaluate if a good answer exists in the existing possibilities (Choice making) to take us from current to future context

D) Come up with a better answer if the existing possibilities are not good enough (creating new possibilities)

E) Grasp the contexts of the involved entities and get buy-in for this new future and the option to get there (Context alignment) by envisioning a context that INCLUDES all their contexts

F) Deal with the intended and unintended consequences of the chosen option, repeating A, B, C, D and E on a continual basis (Responsibility)

So what will be the role of LLMs vs humans?

From the above it becomes clear that LLMs, or any tool, will be fully dependent on the past. Their role will end at recommending solutions based on the past. For example, it could say: "Here are 5 historical situations where 2 countries came together to build common ground, have a dialogue and settle differences without violence - along with the detailed step-by-step approach they took." This can be extremely useful.

But will this suffice to end the Israeli-Palestinian conflict? (Choice making) One can envision a future context ("peaceful relationships"), but turning it into reality involves 'context alignment' and 'creating new possibilities' to get there. Shouting about why "they SHOULD get there" is not enough - and often invalidates contexts rather than aligning them.

In conclusion

LLMs will never 'understand' what they are saying and hence cannot 'take responsibility' for anything. But they can still say a lot about what has worked in the past. In better words. A lot of times this is enough. And if there is no alignment with other entities or innovation involved - machines will help us do it. In short, they will be a productivity tool: To speed up HUMAN 'knowledge processing' which in itself can seem like a big feat. But it is nothing compared to what humans CAN do with this tool.

If I acquire the power to fly like Superman - it might seem very valuable, like I am going to be the next big thing. If I choose to use it to only get my groceries in 1 second instead of 10 minutes, then... :)

Like with any tool, asking 'What will we choose to DO with this tool?' [use case] is a far more important question than 'How powerful is this tool?' [technological capability]. And if my use case cannot solve the most pressing problems humanity faces - then 'Is it worth investing in this technology?' is also a valid question. So the possibility of this tool taking over humans will never be because the tool is more powerful than us. It will be only if we ourselves cannot see our power, or refuse to take responsibility for exercising it.

If we can just see what it means to be fully human and take responsibility for it - we are good. If we cannot - then we won't need AI to destroy us. We are fully capable of doing it ourselves.

