Importance of Language
Nicos Kekchidis
The term Large Language Model has a key word in the middle: Language. Language is the main means of human communication and one of the things that differentiates humans from animals. But language is not just a way of communicating; it is also the primary way we accumulate and codify knowledge about pretty much everything, knowledge that eventually gets passed from one generation to the next.
Language is also an actuator of our inner voice, and that inner voice facilitates our thinking process. When we deliberately reason, we use the inner voice and think in language constructs. Some of the literature about LLMs, especially the early literature, emphasized their limitations by equating their capabilities with mere language fluency. Certain AI luminaries insisted that LLMs simply parrot human language skills. What the critics failed to recognize is the LLM's potent use of the language paradigm as a reasoning mechanism, much as humans use it: not simply fluent expression of thoughts in a grammatically and syntactically correct way; there is more, much more, to it.
LLM prominence in the AI landscape
There is an ongoing debate that LLM-based AI, while covering only a fraction of AI's applicability to real-life situations, disproportionately captivates researchers' valuable time and regulators' attention. I would argue that among the many subfields of AI, LLMs and Generative AI justly attract the bulk of the attention, because a massive share of human civilization's knowledge has been codified in linguistic artifacts.
Millennia's worth of human civilization's experience has been captured in the form of billions of written documents. Just pause and think about it: written knowledge is the result of reverse engineering what and how humans think and feel about pretty much everything. Drama, recipes of all kinds, philosophical and casual dialogues, fiction, textbooks on psychology, math, physics, and so on, spread out across thousands of years.
Language is how we learn, update our current knowledge, and pass it along. What could be a better or more attractive way of improving this eons-old store of immense human wisdom? What could be better ways of retrieving, transferring, and enacting it? Well, LLMs are aimed at exactly that.
In the early days of AI, the hope was that First Order Logic (FOL), the language of formal logic, would be the best way of capturing the essence of the world and representing our views of it so that computers could reason successfully. Unfortunately, despite its effective formalization of logic and reasoning, FOL does not scale nearly as well at capturing the almost infinite and constantly changing knowledge of the world as LLMs amazingly manage to do today.
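To make that contrast concrete, here is a minimal, purely illustrative sketch of FOL-style knowledge capture. The predicates, individuals, and the tiny forward-chaining loop are all hypothetical, chosen only to show why hand-written rules become brittle: every real-world exception demands yet another rule.

    # Toy facts and rules, propositionalized over named individuals for simplicity.
    facts = {("Bird", "tweety"), ("Penguin", "pingu")}
    rules = [
        (("Penguin",), "Bird"),   # Penguin(x) -> Bird(x)
        (("Bird",), "CanFly"),    # Bird(x) -> CanFly(x)  -- too strong: penguins are an
                                  # exception, and every exception needs another rule
    ]

    def forward_chain(facts, rules):
        # Repeatedly apply every rule until no new facts can be derived.
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            individuals = {x for _, x in derived}
            for premises, conclusion in rules:
                for x in individuals:
                    if all((p, x) in derived for p in premises) and (conclusion, x) not in derived:
                        derived.add((conclusion, x))
                        changed = True
        return derived

    print(sorted(forward_chain(facts, rules)))
    # Also derives ("CanFly", "pingu"), which is wrong -- the knowledge base has to be
    # patched by hand for every exception, and that is exactly where the approach stops scaling.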
Large Language Models
Here I am going to help you build a mental mini-roadmap for better appreciating what LLMs have to offer today and what to expect from them in the long run.
Large Language Models (LLMs) are the bedrock of Generative AI.
Every day we get bombarded with deafening announcements of the arrival of yet another "biggest, baddest, mightiest" LLM, urging us to rush out and explore it. We are swamped with narratives on how to use and enhance them. But there is not much material dedicated to why LLMs are so potent and fantastic, borderline magical, and there are not many sources addressing a layman's concerns.
A mini-roadmap
In order to build solid, science-based intuition around LLMs, one needs a firm footing in understanding the building blocks. I am going to cover them in the upcoming articles:
World Knowledge: Compression and Representation
The importance of properly compressing massive amounts of information and properly representing it in order to effectively infer new, intelligent outcomes. We will talk about the phenomenal role of embeddings and how LLMs rely heavily on Information Theory to achieve this. A tiny sketch of the embedding idea follows below.
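As a teaser, here is a minimal sketch of the embedding intuition, using made-up toy vectors (real LLM embeddings are learned, not hand-picked, and have hundreds or thousands of dimensions): semantically related concepts end up geometrically close.

    import numpy as np

    # Hypothetical 4-dimensional embeddings, hand-picked purely for illustration.
    king  = np.array([0.80, 0.65, 0.10, 0.05])
    queen = np.array([0.78, 0.70, 0.12, 0.04])
    apple = np.array([0.05, 0.10, 0.90, 0.70])

    def cosine(a, b):
        # Cosine similarity: close to 1.0 for vectors pointing in the same direction.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    print(cosine(king, queen))  # high -- related concepts sit close together
    print(cosine(king, apple))  # low  -- unrelated concepts sit far apart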
LLM foundations: Brain, DNA and analogy
Making sense of the magic of Large Language Models and Deep Neural Networks by drawing a loose analogy with the human DNA and brain combo. Similar concepts, different hardware.
Manifolds and representations
Dwelling on how LLMs actually reason. Manifold theory is often overlooked, yet it is absolutely crucial for understanding the magic of LLM reasoning.
LLM Training and Inference
Going deeper into what constitutes LLM Training and Inference.
LLMs are Dynamical Systems with Phase Transitions
The emergent abilities of LLMs, rivaling or exceeding those of humans, are due to their resemblance to dynamical systems with phase transitions.
The original article can be found here.
Staying cognizant and passionate about AI is a balancing act that must not lose sight of AI's ethical considerations, which are covered in the AI impact series.