Importance of Language

The term Large Language Model has a key word in the middle: Language. Language is the primary means of human communication and one of the traits that differentiate humans from animals. But language is not merely a way of communicating; it is also the primary way we accumulate and codify knowledge about nearly everything, knowledge that eventually gets passed from one generation to the next.

Language is also the actuator of our inner voice, which facilitates our thinking process. When we deliberately reason, we use that inner voice and think in language constructs. Some of the literature about LLMs, especially the early writing, emphasized their limitations by equating their capabilities to mere language fluency. Certain AI luminaries insisted that LLMs are merely parroting human language skills. What the critics failed to recognize is LLMs' potent use of language as a reasoning mechanism, much as humans use it: not simply the fluent expression of thoughts in a grammatically and syntactically correct way. There is more, much more, to it.

Language as a means of communication and a vehicle for thinking and reasoning


LLM prominence in AI landscape

There is an ongoing debate that LLM-based AI, while covering only a fraction of AI's applicability to real-life situations, disproportionately captivates researchers' valuable time and regulators' attention. I would argue that, among the many subfields of AI, LLMs and Generative AI justly attract the bulk of the attention, because a massive share of human civilization's knowledge has been codified in linguistic artifacts.

Millennia's worth of human civilization's experience has been captured in the form of billions of written documents. Just pause and think about it: written knowledge is the result of reverse engineering what and how humans think and feel about pretty much everything. Drama, recipes of all kinds, philosophical and casual dialogues, fiction, textbooks on psychology, math, physics, and so on, all spread across thousands of years.

Language is how we learn, update our current knowledge, and pass it along. What could be a better, more attractive way of improving this eons' worth of stored human wisdom? What could be better ways of retrieving, transferring, and enacting it? Well, that is exactly what LLMs aim at.

Generative AI and LLM in AI space

In the early days of AI, the hope was that First-Order Logic (FOL), the language of formal logic, would be the best way of capturing the essence of the world and representing our views of it so that computers could successfully reason. Unfortunately, despite its effective formalization of logic and reasoning, FOL does not scale to capture the almost infinite and constantly changing knowledge of the world the way LLMs amazingly manage to do today.
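To make the contrast concrete, here is a minimal sketch (my own illustration, not from the original article) of FOL-style reasoning: a tiny fact base and one rule, chained by simple forward inference. The predicates and the rule are hand-written, which is precisely why FOL knowledge bases struggle to keep up with open-ended, ever-changing world knowledge.

```python
# Facts are (predicate, argument) pairs; rules say premise(X) -> conclusion(X).
# Every fact and rule must be authored by hand, unlike an LLM's learned knowledge.
facts = {("human", "socrates")}
rules = [("human", "mortal")]  # human(X) -> mortal(X)

def forward_chain(facts, rules):
    """Repeatedly apply rules to matching facts until no new facts appear."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, arg in list(derived):
                if pred == premise and (conclusion, arg) not in derived:
                    derived.add((conclusion, arg))
                    changed = True
    return derived

print(forward_chain(facts, rules))
# derives ("mortal", "socrates") alongside the original fact
```

The classic syllogism works perfectly at this scale; the trouble begins when one tries to hand-encode millions of soft, exception-ridden facts about the real world.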

Large Language Models

Here I am going to help you build a mental mini-roadmap to better appreciate what LLMs have to offer today and what to expect from them in the long run.

Large Language Models (LLMs) are the bedrock of Generative AI.

  • What are they?
  • Why should we trust them?
  • To what extent can we rely on them?
  • When should we custom-tailor LLMs?
  • Train them from scratch? Or
  • Fine-tune existing foundation models to our needs? Or
  • Outright build our own?

Every day we are bombarded with deafening announcements of the arrival of yet another "biggest-baddest-mightiest" LLM, rushing us to explore it. We are swamped with narratives on how to use and enhance them. But there is not much material dedicated to why LLMs are so potent and fantastic, borderline magical. There are not many sources addressing laymen's concerns:

  • Why should they trust LLMs to begin with?
  • What are the scientific underpinnings of LLMs that can be intuitively conveyed to the uninitiated, without an advanced STEM degree?

A mini-roadmap

In order to build solid, science-based intuition around LLMs, one needs firm footing in understanding the building blocks. I am going to cover them in the upcoming articles:

World Knowledge: Compression and Representation

The importance of properly compressing massive amounts of information, and properly representing it, in order to effectively infer new, intelligent outcomes. We will talk about phenomenal embeddings, and how LLMs rely heavily on Information Theory to achieve this.
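As a toy illustration of the embedding idea (my own example, with made-up 3-dimensional vectors; real LLM embeddings have hundreds or thousands of dimensions), meaning gets compressed into vectors so that similar concepts land close together, which we can measure with cosine similarity:

```python
import math

# Hypothetical toy embeddings: "king" and "queen" are semantically close,
# "apple" is not. The numbers are invented purely for illustration.
embeddings = {
    "king":  [0.90, 0.80, 0.10],
    "queen": [0.88, 0.82, 0.15],
    "apple": [0.10, 0.20, 0.90],
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # close to 1.0
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # much lower
```

The compression point is that a handful of numbers per word stands in for an open-ended web of associations, and geometric closeness stands in for semantic relatedness.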

LLM foundations: Brain, DNA and analogy

Making sense of the magic of Large Language Models and Deep Neural Networks by drawing a loose analogy with the human brain-and-DNA combo. Similar concepts, different hardware.

Manifolds and representations

Dwelling on how LLMs reason. Manifold Theory is often overlooked, yet absolutely crucial for understanding the magic of LLM reasoning.

LLM Training and Inference

Going deeper into what constitutes LLM training and inference.

LLMs are Dynamical systems with Phase Transitions

The emergent abilities of LLMs, rivaling or exceeding humans', stem from their resemblance to dynamical systems with phase transitions.



The original article can be found here.

Staying cognizant of and passionate about AI is a balancing act: do not lose sight of the AI ethical considerations covered in the AI impact series.
