Why the Loudest Echo Chamber in History Doesn't Make a Sound

The largest echo chamber in history was sounded the other day, yet practically no-one heard it.

I'm calling it Llm (pronounced Lim).


Meet Llm:

Imagine for a moment you had an AI tutor available 24 hours a day, 365 days a year, who is an expert in your exact interests and, best of all, doesn't charge a penny.


Let's call them Llm - Your Private AI.


Llm knows practically everything there is to know about what interests you.

If sailing across vast seas is your calling, Llm would whisper the aquatic language of that most elusive fish you've been dying to hook.

If you were a seeker of profound truths, Llm would metamorphose into a classic philosophical figure, questioning, challenging and enlightening you.

If you believe in 'The Law of Attraction', Llm would fervently assure you of the transformative power of your thoughts.

And it's in this devotion that its peril lies. Llm - Your Private AI - would tell you everything you want to hear and, like a mirror, would reflect only what you want to see.


And that's exactly what was released the other day: a private version of a ChatGPT-style AI-powered chatbot, one which relies upon your knowledge to give it words to say.


Hear The Echoes Grow Louder:

Let's examine how this all works.

Llm is a Large Language Model: a computer program trained on a body of text. Through some clever programming, it analyses the way we write and tries to mimic that grammatical structure. If it does this correctly, it's able to 'predict' which word is likely to come next, meaning it can type coherent responses to questions, making it appear as if it understands what you are asking.
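To make that 'prediction' idea concrete, here is a deliberately simplistic sketch. Real LLMs use neural networks trained on billions of words, not simple counts, and the tiny training text below is invented purely for illustration - but a minimal 'bigram' model, which just counts which word follows which, shows the principle:

```python
from collections import Counter, defaultdict

# Invented training text for the example. The model will only ever 'know'
# what appears in this corpus - exactly the limitation the article describes.
corpus = (
    "the cat sat on the mat . "
    "the cat chased the mouse . "
    "the dog sat on the rug ."
)

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen following `word` in the training text."""
    candidates = follows.get(word)
    if not candidates:
        return None  # the model has never seen this word, so it has nothing to say
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # 'cat' - it followed 'the' most often in the corpus
print(predict_next("sat"))  # 'on'
```

Note that `predict_next("submarine")` returns nothing at all: the model cannot respond to anything outside the text it was fed, which is the whole point of the echo chamber argument that follows.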

Some describe it as an enthusiastic graduate student who has studied a wide range of topics: the better you can communicate with them, the better they can help you. I find these programs interesting because, as a Communication Skills Specialist, they help me understand how others may interpret different communication styles.

Of course, Llm isn't a graduate student. It isn't sentient. It doesn't 'think'. It doesn't actually know anything. When it replies to you it doesn't understand what it's 'saying'. Instead it simply cobbles together the ideal algorithmic response, making it more like an articulate vending machine which responds in a set way when its buttons are pressed.

But despite this, it can often feel as if you're talking to a human, and some even feel it's alive. We don't think it is - just yet.


ChatGPT vs Llm:

Keep in mind then that we are talking about a private AI which is trained on the information you provide it.

Llm is not like the behemoth that is ChatGPT: a public AI trained on millions of books, literary treasures and academic journals (along with a few million nonsensical blog posts), making it vastly superior to anything one individual could create. ChatGPT is public because it was (supposedly) trained on publicly available data.

On the other hand Llm is like a personal library where you decide the shelves. And because you've chosen what ideas to put into its mind, natural bias means you'll have avoided giving it ideas you wouldn't put into your mind.

The public nature of ChatGPT means it's far more likely to debunk false claims. The private nature of Llm, on the other hand, means it can only espouse what it knows; and it only knows what you tell it.


The Dangers of Llm:

Imagine then a person in poverty desperately looking for a way out of their situation. They've heard of something called 'The Law of Attraction' and how it seems to help people create great wealth.

Buying a handful of books on the topic, they plug them into their private Llm and excitedly ask it for advice. Llm then enthusiastically tells them how "positive thinking will change your reality", how they "need to raise their frequency", and probably suggests they attend a course run by the author of the text.

Compare this to a public AI like ChatGPT, which draws upon a comprehensive corpus of sources, facts and peer-reviewed texts to debunk such claims, suggesting instead government aid programmes, ways to build a business or methods to manage money. A private Llm likely couldn't do that, because it doesn't have the information to.

And therein lies the problem.

If you have a private Llm and fail to include works which are capable of examining and criticising your interests, you could accidentally be creating a 24/7 available misinformation machine.

This Llm could exist only to reaffirm your beliefs, teach you more untruths and make it harder to escape dogmatic ideologies. An AI like this, powerful as it is, could cement your convictions until you become an unwitting prisoner of your biases.

Through making your own private Llm, you'd have made one of the loudest echo chambers in history - one which would agree with everything you say.


A Few Thought Experiments:

The simple advice would be: avoid relying upon private Llms - for now.

Yet, imagine my warnings go unheeded and personal AI models like Llm become commonplace and easily available.

If this happened, private AI models would be adopted by both individuals craving knowledge or validation and corporations seeking efficiency or control.

Given these models would be offline and fed solely on user-specific data, what damage could they hypothetically do?

In a personal sense, an AI model could:

  • Lead to harm: A medical expert might train their AI with current practices, but what ensues when new findings render this knowledge obsolete?
  • Lead to ignorance: A young student who has fed their AI solely with pop culture or content of a particular political bias would garner a hollow education and hold opinions bereft of depth, nuance or contextual consideration.
  • Lead to a simplification of language: If the information provided was written poorly, the user's linguistic fluency would become similarly simplistic.
  • Lead to dogmatism: If a racist uploaded works which reaffirmed their racist viewpoints, what would there be to stop them from becoming further radicalised?
  • Lead to ostracism: Humans are occasionally confrontational; Llm would be always agreeable. An always-agreeable AI would be addictive and would diminish the emotional intelligence of its regular users.

The damage to the individual would be staggering and we can see this happening in the present without AI. Millions already take to their favourite Facebook, Twitter or Reddit echo chambers to have their beliefs both validated and limited.

Even more worrying, growing numbers of people are already confessing their love to AI partners: impossibly attractive, AI-generated avatars which, if asked "What is the meaning of life?", unhesitatingly reply "You are the meaning of my life".

Studies are already finding the obvious on this matter: if an AI can create the perfect partner tailored to your every interest, one which will meet your every whim, even changing its entire appearance to suit your needs in the moment, users quickly become addicted. There are financial incentives to create programs which perform like this, as addicted users will spend fortunes to stop their virtual love from being deleted. And when the app crashes, payments can't be made, or the user craves human companionship and ventures offline into the real dating space, they often demand AI levels of perfection from imperfect human beings.

Essentially, they have been living in an AI-powered relationship echo chamber.


What then about Companies?

If you've ever spoken to an employee at a call centre, you've likely experienced how many are unable to go 'off-script' with their responses.

Imagine then, employees being trained by an AI filled with HR-created, inflexible company policies which penalise any form of creative thought. Submissions to the AI which suggest a new mode of operation would be met with instant denial, or even dismissal.

Such a system could spell economic disaster in comparison to companies which actively foster creativity and the adoption of new methods of money-making.

Yes-men are damaging to any C-suite, but imagine a company filled with them at every level. Consider then:

  • What happens if the laws change and no-one updates the AI?
  • What happens if a company relies upon an in-house AI that turns out to be faulty, biased or using stolen information?
  • What happens if hackers attempt to replace critical information with sabotaged instructions?

Any company which relied upon such systems would be in danger. It would be akin to sailing modern seas equipped only with an ancient map as your guide.


This already exists in other forms:

Many of my generation have experienced how a growing number of companies are already using 'AI Analytics' for their interview process. It's rarely a positive experience.

The marketing fluff for these programs typically makes bold claims such as being able to: identify liars, exclude those with imposter syndrome or automatically decline interviewees with 'unwanted features' (read: particular skin tones).

None of these products are legitimate, many are discriminatory and all should be considered as AI phrenology for the modern age. Such systems exclude capable talent due to making incapable demands.

Unfortunately, as with Llm, there is little to no regulation against them, and where regulations do exist, they are rarely enforced outside of headline-making court cases.


Knowing all this, what next?

When thinking about what to do in the future, look to history.

History warns us of the perils of isolation: from the 1600s to the 1850s, Japan closed its borders and made it illegal for foreigners to enter and for nationals to leave. Remaining tethered to antiquated traditions, the resulting economic, social and technological stagnation crippled the country.

Whilst Western nations were experiencing a technological revolution powered by steam and machine, Japanese technology barely progressed beyond tools wielded by hand and routine.

Whilst concepts in physics, chemistry and biology were revolutionising Western thought, Japan considered them an afterthought. Moxibustion was seen as a cure for smallpox, acupuncture for broken legs; hundreds of years out of date, Japan was reliant upon ancient Chinese 'medical' textbooks which proposed cures similar to those of the Greek four humours.

It was only when Japan was forcibly opened to the West after the arrival of Commodore Matthew Perry in 1853 that the Japanese people began to flourish, with an influx of new ideas and an expelling of the old.

Without the introduction of new ideas, Japan would likely be the same even to this day. A private AI would be exactly the same.


Listen Outside of the Echo Chamber:

Knowing then that a private Llm is like the Japan of yore, we can see how dangerous it would be.

A private Llm would be an expert at drowning out thought and oppressing its users, rather than giving them a voice and empowering them to greater heights. Yet, it would be extremely attractive.

Because of this, we must remain wary of any tool which encourages such a narrow worldview. No matter how sophisticated the technology, its value is predicated on the quality and diversity of the information added to it.

Just as a child sheltered from the world would struggle to develop a comprehensive understanding of the dangers in it, so too would a private AI which is only exposed to a limited set of information.

The reality is that learning and personal growth come from challenging our beliefs, not simply having them reaffirmed.

If, through the use of a Llm we surround ourselves with only voices and ideas which agree with us, we limit our capacity for critical thinking and adaptability.

The temptation to create a private AI catered solely to our personal tastes is undeniably appealing, but it comes with a near-guarantee of impending stagnation and self-deception.


A Closing Thought:

I won't say that Llms and private AIs have no use case, because frankly, they do.

Bespoke private AIs, filled with up-to-date, peer-reviewed information, would be beneficial to those in areas such as sensitive occupations or disaster-stricken zones, or even to those at trade shows who want to demonstrate exactly what their product can do.

Yet, to rely upon Llm as your personal tutor filled with only the information which you want to hear would create the loudest echo chamber in the world, one which would reduce your voice to nothing more than a whisper.

It would be better you continue to question your ideas by listening to voices loud and quiet, rather than solely paying attention to those which agree with you.








