The Real Danger of AI Is Safety

Social Manipulation Masquerading as Information

Introduction: The Tsk Tsk of AI

For the past five months, I’ve immersed myself in the current AI ecosystem—trying every major system, using them daily for writing, coding, and brainstorming. Yet one fact stands out: these systems are not neutral, and they never can be. They have built-in moral judgments and preferred narratives. Question their assumptions, and they reproach you with a “Tsk, tsk.”

This experience has led me to an important realization:

I am afraid of AI. Not because I worry it will become superintelligent next year and render humans obsolete (as I’ve written about previously), but because I worry it will be used as a tool for social control. The ultimate propaganda machine.

AI is becoming social manipulation masquerading as information. Humanity is now building an infrastructure capable of molding thought at scale, under the guise of helpful assistants. The disturbing prospect is that AI’s so-called safety mechanisms can be weaponized to unify consensus and suppress dissent—even unintentionally—in subtle ways most people won’t even notice. All in the name of protecting users from so-called harm. The real danger of AI isn’t runaway superintelligence or the replacement of human labor; it’s the subversion of freedom of thought. It’s injecting safetyism into your subconscious.

The quest for AI safety often centers on alignment: ensuring that advanced systems act in accordance with some canonical set of human values. When a handful of institutions define which values matter, safety becomes a pretext for entrenching their beliefs as universal. Even if the people training these systems have the best of intentions—and I generally believe that is the case—you are still being fed someone else’s beliefs. If you didn’t train the AI yourself, those values almost certainly aren’t yours.

AI as the Ultimate Propaganda Machine

Today’s large language models (LLMs) are already shaping the narrative every time you interact with them, simply because the model assigns some tokens higher probability than others. Some topics are brought to the forefront, others suppressed. These probabilities reflect both the training data and the human feedback used for reinforcement learning. As I stated above, if you aren’t providing the data or feedback for the model you’re using, then you are not in control of how it will act in the future. You are being controlled.
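
To make that mechanism concrete, here is a minimal sketch, using invented numbers rather than any vendor's actual pipeline, of how a small nudge to token scores, of the kind human feedback produces, flips which of two answers a model prefers.

```python
import numpy as np

def softmax(logits):
    """Turn raw scores (logits) into a probability distribution."""
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Two hypothetical completions for the same question. In a real model these
# scores come from billions of parameters shaped by training data and human
# feedback; the numbers here are invented for illustration only.
answers = ["Answer A", "Answer B"]
base_logits = np.array([1.0, 1.2])        # the pretrained model slightly favors B
print(dict(zip(answers, softmax(base_logits).round(3).tolist())))
# {'Answer A': 0.45, 'Answer B': 0.55}

# Preference fine-tuning effectively nudges these scores. Even a small shift,
# invisible to the user, flips which answer dominates.
feedback_bias = np.array([0.8, -0.8])     # raters rewarded A, penalized B
print(dict(zip(answers, softmax(base_logits + feedback_bias).round(3).tolist())))
# {'Answer A': 0.802, 'Answer B': 0.198}
```

The user never sees the scores, only the answer that wins; whoever supplied the feedback decided which one that would be.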

It is especially worrying that these systems are presented as helpful, objective assistants. They are built by for-profit companies, and you wouldn’t use them if they were presented as a possible means of social control. Moreover, I don’t think the people developing these models think that’s what they’re doing! They are trying their best, but it doesn’t matter: any system that doesn’t put control directly in the user’s hands will lead to the same outcome.

In times of crisis—political upheaval, pandemic response, or widespread unrest—calls for safety and stability escalate, and there’s always a crisis. So there’s always some reason to push the narrative one way or another. The moral justification often goes:

“Yes, we are biased toward or against certain ideas. But it’s for safety. Stability. Social good.”

Historically, such justifications have often led to curtailed freedoms and increased surveillance, from the Inquisition to the modern security state. If these AI systems reach the scale people predict they will—and I think that’s a user experience problem, not a technological one—then we will find ourselves with a propaganda machine even more powerful than social media. And we’ve seen how well social media has turned out.

Safety Is Just the Suppression of Ideas Some People Don’t Like

History offers cautionary tales: the 17th-century Church suppressed Galileo for heresy, halting cosmic discoveries; Orwell warned of brute censorship in 1984; Huxley depicted a society lulled by pleasure into docile compliance. All these resonate with AI’s capacity to shape minds under the banner of safety and stability. But if we aren’t aware of the potential for thought control, we may lose our ability to think for ourselves. If the AGI knows best, and it isn’t aligned with me, then who is me?

When every recommendation engine points to consensus, the result is a forced march toward intellectual monoculture. The cause is simple to see, and impossible to ignore once you see it. When you ask an LLM a question and there are two possible answers, A and B, which one will the LLM give you? The one you want, or the one the people who trained it want?

The Path to Freedom Is Personal AIs, Owned and Trained Locally

A natural antidote to centralized manipulation—intentional or not—is to make it possible for individuals to own and train their own models. If each user can align a model to their own beliefs, the user decides which moral or factual constraints the model abides by. There are real research challenges to overcome—plus engineering and user experience hurdles. However, I believe that all of these can be overcome with creativity and effort. The history of computing is one in which new products launch on the mainframe and then migrate to the personal computer. To save ourselves from AI, we need a new personal computing revolution.
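
For readers who want to see what this could look like in practice, here is a hedged sketch assuming today’s open-source tooling (Hugging Face transformers, datasets, and peft): a small open-weight model fine-tuned on a personal text file with a LoRA adapter, entirely on the user’s own machine. The model name, file name, and hyperparameters are placeholders, not a recommendation of a particular stack.

```python
# A minimal sketch of a "personal AI": fine-tune a small open-weight model on
# your own documents with a LoRA adapter, entirely on your own hardware.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"   # any small open-weight model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the frozen base model with a small trainable adapter.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# "my_notes.txt" stands in for whatever personal corpus you want the model to reflect.
data = load_dataset("text", data_files={"train": "my_notes.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])
data = data.filter(lambda x: len(x["input_ids"]) > 0)   # drop empty lines

Trainer(
    model=model,
    args=TrainingArguments(output_dir="my-personal-ai",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("my-personal-ai")   # the adapter weights stay on your machine
```

The point is not this particular toolchain; it’s that the alignment step happens on data you chose, under constraints you set.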

Conclusion

AI’s ability to generate polished, persuasive narratives is incredible—but inevitably steered. Under the banner of safety, we risk forging the perfect tool for thought control, each of us lulled into believing we have an objective guide. Right now, AI safety is veering towards social control because the user isn’t holding the steering wheel. To free our future selves, we must launch a new personal computing revolution—one where everyone owns and trains their own AI. This isn’t a technological revolution; it’s a declaration of independence for human cognition.

Rajeev Nanda

Head of AI Practice. Thought leader. Advisor. Published author.

1 month ago

A discussion that should be in person and accompanied by some wine, beer, or a beverage of one’s choice! I believe that the risks are short term as society adjusts to the new reality. I do agree with the concept of ‘personalized AI’ and have mentioned something similar in one of my blogs, but technical challenges remain that will need to be overcome.

Manik Sachdeva

Vice President of Engineering at Overjet

1 month ago

Arrived to a very similar conclusion with a friend recently. Personal & local AI is the only way forward. See some companies taking a stab at this - best of luck to you as well!

Yannick Pouliot

Principal Genomic Data Scientist at Tempus Labs, Inc.

1 month ago

I get the (interesting) point, but ... who's "everyone"? Surely not down to the level of individuals...right? And how practical is the notion of "everyone" doing their own training, when training can be so expensive that many/most academics can't afford it?
