The Growing Danger of Toxic Cyber Additives
Adding noise to information retrieval is corrosive and breaks the chain of trust.

In 2007 and 2008, companies in China were caught using a deadly additive, melamine, in internationally distributed pet foods and domestically distributed milk. Thousands of pets died [1] as a result, and in China, several babies died from contaminated milk [2].

The incentive behind this recurring and deadly problem was simple, even if incredibly cruel: Adding cheap melamine to food products allowed manufacturers to fool tests designed to ensure that the foods had adequate nutritional levels of protein. Even worse, their additions proved directly toxic, increasing harm beyond malnutrition.

But what if I told you that an entire multi-billion-dollar 2024 industry with globally expanding markets also adds an extremely dangerous additive to its products? This additive is specifically and consciously designed to fool the tests that should have verified the value and safety of these products. Also, like melamine, this additive can cause serious harm to property and human lives. You have almost certainly already used some of these products. Without action, your use of them will expand, not shrink.

You haven’t heard about this scandal because all of this is happening in cyberspace. The products are called chatbots. For years, vendors of these products have told the public that these products have passed something called the Turing test and thus are verifiably as intelligent and clever as humans. People have already invested their personal funds, companies, and future hopes in this widely touted success. If such machines can pass a test designed by one of the most indisputably brilliant minds of the 20th century, Alan Turing, how can investing in these products possibly go wrong?

Even more ominously, these products are working their way into our fundamental infrastructure. Roles that once had verifiably sentient humans in them are increasingly passing to chatbots and chatbot derivatives, almost always based on the assumption that the superb memory, networking reach, astronomical speed, and 24-hours-a-day availability of human-smart chatbots make them an obvious choice for replacing slow, unreliable humans. These chatbots have, after all, passed the Turing test.

Only they didn’t. All of them cheated by adding digital melamine, better known as noise.

Correctly and ethically programmed Large Language Models — LLMs, what too many folks call artificial intelligences — are fantastically effective retrieval engines. If you need to find a paper by some author who worked their entire life to figure out a result that is critical to your own business or research, you cannot do better than to use an LLM. It can pull tiny clues from your wording of the problem and figure out just which paper to access and present to you. Innovative retrieval is where LLMs shine and can constructively boost the entire world’s economy. An ethically programmed LLM enhances free-market economies by allowing good ideas from little-known locations to spread and reach every potential market. Others then build on those successes, amplifying both the opportunities for small innovators and the resources available for innovation.
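
To make the retrieval picture concrete, here is a minimal Python sketch. Everything in it, including the tiny PAPERS corpus and the bag-of-words embed() helper, is an invented stand-in rather than anything from a real LLM; the point is only that a pure retrieval step naturally carries the source identifier along with the answer it returns.

```python
# Minimal sketch of "LLM as retrieval engine": given a vaguely worded query,
# score a small corpus of attributed papers and return the closest source.
# The embedding is a toy word-count vector; real systems use learned
# embeddings, but the retrieval-plus-attribution structure is the same.
from collections import Counter
from math import sqrt

PAPERS = {
    "doi:10.0000/alpha": "protein assay methods for detecting adulterants in food",
    "doi:10.0000/beta": "noise injection strategies for language model sampling",
    "doi:10.0000/gamma": "provenance tracking and source attribution in retrieval systems",
}

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector standing in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str):
    """Return (source_id, text) of the best match; the human source stays visible."""
    q = embed(query)
    return max(PAPERS.items(), key=lambda item: cosine(q, embed(item[1])))

source, text = retrieve("how can I trace which author an answer about attribution came from")
print(source, "->", text)  # the source identifier is part of the answer, not hidden
```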

But what if you don’t want that? What if you only care about advancing your interests while making sure that potentially competing innovations from others never see the light of day? Simple: You add noise.

Noise is the “spice” [3] — more accurately, the digital melamine — that allows an unethically designed LLM to pretend it is something more than an incredibly powerful retrieval engine. An ethically programmed LLM always gives the same result for any cluster of closely related queries, and that result always references the specific body of human work that produced it, work often done at great personal cost. When used ethically, there is no ambiguity about what is happening because the LLM takes you directly to the source of the solution. That source — that business, artist, author, engineer, or researcher — benefits in classic free-market style, gaining the attention and funds needed to advance good ideas.

Adding digital melamine — adding noise — beautifully obscures this fair retrieval process by mixing up the results from multiple similar sources just enough to make finding the correct source impossible. The honest, repetitive results of an ethically programmed LLM quickly reveal that it is nothing more than a powerful retrieval engine. In sharp contrast, LLMs “spiced” with digital melamine — with noise — replace these ethical connections to source providers with crafty hodgepodges of similar results with just enough noise added to make it look like the chatbot “thought through” the problem before answering. The LLM owner can then pretend the hodgepodge solution came not from humans but from the chatbot’s Turing-verified, highly boosted, “human-like” intelligence. Without legal remedies, this bit of Turing-fooling deception gives the LLM owner an excuse to avoid crediting creators for their work or even letting users know who did that work, as would happen in a free-market economy.
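
The mechanism can be sketched in a few lines of Python. The relevance scores and author names below are invented for illustration; the sketch only shows how a sampling temperature turns a deterministic, attributable pick into a noise-driven one.

```python
# Sketch of the "digital melamine" step: with near-tied candidate sources,
# a small amount of sampling noise is enough to stop any single source from
# surfacing reliably, which makes attribution look impossible.
from collections import Counter
import math
import random

# Hypothetical relevance scores for three near-tied human sources (invented).
CANDIDATES = {"Author A, 2019": 0.92, "Author B, 2021": 0.90, "Author C, 2020": 0.89}

def pick_source(scores, temperature):
    """temperature ~ 0: always the best source. Higher: the pick becomes noise-driven."""
    if temperature <= 1e-9:
        return max(scores, key=scores.get)  # deterministic and attributable
    weights = [math.exp(s / temperature) for s in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]  # noisy, credit obscured

for t in (0.0, 1.0):
    picks = Counter(pick_source(CANDIDATES, t) for _ in range(1000))
    print(f"temperature={t}:", dict(picks))
# At temperature 0.0 one source gets credit every time; at 1.0 the credit is
# smeared across all three, so no single author can be pointed to.
```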

Even worse, this mixing-to-cover-sources is extremely toxic. The randomness of the mixing process quickly destroys the actual market value of the real sources while fooling users into thinking they are getting “smarter” results than they would from individual sources. Eventually, they fall victim to this deception when their attempts to create actual products fail, sometimes spectacularly, due to the failure of the randomization process to keep critical bits of information in the final mix.
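
A toy illustration of that lossiness, using two invented source procedures: randomly interleaving them drops the one step that only a single source contains roughly half the time.

```python
# Sketch of why random blending is lossy: interleaving two similar procedures
# can silently drop a step that only one of the sources contains.
import random

# Two invented "source" procedures; only SOURCE_A contains the critical safety step.
SOURCE_A = ["measure reagent", "heat to 60 C", "VENT PRESSURE BEFORE OPENING", "record result"]
SOURCE_B = ["measure reagent", "heat to 65 C", "record result"]

def blend(a, b):
    """Randomly interleave the two procedures step by step."""
    out = []
    for i in range(max(len(a), len(b))):
        options = [s[i] for s in (a, b) if i < len(s)]
        out.append(random.choice(options))
    return out

lost = sum("VENT PRESSURE BEFORE OPENING" not in blend(SOURCE_A, SOURCE_B)
           for _ in range(1000))
print(f"critical safety step missing from {lost} of 1000 blended answers")
```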

The toxicity of digital melamine — of intentionally adding noise to LLM retrieval engines for the specific purposes of fooling Turing tests and obscuring free-market sources — exponentially increases if you let this noise toxin creep into your physical infrastructure. Imagine a 911 system controlled entirely by noise-added, melaminized LLM systems that slowly degrade as the noise toxin designed to fool people takes over more and more turf within the previously carefully human-designed 911 system.

This situation cannot continue. State and federal governments need to recognize that toxicity is no longer a purely chemical problem and ban the use of cyber toxins whose only purpose is to fool tests, delude users, destroy the effectiveness of free-market economies for promoting business innovation, and slowly poison systems.


Thanks to Lisa Baird, Dr. Jeffrey Funk, and Sarah Clarke for calling my attention to the excellent Brendan Dixon article. Coming from an environment where safety, accuracy, and information pedigrees are top priorities in the design of robots and AI systems, the concept of intentionally damaging an LLM retrieval system through noise insertion to mimic humans better would never have occurred to me.

A CC BY 4.0 PDF version of this article is available at Apabistia Notes [4]. The original article is available on LinkedIn [5].


[1] Wikipedia, “2007 pet food recalls”.

[2] Wikipedia, “2008 Chinese milk scandal”.

[3] B. Dixon, “What Chatbots Have Achieved, and What They Haven’t - And Can’t,” Mind Matters, May 4, 2024. https://mindmatters.ai/2024/05/what-chatbots-have-achieved-and-what-they-havent-and-cant/

[4] Apabistia Notes: https://sarxiv.org/apa.2024-05-08.1522.pdf

[5] LinkedIn: https://www.dhirubhai.net/pulse/growing-danger-toxic-cyber-additives-terry-bollinger-qyc2c

Akbar Sayeed

Diversely Experienced Electrical Engineer - Consultant, Researcher, Inventor, Technologist, Author, and Dot-Connector Extraordinaire

9 months ago

Terry Bollinger Interesting. This article seems to assume that "Correctly and ethically programmed Large Language Models — LLMs, what too many folks call artificial intelligences — are fantastically effective retrieval engines," i.e., that such LLMs exist. But as far as I can tell they don't, with or without the noise, simply because of the inherent selectivity (intentional or unintentional) in the sources used for training the LLMs. So perhaps the noise is just offering a secondary selective advantage to certain players within an already selective universe of LLMs. Thoughts?

Tom Ormsby

Responsible AI to build a better world.

9 months ago

Whilst I can agree with the sentiment somewhat, you are really highlighting a shortcoming of the Turing Test. The Turing Test has always been critiqued within AI as a test that is behaviourist, and cannot genuinely verify intelligence. It is not a lie to say LLM systems have passed this test. They can fool people into thinking they are talking to a human. The issue is more that passing the Turing test is not a meaningful measure of actual intelligence - just the illusion of it.

Patrick Kizny

Don't listen to me

9 months ago

Actually, the idea of replacing sentient and compassionate humans with systems on a mass scale is not new. It was tested a hundred years ago. It ended in Auschwitz.

Patrick Kizny

Don't listen to me

9 months ago

Wow! So well written and insightful, Terry Bollinger. Thanks for sharing this perspective. "based on the assumption that the superb memory, networking reach, astronomical speed, and 24-hours-a-day availability of human-smart chatbots make them an obvious choice for replacing slow, unreliable humans." → The difference, regardless of how good these answers are, and they are getting better, is _sentience_. And it's scary how we are removing it from the governance of our world and lives.

Brad Hutchings

I help you use generative tools effectively in your life and business. Reluctant expert. Thought follower. 11x developer. Connect and DM me.

9 months ago

I will quibble with one statement in here: "An ethically programmed LLM always gives the same result for any cluster of closely related queries." My favorite LLM query is "Make up a story about Paul Bunyan and Eeyore saving the forest using 7zip." I love this query because even the little Mistral-7B-Instruct LLM running safely sandboxed in a virtual machine on my laptop will make up enough different stories with different overall approaches to keep any bored adult or tech-savvy 8-year-old emergent reader entertained (and reading!!!) for a day. Use of LLMs for delight has to fall under "ethical". It's the killer app for these machines based on what they actually do. You rightly point out what I call the "artificial certitude" that makes these systems gross polluters in common, actual practice. We humans are too quick to suspend disbelief when an immediate, plausible answer is at hand. These systems and their creators exploit that. Bigly, as the kids say. #WrittenByMe
