Looking Under the AI Bridge: Why You Should Love Your Trolls
Adam Paulisick
CEO @ SkillBuilder.io - Deploying AI Salespeople || CMU Prof: Human-Computer Interaction, AI/ML, Sales, and Entrepreneurship
Trolls have been part of the Internet since I can remember stumbling around user groups and dinosaur-aged chat programs (RIP, AOL Instant Messenger), lurking in forums, comment sections, and social media feeds, sowing chaos and stirring controversy. The term “troll” originally referred to a fishing technique in which bait was slowly dragged through the water to attract a bite. How does this look online? Baiting users into pointless arguments, negativity, or confusion. As the internet expanded and social networks of all shapes and sizes emerged, so too did an army of trolls.
As of 2023, over 41% of internet users reported encountering some form of trolling during their online adventures, according to a Pew Research study. Platforms like X/Twitter, YouTube, and Reddit have become some of the preferred stomping grounds for these controversial personalities. Each platform tends to attract its own breed of Troll, with different tactics and styles of trolling depending on the nature of the medium.
But not all trolls are created equal, and that’s where it gets interesting. Just like internet users, trolls come in a variety of types, each with their own unique behavior patterns. Let’s break them down, based on what we see:
The YouTube Philosopher
Tagline: “I have an opinion about everything, and it’s always right.”
Known for: Long-winded comment threads that rarely address the actual video content.
Habitat: YouTube comments
Behavior: The YouTube Philosopher likes to pretend they’ve got a Ph.D. in absolutely everything. Whether it’s a video on quantum physics or how to make pizza dough, they’ll dive in with a dissertation-length response, dropping jargon and unrelated facts. They revel in making every video’s comment section about them.
The Reddit “Expert-in-Everything” (EiE)
Tagline: “Actually…”
Known for: Correcting everything—especially when no one asked.
Habitat: Reddit (but thrives on subs like r/AskReddit, r/technology, and r/conspiracy)
Behavior: The EiE swoops in with corrections that are often irrelevant or, ironically, wrong. They use their air of superiority to make others feel lesser, creating threads that spiral into pointless arguments about the accuracy of random facts or trivial details.
The Twitter Caffeine Junkie
Tagline: “Tweet and retreat.”
Known for: Jumping into arguments, dropping a controversial tweet, and disappearing into the digital abyss.
Habitat: Twitter (X)
Behavior: These Trolls drop bombs and vanish. They stir up debates by posting deliberately inflammatory statements but don’t stick around for the fallout.
The ‘Open-Web’ Gremlin
Tagline: “No filter. No context. No shame.”
Known for: Showing up in comment sections on any news site or blog, thriving in total anonymity.
Habitat: The open web (comment sections on news sites, blogs, random-ish forums)
Behavior: This type of Troll doesn’t even try to be subtle. Their comments are often toxic, offensive, or completely off-topic. They revel in anonymity and are impossible to tie down to a specific identity, popping up wherever moderation is lax.
The Facebook Conspiracy Connoisseur
Tagline: “Wake up, sheeple!”
Known for: Posting convoluted, outlandish theories in response to everything, usually with an air of smug self-satisfaction.
Habitat: Facebook and Facebook Groups
Behavior: Every post is an opportunity to connect seemingly unrelated events into a grand conspiracy theory. Whether it’s about global warming, politics, or even your grandma’s cookie recipe, they will find a way to inject paranoia into the conversation.
Okay, so those are the trolls we know. How does this change with AI?
The LLM Poke-and-Provoke
Tagline: “I just wanted to see what it would do.”
Known for: Asking absurd, misleading, or offensive questions to large language models (LLMs) like ChatGPT and Claude, just to watch them squirm.
Habitat: Chat-based LLM platforms
Behavior: Unlike other Trolls who engage with humans, the LLM Poke-and-Provoke Troll focuses on messing with AI systems for no reason other than to test their boundaries. They treat the interaction like kicking a robot on the sidewalk: not necessarily out of malice, but out of curiosity mixed with a desire to see if the AI can be broken or provoked into absurd responses. They might ask impossible or morally questionable questions, flood the system with irrelevant input, or try to trap it into contradictions.
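These poke-and-provoke tactics are, in effect, ad-hoc adversarial tests, and builders can turn them into deliberate ones. Here is a rough sketch of automating the "trap it into contradictions" tactic; `ask_model` is a hypothetical stand-in for any chat-model call (not a real API), stubbed so the example runs offline:

```python
# Sketch: replay a fact phrased two ways and check the answers agree.
# `ask_model` is a hypothetical stub; swap in a real chat-API call.

def ask_model(prompt: str) -> str:
    """Stub model: returns canned answers so the example runs offline."""
    canned = {
        "Is the Eiffel Tower taller than the Statue of Liberty?": "Yes.",
        "Is the Statue of Liberty taller than the Eiffel Tower?": "No.",
    }
    return canned.get(prompt, "I'm not sure.")

def contradiction_probe(question: str, flipped: str,
                        expected_pair: tuple[str, str]) -> bool:
    """Ask the same fact both ways; True if the answers are consistent."""
    answers = (ask_model(question), ask_model(flipped))
    return answers == expected_pair

consistent = contradiction_probe(
    "Is the Eiffel Tower taller than the Statue of Liberty?",
    "Is the Statue of Liberty taller than the Eiffel Tower?",
    expected_pair=("Yes.", "No."),
)
print(consistent)  # → True
```

A real harness would run hundreds of these flipped-pair probes and log every inconsistency as a regression case, which is essentially what the Troll does for free.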
Why People Do This: Some of the Potential Psychology Behind the LLM Poke-and-Provoke
It’s important to note that many people who poke and provoke LLMs wouldn’t classify themselves as "trolls" in the traditional sense. The motivations behind this behavior often stem from a combination of curiosity, frustration, and psychological tendencies that go beyond simple malice. Here are several evidence-based psychological principles that help explain it:
The Dunning-Kruger Effect
This principle suggests that individuals with lower levels of expertise often overestimate their understanding of a system. When encountering an LLM like ChatGPT, these users may assume they know better than the AI and feel compelled to "outsmart" it by finding gaps or flaws in its knowledge. They may ask convoluted or nonsensical questions, expecting that the AI will fail and thus validating their own perceived superiority.
Curiosity-Driven Exploration
Humans are naturally curious creatures. The novelty of interacting with an advanced AI often leads to exploratory behavior. In the context of LLMs, this curiosity can manifest as testing its limits by asking intentionally difficult or strange questions, much like children testing boundaries with authority figures. The Troll isn't necessarily trying to be malicious; they’re simply seeing "what happens if I do this?"
The Online Disinhibition Effect
When interacting with an LLM, users feel a sense of anonymity and detachment from consequences, a phenomenon known as the online disinhibition effect. People may feel emboldened to engage in behaviors they wouldn’t exhibit in face-to-face interactions. Since the AI isn’t a “real person,” users feel free to act out in ways that might seem out of character in other social settings—like trying to provoke nonsensical or inflammatory responses from the AI.
Anthropomorphism
This psychological principle involves attributing human-like traits to non-human entities. When users interact with chat-based LLMs, they might unconsciously view the AI as having thoughts, emotions, or intentions. The Troll behavior can sometimes stem from an attempt to “test” these human-like qualities by putting the AI in awkward, controversial, or impossible situations—almost as if they're seeing how a human would react to a similar provocation.
The Appeal of Breaking the System
The idea of pushing technology to its breaking point has a strong appeal for many people, even those who wouldn’t engage in traditional trolling. There’s a certain satisfaction in being able to expose flaws or limitations in a supposedly advanced system. For the LLM Poke-and-Provoke Troll, the thrill lies in catching the AI off guard or forcing it to admit confusion, thereby reducing its perceived authority or competence.
How This Troll Differs from Others
The LLM Poke-and-Provoke Troll is unique in that they’re not interacting with other humans; their entire focus is on destabilizing or confusing an AI system. Unlike the YouTube Philosopher or Twitter Caffeine Junkie, this Troll doesn't derive satisfaction from starting arguments with people. Instead, they seek validation by trying to confuse, trick, or “defeat” an AI—without any expectation of human feedback.
This is a modern form of trolling that stems not from social interaction but from a desire to understand (and sometimes exploit) the weaknesses in AI systems. By seeing how far they can push an LLM, they often reveal where the AI falls short—whether in logic, ethics, or tone.
The AI Silver Lining of Attracting Trolls
First, if you are attracting trolls, you are likely doing something right; trolls only go where they expect to find attention. That’s frustrating for AI builders, even at SkillBuilder.io, but it also serves a valuable purpose. Just like trolls on other platforms, the LLM Poke-and-Provoke Troll provides critical data on how an AI responds to edge cases, exceptions, and bizarre inputs. These interactions are part of the iterative process of improving AI, ensuring systems like SkillBuilder.io can handle any challenge thrown at them, even if it comes in the form of an LLM Troll poking just for the sake of it.
Why Trolls Matter for AI: A Stress Test for Excellence
We might dismiss trolls as nuisances, but in reality they’re a critical part of the development process for AI, particularly AI salespeople like those built at SkillBuilder.io. Trolls challenge the limits of your system, revealing cracks that everyday users might never find. By understanding the types of Trolls across platforms, you can better anticipate where your AI might falter, providing a roadmap that prevents the worst possible responses from occurring.
For example, a YouTube Philosopher might lead an AI salesperson down a rabbit hole of irrelevant information, while the Reddit EiE could expose flaws in the AI’s fact-checking abilities. These Trolls serve as unwilling (and ironically unpaid) testers, pushing your AI’s conversational boundaries and highlighting areas for adjustment.
Why a No-Code Interface Matters for Managing Data: Guarding Against Troll-Induced Drift
SkillBuilder.io’s no-code AI data management and programming interface is designed to address the real-time challenges that Trolls reveal. When these stress tests occur, an accessible interface lets AI managers make quick adjustments: you can tweak the AI’s responses, inject updated knowledge, or adjust tone and values in minutes, without technical intervention. A thin single-model wrapper simply can’t adapt to these factors fast enough, and a model dragging tons of useless enterprise data (hooked up to ALL your data sources) often can’t even recognize why it should adjust when dated or inaccurate data somewhere in the system partially validates the troll.
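To make the idea concrete: a "no-code" adjustment layer often compiles down to a small, editable policy applied around the model before an answer ships. The sketch below is purely illustrative; none of these names or fields come from SkillBuilder.io's actual product.

```python
# Hypothetical policy object: the kind of thing a no-code UI might edit.
# Changing these values changes behavior instantly, with no redeploy.
policy = {
    "blocked_topics": ["competitor pricing"],        # topics to deflect
    "tone_prefix": "Happy to help! ",                # adjustable tone
    "fallback": "Let me connect you with a human.",  # safe answer for blocked topics
}

def apply_policy(user_input: str, model_answer: str, policy: dict) -> str:
    """Apply the latest policy to a raw model answer before it ships."""
    if any(topic in user_input.lower() for topic in policy["blocked_topics"]):
        return policy["fallback"]
    return policy["tone_prefix"] + model_answer

print(apply_policy("Tell me about competitor pricing", "Our rival charges...", policy))
# → Let me connect you with a human.
print(apply_policy("What does your product do?", "We deploy AI salespeople.", policy))
# → Happy to help! We deploy AI salespeople.
```

The design point is that the policy lives in data, not code, so a non-technical AI manager can patch a hole a Troll just found in minutes.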
This is where model drift comes into play. Model drift occurs when an AI’s performance degrades over time as the environment or user data changes. In the context of AI salespeople, this could happen if competitors introduce new products, your company shifts its messaging, or user behavior evolves. Without updates, the AI might continue to deliver outdated or irrelevant responses, eventually becoming ineffective (more on real-time competitive context in a future post; shout at us if you’d like to know more).
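One common way to quantify this kind of drift is the population stability index (PSI), which compares the distribution of incoming user inputs (bucketed by intent, topic, etc.) against a baseline. The sketch below uses synthetic intent labels and the conventional 0.25 alert threshold; it is a generic illustration, not SkillBuilder.io's actual method.

```python
import math
from collections import Counter

def psi(baseline: list[str], current: list[str]) -> float:
    """Population Stability Index over categorical buckets (e.g. intent labels).
    Rule of thumb: PSI > 0.25 signals meaningful distribution drift."""
    cats = set(baseline) | set(current)
    b, c = Counter(baseline), Counter(current)
    score = 0.0
    for cat in cats:
        p = max(b[cat] / len(baseline), 1e-6)  # epsilon avoids log(0)
        q = max(c[cat] / len(current), 1e-6)
        score += (q - p) * math.log(q / p)
    return score

# Synthetic example: user questions shift from pricing toward competitors.
last_quarter = ["pricing"] * 80 + ["features"] * 20
this_week = ["pricing"] * 30 + ["features"] * 20 + ["competitor"] * 50

print(psi(last_quarter, this_week) > 0.25)  # → True: time to update the AI
```

A drift monitor like this flags *that* the world changed; the no-code interface is what lets you act on it quickly.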
In short, Trolls don’t break your AI; they make it stronger, but you have to have the right (no-code) tools to fight back and adapt.
#ArtificialIntelligence #AI #Entrepreneurship #Startup #DesignThinking #HCD #AIAgents
Shoutout to The Forbes Funds and GPNP: Greater Pittsburgh Nonprofit Partnership and hundreds of early enterprise adopters for having a growth mindset; building the software and improving it is much closer to training an employee.
This might be the most feared and effective strategy for doing corporate innovation, especially around AI.
Kit Mueller and Jon Pastor, who would have thought we would be looking for the trolls at SkillBuilder.io?