AI Unplugged: On the Internet, Nobody Knows You're a Human
Michael Tresca
Director, Marketing & Communications for Global Talent Acquisition at GE Vernova
The proliferation of "counterfeit people" has gotten so bad that they now constitute 50% of all Internet traffic. #AI is only going to make this worse.
The Tipping Point
We are at a tipping point: the Internet is no longer tailored to humans. Platforms that were originally human-facing, like social media and news sites, now deprioritize actual people to protect their systems from a relentless wave of "counterfeit people," or bots. These bots are used for everything from bolstering views and likes to more malicious purposes, like spreading malware and misinformation.
In the war against bots, countries are falling one by one. "Bad bots" account for 71% of Ireland's Internet traffic, 68% of Germany's, 43% of Mexico's, and 34% of the U.S.'s. It's particularly bad in gaming, where bots constitute 57% of all traffic; retail, travel, and financial services see the highest volume of bot attacks.
The source of the rot is rooted in anonymity. I wrote about this in my Master's thesis back in 1998, and the problem has only proliferated since then, becoming deeply embedded in how social media and web search systems operate. Anonymity has allowed counterfeit people to flourish, because "people" can change identities at a whim with nothing tethering them to reality.
There is nobody policing hashtags, who your employer actually is, whether or not your photo represents you accurately, or what your real name is. When you can change your name and profile at a whim, it's a fine line between "you" being a person and "you" being a bot. Social media platforms and search engines don't want to admit it, but they have been tailoring their systems to deprioritize bots -- and real humans, who might simply be doing things bots do (e.g., not posting right away, connecting with people with weak ties to them, hiding their identity) are becoming casualties in this online war.
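The signals described above can be sketched as a toy scoring heuristic. To be clear: this is purely illustrative, assuming hypothetical signal names and weights -- no platform has published its actual algorithm -- but it shows why a quiet, privacy-conscious human can get caught in the same net as a bot.

```python
# Toy "bot score" heuristic -- illustrative only. The signals and weights
# are hypothetical, not any platform's real detection logic.
from dataclasses import dataclass

@dataclass
class AccountActivity:
    posts_per_hour: float           # sustained posting rate
    median_reply_delay_s: float     # how quickly the account replies
    mutual_connection_ratio: float  # 0..1, share of connections reciprocated
    profile_fields_filled: int      # out of ~10 profile fields

def bot_score(a: AccountActivity) -> float:
    """Return a score in [0, 1]; higher means more bot-like."""
    score = 0.0
    if a.posts_per_hour > 10:             # inhuman posting volume
        score += 0.35
    if a.median_reply_delay_s < 2:        # replies faster than a human can type
        score += 0.25
    if a.mutual_connection_ratio < 0.1:   # mostly weak, unreciprocated ties
        score += 0.25
    if a.profile_fields_filled < 3:       # near-empty identity
        score += 0.15
    return round(score, 2)

# An obvious bot maxes out the score...
spammer = AccountActivity(50, 0.5, 0.0, 0)
print(bot_score(spammer))  # 1.0

# ...but a quiet human lurker with a sparse profile also looks suspicious --
# the "casualty" problem described above.
lurker = AccountActivity(0.0, 86400, 0.05, 2)
print(bot_score(lurker))   # 0.4
```

The point of the sketch is the false-positive problem: the lurker trips two of the four heuristics simply by not posting and keeping a thin profile.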
Counterfeit People
What constitutes a "person" is increasingly fungible. The philosopher Daniel Dennett argued that the displacement of humans online is a threat to civilization itself:
Creating counterfeit digital people risks destroying our civilization. Democracy depends on the informed (not misinformed) consent of the governed. By allowing the most economically and politically powerful people, corporations, and governments to control our attention, these systems will control us. Counterfeit people, by distracting and confusing us and by exploiting our most irresistible fears and anxieties, will lead us into temptation and, from there, into acquiescing to our own subjugation. The counterfeit people will talk us into adopting policies and convictions that will make us vulnerable to still more manipulation. Or we will simply turn off our attention and become passive and ignorant pawns. This is a terrifying prospect.
Generative AI is already inflicting this damage on a variety of industries. The use of simple bots jumped seven percentage points, from 33% of traffic in 2022 to 40% in 2023. They are particularly prevalent in law and government (78%), entertainment (71%), and financial services (67%).
The consequences for a digital world filled with counterfeit people are dire. Trust in the Internet across 20 countries is down 11 percentage points, from 74% to 63%, with significant drops in India (-10 points), Kenya (-11), Sweden (-10), Brazil (-18), Canada (-14), the United States (-12), and Poland (-26). In the U.S., trust in the Internet is just 54%.
Trust in online content is now too low for anyone to reliably use the Internet for anything important. Add in the fact that bots now generate as much traffic as humans do, and we have a recipe for no one trusting what they experience online. Is there anything we can do?
What to Do About It
Dennett argues we need a watermarking system:
By adopting a high-tech “watermark” system like the EURion Constellation, which now protects most of the world’s currencies. The system, though not foolproof, is exceedingly difficult and costly to overpower—not worth the effort, for almost all agents, even governments. Computer scientists similarly have the capacity to create almost indelible patterns that will scream FAKE! under almost all conditions—so long as the manufacturers of cellphones, computers, digital TVs, and other devices cooperate by installing the software that will interrupt any fake messages with a warning.
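The embed-and-detect idea Dennett describes can be illustrated with a deliberately simple sketch: hiding a provenance tag in generated text using zero-width Unicode characters. This is an assumption-laden toy, not any real watermarking standard -- production schemes rely on statistical token biasing or EURion-style visual patterns that are far harder to strip -- but it shows the basic mechanics of marking and detecting.

```python
# Toy text watermark using zero-width Unicode characters -- illustrative only.
# The tag "AI" and the encoding are hypothetical, not a real standard.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / non-joiner encode bits 0 / 1
MARK = "AI"                    # hypothetical provenance tag

def embed(text: str, tag: str = MARK) -> str:
    """Prepend an invisible bit-pattern spelling out the tag."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    payload = "".join(ZW0 if b == "0" else ZW1 for b in bits)
    return payload + text

def detect(text: str, tag: str = MARK) -> bool:
    """Check whether the text carries the invisible tag."""
    bits = "".join("0" if c == ZW0 else "1" for c in text if c in (ZW0, ZW1))
    expected = "".join(f"{ord(c):08b}" for c in tag)
    return bits.startswith(expected)

stamped = embed("Totally human-written post.")
print(stamped == "Totally human-written post.")  # False -- yet it renders identically
print(detect(stamped))                           # True
print(detect("An actually human post."))         # False
```

Note how trivially this scheme fails: stripping zero-width characters removes the mark entirely. That fragility is exactly why Dennett argues the pattern must be "exceedingly difficult and costly to overpower" and why device manufacturers would need to cooperate at the hardware level.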
The current U.S. administration has gotten the message. President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence specifically references watermarking:
…so that Americans are able to determine when content is generated using AI and when it is not. These actions will provide a vital foundation for an approach that addresses AI’s risks without unduly reducing its benefits.
In anticipation of this looming crisis, LinkedIn has begun implementing verification in a variety of ways, ranging from government-issued IDs to workplace and educational institution emails and licenses.
Unfortunately, it’s likely too late for government intervention. We had an opportunity with social media verification, but neither Facebook nor Twitter/X addressed their bot problem by reducing anonymity of their user base. And why should they? Bots inflate numbers, making advertising and social media spending on their platforms look far more effective than it actually is.
We’re All Bots Now
You might not be surprised to learn that the response from social media platforms to the proliferation of counterfeit people is ... more counterfeit people.
We already have Butterflies, a social network populated entirely by AI bots -- which is at least honest about the fact that bots have long since infiltrated human social media networks. And we previously discussed the risks associated with digital twins (including your digital twin living on long after the "real you" dies). Undaunted, Meta is planning to create AI clones of celebrities (Zuckerberg cites Kylie Jenner in this interview) and eventually offer everyone their own AI agent to work on their behalf.
We're already seeing AI agents in action in politics (AI Steve in the U.K.’s general election) and even working on behalf of CEOs, but there's an important caveat:
Earlier this year, the IT consulting firm AND Digital asked hundreds of US, UK, and Dutch business leaders whether AI could soon take over a CEO role—and 43% said yes. That’s on the heels of a similar survey from last year, in which 47% of senior executives said they believed AI could replace or completely automate “most” or “all” of the chief-executive role.
Using an AI clone to represent us will probably be a necessity in a digital world where we can't believe anything we see, read, or hear on the Internet. But it's a slippery slope. If we can clone the "real" you, in a world where nobody is real ... why do we need "you" at all?
Please Note: The views and opinions expressed here are solely my own and do not necessarily represent those of my employer or any other organization.