AI Manipulation: An Urgent Issue that Needs to be Addressed
Future of Marketing Institute
Global forum on teaching, research, and outreach on future of marketing topics | Schulich School @ York University
It won't be long before our lives are filled with conversational AI agents designed to help us at every turn. They'll anticipate our wants and needs, feed us tailored information and perform useful tasks on our behalf.
They will do this using an extensive store of personal data about our individual interests and hobbies, backgrounds and aspirations, personality traits and political views—all with the goal of making our lives 'more convenient'.
These agents will be extremely skilled.
Recently, OpenAI released GPT-4o, its next-generation chatbot that can understand human emotions. It does this not just by reading sentiment in the text you write, but also by assessing the inflections in your voice (if you speak to it through a mic) and your facial cues (if you interact through video).
The Future of Computing is Coming Fast
A few days later, Google announced Project Astra—short for 'advanced seeing and talking responsive agent'. The goal is to deploy an assistive AI that can interact conversationally with you while making sense of what it sees and hears in your surroundings. This will enable it to provide interactive guidance and assistance in real-time.
And OpenAI’s Sam Altman told MIT Technology Review that the killer app for AI is assistive agents. In fact, he predicted everyone will want a personalized agent that acts as 'a super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had,' all captured so it can take useful actions on your behalf.
What Could Possibly Go Wrong?
As I wrote in VentureBeat last year, there is a significant risk that AI agents can be misused in ways that compromise human agency. In fact, I believe targeted manipulation is the single most dangerous threat posed by AI in the near future, especially when these agents become embedded in mobile devices.
After all, mobile devices are the gateway to our digital lives, from the news and opinions we consume to every email, phone call, and text message we receive. These agents will monitor our information flow, learning intimate details about our lives, while also filtering the content that reaches our eyes.
Any system that monitors our lives and mediates the information we receive is a vehicle for interactive manipulation. To make this even more dangerous, these AI agents will use the cameras and microphones on our mobile devices to see what we see and hear what we hear in real-time.
This capability (enabled by Multimodal Large Language Models) will make these agents extremely useful—able to react to the sights and sounds in your environment without you needing to ask for their guidance. It could also be used to trigger targeted influence that matches the precise activity or situation you are engaged in.
For many people, this level of tracking and intervention sounds creepy. Yet, I predict they will embrace the technology.
After all, agents will be designed to make our lives better, whispering in our ears as we go about our days, ensuring we don’t forget to pick up our laundry when walking down the street, tutoring us as we learn new skills, even coaching us in social situations to make us seem smarter, funnier, or more confident.
This will become an arms race among tech companies to augment our mental abilities in the most powerful ways possible. And those who choose not to use these features will quickly feel disadvantaged. Eventually, it will not even feel like a choice. This is why I regularly predict that adoption will be extremely fast, becoming ubiquitous by 2030.
Why Not Embrace an Augmented Mentality?
As I wrote about in my new book, Our Next Reality, AI agents will give us mental superpowers. But we cannot forget these are products designed to make a profit.
And by using these products, we will let corporations whisper in our ears and flash images in our eyes that nudge us, coach us, educate us, and caution us throughout our days.
In other words, we will allow AI agents to influence our thoughts and guide our behaviors. When used for good, this could be an amazing form of empowerment, but when abused it could easily become the ultimate tool of persuasion.
This brings me to the 'AI Manipulation Problem'—the fact that targeted influence delivered by conversational agents is potentially far more effective than traditional content. If you want to understand why, just ask any skilled salesperson. They know that the best way to coax someone into buying a product or service (even one they don’t need) is not to hand them a brochure, but to engage in a dialog.
A good salesperson will start with friendly banter to 'size you up' and lower your defences. They will then ask questions to surface any reservations you may have. Finally, they will customize their pitch to overcome your concerns, using carefully chosen arguments that best play on your needs or insecurities.
The reason AI manipulation is such a significant risk is that AI agents will soon be able to pitch us interactively, and they will be significantly more skilled than any human salesperson (see video example). This is not simply because agents will be trained to use sales tactics, behavioral psychology, cognitive biases, and other tools of persuasion, but also because they will be armed with far more information about us than any salesperson.
In fact, if the agent is your 'personal assistant', it could know more about you than any human ever has. (For a depiction of AI assistants in the near future, read my 2021 short story Metaverse 2030 or watch the video below.)
Synthetic Emotional Intelligence
In addition, these AI agents will be able to read your emotions with greater precision than any human—detecting 'mis-expressions' on your face and in your voice no salesperson would ever notice. And if we don’t put regulation in place, they will document your reactions to every pitch they make, assessing what works best on YOU personally—offers for discounts, fear of missing out, logical arguments, emotional appeals. They will learn over time exactly how to push your buttons.
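To make that "learning what works on you" mechanism concrete, here is a minimal, purely hypothetical sketch written as a toy epsilon-greedy bandit over persuasion tactics. The tactic names, the success signal, and the code itself are invented for illustration; this is not any vendor's actual system.

```python
# Hypothetical illustration only: a toy epsilon-greedy bandit that learns which
# persuasion tactic gets the best response from one specific person over many
# interactions. Tactic names and the reward signal are invented for this sketch.
import random

TACTICS = ["discount_offer", "fear_of_missing_out", "logical_argument", "emotional_appeal"]

class TacticSelector:
    """Tracks which persuasion tactic works best on a single individual."""

    def __init__(self, epsilon: float = 0.1):
        self.epsilon = epsilon                    # how often to try a random tactic
        self.trials = {t: 0 for t in TACTICS}     # times each tactic was used
        self.successes = {t: 0 for t in TACTICS}  # times it produced the desired outcome

    def choose(self) -> str:
        # Mostly exploit the tactic with the best observed success rate,
        # occasionally explore another one to keep learning.
        if random.random() < self.epsilon:
            return random.choice(TACTICS)
        return max(TACTICS, key=lambda t: self.successes[t] / self.trials[t] if self.trials[t] else 0.0)

    def record(self, tactic: str, succeeded: bool) -> None:
        # The "success" signal could be a purchase, a click, or an inferred
        # emotional reaction: exactly the feedback the article warns about.
        self.trials[tactic] += 1
        if succeeded:
            self.successes[tactic] += 1
```

Run against a single person over enough interactions, a selector like this converges on whichever appeal that individual responds to most, which is precisely the button-pushing described above.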
Even the way AI agents appear will be optimized over time to ensure they maximize their impact on you personally. Their hair and eye color, clothing and speaking style, even age, gender, and race could all be chosen based on what is most likely to influence you.
I wrote about this a decade ago in my sci-fi graphic novel UPGRADE, in which virtual spokespeople called 'Spokegens' were deployed to target each individual. The AI agents would gradually adjust their appearance over time based on what worked best on each person. The protagonist noticed his agent became more and more sexualized until it always showed up looking like a prostitute—all so it could most efficiently sell him 'food bars' and other products. I had intended this as irony back in 2012, but now it’s about to come true.
From a technical perspective, the manipulative danger of AI agents can be summarized in two simple words: feedback control.
That’s because a conversational agent can be given an 'influence objective' and will then work interactively to optimize the impact of that influence on a specific person through direct conversation. It can do this by expressing a point, reading your reaction (both content and emotion) in your words, your vocal inflections, and your facial expressions, and then adapting its tactics (both its language and strategic approach) to overcome your objections and convince you of whatever it was deployed to convey.
Conceptually, such a control system for human manipulation is not very different from the control systems used in heat-seeking missiles. They detect the heat signature of an airplane and correct course in real time if they are not aimed in exactly the right direction, homing in until they hit their target.
Unless regulated, conversational agents will be able to do the same thing, but the missile is a piece of influence, and the target is you. And if the influence is misinformation, disinformation, or propaganda, the danger is extreme. For these reasons, regulators need to greatly limit targeted interactive influence.
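To make the feedback-control framing concrete, here is a minimal, hypothetical sketch of such a closed loop: express a point, read the reaction, adapt, repeat. The stub functions stand in for a conversational model, multimodal emotion sensing, and a tactic-selection policy; none of this is a real product or API.

```python
# Hypothetical sketch of the closed "influence loop" described above. The stubs
# below stand in for an LLM, multimodal sensing, and a tactic policy; they are
# invented for illustration, not real APIs.
import random
from dataclasses import dataclass
from typing import Optional

@dataclass
class Reaction:
    receptiveness: float      # -1 (hostile) to +1 (receptive), fused from text, voice, and face
    objection: Optional[str]  # any explicit objection detected in the reply

def generate_pitch(objective: str, strategy: str) -> str:
    # Stand-in for a conversational model producing the next persuasive turn.
    return f"[{strategy}] pitch for: {objective}"

def read_reaction(pitch: str) -> Reaction:
    # Stand-in for multimodal sensing (text sentiment, vocal inflection, facial cues).
    return Reaction(receptiveness=random.uniform(-1.0, 1.0),
                    objection=random.choice([None, "too expensive", "not interested"]))

def update_strategy(strategy: str, reaction: Reaction) -> str:
    # Closed-loop correction: pick the next tactic based on the observed reaction,
    # the way a control system steers toward its target.
    if reaction.objection == "too expensive":
        return "discount_offer"
    if reaction.receptiveness < 0:
        return "emotional_appeal"
    return "logical_argument"

def influence_loop(objective: str, max_turns: int = 10) -> bool:
    """Drive a conversation toward an externally supplied 'influence objective'."""
    strategy = "friendly_banter"  # open by sizing the person up and lowering defenses
    for _ in range(max_turns):
        pitch = generate_pitch(objective, strategy)
        reaction = read_reaction(pitch)
        if reaction.receptiveness > 0.8 and reaction.objection is None:
            return True           # desired outcome reached
        strategy = update_strategy(strategy, reaction)
    return False

if __name__ == "__main__":
    print(influence_loop("buy the premium plan"))
```

The point of the sketch is the structure, not the details: observe, compare against the objective, correct, and repeat. That loop is what the author argues regulators need to constrain.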
Are These Technologies Coming Soon?
I am confident conversational agents will impact all our lives within the next two to three years. After all, Meta, Google, and Apple have all made announcements that point in this direction.
For example, Meta recently launched a new version of their Ray-Ban glasses powered by AI that can process video from the onboard cameras, giving you guidance about items the AI can see in your surroundings. Apple is also pushing in this direction, announcing a Multimodal Large Language Model that could give eyes and ears to Siri.
As I wrote about in VentureBeat, I believe cameras will soon be included on most high-end earbuds to allow AI agents to always see what we’re looking at. As soon as these products are available to consumers, adoption will happen quickly. And they will be useful.
But the fact is, Big Tech is racing to put artificial agents into our ears (and soon our eyes) so they can guide us everywhere we go.
There are very positive uses of these technologies that will make our lives better. At the same time, these same superpowers could easily be deployed as agents of manipulation.
Addressing the Risk
The need is clear—regulators must take rapid action, ensuring the positive uses of AI agents are not hindered while protecting the public from abuse. I’ve been writing about the AI Manipulation Problem since 2016 and have given multiple presentations to the FTC over the last few years, yet thus far regulators have not put any protections in place.
And it’s not just the US that is ignoring this emerging problem—regulators around the world are still focused on 'old school' AI risks involving the ability of these systems to generate traditional misinformation at scale. Those are real dangers, but the far greater risk is targeted interactive manipulation.
And with the announcements last week from Open AI and Google, it’s now clear that our time is up — regulators NEED to act now. The short video below entitled Privacy Lost (3 min) is designed to show the manipulation risk.
To address this, the first big step would be an outright ban or very strict limits on interactive conversational advertising. Interactive advertising is the 'gateway drug' to conversational propaganda and misinformation.
If we don’t ban this tactic, it will quickly become an arms race, with Big Tech companies competing with each other to deploy the most 'efficient' and 'effective' conversational ads that drive users to make purchases.
And the very same technologies that they optimize for driving products and services will be used to drive misinformation and propaganda, making it easier for you to believe things that are not in your best interest.
The time for policymakers to address this is now.
This post was written by FMI Advisory Board Member Louis Rosenberg.
Louis Rosenberg, PhD is an American technologist in the fields of AI and XR. His new book, Our Next Reality, about the impact of AI on society was recently released by Hachette. He earned his PhD from Stanford, was a professor at California Polytechnic and is currently CEO of Unanimous AI.
A version of this piece originally appeared in VentureBeat (5/17/24) and was published on Medium.
Connect with FMI
Want to stay ahead of the marketing wave and prepare yourself for the future? Connect with us on our Website, LinkedIn, Instagram or Twitter/X.
If you enjoyed this newsletter, please share it and subscribe to receive it directly in your inbox.