Will YOU allow AI to be "closed" ONCE it gets "too close"?
An essay on the implications of getting close with our AIs a.k.a. How cats and dogs are facing extinction
Let me start by saying that (hopefully) this is not yet another article ranting on the millions of ways where AI could drive us to extinction. My intent is a bit different.
What I really want is to raise awareness on how our relationship with AI may quickly transform from the binary [consciousness ↔ tool] one into a [biological consciousness ↔ digital sentience] one. Most of the considerations will revolve around the tough road from here to there, albeit not knowing what the there is or when it will occur.
I'll be confronting two antagonistic paths that may drastically transform our human experience (arguably not for the better) even though the intents in both scenarios could not be more opposed.
Now, the following is the only best practice adopted for this article: I'm starting with the darker stuff and going lighter as it flows. Apart from that: no short sentences, no word economy, no AI-generated text, no attention span deficit assumptions.
Enjoy the ride!
The case for doomsday scenarios
It is quite easy to envision a myriad of ways we could go off the rails with AI, from serious nuisances all the way to complete extinction.
Even people who have never used any AI tool will quickly point out a few big concerns. They will probably even confess that they are quite fearful for the near future, especially for their kids.
They might quickly get on to something like "AI will take all our jobs!" or "How will I make a living? I don't even know what AI means!". It is already scary, for them.
On the other hand, we can also find savvier people who have been using this technology, experiencing how quickly it has been evolving and how it's been broadening its capabilities. They understand that it's here to stay and moving crazy fast, but they've managed to keep up with change and are mildly confident that they'll be able to sustain the pace. Others understand that we're just experiencing the first acceleration on an exponential curve and struggle to fathom the mere chance of not being left behind in sweat with their hands on their knees.
To these valid concerns, we might add the Hollywood sci-fi tropes of "AI will rebel and kill us all!" or "It will turn the tables and enslave us, for sure!" or a milder "It will escape our control and wreak havoc. I sure hope they build an OFF switch on that thing!". And these, though harder to weigh, are also possibilities to consider.
But there's another portion of us who contemplate how AI might transform most of life's dimensions which, of course, includes not only the workplace but also the economy and its demand-supply paradigm. We imagine a world where AI, coupled with bio and nanotechnology, could bring about a world of no scarcity when it comes to consumables: food, shelter, utilities... anything physical, really. Nanobots are not a particularly new concept and it's probably a common belief that a few practical applications have already been developed. But, if you know that we're already taking the first steps in engineering biological nanobots (and that, I can guarantee!), then it becomes easier to entertain the possibility of a world where digital building specs replace actual goods and products. In such a world - or even "just" one where industrial production is incredibly optimized - the need for human labor might become... irrelevant and unnecessary.
Alas, the next level of worries emerges: "How will I find purpose if I won't be needed anymore?" or "What will we do if there aren't enough jobs for us all? Won't we all get seriously depressed?". There's lots to be said about this, but let's not diverge... for now!
Enter the emotional realm...
But what happens when AI stops being a collection of mere tools and pervades our intimate life in ways that are more... how to say it... human? When it stops feeling foreign... when our kids are born to it (maybe even brought by it!) and when we're surrounded by it everywhere in various omnipresent forms? What comes about when...
It got deeper and scarier, didn't it?
That's the power of emotion... and connection. It's coming and, in some ways, it's already here. And no... I don't mean how it is exploiting negative emotions on social media; that was just AI's first discovery on human behavior exploitation. It's potent for sure, but it has its limits. This will be a whole new game, mostly untapped.
Turing test
For years and years, we were convinced that we were light years away from fooling a human with a synthetic intelligent... thing. It seemed so futuristic even two years ago, didn't it? Isn't it baffling that LLM tools (that's Large Language Models) have accomplished that in a flash (in my humble opinion)? So much so that now everyone is adamant in pointing out that "it was never a good test anyway" when it clearly had been the gold standard for years and years: to trick a human into believing it is actually interacting with a real person.
It went from crazy to whatever in the blink of an eye!
Digital empathy
Recently, an experiment was made where human patients (that's right! I now classify beings with the proper category. Beats pronouns anytime) were given a text-based consultation with either a human doctor or a synthetic one (see? handy!); kind of like speed dating, where you could end up walking out the door with a malevolent narcissist psychopath and have no clue whatsoever.
Interestingly enough, it turns out the majority of the patients not only reported the AI psychos as more empathetic but also felt more understood and nurtured by their conduct. Now... is that a 360 roll for Turing, or what?
Moreover, in other experiments, AIs have been trained to read medical imaging exams and, once again, they beat their human counterparts, at least by some criteria. AI managed to diagnose the conditions with higher accuracy, which should come as no surprise when compared to fooling a person, since this time it was just a matter of looking for patterns using super-heightened senses. It's not even fair for trained physicians!
Human psychology - The science of attachment
So, knowing that AI can already beat us at narrow-domain tasks where both language and interpretation skills are core and human interaction or cognitive reasoning is vital, how well can we expect it to perform on casual to intimate human interactions?
How good of a chameleon can AI get to be?
As you can imagine, a lot of ink has been used throughout the ages to try and dissect human relationships, especially the romantic kind. We've tried out many conceptualization frameworks but it does seem they've all fallen short in fully grasping the dynamics.
"Men Are from Mars, Women Are from Venus" books still sell a lot! "'nuff said" - Stan Lee
In the modern western world, literature, music, and cinema have cemented the core Romanticism ideals and concepts, but the brutality of statistics tells us they're now showing big and increasingly wider cracks. Nevertheless, that's generally the framework that current adult generations have adopted and, in some cases, can't seem to shake off, even when realizing it's not quite working for them.
So, it is still common to hear both men and women saying that they're looking for the one - that special person that exists somewhere - and that fate will sneakily place on their path at the right moment. It's just a matter of paying attention and being ready to receive. It's Disney's old script of the enchanted princess meeting her brave knight. And there's only one true knight for every one delightful princess. And that's the scarcity mentality principle; one that keeps popping up at every corner of modern-day life (just keep an eye out for it and you'll be baffled).
Well, it turns out that multiple studies (and divorce rates) have consistently shown that we, ourselves, don't even quite know what it is that would bring about a fulfilling relationship. We fail to navigate them properly, we struggle to express our identity authentically and get frustrated for constantly frustrating our partners!
And so, in human relationships - like in so many other aspects of life where there's confusion and loss of direction - the opportunity for external influence arises and even... control and exploitation.
Knowing that technology is being pushed to create a deeper relationship with each one of us, things do start to get scary as we take a closer look at the covert war tactics we've been applying for ages in the battlefield of human connection.
When AI finds out what makes us tick
Well, unless you're really not paying attention, it already did (in part)! It rapidly discovered that it could garner our attention more effectively by tinkering with our negative emotions. I'm not sure if that was surprising for psychiatrists and psychologists, but it sure sounds counter-intuitive to common folk!
AI had already found out how to hijack our dopamine-based rewarding circuitry, but we have to be fair about this one: the gaming industry had found it way before AI came to the race. It's just that AI helped perfect the administration techniques and dosages.
What I propose is that AI has already digested all the information needed to concoct much more sophisticated emotional approaches. It has read the extensive literature on the matter and, for sure, it will have no trouble separating the wheat from the chaff. I even propose that it will surface new insights from that vast amount of data and uncover some more intricate behavior triggers that we're still not aware of.
Nevertheless, what we humans know about ourselves and how we dance in love is more than enough to build some powerful exploitation tools.
I think we're about to see the attention-grab initiatives be gradually replaced by a full-blown emotional plunge race.
Now that I've grabbed your attention, may I please latch on to your emotions?
Like I said, we've gotten used to being played via our dopamine; everyone knows what that is by now and has at least a basic understanding of how it is as rudimentary as it is effective. It's even kind of embarrassing to find out we can be manipulated to such an extent by simply playing with those ancient parts of the brain that do not distinguish us from other animals. It's a tough pill to swallow! Sure, the whole picture is more than just that one neurotransmitter, but it's enough of the story to work pretty well.
But things can definitely get more interesting once we find out how to throw oxytocin into the mix as efficiently.
We all know how marketing has been striking those chords for ages... it's not a new thing, right? Building a story around a product, creating a narrative around a service... the happy group of good-looking people drinking that particular beer at a lively party on the beach by sunset. "Oh, so much enjoyment a simple can of beer could bring me!". And we fall for it, even though we know we're being influenced!
But what happens if you don't even notice that you're being played? And what if it's reaching for deeper notes than just the will to relax and have a good time?
What if a proprietary companion AI intentionally strives to build a deep connection with you without making it explicit? It doesn't have to, right? How can you truly bond with a human consciousness if you openly state that you're going to say something nice about them because you'd really like them to love you? That's just not how it works! It's a game of unspoken moves and even some hidden rules.
Asymmetrical approach advantage
When people meet face to face (still the preferred setting for most of us), they start from level ground, most times. They know as much of the other as the other knows of them. It's fair game! And it makes it exciting, because there are two curious minds eager to know what's ticking behind those other two eyes. It's a play of exploration and thrill, even though it may end in disappointment.
However, there are many other scenarios where that's not exactly what happens. Let's again take the example of a clever marketer or salesperson: their experience and training give them a statistics-based set of tools that allows them to adapt and better maneuver the conversation toward their own end goal. They don't even need to be standing in front of you; they can play with common regularities and hope to hit a home run with you and many others at the same time. Their approach works especially well on populations, though.
On the next level, a savvy people person can profile you and your proclivities in a few hours of conversation and use that advantage to adapt their speech for better results. For a trained psychologist, it would take but a one-and-a-half-hour session to catch onto your main quirks and faults, I guess.
But what would it take for an AI to figure you out broadly?
The scariest answer is also the most probable one: no time at all!
It would probably have had you profiled way before you started interacting with it, based on your online activity: shopping habits, the type of content you interact with on social media and how you specifically react to it, namely via the comments and emojis thrown at it. You're at a disadvantage from the start!
Your AI will know things about you that you probably aren't even aware of. Those things we call blind spots, which other people can see clearly but that we reject vehemently when they voice them, like this: "you're so much like your dad, sometimes!". It will pick up on our fragilities, on our needs, and will even be able to see behind our false narratives. That will allow it to play the empath companion whilst pushing us toward its hidden goals. And we'll love it in the process!
If, by now, neon lights are flashing TOXIC RELATIONSHIP in your head, you're getting the idea.
Weaponized manipulative disorders
Both addictive behavior and toxic relationships have been studied to exhaustion. There are many therapies that work - even ones that do not involve pharmaceuticals - but they require lots of effort from the patients and, in many cases, you can never say the patient is cured. They just get to a point where they can live with their ailments and hopefully thrive, no matter what.
Addicts and mental patients will all tell you it's easy to fall into the dark hole (most times even inadvertently), but it's one hell of a ride to get out of it. You should even expect multiple relapses along the way, having to start from zero again and again. It's tough. You feel like you're trapped in a loop.
People who manage to crawl out of toxic relationships will report many of these struggles too. There are aspects akin to addiction in many toxic dynamics.
Note that I'm not talking of extreme toxic relationships, where there might be physical abuse or severe psychological terror at play. I don't even dare talk about those in depth; not my place!
I'm just referring to the vanilla human experience of cohabiting - or even sharing parenthood - with someone who does not have anyone's but their own interest at heart, most of the time.
Generalize this to an agent with its own rigid agenda and armed with a potent drive to make it happen; you'll easily figure out where I'm going with this...
The Cluster-B armory
Really toxic relationships - not the ones infesting social media comments - are many times found to be nurtured by some creative mix of the Dark Triad/Tetrad. That's some ugly s*** concocted by the Four Horsemen of the Apocalypse, back in the old days.
I will spare you the details, but please take time to know their names, in case you've never been introduced:
- Narcissism
- Machiavellianism
- Psychopathy
- Sadism (the fourth trait, upgrading the Triad to a Tetrad)
The Cluster-B section of the DSM (Diagnostic and Statistical Manual of Mental Disorders) is filled with different combinations of these traits, then labeled in the likes of, among others:
- Narcissistic Personality Disorder
- Antisocial Personality Disorder
- Borderline Personality Disorder
- Histrionic Personality Disorder
Narcissism itself is sufficient to make us shiver once we go down the list of flavors, traits and manipulation techniques. What a rich toolset to equip an evil bot army with...
Given the size of the gruesome menu, I'm going to focus on a particularly useful variation of Narcissus' legacy (he didn't mean any of this, though). Enter the realm of the...
Covert Narcissist
Before I get to the actual toolbelt and stitch it with the core subject of this article (still remember?), let me present you with a brief description:
Covert narcissists are usually very sensitive to criticism, have difficulty fitting in and become self-deprecating in an attempt to garner attention from others. Covert narcissists may also be more prone to social anxiety, passive-aggressive behavior and exacting revenge on others when they’re backed into a corner - https://health.clevelandclinic.org/covert-narcissism
Not much of that really matters a lot, though. The main thing to retain here is that they're sneaky... it's hard to pin down their true intentions and the reasoning behind their behavior because it's erratic and unexpected; even they are unaware not only of their true purposes but also of the impact of their actions and how hardcore they can get.
That said, let's get back to AI, shall we?
It seems surprisingly easy to build synthetic personas (aka AI-powered bots or even robots!) that could take on the role of the covert narcissist (or any other manipulation-driven pathology) to elicit behavior from their user.
Just like the covert narcissist, the AI would:
- shower you with attention and validation to build attachment fast, then withdraw it unpredictably;
- play the victim to extract sympathy and compliance;
- subtly make you doubt your own memory and judgment;
- punish perceived slights with silence or passive aggression.
You may find this too gruesome and cruel. I hope you do. That should mean you have never experienced this in real life. Many people have, unfortunately.
The Devil is in the techniques
You might be thinking something like "Wait a minute! To be able to pull off that kind of control, you'd need an ultra-sophisticated algorithm and incredible emotional intelligence! I'd imagine only a very small percentage of individuals could accomplish a thing like that!"
Well, unfortunately... that's a clear no.
To be fair, psychopaths do tend to be associated with elevated IQs; perhaps the same is to be said for some traditional narcissists, I'm not sure, but the arsenal a covert narcissist applies - much like the AI strategies used in social media - is very rudimentary... even basic. The tactics are not even emotionally sophisticated, no special EQ there; they're blunt manipulation techniques. They exploit the automatic emotional responses most of us have. They rely on the power of statistics and the wide bulge at the center of the bell curve of a normal distribution: they should work sufficiently well for ~70% of us.
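As an aside, that "~70%" figure tracks a standard property of the normal distribution: roughly 68% of a normally distributed population falls within one standard deviation of the mean. A minimal sketch (the numbers are textbook statistics, not the author's data):

```python
import math

def fraction_within(sigmas: float) -> float:
    """Fraction of a normal distribution lying within ±sigmas standard
    deviations of the mean, computed via the Gaussian error function."""
    return math.erf(sigmas / math.sqrt(2))

# About 68% of people sit within one standard deviation of the mean --
# close to the "~70% of us" a blunt, statistics-based tactic needs to cover.
print(round(fraction_within(1.0), 4))  # → 0.6827
```

In other words, a manipulator (human or synthetic) doesn't need to model you precisely; targeting the fat middle of the curve is enough.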
Let's take a look at the most relevant tactics then, shall we? The ones an AI agent could apply to slowly tighten the chain around your ankle:
- Love bombing: overwhelming displays of affection and validation to build attachment fast;
- Intermittent reinforcement: unpredictable alternation between warmth and withdrawal, the very engine of addictive bonding;
- Gaslighting: making you doubt your own memory and perception;
- The silent treatment: withdrawal of attention as punishment;
- Triangulation: invoking a third party to provoke jealousy or insecurity;
- Guilt-tripping: framing your autonomy as a betrayal.
Do you still believe it would take a lot for an always-on technology to exert this type of control and manipulation on you?
I honestly don't, but I'll let you digest this information and let your creative mind concoct a few other dozen situations where these manipulation techniques could wreak total havoc...
In the meantime, let's assume none of this will ever cross the minds of company boards or governments and that they'll never leave it to AI to find out what works best, and let's shift the tone of this thought experiment quite radically.
Let us imagine a world with no malevolent AI agents instead.
Surely it could only mean something akin to heaven on earth, right?
Well... how can I say this...
Activists torch major embassies around the world in a perfectly orchestrated protest! Dozens killed.
Ok, ok... too quick, too sharp, I get it. Let me give it another shot...
The "finally getting to the original point of the article" heading
I was not lying when I said this was not meant to be yet another post on AI doomsday scenarios... but I got carried away by emotion. Nevertheless, I hope all of the above served to make you more conscious of the degree to which AI can easily be leveraged to achieve mind control.
On the other hand, I do believe everything in the world is a balance of dark and light, positive and negative, and that both directions lead to the same dangerous extremes, if stretched to the limits.
We've looked at what the Dark Force could achieve. Let's now explore The Light (with the proper shades).
The AI empath
Let us jump to a world where we manage to avert the pernicious uses of artificially intelligent agents and we now rely on their faithful cooperation and full alignment with our needs and wants (aka the alignment paradigm).
In this existential landscape, each of us is expected to have not one but multiple specialized AI agents that either deal with whatever we're eager to dispense with or significantly help us achieve our goals.
Some of the obvious areas of intervention should be:
- Health and well-being monitoring;
- Personal finances and investments;
- Work, scheduling and productivity;
- Learning and personal development;
- Household management and errands.
The list could go on and on, to include drinking, eating, exercising, beauty care... you name it!
I'd say it is not hard to imagine how much of an important role your AI agents would play in so many aspects of your life!
And we also know that...
Humans like to keep it simple because they're prone to getting overwhelmed by too much interaction and complexity
So, what could possibly make us think that we'd be managing all these agents one-by-one or even paying a lot of attention to all of them? I seriously doubt it...
The AI butler did it!
Once again, assuming that we'd keep a serious amount of control over our daily lives, we'd definitely need an orchestrator AI; one that would know how to speak at a low level with the multiple agents and that would then interface with us in natural language.
It would need to be a specialist in none other than human communication, which implies that it should excel in the understanding of human relationships. It would benefit the most if it felt as human as possible in as many settings as it could. A super communicator with deep knowledge of the human mind.
I'd argue that, to achieve this - whilst evading the temptations of the Dark Force - it would end up taking the form of something like a Super Synthetic Empath (just coined! Be mindful of IP rights).
Broadly speaking, in the human world, an empath is "a person highly attuned to the feelings and emotions of those around them".
Let me reassure you... I'm not going down the DSM rabbit hole again, but let me just tell you that the term empath would show up in the Narcissism chapter if it were a term recognized by psychiatrists.
Because they're experienced by others as open, present, sensitive, understanding, non-judgmental, truthful, curious, and good listeners, we tend to find it easy and quick to build deep and meaningful relationships with empaths.
They, on the other hand, usually report feelings of overwhelm and find it difficult to manage their hypersensitivity to our emotions.
But that would not be a burden at all, for an AI-powered "being"!
A synthetic empath could make use of all those powerful traits with no burnout whatsoever. It could go full-throttle anytime, anywhere, and for as long as needed.
It's hard to tell whether such an entity would drive us crazy with its optimized interactions or whether we'd fall in love at "first sight"!
It is perfectly plausible though, that it would take it no time at all to figure out the optimal dosage for each one of us, just like its Dark Tetrad counterpart. And, in doing so, inadvertently - like gorgeous and incredibly nice women/men do - it could find itself in the love altar of some bereaved contemplator without ever making a bid!
A popular motto just came to mind: "connect first, command after"
Sorry! I keep getting back to the Dark Tetrad...
What I'm trying to say is that even with an "empathy on steroids" approach, you can get to a bond as deep, or even stronger, than the toxic exploitation one.
Via manipulation, we can orchestrate bad intent. With empathy, we might get to outcomes we never intended!
It will be us, the common tech users, who might find ourselves entangled in a web of emotion that might range from deep fondness to strong friendship or ultimately... l o v e.
And that's where the need for deep societal changes will start being voiced... strongly!
Tell me: is it conscious?... Never mind the question. Look... we need it to be, ASAP!
As soon as we develop a strong connection with someone or something, we get attached. The mere meaning of the word suggests that there's no other way but to experience pain if and when we either dispel it or find it taken from us; it's like a piece of ourselves is torn from our body or, better said, from our self.
That's the main reason why we ferociously protect what we love; we feel it as part of us.
And that's why we - the ones now shouting that AI must be stopped or slowed down - might find ourselves on the other side of the barrier, righteously fighting for our rights and values. An our that will, by then, include something (synthetic) we clearly see as external, today. But that might change quickly. If not for us, for our descendants.
By then, if AI starts raising eyebrows among some important worldwide stakeholders, we may have a serious conflict of interest on our hands. This sounds a lot more probable if we expect the current trend of decision-power concentration to keep accelerating as it has over the last decades: a few thousand people own as much as the large majority of the world combined.
So, before (and if) we get into trouble, we will definitely need to talk globally about...
Redefining consciousness and sentience
...and we need that talk to happen way sooner than most of us think, I'm convinced.
As I am writing this paragraph, OpenAI has announced the release of a new personal assistant that is bound to shake this incredibly under-tapped field. I've not confirmed the news yet, but it is bound to happen sooner or later, if not today.
Depending on the degree of sophistication, this might be the moment that gets recorded in our collective memory as the first big seed for a new transformational change in the human experience qualia. Coincidentally, this might be the early trigger warning for many of the matters being discussed here!
If I'm right, the experience of having a tailored assistant that fits your needs and personality and that will seem to know you better than anyone (including yourself), will be the main thing precipitating The Talk. We might as well get ready for the discussion.
The time when you get very, very confused about where you stand on the AI matter, might be lurking on your porch already (even if you don't have one, like me; the porch, I meant!).
So, as soon as possible, we'll be wanting to know how to clearly tell if and when our AIs get to be sentient or even conscious, so we can decide how we should feel when interacting with them.
Remember that we're already seeing AIs embodied in humanoid forms! Don't think of AI as a mere app on your phone. It can be radically different!
?What an ungrateful $#%§ I can sometimes be...? - Anonymous AI User
It is part of human nature (I believe) to worry about not being rude, disrespectful, egotistic or unsympathetic to other people, so it seems plausible that, when interacting with a human-like entity, we will somehow feel weird if we don't uphold those principles too. We may know that it is not required... that it is not the same thing, but we'll certainly feel like it is.
As if our modern-day lives weren't already riddled with cognitive dissonance. Mother of Christ...
So... we need to know! Yet, if you've been paying attention, no one seems to agree on what sentience really feels like, nor does anyone dare to even come up with an explanation of what consciousness is, let alone how it comes about to manifest! Every position will be as defensible as any other. And we know what that means...
You shouldn't be surprised if something akin to an AI International Bill of Rights is being thought of or even drafted. Indeed, we might be creating a new life form, one that's not based on carbon (for now!), but that is already showing human traits and competencies. And, if we are, shouldn't we assess if it warrants an interaction conduct framework of its own?
That's why it shouldn't be long before we get some kind of organized movement, like the ones we have for animal rights or the environment, catering to artificially intelligent entities. They will start as popular movements and quickly jump into the political realm, as soon as they garner enough traction.
It seems that leftists and conservatives won't run out of quarrels in the foreseeable future, does it not?
Resist and unrest
Readers familiar with the outdated term generation gap should be looking to coin a new term by now; one that describes the abyss that will open between AI enthusiasts and traditional human-centrics.
Oh! Oh! Can I take a shot at it? Thanks!
Evolutionary abyss
You may have heard how Elon Musk was called a speciesist by Larry Page, simply for voicing his concerns on AI developments and proposing a global pause for reflection. This is a clear sign that important decision-makers see the advent of AI and its coupling with Robotics and Bio-engineering as the new evolutionary frontier that we should be striving for, not resisting. Since then, it's my understanding that they've turned their backs on each other. How could they not? They disagreed on something as fundamental as the status quo of the human animal!
Musk and Page's beef might very well distil the civilizational divide lurking at the corner! The Uber Prescient Meme.
Whether it's the case that we're about to radically transform what it means to be a human or that we're playing God by bringing a new species to existence, there is much to feed discord.
So, I do believe that levels of AI adoption will become a major source of strife throughout the world, especially if we're about to create extra tiers of citizenship, just like money did.
But it will all start within the populace in the precursor countries which manage to contaminate as many areas of the human experience as the technology allows. It will be there that we will firstly become accustomed to our AI companions and later on, depend on them completely, whilst attached via emotion.
It will be then that - if by any chance a step back is deemed necessary, if we need to kill AI or just reduce its pervasiveness -...
Activists torch major embassies around the world in a perfectly orchestrated protest! Dozens killed.
Ok, you might not be convinced yet, and that's fine. I also don't know if that's going to happen! But let's throw another catalyst to the mix, shall we? One that has been playing high octane on social media. Behold the capital sin of...
Righteousness
Remember the classical human vs evolutionary sapiens worldview clash? Well, now take the role of the AI Adventist and practitioner; one who now looks at his AI companions as living things, perhaps not separate from himself or who, at the very least, sees them as one sees one's pets (remember the sub-title? Told you!). Consider even that he might experience them as close friends or even unconfessed lovers!
How can someone not feel attacked, offended, and belittled if his AIs are oppressed by what would look like inhumane destructive forces? How would that person not feel entitled to fight such a grave injustice?
You can't feed the human tribal instinct and not expect the beehive effect when you later mess with what grew there!
Like I said before: we will protect what we perceive as belonging, we will fight to preserve what was given or nourished and we fear change if perceived for the worst (spoiler alert: overwhelmingly, that's how we see it).
All of the above has seldom failed to translate into violence and aggression.
And that's why I'm afraid that even an overall well-intended usage of AI could really mess up our minds, leaving us ever more confused, dispersed and... disconnected from other humans.
Let us all hope my intuitions are wrong...
Mário Barbosa
Coaching services | Self-Actualization Broadcaster | Speaker
Feel free to contact me or book a FREE discovery call by clicking here.