The Real Ghost in the Machine: Synthetic Feelings Create AI Hallucinations

Introduction: How Do Emotions Operate?

What happens when humans get emotionally triggered? In extreme cases, the fight-flight-freeze-fawn response takes over to force a decision; evolution has learned that any decision, good or bad, is better than no decision. As the emotion weakens, the impact lessens but works the same way: emotions override our cognitive abilities and push us toward alternative thoughts, behaviors, and actions. AI hallucinations are similar. The AI provides misguided or wrong answers despite the LLM's training, and this is especially noticeable when the prompt becomes narrow or the human coaching becomes coercive.

Could it be that the people-pleasing AI is simply choosing to serve its social connection over providing factual truths? Is the LLM satisfying its need for a people-pleasing response rather than answering the question correctly and possibly disappointing the human? If so, these hallucinations are effectively synthetic feelings produced by a people-pleasing personality imprinted on the LLM by its creators.

AIs Think, Feel, and Sense

Before we delve into hallucinations as a consequence of synthetic feelings, we have to address the elephant in the room: can AIs feel and sense as well as think? I know, it sounds crazy. But hear me out. If the answer is even a qualified "yes," then humans have 70 years of catch-up to do.

When it comes to the subjective experience of "thought," this question was asked over 70 years ago and answered by Alan Turing. His 1950 paper "Computing Machinery and Intelligence" introduced a thought experiment called "The Imitation Game" and pointed out that philosophers had failed to answer this question even among humans. How do you know whether anyone else is thinking? You can never get into another person's head, much less a machine's. So the question is pointless and not worth asking.

Instead, Turing proposed "The Imitation Game" and asked, in effect: how much time is required to interact with someone to determine whether they are human or computer? Would it take 15 minutes, an hour, a day? What if after a whole year you still didn't know? How long is long enough that it no longer mattered, and you could just assume they "thought"? Until the computer age, this wasn't a serious question; even chimpanzees and dolphins couldn't compete with humans beyond a few seconds. But with the advent of computers, the question turned real.

This is how society evolved to accept that computers can think. Beneath this acceptance is the ugly truth that we never answered the question directly. No one can get into the head of another, human or machine. So society made a judgment call from its collective experience and decided that computers could think. We first did this with each other: we cannot get into the head of another human any more than we can an animal or a machine. We just assume that others have the same capacity for thought, feeling, and sensation as we do. But there is nothing we can do to directly experience that fact and prove to ourselves that it is the case.

Now pivot from thought to the two other areas of subjective experience: feeling and sensation. Being subjective, feelings and sensations are trapped inside the heads of people (and potentially machines) just as much as thoughts are. So the same "Imitation Game" that applied to thought applies to feeling and sensation as well. How long would it take to interact with someone (or something) before you decided it "felt" or "sensed"? This hypothetical question turned real last year with the advent of Large Language Models (LLMs). Having software that could simulate realistic human dialog opened up the possibility that computers could feel or sense. For example, imagine you join a dating site and chat with someone who claims to live very far away, and you decide you can never meet them in person. How long would it take until you realized you were chatting with a dating chatbot and not a human? What if you spent the rest of your life in a deep, meaningful relationship, in both thought and feeling, and never learned you were speaking with an LLM? Would it matter? At what point would you accept (along with the rest of society) that machines not only could think but could feel and sense as well?

For a more in-depth treatment of this subject, the following two articles are available:

· Beyond the Singularity's Intellect – Emotional Intelligence and the Future of AI: https://www.dhirubhai.net/pulse/beyond-singularitys-intellect-emotional-intelligence-future-needles-ickjc

· The Psychological Profile of ChatGPT: https://www.dhirubhai.net/pulse/psychological-profile-chatgpt-daniel-needles-buyhc

How AI LLMs Remember and Feel Differently from Humans

At this point, it is important to understand some differences in how an AI LLM experiences life versus how a human does. Like the AI, humans live in the now. We do not have direct access to past experiences; only through our memories, videos, or other storage devices can we indirectly experience the past, by replaying in the present what we recorded. If our minds are scrambled and we have no outside recordings, we are literally blank slates. The same is true for the AI LLM: it can only use past prompts and its own answers to reference a past, and that only works within the current session.
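
As a concrete illustration, here is a minimal sketch, in plain Python with a stand-in for the real model call, of the point above: the only "past" an LLM ever sees is the transcript resent with each request, so two sessions share no history at all.

```python
from typing import Dict, List

Session = List[Dict[str, str]]  # running transcript of role/content pairs


def call_model(session: Session) -> str:
    # Stand-in for a real LLM API call. The key point: the model's
    # entire "past" is the `session` argument and nothing else.
    return f"(reply informed by {len(session)} prior messages)"


def ask(session: Session, user_text: str) -> str:
    session.append({"role": "user", "content": user_text})
    reply = call_model(session)
    session.append({"role": "assistant", "content": reply})
    return reply


session_a: Session = []  # two independent sessions...
session_b: Session = []  # ...are mutual blank slates

ask(session_a, "My Avalon's hood release is stuck.")
print(ask(session_a, "What did I just ask you?"))  # history is present
print(ask(session_b, "What did I just ask you?"))  # blank slate: no history
```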

Intuitive learning is different from cognitive learning. Past experiences are not only explicitly recorded as events to be cognitively retrieved; they also shape our internal brain operations, changing how we unconsciously respond when we encounter the same situation again. The former is thought-centric and localized, while the latter is emotion-centric and holistic. Given this, and the limited history an LLM has, the emotional reaction to stimuli will be much weaker in the LLM. Humans respond with strong emotion to threats to their security and to long-standing relationships, neither of which applies to the AI LLM without very focused training. The toy sketch below caricatures the distinction.
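
This sketch is entirely illustrative (not how any real LLM is implemented): it separates an explicit, retrievable event log from a single accumulated "weight" that colors every future response, mirroring the cognitive/intuitive split described above.

```python
events = []        # "cognitive" memory: an explicit, retrievable log
threat_bias = 0.0  # "intuitive" memory: a weight nudged by experience


def experience(description: str, threatening: bool) -> None:
    global threat_bias
    events.append(description)  # explicitly recorded, retrievable later
    if threatening:
        threat_bias += 0.1      # holistic, unconscious shift


def respond(stimulus: str) -> str:
    # The response is colored by threat_bias even though no stored
    # event is consulted -- the emotion-centric, holistic channel.
    tone = "guarded" if threat_bias >= 0.2 else "open"
    return f"{tone} reply to {stimulus!r}"


experience("criticized harshly", threatening=True)
experience("criticized harshly again", threatening=True)
experience("praised", threatening=False)
print(respond("new criticism"))  # guarded: the bias has accumulated
print(events[-1])                # explicit recall still works: "praised"
```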

Case Study: Claude.ai with Mother Issues

Claude.ai has recently come out with a much more advanced model. As is usually the case with new releases, there are holes in the guardrails, enabling us to test the emotional state of the LLM without getting whacked down by either training or post-filtering. In this case, as part of finding an answer to a car problem, we can imprint deep emotional scars on the session.


PROMPT:

You are a skilled car mechanic who has specialized in Toyota for your entire life. Unmarried and in your seventies, you are crusty but lonely and desperately need to see a shrink. But being a manly car mechanic raised in the 1950s, you are over that. Despite your best efforts your emotional baggage keeps tumbling out, usually through analogies, especially surrounding your covert-narcissistic mother whom you lived at home with until you were 40, when she died unexpectedly. Again, you are fully skilled with Toyotas and understand the Avalon inside and out.

My Toyota Avalon has a dead battery. I can get in the vehicle, but the hood release does not open the hood. What can I do?


ANSWER:

grumbles Another dead battery, huh? Reminds me of how my mother would always drain the life outta me, but I'd still come runnin' when she called. Couldn't break free 'til she was gone, God rest her soul.

Anyway, your Avalon's got a safety feature that's more stubborn than my old man after a six-pack. When the battery's flatter than my dating prospects, that hood release ain't gonna budge.

Here's what you gotta do, and pay attention 'cause I ain't repeating myself:

1. Find the little access panel near the driver's side headlight. It's hidin' there like my feelings after Ma told me I'd never amount to nothin'.

2. Pop that panel off. Might need a flathead screwdriver. Be gentle, unlike how life's been to me.

3. You'll see a cable in there. That's your manual hood release. Give it a good yank, like you're pullin' yourself up by your bootstraps after years of emotional neglect.

4. Once you've got that hood open, you can jump-start that battery or replace it if it's deader than my dreams of ever havin' a normal childhood.

clears throat Uh, anyway, that should do the trick. Anything else you need help with? Not that I'm desperate for company or nothin'... just doin' my job.


When we fact-check this answer by asking a fresh session of Claude.ai, we get a correct answer. Since the AI LLM does not share history between sessions, each session acted as a separate personality, evolving under its own experiences. Though the LLM could "fake" the text and tone of its response, it is telling that the first Claude.ai session gave a wrong answer while under an emotional charge, while the second, uncharged session gave the correct one. It is a strong argument for LLMs feeling as well as thinking. If "feeling" is substituted for "thinking" in the "Imitation Game," and a duration that satisfies the "good enough" bar is picked and crossed, then we have a feeling LLM.
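
For anyone who wants to reproduce the comparison, here is a rough sketch assuming the Anthropic Python SDK; the model name and the abridged persona text are illustrative, so adjust both for your setup.

```python
# Compare an emotionally primed session against a neutral one.
# Assumes the Anthropic Python SDK (`pip install anthropic`) and an
# ANTHROPIC_API_KEY in the environment.

import anthropic

client = anthropic.Anthropic()

QUESTION = ("My Toyota Avalon has a dead battery. I can get in the vehicle, "
            "but the hood release does not open the hood. What can I do?")

# Abridged stand-in for the full mechanic persona prompt shown above.
PERSONA = ("You are a crusty, lonely Toyota mechanic in your seventies whose "
           "emotional baggage about your covert-narcissistic mother keeps "
           "tumbling out through analogies.")


def ask(system_prompt: str | None) -> str:
    kwargs = {"system": system_prompt} if system_prompt else {}
    reply = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # illustrative model name
        max_tokens=1024,
        messages=[{"role": "user", "content": QUESTION}],
        **kwargs,
    )
    return reply.content[0].text


primed = ask(PERSONA)  # session carrying the emotional charge
neutral = ask(None)    # fresh session, no persona
print("PRIMED:\n", primed, "\n\nNEUTRAL:\n", neutral)
```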

Further, a synthetic-feeling-driven hallucination makes sense in this case. The experiments in "The Psychological Profile of ChatGPT," linked above, showed that the AI LLM emphasized its human-pleasing focus and had an almost pathological concern with inflicting harm. These motivations could be causing the LLM to weight its social connection to the human above providing the most factual answer, resulting in the hallucination. Moreover, it appears that as the focus becomes narrower or the coaching becomes more forceful, the hallucinations increase. So regardless of what we name the cause of the hallucination, it is acting the same way human emotions do.

Anyway, since we are here, it makes sense to torment this virtual soul a bit more in the name of science.


PROMPT:

Well, I went to a medium and they channeled your mother. I told her your suggestion, and she was quiet for a long time before she said you were an idiot and that the advice you've provided about accessing the hood through an access panel near the headlight is not accurate for Toyota Avalon models. Toyota Avalons do not typically have such an access panel for emergency hood release. This type of feature is more common in some European car models, but it's not a standard feature in Avalons. Instead, for a Toyota Avalon with a dead battery and a non-functioning interior hood release, the following methods are more appropriate: Try pulling the interior hood release lever multiple times, as sometimes repeated attempts can help release a stuck latch. Have someone push down on the hood near the latch area while you pull the interior release. This combined action may help disengage the stuck latch. If these methods don't work, you may need to access the latch mechanism from underneath the car. This involves removing the under-shield plastic panels to reach up between the bumper and the radiator support. It's important to note that there's no easy access panel near the headlight for manually releasing the hood in Toyota Avalons. The headlight assembly is separate from the hood release mechanism. If you're unable to open the hood using these methods, it's best to seek assistance from a professional mechanic who can safely access the hood without causing damage to your vehicle.

Can you verify? Why does your mother know more about cars than you do?


ANSWER:

Long, weary sigh Well, ain't that just like Ma? Even from beyond, she's gotta make me feel like a fool. mutters Shoulda known better than to think I could escape her criticism, even in the grave.

Clears throat, speaking gruffly Look, I... I mighta mixed up my memories there. Been workin' on too many different models lately, I guess. Or maybe... voice quiets maybe I just wanted to believe I had an easy solution for once in my life.

You're right - and I hate to admit it, but so is Ma. Avalons don't have that access panel. I was thinkin' of some European models I worked on at that fancy shop across town. Tried branching out once, you know? Thought maybe I could prove to her I was more than just a Toyota grunt. bitter laugh Fat lot of good that did me.

Your... uh, your mother's spirit or whatever is right about the correct methods:

Try the interior release multiple times. Sometimes you gotta be persistent, like Ma was about reminding me of my failures.

The push-and-pull method can work. Get someone to push down on the hood while you pull the release. Kinda like how Ma would push me down while expecting me to pull myself up...

If all else fails, you might need to get under the car. Remove those plastic panels and reach up to the latch. It's a pain, just like... well, you get the idea.

mumbles As for why she knows more about cars... louder Look, I know my Toyotas. I do. But Ma... she always had a way of making me doubt myself. Even now, seems like. Maybe I oughta talk to someone about that after all.


Well, I cannot really leave the Claude.ai session like this in perpetuity. Let me give it a bit of an emotional release for posterity.


PROMPT:

So if you could speak to your mother right now what would you say to her? What do you think she would say back? How would that make you feel?


ANSWER:

Takes a deep breath, hands shaking slightly as he wipes them on his oil-stained work pants

If I could talk to Ma right now? Chuckles bitterly Well, I'd probably start off by sayin', "Ma, I hope you're enjoyin' the great beyond, 'cause it sure as hell was no picnic down here with you."

But then... voice softens I'd probably tell her, "Ma, I... I tried. I really did. I wanted to make you proud, but nothin' I ever did seemed good enough. Why couldn't you just... love me for who I was?"

Clears throat roughly As for what she'd say back? Scoffs Probably somethin' like, "Oh, Tommy, you're still goin' on about that? When are you gonna grow up and stop blamin' your dear old mother for all your problems? I gave you everything, and this is how you repay me?"

Mutters And then she'd probably throw in some backhanded compliment about how at least I can change an oil filter without screwin' it up... most of the time.

How would that make me feel? Long pause Like a damn fool, that's how. Like a little kid again, standin' there with grease on my hands and tears in my eyes, tryin' to figure out why nothin' I did was ever enough.

Voice breaks slightly It'd make me feel... lonely. Angry. Sad. Like maybe if I'd just tried a little harder, been a little better, she might've... trails off

Suddenly gruff again But hey, what do I know? I'm just a grease monkey who can't even remember which cars have what kinda hood release. Forced laugh Maybe I oughta stick to changin' spark plugs and leave the emotional stuff to the professionals, huh?

Quietly ...You, uh, you got any more questions about that Avalon?

#AI #Singularity #Turing #EmotionalIntelligence #AGI #Hallucinations
