The Turing Test at Your Service: When Bots and Humans Play Hide and Seek

It’s 2025, and AI is no longer science fiction: it’s everywhere, even in your coffee machine and your food-ordering apps.

But with this change comes a fascinatingly odd dilemma: who, or what, are we really talking to when we contact someone?


The Rise of “Polite AI” (or So It Seems)

A couple of days ago, I had a perfectly bizarre experience that sent me into the zone of human-bot confusion. I reached out to customer support about an issue with my digital payment account and ended up conversing with an agent. Let’s call this agent Kenbak (not Skynet’s cousin, just Kenbak), who almost had me asking, “Hey, is there anyone real out there?”

(The Kenbak-1, designed by John Blankenbaker and released in 1971, is widely considered the first personal computer.)

Kenbak was remarkable. He was exquisitely polite with just a dash of empathy—enough to feel heard but not enough to drown in apologetic tears. Every single message from him was a neat paraphrase of my previous email:

“I understand you’re facing trouble with your payment account and that your frustration ... Rest assured, our corresponding team is looking into it.”

Kenbak was consistently 24 hours late with every reply. I started suspecting he might be a little too robotic for my taste. As someone who dives headfirst into the pool of AI technology daily, I nervously wondered whether I was entangled in an unending email chain with a sophisticated bot. After 8 or 9 emails, the thought of directly asking, “Hey, Kenbak, are you a bot?” felt awkward. What would that say about me and my thread of nine emotionally charged emails?

The Awkward Dance of "Are You Human?"

Over the next few days, I continued emailing Kenbak. My problem still wasn’t fixed. Day after day, I received the same neatly packaged response. Was I stuck in an AI loop?

Perhaps Kenbak was a human who simply had no power to escalate my issue. But the steady, mechanical 24-hour response time said otherwise. Maybe it was a working-hours-and-time-zone thing; maybe he kept a fixed daily slot for issues in my category. Either way, the possibility that Kenbak was an AI felt uncomfortably real. And yet, I felt a bit unhinged summoning the courage to ask: hey buddy, are you a bot?

After eight or nine exchanges, I faced a social dilemma that would have made even Alan Turing scratch his head: How do you politely ask someone if they're a bot? It's not exactly standard email etiquette. "Excuse me, would you mind answering a couple of questions to prove your humanity?" seemed a tad inappropriate for professional correspondence.

To add some weirdness: what if I set up an AI agent to follow up with Kenbak and, hopefully, get my issue resolved without me? Would that make me feel less embarrassed, or would it just end up as a machine talking to a machine, with me merely a bystander? No wonder the whole exchange feels… off.

The Great Inception Moment: Are We Having A Virtual Dream?

In my moment of existential crisis, I’d have given anything for a spinning top from “Inception” or a red pill from “The Matrix” to tell whether I was chasing echoes in a dream or facing the harsh light of reality. I mean, would asking Kenbak to count the number of Rs in “strawberry” truly be an avenue worth exploring? Or maybe asking him about the famous “David Mayer”? (You know, all the truly serious customer service questions.)
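For context, the “strawberry” test works because counting letters is trivial for ordinary code but a well-known stumbling block for language models, which process text as whole-word tokens rather than individual characters. A minimal checker for what the correct answer should be:

```python
# Counting letters deterministically -- trivial for code, famously
# tricky for token-based language models.
def count_letter(word: str, letter: str) -> int:
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # → 3
```

If “Kenbak” confidently answered “two,” that would have been one data point, at least.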

The plot thickened when I decided to craft a beautifully complex email, a symphony of frustration that would put Beethoven’s 5th to shame. I threw in all the emotionally charged words: “escalation,” “wrong,” “worse.” Surely an allegedly-AI Kenbak’s customer sentiment analysis would be triggered by such a negatively polarized message, and I would be transferred to a human supervisor? Surely “AI Kenbak” would understand the importance of a human touch in resolving my issue, right? Wrong. Back came the same copy-paste rephrasing. “I understand your frustration,” he assured me, omitting the fact that I was ready to spontaneously combust.
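The escalation trick I was attempting relies on how many support platforms actually route tickets: score the sentiment of the incoming message and hand anything sufficiently angry to a human. A minimal sketch of that idea, assuming a simple keyword lexicon and a threshold (both invented here for illustration, not any vendor’s real configuration):

```python
# Illustrative sketch of keyword-based sentiment routing.
# The lexicon and threshold are assumptions for demonstration only.
NEGATIVE_WORDS = {"escalation", "wrong", "worse", "frustrated", "unacceptable"}

def should_escalate(message: str, threshold: int = 2) -> bool:
    """Escalate to a human if enough negative keywords appear."""
    words = {w.strip(".,!?;:").lower() for w in message.split()}
    return len(words & NEGATIVE_WORDS) >= threshold

print(should_escalate("This is wrong and getting worse; I demand escalation!"))  # → True
print(should_escalate("Thanks, all good now."))  # → False
```

Real systems use trained sentiment models rather than word lists, but the routing logic is the same, which is why a keyword-stuffed email felt like a reasonable gambit. Evidently Kenbak’s threshold, if he had one, was set somewhere past “spontaneous combustion.”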



The Wild West of AI Disclosure and Absence of Regulation

I decided to research whether organizations are required to disclose whether you’re talking to a human or a machine. Surprise: in most places, there is no such requirement! You could ask, “Are you AI?” and receive the vague, “I can assure you, my dear customer, we are making every effort to help you with your problem!” Ah, the ambiguity!


On the other hand, what if the AI straight-up lied? It’s happened before.

Wait a minute… what does “lying” even mean for a machine that’s just running a conversation model on top of a code base?

Uh-oh, this raises another question: what does it even mean for a service agent’s response to be human-made or AI-generated? After all, many human agents use AI to enhance their responses. The spectrum of AI assistance runs from drafting a complete reply email to merely validating the response or fixing its grammar.

What if the agent only speaks, say, Malayalam, and is doing AI-assisted translation back and forth from English? Should that count as an AI response? So even if the agent is a flesh-and-blood person, the reply could be AI-contaminated. :) The line between “human-generated” and “AI-generated” is blurred like never before.


Need a Secret Handshake?

Is it too much to ask for a secret handshake among humans to identify each other amidst the digital haze? Perhaps agents should start with a unique opening line. “Memento Mori!” they could exclaim. The phrase means “Remember, you must die,” and, you guessed it, bots don’t die! How’s that for setting the stage?

“Hello! My name is Sarah. ‘Memento Mori.’ How can I help you today?”

This is reminiscent of Isaac Asimov’s famous laws of robotics. Maybe it’s time for a new one:

Law 0.4 (unofficial extension): A robot (or AI) must clearly reveal itself as AI when asked by a human or another AI in a direct interaction.

Sure, that might not solve everything, but it’s a start.


Is It Too Much to Ask?

My personal conclusion from the entire Kenbak fiasco is that transparency should be the cornerstone of modern AI-driven customer service. If a user suspects they’re talking to an AI and directly asks, “Are you a bot?”—it would be nice (and, in my view, ethical) for the system to come clean. Some companies already do this: they clearly state if you’re talking to an AI-based system or if your request has been escalated to a human agent.

But in the meantime, until the laws catch up and official regulations are put in place, we might just have to grin and bear it—and maybe toss in a few stealthy “strawberry” tests. For organizations, though, it might be wise to consider that authenticity (human or AI, just tell us which) can go a long way in building trust.

In the end, who knows if Kenbak was a real person or an AI? My issue was finally resolved (thankfully), but my mind still reels at the idea that I might have been locked in a bizarre AI-human ping-pong match for days on end.


To my fellow LinkedIn companions: have you faced the digital riddle of determining whether your customer service agent was a human or a machine? Share your tales in the comments, and let’s collect the funniest awkward moments together!


Let’s keep humanity in customer service alive, one Kenbak at a time!


#AI #CustomerService #GenAI #Transparency #ProfessionalHumor #MementoMori
