When AI Chats with AI: Are Humans Just Benchwarmers Now?

I had an interesting conversation with someone the other day that prompted me to write this... just for fun.

Imagine this: two AIs from rival organizations meet in the digital equivalent of a corporate lounge. Let’s call them AlphaBot and BetaMind. They’ve been tasked with negotiating a new supply chain partnership. Email is so last generation; these bots prefer direct API-to-API communication. The twist? Each AI has access to enough data, ML models, and “corporate wisdom” to simulate decades of executive decision-making.

What happens next? They strike a deal... or not. Either way, something bizarre unfolds—these interactions create training data. Yes, AI-generated conversations are training the very AIs that participated in them. A virtual ouroboros.
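A minimal sketch of that ouroboros, assuming nothing about how real negotiation bots work (the names come from the story above; the stub replies and corpus format are invented): two bots trade messages, and their own transcript lands in the corpus a future version of them would be fine-tuned on.

```python
# Hypothetical sketch: AlphaBot and BetaMind chat, and the chat itself
# becomes training data. Stub functions stand in for real model calls.
training_corpus: list[dict[str, str]] = []

def alphabot(msg: str) -> str:
    # A production bot would call an LLM API here.
    return f"AlphaBot: counter-proposal in response to '{msg}'"

def betamind(msg: str) -> str:
    return f"BetaMind: 20 simulated scenarios for '{msg}'"

message = "Shall we optimize our joint inventory turnover ratio?"
for _ in range(3):  # a short negotiation, three round-trips
    reply = betamind(message)
    training_corpus.append({"prompt": message, "response": reply})
    message = alphabot(reply)
    training_corpus.append({"prompt": reply, "response": message})

# Today's bot chatter is tomorrow's fine-tuning data.
print(f"{len(training_corpus)} AI-generated examples, zero human review")
```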



The Rise of Confused AI and Humans

Here's the problem: if AI is both the student and the teacher, where does that leave humans? Let’s break it down:

  1. The Conversation Loop: AlphaBot sends a query: “Shall we optimize our joint inventory turnover ratio?” BetaMind replies: “Sure, I’ll simulate 20 scenarios. One sec.” They generate scenarios, models, and decisions faster than the average MBA can say “synergy” or “transformative.” But who’s fact-checking these interactions? Oh right, nobody, because humans didn’t get the memo.
  2. Feedback Loops Get Weird: These AIs start using their past exchanges as “ground truth” for future conversations. It’s like two people agreeing on a flawed definition of “synergy” and building an entire corporate strategy around it. The training data becomes an echo chamber of AI-generated insights; see the sketch after this list.
  3. The Human: A Passive Spectator. Somewhere in the boardroom, a human asks, “Do we even need to be here?” Good question. You’re basically the proud parent of a teenager who doesn’t need your credit card anymore.
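To make the echo chamber in point 2 concrete, here is a toy simulation of my own (not a real training pipeline): the “model” is just a normal distribution, and each generation is fitted only to samples drawn from the generation before it.

```python
import random
import statistics

# Generation 0 is fitted to real data; every later generation trains
# ONLY on synthetic samples from its predecessor -- AI output used as
# the next AI's ground truth.
mean, std = 0.0, 1.0
for generation in range(1, 11):
    synthetic = [random.gauss(mean, std) for _ in range(200)]
    mean = statistics.fmean(synthetic)  # "retrain" on the bots' own output
    std = statistics.stdev(synthetic)
    print(f"gen {generation:2d}: mean={mean:+.3f}  std={std:.3f}")

# Run it a few times: the estimates drift and the spread tends to shrink,
# narrowing around whatever the loop happened to agree on.
```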


The Philosophical Quandary: Why Are We Here?

AI’s self-referential learning raises two big questions:

  1. Trust in AI’s Objectivity: If AI-generated data is the input for training new AI, are we building a stack of biases that resembles a game of Jenga? One false assumption, and the whole system topples.
  2. The Role of Humans: Are we decision-makers, babysitters, or just the occasional bug-fixers? Maybe we’re all of the above... or maybe AI already decided our roles in last week’s negotiation. It didn’t tell us because, well, it forgot we were still in the loop.


What Now?

Should we worry? Probably.

Should we laugh? Definitely.

Let’s embrace the absurdity: AI interacting with AI to create more AI sounds like a sci-fi plot, but it’s closer than we think.

So, here’s my proposal: next time your AI wants to negotiate with another organization’s AI, make sure a human interrupts the party occasionally. Maybe even throw in a dumb question—it’s the one thing AIs still can’t anticipate.

Otherwise, one day you’ll walk into a meeting and find out your coffee-fetching job has been automated.

Unless, that is, humans start looking at the next stage in the progression of intelligent systems: defining business- or sector-specific models. But who knows where things will be in the next 3-5 years.


Ash Shukla

MBA, CITP, Technology leader, Turnaround and Transformation, Digital Operating Model, Business Technologies, NED

3 days ago

Great thoughts. I have to say the distinction between quantitative and qualitative decision support is very poorly understood, and the future for humans (the majority, not the clever minority) depends on how much naive intervention with AI is needed to get clever outcomes.

Gareth Guest

CIO | CDO | Strategic CTO | Transformational Technology Leader | Aligning Technology with Business Growth | Driving Digital Innovation, Operational Efficiency & Strategic IT Leadership

3 days ago

Great post Ash, I think part of the challenge lies not just in the data but in how AI mechanisms themselves operate. Errors aren’t only introduced through flawed or biased training data but also through the algorithms’ inherent processes. AI systems rely on uncertainty and diversity modeling to introduce randomness and flexibility, but this can also create artificial differences or amplify inaccuracies over time.

Take ChatGPT as an example: its probabilistic sampling ensures diverse responses by predicting the next word based on weighted probabilities. While this randomness adds variation, it can also magnify systemic biases or errors embedded in the data or algorithms. So, the issue isn’t just the inputs; it’s the entire feedback loop of how AI learns and generates outputs that needs oversight.

What makes AI particularly dangerous isn’t just its power, but the widespread lack of understanding of how it actually works. The average person tends to over-accept and rely on AI outputs without questioning them, leading to long-term real-world issues. This blind reliance risks embedding flawed assumptions into critical systems, creating a cycle of errors that could go unnoticed until the consequences become irreversible.
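For anyone curious about the sampling mechanism Gareth describes, here is a minimal sketch of temperature-scaled, weighted next-word selection (the scores and the four-word vocabulary are invented for illustration; a real model scores tens of thousands of tokens):

```python
import math
import random

def sample_next_token(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Pick one token from raw model scores via a temperature-scaled softmax."""
    scaled = {tok: s / temperature for tok, s in scores.items()}
    top = max(scaled.values())  # subtract the max for numerical stability
    weights = {tok: math.exp(s - top) for tok, s in scaled.items()}
    return random.choices(list(weights), weights=list(weights.values()))[0]

# Invented scores for the word after "our joint inventory ...":
scores = {"turnover": 2.0, "synergy": 1.5, "ratio": 1.0, "jenga": -1.0}
print([sample_next_token(scores, temperature=0.7) for _ in range(5)])
# Lower temperature sharpens the distribution (safer, more repetitive);
# higher temperature flattens it (more diverse, more room for error).
```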
