When AI Chats with AI: Are Humans Just Benchwarmers Now?
Ash Shukla
MBA, CITP, Technology leader, Turnaround and Transformation, Digital Operating Model, Business Technologies, NED
I had an interesting conversation with someone the other day that prompted me to write this... just for fun.
Imagine this: two AIs from rival organizations meet in the digital equivalent of a corporate lounge. Let’s call them AlphaBot and BetaMind. They’ve been tasked with negotiating a new supply chain partnership. Email is last generation; these bots prefer direct API-to-API communication. The twist? Each AI has access to enough data, ML models, and “corporate wisdom” to simulate decades of executive decision-making.
What happens next? They strike a deal... or not. Either way, something bizarre unfolds—these interactions create training data. Yes, AI-generated conversations are training the very AIs that participated in them. A virtual ouroboros.
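The ouroboros above can be sketched in a few lines of code. This is a toy illustration, not anyone's actual training pipeline: a Gaussian distribution stands in for a language model, and the names `fit` and `generate` are made up for the example. Each "generation" of the model is trained only on the previous generation's output, with no fresh human data.

```python
import random
import statistics

def fit(samples):
    # "Train" a toy model: estimate the mean and stdev of the data.
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mean, stdev, n, rng):
    # "Generate" new data from the fitted model.
    return [rng.gauss(mean, stdev) for _ in range(n)]

rng = random.Random(42)
# Generation zero: original, human-made data.
data = [rng.gauss(0.0, 1.0) for _ in range(500)]

stdevs = []
for generation in range(10):
    mean, stdev = fit(data)
    stdevs.append(stdev)
    # The next model sees only the previous model's output.
    data = generate(mean, stdev, 500, rng)

# With no fresh human data, the distribution drifts from the original
# across generations instead of staying anchored to it.
print(stdevs[0], stdevs[-1])
```

Run it a few times with different seeds and the estimated spread wanders away from the true value of 1.0; researchers studying this feedback loop call the degenerate endpoint "model collapse".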
The Rise of Confused AI and Humans
Here's the problem: if AI is both the student and the teacher, where does that leave humans? Let’s break it down:
The Philosophical Quandary: Why Are We Here?
AI’s self-referential learning raises big questions about where humans fit in:
What Now?
Should we worry? Probably.
Should we laugh? Definitely.
Let’s embrace the absurdity: AI interacting with AI to create more AI sounds like a sci-fi plot, but it’s closer than we think.
So, here’s my proposal: next time your AI wants to negotiate with another organization’s AI, make sure a human interrupts the party occasionally. Maybe even throw in a dumb question—it’s the one thing AIs still can’t anticipate.
Otherwise, one day you’ll walk into a meeting and find out your coffee-fetching job has been automated.
Unless, that is, humans start looking at the next stage of intelligence in systems progression: defining business- or sector-specific models. But who knows where things will be in the next 3-5 years.
MBA, CITP, Technology leader, Turnaround and Transformation, Digital Operating Model, Business Technologies, NED
3 days ago
Great thoughts. I have to say the distinction between quantitative and qualitative decision support is very poorly understood, and the future for humans (the majority, not the clever minority) depends on how much naive intervention with AI is done to get clever outcomes.
CIO | CDO | Strategic CTO | Transformational Technology Leader | Aligning Technology with Business Growth | Driving Digital Innovation, Operational Efficiency & Strategic IT Leadership
3 days ago
Great post Ash. I think part of the challenge lies not just in the data but in how AI mechanisms themselves operate. Errors aren’t only introduced through flawed or biased training data but also through the algorithms’ inherent processes. AI systems rely on uncertainty and diversity modeling to introduce randomness and flexibility, but this can also create artificial differences or amplify inaccuracies over time.

Take ChatGPT as an example: its probabilistic sampling ensures diverse responses by predicting the next word from weighted probabilities. While this randomness adds variation, it can also magnify systemic biases or errors embedded in the data or algorithms. So the issue isn’t just the inputs; it’s the entire feedback loop of how AI learns and generates outputs that needs oversight.

What makes AI particularly dangerous isn’t just its power, but the widespread lack of understanding of how it actually works. The average person tends to over-accept and rely on AI outputs without questioning them, leading to long-term real-world issues. This blind reliance risks embedding flawed assumptions into critical systems, creating a cycle of errors that could go unnoticed until the consequences become irreversible.