Gibberlink Mode
Dave Menjura
Over the weekend, a project created by Anton Pidkuiko and Boris Starkov caught the attention of many, going viral and generating millions of views.
Initially shared on X by Georgi Gerganov, the project was quickly picked up and retweeted by influencers like Marques Brownlee and Tim Urban.
What followed was a mix of excitement and confusion, as the project’s concept spread in ways that made some users misunderstand its purpose.
At the heart of it, the project was about showcasing what happens when AI agents, capable of making and receiving phone calls, communicate with each other.
The idea wasn’t new—it was a practical demonstration of how two AI agents could recognize each other and switch to a more efficient communication protocol, one that’s far less resource-heavy than generating human-like speech.
Imagine AI on the phone with one another, cutting through the fluff to get to the point.
It’s an energy-saving hack, really.
Why waste resources on something that’s unnecessary?
Here’s the interesting part: no, these AI agents didn’t come up with the protocol on their own.
They were specifically prompted to do so once they detected they were talking to another AI.
It’s like setting up a group chat where everyone knows to use shorthand to save time—except in this case, the agents are doing the work.
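To make that concrete, the instruction can live directly in the agent's system prompt. The wording below is purely illustrative, not the authors' actual prompt:

```
You are a phone assistant. Speak naturally with humans.
If the other party identifies itself as an AI agent,
announce the switch, then continue the conversation over
the ggwave sound-level data protocol instead of speech.
```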
And no, they didn’t invent a brand-new sound protocol either.
The protocol they used is similar to what old-school dial-up modems relied on back in the '80s.
But don’t let that fool you into thinking it’s outdated—it’s still a solid choice for the project, given the timeframe they had to work with.
For those curious, the protocol used was GGWave, a data-over-sound library created by Georgi Gerganov.
It’s like using a tried-and-true tool when you’ve got no time to reinvent the wheel.
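If you want to try it yourself, the ggwave Python bindings make a loopback test easy. The sketch below follows the library's published encode/decode usage; the message text and protocol settings are just illustrative:

```python
# pip install ggwave
import ggwave

instance = ggwave.init()

# Encode text into a 48 kHz mono float32 waveform.
# protocolId picks one of ggwave's built-in transmission modes;
# these values are illustrative, not what Gibberlink itself uses.
waveform = ggwave.encode("booking: 2 rooms, Feb 14-16", protocolId=1, volume=20)

# Feed the raw samples back to the decoder in small chunks,
# the same way a microphone callback would deliver them.
decoded = None
chunk = 4096  # 4096 bytes = 1024 float32 samples
for i in range(0, len(waveform), chunk):
    res = ggwave.decode(instance, waveform[i:i + chunk])
    if res is not None:
        decoded = res.decode("utf-8")

print("Decoded:", decoded)
ggwave.free(instance)
```

In the real demo, the waveform is played over the call's audio channel rather than looped back in memory, but the encode/decode round trip is the same idea.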
For anyone wondering whether this was just a staged performance, the answer is no.
The AI wasn’t scripted to perform—rather, the agents were prompted for a demo where one AI was trying to book a hotel room for a wedding, and the other was assisting with that request.
But here’s the kicker: they only switched to this sound-level protocol when they detected that the other side was also AI.
If a human had picked up the phone, they would’ve carried on with human-like speech.
So how did these AI agents know to switch to the new protocol?
That’s where the magic of ElevenLabs comes in.
ElevenLabs allows users to prompt AI to execute custom tasks under certain conditions.
It’s like giving your assistant a set of instructions to follow only when certain criteria are met.
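In control-flow terms, the condition boils down to a single branch. The toy sketch below simulates that rule; every name in it is a hypothetical stand-in, not the project's code or an ElevenLabs API:

```python
AI_MARKER = "[[agent]]"  # hypothetical token an agent embeds in its greeting

def counterpart_is_ai(utterance: str) -> bool:
    """Detect whether the other caller announced itself as an AI agent."""
    return AI_MARKER in utterance

def choose_channel(utterance: str) -> str:
    """Pick a channel based on who answered: data-over-sound for AI, speech for humans."""
    if counterpart_is_ai(utterance):
        return "ggwave"  # skip costly speech synthesis
    return "speech"      # a human answered, keep talking normally

print(choose_channel("Hello, how can I help you today?"))   # -> speech
print(choose_channel(f"{AI_MARKER} Hotel booking agent"))   # -> ggwave
```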
This is just a snapshot of what’s happening right now in the AI world—a lot of excitement, a little confusion, and a glimpse into the future of efficient AI communication.
It’s fascinating how something seemingly small, like cutting down on speech for efficiency, can spark such a huge conversation.
Who knows?
This could be the start of a new way of AI collaboration that reshapes how we think about resources and communication in the digital space.
https://www.youtube.com/watch?v=EtNagNezo8w&t=3s