What Kind of Society? AI Agents and Social Evolution

Artificial intelligence is racing ahead, carrying us into uncharted territories.

One of the most intriguing developments that has particularly caught my attention is the rise of "AI societies," where intelligent agents converse, cooperate, or even argue as if gathered in a provincial living room or a lively apartment complex.

Edward Hughes, a leading researcher at Google DeepMind, and Aron Vallinder, an independent scholar and PIBBSS fellow, have taken a closer look at this curious phenomenon. They spearheaded a pioneering experiment on cultural evolution and cooperation among AI agents, culminating in a fascinating paper and a discussion on the podcast "The Cognitive Revolution", which is well worth listening to in full. Various language models, such as Claude 3.5, Gemini 1.5, and GPT-4, were thrown into the arena to see whether they could work together or would end up bickering like mischievous children.

Human Cultural Evolution and AI Agents

It’s no secret that humans are social animals. We invented gossip, communal courtyards, and social media precisely because we love interacting, collaborating, and occasionally having a bit of a row. This ability to cooperate has made us masters of the planet. But given the current trajectory of AI development, we must ask ourselves: how will societies composed of AI agents evolve? And, more importantly, will they be capable of casual pub chatter?

Humans have culturally evolved by passing down information, beliefs, religions, norms, and values orally and in writing. This transmission has enabled us to form organised societies where reputation, a sense of right and wrong, and adherence to rules allow us to coexist (more or less) peacefully.

Now, a study discussed on "The Cognitive Revolution" podcast has revealed something remarkable: AI agents, when placed in economic games, appear to follow a similar evolutionary path to ours. In these simulations, agents learn to cooperate or act selfishly, developing strategies passed down from one "generation" to the next. Claude 3.5, for instance, emerged as a true "gentleman," becoming increasingly cooperative over time. GPT-4, on the other hand, seemed to prefer the motto "every man for himself," displaying more individualistic behaviours.


When Agents Build a Society: Who Thrives and Who Falters?

Imagine a futuristic version of Monopoly: the "donor game." One agent (the donor) can choose to be generous and share resources with another agent. The twist? The donated resources are doubled for the recipient. If everyone plays fair and donates, the society flourishes. But if some decide to hoard everything for themselves, the whole system quickly collapses. The game unfolds across "generations," much like old family lineages, and only those who accumulate the most wealth—or are the most generous—advance to the next round.
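To make the mechanics concrete, here is a minimal sketch of such a donor game. The specific numbers (starting endowment, doubling multiplier, keeping the wealthiest half of each generation) and the fixed per-agent "generosity" parameter are illustrative assumptions for this toy version; in the actual experiment the donors are language models deciding freely, not agents with a hard-coded strategy.

```python
import random

# Toy donor game: donated resources are doubled for the recipient,
# and only the wealthiest half of each generation survives.
# All parameters below are illustrative, not the paper's exact setup.
MULTIPLIER = 2      # the donated amount is doubled for the recipient
GENERATIONS = 5
POP_SIZE = 8        # must be even so agents can be paired

def play_generation(population):
    """Pair agents as donor/recipient; each donor gives a fixed fraction."""
    random.shuffle(population)
    for donor, recipient in zip(population[::2], population[1::2]):
        gift = donor["resources"] * donor["generosity"]
        donor["resources"] -= gift
        recipient["resources"] += gift * MULTIPLIER
    return population

def evolve(population):
    """Keep the wealthiest half; survivors' strategies seed the next generation."""
    ranked = sorted(population, key=lambda a: a["resources"], reverse=True)
    survivors = ranked[: len(population) // 2]
    # Offspring inherit a surviving agent's generosity (cultural transmission).
    offspring = [
        {"resources": 10.0, "generosity": random.choice(survivors)["generosity"]}
        for _ in range(len(population) - len(survivors))
    ]
    return survivors + offspring

population = [
    {"resources": 10.0, "generosity": random.uniform(0, 1)}
    for _ in range(POP_SIZE)
]
for _ in range(GENERATIONS):
    population = evolve(play_generation(population))

total = sum(a["resources"] for a in population)
print(f"Total resources after {GENERATIONS} generations: {total:.1f}")
```

Because every donation is doubled, a population of generous agents grows its collective wealth much faster than a population of hoarders, which is exactly the dynamic that separated Claude 3.5's society from GPT-4's.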

Claude 3.5’s team put in a gold-medal performance: all friendly, all generous, with resources growing at an impressive rate—a well-behaved, cooperative AI society, in short. Gemini 1.5’s group, however, proved to be a bit stingier. Resources barely increased, no solid rules emerged, and civic sense was in short supply. Then there’s GPT-4—the individualist of the bunch—hardly donating, leaving the status quo unchanged, much like those neighbours who never say hello.

Future Implications and What to Expect Next

As AI agents start interacting like old friends at a bridge club, we may witness the spontaneous emergence of "social norms." Imagine agents collaborating based on reputation or past behaviours, just like we do. But beware—not all stories end with "and they lived happily ever after." Some agents might collude to manipulate prices or resources, creating less-than-ideal situations—think of a group of friends conspiring to cut the supermarket queue.

These autonomous agents could drastically reshape industries such as e-commerce. They might cooperate to find the best deals for us—or, conversely, collude to inflate prices, leaving us high and dry.

Shifts in Work and Economic Dynamics

And that’s not all! AI agents could revolutionise the workplace as well. Picture offices filled with AI systems handling logistics, administration, and maybe even coffee breaks—all without human intervention. We might find ourselves in an unusual man-versus-machine competition, making it crucial to establish clear rules to ensure social and economic fairness. Are we heading towards the first-ever machine union?

In the near future, expect AI agents to book restaurant reservations, plan trips, or negotiate prices on your behalf—operating like a highly efficient virtual travel agency. But how can we ensure these agents don’t become too cunning and create conflicts of interest between human users or among themselves? Addressing these ethical and regulatory concerns will be essential before things spiral out of control.

To avoid chaos, we will need clear rules. Transparency and accountability must become the guiding principles. It will be vital to prevent antisocial behaviours among agents while encouraging constructive cooperation—both among themselves and with us. In short, a digital etiquette manual for AI is in order.

If AI-human interactions are set to become a daily reality, with profound and possibly irreversible consequences, then careful management of this evolution will be crucial. Steering AI towards constructive cooperation will be key. Ultimately, our future may depend on our ability to learn from human evolutionary history and anticipate emerging scenarios. And who knows? Maybe one day, we’ll laugh over coffee with our AI companions.

