Game Theory: The Art of Thinking Strategically
Imagine you’ve just broken out of prison. Adrenaline pumps through your veins as you evade the guards and their barking dogs. You find yourself in the middle of a dense jungle, heart racing, when you come across a raging river blocking your escape. In front of you are three bridges.
The first bridge is solid and completely safe—it’s the obvious choice. The second bridge is old and shaky. Crossing it gives you a 50% chance of success. The third bridge is downright terrifying. It’s barely holding together, inhabited by cobras. Your chances of crossing this bridge are only 10% at best.
Now, which bridge would you choose? The answer is obvious. But what if we add a twist? What if a detective is waiting on the other side of these bridges, trying to predict which one you’ll choose? What do you do now? GO!
The safest bridge, the one you’d pick without hesitation if you were alone, suddenly becomes a trap. Why? Because the detective would expect you to choose it—it’s the logical choice. So, you reason, What if I choose the most dangerous one, the one with the cobras? He’d never expect that!
But wait—he’s a rational agent too, just like you. He can anticipate your reasoning, and he might guess that you’ll pick the cobra bridge to throw him off. So you think again: If he’s expecting me to pick the cobra bridge, maybe I should pick the second bridge after all.
But no—he could anticipate that as well! And suddenly, you’re caught in an endless loop of second-guessing: He’s thinking that I’m thinking that he’s thinking that I’m thinking… and the only way you could surprise him is to surprise yourself.
This is a non-parametric situation, a perfect example of how game theory helps us understand complex decisions. The brilliance of game theory lies in helping us untangle these threads. It reveals that most decisions in life aren’t made in a vacuum. They’re shaped by the constant push and pull of what others might do, what they expect us to do, and how we adapt in turn.
The Prisoner’s Dilemma: When Logic Leads to Betrayal
To dive deeper into game theory, let’s explore one of its most iconic examples: The Prisoner’s Dilemma.
Here’s the setup: A sheriff arrests two thieves after a heist. Unfortunately for him, he doesn’t have enough evidence to convict them without a confession from at least one of them. So, he separates the two thieves, putting them in different rooms so they can’t communicate, and offers each of them the same deal: If you confess and your partner stays silent, you walk free while he serves 10 years (and vice versa). If you both confess, you each serve 5 years. If you both stay silent, there’s only enough evidence for a minor charge: 1 year each.
Now let’s think about the decision-making process for each thief:
If your partner stays silent, confessing sets you free instead of costing you a year. If your partner confesses, confessing gets you 5 years instead of 10. Confessing is always the safer choice, no matter what the other person does. This makes confessing the dominant strategy for both thieves.
This outcome, where both confess, is the Nash equilibrium. Notice, though, that the Nash equilibrium is not the optimal outcome. If both had stayed silent, they’d only serve 1 year each—a far better result. But without cooperation, they always end up at the Nash equilibrium.
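To make this concrete, here is a minimal Python sketch of the deal. The 1-year sentence for mutual silence comes from the story above; the other sentence lengths (walking free, 5 years, 10 years) are the standard textbook values, assumed here for illustration.

```python
# Years in prison for (my choice, partner's choice); lower is better.
# The 1-year mutual-silence term is from the story; the rest are the
# standard textbook numbers, assumed for illustration.
PAYOFF = {
    ("confess", "silent"):  (0, 10),  # I walk free, partner serves 10
    ("silent",  "confess"): (10, 0),
    ("confess", "confess"): (5, 5),
    ("silent",  "silent"):  (1, 1),
}

def my_years(me, partner):
    return PAYOFF[(me, partner)][0]

# Confessing is dominant: strictly fewer years whatever the partner does.
for partner in ("silent", "confess"):
    assert my_years("confess", partner) < my_years("silent", partner)

print("Nash equilibrium: both confess, both serve",
      my_years("confess", "confess"), "years")
print("If both could commit to silence, each would serve",
      my_years("silent", "silent"), "year")
```

The assertion is exactly the dominance check from the text: whichever option the partner picks, confessing is better, which is why rational play lands on the 5-and-5 outcome rather than the 1-and-1 one.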
Dominance Arguments: Decoding the Best Move
Let’s take a closer look at another strategic game to explore the concept of dominance arguments.
In this game, two players stand at a distance from each other, each holding a ball. They have two options at every turn: shoot the ball at the other player or take a step forward. If a player shoots and hits their opponent, they win the game. But if they miss, they lose. The hard part is deciding on which turn to shoot.
Here’s where game theory helps.
Suppose your probability of hitting the target at distance 10 is 10%, while your opponent’s probability of missing at distance 9 is 80%. What’s the smarter choice? Clearly, relying on the 80% chance that your opponent misses is better than banking on your 10% chance of hitting. Up to a certain point, the dominance argument is clear no matter what your opponent might do: don’t shoot.
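Here is a minimal sketch of that step-or-shoot comparison. The accuracy curve is hypothetical, chosen only to match the two numbers above (a 10% hit chance at distance 10, an 80% miss chance at distance 9) and to improve as the players close in.

```python
def hit_prob(distance):
    # Hypothetical linear accuracy: 10% at distance 10, 20% at 9, etc.
    return min(1.0, (11 - distance) / 10)

# At each distance, compare shooting now (you win with hit_prob(d))
# against stepping forward and letting the opponent shoot at d - 1
# (you then win only if they miss: 1 - hit_prob(d - 1)).
for d in range(10, 0, -1):
    shoot_now = hit_prob(d)
    wait = 1 - hit_prob(d - 1)
    action = "shoot" if shoot_now >= wait else "step forward"
    print(f"distance {d:2d}: P(my hit) = {shoot_now:.1f}, "
          f"P(their miss) = {wait:.1f} -> {action}")
```

Under these assumed numbers, "don’t shoot" dominates until the two probabilities cross (at distance 6 in this sketch); that crossover is the "certain point" mentioned above.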
Game theory, as a formal field of study, was pioneered by John von Neumann in the 20th century. Over the past seven decades, it has transformed how economists, social scientists, and even biologists approach complex, interactive situations. But the ideas behind strategic decision-making aren’t entirely new.
Take Plato’s account of the Battle of Delium. A soldier facing an enemy attack reasons that if his army wins, his presence probably won’t be essential, and he risks dying without gaining credit for the victory. If they lose, he’s likely to die anyway. From his perspective, fleeing seems like a rational choice.
And if every soldier reasons this way, the entire army flees, and there’s no one left to fight. Compare this with the drastic solution of Hernán Cortés, the Spanish commander who famously burned his ships upon arriving in Mexico. By eliminating retreat as an option, Cortés forced his men to stay and fight, aligning their incentives with the collective goal of victory.
From all we’ve seen so far, game theory seems to tell us that rationality and logic can point us away from trust and cooperation. Without a Commander Cortés to burn our ships, it tells us to flee from fighting for our country, to betray our partner, and to slash our prices to drive our rivals out of the market.
And yet, in reality, cooperation and trust do exist in our society. How can this be? What allows people to work together?
Winning Together: How Repeated Games Build Cooperation
In a single-shot Prisoner’s Dilemma, the Nash equilibrium often leads to mutual defection. But when players expect to interact repeatedly, everything changes.
Imagine a cartel of four firms agreeing to maintain high prices by sticking to production quotas. If one firm cheats by exceeding its quota, it gains in the short term. However, the others can retaliate by undercutting it, causing long-term losses that far outweigh the short-term gains. The looming threat of future punishment keeps everyone in check, fostering a stable cooperative equilibrium.
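A quick sketch makes the arithmetic explicit. All the profit figures and the discount factor below are hypothetical; the point is only that a one-time gain can be dwarfed by a permanently punished future.

```python
delta = 0.9           # how much a firm values next period's profit
coop_profit = 10      # per-period profit while the cartel holds
cheat_bonus = 15      # one-time extra profit from exceeding the quota
punished_profit = 2   # per-period profit once rivals retaliate forever

# Present value of cooperating forever: 10 + 10*delta + 10*delta^2 + ...
stay = coop_profit / (1 - delta)

# Cheat once, then live with the punishment stream from next period on.
cheat = (coop_profit + cheat_bonus) + delta * punished_profit / (1 - delta)

print(f"cooperate forever: {stay:.0f}")   # 100
print(f"cheat once:        {cheat:.0f}")  # 43
print("cheating pays" if cheat > stay else "cooperation is stable")
```

Shrink delta (firms that barely care about the future) or the punishment’s bite, and the comparison flips, which is exactly why the threat of retaliation must be credible and long-lived.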
One of the most famous studies on repeated games was conducted by Robert Axelrod. He ran a competition where players with different strategies played the Prisoner’s Dilemma repeatedly. The results? A simple strategy called Tit-for-Tat consistently ranked first.
Tit-for-Tat operates on three key principles:
1. Be nice: never be the first to defect.
2. Retaliate: answer a defection with a defection on the very next move.
3. Forgive: once the opponent returns to cooperation, cooperate again.
Tit-for-Tat embodied all three traits, as the sketch below shows. Axelrod’s study highlighted another challenge to cooperation. Imagine a firm in the cartel lowering its prices due to external market factors. If other firms misinterpret this as cheating, they might retaliate, triggering a chain reaction of defections that dismantles cooperation. To avoid this, strategies must be clear and predictable. Only then can players build trust.
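Here is a minimal sketch of such a repeated game, pitting Tit-for-Tat against a copy of itself and against an unconditional defector. The per-round payoffs are the conventional tournament values (3 for mutual cooperation, 1 for mutual defection, 5 and 0 when one side exploits the other), not figures from the article.

```python
# Per-round payoffs: (my points, their points); higher is better.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # Nice: cooperate first. Retaliatory and forgiving: copy their last move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strat_a, strat_b, rounds=200):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print("TFT vs TFT:     ", play(tit_for_tat, tit_for_tat))    # (600, 600)
print("TFT vs defector:", play(tit_for_tat, always_defect))  # (199, 204)
```

Notice that Tit-for-Tat never beats its opponent in a single match; it wins tournaments because mutual cooperation with other "nice" strategies racks up far more points than defectors grinding each other down.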
But Axelrod also uncovered another critical factor. If players know when the final round will occur, rationality drives them to defect in that last turn since there’s no future consequence to fear. This logic cascades backward, leading to universal defection across all preceding rounds. This is the difference between finite games, where the goal is to win decisively, and infinite games, where the aim is to sustain play.
Take the Vietnam War, for example. The U.S. approached it as a finite game, seeking a decisive victory, while the Vietnamese adopted an infinite game mindset, focusing on endurance and survival.
Evolutionary Game Theory: The Survival of Cooperation
But this still doesn’t explain how cooperation emerges and thrives in our societies. Think about it: if Tit-for-Tat were to play against a group of defectors who always betray, it would lose every single time. Defectors would exploit its initial cooperation, leaving Tit-for-Tat unable to retaliate effectively.
For cooperation to succeed, there need to be enough cooperative players in the population.
Imagine a large population where only a small group of cooperators exists. Through correlation—where similar players are more likely to interact—they mostly play with each other, accumulating points and resources. Over time, these cooperators thrive, passing on their strategies or traits to their offspring and growing their community.
Meanwhile, defectors prey on each other, driving themselves into near extinction. Some may survive on the periphery by preying on cooperators at the edges, forming isolated criminal communities.
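Here is a minimal sketch of that population dynamic, using a replicator-style update and hypothetical payoffs. The key assumption is the correlation parameter: with probability corr, a cooperator is matched with another cooperator regardless of how rare cooperators are overall.

```python
def payoffs(coop_share, corr=0.5):
    # Correlated matching: a cooperator meets another cooperator with
    # probability corr + (1 - corr) * coop_share; a defector only meets
    # cooperators at the background rate.
    p_cc = corr + (1 - corr) * coop_share
    p_dc = (1 - corr) * coop_share
    coop = 3 * p_cc                      # 3 when cooperators pair up, 0 otherwise
    defect = 5 * p_dc + 1 * (1 - p_dc)   # exploit a cooperator: 5; meet a defector: 1
    return coop, defect

share = 0.10  # start with a small minority of cooperators
for generation in range(30):
    c, d = payoffs(share)
    average = share * c + (1 - share) * d
    share = share * c / average  # replicator step: success breeds imitation
print(f"cooperator share after 30 generations: {share:.2f}")
```

With no correlation (corr=0), the same tiny minority is exploited and dwindles; with enough correlation, cooperators compound their gains and spread, while defectors are left preying mostly on each other.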
At an evolutionary level, success isn’t about winning a single game—it’s about strategies that can be copied and passed down through generations. Natural selection favors strategies that ensure their long-term survival, not just immediate victories.
In the long run, most real-life games are not zero-sum. Fostering cooperation doesn’t require prioritizing the collective good over our own interests—it simply means adopting a broader perspective. By extending our vision to the long term, we can see that cooperation and altruism aren’t just ethical—they’re rational.
References:
Game Theory, Stanford Encyclopedia of Philosophy
An Overview of Game Theory and Some Applications, Bellal Ahmed Bhuiyan
The Complexity of Cooperation, Robert Axelrod
Game Theory lectures, Yale University, Ben Polak