In game theory, grim trigger (also called the grim strategy or just grim) is a trigger strategy for a repeated game.
Initially, a player using grim trigger will cooperate, but as soon as the opponent defects (thus satisfying the trigger condition), the player using grim trigger will defect for the remainder of the iterated game. Since a single defect by the opponent triggers defection forever, grim trigger is the most strictly unforgiving of strategies in an iterated game.
In Robert Axelrod's book The Evolution of Cooperation, grim trigger is called "Friedman",[1] for a 1971 paper by James W. Friedman, which uses the concept.[2][3]
The infinitely repeated prisoners' dilemma
The infinitely repeated prisoners' dilemma is a well-known setting for the grim trigger strategy. The stage game for the two prisoners is as follows:
| | Prisoner B: Stays Silent (Cooperate) | Prisoner B: Betrays (Defect) |
| Prisoner A: Stays Silent (Cooperate) | 1, 1 | -1, 2 |
| Prisoner A: Betrays (Defect) | 2, -1 | 0, 0 |
Payoffs are listed as (Prisoner A, Prisoner B).
In the prisoners' dilemma, each player has two choices in each stage:
- Cooperate
- Defect for an immediate gain
If a player defects, he will be punished for the remainder of the game. In fact, both players are better off staying silent (cooperating) than betraying each other, so playing (C, C) is the cooperative profile, while playing (D, D), which is also the unique Nash equilibrium of the stage game, is the punishment profile.
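To make the stage game concrete, here is a minimal Python sketch (an illustration, not drawn from the cited sources) that encodes the payoff matrix above and checks every action profile for unilateral profitable deviations, confirming that (D, D) is the only Nash equilibrium of the one-shot game:

```python
# Payoffs of the one-shot prisoners' dilemma, indexed by (action_A, action_B).
# Each value is (payoff_A, payoff_B); actions are "C" (cooperate) or "D" (defect).
PAYOFFS = {
    ("C", "C"): (1, 1),
    ("C", "D"): (-1, 2),
    ("D", "C"): (2, -1),
    ("D", "D"): (0, 0),
}

ACTIONS = ("C", "D")

def is_nash(a, b):
    """A profile is a Nash equilibrium if neither player gains by deviating alone."""
    pa, pb = PAYOFFS[(a, b)]
    best_a = max(PAYOFFS[(x, b)][0] for x in ACTIONS)
    best_b = max(PAYOFFS[(a, y)][1] for y in ACTIONS)
    return pa == best_a and pb == best_b

print([profile for profile in PAYOFFS if is_nash(*profile)])  # -> [('D', 'D')]
```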
In the grim trigger strategy, a player cooperates in the first round and in the subsequent rounds as long as his opponent does not defect from the agreement. Once the player finds that the opponent has betrayed in a previous round, he will then defect forever.
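The rule itself is simple to state as an algorithm. Below is a minimal Python sketch (illustrative only; the opponent's move sequence is hypothetical) of grim trigger as a stateful player in the iterated prisoners' dilemma:

```python
class GrimTrigger:
    """Cooperate until the opponent defects once, then defect forever."""
    def __init__(self):
        self.triggered = False

    def move(self):
        return "D" if self.triggered else "C"

    def observe(self, opponent_move):
        if opponent_move == "D":
            self.triggered = True

# Hypothetical opponent: cooperates except for a single defection in round 3.
opponent_moves = ["C", "C", "D", "C", "C", "C"]

grim = GrimTrigger()
history = []
for opp in opponent_moves:
    history.append((grim.move(), opp))
    grim.observe(opp)

print(history)
# [('C', 'C'), ('C', 'C'), ('C', 'D'), ('D', 'C'), ('D', 'C'), ('D', 'C')]
```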
To check whether grim trigger is a subgame perfect equilibrium (SPE) of this game, define the strategy S* for players i and j as follows:
- Play C in every period unless someone has ever played D in the past
- Play D forever if someone has played D in the past[4]
Then, the strategy is an SPE if and only if the discount factor satisfies $\delta \geq \frac{1}{2}$. In other words, neither Player 1 nor Player 2 is incentivized to deviate from the cooperation profile if the discount factor is at least one half.[5]
To prove that the strategy is an SPE, cooperation should be the best response to the other player's cooperation, and defection should be the best response to the other player's defection.[4]
Step 1: Suppose that D has never been played so far.
- Player i's payoff from C: $1 + \delta + \delta^2 + \delta^3 + \cdots = \frac{1}{1-\delta}$
- Player i's payoff from D: $2 + 0 \cdot \delta + 0 \cdot \delta^2 + \cdots = 2$ (a one-period gain of 2, followed by mutual defection worth 0 in every subsequent period)
Then, C is better than D if $\frac{1}{1-\delta} \geq 2$, that is, if $\delta \geq \frac{1}{2}$.
Step 2: Suppose that someone has played D previously; then Player j will play D no matter what.
- Player i's payoff from C: $-1 + 0 \cdot \delta + 0 \cdot \delta^2 + \cdots = -1$
- Player i's payoff from D: $0 + 0 \cdot \delta + 0 \cdot \delta^2 + \cdots = 0$
Since $0 > -1$, playing D is optimal.
The preceding argument shows that there is no profitable deviation from the cooperation profile if $\delta \geq \frac{1}{2}$, and this holds in every subgame. Therefore, the grim trigger strategy for the infinitely repeated prisoners' dilemma is a subgame perfect Nash equilibrium.
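As a quick numeric check of the threshold, the following sketch (using the discounted sums derived above, under the same payoffs) compares the value of cooperating forever against the value of a one-shot deviation for a few discount factors:

```python
def cooperate_value(delta):
    """Discounted value of mutual cooperation forever: 1 + delta + delta^2 + ... = 1 / (1 - delta)."""
    return 1 / (1 - delta)

def deviation_value():
    """Defect once for a payoff of 2, then earn 0 in every later period under mutual defection."""
    return 2.0

for delta in (0.3, 0.5, 0.7):
    v_coop, v_dev = cooperate_value(delta), deviation_value()
    print(f"delta={delta}: cooperate={v_coop:.2f}, deviate={v_dev:.2f}, "
          f"cooperation sustained: {v_coop >= v_dev}")
# delta=0.3: cooperate=1.43, deviate=2.00, cooperation sustained: False
# delta=0.5: cooperate=2.00, deviate=2.00, cooperation sustained: True
# delta=0.7: cooperate=3.33, deviate=2.00, cooperation sustained: True
```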
In iterated prisoner's dilemma strategy competitions, grim trigger performs poorly even without noise, and adding signal errors makes it even worse. Its ability to threaten permanent defection gives it a theoretically effective way to sustain trust, but because of its unforgiving nature and the inability to communicate this threat in advance, it performs poorly.[6]
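The effect of signal errors can be illustrated with a small simulation. The sketch below is illustrative only (the error rate and round count are assumptions, not taken from Axelrod's tournaments); it pits two grim trigger players against each other with a small chance that a cooperative move is misread as a defection, after which cooperation never recovers:

```python
import random

def noisy_grim_match(rounds=200, error_rate=0.02, seed=0):
    """Two grim trigger players; each observed move is flipped to 'D' with probability error_rate."""
    rng = random.Random(seed)
    triggered = [False, False]       # whether each player has seen a (possibly misread) defection
    mutual_cooperation = 0
    for _ in range(rounds):
        moves = ["D" if t else "C" for t in triggered]
        if moves == ["C", "C"]:
            mutual_cooperation += 1
        # Each player observes the other's move through a noisy channel.
        for i in range(2):
            observed = moves[1 - i]
            if observed == "C" and rng.random() < error_rate:
                observed = "D"       # signal error: cooperation misread as defection
            if observed == "D":
                triggered[i] = True  # grim trigger never forgives
    return mutual_cooperation

print(noisy_grim_match())  # typically far fewer than 200 mutually cooperative rounds
```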
Grim trigger in international relations
Under the grim trigger perspective in international relations, a nation cooperates only if its partner has never exploited it in the past. Because a nation will refuse to cooperate in all future periods once its partner defects even once, the indefinite withdrawal of cooperation becomes the threat that makes such a strategy a limiting case.[7]
Grim trigger in user-network interactions
Game theory has recently been used in developing future communications systems, and the user in the user-network interaction game employing the grim trigger strategy is one such example.[8] If grim trigger is used in the user-network interaction game, the user stays in the network (cooperates) as long as the network maintains a certain quality, but punishes the network by ending the interaction and leaving the network as soon as the user detects a defection.[9] Antoniou et al. explain that "given such a strategy, the network has a stronger incentive to keep the promise given for a certain quality, since it faces the threat of losing its customer forever."[8]
Comparison with other strategies
Tit for tat and grim trigger are similar in nature in that both are trigger strategies in which a player refuses to defect first but has the ability to punish the opponent for defecting. The difference, however, is that grim trigger inflicts maximal punishment for a single defection, while tit for tat is more forgiving, offering one punishment for each defection.[10]
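This difference is easy to see in how each strategy responds to a single defection followed by renewed cooperation. The sketch below (illustrative; the opponent sequence is hypothetical) shows tit for tat punishing once and then forgiving, while grim trigger never returns to cooperation:

```python
def grim_trigger(history):
    """Defect forever once the opponent has defected at least once."""
    return "D" if "D" in history else "C"

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return history[-1] if history else "C"

# Hypothetical opponent: defects once in round 3, cooperates otherwise.
opponent = ["C", "C", "D", "C", "C", "C"]

for name, strategy in (("grim trigger", grim_trigger), ("tit for tat", tit_for_tat)):
    moves = [strategy(opponent[:i]) for i in range(len(opponent))]
    print(f"{name:12s}: {moves}")
# grim trigger: ['C', 'C', 'C', 'D', 'D', 'D']
# tit for tat : ['C', 'C', 'C', 'D', 'C', 'C']
```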
See also
- Brinkmanship – Political and military tactic
- Folk theorem (game theory) – Class of theorems about Nash equilibrium payoff profiles in repeated games
- Mutually assured destruction – Doctrine of military strategy
- Repeated game – Game that repeats a base game
- Trigger strategy – Class of strategies employed in a repeated non-cooperative game
- Tit for tat – English saying meaning "equivalent retaliation"
References
- ^ Axelrod, Robert (2006). The Evolution of Cooperation (Revised ed.). Basic Books. p. 36. ISBN 0-465-00564-0.
- ^ Friedman, James W. (1971). "A Non-cooperative Equilibrium for Supergames". Review of Economic Studies. 38 (1): 1–12. doi:10.2307/2296617. JSTOR 2296617.
- ^ The article on JSTOR
- ^ a b Acemoglu, Daron (November 2, 2009). "Repeated Games and Cooperation".
- ^ Levin, Jonathan (May 2006). "Repeated Games I: Perfect Monitoring" (PDF).
- ^ Axelrod, Robert (2000). "On Six Advances in Cooperation Theory" (PDF). Archived from the original (PDF) on 2007-06-22. Retrieved 2007-11-02. (page 13)
- ^ McGillivra, Fiona; Smith, Alastair (2000). "Trust and Cooperation Through Agent-specific Punishments". International Organization. 54 (4): 809–824. doi:10.1162/002081800551370. S2CID 22744046.
- ^ a b Antoniou, Josephina; Papadopoulou, Vicky (November 2009). "Cooperative user–network interactions in next generation communication networks". Computer Networks. 54 (13): 2239–2255. doi:10.1016/j.comnet.2010.03.013.
- ^ Antoniou, Josephina; Petros A, Ioannou (2016). Game Theory in Communication Networks: Cooperative Resolution of Interactive Networking Scenarios. CRC Press. ISBN 9781138199385.
- ^ Baurmann, Michael; Leist, Anton (May 2016). "On Six Advances in Cooperation Theory". Journal of Philosophy and Social Theory. 22 (1): 130–151.