Talk:Prisoner's dilemma/Archive 5
This is an archive of past discussions about Prisoner's dilemma. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
External links modified
Hello fellow Wikipedians,
I have just modified one external link on Prisoner's dilemma. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20130512175243/http://www.econ.nagoya-cu.ac.jp/~yhamagu/ultimatum.pdf to http://www.econ.nagoya-cu.ac.jp/~yhamagu/ultimatum.pdf
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 22:14, 12 January 2018 (UTC)
Dubious criticism of Hofstadter's briefcase game
After describing Hofstadter's briefcase version of PD, the article contains this sentence: "However, in this case both players cooperating and both players defecting actually give the same result, assuming no gains from trade exist, so chances of mutual cooperation, even in repeated games, are few." That seems like a strange way to interpret the case, and hardly a criticism of it. Wouldn't it be more reasonable to assume, since they're trading at all, that player A has a utility function according to which diamonds & money > diamonds > money > nothing, and player B has a utility function according to which diamonds & money > money > diamonds > nothing? Does this criticism show up anywhere in a reliable source? 50.191.21.222 (talk) 14:03, 27 November 2016 (UTC)
- I agree this is dubious. You could call this the eBayer's dilemma. The theory is that the two parties attach different values to the goods being sold, and the agreed price is higher than the seller's valuation and lower than the buyer's, so both parties think they have made a good deal by trading. It would not be rational to sell a good for a price equal to or less than the value you ascribe to it; conversely, it would not be rational to buy it for a price equal to or more than you think it is actually worth. Cash naturally has its face value for either side. Described here. By going through with the trade, both sides realise a value: the seller sells the item for more than she thinks it is worth; the buyer buys it for less than he is ultimately prepared to pay. Therefore any arm's-length commercial transaction between rational counterparties is a positive-sum game; a mutual defection is a zero-sum game (though there will be nominal frictional costs from having wasted time arriving at the bargain, so defecting will actually be a marginally negative-sum game). I have deleted the criticism, which I suspect is also OR. ElectricRay (talk) 17:03, 23 July 2018 (UTC)
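As a side note, here is a minimal numerical sketch of the argument above; the valuations and price are illustrative assumptions, not figures from the comment. When the agreed price lies strictly between the seller's and the buyer's valuations, both sides come out ahead, which is what makes the completed trade positive-sum.

```python
# Illustrative numbers only (assumptions for this sketch): a trade in which
# each side values what it receives more than what it gives up is positive-sum,
# while mutual defection leaves neither side with any gain.

seller_value_of_item = 50   # the seller would not part with the item for less
buyer_value_of_item = 100   # the buyer will pay at most this much
price = 75                  # agreed price, strictly between the two valuations

seller_surplus = price - seller_value_of_item   # 25
buyer_surplus = buyer_value_of_item - price     # 25

assert seller_surplus > 0 and buyer_surplus > 0  # both sides gain from trading
```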
Golden Balls gameshow as a real life example?
There is a somewhat reported-on instance of the game show Golden Balls presenting its contestants with a prisoner's dilemma (centered around prize money) and one contestant subverting the intended conflict. One article touches upon the event here [1] — Preceding unsigned comment added by Riftwave (talk • contribs) 23:49, 3 February 2019 (UTC)
Real-Life Example: An Individual's Behaviour towards the Environment
The section "In environmental studies" mentions only the implications of the PD in state politics. However, the behaviour of individuals regarding protecting the environment is another example of the PD. That should be mentioned, too. For example: Why should I not litter/save energy/...., if everybody else does? Stefanhanoi (talk) 11:21, 4 November 2018 (UTC)
"If the game is played exactly N times and both players know this, then it is optimal to defect in all rounds. The only possible Nash equilibrium is to always defect. The proof is inductive: one might as well defect on the last turn, since the opponent will not have a chance to later retaliate. Therefore, both will defect on the last turn. Thus, the player might as well defect on the second-to-last turn, since the opponent will defect on the last no matter what is done, and so on. The same applies if the game length is unknown but has a known upper limit." This paragraph appears to be incorrect. It is not optimal to defect in all rounds just because the game is played in N rounds and both players know this. The argument appears to be fallacious. Arctic Gazelle (talk) 17:20, 29 July 2021 (UTC)
The above paragraph is the standard argument for the finitely repeated prisoner's dilemma. It is not fallacious. The point is that, with backward induction, there is no way to design a retaliating strategy if the other player decides not to cooperate. (Consider repeating the game just twice; N is not necessarily a large number.) What you have in mind resembles the case of the infinitely repeated prisoner's dilemma, in which cooperation can become an equilibrium. --EpsilonToZero (talk) 19:27, 29 July 2021 (UTC)
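For illustration, here is a minimal sketch of the backward-induction argument; the payoff values (5, 3, 1, 0) are conventional assumptions, not taken from the article. Because future play is already fixed by the time a round is considered, each round reduces to a one-shot game in which defection strictly dominates.

```python
# A minimal sketch of backward induction in a finitely repeated prisoner's
# dilemma. The stage-game payoffs below are conventional illustrative values.

C, D = "cooperate", "defect"
payoff = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}  # row player's payoff

def best_response(their_move, continuation):
    """Best stage-game move when the payoff from future rounds (continuation)
    does not depend on the current move -- the backward-induction premise."""
    return max((C, D), key=lambda my_move: payoff[(my_move, their_move)] + continuation)

def backward_induction(rounds):
    """Work from the last round to the first: in every round, defection is a
    best response whatever the opponent does, so both players defect throughout."""
    continuation = 0          # equilibrium payoff from the rounds still to come
    plan = []
    for _ in range(rounds):
        assert best_response(C, continuation) == D
        assert best_response(D, continuation) == D
        plan.append(D)
        continuation += payoff[(D, D)]   # both defect in equilibrium
    return list(reversed(plan))

print(backward_induction(5))   # ['defect', 'defect', 'defect', 'defect', 'defect']
```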
Personal comment
The one thing I don't understand about this dilemma is that if Person A decides to snitch on Person B, then even if B wanted to keep his mouth shut, he will change his mind and snitch on A. In the long run it is far more advantageous for both of them to cooperate. — Preceding unsigned comment added by 45.72.163.37 (talk) 23:49, 3 February 2019 (UTC)
Please note that this talk page is normally for discussion about problems with the article, not discussion of the subject itself.
That's effectively the point. But the one who cooperates will hope the other one does the same (for example, when an authoritarian government like the Nazi regime steps in, some might cooperate and denounce others in the hope that the state will collaborate and leave them alone). --Tech-ScienceAddict (talk) 17:16, 11 June 2021 (UTC)
- A huge part of 'the prisoner's dilemma' is that the two actors cannot communicate with each other (or know each other's intention/action), and have no way to get back at each other afterwards. Cipher821 (talk) 01:28, 17 August 2021 (UTC)
Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2020 and 6 December 2020. Further details are available on the course page. Student editor(s): Dyy122dyy.
Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:13, 17 January 2022 (UTC)
Where does "Outcome D" arise from?
Outcome D: If A and B both betray the other, they will share the sentence and serve 5 years
I am failing to see where Outcome D comes from. Is it implied in the text?
Premise:
William Poundstone described the game in his 1993 book Prisoner's Dilemma:
Two members of a criminal gang, A and B, are arrested and imprisoned. Each prisoner is in solitary confinement with no means of communication with their partner. The principal charge would lead to a sentence of ten years in prison; however, the police do not have the evidence for a conviction. They plan to sentence both to two years in prison on a lesser charge but offer each prisoner a Faustian bargain: If one of them confesses to the crime of the principal charge, betraying the other, they will be pardoned and free to leave while the other must serve the entirety of the sentence instead of just two years for the lesser charge.
78.144.219.213 (talk) 12:02, 17 June 2023 (UTC)
- I noticed the same when I read it a few days ago but didn't have the time or energy to investigate it or even mention it on the talk page. But I agree completely: there is an unstated premise that if both are convicted they will share the 10-year sentence in the given hypothetical justice system, and it is confusing that this premise only shows up in Outcome D, out of the blue.
- So now I looked further into it, and it turns out that the quote has been messed with. It is not a proper quote from the stated book. The book (William Poundstone: Prisoner's Dilemma, February 1993, ISBN 0-385-41580-X) says this at page 118:
- "A typical contemporary version of the story goes like this: Two members of a criminal gang are arrested and imprisoned. Each prisoner is in solitary confinement with no means of speaking to or exchanging messages with the other. The police admit they don't have enough evidence to convict the pair on the principal charge. They plan to sentence both to a year in prison on a lesser charge. Simultaneously, the police offer each prisoner a Faustian bargain. If he testifies against his partner, he will go free while the partner will get three years in prison on the main charge. Oh, yes, there is a catch ... If both prisoners testify against each other, both will be sentenced to two years in jail."
- So I guess we should either (1) correct the quote and then also correct the text after the quote to match it, or (2) state our own version of the dilemma entirely, without referring to any book, but then I think we could border on something like Wikipedia:No original research. Option (1) seems the most straightforward way to ensure we align with reliable sources.
- We should also make clear that the quote is just a "typical contemporary version" mentioned in that book, because right now, without reading the article lead, the text can give the impression that this book is the original source of the prisoner's dilemma (even though the book of course makes no such claim; on the contrary, it clearly states the origin as Merrill Flood and Melvin Dresher in 1950, with the name coming from Albert W. Tucker). --Jhertel (talk) 11:43, 18 June 2023 (UTC)
- I have adjusted the article now with the correct quote and correction of the actual numbers mentioned in that specific version of the story. I also changed outcomes ABCD to 1234 to lessen confusion with prisoners A and B. Jhertel (talk) 13:33, 18 June 2023 (UTC)
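For reference, here is a small sketch encoding the sentences from the Poundstone version quoted above; the encoding itself is just an illustration. It checks that testifying leaves each prisoner better off whatever the other does, which is what makes it the dominant choice.

```python
# Sentences (in years) from the Poundstone version quoted above.
# years[(a_move, b_move)] gives (A's sentence, B's sentence).
SILENT, TESTIFY = "stay silent", "testify"
years = {
    (SILENT, SILENT): (1, 1),   # both convicted only on the lesser charge
    (SILENT, TESTIFY): (3, 0),  # A serves the main charge, B goes free
    (TESTIFY, SILENT): (0, 3),
    (TESTIFY, TESTIFY): (2, 2), # the "catch": both get two years
}

# Testifying gives prisoner A a shorter sentence whichever move B makes,
# so it is a dominant strategy (and symmetrically for B).
for b_move in (SILENT, TESTIFY):
    assert years[(TESTIFY, b_move)][0] < years[(SILENT, b_move)][0]
```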
I can't reconcile these 2 statements in the article
I can't reconcile these 2 statements:
"For cooperation to emerge between rational players, the number of rounds must be unknown or infinite."
...
"Another aspect of the iterated prisoner's dilemma is that this tacit agreement between players has always been established successfully even when the number of iterations is made public to both sides."
As an aside, I see the principles discussed as similar to the concept of Metagame / Metagame analysis Mathiastck (talk) 21:37, 20 July 2023 (UTC)
- Yeah, I think Metagame belongs in a "see also" for Prisoner's Dilemma; it links back here and provides context:
- "
- Etymology
- The word can be found being used in the context of playing zero-sum games in a publication by the Mental Health Research Institute in 1956. It is alternately claimed that the first known use of the term was in Nigel Howard's book Paradoxes of Rationality: Theory of Metagames and Political Behavior, published in 1971, where Howard used the term in his analysis of the Cold War political landscape using a variation of the Prisoner's Dilemma; however, Howard had already used the term in Metagame Analysis in Political Problems, published in 1966. In 1967, the word appeared in a study by Russell Lincoln Ackoff and in the Bulletin of the Operations Research Society of America.
- " Mathiastck (talk) 21:40, 20 July 2023 (UTC)
what person?
For example, if a population consists entirely of players who always defect, except for one who follows the tit-for-tat strategy, that person is at a slight disadvantage because of the loss on the first turn. In such a population, the optimal strategy is to defect every time. 107.77.195.33 (talk) 04:45, 24 November 2023 (UTC)
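For what it's worth, here is a minimal simulation sketch of the situation quoted above; the payoff values are conventional assumptions, not figures from the article. A lone tit-for-tat player among unconditional defectors falls behind only through the first-round loss, after which it defects like everyone else.

```python
# Illustrative payoffs (T=5, R=3, P=1, S=0 assumed): one tit-for-tat player in
# a population of always-defectors scores slightly worse than the defectors do
# against each other, purely because of the first-round loss.

C, D = "C", "D"
payoff = {(C, C): 3, (C, D): 0, (D, C): 5, (D, D): 1}  # row player's score

def play(strategy_a, strategy_b, rounds=10):
    """Total scores over repeated play; a strategy maps the opponent's
    previous move (None on the first round) to a move."""
    score_a = score_b = 0
    last_a = last_b = None
    for _ in range(rounds):
        a, b = strategy_a(last_b), strategy_b(last_a)
        score_a += payoff[(a, b)]
        score_b += payoff[(b, a)]
        last_a, last_b = a, b
    return score_a, score_b

def tit_for_tat(opponent_last):
    return C if opponent_last is None else opponent_last

def always_defect(opponent_last):
    return D

print(play(tit_for_tat, always_defect))   # (9, 14): behind by the first-round loss
print(play(always_defect, always_defect)) # (10, 10)
```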