Prisoners’ Dilemma Research Paper


The Prisoners’ Dilemma is the formal structure of a common sort of collective action problem. It is also a vivid scenario embodying that structure. It offers a useful framework for analyzing interaction where there are certain conflicts of interest.

1. Two Agents, One Meeting

Adam and Eve have been arrested. The District Attorney (DA) now tells them this. They can either talk or keep silent. If they both talk, they will each get a 10-year sentence. If one of them talks (confessing for both) and the other does not, the one who talked will go free; the other will get a 20-year sentence. If they both keep silent, each will get a one-year sentence on some trumped-up charge. The DA lets them discuss it. Then he puts them in separate cells and goes to take their pleas.

Say that neither Adam nor Eve cares about how the other makes out, and that neither thinks he or she will meet the other again. Each considers only the length of his or her own sentence. For each, talking (D) dominates keeping silent (C); that is, whatever Eve will do, Adam prefers the outcome of his talking to that of his not talking, and correspondingly for Eve. So if both Adam and Eve are rational, both of them will talk. But since each prefers the outcome of C,C (both of them silent) to that of D,D, both will be sorry for it.
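To make the dominance point concrete, here is a minimal sketch in Python (not part of the original scenario), using the sentence lengths from the story; shorter sentences are better for each prisoner, and talking (D) turns out to dominate silence (C) even though both prefer the all-silent outcome to the all-talk one.

```python
# A minimal sketch of the one-shot game, using the sentence lengths from the
# story (years in prison, so smaller numbers are better for each player).
# Outcomes are indexed by (Adam's move, Eve's move); C = keep silent, D = talk.
sentences = {
    ("C", "C"): (1, 1),    # both silent: one year each
    ("C", "D"): (20, 0),   # Adam silent, Eve talks: Adam 20 years, Eve free
    ("D", "C"): (0, 20),   # Adam talks, Eve silent
    ("D", "D"): (10, 10),  # both talk: ten years each
}

def dominates(move, other_move, player):
    """True if `move` gives `player` a shorter sentence than `other_move`
    against every possible move of the opponent."""
    for opp in ("C", "D"):
        profile_a = (move, opp) if player == 0 else (opp, move)
        profile_b = (other_move, opp) if player == 0 else (opp, other_move)
        if sentences[profile_a][player] >= sentences[profile_b][player]:
            return False
    return True

# D strictly dominates C for both players ...
print(dominates("D", "C", 0), dominates("D", "C", 1))   # True True
# ... yet both prefer the (C, C) outcome to (D, D).
print(sentences[("C", "C")], sentences[("D", "D")])     # (1, 1) vs (10, 10)
```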

This is the familiar prisoners’ scenario, first presented by Albert Tucker in 1950. The story speaks of two people as agents, but the structure of the interaction allows the agents to be teams or nations—some of the earliest applications were to the Cold War arms race (on these matters, see Poundstone 1992). The arms-race story is meant to demonstrate that, in certain conflicts of interest where there is also some common interest, rational agents won’t cooperate even though both would be better off if they did than if they did not.

2. Two Agents, Many Meetings

Where the agents are rational and they think they won’t meet again, there is no way out for them. Neither party will cooperate (C). Both will defect (D), and both will be sorry. But say that they think they will meet again, that their present interaction is only the first of many just like it. (In place of the possible jail sentences, let the payoffs be sums of money reflecting the same ranking of the outcomes.) Here it may seem that the prospect each faces of having to live with the other’s resentment ought to deter defection. Still, it often is argued that, where the number of rounds is finite and known to both of the agents, if they both are rational (and think each other to be rational … ), both will defect from the start to the finish.

The argument first appeared in Luce and Raiffa (1957). Suppose that the number of rounds is known by both Adam and Eve to be n, and that they both are rational. At the nth round, each will know that there will be no further meetings, no need to guard against penalties, so they both will defect. Let Adam think that Eve is rational—let him think this throughout. He will then think in the (n – 1)st round that Eve will defect in the nth round whatever he does in the (n – 1)st, that his cooperating in the (n – 1)st wouldn’t be rewarded by Eve in the nth. He will therefore defect in the (n – 1)st round, and Eve, thinking likewise about Adam, will too. The same in round n – 2: each expecting the other to defect in the round that follows, each will here defect, though we must now also assume that Adam thinks that Eve thinks him rational—let him think this too throughout—and that Eve thinks he thinks her rational. So we move stepwise back to round 1 (both agents defecting all the way), though with a heavier load of assumptions at each preceding stage.
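The step-by-step reasoning can be mimicked in a short sketch. The per-round payoffs T > R > P > S below are illustrative stand-ins (the text fixes only the ranking): working back from the last round, if both sides will defect in every later round no matter what happens now, then defecting now is strictly better at each stage, so defection propagates all the way back to round 1.

```python
# A sketch of the backward-induction reasoning with illustrative payoffs
# T > R > P > S (temptation, reward, punishment, sucker's payoff).
T, R, P, S = 5, 3, 1, 0
n = 10                           # number of rounds, known to both agents

future = 0                       # value of the rounds after the last one
for k in range(n, 0, -1):        # reason backwards: rounds n, n-1, ..., 1
    # If the other side defects at round k whatever happens (and mutual
    # defection follows in all later rounds), each player compares:
    value_if_defect = P + future       # defect now against a defector
    value_if_cooperate = S + future    # cooperate now against a defector
    assert value_if_defect > value_if_cooperate
    future = value_if_defect           # so each defects, and the value rolls back
print("Under IMB both defect in all", n, "rounds; each gets", future)
```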

This backward induction rests on assumptions about these people’s beliefs about each other. It calls for supposing that, from the start to the finish, Adam believes Eve to be rational, believes she believes him to be rational, believes she believes he believes her to be rational, etc., and so too with the names transposed— call this Iterated Mutual Belief, or IMB. Some writers have noted that IMB isn’t credible (Basu 1977, Bicchieri 1989, Pettit and Sugden 1989). Where in fact it fails, rational people might cooperate in at least some of the rounds; they need not defect throughout. Both parties may thus want it to fail, and one or the other may act in some way he thinks would get it to fail.

What sort of action would do that? If either cooperated in any round, the other would have to make sense of that, and this alone would suffice. Let Adam cooperate at n – 1. Then Eve will either think (a) that Adam isn’t rational (that he thinks this best for him however Eve responds to it) or (b) that he thinks she isn’t rational (that he expects her to respond by cooperating at n). If he cooperates at n – 2, she will think (a) or (b) or (c), that he thinks she thinks he isn’t rational (that he expects her to respond by cooperating at n – 1 in the hope of getting him to cooperate at n), etc. Whether Eve thinks (a), (b), or (c) (or the next-level (d) at n – 3), IMB will have failed.

But would it ever be rational for Adam or Eve to cooperate? More generally, how should two rational people act in a sequence of Prisoners’ Dilemmas (the number of rounds being known)? The answer depends on what each thinks the other party would do. Let B be the strategy of starting with C and then tit-for-tatting, doing in every subsequent round what the other did just before. Also let Bx be starting with C, then tit-for-tatting in all but the last x rounds, and then defecting in those. It would be rational for Adam to adopt Bx if he thought Eve will defect in round 1 but would then adopt B, and that would be rational for Eve if she took Adam’s cooperating in round 1 to mean that he was pursuing Bx-2. There may be many such pairs of courses. In each, both Adam and Eve cooperate in some (or many or almost all) of the rounds, and what they are doing is rational. For each person’s course is the best response to what that person thinks is the other’s. But they still will defect in some (perhaps in many) rounds.
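A small simulation, with illustrative monetary payoffs T > R > P > S standing in for the ranking given by the story, shows what such strategies produce: two agents both playing a Bx strategy cooperate in every round except the last x.

```python
# A sketch of the strategies discussed above, with illustrative per-round
# payoffs (the text only fixes the ranking T > R > P > S).
T, R, P, S = 5, 3, 1, 0      # temptation, reward, punishment, sucker's payoff

def payoff(my, other):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(my, other)]

def B(history_other, round_no, n):
    """Tit-for-tat: cooperate first, then copy the other's previous move."""
    return "C" if round_no == 0 else history_other[-1]

def Bx(x):
    """Tit-for-tat, except defect in the last x of the n rounds."""
    def strategy(history_other, round_no, n):
        if round_no >= n - x:
            return "D"
        return B(history_other, round_no, n)
    return strategy

def play(strat_a, strat_e, n):
    hist_a, hist_e, total_a, total_e = [], [], 0, 0
    for t in range(n):
        a, e = strat_a(hist_e, t, n), strat_e(hist_a, t, n)
        total_a += payoff(a, e)
        total_e += payoff(e, a)
        hist_a.append(a)
        hist_e.append(e)
    return total_a, total_e

# Both playing Bx with x = 2 over ten rounds: cooperation in rounds 1-8,
# mutual defection in the last two.
print(play(Bx(2), Bx(2), n=10))     # (26, 26)
```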

3. Two Agents, Equilibria

Facing a sequence of interactions, people seldom know how many there will be. So let us turn to cases in which their number is unknown to the agents. Here the backward-induction argument cannot get a foothold. We cannot begin with the round both parties know will be the last—none is here known to be last. But neither can we argue for the possibility of cooperation as we just did, for that too assumed that the agents reason backwards from some known final round. In game-theoretic terms, a Prisoners’ Dilemma is a game. Where two people consider what strategy of action they should follow in some sequence of such games, they are facing what is called a Prisoners’ Dilemma supergame. We are here speaking of open-ended supergames of this sort.

The fullest treatment is Taylor’s (1987). Let each agent have a number of open-ended strategy options. If he is rational, he will choose among them in light of how he ranks the outcomes of their different pairings with the other agent’s options, every such outcome being the sequence of the results of each agent’s acting in each round as his strategy requires. Write each outcome of, say, C,C as [C,C]. The outcome of Adam’s taking B where Eve does too (of both starting with C and then tit-for-tatting) is thus [C,C], [C,C], [C,C] … . This is also the outcome of their both cooperating every time—of their both taking C. Let B′ be starting with D and then tit-for-tatting; where Adam takes B and Eve takes B′, the outcome is [C,D], [D,C], [C,D] … . Where Adam takes D (defecting each time) and Eve takes B, the outcome is [D,C], [D,D], [D,D] … .

How Adam ranks these outcomes depends on how he values future benefits and costs. The more the future now matters to him—the smaller his discount rate—the more will it figure in his rankings. (If 1 − d is Adam’s discount rate, where 0 ≤ d ≤ 1, he values a benefit r rounds in the future at d^r times its worth to him now.) Where the future matters enough, he will prefer [C,C], [C,C], [C,C] … to [D,C], [D,D], [D,D] …, the prospect of the long run of [D,D]’s offsetting the appeal of the initial [D,C]. Thus if the future matters enough, he will prefer his taking B where Eve does too to his taking D where Eve is taking B.
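The comparison can be made explicit in a short sketch. The per-round payoffs T > R > P > S and the discount factors below are illustrative assumptions; the calculation simply values a payoff r rounds ahead at d^r times its present worth, as in the text.

```python
# A sketch of the discounting comparison, with illustrative per-round payoffs
# T > R > P > S and discount factor d (so 1 - d is the discount rate): a payoff
# r rounds ahead is worth d**r times its present value.
T, R, P, S = 5, 3, 1, 0

def discounted_value(prefix, tail, d):
    """Present value of a stream with a finite prefix followed by a constant
    per-round payoff `tail` forever after."""
    value = sum(x * d**r for r, x in enumerate(prefix))
    value += tail * d**len(prefix) / (1 - d)    # geometric tail
    return value

# [C,C], [C,C], ... forever: R every round.
# [D,C] once, then [D,D] forever: T once, then P every round.
for d in (0.2, 0.9):
    print(d,
          discounted_value([], R, d),      # both take B (or both take C)
          discounted_value([T], P, d))     # take D while the other takes B
# With d = 0.9 the all-cooperation stream is worth more (30.0 vs 14.0);
# with d = 0.2 the one-shot temptation wins (3.75 vs 5.25).
```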

If he is rational, what strategy will he follow? A supergame is a game, and the central concept of game theory is that of (Nash) equilibria. An equilibrium in a two-person game is a pair of options, each a best response to the other. The idea is that, if Adam knows the situation (including the operative discount rates), he will choose a strategy in an equilibrium pair. What if there are several equilibria? Call an equilibrium admissible if the outcome of no other equilibrium is either preferred to it by both agents or preferred by one of them while the other is indifferent. Taylor suggests that a rational person will choose an option in an admissible equilibrium pair.

He notes that several equilibria are possible in a supergame of the Prisoners’ Dilemma: D,D is always an equilibrium, and some other pairings of strategies are equilibria where the agents’ discount rates meet certain conditions. His basic conclusion is that where the discount rates of both agents are sufficiently small, B,B and D,D are equilibria and all other strategy pairs he thinks would be considered are not. Since only B,B is admissible (both agents prefer its outcome to that of D,D), each of the agents will choose B. It follows that, where their discount rates are sufficiently small, both agents will cooperate in every round.
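Taylor’s conclusion can be checked numerically within a small illustrative pool of strategies. The pool below (just B, B′, C, and D), the per-round payoffs, and the discount factor are assumptions made for the sketch, not Taylor’s own setup: with the future mattering enough, the only pairs in which each strategy is a best response to the other are B,B and D,D, and of these only B,B is admissible.

```python
# A sketch of the equilibrium check within an assumed pool {B, B', C, D},
# using illustrative payoffs T > R > P > S and discount factor d.
T, R, P, S = 5, 3, 1, 0
d = 0.9         # the future matters: a low discount rate
HORIZON = 500   # long truncation standing in for the open-ended supergame

def stage(a, b):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

def B(hist_other):  return "C" if not hist_other else hist_other[-1]   # tit-for-tat
def Bp(hist_other): return "D" if not hist_other else hist_other[-1]   # B': D first, then tit-for-tat
def C(hist_other):  return "C"                                         # cooperate always
def D(hist_other):  return "D"                                         # defect always

POOL = {"B": B, "B'": Bp, "C": C, "D": D}

def value(f, g):
    """Discounted value to the player using f against a player using g."""
    ha, hb, total = [], [], 0.0
    for t in range(HORIZON):
        a, b = f(hb), g(ha)
        total += (d ** t) * stage(a, b)
        ha.append(a); hb.append(b)
    return total

def equilibria(pool):
    eqs = []
    for na, fa in pool.items():
        for ne, fe in pool.items():
            best_a = max(value(h, fe) for h in pool.values())
            best_e = max(value(h, fa) for h in pool.values())
            if value(fa, fe) >= best_a - 1e-9 and value(fe, fa) >= best_e - 1e-9:
                eqs.append((na, ne))
    return eqs

print(equilibria(POOL))   # [('B', 'B'), ('D', 'D')]
# Only (B, B) is admissible: both agents prefer its outcome to that of (D, D).
```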

This last may overstate the case. Whether it is rational to cooperate in every round depends not only on the discount rates but also on the strategies considered. Where only B, B′, C, D, and certain others are in view, Taylor’s conclusion holds. But still others might come up. Let the agents consider Em (being an exploiter): taking B only after defecting m times while the other cooperated. Let them also consider Pm (being patient): cooperating m times and then taking B. The Em,Pm pair is an admissible equilibrium (where the future matters enough to the patient)—if Adam followed Em and Eve followed Pm, they would both be rational. But Adam would be exploiting Eve in each of the first m rounds.
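A short sketch of the exploiter and patient strategies shows the pattern (the helper names Em and Pm below mirror the text; the round count is arbitrary): Adam defects while Eve cooperates in each of the first m rounds, after which both cooperate for good.

```python
# A sketch of the exploiter/patient pair discussed above.
def Em(m):
    """Exploiter: defect the first m rounds, then take B (tit-for-tat)."""
    def strategy(history_other, t):
        if t < m:
            return "D"
        if t == m:
            return "C"             # B's opening move
        return history_other[-1]   # then copy the other's last move
    return strategy

def Pm(m):
    """Patient: cooperate the first m rounds, then take B (tit-for-tat)."""
    def strategy(history_other, t):
        return "C" if t <= m else history_other[-1]
    return strategy

def play(strat_a, strat_e, rounds):
    hist_a, hist_e = [], []
    for t in range(rounds):
        a, e = strat_a(hist_e, t), strat_e(hist_a, t)
        hist_a.append(a); hist_e.append(e)
    return list(zip(hist_a, hist_e))

print(play(Em(3), Pm(3), rounds=8))
# [('D','C'), ('D','C'), ('D','C'), ('C','C'), ('C','C'), ...]:
# Adam exploits Eve in each of the first m rounds, then both cooperate.
```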

4. Many-Person Interactions

Say that there are many people each of whom must either cooperate (pay their dues, pull their weight, etc.) or defect (not pay, not pull). Each would be better off defecting, whatever the others did. But if all defected, or more than a certain number, all would be worse off than they would be if they all had cooperated. Here we have a many-person Prisoners’ Dilemma.

A frequently cited scenario is Hardin’s (1968) commons story. Each farmer in the village sends his cows to graze on the just barely adequate commons. Each knows that his sending out a new cow would degrade that commons. Since each knows too that his share of the cost would be less than the benefit to him of having an extra cow, each will send out another cow and the commons will collapse. This is the logic of the underprotection (and underprovision) of public goods, of the free rider problem, the commons (the oceans, the forests, clean air) being the public good, the overgrazing (overfishing, polluting) being free riding.
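The arithmetic behind the commons story can be put in a few lines; the numbers below are invented purely for illustration. Each farmer compares the full benefit of an extra cow with only his own share of the damage, so each sends the cow out, even though the damage summed over the village exceeds the benefit.

```python
# A toy version of the commons arithmetic with illustrative numbers: each extra
# cow is worth `benefit` to its owner but imposes `damage` on the commons,
# shared equally by all `n` farmers.
n, benefit, damage = 100, 10.0, 50.0

private_gain = benefit - damage / n     # what the individual farmer sees
social_gain  = benefit - damage         # what the village as a whole gets

print(private_gain)   # 9.5   > 0, so each farmer sends out another cow
print(social_gain)    # -40.0 < 0, so all are worse off when everyone does
```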

Refinements are possible here. In a many-person Prisoners’ Dilemma, each agent would be better off defecting than cooperating, whatever the others did. Each would also be better off if all cooperated than if all defected. But say that there is some lone defector who exploits all the others. Where these others remain better off than they would be if all defected, the defector is a mere nuisance—he is a free rider proper. Where he leaves some others worse off than they would be if all defected, the defector is injurious—he is a foul dealer. (For this distinction, see Pettit 1986.)

Let all the parties expect to be facing the same sort of cases many times over and let each consider what to do in the whole sequence of them. They are then facing a many-person Prisoners’ Dilemma supergame. Much has been written about this (see Taylor 1987 for references). It appears that here, as above, cooperating in each round may be rational. This too is argued by Taylor, who shows that everyone’s cooperating in round 1 and then also in all the rest if each of the others cooperated just before is an admissible equilibrium if each agent’s discount rate is sufficiently small. But so is a lone defector’s taking Em (moving to B only after defecting m times while all the others cooperated) and the others taking Pm – the conditional cooperation after round m being dependent on everyone’s cooperating in the round just before. So here too rationality allows for extensive exploitation.

The question of the rationality of cooperation in such many-person contexts has some resonance in political theory. The claim that it isn’t rational (and so can’t be expected) provides the principal backing for the idea that people need a strong governing state. Hobbes and Hume based their case for authority, for a Sovereign, on this, and Hardin argues for strict enforcement of access rights to commons (for limiting procreation and entry into wealthy countries). Taylor uses his proof to the contrary to argue for the viability of voluntarism and for communal rule. So some large issues are involved.

5. Evolution And Stability

The above has been about questions of the rationality of various strategies. There is also a second sort of question. How well in the long run would various strategies fare in different contexts of other strategies? Which (if any) would gain more adherents—which would evolve? And which (if any), if pursued by all, would resist invasion by others—which would then be stable? Much of the recent discussion derives from Axelrod (1984), a study relating the Prisoners’ Dilemma to current work in evolutionary biology.

Axelrod ran computer simulations of two round-robin tournaments of two-agent supergames, each supergame pairing a strategy with either itself or some other in the pool. Strategy B won both tournaments, and Axelrod notes that it would have won too in many other contexts (other strategy pools). Suppose that a tournament were repeated many times, the agents in each a new generation. Letting the change in the numbers of those pursuing the different strategies in each generation be in proportion to the relative fitness (the yield divided by the average yield) of those who pursued them in the one just before, Axelrod also ran a simulation of a many-fold repetition of tournaments. He found that B was the strategy pursued most often in the end. Conditional cooperation can thus evolve.
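The following sketch is in the spirit of Axelrod’s simulations rather than a reconstruction of them: a round-robin tournament over a small, assumed strategy pool, repeated across generations, with each strategy’s share growing in proportion to its relative fitness (its score divided by the average score). Even in this tiny pool, B (tit-for-tat) ends up with the largest share.

```python
# A sketch of repeated tournaments with a proportional-fitness dynamic, over an
# assumed three-strategy pool and illustrative payoffs T > R > P > S.
T, R, P, S = 5, 3, 1, 0
def payoff(a, b):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

def tit_for_tat(hist_other): return "C" if not hist_other else hist_other[-1]
def all_d(hist_other):       return "D"
def all_c(hist_other):       return "C"

STRATEGIES = {"B (tit-for-tat)": tit_for_tat, "All-D": all_d, "All-C": all_c}

def match(f, g, rounds=200):
    """Total score of f in a repeated game against g."""
    ha, hb, score_f = [], [], 0
    for _ in range(rounds):
        a, b = f(hb), g(ha)
        score_f += payoff(a, b)
        ha.append(a); hb.append(b)
    return score_f

def tournament(shares):
    """Average score of each strategy, weighting opponents by their shares."""
    return {name_f: sum(shares[name_g] * match(f, g)
                        for name_g, g in STRATEGIES.items())
            for name_f, f in STRATEGIES.items()}

shares = {name: 1 / len(STRATEGIES) for name in STRATEGIES}
for generation in range(30):
    scores = tournament(shares)
    mean = sum(shares[n] * scores[n] for n in STRATEGIES)
    shares = {n: shares[n] * scores[n] / mean for n in STRATEGIES}
print(shares)   # B (tit-for-tat) ends up with the largest share
```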

Would any strategy, if pursued by all, resist an invasion by others? Would any strategy be stable? Axelrod held that B would be stable (if the future mattered enough). This has been challenged by Boyd and Lorberbaum (1987), who hold that no strategy is ever stable. Which of these claims is correct?

Resistance to invasion might be weak, the invaders being kept from multiplying. Or it might be strong, their being pushed to extinction. Biologists speak only of monotone dynamics, in which greater fitness implies faster increase in numbers. Under such a dynamic, no strategy is strongly stable: none can strongly resist invasion. Say that the natives now pursue B. Let B2 be tit-for-two-tats, that is, starting with C and then cooperating unless the other defected in both of the two preceding rounds. B2 is a neutral mutant of B: it yields exactly what B yields in all interactions with either B or B2. Since B can’t gain on B2, can’t strongly resist an invasion by it, it isn’t strongly stable. And neither is any other strategy, for every strategy has neutral mutants.
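The neutral-mutant point is easy to verify in a few lines (payoffs again illustrative): against either B or B2, the two strategies make exactly the same moves and so earn exactly the same scores, which is why B cannot strongly resist an invasion by B2.

```python
# A sketch showing that B2 (tit-for-two-tats) is a neutral mutant of B
# (tit-for-tat); payoffs T > R > P > S are illustrative.
T, R, P, S = 5, 3, 1, 0
def payoff(a, b):
    return {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}[(a, b)]

def B(hist_other):
    return "C" if not hist_other else hist_other[-1]

def B2(hist_other):
    # Defect only if the other defected in both of the two preceding rounds.
    return "D" if hist_other[-2:] == ["D", "D"] else "C"

def score(f, g, rounds=100):
    ha, hb, total = [], [], 0
    for _ in range(rounds):
        a, b = f(hb), g(ha)
        total += payoff(a, b)
        ha.append(a); hb.append(b)
    return total

# B and B2 fare exactly the same against either B or B2.
print(score(B, B), score(B2, B), score(B, B2), score(B2, B2))   # all equal (300)
```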

What about weak stability? Let us allow for heterogeneous invasions, not all the invaders pursuing the same strategy, and let us allow for (monotone) dynamics other than that of proportional fitness. Say that B is the native strategy, that some of the invaders follow B2 and the rest B′, and that the dynamic is that of imitating the preceding tournament’s winner. B and B2 do well against themselves and against each other, B2 doing slightly worse against B′, against which B does badly. B′ does badly against B, very badly against itself, and slightly better than B against B2. B2 will win the first tournament, and having won, will be imitated and become the only strategy followed. (It in turn will be vulnerable to the alternation of cooperation and defection!) So B isn’t weakly stable under the imitation dynamic, and neither is any other strategy, for there always is some other strategy, inept (like B′) against itself, against which certain neutral-mutant invaders do better than the natives.

No strategy is strongly stable under any monotone dynamic, and (allowing for heterogeneous invasions) none is weakly stable under every such dynamic. Thus Boyd and Lorberbaum are partly right. Still, it can be shown that B is weakly stable under the proportional-fitness dynamic: it is (weakly) stable in that special context. So Axelrod is partly right too. (These results are in Bendor and Swistak 1998.)

The basic thesis here is that greater fitness implies faster increase in numbers. This is true of the genes that render their carriers fitter to reproduce. But are we here speaking of genes? Are there genes (or alleles) for B’ing and for B2’ing and the like? If not, the thesis must refer to the numbers of people X’ing or to the numbers of replicas of the X’ing-meme (for memes, see Dawkins 1976 and Dennett 1995). Increases in these are independent of reproductive fitness. They depend on imitation (by progeny or by others), and people’s reproductive fitness needn’t increase the number of their imitators.

What sort of fitness promotes imitation? There may be no single answer, for sets of memes co-adapt to their situations. ‘Natural selection among genes chooses those that prosper in the presence of certain other genes … [and] gene pools come to consist of genes that do well in each others’ company … .’ (Dawkins 1980, p. 354). The same holds for selection among memes. So the prospects of X’ing may hinge on the support of co-adapting memes, among them the agents’ ideologies, their concepts of morality. In the context of some common ideologies of behavior toward others, B’ing may be imitated. In the context of more patient ideologies, B2’ing or B3’ing or Pm’ing may be imitated. Where it goes along with an exploiter ideology, even Em’ing may be imitated. What fits a strategy for imitation depends (at least partly) on the co-evolving ideology. Whether or not this is good news depends on the ideology.

Bibliography:

  1. Axelrod R M 1984 The Evolution of Cooperation. Basic Books, New York
  2. Basu K 1977 Information and strategy in iterated Prisoner’s Dilemma. Theory and Decision 8: 293–8
  3. Bendor J, Swistak P 1998 Evolutionary equilibria: Characterization theorems and their implications. Theory and Decision 45: 99–159
  4. Bicchieri C 1989 Self-refuting strategies of strategic interaction: A paradox of common knowledge. Erkenntnis 30: 69–85
  5. Boyd R, Lorberbaum J P 1987 No pure strategy is evolutionarily stable in the repeated Prisoner’s Dilemma game. Nature 327: 58–9
  6. Dawkins R 1976 The Selfish Gene. Oxford University Press, Oxford, UK
  7. Dawkins R 1980 Good strategy or evolutionarily stable strategy? In: Barlow G W, Silverberg J (eds.) Sociobiology: Beyond Nature/Nurture? Westview Press, Boulder, CO
  8. Dennett D C 1995 Darwin’s Dangerous Idea. Simon & Schuster, New York
  9. Hardin G 1968 The tragedy of the commons. Science 162: 1243–8
  10. Luce R D, Raiffa H 1957 Games and Decisions. Wiley, New York
  11. Pettit P 1986 Free riding and foul dealing. Journal of Philosophy 83: 361–89
  12. Pettit P, Sugden R 1989 The backward induction paradox. Journal of Philosophy 86: 169–82
  13. Poundstone W 1992 Prisoner’s Dilemma, 1st edn. Doubleday, New York
  14. Taylor M 1987 The Possibility of Cooperation. Cambridge University Press, Cambridge, UK