1. Introduction
A considerable body of evidence shows that the predictions of the standard equilibrium concepts in game theory are not borne out by a significant fraction of experimental subjects. See Camerer [1] for a book-length treatment. For prisoners' dilemma games, see Lewis [2], Howard [3], Rapoport [4], Shafir and Tversky [5], Cooper et al. [6], Croson [7], Li and Taplin [8], Acevedo and Krueger [9], Busemeyer [10], Zhong et al. [11], Hristova and Grinberg [12] and Khadjavi and Lange [13]. For oligopoly games, see Fouraker and Siegel [14], Huck et al. [15], Bosch-Domènech and Vriend [16] and Duersch et al. [17]. For public goods games, see Dawes and Thaler [18], Fehr and Gächter [19] and Gächter and Thöni [20]. For voting games, see Quattrone and Tversky [21], Grafstein [22], Forsythe et al. [23], Rassenti et al. [24], Krueger and Acevedo [25], Koudenburg et al. [26], Requate and Waichman [27] and Delavande and Manski [28]. For p-beauty contests, see Moulin [29], Nagel [30] and de Sousa et al. [31]. For auctions, see Ivanov [32]. For the hawk-dove game, see Rubinstein and Salant [33]. For the give-some game, see Krueger [34]. For further results on these, and other games, that present a challenge for both mainstream and behavioural game theory, see Lucas et al. [35].
In this paper, we are interested in static games of complete information. Examples include the prisoners’ dilemma, the voting game, the public goods game, the hawk-dove game and oligopoly games.
Let us briefly note the nature of the violations of the standard equilibrium concepts in some static games of interest. A more detailed treatment is given in the main body of the paper. More than half the subjects in the prisoners’ dilemma game play the dominated action “cooperate”. Voters vote in elections when it is clear to them that they will not be pivotal. Under traditional preferences, if there is a cost to voting, the act of voting is dominated by not-voting. The dominant action in public goods games is to free-ride. Yet, we can elicit near first-best levels of contributions with like-minded players.
The evidence from the publications cited above suggests the following stylized facts that any reasonable theory of static games may aspire to explain. We consider the empirical evidence behind these claims in more detail in the main body of the paper.
- S1.
A significant fraction of players behave in a manner that is consistent with the predictions of classical game theory. For instance, many players defect in a prisoners’ dilemma game; many people abstain from voting; and many people do not contribute at all in public goods games.
- S2.
An even larger fraction of players violates the predictions of classical game theory, and they often seem to behave in an apparently non-strategic manner. For instance, the action "cooperate" in a prisoners' dilemma game is a dominated action, yet when players jointly cooperate, they obtain higher payoffs. These findings are fairly robust even in the other static games that we do not consider in this paper.
- S3.
In environments where players think that they are playing with other like-minded players, evidence shows that they impute diagnostic significance to their own actions when forming beliefs about the actions of others. For instance, those who cooperate (respectively defect) in prisoners’ dilemma games think that the vast majority of other players will also cooperate (respectively defect). Similarly, despite publicly-available information on election polls, those who vote Democrat (respectively Republican) in the U.S. Presidential elections believe that a significant majority of other voters will also vote Democrat (respectively Republican).
Each of the static games that we consider has an exceedingly simple structure. Hence, we believe that the anomalies relative to the predictions of classical game theory are less likely to arise from mistakenly playing the incorrect action. For this reason, our focus is not on behavioural alternatives, such as quantal response equilibrium (QRE) in which players play a noisy best response, but otherwise have consistent beliefs.
The main focus of our paper is on S3, which also allows us to shed light on S1 and S2. There is always considerable uncertainty about what others will do in one-shot games. Think of being in an experiment where you are playing a prisoners' dilemma game, or a voting game, or a public goods game. How do you infer what the other players are likely to play? Considerable evidence, which we shall review later, suggests that people often use evidential reasoning (ER), i.e., they assign diagnostic significance to their own actions in forming beliefs about the actions of other like-minded players. We stress that a player using ER does not believe that his or her actions influence the actions of other players. ER merely influences a player's own belief about which unobserved action other players are likely to take.
ER is best viewed as a heuristic or bias. In the early 1970s, Kahneman and Tversky proposed the heuristics and biases approach. A substantial literature developed subsequently that identified a rich range of heuristics and generated evidence that they are used by human subjects. See, for instance, Kahneman et al. [36] and Kahneman [37]. ER, like these heuristics, is fast and frugal in the use/processing of information and in cognitive requirements. As with all heuristics, players who use ER may find ex post that their initial beliefs were incorrect. ER can lead to the violation of some of the principles of classical decision theory, for example, Savage's sure thing principle (Savage [38]). However, these violations have been well documented; for example, the sure thing principle is violated in the Ellsberg paradox (Ellsberg [39]), which, however, is a non-game-theoretic situation.
Evidence supports the interpretation of evidential reasoning as a heuristic. People who use evidential reasoning are not aware of using it, despite their behaviour being obviously consistent with it. Evidential reasoning appears to arise automatically, rather than through deliberate effort or intention; for example, it does not require awareness. Evidence supporting this view comes from experiments showing that evidential reasoning was not hampered by cognitive load or by the time required to complete an action; see Krueger [40]. Furthermore, other evidence, also reported in Krueger [40], suggests that considerable cognitive effort is required to suspend evidential reasoning. The evidence from Acevedo and Krueger [9] indicates that evidential reasoning applies to human-human interaction, but not to human-non-human interaction. Players using evidential reasoning do not believe that their actions cause the actions of others; their actions merely inform their beliefs about the actions taken by others. Another feature of evidential reasoning is that individuals continue to behave in a self-interested manner.
It might be useful to distinguish between two types of heuristics. The first type involves no violation of the standard assumptions of game theory. These heuristics propose extra conditions that are consistent with the standard assumptions of game theory, but whose aim is to reduce the multiplicity of Nash equilibria. Examples include payoff dominance and risk dominance (Harsanyi and Selten [41]). Note that in some cases, these heuristics are in conflict (in some games, payoff dominance could select one equilibrium, but risk dominance could select another). In fact, the whole program of refinements of Nash equilibrium (van Damme [42]) may be viewed in this light. We may call these conservative heuristics. The second type, which we may call radical heuristics, involves relaxation of some of the standard assumptions of game theory. Examples include Stackelberg reasoning (Colman and Bacharach [43]; Colman et al. [44]). Evidential reasoning is firmly in the group of radical heuristics.
Section 2 gives a formal treatment of evidential reasoning and proposes several concepts that we will find useful in the rest of the paper. An evidential game is simply a game where players use evidential reasoning. An evidential equilibrium is one where each player chooses to optimize given his beliefs about the behaviour of the other players (inferred from his own behaviour in accordance with evidential reasoning). A consistent evidential equilibrium is an evidential equilibrium where beliefs turn out to be correct. Our formulation of evidential reasoning yields causal reasoning, the mode of reasoning assumed in the traditional framework in economics (and, indeed, generally) as a special case. If players use causal reasoning, then a consistent evidential equilibrium corresponds to a Nash equilibrium in the ordinary sense. We introduce the concept of a social projection function (SPF) and give formal definitions of like-mindedness, ingroup and outgroup.
Section 3 argues that evidential reasoning is a useful heuristic rather than a valid method of inference.
Section 4, Section 5 and Section 6 show that evidential reasoning can answer the following questions: Why do people voluntarily contribute to public goods? Why do people vote? Why is there so much cooperation in the prisoners' dilemma game? Section 7 examines the uniqueness of outcomes under evidential reasoning in the context of the Nash demand game. Oligopoly games are considered in Section 8. In Section 9, we argue that causal reasoning cannot adequately explain cooperation in the prisoners' dilemma game. Section 10 concludes.
2. Evidential Equilibrium in Static Games of Complete Information
2.1. Elements of Standard Game Theory
Consider the following standard description of a static game of complete information, G = ⟨N, (A_i)_{i∈N}, (π_i)_{i∈N}⟩. N = {1, 2, …, n} is the set of players. A_i is the set of actions open to player i. We denote a typical member of A_i by a_i. A = A_1 × A_2 × … × A_n gives all possible action profiles of the players. A_{−i} is the set of vectors of actions open to the other players. Denote by S_i the set of probability distributions over the set of actions A_i. We denote a typical element of S_i by s_i and call it a strategy. s_i(a_i) is the probability with which player i plays a_i, so s_i(a_i) ≥ 0 and Σ_{a_i ∈ A_i} s_i(a_i) = 1. In particular, if s_i(a_i) = 1 (hence, s_i(b_i) = 0 for b_i ≠ a_i), then we call s_i a pure strategy, and we identify it with the action a_i.
A profile of strategies of all players is denoted by s ∈ S, where S = S_1 × S_2 × … × S_n is the set of all possible profiles of strategies. A particular profile of strategies of the other players is denoted by s_{−i} ∈ S_{−i}. The payoff to player i is a mapping π_i: S → ℝ. Let π = (π_1, π_2, …, π_n) be the vector of payoffs to all players. Given a strategy profile, s ∈ S, the payoff to player i is π_i(s) = π_i(s_i, s_{−i}). The structure of the game, G, is common knowledge among the players. In an experimental setup, common knowledge can be achieved by a public announcement of G. This is the sense in which this is a game of complete information. However, when each player, i, chooses his strategy, s_i, he does not know the strategies, s_{−i}, that have been, or will be, chosen by the other players. This is the sense in which this is a static game.
Definition 1: s_i* ∈ S_i is a dominant strategy for player i if π_i(s_i*, s_{−i}) ≥ π_i(s_i, s_{−i}) for each s_i ∈ S_i and each s_{−i} ∈ S_{−i}. If π_i(s_i*, s_{−i}) > π_i(s_i, s_{−i}) for each s_i ∈ S_i, s_i ≠ s_i*, and each s_{−i} ∈ S_{−i}, then s_i* is a strictly dominant strategy for player i.
Definition 2 (Nash [46,47]): A strategy profile s* ∈ S is a Nash equilibrium in the game G if, for each i ∈ N, s_i* maximizes π_i(s_i, s_{−i}*) with respect to s_i ∈ S_i, given s_{−i}*, i.e., π_i(s_i*, s_{−i}*) ≥ π_i(s_i, s_{−i}*) for all s_i ∈ S_i.
Note that there is no role for beliefs about the strategies of others in the game G, nor in the definition of a Nash equilibrium (Definition 2). Hence, we augment the game G with a profile of "social projection functions", Φ = (Φ_1, Φ_2, …, Φ_n), that specify the beliefs of players; this is undertaken in Section 2.2, below.
2.2. Social Projection Functions
We would like to define a function that captures the beliefs that a player has about the strategies of the other players, conditional on his own strategy. We will call such a function a social projection function.
Definition 3 (Social projection functions (SPF)): A social projection function for player i is a mapping Φ_i: S_i → S_{−i} that assigns to each strategy, s_i ∈ S_i, for player i, an (n − 1)-vector of strategies for the other players. We write Φ_ij(s_i)(a_j) for the subjective belief of player i that player j plays a_j ∈ A_j, conditional on player i playing s_i. Hence, Φ_ij(s_i)(a_j) ≥ 0 and Σ_{a_j ∈ A_j} Φ_ij(s_i)(a_j) = 1. We may write s_{−i} = Φ_i(s_i) to indicate that s_{−i} is the vector of strategies that player i anticipates that the other players will follow if player i adopts the strategy s_i.
We now define causal reasoning, the mode of reasoning assumed in classical game theory. Then, we define evidential reasoning.
Definition 4 (Causal reasoning): We say that player i uses causal reasoning if Φ_ij(s_i) is independent of s_i for each j ≠ i and each s_i ∈ S_i, i.e., if Φ_i(s_i) = Φ_i(s_i′) for all s_i, s_i′ ∈ S_i.
Definition 5 (Evidential reasoning): We say that player i uses evidential reasoning if it is not necessarily the case that Φ_i(s_i) = Φ_i(s_i′) for all s_i, s_i′ ∈ S_i.
Remark 1: (a) In a static game of complete information, players are uncertain of the actions taken by others. Under evidential reasoning, player i resolves this uncertainty by assigning diagnostic significance to his or her own choice of strategy, s_i, in inferring the strategies of the other players, s_{−i}, using his or her social projection function, Φ_i. For this reason, Definition 5 allows for Φ_i(s_i) to change as s_i changes. However, it is crucially important to realize that there is no causal connection between s_i and s_{−i}. The choice of s_i by player i merely influences that player's belief about the strategies, s_{−i}, of the other players. In particular, players who use evidential reasoning know that their own actions have no causal effects in altering the actions of others when they change their own actions.
(b) An SPF (Definition 3) specifies the beliefs of a player for all possible actions of others, including out-of-equilibrium actions. The beliefs of a player need not turn out to be fulfilled in equilibrium. In this respect, ER is similar to other disequilibrium-in-belief models, such as the level-k model (Example 3, below).
(c) If player i uses causal reasoning as in classical game theory (see Definition 4), then he or she assigns no diagnostic significance to his or her own strategy, s_i, in inferring the strategies, s_{−i}, followed by the other players. Thus, under causal reasoning, Φ_i(s_i) remains fixed as s_i changes. From Definitions 4 and 5, causal reasoning is a special case of evidential reasoning.
In a dynamic game (under causal reasoning), if (say) Player 1 moves first, choosing the strategy s_1, followed by Player 2, who chooses strategy s_2, having observed a realization of s_1, then s_2 may very well depend on s_1. When choosing s_1, Player 1 will take into account the influence of his choice on the future behaviour of Player 2. This should not be confused with evidential reasoning.
Many different types of social projection functions (SPFs) are possible. There are possibly various degrees of like-mindedness. However, a particularly salient SPF is one where a player believes that a like-minded player will play a strategy that is identical to his own. We call this the identity social projection function. This SPF seems important when choices are low dimensional and players play symmetric games. Examples include cooperate or defect in a prisoners' dilemma game, vote Democrat or Republican in U.S. Presidential elections, coordinate or fail to coordinate in coordination games, or play hawk or dove in the hawk-dove game.
Definition 6 (Identity social projection function): Let M ⊆ N be a subset of players. Suppose that all players in M have the same action set, i.e., A_i = A for all i ∈ M. Let Φ_i be the social projection function for player i ∈ M. Recall that Φ_ij(s_i)(a) is the probability that player i assigns to player j playing action a when the strategy of player i is given by s_i. If Φ_ij(s_i)(a) = s_i(a) for all j ∈ M, j ≠ i, and all a ∈ A, then we say that Φ_i is an identity social projection function on M. If M = N, then we say that Φ_i is an identity social projection function.
Example 1 (self-similarity in the hawk-dove game): Consider the following game between two players. If both choose hawk (H), then each gets zero. If both choose dove (D), then each gets two. If one chooses H (the hawk) and the other chooses D (the dove), then the hawk gets three and the dove gets one. These payoffs are summarized by Table 1, where the row player plays D with probability p, while the column player plays D with probability q.

Table 1. A hawk-dove game.

| | D (q) | H (1 − q) |
---|
| D (p) | 2, 2 | 1, 3 |
| H (1 − p) | 3, 1 | 0, 0 |
Under causal reasoning (Definition 4), the row player should use a constant social projection function. However, the experimental results reported by Rubinstein and Salant [33] show that the higher the probability, p, with which the row player plays D, the higher the probability, q, she or he thinks the column player will play D. This can be formalized by the row player adopting the identity social projection function (Definition 6): q = p.
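The following is a minimal numerical sketch (not from the original paper) of the row player's problem in Example 1: it evaluates the perceived expected payoff, using the Table 1 payoffs, when the row player adopts the identity social projection function q = p, and finds the payoff-maximizing p.

```python
# Sketch: the row player's perceived expected payoff in the hawk-dove game of
# Table 1 under the identity social projection function q = p.
# Payoff convention (row payoff): (D,D) -> 2, (D,H) -> 1, (H,D) -> 3, (H,H) -> 0.

def row_payoff(p: float, q: float) -> float:
    """Expected payoff to the row player: row plays D with prob. p, column with prob. q."""
    return 2 * p * q + 1 * p * (1 - q) + 3 * (1 - p) * q + 0 * (1 - p) * (1 - q)

def perceived_payoff_identity_spf(p: float) -> float:
    """Under the identity SPF, the row player believes q = p."""
    return row_payoff(p, q=p)

# The perceived payoff 4p - 2p^2 is increasing on [0, 1], so the optimum is p = 1 (play dove):
best_p = max((i / 100 for i in range(101)), key=perceived_payoff_identity_spf)
print(best_p, perceived_payoff_identity_spf(best_p))   # -> 1.0  2.0
```

This illustrates the self-similarity pattern reported by Rubinstein and Salant [33]: a row player who projects her own mixture onto the column player finds it optimal to play dove.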
2.3. Ingroups, Outgroups and Evidential Reasoning
Players need not impute diagnostic significance to their actions when others are perceived not to be like-minded; the next definition formalizes this idea.
Definition 7. (Ingroups and outgroups): Suppose that players use evidential reasoning.
(a) Player i regards player j (j ≠ i) as an outgroup member if Φ_ij(s_i) is independent of s_i, i.e., if Φ_ij(s_i) = Φ_ij(s_i′) for all s_i, s_i′ ∈ S_i. Otherwise, player i regards player j (j ≠ i) as an ingroup member.
(b) Let M ⊆ N be a non-empty set of players. If every player in M regards every other player in M as an ingroup member, then M is an ingroup.
(c) Let L ⊆ N and M ⊆ N be disjoint non-empty sets of players. Suppose every player in L regards every player in M as an outgroup member. Then, we say that M is an outgroup relative to L.
Remark 2: Player i plays action a_i with probability s_i(a_i) and believes that player j will play action a_j with probability Φ_ij(s_i)(a_j) (the latter is conditional on s_i). Hence, player i believes that the joint probability of a_i and a_j being played is s_i(a_i)Φ_ij(s_i)(a_j). Suppose that player i regards player j as an outgroup member. Then (and only then), Φ_ij(s_i)(a_j) is independent of s_i. Hence, in this case, the joint probability is the product of two terms, the second of which does not vary with player i's own strategy. Thus, if player i regards player j as an outgroup member, then player i believes that the probability with which he or she (player i) plays a_i is independent of the probability that he or she believes j will play a_j. In particular, if player i uses causal reasoning, then he or she regards all others as outgroup members, and hence, he or she believes that his or her actions are independent of the actions of all other players.
Definition 8 (Perfect ingroups): Let M ⊆ N be a subset of players. Suppose that all players in M have the same action set, i.e., A_i = A for all i ∈ M. Let Φ_i be the social projection function for player i ∈ M. If Φ_i is an identity social projection function on M, for each player i ∈ M, then M is a perfect ingroup.
Definition 9 (Evidential game): Consider the static game of complete information, G. Let Φ = (Φ_1, Φ_2, …, Φ_n) be a profile of social projection functions, where Φ_i is the social projection function of player i ∈ N (Definition 3). Then, we denote the game G augmented with the vector of social projection functions, Φ, by Γ = ⟨G, Φ⟩, and we call it an evidential game. We say that players in such a game use evidential reasoning.
Definition 10: If each Φ_i(s_i) is independent of s_i, then we say that Γ is a causal game.
Remark 3: (a) From Definitions 9 and 10, a causal game is a special case of an evidential game.
(b) Suppose Φ_i(s_i) is independent of s_i, for each player, i, so that Γ is a causal game. Γ is still richer than the static game of complete information, G, because Γ incorporates players' beliefs about other players' actions, as given by Φ.
Example 2 (Matching pennies): Consider the matching pennies game. The set of players is N = {1, 2}. The action sets are A_1 = A_2 = {H, T}. Player 1, the row player, plays H and T with respective probabilities p, 1 − p. Player 2, the column player, plays H and T with respective probabilities q, 1 − q. The sets of possible strategies are S_1 = {(p, 1 − p): 0 ≤ p ≤ 1} for Player 1 and S_2 = {(q, 1 − q): 0 ≤ q ≤ 1} for Player 2. If Player 1 plays H (say) and Player 2 plays T, then the payoff is one to Player 1 and −1 to Player 2. For any profile of strategies (p, 1 − p), (q, 1 − q), the payoff functions of the players are π_1(p, q) and π_2(p, q) = −π_1(p, q). The following are examples of social projection functions:

Φ_12(p, 1 − p) = (p, 1 − p),  Φ_21(q, 1 − q) = (r, 1 − r), where r ∈ [0, 1] is a fixed constant.   (2)

According to Equation (2), Player 1, who uses evidential reasoning, believes that if he or she (Player 1) plays H with probability p, then so will Player 2, for any p ∈ [0, 1]. Hence, Player 1 has an identity social projection function. It is critical to note that these are the "beliefs" of Player 1. There is no presumption that these beliefs will turn out to be justified ex post. Player 2, who uses causal reasoning, believes that Player 1 will play H with probability r, whatever strategy, q, Player 2 chooses. Hence, Player 1 regards Player 2 as an ingroup member, but Player 2 regards Player 1 as an outgroup member, so N = {1, 2} fails to be an ingroup. On the other hand, if both players had identity social projection functions, then N = {1, 2} would be an ingroup (in fact, a perfect ingroup). By contrast, if in Equation (2) we had, say, Φ_12(p, 1 − p) = (r, 1 − r) for all p ∈ [0, 1], then both players would exhibit causal reasoning, and this example would become a causal game.
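As a small illustration of Example 2, the sketch below (not part of the original paper) represents the two social projection functions as Python functions and applies the ingroup/outgroup test of Definition 7; the constant belief 0.5 used for Player 2 is only an illustrative placeholder, since any constant corresponds to causal reasoning.

```python
# Sketch: the social projection functions of Example 2 as functions mapping a
# player's own mixed strategy (prob. of H) to a belief about the rival's strategy.
# The constant 0.5 for Player 2 is an assumed placeholder value.

def spf_player1(p: float) -> float:
    """Identity SPF: Player 1 believes Player 2 plays H with the same probability p."""
    return p

def spf_player2(q: float) -> float:
    """Constant SPF: Player 2's belief about Player 1 does not depend on q (causal reasoning)."""
    return 0.5

def regards_as_ingroup(spf, grid=None) -> bool:
    """Definition 7a: the rival is an outgroup member iff the belief about the rival
    is independent of one's own strategy; otherwise the rival is an ingroup member."""
    grid = grid or [k / 10 for k in range(11)]
    beliefs = {round(spf(x), 10) for x in grid}
    return len(beliefs) > 1

print(regards_as_ingroup(spf_player1))  # True: Player 1 regards Player 2 as an ingroup member
print(regards_as_ingroup(spf_player2))  # False: Player 2 regards Player 1 as an outgroup member
```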
Example 3 (An application of the level-k model to the p-beauty contest): A large number of contestants are asked to choose an integer between zero and 100, inclusive. It is announced that the winner is the one who gives an answer closest to p times the average of the choices of all of the other players, where 0 < p < 1. A Level 0 player simply makes a random uniform choice. Thus, if player j is of Level 0, then the action of player j is a uniform random draw from {0, 1, …, 100}, with mean 50. A Level 1 player assumes all other players are Level 0 and, therefore, assumes all other players will choose 50. Thus, if player i is Level 1, his social projection function assigns the choice 50 to every other player, whatever player i himself chooses. Thus, the optimal choice for player i, if i is Level 1, is 50p. A Level 2 player assumes all other players are Level 1 and, therefore, assumes all other players will choose 50p. Thus, if player i is Level 2, his or her social projection function assigns the choice 50p to every other player, whatever player i himself or herself chooses. Thus, the optimal choice for player i, if i is Level 2, is 50p², and so on. Note that all players use causal reasoning, since, for each player, the social projection function is independent of the player's own choice.
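The short sketch below (an illustration we add here, not taken from the paper) computes the level-k choices of Example 3 for an example contest parameter p = 2/3; the parameter value is an assumption, since the specific multiplier does not reproduce in this copy.

```python
# Sketch of the level-k choices in Example 3, assuming the target is p times the
# average of the other players' choices. p = 2/3 is only an example value.

def level_k_choice(k: int, p: float = 2 / 3, level0_mean: float = 50.0) -> float:
    """Choice of a Level k player: 50 * p**k, best-responding to Level k-1 beliefs."""
    return level0_mean * (p ** k)

for k in range(4):
    print(k, round(level_k_choice(k), 2))
# 0 50.0, 1 33.33, 2 22.22, 3 14.81
```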
Intuitively, an "ingroup" is a group of players each of whom believes that the others are like-minded and, hence, would behave in a similar, but not necessarily identical, manner. The literature has typically assumed that players do not use their own actions as diagnostic of the actions of "outgroup" players; see, for instance, Robbins and Krueger [48] and Krueger [40]; our definitions reflect this (see, in particular, Definition 7).
However, recent evidence suggests a more nuanced view that is also consistent with our definitions. Koudenburg et al. [26] show that voters project their own preference for a political party onto non-voters even when they are informed about the poll results for non-voters. Thus, voters may regard non-voters as ingroup members, though in a strict sense, only the set of voters may be thought to form an ingroup (Definition 7).
Riketta and Sacramento [49] cite several references to show that members of an ingroup form beliefs about other members even when they could have no possible information about those members. They find that an ingroup member may have a harmonious (or cooperative) relation with other members. On the other hand, they also find that an ingroup member may be in competition (or conflict) with other members. In the latter case, an ingroup member may believe that the actions of others contrast with his own actions (the contrast effect). The following is an illustration.
Example 4 (the contrast effect): Consider the matching pennies game of Example 2. Instead of Equation (2), consider the following social projection functions:

Φ_12(p, 1 − p) = (1 − p, p),  Φ_21(q, 1 − q) = (1 − q, q).   (3)

According to Equation (3), Player 1 believes that if he or she (Player 1) plays H with (say) probability one, then Player 2 will play H with probability zero; and similarly for Player 2. Both players use evidential reasoning, but exhibit the contrast effect. Each player regards the other as an ingroup member. Hence, N = {1, 2} is an ingroup. However, N fails to be a perfect ingroup because players are not using their identity social projection functions.
2.4. Equilibria
In this section, we propose an appropriate equilibrium concept for static evidential games of complete information. We call this an evidential equilibrium.
Definition 11 (Optimal strategies): An optimal strategy for player i, s_i* ∈ S_i, in the evidential game Γ = ⟨G, Φ⟩ (Definition 9), is one that maximizes the payoff function, π_i(s_i, Φ_i(s_i)), of player i.
Definition 12 (Evidential equilibria): The strategy profile s* ∈ S is an evidential equilibrium of the evidential game Γ if s_i* is an optimal strategy for each i ∈ N (Definition 11).
Definitions 11 and 12 identify an important feature of an evidential equilibrium. In static games of complete information, under uncertainty about what others will do, evidential reasoning converts an essentially strategic situation into a non-strategic problem. This is made possible by treating the social projection function as an essential part of the game. This appears to be consistent with the evidence (see Section 6.3 and Section 9.5). There are no higher order beliefs. Players do not think about strategically exploiting the SPF of the other player. Indeed, the game Γ does not involve any assumptions about the mutual or common knowledge of Φ. Requiring π_i and Φ_i to be continuous functions on compact mixed strategy spaces guarantees that an equilibrium exists in the non-strategic problem.
Note that Definition 12 only requires that a strategy for a player be optimal given his beliefs. However, of course, beliefs may turn out to be wrong, ex post. Ultimately, the choice among models in all science is guided by the evidence. The evidence reviewed above (and below) shows that in static games, beliefs about others often turn out to be incorrect. Even in experiments, where successive rounds of play lead to an improvement in the accuracy of beliefs, one may be interested in explaining the behaviour in early rounds of play where beliefs do not turn out to be correct. Often such behaviour mimics real-life situations in which decision makers do not get repeated or frequent opportunities to make their decisions. Nevertheless, it is of interest to consider the special case where beliefs turn out to be correct, at least in equilibrium. This is the subject of the next two definitions.
Definition 13 (Mutually consistent strategies): A strategy profile s* ∈ S of the evidential game Γ (Definition 9) is a mutually consistent vector of strategies if Φ_i(s_i*) = s_{−i}* for all i ∈ N, i.e., if Φ_ij(s_i*)(a_j) = s_j*(a_j), for all i, j ∈ N, j ≠ i, and all a_j ∈ A_j.
In other words, a strategy profile s* is a mutually consistent vector of strategies if, for all players i, j ∈ N, j ≠ i, and all actions, a_j ∈ A_j, open to player j, the probability that player i believes that player j will play action a_j (given s_i*) is equal to the actual probability with which player j plays a_j.
Definition 14 (Consistent evidential equilibria): A consistent evidential equilibrium of the evidential game Γ is an evidential equilibrium, s* ∈ S, which is also a mutually consistent vector of strategies (Definitions 12 and 13).
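To make Definition 13 concrete, the following is a small sketch (our illustration, not the paper's) of a mutual-consistency check for a two-player game in which each strategy is summarized by the probability of playing the first action.

```python
# Sketch: checking mutual consistency (Definition 13) in a two-player game.
# profile[i] is player i's strategy (a single probability); spfs[i](s_i) is
# player i's belief about the other player's strategy.

def is_mutually_consistent(profile, spfs, tol=1e-9) -> bool:
    """The profile is mutually consistent if each player's belief, evaluated at
    the equilibrium strategy, equals the rival's actual strategy."""
    s1, s2 = profile
    return abs(spfs[0](s1) - s2) < tol and abs(spfs[1](s2) - s1) < tol

identity = lambda s: s        # identity SPF
constant_half = lambda s: 0.5  # causal reasoning with a fixed (assumed) belief

# Both players use the identity SPF and play the same strategy: consistent.
print(is_mutually_consistent((0.7, 0.7), (identity, identity)))       # True
# Player 2 holds a fixed belief of 0.5 while Player 1 actually plays 0.7: not consistent.
print(is_mutually_consistent((0.7, 0.7), (identity, constant_half)))  # False
```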
2.5. Nash Equilibria and Consistent Evidential Equilibria
As one might expect, there is a natural correspondence between Nash equilibria and consistent evidential equilibria. This is formally stated and established by the following proposition.
Proposition 1: (a) Let s* ∈ S be a Nash equilibrium in the static game of complete information, G. Consider the (constant) social projection functions: Φ_i(s_i) = s_{−i}* for all s_i ∈ S_i, i ∈ N. Then, s* is a consistent evidential equilibrium in the evidential game Γ = ⟨G, Φ⟩. Furthermore, Γ is a causal game.
(b) Let s* ∈ S be an evidential equilibrium in the evidential game Γ = ⟨G, Φ⟩, where Φ is the profile of constant social projection functions Φ_i(s_i) = s_{−i}*, i ∈ N (hence, s* is a consistent evidential equilibrium and Γ is a causal game). Then, s* is a Nash equilibrium in the static game of complete information G.
Proof of Proposition 1: (a) Let s* ∈ S be a Nash equilibrium in the static game of complete information, G. Consider the social projection functions: Φ_i(s_i) = s_{−i}*, i ∈ N. Since s* is a Nash equilibrium (Definition 2), it follows that s_i* maximizes π_i(s_i, s_{−i}*) with respect to s_i, given s_{−i}*, for each i ∈ N. Since, by construction, Φ_i(s_i) = s_{−i}*, i ∈ N, it follows that s_i* maximizes π_i(s_i, Φ_i(s_i)) with respect to s_i, for each i ∈ N. Hence, s* is an evidential equilibrium (Definitions 11 and 12) in the evidential game Γ = ⟨G, Φ⟩. Furthermore, since, by construction, Φ_i(s_i*) = s_{−i}*, i ∈ N, it follows that s* is a consistent evidential equilibrium (Definitions 13 and 14). Since Φ is a profile of constant social projection functions, it follows that Γ is a causal game (Definition 10).
(b) Let s* ∈ S be an evidential equilibrium in the evidential game Γ = ⟨G, Φ⟩, where Φ is the profile of constant social projection functions Φ_i(s_i) = s_{−i}*, i ∈ N. Then, s_i* maximizes π_i(s_i, Φ_i(s_i)) with respect to s_i, for each i ∈ N (Definitions 11 and 12). However, Φ_i(s_i) = s_{−i}*, i ∈ N; hence, s_i* maximizes π_i(s_i, s_{−i}*) with respect to s_i, for each i ∈ N. Hence, s* is a Nash equilibrium in the static game of complete information G (Definition 2). ■
5. Why Do People Vote and How Do They Form Beliefs?
Under causal reasoning, any one voter is most unlikely to be pivotal, so nobody should vote. However, then why do so many people vote? This is the voting paradox.
The explanation of voting when a voter uses evidential reasoning is as follows. "If I do not vote for my preferred party, then probably like-minded people will not vote, and my preferred party will lose to the other party. On the other hand, if I decide to vote then, probably, other like-minded people will also make a similar decision and my party has a better chance of winning. So I vote if I wish my party to win, otherwise I do not." In each case, the binary voting decision (vote or not vote) has diagnostic significance in forming beliefs about whether other like-minded people will vote, although it is critical to note that one's action to vote does not cause others to vote. Other possible explanations for voting, for instance, that people vote out of a sense of civic duty, cannot explain several kinds of strategic voting and the variation in voter turnout when an election is believed to be close; see Krueger and Acevedo [25].
As Krueger and Acevedo [25], p. 468, put it: "Compared with a Republican who abstains, for example, a Republican who votes can be more confident that other Republicans vote in large numbers". Quattrone and Tversky [21], Grafstein [22] and Koudenburg et al. [26] show that experimental evidence is strongly supportive of this view.
Delavande and Manski [28] argue that state and national poll information in the U.S. is readily available public knowledge. On the other hand, private knowledge in elections is likely to be very limited. Hence, all individuals should form similar estimates of the winning probabilities of the political parties. In contrast to these expectations, they find strong support for evidential reasoning. Voters who vote assign too high a probability to their preferred party winning the election. Their findings are invariant across males/females, whites/non-whites, educated/non-educated, etc. Hence, and consistent with earlier work, there is a strong possibility that evidential reasoning is hard-wired in humans.
Consider Table 3, which reports survey evidence from successive U.S. Presidential elections that is supportive of the evidential reasoning explanation. Voters who intend to vote Democrat typically assign high probabilities to the Democratic candidate winning. By contrast, voters who intend to vote Republican assign high probabilities to a Republican win. Thus, voters seem to take their own actions as diagnostic of what other like-minded people will do. These findings show that, in some circumstances at least, people behave as if their actions were causal, even when they are merely diagnostic or evidential.

Table 3. US presidential elections. Source: Forsythe et al. [23].
Year | Presidential Candidates | % of Democrat Voters Expecting Democrat Win | % of Republican Voters Expecting Republican Win |
---|
1988 | Dukakis vs. Bush | 51.7 | 94.2 |
1984 | Mondale vs. Reagan | 28.8 | 99.9 |
1980 | Carter vs. Reagan | 87.0 | 80.4 |
1976 | Carter vs. Ford | 84.2 | 80.4 |
1972 | McGovern vs. Nixon | 24.7 | 99.6 |
1968 | Humphrey vs. Nixon | 62.5 | 95.4 |
1964 | Johnson vs. Goldwater | 98.6 | 30.5 |
1960 | Kennedy vs. Nixon | 78.4 | 84.2 |
1956 | Stevenson vs. Eisenhower | 54.6 | 97.6 |
1952 | Stevenson vs. Eisenhower | 81.4 | 85.9 |
Of course, not everyone votes in elections. A possible explanation is that not all voters subscribe to evidential reasoning. The evidence appears to be consistent with a mix of voters; some follow evidential reasoning, while others follow the classical mode of reasoning, i.e., causal reasoning. Voters who do not vote appear to follow causal reasoning, while those who do vote appear to follow evidential reasoning. Estimating the respective fractions of voters who follow evidential and causal reasoning is an interesting and open empirical question, but one that lies outside the scope of our paper. Similar comments also apply to the other experimental games that we consider in which some people cooperate while others do not.
6. Explaining the Prisoners’ Dilemma under Evidential Reasoning
We have two players; hence, N = {1, 2}. Each player has two actions: cooperate (C) or defect (D). Hence, A_1 = A_2 = {C, D}. The payoffs are given in Table 4, below. Therefore, if Player 1 chooses, say, C and Player 2 chooses D, then Player 1 gets zero and Player 2 gets 10.
6.1. The Prisoners’ Dilemma under Causal Reasoning
Each player has a strictly dominant action, D (Definition 1); thus, the unique Nash equilibrium of this game is (D, D) (Definition 2). By contrast, the empirical evidence, reviewed in Section 6.3 below, shows that 50% or more of the outcomes involve the play of C.
6.2. The Prisoners’ Dilemma under Evidential Reasoning
A strategy for player i is entirely determined by the probability p_i with which he or she plays C. His or her expected payoff, π_i, can be found from Table 4:

Table 4. A prisoners' dilemma game.

| | C | D |
---|
| C | | |
| D | | |
We consider the following four cases.
Case 1: Both players use evidential reasoning (Definition 5).
Consider the social projection function (Definition 3) for player i:

Φ_ij(p_i) = p_i, j ≠ i.   (8)

Remark 4: The social projection Function (8) should not be interpreted as saying that by playing C with probability p_i, player i can induce player j ≠ i to play C with probability p_i; indeed, there is no such causal link. Player i does not know what action the other player will take or has taken. Rather, Equation (8) is a heuristic device. Player i may reason as follows: "I would like to cooperate with probability p_i. Since player j is like-minded, I believe he or she will also choose to cooperate with probability p_i, just like me". None of the players attempts to strategically exploit the SPF of the other players. Indeed, there is no requirement in an evidential game that there even be mutual knowledge of the social projection functions.
From Equation (8), we see that both players use evidential reasoning (Definition 5), so Γ is an evidential game (Definition 9). In particular, each player uses his identity social projection function (Definition 6). Together, both players form an ingroup (Definition 7). In fact, N forms a perfect ingroup (Definition 8).
Proposition 2: For the prisoners' dilemma game, Table 4, (C, C) is a consistent evidential equilibrium under the identity social projection Function, Equation (8).
Proof of Proposition 2: Substituting from Equation (8) into Equation (7), we get:
From Equation (9), we see that π_i is maximized when p_i = 1. It follows that C is the unique optimal choice for player i (Definition 11). Hence, (C, C) is the unique evidential equilibrium of this game (Definition 12). Each player expects the other to play C, which turns out to be correct, ex post. Therefore, (C, C) is a mutually consistent vector of strategies (Definition 13). Hence, (C, C) is a consistent evidential equilibrium (Definition 14). ■
In contrast, (C, C) is not a Nash equilibrium of the game (Definition 2). Indeed, (C, C) requires each player to play a strictly dominated strategy. However, (C, C) is Pareto optimal. Note that under evidential reasoning, one does not need repeated game arguments to justify cooperation in the static prisoners' dilemma game. Moreover, this is consistent with the play of the cooperative strategy by a majority of the players (see Section 6.3 below). This suggests that a majority of the players may be using evidential reasoning.
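The following numerical sketch illustrates Case 1 and Proposition 2. From the text, a cooperator facing a defector gets 0 and the defector gets 10; the payoffs for mutual cooperation (5) and mutual defection (1) are assumed values for illustration only, since the entries of Table 4 do not reproduce in this copy.

```python
# Sketch of Case 1: both players use the identity SPF of Equation (8).
# Payoffs to a player: R (both C), S (C vs. D), T (D vs. C), P (both D).
# S = 0 and T = 10 come from the text; R = 5 and P = 1 are assumed values.

R, S, T, P = 5.0, 0.0, 10.0, 1.0

def expected_payoff(p_own: float, p_other: float) -> float:
    """Expected payoff when cooperating with prob. p_own against a rival
    cooperating with prob. p_other (the expected payoff of Equation (7))."""
    return (p_own * p_other * R + p_own * (1 - p_other) * S
            + (1 - p_own) * p_other * T + (1 - p_own) * (1 - p_other) * P)

def perceived_payoff_identity_spf(p: float) -> float:
    """Under Equation (8), the rival is believed to cooperate with the same prob. p."""
    return expected_payoff(p, p)

best_p = max((i / 100 for i in range(101)), key=perceived_payoff_identity_spf)
print(best_p)   # 1.0: full cooperation, as in Proposition 2
# Under causal reasoning with any fixed belief, the payoff is decreasing in p,
# so defection (p = 0) is optimal instead, as in Cases 3 and 4.
```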
Case 2: Player 1 uses evidential reasoning (Definition 5), but Player 2 uses causal reasoning (Definition 4).
In this case, the SPF for each player is given by:
From Equation (10), we see that Player 1 uses evidential reasoning (Definition 5) and, in particular, his or her identity social projection function (Definition 6), as in Case 1 above. On the other hand, from Equation (11), we see that Player 2 uses causal reasoning (Definition 4) and, in particular, mistakenly assumes that Player 1 will always cooperate. This is an evidential game. The unique evidential equilibrium (Definition 12) is (C, D). It is an evidential equilibrium because each player's chosen action is optimal, given his beliefs, which are captured by his social projection function. It is not a consistent evidential equilibrium (Definition 14) because the belief of Player 1 turns out to be mistaken in equilibrium (Player 1 expects Player 2 to play C, but Player 2 plays D instead). By contrast, the belief of Player 2 that Player 1 plays C turns out to be correct in equilibrium.
Case 3: Both players use causal reasoning (Definition 4), but beliefs turn out to be wrong ex post.
In this case, the SPF for each player is given by:
Both players use causal reasoning (Definition 4), so this is a causal game (Definition 10). Given these social projection functions, the unique payoff maximizing strategy for each player is to play D (Definition 11). Hence, (D, D) is the unique evidential equilibrium (Definition 12). It is also, of course, the unique Nash equilibrium of this game. However, (D, D) is not a mutually consistent vector of strategies (Definition 13) because each player expects his opponent to play C in response to D, but the opponent's response is D. Hence, (D, D) is not a consistent evidential equilibrium (Definition 14).
Case 4: Both players use causal reasoning (Definition 4), and beliefs turn out to be correct ex post.
In this case, the SPF for each player is given by:
Both players use causal reasoning (Definition 4), so this is a causal game (Definition 10). Given his social projection function, playing D is the unique optimal strategy for Player 1 (Definition 11) and similarly for Player 2. Hence, (D, D) is the unique evidential equilibrium (Definition 12). Furthermore, (D, D) is a mutually consistent vector of strategies (Definition 13) because each player expects his or her rival to play D, and in fact, his or her rival does play D. Hence, (D, D) is a consistent evidential equilibrium (Definition 14). The unique Nash equilibrium of this game is, of course, (D, D). Hence, this case illustrates Proposition 1a, namely a Nash equilibrium of the game G is also a consistent evidential equilibrium (Definition 14) of the evidential game Γ = ⟨G, Φ⟩ with a suitable choice of social projection functions, Φ.
Remark 5: When players are randomly matched to play the one-shot prisoners’ dilemma game, the weight of the evidence, reviewed in the next subsection, indicates a cooperation rate of at least 50%. To our minds, the only satisfactory explanation is provided by evidential reasoning with players using their identity social projection function (Definition 6). However, substantial numbers also defect, and this can be explained (as usual) by causal reasoning. Thus, the evidence can best be explained by a mixture of players, some of whom use causal reasoning, and the others use evidential reasoning.
6.3. Evidence of Cooperation in the Prisoners’ Dilemma Game
In the static prisoners' dilemma game, defection (D) is a strictly dominant strategy; recall Table 4. Hence, a player using causal reasoning should defect. However, experimental evidence indicates high cooperation rates. Rapoport [4] finds cooperation rates of 50% in the prisoners' dilemma game. Zhong et al. [11] show that the cooperation rates in prisoners' dilemma studies go up to 60% when positive labels are used (such as a "cooperative game", rather than a "prisoners' dilemma game"). When purely generic labels are used (such as C and D), the cooperation rates are about 50%. Khadjavi and Lange [13] find that, while the cooperation rate among students playing the static prisoners' dilemma game is 37%, the cooperation rate among prison inmates is 56%.
Lewis [2] used evidential reasoning to explain the unexpected levels of cooperation in the one-shot prisoners' dilemma game. Mutual cooperation is better than mutual defection. If players use evidential reasoning, they may take their own preference for mutual cooperation as diagnostic evidence that their rival also has a preference for mutual cooperation, in which case both players are more likely to cooperate. These views are borne out by the evidence. Cooperators believe that the probability of other players cooperating is high, and, similarly, players who defect believe that other players will defect with high probability; see Krueger [40].
Like-mindedness is compatible with both outcomes, C and D, and we do observe both outcomes in the PD game. Those who play C (respectively, D) also believe that a disproportionately large share of the other players will play C (respectively, D). However, why then do we observe so much cooperation in the static prisoners' dilemma game? According to Gintis [54], page 145, humans have evolved the desire to cooperate with other humans. However, this cannot be the only cause, for it cannot explain why the rate of defection increases in the PD game when players know that their rivals have cooperated (see Section 9.5, below).
Rapoport [55], pp. 139–141, argued that each player takes his own belief that rational players deserve the cooperative outcome as evidence that similarly rational players will also cooperate. This is similar to evidential reasoning. Howard [3] tests the assertion of Rapoport [55], pp. 139–141, by running a contest between two computer programs. One program is designed to play the dominant strategy, defect. The other, called the MIRROR program, is able to recognize whether it is playing another MIRROR program, in which case it cooperates; otherwise, it defects. Five copies of each program play a tournament, and, not surprisingly, the MIRROR program achieves higher payoffs. In effect, the MIRROR program replicates the notion that people cooperate with other like-minded people. In the conclusion, Howard [3], p. 212, gives an argument that is identical in spirit to the evidential reasoning argument: "If all players use the self-recognition program listed in the Appendix, and play cooperatively only if they recognize their opponents as their twins, then every game will be played cooperatively."
In contrast to the standard explanations (Section 9, below), the explanation of cooperation based on evidential reasoning appears to be quite plausible. The fact that a sizeable fraction of the experimental subjects also defect suggests that the results are best accounted for by a mixture in the population of people who use evidential reasoning and people who use causal reasoning.
7. The Nash Demand Game
7.1. Non-Uniqueness of Outcomes in the Nash Demand Game under Causal Reasoning and under Evidential Reasoning
Consider the Nash demand game (Nash [56,57]). Two players share a cake of size one. Player 1 demands x ∈ [0, 1] and Player 2 simultaneously demands y ∈ [0, 1]. If the demands are feasible, i.e., if x + y ≤ 1, then each player receives what she or he demanded. However, if the demands are not feasible, i.e., if x + y > 1, then each player gets zero. Any pair, (x, y), such that x + y = 1 is a Nash non-cooperative equilibrium. Thus, the Nash non-cooperative equilibrium concept does not pin down a unique solution in the Nash demand game.
The following theorem shows that a similar problem occurs with evidential reasoning.
Proposition 3: If we allow arbitrary social projection functions, then any outcome, (x*, y*), such that x* > 0, y* > 0 and x* + y* = 1, is an outcome of a consistent evidential equilibrium for the Nash demand game for suitably-chosen social projection functions.
Proof of Proposition 3: Let x* > 0, y* > 0, x* + y* = 1. We shall construct social projection functions under which (x*, y*) is a consistent evidential equilibrium. Consider the following social projection functions. Let μ_1 > 0 and μ_2 > 0, where μ_1 + μ_2 = 1. The restriction μ_1 + μ_2 = 1 ensures that the shares demanded by both players sum up to one; see below. For Player 1, set Φ_1(x) = (μ_2/μ_1)x, i.e., if Player 1 makes the demand x, then she expects Player 2 to make the demand (μ_2/μ_1)x. For Player 2, set Φ_2(y) = (μ_1/μ_2)y, i.e., if Player 2 makes the demand y, then she expects Player 1 to make the demand (μ_1/μ_2)y. Thus, Player 1 maximizes x subject to x + (μ_2/μ_1)x ≤ 1. The unique solution to this maximization problem is x = μ_1. Similarly, the unique solution to Player 2's maximization problem, maximize y subject to (μ_1/μ_2)y + y ≤ 1, is y = μ_2. It is straightforward to check that (μ_1, μ_2) is a consistent evidential equilibrium for the chosen social projection functions. Finally, choosing μ_1 = x* and μ_2 = y* gives the outcome (x*, y*). ■
Corollary 4: (1/2, 1/2) is the unique consistent evidential equilibrium for the Nash demand game under the identity social projection functions: Φ_1(x) = x and Φ_2(y) = y.
Proof of Corollary 4: Take μ_1 = μ_2 = 1/2 in the proof of Proposition 3. ■
Corollary 4 shows that a particularly salient social projection function, the identity function, leads to an equal division of the pie. This might be of empirical interest, particularly when studying norms of equal division.
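The sketch below illustrates the construction behind Proposition 3 and Corollary 4 numerically. The proportional social projection functions Φ_1(x) = (μ_2/μ_1)x and Φ_2(y) = (μ_1/μ_2)y are the parametrization reconstructed in the proof above, so the code should be read as an illustration under that assumption.

```python
# Sketch: each player maximizes her own demand subject to the belief, given by
# her SPF, that the pair of demands remains feasible (sums to at most one).

def best_demand(spf, grid_size: int = 10_000) -> float:
    """Largest own demand x with x + spf(x) <= 1."""
    candidates = [i / grid_size for i in range(grid_size + 1)]
    feasible = [x for x in candidates if x + spf(x) <= 1 + 1e-12]
    return max(feasible)

def demands(mu1: float):
    mu2 = 1 - mu1
    phi_1 = lambda x: (mu2 / mu1) * x   # Player 1's belief about Player 2's demand
    phi_2 = lambda y: (mu1 / mu2) * y   # Player 2's belief about Player 1's demand
    return best_demand(phi_1), best_demand(phi_2)

print(demands(0.5))   # (0.5, 0.5): identity SPFs, the equal split of Corollary 4
print(demands(0.7))   # (0.7, 0.3): an asymmetric consistent evidential equilibrium
```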
To overcome the non-uniqueness problem highlighted by Proposition 3, we need criteria to select social projection functions. One such criterion is to select the identity social projection function for symmetric evidential games. The identity social projection function appears plausible, maybe even compelling, for symmetric evidential games. However, what further criteria would help? One possibility is to appeal to the Nash bargaining axioms (Nash [56,57]; Osborne and Rubinstein [58]). These axioms are introduced in Section 7.2, below, followed by an application to the Nash demand game under evidential reasoning in the subsequent Section 7.3.
7.2. Nash’s Axioms and Nash’s Theorem
First, some definitions.
Definition 15: A bargaining problem is a pair (X, d), where X ⊂ ℝ² is compact and convex and d ∈ X. We require that, for some x ∈ X, x_1 > d_1 and x_2 > d_2.
The bargaining problem (X, d) may be given the following interpretation. X is the set of possible payoff pairs resulting from agreement. d is the pair of payoffs to the players if they fail to agree.
Definition 16: Let ß be the set of all bargaining problems. A bargaining solution is a mapping, F: ß → ℝ², where F(X, d) ∈ X.
Definition 17: x ∈ X is Pareto optimal if there is no y ∈ X with y_1 ≥ x_1, y_2 ≥ x_2 and y ≠ x.
Definition 18: A bargaining problem, (X, d), is symmetric if d_1 = d_2 and if (x_1, x_2) ∈ X implies (x_2, x_1) ∈ X.
Definition 19: We say that the bargaining solution, F, satisfies the independence of irrelevant alternatives if, for all bargaining problems, (X, d) and (Y, d), such that X ⊆ Y and F(Y, d) ∈ X, we have F(X, d) = F(Y, d).
Definition 20: Let (X, d) ∈ ß and, for i = 1, 2, let α_i > 0 and β_i ∈ ℝ. Let Y = {(α_1 x_1 + β_1, α_2 x_2 + β_2): (x_1, x_2) ∈ X} and e = (α_1 d_1 + β_1, α_2 d_2 + β_2). Then, we say that (Y, e) is a positive affine transformation of (X, d).
7.2.1. The Nash Axioms
Let ß be the set of all bargaining problems, F: ß → ℝ² a bargaining solution and (X, d) ∈ ß a bargaining problem. We introduce the following axioms.
- Pareto: F(X, d) is Pareto optimal.
- Symmetry: If (X, d) is symmetric, then F_1(X, d) = F_2(X, d).
- Independence: F satisfies the independence of irrelevant alternatives.
- Invariance: If (Y, e) is a positive affine transformation of (X, d), then F_i(Y, e) = α_i F_i(X, d) + β_i, i = 1, 2.
7.2.2. The Nash Theorem
Proposition 5: There is a unique bargaining solution, F: ß → ℝ², satisfying the axioms: Pareto, symmetry, independence and invariance. It is given by the maximizer of the Nash product: F(X, d) = arg max {(x_1 − d_1)(x_2 − d_2): x ∈ X, x_1 ≥ d_1, x_2 ≥ d_2}.
7.3. Application to the Nash Demand Game under Evidential Reasoning
We now illustrate, by an example, how the Nash axioms, Pareto, symmetry, independence and invariance, can reduce the choice among social projection functions sufficiently so as to generate a unique outcome for the Nash demand game.
Example 5 (The Nash demand game under evidential reasoning): Assume that the utility of Player 1 is u_1(x) = x^γ and that of Player 2 is u_2(y) = y, where 0 < γ ≤ 1. Consider the following social projection functions. Let μ_1 > 0 and μ_2 > 0, where μ_1 + μ_2 = 1. For Player 1, set Φ_1(x) = (μ_2/μ_1)x, i.e., if Player 1 makes the demand x, then he or she expects Player 2 to make the demand (μ_2/μ_1)x. For Player 2, set Φ_2(y) = (μ_1/μ_2)y, i.e., if Player 2 makes the demand y, then he or she expects Player 1 to make the demand (μ_1/μ_2)y. Thus, Player 1 maximizes u_1(x) = x^γ subject to x + (μ_2/μ_1)x ≤ 1. The unique solution to this maximization problem is x = μ_1. Similarly, the unique solution to Player 2's maximization problem, maximize u_2(y) = y subject to (μ_1/μ_2)y + y ≤ 1, is y = μ_2. It is straightforward to check that (μ_1, μ_2) is a consistent evidential equilibrium for the chosen social projection functions.
To make progress, we need to select μ_1 and μ_2. If γ = 1, then the game is symmetric, and μ_1 = μ_2 = 1/2 (and hence, the identity social projection functions) appear compelling. This gives the plausible outcome (1/2, 1/2). However, if γ < 1, then the game is not symmetric, and it is not clear, a priori, how to choose μ_1 and μ_2. If we invoke, in addition to symmetry, the other Nash axioms, Pareto, independence and invariance, then, by Proposition 5, we must get the unique outcome that is determined by maximizing the Nash product (here, d = (0, 0) and the Nash product is x^γ y). We then show that this unique outcome can be supported by the appropriate choice of μ_1 and μ_2.
It is straightforward to show that the problem: choose x and y, so as to maximize x^γ y subject to x + y ≤ 1, has the unique solution x = γ/(1 + γ), y = 1/(1 + γ). This, in turn, determines the unique values μ_1 = γ/(1 + γ), μ_2 = 1/(1 + γ). Thus, social projection functions compatible with the Nash axioms are Φ_1(x) = x/γ and Φ_2(y) = γy. Note that these social projection functions are not unique (because they were chosen to be linear). However, the outcome is unique, whatever social projection functions are chosen, provided they satisfy the Nash axioms.
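The following numerical sketch illustrates Example 5. The utility specification u_1(x) = x^γ, u_2(y) = y is the assumed reconstruction used above; the code simply maximizes the Nash product over the feasible splits.

```python
# Numerical sketch of Example 5: maximize the Nash product x**gamma * (1 - x)
# over splits (x, 1 - x), with disagreement point (0, 0). The utility form
# x**gamma is an assumption of this reconstruction.

def nash_bargaining_split(gamma: float, grid_size: int = 100_000):
    """Grid search for the demand pair (x, 1 - x) maximizing x**gamma * (1 - x)."""
    best_x = max((i / grid_size for i in range(1, grid_size)),
                 key=lambda x: x ** gamma * (1 - x))
    return best_x, 1 - best_x

print(nash_bargaining_split(1.0))   # ~ (0.5, 0.5): the symmetric case
print(nash_bargaining_split(0.5))   # ~ (1/3, 2/3): analytic solution x = gamma / (1 + gamma)
```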
8. Oligopoly Games
Consider a market for a single homogeneous good. The total industrial output, Q, is produced by a fixed number of firms, n. Let q_i be the output of firm i; then Q = q_1 + q_2 + … + q_n. All consumers are price takers. The unit price, P, is given by:
There are zero fixed costs, and the marginal cost of firm i is a constant, c_i, i = 1, 2, …, n, where:
Hence, the profit of firm i is π_i = (P − c_i)q_i, i = 1, 2, …, n, which, using Equation (16), can be written as:
where q_{−i} is the vector of outputs of firms other than firm i. Maximizing π_i with respect to q_i, given q_{−i}, leads to firm i's reaction function:
8.1. Causal Reasoning
The next proposition summarizes the results under causal reasoning on the part of all firms.
Proposition 6. : (a) Under perfect competition, , .
(b) The monopoly outcome is given by , .
(c) The Cournot output level of any firm is: (d) In a Stackelberg leader-follower model where Firm 1 is the leader while Firm 2 is the follower, the equilibrium output levels of the leader and the follower, respectively, are: Proof of Proposition 6:
(a) Under perfect competition, each firm produces at the minimum cost,
, and the price is set equal to
; Equation (
16) then gives
.
(b) Suppose we have a single firm (the monopolist) that produces at minimum cost,
. Setting
and
(there are no other firms) in Equation (
19) gives
; Equation (
16) then gives
.
(c) In a Cournot equilibrium, each firm chooses its output, q_i, so as to maximize its profit, Equation (18), given the outputs,
18), given the outputs,
, of the other firms. Set
and
in Equation (
19) and solve the resulting system of simultaneous linear equations to get Equation (
20).
(d) Consider a duopoly with Firm 1 acting as leader and Firm 2 acting as follower. The follower chooses its output,
, to maximize its profit given the output level,
, of the leader; Equation (
19) then gives
. Substitute this into the profit function of the leader (from (Equation (
18)) to get
. Maximize this with respect to
to get
, and hence,
. ■.
Remark 6: Note that the perfectly competitive, monopoly and Cournot games are all single-stage games, i.e., one-shot games. The firms choose their actions simultaneously (Parts a, b and c of Proposition 6). However, the Stackelberg leader-follower model is a two-stage game: The leader moves first choosing its output level correctly anticipating the reaction of the follower. The follower then moves having observed the output level of the leader.
8.2. Evidential Reasoning
Consider the consequences of evidential reasoning for the producers. All consumers are causal reasoners, i.e., each consumer regards every other consumer and every firm as an outgroup member (recall Definition 7). We also assume that each firm regards each consumer as an outgroup member. Thus, if C is the set of consumers and F is the set of firms, then each is an outgroup relative to the other (Definition 7c). This also allows us to continue to assume that the market demand curve is given by Equation (16). If we allowed consumers to use evidential reasoning, then a single consumer could reason as follows: "If I cut my demand, then probably each like-minded consumer would also cut his or her demand. The aggregate result would be a reduction in price for all of us". Consumers would then be able to collude. The consequence would be that we would no longer have an oligopoly model (as classically defined), but a bargaining model. While this is very interesting, it lies beyond the scope of this paper and, in fact, deserves a paper on its own.
We now describe an evidential equilibrium, (q_1*, q_2*, …, q_n*), with the following properties. Suppose that firm i is considering a deviation, q_i, from q_i*. Firm i reasons as follows. "If I am tempted to deviate by an amount q_i − q_i*, and if I believe that my rival, firm j, j ≠ i, is like-minded, then the rival is probably also tempted to deviate by an amount λ_i(q_i − q_i*)"; the interpretation of λ_i is given below in more detail. We formalize such reasoning by the following social projection function (Definition 3):

Φ_ij(q_i) = q_j* + λ_i(q_i − q_i*), j ≠ i.   (23)

The social projection specified in Equation (23) is quite general. It nests several subcases, as we show below. The generality of Equation (23) should not be taken to mean that the predictive content of the evidential reasoning model of oligopoly is empty. Rather, as in prisoners' dilemma games, individuals display a wide variation in choices when they are asked to play the oligopoly game (see Section 8.4 below). Variations in the parameter λ_i in Equation (23) offer a parsimonious way of capturing this heterogeneity by varying the degree of like-mindedness.
In particular, perfect like-mindedness, λ_i = 1, gives rise to the identity social projection function (Definition 6); each firm believes that the other will deviate from (q_1*, q_2*, …, q_n*) by an identical distance. The other extreme arises when no like-mindedness is perceived by firms, as in models of causal reasoning. This corresponds to λ_i = 0. Intermediate cases of like-mindedness correspond to values of λ_i between these two extremes. The distribution of values of λ_i in any population is ultimately an empirical question that cannot be answered in a theoretical model.
The next proposition gives the solution under evidential reasoning.
Proposition 7: (a) Given the social projection Functions (23), the unique evidential equilibrium (Definition 12), (q_1*, q_2*, …, q_n*), is characterized by the following set of simultaneous linear algebraic equations: (b) Furthermore, (q_1*, q_2*, …, q_n*) is a mutually consistent vector of strategies (Definition 13) and, hence, a consistent evidential equilibrium.
(c) Conversely, given any vector of outputs, (q_1*, q_2*, …, q_n*), satisfying the relevant non-negativity and feasibility restrictions, there exists a profile of social projection functions of the form Equation (23), such that (q_1*, q_2*, …, q_n*) is a consistent evidential equilibrium. In particular,
(c) Conversely, given any vector of outputs, , satisfying and , there exits a profile of social projection of the form Equation (23), such that is a consistent evidential equilibrium. In particular, Proof of Proposition 7: (a) Substituting
from Equation (
23) into Equation (
18) gives:
which, after simplification, gives:
Equation (27) shows how a player who uses the heuristic of evidential reasoning translates an essentially strategic problem into a decision theoretic problem. Maximizing Equation (27) with respect to
gives the optimal (pure) strategy for firm
i (Definition 11), given his social projection Function (
23):
Setting
,
and simplifying gives the following set of simultaneous linear algebraic equations,
which can be written in the matrix form Equation (
24).
(b) From Equation (
23), we see that
. In effect, when firm
i produces the output
, it believes that firm
j will produce
.
Ex post, firm
i finds that firm
j indeed did produce an output level
, thus vindicating its
ex ante belief. Hence,
is a mutually consistent vector of strategies and, hence, a consistent evidential equilibrium.
(c) Rewrite Equation (29) in the form:
Equation (30) has many solutions, for example Equation (25). ■
We now show how one may obtain the market outcomes under causal reasoning (Proposition 6) also under evidential reasoning by choosing suitable values of λ_i, i = 1, 2, …, n, in Equation (25) of Proposition 7.
Corollary 8: (a) One choice of parameters in Equation (25) gives the perfectly competitive output levels (Proposition 6a). Here, each firm regards every other firm as an ingroup member, and the set of firms forms an ingroup (Definition 7). We may call this a competitive ingroup and the resulting social projection functions competitive social projection functions. This is in line with the ideas considered in Section 2.3 and, in particular, is an illustration of the contrast effect. (b) A second choice gives output levels whose total equals the monopoly level (Proposition 6b). In this case, the social projection functions of the producers are identity social projection functions on the set of all producers (Definition 6). Thus, each firm believes that the other firms are like-minded.
(c) A third choice gives the Cournot output levels (Proposition 6c). Here, each firm regards the others as outgroup members (Definition 7); hence, every firm uses causal reasoning (Definition 4).
(d) A fourth choice gives, respectively, the leader output and the follower output in the leader-follower Stackelberg duopoly (Proposition 6d). The social projection function of the follower (Firm 2) is constant; hence, the follower uses causal reasoning (Definition 4). The follower regards the leader as an outgroup member, while the leader regards the follower as an ingroup member (Definition 7).
We may say that the leader behaves competitively towards the follower (the contrast effect, recall Section 2.3).
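To make parts (b) and (c) of Corollary 8 concrete, the following numerical sketch assumes a symmetric linear duopoly with inverse demand P = a − b(q1 + q2) and constant marginal cost c. These functional forms and parameter values are illustrative assumptions, not the paper's Models (16)–(18). Under the identity social projection, each firm maximizes profit believing the rival will choose the same output, which yields half the monopoly output per firm; under a constant (causal) belief fixed at the rival's Cournot quantity, the Cournot output results.

```python
from scipy.optimize import minimize_scalar

a, b, c = 100.0, 1.0, 10.0  # assumed demand intercept, slope and marginal cost


def profit(q_own, q_rival_believed):
    """Profit of a firm given its output and its belief about the rival's output."""
    price = a - b * (q_own + q_rival_believed)
    return (price - c) * q_own


def best_output(belief):
    """Output maximizing profit, where the belief may depend on the firm's own output."""
    res = minimize_scalar(lambda q: -profit(q, belief(q)), bounds=(0, a / b), method="bounded")
    return res.x


q_cournot = (a - c) / (3 * b)          # analytic Cournot quantity per firm
q_monopoly_total = (a - c) / (2 * b)   # analytic monopoly (collusive) total output

q_identity = best_output(lambda q: q)          # identity projection: rival matches my output
q_causal = best_output(lambda q: q_cournot)    # causal reasoning: belief fixed at rival's Cournot output

print(q_identity, q_monopoly_total / 2)   # ~22.5 each: total output equals the monopoly level
print(q_causal, q_cournot)                # ~30 each: the Cournot outcome
```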
Remark 7: Recall that the Stackelberg game in Proposition 6d under causal reasoning is a two-stage game, where the leader moves first, correctly anticipating the reaction of the follower. The follower then chooses its output having observed the output of the leader. By contrast, the version under evidential reasoning (Corollary 8d) is a single-stage game. The empirical evidence that we present below does show that in single-stage duopoly games, players often choose outputs similar to the Stackelberg output levels. This appears to be confirmation of evidential reasoning in oligopoly games.
8.3. The Nash Bargaining Solution for Oligopoly Games
Here, we investigate the consequences of treating the oligopoly problem as a Nash bargaining problem (see Section 7, above, for the two-player case and Osborne and Rubinstein [58], p. 23, for the n-player case). We show that the oligopoly Models (16)–(18) have a unique Nash bargaining solution and that this solution specifies equal output levels for the firms. However, due to different costs, the firms' profits are different. We give social projection functions that implement this Nash bargaining solution.
Proposition 9: For the oligopoly Models (16)–(18), there is a unique Nash bargaining solution. It specifies a common output level for all of the firms, determined as the unique solution to a single equation, and it can be implemented by suitably chosen social projection functions of the form Equation (23).
Proof of Proposition 9: A firm can guarantee itself a zero profit by exiting the market. Therefore, we take the disagreement point to be the vector of zero profits. Maximizing the Nash product gives the first order conditions, which can be rewritten as Equation (31), with the auxiliary terms defined in Equation (32). Summing Equation (31) over the n firms, using Equation (32), and rearranging, we get Equation (33).
We shall argue that Equation (33) has a unique solution. Write Equation (33) in the form f = g, where f and g are continuous functions of the candidate common output level whose boundary values guarantee that they cross; hence, f and g are equal at some point. Since f is strictly decreasing and g is strictly increasing, that point is unique. From Equation (31), it then follows that the unique Nash bargaining solution for the oligopoly Models (16)–(18) is given by Equation (35).
From Equation (23) and Proposition 7c, it follows that there exist social projection functions of the form Equation (23) that implement Equation (35). ■
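As a numerical illustration of Proposition 9 (equal outputs, unequal profits), the following sketch maximizes the Nash product for a duopoly with linear inverse demand and two different constant marginal costs. These functional forms and numbers are assumptions for illustration only, not the paper's Models (16)–(18).

```python
import numpy as np
from scipy.optimize import minimize

a, b = 100.0, 1.0                # assumed linear inverse demand P = a - b*(q1 + q2)
costs = np.array([10.0, 30.0])   # assumed, different, constant marginal costs


def profits(q):
    price = a - b * q.sum()
    return (price - costs) * q


def neg_log_nash_product(q):
    """Negative log of the Nash product, with the disagreement point at zero profits."""
    pi = profits(q)
    if np.any(pi <= 0):
        return np.inf
    return -np.log(pi).sum()


res = minimize(neg_log_nash_product, x0=np.array([20.0, 20.0]), method="Nelder-Mead")
q_star = res.x
print(q_star)           # the two outputs coincide (up to numerical tolerance)
print(profits(q_star))  # but the low-cost firm earns the higher profit
```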
8.4. Empirical Evidence from Oligopoly Games
In our theoretical model, we showed how alternative values of λ in Equation (23) were able to produce, in a static game, the output levels under alternative market forms, such as perfect competition, monopoly, Cournot and the Stackelberg leader-follower game. We now argue that these results are consistent with the empirical findings.
The early experiments on Cournot markets were conducted by Fouraker and Siegel [14]. They, and many later experimenters, gave a profit table (PT) to the subjects. The PT listed the outputs of each firm on the two margins, while individual cells of the table contained the corresponding profits of both firms. The PT was often supplemented by a profit calculator (PC), which allowed each experimental subject, in their role as a firm, to calculate the profit for a given pair of quantities chosen by both firms. In recent years, several experiments have also given subjects a best response option (BRO), which tells them their profit-maximizing quantity for any quantity chosen by the other player.
The extra information provided (PT, PC, BRO) arguably primes subjects to follow an optimization-based solution. Requate and Waichman [27] find that there is substantially more collusion (corresponding to the identity social projection function in Equation (23)) in the PT and PC treatments as compared to BRO. They find, in a static duopoly experiment, that the collusive outcome is reached at least once in the 20 rounds in 62%, 78% and 29% of the markets, respectively, in the PT, PC and BRO treatments. The theoretical outcome of the Cournot–Nash equilibrium is, therefore, not confirmed in many cases.
Several papers report the Cournot–Nash equilibrium under random matching of opponents while finding that there is greater collusion under fixed matching of players.
For instance, Huck et al. [15] consider symmetric firms and linear demand curves. Output choices vary between 3 and 15, and the Cournot–Nash outcome is for each firm to produce an output of eight. Both firms are known to the experimental subjects to be symmetric, so they must know that the solution lies on the diagonal of a relatively small matrix. Profits of each firm in the PT drop off sharply for output levels equal to or higher than 10 or less than an output of three. This leaves only seven levels of output to choose from: 3, 4, 5, 6, 7, 8, 9. The authors report data for Round 9 of play under random matching as the most supportive of their hypothesis (see Table 5 in their paper); these results are reproduced in Table 5 below.
Table 5. Distribution of output levels in Huck et al. [15].

| Output Level | 6 | 7 | 8 | 9 | Greater Than 10 |
|---|---|---|---|---|---|
| % of subjects choosing | 12 | 21.5 | 35.5 | 14.5 | 14 |
The mean quantity is close to the Cournot–Nash output level of eight. However, there is substantial variation in the output levels, and about 65% of individuals do not choose the Cournot output level.
Rassenti
et al. [
24] use an asymmetric Cournot game in which firms have different marginal costs that are private information. Firms do not even know the underlying probability distribution of types. Hence, there is true uncertainty, an area where evidential reasoning would seem to have the most bite. The game is played over 75 rounds to allow for substantial learning possibilities. The main finding is that while total output is above, but close to, the Cournot–Nash solution, the individual levels of output chosen by the firms are quite different from the Cournot–Nash solution. The results, in this sense, are similar to those in Huck
et al. [
15]; however, the authors take this as a refutation rather than a confirmation of the Cournot–Nash equilibrium.
Bosch-Domènech and Vriend [
16] report results from the last two rounds of a 22-round duopoly experiment in which, in each round, two firms simultaneously choose outputs. They find that the output levels are widely distributed over a range that includes the monopoly output level and the perfectly competitive level.
Table 6 summarizes information that is extracted from their paper.
Table 6. Percentage of outputs corresponding to various market levels.

| Treatment | Monopoly | Cournot | Walrasian | Others |
|---|---|---|---|---|
| Easy | 38.89 | 33.33 | 0 | 27.78 |
| Hard | 11.11 | 16.67 | 11.11 | 61.11 |
| Hardest | 8.33 | 14.28 | 16.67 | 60.72 |
The three treatments, easy, hard and hardest, differ in terms of the time within which firms had to choose their outputs and the level of information provided. For instance, in the easy treatment, a PT is provided, but not in the other treatments. In the hardest treatment, firms are not even told the exact functional form of the linear demand curve (only that it is downward sloping), whereas in the easy treatment, firms know the exact demand curve. The cells in Table 6 report the approximate percentage of the standard market outcomes in each treatment. The Cournot output level is not particularly salient relative to the others. Further, the wide distribution of output levels, even in Rounds 21 and 22 of the experiment, suggests that a flexible social projection function, as in Equation (23), is consistent with the evidence.
Waichman
et al. [
59] find that pre-play communication increases the degree of collusion in the Cournot game. Between 91% and 100% of the markets achieve collusion in at least one round of the experiment when pre-play communication is allowed. This also seems consistent with evidential reasoning. Pre-play communication may increase the players’ beliefs that they are dealing with like-minded players, hence facilitating the use of evidential reasoning.
Duersch et al. [17] document systematic departures from a Cournot–Nash equilibrium. They consider a linear-demand, linear-cost Cournot game. Computers play one of several well-known strategies, including best response, against human subjects who are not aware of the computers' strategy; the game lasts 40 rounds. Again, by creating uncertainty about what others will do, this situation is quite relevant to the domain of evidential reasoning.
Mean quantities chosen by computers (34.39) are always lower than mean quantities chosen by humans (47.95). Human subjects choose quantities that are much greater than the Cournot–Nash levels and, in some cases, approach the Stackelberg leader output of 54. In particular, when computers are programmed to play a best response with some small noise, in three different treatments, subjects choose the output levels 51.99, 48.67 and 49.18, while computers choose 32.05, 35.02 and 31.67. Thus, human subjects show systematic (upward) departures from the Cournot–Nash level, even approaching the Stackelberg levels.
Thus, one observes a wide and rich range of behaviours that are often collusive and range all the way up to the choice of quantities in the Stackelberg case. The results are consistent with experimental subjects who use evidential reasoning employing a range of social projection functions (corresponding to alternative values of λ in Equation (23)). The collusive outcome is played by a significant percentage of experimental subjects, which is consistent with the identity social projection function.
9. Can Causal Reasoning Explain Cooperation in the Prisoners’ Dilemma Game?
Here, we examine several theories to see if they can explain cooperation in the one-shot prisoners’ dilemma game under causal reasoning.
First, in Section 9.1, we show that cooperation in the prisoners' dilemma game cannot be explained by reputation, correlated equilibria, level-k models, Stackelberg reasoning or evolutionary stable equilibria.
Although the prisoners’ dilemma game is presented to experimental subjects as a non-cooperative game played once against an anonymous opponent, subjects may in fact interpret it as some form of cooperative game. We outline three such theories in
Section 9.2,
Section 9.3 and
Section 9.4. In
Section 9.5, we argue that the empirical evidence rejects these theories.
Before starting, however, the reader might find the following example useful.
Example 6 (High-low game): Consider the following game between two players. If both choose high (H), then each gets two. If both choose low (L), then each gets one. If one chooses H and the other chooses L, then each gets zero. These payoffs are summarized in Table 7.
Table 7. A high-low game.

| | H | L |
|---|---|---|
| H | 2, 2 | 0, 0 |
| L | 0, 0 | 1, 1 |
This game has two pure-strategy Nash equilibria: (H, H) and (L, L). The problem is: which Nash equilibrium will be played? We use this example to illustrate several heuristics that solve this problem. Schelling [60] argued that, because of its high payoff, (H, H) is salient. Therefore, it becomes a focal point and is chosen by both players. Harsanyi and Selten [41] argued that (H, H) payoff dominates (L, L) and, hence, will be selected by both players. These are examples of what we called conservative heuristics in the Introduction. They involve no violation or relaxation of any of the standard assumptions of game theory, but they do add extra conditions that cannot be derived from standard game theory, although they are consistent with it.
As an example of what we called a radical heuristic, consider team reasoning (Sugden [61]; Bacharach [62,63]). Here, the players replace individual rationality (each seeking to maximise his or her own payoff, as in standard game theory) with collective rationality (each aims to maximise the payoff to the team). Since (H, H) gives the highest payoff to the team, it is chosen.
Another example of a radical heuristic is Stackelberg reasoning (Colman and Bacharach [
43]; Colman
et al. [
44]). Note that high-low is a static game of complete, but imperfect information. In particular, when a player makes his or her move, he or she does not know the move that his or her rival has taken or will take. Both players are aware of this. A player solves this problem by carrying out the following thought experiment. She or he says to herself or himself “suppose this were a game of perfect information where I move first and my rival moves second, after having observed my move. What move would she or he take? From the payoff matrix, it is obvious to me that her or his best reply to me choosing
H is for her or him to also choose
H; and the best reply for her or him to me choosing
L is for her or him to also choose
L. Since (H, H) is better for me than (L, L), I will choose H". The other player also reasons in the same way. The outcome is (H, H). It is important to note that there is no breakdown of causality, nor do the players believe so. In particular, Player 1 does not believe that by choosing
H (or
L) she or he causes Player 2 to choose
H (or
L).
We now illustrate evidential reasoning. Player 1 (say) reasons as follows. "(H, H) is clearly better for both of us. If only I could meet with Player 2, I would put to her or him the proposal that we both play H. I am sure she or he would agree. Player 2 is probably thinking the same. But, actually, we do not have to meet, because we both know that if we were to meet, then we would agree on (H, H)". Let p and q be the probabilities with which Players 1 and 2 play H, respectively. The players' reasoning can be formalized using the identity social projection functions: Player 1 believes that q = p, and Player 2 believes that p = q.
The resulting expected payoff to Player 1 is EU_1(p) = 2p^2 + (1 − p)^2. Since EU_1(p) attains a unique global maximum on [0, 1] at p = 1, it follows that the same holds for the expected payoff of Player 2 at q = 1. Hence, Player 1 chooses H with probability one, and similarly, Player 2 chooses H with probability one. (H, H) is a consistent evidential equilibrium (Definition 14). We stress, again, that there is no breakdown of causality, nor do the players believe so. In particular, Player 1 does not believe that by choosing H, she or he causes Player 2 to choose H. She or he merely takes her or his preference for H as evidence that Player 2 will also choose H.
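For completeness, the maximization step can be written out as follows; this is a straightforward check using the payoffs in Table 7.

```latex
\[
EU_1(p) = 2p^2 + (1-p)^2 = 3p^2 - 2p + 1, \qquad p \in [0,1].
\]
\[
EU_1'(p) = 6p - 2 = 0 \iff p = \tfrac{1}{3}, \qquad EU_1''(p) = 6 > 0,
\]
so the interior stationary point is a minimum and the maximum lies at an endpoint:
\[
EU_1(0) = 1 < EU_1(1) = 2,
\]
hence $p = 1$ is the unique global maximizer on $[0,1]$.
```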
9.1. Reputation, Correlated Equilibria, Level-K Models, Stackelberg Reasoning and Evolutionary Stable Equilibria
In the finitely-repeated prisoners’ dilemma game under incomplete information, a player may believe, possibly mistakenly, that his or her rival will respond to cooperation with cooperation and defection with defection (Kreps
et al. [
64]). Under these conditions, a player may wish to play cooperate until late in the game, so as to establish a reputation for cooperation. However, because defect is a dominant strategy, such a player will defect in the final period. This contradicts the observation of significant amounts of cooperation in the final period (Cooper
et al. [
6]). In particular, reputation (on its own) is unable to explain cooperation in the static (one-shot) prisoners’ dilemma game.
Since "cooperate" is a dominated strategy, it is not rationalizable and cannot be supported in a correlated equilibrium either. In level-k models, any player with level k ≥ 1 will never play a dominated strategy. Cooperation in a prisoners' dilemma game cannot be explained in evolutionary games either. The reason is that the set of evolutionary stable equilibria is a subset of the set of Nash equilibria of the game.
Stackelberg reasoning cannot explain cooperation in the prisoners' dilemma either. To see this, consider the payoff matrix in Table 4. According to Stackelberg reasoning, if Player 1, say, chooses C, then D is the best response for Player 2, giving Player 1 the payoff zero; and if Player 1 chooses D, then, again, D is the best response for Player 2, giving Player 1 the higher mutual-defection payoff. Hence, according to Stackelberg reasoning, Player 1 will always choose D; and similarly for Player 2. Therefore, cooperation will never be the outcome of the prisoners' dilemma game under Stackelberg reasoning.
9.2. Team Reasoning
Several elements are key to team reasoning (Bacharach, [
63]):
- The team agrees on a common objective.
- The task for each member of the team is agreed upon by all members.
- The way the surplus is divided among the members of the team is agreed upon by all members.
- The team is consolidated if all members carry out their tasks. However, serious failure by a significant number of members is liable to cause the team to break up.
As an illustration, consider the two-player game whose payoff matrix is given by
Table 8, below. This, clearly, has the structure of a prisoners’ dilemma game.
Table 8. A prisoner's dilemma game.

| | C | D |
|---|---|---|
| C | 5, 5 | 2, 6 |
| D | 6, 2 | 4, 4 |
The interpretation of the payoffs in
Table 8 is as follows. If a player works on her or his own, she or he gets the payoff of four. If both work as a team, then the joint payoff is 10, which is shared equally to give each a payoff of five. However, if one player defects (shirks), she or he gets six. This is because she or he gets four for herself or himself from working on her or his own. The other player (who has not defected) generates a payoff of four for the team by working on her or his own. By the sharing rule, this payoff is shared equally, giving the defector a total payoff of six and the non-defector a payoff of two.
D is the strictly dominant strategy for each player. Hence, this game has the unique Nash equilibrium, (D, D), which gives each player the payoff four. However, if both players internalize the objective of the team, to maximise joint payoff, then each gets five.
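The dominance and team-optimum claims can be verified mechanically from the Table 8 payoffs; a minimal sketch:

```python
# Payoffs of Table 8: (row player's payoff, column player's payoff).
payoffs = {
    ("C", "C"): (5, 5), ("C", "D"): (2, 6),
    ("D", "C"): (6, 2), ("D", "D"): (4, 4),
}

# D strictly dominates C for the row player: 6 > 5 against C, and 4 > 2 against D.
print(payoffs[("D", "C")][0] > payoffs[("C", "C")][0],
      payoffs[("D", "D")][0] > payoffs[("C", "D")][0])

# The team payoff (the sum of both payoffs) is maximized at (C, C).
team = {cell: sum(v) for cell, v in payoffs.items()}
print(max(team, key=team.get), team)  # ('C', 'C') with a joint payoff of 10
```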
9.3. Other-Regarding Preferences
Consider, for example, other-regarding preferences in the model of Fehr and Schmidt [65,66]. Suppose that we have n players with incomes x_1, x_2, …, x_n. We concentrate on the linear version, which has had considerable empirical support. The Fehr–Schmidt utility function of individual j with income x_j is given by:
U_j(x) = x_j − (α_j/(n − 1)) Σ_{k≠j} max(x_k − x_j, 0) − (β_j/(n − 1)) Σ_{k≠j} max(x_j − x_k, 0).
Individual j cares for his own payoff, x_j, as under selfish preferences. However, he or she also suffers disutility from being ahead of others (altruism) and from being behind others (envy). β_j and α_j are sometimes known as the parameters of, respectively, advantageous and disadvantageous inequity. When α_j = β_j = 0, we have purely selfish preferences. Evidence indicates that disadvantageous inequity is more important than advantageous inequity (α_j ≥ β_j), and one never benefits by throwing away one's own income (β_j < 1). Let us apply FS preferences, with a sufficiently large advantageous-inequity parameter, to the prisoners' dilemma game whose payoff matrix is given by Table 4, above. It can then be easily seen that C is a strict best reply to C, and D is a strict best reply to D. Hence, other-regarding preferences can, potentially, explain cooperation in the prisoners' dilemma game. We return to this in Section 9.5, below.
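As an illustration with assumed payoffs (a generic prisoners' dilemma with temptation 5, reward 3, punishment 1 and sucker payoff 0, not the payoffs of Table 4) and assumed inequity parameters, the following sketch checks both best-reply claims under the two-player linear Fehr–Schmidt utility:

```python
# Assumed prisoners' dilemma payoffs (temptation, reward, punishment, sucker).
T, R, P, S = 5.0, 3.0, 1.0, 0.0
alpha, beta = 2.0, 0.6  # assumed disadvantageous / advantageous inequity parameters


def fs_utility(own, other, alpha=alpha, beta=beta):
    """Two-player linear Fehr-Schmidt utility of a player with material payoff `own`."""
    return own - alpha * max(other - own, 0.0) - beta * max(own - other, 0.0)


# Rival plays C: cooperating gives (R, R); defecting gives (T, S).
print(fs_utility(R, R), fs_utility(T, S))  # 3.0 vs 2.0: C is a strict best reply to C

# Rival plays D: cooperating gives (S, T); defecting gives (P, P).
print(fs_utility(S, T), fs_utility(P, P))  # -10.0 vs 1.0: D is a strict best reply to D
```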
9.4. Altruism
Cooper
et al. [
6] give an explanation of cooperation in the prisoners’ dilemma game based on altruism. They consider three types of players. Egoists who always defect; dominant strategy altruists who always cooperate; and best response altruists who respond to cooperate with cooperate and to defect with defect.
Because there are three types of players and because when a player takes his or her action, he or she does not know the type of his or her opponent, this is a (static) game of incomplete information. The relevant solution concept (under causal reasoning) is the Bayesian Nash equilibrium. Each type of each player chooses his or her strategy so as to maximize his or her expected payoff, given the strategies of all of the types of all of the other players. The joint probability of all types is common knowledge. Each type of each player uses this, and his or her knowledge of his or her own type, to update his or her belief about the types of his or her rivals using Bayes’ Law. See, for example, Fudenberg and Tirole [
45], Part III.
To rationalize the behaviour of these types, Cooper et al. [6] assume that player i enjoys an extra amount of utility (a warm glow) from playing cooperate (irrespective of the action played by his or her rival). If the warm glow is sufficiently small, then player i behaves as an egoist, i.e., defect is a dominant action for him or her. If the warm glow is sufficiently large, then player i behaves as a dominant strategy altruist, i.e., cooperate is a dominant strategy for him or her. Finally, for a warm glow in the intermediate range, player i behaves as a best response altruist, i.e., for him or her, cooperate is the best response to cooperate and defect is the best response to defect. As an illustration, assume that, in addition to the payoffs in Table 4, player i enjoys a warm glow from cooperation. The modified payoff matrix is now given by Table 9:
Table 9. A prisoner's dilemma game in the presence of warm glow.

| | C | D |
|---|---|---|
| C | | |
| D | | |
The following is easy to check.
- If the warm glow is sufficiently small, then defect is a strictly dominant action for player i. In this case, player i behaves as an egoist.
- If the warm glow is sufficiently large, then cooperate is a strictly dominant action for player i. In this case, player i behaves as a dominant strategy altruist.
- For intermediate values of the warm glow, cooperate is a strict best response for player i to player j (j ≠ i) playing cooperate, and defect is a strict best response for player i to player j playing defect. Player i behaves as a best response altruist.
Reviewing the evidence, including the evidence from their own experiments, Cooper et al. [6] conclude that behaviour in the prisoners' dilemma game can best be explained if players are either egoists (always defect) or best response altruists (respond to cooperate with cooperate and to defect with defect). To illustrate this, suppose that the warm glow is negligible if player i is an egoist and takes an intermediate value if player i is a best response altruist. Consider the game between two best response altruists. Substituting their warm glow into the payoffs, Table 9 becomes:
Table 10. A prisoner's dilemma game with warm glow.

| | C | D |
|---|---|---|
| C | 11, 11 | 3, 10 |
| D | 10, 3 | 4, 4 |
In
Table 10, Player 1 plays
C with probability
p and Player 2 plays
C with probability
q.
Let a be the probability of a player being a best response altruist. We have four cases to consider:
1. Suppose Player 1 (a best response altruist) plays C.
1.1 Player 1 meets an egoist with probability 1 − a. An egoist always defects, giving Player 1 the payoff three.
1.2 Player 1 meets a best response altruist with probability a. If Player 2 (a best response altruist) plays C (probability q), then the payoff to Player 1 is 11. If Player 2 plays D (probability 1 − q), then the payoff to Player 1 is 3.
2. Suppose Player 1 plays D.
2.1 If Player 1 meets an egoist (probability 1 − a), who always defects, then the payoff to Player 1 will be four.
2.2 Suppose Player 1 meets a best response altruist (probability a). If Player 2 (a best response altruist) plays C (probability q), then the payoff to Player 1 is 10. If Player 2 plays D (probability 1 − q), then the payoff to Player 1 is four.
Hence, the expected utility of Player 1 is:
EU_1(p, q) = p[3(1 − a) + a(11q + 3(1 − q))] + (1 − p)[4(1 − a) + a(10q + 4(1 − q))],
which simplifies to:
EU_1(p, q) = 4 + 6aq + p(2aq − 1),
from which we get ∂EU_1/∂p = 2aq − 1, and hence Player 1 optimally sets p = 1 if 2aq > 1 and p = 0 if 2aq < 1.
To simplify the discussion, let us concentrate on the pure strategy Bayesian Nash equilibria in a population with a fraction, a, of best response altruists and a fraction, 1 − a, of egoists.
If a < 1/2, then we have one pure strategy Bayesian Nash equilibrium, in which all players defect.
If a ≥ 1/2, then we have two pure strategy Bayesian Nash equilibria. In one equilibrium (as before), all players defect. In the second equilibrium, all best response altruists cooperate and all egoists defect.
Hence, for the particular payoffs in
Table 10, cooperation can be sustained as a Bayesian Nash equilibrium in the one-shot prisoners’ dilemma game if the percentage of best response altruists is ≥50%.
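A quick numerical check of this threshold, using the Table 10 payoffs as reconstructed from the case analysis above (11, 3, 10 and 4), is as follows:

```python
def eu_best_response_altruist(play_c, a, q):
    """Expected payoff of a best response altruist facing a population with a fraction
    `a` of best response altruists (who play C with probability q) and a fraction
    1 - a of egoists (who always defect). Payoffs are those of Table 10."""
    if play_c:
        return (1 - a) * 3 + a * (q * 11 + (1 - q) * 3)
    return (1 - a) * 4 + a * (q * 10 + (1 - q) * 4)


# Candidate equilibrium: all best response altruists cooperate (q = 1).
for a in (0.4, 0.5, 0.6):
    print(a, eu_best_response_altruist(True, a, 1.0) >= eu_best_response_altruist(False, a, 1.0))
# a = 0.4 -> False, a = 0.5 -> True, a = 0.6 -> True:
# cooperation by the altruists is a best reply only when a >= 1/2.
```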
See Cooper
et al. [
6] for a more extensive analysis, including mixed strategies and repeated prisoners’ dilemma games.
9.5. Can Team Reasoning, Altruism or Other-Regarding Preferences Explain Cooperation in the Prisoners’ Dilemma Game?
Shafir and Tversky [
5] presented experimental subjects with the usual one-shot prisoners’ dilemma game. They then considered the following two variants.
1. A player whose competitor had defected was informed of this and offered the chance to revise her or his decision.
2. A player whose competitor had cooperated was informed of this and offered the chance to revise her or his decision.
Shafir and Tversky [
5] found that when a player did not know what the opponent had chosen, then the player defected in 63% of games. If the player was informed that the opponent had defected (Case 1, above), then in 97% of the games, the player defected. Thus, defection increased. On the other hand, if the player was informed that the opponent had cooperated (Case 2, above), then in 84% of the games, the player defected. Thus, defection, again, increased. These results were replicated and extended by Croson [
7], Li and Taplin [
8], Busemeyer
et al. [
10] and Histrova and Grinberg [
12].
We shall argue that these results are consistent with evidential reasoning, but not with team reasoning, altruism or other-regarding preferences.
First, consider Case 1, above. Here, all four theories predict an increase in defection following a player being told that her or his rival has defected, in line with the evidence. However, the reasons are different. Under evidential reasoning, once a player has been told the move of her or his rival, uncertainty is resolved; she or he no longer needs recourse to evidential reasoning. Since she or he continues to play selfishly and since defect is a strictly dominant strategy, she or he defects. Under team reasoning, an observation of defection is liable to destroy team spirit (
Section 9.2). Hence, team reasoning is consistent with the observation of an increase in defection. According to other-regarding preferences, as in
Section 9.3 above, a player who originally chose
D will continue to choose
D after she or he has been told that her or his rival had defected. However, a player who had originally chosen
C will change to
D after being told that her or his rival had defected. Hence, the other-regarding preferences model will predict an increase in defection, in line with the evidence. Finally, consider altruism, as in
Section 9.4 above. The egoists will not change their behaviour; they will continue to defect. Likewise, the dominant strategy altruists will not change their behaviour; they will continue to cooperate. The best response altruists who chose to defect will continue to choose defect after being informed that their rival has defected. However, each best response altruist who had chosen cooperation will now switch to defection. Hence, the amount of defection will increase in line with the evidence.
Second, consider Case 2, above. Here, evidential reasoning predicts an increase in defection, in line with the evidence. The reason is that uncertainty is resolved once a player has been told the move of her or his rival. She or he no longer needs recourse to evidential reasoning. If she or he had defected, she or he will choose defect again. If she or he had cooperated, she or he will now defect. However, the other three theories predict a decline in defection, contrary to the evidence. For team reasoning, the team spirit is strengthened once a player is told that her or his rival has cooperated. Hence, a player who had cooperated continues to cooperate. A team member who had initially chosen to defect may now change to cooperate. According to other-regarding preferences, as in
Section 9.3, a player who originally chose
C will continue to choose
C after she or he has been told that her or his rival had cooperated. However, a player who had originally chosen
D will change to
C after being told that her or his rival had cooperated. Hence, the other-regarding preferences model will predict a reduction in defection, contrary to the evidence. Finally, consider altruism, as in
Section 9.4. The egoists will not change their behaviour; they will continue to defect. Likewise, the dominant strategy altruists will not change their behaviour; they will continue to cooperate. The best response altruists who chose to cooperate will continue to cooperate. However, each best response altruist who had chosen defect will now find it in their best interest to cooperate. Hence, the extent of defection will decrease, contrary to the evidence.
10. Conclusions
A large number of experimental subjects do not play a Nash equilibrium in well-known games, such as prisoners’ dilemma, the hawk-dove game, voting games, public goods games and oligopoly games. It would seem that this constitutes strong grounds for game theory to be open to alternative equilibrium concepts in static games. Aumann and Brandenburger [
67] gave epistemic conditions under which the play of a game would result in a Nash equilibrium. A violation of Nash equilibrium is also a violation of the epistemic conditions that imply it.
In static games, players are uncertain about which actions the other players will take (or have taken). A great deal of evidence suggests that, in resolving uncertainty about what other players will do (or have done), players assign diagnostic significance to their own actions. Such reasoning is described as evidential reasoning. Players using evidential reasoning do not believe that their actions cause the actions of others; their own actions merely inform their beliefs about the actions taken by others. However, players who use evidential reasoning can violate standard rationality assumptions, such as Savage's sure thing principle. Thus, evidential reasoning is best viewed as a heuristic rather than a sound rational principle.
The aim of our paper is to explore the significance of evidential reasoning for the class of static games of complete information. We define evidential games in which some players use evidential reasoning. We also propose the relevant solution concepts for such games: evidential equilibrium and consistent evidential equilibrium.
The evidence shows that the cooperative outcome in the prisoners' dilemma game occurs more than 50% of the time, despite cooperation being strictly dominated by defection. Other-regarding preferences and altruism under causal reasoning can both explain this. However, neither team reasoning, other-regarding preferences nor altruism can explain why the amount of defection increases when players are told that their rivals have, in fact, cooperated. We find that the evidence can best be explained by a mixture of players, some of whom use evidential reasoning and others who use causal reasoning.
Our proposal of an identity social projection function appears adequate for symmetric games. However, our general notion of a social projection function appears too general (Proposition 3). We illustrated how the Nash bargaining axioms restrict social projection functions sufficiently to get unique outcomes in the Nash demand game (Example 5) and oligopoly games (Proposition 9).
Our framework can naturally be extended to incomplete information games and dynamic games, but we lack a body of evidence that could underpin such extensions. Hence, we leave such developments for future research as more evidence accumulates.