Open Access
Games 2017, 8(3), 38; https://doi.org/10.3390/g8030038
Article
Strategic Behavior of Moralists and Altruists
1 Toulouse School of Economics, CNRS, University of Toulouse Capitole, Toulouse, France
2 Institute for Advanced Study in Toulouse, Toulouse, France
3 Stockholm School of Economics, Stockholm, Sweden
4 KTH Royal Institute of Technology, Stockholm, Sweden
* Author to whom correspondence should be addressed.
Received: 31 July 2017 / Accepted: 30 August 2017 / Published: 11 September 2017
Abstract: Do altruism and morality lead to socially better outcomes in strategic interactions than selfishness? We shed some light on this complex and nontrivial issue by examining a few canonical strategic interactions played by egoists, altruists, and moralists. By altruists, we mean people who care not only about their own material payoffs but also about those of others, and by a moralist we mean someone who cares about his or her own material payoff and also about what that payoff would be if others were to act like himself or herself. It turns out that both altruism and morality may improve or worsen equilibrium outcomes, depending on the nature of the game. Not surprisingly, both altruism and morality improve the outcomes in standard public goods games. In infinitely repeated games, however, both altruism and morality may diminish the prospects of cooperation, and to different degrees. In coordination games, morality can eliminate socially inefficient equilibria while altruism cannot.
Keywords: altruism; morality; Homo moralis; repeated games; coordination games

JEL Classification: C73; D01; D03

1. Introduction
Few humans are motivated solely by their private gains. Most have more complex motivations, usually including some moral considerations, a concern for fairness, or an element of altruism, or even spite or envy towards others. There can even be a concern for the wellbeing of one’s peer group, community, country or even humankind. By contrast, for a long time, almost all of economics was based on the premise of narrow self-interest, by and large following the lead of Adam Smith’s Inquiry into the Nature and Causes of the Wealth of Nations (1776) [1]. However, Adam Smith himself also thought humans in fact have more complex and often social concerns and motives, a theme developed in his Theory of Moral Sentiments (1759) [2].1 Philosophers still argue about how to reconcile the themes of these two books in the mind of one and the same author. Did Adam Smith change his mind between the first and second book? Or was his position in his second book to demonstrate that well-functioning markets would produce beneficial results for society at large even if all individuals were to act only upon their own narrow self-interest?
In view of the overwhelming experimental evidence that only a minority of people behave in accordance with predictions based on pure material self-interest, it appears relevant to ask whether and how alternative preferences affect outcomes in standard economic interactions. It is commonly believed that if an element of altruism or morality were added to economic agents’ self-interest, then outcomes would improve for all. Presumably, people would not cheat when trading with each other, and they would work hard even when not monitored or remunerated by way of bonus schemes. They would contribute to public goods, respect and defend the interests of others, and might even be willing to risk their lives to save the lives of others.
While this has certainly proved to be right in some interactions,2 the belief is not generally valid. For example, Lindbeck and Weibull (1988) [10] demonstrate that altruism can diminish welfare among strategically interacting individuals engaged in intertemporal decision-making. The reason is that if interacting individuals are aware of each other’s altruism, then even altruists will to some extent exploit each other’s altruism, resulting in a misallocation of resources. One prime example is undersaving for one’s old age, with the rational expectation that others will help if need be. In this example, everyone would benefit from a commitment not to help each other, as this could induce intertemporally optimal saving.
Likewise, Bernheim and Stark (1988) [11] show that altruism may be harmful to long-run cooperation. There, the reason is that, in repeated games between altruists, punishments for defection may be less harsh if the punisher is altruistic, much as a loving parent cannot credibly threaten even a mild punishment for a child’s misbehavior. Specifically, in repeated interactions, the mere repetition of a static Nash equilibrium of the stage game has better welfare properties between altruists than between purely self-interested individuals, thus diminishing the punishment from defecting from cooperation. However, altruism also diminishes the temptation to defect in the first place, since defecting harms the other party. Bernheim and Stark (1988) [11] show that the net effect of altruism may be to diminish the potential for cooperation, in the sense that it shrinks the range of discount factors for which cooperation is a subgame-perfect equilibrium outcome.
The aim of the present study is to examine strategic interactions between altruists, as well as between moralists, more closely, in order to shed light on the complex and nontrivial effects of altruism and morality on equilibrium behavior and the associated material welfare. By ‘altruism’, we mean here that an individual cares not only about his own material welfare but also about the material welfare of others, in line with Becker (1974 [12], 1976 [5]), Andreoni (1988) [13], Bernheim and Stark (1988) [11], and Lindbeck and Weibull (1988) [10]. As for ‘morality’, we rely on recent results in the literature on preference evolution, which show that a certain class of preferences, called Homo moralis preferences, stands out as being particularly favored by natural selection (Alger and Weibull, 2013 [14], 2016 [15]). A holder of such preferences maximizes a weighted sum of own material payoff evaluated at the true strategy profile and own material payoff evaluated at hypothetical strategy profiles in which some or all of the other players’ strategies have been replaced by the individual’s own strategy.3
We examine the effects of altruism and such morality on behavior and outcomes in static and repeated interactions. Some of the results may appear surprising and counterintuitive. We also show similarities and differences between altruism and morality, the main difference being that while the first is purely consequentialist, the second is partly deontological. In other words, the first motivation is concerned only with resulting material allocations, while the second places some weight on “duty” or the moral value of acts, a concern about what is “the right thing to do” in the situation at hand.
Our study complements other theoretical analyses of the effects of prosocial preferences and/or moral values on the qualitative nature of equilibrium outcomes in a variety of strategic interactions. In economics, see Arrow (1973) [22], Becker (1974) [12], Andreoni (1988 [13], 1990 [23]), Bernheim (1994) [24], Levine (1998) [25], Fehr and Schmidt (1999) [26], Akerlof and Kranton (2000) [27], Bénabou and Tirole (2006) [28], Alger and Renault (2007) [29], Ellingsen and Johannesson (2008) [30], Englmaier and Wambach (2010) [31], Dufwenberg et al. (2011) [32], and Sarkisian (2017) [33]. For related models of social norms, see Young (1993) [34], Kandori, Mailath, and Rob (1993) [35], Sethi and Somanathan (1996) [36], Bicchieri (1997) [37], Lindbeck, Nyberg, and Weibull (1999) [38], Huck, Kübler, and Weibull (2012) [39], and Myerson and Weibull (2015) [40].4
Our study also complements a large literature on theoretical analyses of the evolution of behaviors in populations. For recent contributions, see Lehmann and Rousset (2012) [42], Van Cleve and Akçay (2014) [43], Allen and Tarnita (2014) [44], Ohtsuki (2014) [45], Peña, Nöldeke, and Lehmann (2015) [46], and Berger and Grüne (2016) [47]. For surveys of related work on agentbased simulation models, see Szabó and Borsos (2016) [48] and Perc et al. (2017) [49].
2. Definitions and Preliminaries
We consider n-player normal-form games (for any $n>1$) in which each player has the same set X of (pure or mixed) strategies, and $\pi\left(x,\mathbf{y}\right)\in\mathbb{R}$ is the material payoff to strategy $x\in X$ when used against the strategy profile $\mathbf{y}\in X^{n-1}$ of the other players. By ‘material payoff’, we mean the tangible consequences of playing the game, defined in terms of the individual’s monetary gains (or losses), or, more generally, his or her indirect consumption utility from these gains (or losses). We assume $\pi$ to be aggregative in the sense that $\pi\left(x,\mathbf{y}\right)$ is invariant under permutations of the components of $\mathbf{y}$. The strategy set X is taken to be a nonempty, compact and convex set in some normed vector space.
We say that an individual is purely self-interested, or a Homo oeconomicus, if he only cares about his own material payoff, so that his utility is

$$u\left(x_{i},\mathbf{x}_{-i}\right)=\pi\left(x_{i},\mathbf{x}_{-i}\right)\qquad\forall\left(x_{i},\mathbf{x}_{-i}\right)\in X^{n}.$$
An individual is an altruist if he cares about his own material payoff and also attaches a weight, his or her degree of altruism $\alpha\in\left[0,1\right]$, to the material payoffs of others, so that his utility is:

$$v\left(x_{i},\mathbf{x}_{-i}\right)=\pi\left(x_{i},\mathbf{x}_{-i}\right)+\alpha\cdot\sum_{j\ne i}\pi\left(x_{j},\mathbf{x}_{-j}\right)\qquad\forall\left(x_{i},\mathbf{x}_{-i}\right)\in X^{n}.\tag{1}$$
Finally, an individual is a Homo moralis if he cares about his own material payoff and also attaches a weight to what his material payoff would be should others use the same strategy as him. Formally, the utility to a Homo moralis with degree of morality $\kappa\in\left[0,1\right]$ is

$$w\left(x_{i},\mathbf{x}_{-i}\right)=\mathbb{E}\left[\pi\left(x_{i},\tilde{\mathbf{x}}_{-i}\right)\right],\tag{2}$$

where $\tilde{\mathbf{x}}_{-i}$ is a random $\left(n-1\right)$-vector such that, with probability $\kappa^{m}\left(1-\kappa\right)^{n-m-1}$, exactly $m\in\left\{0,\dots,n-1\right\}$ of the $n-1$ components of $\mathbf{x}_{-i}$ are replaced by $x_{i}$, while the remaining components of $\mathbf{x}_{-i}$ keep their original values (for each m, there are $\binom{n-1}{m}$ ways to replace m of the $n-1$ components of $\mathbf{x}_{-i}$). For instance, writing $x_{j}$ and $x_{k}$ for the strategies of i’s two opponents when $n=3$:

$$\begin{aligned}w\left(x_{i},x_{j},x_{k}\right)&=\left(1-\kappa\right)^{2}\pi\left(x_{i},x_{j},x_{k}\right)+\kappa\left(1-\kappa\right)\pi\left(x_{i},x_{i},x_{k}\right)\\&\quad+\kappa\left(1-\kappa\right)\pi\left(x_{i},x_{j},x_{i}\right)+\kappa^{2}\pi\left(x_{i},x_{i},x_{i}\right).\end{aligned}$$
We observe that a Homo oeconomicus can be viewed as an altruist with degree of altruism $\alpha =0$, and as a Homo moralis with degree of morality $\kappa =0$.
Our purpose is to compare equilibria of interactions in which all individuals are altruists with interactions in which all individuals are moralists. We are interested both in the equilibrium behaviors and in the material welfare properties of these equilibria. We will use $G^{\alpha}$ to refer to the n-player game between altruists with common degree of altruism $\alpha$, with payoff functions defined in (1), and $\Gamma^{\kappa}$ to refer to the n-player game between Homo moralis with common degree of morality $\kappa$, with payoff functions defined in (2).
2.1. Necessary First-Order Conditions
Consider a simple public goods game, with

$$\pi\left(x_{i},\mathbf{x}_{-i}\right)=\left(x_{i}+\sum_{j\ne i}x_{j}\right)^{1/2}-x_{i}^{2},$$

where $x_{i}\ge 0$ is i’s contribution to the public good. Assume further that $X=\mathbb{R}_{+}$. It turns out that in this interaction equilibria in $\Gamma^{\kappa}$ coincide with those in $G^{\alpha}$ when $\alpha=\kappa$.
More generally, for interactions in which the strategy set X is an interval and $\pi$ is continuously differentiable, any interior symmetric Nash equilibrium strategy $x^{*}$ in game $G^{\alpha}$, for any $0\le\alpha<1$, satisfies the first-order condition

$$\left.\frac{\partial\pi\left(x_{i},\mathbf{x}_{-i}\right)}{\partial x_{i}}\right|_{x_{1}=\dots=x_{n}=x^{*}}+\left(n-1\right)\alpha\cdot\left.\frac{\partial\pi\left(x_{i},\mathbf{x}_{-i}\right)}{\partial x_{n}}\right|_{x_{1}=\dots=x_{n}=x^{*}}=0.\tag{5}$$

(By permutation invariance of $\pi$, all partial derivatives with respect to other players’ strategies are identical.) Moreover, (5) is also necessary for an interior strategy $x^{*}$ to be a symmetric Nash equilibrium strategy in the same interaction between moralists, $\Gamma^{\kappa}$ for $\kappa=\alpha$ (Alger and Weibull, 2016 [15]). Higher-order conditions may differ, however, so that the sets of symmetric equilibria need not coincide.5 Nevertheless, in the above public goods example, they do.
Figure 1 shows the unique symmetric Nash equilibrium contribution in the public goods game between moralists, $\Gamma^{\kappa}$, as a function of community size n, for different degrees of morality, with higher curves for higher degrees of morality. This is also the unique symmetric Nash equilibrium contribution in the public goods game between altruists, $G^{\alpha}$, when the degree of altruism is the same as the degree of morality, $\alpha=\kappa$. Hence, the behavioral effects of morality and altruism are here indistinguishable.
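The first-order condition (5) can be checked numerically for this public goods game. The sketch below is our own (the closed form is derived from (5) and the payoff function above; the helper names are not from the paper):

```python
# Sketch (our own): symmetric Nash contribution for the public goods game
# pi(x_i, x_-i) = (x_i + sum_{j != i} x_j)^(1/2) - x_i^2, via the
# first-order condition (5). Solving (5) in closed form gives
#   x* = [(1 + (n - 1) * alpha) / (4 * sqrt(n))]^(2/3).

def equilibrium_contribution(n: int, alpha: float) -> float:
    """Symmetric equilibrium contribution solving (5)."""
    return ((1.0 + (n - 1) * alpha) / (4.0 * n ** 0.5)) ** (2.0 / 3.0)

def foc_residual(x: float, n: int, alpha: float) -> float:
    """Left-hand side of (5) at the symmetric profile x_1 = ... = x_n = x."""
    marginal = 0.5 * (n * x) ** -0.5   # marginal public benefit of any contribution
    return marginal - 2.0 * x + (n - 1) * alpha * marginal

for n in (2, 5, 10):
    for alpha in (0.0, 0.5, 1.0):
        assert abs(foc_residual(equilibrium_contribution(n, alpha), n, alpha)) < 1e-12
```

Since the same first-order condition characterizes interior symmetric equilibria in $\Gamma^{\kappa}$ with $\kappa=\alpha$, the same contribution obtains for moralists.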
2.2. Two-by-Two Games
We now briefly consider symmetric two-by-two games, with $\pi_{ij}$ denoting the material payoff accruing to a player using pure strategy $i=1,2$ against pure strategy $j=1,2$. For mixed strategies, let $x,y\in\left[0,1\right]$ denote the players’ probabilities of using pure strategy 1. The expected material payoff from using mixed strategy x against mixed strategy y is bilinear:

$$\pi\left(x,y\right)=\pi_{11}xy+\pi_{12}x\left(1-y\right)+\pi_{21}\left(1-x\right)y+\pi_{22}\left(1-x\right)\left(1-y\right).$$

In such an interaction, an altruist’s utility function is still bilinear:

$$\begin{aligned}v\left(x,y\right)&=\pi_{11}xy+\pi_{12}x\left(1-y\right)+\pi_{21}\left(1-x\right)y+\pi_{22}\left(1-x\right)\left(1-y\right)\\&\quad+\alpha\cdot\left[\pi_{11}xy+\pi_{12}y\left(1-x\right)+\pi_{21}\left(1-y\right)x+\pi_{22}\left(1-x\right)\left(1-y\right)\right],\end{aligned}\tag{6}$$

while a Homo moralis has a utility function with quadratic terms:

$$\begin{aligned}w\left(x,y\right)&=\left(1-\kappa\right)\cdot\left[\pi_{11}xy+\pi_{12}x\left(1-y\right)+\pi_{21}\left(1-x\right)y+\pi_{22}\left(1-x\right)\left(1-y\right)\right]\\&\quad+\kappa\cdot\left[\pi_{11}x^{2}+\left(\pi_{12}+\pi_{21}\right)x\left(1-x\right)+\pi_{22}\left(1-x\right)^{2}\right].\end{aligned}\tag{7}$$

Depending on whether the sum of the diagonal elements of the payoff matrix, $\pi_{11}+\pi_{22}$, exceeds, equals, or falls short of the sum of the off-diagonal elements, $\pi_{12}+\pi_{21}$, the utility of Homo moralis is strictly convex, linear, or strictly concave in his own mixed strategy, x. Hence, the set of symmetric equilibria of $\Gamma^{\kappa}$ typically differs from that of $G^{\alpha}$ even when $\alpha=\kappa$.6
As an illustration, consider a prisoner’s dilemma with the first pure strategy representing “cooperate”, that is, payoffs $\pi_{21}>\pi_{11}>\pi_{22}>\pi_{12}$. Using the standard notation $\pi_{11}=R$, $\pi_{12}=S$, $\pi_{21}=T$ and $\pi_{22}=P$, it is easy to verify that “cooperation”, that is, the strategy pair $\left(1,1\right)$, is a Nash equilibrium in $\Gamma^{\kappa}$ if and only if $\kappa\ge\kappa^{*}$, where

$$\kappa^{*}=\frac{T-R}{T-P},\tag{8}$$

and that it is a Nash equilibrium in $G^{\alpha}$ if and only if $\alpha\ge\alpha^{*}$, where

$$\alpha^{*}=\frac{T-R}{R-S}.\tag{9}$$

We note that

$$\left\{\begin{array}{ll}\alpha^{*}<\kappa^{*}&\mathrm{if}\ R-S>T-P,\\\alpha^{*}=\kappa^{*}&\mathrm{if}\ R-S=T-P,\\\alpha^{*}>\kappa^{*}&\mathrm{if}\ R-S<T-P.\end{array}\right.$$

In other words, it takes less altruism than morality to turn cooperation into an equilibrium when the payoff loss $R-S$ that defection inflicts on an opponent (which an altruist cares about) exceeds $T-P$, the difference between the payoff gain from defecting unilaterally and that from defecting jointly (which a moralist cares about). The reverse is true when $R-S<T-P$.
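These thresholds are easy to explore numerically. The sketch below (our own; the helper names are not from the paper) encodes (8) and (9) and illustrates the case $R-S<T-P$:

```python
# Cooperation thresholds for the one-shot prisoner's dilemma, T > R > P > S.

def kappa_star(T: float, R: float, P: float, S: float) -> float:
    """(8): minimal morality for (C, C) to be Nash: R >= (1 - k)T + kP."""
    return (T - R) / (T - P)

def alpha_star(T: float, R: float, P: float, S: float) -> float:
    """(9): minimal altruism for (C, C) to be Nash: (1 + a)R >= T + aS."""
    return (T - R) / (R - S)

# Here R - S = 3 < T - P = 4, so more altruism than morality is needed.
T, R, P, S = 5.0, 3.0, 1.0, 0.0
assert alpha_star(T, R, P, S) > kappa_star(T, R, P, S)
print(alpha_star(T, R, P, S), kappa_star(T, R, P, S))
```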
3. Repetition
We analyze infinite repetition of two distinct classes of interaction: prisoners’ dilemmas and sharing games.
3.1. Repeated Prisoners’ Dilemmas
Consider an infinitely repeated prisoner’s dilemma with payoffs as above and with a common discount factor $\delta\in\left(0,1\right)$. We provide necessary and sufficient conditions for grim trigger (that is, cooperate until someone defects, and thereafter defect forever), if used by both players, to constitute a subgame-perfect equilibrium that sustains perpetual cooperation.7 We do this first for a pair of equally altruistic players, then for a pair of equally moral players, and finally compare the ability of altruists to sustain cooperation with that of moralists.
If played by two equally altruistic individuals with degree of altruism $\alpha$, the stage-game utilities to the row player are the following (see (6)):

$$\begin{array}{c|cc} & C & D\\\hline C & \left(1+\alpha\right)R & S+\alpha T\\D & T+\alpha S & \left(1+\alpha\right)P\end{array}$$

Grim trigger, if used by both players, constitutes a subgame-perfect equilibrium that sustains perpetual cooperation if

$$\left(1+\alpha\right)R\ \ge\ \left(1-\delta\right)\cdot\left(T+\alpha S\right)+\delta\left(1+\alpha\right)P\tag{10}$$

and

$$\alpha\le\frac{P-S}{T-P}.\tag{11}$$

The first inequality makes one-shot deviations from cooperation unprofitable. The left-hand side is the per-period payoff obtained if both players always cooperate. If one player defects, he gets the “temptation utility” $T+\alpha S$ once, and then the punishment payoff $\left(1+\alpha\right)P$ forever thereafter. Inequality (10) compares the present value of continued cooperation with the present value from a one-shot deviation. The second inequality, (11), makes a one-shot deviation from non-cooperation (play of $\left(D,D\right)$) unprofitable; this inequality is necessary for the threat to play D following a defection to be credible. For further use below, we note that (10) can be written more succinctly as a condition on $\delta$, the players’ patience, namely $\delta\ge\delta_{A}$, where

$$\delta_{A}=\frac{T-R-\alpha\left(R-S\right)}{T-P-\alpha\left(P-S\right)}.$$

Furthermore, denote by $\alpha^{**}$ the threshold value for $\alpha$ defined by (11).
In sum, a pair of equally altruistic players can sustain perpetual cooperation either if altruism is strong enough, $\alpha\ge\alpha^{*}$ (see (9)), in which case $\left(C,C\right)$ is a Nash equilibrium of the stage game and hence needs no threat of punishment to be sustained, or if players are selfish enough to credibly punish defection, $\alpha\le\min\left\{\alpha^{*},\alpha^{**}\right\}$, and patient enough to prefer the long-term benefits of cooperation to the immediate reward from defection, $\delta\ge\delta_{A}$. In the intermediate case, that is, when $\alpha^{**}<\alpha<\alpha^{*}$, cooperation is not sustainable for any discount factor $\delta\in\left[0,1\right]$.
For example, suppose that $T=10$ and $S=0$. If $R=8$ and $P=4$, then $\alpha^{*}=1/4$ and $\alpha^{**}=2/3>\alpha^{*}$. In this case, cooperation is sustainable for any discount factor if $\alpha\ge 1/4$, and for any sufficiently high discount factor ($\delta\ge\left(1-4\alpha\right)/\left(3-2\alpha\right)$) if $\alpha<1/4$. By contrast, if $R=6$ and $P=2$, then $\alpha^{*}=2/3$ and $\alpha^{**}=1/4$. In this case, cooperation is sustainable for any discount factor if altruism is strong ($\alpha\ge 2/3$) and for any sufficiently high discount factor ($\delta\ge\left(2-3\alpha\right)/\left(4-\alpha\right)$) if altruism is weak ($\alpha\le 1/4$), but cooperation is not sustainable at all for intermediate degrees of altruism ($1/4<\alpha<2/3$).
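The example can be verified directly. This sketch (our own helper names) encodes the thresholds (9) and (11) and the patience bound $\delta_{A}$:

```python
# Worked example T = 10, S = 0 for grim trigger between equal altruists.

def alpha_star(T, R, P, S):       # (9): (C, C) is a stage-game equilibrium
    return (T - R) / (R - S)

def alpha_star_star(T, R, P, S):  # (11): punishment by (D, D) is credible
    return (P - S) / (T - P)

def delta_A(a, T, R, P, S):       # patience bound implied by (10)
    return (T - R - a * (R - S)) / (T - P - a * (P - S))

T, S = 10.0, 0.0
assert alpha_star(T, 8.0, 4.0, S) == 0.25 and alpha_star_star(T, 8.0, 4.0, S) == 2 / 3
assert alpha_star(T, 6.0, 2.0, S) == 2 / 3 and alpha_star_star(T, 6.0, 2.0, S) == 0.25
# Second case: for 1/4 < alpha < 2/3 punishment is not credible and (C, C)
# is not a stage-game equilibrium, so cooperation fails for every delta.
assert abs(delta_A(0.2, T, 6.0, 2.0, S) - (2 - 3 * 0.2) / (4 - 0.2)) < 1e-12
```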
Turning now to moralists, the stage-game utilities to a row player with degree of morality $\kappa$ are given in (7), so we now have

$$\begin{array}{c|cc} & C & D\\\hline C & R & \left(1-\kappa\right)S+\kappa R\\D & \left(1-\kappa\right)T+\kappa P & P\end{array}$$

Comparison with the utility matrix for altruists reveals that while an altruist who defects internalizes the pain inflicted on the opponent, and is thus sensitive to the value S, a moralist who defects internalizes the consequence of his action should both players defect simultaneously, and is thus sensitive to the value P. Following the same logic as above, grim trigger sustains perpetual cooperation between two equally moral individuals as a subgame-perfect equilibrium outcome if $\delta\ge\delta_{K}$, where

$$\delta_{K}=\frac{T-R-\kappa\left(T-P\right)}{T-P-\kappa\left(T-P\right)},$$

and $\kappa\le\kappa^{**}$, where

$$\kappa^{**}=\frac{P-S}{R-S}.$$
In sum, a pair of equally moral players can sustain perpetual cooperation either if $\kappa \ge {\kappa}^{*}$ (see (8)), in which case $\left(C,C\right)$ is an equilibrium of the stage game and the threat to punish by playing D is not necessary to sustain cooperation in the repeated interaction, or if $\kappa \le min\left\{{\kappa}^{*},{\kappa}^{**}\right\}$ and $\delta \ge {\delta}_{K}$.
We now turn to comparing a pair of selfish players to a pair of altruists or a pair of moralists. For selfish players, grim trigger constitutes a subgame perfect equilibrium that sustains perpetual cooperation if $\delta \ge {\delta}_{0}$, where
$$\delta_{0}=\frac{T-R}{T-P}.$$
Since ${\delta}_{0}\in \left(0,1\right)$ for any values of T, R, and P, and since ${\delta}_{0}>max\left\{{\delta}_{A},{\delta}_{K}\right\}$ for any $\alpha >0$ and $\kappa >0$, we conclude the following. First, conditional on the threat to punish defectors being credible (i.e., $\alpha \le {\alpha}^{**}$ and $\kappa \le {\kappa}^{**}$, respectively), altruists and moralists are better at sustaining cooperation than selfish individuals. Second, selfish individuals are better at sustaining cooperation than altruists (resp. moralists) if the latter cannot credibly threaten to punish defectors (i.e., $\alpha >{\alpha}^{**}$ resp. $\kappa >{\kappa}^{**}$).
Finally, comparing a pair of equally altruistic players with degree of altruism $\alpha\in\left[0,1\right]$ to a pair of equally moral players with degree of morality $\kappa=\alpha$: does one pair face a more stringent challenge to sustain cooperation than the other? To answer this question, we distinguish three cases, depending on whether $T-R$ exceeds, falls short of, or equals $P-S$.
Suppose first that $T-R=P-S$ (equivalently, $R-S=T-P$). Observe first that this implies $\alpha^{**}=\kappa^{**}=\alpha^{*}=\kappa^{*}$ (where $\alpha^{*}$ was defined in (9) and $\kappa^{*}$ in (8)). In other words, $\left(C,C\right)$ is an equilibrium of the stage game between altruists whenever it is an equilibrium of the stage game between moralists. Moreover, whenever $\left(C,C\right)$ is not an equilibrium of the stage game, altruists and moralists are equally capable of credibly threatening to play D following a defection, so that both altruists and moralists can sustain cooperation if sufficiently patient. However, it is easy to verify that $T-R=P-S$ implies $\delta_{K}>\delta_{A}$: thus, if $\delta\in[\delta_{A},\delta_{K})$, grim trigger constitutes a subgame-perfect equilibrium that sustains perpetual cooperation for the altruists but not for the moralists.
Second, suppose that $T-R>P-S$. Observe first that this implies $\alpha^{*}>\kappa^{*}$: if $\kappa^{*}\le\alpha<\alpha^{*}$, then $\left(C,C\right)$ is an equilibrium of the stage game between moralists but not of the stage game between altruists. Moreover, $T-R>P-S$ implies $\alpha^{**}<\kappa^{**}$ and $\kappa^{**}<\kappa^{*}$, so the interval of degrees for which cooperation cannot be sustained at all, $\left(\kappa^{**},\kappa^{*}\right)$ for moralists, is strictly contained in the corresponding interval $\left(\alpha^{**},\alpha^{*}\right)$ for altruists. The conclusion is as follows. When $T-R>P-S$, there exist values of $\alpha$ for which altruists cannot sustain cooperation for any discount factor $\delta$, whereas a pair of moralists with the same degree of morality $\kappa=\alpha$ can sustain perpetual cooperation: for any $\delta\in\left[0,1\right]$ if $\kappa\ge\kappa^{*}$, and for all $\delta\ge\delta_{K}$ if $\kappa\le\kappa^{**}$.
Finally, suppose that $T-R<P-S$. Then $\alpha^{*}<\alpha^{**}$ and $\kappa^{*}<\kappa^{**}$, so that for every degree of altruism and morality sufficiently patient players can sustain cooperation; but now the comparison favors the altruists. For degrees $\kappa\in[\alpha^{*},\kappa^{*})$, moralists can sustain cooperation only if sufficiently patient ($\delta\ge\delta_{K}$), whereas a pair of altruists with the same degree $\alpha=\kappa$ can sustain perpetual cooperation for any $\delta\in\left[0,1\right]$; and for lower common degrees, altruists require less patience, since then $\delta_{A}<\delta_{K}$.
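The three cases can be explored numerically. The sketch below (our own helper names) checks the knife-edge case $T-R=P-S$, where all four thresholds coincide and altruists need strictly less patience than moralists with the same degree:

```python
# Exploratory sketch of the grim-trigger thresholds used in the comparison.

def thresholds(T, R, P, S):
    alpha_s  = (T - R) / (R - S)   # (9): (C, C) stage equilibrium, altruists
    alpha_ss = (P - S) / (T - P)   # (11): credible punishment, altruists
    kappa_s  = (T - R) / (T - P)   # (8): (C, C) stage equilibrium, moralists
    kappa_ss = (P - S) / (R - S)   # credible punishment, moralists
    return alpha_s, alpha_ss, kappa_s, kappa_ss

def delta_A(a, T, R, P, S):
    return (T - R - a * (R - S)) / (T - P - a * (P - S))

def delta_K(k, T, R, P, S):
    return (T - R - k * (T - P)) / ((1 - k) * (T - P))

# Case T - R = P - S: all four thresholds coincide, and altruists need
# strictly less patience than equally moral moralists.
T, R, P, S = 4.0, 3.0, 2.0, 1.0    # T - R = P - S = 1
assert thresholds(T, R, P, S) == (0.5, 0.5, 0.5, 0.5)
for t in (0.1, 0.2, 0.3):
    assert delta_A(t, T, R, P, S) < delta_K(t, T, R, P, S)
```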
3.2. Repeated Sharing
The observation that it may be harder for altruists than for egoists to sustain cooperation in an infinitely repeated game was made by Bernheim and Stark (1988 [11], Section II.B). We first recapitulate their model, then carry out the same analysis for Homo moralis, and finally compare the two. The stage game is the same as theirs and represents the sharing of consumption goods.
3.2.1. Altruism
The stage game is a two-player simultaneous-move game in which each player’s strategy set is $X=\left[0,1-\mu\right]$ for some small $\mu>0$, where a player’s strategy is the amount of his consumption good that he keeps. If player 1 chooses $x\in X$ and player 2 chooses $y\in X$, payoffs are

$$v_{1}\left(x,y\right)=\left[x\left(1-y\right)\right]^{\gamma}+\alpha_{1}\cdot\left[\left(1-x\right)y\right]^{\gamma}$$

for player 1, and

$$v_{2}\left(x,y\right)=\left[y\left(1-x\right)\right]^{\gamma}+\alpha_{2}\cdot\left[\left(1-y\right)x\right]^{\gamma}$$

for player 2, where $0<\gamma<1/2$.8 A necessary first-order condition for an interior Nash equilibrium is thus

$$\left(\frac{1-y}{y}\right)^{\gamma}=\alpha_{1}\cdot\left(\frac{1-x}{x}\right)^{\gamma-1},$$

and likewise for player 2. Bernheim and Stark consider the symmetric case $\alpha_{1}=\alpha_{2}=\alpha$, in which case the first-order condition is their Equation (16).9 They use this to identify the unique symmetric Nash equilibrium of the stage game, $x=y=x_{A}$:

$$x_{A}=\min\left\{\frac{1}{1+\alpha},1-\mu\right\}.$$

They compare this with the unique symmetric Pareto optimum, $x_{C}=1/2$, the solution of

$$\max_{x\in X}\ \left[x\left(1-x\right)\right]^{\gamma}+\alpha\cdot\left[\left(1-x\right)x\right]^{\gamma}.$$

The utility evaluated at the stage-game equilibrium is $v^{NE}=\left(1+\alpha\right)\cdot\left[x_{A}\left(1-x_{A}\right)\right]^{\gamma}$ and the utility evaluated at the Pareto-optimal strategy pair is $v^{C}=\left(1+\alpha\right)\cdot 4^{-\gamma}$.
Bernheim and Stark consider an infinitely repeated play of this stage game, with discount factor $\delta\in\left(0,1\right)$. They note that perpetual play of “cooperation”, $\left(x_{C},x_{C}\right)$, is sustained in subgame-perfect equilibrium by the threat of (perpetual) reversion to $\left(x_{A},x_{A}\right)$ if and only if $\delta\ge\delta_{A}$, where

$$\delta_{A}=\frac{v^{D}-v^{C}}{v^{D}-v^{NE}},$$

and $v^{D}$ is the maximal utility from a one-shot deviation from cooperation, that is,

$$v^{D}=\max_{x\in X}\ \frac{1}{2^{\gamma}}\left[x^{\gamma}+\alpha\cdot\left(1-x\right)^{\gamma}\right].$$

Solving this maximization problem, we find that a player who optimally deviates from cooperation plays

$$x_{D}=\min\left\{\frac{\alpha^{1/\left(\gamma-1\right)}}{1+\alpha^{1/\left(\gamma-1\right)}},1-\mu\right\}.$$

Noting that, for $\alpha=1$, $x_{D}=1/2$ and $v^{D}=2\times 4^{-\gamma}=v^{C}$, we observe that pure altruists do not benefit from deviation. Hence, pure altruists can sustain cooperation irrespective of $\delta$.10
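This can be confirmed numerically. The grid search below (our own sketch, standing in for the closed-form maximization) checks that at $\alpha=1$ the best deviation against $y=1/2$ yields exactly $v^{C}$:

```python
# Check that a pure altruist (alpha = 1) gains nothing from a one-shot
# deviation against a cooperating partner (y = 1/2) in the sharing game.

g = 0.25  # gamma

def v_dev(a: float) -> float:
    """Deviation payoff v^D via grid search over x in (0, 0.99]."""
    grid = (i / 10000.0 for i in range(1, 9901))
    return max(2.0 ** -g * (x ** g + a * (1.0 - x) ** g) for x in grid)

v_C = (1.0 + 1.0) * 4.0 ** -g   # v^C at alpha = 1
assert abs(v_dev(1.0) - v_C) < 1e-6
```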
Bernheim and Stark proceed by considering a numerical example, $\mu=0.01$ and $\gamma=1/4$, and find that the lowest discount factor $\delta$ then needed to sustain cooperation is strictly increasing in $\alpha$. In other words, altruism makes cooperation harder. We proceed in parallel with them by setting $\mu=0.01$, $\gamma=1/4$ and $\alpha>0.05$. Then, $x_{A}=1/\left(1+\alpha\right)$,

$$v^{NE}=\alpha^{\gamma}\left(1+\alpha\right)^{1-2\gamma},$$

and

$$x_{D}=\frac{1}{1+\alpha^{1/\left(1-\gamma\right)}}$$

for all $\alpha$ above approximately 0.05. Figure 2 shows that indeed $x_{D}\le 1-\mu=0.99$ for such values of $\alpha$.
For such $\alpha$,

$$v^{D}=2^{-\gamma}\alpha\cdot\left[1+\alpha^{1/\left(\gamma-1\right)}\right]^{1-\gamma}.$$

Hence,

$$\delta_{A}=\frac{\left[1+\alpha^{1/\left(1-\gamma\right)}\right]^{1-\gamma}-\left(1+\alpha\right)\cdot 2^{-\gamma}}{\left[1+\alpha^{1/\left(1-\gamma\right)}\right]^{1-\gamma}-\left(1+\alpha\right)^{1-2\gamma}\left(2\alpha\right)^{\gamma}}.$$

Figure 3 shows $\delta_{A}$ as a function of $\alpha$ when $\gamma=1/4$, for $0.05<\alpha<1$. In particular, as $\alpha\to 1$, both the numerator and the denominator in this expression tend to zero, as do their first derivatives. Applying l’Hôpital’s rule twice yields $\delta_{A}\to 1/\left(3-2\gamma\right)=2/5$ as $\alpha\to 1$.
These numerical results agree with those reported in Table 1 in Bernheim and Stark (1988) [11], keeping in mind that our altruism parameter $\alpha$ is a transformation of theirs (see footnote 9 above). In this numerical example, a pair of Homo oeconomicus players ($\alpha=0$) can sustain cooperation only if $\delta\gtrsim 0.25$. Altruism thus has an economically significant negative impact here on the ability to sustain cooperation, since even a small degree of altruism, such as $\alpha=1/9$, raises the discount factor needed for cooperation by roughly a third.
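The closed-form expression for $\delta_{A}$ is easy to evaluate. The sketch below (our own, valid for $\alpha$ above roughly 0.05, where $x_{D}$ is interior) reproduces the qualitative finding that the required patience rises with altruism:

```python
# Minimal discount factor delta_A for the sharing game, mu = 0.01, gamma = 1/4,
# using the closed form valid when x_D is interior (alpha above ~0.05).

g = 0.25  # gamma

def delta_A(a: float) -> float:
    f = (1.0 + a ** (1.0 / (1.0 - g))) ** (1.0 - g)       # 2^gamma * v^D
    vC = (1.0 + a) * 2.0 ** (-g)                          # 2^gamma * v^C
    vNE = (1.0 + a) ** (1.0 - 2.0 * g) * (2.0 * a) ** g   # 2^gamma * v^NE
    return (f - vC) / (f - vNE)

# Required patience rises with the degree of altruism:
assert delta_A(1 / 9) < delta_A(0.5) < delta_A(0.9)
print(delta_A(1 / 9), delta_A(0.5), delta_A(0.9))
```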
3.2.2. Morality
The stage game is again a two-player simultaneous-move game in which each player’s strategy set is $X=\left[0,1-\mu\right]$ for some small $\mu>0$. If player 1 chooses $x\in X$ and player 2 chooses $y\in X$, payoffs are

$$w_{1}\left(x,y\right)=\left(1-\kappa_{1}\right)\cdot\left[x\left(1-y\right)\right]^{\gamma}+\kappa_{1}\cdot\left[x\left(1-x\right)\right]^{\gamma}$$

for player 1, and

$$w_{2}\left(x,y\right)=\left(1-\kappa_{2}\right)\cdot\left[y\left(1-x\right)\right]^{\gamma}+\kappa_{2}\cdot\left[\left(1-y\right)y\right]^{\gamma}$$

for player 2, where $0<\gamma<1/2$. A necessary first-order condition for an interior Nash equilibrium is thus

$$\left(1-\kappa_{1}\right)\cdot\left(1-y\right)^{\gamma}+\kappa_{1}\left(1-x\right)^{\gamma}=\kappa_{1}x^{\gamma}\cdot\left(\frac{1-x}{x}\right)^{\gamma-1}$$

for player 1, and likewise for player 2. Suppose that $\kappa_{1}=\kappa_{2}=\kappa$. Then, the unique symmetric equilibrium strategy is

$$x_{K}=\min\left\{\frac{1}{1+\kappa},1-\mu\right\}.$$
Comparing a pair of altruists with a common degree of altruism $\alpha $ to a pair of moralists with common degree of morality $\kappa =\alpha $, we note that ${x}_{A}={x}_{K}$.
Henceforth, assume that the first argument of the min operator is the smaller one, that is, $\kappa \ge \mu /(1-\mu)$. Then, the utility evaluated at the Nash equilibrium strategy is
$$w^{NE} = \left[x_{K}(1-x_{K})\right]^{\gamma} = \left[\frac{\kappa}{(1+\kappa)^{2}}\right]^{\gamma}.$$
The unique symmetric Pareto-optimal strategy is still $x_{C}=1/2$, and the utility evaluated at this strategy is $w^{C}=4^{-\gamma}$.
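As a quick numerical sanity check (our sketch, not from the paper), one can verify that $x_K = 1/(1+\kappa)$ is indeed a symmetric equilibrium: a grid search for the best reply to an opponent playing $x_K$ returns $x_K$ itself, and the equilibrium utility falls short of the cooperative utility.

```python
# Our sketch: verify numerically that x_K = 1/(1 + kappa) is a symmetric
# Nash equilibrium of the moralist stage game with utility
# w_1(x, y) = (1 - k)[x(1 - y)]^g + k[x(1 - x)]^g on X = [0, 1 - mu].
GAMMA, MU = 0.25, 0.01

def best_reply(y, kappa, n_grid=100_000):
    def w(x):
        return ((1 - kappa) * (x * (1 - y)) ** GAMMA
                + kappa * (x * (1 - x)) ** GAMMA)
    return max((i * (1 - MU) / n_grid for i in range(1, n_grid + 1)), key=w)

kappa = 0.25
x_K = 1 / (1 + kappa)                          # = 0.8
w_NE = (kappa / (1 + kappa) ** 2) ** GAMMA     # equilibrium utility
w_C = 4 ** -GAMMA                              # utility at (1/2, 1/2)
```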
Consider an infinitely repeated play of this stage game, with discount factor $\delta \in (0,1)$. Perpetual "cooperation", play of $(x_{C},x_{C})$, is sustained in subgame-perfect equilibrium by the threat of (perpetual) reversion to $(x_{K},x_{K})$ if and only if $\delta \ge \delta_{K}$, where
$$\delta_{K} = \frac{w^{D}-w^{C}}{w^{D}-w^{NE}},$$
and $w^{D}$ is the maximal utility from a one-shot deviation from cooperation, that is,
$$w^{D} = \max_{x\in X}\;(1-\kappa)\cdot(x/2)^{\gamma} + \kappa\cdot\left[(1-x)x\right]^{\gamma}.$$
Solving this maximization problem, we find that a player who would optimally deviate from cooperation would play $x_{DK}=\min\{x^{*},1-\mu\}$, where $x^{*}$ is the unique solution to the fixed-point equation
$$x = \frac{1-\kappa + \left[2(1-x)\right]^{\gamma}\cdot\kappa}{1-\kappa + 2\left[2(1-x)\right]^{\gamma}\cdot\kappa}.$$
Figure 4 plots the solution as a function of $\kappa $, for $\gamma =1/4$ (and for $\kappa \ge 0.05$).
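The fixed point can be computed by simple iteration; the sketch below (ours, not the authors' code) also cross-checks it against a direct grid maximization of the deviation utility $w^D$ and against the approximation $2^{-\kappa}$ used below.

```python
# Our sketch: solve the fixed-point equation for the optimal one-shot
# deviation x_DK by iteration, and cross-check against a direct grid
# maximization of w_D(x) = (1 - k)(x/2)^g + k[(1 - x)x]^g.
GAMMA, MU = 0.25, 0.01

def x_fixed(kappa, iters=500):
    x = 0.75                                   # arbitrary interior start
    for _ in range(iters):
        B = (2 * (1 - x)) ** GAMMA
        x = (1 - kappa + B * kappa) / (1 - kappa + 2 * B * kappa)
    return min(x, 1 - MU)

def x_argmax(kappa, n_grid=100_000):
    def w_D(x):
        return (1 - kappa) * (x / 2) ** GAMMA + kappa * ((1 - x) * x) ** GAMMA
    return max((i * (1 - MU) / n_grid for i in range(1, n_grid + 1)), key=w_D)
```

For $\kappa = 0.25$ and $\gamma = 1/4$ the two routines agree, and the result is close to $2^{-\kappa}$.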
We proceed by considering the numerical example that we studied under altruism. Thus, let $\mu =0.01$ and $\gamma =1/4$, and assume that $\kappa >0.01$ (which guarantees an interior solution, both for $x_{K}$ and for $x_{DK}$). We use the approximation $x_{DK}=\exp(-\kappa \cdot \ln 2)$, indicated by the dashed curve in Figure 4. This gives the approximation
$$\begin{array}{rcl} w^{D} & = & 2^{-\gamma}\cdot(1-\kappa)\cdot\exp(-\gamma\kappa\ln 2) + \kappa\cdot\left[\left(1-\exp(-\kappa\ln 2)\right)^{\gamma}\exp(-\gamma\kappa\ln 2)\right]\\ & = & \left[(1-\kappa)2^{-\gamma} + \kappa\cdot\left(1-\exp(-\kappa\ln 2)\right)^{\gamma}\right]\cdot\exp(-\gamma\kappa\ln 2). \end{array}$$
The condition (17) for sustainable cooperation can thus be written as
$$\delta \ge \frac{\left[(1-\kappa)2^{-\gamma} + \kappa\cdot\left(1-\exp(-\kappa\ln 2)\right)^{\gamma}\right]\cdot\exp(-\gamma\kappa\ln 2) - 4^{-\gamma}}{\left[(1-\kappa)2^{-\gamma} + \kappa\cdot\left(1-\exp(-\kappa\ln 2)\right)^{\gamma}\right]\cdot\exp(-\gamma\kappa\ln 2) - \kappa^{\gamma}(1+\kappa)^{-2\gamma}}.$$
Figure 5 shows the right-hand side as a function of $\kappa$ (for $\kappa \ge 0.05$) when $\gamma =1/4$. The dashed curve is drawn for altruists with $\alpha =\kappa$. We see that, for $\gamma =1/4$, cooperation is somewhat harder to sustain between moralists than between altruists with $\alpha =\kappa$. In summary, in this numerical example, cooperation is easiest to maintain between purely self-interested individuals, and easier to sustain between altruists than between moralists.
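Because the displayed condition relies on an approximation of $x_{DK}$, it is worth recomputing both thresholds without it. The sketch below (ours) obtains the one-shot deviation payoffs by direct grid maximization and confirms, at a few sampled values of $\kappa = \alpha$ and for $\gamma = 1/4$, that $\delta_K \ge \delta_A$.

```python
# Our sketch: compare the cooperation thresholds delta_K (moralists) and
# delta_A (altruists) at kappa = alpha, gamma = 1/4, mu = 0.01, using grid
# maximization for the one-shot deviation payoffs (no approximation).
GAMMA, MU = 0.25, 0.01
XS = [i * (1 - MU) / 100_000 for i in range(1, 100_001)]

def delta_A(alpha):
    v_C = (1 + alpha) * 4 ** -GAMMA
    x_A = min(1 / (1 + alpha), 1 - MU)
    v_NE = (1 + alpha) * (x_A * (1 - x_A)) ** GAMMA
    v_D = max((x / 2) ** GAMMA + alpha * ((1 - x) / 2) ** GAMMA for x in XS)
    return (v_D - v_C) / (v_D - v_NE)

def delta_K(kappa):
    w_C = 4 ** -GAMMA
    x_K = min(1 / (1 + kappa), 1 - MU)
    w_NE = (x_K * (1 - x_K)) ** GAMMA
    w_D = max((1 - kappa) * (x / 2) ** GAMMA + kappa * ((1 - x) * x) ** GAMMA
              for x in XS)
    return (w_D - w_C) / (w_D - w_NE)
```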
Does this qualitative result partly depend on the numerical approximation? Does it hold for all $\gamma$? In order to investigate these issues, assume that $\kappa =\alpha$, and note that $\delta_{A}\le \delta_{K}$ if and only if $(1+\alpha)w^{D}\ge v^{D}$, an inequality that can be written as
$$\alpha \cdot\left[1+\alpha^{1/(\gamma-1)}\right]^{1-\gamma}\;\le\;\max_{x\in X}\;(1+\alpha)\cdot\left[(1-\alpha)+\alpha\left(2(1-x)\right)^{\gamma}\right]\cdot x^{\gamma}.$$
This inequality clearly holds strictly at $\alpha =0$, and by continuity also for all $\alpha >0$ that are small enough. For $\alpha =1$, (18) holds with equality, since then it boils down to
$$4^{-\gamma}\;\le\;\max_{x\in X}\;\left[(1-x)x\right]^{\gamma},$$
which clearly holds with equality. See Figure 6, which shows isoquants for the difference between the right-hand and left-hand sides in (18). The thick curve is the zero isoquant (where the inequality holds with equality) and the thin curves are positive isoquants (where the inequality is slack). The diagram suggests that, for every $\alpha \in (0,1)$, there exists an $x\in \mathrm{int}(X)$ such that (18) holds strictly. Hence, the difference between altruism and morality is not due to the approximation of $x_{DK}$.
3.3. Preference Representations
Both in the repeated prisoners' dilemma and in the repeated sharing game, we represented the players' (selfish, altruistic, moral) utility functions over behavior strategies in the repeated game as the normalized present values of their per-period (selfish, altruistic, moral) utilities, as defined over their actions in the stage game. Is this consistent with defining their utility functions directly in the repeated game, the game they actually play?
Consider the infinitely repeated play of any symmetric two-player game in material payoffs with common strategy set X and material payoff function $\pi :X^{2}\to \mathbb{R}$, and with common discount factor $\delta \in (0,1)$. In terms of normalized present values, the material payoff function of a player using behavior strategy $\sigma$ in the repeated game, when the opponent uses behavior strategy $\tau$, is then
$$\Pi(\sigma,\tau) = (1-\delta)\sum_{t=0}^{\infty}\delta^{t}\,\pi(x_{t},y_{t}),$$
where $x_{t}$ is the player's own action in period t and $y_{t}$ the action of the opponent. The function $\Pi$ is thus a selfish player's utility function in the repeated game.
First, consider altruistic players. By definition, the utility function, in the repeated game, of such a player with degree of altruism $\alpha \in \left[0,1\right]$ is
$$\begin{array}{rcl} V_{\alpha}(\sigma,\tau) & = & \Pi(\sigma,\tau) + \alpha\cdot\Pi(\tau,\sigma)\\ & = & (1-\delta)\cdot\left(\displaystyle\sum_{t=0}^{\infty}\delta^{t}\cdot\left[\pi(x_{t},y_{t}) + \alpha\,\pi(y_{t},x_{t})\right]\right). \end{array}$$
Hence, the utility function coincides with the normalized present value representation that we used in our analysis of the prisoners’ dilemma and sharing game.
Secondly, for a Homo moralis player with degree of morality $\kappa \in [0,1]$, the utility function in the repeated game is, by definition,
$$\begin{array}{rcl} W_{\kappa}(\sigma,\tau) & = & (1-\kappa)\cdot\Pi(\sigma,\tau) + \kappa\cdot\Pi(\sigma,\sigma)\\ & = & (1-\delta)\cdot\left(\displaystyle\sum_{t=0}^{\infty}\delta^{t}\cdot\left[(1-\kappa)\cdot\pi(x_{t},y_{t}) + \kappa\cdot\pi(x_{t},x_{t})\right]\right), \end{array}$$
so the repeated-game utility function of a moralist also coincides with the normalized present-value representation that we used in the two games.
In sum, the additive separability over time, inherent in the very definition of payoff functions in repeated games, makes the difference between "stage-game preferences" and "repeated-game preferences" immaterial, both in the case of altruism and in the case of morality.
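This equivalence can be checked numerically on a finite truncation of the series (our sketch; the material payoff function $\pi$ below is an arbitrary illustrative choice, since linearity of the discounted sum is all that matters).

```python
import random

# Our sketch: the Homo moralis utility computed from repeated-game payoff
# functions, (1 - k) * PV(sigma, tau) + k * PV(sigma, sigma), coincides with
# the normalized present value of per-period Homo moralis utilities.
delta, kappa, T = 0.9, 0.3, 2_000     # T truncates the series; delta^T ~ 0
random.seed(1)
xs = [random.random() for _ in range(T)]   # player's own action path
ys = [random.random() for _ in range(T)]   # opponent's action path

def pi(x, y):
    return x * (1 - y)                     # illustrative material payoff

def PV(a, b):
    return (1 - delta) * sum(delta ** t * pi(a[t], b[t]) for t in range(T))

lhs = (1 - kappa) * PV(xs, ys) + kappa * PV(xs, xs)
rhs = (1 - delta) * sum(delta ** t * ((1 - kappa) * pi(xs[t], ys[t])
                                      + kappa * pi(xs[t], xs[t]))
                        for t in range(T))
```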
4. Coordination
Suppose there are n players who simultaneously choose between two actions, A and B. Write $s_{i}\in S=\{0,1\}$ for the choice of individual i, where $s_{i}=1$ means that i chooses A, and $s_{i}=0$ that B is chosen instead. Let the material payoff to an individual from choosing A when $n_{A}$ others choose action A be $n_{A}\cdot a$. Likewise, let the individual's material payoff from choosing B when $n_{B}$ others choose B be $n_{B}\cdot b$, where $0<b<a$. Examples abound. Think of A and B as two distinct "norms", with A being the socially efficient norm. We examine under which conditions the socially inefficient norm B can be sustained in equilibrium. We will also investigate whether both norms can be simultaneously and partly sustained in heterogeneous populations, in the sense that some individuals take action A while others take action B.
Writing $\mathit{s}_{-i}\in S^{n-1}$ for the strategy profile of i's opponents and $u_{i}:S^{n}\to \mathbb{R}$ for the payoff function of a purely self-interested player $i=1,...,n$, we have
$$u_{i}(s_{i},\mathit{s}_{-i}) = as_{i}\cdot\sum_{j\ne i}s_{j} + b(1-s_{i})\cdot\sum_{j\ne i}(1-s_{j}).$$
The utility function of an altruistic player i with degree of altruism ${\alpha}_{i}\in \left[0,1\right]$ is
$$v_{i}(s_{i},\mathit{s}_{-i}) = u_{i}(s_{i},\mathit{s}_{-i}) + \alpha_{i}\cdot\sum_{j\ne i}u_{j}(s_{j},\mathit{s}_{-j}).$$
Evidently, the efficient norm A, that is, all individuals playing A, can always be sustained as a Nash equilibrium, for arbitrarily altruistic players. In addition, the inefficient norm B is a Nash equilibrium: if all others choose B, then so will any player i, no matter how altruistic. We will now see that this last conclusion does not hold for moralists.
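The all-B claim can be spelled out in a short computation (our sketch; the parameter values $a = 3$, $b = 2$, $n = 10$ are illustrative and not from the paper).

```python
# Our sketch: when all others play B, an altruist's utility change from
# unilaterally switching to A is -(1 + alpha) * b * (n - 1) < 0: the deviator
# earns nothing from A and deprives every opponent of one b-match.
def deviation_gain_to_A(alpha, n=10, a=3.0, b=2.0):
    v_stay = b * (n - 1) + alpha * (n - 1) * b * (n - 1)   # all on norm B
    v_dev = 0.0 + alpha * (n - 1) * b * (n - 2)            # i alone plays A
    return v_dev - v_stay
```

The gain is negative for every degree of altruism, so no altruist ever deviates from the inefficient norm.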
Consider Homo moralis players, where player i has degree of morality $\kappa_{i}\in [0,1]$. Such a player's utility function is
$$w_{i}(s_{i},\mathit{s}_{-i}) = \mathbb{E}_{\kappa_{i}}\left[u_{i}(s_{i},\tilde{\mathit{s}}_{-i})\right],$$
where $\tilde{\mathit{s}}_{-i}$ is a random vector in $S^{n-1}$ such that, with probability $\binom{n-1}{m}\kappa_{i}^{m}(1-\kappa_{i})^{n-m-1}$, exactly $m\in \{0,\dots,n-1\}$ of the $n-1$ components of $\mathit{s}_{-i}$ are replaced by $s_{i}$, while the remaining components of $\mathit{s}_{-i}$ keep their original values. Thanks to the linearity of the material payoff function (19), the utility function $w_{i}$ can be written as
$$\begin{array}{rcl} w_{i}(s_{i},\mathit{s}_{-i}) & = & \displaystyle\sum_{m=0}^{n-1}\binom{n-1}{m}\kappa_{i}^{m}(1-\kappa_{i})^{n-m-1}\cdot\left[as_{i}\cdot\left(ms_{i}+\frac{n-1-m}{n-1}\cdot\sum_{j\ne i}s_{j}\right)\right.\\ & & \left.+\;b(1-s_{i})\cdot\left(m\cdot(1-s_{i})+\frac{n-1-m}{n-1}\cdot\sum_{j\ne i}(1-s_{j})\right)\right]. \end{array}$$
The efficient norm A can clearly be sustained as a Nash equilibrium, since when all the others are playing A, individual i gets utility $(n-1)a$ from taking action A and
$$b\cdot\sum_{m=0}^{n-1}\binom{n-1}{m}\kappa_{i}^{m}(1-\kappa_{i})^{n-m-1}m = b(n-1)\kappa_{i}$$
from taking action B. By contrast, the inefficient norm cannot be sustained for all degrees of morality. To see this, first suppose all individuals have the same degree of morality $\kappa \in (0,1)$. If all the others are playing B, any individual gets utility $(n-1)b$ from also playing B and would get utility
$$a\cdot\sum_{m=0}^{n-1}\binom{n-1}{m}\kappa^{m}(1-\kappa)^{n-m-1}m = a(n-1)\kappa$$
from deviating to A. Hence, the inefficient norm can be sustained in Nash equilibrium if and only if $\kappa \le b/a$.
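These expected utilities are easy to verify by direct computation (our sketch; the parameter values $a = 3$, $b = 2$, $n = 10$ are illustrative).

```python
from math import comb

# Our sketch: compute the binomial expectation in the displays above and
# check that the inefficient norm B is a Nash equilibrium iff kappa <= b/a.
a, b, n = 3.0, 2.0, 10          # illustrative parameters with 0 < b < a

def expected_replaced(kappa):
    # E[m] for m ~ Binomial(n - 1, kappa); equals (n - 1) * kappa
    return sum(comb(n - 1, m) * kappa ** m * (1 - kappa) ** (n - 1 - m) * m
               for m in range(n))

def norm_B_is_equilibrium(kappa):
    u_conform = (n - 1) * b                      # stay with norm B
    u_deviate = a * expected_replaced(kappa)     # unilaterally switch to A
    return u_conform >= u_deviate
```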
This result shows that morality can have a qualitatively different effect from altruism on behavior in interactions with strategic complementarities. In the present case of a simple coordination game, morality eliminates the inefficient equilibrium if and only if the common degree of morality $\kappa$ exceeds $b/a$. By contrast, the inefficient equilibrium remains an equilibrium under any degree of altruism. No matter how much the parties care for each other, they always want to use the same strategy, even if this results in a socially inefficient outcome. Moralists are partly deontologically motivated and evaluate their own acts not only in terms of their expected consequences, given others' actions, but also in terms of what ought to be done.
We finally examine heterogeneous populations. First, suppose that the coordination game defined above is played by $n>2+a/b$ individuals, among which all but one are purely self-interested and the remaining individual is a Homo moralis with degree of morality $\kappa >b/a$. Under complete information, such a game has a Nash equilibrium in which all the self-interested play B while the unique Homo moralis plays A. In this equilibrium, the moral player exerts a negative externality on the others, which causes partial miscoordination. Had the moralist instead been an altruist, he would also play B if the others do, and would thus be behaviorally indistinguishable from the purely self-interested individuals. More generally, altruists as well as self-interested individuals do not care about "the right thing to do" should others do likewise. They only care about the consequences of their unilateral choice of action for their own and (if altruistic) others' material payoffs. By contrast, moralists also care about what would happen if, hypothetically, others were to act like them. In coordination games, this may cause a bandwagon effect reminiscent of that in Granovetter's (1978) [51] threshold model of collective action, a topic to which we now turn.
Like Granovetter, we analyze a population in which each individual faces a binary choice and takes a certain action, say A, if and only if sufficiently many do likewise. More precisely, each individual has a population threshold for taking action A. Our model of coordination can be recast in these terms. Indeed, for each individual $i=1,2,...,n$, characterized by his personal degree of morality $\kappa_{i}\in [0,1]$, one can readily determine the minimum number of other individuals who must take action A before he is willing to do so. Consider any player i's choice. If he expects $\tilde{n}\in \{0,\dots ,n-1\}$ others to take action A, then his utility from taking action B is
$$\begin{array}{rcl} w_{i}(0,\mathit{s}_{-i}) & = & \displaystyle b\cdot\sum_{m=0}^{n-1}\binom{n-1}{m}\kappa_{i}^{m}(1-\kappa_{i})^{n-m-1}\left[\frac{n-1-m}{n-1}\cdot(n-\tilde{n}-1)+m\right]\\ & = & b\cdot\left[(n-\tilde{n}-1)+\tilde{n}\kappa_{i}\right], \end{array}$$
while, from taking action A, it is
$$\begin{array}{rcl} w_{i}(1,\mathit{s}_{-i}) & = & \displaystyle a\cdot\sum_{m=0}^{n-1}\binom{n-1}{m}\kappa_{i}^{m}(1-\kappa_{i})^{n-m-1}\left[\frac{n-1-m}{n-1}\cdot\tilde{n}+m\right]\\ & = & a\cdot\left[\tilde{n}+(n-\tilde{n}-1)\kappa_{i}\right]. \end{array}$$
Hence, individual i will take action A if and only if
$$\frac{a}{b} \ge \frac{n-\tilde{n}-1+\tilde{n}\kappa_{i}}{\tilde{n}+(n-\tilde{n}-1)\kappa_{i}},$$
or
$$\frac{\tilde{n}}{n-1} \ge \frac{b-\kappa_{i}a}{(1-\kappa_{i})(a+b)}.$$
In other words, whenever individual i expects the population share $x=\tilde{n}/(n-1)$ of others taking action A to exceed (respectively, fall short of) his or her threshold $\theta_{i}\in \mathbb{R}$, where
$$\theta_{i} = \frac{b-\kappa_{i}a}{(1-\kappa_{i})(a+b)},$$
he or she takes action A (respectively, B). We note that the threshold of an individual is strictly decreasing in the individual's degree of morality. Moreover, individuals with high enough degrees of morality have negative thresholds and will thus take action A even alone. The threshold of an individual with zero degree of morality, that is, Homo oeconomicus, is $b/(a+b)$.
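The threshold formula is easy to explore numerically (our sketch; the values $a = 4$, $b = 1$ match the bottom curve, $v = a/b = 4$, in Figure 7 below).

```python
# Our sketch of the threshold theta_i as a function of kappa, for a/b = 4.
a, b = 4.0, 1.0

def theta(kappa):
    # population-share threshold above which individual i switches to A
    return (b - kappa * a) / ((1 - kappa) * (a + b))
```

Here $\theta(0) = b/(a+b) = 0.2$ for Homo oeconomicus, $\theta(0.25) = 0$ (so an individual with $\kappa = 0.25$ switches even if nobody else has), and the threshold is strictly decreasing in $\kappa$.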
Figure 7 below shows the threshold as a function of $\kappa_{i}$ for different values of $v=a/b$, and with population shares (in percentages) on the vertical axis. Starting from the bottom, the curves are drawn for $v=4$, $v=2$, $v=1.5$, and $v=1.2$. The bottom curve, the one for $v=4$, shows that an individual with degree of morality $\kappa =0.25$ is willing to switch from B to A even if nobody else switches, an individual with degree of morality $\kappa =0.1$ is willing to make this switch if 14% of the others also switch, etc. This curve also reveals that, as long as at least 20% of the individuals are sufficiently moral, and thus willing to switch even if nobody else does or only a few have switched, a bandwagon effect among myopic individuals will eventually lead the whole population to switch, step by step, even if as many as 80% of the individuals are driven by pure self-interest.
Let F be any continuous cumulative distribution function (CDF) on $\mathbb{R}$ such that for every $\theta \in \mathbb{R}$, $F\left(\theta \right)$ is the population share of individuals with thresholds not above $\theta $. Then, $F:\mathbb{R}\to \left[0,1\right]$ is a continuous representation of the cumulative threshold distribution in the population, with $F\left(0\right)\ge 0$ and $F\left(x\right)=1$ for all $x\ge b/\left(a+b\right)$. By Bolzano’s intermediatevalue theorem, $F\left(x\right)=x$ for at least one $x\in X=\left[0,1\right]$.11 Let ${X}^{*}\subseteq \left[0,1\right]$ be the nonempty and compact set of such fixed points.
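The fixed-point argument can be illustrated numerically (our sketch; both CDFs below are hypothetical stand-ins, continuous and equal to 1 from $b/(a+b) = 0.2$ onwards, chosen to mimic the heterogeneous and homogeneous cases discussed next).

```python
# Our sketch: locate fixed points of a threshold CDF F on [0, 1] by scanning
# for zeros and sign changes of F(x) - x. Both CDFs are hypothetical.
def F_hetero(x):                        # heterogeneous population: F spread out
    return min(1.0, 0.05 + 4.75 * x)    # F(0.2) = 1; single fixed point at 1

def F_homo(x):                          # homogeneous population: F very steep
    if x <= 0.15:
        return 0.01
    if x >= 0.2:
        return 1.0
    return 0.01 + 0.99 * (x - 0.15) / 0.05

def fixed_points(F, n=10_000, tol=1e-9):
    g = [F(i / n) - i / n for i in range(n + 1)]
    pts = [i / n for i in range(n + 1) if abs(g[i]) < tol]
    pts += [(i + 0.5) / n for i in range(n) if g[i] * g[i + 1] < 0]
    return sorted(pts)
```

The spread-out CDF has the unique fixed point $x^{*}=1$, while the steep one has three fixed points, as in the solid curve of Figure 8.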
Figure 8 below shows three different CDFs. The two dashed curves represent relatively heterogeneous populations; each has a single intersection with the diagonal, so the unique fixed point then is ${x}^{*}=1$. The solid curve represents a relatively homogeneous population; this distribution function has three intersections with the diagonal, and thus three fixed points: one close to zero, another near 0.45, and the third being ${x}^{*}=1$. All fixed points are Nash equilibria in a continuum population, and are approximate Nash equilibria in finite but large populations. In the diagram, all fixed points except the one near 0.45 have index +1. Those equilibria are stable in plausible population dynamics, while the fixed point near 0.45 has index $-1$ and is dynamically unstable.12
Figure 8 can also be used to discuss dynamic scenarios. Suppose that initially all individuals were to take action B. All those with non-positive thresholds $\theta$ (that is, with relatively high morality) would immediately switch to A. If others see this, then the most moral among them (that is, those with the lowest thresholds) will follow suit. Depending on the population size and its morality distribution, this process may go on until the population share taking action A reaches or surpasses $b/(a+b)$, at which point all remaining individuals will switch to A. This is what may happen in a relatively heterogeneous population with a morality distribution such that there is only one fixed point, which then necessarily is ${x}^{*}=1$. By contrast, in a relatively homogeneous population with smallest fixed point ${x}^{*}<1$, once the adjustment process reaches the point where the population share taking action A is ${x}^{*}$, the process will either halt or switch back and forth close to ${x}^{*}$. Hence, the population may get stuck there. Had the process instead started somewhere above the middle fixed point, it would carry the population gradually towards norm A and finally jump to that norm.
A discrete-time version of this process is as follows. Consider a situation in which initially only strategy B exists, so that everybody plays B. Suddenly, strategy A appears; the interpretation is that it is discovered or invented. For each threshold number of individuals ${n}^{*}\in \{0,1,2,...,n-1\}$, where ${n}^{*}=\theta \cdot (n-1)$, let $g({n}^{*})$ be the number of individuals who have that threshold. If $g(0)=0$, then nobody ever switches to A. However, if $g(0)>0$, then the number $N(t)$ of individuals who have switched from B to A by time $t=1,2,...$, where t denotes the number of time periods after strategy A was discovered, satisfies $N(1)=g(0)$ and
$$N(t) = \sum_{j=0}^{N(t-1)}g(j)$$
for all $t>1$. The process stops before everybody has switched if there exists some t such that $N(t+1)=N(t)$, i.e., if
$$\sum_{j=N(t-1)+1}^{N(t)}g(j) = 0.$$
Otherwise, it goes on until the whole population has switched to the efficient norm. In this process, Homo moralis act as leaders because they are willing to lead by example. By contrast, altruists as well as self-interested individuals do not care about the right thing to do should others follow their lead. They care about their own material payoff (and, for altruists, also that of others), given what the others do. Hence, the cascading effect obtained with moral individuals does not arise in groups of altruists or self-interested people. We illustrate with two examples, in both of which $n=100$. Table 1 shows the two distributions of thresholds. In the first example, a total of 21 individuals switch, and this takes four periods. In the second example, all individuals have switched after six periods, in spite of a slower start. Indeed, in the first example, we have $N(1)=5$, $N(2)=5+7=12$, $N(3)=12+6=18$, and $N(4)=18+3=21$, but since the remaining individuals require at least 22 people to have switched before them, they do not switch. In the second example, the process starts with just one individual switching, $N(1)=1$, but then $N(2)=5$, $N(3)=10$, $N(4)=16$, $N(5)=32$, and $N(6)=100$.
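The recursion is easy to simulate. Table 1 is not reproduced here, so the threshold counts g below are hypothetical distributions that we constructed to match the switch counts $N(t)$ reported in the text.

```python
# Our sketch of the discrete-time switching process. g[j] = number of
# individuals with threshold j; the distributions are hypothetical but
# reproduce the N(t) paths described in the text (n = 100 in both).
def run_process(g):
    N = [g.get(0, 0)]                  # N(1) = g(0)
    if N[0] == 0:
        return N                       # nobody ever switches to A
    while True:
        nxt = sum(g.get(j, 0) for j in range(N[-1] + 1))
        if nxt == N[-1]:               # no new switchers: process stops
            return N
        N.append(nxt)

g1 = {0: 5, 5: 7, 12: 6, 18: 3, 22: 79}         # stalls at 21 switchers
g2 = {0: 1, 1: 4, 5: 5, 10: 6, 16: 16, 32: 68}  # everyone eventually switches
```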
5. Conclusions
Altruism and morality are considered virtues in almost all societies and religions worldwide. We do not question this here. Instead, we ask whether altruism and morality help improve the material welfare properties of equilibria in strategic interactions. Our analysis reveals a complex picture: sometimes altruism and morality have beneficial effects, sometimes altruism is better than morality, sometimes the reverse is true, sometimes they are equivalent, and sometimes self-interest is best! The commonly held presumption that altruism and morality always lead to better outcomes is thus not generally valid. Our analysis unveiled two non-trivial and potentially important phenomena that we believe are robust and general. However, before attacking these two phenomena, we showed that in canonical one-shot public-goods games with arbitrarily many participants, altruism and morality are behaviorally indistinguishable and unambiguously increase material welfare in equilibrium. We also showed that altruism and morality induce different behaviors and outcomes in simple $2\times 2$ games. With these observations as a backdrop, we turned to the above-mentioned two phenomena.
The first phenomenon is that it may be more difficult to sustain long-run cooperation in infinitely repeated interactions between altruists and between moralists than between egoists. More specifically, we showed this for infinitely repeated prisoners' dilemmas and infinitely repeated sharing games, in both cases focusing on repeated-game strategies based on the threat of perpetual play of the stage-game Nash equilibrium. While altruists and moralists are less tempted to deviate from cooperation and less prone to punish each other (an altruist internalizes the pain inflicted upon the opponent, and a moralist internalizes what would happen if both were to deviate simultaneously), the stage-game Nash equilibrium between altruists and between moralists results in higher material payoffs than between self-interested players. This renders the punishment following a deviation less painful, both for the deviator and for the punisher. In the stage games considered here, the latter effect is always strong enough to outweigh the former, so that both altruism and morality worsen the prospects for long-run social efficiency. More extensive analyses are called for in order to investigate whether this result obtains for other stage games and punishment strategies (see, e.g., Mailath and Samuelson, 2006, [53]).
The second phenomenon is that morality, but not altruism, can eliminate socially inefficient equilibria in coordination games. More precisely, while Homo moralis preferences have the potential to eliminate socially inefficient equilibria, neither self-interest nor altruism can. The reason is that while a Homo moralis is partly driven by the "right thing" to do (in terms of the material payoffs if others were to follow his behavior), a self-interested or altruistic individual is solely driven by what others actually do, and hence has no incentive to unilaterally deviate from an inefficient equilibrium. We also showed that, when coordination games are played in heterogeneous populations, individuals with a high degree of morality, even if acting myopically, may initiate population cascades away from inefficient equilibria towards a more efficient social "norm". In such cascades, the most morally motivated take the lead, are followed by less morally motivated individuals, and may finally be followed even by purely self-interested individuals (when sufficiently many others have switched).
Advances in behavioral economics provide economists with richer and more realistic views of human motivation. Sound policy recommendations need to be based on such more realistic views. Otherwise, the recommendations are bound to fail, and may even be counterproductive. Our results show how altruism and morality may affect behavior and welfare in a few, but arguably canonical, strategic interactions. Clearly, much more theoretical and empirical work is needed for a fuller understanding to be reached, and we hope that this paper can serve as an inspiration.
Acknowledgments
We thank Ted Bergstrom, Peter Wikman and two anonymous referees for helpful comments, and Rémi Leménager for research assistance. Support by the Knut and Alice Wallenberg Research Foundation and by ANR-Labex IAST is gratefully acknowledged. We also thank the Agence Nationale de la Recherche for funding (Chaire d'Excellence ANR-12-CHEX-0012-01 for Ingela Alger, and Chaire IDEX ANR-11-IDEX-0002-02 for Jörgen W. Weibull).
Author Contributions
I.A. and J.W. contributed equally.
Conflicts of Interest
The authors declare no conflict of interest.
References
 Smith, A. An Inquiry into the Nature and Causes of the Wealth of Nations; Reedited (1976); Oxford University Press: Oxford, UK, 1776. [Google Scholar]
 Smith, A. The Theory of Moral Sentiments; Reedited (1976); Oxford University Press: Oxford, UK, 1759. [Google Scholar]
 Edgeworth, F.Y. Mathematical Psychics: An Essay on the Application of Mathematics to the Moral Sciences; Kegan Paul: London, UK, 1881. [Google Scholar]
 Collard, D. Edgeworth’s Propositions on Altruism. Econ. J. 1975, 85, 355–360. [Google Scholar] [CrossRef]
 Becker, G. Altruism, Egoism, and Genetic Fitness: Economics and Sociobiology. J. Econ. Lit. 1976, 14, 817–826. [Google Scholar]
 Bergstrom, T. A Fresh Look at the Rotten Kid Theorem—And Other Household Mysteries. J. Political Econ. 1989, 97, 1138–1159. [Google Scholar] [CrossRef]
 Bourlès, R.; Bramoullé, Y.; PerezRichet, E. Altruism in Networks. Econometrica 2017, 85, 675–689. [Google Scholar] [CrossRef]
 Laffont, J.J. Macroeconomic Constraints, Economic Efficiency and Ethics: An Introduction to Kantian Economics. Economica 1975, 42, 430–437. [Google Scholar] [CrossRef]
 Brekke, K.A.; Kverndokk, S.; Nyborg, K. An Economic Model of Moral Motivation. J. Public Econ. 2003, 87, 1967–1983. [Google Scholar] [CrossRef]
 Lindbeck, A.; Weibull, J. Altruism and Time Consistency—The Economics of Fait Accompli. J. Political Econ. 1988, 96, 1165–1182. [Google Scholar] [CrossRef]
 Bernheim, B.D.; Stark, O. Altruism within the Family Reconsidered: Do Nice Guys Finish Last? Am. Econ. Rev. 1988, 78, 1034–1045. [Google Scholar]
 Becker, G. A Theory of Social Interaction. J. Political Econ. 1974, 82, 1063–1093. [Google Scholar] [CrossRef]
 Andreoni, J. Privately Provided Public Goods in a Large Economy: The Limits of Altruism. J. Public Econ. 1988, 35, 57–73. [Google Scholar] [CrossRef]
 Alger, I.; Weibull, J. Homo Moralis—Preference Evolution under Incomplete Information and Assortativity. Econometrica 2013, 81, 2269–2302. [Google Scholar]
 Alger, I.; Weibull, J. Evolution and Kantian Morality. Games Econ. Behav. 2016, 98, 56–67. [Google Scholar] [CrossRef]
 Bergstrom, T. Ethics, Evolution, and Games among Neighbors. 2009. Available online: http://economics.ucr.edu/seminars_colloquia/2010/economic_theory/Bergstrom%20paper%20for%201%2025%2010.pdf (accessed on 1 September 2017).
 Gauthier, D. Morals by Agreement; Oxford University Press: Oxford, UK, 1986. [Google Scholar]
 Binmore, K. Game Theory and The Social Contract, Volume 1: Playing Fair; MIT Press: Cambridge, MA, USA, 1994. [Google Scholar]
 Bacharach, M. Interactive Team Reasoning: A Contribution to the Theory of Cooperation. Res. Econ. 1999, 53, 117–147. [Google Scholar] [CrossRef]
 Sugden, R. The Logic of Team Reasoning. Philos. Explor. 2003, 6, 165–181. [Google Scholar] [CrossRef]
 Roemer, J.E. Kantian equilibrium. Scand. J. Econ. 2010, 112, 1–24. [Google Scholar] [CrossRef]
 Arrow, K. Social Responsibility and Economic Efficiency. Public Policy 1973, 21, 303–317. [Google Scholar]
 Andreoni, J. Impure Altruism and Donations to Public Goods: A Theory of WarmGlow Giving. Econ. J. 1990, 100, 464–477. [Google Scholar] [CrossRef]
 Bernheim, B.D. A Theory of Conformity. J. Political Econ. 1994, 102, 841–877. [Google Scholar] [CrossRef]
 Levine, D. Modelling Altruism and Spite in Experiments. Rev. Econ. Dyn. 1998, 1, 593–622. [Google Scholar] [CrossRef]
 Fehr, E.; Schmidt, K. A Theory of Fairness, Competition, and Cooperation. Q. J. Econ. 1999, 114, 817–868. [Google Scholar] [CrossRef]
 Akerlof, G.; Kranton, R. Economics and Identity. Q. J. Econ. 2000, 115, 715–753. [Google Scholar] [CrossRef]
 Bénabou, R.; Tirole, J. Incentives and Prosocial Behavior. Am. Econ. Rev. 2006, 96, 1652–1678. [Google Scholar] [CrossRef]
 Alger, I.; Renault, R. Screening Ethics when Honest Agents Care about Fairness. Int. Econ. Rev. 2007, 47, 59–85. [Google Scholar] [CrossRef]
 Ellingsen, T.; Johannesson, M. Pride and Prejudice: The Human Side of Incentive Theory. Am. Econ. Rev. 2008, 98, 990–1008. [Google Scholar] [CrossRef]
 Englmaier, F.; Wambach, A. Optimal Incentive Contracts under Inequity Aversion. Games Econ. Behav. 2010, 69, 312–328. [Google Scholar] [CrossRef]
 Dufwenberg, M.; Heidhues, P.; Kirchsteiger, G.; Riedel, F.; Sobel, J. OtherRegarding Preferences in General Equilibrium. Rev. Econ. Stud. 2011, 78, 613–639. [Google Scholar] [CrossRef]
 Sarkisian, R. Team Incentives under Moral and Altruistic Preferences: Which Team to Choose? Games 2017, 8, 37. [Google Scholar] [CrossRef]
 Young, P. Conventions. Econometrica 1993, 61, 57–84. [Google Scholar] [CrossRef]
 Kandori, M.; Mailath, G.J.; Rob, R. Learning, Mutation, and Long Run Equilibria in Games. Econometrica 1993, 61, 29–56. [Google Scholar] [CrossRef]
 Sethi, R.; Somanathan, E. The Evolution of Social Norms in Common Property Resource Use. Am. Econ. Rev. 1996, 86, 766–788. [Google Scholar]
 Bicchieri, C. Rationality and Coordination; Cambridge University Press: Cambridge, UK, 1997. [Google Scholar]
 Lindbeck, A.; Nyberg, S.; Weibull, J. Social Norms and Economic Incentives in the Welfare State. Q. J. Econ. 1999, 114, 1–33. [Google Scholar] [CrossRef]
 Huck, S.; Kübler, D.; Weibull, J.W. Social Norms and Economic Incentives in Firms. J. Econ. Behav. Organ. 2012, 83, 173–185. [Google Scholar] [CrossRef]
 Myerson, R.; Weibull, J. Tenable Strategy Blocks and Settled Equilibria. Econometrica 2015, 83, 943–976. [Google Scholar] [CrossRef]
 Dhami, S. The Foundations of Behavioral Economic Analysis; Oxford University Press: Oxford, UK, 2016. [Google Scholar]
 Lehmann, L.; Rousset, F. The Evolution of Social Discounting in Hierarchically Clustered Populations. Mol. Ecol. 2012, 21, 447–471. [Google Scholar] [CrossRef] [PubMed]
 Van Cleve, J.; Akçay, E. Pathways to Social Evolution: Reciprocity, Relatedness, and Synergy. Evolution 2014, 68, 2245–2258. [Google Scholar] [CrossRef] [PubMed]
 Allen, B.; Tarnita, C. Measures of Success in a Class of Evolutionary Models with Fixed Population Size and Structure. J. Math. Biol. 2014, 68, 109–143. [Google Scholar] [CrossRef] [PubMed]
 Ohtsuki, H. Evolutionary Dynamics of n-Player Games Played by Relatives. Philos. Trans. R. Soc. B Biol. Sci. 2014, 369, 20130359. [Google Scholar] [CrossRef] [PubMed]
 Peña, J.; Nöldeke, G.; Lehmann, L. Evolutionary Dynamics of Collective Action in Spatially Structured Populations. J. Theor. Biol. 2015, 382, 122–136. [Google Scholar] [CrossRef] [PubMed]
 Berger, U.; Grüne, A. On the Stability of Cooperation under Indirect Reciprocity with First-Order Information. Games Econ. Behav. 2016, 98, 19–33. [Google Scholar] [CrossRef]
 Szabó, G.; Borsos, I. Evolutionary Potential Games on Lattices. Phys. Rep. 2016, 624, 1–60. [Google Scholar] [CrossRef]
 Perc, M.; Jordan, J.J.; Rand, D.G.; Wang, Z.; Boccaletti, S.; Szolnoki, A. Statistical Physics of Human Cooperation. Phys. Rep. 2017, 687, 1–51. [Google Scholar] [CrossRef]
 Bergstrom, T. On the Evolution of Altruistic Ethical Rules for Siblings. Am. Econ. Rev. 1995, 85, 58–81. [Google Scholar]
 Granovetter, M. Threshold Models of Collective Behavior. Am. J. Sociol. 1978, 83, 1420–1443. [Google Scholar] [CrossRef]
 McLennan, A. The Index +1 Principle; Mimeo; University of Queensland: Brisbane, Australia, 2016. [Google Scholar]
 Mailath, G.; Samuelson, L. Repeated Games and Reputations; Oxford University Press: Oxford, UK, 2006. [Google Scholar]
1  
2  Thus, Becker (1976) [5] shows that an altruistic family head is beneficial for the rest of the family, even if other family members are selfish (see also Bergstrom, 1989 [6]). More recently, Bourlès, Bramoullé, and Pérez-Richet (2017) [7] show that altruism is beneficial for income sharing in networks. Regarding morality, Laffont (1975) [8] shows how an economy with Kantian individuals achieves efficiency. More recently, Brekke, Kverndokk, and Nyborg (2003) [9] show that a certain kind of moral concern enhances efficiency in the private provision of public goods. 
3  
4  For a recent comprehensive textbook treatment of behavioral economics, see Dhami (2016) [41]. 
5  
6  For a complete characterization of the set of symmetric equilibria in two-by-two games between moralists, see Alger and Weibull (2013) [14]. 
7  An analysis of more general repeated-game strategies falls outside the scope of this paper. 
8  This is the special case when $k=1$ in Bernheim and Stark (1988) [11]. 
9  Bernheim and Stark instead use the utility specification
$$v=\left(1-\beta \right)\cdot {\left[x\left(1-y\right)\right]}^{\gamma}+\beta \cdot {\left[\left(1-x\right)y\right]}^{\gamma},$$

10  As we will see, a discontinuity will appear in this respect when $\alpha \to 1$. 
11  To see this, let $\varphi \left(x\right)=F\left(x\right)x$ for all $x\in \left[0,1\right]$, and note that $\varphi $ is continuous with $\varphi \left(0\right)\ge 0$ and $\varphi \left(1\right)\le 0$. 
12  A fixed point has index +1 if the curve $y=F\left(x\right)$ intersects the diagonal, $y=x$, from above. In general, an index of +1 usually implies strong forms of dynamic stability, while an index of $-1$ usually implies instability; see McLennan (2016) [52], and the references therein, for recent discussions and analyses of index theory in economics and game theory. 
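The arguments in footnotes 11 and 12 can be illustrated numerically. The sketch below is ours, not the paper's: the map $F$ (the "smoothstep" $3x^2-2x^3$, which maps $[0,1]$ into itself and has fixed points at $0$, $1/2$ and $1$) and the function names are hypothetical choices for illustration only.

```python
def F(x):
    # Hypothetical continuous map from [0,1] to itself, chosen only for
    # illustration: the smoothstep 3x^2 - 2x^3 has fixed points at
    # x = 0, x = 1/2 and x = 1.
    return 3 * x**2 - 2 * x**3

def fixed_point_bisection(F, lo, hi, tol=1e-10):
    # Footnote 11's argument: phi(x) = F(x) - x is continuous, and if it
    # has opposite (weak) signs at the endpoints, a zero of phi -- i.e. a
    # fixed point of F -- lies in between; bisection locates it.
    phi = lambda x: F(x) - x
    assert phi(lo) * phi(hi) <= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(lo) * phi(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def index_sign(F, x_star, eps=1e-4):
    # Footnote 12's sign convention: index +1 if y = F(x) crosses the
    # diagonal y = x from above at x_star, and -1 if it crosses from below.
    below = F(x_star - eps) - (x_star - eps)
    above = F(x_star + eps) - (x_star + eps)
    return +1 if below > 0 > above else -1

x_star = fixed_point_bisection(F, 0.25, 0.75)
print(round(x_star, 6), index_sign(F, x_star))  # -> 0.5 -1
```

Consistent with footnote 12, the interior fixed point at $1/2$ is crossed from below (the slope there is $F'(1/2)=3/2>1$), so it has index $-1$ and is unstable, while the fixed points at $0$ and $1$ are stable with index $+1$.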
Figure 1.
The unique Nash equilibrium contribution in the publicgoods game for different degrees of morality.
Figure 4.
The solid curve shows the optimal oneshot deviation for moralists in the repeated game. The dashed curve shows an approximation.
Figure 5.
The critical discount factor for cooperation between moralists (solid curve) and altruists (dashed curve) in the repeated game.
Figure 6.
Contour map for the maximand in (18).
Figure 7.
Thresholds for switching to A, as a function of the degree of morality, in a population of size n = 100. Starting from the bottom, the curves correspond to v = 4, v = 2, v = 1.5, and v = 1.2.
Table 1.
Two distributions of the threshold number of individuals for switching from action B to action A.
Distribution 1    Distribution 2

g(0) = 5          g(0) = 1
g(4) = 7          g(1) = 4
g(9) = 6          g(4) = 5
g(14) = 3         g(8) = 6
g(22) = 10        g(12) = 7
g(23) = 11        g(16) = 9
g(24) = 12        g(18) = 10
g(25) = 13        g(20) = 11
g(26) = 14        g(22) = 13
g(27) = 19        g(23) = 15
                  g(26) = 19

(Each column sums to the population size n = 100.)
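The tipping logic behind Table 1 can be checked with a short simulation. This is a minimal sketch assuming the standard best-reply threshold dynamics of Granovetter (1978) [51] — each period, the individuals who switch from B to A are exactly those whose threshold is at most the current number of A-players — and the function name `cascade` is our own.

```python
def cascade(g, n=100):
    # Best-reply threshold dynamics a la Granovetter (1978): the number of
    # A-players next period equals the number of individuals whose switching
    # threshold is at most the current number of A-players. Iterate until
    # this count no longer changes, i.e. until a rest point is reached.
    adopters, prev = 0, -1
    while adopters != prev:
        prev = adopters
        adopters = sum(count for threshold, count in g.items()
                       if threshold <= prev)
    return adopters

# Threshold distributions g from Table 1 (population size n = 100).
dist1 = {0: 5, 4: 7, 9: 6, 14: 3, 22: 10, 23: 11,
         24: 12, 25: 13, 26: 14, 27: 19}
dist2 = {0: 1, 1: 4, 4: 5, 8: 6, 12: 7, 16: 9,
         18: 10, 20: 11, 22: 13, 23: 15, 26: 19}

print(cascade(dist1), cascade(dist2))  # -> 21 100
```

Under this rule, starting from the unconditional switchers with threshold 0, Distribution 1 stalls at a rest point with 21 A-players, while the slightly "flatter" Distribution 2 cascades all the way to the entire population of 100.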
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).