Article

Evolution of Cooperation in Public Goods Games with Stochastic Opting-Out

by
Alexander G. Ginsberg
1,† and
Feng Fu
2,3,*
1
Department of Mathematics, Michigan State University, East Lansing, MI 48824, USA
2
Department of Mathematics, Dartmouth College, Hanover, NH 03755, USA
3
Department of Biomedical Data Science, Geisel School of Medicine at Dartmouth, Lebanon, NH 03756, USA
*
Author to whom correspondence should be addressed.
Current address: Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA
Games 2019, 10(1), 1; https://doi.org/10.3390/g10010001
Submission received: 13 December 2018 / Accepted: 19 December 2018 / Published: 21 December 2018
(This article belongs to the Special Issue Mathematical Biology and Game Theory)

Abstract

We study the evolution of cooperation in group interactions where players are randomly drawn from well-mixed populations of finite size to participate in a public goods game. However, due to the possibility of unforeseen circumstances, each player has a fixed probability of being unable to participate in the game, unlike previous models which assume voluntary participation. We first study how prescribed stochastic opting-out affects cooperation in finite populations, and then generalize to the limiting case of large populations. Because we use a pairwise comparison updating rule, our results apply to both genetic and behavioral mechanisms of evolution. Moreover, in the model, cooperation is favored by natural selection over both neutral drift and defection if the return on investment exceeds a threshold value depending on the population size, the game size, and a player's probability of opting out. Our analysis further shows that, due to the stochastic nature of opting-out in finite populations, the threshold of return on investment needed for natural selection to favor cooperation is actually greater than the one corresponding to compulsory games with equal expected game size. We also use adaptive dynamics to study the co-evolution of cooperation and opting-out behavior. Given rare mutations minutely different from the resident population, an analysis based on adaptive dynamics suggests that over time the population will tend towards complete defection and non-participation; subsequently, cooperators abstaining from the public goods game will stand a chance to emerge by neutral drift, thereby paving the way for the rise of participating cooperators. Nevertheless, increasing the probability of non-participation decreases the rate at which the population tends towards defection when participating. Our work sheds light on how stochastic opting-out emerges in the first place and on its role in the evolution of cooperation.

1. Introduction

Cooperation is everywhere [1,2,3,4]. Bacteria cooperate. For example, bacteria cooperate in biofilm production, where bacteria go so far as to use quorum sensing to determine when there are enough cooperators that contributing to the biofilm is worthwhile [5]. Ants cooperate, building vast anthills where members of a colony live together [6]. Birds cooperate, sounding an alarm when predators are nearby [7]. Additionally, several felids and canids cooperate, working together to catch prey [8]. Moreover, humans cooperate [9]. Indeed, whenever we contribute to a joint hunting effort [10], bring food to a potluck, or work together to combat climate change [11], we are cooperating. Why, though, do we see cooperation in all walks of life? How does cooperation evolve? Researchers have dedicated significant effort in the past decades towards studying the evolution of cooperation. See Traulsen and Nowak [3], Antal et al. [12], Boyd et al. [13], Broom and Rychtár [14], Hauert et al. [15], Hauert et al. [16], Javarone [17], Nowak [18], Priklopil et al. [19], Santos et al. [20] as examples.
In particular, one common type of social interaction in which cooperation frequently arises, and which has recently attracted attention from researchers, is the public goods game (hereafter abbreviated as PGG). See [11,13,15,16,21,22,23,24]. In a PGG, cooperators contribute to a common pool, which all participants of the game then share equally. In fact, in all of the instances of cooperation mentioned in the preceding paragraph, organisms contribute to a public good. In the case of bacteria, the public good is biofilm production. For ants, the good is the anthill. For birds, the good is the knowledge that a predator is nearby and hence that they should be careful. For felids and canids, the good is the catch of the hunt. Lastly, for the party-goers, the good is the food at the potluck.
However, whenever cooperators contribute to a common pool, there are free-riders, who benefit from the common pool without contributing. Game theorists frequently refer to such free-riders as defectors. These defectors cause the participants of the game to receive a smaller share of the common pool, that is, a smaller payoff, than in the social optimum where every player cooperates. In fact, assuming players can only cooperate or defect, a defector earns a larger payoff regardless of the number of cooperators, because the defector does not have to contribute to the common pool, making defection the dominant strategy (or, more generally, the individually optimal strategy). Game theorists refer to a situation in which the dominant strategy is individually optimal yet not socially optimal as a social dilemma [25]. Consequently, if each player were rational, each player would choose to defect, regardless of the strategies of other players. As a result, each player would receive a payoff of zero, worse than the payoff had all players cooperated. This outcome is often called the tragedy of the commons [1].
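To make the dilemma concrete, the payoff structure above can be checked with a few lines of code; this is an illustrative sketch (the function name and parameter values are ours, not from the text):

```python
def pgg_payoff(cooperates, other_cooperators, N, r):
    """Payoff in a one-shot compulsory PGG with group size N: each cooperator
    contributes 1 unit, the pool is multiplied by r and shared equally."""
    contribution = 1 if cooperates else 0
    pool = r * (other_cooperators + contribution)
    return pool / N - contribution

# Switching from defection to cooperation changes one's own payoff by
# r/N - 1, which is negative whenever r < N, so defection dominates.
for k in range(5):  # k cooperators among the other N - 1 players
    assert pgg_payoff(False, k, N=5, r=3) > pgg_payoff(True, k, N=5, r=3)
```

With these sample values (r = 3, N = 5), universal cooperation yields r − 1 = 2 per player while universal defection yields 0, which is precisely the social dilemma.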
In reality, even though in any particular PGG defectors outperform cooperators, cooperators may actually outperform defectors when averaging over all possible games. Such a situation is an example of Simpson's paradox [16]. Additionally, there are many ways in which a tweak to the PGG may promote cooperation [26,27,28,29,30,31]. For instance, spatial selection [18,32], or more generally population structure [12,14,20,33,34,35,36,37,38,39,40], punishment of defectors [13], signaling [23,41], optional participation [15,16], and combinations of the latter two mechanisms [24,42] have been used to promote cooperation. However, in the literature, a small but realistic tweak to the PGG has yet to be addressed. Specifically, even when there is no punishment of defectors and players cannot opt out by choice, due to unforeseen circumstances, at times players simply cannot participate in the PGG. For instance, an individual traveling to a hunting party may come across a flooded road and be forced to turn back. Or, on a whim, an individual may decide to engage in some activity other than the game. Further, a player could be late to the game, or fall ill, missing out on the opportunity to participate entirely. Alternatively, the game could draw all individuals in a given area, in which case we could define the game size as the number of players that frequent that area. In this case, we expect players that frequent the area to sometimes randomly fail to pass through the area when the PGG occurs. As a result, players participate in the PGG stochastically, with non-participation occurring independently of whether the player plans to cooperate or defect.
We investigate the evolutionary dynamics of such PGGs with stochastic non-participation. We add a fully analyzed stochastic model to the literature, thus improving the understanding of the evolution of cooperation. Moreover, our model demonstrates that the evolution of cooperation can be promoted by stochasticity in participation. We conclude with an analysis of adaptive dynamics for simplified two-person PGGs in finite populations, where we add rare and minute mutations to our basic model. We identify the condition for non-participation to be favored in the coevolutionary dynamics of cooperation and opting-out behavior. We also find that, given rare mutations only minutely different from the resident population, increasing the probability of non-participation temporarily slows the rate at which the population tends to defection when participating.

2. Model and Methods

We consider a well-mixed finite population of $n$ agents, human or not, and suppose that frequently $N \le n$ randomly selected agents receive the opportunity to participate in a PGG (Figure 1). In the PGG, some participants cooperate, either by conscious choice or by their genetics, investing one unit into a common pool, as in [15,16]. Every unit contributed by each cooperator is multiplied by some factor $r$ (return on investment or enhancement factor), $1 < r < N$, and thus for each unit contributed by a cooperator, the common pool increases by $r$ units. At the end of the game, each PGG participant obtains an equal share of the common pool. However, the participants who do not contribute (namely, defectors) also receive a share from the common pool by free-riding. To simplify the model, we assume that participants determine their strategies before the PGG has begun (that is, not dependent on group composition), as in [15,16].
As stated, the preceding model of PGG interaction leads to domination by defectors for all games where the return on investment $r$ is smaller than the group size $N$, and the game is thus a social dilemma. To promote cooperation, we assume that due to unforeseen circumstances each selected agent has a fixed probability $\alpha$ of being unable to participate in the PGG, instead obtaining a fixed payoff $\sigma > 0$ (often called the loner's payoff [16]). Indeed, in many games there is no good reason that all $N$ selected agents with the opportunity and desire to participate in the game should be guaranteed to participate. Hence, by representing the probability that any selected agent will be unable to play due to such unforeseen circumstances, introducing $\alpha$ makes our model more realistic (Figure 1).
Additionally, our model needs an “update mechanism” by which the population may change its composition of agents that are cooperating or defecting. We use the pairwise comparison rule as in previous studies [23,24,43], where two agents are randomly selected, and one agent, the focal “updating agent”, will update his or her strategy by comparing his or her payoff to the other agent, the “compared agent”. We may think of this “update” either as the conscious choice of the updating agent to change strategy (social imitation) or as the death of the updating agent and subsequent replacement by an offspring of the compared agent (death-birth process) [25].
Then, let $p_{ij}$ be the probability that the updating agent, $i$, adopts the strategy of the compared agent, $j$. Specifically, we let the probability $p_{ij}$ of adopting strategies be given by the Fermi function, as in [23,24,43]:
$$p_{ij} = \frac{1}{1+\exp[-\gamma(\pi_j - \pi_i)]},$$
where $\pi_j$ represents the expected payoff of the compared agent $j$, $\pi_i$ represents the expected payoff of the updating agent $i$, and $\gamma \geq 0$ represents the selection pressure, corresponding to the inverse temperature in statistical physics [43].
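As a sketch, the Fermi update rule of Equation (1) can be written as a small helper function (the name is ours, for illustration):

```python
import math

def fermi_prob(pi_i, pi_j, gamma):
    """Probability that the updating agent i adopts the compared agent j's
    strategy; gamma >= 0 is the selection pressure (inverse temperature)."""
    return 1.0 / (1.0 + math.exp(-gamma * (pi_j - pi_i)))
```

At equal payoffs the rule reduces to a coin flip, and as gamma grows it approaches deterministic imitation of the higher-payoff strategy.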
When it comes to adaptive dynamics in finite populations, for simplicity, we assume the PGG size is two. Furthermore, applying adaptive dynamics to our model as done in Imhof and Nowak [44], we assume that a single mutant who plays a different strategy, but one sufficiently close to that of the resident population, attempts to invade the population. Specifically, we suppose that every agent uses a strategy in the prescribed strategy space $(\alpha, \beta)$, where $\alpha$ is the probability that due to unforeseen circumstances the selected agent cannot participate, and $\beta$ is the probability that the agent cooperates if they participate. We let the original population be composed solely of agents with strategy $(\alpha, \beta)$, and we suppose that the population is invaded by a single agent with strategy $(\alpha', \beta')$. Then, we let $\alpha' \approx \alpha$ and $\beta' \approx \beta$. As in Imhof and Nowak [44], we also assume rare and minute mutations. That is, we assume sufficient time passes between mutations that either fixation or extinction of the mutant type occurs, and that the invading mutant population plays a strategy only minutely different from that of the resident population.

3. Results

3.1. Pairwise Invasion Dynamics in Finite Populations

To proceed with the analysis of the stochastic model, let us first calculate the expected payoffs for cooperators and defectors, $\pi_c$ and $\pi_d$, respectively. To calculate $\pi_d$, we use the method presented by Hauert et al. [15]. First, we observe that in a game with $S$ players, defectors receive a benefit $r n_c / S$, where $n_c$ is the number of cooperators in the game, if $S > 1$. However, if $S = 1$, that player must be a loner (because self-interactions are excluded), and will receive the loner's payoff, $\sigma$. Then, noting that any player does not play with probability $\alpha$ and plays with probability $1-\alpha$, and letting $x_c$ be the proportion of cooperators in the population, we get
$$\pi_d = \alpha\sigma + (1-\alpha)\left[ r x_c \left(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\right) + \alpha^{N-1}\sigma \right].$$
We defer the details of calculating $\pi_d$ to Appendix A. Employing a similar method (see details in Appendix B), we obtain
$$\pi_c = \pi_d - \frac{r}{n-1}\left[1-\alpha - \frac{1-\alpha^{N}}{N}\right] + (1-\alpha)\left[-1 + (1-r)\,\alpha^{N-1} + \frac{r}{N}\cdot\frac{1-\alpha^{N}}{1-\alpha}\right].$$
Hence,
$$\pi_c - \pi_d = -\frac{r}{n-1}\left[1-\alpha - \frac{1-\alpha^{N}}{N}\right] + (1-\alpha)\left[-1 + (1-r)\,\alpha^{N-1} + \frac{r}{N}\cdot\frac{1-\alpha^{N}}{1-\alpha}\right],$$
which is a constant with respect to the proportions of cooperators and defectors in the population.
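As a numerical check of this constancy and of the threshold behavior derived below, Equation (4) can be evaluated directly; a sketch with our own helper name, grouping the terms into a finite-population correction and a participation term:

```python
def payoff_difference(alpha, N, n, r):
    """pi_c - pi_d from Equation (4); note it depends on alpha, N, n, and r
    but not on the proportion of cooperators in the population."""
    finite_pop_term = -(r / (n - 1)) * ((1 - alpha) - (1 - alpha**N) / N)
    participation_term = (1 - alpha) * (-1 + (1 - r) * alpha**(N - 1)
                                        + (r / N) * (1 - alpha**N) / (1 - alpha))
    return finite_pop_term + participation_term
```

For example, with alpha = 0, N = 5, and n = 20 the difference changes sign at r = N(n-1)/(n-N) = 19/3, consistent with the threshold R(0) given later in Equation (23).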
Then, substituting $\pi_c - \pi_d$ into Equation (1), we obtain that the probability that a cooperator becomes a defector or that a defector replaces a cooperator, given that a cooperator is selected for updating and a defector is selected for comparison, is
$$p_{cd} = \left(1+\exp\left[\gamma\left(-\frac{r}{n-1}\left[1-\alpha - \frac{1-\alpha^{N}}{N}\right] + (1-\alpha)\left[-1 + (1-r)\,\alpha^{N-1} + \frac{r}{N}\cdot\frac{1-\alpha^{N}}{1-\alpha}\right]\right)\right]\right)^{-1}.$$
Thus the probability that the number of cooperators decreases by one in one iteration of the pairwise comparison updating process given i cooperators is
$$T_i^- = \frac{i}{n}\cdot\frac{n-i}{n}\,p_{cd}.$$
Likewise, the probability that a defector becomes a cooperator or that a cooperator replaces a defector given that the defector is selected for updating and the cooperator is selected for comparison is
$$p_{dc} = \left(1+\exp\left[-\gamma\left(-\frac{r}{n-1}\left[1-\alpha - \frac{1-\alpha^{N}}{N}\right] + (1-\alpha)\left[-1 + (1-r)\,\alpha^{N-1} + \frac{r}{N}\cdot\frac{1-\alpha^{N}}{1-\alpha}\right]\right)\right]\right)^{-1},$$
which is also a constant. Hence, the probability that the number of cooperators increases by one in one iteration of the pairwise comparison updating process is
$$T_i^+ = \frac{n-i}{n}\cdot\frac{i}{n}\,p_{dc}.$$
Of course, though, if the number of cooperators, i, is 0 or n, the probabilities that a cooperator will change to a defector and that a defector will change to a cooperator are both zero, and the number of cooperators remains at 0 or n. That is, i = 0 and i = n are absorbing states in the model (in the absence of mutations in strategy updating).
Moreover, now knowing $p_{cd}$ and $p_{dc}$, and noting that $p_{cd} + p_{dc} = 1$, we can calculate the transition matrix $P$ for the Markov chain in which the pairwise comparison updating process is iterated repeatedly. However, as the transition matrix itself is not vital for our analysis, we defer discussion of it to Appendix C. On the other hand, the fixation probability of cooperation, that is, the probability that, given $i$ cooperators in a population of $n-i$ defectors, every individual will become a cooperator, is vital. Following the procedure outlined by [25], we show that the fixation probability of cooperation given $i \geq 1$ cooperators, $x_i$, is
$$x_i = \left(1+\sum_{j=1}^{i-1}\prod_{k=1}^{j}\frac{p_{cd}}{p_{dc}}\right)\Bigg/\left(1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\frac{p_{cd}}{p_{dc}}\right),$$
where $i = 1$ implies the numerator is 1, and $p_{cd}$ and $p_{dc}$ should be indexed by the number of cooperators $k$. However, since $p_{cd}$ and $p_{dc}$ are constants with respect to the number of cooperators $k$, as shown above, we have omitted the index $k$ in Equation (9) for notational simplicity. Further simplifying, denote $p_{cd}/p_{dc}$ by $G(\alpha, \gamma, N, n, r)$, and observe that
$$G(\alpha, \gamma, N, n, r) = \frac{1+\exp[-\gamma(\pi_c - \pi_d)]}{1+\exp[\gamma(\pi_c - \pi_d)]} = \exp[-\gamma(\pi_c - \pi_d)].$$
Since $G$ is constant over $i$, we may expand the numerator and denominator of $x_i$ as geometric series. So, if $G \neq 1$,
$$x_i = \frac{1-G^{i}}{1-G^{n}}.$$
However, $G = 1$ means that $p_{cd} = p_{dc} = 1/2$, which implies neutral drift. We assume for now that $G \neq 1$. Then, observing that $p_{dc}/p_{cd} = G^{-1}$, the fixation probability of defection given $i$ defectors is simply $x_i$ with $G$ replaced by $G^{-1}$:
$$y_i = \frac{G^{n-i} - G^{n}}{1-G^{n}}.$$
Hence, the fixation probability of defection given $i$ cooperators (that is, $n-i$ defectors) is
$$y_{n-i} = \frac{G^{i} - G^{n}}{1-G^{n}}.$$
Thus, the probability of fixation of cooperators or defectors given i cooperators satisfies
$$x_i + y_{n-i} = 1.$$
In other words, the system always reaches an absorbing state.
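These closed-form fixation probabilities are easy to verify numerically; a minimal sketch (function name ours), assuming $G \neq 1$:

```python
def fixation_probs(G, n, i):
    """Equations (11) and (12): x_i is the fixation probability of cooperation
    from i cooperators; y_i is that of defection from i defectors."""
    x_i = (1 - G**i) / (1 - G**n)
    y_i = (G**(n - i) - G**n) / (1 - G**n)
    return x_i, y_i

# x_i + y_{n-i} = 1: starting from i cooperators (n - i defectors),
# one of the two absorbing states is always reached.
x1, _ = fixation_probs(0.9, 50, 1)
_, y49 = fixation_probs(0.9, 50, 49)
assert abs(x1 + y49 - 1.0) < 1e-12
```

The same sketch also illustrates the selection conditions discussed next: with G < 1 a single cooperator fixates with probability above 1/n, and with G > 1 below it.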
Furthermore, now knowing the probabilities of fixation of cooperation given $i$ cooperators, $x_i$, and of defection given $i$ defectors, $y_i$, we can determine which strategy is favored by natural selection. Moreover, as in Nowak [25], under pairwise invasion dynamics natural selection favors cooperation over defection if and only if $x_1 > y_1$, and likewise favors defection over cooperation if and only if $y_1 > x_1$. Additionally, natural selection favors cooperation over neutral drift if and only if $x_1 > 1/n$, where $1/n$ is the probability of fixation under neutral drift [25]. Likewise, natural selection favors defection over neutral drift if and only if $y_1 > 1/n$. In fact,
$$x_1 > 1/n \iff G < 1.$$
We defer the proof to Appendix D. Also, $G = 1$ implies neutral evolution, since $G = 1$ means $p_{cd} = p_{dc} = 1/2$. Since $G \neq 1$ implies either $p_{cd} > p_{dc}$ or vice versa, there is neutral drift if and only if $G = 1$. Thus, $x_1 < 1/n$ if and only if $G > 1$.
Hence, natural selection favors cooperation over neutral drift if and only if G < 1 , and disfavors cooperation if and only if G > 1 . For G = 1 we have neutral evolution between cooperation and defection.
On the other hand, we can show that
$$G < 1 \iff y_1 < 1/n,$$
and
$$G > 1 \iff y_1 > 1/n.$$
We defer proofs of the two preceding assertions to Appendix D.
Additionally, if $G = 1$, then there is neutral drift, as we demonstrated above, so $y_1 = 1/n$. Thus, if $G > 1$, then $y_1 > 1/n > x_1$; if $G = 1$, then $y_1 = 1/n = x_1$; and otherwise, i.e., $0 < G < 1$, $y_1 < 1/n < x_1$.
Additionally, using Equation (10), we obtain
$$G > 1 \iff \pi_c - \pi_d < 0.$$
Likewise,
$$G < 1 \iff \pi_c - \pi_d > 0.$$
Also, $G = 1$ if and only if $\pi_c - \pi_d = 0$. Thus, there are three possibilities:
  • Natural selection favors cooperation over defection, $x_1 > 1/n > y_1$, if $\pi_c - \pi_d > 0$;
  • Neutral evolution, $x_1 = 1/n = y_1$, if $\pi_c - \pi_d = 0$;
  • Natural selection favors defection over cooperation, $x_1 < 1/n < y_1$, if $\pi_c - \pi_d < 0$.
Thus, the sign of $\pi_c - \pi_d$, as given in Equation (4), which is a function of the probability that a given player opts out, $\alpha$, the PGG size $N$, the population size $n$, and the return on investment by cooperators, $r$, exclusively determines which strategy, cooperation or defection, natural selection favors more, and whether or not natural selection favors each strategy replacing the other (Figure 2).
Notably, as shown in Figure 2, the graphs for $\pi_c - \pi_d < 0$ (Figure 2c,d) may be obtained from the graphs for $\pi_c - \pi_d > 0$ (Figure 2a,b) simply by relabeling cooperators as defectors and vice versa. This is because reversing the sign of $\pi_c - \pi_d$ is equivalent to inverting $p_{cd}/p_{dc}$.
Based on these calculations of fixation probabilities, let us now identify the critical condition in terms of the threshold value of $r$, denoted by $R(\alpha)$, for given PGG size $N$, above which cooperation will be favored by natural selection. It is easy to check that $r > R$ implies that $\pi_c - \pi_d > 0$, and $r < R$ implies that $\pi_c - \pi_d < 0$.
Indeed, recalling that
$$\pi_c - \pi_d = -\frac{r}{n-1}(1-\alpha)\left[1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\right] + (1-\alpha)\left[-1 + (1-r)\,\alpha^{N-1} + \frac{r}{N}\cdot\frac{1-\alpha^{N}}{1-\alpha}\right]$$
as given in Equation (4), it follows immediately that
$$\pi_c - \pi_d > 0 \iff r > \frac{1-\alpha^{N-1}}{\dfrac{1-\alpha^{N}}{N(1-\alpha)} - \alpha^{N-1} - \dfrac{1}{n-1}\left(1 - \dfrac{1-\alpha^{N}}{N(1-\alpha)}\right)} = R(\alpha),$$
provided of course that the denominator of the above expression is defined.
Further simplifying, we obtain
$$R(\alpha) = \frac{N\left(1-\alpha-\alpha^{N-1}+\alpha^{N}\right)}{1-\dfrac{N-1}{n-1}+\dfrac{N}{n-1}\,\alpha - N\alpha^{N-1}+\left(N-1-\dfrac{1}{n-1}\right)\alpha^{N}}.$$
As shown in previous studies [25,39,42,43], population size has an impact on stochastic evolutionary dynamics in finite populations. We note that Equation (22) above explicitly shows how the conditions for cooperation to be favored depend on the population size n. We discuss the limit of large populations in Section 3.2 below.
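For concreteness, Equation (22) can be evaluated directly; a minimal sketch (function name and sample values are ours):

```python
def R_threshold(alpha, N, n):
    """Critical return on investment R(alpha) from Equation (22)."""
    num = N * (1 - alpha - alpha**(N - 1) + alpha**N)
    den = (1 - (N - 1) / (n - 1) + (N / (n - 1)) * alpha
           - N * alpha**(N - 1) + (N - 1 - 1 / (n - 1)) * alpha**N)
    return num / den

# Endpoint checks for N = 5, n = 20: R(0) = N(n-1)/(n-N) = 19/3, and
# R(alpha) approaches 2(n-1)/(n-2) = 19/9 as alpha -> 1.
```

Evaluating this function on a grid of alpha values reproduces the decreasing threshold curves shown in Figure 3.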
The critical threshold $R(\alpha)$, as shown in Appendix E, is in fact defined on $[0, 1)$, with one exception: $R(\alpha)$ is undefined at $\alpha = 0$ if and only if $n = N$, in which case $R \to +\infty$ as $\alpha \to 0^+$, as shown in Lemma 3 of Appendix D. On the other hand, if $n > N$ (the PGG size $N$ less than the population size $n$),
$$R(0) = \frac{N(n-1)}{n-N},$$
also as shown in Lemma 3 of Appendix D. Additionally, although $R(\alpha)$ is undefined at $\alpha = 1$, because there is essentially no game at $\alpha = 1$ as everyone is opting out, we can show that it has a left-hand limit at 1 as long as the population size $n > 2$:
$$\lim_{\alpha \to 1^-} R(\alpha) = \frac{2(n-1)}{n-2}.$$
In any case, $\pi_c - \pi_d > 0 \iff r > R(\alpha)$, wherever $R$ is defined. By analogous proofs, $\pi_c - \pi_d = 0$ if and only if $r = R(\alpha)$, and $\pi_c - \pi_d < 0$ if and only if $r < R(\alpha)$. Moreover, as proven in Appendix E, $R(\alpha)$ is strictly decreasing on $(0, 1)$ (Figure 3). Thus, for a given investment return factor $r$, there exists a threshold $\alpha_0$ satisfying $r = R(\alpha_0)$, such that increasing the likelihood of opting-out beyond this threshold ($\alpha > \alpha_0$) makes natural selection favor cooperation.
We note that this threshold α 0 is analogous to the threshold on the proportion of individuals who choose to opt-out as suggested by Hauert et al. [15], which deals with an infinite rather than finite population and with planned, rather than unplanned stochastic, non-participation. This is an important insight from our present study about the impact of stochasticity in participation on the evolution of cooperation. Moreover, because this threshold α 0 satisfies r = R ( α 0 ) , for increasing values of α , the requirements on r such that natural selection favors cooperation become less and less stringent. In other words, increasing the probability of non-participation facilitates cooperation (Figure 3).

3.2. Approximations of the Critical Threshold $R(\alpha)$ for Natural Selection to Favor Cooperation

In order to gain further intuitive understanding of the critical threshold $R(\alpha)$, as given in Equation (22), for natural selection to favor cooperation, let us consider some limiting cases. To this end, we first suppose that the PGG size, $N$, is set as a fixed proportion of the population size, $n$. That is, $N = c(n-1)$, where $c$ is a constant ratio.
We first suppose $n > N \gg 1$, as shown in Figure 3b. Then, as detailed in Appendix F.1, we find that
$$R(\alpha) \approx \frac{N(1-\alpha)}{1-c+c\alpha},$$
which is an approximation more manageable than the true $R(\alpha)$ in Equation (22). Next, we consider $n \gg N \gg 1$, as shown in Figure 3d. Here we may let $c \to 0$ in Equation (25), obtaining
$$R(\alpha) \approx N(1-\alpha).$$
Notably, we may lump these two limiting cases into the broader case in which only $N \gg 1$ is required, and as proven in Appendix F.3, we have
$$R(\alpha) \approx \frac{(n-1)\,N(1-\alpha)}{n - N(1-\alpha)} = R_{\mathrm{exp}}(\alpha),$$
where we denote this approximation by $R_{\mathrm{exp}}(\alpha)$.
Namely, $R_{\mathrm{exp}}(\alpha)$ is the approximate threshold required for natural selection to favor cooperation in the limit of large PGG size $N$. Interestingly, this approximation (27) can be intuitively understood as the critical condition for $\pi_c > \pi_d$ in a population of $n$ agents playing the compulsory PGG with the fixed group size $N(1-\alpha)$, which is equal to the expected PGG size with stochastic opting-out. Hence, $R_{\mathrm{exp}}(\alpha)$ can be obtained by using Equation (23), the threshold as $\alpha \to 0$, and then replacing $N$ by $N(1-\alpha)$.
Lastly, we suppose $n \gg N > 1$ and let $c = N/(n-1) \to 0$. Then, using Equation (22), we get
$$R(\alpha) \approx \frac{N\left(1-\alpha-\alpha^{N-1}+\alpha^{N}\right)}{1 - N\alpha^{N-1} + (N-1)\,\alpha^{N}} > N(1-\alpha).$$
The inequality above is proven in Appendix D. Interestingly, as $\alpha \to 1$, by Equation (24), we know that $R$ tends to $2(n-1)/(n-2)$. Therefore, $\lim_{\alpha \to 1,\, n \to \infty} R(\alpha) = 2$ (also see Appendix F.4).
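The gap between the exact threshold and the expected-game-size approximation discussed in this subsection can be illustrated numerically; a self-contained sketch (function names and sample parameters are ours):

```python
def R_exact(alpha, N, n):
    """R(alpha) from Equation (22)."""
    num = N * (1 - alpha - alpha**(N - 1) + alpha**N)
    den = (1 - (N - 1) / (n - 1) + (N / (n - 1)) * alpha
           - N * alpha**(N - 1) + (N - 1 - 1 / (n - 1)) * alpha**N)
    return num / den

def R_expected_size(alpha, N, n):
    """R_exp(alpha) from Equation (27): the compulsory-game threshold
    evaluated at the fixed group size N(1 - alpha)."""
    Ne = N * (1 - alpha)
    return (n - 1) * Ne / (n - Ne)

# The two thresholds agree at alpha = 0; for alpha > 0 in a large
# population with a small game (n >> N), the exact threshold is larger,
# reflecting the fluctuations in realized group size.
assert R_exact(0.5, 5, 200) > R_expected_size(0.5, 5, 200)
```

This numerical comparison mirrors the discrepancy between $R(\alpha)$ and $R_{\mathrm{exp}}(\alpha)$ visible in Figure 3.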

3.3. Adaptive Dynamics in Finite Populations

Of particular interest is to study the coevolutionary dynamics of cooperation (the probability to cooperate if participating in the PGG, $\beta$) and opting-out (the probability of non-participation, $\alpha$) in finite populations using the approach of adaptive dynamics in the continuous strategy space $(\alpha, \beta)$, represented by the unit square $[0,1] \times [0,1]$. To obtain closed-form results, here we consider the simplest possible case yet without loss of generality, that is, the two-person PGG ($N = 2$). In this case, the game in fact becomes an optional Prisoner's Dilemma (see Appendix G). (The more general PGGs, $N > 2$, can also be analyzed analogously by the method given here.)
Let us consider a population consisting of two types of players: the invaders with the mutant strategy $y = (\alpha', \beta')$ and the resident population with the original strategy $x = (\alpha, \beta)$, whom we call the defenders (also called the wild-type population). Unless otherwise noted, we maintain the notation used in Section 3.1, and we find the expected payoff for invaders is
$$\pi_y(i) = \frac{n-i}{n-1}(1-\alpha')(1-\alpha)\left(\frac{r\beta'}{2}+\frac{r\beta}{2}-\beta'\right) + \frac{i-1}{n-1}(1-\alpha')^{2}\beta'(r-1) + \sigma\alpha' + \sigma(1-\alpha')\left(\frac{n-i}{n-1}\,\alpha + \frac{i-1}{n-1}\,\alpha'\right),$$
where $i$ is the number of invaders.
We defer the derivation of the expected payoff for invaders to Appendix G. Moreover, since the game is symmetric, the expected payoff for defenders may be determined simply by replacing the number of invaders, $i$, with the number of defenders, $n-i$, and by relabeling as appropriate. Specifically, the expected payoff for defenders is:
$$\pi_x(i) = \frac{i}{n-1}(1-\alpha)(1-\alpha')\left(\frac{r\beta}{2}+\frac{r\beta'}{2}-\beta\right) + \frac{n-i-1}{n-1}(1-\alpha)^{2}\beta(r-1) + \sigma\alpha + \sigma(1-\alpha)\left(\frac{i}{n-1}\,\alpha' + \frac{n-i-1}{n-1}\,\alpha\right).$$
Continuing to use the pairwise comparison updating process, the probability that the number of invaders decreases by one, $T_i^-$ (where an invader is randomly chosen and adopts the strategy of a defender or is replaced by the offspring of a defender), is
$$T_i^- = p_{yx}(i) = \frac{i}{n}\cdot\frac{n-i}{n}\cdot\frac{1}{1+\exp[-\gamma(\pi_x(i)-\pi_y(i))]},$$
where $\gamma$ is the selection pressure, just as in Section 3.1. Similarly, the analogous probability that the number of invaders increases by one, $T_i^+$, is
$$T_i^+ = p_{xy}(i) = \frac{n-i}{n}\cdot\frac{i}{n}\cdot\frac{1}{1+\exp[-\gamma(\pi_y(i)-\pi_x(i))]}.$$
Then, the fixation probability of an invader given $i$ invaders in a population of defenders is
$$x_i = \left(1+\sum_{j=1}^{i-1}\prod_{k=1}^{j}\frac{p_{yx}(k)}{p_{xy}(k)}\right)\Bigg/\left(1+\sum_{j=1}^{n-1}\prod_{k=1}^{j}\frac{p_{yx}(k)}{p_{xy}(k)}\right),$$
where the backward-to-forward transition probability ratio is $p_{yx}(k)/p_{xy}(k) = \exp[\gamma(\pi_x(k)-\pi_y(k))]$.
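Since this ratio now varies with $k$, the fixation probability in Equation (33) can be computed by accumulating the products term by term; a sketch (names ours) that takes the two payoff functions as callables:

```python
import math

def fixation_x1(pi_x, pi_y, n, gamma):
    """Equation (33) with i = 1: fixation probability of a single invader,
    using the ratio p_yx(k)/p_xy(k) = exp[gamma * (pi_x(k) - pi_y(k))]."""
    total, prod = 1.0, 1.0
    for j in range(1, n):
        prod *= math.exp(gamma * (pi_x(j) - pi_y(j)))  # product over k = 1..j
        total += prod
    return 1.0 / total

# Sanity check: a neutral invader (identical payoffs) fixates with
# probability exactly 1/n.
assert abs(fixation_x1(lambda k: 0.0, lambda k: 0.0, 10, 1.0) - 0.1) < 1e-12
```

Plugging in the payoff expressions of Equations (29) and (30) for a given mutant strategy gives the $x_1$ whose gradient drives the adaptive dynamics below.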
To investigate the adaptive dynamics in finite populations [44], we consider
$$\left(\frac{d\alpha}{dt}, \frac{d\beta}{dt}\right) = f(\alpha,\beta) = \lim_{(\alpha',\beta')\to(\alpha,\beta)}\left(\frac{\partial x_1}{\partial \alpha'}, \frac{\partial x_1}{\partial \beta'}\right).$$
The direction given by $f$ at $(\alpha, \beta)$, plotting $f$ as a vector field, is the direction in the strategy space which maximizes the fixation probability of a single invading mutant, $x_1$ (see Figure 4). Following the directions which maximize $x_1$ in the strategy space $(\alpha, \beta)$ starting at an initial $(\alpha_0, \beta_0)$, that is, following the streamlines of $f$, indicates the most likely path in the strategy space that a population will take as mutants with similar strategies eventually fixate in the population, as suggested by Imhof and Nowak [44].
Substituting Equation (33) for $i = 1$, $x_1$, into Equation (34), and further simplifying, we obtain:
$$\frac{d\alpha}{dt} = \lim_{(\alpha',\beta')\to(\alpha,\beta)} \frac{\partial x_1}{\partial \alpha'} = \frac{(1-\alpha)\,\gamma\,(n-2)\left[\sigma-(r-1)\beta\right]}{2n},$$
and
$$\frac{d\beta}{dt} = \lim_{(\alpha',\beta')\to(\alpha,\beta)} \frac{\partial x_1}{\partial \beta'} = \frac{(1-\alpha)^{2}\,\gamma\,(2-2n-2r+nr)}{4n}.$$
We can see that increasing the likelihood of opting out, $\alpha$, slows down the overall rate of adaptation, which is given by the magnitude of the vector $f = (\frac{d\alpha}{dt}, \frac{d\beta}{dt})$: $\frac{d\alpha}{dt}$ is linear in $(1-\alpha)$, as shown in Equation (35), and $\frac{d\beta}{dt}$ is quadratic in $(1-\alpha)$, as shown in Equation (36). Moreover, Equation (36) indicates that increasing the probability of stochastic opting-out, $\alpha$, can diminish the rate at which individuals in the population tend towards complete defection. We refer to Figure 4 for adaptive dynamics with various combinations of $r$ and $\sigma$ in a population of finite size $n$.
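The flow given by Equations (35) and (36) can be explored numerically, for example by Euler-stepping along the streamlines of Figure 4; a sketch (helper name and parameter values are ours, for the $N = 2$ case):

```python
def adaptive_velocity(alpha, beta, r, sigma, n, gamma=1.0):
    """Right-hand sides of Equations (35) and (36)."""
    dalpha = (1 - alpha) * gamma * (n - 2) * (sigma - (r - 1) * beta) / (2 * n)
    dbeta = (1 - alpha)**2 * gamma * (2 - 2*n - 2*r + n*r) / (4 * n)
    return dalpha, dbeta

# Example: n = 10, r = 2 < (2n-2)/(n-2) = 2.25, sigma = 0.5, so cooperation
# erodes (dbeta < 0); with beta = 0.8 above beta* = sigma/(r-1) = 0.5,
# participation is simultaneously favored (dalpha < 0).
da, db = adaptive_velocity(0.2, 0.8, r=2.0, sigma=0.5, n=10)
assert db < 0 and da < 0
```

Repeatedly updating $(\alpha, \beta)$ by a small multiple of this velocity traces the streamlines that produce the cyclic pattern described below.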
If the return on investment $r < (2n-2)/(n-2)$, then for $\alpha < 1$ we have
$$\frac{d\beta}{dt} = \lim_{(\alpha',\beta')\to(\alpha,\beta)} \frac{\partial x_1}{\partial \beta'} < 0,$$
which means that the level of cooperation $\beta$ is eroded gradually in the absence of complete non-participation ($\alpha < 1$), leading to complete defection in the long run.
Furthermore, if the payoff for non-participation $\sigma < r-1$, there exists a critical threshold $\beta^* = \sigma/(r-1) \in (0,1)$ dividing the strategy space into two parts (as shown in Figure 4, panels b2 and c2): for $\beta > \beta^*$, we have $\frac{d\alpha}{dt} < 0$, which suggests that adaptive dynamics can favor mutant strategies with an increased likelihood of participation (i.e., smaller values of $\alpha$); in contrast, for $\beta < \beta^*$, adaptive dynamics will favor opting-out, as $\frac{d\alpha}{dt} > 0$.
Altogether, these results imply cyclic population dynamics of cooperation, defection, and opting-out from the perspective of adaptive dynamics in finite populations. Complete opting-out strategies $(1, \beta)$ with high cooperativity $\beta > \beta^*$ are not evolutionarily stable and can be invaded by strategies $(\alpha, \beta)$ with a smaller likelihood of opting out and lower cooperativity. However, once cooperativity $\beta$ drops below $\beta^*$ in the population, strategies with an increasing probability of non-participation will be favored. Going further, the population will either hit the edge $\beta = 0$ with zero cooperation first and move along this edge towards complete opting-out, that is, the corner $(1, 0)$, as $\frac{d\alpha}{dt} > 0$; or possibly the population will hit the edge $\alpha = 1$ first and remain there with zero participation yet non-zero cooperativity $0 < \beta < \beta^*$. We note that $(1, 0)$ is the only evolutionarily stable strategy (ESS) in this case. However, if staying on the edge $\alpha = 1$ (complete non-participation), the population will be under neutral drift, allowing it to reestablish cooperation with cooperativity $\beta > \beta^*$ and subsequently overcome the barrier of complete non-participation with $\alpha < 1$.
Lastly, if the return on investment $r > (2n-2)/(n-2) > 2$ and the payoff for non-participation $\sigma < r-1$, the only ESS is $(0, 1)$, although the game is no longer a social dilemma in this scenario.

4. Discussion and Conclusions

In this work, we study and quantify the role of stochastic opting-out in the evolution of group cooperation in PGGs and derive the exact condition for natural selection to favor cooperation in finite populations. We find that the threshold of return on investment $r$, denoted by $R(\alpha)$, above which cooperation is favored, is monotonically decreasing in $\alpha$, suggesting that allowing prescribed probabilistic participation can facilitate the evolution of cooperation. In the two extreme cases, we find that $R(0) = N(n-1)/(n-N)$ and, in the limit $\alpha \to 1$, $R(\alpha) \to 2(n-1)/(n-2)$, where $n$ is the population size and $N$ is the PGG size. Therefore, in the limit of large populations, increasing the likelihood of opting-out $\alpha$ can greatly reduce the threshold from $N$ to 2. This limiting result helps us intuitively understand the role of stochastic opting-out.
Moreover, the effect of allowing stochastic opting-out might seem to be solely a reduction of the effective PGG size to N(1 − α) on average, in which case one would expect the critical threshold of r to be R_exp = N(1 − α)(n − 1)/(n − N(1 − α)). However, we show that R(α) > R_exp for some limiting cases (see Figure 3). This discrepancy is largely owing to the stochastic nature of the opting-out: the PGG interaction groups are formed by sampling players from finite populations and thus can be of varying size. Complementing prior studies of how group size affects cooperation [22,45], our study provides analytical insight into how stochastic opting-out leads to variable PGG sizes and how this variability affects the evolutionary dynamics of group cooperation in finite populations.
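As a sanity check on this comparison, the closed-form threshold R(α) (written out in full in the appendices) and the naive expected-size threshold R_exp(α) can be evaluated numerically. The sketch below uses our own helper names and illustrative parameter values; it is not part of the paper's derivation.

```python
from math import isclose

def R(alpha, N, n):
    """Exact threshold R(alpha) for population size n, game size N,
    and opting-out probability alpha (closed form from the appendices)."""
    num = N * (1 - alpha - alpha**(N - 1) + alpha**N)
    den = (1 - (N - 1) / (n - 1) + (N / (n - 1)) * alpha
           - N * alpha**(N - 1) + alpha**N * (N - 1 - 1 / (n - 1)))
    return num / den

def R_exp(alpha, N, n):
    """Threshold for a compulsory game of the expected size N(1 - alpha)."""
    Ne = N * (1 - alpha)
    return (n - 1) * Ne / (n - Ne)

N, n = 5, 1000
# The two thresholds coincide at alpha = 0, where the game size is deterministic...
assert isclose(R(0, N, n), R_exp(0, N, n))
# ...but for alpha > 0 the stochastic group size makes the true threshold larger.
assert R(0.5, N, n) > R_exp(0.5, N, n)
```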
All other things being equal, whether it pays for a participating player to switch from defection to cooperation depends on the net return from one's own contribution, that is, (r/S − 1), where S is the realized PGG size, which can be less than N due to stochastic opting-out by other players. Averaging this payoff difference (r/S − 1) over all possible PGG sizes, and taking into account whether the focal individual is a cooperator or a defector, we obtain the expected payoff difference, π_c − π_d, which in fact does not depend on σ but does depend on α, as given in Equation (4). Intuitively, stochastic non-participation can lead to a PGG size S small enough that the return on one's own cooperation, r/S − 1, is positive; as a result, cooperation can be promoted if the probability of non-participation α exceeds α_0, such that the resulting PGG size S is likely to be sufficiently small to sustain cooperation.
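This intuition can be made concrete: conditional on the focal player participating, the number of co-players is binomial, S − 1 ~ Binomial(N − 1, 1 − α), so the probability that r/S − 1 > 0 is easy to tabulate. The sketch below (the function name is ours) shows this probability growing with α for illustrative parameters.

```python
from math import comb

def prob_profitable_group(alpha, N, r):
    """P(S < r) for a participating focal player, where S = 1 + K and
    K ~ Binomial(N - 1, 1 - alpha): the chance that the realized group is
    small enough for the net return on one's own contribution, r/S - 1,
    to be positive."""
    p = 1 - alpha  # each co-player participates independently with prob 1 - alpha
    return sum(comb(N - 1, k) * p**k * (1 - p)**(N - 1 - k)
               for k in range(N) if k + 1 < r)

N, r = 5, 3.0
probs = [prob_profitable_group(a, N, r) for a in (0.0, 0.4, 0.8)]
# More opting-out -> smaller groups -> cooperation more often pays for itself.
assert probs[0] < probs[1] < probs[2]
```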
Here, we consider the simplest possible opting-out behavior, that is, random non-participation with a given probability α. As such, every player, whether a cooperator or a defector, has the same prescribed probability of abstaining from the PGG. It is possible that the choice of opting-out may instead be made endogenously by players with knowledge of the composition of the population. Namely, players may be able to choose whether or not to participate based on their sensing of their potential interaction groups, for example, via quorum sensing [5,23]. In addition, the opting-out decision may be conditional on whether there will be a sufficient number of cooperators in the group such that the payoff from participating in the PGG outweighs the payoff of non-participation. Clearly, cooperators should be more picky than defectors when deciding whether or not to participate, because of their risk of being exploited by defectors. It is likely that natural selection will favor conditional non-participation strategies, for example, in scenarios qualitatively similar to what our adaptive dynamics analysis (Equation (35)) has revealed: only if the cooperativity in the population is sufficiently high, β > σ/(r − 1), does it pay to participate. Therefore, it is of interest for future work to explore how such conditional opting-out behaviors emerge in the first place and their impact on the long-term evolution of cooperation.
In conclusion, we find that in situations where samples of individuals are repeatedly drawn from a population for participation in PGGs, allowing for the possibility that members of the population stochastically cannot participate in the game facilitates cooperation. Furthermore, adaptive dynamics suggests that, in the presence of rare mutations of small effect, introducing stochastic non-participation slows the rate at which the population tends towards defection. The adaptive dynamics also suggests that the population must tend to complete non-participation (i.e., α = 1) when r < (2n − 2)/(n − 2), although we may see brief bursts of cooperation arising from the upper part of the edge α = 1, with β > σ/(r − 1), due to neutral drift. Additionally, because we use the pairwise comparison updating rule [43,46], our results are valid both for games with behavioral strategies and for games with genetic strategies. Since PGGs are also found widely in nature (see Nadell et al. [5], Goryunov [6], and Melis and Semmann [9] as examples), our results shed light on the evolution of cooperation in many biological and social situations [2,7,8,19,24,45].

Author Contributions

A.G.G. & F.F. conceived the model, A.G.G. analyzed the model with contributions from F.F., and A.G.G. & F.F. wrote the manuscript.

Funding

This research is supported by the Dartmouth Startup Fund, the Walter and Constance Burke Research Initiation Award, and a Junior Faculty Fellowship to F.F.

Acknowledgments

The authors would like to thank the National Science Foundation and Dartmouth College for funding the REU program at which the research was conducted. In particular, A.G.G. would also like to thank Anne Gelb, Tracy Moloney, and Amy Powell, all of Dartmouth College, for personally overseeing the program. Lastly, A.G.G. would like to thank Ignacio Uriarte-Tuero, George Pappas, Tsvetanka Tsendova, and Teena Gerhardt, all of Michigan State University, for making sure he attended a program that fit his needs.

Conflicts of Interest

We have no competing interests. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Derivation of πd

We let the probability that an event $E$ occurs be denoted by $P(E)$, and the probability that $E$ occurs given that a second event $F$ occurs be denoted by $P(E \mid F)$. For brevity, write $x = x_c n/(n-1)$ for the probability that a given co-player of a participating defector is a cooperator. Then,
$$
\begin{aligned}
\pi_d &= \alpha\sigma + \sum P(n_c \cap S \cap \mathrm{plays})\cdot\mathrm{payoff} \\
&= \alpha\sigma + \sum P(\mathrm{plays})\,P(S \mid \mathrm{plays})\,P(n_c \mid S \cap \mathrm{plays})\cdot\mathrm{payoff} \\
&= \alpha\sigma + (1-\alpha)\sum P(S \mid \mathrm{plays})\,P(n_c \mid S \cap \mathrm{plays})\cdot\mathrm{payoff} \\
&= \alpha\sigma + (1-\alpha)\Big[\sum_{S=2}^{N} P(S \mid \mathrm{plays}) \sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\,\frac{r n_c}{S} + P(S{=}1 \mid \mathrm{plays})\,\sigma\Big].
\end{aligned}
$$
Substituting $P(S \mid \mathrm{plays}) = \binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S}$, $P(n_c \mid S \cap \mathrm{plays}) = \binom{S-1}{n_c} x^{n_c}(1-x)^{S-1-n_c}$, and $P(S{=}1 \mid \mathrm{plays}) = \alpha^{N-1}$, and simplifying,
$$
\begin{aligned}
\pi_d &= \alpha\sigma + (1-\alpha)\Big[r\sum_{S=2}^{N}\frac{1}{S}\binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S}\sum_{n_c=0}^{S-1}\binom{S-1}{n_c}x^{n_c}(1-x)^{S-1-n_c}\,n_c + \alpha^{N-1}\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[r\sum_{S=2}^{N}\binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S}\,x\,\frac{S-1}{S}\sum_{k=0}^{S-2}\binom{S-2}{k}x^{k}(1-x)^{S-2-k} + \alpha^{N-1}\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[r x\sum_{S=2}^{N}\binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S}\,\frac{S-1}{S} + \alpha^{N-1}\sigma\Big],
\end{aligned}
$$
using $n_c\binom{S-1}{n_c} = (S-1)\binom{S-2}{n_c-1}$ and the substitution $k = n_c - 1$; the inner binomial sum equals 1. Continuing to simplify with $\frac{S-1}{S} = 1 - \frac{1}{S}$ and $\frac{1}{S}\binom{N-1}{S-1} = \frac{1}{N}\binom{N}{S}$,
$$
\begin{aligned}
\pi_d &= \alpha\sigma + (1-\alpha)\Big[r x\Big(\sum_{k=1}^{N-1}\binom{N-1}{k}(1-\alpha)^{k}\alpha^{N-1-k} - \frac{1}{N(1-\alpha)}\sum_{S=2}^{N}\binom{N}{S}(1-\alpha)^{S}\alpha^{N-S}\Big) + \alpha^{N-1}\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[r x\Big(\big(1-\alpha^{N-1}\big) - \frac{1 - N(1-\alpha)\alpha^{N-1} - \alpha^{N}}{N(1-\alpha)}\Big) + \alpha^{N-1}\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[r x\,\frac{N - N\alpha - \big(1-\alpha^{N}\big)}{N(1-\alpha)} + \alpha^{N-1}\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[r x\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + \alpha^{N-1}\sigma\Big].
\end{aligned}
$$
We have verified via Mathematica and via Hauert et al. [15] that
$$
\sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\,\frac{r n_c}{S} + P(S{=}1 \mid \mathrm{plays})\,\sigma = r\,\frac{x_c n}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + \alpha^{N-1}\sigma.
$$
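The only non-mechanical step in this derivation is the identity $E[(S-1)/S] = 1 - (1-\alpha^N)/(N(1-\alpha))$ for $S - 1 \sim \mathrm{Binomial}(N-1, 1-\alpha)$. A quick numerical spot-check (helper names are ours):

```python
from math import comb, isclose

def mean_ratio_direct(alpha, N):
    """Direct sum of (S-1)/S against P(S | plays) = C(N-1, S-1)(1-a)^(S-1) a^(N-S)."""
    return sum(comb(N - 1, S - 1) * (1 - alpha)**(S - 1) * alpha**(N - S) * (S - 1) / S
               for S in range(1, N + 1))

def mean_ratio_closed(alpha, N):
    """Closed form appearing in the last lines of the derivation."""
    return 1 - (1 - alpha**N) / (N * (1 - alpha))

for alpha in (0.1, 0.37, 0.8):
    for N in (2, 5, 9):
        assert isclose(mean_ratio_direct(alpha, N), mean_ratio_closed(alpha, N))
```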

Appendix B. Derivation of πc

With $x = x_c n/(n-1)$ as in Appendix A,
$$
\begin{aligned}
\pi_c &= \alpha\sigma + \sum P(n_c \cap S \cap \mathrm{plays})\cdot\mathrm{payoff} \\
&= \alpha\sigma + (1-\alpha)\sum P(S \mid \mathrm{plays})\,P(n_c \mid S \cap \mathrm{plays})\cdot\mathrm{payoff} \\
&= \alpha\sigma + (1-\alpha)\Big[\sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\Big(\frac{r}{S}(n_c+1) - 1\Big) + P(S{=}1 \mid \mathrm{plays})\,\sigma\Big] \\
&= \alpha\sigma + (1-\alpha)\Big[\sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\,\frac{r n_c}{S} + \alpha^{N-1}\sigma + \sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\Big(\frac{r}{S}-1\Big)\Big].
\end{aligned}
$$
Please note that
$$
\alpha\sigma + (1-\alpha)\Big[\sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\,\frac{r n_c}{S} + \alpha^{N-1}\sigma\Big] = \pi_d,
$$
where $x_c n/(n-1)$ is replaced by $(x_c n - 1)/(n-1)$. Indeed, $x_c n/(n-1)$ is the probability that, if a player defects, another given player will cooperate. Here, however, we consider the probability that, if a cooperator plays, another given player will cooperate. Thus, to account for the cooperator we know to be playing, we must use $(x_c n - 1)/(n-1)$. It follows that
$$
\begin{aligned}
\pi_c &= \pi_d - (1-\alpha)\frac{r}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + (1-\alpha)\sum_{S=2}^{N} P(S \mid \mathrm{plays})\Big(\frac{r}{S}-1\Big) \\
&= \pi_d - (1-\alpha)\frac{r}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + (1-\alpha)\Big[r\sum_{S=2}^{N}\frac{1}{S}\binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S} - \sum_{S=2}^{N}\binom{N-1}{S-1}(1-\alpha)^{S-1}\alpha^{N-S}\Big],
\end{aligned}
$$
using $\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays}) = 1$. Continuing to simplify, with $\frac{1}{S}\binom{N-1}{S-1} = \frac{1}{N}\binom{N}{S}$,
$$
\begin{aligned}
\pi_c &= \pi_d - (1-\alpha)\frac{r}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + (1-\alpha)\Big[\frac{r}{N(1-\alpha)}\sum_{S=2}^{N}\binom{N}{S}(1-\alpha)^{S}\alpha^{N-S} - \big(1-\alpha^{N-1}\big)\Big] \\
&= \pi_d - (1-\alpha)\frac{r}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + (1-\alpha)\Big[\frac{r}{N(1-\alpha)}\big(1 - N(1-\alpha)\alpha^{N-1} - \alpha^{N}\big) - 1 + \alpha^{N-1}\Big] \\
&= \pi_d - (1-\alpha)\frac{r}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big) + (1-\alpha)\Big[-1 + (1-r)\alpha^{N-1} + \frac{r}{N}\,\frac{1-\alpha^{N}}{1-\alpha}\Big] \\
&= \pi_d - \frac{r}{n-1}\Big(1 - \alpha - \frac{1-\alpha^{N}}{N}\Big) + (1-\alpha)\Big[-1 + (1-r)\alpha^{N-1} + \frac{r}{N}\,\frac{1-\alpha^{N}}{1-\alpha}\Big].
\end{aligned}
$$
Again, we have verified via Mathematica and via Hauert et al. [15] that
$$
\sum_{S=2}^{N} P(S \mid \mathrm{plays})\sum_{n_c=0}^{S-1} P(n_c \mid S \cap \mathrm{plays})\Big(\frac{r}{S}-1\Big) = -1 + (1-r)\alpha^{N-1} + \frac{r}{N}\,\frac{1-\alpha^{N}}{1-\alpha}.
$$
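The identity behind this final bracket can likewise be spot-checked numerically (helper names are ours):

```python
from math import comb, isclose

def bracket_direct(alpha, N, r):
    """Sum of (r/S - 1) against P(S | plays) for S = 2, ..., N."""
    return sum(comb(N - 1, S - 1) * (1 - alpha)**(S - 1) * alpha**(N - S) * (r / S - 1)
               for S in range(2, N + 1))

def bracket_closed(alpha, N, r):
    """Closed form -1 + (1 - r) a^(N-1) + (r/N)(1 - a^N)/(1 - a)."""
    return -1 + (1 - r) * alpha**(N - 1) + (r / N) * (1 - alpha**N) / (1 - alpha)

for alpha in (0.2, 0.6):
    for N in (3, 6):
        for r in (1.5, 3.0):
            assert isclose(bracket_direct(alpha, N, r), bracket_closed(alpha, N, r))
```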

Appendix C. Transition Matrix

Let $P$ be the transition matrix for the Markov chain formed by repeatedly iterating pairwise comparison. Then, $P_{i,i-1} = p_{cd}\,i(n-i)/[n(n-1)]$ and $P_{i,i+1} = p_{dc}\,i(n-i)/[n(n-1)]$, for i = 2, 3, ..., n − 1. Since the only other transition from $i$ cooperators per iteration is the absence of a transition, $P_{i,i} = 1 - P_{i,i-1} - P_{i,i+1}$, and the remaining entries in the $i$-th row are 0. Also considering that $i = 0$ cooperators and $i = n$ cooperators are absorbing states, it follows that $P$ is the tridiagonal $(n+1)\times(n+1)$ matrix
$$
P = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
P_{2,1} & P_{2,2} & P_{2,3} & \cdots & 0 & 0 \\
0 & P_{3,2} & P_{3,3} & P_{3,4} & \cdots & 0 \\
\vdots & & \ddots & \ddots & \ddots & \vdots \\
0 & \cdots & 0 & P_{n-1,n-2} & P_{n-1,n-1} & P_{n-1,n} \\
0 & 0 & \cdots & 0 & 0 & 1
\end{pmatrix}.
$$
Fortunately, the calculation of $P^k$ as $k \to \infty$ is relatively straightforward. Indeed, the previously calculated fixation probabilities $x_i$ in Equation (11) and $y_{n-i}$ in Equation (13) represent, respectively, the last and first entries in the $i$-th row of $\lim_{k\to\infty} P^k$. Also considering that the entries in any given row of $P^k$ must sum to 1, since $P^k$ is a stochastic matrix, and that $x_i + y_{n-i} = 1$, it follows that
$$
\lim_{k\to\infty} P^k = \begin{pmatrix}
1 & 0 & \cdots & 0 & 0 \\
y_{n-1} & 0 & \cdots & 0 & x_1 \\
y_{n-2} & 0 & \cdots & 0 & x_2 \\
\vdots & & & & \vdots \\
y_{1} & 0 & \cdots & 0 & x_{n-1} \\
0 & 0 & \cdots & 0 & 1
\end{pmatrix}.
$$
Thus, for any initial probability vector $X = (\mathrm{Prob}(i=0), \mathrm{Prob}(i=1), \ldots, \mathrm{Prob}(i=n))$, the product $X P^k$ converges as $k \to \infty$ to a vector of the form $(a, 0, \ldots, 0, b)$ with $a + b = 1$. The set of vectors of this form is precisely the set of stochastic eigenvectors of $P$ with eigenvalue 1 (equivalently, of $\lim_{k\to\infty} P^k$). It follows that, depending on the initial probability vector, the system can converge to any such stochastic eigenvector.
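The absorbing structure of the limit of $P^k$ can be illustrated numerically. The imitation probabilities p_cd and p_dc below are illustrative placeholders (in the model they come from the pairwise comparison rule); any values in (0, 1) exhibit the same behavior.

```python
import numpy as np

n, p_cd, p_dc = 8, 0.3, 0.4  # placeholder parameter values
P = np.zeros((n + 1, n + 1))
P[0, 0] = P[n, n] = 1.0  # i = 0 and i = n cooperators are absorbing
for i in range(1, n):
    P[i, i - 1] = p_cd * i * (n - i) / (n * (n - 1))
    P[i, i + 1] = p_dc * i * (n - i) / (n * (n - 1))
    P[i, i] = 1 - P[i, i - 1] - P[i, i + 1]

Pk = np.linalg.matrix_power(P, 200_000)  # approximates the limit of P^k
# All probability mass ends up in the two absorbing columns, rows stay stochastic.
assert np.allclose(Pk[:, 1:n], 0, atol=1e-9)
assert np.allclose(Pk.sum(axis=1), 1)
```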

Appendix D. Inequalities

Appendix D.1. Proof of (15)

$$
x_1 > \frac{1}{n} \iff \frac{1-G}{1-G^{n}} > \frac{1}{n} \iff \frac{1-G^{n}}{1-G} < n \iff \sum_{k=0}^{n-1} G^{k} < \sum_{k=0}^{n-1} 1 \iff G < 1.
$$

Appendix D.2. Proof of (16)

$$
\begin{aligned}
y_1 < \frac{1}{n} &\iff \frac{G^{n-1}-G^{n}}{1-G^{n}} < \frac{1}{n} \\
&\iff \frac{1-G}{1-G^{n}} < \frac{1}{n G^{n-1}} \\
&\iff \frac{1-G^{n}}{1-G} > n G^{n-1} \\
&\iff \frac{1}{n}\sum_{k=0}^{n-1} G^{k} > G^{n-1} \\
&\iff \frac{1}{n}\sum_{k=0}^{n-1} G^{k} > \big(G^{(n-1)n/2}\big)^{2/n} \\
&\iff \frac{1}{n}\sum_{k=0}^{n-1} G^{k} > \Big(\Big(\prod_{k=0}^{n-1} G^{k}\Big)^{1/n}\Big)^{2}.
\end{aligned}
$$
Moreover, if $G < 1$, then $\big(\prod_{k=0}^{n-1} G^{k}\big)^{1/n} > \big(\big(\prod_{k=0}^{n-1} G^{k}\big)^{1/n}\big)^{2}$. Hence, if $G < 1$, applying the arithmetic-mean–geometric-mean (AM–GM) inequality demonstrates that
$$
\frac{1}{n}\sum_{k=0}^{n-1} G^{k} > \Big(\Big(\prod_{k=0}^{n-1} G^{k}\Big)^{1/n}\Big)^{2}.
$$
Thus, $G < 1$ implies that $y_1 < 1/n$. □

Appendix D.3. Proof of (17)

If $G > 1$ and $n > 1$, note that
$$
y_1 > \frac{1}{n} \iff \frac{G^{n-1}-G^{n}}{1-G^{n}} > \frac{1}{n} \iff \frac{G-1}{G - G^{1-n}} > \frac{1}{n} \iff G - G^{1-n} < nG - n.
$$
Then, observe that
$$
\frac{d^{2}}{dG^{2}}\big(G - G^{1-n}\big) = -n(n-1)G^{-n-1} < 0
$$
for $n > 1$. Thus,
$$
\frac{d}{dG}\big(G - G^{1-n}\big) = 1 - (1-n)G^{-n}
$$
is decreasing, whereas
$$
\frac{d}{dG}\big(nG - n\big) = n
$$
is constant. Also considering that
$$
\frac{d}{dG}\big(G - G^{1-n}\big)\Big|_{G \to 1} = n = \frac{d}{dG}\big(nG - n\big)\Big|_{G \to 1},
$$
it follows that
$$
\frac{d}{dG}\big(G - G^{1-n}\big) < \frac{d}{dG}\big(nG - n\big)
$$
for $G > 1$ and $n > 1$. Since it is also true that $G - G^{1-n} \to 0$ and $nG - n \to 0$ as $G \to 1$,
$$
G - G^{1-n} < nG - n.
$$
Therefore, if $G > 1$, $y_1 > 1/n$. □
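The three comparisons above can be spot-checked directly from the closed forms $x_1 = (1-G)/(1-G^n)$ and $y_1 = (G^{n-1}-G^n)/(1-G^n)$ used in these proofs (function names are ours):

```python
def x1(G, n):
    """Fixation probability of a single mutant, written in terms of the ratio G."""
    return (1 - G) / (1 - G**n)

def y1(G, n):
    """Counterpart fixation probability appearing in (16) and (17)."""
    return (G**(n - 1) - G**n) / (1 - G**n)

n = 20
assert x1(0.9, n) > 1 / n and y1(0.9, n) < 1 / n   # G < 1
assert x1(1.1, n) < 1 / n and y1(1.1, n) > 1 / n   # G > 1
```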

Appendix D.4. Lemma 1: 1 − Nα^(N−1) + (N − 1)α^N > 0

For $\alpha = 0$, $1 - N\alpha^{N-1} + (N-1)\alpha^{N} = 1$. Then, if $\alpha \in (0,1)$ and $N > 1$,
$$
\begin{aligned}
1 - N\alpha^{N-1} + (N-1)\alpha^{N} > 0 &\iff \frac{1-\alpha^{N}}{N(1-\alpha)} - \alpha^{N-1} > 0 \\
&\iff \frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k} - \alpha^{N-1} > 0 \\
&\iff \frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k} > \big(\alpha^{(N-1)N/2}\big)^{2/N} \\
&\iff \frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k} > \Big(\Big(\prod_{k=0}^{N-1}\alpha^{k}\Big)^{1/N}\Big)^{2}.
\end{aligned}
$$
However, since $\big(\big(\prod_{k=0}^{N-1}\alpha^{k}\big)^{1/N}\big)^{2} = \alpha^{N-1} < 1$,
$$
\Big(\Big(\prod_{k=0}^{N-1}\alpha^{k}\Big)^{1/N}\Big)^{2} < \Big(\prod_{k=0}^{N-1}\alpha^{k}\Big)^{1/N},
$$
and since, by the AM–GM inequality,
$$
\frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k} > \Big(\prod_{k=0}^{N-1}\alpha^{k}\Big)^{1/N},
$$
the inequality $1 - N\alpha^{N-1} + (N-1)\alpha^{N} > 0$ must be valid. □
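Lemma 1 is easy to check numerically over a grid of α values, in both of the equivalent forms used in the chain above:

```python
for N in (2, 3, 7, 15):
    for k in range(100):          # alpha = 0.00, 0.01, ..., 0.99
        a = k / 100
        assert 1 - N * a**(N - 1) + (N - 1) * a**N > 0
        # equivalent form obtained by dividing by N(1 - alpha) > 0
        assert (1 - a**N) / (N * (1 - a)) - a**(N - 1) > 0
```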

Appendix D.5. Lemma 2: 1 − [1 − α^N]/[N(1 − α)] > 0

Suppose $\alpha \in [0,1)$. Then,
$$
\frac{1-\alpha^{N}}{N(1-\alpha)} = \frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k} < \frac{1}{N}\sum_{k=0}^{N-1} 1 = 1.
$$
Hence, $1 - [1-\alpha^{N}]/[N(1-\alpha)] > 0$. □

Appendix D.6. Proof that as N/n → 0, R(α) > N(1 − α) ≈ R_exp(α)

As $N/n \to 0$,
$$
R(\alpha) \to \frac{N\big(1 - \alpha - \alpha^{N-1} + \alpha^{N}\big)}{1 - N\alpha^{N-1} + (N-1)\alpha^{N}}.
$$
Hence,
$$
\begin{aligned}
N(1-\alpha) < \frac{N\big(1 - \alpha - \alpha^{N-1} + \alpha^{N}\big)}{1 - N\alpha^{N-1} + (N-1)\alpha^{N}} &\iff (1-\alpha)\big(1 - N\alpha^{N-1} + (N-1)\alpha^{N}\big) < 1 - \alpha - \alpha^{N-1} + \alpha^{N} \\
&\iff 1 - N\alpha^{N-1} + (N-1)\alpha^{N} < 1 - \alpha^{N-1} \\
&\iff (N-1)\alpha^{N} < (N-1)\alpha^{N-1} \\
&\iff \alpha^{N} < \alpha^{N-1},
\end{aligned}
$$
which is true for $\alpha \in (0,1)$; the second step uses $1 - \alpha - \alpha^{N-1} + \alpha^{N} = (1-\alpha)(1-\alpha^{N-1})$ and divides both sides by $1-\alpha > 0$. Therefore, the inequality (A63) holds if and only if the denominator of the right-hand side of (A63) is positive, which is true by Lemma 1. Additionally, as $N/n \to 0$, $n - N(1-\alpha) \approx n \approx n-1$, so
$$
R_{exp}(\alpha) = \frac{(n-1)N(1-\alpha)}{n - N(1-\alpha)} \approx N(1-\alpha). \qquad \Box
$$

Appendix D.7. Lemma 3: Behavior of R(α) as α → 0

As $\alpha \to 0$,
$$
R(\alpha) = \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \to \frac{N}{1 - \frac{N-1}{n-1}} = \frac{N(n-1)}{n-N} > 0,
$$
provided that $N \neq n$. On the other hand, if $N = n$, then
$$
R(\alpha) = \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{\frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \approx \frac{N(1-\alpha)}{\frac{N}{N-1}\alpha} = \frac{(N-1)(1-\alpha)}{\alpha} \to +\infty.
$$
Hence, as $\alpha \to 0$, $R(\alpha)$ is positive. □
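Both regimes of this lemma can be confirmed numerically from the full expression for R(α) (the helper name is ours):

```python
def R(alpha, N, n):
    """Full threshold R(alpha) as written at the start of this lemma."""
    num = N * (1 - alpha - alpha**(N - 1) + alpha**N)
    den = (1 - (N - 1) / (n - 1) + (N / (n - 1)) * alpha
           - N * alpha**(N - 1) + alpha**N * (N - 1 - 1 / (n - 1)))
    return num / den

# N < n: R(alpha) -> N(n - 1)/(n - N) as alpha -> 0.
assert abs(R(1e-6, 5, 100) - 5 * 99 / 95) < 1e-3
# N = n: R(alpha) diverges like (N - 1)/alpha as alpha -> 0.
assert R(1e-6, 5, 5) > 1e5
```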

Appendix E. Proof That R(α) Is Strictly Decreasing on [0, 1)

Let
$$
F(\alpha) = r\Big(\frac{1-\alpha^{N}}{N(1-\alpha)} - \alpha^{N-1}\Big) - \big(1-\alpha^{N-1}\big).
$$
As shown in Hauert et al. [15], $F$ has no root on $(0,1)$ for $r \leq 2$. The preceding result does not hold, though, if $N = 2$; we address the case $N = 2$ at the end of the following proof, and for now suppose $N > 2$. Then, for every $r > 2$ there exists exactly one $\alpha$ such that $F = 0$, as shown in Hauert et al. [15]. We consider
$$
Q(\alpha) = \frac{1-\alpha^{N-1}}{\dfrac{1-\alpha^{N}}{N(1-\alpha)} - \alpha^{N-1}}.
$$
$Q$ gives the value of $r$, given $\alpha$, for which $F$ is zero. Hence, $Q$ is injective where it is defined. Since $[1-\alpha^{N}]/[N(1-\alpha)] - \alpha^{N-1}$ is positive on $[0,1)$ by Lemma 1 in Appendix D, $Q$ is defined, and thus injective, on $(0,1)$. Thus, $Q$ is either strictly decreasing or strictly increasing on $(0,1)$. However,
$$
\lim_{\alpha\to 0} Q(\alpha) = N,
$$
and, applying L'Hospital's rule twice,
$$
\lim_{\alpha\to 1} Q(\alpha) = 2.
$$
Since $Q$ is continuous on $(0,1)$, there exist $\delta_1 < 1/2$ and $\delta_2 < 1/2$ such that for $\alpha_1 \in (0, \delta_1)$ and $\alpha_2 \in (1-\delta_2, 1)$, $|Q(\alpha_1) - N| < 1/3$ and $|Q(\alpha_2) - 2| < 1/3$, respectively. Choosing arbitrary $c_1 \in (0, \delta_1)$ and $c_2 \in (1-\delta_2, 1)$, it follows that, for $N > 2$, $Q(c_1) > Q(c_2)$ while $c_1 < c_2$. Hence, $Q$ must be strictly decreasing on $(0,1)$. Moreover, $Q(0) = N$. Also considering that $Q < N$ on $(0,1)$, which can be proven using Lemma 2, $Q$ is strictly decreasing on $[0,1)$.
Then, we let the numerator of $Q$ be
$$
S(\alpha) = 1 - \alpha^{N-1},
$$
and note that $S$ is strictly decreasing but positive on $[0,1)$. Next, we let the denominator of $Q$ be
$$
T(\alpha) = \frac{1-\alpha^{N}}{N(1-\alpha)} - \alpha^{N-1},
$$
which is positive on $(0,1)$ by Lemma 1 in Appendix D. Lastly, we let
$$
U(\alpha) = -\frac{1}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big),
$$
which is negative on $(0,1)$ by Lemma 2. Also, $[1-\alpha^{N}]/[N(1-\alpha)] = \frac{1}{N}\sum_{k=0}^{N-1}\alpha^{k}$, a strictly increasing function of $\alpha$ on $[0,1)$ for $N \geq 2$. Hence, $1 - [1-\alpha^{N}]/[N(1-\alpha)]$ is strictly decreasing, and hence $U(\alpha)$ is strictly increasing on $[0,1)$.
Next, note that
$$
R(\alpha) = \frac{S(\alpha)}{T(\alpha) + U(\alpha)},
$$
and suppose, for contradiction, that $T(\alpha) + U(\alpha)$ has zeroes in $(0,1)$, which form some set $W$. Since $N(1-\alpha)\big[T(\alpha)+U(\alpha)\big]$ is a polynomial, $W$ must be finite. Then, we may choose $w_0 = \min\{w \in W\}$. Thus, throughout the interval $(0, w_0)$, $T(\alpha)+U(\alpha)$ must be either positive or negative but not both. Moreover, since $S(\alpha)$ is positive on $(0,1)$, $R(\alpha)$ cannot change sign on $(0, w_0)$. Also considering that, by Lemma 3, $R(\alpha)$ is positive as $\alpha \to 0$, it follows that $R(\alpha)$ and $T(\alpha)+U(\alpha)$ are positive on $(0, w_0)$. Next, consider any $\alpha_{inv}, \alpha \in (0, w_0)$ such that $\alpha_{inv} < \alpha$. Then,
$$
\begin{aligned}
R(\alpha_{inv}) > R(\alpha) &\iff \frac{S(\alpha_{inv})}{T(\alpha_{inv}) + U(\alpha_{inv})} > \frac{S(\alpha)}{T(\alpha) + U(\alpha)} \\
&\iff S(\alpha_{inv})T(\alpha) + S(\alpha_{inv})U(\alpha) > S(\alpha)T(\alpha_{inv}) + S(\alpha)U(\alpha_{inv}).
\end{aligned}
$$
We can show that
$$
S(\alpha_{inv})U(\alpha) > S(\alpha)U(\alpha_{inv});
$$
the details of the proof are deferred to the end of Appendix E. Furthermore, since $Q$ is strictly decreasing and $T$ is positive,
$$
S(\alpha_{inv})T(\alpha) > S(\alpha)T(\alpha_{inv}).
$$
Equations (A89) and (A90) together imply that Equation (A88) is valid on $(0,1)$. Thus, $R$ is strictly decreasing on $(0, w_0)$ for $N > 2$. Moreover, since $S$ and $N(1-\alpha)\big[T(\alpha)+U(\alpha)\big]$ are both polynomials, and since $S$ is nonzero on $(0,1)$, it follows that $R$ must have an asymptote at $w_0$. However, $R$ is positive and strictly decreasing on $(0, w_0)$, so $R$ cannot tend to $\pm\infty$ at $w_0$ and thus cannot have an asymptote at $w_0$. This is a contradiction. It must thus be false that $T(\alpha)+U(\alpha)$ has any zeroes on $(0,1)$. Hence, since $R$ and $S$ are both positive as $\alpha \to 0$, $T(\alpha)+U(\alpha)$ must also be positive as $\alpha \to 0$. We may then show that $R(\alpha)$ is strictly decreasing on $(0,1)$ by an argument analogous to the one used on $(0, w_0)$ in the preceding proof by contradiction: we have already established that Equation (A88) holds on $(0,1)$, and since $T(\alpha)+U(\alpha)$ is positive on $(0,1)$, Equation (A88) still implies Equation (A86). Therefore, $R(\alpha)$ is strictly decreasing on $[0,1)$ for $N > 2$.
However, if $N = 2$, then $Q$ is constant rather than strictly decreasing, and Equation (A90) is replaced by
$$
S(\alpha_{inv})T(\alpha) = S(\alpha)T(\alpha_{inv}).
$$
Moreover, for $N = 2$ one computes $U(\alpha) = -S(\alpha)/[2(n-1)]$, so Equation (A89) likewise holds with equality. Thus, Equation (A88) holds with equality, and $R$ is constant on $[0,1)$ when $N = 2$. Hence, $R$ is strictly decreasing on $[0,1)$ for $N > 2$ and non-increasing (indeed constant) for $N = 2$. □
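The monotonicity result can be checked numerically via the decomposition R = S/(T + U) used in this appendix; numerically, the threshold strictly decreases in α for N > 2, while for N = 2 the computed values turn out constant (equal to 2(n − 1)/(n − 2)).

```python
def R(alpha, N, n):
    """Threshold R(alpha) = S(alpha) / (T(alpha) + U(alpha)) in the notation above."""
    S = 1 - alpha**(N - 1)
    T = (1 - alpha**N) / (N * (1 - alpha)) - alpha**(N - 1)
    U = -(1 / (n - 1)) * (1 - (1 - alpha**N) / (N * (1 - alpha)))
    return S / (T + U)

n = 50
grid = [k / 1000 for k in range(1000)]            # alpha in [0, 0.999]
vals5 = [R(a, 5, n) for a in grid]
assert all(b < a for a, b in zip(vals5, vals5[1:]))   # strictly decreasing for N = 5
vals2 = [R(a, 2, n) for a in grid]
assert max(vals2) - min(vals2) < 1e-6                 # constant for N = 2
```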

Proof S(αinv)U(α) > S(α)U(αinv) for αinv < α

It will now be very useful to show that
$$
S(\alpha_{inv})U(\alpha) > S(\alpha)U(\alpha_{inv}).
$$
To demonstrate that the preceding relation holds, note that since $S$ is strictly decreasing, $S(\alpha_{inv}) > S(\alpha)$. Also considering that $U$ is strictly increasing, $U(\alpha) > U(\alpha_{inv})$, and considering that $S$ is positive and $U$ is negative, it follows that
$$
S(\alpha_{inv})U(\alpha) > S(\alpha)U(\alpha_{inv}) \iff \frac{S(\alpha_{inv})\,U(\alpha)}{U(\alpha_{inv})} < S(\alpha) \iff \frac{S(\alpha_{inv})}{U(\alpha_{inv})} > \frac{S(\alpha)}{U(\alpha)},
$$
where each division by a (negative) value of $U$ reverses the inequality. In other words, $S(\alpha_{inv})U(\alpha) > S(\alpha)U(\alpha_{inv})$ if and only if $S(\alpha)/U(\alpha)$ is strictly decreasing.
Indeed, to see why $S(\alpha)/U(\alpha)$ is strictly decreasing, observe that
$$
\begin{aligned}
\frac{S(\alpha)}{U(\alpha)} &= \frac{1-\alpha^{N-1}}{-\frac{1}{n-1}\Big(1 - \frac{1-\alpha^{N}}{N(1-\alpha)}\Big)} \\
&= -(n-1)\,\frac{1-\alpha^{N-1}}{1 - \frac{1-\alpha^{N}}{N(1-\alpha)}} \\
&= -(n-1)\,\frac{N(1-\alpha)\big(1-\alpha^{N-1}\big)}{N(1-\alpha) - \big(1-\alpha^{N}\big)} \\
&= -N(n-1)\,\frac{1-\alpha-\alpha^{N-1}+\alpha^{N}}{N-1-N\alpha+\alpha^{N}}.
\end{aligned}
$$
Since $S$ is positive but $U$ is negative, and $-N(n-1)$ is negative, $S(\alpha)/U(\alpha)$ is strictly decreasing if and only if
$$
X(\alpha) = \frac{1-\alpha-\alpha^{N-1}+\alpha^{N}}{N-1-N\alpha+\alpha^{N}}
$$
is strictly increasing. Now, to establish that $X(\alpha)$ is strictly increasing, we will show that $X'(\alpha)$ is positive. Indeed, observe that
$$
\begin{aligned}
0 < X'(\alpha) &\iff 0 < \frac{\big({-1}-(N-1)\alpha^{N-2}+N\alpha^{N-1}\big)\big(N-1-N\alpha+\alpha^{N}\big) - \big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)\big({-N}+N\alpha^{N-1}\big)}{\big(N-1-N\alpha+\alpha^{N}\big)^{2}} \\
&\iff 0 < \big({-1}-(N-1)\alpha^{N-2}+N\alpha^{N-1}\big)\big(N-1-N\alpha+\alpha^{N}\big) - \big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)\big({-N}+N\alpha^{N-1}\big) \\
&\iff 0 < -(N-1)+N\alpha-\alpha^{N} - (N-1)^{2}\alpha^{N-2} + N(N-1)\alpha^{N-1} - (N-1)\alpha^{2N-2} + N(N-1)\alpha^{N-1} - N^{2}\alpha^{N} + N\alpha^{2N-1} \\
&\qquad\;\; - \big({-N} + N\alpha + N\alpha^{N-1} - N\alpha^{N} + N\alpha^{N-1} - N\alpha^{N} - N\alpha^{2N-2} + N\alpha^{2N-1}\big) \\
&\iff 0 < 1 + \alpha^{N}\big({-1}-N^{2}+2N\big) - \alpha^{N-2}(N-1)^{2} + \alpha^{N-1}\big(N(N-1)+N(N-1)-2N\big) + \alpha^{2N-2}\big(N-(N-1)\big) \\
&\iff 0 < 1 - \alpha^{N}(N-1)^{2} - \alpha^{N-2}(N-1)^{2} + \alpha^{N-1}\big(2N^{2}-4N\big) + \alpha^{2N-2}.
\end{aligned}
$$
However, it is not at all clear that the preceding function, $X_1(\alpha) = 1 - \alpha^{N}(N-1)^{2} - \alpha^{N-2}(N-1)^{2} + \alpha^{N-1}(2N^{2}-4N) + \alpha^{2N-2}$, is positive on $(0,1)$. Testing the endpoints, we see that $X_1(0) = 1$, but
$$
X_1(1) = 1 - (N-1)^{2} - (N-1)^{2} + 2N^{2} - 4N + 1 = 1 - 2N^{2} + 4N - 2 + 2N^{2} - 4N + 1 = 0.
$$
Thus, if we can show that $X_1(\alpha)$ is strictly decreasing on $(0,1)$, we have that $X_1(\alpha) > 0$ on $(0,1)$. To that end, we will attempt to show that $X_1'(\alpha) < 0$ on $(0,1)$:
$$
\begin{aligned}
0 > X_1'(\alpha) &\iff 0 > -\alpha^{N-1}N(N-1)^{2} - \alpha^{N-3}(N-1)^{2}(N-2) + \alpha^{N-2}\,2N(N-1)(N-2) + 2(N-1)\alpha^{2N-3} \\
&\iff 0 > -\alpha^{2}N(N-1) - (N-1)(N-2) + 2\alpha N(N-2) + 2\alpha^{N},
\end{aligned}
$$
where the second step divides through by $(N-1)\alpha^{N-3} > 0$.
Still, it is not clear whether the preceding function, $X_2(\alpha) = -\alpha^{2}N(N-1) - (N-1)(N-2) + 2\alpha N(N-2) + 2\alpha^{N}$, is negative on $(0,1)$. Testing endpoints again, we see that $X_2(0) = -(N-1)(N-2) < 0$ since $N > 2$, and that
$$
X_2(1) = -N(N-1) - (N-1)(N-2) + 2N(N-2) + 2 = -N^{2}+N-N^{2}+3N-2+2N^{2}-4N+2 = 0.
$$
Thus, if we can show that $X_2(\alpha)$ is strictly increasing on $(0,1)$, we have that $X_2(\alpha) < 0$. To do so, observe that
$$
\begin{aligned}
0 < X_2'(\alpha) &\iff 0 < -2\alpha N(N-1) + 2N(N-2) + 2N\alpha^{N-1} \\
&\iff 0 < -\alpha(N-1) + (N-2) + \alpha^{N-1},
\end{aligned}
$$
dividing through by $2N$.
Even still, we need to do more work to show that the preceding function, $X_3(\alpha) = -\alpha(N-1) + (N-2) + \alpha^{N-1}$, is positive. Namely, testing endpoints one last time, we see that $X_3(0) = N-2 > 0$ and that $X_3(1) = -(N-1) + (N-2) + 1 = 0$. Therefore, if we can show that $X_3(\alpha)$ is strictly decreasing on $(0,1)$, then $X_3(\alpha) > 0$. One last time, note that
$$
X_3'(\alpha) = -(N-1) + (N-1)\alpha^{N-2} < 0
$$
on $(0,1)$.
Thus, $X_3$ is positive on $(0,1)$. Employing the logic outlined above, we then have $X_2'(\alpha) > 0$, so $X_2(\alpha) < 0$. This in turn implies that $X_1'(\alpha) < 0$, so $X_1(\alpha) > 0$. Hence, $X'(\alpha) > 0$ and $X(\alpha)$ is strictly increasing. Finally, we then have that $S(\alpha)/U(\alpha)$ is strictly decreasing, yielding the desired result that $S(\alpha_{inv})U(\alpha) > S(\alpha)U(\alpha_{inv})$. □

Appendix F. Justification of Approximations

Appendix F.1. Approximation for R(α) as N → ∞ with N/(n − 1) = c

We let $c = N/(n-1)$ be a real number in $[0,1]$. As $N \to \infty$, the terms $N\alpha^{N-1}$, $\alpha^{N}\big(N-1-\frac{1}{n-1}\big)$, $\alpha^{N-1}$, $\alpha^{N}$, and $\alpha^{N+1}$ all tend to 0, as long as $\alpha \neq 1$. Hence, for $\alpha \neq 1$,
$$
R(\alpha) = \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \approx \frac{N(1-\alpha)}{1 - c + c\alpha}.
$$
However, applying L'Hospital's rule twice to the exact expression yields $\lim_{\alpha\to 1} R(\alpha) = 2(n-1)/(n-2)$, whereas $\lim_{\alpha\to 1} N(1-\alpha)/(1-c+c\alpha) = 0$; the approximation therefore holds only for $\alpha$ bounded away from 1. □

Appendix F.2. Approximation for R(α) for n ≫ N ≫ 0

As $n, N \to \infty$ with $N/n \to 0$, for $\alpha \neq 1$,
$$
R(\alpha) = \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \approx N(1-\alpha).
$$
However, as in the preceding proof, the exact limit $\lim_{\alpha\to 1} R(\alpha) = 2(n-1)/(n-2)$ differs from $\lim_{\alpha\to 1} N(1-\alpha) = 0$; the approximation again requires $\alpha$ bounded away from 1. □

Appendix F.3. Approximation for R(α) as N → ∞

As $N \to \infty$, $N\alpha^{N-1}, N\alpha^{N} \to 0$ for $\alpha \neq 1$, so for $\alpha \neq 1$,
$$
\begin{aligned}
R(\alpha) &= \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \\
&\approx \frac{N(1-\alpha)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha} \approx \frac{N(1-\alpha)}{1 - \frac{N}{n} + \frac{N}{n}\alpha} = \frac{nN(1-\alpha)}{n - N(1-\alpha)} \approx \frac{(n-1)N(1-\alpha)}{n - N(1-\alpha)} = R_{exp}(\alpha).
\end{aligned}
$$
Additionally, as in the preceding two proofs, the exact limit $\lim_{\alpha\to 1} R(\alpha) = 2(n-1)/(n-2)$ differs from $\lim_{\alpha\to 1} R_{exp}(\alpha) = 0$; the approximation is valid only for $\alpha$ bounded away from 1. □

Appendix F.4. Approximation for R(α) as N/n → 0 and α → 1

As $N/n \to 0$, the terms $\frac{N-1}{n-1}$, $\frac{N}{n-1}$, and $\frac{1}{n-1}$ all tend to 0; thus
$$
R(\alpha) = \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - \frac{N-1}{n-1} + \frac{N}{n-1}\alpha - N\alpha^{N-1} + \alpha^{N}\big(N-1-\frac{1}{n-1}\big)} \approx \frac{N\big(1-\alpha-\alpha^{N-1}+\alpha^{N}\big)}{1 - N\alpha^{N-1} + (N-1)\alpha^{N}}.
$$
Then, as $\alpha \to 1$ we arrive at the indeterminate form $0/0$, and employ L'Hospital's rule for the case $N > 2$:
$$
R(\alpha) \approx \frac{N\big({-1}-(N-1)\alpha^{N-2}+N\alpha^{N-1}\big)}{-N(N-1)\alpha^{N-2} + N(N-1)\alpha^{N-1}}.
$$
However, this still yields $0/0$ as $\alpha \to 1$, so we apply L'Hospital's rule again, obtaining
$$
R(\alpha) \approx \frac{N\big({-(N-1)(N-2)}\alpha^{N-3}+N(N-1)\alpha^{N-2}\big)}{-N(N-1)(N-2)\alpha^{N-3} + N(N-1)^{2}\alpha^{N-2}} \to \frac{N\big({-(N-1)(N-2)}+N(N-1)\big)}{-N(N-1)(N-2) + N(N-1)^{2}} = 2.
$$
On the other hand, if $N = 2$, then $R(\alpha) \approx N(1-\alpha)^{2}/(1-\alpha)^{2} = N = 2$. Hence, as $N/n \to 0$ and $\alpha \to 1$, $R(\alpha) \to 2$. □
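The joint limit derived in this appendix is easy to confirm numerically from the full expression (the helper name is ours): for N/n small and α close to 1, R(α) is close to 2 for any N ≥ 2.

```python
def R(alpha, N, n):
    """Full finite-population threshold R(alpha)."""
    num = N * (1 - alpha - alpha**(N - 1) + alpha**N)
    den = (1 - (N - 1) / (n - 1) + (N / (n - 1)) * alpha
           - N * alpha**(N - 1) + alpha**N * (N - 1 - 1 / (n - 1)))
    return num / den

# Large population, alpha near 1: the threshold approaches 2 regardless of N.
for N in (2, 5, 12):
    assert abs(R(0.9999, N, 10**7) - 2.0) < 0.01
```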

Appendix G. Derivation of πy

The payoff matrix for a two-person public goods game, in which each cooperator invests 1 unit that is then multiplied by $r$ and distributed equally among both players, is
$$
\begin{array}{c|cc}
 & c & d \\ \hline
c & r-1 & r/2-1 \\
d & r/2 & 0
\end{array}
$$
Then, we suppose that there are $i$ invaders and $n-i$ defenders in the population, where all invaders play the same strategy $(\alpha', \beta')$ and all defenders play the resident strategy $(\alpha, \beta)$. We let one of the invaders be among the players invited to play the two-person PGG and call that player "player A". Next, we let $A_c$, $A_d$, $A_n$, and $A_{n^c}$ represent the events in which player A cooperates, defects, does not participate, and participates, respectively. We suppose "player B" is the other individual invited to play, and we let $B_c$, $B_d$, and $B_n$ be the events in which player B cooperates, defects, and does not participate, respectively. Lastly, we let $E$ and $E^c$ be the events in which player B is an invader and a defender, respectively. Denoting the intersection of any two events $F$ and $G$ by $F \cap G$, and the probability that an event $F$ occurs by $p(F)$,
$$
\begin{aligned}
\pi_y ={}& (r-1)\big[p(A_c \cap E \cap B_c) + p(A_c \cap E^c \cap B_c)\big] + \Big(\tfrac{r}{2}-1\Big)\big[p(A_c \cap E \cap B_d) + p(A_c \cap E^c \cap B_d)\big] \\
&+ \tfrac{r}{2}\big[p(A_d \cap E \cap B_c) + p(A_d \cap E^c \cap B_c)\big] + \sigma\big[p(A_n) + p(A_{n^c} \cap E \cap B_n) + p(A_{n^c} \cap E^c \cap B_n)\big] \\
={}& (r-1)\big[p(A_c)p(E \mid A_c)p(B_c \mid A_c \cap E) + p(A_c)p(E^c \mid A_c)p(B_c \mid A_c \cap E^c)\big] \\
&+ \Big(\tfrac{r}{2}-1\Big)\big[p(A_c)p(E \mid A_c)p(B_d \mid A_c \cap E) + p(A_c)p(E^c \mid A_c)p(B_d \mid A_c \cap E^c)\big] \\
&+ \tfrac{r}{2}\big[p(A_d)p(E \mid A_d)p(B_c \mid A_d \cap E) + p(A_d)p(E^c \mid A_d)p(B_c \mid A_d \cap E^c)\big] \\
&+ \sigma\big[p(A_n) + p(A_{n^c})p(E \mid A_{n^c})p(B_n \mid A_{n^c} \cap E) + p(A_{n^c})p(E^c \mid A_{n^c})p(B_n \mid A_{n^c} \cap E^c)\big] \\
={}& (r-1)\beta'(1-\alpha')\big[p(E \mid A_c)\,\beta'(1-\alpha') + p(E^c \mid A_c)\,\beta(1-\alpha)\big] \\
&+ \Big(\tfrac{r}{2}-1\Big)\beta'(1-\alpha')\big[p(E \mid A_c)\,(1-\beta')(1-\alpha') + p(E^c \mid A_c)\,(1-\beta)(1-\alpha)\big] \\
&+ \tfrac{r}{2}(1-\beta')(1-\alpha')\big[p(E \mid A_d)\,\beta'(1-\alpha') + p(E^c \mid A_d)\,\beta(1-\alpha)\big] \\
&+ \sigma\Big[\alpha' + (1-\alpha')\big(p(E \mid A_{n^c})\,\alpha' + p(E^c \mid A_{n^c})\,\alpha\big)\Big].
\end{aligned}
$$
Substituting $p(E \mid \cdot) = \frac{i-1}{n-1}$ and $p(E^c \mid \cdot) = \frac{n-i}{n-1}$ and collecting terms,
$$
\begin{aligned}
\pi_y ={}& (r-1)\beta'(1-\alpha')\Big[\tfrac{i-1}{n-1}\beta'(1-\alpha') + \tfrac{n-i}{n-1}\beta(1-\alpha)\Big] + \Big(\tfrac{r}{2}-1\Big)\beta'(1-\alpha')\Big[\tfrac{i-1}{n-1}(1-\beta')(1-\alpha') + \tfrac{n-i}{n-1}(1-\beta)(1-\alpha)\Big] \\
&+ \tfrac{r}{2}(1-\beta')(1-\alpha')\Big[\tfrac{i-1}{n-1}\beta'(1-\alpha') + \tfrac{n-i}{n-1}\beta(1-\alpha)\Big] + \sigma\Big[\alpha' + (1-\alpha')\Big(\tfrac{i-1}{n-1}\alpha' + \tfrac{n-i}{n-1}\alpha\Big)\Big] \\
={}& \frac{n-i}{n-1}(1-\alpha')(1-\alpha)\Big(\frac{r\beta'}{2} + \frac{r\beta}{2} - \beta'\Big) + \frac{i-1}{n-1}(1-\alpha')^{2}\beta'(r-1) + \sigma\alpha' + \sigma(1-\alpha')\Big(\frac{n-i}{n-1}\alpha + \frac{i-1}{n-1}\alpha'\Big).
\end{aligned}
$$

References

  1. Axelrod, R. The Evolution of Cooperation; Basic Books: New York, NY, USA, 1984.
  2. Hölldobler, B.; Wilson, E.O. The Superorganism: The Beauty, Elegance, and Strangeness of Insect Societies; WW Norton & Company: New York, NY, USA, 2009.
  3. Traulsen, A.; Nowak, M.A. Evolution of cooperation by multilevel selection. Proc. Natl. Acad. Sci. USA 2006, 103, 10952–10955.
  4. Trivers, R.L. The evolution of reciprocal altruism. Q. Rev. Biol. 1971, 46, 35–57.
  5. Nadell, C.D.; Xavier, J.B.; Levin, S.A.; Foster, K.R. The evolution of quorum sensing in bacterial biofilms. PLoS Biol. 2008, 6, e14.
  6. Goryunov, D. Nest-building in ants Formica exsecta (Hymenoptera, Formicidae). Entomol. Rev. 2015, 95, 953–958.
  7. Templeton, C.N.; Greene, E.; Davis, K. Allometry of alarm calls: Black-capped chickadees encode information about predator size. Science 2005, 308, 1934–1937.
  8. Bailey, I.; Myatt, J.P.; Wilson, A.M. Group hunting within the Carnivora: Physiological, cognitive and environmental influences on strategy and cooperation. Behav. Ecol. Sociobiol. 2013, 67, 1–17.
  9. Melis, A.P.; Semmann, D. How is human cooperation different? Philos. Trans. R. Soc. Lond. B Biol. Sci. 2010, 365, 2663–2674.
  10. Wu, T.; Fu, F.; Zhang, Y.; Wang, L. The increased risk of joint venture promotes social cooperation. PLoS ONE 2013, 8, e63801.
  11. Wang, J.; Fu, F.; Wu, T.; Wang, L. Emergence of social cooperation in threshold public goods games with collective risk. Phys. Rev. E 2009, 80, 016101.
  12. Antal, T.; Ohtsuki, H.; Wakeley, J.; Taylor, P.D.; Nowak, M.A. Evolution of cooperation by phenotypic similarity. Proc. Natl. Acad. Sci. USA 2009, 106, 8597–8600.
  13. Boyd, R.; Gintis, H.; Bowles, S. Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science 2010, 328, 617–620.
  14. Broom, M.; Rychtář, J. Game-Theoretical Models in Biology; CRC Press: Boca Raton, FL, USA, 2013.
  15. Hauert, C.; Monte, S.D.; Hofbauer, J.; Sigmund, K. Replicator dynamics for optional public good games. J. Theor. Biol. 2002, 218, 187–194.
  16. Hauert, C.; Monte, S.D.; Hofbauer, J.; Sigmund, K. Volunteering as Red Queen mechanism for cooperation in public goods games. Science 2002, 296, 1129–1132.
  17. Javarone, M.A. The host–pathogen game: An evolutionary approach to biological competitions. Front. Phys. 2016, 4, 94, arXiv:1607.00998.
  18. Nowak, M.A. Five rules for the evolution of cooperation. Science 2006, 314, 1560–1563.
  19. Priklopil, T.; Chatterjee, K.; Nowak, M. Optional interactions and suspicious behaviour facilitates trustful cooperation in prisoners dilemma. J. Theor. Biol. 2017, 433, 64–72.
  20. Santos, F.C.; Pacheco, J.M.; Lenaerts, T. Evolutionary dynamics of social dilemmas in structured heterogeneous populations. Proc. Natl. Acad. Sci. USA 2006, 103, 3490–3494.
  21. Hauert, C.; Miȩkisz, J. Effects of sampling interaction partners and competitors in evolutionary games. Phys. Rev. E 2018, 98, 052301.
  22. Killingback, T.; Bieri, J.; Flatt, T. Evolution in group-structured populations can resolve the tragedy of the commons. Proc. R. Soc. Lond. B Biol. Sci. 2006, 273, 1477–1481.
  23. Pacheco, J.M.; Vasconcelos, V.V.; Santos, F.C.; Skyrms, B. Co-evolutionary dynamics of collective action with signaling for a quorum. PLoS Comput. Biol. 2015, 11, e1004101.
  24. Sigmund, K.; Silva, H.D.; Traulsen, A.; Hauert, C. Social learning promotes institutions for governing the commons. Nature 2010, 466, 861.
  25. Nowak, M.A. Evolutionary Dynamics: Exploring the Equations of Life; Harvard University Press: Cambridge, MA, USA, 2006.
  26. Battiston, F.; Perc, M.; Latora, V. Determinants of public cooperation in multiplex networks. New J. Phys. 2017, 19, 073017.
  27. Szolnoki, A.; Perc, M. Antisocial pool rewarding does not deter public cooperation. Proc. R. Soc. Lond. B Biol. Sci. 2015, 282, 20151975.
  28. Szolnoki, A.; Perc, M. Conformity enhances network reciprocity in evolutionary social dilemmas. J. R. Soc. Interface 2015, 12, 20141299.
  29. Wu, T.; Fu, F.; Dou, P.; Wang, L. Social influence promotes cooperation in the public goods game. Phys. A Stat. Mech. Appl. 2014, 413, 86–93.
  30. Wu, T.; Fu, F.; Wang, L. Moving away from nasty encounters enhances cooperation in ecological prisoner's dilemma game. PLoS ONE 2011, 6, e27669.
  31. Wu, T.; Fu, F.; Zhang, Y.; Wang, L. Adaptive role switching promotes fairness in networked ultimatum game. Sci. Rep. 2013, 3, 1550.
  32. Ohtsuki, H.; Hauert, C.; Lieberman, E.; Nowak, M.A. A simple rule for the evolution of cooperation on graphs and social networks. Nature 2006, 441, 502.
  33. Broom, M.; Hadjichrysanthou, C.; Rychtář, J. Evolutionary games on graphs and the speed of the evolutionary process. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 2010, 466, 1327–1346.
  34. Chen, X.; Fu, F.; Wang, L. Influence of different initial distributions on robust cooperation in scale-free networks: A comparative study. Phys. Lett. A 2008, 372, 1161–1167.
  35. Du, F.; Fu, F. Partner selection shapes the strategic and topological evolution of cooperation. Dyn. Games Appl. 2011, 1, 354. [Google Scholar] [CrossRef]
  36. Javarone, M.A. Statistical physics of the spatial prisoner’s dilemma with memory-aware agents. Eur. Phys. J. B 2016, 89, 42. [Google Scholar] [CrossRef]
  37. Javarone, M.A. Statistical Physics and Computational Methods for Evolutionary Game Theory; Springer: Berlin, Germany, 2018. [Google Scholar]
  38. Javarone, M.A.; Atzeni, A.E. The role of competitiveness in the prisoner’s dilemma. Comput. Soc. Netw. 2015, 2, 15. [Google Scholar] [CrossRef]
  39. Javarone, M.A.; Marinazzo, D. Evolutionary dynamics of group formation. PLoS ONE 2017, 12, e0187960. [Google Scholar] [CrossRef] [PubMed]
  40. Rychtár, J.; Stadler, B. Evolutionary dynamics on small-world networks. Int. J. Comput. Math. Sci. 2008, 2, 1–4. [Google Scholar]
  41. Schoenmakers, S.; Hilbe, C.; Blasius, B.; Traulsen, A. Sanctions as honest signals—The evolution of pool punishment by public sanctioning institutions. J. Theor. Biol. 2014, 356, 36–46. [Google Scholar] [CrossRef]
  42. Hauert, C.; Traulsen, A.; née Brandt, H.D.S.; Nowak, M.A.; Sigmund, K. Public goods with punishment and abstaining in finite and infinite populations. Biol. Theory 2008, 3, 114–122. [Google Scholar] [CrossRef]
  43. Traulsen, A.; Pacheco, J.M.; Nowak, M.A. Pairwise comparison and selection temperature in evolutionary game dynamics. J. Theor. Biol. 2007, 246, 522–529. [Google Scholar] [CrossRef] [Green Version]
  44. Imhof, L.A.; Nowak, M.A. Stochastic evolutionary dynamics of direct reciprocity. Proc. R. Soc. Lond. B Biol. Sci. 2010, 277, 463–468. [Google Scholar] [CrossRef] [Green Version]
  45. Isaac, R.M.; Walker, J.M. Group size effects in public goods provision: The voluntary contributions mechanism. Q. J. Econ. 1988, 103, 179–199. [Google Scholar] [CrossRef]
  46. Traulsen, A.; Hauert, C.; Silva, H.D.; Nowak, M.A.; Sigmund, K. Exploration dynamics in evolutionary games. Proc. Natl. Acad. Sci. USA 2009, 106, 709–712. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. Model schematic of stochastic opting-out. Cooperators (blue) and defectors (red) are represented by dots. A fixed number of agents are randomly drawn from the population to participate in a PGG, represented by the small tan rectangular area. While most selected agents are able to make it to the game, some are not. Selected agents then return to the general populace, where no game is occurring. The fitness of agents is determined by the average payoffs they obtain from PGG interactions (cooperators vs. defectors) as well as from non-participation. Natural selection drives the co-evolutionary dynamics of opting-out behavior (the probability of non-participation, α ), and cooperation (the probability to cooperate in the PGG, β ).
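The sampling-and-payoff scheme described in the caption above can be sketched in a few lines. This is a minimal illustration, not the authors' code; the parameter conventions (contribution cost normalized to 1, non-participant payoff σ) are assumptions in line with standard optional public goods game models.

```python
import random

def pgg_round(strategies, alpha, r, sigma, cost=1.0):
    """One public goods game among a sampled group.

    strategies: list of booleans (True = cooperator) for the N sampled agents.
    alpha: probability that each sampled agent is unable to participate.
    Returns the payoff of each sampled agent.
    """
    # Each invited agent independently shows up with probability 1 - alpha.
    present = [random.random() >= alpha for _ in strategies]
    participants = [s for s, p in zip(strategies, present) if p]
    k = len(participants)  # realized game size after opting-out
    if k == 0:
        return [sigma] * len(strategies)  # no game takes place
    contributions = sum(cost for s in participants if s)
    share = r * contributions / k  # pot multiplied by r, split equally
    payoffs = []
    for s, p in zip(strategies, present):
        if not p:
            payoffs.append(sigma)          # non-participant payoff
        elif s:
            payoffs.append(share - cost)   # cooperator pays the cost
        else:
            payoffs.append(share)          # defector free-rides
    return payoffs
```

With α = 0 the game is compulsory and the familiar social dilemma appears: every participating defector earns strictly more than every participating cooperator whenever r is below the group size.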
Figure 2. Pairwise invasion dynamics in finite populations. Shown are the fixation probabilities for π_c − π_d > 0, as in panels (a,b), and for π_c − π_d < 0, as in panels (c,d). If π_c − π_d > 0, the fixation probability x_1 starting from a single cooperator is always larger than 1/n (the neutral-drift value), which in turn is always larger than the fixation probability y_1 of a single defector. If π_c − π_d < 0, the situation is reversed, that is, y_1 > 1/n > x_1. We confirm that the specific values chosen for π_c − π_d are admissible for the given values of the population size n.
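The comparison against the neutral value 1/n in the caption can be reproduced with the standard birth–death fixation formula. The sketch below assumes, for simplicity, a payoff difference π_c − π_d that does not depend on the current number of mutants, which collapses the general product formula into a geometric sum; γ denotes the selection intensity of the pairwise-comparison (Fermi) rule.

```python
import math

def fixation_probability(n, delta_pi, gamma=1.0):
    """Fixation probability of a single mutant in a population of size n
    under pairwise-comparison (Fermi) updating, assuming the payoff
    difference delta_pi = pi_c - pi_d is constant in the mutant count
    (an illustrative simplification of the model in the text).
    """
    # Ratio of backward to forward transition probabilities per step.
    q = math.exp(-gamma * delta_pi)
    if math.isclose(q, 1.0):
        return 1.0 / n  # neutral drift
    # Geometric sum: x_1 = 1 / (1 + q + q^2 + ... + q^(n-1)).
    return (1.0 - q) / (1.0 - q ** n)
```

A positive payoff difference gives x_1 > 1/n and, by symmetry (replacing Δπ with −Δπ), the lone-defector probability y_1 < 1/n, matching the ordering stated in the caption.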
Figure 3. Critical threshold R ( α ) of the PGG investment return (i.e., enhancement factor) r required for cooperation to be favored. The shaded areas represent combinations of the PGG enhancement factor r and the probability of non-participation (i.e., opting-out) α that promote cooperation for game size N = 5 and population size n = 10 in (a), N = 500 and n = 1000 in (b), N = 5 and n = 7 in (c), and N = 500 and n = 1,000,000 in (d). The yellow line in each panel is the approximation R_exp ( α ) as given in Equation (27).
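The stochasticity emphasized in the caption stems from the fact that, although N agents are invited, the realized game size is random: under independent opting-out it follows a Binomial(N, 1 − α) distribution, whose mean N(1 − α) is the "expected game size" used for the compulsory-game comparison in the abstract. A small illustrative sketch (not from the paper):

```python
from math import comb

def group_size_distribution(N, alpha):
    """Probability distribution of the realized game size when each of N
    invited players independently opts out with probability alpha.
    Returns a list d where d[k] = P(exactly k players participate)."""
    p = 1.0 - alpha  # per-player participation probability
    return [comb(N, k) * p**k * (1 - p)**(N - k) for k in range(N + 1)]
```

For example, with N = 5 and α = 0.2 the mean realized size is 4, but sizes from 0 to 5 all occur with positive probability, which is why the threshold R(α) for stochastic opting-out exceeds that of a compulsory game of fixed size N(1 − α).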
Figure 4. Coevolution of cooperation and stochastic opting-out. Panels (a–c) show the adaptive dynamics, plotted with the StreamPlot function of Mathematica, in a finite population of size n = 5 for selection pressure γ = 1 and various values of the return on investment, r, and the payoff for non-participants, σ. Following the arrows traces the most likely path of the population through the strategy space. Note that if σ < r − 1, there exists a critical threshold of cooperativity β* = σ/(r − 1) such that an increasing likelihood of participation is more beneficial for agents with cooperativity β > σ/(r − 1), whereas participating cooperators are always prone to exploitation by others, as shown in Panels (b.2) and (c.2). Hence, for r < (2n − 2)/(n − 2), the only evolutionarily stable strategy (ESS) is (1, 0). However, if r > (2n − 2)/(n − 2) > 2, where the game is no longer a social dilemma, (0, 1) is an ESS.

