Article

Coevolution of Cooperation and Layer Selection Strategy in Multiplex Networks

Graduate School of Information Science, Nagoya University, Nagoya 464-8601, Japan
*
Author to whom correspondence should be addressed.
Games 2016, 7(4), 34; https://doi.org/10.3390/g7040034
Submission received: 12 August 2016 / Revised: 19 October 2016 / Accepted: 24 October 2016 / Published: 1 November 2016
(This article belongs to the Special Issue Evolutionary Games and Statistical Physics of Social Networks)

Abstract

Recently, the emergent dynamics in multiplex networks, composed of layers of multiple networks, has been discussed extensively in network science. However, little is known about whether and how the evolution of a strategy for selecting a layer to participate in can contribute to the emergence of cooperative behaviors in multiplex networks of social interactions. To investigate these issues, we constructed a coevolutionary model of cooperation and layer selection strategies in which each individual selects one layer from multiple layers of social networks and plays the Prisoner’s Dilemma with neighbors in the selected layer. We found that the proportion of cooperative strategies increased as the number of layers increased, regardless of the degree of the dilemma, and that this increase occurred due to a cyclic coevolution process of game strategies and layer selection strategies. We also showed that the heterogeneity of links among layers is a key factor for multiplex networks to facilitate the evolution of cooperation, and such positive effects on cooperation were observed regardless of the difference in the stochastic properties of network topologies.

1. Introduction

Recent progress in network science has revealed that the structure of interactions among individuals can significantly affect the emergence and evolution of cooperative behaviors [1,2]. This is mainly because local interactions allow cooperative clusters to grow within a population of defectors [1]. While most previous studies assumed that all individuals interact in a network of a single social relationship or context, in the real world there exist different networks of social interactions that affect each other, directly or indirectly, in various ways. Such a situation of interactions among networks is known as a multiplex network, multilayer network, interdependent network, interconnected network, or network of networks, and has recently been discussed extensively in network science [3,4]. A pioneering study showed that the properties of cascading failures on interdependent networks differ significantly from those of single-network systems, in that the existence of interconnecting links between networks changes the threshold and the order of the transition for cascading failures [5].
According to the seminal review of evolutionary games on multilayer networks [4], various models involve several coupled networks, and these are collectively called multilayer networks. When we focus on networks of social interactions, two types of multilayer networks are distinguished: interdependent networks and multiplex networks.
The first is a situation called an interdependent network. It is assumed that there are several networks of social interactions, termed layers, in each of which individuals play games with their neighbors. A factor of interdependence is then assumed that allows the evolution of behaviors in one layer to affect that in another [6,7,8,9,10,11]. Wang et al. constructed a model of such interdependent networks in which two layers are stochastically interconnected [6]. Each individual on a square lattice is connected to the corresponding individual on the other lattice with a fixed probability of interconnection and plays a public goods game (PGG) with its neighbors, including the long-range neighbor if connected. They showed that the proportion of cooperation reached its maximum value when the probability of interconnection was intermediate. Wang et al. also discussed the evolution of cooperative behaviors in a different type of interdependent network [9]. In addition to the total payoff obtained from the Prisoner’s Dilemma game (PDG) with its neighbors in a two-dimensional regular network, each individual may obtain an additional payoff: the payoff received by the individual at the corresponding position in the other network, reflecting indirect and interdependent effects of one network on the other. They showed that an intermediate degree of interdependence contributed to the evolution of cooperation. Interestingly, they also demonstrated that the degree of interdependence could self-organize to its optimal value through individual-level adaptation [10]. Santos et al. assumed that individuals play different types of games (PDG or Snowdrift Game (SDG)) in the two layers and discussed the effects of biased imitation, defined as the probability of imitating a neighbor in the same layer versus a neighbor in the other [11]. They demonstrated that imitation of a neighbor in the other network could promote the evolution of cooperation.
The second is a situation in which each individual simultaneously participates in multiple networks with different topologies [12,13,14], called a multiplex network. For instance, Wang et al. assumed two layers of scale-free networks with different roles: a layer in which players play the evolutionary game to obtain their payoffs, and a layer in which players look for neighbors whose strategy they may imitate [12]. They showed that breaking the symmetry through assortative mixing (i.e., the tendency for nodes with similar degrees to become directly connected in each layer) in one layer and/or disassortative mixing in the other layer impedes the evolution of cooperation.
Gómez-Gardenes et al. assumed that each individual belongs to multiple random networks (layers) and has a strategy of the PDG (cooperate or defect) for each layer. The population evolves according to the fitness determined by the accumulated payoff of the games with neighbors in all of the layers [13]. They found that multiplex structures could facilitate the evolution of cooperation only when the temptation to defect was large. Zhang et al. also constructed a model of a multiplex network with two layers [14]. Each layer is composed of several interaction groups connected by links, forming a sub-network of groups. Each individual belongs to two groups in different layers simultaneously and plays games with others in these groups using unrelated strategies (cooperation or defection) across layers. The strategy of an individual evolves according to the total payoff from the games in both layers, and an individual can move to a new group within each layer. They showed that the optimal migration range for promoting cooperation can vary depending on both mutation and migration probabilities.
While the latter two studies clarified the effects of an individual’s participation in interactions in multiple networks on cooperation, it might be too strong to assume that individuals always play games in all of the layers, because physical, social and temporal constraints exist in real-life situations. Instead, we can assume that each individual actively selects not only a game strategy but also a layer to participate in, depending on the state of interactions, following the coevolutionary game approach [15] in which properties characterizing individual attributes or their environment coevolve with game strategies.
Our purpose is to clarify whether and how the evolution of a layer selection strategy can contribute to the emergence of cooperation in a multiplex network of social interactions. We assume multiple layers composed of random networks. Each individual belongs to all of the layers but selects one layer and plays games with neighbors in the selected layer. Both the layer selection strategy and the PDG strategy for each layer coevolve according to the fitness based on the payoff from the games. We show that the proportion of cooperators increases with the number of layers, implying that multiplex networks can contribute to the evolution of cooperative behaviors. This increase is caused by a dynamic coevolution process of strategies in which bursts in the proportion of individuals occur repeatedly in different layers. We also discuss the effects of the heterogeneity of layers and of network types on the evolution of cooperation.

2. Model

2.1. Multiplex Network

Figure 1 shows a schematic image of our model. There are M layers that abstract different channels or contexts of social interactions among individuals. For example, each layer might correspond to a friendship network or a network in a social networking service (SNS) on the Internet. Each layer is composed of a network of interactions among individuals in the corresponding relationship. An individual i is represented as a node n_i^l (i = 0, 1, ..., N−1) in each layer l (l = 0, 1, ..., M−1), and thus the individual is represented as a set of nodes {n_i^0, ..., n_i^{M−1}}. The existence of a link between the individuals i and j in the layer l means that i and j are neighboring individuals who can interact with each other in the layer l. In this study, the topology of each layer is defined as an Erdös–Rényi (ER) random graph with the average degree k. It is known that cooperative behavior does not evolve easily in ER random graphs in comparison with networks that have regular structures (e.g., [1,16]). We adopt this structure to see whether increasing the number of layers can contribute to the evolution of cooperation despite such a hard situation. The evolution process of strategies at each time step is composed of two phases: playing games and updating strategies, as explained below.
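As a concrete illustration, the following is a minimal sketch of how such an M-layer structure could be generated in Python with the networkx library; the function name build_layers and its default parameters are ours and are not taken from the authors' implementation.

```python
# A minimal sketch (assumption: Python + networkx) of the multiplex structure:
# M independent Erdos-Renyi layers over the same N individuals.
import networkx as nx

def build_layers(N=100, M=3, k=10.0, seed=None):
    """Return a list of M ER random graphs on the node set {0, ..., N-1}."""
    p = k / (N - 1)  # edge probability that yields an average degree of k
    return [nx.erdos_renyi_graph(N, p, seed=None if seed is None else seed + l)
            for l in range(M)]

layers = build_layers()
# layers[l].neighbors(i) gives the neighbors of individual i in layer l
```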

2.2. Playing Games

Algorithm 1 shows pseudo-code of the interaction and evolution processes in a generation. We assume that each individual can participate in interactions in only one layer at every time step, reflecting physical, temporal and cognitive constraints. Thus, each individual i has a layer selection strategy sl_i ∈ {0, 1, ..., M−1}, which determines the layer in which the individual i plays the PDG with its neighbors. Hereafter, we say that an individual i is in the layer l if it selects the layer l (sl_i = l), in the sense that it participates in interactions in the social network represented by the layer l.
Algorithm 1 A pseudo-code of the interaction and evolution processes in a generation. payoff(a, b) represents the payoff value obtained by an individual who plays a strategy a against an opponent playing a strategy b. neighbors_i^l represents the set of the neighbors of the individual i in the layer l. rnd(s) represents a function that returns a randomly selected element from the set s. rnddist() represents a function that returns a random value from the uniform distribution [0, 1].
for i = 0 to N−1 do                      (playing games)
    P_i ← 0
    for each individual j in neighbors_i^{sl_i} do
        if sl_i = sl_j then
            P_i ← P_i + payoff(sp_i^{sl_i}, sp_j^{sl_j})
        end if
    end for
end for
for i = 0 to N−1 do                      (updating strategies)
    (updating a PDG strategy)
    j ← rnd(neighbors_i^{sl_i})
    if P_i < P_j then
        if rnddist() < |(P_j − P_i) / (P_j − P_min)| then
            nsp_i^{sl_i} ← sp_j^{sl_j}
        else
            nsp_i^{sl_i} ← sp_i^{sl_i}
        end if
    else
        nsp_i^{sl_i} ← sp_i^{sl_i}
    end if
    (updating a layer selection strategy)
    j ← rnd(neighbors_i^{sl_i})
    if P_i < P_j then
        nsl_i ← sl_j
    else
        nsl_i ← sl_i
    end if
end for
for i = 0 to N−1 do                      (mutation)
    if rnddist() < μ then
        nsp_i^{sl_i} ← rnd({C, D})
    end if
    if rnddist() < ν then
        nsl_i ← rnd({0, 1, ..., M−1})
    end if
end for
for i = 0 to N−1 do                      (synchronous update)
    sp_i^{sl_i} ← nsp_i^{sl_i}
    sl_i ← nsl_i
end for
Each individual i also has a PDG strategy sp_i^l (cooperate (C) or defect (D)) for each layer l. The payoff matrix of the PDG is defined in Table 1. The individual plays a PDG using the strategy sp_i^{sl_i} with each neighboring individual j in its selected layer sl_i who is in the same layer (sl_j = sl_i) and plays sp_j^{sl_j}. The total payoff P_i is regarded as the fitness of the individual i. For example, in Figure 1, the individual i chooses the layer 2 and has one neighbor in its selected layer; it obtains the payoff −1 by cooperating with a defector in the same layer. Note that if there are no neighbors in the selected layer of an individual, its fitness becomes 0.
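The game-playing phase of Algorithm 1 can be sketched as follows; the payoff table encodes Table 1, while the data-structure names (layers, sl, sp) follow the notation above but are otherwise our own illustrative assumptions rather than the authors' code.

```python
# A sketch of the game-playing phase: each individual accumulates payoffs
# only against neighbors that selected the same layer (Table 1 payoffs).

def payoff_table(b):
    """Payoff matrix of Table 1; rows are the focal strategy, columns the opponent's."""
    return {('C', 'C'): 1, ('C', 'D'): -1,
            ('D', 'C'): b, ('D', 'D'): -1}

def play_games(layers, sl, sp, payoff):
    """Return the fitness P_i of every individual.

    layers  : list of graphs as in the earlier sketch
    sl[i]   : layer selected by individual i
    sp[i][l]: PDG strategy ('C' or 'D') of individual i in layer l
    """
    N = len(sl)
    P = [0.0] * N
    for i in range(N):
        l = sl[i]
        for j in layers[l].neighbors(i):
            if sl[j] == l:  # games are played only between co-located neighbors
                P[i] += payoff[(sp[i][l], sp[j][sl[j]])]
    return P
```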

2.3. Updating Strategies

Each individual i updates its PDG strategy in the selected layer, sp_i^{sl_i}, and its layer selection strategy sl_i according to the fitness after playing games. We assume that individuals can obtain information about the fitness and PDG strategies of neighboring individuals in the selected layer before updating strategies. The value of the PDG strategy sp_i^l in the next time step, nsp_i^l, is determined by the following procedure:
i. One individual j is randomly selected from the neighboring individuals of i in the layer sl_i (neighbors_i^{sl_i}), regardless of sl_j.
ii. If the fitness of the individual j (P_j) is higher than its own fitness P_i, nsp_i^{sl_i} becomes sp_j^{sl_i} (nsp_i^{sl_i} ← sp_j^{sl_i}) with the following probability:

    probability_{i→j} = (P_j − P_i) / (P_j − P_min)  if P_j > P_i,  and 0 otherwise,

where P_min represents the minimum fitness among all individuals. This equation means that each individual imitates the strategy of j with a certain probability if the fitness of the neighbor j is higher than its own fitness. The individual i imitates the strategy of j with the highest probability 1.0 when its fitness (P_i) is the minimum and the fitness of j (P_j) is the maximum. This probability decreases linearly as the difference between their fitness values decreases. Otherwise, if the fitness of the neighbor j is equal to or smaller than that of the individual i, it does not change the strategy (nsp_i^{sl_i} ← sp_i^{sl_i}).
iii. nsp_i^l is replaced with C or D randomly with the mutation probability μ.
The layer selection strategy sl_i in the next time step, nsl_i, is determined by the following procedure:
i. One individual j is randomly selected from the neighboring individuals of i in its selected layer sl_i (neighbors_i^{sl_i}), regardless of sl_j.
ii. If the fitness of the individual j (P_j) is higher than its own fitness P_i, nsl_i becomes sl_j (nsl_i ← sl_j). Otherwise, it does not change the strategy (nsl_i ← sl_i).
iii. nsl_i is replaced with a random value from {0, 1, ..., M−1} with the mutation probability ν.
This procedure means that an individual can imitate the layer selection strategy of a better-performing neighbor in its selected layer, which allows it to move to a different layer.
Finally, all of the strategies are updated simultaneously (sp_i^{sl_i} ← nsp_i^{sl_i} and sl_i ← nsl_i, for all i).
In some situations, it might be plausible to assume that changing a group or network to which an individual belongs (i.e., its layer selection strategy) is easier than changing the strategy related to its personality (i.e., its game strategy). The processes described above reflect such a situation in which changes in the layer selection strategies can happen more frequently than changes in the game strategies.
These procedures are repeated for G steps.
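Putting the update rules together, a compact sketch of the update phase (imitation with the linear probability above, mutation, and synchronous replacement) might look as follows; the function name and helper structure are our own assumptions, not the authors' code.

```python
# A sketch of one update step: probabilistic imitation of the PDG strategy,
# deterministic imitation of the layer selection strategy, and mutation.
import random

def update(layers, sl, sp, P, M, mu=0.02, nu=0.02):
    N = len(sl)
    P_min = min(P)
    new_sp = [row[:] for row in sp]   # next-step PDG strategies (nsp)
    new_sl = sl[:]                    # next-step layer selections (nsl)
    for i in range(N):
        l = sl[i]
        nbrs = list(layers[l].neighbors(i))
        if nbrs:
            # PDG strategy: imitate a richer random neighbor with linear probability
            j = random.choice(nbrs)
            if P[j] > P[i] and random.random() < (P[j] - P[i]) / (P[j] - P_min):
                new_sp[i][l] = sp[j][sl[j]]
            # layer selection: follow a richer random neighbor deterministically
            j = random.choice(nbrs)
            if P[j] > P[i]:
                new_sl[i] = sl[j]
        # mutation of the selected-layer PDG strategy and of the layer selection
        if random.random() < mu:
            new_sp[i][l] = random.choice(['C', 'D'])
        if random.random() < nu:
            new_sl[i] = random.randrange(M)
    return new_sl, new_sp
```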

3. Experiments

We conducted experiments with this model for the purpose of revealing the coevolutionary dynamics between the layer selection strategy and cooperative behavior in multiplex networks. We used the following values as the experimental parameters: N = 100, M ∈ {1, 3, ..., 19}, k = 10.0, b ∈ {1.1, ..., 2.1}, G = 10,000, μ = ν = 0.02. sp_i^l and sl_i were initialized with random values from their domains in the initial population. The experimental results are the average of five trials for each combination of the parameter settings of M and b.
We aim to understand how the proportion of cooperative behaviors changes as the number of layers M increases. First, we focus on the quantitative effect of M on the proportion of cooperation among the selected strategies (sp_i^{sl_i}) of all individuals, including those who did not play games because none of their neighbors selected the same layer; we call this proportion P_C.
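To make the measurement concrete, a minimal driver for one trial could look as follows, reusing the functions sketched in Section 2 (build_layers, payoff_table, play_games, update); the names and the loop structure are our illustration rather than the authors' code.

```python
# A sketch of one trial: G steps of playing games and updating strategies,
# recording P_C (proportion of cooperation among the selected strategies).
import random

def run_trial(N=100, M=3, k=10.0, b=1.7, G=10000, mu=0.02, nu=0.02):
    layers = build_layers(N, M, k)
    payoff = payoff_table(b)
    sl = [random.randrange(M) for _ in range(N)]                       # random initial layers
    sp = [[random.choice(['C', 'D']) for _ in range(M)] for _ in range(N)]
    pc_history = []
    for _ in range(G):
        P = play_games(layers, sl, sp, payoff)
        sl, sp = update(layers, sl, sp, P, M, mu, nu)
        pc_history.append(sum(sp[i][sl[i]] == 'C' for i in range(N)) / N)  # P_C
    return pc_history
```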
We plot the average of P_C over all generations with different combinations of M and b as a heat map in Figure 2. The horizontal axis shows the number of layers M, and the vertical axis shows the temptation to defect b.
P_C increased with increasing M and decreasing b. Thus, the multiplex network facilitated the evolution of cooperation under all conditions of the Prisoner’s Dilemma. Specifically, P_C decreased with increasing b in all cases of M, but the amount of the decrease in P_C became slightly smaller as M increased from three. Thus, the negative effect of b on cooperative strategies can be reduced by increasing the number of layers M. Additional experiments showed that P_C increased as the average degree k decreased.
Next, we plot the normalized entropy of the probability distribution of sl_i in Figure 3, as a measure of the degree of dispersion of individuals over the layers. Because the maximum value of the entropy differs among the cases of M, we adopted the normalized entropy, obtained by dividing by the maximum value log M. The entropy is trivially the smallest (0) when M = 1. However, it almost reached the highest value when M = 2, meaning that the individuals were uniformly distributed between the two layers. As M increased and b decreased, the entropy slightly decreased and reached around 0.8 when M = 20. This tendency implies that an uneven distribution of individuals could contribute to the evolution of cooperation.
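A sketch of this measure, under the assumption that it is the Shannon entropy of the empirical distribution of selected layers normalized by its maximum log M (the function name is ours):

```python
# Normalized entropy of the layer-selection distribution (0 = everyone in one
# layer, 1 = individuals spread uniformly over all M layers).
import math
from collections import Counter

def normalized_layer_entropy(sl, M):
    if M < 2:
        return 0.0                      # a single layer carries no uncertainty
    N = len(sl)
    H = -sum((c / N) * math.log(c / N) for c in Counter(sl).values())
    return H / math.log(M)
```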
Then, we focus on the evolution process of the proportion of individuals and the proportion of cooperation in the selected strategies in each layer.
We plot the transition of these indices from the 2000th to the 3000th step in typical trials when b = 1.7 and M = 1 (Figure 4), 3 (Figure 5) and 9 (Figure 6). We focus on this period in order to observe the typical transitions after the transient process from the initial population. There are M panels, each corresponding to a layer. The horizontal axis represents the time step, the red line represents the proportion of cooperation among the selected strategies (sp_i^{sl_i}) of individuals in the corresponding layer l (P_C(l)), and the blue line represents the proportion of individuals in the corresponding layer l (P_I(l)). In addition, except in Figure 4, there is an additional panel at the bottom, which shows the average proportion of cooperation in the selected strategies over all of the layers (P_C).
When M = 1 (Figure 4), all individuals exist in the single layer (P_I(0) = 1). The proportion of cooperators fluctuated slightly at small values around 0.15. This is the baseline behavior of a standard model for the evolution of cooperation on a single random network.
On the other hand, when M = 3 (Figure 5), the average proportion of cooperators (P_C) was higher than that when M = 1, fluctuating at around 0.25. We also see that the proportion of individuals in each layer (P_I(l)) fluctuated strongly and often reached very high values. This means that the individuals were distributed over all of the layers, but they often gathered in a single layer.
Furthermore, when M = 9 (Figure 6), the average proportion of cooperators became around 0.3, which was higher than that when M = 3. The burst-like rise and fall of the proportion of individuals (P_I(l)) in a layer was more pronounced, and it often reached a peak around 0.8, meaning that most of the individuals selected the same layer. On the other hand, the proportion of individuals in the other layers tended to be much smaller than 0.2. We also see a gradual increase in the proportion of cooperators before the burst of the proportion of individuals and a rapid decrease after it.
The reason why this evolutionary dynamics facilitated cooperation can be summarized as follows. In this model, there are no games between individuals in different layers. Thus, the smaller the proportion of individuals in a layer, the higher the locality of interactions, because fewer links are effectively used for playing games. It has been pointed out that higher locality due to a smaller number of links can facilitate the evolution of cooperation [2], and also that the existence of a certain fraction of vacant nodes may favor the resolution of social dilemmas [17]. Thus, cooperators can gradually invade a layer with a smaller number of individuals. Such cooperative relationships in the layer make individuals in other, less cooperative layers (after a burst of the number of individuals there) select the focal layer, which brings about a rapid increase in the proportion of individuals in the layer. However, this in turn allows defectors to invade the focal layer, and thus the proportion of cooperators decreases rapidly. In such a population of defectors, individuals select other, more cooperative layers, because it is better not to play games with neighbors than to play games with many defectors. This process causes another burst of the proportion of individuals in another cooperative layer.
It should also be noted that this cyclic coevolutionary process was observed more clearly when M was larger. However, we expect that similar dynamics, at least in part, contributed to the evolution of cooperation even when M was smaller, where the evolutionary dynamics were more stochastic.
In Figure 7, we plot the trajectory of these two indices in the layer 0 of the top panel in Figure 6 from the 2000th to the 4000th step. The horizontal axis represents the proportion of individuals in the layer, and the vertical axis represents the proportion of cooperative strategies among the selected strategies in the layer. We see that the cyclic coevolution process of these indices occurred repeatedly.
Overall, repeated occurrences of this dynamic coevolution process of game strategies and layer selection strategies are expected to maintain the high proportion of cooperators in the whole population.

4. Effects of the Heterogeneity among Layers on Cooperation

In the previous experiments, we used an ER random graph as the network of each layer. Thus, all of the layers shared the homogeneous stochastic properties of the random network (e.g., degree distribution), but their actual topologies (i.e., node-to-node connections) were heterogeneous because we generated each network stochastically. In other words, each individual has a different neighborhood in each layer. We discuss the effect of this heterogeneity among layers on the evolution of cooperation.
We conducted experiments with an additional parameter λ for adjusting the heterogeneity among layers in the above sense. Specifically, we generated the networks of layers as follows:
i. We create a single ER random graph and assume its topology to be the initial structure of all of the layers.
ii. For each link in each layer, with probability λ, we rewire both ends of the link to randomly selected nodes that have no connection between them.
This procedure ensures that all of the layers have the same node-to-node connections when λ = 0.0 and that there is no relationship among the topologies of the layers when λ = 1.0. Thus, a larger λ means a larger heterogeneity among layers, and the previous results correspond to the case of λ = 1.0.
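A sketch of this layer-generation procedure under our reading of step ii (each selected link is replaced by a new link between a randomly chosen, currently unconnected pair of nodes); the function name and parameter defaults are illustrative assumptions.

```python
# Generate M layers that all start from one ER graph and are then rewired
# link by link with probability lambda_, controlling heterogeneity among layers.
import random
import networkx as nx

def build_heterogeneous_layers(N=100, M=3, k=3.0, lambda_=0.5, seed=None):
    rng = random.Random(seed)
    base = nx.erdos_renyi_graph(N, k / (N - 1), seed=seed)
    layers = []
    for _ in range(M):
        g = base.copy()
        for (u, v) in list(g.edges()):
            if rng.random() < lambda_:
                # move both ends of the link to a randomly chosen unconnected pair
                while True:
                    a, b = rng.sample(range(N), 2)
                    if not g.has_edge(a, b):
                        break
                g.remove_edge(u, v)
                g.add_edge(a, b)
        layers.append(g)
    return layers
```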
Figure 8 shows the average of P_C over all of the generations with different combinations of M and λ. We used the following values as the experimental parameters: N = 100, M ∈ {1, 3, ..., 19}, k = 3.0, b = 1.7, G = 10,000, μ = 0.02, and λ ∈ {0.0, 0.05, 0.1, 0.2, ..., 1.0}. The experimental results are the average of three trials for each combination of the parameter settings of M and λ. The horizontal axis represents the number of layers M, and the vertical axis shows the average of P_C over the trials. Each line corresponds to the results with a different λ.
This heterogeneity significantly contributed to the evolution of cooperation. When λ was 0, P_C did not increase at all with increasing M. On the other hand, as λ increased, P_C gradually increased with increasing M, which shows that the heterogeneity among layers is a key factor for the evolution of cooperation.
This phenomenon can be explained as follows: when λ is small, an individual tends to have the same individuals as its neighbors in all layers. In this case, it tends to play with the same neighbors even after it has selected a different layer by imitating a more adaptive neighbor. However, such an imitated neighbor is expected to be a defector owing to its high fitness. Thus, the focal individual tends to keep playing with defectors when the heterogeneity among layers is low. On the contrary, when λ is large, changing the layer selection strategy can give an individual different neighbors, or no neighbors, in the newly selected layer, which provides more chances for cooperative strategies to grow clusters.

5. Effects of Network Types on Cooperation

Finally, to assess the robustness of the findings discussed above, we conducted experiments with different types of networks. Instead of ER random graphs, we adopted a one-dimensional Watts–Strogatz (WS) model [18] or a Barabási–Albert (BA) model [19] to generate the network of each layer. Table 2 shows the average P_C when M = 1 and M = 19 for the three network topologies ER, WS and BA. We used the following values as the experimental parameters: N = 100, b = 1.2, G = 10,000, μ = 0.02. We used the same average degree k = 4 to generate the networks in all cases. P_C when M = 19 was higher than P_C when M = 1 in all cases. Thus, multiplex networks contribute to the promotion of cooperation regardless of the difference in network topologies.
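For reference, a sketch of generating the three layer topologies with the same average degree k = 4 using networkx; the mapping of k to the generators' parameters (WS ring degree k with rewiring probability p = 0.1, BA attachment parameter m = k/2) is our assumption, not a detail taken from the paper.

```python
# Layer generators for the three topologies compared in Table 2.
import networkx as nx

def make_layer(topology, N=100, k=4):
    if topology == 'ER':
        return nx.erdos_renyi_graph(N, k / (N - 1))       # average degree ~k
    if topology == 'WS':
        return nx.watts_strogatz_graph(N, k, 0.1)         # ring of k neighbors, p = 0.1
    if topology == 'BA':
        return nx.barabasi_albert_graph(N, k // 2)        # m = k/2 gives average degree ~k
    raise ValueError(f"unknown topology: {topology}")
```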

6. Conclusions

We discussed whether and how the evolution of layer selection strategy can contribute to the emergence of cooperative behaviors in multiplex networks of social interactions. We constructed a coevolutionary model of cooperation and layer selection strategies in which each individual selects one layer from multiple layers and plays the Prisoner’s Dilemma with neighbors in the selected layer.
From the results of the experiments, we found that the proportion of cooperative strategies increased as the number of layers increased, regardless of the degree of the dilemma, and that this increase occurred due to a cyclic coevolution process of game strategies and layer selection strategies. The emergence of such a cyclic process has also been pointed out by studies of the coevolution between cooperative behavior and the structure of the interaction network, in which network rewiring strategies coevolve with game strategies [20,21]. This suggests that such a dynamic process could be a common phenomenon in the real world.
We also showed that the heterogeneity among layers is a key factor for multiplex networks to facilitate the evolution of cooperation, and such positive effects on cooperation were observed regardless of the difference in stochastic properties of network topologies. Future work includes experiments with a multiplex network composed of different types of complex networks.

Acknowledgments

This work was supported in part by the Japan Society for the Promotion of Science Grant-in-Aid for Scientific Research (JSPS KAKENHI), Grant Numbers 15K00335 and 15K00304.

Author Contributions

K.H., R.S. and T.A. conceived and designed a model, and wrote the paper; K.H. conducted experiments and analyses.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nowak, M.A.; May, R.M. Evolutionary games and spatial chaos. Nature 1992, 359, 826–829.
  2. Ohtsuki, H.; Hauert, C.; Lieberman, E.; Nowak, M.A. A simple rule for the evolution of cooperation on graphs and social networks. Nature 2006, 441, 502–505.
  3. Kivelä, M.; Arenas, A.; Barthelemy, M.; Gleeson, J.P.; Moreno, Y.; Porter, M.A. Multilayer networks. J. Comp. Net. 2014, 2, 203–271.
  4. Wang, Z.; Wang, L.; Szolnoki, A.; Perc, M. Evolutionary games on multilayer networks: A colloquium. Eur. Phys. J. B 2015, 88, 124.
  5. Buldyrev, S.V.; Parshani, R.; Paul, G.; Stanley, H.E.; Havlin, S. Catastrophic cascade of failures in interdependent networks. Nature 2010, 464, 1025–1028.
  6. Wang, B.; Chen, X.; Wang, L. Probabilistic interconnection between interdependent networks promotes cooperation in the public goods game. J. Stat. Mech. 2012, 2012, P11017.
  7. Wang, Z.; Szolnoki, A.; Perc, M. Evolution of public cooperation on interdependent networks: The impact of biased utility functions. EPL (Europhys. Lett.) 2012, 97, 48001.
  8. Wang, Z.; Szolnoki, A.; Perc, M. Interdependent network reciprocity in evolutionary games. Sci. Rep. 2013, 3, 1183.
  9. Wang, Z.; Szolnoki, A.; Perc, M. Optimal interdependence between networks for the evolution of cooperation. Sci. Rep. 2013, 3, 2470.
  10. Wang, Z.; Szolnoki, A.; Perc, M. Self-organization towards optimally interdependent networks by means of coevolution. New J. Phys. 2014, 16, 033041.
  11. Santos, M.; Dorogovtsev, S.N.; Mendes, J.F.F. Biased imitation in coupled evolutionary games in interdependent networks. Sci. Rep. 2014, 4, 4436.
  12. Wang, Z.; Wang, L.; Perc, M. Degree mixing in multilayer networks impedes the evolution of cooperation. Phys. Rev. E 2014, 89, 052813.
  13. Gómez-Gardenes, J.; Reinares, I.; Arenas, A.; Floría, L.M. Evolution of cooperation in multiplex networks. Sci. Rep. 2012, 2, 620.
  14. Zhang, Y.; Fu, F.; Chen, X.; Xie, G.; Wang, L. Cooperation in group-structured populations with two layers of interactions. Sci. Rep. 2015, 5, 17446.
  15. Perc, M.; Szolnoki, A. Coevolutionary games—A mini review. Biosystems 2010, 99, 109–125.
  16. Masuda, N.; Aihara, K. Spatial prisoner’s dilemma optimally played in small-world networks. Phys. Lett. A 2003, 313, 55–61.
  17. Wang, Z.; Szolnoki, A.; Perc, M. If players are sparse social dilemmas are too: Importance of percolation for evolution of cooperation. Sci. Rep. 2012, 2, 369.
  18. Watts, D.J.; Strogatz, S.H. Collective dynamics of ‘small-world’ networks. Nature 1998, 393, 440–442.
  19. Barabási, A.L.; Albert, R. Emergence of scaling in random networks. Science 1999, 286, 509–512.
  20. Suzuki, R.; Kato, M.; Arita, T. Cyclic coevolution of cooperative behaviors and network structures. Phys. Rev. E 2008, 77, 021911.
  21. Hauert, C.; De Monte, S.; Hofbauer, J.; Sigmund, K. Volunteering as red queen mechanism for cooperation in public goods games. Science 2002, 296, 1129–1132.
Figure 1. A schematic image of our model (M = 3). Each individual selects a layer, and plays games with its neighbors in the focal layer who selected the same layer. In other words, a game is played between neighboring individuals in the same layer.
Figure 2. The proportion of cooperative behaviors among the selected strategies sp_i^{sl_i} (P_C). The increase in P_C with increasing M and decreasing b showed that the multiplex network facilitated the evolution of cooperation under all conditions of the Prisoner’s Dilemma.
Figure 3. The normalized entropy of the probability distribution of layer selection strategies sl_i. The slight decrease in the entropy with increasing M from 2 implies that an uneven distribution of individuals could contribute to the evolution of cooperation.
Figure 4. The transition of the proportion of cooperative behaviors (red) among the selected strategies sp_i^{sl_i} (P_C = P_C(0)) and the proportion of individuals (blue) (P_I(0)) in the layer 0 when M = 1. The proportion of cooperators fluctuated slightly at small values around 0.15. This is the baseline behavior of a standard model for the evolution of cooperation on a single random network.
Figure 5. The transition of the proportion of cooperative behaviors (red) among the selected strategies (P_C(l)) and the proportion of individuals (blue) (P_I(l)) in the layer l when M = 3. The bottom panel shows the proportion of cooperation among all of the selected layers (P_C). P_C fluctuated at around 0.25, and P_I(l) fluctuated strongly and often reached very high values. This means that the individuals were distributed over all of the layers, but they often gathered in a single layer.
Figure 6. The transition of the proportion of cooperative behaviors (red) among the selected strategies (P_C(l)) and the proportion of individuals (blue) (P_I(l)) in the layer l when M = 9. The bottom panel shows the proportion of cooperation among all of the selected layers (P_C). The burst-like rise and fall of the proportion of individuals (P_I(l)) in a layer was more pronounced, and it often reached a peak around 0.8, meaning that most of the individuals selected the same layer.
Figure 7. The trajectory of the proportion of cooperative behaviors (P_C) and the proportion of individuals (P_I) in the layer 0 of the top panel in Figure 6. The cyclic coevolution process of these indices occurred repeatedly.
Figure 8. The proportion of cooperative behaviors among the selected strategies for different degrees of heterogeneity among layers λ. The heterogeneity significantly contributed to the evolution of cooperation, especially as M increased.
Table 1. A payoff matrix of Prisoner’s Dilemma game (PDG). b represents the temptation to defect.
                 Cooperate (C)    Defect (D)
Cooperate (C)         1              −1
Defect (D)            b              −1
Table 2. The proportion of cooperation in the selected strategies (P_C) when M = 1 and M = 19 for different types of networks. Multiplex networks contribute to the promotion of cooperation regardless of the difference in network topologies.
Network Topology     ER        WS (p = 0.1)    BA
P_C (M = 1)          0.177     0.0286          0.241
P_C (M = 19)         0.365     0.318           0.424
P_C: the proportion of cooperation in the selected strategies. ER: Erdös–Rényi; WS: Watts–Strogatz; BA: Barabási–Albert.
