4.1. Simulation Settings
The size of the network is kept constant during the simulations. The initial graph is generated at random with a prescribed mean degree; the companion oriented graph is trivially built from it, and the forces between any pair of neighboring players are initialized to a common value. The non-zero diagonal payoff a has been varied over a range of values in small steps; the remaining range is symmetrically equivalent. Each point in the phase space reported in the following figures is the average of 50 independent runs, and each run has been performed on a fresh realization of the corresponding initial random graph.
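As a concrete illustration, the following minimal sketch (Python with NetworkX) shows one way such an initial population could be set up. The constants N, K_MEAN and F0 are placeholders, since the exact network size, mean degree and initial link force are not restated in this section, and the attribute names are ours, not the authors'.

```python
import random
import networkx as nx

# Placeholder values: the actual size, mean degree and initial force
# used in the simulations are not restated in this section.
N = 1000
K_MEAN = 10
F0 = 0.0

def make_initial_population(n=N, k_mean=K_MEAN, f0=F0, seed=None):
    """Random graph with a prescribed mean degree, random strategies and
    uniform initial link forces (a sketch, not the authors' exact code)."""
    rng = random.Random(seed)
    # Erdos-Renyi graph whose edge probability yields the target mean degree.
    g = nx.gnp_random_graph(n, k_mean / (n - 1), seed=seed)
    for node in g.nodes:
        # Strategies are distributed uniformly at random over the players.
        g.nodes[node]["strategy"] = rng.choice(["alpha", "beta"])
    for u, v in g.edges:
        # Each endpoint keeps its own force for the shared link, mirroring
        # the companion oriented graph described above.
        g.edges[u, v]["force"] = {u: f0, v: f0}
    return g
```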
To detect steady states of the dynamics, i.e., those states with little or no fluctuation over extended periods of time, we first let the system evolve for a sufficiently long transient period of time steps (longer for the larger networks). After a quasi-equilibrium state is reached past the transient, averages are calculated over a number of additional time steps. A steady state has always been reached within the prescribed amount of time in all the simulations performed, in most cases well before the limit. We have experimented with different proportions of initial strategies, belonging to the set {α, β} and distributed uniformly at random over the players.
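This measurement protocol can be summarized by a short driver such as the one below, where step() stands for one co-evolution time step of the model (its implementation is not part of this section) and the step counts are placeholders for the transient and averaging windows mentioned above.

```python
def average_alpha_fraction(g, step, transient_steps, measure_steps):
    """Run a transient, then average the fraction of alpha-strategists over
    an additional measurement window (sketch of the protocol above)."""
    for _ in range(transient_steps):
        step(g)                      # discard the transient
    acc = 0.0
    for _ in range(measure_steps):
        step(g)
        n_alpha = sum(1 for v in g.nodes if g.nodes[v]["strategy"] == "alpha")
        acc += n_alpha / g.number_of_nodes()
    return acc / measure_steps       # quasi-equilibrium average
```

Each point of the phase diagrams discussed next would then correspond to this quantity averaged over 50 independent runs, each on a fresh random graph.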
4.2. Discussion of Results
Figure 2 reports the fraction of α-strategists in the population, once a quasi-equilibrium state has been reached, as a function of the rewiring frequency q. The upper, light part of each plot indicates the region of the parameter space where the α-strategists are able to completely take over the population. This can happen because, when the payoff difference is positive, the α strategy offers the best payoff, and β-strategists are therefore prone to adapt in order to improve their wealth.

Figure 2(a) shows the case where both α and β strategies are present in the same proportion at the beginning of the simulation. The darker region indicates the situations where diversity is able to resist. This clearly happens when the payoff difference is zero: in that case both α and β are winning strategies, and the players tend to organize into two big clusters so as to minimize the links with the opposing faction. More surprisingly, even when one of the two strategies has a payoff advantage, the evolution of the interaction topology allows the less favorable strategy to resist. The faster the network evolution (larger q), the greater the payoff difference that can be tolerated by the agents.

Figure 2(b) presents the case where α-strategists represent only 25% of the initial population. When no noise is present, the stronger strategy needs an increased payoff advantage to take over the population; when the advantage is not large enough, the payoff-inferior strategy β is able to maintain the majority.
Figure 2.
Fraction of α-strategists in the population as a function of the relinking probability q when the quasi-equilibrium has been reached. (a) shows the case where the initial fraction of α is 0.5 and noise is not present. In (b) and (c) the initial fraction of α is 0.25. (b) shows the noiseless case and (c) the case where noise is 0.01. Results are averages over 50 independent runs.
To confirm the stochastic stability of the evolution process, we ran a series of simulations using a noisy version of the strategy evolution rule [18]. The amount of noise used is 0.01, which means that an agent will pick the wrong strategy once every 100 updates on average. This quantity is rather small and does not change the results obtained when the two populations are equally represented in the initial network: the corresponding plot is almost identical to the one in Figure 2(a), up to stochastic fluctuations. However, when the initial shares are not the same, the presence of noise allows a considerable improvement in the performance of the Pareto-superior strategy when that strategy is underrepresented at the beginning.
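A noise level of 0.01 simply means that, at each strategy update, the agent adopts the opposite of the strategy it would otherwise choose with probability 0.01. The exact update rule is that of [18]; the fragment below only illustrates, under that reading, how such a noise term can be layered on top of any deterministic choice.

```python
import random

NOISE = 0.01  # one "wrong" choice every 100 updates on average

def noisy_choice(deterministic_choice, rng=random):
    """Return the strategy an agent adopts after noise is applied.
    `deterministic_choice` is whatever the noiseless rule would pick."""
    if rng.random() < NOISE:
        # Flip to the strategy the agent would not have chosen.
        return "beta" if deterministic_choice == "alpha" else "alpha"
    return deterministic_choice
```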
Figure 2(c) shows the case where the initial fraction of α-strategists is 25% of the population. We can clearly see that the strategy offering the higher payoff (α in this case, though the results for β would obviously be symmetrical) can recover a considerable part of the parameter space even when it starts from an unfavorable situation. The coexistence of stochastic errors and network plasticity allows the more advantageous strategy to improve its share; over a sizable portion of the parameter space the situation becomes almost the same as when the initial shares are equal. The same phenomena occur when the initial fraction of α is even smaller; such a case has been verified but is not shown here.
For visualization purposes, Figure 3 and Figure 4 show one typical instance of the evolution of the network and of the strategy distribution, from the initial state in which strategies are distributed uniformly at random to a final quasi-equilibrium steady state, for a smaller network. In spite of the relatively small size, the phenomena are qualitatively the same as for the larger networks used in the rest of the study; the major difference is the time to convergence, which is much shorter for the smaller system.
These results have been obtained for symmetric payoffs of the two strategies and for equal initial fractions of α-strategists and β-strategists. It is visually clear that the system goes from a random state, of both the network and the strategy distribution, to a final one in which the network is no longer completely random and, even more importantly, the strategies are distributed in a completely polarized way. In other words, the system evolves toward an equilibrium where individuals following the same convention are clustered together.
Figure 3.
(a) The simulation starts from a random network with 50 players of each type. (b) In the first, short part of the simulation the strategies reach an equilibrium; the network, however, is still unorganized. (c) The community structure then starts to emerge: many small clusters with nearly uniform strategy appear.
Since both norms are equivalent, in the sense that their respective payoffs are the same, agents tend to pair up with other agents playing the same strategy, since playing the opposite one is a dominated strategy. The process of polarization and, in some cases, even the splitting of the graph into two distinct connected components of different colors, is facilitated by the possibility of breaking and forming links whenever an interaction is judged unsatisfactory by an agent. Even with the relatively small rewiring frequency used in the case represented in the figures, polarization is reached relatively quickly. In fact, since our graph G and its oriented companion are purely relational entities devoid of any metric structure, breaking a link and forming another one may also be interpreted as “moving away”, which is what would physically happen in certain social contexts. If, on the other hand, the environment is, say, one of two forums on the Internet, then link rewiring would not represent any physical reconfiguration of the agents, just a different web connection. Although our model is an abstract one and does not claim any social realism, one can still imagine how conceptually similar phenomena may take place in society. For example, the two norms might represent two different dress codes. People dressing in a certain way, if they go to a public place, say a bar or a concert, where the majority dresses the other way, will tend to change place in order to feel better adapted to their surroundings. Of course, one can find many other examples that fit this description. An early model capable of qualitatively representing this kind of phenomenon was Schelling’s segregation cellular automaton [23], which was based on a simple majority rule. However, Schelling’s model, being based on a two-dimensional grid, is not realistic as a social network. Furthermore, the game-theoretic approach allows one to adjust the payoffs of a given strategy and is analytically solvable for homogeneous or regular graphs.
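The precise rewiring rule is specified in the model section of the paper; purely as an illustration of how a rewiring frequency q can gate such “moving away” events, a schematic step might look as follows. The dissatisfaction test is_unsatisfactory is a stand-in for the authors' force-based criterion, not their actual rule.

```python
import random

def maybe_rewire(g, agent, q, is_unsatisfactory, rng=random):
    """With probability q, let `agent` break one unsatisfactory link and
    attach to a randomly chosen non-neighbor (schematic, not the exact rule)."""
    if rng.random() >= q:
        return
    bad = [nb for nb in g.neighbors(agent) if is_unsatisfactory(g, agent, nb)]
    if not bad:
        return
    old = rng.choice(bad)
    candidates = [v for v in g.nodes if v != agent and not g.has_edge(agent, v)]
    if not candidates:
        return
    new = rng.choice(candidates)
    g.remove_edge(agent, old)
    # The fresh link starts with a neutral force on both sides.
    g.add_edge(agent, new, force={agent: 0.0, new: 0.0})
```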
The above qualitative observations can be made statistically more rigorous by using the concept of communities. Communities or clusters in networks can be loosely defined as groups of nodes that are strongly connected among themselves and poorly connected with the rest of the graph. These structures are extremely important in social networks and may determine to a large extent the properties of dynamical processes such as diffusion, search, and rumor spreading, among others. Several methods have been proposed to uncover the clusters present in a network (for a recent review see, for instance, [24]). To detect communities, here we have used the divisive method of Girvan and Newman [25], which is based on iteratively removing edges with a high value of edge betweenness. A commonly used statistical indicator of the presence of a recognizable community structure is the modularity Q. According to Newman [26], modularity is proportional to the number of edges falling within clusters minus the expected number in an equivalent network with edges placed at random. In general, networks with a strong community structure tend to have values of Q roughly in the range from 0.3 to 0.7.
In the case of our simulations, Q starts from a comparatively low value for the initial random networks, such as the one shown in Figure 3(a); it then progressively increases, reaching a higher value for the configuration of Figure 3(c) and a higher one still for the final polarized network of Figure 4. For the larger networks the modularity is slightly higher throughout the evolution, both at the beginning of the simulation and when the network has reached a polarized state. This is due to the sparser structure of these networks.
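In Newman's formulation [26], Q = Σ_i (e_ii − a_i²), where e_ii is the fraction of edges falling within community i and a_i is the fraction of edge ends attached to nodes of that community. Values like those discussed above can be obtained along the following lines with NetworkX, which provides both the Girvan–Newman edge-betweenness algorithm and the modularity measure; the stopping criterion used here (keep the partition with the highest Q among the first splits) is our choice, not necessarily the one used in [25].

```python
import networkx as nx
from networkx.algorithms import community

def girvan_newman_modularity(g, max_communities=20):
    """Run Girvan-Newman edge removal and return the partition with the
    highest modularity Q among the first `max_communities` splits."""
    best_q, best_partition = float("-inf"), None
    for partition in community.girvan_newman(g):
        q = community.modularity(g, partition)
        if q > best_q:
            best_q, best_partition = q, partition
        if len(partition) >= max_communities:
            break
    return best_q, best_partition
```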
Figure 4.
In the last phase the network is entirely polarized into two homogeneous clusters. If the simulation runs long enough, all the links between the two poles disappear.
To confirm the stability of this topological evolution we performed several simulations using the noisy strategy update rule. Even in this situation the network attains a polarized state but, owing to the stochastic strategy fluctuations, the two main clusters almost never become completely disconnected, and the modularity remains slightly lower than in the noiseless case.
As a second kind of numerical experiment, we asked how the population would react when, in a polarized social situation, a few connected players of one of the clusters suddenly switch to the opposite strategy. The results of a particular but typical simulation are shown in Figure 5. Starting from the clusters obtained as a result of the co-evolution of strategies and network leading to Figure 4, a number of “red” individuals replace some “yellow” ones in the corresponding cluster.
Figure 5.
(a) A considerable number of mutants is inserted into one of the two clusters. (b) This invasion perturbs the structure of the population, which starts to reorganize. (c) Given enough evolution time, the topology reaches a new polarized quasi-equilibrium.
The ensuing evolution is very interesting: after some time the two-cluster structure disappears and is replaced by a different network in which several clusters, each with a majority of one or the other strategy, coexist. However, these intermediate structures are unstable and, at steady state, one essentially recovers a situation close to the initial one, in which the two poles form again, with only small differences with respect to the original configuration. Clearly, the sizes of the clusters differ from those before the invasion. Even in this case, if the evolution time is long enough, the two components can end up disconnected. This means that, once formed, polar structures are rather stable, apart from noise and stochastic errors. Moreover, we observed that at the beginning of the invasion process the modularity drops slightly, owing to the strong reorganization of the network, but it then increases again and often reaches a higher value than in the previous state, as happens in the case shown here. The same occurs in the larger networks, where Q also reaches high values after the invasion process.
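The invasion experiment can be reproduced by flipping the strategy of a connected patch of players inside one of the two poles before letting the co-evolution run again. The sketch below grows such a patch by breadth-first search; the number of mutants is a free parameter, since the exact amount used for Figure 5 is not restated here, and the function name is ours.

```python
import random
from collections import deque

def insert_mutants(g, cluster_nodes, n_mutants, new_strategy="alpha", seed=None):
    """Switch a connected group of `n_mutants` players inside `cluster_nodes`
    to `new_strategy` (a sketch of the perturbation shown in Figure 5(a))."""
    rng = random.Random(seed)
    start = rng.choice(sorted(cluster_nodes))
    mutants, frontier = {start}, deque([start])
    # Grow the mutant patch by breadth-first search so that it stays connected.
    while frontier and len(mutants) < n_mutants:
        node = frontier.popleft()
        for nb in g.neighbors(node):
            if nb in cluster_nodes and nb not in mutants:
                mutants.add(nb)
                frontier.append(nb)
                if len(mutants) >= n_mutants:
                    break
    for node in mutants:
        g.nodes[node]["strategy"] = new_strategy
    return mutants
```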