Network Characteristic Control of Social Dilemmas in a Public Good Game: Numerical Simulation of Agent-Based Nonlinear Dynamics

Abstract: This paper proposes a possible mechanism for obtaining sizeable behavioral structures by simulating network–agent dynamics in an evolutionary public good game with available social learning. The model considers a population with a fixed number of players. In each round, the chosen players may contribute part of their value to a common pool. Then, each player may imitate the strategy of another player based on relative payoffs (whoever has the lower payoff adopts the strategy of the other player) and change his or her strategy using different exploratory variables. Relative payoffs are subject to incentives, including participation costs, but may also be subject to mutation, whose rate is sensitized by the network characteristics (social ties). The process discussed in this report is interesting and relevant across a broad range of disciplines that use game theory, including cultural evolutionary dynamics.


Introduction
Many computational simulations in game theory have shown that a substantial fraction of players are willing to invest costs (i.e., contribute to a joint effort) to increase their fitness [1]. In general, the rate of incentive (reward) is sufficient to increase the average level of prosocial contributions. Indeed, the system itself is a public good, and the players are often seen as altruistic because others benefit from their costly efforts [2].
Conversely, there are always players who take advantage of the contributions of others toward the public good, called defectors (or free-riders). Defection spreads among players voluntarily and ultimately results in a cascade through the system. One way to protect the system against defection is for players to refrain from potential risks [3], but defecting strategies are then adopted by other players [4], leading to infinite regress. Moreover, even if everyone contributes to the public good, risk can propagate without the need for an incentive. The number of defectors may grow through simple cultural evolution, allowing the risk potential to propagate with impunity.
Although the abovementioned assumptions can help achieve prosocial behaviors (i.e., cooperation), detailed investigations of the mechanisms combining fundamental logics that provide realistic options are still necessary [5,6]. Therefore, a computerized model may serve as an essential tool that combines actual circumstances with simulations [7]. The results can be regarded as a sampled subset of the underlying social network [8]. For example, if a plausible model of the underlying network and its agent dynamics is found, it may be possible to infer which contacts are likely to create a propagation route [9]. Indeed, communication among nearby individuals is typically more frequent than long-range connections, providing efficient paths for bias spreading, as observed in real-world networks. Furthermore, to facilitate the identification of common grounds for integrating knowledge and strategies, the mechanisms and serial algorithms that underpin them must be specified.
A major challenge, even in a simple game, is to determine the ranking of payoff values. This model can achieve this objective by simply defining the dynamics of two strategies with four outcomes. We assume that players must choose between n options or strategies. In a simple case, players one and two simultaneously reveal a penny. If both pennies show heads or tails, player two must pay USD 1 to player one. On the other hand, if one penny shows heads and the other shows tails, player one must pay USD 1 to player two. The game, then, can be described by the following payoff matrix:

                 Heads       Tails
    Heads      (1, -1)     (-1, 1)
    Tails      (-1, 1)     (1, -1)        (1)

The matrix describes the pair (a_ij, b_ij) of payoff values in the ith row and jth column. It shows that if the outcome is (-1, 1), then player one (who chooses here the row of the payoff matrix) would have done better if he or she had chosen the other row. On the other hand, if the outcome had been (1, -1), then player two (the column player) would have done better if he or she had switched.
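As a minimal illustration, the payoff matrix of Equation (1) can be encoded directly (a sketch; the dictionary layout and function name are illustrative, with the payoff values taken from the matrix above):

```python
# Matching pennies (Equation (1)): player one wins when the pennies match.
# payoffs[(row, col)] = (payoff to player one, payoff to player two)
payoffs = {
    ("H", "H"): (1, -1),
    ("H", "T"): (-1, 1),
    ("T", "H"): (-1, 1),
    ("T", "T"): (1, -1),
}

def payoff(choice_one, choice_two):
    """Return the pair (a_ij, b_ij) for player one's row and player two's column."""
    return payoffs[(choice_one, choice_two)]

# The game is zero-sum: the two entries of every outcome cancel.
assert all(a + b == 0 for a, b in payoffs.values())
```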
Players one and two have diametrically opposed interests here. This strategic situation is very common in games. During a game of soccer, in a penalty kick, if the striker and keeper are mismatched, the striker is happy; but if they are matched, the keeper is happy. This logic applies to many common preference situations, and an efficient means of representing these dynamics is to let a player decide to implement a strategy with a given probability.
A mixed strategy is a probability distribution x = (x_1, x_2, . . . , x_n), with x_i ≥ 0 and x_1 + · · · + x_n = 1, over two or more pure strategies (x_i) from which the players choose randomly; this feature is denoted as the set of all probability (p_r) distributions of such mixed strategies, as follows

    p_r(x_1) + p_r(x_2) + · · · + p_r(x_n) = 1 = 100%        (4)

For all events, the player is expected to be happy for half the time and unhappy for the remaining time.
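The claim that a uniformly mixing player is happy half the time can be checked numerically. The sketch below (function name and probability parameters are illustrative) computes player one's expected payoff in matching pennies when each player plays heads with a given probability:

```python
def expected_payoff(p1_heads, p2_heads):
    # Player one's expected payoff in matching pennies when player one plays
    # heads with probability p1_heads and player two with probability p2_heads.
    ev = 0.0
    for c1, pr1 in (("H", p1_heads), ("T", 1 - p1_heads)):
        for c2, pr2 in (("H", p2_heads), ("T", 1 - p2_heads)):
            ev += pr1 * pr2 * (1 if c1 == c2 else -1)  # +1 on a match, -1 otherwise
    return ev
```

With the 50/50 mix, the expected payoff is 0 regardless of what the opponent does, which is exactly the indifference described above.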
It does not matter what the players do because they cannot change the outcome, so they are just as happy to flip a coin (in the coin game) or to choose one of two directions (in soccer). The only question is whether one player is able to anticipate the decision of the other player. If the striker knows that the keeper is playing heads (one direction of the two), the striker will avoid heads and play tails (the other direction of the two). However, anticipation is not usually possible unless the payoff is changed to provide a different outcome. In fact, the game can be expanded to arbitrarily mixed payoff conditions {A > B > C}, in which players one and two again choose between two actions and the entries of the payoff matrix are drawn from A, B, and C.
The expected utility of a player (σu) can be described in terms of these mixed conditions. With even a slightly different payoff related to the arbitrarily given mixed conditions {A > B > C}, the payoff will always be positive for (0, 1) under the above conditions. With this dynamic, once the probability with which each outcome occurs is determined, the payoffs of players one and two can be obtained by multiplying the probability of each outcome by the payoff of that particular outcome; these probabilities satisfy the rule of probability distributions given above. This relative welfare distribution, derived directly from two-by-two games, enables a fundamental assumption to be set for evolutionary dynamics.

Replicator Dynamics
The core implementation of game theory lies in games with imperfect information. As described above, the zero-sum game is a classic example and an appropriate benchmark for applications with imperfect information [14]. The latter is very important because real-world problems often fall into this category [15]. Thus, we applied these approaches because of their generalizability to real-world cases in public safety, wildlife conservation, public health, and other fields.
First, let the model consider a population of players, each with a given strategy. Occasionally, the players meet randomly and play the game according to a plan. If we suppose that each player is rational, individuals consider several types of different payoffs for each person: each player has the payoff {π(i)}, which shows how well that type (i) is doing, and each type has a proportion {Pr(i)}. Then, the players choose certain strategies that they consider to be the best outcomes for the entire population.
Here, we consider a population of types, and those populations succeed at various levels. Some do well, and some do not. The dynamics of the model suppose a series of changes in distribution across types, such that there is a set of types {1, 2, . . . , N}, a payoff for each type (π_i), and a proportion for each one (Pr_i). The strategy of each player in each round is given as a probability that is the ratio of this weight to those of all possible strategies:

    x_i = Pr(i)π(i) / Σ_{j=1}^{N} Pr(j)π(j)

That is, x_i is the probability that an individual player will use a strategy times the payoff [Pr(i)π(i)], divided by the sum of the weights of all strategies [Σ_{j=1}^{N} Pr(j)π(j)], where Pr(i) is the proportion of each type, and π(j) is the payoff for each type. Thus, the probability that the individual player will act in a certain way in the next round is only the relative weight of that action. Specifically, suppose there are different probabilities Pr(i) of using different strategies (x, y, and z), i.e., strategies x, y, and z have probabilities of 40%, 40%, and 20%, respectively. These probabilities could lead one to guess that strategies x and y are better than z. However, one can also look at the payoffs π(i) of the different strategies. For instance, payoffs of 5, 4, and 6 can be obtained when using strategies x, y, and z, respectively. This information prompts one to consider the strategy to use, and the answer depends on both the payoff and probability.
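The worked example above can be reproduced with a short sketch (the function name is illustrative; the numbers are those from the text):

```python
def choice_probabilities(proportions, payoffs):
    # x_i = Pr(i)*pi(i) / sum_j Pr(j)*pi(j): the relative weight of each strategy.
    weights = [pr * pi for pr, pi in zip(proportions, payoffs)]
    total = sum(weights)
    return [w / total for w in weights]

# Strategies x, y, z with proportions 40%, 40%, 20% and payoffs 5, 4, 6.
probs = choice_probabilities([0.4, 0.4, 0.2], [5, 4, 6])
```

The weights are 2.0, 1.6, and 1.2, so strategy x remains the most likely choice even though z has the highest payoff, illustrating that the answer depends on both payoff and proportion.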
With this dynamic, the model presented herein describes how individuals could choose what to do or which strategies are best. Given that after a certain move, some will appear to be doing better than others, the ones doing the worst are likely to copy the ones doing better. Based on the cultural evolutionary assumptions for PGGs, we specified the respective frequencies of actions of cooperators (P_c), defectors (P_d), and loners (P_l) (see Table 1). The experimenter assigns a value to each player; then, the players may contribute part (or all) of their value to the game (a common pool). In each round, a sample S of individuals is chosen randomly from the entire population N. The S (0 ≤ S ≤ N) individuals participate in the game, paying a cost g. Each round requires at least two participants (a cooperator and a defector); the others must be nonparticipants. The cooperators contribute a fixed amount of value c > 0 and share the outcome, multiplied by the interest rate r (1 < r < N), equally among the other S − 1 participants; defectors are in the round but do not contribute values. During the round, the payoffs for strategies P_c, P_d, and P_l (whose frequencies are denoted by x, y, and z, respectively) are determined with a participation cost c and the interest rate multiplied by the fixed contribution cost, rc, based on the relative frequencies of the strategies (n_c/N). If n_c denotes the number of cooperators among the public good players, the return of the public good (i.e., the payoff to the players in the group) depends on the number of cooperators, where P_c is the payoff of the cooperators, P_d is the payoff of the defectors, and rc is the interest rate (r) multiplied by the fixed contribution cost (c) for the common good. For the expected payoff values of cooperators (P_C) and defectors (P_D), a defector in a group with S − 1 co-players (S = 2, . . . , n) obtains, on average, a payoff of

    P_D = rcx/(1 − z)

from the common good because the nonparticipants (z) have a payoff of 0 [11], where z^(n−1) is the probability of finding no co-player. (This term can be related to the power rule for derivatives, d(z^n)/dz = nz^(n−1), which applies as the population is reduced to loners, i.e., nonparticipants.) In addition, cooperators contribute effort c with a probability 1 − z^(n−1). Hence,

    P_C = P_D − c(1 − z^(n−1))

The average payoff (P) in the population is then given by

    P = xP_C + yP_D + zP_L = xP_C + yP_D    (since P_L = 0)

The replicator equation gives the evolution of the three strategies in the population. The frequencies x_i of the strategies i can simply be represented as

    ẋ_i = x_i(P_i − P)
where x i denotes the frequency of strategy i, P i is the payoff of strategy i, and P represents the average payoff in the population. Accordingly, a strategy will spread or dwindle depending on whether it does better or worse than average. This equation holds that populations can evolve, in the sense that the frequencies x i of strategies change with time.
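A single Euler step of this equation can be sketched as follows (the payoff vector here is a fixed illustrative input; in the model, P_i would be computed from the current frequencies):

```python
def replicator_step(x, P, dt=0.01):
    # dx_i/dt = x_i * (P_i - Pbar), integrated with one explicit Euler step.
    Pbar = sum(xi * Pi for xi, Pi in zip(x, P))  # average payoff in the population
    return [xi + dt * xi * (Pi - Pbar) for xi, Pi in zip(x, P)]

# Cooperators, defectors, loners at equal frequencies with illustrative payoffs.
x_next = replicator_step([1/3, 1/3, 1/3], [0.5, 1.0, 0.0])
```

Because Σ x_i(P_i − P) = 0, the frequencies continue to sum to 1 after each step; the strategy above average grows, while the one below average shrinks.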
In Figure 1, we let the state x(t) depend on time, denoted as x_i(t), where ẋ_i changes as dx_i/dt. Categorization by strategy in the voluntary PGG shows the prototype in which cooperators are dominated by defectors, defectors by loners, and loners by cooperators. The mechanism is particularly focused on the growth rates of the relative frequencies of strategies. In other words, the state of the population evolves according to the replicator equation, where the growth rate (ẋ_i/x_i) of the frequency of a strategy corresponds to the difference between its payoff (P_i) and the average payoff (P) in the population.


Imitation Dynamics with Updating Algorithm
In the cultural evolutionary context considered herein, strategies are unlikely to be inherited, but they can be transmitted through social learning [16]. If it is assumed that individuals imitate each other, replicator dynamics are obtained again. Specifically, a randomly chosen individual from the population occasionally imitates a given model with a certain likelihood. Thus, the probability that an individual switches from strategy i to strategy j is given by

    p_(i→j) = 1/(1 + e^(P_i − P_j))

This equation is simply the replicator equation in pairwise form, and it states that a player (P_i) making a comparison with another player (P_j) will adopt the strategy of the other player only if it promises a higher payoff. This switch is more likely to occur if the difference in payoff is a function of the frequencies of all strategies, based on pairwise interactions [17]. The focal individual compares his or her payoff (P_i = π_f) with the payoff of the role individual (P_j = π_r); then, the focal individual chooses to imitate (or not) the role individual given

    p = 1/(1 + e^(π_f − π_r))

This mechanism (Tables 2-4) holds the factorial for the payoff, i.e., how many combinations of r objects can we take from n objects:

    C(n, r) = n!/(r!(n − r)!)

with the Gillespie algorithm (stochastic dynamic model; see the code in Table 5 for more detail) for updating the system (a_(r−1)/a_tot < z_1 < a_r/a_tot). Table 2. Strategies of potential players (C: cooperation; D: defection; L: no participation).

Definition Parameter Range
Cooperator C n.a.
Defector D n.a.
Loner L n.a.
Imitation probability pr ∈ (0, 1)

The updating code (the listing referenced as Table 5) is as follows, where PC and PD are the payoff functions of cooperators and defectors, and C, D, and L are the current counts of each strategy in a population of size M:

import math
from random import random

for ii in range(t):                                  #steps with for loop
    pc=PC(C,D,M)                                     #cooperator payoff
    pd=PD(C,D,M)                                     #defector payoff
    pl=0                                             #loner payoff
    a1=L/M+L*D/(M**2*(1+math.exp(pl-pd)))            #rate of changing L to D
    a2=L/M+L*C/(M**2*(1+math.exp(pl-pc)))            #rate of changing L to C
    a3=D/M+D*C/(M**2*(1+math.exp(pd-pc)))            #rate of changing D to C
    a4=D/M+D*L/(M**2*(1+math.exp(pd-pl)))            #rate of changing D to L
    a5=C/M+C*D/(M**2*(1+math.exp(pc-pd)))            #rate of changing C to D
    a6=C/M+C*L/(M**2*(1+math.exp(pc-pl)))            #rate of changing C to L
    atot=a1+a2+a3+a4+a5+a6                           #total rate
    ran=random()
    if (0<=ran<a1/atot):                             #L changes to D
        L-=1; D+=1
    elif (ran<(a1+a2)/atot):                         #L changes to C
        L-=1; C+=1
    elif (ran<(a1+a2+a3)/atot):                      #D changes to C
        D-=1; C+=1
    elif (ran<(a1+a2+a3+a4)/atot):                   #D changes to L
        D-=1; L+=1
    elif (ran<(a1+a2+a3+a4+a5)/atot):                #C changes to D
        C-=1; D+=1
    else:                                            #C changes to L
        C-=1; L+=1

The above procedures assume a well-mixed population with a finite number of strategies that are proportional to their relative abundances, given that the fitness values are frequency dependent, coexisting at steady or fluctuating frequencies of the evolutionary game (Figure 2). The mechanism is a combination of the rational and copying processes; in other words, an individual copies a nearby individual because that individual's strategy seems to produce a successful outcome.
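The pairwise-comparison probability used in the update rates above can be isolated as a one-line function (a sketch; the function name is illustrative, and the exponent matches the one used in the rates a1 through a6):

```python
import math

def switch_probability(p_focal, p_role):
    # Probability that the focal individual adopts the role individual's
    # strategy: 1 / (1 + exp(P_i - P_j)), increasing in the payoff gap.
    return 1.0 / (1.0 + math.exp(p_focal - p_role))
```

Equal payoffs give a coin flip (0.5), and a large payoff advantage of the role individual makes imitation almost certain.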
Figure 2. (See Tables 2-4 for more detail regarding the parameters.) The plot on the right side shows the categorized oscillation of strategies (cooperators = blue, defectors = red, loners = yellow).
The simulation results in Figure 3 indicate that the intermediate interest rate and the participation cost have different effects on the system. An increase in the interest rate (r) prompts the population to undergo stable oscillations around a global attractor, where the players participate by contributing to the public good. However, if the contribution is too expensive, i.e., if the participation cost is g ≥ (r − 1)c + l for rewarding or g ≥ (r − 1)c for punishing, the players opt out of participation (Figure 4). In this scenario, nonparticipation becomes the global attractor (bottom right plot of Figure 3).
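The opt-out condition can be stated as a small predicate (a sketch; the function name and the keyword switching between rewarding and punishing are illustrative, with l denoting the extra term in the rewarding threshold):

```python
def opts_out(g, r, c, l=0.0, rewarding=True):
    # Players opt out when the participation cost g reaches the threshold:
    # g >= (r - 1)*c + l under rewarding, or g >= (r - 1)*c under punishing.
    threshold = (r - 1) * c + (l if rewarding else 0.0)
    return g >= threshold

# With r = 3, c = 1, l = 0.5: rewarding threshold 2.5, punishing threshold 2.0.
```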

Replicator-Mutator Dynamics
Not all learning occurs from others; individuals can also learn from personal experience. The replicator equation describes only selection, not drift or mutation. An intelligent player may adopt a strategy, even if no one else in the population is using it, if the strategy offers the highest payoff. The dynamics can also be modified with the addition of a small, steady rate of miscopying, a small linear contribution that exceeds the role of the selection dynamics. Consequently, the stability of the system changes, making the system structurally unstable. This feature can be interpreted as the exploration rate, and it corresponds to the mutation term in genetics [18]. Thus, by adding a mutation rate (µ) with frequency-dependent selection, it can be expected that the impact of mutations can show a more general approach to evolutionary games, without explicit modelling of their origin [19].
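One common way to add such an exploration term is uniform mutation toward each of the n strategies; the sketch below (an assumed uniform miscopying form, not necessarily the exact one used in the simulations) adds µ(1/n − x_i) to the replicator step:

```python
def replicator_mutator_step(x, P, mu, dt=0.01):
    # dx_i/dt = x_i*(P_i - Pbar) + mu*(1/n - x_i): selection plus a uniform
    # exploration (mutation) term that keeps every strategy at positive frequency.
    n = len(x)
    Pbar = sum(xi * Pi for xi, Pi in zip(x, P))
    return [xi + dt * (xi * (Pi - Pbar) + mu * (1.0 / n - xi))
            for xi, Pi in zip(x, P)]

# An extinct strategy is revived by mutation even with no payoff advantage.
x_next = replicator_mutator_step([1.0, 0.0, 0.0], [1.0, 1.0, 1.0], mu=0.3)
```

Both the selection and mutation terms sum to zero over i, so the frequencies continue to sum to 1.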
In the context of the model, both of these types of dynamics occur, i.e., individuals copy both more prominent strategies and strategies that are doing better than others. The fate of an additional strategy can be examined by considering the replicator dynamics in the augmented space (mutation) and computing the growth rate of the fitness that such types obtain in the case of evolution (shown in Figure 5). The mechanism holds for an ordinary differential equation, which contains one or more functions of an independent variable and its derivatives for updating the system, i.e., dx/dt = f(x, t); here, t is the independent variable, and x = x(t) is an unknown function of t.

Figure 5. (See Tables 6 and 7 for more details regarding the added parameters.) The colors indicate prototypes as proportions of the implementation by cooperators (blue), defectors (red), and loners (yellow to green).

Figure 5 demonstrates that mutation has a significant effect on the transition of strategies. The system settles into the different effects of the intermediate mutation rate. As this rate decreases, red individuals appear (plot on the left side), which prompts the players to participate by contributing to the public good. Conversely, as long as the mutation rate is sufficiently high, nonparticipation becomes a global attractor; selfish players continually defect by refraining from contributing (plot on the right side of Figure 5).

Table 7. Imitation and exploration of parameter values.

Replicator-Mutator including Network Dynamics
Currently, the proposed models cannot explain cooperation in communities with different average numbers of social ties. To impose the number of social ties as a parameter, the primary feature of a random graph [20] was used for the network characteristics, as follows. Firstly, individuals in the model are considered as vertices (fundamental elements drawn as nodes), and sets of two elements are drawn as lines connecting two vertices (where the lines are called edges) (left side of Figure 6). Nodes are graph elements that store data, and edges are the connections between them; however, the edges can store data as well. The edges between nodes can describe any connection between individuals (called adjacency). The nodes can contain any amount of data with the information that has been chosen to be stored in this application, and the edges include data regarding the connection strength. Networks have additional properties, i.e., edges can have direction, which means that the relationship between two nodes applies in only one direction, not the other. A "directed network" is a network that shows a direction. In the present model, however, we used an undirected network, featuring edges with no sense of direction, because with a network of individuals and edges that indicate two individuals that have met, directed edges may be unnecessary. Another essential property of this structure is connectivity. A disconnected network has some vertices (nodes) that cannot be reached by other vertices (right side of Figure 6).
A disconnected network may feature one vertex that is off to the side and has no edges. It could also have two so-called "connected components," which form a connected network on their own but have no connections between them. Thus, a connected network has no disconnected vertices, which could be a criterion for describing a network as a whole, called connectivity. The fulfillment of this criterion would depend on the information contained in the graph, usually controlled by the number of nodes and number of connections.
An object-oriented language was used to enable the creation of vertex and edge objects and assign properties to them. A vertex is identified by the list of edges that it is connected to, and the converse is true for edges. However, operations involving networks may be inconvenient if one must search through vertex and edge objects. Thus, we represent connections in networks that simply use a list of edges (left side of Figure 7). The edges are each represented with an identifier of two elements. Those elements are usually numbers corresponding to the ID numbers of vertices. Thus, this list simply shows two nodes with an edge between them, and an edge list encompasses all smaller lists. As an edge list contains other lists, it is sometimes called a two-dimensional list. We represent the edge list in a network as an adjacency list. Our vertices normally exhibit the ID number that corresponds to the index in an array (right side of Figure 7). In this array, each space is used to store a list of nodes, such that the node with a given ID is adjacent to the index with the same number. For instance, an opening at index 0 represents a vertex with an ID of 0. This vertex shares an edge with one node, so that the reference to that node is stored in the first spot in the array. Thus, because the list contains other lists, the adjacency list is two-dimensional, enabling an adjacency matrix to be used that is essentially a two-dimensional array; however, all lists within it have the same length.
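The two representations can be connected with a few lines (a sketch; the function name and the small example network are illustrative):

```python
def to_adjacency_list(n_vertices, edge_list):
    # Convert a two-dimensional edge list into an adjacency list: the entry
    # at index i holds the IDs of all vertices adjacent to vertex i.
    adjacency = [[] for _ in range(n_vertices)]
    for u, v in edge_list:
        adjacency[u].append(v)
        adjacency[v].append(u)  # undirected network, as in the present model
    return adjacency

# Four vertices, three edges; vertex 1 is connected to all others.
adj = to_adjacency_list(4, [(0, 1), (1, 2), (1, 3)])
```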


Social Ties from Eigen Centrality Algorithm
Owing to the collection of nodes influenced by connection probabilities corresponding to the adjacency list, the distribution of the connections in the network can be used for the social characteristics (probability of degree); for a random graph with connection probability p, the probability that a node has degree k follows the binomial distribution

    P(k) = C(N − 1, k) p^k (1 − p)^(N−1−k)

In this context, one may consider whether the individuals in the network interact with each other more often than with others, the conditions under which social beings are willing to cooperate, and the algorithm that can be characterized by the influence of a node in a network (eigenvector centrality of a graph). When considering the network as an adjacency matrix A (Table 8), the eigenvector centrality (Table 9) must satisfy the following equation

    Ax = λx

Table 8. Created network characteristics of parameter values.

Definition Parameter Range
Number of individuals n ∈ (0, ∞)
Connection probability p ∈ (0, 1)
Adjacency matrix (linear dimension) m×n row, col

Table 9. Calculated social ties (st) from Ax = λx ∈ (0, 1) of a random graph.

We can normalize the vector to its maximum value, bringing the vector components closer to 1. Moreover, individuals must be able to adjust their own changes to thrive. To understand a cooperative network of interaction, both the evolution of the network and the strategies within it should be considered simultaneously.
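Eigenvector centrality can be approximated with power iteration on the adjacency matrix, normalizing by the maximum component at each step as described above (a sketch; the shift by the identity, adding x_i to each row sum, is a standard trick that guarantees convergence even on bipartite graphs):

```python
def eigenvector_centrality(A, iterations=200):
    # Power iteration for Ax = lambda*x: repeatedly apply (A + I) and
    # normalize by the maximum component so the entries stay in (0, 1].
    n = len(A)
    x = [1.0] * n
    for _ in range(iterations):
        x = [x[i] + sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        m = max(x)
        x = [v / m for v in x]
    return x

# Path graph 0-1-2: the middle node is the most central.
centrality = eigenvector_centrality([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
```

For the path graph, the result converges to (1/√2, 1, 1/√2), the leading eigenvector normalized by its maximum component.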
where st represents the social ties due to the influence of the eigenvector centrality (λx) between individuals, and N is the number of nodes [21] in the sample population. Furthermore, (2 − st)/N denotes the actual connections in the network, and N(N − 1)/2 denotes the potential connections. A potential connection is one that may exist between two individuals, regardless of whether it actually does: one individual may know another, so the corresponding nodes may be connected, but whether the connection actually exists is irrelevant. In contrast, an actual connection is one that does exist (a social tie), i.e., two individuals know each other and their nodes are connected. In relation to these small linear contributions and their dynamics, structural instability can be interpreted as a characteristic of the network that is influenced by the exploration rate, which corresponds to the idea of mutation in genetics.
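Under the definitions above, the network density is the ratio of actual connections to the N(N − 1)/2 potential connections. A minimal sketch, assuming the adjacency-list representation introduced earlier:

```python
def network_density(adj):
    """Network density: actual connections divided by the N(N-1)/2
    potential connections of an undirected graph in adjacency-list form."""
    n = len(adj)
    actual = sum(len(neigh) for neigh in adj) / 2   # each edge is stored twice
    potential = n * (n - 1) / 2
    return actual / potential

# A 4-node graph with 3 of its 6 potential edges present:
adj = [[1, 2], [0], [0, 3], [2]]
d = network_density(adj)   # 3 / 6 = 0.5
```

A density of 0 means no individual knows any other; a density of 1 means every potential connection is an actual social tie.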
After grouping the network characteristics that incorporate the decisions of individuals by establishing new links or giving up existing ones [22], we propose a version of evolutionary game theory and discuss the dynamic coevolution of individual strategies and the network structure [23]. In this model, the population evolves over time as a function of payoffs, and the proportions given by replicator-mutator dynamics are multiplied by the network density (Figure 8).
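The coupling just described might be sketched as follows: one Euler step of replicator-mutator dynamics in which the exploratory mutation rate is multiplied by the network density. The payoff matrix, the uniform mutation kernel, and all numerical values are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def replicator_mutator_step(x, payoff, mu, density, dt=0.01):
    """One Euler step of replicator-mutator dynamics for strategy
    frequencies x, with the exploratory mutation rate mu scaled by the
    network density (the social-tie effect discussed in the text)."""
    f = payoff @ x                      # fitness of each strategy
    phi = x @ f                         # population-average fitness
    m = mu * density                    # density-mediated mutation rate
    k = len(x)
    # Q[i, j]: probability that strategy j is transcribed as strategy i
    Q = np.full((k, k), m / (k - 1))
    np.fill_diagonal(Q, 1.0 - m)
    dx = Q @ (x * f) - phi * x          # replicator-mutator vector field
    return x + dt * dx

# Illustrative 3-strategy setting (cooperator / defector / loner); payoffs assumed.
payoff = np.array([[1.0, 0.0, 0.5],
                   [1.8, 0.2, 0.5],
                   [0.5, 0.5, 0.5]])
x = np.array([0.4, 0.3, 0.3])
for _ in range(1000):
    x = replicator_mutator_step(x, payoff, mu=0.05, density=0.4)
```

Because the columns of Q sum to one, the frequencies remain on the simplex; raising the density raises the effective mutation rate, which is the sensitization effect discussed below for Figure 8.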
In Figure 8, the exploratory trait of an individual determines its sensitivity to the mutation rate (left and right sides of Figure 8). However, the designated network density (representing the social ties of the individuals) can mediate this sensitivity. Thus, when the network density (influenced by the eigenvector centrality λx) is sufficiently high relative to the exploratory mutation rate, individuals are sensitized to mutation (center-top panels of Figure 8); with a low network density, however, the phase portraits are not sufficiently sensitive to changes in the mutation rate (center-bottom panels of Figure 8). This systemic sensitivity to external influence produces more interesting evolutionary patterns.

Discussion
In this study, the proposed model for the PGG represents a highly nonlinear system of replicator equations that cannot readily be evaluated by purely analytical means. For a large incentive (r > 2), stable oscillations are observed, but when the cost of participation is too high (g > 1), no one will participate [24]. Accordingly, various combinations of the time averages of the frequencies and payoffs of the three strategies follow. It is impossible to increase cooperation by increasing participation costs (or decreasing incentives), as this favors defectors and loners [25]. To promote cooperation, the incentives should be increased or the participation costs decreased; this favors cooperation at a significant interest rate, in agreement with experimental results [26].
The simulation in the present model reveals that the dynamics exhibit a wide variety of adaptive mechanisms corresponding to many different combinations, leading to various oscillations in the frequencies of the three strategies. The option to drop out of these dynamics depends on the mutation rate multiplied by the network density, acting as social influence. Similar situations may occur in many societies, where small mutations are plausible risks in any network system and may make marginal contributions to jeopardizing the entire system (e.g., COVID-19 mutations) [27,28]. Additional incentives attract larger participatory groups, but growth may inherently cause decline through mutations in any situation. However, in this simulation, the average payoff of an individual remains the same depending on the network characteristics, in relation to the individual's social ties.
Cooperation is necessary in societies to achieve the required collaboration of thousands of people, most of whom live very differently. How does cooperation occur, and under what conditions will cooperation emerge in a world of egoists without central authority? Social institutions, as with anything that evolves, have likely been affected by accidental developments caused by novel combinations of ordinary features [29]. This situation makes it more difficult to address several questions pertaining to emergence in decision making. Will conflict or cooperation be more prevalent in the future? Will the complexity that results from interdependence make us unable to think and foster bias [30]? Do the current troubles between neighbors point to a more alarming direction, in which more interconnected neighbors can disagree more easily, enter into disputes that are difficult to resolve, and ultimately despise one another? Perhaps it is easier to address such questions with a complexity tool. Simulation using models is a worthwhile pursuit in the context of several problems related to decision making.
Understanding the complexity of game dynamics is one such challenge, in which we need not only new technologies and methodologies but also new ways of thinking if we are to make progress in our systems. A good tool and model are not immediately applicable or even understandable, because ground-breaking ideas challenge the current paradigms and penetrate new intellectual areas. This concept is illustrated by a simple application such as the zero-sum contribution to game dynamics, which required many years to be noticed and cited in a significant number of cases [31]. Although we used a numerical simulation to show that the trajectories of the dynamic components have direct physical interpretations, the quantitative model is adequate for obtaining qualitative results. Thus, the interpretation of the mechanical values is useful in a comparative manner. Ultimately, we can account for uncertainty by gathering the parameters in a model-driven context. The proposed model is therefore robust with respect to the set of opinions developed in the relevant fields (i.e., it is data driven).

Practical Application
We investigated how cooperation and defection change with the network characteristics, with the involvement of the overall social heterogeneity [32]. For small ties among individuals, the heterogeneity remains low because the players react only slowly to social influences. On the other hand, as relationships grow, the dynamics develop sufficiently rapidly to promote the social trap of defection [33]. Greater cooperation produces additional competition at times, which leads to a reduction in the overall network heterogeneity, given the results of this simulation. The increase in the heterogeneity of the pattern depends on the underlying societal organization; with a considerable amount of interaction, the common trap is not quickly eliminated by cooperators [34]. Thus, the survival of cooperation relies on the capacity of individuals to adjust to adverse ties, even when the rate of mutation (or systemic risk) is high. The results indicate that the simple adaptations of social relations introduced herein, coupled with the PGG, account for a marginal contribution to the mitigation of systemic risk observed in realistic networks.
In the earlier discussion on PGGs, many references were made to observations of situations in which social dilemmas, such as cooperation and defection, depend on the scale of interconnection between participants [35]. However, most reported data are at the global level (macro scale), covering many individuals with different parameter settings (i.e., participation cost, incentive, and mutation), rather than reflecting the changes within individuals (micro scale) over a series of rounds [36]. In addition to the general responses, the evolutionary variability (i.e., imitation and exploration) of individuals is another indicator of their behavioral integrity; from these concepts, it is clear that autonomous behaviors (i.e., bias) and features (i.e., utility) differ in accordance with the relative weights (i.e., network-agent dynamics) of their micro-variable potentials [37]. Thus, based on the interconnected condition, computerized simulation is expected to become more reliable with regard to the key features that are mechanically framed in this model for increasing bias levels [38]. The factors derived from this study may provide useful criteria for decision making and suggest a broader hypothesis for further research into the interconnectedness of social dilemmas in the mathematical decisions of (public good) game theory.

Informed Consent Statement: Not applicable.
Data Availability Statement: All data and materials are our own. The materials and data used to support the findings of this study are included in the Supplementary Information file.