Article

Dynamics of Strategy Distributions in a One-Dimensional Continuous Trait Space for Games with a Quadratic Payoff Function

National Center for Biotechnology Information, National Institutes of Health, Bldg. 38A, 8600 Rockville Pike, Bethesda, MD 20894, USA
Games 2020, 11(1), 14; https://doi.org/10.3390/g11010014
Submission received: 9 January 2020 / Revised: 5 February 2020 / Accepted: 12 February 2020 / Published: 2 March 2020
(This article belongs to the Special Issue Non-Imitative Dynamics in Evolutionary Game Theory)

Abstract

The evolution of the distribution of strategies in game theory is an interesting question that has been studied only for specific cases. Here I develop a general method for analyzing the evolution of continuous strategy distributions in games with a quadratic payoff function, for any initial distribution, in order to answer the following question: given the initial distribution of strategies in a game, how will it evolve over time? I examine several specific examples, including the normal distribution on the entire line, the truncated normal distribution, and the exponential and uniform distributions. I show that in the case of a negative quadratic term of the payoff function, regardless of the initial distribution, the current distribution of strategies becomes normal, full or truncated, and it tends to a distribution concentrated in a single point, so that the limit state of the population is monomorphic. In the case of a positive quadratic term, the limit state of the population may be dimorphic. The developed method can now be applied to a broad class of questions pertaining to the evolution of strategies in games with different payoff functions and different initial distributions.

1. Introduction

The game-theoretic approach to population dynamics developed by Maynard Smith [1,2] and many other authors (see, for example, Reference [3]) assumes that individual fitness results from payoffs received during pairwise interactions, which depend on individual phenotypes or strategies.
The approach to studying strategy-dependent payoffs in the case of a finite number of strategies is as follows. Assume $\pi(x,y)$ is the payoff received by an individual using strategy $x$ against one using strategy $y$. If there is a finite number of possible strategies (or traits), then $\pi(x,y)$ is an entry of the payoff matrix. Alternatively, the strategies may belong to a continuous rather than discrete set of values. The case when individuals in the population use strategies that are parameterized by a single real variable $x$ belonging to a closed and bounded interval $[a,b]$ was studied in References [4,5,6,7,8,9,10], among many others. A brief survey of recent results on continuous-state games can be found in Reference [6].
Specifically, the case of a quadratic payoff function was considered in References [11,12], among others.
Taylor and Jonker [13] offered a dynamical approach to game analysis known as replicator dynamics, which allows one to trace the evolution of a distribution of individual strategies (traits). Typically, it is assumed that every individual uses one of finitely many possible strategies parameterized by real numbers; in this case, the Taylor-Jonker equation reduces to a system of ordinary differential equations and can be solved using well-developed methods, subject to practical limitations stemming from the possibly high dimensionality of the system.
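For concreteness, the finite-strategy case is easy to illustrate numerically. The following minimal sketch (mine, not from the paper; the 2×2 payoff matrix is a hypothetical example) integrates the Taylor-Jonker replicator equation $\dot p_i = p_i\left((Ap)_i - p^{\top}Ap\right)$:

```python
# Illustrative sketch (not from the paper): the Taylor-Jonker replicator
# equation for finitely many strategies, dp_i/dt = p_i ((A p)_i - p.(A p)).
# The 2x2 payoff matrix A below is a hypothetical example.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 2.0],
              [1.0, 1.0]])

def replicator(t, p):
    f = A @ p                  # expected payoff of each pure strategy
    return p * (f - p @ f)     # replicator dynamics

sol = solve_ivp(replicator, (0.0, 30.0), [0.9, 0.1])
print(sol.y[:, -1])            # converges to the interior equilibrium (0.5, 0.5)
```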
Here, I extend the approach of studying games with strategies parameterized by a continuous set of values in order to study the evolution of strategy (trait) distributions over time. Specifically, I develop a method that allows computing the current distribution for games with quadratic, as well as some more general, payoff functions at any time and for any initial distribution. The approach is close to the HKV (hidden keystone variables) method developed in References [14,15,16] for modeling the evolution of heterogeneous populations and communities. It allows for more general results than were previously possible.

2. Results

2.1. Master Model

Consider a closed inhomogeneous population, where every individual is characterized by a quantitative trait (or strategy) $x \in X$, where $X \subseteq \mathbb{R}$ is a subset of real numbers. $X$ can be a closed and bounded interval $[a,b]$, the set of positive real numbers $\mathbb{R}_+$, or the whole real line $\mathbb{R}$. The parameter $x$ describes an individual's inherited invariant properties; it remains unchanged for any given individual but varies from one individual to another. The fitness (per capita reproduction rate) $F(t,x)$ of an individual depends on the strategy $x$ and on interactions with other individuals in the population.
Let $l(t,x)$ be the population density at time $t$ with respect to strategy $x$; informally, $l(t,x)$ is the number of individuals that use strategy $x$.
Assuming overlapping generations and smoothness of $l(t,x)$ in $t$ for each $x \in X$, the population dynamics can be described by the following general model:

$$\frac{dl(t,x)}{dt} = l(t,x)F(t,x), \qquad N(t) = \int_X l(t,x)\,dx, \qquad P(t,x) = \frac{l(t,x)}{N(t)},$$

where $N(t)$ is the total population size and $P(t,x)$ is the pdf of the strategy distribution at time $t$. The initial pdf $P(0,x)$ and the initial population size $N(0)$ are assumed to be given.
Let $\pi(x,y)$ be the payoff of an $x$-individual when it plays against a $y$-individual. Following standard assumptions of evolutionary game theory, assume that individual fitness $F(t,x)$ is equal to the expected payoff that the individual receives as a result of a random pairwise interaction with another individual in the population, that is,

$$F(t,x) = \int_X \pi(x,y)P(t,y)\,dy.$$
Equations (1) and (2) make up the master model.
Here our main goal is to study the evolution of the pdf $P(t,x)$ over time. To this end, it is necessary to compute the population density $l(t,x)$ and the total population size $N(t)$, which will be done in the following section.
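Before turning to the analytical method, note that the master model (1)-(2) can also be integrated directly on a trait grid; by (1), the pdf itself obeys $\partial P/\partial t = P(F - \bar F)$, where $\bar F$ is the mean fitness. The following sketch (my own illustration; grid bounds, step sizes, and the example payoff are assumptions) does this numerically:

```python
# A minimal numerical sketch (mine, not the paper's method): direct integration
# of the master model (1)-(2) on a trait grid for an arbitrary payoff pi(x, y).
import numpy as np

def evolve(pi, p0, x, t_max, dt=1e-3):
    """Evolve the pdf P(t,x); by (1)-(2), dP/dt = P(t,x) (F(t,x) - mean(F))."""
    dx = x[1] - x[0]
    payoff = pi(x[:, None], x[None, :])      # matrix of pi(x_i, y_j)
    P = p0.copy()
    for _ in range(int(t_max / dt)):
        F = payoff @ P * dx                  # F(t,x) = integral of pi(x,y) P(t,y) dy
        P += dt * P * (F - np.sum(F * P) * dx)
        P = np.maximum(P, 0.0)
        P /= np.sum(P) * dx                  # renormalize against integration drift
    return P

# Quadratic payoff -x^2 + x*y + x (a = b = c = 1 in the notation of Section 2.2),
# standard normal initial pdf:
x = np.linspace(-5.0, 5.0, 401)
p0 = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
P = evolve(lambda x, y: -x**2 + x * y + x, p0, x, t_max=2.0)
print("mean at t=2:", np.sum(x * P) * (x[1] - x[0]))   # compare with Eq. (23)
```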

2.2. Evolution of Strategy Distribution in Games with Quadratic Payoff Function

Assume that the payoff $\pi(x,y)$ has the form

$$\pi(x,y) = -ax^2 + bxy + cx + dy^2 + ey + f,$$

where $f = f(N)$ is the "background" fitness term that depends on the total population size $N$ but does not depend on individuals' traits and interactions; $a, b, c, d, e$ are constant coefficients. (The minus sign in front of $a$ is the sign convention used throughout, so that a positive parameter $a$ corresponds to a negative quadratic term of the payoff.)
Then

$$F(t,x) = \int_X \pi(x,y)P(t,y)\,dy = -ax^2 + bx\,E_t[x] + cx + d\,E_t[x^2] + e\,E_t[x] + f(N),$$

where the expected value is denoted by $E_t[g(x)] = \int_X g(x)P(t,x)\,dx$.
The population dynamics are now defined by the equation

$$\frac{dl(t,x)}{dt} = l(t,x)\left(-ax^2 + bx\,E_t[x] + cx + d\,E_t[x^2] + e\,E_t[x] + f(N)\right).$$
In order to solve this equation, apply a version of the HKV method [14,15,16]. Introduce auxiliary variables $s(t)$, $h(t)$ such that

$$\frac{ds}{dt} = E_t[x], \qquad \frac{dh}{dt} = d\,E_t[x^2] + f(N(t)), \qquad s(0) = h(0) = 0.$$
Then

$$l(t,x) = l(0,x)\,e^{e s(t) + h(t)}\,e^{-atx^2 + x(ct + b s(t))},$$

$$N(t) = \int_X l(t,x)\,dx = N(0)\,e^{e s(t) + h(t)}\int_X e^{-atx^2 + x(b s(t) + ct)}P(0,x)\,dx,$$

$$P(t,x) = \frac{l(t,x)}{N(t)} = \frac{P(0,x)\,e^{-atx^2 + x(b s(t) + ct)}}{\int_X e^{-atx^2 + x(b s(t) + ct)}P(0,x)\,dx}.$$
Notice that $P(t,x)$ depends neither on $h(t)$ nor on the coefficients $d$, $e$, $f$. Therefore, if one is interested in the distribution of strategies and how it changes over time, rather than in the density of $x$-individuals, then one can replace the reproduction rate given by Equation (4) by the reproduction rate
$$F(t,x) = -ax^2 + bx\,E_t[x] + cx.$$
Equivalently, one can use the payoff function (3) in the simplified form

$$\pi(x,y) = -ax^2 + bxy + cx.$$
The model (1) with payoff function (10) and reproduction rate (9) has the same distribution of strategies as model (1) with payoff (3) and reproduction rate (4).
Next, using Equation (8), one can write $E_t[x]$ in the form

$$E_t[x] = \int_X x\,P(t,x)\,dx = \int_X x\,e^{-atx^2 + x(ct + bs(t))}P(0,x)\,dx \Big/ \int_X e^{-atx^2 + x(ct + bs(t))}P(0,x)\,dx.$$
Now define the function $\Phi(t,\lambda)$ by

$$\Phi(t,\lambda) = \int_X e^{-atx^2 + x\lambda}\,P(0,x)\,dx.$$
$E_t[x]$ can now be expressed as

$$E_t[x] = \frac{\partial \Phi(t,\lambda)}{\partial \lambda} \Big/ \Phi(t,\lambda)\Big|_{\lambda = ct + bs(t)}.$$
It is now possible to write an explicit equation for the auxiliary variable:

$$\frac{ds}{dt} = E_t[x] = \frac{\partial \ln \Phi(t,\lambda)}{\partial \lambda}\Big|_{\lambda = bs(t) + ct}.$$
Next,

$$E_t[x^2] = \frac{\partial^2 \Phi(t,\lambda)}{\partial \lambda^2} \Big/ \Phi(t,\lambda)\Big|_{\lambda = bs(t) + ct}$$

and therefore

$$\mathrm{Var}_t[x] = \frac{\partial^2 \Phi(t,\lambda)}{\partial \lambda^2} \Big/ \Phi(t,\lambda) - \left(\frac{\partial \Phi(t,\lambda)}{\partial \lambda} \Big/ \Phi(t,\lambda)\right)^2\Bigg|_{\lambda = bs(t) + ct}.$$
The moment generating function (mgf) of the current distribution of strategies given by Equation (8) is

$$M_t(\delta) = \frac{\int_X e^{-atx^2 + x(bs(t) + ct + \delta)}P(0,x)\,dx}{\int_X e^{-atx^2 + x(bs(t) + ct)}P(0,x)\,dx} = \frac{\Phi(t, bs(t) + ct + \delta)}{\Phi(t, bs(t) + ct)}.$$
Equations (8)–(16) now provide a tool for studying the evolution of the distribution of strategies of the quadratic payoff model over time, for any initial distribution.
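This recipe is easy to implement numerically even when $\Phi$ has no closed form. A minimal sketch (my own, with assumed parameter values; $\partial_\lambda \ln\Phi$ is approximated by a central finite difference):

```python
# Sketch (assumed parameters) of the recipe (8)-(16): compute Phi(t, lambda)
# by quadrature, solve ds/dt = d ln Phi / d lambda at lambda = b s + c t,
# then recover the current pdf P(t, x) from Equation (8).
import numpy as np
from scipy.integrate import quad, solve_ivp

a, b, c = 1.0, 1.0, 1.0

def P0(x):                                  # initial pdf; standard normal here
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

def Phi(t, lam):
    return quad(lambda x: np.exp(-a * t * x**2 + lam * x) * P0(x),
                -np.inf, np.inf)[0]

def ds_dt(t, s, eps=1e-6):                  # central difference for dlnPhi/dlam
    lam = b * s[0] + c * t
    return [(np.log(Phi(t, lam + eps)) - np.log(Phi(t, lam - eps))) / (2 * eps)]

sol = solve_ivp(ds_dt, (0.0, 5.0), [0.0], dense_output=True)

def P(t, x):                                # current pdf, Equation (8)
    lam = b * sol.sol(t)[0] + c * t
    return np.exp(-a * t * x**2 + lam * x) * P0(x) / Phi(t, lam)

print(P(2.0, np.linspace(-1.0, 2.0, 4)))
```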

2.3. Initial Normal Distribution

The evolution of the normal distribution in games with a quadratic payoff function has already been studied to a large extent; as shown by Oechssler and Riedel [5,12] and Cressman and Hofbauer [11], the class of normal distributions is invariant with respect to replicator dynamics in games with quadratic payoff functions (3) with positive parameter $a$.
This statement immediately follows from Equation (8) for the current distribution of traits. Additionally, the class of normal distributions truncated on a (finite or infinite) interval $[a,b]$ is also invariant; see Section 2.6 for details and examples.
Now consider the dynamics of initial normal distributions in detail.
Let the initial distribution be normal with mean $m$ and variance $\sigma^2$,

$$P(0,x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-m)^2}{2\sigma^2}\right), \qquad -\infty < x < \infty.$$
Its mgf is given by

$$M[\delta] = \exp\left(\delta m + \frac{\delta^2\sigma^2}{2}\right).$$
Denoting for brevity $\gamma = \frac{1}{2\sigma^2}$, one can compute the function $\Phi(t,\lambda)$:

$$\Phi(t,\lambda) = \int_{-\infty}^{\infty} e^{-ax^2t + x\lambda}P(0,x)\,dx = \sqrt{\gamma/\pi}\int_{-\infty}^{\infty} e^{-x^2at + x\lambda - \gamma(x-m)^2}\,dx = \sqrt{\frac{\gamma}{\gamma + at}}\,\exp\left(\frac{\lambda^2 + 4\gamma\lambda m - 4a\gamma m^2 t}{4(\gamma + at)}\right).$$
Next,

$$\frac{\partial \Phi(t,\lambda)}{\partial \lambda} = \sqrt{\frac{\gamma}{\gamma + at}}\;\frac{2\gamma m + \lambda}{2(\gamma + at)}\,\exp\left(\frac{\lambda^2 + 4\gamma\lambda m - 4a\gamma m^2 t}{4(\gamma + at)}\right),$$
so

$$\frac{\partial \Phi(t,\lambda)}{\partial \lambda} \Big/ \Phi(t,\lambda) = \frac{\lambda + 2\gamma m}{2(\gamma + at)}.$$
Then, according to Equation (5), the following explicit equation for the auxiliary keystone variable emerges:

$$\frac{ds}{dt} = \frac{bs + ct + 2\gamma m}{2(\gamma + at)}, \qquad s(0) = 0.$$
This equation can be solved analytically:

$$s(t) = \frac{c}{b(2a-b)}\left(bt + 2\gamma\left(1 - \left(1 + \frac{at}{\gamma}\right)^{\frac{b}{2a}}\right)\right) - \frac{2m\gamma}{b}\left(1 - \left(1 + \frac{at}{\gamma}\right)^{\frac{b}{2a}}\right).$$
Now it is possible to compute the mean, the variance, and the current distribution of strategies using Equations (12)–(15). In the case of a normal initial distribution, the simplest way to do so is to use Equation (16) for the current mgf.
Indeed, using formula (16), after simple algebra one can write the current mgf as

$$M_t(\delta) = \frac{\Phi(t, ct + bs(t) + \delta)}{\Phi(t, ct + bs(t))} = \exp\left(\frac{\delta(\lambda + 2m\gamma)}{2(\gamma + at)} + \frac{\delta^2}{4(\gamma + at)}\right), \qquad \lambda = bs(t) + ct.$$

This is exactly the mgf of the form (18) of a normal distribution with mean $\frac{\lambda + 2m\gamma}{2(\gamma + at)}$ and variance $\frac{1}{2(\gamma + at)}$.
Recalling that $\lambda = bs(t) + ct$ and using Equation (22), after some algebra the mean of the current strategy distribution takes the form

$$E_t[x] = \left(1 + \frac{at}{\gamma}\right)^{\frac{b}{2a}-1}\left(m - \frac{c}{2a - b}\right) + \frac{c}{2a - b}.$$
Proposition 1.
Let the initial distribution of strategies in model (1), (9) be normal $N(m, \sigma^2)$. Then the distribution of strategies at any time $t$ is normal with mean $E_t[x]$ given by Equation (23) and variance $\mathrm{Var}_t[x] = \frac{1}{2(\gamma + at)} = \frac{\sigma^2}{1 + 2at\sigma^2}$.
It is easy to see that if $2a - b > 0$, then $\left(1 + \frac{at}{\gamma}\right)^{\frac{b}{2a}-1} \to 0$ and $E_t[x] \to \frac{c}{2a-b}$ as $t \to \infty$; if $2a - b < 0$, then $\left(1 + \frac{at}{\gamma}\right)^{\frac{b}{2a}-1} \to \infty$, and therefore $E_t[x] \to \infty$ if $m > \frac{c}{2a-b}$ and $E_t[x] \to -\infty$ if $m < \frac{c}{2a-b}$ as $t \to \infty$.
Notice that $E_t[x] \to m + \frac{c}{2a}\ln\left(1 + \frac{at}{\gamma}\right)$ as $b \to 2a$, so $E_t[x] \to \infty$ as $t \to \infty$ also in the case $2a - b = 0$.
Figure 1 shows the dynamics of the mean of the current distribution of traits.
Figure 2 shows the evolution of the distribution of traits over time. The variance of the current distribution, $\mathrm{Var}_t[x] = \frac{\sigma^2}{1 + 2at\sigma^2}$, tends to 0; therefore, the distribution of traits tends over time to a distribution concentrated at the point $x = \frac{c}{2a-b}$ for $2a - b > 0$.
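As a quick numerical check of Proposition 1 (an illustration with assumed parameter values), the closed-form mean (23) and variance indeed approach $\frac{c}{2a-b}$ and 0, respectively:

```python
# Sanity check of Proposition 1 (assumed parameters): the mean (23) tends to
# c/(2a - b) and the variance 1/(2(gamma + a t)) tends to 0 when 2a - b > 0.
import numpy as np

a, b, c, m, gamma = 1.0, 1.0, 1.0, 0.0, 0.5   # gamma = 1/(2 sigma^2), sigma^2 = 1

def mean_t(t):
    return (1 + a * t / gamma)**(b / (2 * a) - 1) * (m - c / (2 * a - b)) \
           + c / (2 * a - b)

def var_t(t):
    return 1.0 / (2 * (gamma + a * t))

for t in (0.0, 1.0, 10.0, 100.0):
    print(f"t = {t:6.1f}   mean = {mean_t(t):.4f}   var = {var_t(t):.5f}")
```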

2.4. Exponential Initial Distributions

Let the initial distribution be exponential on $[0,\infty)$, $P(0,x) = v e^{-vx}$. Then

$$P(t,x) = \frac{e^{-ax^2t + x(\lambda - v)}}{\int_0^{\infty} e^{-ax^2t + x(\lambda - v)}\,dx} = \frac{2\sqrt{at/\pi}\;e^{-at\left(x - \frac{\lambda - v}{2at}\right)^2}}{1 + \mathrm{Erf}\left[\frac{\lambda - v}{2\sqrt{at}}\right]},$$

where $\lambda = bs(t) + ct$.
For any $t > 0$, Equation (24) describes the density of the normal distribution with mean $m(t) = \frac{\lambda - v}{2at} = \frac{bs(t) + ct - v}{2at}$ and variance $\sigma^2(t) = \frac{1}{2at}$, truncated on $[0,\infty)$. Notably, the mean of the truncated normal distribution (24) is not equal to $m(t)$, and its variance is not equal to $\sigma^2(t)$. Instead, the mean of distribution (24) is

$$E_t[x] = m(t) + \frac{e^{-\frac{(ct + bs(t) - v)^2}{4at}}}{\sqrt{\pi at}\left(1 + \mathrm{Erf}\left(\frac{ct + bs(t) - v}{2\sqrt{at}}\right)\right)}.$$
In order to compute the mean given by Equation (25) and the current distribution (24) as functions of time, one needs to solve for the auxiliary variable $s(t)$, which can be done using the function $\Phi(t,\lambda)$:

$$\Phi(t,\lambda) = v\int_0^{\infty} e^{-ax^2t + x\lambda - vx}\,dx = \frac{\sqrt{\pi}\,v}{2\sqrt{at}}\left(1 + \mathrm{Erf}\left(\frac{\lambda - v}{2\sqrt{at}}\right)\right)e^{\frac{(\lambda - v)^2}{4at}}.$$

Then, according to Equation (14),

$$\frac{ds}{dt} = \frac{\partial \ln \Phi(t,\lambda)}{\partial \lambda}\Big|_{\lambda = bs(t) + ct} = \frac{ct + bs(t) - v}{2at} + \frac{e^{-\frac{(ct + bs(t) - v)^2}{4at}}}{\sqrt{\pi at}\left(1 + \mathrm{Erf}\left(\frac{ct + bs(t) - v}{2\sqrt{at}}\right)\right)}, \qquad s(0) = 0.$$
This equation can be solved numerically. Using the solution $s(t)$, we can compute the distribution (24) and all its moments.
It follows from Equation (25) that $\lim E_t[x] = \lim m(t) = \frac{c}{2a} + \frac{b}{2a}\lim \frac{s(t)}{t}$ as $t \to \infty$. One can show that $\lim \frac{s(t)}{t} = \frac{c}{2a-b}$; indeed, if $\frac{s(t)}{t} \to \sigma^*$, then $\frac{ds}{dt} = E_t[x] \to \frac{b\sigma^* + c}{2a}$, and self-consistency requires $\sigma^* = \frac{b\sigma^* + c}{2a}$, that is, $\sigma^* = \frac{c}{2a-b}$. Therefore $\lim E_t[x] = \frac{c}{2a}\left(\frac{b}{2a-b} + 1\right) = \frac{c}{2a-b}$. The variance of the current distribution tends to 0, so the limit distribution is concentrated at the point $x = \frac{c}{2a-b}$. This proves the following proposition.
Proposition 2.
Let the initial distribution of strategies be exponential. Then the current distribution at any time $t > 0$ is a truncated normal distribution that tends, as $t \to \infty$, to a distribution concentrated at the point $x = \frac{c}{2a-b}$.
An example of the dynamics of the current mean and variance is given in Figure 3. Figure 4 shows the dynamics of the initial exponential distribution, which turns into a truncated normal distribution whose variance tends to 0. Accordingly, the current distribution tends to a distribution concentrated at the point $\lim E_t[x] = 1$ as $t \to \infty$ (for $a = b = c = v = 1$, $\frac{c}{2a-b} = 1$).
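Equation (27) is straightforward to solve numerically. The sketch below (my own implementation choices, not the paper's: the integration starts at a small $t_0 > 0$ because the two terms of (27) are separately singular at $t = 0$, although their sum tends to the initial mean $1/v$; and the scaled function `erfcx` is used to evaluate $e^{-z^2}/(1 + \mathrm{Erf}(z)) = 1/\mathrm{erfcx}(-z)$ stably):

```python
# Numerical solution of Equation (27) for the exponential initial distribution,
# with the parameters a = b = c = v = 1 of Figures 3 and 4.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erfcx

a, b, c, v = 1.0, 1.0, 1.0, 1.0

def ds_dt(t, s):
    z = (c * t + b * s[0] - v) / (2 * np.sqrt(a * t))
    # E_t[x] from Eq. (25): exp(-z^2) / (1 + erf(z)) == 1 / erfcx(-z)
    return [(c * t + b * s[0] - v) / (2 * a * t)
            + 1.0 / (np.sqrt(np.pi * a * t) * erfcx(-z))]

t0 = 1e-6
sol = solve_ivp(ds_dt, (t0, 50.0), [t0 / v], rtol=1e-8, dense_output=True)
print("s(t)/t at t = 50:", sol.sol(50.0)[0] / 50.0)   # approaches c/(2a-b) = 1
```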

2.5. Uniform Initial Distribution

Now assume that the initial distribution is uniform on the interval $[-1, 1]$. Then

$$\Phi(t,\lambda) = \int_{-1}^{1} e^{-ax^2t + x\lambda}\,dx = \frac{1}{2}\sqrt{\frac{\pi}{at}}\,\exp\left(\frac{\lambda^2}{4at}\right)\left(\mathrm{Erf}\left(\frac{2at + \lambda}{2\sqrt{at}}\right) + \mathrm{Erf}\left(\frac{2at - \lambda}{2\sqrt{at}}\right)\right)$$
and the current distribution is

$$P(t,x) = \frac{l(t,x)}{N(t)} = \frac{e^{-ax^2t + x(ct + bs(t))}}{\Phi(t, ct + bs(t))}.$$
The auxiliary variable $s(t)$ can be computed using Equation (14) or, equivalently, directly from the expression (29) for the current pdf:

$$\frac{ds}{dt} = E_t[x] = \int_{-1}^{1} x\,P(t,x)\,dx = \frac{ct + bs(t)}{2at} + \frac{1}{\sqrt{\pi at}}\;\frac{\exp\left(-\frac{(bs(t) + ct + 2at)^2}{4at}\right) - \exp\left(-\frac{(bs(t) + ct - 2at)^2}{4at}\right)}{\mathrm{Erf}\left(\frac{bs(t) + ct + 2at}{2\sqrt{at}}\right) + \mathrm{Erf}\left(\frac{2at - bs(t) - ct}{2\sqrt{at}}\right)}.$$
For a positive parameter $a$, the distribution $P(t,x)$ is the normal distribution with mean $E(t) = \frac{ct + bs(t)}{2at}$ and variance $\sigma^2(t) = \frac{1}{2at}$, truncated on the interval $[-1, 1]$. However, for negative values of the parameter $a$, the distribution (29) is not normal; more specifically, if the parameter $b$ is also negative, then the initial distribution evolves towards a U-shaped distribution, as can be seen in Figure 5 (right).
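The U-shaped regime is easy to reproduce numerically. The following sketch (mine, with assumed values $a = -1$, $b = -10$, $c = 1$ matching the regime of Figure 5, right) evaluates the pdf (29) at $t = 3$:

```python
# Illustrative computation of the pdf (29) in the U-shaped regime a < 0, b < 0
# (parameter values assumed). Phi is computed by quadrature, and Equation (14)
# is solved with a finite-difference derivative of ln Phi.
import numpy as np
from scipy.integrate import quad, solve_ivp

a, b, c = -1.0, -10.0, 1.0

def Phi(t, lam):
    return quad(lambda x: np.exp(-a * t * x**2 + lam * x), -1.0, 1.0)[0]

def ds_dt(t, s, eps=1e-6):
    lam = b * s[0] + c * t
    return [(np.log(Phi(t, lam + eps)) - np.log(Phi(t, lam - eps))) / (2 * eps)]

sol = solve_ivp(ds_dt, (0.0, 3.0), [0.0], dense_output=True)
lam = b * sol.sol(3.0)[0] + c * 3.0
for xv in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(f"P(3, {xv:+.1f}) = {np.exp(-a*3.0*xv**2 + lam*xv) / Phi(3.0, lam):.3f}")
# The density is largest near the endpoints x = -1 and x = 1 (a U-shape).
```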

2.6. Normal Initial Distribution Truncated in the Interval [−1, 1]

Now assume the initial distribution is normal with zero mean, truncated on the interval $[-1, 1]$:

$$p(x) = C e^{-(x/\sigma)^2}, \qquad -1 \le x \le 1,$$

with normalization constant $C = 1/\left[\sigma\sqrt{\pi}\,\mathrm{Erf}\left(\frac{1}{\sigma}\right)\right]$.
Using the general formula (8) for the current distribution, one can show that the current distribution of strategies is given by

$$P(t,x) = \frac{2\sqrt{\gamma + at}\;e^{-\frac{(bs(t) + ct - 2(\gamma + at)x)^2}{4(\gamma + at)}}}{\sqrt{\pi}\left(\mathrm{Erf}\left[\frac{2(\gamma + at) - bs(t) - ct}{2\sqrt{\gamma + at}}\right] + \mathrm{Erf}\left[\frac{2(\gamma + at) + bs(t) + ct}{2\sqrt{\gamma + at}}\right]\right)}, \qquad \text{where } \gamma = 1/\sigma^2.$$
The distribution (32) is again a normal distribution truncated on the interval $[-1, 1]$. The current mean value, which defines Equation (14) for the auxiliary variable $s(t)$, can be computed using Equation (13) or directly from the expression (32) for the current pdf. In this way one obtains a (rather bulky) equation for $s(t)$ that can be solved numerically. With this solution, one can trace the evolution of the initial truncated normal distribution. It can be shown that for $a > 0$ the variance of the current distribution tends to 0; therefore, the current distribution tends to a distribution concentrated at the point $\lim E_t[x]$ as $t \to \infty$. The value of $\lim E_t[x]$ depends on the model parameters. Three examples of the evolution of the strategy distribution are given in Figure 6.
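For this case the "bulky" equation for $s(t)$ can in fact be written in closed form via Erf; a sketch (assumed parameters; the algebra follows the same pattern as Equation (30), with $g = \gamma + at$ in place of $at$):

```python
# Sketch (assumed parameters) for the truncated-normal case: closed-form
# d ln Phi / d lambda with g = gamma + a t, then the pdf (32).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import erf

a, b, c, gamma = 5.0, 2.0, 1.0, 10.0           # gamma = 1/sigma^2 (assumed)

def dlnPhi_dlam(t, lam):
    g = gamma + a * t
    den = erf((2 * g - lam) / (2 * np.sqrt(g))) + erf((2 * g + lam) / (2 * np.sqrt(g)))
    num = np.exp(-(2 * g + lam)**2 / (4 * g)) - np.exp(-(2 * g - lam)**2 / (4 * g))
    return lam / (2 * g) + num / (np.sqrt(np.pi * g) * den)

sol = solve_ivp(lambda t, s: [dlnPhi_dlam(t, b * s[0] + c * t)],
                (0.0, 10.0), [0.0], dense_output=True)

def P(t, x):                                   # current pdf, Equation (32)
    g = gamma + a * t
    lam = b * sol.sol(t)[0] + c * t
    den = erf((2 * g - lam) / (2 * np.sqrt(g))) + erf((2 * g + lam) / (2 * np.sqrt(g)))
    return 2 * np.sqrt(g / np.pi) * np.exp(-(lam - 2 * g * x)**2 / (4 * g)) / den

print(P(5.0, np.linspace(-1.0, 1.0, 5)))
```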
More generally, one can consider a normal distribution truncated on a finite interval $[a, b]$ or on a half-line $[a, \infty)$. It then follows from Equation (8) that the current distribution is also a normal distribution truncated on the same interval.
Proposition 3.
The class of truncated normal distributions is invariant with respect to replicator dynamics in games with quadratic payoff functions (3) with positive parameter $a$.
In contrast, one can observe a different kind of evolution of the initial truncated normal distribution for $a < 0$. Specifically, the current distribution has a U-shape and tends to a distribution concentrated at the two extreme points of the interval on which the initial distribution is defined, as can be seen in Figure 7.

2.7. Generalization

The developed approach can be applied to a more general version of the payoff function:

$$\pi(x,y) = f_1(x) + f_2(x)f_3(y) + f_4(y).$$
In this case,

$$\frac{dl(t,x)}{dt} = l(t,x)\int_X \left(f_1(x) + f_2(x)f_3(y) + f_4(y)\right)P(t,y)\,dy = l(t,x)F(t,x),$$

where $F(t,x) = f_1(x) + f_2(x)E_t[f_3] + E_t[f_4]$.
Let us introduce auxiliary variables

$$\frac{ds}{dt} = E_t[f_3], \qquad \frac{dh}{dt} = E_t[f_4], \qquad s(0) = h(0) = 0.$$
Then

$$l(t,x) = l(0,x)\exp\left[f_1(x)t + f_2(x)s(t) + h(t)\right],$$

$$N(t) = \int_X l(t,x)\,dx = N(0)\,e^{h(t)}\int_X e^{f_1(x)t + f_2(x)s(t)}P(0,x)\,dx,$$

$$P(t,x) = \frac{l(t,x)}{N(t)} = \frac{P(0,x)\,e^{f_1(x)t + f_2(x)s(t)}}{\int_X e^{f_1(x)t + f_2(x)s(t)}P(0,x)\,dx}.$$
One can see that the pdf $P(t,x)$ depends neither on the variable $h(t)$ nor, hence, on the function $f_4(y)$.
It follows from (35) that

$$E_t[f_3] = \int_X f_3(x)P(t,x)\,dx = \int_X f_3(x)\,e^{f_1(x)t + f_2(x)s(t)}P(0,x)\,dx \Big/ \int_X e^{f_1(x)t + f_2(x)s(t)}P(0,x)\,dx.$$
Then the equation

$$\frac{ds}{dt} = E_t[f_3], \qquad s(0) = 0$$
can be solved, at least numerically.
Another, equivalent approach may also be useful. Define the function

$$\Phi(t,\lambda,\delta) = \int_X e^{f_1(x)t + \delta f_2(x) + \lambda f_3(x)}P(0,x)\,dx.$$
Then

$$E_t[f_3] = \frac{\partial \ln \Phi(t,\lambda,\delta)}{\partial \lambda}\Big|_{\lambda = 0,\; \delta = s(t)}.$$
This results in a closed equation for the auxiliary variable $s(t)$:

$$\frac{ds}{dt} = E_t[f_3] = \frac{\partial \ln \Phi(t,\lambda,\delta)}{\partial \lambda}\Big|_{\lambda = 0,\; \delta = s(t)}, \qquad s(0) = 0.$$
Having the solution of Equation (36) or (38), one can compute the current pdf (35) and all statistical characteristics of interest, such as the current mean and variance of the strategies, for any initial distribution.
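A sketch of this generalized recipe (my own function and variable names; the quadrature routine and the time horizon are implementation assumptions):

```python
# Generalized HKV sketch for payoff pi(x,y) = f1(x) + f2(x) f3(y) + f4(y):
# solve ds/dt = E_t[f3] (Equation (36)) and recover the pdf (35). The function
# f4 drops out of the distribution, so it is not needed here.
import numpy as np
from scipy.integrate import quad, solve_ivp

def make_pdf(f1, f2, f3, P0, support=(-np.inf, np.inf), t_max=5.0):
    def weight(x, t, s):
        return np.exp(f1(x) * t + f2(x) * s) * P0(x)
    def E_f3(t, s):
        Z = quad(lambda x: weight(x, t, s), *support)[0]
        return quad(lambda x: f3(x) * weight(x, t, s), *support)[0] / Z
    sol = solve_ivp(lambda t, s: [E_f3(t, s[0])], (0.0, t_max), [0.0],
                    dense_output=True)
    def P(t, x):                              # current pdf, Equation (35)
        s = sol.sol(t)[0]
        return weight(x, t, s) / quad(lambda y: weight(y, t, s), *support)[0]
    return P
```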
Example 1 (see [12], Example 1).
Let $\pi(x,y) = -ax^4 + 4xy$. Then $F(t,x) = -ax^4 + 4x\,E_t[x]$.
Introduce the auxiliary variable via $\frac{ds}{dt} = E_t[x]$. Then

$$l(t,x) = l(0,x)\exp\left(-ax^4t + 4xs(t)\right),$$

$$N(t) = \int_X l(t,x)\,dx = N(0)\int_X \exp\left(-ax^4t + 4xs(t)\right)P(0,x)\,dx,$$

$$P(t,x) = \frac{l(t,x)}{N(t)} = \frac{P(0,x)\exp\left(-ax^4t + 4xs(t)\right)}{\int_X \exp\left(-ax^4t + 4xs(t)\right)P(0,x)\,dx}.$$
Let

$$\Phi(t,\lambda) = \int_X e^{-ax^4t + \lambda x}P(0,x)\,dx.$$
Then

$$\frac{ds}{dt} = E_t[x] = \frac{\partial \ln \Phi(t,\lambda)}{\partial \lambda}\Big|_{\lambda = 4s(t)}.$$
This equation can be solved numerically, allowing one to then compute the pdf according to Equation (39).
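Applying the `make_pdf` sketch from the previous subsection to Example 1, with an assumed $a = 1$ and a standard normal initial distribution:

```python
# Example 1 via the make_pdf sketch above (a = 1 assumed; here f1(x) = -a x^4,
# f2(x) = 4x, f3(y) = y, so lambda = 4 s(t) as in the equation above).
import numpy as np

a = 1.0
P = make_pdf(f1=lambda x: -a * x**4,
             f2=lambda x: 4.0 * x,
             f3=lambda x: x,
             P0=lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
print(P(2.0, np.linspace(-1.5, 1.5, 7)))
```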

3. Discussion

Classical problems of evolutionary game theory concentrate on studying equilibrium states (such as evolutionarily stable states and Nash equilibria). Notably, in continuous-time models an equilibrium is reached only asymptotically when the process starts from a non-trivial initial distribution of strategies. Therefore, the evolution of a given initial distribution over time may be of great interest and potentially critical importance for studying real population dynamics.
Here I developed a method that extends the analysis of the evolution of continuous strategy distributions in games with a quadratic payoff function. Specifically, the method described here allows us to answer the question: given an initial distribution of strategies in a game, how will it evolve over time? Typically, the dynamics of population distributions are governed by replicator equations, which appear both in evolutionary game theory and in the analysis of the dynamics of non-homogeneous populations and communities. The approach suggested here is based on the HKV (hidden keystone variables) method developed in References [14,15,16] for analyzing the dynamics of inhomogeneous populations and finding solutions of the corresponding replicator equations. The method allows one to compute the current strategy distribution, together with all statistical characteristics of interest such as its mean and variance, for any initial distribution and at any time.
I looked at several specific examples of initial distributions:
Normal
Exponential
Uniform on [−1, 1]
Truncated normal on [−1, 1]
Through the application of the proposed method, I confirm the existing results of References [5,12] that the family of normal distributions is invariant in a game with a quadratic payoff function with a negative quadratic term. Additionally, I derive explicit formulas for the current distribution, its mean, and its variance. I also show that the class of truncated normal distributions is invariant with respect to replicator dynamics in games with quadratic payoff functions; as an example, I consider in detail the case of an initial normal distribution truncated on [−1, 1].
Notably and unexpectedly, in most cases, regardless of the initial distribution, the current distribution of strategies in games with a negative quadratic term is normal, full or truncated. Over time it evolves towards a distribution concentrated in a single point, equal to the limit value of the mean of the current normal distribution. This can have implications for a broad class of questions pertaining to the evolution of strategies in games.
For instance, the question of whether the limit state of the population is mono- or polymorphic has been discussed in the literature. Here I show that for games with a quadratic payoff function, the population tends to a monomorphic stable state if the quadratic term is negative. In contrast, if the quadratic term of the payoff function is positive and the initial distribution is concentrated on a finite interval, then the current distribution can have a U-shape, and the population then tends to a dimorphic state.
In the last section I extend the developed approach to games with payoff functions of the form $\pi(x,y) = f_1(x) + f_2(x)f_3(y) + f_4(y)$. Formally, this framework covers a very broad class of payoff functions, including exponential and polynomial payoff functions; however, in many cases finding a solution to the equation for the auxiliary variable can be a difficult computational problem.
To summarize, the proposed method is validated against previously published results and is then applied to a class of problems that could not be solved before. Application of this method could help expand the set of questions and answers that can be obtained for a large class of problems in evolutionary game theory.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Maynard Smith, J. The theory of games and the evolution of animal conflicts. J. Theor. Biol. 1974, 47, 209–221. [Google Scholar] [CrossRef]
  2. Maynard Smith, J. Evolution and the Theory of Games; Cambridge University Press: Cambridge, UK, 1982. [Google Scholar]
  3. Hofbauer, J.; Sigmund, K. Evolutionary Games and Population Dynamics; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar]
  4. Cressman, R.; Hofbauer, J.; Riedel, F. Stability of the Replicator Equation for a Single-Species with a Multi-Dimensional Continuous Trait Space. J. Theor. Biol. 2006, 239, 273–288. [Google Scholar] [CrossRef] [PubMed]
  5. Oechssler, J.; Riedel, F. On the dynamic foundation of evolutionary stability in continuous models. J. Econ. Theory 2002, 107, 223–252. [Google Scholar] [CrossRef] [Green Version]
  6. Hingu, D.; Rao, K.S.M.; Shaiju, A.J. Evolutionary stability of polymorphic population states in continuous games. Dyn. Games Appl. 2018, 8, 141–156. [Google Scholar] [CrossRef] [Green Version]
  7. Cheung, M.W. Imitative dynamics for games with continuous strategy space. Games Econ. Behav. 2016, 99, 206–223. [Google Scholar] [CrossRef]
  8. Cressman, R.; Tao, Y. The replicator equation and other game dynamics. Proc. Natl. Acad. Sci. USA 2014, 111, 10810–10817. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Zhong, W.; Liu, J.; Zhang, L. Evolutionary dynamics of continuous strategy games on graphs and social networks under weak selection. Biosystems 2013, 111, 102–110. [Google Scholar] [CrossRef] [PubMed]
  10. Cressman, R. Stability of the replicator equation with continuous strategy space. Math. Soc. Sci. 2005, 50, 127–147. [Google Scholar] [CrossRef]
  11. Cressman, R.; Hofbauer, J. Measure dynamics on a one-dimensional continuous trait space: Theoretical foundations for adaptive dynamics. Theor. Popul. Biol. 2005, 67, 47–59. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  12. Oechssler, J.; Riedel, F. Evolutionary dynamics on infinite strategy space. Econ. Theory 2001, 17, 141–162. [Google Scholar] [CrossRef] [Green Version]
  13. Taylor, P.D.; Jonker, L. Evolutionarily stable strategies and game dynamics. Math. Biosci. 1978, 40, 145–156. [Google Scholar] [CrossRef]
  14. Karev, G.P. On mathematical theory of selection: Continuous time population dynamics. J. Math. Biol. 2010, 60, 107–129. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  15. Karev, G.; Kareva, I. Replicator equations and models of biological populations and communities. Math. Model. Nat. Phenom. 2014, 9, 68–95. [Google Scholar] [CrossRef] [Green Version]
  16. Kareva, I.; Karev, G. Modeling Evolution of Heterogeneous Populations. Theory and Applications; Academic Press, Elsevier: London, UK, 2020. [Google Scholar]
Figure 1. Dynamics of the mean value of the current strategy distribution given by Equation (23) for $m = 0$; $b = 3$ (red), $b = 1$ (blue); other parameters: $a = c = \gamma = 1$. $E_t[x] \to \infty$ when $2a - b \le 0$; $E_t[x] \to \frac{c}{2a-b}$ when $2a - b > 0$.
Figure 2. Evolution of the pdf $P(t,x)$ as given by Equation (22). The initial distribution is normal with $m = 0$, $\sigma^2 = 1$; the parameters of the model are $a = 2$, $b = 1$, $c = 1$.
Figure 3. Plots of the mean (left) and variance (right) of the distribution (24) with $a = b = c = v = 1$.
Figure 4. Evolution of the distribution of strategies over time given the initial exponential distribution, Equation (24), with $a = b = c = v = 1$.
Figure 5. Evolution of the distribution of strategies over time given the initial uniform distribution on $[-1, 1]$; left panel: $a = 1$, $b = 10$, $c = 1$; right panel: $a = -1$, $b = -10$, $c = 1$.
Figure 6. Evolution of the distribution of strategies over time given the initial truncated normal distribution. (A) $a = 5$, $b = 2$, $c = 10$, $\sigma^2 = 10$; (B) $a = 5$, $b = 2$, $c = 1$, $\gamma = 10$; (C) $a = 1$, $b = 2$, $c = 1$, $\gamma = 10$.
Figure 7. Evolution of the distribution of strategies over time given the initial normal distribution truncated on $[-1, 1]$; $a = -10$, $b = -6$, $c = 1$, $\gamma = 10$.
