
Quantum Games: Mixed Strategy Nash's Equilibrium Represents Minimum Entropy

by 1,2,3
1 Experimental Economics, Todo1 Services Inc., Miami, FL 33126, USA
2 GATE, UMR 5824 CNRS, France
3 Research and Development Department, Petroecuador, Quito, Ecuador
Entropy 2003, 5(4), 313-347;
Received: 15 November 2002 / Accepted: 5 November 2003 / Published: 15 November 2003


This paper introduces Hermite's polynomials in the description of quantum games. Hermite's polynomials are associated with the gaussian probability density, which represents minimum dispersion. I introduce the concept of minimum entropy as a paradigm of both Nash's equilibrium (maximum utility, MU) and Hayek equilibrium (minimum entropy, ME). The ME concept is related to quantum games. Two questions arise after carrying out this exercise: i) What does Heisenberg's uncertainty principle represent in Game Theory and Time Series? and ii) What do the postulates of Quantum Mechanics indicate in Game Theory and Economics?
Keywords: quantum games; minimum entropy; time series; Nash-Hayek equilibrium

1 Introduction

Quantum games and the quantum computer are closely related. The science of the quantum computer is one of the modern paradigms in Computer Science and Game Theory [5,6,7,8,23,41,42,53]. The quantum computer increases processing speed, permitting more effective database searches. Quantum games incorporate Quantum Theory into Game Theory algorithms. This simple extrapolation allows the Prisoner's Dilemma to be solved and demonstrates that the cooperative equilibrium [44,47,51,57] is viable and stable with a probability different from zero.
In Eisert [7,8] the analogy between Quantum Games and Game Theory is expressed as: "At the most abstract level, game theory is about numbers of entities that are efficiently acting to maximize or minimize. For a quantum physicist, it is then legitimate to ask: what happens if linear superpositions of these actions are allowed for? that is, if games are generalized into the quantum domain. For a particular case of the Prisoner's Dilemma, we show that this game ceases to pose a dilemma if quantum strategies are allowed for." They demonstrate that classical strategies are particular cases of quantum strategies.
Eisert, Wilkens, and Lewenstein [8] not only give a physical model of quantum strategies but also express the idea of identifying moves using quantum operations and quantum properties. This approach appears to be fruitful in at least two ways. On one hand, several recently proposed quantum information application theories can already be conceived as competitive situations, where several parties with opposing motives interact. These parties may apply quantum operations using bipartite quantum systems. On the other hand, generalizing decision theory in the domain of quantum probabilities seems interesting, as the roots of game theory are partly rooted in probability theory [43,44]. In this context it is of interest to investigate what solutions are attainable if superpositions of strategies are allowed [18,41,42,50,51,57]. A game is also related to the transference of information. It is possible to ask: what happens if these carriers of information are taken to be quantum systems, where quantum information is a fundamental notion of information? The concept of Nash's equilibrium as related to quantum games is essentially the same as that of game theory, but the most important difference is that the strategies appear as a function of the quantum properties of the physical system [41,42].
This paper essentially explains the relationship which exists among Quantum Mechanics, Nash's equilibria, Heisenberg's Uncertainty Principle, Minimum Entropy and Time Series. Heisenberg's uncertainty principle is one of the cornerstones of Quantum Mechanics. One application of the uncertainty principle in Time Series is related to Spectral Analysis: "The more precisely the random variable VALUE is determined, the less precisely the frequency VALUE is known at this instant, and conversely." This principle indicates that the product of the standard deviation of a random variable xt and the standard deviation of its frequency w is greater than or equal to 1/2.
This paper is organized as follows:
Section 2: Quantum Games and Hermite's Polynomials; Section 3: Time Series and Heisenberg's Principle; Section 4: Applications of the Models; and Section 5: Conclusion.

2 Quantum Games and Hermite’s Polynomials

Let Γ = (K, S, v) be a game of n players, with K the set of players k = 1, ..., n. The finite set Sk of cardinality lk ∈ N is the set of pure strategies of each player k ∈ K, with skjk ∈ Sk, jk = 1, ..., lk, and S = Πk∈K Sk represents the set of pure-strategy profiles, with s ∈ S an element of that set; l = l1 l2 ··· ln represents the cardinality of S, [12,43,55,56].
The vector function v : S → Rn associates with every profile s ∈ S the vector of utilities v(s) = (v1(s), ..., vn(s))T, where vk(s) designates the utility of player k facing the profile s. To simplify calculation, we write the function vk(s) in an explicit way vk(s) = vk(j1, j2, ..., jn). The matrix vn,l represents all points of the Cartesian product Πk∈K Sk. The vector vk(s) is the k-th column of v.
If the mixed strategies are allowed then we have:
Entropy 05 00313 i001
the unit simplex of the mixed strategies of player k ∈ K, and Entropy 05 00313 i002 the probability vector. The set of profiles in mixed strategies is the polyhedron Δ with Entropy 05 00313 i003, where Entropy 05 00313 i004 and Entropy 05 00313 i005. Using the Kronecker product ⊗ it is possible to write1:
Entropy 05 00313 i006
Entropy 05 00313 i007
The n-dimensional function Entropy 05 00313 i008 associates with every profile in mixed strategies the vector of expected utilities
Entropy 05 00313 i009
where Entropy 05 00313 i010 is the expected utility of player k. Every Entropy 05 00313 i011 represents the expected utility for each player's strategy, and the vector uk is noted Entropy 05 00313 i012.
Entropy 05 00313 i013
The triplet Entropy 05 00313 i014 designates the extension of the game Γ with mixed strategies. We get Nash's equilibrium (the maximization of utility [3,43,55,56,57]) if and only if, ∀k, p, the inequality Entropy 05 00313 i015 Entropy 05 00313 i016 is respected.
Another way to calculate Nash's equilibrium [43,47] is to equalize the values of the expected utilities of each strategy, when possible.
Entropy 05 00313 i017
If the resulting system of equations has no solution (p(−k))∗, then we propose the Minimum Entropy Method. This method is expressed as Minp(Σk Hk(p)), where Entropy 05 00313 i018 is the standard deviation and Hk(p∗) the entropy of each player k.
Entropy 05 00313 i019
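As a numerical sketch of this method (the 2×2 matching-pennies payoffs and the grid search below are illustrative choices, not from the paper): at a mixed-strategy Nash equilibrium the expected utilities of the strategies in a player's support are equal, so the standard deviation of each player's per-strategy expected utilities vanishes, and minimizing Σk σk(p) recovers the equilibrium p = q = (1/2, 1/2).

```python
import numpy as np

# Matching pennies: player 1's payoff matrix A, player 2's payoff matrix B
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A

best = None
for p1 in np.linspace(0.0, 1.0, 101):       # player 1's mixed strategy
    for q1 in np.linspace(0.0, 1.0, 101):   # player 2's mixed strategy
        p = np.array([p1, 1.0 - p1])
        q = np.array([q1, 1.0 - q1])
        # per-strategy expected utilities of each player
        u1 = A @ q
        u2 = B.T @ p
        s = np.std(u1) + np.std(u2)          # objective: sum of sigma_k(p)
        if best is None or s < best[0]:
            best = (s, p1, q1)

# the minimum (zero dispersion) sits at the mixed Nash equilibrium (1/2, 1/2)
assert best[0] < 1e-12
assert best[1] == 0.5 and best[2] == 0.5
```

The grid deliberately contains the equilibrium point; a gradient-based minimizer over the simplex would serve equally well.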

2.1 Minimum Entropy Method 

Theorem 1 
(Minimum Entropy Theorem). The game entropy is minimum only in the mixed-strategy Nash equilibrium. The entropy minimization program Minp(Σk Hk(p)) is equal to the standard-deviation minimization program Minp(Σk σk(p)) when Entropy 05 00313 i020 has a gaussian density function or a multinomial logit.
According to Hayek, equilibrium refers to the order state, or minimum entropy. The order state is the opposite of entropy (a measure of disorder). Several intellectual influences and historical events inspired Hayek to develop the idea of a spontaneous order. Here we present the technical tools needed to study the order state.
Case 1 
If the probability density of a variable X is normal, N(µk, σk), then its entropy is minimum for the minimum standard deviation: (Hk)min ⇔ (σk)min, ∀k = 1, ..., n.
Let the entropy function be Entropy 05 00313 i021, with p(x) the normal density function. Writing this entropy function in terms of the minimum standard deviation we have
Entropy 05 00313 i022
developing the integral we have
Entropy 05 00313 i023
For a game of n players the total entropy can be written as follows:
Entropy 05 00313 i024
after making a few more calculations, it is possible to demonstrate that
Entropy 05 00313 i025
The entropy, or measure of disorder, is directly proportional to the standard deviation, or measure of uncertainty [1,13,27]. Clausius [48,49], who discovered the entropy idea, presented it both as an evolutionary measure and as a characterization of reversible and irreversible processes [45].
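This proportionality can be checked numerically: the differential entropy of N(0, σ) is ln(σ√(2πe)), so it is minimized exactly when σ is. A minimal sketch (the σ values and integration grid are arbitrary choices):

```python
import numpy as np

def integrate(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def gaussian_entropy(sigma):
    # differential entropy H = -integral of p ln p for the N(0, sigma^2) density
    x = np.linspace(-12 * sigma, 12 * sigma, 200001)
    p = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    return -integrate(p * np.log(p), x)

for s in (0.5, 1.0, 2.0):
    closed_form = 0.5 * np.log(2 * np.pi * np.e * s**2)  # ln(sigma * sqrt(2*pi*e))
    assert abs(gaussian_entropy(s) - closed_form) < 1e-8

# (H_k)min <=> (sigma_k)min: entropy is monotonically increasing in sigma
assert gaussian_entropy(0.5) < gaussian_entropy(1.0) < gaussian_entropy(2.0)
```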
Case 2 
If the probability function of Entropy 05 00313 i026 is a multinomial logit with parameter λ (the rationality parameter) [35,36,37,38,39,40], then its entropy is minimum and its standard deviation is minimum for λ → ∞ [21,22].
Let Entropy 05 00313 i027 be the probability for k ∈ K, jk = 1, ..., lk
Entropy 05 00313 i028
where Entropy 05 00313 i029 represents the partition utility function [1,45].
The entropy Hk(pk), expected utility Entropy 05 00313 i030, and variance Entropy 05 00313 i031 will be different for each player k.
Entropy 05 00313 i032
Using the explicit form of Entropy 05 00313 i033, we can get the entropy, the expected utility and the variance:
Entropy 05 00313 i034
The equation Entropy 05 00313 i035 can be obtained from the last seven equations; it shows that, when the entropy diminishes, the rationality parameter λ increases.
The rationality increases from an initial value of zero, where the entropy is maximum, and the entropy drops to its minimum value as the rationality tends toward infinity [38,39,40]: Limλ→∞Hk(pk(λ)) = min(Hk)
The standard deviation is minimum in Nash's equilibria [21,22,23].
If the rationality increases, then Nash's equilibria can be reached as the rationality tends toward infinity: Limλ→∞σk(λ) = 0.
Using the logical chain just demonstrated, we can conclude that the entropy diminishes when the standard deviation diminishes:
Entropy 05 00313 i036
after making a few more calculations, it is possible to demonstrate that
Entropy 05 00313 i037
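A short sketch of Case 2 (the utility vector u and the λ grid below are arbitrary illustrative choices): as the rationality λ grows, the multinomial-logit probabilities concentrate on the best strategy, and both the entropy and the standard deviation fall monotonically toward their minima.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.5])   # hypothetical utilities of one player's strategies

def logit(lam):
    # multinomial logit p_j proportional to exp(lam * u_j), via a stable softmax
    z = np.exp(lam * u - np.max(lam * u))
    return z / z.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-300))

def utility_sd(p):
    return np.sqrt(np.sum(p * u**2) - np.sum(p * u)**2)

lams = [0.0, 1.0, 5.0, 50.0]
Hs = [entropy(logit(l)) for l in lams]
Ss = [utility_sd(logit(l)) for l in lams]

# lam = 0: uniform play, maximum entropy ln(3); lam -> infinity: entropy and dispersion -> 0
assert abs(Hs[0] - np.log(3)) < 1e-12
assert all(Hs[i] > Hs[i + 1] for i in range(len(Hs) - 1))
assert all(Ss[i] > Ss[i + 1] for i in range(len(Ss) - 1))
```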
Remark 1 
The entropies Hk for the gaussian probability density and the multinomial logit are written as Entropy 05 00313 i038 Entropy 05 00313 i039 and Entropy 05 00313 i040
Case 3 
The special case of Minimum Entropy occurs when Entropy 05 00313 i041 and the utility function value of each strategy Entropy 05 00313 i042 are the same for all jk and for all k; see Binmore and Myerson [3,48].
In this special case of Minimum Entropy, when Entropy 05 00313 i043, the gaussian density function can be approximated by Dirac's delta Entropy 05 00313 i044.
The function Entropy 05 00313 i044 is called "Dirac's delta function". It is not a function in the usual sense: it represents an infinitely short, infinitely strong unit-area impulse. It satisfies Entropy 05 00313 i044 = Entropy 05 00313 i045 and can be obtained as the limit of the function
Entropy 05 00313 i046
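Numerically, this limit can be sketched by integrating a smooth test function against gaussians of shrinking width: the integrals approach the test function's value at zero, which is the defining (sifting) property of Dirac's delta. The test function cos x and the widths below are arbitrary choices.

```python
import numpy as np

def integrate(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def gauss(x, s):
    # unit-area gaussian of width s, approximating Dirac's delta as s -> 0
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

x = np.linspace(-5.0, 5.0, 200001)
g = np.cos(x)                       # smooth test function with g(0) = 1

vals = [integrate(gauss(x, s) * g, x) for s in (1.0, 0.1, 0.01)]

# integral of delta_s(x) g(x) dx -> g(0) = 1 as the width s shrinks
assert vals[0] < vals[1] < vals[2]
assert abs(vals[2] - 1.0) < 1e-3
```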
Case 4 
If we can measure the standard deviation Entropy 05 00313 i047, then Nash's equilibrium represents minimum standard deviation (σk)min, ∀k ∈ K.
Entropy 05 00313 i048
Theorem 2  
(Minimum Dispersion). The gaussian density function f(x) = |φ(x)|2 represents the solution of a differential equation given by Entropy 05 00313 i049 related to the minimum dispersion of a linear combination of the variable x and a Hermitian operator Entropy 05 00313 i050
This proof is developed at length in Pena and Cohen-Tannoudji [4,48]. Let Entropy 05 00313 i051 be Hermitian operators 2 which do not commute, so that we can write
Entropy 05 00313 i052
It is easily verified that the standard deviations satisfy the same commutation rule:
Entropy 05 00313 i053
Let J(α) be the positive function defined as the expected value 〈〉.
Entropy 05 00313 i058
By hypothesis the Hermitian operators can be written as Entropy 05 00313 i059. Using this property,
Entropy 05 00313 i060
the expression J(α) has its minimum value when J′(α) = 0; thus the values of α and Jmin are
Entropy 05 00313 i061
simplifying Jmin
Entropy 05 00313 i062
We are interested in the explicit form of the function φ(x) when the inequality is transformed into an equality and Entropy 05 00313 i063, where Entropy 05 00313 i064, for which we need to solve the following differential equation.
Entropy 05 00313 i065
if Entropy 05 00313 i066, and Entropy 05 00313 i067 then Entropy 05 00313 i068 and
Entropy 05 00313 i069
the solution is
Entropy 05 00313 i070
Remark 2 
The gaussian density function f (x) is optimal in the sense of minimum dispersion.
According to the last theorem, Entropy 05 00313 i071 is a gaussian random variable with probability density fk(x), ∀k; in order to calculate Entropy 05 00313 i072, we can write
Entropy 05 00313 i073
Remark 3 
The central limit theorem, which establishes that "the mean Entropy 05 00313 i074 of the random variable Entropy 05 00313 i075 has a normal density function", is respected in the Minimum Entropy Method.

2.2 Quantum Games Elements 

A close relationship exists between Quantum Mechanics and Game Theory: both share a random nature. Due to this nature it is necessary to speak of expected values. Quantum Mechanics defines observed or expected values; see Cohen-Tannoudji [4].
According to Quantum Mechanics, an uncertainty exists in the simultaneous measurement of two variables such as position and momentum, or energy and time [4]. A quantum element in a mixed state respects the probabilistic superposition principle of pure states. For its part, in Game Theory a player that uses a mixed strategy respects the vNM (von Neumann and Morgenstern) utility function, a probabilistic superposition of pure strategies, according to Binmore [3,43]. The state function in quantum mechanics does not have an equivalent in Game Theory, so we will use "state function" as an analogy with Quantum Mechanics. The comparison between Game Theory and Quantum Mechanics is shown explicitly in Table 1. If the definition of rationality in Game Theory represents an optimization process, we can say that quantum processes are essentially optimal; therefore the nature of both processes is similar.
Table 1. Quantum Mechanic and Game Theory properties
Quantum Mechanics | Game Theory
Particle: k = 1, ..., n | Player: k = 1, ..., n
Quantum element | Player type
Interaction | Interaction
Quantum state: j = 1, ..., lk | Strategy: j = 1, ..., lk
Energy e | Utility u
Superposition of states | Superposition of strategies
State function | Utility function
Probabilistic and optimal nature | Probabilistic and optimal nature
Uncertainty Principle | Minimum Entropy
Matrix operators | Matrix operators
Variational Calculus, Optimal Control, Information Theory | Information Theory
Complexity | Complexity
variety: number of particles n | variety: number of players n
variability: number of interactions n! | variability: number of interactions n!
quantitative: Mathematical model | quantitative: Mathematical model
Observable value: E[e] | Observable value: E[u]
The entities communicate efficiently | The players interact efficiently
Entropy | Minimum Entropy
Let k = 1, ..., n be players, each with lk strategies Entropy 05 00313 i076. According to the theorem of Minimum Dispersion, the utility Entropy 05 00313 i077 converges to Nash's equilibria and follows a normal probability density Entropy 05 00313 i078, where the expected utility is Entropy 05 00313 i079
Definition 1 
(Tactics or Sub-Strategies). Let Entropy 05 00313 i080 be the set of sub-strategies Entropy 05 00313 i081 with a probability given by Entropy 05 00313 i082, where Entropy 05 00313 i083 and Entropy 05 00313 i084 when k ≠ l. A strategy jk is constituted of several sub-strategies Entropy 05 00313 i085
With the introduction of sub-strategies, the probability of each strategy is described in the following way Entropy 05 00313 i086, and we do not need any approximation of the function Entropy 05 00313 i087 where:
Entropy 05 00313 i088
and Entropy 05 00313 i089
Theorem 3 
The normal probability density ρ(u) = |ψ(x, λ)|2 can be obtained using the orthogonal Hermite polynomials Hk(x). The parameter Entropy 05 00313 i090 indicates the probability of playing strategy jk. The state function ϕk(x, λ) measures the behavior of strategy jk (jk is one level in Quantum Mechanics [4,9,31]). The dynamic behavior of strategy jk can be written as Entropy 05 00313 i091. One approximation is Entropy 05 00313 i092
The function ψ(x, λ) is determined as follows:
Entropy 05 00313 i093
where: Entropy 05 00313 i094 and
Entropy 05 00313 i095
The properties of the Hermite polynomials will be necessary to express ρ(u) as a function of Hk(x).
Using the generating function ψ(x, λk) we can write
Entropy 05 00313 i096
Let ψ(x, λk) be the generating function
Entropy 05 00313 i097
In order to get Entropy 05 00313 i098, we use the equations (QG1, PH1, PH2) and:
Entropy 05 00313 i099
If Entropy 05 00313 i100 then wk = 0; we can write:
Entropy 05 00313 i101
Comparing (Q2) and (Q3), it is easy to conclude
Entropy 05 00313 i102
Entropy 05 00313 i103
The parameter Entropy 05 00313 i104 represents the probability of sub-strategy Entropy 05 00313 i105 of each player k. The vNM utility can be written as:
Entropy 05 00313 i106
Entropy 05 00313 i107
Using the operators a, a+ let us write the equations:
Entropy 05 00313 i108
Entropy 05 00313 i109
Equation (Q6.1) indicates that the operator Entropy 05 00313 i110 decreases the level of player k. The operator a can decrease the sub-strategy description Entropy 05 00313 i111
Entropy 05 00313 i112
Entropy 05 00313 i113
Equation (Q7.1) indicates that the operator Entropy 05 00313 i114 increases the level of player k [4,46]. The operator a+ can increase the sub-strategy description Entropy 05 00313 i115
The Hermite polynomials form an orthogonal basis and possess desirable properties that permit us to express a normal probability density as a linear combination of them.

2.3 Hermite’s Polynomials Properties 

The following properties are taken from Cohen-Tannoudji, Pena, and Landau [4,31,46].
Hermite polynomials obey the following differential equation [14]:
Entropy 05 00313 i116
The principal solution of Hermite’s differential equation3 is
Entropy 05 00313 i117
Taking the Entropy 05 00313 i118 derivative of (PH1) explicitly:
Entropy 05 00313 i119
Entropy 05 00313 i120
From which we obtain the decreasing operator:
Entropy 05 00313 i121
and the increasing operator:
Entropy 05 00313 i122
Recurrence formulas:
Entropy 05 00313 i123
and normalization condition:
Entropy 05 00313 i124
The next condition expresses the orthogonality property of Hermite’s polynomials:
Entropy 05 00313 i125
The generating function is obtained using the Taylor expansion.
Entropy 05 00313 i126
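These recurrence and orthogonality properties can be verified numerically. The sketch below (grid sizes are arbitrary) constructs the physicists' Hermite polynomials from the recurrence Hn+1(x) = 2x Hn(x) − 2n Hn−1(x) and checks the orthogonality relation, whose normalization constant is 2^n n! √π.

```python
import numpy as np
from math import factorial

def integrate(y, x):
    # composite trapezoidal rule
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def hermite(n, x):
    # physicists' Hermite polynomial H_n via the recurrence
    # H_{k+1}(x) = 2x H_k(x) - 2k H_{k-1}(x), with H_0 = 1, H_1 = 2x
    h_prev, h = np.ones_like(x), 2 * x
    if n == 0:
        return h_prev
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

x = np.linspace(-10.0, 10.0, 200001)
wt = np.exp(-x**2)          # gaussian weight of the orthogonality relation

for m in range(5):
    for n in range(5):
        I = integrate(wt * hermite(m, x) * hermite(n, x), x)
        expected = (2**n) * factorial(n) * np.sqrt(np.pi) if m == n else 0.0
        assert abs(I - expected) <= 1e-6 * max(1.0, abs(expected))
```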
Quantum Games generalize the representation of probability in the utility function of vNM (von Neumann and Morgenstern). To understand Quantum Games it is necessary to use the properties of the quantum operators of the harmonic oscillator and the properties of Hilbert space. The harmonic oscillator is presented as the paradigm in Quantum Games.

2.4 Quantum Games Properties 

According to Quantum Mechanics [4,9,31,46], a quantum system obeys the following postulates or axioms:
Axiom 1 
At a fixed time t0, the state of a physical system is defined by specifying a ket |ψ(t0)〉 belonging to the state space (Hilbert space) V.
Axiom 2 
Every measurable physical quantity Λ is described by an operator A acting in V; this operator is an observable.
Axiom 3 
The only possible result of the measurement of a physical quantity Λ is one of the eigenvalues of the corresponding observable A.
Axiom 4 
When a physical quantity Λ is measured on a system in the normalized state |ψ(t)〉 = Σ cj |ϕj〉, the probability P(cj) of obtaining the non-degenerate eigenvalue cj of the corresponding observable A is P(cj) = |〈ϕj|ψ〉|2, where |ϕj〉 is the normalized eigenvector of A associated with the eigenvalue cj.
Let there be n players with k = 1, ..., n, lk strategies, and Entropy 05 00313 i127 for each player k.
Superposition principle:
Entropy 05 00313 i128
Time evolution:
Entropy 05 00313 i129
Inner product:
Entropy 05 00313 i130
Quantum property
Entropy 05 00313 i131
Mean value
Entropy 05 00313 i132
Normality condition
Entropy 05 00313 i133
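A two-dimensional toy example (the Hermitian matrix and superposition coefficients below are arbitrary, chosen only for illustration) shows Axioms 3 and 4 and the mean-value property at work: a normalized superposition of eigenvectors yields outcome probabilities P(cj) = |〈ϕj|ψ〉|², and these reproduce the expected value 〈ψ|A|ψ〉 = Σj P(cj) cj.

```python
import numpy as np

A = np.array([[1.0, 0.5],
              [0.5, 2.0]])             # a Hermitian (here real symmetric) observable
evals, evecs = np.linalg.eigh(A)       # Axiom 3: possible outcomes are eigenvalues

# normalized superposition of the two eigenvectors (0.36 + 0.64 = 1)
psi = 0.6 * evecs[:, 0] + 0.8 * evecs[:, 1]
assert abs(np.linalg.norm(psi) - 1.0) < 1e-12

# Axiom 4: P(c_j) = |<phi_j|psi>|^2, and the probabilities sum to one
P = np.abs(evecs.T @ psi)**2
assert abs(P.sum() - 1.0) < 1e-12

# mean value <psi|A|psi> equals the probability-weighted sum of eigenvalues
mean_value = psi @ A @ psi
assert abs(mean_value - P @ evals) < 1e-12
```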

3 Time Series and Uncertainty Principle of Heisenberg

Let us recall some basic definitions from probability and stochastic calculus.
A probability space is a triple (Ω, Entropy 05 00313 i134, P) consisting of [10,13,30]:
  • A set Ω that represents the set of all possible outcomes of a certain random experiment.
  • A family Entropy 05 00313 i134 of subsets of Ω with a structure of σ-algebra:
    Entropy 05 00313 i135
    Entropy 05 00313 i136
    Entropy 05 00313 i137
  • A function P : Entropy 05 00313 i134 → [0, 1] such that:
    Entropy 05 00313 i138
    If Entropy 05 00313 i139 form a finite or countably infinite collection of pairwise disjoint sets (that is, Ai ∩ Aj = Ø if i ≠ j) then
    Entropy 05 00313 i140
The elements of the σ-algebra Entropy 05 00313 i134 are called events and P is called a probability measure. A random variable is a function:
Entropy 05 00313 i141
Entropy 05 00313 i134-measurable, that is, Entropy 05 00313 i142 for all B in Borel’s σ algebra of R, noted ß(R).
If X is a random variable on (Ω, Entropy 05 00313 i134, P) the probability measure induced by X is the probability measure PX (called law or distribution of X) on ß(R) given by
PX(B) = P {ω : X(ω) ∈ B}, B ∈ ß(R)
The numbers PX (B), B ∈ß(R), completely characterize the random variable X in the sense that they provide the probabilities of all events involving X.
Definition 2 
The distribution function of a random variable X is the function F = FX from R to [0, 1] given by
F(x) = P {ω : X(ω) ≤ x}  xR
Since, for a < b, F(b) − F(a) = P {ω : a < X(ω) ≤ b} = PX (a, b], F is a distribution function corresponding to the Lebesgue-Stieltjes measure PX. Thus, among all distribution functions corresponding to the Lebesgue-Stieltjes measure PX, we choose the one with F(∞) = 1, F(−∞) = 0. In fact we can always supply the probability space in a canonical way: take Ω = R, Entropy 05 00313 i134 = ß(R), with P the Lebesgue-Stieltjes measure corresponding to F.
Definition 3 
A stochastic process X = {X(t), t ∈ T} is a collection of random variables on a common probability space (Ω, Entropy 05 00313 i134, P), indexed by a parameter t ∈ T ⊂ R, which we usually interpret as time [10,27,28,29,30]. It can thus be formulated as a function X : T × Ω → R.
The variable X(t, •) is Entropy 05 00313 i134-measurable in ω ∈ Ω for each t ∈ T; henceforth we shall often follow established convention and write Xt for X(t). When T is a countable set, the stochastic process is really a sequence of random variables Xt1, Xt2, ..., Xtn, ...
Definition 4 
If a random variable X = X(t) has the density ρ(x) and f : X → Y is one-to-one, then y = f(x) has the density g(y) given by Entropy 05 00313 i144; see Fudenberg and Tirole [12], chapter 6. The densities ρ(x) and g(y) show that "if X is a random variable and f−1(y) exists, then Y is a random variable, and conversely."
Definition 5 
Inversion formula: if h is the characteristic function of the bounded distribution function F, and F(a, b] = F(b) − F(a), then
Entropy 05 00313 i145
for all points a, b (a < b) at which F is continuous. If, in addition, h is Lebesgue integrable on (−∞, ∞), then the function f given by
Entropy 05 00313 i146
is a density of F; that is, f is nonnegative and Entropy 05 00313 i147 for all x; furthermore, F′ = f everywhere. Thus in this case f and h are "Fourier transform pairs":
Entropy 05 00313 i148
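A quick numerical check of the inversion formula on a familiar pair (the grid is an arbitrary choice): the characteristic function of N(0, 1) is h(w) = exp(−w²/2), and f(x) = (1/2π) ∫ e^{−iwx} h(w) dw recovers the standard normal density.

```python
import numpy as np

def integrate(y, x):
    # composite trapezoidal rule (works for complex integrands too)
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

w = np.linspace(-40.0, 40.0, 80001)
h = np.exp(-w**2 / 2)                 # characteristic function of N(0, 1)

x0 = 1.0
# inversion formula: f(x) = (1/2*pi) * integral of e^{-iwx} h(w) dw
f_x0 = (integrate(np.exp(-1j * w * x0) * h, w) / (2 * np.pi)).real
target = np.exp(-x0**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density at x0

assert abs(f_x0 - target) < 1e-9
```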
Definition 6 
If X is a random variable and f a Borel measurable function on (−∞, ∞), then f(X) is a random variable [25]. If W is a random variable and h(W) a Borel measurable function on (−∞, ∞), then h(W) is a random variable.
Theorem 4 
The Possibility Theorem: Entropy 05 00313 i149. "The more precisely the random variable VALUE x is determined, the less precisely the frequency VALUE w is known at this instant, and vice versa." Let X = {X(t), t ∈ T} be a stochastic process with density function f(x), and let w be a random variable with characteristic function h(w). If the density functions can be written as f(x) = ψ(x)ψ(x)∗ and h(w) = φ(w)φ(w)∗, then: Entropy 05 00313 i150
If the functions ψ(x = ±∞) = 0 and φ(w = ±∞) = 0 then f(x = ±∞) = 0 and h(w = ±∞) = 0.
The following proof is based on Frieden [11].
Let the function ψ(x) be defined as
Entropy 05 00313 i151
Here w is the frequency, distinct from ω ∈ Ω.
Then φ(w) is
Entropy 05 00313 i152
According to Fourier’s transform properties, we have:
Entropy 05 00313 i153
Entropy 05 00313 i154
By the Parseval-Plancherel theorem
Entropy 05 00313 i155
Using Schwartz’s inequality
Entropy 05 00313 i156
and the probability properties Entropy 05 00313 i157, it is evident that:
Entropy 05 00313 i158
Let Entropy 05 00313 i159 be the standard deviation of the frequency w, and Entropy 05 00313 i160 the standard deviation of the random variable x
Entropy 05 00313 i161
Entropy 05 00313 i162
with the Parseval-Plancherel theorem
Entropy 05 00313 i163
Entropy 05 00313 i164
Multiplying (Q7.1) by (Q7.2) and using (Q8), it follows that
Entropy 05 00313 i165
Entropy 05 00313 i166
Taking G = and H = ψ′ in (Q5) shows that the right-hand side of (Q9.2) exceeds a certain quantity A
Entropy 05 00313 i167
We will note Entropy 05 00313 i168. If the variable A is given by A = Entropy 05 00313 i169, then Entropy 05 00313 i170. By definition, F′ = f = ψψ∗
Entropy 05 00313 i171
Notice that A + A∗ = −1; using the complex-number property 2 Re(A) = −1, or 2|A|cosθ = −1 where θ is the phase of A, we can write 4|A|2 = Entropy 05 00313 i172. Since −1 ≤ cosθ ≤ 1, then |A|2 Entropy 05 00313 i173 and therefore:
Entropy 05 00313 i174
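The equality case of this bound can be checked numerically (a sketch; σ, the grids, and the unitary Fourier convention are illustrative choices): take ψ so that |ψ|² is the N(0, σ²) density; its Fourier transform φ then has |φ|² gaussian with standard deviation 1/(2σ), so σx σw = 1/2 exactly.

```python
import numpy as np

def integrate(y, x):
    # composite trapezoidal rule along the last axis
    return np.sum((y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1) / 2.0

sigma = 1.5
x = np.linspace(-20.0, 20.0, 4001)
w = np.linspace(-5.0, 5.0, 801)

# psi chosen so that |psi|^2 is the N(0, sigma^2) density
psi = (2 * np.pi * sigma**2)**(-0.25) * np.exp(-x**2 / (4 * sigma**2))
# unitary Fourier transform phi(w) = (1/sqrt(2*pi)) * integral of psi(x) e^{-iwx} dx
phi = integrate(psi * np.exp(-1j * np.outer(w, x)), x) / np.sqrt(2 * np.pi)

f = psi**2                 # density of x
h = np.abs(phi)**2         # density of w

sx = np.sqrt(integrate(x**2 * f, x) / integrate(f, x))
sw = np.sqrt(integrate(w**2 * h, w) / integrate(h, w))

assert abs(sx - sigma) < 1e-6      # sigma_x = sigma
assert abs(sx * sw - 0.5) < 1e-4   # sigma_x * sigma_w = 1/2 at the gaussian minimum
```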
Remark 4 
Let xt be a time series with a spectrum of frequencies wj, where each frequency is a random variable. This spectrum of frequencies can be obtained with a minimum error (the standard deviation of the frequency). This minimum error σw for a certain frequency w is given by the equation Entropy 05 00313 i175. The expected value E[wj] of each frequency can be determined experimentally.
An occurrence of one particular frequency wj is defined in a confidence interval given by
Entropy 05 00313 i176
This last remark agrees with and supplements the statement of Hamilton [17]:
"We should interpret Entropy 05 00313 i177 not as the portion of the variance of X that is due to cycles with frequency exactly equal to wj, but rather as the portion of the variance of X that is due to cycles with frequency near wj," where:
Entropy 05 00313 i178
The only condition that the Possibility Theorem imposes is exhaustive and exclusive Entropy 05 00313 i179, because it includes the Minimum Entropy Theorem, or equilibrium-state theorem Entropy 05 00313 i180. It is necessary to analyze the different cases of the time series xt using the Possibility Theorem. A time series xt evolves in dynamic equilibrium if and only if Entropy 05 00313 i181. A time series evolves out of dynamic equilibrium if and only if Entropy 05 00313 i182.
Table 2. Evolution of time series, out of the equilibrium Entropy 05 00313 i215. In this table we can see the different changes in σx and σw.
1 Entropy 05 00313 i216
2 Entropy 05 00313 i216
3 Entropy 05 00313 i216
4 Entropy 05 00313 i216
5 0 Entropy 05 00313 i217 trivial
Table 3. Evolution of time series, in the equilibrium Entropy 05 00313 i218. In this table we can see the different changes in σx and σw.
1 Entropy 05 00313 i219
2 Entropy 05 00313 i219
3 max min Entropy 05 00313 i219 min
4 min min Entropy 05 00313 i219 max
5 0 Entropy 05 00313 i220 trivial
Remark 5 
Hirschman’sform of the uncertainty principle. The fundamental analytical effect of Heisenberg’s principle is that the probability densities for x and w cannot both be arbitrarily narrow [4], H(W) + Entropy 05 00313 i183 where: Entropy 05 00313 i184 and Entropy 05 00313 i185. When ψ(x) and φ(w) are gaussian H(W) = B + Log(σW) and H(X) = C + Log(σX), Hirschman’s inequality becomes Heisenberg’s principle, then inequalities are transformed in equalities and the minimum uncertainty is minimum entropy. In Quantum Mechanics the minimum uncertainty product also obeys a minimum entropy sum.

4 Applications of the Models

4.1 Hermites’s Polynomials Application 

Let Γ = (K, S, v) be a 3-player game, with K the set of players k = 1, 2, 3. The finite set Sk of cardinality lk ∈ N is the set of pure strategies of each player k ∈ K, with skjk ∈ Sk, jk = 1, 2, 3, and S = S1 × S2 × S3 represents the set of pure-strategy profiles, with s ∈ S an element of that set; l = 3 · 3 · 3 = 27 represents the cardinality of S. The vector function v : S → R3 associates with every profile s ∈ S the vector of utilities v(s) = (v1(s), ..., v3(s)), where vk(s) designates the utility of player k facing the profile s. To simplify calculation we write the function vk(s) in an explicit way vk(s) = vk(j1, j2, j3). The matrix v3,27 represents all points of the Cartesian product Πk∈K Sk; see Table 4. The vector vk(s) is the k-th column of v. The graphic representation of the 3-player game is shown in Figure 1.
In these games we obtain Nash's equilibria in pure strategies (maximum utility MU, Table 4) and mixed strategies (Minimum Entropy Theorem MET, Table 5 and Table 6). After finding the equilibria, we compare the results with those obtained from applying the theory of quantum games developed above.
Figure 1. 3-player game strategies
Entropy 05 00313 g001
Table 4. Maximum utility (random utilities): maxp (u1 + u2 + u3)
Nash Utilities and Standard Deviations
Maxp (u1 + u2 + u3) = 20.4
Nash Equilibria: Probabilities, Utilities
Player 1 | Player 2 | Player 3
Kronecker Products
Table 5. Minimum entropy (random utilities): minp(σ1 + σ2 + σ3) ⇒ minp(H1 + H2 + H3)
Nash Utilities and Standard Deviations
Minp (σ1 + σ2 + σ3) = 1.382
Nash Equilibria: Probabilities, Utilities
Player 1 | Player 2 | Player 3
p11 u11 | p12 u12 | p13 u13 | p21 u21 | p22 u22 | p23 u23 | p31 u31 | p32 u32 | p33 u33
Kronecker Products
Table 6. Minimum entropy (stone-paper-scissors): minp(σ1 + σ2 + σ3) ⇒ minp(H1 + H2 + H3)
Nash Utilities and Standard Deviations
Minp (σ1 + σ2 + σ3) = 1.382
Nash Equilibria: Probabilities, Utilities
Player 1 | Player 2 | Player 3
p11 u11 | p12 u12 | p13 u13 | p21 u21 | p22 u22 | p23 u23 | p31 u31 | p32 u32 | p33 u33
Kronecker Products

4.2 Time Series Application

The relationship between Time Series and Game Theory appears when we apply the entropy minimization theorem (EMT). This theorem is a way to analyze the Nash-Hayek equilibrium in mixed strategies. Introducing the elements of rationality and equilibrium into the domain of time series can be a big help, because it allows us to study the human behavior reflected and registered in historical data. The main contributions of this complementary focus on time series relate to Econophysics and rationality.
Human behavior evolves and is the result of learning. The objective of learning is stability and optimal equilibrium. Due to the above, we can affirm that if learning is optimal then convergence to equilibrium is faster than when learning is sub-optimal. Introducing elements of Nash's equilibrium into time series will allow us to evaluate learning and the convergence to equilibrium through the study of historical data (time series).
One of the branches of Physics called Quantum Mechanics was pioneered using Heisemberg’s uncer- tainty principle. This paper is simply the application of this principle in Game Theory and Time Series.
Econophysics is a newborn branch of the scientifi c development that attempts to establish the analogies between Economics and Physics, see Mantenga and Stanley [33]. The establishment of analogies is a creative way of applying the idea of cooperative equilibrium. The product of this cooperative equilibrium will produce synergies between these two sciences. From my point of view, the power of physics is the capacity of equilibrium formal treatment in stochastic dynamic systems. On the other hand, the power of Economics is the formal study of rationality, cooperative and non-cooperative equilibrium.
Econophysics is the beginning of a unifi cation stage of the systemic approach of scientifi c thought. I show that it is the beginning of a unifi cation stage but remains to create synergies with the rest of the sciences.
Let {xt} be a covariance-stationary process with mean E[xt] = µ and jth autocovariance γj:

E[(xt − µ)(xt−j − µ)] = γj

If these autocovariances are absolutely summable, the population spectrum of x at frequency w is given by [21]:

Sx(w) = (1/2π) Σj=−∞,…,∞ γj e^(−iwj)
If the γj ’s represent autocovariances of a covariance-stationary process using the Possibility Theorem, then
Sx(w) ∈ [Sx(wσw), Sx(w + σw)]
and Sx(w) will be nonnegative for all w. In general, for an ARMA(p, q) process

xt = c + ϕ1xt−1 + ϕ2xt−2 + … + ϕpxt−p + εt + θ1εt−1 + … + θqεt−q

with white noise εt of variance σ², the population spectrum Sx(w) ∈ [Sx(w − σw), Sx(w + σw)] is given by Hamilton [17]:

Sx(w) = (σ²/2π) · |1 + θ1e^(−iw) + θ2e^(−2iw) + … + θqe^(−qiw)|² / |1 − ϕ1e^(−iw) − ϕ2e^(−2iw) − … − ϕpe^(−piw)|²

where i = √(−1) and w is a real scalar.
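The ARMA(p, q) population spectrum can be evaluated directly. This is a minimal illustrative sketch, not code from the paper; the function name and the AR(1) example are assumptions for illustration.

```python
import numpy as np

def arma_spectrum(phi, theta, sigma2, omegas):
    """Population spectrum of an ARMA(p, q) process (Hamilton 1994, ch. 6):
    S_x(w) = (sigma^2 / 2 pi) |theta(e^{-iw})|^2 / |phi(e^{-iw})|^2."""
    omegas = np.asarray(omegas, dtype=float)
    z = np.exp(-1j * omegas)
    # MA polynomial 1 + theta_1 z + ... + theta_q z^q evaluated at z = e^{-iw}
    num = np.abs(1 + sum(t * z ** (k + 1) for k, t in enumerate(theta))) ** 2
    # AR polynomial 1 - phi_1 z - ... - phi_p z^p evaluated at z = e^{-iw}
    den = np.abs(1 - sum(f * z ** (k + 1) for k, f in enumerate(phi))) ** 2
    return sigma2 / (2 * np.pi) * num / den

# AR(1) with phi_1 = 0.5: the spectrum peaks at frequency 0 and
# decreases monotonically toward w = pi
s = arma_spectrum(phi=[0.5], theta=[], sigma2=1.0,
                  omegas=[0.0, np.pi / 2, np.pi])
```

For an AR(1) the closed form at w = 0 is σ²/(2π(1 − ϕ1)²), which the sketch reproduces.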
Given an observed sample of T observations denoted x1, x2, …, xT, we can calculate up to T − 1 sample autocovariances γ̂j from the formulas:

γ̂j = (1/T) Σt=j+1,…,T (xt − x̄)(xt−j − x̄),  j = 0, 1, …, T − 1

where x̄ = (1/T) Σt=1,…,T xt is the sample mean.
The sample periodogram can be expressed as:
Ŝx(w) = (1/2π) [γ̂0 + 2 Σj=1,…,T−1 γ̂j cos(wj)]
When the sample size T is an odd number, xt can be expressed in terms of periodic functions at M = (T − 1)/2 different frequencies wj = 2πj/T, j = 1, …, M:

xt = µ̂ + Σj=1,…,M [âj cos(wj(t − 1)) + b̂j sin(wj(t − 1))]
The coefficients âj and b̂j can be estimated with OLS regression:

âj = (2/T) Σt=1,…,T xt cos(wj(t − 1)),  b̂j = (2/T) Σt=1,…,T xt sin(wj(t − 1))
The sample variance of xt can be expressed as:
(1/T) Σt=1,…,T (xt − x̄)² = (1/2) Σj=1,…,M (âj² + b̂j²)
The portion of the sample variance of xt that can be attributed to cycles of frequency wj is given by:
(1/2)(âj² + b̂j²) = (4π/T) Ŝx(wj)

with Ŝx(wj) the sample periodogram at frequency wj.
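These sample formulas are easy to check numerically. The sketch below is illustrative (not from the paper); it implements the sample autocovariances and the sample periodogram, and verifies that for odd T the sample variance decomposes exactly over the M = (T − 1)/2 frequencies wj = 2πj/T.

```python
import numpy as np

def sample_autocov(x):
    """gamma_hat_j = (1/T) sum_{t=j+1..T} (x_t - xbar)(x_{t-j} - xbar)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    d = x - x.mean()
    return np.array([np.dot(d[j:], d[:T - j]) / T for j in range(T)])

def sample_periodogram(x, w):
    """S_hat_x(w) = (1/2pi) [g_0 + 2 sum_{j=1..T-1} g_j cos(w j)]."""
    g = sample_autocov(x)
    j = np.arange(1, len(g))
    return (g[0] + 2.0 * np.sum(g[1:] * np.cos(w * j))) / (2.0 * np.pi)

rng = np.random.default_rng(0)
x = rng.normal(size=37)               # T = 37 is odd, as in the examples
T = len(x)
M = (T - 1) // 2
freqs = 2.0 * np.pi * np.arange(1, M + 1) / T
# portion of the sample variance attributed to each frequency w_j
portions = (4.0 * np.pi / T) * np.array([sample_periodogram(x, w)
                                         for w in freqs])
var_x = sample_autocov(x)[0]          # sample variance gamma_hat_0
```

The sum of the per-frequency portions equals the sample variance exactly, which is the decomposition stated above.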
Continuing with the methodology proposed by Hamilton [17], we develop two examples that will allow us to verify the applicability of the Possibility Theorem.
Example 5 
As a first step we build four time series whose parameters aj, bj and wj, j = 1, …, M, are all known (Table 7, Figure 2, Figure 3, Figure 4 and Figure 5). The series xt is built from cosine terms at the frequencies w1, w3 and sine terms at w5, w12 (its constitutive elements appear in Figure 5), and the remaining series are yt = 2cos(w1(t − 1)) + 4 sin(w12(t − 1)), zt = 4 sin(w12(t − 1)) and ut = 3.
In the second step, we find the sample autocovariances γ̂j and autocorrelations ρ̂j, j = 1, …, (T − 1), of the time series xt according to Q21 (Table 7, Figure 6).
In the third step, we find the sample periodogram Ŝx(wj) (Table 8, Figure 7), only for the frequencies w1, w3, w5, w12.
In the fourth step, we compute the values σx·σw (and resp. σy·σw, σz·σw, σu·σw for y, z, u) together with σw min = 1/(2σx) (Table 9).
In the fifth step, we verify whether the Possibility Theorem σx·σw ≥ 1/2 is respected (resp. for y, z, u; Table 9).
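The σw min entries of Table 9 are consistent with reading the minimum frequency dispersion as the minimal-dispersion case of the bound σx·σw ≥ 1/2, i.e. σw min = 1/(2σx). This reading is an assumption on my part, but the short sketch below shows it reproduces the tabled value for xt:

```python
# Confidence interval for a cycle frequency of x_t, assuming
# sigma_w_min = 1 / (2 * sigma_x): the minimal-dispersion case of the
# Possibility Theorem bound sigma_x * sigma_w >= 1/2.
sigma_x = 4.118                       # sample value for x_t (Table 9)
w5 = 0.849                            # frequency E[w5] of x_t (Table 9)

sigma_w_min = 1.0 / (2.0 * sigma_x)   # about 0.121, as in Table 9
interval = (w5 - sigma_w_min, w5 + sigma_w_min)
```

The lower end of the interval, about 0.728, matches the tabled lower bound for E[w5].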
Example 6 
Let {vt}, t = 1, …, 37, be a time series, and w12 a gaussian random variable. Both are related as follows:

vt = sin((E[w12] + εv)(t − 1)) + εt

For computational simplicity, we suppose that εv and εt have gaussian probability density N(0, 1). It is evident that σw12 = σε = 1. After some computation we obtain the estimated value σv·σw12 = 1.1334. This product verifies the Possibility Theorem (1.1334 > 1/2) and permits us to compute the confidence interval

w12 ∈ [E[w12] − σw min, E[w12] + σw min]

Spectral analysis cannot recover the theoretical value E[w12] = 2.037; the experimental value is E[w12] = 2.8. You can see the results in Figure 8, Figure 9 and Table 10.
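Example 6 can be replicated by simulation. This is a minimal sketch under the assumed construction vt = sin((E[w12] + εv)(t − 1)) + εt with εv, εt ~ N(0, 1); the sample sizes and seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 37
t = np.arange(T)                      # t - 1 = 0, 1, ..., 36
E_w12 = 2.037
sigma_w12 = 1.0                       # std of eps_v, by construction

# Monte-Carlo estimate of the product sigma_v * sigma_w12
prods = []
for _ in range(2000):
    w12 = E_w12 + rng.normal()        # eps_v ~ N(0, 1)
    v = np.sin(w12 * t) + rng.normal(size=T)   # eps_t ~ N(0, 1)
    prods.append(v.std() * sigma_w12)
mean_prod = float(np.mean(prods))
```

Across draws the product stays above 1/2, consistent with the reported value 1.1334 > 1/2, so the Possibility Theorem bound is respected for vt.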
Table 7. Time series values of xt, yt, zt, ut

[Table. Parameters of xt; columns: j = t; γj; ρj; E[wj]; γj·cos(E[w1]·j); γj·cos(E[w3]·j); γj·cos(E[w5]·j); γj·cos(E[w12]·j); xt; yt; zt; ut; cos(w1(t−1)); cos(w3(t−1)); sin(w5(t−1)); sin(w12(t−1)). The numeric entries are not recoverable from this extraction.]
Table 8. Frequencies, variances and sample periodogram of xt

[Table. Analysis of xt; columns: frequency E[wj]; wavelength; coefficients aj, bj; (aj² + bj²)/2; sample periodogram Ŝx(E[wj]); (4π/T)Ŝx(E[wj]); plus the sample variance st². Only one row is partially legible: E[w5] = 0.849, b5 = 2.000, (a5² + b5²)/2 = 2.000, Ŝx(E[w5]) = 8.350, (4π/T)Ŝx(E[w5]) = 2.836. The remaining entries are not recoverable from this extraction.]
Table 9. Verification of Possibility Theorem for series xt, yt, zt, ut (Heisenberg's uncertainty principle)

Series xt: E[xt] = 0.000, σx = 4.118; E[w] = 0.892, σw = 0.813; σw min = 0.121; σx·σw = 3.348 > 1/2
Series yt: E[yt] = 0.000, σy = 3.206; E[w] = 1.104, σw = 1.321; σw min = 0.156; σy·σw = 4.234 > 1/2
Series zt: E[zt] = 0.000, σz = 2.867; E[w12] = 2.038, σw = 0.000; σw min = 0.174; σz·σw = 0.000 < 1/2
Series ut: E[ut] = 3.000, σu = 0.000; σu·σw = 0.000 < 1/2

[The original table also reports, per frequency, confidence bounds Lower[wj], Min[j], Upper[wj], Max[j]; e.g. for xt at E[w5] = 0.849: lower 0.728, upper 0.970; and at E[w12] = 2.038: lower 1.916, upper 2.159. The remaining cells are garbled in this extraction.]
Table 10. Application of Possibility Theorem for time series vt (Heisenberg's uncertainty principle): σv·σw12 = 1.1334 > 1/2
Figure 2. yt = 2cos(w1(t − 1)) + 4 sin(w12(t − 1))
Figure 3. zt = 4 sin(w12(t − 1))
Figure 4. ut = 3
Figure 5. Constitutive elements of time series xt
Figure 6. Autocorrelations of time series xt
Figure 7. Periodogram of time series xt
Figure 8. Time series vt = sin((E[w12] + εv)(t − 1)) + εt
Figure 9. Spectral analysis of vt
5 Conclusions
  • Hermite’s polynomials allow us to study the probability density function of the vNM utility inside an n-player game. Using the approach of Quantum Mechanics we have obtained an equivalence between quantum level and strategy. The state function of Quantum Mechanics opens a new focus in the theory of games, such as sub-strategies.
  • An immediate application of quantum games in economics is related to the principal-agent relationship. Specifically, we can use m types of agents in the case of adverse-selection models. In moral-hazard models, quantum games could be used for a discrete or continuous set of efforts.
  • In this paper we have demonstrated that the Nash-Hayek equilibrium opens new doors so that entropy can be used in game theory. Remembering that the primary way to prove Nash’s equilibria is through utility maximization, we can affirm that human behavior arbitrates between two elements: stochastic utility (benefits) U(p(x)) and entropy (risk or order) H(p(x)). Accepting that the stochastic-utility/entropy relationship U(p(x))/H(p(x)) is equivalent to the well-known benefit/cost ratio, we present a new way to calculate equilibria: maxp U(p(x))/H(p(x)), where p(x) represents the probability function and x = (x1, x2, …, xn) represents endogenous or exogenous variables.
  • In every time series xt in which cycles are present, it is impossible to know the exact value of the frequency wj of each cycle j. We can only know the frequency within a certain confidence interval wj ∈ [wj − σw min, wj + σw min], where σw min = 1/(2σx). “The more precisely the random variable VALUE x is determined, the less precisely the frequency VALUE w is known at this instant, and conversely.”
  • Using Game Theory concepts we can obtain the equilibrium condition in time series, represented by the Possibility Theorem (Table 7, Table 8, Table 9 and Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7).
  • This paper, which uses the Kronecker product ⊗, presents an easy, new formalization of the game (K, ∆, u(p)), which extends the game Γ to mixed strategies.
Acknowledgements
To my father and mother.
This study and analysis was made possible through the generous financial support of TODO1 SERVICES INC and Petroecuador.
The author is grateful to Gonzalo Pozo (Todo1 Services, Miami) and to the participants of the 8th Annual Conference on Computing in Economics and Finance (Aix-en-Provence, France, July 2002), the 10th Annual Conference of the SouthEast SAS® Users Group (USA), and M2002, the 5th annual data mining technology conference (SAS®, USA, 2002).
The opinions and errors are solely my responsibility.
The author also thanks his PhD advisor, Prof. Jean Louis Rulliere (GATE director, Universite Lumiere Lyon II France).
References
  1. Bacry, H. Introduction aux concepts de la Mecanique Statistique; Ellipsis: París, 1991; pp. 484–491. [Google Scholar]
  2. Bertalanffy, L. Teoría General de los Sistemas; Fondo de Cultura Económico: USA, 1976; pp. 484–491. [Google Scholar]
  3. Binmore, K. Fun and Games; D.C.Health and Company, 1992; pp. 484–491. [Google Scholar]
  4. Cohen-Tannoudji, C.; Diu, D.; Laloe F., F. Quantum Mechanics; Hermann: París, 1977. [Google Scholar]
  5. Eisert, J.; Wilkens, M.; Lewenstein, M. Quantum Games and Quantum Strategies. Phys. Rev. Lett 1999, 83, 3077. [Google Scholar]
  6. Eisert, J.; Wilkens, M. Quantum Games. J. Mod. Opt. 2000, 47, 137903. [Google Scholar]
  7. Eisert, J.; Scheel, S.; Plenio, M.B. Distilling Gaussian states with Gaussian operations is impossible. Phys. Rev. Lett 2002, 89, 137903. [Google Scholar]
  8. Eisert, J.; Wilkens, M.; Lewenstein, M. Quantum Games and Quantum Strategies; Working paper, University of Potsdam: Germany, 2001. [Google Scholar]
  9. Feynman, R. Le cours de Physique de Feynman; DUNOT, 1998. [Google Scholar]
  10. Fleming, W.; Rishel, R. Deterministic and Stochastic Optimal Control; 1975. [Google Scholar]
  11. Frieden, B. Probability, Statistical Optics and Data Testing; Springer Series in Information Sciences; Springer-Verlag. [Google Scholar]
  12. Fudenberg, D.; Tirole, J. Game Theory; The Mit Press: Massachusetts, 1991. [Google Scholar]
  13. Gihman, I.; Skorohod, A. Controlled Stochastic Processes; Springer Verlag: New York, 1979. [Google Scholar]
  14. Griffel, D.H. Applied Functional Analysis; John Wiley & Sons: New York, 1985. [Google Scholar]
  15. Griliches, Z.; Intriligator, M. Handbook of Econometrics; Elsevier, 1982; Vol 1. [Google Scholar]
  16. Hahn, F. Stability, Handbook of Mathematical Economics; 1982; Vol II, Chap 16. [Google Scholar]
  17. Hamilton, J.D. Time Series Analysis; Princeton University Press: New Jersey, 1994; Chap (6). [Google Scholar]
  18. Jiangfeng, Du. Experimental Realizations of Quantum Games on a Quantum Computer; Mimeo: University of Science and Technology of China, 2002. [Google Scholar]
  19. Jiménez, E. Aplicación de Neuronales Artificiales en Sísmica del Pozo; Petroecuador: Quito - Ecuador, 1996. [Google Scholar]
  20. Jiménez, E.; Rullière, J.L. Unified Game Theory. Paper presented at the 8th Conference in Computing in Economics and Finance, Aix-en-Provence; 2002. [Google Scholar]
  21. Jiménez, E.; Rullière, J.L. Unified Game Theory; I+D Innovacion, UIDT; 1390-2202Petroecuador: Quito, 2002; Vol 5, pp. 10:39–75. [Google Scholar]
  22. Jiménez, E. Preemption and Attrition in Credit/Debit Cards Incentives: Models and Experiments. In proceedings, 2003 IEEE International Conference on Computational Intelligence for Finantial Engineering, Hong Kong; 2003. [Google Scholar]
  23. Jiménez, E. Quantum Games and Minimum Entropy; Lecture Notes in Computer Science; Springer: Canada, 2003; pp. 216–225. [Google Scholar]
  24. Jiménez, E. Factor-K: La Economia Experimental como Paradigma de Transparencia; I+D Innovacion, UIDT; 1390-2202Vol 6, Petroecuador: Quito, 2003; pp. 12:1–23. [Google Scholar]
  25. Kay, Lai Chung. A Course in Probability Theory; Harcourt, Brace & World, INC, 1978. [Google Scholar]
  26. Kamien, M.; Schwartz, N. Dynamic Optimization; Elsevier: Amsterdam, 1998. [Google Scholar]
  27. Karatzas, I.; Shreve, S. Brownian Motion and Stochastic Calculus, Second Edition ed; Springer-Verlag, 1991. [Google Scholar]
  28. Karlin, S.; Taylor, H. A First Course in Stochastic Processes; Academic Press Inc: Usa, 1980. [Google Scholar]
  29. Karlin, S.; Taylor, H. A Second Course in Stochastic Processes; Academic Press Inc: Usa, 1981. [Google Scholar]
  30. Kloeden, P.; Platen, E. Numerical Solution of Stochastic Differential Equations; Springer-Verlag, 1995. [Google Scholar]
  31. Landau, L.; Lifshitz, E. Mecánica Cuántica; Editorial Reverté, 1978. [Google Scholar]
  32. Lévy, E. Dictionnaire de Physique; PUF, 1998. [Google Scholar]
  33. Mantenga, R.; Stanley, E. An Introduction to Econophysics; Cambridge University Press, 2001. [Google Scholar]
  34. Malliaris, A.; Brock, W. Stochastic Methods in Economics in Finance; Elsevier: Netherlands, 1982. [Google Scholar]
  35. McFadden, D. Quantal Choice Analysis: A Survey. Annals of Economic and Social Measurement 1976, 5/4. [Google Scholar]
  36. McFadden, D. Econometric Analysis of Qualitative Response Models; Handbook of Econometrics, 1984; Vol II, Chap 24. [Google Scholar]
  37. McFadden, D. Economic Choices. AER 2001, 91, NO. 3. 351–378. [Google Scholar]
  38. McKelvey, R.; Palfrey, T. Quantal Response Equilibria for Normal Form Games, Games and economics Behavior; 1995; pp. 10:6–38. [Google Scholar]
  39. McKelvey, R.; Palfrey, T. Quantal Response Equilibria; California Institute of Technology; current version; 1996. [Google Scholar]
  40. McKelvey, R.; Palfrey, T. Quantal Response Equilibria for Extensive Form Games. Experimental Economics 1998, 1,9-41. [Google Scholar]
  41. Meyer, D. Quantum Strategies. Phys. Rev. Lett. 1999, 82, 1052–1055. [Google Scholar]
  42. Meyer, D. Quantum Games and Quantum Algorithms; To appear in the AMS Contemporany Mathematics volume: Quantum Computation and Quantum Information Science; 2000. [Google Scholar]
  43. Myerson, R. Game Theory Analysis of Conflict.; Harvard University Press: Massachusetts, 1991. [Google Scholar]
  44. Neyman, A. Repeated Games with Bounded Entropy. Games and Economic Behavior, Ideal 2000, 30, 228–247. [Google Scholar]
  45. NGÖ, C.; NGÖ, H. Introduction à la Mecanique Statistique, Masson, 2éme édition ed; 1995. [Google Scholar]
  46. Pena, L. Introducción a la Mecánica Cuántica; Compania Editorial Continental: México, 1979. [Google Scholar]
  47. Rasmusen, E. Games and Information: An Introduction to Game Theory; Basil Blackwell: Oxford, 1989. [Google Scholar]
  48. Rényi, A. Calcul des Probabilités; Dunod: Paris, 1966. [Google Scholar]
  49. Robert, Ch. L’Analyse statistique Bayesienne. Economica 1992. [Google Scholar]
  50. Rubinstein, A. The Electronic Mail Game: Strategic Behavior Under, Almost Common Knowledge. American Economic Review 1989, 79, 385–391. [Google Scholar]
  51. Rullière, J.L.; Walliser, B. De la spécularité à la temporalité en thèorie des jeux. In Revue d’Economie Politique; 1995; Volume 105, pp. 601–615. [Google Scholar]
  52. Rust, J. Structural Estimation of Markov Decision Processes; Handbook of Econometrics, 1994; Volume IV, Chapter 51. [Google Scholar]
  53. SENKO Corporation. Quantum Computer Simulation; SENKO Corporation: Yokohama Japan, 1999. [Google Scholar]
  54. Suijs, J.; Borm, P. Stochastic Cooperative Games: Superadditivity Convexity and Certainty Equivalents, Games and Economic Behavior. Ideal 1999, 27, 331–345. [Google Scholar]
  55. Tirole, J. Théorie de L’organisation Industrielle. In Tome II, Economica; 1988. [Google Scholar]
  56. Varian, H. Analyse Microéconomique; Troisième édition, De Boeck Université: Bruxelles, 1995. [Google Scholar]
  57. Weibul, J.W. Evolutionary Game Theory; The MIT press, 1995. [Google Scholar]
  • 1 We use bold in order to represent a vector or matrix.
  • 2 A hermitian operator H has the following property: H† = H, i.e. its transpose H^T is equal to its complex conjugate H*.
  • 3 The first Hermite polynomials are: H0(x) = 1, H1(x) = 2x, H2(x) = 4x² − 2, H3(x) = 8x³ − 12x.