Quantum Games: Mixed-Strategy Nash Equilibrium Represents Minimum Entropy

This paper introduces Hermite polynomials into the description of quantum games. Hermite polynomials are associated with the Gaussian probability density, and the Gaussian probability density represents minimum dispersion. I introduce the concept of minimum entropy as a paradigm of both Nash equilibrium (maximum utility, MU) and Hayek equilibrium (minimum entropy, ME). The ME concept is related to quantum games. Two questions arise from this exercise: i) What does Heisenberg's uncertainty principle represent in game theory and time series? ii) What do the postulates of quantum mechanics indicate in game theory and economics?


Introduction
Quantum games and the quantum computer are closely related. The science of quantum computing is one of the modern paradigms in computer science and game theory [5-8, 23, 41, 42, 53]. A quantum computer increases processing speed, allowing more effective database searches. Quantum games incorporate quantum theory into game-theoretic algorithms. This simple extrapolation allows the Prisoner's Dilemma to be solved and demonstrates that the cooperative equilibrium [44, 47, 51, 57] is viable and stable with non-zero probability.
In Eisert [7, 8] the analogy between quantum games and game theory is expressed as: "At the most abstract level, game theory is about numbers of entities that are efficiently acting to maximize or minimize. For a quantum physicist, it is then legitimate to ask: what happens if linear superpositions of these actions are allowed for, that is, if games are generalized into the quantum domain? For a particular case of the Prisoner's Dilemma, we show that this game ceases to pose a dilemma if quantum strategies are implemented." They demonstrate that classical strategies are particular cases of quantum strategies.
Eisert, Wilkens, and Lewenstein [8] not only give a physical model of quantum strategies but also express the idea of identifying moves with quantum operations and quantum properties. This approach appears fruitful in at least two ways. On one hand, several recently proposed applications of quantum information theory can already be conceived as competitive situations, in which several parties with opposing motives interact; these parties may apply quantum operations on bipartite quantum systems. On the other hand, generalizing decision theory to the domain of quantum probabilities is interesting in its own right, since the roots of game theory lie partly in probability theory [43, 44]. In this context it is of interest to investigate what solutions are attainable if superpositions of strategies are allowed [18, 41, 42, 50, 51, 57]. A game is also related to the transfer of information. It is possible to ask: what happens if the carriers of information are taken to be quantum systems, quantum information being a fundamental notion of information? The concept of Nash equilibrium in quantum games is essentially the same as in game theory; the most important difference is that the strategies appear as a function of the quantum properties of the physical system [41, 42].
This paper explains the relationship among quantum mechanics, Nash equilibria, Heisenberg's uncertainty principle, minimum entropy, and time series. Heisenberg's uncertainty principle is one of the cornerstones of quantum mechanics. One application of the uncertainty principle in time series is related to spectral analysis: "The more precisely the random variable VALUE is determined, the less precisely the frequency VALUE is known at this instant, and conversely." This principle indicates that the product of the standard deviation of a random variable $x_t$ and the standard deviation of its frequency $w$ is greater than or equal to $1/2$. This paper is organized as follows: Section 2: Quantum Games and Hermite's Polynomials; Section 3: Time Series and Heisenberg's Principle; Section 4: Applications of the Models; Section 5: Conclusion.

Quantum Games and Hermite's Polynomials
Let $\Gamma = (K, S, v)$ be an $n$-player game, with $K$ the set of players $k = 1, \ldots, n$. The finite set $S_k$ of cardinality $l_k \in \mathbb{N}$ is the set of pure strategies of each player $k \in K$, with $s_{k j_k} \in S_k$, $j_k = 1, \ldots, l_k$; $S = \prod_{k \in K} S_k$ represents the set of pure-strategy profiles, with $s \in S$ an element of that set and $l = l_1 l_2 \cdots l_n$ the cardinality of $S$ [12, 43, 55, 56].
The vector function $v : S \to \mathbb{R}^n$ associates with every profile $s \in S$ the vector of utilities $v(s) = (v_1(s), \ldots, v_n(s))^T$, where $v_k(s)$ designates the utility of player $k$ facing the profile $s$. To simplify the calculus, we write the function $v_k(s)$ explicitly as $v_k(s) = v_k(j_1, j_2, \ldots, j_n)$. The matrix $v_{n,l}$ represents all points of the Cartesian product $\prod_{k \in K} S_k$, and the vector $v_k(s)$ is the $k$-th column of $v$.
If mixed strategies are allowed, then we have $\Delta(S_k) = \{ p_k \in \mathbb{R}^{l_k} : p_{k j_k} \geq 0,\ \sum_{j_k=1}^{l_k} p_{k j_k} = 1 \}$, the unit simplex of the mixed strategies of player $k \in K$, with $p_k = (p_{k 1}, p_{k 2}, \ldots, p_{k l_k})^T$ the probability vector. The set of profiles in mixed strategies is the polyhedron $\Delta = \prod_{k \in K} \Delta(S_k)$, with elements $p = (p_1, \ldots, p_n)$. Using the Kronecker product $\otimes$ it is possible to write the profile compactly as $p_1 \otimes p_2 \otimes \cdots \otimes p_n$. The $n$-dimensional function $u : \Delta \to \mathbb{R}^n$ associates with every profile in mixed strategies the vector of expected utilities $u(p) = \left( u_1(p, v(s)), \ldots, u_n(p, v(s)) \right)^T$, where $u_k(p, v(s))$ is the expected utility of player $k$. Every $u_{k j_k} = u_{k j_k}(p_{(-k)}, v(s))$ represents the expected utility of strategy $j_k$ of player $k$. The triplet $(K, \Delta, u(p))$ designates the extension of the game $\Gamma$ to mixed strategies. We get a Nash equilibrium (the maximization of utility [3, 43, 55, 56, 57]) if and only if, for all $k$ and all $p_k \in \Delta(S_k)$, the inequality $u_k(p^*) \geq u_k(p_k, p^*_{(-k)})$ holds. Another way to calculate a Nash equilibrium [43, 47] is to equalize the expected utilities of each strategy, when possible.
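As a sketch of how the Kronecker product assembles expected utilities from mixed strategies, consider a hypothetical 2x2 zero-sum game (the payoff matrix `V1` below is an illustrative assumption, not taken from the paper):

```python
import numpy as np

# Hypothetical 2x2 zero-sum game (matching-pennies-style payoffs).
V1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])   # v_1(j1, j2): player 1's utilities
V2 = -V1                        # player 2's utilities

p1 = np.array([0.5, 0.5])       # mixed strategy of player 1
p2 = np.array([0.5, 0.5])       # mixed strategy of player 2

# Expected utility through the Kronecker product of the strategy vectors:
# u_k(p) = (p1 (x) p2) . vec(V_k)
p = np.kron(p1, p2)
u1 = p @ V1.ravel()
u2 = p @ V2.ravel()

# "Equalizing": at a mixed equilibrium the expected utility of each pure
# strategy of player 1 against p2 is the same.
u1_per_strategy = V1 @ p2
print(u1, u2, u1_per_strategy)
```

At the uniform profile both expected utilities vanish and player 1's two pure strategies yield identical expected utility, which is exactly the equalization condition mentioned above.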
According to Hayek, equilibrium refers to the order state, or minimum entropy. The order state is the opposite of entropy (a measure of disorder). Several intellectual influences and historical events inspired Hayek to develop the idea of spontaneous order. Here we present the technical tools needed to study the order state.

Minimum Entropy Method
Case 1 If the probability density of a variable $X$ is normal $N(\mu_k, \sigma_k)$, then its entropy is minimum when the standard deviation is minimum: $(H_k)_{\min} \Leftrightarrow (\sigma_k)_{\min}$, for all $k = 1, \ldots, n$.

Proof. Let the entropy function be $H = -\int p(x) \ln p(x)\, dx$, with $p(x)$ the normal density function. Writing this entropy in terms of the standard deviation and developing the integral, we have
$$H_k = \frac{1}{2} + \ln\sqrt{2\pi} + \ln \sigma_k .$$
For an $n$-player game the total entropy can be written as $H = \sum_{k=1}^{n} H_k$; after a few more calculations, it is possible to show that the entropy, a measure of disorder, increases monotonically with the standard deviation, a measure of uncertainty [1, 13, 27]. Clausius [48, 49], who discovered the idea of entropy, presented it both as an evolutionary measure and as a characterization of reversible and irreversible processes [45].
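A quick numerical sketch (not from the paper) comparing the closed-form Gaussian entropy $H_k = 1/2 + \ln\sqrt{2\pi} + \ln\sigma_k$ with direct quadrature, and showing that a smaller $\sigma$ gives a smaller entropy:

```python
import numpy as np

def gaussian_entropy_numeric(sigma, mu=0.0):
    # H = -integral of p(x) ln p(x) dx, by a fine Riemann sum.
    x = np.linspace(mu - 12*sigma, mu + 12*sigma, 200001)
    dx = x[1] - x[0]
    p = np.exp(-(x - mu)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))
    return -(p*np.log(p)).sum()*dx

def gaussian_entropy_closed(sigma):
    # H_k = 1/2 + ln sqrt(2 pi) + ln sigma
    return 0.5 + np.log(np.sqrt(2*np.pi)) + np.log(sigma)

for s in (0.5, 1.0, 2.0):
    print(s, gaussian_entropy_numeric(s), gaussian_entropy_closed(s))
```

The two columns agree, and the entropy is increasing in $\sigma$: minimum entropy coincides with minimum dispersion.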
Using the explicit (multinomial logit) form of $p_{k j_k}$, we can obtain the entropy, the expected utility, and the variance of each player. These expressions show that when the entropy diminishes, the rationality parameter $\lambda$ increases. The rationality increases from an initial value of zero, where the entropy is maximum, and the entropy drops to its minimum value as the rationality tends toward infinity [38, 39, 40]. The standard deviation is minimum at the Nash equilibria [21, 22, 23]: if the rationality increases, Nash equilibria can be reached as the rationality tends to infinity, $\lim_{\lambda \to \infty} \sigma_k(\lambda) = 0$. Using the logical chain just demonstrated, we conclude that the entropy diminishes when the standard deviation diminishes.
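The entropy-rationality trade-off can be illustrated with the multinomial logit (quantal-response) strategy; the utility vector `u` below is a made-up example:

```python
import numpy as np

def logit_strategy(u, lam):
    # Multinomial logit: p_j proportional to exp(lam * u_j); lam = rationality.
    z = np.exp(lam * (u - u.max()))   # shift by the max for numerical stability
    return z / z.sum()

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

u = np.array([1.0, 0.5, 0.2])         # hypothetical expected utilities
entropies = []
for lam in (0.0, 1.0, 5.0, 50.0):
    p = logit_strategy(u, lam)
    entropies.append(entropy(p))
    print(lam, p.round(4), entropies[-1])
```

At $\lambda = 0$ the strategy is uniform and the entropy is maximal ($\ln 3$); as $\lambda$ grows the distribution concentrates on the best reply and the entropy falls toward zero, matching the limit $\lim_{\lambda\to\infty}\sigma_k(\lambda)=0$ stated above.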

Remark 1
The entropies $H_k$ for the Gaussian probability density and for the multinomial logit are written as $H_k = \frac{1}{2} + \ln\sqrt{2\pi} + \ln \sigma_k$ and $H_k = -\sum_{j_k} p_{k j_k} \ln p_{k j_k}$, respectively. The special case of minimum entropy occurs when $\sigma_k^2 = 0$ and the utility values of each strategy $u_{k j_k}(p_{(-k)}, v(s)) = u_k$ are the same for all $j_k$ and for all $k$; see Binmore, Myerson [3, 48].
In the special case of minimum entropy, when $\sigma_k^2 \to 0$, the Gaussian density function can be approximated by Dirac's delta $\delta(x - u_k)$. Dirac's delta is not a function in the usual sense: it represents an infinitely short, infinitely strong unit-area impulse. It satisfies $\int \delta(x - u_k)\, dx = 1$ and can be obtained as the limit of the Gaussian density as $\sigma_k \to 0$.
Case 4 If we can measure a standard deviation $\sigma_k^2 \neq 0$, then Nash's equilibrium represents minimum standard deviation $(\sigma_k)_{\min}$, for all $k \in K$.
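A numeric sketch of the Dirac-delta limit: as $\sigma_k \to 0$ the Gaussian concentrates its unit area at $u_k$, so integrating it against a smooth test function returns the value of the function at $u_k$ (the test function and the point $u_k = 0.7$ are illustrative choices):

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-(x - mu)**2 / (2*sigma**2)) / (sigma*np.sqrt(2*np.pi))

f = np.cos          # smooth test function
mu = 0.7            # the point u_k where the impulse concentrates
x = np.linspace(-10, 10, 400001)
dx = x[1] - x[0]

vals = []
for sigma in (1.0, 0.1, 0.01):
    # integral of f(x) * N(mu, sigma)(x) dx  ->  f(mu) as sigma -> 0
    vals.append((f(x)*gaussian(x, mu, sigma)).sum()*dx)
    print(sigma, vals[-1])
```

The integrals approach $\cos(0.7)$; the unit-area property $\int \delta\, dx = 1$ is what keeps the limit finite.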
Theorem 2 (Minimum Dispersion). The Gaussian density function $f(x) = |\phi(x)|^2$ is the solution of the differential equation $\left( \alpha x + \frac{\partial}{\partial x} \right) \phi(x) = 0$ related to the minimum dispersion of a linear combination of the variable $x$ and a Hermitian operator.
Proof. This proof is developed at length in Pena and Cohen-Tannoudji [4, 48]. Let $\hat A, \hat B$ be Hermitian operators (footnote 2) which do not commute, so that we can write $[\hat A, \hat B] = i\hat C$. It is easily verified that the deviation operators $\Delta\hat A = \hat A - \langle \hat A \rangle$ and $\Delta\hat B = \hat B - \langle \hat B \rangle$ satisfy the same commutation rule. Let $J(\alpha)$ be the positive function defined as the expected value
$$J(\alpha) = \left\langle (\alpha\, \Delta\hat A - i\, \Delta\hat B)^{\dagger} (\alpha\, \Delta\hat A - i\, \Delta\hat B) \right\rangle = \alpha^2 \langle (\Delta\hat A)^2 \rangle + \alpha \langle \hat C \rangle + \langle (\Delta\hat B)^2 \rangle \geq 0 .$$
The expression $J(\alpha)$ has its minimum value when $J'(\alpha) = 0$, which gives $\alpha = -\langle \hat C \rangle / (2 \langle (\Delta\hat A)^2 \rangle)$ and $J_{\min} = \langle (\Delta\hat B)^2 \rangle - \langle \hat C \rangle^2 / (4 \langle (\Delta\hat A)^2 \rangle) \geq 0$; that is, $\langle (\Delta\hat A)^2 \rangle \langle (\Delta\hat B)^2 \rangle \geq \langle \hat C \rangle^2 / 4$. We are interested in the explicit form of the function $\phi(x)$ when the inequality becomes an equality, with $\hat A = x$ and $\hat B = -i\, \partial/\partial x$, for which we need to solve the differential equation above.

Remark 2
The Gaussian density function $f(x)$ is optimal in the sense of minimum dispersion.
According to the last theorem, $u_{k j_k} = x$ is a Gaussian random variable with probability density $f_k(x)$, for all $k$, and its moments $E[u_{k j_k}]$ and $\mathrm{Var}[u_{k j_k}]$ can be calculated from that density.

Remark 3
The central limit theorem, which establishes that the mean $\mu_k$ of the random variables $u_{k j_k}$ is approximately normally distributed, is respected in the Minimum Entropy Method.

Quantum Games Elements
A close relationship exists between quantum mechanics and game theory: both share a random nature. Due to this nature it is necessary to speak of expected values. Quantum mechanics defines observed or expected values as in Cohen-Tannoudji [4].
According to quantum mechanics, an uncertainty exists in the simultaneous measurement of pairs of variables such as position and momentum, or energy and time [4]. A quantum element in a mixed state is built by the probabilistic superposition of pure states. For its part, in game theory a player who uses a mixed strategy respects the vNM (von Neumann and Morgenstern) utility function, a probabilistic superposition of pure strategies, according to Binmore [3, 43]. The state function of quantum mechanics does not have an equivalent in game theory, so we use the state function by analogy with quantum mechanics. The comparison between game theory and quantum mechanics is shown explicitly in Table 1. If the definition of rationality in game theory represents an optimization process, and quantum processes are essentially optimal, then the nature of both processes is similar: the players interact efficiently (Minimum Entropy). Let $k = 1, \ldots, n$ be players, each with $l_k$ strategies $j_k = 1, \ldots, l_k$ and sub-strategies $m_{k j_k}$. According to the Minimum Dispersion theorem, the utility $u_{k j_k}$ converges to the Nash equilibrium and follows a normal probability density with expected utility $E[u_{k j_k}]$. With the introduction of sub-strategies, the probability of each strategy is described by $|b_{k j_k}|^2$, and no approximation of the state function is needed.

Theorem 3
The normal probability density $\rho(u) = |\psi(x, \lambda)|^2$ can be obtained using the orthogonal Hermite polynomials $H_k(x)$. The parameter $|b_{k j_k}|^2$ indicates the probability of playing strategy $j_k$. The state function $\varphi_k(x, \lambda)$ measures the behavior of strategy $j_k$ ($j_k$ being one level, in the language of quantum mechanics [4, 9, 31]). The dynamic behavior of strategy $j_k$ can be written through the superposition $\psi(x, \lambda) = \sum_{j_k} b_{k j_k}\, \varphi_{k j_k}(x, \lambda)$.
Proof. The properties of the Hermite polynomials will be needed to express $\rho(u)$ as a function of $H_k(x)$.
Using the generating function $\psi(x, \lambda_k)$ together with equations (QG1, PH1, PH2), and setting $w_k = 0$, we can expand $\psi(x, \lambda)$. The parameter $|b_{k j_k}|^2$ represents the probability of sub-strategy $m_{k j_k}$ of each player $k$, and the vNM utility can be written as the corresponding expectation. Using the ladder operators $a, a^{+}$ we can write the usual relations: the operator $a$ decreases the level of player $k$, lowering the sub-strategy description $m_{k j_k} \to m_{k j_k} - 1$ (equation Q6.1), and the operator $a^{+}$ increases the level of player $k$, raising the sub-strategy description $m_{k j_k} \to m_{k j_k} + 1$ (equation Q7.1) [4, 46]. The Hermite polynomials form an orthogonal basis and possess desirable properties that permit us to express a normal probability density as a linear combination of them.

Hermite's Polynomials Properties
The following properties are taken from Cohen-Tannoudji, Pena, and Landau [4, 31, 46]. Hermite polynomials obey the differential equation [14]
$$H_n''(x) - 2x H_n'(x) + 2n H_n(x) = 0,$$
whose principal solution is the Hermite polynomial $H_n(x)$ (PH1). Taking the explicit value of the $m_{k j_k}$-th derivative of (PH1), we obtain the decreasing operator $H_n'(x) = 2n H_{n-1}(x)$ and the increasing operator $H_{n+1}(x) = 2x H_n(x) - H_n'(x)$. Recurrence formula:
$$H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x).$$
Orthogonality and normalization condition:
$$\int_{-\infty}^{\infty} e^{-x^2} H_m(x) H_n(x)\, dx = 2^n n!\, \sqrt{\pi}\, \delta_{mn}.$$
The generating function, obtained by a Taylor development, is
$$e^{2xt - t^2} = \sum_{n=0}^{\infty} H_n(x) \frac{t^n}{n!}.$$
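The recurrence, orthogonality, and normalization properties above can be verified numerically with NumPy's physicists' Hermite module (a check, not part of the paper's derivation):

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermgauss

# Build H_0..H_4 from the recurrence H_{n+1} = 2x H_n - 2n H_{n-1}
x = np.linspace(-2.0, 2.0, 7)
H = [np.ones_like(x), 2*x]
for n in range(1, 4):
    H.append(2*x*H[n] - 2*n*H[n-1])

# Compare with numpy's evaluation of the physicists' Hermite polynomials
for n in range(5):
    coeffs = [0]*n + [1]
    assert np.allclose(H[n], hermval(x, coeffs))

# Orthogonality/normalization via Gauss-Hermite quadrature:
# integral of e^{-x^2} H_m H_n dx = 2^n n! sqrt(pi) delta_{mn}
nodes, weights = hermgauss(50)
def inner(m, n):
    return (weights * hermval(nodes, [0]*m + [1]) * hermval(nodes, [0]*n + [1])).sum()

off_diag = inner(2, 3)      # should vanish
norm3 = inner(3, 3)         # should equal 2^3 * 3! * sqrt(pi)
print(off_diag, norm3, 48*np.sqrt(np.pi))
```

Gauss-Hermite quadrature with 50 nodes is exact for these polynomial integrands, so the quadrature reproduces the closed-form normalization up to rounding.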
Quantum games generalize the representation of probability in the vNM (von Neumann and Morgenstern) utility function. To understand quantum games it is necessary to use the properties of the quantum harmonic-oscillator operators and of Hilbert space. The harmonic oscillator is presented as the paradigm of quantum games.

Quantum Games Properties
According to quantum mechanics [4, 9, 31, 46], a quantum system follows the next postulates or axioms:
Axiom 1 At a fixed time $t_0$, the state of a physical system is defined by specifying a ket $|\psi(t_0)\rangle$ belonging to the state space (Hilbert space) $V$.
Axiom 2 Every measurable physical quantity $\Lambda$ is described by an operator $A$ acting in $V$; this operator is an observable.

Axiom 3 The only possible result of the measurement of a physical quantity Λ is one of the eigenvalues of the corresponding observable A.
Axiom 4 When a physical quantity $\Lambda$ is measured on a system in the normalized state $|\psi(t)\rangle = \sum_j c_j |\varphi_j\rangle$, the probability $P(\lambda_j)$ of obtaining the non-degenerate eigenvalue $\lambda_j$ of the corresponding observable $A$ is $P(\lambda_j) = |\langle \varphi_j | \psi \rangle|^2 = |c_j|^2$, where $|\varphi_j\rangle$ is the normalized eigenvector of $A$ associated with the eigenvalue $\lambda_j$.
Let there be $n$ players $k = 1, \ldots, n$, with $l_k$ strategies and sub-strategies $m_{k j_k} \in M_{k j_k}$ for each player $k$.
Superposition principle: $|\psi\rangle = \sum_j c_j |\varphi_j\rangle$. Time evolution: $i\hbar \frac{d}{dt} |\psi(t)\rangle = H |\psi(t)\rangle$. Inner product: $\langle \varphi | \psi \rangle$. Mean value: $\langle A \rangle = \langle \psi | A | \psi \rangle$.
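Axiom 4 and the mean-value rule can be checked on a minimal two-level example (the state and observable below are illustrative choices):

```python
import numpy as np

# |psi> = c1|phi1> + c2|phi2> in an orthonormal eigenbasis of A.
c = np.array([3.0, 4.0j]) / 5.0        # normalized: |c1|^2 + |c2|^2 = 1
A = np.diag([1.0, -1.0])               # observable with eigenvalues +1, -1

probs = np.abs(c)**2                   # Axiom 4: P(lambda_j) = |<phi_j|psi>|^2
mean = np.vdot(c, A @ c).real          # mean value <A> = <psi|A|psi>
print(probs, mean)
```

The probabilities sum to one, and the mean value equals the probability-weighted sum of the eigenvalues, $0.36 - 0.64 = -0.28$.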

Time Series and Uncertainty Principle of Heisenberg
We recall some basic definitions from probability and stochastic calculus.
A probability space is a triple $(\Omega, \mathcal{F}, P)$ consisting of [10, 13, 30]:
• A set $\Omega$ that represents the set of all possible outcomes of a certain random experiment.
• A family $\mathcal{F}$ of subsets of $\Omega$ with the structure of a $\sigma$-algebra. The elements of the $\sigma$-algebra $\mathcal{F}$ are called events, and $P$ is called a probability measure.
A random variable is a function $X : \Omega \to \mathbb{R}$. If $X$ is a random variable on $(\Omega, \mathcal{F}, P)$, the probability measure induced by $X$ is the probability measure $P_X$ (called the law or distribution of $X$) on $\mathcal{B}(\mathbb{R})$ given by $P_X(B) = P(X \in B)$. The numbers $P_X(B)$, $B \in \mathcal{B}(\mathbb{R})$, completely characterize the random variable $X$ in the sense that they provide the probabilities of all events involving $X$.

Definition 2 The distribution function of a random variable $X$ is the function $F(x) = P(X \leq x)$. $F$ is the distribution function corresponding to the Lebesgue-Stieltjes measure $P_X$; among all distribution functions corresponding to this measure, we choose the one with $F(\infty) = 1$, $F(-\infty) = 0$. In fact we can always supply the probability space in a canonical way: take $\Omega = \mathbb{R}$, $\mathcal{F} = \mathcal{B}(\mathbb{R})$, with $P$ the Lebesgue-Stieltjes measure corresponding to $F$.
Definition 3 A stochastic process $X = \{X(t), t \in T\}$ is a collection of random variables on a common probability space $(\Omega, \mathcal{F}, P)$ indexed by a parameter $t \in T \subset \mathbb{R}$, which we usually interpret as time [10, 27-30]. It can thus be formulated as a function $X : T \times \Omega \to \mathbb{R}$. The variable $X(t, \cdot)$ is $\mathcal{F}$-measurable in $\omega \in \Omega$ for each $t \in T$; henceforth we shall often follow established convention and write $X_t$ for $X(t)$. When $T$ is a countable set, the stochastic process is really a sequence of random variables $X_{t_1}, X_{t_2}, \ldots, X_{t_n}, \ldots$
Definition 4 If a random variable $X = X(t)$ has the density $\rho(x)$ and $f : X \to Y$ is one-to-one, then $y = f(x)$ has the density $g(y)$ given by $g(y) = \rho(f^{-1}(y))\, |(f^{-1})'(y)|$; see Fudenberg and Tirole [12] (chapter 6). The densities $\rho(x)$ and $g(y)$ show that if $X$ is a random variable and $f^{-1}(y)$ exists, then $Y$ is a random variable, and conversely. When a density $f$ has an integrable characteristic function $h$, $f$ and $h$ are "Fourier transform pairs".
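Definition 4 can be sanity-checked by simulation; here $Y = e^X$ with $X \sim N(0,1)$, so $f^{-1}(y) = \ln y$ and $g(y) = \rho(\ln y)/y$ (the choice of $f$ is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
ys = np.exp(rng.standard_normal(200_000))   # samples of Y = exp(X), X ~ N(0,1)

rho = lambda x: np.exp(-x**2/2) / np.sqrt(2*np.pi)   # density of X
g = lambda y: rho(np.log(y)) / y                      # Definition 4 applied to f = exp

# Histogram estimate of the density of Y on [0.5, 2.0]
counts, edges = np.histogram(ys, bins=30, range=(0.5, 2.0))
width = edges[1] - edges[0]
dens = counts / (ys.size * width)
mids = (edges[:-1] + edges[1:]) / 2

max_err = np.max(np.abs(dens - g(mids)))
print(max_err)
```

The histogram agrees with the transformed (lognormal) density up to sampling noise.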

Theorem 4
The Possibility Theorem: $\sigma_x \sigma_w \geq \frac{1}{2}$. "The more precisely the random variable VALUE $x$ is determined, the less precisely the frequency VALUE $w$ is known at this instant, and vice versa." Let $X = \{X(t), t \in T\}$ be a stochastic process with density function $f(x)$, and let the random variable $w$ have density $h(w)$. If the densities can be written as $f(x) = \psi(x)\psi(x)^*$ and $h(w) = \phi(w)\phi(w)^*$, with $\psi(x = \pm\infty) = 0$ and $\phi(w = \pm\infty) = 0$, then $f(x = \pm\infty) = 0$ and $h(w = \pm\infty) = 0$.
Proof. The next proof is based on Frieden [11].
Let the amplitudes $\psi(x)$ and $\phi(w)$ be Fourier transform pairs, $\phi(w) = \frac{1}{\sqrt{2\pi}} \int \psi(x)\, e^{-iwx}\, dx$ (here $w$ is the frequency, distinct from $\omega \in \Omega$). By the Parseval-Plancherel theorem and the probability properties, $\int |\psi(x)|^2 dx = \int |\phi(w)|^2 dw = 1$. Let $\sigma_w^2$ be the variance of the frequency $w$ and $\sigma_x^2$ the variance of the random variable $x$. Using the Parseval-Plancherel theorem, the derivative property of the Fourier transform, and the Cauchy-Schwarz inequality applied to $G = x\psi$ and $H = \psi'$, the product $\sigma_x^2 \sigma_w^2$ exceeds the quantity $|A|^2$, where $A = \int x\, \psi^*\, \psi'\, dx$. Integrating by parts, $A + A^* = -1$; using the complex-number property $2\,\mathrm{Re}(A) = -1$, i.e. $2|A|\cos\theta = -1$ with $\theta$ the phase of $A$, we can write $4|A|^2 = 1/\cos^2\theta$. Since $-1 \leq \cos\theta \leq 1$, then $|A|^2 \geq \frac{1}{4}$ and therefore $\sigma_x \sigma_w \geq \frac{1}{2}$.
Remark 4 Let $x_t$ be a time series with a spectrum of frequencies $w_j$, where each frequency is a random variable. This spectrum of frequencies can be obtained with a minimum error (standard deviation of frequency). This minimum error $\sigma_w$ for a certain frequency $w$ is given by $\sigma_{w\,\min} = \frac{1}{2\sigma_x}$. The expected value $E[w_j]$ of each frequency can be determined experimentally.
An occurrence of a particular frequency $w_j$ is defined in a confidence interval $w_j \in [E[w_j] - \sigma_{w\,\min},\ E[w_j] + \sigma_{w\,\min}]$. This last remark agrees with and supplements the statement of Hamilton [17]: "We should interpret $\frac{1}{2}(a_j^2 + b_j^2)$ not as the portion of the variance of $X$ that is due to cycles with frequency exactly equal to $w_j$, but rather as the portion of the variance of $X$ that is due to cycles with frequency near $w_j$," where $x_t = \mu + \sum_j \{a_j \cos(w_j(t-1)) + b_j \sin(w_j(t-1))\} + u_t$ (Q14). The only condition the Possibility Theorem imposes is exhaustive and exclusive, $\sigma_x \sigma_w \geq \frac{1}{2}$, because it includes the Minimum Entropy (equilibrium-state) case $\sigma_x \sigma_w = \frac{1}{2}$. It is necessary to analyze the different cases of the time series $x_t$ using the Possibility Theorem: a time series $x_t$ evolves in dynamic equilibrium if and only if $\sigma_x \sigma_w = \frac{1}{2}$, and out of dynamic equilibrium if and only if $\sigma_x \sigma_w > \frac{1}{2}$.
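A numerical check of the Possibility Theorem: for a Gaussian amplitude $\psi(x)$ the product $\sigma_x\sigma_w$ sits exactly on the equilibrium bound $1/2$ (the width $0.7$ is arbitrary; the FFT approximates the continuous Fourier transform):

```python
import numpy as np

sigma = 0.7
x = np.linspace(-40, 40, 2**14, endpoint=False)
dx = x[1] - x[0]
psi = (2*np.pi*sigma**2)**-0.25 * np.exp(-x**2/(4*sigma**2))  # |psi|^2 = N(0, sigma^2)

# Continuous Fourier transform approximated by the FFT (angular frequency w);
# the overall phase does not matter since only |phi|^2 is used.
phi = np.fft.fft(psi) * dx / np.sqrt(2*np.pi)
w = 2*np.pi*np.fft.fftfreq(x.size, d=dx)

f = np.abs(psi)**2     # f(x) = psi psi*
h = np.abs(phi)**2     # h(w) = phi phi*

def spread(t, dens):
    dens = dens / dens.sum()
    m = (t*dens).sum()
    return np.sqrt(((t - m)**2 * dens).sum())

sx, sw = spread(x, f), spread(w, h)
print(sx, sw, sx*sw)
```

One finds $\sigma_x \approx 0.7$, $\sigma_w \approx 1/(2 \cdot 0.7)$, and $\sigma_x\sigma_w \approx 1/2$; a non-Gaussian $\psi$ would push the product above $1/2$.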
Table 2. Evolution of a time series out of equilibrium, $\sigma_x \sigma_w > \frac{1}{2}$; the table shows the different changes in $\sigma_x$ and $\sigma_w$. Table 3. Evolution of a time series in equilibrium, $\sigma_x \sigma_w = \frac{1}{2}$; the table shows the different changes in $\sigma_x$ and $\sigma_w$.

Remark 5 Hirschman's form of the uncertainty principle. The fundamental analytical effect of Heisenberg's principle is that the probability densities for $x$ and $w$ cannot both be arbitrarily narrow [4]: the entropy sum $H(X) + H(W)$ is bounded from below. When $\psi(x)$ and $\phi(w)$ are Gaussian, $H(W) = B + \log(\sigma_W)$ and $H(X) = C + \log(\sigma_X)$, and Hirschman's inequality becomes Heisenberg's principle; the inequalities are transformed into equalities, and the minimum uncertainty is minimum entropy. In quantum mechanics the minimum uncertainty product also obeys a minimum entropy sum.

Hermite's Polynomials Application
Let $\Gamma = (K, S, v)$ be a 3-player game, with $K$ the set of players $k = 1, 2, 3$. The finite set $S_k$ of cardinality $l_k \in \mathbb{N}$ is the set of pure strategies of each player $k \in K$, with $s_{k j_k} \in S_k$, $j_k = 1, 2, 3$, and $S = S_1 \times S_2 \times S_3$ represents the set of pure-strategy profiles, with $s \in S$ an element of that set and $l = 3 \cdot 3 \cdot 3 = 27$ the cardinality of $S$. The vector function $v : S \to \mathbb{R}^3$ associates with every profile $s \in S$ the vector of utilities $v(s) = (v_1(s), v_2(s), v_3(s))$, where $v_k(s)$ designates the utility of player $k$ facing the profile $s$. To ease the calculus we write the function $v_k(s)$ explicitly as $v_k(s) = v_k(j_1, j_2, j_3)$. The matrix $v_{3,27}$ represents all points of the Cartesian product $\prod_{k \in K} S_k$; see Table 4. The vector $v_k(s)$ is the $k$-th column of $v$. The graphic representation of the 3-player game is shown in Figure 1.
In these games we obtain Nash equilibria in pure strategies (maximum utility MU, Table 4) and mixed strategies (Minimum Entropy Theorem MET, Tables 5, 6). After finding the equilibria we compare them with the results obtained from applying the theory of quantum games developed previously. (Table 4 lists, for each stone-paper-scissors strategy profile, the pure-strategy utilities and the associated probabilities; only fragments of its rows survive in the extracted text.)

Time Series Application
The relationship between time series and game theory appears when we apply the entropy minimization theorem (EMT). This theorem is a way to analyze the Nash-Hayek equilibrium in mixed strategies.
Introducing the elements of rationality and equilibrium into the domain of time series can be a big help, because it allows us to study the human behavior reflected and registered in historical data. The main contribution of this complementary focus on time series has a relationship with Econophysics and rationality.
Human behavior evolves and is the result of learning. The objective of learning is stability and optimal equilibrium. From the above we can affirm that if learning is optimal, then the convergence to equilibrium is faster than when learning is sub-optimal. Introducing elements of Nash equilibrium into time series will allow us to evaluate learning and the convergence to equilibrium through the study of historical data (time series).
The branch of physics called quantum mechanics was pioneered using Heisenberg's uncertainty principle. This paper is simply the application of this principle to game theory and time series.
Econophysics is a newborn branch of scientific development that attempts to establish analogies between economics and physics; see Mantegna and Stanley [33]. The establishment of analogies is a creative way of applying the idea of cooperative equilibrium, and this cooperative equilibrium will produce synergies between the two sciences. From my point of view, the power of physics is its capacity for the formal treatment of equilibrium in stochastic dynamic systems; on the other hand, the power of economics is the formal study of rationality and of cooperative and non-cooperative equilibrium.
Econophysics is the beginning of a unification stage of the systemic approach of scientific thought; it remains to create synergies with the rest of the sciences.
Let $\{x_t\}_{t=-\infty}^{\infty}$ be a covariance-stationary process with mean $E[x_t] = \mu$ and $j$-th autocovariance $\gamma_j = E[(x_t - \mu)(x_{t-j} - \mu)]$. If these autocovariances are absolutely summable, the population spectrum of $x$ is given by [21]
$$S_x(w) = \frac{1}{2\pi} \sum_{j=-\infty}^{\infty} \gamma_j e^{-iwj} = \frac{1}{2\pi} \left[ \gamma_0 + 2 \sum_{j=1}^{\infty} \gamma_j \cos(wj) \right].$$
If the $\gamma_j$'s represent autocovariances of a covariance-stationary process, then $S_x(w)$ is nonnegative for all $w$; this holds in general for an ARMA($p, q$) process. Using the Possibility Theorem, the population spectrum lies in the band $S_x(w) \in [S_x(w - \sigma_w), S_x(w + \sigma_w)]$; see Hamilton [17].
The coefficients $\hat a_j$, $\hat b_j$ can be estimated by OLS regression.
The sample variance of $x_t$ can be expressed as a sum of contributions over the frequencies $w_j$. The portion of the sample variance of $x_t$ that can be attributed to cycles of frequency $w_j$ is $\frac{1}{2}(\hat a_j^2 + \hat b_j^2)$, with $\hat S_x(w_j)$ the sample periodogram at frequency $w_j$.
Continuing with the methodology proposed by Hamilton [17], we develop two examples that will allow us to verify the applicability of the Possibility Theorem.
In the third step, we find $\hat S_x(w_j)$ (Table 8, Figure 7), only for the frequencies $w_1, w_3, w_5, w_{12}$. In the fourth step, we compute the values $\sigma_x$, $\sigma_w$, and $\sigma_{w\,\min} = \frac{1}{2\sigma_x}$, respectively for $y, z, u$ (Table 9).
In the fifth step, we verify that the Possibility Theorem is respected, $\sigma_x \sigma_w \geq \frac{1}{2}$, respectively for $y, z, u$ (Table 9).
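A compact version of these steps on a synthetic series (the sample size, true frequency, and noise level are made up): compute the periodogram at the Fourier frequencies and check that the spectral peak identifies the cycle.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 512
t = np.arange(T)
w0 = 2*np.pi*20/T                         # true frequency: 20 cycles in the sample
xt = np.cos(w0*t) + 0.3*rng.standard_normal(T)
xt = xt - xt.mean()

# Sample periodogram at the Fourier frequencies w_j = 2*pi*j/T
I = np.abs(np.fft.rfft(xt))**2 / (2*np.pi*T)
wj = 2*np.pi*np.arange(I.size)/T

j_peak = I.argmax()
print(wj[j_peak], w0)
```

The peak lands at $w_0$; by the Possibility Theorem the frequency is only localized within $\pm\sigma_{w\,\min} = 1/(2\sigma_x)$ around that value.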

3
In this paper we have demonstrated that the Nash-Hayek equilibrium opens new doors for the use of entropy in game theory. Remembering that the primary way to prove Nash equilibria is through utility maximization, we can affirm that human behavior arbitrates between two elements: stochastic utility (benefits) $U(p(x))$ and entropy (risk or order) $H(p(x))$. Accepting that the stochastic-utility/entropy relationship $U(p(x))/H(p(x))$ is equivalent to the well-known benefits/cost ratio, we present a new way to calculate equilibria: $\max_x \left( U(p(x)) / H(p(x)) \right)$, where $p(x)$ represents a probability function and $x = (x_1, x_2, \ldots, x_n)$ represents endogenous or exogenous variables.

4
In any time series $x_t$ in which cycles are present, it is impossible to know the exact value of the frequency $w_j$ of each cycle $j$. We can only know the value of the frequency within a certain confidence interval $w_j \in [w_j - \sigma_{w\,\min},\ w_j + \sigma_{w\,\min}]$, where $\sigma_{w\,\min} = \frac{1}{2\sigma_x}$: "The more precisely the random variable VALUE $x$ is determined, the less precisely the frequency VALUE $w$ is known at this instant, and conversely."

5
Using game theory concepts we can obtain the equilibrium condition in time series. The equilibrium condition in time series is represented by the Possibility Theorem (Tables 7, 8, 9 and Figures 3, 4, 5, 6, 7).

6
This paper, using the Kronecker product $\otimes$, presents an easy new formalization of the game $(K, \Delta, u(p))$, which extends the game $\Gamma$ to mixed strategies.

2
The Hermitian operator $\hat A$ has the property $\tilde{\hat A} = \hat A^*$: the transpose of the operator equals its complex conjugate (equivalently, $\hat A$ equals its adjoint).

Definition 5
Inversion formula: If $h$ is the characteristic function of the bounded distribution function $F$, and $F(a, b] = F(b) - F(a)$, then
$$F(a, b] = \lim_{c \to \infty} \frac{1}{2\pi} \int_{-c}^{+c} \frac{e^{-iwa} - e^{-iwb}}{iw}\, h(w)\, dw$$
for all points $a, b$ ($a < b$) at which $F$ is continuous. If in addition $h$ is Lebesgue integrable on $(-\infty, \infty)$, then the function $f$ given by $f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-iwx} h(w)\, dw$ is a density for $F$, with $F' = f$ everywhere.
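The inversion formula can be checked numerically for the standard normal, whose characteristic function is $h(w) = e^{-w^2/2}$ (the interval endpoints and the truncation point $c$ are arbitrary choices):

```python
import numpy as np
from math import erf, sqrt

a, b, c = -1.0, 2.0, 60.0
N = 400_000
dw = 2*c/N
w = -c + (np.arange(N) + 0.5)*dw       # midpoint grid avoids w = 0

h = np.exp(-w**2/2)                    # characteristic function of N(0, 1)
integrand = (np.exp(-1j*w*a) - np.exp(-1j*w*b)) / (1j*w) * h
Fab = (integrand.sum()*dw / (2*np.pi)).real

Phi = lambda s: 0.5*(1 + erf(s/sqrt(2)))
print(Fab, Phi(b) - Phi(a))
```

Both numbers agree: $F(a, b] = \Phi(2) - \Phi(-1) \approx 0.8186$.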

2
If the resulting system of equations does not have a solution $(p_{(-k)})^*$, then we propose the Minimum Entropy Method. This method is expressed as $\min_p \left( \sum_k H_k(p) \right)$, where $\sigma_k^2(p^*)$ is the variance and $H_k(p^*)$ the entropy of each player $k$.

Table 1 .
Quantum mechanics and game theory properties compared: variability (number of interactions $n!$), quantitative (mathematical model), observable value $E[u]$.

Table 7 .
$w_{12} \in [E[w_{12}] - \sigma_{w\,\min},\ E[w_{12}] + \sigma_{w\,\min}]$. The spectral analysis cannot give the theoretical value $E[w_{12}] = 2.037$; the experimental value is $E[w_{12}] = 2.8$. The results can be seen in Figures 8, 9 and Table 10. Time series values of $x_t, y_t, z_t, u_t$.

Table 8 .
Frequencies, variances and sample periodogram of x t

Table 9 .
Verification of Possibility Theorem for series x t , y t , z t , u t

Table 10 .
Application of Possibility Theorem for time series v t