Speed-gradient Entropy Principle for Nonstationary Processes

The speed-gradient variational principle (SG-principle) for nonstationary, far-from-equilibrium systems is formulated and illustrated by examples. An SG-model of the transient (relaxation) dynamics of systems with a finite number of particles, based on the maximum entropy principle, is derived. It has the form dX(t)/dt = A ln X(t), where X(t) is the vector of the cell populations and A is a symmetric matrix with two zero eigenvalues corresponding to the mass and energy conservation laws.


Introduction
The equations of motion for physical systems are often derived from variational principles: the principle of least action, the maximum entropy principle, etc. [1][2][3][4]. Variational principles are based on the specification of a functional (usually an integral functional) and the determination of real motions as points in an appropriate functional space providing extrema of the specified functional. A principle is called integral if the functional to be extremized has an integral form.
In addition to integral principles, differential (local) ones have been proposed: the Gauss principle of least constraint, the principle of minimum energy dissipation, and others. It was pointed out by M. Planck [5] that local principles have an advantage over integral ones because they do not make the current states and motions of the system depend on its later states and motions. In thermodynamics, two such principles became most popular during the last century: I. Prigogine's principle of minimum entropy production and L. Onsager's symmetry principle for kinetic coefficients. In the 1950s the so-called Maximum Entropy Generation Principle (MEGP) was proposed by H. Ziegler [6,7], who also showed that in the near-equilibrium case Prigogine's and Onsager's principles can be derived from the MEGP under some conditions and that they are equivalent to each other. In 1957 E.T. Jaynes formulated the Maximum Entropy Principle (MEP): the entropy of any physical system tends to increase until it achieves its maximum value under the constraints imposed by other physical laws [8]. In fact, such a prediction can be found, implicitly, in the works of W. Gibbs.
In [9][10][11] a new local evolution principle, the so-called speed-gradient (SG) principle, which originated from the SG design method of nonlinear control theory [12,13], was proposed and illustrated by a number of examples from mechanics. In [14] the SG-principle was extended to the case of systems with constraints.
This paper is aimed at the application of the SG-principle to entropy-driven systems. First, the formulation of the SG-principle is recalled and some illustrative examples are presented. Then the SG-principle is applied to the derivation of the transient (relaxation) dynamics for a system driven by the maximum entropy principle.

Speed-gradient variational principle
Consider a class of physical systems described by systems of differential equations

ẋ = f(x, u, t),    (1)

where x = (x_1, . . ., x_n)^T is the n-dimensional column vector of the system state (^T is the transposition sign), u = (u_1, . . ., u_m)^T is an m-dimensional column vector of free (input) variables, ẋ = dx/dt, t ≥ 0. The problem of modelling the system dynamics can be posed as the search for a law of change of u(t) meeting some criterion of "natural", or "reasonable", behavior of the system. Let such a behavior be specified as a tendency to achieve a goal, namely decreasing the value of a goal functional Q(x, t), where Q(x, t) is given a priori. The choice of Q(x, t) should reflect the physical essence of the problem and is critical for the result. An ultimate goal may also be introduced as the achievement of the minimum value of Q:

Q(x(t), t) → min.    (2)

The first step of the speed-gradient procedure is to calculate the speed

Q̇ = dQ/dt = ω(x, u, t),   where   ω(x, u, t) = ∂Q/∂t + (∂Q/∂x) f(x, u, t).

The second step is to evaluate the gradient of the speed with respect to the input vector u (the speed-gradient vector) ∇_u Q̇. Finally, the law of dynamics is formed as the feedback law in the finite form

u = −γ ∇_u Q̇(x, u, t)    (3)

or in the differential form

du/dt = −γ ∇_u Q̇(x, u, t),    (4)

where γ > 0 is a positive scalar or a positive definite symmetric gain matrix (positivity of a matrix is understood as positive definiteness of the associated quadratic form). The underlying idea of the choices (3) or (4) is that motion along the antigradient of the speed Q̇ provides a decrease of Q̇. It may eventually lead to negativity of Q̇ which, in turn, yields a decrease of Q. Under some natural assumptions, achievement of the ultimate goal (2) can be established as a mathematical statement [9,13], which is, however, beyond the scope of this paper. Now the speed-gradient principle can be formulated as follows.
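The three-step procedure can be sketched numerically. The following minimal illustration (all parameter values are assumed for the example) takes the simplest system ẋ = u with goal function Q(x) = |x|²/2, for which the finite SG-law (3) reduces to u = −γx:

```python
import numpy as np

# Sketch of the speed-gradient procedure (finite form) for dx/dt = u
# with goal function Q(x) = |x|^2 / 2.
# Step 1: speed            Qdot = (dQ/dx)^T u = x^T u
# Step 2: speed gradient   grad_u Qdot = x
# Step 3: feedback law     u = -gamma * x   (finite form (3))

def sg_finite_law(x, gamma=1.0):
    """Speed-gradient law u = -gamma * grad_u(dQ/dt) for Q = |x|^2 / 2."""
    return -gamma * x

def simulate(x0, gamma=1.0, dt=0.01, steps=1000):
    x = np.array(x0, dtype=float)
    history = [0.5 * x @ x]
    for _ in range(steps):
        x = x + dt * sg_finite_law(x, gamma)   # Euler step of dx/dt = u
        history.append(0.5 * x @ x)
    return x, history

x_final, Q_vals = simulate([1.0, -2.0], gamma=1.0)
# Q(x(t)) decreases monotonically toward its minimum Q = 0
assert all(a >= b for a, b in zip(Q_vals, Q_vals[1:]))
assert np.abs(x_final).max() < 1e-3
```

The same three steps apply verbatim to any differentiable Q; only the feedback law in the last line changes.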
Speed-gradient principle: among all possible motions of the system, only those are realized for which the input variables change proportionally to the speed gradient ∇_u Q̇(x, u, t) of an appropriate goal functional Q(x, t). If there are constraints imposed on the system motion, then the speed-gradient vector should be projected onto the set of admissible (compatible with constraints) directions.
According to the SG-principle, to describe the dynamics of a system one needs to introduce the goal function Q(x, t). The choice of Q(x, t) should reflect the tendency of natural behavior to decrease the current value Q(x(t), t). Systems obeying the SG-principle will be called SG-systems. In this paper only models (1) of the special form ẋ = u are considered, i.e. a law of change of the state velocities is sought. Since the gradient of a function points in the direction of its maximum growth, the SG-direction is the direction of maximum growth of Q̇(x, u, t), i.e. the direction of the maximum production rate of Q. Correspondingly, the opposite direction corresponds to the minimum production rate of Q. In the presence of constraints, the SG-principle suggests that the production rate of Q is maximal under the imposed constraints. The laws of dynamics under constraints can be found using the method of Lagrange multipliers.
The SG-laws with non-diagonal gain matrices γ can be interpreted by introducing a non-Euclidean metric in the space of inputs by means of the matrix γ^(−1). The matrix γ can be used to describe spatial anisotropy. Admitting dependence of the matrix γ on x, one can recover the laws of dynamics for complex mechanical systems described by the Lagrangian or Hamiltonian formalism.

Examples of speed-gradient laws of dynamics
According to the speed-gradient principle, one first needs to introduce the goal function Q(x). The choice of Q(x) should reflect the tendency of natural behavior to decrease the current value Q(x(t)). Let us consider illustrative examples.

Example 1: Motion of a particle in a potential field
In this case the vector x = (x_1, x_2, x_3)^T consists of the coordinates x_1, x_2, x_3 of a particle. Choose a smooth Q(x) as the potential energy of the particle and derive the speed-gradient law in the differential form. To this end, calculate the speed

Q̇ = [∇_x Q(x)]^T ẋ = [∇_x Q(x)]^T u,

so that ∇_u Q̇ = ∇_x Q(x). Then, choosing the diagonal positive definite gain matrix Γ = m^(−1) I_3, where m > 0 is a parameter and I_3 is the 3 × 3 identity matrix, we arrive at Newton's second law

m ẍ = −∇_x Q(x).

Note that speed-gradient laws with nondiagonal gain matrices Γ can be incorporated if a non-Euclidean metric in the space of inputs is introduced by the matrix Γ^(−1). Admitting dependence of the metric matrix Γ on x, one can obtain evolution laws for complex mechanical systems described by the Lagrangian or Hamiltonian formalism.
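As a numerical check (the mass, stiffness and initial state below are assumed for the example), the differential SG-law with a harmonic potential Q(x) = k|x|²/2 reproduces Newton's law m ẍ = −∇Q(x); a symplectic integrator keeps the total energy nearly constant, in line with the reversibility of the differential form:

```python
import numpy as np

# Sketch: the differential SG-law du/dt = -Gamma * grad Q(x) with u = dx/dt
# and Gamma = I/m is Newton's law  m * d2x/dt2 = -grad Q(x).
# Here Q(x) = 0.5*k*|x|^2 (harmonic potential); energy should stay constant.

m, k, dt, steps = 2.0, 3.0, 1e-3, 20000

def grad_Q(x):
    return k * x

x = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 0.5, 0.0])          # v plays the role of the input u = dx/dt

def energy(x, v):
    return 0.5 * m * v @ v + 0.5 * k * x @ x

E0 = energy(x, v)
for _ in range(steps):                 # velocity-Verlet integration
    v_half = v - 0.5 * dt * grad_Q(x) / m
    x = x + dt * v_half
    v = v_half - 0.5 * dt * grad_Q(x) / m

assert abs(energy(x, v) - E0) < 1e-4   # total energy conserved
```

Replacing the differential form by the finite form u = −Γ∇Q(x) would instead give a dissipative gradient flow toward the minimum of Q.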
The SG-principle applies not only to finite-dimensional systems, but also to infinite-dimensional (distributed) ones. In particular, x may be a vector of a functional space X and f(x, u, t) may be a nonlinear differential operator (in such a case the solutions of (1) should be understood as generalized ones). We omit the mathematical details for simplicity.

Example 2: Wave, diffusion and heat transfer equations
Let x = x(r), r = col(r_1, r_2, r_3) ∈ Ω be the temperature field or the field of concentration of a substance defined in a domain Ω ⊂ R^3. Choose the goal functional evaluating the nonuniformity of the field as

Q_t(x) = (1/2) ∫_Ω |∇_r x(r, t)|^2 dr,

where ∇_r x(r, t) is the spatial gradient of the field; the boundary conditions are assumed zero for simplicity. Calculation of the speed Q̇_t and then of the speed-gradient of Q̇_t yields, after integration by parts,

∇_u Q̇_t = −Δx,   where   Δ = Σ_{i=1}^{3} ∂²/∂r_i² is the Laplace operator.

Therefore the speed-gradient law in the differential form (4) is

∂²x/∂t² = γ Δx,

which corresponds to the D'Alembert wave equation. The SG-law in the finite form (3) reads

∂x/∂t = γ Δx

and coincides with the diffusion or heat transfer equation. Note that the differential form of the speed-gradient laws often corresponds to reversible processes while the finite form generates irreversible ones. For modelling more complex dynamics, a combination of the finite and differential SG-laws may be useful.
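A one-dimensional finite-difference sketch of the finite SG-law (the heat equation) with zero boundary conditions; the grid size, time step and initial profile are assumed for the illustration. The nonuniformity functional Q_t decreases monotonically along the solution, as the SG-principle prescribes:

```python
import numpy as np

# Sketch: finite SG-law dx/dt = gamma * Laplacian(x) (heat equation) for a
# 1-D field on (0, 1) with zero boundary values; the nonuniformity functional
# Q = 0.5 * integral |dx/dr|^2 dr must decrease along solutions.

n, gamma, dt, steps = 50, 1.0, 1e-4, 2000     # dt below stability limit h^2/2
h = 1.0 / (n + 1)
r = np.linspace(h, 1 - h, n)
x = np.sin(np.pi * r) + 0.3 * np.sin(3 * np.pi * r)   # initial field

def laplacian(x):
    xp = np.pad(x, 1)                  # append the zero boundary values
    return (xp[2:] - 2 * xp[1:-1] + xp[:-2]) / h**2

def nonuniformity(x):
    xp = np.pad(x, 1)
    return 0.5 * np.sum(np.diff(xp)**2) / h   # 0.5 * int |dx/dr|^2 dr

Q_prev = nonuniformity(x)
for _ in range(steps):
    x = x + dt * gamma * laplacian(x)  # explicit Euler step of the heat eq.
    Q_now = nonuniformity(x)
    assert Q_now <= Q_prev + 1e-12     # Q decreases monotonically
    Q_prev = Q_now
```

Swapping the single Euler step for a second-order-in-time update would give the wave equation of the differential form, along which Q_t oscillates rather than decays.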
In a similar way, the dynamical equations of many other mechanical, electrical and thermodynamic systems can be recovered. The SG-principle applies to a broad class of physical systems subjected to potential and/or dissipative forces.

Speed-gradient entropy maximization
Let us emphasize that the speed-gradient principle provides an answer to the question: how will the system evolve? It differs from the principles of maximum entropy, maximum Fisher information, etc., which provide an answer to the questions: where? and how far? In particular, this means that the SG-principle generates equations for the transient (nonstationary) mode rather than equations for the system equilibrium. Let us apply the SG-principle to an entropy maximization problem.
According to the 2nd law of thermodynamics and to the Maximum Entropy Principle of Gibbs-Jaynes, the entropy of any physical system tends to increase until it achieves its maximum value under the constraints imposed by other physical laws. Such a statement provides knowledge about the final distribution of the system states, i.e. about the asymptotic behavior of the system as t → ∞. However, it does not provide information about how the system moves to achieve its limit (steady) state.
In order to provide the equations of motion for the transient mode, employ the SG-principle. Assume for simplicity that the system consists of N identical particles distributed over m cells. Let N_i be the number of particles in the ith cell, and let the mass conservation law hold:

Σ_{i=1}^{m} N_i = N.    (10)

Assume that the particles can move from one cell to another and that we are interested in the system behavior both in the steady-state and in the transient modes. The answer for the steady-state case is given by the Maximum Entropy Principle: if nothing else is known about the system, then its limit behavior will maximize its entropy [8]. Let the entropy of the system be defined as the logarithm of the number of possible states:

S = ln [ N! / (N_1! · · · N_m!) ].

If there are no constraints other than the normalization condition (10), it achieves its maximum when N_i^* = N/m. For large N an approximate expression is of use. Namely, if the number of particles N is large enough, one may use the Stirling approximation, which yields

S ≈ N ln N − Σ_{i=1}^{m} N_i ln N_i.    (11)

This coincides with the standard definition of the entropy S = −Σ_{i=1}^{m} p_i ln p_i, modulo the constant multiplier N, if the probabilities p_i are understood as frequencies N_i/N.
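The quality of the Stirling approximation (11) can be checked directly; the cell populations below are assumed for the example. Using the log-gamma function avoids overflow in the exact expression S = ln(N!/(N_1!···N_m!)):

```python
import math

# Sketch: compare the exact entropy S = ln( N! / (N_1! ... N_m!) ) with the
# Stirling approximation S ~ N ln N - sum_i N_i ln N_i used as the goal function.

def entropy_exact(counts):
    N = sum(counts)
    # ln n! = lgamma(n + 1), computed without forming huge factorials
    return math.lgamma(N + 1) - sum(math.lgamma(n + 1) for n in counts)

def entropy_stirling(counts):
    N = sum(counts)
    return N * math.log(N) - sum(n * math.log(n) for n in counts if n > 0)

counts = [2500, 1500, 500, 500]        # N = 5000 particles in m = 4 cells
S_ex, S_st = entropy_exact(counts), entropy_stirling(counts)
assert abs(S_ex - S_st) / S_ex < 0.01  # relative error below 1% for large N

# uniform occupation N_i = N/m maximizes the entropy
assert entropy_stirling([1250] * 4) > entropy_stirling(counts)
```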
To get an answer for the transient mode, apply the SG-principle choosing the approximate entropy S(X) = N ln N − Σ_{i=1}^{m} N_i ln N_i as the goal function to be maximized, where X = col(N_1, . . ., N_m) is the state vector of the system. Assume for simplicity that the motion is continuous in time and the numbers N_i change continuously, i.e. the N_i are not necessarily integer (for large N_i this is not a strong restriction). Then the sought law of motion can be represented in the form

Ṅ_i = u_i, i = 1, . . ., m,    (12)

where u_i = u_i(t), i = 1, . . ., m are controls: auxiliary functions to be determined. According to the SG-principle, one first needs to evaluate the speed of change of the entropy (11) along the system (12), then evaluate the gradient of the speed with respect to the vector of controls u_i considered as frozen parameters, and finally define the actual controls proportionally to the projection of the speed-gradient onto the surface of the constraints (10). Evaluation of Ṡ yields

Ṡ = −Σ_{i=1}^{m} u_i (ln N_i + 1).

It follows from (10) that Σ_{i=1}^{m} u_i = 0. Hence Ṡ = −Σ_{i=1}^{m} u_i ln N_i. Evaluation of the speed-gradient yields ∂Ṡ/∂u_i = −ln N_i, and the SG-law reads u_i = γ(−ln N_i + λ), i = 1, . . ., m, where the Lagrange multiplier λ is chosen so as to fulfill the constraint Σ_{i=1}^{m} u_i = 0, i.e. λ = (1/m) Σ_{i=1}^{m} ln N_i. The final form of the system dynamics law is as follows:

Ṅ_i = γ [ (1/m) Σ_{j=1}^{m} ln N_j − ln N_i ], i = 1, . . ., m.    (13)

According to the SG-principle, equation (13) determines the transient dynamics of the system. To confirm the consistency of the choice (13), let us find the equilibrium mode, i.e. evaluate the asymptotic behavior of the variables N_i. To this end, note that Ṅ_i = 0 and Σ_{j=1}^{m} ln N_j = m ln N_i at the equilibrium. Hence all the N_i are equal: N_i^* = N/m, which corresponds to the maximum entropy state and agrees with thermodynamics.
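The law (13) can be simulated directly (γ, the time step and the initial populations are assumed for the example); mass conservation, convergence to the uniform state N_i* = N/m, and the monotone decrease of V = S_max − S(X) are easy to verify:

```python
import numpy as np

# Sketch: simulate the SG-law (13),
#   dN_i/dt = gamma * ( (1/m) * sum_j ln N_j  -  ln N_i ),
# and check mass conservation and convergence to the maximum-entropy state.

gamma, dt, steps = 1.0, 0.05, 10000
Ns = np.array([40.0, 25.0, 20.0, 10.0, 5.0])   # initial populations, N = 100
N_total, m = Ns.sum(), len(Ns)

def V(Ns):
    # Lyapunov function V = S_max - S = sum_i N_i ln N_i
    return float(np.sum(Ns * np.log(Ns)))

V_prev = V(Ns)
for _ in range(steps):
    u = gamma * (np.log(Ns).mean() - np.log(Ns))   # SG-law (13)
    Ns = Ns + dt * u                               # Euler step
    assert abs(Ns.sum() - N_total) < 1e-9          # mass conservation (10)
    assert V(Ns) <= V_prev + 1e-9                  # V decreases monotonically
    V_prev = V(Ns)

# equilibrium is the maximum-entropy state N_i* = N/m
assert np.allclose(Ns, N_total / m, atol=1e-3)
```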
The next step is to examine the stability of the equilibrium mode. This can be done by means of the entropy Lyapunov function V(X) = S_max − S(X) ≥ 0, where S_max = N ln N. Evaluation of V̇ yields

V̇ = −Ṡ = −γ [ Σ_{i=1}^{m} (ln N_i)^2 − (1/m) ( Σ_{i=1}^{m} ln N_i )^2 ].

It follows from the Cauchy-Bunyakovsky-Schwarz inequality that V̇(X) ≤ 0, and the equality V̇(X) = 0 holds if and only if all the values N_i are equal, i.e. only at the maximum entropy state. Thus the law (13) provides global asymptotic stability of the maximum entropy state. The physical meaning of the law (13) is motion along the direction of the maximum entropy production rate (the direction of the fastest entropy growth).

The case of more than one constraint can be treated in the same fashion. Let, in addition to the mass conservation law (10), the energy conservation law hold. Let E_i be the energy of a particle in the ith cell and let the total energy

E = Σ_{i=1}^{m} N_i E_i    (14)

be conserved. The energy conservation law appears as an additional constraint. Acting in a similar way, we arrive at the law (13), which needs modification to ensure conservation of the energy (14). According to the SG-principle, one should form the projection onto the surface (in our case, a subspace of dimension m − 2) defined by the relations

Σ_{i=1}^{m} u_i = 0,   Σ_{i=1}^{m} E_i u_i = 0.    (15)

This means that the evolution law should have the form

Ṅ_i = u_i = −γ ln N_i − λ_1 E_i − λ_2, i = 1, . . ., m,    (16)

where λ_1, λ_2 are determined by substitution of (16) into (15). The obtained equations are linear in λ_1, λ_2 and their solution is given by the formulas

λ_1 = −γ [ m Σ_i E_i ln N_i − Σ_i E_i Σ_i ln N_i ] / D,
λ_2 = −γ [ Σ_i E_i^2 Σ_i ln N_i − Σ_i E_i Σ_i E_i ln N_i ] / D,    (17)

where D = m Σ_{i=1}^{m} E_i^2 − ( Σ_{i=1}^{m} E_i )^2. The solution (17) is well defined if D ≠ 0, which holds unless all the E_i are equal (the degenerate case).
Let us evaluate the equilibrium point of the system (12), (16) and analyze its stability. At the equilibrium point of the system the following equalities hold:

−γ ln N_i^* − λ_1 E_i − λ_2 = 0,   i.e.   N_i^* = C exp(−µ E_i),

where µ = λ_1/γ and C = exp(−λ_2/γ).
The value of C can also be chosen from the normalization condition: C = N / Σ_{i=1}^{m} exp(−µE_i). We see that the equilibrium of the system with conserved energy corresponds to the Gibbs distribution, which agrees with classical thermodynamics. Again it is worth noting that the direction of change of the numbers N_i coincides with the direction of the fastest growth of the local entropy production subject to the constraints. As before, it can be shown that V(X) = S_max − S(X) is a Lyapunov function for the system and that the Gibbs distribution is the only stable equilibrium of the system in the non-degenerate case. Substitution of λ_1, λ_2 from (17) into equation (16) yields the general form of the evolution law for the populations:

dX(t)/dt = γ A ln X(t),    (19)

where the logarithm of a vector is understood componentwise and the symmetric m × m matrix A is defined as follows:

A = −I + (1/D) [ m Ē Ē^T − (Σ_i E_i)(Ē 1^T + 1 Ē^T) + (Σ_i E_i^2) 1 1^T ],

where 1 = (1, . . ., 1)^T. It depends on the vector of energies Ē = (E_1, . . ., E_m)^T. By its structure, the matrix A is symmetric and has two zero eigenvalues (with eigenvectors 1 and Ē, corresponding to the mass and energy conservation laws).
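The energy-constrained law can be simulated in the same way (the cell energies, γ, time step and initial populations below are assumed, with non-degenerate energies); at each step the two Lagrange multipliers are obtained from the linear constraint equations (15), and the populations converge to a Gibbs distribution, i.e. ln N_i becomes affine in E_i:

```python
import numpy as np

# Sketch: simulate the projected SG-law (16),
#   dN_i/dt = -gamma * ln N_i - lambda1 * E_i - lambda2,
# with lambda1, lambda2 solved from sum_i u_i = 0 and sum_i E_i u_i = 0.

gamma, dt, steps = 1.0, 0.05, 40000
E = np.array([0.0, 1.0, 2.0, 3.0])        # cell energies (non-degenerate)
Ns = np.array([10.0, 30.0, 40.0, 20.0])   # initial populations
N_total, E_total = Ns.sum(), (Ns * E).sum()
m = len(Ns)
# constant coefficient matrix of the linear equations for lambda1, lambda2
A_mat = np.array([[E.sum(), m], [np.sum(E**2), E.sum()]])

for _ in range(steps):
    s = np.log(Ns)
    b = -gamma * np.array([s.sum(), (E * s).sum()])
    lam1, lam2 = np.linalg.solve(A_mat, b)  # Lagrange multipliers (17)
    u = -gamma * s - lam1 * E - lam2        # projected SG-law (16)
    Ns = Ns + dt * u                        # Euler step

assert abs(Ns.sum() - N_total) < 1e-6       # mass conserved
assert abs((Ns * E).sum() - E_total) < 1e-6 # energy conserved
# equilibrium is Gibbs: N_i = C exp(-mu*E_i), i.e. ln N_i affine in E_i
mu = (np.log(Ns[0]) - np.log(Ns[1])) / (E[1] - E[0])
assert np.allclose(np.log(Ns), np.log(Ns[0]) - mu * E, atol=1e-6)
```

Note that both conservation laws hold at every step by construction of the multipliers, not only in the limit, mirroring the projection onto the subspace (15).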
It follows from Lyapunov arguments that Eq. (19), restricted to the subspace defined by the linear identities (10), (14), has a unique relative equilibrium corresponding to the Gibbs distribution. Similar results are valid for continuous (distributed) systems, and even for the more general problem of minimization of the relative entropy (Kullback divergence) [14].

Conclusions
The speed-gradient variational principle provides a simple yet useful addition to the classical results of thermodynamics and statistical mechanics. Whereas the classical results allow a researcher to answer the question "Where does the system go?", the speed-gradient approach provides an answer to the question "How does it go, and how does it reach its steady-state mode?" The SG-principle suggests that the transient behavior is potential with respect to the rate of some goal function. This idea may also be applied to the evaluation of nonequilibrium stationary states and to the study of the evolution of a system's internal structure [17], to the description of transient slow motions in vibrational mechanics, etc. A different approach to the variational description of nonstationary nonequilibrium processes is proposed in [18]. Other physical applications of techniques and ideas developed in control theory (cybernetics) can be found in [11,14,19,20].