Article

Controlled Positive Dynamic Systems with an Entropy Operator: Fundamentals of the Theory and Applications

Federal Research Center “Computer Science and Control” of Russian Academy of Sciences, 117997 Moscow, Russia
Mathematics 2021, 9(20), 2585; https://doi.org/10.3390/math9202585
Submission received: 30 August 2021 / Revised: 4 October 2021 / Accepted: 7 October 2021 / Published: 14 October 2021
(This article belongs to the Section Dynamical Systems)

Abstract

Controlled dynamic systems with an entropy operator (DSEO) are considered. Mathematical models of such systems have been used to study the dynamic properties of demo-economic systems, the spatiotemporal evolution of traffic flows, recurrent procedures for restoring images from projections, etc. Three problems in the study of DSEOs are considered: the existence and uniqueness of singular points and the influence of control on them; the stability "in the large" of the singular points; and the optimization of program control and of linear feedback control. The theorems on the existence, uniqueness, and localization of singular points are proved using the properties of equations with monotone operators and the method of linear majorants of the entropy operator. The theorem on the asymptotic stability of the DSEO "in the large" is proved using differential inequalities. Methods are developed for the synthesis of quasi-optimal program control and linear feedback control with an integral quadratic quality functional and with constraints ensuring the existence of a nonzero equilibrium. A recursive method for solving the integral equations of the DSEO using multidimensional functional power series and the multidimensional Laplace transform is developed. The problem of managing regional foreign direct investment is considered, with the distribution of flows modeled by the corresponding DSEO. It is shown that linear feedback control is a more effective tool than program control.

1. Introduction

Dynamic systems with an entropy operator (DSEOs) form a class of nonlinear systems in which the nonlinearity is described by a perturbed mathematical programming problem with an entropy objective function. DSEOs turned out to be a useful framework for studying a fairly wide range of applied problems: modeling of demo-economic systems [1], traffic flows [2,3], the labor market [4], and urban agglomerations [5]; the development of diagnostics algorithms for atmospheric waves [6] and oil-bearing strata [7]; analysis of stationary modes in dynamic image reconstruction procedures from projections in computerized tomography [8]; identification of dynamic systems [9], and others.
The mathematical class of DSEOs is based on a physical model of a dynamic system with self-reproduction of matter, energy, or information exchanged stochastically between the subsystems. Moreover, self-reproduction processes have an evolutionary nature (“slow”), whereas exchange processes occur much more intensively (“fast”). The latter processes relax in rather short periods (in the time scale of “slow” processes) to local equilibrium states. Since exchange processes are assumed to be stochastic, their locally stationary states can be described by the corresponding entropy operator.
Research into DSEOs has some history, with related problems examined in adjacent areas. Apparently, the monograph [10], devoted to studying processes in urban and regional systems, was the first publication on models with an entropy operator.
A feature of DSEOs is a specific nonlinearity description in the form of the argmax function. In [11], forced periodic modes in a dynamic system with the argmin function without constraints were investigated. The paper [12] considered the convergence of iterative algorithms for finding the argmin of a continuous function.
The entropy operator in DSEOs is also described by the argmax function, but the constrained maximum of the entropy function. Constraints can be specified as a system of equalities or inequalities. In the latter case, the mathematical model of the entropy operator reduces to a perturbed mathematical programming problem, which significantly complicates the study of the operator itself and the corresponding dynamic system.
The general properties of an entropy operator with equality constraints (existence, continuity, boundedness, differentiability, the Lipschitz condition) were studied in [13].
This paper considers the dynamic properties of controlled positive DSEOs with the constrained argmax entropy function. Control in such systems affects the entropy operator. Two classes of controls are studied: program and linear feedback controls. We formulate a non-negativity theorem for the solutions of equations describing the DSEO under non-negative control and initial conditions. The problem of equilibrium states is investigated: the existence of a unique singular point is established using the method of monotonic operators [14] and bilinear majorants [15], and the sizes of a multidimensional rectangular box (a vector interval) containing this point are estimated. The sizes of the asymptotic stability domain of the singular point are estimated by the state vector norm using the method of differential inequalities [16]. Optimal control design problems in the two classes mentioned are considered using an integral quadratic criterion and constraints associated with the existence, uniqueness, and stability of the equilibrium of positive DSEOs.
The developed methods are applied to study the qualitative properties and optimize a stochastic regional exchange system for foreign direct investment.

2. Mathematical Model of Positive Controlled DSEOs

Consider a DSEO with some control u ( t ) R + r applied to the entropy operator:
\frac{dx}{dt} = x \otimes \big(w - x \otimes S\,y[u(x)]\big), \qquad x(0) > 0, \quad x \in R_+^n,
where the controlled entropy operator is given by
y[u(x)] = \arg\max_{y} \big\{ H(y) \mid T y = u(x) \geq 0 \big\}, \qquad y \in R_+^m, \quad u \in R_+^r.
Note that in the absence of control, the DSEO has a zero singular point. It is stable if w < 0 and unstable if w > 0.
The other notations are as follows: \otimes means the coordinate-wise product of vectors; w \in R_+^n is a vector with constant components; S \geq 0 is a non-negative matrix of dimensions (n \times m); T is a non-negative matrix of dimensions (r \times m) that has full rank and normalized columns:
e_{(r)}^\top T = e_{(m)}^\top,
where e ( r ) and e ( m ) denote unit column vectors of dimensions r and m , respectively.
The entropy has the form
H(y) = -\Big\langle y,\ \ln\frac{y}{e\,a} \Big\rangle, \qquad a \in A = [0,1] \ \text{for all}\ x \in R_+^n, \quad e = 2.718.
The entropy operator is a vector y [ u ( x ) ] with the components
y_j(u(x)) = a_j \prod_{l=1}^{r} [z_l(u(x))]^{t_{lj}}, \quad j = \overline{1,m}; \qquad \Phi_k(z(u(x))) = \sum_{j=1}^{m} t_{kj}\, a_j \prod_{l=1}^{r} [z_l(u(x))]^{t_{lj}} = u_k(x), \quad k = \overline{1,r},
where the vector z consists of the exponential Lagrange multipliers for problem (2).
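For illustration, the system (5) can be solved numerically for the exponential Lagrange multipliers z and the operator value y[u]. The following minimal Python sketch uses hypothetical values of T, a, and u (not taken from the paper); only the structure of (5) is assumed.

```python
import numpy as np
from scipy.optimize import fsolve

# Illustrative (hypothetical) data: r = 2 constraints, m = 3 components.
# Columns of T are normalized (each column sums to 1), as required by (3).
T = np.array([[0.5, 0.3, 0.6],
              [0.5, 0.7, 0.4]])          # (r x m), full rank, column sums = 1
a = np.array([0.4, 0.8, 0.6])            # prior probabilities a_j in (0, 1]
u = np.array([0.9, 1.1])                 # control vector u = u(x), u > 0

def y_of_z(z):
    # y_j = a_j * prod_l z_l^{t_lj}  -- components of the entropy operator (5)
    return a * np.prod(z[:, None] ** T, axis=0)

def residual(z):
    # Phi_k(z) = sum_j t_kj y_j(z) - u_k = 0 -- the system (5) for the
    # exponential Lagrange multipliers z
    return T @ y_of_z(z) - u

z0 = np.ones(T.shape[0])                 # positive initial guess
z = fsolve(residual, z0)
y = y_of_z(z)

print("z =", z)
print("y[u] =", y)
print("T y - u =", T @ y - u)            # should be close to zero
```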
Theorem 1.
In Equation (1), let w \geq 0, S \geq 0, and y[u(x)] \geq 0 for all u(x) \geq 0.
Then for x ( 0 ) 0 , all solutions x ( t ) of (1) are non-negative:
x(t) \in R_+^n \quad \text{for all } t \geq 0,
and the system (1) belongs to the class of positive DSEOs.
Proof. 
We introduce three subsets in the state space of the DSEO (1) as follows:
X_{+} = \bigcup_{i = (i_1,\ldots,i_k)} X^{(i)},
where
X^{(i)} = \big\{ x : w_i > x_i \langle s^{(i)}, y[u(x)] \rangle \big\}, \quad i = (i_1,\ldots,i_k), \quad i_j = \overline{1,n},
and s ( i ) is the ith row of the matrix S;
X_{-} = \bigcup_{i = (i_{k+1},\ldots,i_n)} \bar{X}^{(i)},
where
\bar{X}^{(i)} = \big\{ x : w_i < x_i \langle s^{(i)}, y[u(x)] \rangle \big\}, \quad i = (i_{k+1},\ldots,i_n), \quad i_j \notin (i_1,\ldots,i_k), \quad i_j = \overline{1,n};
X^{*} = \{ x : w = x \otimes S\, y[u(x)] \}.
1. Consider an initial point x(0) \in R_+^n whose components with numbers i_1,\ldots,i_k belong to the set X_{+}, and those with numbers i_{k+1},\ldots,i_n to the set X_{-}. From (1) it follows that the right-hand sides of the equations with numbers i_1,\ldots,i_k are positive. Hence, the components x_{i_1}(t),\ldots,x_{i_k}(t) evolving from the point x(0) will increase monotonically, remaining positive.
Now, we analyze the components x i k + 1 , , x i n . Since they belong to the set X , the right-hand sides of the corresponding equations in (1) are negative. As a result, the components x i k + 1 ( t ) , , x i n ( t ) evolving from the point x ( 0 ) will gradually vanish, remaining positive.
2. Consider the opposite case: in the initial point x ( 0 ) R + n , the components with numbers i 1 , , i k belong to the set X , and those with numbers i k + 1 , , i n to the set X + . This case is a “mirror image” of the previous one: the components x i 1 ( t ) , , x i n ( t ) will decrease, remaining non-negative, whereas the components x i k + 1 , , x i n ( t ) will increase, remaining positive.
3. The set X is the set of nonzero singular points, and their existence is determined by equality (10). The proof of Theorem 1 is complete. □

3. Singular Points and Their Localization

The system (1) has the trivial singular point x 0 = 0 and nonzero singular points satisfying the equations
x \otimes S\, y[z] = w, \qquad \Phi(z) = u(x),
where the entropy operator y and the function Φ have the components (5). The relationship between the exponential Lagrange multipliers z and the state vector x , i.e., an implicit function z ( x ) , is given by (5).
We transform Equation (11) to
\tilde{y}[x,z] = 1, \qquad \tilde{\Phi}(x,z) = 1,
where
\tilde{y}[x,z] = \tilde{S}(x)\, y[z], \qquad \tilde{S}(x) = \Big[ \frac{x_i}{w_i}\, s^{(i)},\ i = \overline{1,n} \Big],
\tilde{\Phi}(x,z) = \frac{1}{u(x)} \otimes \Phi(z),
where the entropy operator y[z] and the function \Phi(z) have the components (5).
Next, we multiply component-wise the left- and right-hand sides of Equation (12) by the vectors x and z , respectively. As a result,
G(x,z) = x \otimes \tilde{y}[x,z] = x, \qquad \Psi(x,z) = z \otimes \tilde{\Phi}(x,z) = z,
where G, x, \tilde{y} \in R_+^n and \Psi, \tilde{\Phi}, z \in R_+^r.
We introduce the following notations:
A(v) = \begin{pmatrix} G(x,z) \\ \Psi(x,z) \end{pmatrix}, \qquad v = \{x, z\} \in R_+^{n+r}, \quad A(v) \in R_+^{n+r}.
Then Equation (15) can be written as
A ( v ) = v .
The operator A ( v ) maps the vector v into itself. Therefore, under several useful properties (contraction, monotonicity), the equation will have a unique solution, and we may estimate the domain containing it.
Theorem 2.
Let the control function have the following properties:
u(x) \geq 0, \qquad \text{the Jacobian } J_{u/x} \leq 0,
for all x R + n .
Then the operator A ( v ) is monotonically increasing for all v R + n + r .
Proof. 
Consider the Jacobian of (17). Due to (16),
J_A = \begin{pmatrix} J_{G/x} & J_{G/z} \\ J_{\Psi/x} & J_{\Psi/z} \end{pmatrix}.
According to (16), the elements of the Jacobian J A are:
J_{G/x} = \mathrm{diag}\Big[ \tilde{y}_i[x,z] + \frac{x_i}{w_i}\, \langle s^{(i)}, y[z] \rangle,\ i = \overline{1,n} \Big],
J_{G/z} = J_{\tilde{y}/z} = \Big[ \frac{t_{si}\, a_i}{z_s} \prod_{j=1}^{r} [z_j]^{t_{ji}},\ i = \overline{1,m};\ s = \overline{1,r} \Big],
J_{\Psi/x} = z \otimes J_{\tilde{\Phi}/x}, \qquad J_{\tilde{\Phi}/x} = \Big[ -\frac{1}{u_k^2} \frac{\partial u_k}{\partial x_i}\, \Phi_k(z),\ k = \overline{1,r};\ i = \overline{1,n} \Big],
J_{\Psi/z} = \frac{1}{u(x)} \otimes J_{\Phi/z}, \qquad J_{\Phi/z} = \Big[ \frac{1}{z_s} \sum_{i=1}^{m} t_{ki}\, t_{si}\, a_i \prod_{j=1}^{r} [z_j]^{t_{ji}},\ (k,s) = \overline{1,r} \Big].
By conditions (18) of this theorem, all component matrices of (19) have non-negative elements, and the conclusion follows. The proof of Theorem 2 is complete. □
Thus, the operator A ( v ) is monotonically increasing on the set v 0 . For monotonically increasing operators in Equation (17), there exists a pair of vectors v ( l f ) < v ( r h ) such that: A ( v ( l f ) ) > v ( l f ) and A ( v ( r h ) ) < v ( r h ) for a concave operator, and A ( v ( l f ) ) < v ( l f ) and A ( v ( r h ) ) > v ( r h ) for a convex operator. Then the solution v of Equation (17) belongs to the corresponding vector interval V = [ v ( l f ) , v ( r h ) ] ; see Theorem 3.1 in [14].
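To make the bracketing property concrete, here is a toy Python sketch; the map below is a hypothetical monotonically increasing map on the non-negative orthant, not the operator A(v) of the paper.

```python
import numpy as np

# Toy illustration: for a monotonically increasing map A with
# A(v_lf) >= v_lf and A(v_rh) <= v_rh, iterating from both ends of the
# vector interval produces monotone sequences bracketing a fixed point
# (cf. Theorem 3.1 in [14]).
def A(v):
    return np.sqrt(v) + 0.1            # increasing componentwise

v_lo, v_hi = np.full(2, 0.01), np.full(2, 4.0)
for _ in range(60):
    v_lo, v_hi = A(v_lo), A(v_hi)

print("lower iterate:", v_lo)          # both approach the fixed point
print("upper iterate:", v_hi)          # inside the interval [v_lo, v_hi]
```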
The problem is estimating the boundaries of this vector interval. The null vector 0 can be adopted as the left boundary. For estimating the right boundary (the vector v^{(rh)}), we define the majorant of the operator A(v) (15).
Recall that the majorant is a vector function C ( v ) , such that
A(v) \leq C(v), \qquad C(v) = \begin{pmatrix} C_G(v) \\ C_{\Psi}(v) \end{pmatrix},
where
G(v) \leq C_G(v), \qquad \Psi(v) \leq C_{\Psi}(v).
Lemma 1.
Assume that condition (3) holds, and the control function u ( x ) is monotonically decreasing, i.e., its Jacobian satisfies the inequality
J_{u/x} \leq 0.
Then the majorant C ( v ) (24) exists, and its components are given by
C_G(v) = x^{(2)} \otimes F z, \qquad f_{ik} = \frac{1}{w_i} \sum_{j=1}^{m} s_{ij}\, a_j\, t_{kj}, \quad k = \overline{1,r};\ i = \overline{1,n},
C_{\Psi}(v) = \frac{z}{u(x)} \otimes B z, \qquad b_{kl} = \sum_{j=1}^{m} a_j\, t_{kj}\, t_{lj}, \quad (k,l) = \overline{1,r},
where the vector x ( 2 ) and z / u ( x ) consist of x i 2 , i = 1 , n ¯ , and z k / u k ( x ) , k = 1 , r ¯ , respectively.
Proof. 
Using a well-known inequality from [16]:
\prod_{j=1}^{r} u_j^{\alpha_j} \leq \sum_{j=1}^{r} \alpha_j u_j, \qquad u_j > 0, \quad \alpha_j > 0, \quad \sum_{j=1}^{r} \alpha_j = 1,
together with (3) and (29), we finally arrive at (27) and (28). The proof of Lemma 1 is complete. □
Replacing the operator A ( v ) in (17) with its majorant (27) and (28), we obtain equations for the vector v ( r h ) = ( x ( r h ) , z ( r h ) ) :
F z = \frac{1}{x}, \qquad B z = u(x).
Hence, the right boundary of the domain locating the unique singular point of the system (1) satisfies the equation
F B^{-1} u(x) = \frac{1}{x}.
Here, the vector \frac{1}{x} consists of \frac{1}{x_i}, i = \overline{1,n}, and the control function u(x) is monotonically decreasing on R_+^n.
As an example, consider a feedback law with the exponential control function:
u(x) = \exp(-\alpha \otimes x), \qquad \alpha = \{\alpha_1,\ldots,\alpha_n\}, \quad \alpha_i > 0, \quad i = \overline{1,n}.
We construct an approximate solution to localize the right boundary of the vector interval. The linear approximation of the exponential control function has the form
u(x) \approx 1 - \alpha \otimes x.
Then Equation (31) reduces to
x \otimes \big(q - \tilde{p}(x)\big) = 1, \qquad q = P\,1, \quad P = F B^{-1},
where the vector \tilde{p}(x) consists of
\tilde{p}_i(x) = \sum_{j=1}^{n} \tilde{p}_{ij} x_j, \qquad \tilde{p}_{ij} = p_{ij}\, \alpha_j.
Consider a parameterized family of the following systems of equations:
x^{(\varepsilon)} \otimes \big(q - \tilde{p}(x)\big) = 1.
The case ε = 0 corresponds to the so-called generating system:
\tilde{P} x = \tilde{q}, \qquad \tilde{q} = q - 1.
Its solution is
x_g = \tilde{P}^{-1} \tilde{q}.
For ε = 1 , we obtain the original system (36). Expanding its solution into a formal power series [17] in the parameter ε yields
x = x_g + \varepsilon x^{(1)} + \varepsilon^2 x^{(2)} + \cdots,
where x ( k ) is the kth approximation to the solution. Next, we substitute the series (39) into Equation (36) and equate the terms with the same powers of ε . As a result, the approximations will satisfy the following chain of relations:
  • the zeroth approximation,
    x^{(0)} = x_g;
  • the first approximation,
    x^{(1)} = \hat{P}^{-1}(x_g, q)\, 1, \qquad \hat{P} = x_g \otimes \big( \tilde{P} - \mathrm{diag}[\,q_i \mid i = \overline{1,n}\,] \big);
Therefore, the approximate right boundary of the vector interval locating the singular point is
x^{(rh)} \approx \tilde{P}^{-1} \tilde{q} + \hat{P}^{-1} 1 = \tilde{x}^{(rh)}.
Thus, the vector interval I = [0, \tilde{x}^{(rh)}] contains the singular point of the positive DSEO (1) and (2).
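As a numerical illustration of Equation (31) with the linearized control (33), the right boundary can also be obtained directly by a root-finding routine. The sketch below uses hypothetical majorant matrices F and B and gains alpha (not taken from the paper).

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical small example (n = r = 2); F and B stand in for the
# majorant matrices of (27) and (28), alpha for the gains of the
# exponential control (32) linearized as u(x) ~ 1 - alpha*x, see (33).
F = np.array([[1.8, 0.4],
              [0.3, 1.7]])
B = np.array([[0.9, 0.1],
              [0.1, 0.9]])
alpha = np.array([0.1, 0.1])

P = F @ np.linalg.inv(B)                 # P = F B^{-1}, see (34)

def boundary_residual(x):
    # Equation (31) with the linearized control: P (1 - alpha*x) = 1/x
    return P @ (1.0 - alpha * x) - 1.0 / x

x_rh = fsolve(boundary_residual, np.ones(2))
print("approximate right boundary of the localization interval:", x_rh)
```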

4. Stability of Nonzero Singular Point

Consider the deviation from the singular point x , z :
\xi = x - x^{*}, \qquad \theta = z - z^{*}, \qquad z^{*} = z(x^{*}).
The differential equation for this deviation has the form
\frac{d\xi}{dt} = -\xi \otimes w - \xi^{(2)} \otimes S\, y[\xi] - 2 x^{*} \otimes \xi \otimes S\, y[\xi].
Here it is taken into account that
x^{*} \otimes w - (x^{*})^{(2)} \otimes S\, y[x^{*}] = 0.
Theorem 3.
Let the following condition be satisfied:
\| y[x] \| \leq L \| x \|, \qquad y \in R_+^m, \quad x \in R_+^n,
where L > 0 is the Lipschitz constant for the operator y [ x ] .
Then the singular point x^{*} is stable in the large, i.e., stable under the initial deviations
v(0) = \| \xi(0) \| = \sqrt{ \sum_{i=1}^{n} \xi_i^2(0) }
in the domain
O = \Big\{ v : -w_{\min} v + 2 \sigma L \| x^{*} \| v^2 + \sigma L v^3 \leq 0 \Big\},
where \sigma = \| S \|.
Proof. 
Introducing the matrix
W = diag [ w i , i = 1 , n ¯ ]
we consider the linear differential equation
\frac{d\tilde{\xi}}{dt} = -W \tilde{\xi}.
The corresponding matricant is defined as
M_{\tau}^{t}(W) = \exp[ -W (t - \tau) ],
and its norm satisfies the upper bound
\| M_{\tau}^{t}(W) \| \leq \exp( -w_{\min} (t - \tau) ),
where w m i n = min i w i .
Using the matricant (51), we pass to the integral equation
\xi(t) = M_0^t(W)\, \xi(0) + \int_0^t M_{\tau}^{t}(W) \Big( -\xi^{(2)}(\tau) \otimes S\, y[\xi(\tau)] - 2 x^{*} \otimes \xi(\tau) \otimes S\, y[\xi(\tau)] \Big)\, d\tau.
Denote the Euclidean norm of the vector \xi(t) by
v(t) = \| \xi(t) \| = \sqrt{ \sum_{i=1}^{n} \xi_i^2(t) }.
Then, passing to norms and taking into account (46), we obtain the following inequality:
v(t) \leq \exp(-w_{\min} t)\, v(0) + \int_0^t \exp(-w_{\min}(t - \tau))\, V(\tau)\, d\tau,
where
V(\tau) = \sigma L \big( v^3(\tau) + 2 \| x^{*} \| v^2(\tau) \big).
Consider the equality:
\tilde{v}(t) = \exp(-w_{\min} t)\, \tilde{v}(0) + \int_0^t \exp(-w_{\min}(t - \tau))\, \tilde{V}(\tau)\, d\tau,
where
\tilde{V}(\tau) = \sigma L \big( \tilde{v}^3(\tau) + 2 \| x^{*} \| \tilde{v}^2(\tau) \big).
Differentiating equality (57) yields
\frac{d\tilde{v}}{dt} = -w_{\min} \tilde{v}(t) + \tilde{V}(t).
According to the theorem on differential inequalities [16],
v(t) \leq \tilde{v}(t).
The solution of the first-order Equation (59) will asymptotically vanish if the initial condition v(0) (the norm of the deviation from the singular point x^{*}) belongs to the domain where the right-hand side of Equation (59) is negative.
Hence, the singular point x will be asymptotically stable in the domain O (48). □
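For given w_min, \sigma = \|S\|, L, and \|x^{*}\|, the radius of the domain O in (48) is the positive root of the quadratic obtained after dividing by v > 0. A minimal sketch with hypothetical parameter values:

```python
import numpy as np

# Hypothetical parameter values: w_min, sigma = ||S||, the Lipschitz constant L
# of the entropy operator, and the norm ||x*|| of the singular point.
w_min  = 0.5
sigma  = 1.2
L      = 0.8
x_star = 0.6      # ||x*||

# Per (48), a positive v lies in the attraction domain when
#   sigma*L*v**2 + 2*sigma*L*||x*||*v - w_min <= 0,
# so the admissible radius is the positive root of this quadratic.
v_max = -x_star + np.sqrt(x_star ** 2 + w_min / (sigma * L))
print("estimated radius of the stability domain (in norm):", v_max)
```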

5. Optimal Control for a Class of Positive DSEOs

As shown above, control is necessary for a nonzero singular point to exist. Moreover, while the system approaches this point, selected quality indicators of the transient process can be optimized. Below we consider integral indicators.
Consider the system (1) in which the entropy operator is described by the constrained argmax problem
H(Y) = -\sum_{(k,s)=1}^{n} y_{ks} \ln \frac{y_{ks}}{e\, a_{ks}} \Rightarrow \max, \qquad \sum_{s=1}^{n} y_{ks} = u_k(x), \quad k = \overline{1,n}.
This problem has the analytical solution
y_{ks}^{*} = a_{ks}\, u_k(x), \qquad y_{sk}^{*} = a_{sk}\, u_s(x), \qquad (k,s) = \overline{1,n}.
The entropy operator is given by the vector
\tilde{y}[u(x)] = S\, y[u(x)] = A\, u(x),
where A = [ a s k | ( s , k ) = 1 , n ¯ ] . Then Equation (1) can be written as
\frac{dx}{dt} = x \otimes \big( w - x \otimes A\, u(x) \big).
Here u(x) \in R_+^r for all x \in R_+^n.
Consider the case of program control:
u ( x ) = c 0 ,
where c 0 > 0 is a vector of dimension r.
Then Equation (64) reduces to
\frac{dx}{dt} = x \otimes w - x^{(2)} \otimes \tilde{c}_0,
where
\tilde{c}_0 = A c_0, \qquad x^{(2)} = x \otimes x.
Using the matricant (51), we express (66) in the integral form:
x(t) = M_0^t(W)\, x(0) + \int_0^t M_{\tau}^{t}(W)\, x^{(2)}(\tau) \otimes \tilde{c}_0\, d\tau.
Equations (64) and (66) belong to the class of equations with the polynomial right-hand side. To find the solution, we employ the functional Volterra series [18,19,20]:
x(t) = \int_0^t K^{(1)}(t-\tau)\, x(0)\, d\tau + \int_0^t \int_0^t K^{(2)}(t-\tau_1, t-\tau_2)\, x^{(2)}(0)\, d\tau_1 d\tau_2 + \cdots + \int_0^t \cdots \int_0^t K^{(r)}(t-\tau_1, \ldots, t-\tau_r)\, x^{(r)}(0)\, d\tau_1 \cdots d\tau_r + \cdots,
where:
—The vectors
x^{(r)}(0) = \underbrace{x(0) \otimes x(0) \otimes \cdots \otimes x(0)}_{r\ \text{factors}}
have dimension n .
—The weight function matrices K ( r ) ( t τ 1 , , t τ r ) have dimensions ( n × n ) .
We substitute this series into the left- and right-hand sides of (68) and equate the terms with the same powers of x ( 0 ) . As a result, we obtain the following chain of recursive equations for the images of the multidimensional Laplace transform [19,21]:
K^{(1)}(s) = W(s), \qquad K^{(2)}(s_1,s_2) = \tilde{c}_0 \otimes W(s_1+s_2)\, K^{(1)}(s_1) K^{(1)}(s_2), \qquad K^{(3)}(s_1,s_2,s_3) = \tilde{c}_0 \otimes W(s_1+s_2+s_3)\, K^{(1)}(s_1) K^{(2)}(s_2,s_3), \ \ldots, \qquad s_k = \alpha_k + \imath \beta_k.
Here
W(s) = \mathrm{diag}\Big[ \frac{1}{s - w_i},\ i = \overline{1,n} \Big], \qquad W(s_1+s_2) = \mathrm{diag}\Big[ \frac{1}{s_1+s_2-w_i},\ i = \overline{1,n} \Big], \qquad W(s_1+s_2+s_3) = \mathrm{diag}\Big[ \frac{1}{s_1+s_2+s_3-w_i},\ i = \overline{1,n} \Big], \ \ldots
Using the inverse Laplace transform, we pass to the time-varying functions in (71):
K^{(1)}(t) = W(t) = \mathrm{diag}\big[ \exp(w_i t),\ i = \overline{1,n} \big], \qquad K^{(2)}(t,t) = \tilde{c}_0 \otimes W^{(2)}(t,t) = \tilde{c}_0 \otimes \int_0^t \int_0^t W(t-\lambda_1 + t-\lambda_2)\, W(\lambda_1) W(\lambda_2)\, d\lambda_1 d\lambda_2, \qquad K^{(3)}(t,t,t) = \tilde{c}_0 \otimes W^{(3)}(t,t,t) = \tilde{c}_0 \otimes \int_0^t \int_0^t \int_0^t W(t-\lambda_1 + t-\lambda_2 + t-\lambda_3)\, W(t-\lambda_1) \int_0^{\lambda_2} \int_0^{\lambda_3} W(\lambda_2-\tau_2 + \lambda_3-\tau_3)\, W(\tau_2) W(\tau_3)\, d\tau_2 d\tau_3\, d\lambda_1 d\lambda_2 d\lambda_3, \ \ldots
Clearly, all elements of the weight function matrices in (69) are explicitly expressed through the matrix W(t) = \mathrm{diag}[\exp(w_i t),\ i = \overline{1,n}] and the control vector \tilde{c}_0.
Substituting the weight function matrices (73) into (69) gives
x(t) = \Phi_1(t)\, x(0) + \tilde{c}_0 \otimes \big( \Phi_2(t)\, x^{(2)}(0) + \Phi_3(t)\, x^{(3)}(0) + \cdots \big),
where:
\Phi_1(t) = \int_0^t W(\tau)\, d\tau, \qquad \Phi_2(t) = \int_0^t \int_0^t W^{(2)}(\tau_1,\tau_2)\, d\tau_1 d\tau_2, \qquad \Phi_3(t) = \int_0^t \int_0^t \int_0^t W^{(3)}(\tau_1,\tau_2,\tau_3)\, d\tau_1 d\tau_2 d\tau_3.
All matrices in these equalities are square and have dimensions ( n × n ) , and all vectors are n-dimensional; see (70).
Note that the vector x ( t ) linearly depends on the program control c ˜ 0 , and the parameters of this dependence are positive.
Consider a quadratic optimization problem:
J(\tilde{c}_0) = \int_0^T x^\top(t, \tilde{c}_0)\, x(t, \tilde{c}_0)\, dt \Rightarrow \min_{\tilde{c}_0 \geq 0}.
Here the terminal time T is known.
Next, we substitute the expression for x ( t ) into (77), integrate, and perform some trivial transformations. As a result, we obtain:
J(\tilde{c}_0) = x^\top(0) K_{11} x(0) + \tilde{c}_0^\top K[x(0)]\, \tilde{c}_0 + \tilde{c}_0^\top L[x(0)] \Rightarrow \min_{\tilde{c}_0 \geq 0},
where the quadratic and linear form matrices are given by
K[x(0)] = [x^{(2)}(0)]^\top K_{22}\, x^{(2)}(0) + [x^{(3)}(0)]^\top K_{33}\, x^{(3)}(0),
and
L[x(0)] = 2 x^\top(0) K_{12}\, x^{(2)}(0) + 2 x^\top(0) K_{13}\, x^{(3)}(0),
respectively. In the equalities above,
K_{11} = \int_0^T \Phi_1^\top(t) \Phi_1(t)\, dt, \quad K_{22} = \int_0^T \Phi_2^\top(t) \Phi_2(t)\, dt, \quad K_{33} = \int_0^T \Phi_3^\top(t) \Phi_3(t)\, dt, \quad K_{12} = \int_0^T \Phi_1^\top(t) \Phi_2(t)\, dt, \quad K_{13} = \int_0^T \Phi_1^\top(t) \Phi_3(t)\, dt.
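Since W(t) is diagonal, the matrices Φ_1(t) and K_11 can be evaluated entry by entry. A minimal sketch with hypothetical w_i and horizon T, assuming W(t) = diag[exp(w_i t)] as in (73):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical data: the w_i of the model and the optimization horizon T.
w = np.array([0.4, 0.7])
T = 2.0

def phi1(t, wi):
    # i-th diagonal entry of Phi_1(t) = int_0^t exp(wi*tau) dtau
    return (np.exp(wi * t) - 1.0) / wi

# K_11 = int_0^T Phi_1'(t) Phi_1(t) dt, evaluated per diagonal entry.
K11 = np.diag([quad(lambda t, wi=wi: phi1(t, wi) ** 2, 0.0, T)[0] for wi in w])
print(K11)
```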
Clearly, the matrix K [ x ( 0 ) ] (78) has non-negative elements. The gradient of the function (77) is given by
\nabla_{\tilde{c}_0} J(\tilde{c}_0) = 2 K[x(0)]\, \tilde{c}_0 + L[x(0)].
For calculating the minimum of (77), we apply the gradient projection recursive formula:
\tilde{c}_0(k+1) = \begin{cases} \tilde{c}_0(k) - \gamma\, \nabla_{\tilde{c}_0} J(\tilde{c}_0(k)), & \text{if } \tilde{c}_0(k+1) \geq 0, \\ 0, & \text{if } \tilde{c}_0(k+1) < 0. \end{cases}
The parameter γ is chosen to guarantee the convergence of this algorithm [22].
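A minimal sketch of the projection-type iteration (83) for a quadratic criterion of the form (78); the matrices K and L below are hypothetical stand-ins for K[x(0)] and L[x(0)], and the projection onto the non-negative orthant is realized by componentwise clipping.

```python
import numpy as np

# Hypothetical quadratic-form data standing in for K[x(0)] and L[x(0)] in (78);
# K is symmetric positive definite here, so the iteration converges for small gamma.
K = np.array([[2.0, 0.3],
              [0.3, 1.5]])
Lvec = np.array([-1.0, -0.5])

def grad_J(c):
    # Gradient (82): 2 K c + L
    return 2.0 * K @ c + Lvec

c = np.zeros(2)              # start at the origin of the non-negative orthant
gamma = 0.05                 # step size chosen small enough for convergence
for _ in range(500):
    c = np.maximum(c - gamma * grad_J(c), 0.0)   # gradient step + projection (83)

print("quasi-optimal program control:", c)
```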
Now consider the case of linear feedback control:
u ( x ) = C x , C > 0 .
The equation of the controlled DSEO (68) reduces to
x(t) = M_0^t(W)\, x(0) + \int_0^t M_{\tau}^{t}(W)\, x^{(2)}(\tau) \otimes C x(\tau)\, d\tau.
Using the procedure described above, we get the following images of the weight function matrices of the series (69):
K^{(1)}(s) = W(s), \qquad K^{(2)}(s_1,s_2) = 0, \qquad K^{(3)}(s_1,s_2,s_3) = K^{(1)}(s_1) K^{(1)}(s_2) K^{(1)}(s_3)\, C.
The inverse Laplace transform finally yields
K^{(1)}(t) = W(t) = \mathrm{diag}\big[ \exp(w_i t),\ i = \overline{1,n} \big], \qquad K^{(2)}(t_1,t_2) = 0, \qquad K^{(3)}(t_1,t_2,t_3) = W(t_1) W(t_2) W(t_3)\, C, \ \ldots
Thus, the solution of (62) has the form
x(t) = \int_0^t W(t-\tau)\, x(0)\, d\tau + \int_0^t \int_0^t \int_0^t W(t-\tau_1) W(t-\tau_2) W(t-\tau_3)\, C\, x^{(3)}(0)\, d\tau_1 d\tau_2 d\tau_3 + \cdots,
where
x ( 3 ) ( 0 ) = { x 1 3 ( 0 ) , , x n 3 ( 0 ) } .
For choosing the control matrix C , we adopt the criterion (77). In the case under consideration, it depends on the matrix C:
J(C) = \int_0^T x^\top(t, C)\, x(t, C)\, dt \Rightarrow \min_{C \geq 0},
where x ( t ) is given by (87). Therefore,
J(C) = x^\top(0) K_{11} x(0) + C^\top R[x^{(3)}(0)]\, C + 2 C^\top M[x(0)],
where
K_{11} = \int_0^T \Phi_1^\top(t) \Phi_1(t)\, dt, \quad K_{33} = \int_0^T \Phi_3^\top(t) \Phi_3(t)\, dt, \quad R[x^{(3)}(0)] = [x^{(3)}(0)]^\top K_{33}\, x^{(3)}(0), \quad M[x(0)] = x^\top(0) K_{13}\, x^{(3)}(0), \quad K_{13} = \int_0^T \Phi_1^\top(t) \Phi_3(t)\, dt.
Up to a constant factor, the gradient of the criterion (90) with respect to the matrix C is calculated as
\nabla_C J(C) \cong R[x^{(3)}(0)]\, C + M[x(0)].
The minimum of (90) can be found numerically by the gradient projection recursive formula:
C(k+1) = \begin{cases} C(k) - \beta\, \nabla_C J(C(k)), & \text{if } C(k+1) \geq 0, \\ 0, & \text{if } C(k+1) < 0. \end{cases}
The parameter β is chosen to guarantee the convergence of this algorithm [22].

6. Control of Stochastic Flows of Regional Foreign Direct Investment (FDI)

Consider a regional system consisting of n regions exchanging investments [23]. Each region i has an investment appeal x i ( t ) , i = 1 , n ¯ , measured in the total cost of all projects implemented in it. Each region invests in its own projects and projects implemented in other regions (foreign direct investment, FDI).
The region’s investment appeal changes due to two processes. On the one hand, it decreases over time since the projects proposed for investment are “aging.” On the other, it grows as the result of FDI. Assume that the regions exchange their FDI stochastically and in portions.
The relative rate of change in the region’s investment appeal, v i = x i ˙ / x i , is proportional to the difference between the aging Φ i ( t ) and renewal Ψ i ( t ) functions of investment projects:
v_i(t) = -\Phi_i(t) + \Psi_i(t), \qquad i = \overline{1,n}.
The aging function is linear:
\Phi_i(t) = w_i + s_i Y_i(t), \qquad i = \overline{1,n},
where w_i and s_i are positive constants.
The renewal function is characterized by the distribution of the FDI flows. Since the FDI exchange is stochastic and occurs in portions, the locally stationary distribution of the FDI flows Y = [y_{ij} \mid (i,j) = \overline{1,n}] is described by the controlled entropy operator
H(Y) = -\sum_{(i,j)=1}^{n} y_{ij} \ln \frac{y_{ij}}{e\, a_{ij}} \Rightarrow \max, \qquad \sum_{j=1}^{n} y_{ij} = u_i, \quad i = \overline{1,n},
where A = [ a i j | ( i , j ) = 1 , n ¯ ] denotes the prior probability matrix, e = 2.71 , and u = { u 1 , , u n } is the control vector.
The entropy-optimal distribution of the FDI flows has the form
y_{ij}^{*} = a_{ij}\, u_i, \qquad y_{ji}^{*} = a_{ji}\, u_j, \qquad (i,j) = \overline{1,n}, \quad i \neq j,
subject to the constraint
\sum_{j=1}^{n} a_{ij} = 1 \quad \text{for all } i = \overline{1,n}.
The total FDI to region i is calculated as
\Psi_i(t) = \sum_{j=1}^{n} a_{ji}\, u_j(t), \qquad i = \overline{1,n}.
Combining Equations (94), (95) and (99), we obtain the following equation for the controlled DSEO:
\frac{dy(t)}{dt} = y(t) \otimes \big( -w - S y(t) + A u(t) \big),
where w = { w 1 , , w n } , S = diag [ s i | i = 1 , n ¯ ] , y = { y 1 , , y n } , and
A = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{n1} \\ a_{12} & a_{22} & \cdots & a_{n2} \\ \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{nn} \end{pmatrix}.
Nonzero diagonal elements of this matrix mean that regions invest part of their financial resources to increase their investment appeal.
The integral equation corresponding to (100) has the form
y(t) = M_0^t(W)\, y(0) + \int_0^t M_{\tau}^{t}(W) \big( S y^{(2)}(\tau) + y(\tau) \otimes A u(\tau) \big)\, d\tau,
where the matricant is
M_0^t(W) = \begin{pmatrix} \exp(-w_1 t) & & 0 \\ & \ddots & \\ 0 & & \exp(-w_n t) \end{pmatrix}.

6.1. Equilibria in the Stochastic FDI Exchange System

Consider the case of program control  u = c 0 , where c 0 is a real positive vector. In FDI terms, this vector specifies a fixed distribution of FDI among the regions.
The equation of the stochastic FDI exchange system with program control takes the form
\frac{dy(t)}{dt} = y(t) \otimes \big( -w - S y(t) + A c_0 \big).
A unique nonzero singular point of (104) is calculated as
y^{*} = S^{-1} ( A c_0 - w ) \geq 0.
Hence, the program control must satisfy the inequality
c_0 \geq [A]^{-1} w \geq 0.
Now consider the case of linear feedback control:
u = C y .
In this case, the investment appeal equation reduces to
\frac{dy(t)}{dt} = y(t) \otimes \big( -w - D y(t) \big), \qquad D = S - AC.
Its unique nonzero singular point is given by
y^{*} = (S - AC)^{-1} w = [\, S ( I - S^{-1} A C ) \,]^{-1} w \geq 0,
where I is the identity matrix.
Suppose that, in terms of norms,
\| S^{-1} A C \| \leq 1.
Then
\| (S - AC)^{-1} \| \leq \| S \| + \| A C \|.
According to (109) and (111), the control matrix must satisfy
C \geq [A]^{-1} S.
The linear inequality (112) determines the set of admissible control matrices. Notice that some elements of the control matrix C may be negative.
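A two-dimensional numerical illustration of this point (the matrices A and S below are hypothetical):

```python
import numpy as np

# Hypothetical 2x2 example illustrating (112): the bound (A)^{-1} S can have
# negative entries, so admissible feedback matrices C need not be non-negative.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
S = np.diag([0.5, 0.8])
print(np.linalg.inv(A) @ S)   # the off-diagonal entries come out negative here
```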

6.2. Optimization of Stochastic FDI Exchange

We apply the general optimization procedure of controlled DSEOs (see above) to the stochastic FDI exchange process. We optimize the control function on a time horizon T = [ 0 , T ] by the integral quadratic criterion
J(u \mid y(0)) = \int_0^T y^\top(t, u(t) \mid y(0))\, y(t, u(t) \mid y(0))\, dt.
1. Program control u ( t ) = c 0 . In this case, the criterion takes the form
J(c_0 \mid y(0)) = \int_0^T y^\top(t, c_0 \mid y(0))\, y(t, c_0 \mid y(0))\, dt,
and the integral equation describing the stochastic FDI exchange process (see (104)) reduces to
y(t) = M_0^t(W)\, y(0) + \int_0^t M_{\tau}^{t}(W) \big( S y^{(2)}(\tau) + \tilde{A}\, y(\tau) \big)\, d\tau,
where
\tilde{A} = \mathrm{diag}\big[ \langle a_i, c_0 \rangle \mid i = \overline{1,n} \big],
and the matricant is given by
M_{\tau}^{t}(W) = \mathrm{diag}\big[ \exp(-(t-\tau) w_i) \mid i = \overline{1,n} \big].
We find the solution by expanding into a functional power series:
y(t) = \int_0^t K^{(1)}(t-\tau)\, y(0)\, d\tau + \int_0^t \int_0^t K^{(2)}(t-\tau_1, t-\tau_2)\, y^{(2)}(0)\, d\tau_1 d\tau_2 + \cdots
In terms of the multidimensional Laplace transform, the images of the weight function matrices are written as
K^{(1)}(s) = W(s) \big( I - W(s) \tilde{A} \big)^{-1}, \qquad K^{(2)}(s_1,s_2) = W(s_1+s_2)\, S\, K^{(1)}(s_1) K^{(1)}(s_2), \ \ldots,
where
W(s) = \mathrm{diag}\Big[ \frac{1}{s + w_i} \,\Big|\, i = \overline{1,n} \Big], \qquad W(s_1+s_2) = \mathrm{diag}\Big[ \frac{1}{s_1+s_2+w_i} \,\Big|\, i = \overline{1,n} \Big].
Consider the first approximation to the solution, characterized by the matrix K ( 1 ) ( s ) . Due to (119), this matrix is diagonal with the elements
K_{ii}^{(1)}(s) = \frac{1}{s + w_i - \langle a_i, c_0 \rangle}, \qquad i = \overline{1,n},
where a i denotes the ith row of the matrix A .
The weight functions of the first approximation have the form
k_{ii}^{(1)}(t) = \exp\big( -( w_i - \langle a_i, c_0 \rangle )\, t \big), \qquad i = \overline{1,n}.
In view of (119), the weight matrix of the second approximation is given by
k_{ii}^{(2)}(t_1,t_2) = s_{ii} \exp\big( -( w_i - \langle a_i, c_0 \rangle )( t_1 + t_2 ) \big), \qquad i = \overline{1,n}.
Since all weight function matrices are square and diagonal, we arrive at
J_1(c_0 \mid y(0)) = \sum_{i=1}^{n} \int_0^T \exp\big( -2 ( w_i - \langle a_i, c_0 \rangle )\, t \big)\, y_i^2(0)\, dt,
expressing the first approximation of J 1 depending on the variable c 0 (program control). An optimization problem for the program control c 0 can be stated as follows:
J_1(c_0 \mid y(0)) \Rightarrow \min, \qquad c_0 \geq (A)^{-1} w \geq 0.
2. Linear feedback control u = C y . In this case, the criterion takes the form
J(C \mid y(0)) = \int_0^T y^\top(t, C \mid y(0))\, y(t, C \mid y(0))\, dt,
and the integral equation describing the stochastic FDI exchange process (see (104)) reduces to
y(t) = M_0^t(W)\, y(0) + \int_0^t M_{\tau}^{t}(W) \big( S y^{(2)}(\tau) + y(\tau) \otimes A C y(\tau) \big)\, d\tau,
where the matricant M_{\tau}^{t}(W) is given by (117).
The solution of this equation can be constructed in the form (118). Similar to the previous case, we apply the multidimensional Laplace transform and equate the terms with the same powers of y ( 0 ) . As a result, we obtain the following chain of recursive equations for the images of the weight function matrices of the series (118):
K^{(1)}(s) = W(s), \qquad K^{(2)}(s_1,s_2) = W(s_1+s_2)\, K^{(1)}(s_1) K^{(1)}(s_2)\, ( S + A C ), \ \ldots
Up to the quadratic term, the solution of (127) has the image
Y(s_1,s_2) \cong W(s_1)\, y(0) + W(s_1+s_2)\, W(s_1) W(s_2)\, ( S + A C )\, y^{(2)}(0).
The approximate solution of (127) is calculated as
y(t) \cong \int_0^t W(t-\tau)\, d\tau\; y(0) + \int_0^t \int_0^t W(t-\tau_1-\tau_2)\, W(t-\tau_1) W(t-\tau_2)\, d\tau_1 d\tau_2\; ( S + A C )\, y^{(2)}(0),
where
W(t-\tau) = \mathrm{diag}\big[ \exp(-w_i (t-\tau)) \mid i = \overline{1,n} \big], \qquad W(t-\tau_1-\tau_2) = \mathrm{diag}\big[ \exp(-w_i (t-\tau_1-\tau_2)) \mid i = \overline{1,n} \big].
Obviously, the control matrix C appears in the second term only (and even linearly). Therefore, up to constant terms, the criterion J(C \mid y(0)) (126) is a quadratic form of the control matrix:
J(C \mid y(0)) \cong [y(0)]^\top \tilde{W} C\, y^{(2)}(0) + [y^{(2)}(0)]^\top \tilde{S} \tilde{W} C\, y^{(2)}(0) + [y^{(2)}(0)]^\top C^\top \tilde{A} C\, y^{(2)}(0),
where:
\tilde{W} = [W^{(1)}]^\top W^{(2)} A, \quad \tilde{S} = S \tilde{W}, \quad \tilde{A} = A^\top [W^{(1)}]^\top W^{(2)} A, \quad W^{(1)} = \int_0^T \int_0^t W(t-\tau)\, d\tau\, dt, \quad W^{(2)} = \int_0^T \int_0^t \int_0^t W(t-\tau_1-\tau_2)\, W(t-\tau_1) W(t-\tau_2)\, d\tau_1 d\tau_2\, dt.
The optimal linear feedback control problem can be written as
J(C \mid y(0)) \Rightarrow \min_{C > 0}, \qquad ( -S + A C )\, w \geq 0.
This is a quadratic programming problem with linear constraints. It can be solved by standard algorithms using the matrix gradient of the criterion (132) [24].

7. Simulation of FDI-Exchange

Consider the FDI exchange process between three countries: China (1), France (2), and USA (3). Assume that the exchange is adequately described by the stochastic model from Section 6 and that the linear part of the aging function is absent. Table 1 shows the data on the investment attractiveness of these countries in terms of shares of the total number of investment projects.
1. Program control u = c 0 . In this case, the system equations take the form
\frac{dy_i(t)}{dt} = y_i(t) \Big( -w_i + \sum_{j=1}^{3} a_{ji}\, c_j \Big), \qquad i = \overline{1,3}.
The notations are as follows: c 1 , c 2 , and c 3 are the components of the program control vector; A = [ a i j | ( i , j ) = 1 , 3 ¯ ] is the prior probability matrix; the vector w > 0 characterizes the aging function of investment projects. We choose the following numerical values of the parameters:
A = \begin{pmatrix} 0.2 & 0.4 & 0.4 \\ 0.3 & 0.4 & 0.3 \\ 0.1 & 0.7 & 0.2 \end{pmatrix}, \qquad w = \begin{pmatrix} 0.6 \\ 0.7 \\ 0.8 \end{pmatrix}.
The row sums of the prior probability matrix are equal to 1.
Without program control, the regional FDI flows quickly fade away, approaching the zero singular point. At the zero singular point, the investment attractiveness of the regions is zero.
Therefore, when optimizing program control, it is necessary to take into account the conditions for the existence of a nonzero singular point:
-w + A^\top c_0 \geq 0.
The optimal program control problem is written as
J_1(c_0, y(0)) = \sum_{i=1}^{3} y_i^2(0) \int_0^T k_{ii}^2(t, c_0)\, dt \Rightarrow \min,
c_0 \geq 0, \qquad c_0 \geq [A^\top]^{-1} w = \begin{pmatrix} 1.238 \\ 1.338 \\ 0.462 \end{pmatrix},
where T = 4 and
k_{ii}(t, c_0) = \exp\Big( -\Big( w_i - \sum_{j=1}^{3} a_{ji}\, c_{0j} \Big) t \Big), \qquad i = \overline{1,3}.
The values of the initial conditions y i ( 0 ) , ( i = 1 , 2 , 3 ) are taken from Table 1, 2016.
The solution of the problem (139) takes the form:
c_0^{*} = \{ 1.238;\ 1.338;\ 0.248 \}.
As we can see, the solution of this problem lies in the feasible region. The optimal program control provides the following nonzero equilibrium values of the regional investment attractiveness:
y^{*} = \{ 0.07;\ 0.50;\ 0.14 \}.
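The program-control model (134) with the data (136), the control (141), and the 2016 initial conditions from Table 1 can also be integrated directly, as in the sketch below (shown only to illustrate the model; it is not the computation behind Table 2, which uses the approximation scheme of Section 6.2).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Data (136), control (141), 2016 initial conditions from Table 1 (CN, FR, USA).
A  = np.array([[0.2, 0.4, 0.4],
               [0.3, 0.4, 0.3],
               [0.1, 0.7, 0.2]])
w  = np.array([0.6, 0.7, 0.8])
c0 = np.array([1.238, 1.338, 0.248])
y0 = np.array([0.19, 0.20, 0.38])

def rhs(t, y):
    # dy_i/dt = y_i * (-w_i + sum_j a_ji * c_j), see (134)
    return y * (-w + A.T @ c0)

sol = solve_ivp(rhs, (0.0, 3.0), y0, t_eval=np.linspace(0.0, 3.0, 4))
print(sol.t)
print(np.round(sol.y, 3))
```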
Table 2 shows the calculated investment attractiveness of these countries in the interval 2016–2019 (initial conditions from 2016) under the optimal program control (141). The last row of the table contains the values of the relative mean square error between the data in Table 1 (z) and Table 2 (y):
\delta_i = \frac{ \sum_{t=1}^{3} \big( z_t^{(i)} - y_t^{(i)} \big)^2 }{ \sum_{t=1}^{3} [ z_t^{(i)} ]^2 + \sum_{t=1}^{3} [ y_t^{(i)} ]^2 }.
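In code, the error measure (143) is a one-liner; the arrays below are illustrative placeholders, not the table columns.

```python
import numpy as np

def relative_error(z, y):
    # Relative mean square error (143) for one region:
    # sum of squared differences over the sum of squared values.
    z, y = np.asarray(z, dtype=float), np.asarray(y, dtype=float)
    return np.sum((z - y) ** 2) / (np.sum(z ** 2) + np.sum(y ** 2))

# Hypothetical yearly series for one region (observed vs. modeled).
print(relative_error([0.30, 0.20, 0.25], [0.22, 0.18, 0.20]))
```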
So, we can see that the use of program control leads to rather significant errors when compared with real data.
2. Linear feedback control u = C y . In this case, the system equations take the form
\frac{dy_i(t)}{dt} = y_i(t) \Big( -w_i + \sum_{k=1}^{3} \tilde{c}_{ik}\, y_k \Big), \qquad i = \overline{1,3},
where
\tilde{c}_{ik} = \sum_{j=1}^{3} a_{ji}\, c_{jk}, \qquad \tilde{C} = A^\top C.
Here c i k , ( i , k ) = 1 , 3 ¯ are elements of the control matrix. The numerical values of the parameters are given by (136).
The non-negativity condition for a singular nonzero point has the form:
-w + \tilde{C} y \geq 0.
The solution of Equation (144), in the form (130), is
y(t) = \mathrm{diag}\Big[ \frac{1 - \exp(-w_i t)}{w_i} \Big] y(0) + \mathrm{diag}\Big[ \frac{ \big( 1 - \exp(-3 w_i t) \big)\big( \exp(-3 w_i t) - 1 \big) }{ 9 w_i } \Big] \tilde{C}\, y^{(2)}(0).
The diagonal matrices here are of dimensions (3 \times 3).
The optimal linear feedback control problem is written as
J(C, y(0)) = \sum_{t=1}^{3} y^\top(t, \tilde{C})\, y(t, \tilde{C}) \Rightarrow \min, \qquad \tilde{C}^{-1} w \geq 0.
To solve this problem, a random search algorithm was used, with inversion of the matrix \tilde{C} and checking of the feasible-region boundary at each step. The calculation results are presented in Table 3.
A comparison of Table 2 and Table 3 shows that the mathematical model of a positive DSEO with feedback control describes the dynamics of investment attractiveness in the three-region system more adequately.

8. Conclusions

A positive dynamical system with an entropy operator was considered. A theorem was proved stating that the trajectories of the system remain in the non-negative orthant of the state space for initial conditions taken from this orthant. Conditions for the existence, uniqueness, and localization of a nonzero singular point were obtained under program control and feedback control. Applying the linear majorant of the entropy operator, the boundaries of the localization region were obtained.
The problem of stability of singular points was investigated. It was shown that, in the absence of control, the zero singular point is unstable and a nonzero singular point does not exist. Conditions for the existence and stability of a nonzero singular point were obtained under program control and feedback control.
A method for the synthesis of quasi-optimal control was developed using the integral quadratic quality functional and the conditions for the existence of a nonzero singular point.
The developed methods were applied to study the dynamic properties and to optimize the distribution system of regional flows of foreign direct investment. The advantages of using feedback control were demonstrated.
It should be noted that the proposed method for studying program control and feedback control is based on sequential approximation of control using functional series. Therefore, the question of estimates of the accuracy of the approximation to the optimal solution remains open.

Funding

This work was supported by the Russian Science Foundation, project No. 21-11-00202.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Popkov, Y.S. Mathematical Demoeconomy. Integrating Demographic and Economic Approaches; De Gruyter: Vienna, Austria, 2014. [Google Scholar]
  2. Shvetsov, V.I.; Helbing, D. Macroscopic Dynamics of Multilane Traffic. Phys. Rev. E 1999, 59, 6328–6339. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  3. Gasnikov, A.V.; Dorn, Y.V.; Nesterov, Y.E.; Shpirko, S.V. On the Three-Stage Version of Stable Dynamic Model of Traffic Flows. Mat. Model. 2014, 26, 34–70. (In Russian) [Google Scholar]
  4. Van Wissen, L.; Popkov, Y.S. Positive Dynamic Systems with Entropy Operator (Application to Labour Market Modelling). Eur. J. Oper. Res. 2006, 174, 1368–1379. [Google Scholar]
  5. Weidlich, W.; Popkov, Y.S.; Shvetsov, V.I. Settlement Formation Models with Entropy Operator. Ann. Reg. Sci. 1998, 32, 267–294. [Google Scholar]
  6. Leble, S.B.; Vereshchagin, S.D.; Vereshchagina, I.S. Algorithm for the Diagnostics of Waves and Entropy Mode in the Exponentially Stratified Atmosphere. Russ. J. Phys. Chem. B 2020, 14, 371–376. [Google Scholar] [CrossRef]
  7. Luckhaus, S.; Plotnikov, P.I. Entropy Solutions to the Buckley–Leverett Equations. Sib. Math. J. 2000, 41, 329–348. [Google Scholar] [CrossRef]
  8. Popkov, Y.S.; Rublev, M.V. Dynamic Procedures of Image Reconstruction from Projections Computer Tomography. Autom. Remote Control 2006, 67, 233–241. [Google Scholar] [CrossRef]
  9. Danaev, A.V.; Rusanov, V.A.; Sharshinskii, D.Y. The Entropy Maximum Principle in the Structural Identification of Dynamic Systems: An Analytical Approach. Izvestiya VUZov. Matematika 2005, 11, 16–24. (In Russian) [Google Scholar]
  10. Wilson, A.G. Catastrophe Theory and Bifurcation (Application to Urban and Regional Systems); Croom Helm: London, UK, 1981. [Google Scholar]
  11. Bobylev, N.A.; Popkov, A.Y. Forced Oscillations in Systems with Argmin Type Operators. Autom. Remote Control 2002, 63, 1707–1716. [Google Scholar] [CrossRef]
  12. Antipin, A.S. The Differential Controlled Gradient Method for Symmetric Extremal Mappings. Differ. Equations 1998, 34, 1020–1030. [Google Scholar]
  13. Popkov, Y.S.; Rublev, M.V. Estimation of a Local Lipschitz Constant of the Bq-Entropy Operator. Autom. Remote Control. 2005, 66, 1069–1080. [Google Scholar] [CrossRef]
  14. Krasnoselskii, M.A.; Vainikko, G.M.; Zabreiko, P.P.; Rutitskii, Y.B.; Stetsenko, V.Y. Priblizhennye Resheniya Operatornykh Uravnenii (Approximate Solutions of Operator Equations); Nauka: Moscow, Russia, 1969. (In Russian) [Google Scholar]
  15. Popkov, Y.S. Upper Bound Design for the Lipschitz Constant of the FG(ν,q)-entropy Operator. Mathematics 2018, 6, 73. [Google Scholar] [CrossRef] [Green Version]
  16. Beckenbach, E.F.; Bellman, R. Inequalities; Springer: Berlin/Heidelberg, Germany, 1961; 276p. [Google Scholar]
  17. Malkin, I.G. Some Problems in the Theory of Nonlinear Oscillations; U.S. Atomic Energy Commission, Technical Information Service: Washington, DC, USA; State Pub. House of Technical and Theoretical Literature: Moscow, Russia, 1959. [Google Scholar]
  18. Volterra, V. Theory of Functionals and Integral and Integro-Differential Equations; Dover Publications: Mineola, NY, USA, 1959. [Google Scholar]
  19. Van Trees, H.L. Synthesis of Optimal Nonlinear Control Systems; The MIT Press: Cambridge, MA, USA, 1963. [Google Scholar]
  20. Ogunfunmi, T. Adaptive Nonlinear System Identification: The Volterra and Wiener Approaches; Springer US: New York, NY, USA, 2007. [Google Scholar] [CrossRef]
  21. Debnath, L.; Bhatta, D. Integral Transforms and Their Applications, 2nd ed.; Chapman & Hall/CRC: New York, NY, USA, 2006. [Google Scholar]
  22. Polyak, B.T. Introduction to Optimization; Optimization Software, Publications Division: New York, NY, USA, 1987. [Google Scholar]
  23. Popkov, Y.S. Equilibria and stability of one class of positive dynamic systems with entropy operator: Application to investment dynamics modeling. Mathematics 2020, 8, 859. [Google Scholar] [CrossRef]
  24. Magnus, J.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics, 3rd ed.; John Wiley and Sons: Chichester, UK, 2007. [Google Scholar]
Table 1. Investment attractiveness.

Year    CN      FR      USA
2016    0.19    0.20    0.38
2017    0.32    0.11    0.19
2018    0.19    0.13    0.33
2019    0.22    0.22    0.20
Table 2. Calculated investment attractiveness (program control).

Year    CN      FR      USA
2016    0.19    0.20    0.38
2017    0.15    0.09    0.25
2018    0.10    0.17    0.30
2019    0.07    0.20    0.28
δ       0.383   0.292   0.351
Table 3. Calculated investment attractiveness (feedback control).

Year    CN      FR      USA
2016    0.19    0.20    0.38
2017    0.29    0.09    0.23
2018    0.21    0.15    0.27
2019    0.20    0.22    0.25
δ       0.089   0.054   0.063


