Article

Fundamentals of Synthesized Optimal Control

Askhat Diveev, Elizaveta Shmalko, Vladimir Serebrenny and Peter Zentay
1 Federal Research Center “Computer Science and Control” of the Russian Academy of Sciences, 119333 Moscow, Russia
2 Department of Robotic Systems and Mechatronics, Bauman Moscow State Technical University, 105005 Moscow, Russia
3 Faculty of Mechanical Engineering, Budapest University of Technology and Economics, 1111 Budapest, Hungary
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(1), 21; https://doi.org/10.3390/math9010021
Submission received: 3 November 2020 / Revised: 16 December 2020 / Accepted: 17 December 2020 / Published: 23 December 2020
(This article belongs to the Special Issue Control, Optimization, and Mathematical Modeling of Complex Systems)

Abstract

This paper presents a new formulation of the optimal control problem with uncertainty, in which an additive bounded function is considered as the uncertainty. The purpose of the control is to ensure that the terminal conditions are reached with the optimal value of the quality functional, while the uncertainty has a limited impact on the change in the value of the functional. The article introduces the concept of feasibility of the mathematical model of the object, which is associated with the contraction property of mappings if the model of the object is considered as a one-parameter mapping. It is shown that this property is sufficient for the development of stable practical systems. To find a solution of the stated problem that ensures the feasibility of the system, the synthesized optimal control method is proposed. This article formulates the theoretical foundations of synthesized optimal control. The method consists in making the control object stable relative to some point in the state space and then controlling the object by changing the position of the equilibrium points. The article provides evidence that this approach is insensitive to the uncertainties of the mathematical model of the object. An example of the application of the method to the optimal control of a group of robots is given. A comparison of the synthesized optimal control method with the direct method on the model with and without disturbances is presented.

1. Introduction

Object control in the classical mathematical sense consists in changing the right-hand sides of the differential equations describing the mathematical model of the control object by means of the control vector included in them. Thus, the problem of optimal control [1] consists in finding a control function of time that makes the required changes in the right-hand sides of the model of the control object so that, for given initial conditions, the partial solution of the system of differential equations achieves the control goal with the optimal value of the quality criterion.
There are two main directions for solving the optimal control problem: direct and indirect approaches. The indirect approach, based on Pontryagin's maximum principle [2,3,4], solves the optimal control problem by formulating it as a boundary-value problem, in which it is necessary to find the initial conditions of a system of differential equations for the conjugate variables. Its optimal solution is highly accurate but very sensitive to the formulation of the additional conditions that the control must satisfy, along with ensuring the maximum of the Hamiltonian; these conditions are generally very difficult to set in practice for problems with complex phase constraints. The direct approach reduces the optimal control problem to a nonlinear programming problem [5,6,7], which provides the transition from an optimization problem in an infinite-dimensional space to one in a finite-dimensional space, so it is more convenient and can be readily solved within a wider convergence region.
However, these works generally focus on the nominal trajectory performance without considering possible uncertainties. In practice, the right-hand sides of the models objectively contain uncertainties of various natures. As a rule, they are not taken into account, but the presence of such uncertainties can lead to the loss of optimality of the obtained control.
There are also approaches in which the impact of uncertainties is taken into account beforehand, during the reference trajectory design [8,9]. For example, desensitized optimal control [10] modifies the nominal optimal trajectory so that it is less sensitive to uncertain parameters. This involves constructing an appropriate sensitivity cost which, when penalized, yields solutions that are relatively insensitive to parametric uncertainties.
However, in practice, such solutions do not guarantee stability and still require the construction of a feedback stabilization control system to eliminate errors [8].
In control theory, there is the field of robust control [11,12,13,14], which provides a certain stability margin for the control system. Robust control methods generally move the eigenvalues of the linearized system as far as possible to the left of the imaginary axis of the complex plane, so that uncertainties and perturbations do not make the system unstable. These methods are not aimed at solving the optimal control problem.
In practical control system design, the existing uncertainties of the mathematical model of the object, which subsequently cause a discrepancy between the real trajectory of the object and the obtained optimal one, are compensated by the synthesis of a feedback motion stabilization system relative to the optimal trajectory [8,15,16,17]. However, the construction of the stabilization system changes the mathematical model of the object, and the received control might not be optimal for the new model.
In this paper, uncertainties are included in the problem statement as an additive bounded function, and the optimal control problem is supposed to be solved after ensuring the stability of the plant in the state space. This approach is called the method of synthesized optimal control. A control function is found such that the system of differential equations always has a stable equilibrium point in the state space. At the same time, the control system contains parameters that affect the position of the equilibrium point. Consequently, the object is controlled by changing the position of the equilibrium point. In this paper, it is shown that such control can also provide the required value of the quality criterion, while the mathematical model of the control object turns out to be insensitive to the existing uncertainties and external disturbances. The approach of synthesized optimal control is new, but we have already obtained good experimental results [18,19] confirming the effectiveness of such control. In this paper, we provide mathematical formulations of the approach and give a theoretical substantiation of the efficiency of synthesized optimal control. A comparative numerical example is given of solving the problem of optimal control of two robots under phase constraints by the indirect method of synthesized optimal control and by the direct method based on piecewise linear approximation.

2. Problem Statement

The mathematical model of the control object with uncertainty is given as

\dot{x} = f(x, u) + y(t),   (1)

where x ∈ R^n, u ∈ U ⊆ R^m, U is a compact set, m ≤ n, and y(t) ∈ R^n is an uncertainty function,

y^- \le y(t) \le y^+,   (2)

where y^-, y^+ are given constant vectors.
Initial conditions are set:

x(0) = x_0.   (3)

The terminal condition is set:

x(t_f) = x_f,   (4)

where the time t_f of hitting the terminal condition is not given but is bounded,

t_f \le t^+,   (5)

and t^+ is a given positive value.
The functional is given:

J = \int_0^{t_f} f_0(x(t), u(t))\, dt + p_1 \| x_f - x(t_f) \| \to \min_{u(\cdot) \in U},   (6)

where p_1 is a given positive value.
It is necessary to find a control function

u = h(x, t)   (7)

such that for any partial solution

x(t, x_0)   (8)

of the system

\dot{x} = f(x, h(x, t)) + y(t)   (9)

from the initial conditions (3), for any uncertainty function (2), the value of the functional (6) satisfies the inequality

J(x(t, x_0), y(t)) \le J(x(t, x_0), 0) + \Delta_y,   (10)

where J(x(t, x_0), y(t)) is the value of the functional (6) for the solution (8) with the perturbation (2), J(x(t, x_0), 0) is the value of the functional (6) for the same solution (8) without perturbations, y(·) ≡ 0, and Δ_y is a given positive value.
Among possible solutions of the form (7), we consider only those that possess the following property. Let x(t, x_0) be some partial solution of the system (9) with y(t) ≡ 0, and let J(0) be the value of the functional (6) for it. Let us denote

\tilde{x}(t) = x(t, x_0) + \tilde{z}(t),   (11)

\tilde{\tilde{x}}(t) = x(t, x_0) + \tilde{\tilde{z}}(t),   (12)

and

\tilde{\delta} = \max_{t \in [0; t_f]} \| x(t, x_0) - \tilde{x}(t) \|,   (13)

\tilde{\tilde{\delta}} = \max_{t \in [0; t_f]} \| x(t, x_0) - \tilde{\tilde{x}}(t) \|.   (14)

Then there exists δ̃ > 0 such that for all δ̃̃ ≤ δ̃ the condition

\tilde{\tilde{\Delta}} \le \tilde{\Delta}   (15)

is met, where

\tilde{\Delta} = | J(x(t, x_0), 0) - J(\tilde{x}(t), 0) |,   (16)

\tilde{\tilde{\Delta}} = | J(x(t, x_0), 0) - J(\tilde{\tilde{x}}(t), 0) |.   (17)

The condition (15) is called the continuous dependence of the functional on perturbations.
The goal is to look for solutions of the form (7) that satisfy the condition (15).

3. Theoretical Background and Justifications for the Synthesized Optimal Control Method

Problems with uncertainties are often considered in optimal control, since this issue is relevant to the practical implementation of the obtained systems. As a rule, uncertain parameters of the right-hand sides or of the initial conditions are considered as uncertainties, or some random perturbations are introduced. The main direction of solving problems with perturbations is to ensure the stability of the obtained solution. So, first, the problem of optimal control is solved without uncertainties, and then, using a stabilization system, an attempt is made to ensure the stability of motion relative to the optimal trajectory. In fact, the creation of a stabilization system is an attempt to ensure the Lyapunov stability of the differential equation solution.
Theorem 1.
For the condition (10) to hold, it is enough that the partial solution (8) of the system (9) without perturbations, y(t) ≡ 0,

\dot{x} = f(x, h(x, t)),   (18)

is stable according to Lyapunov.
Proof. 
From the differential Equation (1) it follows that

x(t + \Delta t) = x(t) + \Delta t\, f(x, h(x, t)) + \Delta t\, y(t),   (19)

or

\bar{x}(t) = x(t, x_0) + v(t),   (20)

where

v(t) = \int_0^t y(\tau)\, d\tau.   (21)

Let Δ_y be given. Then, according to the condition (15), one can always define Δ̃ and a value δ̄ for the perturbed solution x̄ such that, according to the Lyapunov stability condition [20,21],

\| x(t, x_0) - \bar{x}(t) \| < \bar{\delta}, \quad t \in [0; t_f].   (22)

For this it is enough to satisfy the inequality

0 \le \| v(t) \| \le \bar{\delta} / 2, \quad t \in [0; t_f].   (23)
 □
However, finding a control function (7) such that the partial solution (8) is Lyapunov stable is rather difficult and, in fact, not always necessary. According to Lyapunov's theorem, a stable solution of a differential equation must have the property of an attractor [20,22]; therefore, from the mathematical point of view, the synthesis of a stabilization system is an attempt to give the attractor property to the found optimal trajectory [21,23]. The main problem with unstable solutions is that they are difficult to implement, since small perturbations of the model lead to large errors in the functional; in other words, the solution does not have the attractor property. But in fact, the requirement that the optimal solution obtain the attractor property or be Lyapunov stable is fairly strict and could be redundant; other, weaker requirements may be enough to implement the resulting solution. For example, the motion of a pendulum is not Lyapunov stable unless it is at the zero rest point, but it is physically feasible, since small perturbations of the motion lead to small perturbations of the functional.
With this in mind, let us introduce the concept of feasibility.

4. Feasibility Property

Based on the qualitative analysis [24] of solutions of systems of differential equations, feasibility means that small changes in the model do not lead to a loss of quality. In other words, it is necessary that the solution have the contraction property.
Hypothesis 1.
A mathematical model is feasible if its errors do not increase in time.
Definition 1.
The system of differential equations is practically feasible if this system, as a one-parametric mapping, possesses a contraction property in the implementation domain.
Consider a system of differential equations

\dot{x} = f(x),   (24)

where x ∈ R^n.
Any ordinary differential equation is a recurrent description of a time function. A solution of the differential equation is a transformation from the recurrent form to an ordinary time function.
Computer calculation of the differential Equation (24) has the form

x(t + \Delta t) = x(t) + \Delta t\, f(x(t)),   (25)

where t is an independent parameter and Δt is a constant parameter called the integration step.
The right-hand side of Equation (25) is a one-parametric mapping from the space R^n to itself:

F(x, t) = x(t) + \Delta t\, f(x(t)) : \mathbb{R}^n \to \mathbb{R}^n.   (26)

Let a compact domain D be given in the space R^n. All solutions of the differential Equation (24) that are of interest belong to this domain. Therefore, the initial and terminal conditions of (24) belong to this domain:

x(0) \in D \subseteq \mathbb{R}^n, \quad x(t_f) \in D \subseteq \mathbb{R}^n,   (27)

where x(t_f) is the terminal point of the solution of (24).
Theorem 2.
Let in the domain D the mapping (26) satisfy

\rho(x^a(t), x^b(t)) \ge \rho(F(x^a(t), t), F(x^b(t), t)),   (28)

where x^a(t) ∈ D, x^b(t) ∈ D, and ρ(x^a, x^b) is the distance between two points of the space R^n,

\rho(x^a, x^b) = \| x^a - x^b \|.   (29)

Then the mathematical model (24) is feasible in the domain D ⊆ R^n according to Hypothesis 1.
Proof. 
Let x(t) ∈ D be the known state of the system at the moment t and y(t) ∈ D be the real state of the system at the same moment. The state error is

\delta(t) = \rho(x(t), y(t)).   (30)

According to the mapping (26),

\delta(t + \Delta t) = \rho(F(x(t), t), F(y(t), t)).   (31)

And according to the condition (28) of the theorem,

\delta(t + \Delta t) \le \delta(t).   (32)
This proves the theorem. □
The condition (28) means that the system of differential equations, as a one-parametric mapping, has the contraction property.
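The contraction property (28) is easy to probe numerically. The following sketch (our illustration, not part of the paper's method) applies the one-step Euler mapping (26) to a simple stable linear system and checks that the distance (29) between two points does not grow under the mapping; the matrix A and the sampling domain are assumptions chosen for the example.

```python
import numpy as np

# Numerical check of the contraction property (28) for the one-step Euler
# mapping (26), F(x) = x + dt * f(x), of a stable linear system f(x) = A x.
A = np.array([[-1.0,  0.5],
              [-0.5, -1.0]])            # all eigenvalues have Re(lambda) < 0

def F(x, dt=0.01):
    # one-parametric mapping (26): one Euler integration step
    return x + dt * (A @ x)

rng = np.random.default_rng(0)
for _ in range(1000):
    xa, xb = rng.uniform(-5.0, 5.0, size=(2, 2))   # two points of domain D
    d_before = np.linalg.norm(xa - xb)             # rho(xa, xb), Equation (29)
    d_after = np.linalg.norm(F(xa) - F(xb))        # rho(F(xa), F(xb))
    assert d_after <= d_before                     # contraction property (28)
print("contraction property (28) holds on all sampled pairs")
```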
Assume that the system (24) has one stable equilibrium point in a neighborhood of the domain D, and there is no other equilibrium point in this neighborhood:

f(\tilde{x}) = 0,   (33)

\det(\lambda E - A(\tilde{x})) = \lambda^n + a_{n-1} \lambda^{n-1} + \dots + a_1 \lambda + a_0 = \prod_{j=1}^{n} (\lambda - \lambda_j) = 0,   (34)

where E is the n × n identity matrix,

A(\tilde{x}) = \left. \frac{\partial f(x)}{\partial x} \right|_{x = \tilde{x}},   (35)

\lambda_j = \alpha_j + i \beta_j, \quad \alpha_j < 0, \quad i = \sqrt{-1}, \quad j = 1, \dots, n.   (36)
Theorem 3.
If for the system (24) there is a domain D that includes one stable equilibrium point (33)–(36), then the system (24) is practically feasible.
Proof. 
According to Lyapunov's stability theorem in the first approximation, the trivial solution of the differential Equation (24),

x(t) = \tilde{x} = \text{const},   (37)

is stable. This means that if a solution begins from another initial point x_0 ≠ x̃, then it approaches the stable solution asymptotically:

\rho(x(t + \Delta t, x^a), \tilde{x}) \le \rho(x(t, x^a), \tilde{x}),   (38)

where x(t, x^a) is the solution of the differential Equation (24) from the initial point x^a.
The same is true for another initial condition x^b:

\rho(x(t + \Delta t, x^b), \tilde{x}) \le \rho(x(t, x^b), \tilde{x}).   (39)

From here it follows that the domain D has a fixed point x̃ of the contraction mapping [24]; therefore, the distance between the solutions x(t, x^a) and x(t, x^b) also tends to zero, i.e.,

\rho(x(t + \Delta t, x^a), x(t + \Delta t, x^b)) \le \rho(x(t, x^a), x(t, x^b)).   (40)
This proves the theorem. □
Following the feasibility principle, an approach is proposed in which the optimal control problem is solved after ensuring the stability of the object in the state space. This approach is called the method of synthesized optimal control. It includes two stages. In the first stage, the system without perturbations is made stable relative to some point of the state space. This stage of synthesis of the stabilization system allows embedding the control into the object so that the system of differential equations acquires the necessary property of feasibility. The equilibrium point can be changed after some time, but the object maintains equilibrium at every moment in time. Then the position of the stable equilibrium point, as an attractor, is controlled in order to solve the optimal control problem.

5. The Synthesized Optimal Control

According to this approach, it is necessary to find a control function (7) such that the system without perturbations always has a stable equilibrium point in the state space. Together with that, a parameter vector is introduced into the control function. The value of this parameter vector affects the position of the equilibrium point in the state space:

u = g(x, q^*),   (41)

where q* is a parameter vector.
The control function (41) provides, for the system without perturbations,

\dot{x} = f(x, g(x, q^*)),   (42)

the existence of an equilibrium point:

f(x^*(q^*), g(x^*(q^*), q^*)) = 0,   (43)

where x*(q*) is the vector of coordinates of the equilibrium point, which depends on the parameter vector q*. The system (42) satisfies the conditions (34)–(36) at the point x*(q*).
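To make (42) and (43) concrete, here is a minimal sketch for a hypothetical plant and a hypothetical stabilizing feedback g (both are assumptions for illustration, not the controller synthesized in this paper); it locates the equilibrium point x*(q*) numerically and checks the eigenvalue condition (36).

```python
import numpy as np
from scipy.optimize import fsolve

def f(x, u):
    # toy plant (assumption): a double integrator
    return np.array([x[1], u])

def g(x, q_star):
    # hypothetical stabilizing feedback; the parameter q* shifts the
    # equilibrium point: u = -k1 * (x1 - q*) - k2 * x2
    k1, k2 = 2.0, 3.0
    return -k1 * (x[0] - q_star) - k2 * x[1]

def closed_loop(x, q_star):
    # right-hand side of the system (42)
    return f(x, g(x, q_star))

q_star = 5.0
x_eq = fsolve(lambda x: closed_loop(x, q_star), x0=np.zeros(2))
print("equilibrium x*(q*):", x_eq)        # -> [5, 0]: moving q* moves x*

# linearize numerically and check Re(lambda_j) < 0, condition (36)
eps = 1e-6
J = np.column_stack([
    (closed_loop(x_eq + eps * e, q_star) - closed_loop(x_eq, q_star)) / eps
    for e in np.eye(2)])
print("eigenvalues:", np.linalg.eigvals(J))   # here: -1 and -2, both stable
```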
Algorithmically, the method of synthesized optimal control first solves the problem of stabilization system synthesis. For solving the synthesis problem, the functional (6) is not used. The purpose of the control synthesis problem is to obtain a control function (41) that provides the existence of a stable equilibrium point in the state space.
Once the function (41) is found, the optimal control problem is solved for the mathematical model (42) with the initial conditions (3) and the terminal conditions (4), with the quality criterion

J_1 = \int_0^{t_f} f_0(x(t), g(x(t), q^*(t)))\, dt + p_1 \| x_f - x(t_f) \| \to \min_{q^* \in Q},   (44)

where Q is a compact set in the space of parameters.
In the general case, the vector of parameters q* can be some function q*(t). The properties of this function and methods for finding it require additional study. In this work, this function is found for the original optimal control problem (1)–(6) as a piecewise constant one.
Thus, in the synthesized optimal control approach, the uncertainty in the right-hand sides is compensated by the stability of the system relative to a point in the state space. Near the equilibrium point, all solutions converge and the feasibility principle is satisfied. This first step of stabilization system synthesis is the key idea of the approach; it provides better results in tasks with complex environments and noise. However, this approach could not previously be presented as a single computational method, since there was no general numerical approach to solving the control synthesis problem. Formally, the synthesis of a stabilization system involves the construction of a feedback control module, described by some functions, that produces control based on the received data about the object's state so that the object achieves the terminal goal with the optimal value of some given criterion. In the overwhelming majority of cases, the control synthesis problem is solved analytically or technically, taking into account the specific properties of the mathematical model. But now modern numerical methods of symbolic regression can be applied to find a solution without reference to specific model equations. Let us consider the issue in more detail.

6. The Problem of Control System Synthesis

Consider the problem statement of the general numerical synthesis of the control system.
The mathematical model is

\dot{x} = f(x, u),   (45)

where x ∈ R^n, u ∈ U ⊆ R^m.
The domain of initial conditions is given:

X_0 = \{ x_{0,1}, \dots, x_{0,K} \} \subset \mathbb{R}^n.   (46)

The terminal condition is given:

x^* = [x_1^* \dots x_n^*]^T \in \mathbb{R}^n.   (47)

The quality criterion is given:

J_3 = \sum_{i=1}^{K} \left( t_{f,i} + p_1 \| x^* - x(t_{f,i}, x_{0,i}) \| \right) \to \min_u,   (48)

where t_{f,i} is the time of achieving the terminal condition from the initial condition x_{0,i}. It is necessary to find a control in the form (41).
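As an illustration of this statement, evaluating the criterion (48) amounts to simulating one candidate control over the whole set (46) of initial conditions. A minimal sketch is given below, where simulate_to_target is an assumed helper that integrates (45) with u = g(x, q) and returns the achievement time and the reached state.

```python
import numpy as np

def J3(g, X0, x_star, p1, simulate_to_target):
    # criterion (48): sum over all K initial conditions from the set (46)
    total = 0.0
    for x0 in X0:
        t_f, x_f = simulate_to_target(g, x0)   # integrate (45) from x0
        total += t_f + p1 * np.linalg.norm(x_star - x_f)
    return total
```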
The general formulation of the synthesis problem was posed by V.G. Boltyanskiy in the 1960s [25]. One way to solve it is to reduce the problem to the Bellman partial differential equation [26,27]; Bellman also proposed a method for its solution in the form of dynamic programming [26,28]. In the general case, Bellman's equation has no solution; therefore, it is most often solved numerically for one initial condition, which in our case is not enough to ensure stability.
To solve the synthesis problem and obtain an equilibrium point, methods of modal control [29] can be applied for linear systems, as well as other analytical methods such as backstepping [30], analytical design of aggregated controllers [31,32], or synthesis based on the application of the Lyapunov function [21,33]. Note that all known analytical synthesis methods for nonlinear systems are, in their implementation, tied to a specific type of model; therefore, they cannot be considered universal. In practice, linear controllers, such as PID or PI controllers, are often used to ensure stability. Their use is also associated with a specific model, which is linearized in the neighbourhood of the equilibrium point, and their use is not related to the formal statement of the considered synthesis problem.
To solve the synthesis problem in the considered mathematical formulation, it is necessary to find the control function in the form (41). Most of the known methods specify the control function only up to the values of its parameters; examples include methods associated with the solution of the Bellman equation, like the analytical design of optimal controllers [34], as well as various controllers, including those based on the now very popular artificial neural networks [35].
This paper proposes to solve the addressed problem numerically. For the solution of the synthesis problem, we apply numerical methods of symbolic regression. These methods can search for the structure of the function in the form of a special code by some genetic algorithm and also search for the optimal values of the parameters in the desired function.

7. Symbolic Regression Methods

To encode a mathematical expression, it is necessary to define the sets of arguments of the mathematical expression and of elementary functions. To decode a code of the mathematical expression, it is enough to know how many arguments each elementary function has. To encode an elementary function, it is enough to use an integer vector with two components. The first component is the number of arguments of the elementary function. The second component is the function number. Arguments of a mathematical expression are elementary functions without arguments; therefore, the first component of an argument code is zero.
For the control synthesis problem (45)–(48) it is necessary to find a mathematical expression of the control function (41).
Let us define sets of elementary functions.
A set of mathematical expression arguments, or elementary functions without arguments, includes variables, parameters, and unit elements for elementary functions with two arguments:

F_0 = \{ f_{0,1} = x_1, \dots, f_{0,n} = x_n,\ f_{0,n+1} = c_1, \dots, f_{0,n+p} = c_p,\ f_{0,n+p+1} = e_1, \dots, f_{0,n+p+V} = e_V \},   (49)

where x_i, i = 1, ..., n, is a component of the state vector x = [x_1 ... x_n]^T; c_i, i = 1, ..., p, is a component of the parameter vector c = [c_1 ... c_p]^T; and e_i is the unit element of the i-th function with two arguments.
A set of functions with one argument includes the identity function:

F_1 = \{ f_{1,1}(z) = z, f_{1,2}(z), \dots, f_{1,W}(z) \}.   (50)

A set of functions with two arguments includes functions that are associative and commutative and have a unit element:

F_2 = \{ f_{2,1}(z_1, z_2), \dots, f_{2,V}(z_1, z_2) \},   (51)
where each element of the set F_2 has the following properties:
- associativity:

f_{2,j}(f_{2,j}(z_1, z_2), z_3) = f_{2,j}(z_1, f_{2,j}(z_2, z_3)), \quad j = 1, \dots, V;   (52)

- commutativity:

f_{2,j}(z_1, z_2) = f_{2,j}(z_2, z_1), \quad j = 1, \dots, V;   (53)

- existence of a unit element:

f_{2,j}(z_1, e_j) = z_1, \quad f_{2,j}(e_j, z_2) = z_2, \quad j = 1, \dots, V.   (54)
To describe the most common mathematical expressions, functions with one and two arguments are enough; functions with three or more arguments need not be used.
Any element of the sets (49)–(51) is encoded by an integer vector with two components:

s = [s_1\ s_2]^T,   (55)

where s_1 is the number of arguments and s_2 is the function number.
A code of the mathematical expression is a sequence of codes of elementary functions:

S = (s^1, \dots, s^L),   (56)

where s^j = [s_1^j\ s_2^j]^T, s_1^j ∈ {0, 1, 2},

s_2^j \in \begin{cases} \{1, \dots, n + p + V\}, & \text{if } s_1^j = 0 \\ \{1, \dots, W\}, & \text{if } s_1^j = 1 \\ \{1, \dots, V\}, & \text{otherwise.} \end{cases}   (57)
Theorem 4.
For the mathematical expression code (56) with L elements to be correct, it is necessary and sufficient that the following formulas are valid:

1 + \sum_{i=1}^{j} s_1^i \le L, \quad j = 1, \dots, L - 1,   (58)

1 - L + \sum_{i=1}^{L} s_1^i = 0.   (59)
Proof. 
Consider the Formula (58) and add -j to its left and right sides:

1 - j + \sum_{i=1}^{j} s_1^i \le L - j.   (60)

Consider the left side of the inequation (60):

T(j) = 1 - j + \sum_{i=1}^{j} s_1^i.   (61)

This quantity calculates how many elements from the set of arguments (49) must still appear after element j. The value T(j) increases by 1 after each s_1^j = 2, does not change after each s_1^j = 1, and decreases by 1 after each s_1^j = 0.
At j = L, we obtain Equation (59): after the last element j = L, no further elements must be required.
Assume that the condition (59) fails, so that T(L) ≠ 0. If

T(L) = 1 - L + \sum_{i=1}^{L} s_1^i > 0,   (62)

then some arguments are still required after the last element; if T(L) < 0, the code of the expression is completed before element L and extra elements remain. In both cases the code cannot be decoded. Similarly, if the inequation (58) fails for some j < L, then T(j) > L - j, that is, more elements are required after element j than remain in the code. Therefore, the conditions (58) and (59) are necessary.
Let the inequation (58) and Equation (59) be satisfied. If the element after element j is an argument from the set (49), then T(j) decreases by 1; if it is the number of a function with one argument, then T(j) does not change; if it is the number of a function with two arguments, then T(j) increases by 1. Equation (59) shows that after the last element no further arguments are needed. The formula is decoded. Therefore, satisfying the Formulas (58) and (59) is sufficient. QED. □
From Equation (59) it follows that

\sum_{i=1}^{L} s_1^i = L - 1.   (63)
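As an illustration, the validity conditions (58) and (59) can be checked in a single pass over the code. The sketch below (an assumed helper, not from the paper) verifies the prefix code of the expression sin(x1) + c1.

```python
# Each element of the code is a pair (number of arguments, function number).
def is_correct(S):
    L = len(S)
    total = 0
    for j, (arity, _) in enumerate(S, start=1):
        total += arity
        if j < L and 1 + total > L:      # inequation (58) violated
            return False
    return 1 - L + total == 0            # equation (59)

# Code of sin(x1) + c1 in prefix order, with '+' in F2, 'sin' in F1,
# and the arguments x1, c1 in F0:
S = [(2, 1),   # '+'  : function with two arguments
     (1, 2),   # 'sin': function with one argument
     (0, 1),   # x1   : argument
     (0, 2)]   # c1   : argument
print(is_correct(S))   # True: the sum of arities is L - 1 = 3
```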
Such direct encoding is used in genetic programming [36]. This method of symbolic regression does not include extra elements; therefore, the codes of different mathematical expressions have different lengths. This is not very convenient for programming and for implementing crossover in genetic programming. For crossover, it is necessary to find in the code (56) a sub-code of a mathematical expression with the properties (58) and (59). The crossover operation in genetic programming is performed by exchanging sub-codes of mathematical expressions. Searching for sub-codes and exchanging them takes a significant part of the algorithm's time. Other symbolic regression methods that can be effectively used to find a mathematical expression, such as the network operator method [37,38] or Cartesian genetic programming [39,40], have codes of equal length for different mathematical expressions due to redundant elements.
An effective tool in the search for an optimal mathematical expression is the principle of small variations of the basic solution [41]. According to this principle, the search for the mathematical expression can begin in the neighbourhood of one given basic solution. This solution is coded by some symbolic regression method. Other possible solutions are obtained using sets of codes of small variations of the basic solution. Each small variation slightly modifies the basic solution code so that a new code corresponds to some kind of mathematical expression.
To find the optimal mathematical expression by any method of symbolic regression, a special genetic algorithm is used. Depending on the code of symbolic regression, this genetic algorithm has its own crossover and mutation operations. Using the principle of small variations of the basic solution, crossover and mutation operations are performed on the sets of small variations.
In the numerical solution of control synthesis problems by symbolic regression methods, together with the search for the structure of the mathematical expression, it is advisable to look for the optimal values of the parameter vector c = [c_1 ... c_p]^T, which enters this mathematical expression as additional arguments (49). For this purpose, it is convenient to use the same genetic algorithm as for finding the structure. In this case, a possible solution is a pair consisting of the code for the structure of the mathematical expression and the vector of parameters. When performing a crossover operation, we obtain not two but four offspring. Two offspring have new mathematical expression structures and new parameter values, and the two others inherit the parent structures and have only new parameter values. The crossover operation for the parameters is performed as in the classical genetic algorithm, by exchanging codes after the crossover point.
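A minimal sketch of this crossover scheme is shown below. The representation of a possible solution as a (structure code, parameter vector) pair follows the description above; vary_structure stands for the method-specific exchange of structures (e.g., by small variations of the basic solution) and is an assumed placeholder.

```python
import random

def crossover_params(c1, c2):
    # classical one-point crossover of parameter vectors
    k = random.randrange(1, len(c1))
    return c1[:k] + c2[k:], c2[:k] + c1[k:]

def crossover(parent1, parent2, vary_structure):
    # one crossover produces four offspring
    (S1, c1), (S2, c2) = parent1, parent2
    ca, cb = crossover_params(c1, c2)
    Sa, Sb = vary_structure(S1, S2)       # new structures (method-specific)
    return [(Sa, ca), (Sb, cb),           # new structure + new parameters
            (S1, ca), (S2, cb)]           # parent structure + new parameters
```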
It can be seen that the methods of symbolic regression can automate the process of control system synthesis, but very few of them are used in this direction. Only a few scientific groups [42,43,44] are developing these approaches for solving the problem of control system synthesis, in view of a number of difficulties, such as the non-numerical search space and the absence of a metric on it, the complexity of the program code, the absence of publicly available software packages, and so forth.

8. A Computational Example

Let us consider the optimal control problem for two mobile robots. They have to exchange their positions on a plane with obstacles.
The mathematical models of the mobile robots [45] are given as

\dot{x}_j = 0.5 (u_{1j} + u_{2j}) \cos\theta_j, \quad \dot{y}_j = 0.5 (u_{1j} + u_{2j}) \sin\theta_j, \quad \dot{\theta}_j = 0.5 (u_{1j} - u_{2j}),   (64)

where u_j = [u_{1j}\ u_{2j}]^T is a control vector, j = 1, 2.
The control is constrained:

-10 = u_i^- \le u_{ij} \le u_i^+ = 10, \quad j = 1, 2, \quad i = 1, 2.   (65)
The initial conditions are set:

x_1(0) = 0, \quad y_1(0) = 0, \quad \theta_1(0) = 0, \quad x_2(0) = 10, \quad y_2(0) = 10, \quad \theta_2(0) = 0.   (66)

The terminal conditions are set:

x_1(t_f) = 10, \quad y_1(t_f) = 10, \quad \theta_1(t_f) = 0, \quad x_2(t_f) = 0, \quad y_2(t_f) = 0, \quad \theta_2(t_f) = 0,   (67)

where

t_f = \begin{cases} t, & \text{if } t < t^+ \text{ and } \Delta_f(t) \le \varepsilon \\ t^+, & \text{otherwise,} \end{cases}   (68)

\Delta_f(t) = \sqrt{(10 - x_1(t))^2 + (10 - y_1(t))^2 + (\theta_1(t))^2 + (x_2(t))^2 + (y_2(t))^2 + (\theta_2(t))^2},   (69)

t^+ = 2.4 s, ε = 0.01.
The quality functional includes the time to reach the terminal state and penalty functions for violation of the accuracy of reaching the terminal state and for violation of the static and dynamic phase constraints:

J_e = t_f + w_1 \Delta_f(t_f) + w_2 \int_0^{t_f} \sum_{i=1}^{2} \sum_{j=1}^{2} \vartheta(\varphi_{i,j}(t))\, dt + w_3 \int_0^{t_f} \vartheta\big(d^2 - (x_1(t) - x_2(t))^2 - (y_1(t) - y_2(t))^2\big)\, dt \to \min_{u_1, u_2},   (70)

where w_1 = 2.5, w_2 = 3, w_3 = 3,

\vartheta(\alpha) = \begin{cases} 1, & \text{if } \alpha > 0 \\ 0, & \text{otherwise,} \end{cases}   (71)

\varphi_{i,j}(t) = r_i - \sqrt{(\hat{x}_i - x_j(t))^2 + (\hat{y}_i - y_j(t))^2}, \quad i = 1, 2, \quad j = 1, 2,   (72)

r_1 = 3, r_2 = 3, x̂_1 = 5, x̂_2 = 5, ŷ_1 = 9, ŷ_2 = 1, d = 2; here (x̂_i, ŷ_i) and r_i are the center and radius of obstacle i, and d is the minimal allowed distance between the robots.
It is necessary to find a control that moves both robots from the initial conditions (66) to the terminal conditions (67) with the minimal value of the quality criterion (70).
To solve the optimal control problem (64)–(72) by the proposed synthesized optimal control method, it is necessary to first solve the control synthesis problem (45)–(48) for each robot. Since the robots are identical, it is enough to solve the control synthesis problem once for one robot. For the solution of this problem, the symbolic regression method of Cartesian genetic programming was used.
As a result, the following control function was obtained:
u_{ij} = \begin{cases} u_i^+ = 10, & \text{if } \tilde{u}_{ij} \ge u_i^+ \\ u_i^- = -10, & \text{if } \tilde{u}_{ij} \le u_i^- \\ \tilde{u}_{ij}, & \text{otherwise,} \end{cases} \quad i = 1, 2, \; j = 1, 2,   (73)

where

\tilde{u}_{1j} = A + B + \rho^{\#}(A), \quad j = 1, 2,   (74)

\tilde{u}_{2j} = B - A - \rho^{\#}(A), \quad j = 1, 2,   (75)

A = c_1 (\theta^* - \theta_j) + \sigma^{\#}((x^* - x_j)(y^* - y_j)),   (76)

B = 2 (x^* - x_j) + \mathrm{sgn}(x^* - x_j)\, c_2,   (77)

\rho^{\#}(\alpha) = \begin{cases} \mathrm{sgn}(\alpha)\, B^+, & \text{if } |\alpha| > \ln\delta \\ \mathrm{sgn}(\alpha)(\exp(|\alpha|) - 1), & \text{otherwise,} \end{cases} \qquad \sigma^{\#}(\alpha) = \mathrm{sgn}(\alpha) \sqrt{|\alpha|},   (78)

c_1 = 3.1094, c_2 = 3.6289, B^+ = 10^8, δ = 10^8.
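For clarity, the control law (73)–(78) can be transcribed directly. The sketch below follows the formulas as reconstructed above (in particular, the signed square root in σ# and the logarithmic threshold in ρ# are reconstructions of the garbled source); the currently active equilibrium point is passed in as the target.

```python
import math

C1, C2 = 3.1094, 3.6289
B_PLUS = DELTA = 1e8

def sgn(a):
    return (a > 0) - (a < 0)

def rho_sharp(a):
    # saturated exponential, Equation (78)
    if abs(a) > math.log(DELTA):
        return sgn(a) * B_PLUS
    return sgn(a) * (math.exp(abs(a)) - 1.0)

def sigma_sharp(a):
    # signed square root, Equation (78)
    return sgn(a) * math.sqrt(abs(a))

def control(state, target):
    x, y, th = state
    xs, ys, ths = target                  # equilibrium point [x* y* theta*]
    A = C1 * (ths - th) + sigma_sharp((xs - x) * (ys - y))    # Equation (76)
    B = 2.0 * (xs - x) + sgn(xs - x) * C2                     # Equation (77)
    u1 = max(-10.0, min(10.0, A + B + rho_sharp(A)))   # (74) saturated by (73)
    u2 = max(-10.0, min(10.0, B - A - rho_sharp(A)))   # (75) saturated by (73)
    return u1, u2
```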
For the solution of the synthesis problem, eight initial conditions were used, and the quality criterion took into account the speed and the accuracy of achieving the terminal position

x^* = [x^*\ y^*\ \theta^*]^T.   (79)

As a result of solving the control synthesis problem, a stable equilibrium point appears in the state space. The position of the equilibrium point depends on the terminal vector (79).
In the second stage, a set of four points (79) was sought for each robot according to the criterion (70):

X^* = \{ x^{*,1,1}, \dots, x^{*,1,4}, x^{*,2,1}, \dots, x^{*,2,4} \}.   (80)

These points were switched at a time interval of Δt = 0.6 s in the control function (73) of each robot.
To search for the points, the Grey Wolf Optimizer evolutionary algorithm [46,47] was used. As a result, after more than one hundred tests, the following best points were found:
x^{*,1,1} = [4.0159\ 1.8954\ 1.2397]^T, \quad x^{*,1,2} = [7.0890\ 4.2341\ 0.5270]^T, \quad x^{*,1,3} = [7.2194\ 0.4480\ 1.3042]^T, \quad x^{*,1,4} = [11.9722\ 9.4663\ 0.1866]^T, \quad x^{*,2,1} = [5.3899\ 4.0791\ 0.1208]^T, \quad x^{*,2,2} = [0.6401\ 4.3126\ 0.0176]^T, \quad x^{*,2,3} = [0.3103\ 0.8955\ 0.6335]^T, \quad x^{*,2,4} = [0.0791\ 0.1518\ 0.0195]^T.   (81)
In one test, the algorithm simulated the system (64) with the control (73) more than 500,000 times to calculate the criterion values (70).
When searching for the points, the following constraints were used:

-2 \le x^* \le 12, \quad -2 \le y^* \le 12, \quad -\pi/2 \le \theta^* \le \pi/2.   (82)
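The second stage thus reduces to a parametric optimization over the eight points of the set (80). A minimal sketch of the evaluation loop is given below; simulate_criterion is an assumed helper that integrates the model (64) under the control (73) with the supplied target schedule and returns the criterion value (70), while the piecewise-constant switching every 0.6 s follows the description above.

```python
def active_point(points, t, dt_switch=0.6):
    # piecewise-constant schedule: four equilibrium points per robot (80)
    k = min(int(t / dt_switch), len(points) - 1)
    return points[k]

def fitness(candidate, simulate_criterion):
    # candidate: 8 points (x*, y*, theta*), four for each robot
    points_r1, points_r2 = candidate[:4], candidate[4:]
    targets = lambda t: (active_point(points_r1, t),
                         active_point(points_r2, t))
    return simulate_criterion(targets)   # value of the criterion (70)
```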
Figure 1 presents the projections of the optimal trajectories onto the plane {x, y}. The trajectories are black lines, the red circles are obstacles, and the small black squares are projections of the found points (81).
The value of the quality criterion (70) for the found control was J_e = 2.8914.
For a comparative study of the obtained solution, the same optimal control problem was solved by a direct method. For this purpose, the control functions of the robots were approximated by piecewise linear functions of time. The approximation interval was Δ_d t = 0.4 s; therefore, the number of intervals was

K = \frac{t^+}{\Delta_d t} = \frac{2.4}{0.4} = 6.   (83)

For the approximation of the control functions, the values of the parameters at the boundaries of the intervals were sought. For each control function, it was necessary to find K + 1 = 7 parameters. The total vector of parameters had twenty-eight components:

q = [q_1 \dots q_{28}]^T.   (84)
The direct control has the following form:

u_{ij} = \begin{cases} 10 = u_i^+, & \text{if } \bar{u}_{ij} \ge u_i^+ \\ -10 = u_i^-, & \text{if } \bar{u}_{ij} \le u_i^- \\ \bar{u}_{ij}, & \text{otherwise,} \end{cases} \quad i = 1, 2, \; j = 1, 2,   (85)

where

\bar{u}_{11} = q_s + (q_{s+1} - q_s) \frac{t - s \Delta_d t}{\Delta_d t},   (86)

\bar{u}_{21} = q_{s+L} + (q_{s+L+1} - q_{s+L}) \frac{t - s \Delta_d t}{\Delta_d t},   (87)

\bar{u}_{12} = q_{s+2L} + (q_{s+2L+1} - q_{s+2L}) \frac{t - s \Delta_d t}{\Delta_d t},   (88)

\bar{u}_{22} = q_{s+3L} + (q_{s+3L+1} - q_{s+3L}) \frac{t - s \Delta_d t}{\Delta_d t},   (89)

s \Delta_d t \le t \le (s + 1) \Delta_d t, \quad s \in \{1, \dots, 6\}, \quad L = K + 1 = 7.   (90)
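A sketch of this piecewise linear parametrization is given below (our illustration of Equations (85)–(90); the channel-to-index layout assumes that each of the four control channels occupies a consecutive block of L = 7 nodes in q).

```python
def piecewise_linear(q, t, channel, dt=0.4, L=7):
    # q: 28 parameters (84); channels 0..3 own L = 7 consecutive nodes each
    nodes = q[channel * L:(channel + 1) * L]
    s = min(int(t / dt), L - 2)                  # current interval index
    frac = (t - s * dt) / dt                     # position inside the interval
    u = nodes[s] + (nodes[s + 1] - nodes[s]) * frac
    return max(-10.0, min(10.0, u))              # saturation per (85)
```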
To search for the optimal parameters, the same Grey Wolf Optimizer evolutionary algorithm was used. As a result of more than one hundred tests, the following best values of the parameters were found:
q = [19.6125\ 5.4318\ 7.5921\ 19.4020\ 2.3928\ 2.1627\ 1.6976\ 1.4941\ 5.1828\ 16.9087\ 11.2478\ 2.4499\ 17.7201\ 0.6297\ 0.9093\ 1.6815\ 19.5283\ 16.4979\ 0.2321\ 11.4719\ 17.7372\ 1.4218\ 18.0214\ 3.7942\ 3.0899\ 13.3196\ 9.7212\ 0.3233]^T.   (91)
The parameter search process had the restrictions

-20 = q^- \le q_i \le q^+ = 20, \quad i = 1, \dots, 28.   (92)
In one test, the algorithm simulated the system (64) with the control (85) more than 500,000 times to calculate the criterion values (70). The value of the quality criterion (70) for the found control was J_e = 2.5134.
In Figure 2, the projection of optimal trajectories of mobile robots on the horizontal plane { x , y } is presented.
To check the sensitivity of the obtained solutions to perturbations, we included random uncertainty functions in the model (64):

\dot{x}_j = 0.5 (u_{1j} + u_{2j}) \cos\theta_j + B \xi(t), \quad \dot{y}_j = 0.5 (u_{1j} + u_{2j}) \sin\theta_j + B \xi(t), \quad \dot{\theta}_j = 0.5 (u_{1j} - u_{2j}) + B \xi(t),   (93)

where j = 1, 2 and ξ(t) generates a new random value in the interval from -1 to 1 at every call.
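For a sensitivity check of this kind, a minimal Euler simulation of the perturbed model (93) can be used; the sketch below is our assumption of such a test harness (B = 0 recovers the unperturbed model (64)), with the control supplied as a callback already saturated according to (65).

```python
import math
import random

def simulate(control, t_end=2.4, dt=0.001, B=0.0):
    # state of each robot: (x, y, theta); initial conditions (66)
    state = [[0.0, 0.0, 0.0], [10.0, 10.0, 0.0]]
    t = 0.0
    while t < t_end:
        for j in (0, 1):
            u1, u2 = control(state[j], j, t)
            x, y, th = state[j]
            # Euler step of (93); xi(t) is a fresh uniform sample per call
            x += dt * (0.5 * (u1 + u2) * math.cos(th) + B * random.uniform(-1, 1))
            y += dt * (0.5 * (u1 + u2) * math.sin(th) + B * random.uniform(-1, 1))
            th += dt * (0.5 * (u1 - u2) + B * random.uniform(-1, 1))
            state[j] = [x, y, th]
        t += dt
    return state
```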
The results of simulations with the found optimal controls and different levels of perturbation of the model are presented in Table 1, which lists the average values of the functional (70) over ten tests. As can be seen, the synthesized optimal control is less sensitive to the perturbation of the model. For the synthesized control with a perturbation level of B = 1.5, the average value of the functional changes by no more than 30%, whereas for the direct control with the same level of perturbations the functional changes by more than 200%.
In Figure 3, the trajectories for synthesized optimal control with model perturbations of level B = 1.5 are presented. In Figure 4, the trajectories for the direct control with the same level of perturbation B = 1.5 are presented.
As can be seen from Figures 3 and 4, the synthesized control does not change the nature of the motion of the objects under large disturbances, whereas the direct control first of all loses the accuracy of achieving the terminal conditions.

9. Conclusions

This work presents the statement of a new optimal control problem with uncertainty. In this problem, the mathematical model of the control object includes an additive bounded perturbing function simulating possible model inaccuracies. It is necessary to find an optimal control function that, for bounded perturbations, provides a bounded variation of the functional value. For this purpose, it is proposed to use the synthesized optimal control method. According to this method, the control synthesis problem is solved first, after which a stable equilibrium point appears in the state space. In the second stage, the original optimal control problem is solved by searching for the positions of stable equilibrium points, which serve as a control for the stabilization system obtained in the first stage. It is shown that such an approach supplies the property of a contraction mapping for the differential equations of the mathematical model of the plant. Such differential equations are quite feasible, and their solutions reduce the errors of determining the state vector. For the solution of the control synthesis problem, it is proposed to apply symbolic regression methods. A comparative example is presented. Computational experiments showed that the obtained solution is much less sensitive to perturbations in the mathematical model of the control object than the direct solution of the optimal control problem.

10. Findings/Results

This paper presents a new formulation of the optimal control problem that takes into account the objectively existing uncertainties of the model. The concept of feasibility is introduced, which means that small changes in the model do not lead to a loss of quality. Theoretical substantiations (definitions and theorems) are given showing that the system of differential equations of the mathematical model is feasible if, as a one-parametric mapping, it possesses a contraction property in the implementation domain. This property is an alternative to Lyapunov stability; it is weaker, but sufficient for the development of real, stable practical systems. An approach based on the method of synthesized optimal control is proposed, which makes it possible to develop systems that have the property of feasibility.

11. Discussion

According to the method of synthesized optimal control, the stability of the object is ensured first; that is, an equilibrium point appears in the phase space. In the neighbourhood of the stability point, the phase trajectories contract, and this property determines the feasibility of the system. For this, it is necessary to numerically solve the problem of synthesizing the stabilization system in order to obtain expressions for the control and substitute them into the right-hand sides of the object model. The synthesis problem is quite difficult. This paper proposes using numerical methods of symbolic regression to solve it. There are several successful applications, but these methods are still not very popular due to the complexity of searching in a non-numerical space of functions where there is no metric. This is a direction for future research.
In the applied method of synthesized optimal control, in the second stage we searched for the positions of the equilibrium points as a piecewise constant function. It is necessary to investigate other types of functions for changing the position of the equilibrium point, as well as how many points are needed and how often they should be switched.
In further studies it is also necessary to consider solutions of the new optimal control problem for different control objects.
The numerical solution of the optimal control problem by an evolutionary algorithm showed that such algorithms can find solutions for complex optimal control problems with static and dynamic phase constraints. It is necessary to continue researching different evolutionary algorithms for the solution of optimal control problems.

Author Contributions

Conceptualization, A.D. and E.S.; methodology, A.D., E.S.; software, A.D., E.S.; validation, V.S., P.Z.; investigation, E.S.; resources, V.S.; writing—original draft preparation, A.D., E.S.; writing—review and editing, E.S.; supervision, V.S., P.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported by the Ministry of Science and Higher Education of the Russian Federation, project No. 075-15-2020-799.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Athans, M.; Falb, P.L. Optimal Control: An Introduction to the Theory and Its Application; Dover Publications Inc.: Mineola, NY, USA, 2007; 880p. [Google Scholar]
  2. Pontryagin, L.S.; Boltyanskii, V.G.; Gamkrelidze, R.V.; Mishchenko, E.F. Pontryagin Selected Works: The Mathematical Theory of Optimal Process; Gordon and Breach Science Publishers: New York, NY, USA, 1985; Volume 4, p. 360. [Google Scholar]
  3. Chertovskih, R.; Karamzin, D.; Khalil, N.T.; Lobo Pereira, F. Regular path-constrained time-optimal control problems in three-dimensional flow fields. Eur. J. Control. 2020, 56, 98–106. [Google Scholar] [CrossRef] [Green Version]
  4. Arutyunov, A.; Karamzin, D. A Survey on Regularity Conditions for State-Constrained Optimal Control Problems and the Non-degenerate Maximum Principle. J. Optim. Theory Appl. 2020, 184, 697–723. [Google Scholar] [CrossRef]
  5. Gill, P.E.; Murray, W.; Wright, M.H. Practical Optimization; Academic Press: Cambridge, MA, USA, 1981. [Google Scholar]
  6. Evtushenko, Y.G. Optimization and Rapid Automatic Differentiation; Computing Center of RAS: Moscow, Russia, 2013. [Google Scholar]
  7. Betts, J.T. Survey of Numerical Methods for Trajectory Optimization. J. Guid. Control. Dyn. 1998, 21, 193–207. [Google Scholar] [CrossRef]
  8. Saunders, B.R. Optimal Trajectory Optimization under Uncertainty. Master’s Dissertation, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, USA, 2012. [Google Scholar]
  9. Seywald, H.; Kumar, R. Desensitized Optimal Trajectories. In Proceedings of the AIAA/AAS Spaceflight Mechanics Meeting, Austin, TX, USA, 11–15 February 1996. AAS Paper 96-107:103-115. [Google Scholar]
  10. Makkapati, V.R.; Dor, M.; Tsiotras, P. Trajectory desensitization in optimal control problems. In Proceedings of the IEEE Conference on Decision and Control, Miami, FL, USA, 17–19 December 2018; pp. 2478–2483. [Google Scholar]
  11. Dullerud, G.E.; Paganini, F. A Course in Robust Control Theory: A Convex Approach; Springer: New York, NY, USA, 2000; 477p. [Google Scholar]
  12. Calafiore, G.; Dabbene, F. (Eds.) Probabilistic and Randomized Methods for Design under Uncertainty; Springer: London, UK, 2006; 458p. [Google Scholar]
  13. Chanthorn, P.; Rajchakit, G.; Thipcha, J.; Emharuethai, C.; Sriraman, R.; Lim, C.P.; Ramachandran, R. Robust Stability of Complex-Valued Stochastic Neural Networks with Time-Varying Delays and Parameter Uncertainties. Mathematics 2020, 8, 742. [Google Scholar] [CrossRef]
  14. Wu, L.; Zhao, R.; Li, Y.; Chen, Y.-H. Optimal Design of Adaptive Robust Control for the Delta Robot with Uncertainty: Fuzzy Set-Based Approach. Appl. Sci. 2020, 10, 3472. [Google Scholar] [CrossRef]
  15. Shang, D.; Li, Y.; Liu, Y.; Cui, S. Research on the motion error analysis and compensation strategy of the Delta robot. Mathematics 2019, 7, 411. [Google Scholar] [CrossRef] [Green Version]
  16. Lu, P. Regulation About Time-Varying Trajectories: Precision Entry Guidance Illustrated. J. Guid. Control. Dyn. 1999, 22, 784–790. [Google Scholar] [CrossRef]
  17. Angel, L.; Viola, J. Fractional order PID for tracking control of a parallel robotic manipulator type delta. ISA Trans. 2018, 79, 1–17. [Google Scholar] [CrossRef]
  18. Diveev, A.I. Numerical Method of Synthesized Control for Solution of the Optimal Control Problem. In Science and Information Conference; Arai, K., Ed.; Advances in Intelligent Systems and Computing; Springer Nature: Cham, Switzerland, 2020; Volume 1, pp. 137–156. [Google Scholar]
  19. Diveev, A.; Shmalko, E. Comparison of Direct and Indirect Approaches for Numerical Solution of the Optimal Control Problem by Evolutionary Methods. In Optimization and Applications. OPTIMA 2019. Communications in Computer and Information Science; Jaćimović, M., Khachay, M., Malkova, V., Posypkin, M., Eds.; Springer: Cham, Switzerland, 2019; Volume 1145, pp. 180–193. [Google Scholar]
  20. Parks, P.C. AM Lyapunov’s stability theory—100 years on. Ima J. Math. Control. Inf. 1992, 9, 275–303. [Google Scholar] [CrossRef]
  21. Clarke, F. Lyapunov Functions and Feedback in Nonlinear Control. In Optimal Control, Stabilization and Nonsmooth Analysis; de Queiroz, M., Malisoff, M., Wolenski, P., Eds.; LNCIS 301; Springer: Berlin/Heidelberg, Germany; pp. 267–282.
  22. Hahn, W.; Baartz, A. Stability of Motion; Springer: Berlin, Germany, 1967. [Google Scholar]
  23. Diveev, A.I.; Shmalko, E.Y.; Sofronova, E.A. Multipoint numerical criterion for manifolds to guarantee attractor properties in the problem of synergetic control design. ITM Web Conf. 2018, 18, 01001. [Google Scholar] [CrossRef] [Green Version]
  24. Kolmogorov, A.N.; Fomin, S.V. Elements of the Theory of Functions and Functional Analysis; Metric and Normed Spaces; Graylock Press: Rochester, NY, USA, 1957; Volume 1, 130p. [Google Scholar]
  25. Boltyansky, V.G. Mathematical Methods of Optimal Control; Holt, Rinehart and Winston: New York, NY, USA, 1971; 272p. [Google Scholar]
  26. Bellman, R.E.; Dreyfus, S.E. Applied Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1971; 364p. [Google Scholar]
  27. Afanasiev, V.N.; Kolmanovskii, V.; Nosov, V.R. Mathematical Theory of Control Systems Design; Springer: Berlin/Heidelberg, Germany, 2014; 700p. [Google Scholar]
  28. Bertsecas, D. Dynamic Programming and Optimal Control; Athena Scientific: Bellmont, MA, USA, 1995; 387p. [Google Scholar]
  29. Simon, J.D.; Mitter, S.K. A theory of modal control. Inf. Control. 1968, 13, 316–353. [Google Scholar] [CrossRef] [Green Version]
  30. Khalil, H.K. Nonlinear Systems; Prentice Hall: Upper Saddle River, NJ, USA, 2002; 750p. [Google Scholar]
  31. Kolesnikov, A.A.; Kuz’menko, A.A. Backstepping and ADAR Method in the Problems of Synthesis of the Nonlinear Control Systems. Mekhatronika Avtom. Upr. 2016, 17, 435–445. (In Russian) [Google Scholar] [CrossRef] [Green Version]
  32. Podvalny, S.L.; Vasiljev, E.M. Analytical synthesis of aggregated regulators for unmanned aerial vehicles. J. Math. Sci. 2019, 239, 135–145. [Google Scholar] [CrossRef]
  33. Agarwal, R.; O’Regan, D.; Hristova, S. Stability by Lyapunov like functions of nonlinear differential equations with non-instantaneous impulses. J. Appl. Math. Comput. 2017, 53, 147–168. [Google Scholar] [CrossRef]
  34. Mizhidon, A.D. On a Problem of Analytic Design of an Optimal Controller. Autom. Remote Control. 2011, 72, 2315–2327. [Google Scholar] [CrossRef]
  35. Yang, J.; Lu, W.; Liu, W. PID Controller Based on the Artificial Neural Network. In Advances in Neural Networks—Lecture Notes in Computer Science; Yin, F.L., Wang, J., Guo, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3174. [Google Scholar]
  36. Koza, J.R. Genetic Programming: On the Programming of Computers by Means of Natural Selection; MIT Press: Cambridge, UK, 1992; 819p. [Google Scholar]
  37. Diveev, A.I.; Sofronova, E.A. The Network Operator Method for Search of the Most Suitable Mathematical Equation. In Bio-Inspired Computational Algorithms and Their Applications; Gao, S., Ed.; Intech: Rijeka, Croatia, 2012; pp. 19–42. [Google Scholar]
  38. Diveev, A.I. A Numerical Method for Network Operator for Synthesis of a Control System with Uncertain Initial Values. J. Comput. Syst. Sci. Int. 2012, 51, 228–243. [Google Scholar] [CrossRef]
  39. Miller, J.; Thomson, P. Cartesian Genetic Programming. In Proceedings of the European Conference on Genetic Programming (EuroGP2000); Springer: Milan, Italy, 2000; Volume 1802, pp. 121–132. [Google Scholar]
  40. Diveev, A.I. Cartesian Genetic Programming for Synthesis of Control System for Group of Robots. In Proceedings of the 2020 28th Mediterranean Conference on Control and Automation (MED), Saint-Raphaël, France, 15–18 September 2020; pp. 972–977. [Google Scholar]
  41. Diveev, A. Small Variations of Basic Solution Method for Non-numerical Optimization. IFAC-PapersOnLine 2015, 48, 28–33. [Google Scholar] [CrossRef]
  42. Duriez, T.; Brunton, S.L.; Noack, B.R. Taming nonlinear dynamics with MLC. In Machine Learning Control—Taming Nonlinear Dynamics and Turbulence; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  43. Derner, E.; Kubalík, J.; Ancona, N.; Babuška, R. Symbolic Regression for Constructing Analytic Models in Reinforcement Learning. Appl. Soft Comput. 2020, 94, 1–12. [Google Scholar] [CrossRef]
  44. Diveev, A.; Hussein, O.; Shmalko, E.; Sofronova, E. Synthesis of Control System for Quad-Rotor Helicopter by the Network Operator Method. In Proceedings of SAI Intelligent Systems Conference; Advances in Intelligent Systems and Computing; Springer: Cham, Switzerland, 2020; pp. 246–263. [Google Scholar]
  45. Šuster, P.; Jadlovská, A. Tracking Trajectory of the Mobile Robot Khepera II Using Approaches of Artificial Intelligence. Acta Electrotech. Inform. 2011, 11, 38–43. [Google Scholar] [CrossRef]
  46. Diveev, A.I.; Konstantinov, S.V. Study of the Practical Convergence of Evolutionary Algorithms for the Optimal Program Control of a Wheeled Robot. J. Comput. Syst. Sci. Int. 2018, 57, 561–580. [Google Scholar] [CrossRef]
  47. Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Optimal trajectories of robots on the plane {x, y} for synthesized optimal control.
Figure 2. Optimal trajectories of robots on the plane {x, y} for direct control.
Figure 3. Optimal trajectories of robots on the plane {x, y} for synthesized control with B = 1.5.
Figure 4. Optimal trajectories of robots on the plane {x, y} for direct control with B = 1.5.
Table 1. The average values of the functional (70) over ten tests.

Level of Noise B | Synthesized Control | Direct Control
0                | 2.8914              | 2.5134
0.1              | 3.0014              | 3.0260
0.2              | 3.0066              | 3.8571
0.5              | 3.2141              | 5.5497
0.8              | 3.3156              | 5.8968
1                | 3.4123              | 6.7952
1.5              | 3.6954              | 8.2654
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
