Article

Robust Solution of the Multi-Model Singular Linear-Quadratic Optimal Control Problem: Regularization Approach

The Galilee Research Center for Applied Mathematics, Braude College of Engineering, Karmiel 2161002, Israel
Axioms 2023, 12(10), 955; https://doi.org/10.3390/axioms12100955
Submission received: 11 September 2023 / Revised: 3 October 2023 / Accepted: 9 October 2023 / Published: 10 October 2023
(This article belongs to the Special Issue Advances in Analysis and Control of Systems with Uncertainties II)

Abstract:
We consider a finite horizon multi-model linear-quadratic optimal control problem. For this problem, we treat the case where the problem’s functional does not contain a control function. The latter means that the problem under consideration is a singular optimal control problem. To solve this problem, we associate it with a new optimal control problem for the same multi-model system. The functional in this new problem is the sum of the original functional and an integral of the square of the Euclidean norm of the vector-valued control with a small positive weighting coefficient. Thus, the new problem is regular. Moreover, it is a multi-model cheap control problem. Using the solvability conditions (Robust Maximum Principle), the solution of this cheap control problem is reduced to the solution of the following three problems: (i) a terminal-value problem for an extended matrix Riccati type differential equation; (ii) an initial-value problem for an extended vector linear differential equation; (iii) a nonlinear optimization (mathematical programming) problem. We analyze an asymptotic behavior of these problems. Using this asymptotic analysis, we design the minimizing sequence of state-feedback controls for the original multi-model singular optimal control problem, and obtain the infimum of the functional of this problem. We illustrate the theoretical results with an academic example.

1. Introduction

Multi-model systems represent the class of uncertain systems depending on an unknown numerical parameter, which belongs to some given set. This set can be either finite or infinite and compact. Thus, a multi-model system represents a set of single-model systems, each of which is associated with one of the aforementioned parameters. The optimal control problem for a multi-model system is a min-max optimization problem: the functional is maximized with respect to the parameter and minimized with respect to the control. For multi-model optimal control problems, the first-order optimality condition (Robust Maximum Principle) was recently developed in the book [1] (see also the book [2]). Among other settings, in which versions of multi-model systems different from those of [1,2] are analyzed, we can mention, for instance, the following: robust optimization in spline regression for multi-model regulatory networks (see, e.g., [3] and references therein), multi-regime stochastic differential games with jumps (see, e.g., [4] and references therein), games with fuzzy uncertainty (see, e.g., [5] and references therein), and robust portfolio optimization under parallelepiped uncertainty (see, e.g., [6] and references therein).
A singular optimal control problem is one for which the first-order optimality conditions, namely, the Maximum Principle [7], the Robust Maximum Principle [1], and the Hamilton–Jacobi–Bellman equation [8], are not applicable for obtaining its solution. Single-model singular optimal control problems are extensively studied in the literature. Several approaches to the analysis and solution of such problems are widely used. Thus, higher-order necessary/sufficient control optimality conditions can be useful in solving singular optimal control problems (see, e.g., [9,10,11,12,13] and references therein). However, the higher-order optimality conditions fail to yield a candidate optimal control/optimal control for a problem which does not have an optimal control in the class of regular (non-generalized) functions, even if the problem's functional has a finite infimum/supremum in this class of functions. The second approach is based on the design of a singular optimal control as a minimizing sequence of regular open-loop controls. This minimizing sequence is a sequence of regular control functions of time, along which the functional tends to its infimum/supremum (see, e.g., [12,14,15] and references therein). A generalization of this approach is the extension approach (see [16,17,18]). The third approach combines geometric and analytic methods. Namely, this approach is based on a decomposition of the state space into the "singular" and "regular" subspaces, and a design of an optimal open-loop control as a sum of impulsive (in the singular subspace) and regular (in the regular subspace) functions (see, e.g., [19,20,21,22] and references therein). The fourth approach proposes to look for a solution to a singular optimal control problem in a properly defined class of generalized functions (see, e.g., [23]).
Finally, the fifth approach is based on regularization of the original singular problem by a “small” correction of its “singular” functional (see e.g., [24,25,26] and references therein). Such a regularization is a kind of Tikhonov’s regularization of ill-posed problems [27]. This approach yields the solution to the original problem in the form of a minimizing sequence of state feedback controls.
However, to the best of our knowledge, multi-model singular optimal control problems have not been considered in the literature. In this paper, we consider the finite horizon multi-model singular linear-quadratic optimal control problem. We solve this problem by applying the regularization approach, which yields a new regular optimal control problem. The latter problem is a multi-model cheap control problem. To the best of our knowledge, multi-model cheap control problems also have not been considered in the literature. We carry out an asymptotic analysis of the multi-model cheap control problem obtained in this paper. Based on this analysis, a minimizing sequence of state-feedback controls in the original multi-model singular control problem is designed, and the infimum of the functional of this problem is derived.
It should be noted that the present paper is rather theoretical. Its motivation is to extend the regularization approach to analysis and solution of multi-model singular optimal control problems. Since the Robust Maximum Principle, applied to multi-model optimal control problems, differs considerably from the Maximum Principle, applied to single-model optimal control problems, the aforementioned extension is not trivial. It requires obtaining significantly new results in the asymptotic analysis of singularly perturbed problems, as well as significantly new results in the asymptotic analysis of optimization (mathematical programming) problems.
We organize the paper as follows. In the next section (Section 2), we present the rigorous formulation of the considered problem, as well as the main definitions. In Section 3, we regularize the original singular problem. This regularization yields a new problem: the multi-model cheap control problem. Using the Robust Maximum Principle, we present the solvability conditions of this new problem. In Section 4, we analyze these solvability conditions asymptotically. Based on this analysis, in Section 5, we design the minimizing sequence of state-feedback controls for the original multi-model singular optimal control problem and obtain the infimum of the functional of this problem. In Section 6, we present an illustrative academic example. We devote Section 7 to the concluding remarks and outlook.
The following main notations are applied in the paper.
1. $E^n$ denotes the $n$-dimensional real Euclidean space.
2. $\|\cdot\|$ denotes the Euclidean norm either of a vector or of a matrix.
3. The superscript "$T$" denotes the transposition of a matrix $A$ ($A^T$), or of a vector $x$ ($x^T$).
4. $L^2[a,b;E^n]$ denotes the linear space of $n$-dimensional vector-valued real functions, square-integrable on the finite interval $[a,b]$.
5. $O_{n_1\times n_2}$ is used for the zero matrix of the dimension $n_1\times n_2$, except in the cases where the dimension of the zero matrix is obvious. In such cases, the notation $0$ is used for the zero matrix.
6. $I_n$ is the $n$-dimensional identity matrix.
7. $\operatorname{col}(x,y)$, where $x\in E^n$, $y\in E^m$, denotes the column block-vector of the dimension $n+m$ with the upper block $x$ and the lower block $y$.
8. The inequality $A\le B$, where $A$ and $B$ are square symmetric matrices of the same dimension, means that the matrix $B-A$ is positive semi-definite.

2. Problem Formulation and Main Definitions

Consider the following multi-model system:
$$\frac{dw_k(t)}{dt}=A_k(t)w_k(t)+B_k(t)u(t),\quad w_k(0)=\tilde w_0,\quad t\in[0,t_f],\quad k\in\{1,2,\ldots,K\},\ K>1,\tag{1}$$
where $w_k(t)$, $(k\in\{1,2,\ldots,K\})$ is a state in the multi-model system and $w_k(t)\in E^n$, $(k=1,2,\ldots,K)$; $u(t)$ is a control in the multi-model system and $u(t)\in E^r$ ($r\le n$); $t_f>0$ is a given time instant; $A_k(t)$ and $B_k(t)$, $t\in[0,t_f]$, $(k=1,2,\ldots,K)$ are given matrix-valued continuous functions of corresponding dimensions; $\tilde w_0\in E^n$ is a given vector.
Let us consider the following functional:
$$F(u,k)=w_k^T(t_f)\tilde H w_k(t_f)+\int_0^{t_f}w_k^T(t)\tilde D(t)w_k(t)\,dt,\quad k\in\{1,2,\ldots,K\},\tag{2}$$
where H ˜ is a constant symmetric positive semi-definite n × n -matrix; for any t [ 0 , t f ] , D ˜ ( t ) is a symmetric positive semi-definite n × n -matrix.
Based on the functional F ( u , k ) , we construct the performance index evaluating the control process of the multi-model system (1)
$$J(u)=\max_{k\in\{1,2,\ldots,K\}}F(u,k)\ \rightarrow\ \inf_{u}.\tag{3}$$
Remark 1. 
Since the control u ( · ) is not present explicitly in the functional F ( u , k ) (and, therefore, in the functional J ( u ) ), the first-order optimality conditions (see [1]) fail to yield an optimal control to the problem (1), (3). Thus, this problem is a singular optimal control problem.
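To make the min-max cost (3) concrete, it can be evaluated numerically for a toy instance of the dynamics (1) and the functional (2). In the sketch below, all numerical data (two scalar models, constant coefficients, and the test feedback $u=-w$) are illustrative assumptions, not data from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

K, tf, w0 = 2, 1.0, 1.0
A = [-1.0, 0.5]        # A_k(t), frozen constant for simplicity (assumption)
B = [1.0, 1.0]         # B_k(t)
H, D = 1.0, 1.0        # terminal weight H~ and running weight D~(t)

def trapezoid(y, x):
    """Plain trapezoidal rule (avoids the np.trapz / np.trapezoid rename)."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def F(u, k):
    """Cost F(u, k) of model k driven by the state feedback u(w, t)."""
    rhs = lambda t, w: [A[k] * w[0] + B[k] * u(w[0], t)]
    sol = solve_ivp(rhs, (0.0, tf), [w0], dense_output=True, rtol=1e-8)
    ts = np.linspace(0.0, tf, 2001)
    ws = sol.sol(ts)[0]
    return H * ws[-1] ** 2 + trapezoid(D * ws ** 2, ts)

u = lambda w, t: -w                    # an arbitrary admissible feedback
J = max(F(u, k) for k in range(K))     # worst-case cost, cf. (3)
print(J)
```

For this data the second model is the worst case: its closed loop $w'=-0.5w$ decays slower, and $F$ for it equals $e^{-1}+(1-e^{-1})=1$ exactly.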
Let us introduce the vectors
$$w=\operatorname{col}(w_1,w_2,\ldots,w_K)\in E^{Kn},\qquad w^0=\operatorname{col}(\tilde w_0,\tilde w_0,\ldots,\tilde w_0)\in E^{Kn},\tag{4}$$
and the set $\mathcal U$ of all functions $u=u(w,t):E^{Kn}\times[0,t_f]\to E^r$, which are measurable with respect to $t\in[0,t_f]$ for any fixed $w\in E^{Kn}$ and satisfy the local Lipschitz condition with respect to $w\in E^{Kn}$ uniformly in $t\in[0,t_f]$.
Definition 1. 
By $U$, we denote the subset of the set $\mathcal U$, such that the following conditions are valid:
(i) 
for any u ( w , t ) U and any w 0 E K n of the form in (4), the initial-value problem (1) with k = 1 , 2 , , K and u ( t ) = u ( w , t ) has the unique absolutely continuous solution w u ( t ; w 0 ) = col w 1 , u ( t ; w 0 ) , w 2 , u ( t ; w 0 ) , , w K , u ( t ; w 0 ) in the entire interval [ 0 , t f ] ;
(ii) 
$u\bigl(w_u(t;w^0),t\bigr)\in L^2[0,t_f;E^r]$.
Such a defined set U is called the set of all admissible state-feedback controls in the problem (1) and (3).
Remark 2. 
Since for any k { 1 , 2 , , K } , any u ( w , t ) U and any w 0 E K n of the form in (4), the value of the functional F ( u , k ) with u ( t ) = u ( w , t ) is non-negative, then for any aforementioned w 0 E K n , there exists a finite infimum J * ( w 0 ) of the functional J ( u ) with respect to u ( t ) = u ( w , t ) U in the problem (1) and (3).
Consider a sequence of the functions $u_q^*(w,t)\in U$, $(q=1,2,\ldots)$.
Definition 2. 
The sequence $\bigl\{u_q^*(w,t)\bigr\}_{q=1}^{+\infty}$ is called a minimizing robust control sequence (or briefly, a minimizing sequence) of the optimal control problem (1) and (3) if for any $w^0\in E^{Kn}$ of the form in (4):
(a) 
there exists $\lim_{q\to+\infty}J\bigl(u_q^*(w,t)\bigr)$;
(b) 
the following equality is valid:
$$\lim_{q\to+\infty}J\bigl(u_q^*(w,t)\bigr)=J^*(w^0).$$
In this case, the value J * ( w 0 ) is called an optimal value of the functional in the problem (1) and (3).
The objective of the paper is to design the minimizing sequence of the optimal control problem (1) and (3) and to derive the expression for the optimal value of the functional in this problem.

3. Regularization of the Optimal Control Problem (1) and (3)

3.1. Multi-Model Cheap Control Problem

To design the minimizing sequence of the problem (1) and (3), first, we are going to regularize it. Namely, we replace (approximately) the singular problem (1) and (3) with a parameter-dependent regular optimal control problem. This new problem has the same multi-model dynamics (1) as the original singular problem has. However, the functional in the new problem, having the regular form, differs from the original functional J ( u ) . Namely, the functional in the new problem has the form
$$J_\varepsilon(u)=\max_{k\in\{1,2,\ldots,K\}}F_\varepsilon(u,k),\tag{5}$$
where
$$F_\varepsilon(u,k)=w_k^T(t_f)\tilde H w_k(t_f)+\int_0^{t_f}\bigl[w_k^T(t)\tilde D(t)w_k(t)+\varepsilon^2u^T(t)u(t)\bigr]dt,\quad k\in\{1,2,\ldots,K\},\tag{6}$$
ε > 0 is a small parameter.
Remark 3. 
Like in the original optimal control problem (1) and (3) the objective of the control u in the new problem (1), (5) and (6) is the minimization of its functional by a proper choice of u = u ( w , t ) U .
Remark 4. 
Since the parameter ε > 0 is small, the problem (1), (5) and (6) is a cheap control problem, i.e., an optimal control problem with a control cost much smaller than a state cost in the functional. Single-model cheap control problems have been studied extensively in the literature (see e.g., [24,25,26,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46] and references therein). However, to the best of our knowledge, multi-model cheap control problems have not yet been studied in the literature. It is important to note that, due to the smallness of the control cost, a cheap control problem can be transformed to an optimal control problem for a singularly perturbed system. Various results in the topic of optimal control problems for singularly perturbed single-model systems can be found, for instance, in [36,39,40,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61] and references therein. However, to the best of our knowledge, optimal control problems for singularly perturbed multi-model systems have not yet been studied in the literature.

3.2. Solvability Conditions of the Optimal Control Problem (1), (5) and (6)

Based on the results of the book [1] (Section 9.4), let us introduce for consideration the following block-diagonal K n × K n -matrices:
$$A(t)=\operatorname{diag}\bigl(A_1(t),A_2(t),\ldots,A_K(t)\bigr),\qquad H=\operatorname{diag}\bigl(\tilde H,\tilde H,\ldots,\tilde H\bigr),$$
$$D(t)=\operatorname{diag}\bigl(\tilde D(t),\tilde D(t),\ldots,\tilde D(t)\bigr),\qquad \Lambda=\operatorname{diag}\bigl(\lambda_1I_n,\lambda_2I_n,\ldots,\lambda_KI_n\bigr),\tag{7}$$
where $\lambda_k$, $(k=1,2,\ldots,K)$ are scalar nonnegative parameters satisfying the condition $\sum_{k=1}^K\lambda_k=1$, i.e., the vector $\lambda=\operatorname{col}(\lambda_1,\lambda_2,\ldots,\lambda_K)$ belongs to the set
$$\Omega_\lambda=\Bigl\{\lambda=\operatorname{col}(\lambda_1,\lambda_2,\ldots,\lambda_K)\in E^K:\ \lambda_1\ge 0,\ \lambda_2\ge 0,\ \ldots,\ \lambda_K\ge 0,\ \sum_{k=1}^K\lambda_k=1\Bigr\}.$$
Along with the above-introduced block-diagonal matrices, let us introduce for consideration the following block-form matrix:
$$B(t)=\begin{pmatrix}B_1(t)\\ B_2(t)\\ \vdots\\ B_K(t)\end{pmatrix}.\tag{8}$$
Based on the matrices in (7) and (8), we consider the following terminal-value problem for the matrix Riccati differential equation:
$$\frac{dP(t)}{dt}=-P(t)A(t)-A^T(t)P(t)+P(t)S(t,\varepsilon)P(t)-\Lambda D(t),\quad t\in[0,t_f],\quad P(t_f)=\Lambda H,\tag{9}$$
where
$$S(t,\varepsilon)=\frac{1}{\varepsilon^2}B(t)B^T(t).\tag{10}$$
Remark 5. 
For any λ Ω λ and any ε > 0 , the terminal-value problem (9) has the unique solution P ( t ) = P ( t , λ , ε ) in the entire interval [ 0 , t f ] , and P T ( t , λ , ε ) = P ( t , λ , ε ) .
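Remark 5 can be checked numerically: for fixed $\lambda$ and $\varepsilon$, the terminal-value problem (9) is a standard matrix Riccati ODE and integrates backward from $t_f$ without difficulty. A minimal sketch with $K=2$, $n=r=1$ (all matrices and the point $\lambda=(0.4,0.6)$ of $\Omega_\lambda$ are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

tf, eps = 1.0, 0.1
A = np.diag([-1.0, 0.5])          # block-diagonal A(t), taken constant
Bmat = np.array([[1.0], [1.0]])   # stacked B(t) of (8)
Dt = np.eye(2)                    # D(t), with D~ = 1
Hm = np.eye(2)                    # H, with H~ = 1
Lam = np.diag([0.4, 0.6])         # Λ for λ = (0.4, 0.6)
S = Bmat @ Bmat.T / eps**2        # S(t, eps) of (10)

def riccati_rhs(t, p):
    """Right-hand side of the Riccati equation (9), flattened."""
    P = p.reshape(2, 2)
    dP = -P @ A - A.T @ P + P @ S @ P - Lam @ Dt
    return dP.ravel()

# integrate backward from the terminal condition P(tf) = Λ H to t = 0
sol = solve_ivp(riccati_rhs, (tf, 0.0), (Lam @ Hm).ravel(),
                rtol=1e-9, atol=1e-12)
P0 = sol.y[:, -1].reshape(2, 2)   # P(0, λ, ε)
print(P0)
```

As Remark 5 states, the computed solution stays symmetric (and, the weights being positive semi-definite, positive semi-definite) along the whole interval.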
Let us introduce the vector $\kappa\in E^K$, the set
$$\Omega_\kappa=\Bigl\{\kappa=\operatorname{col}(\kappa_1,\kappa_2,\ldots,\kappa_K)\in E^K:\ \kappa_1\ge 0,\ \kappa_2\ge 0,\ \ldots,\ \kappa_K\ge 0,\ \sum_{k=1}^K\kappa_k=1\Bigr\},\tag{11}$$
and the matrices
$$H_\kappa=\operatorname{diag}\bigl(\kappa_1\tilde H,\kappa_2\tilde H,\ldots,\kappa_K\tilde H\bigr),\qquad D_\kappa(t)=\operatorname{diag}\bigl(\kappa_1\tilde D(t),\kappa_2\tilde D(t),\ldots,\kappa_K\tilde D(t)\bigr).\tag{12}$$
Proposition 1. 
For a given ε > 0 , the robust optimal state-feedback control u = u ε * ( w , t , λ * ) of the multi-model cheap control problem (1), (5) and (6) has the form
$$u_\varepsilon^*(w,t,\lambda^*)=-\frac{1}{\varepsilon^2}B^T(t)P(t,\lambda^*,\varepsilon)w,\quad w\in E^{Kn},\ t\in[0,t_f],\tag{13}$$
where
$$\lambda^*=\lambda^*(\varepsilon)=\operatorname*{argmin}_{\lambda\in\Omega_\lambda}I(\lambda,\varepsilon),\tag{14}$$
$$I(\lambda,\varepsilon)=(w^0)^TP(0,\lambda,\varepsilon)w^0-w^T(t_f)\Lambda Hw(t_f)-\int_0^{t_f}w^T(t)\Lambda D(t)w(t)\,dt+\max_{\kappa\in\Omega_\kappa}\Bigl[\int_0^{t_f}w^T(t)D_\kappa(t)w(t)\,dt+w^T(t_f)H_\kappa w(t_f)\Bigr];\tag{15}$$
the vector w 0 E K n is of the form in (4) and w ( t ) = w ( t , λ , ε ) is the solution of the initial-value problem
$$\frac{dw(t)}{dt}=\bigl[A(t)-S(t,\varepsilon)P(t,\lambda,\varepsilon)\bigr]w(t),\quad w(0)=w^0,\quad t\in[0,t_f].\tag{16}$$
The optimal value I ε * of the functional in the problem (1), (5) and (6) is
$$I_\varepsilon^*=I\bigl(\lambda^*(\varepsilon),\varepsilon\bigr).$$
Proof. 
The statements of the proposition are direct consequences of the results of [1] (Section 9.4). □
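Proposition 1 reduces the cheap control problem to the Riccati problem (9), the closed-loop trajectory (16), and the finite-dimensional minimization (14) over the simplex $\Omega_\lambda$. For $K=2$ the simplex is one-dimensional, so $\lambda^*$ can be approximated by brute force; note that the bracketed term in (15) is linear in $\kappa$, so the inner maximum over $\Omega_\kappa$ is attained at a vertex, i.e., it is a maximum over the individual models. The sketch below uses illustrative scalar data ($K=2$, $n=r=1$, $\tilde H=\tilde D=1$), all of which are assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

tf, eps = 1.0, 0.1
A = np.diag([-1.0, 0.5])              # block-diagonal A(t), K = 2, n = 1
Bm = np.array([[1.0], [1.0]])         # stacked B(t), r = 1
w0 = np.array([1.0, 1.0])             # w^0: the same initial state replicated
S = Bm @ Bm.T / eps**2                # S(t, eps) of (10)

def trapezoid(y, x):
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def I(lam):
    """The functional I(lam, eps) of (15) for lam = (lam_1, lam_2)."""
    Lam = np.diag(lam)
    def p_rhs(t, p):                  # Riccati equation (9); here D~ = H~ = 1
        P = p.reshape(2, 2)
        return (-P @ A - A.T @ P + P @ S @ P - Lam).ravel()
    Psol = solve_ivp(p_rhs, (tf, 0.0), Lam.ravel(),
                     dense_output=True, rtol=1e-8)
    def w_rhs(t, w):                  # closed-loop dynamics (16)
        return (A - S @ Psol.sol(t).reshape(2, 2)) @ w
    wsol = solve_ivp(w_rhs, (0.0, tf), w0, dense_output=True, rtol=1e-8)
    ts = np.linspace(0.0, tf, 2001)
    W = wsol.sol(ts)
    per_model = [trapezoid(W[k]**2, ts) + W[k, -1]**2 for k in range(2)]
    mixed = lam[0] * per_model[0] + lam[1] * per_model[1]
    return float(w0 @ Psol.sol(0.0).reshape(2, 2) @ w0) - mixed + max(per_model)

lam_star = min(((g, 1.0 - g) for g in np.linspace(0.0, 1.0, 21)), key=I)
print(lam_star, I(lam_star))
```

A grid scan is only a sketch of the argmin in (14); any method for minimizing a continuous function over the simplex would do.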

4. Asymptotic Analysis of the Solvability Conditions to the Problem (1), (5) and (6)

4.1. Transformation of the Terminal-Value Problem (9), the Initial-Value Problem (16) and the Optimization Problem (14) and (15)

In what follows, we assume that:
AI. For any $k\in\{1,2,\ldots,K\}$ and any $t\in[0,t_f]$, the matrix $B_k(t)$ has full column rank $r$.
AII. For any $k\in\{1,2,\ldots,K\}$ and any $t\in[0,t_f]$, $\det\bigl(B_k^T(t)\tilde D(t)B_k(t)\bigr)\ne 0$.
AIII. For any $k\in\{1,2,\ldots,K\}$, $\tilde H B_k(t_f)=0$.
AIV. The matrix-valued functions $A_k(t)$, $(k=1,2,\ldots,K)$ are continuously differentiable in the interval $[0,t_f]$.
AV. The matrix-valued functions $B_k(t)$, $(k=1,2,\ldots,K)$ and $\tilde D(t)$ are twice continuously differentiable in the interval $[0,t_f]$.
Let, for any $t\in[0,t_f]$, $B_c(t)$ be a complement matrix to the matrix $B(t)$ defined in (8). Thus, the dimension of the matrix $B_c(t)$ is $Kn\times(Kn-r)$, and the block-form matrix $\bigl(B_c(t),B(t)\bigr)$ is invertible for all $t\in[0,t_f]$. Due to the definition of the matrix-valued function $B(t)$, as well as the assumption AV and the results of the book [62] (Section 3.3), the matrix-valued function $B_c(t)$ can be chosen twice continuously differentiable in the interval $[0,t_f]$.
Lemma 1. 
Let the assumptions AII and AV be satisfied. Then, there exist numbers $0<\nu_{\min}\le\nu_{\max}$ such that, for all $t\in[0,t_f]$ and all $\lambda\in\Omega_\lambda$, the following relation is valid:
$$\nu_{\min}I_r\le B^T(t)\Lambda D(t)B(t)\le\nu_{\max}I_r.\tag{17}$$
Thus, for all $t\in[0,t_f]$ and all $\lambda\in\Omega_\lambda$, the matrix $B^T(t)\Lambda D(t)B(t)$ is invertible and
$$\frac{1}{\nu_{\max}}I_r\le\bigl(B^T(t)\Lambda D(t)B(t)\bigr)^{-1}\le\frac{1}{\nu_{\min}}I_r.\tag{18}$$
Proof. 
Using the definitions of the matrices $D(t)$, $\Lambda$ and $B(t)$ in Equations (7) and (8), we directly obtain
$$B^T(t)\Lambda D(t)B(t)=\sum_{k=1}^K\lambda_kB_k^T(t)\tilde D(t)B_k(t),\quad t\in[0,t_f],\quad \lambda=\operatorname{col}(\lambda_1,\lambda_2,\ldots,\lambda_K)\in\Omega_\lambda.\tag{19}$$
Since the matrix $\tilde D(t)$ is symmetric and positive semi-definite for each $t\in[0,t_f]$ and, by the assumption AII, the matrices $B_k^T(t)\tilde D(t)B_k(t)$ are nonsingular, these matrices, $(k=1,2,\ldots,K)$, are symmetric and positive definite for each $t\in[0,t_f]$. Therefore, all the eigenvalues of each of these matrices are real and positive numbers for each $t\in[0,t_f]$. Moreover, due to the assumption AV and the results of [63], these eigenvalues are continuous functions of $t\in[0,t_f]$. Let $\mu_{k,i}(t)$, $(k=1,2,\ldots,K;\ i=1,\ldots,r)$, $t\in[0,t_f]$ be all the eigenvalues (including equal ones) of the matrix $B_k^T(t)\tilde D(t)B_k(t)$. Then, we have
$$0<\mu_{k,\min}=\min_{t\in[0,t_f]}\min_{i\in\{1,\ldots,r\}}\mu_{k,i}(t)\le\max_{t\in[0,t_f]}\max_{i\in\{1,\ldots,r\}}\mu_{k,i}(t)=\mu_{k,\max},\quad k=1,2,\ldots,K.\tag{20}$$
Note that $\mu_{k,\max}$, $(k=1,2,\ldots,K)$ are finite values.
Using Equations (19) and (20), we obtain
$$\Bigl(\sum_{k=1}^K\lambda_k\mu_{k,\min}\Bigr)I_r\le B^T(t)\Lambda D(t)B(t)\le\Bigl(\sum_{k=1}^K\lambda_k\mu_{k,\max}\Bigr)I_r.\tag{21}$$
Let us choose the numbers $\nu_{\min}$ and $\nu_{\max}$ as:
$$\nu_{\min}=\min_{k\in\{1,2,\ldots,K\}}\mu_{k,\min},\qquad \nu_{\max}=\max_{k\in\{1,2,\ldots,K\}}\mu_{k,\max}.\tag{22}$$
Such a choice of $\nu_{\min}$ and $\nu_{\max}$, along with Equation (20), the relation (21) and the inclusion $\lambda=\operatorname{col}(\lambda_1,\lambda_2,\ldots,\lambda_K)\in\Omega_\lambda$, directly yields the relation (17). The relation (18) is an immediate consequence of the relation (17). This completes the proof of the lemma. □
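The construction in the proof of Lemma 1 is easy to reproduce numerically: take the eigenvalue ranges of the per-model matrices $B_k^T\tilde D B_k$, and check that the resulting $\nu_{\min}$, $\nu_{\max}$ bracket $B^T\Lambda DB$ on a grid of $\Omega_\lambda$. The constant matrices below ($K=2$, $n=2$, $r=1$) are illustrative assumptions; Lemma 1 also sweeps over $t$, which a constant-coefficient sketch omits:

```python
import numpy as np

Bk = [np.array([[1.0], [0.5]]), np.array([[0.0], [1.0]])]   # B_k, n = 2, r = 1
Dtil = np.diag([2.0, 1.0])                                  # D~, positive definite here

# per-model r x r matrices B_k^T D~ B_k and their eigenvalue ranges (cf. (19), (20))
M = [b.T @ Dtil @ b for b in Bk]
mu = [np.linalg.eigvalsh(m) for m in M]
nu_min = min(m.min() for m in mu)     # cf. the choice of nu_min in the proof
nu_max = max(m.max() for m in mu)     # cf. the choice of nu_max in the proof

# verify the two-sided bound (17) on a grid of the simplex Omega_lambda
ok = True
for lam1 in np.linspace(0.0, 1.0, 11):
    lam = [lam1, 1.0 - lam1]
    G = sum(l * m for l, m in zip(lam, M))   # B^T Λ D B via (19)
    ev = np.linalg.eigvalsh(G)
    ok &= (ev.min() >= nu_min - 1e-12) and (ev.max() <= nu_max + 1e-12)
print(nu_min, nu_max, ok)
```

Here $B_1^T\tilde DB_1=2.25$ and $B_2^T\tilde DB_2=1$, so $\nu_{\min}=1$ and $\nu_{\max}=2.25$, and every convex combination lies between them.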
Consider the following matrix-valued functions of $(t,\lambda)\in[0,t_f]\times\Omega_\lambda$:
$$L(t,\lambda)=B_c(t)-B(t)\bigl(B^T(t)\Lambda D(t)B(t)\bigr)^{-1}B^T(t)\Lambda D(t)B_c(t),\qquad R(t,\lambda)=\bigl(L(t,\lambda),\,B(t)\bigr).$$
Remark 6. 
Due to Lemma 1 and the results of [62] (Section 3.3), the matrix R ( t , λ ) is invertible and R ( t , λ ) , R 1 ( t , λ ) are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Moreover, the matrix-valued function R ( t , λ ) is twice continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and this function is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
Using the aforementioned matrix-valued function R ( t , λ ) and its properties, we transform the unknown P ( t ) in the terminal-value problem (9) as follows:
$$P(t)=\bigl(R^T(t,\lambda)\bigr)^{-1}\mathcal P(t)R^{-1}(t,\lambda),\quad t\in[0,t_f],\quad \lambda\in\Omega_\lambda,\tag{23}$$
where $\mathcal P(t)$ is a new unknown matrix-valued function.
By virtue of the results of [62] (Section 3.3), as well as Equation (10), Lemma 1 and Remark 6, we directly have the following assertion.
Proposition 2. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 and any λ Ω λ , the transformation (23) converts the terminal-value problem (9) to the new terminal-value problem
$$\frac{d\mathcal P(t)}{dt}=-\mathcal P(t)A(t,\lambda)-A^T(t,\lambda)\mathcal P(t)+\mathcal P(t)S(\varepsilon)\mathcal P(t)-D(t,\lambda),\quad t\in[0,t_f],\quad \mathcal P(t_f)=H(\lambda),\tag{24}$$
where
$$A(t,\lambda)=R^{-1}(t,\lambda)\bigl[A(t)R(t,\lambda)-dR(t,\lambda)/dt\bigr],\tag{25}$$
$$R^{-1}(t,\lambda)B(t)=\begin{pmatrix}O_{(Kn-r)\times r}\\ I_r\end{pmatrix}\equiv\bar B,\tag{26}$$
$$S(\varepsilon)=\frac{1}{\varepsilon^2}\bar B\bar B^T=\begin{pmatrix}O_{(Kn-r)\times(Kn-r)}&O_{(Kn-r)\times r}\\ O_{r\times(Kn-r)}&(1/\varepsilon^2)I_r\end{pmatrix},\tag{27}$$
$$D(t,\lambda)=R^T(t,\lambda)\Lambda D(t)R(t,\lambda)=\begin{pmatrix}D_1(t,\lambda)&O_{(Kn-r)\times r}\\ O_{r\times(Kn-r)}&D_2(t,\lambda)\end{pmatrix},\tag{28}$$
$$H(\lambda)=R^T(t_f,\lambda)\Lambda HR(t_f,\lambda)=\begin{pmatrix}H_1(\lambda)&O_{(Kn-r)\times r}\\ O_{r\times(Kn-r)}&O_{r\times r}\end{pmatrix},\tag{29}$$
$$D_1(t,\lambda)=L^T(t,\lambda)\Lambda D(t)L(t,\lambda),\qquad D_2(t,\lambda)=B^T(t)\Lambda D(t)B(t),\tag{30}$$
$$H_1(\lambda)=L^T(t_f,\lambda)\Lambda HL(t_f,\lambda).\tag{31}$$
For all t [ 0 , t f ] and λ Ω λ , the matrix D 1 ( t , λ ) is positive semi-definite, while the matrix D 2 ( t , λ ) is positive definite. For all λ Ω λ , the matrix H 1 ( λ ) is positive semi-definite, and the matrix-valued function H 1 ( λ ) is continuous. The matrix-valued functions A ( t , λ ) , D ( t , λ ) are continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and these functions are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
Remark 7. 
For any $\lambda\in\Omega_\lambda$ and any $\varepsilon>0$, the terminal-value problem (24) has the unique solution $\mathcal P(t)=\mathcal P(t,\lambda,\varepsilon)$ in the entire interval $[0,t_f]$, and $\mathcal P^T(t,\lambda,\varepsilon)=\mathcal P(t,\lambda,\varepsilon)$.
Now, let us make the following transformation of the unknown w ( t ) in the initial-value problem (16):
$$w(t)=R(t,\lambda)z(t),\quad t\in[0,t_f],\quad \lambda\in\Omega_\lambda,\tag{32}$$
where z ( t ) is a new unknown vector-valued function.
As a direct consequence of Proposition 2 and Remark 6, we have the following assertion.
Corollary 1. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 and any λ Ω λ , the transformation (32), along with the transformation (23), converts the initial-value problem (16) to the new initial-value problem
$$\frac{dz(t)}{dt}=\bigl[A(t,\lambda)-S(\varepsilon)\mathcal P(t)\bigr]z(t),\quad z(0)=z^0(\lambda),\quad t\in[0,t_f],\tag{33}$$
where
$$z^0(\lambda)=R^{-1}(0,\lambda)w^0,\tag{34}$$
and the vector-valued function z 0 ( λ ) is continuous for λ Ω λ .
Corollary 2. 
Let the assumptions AI-AV be valid. Then, for any ε > 0 , the transformations (23) and (32) convert the optimization problem (14) and (15) to the equivalent optimization problem
$$\lambda^*=\lambda^*(\varepsilon)=\operatorname*{argmin}_{\lambda\in\Omega_\lambda}J(\lambda,\varepsilon),\tag{35}$$
$$J(\lambda,\varepsilon)=\bigl(z^0(\lambda)\bigr)^T\mathcal P(0,\lambda,\varepsilon)z^0(\lambda)-z^T(t_f,\lambda,\varepsilon)H(\lambda)z(t_f,\lambda,\varepsilon)-\int_0^{t_f}z^T(t,\lambda,\varepsilon)D(t,\lambda)z(t,\lambda,\varepsilon)\,dt$$
$$+\max_{\kappa\in\Omega_\kappa}\Bigl[\int_0^{t_f}z^T(t,\lambda,\varepsilon)R^T(t,\lambda)D_\kappa(t)R(t,\lambda)z(t,\lambda,\varepsilon)\,dt+z^T(t_f,\lambda,\varepsilon)R^T(t_f,\lambda)H_\kappa R(t_f,\lambda)z(t_f,\lambda,\varepsilon)\Bigr],\tag{36}$$
where the vector $z^0(\lambda)$ is given by (34); $z(t,\lambda,\varepsilon)$ is the solution of the initial-value problem (33); $\mathcal P(t,\lambda,\varepsilon)$ is the solution of the terminal-value problem (24); the matrices $D(t,\lambda)$ and $H(\lambda)$ are given by (28) and (29), respectively; the set $\Omega_\kappa$ is given by (11); the matrices $H_\kappa$ and $D_\kappa(t)$ are given in (12).
Moreover,
$$J\bigl(\lambda^*(\varepsilon),\varepsilon\bigr)=I\bigl(\lambda^*(\varepsilon),\varepsilon\bigr),\quad \varepsilon>0.$$
Proof. 
The statements of the corollary follow immediately from Propositions 1 and 2 and Corollary 1. □

4.2. Asymptotic Solution of the Terminal-Value Problem (24)

First of all, let us note that, due to the block form of the matrix $S(\varepsilon)$ (see Equation (27)), the right-hand side of the differential equation in (24) has a singularity with respect to $\varepsilon$ at $\varepsilon=0$. In order to remove this singularity, we look for the solution $\mathcal P(t)=\mathcal P(t,\lambda,\varepsilon)$ of the problem (24) in the form of the block matrix
$$\mathcal P(t,\lambda,\varepsilon)=\begin{pmatrix}P_1(t,\lambda,\varepsilon)&\varepsilon P_2(t,\lambda,\varepsilon)\\ \varepsilon P_2^T(t,\lambda,\varepsilon)&\varepsilon P_3(t,\lambda,\varepsilon)\end{pmatrix},\tag{37}$$
where the matrices P 1 ( t , λ , ε ) , P 2 ( t , λ , ε ) and P 3 ( t , λ , ε ) are of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r and r × r , respectively; P 1 T ( t , λ , ε ) = P 1 ( t , λ , ε ) , P 3 T ( t , λ , ε ) = P 3 ( t , λ , ε ) .
As with the partitioning of the matrix $\mathcal P(t,\lambda,\varepsilon)$, let us also partition the matrix $A(t,\lambda)$ into blocks as follows:
$$A(t,\lambda)=\begin{pmatrix}A_1(t,\lambda)&A_2(t,\lambda)\\ A_3(t,\lambda)&A_4(t,\lambda)\end{pmatrix},\tag{38}$$
where the matrices A 1 ( t , λ ) , A 2 ( t , λ ) , A 3 ( t , λ ) and A 4 ( t , λ ) are of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively.
Now, substituting the block forms of the matrices $S(\varepsilon)$, $D(t,\lambda)$, $H(\lambda)$, $\mathcal P(t,\lambda,\varepsilon)$ and $A(t,\lambda)$ (see Equations (27)–(29), (37) and (38)) into the problem (24), we obtain after a routine matrix algebra the following equivalent terminal-value problem in the time interval $[0,t_f]$:
$$\frac{dP_1(t,\lambda,\varepsilon)}{dt}=-P_1A_1(t,\lambda)-\varepsilon P_2A_3(t,\lambda)-A_1^T(t,\lambda)P_1-\varepsilon A_3^T(t,\lambda)P_2^T+P_2P_2^T-D_1(t,\lambda),\quad P_1(t_f,\lambda,\varepsilon)=H_1(\lambda),\tag{39}$$
$$\varepsilon\frac{dP_2(t,\lambda,\varepsilon)}{dt}=-P_1A_2(t,\lambda)-\varepsilon P_2A_4(t,\lambda)-\varepsilon A_1^T(t,\lambda)P_2-\varepsilon A_3^T(t,\lambda)P_3+P_2P_3,\quad P_2(t_f,\lambda,\varepsilon)=0,\tag{40}$$
$$\varepsilon\frac{dP_3(t,\lambda,\varepsilon)}{dt}=-\varepsilon P_2^TA_2(t,\lambda)-\varepsilon P_3A_4(t,\lambda)-\varepsilon A_2^T(t,\lambda)P_2-\varepsilon A_4^T(t,\lambda)P_3+P_3^2-D_2(t,\lambda),\quad P_3(t_f,\lambda,\varepsilon)=0,$$
$$\text{where }P_j=P_j(t,\lambda,\varepsilon),\ j=1,2,3.\tag{41}$$
Remark 8. 
Since the terminal-value problem (39)–(41) is equivalent to the problem (24), then (due to Remark 7), for any $\lambda\in\Omega_\lambda$ and any $\varepsilon>0$, the problem (39)–(41) has the unique solution $\bigl(P_1(t,\lambda,\varepsilon),P_2(t,\lambda,\varepsilon),P_3(t,\lambda,\varepsilon)\bigr)$ in the entire interval $[0,t_f]$. Also, it should be noted that, for any $\lambda\in\Omega_\lambda$, the terminal-value problem (39)–(41) is a singularly perturbed one for a set of Riccati-type matrix differential equations. In the remainder of this subsection, based on the Boundary Function Method (see, e.g., [64]), we construct and justify the zero-order asymptotic solution of this problem. Namely, we seek this asymptotic solution in the form
$$P_{j0}(t,\lambda,\varepsilon)=P_{j0}^o(t,\lambda)+P_{j0}^b(\tau,\lambda),\quad j=1,2,3,\quad \tau=(t-t_f)/\varepsilon,\tag{42}$$
where the terms with the upper index "o" constitute the so-called outer solution, while the terms with the upper index "b" are the boundary correction terms in a left-hand neighborhood of $t=t_f$; $\tau\le 0$ is a new independent variable, called the stretched time. For any $t\in[0,t_f)$, $\tau\to-\infty$ as $\varepsilon\to+0$. Equations and conditions for obtaining the outer solution and the boundary correction terms are derived by substituting the representation (42) into the terminal-value problem (39)–(41) instead of $P_j(t,\lambda,\varepsilon)$, $(j=1,2,3)$, and equating the coefficients of the same powers of $\varepsilon$ on both sides of the resulting equations, separately the coefficients depending on $t$ and on $\tau$.

4.2.1. Obtaining the Boundary Layer Correction $P_{10}^b(\tau,\lambda)$

For this boundary layer correction, we have the equation
$$\frac{dP_{10}^b(\tau,\lambda)}{d\tau}=0,\quad \tau\le 0,\quad \lambda\in\Omega_\lambda.\tag{43}$$
As in the Boundary Function Method, we require that the boundary layer correction terms tend to zero as $\tau\to-\infty$. Thus, we require that
$$\lim_{\tau\to-\infty}P_{10}^b(\tau,\lambda)=0.\tag{44}$$
Moreover, we require that the limit (44) is uniform with respect to $\lambda\in\Omega_\lambda$.
From Equation (43), we obtain
$$P_{10}^b(\tau,\lambda)=C(\lambda),\quad \tau\le 0,\tag{45}$$
where $C(\lambda)$ is an arbitrary matrix-valued function of $\lambda\in\Omega_\lambda$.
Equation (45), along with the requirement of the fulfillment of the limit relation (44) uniformly in $\lambda\in\Omega_\lambda$, yields
$$P_{10}^b(\tau,\lambda)\equiv 0,\quad \tau\le 0,\quad \lambda\in\Omega_\lambda.\tag{46}$$

4.2.2. Obtaining the Outer Solution Terms

The equations and conditions for these terms are the following for all $t\in[0,t_f]$ and $\lambda\in\Omega_\lambda$:
$$\frac{dP_{10}^o(t,\lambda)}{dt}=-P_{10}^o(t,\lambda)A_1(t,\lambda)-A_1^T(t,\lambda)P_{10}^o(t,\lambda)+P_{20}^o(t,\lambda)\bigl(P_{20}^o(t,\lambda)\bigr)^T-D_1(t,\lambda),\quad P_{10}^o(t_f,\lambda)=H_1(\lambda),\tag{47}$$
$$-P_{10}^o(t,\lambda)A_2(t,\lambda)+P_{20}^o(t,\lambda)P_{30}^o(t,\lambda)=0,\tag{48}$$
$$\bigl(P_{30}^o(t,\lambda)\bigr)^2-D_2(t,\lambda)=0.\tag{49}$$
Remark 9. 
It is important to note that in the system (47)–(49), the unknown matrix-valued functions P 20 o ( t , λ ) and P 30 o ( t , λ ) are not subject to any terminal conditions. This occurs because in (47)–(49) these unknowns are subject to the algebraic (but not differential) equations.
Solving the algebraic Equation (49) and taking into account the positive definiteness of the matrix D 2 ( t , λ ) , we obtain
$$P_{30}^o(t,\lambda)=\bigl(D_2(t,\lambda)\bigr)^{1/2},\quad t\in[0,t_f],\quad \lambda\in\Omega_\lambda,\tag{50}$$
where the superscript “ 1 / 2 ” denotes the unique positive definite square root of the corresponding positive definite matrix.
Remark 10. 
Due to Proposition 2, P 30 o ( t , λ ) is bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Moreover, due to Proposition 2 and the Implicit Function Theorem [65], the matrix-valued function P 30 o ( t , λ ) is continuously differentiable with respect to t [ 0 , t f ] uniformly in λ Ω λ , and d P 30 o ( t , λ ) / d t is bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since D 2 ( t , λ ) is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] , then P 30 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .
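The unique positive definite square root in (50) is computed in practice via the spectral decomposition of $D_2$, which also gives the inverse root $D_2^{-1/2}$ needed in (51). A minimal sketch; the numeric matrix is an illustrative assumption (any symmetric positive definite $D_2$ works the same way):

```python
import numpy as np

D2 = np.array([[2.0, 0.3],
               [0.3, 1.0]])           # symmetric positive definite (r = 2)

# spectral square root: D2 = Q diag(mu) Q^T  ->  D2^{1/2} = Q diag(sqrt(mu)) Q^T
mu, Q = np.linalg.eigh(D2)
P30 = Q @ np.diag(np.sqrt(mu)) @ Q.T            # P_30^o = D_2^{1/2}, cf. (50)
P30_inv = Q @ np.diag(1.0 / np.sqrt(mu)) @ Q.T  # D_2^{-1/2}, used in (51)
print(P30)
```

The eigendecomposition route makes uniqueness transparent: taking the positive branch of each $\sqrt{\mu_i}$ is exactly what singles out the positive definite root among all square roots of $D_2$.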
Solving Equation (48) with respect to $P_{20}^o(t,\lambda)$ and using (50), we have
$$P_{20}^o(t,\lambda)=P_{10}^o(t,\lambda)A_2(t,\lambda)\bigl(D_2(t,\lambda)\bigr)^{-1/2},\quad t\in[0,t_f],\quad \lambda\in\Omega_\lambda,\tag{51}$$
where the superscript "$-1/2$" denotes the inverse of the unique positive definite square root of the corresponding positive definite matrix.
The substitution of (51) into (47) yields the following terminal-value problem with respect to P 10 o ( t , λ ) for all λ Ω λ :
$$\frac{dP_{10}^o(t,\lambda)}{dt}=-P_{10}^o(t,\lambda)A_1(t,\lambda)-A_1^T(t,\lambda)P_{10}^o(t,\lambda)+P_{10}^o(t,\lambda)S_1^o(t,\lambda)P_{10}^o(t,\lambda)-D_1(t,\lambda),\quad t\in[0,t_f],\quad P_{10}^o(t_f,\lambda)=H_1(\lambda),\tag{52}$$
where
$$S_1^o(t,\lambda)=A_2(t,\lambda)D_2^{-1}(t,\lambda)A_2^T(t,\lambda).\tag{53}$$
Remark 11. 
Since, for all t [ 0 , t f ] and all λ Ω λ , the matrices D 1 ( t , λ ) , H 1 ( λ ) are positive semi-definite and the matrix D 2 ( t , λ ) is positive definite (see Proposition 2), then for all λ Ω λ , the terminal-value problem (52) has the unique solution P 10 o ( t , λ ) in the entire interval [ 0 , t f ] . Moreover, due to Proposition 2, P 10 o ( t , λ ) and d P 10 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Therefore, due to Remark 10 and Equations (50) and (51), P 20 o ( t , λ ) and d P 20 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since A 1 ( t , λ ) , S 1 o ( t , λ ) , D 1 ( t , λ ) are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] and H 1 ( λ ) is continuous with respect to λ Ω λ then, by virtue of the results of [66] (Chapter 5), P 10 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] . Therefore, due to Equations (50) and (51), Remark 10 and the continuity of A 2 ( t , λ ) with respect to λ Ω λ uniformly in t [ 0 , t f ] , P 20 o ( t , λ ) also is continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .

4.2.3. Control-Theoretic Interpretation of the Terminal-Value Problem (52)

For any given λ Ω λ , let us consider the optimal control problem with the dynamics described by the system
d x o ( t ) d t = A 1 ( t , λ ) x o ( t ) + A 2 ( t , λ ) u o ( t ) , x o ( 0 ) = w up 0 , t [ 0 , t f ] ,
where x o ( t ) E K n r is a state vector, u o ( t ) E r is a control; w up 0 E K n r is the upper block of the vector w 0 defined in (4).
The functional, to be minimized by u o ( t ) , has the form
J o ( u o ) = x o ( t f ) T H 1 ( λ ) x o ( t f ) + 0 t f x o ( t ) T D 1 ( t , λ ) x o ( t ) + u o ( t ) T D 2 ( t , λ ) u o ( t ) d t .
Let us introduce the set U o of all functions u o = u o ( x o , t , λ ) : E K n r × [ 0 , t f ] × Ω λ E r , which are measurable with respect to t [ 0 , t f ] for any fixed ( x o , λ ) E K n r × Ω λ and satisfy the local Lipschitz condition with respect to x o E K n r uniformly in ( t , λ ) [ 0 , t f ] × Ω λ .
Definition 3. 
By U o , we denote the subset of the set U o , such that the following conditions are valid:
(i) 
for any λ Ω λ , any u o ( x o , t , λ ) U o and any w up 0 E K n r , the initial-value problem (54) with u o ( t ) = u o ( x o , t , λ ) has the unique absolutely continuous solution x u o ( t ; x 0 , λ ) in the entire interval [ 0 , t f ] ;
(ii) 
u o x u o ( t ; x 0 , λ ) , t , λ L 2 [ 0 , t f ; E r ] .
The set U o defined in this way is called the set of all admissible state-feedback controls in the problem (54) and (55).
Based on the results of [67] (Section 5) and [1] (Section 4.3), we immediately have the following assertion.
Proposition 3. 
Let the assumptions AI-AV be satisfied. Then, for any λ Ω λ , the optimal state-feedback control u o = u o * ( x o , t , λ ) of the problem (54) and (55) is
u o * ( x o , t , λ ) = D 2 1 ( t , λ ) A 2 T ( t , λ ) P 10 o ( t , λ ) x o U o .
The optimal value of the functional in the problem (54) and (55) has the form
J o * ( x 0 , λ ) = J o u o * ( x o , t , λ ) = ( w up 0 ) T P 10 o ( 0 , λ ) w up 0 .
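Proposition 3 can be checked numerically on a scalar instance of the problem (54) and (55). The sketch below (with illustrative coefficients a , b , d 1 , d 2 , h , not the data of the paper) solves the scalar version of the Riccati problem (52) backward, simulates the closed loop under the scalar version of the feedback of Proposition 3, u = − ( b / d 2 ) p ( t ) x , and compares the accumulated cost with the predicted optimal value p ( 0 ) x 0 2 :

```python
import numpy as np

# Scalar instance of the problem (54) and (55):  dx/dt = a x + b u,
# J = h x(tf)^2 + int_0^tf ( d1 x^2 + d2 u^2 ) dt.  All values are illustrative.
a, b, d1, d2, h, tf, x0 = -0.3, 1.0, 1.0, 0.5, 2.0, 3.0, 1.5
N = 50000
dt = tf / N

# Backward sweep of the scalar Riccati equation (the scalar case of (52)):
# dp/dt = -2 a p + (b^2 / d2) p^2 - d1,  p(tf) = h.
p = np.empty(N + 1)
p[N] = h
for i in range(N, 0, -1):
    p[i - 1] = p[i] - dt * (-2.0 * a * p[i] + (b * b / d2) * p[i] ** 2 - d1)

# Forward sweep with the feedback u = -(b / d2) p(t) x(t), accumulating the cost.
x, J = x0, 0.0
for i in range(N):
    u = -(b / d2) * p[i] * x
    J += dt * (d1 * x * x + d2 * u * u)
    x += dt * (a * x + b * u)
J += h * x * x
# Proposition 3 predicts J close to p(0) * x0^2.
```

The simulated cost agrees with p ( 0 ) x 0 2 up to the discretization error of the explicit Euler sweeps, which illustrates the optimal-value formula of Proposition 3.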

4.2.4. Obtaining the Boundary Layer Correction Terms P 20 b ( τ , λ ) and P 30 b ( τ , λ )

These terms are obtained as the solution of the terminal-value problem
d P 20 b ( τ , λ ) d τ = P 20 o ( t f , λ ) P 30 b ( τ , λ ) + P 20 b ( τ , λ ) P 30 o ( t f , λ ) + P 20 b ( τ , λ ) P 30 b ( τ , λ ) , d P 30 b ( τ , λ ) d τ = P 30 o ( t f , λ ) P 30 b ( τ , λ ) + P 30 b ( τ , λ ) P 30 o ( t f , λ ) + ( P 30 b ( τ , λ ) ) 2 , P 20 b ( 0 , λ ) = P 20 o ( t f , λ ) , P 30 b ( 0 , λ ) = P 30 o ( t f , λ ) ,
where τ 0 , λ Ω λ .
Substituting the expressions for P 30 o ( t , λ ) and P 20 o ( t , λ ) (see Equations (50) and (51)) into the terminal-value problem (56) and taking into account the terminal condition for P 10 o ( t , λ ) (see Equation (52)), we transform the aforementioned terminal-value problem as follows:
d P 20 b ( τ , λ ) d τ = P 20 b ( τ , λ ) D 2 ( t f , λ ) 1 / 2 + P 30 b ( τ , λ ) + H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 P 30 b ( τ , λ ) , P 20 b ( 0 , λ ) = H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 , τ 0 , λ Ω λ ,
d P 30 b ( τ , λ ) d τ = D 2 ( t f , λ ) 1 / 2 P 30 b ( τ , λ ) + P 30 b ( τ , λ ) D 2 ( t f , λ ) 1 / 2 + P 30 b ( τ , λ ) 2 , P 30 b ( 0 , λ ) = D 2 ( t f , λ ) 1 / 2 , τ 0 , λ Ω λ .
Based on the results of [62] (Section 4.5), we obtain the solution of the terminal-value problem (57) and (58) in the form
P 20 b ( τ , λ ) = 2 H 1 ( λ ) A 2 ( t f , λ ) D 2 ( t f , λ ) 1 / 2 exp 2 D 2 ( t f , λ ) 1 / 2 τ [ I r + exp 2 D 2 ( t f , λ ) 1 / 2 τ ] 1 , τ 0 , λ Ω λ ,
P 30 b ( τ , λ ) = 2 D 2 ( t f , λ ) 1 / 2 exp 2 D 2 ( t f , λ ) 1 / 2 τ [ I r + exp 2 D 2 ( t f , λ ) 1 / 2 τ ] 1 , τ 0 , λ Ω λ .
Due to Lemma 1 (see the inequalities in (17)) and Proposition 2 (see the expression for D 2 ( t , λ ) in (30)), the matrix-valued functions P 20 b ( τ , λ ) and P 30 b ( τ , λ ) are exponentially decaying as τ → − ∞ uniformly with respect to λ Ω λ , i.e., there exist scalar constants a > 0 and β > 0 independent of λ Ω λ such that P 20 b ( τ , λ ) and P 30 b ( τ , λ ) satisfy the inequalities
P 20 b ( τ , λ ) a exp ( β τ ) , P 30 b ( τ , λ ) a exp ( β τ ) , τ 0 , λ Ω λ .
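The closed forms (59) and (60) and the decay estimate (61) can be verified directly. The sketch below, with an illustrative positive definite matrix in place of D 2 ( t f , λ ) , evaluates (60) through the spectral decomposition, checks its differential equation by a central finite difference, and confirms the exponential decay:

```python
import numpy as np

# Illustrative positive definite matrix in the role of D_2(t_f, lambda).
D2 = np.array([[3.0, 0.5],
               [0.5, 2.0]])
mu, Q = np.linalg.eigh(D2)
sqD = (Q * np.sqrt(mu)) @ Q.T            # D2^{1/2}, symmetric positive definite

def P30b(tau):
    """Closed form (60): -2 D^{1/2} e^{2 D^{1/2} tau} (I + e^{2 D^{1/2} tau})^{-1}."""
    E = (Q * np.exp(2.0 * np.sqrt(mu) * tau)) @ Q.T
    return -2.0 * sqD @ E @ np.linalg.inv(np.eye(2) + E)

# At tau = 0 the closed form returns -D2^{1/2}, matching its value in (58).
# The differential equation dP/dtau = D^{1/2} P + P D^{1/2} + P^2 is checked
# by a central finite difference at an interior point tau < 0.
tau, step = -1.0, 1e-6
lhs = (P30b(tau + step) - P30b(tau - step)) / (2.0 * step)
P = P30b(tau)
rhs = sqD @ P + P @ sqD + P @ P
```

Since all factors are functions of the same symmetric matrix, they commute, which is what makes the closed form (60) solve the quadratic matrix equation exactly; the finite-difference residual is at round-off level.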

4.2.5. Justification of the Asymptotic Solution to the Terminal-Value Problem (39)–(41)

Theorem 1. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number ε 0 > 0 independent of λ Ω λ such that, for all ε ( 0 , ε 0 ] , the entries of the solution to the terminal-value problem (39)–(41) P 1 ( t , λ , ε ) , P 2 ( t , λ , ε ) , P 3 ( t , λ , ε ) satisfy the inequalities
P 1 ( t , λ , ε ) P 10 o ( t , λ ) c ε , P j ( t , λ , ε ) P j 0 ( t , λ , ε ) c ε , j = 2 , 3 , t [ 0 , t f ] , λ Ω λ ,
where P j 0 ( t , λ , ε ) , ( j = 2 , 3 ) are given in (42); c > 0 is some constant independent of ε and λ Ω λ .
Proof. 
The proof of the theorem is based on the results of [62] (Section 4.5, Lemma 4.2 and its proof), with proper changes associated with the dependence of the solution to the problem (39)–(41) not only on the parameter ε but also on the vector-valued parameter λ . These changes allow us to prove the uniformity of the inequalities in (62) with respect to λ Ω λ .
Let us make the transformation of the variables in the problem (39)–(41)
P 1 ( t , λ , ε ) = P 10 o ( t , λ ) + Δ 1 ( t , λ , ε ) , P j ( t , λ , ε ) = P j 0 ( t , λ , ε ) + Δ j ( t , λ , ε ) , j = 2 , 3 ,
where Δ j ( t , λ , ε ) , ( j = 1 , 2 , 3 ) are new unknown matrix-valued functions; Δ 1 T ( t , λ , ε ) = Δ 1 ( t , λ , ε ) , Δ 3 T ( t , λ , ε ) = Δ 3 ( t , λ , ε ) .
Using the above introduced new unknown matrix-valued functions, let us construct the following block-form matrix-valued function:
Δ ( t , λ , ε ) = Δ 1 ( t , λ , ε ) ε Δ 2 ( t , λ , ε ) ε Δ 2 T ( t , λ , ε ) ε Δ 3 ( t , λ , ε ) .
Now, let us substitute the representation (63) into the problem (39)–(41). Due to this substitution and the use of Equations (46)–(49) and (56), as well as the block representations of the matrices S ( ε ) , D ( t , λ ) , H ( λ ) , P ( t , λ , ε ) , A ( t , λ ) (see the Equations (27)–(29), (37) and (38)), we obtain after a routine matrix algebra the terminal-value problem for Δ ( t , λ , ε )
d Δ ( t , λ , ε ) d t = Δ ( t , λ , ε ) Θ ( t , λ , ε ) Θ T ( t , λ , ε ) Δ ( t , λ , ε ) + Δ ( t , λ , ε ) S ( ε ) Δ ( t , λ , ε ) Γ ( t , λ , ε ) , t [ 0 , t f ] , Δ ( t f , λ , ε ) = 0 ,
where λ Ω λ ;
Θ ( t , λ , ε ) = A ( t , λ ) S ( ε ) P 0 ( t , λ , ε ) ;
P 0 ( t , λ , ε ) = P 10 ( t , λ , ε ) ε P 20 ( t , λ , ε ) ε P 20 T ( t , λ , ε ) ε P 30 ( t , λ , ε ) ;
the matrix-valued function Γ ( t , λ , ε ) has the block form
Γ ( t , λ , ε ) = Γ 1 ( t , λ , ε ) Γ 2 ( t , λ , ε ) Γ 2 T ( t , λ , ε ) Γ 3 ( t , λ , ε ) ,
and
Γ 1 ( t , λ , ε ) = ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) A 3 ( t , λ ) ε A 3 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) T + P 20 o ( t , λ ) P 20 b ( τ , λ ) T + P 20 b ( τ , λ ) P 20 o ( t , λ ) T + P 20 b ( τ , λ ) P 20 b ( τ , λ ) T , Γ 2 ( t , λ , ε ) = ε d P 20 o ( t , λ ) d t ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) A 4 ( t , λ ) ε A 1 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) ε A 3 T ( t , λ ) P 30 o ( t , λ ) + P 30 b ( τ , λ ) + P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) + P 20 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) , Γ 3 ( t , λ , ε ) = ε d P 30 o ( t , λ ) d t ε P 20 o ( t , λ ) + P 20 b ( τ , λ ) T A 2 ( t , λ ) ε P 30 o ( t , λ ) + P 30 b ( τ , λ ) A 4 ( t , λ ) ε A 2 T ( t , λ ) P 20 o ( t , λ ) + P 20 b ( τ , λ ) ε A 4 T ( t , λ ) P 30 o ( t , λ ) + P 30 b ( τ , λ ) + P 30 o ( t , λ ) P 30 o ( t f , λ ) P 30 b ( τ , λ ) + P 30 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) .
Remark 12. 
Since the terminal-value problem (9) (and, therefore, each of the terminal-value problems (24) and (39)–(41)) has the unique solution in the entire interval [ 0 , t f ] for any λ Ω λ and any ε > 0 , the terminal-value problem (65) also has the unique solution in the entire interval [ 0 , t f ] for any λ Ω λ and any ε > 0 .
Let us estimate the matrix-valued functions Γ j ( t , λ , ε ) , ( j = 1 , 2 , 3 ) . To accomplish this, first, we estimate the last two addends in the expressions for Γ 2 ( t , λ , ε ) and Γ 3 ( t , λ , ε ) . Let us start with the addend P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) . Using Lagrange's formula ([68]) and the expression for the variable τ in Equation (42), we can rewrite this addend as
P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) = d P 20 o ( t ˜ , λ ) d t ( t t f ) P 30 b ( τ , λ ) = ε d P 20 o ( t ˜ , λ ) d t τ P 30 b ( τ , λ ) , t [ 0 , t f ] , λ Ω λ ,
where t ˜ ( t , t f ) , τ = ( t t f ) / ε , ε > 0 .
Due to the inequality for P 30 b ( τ , λ ) in (61), we directly obtain the existence of scalar constants 0 < a 1 < a and 0 < β 1 < β independent of λ Ω λ such that
τ P 30 b ( τ , λ ) a 1 exp ( β 1 τ ) , τ 0 , λ Ω λ .
Equation (69), along with the boundedness of d P 20 o ( t , λ ) / d t (see Remark 11) and the inequality (70), immediately yields the inequality
P 20 o ( t , λ ) P 20 o ( t f , λ ) P 30 b ( τ , λ ) α 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , ε > 0 , α 1 > 0 is some constant independent of ε and λ Ω λ .
Using the boundedness of d P 30 o ( t , λ ) / d t (see Remark 10) and the inequalities in (61), we obtain (quite similarly to the inequality (71)) the following inequalities:
P 20 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 o ( t , λ ) P 30 o ( t f , λ ) P 30 b ( τ , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 b ( τ , λ ) P 30 o ( t , λ ) P 30 o ( t f , λ ) α 2 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , ε > 0 , α 2 > 0 is some constant independent of ε and λ Ω λ .
Now, using Equation (68), the inequalities in (61), and Remarks 10, 11, we directly obtain the following inequalities:
Γ 1 ( t , λ , ε ) b 1 [ ε + exp ( β τ ) ] , Γ l ( t , λ , ε ) b 1 ε 1 + exp ( β 1 τ ) , l = 2 , 3 , τ = ( t t f ) / ε , t [ 0 , t f ] , ε > 0 , λ Ω λ ,
where b 1 > 0 is some constant independent of ε and λ Ω λ ; β is the positive number introduced in (61); β 1 is the positive number introduced in (70).
By virtue of the results of [69], the problem (65) can be rewritten in the equivalent integral form
Δ ( t , λ , ε ) = t f t Φ T ( σ , t , λ , ε ) [ Δ ( σ , λ , ε ) S ( ε ) Δ ( σ , λ , ε ) Γ ( σ , λ , ε ) ] Φ ( σ , t , λ , ε ) d σ , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where, for any given t [ 0 , t f ] , λ Ω λ and ε > 0 , the K n × K n -matrix-valued function Φ ( σ , t , λ , ε ) is the unique solution of the problem
d Φ ( σ , t , λ , ε ) d σ = Θ ( σ , λ , ε ) Φ ( σ , t , λ , ε ) , Φ ( t , t , λ , ε ) = I K n , σ [ t , t f ] .
By Φ 1 ( σ , t , λ , ε ) , Φ 2 ( σ , t , λ , ε ) , Φ 3 ( σ , t , λ , ε ) and Φ 4 ( σ , t , λ , ε ) , we denote the upper left-hand, upper right-hand, lower left-hand and lower right-hand blocks of the matrix Φ ( σ , t , λ , ε ) of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively, i.e.,
Φ ( σ , t , λ , ε ) = Φ 1 ( σ , t , λ , ε ) Φ 2 ( σ , t , λ , ε ) Φ 3 ( σ , t , λ , ε ) Φ 4 ( σ , t , λ , ε ) .
Based on the results of [30] (Lemma 4.2) and taking into account Proposition 2, the Equation (50), the inequalities in (61) and Remarks 10 and 11, we immediately have the following estimates of these blocks for all 0 t σ t f and all λ Ω λ :
Φ l ( σ , t , λ , ε ) b 2 , l = 1 , 3 , Φ 2 ( σ , t , λ , ε ) b 2 ε , Φ 4 ( σ , t , λ , ε ) b 2 ε + exp 0.5 β ( σ t ) / ε , ε ( 0 , ε 1 ] ,
where ε 1 > 0 is some sufficiently small number; b 2 > 0 is some constant independent of ε and λ Ω λ .
Now, we are going to apply the method of successive approximations to the Equation (73). For this purpose, we consider the sequence of the matrix-valued functions Δ i ( t , λ , ε ) i = 0 + given as:
Δ i + 1 ( t , λ , ε ) = t f t Φ T ( σ , t , λ , ε ) [ Δ i ( σ , λ , ε ) S ( ε ) Δ i ( σ , λ , ε ) Γ ( σ , λ , ε ) ] Φ ( σ , t , λ , ε ) d σ , i = 0 , 1 , , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 1 ] ,
where the initial guess Δ 0 ( t , λ , ε ) = 0 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 1 ] .
Since the matrices S ( ε ) , Γ ( t , λ , ε ) and Δ 0 ( t , λ , ε ) are symmetric, the matrices Δ i ( σ , λ , ε ) , ( i = 1 , 2 , ) are also symmetric. Let us represent these matrices in the following block form:
Δ i ( σ , λ , ε ) = Δ i , 1 ( t , λ , ε ) ε Δ i , 2 ( t , λ , ε ) ε Δ i , 2 T ( t , λ , ε ) ε Δ i , 3 ( t , λ , ε ) , i = 1 , 2 , ,
where the dimensions of the blocks in each of these matrices are the same as the dimensions of the corresponding blocks in (64).
Using the block representations of the matrices S ( ε ) , Γ ( t , λ , ε ) , Φ ( σ , t , λ , ε ) and Δ i ( t , λ , ε ) (see Equations (27), (67), (74) and (77)), as well as the inequalities (72) and (75), we obtain the existence of a positive number ε 0 ≤ ε 1 such that, for any ε ( 0 , ε 0 ] and any λ Ω λ , the sequence Δ i ( t , λ , ε ) i = 0 + converges in the linear space of all K n × K n -matrix-valued functions continuous in the interval [ 0 , t f ] . Since the inequalities (72) and (75) are uniform with respect to λ Ω λ and ε ( 0 , ε 0 ] , this convergence is also uniform with respect to λ Ω λ and ε ( 0 , ε 0 ] . Moreover, the following inequalities are fulfilled:
Δ i , j ( t , λ , ε ) c ε , i = 1 , 2 , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] ,
where c > 0 is some constant independent of λ , ε , i and j.
Let
Δ * ( t , λ , ε ) = lim i + Δ i ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Comparison of (73) and (76) directly yields that Δ * ( t , λ , ε ) is the solution of the integral Equation (73) and, therefore, of the terminal-value problem (65) in the entire interval [ 0 , t f ] . Moreover, this solution has a block form similar to (64) and satisfies the inequalities
Δ j * ( t , λ , ε ) c ε , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Taking into account the uniqueness of the solution to the problem (65) (see Remark 12), we have that
Δ j ( t , λ , ε ) = Δ j * ( t , λ , ε ) , j = 1 , 2 , 3 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Using this equation, as well as Equation (63) and the inequalities in (78), we directly obtain the inequalities in (62). This completes the proof of the theorem. □
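The successive-approximation argument of the proof can be illustrated on a scalar analogue of the integral Equation (73): a quadratic integral equation with an O ( ε ) forcing term in place of Γ ( t , λ , ε ) . Starting from the zero initial guess, as in (76), the Picard iterates converge geometrically to a fixed point of size O ( ε ) . All data below are illustrative, not those of the paper.

```python
import numpy as np

# Scalar analogue of (73):
#   Delta(t) = int_{tf}^{t} [ Delta(sigma)^2 - gamma(sigma) ] d sigma,
# with a small forcing gamma = O(eps) in the role of Gamma(t, lambda, eps).
tf, eps = 1.0, 1e-2
N = 2000
t = np.linspace(0.0, tf, N + 1)
gamma = eps * np.cos(t)

def int_from_tf(g):
    """Trapezoidal cumulative integral int_{tf}^{t} g(sigma) d sigma on the grid."""
    h = t[1] - t[0]
    from_zero = np.concatenate(([0.0], np.cumsum(0.5 * h * (g[:-1] + g[1:]))))
    return from_zero - from_zero[-1]     # int_0^t - int_0^tf = int_{tf}^t

# Picard iterates, starting (as in the proof) from the zero initial guess.
delta = np.zeros_like(t)
diffs = []
for _ in range(20):
    new = int_from_tf(delta ** 2 - gamma)
    diffs.append(np.max(np.abs(new - delta)))
    delta = new
# diffs shrinks geometrically; the fixed point satisfies |delta| = O(eps).
```

Because the quadratic term is of size O ( ε 2 ) while the forcing is O ( ε ) , the iteration map is a contraction on a small ball, mirroring the mechanism that yields the estimates (78).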

4.3. Asymptotic Solution of the Initial-Value Problem (33)

First of all, let us note that the matrix P ( t ) , appearing in the right-hand side of the differential Equation in (33), is the unique solution of the terminal-value problem (24). Thus, P ( t ) = P ( t , λ , ε ) , which has the block form (37). Hence, calculating the product S ( ε ) P ( t ) appearing in the right-hand side of the differential Equation in (33), and using Equation (27), we obtain for t [ 0 , t f ] , λ Ω λ , ε > 0
S ( ε ) P ( t ) = S ( ε ) P ( t , λ , ε ) = O ( K n r ) × ( K n r ) O ( K n r ) × r ( 1 / ε ) P 2 T ( t , λ , ε ) ( 1 / ε ) P 3 ( t , λ , ε ) .
Due to Equation (79), the right-hand side of the differential Equation in (33) has a singularity with respect to ε at ε = 0 , meaning that the initial-value problem (33) is singularly perturbed. However, this problem is not in an explicit singular perturbation form. In order to transform the problem (33) into the explicit singular perturbation form, we look for its solution z ( t ) = z ( t , λ , ε ) in the form of the block vector
z ( t , λ , ε ) = col x ( t , λ , ε ) , y ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where x ( t , λ , ε ) E K n r , y ( t , λ , ε ) E r .
Also, let us partition the vector z 0 ( λ ) as follows:
z 0 ( λ ) = col x 0 ( λ ) , y 0 ( λ ) , λ Ω λ ,
where x 0 ( λ ) E K n r , y 0 ( λ ) E r .
Now, substituting the block forms of the matrices A ( t , λ , ε ) , S ( ε ) P ( t , λ , ε ) and the block forms of the vectors z ( t , λ , ε ) , z 0 ( λ ) (see Equations (38), (79)–(81)) into the problem (33), we obtain after a routine matrix-vector algebra the following equivalent initial-value problem in the time interval [ 0 , t f ] :
d x ( t , λ , ε ) d t = A 1 ( t , λ ) x ( t , λ , ε ) + A 2 ( t , λ ) y ( t , λ , ε ) , ε d y ( t , λ , ε ) d t = ε A 3 ( t , λ ) P 2 T ( t , λ , ε ) x ( t , λ , ε ) + ε A 4 ( t , λ ) P 3 ( t , λ , ε ) y ( t , λ , ε ) , x ( 0 , λ , ε ) = x 0 ( λ ) , y ( 0 , λ , ε ) = y 0 ( λ ) ,
where λ Ω λ , ε > 0 .
In the remainder of this subsection, based on the Boundary Function Method (see, e.g., [64]), we construct and justify the zero-order asymptotic solution of the singularly perturbed initial-value problem (82). Taking into account the zero-order asymptotic solution to the terminal-value problem (39)–(41) (see Equation (42)), we look for the zero-order asymptotic solution of the problem (82) in the form
x 0 ( t , λ , ε ) = x 0 o ( t , λ ) + x 0 b , 1 ( θ , λ ) + x 0 b , 2 ( τ , λ ) , y 0 ( t , λ , ε ) = y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , θ = t / ε , τ = ( t t f ) / ε ,
where the terms with the upper index “o” constitute the outer solution; the terms with the upper index “ b , 1 ” are the boundary correction terms in a right-hand neighbourhood of t = 0 ; the terms with the upper index “ b , 2 ” are the boundary correction terms in a left-hand neighbourhood of t = t f ; θ ≥ 0 and τ ≤ 0 are new independent variables. For any t ( 0 , t f ] , θ → + ∞ as ε → + 0 . For any t [ 0 , t f ) , τ → − ∞ as ε → + 0 . The equations and conditions for obtaining the outer solution and the boundary correction terms of each type are derived by substituting the expressions for x 0 ( t , λ , ε ) , y 0 ( t , λ , ε ) , P 20 ( t , λ , ε ) and P 30 ( t , λ , ε ) (see Equations (42) and (83)) into the initial-value problem (82) instead of x ( t , λ , ε ) , y ( t , λ , ε ) , P 2 ( t , λ , ε ) and P 3 ( t , λ , ε ) , respectively, and equating the coefficients of the same powers of ε on both sides of the resulting equations: separately the coefficients depending on t, on θ and on τ .
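The structure of the expansion (83) follows the standard Boundary Function Method pattern. As a self-contained illustration (a scalar model problem, not the system (82) itself), consider ε d y / d t = − y + g ( t ) , y ( 0 ) = y init : the zero-order outer solution is y o ( t ) = g ( t ) , the boundary correction in a right-hand neighbourhood of t = 0 is y b ( θ ) = ( y init − g ( 0 ) ) exp ( − θ ) with θ = t / ε , and the composite approximation is accurate to O ( ε ) :

```python
import numpy as np

eps = 1e-2
g = np.cos                                # smooth right-hand side (illustrative)
y_init = 2.0

def exact(t, steps=4000):
    """Reference solution of eps*y' = -y + g(t), y(0) = y_init, by RK4
    with a step much smaller than the layer width eps."""
    h = t / steps
    y = y_init
    f = lambda s, yc: (-yc + g(s)) / eps
    for k in range(steps):
        s = k * h
        k1 = f(s, y)
        k2 = f(s + 0.5 * h, y + 0.5 * h * k1)
        k3 = f(s + 0.5 * h, y + 0.5 * h * k2)
        k4 = f(s + h, y + h * k3)
        y += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return y

def composite(t):
    """Zero-order outer solution plus exponentially decaying boundary correction."""
    return g(t) + (y_init - g(0.0)) * np.exp(-t / eps)

# The composite approximation is O(eps)-accurate uniformly in t.
errs = [abs(exact(t) - composite(t)) for t in (0.05, 0.2, 1.0)]
```

The boundary correction repairs the initial condition lost by the outer solution and vanishes outside an O ( ε ) -wide layer, exactly the role played by the terms with indices “ b , 1 ” and “ b , 2 ” in (83).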

4.3.1. Obtaining the Boundary Layer Corrections x 0 b , 1 ( θ , λ ) and x 0 b , 2 ( τ , λ )

For these boundary layer corrections, we have the equations
d x 0 b , 1 ( θ , λ ) d θ = 0 , θ 0 , λ Ω λ ,
d x 0 b , 2 ( τ , λ ) d τ = 0 , τ 0 , λ Ω λ .
Due to the Boundary Function Method, we require that the boundary layer correction terms in a right-hand neighborhood of t = 0 tend to zero as θ tends to + ∞ , while the boundary layer correction terms in a left-hand neighborhood of t = t f tend to zero as τ tends to − ∞ . Thus, we require that
lim θ + x 0 b , 1 ( θ , λ ) = 0 , lim τ x 0 b , 2 ( τ , λ ) = 0 .
Moreover, we require that these limits are uniform with respect to λ Ω λ .
From Equations (84)–(86) we obtain (quite similarly to Equation (46) in Section 4.2.1)
x 0 b , 1 ( θ , λ ) 0 , θ 0 , λ Ω λ ,
x 0 b , 2 ( τ , λ ) 0 , τ 0 , λ Ω λ .

4.3.2. Obtaining the Outer Solution Terms

The equations and conditions for these terms have the following form for all t [ 0 , t f ] and λ Ω λ :
d x 0 o ( t , λ ) d t = A 1 ( t , λ ) x 0 o ( t , λ ) + A 2 ( t , λ ) y 0 o ( t , λ ) , x 0 o ( 0 , λ ) = x 0 ( λ ) , P 20 o ( t , λ ) T x 0 o ( t , λ ) + P 30 o ( t , λ ) y 0 o ( t , λ ) = 0 .
Remark 13. 
As with Remark 9, let us note that in the system (89), the unknown vector-valued function y 0 o ( t , λ ) is not subject to any initial condition. This occurs because in (89) this unknown is subject to an algebraic (rather than a differential) equation.
Solving the algebraic equation of the system (89) with respect to y 0 o ( t , λ ) and using Equations (50) and (51), we obtain
y 0 o ( t , λ ) = D 2 1 ( t , λ ) A 2 T ( t , λ ) P 10 o ( t , λ ) x 0 o ( t , λ ) , t [ 0 , t f ] , λ Ω λ .
Substituting (90) into the differential equation of the system (89) and using the Equation (53) yield the following initial-value problem with respect to x 0 o ( t , λ ) for all λ Ω λ :
d x 0 o ( t , λ ) d t = A 1 ( t , λ ) S 1 o ( t , λ ) P 10 o ( t , λ ) x 0 o ( t , λ ) , x 0 o ( 0 , λ ) = x 0 ( λ ) , t [ 0 , t f ] .
Remark 14. 
Due to Proposition 2 and Remark 11, x 0 o ( t , λ ) and d x 0 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . Therefore, due to Equation (90), y 0 o ( t , λ ) and d y 0 o ( t , λ ) / d t are bounded for all ( t , λ ) [ 0 , t f ] × Ω λ . In addition, since A 1 ( t , λ ) , S 1 o ( t , λ ) , P 10 o ( t , λ ) are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] , by virtue of the results of [66] (Chapter 5), x 0 o ( t , λ ) is also continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] . Therefore, due to Equation (90), Remark 10 and the continuity of A 2 ( t , λ ) with respect to λ Ω λ uniformly in t [ 0 , t f ] , y 0 o ( t , λ ) is also continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] .

4.3.3. Obtaining the Boundary Layer Correction Term y 0 b , 1 ( θ , λ )

This term is obtained as the solution of the initial-value problem
d y 0 b , 1 ( θ , λ ) d θ = P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) , θ 0 , λ Ω λ , y 0 b , 1 ( 0 ) = y 0 ( λ ) y 0 o ( 0 , λ ) = y 0 ( λ ) + D 2 1 ( 0 , λ ) A 2 T ( 0 , λ ) P 10 o ( 0 , λ ) x 0 ( λ ) , λ Ω λ ,
where, due to Equation (50), P 30 o ( 0 , λ ) = D 2 ( 0 , λ ) 1 / 2 .
Solving the problem (92), we directly have
y 0 b , 1 ( θ , λ ) = y 0 ( λ ) + D 2 1 ( 0 , λ ) A 2 T ( 0 , λ ) P 10 o ( 0 , λ ) x 0 ( λ ) exp D 2 ( 0 , λ ) 1 / 2 θ , θ 0 , λ Ω λ .
Since all the matrices and vectors appearing in the right-hand side of Equation (93) are bounded for all λ Ω λ , and the matrix D 2 ( 0 , λ ) 1 / 2 is positive definite and continuous for all λ Ω λ (see Remark 10), the vector-valued function y 0 b , 1 ( θ , λ ) , given by Equation (93), satisfies the inequality
y 0 b , 1 ( θ , λ ) a 2 exp ( β 2 θ ) , θ 0 , λ Ω λ ,
where a 2 > 0 and β 2 > 0 are some constants independent of λ .
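Since D 2 ( 0 , λ ) 1 / 2 is symmetric positive definite, the matrix exponential in (93) can be evaluated through the spectral decomposition, and the decay estimate (94) then holds with the rate given by the smallest eigenvalue of D 2 ( 0 , λ ) 1 / 2 . The sketch below checks this with illustrative data; the vector v stands for the bracketed initial vector in (93):

```python
import numpy as np

# Illustrative data at t = 0: a positive definite matrix in the role of
# D_2(0, lambda), and v in the role of the bracketed initial vector in (93).
D2 = np.array([[2.0, 0.3],
               [0.3, 1.5]])
mu, Q = np.linalg.eigh(D2)
sqD = (Q * np.sqrt(mu)) @ Q.T            # D2^{1/2}

v = np.array([1.0, -2.0])

def y0b1(theta):
    """exp(-D2^{1/2} theta) applied to -v, via the spectral decomposition."""
    return -(Q * np.exp(-np.sqrt(mu) * theta)) @ (Q.T @ v)

# The boundary layer equation dy/dtheta = -D2^{1/2} y, solved by (93),
# is checked by a central finite difference at an interior point.
theta, step = 0.7, 1e-6
lhs = (y0b1(theta + step) - y0b1(theta - step)) / (2.0 * step)
rhs = -sqD @ y0b1(theta)
beta2 = np.sqrt(mu).min()                # decay rate as in (94)
```

Because the exponent is a symmetric negative definite matrix times θ , the Euclidean norm of y 0 b , 1 is dominated by exp ( − β 2 θ ) times the norm of the initial vector, which is precisely the form of the estimate (94).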

4.3.4. Obtaining the Boundary Layer Correction Term y 0 b , 2 ( τ , λ )

For this term, we have the equation
d y 0 b , 2 ( τ , λ ) d τ = P 30 o ( t f , λ ) + P 30 b ( τ , λ ) y 0 b , 2 ( τ , λ ) P 20 b ( τ , λ ) T x 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 o ( t f , λ ) , τ 0 , λ Ω λ ,
where, due to Equation (50), P 30 o ( t f , λ ) = D 2 ( t f , λ ) 1 / 2 ; P 20 b ( τ , λ ) and P 30 b ( τ , λ ) are given by Equations (59) and (60), respectively; x 0 o ( t , λ ) is the unique solution of the initial-value problem (91), while y 0 o ( t , λ ) is given by Equation (90).
By virtue of the results of [24], the fundamental matrix of the homogeneous equation corresponding to Equation (95) is the following:
Y 0 b , 2 ( τ , σ , λ ) = Ψ ( τ , λ ) Ψ 1 ( σ , λ ) , < τ σ 0 , λ Ω λ ,
where
Ψ ( τ , λ ) = exp D 2 ( t f , λ ) 1 / 2 τ + exp D 2 ( t f , λ ) 1 / 2 τ , τ 0 , λ Ω λ .
Since, for all λ Ω λ , the matrix D 2 ( t f , λ ) 1 / 2 is positive definite and the matrix-valued function D 2 ( t f , λ ) 1 / 2 is continuous, then
lim τ Ψ ( τ , λ ) = + , lim τ Ψ 1 ( τ , λ ) = 0 ,
and both limits are uniform with respect to λ Ω λ .
Solving Equation (95) with a given initial value y 0 b , 2 ( 0 , λ ) of y 0 b , 2 ( τ , λ ) and using the form (96) and (97) of the corresponding fundamental matrix, we directly have
y 0 b , 2 ( τ , λ ) = Ψ ( τ , λ ) [ 1 2 y 0 b , 2 ( 0 , λ ) 0 τ Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) 0 τ Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) ] , τ 0 , λ Ω λ .
This equation can be rewritten as:
Ψ 1 ( τ , λ ) y 0 b , 2 ( τ , λ ) = 1 2 y 0 b , 2 ( 0 , λ ) 0 τ Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) 0 τ Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 , λ Ω λ .
Applying the second limit relation in Equation (98) to Equation (100), as well as the aforementioned requirement that the boundary layer correction terms in a left-hand neighborhood of t = t f tend to zero as τ → − ∞ , we immediately have
y 0 b , 2 ( 0 , λ ) = 2 [ 0 Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + 0 Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) ] , λ Ω λ .
Due to the inequalities in (61) and the second limit relation in (98), each of the integrals in the right-hand side of the equality in (101) converges and this convergence is uniform with respect to λ Ω λ .
Substitution of (101) into Equation (99) yields after a routine rearrangement
y 0 b , 2 ( τ , λ ) = τ Ψ ( τ , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ Ψ ( τ , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 , λ Ω λ .
Let us estimate y 0 b , 2 ( τ , λ ) . To accomplish this, first, let us estimate the product Ψ ( τ , λ ) Ψ 1 ( σ , λ ) for < σ τ 0 and λ Ω λ . Such an estimation directly follows from Equation (97), as well as from the positive definiteness and boundedness of D 2 ( t f , λ ) 1 / 2 uniform with respect to λ Ω λ . Thus, we have
Ψ ( τ , λ ) Ψ 1 ( σ , λ ) a 3 exp β 3 ( σ τ ) , < σ τ 0 , λ Ω λ ,
where a 3 > 0 and β 3 > 0 are some constants independent of λ Ω λ .
Now, using the inequalities in (61), the inequality (103) and Remark 14, we directly obtain the following estimate of y 0 b , 2 ( τ , λ ) given by (102):
y 0 b , 2 ( τ , λ ) a 4 exp ( β 4 τ ) , τ 0 , λ Ω λ ,
where a 4 > 0 and β 4 > 0 are some constants independent of λ Ω λ .

4.3.5. Justification of the Asymptotic Solution to the Initial-Value Problem (82)

Theorem 2. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number ε ˜ 0 ( 0 , ε 0 ] independent of λ Ω λ such that, for all ε ( 0 , ε ˜ 0 ] , the entries of the solution to the initial-value problem (82) x ( t , λ , ε ) , y ( t , λ , ε ) satisfy the inequalities
x ( t , λ , ε ) x 0 o ( t , λ ) c ˜ 1 ε , t [ 0 , t f ] , λ Ω λ , y ( t , λ , ε ) y 0 ( t , λ , ε ) c ˜ 1 ε , t [ 0 , t f ] , λ Ω λ ,
where y 0 ( t , λ , ε ) is given in (83); c ˜ 1 > 0 is some constant independent of ε and λ Ω λ ; ε 0 > 0 is the number introduced in Theorem 1.
Proof. 
Let us make the transformation of the variables in the problem (82)
x ( t , λ , ε ) = x 0 o ( t , λ ) + δ x ( t , λ , ε ) , y ( t , λ , ε ) = y 0 ( t , λ , ε ) + δ y ( t , λ , ε ) ,
where δ x ( t , λ , ε ) and δ y ( t , λ , ε ) are new unknown vector-valued functions.
Substitution of (106) into the problem (82), and use of the Equations (87)–(89), (92), (95) and (102) and Equations (42) and (63) yield after a routine algebra the following initial-value problem for the unknowns δ x ( t , λ , ε ) and δ y ( t , λ , ε ) in the time interval [ 0 , t f ] :
d δ x ( t , λ , ε ) d t = A 1 ( t , λ ) δ x ( t , λ , ε ) + A 2 ( t , λ ) δ y ( t , λ , ε ) + γ x ( t , λ , ε ) , δ x ( 0 , λ , ε ) = 0 , ε d δ y ( t , λ , ε ) d t = ε A 3 ( t , λ ) P 2 T ( t , λ , ε ) δ x ( t , λ , ε ) + ε A 4 ( t , λ ) P 3 ( t , λ , ε ) δ y ( t , λ , ε ) + γ y ( t , λ , ε ) , δ y ( 0 , λ , ε ) = φ y ( λ , ε ) ,
where λ Ω λ ,
γ x ( t , λ , ε ) = A 2 ( t , λ ) y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , γ y ( t , λ , ε ) = P 20 b ( τ , λ ) T x 0 o ( t , λ ) x 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 o ( t , λ ) y 0 o ( t f , λ ) P 30 b ( τ , λ ) y 0 b , 1 ( θ , λ ) P 30 o ( t , λ ) P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) + ε A 3 ( t , λ ) x 0 o ( t , λ ) + ε A 4 ( t , λ ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) Δ 2 ( t , λ , ε ) T x 0 o ( t , λ ) Δ 3 ( t , λ , ε ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) , φ y ( λ , ε ) = y 0 b , 2 ( τ 0 , λ ) = τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) , τ 0 = t f / ε .
Let us estimate the vector-valued functions γ x ( t , λ , ε ) , γ y ( t , λ , ε ) and φ y ( λ , ε ) . Using the boundedness of the matrix-valued function A 2 ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Proposition 2), as well as the inequalities (94) and (104), we directly have
γ x ( t , λ , ε ) b x exp ( β 2 θ ) + exp ( β 4 τ ) , θ = t / ε , τ = ( t t f ) / ε , t [ 0 , t f ] , ε > 0 , λ Ω λ ,
where b x > 0 is some constant independent of ε and λ Ω λ ; β 2 and β 4 are positive constants introduced in (94) and (104), respectively.
To estimate the vector-valued function γ y ( t , λ , ε ) , we should estimate each of its addends. Using the boundedness of d P 30 o ( t , λ ) / d t , d x 0 o ( t , λ ) / d t , d y 0 o ( t , λ ) / d t for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remarks 10, 14) and the inequalities (61) and (94), we obtain (quite similarly to the inequality (71)) the following inequalities:
P 20 b ( τ , λ ) T x 0 o ( t , λ ) x 0 o ( t f , λ ) b y , 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 b ( τ , λ ) y 0 o ( t , λ ) y 0 o ( t f , λ ) b y , 1 ε exp ( β 1 τ ) , t [ 0 , t f ] , λ Ω λ , P 30 o ( t , λ ) P 30 o ( 0 , λ ) y 0 b , 1 ( θ , λ ) b y , 1 ε exp ( β y , 1 θ ) , t [ 0 , t f ] , λ Ω λ ,
where τ = ( t t f ) / ε , θ = t / ε , ε > 0 ; b y , 1 > 0 is some constant independent of ε and λ Ω λ ; β 1 > 0 is the constant introduced in the inequality (70); 0 < β y , 1 < β 2 is some constant independent of ε and λ Ω λ ; β 2 > 0 is the constant introduced in (94).
Furthermore, using the second inequality in (61) and the inequality (94), we have
P 30 b ( τ , λ ) y 0 b , 1 ( θ , λ ) b y , 2 exp ( β y , 2 t f / ε ) ,
where b y , 2 = a a 2 , β y , 2 = min { β , β 2 } , ε > 0 .
Finally, using the boundedness of the matrix-valued functions A 3 ( t , λ ) and A 4 ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Proposition 2), the boundedness of the vector-valued functions x 0 o ( t , λ ) and y 0 o ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remark 14), the inequalities (94) and (104), as well as Theorem 1 and Equation (63), we obtain the inequalities
ε A 3 ( t , λ ) x 0 o ( t , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 , ε A 4 ( t , λ ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
Δ 2 ( t , λ , ε ) T x 0 o ( t , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 , Δ 3 ( t , λ , ε ) y 0 o ( t , λ ) + y 0 b , 1 ( θ , λ ) + y 0 b , 2 ( τ , λ ) b y , 3 ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where b y , 3 > 0 is some constant independent of ε and λ Ω λ .
The inequalities (110)–(112) directly yield the estimate of γ y ( t , λ , ε )
γ y ( t , λ , ε ) b y ε , t [ 0 , t f ] , λ Ω λ , ε > 0 ,
where b y > 0 is some constant independent of ε and λ Ω λ .
We proceed to the estimate of φ y ( λ , ε ) . Using the inequalities (61) and (103), we obtain the following chain of inequalities and equality:
φ y ( λ , ε ) τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 20 b ( σ , λ ) T d σ x 0 o ( t f , λ ) + τ 0 Ψ ( τ 0 , λ ) Ψ 1 ( σ , λ ) P 30 b ( σ , λ ) d σ y 0 o ( t f , λ ) a a 3 x 0 o ( t f , λ ) + y 0 o ( t f , λ ) τ 0 exp ( β σ ) d σ = a a 3 β x 0 o ( t f , λ ) + y 0 o ( t f , λ ) exp ( β τ 0 ) , λ Ω λ , ε > 0 .
This chain of inequalities and the equality, along with the expression for τ 0 (see Equation (108)) and the boundedness of the vector-valued functions x 0 o ( t , λ ) , y 0 o ( t , λ ) for all ( t , λ ) [ 0 , t f ] × Ω λ (see Remark 14), immediately implies the estimate of φ y ( λ , ε )
φ y ( λ , ε ) b φ ε , λ Ω λ , ε > 0 ,
where b φ > 0 is some constant independent of ε and λ Ω λ .
Let us introduce the following vectors of the dimension K n :
δ ( t , λ , ε ) = δ x ( t , λ , ε ) δ y ( t , λ , ε ) , γ ( t , λ , ε ) = γ x ( t , λ , ε ) γ y ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε > 0 , φ ( λ , ε ) = 0 φ y ( λ , ε ) , λ Ω λ , ε > 0 .
Also, let us introduce the following matrix:
Δ ˜ ( t , λ , ε ) = O ( K n r ) × ( K n r ) O ( K n r ) × r ( 1 / ε ) Δ 2 ( t , λ , ε ) T ( 1 / ε ) Δ 3 ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] ,
where Δ 2 ( t , λ , ε ) and Δ 3 ( t , λ , ε ) are defined in Equation (63); ε 0 is introduced in Theorem 1.
Due to Theorem 1 (see the inequalities in (62)) and Equation (63), we immediately have
( 1 / ε ) Δ 2 ( t , λ , ε ) T c , ( 1 / ε ) Δ 3 ( t , λ , ε ) c , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Using the vectors δ ( t , λ , ε ) , γ ( t , λ , ε ) , φ ( λ , ε ) and the matrix Δ ˜ ( t , λ , ε ) as well as the matrix Θ ( t , λ , ε ) (see Equation (66)), we can rewrite the initial-value problem (107) in the form
d δ ( t , λ , ε ) d t = Θ ( t , λ , ε ) δ ( t , λ , ε ) Δ ˜ ( t , λ , ε ) δ ( t , λ , ε ) + γ ( t , λ , ε ) , δ ( 0 , λ , ε ) = φ ( λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Let the K n × K n -matrix-valued function Υ ( t , χ , λ , ε ) , 0 ≤ χ ≤ t ≤ t f , be the unique solution to the following initial-value problem:
d Υ ( t , χ , λ , ε ) d t = Θ ( t , λ , ε ) Υ ( t , χ , λ , ε ) , Υ ( χ , χ , λ , ε ) = I K n , t [ χ , t f ] .
By Υ 1 ( t , χ , λ , ε ) , Υ 2 ( t , χ , λ , ε ) , Υ 3 ( t , χ , λ , ε ) and Υ 4 ( t , χ , λ , ε ) , we denote the upper left-hand, upper right-hand, lower left-hand and lower right-hand blocks of the matrix Υ ( t , χ , λ , ε ) of the dimensions ( K n r ) × ( K n r ) , ( K n r ) × r , r × ( K n r ) and r × r , respectively, i.e.,
Υ ( t , χ , λ , ε ) = Υ 1 ( t , χ , λ , ε ) Υ 2 ( t , χ , λ , ε ) Υ 3 ( t , χ , λ , ε ) Υ 4 ( t , χ , λ , ε ) .
Similarly to the inequalities in (75), we have the following estimates of these blocks for all 0 ≤ χ ≤ t ≤ t f and all λ ∈ Ω λ :
‖ Υ l ( t , χ , λ , ε ) ‖ ≤ b 2 , l = 1 , 3 , ‖ Υ 2 ( t , χ , λ , ε ) ‖ ≤ b 2 ε , ‖ Υ 4 ( t , χ , λ , ε ) ‖ ≤ b 2 ε + exp ( − 0.5 β ( t − χ ) / ε ) , ε ∈ ( 0 , ε 1 ] ,
where the constant β > 0 is introduced in (61); the constants ε 1 > 0 and b 2 > 0 are introduced in (75).
Using the matrix-valued function Υ ( t , χ , λ , ε ) , let us rewrite the initial-value problem (118) in the equivalent integral form
δ ( t , λ , ε ) = Υ ( t , 0 , λ , ε ) φ ( λ , ε ) 0 t Υ ( t , χ , λ , ε ) Δ ˜ ( χ , λ , ε ) δ ( χ , λ , ε ) γ ( χ , λ , ε ) d χ , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
Now (similarly to the proof of Theorem 1), we apply the method of successive approximations to Equation (121). For this purpose, we consider the sequence of vector-valued functions δ i ( t , λ , ε ) i = 0 + ∞ given as:
δ i + 1 ( t , λ , ε ) = Υ ( t , 0 , λ , ε ) φ ( λ , ε ) 0 t Υ ( t , χ , λ , ε ) Δ ˜ ( χ , λ , ε ) δ i ( χ , λ , ε ) γ ( χ , λ , ε ) d χ , i = 0 , 1 , , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] ,
where the initial guess δ 0 ( t , λ , ε ) = 0 , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε 0 ] .
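Under the stated bounds, the right-hand side of (122) defines a contraction for all sufficiently small ε, which is what drives the convergence of the successive approximations. The scheme itself can be sketched numerically; the following minimal Python sketch uses a hypothetical scalar kernel K and forcing g (not the matrices of (122)) and iterates δ i+1 ( t ) = g ( t ) + ∫ 0 t K ( t , χ ) δ i ( χ ) d χ from the zero initial guess:

```python
import numpy as np

def successive_approximations(g, K, t_grid, n_iter=30):
    """Picard iteration for delta(t) = g(t) + int_0^t K(t, x) * delta(x) dx,
    starting from the zero initial guess (as in (122) with delta_0 = 0)."""
    h = t_grid[1] - t_grid[0]
    delta = np.zeros_like(t_grid)  # delta_0(t) = 0
    for _ in range(n_iter):
        new = np.empty_like(delta)
        for j, tj in enumerate(t_grid):
            integrand = K(tj, t_grid[:j + 1]) * delta[:j + 1]
            if j == 0:
                integral = 0.0
            else:  # composite trapezoidal rule over [0, tj]
                integral = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
            new[j] = g(tj) + integral
        delta = new
    return delta

t = np.linspace(0.0, 1.0, 201)
# toy data: delta(t) = 1 + int_0^t delta(x) dx has the exact solution exp(t)
sol = successive_approximations(lambda s: 1.0, lambda s, x: np.ones_like(x), t)
```

For a contraction the iterates converge geometrically and uniformly on the interval, mirroring the uniform convergence established for (122); here sol approximates exp(t) on [0, 1].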
Let us represent the vector-valued functions δ i ( t , λ , ε ) , ( i = 1 , 2 , ) in the block form as follows:
δ i ( t , λ , ε ) = δ i , x ( t , λ , ε ) δ i , y ( t , λ , ε ) , i = 1 , 2 , ,
where the dimension of the upper block is K n r , while the dimension of the lower block is r, i.e., the dimensions of these blocks are the same as the dimensions of the corresponding blocks in the vector-valued function δ ( t , λ , ε ) (see Equation (115)).
Using the block representations of the matrices Δ ˜ ( t , λ , ε ) , Υ ( t , χ , λ , ε ) (see Equations (116) and (119)) and the block representations of the vectors γ ( t , λ , ε ) , φ ( λ , ε ) , δ i ( t , λ , ε ) (see Equations (115) and (123)), as well as the inequalities (109), (113), (114), (117) and (120), we obtain the existence of a positive number ε ˜ 0 ≤ ε 0 such that, for any ε ∈ ( 0 , ε ˜ 0 ] and any λ ∈ Ω λ , the sequence δ i ( t , λ , ε ) i = 0 + ∞ converges in the linear space of all K n -vector-valued functions continuous in the interval [ 0 , t f ] . Since the aforementioned inequalities are uniform with respect to λ ∈ Ω λ and ε ∈ ( 0 , ε ˜ 0 ] , this convergence is also uniform with respect to λ ∈ Ω λ and ε ∈ ( 0 , ε ˜ 0 ] . Moreover, the following inequalities are fulfilled:
‖ δ i , x ( t , λ , ε ) ‖ ≤ c ˜ 1 ε , ‖ δ i , y ( t , λ , ε ) ‖ ≤ c ˜ 1 ε , i = 1 , 2 , … , t ∈ [ 0 , t f ] , λ ∈ Ω λ , ε ∈ ( 0 , ε ˜ 0 ] ,
where c ˜ 1 > 0 is some constant independent of λ , ε and i.
Let us denote
δ * ( t , λ , ε ) = lim i → + ∞ δ i ( t , λ , ε ) , t ∈ [ 0 , t f ] , λ ∈ Ω λ , ε ∈ ( 0 , ε 0 ] .
Equations (121) and (122) immediately imply that δ * ( t , λ , ε ) is the solution of the integral Equation (121) and, therefore, of the initial-value problem (118) in the entire interval [ 0 , t f ] . Moreover, this solution has a block form similar to that of the vector δ ( t , λ , ε ) (see Equation (115)) and satisfies the inequalities
‖ δ x * ( t , λ , ε ) ‖ ≤ c ˜ ε , ‖ δ y * ( t , λ , ε ) ‖ ≤ c ˜ ε , t ∈ [ 0 , t f ] , λ ∈ Ω λ , ε ∈ ( 0 , ε ˜ 0 ] .
Since the initial-value problem (118) has a unique solution,
δ ( t , λ , ε ) = δ * ( t , λ , ε ) , t [ 0 , t f ] , λ Ω λ , ε ( 0 , ε ˜ 0 ] .
This equation, along with Equation (106) and the inequalities in (124), directly yields the inequalities in (105). Thus, the theorem is proven. □
Let us introduce the following vector-valued functions of dimension K n :
z 0 o ( t , λ ) = col x 0 o ( t , λ ) , y 0 o ( t , λ ) , z 0 b , 1 ( θ , λ ) = col 0 , y 0 b , 1 ( θ , λ ) , z 0 b , 2 ( τ , λ ) = col 0 , y 0 b , 2 ( τ , λ ) , z 0 ( t , λ , ε ) = z 0 o ( t , λ ) + z 0 b , 1 ( θ , λ ) + z 0 b , 2 ( τ , λ ) , t ∈ [ 0 , t f ] , θ = t / ε , τ = ( t − t f ) / ε , λ ∈ Ω λ , ε ∈ ( 0 , ε ˜ 0 ] .
Thus, by virtue of Theorem 2, we have
‖ z ( t , λ , ε ) − z 0 ( t , λ , ε ) ‖ ≤ 2 c ˜ 1 ε , t ∈ [ 0 , t f ] , λ ∈ Ω λ , ε ∈ ( 0 , ε ˜ 0 ] .

4.4. Transformation of the Optimal Control in the Problem (1), (5) and (6)

To transform the expression (13) of the optimal control in the problem (1), (5) and (6), we first observe the following. Since P ( t , λ , ε ) , t ∈ [ 0 , t f ] is the unique solution of the terminal-value problem (24) for any λ ∈ Ω λ and ε > 0 , P t , λ * ( ε ) , ε , t ∈ [ 0 , t f ] is the unique solution of the problem (24) with λ = λ * ( ε ) for any ε > 0 . Recall that λ = λ * ( ε ) , ε > 0 is the solution of the optimization problem (14) and (15) and, due to Corollary 2, of the optimization problem (35) and (36). Taking into account this observation, as well as Equation (23) and Proposition 2, we directly have that
P t , λ * ( ε ) , ε = R T t , λ * ( ε ) − 1 P t , λ * ( ε ) , ε R − 1 t , λ * ( ε ) , t ∈ [ 0 , t f ] , ε > 0
is the unique solution of the terminal-value problem (9) with λ = λ * ( ε ) .
Substituting (126) into Equation (13) and using Equations (26) and (37), we obtain after a routine rearrangement the following expression for the optimal control of the problem (1), (5) and (6):
u ε * w , t , λ * ( ε ) = − 1 ε P 2 T t , λ * ( ε ) , ε , P 3 t , λ * ( ε ) , ε R − 1 t , λ * ( ε ) w , w ∈ E K n , t ∈ [ 0 , t f ] , ε > 0 .
Finally, substituting the solution w t , λ * ( ε ) , ε of the initial-value problem (16) with λ = λ * ( ε ) into (127) and using Corollary 1, we obtain the time realization u * t , λ * ( ε ) , ε of the state-feedback optimal control in the problem (1), (5) and (6) along w = w t , λ * ( ε ) , ε (the open-loop optimal control in this problem)
u * t , λ * ( ε ) , ε = u ε * w t , λ * ( ε ) , ε , t , λ * ( ε ) = − 1 ε P 2 T t , λ * ( ε ) , ε , P 3 t , λ * ( ε ) , ε R − 1 t , λ * ( ε ) w t , λ * ( ε ) , ε = − 1 ε P 2 T t , λ * ( ε ) , ε , P 3 t , λ * ( ε ) , ε z t , λ * ( ε ) , ε , t ∈ [ 0 , t f ] , ε > 0 .
Since u ε * w , t , λ * ( ε ) is the state-feedback optimal control in the problem (1), (5) and (6) and u * t , λ * ( ε ) , ε is the corresponding open-loop optimal control, using Proposition 1 and Corollary 2 we obtain
J λ * ( ε ) , ε = I λ * ( ε ) , ε = J ε u ε * w , t , λ * ( ε ) = J ε u * t , λ * ( ε ) , ε , ε > 0 .

4.5. Asymptotic Behaviour of the Solution to the Optimization Problem (35) and (36)

Along with the optimization problem (35) and (36), let us consider the following optimization problem:
λ 0 * = argmin λ ∈ Ω λ J 0 ( λ ) ,
J 0 ( λ ) = x 0 ( λ ) T P 10 o ( 0 , λ ) x 0 ( λ ) x 0 o ( t f , λ ) T H 1 ( λ ) x 0 o ( t f , λ ) 0 t f x 0 o ( t , λ ) T D 1 ( t , λ ) x 0 o ( t , λ ) + y 0 o ( t , λ ) T D 2 ( t , λ ) y 0 o ( t , λ ) d t + max κ Ω κ [ 0 t f z 0 o ( t , λ ) T R T ( t , λ ) D κ ( t ) R ( t , λ ) z 0 o ( t , λ ) d t + x 0 o ( t f , λ ) T L T ( t f , λ ) H κ L ( t f , λ ) x 0 o ( t f , λ ) ] ,
where the matrices D 1 ( t , λ ) and D 2 ( t , λ ) are defined in (28); the matrix H 1 ( λ ) is defined in (29); the set Ω κ is given by (11); the matrices H κ and D κ ( t ) are given in (12); the K n -vector z 0 o ( t , λ ) is given in (125).
In contrast to the optimization problem (35) and (36), the optimization problem (130) and (131) is independent of ε .
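Since Ω λ is compact and J 0 ( λ ) is continuous on it (Lemma 2 below), the reduced problem (130) and (131) is a finite-dimensional program that can, in principle, be attacked by any global minimization routine. A minimal numerical sketch, with a hypothetical smooth J0 and a hypothetical box-shaped Ω λ (the actual set Ω λ and the functional (131) come from the problem data):

```python
import numpy as np

def argmin_on_grid(J0, grid_axes):
    """Brute-force stand-in for lambda_0* = argmin of J0 over a compact box:
    evaluate J0 at every grid point and keep the best one."""
    points = np.stack(np.meshgrid(*grid_axes), axis=-1).reshape(-1, len(grid_axes))
    values = np.array([J0(lam) for lam in points])
    best = values.argmin()
    return points[best], values[best]

# hypothetical functional with minimizer (0.3, 0.7) inside the box [0, 1]^2
J0 = lambda lam: (lam[0] - 0.3) ** 2 + (lam[1] - 0.7) ** 2
axes = [np.linspace(0.0, 1.0, 101)] * 2
lam0_star, J0_min = argmin_on_grid(J0, axes)
```

A grid search is used only because it needs no derivative information; any bounded-domain optimizer would do, and continuity of J0 on the compact Ω λ guarantees the minimum is attained.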
Lemma 2. 
Let the assumptions AI-AV be fulfilled. Then, the function J 0 ( λ ) is continuous with respect to λ Ω λ . Moreover, the following limit equality is valid:
lim ε → + 0 J ( λ , ε ) = J 0 ( λ ) uniformly in λ ∈ Ω λ .
Proof. 
We start with the proof of the first statement of the lemma. Let us observe that the functions D 1 ( t , λ ) , D 2 ( t , λ ) , H 1 ( λ ) , R ( t , λ ) , P 10 o ( t , λ ) , x 0 o ( t , λ ) , y 0 o ( t , λ ) are bounded for ( t , λ ) [ 0 , t f ] × Ω λ and they are continuous with respect to λ Ω λ uniformly in t [ 0 , t f ] (see Proposition 2, Remarks 6, 11, 14). Also, let us observe that the function H κ is continuous with respect to κ Ω κ , while the function D κ ( t ) is continuous with respect to κ Ω κ uniformly in t [ 0 , t f ] . These observations, as well as the theorem on continuity of an integral with respect to a parameter [65,68] and the Maximum Theorem [70], directly yield the continuity of the function J 0 ( λ ) with respect to λ Ω λ . Thus, the first statement of the lemma is proven.
Proceed to the proof of the limit equality (132). To prove this equality, first, we are going to transform the expression z ( t f , λ , ε ) T R T ( t f , λ ) H κ R ( t f , λ ) z ( t f , λ , ε ) appearing in the function J ( λ , ε ) (see Equation (36)). Namely, using the assumption AIII, the symmetry of the matrix H ˜ and Equations (8), (12), (22) and (80), we have
z ( t f , λ , ε ) T R T ( t f , λ ) H κ R ( t f , λ ) z ( t f , λ , ε ) = x T ( t f , λ , ε ) , y T ( t f , λ , ε ) L T ( t f , λ ) B T ( t f ) H κ L ( t f , λ ) , H κ B ( t f ) x ( t f , λ , ε ) y ( t f , λ , ε ) = x T ( t f , λ , ε ) L T ( t f , λ ) + y T ( t f , λ , ε ) B T ( t f ) H κ L ( t f , λ ) x ( t f , λ , ε ) = x T ( t f , λ , ε ) L T ( t f , λ ) H κ L ( t f , λ ) x ( t f , λ , ε ) + y T ( t f , λ , ε ) B T ( t f ) H κ L ( t f , λ ) x ( t f , λ , ε ) = x T ( t f , λ , ε ) L T ( t f , λ ) H κ L ( t f , λ ) x ( t f , λ , ε ) , λ ∈ Ω λ , κ ∈ Ω κ , ε > 0 .
Now, using Equations (28), (29), (36), (37), (80), (131), and (133), as well as Theorems 1 and 2, and the inequalities (61), (94) and (104), we obtain the limit equality (132). This completes the proof of the lemma. □
In what follows, we assume the following:
AVI. 
The optimization problem (130) and (131) has a unique solution λ 0 * .
Theorem 3. 
Let the assumptions AI-AVI be fulfilled. Then the solution λ * ( ε ) , ε ∈ ( 0 , ε ˜ 0 ] of the optimization problem (35) and (36) tends to the solution λ 0 * of the optimization problem (130) and (131) as ε → + 0 , i.e.,
lim ε → + 0 λ * ( ε ) = λ 0 * .
Proof. 
(by contradiction). Let us assume that the statement of the theorem is wrong. This means the existence of sequences { ε i } i = 1 + ∞ , { λ i } i = 1 + ∞ and a number η > 0 which satisfy the following conditions: (a) ε i ∈ ( 0 , ε ˜ 0 ] , ( i = 1 , 2 , … ) and lim i → + ∞ ε i = 0 ; (b) λ i ∈ Ω λ , ( i = 1 , 2 , … ) ; (c) for any i ∈ { 1 , 2 , … } , λ i = argmin λ ∈ Ω λ J ( λ , ε i ) , i.e., this vector minimizes the function (36) with ε = ε i ; (d) for any i ∈ { 1 , 2 , … } , ‖ λ i − λ 0 * ‖ ≥ η .
From the conditions (b) and (c), we directly have
J ( λ i , ε i ) ≤ J ( λ , ε i ) ∀ i ∈ { 1 , 2 , … } , ∀ λ ∈ Ω λ .
Since the set Ω λ is bounded and closed, condition (b) implies the existence of a subsequence of the sequence { λ i } i = 1 + ∞ converging in this set. For the sake of simplicity (but without loss of generality), we assume that { λ i } i = 1 + ∞ itself is such a subsequence. Thus, there exists
lim i → + ∞ λ i = λ ¯ ∈ Ω λ .
Moreover, by virtue of the aforementioned condition (d),
‖ λ ¯ − λ 0 * ‖ ≥ η > 0 .
Now, using the aforementioned condition (a) on the sequence { ε i } i = 1 + , as well as Equation (135) and Lemma 2, we obtain the limit equality
lim i → + ∞ J ( λ i , ε i ) = J 0 ( λ ¯ ) .
The inequality (134), along with the equalities (132) and (137), yields immediately the following inequality:
J 0 ( λ ¯ ) ≤ J 0 ( λ ) ∀ λ ∈ Ω λ ,
meaning that the vector λ ¯ Ω λ minimizes the function J 0 ( λ ) in the set Ω λ . Hence, due to the assumption AVI, λ ¯ = λ 0 * . However, this equality contradicts the inequality (136). This contradiction implies the correctness of the statement of the theorem, which completes its proof. □
As a direct consequence of Lemma 2 and Theorem 3, we have the following assertion.
Corollary 3. 
Let the assumptions AI-AVI be fulfilled. Then, for the solution λ * ( ε ) , ε ∈ ( 0 , ε ˜ 0 ] of the optimization problem (35) and (36), there exists a function g * ( ε ) > 0 , ε ∈ ( 0 , ε ˜ 0 ] , such that lim ε → + 0 g * ( ε ) = 0 and
| J λ * ( ε ) , ε − J 0 λ 0 * | ≤ g * ( ε ) , ε ∈ ( 0 , ε ˜ 0 ] .

4.6. Asymptotically Suboptimal Control of the Problem (1), (5) and (6)

4.6.1. Formal Construction of the Suboptimal Control

Replacing, in the right-hand side of (127), λ * ( ε ) with λ 0 * , P 2 t , λ * ( ε ) , ε with P 20 o ( t , λ 0 * ) and P 3 t , λ * ( ε ) , ε with P 30 o ( t , λ 0 * ) , we obtain the following state-feedback control:
u ^ ε ( w , t , λ 0 * ) = − 1 ε P 20 o ( t , λ 0 * ) T , P 30 o ( t , λ 0 * ) R − 1 ( t , λ 0 * ) w , w ∈ E K n , t ∈ [ 0 , t f ] , ε > 0 .
It is clear that, for all ε > 0 , u ^ ε ( w , t , λ 0 * ) ∈ U , i.e., this control is admissible in the problem (1), (5) and (6). In the remainder of this subsection, we show that u ^ ε ( w , t , λ 0 * ) is asymptotically suboptimal in this problem, i.e., that this control provides a value of the functional in the problem (1), (5) and (6) which is arbitrarily close to the optimal value of this functional for all sufficiently small ε > 0 .
Substituting the control (138) into the initial-value problem (1) with k = 1 , 2 , , K and using Equations (4), (7) and (8), we obtain the corresponding closed-loop system with the trajectory denoted as w ^ ( t , ε )
d w ^ ( t , ε ) d t = A ( t ) − 1 ε B ( t ) P 20 o ( t , λ 0 * ) T , P 30 o ( t , λ 0 * ) R − 1 ( t , λ 0 * ) w ^ ( t , ε ) , w ^ ( 0 , ε ) = w 0 , t ∈ [ 0 , t f ] , ε > 0 .
Below, we analyze the asymptotic (with respect to ε ) behaviour of w ^ ( t , ε ) .

4.6.2. Asymptotic Behaviour of the Solution to the Initial-Value Problem (139)

To analyze the asymptotic behaviour of w ^ ( t , ε ) , we make the following transformation of variables in (139):
w ^ ( t , ε ) = R ( t , λ 0 * ) z ^ ( t , ε ) , t [ 0 , t f ] , ε > 0 ,
where z ^ ( t , ε ) is a new unknown vector-valued function.
The transformation (140), along with Equations (25), (26), (34), and (38), converts the initial-value problem (139) to the new initial-value problem with respect to z ^ ( t , ε )
d z ^ ( t , ε ) d t = A 1 ( t , λ 0 * ) A 2 ( t , λ 0 * ) A 3 ( t , λ 0 * ) − 1 ε P 20 o ( t , λ 0 * ) T A 4 ( t , λ 0 * ) − 1 ε P 30 o ( t , λ 0 * ) z ^ ( t , ε ) , z ^ ( 0 , ε ) = z 0 ( λ 0 * ) , t ∈ [ 0 , t f ] , ε > 0 .
As with the results of Section 4.3 (see Equation (80)), we represent the solution z ^ ( t , ε ) of the initial-value problem (141) in the block form
z ^ ( t , ε ) = col x ^ ( t , ε ) , y ^ ( t , ε ) , t [ 0 , t f ] , ε > 0 ,
where x ^ ( t , ε ) ∈ E K n − r , y ^ ( t , ε ) ∈ E r .
Due to the representation (142) and Equation (81), the initial-value problem (141) is transformed to the following equivalent initial-value problem in the time interval [ 0 , t f ] :
d x ^ ( t , ε ) d t = A 1 ( t , λ 0 * ) x ^ ( t , ε ) + A 2 ( t , λ 0 * ) y ^ ( t , ε ) , ε d y ^ ( t , ε ) d t = ε A 3 ( t , λ 0 * ) − P 20 o ( t , λ 0 * ) T x ^ ( t , ε ) + ε A 4 ( t , λ 0 * ) − P 30 o ( t , λ 0 * ) y ^ ( t , ε ) , x ^ ( 0 , ε ) = x 0 ( λ 0 * ) , y ^ ( 0 , ε ) = y 0 ( λ 0 * ) ,
where ε > 0 .
Quite similarly to the results of Section 4.3 (see Equation (83)), we construct the zero-order asymptotic solution of the problem (143) in the form
x ^ 0 ( t , ε ) = x ^ 0 o ( t ) + x ^ 0 b , 1 ( θ ) + x ^ 0 b , 2 ( τ ) , y ^ 0 ( t , ε ) = y ^ 0 o ( t ) + y ^ 0 b , 1 ( θ ) + y ^ 0 b , 2 ( τ ) , θ = t / ε , τ = ( t − t f ) / ε ,
where, similarly to (91) and (90), x ^ 0 o ( t ) and y ^ 0 o ( t ) are obtained from the system
d x ^ 0 o ( t ) d t = A 1 ( t , λ 0 * ) − S 1 o ( t , λ 0 * ) P 10 o ( t , λ 0 * ) x ^ 0 o ( t ) , x ^ 0 o ( 0 ) = x 0 ( λ 0 * ) , t ∈ [ 0 , t f ] , y ^ 0 o ( t ) = − D 2 − 1 ( t , λ 0 * ) A 2 T ( t , λ 0 * ) P 10 o ( t , λ 0 * ) x ^ 0 o ( t ) , t ∈ [ 0 , t f ] ;
similarly to (84)–(88), we have
x ^ 0 b , 1 ( θ ) ≡ 0 , θ ≥ 0 , x ^ 0 b , 2 ( τ ) ≡ 0 , τ ≤ 0 ;
similarly to (92)–(94) we obtain
y ^ 0 b , 1 ( θ ) = y 0 ( λ 0 * ) + D 2 − 1 ( 0 , λ 0 * ) A 2 T ( 0 , λ 0 * ) P 10 o ( 0 , λ 0 * ) x 0 ( λ 0 * ) exp − D 2 ( 0 , λ 0 * ) 1 / 2 θ , θ ≥ 0 ,
yielding
‖ y ^ 0 b , 1 ( θ ) ‖ ≤ a ^ exp ( − β ^ θ ) , θ ≥ 0 ,
where a ^ > 0 and β ^ > 0 are some constants.
The vector-valued function y ^ 0 b , 2 ( τ ) is obtained somewhat differently from the vector-valued function y 0 b , 2 ( τ ) (see Section 4.3.4), because the initial-value problem (143) contains only P 20 o ( · ) and P 30 o ( · ) (but not P 2 ( · ) and P 3 ( · ) as in (82)). Namely, in contrast with Equation (95), the vector-valued function y ^ 0 b , 2 ( τ ) satisfies the following differential equation:
d y ^ 0 b , 2 ( τ ) d τ = − P 30 o ( t f , λ 0 * ) y ^ 0 b , 2 ( τ ) , τ ≤ 0 ,
where, due to Equation (50), P 30 o ( t f , λ ) = D 2 ( t f , λ ) 1 / 2 .
Solving Equation (149) with the initial value y ^ 0 b , 2 ( 0 ) of y ^ 0 b , 2 ( τ ) yields
y ^ 0 b , 2 ( τ ) = exp − D 2 ( t f , λ 0 * ) 1 / 2 τ y ^ 0 b , 2 ( 0 ) , τ ≤ 0 .
Taking into account the positive definiteness of the matrix D 2 ( t f , λ 0 * ) 1 / 2 , we directly obtain that the only initial value y ^ 0 b , 2 ( 0 ) for which y ^ 0 b , 2 ( τ ) from (150) satisfies the Boundary Function Method requirement ( lim τ → − ∞ y ^ 0 b , 2 ( τ ) = 0 ) is y ^ 0 b , 2 ( 0 ) = 0 . The latter, along with (150), implies
y ^ 0 b , 2 ( τ ) ≡ 0 , τ ≤ 0 .
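The dichotomy behind (150) and (151) is easy to check numerically: with a positive definite matrix in the role of D 2 ( t f , λ 0 * ) 1 / 2 , the layer solution grows without bound as τ → − ∞ for every nonzero initial value and vanishes identically only for the zero one. A small sketch with a hypothetical 2 × 2 positive definite matrix M:

```python
import numpy as np

# hypothetical stand-in for D2(t_f, lambda_0*)^(1/2): symmetric positive definite
M = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.exp(vals)) @ vecs.T

def layer(tau, y0):
    """Right-end boundary-layer solution y(tau) = exp(-M tau) y0, tau <= 0."""
    return expm_sym(-M * tau) @ y0

y0 = np.array([1e-3, 0.0])
# norms blow up as tau -> -inf for a nonzero initial value ...
norms = [np.linalg.norm(layer(tau, y0)) for tau in (-1.0, -5.0, -10.0)]
# ... and the layer is identically zero for the zero initial value
zero_norm = np.linalg.norm(layer(-10.0, np.zeros(2)))
```

Only the zero initial value keeps the layer bounded backwards in τ, which is exactly why the boundary-layer correction in (151) is identically zero.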
Now, based on Equations (144)–(148) and (151), we obtain (quite similarly to Theorem 2) the following assertion.
Lemma 3. 
Let the assumptions AI-AV be fulfilled. Then, there exists a number ε ^ 0 > 0 such that, for all ε ∈ ( 0 , ε ^ 0 ] , the components x ^ ( t , ε ) , y ^ ( t , ε ) of the solution to the initial-value problem (143) satisfy the inequalities
‖ x ^ ( t , ε ) − x ^ 0 o ( t ) ‖ ≤ c ^ 1 ε , t ∈ [ 0 , t f ] , ‖ y ^ ( t , ε ) − y ^ 0 o ( t ) − y ^ 0 b , 1 ( θ ) ‖ ≤ c ^ 1 ε , t ∈ [ 0 , t f ] , θ = t / ε ,
where c ^ 1 > 0 is some constant independent of ε.
Let us introduce the following vector-valued functions of dimension K n :
z ^ 0 o ( t ) = col x ^ 0 o ( t ) , y ^ 0 o ( t ) , z ^ 0 b , 1 ( θ ) = col 0 , y ^ 0 b , 1 ( θ ) , z ^ 0 ( t , ε ) = z ^ 0 o ( t ) + z ^ 0 b , 1 ( θ ) , t [ 0 , t f ] , θ = t / ε , ε ( 0 , ε ^ 0 ] .
Thus, by virtue of Lemma 3, we have
‖ z ^ ( t , ε ) − z ^ 0 ( t , ε ) ‖ ≤ 2 c ^ 1 ε , t ∈ [ 0 , t f ] , ε ∈ ( 0 , ε ^ 0 ] .

4.6.3. Time Realization of the Control (138) in the Problem (1), (5) and (6)

The time realization of the control (138) along w = w ^ ( t , ε ) , which is an open-loop control in the problem (1), (5) and (6), has the form
u ^ ( t , λ 0 * , ε ) = u ^ ε ( w ^ ( t , ε ) , t , λ 0 * ) = − 1 ε P 20 o ( t , λ 0 * ) T , P 30 o ( t , λ 0 * ) R − 1 ( t , λ 0 * ) w ^ ( t , ε ) = − 1 ε P 20 o ( t , λ 0 * ) T , P 30 o ( t , λ 0 * ) z ^ ( t , ε ) , t ∈ [ 0 , t f ] , ε > 0 .
Since u ^ ( t , λ 0 * , ε ) is the open-loop control in the problem (1), (5) and (6), corresponding to the state-feedback control u ^ ε ( w , t , λ 0 * ) in this problem, then
J ε u ^ ε ( w , t , λ 0 * ) = J ε u ^ ( t , λ 0 * , ε ) , ε > 0 .
Below, we establish the closeness of J ε u * t , λ * ( ε ) , ε (or, equivalently, I ε * ) and J ε u ^ ( t , λ 0 * , ε ) for all sufficiently small ε > 0 . Recall that J ε u * t , λ * ( ε ) , ε is the value of the functional in the problem (1), (5) and (6) corresponding to the open-loop optimal control (128), i.e., it is the optimal value of the functional in this problem.

4.6.4. Closeness of the Values J ε u * t , λ * ( ε ) , ε and J ε u ^ ( t , λ 0 * , ε )

First of all, we will treat each of the values J ε u * t , λ * ( ε ) , ε and J ε u ^ ( t , λ 0 * , ε ) separately. We start with the value J ε u * t , λ * ( ε ) , ε .
Let us partition the K n × K n -matrix-valued function R ( t , λ ) into K blocks as follows:
R ( t , λ ) = R 1 ( t , λ ) R 2 ( t , λ ) R K ( t , λ ) , t [ 0 , t f ] , λ Ω λ ,
where each of the blocks is of the dimension n × K n .
Based on Equations (4) and (156), let us introduce the following n-dimensional vector-valued functions:
w k t , λ * ( ε ) , ε = R k t , λ * ( ε ) z t , λ * ( ε ) , ε , k = 1 , 2 , , K , t [ 0 , t f ] , ε ( 0 , ε ˜ 0 ] ,
where λ = λ * ( ε ) is the solution of the optimization problem (14) and (15) and, due to Corollary 2, of the optimization problem (35) and (36); z t , λ * ( ε ) , ε is the solution of the initial-value problem (33) with λ = λ * ( ε ) ; the positive number ε ˜ 0 is introduced in Theorem 2.
Due to Corollary 1 and Equations (156) and (157), we directly have
w t , λ * ( ε ) , ε = col w 1 t , λ * ( ε ) , ε , w 2 t , λ * ( ε ) , ε , , w K t , λ * ( ε ) , ε , t [ 0 , t f ] , ε ( 0 , ε ˜ 0 ] ,
where w t , λ * ( ε ) , ε is the solution of the initial-value problem (16) with λ = λ * ( ε ) .
Thus, taking into account that u * t , λ * ( ε ) , ε is independent of k { 1 , 2 , , K } , we can represent the optimal value J ε u * t , λ * ( ε ) , ε of the functional in the problem (1), (5) and (6) as follows:
J ε u * t , λ * ( ε ) , ε = J ε 1 u * t , λ * ( ε ) , ε + J ε 2 u * t , λ * ( ε ) , ε , J ε 1 u * t , λ * ( ε ) , ε = max k { 1 , 2 , , K } [ w k T t f , λ * ( ε ) , ε H ˜ w k t f , λ * ( ε ) , ε + 0 t f w k T t , λ * ( ε ) , ε D ˜ w k t , λ * ( ε ) , ε d t ] , J ε 2 u * t , λ * ( ε ) , ε = 0 t f ε 2 u * t , λ * ( ε ) , ε T u * t , λ * ( ε ) , ε d t ,
where ε ( 0 , ε ˜ 0 ] .
Let us analyze separately the addends J ε 1 u * t , λ * ( ε ) , ε and J ε 2 u * t , λ * ( ε ) , ε of the value J ε u * t , λ * ( ε ) , ε . We start with the first addend, which depends on w k t f , λ * ( ε ) , ε , ( k = 1 , 2 , … , K ) , given by (157). In this equation, the matrix-valued function R k t , λ * ( ε ) of the dimension n × K n appears. Using Equations (8), (22), and (156), we can represent R k t , λ * ( ε ) in the form
R k t , λ * ( ε ) = L k t , λ * ( ε ) , B k ( t ) , k = 1 , 2 , , K , t [ 0 , t f ] , ε ( 0 , ε ˜ 0 ] ,
where L k t , λ * ( ε ) is the k-th block from the top, of the dimension n × ( K n − r ) , in the matrix L t , λ * ( ε ) , i.e., this block is obtained from the following block-form representation of the matrix L ( t , λ ) :
L ( t , λ ) = L 1 ( t , λ ) L 2 ( t , λ ) L K ( t , λ ) , t [ 0 , t f ] , λ Ω λ ,
and each of the blocks is of the dimension n × ( K n − r ) .
Now, using the assumption AIII, the symmetry of the matrix H ˜ and the Equations (80), (157) and (160), we have
w k T t f , λ * ( ε ) , ε H ˜ w k t f , λ * ( ε ) , ε = z T t f , λ * ( ε ) , ε R k T t f , λ * ( ε ) H ˜ R k t f , λ * ( ε ) z t f , λ * ( ε ) , ε = x T t f , λ * ( ε ) , ε , y T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) B k T ( t f ) H ˜ L k t f , λ * ( ε ) , H ˜ B k ( t f ) · x t f , λ * ( ε ) , ε y t f , λ * ( ε ) , ε = x T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) + y T t f , λ * ( ε ) , ε B k T ( t f ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε = x T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε + y T t f , λ * ( ε ) , ε B k T ( t f ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε = x T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε , k = 1 , 2 , , K , ε ( 0 , ε ˜ 0 ] .
Thus, by virtue of Equations (157) and (162), the value J ε 1 u * t , λ * ( ε ) , ε (see Equation (159)) can be rewritten as follows:
J ε 1 u * t , λ * ( ε ) , ε = max k { 1 , 2 , , K } [ x T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε + 0 t f z T t , λ * ( ε ) , ε R k T t , λ * ( ε ) D ˜ R k t , λ * ( ε ) z t , λ * ( ε ) , ε d t ] , ε ( 0 , ε ˜ 0 ] .
Now, we are going to analyze the value J ε 2 u * t , λ * ( ε ) , ε . Substitution of (128) into the expression for J ε 2 u * t , λ * ( ε ) , ε in Equation (159) yields after a routine algebra of matrices
J ε 2 u * t , λ * ( ε ) , ε = 0 t f z T t , λ * ( ε ) , ε Q P t , λ * ( ε ) , ε z t , λ * ( ε ) , ε d t , Q P t , λ * ( ε ) , ε = P 2 t , λ * ( ε ) , ε P 2 T t , λ * ( ε ) , ε P 2 t , λ * ( ε ) , ε P 3 t , λ * ( ε ) , ε P 3 t , λ * ( ε ) , ε P 2 T t , λ * ( ε ) , ε P 3 t , λ * ( ε ) , ε 2 , ε ( 0 , ε ˜ 0 ] .
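The structure of the matrix Q P in (164) is worth noting: it is the Gram matrix of the gain block, i.e., with G = P 2 T , P 3 one has Q P = G T G , so the integrand in (164) is ‖ G z ‖ 2 ≥ 0 . A quick numerical check of this identity, with hypothetical block dimensions and random data (P 3 taken symmetric, as a Riccati block):

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 3, 2                       # hypothetical dimensions: Kn - r = 3, r = 2
P2 = rng.standard_normal((m, r))
S = rng.standard_normal((r, r))
P3 = S + S.T                      # symmetric, as a Riccati-equation block

G = np.hstack([P2.T, P3])         # the gain block [P2^T, P3]
QP = np.block([[P2 @ P2.T, P2 @ P3],
               [P3 @ P2.T, P3 @ P3]])
err = np.abs(QP - G.T @ G).max()  # Q_P = G^T G up to round-off

z = rng.standard_normal(m + r)
quad = z @ QP @ z                 # equals ||G z||^2, hence nonnegative
```

This Gram-matrix form makes it transparent that the "cheap" addend of the functional is a nonnegative quadratic form in z.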
Proceed to the analysis of the value J ε u ^ ( t , λ 0 * , ε ) . As with Equations (157)–(159), we can represent this value in the form
J ε u ^ ( t , λ 0 * , ε ) = J ε 1 u ^ ( t , λ 0 * , ε ) + J ε 2 u ^ ( t , λ 0 * , ε ) , J ε 1 u ^ ( t , λ 0 * , ε ) = max k { 1 , 2 , , K } w ^ k T ( t f , ε ) H ˜ w ^ k ( t f , ε ) + 0 t f w ^ k T ( t , ε ) D ˜ w ^ k ( t , ε ) d t , J ε 2 u ^ ( t , λ 0 * , ε ) = 0 t f ε 2 u ^ T ( t , λ 0 * , ε ) u ^ ( t , λ 0 * , ε ) ) d t ,
where ε ( 0 , ε ^ 0 ] ;
w ^ k ( t , ε ) = R k ( t , λ 0 * ) z ^ ( t , ε ) , k = 1 , 2 , , K , t [ 0 , t f ] ;
z ^ ( t , ε ) is the solution of the initial-value problem (141); and w ^ k ( t , ε ) , ( k = 1 , 2 , , K ) are the corresponding blocks of the vector-valued solution w ^ ( t , ε ) , t [ 0 , t f ] to the initial-value problem (139), i.e., w ^ ( t , ε ) = col w ^ 1 ( t , ε ) , w ^ 2 ( t , ε ) , , w ^ K ( t , ε ) .
Using Equations (142), (154) and (166), we can rewrite the values J ε 1 u ^ ( t , λ 0 * , ε ) and J ε 2 u ^ ( t , λ 0 * , ε ) (similarly to Equations (163) and (164)) as follows:
J ε 1 u ^ ( t , λ 0 * , ε ) = max k { 1 , 2 , , K } [ x ^ T ( t f , ε ) L k T t f , λ 0 * H ˜ L k t f , λ 0 * x ^ ( t f , ε ) + 0 t f z ^ T ( t , ε ) R k T ( t , λ 0 * ) D ˜ R k ( t , λ 0 * ) z ^ ( t , ε ) d t ] , ε ( 0 , ε ^ 0 ] ,
J ε 2 u ^ ( t , λ 0 * , ε ) = 0 t f z ^ T ( t , ε ) Q P 0 ( t , λ 0 * ) z ^ ( t , ε ) d t , Q P 0 ( t , λ 0 * ) = P 20 o ( t , λ 0 * ) P 20 o ( t , λ 0 * ) T P 20 o ( t , λ 0 * ) P 30 o ( t , λ 0 * ) P 30 o ( t , λ 0 * ) P 20 o ( t , λ 0 * ) T P 30 o ( t , λ 0 * ) 2 , ε ∈ ( 0 , ε ^ 0 ] .
Theorem 4. 
Let the assumptions AI-AVI be fulfilled. Then, the following limit equality is valid:
lim ε → + 0 J ε u * t , λ * ( ε ) , ε = lim ε → + 0 J ε u ^ ( t , λ 0 * , ε ) .
Proof. 
We start with the calculation of lim ε → + 0 J ε u * t , λ * ( ε ) , ε . From Equation (159), we have the following. If each of the limits lim ε → + 0 J ε 1 u * t , λ * ( ε ) , ε and lim ε → + 0 J ε 2 u * t , λ * ( ε ) , ε exists and is finite, then
lim ε → + 0 J ε u * t , λ * ( ε ) , ε = lim ε → + 0 J ε 1 u * t , λ * ( ε ) , ε + lim ε → + 0 J ε 2 u * t , λ * ( ε ) , ε .
Using Equations (80), (83), (125), (156), (161) and (163), the inequalities (94) and (104), as well as Remarks 6 and 14, and Theorems 2 and 3, we obtain
lim ε → + 0 J ε 1 u * t , λ * ( ε ) , ε = max k ∈ { 1 , 2 , … , K } [ lim ε → + 0 x T t f , λ * ( ε ) , ε L k T t f , λ * ( ε ) H ˜ L k t f , λ * ( ε ) x t f , λ * ( ε ) , ε + 0 t f lim ε → + 0 z T t , λ * ( ε ) , ε R k T t , λ * ( ε ) D ˜ R k t , λ * ( ε ) z t , λ * ( ε ) , ε d t ] = max k ∈ { 1 , 2 , … , K } [ x 0 o ( t f , λ 0 * ) T L k T ( t f , λ 0 * ) H ˜ L k ( t f , λ 0 * ) x 0 o ( t f , λ 0 * ) + 0 t f z 0 o ( t , λ 0 * ) T R k T ( t , λ 0 * ) D ˜ R k ( t , λ 0 * ) z 0 o ( t , λ 0 * ) d t ] .
Furthermore, using Equations (80), (83), (125) and (164), the inequalities (61), (94), and (104), as well as Remarks 10, 11 and 14, and Theorems 1, 2 and 3, we have
lim ε → + 0 J ε 2 u * t , λ * ( ε ) , ε = 0 t f lim ε → + 0 z T t , λ * ( ε ) , ε Q P t , λ * ( ε ) , ε z t , λ * ( ε ) , ε d t = 0 t f z 0 o ( t , λ 0 * ) T Q P 0 ( t , λ 0 * ) z 0 o ( t , λ 0 * ) d t ,
where Q P 0 ( t , λ 0 * ) is defined in (168).
Proceed to the calculation of lim ε → + 0 J ε u ^ ( t , λ 0 * , ε ) . Subject to the assumption that each of the limits lim ε → + 0 J ε 1 u ^ ( t , λ 0 * , ε ) and lim ε → + 0 J ε 2 u ^ ( t , λ 0 * , ε ) exists and is finite, we have
lim ε → + 0 J ε u ^ ( t , λ 0 * , ε ) = lim ε → + 0 J ε 1 u ^ ( t , λ 0 * , ε ) + lim ε → + 0 J ε 2 u ^ ( t , λ 0 * , ε ) ,
where J ε 1 u ^ ( t , λ 0 * , ε ) and J ε 2 u ^ ( t , λ 0 * , ε ) are given in (165) and then rewritten in (167) and (168).
Using Equations (142), (144), (153), (167) and (168) and the inequality (148), as well as Lemma 3, we obtain
lim ε → + 0 J ε 1 u ^ ( t , λ 0 * , ε ) = max k ∈ { 1 , 2 , … , K } [ x ^ 0 o ( t f ) T L k T ( t f , λ 0 * ) H ˜ L k ( t f , λ 0 * ) x ^ 0 o ( t f ) + 0 t f z ^ 0 o ( t ) T R k T ( t , λ 0 * ) D ˜ R k ( t , λ 0 * ) z ^ 0 o ( t ) d t ] ,
lim ε → + 0 J ε 2 u ^ ( t , λ 0 * , ε ) = 0 t f z ^ 0 o ( t ) T Q P 0 ( t , λ 0 * ) z ^ 0 o ( t ) d t .
Let us compare the right-hand side of Equation (171) with the right-hand side of Equation (174), as well as the right-hand side of Equation (172) with the right-hand side of Equation (175). Comparing Equations (90) and (91) with Equation (145), we directly have
x 0 o ( t , λ 0 * ) = x ^ 0 o ( t ) , y 0 o ( t , λ 0 * ) = y ^ 0 o ( t ) , t [ 0 , t f ] ,
yielding, due to Equations (125) and (153)
z 0 o ( t , λ 0 * ) = z ^ 0 o ( t ) , t [ 0 , t f ] .
Equations (176) and (177), along with Equations (171), (172), (174) and (175), yield immediately
lim ε → + 0 J ε 1 u * t , λ * ( ε ) , ε = lim ε → + 0 J ε 1 u ^ ( t , λ 0 * , ε ) , lim ε → + 0 J ε 2 u * t , λ * ( ε ) , ε = lim ε → + 0 J ε 2 u ^ ( t , λ 0 * , ε ) .
These two equalities, along with Equations (170) and (173), directly imply the validity of the equality (169). This completes the proof of the theorem. □
Along with w k t , λ * ( ε ) , ε , ( k = 1 , 2 , … , K ) , let us introduce the following n-dimensional vector-valued functions:
w k 0 t , λ * ( ε ) , ε = R k t , λ * ( ε ) z 0 t , λ * ( ε ) , ε , k = 1 , 2 , , K , t [ 0 , t f ] , ε ( 0 , ε ˜ 0 ] ,
where z 0 t , λ , ε is given in (125).
Using Equations (156), (80), (125), (157) and (178), as well as Corollary 1, Theorem 2, and the smoothness of the matrix-valued function R ( t , λ ) with respect to t [ 0 , t f ] uniformly in λ Ω λ (see Remark 6), we obtain the inequalities
‖ w k t , λ * ( ε ) , ε − w k 0 ( t , λ * ( ε ) , ε ) ‖ ≤ c ˜ 2 ε , k = 1 , 2 , … , K , t ∈ [ 0 , t f ] , ε ∈ ( 0 , ε ˜ 0 ] ,
where c ˜ 2 > 0 is some constant independent of ε .

5. Minimizing Sequence of Optimal Control Problem (1) and (3)

Theorem 5. 
Let the assumptions AI-AVI be fulfilled. Then, the following equality is valid:
J * ( w 0 ) = J 0 λ 0 * ,
where J * ( w 0 ) is the infimum of the functional J ( u ) with respect to u ( t ) = u ( w , t ) U in the problem (1), (3) (see Remark 2); the function J 0 ( λ ) is defined in Equation (131); the vector λ 0 * is defined by Equation (130).
Proof. 
(by contradiction). To prove the theorem, we assume that its statement (the equality (179)) is wrong, i.e., we assume that J * ( w 0 ) ≠ J 0 λ 0 * . Let us show that this assumption implies the inequality
J * ( w 0 ) < J 0 λ 0 * .
Indeed, using Equations (2), (3), (5) and (6), as well as Remark 2, Proposition 1, Corollary 2, and that the control u ε * ( · ) given by (13) is the optimal one in the problem (1), (5) and (6), we directly obtain the following chain of the inequalities and the equalities:
J * ( w 0 ) ≤ J u ε * ( · ) ≤ J ε u ε * ( · ) = I λ * ( ε ) , ε = J λ * ( ε ) , ε , ε > 0 .
Also, from Corollary 3, we have the two-sided inequality
J 0 λ 0 * − g * ( ε ) ≤ J λ * ( ε ) , ε ≤ J 0 λ 0 * + g * ( ε ) , ε ∈ ( 0 , ε ˜ 0 ] .
The chain (181) and the two-sided inequality (182) immediately imply the inequality
J * ( w 0 ) ≤ J 0 λ 0 * + g * ( ε ) , ε ∈ ( 0 , ε ˜ 0 ] ,
which, along with the above assumed inequality J * ( w 0 ) ≠ J 0 λ 0 * , yields the fulfillment of (180).
Since (180) is valid and J * ( w 0 ) is the infimum of the functional J ( u ) with respect to u ( t ) = u ( w , t ) U in the problem (1), (3), then there exists a control u ˜ ( · ) U such that
J * ( w 0 ) < J u ˜ ( · ) < J 0 λ 0 * .
Taking into account that u ε * ( · ) given by (13) is the optimal control in the problem (1), (5) and (6), we directly have
J λ * ( ε ) , ε = I λ * ( ε ) , ε = J ε u ε * ( · ) ≤ J ε u ˜ ( · ) = J u ˜ ( · ) + a ˜ ε 2 , ε > 0 ,
where
0 ≤ a ˜ = 0 t f u ˜ T w ˜ ( t ) , t u ˜ w ˜ ( t ) , t d t < + ∞ ,
and w ˜ ( t ) = col w ˜ 1 ( t ) , w ˜ 2 ( t ) , … , w ˜ K ( t ) , t ∈ [ 0 , t f ] is the unique absolutely continuous solution of the initial-value problem (1) with k = 1 , 2 , … , K and u ( t ) = u ˜ ( w , t ) , ( w , t ) ∈ E K n × [ 0 , t f ] .
Thus, due to (184),
J λ * ( ε ) , ε ≤ J u ˜ ( · ) + a ˜ ε 2 , ε > 0 .
This inequality, along with the left-hand side inequality in (182), yields
$$J_0(\lambda_0^*) \le J\big(\tilde u(\cdot)\big) + \tilde a\,\varepsilon^2 + g^*(\varepsilon), \quad \varepsilon \in (0, \tilde\varepsilon_0]. \tag{185}$$
Taking into account that $g^*(\varepsilon) > 0$, $\varepsilon \in (0, \tilde\varepsilon_0]$, and $\lim_{\varepsilon \to +0} g^*(\varepsilon) = 0$, we immediately obtain from (185) the inequality $J_0(\lambda_0^*) \le J\big(\tilde u(\cdot)\big)$, which contradicts the right-hand side inequality in (183). This contradiction means that the above-assumed inequality $J^*(w_0) \ne J_0(\lambda_0^*)$ is wrong. Therefore, the equality (179) is correct. Thus, the theorem is proven. □
Consider the sequence of numbers $\{\varepsilon_q\}_{q=1}^{+\infty}$ satisfying the conditions
$$0 < \varepsilon_q \le \min\{\tilde\varepsilon_0, \hat\varepsilon_0\}, \quad q = 1, 2, \ldots; \qquad \lim_{q \to +\infty} \varepsilon_q = 0. \tag{186}$$
Using this sequence, consider the sequence of state-feedback controls in the optimal control problem (1) and (3):
$$\{\hat u_q(w,t)\}_{q=1}^{+\infty} = \big\{\hat u_{\varepsilon_q}(w,t,\lambda_0^*)\big\}_{q=1}^{+\infty}, \tag{187}$$
where $\hat u_\varepsilon(w,t,\lambda_0^*)$ is defined in (138).
Theorem 6. 
Let the assumptions AI–AVI be fulfilled. Then, the sequence $\{\hat u_q(w,t)\}_{q=1}^{+\infty}$ is a minimizing sequence in the optimal control problem (1) and (3), i.e.,
$$\lim_{q \to +\infty} J\big(\hat u_q(w,t)\big) = J^*(w_0). \tag{188}$$
Proof. 
Due to Remark 2 and Equations (2), (3), (5) and (6), we have the following chain of inequalities:
$$J^*(w_0) \le J\big(\hat u_q(w,t)\big) \le J_{\varepsilon_q}\big(\hat u_q(w,t)\big). \tag{189}$$
Using Equations (129) and (155), as well as Corollary 3 and Theorems 4 and 5, we obtain the following:
$$J^*(w_0) = J_0(\lambda_0^*) = \lim_{\varepsilon \to +0} J\big(\lambda^*(\varepsilon),\varepsilon\big) = \lim_{\varepsilon \to +0} J_\varepsilon\big(u_\varepsilon^*(w,t,\lambda^*(\varepsilon))\big) = \lim_{\varepsilon \to +0} J_\varepsilon\big(\hat u_\varepsilon(w,t,\lambda_0^*)\big),$$
meaning, along with (187), that
$$\lim_{q \to +\infty} J_{\varepsilon_q}\big(\hat u_q(w,t)\big) = J^*(w_0).$$
The latter, along with (189), immediately yields the equality (188). Thus, the theorem is proven. □

6. Illustrative Example

Consider the following two-model system:
$$\frac{dw_{1,k}(t)}{dt} = \rho_k\, w_{2,k}(t), \quad w_{1,k}(0) = 1, \qquad \frac{dw_{2,k}(t)}{dt} = u(t), \quad w_{2,k}(0) = 2, \qquad t \in [0,4], \quad k \in \{1,2\}, \tag{190}$$
where $w_{1,k}(t)$, $w_{2,k}(t)$, $u(t)$ are scalar functions, and $\rho_1 = 2$, $\rho_2 = 1$.
Comparing the system (190) with the system (1), one can conclude that (190) is a particular case of (1), where $n = 2$, $r = 1$, $t_f = 4$, $K = 2$,
$$A_1 = \begin{pmatrix} 0 & 2 \\ 0 & 0 \end{pmatrix}, \quad A_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad B_1 = B_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \quad \tilde w_0 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}. \tag{191}$$
In this example, we choose the functional $F(u,k)$ as
$$F(u,k) = w_{1,k}^2(4) + \int_0^4 w_{2,k}^2(t)\,dt, \quad k \in \{1,2\}. \tag{192}$$
Comparison of the functional (192) with the functional (2) yields that (192) is a particular case of (2), where
$$\tilde H = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad \tilde D = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \tag{193}$$
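The identification (193) can be checked directly: the quadratic forms generated by $\tilde H$ and $\tilde D$ reproduce the terminal term and the integrand of (192), respectively. A minimal sketch of this check (plain Python; the matrix and helper names are ours, not the paper's):

```python
# Quadratic-form check of (193): H picks out w1^2 (terminal term of (192)),
# D picks out w2^2 (integrand of (192)).
H = [[1.0, 0.0], [0.0, 0.0]]   # terminal weight matrix
D = [[0.0, 0.0], [0.0, 1.0]]   # running weight matrix

def quad(M, w):
    # w^T M w for a 2x2 matrix M and a 2-vector w
    return sum(w[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

w = [3.0, -2.0]                 # an arbitrary sample state (w1, w2)
assert quad(H, w) == w[0] ** 2  # w^T H w = w1^2
assert quad(D, w) == w[1] ** 2  # w^T D w = w2^2
```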
Based on the functional (192), we construct the performance index evaluating the control process of the two-model system (190):
$$J(u) = \max_{k \in \{1,2\}} F(u,k) \to \inf_u. \tag{194}$$
Remark 15. 
The two-model singular optimal control problem (190) and (194) is a particular case of the multi-model singular optimal control problem (1) and (3). Solving the problem (190) and (194) clearly illustrates the theoretical results of the previous sections while avoiding overly complicated analytical/numerical calculations. Such an illustration keeps the paper from being overloaded and, therefore, maintains its readability.
We proceed to the construction of the minimizing sequence in the optimal control problem (190) and (194). Due to Theorem 6, we first should check the fulfillment of the assumptions AI–AVI in this problem. Based on Equations (191) and (193), we directly obtain the fulfillment of the assumptions AI–AV. The fulfillment of the assumption AVI will be verified later in this section. Based on Theorem 6 and Equations (138) and (187), one can conclude the following: to construct the minimizing sequence, the matrix-valued functions $R(t,\lambda)$, $P_{20}^o(t,\lambda)$, $P_{30}^o(t,\lambda)$ should be obtained. We start by obtaining $R(t,\lambda)$. Due to Equations (8), (22) and (191), this matrix depends on the complement matrix $B_c$ to the matrix $B = \operatorname{col}(0,1,0,1)$. We choose the matrix $B_c$ in the form
$$B_c = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{pmatrix}. \tag{195}$$
Using Equations (7), (8) and (22), the data of the example (191) and (193), and the pre-chosen matrix $B_c$, we obtain the following matrices:
$$L(t,\lambda) \equiv L(\lambda) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 1 \\ 0 & \lambda_1 & 0 \end{pmatrix}, \qquad R(t,\lambda) \equiv R(\lambda) = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & \lambda_2 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & \lambda_1 & 0 & 1 \end{pmatrix},$$
where
$$\lambda = \operatorname{col}(\lambda_1, \lambda_2), \quad \lambda_1 \ge 0, \quad \lambda_2 \ge 0, \quad \lambda_1 + \lambda_2 = 1. \tag{196}$$
Due to the results of Section 4.2.2, to obtain the matrices $P_{20}^o(t,\lambda)$, $P_{30}^o(t,\lambda)$, we first should obtain the matrices $A_1(t,\lambda)$, $A_2(t,\lambda)$, $D_1(t,\lambda)$, $D_2(t,\lambda)$, $H_1(\lambda)$, $S_1^o(\lambda)$. Using Equations (7), (25), (30), (31), (38) and (53), as well as the data of the example (191) and (193) and the above-calculated matrices $L(t,\lambda)$, $R(t,\lambda)$, we have, after routine matrix algebra,
$$A_1(t,\lambda) \equiv A_1(\lambda) = \begin{pmatrix} 0 & 2\lambda_2 & 0 \\ 0 & 0 & 0 \\ 0 & \lambda_1 & 0 \end{pmatrix}, \quad A_2(t,\lambda) \equiv A_2(\lambda) = \begin{pmatrix} 2 & 0 & 1 \end{pmatrix}, \quad D_1(t,\lambda) \equiv D_1(\lambda) = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \lambda_1\lambda_2 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$
$$D_2(t,\lambda) \equiv D_2 = 1, \qquad H_1(\lambda) = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \lambda_2 \end{pmatrix}, \qquad S_1^o(\lambda) \equiv S_1^o = \begin{pmatrix} 4 & 0 & 2 \\ 0 & 0 & 0 \\ 2 & 0 & 1 \end{pmatrix}. \tag{197}$$
Using Equations (50), (51) and (197), as well as the symmetry of the matrix $P_{10}^o(t,\lambda)$, we immediately have
$$P_{30}^o(t,\lambda) \equiv P_{30}^o = 1, \qquad P_{20}^o(t,\lambda) = \Big( 2P_{10,11}^o(t,\lambda) + P_{10,13}^o(t,\lambda), \;\; 2P_{10,12}^o(t,\lambda) + P_{10,23}^o(t,\lambda), \;\; 2P_{10,13}^o(t,\lambda) + P_{10,33}^o(t,\lambda) \Big), \tag{198}$$
where $P_{10,ij}^o(t,\lambda)$, $(i = 1,2,3;\ j = 1,2,3)$, is the entry of the matrix $P_{10}^o(t,\lambda)$ placed in its $i$-th row and $j$-th column.
Solving the terminal-value problem (52) with $t_f = 4$ and the data from (197), we obtain
$$2P_{10,11}^o(t,\lambda) + P_{10,13}^o(t,\lambda) = \frac{2\lambda_1}{(4\lambda_1+\lambda_2)(4-t)+1}, \qquad 2P_{10,12}^o(t,\lambda) + P_{10,23}^o(t,\lambda) = \frac{3\lambda_1\lambda_2(4-t)}{(4\lambda_1+\lambda_2)(4-t)+1},$$
$$2P_{10,13}^o(t,\lambda) + P_{10,33}^o(t,\lambda) = \frac{\lambda_2}{(4\lambda_1+\lambda_2)(4-t)+1}. \tag{199}$$
Also, for the sake of further calculations, we obtain the entries $P_{10,11}^o(t,\lambda)$, $P_{10,13}^o(t,\lambda)$, $P_{10,33}^o(t,\lambda)$:
$$P_{10,11}^o(t,\lambda) = \frac{\lambda_1\lambda_2(4-t)+\lambda_1}{(4\lambda_1+\lambda_2)(4-t)+1}, \qquad P_{10,13}^o(t,\lambda) = -\,\frac{2\lambda_1\lambda_2(4-t)}{(4\lambda_1+\lambda_2)(4-t)+1},$$
$$P_{10,33}^o(t,\lambda) = \frac{4\lambda_1\lambda_2(4-t)+\lambda_2}{(4\lambda_1+\lambda_2)(4-t)+1}. \tag{200}$$
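The entries (200) can be cross-checked against (199): substituting them into the left-hand sides of the first and third identities of (199) must reproduce the corresponding right-hand sides for every $t \in [0,4]$ and every $\lambda$ on the simplex. A short numeric spot-check of this (our own verification script; the function names are ours):

```python
# Spot-check: the entries of P10 in (200) reproduce the combinations (199).
# The formulas below transcribe (199)-(200); t in [0, 4], (l1, l2) on the simplex.

def denom(t, l1, l2):
    return (4 * l1 + l2) * (4 - t) + 1

def P11(t, l1, l2):
    return (l1 * l2 * (4 - t) + l1) / denom(t, l1, l2)

def P13(t, l1, l2):
    return -2 * l1 * l2 * (4 - t) / denom(t, l1, l2)

def P33(t, l1, l2):
    return (4 * l1 * l2 * (4 - t) + l2) / denom(t, l1, l2)

for t in (0.0, 1.7, 4.0):
    for l1 in (0.0, 0.3, 1.0):
        l2 = 1.0 - l1
        # 2*P11 + P13 = 2*l1 / denom   (first identity in (199))
        assert abs(2 * P11(t, l1, l2) + P13(t, l1, l2) - 2 * l1 / denom(t, l1, l2)) < 1e-12
        # 2*P13 + P33 = l2 / denom     (third identity in (199))
        assert abs(2 * P13(t, l1, l2) + P33(t, l1, l2) - l2 / denom(t, l1, l2)) < 1e-12
```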
Now, we should obtain the solution $\lambda_0^*$ of the optimization problem (130) and (131). The minimized function $J_0(\lambda)$ of this problem depends on the vector $x_0(\lambda)$ and on the functions $x_0^o(t,\lambda)$, $y_0^o(t,\lambda)$. From Equations (4), (34) and (81), as well as the data of the example (191) and Equation (195), we obtain the vector $x_0(\lambda)$:
$$x_0(\lambda) \equiv x_0 = \operatorname{col}(1, 0, 1). \tag{201}$$
Solving the initial-value problem (91) and taking into account (197), (199) and (201), we obtain
$$x_{0,1}^o(t,\lambda) = \frac{2\big[(4\lambda_1+\lambda_2)(4-t)+2\lambda_1 t+1\big]}{4(4\lambda_1+\lambda_2)+1} - 1, \qquad x_{0,2}^o(t,\lambda) \equiv 0, \quad t \in [0,4],$$
$$x_{0,3}^o(t,\lambda) = \frac{(4\lambda_1+\lambda_2)(4-t)+2\lambda_1 t+1}{4(4\lambda_1+\lambda_2)+1}, \quad t \in [0,4], \tag{202}$$
where $x_{0,i}^o(t,\lambda)$, $(i = 1,2,3)$, are the corresponding entries of the vector $x_0^o(t,\lambda)$.
Using Equations (90), (197), (199) and (202), we have
$$y_0^o(t,\lambda) = \frac{2\lambda_1}{(4\lambda_1+\lambda_2)(4-t)+1}\left[\frac{2\big[(4\lambda_1+\lambda_2)(4-t)+2\lambda_1 t+1\big]}{4(4\lambda_1+\lambda_2)+1} - 1\right] + \frac{\lambda_2}{(4\lambda_1+\lambda_2)(4-t)+1}\cdot\frac{(4\lambda_1+\lambda_2)(4-t)+2\lambda_1 t+1}{4(4\lambda_1+\lambda_2)+1}$$
$$= \frac{4\lambda_1+\lambda_2}{(4\lambda_1+\lambda_2)(4-t)+1}\cdot\frac{(4\lambda_1+\lambda_2)(4-t)+2\lambda_1 t+1}{4(4\lambda_1+\lambda_2)+1} - \frac{2\lambda_1}{(4\lambda_1+\lambda_2)(4-t)+1} = \frac{2\lambda_1+\lambda_2}{4(4\lambda_1+\lambda_2)+1}, \quad t \in [0,4]. \tag{203}$$
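A notable feature of (203) is that $y_0^o(t,\lambda)$ turns out to be constant in $t$. This can be confirmed numerically by evaluating the first (unsimplified) expression of (203) at several points; the sketch below is our own check (function names are ours):

```python
def y0(t, l1, l2):
    # First (unsimplified) form of (203)
    D = (4 * l1 + l2) * (4 - t) + 1                     # common denominator from (199)
    M = 4 * (4 * l1 + l2) + 1
    X = ((4 * l1 + l2) * (4 - t) + 2 * l1 * t + 1) / M  # x_{0,3}^o from (202)
    return (2 * l1 / D) * (2 * X - 1) + (l2 / D) * X

for l1 in (0.0, 0.25, 0.6, 1.0):
    l2 = 1.0 - l1
    target = (2 * l1 + l2) / (4 * (4 * l1 + l2) + 1)    # simplified value in (203)
    for t in (0.0, 1.0, 2.5, 4.0):
        assert abs(y0(t, l1, l2) - target) < 1e-12      # constant in t
```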
Based on Equation (131), and using Equations (11), (12), (195)–(197) and (200)–(203), we obtain (after routine calculations) the function $J_0(\lambda)$ in the form
$$J_0(\lambda) = \frac{4\lambda_1\lambda_2+1}{4(4\lambda_1+\lambda_2)+1} + \lambda_1\left[\frac{2(8\lambda_1+1)}{4(4\lambda_1+\lambda_2)+1} - 1\right]^2 + \lambda_1\left[\frac{8\lambda_1+1}{4(4\lambda_1+\lambda_2)+1}\right]^2,$$
where the vector $\lambda$ is given in (196).
This function has a unique minimum point subject to the conditions (196). This minimum point is $\lambda_0^* = \operatorname{col}(\lambda_{0,1}^*, \lambda_{0,2}^*) = \operatorname{col}(0, 1)$. The corresponding minimal value of the function is $J_0(\lambda_0^*) = 1/5$. By virtue of Definition 2 and Theorem 5, the optimal value of the functional in the two-model singular optimal control problem (190) and (194) is $J^*(w_0) = 1/5$.
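Since $\lambda_2 = 1 - \lambda_1$ on the simplex (196), the minimization of $J_0(\lambda)$ is one-dimensional, and a brute-force scan corroborates the minimizer $\lambda_0^* = \operatorname{col}(0,1)$ and the minimal value $1/5$. A sketch of that scan (transcribing the closed form of $J_0$ above; variable names are ours):

```python
def J0(l1):
    # J0(lambda) with lambda2 = 1 - lambda1 substituted; M = 4*(4*l1 + l2) + 1
    l2 = 1.0 - l1
    M = 4 * (4 * l1 + l2) + 1
    return ((4 * l1 * l2 + 1) / M
            + l1 * (2 * (8 * l1 + 1) / M - 1) ** 2
            + l1 * ((8 * l1 + 1) / M) ** 2)

grid = [i / 10000 for i in range(10001)]        # lambda1 in [0, 1]
values = [J0(l1) for l1 in grid]
best = min(range(len(grid)), key=values.__getitem__)

assert grid[best] == 0.0                        # minimizer: lambda1 = 0, lambda2 = 1
assert abs(values[best] - 0.2) < 1e-12          # minimal value: 1/5
```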
Using Theorem 6, as well as Equations (138), (187) and (198) and the vector $\lambda_0^* = \operatorname{col}(0,1)$, we obtain the minimizing sequence in the two-model singular optimal control problem (190) and (194):
$$\{\hat u_q(w,t)\}_{q=1}^{+\infty} = \left\{-\,\frac{1}{\varepsilon_q}\left(\frac{1}{5-t}\,w_1 + w_2\right)\right\}_{q=1}^{+\infty}, \tag{204}$$
where the sequence of numbers $\{\varepsilon_q\}_{q=1}^{+\infty}$ is given by (186).
Thus, the above-obtained value $J^*(w_0) = 1/5$ is the infimum of the functional $J(u)$ in the two-model singular optimal control problem (190) and (194), i.e.,
$$\inf_u\, \max_{k \in \{1,2\}} F(u,k) = \frac{1}{5}. \tag{205}$$
Let us show that, in this example,
$$\inf_u\, \max_{k \in \{1,2\}} F(u,k) = \max_{k \in \{1,2\}}\, \inf_u F(u,k). \tag{206}$$
Taking into account that each single-model optimal control problem of the problem (190), (194) is its particular case, and using Theorems 5 and 6, we obtain that
$$\inf_u F(u,1) = \frac{1}{17}, \qquad \inf_u F(u,2) = \frac{1}{5}, \tag{207}$$
which, along with (205), yields (206).
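The values (207) admit a quick independent check. Since the functional (192) contains no control, the component $w_{2,k}$ can be viewed as an effective control $v$ for the scalar dynamics $\dot w_{1,k} = \rho_k v$; the reduced problem $\min\big[w_{1,k}^2(4) + \int_0^4 v^2(t)\,dt\big]$ then has the scalar Riccati equation $dp/dt = \rho_k^2\,p^2$, $p(4) = 1$, with optimal value $p(0)\,w_{1,k}^2(0) = p(0)$. This reduction is our own cross-check, not taken from the paper; a numeric sketch:

```python
def reduced_value(rho, n=100000):
    # Explicit Euler for dp/dt = rho^2 * p^2 on [0, 4] with p(4) = 1,
    # integrated backward from t = 4 to t = 0.
    dt = 4.0 / n
    p = 1.0
    for _ in range(n):
        p -= (rho * rho) * p * p * dt   # step from t to t - dt
    return p                            # inf F(u,k) = p(0) * w1(0)^2 = p(0)

assert abs(reduced_value(2.0) - 1.0 / 17.0) < 1e-3   # inf_u F(u,1), rho_1 = 2
assert abs(reduced_value(1.0) - 1.0 / 5.0) < 1e-3    # inf_u F(u,2), rho_2 = 1
```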
Moreover, the minimizing sequence in the single-model singular optimal control problem for $k = 2$ coincides with (204). Thus, in the set $\big\{\big(k,\{u_q^*(w,t)\}_{q=1}^{+\infty}\big) : k \in \{1,2\},\ u_q^*(w,t) \in U,\ q = 1,2,\ldots\big\}$, the point $\big(2,\{\hat u_q(w,t)\}_{q=1}^{+\infty}\big)$ is the saddle point of the functional (192) subject to the two-model system (190).
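As a numerical illustration of the latter statement, one can close model $k = 2$ of (190) with a member of the sequence (204) for decreasing $\varepsilon_q$ and evaluate $F(u,2)$ by quadrature; the value should approach $\inf_u F(u,2) = 1/5$ from (207). In the sketch below, the reading of the feedback (204) as acting on the state of model $k = 2$, the explicit Euler scheme, and the step size are our own choices:

```python
def F2(eps, dt=2e-5):
    # Model k = 2 of (190) closed by u = -(1/eps) * (w1/(5 - t) + w2),
    # our reading of a member of the sequence (204).
    w1, w2, cost = 1.0, 2.0, 0.0
    t = 0.0
    for _ in range(round(4.0 / dt)):
        u = -(w1 / (5.0 - t) + w2) / eps
        cost += w2 * w2 * dt                 # running part of F(u,2) in (192)
        w1, w2 = w1 + w2 * dt, w2 + u * dt   # Euler step of (190), k = 2
        t += dt
    return w1 * w1 + cost                    # terminal + running cost

a, b = F2(0.02), F2(0.005)
assert abs(b - 0.2) < abs(a - 0.2) < 0.1     # F(u,2) approaches 1/5 as eps decreases
assert abs(b - 0.2) < 0.03
```

The boundary layer near $t = 0$ (where $w_2$ is driven from its initial value toward the singular arc) contributes the $O(\varepsilon_q)$ gap between the computed cost and $1/5$.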

7. Concluding Remarks and Outlook

CRI. In this paper, we consider a finite horizon multi-model linear-quadratic optimal control problem. The functional of this problem does not contain the control function. Due to this feature of the functional, the considered optimal control problem is singular.
CRII. We solve the original control problem by the regularization approach, i.e., by its approximate transformation to an auxiliary regular optimal control problem. The latter has the same multi-model system of dynamics and a similar cost functional augmented by a finite horizon integral of the square of the Euclidean norm of the vector-valued control with a small positive weight (a small parameter). Hence, the auxiliary problem is a finite horizon multi-model linear-quadratic optimal control problem with a cheap control.
CRIII. Using the Robust Maximum Principle, we reduce the solution of this multi-model cheap control problem to the successive solution of the following three problems. The first problem is the terminal-value problem for the extended matrix Riccati differential equation. This problem depends not only on the aforementioned small parameter, but also on an auxiliary vector-valued parameter. The dimension of the latter equals the number of models in the multi-model system, and this vector-valued parameter belongs to a proper bounded and closed set in the corresponding Euclidean space. The second problem is the initial-value problem for the extended vector linear differential equation. Like the first problem, the second problem also depends on the aforementioned small scalar parameter and vector-valued parameter. The third problem is a nonlinear optimization problem. The cost function of this problem depends on the small parameter, and it is minimized with respect to the vector-valued parameter.
CRIV. An asymptotic analysis of each of the aforementioned three problems is carried out. Namely, for the first and the second problems, zero-order asymptotic solutions are formally constructed and justified. It is shown that these asymptotic solutions are valid uniformly with respect to the vector-valued parameter. For the third problem, the continuity of its solution with respect to the small parameter as the latter tends to zero is shown.
CRV. Based on this asymptotic analysis, the explicit expression of the infimum of the functional in the original singular optimal control problem is derived. The minimizing state-feedback control sequence in the original problem is also designed.
CRVI. The following issues of the topic of multi-model singular control problems are subject to future investigation: (a) multi-model singular infinite horizon linear-quadratic optimal control problem; (b) multi-model singular finite and infinite horizon stochastic linear-quadratic optimal control problems; (c) multi-model singular zero-sum linear-quadratic differential games; (d) multi-model singular linear-quadratic Nash equilibrium differential games; (e) singular optimal control problem of pursuit of a multi-model non-maneuvering evader; (f) singular zero-sum differential game of pursuit of a multi-model maneuvering evader; (g) singular optimal control problem of pursuit of a multi-model hybrid dynamics (regime-switching) non-maneuvering evader; (h) singular zero-sum differential game of pursuit of a multi-model hybrid dynamics (regime-switching) maneuvering evader.
It should be noted that multi-model systems other than the one in the present paper can be of considerable interest for investigation in the framework of cheap/singular control. For instance, these problems are: (i) multi-regime cheap control stochastic differential games with jumps; (ii) multi-regime singular stochastic differential games with jumps; (iii) cheap control games with fuzzy uncertainties; (iv) singular games with fuzzy uncertainties.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.


Share and Cite

MDPI and ACS Style

Glizer, V.Y. Robust Solution of the Multi-Model Singular Linear-Quadratic Optimal Control Problem: Regularization Approach. Axioms 2023, 12, 955. https://doi.org/10.3390/axioms12100955
