Article

The Effect of the Cost Functional on Asymptotic Solution to One Class of Zero-Sum Linear-Quadratic Cheap Control Differential Games

by
Valery Y. Glizer
* and
Vladimir Turetsky
Department of Mathematics, Braude College of Engineering, Karmiel 2161002, Israel
*
Author to whom correspondence should be addressed.
Symmetry 2025, 17(9), 1394; https://doi.org/10.3390/sym17091394
Submission received: 30 June 2025 / Revised: 15 August 2025 / Accepted: 23 August 2025 / Published: 26 August 2025

Abstract

A finite-horizon zero-sum linear-quadratic differential game with non-homogeneous dynamics is considered. The key feature of this game is as follows. The cost of the control of the minimizing player (the minimizer) in the game’s cost functional is much smaller than the cost of the control of the maximizing player (the maximizer) and the cost of the state variable. This smallness is due to a positive small multiplier (a small parameter) for the quadratic form of the minimizer’s control in the integrand of the cost functional. Two cases of the game’s cost functional are studied: (i) the current state cost in the integrand of the cost functional is a positive definite quadratic form; (ii) the current state cost in the integrand of the cost functional is a positive semi-definite (but non-zero) quadratic form. The latter case has not yet been considered in the literature devoted to the analysis of cheap control differential games. For each of the aforementioned cases, an asymptotic approximation (by the small parameter) of the solution to the considered game is derived. It is established that the property of the aforementioned state cost (positive definiteness/positive semi-definiteness) has an essential effect on the asymptotic analysis and solution of the differential equations (Riccati-type, linear, and trivial), appearing in the solvability conditions of the considered game. The cases (i) and (ii) require considerably different approaches to the derivation of the asymptotic solutions to these differential equations. Moreover, the case (ii) requires developing a significantly novel approach. The asymptotic solutions of the aforementioned differential equations considerably differ from each other in cases (i) and (ii). This difference yields essentially different asymptotic solutions (saddle point and value) of the considered game in these cases, meaning it is of crucial importance to distinguish cases (i) and (ii) in the study of various theoretical and real-life cheap control zero-sum linear-quadratic differential games. The asymptotic solutions of the considered game in cases (i) and (ii) are compared with each other. An academic illustrative example is presented.

1. Introduction

In this paper, we study a two-person finite-horizon zero-sum differential game with linear dynamics and a quadratic cost functional. The feature of this game is the following. The control cost of the minimizer (the minimizing player) in the game’s cost functionals is small in comparison with the control cost of the maximizer (the maximizing player) and with the cost of the state variable. Such a feature means that the considered game is a cheap control game. In the most general formulation, a cheap control problem is an extremal control problem in which a control cost of at least one decision maker is much smaller than the cost of the state variable in at least one cost functional of the problem.
Cheap control problems have considerable importance in the qualitative and quantitative analysis of many topics in the theory of optimal control, the theory of H∞ control, and the theory of differential games. For instance, such problems are important in the following topics: (1) existence analysis and analytical/numerical computation of singular controls and arcs (see, e.g., [1,2,3,4,5,6,7,8,9,10,11,12,13]); (2) derivation of limiting forms and maximally achievable accuracy of optimal regulators and filters (see, e.g., [14,15,16,17,18,19]); (3) study of inverse optimal control problems (see, e.g., [20]); (4) solution of high gain control problems (see, e.g., [21,22]).
Due to the smallness of the control cost, the Hamilton boundary-value problem and the Hamilton–Jacobi–Bellman–Isaacs equation, associated with a cheap control problem by solvability conditions, are singularly perturbed. This feature means that cheap control problems can be (and they really are) sources of novel classes of singularly perturbed differential equations. Thus, cheap control problems are also of considerable interest and importance in the theory of differential equations. Notions of perturbations of differential equations by small positive parameters, as well as a detailed explanation of a considerable difference between the regular and singular perturbations and their rigorous definitions, can be found in [23] (Chapter 1). In particular, a system of differential equations with a small positive multiplier (parameter) for a part of the highest-order derivatives is singularly perturbed.
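As a minimal illustration (a standard textbook-type example consistent with the notion recalled from [23], not taken from the paper), consider a scalar equation in which a small parameter multiplies the derivative:

```latex
\varepsilon\,\frac{dx(t)}{dt} = -x(t) + f(t), \qquad x(0) = x_0, \qquad 0 < \varepsilon \ll 1 .
% Setting \varepsilon = 0 degenerates the differential equation into the algebraic relation
% \bar{x}(t) = f(t), which in general cannot satisfy the initial condition; the lost fast
% transient is recovered by a boundary-layer (boundary function) correction:
x(t) \;\approx\; f(t) + \bigl(x_0 - f(0)\bigr)\, e^{-t/\varepsilon}.
```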
As mentioned above, in this paper we study a cheap control differential game.
The study of differential (and, more generally, dynamic) games has considerable theoretical importance (see, e.g., [13,24,25,26,27,28,29] and references therein). Moreover, differential/dynamic games are extremely important in studying various real-life problems, appearing in social and natural sciences (see, e.g., [30,31,32] and references therein), in engineering (see, e.g., [12,13,24,33] and references therein), in economics and finance (see, e.g., [34,35,36,37,38] and references therein), and in some other applications.
Cheap control differential games are extensively investigated in the open literature. Thus, in [12,13,39,40,41,42,43], different zero-sum cheap control games were analyzed. Various cheap control Nash equilibrium games were studied in [44,45,46]. A cheap control Stackelberg equilibrium differential game was studied in [47,48].
In most of the works devoted to the study of cheap control problems, the following two types of the quadratic cost of the “fast” state variable in the integral part of the cost functional are considered: (i) the cost is a positive definite quadratic form (see, e.g., [4,9,12,13,14,39,41,44,45,47] and references therein); (ii) the cost is zero (see, e.g., [42,43,46] and references therein). In the present paper, the intermediate case is studied. Namely, the case where the quadratic cost of the “fast” state variable in the integral part of the cost functional is a positive semi-definite (but non-zero) quadratic form.
The motivation of this purely theoretical paper is three-fold. The first motivation is to consider and analyze asymptotically a new class of linear-quadratic cheap control games, namely, the games with non-homogeneous dynamics. The second motivation is to consider and analyze asymptotically the essentially novel case of the quadratic cost of the “fast” state variable in the integral part of the cost functional, namely, the case where this cost is positive semi-definite (but non-zero). The asymptotic analysis of this case requires developing a significantly novel approach and yields considerably novel results. The third motivation is to compare the solution of the cheap control game in the aforementioned case with the solution of the game in the case where the quadratic cost of the “fast” state variable in the integral part of the cost functional is positive definite.
More precisely, in this paper, we consider the finite-horizon zero-sum linear-quadratic differential game with the cheap control of the minimizer. The dynamics of the considered game are non-homogeneous. The dimension of the minimizer’s control coincides with the dimension of the state vector, and the matrix-valued coefficient of the minimizer’s control in the equation of dynamics has full rank. Hence, the entire state variable is a “fast” one. Two cases of the quadratic cost of the state variable in the integral part of the cost functional are treated, namely, (a) positive definite quadratic form and (b) positive semi-definite (but non-zero) quadratic form. Thus, in case (a), the matrix-valued coefficient of the cost of the “fast” state variable in the integral part of the cost functional is a positive definite matrix. In the case (b), such a coefficient is a positive semi-definite (but non-zero) matrix. Due to the solvability conditions of the game, the derivation of its state-feedback saddle point is reduced to the solution of terminal-value problems for three differential equations: the matrix Riccati equation, vector linear equation, and scalar trivial equation. For these differential equations with the terminal conditions, asymptotic solutions are formally constructed and justified. In the aforementioned cases (a) and (b), the algorithms for constructing the asymptotic solutions and the solutions themselves differ considerably from each other. Based on these asymptotic solutions, asymptotic approximations of the game’s value and approximate saddle points are derived in each of cases (a) and (b).
The following should be noted: The cheap control differential game in case (a) was treated in the literature (and even in a more general form than in the present paper). However, to the best of our knowledge, the version of the non-homogeneous dynamics of such a game, including the derivation of an approximate saddle point, has not yet been considered in the literature. Furthermore, to the best of our knowledge, case (b) is completely novel. This case yields new types of singularly perturbed Riccati matrices and linear vector differential equations. For these equations, essentially novel approaches to the derivation of the asymptotic solutions are proposed. Moreover, along with separate analyses of cases (a) and (b), a comparison of the algorithms for the derivation of the aforementioned asymptotic solutions and the solutions themselves is presented. In this comparison, case (a) serves as a reference, clearly showing a considerable novelty of case (b) and its analysis.
The paper is organized as follows. In the next section (Section 2), the literature review on cheap control differential games is presented. In Section 3, the cheap control differential game is rigorously formulated. Main definitions are presented. In Section 4, the non-singular (invertible) transformation of the initially formulated game is carried out. This transformation yields a new, cheap control differential game, which is considerably simpler than the initially formulated game. The equivalence of both games to each other is proven. It should be noted that in both games, the state variable is the “fast” one. In what follows in this paper, the transformed game is investigated as an original cheap control differential game. In Section 5, the solvability conditions of this game are presented. These conditions contain terminal-value problems for three differential equations: the matrix Riccati equation, the vector linear equation, and the scalar trivial equation. Due to the cheap control of the minimizing player, these differential equations are perturbed by a small positive parameter ε . Along with the aforementioned terminal-value problems, the solvability conditions of the game contain the expressions for the components of the state-feedback saddle point and the value of the game. In Section 6, the asymptotic analysis with respect to ε of these solvability conditions is carried out in the case where the current state cost in the integrand of the cost functional is a positive definite quadratic form. In Section 7, such an analysis is carried out in the case where the current state cost in the integrand of the cost functional is a positive semi-definite (but non-zero) quadratic form. In both sections, the asymptotic analysis includes asymptotic solutions of the aforementioned terminal-value problems, obtaining asymptotic approximations of the game value and derivation of approximate saddle points. Section 8 is devoted to the solution of an illustrative example. Conclusions are presented in Section 9. Some technically complicated calculations are placed in Appendix A and Appendix B.
Main notations, used in the paper, are presented in Table 1.

2. Cheap Control Differential Games: Literature Review

The works devoted to cheap control differential games were briefly mentioned in the previous section. In the present section, we give a more detailed literature review of this topic.

2.1. Cheap Control Zero-Sum Differential Games

In the paper [39], an infinite horizon zero-sum differential game with linear homogeneous dynamics and quadratic cost functional is studied. The state variable and the players’ controls are vectors. The cost of the control of the minimizing player in the cost functional is small in comparison with the cost of the control of the maximizing player and the cost of the state variable. The latter is assumed to be positive definite. In the paper, sufficient conditions are established guaranteeing that the game value tends to zero as the control cost of the minimizing player tends to zero.
In the paper [42], a finite-horizon zero-sum pursuit-evasion differential game with linear homogeneous dynamics and quadratic cost functional is considered. The state variable and the players’ controls are scalar. The pursuer’s control cost of the cost function is small in comparison with the cost of the evader’s control and the cost of the terminal value of the state variable. The integral part of the cost functional does not contain the cost of the state variable. This game is a result of a proper state transformation in a finite-horizon planar linear-quadratic interception game with constant velocities of the interceptor and the target. The limit behavior of the state-feedback solution to the aforementioned scalar game is investigated in the case where the cost of the pursuer’s control tends to zero. This investigation is based on an exact solution of the Riccati differential equation, associated with the considered game, by solvability conditions. In [49], the results of the paper [42] are generalized for the case of a finite-horizon planar linear-quadratic interception game with time-variable velocities of the interceptor and the target. In [43], a generalization of the results of [42] is carried out in the case where the original cheap control linear-quadratic interception game cannot be transformed to a scalar game.
In the paper [12], a finite-horizon zero-sum differential game with linear homogeneous dynamics and a quadratic cost functional is studied. The state variable and the players’ controls are vectors. The cost of the control of the minimizing player in the cost functional is small in comparison with the cost of the control of the maximizing player and the cost of the state variable. The cost functional does not contain the cost of the terminal value of the “fast” state variable, while the integral part of the cost of the “fast” state variable is assumed to be positive definite. This cheap control game is obtained as a result of the regularization of a singular differential game, considered in the paper. The zero-order asymptotic solution of the Riccati matrix differential equation, associated with the cheap control game by solvability conditions, is derived. Further, this result is used for the derivation of the state-feedback solution of the original singular game. As illustrative examples, two versions of a planar singular interception game (with zero-order control dynamics and first-order control dynamics) are analyzed. To provide the positive definiteness of the cost of the “fast” state variable, the quadratic cost of the relative lateral velocity of the interceptor and the target is included in the cost functional in the case of zero-order control dynamics. For the same purpose in the case of first-order control dynamics, the cost of the quadratic interceptor’s lateral acceleration is included in the cost functional. Although these inclusions do not contradict the physical sense of the game, they are not always necessary in the real-life interception problem.
In the works [13,40], a more general cheap control game (than the one of [12]) is studied. Namely, the game, considered in [13,40], is a partial cheap control game. This means that only a part of the coordinates of the minimizing player’s control are cheap, while the other coordinates are not. The integral part of the cost of the “fast” state variable is assumed to be positive definite. The terminal value of the “fast” state variable does not appear in the cost functional. This cheap control game is obtained due to the regularization of a partial singular differential game, considered in these works. In the latter, only a part of the control’s coordinates of the minimizing player are singular, while the other coordinates are regular. The zero-order asymptotic solution of the Riccati matrix differential equation, associated with the partial cheap control game by solvability conditions, is obtained. Further, this result is used for the derivation of the state-feedback solution of the original partial singular game. One of the illustrative examples in [13] (see Section 4.8.3.2) considers a three-dimensional complete singular pursuit-evasion game with zero-order control dynamics. In this game, all control coordinates of the pursuer are singular. The regularization of this game yields a completely cheap control game. To provide the positive definiteness of the cost of the “fast” state variable, the quadratic cost of the relative velocities of the pursuer and the evader in both the horizontal and the vertical planes is included in the cost functional. Although these inclusions do not contradict the physical sense of the game, they are not always necessary in the real-life interception problem.
In the paper [50], an infinite horizon zero-sum differential game with linear homogeneous dynamics and a quadratic cost functional is studied. The state variable and the players’ controls are vectors. The cost of the control of the minimizing player in the cost functional is small in comparison with the cost of the control of the maximizing player and the cost of the state variable. The cost of the “fast” state variable is assumed to be positive definite. This cheap control game is obtained as a result of the regularization of a singular differential game, considered in the paper. The zero-order stabilizing asymptotic solution of the Riccati matrix algebraic equation, associated with the cheap control game by solvability conditions, is derived. Further, this result is used for the derivation of the state-feedback solution of the original singular game. In [13,51], the results of the paper [50] are generalized for the case of a partial singular differential game and a partial cheap control game.
In the paper [52], an infinite horizon zero-sum differential game with linear homogeneous dynamics and a quadratic cost functional is considered. The game’s dynamics involve state delays. The state variable and the players’ controls are vectors. The cost of the control of the minimizing player in the cost functional is small in comparison with the cost of the control of the maximizing player and the cost of the state variable. The cost of the “fast” state variable is assumed to be positive definite. This cheap control game is a result of the regularization of a singular differential game, considered in the paper. The zero-order stabilizing asymptotic solution of the set of three matrix Riccati-type equations with deviating arguments (the algebraic equation, the ordinary differential equation, and the partial differential equation), associated with the cheap control game by solvability conditions, is obtained. Then, this result is used for the derivation of the state-feedback solution of the original singular game.
In the paper [53], a finite-horizon zero-sum differential game with linear homogeneous dynamics and a quadratic cost functional is considered. The game’s dynamics involve state delays. The state variable and the players’ controls are vectors. The cost of the control of the minimizing player in the cost functional is small in comparison with the cost of the control of the maximizing player and the cost of the state variable. The integral cost of the “fast” state variable is assumed to be positive definite, while the terminal value of the state variable does not appear in the cost functional. This cheap control game is obtained by regularization of a singular differential game, considered in the paper. The zero-order asymptotic solution of the set of three matrix Riccati-type differential equations with deviating arguments (the ordinary differential equation, the partial differential equation with two arguments, and the partial differential equation with three arguments), associated with the cheap control game by solvability conditions, is obtained. Then, this result is used for the derivation of the state-feedback solution of the original singular game.
In the paper [54], a finite horizon zero-sum differential game with linear homogeneous dynamics and quadratic cost functional is considered. The game’s dynamics involve state and control delays. The state variable and the players’ controls are vectors. The cost of some (but, in general, not all) control coordinates of the minimizing player in the cost functional is much smaller than the cost of the other control coordinates of this player, the cost of the control of the maximizing player, and the cost of the terminal value of the state variable, i.e., the considered game is a partial cheap control game. The integral cost of the state variable is zero. By two consecutive state transformations, this game is converted to an equivalent partial cheap control game, which does not contain delays anymore. The parameter-free open-loop solvability condition of the new (un-delayed) game is derived. An asymptotic analysis of the open-loop saddle point solution to this game is carried out.

2.2. Cheap Control Nash Equilibrium Differential Games

In the paper [44], a finite-horizon two-person Nash equilibrium differential game with linear homogeneous dynamics and quadratic cost functionals is considered. The state variable and the players’ controls are vectors. In the cost functional of one player, the cost of some (but, in general, not all) coordinates of the control of this player is much smaller than the cost of the other control coordinates, the cost of the control of the other player, and the cost of the state variable, i.e., the considered game is a partial cheap control game. The integral cost of the “fast” state variable in the cost functional of the partial cheap control player is positive definite. The cost of the terminal value of the “fast” state variable in the cost functionals of both players is zero. This partial cheap control game is obtained due to the regularization of a partial singular Nash equilibrium differential game, initially considered in this work. The zero-order asymptotic solution of the set of Riccati-type matrix differential equations, associated with the partial cheap control game by solvability conditions, is obtained. Then, this result is used for the derivation of the state-feedback solution of the original partial singular game.
In the paper [45], an infinite horizon two-person Nash equilibrium differential game with linear homogeneous dynamics and quadratic cost functionals is considered. The state variable and the players’ controls are vectors. In the cost functional of one player, the cost of some (but, in general, not all) coordinates of the control of this player is small in comparison with the cost of the other control coordinates, the cost of the control of the other player, and the cost of the state variable, i.e., the considered game is a partial cheap control game. The integral cost of the “fast” state variable in the cost functional of the partial cheap control player is positive definite. This partial cheap control game is obtained as a result of the regularization of a partial singular Nash equilibrium differential game, considered in this work. The zero-order stabilizing asymptotic solution of the set of Riccati-type matrix algebraic equations, associated with the partial cheap control game by solvability conditions, is obtained. Then, this result is used for the derivation of the state-feedback solution of the original partial singular game.
In the paper [46], a finite horizon two-person Nash equilibrium differential game with linear homogeneous dynamics and quadratic cost functionals is considered. The game’s dynamics are with delays in the state variable and the players’ control variables. The state variable and the players’ controls are vectors. The cost of each player is a sum of two addends. The first addend is the cost of the terminal value of the state variable. The second addend is the integral cost of the control of this player. In the cost functional of one player, the cost of some (but, in general, not all) control coordinates is much smaller than the cost of the terminal value of the state variable, i.e., the considered game is a partial cheap control game. By two consecutive state transformations, this game is converted to an equivalent partial cheap control game, which does not contain delays anymore. This partial cheap control game is a result of the regularization of a partial singular Nash equilibrium differential game, considered in this work. An asymptotic open-loop solution to the partial cheap control game is obtained. Using this asymptotic solution, the exact open-loop solution of the initially considered partial singular Nash equilibrium differential game is derived.

2.3. Cheap Control Stackelberg Differential Game

In the paper [47], a two-player finite-horizon Stackelberg differential game with linear homogeneous dynamics and quadratic cost functionals is considered. The state variable and the controls of the leader and the follower are vectors. For this game, the case is considered where the control cost of the leader in the cost functionals of both players is small in comparison with the state cost and the cost of the control of the follower. The cost of the terminal value of the state variable in both functionals is zero. The cost of the “fast” state variable in the cost functional of the leader is positive definite. For this cheap control Stackelberg differential game, an asymptotically suboptimal solution is derived. This derivation is based on the first-order asymptotic solution of the boundary-value problem associated with the considered game by solvability conditions. In [48], the results of the paper [47] are applied to the asymptotic analysis of a supply chain problem. This problem is modeled by a two-player finite-horizon linear–quadratic Stackelberg differential game with the manufacturer as a leader and the retailer as a follower. The state variable and the control of each player are scalar. The control cost of the manufacturer is small. The cost of the state variable of the manufacturer (the “fast” state variable) in the integral part of its cost functional is positive. The asymptotic solution to the considered supply chain problem is constructed.

2.4. Cheap Control Differential Game of the Present Paper

In contrast with the aforementioned works, the game of the present paper is with a non-homogeneous dynamic. Due to this feature of the game, its solvability conditions contain not only the Riccati matrix differential equation but also a linear vector differential equation and a scalar differential equation. Therefore, the asymptotic analysis of the considered game requires deriving asymptotic solutions of each of these differential equations. Moreover, in the present paper, two cases of the integral cost of the “fast” state variable are studied: the positive definite cost and the positive semi-definite (but non-zero) cost. The latter case is a significantly novel one, requiring the development of an essentially novel approach to the asymptotic solution of the aforementioned differential equations. Thus, the cases of the positive definite and positive semi-definite integral costs of the “fast” state variable require considerably different approaches to their asymptotic study and the game’s solution. A rigorous analysis of this difference is also presented. The following should also be noted. The results of the asymptotic analysis in the case of the positive semi-definite integral costs of the state variable allow us to remove unnecessary restrictions in the study of real-life problems. For instance, using these results allows, in contrast with the work [13] (see Section 4.8.3.2), studying the three-dimensional singular/cheap control pursuit-evasion game with only one relative velocity in the integral part of the cost functional. In the case where only one of the relative velocities (horizontal or vertical) should be minimized, this form of the integral cost of the state variable in the three-dimensional pursuit-evasion game is much more reasonable from the practical viewpoint than the one considered in [13]. Another important application of the results of the present paper can be a supply chain problem with a cheap control either of a manufacturer or of a retailer and with a positive semi-definite (but non-zero) cost of the “fast” state variable in the integral part of the functional of the corresponding player.

3. Initial Game Formulation and Main Definitions

In this section, the initial formulation of the cheap control differential game is presented. The definition of the admissible pair of the players’ state-feedback controls in this game, as well as the definitions of the guaranteed results of the players’ controls and the definitions of the saddle point and the game value, are also presented.
Consider the following differential system controlled by two decision makers:
dζ(t)/dt = A(t) ζ(t) + B(t) w(t) + C(t) v(t) + ϕ(t), t ∈ [0, t_f], ζ(0) = ζ_0,
where ζ(t) ∈ R^n is a state variable; w(t) ∈ R^n and v(t) ∈ R^m are controls of the decision makers (players); A(t), B(t) and C(t) are given matrices of corresponding dimensions, while ϕ(t) is a given vector of corresponding dimension; ζ_0 ∈ R^n is a given constant vector; t_f > 0 is a given time instant; the matrix-valued functions A(t), B(t), C(t) and the vector-valued function ϕ(t) are continuous in the interval [0, t_f]; for all t ∈ [0, t_f], det B(t) ≠ 0.
The cost functional, to be minimized by the control w (the minimizer’s control) and maximized by the control v (the maximizer’s control), has the form
J(w, v) = ∫_0^{t_f} [ζ^T(t) D(t) ζ(t) + ε² w^T(t) G_w(t) w(t) − v^T(t) G_v(t) v(t)] dt,
where D(t), G_w(t) and G_v(t) are given matrices of corresponding dimensions; for all t ∈ [0, t_f], D(t) is symmetric and positive definite/positive semi-definite, while G_w(t) and G_v(t) are symmetric and positive definite; the matrix-valued functions D(t), G_w(t) and G_v(t) are continuous in the interval [0, t_f]; ε > 0 is a small parameter.
We assume that both players know perfectly all the data appearing in (1) and (2), as well as the current (state, time)-position of the system (1).
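To make the setup concrete, the following minimal Python sketch simulates the dynamics (1) under some fixed admissible state-feedback controls and evaluates the cost functional (2) along the resulting trajectory. All numerical data (dimensions, matrices, horizon, feedback laws) are illustrative placeholders and are not taken from the paper.

```python
# Illustrative simulation of system (1) and evaluation of the cost functional (2).
import numpy as np
from scipy.integrate import solve_ivp

n, m, t_f, eps = 2, 1, 2.0, 0.1
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.eye(n)                                   # det B(t) != 0, as required
C = np.array([[0.0], [1.0]])
phi = lambda t: np.array([0.1, np.sin(t)])      # non-homogeneous term
D = np.eye(n)                                   # state cost (here positive definite)
G_w, G_v = np.eye(n), np.eye(m)

# Some admissible (Lipschitz) feedback controls, purely for illustration.
w_fb = lambda zeta, t: -zeta / eps**2
v_fb = lambda zeta, t: np.array([0.5 * zeta[1]])

def rhs(t, zeta):
    return A @ zeta + B @ w_fb(zeta, t) + C @ v_fb(zeta, t) + phi(t)

sol = solve_ivp(rhs, (0.0, t_f), np.array([1.0, -1.0]),
                dense_output=True, max_step=1e-3)

# Evaluate J(w, v) of (2) along the computed trajectory by the trapezoidal rule.
ts = np.linspace(0.0, t_f, 2001)
zs = sol.sol(ts)
integrand = [zs[:, i] @ D @ zs[:, i]
             + eps**2 * w_fb(zs[:, i], ts[i]) @ G_w @ w_fb(zs[:, i], ts[i])
             - v_fb(zs[:, i], ts[i]) @ G_v @ v_fb(zs[:, i], ts[i])
             for i in range(len(ts))]
print("terminal state:", sol.y[:, -1])
print("cost functional J(w, v):", np.trapz(integrand, ts))
```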
Consider the set W̃ of all functions w = w̃(ζ, t): R^n × [0, t_f] → R^n, which are measurable with respect to t ∈ [0, t_f] for any given ζ ∈ R^n and satisfy the local Lipschitz condition with respect to ζ ∈ R^n uniformly in t ∈ [0, t_f]. Similarly, let Ṽ be the set of all functions v = ṽ(ζ, t): R^n × [0, t_f] → R^m, which are measurable with respect to t ∈ [0, t_f] for any given ζ ∈ R^n and satisfy the local Lipschitz condition with respect to ζ ∈ R^n uniformly in t ∈ [0, t_f].
Based on the results of the book [13], we introduce the following definitions.
Definition 1.
By ( W V ) ˜ , we denote the set of all pairs w ˜ ( ζ , t ) , v ˜ ( ζ , t ) , ( ζ , t ) R n × [ 0 , t f ] , satisfying the following conditions: (i) w ˜ ( ζ , t ) W ˜ , v ˜ ( ζ , t ) V ˜ ; (ii) the initial-value problem (1) for w ( t ) = w ˜ ( ζ , t ) , v ( t ) = v ˜ ( ζ , t ) and any ζ 0 R n has the unique absolutely continuous solution ζ w v ( t ; ζ 0 ) in the entire interval [ 0 , t f ] ; (iii) w ˜ ζ w v ( t ; ζ 0 ) , t L 2 [ 0 , t f ; R n ] ; (iv) v ˜ ζ w v ( t ; ζ 0 ) , t L 2 [ 0 , t f ; R m ] . We call ( W V ) ˜ the set of all admissible pairs of the players’ state-feedback controls in the game (1) and (2).
For a given w̃(ζ, t) ∈ W̃, we consider the set
K̃_v(w̃(ζ, t)) = {ṽ(ζ, t) ∈ Ṽ : (w̃(ζ, t), ṽ(ζ, t)) ∈ (WV)˜}.
Let us denote
L̃_w = {w̃(ζ, t) ∈ W̃ : K̃_v(w̃(ζ, t)) ≠ ∅}.
Similarly, for a given ṽ(ζ, t) ∈ Ṽ, we consider the set
K̃_w(ṽ(ζ, t)) = {w̃(ζ, t) ∈ W̃ : (w̃(ζ, t), ṽ(ζ, t)) ∈ (WV)˜}.
Let us denote
L̃_v = {ṽ(ζ, t) ∈ Ṽ : K̃_w(ṽ(ζ, t)) ≠ ∅}.
Definition 2.
For a given w ˜ ( ζ , t ) L ˜ w , the value
J w w ˜ ( ζ , t ) ; ζ 0 = sup v ˜ ( ζ , t ) K ˜ v w ˜ ( ζ , t ) J w ˜ ( ζ , t ) , v ˜ ( ζ , t )
is called the guaranteed result of w ˜ ( ζ , t ) in the game (1) and (2).
Definition 3.
For a given v ˜ ( ζ , t ) L ˜ v , the value
J v v ˜ ( ζ , t ) ; ζ 0 = inf w ˜ ( ζ , t ) K ˜ w v ˜ ( ζ , t ) J w ˜ ( ζ , t ) , v ˜ ( ζ , t )
is called the guaranteed result of v ˜ ( ζ , t ) in the game (1) and (2).
Definition 4.
A pair w ˜ * ( ζ , t ) , v ˜ * ( ζ , t ) ( W V ) ˜ is called a saddle-point solution of the game (1) and (2) if the guaranteed results of w ˜ * ( ζ , t ) and v ˜ * ( ζ , t ) in this game are equal to each other for all ζ 0 R n , i.e.,
J w w ˜ * ( ζ , t ) ; ζ 0 = J v v ˜ * ( ζ , t ) ; ζ 0 ζ 0 R n .
If this equality is valid, then the value
J * ( ζ 0 ) = J w w ˜ * ( ζ , t ) ; ζ 0 = J v v ˜ * ( ζ , t ) ; ζ 0
is called a value of the game (1) and (2). The solution of the initial-value problem (1) with w ( t ) = w ˜ * ( ζ , t ) , v ( t ) = v ˜ * ( ζ , t ) is called a saddle-point trajectory of the game (1) and (2).

4. Transformation of the Differential Game (1) and (2)

In this section, the non-singular (invertible) transformation of the game, formulated in Section 3, is carried out. Due to this transformation, we obtain a new cheap control differential game that is considerably simpler than the initially formulated game (1) and (2). The equivalence of both games to each other is proven. It should be noted that in both games, the state variable is the “fast” one. In what follows in this paper, the transformed game is investigated as an original cheap control differential game.
Following the results of [13] (Section 4.3) and taking into account that we are going to derive the first-order asymptotic solution of the game, we assume the following:
A1. 
The matrix-valued functions A ( t ) , C ( t ) , G v ( t ) are twice continuously differentiable in the interval [ 0 , t f ] .
A2. 
The matrix-valued functions B ( t ) , D ( t ) , G w ( t ) are three times continuously differentiable in the interval [ 0 , t f ] .
A3. 
The vector-valued function ϕ ( t ) is twice continuously differentiable in the interval [ 0 , t f ] .
Remark 1.
By G_w^{1/2}(t), let us denote the unique symmetric positive definite square root of the positive definite matrix G_w(t), t ∈ [0, t_f]. The inverse matrix of G_w^{1/2}(t) is denoted as G_w^{-1/2}(t). It is clear that G_w^{-1/2}(t) is also symmetric and positive definite. Moreover, due to the assumption A2, the matrix-valued functions G_w^{1/2}(t) and G_w^{-1/2}(t) are three times continuously differentiable in the interval [0, t_f].
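For instance, at a fixed time instant the square root G_w^{1/2}(t) and its inverse G_w^{-1/2}(t) can be computed from an eigendecomposition; the matrix G_w below is an illustrative placeholder, not data from the paper.

```python
# Symmetric positive definite square root of G_w at a fixed time instant.
import numpy as np

G_w = np.array([[2.0, 0.5],
                [0.5, 1.0]])                    # symmetric, positive definite (placeholder)
eigvals, Q = np.linalg.eigh(G_w)                # G_w = Q diag(eigvals) Q^T
G_w_half = Q @ np.diag(np.sqrt(eigvals)) @ Q.T  # unique symmetric PD square root
G_w_half_inv = np.linalg.inv(G_w_half)          # G_w^{-1/2}

assert np.allclose(G_w_half @ G_w_half, G_w)    # (G_w^{1/2})^2 = G_w
```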
Remark 2.
Since the matrix D(t) is symmetric for all t ∈ [0, t_f], the matrix
G_w^{-1/2}(t) B^T(t) D(t) B(t) G_w^{-1/2}(t)
is also symmetric for all t ∈ [0, t_f]. Therefore, due to the results of [55], there exists an orthogonal matrix H(t), t ∈ [0, t_f] (H^{-1}(t) = H^T(t)) such that
H^T(t) G_w^{-1/2}(t) B^T(t) D(t) B(t) G_w^{-1/2}(t) H(t) = Λ(t) = diag(λ_1(t), λ_2(t), …, λ_n(t)), t ∈ [0, t_f],
where λ_i(t), (i = 1, 2, …, n) are eigenvalues of the matrix G_w^{-1/2}(t) B^T(t) D(t) B(t) G_w^{-1/2}(t). Due to the assumption A2 and the results of [56], the matrix-valued function H(t) and the functions λ_i(t), (i = 1, 2, …, n) are three times continuously differentiable in the interval [0, t_f]. Moreover, since the matrix D(t) is at least positive semi-definite, λ_i(t) ≥ 0, (i = 1, 2, …, n), t ∈ [0, t_f].
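At a fixed time instant, H(t) and Λ(t) in (3) can be obtained numerically from a symmetric eigendecomposition. The sketch below uses illustrative placeholder matrices (with a positive semi-definite, non-zero D, as in case (ii)); it is not data from the paper.

```python
# Orthogonal diagonalization (3): H(t) and Lambda(t) at a fixed time instant.
import numpy as np

B = np.array([[1.0, 0.2], [0.0, 1.0]])
D = np.diag([1.0, 0.0])                          # positive semi-definite, non-zero state cost
G_w_half_inv = np.linalg.inv(
    np.array([[1.5, 0.1], [0.1, 1.0]]))          # stands in for G_w^{-1/2}(t)

M = G_w_half_inv @ B.T @ D @ B @ G_w_half_inv    # symmetric, positive semi-definite
lam, H = np.linalg.eigh(M)                       # M = H diag(lam) H^T, H orthogonal
Lam = np.diag(lam)

assert np.allclose(H.T @ M @ H, Lam)
print("eigenvalues lambda_i(t):", lam)           # all >= 0 (up to roundoff)
```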
Let us make the following state and control transformations in the game (1) and (2):
ζ(t) = R_z(t) z(t), R_z(t) = B(t) G_w^{-1/2}(t) H(t), t ∈ [0, t_f],
w(t) = R_u(t) u(t), R_u(t) = G_w^{-1/2}(t) H(t), t ∈ [0, t_f],
where z ( t ) is a new state variable and u ( t ) is a new control variable.
Since the matrices B ( t ) , G w 1 / 2 ( t ) , and H ( t ) are invertible, then the transformations (4) and (5) are invertible.
Lemma 1.
Let Assumptions A1–A3 be valid. Then, the transformations (4) and (5) convert the system (1) and the cost functional (2) to the following system and cost functional:
dz(t)/dt = A(t) z(t) + u(t) + C(t) v(t) + f(t), t ∈ [0, t_f], z(0) = z_0,
J(u, v) = ∫_0^{t_f} [z^T(t) Λ(t) z(t) + ε² u^T(t) u(t) − v^T(t) G_v(t) v(t)] dt,
where
A(t) = H^T(t) G_w^{1/2}(t) B^{-1}(t) [A(t) B(t) G_w^{-1/2}(t) H(t) − d/dt(B(t) G_w^{-1/2}(t) H(t))],
C(t) = H^T(t) G_w^{1/2}(t) B^{-1}(t) C(t),
f(t) = H^T(t) G_w^{1/2}(t) B^{-1}(t) ϕ(t),
z_0 = H^T(0) G_w^{1/2}(0) B^{-1}(0) ζ_0.
The matrix-valued functions A ( t ) , C ( t ) , and the vector-valued function f ( t ) are twice continuously differentiable in the interval [ 0 , t f ] .
Proof. 
Differentiating (4) yields
dζ(t)/dt = d/dt[B(t) G_w^{-1/2}(t) H(t)] z(t) + B(t) G_w^{-1/2}(t) H(t) dz(t)/dt, t ∈ [0, t_f].
Substituting this expression for d ζ ( t ) / d t , as well as (4) and (5), into the system (1), we obtain
d/dt[B(t) G_w^{-1/2}(t) H(t)] z(t) + B(t) G_w^{-1/2}(t) H(t) dz(t)/dt = A(t) B(t) G_w^{-1/2}(t) H(t) z(t) + B(t) G_w^{-1/2}(t) H(t) u(t) + C(t) v(t) + ϕ(t), t ∈ [0, t_f], B(0) G_w^{-1/2}(0) H(0) z(0) = ζ_0.
Now, resolving the first equation in (13) with respect to d z ( t ) / d t , the second equation in (13) with respect to z ( 0 ) , and using the orthogonality of the matrix H ( t ) , we directly prove the Equations (6), (8)–(11).
Furthermore, substitution of (4) and (5) into the cost functional (2) and use of Remark 1 and Equation (3) immediately yield the cost functional (7).
Finally, the smoothness of the matrices A ( t ) , C ( t ) , and the vector f ( t ) , claimed in the lemma, is a direct consequence of the Equations (8)–(10) and the Assumptions A1–A3. □
Remark 3.
The cost functional (7) is minimized by the control u (the minimizer’s control) and maximized by the control v (the maximizer’s control). Similarly to the game (1) and (2), we assume that in the game (6) and (7) both players know perfectly all the data appearing in the system (6) and the cost functional (7), as well as the current (state, time)-position of the system (6).
Consider the set U of all functions u = u(z, t): R^n × [0, t_f] → R^n, which are measurable with respect to t ∈ [0, t_f] for any given z ∈ R^n and satisfy the local Lipschitz condition with respect to z ∈ R^n uniformly in t ∈ [0, t_f]. Similarly, we consider the set V of all functions v = v(z, t): R^n × [0, t_f] → R^m, which are measurable with respect to t ∈ [0, t_f] for any given z ∈ R^n and satisfy the local Lipschitz condition with respect to z ∈ R^n uniformly in t ∈ [0, t_f].
Similarly to Definitions 1–4, we introduce the following definitions.
Definition 5.
Let ( U V ) be the set of all pairs u ( z , t ) , v ( z , t ) , ( z , t ) R n × [ 0 , t f ] , satisfying the following conditions: (i) u ( z , t ) U , v ( z , t ) V ; (ii) the initial-value problem (6) for u ( t ) = u ( z , t ) , v ( t ) = v ( z , t ) and any z 0 R n has the unique absolutely continuous solution z u v ( t ; z 0 ) in the entire interval [ 0 , t f ] ; (iii) u z u v ( t ; z 0 ) , t L 2 [ 0 , t f ; R n ] ; (iv) v z u v ( t ; z 0 ) , t L 2 [ 0 , t f ; R m ] . We call ( U V ) the set of all admissible pairs of the players’ state-feedback controls in the game (6) and (7).
For a given u(z, t) ∈ U, we consider the set
K_v(u(z, t)) = {v(z, t) ∈ V : (u(z, t), v(z, t)) ∈ (UV)}.
Let us denote
L_u = {u(z, t) ∈ U : K_v(u(z, t)) ≠ ∅}.
Similarly, for a given v(z, t) ∈ V, we consider the set
K_u(v(z, t)) = {u(z, t) ∈ U : (u(z, t), v(z, t)) ∈ (UV)}.
Let us denote
L_v = {v(z, t) ∈ V : K_u(v(z, t)) ≠ ∅}.
Definition 6.
For a given u ( z , t ) L u , the value
J u u ( z , t ) ; z 0 = sup v ( z , t ) K v u ( z , t ) J u ( z , t ) , v ( z , t )
is called the guaranteed result of u ( z , t ) in the game (6) and (7).
Definition 7.
For a given v ( z , t ) L v , the value
J v v ( z , t ) ; z 0 = inf u ( z , t ) K u v ( z , t ) J u ( z , t ) , v ( z , t )
is called the guaranteed result of v ( z , t ) in the game (6) and (7).
Definition 8.
A pair u * ( z , t ) , v * ( z , t ) ( U V ) is called a saddle-point solution of the game (6) and (7) if the guaranteed results of u * ( z , t ) and v * ( z , t ) in this game are equal to each other for all z 0 R n , i.e.,
J u u * ( z , t ) ; z 0 = J v v * ( z , t ) ; z 0 z 0 R n .
If this equality is valid, then the value
J * ( z 0 ) = J u u * ( z , t ) ; z 0 = J v v * ( z , t ) ; z 0
is called a value of the game (6) and (7). The solution of the initial-value problem (6) with u ( t ) = u * ( z , t ) , v ( t ) = v * ( z , t ) is called a saddle-point trajectory of the game (6) and (7).
Let ζ 0 R n and z 0 R n be any prechosen vectors satisfying the Equation (11).
The following assertion is a direct consequence of Definition 1, Definition 5, and Lemma 1.
Corollary 1.
Let Assumptions A1–A3 be valid. Let w ˜ ( ζ , t ) , v ˜ ( ζ , t ) be an admissible pair of the players’ state-feedback controls in the game (1) and (2), i.e., w ˜ ( ζ , t ) , v ˜ ( ζ , t ) ( W V ) ˜ . Let ζ w v ( t ; ζ 0 ) , t [ 0 , t f ] be the solution of the initial-value problem (1) generated by this pair of the players’ controls. Then the pair R u 1 ( t ) w ˜ R z ( t ) z , t , v ˜ R z ( t ) z , t is an admissible pair of the players’ state-feedback controls in the game (6) and (7), meaning that R u 1 ( t ) w ˜ R z ( t ) z , t , v ˜ R z ( t ) z , t ( U V ) . Furthermore, ζ w v ( t ; ζ 0 ) = R z ( t ) z u v ( t ; z 0 ) , t [ 0 , t f ] , where z u v ( t ; z 0 ) , t [ 0 , t f ] is the unique solution of the initial-value problem (6) generated by the players’ controls u ( t ) = R u 1 ( t ) w ˜ R z ( t ) z , t , v ( t ) = v ˜ R z ( t ) z , t . Moreover, J w ˜ ( ζ , t ) , v ˜ ( ζ , t ) = J R u 1 ( t ) w ˜ R z ( t ) z , t , v ˜ R z ( t ) z , t . Vice versa: let u ( z , t ) , v ( z , t ) ( U V ) and z u v ( t ; z 0 ) , t [ 0 , t f ] be the solution of the initial-value problem (6) generated by this pair of the players’ controls. Then R u ( t ) u R z 1 ( t ) ζ , t , v R z 1 ( t ) ζ , t ( W V ) ˜ and z u v ( t ; z 0 ) = R z 1 ( t ) ζ w v ( t ; ζ 0 ) , t [ 0 , t f ] , where ζ w v ( t ; ζ 0 ) , t [ 0 , t f ] is the unique solution of the initial-value problem (1) generated by the players’ controls w ( t ) = R u ( t ) u R z 1 ( t ) ζ , t , v ( t ) = v R z 1 ( t ) ζ , t . Moreover, J u ( z , t ) , v ( z , t ) = J R u ( t ) u R z 1 ( t ) ζ , t , v R z 1 ( t ) ζ , t .
Lemma 2.
Let Assumptions A1–A3 be valid. Let the pair w ˜ * ( ζ , t ) , v ˜ * ( ζ , t ) be a saddle-point of the game (1) and (2). Then the pair R u 1 ( t ) w ˜ * R z ( t ) z , t , v ˜ * R z ( t ) z , t is a saddle-point of the game (6) and (7). Vice versa: let the pair { u * ( z , t ) , v * ( z , t ) be a saddle-point of the game (6) and (7). Then the pair R u ( t ) u * R z 1 ( t ) ζ , t , v * R z 1 ( t ) ζ , t is a saddle-point of the game (1) and (2).
Proof. 
We start with the first lemma’s statement. Since the pair w ˜ * ( ζ , t ) , v ˜ * ( ζ , t ) is a saddle-point of the game (1) and (2), then the pair of the players’ controls w ˜ * ( ζ , t ) , v ˜ * ( ζ , t ) is admissible in this game. Hence, due to Corollary 1, the pair of the players’ controls R u 1 ( t ) w ˜ * R z ( t ) z , t , v ˜ * R z ( t ) z , t is admissible in the game (6) and (7) and the following equality is valid: J w ˜ * ( ζ , t ) , v ˜ * ( ζ , t ) = J R u 1 ( t ) w ˜ * R z ( t ) z , t , v ˜ * R z ( t ) z , t . Moreover, by Definitions 2 and 3, Definitions 6 and 7 and Corollary 1, we obtain
J_w(w̃*(ζ, t); ζ_0) = J_u(R_u^{-1}(t) w̃*(R_z(t) z, t); z_0), J_v(ṽ*(ζ, t); ζ_0) = J_v(ṽ*(R_z(t) z, t); z_0).
The equalities in (14), along with Definitions 4 and 8, directly yield the first statement of the lemma. The second statement is proven quite similarly. □
Remark 4.
Due to Lemma 2, the initially formulated game (1) and (2) is equivalent to the new game (6) and (7). Along with this equivalence, due to Lemma 1, the latter game is simpler than the former one. Therefore, in what follows, we deal with the game (6) and (7), which we consider as an original one and call it the Cheap Control Differential Game (CCDG). In the next section, ε-dependent solvability conditions of the CCDG are presented.
Remark 5.
By the nonsingular control transformation u ˘ ( t ) = ε u ( t ) , ( u ˘ ( t ) is a new control of the minimizer), the CCDG can be converted to the equivalent zero-sum differential game consisting of the dynamic system
ε dz(t)/dt = ε [A(t) z(t) + C(t) v(t) + f(t)] + ŭ(t), t ∈ [0, t_f], z(0) = z_0,
and the cost functional
J̆(ŭ, v) = ∫_0^{t_f} [z^T(t) Λ(t) z(t) + ŭ^T(t) ŭ(t) − v^T(t) G_v(t) v(t)] dt.
In this game, the dynamic equation is singularly perturbed, and the state variable z ( t ) is a fast state variable (see, e.g., [23]). Therefore, we call the state variable of the CCDG a fast state variable. Thus, the cost functional J ˘ ( u ˘ , v ) contains the cost of the fast state variable z ( t ) , the cost of the maximizer’s control v ( t ) , and the non-small cost of the minimizer’s control u ˘ ( t ) while the cost functional J ( u , v ) of the CCDG contains the cost of the fast state variable z ( t ) , the cost of the maximizer’s control v ( t ) , and the small cost of the minimizer’s control u ( t ) .

5. Solvability Conditions of the CCDG

In this section, the solvability conditions of the game (6) and (7) are presented. These conditions consist of a terminal-value problem for three differential equations: the Riccati matrix equation, the linear vector equation, and the scalar equation. The solutions of these terminal-value problems appear in the expression for the saddle point and the value of the game. In addition, two cases of the state cost in the cost functional (7) are distinguished. In the subsequent sections, the game (6) and (7) will be analyzed separately in each of these cases.
Consider the following matrices:
S_u(ε) = (1/ε²) I_n, S_v(t) = C(t) G_v^{-1}(t) C^T(t), S(t, ε) = S_u(ε) − S_v(t),
where t ∈ [0, t_f], ε > 0.
Using the data of the CCDG (see the Equations (6) and (7)) and the matrices in (15), we construct the terminal-value problem for the Riccati matrix differential equation
dK/dt = −K A(t) − A^T(t) K + K S(t, ε) K − Λ(t), t ∈ [0, t_f], K(t_f) = 0.
Following the results of [26] (Theorem 6.17), [13] (Section 4.4.2), [29], we assume that
A4. 
For a given ε > 0 , the terminal-value problem (16) has the symmetric solution K = K ( t , ε ) in the entire interval [ 0 , t f ] .
Remark 6.
Since the right-hand side of the differential equation in (16) is a smooth function with respect to the unknown matrix K, then the aforementioned solution K = K ( t , ε ) is unique.
Using Assumption A4, as well as the data of the CCDG and Equation (15), we construct the terminal-value problem for the linear vector-valued differential equation
dq/dt = −[A^T(t) − K(t, ε) S(t, ε)] q − 2 K(t, ε) f(t), t ∈ [0, t_f], q(t_f) = 0.
The problem (17) has the unique solution q = q ( t , ε ) in the entire interval [ 0 , t f ] . Using this solution, we construct the terminal-value problem for the scalar differential equation
ds/dt = (1/4) q^T(t, ε) S(t, ε) q(t, ε) − f^T(t) q(t, ε), t ∈ [0, t_f], s(t_f) = 0.
This problem has the unique solution s = s ( t , ε ) in the entire interval [ 0 , t f ] .
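For orientation, the following sketch integrates the terminal-value problems (16)–(18) backward in time as one stacked system and then evaluates the expression for the game value given in Proposition 1 below at a chosen initial state. All game data (A, C, G_v, Λ, f, t_f, ε, z_0) are illustrative placeholders, not taken from the paper.

```python
# Numerical sketch of the solvability conditions (16)-(18) of the CCDG.
import numpy as np
from scipy.integrate import solve_ivp

n, m, t_f, eps = 2, 1, 1.0, 0.1
A   = np.array([[0.0, 1.0], [-1.0, 0.0]])
C   = np.array([[0.0], [1.0]])
G_v = np.array([[4.0]])
Lam = np.diag([1.0, 2.0])                     # case I: positive definite state cost
f   = lambda t: np.array([0.2, -0.1])

S_v = C @ np.linalg.inv(G_v) @ C.T
S   = np.eye(n) / eps**2 - S_v                # S(t, eps) = S_u(eps) - S_v(t)

def rhs(t, y):
    K = y[:n*n].reshape(n, n)
    q = y[n*n:n*n + n]
    dK = -K @ A - A.T @ K + K @ S @ K - Lam   # Riccati equation (16)
    dq = -(A.T - K @ S) @ q - 2.0 * K @ f(t)  # linear vector equation (17)
    ds = 0.25 * q @ S @ q - f(t) @ q          # scalar equation (18)
    return np.concatenate([dK.ravel(), dq, [ds]])

yT = np.zeros(n*n + n + 1)                    # K(t_f) = 0, q(t_f) = 0, s(t_f) = 0
sol = solve_ivp(rhs, (t_f, 0.0), yT, method="BDF",
                max_step=1e-3, rtol=1e-8, atol=1e-10)

K0 = sol.y[:n*n, -1].reshape(n, n)            # K(0, eps)
q0 = sol.y[n*n:n*n + n, -1]                   # q(0, eps)
s0 = sol.y[-1, -1]                            # s(0, eps)
z0 = np.array([1.0, -1.0])
print("game value:", z0 @ K0 @ z0 + z0 @ q0 + s0)
```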
Consider the functions
u*_ε(z, t) = −(1/ε²) K(t, ε) z − (1/(2ε²)) q(t, ε) ∈ U, (z, t) ∈ R^n × [0, t_f],
and
v*_ε(z, t) = G_v^{-1}(t) C^T(t) K(t, ε) z + (1/2) G_v^{-1}(t) C^T(t) q(t, ε) ∈ V, (z, t) ∈ R^n × [0, t_f].
Based on the results of [13,26,29], we immediately have the following assertion.
Proposition 1.
Let Assumptions A1–A4 be valid. Then the pair u ε * ( z , t ) , v ε * ( z , t ) is the saddle point of the CCDG. The value of this game has the form
J*_ε(z_0) = J(u*_ε(z, t), v*_ε(z, t)) = z_0^T K(0, ε) z_0 + z_0^T q(0, ε) + s(0, ε).
In the forthcoming sections, we derive an asymptotic solution of the CCDG with respect to ε > 0 in the following two cases:
Case I: λ_i(t) > 0, i = 1, 2, …, n, t ∈ [0, t_f],
Case II: λ_j(t) > 0, j = 1, …, l, 1 ≤ l < n, t ∈ [0, t_f]; λ_k(t) ≡ 0, k = l + 1, …, n, t ∈ [0, t_f],
where λ_i(t), (i = 1, 2, …, n) are the entries of the diagonal matrix Λ(t) (see the Equations (3) and (7)).
We start the asymptotic solution of the CCDG with the simpler case, namely, case I.

6. Asymptotic Solution of the CCDG in Case I

In this section, the game (6) and (7) is analyzed asymptotically for all sufficiently small ε > 0 subject to the fulfillment of the condition (22). This analysis includes the first-order asymptotic solutions of the terminal-value problems for the Riccati matrix differential equation, the linear vector differential equation, and the scalar differential equation. Based on these asymptotic solutions, two kinds of asymptotic approximations of the game value are obtained, and an approximate saddle point is derived.

6.1. Transformation of the Terminal-Value Problems (16)–(18)

First of all, let us note the following. Due to Equation (15), the differential equations in the problems (16)–(18) have singularities with respect to ε in their right-hand sides for ε = 0 . To remove these singularities, we look for the solutions of the problems (16) and (17) in the form
K ( t , ε ) = ε P ( t , ε ) , t [ 0 , t f ] ,
q ( t , ε ) = ε p ( t , ε ) , t [ 0 , t f ] ,
where P ( t , ε ) and p ( t , ε ) are new unknown matrix-valued and vector-valued functions.
Substitution of (24) and (25) into the problems (16)–(18) yields the following new terminal-value problems
ε dP(t, ε)/dt = −ε P(t, ε) A(t) − ε A^T(t) P(t, ε) + P(t, ε) [I_n − ε² S_v(t)] P(t, ε) − Λ(t), t ∈ [0, t_f], P(t_f, ε) = 0,
ε dp(t, ε)/dt = [−ε A^T(t) + P(t, ε)(I_n − ε² S_v(t))] p(t, ε) − 2ε P(t, ε) f(t), t ∈ [0, t_f], p(t_f, ε) = 0,
ds(t, ε)/dt = (1/4) p^T(t, ε) [I_n − ε² S_v(t)] p(t, ε) − ε f^T(t) p(t, ε), t ∈ [0, t_f], s(t_f, ε) = 0.
Moreover, substitution of (24) and (25) into the expressions for the components of the CCDG saddle point and into the expression for the CCDG value (see the Equations (19)–(21)) yields the following new expressions for the components of the saddle point and for the game value:
u*_ε(z, t) = −(1/ε) P(t, ε) z − (1/(2ε)) p(t, ε) ∈ U, (z, t) ∈ R^n × [0, t_f],
v*_ε(z, t) = ε G_v^{-1}(t) C^T(t) P(t, ε) z + (ε/2) G_v^{-1}(t) C^T(t) p(t, ε) ∈ V, (z, t) ∈ R^n × [0, t_f],
J*_ε(z_0) = J(u*_ε(z, t), v*_ε(z, t)) = ε z_0^T P(0, ε) z_0 + ε z_0^T p(0, ε) + s(0, ε).

6.2. Asymptotic Solution of the Terminal-Value Problem (26)

The problem (26) is a singularly perturbed terminal-value problem. Based on the Boundary Functions Method [23], we look for the first-order asymptotic solution of (26) in the form
P_1(t, ε) = P_0^o(t) + P_0^b(τ) + ε [P_1^o(t) + P_1^b(τ)],
where
τ = (t − t_f)/ε.
Remark 7.
In (32), the terms with the superscript o constitute the so-called outer solution, and the terms with the superscript b are the boundary corrections in the left-hand neighborhood of t = t_f. Equations and boundary conditions for the asymptotic solution terms are obtained by substituting P_1(t, ε) into the problem (26) instead of P(t, ε) and equating the coefficients for the same power of ε on both sides of the resulting equations, separately depending on t and on τ. Additionally, we note the following. For any t ∈ [0, t_f) and ε > 0, τ < 0. Moreover, if ε → +0, then, for any t ∈ [0, t_f), τ → −∞.

6.2.1. Obtaining the Outer Solution Term P 0 o ( t )

Due to Remark 7, we have the following matrix Riccati algebraic equation for P 0 o ( t ) :
0 = (P_0^o(t))² − Λ(t), t ∈ [0, t_f],
yielding
P_0^o(t) = Λ^{1/2}(t) = diag(λ_1^{1/2}(t), λ_2^{1/2}(t), …, λ_n^{1/2}(t)), t ∈ [0, t_f].
Remark 8.
Due to Remark 2 and Equation (22), the matrix P 0 o ( t ) is positive definite for all t [ 0 , t f ] . Moreover, the matrix-valued function P 0 o ( t ) is three times continuously differentiable in the interval [ 0 , t f ] .

6.2.2. Obtaining the Boundary Correction P 0 b ( τ )

Taking into account Remark 7 and Equations (33) and (35), we directly derive the following terminal-value problem for P 0 b ( τ ) :
dP_0^b(τ)/dτ = Λ^{1/2}(t_f) P_0^b(τ) + P_0^b(τ) Λ^{1/2}(t_f) + (P_0^b(τ))², τ ≤ 0, P_0^b(0) = −Λ^{1/2}(t_f).
The differential equation in (36) is a matrix Bernoulli differential equation [57]. Using this feature, we obtain the solution of the problem (36)
P_0^b(τ) = −2 Λ^{1/2}(t_f) exp(2Λ^{1/2}(t_f)τ) [I_n + exp(2Λ^{1/2}(t_f)τ)]^{-1}, τ ≤ 0.
Due to the positive definiteness of Λ^{1/2}(t_f), the matrix-valued function P_0^b(τ) is exponentially decaying for τ → −∞, i.e.,
‖P_0^b(τ)‖ ≤ a exp(2βτ), τ ≤ 0,
where a > 0 is some constant;
β = min_{i ∈ {1, 2, …, n}} λ_i^{1/2}(t_f) > 0.
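A quick numerical check (with an illustrative Λ(t_f), not data from the paper) of the closed-form boundary correction (37): the sketch verifies the terminal condition P_0^b(0) = −Λ^{1/2}(t_f) and illustrates the exponential decay (38).

```python
# Evaluating the boundary correction P_0^b(tau) of (37) and its decay for tau <= 0.
import numpy as np
from scipy.linalg import expm

Lam_tf = np.diag([1.0, 2.0])                   # illustrative Lambda(t_f), case I
sqrt_Lam = np.sqrt(Lam_tf)                     # Lambda^{1/2}(t_f), diagonal here

def P0b(tau):
    E = expm(2.0 * sqrt_Lam * tau)             # exp(2 Lambda^{1/2}(t_f) tau)
    return -2.0 * sqrt_Lam @ E @ np.linalg.inv(np.eye(2) + E)

assert np.allclose(P0b(0.0), -sqrt_Lam)        # terminal condition P_0^b(0) = -Lambda^{1/2}(t_f)
for tau in [0.0, -1.0, -3.0, -6.0]:
    print(tau, np.linalg.norm(P0b(tau)))       # decays like exp(2*beta*tau), beta = min lambda_i^{1/2}(t_f)
```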

6.2.3. Obtaining the Outer Solution Term P 1 o ( t )

Using Equation (35) and Remark 8, we have (similarly to (34)) the matrix linear algebraic equation for P 1 o ( t )
dΛ^{1/2}(t)/dt = −Λ^{1/2}(t) A(t) − A^T(t) Λ^{1/2}(t) + Λ^{1/2}(t) P_1^o(t) + P_1^o(t) Λ^{1/2}(t), t ∈ [0, t_f].
Using the results of [58] and taking into account Equations (22) and (35), we obtain the solution of the Equation (40)
P_1^o(t) = ∫_0^{+∞} exp(−Λ^{1/2}(t)ξ) [dΛ^{1/2}(t)/dt + Λ^{1/2}(t) A(t) + A^T(t) Λ^{1/2}(t)] exp(−Λ^{1/2}(t)ξ) dξ, t ∈ [0, t_f].
To complete the construction of the asymptotic solution to the problem (26), we should derive the boundary correction P 1 b ( τ ) . This technically complicated derivation is presented in Appendix A.
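For a fixed t, Equation (40) is a Lyapunov equation in the unknown P_1^o(t), and the integral formula above is its explicit solution (the integral converges since Λ^{1/2}(t) is positive definite in case I). The following sketch, with illustrative placeholder data for Λ^{1/2}(t), its derivative, and A(t), computes P_1^o(t) with a standard Lyapunov solver and verifies the residual.

```python
# Solving the Lyapunov equation behind (40)-(41) at a fixed time instant.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

sqrt_Lam = np.diag([1.0, 1.5])                 # Lambda^{1/2}(t), illustrative
dsqrt_Lam_dt = np.diag([0.1, -0.05])           # d Lambda^{1/2}(t)/dt, illustrative
A = np.array([[0.0, 1.0], [-1.0, 0.3]])        # illustrative A(t)

Q = dsqrt_Lam_dt + sqrt_Lam @ A + A.T @ sqrt_Lam
# solve_continuous_lyapunov(a, q) solves a X + X a^H = q; here a = Lambda^{1/2}(t) > 0.
P1o = solve_continuous_lyapunov(sqrt_Lam, Q)

assert np.allclose(sqrt_Lam @ P1o + P1o @ sqrt_Lam, Q)
print(P1o)
```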

6.2.4. Justification of the Asymptotic Solution to the Problem (26)

Similarly to the results of [13] (Lemma 4.2), we have the following lemma.
Lemma 3.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, there exists a positive number ε_10 such that, for all ε ∈ (0, ε_10], the terminal-value problem (26) has the unique solution P(t, ε) in the entire interval [0, t_f]. This solution satisfies the inequality
‖P(t, ε) − P_1(t, ε)‖ ≤ a_10 ε², t ∈ [0, t_f],
where P 1 ( t , ε ) is given by (32); a 10 > 0 is some constant independent of ε.

6.3. Asymptotic Solution of the Terminal-Value Problem (27)

Like the problem (26), the problem (27) is a singularly perturbed terminal-value problem. Based on the Boundary Functions Method [23], we look for the first-order asymptotic solution of (27) in the form
p_1(t, ε) = p_0^o(t) + p_0^b(τ) + ε [p_1^o(t) + p_1^b(τ)],
where the variable τ is given by (33); the terms in (42) have the same meaning as the corresponding terms in (32). These terms are obtained by substituting p 1 ( t , ε ) and P 1 ( t , ε ) into the problem (27) instead of p ( t , ε ) and P ( t , ε ) , respectively, and equating the coefficients for the same power of ε on both sides of the resulting equations, separately depending on t and on τ .

6.3.1. Obtaining the Outer Solution Term p 0 o ( t )

For this term, we have the following linear algebraic equation:
0 = P 0 o ( t ) p 0 o ( t ) , t [ 0 , t f ] .
Since P 0 o ( t ) is an invertible matrix for all t [ 0 , t f ] (see Equations (22) and (35)), Equation (43) yields
p 0 o ( t ) 0 , t [ 0 , t f ] .

6.3.2. Obtaining the Boundary Correction p 0 b ( τ )

Taking into account Equations (33) and (44), we directly obtain the following terminal-value problem for p 0 b ( τ ) :
d p 0 b ( τ ) d τ = P 0 o ( t f ) + P 0 b ( τ ) p 0 b ( τ ) , τ 0 , p 0 b ( 0 ) = 0 ,
yielding
p 0 b ( τ ) 0 , τ 0 .

6.3.3. Obtaining the Outer Solution Term p 1 o ( t )

Using Equation (44), we have (similarly to (43)) the linear algebraic equation for p 1 o ( t )
0 = P 0 o ( t ) p 1 o ( t ) 2 P 0 o ( t ) f ( t ) , t [ 0 , t f ] .
Since P 0 o ( t ) is an invertible matrix for all t [ 0 , t f ] ,
p 1 o ( t ) = 2 f ( t ) , t [ 0 , t f ] .

6.3.4. Obtaining the Boundary Correction p 1 b ( τ )

Using Equations (33), (44), (46), and (47), we derive (similarly to Equation (45)) the following terminal-value problem for p 1 b ( τ ) :
d p 1 b ( τ ) d τ = P 0 o ( t f ) + P 0 b ( τ ) p 1 b ( τ ) , τ 0 , p 1 b ( 0 ) = 2 f ( t f ) .
Solving the problem (48), we have
p 1 b ( τ ) = 2 Φ ( 0 , τ ) f ( t f ) , τ 0 ,
where the matrix-valued function Φ ( σ , τ ) is given by (A5).
Thus, using Equations (A5), (A6), and (49), we obtain after a routine algebra
p 1 b ( τ ) = 4 exp Λ 1 / 2 ( t f ) τ Θ 1 ( τ ) f ( t f ) , τ 0 ,
which yields the inequality
p 1 b ( τ ) b exp ( β τ ) , τ 0 .
In this inequality, b > 0 is some constant; the constant β is given by (39).
Thus, p 1 b ( τ ) is an exponentially decaying function for τ → − ∞ .

6.3.5. Justification of the Asymptotic Solution to the Problem (27)

Using Equations (42), (44), and (46), we can rewrite the vector-valued function p 1 ( t , ε ) as
$$p_1(t,\varepsilon) = \varepsilon\bigl[p_1^o(t) + p_1^b(\tau)\bigr].$$
Lemma 4.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, for all ε ( 0 , ε 10 ] ( ε 10 > 0 is introduced in Lemma 3), the terminal-value problem (27) has the unique solution p ( t , ε ) in the entire interval [ 0 , t f ] . Moreover, there exists a positive number ε 20 ε 10 such that, for all ε ( 0 , ε 20 ] , this solution satisfies the inequality
$$\bigl\|p(t,\varepsilon) - p_1(t,\varepsilon)\bigr\| \le b_{10}\varepsilon^2, \quad t \in [0, t_f],$$
where p 1 ( t , ε ) is given by (52); b 10 > 0 is some constant independent of ε.
Proof. 
First of all, let us note that the existence and the uniqueness of the solution to the problem (27) for all ε ( 0 , ε 10 ] directly follow from its linearity and from the existence and the uniqueness of the solution to the problem (26) (see Lemma 3).
Proceed to the proof of the inequality (53). Let us transform the state variable in the problem (27)
p ( t , ε ) = p 1 ( t , ε ) + δ p ( t , ε ) , t [ 0 , t f ] , ε ( 0 , ε 10 ] ,
where δ p ( t , ε ) is a new state variable.
The transformation (54) converts the problem (27) to the equivalent terminal-value problem with respect to δ p ( t , ε )
ε d δ p ( t , ε ) d t = P 1 ( t , ε ) δ p ( t , ε ) + g 1 δ p ( t , ε ) , t , ε + g 2 ( t , ε ) + g 3 ( t , ε ) , t [ 0 , t f ] , δ p ( t f , ε ) = 0 ,
where P 1 ( t , ε ) is given by (32), τ is given by (33),
g 1 δ p ( t , ε ) , t , ε = ε A T ( t ) Δ P ( t , ε ) + ε 2 P ( t , ε ) S v ( t ) δ p ( t , ε ) , g 2 ( t , ε ) = ε 2 d p 1 o ( t ) d t A T ( t ) p 1 ( t , ε ) , g 3 ( t , ε ) = ε d p 1 b ( τ ) d τ + P ( t , ε ) I n ε 2 S v ( t ) p 1 o ( t ) + p 1 b ( τ ) 2 P ( t , ε ) f ( t ) , Δ P ( t , ε ) = P ( t , ε ) P 1 ( t , ε ) .
Using Lemma 3, as well as Equations (47), (50) and (52), we directly obtain the following estimates of g 1 δ p ( t , ε ) , t , ε and g 2 ( t , ε ) for all ε ( 0 , ε 10 ] :
g 1 δ p ( t , ε ) , t , ε b 1 ε δ p ( t , ε ) , t [ 0 , t f ] , g 2 ( t , ε ) b 2 ε 2 , t [ 0 , t f ] ,
where b 1 > 0 and b 2 > 0 are some constants independent of ε .
Now, let us estimate g 3 ( t , ε ) . Using Lemma 3, as well as Equations (47), (48), (50), and (52) and the inequalities (38) and (51), we have for all ε ( 0 , ε 10 ]
g 3 ( t , ε ) ε 3 P ( t , ε ) S v ( t ) p 1 o ( t ) + p 1 b ( τ ) + ε d p 1 b ( τ ) d τ + P ( t , ε ) p 1 o ( t ) + p 1 b ( τ ) 2 P ( t , ε ) f ( t ) ε 3 P ( t , ε ) S v ( t ) p 1 o ( t ) + p 1 b ( τ ) + ε P 0 o ( t ) P 0 o ( t f ) p 1 b ( τ ) + ε 2 P 1 o ( t ) + P 1 b ( τ ) p 1 o ( t ) + p 1 b ( τ ) 2 f ( t ) + ε Δ P ( t , ε ) p 1 o ( t ) + p 1 b ( τ ) 2 f ( t ) , t [ 0 , t f ] .
To complete the estimate of g 3 ( t , ε ) , one has to estimate the expression P 0 o ( t ) P 0 o ( t f ) p 1 b ( τ ) . Using the smoothness of P 0 o ( t ) (see Remark 8) and Equation (33), we obtain for any t [ 0 , t f ]
P 0 o ( t ) P 0 o ( t f ) = P 0 o ( t f + ε τ ) P 0 o ( t f ) = ε τ d P 0 o ( χ ) d χ | χ = t 1 ( t ) , t 1 ( t ) [ t , t f ] .
The latter, along with the boundedness of d P 0 o ( t ) d t in the interval [ 0 , t f ] and the inequality (51), yields
P 0 o ( t ) P 0 o ( t f ) p 1 b ( τ ) = ε d P 0 o ( s ) d s | s = t 1 ( t ) τ p 1 b ( τ ) b ¯ 3 ε , t [ 0 , t f ] , ε ( 0 , ε 10 ] ,
where b ¯ 3 > 0 is some constant independent of ε .
Thus, the inequalities (57) and (58) and Lemma 3 imply immediately
g 3 ( t , ε ) b 3 ε 2 , t [ 0 , t f ] , ε ( 0 , ε 10 ] ,
where b 3 > 0 is some constant independent of ε .
The problem (55) can be rewritten in the equivalent integral form as
δ p ( t , ε ) = 1 ε t f t Ω ( t , σ , ε ) g 1 δ p ( σ , ε ) , σ , ε + g 2 ( σ , ε ) + g 3 ( σ , ε ) d σ ,
where for any given σ [ t , t f ] and ε ( 0 , ε 10 ] , the n × n -matrix-valued function Ω ( t , σ , ε ) is the unique solution of the terminal-value problem
ε d Ω ( t , σ , ε ) d t = P 1 ( t , ε ) Ω ( t , σ , ε ) , t [ 0 , σ ] , Ω ( σ , σ , ε ) = I n .
Based on the results of [59] and using the inequalities in (22) and Equation (35), we obtain the following estimate of Ω ( t , σ , ε ) for all 0 t σ t f :
Ω ( t , σ , ε ) b Ω exp β Ω ( t σ ) / ε , ε ( 0 , ε ¯ 20 ] ,
where 0 < ε ¯ 20 ε 10 is some sufficiently small number; b Ω > 0 and β Ω > 0 are some constants independent of ε .
Applying the method of successive approximations to Equation (60), we construct the following sequence of the vector-valued functions δ p , α ( t , ε ) α = 0 + :
δ p , α + 1 ( t , ε ) = 1 ε t f t Ω ( t , σ , ε ) g 1 δ p , α ( σ , ε ) , σ , ε + g 2 ( σ , ε ) + g 3 ( σ , ε ) d σ , α = 0 , 1 , , t [ 0 , t f ] , ε ( 0 , ε ¯ 20 ] , δ p , 0 ( t , ε ) 0 .
Using the inequalities (56), (59), and (61), we obtain the existence of a positive number ε 20 ε ¯ 20 such that, for any ε ( 0 , ε 20 ] , the sequence δ p , α ( t , ε ) α = 0 + converges in the linear space of all n-dimensional vector-valued functions continuous in the interval [ 0 , t f ] . Furthermore, the following inequalities are fulfilled:
δ p , α ( t , ε ) b 10 ε 2 , α = 1 , 2 , , t [ 0 , t f ] , ε ( 0 , ε 20 ] ,
where b 10 > 0 is some constant independent of ε .
Due to the aforementioned convergence of the sequence δ p , α ( t , ε ) α = 0 + , its limit δ p ( t , ε ) = lim α + δ p , α ( t , ε ) is, for all ε ( 0 , ε 20 ] , the solution of the integral Equation (60) and, therefore, of the terminal-value problem (55) in the entire interval [ 0 , t f ] . Since the problem (55) is linear, its solution δ p ( t , ε ) is unique. Moreover, by virtue of the inequalities in (62), we directly have
$$\bigl\|\delta_p(t,\varepsilon)\bigr\| \le b_{10}\varepsilon^2, \quad t \in [0, t_f], \quad \varepsilon \in (0, \varepsilon_{20}].$$
Finally, this inequality, along with Equation (54), yields the inequality (53), which completes the proof of the lemma. □
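The successive approximations used in the proof can be illustrated by a toy scalar analogue of Equation (60); all data below are assumed for illustration only (the coefficient plays the role of the Lipschitz factor of g 1 , the forcing plays the role of g 2 + g 3 , and the kernel mimics the estimate (61)). The iterates converge geometrically and remain of order ε 2 , in line with the inequalities (62).

import numpy as np
from scipy.integrate import trapezoid

eps, t_f, N = 0.05, 1.0, 1001
t = np.linspace(0.0, t_f, N)
a = 1.0 + 0.5 * np.sin(t)                            # bounded coefficient (assumed)
g = eps**2 * np.cos(t)                               # O(eps^2) forcing (assumed)
omega = np.exp((t[:, None] - t[None, :]) / eps)      # scalar analogue of Omega(t, sigma, eps)

def picard_step(x):
    # x_new(t) = (1/eps) * int_{t_f}^{t} omega(t, sigma) [eps*a(sigma)*x(sigma) + g(sigma)] dsigma
    integrand = omega * (eps * a * x + g)[None, :]
    out = np.empty(N)
    for i in range(N):
        out[i] = -trapezoid(integrand[i, i:], t[i:]) / eps
    return out

x = np.zeros(N)                                      # zeroth approximation
for k in range(1, 6):
    x_new = picard_step(x)
    print(f"iteration {k}: max|x| = {np.max(np.abs(x_new)):.2e}, "
          f"max|change| = {np.max(np.abs(x_new - x)):.2e}")
    x = x_new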

6.4. Asymptotic Solution of the Terminal-Value Problem (28)

Solving the problem (28) and taking into account Lemma 4, we obtain
s ( t , ε ) = t f t 1 4 p T ( σ , ε ) I n ε 2 S v ( σ ) p ( σ , ε ) ε f T ( σ ) p ( σ , ε ) d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] .
Let us consider the function
s ¯ ( t , ε ) = ε 2 t f t 1 4 p 1 o ( σ ) T p 1 o ( σ ) f T ( σ ) p 1 o ( σ ) d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] .
Using (47), this function can be represented as
s ¯ ( t , ε ) = ε 2 t f t f T ( σ ) f ( σ ) d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] .
Lemma 5.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, for all ε ( 0 , ε 20 ] ( ε 20 > 0 is introduced in Lemma 4), the following inequality is satisfied:
| s ( t , ε ) s ¯ ( t , ε ) | c 10 ε 3 , t [ 0 , t f ] , ε ( 0 , ε 20 ] ,
where c 10 > 0 is some constant independent of ε.
Proof. 
Using Equations (63) and (64) and taking into account Equation (52), we obtain
s ( t , ε ) s ¯ ( t , ε ) = ε 2 t f t [ 1 2 p 1 o ( σ ) T p 1 b ( σ t f ) / ε + 1 4 p 1 b ( σ t f ) / ε T p 1 b ( σ t f ) / ε p 1 T ( σ , ε ) S v ( σ ) p 1 ( σ , ε ) f T ( σ ) p 1 b ( σ t f ) / ε ] d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] .
The latter, along with Equation (52) and the inequality (51), yields the statement of the lemma. □

6.5. Asymptotic Approximation of the CCDG Value

Consider the following value, depending on z 0 :
J app I ( z 0 ) = ε z 0 T P 1 ( 0 , ε ) z 0 + ε z 0 T p 1 ( 0 , ε ) + s ¯ ( 0 , ε ) ,
where P 1 ( t , ε ) , p 1 ( t , ε ) , and s ¯ ( t , ε ) are given by (32), (42), and (64), respectively.
Using Equations (31) and (66), as well as Lemmas 3–5, we directly have the assertion.
Theorem 1.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, for all ε ( 0 , ε 20 ] ( ε 20 > 0 is introduced in Lemma 4), the following inequality is satisfied:
| J ε * ( z 0 ) J app I ( z 0 ) | ε 3 a 10 z 0 2 + b 10 z 0 + c 10 .
Consider the following matrix and vector:
P ¯ 1 ( ε ) = P 0 o ( 0 ) + ε P 1 o ( 0 ) , p ¯ 1 ( ε ) = ε p 1 o ( 0 ) .
Based on this matrix and vector, let us construct the following value, depending on z 0 :
J app , 1 I ( z 0 ) = ε z 0 T P ¯ 1 ( ε ) z 0 + ε z 0 T p ¯ 1 ( ε ) + s ¯ ( 0 , ε ) .
Corollary 2.
Let Assumptions A1-A3 and case I (see Equation (22)) be valid. Then, there exists a positive number ε ¯ 20 ε 20 such that, for all ε ( 0 , ε ¯ 20 ] , the following inequality is satisfied:
| J ε * ( z 0 ) J app , 1 I ( z 0 ) | ε 3 a ¯ 10 z 0 2 + b ¯ 10 z 0 + c 10 ,
where a ¯ 10 > 0 and b ¯ 10 > 0 are some constants independent of ε.
Proof. 
First of all, let us note that, for β > 0 (see Equation (39)) and all sufficiently small ε > 0 , the following inequality is valid:
exp ( β t f / ε ) < ε .
This inequality, along with Equations (32), (33), (52) and (67), the inequalities (38), (A8), and (51) and Lemmas 3 and 4, yields the fulfillment of the inequalities
P ( 0 , ε ) P ¯ 1 ( ε ) a ¯ 10 ε 2 , p ( 0 , ε ) p ¯ 1 ( ε ) b ¯ 10 ε 2 , ε ( 0 , ε ¯ 20 ] ,
where ε ¯ 20 ( 0 , ε 20 ] is some sufficiently small number; a ¯ 10 and b ¯ 10 are some positive numbers independent of ε .
These inequalities and Theorem 1 directly imply the statement of the corollary. □

6.6. Approximate Saddle Point of the CCDG

Consider the following controls of the minimizer and the maximizer, respectively:
u ˜ ε ( z , t ) = 1 ε P 1 ( t , ε ) z 1 2 ε p 1 ( t , ε ) U , v ˜ ε ( z , t ) = ε G v 1 ( t ) C T ( t ) P 1 ( t , ε ) z + ε 2 G v 1 ( t ) C T ( t ) p 1 ( t , ε ) V , ( z , t ) R n × [ 0 , t f ] , ε ( 0 , ε 20 ] ,
where P 1 ( t , ε ) and p 1 ( t , ε ) are given by (32) and (52), respectively.
Remark 9.
The controls u ˜ ε ( z , t ) and v ˜ ε ( z , t ) are obtained from the controls u ε * ( z , t ) and v ε * ( z , t ) (see Equations (29) and (30)) by replacing their P ( t , ε ) with P 1 ( t , ε ) and p ( t , ε ) with p 1 ( t , ε ) .
Due to the linearity of these controls with respect to z R n for any t [ 0 , t f ] , ε ( 0 , ε 20 ] and their continuity with respect to t [ 0 , t f ] for any z R n , ε ( 0 , ε 20 ] , the pair u ˜ ε ( z , t ) , v ˜ ε ( z , t ) is admissible in the CCDG.
Substitution of ( u ( t ) , v ( t ) ) = u ˜ ε z ( t ) , t , v ˜ ε z ( t ) , t into the system (6) and the cost functional (7), as well as using Equation (15) and taking into account the symmetry of the matrix P 1 ( t , ε ) , yields, after routine algebra, the following system and cost functional:
d z ( t ) d t = A ˜ ( t , ε ) z ( t ) + f ˜ ( t , ε ) , t [ 0 , t f ] , z ( 0 ) = z 0 ,
J ˜ ( z 0 ) = 0 t f z T ( t ) Λ ˜ ( t , ε ) z ( t ) + z T ( t ) g ˜ ( t , ε ) + e ˜ ( t , ε ) d t ,
where
A ˜ ( t , ε ) = A ( t ) ε S ( t , ε ) P 1 ( t , ε ) , f ˜ ( t , ε ) = f ( t ) ε 2 S ( t , ε ) p 1 ( t , ε ) , Λ ˜ ( t , ε ) = Λ ( t ) + ε 2 P 1 ( t , ε ) S ( t , ε ) P 1 ( t , ε ) , g ˜ ( t , ε ) = ε 2 P 1 ( t , ε ) S ( t , ε ) p 1 ( t , ε ) , e ˜ ( t , ε ) = ε 2 4 p 1 T ( t , ε ) S ( t , ε ) p 1 ( t , ε ) .
Based on these functions, we construct the following terminal-value problems:
$$\frac{d\tilde L(t,\varepsilon)}{dt} = -\tilde L(t,\varepsilon)\tilde A(t,\varepsilon) - \tilde A^T(t,\varepsilon)\tilde L(t,\varepsilon) - \tilde\Lambda(t,\varepsilon), \quad \tilde L(t,\varepsilon) \in \mathbb{R}^{n\times n}, \quad t \in [0, t_f], \quad \tilde L(t_f,\varepsilon) = 0,$$
$$\frac{d\tilde\eta(t,\varepsilon)}{dt} = -\tilde A^T(t,\varepsilon)\tilde\eta(t,\varepsilon) - 2\tilde L(t,\varepsilon)\tilde f(t,\varepsilon) - \tilde g(t,\varepsilon), \quad \tilde\eta(t,\varepsilon) \in \mathbb{R}^{n}, \quad t \in [0, t_f], \quad \tilde\eta(t_f,\varepsilon) = 0,$$
$$\frac{d\tilde\kappa(t,\varepsilon)}{dt} = -\tilde f^T(t,\varepsilon)\tilde\eta(t,\varepsilon), \quad \tilde\kappa(t,\varepsilon) \in \mathbb{R}, \quad t \in [0, t_f], \quad \tilde\kappa(t_f,\varepsilon) = \int_0^{t_f}\tilde e(\sigma,\varepsilon)\,d\sigma,$$
where ε ( 0 , ε 20 ] .
Remark 10.
Due to the linearity, the problem (73) has the unique solution L ˜ ( t , ε ) in the entire interval [ 0 , t f ] for all ε ( 0 , ε 20 ] . Therefore, the problems (74) and (75) also have the unique solutions η ˜ ( t , ε ) and κ ˜ ( t , ε ) , respectively, in the entire interval [ 0 , t f ] for all ε ( 0 , ε 20 ] .
Lemma 6.
The value J ˜ ( z 0 ) , given by Equations (70) and (71), can be represented in the form
J ˜ ( z 0 ) = z 0 T L ˜ ( 0 , ε ) z 0 + z 0 T η ˜ ( 0 , ε ) + κ ˜ ( 0 , ε ) , ε ( 0 , ε 20 ] .
Proof. 
First, let us calculate the value J ˜ ( z 0 ) using its definition, i.e., Equations (70) and (71).
Solving the initial-value problem (70), we obtain
z ( t ) = z ( t , ε ) = Γ ( t , 0 , ε ) z 0 + 0 t Γ 1 ( σ , 0 , ε ) f ˜ ( σ , ε ) d σ , t [ 0 , t f ] , ε ( 0 , ε 20 ] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε 20 ] , the matrix-valued function Γ ( σ , t , ε ) is the unique solution of the terminal-value problem
d Γ ( σ , t , ε ) d σ = A ˜ ( σ , ε ) Γ ( σ , t , ε ) , σ [ t , t f ] , Γ ( t , t , ε ) = I n .
Based on this definition of the function Γ ( σ , t , ε ) , the function Γ ( t , 0 , ε ) can be represented as
Γ ( t , 0 , ε ) = Υ ( 0 , ε ) Υ 1 ( t , ε ) T , t [ 0 , t f ] ,
where the n × n -matrix-valued function Υ ( t , ε ) is the unique solution of the terminal-value problem
d Υ ( t , ε ) d t = A ˜ T ( t , ε ) Υ ( t , ε ) , t [ 0 , t f ] , Υ ( t f , ε ) = I n .
Substituting (79) into (77), we obtain
z ( t , ε ) = Υ 1 ( t , ε ) T Υ T ( 0 , ε ) z 0 + 0 t Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ , t [ 0 , t f ] .
Substitution of this expression for z ( t , ε ) into (71) yields, after a routine rearrangement, the following expression for J ˜ ( z 0 ) :
J ˜ ( z 0 ) = z 0 T H ˜ 1 ( ε ) z 0 + z 0 T H ˜ 2 ( ε ) + H ˜ 3 ( ε ) , H ˜ 1 ( ε ) = Υ ( 0 , ε ) 0 t f Υ 1 ( t , ε ) D ˜ ( t , ε ) Υ 1 ( t , ε ) T d t Υ T ( 0 , ε ) , H ˜ 2 ( ε ) = 2 Υ ( 0 , ε ) 0 t f Υ 1 ( t , ε ) D ˜ ( t , ε ) Υ 1 ( t , ε ) T 0 t Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ d t + Υ ( 0 , ε ) 0 t f Υ 1 ( t , ε ) g ˜ ( t , ε ) d t , H ˜ 3 ( ε ) = 0 t f [ 0 t f ˜ T ( σ , ε ) Υ ( σ , ε ) d σ Υ 1 ( t , ε ) D ˜ ( t , ε ) Υ 1 ( t , ε ) T × 0 t Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ ] d t + 0 t f 0 t f ˜ T ( σ , ε ) Υ ( σ , ε ) d σ Υ 1 ( t , ε ) g ˜ ( t , ε ) d t + 0 t f e ˜ ( t , ε ) d t .
Now, let us calculate the expression on the right-hand side of Equation (76). To do this, we should solve the terminal-value problems (73)–(75).
Using (80), we obtain the solution of the problem (73) in the form
L ˜ ( t , ε ) = Υ ( t , ε ) t t f Υ 1 ( σ , ε ) D ˜ ( σ , ε ) Υ 1 ( σ , ε ) T d σ Υ T ( t , ε ) , t [ 0 , t f ] .
Substituting this expression for L ˜ ( t , ε ) into (74) and solving the resulting problem yield, after some rearrangement, its solution as
η ˜ ( t , ε ) = Υ ( t , ε ) [ 2 t t f σ t f Υ 1 ( ξ , ε ) D ˜ ( ξ , ε ) Υ 1 ( ξ , ε ) T d ξ Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ + t t f Υ 1 ( σ , ε ) g ˜ ( σ , ε ) d σ ] = Υ ( t , ε ) [ 2 t t f Υ 1 ( ξ , ε ) D ˜ ( ξ , ε ) Υ 1 ( ξ , ε ) T t ξ Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ d ξ + t t f Υ 1 ( σ , ε ) g ˜ ( σ , ε ) d σ ] , t [ 0 , t f ] .
Finally, substituting the above obtained expression for η ˜ ( t , ε ) into (75) and solving the resulting problem, we have its solution in the form
κ ˜ ( t , ε ) = κ ˜ ( t f , ε ) + κ ˜ 1 ( t , ε ) + κ ˜ 2 ( t , ε ) , t [ 0 , t f ] , κ ˜ 1 ( t ) = 2 t t f f ˜ T ( σ , ε ) Υ ( σ , ε ) [ σ t f Υ 1 ( ξ , ε ) D ˜ ( ξ , ε ) Υ 1 ( ξ , ε ) T × σ ξ Υ T ( σ 1 , ε ) f ˜ ( σ 1 , ε ) d σ 1 d ξ ] d σ , κ ˜ 2 ( t ) = t t f f ˜ T ( σ , ε ) Υ ( σ , ε ) σ t f Υ 1 ( σ 1 , ε ) g ˜ ( σ 1 , ε ) d σ 1 d σ .
Let us show that κ ˜ 1 ( t , ε ) can be represented as
κ ˜ 1 ( t , ε ) = t t f [ t ξ f ˜ T ( σ , ε ) Υ ( σ , ε ) d σ Υ 1 ( ξ , ε ) D ˜ ( ξ , ε ) Υ 1 ( ξ , ε ) T × t ξ Υ T ( σ , ε ) f ˜ ( σ , ε ) d σ ] d ξ , t [ 0 , t f ] .
First, we observe that κ ˜ 1 ( t , ε ) , given in (84), and the expression in the right-hand side of (85) become zero at t = t f . Differentiation of κ ˜ 1 ( t , ε ) , given in (84), yields
d κ ˜ 1 ( t , ε ) d t = 2 f ˜ T ( t , ε ) Υ ( t , ε ) t t f Υ 1 ( ξ , ε ) D ˜ ( ξ , ε ) Υ 1 ( ξ , ε ) T × t ξ Υ T ( σ 1 , ε ) f ˜ ( σ 1 , ε ) d σ 1 d ξ , t [ 0 , t f ] .
The same expression is obtained by the differentiation of the function on the right-hand side of (85). This feature, along with the aforementioned observation, immediately yields the validity of (85).
Now, using Equation (81) and Equations (82)–(85), we obtain the following equalities:
L ˜ ( 0 , ε ) = H ˜ 1 ( ε ) , η ˜ ( 0 , ε ) = H ˜ 2 ( ε ) , κ ˜ ( 0 , ε ) = H ˜ 3 ( ε ) , ε ( 0 , ε 20 ] .
These equalities directly yield the statement of the lemma. □
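Lemma 6 can also be checked numerically. The sketch below uses assumed, time-invariant sample matrices in the roles of A ˜ , Λ ˜ , f ˜ , g ˜ , e ˜ (they are not the game's coefficients): the cost (71) is computed by direct simulation of (70) and compared with the right-hand side of (76), obtained by backward integration of (73)–(75).

import numpy as np
from scipy.integrate import solve_ivp

n, t_f = 2, 1.0
A   = np.array([[0.3, 1.0], [-0.5, 0.2]])     # role of \tilde A (assumed)
Lam = np.array([[2.0, 0.3], [0.3, 1.0]])      # role of \tilde Lambda (symmetric, assumed)
f   = np.array([0.4, -0.7])                   # role of \tilde f (assumed)
g   = np.array([0.1, 0.2])                    # role of \tilde g (assumed)
e   = 0.05                                    # role of \tilde e (constant, assumed)
z0  = np.array([1.0, -2.0])

# direct evaluation of (70)-(71): augment the state with the running cost
def fwd(t, y):
    z = y[:n]
    return np.concatenate([A @ z + f, [z @ Lam @ z + z @ g + e]])
J_direct = solve_ivp(fwd, (0.0, t_f), np.concatenate([z0, [0.0]]),
                     rtol=1e-10, atol=1e-12).y[-1, -1]

# backward integration of (73)-(75) from t = t_f down to t = 0
def bwd(t, y):
    L, eta = y[:n*n].reshape(n, n), y[n*n:n*n+n]
    dL   = -L @ A - A.T @ L - Lam
    deta = -A.T @ eta - 2.0 * L @ f - g
    dkap = -f @ eta
    return np.concatenate([dL.ravel(), deta, [dkap]])
y_tf = np.concatenate([np.zeros(n*n + n), [e * t_f]])       # kappa(t_f) = int_0^{t_f} e dt
y0 = solve_ivp(bwd, (t_f, 0.0), y_tf, rtol=1e-10, atol=1e-12).y[:, -1]
L0, eta0, kap0 = y0[:n*n].reshape(n, n), y0[n*n:n*n+n], y0[-1]

J_lemma = z0 @ L0 @ z0 + z0 @ eta0 + kap0                   # right-hand side of (76)
print(abs(J_direct - J_lemma))                              # close to machine precision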
Lemma 7.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, there exists a positive number ε 30 ε 20 ( ε 20 > 0 is introduced in Lemma 4) such that, for all ε ( 0 , ε 30 ] , the following inequality is satisfied:
ε P ( t , ε ) L ˜ ( t , ε ) a L ε 5 , t [ 0 , t f ] ,
where P ( t , ε ) is the solution of the terminal-value problem (26) mentioned in Lemma 3; a L > 0 is some constant independent of ε.
Proof. 
For any ε ( 0 , ε 20 ] , let us consider the matrix-valued function
Δ P L ( t , ε ) = ε P ( t , ε ) L ˜ ( t , ε ) , t [ 0 , t f ] .
Using the problems (26) and (73), we obtain after a routine rearrangement the terminal-value problem for Δ P L ( t , ε )
d Δ P L ( t , ε ) d t = Δ P L ( t , ε ) A ˜ ( t , ε ) A ˜ T ( t , ε ) Δ P L ( t , ε ) + P ( t , ε ) P 1 ( t , ε ) ε 2 S ( t , ε ) P ( t , ε ) P 1 ( t , ε ) , t [ 0 , t f ] , Δ P L ( t f , ε ) = 0 ,
where P 1 ( t , ε ) is given by (32).
Solving the problem (87) and using the results of [60], we have
Δ P L ( t , ε ) = t f t Γ T ( σ , t , ε ) P ( σ , ε ) P 1 ( σ , ε ) ε 2 S ( σ , ε ) P ( σ , ε ) P 1 ( t , ε ) Γ ( σ , t , ε ) d σ , 0 t σ t f , ε ( 0 , ε 20 ] ,
where, for any given t [ 0 , t f ) and ε ( 0 , ε 20 ] , the matrix-valued function Γ ( σ , t , ε ) is the unique solution of the terminal-value problem (78).
Based on the results of [59] and using the inequalities in (22) and Equations (32), (35), and (72), we obtain the following estimate of Γ ( t , σ , ε ) for all 0 t σ t f :
Γ ( σ , t , ε ) b Γ exp β Γ ( t σ ) / ε , ε ( 0 , ε 30 ] ,
where 0 < ε 30 ε 20 is some sufficiently small number; b Γ > 0 and β Γ > 0 are some constants independent of ε .
Using Equations (15) and (88), as well as Lemma 3 and the inequality (89), we directly obtain the inequality
Δ P L ( t , ε ) a L ε 5 , t [ 0 , t f ] , ε ( 0 , ε 30 ] ,
where a L > 0 is some constant independent of ε .
Thus, Equation (86) and the inequality (90) immediately yield the statement of the lemma, which completes its proof. □
Lemma 8.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, for all ε ( 0 , ε 30 ] ( ε 30 > 0 is introduced in Lemma 7), the following inequalities are satisfied:
ε p ( t , ε ) η ˜ ( t , ε ) a η ε 5 , t [ 0 , t f ] ,
| s ( 0 , ε ) κ ˜ ( 0 , ε ) | a κ , 1 ε 4 + a κ , 2 ε 5 , a κ , 1 = 1 4 b 10 2 t f , a κ , 2 = a η 1 4 t f + 0 t f f ( σ ) d σ ,
where p ( t , ε ) and s ( t , ε ) are the solutions of the terminal-value problems (27) and (28) mentioned in Lemma 4 and Lemma 5, respectively; a η > 0 is some constant independent of ε; the constant b 10 > 0 is introduced in Lemma 4.
Proof. 
We start the proof with the inequality (91).
For any ε ( 0 , ε 30 ] , let us consider the vector-valued function
Δ p η ( t , ε ) = ε p ( t , ε ) η ˜ ( t , ε ) , t [ 0 , t f ] .
Using the problems (27) and (74), we obtain, after a routine rearrangement, the terminal-value problem for Δ p η ( t , ε )
d Δ p η ( t , ε ) d t = A ˜ T ( t , ε ) Δ p η ( t , ε ) + 2 L ˜ ( t , ε ) ε P ( t , ε ) f ( t ) + ε P ( t , ε ) L ˜ ( t , ε ) ε S ( t , ε ) p 1 ( t , ε ) + P ( t , ε ) P 1 ( t , ε ) ε 2 S ( t , ε ) p ( t , ε ) p 1 ( t , ε ) , t [ 0 , t f ] , Δ p η ( t f , ε ) = 0 ,
where P 1 ( t , ε ) and p 1 ( t , ε ) are given by (32) and (52), respectively.
Solving the problem (94), we have
Δ p η ( t , ε ) = t f t Γ T ( σ , t , ε ) [ 2 L ˜ ( σ , ε ) ε P ( σ , ε ) f ( σ ) + ε P ( σ , ε ) L ˜ ( σ , ε ) ε S ( σ , ε ) p 1 ( σ , ε ) + P ( σ , ε ) P 1 ( σ , ε ) ε 2 S ( σ , ε ) p ( σ , ε ) p 1 ( σ , ε ) ] d σ , 0 t σ t f , ε ( 0 , ε 30 ] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε 20 ] , the matrix-valued function Γ ( σ , t , ε ) is the unique solution of the terminal-value problem (78).
Using Lemmas 3, 4, and 7 and the inequality (89), we obtain the inequality
Δ p η ( t , ε ) a η ε 5 , t [ 0 , t f ] , ε ( 0 , ε 30 ] ,
where a η > 0 is some constant independent of ε .
The Equation (93) and the inequality (95) directly imply the inequality (91).
Proceed to the proof of the inequality (92). From Equation (18), we have
s ( 0 , ε ) = t f 0 1 4 p T ( σ , ε ) I n ε 2 S v ( σ ) p ( σ , ε ) ε f T ( σ ) p ( σ , ε ) d σ .
Using Equations (15), (72), and (75), we obtain
κ ˜ ( 0 , ε ) = t f 0 [ f T ( σ ) η ˜ ( σ , ε ) 1 2 ε p 1 T ( σ , ε ) I n ε 2 S v ( t ) η ˜ ( σ , ε ) + 1 4 p 1 T ( σ , ε ) I n ε 2 S v ( t ) p 1 ( σ , ε ) ] d σ .
Using these expressions for s ( 0 , ε ) and κ ˜ ( 0 , ε ) , as well as the inequalities (53) and (91), we obtain the following chain of inequalities for all ε ( 0 , ε 30 ] :
| s ( 0 , ε ) κ ˜ ( 0 , ε ) | 0 t f f ( σ ) ε p ( σ , ε ) η ˜ ( σ , ε ) d σ + 1 4 0 t f | p T ( σ , ε ) I n ε 2 S v ( σ ) p ( σ , ε ) 2 ε p 1 T ( σ , ε ) I n ε 2 S v ( σ ) ε p ( σ , ε ) + η ˜ ( σ , ε ) ε p ( σ , ε ) + p 1 T ( σ , ε ) I n ε 2 S v ( σ ) p 1 ( σ , ε ) | d σ a η 0 t f f ( σ ) d σ ε 5 + 1 4 0 t f | p ( σ , ε ) p 1 ( σ , ε ) T I n ε 2 S v ( σ ) p ( σ , ε ) p 1 ( σ , ε ) | d σ + 1 4 0 t f ε p ( σ , ε ) η ˜ ( σ , ε ) d σ a η 0 t f f ( σ ) d σ ε 5 + 1 4 b 10 2 t f ε 4 + 1 4 a η t f ε 5 ,
which directly yields the inequality (92).
Thus, the lemma is proven. □
Theorem 2.
Let Assumptions A1–A3 and case I (see Equation (22)) be valid. Then, for all ε ( 0 , ε 30 ] ( ε 30 > 0 is introduced in Lemma 7), the following inequality is satisfied:
| J ε * ( z 0 ) J ˜ ( z 0 ) | ε 4 a κ , 1 + a L z 0 2 + a η z 0 + a κ , 2 ε .
Proof. 
The statement of the theorem directly follows from Equations (31) and (76) and Lemmas 7 and 8. □
Remark 11.
Due to Theorem 2, the outcome J ˜ ( z 0 ) of the CCDG, generated by the pair of the controls u ˜ ε ( z , t ) , v ˜ ε ( z , t ) , approximates the CCDG value J ε * ( z 0 ) with high accuracy for all sufficiently small ε > 0 . This observation allows us to call the pair u ˜ ε ( z , t ) , v ˜ ε ( z , t ) an approximate saddle point in the CCDG.

7. Asymptotic Solution of the CCDG in Case II

In this section, the game (6) and (7) is analyzed asymptotically for all sufficiently small ε > 0 subject to the fulfillment of the condition (23). This analysis includes the first-order asymptotic solutions of the terminal-value problems for the Riccati matrix differential equation, the linear vector differential equation, and the scalar differential equation. Based on these asymptotic solutions, two kinds of asymptotic approximations of the game value are obtained, and an approximate saddle point is derived.

7.1. Transformation of the Terminal-Value Problems (16)–(18)

As it was mentioned in Section 6.1, due to Equation (15), the differential equations in problems (16)–(18) have the singularities with respect to ε in their right-hand sides for ε = 0 . To remove these singularities, in Section 6.1 the transformations (24) and (25) of the variables in problems (16) and (17) were proposed. These transformations are applicable in case I of the matrix Λ ( t ) (see Remark 2 and Equation (22)). However, for the asymptotic analysis of the problems (16)–(18) in case II (see Equation (23)), we need another transformation allowing us to remove the aforementioned singularities.
Namely, the transformation of the variable in the problem (16) is
K = K ( t , ε ) = ε K ^ 1 ( t , ε ) ε 2 K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) ε 2 K ^ 3 ( t , ε ) , t [ 0 , t f ] ,
where, for all t [ 0 , t f ] and sufficiently small ε > 0 , the matrices K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) and K ^ 3 ( t , ε ) are of the dimensions l × l , l × ( n l ) and ( n l ) × ( n l ) , respectively; K ^ 1 T ( t , ε ) = K ^ 1 ( t , ε ) , K ^ 3 T ( t , ε ) = K ^ 3 ( t , ε ) ; the functions K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) , and K ^ 3 ( t , ε ) are new unknown matrix-valued functions.
The transformation of the variable in the problem (17) is
q = q ( t , ε ) = ε q ^ 1 ( t , ε ) ε q ^ 2 ( t , ε ) , t [ 0 , t f ] ,
where, for all t [ 0 , t f ] and sufficiently small ε > 0 , the vectors q ^ 1 ( t , ε ) and q ^ 2 ( t , ε ) are of the dimensions l and ( n l ) , respectively; the functions q ^ 1 ( t , ε ) and q ^ 2 ( t , ε ) are new unknown vector-valued functions.
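A minimal sketch (illustrative dimensions and random blocks, all assumed) of the bookkeeping behind the transformations (96) and (97): the blocks are scaled by the indicated powers of ε and assembled into the symmetric matrix K ( t , ε ) and the vector q ( t , ε ) .

import numpy as np

l, n, eps = 2, 5, 0.1
rng = np.random.default_rng(0)
K1 = rng.standard_normal((l, l));        K1 = (K1 + K1.T) / 2        # symmetric l x l
K2 = rng.standard_normal((l, n - l))                                 # l x (n - l)
K3 = rng.standard_normal((n - l, n - l)); K3 = (K3 + K3.T) / 2       # symmetric (n-l) x (n-l)
q1, q2 = rng.standard_normal(l), rng.standard_normal(n - l)

K = np.block([[eps * K1,       eps**2 * K2],
              [eps**2 * K2.T,  eps**2 * K3]])        # Eq. (96)
q = np.concatenate([eps * q1, eps * q2])             # Eq. (97)

print(K.shape, q.shape, np.allclose(K, K.T))         # (5, 5) (5,) True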
Let us partition the matrices A ( t ) , S v ( t ) and Λ ( t ) into blocks as follows:
A ( t ) = A 1 ( t ) A 2 ( t ) A 3 ( t ) A 4 ( t ) , S v ( t ) = S v 1 ( t ) S v 2 ( t ) S v 2 T ( t ) S v 3 ( t ) , Λ ( t ) = Λ 1 ( t ) 0 0 0 , t [ 0 , t f ] ,
where the matrices A 1 ( t ) , A 2 ( t ) , A 3 ( t ) , and A 4 ( t ) are of the dimensions l × l , l × ( n l ) , ( n l ) × l , and ( n l ) × ( n l ) , respectively; the matrices S v 1 ( t ) , S v 2 ( t ) , and S v 3 ( t ) are of the dimensions l × l , l × ( n l ) , and ( n l ) × ( n l ) , respectively; S v 1 T ( t ) = S v 1 ( t ) , S v 3 T ( t ) = S v 3 ( t ) ; the matrix Λ 1 ( t ) has the form
Λ 1 ( t ) = diag λ 1 ( t ) , , λ l ( t ) .
Using Equations (15), (96), and (98), we can rewrite the terminal-value problem (16) in the following equivalent form:
ε d K ^ 1 ( t , ε ) d t = ε K ^ 1 ( t , ε ) A 1 ( t ) ε 2 K ^ 2 ( t , ε ) A 3 ( t ) ε A 1 T ( t ) K ^ 1 ( t , ε ) ε 2 A 3 T ( t ) K ^ 2 T ( t , ε ) + K ^ 1 ( t , ε ) 2 ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 1 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) K ^ 1 ( t , ε ) ε 3 K ^ 1 ( t , ε ) S v 2 ( t ) K ^ 2 T ( t , ε ) + ε 2 K ^ 2 ( t , ε ) K ^ 2 T ( t , ε ) ε 4 K ^ 2 ( t , ε ) S v 3 ( t ) K ^ 2 T ( t , ε ) Λ 1 ( t ) , t [ 0 , t f ] , K ^ 1 ( t f , ε ) = 0 ,
ε d K ^ 2 ( t , ε ) d t = K ^ 1 ( t , ε ) A 2 ( t ) ε K ^ 2 ( t , ε ) A 4 ( t ) ε A 1 T ( t ) K ^ 2 ( t , ε ) ε A 3 T ( t ) K ^ 3 ( t , ε ) + K ^ 1 ( t , ε ) K ^ 2 ( t , ε ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 2 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 1 ( t , ε ) S v 2 ( t ) K ^ 3 ( t , ε ) + ε K ^ 2 ( t , ε ) K ^ 3 ( t , ε ) ε 3 K ^ 2 ( t , ε ) S v 3 ( t ) K ^ 3 ( t , ε ) , t [ 0 , t f ] , K ^ 2 ( t f , ε ) = 0 ,
d K ^ 3 ( t , ε ) d t = K ^ 2 T ( t , ε ) A 2 ( t ) K ^ 3 ( t , ε ) A 4 ( t ) A 2 T ( t ) K ^ 2 ( t , ε ) A 4 T ( t ) K ^ 3 ( t , ε ) + K ^ 2 T ( t , ε ) K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) S v 1 ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 3 ( t , ε ) S v 2 T ( t ) K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) S v 2 ( t ) K ^ 3 ( t , ε ) + K ^ 3 ( t , ε ) 2 ε 2 K ^ 3 ( t , ε ) S v 3 ( t ) K ^ 3 ( t , ε ) , t [ 0 , t f ] , K ^ 3 ( t f , ε ) = 0 .
Let us partition the vector f ( t ) into blocks as
f ( t ) = f 1 ( t ) f 2 ( t ) , t [ 0 , t f ] ,
where the vectors f 1 ( t ) and f 2 ( t ) are of the dimensions l and ( n l ) , respectively.
Using Equations (15), (96)–(98), (102), we can rewrite the terminal-value problem (17) in the following equivalent form:
ε d q ^ 1 ( t , ε ) d t = K ^ 1 ( t , ε ) ε A 1 T ( t ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) ε 3 K ^ 2 ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) ε A 3 T ( t ) K ^ 2 ( t , ε ) + ε K ^ 1 ( t , ε ) S v 2 ( t ) + ε 2 K ^ 2 ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) 2 ε K ^ 1 ( t , ε ) f 1 ( t ) 2 ε 2 K ^ 2 ( t , ε ) f 2 ( t ) , t [ 0 , t f ] , q ^ 1 ( t f , ε ) = 0 ,
d q ^ 2 ( t , ε ) d t = A 2 T ( t ) K ^ 2 T ( t , ε ) + ε 2 K ^ 2 T ( t , ε ) S v 1 ( t ) + ε 2 K ^ 3 ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) A 4 T ( t ) K ^ 3 ( t , ε ) + ε 2 K ^ 2 T ( t , ε ) S v 2 ( t ) + ε 2 K ^ 3 ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) 2 ε K ^ 2 T ( t , ε ) f 1 ( t ) 2 ε K ^ 3 ( t , ε ) f 2 ( t ) , t [ 0 , t f ] , q ^ 2 ( t f , ε ) = 0 .
Finally, using Equations (15), (97), (98), and (102), we can rewrite the terminal-value problem (18) in the following equivalent form:
d s ( t , ε ) d s = 1 4 q ^ 1 T ( t , ε ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) q ^ 2 ( t , ε ) ε 2 4 ( q ^ 1 T ( t , ε ) S v 1 ( t ) q ^ 1 ( t , ε ) + q ^ 1 T ( t , ε ) S v 2 ( t ) q ^ 2 ( t , ε ) + q ^ 2 T ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) ) ε f 1 T ( t ) q ^ 1 ( t , ε ) + f 2 T ( t ) q ^ 2 ( t , ε ) , t [ 0 , t f ] , s ( t f , ε ) = 0 .

7.2. Asymptotic Solution of the Terminal-Value Problem (99)–(101)

Similarly to the problem (26), the problem (99)–(101) is also singularly perturbed. However, in contrast with the former, the latter contains both fast and slow state variables. Namely, the state variables K ^ 1 ( t , ε ) and K ^ 2 ( t , ε ) , whose derivatives are multiplied by the small parameter ε > 0 , are fast state variables, while the state variable K ^ 3 ( t , ε ) is a slow state variable.
Similarly to (32) and (33), we look for the first-order asymptotic solution of (99)–(101) in the form
K ^ i , 1 ( t , ε ) = K ^ i , 0 o ( t ) + K ^ i , 0 b ( τ ) + ε K ^ i , 1 o ( t ) + K ^ i , 1 b ( τ ) , τ = t t f ε , i = 1 , 2 , 3 .
The terms in (106) have the same meaning as the corresponding terms in (32). These terms are obtained by substituting K ^ i , 1 ( t , ε ) into the problem (99)–(101) instead of K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) and equating the coefficients for the same power of ε on both sides of the resulting equations, separately depending on t and on τ .

7.2.1. Obtaining the Boundary Correction K ^ 3 , 0 b ( τ )

This boundary correction satisfies the equation
d K ^ 3 , 0 b ( τ ) d τ = 0 , τ 0 .
To obtain a unique solution of this equation, we need an additional condition on K ^ 3 , 0 b ( τ ) . By virtue of the Boundary Function Method [23], such a condition is K ^ 3 , 0 b ( τ ) → 0 for τ → − ∞ . Subject to this condition, Equation (107) yields the solution
K ^ 3 , 0 b ( τ ) = 0 , τ 0 .

7.2.2. Obtaining the Outer Solution Terms K ^ 1 , 0 o ( t ) , K ^ 2 , 0 o ( t ) , K ^ 3 , 0 o ( t )

For these terms, we have the following equations in the time interval [ 0 , t f ] :
$$0 = \bigl(\hat K_{1,0}^o(t)\bigr)^2 - \Lambda_1(t),$$
$$0 = -\hat K_{1,0}^o(t)A_2(t) + \hat K_{1,0}^o(t)\hat K_{2,0}^o(t),$$
$$\frac{d\hat K_{3,0}^o(t)}{dt} = -\bigl(\hat K_{2,0}^o(t)\bigr)^T A_2(t) - \hat K_{3,0}^o(t)A_4(t) - A_2^T(t)\hat K_{2,0}^o(t) - A_4^T(t)\hat K_{3,0}^o(t) + \bigl(\hat K_{2,0}^o(t)\bigr)^T\hat K_{2,0}^o(t) + \bigl(\hat K_{3,0}^o(t)\bigr)^2, \quad \hat K_{3,0}^o(t_f) = 0.$$
The Equation (109) yields the solution
K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) = diag λ 1 1 / 2 ( t ) , , λ l 1 / 2 ( t ) , t [ 0 , t f ] .
Solving Equation (110) and taking into account the invertibility of K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) for all t [ 0 , t f ] , we obtain
K ^ 2 , 0 o ( t ) = A 2 ( t ) , t [ 0 , t f ] .
Substitution of (113) into (111) yields the following terminal-value problem with respect to K ^ 3 , 0 o ( t ) :
$$\frac{d\hat K_{3,0}^o(t)}{dt} = -\hat K_{3,0}^o(t)A_4(t) - A_4^T(t)\hat K_{3,0}^o(t) + \bigl(\hat K_{3,0}^o(t)\bigr)^2 - A_2^T(t)A_2(t), \quad t \in [0, t_f], \quad \hat K_{3,0}^o(t_f) = 0.$$
Since A 2 T ( t ) A 2 ( t ) is a positive semi-definite (possibly positive definite) matrix for all t [ 0 , t f ] , by virtue of the results of [61], the problem (114) has the unique solution K ^ 3 , 0 o ( t ) in the entire interval [ 0 , t f ] .
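For illustration, the terminal-value problem (114) can be integrated backward in time by any standard ODE solver; the matrices below are assumed constant sample data. The run also shows that, for this data, the solution stays symmetric and bounded on the whole interval, in agreement with the existence claim above.

import numpy as np
from scipy.integrate import solve_ivp

t_f = 1.0
A2 = np.array([[1.0, 0.0], [0.5, 1.0]])      # assumed sample A_2 (so A_2^T A_2 > 0)
A4 = np.array([[0.2, -1.0], [1.0, 0.1]])     # assumed sample A_4
Q  = A2.T @ A2

def riccati(t, k):
    K = k.reshape(2, 2)
    # Eq. (114): dK/dt = -K A_4 - A_4^T K + K^2 - A_2^T A_2
    return (-K @ A4 - A4.T @ K + K @ K - Q).ravel()

sol = solve_ivp(riccati, (t_f, 0.0), np.zeros(4), rtol=1e-10, atol=1e-12, dense_output=True)
K30_0 = sol.sol(0.0).reshape(2, 2)
print(np.allclose(K30_0, K30_0.T, atol=1e-8))    # the solution remains symmetric
print(np.linalg.eigvalsh(K30_0))                 # finite (here nonnegative) eigenvalues at t = 0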

7.2.3. Obtaining the Boundary Corrections K ^ 1 , 0 b ( τ ) and K ^ 2 , 0 b ( τ )

Using Equations (112) and (113), we derive the following terminal-value problem for these corrections:
$$\frac{d\hat K_{1,0}^b(\tau)}{d\tau} = \hat K_{1,0}^b(\tau)\Lambda_1^{1/2}(t_f) + \Lambda_1^{1/2}(t_f)\hat K_{1,0}^b(\tau) + \bigl(\hat K_{1,0}^b(\tau)\bigr)^2, \quad \tau \le 0, \qquad \hat K_{1,0}^b(0) = -\hat K_{1,0}^o(t_f) = -\Lambda_1^{1/2}(t_f),$$
$$\frac{d\hat K_{2,0}^b(\tau)}{d\tau} = \bigl[\Lambda_1^{1/2}(t_f) + \hat K_{1,0}^b(\tau)\bigr]\hat K_{2,0}^b(\tau), \quad \tau \le 0, \qquad \hat K_{2,0}^b(0) = -\hat K_{2,0}^o(t_f) = -A_2(t_f).$$
This problem consists of two subproblems, which can be solved consecutively. First, the subproblem (115) is solved. Then, using its solution K ^ 1 , 0 b ( τ ) , the subproblem (116) is solved. The subproblem (115) is a terminal-value problem for a Bernoulli-type matrix differential equation (see, e.g., [57]), yielding the unique solution
$$\hat K_{1,0}^b(\tau) = -2\Lambda_1^{1/2}(t_f)\exp\bigl(2\Lambda_1^{1/2}(t_f)\tau\bigr)\Bigl[I_l + \exp\bigl(2\Lambda_1^{1/2}(t_f)\tau\bigr)\Bigr]^{-1}, \quad \tau \le 0.$$
Substituting (117) into the subproblem of (116) and solving the obtained terminal-value problem with respect to K ^ 2 , 0 b ( τ ) yield
$$\hat K_{2,0}^b(\tau) = -2\exp\bigl(\Lambda_1^{1/2}(t_f)\tau\bigr)\Bigl[I_l + \exp\bigl(2\Lambda_1^{1/2}(t_f)\tau\bigr)\Bigr]^{-1}A_2(t_f), \quad \tau \le 0.$$
Since the matrix Λ 1 1 / 2 ( t f ) is positive definite, the matrix-valued functions K ^ 1 , 0 b ( τ ) and K ^ 2 , 0 b ( τ ) are exponentially decaying, i.e.,
$$\bigl\|\hat K_{1,0}^b(\tau)\bigr\| \le \hat a_{1,0}\exp(2\hat\beta\tau), \qquad \bigl\|\hat K_{2,0}^b(\tau)\bigr\| \le \hat a_{2,0}\exp(\hat\beta\tau), \quad \tau \le 0,$$
where a ^ 1 , 0 > 0 and a ^ 2 , 0 > 0 are some constants;
$$\hat\beta = \min_{i\in\{1,\ldots,l\}} \lambda_i^{1/2}(t_f) > 0.$$
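The following sketch (assumed sample data) checks that the closed form (118) solves the layer problem (116) once (117) is substituted for K ^ 1 , 0 b ( τ ) , and illustrates the decay rates (119).

import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

lam1_tf = np.array([1.0, 4.0])                        # assumed eigenvalues of Lambda_1(t_f), l = 2
M = np.diag(np.sqrt(lam1_tf))                         # Lambda_1^{1/2}(t_f)
A2_tf = np.array([[1.0, -0.5, 2.0], [0.0, 1.5, 0.3]]) # assumed A_2(t_f), size l x (n - l)
Il = np.eye(2)

def K1b(tau):                                         # Eq. (117)
    E = expm(2.0 * M * tau)
    return -2.0 * M @ E @ np.linalg.inv(Il + E)

def K2b_closed(tau):                                  # Eq. (118)
    return -2.0 * expm(M * tau) @ np.linalg.inv(Il + expm(2.0 * M * tau)) @ A2_tf

def rhs(tau, y):                                      # Eq. (116)
    return ((M + K1b(tau)) @ y.reshape(2, 3)).ravel()

sol = solve_ivp(rhs, (0.0, -6.0), (-A2_tf).ravel(), rtol=1e-10, atol=1e-12, dense_output=True)
for tau in (-1.0, -3.0, -6.0):
    dev = np.linalg.norm(sol.sol(tau).reshape(2, 3) - K2b_closed(tau))
    print(f"tau = {tau:5.1f}: |ODE - (118)| = {dev:.1e}, ||K2b|| = {np.linalg.norm(K2b_closed(tau)):.1e}")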

7.2.4. Obtaining the Boundary Correction K ^ 3 , 1 b ( τ )

Using Equations (108) and (113) yields, after routine algebra, the equation for this boundary correction
d K ^ 3 , 1 b ( τ ) d τ = K ^ 2 , 0 b ( τ ) T K ^ 2 , 0 b ( τ ) , τ 0 .
Substituting (118) into (121) and taking into account the diagonal form of the matrix Λ 1 1 / 2 ( t ) , we obtain the following differential equation for K ^ 3 , 1 b ( τ ) :
d K ^ 3 , 1 b ( τ ) d τ = 4 A 2 T ( t f ) exp 2 Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 2 A 2 ( t f ) , τ 0 .
The solution of this equation with an unknown value of K ^ 3 , 1 b ( 0 ) is
K ^ 3 , 1 b ( τ ) = K ^ 3 , 1 b ( 0 ) + A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) 2 A 2 T ( t f ) Λ 1 1 / 2 ( t f ) I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 A 2 ( t f ) , τ 0 ,
where Λ 1 − 1 / 2 ( t f ) denotes the inverse of the matrix Λ 1 1 / 2 ( t f ) .
Due to the Boundary Function Method [23], we choose the unknown matrix K ^ 3 , 1 b ( 0 ) such that K ^ 3 , 1 b ( τ ) → 0 for τ → − ∞ . Thus, using (122) and taking into account the positive definiteness of the matrix Λ 1 1 / 2 ( t f ) , we have
lim τ K ^ 3 , 1 b ( τ ) = K ^ 3 , 1 b ( 0 ) A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) = 0 ,
implying
K ^ 3 , 1 b ( 0 ) = A 2 T ( t f ) Λ 1 1 / 2 ( t f ) A 2 ( t f ) .
The latter, along with Equation (122), yields after a routine rearrangement
K ^ 3 , 1 b ( τ ) = 2 A 2 T ( t f ) Λ 1 1 / 2 ( t f ) exp 2 Λ 1 1 / 2 ( t f ) τ I l + exp 2 Λ 1 1 / 2 ( t f ) τ 1 A 2 ( t f ) ,
where τ 0 .
Since Λ 1 1 / 2 ( t f ) is a positive definite matrix, the matrix-valued function K ^ 3 , 1 b ( τ ) exponentially decays for τ → − ∞ , i.e.,
K ^ 3 , 1 b ( τ ) a ^ 3 exp ( 2 β ^ τ ) , τ 0 ,
where a ^ 3 > 0 is some constant; the constant β ^ > 0 is given by (120).

7.2.5. Obtaining the Outer Solution Terms K ^ 1 , 1 o ( t ) , K ^ 2 , 1 o ( t ) , K ^ 3 , 1 o ( t )

Using Equations (112), (113) and (123), we have the following equations for these terms in the time interval [ 0 , t f ] :
$$\frac{d\Lambda_1^{1/2}(t)}{dt} = -\Lambda_1^{1/2}(t)A_1(t) - A_1^T(t)\Lambda_1^{1/2}(t) + \Lambda_1^{1/2}(t)\hat K_{1,1}^o(t) + \hat K_{1,1}^o(t)\Lambda_1^{1/2}(t),$$
$$\frac{dA_2(t)}{dt} = -A_2(t)A_4(t) - A_1^T(t)A_2(t) - A_3^T(t)\hat K_{3,0}^o(t) + \Lambda_1^{1/2}(t)\hat K_{2,1}^o(t) + A_2(t)\hat K_{3,0}^o(t),$$
$$\frac{d\hat K_{3,1}^o(t)}{dt} = \hat K_{3,1}^o(t)\bigl[\hat K_{3,0}^o(t) - A_4(t)\bigr] + \bigl[\hat K_{3,0}^o(t) - A_4^T(t)\bigr]\hat K_{3,1}^o(t), \quad \hat K_{3,1}^o(t_f) = -\hat K_{3,1}^b(0) = -A_2^T(t_f)\Lambda_1^{-1/2}(t_f)A_2(t_f).$$
Using the results of [58] and taking into account Equation (23), we obtain the solution of Equation (125)
K ^ 1 , 1 o ( t ) = 0 + exp Λ 1 1 / 2 ( t ) ξ [ d Λ 1 1 / 2 ( t ) d t + Λ 1 1 / 2 ( t ) A 1 ( t ) + A 1 T ( t ) Λ 1 1 / 2 ( t ) ] exp Λ 1 1 / 2 ( t ) ξ d ξ , t [ 0 , t f ] .
Furthermore, taking into account the invertibility of the matrix Λ 1 1 / 2 ( t ) for all t [ 0 , t f ] , we obtain the solution of the Equation (126)
K ^ 2 , 1 o ( t ) = Λ 1 1 / 2 ( t ) [ d A 2 ( t ) d t + A 2 ( t ) A 4 ( t ) + A 1 T ( t ) A 2 ( t ) + A 3 T ( t ) K ^ 3 , 0 o ( t ) A 2 ( t ) K ^ 3 , 0 o ( t ) ] , t [ 0 , t f ] .
Finally, solving the problem (127), we obtain
$$\hat K_{3,1}^o(t) = -\hat\Phi(t)A_2^T(t_f)\Lambda_1^{-1/2}(t_f)A_2(t_f)\hat\Phi^T(t), \quad t \in [0, t_f],$$
where the matrix-valued function Φ ^ ( t ) satisfies the terminal-value problem
$$\frac{d\hat\Phi(t)}{dt} = \bigl[\hat K_{3,0}^o(t) - A_4^T(t)\bigr]\hat\Phi(t), \quad t \in [0, t_f], \quad \hat\Phi(t_f) = I_{n-l}.$$
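For illustration, the representation (130) can be checked numerically: with assumed constant sample matrices, K ^ 3 , 0 o ( t ) is obtained by backward integration of (114), Φ ^ ( t ) by backward integration of (131), and the finite-difference residual of (127) for the resulting K ^ 3 , 1 o ( t ) is evaluated at a few interior points.

import numpy as np
from scipy.integrate import solve_ivp

t_f = 1.0
A2 = np.array([[1.0, 0.3], [0.0, 1.2]])            # assumed A_2 (constant)
A4 = np.array([[0.1, -0.8], [0.9, 0.2]])           # assumed A_4 (constant)
Lam1_inv_sqrt = np.diag(1.0 / np.sqrt([1.0, 4.0])) # assumed Lambda_1^{-1/2}(t_f)
C_tf = A2.T @ Lam1_inv_sqrt @ A2                   # A_2^T(t_f) Lambda_1^{-1/2}(t_f) A_2(t_f)

def ric(t, k):                                     # Eq. (114)
    K = k.reshape(2, 2)
    return (-K @ A4 - A4.T @ K + K @ K - A2.T @ A2).ravel()
K30 = solve_ivp(ric, (t_f, 0.0), np.zeros(4), dense_output=True, rtol=1e-10, atol=1e-12)

def phi_rhs(t, p):                                 # Eq. (131)
    return ((K30.sol(t).reshape(2, 2) - A4.T) @ p.reshape(2, 2)).ravel()
Phi = solve_ivp(phi_rhs, (t_f, 0.0), np.eye(2).ravel(), dense_output=True, rtol=1e-10, atol=1e-12)

def K31(t):                                        # Eq. (130)
    P = Phi.sol(t).reshape(2, 2)
    return -P @ C_tf @ P.T

h = 1e-5
for t in (0.2, 0.5, 0.8):                          # residual of (127) via central differences
    dK = (K31(t + h) - K31(t - h)) / (2.0 * h)
    K3o = K30.sol(t).reshape(2, 2)
    res = dK - (K31(t) @ (K3o - A4) + (K3o - A4).T @ K31(t))
    print(f"t = {t:.1f}: residual of (127) = {np.linalg.norm(res):.1e}")
print(np.linalg.norm(K31(t_f) + C_tf))             # terminal condition of (127): ~0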
To complete the construction of the asymptotic solution to the problem (99)–(101), we should derive the boundary corrections K ^ 1 , 1 b ( τ ) and K ^ 2 , 1 b ( τ ) . This technically complicated derivation is presented in Appendix B.

7.2.6. Justification of the Asymptotic Solution to the Problem (99)–(101)

Lemma 9.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, there exists a positive number ε ^ 10 such that, for all ε ( 0 , ε ^ 10 ] , the problem (99)–(101) has the unique solution K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) , in the entire interval [ 0 , t f ] . This solution satisfies the inequalities ‖ K ^ i ( t , ε ) − K ^ i , 1 ( t , ε ) ‖ ≤ a ^ K , i ε 2 , t [ 0 , t f ] , where K ^ i , 1 ( t , ε ) , ( i = 1 , 2 , 3 ) , are given by (106); a ^ K , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε.
Proof. 
Let us transform variables in the problem (99)–(101)
K ^ i ( t , ε ) = K ^ i , 1 ( t , ε ) + Δ K , i ( t , ε ) , i = 1 , 2 , 3 ,
where Δ K , i ( t , ε ) , ( i = 1 , 2 , 3 ) are new unknown matrix-valued functions.
Consider the block-form matrix-valued function
Δ K ( t , ε ) = ε Δ K , 1 ( t , ε ) ε 2 Δ K , 2 ( t , ε ) ε 2 Δ K , 2 T ( t , ε ) ε 2 Δ K , 3 ( t , ε ) .
Substituting (132) into the problem (99)–(101), using the equations for the outer solution terms and boundary corrections (see (108)–(111), (115) and (116), (121), (125)–(127), (A9) and (A18)), and using the expressions for the matrices S ( t , ε ) , K ( t , ε ) , A ( t ) , S v ( t ) , Λ ( t ) (see Equations (15), (96), and (98)) yields, after routine algebra, the terminal-value problem for Δ K ( t , ε )
d Δ K ( t , ε ) d t = Δ K ( t , ε ) A ^ K ( t , ε ) A ^ K T ( t , ε ) Δ K ( t , ε ) + Δ K ( t , ε ) S ( t , ε ) Δ K ( t , ε ) D ^ K ( t , ε ) , t [ 0 , t f ] , Δ K ( t f , ε ) = 0 ,
where
A ^ K ( t , ε ) = A ( t ) S ( t , ε ) K ^ 1 ( t , ε ) , K ^ 1 ( t , ε ) = ε K ^ 1 , 1 ( t , ε ) ε 2 K ^ 2 , 1 ( t , ε ) ε 2 K ^ 2 , 1 T ( t , ε ) ε 2 K ^ 3 , 1 ( t , ε ) ;
the matrix-valued function D ^ K ( t , ε ) is expressed in a known form by the matrix-valued functions K ^ 1 ( t , ε ) , S u ( ε ) , and S v ( t ) ; for any ε > 0 , D ^ K ( t , ε ) is a continuous function of t [ 0 , t f ] ; for any t [ 0 , t f ] and ε > 0 , the matrix D ^ K ( t , ε ) is symmetric.
Let us represent the matrix D ^ K ( t , ε ) in the block form as
D ^ K ( t , ε ) = D ^ K , 1 ( t , ε ) D ^ K , 2 ( t , ε ) D ^ K , 2 T ( t , ε ) D ^ K , 3 ( t , ε ) ,
where the dimensions of the blocks are the same as the dimensions of the corresponding blocks in the matrix K ^ 1 ( t , ε ) .
Using the inequalities (119), (124), (A17), and (A22), we obtain the following estimates for the blocks of D ^ K ( t , ε ) :
$$\bigl\|\hat D_{K,1}(t,\varepsilon)\bigr\| \le \hat b_{D,1}\varepsilon^2, \qquad \bigl\|\hat D_{K,2}(t,\varepsilon)\bigr\| \le \hat b_{D,2}\varepsilon^3, \qquad \bigl\|\hat D_{K,3}(t,\varepsilon)\bigr\| \le \hat b_{D,3}\varepsilon^3\bigl(\varepsilon + \exp((\hat\beta/2)\tau)\bigr), \quad \tau = (t - t_f)/\varepsilon, \quad t \in [0, t_f], \quad \varepsilon \in (0, \hat\varepsilon_D],$$
where b ^ D , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε ; the constant β ^ > 0 is given by (120); ε ^ D > 0 is some sufficiently small number.
Due to the results of [60], we can rewrite the terminal-value problem (134) in the equivalent integral form
Δ K ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) [ Δ K ( σ , ε ) S ( σ , ε ) Δ K ( σ , ε ) D ^ K ( σ , ε ) ] Ω ^ ( σ , t , ε ) d σ , t [ 0 , t f ] , ε ( 0 , ε ^ D ] ,
where, for any given t [ 0 , t f ] and ε ( 0 , ε ^ D ] , the n × n -matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the problem
d Ω ^ ( σ , t , ε ) d σ = A ^ K ( σ , ε ) Ω ^ ( σ , t , ε ) , σ [ t , t f ] , Ω ^ ( t , t , ε ) = I n .
Let Ω ^ 1 ( σ , t , ε ) , Ω ^ 2 ( σ , t , ε ) , Ω ^ 3 ( σ , t , ε ) and Ω ^ 4 ( σ , t , ε ) be the upper left-hand, upper right-hand, lower left-hand, and lower right-hand blocks of the matrix Ω ^ ( σ , t , ε ) of the dimensions l × l , l × ( n l ) , ( n l ) × l and ( n l ) × ( n l ) , respectively. By virtue of the results of [59], we have the following estimates of these blocks for all 0 t σ t f :
Ω ^ 1 ( σ , t , ε ) b Ω [ ε + exp β ^ ( t σ ) / ε ] , Ω ^ k ( σ , t , ε ) b Ω ε , k = 2 , 3 , Ω ^ 4 ( σ , t , ε ) b Ω , ε ( 0 , ε ^ Ω ] ,
where b Ω > 0 is some constant independent of ε ; ε ^ Ω > 0 is some sufficiently small number.
Applying the method of successive approximations to Equation (137), let us consider the sequence of the matrix-valued functions Δ K j ( t , ε ) j = 0 + given as
Δ K j + 1 ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) [ Δ K j ( σ , ε ) S ( σ , ε ) Δ K j ( σ , ε ) D ^ K ( σ , ε ) ] Ω ^ ( σ , t , ε ) d σ , j = 0 , 1 , , t [ 0 , t f ] , ε ( 0 , ε 1 ] ,
where Δ K 0 ( t , ε ) = 0 , t [ 0 , t f ] , ε ( 0 , ε ^ D ] ; the matrices Δ K j ( σ , ε ) have the block form
Δ K j ( σ , ε ) = ε Δ K , 1 j ( t , ε ) ε 2 Δ K , 2 j ( t , ε ) ε 2 Δ K , 2 j ( t , ε ) T ε 2 Δ K , 3 j ( t , ε ) , ( j = 1 , 2 , ) ,
and the dimensions of the blocks in each of these matrices are the same as the dimensions of the corresponding blocks in (133).
Using the block representations of all the matrices appearing in Equation (140), as well as using the inequalities (136) and (139), we obtain the existence of a positive number ε ^ 10 min { ε ^ D , ε ^ Ω } such that for any ε ( 0 , ε ^ 10 ] the sequence Δ K j ( t , ε ) j = 0 + converges in the linear space of all n × n -matrix-valued functions continuous in the interval [ 0 , t f ] . Moreover, the following inequalities are fulfilled:
Δ K , i j ( t , ε ) a ^ K , i ε 2 , i = 1 , 2 , 3 , j = 1 , 2 , , t [ 0 , t f ] ,
where a ^ K , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε and j.
Thus, for any ε ( 0 , ε ^ 10 ] ,
Δ K * ( t , ε ) = lim j + Δ K j ( t , ε )
is a solution of Equation (137) and, therefore, of the problem (134) in the entire interval [ 0 , t f ] . Moreover, this solution has the block form similar to (133) and satisfies the inequalities
Δ K , i * ( t , ε ) a ^ K , i ε 2 , i = 1 , 2 , 3 , t [ 0 , t f ] .
Since the right-hand side of the differential equation in the problem (134) is smooth w.r.t. Δ K uniformly in t [ 0 , t f ] , this problem cannot have more than one solution. Therefore, Δ K * ( t , ε ) defined by (141) is the unique solution of the problem (134). This observation, along with Equation (132) and the inequalities in (142), proves the lemma. □

7.2.7. Comparison of the Asymptotic Solutions to the Terminal-Value Problem (16) in Cases I and II

Comparing the asymptotic solutions of the problem (16) in cases I and II, we can observe the following.
In case I, the problem (16) is reduced to the singularly perturbed terminal-value problem with only a fast state variable (see Equation (26)). This feature yields the uniform algorithm of constructing the entire matrix asymptotic solution and a similar form for all its entries. In particular, the outer solution terms are obtained from algebraic (not differential) equations. The zero-order (with respect to ε ) boundary corrections appear in all the entries of the asymptotic solution.
In case II (in contrast with case I), the problem (16) is reduced to the singularly perturbed terminal-value problem with two types of state variables: two fast matrix state variables and one slow matrix state variable (see Equations (99)–(101)). In this case, the outer solution terms, corresponding to the slow state variable, are obtained from differential equations. The zero-order (with respect to ε ) boundary correction, corresponding to the slow state variable, equals zero. The outer solution terms, corresponding to the fast state variables, are obtained from algebraic equations. The zero-order boundary corrections, corresponding to these state variables, are not zero.
The aforementioned observation means a considerable difference in the derivation and the form of the asymptotic solutions to the problem (16) in cases I and II.
Remark 12.
For the particular block form of the matrix A ( t ) , where A 2 ( t ) 0 , A 3 ( t ) 0 , A 4 ( t ) 0 , the second and third components of the solution to the problem (99)–(101) become identically zero, i.e., K ^ 2 ( t , ε ) 0 and K ^ 3 ( t , ε ) 0 . Hence, the problem (99)–(101) is reduced to the much simpler terminal-value problem
ε d K ^ 1 ( t , ε ) d t = ε K ^ 1 ( t , ε ) A 1 ( t ) ε A 1 T ( t ) K ^ 1 ( t , ε ) + K ^ 1 ( t , ε ) 2 ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) K ^ 1 ( t , ε ) Λ 1 ( t ) , t [ 0 , t f ] , K ^ 1 ( t f , ε ) = 0 .
This problem has the same form as the terminal-value problem (26). The asymptotic solution of the latter can be obtained from the asymptotics of K ^ 1 ( t , ε ) by replacing A 1 ( t ) with A ( t ) and Λ 1 ( t ) with Λ ( t ) . However, for the sake of better readability of the paper (including a clearer explanation of the differences between the asymptotic analysis in case I and in case II), we present case I as a separate case with the proper details.

7.3. Asymptotic Solution of the Terminal-Value Problem (103) and (104)

Similarly to the problem (27), the problem (103) and (104) is also singularly perturbed. However, in contrast with (27), the problem (103) and (104) contains not only a fast state variable but also a slow one. Namely, the state variable q ^ 1 ( t , ε ) is a fast state variable, while the state variable q ^ 2 ( t , ε ) is a slow state variable.
Similarly to (42), we look for the first-order asymptotic solution of (103) and (104) in the form
q ^ k , 1 ( t , ε ) = q ^ k , 0 o ( t ) + q ^ k , 0 b ( τ ) + ε q ^ k , 1 o ( t ) + q ^ k , 1 b ( τ ) , τ = t t f ε , k = 1 , 2 .
The terms in (143) have the same meaning as the corresponding terms in (42). These terms are obtained by substituting q ^ k , 1 ( t , ε ) and K ^ i , 1 ( t , ε ) into the problem (103) and (104) instead of q ^ k ( t , ε ) , ( k = 1 , 2 ) , and K ^ i ( t , ε ) , ( i = 1 , 2 , 3 ) , and equating the coefficients for the same power of ε on both sides of the resulting equations, separately depending on t and on τ .

7.3.1. Obtaining the Boundary Correction q ^ 2 , 0 b ( τ )

This boundary correction satisfies the equation
d q ^ 2 , 0 b ( τ ) d τ = 0 , τ 0 ,
which, subject to the condition lim τ q ^ 2 , 0 b ( τ ) = 0 , yields the solution
q ^ 2 , 0 b ( τ ) = 0 , τ 0 .

7.3.2. Obtaining the Outer Solution Terms q ^ 1 , 0 o ( t ) and q ^ 2 , 0 o ( t )

Taking into account Equation (113), these terms satisfy the following equations:
0 = K ^ 1 , 0 o ( t ) q ^ 1 , 0 o ( t ) , t [ 0 , t f ] ,
d q ^ 2 , 0 o ( t ) d t = A 4 T ( t ) K ^ 3 , 0 o ( t ) q ^ 2 , 0 o ( t ) , t [ 0 , t f ] , q ^ 2 , 0 o ( t f ) = 0 .
Solving these equations and taking into account that K ^ 1 , 0 o ( t ) = Λ 1 1 / 2 ( t ) is an invertible matrix for all t [ 0 , t f ] , we directly have
q ^ 1 , 0 o ( t ) = 0 , q ^ 2 , 0 o ( t ) = 0 , t [ 0 , t f ] .

7.3.3. Obtaining the Boundary Correction q ^ 1 , 0 b ( τ )

Using Equation (145) yields the terminal-value problem for q ^ 1 , 0 b ( τ )
d q ^ 1 , 0 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) q ^ 1 , 0 b ( τ ) , τ 0 , q ^ 1 , 0 b ( 0 ) = 0 ,
implying
q ^ 1 , 0 b ( τ ) = 0 , τ 0 .

7.3.4. Obtaining the Boundary Correction q ^ 2 , 1 b ( τ )

Using Equations (144) and (146), we derive the equation for q ^ 2 , 1 b ( τ )
d q ^ 2 , 1 b ( τ ) d τ = 0 , τ 0 ,
which, subject to the condition lim τ q ^ 2 , 1 b ( τ ) = 0 , yields the solution
q ^ 2 , 1 b ( τ ) = 0 , τ 0 .

7.4. Obtaining the Outer Solution Terms q ^ 1 , 1 o ( t ) and q ^ 2 , 1 o ( t )

Using Equations (113), (145), and (147), we have the equations for q ^ 1 , 1 o ( t ) and q ^ 2 , 1 o ( t )
0 = K ^ 1 , 0 o ( t ) q ^ 1 , 1 o ( t ) 2 K ^ 1 , 0 o ( t ) f 1 ( t ) , t [ 0 , t f ] ,
d q ^ 2 , 1 o ( t ) d t = A 4 T ( t ) K ^ 3 , 0 o ( t ) q ^ 2 , 1 o ( t ) 2 A 2 T ( t ) f 1 ( t ) 2 K ^ 3 , 0 o ( t ) f 2 ( t ) , t [ 0 , t f ] , q ^ 2 , 1 o ( t f ) = 0 .
The Equation (148) yields immediately
q ^ 1 , 1 o ( t ) = 2 f 1 ( t ) , t [ 0 , t f ] .
The terminal-value problem (149) has the unique solution q ^ 2 , 1 o ( t ) in the entire interval [ 0 , t f ]
q ^ 2 , 1 o ( t ) = 2 t f t Φ ^ ( t ) Φ ^ 1 ( σ ) A 2 T ( σ ) f 1 ( σ ) + K ^ 3 , 0 o ( σ ) f 2 ( σ ) d σ ,
where the matrix-valued function Φ ^ ( t ) is given by (131).

7.4.1. Obtaining the Boundary Correction q ^ 1 , 1 b ( τ )

This correction satisfies the following terminal-value problem:
d q ^ 1 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) q ^ 1 , 1 b ( τ ) , τ 0 , q ^ 1 , 1 b ( 0 ) = q ^ 1 , 1 o ( t f ) = 2 f 1 ( t f ) .
The solution to this problem is
q ^ 1 , 1 b ( τ ) = 2 Φ ^ 1 ( 0 , τ ) f 1 ( t f ) , τ 0 ,
where the matrix-valued function Φ ^ 1 ( σ , τ ) is given by (A13)–(A15) and satisfies the inequality (A16). This inequality, along with Equation (152), yields
q ^ 1 , 1 b ( τ ) 2 a ^ Φ , 1 f 1 ( t f ) exp ( β ^ τ ) , τ 0 .

7.4.2. Justification of the Asymptotic Solution to the Problem (103) and (104)

Using Equations (143)–(147), we can rewrite the vector-valued functions q ^ 1 , 1 ( t , ε ) and q ^ 2 , 1 ( t , ε ) as
q ^ 1 , 1 ( t , ε ) = ε q ^ 1 , 1 o ( t ) + q ^ 1 , 1 b ( τ ) , q ^ 2 , 1 ( t , ε ) = ε q ^ 2 , 1 o ( t ) .
Using this equation, as well as Equations (148), (149) and (152) and the inequality (153), we obtain, similarly to Lemma 9, the following assertion.
Lemma 10.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, for all ε ( 0 , ε ^ 10 ] ( ε ^ 10 > 0 is introduced in Lemma 9), the terminal-value problem (103) and (104) has the unique solution col q ^ 1 ( t , ε ) , q ^ 2 ( t , ε ) in the entire interval [ 0 , t f ] . Moreover, there exists a positive number ε ^ 20 ≤ ε ^ 10 such that, for all ε ( 0 , ε ^ 20 ] , this solution satisfies the inequalities ‖ q ^ j ( t , ε ) − q ^ j , 1 ( t , ε ) ‖ ≤ b ^ q , j ε 2 , t [ 0 , t f ] , ( j = 1 , 2 ) , where b ^ q , j > 0 , ( j = 1 , 2 ) , are some constants independent of ε.

7.4.3. Comparison of the Asymptotic Solutions to the Terminal-Value Problem (17) in Cases I and II

Comparing the asymptotic solutions of the problem (17) in cases I and II, we can observe the following.
In case I, the problem (17) is reduced (like the problem (16)) to the singularly perturbed terminal-value problem with only a fast state variable (see Equation (27)). This feature yields the uniform algorithm of constructing the entire vector asymptotic solution and a similar form for all its entries. In particular, the outer solution terms are obtained from algebraic (not differential) equations. The first-order (with respect to ε ) boundary corrections appear in all the entries of the asymptotic solution.
In case II (in contrast with case I), the problem (17) is reduced (like the problem (16)) to the singularly perturbed terminal-value problem with two types of state variables. The transformed problem has one fast vector state variable and one slow vector state variable (see Equations (103) and (104)). In this case, the outer solution terms, corresponding to the slow state variable, are obtained from differential equations. The first-order (with respect to ε ) boundary correction, corresponding to the slow state variable, equals zero. The outer solution terms, corresponding to the fast state variable, are obtained from algebraic equations. The first-order boundary correction, corresponding to this state variable, is not zero.
The aforementioned observation means a considerable difference in the derivation and the form of the asymptotic solutions to the problem (17) in cases I and II.
Remark 13.
Similarly to Remark 12, for the particular block form of the matrix A ( t ) , where A 2 ( t ) 0 , A 3 ( t ) 0 , A 4 ( t ) 0 , the second component of the solution to the problem (103) and (104) becomes identically zero, i.e., q ^ 2 ( t , ε ) 0 . Due to this feature and that K ^ 2 ( t , ε ) 0 , K ^ 3 ( t , ε ) 0 , the problem (103) and (104) is reduced to the much simpler terminal-value problem
ε d q ^ 1 ( t , ε ) d t = K ^ 1 ( t , ε ) ε A 1 T ( t ) ε 2 K ^ 1 ( t , ε ) S v 1 ( t ) q ^ 1 ( t , ε ) 2 ε K ^ 1 ( t , ε ) f 1 ( t ) , t [ 0 , t f ] , q ^ 1 ( t f , ε ) = 0 .
This problem has the same form as the terminal-value problem (27). The asymptotic solution of the latter can be obtained from the asymptotics of q ^ 1 ( t , ε ) by replacing f 1 ( t ) with f ( t ) and Φ ^ 1 ( σ , t ) with Φ ( σ , t ) given in (A5). However, for the sake of better readability of the paper (including a clearer explanation of the differences between the asymptotic analysis in case I and in case II), we present case I as a separate case with the proper details.

7.5. Asymptotic Solution of the Terminal-Value Problem (105)

Solving the problem (105) and taking into account Lemma 10, we obtain
s ( t , ε ) = t f t [ 1 4 q ^ 1 T ( t , ε ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) q ^ 2 ( t , ε ) ε 2 4 ( q ^ 1 T ( t , ε ) S v 1 ( t ) q ^ 1 ( t , ε ) + q ^ 1 T ( t , ε ) S v 2 ( t ) q ^ 2 ( t , ε ) + q ^ 2 T ( t , ε ) S v 2 T ( t ) q ^ 1 ( t , ε ) + q ^ 2 T ( t , ε ) S v 3 ( t ) q ^ 2 ( t , ε ) ) ε f 1 T ( t ) q ^ 1 ( t , ε ) + f 2 T ( t ) q ^ 2 ( t , ε ) ] d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
Let us consider the function
s ^ ( t , ε ) = ε 2 t f t [ 1 4 q ^ 1 , 1 o ( σ ) T q ^ 1 , 1 o ( σ ) + q ^ 2 , 1 o ( σ ) T q ^ 2 , 1 o ( σ ) f 1 T ( σ ) q ^ 1 , 1 o ( σ ) f 2 T ( σ ) q ^ 2 , 1 o ( σ ) ] d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
Using (150), this function can be represented as
s ^ ( t , ε ) = ε 2 t f t 1 4 q ^ 2 , 1 o ( σ ) T q ^ 2 , 1 o ( σ ) f 1 T ( σ ) f 1 ( σ ) f 2 T ( σ ) q ^ 2 , 1 o ( σ ) d σ , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] .
The following assertion is proven similarly to Lemma 5, using Equations (154) and (155) and Lemma 10.
Lemma 11.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, for all ε ( 0 , ε ^ 20 ] ( ε ^ 20 > 0 is introduced in Lemma 10), the following inequality is satisfied:
| s ( t , ε ) s ^ ( t , ε ) | c ^ 10 ε 3 , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] ,
where c ^ 10 > 0 is some constant independent of ε.
Remark 14.
Comparison of Equation (64) and Lemma 5 with Equation (155) and Lemma 11, respectively, directly shows that the asymptotic solutions of the problem (18) in cases I and II considerably differ from each other. However, due to Remarks 12 and 13, if A 2 ( t ) 0 , A 3 ( t ) 0 , A 4 ( t ) 0 , then the asymptotic solution of the problem (18) in case I is obtained from the asymptotic solution of this problem in case II by replacing A 1 ( t ) with A ( t ) and Λ 1 ( t ) with Λ ( t ) .

7.6. Asymptotic Approximation of the CCDG Value

Consider the following value, depending on z 0 :
J app I I ( z 0 ) = z 0 T K ^ 1 ( 0 , ε ) z 0 + z 0 T Q ^ 1 ( 0 , ε ) + s ^ ( 0 , ε ) ,
where K ^ 1 ( t , ε ) is given in (135), s ^ ( t , ε ) is given by (155), and
Q ^ 1 ( t , ε ) = col ε q ^ 1 , 1 ( t , ε ) , ε q ^ 2 , 1 ( t , ε ) .
Using Equations (21), (96), (97), and (157), as well as Lemmas 9–11, we directly have the assertion.
Theorem 3.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, for all ε ( 0 , ε ^ 20 ] ( ε ^ 20 > 0 is introduced in Lemma 10), the following inequality is satisfied:
| J ε * ( z 0 ) J app I I ( z 0 ) | ε 3 a ^ K , 1 z 0 , 1 2 + b ^ q , 1 z 0 , 1 + b ^ q , 2 z 0 , 2 + c ^ 10 + ε 4 2 a ^ K , 2 z 0 , 1 z 0 , 2 + a ^ K , 3 z 0 , 2 2 ,
where z 0 , 1 ∈ R l is the upper block of the vector z 0 , while z 0 , 2 ∈ R n − l is the lower block of the vector z 0 .
Consider the following matrix and vector:
K ¯ 1 ( ε ) = ε K ^ 1 , 0 o ( 0 ) + ε K ^ 1 , 1 o ( 0 ) ε 2 K ^ 2 , 0 o ( 0 ) + ε K ^ 2 , 1 o ( 0 ) ε 2 K ^ 2 , 0 o ( 0 ) + ε K ^ 2 , 1 o ( 0 ) T ε 2 K ^ 3 , 0 o ( 0 ) + ε K ^ 3 , 1 o ( 0 ) , Q ¯ 1 ( ε ) = col ε 2 q ^ 1 , 1 o ( 0 ) , ε 2 q ^ 2 , 1 o ( 0 ) .
Based on this matrix and vector, let us construct the following value, depending on z 0
J app , 1 I I ( z 0 ) = z 0 T K ¯ 1 ( ε ) z 0 + z 0 T Q ¯ 1 ( ε ) + s ^ ( 0 , ε ) .
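A minimal assembly sketch of (158) and (159) (all blocks below are assumed random placeholders evaluated at t = 0 ; ŝ ( 0 , ε ) is taken as an assumed number of order ε 2 , cf. (155)):

import numpy as np

l, n, eps = 2, 4, 0.1
rng = np.random.default_rng(1)
sym = lambda X: (X + X.T) / 2.0
K10, K11 = np.diag([1.0, 2.0]), sym(rng.standard_normal((l, l)))        # \hat K_{1,0}^o(0), \hat K_{1,1}^o(0)
K20, K21 = rng.standard_normal((l, n - l)), rng.standard_normal((l, n - l))
K30, K31 = sym(rng.standard_normal((n - l, n - l))), sym(rng.standard_normal((n - l, n - l)))
q11, q21 = rng.standard_normal(l), rng.standard_normal(n - l)           # \hat q_{1,1}^o(0), \hat q_{2,1}^o(0)
s_hat_0 = -0.3 * eps**2                                                 # assumed value of \hat s(0, eps)
z0 = rng.standard_normal(n)

K_bar = np.block([[eps * (K10 + eps * K11),        eps**2 * (K20 + eps * K21)],
                  [eps**2 * (K20 + eps * K21).T,   eps**2 * (K30 + eps * K31)]])   # Eq. (158)
Q_bar = np.concatenate([eps**2 * q11, eps**2 * q21])                                # Eq. (158)
J_app1_II = z0 @ K_bar @ z0 + z0 @ Q_bar + s_hat_0                                  # Eq. (159)
print(J_app1_II)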
Similarly to Corollary 2, we have the following assertion.
Corollary 3.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, there exists a positive number ε ˇ 20 ε ^ 20 such that, for all ε ( 0 , ε ˇ 20 ] , the following inequality is satisfied:
| J ε * ( z 0 ) J app , 1 I I ( z 0 ) | ε 3 a ˇ K , 1 z 0 , 1 2 + b ˇ q , 1 z 0 , 1 + b ˇ q , 2 z 0 , 2 + c ^ 10 + ε 4 2 a ˇ K , 2 z 0 , 1 z 0 , 2 + a ˇ K , 3 z 0 , 2 2 ,
where a ˇ K , i > 0 , ( i = 1 , 2 , 3 ) and b ˇ q , k > 0 , ( k = 1 , 2 ) are some constants independent of ε.
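In computations, the static approximation $J_{app,1}^{II}(z_0)$ is assembled directly from the values of the outer-solution terms at t = 0. The following minimal Python sketch illustrates this assembly; the inputs K10, K11, K20, K21, K30, K31, q11, q21, s_hat0 stand for $\hat{K}_{1,0}^{o}(0)$, $\hat{K}_{1,1}^{o}(0)$, $\hat{K}_{2,0}^{o}(0)$, $\hat{K}_{2,1}^{o}(0)$, $\hat{K}_{3,0}^{o}(0)$, $\hat{K}_{3,1}^{o}(0)$, $\hat{q}_{1,1}^{o}(0)$, $\hat{q}_{2,1}^{o}(0)$, $\hat{s}(0,\varepsilon)$ and are assumed to be computed beforehand (the names are ours, not the paper's notation); all matrix inputs are 2-D NumPy arrays of the appropriate block dimensions.

```python
import numpy as np

def assemble_static_value(eps, z0, K10, K11, K20, K21, K30, K31, q11, q21, s_hat0):
    """Assemble J_app,1^II(z0) = z0^T Kbar1(eps) z0 + z0^T Qbar1(eps) + s_hat(0, eps),
    with Kbar1 and Qbar1 built blockwise as displayed above."""
    # Upper-left l x l block carries a factor eps; the other blocks carry eps^2.
    K_ul = eps * (K10 + eps * K11)
    K_ur = eps**2 * (K20 + eps * K21)
    K_lr = eps**2 * (K30 + eps * K31)
    Kbar1 = np.block([[K_ul, K_ur], [K_ur.T, K_lr]])
    Qbar1 = np.concatenate([eps**2 * np.atleast_1d(q11), eps**2 * np.atleast_1d(q21)])
    return float(z0 @ Kbar1 @ z0 + z0 @ Qbar1 + s_hat0)
```

In the scalar example of Section 8.2 (l = 1, n − l = 1), the inputs are 1 × 1 arrays, and the routine reduces to the expression for $J_{app,1}^{II}(z_0)$ given there.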

7.7. Approximate Saddle Point of the CCDG

Consider the following controls of the minimizer and the maximizer, respectively:
u ^ ε ( z , t ) = 1 ε 2 K ^ 1 ( t , ε ) z 1 2 ε 2 Q ^ 1 ( t , ε ) U , v ^ ε ( z , t ) = G v 1 ( t ) C T ( t ) K ^ 1 ( t , ε ) z + 1 2 G v 1 ( t ) C T ( t ) Q ^ 1 ( t , ε ) V , ( z , t ) R n × [ 0 , t f ] , ε ( 0 , ε 20 ] .
Remark 15.
The controls u ^ ε ( z , t ) and v ^ ε ( z , t ) are obtained from the controls u ε * ( z , t ) and v ε * ( z , t ) (see Equations (19) and (20)) by replacing K ( t , ε ) with K ^ 1 ( t , ε ) and q ( t , ε ) with Q ^ 1 ( t , ε ) .
Due to the linearity of these controls with respect to z R n for any t [ 0 , t f ] , ε ( 0 , ε 20 ] and their continuity with respect to t [ 0 , t f ] for any z R n , ε ( 0 , ε 20 ] , the pair u ^ ε ( z , t ) , v ^ ε ( z , t ) is admissible in the CCDG.
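For numerical experiments, the pair $(\hat{u}_\varepsilon, \hat{v}_\varepsilon)$ can be generated directly from the approximating gain $\hat{K}_1(t,\varepsilon)$ and shift $\hat{Q}_1(t,\varepsilon)$. The sketch below is one possible implementation; the sign conventions (minimizing u, maximizing v) are the standard linear-quadratic ones assumed here and should be checked against the exact formulas (19) and (20).

```python
import numpy as np

def approx_saddle_point(K1hat, Q1hat, C, Gv_inv, eps):
    """Build the approximate feedback pair (u_hat, v_hat) of Remark 15.

    K1hat(t) returns the n x n matrix \\hat K_1(t, eps); Q1hat(t) returns the n-vector
    \\hat Q_1(t, eps); C(t) and Gv_inv(t) return C(t) and G_v^{-1}(t)."""
    def u_hat(z, t):
        # Minimizer (assumed sign convention): u = -(1/eps^2) (K1hat(t) z + 0.5 Q1hat(t))
        return -(K1hat(t) @ z + 0.5 * Q1hat(t)) / eps**2

    def v_hat(z, t):
        # Maximizer (assumed sign convention): v = Gv^{-1}(t) C^T(t) (K1hat(t) z + 0.5 Q1hat(t))
        return Gv_inv(t) @ C(t).T @ (K1hat(t) @ z + 0.5 * Q1hat(t))

    return u_hat, v_hat
```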
Substituting ( u ( t ) , v ( t ) ) = u ^ ε z ( t ) , t , v ^ ε z ( t ) , t into the system (6) and the cost functional (7), using Equation (15) and taking into account the symmetry of the matrix K ^ 1 ( t , ε ) , we obtain (similarly to (70)–(72)) the following system and cost functional:
d z ( t ) d t = A ^ K ( t , ε ) z ( t ) + f ^ ( t , ε ) , t [ 0 , t f ] , z ( 0 ) = z 0 ,
J ^ ( z 0 ) = 0 t f z T ( t ) Λ ^ ( t , ε ) z ( t ) + z T ( t ) g ^ ( t , ε ) + e ^ ( t , ε ) d t ,
where A ^ K ( t , ε ) is given in (135), and
f ^ ( t , ε ) = f ( t ) 1 2 S ( t , ε ) Q ^ 1 ( t , ε ) , Λ ^ ( t , ε ) = Λ ( t ) + K ^ 1 ( t , ε ) S ( t , ε ) K ^ 1 ( t , ε ) , g ^ ( t , ε ) = K ^ 1 ( t , ε ) S ( t , ε ) Q ^ 1 ( t , ε ) , e ^ ( t , ε ) = 1 4 Q ^ 1 T ( t , ε ) S ( t , ε ) Q ^ 1 ( t , ε ) .
Using these functions, we construct (similarly to (73)–(75)) the following terminal-value problems:
d L ^ ( t , ε ) d t = L ^ ( t , ε ) A ^ K ( t , ε ) A ^ K T ( t , ε ) L ^ ( t , ε ) Λ ^ ( t , ε ) , L ^ ( t , ε ) R n × n , t [ 0 , t f ] , L ^ ( t f , ε ) = 0 ,
d η ^ ( t , ε ) d t = A ^ K T ( t , ε ) η ^ ( t , ε ) 2 L ^ ( t , ε ) f ^ ( t , ε ) g ^ ( t , ε ) , η ^ ( t , ε ) R n , t [ 0 , t f ] , η ^ ( t f , ε ) = 0 ,
d κ ^ ( t , ε ) d t = f ^ T ( t , ε ) η ^ ( t , ε ) , κ ^ ( t , ε ) R , t [ 0 , t f ] , κ ^ ( t f , ε ) = 0 t f e ^ ( σ , ε ) d σ ,
where ε ( 0 , ε ^ 20 ] .
Remark 16.
Due to the linearity, the problem (163) has the unique solution L ^ ( t , ε ) in the entire interval [ 0 , t f ] for all ε ( 0 , ε ^ 20 ] . Therefore, the problems (164) and (165) also have the unique solutions η ^ ( t , ε ) and κ ^ ( t , ε ) , respectively, in the entire interval [ 0 , t f ] for all ε ( 0 , ε ^ 20 ] .
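Numerically, the terminal-value problems (163)–(165) are solved by a single backward sweep in time. A minimal sketch of such a sweep for the matrix problem (163) is given below; the right-hand side is written in the Lyapunov-type form dL/dt = −L A − Aᵀ L − Λ̂, which is an assumption here, so the signs should be taken from (163) itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_matrix_tvp(rhs, tf, L_tf):
    """Solve a matrix terminal-value problem dL/dt = rhs(t, L), L(tf) = L_tf,
    by the substitution s = tf - t, which turns it into an initial-value problem."""
    n = L_tf.shape[0]
    def rhs_reversed(s, y):
        return -rhs(tf - s, y.reshape(n, n)).ravel()   # dL/ds = -dL/dt
    sol = solve_ivp(rhs_reversed, (0.0, tf), L_tf.ravel(),
                    rtol=1e-9, atol=1e-11, dense_output=True)
    return lambda t: sol.sol(tf - t).reshape(n, n)     # L(t) for t in [0, tf]

# Right-hand side of (163), in the assumed Lyapunov-type form; A_K and Lambda_hat are callables.
def rhs_163(A_K, Lambda_hat):
    return lambda t, L: -L @ A_K(t) - A_K(t).T @ L - Lambda_hat(t)
```

The vector problem (164) and the scalar problem (165) are handled by the same backward sweep once L̂(t, ε) is available.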
Similarly to Lemma 6, we have the following assertion.
Lemma 12.
The value J ^ ( z 0 ) , given by Equations (161) and (162), can be represented in the form
J ^ ( z 0 ) = z 0 T L ^ ( 0 , ε ) z 0 + z 0 T η ^ ( 0 , ε ) + κ ^ ( 0 , ε ) , ε ( 0 , ε ^ 20 ] .
Taking into account the symmetry of the matrix L ^ ( t , ε ) , let us represent this matrix in the block form as
L ^ ( t , ε ) = ε L ^ 1 ( t , ε ) ε 2 L ^ 2 ( t , ε ) ε 2 L ^ 2 T ( t , ε ) ε 2 L ^ 3 ( t , ε ) ,
where the matrices $\hat{L}_1(t,\varepsilon)$, $\hat{L}_2(t,\varepsilon)$, and $\hat{L}_3(t,\varepsilon)$ are of the dimensions $l \times l$, $l \times (n-l)$, and $(n-l) \times (n-l)$, respectively; $\hat{L}_1^{T}(t,\varepsilon) = \hat{L}_1(t,\varepsilon)$, $\hat{L}_3^{T}(t,\varepsilon) = \hat{L}_3(t,\varepsilon)$.
Lemma 13.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, there exists a positive number $\hat{\varepsilon}_{30} \le \hat{\varepsilon}_{20}$ ($\hat{\varepsilon}_{20} > 0$ is introduced in Lemma 10) such that, for all $\varepsilon \in (0, \hat{\varepsilon}_{30}]$, the following inequalities are satisfied:
$\| \hat{K}_i(t,\varepsilon) - \hat{L}_i(t,\varepsilon) \| \le \hat{a}_{L,i}\,\varepsilon^{4}, \quad i = 1, 2, 3, \quad t \in [0, t_f],$
where K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) , K ^ 3 ( t , ε ) is the solution of the terminal-value problem (99)–(101); the matrix-valued function K ( t , ε ) , given by (96), is the solution of the terminal-value problem (16); a ^ L , i > 0 , ( i = 1 , 2 , 3 ) , are some constants independent of ε.
Proof. 
For any ε ( 0 , ε ^ 20 ] , let us consider the matrix-valued function
Δ ^ K L ( t , ε ) = K ( t , ε ) L ^ ( t , ε ) , t [ 0 , t f ] .
Similarly to (87), we obtain the terminal-value problem for Δ ^ K L ( t , ε )
d Δ ^ K L ( t , ε ) d t = Δ ^ K L ( t , ε ) A ^ K ( t , ε ) A ^ K T ( t , ε ) Δ ^ K L ( t , ε ) + K ( t , ε ) K ^ 1 ( t , ε ) S ( t , ε ) K ( t , ε ) K ^ 1 ( t , ε ) , t [ 0 , t f ] , Δ ^ K L ( t f , ε ) = 0 ,
where K ^ 1 ( t , ε ) is given in (135).
Solving this terminal-value problem and using the results of [60], we have
Δ ^ K L ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) K ( σ , ε ) K ^ 1 ( σ , ε ) S ( σ , ε ) K ( σ , ε ) K ^ 1 ( t , ε ) Ω ^ ( σ , t , ε ) d σ , 0 t σ t f , ε ( 0 , ε ^ 20 ] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε ^ 20 ] , the matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the terminal-value problem (138); the blocks of the matrix-valued function Ω ^ ( σ , t , ε ) satisfy the inequalities (139).
Using Equations (15), (96), (135), (167), (169), and (170), as well as Lemma 9 and the inequalities (139), we obtain by routine algebra the validity of the inequalities in (168) with $\hat{\varepsilon}_{30} = \min\{\hat{\varepsilon}_{20}, \hat{\varepsilon}_{\Omega}\}$. This completes the proof of the lemma. □
Let us represent the vector η ^ ( t , ε ) in the block form as
η ^ ( t , ε ) = ε η ^ 1 ( t , ε ) ε η ^ 2 ( t , ε ) , t [ 0 , t f ] , ε ( 0 , ε ^ 20 ] ,
where the vectors η ^ 1 ( t , ε ) and η ^ 2 ( t , ε ) are of the dimensions l and ( n l ) , respectively.
Lemma 14.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, for all ε ( 0 , ε ^ 30 ] ( ε ^ 30 > 0 is introduced in Lemma 13), the following inequalities are satisfied:
$\| \hat{q}_j(t,\varepsilon) - \hat{\eta}_j(t,\varepsilon) \| \le \hat{a}_{\eta,j}\,\varepsilon^{4}, \quad j = 1, 2, \quad t \in [0, t_f],$
| s ( 0 , ε ) κ ^ ( 0 , ε ) | a ^ κ , 1 ε 4 + a ^ κ , 2 ε 5 , a ^ κ , 1 = 1 4 b ^ q , 1 2 + b ^ q , 2 2 t f , a ^ κ , 2 = a ^ η , 1 + a ^ η , 2 1 4 t f + 0 t f f ( σ ) d σ .
where col(q̂_1(t,ε), q̂_2(t,ε)) is the solution of the terminal-value problem (103) and (104); the vector-valued function q(t,ε), given by (97), is the solution of the terminal-value problem (17); the scalar function s(t,ε) is the solution of the terminal-value problem (18) and of the equivalent problem (105); â_{η,j} > 0, (j = 1, 2), are some constants independent of ε; the constants b̂_{q,1} > 0 and b̂_{q,2} > 0 are introduced in Lemma 10.
Proof. 
We start the proof with the inequalities (172).
For any ε ( 0 , ε ^ 30 ] , let us consider the vector-valued function
Δ ^ q η ( t , ε ) = q ( t , ε ) η ^ ( t , ε ) , t [ 0 , t f ] .
Similarly to (94), we obtain the terminal-value problem for Δ ^ q η ( t , ε )
d Δ ^ q η ( t , ε ) d t = A ^ K T ( t , ε ) Δ ^ q η ( t , ε ) + 2 L ^ ( t , ε ) K ( t , ε ) f ( t ) + K ( t , ε ) L ^ ( t , ε ) S ( t , ε ) Q ^ 1 ( t , ε ) + K ( t , ε ) K ^ 1 ( t , ε ) S ( t , ε ) q ( t , ε ) Q ^ 1 ( t , ε ) , t [ 0 , t f ] , Δ q η ( t f , ε ) = 0 ,
where K ^ 1 ( t , ε ) and Q ^ 1 ( t , ε ) are given in (135) and (158), respectively.
Solving this terminal-value problem, we have
Δ ^ q η ( t , ε ) = t f t Ω ^ T ( σ , t , ε ) [ 2 L ^ ( σ , ε ) K ( σ , ε ) f ( σ ) + K ( σ , ε ) L ^ ( σ , ε ) S ( σ , ε ) Q ^ 1 ( σ , ε ) + K ( σ , ε ) K ^ 1 ( σ , ε ) S ( σ , ε ) q ( σ , ε ) Q ^ 1 ( σ , ε ) ] d σ , 0 t σ t f , ε ( 0 , ε ^ 30 ] ,
where for any given t [ 0 , t f ) and ε ( 0 , ε ^ 30 ] , the matrix-valued function Ω ^ ( σ , t , ε ) is the unique solution of the terminal-value problem (138); the blocks of the matrix-valued function Ω ^ ( σ , t , ε ) satisfy the inequalities (139).
Using Equations (15), (96), (135), (158), (171), and (174), as well as Lemmas 9, 10, and 13 and the inequalities (139), we obtain by routine algebra the validity of the inequalities in (172).
The inequality (173) is shown similarly to the inequality (92) (see the proof of Lemma 8).
Thus, the lemma is proven. □
Theorem 4.
Let Assumptions A1–A3 and case II (see Equation (23)) be valid. Then, for all ε ( 0 , ε ^ 30 ] ( ε ^ 30 > 0 is introduced in Lemma 13), the following inequality is satisfied:
$| J_\varepsilon^{*}(z_0) - \hat{J}(z_0) | \le \varepsilon^{4}\hat{a}_{\kappa,1} + \varepsilon^{5}\bigl( \hat{a}_{L,1}\| z_{0,1} \|^{2} + \hat{a}_{\eta,1}\| z_{0,1} \| + \hat{a}_{\eta,2}\| z_{0,2} \| + \hat{a}_{\kappa,2} \bigr) + \varepsilon^{6}\bigl( 2\hat{a}_{L,2}\| z_{0,1} \|\,\| z_{0,2} \| + \hat{a}_{L,3}\| z_{0,2} \|^{2} \bigr).$
Proof. 
The statement of the theorem directly follows from Equations (21) and (166), as well as Equations (96), (97), (167) and (171), and Lemmas 13 and 14. □
Remark 17.
Due to Theorem 4, the outcome Ĵ(z_0) of the CCDG, generated by the pair of controls (û_ε(z,t), v̂_ε(z,t)), approximates the CCDG value J*_ε(z_0) with high accuracy for all sufficiently small ε > 0. This observation allows us to call the pair (û_ε(z,t), v̂_ε(z,t)) an approximate saddle point in the CCDG.

8. Example

In this section, a numerical example, illustrating the theoretical results of the previous sections, is presented.
We consider a particular case of CCDG (see (6) and (7)) with the following data:
n = 2 , m = 2 , t f = 2 , Λ ( t ) = Λ = diag ( λ 1 , λ 2 ) , A ( t ) = A = 1 1 3 2 , C ( t ) = C = 4 0 0 4 , G v ( t ) = G v = 8 0 0 8 , f ( t ) = 2 t t , z 0 = 1 1 .
In this example, the symmetric matrix-valued functions K ( t , ε ) and P ( t , ε ) , given by the terminal-value problems (16) and (26), respectively, are of the dimension 2 × 2 . The vector-valued functions q ( t , ε ) and p ( t , ε ) , given by the terminal-value problems (17) and (27), respectively, have the dimension 2.

8.1. Case I of the Matrix Λ

In this subsection, we treat the differential game (6) and (7), (175) in case I (see (22)), i.e., for λ 1 > 0 and λ 2 > 0 . We choose
λ 1 = 9 , λ 2 = 9 .
We start the asymptotic solution of the differential game (6) and (7), (175), (176) with the asymptotic solution of the terminal-value problem for P ( t , ε ) (see Equation (26)).
Using Equations (35), (37), and (41) and the data of the example (175) and (176), we directly have
$P_0^{o}(t) \equiv P_0^{o} = 3 I_2, \qquad P_0^{b}(\tau) = -\frac{6\exp(6\tau)}{1+\exp(6\tau)}\, I_2, \qquad P_1^{o}(t) \equiv P_1^{o} = \begin{pmatrix} 1 & -2 \\ -2 & 2 \end{pmatrix}.$
Proceed to obtaining P 1 b ( τ ) , which is based on Equations (A2), (A4) and (A5) and the data of the example (175) and (176).
Using Equations (A2) and (177), we obtain by a routine matrix calculation that Ψ(τ) ≡ 0. From (A5) and (176), we have $\Phi(\sigma,\tau) = \frac{(1+\exp(6\sigma))\exp(3(\tau-\sigma))}{1+\exp(6\tau)}\, I_2$. Thus, due to (A4),
$P_1^{b}(\tau) = -\frac{4\exp(6\tau)}{(1+\exp(6\tau))^{2}} \begin{pmatrix} 1 & -2 \\ -2 & 2 \end{pmatrix}.$
From the above-derived expressions for P_0^b(τ) and P_1^b(τ), we can see that both matrix-valued functions decay exponentially as τ → −∞.
Based on Equations (32), (177) and (178), let us construct the asymptotic solution P 1 ( t , ε ) of the problem (26) subject to the data (175) and (176). For this purpose, we represent this matrix-valued function in the block form
P 1 ( t , ε ) = P 1 , 11 ( t , ε ) P 1 , 12 ( t , ε ) P 1 , 12 ( t , ε ) P 1 , 22 ( t , ε ) .
The latter yields
$P_{1,11}(t,\varepsilon) = 3 - \frac{6\exp(6\tau)}{1+\exp(6\tau)} + \varepsilon\left( 1 - \frac{4\exp(6\tau)}{(1+\exp(6\tau))^{2}} \right), \qquad P_{1,12}(t,\varepsilon) = \varepsilon\left( -2 + \frac{8\exp(6\tau)}{(1+\exp(6\tau))^{2}} \right), \qquad P_{1,22}(t,\varepsilon) = 3 - \frac{6\exp(6\tau)}{1+\exp(6\tau)} + \varepsilon\left( 2 - \frac{8\exp(6\tau)}{(1+\exp(6\tau))^{2}} \right),$
where τ = ( t t f ) / ε = ( t 2 ) / ε .
In Figure 1, the absolute errors
$\Delta P_{ij}(\varepsilon) = \max_{t \in [0, t_f]} \bigl| P_{ij}(t,\varepsilon) - P_{1,ij}(t,\varepsilon) \bigr|, \quad \{ij\} = \{11\}, \{12\}, \{22\},$
are depicted for ε ∈ [0.015, 0.1] along with their common estimate function 5ε². Figure 1 illustrates Lemma 3.
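The error curves of Figure 1 are obtained by evaluating the exact solution P(t, ε) of (26) and the asymptotic solution P_1(t, ε) on a time grid and taking the maximal entrywise deviation. A minimal sketch, assuming both solutions are available as callables returning 2 × 2 arrays:

```python
import numpy as np

def max_abs_errors(P_exact, P_asym, tf, num=2001):
    """Compute Delta P_ij(eps) = max over t in [0, tf] of |P_ij(t) - P_1,ij(t)|
    for the three independent entries of a symmetric 2x2 matrix-valued function."""
    ts = np.linspace(0.0, tf, num)
    diffs = np.array([np.abs(P_exact(t) - P_asym(t)) for t in ts])   # shape (num, 2, 2)
    return {"11": diffs[:, 0, 0].max(), "12": diffs[:, 0, 1].max(), "22": diffs[:, 1, 1].max()}
```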
Proceed to construction of the asymptotic solution to the terminal-value problem (27) subject to the data (175) and (176).
Using Equations (44), (46), (47), and (50) and Equation (A6), we obtain
$p_0^{o}(t) \equiv 0, \qquad p_0^{b}(\tau) \equiv 0, \qquad p_1^{o}(t) = \begin{pmatrix} 4t \\ 2t \end{pmatrix}, \qquad p_1^{b}(\tau) = -\frac{\exp(3\tau)}{1+\exp(6\tau)} \begin{pmatrix} 16 \\ 8 \end{pmatrix}.$
Using these results, as well as Equation (42) and the block representation of the vector-valued function p 1 ( t , ε )
p 1 ( t , ε ) = p 1 , 1 ( t , ε ) p 1 , 2 ( t , ε ) ,
we obtain
$p_{1,1}(t,\varepsilon) = \varepsilon\left( 4t - \frac{16\exp(3\tau)}{1+\exp(6\tau)} \right), \qquad p_{1,2}(t,\varepsilon) = \varepsilon\left( 2t - \frac{8\exp(3\tau)}{1+\exp(6\tau)} \right),$
where (like in (180)) τ = ( t 2 ) / ε .
In Figure 2, the absolute errors
$\Delta p_1(\varepsilon) = \max_{t \in [0, t_f]} \bigl| p_{up}(t,\varepsilon) - p_{1,1}(t,\varepsilon) \bigr|, \quad \Delta p_2(\varepsilon) = \max_{t \in [0, t_f]} \bigl| p_{low}(t,\varepsilon) - p_{1,2}(t,\varepsilon) \bigr|, \quad p(t,\varepsilon) = \mathrm{col}\bigl( p_{up}(t,\varepsilon), p_{low}(t,\varepsilon) \bigr),$
are depicted for ε ∈ [0.015, 0.1] along with their common estimate function 1.7ε². Figure 2 illustrates Lemma 4.
To complete the construction of the asymptotic solutions to the terminal-value problems associated with the solvability conditions of the CCDG, it remains to construct the asymptotic solution to the problem (28) subject to the data (175) and (176). Using Equation (65), we directly obtain
$\bar{s}(t,\varepsilon) = \frac{5\varepsilon^{2}}{3}\,( 8 - t^{3} ), \quad t \in [0, 2].$
In Figure 3, the absolute error
$\Delta\bar{s}(\varepsilon) = \max_{t \in [0, t_f]} \bigl| s(t,\varepsilon) - \bar{s}(t,\varepsilon) \bigr|,$
is depicted for ε [ 0.015 , 0.1 ] along with the estimate function 6.6 ε 3 . Figure 3 illustrates the inequality (65) in Lemma 5 with c 10 = 6.6 .
Now, using Equations (66) and (68), as well as the equations (179) and (180), (181) and (182), (183) and the data (175), we obtain the following two approximations of the CCDG value:
$J_{app}^{I}(z_0) = 6\varepsilon\left( 1 - \frac{2\exp(-12/\varepsilon)}{1+\exp(-12/\varepsilon)} \right) + \varepsilon^{2}\left( \frac{37}{3} + \frac{4\exp(-12/\varepsilon)}{(1+\exp(-12/\varepsilon))^{2}} - \frac{24\exp(-6/\varepsilon)}{1+\exp(-12/\varepsilon)} \right), \qquad J_{app,1}^{I}(z_0) = 6\varepsilon + \frac{37\varepsilon^{2}}{3}.$
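Both approximations are trivial to evaluate; in particular, the second one reduces to 6ε + 37ε²/3. A small script evaluating these closed-form expressions for the three values of ε used below (the boundary-layer terms are written with the sign conventions assumed above, where the exponentials carry negative arguments at t = 0):

```python
import numpy as np

def J_app1_I(eps):
    # Leading static approximation of the case-I game value for the data (175), (176).
    return 6.0 * eps + 37.0 * eps**2 / 3.0

def J_app_I(eps):
    # Version including the (numerically negligible) boundary-layer corrections at t = 0.
    e12, e6 = np.exp(-12.0 / eps), np.exp(-6.0 / eps)
    return (6.0 * eps * (1.0 - 2.0 * e12 / (1.0 + e12))
            + eps**2 * (37.0 / 3.0 + 4.0 * e12 / (1.0 + e12)**2 - 24.0 * e6 / (1.0 + e12)))

for eps in (0.1, 0.05, 0.015):
    print(f"eps={eps:5.3f}:  J_app,1^I = {J_app1_I(eps):.5f},"
          f"  J_app^I - J_app,1^I = {J_app_I(eps) - J_app1_I(eps):.2e}")
```

For ε = 0.1, 0.05, 0.015 the first printed column gives 0.7233, 0.3308, 0.09278, in agreement with Table 2.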
The components of the approximate saddle point have the form (69), where P 1 ( t , ε ) is given by (179) and (180), p 1 ( t , ε ) is given by (181) and (182), z R 2 , t [ 0 , 2 ] .
The game value J ε * , the values J app I , J app , 1 I and the outcome of the game J ˜ , generated by the approximate saddle point, are shown in Table 2 for ε = 0.1 , 0.05 , 0.015 (the initial state position z 0 is fixed by (175) yielding the simplified notation). Note that J app I and J app , 1 I are not distinguishable because the differences
$J_{app}^{I}(\varepsilon) - J_{app,1}^{I}(\varepsilon) = -\frac{12\varepsilon\exp(-12/\varepsilon)}{1+\exp(-12/\varepsilon)} + \varepsilon^{2}\left( \frac{4\exp(-12/\varepsilon)}{(1+\exp(-12/\varepsilon))^{2}} - \frac{24\exp(-6/\varepsilon)}{1+\exp(-12/\varepsilon)} \right)$
are negligibly small in absolute value ($2.1 \cdot 10^{-27}$, $4.6 \cdot 10^{-54}$, and $1.03 \cdot 10^{-176}$, respectively).
In Table 3, the absolute and the relative errors of the game value approximations in case I
$\Delta J_{app}^{I}(\varepsilon) = \bigl| J_\varepsilon^{*} - J_{app}^{I} \bigr|, \quad \delta J_{app}^{I}(\varepsilon) = \frac{\Delta J_{app}^{I}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%, \quad \Delta J_{app,1}^{I}(\varepsilon) = \bigl| J_\varepsilon^{*} - J_{app,1}^{I} \bigr|, \quad \delta J_{app,1}^{I}(\varepsilon) = \frac{\Delta J_{app,1}^{I}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%, \quad \Delta\tilde{J}(\varepsilon) = \bigl| J_\varepsilon^{*} - \tilde{J} \bigr|, \quad \delta\tilde{J}(\varepsilon) = \frac{\Delta\tilde{J}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%,$
are presented. It is seen that all errors decrease with decreasing ε . The approximation J ˜ , calculated by employing the approximate saddle point controls, is more accurate than J app I and J app , 1 I (whose accuracies are identical). The relative errors are not larger than 0.52 % for J app I and J app , 1 I , and not larger than 0.029 % for J ˜ .
Remark 18.
In addition to the aforementioned, the following should be noted. The values J app I and J app , 1 I are simply obtained by replacing in the expression for the game value (see Equation (31)) the exact values P ( 0 , ε ) , p ( 0 , ε ) , s ( 0 , ε ) with their approximations P 1 ( 0 , ε ) , p 1 ( 0 , ε ) , s ¯ ( 0 , ε ) and P ¯ 1 ( ε ) , p ¯ 1 ( ε ) , s ¯ ( 0 , ε ) , respectively. These approximations of the game value do not take into account the behavior (control) of the players and the dynamics of the game. Therefore, they can be called static (or theoretic) approximations of the game value. In contrast with J app I and J app , 1 I , the value J ˜ is obtained by application of the players’ corresponding components of the approximate saddle point. This application results in a much better approximation of the game value, which is reasonable because obtaining J ˜ takes into account the behavior (control) of the players and the dynamics of the game. The approximation J ˜ can be called a dynamic (or practical) approximation of the game value.

8.2. Case II of the Matrix Λ

In this subsection, we treat the differential game (6) and (7), (175) in case II (see (23)), i.e., for λ 1 > 0 and λ 2 = 0 . We choose
λ 1 = 9 .
We start the asymptotic solution of the differential game (6) and (7), (175), (184) with the asymptotic solution of the terminal-value problem for K ^ 1 ( t , ε ) , K ^ 2 ( t , ε ) , K ^ 3 ( t , ε ) (see Equations (99)–(101)).
Using the equations (108), (112) and (113), we directly have
$\hat{K}_{3,0}^{b}(\tau) \equiv 0, \qquad \hat{K}_{1,0}^{o}(t) \equiv \hat{K}_{1,0}^{o} = 3, \qquad \hat{K}_{2,0}^{o}(t) \equiv \hat{K}_{2,0}^{o} = -1.$
To obtain K ^ 3 , 0 o ( t ) , we should solve the terminal-value problem (111) subject to the data (175) and (184). This problem has the form
d K ^ 3 , 0 o ( t ) d t = 4 K ^ 3 , 0 o ( t ) + K ^ 3 , 0 o ( t ) 2 1 , t [ 0 , 2 ] , K ^ 3 , 0 o ( 2 ) = 0 ,
yielding the unique solution
K ^ 3 , 0 o ( t ) = 2 + 5 + 2 5 10 + 4 5 exp 2 5 ( t 2 ) 1 2 5 1 , t [ 0 , 2 ] .
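The closed-form expression above can be cross-checked by integrating the terminal-value problem (186) backward in time. A minimal sketch, taking the right-hand side as printed in (186) (its signs should be verified against (111)); the constants −2 ± √5 appearing in the closed form are the roots of the corresponding quadratic K² + 4K − 1 = 0:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Terminal-value problem (186) as printed: dK/dt = 4*K + K**2 - 1 on [0, 2], K(2) = 0.
def solve_K30(tf=2.0):
    rhs_reversed = lambda s, K: -(4.0 * K + K**2 - 1.0)   # s = tf - t reverses time
    sol = solve_ivp(rhs_reversed, (0.0, tf), [0.0], rtol=1e-10, atol=1e-12, dense_output=True)
    return lambda t: float(sol.sol(tf - t)[0])

K30 = solve_K30()
print(K30(0.0))   # settles near the attracting root sqrt(5) - 2 of K**2 + 4*K - 1 = 0
```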
Subject to the data (175) and (184), Equations (117), (118) and (122) yield
$\hat{K}_{1,0}^{b}(\tau) = -\frac{6\exp(6\tau)}{1+\exp(6\tau)}, \qquad \hat{K}_{2,0}^{b}(\tau) = \frac{2\exp(3\tau)}{1+\exp(6\tau)}, \qquad \hat{K}_{3,1}^{b}(\tau) = \frac{(2/3)\exp(6\tau)}{1+\exp(6\tau)}.$
Furthermore, using Equations (128)–(131) and the data (175) and (184), we obtain
K ^ 1 , 1 o ( t ) = K ^ 1 , 1 o = 1 , K ^ 2 , 1 o ( t ) = 1 3 3 + 2 K ^ 3 , 0 o ( t ) , K ^ 3 , 1 o ( t ) = 1 3 Φ ^ ( t ) 2 , t [ 0 , 2 ] ,
where Φ ^ ( t ) is the solution of the terminal-value problem
d Φ ^ ( t ) d t = K ^ 3 , 0 o ( t ) 2 Φ ^ ( t ) , t [ 0 , 2 ] , Φ ^ ( 2 ) = 1 .
Finally, using Equations (A10), (A12), (A14), (A15), (A19), and (A21) and taking into account the data (175) and (184) and the equations (187) and (188), we derive K ^ 1 , 1 b ( τ ) and K ^ 2 , 1 b ( τ )
K ^ 1 , 1 b ( τ ) = 4 exp ( 6 τ ) 1 + exp ( 6 τ ) 2 , K ^ 2 , 1 b ( τ ) = exp ( 3 τ ) 1 + exp ( 6 τ ) 4 3 1 + exp ( 6 τ ) + 2 exp ( 3 τ ) 4 τ 2 3 .
Thus, due to Equations (106), (185)–(190), we obtain the asymptotic solution of the problem (99)–(101) subject to the data (175) and (184)
K ^ 1 , 1 ( t , ε ) = 3 6 exp ( 6 τ ) 1 + exp ( 6 τ ) + ε 1 4 exp ( 6 τ ) 1 + exp ( 6 τ ) 2 , K ^ 2 , 1 ( t , ε ) = 1 + 2 exp ( 3 τ ) 1 + exp ( 6 τ ) + ε [ 1 3 3 + 2 K ^ 3 , 0 o ( t ) + exp ( 3 τ ) 1 + exp ( 6 τ ) 4 3 1 + exp ( 6 τ ) + 2 exp ( 3 τ ) 4 τ 2 3 ] , K ^ 3 , 1 ( t , ε ) = 2 + 5 + 2 5 10 + 4 5 exp 2 5 ( t 2 ) 1 2 5 1 + ε 1 3 Φ ^ ( t ) 2 + ( 2 / 3 ) exp ( 6 τ ) 1 + exp ( 6 τ ) ,
where τ = ( t 2 ) / ε .
In Figure 4, the absolute errors
$\Delta\hat{K}_i(\varepsilon) = \max_{t \in [0, t_f]} \bigl| \hat{K}_i(t,\varepsilon) - \hat{K}_{i,1}(t,\varepsilon) \bigr|, \quad i = 1, 2, 3,$
are depicted for ε ∈ [0.015, 0.1] along with their common estimate function 5.75ε². Figure 4 illustrates Lemma 9.
Proceed to construction of the asymptotic solution to the terminal-value problem (103) and (104) subject to the data (175) and (184).
Using Equations (144)–(147) and (150)–(152), we directly have
q ^ 2 , 0 b ( τ ) 0 , q ^ 1 , 0 o ( t ) 0 , q ^ 2 , 0 o ( t ) 0 , q ^ 1 , 0 b ( τ ) 0 , q ^ 2 , 1 b ( τ ) 0 , q ^ 1 , 1 o ( t ) = 4 t , q ^ 2 , 1 o ( t ) = 2 2 t Φ ^ ( t ) Φ ^ ( σ ) 1 2 σ σ K ^ 3 , 0 o ( σ ) d σ , q ^ 1 , 1 b ( τ ) = 16 exp ( 3 τ ) 1 + exp ( 6 τ ) ,
where Φ ^ ( t ) is the solution of the terminal-value problem (189); K ^ 3 , 0 o ( t ) is given by (186).
Thus, due to Equations (143) and (192), we obtain the asymptotic solution of the problem (103) and (104) subject to the data (175) and (184)
q ^ 1 , 1 ( t , ε ) = ε 4 t 16 exp ( 3 τ ) 1 + exp ( 6 τ ) , q ^ 2 , 1 ( t , ε ) = 2 ε 2 t Φ ^ ( t ) Φ ^ ( σ ) 1 2 σ σ K ^ 3 , 0 o ( σ ) d σ ,
where τ = ( t 2 ) / ε .
In Figure 5, the absolute errors
$\Delta\hat{q}_i(\varepsilon) = \max_{t \in [0, t_f]} \bigl| \hat{q}_i(t,\varepsilon) - \hat{q}_{i,1}(t,\varepsilon) \bigr|, \quad i = 1, 2,$
are depicted for ε ∈ [0.015, 0.1] along with their common estimate function 6ε². Figure 5 illustrates Lemma 10.
Using Equation (155) and Equation (192), we obtain the asymptotic solution of the problem (105) subject to the data (175) and (184)
s ^ ( t , ε ) = ε 2 2 t 1 4 q ^ 2 , 1 o ( σ ) 2 4 σ 2 σ q ^ 2 , 1 o ( σ ) d σ , t [ 0 , 2 ] .
In Figure 6, the absolute error
$\Delta\hat{s}(\varepsilon) = \max_{t \in [0, t_f]} \bigl| s(t,\varepsilon) - \hat{s}(t,\varepsilon) \bigr|,$
is depicted for ε [ 0.015 , 0.1 ] along with the estimate function 5.2 ε 3 . Figure 6 illustrates the inequality (156) in Lemma 11 with c ^ 10 = 5.2 .
Now, using Equations (157), (159), (191), (193), and (194) and the data (175), we obtain the following two approximations of the CCDG value:
J app I I ( z 0 ) = ε K ^ 1 , 1 ( 0 , ε ) + 2 ε 2 K ^ 2 , 1 ( 0 , ε ) + ε 2 K ^ 3 , 1 ( 0 , ε ) + ε q ^ 1 , 1 ( 0 , ε ) + ε q ^ 2 , 1 ( 0 , ε ) + s ^ ( 0 , ε ) , J app , 1 I I ( z 0 ) = ε ( 3 + ε ) 2 ε 2 1 + 1 3 ε 3 + 2 K ^ 3 , 0 o ( 0 ) + ε 2 K ^ 3 , 0 o ( 0 ) + ε K ^ 3 , 1 o ( 0 ) + ε 2 q ^ 2 , 1 o ( 0 ) + s ^ ( 0 , ε ) .
Due to Equation (160) and the data (175), the components of the approximate saddle point have the form
u ^ ε ( z , t ) = ( 1 / ε ) K ^ 1 , 1 ( t , ε ) z 1 + K ^ 2 , 1 ( t , ε ) z 2 + ( 1 / 2 ε ) q ^ 1 , 1 ( t , ε ) K ^ 2 , 1 ( t , ε ) z 1 + K ^ 3 , 1 ( t , ε ) z 2 + ( 1 / 2 ε ) q ^ 2 , 1 ( t , ε ) , v ^ ε ( z , t ) = 1 2 ε K ^ 1 , 1 ( t , ε ) z 1 + ε 2 K ^ 2 , 1 ( t , ε ) z 2 + ( ε / 2 ) q ^ 1 , 1 ( t , ε ) ε 2 K ^ 2 , 1 ( t , ε ) z 1 + ε 2 K ^ 3 , 1 ( t , ε ) z 2 + ( ε / 2 ) q ^ 2 , 1 ( t , ε ) ,
where K̂_{i,1}(t,ε), (i = 1, 2, 3), are given in (191); q̂_{k,1}(t,ε), (k = 1, 2), are given in (193); z_1 and z_2 are the upper and lower scalar blocks, respectively, of the state vector z.
The values of J ε * , J app I I , J app , 1 I I and the outcome of the game J ^ , generated by the approximate saddle point, are shown in Table 4 for ε = 0.1 , 0.05 , 0.015 . In this case, the difference between J app I I and J app , 1 I I is also negligible.
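The outcome Ĵ is computed by integrating the closed-loop dynamics under the feedbacks û_ε, v̂_ε and accumulating the running cost. The sketch below shows one way to organize this computation; the dynamics dz/dt = A(t)z + u + C(t)v + f(t) and the integrand zᵀΛ(t)z + ε²‖u‖² − vᵀG_v(t)v are the standard forms assumed here (the exact ones are those of (6) and (7)), and the entries of A and f are taken as printed in (175), so they should be double-checked against the original data before use.

```python
import numpy as np
from scipy.integrate import solve_ivp

def game_outcome(u_fb, v_fb, A, C, Gv, Lam, f, z0, tf, eps):
    """Simulate dz/dt = A(t) z + u + C(t) v + f(t) under state-feedback controls
    and accumulate the assumed cost integrand z^T Lam z + eps^2 ||u||^2 - v^T Gv v."""
    n = len(z0)
    def rhs(t, y):
        z = y[:n]
        u, v = u_fb(z, t), v_fb(z, t)
        dz = A(t) @ z + u + C(t) @ v + f(t)
        dJ = z @ Lam(t) @ z + eps**2 * (u @ u) - v @ Gv(t) @ v
        return np.concatenate([dz, [dJ]])
    sol = solve_ivp(rhs, (0.0, tf), np.concatenate([z0, [0.0]]), rtol=1e-9, atol=1e-11)
    return sol.y[-1, -1]    # accumulated cost at t = tf

# Usage sketch with the case-II data (175), (184) and placeholder (identically zero)
# feedbacks; the actual feedbacks u_hat, v_hat displayed above should be plugged in.
A = lambda t: np.array([[1.0, 1.0], [3.0, 2.0]])    # entries of A as printed in (175)
C = lambda t: 4.0 * np.eye(2)
Gv = lambda t: 8.0 * np.eye(2)
Lam = lambda t: np.diag([9.0, 0.0])
f = lambda t: np.array([2.0 - t, t])                # assumed reading of f(t) in (175)
zero = lambda z, t: np.zeros(2)
print(game_outcome(zero, zero, A, C, Gv, Lam, f, np.array([1.0, 1.0]), 2.0, 0.1))
```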
The absolute and the relative errors of the game value approximations in case II
$\Delta J_{app}^{II}(\varepsilon) = \bigl| J_\varepsilon^{*} - J_{app}^{II} \bigr|, \quad \delta J_{app}^{II}(\varepsilon) = \frac{\Delta J_{app}^{II}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%, \quad \Delta J_{app,1}^{II}(\varepsilon) = \bigl| J_\varepsilon^{*} - J_{app,1}^{II} \bigr|, \quad \delta J_{app,1}^{II}(\varepsilon) = \frac{\Delta J_{app,1}^{II}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%, \quad \Delta\hat{J}(\varepsilon) = \bigl| J_\varepsilon^{*} - \hat{J} \bigr|, \quad \delta\hat{J}(\varepsilon) = \frac{\Delta\hat{J}(\varepsilon)}{J_\varepsilon^{*}} \cdot 100\%,$
are presented in Table 5.
It is seen that all errors decrease with decreasing ε . The approximation J ^ , calculated by employing approximate saddle point controls, is more accurate than J app I I and J app , 1 I I (whose accuracies are identical). The relative errors are not larger than 1.22 % for J app I I and J app , 1 I I , and not larger than 0.27 % for J ^ .
Remark 19.
Similarly to Remark 18, the following should be noted. The values J app I I and J app , 1 I I are simply obtained by replacing in the expression for the game value (see Equation (21)) the exact values K ( 0 , ε ) , q ( 0 , ε ) , s ( 0 , ε ) with their approximations K ^ 1 ( 0 , ε ) , Q ^ 1 ( 0 , ε ) , s ^ ( 0 , ε ) and K ¯ 1 ( ε ) , Q ¯ 1 ( ε ) , s ^ ( 0 , ε ) , respectively. These approximations of the game value do not take into account the behavior (control) of the players and the dynamics of the game. Therefore, they can be called static (or theoretic) approximations of the game value. In contrast with J app I I and J app , 1 I I , the value J ^ is obtained by application of the players’ corresponding components of the approximate saddle point. This application results in a much better approximation of the game value, which is reasonable because obtaining J ^ takes into account the behavior (control) of the players and the dynamics of the game. The approximation J ^ can be called a dynamic (or practical) approximation of the game value.
Remark 20.
The example considered above clearly demonstrates two important advantages of the asymptotic solution of the cheap control game (6) and (7) in both cases I and II. Namely, first, this solution yields the approximate saddle point of the game in an explicit analytical form; and second, this approximate saddle point provides a very accurate approximation of the game value even for not too small values of ε > 0.

9. Conclusions

In this paper, a two-player finite-horizon zero-sum differential game with linear non-homogeneous dynamics and a quadratic cost functional was studied in the case where the control cost of the minimizing player (the minimizer) in the cost functional is much smaller than the state cost and the cost of the control of the maximizing player. This smallness is represented by the presence of the small multiplier ε > 0 in the control cost of the minimizer. Due to this feature of the minimizer's control cost, the considered game is a cheap control game. The dimension of the minimizer's control equals the dimension of the state vector, and the matrix-valued coefficient of the minimizer's control in the differential equation has full rank. This means that the entire state variable is a "fast" one. For this game, the state-feedback saddle point and the value were sought. By proper changes of the state and minimizer's control variables, the initially formulated game was equivalently transformed into a much simpler zero-sum cheap control differential game. In this new game, the matrix-valued coefficient of the minimizer's control in the differential equation is the identity matrix, and the matrix-valued coefficient of the state cost in the integral part of the game's cost functional is diagonal. In the remainder of the paper, this new game was treated as the original cheap control game. The following two cases of the matrix-valued coefficient of the state cost in the integral part of the game's cost functional were treated: (a) all the entries of the main diagonal are positive; (b) only part of the entries are positive, while the rest are zero. In each of these cases, the asymptotic analysis with respect to the small parameter ε > 0 of the state-feedback solution of the original cheap control game was carried out. This analysis includes (i) the first-order asymptotic solutions of the terminal-value problems for the three differential equations appearing in the game's solvability conditions; (ii) the derivation of asymptotic approximations of the game value; and (iii) the derivation of an approximate saddle point. The approaches to this analysis in cases (a) and (b) and the results of this analysis were compared with each other. This comparison clearly shows the essential novelty of case (b) and of its analysis. The analysis of case (b) shows that the assumption of positive definiteness of the quadratic cost of the "fast" state variable in the integral part of the cost functional is not necessary: positive semi-definiteness of this quadratic cost is not an obstacle to the asymptotic analysis of the cheap control game. Along with this, the property of this quadratic cost (positive definiteness or positive semi-definiteness) has a considerable impact on the asymptotic solution of the cheap control game.
As limitations of the results of the present paper, we can mention the following. In the paper, it is assumed that the matrix-valued coefficient of the minimizer's control in the dynamics of the initially formulated game is a square non-singular matrix. The case where this matrix is either non-square or square and singular requires another transformation of the state and minimizer's control variables in the initially formulated game for its simplification. Another limitation is the assumption that all coordinates of the minimizer's control are cheap, i.e., that the game is a completely cheap control one. The case where only a part of the coordinates of the minimizer's control are cheap also requires another transformation of the state and minimizer's control variables in the initially formulated game for its simplification. These limitations directly yield the topics for future work: (i) analysis of a cheap control game where the matrix-valued coefficient of the minimizer's control in the dynamics is either non-square or square and singular; (ii) analysis of a partial cheap control game. In addition, the following topics of future work are of considerable interest: (iii) extensive numerical evaluation of the asymptotic solution of the game, including the computational cost of this solution and its comparison with the computational cost of the direct solution of the game; (iv) obtaining (by extensive computer simulation) an upper bound on the small parameter ε > 0 for which the established theoretical estimate of the error of the game's asymptotic solution holds; (v) discussion of the applicability of the results of the paper, including the study of various real-life problems and the analysis of the influence of the cost structure on their solution, in particular, the analysis of pursuit-evasion problems and supply chain problems. Although these topics are interesting and important for research, their consideration in the present paper would considerably overload it and make it hard to read. Each of the aforementioned topics requires a separate consideration and a separate paper.

Author Contributions

Conceptualization, V.Y.G.; Methodology, V.Y.G.; Software, V.T.; Validation, V.Y.G. and V.T.; Formal analysis, V.Y.G.; Investigation, V.Y.G.; Resources, V.Y.G. and V.T.; Data curation, V.Y.G. and V.T.; Writing—original draft preparation, V.Y.G. and V.T.; Writing—review and editing, V.Y.G. and V.T.; Visualization, V.T.; Supervision, V.Y.G.; Project administration, V.Y.G.; Funding acquisition, this research received no external funding. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Obtaining the Boundary Correction P 1 b (τ)

Using Remark 7 and Equations (33), (35), (37) and (41), we derive (similarly to Equation (36)) the following terminal-value problem for P 1 b ( τ ) :
d P 1 b ( τ ) d τ = P 0 o ( t f ) + P 0 b ( τ ) P 1 b ( τ ) + P 1 b ( τ ) P 0 o ( t f ) + P 0 b ( τ ) + Ψ ( τ ) , τ 0 , P 1 b ( 0 ) = P 1 o ( t f ) ,
where
Ψ ( τ ) = P 0 b ( τ ) A ( t f ) A T ( t f ) P 0 b ( τ ) + τ d P 0 o ( t ) / d t | t = t f P 0 b ( τ ) + τ P 0 b ( τ ) d P 0 o ( t ) / d t | t = t f + P 0 b ( τ ) P 1 o ( t f ) + P 1 o ( t f ) P 0 b ( τ ) .
Due to the inequality (38), the matrix-valued function Ψ ( τ ) is estimated as
$\| \Psi(\tau) \| \le a_1 \exp(\beta\tau), \quad \tau \le 0,$
where a 1 > 0 is some constant; the constant β is given in (39).
Solving the problem (A1) and using the results of [60] and the symmetry of the matrices P 0 o ( t ) , P 0 b ( τ ) , we obtain
P 1 b ( τ ) = Φ ( 0 , τ ) P 1 o ( t f ) Φ ( 0 , τ ) + 0 τ Φ ( σ , τ ) Ψ ( σ ) Φ ( σ , τ ) d σ , τ 0 ,
where, for any τ 0 , the n × n matrix-valued function Φ ( σ , τ ) is the unique solution of the problem
d Φ ( σ , τ ) d σ = P 0 o ( t f ) + P 0 b ( σ ) Φ ( σ , τ ) , σ [ τ , 0 ] , Φ ( τ , τ ) = I n .
Solving this problem and taking into account the expressions for P 0 o ( t ) and P 0 b ( τ ) (see Equations (35) and (37)), we have
$\Phi(\sigma,\tau) = \exp\bigl( \Lambda^{1/2}(t_f)(\tau-\sigma) \bigr)\, \Theta^{-1}(\tau)\, \Theta(\sigma), \quad 0 \ge \sigma \ge \tau > -\infty,$
where
$\Theta(\chi) = I_n + \exp\bigl( 2\Lambda^{1/2}(t_f)\chi \bigr), \quad \chi \le 0.$
The matrix-valued function Φ ( σ , τ ) satisfies the inequality
$\| \Phi(\sigma,\tau) \| \le a_2 \exp\bigl( \beta(\tau-\sigma) \bigr), \quad 0 \ge \sigma \ge \tau > -\infty,$
where a 2 > 0 is some constant; the constant β is given in (39).
Using Equation (A4) and the inequalities (A3) and (A7) yields the following estimate for P 1 b ( τ ) :
$\| P_1^{b}(\tau) \| \le a_2^{2}\, \| P_1^{o}(t_f) \| \exp(2\beta\tau) + \frac{a_1}{\beta}\exp(\beta\tau)\bigl( 1 - \exp(\beta\tau) \bigr), \quad \tau \le 0,$
meaning that P_1^b(τ) is an exponentially decaying function as τ → −∞.
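For the data of Section 8.1, where Λ(t_f) = 9 I₂ and hence Λ^{1/2}(t_f) = 3 I₂, the formulas (A6) and (A7), as written above, reduce to the scalar expression for Φ(σ,τ) quoted in Section 8.1. A quick numerical check of this reduction:

```python
import numpy as np

# Check that (A6)-(A7) with Lambda^{1/2}(t_f) = 3 reproduce the scalar factor of Phi(sigma, tau)
# used in Section 8.1, namely (1 + exp(6*sigma)) * exp(3*(tau - sigma)) / (1 + exp(6*tau)).
def Phi_A6(sigma, tau, lam_sqrt=3.0):
    theta = lambda chi: 1.0 + np.exp(2.0 * lam_sqrt * chi)     # scalar Theta(chi), see (A7)
    return np.exp(lam_sqrt * (tau - sigma)) * theta(sigma) / theta(tau)

def Phi_example(sigma, tau):
    return (1.0 + np.exp(6.0 * sigma)) * np.exp(3.0 * (tau - sigma)) / (1.0 + np.exp(6.0 * tau))

taus = np.linspace(-5.0, 0.0, 11)
print(max(abs(Phi_A6(s, t) - Phi_example(s, t)) for t in taus for s in np.linspace(t, 0.0, 7)))
```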

Appendix B. Obtaining the Boundary Corrections K ^ 1,1 b (τ) and K ^ 2,1 b (τ)

The correction K ^ 1 , 1 b ( τ ) satisfies the following terminal-value problem:
d K ^ 1 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) K ^ 1 , 1 b ( τ ) + K ^ 1 , 1 b ( τ ) K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) + Ψ ^ 1 ( τ ) , τ 0 , K ^ 1 , 1 b ( 0 ) = K ^ 1 , 1 o ( t f ) ,
where
Ψ ^ 1 ( τ ) = τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) A 1 T ( t f ) K ^ 1 , 0 b ( τ ) + K ^ 1 , 0 b ( τ ) τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) A 1 ( t f ) .
Due to the first inequality in (119), we can estimate the matrix-valued function Ψ ^ 1 ( τ ) as
$\| \hat{\Psi}_1(\tau) \| \le \hat{a}_{\Psi,1} \exp(\hat{\beta}\tau), \quad \tau \le 0,$
where a ^ Ψ , 1 > 0 is some constant; the constant β ^ > 0 is given by (120).
Solving the problem (A9) and using the results of [60] and the symmetry of the matrices K ^ 1 , 0 o ( t ) , K ^ 1 , 0 b ( τ ) , we obtain similarly to (A4)
K ^ 1 , 1 b ( τ ) = Φ ^ 1 ( 0 , τ ) K ^ 1 , 1 o ( t f ) Φ ^ 1 ( 0 , τ ) + 0 τ Φ ^ 1 ( σ , τ ) Ψ ^ 1 ( σ ) Φ ^ 1 ( σ , τ ) d σ , τ 0 ,
where, for any τ 0 , the n × n matrix-valued function Φ ^ 1 ( σ , τ ) is the unique solution of the problem
d Φ ^ 1 ( σ , τ ) d σ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( σ ) Φ ^ 1 ( σ , τ ) , σ [ τ , 0 ] , Φ ^ 1 ( τ , τ ) = I l .
Solving this problem and taking into account the expressions for K ^ 1 , 0 o ( t ) and K ^ 1 , 0 b ( τ ) (see Equations (112) and (117)), we have
$\hat{\Phi}_1(\sigma,\tau) = \exp\bigl( \Lambda_1^{1/2}(t_f)(\tau-\sigma) \bigr)\, \hat{\Theta}^{-1}(\tau)\, \hat{\Theta}(\sigma), \quad 0 \ge \sigma \ge \tau > -\infty,$
where
$\hat{\Theta}(\chi) = I_l + \exp\bigl( 2\Lambda_1^{1/2}(t_f)\chi \bigr), \quad \chi \le 0.$
The matrix-valued function Φ ^ 1 ( σ , τ ) satisfies the inequality
$\| \hat{\Phi}_1(\sigma,\tau) \| \le \hat{a}_{\Phi,1} \exp\bigl( \hat{\beta}(\tau-\sigma) \bigr), \quad 0 \ge \sigma \ge \tau > -\infty,$
where a ^ Φ , 1 > 0 is some constant; the constant β ^ is given in (120).
Using Equation (A12) and the inequalities (A11) and (A16), we obtain the following estimate for K ^ 1 , 1 b ( τ ) :
$\| \hat{K}_{1,1}^{b}(\tau) \| \le \hat{a}_{\Phi,1}^{2}\, \| \hat{K}_{1,1}^{o}(t_f) \| \exp(2\hat{\beta}\tau) + \frac{\hat{a}_{\Psi,1}}{\hat{\beta}}\exp(\hat{\beta}\tau)\bigl( 1 - \exp(\hat{\beta}\tau) \bigr), \quad \tau \le 0,$
meaning that K̂_{1,1}^b(τ) is an exponentially decaying function as τ → −∞.
Proceed to the correction K ^ 2 , 1 b ( τ ) . Using Equation (113), we obtain, after some rearrangement, the terminal-value problem for this correction
d K ^ 2 , 1 b ( τ ) d τ = K ^ 1 , 0 o ( t f ) + K ^ 1 , 0 b ( τ ) K ^ 2 , 1 b ( τ ) + Ψ ^ 2 ( τ ) , τ 0 , K ^ 2 , 1 b ( 0 ) = K ^ 2 , 1 o ( t f ) ,
where
Ψ ^ 2 ( τ ) = τ d K ^ 1 , 0 o ( t ) d t | t = t f + K ^ 1 , 1 o ( t f ) + K ^ 1 , 1 b ( τ ) A 1 T ( t f ) K ^ 2 , 0 b ( τ ) + K ^ 1 , 0 b ( τ ) K ^ 2 , 1 o ( t f ) K ^ 2 , 0 b ( τ ) A 4 ( t f ) .
The analysis and solution of the problem (A18) is similar to the above-presented analysis and solution of the problem (A9). Namely, the matrix-valued function Ψ ^ 2 ( τ ) can be estimated as
$\| \hat{\Psi}_2(\tau) \| \le \hat{a}_{\Psi,2} \exp\bigl( (\hat{\beta}/2)\tau \bigr), \quad \tau \le 0,$
where a ^ Ψ , 2 > 0 is some constant; the constant β ^ > 0 is given by (120).
The solution of the problem (A18) is
K ^ 2 , 1 b ( τ ) = Φ ^ 1 ( 0 , τ ) K ^ 2 , 1 o ( t f ) + 0 τ Φ ^ 1 ( σ , τ ) Ψ ^ 2 ( σ ) d σ , τ 0 ,
where the matrix-valued function Φ ^ 1 ( σ , τ ) is given by (A13)–(A15) and satisfies the inequality (A16).
Using Equation (A21) and the inequalities (A16) and (A20), we obtain the following estimate of K ^ 2 , 1 b ( τ ) for all τ 0 :
$\| \hat{K}_{2,1}^{b}(\tau) \| \le \hat{a}_{\Phi,1}\, \| \hat{K}_{2,1}^{o}(t_f) \| \exp(\hat{\beta}\tau) + \frac{2\hat{a}_{\Psi,2}}{\hat{\beta}}\exp\bigl( (\hat{\beta}/2)\tau \bigr)\bigl( 1 - \exp\bigl( (\hat{\beta}/2)\tau \bigr) \bigr),$
meaning that K̂_{2,1}^b(τ) is an exponentially decaying function as τ → −∞.

References

  1. O’Malley, R.E. Cheap control, singular arcs, and singular perturbations. In Optimal Control Theory and Its Applications; Kirby, B.J., Ed.; Lecture Notes in Economics and Mathematical Systems; Springer: Berlin/Heidelberg, Germany, 1974; Volume 106. [Google Scholar]
  2. Bell, D.J.; Jacobson, D.H. Singular Optimal Control Problems; Academic Press: Cambridge, MA, USA, 1975. [Google Scholar]
  3. O’Malley, R.E. The singular perturbation approach to singular arcs. In International Conference on Differential Equations; Antosiewicz, H.A., Ed.; Elsevier Inc.: Amsterdam, The Netherlands, 1975; pp. 595–611. [Google Scholar]
  4. O’Malley, R.E.; Jameson, A. Singular perturbations and singular arcs, I. IEEE Trans. Automat. Control 1975, 20, 218–226. [Google Scholar] [CrossRef]
  5. O’Malley, R.E. A more direct solution of the nearly singular linear regulator problem. SIAM J. Control Optim. 1976, 14, 1063–1077. [Google Scholar] [CrossRef]
  6. O’Malley, R.E.; Jameson, A. Singular perturbations and singular arcs, II. IEEE Trans. Automat. Control 1977, 22, 328–337. [Google Scholar] [CrossRef]
  7. Kurina, G.A. A degenerate optimal control problem and singular perturbations. Sov. Math. Dokl. 1977, 18, 1452–1456. [Google Scholar]
  8. Sannuti, P.; Wason, H.S. Multiple time-scale decomposition in cheap control problems–singular control. IEEE Trans. Automat. Control 1985, 30, 633–644. [Google Scholar] [CrossRef]
  9. Saberi, A.; Sannuti, P. Cheap and singular controls for linear quadratic regulators. IEEE Trans. Automat. Control 1987, 32, 208–219. [Google Scholar] [CrossRef]
  10. Smetannikova, E.N.; Sobolev, V.A. Regularization of cheap periodic control problems. Automat. Remote Control 2005, 66, 903–916. [Google Scholar] [CrossRef]
  11. Glizer, V.Y. Stochastic singular optimal control problem with state delays: Regularization, singular perturbation, and minimizing sequence. SIAM J. Control Optim. 2012, 50, 2862–2888. [Google Scholar] [CrossRef]
  12. Shinar, J.; Glizer, V.Y.; Turetsky, V. Solution of a singular zero-sum linear-quadratic differential game by regularization. Int. Game Theory Rev. 2014, 16, 1–32. [Google Scholar] [CrossRef]
  13. Glizer, V.Y.; Kelis, O. Singular Linear-Quadratic Zero-Sum Differential Games and H Control Problems: Regularization Approach; Birkhauser: Basel, Switzerland, 2022. [Google Scholar]
  14. Kwakernaak, H.; Sivan, R. The maximally achievable accuracy of linear optimal regulators and linear optimal filters. IEEE Trans. Autom. Control 1972, 17, 79–86. [Google Scholar] [CrossRef]
  15. Francis, B. The optimal linear-quadratic time-invariant regulator with cheap control. IEEE Trans. Autom. Control 1979, 24, 616–621. [Google Scholar] [CrossRef]
  16. Saberi, A.; Sannuti, P. Cheap control problem of a linear uniform rank system: Design by composite control. Automatica 1986, 22, 757–759. [Google Scholar] [CrossRef]
  17. Lee, J.T.; Bien, Z.N. A quadratic regulator with cheap control for a class of nonlinear systems. J. Optim. Theory Appl. 1987, 55, 289–302. [Google Scholar] [CrossRef]
  18. Braslavsky, J.H.; Seron, M.M.; Mayne, D.Q.; Kokotovic, P.V. Limiting performance of optimal linear filters. Automatica 1999, 35, 189–199. [Google Scholar] [CrossRef]
  19. Seron, M.M.; Braslavsky, J.H.; Kokotovic, P.V.; Mayne, D.Q. Feedback limitations in nonlinear systems: From Bode integrals to cheap control. IEEE Trans. Autom. Control 1999, 44, 829–833. [Google Scholar] [CrossRef]
  20. Moylan, P.J.; Anderson, B.D.O. Nonlinear regulator theory and an inverse optimal control problem. IEEE Trans. Autom. Control 1973, 18, 460–465. [Google Scholar] [CrossRef]
  21. Young, K.D.; Kokotovic, P.V.; Utkin, V.I. A singular perturbation analysis of high-gain feedback systems. IEEE Trans. Autom. Control 1977, 22, 931–938. [Google Scholar] [CrossRef]
  22. Kokotovic, P.V.; Khalil, H.K.; O’Reilly, J. Singular Perturbation Methods in Control: Analysis and Design; Academic Press: London, UK, 1986. [Google Scholar]
  23. Vasil’eva, A.B.; Butuzov, V.F.; Kalachev, L.V. The Boundary Function Method for Singular Perturbation Problems; SIAM Books: Philadelphia, PA, USA, 1995. [Google Scholar]
  24. Isaacs, R. Differential Games; John Wiley and Sons: New York, NY, USA, 1967. [Google Scholar]
  25. Krasovskii, N.N.; Subbotin, A.I. Game-Theoretical Control Problems; Springer: New York, NY, USA, 1988. [Google Scholar]
  26. Basar, T.; Olsder, G.J. Dynamic Noncooperative Game Theory; Academic Press: London, UK, 1992. [Google Scholar]
  27. Basar, T.; Bernhard, P. H-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach; Birkhauser: Boston, MA, USA, 1995. [Google Scholar]
  28. Boltyanskii, V.G.; Poznyak, A.S. The Robust Maximum Principle: Theory and Applications; Birkhauser: New York, NY, USA, 2012. [Google Scholar]
  29. Zhukovskii, V.I. Analytic design of optimum strategies in certain differential games. I. Autom. Remote Control 1970, 4, 533–536. [Google Scholar]
  30. Zhang, Q.; Tang, R.; Lu, Y.; Wang, X. The impact of anxiety on cooperative behavior: A network evolutionary game theory approach. Appl. Math. Comput. 2024, 474, 128721. [Google Scholar] [CrossRef]
  31. Pi, B.; Deng, L.-J.; Feng, M.; Perc, M.; Kurths, J. Dynamic evolution of complex networks: A reinforcement learning approach applying evolutionary games to community structure. IEEE Trans. Pattern Anal. Mach. Intell. 2025. [Google Scholar] [CrossRef]
  32. Weng, T.; Yang, H.; Gu, C.; Zhang, J.; Hui, P.; Small, M. Predator-prey games on complex networks. Commun. Nonlinear Sci. Numer. Simul. 2019, 79, 104911. [Google Scholar] [CrossRef]
  33. Bryson, A.E., Jr.; Ho, Y.-C. Applied Optimal Control; Taylor & Francis Group: New York, NY, USA, 1975. [Google Scholar]
  34. Dockner, E.J.; Jorgensen, S.; Van Long, N.; Sorger, G. Differential Games in Economics and Management Science; Cambridge University Press: Cambridge, UK, 2000. [Google Scholar]
  35. He, X.; Prasad, A.; Sethi, S.P.; Gutierrez, G.J. A survey of Stackelberg differential game models in supply and marketing channels. J. Syst. Sci. Syst. Eng. 2007, 16, 385–413. [Google Scholar] [CrossRef]
  36. Colombo, L.; Labrecciosa, P. Stackelberg versus Cournot: A differential game approach. J. Econ. Dyn. Control 2019, 101, 239–261. [Google Scholar] [CrossRef]
  37. Kanska, K.; Wiszniewska-Matyszkiel, A. Dynamic Stackelberg duopoly with sticky prices and a myopic follower. Oper. Res. 2022, 22, 4221–4252. [Google Scholar]
  38. Hu, Y.; Oksendal, B.; Sulem, A. Singular mean-field control games with applications to optimal harvesting and investment problems. arXiv 2014, arXiv:1406.1863. [Google Scholar] [CrossRef]
  39. Petersen, I.R. Linear-quadratic differential games with cheap control. Syst. Control Lett. 1986, 8, 181–188. [Google Scholar] [CrossRef]
  40. Glizer, V.Y.; Kelis, O. Solution of a zero-sum linear quadratic differential game with singular control cost of minimiser. Control Decis. 2015, 2, 155–184. [Google Scholar] [CrossRef]
  41. Glizer, V.Y. Asymptotic solution of zero-sum linear-quadratic differential game with cheap control for the minimizer. NoDEA Nonlinear Diff. Equ. Appl. 2000, 7, 231–258. [Google Scholar] [CrossRef]
  42. Turetsky, V.; Shinar, J. Missile guidance laws based on pursuit—evasion game formulations. Automatica 2003, 39, 607–618. [Google Scholar] [CrossRef]
  43. Turetsky, V.; Glizer, V.Y. Cheap control in a non-scalarizable linear-quadratic pursuit-evasion game: Asymptotic analysis. Axioms 2022, 11, 214. [Google Scholar] [CrossRef]
  44. Glizer, V.Y. Nash equilibrium sequence in a singular two-person linear-quadratic differential game. Axioms 2021, 10, 132. [Google Scholar] [CrossRef]
  45. Glizer, V.Y. Nash equilibrium in a singular infinite horizon two-person linear-quadratic differential game. Pure Appl. Funct. Anal. 2022, 7, 1657–1698. [Google Scholar]
  46. Glizer, V.Y. Solution of one class of singular two-person Nash equilibrium games with state and control delays: Regularization approach. Appl. Set-Valued Anal. Optim. 2023, 5, 401–438. [Google Scholar] [CrossRef]
  47. Glizer, V.Y.; Turetsky, V. One class of Stackelberg linear-quadratic differential games with cheap control of a leader: Asymptotic analysis of open-loop solution. Axioms 2024, 13, 801. [Google Scholar] [CrossRef]
  48. Turetsky, V.; Glizer, V.Y. Supply chain Stackelberg differential game for a manufacturer with cheap innovation. In Proceedings of the Abstracts of 5th IMA and OR Society Conference on Mathematics of Operational Research, Birmingham, UK, 30 April–2 May 2025; p. 56. Available online: https://cdn.ima.org.uk/wp/wp-content/uploads/2025/04/OR-Abstracts-Final-27.04.pdf (accessed on 30 April 2025).
  49. Turetsky, V.; Glizer, V.Y. Robust solution of a time-variable interception problem: A cheap control approach. Int. Game Theory Rev. 2007, 9, 637–655. [Google Scholar] [CrossRef]
  50. Glizer, V.Y.; Kelis, O. Singular infinite horizon zero-sum linear-quadratic differential game: Saddle-point equilibrium sequence. Numer. Algebra Control Optim. 2017, 7, 1–20. [Google Scholar] [CrossRef]
  51. Glizer, V.Y.; Kelis, O. Upper value of a singular infinite horizon zero-sum linear-quadratic differential game. Pure Appl. Funct. Anal. 2017, 2, 511–534. [Google Scholar]
  52. Glizer, V.Y. Saddle-point equilibrium sequence in one class of singular infinite horizon zero-sum linear-quadratic differential games with state delays. Optimization 2019, 68, 349–384. [Google Scholar] [CrossRef]
  53. Glizer, V.Y. Saddle-point equilibrium sequence in a singular finite horizon zero-sum linear-quadratic differential game with delayed dynamics. Pure Appl. Funct. Anal. 2021, 6, 1227–1260. [Google Scholar]
  54. Glizer, V.Y. Asymptotic analysis and open-loop solution of one class of partial cheap control zero-sum differential games with state and control delays. Commun. Optim. Theory 2022, 2022, 16. [Google Scholar]
  55. Bellman, R. Introduction to Matrix Analysis; SIAM Books: Philadelphia, PA, USA, 1997. [Google Scholar]
  56. Sibuya, Y. Some global properties of matrices of functions of one variable. Math. Ann. 1965, 161, 67–77. [Google Scholar] [CrossRef]
  57. Derevenskii, V.P. Matrix Bernoulli equations, I. Russ. Math. 2008, 52, 12–21. [Google Scholar] [CrossRef]
  58. Gajic, Z.; Qureshi, M.T.J. Lyapunov Matrix Equation in System Stability and Control; Dover Publications: Mineola, NY, USA, 2008. [Google Scholar]
  59. Glizer, V.Y. Asymptotic solution of a cheap control problem with state delay. Dyn. Control 1999, 9, 339–357. [Google Scholar] [CrossRef]
  60. Abou-Kandil, H.; Freiling, G.; Ionescu, V.; Jank, G. Matrix Riccati Equations in Control and Systems Theory; Birkhauser: Basel, Switzerland, 2003. [Google Scholar]
  61. Kwakernaak, H.; Sivan, R. Linear Optimal Control Systems; Wiley-Interscience: New York, NY, USA, 1972. [Google Scholar]
Figure 1. Absolute errors of the asymptotic expansion P_1(t, ε) of P(t, ε).
Figure 2. Absolute errors of the asymptotic expansion p_1(t, ε) of p(t, ε).
Figure 3. Absolute error of the asymptotic expansion s̄(t, ε) of s(t, ε).
Figure 4. Absolute errors ΔK̂_1(ε), ΔK̂_2(ε), and ΔK̂_3(ε).
Figure 5. Absolute errors Δq̂_1(ε) and Δq̂_2(ε).
Figure 6. Absolute error of the asymptotic expansion ŝ(t, ε) of s(t, ε).
Table 1. Main notations in the paper.
No.NotationDescription
1 R n n-dimensional real Euclidean space
2 · Euclidean norm either of vector ( z ) or of matrix ( A )
3Ttransposition either of vector ( z T ) or of matrix ( A T )
4 I n identity matrix of dimension n
5 col ( x , y ) , x R n , y R m column block vector
6 diag ( a 1 , , a n ) diagonal matrix with diagonal entries a 1 ,…, a n
7 L 2 [ t 1 , t 2 ; R n ] space of all functions z ( · ) : [ t 1 , t 2 ] R n square integrable in the interval [ t 1 , t 2 ]
8 ζ ( t ) state variable of initially formulated differential game
9 w ( t ) control of minimizing player in initially formulated differential game
10 z ( t ) state variable of transformed differential game
11 u ( t ) control of minimizing player in transformed differential game
12 v ( t ) control of maximizing player in initial and transformed games
13 ε 2 small cost of control of minimizing player
14 U V set of admissible pairs of players’ state-feedback controls in transformed game
15 ( u ε * ( z , t ) , v ε * ( z , t ) saddle point of transformed game
16 J ε * value of transformed game
17 K ( t , ε ) solution of Riccati matrix differential equation
18 q ( t , ε ) solution of linear vector differential equation
19 s ( t , ε ) solution of scalar differential equation
20 P ( t , ε ) solution of transformed Riccati matrix equation in case I
21 p ( t , ε ) solution of transformed linear vector equation in case I
22 P 1 ( t , ε ) asymptotic solution of transformed Riccati equation in case I
23 p 1 ( t , ε ) asymptotic solution of transformed linear equation in case I
24 s ¯ ( t , ε ) asymptotic solution of scalar equation in case I
25 J app I and J app , 1 I asymptotic approximations of game value in case I
26 u ˜ ε ( z , t ) , v ˜ ε ( z , t ) approximate saddle point in case I
27 J ˜ output of the game generated by approximate saddle point in case I
28 K ( t , ε ) = ε K ^ 1 ( t , ε ) ε 2 K ^ 2 ( t , ε ) ε 2 K ^ 2 T ( t , ε ) ε 2 K ^ 3 ( t , ε ) block form of solution of Riccati equation in case II
29 q ( t , ε ) = ε q ^ 1 ( t , ε ) ε q ^ 2 ( t , ε ) block form of solution of linear equation in case II
30 K ^ 1 ( t , ε ) = ε K ^ 1 , 1 ( t , ε ) ε 2 K ^ 2 , 1 ( t , ε ) ε 2 K ^ 2 , 1 T ( t , ε ) ε 2 K ^ 3 , 1 ( t , ε ) asymptotic solution of Riccati equation in case II
31 Q ^ 1 ( t , ε ) = col ε q ^ 1 , 1 ( t , ε ) , ε q ^ 2 , 1 ( t , ε ) asymptotic solution of linear equation in case II
32 s ^ ( t , ε ) asymptotic solution of scalar equation in case II
33 J app I I and J app , 1 I I asymptotic approximations of game value in case II
34 u ^ ε ( z , t ) , v ^ ε ( z , t ) approximate saddle point in case II
35 J ^ output of the game generated by approximate saddle point in case II
Table 2. Values of J*_ε, J^I_app, J^I_{app,1}, and J̃ in case I.
ε J ε * J app I J app , 1 I J ˜
0.1 0.7271 0.7233 0.7233 0.7273
0.05 0.3312 0.3308 0.3308 0.3312
0.015 0.09282 0.09278 0.09278 0.09282
Table 3. Absolute and relative errors of J*_ε approximations in case I.
ε Δ J app I ( ε ) Δ J app , 1 I ( ε ) Δ J ˜ ( ε ) δ J app I ( ε ) δ J app , 1 I ( ε ) δ J ˜ ( ε )
0.1 3.78·10⁻³ 3.78·10⁻³ 2.1·10⁻⁴ 0.52 0.52 0.029
0.05 3.42·10⁻⁴ 3.42·10⁻⁴ 7.31·10⁻⁶ 0.10 0.10 0.0022
0.015 4.11·10⁻⁵ 4.11·10⁻⁵ 2.46·10⁻⁸ 0.044 0.044 2.65·10⁻⁵
Table 4. Values of J*_ε, J^II_app, J^II_{app,1}, and Ĵ in case II.
ε J ε * J app II J app , 1 II J ^
0.1 0.3588 0.3545 0.3545 0.3598
0.05 0.16503 0.1646 0.1646 0.16508
0.015 0.046399 0.0463718 0.0463718 0.0463999
Table 5. Absolute and relative errors of J*_ε in case II.
ε Δ J app II ( ε ) Δ J app , 1 II ( ε ) Δ J ^ ( ε ) δ J app II ( ε ) δ J app , 1 II ( ε ) δ J ^ ( ε )
0.1 4.38·10⁻³ 4.38·10⁻³ 9.68·10⁻⁴ 1.22 1.22 0.27
0.05 4.55·10⁻⁴ 4.55·10⁻⁴ 5.46·10⁻⁵ 0.28 0.28 0.033
0.015 2.77·10⁻⁵ 2.77·10⁻⁵ 4.07·10⁻⁷ 0.06 0.06 8.78·10⁻⁴