Nash Equilibrium Sequence in a Singular Two-Person Linear-Quadratic Differential Game

Abstract: A finite-horizon two-person non-zero-sum differential game is considered. The dynamics of the game is linear. Each player has a quadratic functional at its disposal, which should be minimized. The case where the weight matrices of the control costs of one player are singular in both functionals is studied. Hence, the game under consideration is singular. A novel definition of the Nash equilibrium in this game (a Nash equilibrium sequence) is proposed. The game is solved by application of the regularization method, which yields a new differential game that is a regular Nash equilibrium game. Moreover, the new game is a partial cheap control game. An asymptotic analysis of this game is carried out. Based on this analysis, the Nash equilibrium sequence of pairs of the players' state-feedback controls in the singular game is constructed. Expressions for the optimal values of the functionals in the singular game are obtained. Illustrative examples are presented.


Introduction
Differential games that cannot be solved by application of the first-order solvability conditions are called singular. For instance, a zero-sum differential game is called singular if it cannot be solved using the Isaacs MinMax principle [1,2] and the Bellman-Isaacs equation method [1,3]. Similarly, a Nash equilibrium set of controls in a singular non-zero-sum differential game cannot be derived using the first-order variational method and the generalized Hamilton-Jacobi-Bellman equation method [3,4].
Singular differential games appear in various applications. For example, such games arise in pursuit-evasion problems (see, e.g., Ref. [5]), in robust controllability problems (see, e.g., Ref. [6]), in robust interception problems of maneuvering targets (see, e.g., Ref. [7]), in robust tracking problems (see, e.g., Ref. [8]), in biological processes (see, e.g., Ref. [9]), and in robust investment problems (see, e.g., Ref. [10]). Treating a singular differential game, one can try to use higher-order solvability conditions. However, such conditions are useless for a game that does not have an optimal control of at least one player in the class of regular (non-generalized) functions.
Singular zero-sum differential games were extensively analyzed in the literature by different methods (see, e.g., Refs. [7,11-18] and references therein). Thus, in Refs. [7,15,16], various singular zero-sum differential games were solved by the regularization method. In Reference [11], a numerical method was proposed to solve one class of zero-sum differential games with singular control. In Reference [12], a class of zero-sum differential games with singular arcs was considered; for this class of games, sufficient conditions for the existence of a saddle-point solution were established. In Reference [13], a Riccati matrix inequality was applied to establish the existence of almost equilibria in a singular zero-sum differential game. In Reference [14], a saddle-point solution of a singular zero-sum differential game was derived in the class of generalized functions. In Reference [17], a class of zero-sum stochastic differential games was studied, in which each player has a control consisting of regular and singular parts; necessary and sufficient saddle-point optimality conditions were derived for this game. In Reference [18], a singular zero-sum linear-quadratic differential game was considered and treated by its regularization and a numerical solution of the regularized game.
Singular non-zero-sum Nash equilibrium differential games have also been studied in the literature, but mostly in various stochastic settings (see, e.g., Refs. [10,19-22] and references therein). Deterministic singular non-zero-sum Nash equilibrium differential games have been studied only in a few works. Thus, in Reference [23], a two-person non-zero-sum differential game with linear second-order dynamics and scalar controls of both players was considered. Each player controls one equation of the dynamics. The infinite-horizon quadratic functionals of the players do not contain control costs. The admissible class of controls for both players is the set of linear state feedbacks. The notion of asymptotic (with respect to time) ε-Nash equilibrium was introduced, and this equilibrium was designed subject to some condition. In Reference [9], a finite-horizon two-person non-zero-sum differential game was studied. This game models a biological process. Its fourth-order dynamics is linear with respect to the scalar controls of the players, and these controls are bounded. The players' functionals depend only on the state variables, and this dependence is quadratic. For this singular game, a Nash equilibrium set of open-loop controls was derived in the class of regular functions. In Reference [24], an infinite-horizon two-person non-zero-sum differential game with n-order linear dynamics and vector-valued unconstrained players' controls was considered. The functionals of both players are quadratic, and they do not contain the control costs of one (the same) player. This singular game was solved by the regularization approach.
In the present paper, we consider a deterministic finite-horizon two-person non-zero-sum differential game. The dynamics of this game is linear and time-dependent. The controls of the players are unconstrained. Each player aims to minimize its own quadratic functional. We look for the Nash equilibrium in this game, and we treat the case where the weight matrices of the control costs of one player (the "singular" player) in both functionals are singular but non-zero. This feature means that the game under consideration is singular. However, since the aforementioned singular weight matrices are non-zero, the control of the "singular" player contains both singular and regular coordinates. For this game, in general, a Nash equilibrium pair of controls in which the singular coordinates of the "singular" player's control are regular (non-generalized) functions does not exist. To the best of our knowledge, such a game has not yet been studied in the literature. The aims of the paper are the following: (A) to define the solution (the Nash equilibrium) of the considered game; (B) to derive this solution. Thus, we propose for the considered singular game a novel notion of the Nash equilibrium (a Nash equilibrium sequence). Based on this notion, we solve the game by application of the regularization method. Namely, we associate the original singular game with a new differential game. This new game has the same equation of dynamics and a similar functional of the "singular" player, augmented by a finite-horizon integral of the square of its singular control coordinates with a small positive weight (a small parameter). The functional of the other ("regular") player remains unchanged. Thus, the new game is a finite-horizon regular linear-quadratic game.
The regularization method has been applied to the solution of singular optimal control problems in many works (see, e.g., Refs. [25-27] and references therein). This method has also been applied to the solution of singular H∞ control problems (see, e.g., Refs. [28,29] and references therein) and of singular zero-sum differential games (see, e.g., Refs. [7,15,16]). However, to the best of our knowledge, the application of the regularization method to the analysis and solution of singular non-zero-sum differential games was considered only in two short conference papers [24,30]. In each of these papers, the study of the game was presented in a brief form, without detailed analysis and proofs of assertions.
The aforementioned new game, obtained by the regularization of the original singular game, is a partial cheap control game. Using the solvability conditions of a Nash equilibrium finite-horizon linear-quadratic regular game, the solution of this partial cheap control game is reduced to the solution of a set of two matrix Riccati-type differential equations, singularly perturbed by the small parameter. Using an asymptotic solution of this set, a Nash equilibrium sequence of pairs of the players' state-feedback controls in the original singular game is constructed. Expressions for the optimal values of the players' functionals in this game are obtained. Note that a particular case of the differential game studied in the present paper was considered briefly, and without detailed proofs, in the short conference paper [30].
The paper is organized as follows. In the next section, the initial formulation of the singular differential game is presented, and the main definitions are formulated. The transformation of the initially formulated game is carried out in Section 3, where it is shown that the initially formulated game and the transformed game are equivalent to each other. Due to this equivalence, in the rest of the paper the transformed game is analyzed as the original singular differential game. The regularization of this game, carried out in Section 4, yields a partial cheap control regular game. The Nash equilibrium solution of the latter is presented in Section 5. An asymptotic analysis of the partial cheap control regular game is carried out in Section 6. In Section 7, the reduced differential game, associated with the original singular game, is presented along with its solvability conditions. The Nash equilibrium sequence for the original singular differential game and the expressions for the optimal values of its functionals are derived in Section 8. Two illustrative examples are considered in Section 9. Section 10 is devoted to concluding remarks. Some technically complicated proofs are placed in the appendices.
The following main notations are used in the paper:

1. E^n is the n-dimensional real Euclidean space.
2. The Euclidean norm of either a vector or a matrix is denoted by ‖·‖.
3. The upper index "T" denotes the transposition either of a vector x (x^T) or of a matrix A (A^T).
4. I_n denotes the identity matrix of dimension n.
5. O_{n×m} denotes the zero matrix of dimension n × m; however, if the dimension of the zero matrix is clear, it is denoted by 0.
6. col(x, y), where x ∈ E^n, y ∈ E^m, denotes the column block-vector of dimension n + m with the upper block x and the lower block y, i.e., col(x, y) = (x^T, y^T)^T.
7. ⊗ denotes the Kronecker product of matrices.
8. For a given n × m-matrix A, vec(A) denotes its vectorization, i.e., the nm-dimensional block vector in which the first (upper) block is the first (upper) row of A, the second block is the second row of A, and so on; the last (lower) block of vec(A) is the last (lower) row of A.
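The vectorization above stacks the rows of A, which differs from the more common column-stacking convention but coincides with NumPy's row-major flattening. As a hedged side note (the paper does not state this identity explicitly), under the row-wise convention the familiar Kronecker-product identity takes the form vec(AXB) = (A ⊗ B^T) vec(X), which the following sketch checks numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # p x n
X = rng.standard_normal((4, 5))   # n x m
B = rng.standard_normal((5, 2))   # m x k

def vec(M):
    # Row-wise vectorization as defined in the paper: stack the rows of M.
    # This coincides with NumPy's row-major (C-order) flattening.
    return M.flatten()

# With the row-wise convention, vec(A X B) = (A kron B^T) vec(X).
lhs = vec(A @ X @ B)
rhs = np.kron(A, B.T) @ vec(X)
assert np.allclose(lhs, rhs)
```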

Initial Game Formulation
The game's dynamics is described by the following system: where t_f > 0 is a given final time instant; Z(t) ∈ E^n is the state vector; u(t) ∈ E^r (r < n) and v(t) ∈ E^s are the players' controls; A(t), B_u(t), and B_v(t), t ∈ [0, t_f], are given matrix-valued functions of corresponding dimensions; and Z_0 ∈ E^n is a given constant vector.
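The display equation of the system (1) did not survive extraction; based on the description above (linear time-dependent dynamics with matrices A(t), B_u(t), B_v(t) and initial state Z_0), it presumably has the standard linear form:

```latex
\frac{dZ(t)}{dt} = A(t)\,Z(t) + B_u(t)\,u(t) + B_v(t)\,v(t),
\qquad t \in [0, t_f], \qquad Z(0) = Z_0 .
```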
The functionals of the player "u" with the control u(t) and the player "v" with the control v(t) are, respectively, where C_u and C_v are given symmetric positive semi-definite matrices of corresponding dimensions. In what follows, we assume that the weight matrices R_uu(t) and R_vu(t) of the costs of the control u(t) in both functionals have the block form where the matrices R̂_uu(t) and R̂_vu(t) are of dimension q × q (0 < q < r); the matrix R̂_uu(t) is positive definite; and the matrix R̂_vu(t) is positive semi-definite. The player "u" aims to minimize the functional (2) by a proper choice of the control u(t), while the player "v" aims to minimize the functional (3) by a proper choice of the control v(t).
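The displays (2)-(4) were likewise lost in extraction. A plausible reconstruction, hedged as an assumption (the exact state-cost and cross-control weights are not recoverable from the surviving text), is:

```latex
% Assumed forms of (2), (3); the D_u, D_v, R_uv weights are guesses:
J_u(u,v) = Z^T(t_f)\, C_u\, Z(t_f)
  + \int_0^{t_f} \big( Z^T(t) D_u(t) Z(t) + u^T(t) R_{uu}(t) u(t)
  + v^T(t) R_{uv}(t) v(t) \big)\, dt,
\qquad
J_v(u,v) = Z^T(t_f)\, C_v\, Z(t_f)
  + \int_0^{t_f} \big( Z^T(t) D_v(t) Z(t) + u^T(t) R_{vu}(t) u(t)
  + v^T(t) R_{vv}(t) v(t) \big)\, dt,
% and the block form (4), with the zero block presumably acting on the
% last r - q (singular) control coordinates:
R_{uu}(t) = \mathrm{diag}\big( \widehat{R}_{uu}(t),\, O_{(r-q)\times(r-q)} \big),
\qquad
R_{vu}(t) = \mathrm{diag}\big( \widehat{R}_{vu}(t),\, O_{(r-q)\times(r-q)} \big).
```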
We study the game (1)-(3) with respect to its Nash equilibrium, and subject to the assumption that both players know perfectly the current game state.

Remark 1.
Due to the assumption (4), the first-order Nash equilibrium solvability conditions (see, e.g., Refs. [3,4]) cannot be applied to the analysis and solution of the game (1)-(3), i.e., this game is singular. Moreover, this game does not, in general, have a solution (a Nash equilibrium pair of controls) in the class of regular (non-generalized) functions.
Consider the set U_Z of all functions F_u(Z, t): E^n × [0, t_f] → E^r that are measurable w.r.t. t ∈ [0, t_f] for any fixed Z ∈ E^n and satisfy the local Lipschitz condition w.r.t. Z ∈ E^n uniformly in t ∈ [0, t_f]. In addition, consider the set V_Z of all functions F_v(Z, t): E^n × [0, t_f] → E^s with the same properties. Definition 1. By (UV)_Z, we denote the set of all pairs (F_u(Z, t), F_v(Z, t)) of functions satisfying the following conditions: In what follows, (UV)_Z is called the set of all admissible pairs of players' state-feedback controls (strategies). For any given functions F_u(Z, t) ∈ U_Z and F_v(Z, t) ∈ V_Z, we consider the sets Consider the sequence of the pairs (F*_{u,k}(Z, t), F*_v(Z, t)) ∈ (UV)_Z, (k = 1, 2, ...).

Transformation of the Game (1)-(3)
Let us represent the matrix B u (t) in the block form where the matrices B u,1 (t) and B u,2 (t) have the dimensions n × q and n × (r − q), respectively.
In what follows, we assume: AV. The matrix-valued functions B u (t) and D u (t) are twice continuously differentiable in the interval [0, t f ].
Let the n × (n − r)-matrix B_{u,c}(t) be a complement matrix to B_u(t) in the interval [0, t_f], i.e., the block matrix (B_{u,c}(t), B_u(t)) is nonsingular for all t ∈ [0, t_f]. In what follows, we also assume: Using the matrices B_{u,2}(t) and B_{u,c}(t), we construct the following matrices: Now, using the matrix R_u(t), we make the following transformation of the state variable Z(t) in the game (1)-(3): where z(t) ∈ E^n is a new state variable. Due to the results of Reference [31], the transformation (9) is invertible. For the sake of the further analysis, we partition the matrix H_u(t) into blocks as: where the matrices H_{u,1}(t) and H_{u,2}(t) have the dimensions (r − q) × (n − r) and (r − q) × q, respectively. Quite similarly to the results of References [15,29], we have the following assertion.

Proposition 1.
Let the assumptions AI-AVI be valid. Then, the state transformation (9) converts the system (1) to the system and the functionals (2), (3) to the functionals where The matrices D u 1 (t) and D v (t) are symmetric positive semi-definite, while the matrix D u 2 (t) is symmetric positive definite for all t ∈ [0, t f ]. The matrices C u 1 and C v 1 are symmetric positive semi-definite. Moreover, the matrix-valued functions

Remark 2.
In the new (transformed) game with the dynamics (11) and the functionals (12), (13), the player "u" aims to minimize the functional (12) by a proper choice of the control u(t), while the player "v" aims to minimize the functional (13) by a proper choice of the control v(t). Since in the game (1)-(3) both players perfectly know the current state Z(t), due to the invertibility of the transformation (9), in the game (11)-(13) both players also perfectly know the current state z(t). Like the game (1)-(3), the new game (11)-(13) is also singular.
Consider the set U_z of all functions G_u(z, t): E^n × [0, t_f] → E^r that are measurable w.r.t. t ∈ [0, t_f] for any fixed z ∈ E^n and satisfy the local Lipschitz condition w.r.t. z ∈ E^n uniformly in t ∈ [0, t_f]. In addition, consider the set V_z of all functions G_v(z, t): E^n × [0, t_f] → E^s with the same properties. Definition 3. By (UV)_z, we denote the set of all pairs (G_u(z, t), G_v(z, t)) of functions satisfying the following conditions: In what follows, (UV)_z is called the set of all admissible pairs of players' state-feedback controls (strategies) in the game (11)-(13).
Let Z(t), t ∈ [0, t_f], be the solution of the initial-value problem (1) generated by this pair of the players' controls, and let z(t), t ∈ [0, t_f], be the solution of the initial-value problem (11) generated by the corresponding pair of the players' controls.
Proof. The statements of the corollary directly follow from Definitions 1 and 3 and Proposition 1.
For any given G u (z, t) ∈ U z and G v (z, t) ∈ V z , consider the sets Consider the sequence of the pairs .
Proof of the lemma is presented in Appendix A.

Corollary 2.
Let the assumptions AI-AVI be valid. Then, the optimal values J*_u and J*_v of the functionals (2) and (3) in the game (1)-(3) coincide with the optimal values of the corresponding functionals (12) and (13) in the game (11)-(13). Proof. The statement of the corollary is a direct consequence of the expressions for the optimal values of the functionals (see Definitions 2 and 4) and the proof of Lemma 1 (see Equations (A2) and (A3)-(A6) in Appendix A).

Remark 3.
Due to Lemma 1 and Corollary 2, the initially formulated differential game (1)-(3) is equivalent to the new (transformed) differential game (11)-(13). Moreover, due to Proposition 1, the new game is simpler than the initial game. Due to this observation, in the rest of this paper, we consider the game (11)-(13) as the original game. We call this game the Singular Differential Game (SDG).

Regularization of the SDG
We are going to solve the SDG by the regularization method. This method consists in replacing the SDG with a regular differential game, which depends on a small positive parameter ε; when we formally set ε = 0, the new (regular) game becomes the SDG. Based on this observation, we construct the regular differential game associated with the SDG as follows. We keep for this regular game the dynamic Equation (11) and the cost functional (13) of the player "v", while we construct the functional of the player "u" in the new game to be of the regular form where Due to (4) and (23), the matrix R_uu(t) + Λ(ε) is positive definite for any t ∈ [0, t_f] and any ε ≠ 0. In addition, it is seen that, for ε = 0, the functional (22) becomes the functional (12).
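The matrix Λ(ε) in (23) was lost in extraction; presumably it places the weight ε² on the singular control coordinates, i.e., Λ(ε) = diag(O_q, ε² I_{r−q}). Under that assumption, the following sketch illustrates why the regularization works: the singular weight matrix becomes positive definite for any ε ≠ 0.

```python
import numpy as np

def regularized_weight(R_hat, r, eps):
    # R_uu(t) = diag(R_hat, 0) with R_hat of size q x q (positive definite);
    # Lambda(eps) = diag(0_q, eps^2 I_{r-q}) is the presumed form of (23).
    q = R_hat.shape[0]
    R = np.zeros((r, r))
    R[:q, :q] = R_hat
    Lam = np.zeros((r, r))
    Lam[q:, q:] = eps**2 * np.eye(r - q)
    return R + Lam

R_hat = np.array([[2.0]])                 # q = 1, positive definite
R0 = regularized_weight(R_hat, r=3, eps=0.0)
Re = regularized_weight(R_hat, r=3, eps=0.1)

assert np.linalg.matrix_rank(R0) == 1      # singular weight: the game is singular
assert np.all(np.linalg.eigvalsh(Re) > 0)  # regularized weight: positive definite
```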

Remark 4.
Since the parameter ε > 0 is small, the game (11), (13), (22) is a differential game with a partial cheap control of the player "u" in its functional (22). In what follows, the game (11), (13), (22) is called the Partial Cheap Control Differential Game (PCCDG). Zero-sum differential games with a complete/partial cheap control of at least one player were studied in many works (see, e.g., Refs. [7,8,15,16,32] and references therein). Non-zero-sum differential games with a complete cheap control of one player were considered only in a few works (see References [4,24,30]). However, to the best of our knowledge, a non-zero-sum differential game with a partial cheap control of at least one player has not yet been considered in the literature. Since, for any ε > 0, the weight matrix of the control cost of the player "u" in the functional (22) is positive definite, the PCCDG is a regular differential game. The set of all admissible pairs of players' state-feedback controls (strategies) in this game is the same as in the SDG; namely, it is (UV)_z.

Nash Equilibrium Solution of the PCCDG
Let us consider the following terminal-value problem for the set of two Riccati-type differential equations with respect to the symmetric matrix-valued functions K_u(t) and K_v(t): where By virtue of the results of References [3,4], we have the following assertion.
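The coupled Riccati-type equations (24)-(26) did not survive extraction. As a hedged illustration of the kind of terminal-value problem involved, the following sketch integrates backward the coupled Riccati equations of a scalar regular two-player LQ Nash game (no cross control costs; all numerical data are hypothetical, not taken from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Scalar regular LQ Nash game: dx/dt = a x + b1 u1 + b2 u2,
# J_i = c_i x(tf)^2 + int (q_i x^2 + r_i u_i^2) dt, with feedback Nash
# controls u_i = -(b_i / r_i) k_i(t) x.  All numbers are illustrative.
a, b1, b2 = 0.5, 1.0, 1.0
q1, q2, r1, r2 = 1.0, 2.0, 1.0, 1.0
c1, c2, tf = 1.0, 0.5, 2.0
s1, s2 = b1**2 / r1, b2**2 / r2

def rhs(t, k):
    k1, k2 = k
    # Coupled Riccati equations of the state-feedback Nash equilibrium:
    # dk_i/dt + 2 (a - s_j k_j) k_i - s_i k_i^2 + q_i = 0.
    dk1 = -(2.0 * (a - s2 * k2) * k1 - s1 * k1**2 + q1)
    dk2 = -(2.0 * (a - s1 * k1) * k2 - s2 * k2**2 + q2)
    return [dk1, dk2]

# Integrate backward in time from the terminal conditions k_i(tf) = c_i.
sol = solve_ivp(rhs, [tf, 0.0], [c1, c2], rtol=1e-9)
k1_0, k2_0 = sol.y[:, -1]
assert k1_0 > 0 and k2_0 > 0   # gains stay positive on [0, tf]
```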

Asymptotic Analysis of PCCDG
We begin this analysis with the asymptotic solution of the terminal-value problem (24)-(26). First of all, let us represent the matrices S_u(t, ε) and S_vu(t, ε) (see Equation (27)) in the block form. Namely, based on the block form of the matrix B_u(t) (see Equation (15)) and the block-diagonal form of the matrices R_uu(t) and Λ(ε) (see Equations (4) and (23)), we obtain: where the (n − r + q) × (n − r + q)-matrix S_{u1}(t), the (n − r + q) × (r − q)-matrix S_{u2}(t), Similarly, we have where Due to the block form of the matrix S_u(t, ε) (see Equations (30) and (31)), the right-hand sides of Equations (24) and (25) have singularities at ε = 0. To remove these singularities and to represent the set (24)-(25) in an explicit singular perturbation form, we look for the solution K_u(t, ε), K_v(t, ε), t ∈ [0, t_f], of the terminal-value problem (24)-(26) in the block form where the matrices In addition, we represent the matrices A(t), D_v(t), S_v(t), and S_uv(t) in the block form The blocks of the matrices in (35) and (36) are of the same dimensions as the corresponding blocks of the matrices in (34). Now, substitution of (17), (30), (32), and (34)-(36) into the set (24)-(25) yields, after routine matrix algebra, the following set of six Riccati-type differential equations with respect to the matrices K_{i1}(t, ε), K_{i2}(t, ε), and K_{i3}(t, ε), (i = u, v) (in this set, for simplicity, we omit the designation of the dependence of the unknown matrix-valued functions on ε): It is clear that the set of Equations (37)-(42) is equivalent to the set of Equations (24) and (25). The set (37)-(42) has the explicit singular perturbation form. To obtain the terminal conditions for the set (37)-(42), we substitute (16) and (34) into the terminal conditions (26), which yields

Zero-Order Asymptotic Solution of the Terminal-Value Problem (37)-(42), (43): Formal Construction
To construct this asymptotic solution, we adapt the Boundary Function Method (Ref. [33]). Namely, we seek the zero-order asymptotic solution K_{ij,0}(t, ε), (i = u, v), (j = 1, 2, 3), of the problem (37)-(42), (43) in the form where the terms with the superscript "o" are the so-called outer solution terms, while the terms with the superscript "b" are boundary-layer correction terms in a left-hand neighborhood of the boundary t = t_f; the variable τ is called the stretched time and, for any fixed t ∈ [0, t_f), τ → −∞ as ε → +0. Equations and conditions for the terms of the zero-order asymptotic solution are obtained by substituting (44) into the problem (37)-(42), (43) instead of K_{ij}, (i = u, v), (j = 1, 2, 3), and equating coefficients of the same powers of ε on both sides of the resulting equations, separately for the coefficients depending on t and on τ.
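The lost display (44) presumably has the standard Boundary Function Method form:

```latex
K_{i_j,0}(t,\varepsilon) \;=\; K^{o}_{i_j,0}(t) \;+\; K^{b}_{i_j,0}(\tau),
\qquad \tau = \frac{t - t_f}{\varepsilon},
\qquad i \in \{u, v\}, \; j \in \{1, 2, 3\},
```

with the boundary-layer correction terms required to vanish as τ → −∞.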
Let us start the construction of the zero-order asymptotic solution with obtaining the terms K^b_{i1,0}(τ), (i = u, v). For these terms, we have the differential equations Following the Boundary Function Method, we require that K^b_{i1,0}(τ) → 0 as τ → −∞, (i = u, v). Subject to this requirement, the equations in (45) yield the unique solutions We proceed with obtaining the terms of the outer solution. Using the equality S_{u3}(t, 0) = I_{r−q}, t ∈ [0, t_f], we derive the following set of equations and conditions for these terms: Equation (49) yields the unique symmetric positive definite solution where the superscript "1/2" denotes the unique symmetric positive definite square root of the corresponding symmetric positive definite matrix. Substituting (53) into (52), we obtain, after some rearrangement, the Lyapunov algebraic equation with respect to the matrix K^o_{v3,0}(t): Since the matrix D_{u2}(t)^{1/2} is symmetric positive definite and the matrix D_{v3}(t) is symmetric, then, by virtue of the results of Reference [34], Equation (54) has the unique symmetric solution Substitution of (53) into (48) yields the linear algebraic equation with respect to K^o_{u2,0}(t), whose solution is where the superscript "−1/2" denotes the inverse of the unique symmetric positive definite square root of the corresponding symmetric positive definite matrix. Similarly, substituting (53) and (56) into (51) and solving the resulting algebraic equation, we obtain K^o_{v2,0}(t). Now, we substitute (56) into (47), which yields where Further, substituting (56)-(57) into (50) and using (52), we have, after routine matrix algebra, where In what follows, we assume: AVII. The terminal-value problem (58), (60) has the solution K^o_{u1,0}(t), K^o_{v1,0}(t) in the entire interval [0, t_f]. Now, let us obtain the boundary-layer correction terms K^b_{ij,0}(τ), (i = u, v), (j = 2, 3).
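The outer-solution step above rests on two standard computations: the symmetric positive definite square root of D_{u2}(t), and a Lyapunov algebraic equation that presumably has the form S K + K S = D_{v3} with S = D_{u2}^{1/2} (the exact display was lost). A sketch with hypothetical data:

```python
import numpy as np
from scipy.linalg import sqrtm, solve_sylvester

# Hypothetical symmetric positive definite D_u2 and symmetric D_v3.
D_u2 = np.array([[4.0, 1.0], [1.0, 3.0]])
D_v3 = np.array([[1.0, 0.5], [0.5, 2.0]])

S = np.real(sqrtm(D_u2))   # unique symmetric positive definite square root
assert np.allclose(S @ S, D_u2)

# Lyapunov algebraic equation S K + K S = D_v3, solved as a Sylvester
# equation with equal coefficient matrices; since S is symmetric positive
# definite and D_v3 is symmetric, the solution K is unique and symmetric.
K = solve_sylvester(S, S, D_v3)
assert np.allclose(S @ K + K @ S, D_v3)
assert np.allclose(K, K.T)
```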
Using (46) and the equality S_{u3}(t, 0) ≡ I_{r−q}, we have for these terms the following terminal-value problem in the interval τ ∈ (−∞, 0]: This problem consists of two subproblems, which can be solved consecutively: first, the subproblem with respect to (K^b_{u2,0}(τ), K^b_{u3,0}(τ)) is solved, then the subproblem with respect to (K^b_{v2,0}(τ), K^b_{v3,0}(τ)) is solved. Let us start with the first subproblem. Using (53), (56), and the equality K^o_{u1,0}(t_f) = C_{u1} (see Equation (58)), we can rewrite the subproblem (62)-(63), (66) as: The terminal-value problem (68)-(69) can also be solved consecutively: first, the problem (69) is solved, then the problem (68) is solved. Let us observe that the differential equation in (69) is a Bernoulli-type matrix differential equation, as in Ref. [35]. Using this observation, we directly obtain the solution of the problem (69) Substituting (70) into the problem (68) and solving the obtained terminal-value problem with respect to K^b_{u2,0}(τ), we have Since the matrix D_{u2}(t_f)^{1/2} is positive definite, the solution (70)-(71) of the problem (68)-(69) (and, therefore, of the subproblem (62)-(63), (66) of the problem (62)-(67)) satisfies the inequality where c_u > 0 and β_u > 0 are some constants.
We proceed to the solution of the subproblem (64)-(65), (67). First, we solve the differential Equation (65) with the corresponding terminal condition from (67). This terminal-value problem can be rewritten as: The differential equation in (73) is a Lyapunov matrix differential equation, as in Ref. [36]. Using the results of that work, we obtain the solution of the problem (73) where, for any σ ≤ 0, the matrix-valued function Φ(τ, σ) is the solution of the following terminal-value problem: Using the positive definiteness of the matrix D_{u2}(t_f)^{1/2}, the inequality (72), and the results of Reference [33], we obtain the estimate of the matrix Φ(τ, σ) where c > 0 and 0 < β < β_u are some constants. Now, let us solve the differential Equation (64) with the corresponding terminal condition from (67). This terminal-value problem can be rewritten as: This problem yields the solution Using the inequalities (72) and (76), we directly obtain the following inequality for the matrix-valued functions K^b_{v3,0}(τ) and K^b_{v2,0}(τ) obtained above: where c_v > 0 and 0 < β_v < β are some constants.
Proof of the lemma is presented in Appendix B .
As a direct consequence of Lemma 2, we have the following two assertions.

Asymptotic Representations of the Optimal Values of the Functionals in the PCCDG
Let us represent the initial state position z 0 of the PCCDG in the block form Using the upper block x 0 of the vector z 0 and the solution K o u 1 ,0 (t), K o v 1 ,0 (t) of the terminal-value problem (58), (60) mentioned in the assumption AVII, we construct the values Corollary 5. Let the assumptions AI-AVII be valid. Then, for all ε ∈ (0, ε 0 ], the optimal values J * u,ε and J * v,ε of the functionals (22) and (13) in the PCCDG satisfy the inequalities where χ(z 0 ) > 0 is some constant independent of ε but depending on z 0 .
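The displays of (82) and of Corollary 5 did not survive extraction; based on the surrounding text, they presumably read:

```latex
J^*_{u,0} = x_0^T\, K^{o}_{u_1,0}(0)\, x_0,
\qquad
J^*_{v,0} = x_0^T\, K^{o}_{v_1,0}(0)\, x_0,
\qquad
\big| J^*_{u,\varepsilon} - J^*_{u,0} \big| \le \chi(z_0)\,\varepsilon,
\quad
\big| J^*_{v,\varepsilon} - J^*_{v,0} \big| \le \chi(z_0)\,\varepsilon,
\qquad \varepsilon \in (0, \varepsilon_0].
```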

Reduced Differential Game
To construct this game, we introduce into consideration the following block-form matrices: Consider the following finite-horizon non-zero-sum differential game with the dynamics of the form where x_r(t) ∈ E^{n−r+q} is a state variable; u_r(t) ∈ E^r and v_r(t) ∈ E^s are the controls of the game's players; B_{v1}(t) is the upper block of the matrix B_v(t) of the dimension (n − r + q) × s. The functionals of the game, to be minimized by u_r(t) and v_r(t), respectively, are and More precisely, in the game (84)-(86), the player with the control u_r(t) aims to minimize the functional (85) by a proper choice of u_r(t), while the player with the control v_r(t) aims to minimize the functional (86) by a proper choice of v_r(t). We consider this game with respect to its Nash equilibrium, and subject to the assumption that both players perfectly know the current game state. We call the game (84)-(86) the Reduced Differential Game (RDG).

Remark 5.
Since the matrices R̂_uu(t), D_{u2}(t), and R_vv(t) are positive definite in the entire interval [0, t_f], the RDG is regular. The Nash equilibrium pair of state-feedback controls in the RDG is defined quite similarly to such an equilibrium pair in the PCCDG.
By virtue of the results of References [3,4], we have the following assertion.

Proposition 3.
Let the assumptions AI-AVII be valid. Then, the RDG has the Nash equilibrium u * r (x r , t), v * r (x r , t) , where and K o u 1 ,0 (t), K o v 1 ,0 (t) , t ∈ [0, t f ] is the solution of the terminal-value problem (58), (60) mentioned in the assumption AVII.
The optimal values of the functionals (85) and (86) in the RDG coincide with the values J*_{u,0} and J*_{v,0}, respectively, given in (82).

Remark 6.
Using the block form of the matrices B 1,0 (t) and Θ uu (t) (see Equation (83)), we can represent the control u * r (x r , t) in the Nash equilibrium of the RDG as: where

Example 2
First of all, let us make two remarks which are used in this example.

Remark 8.
Due to the results of Reference [4], Propositions 2 and 3 hold also in the case where the Therefore, all the other assertions of the present paper (including Theorem 1) also are valid for such matrices.

Remark 9.
If all the coordinates of the "singular" player's control are singular (q = 0), then the upper block of the control u*_{ε,0}(z, t) (see Equation (90)) vanishes, while the lower block remains unchanged. Thus, in this case we have u *

In this example, we consider a singular non-zero-sum game, which is an extension of the singular zero-sum planar pursuit-evasion game studied in Reference [7], as well as a singular version of the non-zero-sum pursuit-evasion game analyzed in Reference [4]. Namely, we consider the following particular case of the SDG: where the player with the scalar control u(t) is the pursuer, while the player with the scalar control v(t) is the evader; the scalar state variables x(t) and y(t) are the relative lateral separation and the relative lateral velocity of the players; the controls u(t) and v(t) are the lateral accelerations of the players. Moreover, all the coefficients in the game (95)-(97) are constant, and As in the general case of the SDG, both players aim to minimize their own functionals.
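The description above implies double-integrator relative kinematics: the separation x is driven through its rate y by the players' lateral accelerations. The following minimal closed-loop simulation uses an assumed sign convention (dy/dt = v − u) and purely illustrative gains and evader maneuver; these are not the equilibrium strategies of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Relative lateral kinematics implied by (95): x is the separation, y its
# rate, driven by the players' lateral accelerations.
tf, x0, y0 = 5.0, 1.0, 0.0
v_evader = 0.2          # constant evader acceleration (hypothetical)
k1, k2 = 2.0, 3.0       # hypothetical pursuer feedback gains

def dynamics(t, s, pursue):
    x, y = s
    u = (k1 * x + k2 * y) if pursue else 0.0
    return [y, v_evader - u]

miss = {}
for pursue in (True, False):
    sol = solve_ivp(dynamics, [0.0, tf], [x0, y0], args=(pursue,), rtol=1e-8)
    miss[pursue] = abs(sol.y[0, -1])

# Feedback on the separation shrinks the terminal miss distance.
assert miss[True] < miss[False]
```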
In the rest of this example, we analyze the case where

Concluding Remarks

CR1.
In this paper, a finite-horizon two-person linear-quadratic Nash equilibrium differential game was studied. The game is singular because the weight matrices of the control costs of one player (the "singular" player) are singular in the functionals of both players. These singular weight matrices are positive semi-definite but non-zero. The weight matrix of the control cost of the other player (the "regular" player) in its own functional is positive definite.
CR2. Subject to proper assumptions, the dynamics of this game was transformed into an equivalent system consisting of three modes. The first mode is controlled directly only by the "regular" player. The second mode is controlled directly by the "regular" player and by the nonsingular coordinates of the "singular" player's control. The third mode is controlled directly by the entire controls of both players. Due to this transformation, the initially formulated game was converted into an equivalent Nash equilibrium game. The new game, while also singular, is simpler than the initially formulated game. Therefore, the new game was considered as the original one.
CR3. For this game, a novel notion of the Nash equilibrium (the Nash equilibrium sequence) was proposed. To derive the Nash equilibrium sequence in the original singular game, the regularization method was applied. This method consists in replacing the original singular game with a regular Nash equilibrium game depending on a small parameter ε > 0. This regular game becomes the original singular game if we formally set ε = 0. It should be noted that the regularization method has been widely applied in the literature to the analysis and solution of singular optimal control problems, singular H∞ control problems, and zero-sum differential games. However, in the present paper, this method was applied for the first time in the literature to the rigorous and detailed analysis and solution of a general singular linear-quadratic Nash equilibrium differential game.
CR4. The regularized game is a partial cheap control game. Complete/partial cheap control problems have been widely studied in the literature in the settings of optimal control problems, H∞ control problems, and zero-sum differential games. Non-zero-sum differential games with a complete cheap control of one player have also been considered in the literature, although only in a few works. However, in the present paper, for the first time in the literature, a non-zero-sum differential game with a partial cheap control of at least one player was analyzed.
CR5. Solvability conditions of the regularized (partial cheap control) game depend on the small parameter ε, which allowed us to analyze these conditions asymptotically with respect to ε. Using this analysis, the Nash equilibrium sequence in the original singular game was designed, and the expressions for the optimal values of the functionals were obtained.
CR6. It was established that the construction of the Nash equilibrium sequence in the original singular game and the computation of the optimal values of its functionals are based on the solution of a lower-dimensional regular Nash equilibrium differential game (the reduced game). Namely, to solve the original singular game, one has to solve the lower-dimensional regular game and to calculate, by explicit formulas, two additional gain matrices.

Conflicts of Interest:
The author declares no conflict of interest.

Appendix A. Proof of Lemma 1
We start with the proof of the lemma's first statement.
Due to the structure of the matrix M(t), we can conclude that the set of its eigenvalues consists of all the eigenvalues of the matrices N_1(t) and N_2(t) with the corresponding algebraic multiplicities. Due to the eigenvalue property of the Kronecker product of two matrices (see, e.g., Ref. [37]), the set of the eigenvalues of N_1(t) consists of all the eigenvalues of the matrix D_{u2}(t)^{1/2} with the corresponding algebraic multiplicities, i.e., all the eigenvalues of N_1(t) are real and positive for all t ∈ [0, t_f]. Similarly to the analysis of the matrix N_1(t), the sets of eigenvalues of both addends in the expression of N_2(t) consist of all the eigenvalues of the matrix D_{u2}(t)^{1/2} with the corresponding algebraic multiplicities.
Thus, all the eigenvalues of these addends are real and positive for all t ∈ [0, t_f]. Moreover, due to the symmetry of the matrix D_{u2}(t)^{1/2}, both addends in the expression of N_2(t) are symmetric matrices. Therefore, by virtue of the results of Reference [38], all the eigenvalues of N_2(t) are real and positive. Hence, all the eigenvalues of the matrix M(t) are real and positive for all t ∈ [0, t_f]. This completes the proof of the lemma.
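The Kronecker-product eigenvalue property used in this proof (the eigenvalues of P ⊗ Q are all products of an eigenvalue of P with an eigenvalue of Q) is easy to check numerically; here P plays the role of a symmetric positive definite factor like D_{u2}(t)^{1/2}:

```python
import numpy as np

rng = np.random.default_rng(1)
# Symmetric positive definite factor, as for D_u2(t)^{1/2} in the proof.
M = rng.standard_normal((3, 3))
P = M @ M.T + 3.0 * np.eye(3)
Q = np.eye(2)                      # any matrix works; identity keeps it simple

# Eigenvalues of P kron Q are all products lambda_i(P) * mu_j(Q).
eig_kron = np.sort(np.linalg.eigvals(np.kron(P, Q)).real)
eig_prod = np.sort(np.outer(np.linalg.eigvals(P).real,
                            np.linalg.eigvals(Q).real).ravel())
assert np.allclose(eig_kron, eig_prod)
assert np.all(eig_kron > 0)        # hence all eigenvalues are real positive
```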