Recurrent Neural Network Models Based on Optimization Methods

Many researchers have addressed problems involving time-varying (TV) general linear matrix equations (GLMEs) because of their importance in science and engineering. This research addresses the problem of solving TV GLMEs using the zeroing neural network (ZNN) design. Five new ZNN models based on novel error functions arising from gradient-descent and Newton optimization methods are presented and compared with each other and with the standard ZNN design. Pseudoinversion is involved in four of the proposed ZNN models, while three of them are related to Newton's optimization method. Heterogeneous numerical examples show that all models successfully solve TV GLMEs, although their effectiveness varies and depends on the input matrices.


Introduction and Preliminaries
Novel zeroing neural network (ZNN) dynamical systems are presented using various error functions that arise from gradient-descent and Newton optimization methods. The introduced models are theoretically investigated and compared with each other and with the standard ZNN design. The proposed ZNN models are applied to solving time-varying (TV) general linear matrix equations (GLMEs) with arbitrary TV real input matrices. The TV GLME problem is expressed as the matrix equation A(t)Y(t)C(t) = B(t), in which Y(t) is the unknown matrix. The solutions Y(t) to Equation (1) are extremely useful in science as well as in a wide range of engineering applications, including optimizing the manipulability of a robotic arm's joint angles [1], making highly accurate predictions for missing quality-of-service data [2], tracking robotic motion and mobile objects [3], and obtaining the weights of feedforward neural networks [4]. This paper presents and contrasts six ZNN models based on two new error functions for solving TV GLMEs that involve arbitrary matrices A(t), B(t), and C(t). It is worth mentioning that matrix pseudoinversion is involved in four of the proposed ZNN models, while three are related to Newton's optimization method.
The ZNN concept was introduced by Zhang et al. in [5] for determining online solutions to TV matrix inversion problems. Most ZNN-based dynamical systems belong to the class of recurrent neural networks (RNNs) designed to approximate zeros of matrix, vector, or scalar equations. Besides ZNN, there is another type of RNN, known as the gradient neural network (GNN). As an outcome, numerous significant research results have appeared in the scientific literature. ZNN and GNN dynamics for solving linear matrix equations of the form (1) were described and compared in [6]. Two Zhang functions and the corresponding ZNN dynamics for solving the TV GLME A(t)Y(t) = B(t) were presented and investigated theoretically and numerically in [7]. A gradient-based neural dynamics for solving the matrix equation AXB = D was investigated in [8]. Gradient and zeroing dynamical systems aimed at solving systems of linear equations in both the TV and constant settings are investigated in [9][10][11]. GNN neural-dynamic schemes are predominantly designed for solving equations with time-invariant coefficient matrices [11]. A global overview of zeroing dynamical systems (ZND), including ZND aimed at solving linear matrix equations, was presented in [12]. Nonlinear zeroing neural dynamical systems with finite-time convergence and noise tolerance for solving linear matrix equations in time-varying scenarios were proposed and investigated in [13,14]. A ZNN design whose dynamics are based on a varying gain parameter and which is suitable for solving TV GLMEs was proposed in [15]. A very popular class of ZNN dynamical systems is represented by ZNN models with finite-time or predefined convergence time. A finite-time convergent ZNN dynamics with a robust activation was presented in [14] and applied in some engineering fields. Xiao in [16] proposed and investigated finite-time dynamical systems for solving the time-dependent complex matrix equation AX = B.
A desirable feature of ZNN dynamics is the ability to neutralize different types of noise that appear during the evolution of dynamical systems. A noise-tolerant ZNN design with nonlinear activation that involves an appropriate integral term was proposed and investigated in [17]. A finite-time convergent zeroing neural design in the complex domain, based on Tikhonov regularization of the matrix A and aimed at solving time-dependent matrix equations AX = D, was considered in [18]. Numerous areas of numerical linear algebra are studied by applying ZNN dynamics in the time-varying case. Solving tensor and matrix inversion on TV arrays [19], generalized eigenvalue problems [20], generalized inversion [21], matrix equations and matrix decompositions [3], quadratic optimization problems [22], and approximations of various matrix functions [23,24] are predominant applications of ZNNs. ZNN models based on two fuzzy-adaptive activations for solving specific TV linear matrix equations (LMEs) were introduced in [25,26]. The hybrid models introduced in [27] are defined using suitable combinations of GNN and ZNN neural dynamics for solving the matrix equations BX = D or CX = D. Apart from different types of activation in dynamical systems, numerous studies have been devoted to modifying the dynamic evolution of ZNN. High-order error function designs and high-order ZNNs, which are defined using powers of standard Zhang functions and adapted for solving LMEs, were proposed in [28].
Determining an error function (or Zhang function) E(t), based on a suitable expression that reflects the problem being solved, is the first step in developing ZNN dynamics [29]. The second stage adopts the dynamical evolution Ė(t) = −λF(E(t)), in which Ė(t) ∈ R^{m×n} denotes the time derivative of E(t) ∈ R^{m×n}, the real number λ > 0 represents the scaling parameter required for accelerating the convergence speed, and F(·) : R^{m×n} → R^{m×n} denotes the elementwise application of an appropriately defined odd and increasing activation function to E(t). The target of our research is the linear ZNN design Ė(t) = −λE(t). The system of first-order differential Equations (3), which defines the ZNN evolution law, has the analytical solution E(t) = E(0)e^{−λt} [12]. Thus, E(t) in the standard ZNN converges exponentially to the zero matrix with the exponential convergence rate λ as t → +∞.
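As an illustration of the evolution law above, the following sketch (hypothetical 2 × 2 data, not from the paper) integrates the linear ZNN flow numerically and checks it against the closed form E(t) = E(0)e^{−λt}:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear ZNN evolution law: dE/dt = -lam * E, applied entrywise to a
# matrix error function E(t).  Its analytical solution is
# E(t) = E(0) * exp(-lam * t), so ||E(t)||_F decays exponentially.
lam = 10.0                      # ZNN scaling (convergence) parameter
E0 = np.array([[1.0, -2.0],
               [0.5,  3.0]])    # arbitrary (hypothetical) initial error

def znn_rhs(t, e_vec):
    # vec(E) obeys the same linear law as E itself
    return -lam * e_vec

sol = solve_ivp(znn_rhs, (0.0, 1.0), E0.flatten(), rtol=1e-10, atol=1e-12)
E_final = sol.y[:, -1].reshape(2, 2)
E_exact = E0 * np.exp(-lam * 1.0)

# numerical solution matches the closed form E(0) e^{-lam t}
assert np.allclose(E_final, E_exact, atol=1e-6)
```

Larger λ only speeds up the decay; the trajectory shape is unchanged.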
The ZNN model for solving the GLME (1) is defined using the matrix-valued Zhangian E(t) = A(t)Y(t)C(t) − B(t). The main drawback of the basic ZNN design (3) and (4), based on the standard error function E(t), is its limited applicability to cases in which the input matrices meet certain requirements. Motivated by that shortcoming, in this research we propose two new error functions derived from optimization methods. The error function E_G(t) is defined by zeroing the gradient of the Frobenius matrix norm of E(t), while the error function E_N(t) is defined by the Newton step corresponding to the Frobenius matrix norm of E(t). That is why we use the terms gradient zeroing neural network (GZNN) and Newton zeroing neural network (NZNN) for the dynamical systems defined by the new error functions E_G and E_N, respectively.
Moreover, we noticed that the analysis of the complete set of solutions to Equation (1) has not been investigated so far. In previous research, the solution based on invertible matrices A and C has been analyzed, which corresponds to the solution Y(t) = A^{-1}(t)B(t)C^{-1}(t) [6,26]. Our idea is to look for least-squares solutions, under weaker assumptions that allow the coefficient matrices to be singular.
The following highlights are the key results of this work:
• A new and unique ZNN development, based on two novel error functions E_G(t) and E_N(t), is used to address the problem of solving TV GLMEs (1).
• A precise convergence analysis of the proposed ZNNs is presented and the set of solutions is defined using known results from linear algebra.
• Six numerical examples confirm that all considered models can solve TV GLMEs, although their effectiveness varies depending on the input matrices.
The following is the structure of the presentation. Section 2 presents the motivation and describes the main results. Section 3 defines six ZNN dynamical systems and analyses their solvability, general solution, sets of all solutions, and convergence properties. Section 4 introduces and analyses simulation results obtained through six numerical examples that involve input matrices of different types. Finally, in Section 5, there are closing remarks.
The paper follows classical notation: I_n ∈ R^{n×n} signifies the identity matrix of order n; I_{m,n} ∈ R^{m×n} signifies the m × n matrix with ones on the main diagonal and zeros elsewhere; 1_n and 1_{m,n}, respectively, signify a vector in R^n and a matrix in R^{m×n} consisting of ones; O_{m,n} ∈ R^{m×n} signifies the m × n zero matrix; ⊗ designates the Kronecker product; vec(·) stands for vectorization; ⊙ means the Hadamard product; ‖·‖_F signifies the Frobenius norm; the operators (·)^T, (·)^{-1} and (·)^†, respectively, signify matrix transposition, inversion and pseudoinversion.

Motivation and Description of Main Results
The traditional ZNN dynamics for solving (1) is based on the error function (Zhangian) E(t) defined in (4) and follows the dynamic flow (5). The standard ZNN design (5) developed to solve the GLME (1) requires invertible matrices A and C to force E(t) to zero, which yields the unique solution Y(t) = A^{-1}(t)B(t)C^{-1}(t) [26,30]. Our goal is to remove this restriction and propose dynamical evolutions based on error functions that tend to zero in more general cases, including singular coefficient matrices A(t) and C(t). This research aims to find new neurodynamical models based on different error functions. Our motivation for defining new Zhang functions originates in gradient-descent methods for solving the unconstrained minimization problem (6), in which f : R^n → R is a continuously differentiable function bounded from below. The gradient-descent (GD) iterative scheme for solving (6) is defined by x_{k+1} = x_k − α_k g_k, whereby g_k = ∇f(x_k) is the gradient vector and α_k is an appropriately defined step size. The convergence result lim_{k→∞} ‖g_k‖ = 0 for the iterations (7) is widely known in the literature [31]. The Newton method with line search for solving (6) is defined by x_{k+1} = x_k − α_k G_k^{-1} g_k, where G_k^{-1} denotes the inverse of the Hessian matrix G_k = ∇²f(x_k). The new error functions E_G(t) and E_N(t) are defined by analogy with the gradient-descent and Newton methods for nonlinear optimization.
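The two schemes can be sketched on a convex quadratic (hypothetical test problem; for a quadratic the Hessian is constant, so a single full Newton step already reaches the minimizer):

```python
import numpy as np

# Gradient descent x_{k+1} = x_k - a*g_k versus the Newton scheme
# x_{k+1} = x_k - G^{-1} g_k on f(x) = 0.5 x^T Q x - b^T x,
# where g = Qx - b and the Hessian is G = Q.
Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite Hessian
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(Q, b)           # unique minimizer

def grad_descent(x, steps=200, alpha=0.2):
    for _ in range(steps):
        x = x - alpha * (Q @ x - b)      # g_k = grad f(x_k)
    return x

def newton(x, steps=5):
    G_inv = np.linalg.inv(Q)             # constant Hessian inverse
    for _ in range(steps):
        x = x - G_inv @ (Q @ x - b)      # full Newton step (alpha = 1)
    return x

assert np.allclose(grad_descent(np.zeros(2)), x_star, atol=1e-6)
assert np.allclose(newton(np.zeros(2)), x_star, atol=1e-10)
```

The contrast mirrors the text: GD needs many small steps scaled by α_k, while the Newton correction rescales the gradient by the inverse Hessian.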
The residual matrix E(t) = A(t)Y(t)C(t) − B(t) defines the goal function ε_Y(t) = (1/2)‖A(t)Y(t)C(t) − B(t)‖_F², which is forced toward the zero matrix. The gradient of the Frobenius-norm goal function is ∇ε_Y = A^T(AYC − B)C^T. Following the iterations defined in [32], the gradient-descent iterations for minimizing the goal function ε_Y are defined by Y_{k+1} = Y_k − α_k A^T(AY_kC − B)C^T, taking an arbitrary initial matrix Y_0 ∈ R^{n×k}. The gradient neural network (GNN) dynamic evolution minimizes ‖AY(t)C − B‖_F² and is based on the direct proportionality between the time derivative Ẏ(t) and the negative gradient of the goal function ε_Y(t) [33,34]. Our goal is to define a ZNN design for solving the TV GLME (1) based on the error function E_G(t) = A^T(t)(A(t)Y(t)C(t) − B(t))C^T(t), which initiates the following dynamics, termed GZNN dynamics: Ė_G(t) = −λE_G(t). Further, the Hessian of ε_Y in the case C := I is equal to ∇²ε_Y = ∂²ε_Y/∂Y² = A^TA, while in the case A := I the Hessian of ε_Y is equal to ∇²ε_Y = ∂²ε_Y/∂Y² = CC^T. Following the results from [32], the Newton iterations with line search in the time-invariant case can be defined using these Hessians. Consider the real numbers δ_{A^TA} and δ_{CC^T}, defined in terms of a small real regularization parameter ε > 0. So, we define the Zhangian in the case rank(A) ≤ n ≤ m, rank(C) ≤ k ≤ h by applying the regularized inverses (A^T(t)A(t) + εI)^{-1} and (C(t)C^T(t) + εI)^{-1} to E_G(t) from the left and the right, respectively. Given the well-known limit representation of the Moore-Penrose inverse, restated in [35], the error function E_N(t) = A^†(t)(A(t)Y(t)C(t) − B(t))C^†(t) is obtained as ε → 0. The ZNN design for solving E_N(t) = 0, based on the error function E_N(t) and termed NZNN, is defined by Ė_N(t) = −λE_N(t). The dynamics (14) forces E_N(t) to zero, which (in view of (13)) coincides with the zero of E_G(t).
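A small numerical sanity check of the two error functions (hypothetical random test matrices; the explicit Newton form E_N = A^†(AYC − B)C^† is the one reconstructed above):

```python
import numpy as np

# At the pseudoinverse solution Y = A^+ B C^+, the gradient error
# E_G = A^T (A Y C - B) C^T and the (assumed) Newton error
# E_N = A^+ (A Y C - B) C^+ both vanish, even for an inconsistent B
# for which the standard Zhangian E cannot be forced to zero.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank 2, 5x3
C = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))  # rank 2, 4x6
B = rng.standard_normal((5, 6))                                # generally inconsistent

Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
Y = Ap @ B @ Cp                     # pseudoinverse (least-squares) solution
E = A @ Y @ C - B                   # standard Zhangian: nonzero here
E_G = A.T @ E @ C.T                 # gradient error: vanishes
E_N = Ap @ E @ Cp                   # Newton error: vanishes

assert np.linalg.norm(E, 'fro') > 1e-3
assert np.allclose(E_G, 0, atol=1e-8)
assert np.allclose(E_N, 0, atol=1e-8)
```

This is precisely the advantage claimed for E_G and E_N: their zeros exist even when E(t) itself cannot be zeroed.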

Solvability and Solutions of Proposed Dynamical Systems
In this subsection we investigate conditions for solvability of the matrix equations E(t) = 0, E G (t) = 0 and E N (t) = 0 and their sets of solutions.
Lemma 1 defines conditions for the solvability and the solutions of the linear matrix equation AYC = B. The result is obtained as a consequence of [35] (p. 52, Theorem 1) in the time-varying case. Lemma 1 ([35] p. 52, Theorem 1). For arbitrary A(t) ∈ C^{m×n}, B(t) ∈ C^{p×q}, D(t) ∈ C^{m×q}, the general linear matrix equation A(t)X(t)B(t) = D(t) is solvable if and only if A(t)A(t)^†D(t)B(t)^†B(t) = D(t), in which case its general solution is given by X(t) = A(t)^†D(t)B(t)^† + Q(t) − A(t)^†A(t)Q(t)B(t)B(t)^† for arbitrary Q(t) ∈ C^{n×p}.
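Lemma 1 can be verified numerically (a sketch with hypothetical random matrices; Bm plays the role of the right factor B in the lemma):

```python
import numpy as np

# Numerical check of Lemma 1 (Penrose): A X B = D is solvable iff
# A A^+ D B^+ B = D, and then every solution has the form
# X = A^+ D B^+ + Q - A^+ A Q B B^+ for an arbitrary Q.
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
Bm = rng.standard_normal((5, 6))              # the "B" of Lemma 1
X_true = rng.standard_normal((3, 5))
D = A @ X_true @ Bm                           # consistent by construction

Ap, Bp = np.linalg.pinv(A), np.linalg.pinv(Bm)
assert np.allclose(A @ Ap @ D @ Bp @ Bm, D)   # solvability criterion holds

Q = rng.standard_normal((3, 5))               # arbitrary parameter matrix
X = Ap @ D @ Bp + Q - Ap @ A @ Q @ Bm @ Bp
assert np.allclose(A @ X @ Bm, D)             # every such X solves A X B = D
```

Varying Q sweeps out the whole solution set, which is how Θ_Q below should be read.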
The set Θ_Q is termed the solution set in the rest of the paper.
Lemma 2 restates results from [8] (Theorem 2.2). These results are used in Remark 1 to define the solution that will be used in the numerical comparison of the defined models.

Lemma 2 ([8] Theorem 2.2). Assume constant matrices as in (15). Then the activation state variables matrix Y(t) of the dynamical system (20) is convergent as t → +∞ and attains the equilibrium state (21) for every initial state matrix Y(0) ∈ R^{n×p}.

Remark 1. Theorem 2.2 in [8] claims that the activation state variables matrix Y(t) of the GNN dynamics (20) for solving (15) in the time-invariant case converges as t → +∞ to the equilibrium state defined in (21). Accordingly, we will use this equilibrium matrix for the comparison of the proposed models in the performed numerical experiments.
Theorem 1 investigates conditions for the solvability of the equations E(t) = 0, E_G(t) = 0 and E_N(t) = 0, and the solution sets of these equations.

Theorem 1. Consider arbitrary smooth TV matrices A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h}. The following statements are valid.
(a) The equation E(t) = 0 is solvable if and only if the condition (16) holds, in which case its general solution is given by (17) and its solution set by (18).
(b) The equation E_G(t) = 0 is always consistent, its general solution is given by (17), and its solution set is defined in (18).
(c) The equation E_N(t) = 0 is always solvable and its solution set is equal to (18).

Proof. (a) The error function E(t) in the traditional ZNN (TZNN) model is defined by (4) and solves the equation E(t) = 0. This part of the proof follows from known results on the solvability and the general solution to the matrix Equation (15), given in Lemma 1, and its application to the matrix equation A(t)Y(t)C(t) = B(t).
(b) On the other hand, according to Lemma 1, the error function E_G(t) vanishes exactly when A^T(t)A(t)Y(t)C(t)C^T(t) = A^T(t)B(t)C^T(t). As a consequence, the matrix equation E_G(t) = 0 is consistent, its general solution is given by (17), and its solution set is defined in (18). The consistency is confirmed by the identity (A^TA)(A^TA)^†(A^TBC^T)(CC^T)^†(CC^T) = A^TBC^T, which holds since (A^TA)(A^TA)^†A^T = A^T and C^T(CC^T)^†(CC^T) = C^T. In addition, the general solution to E_G(t) = 0 is given by (17). Indeed, using Lemma 1, the general solution to E_G(t) = 0 is equal to Y(t) = A^†(t)B(t)C^†(t) + Q(t) − A^†(t)A(t)Q(t)C(t)C^†(t).
(c) In the case rank(A) ≤ n ≤ m, rank(C) ≤ k ≤ h, the Zhangian E_N(t) becomes the zero matrix in the case A^†(t)A(t)Y(t)C(t)C^†(t) = A^†(t)B(t)C^†(t). The GLME E_N(t) = 0 is always solvable and its solution set is equal to (18). This statement follows from the fact that A^†A and CC^† are orthogonal projectors. Following the general results from [35] (p. 52, Theorem 1), it is straightforward to verify that (18) is the general set of solutions to E_N(t) = 0.

Remark 2.
The general conclusion is that the neural dynamics (12) forces the residual matrix E_G(t) toward the zero matrix. In this way, we explain one of the main advantages of the proposed models: the matrix equation E(t) = 0 is solvable under the condition (16), while the equations E_G(t) = 0 and E_N(t) = 0 are always consistent. In view of (18) and (23), the general solutions to all three equations E(t) = 0, E_G(t) = 0 and E_N(t) = 0 are identical.

Corollary 2. Let us consider the solution set Λ generated by all possible initial states Y(0) of the form (22). The following statements are valid. (b) Each solution Ỹ(t) ∈ Λ, generated by an initial state Y(0) of the form (22), is a least-squares solution to (1). (c) The unique solution Ỹ_0(t) ∈ Λ produces the best-approximate (i.e., the minimal-norm least-squares) solution A^†BC^† to (1).
Proof. Part (a) follows from Lemma 1.
(b) This follows from the results in [36,37]. (c) This follows from the known result, originating in [36,37], that the minimal-norm least-squares solution to (1) is A^†(t)B(t)C^†(t). The proof is complete.
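Corollary 2(c) can be checked numerically (hypothetical rank-deficient test matrices; the parametrization of least-squares solutions follows the solution set (18)):

```python
import numpy as np

# Y0 = A^+ B C^+ is the minimal-norm least-squares solution of
# A Y C = B: any other member Y = Y0 + Q - A^+ A Q C C^+ of the
# solution set achieves the same residual but a larger Frobenius norm.
rng = np.random.default_rng(2)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))  # rank 2
C = rng.standard_normal((3, 2)) @ rng.standard_normal((2, 5))  # rank 2
B = rng.standard_normal((6, 5))

Ap, Cp = np.linalg.pinv(A), np.linalg.pinv(C)
Y0 = Ap @ B @ Cp
res0 = np.linalg.norm(A @ Y0 @ C - B, 'fro')

for _ in range(5):
    Q = rng.standard_normal((4, 3))
    Y = Y0 + Q - Ap @ A @ Q @ C @ Cp          # another least-squares solution
    res = np.linalg.norm(A @ Y @ C - B, 'fro')
    assert np.isclose(res, res0, atol=1e-8)    # identical residual norm
    assert np.linalg.norm(Y, 'fro') >= np.linalg.norm(Y0, 'fro') - 1e-10
```

The norm inequality holds because the correction term is Frobenius-orthogonal to Y0.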

Various ZNN Models for Solving TV GLMEs
The current section defines and analyses six ZNN expansions for calculating online solutions of TV GLMEs: TZNN, which is based on the traditional ZNN approach; GZNN, based on the gradient ZNN approach; NZNNV1, NZNNV2 and NZNNV3, which are based on Newton's optimization method; and DZNN, based on the ZNN approach for finding the direct solution produced with the use of the pseudoinverse. The TZNN and GZNN models provided here cover all potential scenarios of arbitrary TV real input matrices A(t), B(t), C(t) for solving the TV GLME (1).

Traditional ZNN Model
Considering smooth arbitrary TV matrices A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h}, the error function in the traditional ZNN (TZNN) model is established as E(t) = A(t)Y(t)C(t) − B(t) from (4), under conditions requiring square and nonsingular A(t) and C(t). It is important to note that the desired solution is then Y(t) = A^{-1}(t)B(t)C^{-1}(t). Therefore, considering the linear ZNN flow (2), the following implicit dynamics (28) can be obtained for {A, B, C} ∈ D, which is just the linear ZNN model (8) in [6]. Using the Kronecker product in conjunction with vectorization, the dynamics (28) is modified and, as a result, the following ZNN model in vector form (29) is obtained. The model (29) will be termed the TZNN model. The solution to the dynamical system (29) can be approximated using a standard Matlab ode solver. If C^T(t) ⊗ A(t) is a nonsingular mass matrix, the exponential convergence of the TZNN design (29) to the exact TV solution of the TV GLME (1) is proved in [6] (Theorem 1).
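A minimal Python sketch of the vectorized TZNN dynamics (the Matlab ode solvers mentioned above are replaced by SciPy's solve_ivp; the smooth nonsingular test matrices and the target Y*(t) are hypothetical):

```python
import numpy as np
from scipy.integrate import solve_ivp

# From dE/dt = -lam*E with E = A(t) Y(t) C(t) - B(t) one obtains
#   (C^T kron A) vec(dY/dt) = vec(dB/dt - dA/dt Y C - A Y dC/dt - lam*E).
lam = 10.0
A  = lambda t: np.array([[2 + np.sin(t), 0.3], [0.1, 2 + np.cos(t)]])
dA = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
C  = lambda t: np.array([[3 + np.cos(t), 0.2], [0.0, 3 + np.sin(t)]])
dC = lambda t: np.array([[-np.sin(t), 0.0], [0.0, np.cos(t)]])
Ys  = lambda t: np.array([[np.sin(t), 1.0], [0.5, np.cos(t)]])   # known target
dYs = lambda t: np.array([[np.cos(t), 0.0], [0.0, -np.sin(t)]])
Bf  = lambda t: A(t) @ Ys(t) @ C(t)
dB  = lambda t: (dA(t) @ Ys(t) @ C(t) + A(t) @ dYs(t) @ C(t)
                 + A(t) @ Ys(t) @ dC(t))

def tznn(t, y):
    Y = y.reshape((2, 2), order='F')          # vec is column stacking
    E = A(t) @ Y @ C(t) - Bf(t)
    rhs = dB(t) - dA(t) @ Y @ C(t) - A(t) @ Y @ dC(t) - lam * E
    M = np.kron(C(t).T, A(t))                 # mass matrix C^T kron A
    return np.linalg.solve(M, rhs.flatten(order='F'))

sol = solve_ivp(tznn, (0.0, 5.0), np.zeros(4), rtol=1e-8, atol=1e-10)
Y_end = sol.y[:, -1].reshape((2, 2), order='F')
assert np.allclose(Y_end, Ys(5.0), atol=1e-3)  # tracks the exact TV solution
```

Because A(t) and C(t) stay nonsingular here, the mass matrix is invertible at every t, which is exactly the Unique-solution condition below.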
In [6], the authors vectorized the TV GLME A(t)Y(t)C(t) = B(t) into its equivalent form (C^T(t) ⊗ A(t))vec(Y(t)) = vec(B(t)). We restate the main result from [6] (Theorem 1) in a form adapted to (1) to improve the readability of the paper. This result defines the Unique-solution condition, which requires invertibility of the mass matrix C^T(t) ⊗ A(t).

Lemma 3 ([6] Theorem 1).
Assume that the time-varying coefficient matrices in (1) are smooth. If the following Unique-solution condition is satisfied for a real number δ > 0, then the state matrix Y(t) of the Zhang neural network (28), starting from an arbitrary initial state Y(0), converges exponentially to the theoretical solution of (1) with the exponential convergence rate λ.
Vectorized ZNN design (29) exploits the same mass matrix C T (t) ⊗ A(t) and requires its invertibility.
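The Unique-solution condition can be probed numerically: the singular values of a Kronecker product are the pairwise products of the factors' singular values, so C^T ⊗ A is invertible exactly when both A and C are nonsingular (hypothetical 2 × 2 test matrices):

```python
import numpy as np

# Invertibility of the mass matrix C^T kron A reduces to the
# invertibility of A and C: sigma_min(C^T kron A) = sigma_min(A)*sigma_min(C).
A = np.array([[2.0, 1.0], [0.0, 3.0]])
C = np.array([[1.0, 0.5], [0.2, 2.0]])
M = np.kron(C.T, A)

sA = np.linalg.svd(A, compute_uv=False)
sC = np.linalg.svd(C, compute_uv=False)
sM = np.linalg.svd(M, compute_uv=False)

# singular values of a Kronecker product are products of singular values
assert np.isclose(sM.min(), sA.min() * sC.min())
assert sM.min() > 0          # the mass matrix is invertible here
```

The δ > 0 in Lemma 3 is then a uniform lower bound on this smallest singular value over the whole time interval.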
Corollary 3. Let A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h} be differentiable. If the condition (26) is satisfied, then, starting from an arbitrary initial value Y(0), the TZNN model (29) converges exponentially and globally to A^{-1}(t)B(t)C^{-1}(t).

Gradient ZNN (GZNN) Model
The gradient-based ZNN (GZNN) approach considers arbitrary smooth TV matrices A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h} for solving (31), where Y(t) ∈ R^{n×k} is the desired solution. According to the general requirements of the ZNN design (2) [6,30], we consider the constraints (32). The error function in the GZNN design is defined by (33), and the time derivative of (33) gives (34). Then, considering (2), the dynamics (35) is obtained. Using vectorization in conjunction with the Kronecker product, the dynamics (35) is modified as in (36), and the replacements (37) lead to the dynamical evolution (38). The mass matrix W(t) in (38) is singular under the rank conditions rank(A) < min{m, n} or rank(C) < min{k, h}. Tikhonov regularization is a principle widely exploited to resolve such singularity problems. As a result, W(t) is regularized by adding a term βI, where β ≥ 0 is a small regularization quantity. Then the following ZNN model is used instead of (38): (40), where M(t) is a nonsingular mass matrix. The ZNN (40) is named GZNN, and it is solvable using a proper ode Matlab solver. The global convergence with exponential speed of the GZNN design (40) to the exact TV solution of the TV GLME (31) is certified in Theorem 2.
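The regularization step can be sketched as follows (the Kronecker structure (CC^T) ⊗ (A^TA) of W(t) is an assumption based on the GZNN error function; the rank-one test matrices are hypothetical):

```python
import numpy as np

# Tikhonov trick: when the mass matrix W is singular (rank-deficient
# A or C), replace it by M = W + beta*I with a small beta > 0; for a
# symmetric positive semidefinite W this M is always positive definite.
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 1)) @ rng.standard_normal((1, 3))  # rank 1
C = rng.standard_normal((2, 1)) @ rng.standard_normal((1, 5))  # rank 1
beta = 1e-8

W = np.kron(C @ C.T, A.T @ A)          # assumed GZNN mass-matrix structure
assert np.linalg.matrix_rank(W) < W.shape[0]   # singular without regularization

M = W + beta * np.eye(W.shape[0])      # regularized mass matrix
assert np.all(np.linalg.eigvalsh(M) > 0)       # positive definite, hence invertible
```

Since W is PSD, every eigenvalue of M is at least β, so the solve inside the ODE right-hand side is always well posed.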
Theorem 2. Let A(t) ∈ R m×n , C(t) ∈ R k×h and B(t) ∈ R m×h be differentiable. If the condition (32) is satisfied, starting from arbitrary initial value y(0), the vectorized GZNN model (40) converges exponentially and globally to one element from the set Θ Q defined in (18), which solves the TV GLME (31).

Proof.
To obtain the solution y(t) to the TV GLME (31), the Zhangian matrix is defined by (33), together with its time derivative (34). The ZNN expansion based on (33) leads to the model (35). Furthermore, from the derivation procedure, (40) is an equivalent form of (35), which represents the standard GZNN model. Since the regularization parameter β is involved in the definition of the mass matrix M(t) in (40), M(t) is invertible. From [6] (Theorem 1), restated in Lemma 3, the unknown matrix Y(t) in (35) converges to the exact solution as t → ∞. According to Theorem 1(b), the matrix equation E_G(t) = 0 is always solvable and its solutions are included in Θ_Q.

Newton ZNN Model (Version 1)
Applying the limit representation of the pseudoinverse [35], A^†(t) = lim_{ε→0}(A^T(t)A(t) + εI)^{-1}A^T(t), the error function (13) of the Newton ZNN version 1 (NZNNV1) model is defined by (41). So, the first variant of the Newton ZNN approach considers smooth arbitrary TV matrices A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h} for solving the GLME (42), where the desired solution is Y(t) ∈ R^{n×k}. The time derivative of E_N(t) is given by (43), and the time derivative of the pseudoinverse of P(t) ∈ R^{m×n} is given by (44). Then, considering the linear ZNN design (2), the dynamics (45) are obtained, or equivalently (46). Applying vectorization and the Kronecker product, the dynamics (46) is transformed into (47), and after suitable settings the subsequent ZNN design (49) is generated. The mass matrix W(t) in (49) is singular under the rank conditions rank(A) < min{m, n} or rank(C) < min{k, h}. The regularization principle leads to a regular mass matrix obtained by adding βI, where β ≥ 0 defines the ridge regression parameter. Accordingly, the ZNN dynamical system (51) is used instead of (49), where M(t) is the nonsingular mass matrix. The differential system (51) is termed NZNNV1, and it is solvable using ode Matlab solvers. Theorem 3 proves the exponential convergence of the NZNNV1 evolution (51) to the accurate TV solution of the TV GLME (42).
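The limit representation used here can be checked numerically (a sketch with a hypothetical rank-deficient A):

```python
import numpy as np

# A^+ = lim_{eps->0} (A^T A + eps*I)^{-1} A^T: for a rank-deficient A,
# the regularized expression approaches the Moore-Penrose inverse
# monotonically as eps shrinks.
rng = np.random.default_rng(4)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # rank 2, 6x4

def reg_pinv(A, eps):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + eps * np.eye(n), A.T)

Ap = np.linalg.pinv(A)
err = [np.linalg.norm(reg_pinv(A, e) - Ap) for e in (1e-2, 1e-4, 1e-6)]

assert err[0] > err[1] > err[2]   # error decreases as eps -> 0
assert err[2] < 1e-2              # already close to A^+ for small eps
```

Note that A^TA is singular here (rank 2 out of 4), so the εI term is what makes each solve well defined.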
Theorem 3. Let A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h} be differentiable. If the condition (16) is satisfied, the NZNNV1 model (51), starting from any initial value y(0), converges exponentially to one element of the set Θ_Q defined in (18), which solves the TV GLME (42).

Proof.
To acquire the solution y(t) to (42), the Zhangian matrix is defined as in (41), in conjunction with the initiated ZNN dynamics. The model (45) is obtained by applying the linear design to the zeroing of (41), which produces the NZNN design. From [6] (Theorem 1), restated in Lemma 3, each error matrix equation in the group (45) converges to the accurate solution as t → ∞. According to Theorem 1(c), the matrix equation E_N(t) = 0 is always solvable and its solutions are included in Θ_Q. Consequently, the solution of (45) converges to the solution of the TV GLME (42) as t → ∞. The derivation procedure of (51) confirms that it is an equivalent form of (45).

Newton ZNN Model (Version 2)
This subsection presents the second version of the Newton ZNN model for solving the TV GLME (42). The Newton ZNN version 2 (NZNNV2) model includes two additional error functions. More precisely, the error function (41) is converted as in (52). The matrix X(t) ∈ R^{n×m} in (52) is defined as the zero of the error function for finding the pseudoinverse of an arbitrary TV matrix A(t) (see [38]), while Z(t) ∈ R^{h×k} is the zero of the error function for finding the pseudoinverse of an arbitrary TV matrix C(t). The derivatives of (52) and (53) are computed, and then, considering the linear ZNN design (2), the implicit dynamics (59) are obtained. The dynamics (59) are modified using vectorization and the Kronecker product, and after suitable replacements the dynamical system (62) with the mass matrix W(t) is generated. To resolve the singularity of W(t) in the case rank(A) < min{m, n} or rank(C(t)) < min{k, h}, a nonsingular mass matrix with a regularization term βI, β ≥ 0, is used. Then the ZNN model (64) is used instead of (62), where M(t) is a nonsingular mass matrix. The NZNNV2 design (64) is solvable with an ode solver available in Matlab. The global and exponential convergence of the NZNNV2 flow (64) to the exact TV solution of (42) is verified in Theorem 4.

Theorem 4.
Let A(t) ∈ R m×n , C(t) ∈ R k×h and B(t) ∈ R m×h be differentiable. If the condition (16) is satisfied, the NZNNV2 model (64) initialized by an arbitrary initial value y(0), exponentially converges to the exact solution of the TV GLMEs (42) in the form (18).
Proof. Similar to the proof of Theorem 3.

Newton ZNN Model (Version 3)
This subsection presents the third version of the Newton ZNN model for solving the TV GLMEs of (65). The Newton approach considers the smooth arbitrary TV matrices A(t) ∈ R m×n , C(t) ∈ R k×h and B(t) ∈ R m×h for solving the following GLMEs: where Y ∈ R n×k is the desired solution. Note that γ ≥ 0 denotes the regularization parameter.
Based on the aforementioned, the error function in the Newton ZNN version 3 (NZNNV3) model is defined using the inverse of the regularized matrix P(t). Given the time derivative of the inverse of P(t), Ṗ^{-1}(t) = −P^{-1}(t)Ṗ(t)P^{-1}(t), and suitable settings, the time derivative of the error function is obtained. Then, considering the linear ZNN dynamics (2), the implicit dynamics are obtained and transformed, through vectorization and the Kronecker product, into an equivalent vectorized form. After suitable replacements, the system of differential equations (77) is obtained. Since W(t) is singular in the case rank(A) < min{m, n} or rank(C) < min{k, h}, the Tikhonov principle leads to the invertible mass matrix W(t) + βI, in which β ≥ 0. As a result, the dynamical evolution (79) is used instead of (77), where M(t) is a nonsingular mass matrix. The ZNN flow (79) is denoted NZNNV3, and it is solvable using one of the ode Matlab solvers.
Theorem 5. Let A(t) ∈ R m×n , C(t) ∈ R k×h and B(t) ∈ R m×h be differentiable. If the condition (16) is satisfied, the NZNNV3 model (79), starting from arbitrary initial state y(0), exponentially converges to the theoretical TV solution of the TV GLMEs (65) in the form (18).
Proof. Similar to the proof of Theorem 3.

Direct ZNN Model
The direct approach considers smooth arbitrary TV matrices A(t) ∈ R^{m×n}, C(t) ∈ R^{k×h} and B(t) ∈ R^{m×h} for solving the GLME (80), where the desired solution is Y(t) ∈ R^{n×k}. The direct approach always calculates the solution produced directly by the pseudoinverses of A(t) and C(t). As a result, the direct ZNN (DZNN) model includes three error functions. The first error function in the DZNN model is defined by (81), wherein X(t) ∈ R^{n×m} is the desired zero of the second error function (82) for calculating A^†(t) (see [38]), and Z(t) ∈ R^{h×k} is the zero of the third error function (83) for finding C^†(t). The time derivatives of (81), (82) and (83) are computed and, considering the ZNN model (2), the implicit dynamics (88) are obtained. Using vectorization and the Kronecker product, (88) is transformed, and after suitable settings the differential system (91) with the mass matrix W(t) is obtained. Since W(t) is singular when rank(A(t)) < min{m, n} or rank(C(t)) < min{k, h}, the following regular mass matrix is used: M(t) = W(t) if rank(A(t)) = min{m, n} and rank(C(t)) = min{k, h}, and M(t) = W(t) + βI otherwise, where β ≥ 0. As a result, the ZNN model (93) is used instead of (91), where M(t) is a nonsingular mass matrix. The system (93) is termed the DZNN model, and it is solvable with an ode Matlab solver.
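The piecewise choice of the mass matrix can be sketched as a small helper (hypothetical helper name and test matrices; β plays the role of the regularization parameter above):

```python
import numpy as np

# Piecewise regularization rule: keep W itself when A and C are full
# rank, and switch to W + beta*I only in the rank-deficient case, so
# regularization is applied only when it is actually needed.
def mass_matrix(W, A, C, beta=1e-8):
    full_rank = (np.linalg.matrix_rank(A) == min(A.shape)
                 and np.linalg.matrix_rank(C) == min(C.shape))
    return W if full_rank else W + beta * np.eye(W.shape[0])

rng = np.random.default_rng(5)
A_full = rng.standard_normal((4, 3))                               # full column rank
A_def = rng.standard_normal((4, 1)) @ rng.standard_normal((1, 3))  # rank 1
C = rng.standard_normal((2, 5))                                    # full row rank
W = np.eye(6)

assert mass_matrix(W, A_full, C) is W                  # untouched in the regular case
assert np.allclose(mass_matrix(W, A_def, C), W + 1e-8 * np.eye(6))
```

This mirrors the rule stated above: the β-term is switched on only when a rank deficiency makes W(t) singular.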
Theorem 6. Let A(t) ∈ R m×n , C(t) ∈ R k×h and B(t) ∈ R m×h be differentiable. If the condition (16) is satisfied, the DZNN model (93) starting from any initial value y(0), converges exponentially to the theoretical TV solution of the TV GLMEs (80) in the form (18).
Proof. Analogous to the proof of Theorem 3.

Simulation Examples
This section compares the performance of the TZNN model (29), the GZNN design (40), the NZNNV1 model (51), the NZNNV2 model (64), the NZNNV3 model (79), and the DZNN model (93) on six numerical examples (NEs) involving square or rectangular, as well as singular or nonsingular, input TV matrices. The diagram in Figure 1 is presented for a better understanding of the solutions produced by these six models. This diagram shows how the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate various solutions for different initial conditions. In contrast, the TZNN and DZNN models generate only the pseudoinverse solution of the TV GLME (1) for any initial condition. As preliminaries, a few parameters and symbols must be defined, together with some additional information for the subsequent NEs. The considered time interval is [0, 10] in all NEs, the ZNN scaling parameter is set to λ = 10, and the regularization parameter is β = 10^{-8}. The initial condition (IC1) in all NEs is:
In the figure legends, the notations TZNN, GZNN, NZNNV1, NZNNV2, NZNNV3 and DZNN denote the solutions or errors produced by the corresponding models.

Example 1
This NE is about a TV GLME that considers the following square matrices of dimensions 4 × 4: such that rank(A(t)) = 4. The input matrices A(t), C(t) and B(t) are nonsingular matrices.

Example 3
This NE is about a TV GLME that takes into account the rectangular matrices below: where rank(A(t)) = min{m, n} and m = 10, n = 6 and k = 4. That is, the input matrix A(t) is a full-column-rank 10 × 6 matrix, and the input C(t) is a full-row-rank 2 × 4 matrix.

Example 4
This example concerns a TV GLME that takes into account the input matrices such that A(t) and C(t) are singular and of dimensions 4 × 4 and 2 × 2, respectively, with rank(A(t)) = rank(C(t)) = 1.

Example 5
This NE is about a TV GLME that takes the rectangular matrices below into account: where rank(A(t)) = 1 and rank(C(t)) = 1. The input matrix A(t) is a 5 × 2 rank-deficient matrix, and C(t) is a full-row rank 3 × 6 matrix.

Example 6
This NE is about a TV GLME that takes rank-deficient rectangular matrices into account, where rank(A(t)) = 1 with m = 8 and n = 5, and rank(C(t)) = 1 with k = 3 and h = 4. That is, the input matrix A(t) is a rank-deficient 8 × 5 matrix, and C(t) is a rank-deficient 3 × 4 matrix.

General Discussion
In this subsection, the performance of the six proposed ZNN models for solving the TV GLME (1) is investigated through six NEs. Since A(t) and C(t) in Example 1 are nonsingular, the solvability condition (16) is satisfied; as a consequence, the TV GLME (1) is solvable with the unique solution Y(t) = A^{-1}(t)B(t)C^{-1}(t). All the models generate the solution Y*_1(t), i.e., the pseudoinverse solution of the TV GLME (1), under IC1 in all NEs, while the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate the solution Y*_2(t) under IC2 in the NEs of Sections 4.4-4.6. More precisely, the NEs of Sections 4.1-4.3, respectively, deal with square nonsingular A(t) and C(t), a rectangular full-column-rank A(t), and a rectangular full-row-rank C(t). As a result, all the models generate the pseudoinverse solution of the TV GLME (1), i.e., Y*(t) = Y*_1(t) = Y*_2(t), for any initial condition. It is worth noting that the TZNN is only applicable in the NE of Section 4.1, which has square nonsingular input matrices A(t) and C(t). To substantiate this claim, the TZNN has been applied in the NE of Section 4.2, where Figure 2b demonstrates that the TZNN error matrix does not converge to zero. Furthermore, the NE of Section 4.4 deals with square singular A(t) and C(t), and the NEs of Sections 4.5 and 4.6 deal with rectangular rank-deficient A(t) and C(t). As a result, the GZNN, NZNNV1, NZNNV2, and NZNNV3 models generate the solution Y*_1(t) under IC1 and the solution Y*_2(t) under IC2.

The following can be deduced from the NEs. The exponential convergence of the models proven in Theorems 2-6 can be observed in the first-row figures. Particularly, Figure 2a-c presents the exponential convergence of the models in the NEs of Sections 4.1-4.3, respectively, whereas Figures 3a-c and 4a-c present the exponential convergence of the models in the NEs of Sections 4.4-4.6 under IC1 and IC2, respectively.

More particularly, while all models begin with an initial value other than the optimal one, all NEs use differentiable A(t), B(t), and C(t) matrices that meet the condition (16). In Figures 2a-c, 3a-c and 4a-c, the convergence of the error functions begins at t = 0 in the range [10^1, 10^6] and ends before t = 2 with lowest values in the range [10^{-10}, 10^{-1}]. It is also crucial to note that, for higher values of the design parameter λ, the models converge faster and the overall error is lowered even further. Furthermore, all other figures behave in the same way due to the convergence tendency of the error functions. In other words, the graphs associated with the models in the remaining figures begin at t = 0 far from the objective and reach it before t = 2. As a result, it is evident that Theorems 2-6 hold true.

In general, according to the first-row figures, all the ZNN models converge to zero, where the NZNNV1 and NZNNV3 models have the fastest convergence rate, and the TZNN and DZNN have the second and third fastest, respectively. In contrast, the GZNN and NZNNV2 models have the slowest and almost similar convergence rates. According to the second-row figures under IC1 and the fourth-row figures, the NZNNV1 and NZNNV3 models have the fastest convergence rate, the GZNN has the second fastest, and the NZNNV2 model has the slowest. The NZNNV2 and DZNN models have the lowest overall errors when A(t) and C(t) are rectangular, specifically in the NEs of Sections 4.2 and 4.5. The second-row figures under IC2 show that the NZNNV1, NZNNV2, NZNNV3, and GZNN models perform similarly to the second-row figures under IC1. However, the DZNN model does not converge to zero, as expected. Finally, the third-row figures demonstrate that under IC1 all models' solutions match Y*(t), whereas under IC2 the DZNN model's solutions match Y*_1(t), while the NZNNV1, NZNNV2, NZNNV3, and GZNN models' solutions match Y*_2(t). Graphs included in Figures 3a-c and 4a-c, on the other hand, show that IC1 and IC2 have no effect on the models' convergence, confirming the claim of Theorems 2-6 that the models' performance is unaffected by the initial condition.

To summarize, the TZNN is only applicable in the case where m = n = rank(A(t)) and k = h = rank(C(t)). It is important to mention that A(t) becomes C(t) and C(t) becomes A(t) when n < m and k < h; as a result, the cases where n < m and k < h are disregarded in the NEs. The GZNN has a lower overall error than NZNNV1 and NZNNV3, although they have the same convergence speed. Even though the NZNNV2 has the slowest convergence speed, its overall error lies between those of NZNNV1 and GZNN. Compared to GZNN, NZNNV1, and NZNNV3, the DZNN model's convergence speed is slightly slower, but its overall error is the lowest. Recall that the DZNN model's primary distinction from all other models is that, regardless of the initial conditions, it generates only a single solution, based on pseudoinversion. That is, all the models work excellently in solving the TV GLME (1).

Conclusions
The problem of solving TV GLMEs with arbitrary TV real input matrices is resolved in this paper by applying the ZNN design. Six ZNN models for calculating the online solution to general TV GLMEs are defined, analyzed, and compared: TZNN, which is based on the traditional ZNN approach; GZNN, which is based on the gradient ZNN approach; NZNNV1, NZNNV2, and NZNNV3, which are based on Newton's optimization method; and DZNN, which is based on the ZNN approach for finding the direct solution produced with the use of the pseudoinverse. Six numerical examples involving square or rectangular, and singular or nonsingular, matrices show that all models successfully solve TV GLMEs. However, their effectiveness varies and depends on the input matrices, while the NZNNV1 and NZNNV3 models converge to exact solutions faster than the other models.
There are a few prospective study areas that can be explored.