Zeroing Neural Network for Pseudoinversion of an Arbitrary Time-Varying Matrix Based on Singular Value Decomposition

Abstract: Many researchers have investigated the time-varying (TV) matrix pseudoinverse problem in recent years, owing to its importance in addressing TV problems in science and engineering. In this paper, the problem of calculating the inverse or pseudoinverse of an arbitrary TV real matrix is considered and addressed using the singular value decomposition (SVD) and the zeroing neural network (ZNN) approaches. Since SVD is frequently used to compute the inverse or pseudoinverse of a matrix, this research proposes a new ZNN model, based on the SVD method and the technique of Tikhonov regularization, for solving the problem in continuous time. Numerical experiments, involving the pseudoinversion of square, rectangular, singular, and nonsingular input matrices, indicate that the proposed models are effective for solving the problem of the inversion or pseudoinversion of time-varying matrices.


Introduction and Preliminaries
In this paper, the zeroing neural network (ZNN) approach is used to address the problem of calculating the inverse or pseudoinverse of an arbitrary time-varying (TV) real matrix. On the one hand, the pseudoinverse, or Moore-Penrose inverse, of A ∈ R^{m×n} is the unique matrix A†, such that the following system of Penrose equations holds for X := A† [1][2][3]:

AXA = A,  XAX = X,  (AX)^T = AX,  (XA)^T = XA,  (1)

where A^T denotes the transpose of A. Note that, if A is a nonsingular square matrix, A† becomes the usual inverse A^{−1}. On the other hand, the singular value decomposition (SVD) of A ∈ R^{m×n} is a factorization of the form [4]:

A = U S V^T,  (2)

where U ∈ R^{m×m} and V ∈ R^{n×n} are orthogonal matrices, i.e., U^T = U^{−1} and V^T = V^{−1}, while S ∈ R^{m×n} is a rectangular (or square, in the case m = n) diagonal matrix with the singular values of A on its main diagonal. SVD is frequently used to compute the inverse or pseudoinverse of a matrix, and it commonly appears in fields of scientific research such as medical and industrial applications, lattice computing [5], automatic classification of electromyograms [6], and face recognition [7]. In a recent work [8], the authors provided a zeroing neural network for computing the singular value decomposition of an arbitrary matrix. This work moves things one step further by designing a new ZNN model for calculating the inverse or pseudoinverse of an arbitrary TV matrix based on the singular value decomposition. For comparison purposes, we build another model based on direct pseudoinversion in accordance with the paper [9], and the experiments section demonstrates the efficacy of the proposed SVD model. Zhang et al. [10] developed a ZNN design for generating online solutions to TV problems. It is worth noting that most ZNN-based dynamical systems fall under the category of recurrent neural networks (RNN) that are designed to find equation zeros. As a consequence, numerous valuable research findings have been presented in the literature.
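The SVD route to the pseudoinverse can be illustrated with a short numerical sketch (a minimal NumPy illustration, not the paper's MATLAB code): pseudoinvert the diagonal factor, recombine as V S† U^T, and check the Penrose equations.

```python
import numpy as np

def pinv_via_svd(A, tol=1e-12):
    """Moore-Penrose inverse via SVD: A = U S V^T  =>  A^+ = V S^+ U^T."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Pseudoinvert the singular values: reciprocate the nonzero entries only.
    s_inv = np.array([1.0 / x if x > tol else 0.0 for x in s])
    return Vt.T @ np.diag(s_inv) @ U.T

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])  # a 3x2 full-column-rank matrix
X = pinv_via_svd(A)
# X satisfies the four Penrose equations, so X = A^+.
assert np.allclose(A @ X @ A, A)
assert np.allclose(X @ A @ X, X)
assert np.allclose((A @ X).T, A @ X)
assert np.allclose((X @ A).T, X @ A)
```

The same recipe covers square nonsingular matrices, where it returns the ordinary inverse.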
Addressing generalized inversion problems [11,12], tensor and matrix inversion problems [13], systems of linear equations [14,15], systems of matrix equations [14,16], quadratic optimization problems [17], and the approximation of diverse matrix functions [18,19] are the main applications of ZNNs. The first stage in developing ZNN dynamics is to design an error function E(t) that is tailored to the underlying problem, commonly known as the Zhang function [20]. The second stage takes advantage of the dynamical evolution that follows:

Ė(t) = −λF(E(t)),  (3)

where Ė(t) ∈ R^{m×n} is the time derivative of E(t) ∈ R^{m×n}, λ > 0 is the design parameter that is used for scaling the convergence, while F(·) : R^{m×n} → R^{m×n} means elementwise utilization of an odd and increasing activation function on E(t). In our research, we will consider the ZNN evolution (3) under the linear activation function. That is,

Ė(t) = −λE(t).  (4)

This work's key points may be summarized as below:
• A novel ZNN approach, which is based on SVD, is employed for solving the problem of calculating the pseudoinverse of an arbitrary TV real matrix.
• Two ZNN models for calculating the pseudoinverse of an arbitrary TV matrix are offered: one called ZNNSVDP, which is based on SVD, and the other called ZNNP, which is based on a more direct approach to the problem and is offered for comparison purposes.
• Four numerical experiments, involving the pseudoinversion of square, rectangular, singular, and nonsingular input matrices, indicate that both models are effective for solving the problem and that the ZNNSVDP model converges to the problem's solution faster than the ZNNP model.
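The linear design formula (4) makes every entry of E(t) decay exponentially, since its solution is E(t) = E(0)e^{−λt}. The sketch below (illustrative values only; the initial error matrix is arbitrary) integrates Ė(t) = −λE(t) with forward Euler and checks the exponential envelope.

```python
import numpy as np

# Linear-activation ZNN design (4): E'(t) = -lam * E(t), whose exact solution
# satisfies ||E(t)|| = ||E(0)|| * exp(-lam * t).
lam, dt, steps = 10.0, 1e-4, 20000        # illustrative design parameter and step size
E = np.array([[1.0, -2.0], [0.5, 3.0]])   # an arbitrary initial error matrix
E0_norm = np.linalg.norm(E)
for _ in range(steps):                    # forward-Euler integration of E' = -lam*E
    E = E + dt * (-lam * E)
t = dt * steps                            # elapsed time t = 2
# Forward Euler slightly over-damps here, so the exponential bound holds.
assert np.linalg.norm(E) <= E0_norm * np.exp(-lam * t) * 1.01
```

Larger λ tightens the envelope, which is the mechanism behind the convergence-speed observations in the experiments section.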
Additionally, it is worth mentioning some of the paper's general notations: the symbols 1_n, 0_n denote vectors in R^n consisting of ones and zeros, respectively; O_{n×n} ∈ R^{n×n} denotes a zero matrix of n × n dimensions; I_n ∈ R^{n×n} denotes the identity n × n matrix; ⊗ denotes the Kronecker product; vec(·) denotes the vectorization technique; ⊙ denotes the Hadamard (or elementwise) product; and ‖·‖_F denotes the matrix Frobenius norm.
The paper is organized as follows. Sections 2 and 3, respectively, define and analyse the ZNNSVDP and ZNNP models. Section 4 presents and discusses the results of four numerical experiments employing the pseudoinversion of square, rectangular, singular, and nonsingular input matrices. Lastly, the final remarks and conclusions are offered in Section 5.

Time-Varying Pseudoinverse Computation Based on SVD
This section presents and analyses the ZNNSVDP model for calculating the pseudoinverse of an arbitrary TV real matrix. Considering a smooth TV matrix A(t) ∈ R^{m×n}, the inverse or pseudoinverse of A(t) based on the SVD (2) is the following:

A†(t) = V(t) S†(t) U^T(t),  (5)

where U(t) ∈ R^{m×m} and V(t) ∈ R^{n×n} are TV orthogonal matrices, and S(t) ∈ R^{m×n} is a rectangular (or square, in the case m = n) diagonal matrix with the singular values of A(t) on its main diagonal. Here, we consider the decomposition in which the singular values of A(t) appear in descending order on the main diagonal of S(t). Based on (2) and (5), the ZNNSVDP model considers the following group of error functions for calculating the inverse or pseudoinverse of A(t):

E1(t) = A(t) − U(t)S(t)V^T(t),
E2(t) = U^T(t)U(t) − I_m,
E3(t) = V^T(t)V(t) − I_n,
E4(t) = X(t) − V(t)Y(t)U^T(t),  (6)

where X(t) is the desired solution of the problem, i.e., the inverse or pseudoinverse of A(t), and Y(t) = S†(t). The following proposition about the structure and construction of the pseudoinverse of a diagonal matrix is offered, whereas [21] provides a full examination of this proposition.
Proposition 1. For a rectangular (or a square singular) diagonal matrix B ∈ R^{m×n}, let b_1, b_2, ..., b_w with w = rank(B) signify the nonzero elements of the main diagonal of B. Then, the pseudoinverse B† ∈ R^{n×m} of B is the diagonal matrix whose main diagonal begins with the elements 1/b_1, 1/b_2, ..., 1/b_w, with all remaining elements equal to zero.

In addition, the first time derivative of (6) is the following:

Ė1(t) = Ȧ(t) − U̇(t)S(t)V^T(t) − U(t)Ṡ(t)V^T(t) − U(t)S(t)V̇^T(t),
Ė2(t) = U̇^T(t)U(t) + U^T(t)U̇(t),
Ė3(t) = V̇^T(t)V(t) + V^T(t)V̇(t),
Ė4(t) = Ẋ(t) − V̇(t)Y(t)U^T(t) − V(t)Ẏ(t)U^T(t) − V(t)Y(t)U̇^T(t),  (9)

where the first time derivative of Y(t) is the following [22]:

Ẏ(t) = −S†(t)Ṡ(t)S†(t),  (11)

or, equivalently, Ẏ(t) = −Y(t)Ṡ(t)Y(t). Then, combining (6), (9) and (11) with the ZNN design under the linear activation function (4), the following may be acquired:

U̇(t)S(t)V^T(t) + U(t)Ṡ(t)V^T(t) + U(t)S(t)V̇^T(t) = Ȧ(t) + λ(A(t) − U(t)S(t)V^T(t)),
U̇^T(t)U(t) + U^T(t)U̇(t) = −λ(U^T(t)U(t) − I_m),
V̇^T(t)V(t) + V^T(t)V̇(t) = −λ(V^T(t)V(t) − I_n),
Ẋ(t) − V̇(t)Y(t)U^T(t) + V(t)Y(t)Ṡ(t)Y(t)U^T(t) − V(t)Y(t)U̇^T(t) = −λ(X(t) − V(t)Y(t)U^T(t)).  (12)

Using vectorization and the Kronecker product, the dynamics of (12) are modified into the form of (13). Note that (13) must be simplified in order to produce a simple and explicit dynamical model that may easily calculate U(t), V(t), S(t), and X(t). As a result, the following lemmas about vectorization and the Kronecker product are offered, whereas [23] provides a full examination of the content of Lemmas 1 and 2.
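Proposition 1 can be checked numerically. The sketch below (the function name diag_pinv is ours, not the paper's) builds B† for a rectangular diagonal matrix by reciprocating the nonzero diagonal entries and transposing the shape.

```python
import numpy as np

def diag_pinv(B, tol=1e-12):
    """Pseudoinverse of a rectangular diagonal B, per Proposition 1:
    B^+ is n x m and diagonal, with 1/b_i for each nonzero diagonal b_i."""
    m, n = B.shape
    Bp = np.zeros((n, m))
    for i in range(min(m, n)):
        if abs(B[i, i]) > tol:
            Bp[i, i] = 1.0 / B[i, i]
    return Bp

S = np.zeros((4, 2))
S[0, 0], S[1, 1] = 3.0, 2.0          # a rank-2 rectangular diagonal matrix
Sp = diag_pinv(S)
assert np.allclose(Sp, np.linalg.pinv(S))
```

For such fixed-rank diagonal matrices, differentiating B†(t) entrywise (d/dt of 1/b_i is −ḃ_i/b_i²) reproduces the formula Ẏ = −YṠY quoted above.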

Lemma 1.
For A ∈ R^{m×n}, B ∈ R^{n×k} and C ∈ R^{k×l}, the following holds:

vec(ABC) = (C^T ⊗ A) vec(B).

Lemma 2.
For B ∈ R^{m×m}, let vec(B) ∈ R^{m²} signify the vectorization of the matrix B. The following holds:

vec(B^T) = Q_m vec(B),  (15)

where Q_m ∈ R^{m²×m²} is a constant permutation matrix defined exclusively by m.
Algorithm 1 below presents an algorithmic process for obtaining the permutation matrix Q_m in (15), which corresponds to a matrix of m × m dimensions. Note that the notations eye(·) and reshape(·) in Algorithm 1 have the usual meaning of the related MATLAB functions [24].
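As a cross-check of Lemma 2, Q_m can be built columnwise in the spirit of Algorithm 1's eye and reshape operations. This is a NumPy sketch, not the paper's MATLAB listing; column-major (Fortran) order is used throughout to mirror MATLAB's vec convention.

```python
import numpy as np

def permutation_Q(m):
    """Permutation matrix Q_m with vec(B^T) = Q_m vec(B) for any B in R^{m x m}.
    Column k of Q_m is vec(transpose(un-vec(e_k)))."""
    I = np.eye(m * m)
    Q = np.zeros((m * m, m * m))
    for k in range(m * m):
        B = I[:, k].reshape(m, m, order='F')   # un-vec e_k (column-major, like MATLAB)
        Q[:, k] = B.T.flatten(order='F')       # vec of its transpose
    return Q

m = 3
B = np.arange(9.0).reshape(m, m)
Q = permutation_Q(m)
assert np.allclose(Q @ B.flatten(order='F'), B.T.flatten(order='F'))
```

Since transposing twice is the identity, Q_m is its own inverse, which is a quick sanity check on any implementation.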
Furthermore, because S(t) and S^T(t) are rectangular (or square, in the case m = n) diagonal matrices, just the nonzero elements of Ṡ(t) and Ṡ^T(t) that are placed on their main diagonal must be obtained. By doing so, we may confine S(t) to being a diagonal matrix, while also reducing the dimensions of (13). Hence, employing the nonzero elements on the main diagonal of S(t) and S^T(t), whose number is w = rank(A(t)), we utilize the equations vec(Ṡ(t)) = G_1 ṡ(t) and vec(Ṡ^T(t)) = G_2 ṡ(t), respectively, to replace Ṡ(t) and Ṡ^T(t) in (14), where the matrices G_1, G_2 ∈ R^{mn×w} are operational matrices that can be calculated using the algorithmic procedure presented in Algorithm 2. Additionally, the notations sum(·), min(·), zeros(·), mod(·) and floor(·) in Algorithm 2 have the usual meaning of the related MATLAB functions [24].

Algorithm 2 Operational matrix calculation
Require: The numbers of rows and columns, respectively, m and n, of a matrix B ∈ R^{m×n}, and w = rank(B).
 ⋮
10: if d == c then
11:  Set G(k, c) = 1
12: end if
13: end for
14: return G
15: end procedure
Ensure: The operational matrix G.
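The role of the operational matrix can be sketched as follows: G maps the w diagonal entries s of a rectangular diagonal matrix S to its vectorization, i.e., vec(S) = G s. The code below is a hypothetical NumPy analogue of Algorithm 2, not the paper's listing; G_2 for S^T is obtained by swapping m and n.

```python
import numpy as np

def operational_G(m, n, w):
    """Operational matrix G in R^{(mn) x w} such that vec(S) = G @ s,
    where s holds the w diagonal entries of an m x n diagonal matrix S."""
    G = np.zeros((m * n, w))
    for c in range(w):
        # Entry S[c, c] occupies position c*m + c of vec(S) (column-major).
        G[c * m + c, c] = 1.0
    return G

m, n, w = 4, 2, 2
s = np.array([3.0, 2.0])
S = np.zeros((m, n))
S[0, 0], S[1, 1] = s
G = operational_G(m, n, w)
assert np.allclose(G @ s, S.flatten(order='F'))   # vec(S) = G s
```

Replacing vec(Ṡ) by G_1 ṡ in this way shrinks the corresponding block of unknowns from mn to w entries.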
Based on the aforementioned discussion, (13) can be reformulated as in (16)-(19). As a result, setting Z(t) to be the coefficient matrix of this reformulation, we propose the ZNN model (20), where Z^T(t)Z(t) is a singular mass matrix. To solve the singularity problem, the Tikhonov regularization is used and (20) is converted into (21), in which the singular mass matrix Z^T(t)Z(t) is replaced by Z^T(t)Z(t) + βI, where β ≥ 0 signifies the regularization parameter. The ZNN model (21) is termed the ZNNSVDP model and can be solved efficiently with an appropriate MATLAB ODE solver. The exponential convergence of the ZNNSVDP model (21) to the theoretical TV inverse or pseudoinverse of the input matrix A(t) is proven in Theorem 1.
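The effect of the Tikhonov term can be illustrated on a generic singular mass matrix. In the sketch below, Z is a random rank-deficient matrix chosen purely for illustration (not the model's actual Z(t)): the system Z^T Z x = Z^T b is singular, but adding βI makes it well posed while perturbing the solution only negligibly for small β.

```python
import numpy as np

# Tikhonov regularization of a singular mass matrix:
# solve (Z^T Z + beta*I) x = Z^T b instead of the singular Z^T Z x = Z^T b.
rng = np.random.default_rng(0)
Z = rng.standard_normal((6, 4)) @ np.diag([1.0, 1.0, 1.0, 0.0])  # rank 3, last column zero
b = rng.standard_normal(6)
beta = 1e-8                                            # same order as in the experiments
M = Z.T @ Z
assert np.linalg.matrix_rank(M) < M.shape[0]           # singular without regularization
x = np.linalg.solve(M + beta * np.eye(4), Z.T @ b)     # well posed with beta > 0
# The regularized solution approximates the minimum-norm least-squares solution.
assert np.allclose(x, np.linalg.pinv(Z) @ b, atol=1e-4)
```

This is why β can be kept as small as 10^{-8} in the experiments: it only needs to lift the zero eigenvalues of the mass matrix off zero.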

Remark 1.
According to the syntax of MATLAB's ODE solvers [24], the mass matrix is a matrix M that expresses the connection between the time derivative ẋ of the generalized coordinate vector x of a system and the right-hand side of the system, by the equation M(t, x)ẋ = f(t, x).

Theorem 1. Let U(t) ∈ R^{m×m}, V(t) ∈ R^{n×n}, S(t) ∈ R^{m×n} be differentiable and S(t) be a rectangular diagonal matrix. The ZNNSVDP model (21), starting from any initial value x(0), converges exponentially to the theoretical TV inverse or pseudoinverse of the input matrix A(t).
Proof. In order to obtain the solution x(t), which corresponds to the TV inverse or pseudoinverse of the input matrix A(t), the error matrix equation group is defined as in (9), in line with the ZNN design. Following that, by adopting the linear design formula for zeroing (9), the model (12) is obtained, and the solution of (12) converges to the theoretical solution when t → ∞. As a result, the solution of (12) converges to the theoretical TV inverse or pseudoinverse of the input matrix A(t) when t → ∞. Furthermore, from the derivation procedure of (21) from (12), the proof is completed.

Alternative Time-Varying Pseudoinverse Computation
This section presents and analyzes a ZNN model, namely, ZNNP, for calculating the pseudoinverse of any TV real matrix, based on a recent work [9] on ZNN pseudoinverse computation; this model will serve as a strong and fair competitor to the proposed ZNNSVDP model. Considering a smooth TV matrix A(t) ∈ R^{m×n}, if rank(A(t)) = n < m, the MP inverse A†(t) becomes the left inverse A†(t) = (A^T(t)A(t))^{−1}A^T(t), whereas, if rank(A(t)) = m < n, it becomes the right inverse A†(t) = A^T(t)(A(t)A^T(t))^{−1}. Therefore, we can design a ZNN model according to the following equations:

A^T(t)A(t)X(t) = A^T(t),  rank(A(t)) ≤ n ≤ m,
X(t)A(t)A^T(t) = A^T(t),  rank(A(t)) ≤ m < n.  (22)

Based on (22), the ZNNP model considers the following error function for calculating the inverse or pseudoinverse of A(t):

E_D(t) = A^T(t)A(t)X(t) − A^T(t),  rank(A(t)) ≤ n ≤ m,
E_D(t) = X(t)A(t)A^T(t) − A^T(t),  rank(A(t)) ≤ m < n,  (23)

where X(t) is the desired solution of the problem, i.e., the inverse or pseudoinverse of A(t). Furthermore, the first time derivative of (23) is the following:

Ė_D(t) = Ȧ^T(t)A(t)X(t) + A^T(t)Ȧ(t)X(t) + A^T(t)A(t)Ẋ(t) − Ȧ^T(t),  rank(A(t)) ≤ n ≤ m,
Ė_D(t) = Ẋ(t)A(t)A^T(t) + X(t)Ȧ(t)A^T(t) + X(t)A(t)Ȧ^T(t) − Ȧ^T(t),  rank(A(t)) ≤ m < n.  (24)

Then, combining (23) and (24) with the ZNN design (4), under the linear activation function, the following can be obtained:

A^T(t)A(t)Ẋ(t) = −λE_D(t) − Ȧ^T(t)A(t)X(t) − A^T(t)Ȧ(t)X(t) + Ȧ^T(t),  rank(A(t)) ≤ n ≤ m,
Ẋ(t)A(t)A^T(t) = −λE_D(t) − X(t)Ȧ(t)A^T(t) − X(t)A(t)Ȧ^T(t) + Ȧ^T(t),  rank(A(t)) ≤ m < n.  (25)

Using vectorization and the Kronecker product, the dynamics of (25) are modified as follows:

(I_m ⊗ A^T(t)A(t)) ẋ(t) = vec(−λE_D(t) − Ȧ^T(t)A(t)X(t) − A^T(t)Ȧ(t)X(t) + Ȧ^T(t)),  rank(A(t)) ≤ n ≤ m,
(A(t)A^T(t) ⊗ I_n) ẋ(t) = vec(−λE_D(t) − X(t)Ȧ(t)A^T(t) − X(t)A(t)Ȧ^T(t) + Ȧ^T(t)),  rank(A(t)) ≤ m < n.  (26)

As a result, setting

L(t) = I_m ⊗ (A^T(t)A(t) + βI_n),  rank(A(t)) ≤ n ≤ m,
L(t) = (A(t)A^T(t) + βI_m) ⊗ I_n,  rank(A(t)) ≤ m < n,  (27)

ẋ(t) = vec(Ẋ(t)) and x(t) = vec(X(t)), where β ≥ 0 signifies the Tikhonov regularization parameter, we have the next ZNN model:

L(t)ẋ(t) = vec(−λE_D(t) − Ȧ^T(t)A(t)X(t) − A^T(t)Ȧ(t)X(t) + Ȧ^T(t)),  rank(A(t)) ≤ n ≤ m,
L(t)ẋ(t) = vec(−λE_D(t) − X(t)Ȧ(t)A^T(t) − X(t)A(t)Ȧ^T(t) + Ȧ^T(t)),  rank(A(t)) ≤ m < n,  (28)

where L(t) is a mass matrix. Note that the Tikhonov regularization is used in L(t) to solve the singularity problem of the cases rank(A(t)) < n ≤ m and rank(A(t)) < m < n, respectively, because the products A^T(t)A(t) and A(t)A^T(t) result in singular matrices. The ZNN model (28) is termed the ZNNP model and can be solved efficiently with a MATLAB ODE solver, while its exponential convergence to the theoretical TV inverse or pseudoinverse of the input matrix A(t) is proven in Theorem 2.
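The left-inverse identity underlying the full-column-rank case of (22) is easy to verify numerically (a minimal NumPy check with an arbitrary full-column-rank matrix):

```python
import numpy as np

# For full column rank A (rank = n <= m), the MP inverse reduces to the
# left inverse (A^T A)^{-1} A^T, which satisfies X A = I_n.
A = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])   # 3x2, full column rank
left_inv = np.linalg.solve(A.T @ A, A.T)             # (A^T A)^{-1} A^T without explicit inversion
assert np.allclose(left_inv, np.linalg.pinv(A))
assert np.allclose(left_inv @ A, np.eye(2))          # X A = I_n
```

The full-row-rank case is symmetric: A^T (A A^T)^{−1} satisfies A X = I_m.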
Theorem 2. The ZNNP model (28), starting from any initial value x(0), converges exponentially to the theoretical TV inverse or pseudoinverse of the input matrix A(t).
Proof. In order to obtain the solution x(t), which corresponds to the TV inverse or pseudoinverse of the input matrix A(t), the error matrix equation is defined as in (23), in line with the ZNN design. Following that, by adopting the linear design formula for zeroing (23), the model (25) is obtained. As a consequence, the solution of (25) converges to the theoretical TV inverse or pseudoinverse of the input matrix A(t) when t → ∞. Moreover, from the derivation procedure of (28), we know that it is (25) written in a different form. The proof is, thus, completed.

Numerical Experiments
This section compares and contrasts the performances of the ZNNSVDP model (21) and the ZNNP model (28) on four numerical experiments (NE), involving the pseudoinversion of square, rectangular, singular, and nonsingular input matrices. In all NE, the time interval is restricted to [0, 10] during the computation, which indicates that the starting time is t_0 = 0 and the ending time is t_f = 10, while the ZNN design parameter has been set to λ = 10 and the Tikhonov regularization parameter has been set to β = 1e−8. It is worth mentioning that the notations ZNNSVDP and ZNNP in the legends of Figure 1, respectively, denote the solutions produced by the ZNNSVDP and ZNNP models. Lastly, the MATLAB solver ode45 has been used, while the initial value for both models has been set to x(0) = sign(x*(0)), where x*(0) is the theoretical solution at t = 0 and sign is the signum function.
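For readers without MATLAB, the same setup can be sketched in Python with SciPy's solve_ivp (an RK45 solver comparable to ode45). The input matrix below is a hypothetical invertible 2 × 2 TV matrix, not the experiment's matrix, and we integrate the direct error function E(t) = A(t)X(t) − I under the linear design (4) with the same λ, time interval, and sign-based initial value.

```python
import numpy as np
from scipy.integrate import solve_ivp

lam = 10.0                                           # ZNN design parameter, as in the NE
def A(t):   # hypothetical smooth TV matrix; det = (2+sin t)^2 + cos^2 t > 0
    return np.array([[2 + np.sin(t), np.cos(t)], [-np.cos(t), 2 + np.sin(t)]])
def Ad(t):  # its elementwise time derivative
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

def rhs(t, x):
    X = x.reshape(2, 2)
    # E = A X - I and E' = -lam*E give A Xdot = -lam*(A X - I) - Adot X;
    # solve for Xdot at each step instead of forming an explicit inverse.
    Xdot = np.linalg.solve(A(t), -lam * (A(t) @ X - np.eye(2)) - Ad(t) @ X)
    return Xdot.ravel()

x0 = np.sign(np.linalg.inv(A(0))).ravel()            # x(0) = sign(x*(0)), as in the NE
sol = solve_ivp(rhs, (0.0, 10.0), x0, rtol=1e-6, atol=1e-9)
X_end = sol.y[:, -1].reshape(2, 2)
assert np.linalg.norm(A(10.0) @ X_end - np.eye(2)) < 1e-4
```

The residual ‖A(t)X(t) − I‖_F decays like e^{−λt} up to the integrator's tolerance, mirroring the convergence curves reported in Figure 1.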

Experiment 1
This NE deals with the inversion of the following square matrix: Note that A(t) is a full rank matrix with dimensions 2 × 2.

Experiment 2
This NE deals with the pseudoinversion of the following rectangular matrix: Notice that A(t) is a full column rank matrix with dimensions 5 × 3.

Experiment 3
The pseudoinversion of the following rectangular matrix is the subject of this NE: The matrix A(t) is rank deficient, with rank(A(t)) = 1, and its dimensions are 4 × 2.
Experiment 4
With rank(A(t)) = 1, the matrix A(t) of this NE is rank deficient, and its dimensions are m × n, where m = 4 and n = 9.

Analysis of Numerical Experiments-Results and Comparison
The performance of the ZNNSVDP and ZNNP models for calculating the inverse or pseudoinverse of an arbitrary matrix A(t) is investigated through the four NE defined in Sections 4.1-4.4. For all the experiments, the results produced by the ZNNSVDP and ZNNP models are depicted in Figure 1. It is worth noting that Figure 1 has the following layout: the first-column figures show the convergence of the error functions, i.e., ‖E_i(t)‖, i = 1, ..., 4, of the ZNNSVDP model and ‖E_D(t)‖ of the ZNNP model; the second-column figures show the convergence of the models according to the appropriate error function, i.e., the residual errors; the third-column figures show the trajectories of the solutions generated by the models.
The following can be deduced from the NE of this section. Overall, the error functions of the ZNNSVDP model, i.e., ‖E_i(t)‖, i = 1, ..., 4, receive lower values than the error function of the ZNNP model, i.e., ‖E_D(t)‖, in all NE, as depicted in Figure 1a,d,g,j. When X(t) corresponds to the solution of the ZNNSVDP model rather than the solution of the ZNNP model, the convergence in Figure 1b,h,k is faster, while the convergence in Figure 1e is almost identical. It is worth noting that Figure 1b depicts the residual error ‖I − A(t)X(t)‖_F in the case of NE Section 4.1, Figure 1e depicts the residual error ‖I − X(t)A(t)‖_F in the case of NE Section 4.2, Figure 1h depicts the residual error ‖A^T(t) − X(t)A(t)A^T(t)‖_F in the case of NE Section 4.3, and Figure 1k depicts the residual error ‖A^T(t) − A^T(t)A(t)X(t)‖_F in the case of NE Section 4.4. Finally, Figure 1c,f,i,l show that both models' solutions match the theoretical inverse in the case of NE Section 4.1, and the theoretical pseudoinverse in the cases of NE Sections 4.2-4.4. According to the presented NE, the following general conclusions can be drawn. The ZNNSVDP model presented in this paper, which is based on the SVD method, shows better performance than the ZNNP model, which is based on a more direct approach to calculating the inverse or pseudoinverse. In addition, the ZNNSVDP model attains the smallest values of the Frobenius norm for both the error functions and the residual errors. It is also important to note that, for both models, the larger the value of the design parameter λ, the faster the convergence.

Conclusions
The problem of calculating the inverse or pseudoinverse of an arbitrary TV real matrix is addressed using the ZNN approach in this paper. Two ZNN models for calculating the inverse or pseudoinverse of an arbitrary TV matrix, one called ZNNSVDP, which is based on SVD, and the other called ZNNP, which is based on a more direct approach to the problem, are defined, analysed and compared. Four numerical experiments, involving the pseudoinversion of square, rectangular, singular, and nonsingular input matrices, indicate that both models are effective for solving the problem and that the ZNNSVDP model converges to the problem's solution faster than the ZNNP model.