Abstract
A secant-type method is designed for approximating the inverse and some generalized inverses of a complex matrix A. For a nonsingular matrix, the proposed method gives us an approximation of the inverse; when the matrix is singular, approximations of the Moore–Penrose inverse and of the Drazin inverse are obtained. The convergence and the order of convergence are presented in each case. Some numerical tests allowed us to confirm the theoretical results and to compare the performance of our method with other known ones. With these results, iterative methods with memory appear for the first time for estimating the solution of nonlinear matrix equations.
1. Introduction
Recently, many iterative methods without memory have been published for approximating the inverse or some generalized inverse of a complex matrix A of arbitrary order (see, for example, [1,2,3,4,5,6] and the references therein). This topic plays a significant role in many areas of applied sciences and engineering, such as multivariate analysis, image and signal processing, approximation theory, cryptography, etc. (see [7]).
The discretization of boundary value problems or partial differential equations by means of finite difference or finite element techniques leads to a large number of linear systems that must be solved. This statement applies both to equations with integer-order derivatives and to the case of fractional derivatives (see, for example, [8,9]). In these linear problems, the coefficient matrix is usually too large or too ill-conditioned for the system to be solved directly. Thus, iterative methods can play a key role.
The main purpose of this manuscript is to design a secant-type iterative scheme with memory, free of inverse operators and efficient from the point of view of CPU time, for estimating the inverse of a nonsingular complex matrix. We also discuss the generalization of the proposed scheme for approximating the Drazin inverse of singular square matrices and the Moore–Penrose inverse of complex rectangular matrices. As far as we know, this is the first time that this kind of method with memory is applied to estimate generalized inverses. This might be the first step towards developing higher-order methods with memory in the future. This kind of scheme has proven to be very stable for scalar equations; we expect a similar performance in the case of matrix equations.
Let us consider a non-singular complex matrix A of size $n \times n$. The extension to the matrix setting of the iterative methods for the real equation $f(x) = \frac{1}{x} - a = 0$, that is, schemes for finding the zero of the matrix function $F(X) = X^{-1} - A$, gives us the so-called Schulz-type schemes.
The best known of these schemes for estimating $A^{-1}$ is the Newton–Schulz method [10], whose iterative expression is
$$X_{k+1} = X_k \left( 2I - A X_k \right), \quad k = 0, 1, 2, \ldots, \qquad (1)$$
where I denotes the identity matrix of order n. Schulz [11] demonstrated that the eigenvalues of matrix $I - A X_0$ must be less than 1 in modulus to assure the convergence of the scheme in Equation (1). Taking into account that the residuals in each iteration of Equation (1) satisfy $\|I - A X_{k+1}\| \leq \|I - A X_k\|^2$, the Newton–Schulz method has quadratic convergence. In general, it is known that this scheme converges to $A^{-1}$ with initial guesses of the form $X_0 = \alpha A^*$, with $0 < \alpha < 2/\rho(A A^*)$, where $\rho(\cdot)$ denotes the spectral radius and $A^*$ is the conjugate transpose of A. Such schemes are also used for sensitivity analysis when accurate approximate inverses are needed for both square and rectangular matrices.
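As an illustration, the Newton–Schulz iteration in Equation (1) can be implemented in Matlab as in the following sketch; the scaled initial guess $X_0 = A^*/(\|A\|_1 \|A\|_{\infty})$ is one classical choice fulfilling the above convergence condition, while the function name, tolerance, and iteration limit are our own illustrative assumptions.

```matlab
% Minimal sketch of the Newton-Schulz iteration (1): X_{k+1} = X_k*(2I - A*X_k).
% The scaling of the initial guess and the stopping parameters are assumptions.
function X = newton_schulz(A, tol, maxit)
    n = size(A, 1);
    X = A' / (norm(A, 1) * norm(A, inf));   % X_0 = A^*/(||A||_1*||A||_inf)
    for k = 1:maxit
        Xnew = X * (2*eye(n) - A*X);        % inverse-free quadratic update
        if norm(Xnew - X, 'fro') < tol
            X = Xnew;
            return
        end
        X = Xnew;
    end
end
```

For instance, X = newton_schulz(A, 1e-10, 100) returns an approximation of $A^{-1}$ for a nonsingular square matrix A.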
On the other hand, for a nonsingular $n \times n$ matrix A, Li et al. [12] suggested the scheme
$$X_{k+1} = X_k \left[ mI - \binom{m}{2} A X_k + \binom{m}{3} (A X_k)^2 - \cdots + (-1)^{m-1} \binom{m}{m} (A X_k)^{m-1} \right], \quad k = 0, 1, 2, \ldots, \qquad (2)$$
with $m \geq 2$. They proved the convergence of order m of $\{X_k\}_{k \geq 0}$ to the inverse of A. This result was extended by Chen et al. [13] for computing the Moore–Penrose inverse. Other iterative schemes without memory have been designed for approximating the inverse or some generalized inverses.
In this paper, we construct an iterative method with memory (that is, iterate $X_{k+1}$ is obtained not only from iterate $X_k$ but also from other previous iterates) for computing the inverse of a nonsingular matrix. In the iterative expression of the designed method, inverse operators do not appear. We prove the order of convergence of the proposed scheme and we extend it for approximating the Moore–Penrose inverse of rectangular matrices and the Drazin inverse of singular square matrices.
For analyzing the order of convergence of an iterative method with memory, we use the concept of R-order introduced in [14] by Ortega and Rheinboldt and the following result.
Let us consider an iterative method with memory (IM) that generates a sequence $\{x_k\}_{k \geq 0}$ of estimations of the solution $x^*$, and let us also assume that this sequence converges to $x^*$. If there exist a nonzero constant $\eta$ and nonnegative numbers $t_i$, $i = 0, 1, \ldots, p$, such that the inequality
$$\|e_{k+1}\| \leq \eta \prod_{i=0}^{p} \|e_{k-i}\|^{t_i}$$
holds, where $e_k = x_k - x^*$ is the error of iterate $x_k$, then the R-order of convergence of (IM) satisfies
$$O_R(\mathrm{IM}, x^*) \geq s^*,$$
where $s^*$ is the unique positive root of the polynomial
$$p(s) = s^{p+1} - \sum_{i=0}^{p} t_i \, s^{p-i}.$$
The proof of this result can be found in [14].
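For instance, if an iterative scheme with memory satisfies $\|e_{k+1}\| \leq \eta \, \|e_k\| \, \|e_{k-1}\|$, the associated polynomial is $s^2 - s - 1$, whose unique positive root is the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618$; this is precisely the super-linear R-order obtained for the secant-type schemes in the following sections.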
From here on, the work is organized as follows. In the next section, we describe how a secant-type method, free of inverse operators, is constructed for estimating the inverse of a nonsingular complex matrix, proving its order of convergence. In Sections 3 and 4, we study the generalization of the proposed method for computing the Moore–Penrose inverse of a rectangular complex matrix and the Drazin inverse of a singular square matrix. Section 5 is devoted to the numerical tests that analyze the performance of the proposed schemes and confirm the theoretical results. The paper finishes with a section of conclusions.
2. A Secant-Type Method for Matrix Inversion
Let us recall that, for a scalar nonlinear equation $f(x) = 0$, the secant method is an iterative scheme with memory of the form
$$x_{k+1} = x_k - \frac{f(x_k)}{s_k}, \quad k = 1, 2, \ldots, \qquad (3)$$
with $s_k$ satisfying $s_k (x_k - x_{k-1}) = f(x_k) - f(x_{k-1})$, $k = 1, 2, \ldots$, given $x_0$ and $x_1$ as initial approximations.
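A minimal Matlab sketch of this scalar iteration is the following; the test function $f(x) = 1/x - a$ (whose zero is the reciprocal of a) and the starting points are illustrative assumptions, chosen because this is precisely the scalar problem that leads to the matrix schemes studied below.

```matlab
% Scalar secant method: x_{k+1} = x_k - f(x_k)*(x_k - x_{k-1})/(f(x_k) - f(x_{k-1})).
% Test problem and starting points are illustrative assumptions.
a  = 3;                      % we seek the zero of f(x) = 1/x - a, that is, x* = 1/3
f  = @(x) 1./x - a;
x0 = 0.1;  x1 = 0.2;         % two initial approximations
for k = 1:30
    x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));   % secant update
    if abs(x2 - x1) < 1e-14, break; end
    x0 = x1;  x1 = x2;
end
```

Note that, for this particular f, the secant update simplifies to $x_{k+1} = x_k + x_{k-1} - a \, x_k x_{k-1}$, an expression that involves no divisions.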
For a nonlinear matrix equation $F(X) = 0$, where $F: \mathbb{C}^{n \times n} \longrightarrow \mathbb{C}^{n \times n}$, the secant method can be described as
$$X_{k+1} = X_k - S_k^{-1} F(X_k), \quad k = 1, 2, \ldots, \qquad (4)$$
where $X_0$ and $X_1$ are initial estimations and $S_k$ is a suitable linear operator satisfying the secant equation
$$S_k (X_k - X_{k-1}) = F(X_k) - F(X_{k-1}),$$
where $X_{k-1}, X_k \in \mathbb{C}^{n \times n}$ and $k = 1, 2, \ldots$. Thus, it is necessary to solve, at each iteration, the linear system $S_k (X_{k+1} - X_k) = -F(X_k)$. It is proven in [15] that, with this formulation, the secant method converges to the solution of $F(X) = 0$.
Let us consider an $n \times n$ nonsingular complex matrix A. We want to construct iterative schemes for computing the inverse of A, that is, iterative methods for solving the matrix equation
$$F(X) = X^{-1} - A = 0. \qquad (5)$$
The secant method was adapted by Monsalve et al. [15] to estimate the solution of Equation (5), that is, the inverse of A, when the matrix is diagonalizable. The secant method applied to Equation (5) (see [15]) gives us:
Now, we extend the result presented in [15] to any nonsingular matrix, not necessarily diagonalizable. If A is a nonsingular complex matrix of size $n \times n$, then there exist unitary matrices U and V, of size $n \times n$, such that
$$A = U \Sigma V^*, \qquad \Sigma = \operatorname{diag}(\sigma_1, \sigma_2, \ldots, \sigma_n), \qquad (7)$$
$\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$ being the singular values of A.
Let us define $D_k = V^* X_k U$, that is, $X_k = V D_k U^*$. Then, from Equation (6),
Several algebraic manipulations allow us to assure that
If we choose initial estimations $X_0$ and $X_1$ such that $V^* X_0 U$ and $V^* X_1 U$ are diagonal matrices, then all matrices $D_k$ are diagonal and therefore they commute with each other and with $\Sigma$, for all $k \geq 0$. Thus, from Equation (8), we assure
and, from this expression, we propose the secant-type method:
$X_0$ and $X_1$ being the given initial approximations.
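A minimal Matlab sketch of a secant-type scheme of this family is given below. We assume the inverse-free update $X_{k+1} = X_k + X_{k-1} - X_{k-1} A X_k$, i.e., the matrix counterpart of the simplified scalar secant step for $f(x) = 1/x - a$; this form yields the residual factorization $I - A X_{k+1} = (I - A X_{k-1})(I - A X_k)$, consistent with the super-linear (golden-ratio) order proved below. The function name and stopping parameters are illustrative assumptions.

```matlab
% Sketch of an inverse-free secant-type iteration with memory (assumed form of (9)):
% X_{k+1} = X_k + X_{k-1} - X_{k-1}*A*X_k, so that
% I - A*X_{k+1} = (I - A*X_{k-1})*(I - A*X_k)  (super-linear residual decay).
function X = secant_type_inverse(A, X0, X1, tol, maxit)
    Xprev = X0;                         % plays the role of X_{k-1}
    X     = X1;                         % plays the role of X_k
    for k = 1:maxit
        Xnew = X + Xprev - Xprev*A*X;   % secant-type update, no inverses involved
        if norm(Xnew - X, 'fro') < tol
            X = Xnew;
            return
        end
        Xprev = X;
        X     = Xnew;
    end
end
```

For a nonsingular matrix A, initial guesses proportional to $A^*$, such as X0 = 0.5*A'/(norm(A,1)*norm(A,inf)) and X1 = A'/(norm(A,1)*norm(A,inf)), are diagonalized by the singular vectors of A, as required by Theorem 1 below.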
The analysis of the convergence of the iterative method with memory in Equation (9) is presented in the following result.
Theorem 1.
Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular matrix, with singular value decomposition $A = U \Sigma V^*$. Let $X_0$ and $X_1$ be such that $V^* X_0 U$ and $V^* X_1 U$ are diagonal matrices. Then, the sequence $\{X_k\}_{k \geq 0}$, obtained by Equation (9), converges to $A^{-1}$ with super-linear convergence.
Proof.
Let us consider unitary matrices U and V such that the singular value decomposition in Equation (7) is satisfied, where $\sigma_1, \sigma_2, \ldots, \sigma_n$ are the singular values of A.
Then, component by component, we obtain
By subtracting from both sides of Equation (10) and denoting , we get
From Equation (11), we conclude that, for each value of j from 1 to n, the sequence defined in Equation (10) converges to $1/\sigma_j$ with order of convergence given by the unique positive root of $p(t) = t^2 - t - 1$, that is, $\frac{1+\sqrt{5}}{2} \approx 1.618$ (by using the result of Ortega–Rheinboldt mentioned in the Introduction).
Then, for each j, , there exists a satisfying , and tends to zero when k tends to infinity. Moreover,
Thus,
where
Therefore,
which allows us to affirm that $\{X_k\}_{k \geq 0}$ converges to $A^{-1}$. □
On the other hand, Higham in [10] introduced the following definition for the stability of the iterative process $X_{k+1} = H(X_k)$, with a fixed point $X^*$. If we assume that H is Fréchet differentiable at $X^*$, the iteration is stable in a neighborhood of $X^*$ if the Fréchet derivative $H'(X^*)$ has bounded powers, that is, there exists a positive constant C such that
$$\left\| \left( H'(X^*) \right)^k \right\| \leq C, \quad \text{for all } k \geq 1.$$
Therefore, the following result can be stated for the secant method.
Theorem 2.
The secant method in Equation (9) for the estimation of the inverse matrix $A^{-1}$ is a stable iterative scheme.
Proof.
The proof is made by demonstrating that the Fréchet derivative of the fixed-point operator at the solution is an idempotent matrix.
The secant-type method, described as a fixed-point scheme, can be written as
It is easy to deduce that
where . Then, for , we have
Thus, the derivative operator is idempotent, so its powers are bounded and the iteration is stable. □
3. A Secant-Type Method for Approximating the Moore–Penrose Inverse
Now, we would like to extend the proposed iterative scheme for computing the Moore–Penrose inverse [7] of a complex matrix A, denoted by $A^{\dagger}$. It is the unique matrix X satisfying the equations
$$A X A = A, \qquad X A X = X, \qquad (A X)^* = A X, \qquad (X A)^* = X A.$$
If $A \in \mathbb{C}^{m \times n}$, by using the singular value decomposition of A, we obtain
$$A = U \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} V^*,$$
being $\Sigma_r = \operatorname{diag}(\sigma_1, \ldots, \sigma_r)$, $r = \operatorname{rank}(A)$. U and V are unitary matrices with $U \in \mathbb{C}^{m \times m}$ and $V \in \mathbb{C}^{n \times n}$. It is also known that
$$A^{\dagger} = V \begin{pmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{pmatrix} U^*,$$
where $\Sigma_r^{-1} = \operatorname{diag}(1/\sigma_1, \ldots, 1/\sigma_r)$.
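For illustration, the following Matlab lines build $A^{\dagger}$ from the singular value decomposition and check the four Penrose identities numerically; the test matrix and its size are our own assumptions, and the built-in function pinv is used only as a reference.

```matlab
% Sketch: Moore-Penrose inverse from the SVD, A^+ = V * Sigma^+ * U^*.
A = randn(8, 5);                         % illustrative full-rank test matrix (assumption)
[U, S, V] = svd(A);                      % A = U*S*V'
r  = rank(A);
Sp = zeros(size(A, 2), size(A, 1));      % Sigma^+ holds the reciprocal singular values
Sp(1:r, 1:r) = diag(1 ./ diag(S(1:r, 1:r)));
Ap = V * Sp * U';                        % Moore-Penrose inverse of A
% The four Penrose equations should hold up to rounding errors:
checks = [norm(A*Ap*A - A), norm(Ap*A*Ap - Ap), ...
          norm((A*Ap)' - A*Ap), norm((Ap*A)' - Ap*A), norm(Ap - pinv(A))]
```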
The convergence of the method in Equation (9) for Moore–Penrose inverse is established in the following result.
Theorem 3.
Let $A \in \mathbb{C}^{m \times n}$ be a matrix with $\operatorname{rank}(A) = r$, with singular value decomposition
$$A = U \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} V^*.$$
Let $X_0$ and $X_1$ be initial estimations such that
$$V^* X_0 U = \begin{pmatrix} D_0 & 0 \\ 0 & 0 \end{pmatrix}, \qquad V^* X_1 U = \begin{pmatrix} D_1 & 0 \\ 0 & 0 \end{pmatrix},$$
being $D_0$ and $D_1$ diagonal matrices of size $r \times r$. Then, the sequence $\{X_k\}_{k \geq 0}$, obtained by Equation (9), converges to $A^{\dagger}$ with super-linear order of convergence.
Proof.
Given the singular value decomposition of A, for any fixed arbitrary value of k, we define matrix as
being . Thus, by using the iterative expression in Equation (9), we obtain
Therefore, as $D_0$ and $D_1$ are diagonal matrices, so are all matrices $D_k$, and the expression
represents r scalar uncoupled iterations converging to $1/\sigma_j$, $j = 1, \ldots, r$, with super-linear order, that is to say,
with , being such that sequence tends to zero for k tending to infinity.
With an analogous argument as in Theorem 1,
which allows us to affirm that $\{X_k\}_{k \geq 0}$ converges to $A^{\dagger}$, with the desired order of convergence. □
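Under the same assumption on the form of Equation (9) as in Section 2, the secant_type_inverse sketch given there applies verbatim to rectangular matrices; for instance, initial guesses proportional to $A^*$ have the block-diagonal structure required by Theorem 3. The matrix size, scaling, and tolerances below are illustrative assumptions.

```matlab
% Using the secant_type_inverse sketch of Section 2 (assumed update) to estimate A^+.
A   = randn(60, 40);                               % illustrative rectangular test matrix
a0  = 1 / (norm(A, 1) * norm(A, inf));             % scaling of the initial guesses
X0  = 0.5 * a0 * A';                               % X_0 proportional to A^*
X1  = a0 * A';                                     % X_1 proportional to A^*
Xs  = secant_type_inverse(A, X0, X1, 1e-10, 200);  % same sketch as in Section 2
err = norm(Xs - pinv(A), 'fro')                    % should be close to zero
```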
4. A Secant-Type Method for Approximating the Drazin Inverse
Drazin, in 1958 (see [10]), proposed a different kind of generalized inverse, in whose definition some of the conditions of the Moore–Penrose inverse are modified and the index of the matrix appears. The importance of this inverse has motivated many researchers to propose algorithms for its computation.
It is known (see [10]) that the smallest nonnegative integer l such that $\operatorname{rank}(A^{l+1}) = \operatorname{rank}(A^{l})$ is called the index of A, and it is denoted by $\operatorname{ind}(A)$. If A is a complex matrix of size $n \times n$, the Drazin inverse of A, denoted by $A^D$, is the unique matrix X satisfying
$$X A X = X, \qquad A X = X A, \qquad A^{l+1} X = A^{l},$$
where l is the index of A.
If $\operatorname{ind}(A) = 1$, then X is called the g-inverse or group inverse of A and, if $\operatorname{ind}(A) = 0$, then A is nonsingular and $A^D = A^{-1}$. Let us observe that the idempotent matrix $A A^D$ is the projector on $\mathcal{R}(A^{l})$ along $\mathcal{N}(A^{l})$, where $\mathcal{R}(A^{l})$ and $\mathcal{N}(A^{l})$ denote the range and null space of $A^{l}$, respectively.
In [16], the following result is presented, which is used in the proof of the main result.
Proposition 1.
If P is the projector on a space A along a space B, the following statements hold:
- (a) $P X = X$ if and only if $\mathcal{R}(X) \subseteq A$.
- (b) $X P = X$ if and only if $B \subseteq \mathcal{N}(X)$.
Li and Wei [1] proved that the Newton–Schulz method in Equation (1) can be used for approximating the Drazin inverse, using as initial estimation a matrix of the form $X_0 = \alpha A^{l}$, with $l = \operatorname{ind}(A)$, where the parameter $\alpha$ is chosen so that the corresponding convergence condition is satisfied. One way for selecting the initial matrix, used by different authors, is
where $\operatorname{tr}(\cdot)$ denotes the trace of a square matrix. Another fruitful initial matrix is
Using two initial matrices of this form, $X_0 = \alpha A^{l}$ and $X_1 = \beta A^{l}$, with $\alpha$ and $\beta$ constants, we want to prove that the sequence obtained by the secant-type method in Equation (9) converges to the Drazin inverse $A^D$. In this case, we use a different type of proof than in the previous cases.
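As an illustration of this setting, the Matlab sketch below computes the index of a small singular matrix, builds two initial guesses proportional to $A^{l}$ (the scaling factor is our own choice, not one of the specific formulas mentioned above), runs the secant-type sketch of Section 2 (with its assumed update), and checks the three defining equations of the Drazin inverse.

```matlab
% Sketch: secant-type approximation of the Drazin inverse (assumed update of Section 2).
A = [2 0 0; 1 0 0; 0 0 0];             % illustrative singular matrix (assumption)
l = 0;                                  % index: smallest l with rank(A^(l+1)) = rank(A^l)
while rank(A^(l+1)) ~= rank(A^l)
    l = l + 1;
end
a0 = 1 / norm(A^(l+1), 'fro');          % illustrative scaling, not the choices of [1]
X0 = 0.5 * a0 * A^l;                    % two initial guesses proportional to A^l
X1 = a0 * A^l;
X  = secant_type_inverse(A, X0, X1, 1e-12, 200);
drazin_checks = [norm(X*A*X - X), norm(A*X - X*A), norm(A^(l+1)*X - A^l)]
```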
Theorem 4.
Let $A \in \mathbb{C}^{n \times n}$ be a singular square matrix with $\operatorname{ind}(A) = l$. We choose as initial estimations $X_0 = \alpha A^{l}$ and $X_1 = \beta A^{l}$, with $\alpha$ and $\beta$ nonzero constants. Then, the sequence generated by Equation (9) satisfies the following error equation
Thus, $\{X_k\}_{k \geq 0}$ converges to $A^D$ with order of convergence $\frac{1+\sqrt{5}}{2}$, that is, with super-linear convergence.
Proof.
Let us define , . Then,
Therefore, . In addition, it is easy to prove that, if we choose and such that and , then , .
Now, we denote by $E_k$ the error of iterate k. From the selection of $X_0$ and $X_1$ and by applying Proposition 1, we establish
Thus,
From this identity, there exists such that
Thus, $\|E_k\|$ tends to zero and therefore $X_k$ tends to $A^D$.
On the other hand,
Now, we analyze .
but
In the last equality, we use that , in fact , . In addition, and .
Therefore,
Finally, by applying the theorem of convergence for iterative methods with memory mentioned in the Introduction, we conclude that the order of convergence of the secant-type method is the unique positive root of $p(t) = t^2 - t - 1$, that is, $\frac{1+\sqrt{5}}{2} \approx 1.618$. □
5. Numerical Experiments
In this section, we check the behavior of the secant-type method for computing the inverse, the Moore–Penrose inverse, and the Drazin inverse of different test matrices A, comparing it with the Newton–Schulz scheme in Equation (1). Numerical computations were carried out in Matlab R2018b (MathWorks, Natick, USA) with an Intel(R) Xeon(R) CPU E5-2420 v2 processor at 2.20 GHz. As stopping criterion, we required that the norm of the difference between consecutive iterates or the norm of the residual falls below a prescribed tolerance.
To numerically check the theoretical results, Jay [17] introduced the computational order of convergence (COC), defined as
$$\mathrm{COC} \approx \frac{\ln \left( \|X_{k+1} - X^*\| / \|X_k - X^*\| \right)}{\ln \left( \|X_k - X^*\| / \|X_{k-1} - X^*\| \right)},$$
where $X^*$ denotes the exact inverse (or generalized inverse) being approximated.
In a similar way, the authors of [18] presented another numerical approximation of the theoretical order, denoted by ACOC and defined as
$$\mathrm{ACOC} \approx \frac{\ln \left( \|X_{k+1} - X_k\| / \|X_k - X_{k-1}\| \right)}{\ln \left( \|X_k - X_{k-1}\| / \|X_{k-1} - X_{k-2}\| \right)}.$$
We use either of these computational order estimates to show the accuracy of the corresponding approximations for the proposed method. In case the vector of COC (or ACOC) values is not stable, we write “-” in the corresponding table.
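For completeness, the following Matlab sketch shows how the ACOC value reported in the tables can be estimated from stored iterates; the cell-array interface and the use of the Frobenius norm are our own assumptions.

```matlab
% Sketch: ACOC estimate from the last four stored iterates X_{K-3}, ..., X_K.
% Xs is assumed to be a cell array {X_1, X_2, ..., X_K} collected during the run.
function p = acoc_estimate(Xs)
    K  = numel(Xs);
    d1 = norm(Xs{K}   - Xs{K-1}, 'fro');
    d2 = norm(Xs{K-1} - Xs{K-2}, 'fro');
    d3 = norm(Xs{K-2} - Xs{K-3}, 'fro');
    p  = log(d1 / d2) / log(d2 / d3);   % ratio of logarithmic decay rates
end
```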
Example 1.
In this example, matrix A is a random matrix with . The initial estimation used for the Newton–Schulz scheme is and for the secant method and .
In Table 1, we show the results obtained by the Newton–Schulz and secant-type methods for the different random matrices: the number of iterations, the residuals, and the value of COC. The results are in concordance with the order of convergence of each scheme. All generated random matrices are nonsingular and both methods give us an approximation of the inverse of A. The Newton–Schulz method needs a lower number of iterations than the secant scheme, as was expected, since the former is quadratic and the latter super-linear.
Table 1.
Results for approximating the inverse of a random matrix (Example 1).
Example 2.
In this example, matrix A is a random $m \times n$ matrix, for different values of m and n. The initial matrices are calculated in the same way as in the previous example.
In Table 2, we show the results obtained by the Newton–Schulz and secant-type methods for the different random matrices: the number of iterations, the residuals, and the value of ACOC. The results are in concordance with the order of convergence of each scheme, despite the matrices being non-square. Both methods give us an approximation of the Moore–Penrose inverse of A.
Table 2.
Results for approximating the Moore–Penrose inverse of a rectangular random matrix (Example 2).
Example 3.
In this example, we want to analyze the performance of the secant method for computing the Drazin inverse of the following matrix A of size with .
Here, its Drazin inverse is expressed by
By using the initial matrix and the same stopping criterion as in the previous examples, the Newton–Schulz method gives us the following information:
- ;
- ; and
- Exact error .
On the other hand, the secant method is used with and , obtaining:
- ;
- ; and
- Exact error .
Example 4.
This is another example for computing the Drazin inverse of the following matrix B of size (see [1]) with .
Now, its Drazin inverse is expressed by
By using the initial matrix and the same stopping criterion as in the previous examples, the Newton–Schulz method gives us the following information:
- ;
- ; and
- Exact error .
On the other hand, the secant method is used with and , obtaining:
- ;
- ; and
- Exact error .
Again, the numerical tests confirm the theoretical results.
Example 5.
Finally, in this example, we test the Newton–Schulz and secant methods on several well-known square matrices of size $n \times n$, constructed by using different Matlab functions. Specifically, the test matrices are the following (a sketch of possible Matlab constructions is given after the list):
- (a) Hankel matrix of size n × n.
- (b) Toeplitz matrix of size n × n.
- (c) Symmetric and positive definite matrix of size n × n.
- (d) Leslie matrix of size n × n. This type of matrix appears in problems of population models.
- (e) Ill-conditioned matrix of size n × n.
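The exact Matlab calls used to generate these matrices are not reproduced above; as a hedged illustration, matrices with the listed structures can be built, for instance, as follows (the size and all parameters are our own choices).

```matlab
% Possible constructions of test matrices with the structures of Example 5
% (illustrative choices only; not necessarily those used for Table 3).
n  = 100;
A1 = hankel(1:n);                          % (a) Hankel matrix
A2 = toeplitz([2, -1, zeros(1, n-2)]);     % (b) Toeplitz (tridiagonal) matrix
A3 = gallery('minij', n);                  % (c) symmetric positive definite matrix
fert = [0, ones(1, n-1)];                  % (d) Leslie matrix: fertility rates (first row)
surv = 0.8 * ones(1, n-1);                 %     and survival rates (subdiagonal)
A4 = [fert; diag(surv), zeros(n-1, 1)];
A5 = hilb(n);                              % (e) classical ill-conditioned matrix
```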
By using the stopping criterion
and the initial matrices and , we obtain the numerical results that appear in Table 3. In these cases, as in the previous ones, the proposed method shows good performance in terms of stability, precision, and number of iterations needed. We must take into account that both schemes have different orders of convergence, as displayed in Table 3.
Table 3.
Results for approximating the inverse of classical square matrices (Example 5).
6. Conclusions
An iterative method with memory for approximating the inverse of nonsingular square complex matrices, the Moore–Penrose inverse of rectangular complex matrices, and the Drazin inverse of square singular matrices has been presented. As far as we know, it is the first time that a scheme with memory is employed to approximate the solution of nonlinear matrix equations. The proposed scheme is free of inverse operators and its iterative expression is simple; therefore, it is computationally efficient. For suitable initial approximations, the convergence is guaranteed without further conditions on the matrix. Numerical tests allowed us to analyze the performance of the proposed scheme and to confirm the theoretical results.
Author Contributions
Investigation, S.A., A.C. and J.R.T.; Supervision, M.P.V.; Validation, A.C.; Writing—original draft, S.A. and M.P.V.; Writing—review and editing, J.R.T. All the authors contributed to the different parts of this manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), Generalitat Valenciana PROMETEO/2016/089, and FONDOCYT 029-2018 República Dominicana.
Acknowledgments
The authors would like to thank the anonymous reviewers for their useful comments and suggestions that have improved the final version of this manuscript.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
References
- Li, X.; Wei, Y. Iterative methods for the Drazin inverse of a matrix with a complex spectrum. Appl. Math. Comput. 2004, 147, 855–862. [Google Scholar] [CrossRef]
- Li, H.B.; Huang, T.Z.; Zhang, Y.; Liu, X.P.; Gu, T.X. Chebyshev-type methods and preconditioning techniques. Appl. Math. Comput. 2011, 218, 260–270. [Google Scholar] [CrossRef]
- Soleymani, F.; Stanimirović, P.S. A higher order iterative method for computing the Drazin inverse. Sci. World J. 2013, 2013, 708647. [Google Scholar] [CrossRef] [PubMed]
- Li, W.; Li, J.; Qiao, T. A family of iterative methods for computing Moore-Penrose inverse of a matrix. Linear Algebra Appl. 2013, 438, 47–56. [Google Scholar]
- Toutounian, F.; Soleymani, F. An iterative method for computing the approximate inverse of a square matrix and the Moore-Penrose inverse of a nonsquare matrix. Appl. Math. Comput. 2015, 224, 671–680. [Google Scholar]
- Soleymani, F.; Salmani, H.; Rasouli, M. Finding the Moore-Penrose inverse by a new matrix iteration. Appl. Math. Comput. 2015, 47, 33–48. [Google Scholar] [CrossRef]
- Ben-Israel, A.; Greville, T.N.E. Generalized Inverses, 2nd ed.; Springer: New York, NY, USA, 2003. [Google Scholar]
- Gu, X.; Huang, T.; Ji, C.; Carpentieri, B.; Alikhanov, A.A. Fast iterative method with a second-order implicit difference scheme for time-space fractional convection-diffusion equation. J. Sci. Comput. 2017, 72, 957–985. [Google Scholar] [CrossRef]
- Li, M.; Gu, X.; Huang, C.; Fei, M.; Zhang, G. A fast linearized conservative finite element method for the strongly coupled nonlinear fractional Schrödinger equations. J. Comput. Phys. 2018, 358, 256–282. [Google Scholar] [CrossRef]
- Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
- Schulz, G. Iterative Berechnung der reziproken Matrix. Z. Angew. Math. Mech. 1933, 13, 57–59. [Google Scholar] [CrossRef]
- Li, W.G.; Li, Z. A family of iterative methods for computing the approximate inverse of a square matrix and inner inverse of a non-square matrix. Appl. Math. Comput. 2010, 215, 3433–3442. [Google Scholar] [CrossRef]
- Chen, H.; Wang, Y. A family of higher-order convergent iterative methods for computing the Moore-Penrose inverse. Appl. Math. Comput. 2011, 218, 4012–4016. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press, Inc.: Cambridge, MA, USA, 1970. [Google Scholar]
- Monsalve, M.; Raydan, M. A secant method for nonlinear matrix problems. Lect. Notes Electr. Eng. 2011, 80, 387–402. [Google Scholar]
- Wang, G.; Wei, Y.; Qiao, S. Generalized Inverses; Science Press: New York, NY, USA, 2004. [Google Scholar]
- Jay, L. A note on Q-order of convergence. BIT 2001, 41, 422–429. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).