Abstract
Some iterative schemes with memory were designed for approximating the inverse of a nonsingular square complex matrix and the Moore–Penrose inverse of a singular square matrix or an arbitrary complex matrix. A Kurchatov-type scheme and Steffensen's method with memory were developed for estimating these types of inverses, improving, in the second case, the order of convergence of the Newton–Schulz scheme. The convergence and its order were studied in the four cases, and their stability was checked as discrete dynamical systems. Some numerical examples, involving large matrices, are presented to confirm the theoretical results and to compare the performance of the proposed methods with that of other known ones.
Keywords: nonlinear matrix equations; inverse and pseudo-inverse matrices; iterative procedure; methods with memory
MSC: 65H99; 15A09
1. Introduction
Calculating the inverse of a matrix, especially one of large size, is a very difficult task with a high computational cost. The alternative is the use of iterative algorithms to estimate it. In a vast range of fields, such as image and signal processing [1,2,3,4], encryption [5,6], control system analysis [7,8], etc., it is necessary to calculate the inverse, or different generalized inverses, in order to solve the problems posed.
In recent years, many iterative schemes of different orders of convergence have been designed to estimate the inverse of a complex matrix A or some generalized inverse (Moore–Penrose inverse, Drazin inverse, etc.). In 2013, Weiguo et al. [9] constructed a sequence of third-order iterations converging to the Moore–Penrose inverse. Also in that year, Toutounian and Soleymani [10] presented a high-order method for approximating inverses and pseudo-inverses of complex matrices based on Homeier's scheme, with a derivative-free composition. More recently, Stanimirović et al. [11] designed efficient transformations of the hyperpower iterative method for computing generalized inverses, with the aim of minimizing the number of matrix-by-matrix products required per cycle. In 2020, Kaur et al. established in [12] new formulations of the fifth-order hyperpower method to compute the weighted Moore–Penrose inverse, improving the efficiency indices; such approximations were found to be robust and effective when implemented as preconditioners for solving linear systems. All these schemes were designed starting from iterative procedures without memory, that is, procedures where each new iterate is calculated using only the information provided by the previous one.
Iterative procedures that use more than one previous iterate to calculate the next one are called methods with memory. In the context of matrix inverse approximation, the secant scheme was proposed by the authors in [13]. For a nonsingular matrix, the secant method gives an estimation of the inverse and, when the matrix is singular, an approximation of the pseudo-inverse and the Drazin inverse. Furthermore, superlinear convergence was demonstrated in all cases.
In this manuscript, we focused on constructing several iterative methods with memory, free of inverse operators and with different orders of convergence, for finding the inverse of a nonsingular complex matrix. We also analyzed the proposed schemes for computing the Moore–Penrose inverse of complex rectangular matrices. On the other hand, these procedures allow approximating other generalized inverses such as the Drazin inverse, the group inverse, etc., although this is beyond the scope of this work.
Let A be a complex nonsingular matrix. The design of iterative algorithms without memory for estimating the inverse matrix, which we call Newton–Schulz-type methods, is mostly based on iterative solvers for the scalar equation $f(x) = \frac{1}{x} - a = 0$ applied to the nonlinear matrix equation:
$$F(X) := X^{-1} - A = 0, \quad (1)$$
where $F: \mathbb{C}^{n \times n} \rightarrow \mathbb{C}^{n \times n}$ is a nonlinear matrix function.
The most-used iterative scheme to approximate $A^{-1}$ is the Newton–Schulz scheme [14]:
$$X_{k+1} = X_k (2I - A X_k), \quad k = 0, 1, 2, \ldots, \quad (2)$$
where I is the identity matrix of size $n \times n$.
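As an illustration, the following minimal NumPy sketch implements iteration (2). The initial guess $X_0 = A^*/(\|A\|_1 \|A\|_\infty)$ is a classical choice that guarantees convergence of the Newton–Schulz iteration; it is used here only for the example and is not prescribed by the text.

```python
import numpy as np

def newton_schulz(A, tol=1e-10, max_iter=100):
    # Newton-Schulz iteration (2): X_{k+1} = X_k (2I - A X_k).
    n = A.shape[0]
    # Classical inverse-free initial guess ensuring rho(I - A X_0) < 1:
    X = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for k in range(max_iter):
        X_new = X @ (2 * I - A @ X)
        if np.linalg.norm(X_new - X, np.inf) < tol:
            return X_new, k + 1
        X = X_new
    return X, max_iter

A = np.random.rand(100, 100) + 100 * np.eye(100)   # well-conditioned test matrix
Xinv, iters = newton_schulz(A)
print(iters, np.linalg.norm(A @ Xinv - np.eye(100)))
```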
On the other hand, in the context of iterative procedures with memory, the authors presented in [13] a secant-type method (SM), whose iterative expression is:
$$X_{k+1} = X_k + X_{k-1}(I - A X_k), \quad k = 1, 2, \ldots, \quad (3)$$
where $X_0$, $X_1$ are the starting guesses. With a particular choice of the initial approximations, the authors proved that the sequence $\{X_k\}$, obtained by (3), converges to $A^{-1}$ with order of convergence $\frac{1+\sqrt{5}}{2} \approx 1.618$.
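Under the assumption that the reconstructed expression (3) above is the intended one, a minimal sketch of the secant-type scheme reads:

```python
import numpy as np

def secant_inverse(A, X0, X1, tol=1e-10, max_iter=100):
    # Secant-type iteration (3): X_{k+1} = X_k + X_{k-1}(I - A X_k),
    # the matrix translation of the scalar secant step for f(x) = 1/x - a.
    I = np.eye(A.shape[0])
    Xold, X = X0, X1
    for k in range(max_iter):
        Xnew = X + Xold @ (I - A @ X)
        if np.linalg.norm(Xnew - X, np.inf) < tol:
            return Xnew, k + 1
        Xold, X = X, Xnew
    return X, max_iter
```

Seeds proportional to $A^*$ (for instance, $X_0$ as in the previous sketch and $X_1$ as one Newton–Schulz step from it) satisfy the diagonality hypotheses used in the convergence results below.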
To analyze the convergence order of iterative methods with memory for solving nonlinear equations $f(x) = 0$, the R-order is used (see [15]), which we summarize below.
Theorem 1.
Let ψ be an iterative method with memory that generates a sequence $\{x_k\}$ of approximations to the root α of $f(x) = 0$, and let this sequence converge to α. If there exist a nonzero constant η and nonnegative numbers $t_i$, $i = 0, 1, \ldots, m$, such that the inequality:
$$|e_{k+1}| \leq \eta \prod_{i=0}^{m} |e_{k-i}|^{t_i}$$
holds, then the R-order of convergence of the iterative method ψ satisfies the inequality:
$$O_R(\psi, \alpha) \geq s^*,$$
where $s^*$ is the unique positive root of the equation:
$$s^{m+1} - \sum_{i=0}^{m} t_i\, s^{m-i} = 0.$$
Here, $e_k = x_k - \alpha$ denotes the error of the approximation $x_k$ in the kth iterative step.
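In practice, Theorem 1 reduces the computation of the R-order to finding the positive root of the indicial polynomial; a small sketch (the helper name r_order is ours):

```python
import numpy as np

def r_order(t):
    # Positive root of s^{m+1} - t_0 s^m - ... - t_m = 0 (Theorem 1).
    roots = np.roots([1.0] + [-ti for ti in t])
    return max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)

print(r_order([1, 1]))  # s^2 - s - 1 = 0  -> 1.6180... (secant-type schemes)
print(r_order([2, 1]))  # s^2 - 2s - 1 = 0 -> 2.4142... (Steffensen with memory)
```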
Continuing with the idea of designing iterative procedures with memory, in this work, we used the scalar iterative methods of Kurchatov and Steffensen with memory, which we adapted to the context of matrix equations. In the scalar case, this kind of procedure can reach a higher order of convergence than schemes without memory with the same number of functional evaluations and usually has better stability properties. However, as far as we know, only the secant scheme has been extended to matrix equations with good results, in [].
Kurchatov's scheme is an iterative algorithm with memory with quadratic convergence for solving scalar equations (see [16]), which is deduced from Newton's method by replacing $f'(x_k)$ with Kurchatov's divided difference, that is:
$$x_{k+1} = x_k - \frac{f(x_k)}{f[2x_k - x_{k-1}, x_{k-1}]}, \quad f[2x_k - x_{k-1}, x_{k-1}] = \frac{f(2x_k - x_{k-1}) - f(x_{k-1})}{2(x_k - x_{k-1})}.$$
Regarding Steffensen's scheme, it was initially developed in [17], where the derivative $f'(x_k)$ was replaced by the divided difference $f[x_k + f(x_k), x_k]$. It still holds the original quadratic convergence but, in practice, the set of converging initial estimations is considerably smaller than that of Newton's scheme. It was Traub who, in [18], introduced an accelerating parameter γ in the divided difference, taking $w_k = x_k + \gamma f(x_k)$, and derived an iterative method whose error equation was:
$$e_{k+1} = \left(1 + \gamma f'(\alpha)\right) c_2 e_k^2 + O(e_k^3),$$
with $c_2 = \frac{f''(\alpha)}{2 f'(\alpha)}$. With the aim of increasing the order of convergence of the scheme, Traub defined an approximation of $\gamma = -\frac{1}{f'(\alpha)}$ by $\gamma_k = -\frac{1}{f[x_k, x_{k-1}]}$, obtaining the procedure:
$$w_k = x_k + \gamma_k f(x_k), \qquad x_{k+1} = x_k - \frac{f(x_k)}{f[w_k, x_k]},$$
which, given initial values $x_0$ and $\gamma_0$, has order of convergence $1 + \sqrt{2} \approx 2.414$.
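A minimal scalar sketch of this procedure with memory, writing the divided differences explicitly (the test function and starting values are arbitrary choices):

```python
def traub_steffensen_memory(f, x0, gamma0=-0.1, tol=1e-12, max_iter=50):
    # w_k = x_k + gamma_k f(x_k);  x_{k+1} = x_k - f(x_k)/f[w_k, x_k];
    # gamma_{k+1} = -1/f[x_{k+1}, x_k] approximates -1/f'(alpha).
    x, gamma = x0, gamma0
    for k in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, k
        w = x + gamma * fx
        dd = (f(w) - fx) / (w - x)              # first-order divided difference
        x_new = x - fx / dd
        gamma = -(x_new - x) / (f(x_new) - fx)  # -1/f[x_{k+1}, x_k]
        x = x_new
    return x, max_iter

root, iters = traub_steffensen_memory(lambda x: x * x - 0.5, 1.0)
print(root, iters)   # converges to 1/sqrt(2) with order 1 + sqrt(2)
```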
The stability of these schemes with memory for scalar problems was analyzed first in [19,20], showing very stable performance in both cases. In what follows, some definitions and properties of vectorial discrete dynamical systems are introduced; these tools will prove useful in the following sections.
Basic Concepts of Qualitative Studies of Schemes with Memory
Let us start with an iterative scheme with memory, which is used to solve the scalar problem $f(x) = 0$ using two previous iterates to calculate the next one:
$$x_{k+1} = \phi(x_{k-1}, x_k), \quad k \geq 1,$$
with $x_0$ and $x_1$ being its seeds. Therefore, a solution $\bar{x}$ is estimated whether $f(\bar{x}) = 0$ or, equivalently, $\bar{x} = \phi(\bar{x}, \bar{x})$. This estimation can be obtained as a fixed point of the auxiliary vectorial operator G by means of:
$$G(x_{k-1}, x_k) = (x_k, x_{k+1}) = \big(x_k, \phi(x_{k-1}, x_k)\big), \quad k \geq 1,$$
with again $x_0$ and $x_1$ being the seeds.
To study the qualitative behavior of a fixed point iterative scheme with memory, it is applied on a low-degree polynomial $p(x)$, generating a vectorial rational operator $G: \mathbb{R}^2 \rightarrow \mathbb{R}^2$. Then, the orbit of a point $(x_0, x_1)$ is the set:
$$\left\{(x_0, x_1), G(x_0, x_1), G^2(x_0, x_1), \ldots, G^m(x_0, x_1), \ldots\right\}.$$
Therefore, a point $(x, z)$ is a fixed point if it is the only point belonging to its orbit, and it is a T-periodic point if $G^T(x, z) = (x, z)$ and $G^t(x, z) \neq (x, z)$ for $0 < t < T$.
On the other hand, the orbit of a point can be classified depending on its asymptotic behavior, according to the next result.
Theorem 2
([21], p. 558). Let $G: \mathbb{R}^n \rightarrow \mathbb{R}^n$ be $\mathcal{C}^2$. Let us also assume that $x^*$ is a period-k point. Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of the Jacobian matrix $G'(x^*)$. Then:
- (a) If all the eigenvalues satisfy $|\lambda_j| < 1$, then $x^*$ is an attractor. It is said to be a superattractor if $\lambda_j = 0$, $j = 1, \ldots, n$.
- (b) If there exists an eigenvalue $\lambda_{j_0}$ such that $|\lambda_{j_0}| > 1$, then the periodic point $x^*$ is unstable (repelling or saddle).
- (c) If all the eigenvalues satisfy $|\lambda_j| > 1$, then $x^*$ is a repulsor.
Moreover, we define a point $(x, z)$ as a critical point of G if $\det\big(G'(x, z)\big) = 0$. Indeed, if a critical point is not related to a zero of $p(x)$, it is named a free critical point. In a similar way, if a fixed point is not related to a zero of $p(x)$, it is named a strange fixed point.
Denoting by $(\bar{x}, \bar{x})$ an attracting fixed point of G, its basin of attraction is defined as:
$$\mathcal{A}(\bar{x}) = \left\{(x_0, x_1) \in \mathbb{R}^2 : G^m(x_0, x_1) \rightarrow (\bar{x}, \bar{x}) \text{ when } m \rightarrow \infty\right\}.$$
The union of all the basins of attraction of G is defined as the Fatou set, $\mathcal{F}(G)$, and its complementary set in $\mathbb{R}^2$ is the Julia set, $\mathcal{J}(G)$. The latter holds all the repelling fixed points and sets the boundary among the basins of attraction.
In Section 2 and Section 3, we build a Kurchatov-type and a Steffensen-type iterative method with memory, respectively, to estimate the inverse of a nonsingular complex matrix. Both schemes are free of inverse operators, and we prove their order of convergence and stability. In Section 4, we extend these schemes to calculate the Moore–Penrose inverse of a complex rectangular matrix. Section 5 is devoted to the numerical tests to analyze their performance and confirm the theoretical results. We end the work with some conclusions.
2. Kurchatov-Type Method
The Kurchatov divided difference was defined in [16] as:
$$f[2x_k - x_{k-1}, x_{k-1}] = \frac{f(2x_k - x_{k-1}) - f(x_{k-1})}{2(x_k - x_{k-1})}.$$
For an equation $f(x) = 0$, Kurchatov's method has order of convergence two and iterative expression:
$$x_{k+1} = x_k - \frac{f(x_k)}{f[2x_k - x_{k-1}, x_{k-1}]}, \quad k = 1, 2, \ldots \quad (4)$$
If we apply this method to $F(X) = X^{-1} - A = 0$, where A is an $n \times n$ nonsingular matrix, not necessarily diagonalizable, we obtain:
Next, we show a technical lemma whose result we use later.
Lemma 1.
Let the matrices involved be invertible and commuting; then:
Proof.
Using algebraic manipulations,
and the lemma is proven. □
It is known that, for any nonsingular complex matrix A, there exist unitary matrices U and V such that $A = U \Sigma V^*$, $\Sigma$ being the diagonal matrix of the singular values of A, with $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$. Moreover, $U^*$ is the conjugate transpose of U. Then, $A^{-1} = V \Sigma^{-1} U^*$; that is, $\Sigma^{-1} = V^* A^{-1} U$, and we define $D_k = V^* X_k U$, applying these transformations in Equation (5),
whereby the iteration is rewritten in terms of $D_k$ and $\Sigma$. If the initial approximations $X_0$, $X_1$ are such that $D_0$ and $D_1$ are diagonal matrices, then all the matrices $D_k$ are diagonal, for all $k \geq 0$. Applying several algebraic manipulations on the last equation, we can ensure:
Therefore,
Using Lemma 1, we obtain:
Then,
Finally,
An inverse matrix still appears in this iterative expression; therefore, Kurchatov's scheme is not directly transferable to the calculation of matrix inverses.
For this reason, we propose a slight change in the Kurchatov divided difference:
$$f[x_k, 2x_{k-1} - x_k] = \frac{f(x_k) - f(2x_{k-1} - x_k)}{2(x_k - x_{k-1})}, \quad (6)$$
obtaining a new iterative scheme,
Now, we apply this iterative procedure to the matrix equation $F(X) = X^{-1} - A = 0$:
Using the transformations defined above and Lemma 1, we obtain:
and then,
Finally,
From this expression, by using again the transformations defined above, we have
and
where three matrix products appear, but there are no inverse operators. Equation (8) corresponds to the iterative scheme of the modified Kurchatov-type method for matrix inversion, which we call MKTM.
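Since the displayed formula (8) has not survived here, the following sketch implements an inverse-free Kurchatov-type update reconstructed from the scalar derivation: applying the modified divided difference to $f(x) = 1/x - a$ yields $x_{k+1} = x_k + (2x_{k-1} - x_k)(1 - a x_k)$. Its matrix translation below is an assumption consistent with that derivation, not necessarily the exact form of (8).

```python
import numpy as np

def mktm(A, X0, X1, tol=1e-10, max_iter=100):
    # Reconstructed Kurchatov-type update (an assumption, see the text):
    # X_{k+1} = X_k + (2 X_{k-1} - X_k)(I - A X_k); no inverse operators.
    I = np.eye(A.shape[0])
    Xold, X = X0, X1
    for k in range(max_iter):
        Xnew = X + (2 * Xold - X) @ (I - A @ X)
        if np.linalg.norm(Xnew - X, np.inf) < tol:
            return Xnew, k + 1
        Xold, X = X, Xnew
    return X, max_iter

# Seeds proportional to A* make V* X U diagonal (hypotheses of Theorem 3);
# the scaling below is a common choice, not the one prescribed in the paper.
A = np.random.rand(50, 50) + 50 * np.eye(50)
X0 = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X1 = X0 @ (2 * np.eye(50) - A @ X0)      # one Newton-Schulz step as second seed
Xinv, iters = mktm(A, X0, X1)
print(iters, np.linalg.norm(A @ Xinv - np.eye(50)))
```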
2.1. Convergence Analysis
Let us consider now two unitary matrices U and V satisfying $A = U \Sigma V^*$, where the singular values $\sigma_j$, $j = 1, \ldots, n$, of A fulfill $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$. We define again $D_k = V^* X_k U$, $k \geq 0$. By using Equation (7), we obtain:
Next, we demonstrate the order of convergence of the iterative procedure MKTM.
Theorem 3.
Let $A \in \mathbb{C}^{n \times n}$ be a nonsingular matrix and let $X_0$ and $X_1$ be the initial approximations, such that the matrices $V^* X_0 U$ and $V^* X_1 U$ are diagonal. Then, the sequence $\{X_k\}$ obtained by means of the iterative expression (8), where $A = U \Sigma V^*$ is the singular-value decomposition of A, converges to the inverse $A^{-1}$ with convergence order $\frac{1 + \sqrt{5}}{2} \approx 1.6180$.
Proof.
Let us denote $D_k = V^* X_k U$. By means of component-by-component calculations, we obtain:
Let us also denote by $d_k^{(j)}$ the jth diagonal element of $D_k$; by subtracting $1/\sigma_j$ from both sides of the last equation, we obtain:
This result shows that, for each $j = 1, \ldots, n$, $d_k^{(j)}$ in Equation (9) converges to $1/\sigma_j$ with order of convergence $\frac{1 + \sqrt{5}}{2}$, which is the only positive root of $s^2 - s - 1 = 0$ (see Theorem 1). In this way, for each j, there exists a sequence $\{\epsilon_k^{(j)}\}$ satisfying $d_k^{(j)} = \frac{1}{\sigma_j} + \epsilon_k^{(j)}$, where $\epsilon_k^{(j)}$ tends to zero when $k \rightarrow \infty$. Furthermore,
with . Using this result, we obtain:
Therefore, we can affirm that $\{X_k\}$ converges to $A^{-1}$. □
To demonstrate the stability of the modified Kurchatov-type iterative scheme, we use the definition introduced by Higham in [14] for the stability of an iterative scheme $X_{k+1} = H(X_k)$ with a fixed point $X^*$: assuming that H is Fréchet differentiable at the fixed point $X^*$, the process is stable in a neighborhood of $X^*$ if the Fréchet derivative has bounded powers, that is, if there exists a positive constant C such that $\|H'(X^*)^p\| \leq C$, for all $p \geq 1$.
Theorem 4.
The modified Kurchatov-type iterative scheme to estimate the inverse of a nonsingular complex matrix defined by expression (8) is stable.
Proof.
The modified Kurchatov-type method can be written as follows:
So, denoting by , we conclude
Now, as , we get
Therefore, this matrix is idempotent, so its powers are bounded, and (8) is a stable iterative process. □
2.2. Qualitative Analysis of Kurchatov-Type Scheme
Now, we consider the Kurchatov-type scheme as a scalar iterative procedure for solving nonlinear equations, analyze its convergence, and study its stability. In this analysis, stability is understood as the dependence on the initial estimations, and it is studied by applying the concepts of vectorial real dynamics to the Kurchatov-type scheme, whose expression is:
$$x_{k+1} = x_k - \frac{f(x_k)}{f[x_k, 2x_{k-1} - x_k]}, \quad k = 1, 2, \ldots \quad (10)$$
In the following result, we show that this scheme for solving nonlinear scalar equations has superlinear convergence.
Theorem 5.
Let us consider a sufficiently differentiable function $f: I \subseteq \mathbb{R} \rightarrow \mathbb{R}$ in an open neighborhood I of the simple root α of the nonlinear equation $f(x) = 0$. Furthermore, let us assume that the initial estimations $x_0$, $x_1$ and α are near enough to each other. Then, the sequence $\{x_k\}$, $k \geq 0$, generated by the Kurchatov-type scheme, converges to α with order of convergence $\frac{1 + \sqrt{5}}{2}$, its error equation being:
$$e_{k+1} = 2 c_2\, e_k e_{k-1} + O_2(e_k, e_{k-1}),$$
where $O_2(e_k, e_{k-1})$ means that the subsequent terms in the error equation depend on powers of the errors $e_k$ and $e_{k-1}$ such that the sum of the exponents is at least two; moreover, $e_k = x_k - \alpha$ and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \geq 2$.
Proof.
It is known that the Taylor expansion of $f(x_k)$ around α is:
$$f(x_k) = f'(\alpha)\left[e_k + c_2 e_k^2 + c_3 e_k^3\right] + O(e_k^4).$$
By using the Genocchi–Hermite formula (see [15]),
$$f[x, y] = \int_0^1 f'\big(x + t(y - x)\big)\, dt,$$
and the expansion of $f'$ in the Taylor series around x, we obtain the expansion of the divided difference. Then, its expression as a function of the errors $e_k$ and $e_{k-1}$ is:
Therefore,
By applying Theorem 1, the only positive real root of $s^2 - s - 1 = 0$ (where the coefficients correspond to the exponents of $e_k$ and $e_{k-1}$ in the error equation) is the convergence order of the method, that is, $\frac{1 + \sqrt{5}}{2} \approx 1.618$. □
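The order predicted by Theorem 5 can be checked numerically. The sketch below iterates the Kurchatov-type step with the modified divided difference (our reconstruction) on an arbitrary test function and estimates the order from consecutive errors:

```python
import numpy as np

def kurchatov_type(f, x0, x1, steps=6):
    # Kurchatov-type scheme with the modified divided difference
    # f[x_k, 2x_{k-1} - x_k] (a reconstruction consistent with Theorem 5).
    xs = [x0, x1]
    for _ in range(steps):
        z, x = xs[-2], xs[-1]
        v = 2 * z - x
        dd = (f(x) - f(v)) / (x - v)        # modified divided difference
        xs.append(x - f(x) / dd)
    return xs

xs = kurchatov_type(lambda x: x**3 + 4 * x**2 - 10, 1.0, 1.2)
e = [abs(x - xs[-1]) for x in xs[:-1]]       # last iterate taken as the root
order = [np.log(e[k + 1] / e[k]) / np.log(e[k] / e[k - 1])
         for k in range(1, len(e) - 1)]
print(order)   # approaches (1 + sqrt(5))/2 ~ 1.618
```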
From now on, we denote by KT the fixed point operator associated with the Kurchatov-type method applied on the quadratic polynomial $p(x) = x^2 - 1$. As it does not use derivatives, it is not possible to establish a scaling theorem; therefore, we cannot make the analysis on generic second-degree polynomials.
The fixed point operator depends on two variables: the current iterate $x_k$ (denoted by x) and the previous one $x_{k-1}$ (denoted by z). Therefore,
$$KT(z, x) = \left(x,\; x - \frac{p(x)}{p[x, 2z - x]}\right) = \left(x,\; x - \frac{x^2 - 1}{2z}\right),$$
since, for this polynomial, $p[x, 2z - x] = x + (2z - x) = 2z$.
Let us analyze now the qualitative performance of the rational operator KT by means of the asymptotic behavior of its fixed points and the existence of free critical points.
Theorem 6.
The only fixed points of the rational operator KT are those associated with the roots $x = \pm 1$ of $p(x)$, both being superattracting. Moreover, KT has no free critical points.
Proof.
We solve the fixed point equation:
$$KT(z, x) = (z, x),$$
finding that the only fixed points are $(1, 1)$ and $(-1, -1)$. To study their stability, we consider the Jacobian matrix $KT'(z, x)$:
and therefore, its eigenvalues are:
and:
As $\lambda_1 = \lambda_2 = 0$ at both points, we conclude that the fixed points $(1, 1)$ and $(-1, -1)$ are superattracting.
As an immediate consequence of this analysis of $KT'(z, x)$ and its eigenvalues $\lambda_1$, $\lambda_2$, the only points satisfying $\det\big(KT'(z, x)\big) = 0$ are those with $x = \pm 1$. Then, they are the only critical points, and there do not exist free critical points. □
From these results, we conclude that no performance other than convergence to the roots is possible when the Kurchatov-type scheme is applied on $p(x)$, as any other basin of attraction would need a free critical point inside.
In Figure 1, we show the dynamical plane of the KT operator, x and z being real and corresponding to the abscissa and ordinate axes, respectively (see [22] for the routines). A uniform mesh of initial estimations is used, and the maximum number of iterations is 40; the stopping criterion is a distance to the root lower than the fixed tolerance.
Figure 1.
Dynamical plane of the Kurchatov-type scheme on $p(x) = x^2 - 1$.
Then, each point of the mesh is considered as a seed of the method; when it converges to one of the roots of $p(x)$ (located at $\pm 1$), it is represented in orange or green, depending on the root it has converged to, and the fewer iterations needed, the brighter the color. When an initial estimation reaches 40 iterations without convergence, it is colored in black.
In Figure 1, we observe that the basins of attraction of both roots of $p(x)$ have symmetric shapes. In spite of the only basins of attraction being those of the roots (Theorem 6), there are black areas in the dynamical plane; they correspond to areas of slow convergence, whose initial estimations would need a higher number of iterations to converge.
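For reproducibility, a rough sketch that mimics the described dynamical plane follows. The operator uses our reconstruction on $p(x) = x^2 - 1$, for which the modified divided difference at the nodes $x$ and $2z - x$ equals $2z$; the mesh size, plotting window, and tolerance are arbitrary choices, not necessarily those of Figure 1.

```python
import numpy as np
import matplotlib.pyplot as plt

lim, N = 3.0, 200
grid = np.linspace(-lim, lim, N)
plane = np.zeros((N, N))                  # 0 = no convergence (black)
for i, z0 in enumerate(grid):             # z = x_{k-1} (ordinate)
    for j, x0 in enumerate(grid):         # x = x_k (abscissa)
        z, x = z0, x0
        for _ in range(40):               # maximum of 40 iterations
            if abs(x - 1) < 1e-3:
                plane[i, j] = 1           # basin of the root +1
                break
            if abs(x + 1) < 1e-3:
                plane[i, j] = -1          # basin of the root -1
                break
            dd = 2 * z                    # p[x, 2z - x] = x + (2z - x) = 2z
            if dd == 0:
                break
            z, x = x, x - (x**2 - 1) / dd
plt.imshow(plane, extent=[-lim, lim, -lim, lim], origin="lower")
plt.xlabel("$x_k$"); plt.ylabel("$x_{k-1}$"); plt.show()
```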
In [19,20], the stability analysis of the secant and Steffensen with memory schemes was performed, among other iterative schemes with memory. In both cases, the dynamical planes plotted showed full convergence to the roots, without black areas of slow convergence or divergence. Later on, we check how these differences in stability among the Kurchatov-type, secant, and Steffensen with memory schemes affect their numerical performance.
3. Steffensen with Memory
Now, we consider the Traub–Steffensen family [18], whose iterative expression is:
$$w_k = x_k + \gamma f(x_k), \qquad x_{k+1} = x_k - \frac{f(x_k)}{f[w_k, x_k]}, \quad k = 0, 1, 2, \ldots, \quad (11)$$
where γ is an arbitrary constant different from zero. Furthermore, if $\gamma = 1$ in this iterative scheme, we obtain the well-known Steffensen method [17]. It is easy to show that this iterative procedure has order of convergence two for any value of γ, its error equation being:
$$e_{k+1} = \left(1 + \gamma f'(\alpha)\right) c_2 e_k^2 + O(e_k^3), \quad (12)$$
where $e_k = x_k - \alpha$ represents the error in each iteration and $c_j = \frac{f^{(j)}(\alpha)}{j!\, f'(\alpha)}$, $j \geq 2$. Let us also remark that this is a derivative-free scheme without memory, because it uses only the information of the current iteration. Studying the error Equation (12) of family (11), we notice that, if $\gamma = -\frac{1}{f'(\alpha)}$, the iterative scheme (11) achieves third order of convergence. This value of the parameter cannot be used in practice, as $f'(\alpha)$ is unknown. Therefore, the approximation of $f'(\alpha)$ by first-order divided differences is employed, obtaining $\gamma_k = -\frac{1}{f[x_k, x_{k-1}]}$ (the slope used by the secant scheme), and (11) becomes an iterative procedure with memory.
3.1. Iterative Scheme Design for the Matrix Inverse
Our aim now is to adapt Steffensen's scheme with memory to solve the matrix Equation (1). In [23], Monsalve and Raydan adapted the secant method to estimate the solution of Equation (1) for A diagonalizable, and in [13], the authors extended this result to any nonsingular matrix by means of the expression:
where $X_0$ and $X_1$ are the seeds.
In a similar way, Expression (11) for matrix equations takes the form:
which includes inverse calculations for the estimation of $A^{-1}$; this circumstance must be avoided.
As A is a nonsingular complex matrix, there exist unitary matrices U and V such that $A = U \Sigma V^*$, where $\sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_n > 0$ are the singular values of A. We define $D_k = V^* X_k U$, $k \geq 0$. Then, Equation (14) takes the form:
If the initial approximations $X_0$ and $X_1$ satisfy that $D_0$ and $D_1$ are diagonal, then $D_k$ are diagonal matrices for all $k \geq 0$. By applying several algebraic manipulations on Equation (15), we can ensure:
and therefore,
where I denotes the identity matrix of size n. Now, undoing the transformation and after more algebraic manipulations, we obtain:
given the initial approximations $X_0$ and $X_1$. We denote this iterative scheme with memory as SMM. Let us remark that Expression (18) does not include inverse operators, as intended.
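Since the displayed Equation (18) has not survived here, the following sketch implements an inverse-free Steffensen-type update with memory reconstructed from the scalar case: for $f(x) = 1/x - a$, the choice $\gamma_k = x_k x_{k-1}$ gives $w_k = x_k + x_{k-1}(1 - a x_k)$ and $x_{k+1} = x_k + w_k(1 - a x_k)$. The matrix ordering below is an assumption, not necessarily the exact form of (18).

```python
import numpy as np

def smm(A, X0, X1, tol=1e-10, max_iter=100):
    # Reconstructed Steffensen-type update with memory (an assumption):
    # R_k = I - A X_k,  W_k = X_k + X_{k-1} R_k,  X_{k+1} = X_k + W_k R_k.
    I = np.eye(A.shape[0])
    Xold, X = X0, X1
    for k in range(max_iter):
        R = I - A @ X              # residual, reused by both substeps
        W = X + Xold @ R           # secant-type predictor
        Xnew = X + W @ R           # Steffensen-type corrector
        if np.linalg.norm(Xnew - X, np.inf) < tol:
            return Xnew, k + 1
        Xold, X = X, Xnew
    return X, max_iter
```

In the diagonal (SVD-transformed) variables, each step reproduces the scalar recurrence above, whose exact error relation $e_{k+1} = a^2 e_k^2 e_{k-1}$ yields the indicial polynomial $s^2 - 2s - 1$ and, hence, the order $1 + \sqrt{2}$ of Theorem 7.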
3.2. Convergence Analysis and Stability
We present in the next result the convergence analysis of the iterative method given by Expression (18).
Theorem 7.
Let $A = U \Sigma V^*$ be the singular-value decomposition of a nonsingular matrix $A \in \mathbb{C}^{n \times n}$. Furthermore, let $X_0$ and $X_1$ be the initial approximations to $A^{-1}$, such that $V^* X_0 U$ and $V^* X_1 U$ are diagonal matrices. Then, the sequence $\{X_k\}$, generated by (18), converges to $A^{-1}$ with order of convergence $1 + \sqrt{2} \approx 2.414$.
Proof.
Let U and V be unitary matrices such that $A = U \Sigma V^*$, $\Sigma$ being the diagonal matrix of the singular values of A. Furthermore, by denoting $D_k = V^* X_k U$ for $k \geq 0$, from Equation (18), we obtain:
Let us remark that $D_k$ is a diagonal matrix, for $k \geq 0$. Therefore,
Therefore, in Equation (19), using component-by-component calculations, we obtain:
We denote by $d_k^{(j)}$, $j = 1, \ldots, n$, the diagonal elements of $D_k$. We subtract $1/\sigma_j$ from both sides of Equation (20), and we obtain:
Theorem 1 allows us to conclude that, for each j from 1 to n, $d_k^{(j)}$ converges to $1/\sigma_j$ with convergence order the only positive root of $s^2 - 2s - 1 = 0$, that is, $1 + \sqrt{2}$ (see [15]). In this way, for each j, there exists a sequence $\{\epsilon_k^{(j)}\}$ satisfying $d_k^{(j)} = \frac{1}{\sigma_j} + \epsilon_k^{(j)}$, with $\epsilon_k^{(j)} \rightarrow 0$ when $k \rightarrow \infty$. On the other hand,
where . Using this result, we obtain:
Then, $\{X_k\}$ converges to $A^{-1}$, and the proof is finished. □
In the following result, we prove the stability of this scheme.
Theorem 8.
Steffensen’s method with memory to estimate the inverse of a given matrix with the expression (18) is a stable iterative scheme.
Proof.
Steffensen’s scheme with memory can be written as a fixed point scheme as follows:
Then, given , we deduce that:
Next, for , we have:
therefore, it is an idempotent matrix, and the iterative process (18) is stable. □
4. Approximating the Moore–Penrose Inverse
For a complex matrix that is not necessarily square, the pseudo-inverse can be defined as an object playing a role similar to that of the inverse matrix. Given a complex matrix, many pseudo-inverses can be defined, but the most frequently used one is the Moore–Penrose inverse.
In cases when a system of equations has no solution, there is no inverse of the coefficient matrix that defines the system. In these cases, it may be useful to find an approximate solution in terms of error minimization; for example, the best fit of a dataset can be found using the pseudo-inverse (Moore–Penrose inverse) of the coefficient matrix. See, for example, the classical texts of Ben-Israel and Greville [24] and Higham [14].
We now extend the proposed iterative methods of this manuscript to the calculation of the Moore–Penrose inverse $A^\dagger$ of an $m \times n$ complex matrix A. This is the unique $n \times m$ matrix X that satisfies the following conditions:
$$A X A = A, \qquad X A X = X, \qquad (A X)^* = A X, \qquad (X A)^* = X A.$$
A computationally simple and accurate way to compute the Moore–Penrose inverse is through the singular-value decomposition of A. Then, if the rank of A is $r \leq \min\{m, n\}$, we obtain:
$$A = U \Sigma V^*, \quad (22)$$
where $\Sigma = \begin{pmatrix} \Sigma_r & 0 \\ 0 & 0 \end{pmatrix} \in \mathbb{R}^{m \times n}$, with $\Sigma_r = \mathrm{diag}(\sigma_1, \ldots, \sigma_r)$, $\sigma_1 \geq \cdots \geq \sigma_r > 0$, and U ($m \times m$) and V ($n \times n$) unitary matrices. Furthermore, it is known that:
$$A^\dagger = V \Sigma^\dagger U^*,$$
where $\Sigma^\dagger = \begin{pmatrix} \Sigma_r^{-1} & 0 \\ 0 & 0 \end{pmatrix} \in \mathbb{R}^{n \times m}$.
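A direct sketch of this construction, verifying the four Penrose conditions:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    # Moore-Penrose inverse through the SVD: A = U S V*, A^+ = V S^+ U*.
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    s_inv = np.array([1 / x if x > tol else 0.0 for x in s])  # invert only sigma_j > 0
    return Vh.conj().T @ np.diag(s_inv) @ U.conj().T

A = np.random.rand(7, 4)
X = pinv_svd(A)
print(np.allclose(A @ X @ A, A), np.allclose(X @ A @ X, X),
      np.allclose((A @ X).conj().T, A @ X), np.allclose((X @ A).conj().T, X @ A))
```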
Theorem 9.
Let $A \in \mathbb{C}^{m \times n}$ be a matrix with singular-value decomposition satisfying (22), whose rank is r. If $X_0$ and $X_1$ are the initial estimations such that:
given that $V^* X_0 U$ and $V^* X_1 U$ are diagonal matrices of size $n \times m$, then the sequences $\{X_k\}$, generated by Equation (18) using SMM and by (8) using MKTM, converge to $A^\dagger$ with orders of convergence $1 + \sqrt{2}$ and $\frac{1 + \sqrt{5}}{2}$, respectively.
Proof.
Given the singular-value decomposition of A (see (22)), we define the matrix $D_k = V^* X_k U$ for any fixed arbitrary value of k, with $k \geq 0$. Now, by using the iterative expressions in Equations (18) and (8), we obtain:
and
respectively. Because $D_0$ and $D_1$ are diagonal matrices, all the matrices $D_k$ are diagonal as well. Then, the previous expressions represent r uncoupled scalar iterations converging to $1/\sigma_j$, $j = 1, \ldots, r$, with order of convergence $1 + \sqrt{2}$ using SMM and $\frac{1 + \sqrt{5}}{2}$ using MKTM, i.e.,
with $\epsilon_k^{(j)}$, $j = 1, \ldots, r$, such that $\epsilon_k^{(j)}$ tends to zero when k tends to infinity. Just as was argued for scheme (8), we have that:
Therefore, we can say that $\{X_k\}$ converges to $A^\dagger$, with the desired order of convergence. □
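With the same caveat that smm() above is our reconstruction of (18), the sketch applies verbatim to a rectangular matrix; seeds proportional to $A^*$ satisfy the diagonality hypotheses of Theorem 9:

```python
import numpy as np

# smm() (and analogously mktm()) from the previous sketches also works for
# rectangular A: with X of size n x m, the residual I - A X is m x m throughout.
A = np.random.rand(120, 80)
X0 = A.conj().T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
X1 = X0 + X0 @ (np.eye(120) - A @ X0)      # one secant-type step as second seed
Xp, iters = smm(A, X0, X1)
print(iters, np.linalg.norm(A @ Xp @ A - A))   # first Penrose condition
```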
5. Numerical Experiments
In this section, we present numerical tests of the behavior of the Steffensen method with memory (SMM) and the modified Kurchatov-type method (MKTM), designed to calculate the inverse and the Moore–Penrose inverse, applied to different matrices. For comparison, we used the Newton–Schulz method (NS) [14] and the secant method (SM) [13]. The numerical calculations were made with MATLAB 2022a (MathWorks, USA) using a 3 GHz 10-Core Intel Xeon W processor with 64 GB 2666 MHz DDR4 RAM (iMac Pro). As stopping criteria for all numerical tests, we used a bound on $\|X_{k+1} - X_k\|$ or on $\|F(X_{k+1})\|$, $F(X) = 0$ being the nonlinear matrix equation to be solved for estimating the inverse or pseudo-inverse of a complex matrix A.
In addition, when verifying the theoretical results numerically, we used the computational order of convergence (COC) introduced by Jay [25], defined, in terms of the errors $e_k = x_k - \alpha$, as:
$$COC \approx \frac{\ln\left(|e_{k+1}| / |e_k|\right)}{\ln\left(|e_k| / |e_{k-1}|\right)}.$$
Another numerical approximation of the theoretical order of convergence, presented by the authors in [26] and denoted by ACOC, is defined as:
$$ACOC \approx \frac{\ln\left(\|x_{k+1} - x_k\| / \|x_k - x_{k-1}\|\right)}{\ln\left(\|x_k - x_{k-1}\| / \|x_{k-1} - x_{k-2}\|\right)},$$
which does not require knowledge of the exact solution.
To show the order of convergence of the methods in the numerical tests, we used either of these estimates of the computational order. In the tables, we write “-” when the COC (or ACOC) vector is unstable. Furthermore, the mean elapsed time after 50 executions of the codes appears in the tables, calculated by using MATLAB's cputime command.
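Both estimates are straightforward to compute from the stored iterates; for instance, a sketch of the ACOC estimate, which needs no knowledge of the exact solution (the helper name acoc is ours):

```python
import numpy as np

def acoc(iterates):
    # ACOC from the last three consecutive differences (needs >= 4 iterates).
    d = [np.linalg.norm(iterates[i + 1] - iterates[i])
         for i in range(len(iterates) - 4, len(iterates) - 1)]
    return np.log(d[2] / d[1]) / np.log(d[1] / d[0])
```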
Example 1.
As a first example, we looked for the inverses of random matrices of size $n \times n$, the largest size being $n = 500$. Since the Newton–Schulz method needs one initial guess, only $X_0$ is required for it; the rest of the methods need two initial guesses, $X_0$ and $X_1$.
Table 1 shows the results obtained when approximating the inverses of nonsingular random matrices of different sizes using the Newton–Schulz, secant, Steffensen with memory, and Kurchatov-type methods. The number of iterations, the residuals, and the COC value are shown. The results confirm the theoretical order of convergence of each method, and all the methods obtained an approximation of the inverse of A. In all cases, the Steffensen method with memory showed the best results in terms of the number of iterations and the computational time. The graphs shown in Figure 2 represent the results presented in Table 1 for three of the matrix sizes.
Table 1.
Results obtained by approximating the inverse of random matrices of different sizes.
Figure 2.
Graphs of the number of iterations vs. COC for three of the cases in Table 1.
Example 2.
Now, we built square matrices of size $n \times n$ using different MATLAB matrix generators, such as:
- (a) A symmetric and positive definite matrix of size $n \times n$.
- (b) A Riemann matrix of size $n \times n$.
- (c) A Hankel matrix of size $n \times n$.
- (d) A Toeplitz matrix of size $n \times n$.
- (e) A Leslie matrix of size $n \times n$, with application in problems of population models.
- (f) A Parter matrix of size $n \times n$.
Here, we used the same stopping criteria and the same initial approximations as in Example 1. The numerical results obtained are shown in Table 2 and Figure 3. As in the previous example, the proposed methods showed good performance in terms of stability, precision, and the number of iterations required.
Table 2.
Results obtained by approximating the inverse of classical square matrices of different sizes.
Figure 3.
Graphs of the number of iterations vs. COC for three of the sizes in Table 2.
Example 3.
Finally, we tested the methods for computing the Moore–Penrose inverse of random $m \times n$ matrices for different values of m and n. The initial approximations were calculated in the same way as in the previous examples, and the stopping criterion was the one used in Example 1.
The results obtained for the number of iterations, the residuals, and the ACOC value in Example 3 are shown in Table 3 and Figure 4. The methods gave us an approximation of the Moore–Penrose inverse and showed the same behavior as in the previous examples.
Table 3.
Results obtained to approximate the Moore–Penrose inverse of a rectangular random matrix.
Figure 4.
Graphs of the number of iterations vs. ACOC for three of the sizes in Table 3.
6. Conclusions
In this manuscript, we widened the set of iterative methods that can be applied for estimating generalized inverses of complex matrices, by using schemes with memory able to improve the Newton–Schulz scheme. Two procedures with memory were designed for approximating the inverses of nonsingular complex matrices, or the pseudo-inverses in the case that the matrices are singular. Their order of convergence and stability were proven and, in the case of the Steffensen with memory scheme, its order of convergence improves that of the Newton–Schulz method.
The method using Kurchatov's divided differences cannot be directly adapted to estimate generalized inverses of complex matrices. To overcome this difficulty, Kurchatov-type divided differences were used; in this process, there was a decrease in the order of convergence of the original method and a change in its behavior.
The technique used to adapt the iterative methods can be applied to the resolution of other types of matrix equations with an especially significant role in various areas such as control theory, dynamic programming, ladder networks, statistics, etc.
This research opens new ways in the design of iterative procedures for solving this kind of nonlinear matrix equation, with promising numerical performance in agreement with the theoretical results.
Author Contributions
Conceptualization, J.R.T.; methodology, J.G.M.; software, M.P.V.; formal analysis, J.R.T.; investigation, A.C.; writing—original draft preparation, J.G.M. and M.P.V.; writing—review and editing, A.C. and J.R.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
Not applicable.
Acknowledgments
The authors would like to thank the anonymous reviewers for their suggestions and comments that have improved the final version of this manuscript.
Conflicts of Interest
The authors declare that they have no conflict of interest.
References
- Torokhti, A.; Soto-Quiros, P. Generalized Brillinger-like transforms. IEEE Signal Process. Lett. 2016, 23, 843–847. [Google Scholar] [CrossRef]
- Chung, J.; Chung, M. Computing optimal low-rank matrix approximations for image processing. In Proceedings of the 2013 Asilomar Conference on Signals, Systems and Computers, IEEE, Pacific Grove, CA, USA, 3–6 November 2013; pp. 670–674. [Google Scholar]
- Chountasis, S.; Katsikis, V.N.; Pappas, D. Applications of the Moore–Penrose inverse in digital image restoration. Math. Probl. Eng. 2009, 2009, 170724. [Google Scholar] [CrossRef]
- Miljković, S.; Miladinović, M.; Stanimirović, P.; Stojanović, I. Application of the pseudoinverse computation in reconstruction of blurred images. Filomat 2012, 26, 453–465. [Google Scholar] [CrossRef]
- Liu, J.; Zhang, H.; Jia, J. Cryptanalysis of schemes based on pseudoinverse matrix. Wuhan Univ. J. Nat. Sci. 2016, 21, 209–213. [Google Scholar] [CrossRef]
- Dang, V.H.; Nguyen, T.D. Construction of pseudoinverse matrix over finite field and its applications. Wirel. Pers. Commun. 2017, 94, 455–466. [Google Scholar] [CrossRef]
- Nguyen, C.-T.; Tsai, Y.-W. Finite-time output feedback controller based on observer for the time-varying delayed systems: A Moore–Penrose inverse approach. Math. Probl. Eng. 2017, 2017, 2808094. [Google Scholar] [CrossRef]
- Ansari, U.; Bajodah, A.H. Robust launch vehicle’s generalized dynamic inversion attitude control. Aircr. Eng. Aerosp. Technol. 2017, 89, 902–910. [Google Scholar] [CrossRef]
- Weiguo, L.; Juan, L.; Tiantian, Q. A family of iterative methods for computing the Moore–Penrose inverse of a matrix. Linear Algebra Appl. 2013, 438, 47–56. [Google Scholar] [CrossRef]
- Toutounian, F.; Soleymani, F. An iterative method for computing the approximate inverse of a square matrix and the Moore–Penrose inverse of a non-square matrix. Appl. Math. Comput. 2013, 224, 671–680. [Google Scholar] [CrossRef]
- Stanimirović, P.; Kumar, A.; Katsikis, V. Further efficient hyperpower iterative methods for the computation of generalized inverses. RACSAM Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. 2019, 113, 3323–3339. [Google Scholar]
- Kaur, M.; Kansal, M.; Kumar, S. An efficient hyperpower iterative method for computing the weighted Moore–Penrose inverse. AIMS Math. 2020, 5, 1680–1692. [Google Scholar] [CrossRef]
- Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Generalized Inverses Estimations by Means of Iterative Methods with Memory. Mathematics 2020, 8, 2. [Google Scholar] [CrossRef]
- Higham, N.J. Functions of Matrices: Theory and Computation; SIAM: Philadelphia, PA, USA, 2008. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press Inc.: Cambridge, MA, USA, 1970. [Google Scholar]
- Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations. Dokl. Acad. Nauk. SSSR 1971, 198, 524–526. Translation in Sov. Math. Dokl. 1971, 12, 835–838. (In Russian) [Google Scholar]
- Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 1, 64–72. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715. [Google Scholar] [CrossRef]
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability analysis of iterative methods with memory. In Proceedings of the 15th International Conference on Computational and Mathematical Methods in Science and Engineering (CMMSE 2015), Rota Cadiz, Spain, 6–10 July 2015; pp. 285–290. [Google Scholar]
- Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete. In Pure and Applied Undergraduate Texts; American Mathematical Society: Providence, RI, USA, 2012. [Google Scholar]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed]
- Monsalve, M.; Raydan, M. A secant method for nonlinear matrix problems. Numer. Linear Algebra Signals Syst. Control 2011, 80, 387–402, Lecture Notes in Electrical Engineering. [Google Scholar]
- Ben-Israel, A.; Greville, T.N.E. Generalized Inverses; Springer: New York, NY, USA, 2003. [Google Scholar]
- Jay, L.O. A note on Q-order of convergence. BIT Numer. Math. 2001, 41, 422–429. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]