
On High-Order Iterative Schemes for the Matrix pth Root Avoiding the Use of Inverses

by Sergio Amat 1, Sonia Busquier 1, Miguel Ángel Hernández-Verón 2 and Ángel Alberto Magreñán 2,*

1 Department of Applied Mathematics and Statistics, Polytechnic University of Cartagena, 30203 Cartagena, Spain
2 Department of Mathematics and Computation, University of La Rioja, 26006 Logroño, Spain
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(2), 144; https://doi.org/10.3390/math9020144
Submission received: 14 December 2020 / Revised: 22 December 2020 / Accepted: 6 January 2021 / Published: 11 January 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract: This paper is devoted to the approximation of matrix pth roots. We present and analyze a family of algorithms free of inverses. The method is a combination of two families of iterative methods: the first one gives an approximation of the matrix inverse, and the second one uses this approximation to compute the matrix pth root. We analyze the computational cost and the convergence of this family of methods. Finally, we introduce several numerical examples in order to check the performance of this combination of schemes. We conclude that the method without inverses emerges as a good alternative, since it attains a similar numerical behavior at a smaller computational cost.

1. Introduction

The computation of operators on a matrix appears in many applications, and iterative methods have emerged as a useful technique for approximating them. This paper is devoted to the approximation of the matrix pth root. We recall that the principal pth root of a matrix $A \in \mathbb{C}^{r \times r}$, where $\mathbb{C}$ denotes the set of complex numbers, with no nonpositive real eigenvalues, is defined as the unique solution $X$ of
$$X^p - A = 0,$$
whose eigenvalues have an argument of modulus lower than $\frac{\pi}{p}$.
For this kind of problem, the well-known Newton method takes the following form:
$$X_0 \ \text{given}, \qquad X_{n+1} = \frac{(p-1)\,X_n + A\,X_n^{1-p}}{p}, \quad n \ge 0.$$
This formula is a direct extension of the method applied to the scalar equation $x^p - a = 0$. In this form, the method achieves quadratic convergence but, unfortunately, it is unstable [1]. However, as can be seen in [2], a stable version can be found.
In fact, several applications of the Newton method to the computation of the matrix pth root exist; see [1,2,3,4,5,6] for an incomplete list of references.
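As a brief illustration, the Newton iteration above can be sketched with NumPy. The starting guess $X_0 = A$ and the iteration count are illustrative choices, not prescriptions from the paper; note that this plain form inherits the instability discussed in [1] and is only reliable here because the iterates commute with $A$.

```python
import numpy as np

def newton_pth_root(A, p, iters=50):
    """Newton iteration X_{n+1} = ((p-1) X_n + A X_n^{1-p}) / p with X_0 = A.

    X^{1-p} is computed as the inverse of X^{p-1}. This direct form is
    known to be numerically unstable in general [1]."""
    X = A.copy()
    for _ in range(iters):
        X = ((p - 1) * X + A @ np.linalg.inv(np.linalg.matrix_power(X, p - 1))) / p
    return X

A = np.diag([4.0, 9.0])
X = newton_pth_root(A, 2)
print(np.round(X, 6))  # approximates diag(2, 3)
```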
Similarly, we can derive the third-order Chebyshev method as
$$X_0 \in \Theta, \qquad X_{n+1} = \frac{2p^2 - 3p + 1}{2p^2}\, X_n + \frac{2p - 1}{p^2}\, A X_n^{1-p} - \frac{p - 1}{2p^2}\, A^2 X_n^{1-2p}, \quad n \ge 0,$$
where $\Theta = \{ B \in \mathbb{C}^{r \times r} : B \text{ has no nonpositive real eigenvalues} \}$.
In our paper [7], we proposed stable versions of this algorithm, presented some of its numerical advantages with respect to the Newton method and other third-order methods such as Halley's method [8,9], and, finally, developed a general family of methods of any order that includes both the Newton and Chebyshev methods.
In order to develop a method that avoids inverting the successive iterates, we can consider the approximation of the less natural equation $\frac{1}{x^p} - \frac{1}{a} = 0$. In this case, the method presented in [7] has the form
$$X_0 \ \text{given}, \qquad X_{n+1} = X_n \sum_{k=0}^{m} \frac{d_k}{k!} \left( I - A^{-1} X_n^p \right)^k, \quad n \ge 0, \qquad (2)$$
where
$$d_0 = 1 \quad \text{and} \quad d_k = \prod_{i=0}^{k-1} \left( \frac{1}{p} + i \right), \quad k \ge 1.$$
This method has order m and, in particular, for m = 2 and m = 3, we recover the Newton and Chebyshev methods, respectively.
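A minimal sketch of the family above, reading the iteration as $X_{n+1} = X_n \sum_k \frac{d_k}{k!}(I - A^{-1}X_n^p)^k$ with the sum taken from $k = 0$ to $m$ as displayed; the test matrix, the starting guess $X_0 = I$, and the iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np
from math import factorial

def d_coeffs(p, m):
    """d_0 = 1, d_k = prod_{i=0}^{k-1} (1/p + i)."""
    d = [1.0]
    for k in range(1, m + 1):
        d.append(d[-1] * (1.0 / p + (k - 1)))
    return d

def pth_root_family(A, p, m, X0, iters=20):
    """X_{n+1} = X_n sum_{k=0}^{m} d_k/k! (I - A^{-1} X_n^p)^k."""
    Ainv = np.linalg.inv(A)  # the only inverse the family requires
    d = d_coeffs(p, m)
    I = np.eye(A.shape[0])
    X = X0.copy()
    for _ in range(iters):
        T = I - Ainv @ np.linalg.matrix_power(X, p)
        X = X @ sum(d[k] / factorial(k) * np.linalg.matrix_power(T, k)
                    for k in range(m + 1))
    return X

# Requires ||I - A^{-1} X_0^p|| < 1, which holds for this diagonal example.
A = np.diag([2.0, 3.0])
X = pth_root_family(A, p=2, m=2, X0=np.eye(2))
print(np.round(X, 6))  # approximates diag(sqrt(2), sqrt(3))
```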
In the above formula (2), we only need to compute the inverse of A. In the present paper, we propose approximating the inverse of A by another iterative method; namely, we can use the family introduced in [10], which has the form
$$Y_0 \ \text{given}, \qquad Y_{n+1} = \sum_{k=0}^{m} \binom{m+1}{k+1} (-1)^k \, Y_n (A Y_n)^k, \quad n \ge 0. \qquad (3)$$
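The inverse-approximation family $Y_{n+1} = \sum_{k=0}^{m} \binom{m+1}{k+1}(-1)^k Y_n (AY_n)^k$ can be sketched as follows. Note that $m = 1$ gives $Y_{n+1} = 2Y_n - Y_n A Y_n$, the classical Newton–Schulz iteration. The starting guess $Y_0 = A^{T}/(\|A\|_1 \|A\|_\infty)$ is a classical safe choice for this kind of iteration and is our assumption, not a prescription from [10].

```python
import numpy as np
from math import comb

def inverse_family(A, m, Y0, iters=25):
    """Y_{n+1} = sum_{k=0}^{m} C(m+1, k+1) (-1)^k Y_n (A Y_n)^k.

    m = 1 recovers Newton-Schulz: 2Y - YAY; the error I - AY is cubed
    per step when m = 2 (order m + 1 in general)."""
    Y = Y0.copy()
    for _ in range(iters):
        AY = A @ Y
        Y = sum(comb(m + 1, k + 1) * (-1) ** k * Y @ np.linalg.matrix_power(AY, k)
                for k in range(m + 1))
    return Y

A = np.array([[4.0, 1.0], [1.0, 3.0]])
Y0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
Y = inverse_family(A, m=2, Y0=Y0)
print(np.round(Y @ A, 6))  # approximates the identity
```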
Our goal is to show that the new approach yields a similar numerical behavior with the advantage of being free of inverse operators; in particular, it has a smaller computational cost, that is, it requires fewer operations.
The computation of the pth root of a matrix appears, for example, in fractional differential equations, discrete representations of norms corresponding to finite element discretizations of fractional Sobolev spaces, and the computation of geodesic-midpoints in neural networks (see [11] and the references therein).

2. A General Method for Approximating the Matrix pth Root Free of Inverse Operators

As mentioned in the introduction, for approximating $A^{1/p}$ we propose the combination of two families: one for approximating $A^{-1}$ and another, which uses this approximation, for approximating the matrix pth root.
First step: for approximating $A^{-1}$,
$$Y_0 \ \text{given}, \qquad Y_{n+1} = \sum_{k=0}^{m} \binom{m+1}{k+1} (-1)^k \, Y_n (A Y_n)^k, \quad n = 0, \ldots, L. \qquad (4)$$
Second step: for approximating $A^{1/p}$,
$$X_0 \ \text{given}, \qquad X_{n+1} = X_n \sum_{k=0}^{m} \frac{d_k}{k!} \left( I - Y_L X_n^p \right)^k, \quad n \ge 0, \qquad (5)$$
where $Y_L$ denotes the final iterate computed by method (4) in the first step.
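The two-step scheme can be sketched end to end: run the inverse family for $L$ steps, then feed its output $Y_L$ into the root family in place of $A^{-1}$. The test matrix, starting guesses, and the values of $L$ and the iteration count are illustrative assumptions.

```python
import numpy as np
from math import comb, factorial

def approx_inverse(A, m, L):
    """First step (4): L steps of the inverse family toward A^{-1}."""
    Y = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # assumed safe start
    for _ in range(L):
        AY = A @ Y
        Y = sum(comb(m + 1, k + 1) * (-1) ** k * Y @ np.linalg.matrix_power(AY, k)
                for k in range(m + 1))
    return Y

def combined_pth_root(A, p, m, L=15, iters=15):
    """Second step (5): the pth-root family with Y_L replacing A^{-1}."""
    YL = approx_inverse(A, m, L)
    d, I = [1.0], np.eye(A.shape[0])
    for k in range(1, m + 1):
        d.append(d[-1] * (1.0 / p + (k - 1)))
    X = I.copy()  # X_0 = I; ||I - Y_L X_0^p|| < 1 holds for this example
    for _ in range(iters):
        T = I - YL @ np.linalg.matrix_power(X, p)
        X = X @ sum(d[k] / factorial(k) * np.linalg.matrix_power(T, k)
                    for k in range(m + 1))
    return X

A = np.array([[4.0, 1.0], [1.0, 3.0]])
X = combined_pth_root(A, p=2, m=2)
print(np.round(X @ X, 6))  # approximates A, with no inverse ever computed
```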

2.1. Convergence

To establish the convergence of the general method (5), we observe that
$$I - A^{-1} X_n^p = I - Y_L X_n^p + Y_L X_n^p - A^{-1} X_n^p = (I - Y_L X_n^p) + (Y_L - A^{-1}) X_n^p. \qquad (6)$$
Now, suppose that, from method (4), we consider $n_0 \in \mathbb{N}$ such that $Y_{n_0} = Y_L$. Then, for the iterative process (4), using Theorem 3.2 in [10], we obtain
$$\| Y_L - A^{-1} \| \le \|A\|^m \, \| Y_{n_0 - 1} - A^{-1} \|^{m+1} \le \cdots \le \|A\|^{(m+1)^{n_0} - 1} \, \| Y_0 - A^{-1} \|^{(m+1)^{n_0}}. \qquad (7)$$
On the other hand, for the iterative process (2), from Theorem 4.1 in [7], we have
$$\| I - Y_L X_n^p \| < \| I - Y_L X_{n-1}^p \|^{m+1} < \cdots < \| I - Y_L X_0^p \|^{(m+1)^n}. \qquad (8)$$
Finally, for fixed $Y_L$, we take $X_0$ such that $\| I - Y_L X_0^p \| < 1$. Then, from Theorem 4.1 in [7], the sequence $\{X_n\}$ given by the general method (5) converges to $(Y_L)^{-1/p}$; therefore, $\{X_n\}$ is bounded, so there exists $M > 0$ such that $\|X_n\| \le M$ for all $n \in \mathbb{N}$.
Theorem 1.
Suppose that $Y_0$ satisfies $\| Y_0 - A^{-1} \| < 1$ and that $n_0 \in \mathbb{N}$ is such that, from (4), we take $Y_L = Y_{n_0}$. If $X_0$ is such that $\| I - Y_L X_0^p \| < 1$, then, for any given tolerance $Tol > 0$, there exists $n^* \in \mathbb{N}$ such that $\| I - A^{-1} X_n^p \| < Tol$ for $n \ge n^*$.
Proof. 
Given $Tol > 0$, from (8), there exists $n_1 \in \mathbb{N}$ such that $\| I - Y_L X_n^p \| < \frac{Tol}{2}$ for all $n \ge n_1$.
On the other hand, as $\{X_n\}$ converges to $(Y_L)^{-1/p}$, from Theorem 4.1 in [7], there exists $n_2 \in \mathbb{N}$ such that $\| (Y_L)^{-1/p} - X_n \| < Tol$ for all $n \ge n_2$. Then, $\| X_n \| \le Tol + \| (Y_L)^{-1/p} \|$ for all $n \ge n_2$.
Moreover, from (7), given $\frac{Tol}{2 \left( Tol + \| (Y_L)^{-1/p} \| \right)^p} > 0$, there exists $n_3 \in \mathbb{N}$ such that $\| Y_L - A^{-1} \| \, \| X_n \|^p < \frac{Tol}{2}$ for all $n \ge n_3$.
Therefore, from (6), taking $n^* = \max \{ n_1, n_2, n_3 \} \in \mathbb{N}$, we have $\| I - A^{-1} X_n^p \| < Tol$ for all $n \ge n^*$. □
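Theorem 1's conclusion can be checked numerically on a toy example: with $Y_L$ obtained from a few Newton–Schulz steps ($m = 1$ in the first family) and the $m = 1$ member of the root family (whose coefficient is $d_1 = 1/p$), the residual $\|I - A^{-1}X_n^p\|$ falls below a given tolerance. The matrix, starting guesses, and iteration counts are illustrative assumptions.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
p, tol = 2, 1e-10
I = np.eye(2)

# First step: Newton-Schulz (m = 1) toward A^{-1}
Y = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
for _ in range(40):
    Y = 2 * Y - Y @ A @ Y

# Second step, m = 1: X_{n+1} = X_n (I + (1/p)(I - Y_L X_n^p));
# ||I - Y_L X_0^p|| < 1 holds here for X_0 = I.
X = I.copy()
for _ in range(60):
    X = X @ (I + (1.0 / p) * (I - Y @ np.linalg.matrix_power(X, p)))

residual = np.linalg.norm(I - np.linalg.inv(A) @ np.linalg.matrix_power(X, p))
print(residual < tol)  # True
```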

2.2. Computational Cost and Efficiency

If A is a $q \times q$ matrix, and taking into account that the computational cost of the product of two matrices is $q^3$ operations, while adding or subtracting two matrices costs $q^2$ operations, then:
  • the computation of the matrix $Y_L$ by means of (4) has cost $CC_I(m,q) = m q^2 + (m+1) q^3$, and its computational efficiency is $CE_I(m,q) = (m+1)^{\frac{1}{m q^2 + (m+1) q^3}}$;
  • the computation of method (2) has cost $CC_R(m,q) = m q^2 + (p+m) q^3$, and its computational efficiency is $CE_R(m,q) = (m+1)^{\frac{1}{m q^2 + (p+m) q^3}}$;
  • the computational cost of method (5) is obtained directly as $CC(m,q) = 2 m q^2 + (p + 2m + 1) q^3$, and its computational efficiency is $CE(m,q) = (m+1)^{\frac{1}{2 m q^2 + (p + 2m + 1) q^3}}$.
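The cost and efficiency expressions above translate directly into code; following those expressions, the efficiency index is $\text{order}^{1/\text{cost}}$ with base $m + 1$. The sample values of $p$, $q$, and $m$ below are illustrative.

```python
def cc_inverse(m, q):
    """Cost of the inverse family (4): m q^2 + (m+1) q^3."""
    return m * q**2 + (m + 1) * q**3

def cc_root(m, q, p):
    """Cost of method (2), which uses A^{-1}: m q^2 + (p+m) q^3."""
    return m * q**2 + (p + m) * q**3

def cc_combined(m, q, p):
    """Cost of the combined method (5): 2m q^2 + (p + 2m + 1) q^3."""
    return 2 * m * q**2 + (p + 2 * m + 1) * q**3

def efficiency(order, cost):
    """Computational efficiency index: order^(1/cost)."""
    return order ** (1.0 / cost)

# Example: compare efficiencies for p = 2, q = 15 and several m
for m in (1, 2, 3):
    print(m,
          efficiency(m + 1, cc_root(m, 15, 2)),
          efficiency(m + 1, cc_combined(m, 15, 2)))
```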
In Figure 1 and Figure 2, the computational efficiencies for fixed p = 2 and p = 3, respectively, and different values of q and m are shown.
In most cases, when working with a family of methods, the best member depends on the problem considered (see the following figures) and on the main way used to solve it. For example, the most computationally efficient method for solving sparse problems, which appear when discretizing differential equations, is Chebyshev's method [7,10].
In Figure 3 and Figure 4, the computational efficiencies for fixed q = 15 and q = 20, respectively, and different values of p and m are shown.

3. Applications Related to Differential Equations

In this section, we present a comparison between the original method (2), which uses $A^{-1}$, and the new proposal (5), which avoids the use of any inverse operator. We refer to our paper [7] for the advantages of the original method with respect to other methods appearing in the literature.
We consider the approximation of the pth root of two matrices related to the discretization of differential equations. This type of matrix operation appears in the approximation of space-fractional diffusion problems [12].
We start with the following matrix:
$$A = \begin{pmatrix} C & D & & & \\ B & C & D & & \\ & B & C & D & \\ & & \ddots & \ddots & \ddots \\ & & & B & C \end{pmatrix}, \qquad (9)$$
where $B = 1 - \frac{h}{2} w$, $C = -(2 + h^2 v)$, and $D = 1 + \frac{h}{2} w$.
The matrix (9) can be seen as the result of applying a finite-difference discretization to the boundary value problem
$$x''(t) = v \, x(t) + w \, x'(t), \qquad x(a) = x_a, \quad x(b) = x_b.$$
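A sketch of how the tridiagonal matrix (9) can be assembled. The signs of $B$, $C$, and $D$ follow the reconstruction above and may differ from the authors' exact convention; the parameter values are taken from Table 1 for illustration.

```python
import numpy as np

def bvp_matrix(n, h, v, w):
    """Tridiagonal matrix (9) from a central finite-difference
    discretization of x''(t) = v x(t) + w x'(t).

    B, C, D as reconstructed in the text (an assumption on signs):
    subdiagonal B, diagonal C, superdiagonal D."""
    B = 1.0 - (h / 2.0) * w
    C = -(2.0 + h**2 * v)
    D = 1.0 + (h / 2.0) * w
    return C * np.eye(n) + B * np.eye(n, k=-1) + D * np.eye(n, k=1)

# Illustrative sizes; h, v, w as in Table 1
A = bvp_matrix(5, h=0.01, v=20000.0, w=10.0)
print(A)
```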
Since the matrix A is sparse, the most efficient methods in both families are the Chebyshev-like ones.
In Table 1, we compare our original method (2), which uses $A^{-1}$, with the new combination (5), which uses the approximation $Y_L$ of the inverse. We observe a similar numerical behavior for both methods. Thus, approximating the inverse seems a good alternative, since it attains similar errors at a smaller computational cost.
Finally, we compute the pth matrix roots of the matrix
$$A = \begin{pmatrix} 1 - 2\lambda & \lambda & & & \\ \lambda & 1 - 2\lambda & \lambda & & \\ & \lambda & 1 - 2\lambda & \lambda & \\ & & \ddots & \ddots & \ddots \\ & & & \lambda & 1 - 2\lambda \end{pmatrix}.$$
This matrix is related to the finite-difference discretization of the Laplacian operator, which appears in many mathematical models. In Table 2, we again observe a similar numerical behavior for both schemes.
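The second test matrix is also straightforward to assemble. Reading Table 2's parameters, $\lambda = k/h^2$ appears to be the intended relation, which is our assumption; for $\lambda < 1/4$ the matrix is symmetric positive definite, so its principal pth root is well defined.

```python
import numpy as np

def laplacian_matrix(n, lam):
    """Tridiagonal matrix with diagonal 1 - 2*lam and off-diagonals lam,
    i.e. I + lam * (second-difference matrix)."""
    return ((1.0 - 2.0 * lam) * np.eye(n)
            + lam * (np.eye(n, k=-1) + np.eye(n, k=1)))

# lam = k/h^2 = 2e-2 as suggested by Table 2 (an assumption)
A = laplacian_matrix(100, lam=2e-2)
# Eigenvalues lie in (1 - 4*lam, 1), so A is SPD for lam < 1/4
print(np.all(np.linalg.eigvalsh(A) > 0))  # True
```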

4. Conclusions

The evaluation of functions of a matrix appears in a large and growing number of applications. We have presented and studied a general family of iterative methods, free of inverse operators, for the approximation of the matrix pth root. The family incorporates the approximation of $A^{-1}$ by an iterative method into another iterative method that approximates $A^{1/p}$. The family includes methods of every order of convergence.
As shown in [7,10], the most efficient method, in computational terms, for solving sparse problems, which appear when discretizing differential equations, is Chebyshev's method.
Some numerical examples related to the discretization of differential equations have been presented. We have concluded that the new approach (5) has a numerical behavior similar to that of (2), with the advantage that it is free of inverse operators and, in particular, has a lower computational cost. Finally, we mention the existence of other strategies for computing fractional powers of a matrix that do not use matrix iterations [13,14], as well as other techniques such as those appearing in [15].

Author Contributions

Conceptualization, S.A., S.B., M.Á.H.-V. and Á.A.M.; methodology, S.A., S.B., M.Á.H.-V. and Á.A.M.; software, S.A., S.B., M.Á.H.-V. and Á.A.M.; validation, S.A., S.B., M.Á.H.-V. and Á.A.M.; formal analysis, S.A., S.B., M.Á.H.-V. and Á.A.M.; investigation, S.A., S.B., M.Á.H.-V. and Á.A.M.; resources, S.A., S.B., M.Á.H.-V. and Á.A.M.; data curation, S.A., S.B., M.Á.H.-V. and Á.A.M.; writing, original draft preparation, S.A., S.B., M.Á.H.-V. and Á.A.M.; writing, review and editing, S.A., S.B., M.Á.H.-V. and Á.A.M.; visualization, S.A., S.B., M.Á.H.-V. and Á.A.M.; supervision, S.A., S.B., M.Á.H.-V. and Á.A.M.; project administration, S.A., S.B., M.Á.H.-V. and Á.A.M.; funding acquisition, S.A., S.B., M.Á.H.-V. and Á.A.M. All authors have read and agreed to the published version of the manuscript.

Funding

The research of the authors S.A. and S.B. was funded in part by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 20928/PI/18 and by PID2019-108336GB-100 (MINECO/FEDER). The research of the author M.Á.H.-V. was supported in part by Spanish MCINN PGC2018-095896-B-C21. The research of the author Á.A.M. was funded in part by Programa de Apoyo a la investigación de la fundación Séneca-Agencia de Ciencia y Tecnología de la Región de Murcia 20928/PI/18 and by Spanish MCINN PGC2018-095896-B-C21.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Higham, N.J. Newton’s method for the matrix square root. Math. Comp. 1986, 46, 537–549.
  2. Iannazzo, B. On the Newton method for the matrix pth root. SIAM J. Matrix Anal. Appl. 2006, 28, 503–523.
  3. Guo, C.-H.; Higham, N.J. A Schur–Newton method for the matrix pth root and its inverse. SIAM J. Matrix Anal. Appl. 2006, 28, 788–804.
  4. Iannazzo, B. A note on computing the matrix square root. Calcolo 2003, 40, 273–283.
  5. Petkov, M.; Bors̆ukova, S. A modified Newton method for obtaining roots of matrices. Annu. Univ. Sofia Fac. Math. 1974, 66, 341–347. (In Bulgarian)
  6. Smith, M.I. A Schur algorithm for computing matrix pth roots. SIAM J. Matrix Anal. Appl. 2003, 24, 971–989.
  7. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. On a new family of high-order iterative methods for the matrix pth root. Numer. Linear Algebra Appl. 2015, 22, 585–595.
  8. Guo, C.-H. On Newton’s method and Halley’s method for the principal pth root of a matrix. Linear Algebra Appl. 2010, 432, 1905–1922.
  9. Iannazzo, B. A family of rational iterations and its application to the computation of the matrix pth root. SIAM J. Matrix Anal. Appl. 2008, 30, 1445–1462.
  10. Amat, S.; Ezquerro, J.A.; Hernández-Verón, M.A. Approximation of inverse operators by a new family of high-order iterative methods. Numer. Linear Algebra Appl. 2014, 21, 629–644.
  11. Higham, N.J.; Al-Mohy, A.H. Computing Matrix Functions; MIMS EPrint 2010.18; The University of Manchester: Manchester, UK, 2010.
  12. Szekeres, B.J.; Izsák, F. Finite difference approximation of space-fractional diffusion problems: The matrix transformation method. Comput. Math. Appl. 2017, 73, 261–269.
  13. Iannazzo, B.; Manasse, C. A Schur logarithmic algorithm for fractional powers of matrices. SIAM J. Matrix Anal. Appl. 2013, 34, 794–813.
  14. Higham, N.J.; Lin, L. An improved Schur–Padé algorithm for fractional powers of a matrix and their Fréchet derivatives. SIAM J. Matrix Anal. Appl. 2013, 34, 1341–1360.
  15. Marino, G.; Scardamaglia, B.; Karapinar, E. Strong convergence theorem for strict pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2016, 2016, 134.
Figure 1. Computational Efficiencies for p = 2 and different values of q and m.
Figure 2. Computational Efficiencies for p = 3 and different values of q and m.
Figure 3. Computational Efficiencies for q = 15 and different values of p and m.
Figure 4. Computational Efficiencies for q = 20 and different values of p and m.
Table 1. Error for the approximation of $A^{1/p}$, taking $(m, h, v, w) = (100, 0.01, 20{,}000, 10)$ and four iterations.

p | Original Method Computing $A^{-1}$ | New Combined Method with Smaller Cost
2 | $1.4845 \times 10^{-11}$ | $1.4845 \times 10^{-11}$
4 | $3.6643 \times 10^{-13}$ | $3.6639 \times 10^{-13}$
6 | $3.1660 \times 10^{-13}$ | $3.1655 \times 10^{-13}$
8 | $3.2326 \times 10^{-13}$ | $3.2326 \times 10^{-13}$
Table 2. Error for the approximation of $A^{1/p}$, taking $m = 100$, $k = 2 \times 10^{-6}$, $k/h^2 = 2 \times 10^{-2}$, and three iterations.

p | Original Method Computing $A^{-1}$ | New Combined Method with Smaller Cost
2 | $2.1204 \times 10^{-14}$ | $2.1208 \times 10^{-14}$
4 | $1.8486 \times 10^{-14}$ | $1.8484 \times 10^{-14}$
6 | $1.7260 \times 10^{-14}$ | $1.7261 \times 10^{-14}$
8 | $2.0723 \times 10^{-14}$ | $2.0724 \times 10^{-14}$