A Numerical Method for Computing the Roots of Non-Singular Complex-Valued Matrices

A method for the computation of the n th roots of a general complex-valued r × r non-singular matrix A is presented. The proposed procedure is based on the Dunford-Taylor integral (also ascribed to Riesz-Fantappiè) and relies only on the knowledge of the invariants of the matrix, thereby circumventing the computation of the relevant eigenvalues. Several worked examples are presented to validate the developed algorithm, including cases of higher-order matrices.


Introduction
Complex-valued matrices are a natural extension of complex numbers, and the standard matrix operations are well known [1], with the likely exception of the computation of roots.
A non-zero complex number always has n distinct n th roots. Correspondingly, the n th root of a generic r × r matrix A has n^r determinations. Difficulties arise for matrices of a special type, for which the computation of roots is ill-posed in the sense of J. Hadamard, since such matrices admit either no roots or infinitely many.
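The n^r count can be made concrete with a minimal sketch (a hypothetical diagonal example, not one of the paper's matrices): for a 2 × 2 diagonal matrix, each eigenvalue independently contributes its n scalar determinations.

```python
import itertools

# Hypothetical illustration: for the diagonal matrix A = diag(4, 9), every
# square root is obtained by choosing one of the n = 2 determinations of the
# scalar root for each of the r = 2 eigenvalues, giving n**r = 4 roots.
A = [[4.0, 0.0], [0.0, 9.0]]

def matmul(X, Y):
    """Plain 2x2 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

roots = []
for s1, s2 in itertools.product((1.0, -1.0), repeat=2):
    R = [[s1 * 2.0, 0.0], [0.0, s2 * 3.0]]
    assert matmul(R, R) == A        # each determination squares back to A
    roots.append(R)

print(len(roots))  # the n**r = 2**2 = 4 distinct square roots
```

For non-diagonal (but diagonalizable) matrices, the same count holds after a similarity transformation; the special ill-posed cases mentioned above escape this picture.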
In general, the problem of computing the n th roots of general complex-valued matrices has not received much attention so far. For the simple case of 2 × 2 matrices, a few articles have appeared in Mathematics Magazine [2][3][4] and in Linear Algebra and its Applications [5,6].
The Newton-Raphson method was applied by N.J. Higham in [7] for computing the square roots of general matrices, whereas I.A. al-Tamimi [8] and S.S. Rao et al. [9] used the Cayley-Hamilton theorem for computing roots of general 2 × 2 non-singular matrices. A necessary and sufficient condition for the existence of the n th root of a singular complex matrix A was given by P.J. Psarrakos in [10].
Two alternative techniques for the computation of the n th roots of a non-singular complex-valued matrix have recently been proposed.
The first method was presented in [11] and is based on the Cayley-Hamilton theorem in combination with the representation of non-singular matrix powers in terms of Chebyshev polynomials of the second kind [12,13]. In this way, it is possible to express the roots of non-singular 2 × 2 or 3 × 3 complex-valued matrices by making use of pseudo-Chebyshev functions [14,15]. Unfortunately, the extension of this technique to higher order matrices can hardly be achieved, owing to the complicated inductive procedure that is necessary for this purpose.
A second method, which is referred to as the FKN (F_{k,n} functions) procedure, was described in [16]. This technique can be applied to non-singular r × r (r ≥ 2, r ∈ N) complex-valued matrices, though it does not extend to fully general matrices, such as nilpotent ones. The F_{k,n} functions can be expressed by generalized Lucas polynomials of the second kind [17][18][19][20].
Since the considered problem cannot be solved in its generality when dealing with rootless matrices [4], nilpotent matrices (i.e., Jordan blocks), or matrices with infinitely many roots [21], the present study is devoted to the illustration of a "canonical" method for computing the roots of general matrices in the regular case, so excluding the aforementioned exceptions. In particular, we will show that the FKN expansion can be avoided when representing the n th roots of a non-singular matrix A. As a matter of fact, the problem can be solved in an effective way by making use of the Dunford-Taylor integral, which traces back to previous works by F. Riesz and L. Fantappiè, as well as a known formula for the resolvent of a matrix reported in [22].
It is shown that the evaluation of the roots of a given non-singular matrix A can be performed solely on the basis of the matrix invariants, which are the coefficients of the characteristic equation (or, equivalently, the elementary symmetric functions of the eigenvalues), together with the relevant spectral radius R, which can be estimated using Gershgorin's theorem. In this way, a numerical quadrature rule can be adopted to compute a contour integral extended to a circle centered at the origin and having a radius larger than R. We also stress that, by using the proposed methodology, one can easily evaluate all the determinations of the root of a given matrix.
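The Gershgorin estimate mentioned above can be sketched as follows (the test matrix is an illustrative assumption, not one of the paper's examples): every eigenvalue lies in some disc centered at a diagonal entry a_ii with radius equal to the sum of the off-diagonal moduli in row i, so the largest value of |a_ii| plus that row sum bounds the spectral radius R.

```python
# Sketch: bounding the spectral radius via Gershgorin's theorem, so that a
# circle of slightly larger radius centred at the origin encloses the whole
# spectrum and can serve as the integration contour gamma.
def gershgorin_radius(A):
    r = len(A)
    return max(abs(A[i][i]) + sum(abs(A[i][j]) for j in range(r) if j != i)
               for i in range(r))

A = [[4 + 1j, 1, 0],
     [0.5, -3, 0.2],
     [0, 1j, 2]]              # hypothetical complex-valued test matrix
R_bound = gershgorin_radius(A)
radius = 1.1 * R_bound        # a safe contour radius larger than R
print(R_bound)
```

The bound is generally not tight, but any radius exceeding it is admissible for the quadrature, so tightness is not required.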
The paper is organized as follows. First, we recall the formula for the resolvent of a matrix and then apply this formula in combination with the Dunford-Taylor integral to derive an explicit representation of a matrix root. Several worked examples are presented to demonstrate the effectiveness of the procedure for arbitrary higher order non-singular complex-valued matrices. To this end, the computer algebra system Mathematica© is used.

The Dunford-Taylor Integral
The Dunford-Taylor integral [23] is an analogue of the Cauchy integral formula in function theory. It can be applied to holomorphic functions of a given operator; in the finite-dimensional case, an operator is nothing but a matrix A.

Theorem 1. Suppose that f(λ) is a holomorphic function in a domain ∆ of the complex plane containing all the eigenvalues λ_h of A, and let γ ⊂ ∆ be a closed smooth curve with positive orientation that encloses all the λ_h in its interior. Then, the matrix function f(A) can be defined by the Dunford-Taylor integral:

    f(A) = (1/(2πi)) ∮_γ f(λ) (λI − A)⁻¹ dλ,    (1)

where (λI − A)⁻¹ denotes the resolvent of A.
As an example, given the natural number n, the n th power of A can be evaluated as:

    A^n = (1/(2πi)) ∮_γ λ^n (λI − A)⁻¹ dλ.    (2)

It is worth noting that an alternative technique has been presented in the scientific literature to compute matrix powers through the Cayley-Hamilton theorem and the so-called F_{k,n} functions, which are solutions of linear recursions [12,13]. That method is purely algebraic and does not make use of the quadrature rules which, in the integral approach, are needed to avoid recourse to Cauchy's residue theorem.
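The power formula above lends itself to a direct numerical check. The following sketch (in Python rather than the paper's Mathematica; the 2 × 2 matrix, contour radius, and node count are illustrative assumptions) discretizes the contour integral for f(λ) = λ^n by the trapezoidal rule on a circle enclosing the spectrum, solving a small linear system at each node to obtain the resolvent.

```python
import cmath

# Sketch: A**n recovered from the Dunford-Taylor integral with
# f(lambda) = lambda**n, discretised by the trapezoidal rule on a circle
# of radius rho that encloses the spectrum.

def solve(M, B):
    """Solve M X = B for small complex matrices by Gauss-Jordan elimination."""
    r = len(M)
    aug = [row_m[:] + row_b[:] for row_m, row_b in zip(M, B)]
    for col in range(r):
        piv = max(range(col, r), key=lambda i: abs(aug[i][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [v / aug[col][col] for v in aug[col]]
        for i in range(r):
            if i != col and aug[i][col] != 0:
                aug[i] = [a - aug[i][col] * b for a, b in zip(aug[i], aug[col])]
    return [row[r:] for row in aug]

def dunford_taylor_power(A, n, rho, N=400):
    r = len(A)
    I = [[1.0 + 0j if i == j else 0j for j in range(r)] for i in range(r)]
    F = [[0j] * r for _ in range(r)]
    for m in range(N):
        lam = rho * cmath.exp(2j * cmath.pi * m / N)
        M = [[lam * I[i][j] - A[i][j] for j in range(r)] for i in range(r)]
        R = solve(M, I)                 # resolvent (lam*I - A)^(-1)
        w = lam ** (n + 1) / N          # f(lam) * lam * (1/N): trapezoid weight
        for i in range(r):
            for j in range(r):
                F[i][j] += w * R[i][j]
    return F

A = [[2.0, 1.0], [0.0, 3.0]]            # eigenvalues 2 and 3, inside |lam| = 5
A3 = dunford_taylor_power(A, 3, rho=5.0)
```

Since the integrand is analytic on the contour, the trapezoidal rule converges geometrically in N, so a few hundred nodes already reproduce the exact power to machine precision.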
It is useful to mention that, if A is non-singular, both Equation (2) and the results reported in [12,13] remain valid for negative values of the exponent n, since the FKN functions can be defined for n < 0 as well [24].
For holomorphic functions of general matrices, the evaluation of the Dunford-Taylor integral is more convenient than the application of Cauchy's residue theorem from a computational standpoint. In fact, the former approach relies only on the knowledge of the entries of the considered matrix and the relevant invariants, whereas the knowledge of the eigenvalues is required whenever Cauchy's residue theorem is applied.

Recalling the Resolvent of a Matrix
The resolvent of an operator is an important tool for applying methods of complex analysis to the theory of operators on Banach spaces. The holomorphic functional calculus gives a formal justification of the procedure used, and the spectral properties of the operator are reflected in the analytic structure of its resolvent.
In this study, we consider the finite-dimensional case, so that the general operator under consideration can be represented as an r × r (r ∈ N) complex-valued matrix A.
We denote by:

    u_1, u_2, …, u_r    (3)

the invariants of A, i.e., the coefficients of the characteristic polynomial:

    P(λ) = det(λI − A) = λ^r − u_1 λ^{r−1} + u_2 λ^{r−2} − … + (−1)^r u_r,    (4)

which are invariant under similarity transformations. Let λ_1, λ_2, …, λ_r be the roots of P(λ), i.e., the eigenvalues of A.
In [22] (pp. 93-95), the following representation of the resolvent (λI − A)⁻¹ in terms of the invariants of A was proven:

    (λI − A)⁻¹ = (1/P(λ)) Σ_{k=0}^{r−1} λ^k B_k,    (5)

where:

    B_k = Σ_{j=0}^{r−1−k} (−1)^j u_j A^{r−1−k−j},  with u_0 := 1.    (6)

By using (5) and (6), we can easily derive the representation formula for matrix functions reported in [18].
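The invariants themselves can be generated without computing eigenvalues, e.g., by the Faddeev-LeVerrier recursion, and the resolvent expansion can then be checked numerically at a sample point. In this sketch the adjugate-type form of the expansion written in the comments, the 3 × 3 test matrix, and the sample point λ are assumptions for illustration, not the paper's data.

```python
# Sketch: invariants u_k via Faddeev-LeVerrier, then a numerical check that
#   (lam*I - A)^(-1) = (1/P(lam)) * sum_k lam^k * sum_j (-1)^j u_j A^(r-1-k-j)
# (an adjugate-type expansion of the resolvent in terms of the invariants)
# reproduces the inverse at a sample point lam.

def matmul(X, Y):
    r = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(r)) for j in range(r)]
            for i in range(r)]

def identity(r):
    return [[1.0 + 0j if i == j else 0j for j in range(r)] for i in range(r)]

def invariants(A):
    """Return [u_0, u_1, ..., u_r] with P(lam) = sum_j (-1)^j u_j lam^(r-j)."""
    r = len(A)
    M = identity(r)
    u = [1.0 + 0j]
    for k in range(1, r + 1):
        AM = matmul(A, M)
        c = -sum(AM[i][i] for i in range(r)) / k   # coefficient of lam^(r-k)
        u.append((-1) ** k * c)                    # u_k = (-1)^k c_k
        M = [[AM[i][j] + (c if i == j else 0) for j in range(r)]
             for i in range(r)]
    return u

A = [[2.0, 1.0, 0.0], [0.0, 3.0, 1.0], [1.0, 0.0, 1.0]]   # hypothetical
r = len(A)
u = invariants(A)

lam = 2.5 + 1j
P = sum((-1) ** j * u[j] * lam ** (r - j) for j in range(r + 1))
pw = [identity(r)]                       # powers I, A, ..., A^(r-1)
for _ in range(r - 1):
    pw.append(matmul(pw[-1], A))
Num = [[0j] * r for _ in range(r)]       # numerator of the expansion
for k in range(r):
    for j in range(r - k):
        coef = (-1) ** j * u[j] * lam ** k
        for i in range(r):
            for l in range(r):
                Num[i][l] += coef * pw[r - 1 - k - j][i][l]
R = [[Num[i][j] / P for j in range(r)] for i in range(r)]
M = [[lam * (1 if i == j else 0) - A[i][j] for j in range(r)] for i in range(r)]
check = matmul(M, R)                     # should equal the identity
```

For this matrix the recursion yields u_1 = 6 (trace), u_2 = 11, u_3 = 7 (determinant), and the product (λI − A)·R equals the identity to machine precision.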
Theorem 2. Let f(λ) be a holomorphic function in a domain ∆ of the complex plane containing the spectrum of A, and denote by γ ⊂ ∆ a simple contour enclosing all the zeros of P(λ). Then, the Dunford-Taylor integral is written as:

    f(A) = (1/(2πi)) ∮_γ (f(λ)/P(λ)) Σ_{k=0}^{r−1} λ^k Σ_{j=0}^{r−1−k} (−1)^j u_j A^{r−1−k−j} dλ,  with u_0 := 1.    (7)

It should be noted that, if ∆ does not contain the origin, a simple choice of γ is a circle centered at the origin and having a radius larger than the spectral radius of A, which can be estimated using Gershgorin's theorem.
Let us now consider the function f(λ) = λ^{1/n}, where n is a fixed integer. As this function is holomorphic in the open set ∆ = C − {0}, i.e., in the whole plane excluding the origin, the preceding theorem can be reformulated as follows.

Theorem 3. If A is a non-singular complex-valued matrix and γ ⊂ ∆ is a simple contour enclosing all the zeros of P(λ), then the n th roots of A can be represented as:

    A^{1/n} = Σ_{k=0}^{r−1} ξ_k A^k,    (8)

with the general coefficient ξ_k given by:

    ξ_k = (1/(2πi)) ∮_γ (λ^{1/n}/P(λ)) Σ_{j=0}^{r−1−k} (−1)^j u_j λ^{r−1−k−j} dλ,  with u_0 := 1.    (9)

Recalling Cauchy's residue theorem [25] and denoting the integrand in (9) by Φ = Φ(λ), the contour integral can be evaluated as:

    ∮_γ Φ(λ) dλ = 2πi Σ_{h=1}^{r} Res[Φ(λ), λ_h].    (10)

Assuming, for simplicity, that the eigenvalues are all simple and, therefore:

    P′(λ_h) = Π_{ℓ=1, ℓ≠h}^{r} (λ_h − λ_ℓ) ≠ 0,

we find:

    Res[Φ(λ), λ_h] = (λ_h^{1/n}/P′(λ_h)) Σ_{j=0}^{r−1−k} (−1)^j u_j λ_h^{r−1−k−j},    (11)

where we have put, by definition, (λ − λ_0) = (λ − λ_{r+1}) := 1. Finally, upon combining (10) and (11), it is straightforward to verify that (9) becomes:

    ξ_k = Σ_{h=1}^{r} (λ_h^{1/n}/P′(λ_h)) Σ_{j=0}^{r−1−k} (−1)^j u_j λ_h^{r−1−k−j}.    (12)

A similar result can be found in the case of multiple roots of the characteristic polynomial by using the following more general equation, which holds true for a pole of order m at the point λ̄:

    Res[Φ(λ), λ̄] = (1/(m−1)!) lim_{λ→λ̄} d^{m−1}/dλ^{m−1} [(λ − λ̄)^m Φ(λ)].

Remark 1. Note that the knowledge of the eigenvalues is not necessary unless the integral in (7) is computed by means of Cauchy's residue theorem. In general, the knowledge of the matrix invariants alone is sufficient to evaluate the considered contour integral by choosing, as γ, a circle centered at the origin with a radius larger than the spectral radius of A. Examples of computations using Cauchy's residue theorem were given in [16].
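The coefficients ξ_k can be approximated by a trapezoidal quadrature that uses only the invariants. In the sketch below (Python rather than the paper's Mathematica), the 2 × 2 test matrix, contour, and node count are illustrative assumptions; since this test spectrum lies in the right half-plane, the contour is centered away from the origin so that a single branch of the principal root is holomorphic on and inside it, which is a choice made for this example rather than the paper's general prescription.

```python
import cmath

# Sketch: the coefficients xi_k of A^(1/n) computed as contour integrals
# involving only the invariants u_j, discretised by the trapezoidal rule.
A = [[4.0, 1.0], [0.0, 9.0]]      # hypothetical test matrix, eigenvalues 4, 9
r, n = len(A), 2                   # compute a square root
u = [1.0, 13.0, 36.0]              # u_0 := 1, u_1 = tr A, u_2 = det A
centre, radius, N = 6.5, 5.0, 400  # circle enclosing {4, 9}, avoiding 0

xi = [0j] * r
for m in range(N):
    lam = centre + radius * cmath.exp(2j * cmath.pi * m / N)
    P = sum((-1) ** j * u[j] * lam ** (r - j) for j in range(r + 1))
    w = (lam - centre) / N         # trapezoidal weight, dlam / (2*pi*i)
    for k in range(r):
        num = sum((-1) ** j * u[j] * lam ** (r - 1 - k - j)
                  for j in range(r - k))
        xi[k] += w * lam ** (1.0 / n) * num / P   # principal branch of root

# assemble A^(1/n) = xi_0 * I + xi_1 * A
root = [[xi[0] * (1 if i == j else 0) + xi[1] * A[i][j] for j in range(r)]
        for i in range(r)]
# verification: squaring the result must reproduce A up to quadrature error
sq = [[sum(root[i][k] * root[k][j] for k in range(r)) for j in range(r)]
      for i in range(r)]
assert all(abs(sq[i][j] - A[i][j]) < 1e-6 for i in range(r) for j in range(r))
```

At no point are the eigenvalues used: only the invariants, the contour, and the quadrature nodes enter the computation.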

Remark 2.
Note that, upon selecting the different determinations of the root function of degree n in the formal expression of A 1/n , it is possible to evaluate the various roots of A through Equation (8).
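One way to select determinations, sketched below with hypothetical data, is to multiply the scalar root function by the phase factor exp(2πis/n), s = 0, …, n − 1; since each such function still satisfies f(λ)^n = λ, every resulting matrix is a genuine n th root. Note that this global phase change yields n of the n^r determinations; mixed determinations would require choosing the branch separately on different parts of the spectrum.

```python
import cmath

# Sketch: different determinations of A^(1/n) obtained by rotating the
# scalar root function. Matrix, contour, and node count are illustrative
# assumptions (same hypothetical data as the quadrature sketch above).
A = [[4.0, 1.0], [0.0, 9.0]]
r, n = len(A), 2
u = [1.0, 13.0, 36.0]              # u_0 := 1, u_1 = tr A, u_2 = det A
centre, radius, N = 6.5, 5.0, 400

def nth_root(s):
    """A^(1/n) for the determination selected by the phase index s."""
    phase = cmath.exp(2j * cmath.pi * s / n)
    xi = [0j] * r
    for m in range(N):
        lam = centre + radius * cmath.exp(2j * cmath.pi * m / N)
        P = sum((-1) ** j * u[j] * lam ** (r - j) for j in range(r + 1))
        w = (lam - centre) / N
        froot = phase * lam ** (1.0 / n)
        for k in range(r):
            num = sum((-1) ** j * u[j] * lam ** (r - 1 - k - j)
                      for j in range(r - k))
            xi[k] += w * froot * num / P
    return [[xi[0] * (1 if i == j else 0) + xi[1] * A[i][j]
             for j in range(r)] for i in range(r)]

roots = [nth_root(s) for s in range(n)]
for R_ in roots:                   # each determination squares back to A
    sq = [[sum(R_[i][k] * R_[k][j] for k in range(r)) for j in range(r)]
          for i in range(r)]
    assert all(abs(sq[i][j] - A[i][j]) < 1e-6
               for i in range(r) for j in range(r))
```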
Based on Theorem 3, the proposed algorithm for the evaluation of the n th roots of a matrix A can be described through the flowchart illustrated in Figure 1. It is straightforward to verify that the computational complexity of the relevant numerical procedure is O(r^4) in the worst case.

Square Root of a 6 × 6 Non-Singular Matrix
Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that (A^{1/2})^2 − A = O_6, with O_6 denoting the zero matrix of order six.

Cubic Root of a 5 × 5 Non-Singular Matrix
Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that (A^{1/3})^3 − A = O_5, with O_5 denoting, as usual, the zero matrix of order five.

Fifth Root of a 4 × 4 Non-Singular Matrix
Consider the matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: Putting: we find: It is not difficult to verify that (A^{1/5})^5 − A = O_4, with O_4 denoting the zero matrix of order four.

Square Root of a 3 × 3 Non-Singular Matrix
Consider the complex-valued matrix: The relevant invariants are: whereas the corresponding eigenvalues are found to be: By taking the various determinations of the square root function, we find: For each one of the n^r = 2^3 = 8 roots, it is not difficult to verify that (A^{1/2})^2 − A = O_3, with O_3 denoting the zero matrix of order three.

Conclusions
A general method for computing the n th roots of complex-valued matrices was detailed. The proposed procedure was based on the application of the Dunford-Taylor integral in combination with a suitable representation formula for the matrix resolvent. In this way, it was possible to overcome the limitations of the techniques already available in the scientific literature regarding both the matrix order and the root degree.
The presented approach provided an effective means for evaluating all the determinations of the root of a given matrix with a reduced computational complexity and burden, since it relied only on the knowledge of the matrix invariants, thereby circumventing the need for computing the relevant eigenvalues, while the spectral radius could be estimated using Gershgorin's theorem. Several worked examples were provided to demonstrate the correctness of the procedure for higher order real- and complex-valued matrices.