A Study of the Second-Kind Multivariate Pseudo-Chebyshev Functions of Fractional Degree

In this paper, the second-kind multivariate pseudo-Chebyshev functions of fractional degree are introduced by means of the Dunford–Taylor integral. As an application, the problem of finding matrix roots for a wide class of non-singular complex matrices is considered, and the principal value of the matrix root is determined. In general, by changing the determinations of the numerical roots involved, n^r roots of the n-th root of an r × r matrix can be found. The exceptional cases, for which there are infinitely many roots or no roots at all, are excluded.


Introduction
Special functions and polynomials are used in many applications of physics, engineering, and applied mathematics, such as electrodynamics, classical and modern physics, quantum mechanics, classical mechanics and, more recently, the biological sciences and many other fields. Among them, the hypergeometric functions constitute an important class that unifies, through the introduction of appropriate parameters, most (if not all) of the special functions, including elliptic integrals, Beta functions, the incomplete Gamma function, Bessel functions, Legendre functions, the classical orthogonal polynomials, the Kummer confluent functions, and so on (see, for example, [1][2][3]).
Many multivariate generalizations of hypergeometric functions have been studied in the literature on Special Functions, even through an extension of the Pochhammer symbol [4][5][6][7][8].
In earlier investigations, explicit formulas for computing matrix powers were derived in connection with the introduction of the multivariate second-kind Chebyshev polynomials (see, for example, [9,10]). Since these articles were written in Italian, they were largely overlooked by the mathematical community.
Recently, a special set of univariate hypergeometric functions, called the pseudo-Chebyshev functions, was introduced [11] and studied [12].
In this article, we start by considering the standard basis of a linear recurrence relation, whose initial conditions are the entries of the reflected identity matrix. Then the role of the second-kind generalized Lucas polynomials and their connection with the multivariate second-kind Chebyshev polynomials are recalled [13]. By comparing a matrix power representation [9] with the results of the Dunford–Taylor (or Riesz–Fantappiè) integral, an integral representation of these polynomials is derived. As a consequence, the extension of these polynomials to the case of rational indices is achieved.
As an application, it is shown how to compute, by using the obtained multivariate second-kind pseudo-Chebyshev functions, the principal value of matrix roots. Other roots can be found by changing the determinations of the numerical roots involved.
In the authors' opinion, the second-kind pseudo-Chebyshev functions seem to be naturally connected with the problem of computing matrix roots, as the multivariate Chebyshev polynomials are with regard to the representation of integer powers of matrices.
As is well known, the problem of finding the roots of an r × r matrix (see [14,15]) is a difficult one, since there are matrices without roots (e.g., the Jordan blocks considered in [16]) and other matrices that have infinitely many roots (see, e.g., [17]). Of course, in what follows, these exceptional cases are excluded.
The Cayley-Hamilton Theorem was applied to compute roots of 2 × 2 non-singular matrices by Al-Tamimi [18], and by Rao et al. [19] for n × n matrices with non-negative distinct eigenvalues, since this subject appears in the framework of Markov models of finance. On the other hand, Psarrakos [20] gave a necessary and sufficient condition for the existence of m-th roots of a singular complex matrix A in terms of the dimensions of the null spaces of matrix powers.
Finally, in Section 5, a worked-out example concerning the square root of a 3 × 3 matrix is explicitly considered in order to show the effectiveness of the method proposed here.

Basic Definitions
Definition 1. Given the r × r matrix A = [a_{ij}], its characteristic polynomial is given by

P(λ) := det(λ I − A) = λ^r − u_1 λ^{r−1} + u_2 λ^{r−2} − · · · + (−1)^r u_r ,   (1)

and its invariants u_1, u_2, . . . , u_r are the elementary symmetric functions of the eigenvalues λ_1, λ_2, . . . , λ_r of A:

u_k = σ_k(λ_1, λ_2, . . . , λ_r)   (k = 1, 2, . . . , r),   so that   u_1 = tr A ,   u_r = det A .   (2)

It is worth noting that an extension of the notion of characteristic polynomial, which is useful in graphical representations of molecules, as well as in problems involving relations between the structure and the properties of chemical compounds, has been recently considered in [21].
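As a numerical illustration (a sketch, not part of the original text): with the sign convention of Equation (1), the invariants can be read off from the coefficients of the characteristic polynomial, e.g., with numpy:

```python
import numpy as np

def invariants(A):
    """Invariants u_1, ..., u_r of A, read off from the coefficients of
    P(lam) = det(lam I - A) = lam^r - u_1 lam^{r-1} + ... + (-1)^r u_r."""
    r = A.shape[0]
    c = np.poly(A)                # c[0] = 1, and c[k] = (-1)^k u_k
    return np.array([(-1) ** k * c[k] for k in range(1, r + 1)])

A = np.array([[1.0, 2.0], [3.0, 4.0]])
u = invariants(A)
# the first invariant is the trace, the last one the determinant
assert np.allclose(u[0], np.trace(A)) and np.allclose(u[-1], np.linalg.det(A))
```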

Recalling the Functions F_{k,n}
It is well known (see, for example, [22,23]) that a basis for the r-dimensional vector space of solutions of the homogeneous linear bilateral recurrence relation with constant coefficients u_k (k = 1, 2, · · · , r), u_r ≠ 0,

X_n = u_1 X_{n−1} − u_2 X_{n−2} + · · · + (−1)^{r−1} u_r X_{n−r} ,   (3)

is given by the functions F_{k,n} = F_{k,n}(u_1, u_2, · · · , u_r) (k = 1, 2, · · · , r; n ≥ −1), defined by the initial conditions

F_{k,h} = δ_{h, r−k−1}   (h = −1, 0, · · · , r − 2),   (4)

that is, by the entries of the reflected identity matrix.

Remark 1. The F_{k,n} functions constitute a different basis with respect to the usual one, which exploits the roots of the characteristic equation [24]. The F_{k,n} basis does not require knowledge of the roots and does not depend on their multiplicity; in many cases it is therefore more convenient.
Since u_r ≠ 0, the F_{k,n} functions can be defined even for n < −1, by means of the so-called reflection properties, obtained by solving Equation (3) with respect to its last term:

F_{k,n−r} = (−1)^{r−1} u_r^{−1} ( F_{k,n} − u_1 F_{k,n−1} + · · · + (−1)^{r−1} u_{r−1} F_{k,n−r+1} ) .   (5)

It has been shown by É. Lucas [22,25] that all the {F_{k,n}}_{n∈Z} functions can be expressed through the bilateral sequence {F_{1,n}}_{n∈Z}, corresponding to the initial conditions in Equation (4). More precisely, the following equations hold (putting u_0 := 1):

F_{k,n} = Σ_{h=0}^{k−1} (−1)^h u_h F_{1, n+k−1−h}   (k = 1, 2, · · · , r).   (6)

Therefore, the bilateral sequence {F_{1,n}}_{n∈Z} is called the fundamental solution of Equation (3) ("fonction fondamentale" by É. Lucas [25]). The functions F_{1,n}(u_1, · · · , u_r) are called in the literature [23] generalized Lucas polynomials of the second kind, and are related to the multivariate Chebyshev polynomials (see, e.g., R. Lidl and C. Wells [26], R. Lidl [27], M. Bruschi and P. E. Ricci [13], K. B. Dunn and R. Lidl [28], and R. J. Beerends [29]).
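A minimal numerical sketch of these definitions (assuming the initial conditions F_{k,h} = δ_{h, r−k−1}, i.e., the reflected identity matrix, and the expansion of F_{k,n} through the fundamental sequence F_{1,n}; function names are illustrative):

```python
import numpy as np

def F(k, n, u):
    """F_{k,n}(u_1,...,u_r): solution of the recurrence
    X_m = u_1 X_{m-1} - u_2 X_{m-2} + ... + (-1)^{r-1} u_r X_{m-r},
    with initial values F_{k,h} = delta_{h, r-k-1} for h = -1, 0, ..., r-2
    (the entries of the reflected identity matrix)."""
    r = len(u)
    vals = [1.0 if h == r - k - 1 else 0.0 for h in range(-1, r - 1)]
    for m in range(r - 1, n + 1):
        vals.append(sum((-1) ** (j - 1) * u[j - 1] * vals[m - j + 1]
                        for j in range(1, r + 1)))
    return vals[n + 1]

# Lucas' expansion through the fundamental sequence F_{1,n} (with u_0 := 1),
# checked here for r = 3 and sample invariants u_1 = 1, u_2 = 2, u_3 = 3:
u = (1.0, 2.0, 3.0)
assert np.isclose(F(2, 5, u), F(1, 6, u) - u[0] * F(1, 5, u))
assert np.isclose(F(3, 5, u), F(1, 7, u) - u[0] * F(1, 6, u) + u[1] * F(1, 5, u))
```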

Matrix Powers Representation
In the preceding articles [9,22], the following result has been proved:

Theorem 1. Given an r × r matrix A, putting by definition u_0 := 1, and denoting by P(λ) its characteristic polynomial (or possibly its minimal polynomial, if this is known), the matrix powers A^n, with integral exponent n, are given by the equation:

A^n = Σ_{k=1}^{r} F_{k, n−1}(u_1, . . . , u_r) A^{r−k} ,   (7)

where the functions F_{k,n}(u_1, . . . , u_r) are defined in Section 2.1.
Moreover, if A is non-singular, i.e., u r = 0, Equation (7) still works for negative integers n, assuming the reflection properties (Equation (5)) for the F k,n functions.
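This representation can be verified numerically; a sketch (helper names are illustrative), using the F_{k,n} recurrence recalled above and comparing with direct matrix powers:

```python
import numpy as np

def F(k, n, u):
    """Basis solutions F_{k,n} of X_m = u_1 X_{m-1} - ... + (-1)^{r-1} u_r X_{m-r},
    with initial values F_{k,h} = delta_{h, r-k-1} (h = -1, ..., r-2)."""
    r = len(u)
    vals = [1.0 if h == r - k - 1 else 0.0 for h in range(-1, r - 1)]
    for m in range(r - 1, n + 1):
        vals.append(sum((-1) ** (j - 1) * u[j - 1] * vals[m - j + 1]
                        for j in range(1, r + 1)))
    return vals[n + 1]

def matrix_power(A, n):
    """A^n = sum_{k=1}^r F_{k,n-1}(u) A^{r-k}  (Theorem 1, as reconstructed)."""
    r = A.shape[0]
    c = np.poly(A)                                    # characteristic coefficients
    u = [(-1) ** k * c[k] for k in range(1, r + 1)]   # invariants u_1, ..., u_r
    return sum(F(k, n - 1, u) * np.linalg.matrix_power(A, r - k)
               for k in range(1, r + 1))

A = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 3.0]])
assert np.allclose(matrix_power(A, 7), np.linalg.matrix_power(A, 7))
```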
It is worth recalling that the knowledge of the eigenvalues is equivalent to that of the invariants, since the invariants are the elementary symmetric functions of the eigenvalues.

Remark 2.
Note that, as a consequence of the above result, the higher powers of a matrix A are always expressible in terms of the lower ones (with exponent less than the dimension of A).
By using the equations in Equation (6), the result of Theorem 1 can be written in terms of the sequence F_{1,n}(u_1, . . . , u_r).

Theorem 2. Putting for shortness u := (u_1, · · · , u_r), the integer powers of a non-singular r × r matrix A can be written in terms of the sequence F_{1,n}(u) as follows:

A^n = Σ_{k=1}^{r} ( Σ_{h=0}^{k−1} (−1)^h u_h F_{1, n+k−h−2}(u) ) A^{r−k} .   (8)

The Dunford-Taylor Integral
Theorem 3. Consider an r × r matrix A = {a_{h,k}}, with the characteristic polynomial in Equation (1) and invariants given by Equation (2). Let f be a function holomorphic in an open set O containing all the eigenvalues of A. Then, the matrix function f(A) is given by the Dunford-Taylor integral [30] (actually tracing back to Frigyes Riesz [31] and Luigi Fantappiè [32]):

f(A) = (1/(2πi)) ∮_γ f(λ) (λ I − A)^{−1} dλ = (1/(2πi)) Σ_{k=1}^{r} [ ∮_γ ( f(λ) Σ_{h=0}^{k−1} (−1)^h u_h λ^{k−1−h} / P(λ) ) dλ ] A^{r−k} ,   (9)

where γ denotes a simple contour enclosing all the zeros of P(λ).
In particular, the integer powers of A are given by the equation:

A^n = (1/(2πi)) Σ_{k=1}^{r} [ ∮_γ ( λ^n Σ_{h=0}^{k−1} (−1)^h u_h λ^{k−1−h} / P(λ) ) dλ ] A^{r−k} .   (10)

Remark 3. If the eigenvalues of A are known, Equation (10), by the residue theorem, gives back the Lagrange-Sylvester representation. However, for computing the integrals appearing in Equation (10), it is sufficient to know a circular domain D (γ := ∂D) containing the spectrum of A; by Gerschgorin's theorem, only the entries of A are needed, without computing its eigenvalues. Therefore, this approach is computationally less expensive than the Lagrange-Sylvester formula.
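The computational remark can be illustrated with a short quadrature sketch (assumed code, using the resolvent form of the integral and a circle enclosing all Gershgorin disks; the trapezoidal rule is spectrally accurate on such contours):

```python
import numpy as np

def dunford_taylor(A, f, n_nodes=400):
    """f(A) = (1/(2 pi i)) \\oint_gamma f(lam) (lam I - A)^{-1} dlam,
    with gamma a circle of radius R enclosing the spectrum of A,
    evaluated by the trapezoidal rule."""
    r = A.shape[0]
    I = np.eye(r)
    # any circle |lam| = R with R above the largest row sum encloses
    # all Gershgorin disks, hence the whole spectrum of A
    R = np.max(np.sum(np.abs(A), axis=1)) + 1.0
    nodes = R * np.exp(2j * np.pi * np.arange(n_nodes) / n_nodes)
    acc = np.zeros((r, r), dtype=complex)
    for lam in nodes:
        # the factor lam comes from dlam = i lam dtheta on the circle
        acc += f(lam) * lam * np.linalg.inv(lam * I - A)
    return acc / n_nodes

A = np.array([[2.0, 1.0], [0.5, 3.0]])
P3 = dunford_taylor(A, lambda z: z ** 3)
assert np.allclose(P3.real, np.linalg.matrix_power(A, 3), atol=1e-8)
```

Note that only the entries of A are used to choose the contour; the eigenvalues are never computed.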

Integral Representation of the F_{k,n} Functions
By comparing Equations (7) and (10), the following result immediately follows:

Theorem 4. Under the same hypotheses and notation of Theorem 3, for any n ∈ N the F_{k,n} functions are represented by the integral:

F_{k,n}(u_1, . . . , u_r) = (1/(2πi)) ∮_γ ( λ^{n+1} Σ_{h=0}^{k−1} (−1)^h u_h λ^{k−1−h} / P(λ) ) dλ .   (11)

In particular, for k = 1,

F_{1,n}(u_1, . . . , u_r) = (1/(2πi)) ∮_γ ( λ^{n+1} / P(λ) ) dλ .   (12)
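A quadrature check of this representation (a sketch; the contour radius uses the Cauchy bound on the zeros of P, and all names are illustrative):

```python
import numpy as np

def F1(n, u):
    """Fundamental solution F_{1,n} by the recurrence, with initial values
    F_{1,h} = delta_{h, r-2} for h = -1, ..., r-2."""
    r = len(u)
    vals = [1.0 if h == r - 2 else 0.0 for h in range(-1, r - 1)]
    for m in range(r - 1, n + 1):
        vals.append(sum((-1) ** (j - 1) * u[j - 1] * vals[m - j + 1]
                        for j in range(1, r + 1)))
    return vals[n + 1]

def F1_contour(n, u, n_nodes=400):
    """F_{1,n} = (1/(2 pi i)) \\oint lam^{n+1} / P(lam) dlam, with
    P(lam) = lam^r - u_1 lam^{r-1} + ... + (-1)^r u_r, by the trapezoidal rule."""
    r = len(u)
    coeffs = [1.0] + [(-1) ** k * u[k - 1] for k in range(1, r + 1)]
    R = 1.0 + max(abs(c) for c in coeffs)     # Cauchy bound on the zeros of P
    lam = R * np.exp(2j * np.pi * np.arange(n_nodes) / n_nodes)
    return np.mean(lam ** (n + 1) / np.polyval(coeffs, lam) * lam)

u = (1.0, 2.0, 3.0)
assert abs(F1_contour(6, u) - F1(6, u)) < 1e-6
```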

Multivariate Second-Kind Chebyshev Polynomials
It has been noticed in [13] that the fundamental solution of the recursion given by Equation (3), namely the sequence F_{1,n}(u_1, u_2, · · · , u_{r−1}, u_r), assuming u_r = 1, defines the set of Chebyshev polynomials of the second kind in r − 1 variables:

U_n(u_1, u_2, · · · , u_{r−1}) := F_{1,n}(u_1, u_2, · · · , u_{r−1}, 1) .   (13)

Then, by Equation (12), the integral representation of the multivariate second-kind Chebyshev polynomials follows:

U_n(u_1, u_2, · · · , u_{r−1}) = (1/(2πi)) ∮_γ ( λ^{n+1} / (λ^r − u_1 λ^{r−1} + · · · + (−1)^{r−1} u_{r−1} λ + (−1)^r) ) dλ .   (14)

Remark 5. When r = 2, U_n(u_1) = U_n(u_1/2), where on the right-hand side U_n denotes the classical second-kind Chebyshev polynomial. In this case, Equation (14) gives back the well-known integral representation of the second-kind Chebyshev polynomials U_n(x):

U_n(x) = (1/(2πi)) ∮_γ ( λ^{n+1} / (λ² − 2xλ + 1) ) dλ .   (15)

Let A = [a_{h,k}] be a non-singular complex matrix of order r, whose characteristic polynomial is given by Equation (1), with u_r ≠ 0, and put, for shortness, ũ = (u_1, u_2, · · · , u_{r−1}).
Consider the (r − 1)-variable Chebyshev polynomials [13], defined by the recursion:

U_n(ũ) = u_1 U_{n−1}(ũ) − u_2 U_{n−2}(ũ) + · · · + (−1)^{r−1} U_{n−r}(ũ) ,
U_{r−2}(ũ) = 1 ,   U_h(ũ) = 0   (h = −1, 0, · · · , r − 3).

Then, from Theorem 2, we can derive the result [9]:

Theorem 5. The integer powers of the matrix A are given by Equation (17), obtained from Equation (8) by expressing the fundamental solution F_{1,n}(u) through the polynomials U_n.

Remark 6. Note that Equation (17) can be simplified by assuming the condition det A = u_r = 1, which is not a restriction. In fact, one can put:

à := (u_r)^{−1/r} A   (principal determination of the root),

so that det à = 1 and A^n = (u_r)^{n/r} Ã^n. By using a matrix Ã, such that det à = 1, and denoting by ũ = (u_1, u_2, · · · , u_{r−1}) its invariants, Equation (17) becomes:

Ã^n = Σ_{k=1}^{r} ( Σ_{h=0}^{k−1} (−1)^h u_h U_{n+k−h−2}(ũ) ) Ã^{r−k}   (u_0 := 1).   (18)

In what follows, when u_r = 1, in order to simplify the notation, it is convenient to put, by definition: U_n := U_n(ũ).
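Under the normalization det A = 1, the simplified power formula can be checked numerically; a sketch (the matrix below is an assumed companion-matrix example with unit determinant, not taken from the paper):

```python
import numpy as np

def U(n, ut):
    """Multivariate second-kind Chebyshev polynomial U_n(u_1,...,u_{r-1}),
    i.e., F_{1,n} with u_r = 1: recurrence with initial values
    U_{r-2} = 1, U_h = 0 for h = -1, ..., r-3."""
    u = list(ut) + [1.0]
    r = len(u)
    vals = [1.0 if h == r - 2 else 0.0 for h in range(-1, r - 1)]
    for m in range(r - 1, n + 1):
        vals.append(sum((-1) ** (j - 1) * u[j - 1] * vals[m - j + 1]
                        for j in range(1, r + 1)))
    return vals[n + 1]

# assumed example: companion matrix of lambda^3 - 3 lambda^2 + 2 lambda - 1 (det = 1)
A = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, -2.0], [0.0, 1.0, 3.0]])
c = np.poly(A)
ut = [(-1) ** k * c[k] for k in range(1, 3)]      # u_1, u_2 (here u_3 = det A = 1)
u_full = [1.0] + ut                               # u_0 := 1, u_1, u_2
n, r = 6, 3
# A^n = sum_k [ sum_h (-1)^h u_h U_{n+k-h-2} ] A^{r-k}   (simplified power formula)
An = sum(sum((-1) ** h * u_full[h] * U(n + k - h - 2, ut) for h in range(k))
         * np.linalg.matrix_power(A, r - k) for k in range(1, r + 1))
assert np.allclose(An, np.linalg.matrix_power(A, n))
```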

Extension to the Rational Case
It is worth noting that the integral representation given by Equation (14) makes it possible to extend the definition of the second-kind Chebyshev polynomials to the case of rational indices.
Of course, the root multiplicity problem should then be taken into account; however, to narrow the scope of our investigation, we limit ourselves to working with the principal values of the considered roots.
To this aim, we give the following

Definition 2. The multivariate second-kind pseudo-Chebyshev functions of rational degree p/q are defined by the integral:

U_{p/q}(ũ) = (1/(2πi)) ∮_γ ( λ^{(p/q)+1} / P(λ) ) dλ ,   (20)

where the principal branch of λ^{(p/q)+1} has been fixed.
The most interesting case in the framework of the pseudo-Chebyshev functions is that of half-integer degree. In this particular case, we put:

U_{n+1/2}(ũ) = (1/(2πi)) ∮_γ ( λ^{n+(3/2)} / P(λ) ) dλ .   (21)

In a similar way, the definition of matrix powers can be extended to the rational case, by choosing the principal values of the considered roots, and writing Equation (18) in the form:

Ã^{p/q} = Σ_{k=1}^{r} ( Σ_{h=0}^{k−1} (−1)^h u_h U_{(p/q)+k−h−2}(ũ) ) Ã^{r−k} ,   (22)

where the second-kind pseudo-Chebyshev functions with rational indices are defined by Equation (20).
In particular, considering a matrix à according to Remark 6, we find, for the principal value of its square root:

Ã^{1/2} = Σ_{k=1}^{r} ( Σ_{h=0}^{k−1} (−1)^h u_h U_{(1/2)+k−h−2}(ũ) ) Ã^{r−k} .   (23)

Note that the functions with negative indices can be found by using the reflection properties given by Equation (5).

The Three-Dimensional Case
Equation (23) can be used in order to compute the square roots of r × r matrices. Since the computation by hand is quite cumbersome, in what follows we limit ourselves to the case r = 3. In a forthcoming investigation [37], we propose to show how to proceed for higher-order matrices.
Let A be a 3 × 3 non-singular complex matrix, with det A ≠ 0, and denote by λ_1, λ_2, λ_3 the roots of its characteristic polynomial, that is:

P(λ) = (λ − λ_1)(λ − λ_2)(λ − λ_3) .

Then, according to Remark 6, i.e., assuming det A = 1 and A ≡ Ã, by using the notation of Section 4, we have, for the principal value of its square root, the equation:

A^{1/2} = U_{−1/2} A² + ( U_{1/2} − u_1 U_{−1/2} ) A + ( U_{3/2} − u_1 U_{1/2} + u_2 U_{−1/2} ) I .   (24)

By using Equation (21), we find that

U_{(2m+1)/2} = (1/(2πi)) ∮_γ ( λ^{m+(3/2)} / P(λ) ) dλ ,

and, by using Cauchy's residue theorem, it follows that

U_{(2m+1)/2} = Σ_{j=1}^{3} λ_j^{m+(3/2)} / Π_{k≠j} (λ_j − λ_k) .   (25)

Remark 7. Note that the numerical root λ^{1/2} appearing in each of the three contour integrals of Equation (25) has two determinations, so that we can derive in total 2³ = 8 square roots of the matrix A. This remark can be extended to the general case: for the n-th root of an r × r matrix we can find n^r possible determinations.
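The square-root formula and the residue computation above can be combined into a short numerical sketch (the matrix A below is an assumed stand-in with det A = 1 and distinct eigenvalues; principal branches of the numerical roots are used throughout):

```python
import numpy as np

# assumed stand-in: companion matrix of lambda^3 - 3 lambda^2 + 2 lambda - 1
A = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, -2.0], [0.0, 1.0, 3.0]])
c = np.poly(A)                 # [1, -u1, u2, -u3], with u3 = det A = 1
u1, u2 = -c[1], c[2]
lam = np.roots(c)              # eigenvalues lambda_1, lambda_2, lambda_3

def U_half(m):
    """U_{m/2} by residues: sum_j lam_j^{m/2 + 1} / prod_{k != j} (lam_j - lam_k),
    principal branches of the numerical roots."""
    total = 0j
    for j in range(3):
        den = np.prod([lam[j] - lam[k] for k in range(3) if k != j])
        total += lam[j] ** (m / 2 + 1) / den
    return total

Um1, Up1, Up3 = U_half(-1), U_half(1), U_half(3)
# principal square root: coefficients of A^2, A, I as in the formula above
B = (Um1 * A @ A + (Up1 - u1 * Um1) * A
     + (Up3 - u1 * Up1 + u2 * Um1) * np.eye(3))
assert np.allclose(B @ B, A, atol=1e-8)
```

Changing the branch of each of the three numerical roots yields the other square roots, 2³ = 8 in total.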

A Worked Example
Consider a matrix A whose invariants are:

u_1 = 1 , u_2 = 1 , u_3 = 1 .

The characteristic equation is:

λ³ − λ² + λ − 1 = 0 ,

and the roots are:

λ_1 = 1 , λ_2 = i , λ_3 = −i .

According to Equation (24), by choosing the determinations of √2 in order to get a matrix with real entries, we find:

U_{−1/2} = (1 − √2)/2 ,

so that the coefficient of A² is (1 − √2)/2. Moreover, recalling that u_1 = u_2 = 1, by elementary computations, we find the other coefficients in Equation (24).
The coefficient of A is √2/2, and the coefficient of I is 1/2. Then Equation (24) writes:

A^{1/2} = ((1 − √2)/2) A² + (√2/2) A + (1/2) I ,   (28)

and it is easily seen that, by Equation (28), it follows that (A^{1/2})² = A.

Remark 8.
Note that in the preceding formulas we could change the determination of √ 2, so that we could find 8 possible values for the square root of A.
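Since the coefficients in Equation (28) depend only on the invariants, the result can be checked on any matrix with characteristic polynomial λ³ − λ² + λ − 1; a sketch using its companion matrix as an assumed stand-in:

```python
import numpy as np

# companion matrix of lambda^3 - lambda^2 + lambda - 1  (u_1 = u_2 = u_3 = 1)
A = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, -1.0], [0.0, 1.0, 1.0]])
s2 = np.sqrt(2.0)
# Equation (28): A^{1/2} = ((1 - sqrt 2)/2) A^2 + (sqrt 2 / 2) A + (1/2) I
B = (1 - s2) / 2 * A @ A + s2 / 2 * A + 0.5 * np.eye(3)
assert np.allclose(B @ B, A)
```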

A Few Other Examples
• Consider the matrix: The invariants are: and the roots are: A square root is given by:
• Consider the following matrix: The invariants are given below: u_1 = 7 , u_2 = 14 , u_3 = 8 , so that the characteristic polynomial factors as λ³ − 7λ² + 14λ − 8 = (λ − 1)(λ − 2)(λ − 4), and the roots are: λ_1 = 1 , λ_2 = 2 , λ_3 = 4 . A square root is given by:
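For the second example, the stated invariants u_1 = 7, u_2 = 14, u_3 = 8 determine the spectrum {1, 2, 4}, so a square root can be checked numerically via the Lagrange-Sylvester form recalled in Remark 3; a sketch with a companion-matrix stand-in (the example's actual entries are not reproduced here):

```python
import numpy as np

# stand-in with the stated invariants: companion matrix of
# lambda^3 - 7 lambda^2 + 14 lambda - 8 = (lambda - 1)(lambda - 2)(lambda - 4)
A = np.array([[0.0, 0.0, 8.0], [1.0, 0.0, -14.0], [0.0, 1.0, 7.0]])
lam = np.array([1.0, 2.0, 4.0])
# Lagrange-Sylvester: B = sum_j sqrt(lam_j) prod_{k != j} (A - lam_k I)/(lam_j - lam_k)
B = np.zeros((3, 3))
for j in range(3):
    term = np.sqrt(lam[j]) * np.eye(3)
    for k in range(3):
        if k != j:
            term = term @ (A - lam[k] * np.eye(3)) / (lam[j] - lam[k])
    B = B + term
assert np.allclose(B @ B, A)
```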

Conclusions
By using a classical result on the representation of matrix functions and the fundamental solution of a linear recurrence relation, it has been shown that the Dunford-Taylor integral makes it possible to define the multivariate second-kind pseudo-Chebyshev functions of rational degree. These functions can be used to compute matrix powers with rational exponents, according to the method presented here in the case of the square root of particular 3 × 3 matrices.