Inversion of Tridiagonal Matrices Using the Dunford-Taylor’s Integral

We show that using Dunford-Taylor’s integral, a classical tool of functional analysis, it is possible to derive an expression for the inverse of a general non-singular complex-valued tridiagonal matrix. The special cases of Jacobi’s symmetric and Toeplitz (in particular symmetric Toeplitz) matrices are included. The proposed method does not require the knowledge of the matrix eigenvalues and relies only on the relevant invariants which are determined, in a computationally effective way, by means of a dedicated recursive procedure. The considered technique has been validated through several test cases with the aid of the computer algebra program Mathematica©.


Introduction
The inversion of tridiagonal matrices, and in particular of Jacobi's matrices, has attracted the attention of several researchers in both past and recent times [1][2][3][4][5][6][7][8][9][10][11][12][13], since these matrices appear in numerous problems of both theoretical and applied mathematics. It is well known that Jacobi's matrix is symmetric and plays a fundamental role in the theory of orthogonal polynomials. The tridiagonal Toeplitz (and the general tridiagonal) matrices appear as a generalization of the symmetric case, so that the recursion formulas for the relevant invariants are derived by means of a slight modification of the technique used in the symmetric case. Therefore, we can assert that the symmetric case is the basis for the results of the inversion problem considered in this article.
Tridiagonal matrices occur in interpolation problems [14] as well as in the solution of boundary value problems for partial differential equations using finite difference methods [15]. The inversion of the general form as well as of the symmetric case, and some special types, can be found in [1,2,6,12], whereas a review of relevant results (up to 1992) is given in [3].
Computationally efficient techniques for the inversion of tridiagonal matrices have been derived in [7,9]. Several works have been devoted to the explicit computation of the inverse [1,8,13] by using a variety of methods, often based on recurrence relations.
For our purposes, the procedure proposed by R.A. Usmani in [4] represents a convenient starting point. Usmani uses the recursion satisfied by the determinants of the principal minors of a tridiagonal matrix to derive an algorithm for computing the inverse. However, Usmani's approach is more involved than the one, based on the Dunford-Taylor integral representation, which we present in this research study.
We have used the matrices considered in many of the scientific papers referenced above to validate the correctness of our methodology.
To the authors' knowledge, functional analysis techniques have never been applied to this particular framework until now. Actually, the aforementioned mathematical instruments can be broadly adopted in various problems of matrix theory, as recently demonstrated in different studies published by the same authors [16,17].
The use of complex variable methods often facilitates the solution of problems in applied mathematics and physics. In this respect, Jacques Hadamard used to state that "the shortest path between two truths in the real domain passes through the complex domain" [18].

Preamble
The Dunford-Taylor (shortly D-T) integral [19] is a basic tool of functional analysis and represents an analogue of Cauchy's integral formula in function theory. It traces back to F. Riesz [20] and L. Fantappiè [21].
In the finite dimensional case, operators are represented by matrices. Let A be a matrix with characteristic polynomial P(λ) = ∑_{k=0}^{n} u_k λ^{n−k} (where u_0 := 1), and let f be a holomorphic function in a domain ∆ ⊂ C which contains all the eigenvalues λ_h of A. The coefficients u_k of the characteristic polynomial P(λ) are called the matrix invariants, since they are invariant under similarity transformations.
The D-T integral reads

f(A) = (1/(2πi)) ∮_γ f(λ) (λI − A)^{−1} dλ,

where (λI − A)^{−1} denotes the resolvent of A, and γ ⊂ ∆ is a simple closed smooth curve, positively oriented, enclosing all the λ_h in its interior.
In [22] (pp. 93-95), the following representation of the resolvent (λI − A)^{−1} in terms of the invariants of A is proved:

(λI − A)^{−1} = (1/P(λ)) ∑_{k=1}^{n} ( ∑_{h=0}^{k−1} u_h λ^{k−1−h} ) A^{n−k}.

Using the equations above, f(A) can be rewritten as

f(A) = (1/(2πi)) ∑_{k=1}^{n} [ ∮_γ ( ∑_{h=0}^{k−1} u_h λ^{k−1−h} ) f(λ)/P(λ) dλ ] A^{n−k}.

Note that, if ∆ does not contain the origin, a simple choice of γ is a circle centered at the origin with radius larger than the spectral radius ρ_A of A. An upper bound for ρ_A is easily obtained from the Gershgorin circle theorem using only the entries of A.
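As a quick sanity check, the resolvent representation above can be verified numerically. The matrix A below is an arbitrary illustrative example of ours (not taken from the paper); the invariants u_k are obtained from NumPy's np.poly, which returns the characteristic coefficients with u_0 = 1 first:

```python
import numpy as np

# Arbitrary 2x2 test matrix (illustrative only).
A = np.array([[4.0, 1.0], [2.0, 3.0]])
n = A.shape[0]
u = np.poly(A)                       # invariants u_0 = 1, u_1, ..., u_n
lam = 7.0                            # any point outside the spectrum of A

# Left-hand side: the resolvent computed directly.
lhs = np.linalg.inv(lam * np.eye(n) - A)

# Right-hand side: the invariant-based representation
# (1/P(lam)) * sum_k ( sum_{h<k} u_h lam^{k-1-h} ) A^{n-k}.
rhs = sum(np.polyval(u[:k], lam) * np.linalg.matrix_power(A, n - k)
          for k in range(1, n + 1)) / np.polyval(u, lam)

assert np.allclose(lhs, rhs)
```

Here np.polyval(u[:k], lam) evaluates exactly the inner sum ∑_{h=0}^{k−1} u_h λ^{k−1−h}, since NumPy stores coefficients with the highest power first.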
The representation above applies, in particular, to the matrix exponential exp(A), showing that the usual definition as a power series in A is not needed: as is well known [23], every analytic function of A is in fact a matrix polynomial, that is, f(A) ≡ P(A), where P is the polynomial interpolating f on the eigenvalues of A.
Consider now the function f(λ) = λ^{−1}. Since this function is holomorphic in the open set C − {0}, namely in the whole complex plane excluding the origin, the preceding result becomes: Theorem 1. If A is a non-singular complex matrix and γ = γ_1 ∪ γ_2 is a simple contour enclosing all the zeros of P(λ) (where γ_1, oriented counter-clockwise, encloses all the eigenvalues of A, and γ_2, oriented clockwise, encloses the origin but no eigenvalues of A), then the inverse of A can be represented as

A^{−1} = (1/(2πi)) ∑_{k=1}^{n} [ ∮_γ λ^{−1} ( ∑_{h=0}^{k−1} u_h λ^{k−1−h} ) / P(λ) dλ ] A^{n−k}.

Remark 1. Note that the knowledge of the eigenvalues of A is not strictly necessary when computing the integrals above. As a matter of fact, unless we make use of Cauchy's residue theorem, the knowledge of the matrix invariants suffices, since we can compute said contour integrals directly by choosing two circles, both centered at the origin, with the radius of γ_1 larger than the spectral radius of A and the radius of γ_2 smaller than the minimum modulus of the eigenvalues of A. This approach can be more convenient since it avoids the explicit, and in general cumbersome, computation of the matrix eigenvalues.
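Remark 1 can be turned into a concrete numerical scheme. The sketch below is our own illustration (not the paper's implementation): it discretizes the two circular contours with the trapezoid rule, which is spectrally accurate for analytic integrands. For simplicity the eigenvalues are computed here only to size the two radii; Gershgorin-type bounds could be used instead, as the text suggests:

```python
import numpy as np

def invert_via_dunford_taylor(A, nodes=512):
    """Inverse of A via the invariant-based contour-integral representation."""
    n = A.shape[0]
    u = np.poly(A)                     # invariants u_0 = 1, u_1, ..., u_n
    eig = np.linalg.eigvals(A)         # used only to choose the two radii
    r1 = 1.5 * np.abs(eig).max()       # gamma_1: larger than the spectral radius
    r2 = 0.5 * np.abs(eig).min()       # gamma_2: smaller than min |eigenvalue|
    t = np.linspace(0.0, 2.0 * np.pi, nodes, endpoint=False)

    def circle(g, r):
        # (1/(2*pi*i)) * integral of g over |lambda| = r, counter-clockwise,
        # via the trapezoid rule.
        lam = r * np.exp(1j * t)
        dlam = 1j * lam * (2.0 * np.pi / nodes)
        return np.sum(g(lam) * dlam) / (2.0j * np.pi)

    Ainv = np.zeros((n, n), dtype=complex)
    for k in range(1, n + 1):
        # Integrand: (sum_{h<k} u_h lam^{k-1-h}) / (lam * P(lam)).
        g = lambda lam: np.polyval(u[:k], lam) / (lam * np.polyval(u, lam))
        c_k = circle(g, r1) - circle(g, r2)   # minus sign: gamma_2 runs clockwise
        Ainv += c_k * np.linalg.matrix_power(A.astype(complex), n - k)
    return Ainv
```

With a few hundred quadrature nodes the result matches the exact inverse to machine precision for well-separated spectra.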
Using the D-T's integral, we have shown, in preceding articles, how to address different problems, such as the computation of the roots [16] and the inverse [17] of general non-singular complex-valued matrices. In particular, in [17], the proposed procedure has been applied to the solution of basic analytical problems involving linear algebraic equations, as well as initial value problems for linear systems of ordinary differential equations.
Of course, Theorem 1 also applies to the particular case of a tridiagonal matrix T_n of general order n. The only question is how to evaluate the invariants of such a matrix in terms of its entries. This calculation can be performed efficiently using a three-term recurrence relation which generalizes the one proved in [24] for the Jacobi matrices appearing in the theory of orthogonal polynomials.
By using said recursion, it is possible to use the D-T's integral in the derivation of the inverse of a general non-singular tridiagonal matrix and, in particular, of Jacobi's and Toeplitz matrices. The test cases illustrated in the last sections of this research study prove the effectiveness of the considered methodology by recovering, in a simple and uniform way, the results already reported in the articles by Usmani [4], Salkuyeh [25] and El-Mikkawy, Karawia [7].

The Invariants of a Tridiagonal Matrix
Consider the n × n tridiagonal complex matrix T_n (with n = 1, 2, . . . ) with diagonal entries a_1, . . . , a_n, superdiagonal entries b_1, . . . , b_{n−1} and subdiagonal entries c_1, . . . , c_{n−1}.  (1)

In what follows, we assume that the matrix T_n is non-singular, that is, u_n ≠ 0, ∀n. Writing the characteristic polynomial as

P_n(x) = ∑_{k=0}^{n} u^{(k,n)} x^{n−k},  (2)

it satisfies the three-term recurrence relation

P_n(x) = (x + a_n) P_{n−1}(x) − b_{n−1} c_{n−1} P_{n−2}(x),  (3)

as is easily seen by using the Laplace expansion of the determinant with respect to the last column of the general matrix.

Theorem 2.
For the term u^{(k,n)} appearing in (2), the following recursion holds true:

u^{(k,n)} = u^{(k,n−1)} + a_n u^{(k−1,n−1)} − b_{n−1} c_{n−1} u^{(k−2,n−2)},  (4)

with k = 1, 2, . . . , n; n = 1, 2, . . . , and where the initial values are given by u^{(0,n)} = 1 and u^{(k,n)} = 0 whenever k < 0 or k > n. Proof. The proof follows by substituting the expression (2) into the three-term recurrence satisfied by P_n and, afterwards, equating the coefficients corresponding to the same powers of x.
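The triangular recursion of Theorem 2 is straightforward to implement. The sketch below is our own illustration, assuming the sign convention of the printed Toeplitz recursion in Theorem 5 (u^{(k,n)} = u^{(k,n−1)} + a_n u^{(k−1,n−1)} − b_{n−1} c_{n−1} u^{(k−2,n−2)}, with u^{(0,n)} = 1 and u^{(k,n)} = 0 outside 0 ≤ k ≤ n):

```python
def invariants(a, b, c):
    """Invariants u^(k,n) of the tridiagonal matrix with diagonal a[0..n-1],
    superdiagonal b[0..n-2] and subdiagonal c[0..n-2], built row by row."""
    n = len(a)
    rows = [[1.0]]                     # row 0: empty matrix, P_0 = 1
    for m in range(1, n + 1):
        prev = rows[m - 1]             # u^(., m-1), length m
        prev2 = rows[m - 2] if m >= 2 else []
        row = []
        for k in range(m + 1):
            val = prev[k] if k < m else 0.0        # u^(k, m-1) term
            if k >= 1:
                val += a[m - 1] * prev[k - 1]      # a_m * u^(k-1, m-1) term
            if m >= 2 and k >= 2:
                val -= b[m - 2] * c[m - 2] * prev2[k - 2]  # b c * u^(k-2, m-2)
            row.append(val)
        rows.append(row)
    return rows[n]                     # coefficients u^(0,n), ..., u^(n,n)
```

For instance, with a = (2, 3, 4) and unit off-diagonals the recursion produces the coefficient list [1, 9, 24, 18], i.e. P_3(x) = x³ + 9x² + 24x + 18, and each row costs O(n) operations, so the whole table is obtained in O(n²) time.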

Matrix Inversion Using Dunford-Taylor's Integral
In a recent article [17], using the Dunford-Taylor integral [19] (actually due to F. Riesz [20] and L. Fantappiè [21]), we proved the following theorem: Theorem 3. If A is a non-singular complex matrix of order n and γ = γ_1 ∪ γ_2 is a simple contour enclosing all the zeros of P_n(λ) (where γ_1, oriented counter-clockwise, encircles all the eigenvalues of A, and γ_2, oriented clockwise, encloses the origin but no eigenvalues of A), then the inverse of A is represented by

A^{−1} = (1/(2πi)) ∑_{k=1}^{n} [ ∮_γ λ^{−1} ( ∑_{h=0}^{k−1} u_h λ^{k−1−h} ) / P_n(λ) dλ ] A^{n−k}.  (5)

Upon choosing the curves γ_1 and γ_2 with the aid of the results reported in [27], and denoting the integrand in (5) relevant to the power A^{n−k} by Φ_k = Φ_k(λ) (k = 1, 2, . . . , n), each contour integral can be evaluated by means of Cauchy's residue theorem [28] as

(1/(2πi)) ∮_γ Φ_k(λ) dλ = ∑_{h=1}^{n} Res_{λ=λ_h} Φ_k(λ).  (6)

As we recalled in [17,29], the evaluation of A^{−1} does not necessarily require the use of (6). In fact, by applying the Gershgorin circle theorem, the curves γ_1 and γ_2 can be properly chosen so as to allow a simple direct computation.
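When the eigenvalues happen to be available and simple, the residue evaluation above collapses to a closed-form sum: for Φ_k(λ) = λ^{−1}(∑_{h<k} u_h λ^{k−1−h})/P(λ), the residue at a simple eigenvalue λ_h equals the numerator evaluated at λ_h divided by λ_h P′(λ_h). The snippet below is our own illustration on an arbitrary matrix, not an example from the paper:

```python
import numpy as np

# Arbitrary symmetric tridiagonal test matrix (simple, non-zero eigenvalues).
A = np.array([[2.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]])
n = A.shape[0]
u = np.poly(A)              # invariants of the characteristic polynomial
lams = np.roots(u)          # eigenvalues, assumed simple and non-zero
dP = np.polyder(u)          # coefficients of P'(lambda)

# A^{-1} = sum_k [ sum_h Res_{lambda_h} Phi_k ] A^{n-k},
# with Res_{lambda_h} Phi_k = N_k(lambda_h) / (lambda_h * P'(lambda_h)),
# where N_k(lambda) = sum_{h<k} u_h lambda^{k-1-h} = polyval(u[:k], lambda).
Ainv = sum(
    sum(np.polyval(u[:k], l) / (l * np.polyval(dP, l)) for l in lams)
    * np.linalg.matrix_power(A, n - k)
    for k in range(1, n + 1)
)
assert np.allclose(Ainv, np.linalg.inv(A))
```

This residue form is convenient for small test problems; the direct quadrature over two circles remains preferable when, as the text stresses, one wishes to avoid computing the eigenvalues altogether.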

Inversion of a Tridiagonal Matrix
Assuming the matrix invariants to be known, on the basis of the recursion (4), we can state the following result: Theorem 4. The inverse of the non-singular tridiagonal matrix (1) can be computed using the Dunford-Taylor integral (5), where γ_1 is a counter-clockwise oriented circle which encloses all the eigenvalues of T_n, and γ_2 is a clockwise oriented circle which encloses the origin but no eigenvalues of T_n.

The Case of a Tridiagonal Toeplitz Matrix
For tridiagonal Toeplitz matrices, whose entries a, b, c are constant with respect to the relevant indexes, we have the following result: Theorem 5. Under the assumptions above for the circles γ_1 and γ_2, the inverse of a non-singular tridiagonal Toeplitz matrix T_n with entries a, b, c is given by the Dunford-Taylor integral (5), with the invariants computed using the following recursion:

u^{(k,n)} = u^{(k,n−1)} + a u^{(k−1,n−1)} − b c u^{(k−2,n−2)},

with k = 1, 2, . . . , n; n = 1, 2, . . . , and where we assume the same initial values as in Theorem 2.

Equation (5) requires the computation of the powers of the matrix T n .
This is trivial for small values of n whereas, for high matrix orders, the relevant computation can be performed using different algorithms available in the scientific literature such as the ones detailed in [30,31], as well as in [25] for the special case of tridiagonal Toeplitz matrices.
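In fact, the powers of T_n need not be formed and stored explicitly: the matrix polynomial ∑_{k=1}^{n} ξ_k T_n^{n−k} arising from (5) can be accumulated with n − 1 matrix multiplications by Horner's scheme. A minimal sketch of ours, with made-up coefficients:

```python
import numpy as np

def matrix_polynomial(coeffs, A):
    # Horner's scheme: returns coeffs[0]*A^(m-1) + coeffs[1]*A^(m-2) + ... + coeffs[-1]*I
    # using len(coeffs) - 1 matrix multiplications and no stored powers of A.
    R = np.zeros_like(A)
    I = np.eye(A.shape[0])
    for c in coeffs:
        R = R @ A + c * I
    return R

# Example: A^2 + 2A + 3I for a small upper-triangular matrix.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
P = matrix_polynomial([1.0, 2.0, 3.0], A)
```

Here A² = [[1, 2], [0, 1]], so the result is [[6, 4], [0, 6]]; the same routine evaluates the inverse once the coefficients ξ_k have been obtained from the contour integrals.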

The Case of a Jacobi's Matrix
In the particular case of Jacobi's matrices, which play an important role in the theory of orthogonal polynomials [24], we have: Theorem 6. Under the assumptions above for the circles γ_1 and γ_2, the inverse of the non-singular Jacobi's matrix (8), namely the symmetric case of (1) with c_k = b_k, can be computed using the Dunford-Taylor integral (5), with the relevant invariants given by the following three-term recursion:

u^{(k,n)} = u^{(k,n−1)} + a_n u^{(k−1,n−1)} − b_{n−1}² u^{(k−2,n−2)},

with k = 1, 2, . . . , n; n = 1, 2, . . . .

Numerical Examples
In what follows, we show some examples of the computational procedure described in the previous sections. To this end, the computer algebra program Mathematica© has been used, enforcing 16-digit accuracy.
The examples considered here are taken from previous articles by Usmani [4], Salkuyeh [25] and El-Mikkawy and Karawia [7]. These tests show that the developed procedure is effective and computationally efficient. In all cases, the inverse matrix T_n^{−1}, as computed using the proposed approach, satisfies the defining property of the inverse, that is, T_n T_n^{−1} = I_n (the identity matrix of order n), within machine precision. However, a comparison of computational complexity with the algorithms presented by the aforementioned authors is not possible, since Mathematica's built-in routines are proprietary and their internal operation cannot be inspected.

Inversion of a Tridiagonal Matrix of Third Order
Let us consider the tridiagonal matrix of order n = 3 analyzed in [4]. The relevant invariants can easily be computed through the iterative procedure detailed in Section 2; it is then not difficult to verify that the corresponding eigenvalues are λ_1 ≈ 11.0000, λ_2 ≈ 2.41421, λ_3 ≈ −0.414214.
By using the Dunford-Taylor integral formula (5) in combination with the Gauss-Kronrod integration rule, the inverse of A is obtained in the form A^{−1} = ξ_1 A² + ξ_2 A + ξ_3 I, with ξ_1 ≈ −0.090909, ξ_2 ≈ 1.18182, ξ_3 ≈ −1.90909.

Inversion of Tridiagonal Toeplitz Matrix of Fourth Order
Let us consider the tridiagonal Toeplitz matrix of order n = 4 analyzed in [25]. The relevant invariants can easily be computed through the iterative procedure detailed in Section 2; it is then not difficult to verify that the corresponding eigenvalues are λ_1 ≈ 8.23607, λ_2 ≈ 6.23607, λ_3 ≈ 3.76393, λ_4 ≈ 1.76393. By using the Dunford-Taylor integral formula (5) in combination with the Gauss-Kronrod integration rule, the inverse of A is obtained as a matrix polynomial of degree n − 1 with the corresponding coefficients ξ_k.

For the next test matrix of order n = 4, the relevant invariants can easily be computed through the iterative procedure detailed in Section 2; the corresponding eigenvalues are λ_1 ≈ 3.52892, λ_2 ≈ 2.00000, λ_3 ≈ −1.36147, λ_4 ≈ 0.832551. By using the Dunford-Taylor integral formula (5) in combination with the Gauss-Kronrod integration rule, the inverse of A is again obtained as a matrix polynomial with the corresponding coefficients ξ_k.

For a further test matrix of order n = 5, the relevant invariants can easily be computed through the iterative procedure detailed in Section 2; the corresponding eigenvalues are λ_1 ≈ 7.77275, λ_2 ≈ 5.50180, λ_3 ≈ 3.65809, λ_4 ≈ −3.09534, λ_5 ≈ 1.16270. By using the Dunford-Taylor integral formula (5) in combination with the Gauss-Kronrod integration rule, the inverse of A is obtained as a matrix polynomial with the corresponding coefficients ξ_k.

Inversion of Tridiagonal Toeplitz Matrix of Fifth Order
Let us consider the tridiagonal Toeplitz matrix of order n = 5 analyzed in [7]. The relevant invariants can easily be computed through the iterative procedure detailed in Section 2; it is then not difficult to verify that the corresponding eigenvalues are λ_1 ≈ 3.73205, λ_2 ≈ 3.00000, λ_3 ≈ 2.00000, λ_4 ≈ 1.00000, λ_5 ≈ 0.267949.
By using the Dunford-Taylor integral formula (5) in combination with the Gauss-Kronrod integration rule, the inverse of A is obtained as a matrix polynomial with the corresponding coefficients ξ_k.

Conclusions
We have presented a method for the inversion of non-singular tridiagonal matrices that makes use of a classical functional analysis tool, namely the Dunford-Taylor's integral, which extends Cauchy's integral formula to the case of operators.
Since, in the finite dimensional case, operators are represented by matrices, the general formula for the inversion of an operator [19] can be used for matrices as well [16,17]. In this article, we have applied this result to general non-singular tridiagonal matrices and, as particular cases, to Jacobi's and Toeplitz matrices.
Since the application of Dunford-Taylor's integral requires the knowledge of the matrix invariants, a preceding result relevant to tridiagonal Jacobi's matrices (see [24]) has been extended and, in this way, a recursion formula for the evaluation of the invariants of a general tridiagonal matrix has been obtained. This makes it possible to apply the formulas for the inversion of a general non-singular matrix in [17] to the case of general tridiagonal matrices and, therefore, to Jacobi's and Toeplitz matrices in particular.
The proposed technique has been validated through several test cases using the computer algebra program Mathematica©.