Almost Exact Computation of Eigenvalues in Approximate Differential Problems

Abstract: Differential eigenvalue problems arise in many fields of Mathematics and Physics, often appearing as auxiliary problems when solving partial differential equations. In this work, we present a method for eigenvalue computation following the Tau method philosophy and using Tau Toolbox tools. This Matlab toolbox was recently presented, and here we explore its potential and suitability for this problem. The first step is to translate the differential eigenvalue problem into an approximate algebraic eigenvalue problem. In a second step, making use of symbolic computations, we arrive at the exact polynomial expression of the determinant of the algebraic problem's matrix, allowing us to obtain high accuracy approximations of the differential eigenvalues.


Introduction
Finding eigenfunctions of differential problems can be a hard task, at least for some classical problems. Among others, there is literature on Sturm-Liouville problems, Mathieu problems, and Orr-Sommerfeld problems describing the difficulties involved in their resolution [1][2][3][4][5][6][7][8]. The first difficulty consists of finding accurate numerical approximations of the respective eigenvalues.
In this work, we present a procedure based on Ortiz and Samara's operational approach to the Tau method described in [9], where the differential problem is translated into an algebraic problem. This is achieved using the so-called operational matrices, which represent the action of differential operators on a function. We have deduced explicit formulae for the elements of these matrices [10,11], obtained by performing operations on bases of orthogonal polynomials; for some families, we have exact formulae, which enables the construction of very accurate operational matrices. The Tau method has already been used for these kinds of problems [5,9,12,13]; however, our work on matrix calculation formulae adds efficiency and precision to the method.
Our main purpose is to use the Tau Toolbox, a Matlab numerical library being developed by our research group [14][15][16]. This library allows a stable implementation of the Tau method for the construction of accurate approximate solutions of integro-differential problems. In particular, the construction of the operational matrices is done automatically. These features make the Tau Toolbox well suited to this kind of problem.
Finally, operating with symbolic variables, we express the determinant of those matrices as a polynomial in the eigenvalue parameter and use its roots as approximations of the eigenvalues.
We present some examples showing that, using this technique in the Tau Toolbox, we are able to obtain results comparable with, and sometimes better than, those reported in the literature.

The Tau Method
Let D : E → F be an order ν differential operator, where E and F are some function spaces, and let g_i : E → R, i = 1, …, ν, be ν functionals representing boundary conditions, so that

Dy = f, g_i(y) = s_i, i = 1, …, ν, (1)

is a well-posed differential problem.

The Tau Method Principle
A particular implementation of the Tau method depends on the choice of an orthogonal basis for F. A sequence of orthogonal polynomials {P_n(x)}_{n=0}^∞ with respect to the weight function w(x) on a given interval of orthogonality [a, b] satisfies

⟨P_i, P_j⟩ = ∫_a^b P_i(x) P_j(x) w(x) dx = w_i δ_ij,

where w_i = ⟨P_i, P_i⟩ and δ_ij is the Kronecker delta [17].
Let P be the space of algebraic polynomials of any degree and suppose that P is dense in F; then, the solution y of (1) has a series representation y ∼ ∑_{j≥1} a_j P_{j−1}. A polynomial approximation of degree n − 1 ∈ N is achieved by

y_n = ∑_{j=1}^n a_{n,j} P_{j−1} = P_n a_n, (2)

where P_n = [P_0, P_1, …, P_{n−1}] and a_n = [a_{n,1}, …, a_{n,n}]^T. In the Tau method sense, y_n is a polynomial satisfying the boundary conditions in (1) and solving the differential equation with a residual τ_n = Dy_n − f of maximal order. Thus, the differential problem is reduced to the algebraic one of finding the n coefficients a_{n,j}, j = 1, …, n, in (2) such that

g_i(y_n) = s_i, i = 1, …, ν, and ⟨Dy_n − f, P_j⟩ = 0, j = 0, …, n − ν − 1, (3)

and so the residual is τ_n = O(P_{n−ν}).
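As an aside, the spectral accuracy behind this truncation can be illustrated outside the Tau setting: the sketch below (plain numpy, not the Tau Toolbox; the test function exp and the degree are arbitrary choices) projects a smooth function onto the first n Legendre polynomials and checks that the residual is already tiny at moderate n.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Coefficients a_j = <f, P_{j-1}> / <P_{j-1}, P_{j-1}>, computed with
# Gauss-Legendre quadrature on [-1, 1].
def legendre_coeffs(f, n, quad_pts=64):
    x, w = L.leggauss(quad_pts)
    fx = f(x)
    a = np.empty(n)
    for j in range(n):
        Pj = L.legval(x, [0.0] * j + [1.0])   # the polynomial P_j
        a[j] = np.sum(w * fx * Pj) / np.sum(w * Pj * Pj)
    return a

a = legendre_coeffs(np.exp, 16)               # y_16 = P_16 a_16 for f = exp
xx = np.linspace(-1.0, 1.0, 201)
err = np.max(np.abs(L.legval(xx, a) - np.exp(xx)))
assert err < 1e-10                            # fast (spectral) decay of the residual
```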

Operational Formulation
For a given n ∈ N, n > ν, we define the matrix and the vector

T_n = [B_{ν×n}; D_{(n−ν)×n}], b_n = [s_1, …, s_ν, f_1, …, f_{n−ν}]^T, (4)

where B_{ν×n} gathers the boundary conditions, D_{(n−ν)×n} is the truncated operator block, and f_1, …, f_{n−ν} are the first coefficients of f in the basis P. If, in problem (1), the differential operator D is linear and the g_j are ν linear functionals, then problem (3) can be put in the matrix form T_n a_n = b_n.
The matrix T_n, called the Tau matrix, can be evaluated from operational matrices, that is, matrices translating into coefficient vectors the action of a differential operator D on a function y.

Proposition 1. Let y = Pa, with P an orthogonal polynomial basis, and let M and N be the matrices representing, in that basis, multiplication by x and differentiation, respectively. Then, for each k ∈ N_0,

x^k y = P M^k a and d^k/dx^k y = P N^k a. (5)
Proof. For k = 1, the result is true by hypothesis. Now, supposing that (5) holds for some k ∈ N, applying the k = 1 case once more extends it to k + 1, ending the proof by induction.
The following result generalizes the algebraic representation of the previous proposition to differential operators.

Corollary 1. Let D = ∑_{i=0}^ν p_i(x) d^i/dx^i : P_n → P_{n+h} be a linear differential operator with polynomial coefficients p_i(x) = ∑_{k=0}^{n_i} p_{ik} x^k and let h = max_{i=0,…,ν} {n_i − i}. If y_n = P_n a_n, then

Dy_n = P_{n+h} D_{(n+h)×n} a_n, with D = ∑_{i=0}^ν p_i(M) N^i, (6)

where A^k_{m×n} denotes the main m × n block of the matrix (A_p)^k, with p = max{m, n}.
In [9], the authors discussed the application of this operational formulation of the Tau method to the numerical approximation of eigenvalues defined by differential equations. They proved that, for a differential eigenvalue problem, where the operator D in (1) depends on a parameter λ, the zeros of det(T_n(λ)) approach the eigenvalues of (1).

Tau Matrices' Properties
Given that we are dealing with a general orthogonal polynomial basis, instead of particular cases like Chebyshev or Legendre, we can only make assumptions about general properties of the Tau matrices T_n. In any case, we cannot expect symmetric matrices and, in general, they can be considered sparse, but with a low level of sparsity.
Since P in Proposition 1 is an orthogonal basis, M is the tridiagonal matrix built from the coefficients of its three-term recurrence relation. Therefore, for problems with polynomial coefficients, the matrices p_i(M) of Corollary 1 are banded, with all non-zero elements within the ±n_i diagonals.
The matrices N are always strictly upper triangular, and so the p_i(M) N^i are (n_i − i)-upper Hessenberg matrices. The resulting D_{(n−ν)×n} block of T_n defined in (4) is an h-upper Hessenberg matrix.
Moreover, one advantage of the Tau method is its ability to deal with boundary conditions, allowing any linear combination of values of y and of its derivatives as the functionals g_i in (1). Thus, the ν × n block B_{ν×n} in T_n is usually dense, with entries g_i(P_j) made of linear combinations of orthogonal polynomial values P_j(x_k) and derivative values P_j^{(l)}(x_k). Assembling the blocks B_{ν×n} and D_{(n−ν)×n} in T_n, we get a (ν + h)-upper Hessenberg matrix.

For some problems, where the dependence on the eigenvalue λ occurs only in the differential equation, we can use Schur complements to reduce the matrix size. Consider the matrix T_n in (4) partitioned as

T_n = [B_1 B_2; D_1 D_2],

where B_1 is ν × ν and the other blocks are partitioned accordingly. If B_1 is non-singular, then det(T_n) = det(B_1) det(C_n), with

C_n = D_2 − D_1 B_1^{−1} B_2, (7)

and the problem is reduced to solving det(C_n(λ)) = 0, of dimension n − ν. In the worst case, when B_1 is singular, we have to work with the n × n matrix T_n.

In the following sections, we illustrate the application of the Tau method to approximate eigenvalues in some classical problems.
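The determinant identity behind this Schur-complement reduction can be checked on a mock example (random blocks standing in for the Tau matrix; a sketch only, not a specific problem):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu = 8, 2

# A mock Tau matrix: top nu rows play the role of boundary conditions,
# the remaining rows the operator block (values are random, illustrative).
T = rng.standard_normal((n, n))
B1, B2 = T[:nu, :nu], T[:nu, nu:]
D1, D2 = T[nu:, :nu], T[nu:, nu:]

# Schur complement of B1: reduces the eigenvalue search from n x n to
# (n - nu) x (n - nu), since det(T) = det(B1) * det(C).
C = D2 - D1 @ np.linalg.solve(B1, B2)
assert np.isclose(np.linalg.det(T), np.linalg.det(B1) * np.linalg.det(C))
```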

Problems with Polynomial Coefficients
Sturm-Liouville problems arise from vibration problems in continuum mechanics. The general form of a fourth-order Sturm-Liouville equation is

(p(x)y'')'' − (q(x)y')' + r(x)y = λµ(x)y, x ∈ [a, b], (8)

with appropriate initial and boundary conditions, where a < b ∈ R and p, q, r, and µ are given piecewise continuous functions, with p(x) > 0 and µ(x) > 0. These conditions ensure that (8) has an infinite sequence of real eigenvalues, bounded from below, each with multiplicity at most 2 [1].

If p and q are differentiable functions, it is an elementary task to give (8) the form

p y'''' + 2p' y''' + (p'' − q) y'' − q' y' + r y = λ µ y.

From this equation, we derive the operational matrix for the general form of the fourth-order Sturm-Liouville differential operator associated with (8),

D = p(M) N^4 + 2p'(M) N^3 + (p''(M) − q(M)) N^2 − q'(M) N + r(M) − λ µ(M). (9)

Assuming that the coefficients p, q, r, and µ are polynomials, or convenient polynomial approximations of the coefficient functions, the height of this differential operator is well defined as

h = max{deg(p) − 4, deg(q) − 2, deg(r), deg(µ)},

where deg(·) is the polynomial degree. One consequence of Corollary 1 is that, to evaluate the block D_{(n−ν)×n} in (4), we have to apply (9) with M and N truncated to their first n + h rows and columns. The Tau matrix of a fourth-order Sturm-Liouville problem is the n × n matrix T_n = [B_{4×n}; D_{(n−4)×n}], where B_{4×n} is the 4 × n block representing the boundary conditions and D_{(n−4)×n} is the first (n − 4) × n main block of D.
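As an illustration of the boundary block B_{4×n} (here for clamped conditions y(±1) = y'(±1) = 0, an arbitrary choice, assembled in the Legendre basis with numpy rather than the Tau Toolbox):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Boundary block B_{4 x n}: entry (i, j) is the i-th boundary functional
# applied to P_j. The clamped conditions below are illustrative only.
n = 10
B = np.zeros((4, n))
for j in range(n):
    e = np.zeros(n); e[j] = 1.0
    B[0, j] = L.legval(-1.0, e)            # y(-1)
    B[1, j] = L.legval(1.0, e)             # y(1)
    B[2, j] = L.legval(-1.0, L.legder(e))  # y'(-1)
    B[3, j] = L.legval(1.0, L.legder(e))   # y'(1)

# Sanity checks: P_j(1) = 1 and P_j(-1) = (-1)^j for Legendre polynomials.
assert np.allclose(B[1], 1.0)
assert np.allclose(B[0], [(-1) ** j for j in range(n)])
```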

Example 1. Consider the Sturm-Liouville boundary value problem y'''' = λy, whose exact eigenvalues satisfy a transcendental relation involving tanh, given in [1,2] (relation (11)). In that case, D = N^4 − λI, where I is the identity matrix, and the boundary conditions can be represented in the block B_{4×n}. For each n > 4, C_n in (7) is an (n − 4)-square matrix and its determinant is a polynomial of degree n − 4. We use the Matlab function roots to find its zeros, and we inspect their accuracy by testing whether they satisfy relation (11).
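The determinant-to-roots step can also be sketched without symbolic computation: since det(T_n(λ)) is a polynomial of known degree, sampling it and interpolating recovers its coefficients, which can then be passed to roots. The blocks below are random stand-ins, not the matrices of this example.

```python
import numpy as np

# det(T0 - lam*T1) has degree at most d = n - nu when lam is absent from
# the nu boundary rows, so d + 1 samples determine its coefficients.
rng = np.random.default_rng(1)
n, nu = 8, 4
T0 = rng.standard_normal((n, n))
T1 = np.diag([0.0] * nu + [1.0] * (n - nu))  # mock lam-block, BC rows lam-free

d = n - nu
sample = np.linspace(-2.0, 2.0, d + 1)
dets = [np.linalg.det(T0 - s * T1) for s in sample]
coeffs = np.polyfit(sample, dets, d)         # polynomial coefficients in lam
lam_approx = np.roots(coeffs)                # analogue of Matlab's roots()

# Each computed root should (numerically) annihilate the determinant.
res = [abs(np.linalg.det(T0 - lam * T1)) for lam in lam_approx]
assert max(res) < 1e-8
```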

Example 2.
A very similar problem is presented as the clamped rod problem in [12,18]. In that case, and whenever we have a symmetric problem and a symmetric basis, the matrix C_n in (7) has zeros intercalating all non-zero elements. We can reduce the problem dimension by defining two matrices C_O and C_E with, respectively, the odd and the even entries of C_n; then det(C_n) = det(C_O) det(C_E). In this case, since C_n is a 4-upper Hessenberg matrix, C_O and C_E are 2-upper Hessenberg matrices. The sparsity patterns of these two matrices, in the Legendre basis and with n = 52, are shown in Figure 2. The first 14 eigenvalues, evaluated with a 16 × 16 matrix, are presented in [12]. In Table 1, we compare those values with our results in the Legendre basis with n = 16 and n = 52. We present values of λ_{52,k}/k^4, which allows us to verify that our estimates satisfy the property that the kth eigenvalue is proportional to k^4 [18].
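The splitting det(C_n) = det(C_O) det(C_E) is a general fact for matrices whose non-zeros sit on a checkerboard pattern; a quick numerical check (random entries, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

# A matrix whose nonzeros sit only where i + j is even: the "intercalated
# zeros" pattern of symmetric problems in a symmetric basis.
C = rng.standard_normal((n, n))
C[(np.add.outer(np.arange(n), np.arange(n)) % 2) == 1] = 0.0

# Permuting even indices first and odd indices second (same permutation on
# rows and columns) makes C block diagonal, so det(C) = det(C_E) * det(C_O).
CE = C[np.ix_(range(0, n, 2), range(0, n, 2))]
CO = C[np.ix_(range(1, n, 2), range(1, n, 2))]
assert np.isclose(np.linalg.det(C), np.linalg.det(CE) * np.linalg.det(CO))
```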
The operational matrix (9) for this case is analogous, with the same v_0 and v_1 vectors of the previous example. If λ_{n,k} is the kth root of det(T_n), and considering δ_{n,k} = (λ_{n,k} − λ_{n−1,k})/λ_{n,k} as an estimate of the relative error in λ_{n−1,k}, then δ_n = max_{k=1,…,m} |δ_{n,k}| is an estimate of the maximum relative error in the first m eigenvalues of the problem. In Figure 3 (left), we present δ_n, with m = 6 and with m = 8, for Example 3 with α = 0.02 and β = 0.0001, for n = 16, …, 45. In Figure 3 (right), the absolute relative error |δ_{n,1}| in the lowest eigenvalue is presented for the same values of n. In Table 2, we compare our results with those of [2] for the first six eigenvalues, and with those of [8] for the first four, obtained with α = 0.02 and β = 0.0001.
This example concerns the Orr-Sommerfeld equation, with fixed constants α, R and a function U.
The particular case U = 1 − x^2 is the Poiseuille flow and, with α = 1 and R = 10000, was treated in [3][4][5][12]. As in Example 2, the operational matrix in this case is an upper Hessenberg matrix with zeros intercalating its non-zero elements, and we can reduce the problem dimension by splitting the Schur complement C_n of the resulting Tau matrix T_n into two matrices. Choosing the Chebyshev basis, this is the operational version of the Tau procedure of [5], where the author was confined to eigenvalues associated with symmetric eigenfunctions, which is equivalent to finding the eigenvalues of C_E.
In [5], the author obtained λ_1 = 0.23752649 + 0.00373967i as a value exact to 8 decimal places for the most unstable mode of this problem. Working with double-precision arithmetic, we obtain λ_1 = 0.2375264889204038 + 0.003739670740170985i. This value results with n = 58, that is, a 29 × 29 matrix C_E, the same dimension used in [5].
In addition, with R c = 5772.22, the smallest value of R for which an unstable eigenmode exists [5], and α c = 1.02056, we get the results presented in Table 3, together with those of [5].

Non-Polynomial Coefficients
In the previous section, we solved problems under the conditions of Corollary 1, i.e., with differential operators acting on polynomial spaces. In a more general situation, if some of the coefficients p_i in (6) are non-polynomial functions, then the corresponding matrices p_i(M) are functions of M instead of polynomial expressions.
If a non-polynomial function p_i in (6) can be defined implicitly by a differential problem with polynomial coefficients, then we can first use the Tau method to find a polynomial approximation p̃_i to p_i and then use p̃_i(M) to approximate the matrix p_i(M).

Example 5.
Mathieu's equation appears related to wave equations in elliptic cylinders [19]. For an arbitrary parameter q, the problem is to find the values of λ for which non-trivial solutions of

y''(x) + (λ − 2q cos(2x)) y(x) = 0 (15)

exist with prescribed boundary conditions.
It can be shown that there exists a countably infinite set of eigenvalues a_r associated with even periodic eigenfunctions and a countably infinite set of eigenvalues b_r associated with odd periodic eigenfunctions [19]. We are interested in reproducing some of the values given therein.
The operational matrix for problem (15) is

D = N^2 + λI − 2q cos(2M).

Our first step to approximate Mathieu's eigenvalues is to approximate the matrix cos(2M). This can be done by, firstly, considering the function z(x) = cos(2x) as the solution of a differential problem and using the Tau method to get a polynomial approximation z_n(x) ≈ z(x). In a second step, the operational matrix D is approximated by D̃ and, finally, the last step consists in building the Tau matrix T_n and evaluating the zeros of its determinant. We take integer values q = 0, 1, …, 16 and boundary conditions y'(−1) = y'(π/2) = 0 to get a_r(q) for even r, y'(−1) = y(π/2) = 0 for odd r, and y(−1) = y(π/2) = 0 to get b_r(q) for odd r and y(−1) = y'(π/2) = 0 for even r.
We can observe, as pointed out in [19], that, for a fixed q > 0, we have a_0 < b_1 < a_1 < b_2 < · · · and that a_r(q) and b_r(q) approach r^2 as q approaches zero.

Example 6.
Mathieu's equation also appears coupled with a modified Mathieu's equation in systems of differential equations, as multi-parameter eigenvalue problems. The particular case studied in [6,7] is associated with the eigenfrequencies of an elliptic membrane with semi-axes α = cosh(2) and β = sinh(2).
To approximate eigenvalues for this problem, we first have to approximate cos(2x) and cosh(2x) by polynomials. Considering, as in the previous example, z_16 ≈ cos(2x) and w_16 ≈ cosh(2x) as same-degree Tau solutions, we obtain matrices approximating the operational matrices associated with the differential equations in (16). For each fixed q, we define Tau matrices T_1 and T_2, representing the Mathieu and modified Mathieu equations, respectively. Defining a_n(q) as the nth eigenvalue of T_1, in ascending order, and A_m(q) as the mth eigenvalue of T_2, in descending order, it was proved in [6] that a_n(q) and A_m(q) are analytic functions of q. Moreover, for each pair (m, n), the existence and uniqueness of an intersection point of the curves a_n(q) and A_m(q) was proved. Those intersections identify the eigenmodes of the elliptic membrane.
In Figure 5, we recover, and extend, figures presented in [6] and in [7]. Intersection points of a n (q), the almost vertical curves, and A m (q), the oblique curves, are the eigenpairs (a, q) of (16).

Nonlinear Eigenvalues Problem
In some differential problems, the eigenvalues can arise in a nonlinear relation with the eigenfunctions. Let us consider a second order problem related to Weber's equation.
The operational matrix corresponding to the differential equation is quadratic in the parameter λ, and so det(T_n) is a polynomial of degree 2(n − 2) in λ.
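With λ entering quadratically, the Tau matrix has the form T_n(λ) = T_0 + λT_1 + λ^2 T_2 for suitable blocks, and all 2(n − 2) roots of its determinant can again be obtained from the polynomial coefficients; a sketch with mock blocks (random values, nu = 2 boundary rows, plain numpy):

```python
import numpy as np

# det(T0 + lam*T1 + lam**2*T2) has degree up to 2*(n - nu) when the
# lam-dependent blocks vanish on the nu boundary rows; as before, the
# polynomial is recovered by sampling and interpolation.
rng = np.random.default_rng(4)
n, nu = 6, 2
mask = np.array([0.0] * nu + [1.0] * (n - nu))[:, None]
T0 = rng.standard_normal((n, n))
T1 = rng.standard_normal((n, n)) * mask   # lam-part, operator rows only
T2 = rng.standard_normal((n, n)) * mask   # lam**2-part, operator rows only

d = 2 * (n - nu)
sample = np.linspace(-2.0, 2.0, d + 1)
dets = [np.linalg.det(T0 + s * T1 + s**2 * T2) for s in sample]
coeffs = np.polyfit(sample, dets, d)
lam_approx = np.roots(coeffs)             # 2*(n - nu) approximations
assert len(lam_approx) == d
```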

Conclusions
Since the pioneering works of Orszag [5] and Ortiz and Samara [9], the Tau method has been scarcely used to solve differential eigenvalue problems. With our work, we conclude that the Tau method is competitive when we want to evaluate, with high accuracy, the first eigenvalues of a wide class of differential problems.