A Fast Algorithm for the Eigenvalue Bounds of a Class of Symmetric Tridiagonal Interval Matrices

Abstract: The eigenvalue bounds of interval matrices are often required in mechanical and engineering fields. In this paper, we improve the theoretical results presented in a previous paper, "A property of eigenvalue bounds for a class of symmetric tridiagonal interval matrices", and provide a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices.


Introduction
In many practical engineering problems, quantities are uncertain but bounded due to inaccurate measurements, errors in manufacturing, changes in the natural environment, etc., and therefore have to be expressed as intervals. Consequently, interval analysis has received much interest not only from engineers but also from mathematicians [1][2][3][4][5]. Rohn in [6] surveyed some of the most important results on interval matrices and other interval linear problems up to that time.
In particular, since it is often necessary to compute the eigenvalue bounds for interval eigenvalue problems in structural analysis, control fields and some related issues in engineering and mechanics, many studies have been conducted on these problems in the past few decades (e.g., [7][8][9][10][11][12][13][14][15]).
The eigenvalue problems with interval matrices dealt with in this paper can be formulated as follows:

$$Ku = \lambda u \qquad (1)$$

subject to

$$\underline{K} \le K \le \overline{K}, \quad \text{or} \quad \underline{k}_{ij} \le k_{ij} \le \overline{k}_{ij}, \quad i, j = 1, 2, \ldots, n, \qquad (2)$$

where $K = (k_{ij})$, $\underline{K} = (\underline{k}_{ij})$ and $\overline{K} = (\overline{k}_{ij})$ are all $n \times n$ real symmetric matrices. $\underline{K}$ and $\overline{K}$ are known matrices composed of the lower and upper bounds of the intervals, respectively, while $K$ is an uncertain-but-bounded matrix that ranges over the inequalities in Equation (2). $\lambda$ is an eigenvalue of the problem in Equation (1) with the unknown matrix $K$, and $u$ is the eigenvector corresponding to $\lambda$. All interval quantities are assumed to vary independently within their bounds. To facilitate the expression of the interval matrices, interval matrix notation [2] is used in this paper: the inequalities in Equation (2) can be written as $K \in K^I$, in which $K^I = [\underline{K}, \overline{K}]$ is a symmetric interval matrix. The problem can therefore be stated as follows: for a given interval matrix $K^I$, find an eigenvalue interval $\lambda^I = [\underline{\lambda}, \overline{\lambda}]$ that encloses all possible eigenvalues $\lambda$ satisfying $Ku = \lambda u$ when $K \in K^I$. Furthermore, let the midpoint matrix of the interval matrix $K^I = [\underline{K}, \overline{K}]$ be defined as

$$K_c = \frac{\underline{K} + \overline{K}}{2}$$

and the uncertain radius as

$$\Delta K = \frac{\overline{K} - \underline{K}}{2}.$$

For some special cases of these problems, such as when $K$ is a real symmetric matrix in Equation (1), methods based on perturbation theory were proposed in [7,9,16–18].
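The midpoint matrix $K_c$ and uncertain radius $\Delta K$ are simple entrywise operations; below is a minimal numerical sketch (the 2 × 2 interval data and the names `K_lo`, `K_up` are illustrative, not from the paper):

```python
import numpy as np

# Illustrative symmetric interval matrix K^I = [K_lo, K_up].
K_lo = np.array([[ 4.0, -1.2],
                 [-1.2,  3.0]])
K_up = np.array([[ 5.0, -0.8],
                 [-0.8,  4.0]])

K_c     = (K_lo + K_up) / 2.0   # midpoint matrix
Delta_K = (K_up - K_lo) / 2.0   # uncertain radius (entrywise)

# Every admissible K satisfies |K - K_c| <= Delta_K entrywise.
print(K_c)
print(Delta_K)
```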
From many numerical experiments, we observed that the eigenvalue bounds are often attained at certain vertex matrices (in other words, at the boundary values of the interval entries). The conditions under which the eigenvalue bounds are attained at certain vertex matrices have therefore been raised as an interesting question. Under the condition that the signs of the components of the eigenvectors remain invariant, Deif [7] developed an effective method which can yield the exact eigenvalue bounds for the interval eigenvalue problem in Equation (1). As a corollary of Deif's method, Dimarogonas [17] proposed that the eigenvalue bounds should be attained at some vertex matrix under Deif's condition. However, there exists no criterion for judging Deif's condition in advance. Methods based on three theorems for the standard interval eigenvalue problem were introduced in [18], which claimed that, according to the vertex solution theorem and the parameter decomposition solution theorem, the eigenvalue bounds of Equation (1) should be achieved when the matrix entries take their boundary values. Unfortunately, this is not true. There exists a matrix (see the example in the Appendix of [16]) for which the upper or lower end point of an eigenvalue interval is achieved when some matrix entries take interior values of their intervals rather than the end points. This contradicts the conclusion in [18]. For symmetric tridiagonal interval matrices, we also have counterexamples in which the extremal matrix is not a vertex matrix. Therefore, the conditions under which the end values of an eigenvalue interval are achieved when the matrix entries take their boundary values need to be established. In [19], Yuan et al. considered a special case of the problem in Equation (1).
They proved that for a symmetric tridiagonal interval matrix, under some assumptions, the upper and lower bounds of the interval eigenvalues of the problem are achieved at a certain vertex matrix of the interval matrix. Additionally, the assumptions proposed in their paper can be checked by a sufficient condition. This result is important. However, it has the drawback that, to determine the upper and lower bounds of an eigenvalue, $2^{2n-1}$ vertex matrices must be checked.
In this paper, we present an improved theoretical result based on the work in [19]. From this result, we derive a fast algorithm to find the upper or lower bound of an eigenvalue. Instead of checking $2^{2n-1}$ vertex matrices, we just need to check $2n - 1$ matrices.
The rest of the paper is arranged as follows. Section 2 introduces the theoretical results given in [19]. Section 3 provides the improved theoretical result and the fast algorithm for finding the upper or lower bound of an eigenvalue of a symmetric tridiagonal interval matrix. Section 4 illustrates the main findings of this paper by means of a simulation example. Section 5 provides further remarks.

A Property for Symmetric Tridiagonal Interval Matrices
For the convenience of reading and deriving the results of the next section, we present the results from [19] here.
Let an irreducible symmetric tridiagonal matrix $A$ (symmetric, hence normal) be denoted as

$$A = \begin{pmatrix} a_1 & b_1 & & \\ b_1 & a_2 & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & b_{n-1} & a_n \end{pmatrix}, \quad b_j \ne 0 \ \text{for } j = 1, \ldots, n-1. \qquad (6)$$

Obviously, all eigenvalues of $A$ are real. Several lemmas and corollaries are given below.

Definition 1 (leading and trailing principal submatrices). Let $D$ be a matrix of order $n$ with diagonal elements $d_{11}, \ldots, d_{nn}$. The submatrices of $D$ whose diagonal entries are $d_{11}, \ldots, d_{kk}$ for $k = 1, \ldots, n$ are called leading principal submatrices of order $k$, and those whose diagonal entries are $d_{n-k+1,n-k+1}, \ldots, d_{nn}$ for $k = 1, \ldots, n$ are called trailing principal submatrices of order $k$.
Let the leading and trailing principal submatrices of order $k$ be denoted as $D_k$ and $\overline{D}_k$ for $k = 1, \ldots, n$, respectively. The leading and trailing principal minors $|D_k|$ and $|\overline{D}_k|$ are defined similarly.
Next, three theorems from the literature are introduced as lemmas.

Lemma 1 (p. 36). For $A$ denoted as in Equation (6), the $n - 1$ eigenvalues of the leading (trailing) principal submatrix of order $n - 1$ strictly separate the $n$ eigenvalues of $A$.
The following corollary is easily deduced from Lemma 1.

Corollary 1. If the characteristic polynomial $f$ of $A$ satisfies $f(\lambda) = 0$, then $f'(\lambda) \ne 0$. Furthermore, the characteristic polynomial can be written as the determinant $|A - \lambda I|$, and the leading (trailing) principal minor of order $n - 1$ of $|A - \lambda I|$ does not equal zero if $f(\lambda) = 0$.
Lemma 2. Let the eigenvalues of $A$ be denoted by $\lambda_1 < \lambda_2 < \ldots < \lambda_n$ and those of a principal submatrix $\hat{A}$ of order $n - 1$ by $\hat{\lambda}_1 \le \hat{\lambda}_2 \le \ldots \le \hat{\lambda}_{n-1}$. Then it holds that $\lambda_i \le \hat{\lambda}_i \le \lambda_{i+1}$ for $i = 1, \ldots, n-1$.

The following can easily be obtained from Lemma 2:

Corollary 2. If $\lambda$ is the minimum or maximum eigenvalue of $A$, so that $|A - \lambda I| = 0$, then the leading and trailing principal minors of $|A - \lambda I|$ of order $k$ for $k < n$ do not equal zero.
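The strict separation stated in Lemma 1 can be observed numerically; the matrix below is purely illustrative (any irreducible symmetric tridiagonal matrix would do):

```python
import numpy as np

# Numerical check of strict eigenvalue separation for an irreducible
# symmetric tridiagonal matrix (all subdiagonal entries nonzero).
n = 5
a = np.array([2.0, 3.0, 1.0, 4.0, 2.5])   # diagonal (illustrative)
b = np.array([1.0, 0.5, 1.5, 0.7])        # subdiagonal, nonzero

A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
lam = np.linalg.eigvalsh(A)               # eigenvalues of A, ascending
mu  = np.linalg.eigvalsh(A[:-1, :-1])     # leading principal submatrix of order n-1

# Strict separation: lam[0] < mu[0] < lam[1] < ... < mu[n-2] < lam[n-1]
strict = all(lam[i] < mu[i] < lam[i + 1] for i in range(n - 1))
print(strict)
```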

Lemma 3 ([22]). If the $n - 1$ eigenvalues of a certain principal submatrix of $A$ of order $n - 1$ are distinct, then they strictly separate the $n$ eigenvalues of $A$.
The proof in [22] is written in Chinese. Its English translation can be found in the Appendix of [19].
From this lemma, it is easy to deduce the following corollary.

Corollary 3. If the $n - 1$ eigenvalues of each principal submatrix of $A$ of order $n - 1$ are distinct and $|A - \lambda I| = 0$, then the leading and trailing principal minors of $|A - \lambda I|$ of order $k$ with $k < n$ do not equal zero.
The proof of this corollary is given in [19]. Next, the main theorem of this paper can be introduced. We give the whole proof here, since its results and notation are needed in the next section.

Theorem 1. Let $A^I = [\underline{A}, \overline{A}]$ be a symmetric tridiagonal interval matrix, denoted as

$$A^I = \begin{pmatrix} [\underline{a}_1, \overline{a}_1] & [\underline{b}_1, \overline{b}_1] & & \\ [\underline{b}_1, \overline{b}_1] & [\underline{a}_2, \overline{a}_2] & \ddots & \\ & \ddots & \ddots & [\underline{b}_{n-1}, \overline{b}_{n-1}] \\ & & [\underline{b}_{n-1}, \overline{b}_{n-1}] & [\underline{a}_n, \overline{a}_n] \end{pmatrix}. \qquad (7)$$

Let its vertex matrices be expressed as $A_s$ for $s = 1, \ldots, 2^{2n-1}$, with diagonal elements being either $\underline{a}_i$ or $\overline{a}_i$ for $i = 1, \ldots, n$ and subdiagonal elements being either $\underline{b}_j$ or $\overline{b}_j$ for $j = 1, \ldots, n-1$. Suppose that (a) every matrix in $A^I$ is irreducible, and (b) the $n - 1$ eigenvalues of each principal submatrix of order $n - 1$ are distinct for every $A \in A^I$. Then the upper and lower bounds of the eigenvalue intervals of $A^I$ are attained at certain vertex matrices $A_s$.

Proof. Let the central points and the radii of the entries of $A^I$ be denoted, respectively, as

$$a_i^c = \frac{\underline{a}_i + \overline{a}_i}{2}, \quad r_{a_i} = \frac{\overline{a}_i - \underline{a}_i}{2}, \quad b_j^c = \frac{\underline{b}_j + \overline{b}_j}{2}, \quad r_{b_j} = \frac{\overline{b}_j - \underline{b}_j}{2} \qquad (8)$$

for $i = 1, \ldots, n$; $j = 1, \ldots, n-1$. Denote $2n - 1$ real variables as $X = (x_{a_1}, \ldots, x_{a_n}, x_{b_1}, \ldots, x_{b_{n-1}})^T \in \mathbb{R}^{2n-1}$. Then the $i$th diagonal entry of $A^I$ can be expressed as

$$a_i^c + r_{a_i} \sin x_{a_i}, \quad i = 1, \ldots, n, \qquad b_j^c + r_{b_j} \sin x_{b_j}, \quad j = 1, \ldots, n-1, \qquad (9)$$

the latter being the $j$th subdiagonal entry. Let $|A^I - \lambda^I I|$ be the characteristic determinant of $A^I$; the definition of its leading and trailing principal minors is the same as in Definition 1. Obviously, $\lambda_i$ is a function of $X$.
Let $F(X, \lambda_i(X)) = |A^I - \lambda_i(X) I|$. $F$ is differentiable due to the differentiability of all entries of $A^I - \lambda_i(X) I$. Then $F(X, \lambda_i(X))$ can be expressed as

$$F(X, \lambda_i(X)) = \begin{vmatrix} a_1^c + r_{a_1} \sin x_{a_1} - \lambda & b_1^c + r_{b_1} \sin x_{b_1} & & \\ b_1^c + r_{b_1} \sin x_{b_1} & a_2^c + r_{a_2} \sin x_{a_2} - \lambda & \ddots & \\ & \ddots & \ddots & b_{n-1}^c + r_{b_{n-1}} \sin x_{b_{n-1}} \\ & & b_{n-1}^c + r_{b_{n-1}} \sin x_{b_{n-1}} & a_n^c + r_{a_n} \sin x_{a_n} - \lambda \end{vmatrix}. \qquad (10)$$

Consider the partial derivative of $\lambda$ with respect to $x_{a_1}$ when $F(X, \lambda_i(X)) = 0$. Based on the derivative rule for a determinant and implicit differentiation, we obtain

$$\frac{\partial \lambda}{\partial x_{a_1}} = \frac{r_{a_1} \cos x_{a_1} \, |\overline{D}_{n-1}|}{D}.$$

In a similar way, the partial derivative of $\lambda$ with respect to $x_{b_1}$ when $F(X, \lambda_i(X)) = 0$ is

$$\frac{\partial \lambda}{\partial x_{b_1}} = -\frac{2 \left( b_1^c + r_{b_1} \sin x_{b_1} \right) r_{b_1} \cos x_{b_1} \, |\overline{D}_{n-2}|}{D}.$$

Moreover, it can be deduced that

$$\frac{\partial \lambda}{\partial x_{a_k}} = \frac{r_{a_k} \cos x_{a_k} \, |D_{k-1}| \, |\overline{D}_{n-k}|}{D}, \qquad \frac{\partial \lambda}{\partial x_{b_l}} = -\frac{2 \left( b_l^c + r_{b_l} \sin x_{b_l} \right) r_{b_l} \cos x_{b_l} \, |D_{l-1}| \, |\overline{D}_{n-l-1}|}{D}, \qquad (11)$$

where $D = |\overline{D}_{n-1}| + |D_1| |\overline{D}_{n-2}| + \ldots + |D_{n-2}| |\overline{D}_1| + |D_{n-1}|$, $|D_0| = |\overline{D}_0| = 1$, and all minors are taken of $A^I - \lambda_i I$. From Corollary 1, we know that $D \ne 0$ when $F(X, \lambda_i) = 0$, since $-D$ is the derivative of $F$ with respect to $\lambda_i$. Additionally, from Corollary 2, when condition (a) is satisfied and $F(X, \lambda_1) = 0$ (or $F(X, \lambda_n) = 0$), all $|D_i|$ and $|\overline{D}_i|$ for $i = 1, 2, \ldots, n-1$ do not equal zero. From Lemma 3, the same conclusion can be drawn when conditions (a) and (b) are both satisfied and $F(X, \lambda_i) = 0$.
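The quantity D, being (up to sign) the derivative of the characteristic determinant with respect to λ, can be checked by a small finite-difference experiment; the matrix entries, evaluation point and step size below are illustrative only:

```python
import numpy as np

# Finite-difference check of  -d/dλ det(A - λI) = Σ_k |D_{k-1}| |D̄_{n-k}|,
# where D_j and D̄_j are leading/trailing principal submatrices of A - λI.
a = np.array([2.0, 3.0, 1.0, 4.0])        # illustrative diagonal
b = np.array([1.0, 0.5, 1.5])             # illustrative subdiagonal
A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
n = len(a)
lam, h = 0.3, 1e-6                        # generic point, FD step

def det_shift(M):
    """det(M - lam*I); the empty (order-0) minor is 1 by convention."""
    return np.linalg.det(M - lam * np.eye(len(M))) if len(M) else 1.0

# D = Σ over diagonal positions of (leading minor) * (trailing minor).
D = sum(det_shift(A[:k, :k]) * det_shift(A[k + 1:, k + 1:]) for k in range(n))

# Central finite difference of det(A - λI) in λ.
fd = (np.linalg.det(A - (lam + h) * np.eye(n))
      - np.linalg.det(A - (lam - h) * np.eye(n))) / (2 * h)

ok = abs(-fd - D) < 1e-4
print(ok)
```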
Therefore, if all partial derivatives in Equation (11) are to vanish, we must have $\cos x_{a_k} = 0$ and $\cos x_{b_l} = 0$, so the corresponding values of the sine must be $\pm 1$. By the extremal property of a function, the eigenvalue bounds are therefore reached when the matrix entries take their boundary values. □

Remark 1. We would like to mention here that condition (b) in Theorem 1 is restrictive. However, the following are true: (i) In many engineering problems, condition (b) can be verified by sufficient conditions; Yuan, He and Leng provided a theorem in [19] for this purpose, and Hladík in [15] provided an alternative test which can also verify condition (b). (ii) In many engineering problems, we just need to find the interval of the largest eigenvalue, and in this scenario we do not need to verify condition (b). (iii) In [15], the author presented another approach to obtaining eigenvalue intervals for symmetric tridiagonal interval matrices. Compared with the algorithm in [15], our algorithm (presented in the next section) does not need to use eigenvectors. Therefore, it is still competitive.

An Improved Theoretical Result and a Fast Algorithm
As mentioned before, Theorem 1 provides an important property of symmetric tridiagonal interval matrices. However, it is of little help in finding the upper and lower bounds of eigenvalues, since $2^{2n-1}$ vertex matrices would need to be checked to determine the upper and lower bounds of an eigenvalue. Below, we prove a stronger property.

Definition 2 (uniformly monotone). Let $G(x_1, \ldots, x_n) : E \to \mathbb{R}$ be a function. If for all $(x_1, x_2, \ldots, x_n) \in E$, $G$ is monotone increasing or monotone decreasing with respect to $x_i$, then $G$ is said to be uniformly monotone with respect to $x_i$.

Theorem 2. Under the conditions of Theorem 1, for $\lambda_i(X)$ defined through the parameterization in Equation (9), if we restrict each component of $X$ to $[-\pi/2, \pi/2]$, then $\lambda_i(X)$ is uniformly monotone with respect to each component.
Proof. In the proof of Theorem 1, we showed that all $|D_i|$ and $|\overline{D}_i|$ for $i = 1, 2, \ldots, n-1$ do not equal zero. According to Lemma 4, all $\partial \lambda_i / \partial x_{a_k}$ and $\partial \lambda_i / \partial x_{b_l}$ in Equation (11) keep the same sign on $(-\pi/2, \pi/2)$ for all $x_{a_k}$ and $x_{b_l}$, $k = 1, 2, \ldots, n$, $l = 1, 2, \ldots, n-1$. This means that $\lambda_i(X)$ is uniformly monotone. □
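This monotone dependence can be glimpsed numerically. For a diagonal entry the direction is known in general: by Weyl's monotonicity theorem, the smallest eigenvalue of a symmetric matrix can only increase when a diagonal entry increases. A minimal sketch with illustrative entries:

```python
import numpy as np

# Sweep one diagonal entry of a symmetric tridiagonal matrix across an
# interval and record the smallest eigenvalue; by Weyl's monotonicity
# theorem the sequence is nondecreasing.
b = np.array([1.0, 0.5, 1.5])                 # fixed subdiagonal (illustrative)
vals = []
for a1 in np.linspace(1.0, 3.0, 21):          # a_1 sweeps the interval [1, 3]
    a = np.array([a1, 3.0, 1.0, 4.0])
    A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    vals.append(np.linalg.eigvalsh(A)[0])     # smallest eigenvalue

mono = bool(np.all(np.diff(vals) >= -1e-12))  # nondecreasing up to roundoff
print(mono)
```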
Theorem 2 tells us the following:
• Assume that the conditions in Theorem 1 are satisfied;
• Assume a component (for example, $x_{a_1}$) causes $\lambda_i(X)$ to reach its extreme value monotonically when the other components are fixed (by Theorem 1, $x_{a_1}$ will then reach one of its end points).
Then, for another component (for example, $x_{a_2}$), once $x_{a_2}$ reaches one of its end points, changing $x_{a_1}$ cannot further improve the value of $\lambda_i(X)$. Based on this analysis, we propose an algorithm to find the upper or lower bound of an eigenvalue. Suppose we need to find the lower bound of the smallest eigenvalue $\lambda_1$. We have Algorithm 1 as follows (for the upper bounds and the other eigenvalues, the algorithms are similar).

Algorithm 1 (fast algorithm for the lower bound of $\lambda_1$)
1: Set $a_k = \underline{a}_k$ and $b_l = \underline{b}_l$ for $k = 1, 2, \ldots, n$ and $l = 1, 2, \ldots, n-1$, where $\underline{a}_k$ and $\underline{b}_l$ have the same meaning as in Equation (7). This yields a matrix $A$ as in Equation (6); compute its smallest eigenvalue $\lambda_1$.
2: For $k$ from 1 to $n$: set $a_k = \overline{a}_k$, with $\overline{a}_k$ as in Equation (7), obtaining a matrix $A'$ with smallest eigenvalue $\lambda_1'$. If $\lambda_1' < \lambda_1$, set $\lambda_1 \leftarrow \lambda_1'$; otherwise restore $a_k \leftarrow \underline{a}_k$.
3: For $l$ from 1 to $n - 1$: set $b_l = \overline{b}_l$, with $\overline{b}_l$ as in Equation (7), obtaining a matrix $A'$ with smallest eigenvalue $\lambda_1'$. If $\lambda_1' < \lambda_1$, set $\lambda_1 \leftarrow \lambda_1'$; otherwise restore $b_l \leftarrow \underline{b}_l$.
4: Return $\lambda_1$ as the lower bound of the smallest eigenvalue.
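The steps above can be sketched in a few lines; the 3 × 3 interval data below are hypothetical (chosen so that the subdiagonal intervals exclude zero, i.e., condition (a) holds), and a brute-force enumeration of all vertex matrices serves as a cross-check:

```python
import itertools
import numpy as np

def lam1(a, b):
    """Smallest eigenvalue of the symmetric tridiagonal matrix with
    diagonal a and subdiagonal b."""
    A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.linalg.eigvalsh(A)[0]

def lower_bound_lam1(a_lo, a_up, b_lo, b_up):
    """Greedy sketch of Algorithm 1: checks only 2n - 1 vertex matrices."""
    a = np.array(a_lo, dtype=float)          # step 1: start at lower bounds
    b = np.array(b_lo, dtype=float)
    best = lam1(a, b)
    for k in range(len(a)):                  # step 2: try each diagonal flip
        a[k] = a_up[k]
        cand = lam1(a, b)
        if cand < best:
            best = cand                      # keep the flip
        else:
            a[k] = a_lo[k]                   # revert
    for l in range(len(b)):                  # step 3: try each subdiagonal flip
        b[l] = b_up[l]
        cand = lam1(a, b)
        if cand < best:
            best = cand
        else:
            b[l] = b_lo[l]
    return best                              # step 4: lower bound of λ_1

# Hypothetical 3x3 interval data; subdiagonal intervals exclude zero.
a_lo, a_up = [2.0, 3.0, 2.5], [2.4, 3.5, 3.0]
b_lo, b_up = [0.8, 1.1], [1.0, 1.3]

fast = lower_bound_lam1(a_lo, a_up, b_lo, b_up)

# Brute-force check over all 2^(2n-1) = 32 vertex matrices.
brute = min(
    lam1(np.where(sa, a_up, a_lo), np.where(sb, b_up, b_lo))
    for sa in itertools.product([0, 1], repeat=3)
    for sb in itertools.product([0, 1], repeat=2)
)

ok = abs(fast - brute) < 1e-10
print(ok)
```

Here the greedy pass evaluates 2n − 1 = 5 candidate vertex matrices, while the exhaustive check evaluates all 32; under the conditions of Theorem 2 the two agree.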
To obtain the upper or lower bound of $\lambda_i$, we thus need to check only $2n - 1$ vertex matrices instead of $2^{2n-1}$.

Numerical Experiment
A numerical example is presented here to illustrate the algorithm. The example comes from a spring-mass system; since the practical background is unimportant here, it is omitted. It is easy to verify that all conditions in Theorem 1 hold (see [19]). Using Algorithm 1 of Section 3, we obtain the results in Table 1. The results agree with those in [11].

Conclusions
This paper improved the theoretical results presented in [19] and presented a fast algorithm to find the upper and lower bounds of the interval eigenvalues of a class of symmetric tridiagonal interval matrices. Since this kind of matrix arises frequently in engineering problems, we believe the algorithm has considerable practical value. It remains an open question under what assumptions the conclusions of Theorems 1 and 2 can be generalized to symmetric but non-tridiagonal interval matrices, or to even more general interval matrices.

Conflicts of Interest:
The authors declare no conflict of interest.