Article

Parametric Inner Bounds for the Extreme Eigenvalues of Real Symmetric Matrices

1 Department of Mathematics, School of Mathematics, Statistics and Computer Science, University of KwaZulu-Natal, Durban 4001, South Africa
2 Department of Decision Sciences, College of Economics and Management Sciences, University of South Africa, Pretoria 0003, South Africa
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2025, 17(12), 2031; https://doi.org/10.3390/sym17122031
Submission received: 22 September 2025 / Revised: 5 November 2025 / Accepted: 7 November 2025 / Published: 27 November 2025
(This article belongs to the Special Issue Symmetry in Numerical Analysis and Applied Mathematics)

Abstract

Here, inner bounds for the extreme eigenvalues of a real symmetric matrix are derived by using a two-dimensional subspace of $\mathbb{R}^n$. The addition of a single parameter to the basis vectors, and optimization thereof, results in superior bounds compared to using fixed basis vectors. This is a rather simple procedure that is easily automated.

1. Introduction

The determination of the spectrum $\sigma(A)$ of a matrix $A$ is a fundamental problem in almost all spheres of science. In fact, if the eigenpairs are known, then they give a complete description of the associated linear operator. In most cases, the solution of the characteristic equation $\det(\lambda I - A) = 0$ is unmanageable for large matrix dimensions $n$, and numerical methods are employed instead, for example, the QR algorithm [1]. In some cases, only the location of the spectrum is required, whilst in other instances, knowledge of the extreme eigenvalues is sufficient. Weinstein bounds [2] and Kato bounds [3] depend on an approximate eigenpair, which may be evaluated numerically. For real symmetric matrices, Lehmann's method [4] provides both upper and lower bounds for the eigenvalues. The Temple quotient [5] provides a lower bound for the smallest eigenvalue of such a matrix and is a special case of Lehmann's method. Brauer [6] provides a method to calculate outer bounds for $\sigma(A)$ for such matrices. The Rayleigh quotient [7] is often used to improve an eigenvalue estimate when an approximate eigenpair is known. The Gerschgorin theorem and the ovals of Cassini [8] depend only on knowledge of the entries of a matrix, yet they yield powerful results, especially for large sparse matrices. Trace bounds use only the diagonal elements of a matrix; however, they can yield some very impressive bounds for the eigenvalues. These have been studied extensively by Wolkowicz and Styan [9]. Sharma et al. [10] extended and improved the work of Wolkowicz and Styan, whilst Singh et al. [11,12] generalized the work of both sets of authors by employing functions of a matrix. Recently, Singh et al. [13] introduced a precursor to this article by using all matrix entries. Here, we introduce an optimizing parameter which improves those inner bounds. Recently, there has also been a resurgence in the effort to efficiently locate eigenvalue bounds for interval matrices [14] and symmetric tridiagonal matrices [15].

2. Theory

Let $A$ be an $n \times n$ real symmetric matrix with eigenvalues $\{\lambda_i\}_{i=1}^{n}$, and denote the associated set of orthonormal eigenvectors by $\{u_i\}_{i=1}^{n}$. We shall assume that the eigenvalues are arranged in the order
$$\lambda_1 \le \lambda_2 \le \cdots \le \lambda_n.$$
Let $\langle\cdot,\cdot\rangle$ denote the standard inner product in $\mathbb{R}^n$.
Theorem 1.
$$\lambda_1 \le \min_{\|x\|_2 = 1} \langle Ax, x\rangle \le \max_{\|x\|_2 = 1} \langle Ax, x\rangle \le \lambda_n. \quad (1)$$
Proof. 
Let $x \in \mathbb{R}^n$ with $\|x\|_2 = 1$; then
$$x = \sum_{i=1}^{n} \langle x, u_i\rangle u_i,$$
where
$$\sum_{i=1}^{n} \langle x, u_i\rangle^2 = 1.$$
Hence
$$\langle Ax, x\rangle = \left\langle \sum_{i=1}^{n} \langle x, u_i\rangle A u_i,\; \sum_{j=1}^{n} \langle x, u_j\rangle u_j \right\rangle = \sum_{i=1}^{n}\sum_{j=1}^{n} \lambda_i \langle x, u_i\rangle \langle x, u_j\rangle \langle u_i, u_j\rangle = \sum_{i=1}^{n} \lambda_i \langle x, u_i\rangle^2 \le \lambda_n \sum_{i=1}^{n} \langle x, u_i\rangle^2 = \lambda_n.$$
Thus, $\max_{\|x\|_2=1} \langle Ax, x\rangle \le \lambda_n$. The left-hand side of (1) is proved in a similar manner. □
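Theorem 1 is easily illustrated numerically; the following minimal sketch (ours, not part of the paper) checks that random unit vectors never violate the bounds.

```python
# Minimal numerical illustration of Theorem 1 (ours, not from the paper):
# every Rayleigh quotient <Ax, x> of a unit vector lies in [lambda_1, lambda_n].
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                       # a real symmetric test matrix
lam = np.linalg.eigvalsh(A)             # eigenvalues in ascending order

for _ in range(1000):
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)              # ||x||_2 = 1
    q = x @ A @ x                       # <Ax, x>
    assert lam[0] - 1e-12 <= q <= lam[-1] + 1e-12
print("all sampled Rayleigh quotients lie in [lambda_1, lambda_n]")
```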
Lemma 1.
If $x \in S$, $\|x\|_2 = 1$, where $S$ is a subspace of $\mathbb{R}^n$ and $n$ is assumed to be even, then
$$\lambda_1 \le \min_{\|x\|_2 = 1,\; x \in S} \langle Ax, x\rangle \le \max_{\|x\|_2 = 1,\; x \in S} \langle Ax, x\rangle \le \lambda_n.$$
Let $S = \operatorname{span}\{u, v\}$, where $u, v \in \mathbb{R}^n$ are orthogonal. An obvious parametric choice for $u$ and $v$ is the following (the sign change in $v$ ensures $\langle u, v\rangle = 0$):
$$u = (\underbrace{\alpha, \ldots, \alpha}_{n/2}, \beta, \ldots, \beta)^t, \qquad v = (\underbrace{-\beta, \ldots, -\beta}_{n/2}, \alpha, \ldots, \alpha)^t.$$
Let $e^{(1)} = \sum_{i=1}^{n/2} e_i$ and $e^{(2)} = \sum_{i=n/2+1}^{n} e_i$, where the $e_i$ are the standard unit vectors in $\mathbb{R}^n$, with one in the $i$th position and zeros elsewhere. We may thus write
$$u = \alpha e^{(1)} + \beta e^{(2)}, \quad (2)$$
$$v = -\beta e^{(1)} + \alpha e^{(2)}. \quad (3)$$
Thus, if $x \in S$, $\|x\|_2 = 1$, then $x = c_1 u + c_2 v$ and
$$f(\alpha, \beta) = \langle Ax, x\rangle.$$
It may seem plausible that $\alpha$ and $\beta$ may be chosen to maximize/minimize $f$. However, this is not the case. For constants $k_1$ and $k_2$, we have that
$$k_1 u + k_2 v = k_1\left(\alpha e^{(1)} + \beta e^{(2)}\right) + k_2\left(-\beta e^{(1)} + \alpha e^{(2)}\right) = (k_1\alpha - k_2\beta)\,e^{(1)} + (k_1\beta + k_2\alpha)\,e^{(2)} \in \operatorname{span}\left\{e^{(1)}, e^{(2)}\right\}.$$
Thus, $S = \operatorname{span}\{u, v\} = \operatorname{span}\{e^{(1)}, e^{(2)}\}$, and since the unit ball in $S$ is independent of the two non-parallel vectors that span $S$, the extreme values of $f$ must be independent of the choice of $u$ and $v$ of the form given by (2)–(3). It therefore suffices to use the rather simpler normalized form
$$u = \frac{e^{(1)}}{\sqrt{n/2}}, \quad (4)$$
$$v = \frac{e^{(2)}}{\sqrt{n/2}}. \quad (5)$$
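Although elementary, the basis-independence claim is easy to confirm numerically; the following sketch (ours) projects $A$ onto an orthonormalized basis of $S$ for both the parametric pair (2)–(3) and the fixed pair (4)–(5).

```python
# Numerical check (ours) that the extreme values of f over the unit ball of S
# depend only on S = span{e^(1), e^(2)}, not on alpha, beta in (2)-(3).
import numpy as np

def extremes_over_span(A, u, v):
    # Orthonormalize {u, v}; the extreme values of <Ax, x> over unit x in
    # span{u, v} are the eigenvalues of the projected 2x2 matrix.
    Q, _ = np.linalg.qr(np.column_stack([u, v]))
    return np.linalg.eigvalsh(Q.T @ A @ Q)

rng = np.random.default_rng(1)
n = 8
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
e1 = np.r_[np.ones(n // 2), np.zeros(n // 2)]    # e^(1)
e2 = np.r_[np.zeros(n // 2), np.ones(n // 2)]    # e^(2)

alpha, beta = 2.0, -0.7                          # arbitrary parameter values
print(extremes_over_span(A, alpha * e1 + beta * e2, -beta * e1 + alpha * e2))
print(extremes_over_span(A, e1, e2))             # identical extreme values
```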
Theorem 2.
Let $\{u, v\}$ be an orthonormal set in $\mathbb{R}^n$, where $n$ is even, and let $S = \operatorname{span}\{u, v\}$; then, the extreme values of $\langle Ax, x\rangle$, $\|x\|_2 = 1$, $x \in S$, are given by
$$\lambda_{\pm} = \frac{1}{2}\left[\langle Au, u\rangle + \langle Av, v\rangle \pm \sqrt{\left(\langle Au, u\rangle - \langle Av, v\rangle\right)^2 + 4\langle Au, v\rangle^2}\right]. \quad (6)$$
Proof. 
Write
$$x = \frac{c_1 u + c_2 v}{\sqrt{c_1^2 + c_2^2}}, \quad (7)$$
and define
$$f(c_1, c_2) = \langle Ax, x\rangle = \frac{\langle c_1 Au + c_2 Av,\; c_1 u + c_2 v\rangle}{c_1^2 + c_2^2}.$$
Thus,
$$(c_1^2 + c_2^2)\, f(c_1, c_2) = c_1^2 \langle Au, u\rangle + 2 c_1 c_2 \langle Au, v\rangle + c_2^2 \langle Av, v\rangle. \quad (8)$$
Differentiate (8) partially with respect to $c_1$ and $c_2$ to obtain
$$2 c_1 f + (c_1^2 + c_2^2)\frac{\partial f}{\partial c_1} = 2 c_1 \langle Au, u\rangle + 2 c_2 \langle Au, v\rangle, \qquad 2 c_2 f + (c_1^2 + c_2^2)\frac{\partial f}{\partial c_2} = 2 c_1 \langle Au, v\rangle + 2 c_2 \langle Av, v\rangle.$$
Setting $\frac{\partial f}{\partial c_1} = \frac{\partial f}{\partial c_2} = 0$ yields a linear system of equations, which may be cast in the matrix-vector form
$$\begin{pmatrix} \langle Au, u\rangle & \langle Au, v\rangle \\ \langle Au, v\rangle & \langle Av, v\rangle \end{pmatrix} \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = f \begin{pmatrix} c_1 \\ c_2 \end{pmatrix}. \quad (9)$$
It is now obvious that the extreme values of f are the eigenvalues of the coefficient matrix in (9). The solution of the characteristic polynomial then yields (6). □
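As a sanity check, (6) can be compared with the eigenvalues of the $2 \times 2$ coefficient matrix in (9); the following sketch (ours, not from the paper) does this for a random symmetric matrix and a random orthonormal pair.

```python
# Sanity check of (6) (ours): the closed form agrees with the eigenvalues of
# the 2x2 coefficient matrix in (9) for an orthonormal pair {u, v}.
import numpy as np

def lambda_pm(A, u, v):
    t1, t2, t3 = u @ A @ u, v @ A @ v, u @ A @ v
    d = np.sqrt((t1 - t2) ** 2 + 4.0 * t3 ** 2)
    return (t1 + t2 - d) / 2.0, (t1 + t2 + d) / 2.0

rng = np.random.default_rng(2)
n = 6
M = rng.standard_normal((n, n))
A = (M + M.T) / 2
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))   # random orthonormal pair
u, v = Q[:, 0], Q[:, 1]

print(lambda_pm(A, u, v))
C = np.array([[u @ A @ u, u @ A @ v],
              [u @ A @ v, v @ A @ v]])
print(np.linalg.eigvalsh(C))                       # the same two values
```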
We partition the matrix $A$ into $\frac{n}{2} \times \frac{n}{2}$ blocks as follows:
$$A = \begin{pmatrix} A_{11} & A_{12} \\ A_{12}^t & A_{22} \end{pmatrix}.$$
If $u$ and $v$ are chosen as in (4) and (5), we have
$$\langle Au, u\rangle = \frac{2}{n}\langle Ae^{(1)}, e^{(1)}\rangle = \frac{2}{n}\left\langle A\sum_{i=1}^{n/2} e_i,\; \sum_{j=1}^{n/2} e_j\right\rangle = \frac{2}{n}\sum_{i=1}^{n/2}\sum_{j=1}^{n/2}\langle Ae_i, e_j\rangle = \frac{2}{n}\sum_{i=1}^{n/2}\sum_{j=1}^{n/2} a_{ij} = \frac{2}{n}S_{11}, \quad (10)$$
where $S_{11}$ is the sum of the elements of $A_{11}$. Similarly, we can show that
$$\langle Av, v\rangle = \frac{2}{n}S_{22}, \quad (11)$$
$$\langle Au, v\rangle = \frac{2}{n}S_{12}, \quad (12)$$
where $S_{22}$ is the sum of the elements of $A_{22}$ and $S_{12}$ is the sum of the elements of $A_{12}$. Substituting (10)–(12) into (6), we have
$$\lambda_{\pm} = \frac{1}{n}\left[S_{11} + S_{22} \pm \sqrt{(S_{11} - S_{22})^2 + 4S_{12}^2}\right]. \quad (13)$$
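The fixed-basis bound (13) is immediate to implement; a minimal sketch (ours, assuming $n$ even and 0-based indexing) follows.

```python
# Minimal sketch of the fixed-basis inner bounds (13): only the block sums
# S11, S22, S12 are needed. lambda_- is an upper bound for lambda_1 and
# lambda_+ a lower bound for lambda_n.
import numpy as np

def inner_bounds(A):
    n = A.shape[0]
    h = n // 2
    S11 = A[:h, :h].sum()
    S22 = A[h:, h:].sum()
    S12 = A[:h, h:].sum()
    d = np.sqrt((S11 - S22) ** 2 + 4.0 * S12 ** 2)
    return (S11 + S22 - d) / n, (S11 + S22 + d) / n
```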
We now let the $k$th component of $e^{(1)}$ be replaced by a parameter $\alpha$, $1 \le k \le \frac{n}{2}$, and set
$$u = \frac{e^{(1)} + (\alpha - 1)e_k}{\sqrt{n/2 + \alpha^2 - 1}} = \frac{e^{(1)} + (\alpha - 1)e_k}{D_u}, \qquad v = \frac{e^{(2)}}{\sqrt{n/2}} = \frac{e^{(2)}}{D_v},$$
where $D_u = \sqrt{n/2 + \alpha^2 - 1} = \|e^{(1)} + (\alpha - 1)e_k\|_2$ and $D_v = \sqrt{n/2} = \|e^{(2)}\|_2$. For convenience, we shall write (6) as
$$\lambda_{\pm} = \frac{1}{2}\left[T_1 + T_2 \pm \sqrt{(T_1 - T_2)^2 + 4T_3^2}\right], \quad (14)$$
where $T_1 = \langle Au, u\rangle$, $T_2 = \langle Av, v\rangle$, and $T_3 = \langle Au, v\rangle$. $T_1$ may be simplified as follows:
$$T_1 D_u^2 = \left\langle Ae^{(1)} + (\alpha - 1)Ae_k,\; e^{(1)} + (\alpha - 1)e_k\right\rangle = \langle Ae^{(1)}, e^{(1)}\rangle + 2(\alpha - 1)\langle Ae_k, e^{(1)}\rangle + (\alpha - 1)^2\langle Ae_k, e_k\rangle = S_{11} + 2(\alpha - 1)\sum_{i=1}^{n/2} a_{ki} + (\alpha - 1)^2 a_{kk} = S_{11} + 2(\alpha - 1)R_1 + (\alpha - 1)^2 a_{kk}, \quad (15)$$
where $R_1 = \sum_{i=1}^{n/2} a_{ki}$ is the sum of the elements of the $k$th row of $A_{11}$. Similarly, we can show the following results:
$$T_2 D_v^2 = S_{22}, \quad (16)$$
$$T_3 D_u D_v = S_{12} + (\alpha - 1)R_2, \quad (17)$$
where $R_2 = \sum_{j=n/2+1}^{n} a_{kj}$ is the sum of the elements of the $k$th row of $A_{12}$. Since $T_1$ and $T_3$ are functions of $\alpha$, we may differentiate them with respect to $\alpha$ to show the following results:
$$T_1' D_u^2 + 2\alpha T_1 = 2R_1 + 2(\alpha - 1)a_{kk}, \qquad T_1' D_u^4 + 2\alpha T_1 D_u^2 = \left[2R_1 + 2(\alpha - 1)a_{kk}\right] D_u^2. \quad (18)$$
Equation (18) is simplified to yield
$$T_1' D_u^4 = 2(\alpha - 1)^2(a_{kk} - R_1) + (\alpha - 1)(n a_{kk} - 2S_{11}) + (n R_1 - 2S_{11}). \quad (19)$$
In a similar manner, we may show that
$$T_3' D_u D_v + \frac{\alpha T_3 D_v}{D_u} = R_2, \qquad T_3' D_u D_v^2 + \frac{\alpha T_3 D_v^2}{D_u} = R_2 D_v. \quad (20)$$
Equation (20) may be simplified to the form
$$T_3' D_u D_v^2 = 2(\alpha - 1)^2 R_2 + (\alpha - 1)(R_2 + S_{12}) + S_{12} - \frac{n}{8}R_2. \quad (21)$$
From (15) and (16), we have
$$(T_1 - T_2) D_u^2 D_v^2 = \frac{1}{2}\left[(\alpha - 1)^2\left(n a_{kk} - 2S_{22}\right) + 2(\alpha - 1)\left(n R_1 - 2S_{22}\right) + n\left(S_{11} - S_{22}\right)\right]. \quad (22)$$
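The identities (15)–(17) are easy to sanity-check numerically; the following sketch (ours, not part of the paper, with 0-based $k < n/2$) compares both sides for a random symmetric matrix.

```python
# Numerical sanity check (ours) of the identities (15)-(17) for the parametric
# u, v of this section; k is 0-based with k < n/2.
import numpy as np

rng = np.random.default_rng(3)
n, k, alpha = 8, 2, 1.7
h = n // 2
M = rng.standard_normal((n, n))
A = (M + M.T) / 2

Du = np.sqrt(h + alpha ** 2 - 1.0)                  # D_u
Dv = np.sqrt(float(h))                              # D_v
u = np.r_[np.ones(h), np.zeros(h)]
u[k] = alpha                                        # e^(1) + (alpha - 1) e_k
u /= Du
v = np.r_[np.zeros(h), np.ones(h)] / Dv

S11, S22, S12 = A[:h, :h].sum(), A[h:, h:].sum(), A[:h, h:].sum()
R1, R2 = A[k, :h].sum(), A[k, h:].sum()

print(u @ A @ u, (S11 + 2*(alpha-1)*R1 + (alpha-1)**2 * A[k, k]) / Du**2)  # (15)
print(v @ A @ v, S22 / Dv**2)                                              # (16)
print(u @ A @ v, (S12 + (alpha-1)*R2) / (Du * Dv))                         # (17)
```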
To find the extreme values of $\lambda_{\pm}$ with respect to the parameter, (14) is differentiated with respect to $\alpha$ and the derivative set to zero; squaring the resulting relation yields the equation
$$T_1'^2\left[(T_1 - T_2)^2 + 4T_3^2\right] = \left[(T_1 - T_2)T_1' + 4T_3 T_3'\right]^2. \quad (23)$$
Equation (23) simplifies to
$$T_3 T_1'^2 = 2T_1' T_3'(T_1 - T_2) + 4T_3 T_3'^2. \quad (24)$$
Equation (24) is now multiplied by $D_u^9 D_v^5$ and terms are grouped to yield
$$D_v^4\left(T_3 D_u D_v\right)\left(T_1' D_u^4\right)^2 = 2 D_u^2 D_v\left(T_1' D_u^4\right)\left(T_3' D_u D_v^2\right)\left[(T_1 - T_2) D_u^2 D_v^2\right] + 4\left(T_3 D_u D_v\right)\left(T_3' D_u D_v^2\right)^2 D_u^6. \quad (25)$$
The terms in brackets in Equation (25) can be simplified by using Equations (17), (19), (21) and (22), so that (25) may be written in the form
$$F(\alpha) = G_1(\alpha) + G_2(\alpha) + G_3(\alpha) = 0, \quad (26)$$
where
$$G_1(\alpha) = \frac{n^2}{4}\left[S_{12} + (\alpha - 1)R_2\right]\left[2(\alpha - 1)^2(a_{kk} - R_1) + (\alpha - 1)(n a_{kk} - 2S_{11}) + (n R_1 - 2S_{11})\right]^2, \quad (27)$$
$$G_2(\alpha) = -\frac{n}{2}\left[n + 2(\alpha - 1)^2 + 4(\alpha - 1)\right]\left[2(\alpha - 1)^2(a_{kk} - R_1) + (\alpha - 1)(n a_{kk} - 2S_{11}) + n R_1 - 2S_{11}\right] \times \left[2(\alpha - 1)^2 R_2 + (\alpha - 1)(R_2 + S_{12}) + S_{12} - \tfrac{n}{8}R_2\right] \times \left[(\alpha - 1)^2(n a_{kk} - 2S_{22}) + 2(\alpha - 1)(n R_1 - 2S_{22}) + n(S_{11} - S_{22})\right], \quad (28)$$
$$G_3(\alpha) = -\frac{1}{2}\left[n + 2(\alpha - 1)^2 + 4(\alpha - 1)\right]^3\left[S_{12} + (\alpha - 1)R_2\right]\left[2(\alpha - 1)^2 R_2 + (\alpha - 1)(R_2 + S_{12}) + S_{12} - \tfrac{n}{8}R_2\right]^2. \quad (29)$$
Linearizing $F(\alpha)$ about $\alpha = 1$ by using $F(\alpha) \approx F(1) + F'(1)(\alpha - 1)$ and finding a zero, which we shall call $\alpha_0$, yields
$$\alpha_0 = 1 - \frac{F(1)}{F'(1)} = 1 - \frac{G_1(1) + G_2(1) + G_3(1)}{G_1'(1) + G_2'(1) + G_3'(1)}, \quad (30)$$
where
$$G_1(1) = \frac{n^2}{4} S_{12}\,(n R_1 - 2S_{11})^2,$$
$$G_2(1) = -\frac{n^3}{2}(n R_1 - 2S_{11})(n a_{kk} - 2S_{22})(S_{11} - S_{22}),$$
$$G_3(1) = -\frac{n^3}{2} S_{12}\left(S_{12} - \tfrac{n}{8}R_2\right)^2,$$
$$G_1'(1) = \frac{n^2}{4}(n R_1 - 2S_{11})\left[R_2(n R_1 - 2S_{11}) + 2(n a_{kk} - 2S_{11})S_{12}\right],$$
$$G_2'(1) = -\frac{n^2}{2}\Big[2(n R_1 - 2S_{11})\left(S_{12} - \tfrac{n}{8}R_2\right)(n R_1 + 2S_{11} - 4S_{22}) + n(S_{11} - S_{22})\big((n a_{kk} - 2S_{11})\left(S_{12} - \tfrac{n}{8}R_2\right) + (n R_1 - 2S_{11})(R_2 + S_{12})\big)\Big],$$
$$G_3'(1) = -2n^2\left(S_{12} - \tfrac{n}{8}R_2\right)\Big[\left(S_{12} - \tfrac{n}{8}R_2\right)(12 S_{12} + n R_2) + 2n S_{12}(R_2 + S_{12})\Big].$$
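Transcribing the closed-form coefficients (27)–(30) is error-prone; an equivalent route (our own sketch, not the authors' Newton routine) evaluates $\lambda_{\pm}(\alpha)$ directly from (14)–(17) and searches over a grid of $\alpha$ values. Positions $k$ are 0-based and restricted to the first half; the $P$-swapped construction below handles the remaining positions.

```python
# Direct optimization of lambda_±(alpha) via (14)-(17) (a sketch under the
# stated assumptions, not the paper's Newton implementation).
import numpy as np

def lambda_pm_alpha(A, k, alpha):
    n = A.shape[0]
    h = n // 2
    S11, S22, S12 = A[:h, :h].sum(), A[h:, h:].sum(), A[:h, h:].sum()
    R1, R2 = A[k, :h].sum(), A[k, h:].sum()
    Du2, Dv2 = h + alpha ** 2 - 1.0, float(h)
    T1 = (S11 + 2*(alpha - 1)*R1 + (alpha - 1)**2 * A[k, k]) / Du2   # (15)
    T2 = S22 / Dv2                                                   # (16)
    T3 = (S12 + (alpha - 1)*R2) / np.sqrt(Du2 * Dv2)                 # (17)
    d = np.sqrt((T1 - T2) ** 2 + 4.0 * T3 ** 2)
    return (T1 + T2 - d) / 2.0, (T1 + T2 + d) / 2.0                  # (14)

def best_bounds(A, k, lo=-10.0, hi=10.0, m=20001):
    vals = np.array([lambda_pm_alpha(A, k, a) for a in np.linspace(lo, hi, m)])
    return vals[:, 0].min(), vals[:, 1].max()   # minimize lambda_-, maximize lambda_+
```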
Now let
$$P = \begin{pmatrix} 0 & I \\ I & 0 \end{pmatrix},$$
and replace $v$ by $Pu$ and $u$ by $Pv$; thus, $\alpha$ is now a component in the vector $v$. From (6), we have
$$\lambda_{\pm} = \frac{1}{2}\left[\langle APv, Pv\rangle + \langle APu, Pu\rangle \pm \sqrt{\left(\langle APv, Pv\rangle - \langle APu, Pu\rangle\right)^2 + 4\langle APv, Pu\rangle^2}\right] = \frac{1}{2}\left[\langle PAPv, v\rangle + \langle PAPu, u\rangle \pm \sqrt{\left(\langle PAPv, v\rangle - \langle PAPu, u\rangle\right)^2 + 4\langle PAPv, u\rangle^2}\right] = \frac{1}{2}\left[\langle Bv, v\rangle + \langle Bu, u\rangle \pm \sqrt{\left(\langle Bv, v\rangle - \langle Bu, u\rangle\right)^2 + 4\langle Bv, u\rangle^2}\right],$$
where
$$B = PAP = \begin{pmatrix} A_{22} & A_{12}^t \\ A_{12} & A_{11} \end{pmatrix}.$$
Note that $\sigma(A) = \sigma(B)$. Let
$$\tilde{R}_1 = \sum_{i=1}^{n/2} b_{ki} = \sum_{i=n/2+1}^{n} a_{\frac{n}{2}+k,\,i}.$$
Thus, $\tilde{R}_1$ is the sum of the $k$th row of $A_{22}$. Similarly,
$$\tilde{R}_2 = \sum_{i=n/2+1}^{n} b_{ki} = \sum_{i=1}^{n/2} a_{i,\,\frac{n}{2}+k}.$$
Thus, $\tilde{R}_2$ is the sum of the $k$th column of $A_{12}$. We may thus replace $S_{11}$ by $S_{22}$, $S_{22}$ by $S_{11}$, $R_1$ by $\tilde{R}_1$, $R_2$ by $\tilde{R}_2$, $D_u$ by $D_v$, $D_v$ by $D_u$, and $a_{kk}$ by $a_{\frac{n}{2}+k,\frac{n}{2}+k}$ in our derivations. Thus, from (27)–(29), we have
$$\tilde{G}_1(\alpha) = \frac{n^2}{4}\left[S_{12} + (\alpha - 1)\tilde{R}_2\right]\left[2(\alpha - 1)^2\left(a_{\frac{n}{2}+k,\frac{n}{2}+k} - \tilde{R}_1\right) + (\alpha - 1)\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{22}\right) + \left(n\tilde{R}_1 - 2S_{22}\right)\right]^2,$$
$$\tilde{G}_2(\alpha) = -\frac{n}{2}\left[n + 2(\alpha - 1)^2 + 4(\alpha - 1)\right]\left[2(\alpha - 1)^2\left(a_{\frac{n}{2}+k,\frac{n}{2}+k} - \tilde{R}_1\right) + (\alpha - 1)\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{22}\right) + n\tilde{R}_1 - 2S_{22}\right] \times \left[2(\alpha - 1)^2\tilde{R}_2 + (\alpha - 1)(\tilde{R}_2 + S_{12}) + S_{12} - \tfrac{n}{8}\tilde{R}_2\right] \times \left[(\alpha - 1)^2\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{11}\right) + 2(\alpha - 1)\left(n\tilde{R}_1 - 2S_{11}\right) + n(S_{22} - S_{11})\right],$$
$$\tilde{G}_3(\alpha) = -\frac{1}{2}\left[n + 2(\alpha - 1)^2 + 4(\alpha - 1)\right]^3\left[S_{12} + (\alpha - 1)\tilde{R}_2\right]\left[2(\alpha - 1)^2\tilde{R}_2 + (\alpha - 1)(\tilde{R}_2 + S_{12}) + S_{12} - \tfrac{n}{8}\tilde{R}_2\right]^2.$$
Similarly,
$$\tilde{F}(\alpha) = \tilde{G}_1(\alpha) + \tilde{G}_2(\alpha) + \tilde{G}_3(\alpha) = 0, \quad (31)$$
and
$$\tilde{\alpha}_0 = 1 - \frac{\tilde{F}(1)}{\tilde{F}'(1)} = 1 - \frac{\tilde{G}_1(1) + \tilde{G}_2(1) + \tilde{G}_3(1)}{\tilde{G}_1'(1) + \tilde{G}_2'(1) + \tilde{G}_3'(1)}, \quad (32)$$
where
$$\tilde{G}_1(1) = \frac{n^2}{4} S_{12}\left(n\tilde{R}_1 - 2S_{22}\right)^2,$$
$$\tilde{G}_2(1) = -\frac{n^3}{2}\left(n\tilde{R}_1 - 2S_{22}\right)\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{11}\right)(S_{22} - S_{11}),$$
$$\tilde{G}_3(1) = -\frac{n^3}{2} S_{12}\left(S_{12} - \tfrac{n}{8}\tilde{R}_2\right)^2,$$
$$\tilde{G}_1'(1) = \frac{n^2}{4}\left(n\tilde{R}_1 - 2S_{22}\right)\left[\tilde{R}_2\left(n\tilde{R}_1 - 2S_{22}\right) + 2\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{22}\right)S_{12}\right],$$
$$\tilde{G}_2'(1) = -\frac{n^2}{2}\Big[2\left(n\tilde{R}_1 - 2S_{22}\right)\left(S_{12} - \tfrac{n}{8}\tilde{R}_2\right)\left(n\tilde{R}_1 + 2S_{22} - 4S_{11}\right) + n(S_{22} - S_{11})\big(\left(n a_{\frac{n}{2}+k,\frac{n}{2}+k} - 2S_{22}\right)\left(S_{12} - \tfrac{n}{8}\tilde{R}_2\right) + \left(n\tilde{R}_1 - 2S_{22}\right)(\tilde{R}_2 + S_{12})\big)\Big],$$
$$\tilde{G}_3'(1) = -2n^2\left(S_{12} - \tfrac{n}{8}\tilde{R}_2\right)\Big[\left(S_{12} - \tfrac{n}{8}\tilde{R}_2\right)(12 S_{12} + n\tilde{R}_2) + 2n S_{12}(\tilde{R}_2 + S_{12})\Big].$$
Hence,
$$\lambda_{\pm} = \frac{1}{2}\left[\tilde{T}_1 + \tilde{T}_2 \pm \sqrt{(\tilde{T}_1 - \tilde{T}_2)^2 + 4\tilde{T}_3^2}\right],$$
where
$$\tilde{T}_1 D_v^2 = S_{22} + 2(\alpha - 1)\tilde{R}_1 + (\alpha - 1)^2 a_{\frac{n}{2}+k,\frac{n}{2}+k}, \qquad \tilde{T}_2 D_u^2 = S_{11}, \qquad \tilde{T}_3 D_v D_u = S_{12} + (\alpha - 1)\tilde{R}_2,$$
with the roles of $D_u$ and $D_v$ interchanged as indicated above, so that here $D_v = \sqrt{n/2 + \alpha^2 - 1}$ and $D_u = \sqrt{n/2}$.
The solution of $F(\alpha) = 0$ in (26) and $\tilde{F}(\alpha) = 0$ in (31) is easily effected by using Newton's method with the starting values $\alpha_0$ and $\tilde{\alpha}_0$ given in (30) and (32). This yields a single value of the parameter $\alpha$ that is used to optimize both $\lambda_+$ and $\lambda_-$ simultaneously using (14). However, this may result in the maximization of $\lambda_-$ and/or the minimization of $\lambda_+$, clearly a situation that we do not desire. Nevertheless, this routine is simple to implement and fully automatic. An alternative, more accurate approach is to minimize $\lambda_-(\alpha)$ and maximize $\lambda_+(\alpha)$ separately by setting
$$\lambda_-'(\alpha) = 0, \quad (33)$$
$$\lambda_+'(\alpha) = 0. \quad (34)$$
In this case, an arbitrarily chosen initial value of $\alpha$ can lead to divergence of the Newton routine. It may also lead to the maximization of $\lambda_-$ or the minimization of $\lambda_+$. Recall that it is the fact that $\alpha \in (-\infty, \infty)$ that complicates the issue. A workaround is to sketch the corresponding curves $\lambda_-(\alpha)$ and $\lambda_+(\alpha)$ to determine appropriate initial guesses. This is unfortunately not readily automated. However, the preceding process yields superior results, as we will illustrate. In order to deal with the case of odd $n$, we shall use a principal submatrix $\bar{A}$ of $A$, obtained by deleting the $p$th row and column of $A$, where $p \in \{1, 2, \ldots, n\}$. It is well known from the Cauchy interlacing theorem [16] that the eigenvalues of $\bar{A}$ interlace those of $A$, so that $\sigma(\bar{A}) \subset [\lambda_1, \lambda_n]$. In this case, we will be evaluating inner bounds for the submatrix, which are still inner bounds for the original matrix. The efficiency of this procedure is, however, restricted by the spectral distribution of $\bar{A}$. Another possibility is to pad the original matrix to even order by adding a row and column in position $p$, $p \in \{0, 1, 2, \ldots, n\}$, so that the extreme eigenvalues of $A$ are unaffected. Here, $p = 0$ means prepending the row and column, $p = 1$ means inserting the row and column immediately after row one and column one of $A$, while $p = n$ means appending the row and column to $A$. The case $p = 0$ is illustrated below:
$$\begin{pmatrix} \frac{\operatorname{tr}(A)}{n} & \mathbf{0}^t \\ \mathbf{0} & A \end{pmatrix}.$$
Here, $\operatorname{tr}(A)$ refers to the trace of $A$. The well-known inequality $\lambda_1 \le \operatorname{tr}(A)/n \le \lambda_n$ ensures that the original extreme eigenvalues are not affected. Instead of the trace, one could use any $a_{ii}$, since $\lambda_1 \le a_{ii} \le \lambda_n$, $i = 1, 2, \ldots, n$. Both devices are sketched in code below.
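```python
# Sketches (ours) of the two devices for odd n: deletion of a row/column and
# the p = 0 trace padding illustrated above.
import numpy as np

def delete_rc(A, p):
    """Principal submatrix of A with row and column p removed (0-based)."""
    keep = [i for i in range(A.shape[0]) if i != p]
    return A[np.ix_(keep, keep)]

def pad_even(A):
    """Prepend a row and column that are zero except for tr(A)/n on the
    diagonal; lambda_1 <= tr(A)/n <= lambda_n, so the extreme eigenvalues
    are unchanged."""
    n = A.shape[0]
    B = np.zeros((n + 1, n + 1))
    B[0, 0] = np.trace(A) / n
    B[1:, 1:] = A
    return B
```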
Example 1.
Here we consider the $8 \times 8$ matrix [17] given by
$$A = \begin{pmatrix}
8 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\
7 & 7 & 6 & 5 & 4 & 3 & 2 & 1 \\
6 & 6 & 6 & 5 & 4 & 3 & 2 & 1 \\
5 & 5 & 5 & 5 & 4 & 3 & 2 & 1 \\
4 & 4 & 4 & 4 & 4 & 3 & 2 & 1 \\
3 & 3 & 3 & 3 & 3 & 3 & 2 & 1 \\
2 & 2 & 2 & 2 & 2 & 2 & 2 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{pmatrix},$$
with
$$\lambda_1 = \frac{1}{2\left(1 - \cos\frac{15\pi}{17}\right)} \approx 0.258736, \qquad \lambda_8 = \frac{1}{2\left(1 - \cos\frac{\pi}{17}\right)} \approx 29.365298.$$
The solutions for $\lambda_-$ and $\lambda_+$ using (26) and (31) are shown in Table 1. From this, we can accept 2.328878 as an upper bound for $\lambda_1$ and 28.807700 as a lower bound for $\lambda_8$. These are acceptable given the relative ease of computation. In order to obtain a superior lower inner bound $\lambda_-$, we solve (33) for the optimal $\alpha$ and substitute into (14). These results are summarized in Table 2. Similarly, to obtain a superior upper inner bound $\lambda_+$, we solve (34) for the optimal $\alpha$ and substitute into (14). The corresponding results are summarized in Table 3. From these, we obtain the bound 0.394002 for $\lambda_1$ and the bound 28.882906 for $\lambda_8$. However, the latter process requires a good enough starting value for $\alpha$ in the corresponding Newton routine, failing which there is divergence, a value of $\alpha$ that maximizes the lower bound, a value of $\alpha$ that minimizes the upper bound, or convergence to a complex zero. These starting values are obtained graphically for different values of $k$, as depicted in Figure 1. We shall thus not generate these results for further examples.
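The example is easy to reproduce; the sketch below (ours) rebuilds the matrix from $a_{ij} = 9 - \max(i, j)$ and reuses the best_bounds helper from the previous section, so the figures need not match the Newton-based tables exactly.

```python
# Reconstruction of the Example 1 matrix, a_ij = 9 - max(i, j) (1-based),
# checked against the exact spectrum; best_bounds is the grid sketch above.
import numpy as np

n = 8
I, J = np.indices((n, n))
A = 8.0 - np.maximum(I, J)              # a_ij = 9 - max(i, j) with 0-based I, J

print(np.linalg.eigvalsh(A)[[0, -1]])   # approx. 0.258736 and 29.365298
for k in range(n // 2):
    print(k + 1, best_bounds(A, k))     # inner bounds per parameter position
```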
Example 2.
Here we consider the pentadiagonal matrix of odd order $n = 11$ [17] given by
$$A = \begin{pmatrix}
-1 & -2 &  1 &    &    &    &    &    &    &    &    \\
-2 &  0 & -2 &  1 &    &    &    &    &    &    &    \\
 1 & -2 &  0 & -2 &  1 &    &    &    &    &    &    \\
   &  1 & -2 &  0 & -2 &  1 &    &    &    &    &    \\
   &    &  1 & -2 &  0 & -2 &  1 &    &    &    &    \\
   &    &    &  1 & -2 &  0 & -2 &  1 &    &    &    \\
   &    &    &    &  1 & -2 &  0 & -2 &  1 &    &    \\
   &    &    &    &    &  1 & -2 &  0 & -2 &  1 &    \\
   &    &    &    &    &    &  1 & -2 &  0 & -2 &  1 \\
   &    &    &    &    &    &    &  1 & -2 &  0 & -2 \\
   &    &    &    &    &    &    &    &  1 & -2 & -1
\end{pmatrix},$$
with
$$\lambda_1 = -3, \qquad \lambda_{11} = \left(1 - 2\cos\frac{11\pi}{12}\right)^2 - 3 \approx 5.595754.$$
For Example 2, we merely omit the last row and last column of the matrix to obtain an even-order submatrix. However, omission of the $p$th row and column achieves a similar purpose, though with different results. From Table 4, we accept 0.089968 as an upper bound for $\lambda_1$ and 5.260788 as a lower bound for $\lambda_{11}$. We additionally summarize in Table 5 the best bounds obtained by deleting each of the other rows and corresponding columns. From Table 5, the best choices for the extreme inner bounds are obviously −0.066734 and 5.262994. In Table 6, we summarize the effect of padding the matrix to even order. Several results coincide, since the padding position does not affect the spectrum of the padded matrix: each padded matrix is similar to the others by a permutation similarity transformation. From this table, −0.107061 and 5.095542 are acceptable bounds.
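Assuming our reading of the matrix display above (zero diagonal except the corner entries −1, first off-diagonals −2, second off-diagonals 1), Example 2 can be reproduced as follows, reusing delete_rc and best_bounds from the earlier sketches.

```python
# Reconstruction of the Example 2 matrix under the stated assumption on the
# band pattern; the printed extreme eigenvalues confirm the reconstruction.
import numpy as np

n = 11
A = (np.diag(-2.0 * np.ones(n - 1), 1) + np.diag(-2.0 * np.ones(n - 1), -1)
     + np.diag(np.ones(n - 2), 2) + np.diag(np.ones(n - 2), -2))
A[0, 0] = A[-1, -1] = -1.0

print(np.linalg.eigvalsh(A)[[0, -1]])   # -3 and approx. 5.595754
B = delete_rc(A, n - 1)                 # drop the last row and column
for k in range(B.shape[0] // 2):
    print(k + 1, best_bounds(B, k))
```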
Example 3.
Here we consider the dense matrix $A$ of order 9 given by $A = RDR$, where $R = I - 2ww^t$, $w = \frac{1}{3}[1, 1, \ldots, 1]^t$, and $D$ is a chosen diagonal matrix. Since $R$ is an involution, $\sigma(A) = \sigma(D) = \{-10, 5, 6, 7, 8, 9, 10, 11, 20\}$. The results using deletion are presented in Table 7, from which the best bounds are −5.823999 and 12.713623. Results using padding are presented in Table 8, where the accepted bounds are 1.980210 and 12.506700. A similar pattern of results is obtained as for padding in Example 2. If padding is to be used, then it is only necessary to prepend or append the padding row and column.
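A sketch of Example 3 (ours), reusing pad_even and best_bounds from the earlier sketches; $R$ is a Householder involution, so $A = RDR$ has the same spectrum as $D$.

```python
# Example 3: A = R D R with R = I - 2ww^t an involution, so sigma(A) = sigma(D).
import numpy as np

n = 9
w = np.ones(n) / 3.0                      # ||w||_2 = 1
R = np.eye(n) - 2.0 * np.outer(w, w)      # R @ R = I
D = np.diag([-10.0, 5, 6, 7, 8, 9, 10, 11, 20])
A = R @ D @ R

print(np.linalg.eigvalsh(A)[[0, -1]])     # -10 and 20
B = pad_even(A)                           # pad the odd order 9 up to 10
for k in range(B.shape[0] // 2):
    print(k + 1, best_bounds(B, k))
```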
Example 4.
Here we consider the Hilbert matrix $A$ of order 10 given by
$$a_{ij} = \frac{1}{i + j - 1}, \qquad i, j = 1, 2, \ldots, 10.$$
This is a highly ill-conditioned matrix with $\lambda_1 \approx 0$ and $\lambda_{10} \approx 1.751920$.
The results are depicted in Table 9. The minimum upper bound for λ 1 is accepted as 0.098587, while the maximum lower bound for λ 10 is given by 1.563673.
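For completeness, a sketch of Example 4 (ours), reusing best_bounds from earlier; scipy.linalg.hilbert(10) would produce the same matrix.

```python
# Example 4: the order-10 Hilbert matrix, a classical ill-conditioned test case.
import numpy as np

n = 10
I, J = np.indices((n, n))
A = 1.0 / (I + J + 1.0)                  # a_ij = 1/(i + j - 1), 1-based i, j

print(np.linalg.eigvalsh(A)[[0, -1]])    # approx. 1.1e-13 and 1.751920
for k in range(n // 2):
    print(k + 1, best_bounds(A, k))
```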

3. Conclusions

We have provided a rather inexpensive procedure to obtain optimal inner bounds for the extreme eigenvalues of real symmetric matrices. These bounds depend only on a single parameter and arise from approximation over a two-dimensional subspace of $\mathbb{R}^n$. They can be used to locate the spectrum of a matrix on the real line more accurately. We encourage others to extend this idea of subspace approximation of eigenvalues to r-invertible matrices over antirings [18].

Author Contributions

Conceptualization, P.S. and V.S.; methodology, S.S.; software, P.S.; validation, V.S. and S.S.; formal analysis, P.S.; investigation, V.S.; writing—original draft preparation, P.S.; writing—review and editing, S.S.; project administration, V.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

The authors wish to thank the anonymous referees for their suggestions and comments.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Golub, G.; Uhlig, F. The QR algorithm: 50 years later–its genesis by John Francis and Vera Kublanovskaya, and subsequent developments. IMA J. Numer. Anal. 2009, 29, 467–485.
2. Coulson, C.A.; Haskins, P.J. On the relative accuracies of eigenvalue bounds. J. Phys. B At. Mol. Phys. 1973, 6, 1741–1750.
3. Kato, T. On the upper and lower bounds of eigenvalues. J. Phys. Soc. Jpn. 1949, 4, 334–339.
4. Ovtchinnikov, E.E. Lehmann bounds and eigenvalue error estimation. SIAM J. Numer. Anal. 2011, 49, 2078–2102.
5. Delves, L.M. On the Temple lower bound for eigenvalues. J. Phys. A Gen. Phys. 1972, 5, 1123–1129.
6. Brauer, A.; Mewborn, A.C. The greatest distance between two characteristic roots of a matrix. Duke Math. J. 1959, 26, 653–661.
7. Horn, R.A.; Johnson, C.R. Matrix Analysis; Cambridge University Press: Cambridge, UK, 2012; pp. 345–346.
8. Brauer, A. Limits for the characteristic roots of a matrix: VII. Duke Math. J. 1958, 25, 583–590.
9. Wolkowicz, H.; Styan, G.P.H. Bounds for eigenvalues using traces. Linear Algebra Appl. 1980, 29, 471–506.
10. Sharma, R.; Kumar, R.; Saini, R. Note on Bounds for Eigenvalues using Traces. arXiv 2014, arXiv:1409.0096.
11. Singh, P.; Singh, V.; Singh, S. New bounds for the maximal eigenvalues of positive definite matrices. Int. J. Appl. Math. 2022, 35, 685–691.
12. Singh, P.; Singh, S.; Singh, V. Results on Bounds of the Spectrum of Positive Definite Matrices by Projections. Aust. J. Math. Anal. Appl. 2023, 20, 1–10.
13. Singh, P.; Singh, S.; Singh, V. New Inner Bounds for the Extreme Eigenvalues of Real Symmetric Matrices. IAENG Int. J. Appl. Math. 2024, 54, 871–876.
14. Liao, W.; Long, P. Distribution of Eigenvalues and Upper Bounds of the Spread of Interval Matrices. Mathematics 2023, 11, 4032.
15. Yuan, Q.; Yang, Z. A Fast Algorithm for the Eigenvalue Bounds of a Class of Symmetric Tridiagonal Interval Matrices. AppliedMath 2023, 3, 90–97.
16. Hwang, S.G. Cauchy's Interlace Theorem for Eigenvalues of Hermitian Matrices. Am. Math. Mon. 2004, 111, 157–159.
17. Gregory, R.T.; Karney, D.L. A Collection of Matrices for Testing Computational Algorithms; Robert E. Krieger Publishing Company: Huntington, NY, USA, 1978; p. 57.
18. Wang, A.; Cheng, C.; Wang, L. On r-invertible matrices over antirings. Publ. Math. Debr. 2025, 106, 445–459.
Figure 1. Variation in λ− and λ+ with α for Example 1, for different positions k of α, from which starting values of α are chosen for Newton's routine. (a) k = 1, (b) k = 2, (c) k = 3, (d) k = 4, (e) k = 5, (f) k = 6, (g) k = 7, (h) k = 8.
Table 1. Solution of Example 1.

k   α₀, α̃₀     α          F(α), F̃(α)      λ−         λ+
1   0.826722   0.710321    1.86 × 10^−9    2.614485   27.601089
2   0.826197   0.690819   −1.86 × 10^−9    2.637621   27.642029
3   0.837001   0.644609    9.31 × 10^−10   2.700265   27.756653
4   0.865323   0.570751    4.66 × 10^−10   2.834633   28.043900
5   1.013646   0.555864   −3.49 × 10^−10   2.691661   27.428588
6   0.910219   0.433913   −6.40 × 10^−10   2.414374   27.600672
7   0.739681   0.296971   −2.04 × 10^−10   2.328878   28.018525
8   0.473212   0.154739   −2.47 × 10^−10   2.613206   28.807700
Table 2. Solution of Example 1 using (33).

k   α₀      α           λ−′(α)           λ−
1   −2.00   −1.998667    0.00            0.888395
2   −2.00   −2.426749   −1.33 × 10^−15   0.416356
3   −2.50   −3.242623    5.27 × 10^−16   0.490469
4   −4.00   −5.982560   −5.55 × 10^−17   0.668332
5   −3.00   −3.211123    0.00            0.639657
6   −2.00   −2.256362    1.70 × 10^−16   0.394002
7   −2.50   −2.527823    1.11 × 10^−16   0.425852
8   −4.00   −5.068167    0.00            0.543244
Table 3. Solution of Example 1 using (34).

k   α₀     α          λ+′(α)           λ+
1   1.00   1.126413    9.16 × 10^−16   28.368499
2   1.20   1.073185    6.25 × 10^−16   28.328146
3   1.00   0.976169   −6.23 × 10^−15   28.308683
4   0.75   0.835843    1.67 × 10^−16   28.428059
5   1.50   1.884316    3.89 × 10^−16   28.814222
6   1.00   1.290095   −6.07 × 10^−16   28.378167
7   0.75   0.788489   −3.89 × 10^−16   28.355043
8   0.00   0.356960   −1.67 × 10^−16   28.882906
Table 4. Solution for Example 2. Results for deletion of row and column 11.

k    α₀, α̃₀      α           F(α), F̃(α)      λ−         λ+
1     1.000000    0.345487    1.82 × 10^−12   3.497548   5.260788
2     0.902669    0.598199    0.00            3.255361   5.041520
3     1.164888    0.708619   −7.28 × 10^−12   3.262788   5.025014
4     1.050408    0.768007   −1.46 × 10^−11   3.431253   5.038625
5     2.694508    1.265825   −2.91 × 10^−11   3.291750   5.141396
6    −3.242201   −2.259370   −3.73 × 10^−9    0.089968   4.378438
7     0.696487    0.800915    3.64 × 10^−12   3.464532   5.046424
8     0.512421    0.786430    2.91 × 10^−11   3.371885   5.036684
9     0.627825    0.659950   −7.28 × 10^−12   3.347260   5.046034
10    1.073686    0.411161    3.64 × 10^−12   3.431177   5.198358
Table 5. Solution for Example 2. Best bounds from deletion of row and column p.

p    λ−          λ+
1    −0.066734   5.262994
2     1.922344   4.956660
3     0.001084   4.818508
4     2.070744   4.818508
5     2.046900   4.601543
6     3.995904   4.515387
7     2.025074   4.602222
8     2.070744   4.816596
9     0.190265   4.816596
10    2.061973   4.946304
11    0.089968   5.260788
Table 6. Solution for Example 2. Best bounds from padding to even order.

p    λ−          λ+
0     2.620531   5.095542
1     2.620531   5.095542
2     2.620531   5.095542
3     2.620531   5.095542
4     2.620531   5.095542
5     2.626640   5.095542
6    −0.107061   5.095352
7    −0.107061   5.095406
8    −0.107061   5.095406
9    −0.107061   5.095406
10   −0.107061   5.095406
11   −0.107061   5.095406
Table 7. Solution for Example 3. Best bounds from deletion of row and column p.

p   λ−          λ+
1    1.058718   10.592087
2    3.068791   11.066702
3    3.205324   11.194483
4    3.251441   11.345256
5    3.280746   11.517792
6    3.453696   11.541006
7    3.601736   11.588090
8    3.730502   11.680606
9   −5.823999   12.713623
Table 8. Solution for Example 3. Best bounds from padding to even order.

p   λ−         λ+
0   1.980210   11.600000
1   1.980210   11.600000
2   1.980210   11.600000
3   1.980210   11.600000
4   1.980210   11.600000
5   3.210315   12.506688
6   3.200135   12.506688
7   3.200043   12.506688
8   3.200003   12.506688
9   3.200486   12.506700
Table 9. Solution for Example 4.

k    α₀, α̃₀     α          F(α), F̃(α)      λ−         λ+
1    2.416906   1.122961    0.00            0.114456   1.561871
2    0.812592   0.730081   −2.73 × 10^−12   0.107768   1.488724
3    0.854452   0.563215    0.00            0.113670   1.509045
4    0.902925   0.460570   −2.27 × 10^−13   0.118454   1.537182
5    0.939411   0.389961    1.14 × 10^−13   0.121675   1.563673
6    0.862024   0.392434    3.77 × 10^−13   0.103398   1.490751
7    0.804749   0.338647    9.95 × 10^−14   0.100977   1.495043
8    0.755895   0.301870   −2.13 × 10^−13   0.099599   1.499771
9    0.713507   0.272798   −2.10 × 10^−13   0.098871   1.504251
10   0.676250   0.248981   −3.48 × 10^−13   0.098587   1.508395
