
Rigorous Asymptotic Perturbation Bounds for Hermitian Matrix Eigendecompositions

by Mihail Konstantinov 1 and Petko Hristov Petkov 2,*

1 Department of Mathematics, University of Architecture, Civil Engineering and Geodesy, 1064 Sofia, Bulgaria
2 Department of Engineering Sciences, Bulgarian Academy of Sciences, 1040 Sofia, Bulgaria
* Author to whom correspondence should be addressed.
Computation 2025, 13(10), 237; https://doi.org/10.3390/computation13100237
Submission received: 12 September 2025 / Revised: 29 September 2025 / Accepted: 5 October 2025 / Published: 7 October 2025
(This article belongs to the Section Computational Engineering)

Abstract

In this paper, we present rigorous asymptotic componentwise perturbation bounds for regular Hermitian indefinite matrix eigendecompositions, obtained via the method of splitting operators. The asymptotic bounds are derived from exact nonlinear expressions for the perturbations and allow each entry of every matrix eigenvector to be bounded in the case of distinct eigenvalues. In contrast to the perturbation analysis of the Schur form of a nonsymmetric matrix, the bounds obtained here do not rely on the Kronecker product, which significantly reduces both memory requirements and computational cost. This enables efficient sensitivity analysis of high-order problems. The eigenvector perturbation bounds are further applied to estimate the angles between perturbed and unperturbed one-dimensional invariant subspaces spanned by the corresponding eigenvectors. To reduce conservatism in the case of high-order problems, we propose the use of probabilistic perturbation bounds based on the Markov inequality. The analysis is illustrated by two numerical experiments of order 5000.

1. Introduction

The perturbation analysis of eigenvalues in real symmetric and complex Hermitian matrices is a well-established topic in matrix analysis. It has been presented in depth in foundational works such as those of Kato [1], Wilkinson [2], Parlett [3], Stewart and Sun [4], Bhatia [5], Stewart [6], and Chatelin [7]; in the surveys of Sun [8] and Li [9]; and in numerous research articles, including [10,11,12,13,14,15] and the references therein. The perturbation analysis of Hermitian decompositions is generally simpler than that of nonsymmetric matrix decompositions (see [16] for a survey of the nonsymmetric case). The main objective of such an analysis is to establish perturbation bounds for the eigenvalues and eigenvectors of Hermitian matrices subject to either Hermitian or non-Hermitian perturbations; see [17] for the interesting case of non-Hermitian perturbations.
Depending on the goal, perturbation analysis may be a priori, where one seeks to predict changes in the eigendecomposition before perturbation is applied, or a posteriori, where errors in eigenvalues and eigenvectors are estimated based on computed perturbed quantities [7] (Ch. 4). In this work, we focus on the former; the latter is typically related to the derivation of residual bounds, as discussed by Davis and Kahan [18], Parlett [3] (Ch. 11), Stewart and Sun [4] (Ch. 5), Chatelin [7] (Ch. 4), and Nakatsukasa [19]. Furthermore, perturbation analysis may be asymptotic (local), where the effect of vanishing perturbations is investigated, or global, where guaranteed bounds are sought for sufficiently large perturbations. In most studies, emphasis is placed on eigenvalue perturbations, while eigenvector perturbation analysis is often reduced to sensitivity analysis of the associated invariant subspace. In several cases, this sensitivity analysis is limited to bounding the distance between the unperturbed and perturbed subspaces [13] or estimating the angles between these subspaces [6] (Ch. 4), [4] (Ch. 5), [8,9]. However, in some applications, it is necessary to obtain bounds on the individual elements of specific eigenvectors, not merely on the sensitivity of the corresponding invariant subspace. This necessitates a componentwise perturbation analysis [20] of the whole matrix eigensystem, consisting of eigenvalues and corresponding eigenvectors.
In this paper, we focus on performing such a componentwise perturbation analysis for indefinite Hermitian matrices, allowing us to bound every entry of each corresponding eigenvector. This analysis is carried out only for regular eigenvalue problems, i.e., perturbation problems involving matrices with distinct eigenvalues. Singular problems, corresponding to multiple eigenvalues, require different techniques (see [21,22]). Regular problems, on the other hand, can be addressed in a simple and efficient way using the method of splitting operators [23], which has already been applied to several other matrix perturbation problems. Importantly, the bounds obtained for the eigenvector entries via this method can also be used to derive bounds on the sensitivity of the invariant subspaces spanned by the corresponding eigenvectors.
In principle, perturbation analysis of Hermitian decompositions can be performed using techniques developed for non-Hermitian problems. However, such approaches often require substantially more memory and computational effort. For example, the method for perturbation analysis of the Schur form proposed in [24] requires the construction of large matrices via the Kronecker product. The method presented here avoids the use of such products, enabling the analysis of significantly larger problems.
Consider a Hermitian matrix $A \in \mathbb{C}^{n \times n}$. Using a unitary transformation matrix $U \in \mathbb{C}^{n \times n}$, the matrix $A$ can be reduced to the diagonal form

$\Lambda = U^H A U = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_n)$,  (1)

where $\lambda_1, \lambda_2, \ldots, \lambda_n$ are the eigenvalues of $A$, while $U = [U_1, U_2, \ldots, U_n]$ is the matrix of the corresponding orthonormal eigenvectors. Note that the eigenvalues of a Hermitian matrix are always real. If $A$ is real, then $U$ is orthogonal [25] (Ch. 8), [3] (Ch. 1). The diagonal decomposition (1) is also referred to as the symmetric Schur decomposition. Without loss of generality, we assume that the eigenvalues of $A$ are ordered so that $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$.
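The diagonal decomposition (1) can be illustrated with a minimal NumPy sketch (ours, not the authors' MATLAB code); `eigh` returns ascending eigenvalues, so we reorder them decreasingly as assumed in the text:

```python
import numpy as np

# Symmetric Schur (diagonal) decomposition Lambda = U^H A U of a real
# symmetric matrix; an illustrative sketch using numpy.linalg.eigh.
rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # real symmetric test matrix

lam, U = np.linalg.eigh(A)             # eigh returns ascending eigenvalues
idx = np.argsort(lam)[::-1]            # reorder so lambda_1 >= ... >= lambda_n
lam, U = lam[idx], U[:, idx]

Lambda = U.T @ A @ U                   # should equal diag(lam)
```

For a real symmetric $A$, the computed $U$ is orthogonal and $U^T A U$ is diagonal to working precision.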
If the matrix $A$ is subject to a Hermitian perturbation $\delta A \in \mathbb{C}^{n \times n}$, $\delta A^H = \delta A$, then instead of the decomposition (1), we obtain the perturbed diagonal decomposition

$\tilde{\Lambda} = \tilde{U}^H \tilde{A} \tilde{U} = \mathrm{diag}(\tilde{\lambda}_1, \tilde{\lambda}_2, \ldots, \tilde{\lambda}_n)$,  (2)

where $\tilde{\lambda}_1 \geq \tilde{\lambda}_2 \geq \cdots \geq \tilde{\lambda}_n$ are the perturbed eigenvalues and $\tilde{U} = U + \delta U = [\tilde{U}_1, \tilde{U}_2, \ldots, \tilde{U}_n]$ is the matrix of the perturbed eigenvectors.
The objective of the perturbation analysis is to determine bounds on the eigenvalue perturbations

$\delta\Lambda = \mathrm{diag}(\delta\lambda_j), \quad \delta\lambda_j = \tilde{\lambda}_j - \lambda_j, \quad j = 1, 2, \ldots, n$

and on the corresponding eigenvector perturbations $\delta U_j = \tilde{U}_j - U_j$. The perturbation problem for eigendecompositions of Hermitian matrices is regular if and only if the eigenvalues of $A$ are distinct. Another important objective is to determine the sensitivity of the one-dimensional invariant subspaces spanned by the eigenvectors $U_j$, $j = 1, 2, \ldots, n$. According to the Wielandt–Hoffman theorem [25] (Ch. 8), the eigenvalues of a Hermitian matrix are always well conditioned, since their perturbations satisfy

$|\tilde{\lambda}_j - \lambda_j| \leq \|\delta A\|_2$.

However, the eigenvectors may be highly sensitive to perturbations in $A$, depending on the separation of the eigenvalues; see, e.g., [12].
According to the method of splitting operators [23], it is possible to derive separate perturbation bounds for $|\delta U|$ and $|\delta\Lambda|$, which enables the construction of tighter perturbation bounds for the eigenvectors. To this end, it is convenient to introduce the matrix

$\delta W = U^H \delta U = \begin{bmatrix} U_1^H \delta U_1 & U_1^H \delta U_2 & \cdots & U_1^H \delta U_n \\ U_2^H \delta U_1 & U_2^H \delta U_2 & \cdots & U_2^H \delta U_n \\ \vdots & \vdots & \ddots & \vdots \\ U_n^H \delta U_1 & U_n^H \delta U_2 & \cdots & U_n^H \delta U_n \end{bmatrix} \in \mathbb{C}^{n \times n}$,

which is unitarily equivalent to the unknown matrix $\delta U$. By estimating the entries of this matrix, sharp bounds on the entries of $\delta U$ can be obtained thanks to the orthogonality of $U$.
We further introduce the perturbation parameter vector

$x = \mathrm{vec}(\mathrm{Low}(\delta W)) = \big[\, U_2^H \delta U_1, \ldots, U_n^H \delta U_1 \mid U_3^H \delta U_2, \ldots, U_n^H \delta U_2 \mid \cdots \mid U_n^H \delta U_{n-1} \,\big]^T \in \mathbb{C}^{\nu}, \quad \nu = n(n-1)/2$,

whose components are the entries of the strictly lower triangular part of $\delta W$. This vector is a convenient tool for bounding various quantities arising in the perturbation analysis of $A$.
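The construction of $\delta W$ and of $x = \mathrm{vec}(\mathrm{Low}(\delta W))$ can be sketched as follows (the helper name `split_parameters` is ours; this is an illustration, not the authors' implementation):

```python
import numpy as np

# Sketch: form deltaW = U^H * deltaU and the perturbation parameter vector
# x = vec(Low(deltaW)), stacking the strictly lower triangular entries
# column by column (nu = n*(n-1)/2 of them).
def split_parameters(U, dU):
    n = U.shape[0]
    dW = U.conj().T @ dU
    # column-major enumeration of the strictly lower triangular part
    x = np.concatenate([dW[j + 1:, j] for j in range(n - 1)])
    return dW, x

rng = np.random.default_rng(0)
n = 5
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # an orthogonal U
dU = 1e-8 * rng.standard_normal((n, n))            # a small perturbation
dW, x = split_parameters(Q, dU)
```

The column-major stacking reproduces the enumeration used in the text, with the pair $(i, j)$, $1 \leq j < i \leq n$, mapped to position $\ell = i + (j-1)n - j(j+1)/2$.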
The structure of the paper is as follows. In Section 2, we derive asymptotic bounds on the perturbation parameters, which are then applied to the perturbation analysis of eigenvalues and eigenvectors of a symmetric matrix. The eigenvector perturbation bounds are subsequently used to obtain asymptotic bounds on the perturbations of the invariant subspaces spanned by the eigenvectors. Section 3 briefly presents results concerning the determination of realistic probabilistic bounds on the entries of a random matrix using its Frobenius norm. We describe how such bounds can be incorporated into the derivation of probabilistic perturbation bounds for the eigenvector entries, eigenvalues, and invariant subspaces. Section 4 illustrates the theoretical results by two numerical experiments with matrices of order 5000, demonstrating the performance of the derived bounds. Finally, Section 5 contains concluding remarks.
All computations in this paper were performed using MATLAB® Version 9.9 (R2020b) [26] with IEEE double-precision arithmetic on a machine equipped with a 12th Gen Intel(R) Core(TM) i5-1240P CPU running at 1.70 GHz and 32 GB of RAM. M-files implementing the perturbation bounds described herein are available from the authors upon request.

2. Asymptotic Perturbation Bounds

2.1. Asymptotic Bounds for the Perturbation Parameters

Theorem 1. 
Let $A \in \mathbb{C}^{n \times n}$ be a Hermitian matrix ($A = A^H$) with distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$, decomposed as $A = U \Lambda U^H$, where $U$ is unitary and $\Lambda$ is diagonal. Assume that $A$ is perturbed by a Hermitian perturbation $\delta A$ so that

$\tilde{A} = A + \delta A = \tilde{U} \tilde{\Lambda} \tilde{U}^H$.

Denote by $F = -U^H \delta A\, U$ the transformed perturbation matrix, and construct the vector

$f = \mathrm{vec}(\mathrm{Low}(F)) \in \mathbb{C}^{\nu}, \quad f_\ell = -U_i^H \delta A\, U_j, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$.

Then, the perturbation parameters

$x_\ell = U_i^H \delta U_j, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$

satisfy the equation

$M x = f + \Delta x$,  (3)

where $M \in \mathbb{C}^{\nu \times \nu}$ is a diagonal matrix of the form

$M = \mathrm{diag}(\mu_{21}, \mu_{31}, \ldots, \mu_{n1}, \mu_{32}, \mu_{42}, \ldots, \mu_{n2}, \ldots, \mu_{n,n-1})$

with $\mu_{ij} = \lambda_i - \lambda_j$, and $\Delta x \in \mathbb{C}^{\nu}$ is a vector containing second-order terms in $\delta U_i$, $\delta U_j$.
Proof. 
From (1) and (2), we have

$\tilde{U}_i^H (A + \delta A) \tilde{U}_j = U_i^H A U_j = 0, \quad j = 1, 2, \ldots, n-1, \quad i = j+1, j+2, \ldots, n$.

Hence,

$U_i^H A\, \delta U_j + \delta U_i^H A\, U_j + \delta U_i^H A\, \delta U_j = -\tilde{U}_i^H \delta A\, \tilde{U}_j$.  (4)

From the diagonal decomposition (1), we have

$U_i^H A = \lambda_i U_i^H, \quad i = 1, 2, \ldots, n, \qquad A U_j = \lambda_j U_j, \quad j = 1, 2, \ldots, n$.  (5)

Substituting these expressions into (4) gives

$\lambda_i U_i^H \delta U_j + \lambda_j \delta U_i^H U_j + \delta U_i^H A\, \delta U_j = -\tilde{U}_i^H \delta A\, \tilde{U}_j$.  (6)

Since the matrices $U$ and $\tilde{U}$ are unitary, it follows that

$U^H \delta U = -\delta U^H U - \delta U^H \delta U$,  (7)

and therefore,

$\delta U_i^H U_j = -U_i^H \delta U_j - \delta U_i^H \delta U_j, \quad i = 1, 2, \ldots, n-1, \quad j = i, i+1, \ldots, n$.

Substituting this expression for $\delta U_i^H U_j$ into (6), we obtain

$(\lambda_i - \lambda_j)\, U_i^H \delta U_j = \lambda_j \delta U_i^H \delta U_j - \delta U_i^H A\, \delta U_j - \tilde{U}_i^H \delta A\, \tilde{U}_j$.  (8)

Equivalently,

$(\lambda_i - \lambda_j)\, U_i^H \delta U_j = \lambda_j \delta U_i^H \delta U_j - \delta U_i^H A\, \delta U_j - U_i^H \delta A\, U_j - U_i^H \delta A\, \delta U_j - \delta U_i^H \delta A\, U_j - \delta U_i^H \delta A\, \delta U_j$.  (9)

Expression (9) can be rewritten as a system of $\nu = n(n-1)/2$ nonlinear algebraic equations for the unknown quantities

$x_\ell = U_i^H \delta U_j, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$

in the form (3), where the component $\Delta x_\ell$, $\ell = i + (j-1)n - j(j+1)/2$, of the nonlinear term $\Delta x$ is given by

$\Delta x_\ell = \lambda_j \delta U_i^H \delta U_j - \delta U_i^H A\, \delta U_j - U_i^H \delta A\, \delta U_j - \delta U_i^H \delta A\, U_j - \delta U_i^H \delta A\, \delta U_j$.
We note that solving the system of Equations (3) does not require the explicit formation of the matrix $M$. In particular, the asymptotic bound $x_\ell^{lin}$ in the inequality

$|x_\ell| \leq x_\ell^{lin}, \quad \ell = 1, 2, \ldots, \nu$

can be obtained directly from the expression

$x_\ell^{lin} = \|\delta A\|_F / |\lambda_i - \lambda_j|, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$.  (11)
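The bound (11) can be computed and checked numerically without ever forming $M$; the sketch below (with hypothetical helper names, written in NumPy rather than the authors' MATLAB) compares the exact perturbation parameters with the asymptotic bounds for a small random symmetric problem:

```python
import numpy as np

# Illustrative check of x_l^lin = ||dA||_F / |lam_i - lam_j| against the
# exact parameters x_l = U_i^H * dU_j, enumerated column by column.
def x_lin_bound(lam, dA):
    n = lam.size
    nF = np.linalg.norm(dA, 'fro')
    return np.concatenate([nF / np.abs(lam[j + 1:] - lam[j])
                           for j in range(n - 1)])

rng = np.random.default_rng(1)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)              # tiny symmetric perturbation

lam, U = np.linalg.eigh(A)
lamt, Ut = np.linalg.eigh(A + dA)
Ut *= np.sign(np.sum(U * Ut, axis=0))  # resolve eigenvector sign ambiguity
dW = U.T @ (Ut - U)
x = np.concatenate([dW[j + 1:, j] for j in range(n - 1)])
```

For a vanishingly small $\delta A$, every $|x_\ell|$ should fall below the corresponding bound, since $|f_\ell| \leq \|\delta A\|_F$ and the neglected terms are of second order.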
Example 1.
Consider the $5 \times 5$ matrix

$A = \begin{bmatrix} 2.401616 & 1.198384 & 0.398384 & 0.402384 & 0.398424 \\ 1.198384 & 2.201616 & 0.001616 & 0.002384 & 0.001576 \\ 0.398384 & 0.001616 & 1.801616 & 0.797616 & 0.801576 \\ 0.402384 & 0.002384 & 0.797616 & 1.803616 & 0.797576 \\ 0.398424 & 0.001576 & 0.801576 & 0.797576 & 1.801636 \end{bmatrix}$.

The eigenvalues of this matrix are

$\lambda_1 = 4.0, \quad \lambda_2 = 3.0, \quad \lambda_3 = 1.01, \quad \lambda_4 = 1.0001, \quad \lambda_5 = 1.0$.

Note the closeness of the last three eigenvalues, which suggests that the corresponding eigenvectors may be ill conditioned.

The chosen perturbation matrix is

$\delta A = 10^{-9} \times (\delta A_0 + \delta A_0^T)$,

where $\delta A_0$ is a matrix whose elements are normally distributed random numbers.

For this perturbation problem, the matrix $M^{-1}$, which determines the perturbation parameter vector $x$ in (3), has a 2-norm equal to $1 \times 10^4$, confirming that the problem is relatively ill conditioned.

The exact perturbation parameters $x_\ell$, $\ell = 1, 2, \ldots, \nu$, and their asymptotic approximations $x_\ell^{lin}$, computed using (11), are shown to eight decimal digits in Table 1. The magnitudes of the corresponding elements of the two vectors show good agreement. The discrepancies between the values of $x_\ell^{lin}$ and $x_\ell$ arise from bounding the entries of the vector $f$ with $\|\delta A\|_F$ and from neglecting the nonlinear term $\Delta x$.

2.2. Asymptotic Componentwise Eigenvector Bounds

Theorem 2. 
Under the assumptions of Theorem 1, a strict asymptotic bound on the perturbation of each eigenvector $U_j$, $j = 1, 2, \ldots, n$, of $A$ under a perturbation $\delta A$ is given by

$|\tilde{U}_j - U_j| \preceq \delta U_j^{lin} = |U|\, \delta W_j^{lin}$,

where

$\delta W^{lin} = \begin{bmatrix} 0 & x_1^{lin} & x_2^{lin} & \cdots & x_{n-2}^{lin} & x_{n-1}^{lin} \\ x_1^{lin} & 0 & x_n^{lin} & \cdots & x_{2n-4}^{lin} & x_{2n-3}^{lin} \\ x_2^{lin} & x_n^{lin} & 0 & \cdots & x_{3n-7}^{lin} & x_{3n-6}^{lin} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n-2}^{lin} & x_{2n-4}^{lin} & x_{3n-7}^{lin} & \cdots & 0 & x_{\nu}^{lin} \\ x_{n-1}^{lin} & x_{2n-3}^{lin} & x_{3n-6}^{lin} & \cdots & x_{\nu}^{lin} & 0 \end{bmatrix} \in \mathbb{C}^{n \times n}$,

and the elements of the vector $x^{lin} = [x_1^{lin}, x_2^{lin}, \ldots, x_{\nu}^{lin}]^T$ are given by

$x_\ell^{lin} = \|\delta A\|_F / |\lambda_i - \lambda_j|, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$.
Proof. 
Consider the auxiliary matrix introduced above,

$\delta W = U^H \delta U := [\delta W_1, \delta W_2, \ldots, \delta W_n], \quad \delta W_j \in \mathbb{C}^n$.

As noted earlier, the strictly lower triangular part of this matrix contains the elements

$U_i^H \delta U_j, \quad j = 1, 2, \ldots, n-1, \quad i = j+1, j+2, \ldots, n$,

which can be replaced by the corresponding elements $x_\ell$, $\ell = i + (j-1)n - j(j+1)/2$, of the vector $x$. The strictly upper triangular part of $\delta W$ consists of the elements

$U_i^H \delta U_j, \quad i = 1, 2, \ldots, n-1, \quad j = i+1, i+2, \ldots, n$,

which, according to the unitarity condition (7), can be represented as

$U_i^H \delta U_j = -\delta U_i^H U_j - \delta U_i^H \delta U_j$

or

$U_i^H \delta U_j = -\overline{U_j^H \delta U_i} - \delta U_i^H \delta U_j$.

Here, the term $\overline{U_j^H \delta U_i}$, for $j > i$, is the conjugate of the element $x_\ell$. Hence, the matrix $\delta W$ can be written as

$\delta W = \delta V - \delta D - \delta Y$,  (13)

where

$\delta V = \begin{bmatrix} 0 & -\bar{x}_1 & -\bar{x}_2 & \cdots & -\bar{x}_{n-1} \\ x_1 & 0 & -\bar{x}_n & \cdots & -\bar{x}_{2n-3} \\ x_2 & x_n & 0 & \cdots & -\bar{x}_{3n-6} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{n-1} & x_{2n-3} & x_{3n-6} & \cdots & 0 \end{bmatrix} \in \mathbb{C}^{n \times n}$,

and the matrices

$\delta D = \mathrm{diag}\big(\delta U_1^H \delta U_1 / 2,\; \delta U_2^H \delta U_2 / 2,\; \ldots,\; \delta U_n^H \delta U_n / 2\big) \in \mathbb{C}^{n \times n}$,

$\delta Y = \begin{bmatrix} 0 & \delta U_1^H \delta U_2 & \delta U_1^H \delta U_3 & \cdots & \delta U_1^H \delta U_n \\ 0 & 0 & \delta U_2^H \delta U_3 & \cdots & \delta U_2^H \delta U_n \\ \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 & \delta U_{n-1}^H \delta U_n \\ 0 & 0 & \cdots & 0 & 0 \end{bmatrix} \in \mathbb{C}^{n \times n}$

contain second-order terms in $\delta U_j$, $j = 1, 2, \ldots, n$. For definiteness, we assume that $\tilde{U}$ is chosen such that the diagonal elements of $\delta W$ are real, so that

$U_i^H \delta U_i + \overline{U_i^H \delta U_i} = 2\, U_i^H \delta U_i = -\delta U_i^H \delta U_i$,

since condition (12) does not impose any restriction on the imaginary part of $U_i^H \delta U_i$.

According to (13), the matrix $|\delta W|$ can be estimated as

$|\delta W| \preceq |\delta V| + \delta W'$,

where

$\delta W' = |\delta D| + |\delta Y|$

contains second-order terms in the elements of $x$. Thus, an asymptotic (linear) approximation of the matrix $|\delta U|$ can be obtained as

$|\delta U| \preceq |U|\, |U^H \delta U| \approx |U|\, \delta W^{lin} = \delta U^{lin}$.  (15)

Note that these bounds are also valid for real symmetric matrices.
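The componentwise bound (15) is easy to assemble once the eigenvalue gaps are known; the following NumPy sketch (our illustration, with assumed helper names) builds $\delta W^{lin}$ and $\delta U^{lin} = |U|\,\delta W^{lin}$ and checks it entrywise for a tiny perturbation:

```python
import numpy as np

# Sketch of the componentwise eigenvector bound dU_lin = |U| * dW_lin, where
# dW_lin is symmetric, has zero diagonal, and off-diagonal entries
# ||dA||_F / |lam_i - lam_j|.
def eigvec_bound(lam, U, dA):
    nF = np.linalg.norm(dA, 'fro')
    gaps = np.abs(lam[:, None] - lam[None, :])
    np.fill_diagonal(gaps, np.inf)      # produces a zero diagonal in dW_lin
    dW_lin = nF / gaps
    return np.abs(U) @ dW_lin

rng = np.random.default_rng(2)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)

lam, U = np.linalg.eigh(A)
lamt, Ut = np.linalg.eigh(A + dA)
Ut *= np.sign(np.sum(U * Ut, axis=0))   # resolve eigenvector sign ambiguity

dU_lin = eigvec_bound(lam, U, dA)
```

For an asymptotically small $\delta A$, the entries of $|\tilde{U} - U|$ should not exceed the corresponding entries of $\delta U^{lin}$ beyond second-order effects.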
Example 2. 
For the same matrix $A$ and perturbation $\delta A$ as in Example 1, the absolute values of the exact changes of the entries of the matrix $U$ and their linear approximations, obtained using (15), are given, respectively, by

$|\delta U| = 10^{-4} \times \begin{bmatrix} 0.0000045 & 0.0000028 & 0.0025383 & 0.0928278 & 0.0953600 \\ 0.0000037 & 0.0000020 & 0.0025204 & 0.0928299 & 0.0953488 \\ 0.0000025 & 0.0000070 & 0.0010687 & 0.0928181 & 0.1381715 \\ 0.0000016 & 0.0000139 & 0.0025323 & 0.0942843 & 0.0904961 \\ 0.0000071 & 0.0000066 & 0.0023290 & 0.1407080 & 0.0953601 \end{bmatrix}$

and

$\delta U^{lin} = 10^{-4} \times \begin{bmatrix} 0.0000879 & 0.0001319 & 0.0088754 & 0.4437818 & 0.4438262 \\ 0.0001099 & 0.0001099 & 0.0088791 & 0.4437855 & 0.4438298 \\ 0.0000952 & 0.0001209 & 0.0110648 & 0.4437745 & 0.6634911 \\ 0.0000953 & 0.0001210 & 0.0088680 & 0.4459712 & 0.4460378 \\ 0.0000952 & 0.0001209 & 0.0110869 & 0.6634467 & 0.4438189 \end{bmatrix}$.

For all $i, j = 1, 2, \ldots, 5$, it is verified that $|\delta U_{i,j}| < \delta U_{i,j}^{lin}$.

2.3. Eigenvalue Sensitivity

For the changes in the elements of the perturbed diagonal matrix $\tilde{\Lambda}$, we have

$\delta\lambda_i = \tilde{\lambda}_i - \lambda_i = \tilde{U}_i^H (A + \delta A) \tilde{U}_i - U_i^H A U_i, \quad i = 1, 2, \ldots, n$.

Hence,

$\delta\lambda_i = U_i^H A\, \delta U_i + \delta U_i^H A\, U_i + \delta U_i^H A\, \delta U_i + U_i^H \delta A\, U_i + U_i^H \delta A\, \delta U_i + \delta U_i^H \delta A\, U_i + \delta U_i^H \delta A\, \delta U_i$.

Thus, we obtain

$\delta\lambda_i = U_i^H \delta A\, U_i + \delta_i^d, \quad i = 1, 2, \ldots, n$,

where, based on (5),

$\delta_i^d = \lambda_i (U_i^H \delta U_i + \delta U_i^H U_i) + \delta U_i^H A\, \delta U_i + U_i^H \delta A\, \delta U_i + \delta U_i^H \delta A\, U_i + \delta U_i^H \delta A\, \delta U_i$.

According to (7), it holds that

$\delta U_i^H U_j + U_i^H \delta U_j = -\delta U_i^H \delta U_j$,

which, for $j = i$, gives

$\delta_i^d = -\lambda_i\, \delta U_i^H \delta U_i + \delta U_i^H A\, \delta U_i + U_i^H \delta A\, \delta U_i + \delta U_i^H \delta A\, U_i + \delta U_i^H \delta A\, \delta U_i$.  (19)

The quantity $\delta_i^d$ contains second-order terms in $\delta U_i$.

If the perturbation $\delta A$ is known, expression (19) makes it possible to bound $\delta_i^d$ using the eigenvector bound (15).

In the asymptotic eigenvalue analysis, the higher-order terms are neglected, and one has

$\delta\lambda_i^{lin} = U_i^H \delta A\, U_i, \quad i = 1, 2, \ldots, n$.

Hence,

$|\delta\lambda_i| \leq \delta\lambda^{lin} = \|\delta A\|_2, \quad i = 1, 2, \ldots, n$,

which reduces to the well-known corollary of the Wielandt–Hoffman theorem [25] (Ch. 8).
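The eigenvalue bound $|\delta\lambda_i| \leq \|\delta A\|_2$ can be illustrated directly (a NumPy sketch of ours, not the authors' code); since the eigenvalues are well conditioned, the bound holds for every index:

```python
import numpy as np

# Numerical illustration of |lamt_i - lam_i| <= ||dA||_2 for a real
# symmetric matrix under a symmetric perturbation.
rng = np.random.default_rng(3)
n = 50
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
dA0 = rng.standard_normal((n, n))
dA = 1e-6 * (dA0 + dA0.T)

lam = np.linalg.eigvalsh(A)             # ascending in both calls, so the
lamt = np.linalg.eigvalsh(A + dA)       # i-th values correspond
bound = np.linalg.norm(dA, 2)           # spectral norm of the perturbation
```

Both spectra are returned in ascending order, so the componentwise comparison pairs corresponding eigenvalues.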

2.4. Sensitivity of One-Dimensional Invariant Subspaces

The estimate of the eigenvector perturbation $\delta U_j$ can be used to derive a bound on the sensitivity of the one-dimensional (simple) invariant subspace associated with the eigenvector $U_j$.

Consider the one-dimensional invariant subspace $\mathcal{X}_j = \mathcal{R}(U_j)$, $j = 1, 2, \ldots, n$. The sensitivity of this subspace is measured by the angle $\Theta$ between the perturbed and unperturbed invariant subspaces. Since

$\tilde{U}_j = U_j + \delta U_j$,

we have that [6,27] (Ch. 4)

$\sin(\Theta(\tilde{\mathcal{X}}_j, \mathcal{X}_j)) = \|U_{\perp j}^H \tilde{U}_j\|_2 = \|U_{\perp j}^H \delta U_j\|_2$,  (20)

where

$U_{\perp j} = [U_1, U_2, \ldots, U_{j-1}, U_{j+1}, \ldots, U_n] \in \mathbb{C}^{n \times (n-1)}$

is the orthogonal complement of $U_j$, so that $U_{\perp j}^H U_j = 0_{(n-1) \times 1}$. Note that, in (20), we prefer to use the expression involving $\sin(\Theta)$ rather than $\cos(\Theta)$, since, for small $\Theta$, the sine is well conditioned and the cosine is not.

Equation (20) shows that the sensitivity of the one-dimensional invariant subspace $\mathcal{X}_j$ is directly related to the perturbation parameters $x_\ell = U_i^H \delta U_j$. Consequently, if bounds on the perturbation parameters are known, the sensitivity of all invariant subspaces can be estimated. More specifically, we have

$U_{\perp 1}^H \delta U_1 = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \end{bmatrix}, \quad U_{\perp 2}^H \delta U_2 = \begin{bmatrix} \bar{x}_1 \\ x_n \\ \vdots \\ x_{2n-3} \end{bmatrix}, \quad \ldots, \quad U_{\perp n}^H \delta U_n = \begin{bmatrix} \bar{x}_{n-1} \\ \bar{x}_{2n-3} \\ \bar{x}_{3n-6} \\ \vdots \\ \bar{x}_{\nu} \end{bmatrix}$.

Hence, the angle between the $j$th perturbed and unperturbed invariant subspaces can be approximated by the following asymptotic estimate:

$\Theta(\tilde{\mathcal{X}}_j, \mathcal{X}_j) \approx \Theta^{lin}(\tilde{\mathcal{X}}_j, \mathcal{X}_j) = \arcsin(\|\delta W^{lin}(1{:}n, j)\|_2), \quad j = 1, 2, \ldots, n$,

where $\Theta^{lin}$ is the asymptotic estimate of $\Theta$ obtained using the perturbation parameters.

We note that this estimate produces results nearly identical to the linear bounds derived in [8,13,28].
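The angle computation in (20) and its asymptotic estimate can be sketched as follows (our NumPy illustration; the variable names are assumptions, not the paper's notation):

```python
import numpy as np

# Sketch: exact angles sin(Theta_j) = ||U_perp_j^H * Ut_j||_2 versus the
# asymptotic estimates arcsin(||dW_lin(1:n, j)||_2) built from x_lin.
rng = np.random.default_rng(4)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)

lam, U = np.linalg.eigh(A)
_, Ut = np.linalg.eigh(A + dA)
Ut *= np.sign(np.sum(U * Ut, axis=0))        # resolve eigenvector signs

gaps = np.abs(lam[:, None] - lam[None, :])
np.fill_diagonal(gaps, np.inf)
dW_lin = np.linalg.norm(dA, 'fro') / gaps    # column j bounds |U_i^H dU_j|

theta = np.array([
    np.arcsin(min(1.0, np.linalg.norm(np.delete(U, j, axis=1).T @ Ut[:, j])))
    for j in range(n)])
theta_lin = np.arcsin(np.minimum(1.0, np.linalg.norm(dW_lin, axis=0)))
```

Here `np.delete(U, j, axis=1)` plays the role of $U_{\perp j}$, and for a tiny perturbation each exact angle should lie below its asymptotic estimate.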
 Example 3.  
For the same matrix and perturbation as in the previous examples, the exact angles between the perturbed and unperturbed one-dimensional invariant subspaces, together with their linear estimates, are shown in Table 2.

3. Probabilistic Asymptotic Bounds

Numerical experiments with symmetric matrices indicate that the estimates obtained using Theorem 2 may become overly conservative as the matrix order increases. For instance, when the order of the matrix is 2000, the ratio between the bound (15) and the actual values of the entries of $|\delta U|$ may grow as large as $10^5$, rendering the computed bound useless. A further reduction in the perturbation bounds can be achieved by introducing probabilistic perturbation bounds, which, for large $n$, allow one to obtain significantly tighter estimates with high probability. To this end, we employ the probabilistic matrix bounds proposed in [29,30], which are based on the Markov inequality [31] (Section 5-4).

We briefly outline the essence of the approach presented in [29]. The goal is to reduce the magnitude of the entries of an estimate $\Delta A$ of the matrix perturbation $\delta A$ at the expense of allowing some entries of $|\Delta A|$ to be smaller than the corresponding entries of $|\delta A|$. We obtain the following result.
Theorem 3.
Let $\Delta A = [\Delta a_{ij}]$ be an estimate of the $m \times n$ random perturbation $\delta A$, and let $P\{|\delta a_{ij}| < \Delta a_{ij}\}$ denote the probability that $|\delta a_{ij}| < \Delta a_{ij}$. If the entries of $\Delta A$ are chosen as

$\Delta a_{ij} = \frac{\|\delta A\|_F}{\Xi}$,

where

$\Xi = (1 - P_{ref})\, \sqrt{mn}$,  (22)

and $0 < P_{ref} < 1$ is a desired probability, then

$P\{|\delta a_{ij}| < \Delta a_{ij}\} \geq P_{ref}, \quad i = 1, 2, \ldots, m, \quad j = 1, 2, \ldots, n$.
If the number $mn$ is sufficiently large that $\Xi > 1$, Theorem 3 allows the mean value of the bound $\Delta A$ and, hence, the magnitude of its entries to be reduced by the scaling factor $\Xi$, provided that the desired probability $P_{ref}$ is set to be less than one. Note that the probability bound produced by the Markov inequality is conservative; in practice, the actual probabilities are usually much higher than those predicted by $P_{ref}$. This is because the Markov inequality is valid for the worst possible distribution of the random variables, whereas in our case we do not impose any restriction on the probability distribution of the entries of $\delta A$.

According to Theorem 3, the use of the scaling factor (22) guarantees that the inequality

$|\delta a_{ij}| < \Delta a_{ij}$

holds for each $i$ and $j$ with probability no less than $P_{ref}$.
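The effect of the scaling factor can be checked empirically; the sketch below (ours, with our reading of the scaling factor as $\Xi = (1 - P_{ref})\sqrt{mn}$) measures the fraction of entries of a random matrix that fall below the scaled threshold $\|\delta A\|_F / \Xi$:

```python
import numpy as np

# Sketch of the probabilistic entry bound: the fraction of entries with
# |da_ij| < ||dA||_F / Xi is at least P_ref by the Markov-type argument,
# and in practice is usually much higher.
rng = np.random.default_rng(5)
m = n = 200
P_ref = 0.8
dA = 1e-9 * rng.standard_normal((m, n))

Xi = (1 - P_ref) * np.sqrt(m * n)
threshold = np.linalg.norm(dA, 'fro') / Xi
fraction = np.mean(np.abs(dA) < threshold)
```

For normally distributed entries the empirical fraction is far above $P_{ref}$, reflecting the conservatism of the Markov inequality noted in the text.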
Using Theorem 3, a probabilistic bound on the perturbation parameters $|x_\ell|$, $\ell = 1, 2, \ldots, \nu$, can be derived as follows.

Theorem 4.
If the asymptotic estimate of the perturbation parameter vector $x$ is chosen as

$x_\ell^{est} = \frac{\|\delta A\|_F}{\Xi\, |\lambda_i - \lambda_j|}, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$,

where $\Xi$ is determined according to

$\Xi = n\, (1 - P_{ref})$,

then

$P\{|x_\ell| \leq x_\ell^{est}\} \geq P_{ref}$.  (25)

The inequality (25) shows that a probabilistic estimate of the component $|x_\ell|$ can be obtained by replacing the perturbation norm $\|\delta A\|_F$ in the linear estimate (11) with the probabilistic estimate $\|\delta A\|_F / \Xi$, where the scaling factor $\Xi$ is chosen according to (22) for a specified probability $P_{ref}$.
The probabilistic perturbation bounds on the elements of the vector $x$ make it possible to find probabilistic bounds on the elements of $\delta U$.

Theorem 5.
An asymptotic bound with probability $P_{ref}$ on the perturbation of each eigenvector $U_j$, $j = 1, 2, \ldots, n$, of $A$ under a perturbation $\delta A$ is given by

$|\tilde{U}_j - U_j| \preceq \delta U_j^{est} = |U|\, \delta W_j^{est}$,

where

$\delta W^{est} = \begin{bmatrix} 0 & x_1^{est} & x_2^{est} & \cdots & x_{n-2}^{est} & x_{n-1}^{est} \\ x_1^{est} & 0 & x_n^{est} & \cdots & x_{2n-4}^{est} & x_{2n-3}^{est} \\ x_2^{est} & x_n^{est} & 0 & \cdots & x_{3n-7}^{est} & x_{3n-6}^{est} \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ x_{n-2}^{est} & x_{2n-4}^{est} & x_{3n-7}^{est} & \cdots & 0 & x_{\nu}^{est} \\ x_{n-1}^{est} & x_{2n-3}^{est} & x_{3n-6}^{est} & \cdots & x_{\nu}^{est} & 0 \end{bmatrix} \in \mathbb{C}^{n \times n}$,

and the elements of the vector $x^{est} = [x_1^{est}, x_2^{est}, \ldots, x_{\nu}^{est}]^T$ are determined from

$x_\ell^{est} = \frac{\|\delta A\|_F}{\Xi\, |\lambda_i - \lambda_j|}, \quad \ell = i + (j-1)n - j(j+1)/2, \quad 1 \leq j < i \leq n$.

The probabilistic bound on $\delta A$ then yields the following asymptotic perturbation bounds on the eigenvalues of $A$:

$|\delta\lambda_i| \leq \delta\lambda_i^{est} = \|\delta A\|_F / \Xi, \quad i = 1, 2, \ldots, n$.

Finally, we find that the angle between the $j$th perturbed and unperturbed one-dimensional invariant subspaces satisfies the probabilistic asymptotic estimate

$\Theta_j(\tilde{\mathcal{X}}_j, \mathcal{X}_j) \leq \Theta_j^{est}(\tilde{\mathcal{X}}_j, \mathcal{X}_j) = \arcsin(\|\delta W^{est}(1{:}n, j)\|_2), \quad j = 1, 2, \ldots, n$.
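The whole probabilistic pipeline can be sketched in a few lines (our NumPy illustration with assumed helper names; the coverage check mirrors the experiments reported later, where the realized coverage typically far exceeds $P_{ref}$):

```python
import numpy as np

# Sketch: probabilistic eigenvector bound dU_est = |U| * dW_est with
# dW_est = dW_lin / Xi and Xi = n * (1 - P_ref). We measure the fraction of
# eigenvector entries covered by the bound, which should be >= P_ref.
rng = np.random.default_rng(6)
n, P_ref = 40, 0.8
B = rng.standard_normal((n, n))
A = (B + B.T) / 2
dA0 = rng.standard_normal((n, n))
dA = 1e-9 * (dA0 + dA0.T)

lam, U = np.linalg.eigh(A)
_, Ut = np.linalg.eigh(A + dA)
Ut *= np.sign(np.sum(U * Ut, axis=0))       # resolve eigenvector signs

gaps = np.abs(lam[:, None] - lam[None, :])
np.fill_diagonal(gaps, np.inf)
Xi = n * (1 - P_ref)
dU_est = np.abs(U) @ (np.linalg.norm(dA, 'fro') / (Xi * gaps))

coverage = np.mean(np.abs(Ut - U) <= dU_est + 1e-15)
```

Because the Markov inequality is conservative, `coverage` is usually close to one even though only $P_{ref}$ is guaranteed.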

4. Numerical Experiments

In this section, we present two numerical experiments illustrating the perturbation bounds for eigendecompositions of matrices of order 5000. The matrices are constructed in the form $A = V D V^T$, where $V$ is orthogonal and $D$ is a diagonal matrix containing the prescribed eigenvalues. The perturbation $\delta A$ is taken as $10^{-9} \times (\delta A_0 + \delta A_0^T)$, where $\delta A_0$ is a matrix with normally distributed random entries.
 Example 4.
In this example, the eigenvalues of $A$ are distributed uniformly from 1 to 5.999 with an increment of 0.001, which results in $M^{-1}$ having a 2-norm equal to $10^3$.

In Figure 1, we display the entries of the exact perturbation $|\delta U| = |\tilde{U} - U|$, together with the entries of the asymptotic bound $\delta U^{lin}$ and the probabilistic bound $\delta U^{est}$. The desired lower bound on the probability is set to $P_{ref} = 0.8$, and the scaling factor used to find the probabilistic estimates is 1000. As a result, the actual probability is equal to 100%, since all entries of $\delta U^{est}$ exceed the corresponding entries of $|\delta U|$. As already mentioned, this phenomenon is due to the conservatism of the Markov inequality. The eigenvalue perturbations $\delta\lambda_i = \tilde{\lambda}_i - \lambda_i$, together with their asymptotic estimate $\delta\lambda^{lin} = 2.8287199 \times 10^{-6}$ and probabilistic estimate $\delta\lambda^{est} = 7.0717998 \times 10^{-9}$, are shown in Figure 2. There is no eigenvalue whose probabilistic perturbation bound is smaller than the actual $\delta\lambda_i$, which yields 100% coverage instead of the reference value of 80%. Due to the equal spacing between the eigenvalues, the sensitivity of all invariant subspaces is the same in this case (Figure 3).

Clearly, the probabilistic estimates are much closer to the actual perturbations than the linear deterministic bounds are.
 Example 5.
In this example, the eigenvalues of $A$ are set to $\lambda_i = 1 + 500/i$ and are thus distributed between 1.1 and 501. The 2-norm of the matrix $M^{-1}$ is equal to $4.999 \times 10^4$. The eigenvalues of $A$ are shown in Figure 4.

In Figure 5, Figure 6 and Figure 7, we present the matrix $|\delta U|$, the eigenvalue perturbations, and the angles between the perturbed and unperturbed invariant subspaces, respectively, as well as their asymptotic and probabilistic bounds for $P_{ref} = 80\%$. Due to the decreasing separation between the eigenvalues, the sensitivity of the invariant subspaces increases with the index $i$. As in the previous example, the actual coverage of the probabilistic bounds for the eigenvector matrix and for the eigenvalues again reaches 100%.

Note that in both examples, the linear estimates correctly reflect the changes in the corresponding exact quantities.

5. Conclusions

In this paper, we have presented strict asymptotic perturbation bounds for the eigenvalues, eigenvectors, and invariant subspaces of symmetric and Hermitian matrices. The use of the splitting operator method enables a treatment unified with the perturbation analysis of the Schur form [24] and the generalized Schur form [32]. Since the deterministic bounds derived may become conservative for high-order matrices, we have introduced probabilistic bounds based on the Markov inequality. These probabilistic bounds are easily obtained from the asymptotic bounds and allow a substantial reduction in the perturbation estimates, with a guaranteed probability that in practice is close to one. As shown by the examples, an appealing aspect of the new bounds for symmetric matrices is that they can be computed with low memory and computational requirements.

Author Contributions

Conceptualization, M.K. and P.H.P.; methodology, P.H.P.; software, P.H.P.; validation, M.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated during the current study are available from the authors upon reasonable request.

Acknowledgments

The authors are grateful to the reviewers for their remarks and suggestions that helped to improve the manuscript.

Conflicts of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationship that could be construed as a potential conflict of interest.

Notation

$\mathbb{C}$, the set of complex numbers;
$\mathbb{C}^{n \times m}$, the space of $n \times m$ complex matrices;
$A = [a_{ij}]$, a matrix with entries $a_{ij}$;
$A_j$, the $j$th column of $A$;
$A_{i,1:n}$, the $i$th row of an $m \times n$ matrix $A$;
$A_{1:m,j}$, the $j$th column of an $m \times n$ matrix $A$;
$\mathrm{Low}(A)$, the strictly lower triangular part of $A$;
$|A|$, the matrix of the absolute values of the elements of $A$;
$A^H$, the Hermitian transpose of $A$;
$0_{m \times n}$, the $m \times n$ zero matrix;
$I_n$, the $n \times n$ identity matrix;
$\delta A$, the perturbation of $A$;
$\|A\|_2$, the spectral norm of $A$;
$\|A\|_F$, the Frobenius norm of $A$;
$:=$, equality by definition;
$\preceq$, partial order: if $a, b \in \mathbb{R}^n$, then $a \preceq b$ means $a_i \leq b_i$, $i = 1, 2, \ldots, n$;
$\mathcal{X} = \mathcal{R}(X)$, the subspace spanned by the columns of $X$;
$U_\perp$, the orthogonal complement of $U$, $U_\perp^H U = 0$.

References

  1. Kato, T. Perturbation Theory for Linear Operators, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 1995; ISBN 978-3-540-58661-6.
  2. Wilkinson, J. The Algebraic Eigenvalue Problem; Clarendon Press: Oxford, UK, 1965; ISBN 978-0-19-853418-1.
  3. Parlett, B.N. The Symmetric Eigenvalue Problem; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1998.
  4. Stewart, G.W.; Sun, J.-G. Matrix Perturbation Theory; Academic Press: New York, NY, USA, 1990; ISBN 978-0126702309.
  5. Bhatia, R. Perturbation Bounds for Matrix Eigenvalues; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2007; ISBN 978-0-898716-31-3.
  6. Stewart, G.W. Matrix Algorithms, Vol. II: Eigensystems; SIAM: Philadelphia, PA, USA, 2001; ISBN 0-89871-503-2.
  7. Chatelin, F. Eigenvalues of Matrices; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2012; ISBN 978-1-611972-45-0.
  8. Sun, J.-G. Stability and Accuracy: Perturbation Analysis of Algebraic Eigenproblems; Technical Report; Department of Computing Science, Umeå University: Umeå, Sweden, 1998; pp. 1–210.
  9. Li, R. Matrix perturbation theory. In Handbook of Linear Algebra, 2nd ed.; Hogben, L., Ed.; CRC Press: Boca Raton, FL, USA, 2014; pp. 21-1–21-20.
  10. Barlow, J.; Slapničar, I. Optimal perturbation bounds for the Hermitian eigenvalue problem. Linear Algebra Appl. 2000, 309, 19–43.
  11. Mathias, R. Quadratic residual bounds for the Hermitian eigenvalue problem. SIAM J. Matrix Anal. Appl. 1998, 19, 541–550.
  12. Stewart, G.W. Error and perturbation bounds for subspaces associated with certain eigenvalue problems. SIAM Rev. 1973, 15, 727–764.
  13. Sun, J.-G. Perturbation expansions for invariant subspaces. Linear Algebra Appl. 1991, 153, 85–97.
  14. Ipsen, I.C.F. An overview of relative sin(Θ) theorems for invariant subspaces of complex matrices. J. Comput. Appl. Math. 2000, 123, 131–153.
  15. Veselić, K.; Slapničar, I. Floating-point perturbations of Hermitian matrices. Linear Algebra Appl. 1993, 195, 81–116.
  16. Greenbaum, A.; Li, R.-C.; Overton, M.L. First-order perturbation theory for eigenvalues and eigenvectors. SIAM Rev. 2020, 62, 463–482.
  17. Barbarino, G.; Serra-Capizzano, S. Non-Hermitian perturbations of Hermitian matrix-sequences and applications to the spectral analysis of the numerical approximation of partial differential equations. Numer. Linear Algebra Appl. 2020, 27, 31.
  18. Davis, C.; Kahan, W.M. The rotation of eigenvectors by a perturbation. III. SIAM J. Numer. Anal. 1970, 7, 1–46.
  19. Nakatsukasa, Y. Sharp error bounds for Ritz vectors and approximate singular vectors. Math. Comput. 2018, 89, 1843–1866.
  20. Wang, W.-G.; Wei, Y. Mixed and componentwise condition numbers for matrix decompositions. Theor. Comput. Sci. 2017, 681, 199–216.
  21. Carlsson, M. Spectral perturbation theory of Hermitian matrices. In Bridging Eigenvalue Theory and Practice—Applications in Modern Engineering; Carpentieri, B., Ed.; IntechOpen: London, UK, 2025; ISBN 978-1-83634-248-9.
  22. Li, R.-C.; Nakatsukasa, Y.; Truhar, N.; Wang, W.-G. Perturbation of multiple eigenvalues of Hermitian matrices. Linear Algebra Appl. 2012, 437, 202–213.
  23. Konstantinov, M.; Petkov, P. Perturbation Methods in Matrix Analysis and Control; NOVA Science Publishers, Inc.: New York, NY, USA, 2020. Available online: https://novapublishers.com/shop/perturbation-methods-in-matrix-analysis-and-control (accessed on 4 October 2025).
  24. Petkov, P. Componentwise perturbation analysis of the Schur decomposition of a matrix. SIAM J. Matrix Anal. Appl. 2021, 42, 108–133.
  25. Golub, G.H.; Van Loan, C.F. Matrix Computations, 4th ed.; The Johns Hopkins University Press: Baltimore, MD, USA, 2013; ISBN 978-1-4214-0794-4.
  26. The MathWorks, Inc. MATLAB, Version 9.9.0.1538559 (R2020b); The MathWorks, Inc.: Natick, MA, USA, 2020. Available online: https://www.mathworks.com (accessed on 4 October 2025).
  27. Björck, A.; Golub, G. Numerical methods for computing angles between linear subspaces. Math. Comp. 1973, 27, 579–594.
  28. Bai, Z.; Demmel, J.; Mckenney, A. On computing condition numbers for the nonsymmetric eigenproblem. ACM Trans. Math. Softw. 1993, 19, 202–223. [Google Scholar] [CrossRef]
  29. Petkov, P. Probabilistic perturbation bounds of matrix decompositions. Numer. Linear Algebra Appl. 2024, 31, 1–40. [Google Scholar] [CrossRef]
  30. Petkov, P. Probabilistic perturbation bounds for invariant, deflating and singular subspaces. Axioms 2024, 13, 597. [Google Scholar] [CrossRef]
  31. Papoulis, A. Probability, Random Variables and Stochastic Processes, 3rd ed.; McGraw Hill, Inc.: New York, NY, USA, 1991; ISBN 0-07-048477-5. [Google Scholar]
  32. Zhang, G.; Li, H.; Wei, Y. Componentwise perturbation analysis for the generalized Schur decomposition. Calcolo 2022, 59, 19. [Google Scholar] [CrossRef]
Figure 1. The entries of the matrix | δ U | and their asymptotic and probabilistic bounds for Example 4.
Figure 2. The eigenvalue perturbations and their asymptotic and probabilistic bounds for Example 4.
Figure 3. Angles between the perturbed and unperturbed invariant subspaces and their asymptotic and probabilistic bounds for Example 4.
Figure 4. The matrix eigenvalues for Example 5.
Figure 5. The entries of the matrix | δ U | and their asymptotic and probabilistic bounds for Example 5.
Figure 6. The eigenvalue perturbations and their asymptotic and probabilistic bounds for Example 5.
Figure 7. Angles between the perturbed and unperturbed invariant subspaces and their asymptotic and probabilistic bounds for Example 5.
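The probabilistic bounds shown in the figures rest on the Markov inequality: for a nonnegative random variable X, P(X ≥ a) ≤ E[X]/a, so the threshold E[X]/ε is exceeded with probability at most ε. The following NumPy sketch is illustrative only, not the paper's construction; the matrix size, perturbation scale, and ε = 0.05 are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, trials, eps = 20, 500, 0.05

# Random symmetric test matrix.
A = rng.standard_normal((n, n))
A = (A + A.T) / 2
lam = np.linalg.eigvalsh(A)

# Sample the largest eigenvalue perturbation over random symmetric
# perturbations of fixed scale.
samples = np.empty(trials)
for t in range(trials):
    E = rng.standard_normal((n, n)) * 1e-6
    dA = (E + E.T) / 2
    samples[t] = np.max(np.abs(np.linalg.eigvalsh(A + dA) - lam))

# Markov inequality: P(X >= E[X]/eps) <= eps, i.e., the inexpensive
# threshold mean/eps is violated with probability at most eps.
bound = samples.mean() / eps
assert np.mean(samples >= bound) <= eps
```

Because the Markov inequality uses only the first moment, such thresholds are easy to evaluate but conservative; in the sampled distribution above, virtually no realization approaches mean/ε.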
Table 1. Exact perturbation parameters x related to the matrix δW and their linear estimates for ‖δA‖_F = 1.0983610 × 10^-8.
x = U_i^T δU_j | |x| | x_lin | |Δx|
x_1 = U_2^T δU_1 | 7.6707043 × 10^-11 | 1.0983610 × 10^-8 | 4.7003350 × 10^-19
x_2 = U_3^T δU_1 | 6.0721878 × 10^-10 | 3.6734482 × 10^-9 | 4.2499607 × 10^-19
x_3 = U_4^T δU_1 | 6.9954664 × 10^-10 | 3.6612033 × 10^-9 | 2.0826081 × 10^-19
x_4 = U_5^T δU_1 | 2.5735541 × 10^-10 | 3.6613255 × 10^-9 | 1.0760803 × 10^-18
x_5 = U_3^T δU_2 | 1.1842760 × 10^-9 | 5.5194023 × 10^-9 | 5.8765348 × 10^-18
x_6 = U_4^T δU_2 | 9.0330987 × 10^-10 | 5.4918052 × 10^-9 | 7.9079719 × 10^-18
x_7 = U_5^T δU_2 | 8.5971180 × 10^-10 | 5.4920797 × 10^-9 | 8.2028102 × 10^-18
x_8 = U_4^T δU_3 | 1.4635852 × 10^-7 | 1.0983611 × 10^-6 | 1.7153442 × 10^-15
x_9 = U_5^T δU_3 | 4.8612969 × 10^-7 | 1.1094556 × 10^-6 | 3.5872417 × 10^-15
x_10 = U_5^T δU_4 | 2.3352890 × 10^-5 | 1.0983610 × 10^-4 | 9.1061463 × 10^-14
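The quantities of the type shown in Table 1 can be reproduced in outline with classical first-order eigenvector perturbation theory: for a symmetric matrix with distinct eigenvalues, u_i^T δu_j ≈ u_i^T δA u_j / (λ_j − λ_i) for i ≠ j, the two sides differing by second-order terms in ‖δA‖. The NumPy sketch below is an illustration with a synthetic well-gapped matrix, not the authors' code or the paper's splitting-operator construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Symmetric test matrix with well-separated eigenvalues (gaps ~ 1).
B = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * (B + B.T)

# Small symmetric perturbation, entries ~ 1e-8.
E = rng.standard_normal((n, n)) * 1e-8
dA = (E + E.T) / 2

lam, U = np.linalg.eigh(A)
lam_p, U_p = np.linalg.eigh(A + dA)

# Resolve the sign ambiguity of the perturbed eigenvectors so that
# dU = U_p - U measures the actual eigenvector perturbation.
U_p *= np.sign(np.sum(U * U_p, axis=0))
dU = U_p - U

# Exact parameters x = u_i^T du_j versus the first-order estimate
# u_i^T dA u_j / (lam_j - lam_i), i != j; agreement is second order.
for j in range(n):
    for i in range(n):
        if i != j:
            x_exact = U[:, i] @ dU[:, j]
            x_lin = (U[:, i] @ dA @ U[:, j]) / (lam[j] - lam[i])
            assert abs(x_exact - x_lin) < 1e-12
```

The same second-order agreement between the exact parameters and their linear estimates is what the |Δx| column of Table 1 quantifies.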
Table 2. Exact angles between perturbed and unperturbed invariant subspaces and their linear estimates.
j | Θ | Θ_lin
1 | 9.6446662 × 10^-10 | 1.2686356 × 10^-8
2 | 1.7214722 × 10^-9 | 1.4540507 × 10^-8
3 | 5.0768557 × 10^-7 | 1.5611959 × 10^-6
4 | 2.3353348 × 10^-5 | 1.0984160 × 10^-4
5 | 2.3357949 × 10^-5 | 1.0984171 × 10^-4
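For one-dimensional invariant subspaces spanned by unit eigenvectors, the angle between the perturbed and unperturbed subspaces reduces to Θ_j = arccos |u_j^T ũ_j|, which is independent of the eigenvectors' sign ambiguity. A minimal NumPy sketch of this computation (synthetic well-gapped example, not the data behind Table 2):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Symmetric test matrix with well-separated eigenvalues.
B = rng.standard_normal((n, n))
A = np.diag(np.arange(1.0, n + 1)) + 0.1 * (B + B.T)

# Small symmetric perturbation, entries ~ 1e-8.
E = rng.standard_normal((n, n)) * 1e-8
dA = (E + E.T) / 2

_, U = np.linalg.eigh(A)
_, U_p = np.linalg.eigh(A + dA)

# Angle between the 1-D subspaces span(u_j) and span(u~_j):
# Theta_j = arccos(|u_j^T u~_j|); clip guards against rounding past 1.
theta = np.arccos(np.clip(np.abs(np.sum(U * U_p, axis=0)), 0.0, 1.0))

# Angles are O(||dA|| / gap), hence tiny for this example.
assert np.all(theta < 1e-6)
```

For higher-dimensional subspaces the analogous quantities are the principal angles, computed from the singular values of U^T Ũ.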