Article

A Shift-Deflation Technique for Computing a Large Quantity of Eigenpairs of the Generalized Eigenvalue Problems

1 Department of Applied Mathematics, Nanjing Forestry University, Nanjing 210037, China
2 Department of Mathematics, Taizhou University, Taizhou 225300, China
3 Department of Mathematics, Southeast University, Nanjing 210096, China
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(12), 2547; https://doi.org/10.3390/sym14122547
Submission received: 29 September 2022 / Revised: 14 October 2022 / Accepted: 20 October 2022 / Published: 2 December 2022

Abstract

In this paper, we propose a shift-deflation technique for the generalized eigenvalue problem (GEP). This technique consists of the following two stages: the shift of converged eigenvalues to zeros, and the deflation of these shifted eigenvalues. By performing the above technique, we construct a new generalized eigenvalue problem of lower dimension that shares the same eigenvalues as the original generalized eigenvalue problem except for the converged ones. In addition, we consider the relations between the eigenvectors before and after performing the technique. Finally, numerical experiments show the effectiveness and robustness of the proposed method.

1. Introduction

In this paper, we consider the computation of a large quantity of eigenpairs of a large-scale generalized eigenvalue problem (GEP)

A x = λ B x  and  y* A = λ y* B,    (1)

where A, B ∈ C^{n×n} are the coefficient matrices and the superscript * denotes the conjugate transpose. The scalar λ is an eigenvalue of the GEP (1) if and only if λ is a root of det(A − λB), where det(·) denotes the determinant of a matrix. The nonzero vectors x and y are called the right and left eigenvectors corresponding to λ, respectively, and (λ, x) or (λ, x, y) is called an eigenpair of the GEP (1). Assume that (λ_i, x_i, y_i) (1 ≤ i ≤ r ≤ n) are r eigenpairs of the GEP (1), and let
Λ_1 = diag(λ_1, λ_2, …, λ_r),  X_1 = [x_1, x_2, …, x_r]  and  Y_1 = [y_1, y_2, …, y_r],    (2)
where diag(λ_1, λ_2, …, λ_r) denotes a diagonal matrix with diagonal entries λ_1, λ_2, …, λ_r. Then the pair (Λ_1, X_1, Y_1) ∈ C^{r×r} × C^{n×r} × C^{n×r} is also called an eigenpair of the GEP (1), which satisfies
A X_1 = B X_1 Λ_1  and  Y_1* A = Λ_1 Y_1* B.    (3)
The GEP (1) arises in a number of applications, such as structural analysis [1], magneto-hydrodynamics [2], fluid–structure interaction [3] and the boundary integral equation [4]. For small and medium-sized GEPs, we can compute the eigenpairs using the QZ algorithm [5], the Riemannian nonlinear conjugate gradient method [6] and so on. For the large-scale GEP, the methods in [7,8,9,10,11,12,13] find only a few extreme eigenpairs or interior eigenpairs with eigenvalues close to a given shift. In order to compute a cluster of eigenvalues and the associated eigenvectors successively, it is necessary to develop a shift-deflation technique for the GEP (1).
Assume that we have already computed some eigenvalues λ_i (1 ≤ i ≤ r) of the GEP (1). Our goal is to construct a new GEP with coefficient matrices (Â, B̂) whose eigenvalues are exactly those of the GEP (1) except for the computed eigenvalues {λ_i}_{i=1}^r. This is achieved in two stages, namely, shift and deflation. In the shift stage, we move the converged eigenvalues {λ_i}_{i=1}^r to zeros while keeping the remaining eigenvalues unchanged; to this end, we define a new GEP with coefficient matrices (Ã, B̃) whose eigenvalues are r zeros and {λ_i}_{i=r+1}^n. In the deflation stage, we deflate the r shifted zeros of the shifted GEP with coefficient matrices (Ã, B̃); to this end, we construct a new GEP with coefficient matrices (Â, B̂) whose eigenvalues are exactly {λ_i}_{i=r+1}^n. The relationship between the eigenvectors of the two GEPs with coefficient matrices (A, B) and (Â, B̂) is also shown in this paper.
Throughout this paper, we use the following notation. I_n denotes the n × n identity matrix. e_j and E_j denote the j-th column and the first j columns of the identity matrix, respectively. The superscript * denotes the conjugate transpose of a vector or a matrix. ‖·‖_2 denotes the Euclidean vector norm, and ‖·‖_F denotes the Frobenius matrix norm. We also adopt the following MATLAB notation: A(i:j, k:l) denotes the submatrix of A consisting of the intersection of rows i to j and columns k to l, while A(i:j, :) and A(:, k:l) select rows i to j and columns k to l of A, respectively.

2. Shift Technique

In this section, we describe how to move converged eigenvalues of the GEP (1) to zeros while keeping the corresponding right eigenvectors and the remaining eigenvalues unchanged.
Theorem 1.
Assume that (λ_i, x_i, y_i) (1 ≤ i ≤ n) are the eigenpairs of the GEP (1) with λ_i ≠ 0, the pair (Λ_1, X_1, Y_1) is defined as in (2) with Y_1* X_1 = I_r, and λ_i ≠ λ_j where 1 ≤ i ≤ r and r < j ≤ n. Construct a new GEP
Ã x̃ = λ̃ B̃ x̃  and  ỹ* Ã = λ̃ ỹ* B̃,    (4)
where the coefficient matrices A ˜ and B ˜ are defined as
Ã = A − B X_1 Λ_1 Y_1*,   B̃ = B,    (5)
and (λ̃_i, x̃_i, ỹ_i) (1 ≤ i ≤ n) are the eigenpairs of the shifted GEP (4) with
λ̃_i = 0 (1 ≤ i ≤ r),  λ̃_i = λ_i (r < i ≤ n);
x̃_i = x_i (1 ≤ i ≤ r),  x̃_i = (I_n − λ_i^{-1} X_1 Λ_1 Y_1*) x_i (r < i ≤ n);
ỹ_i = y_i (r < i ≤ n), while for 1 ≤ i ≤ r the ỹ_i are nonzero solutions of ỹ_i* Ã = 0.    (6)
Proof. 
We first verify the case 1 ≤ i ≤ r. From the assumption Y_1* X_1 = I_r, we have Y_1* x_i = e_i and
Ã x_i = (A − B X_1 Λ_1 Y_1*) x_i = A x_i − B X_1 Λ_1 e_i = A x_i − λ_i B x_i = 0.
Therefore, (0, x_i, ỹ_i) (1 ≤ i ≤ r) are the eigenpairs of the shifted GEP (4), which implies that the shift technique (5) indeed moves the nonzero eigenvalues {λ_i}_{i=1}^r to zeros while keeping the corresponding right eigenvectors {x_i}_{i=1}^r unchanged.
Next, we consider the case r < i ≤ n. In fact, by using (3) we obtain
(Ã − λ_i B̃) x̃_i = (A − B X_1 Λ_1 Y_1* − λ_i B)(I_n − λ_i^{-1} X_1 Λ_1 Y_1*) x_i
                = (A − λ_i^{-1} A X_1 Λ_1 Y_1* + λ_i^{-1} B X_1 Λ_1^2 Y_1* − λ_i B) x_i
                = (A − λ_i B) x_i − λ_i^{-1} (A X_1 − B X_1 Λ_1) Λ_1 Y_1* x_i = 0    (7)
and
ỹ_i* (Ã − λ_i B̃) = y_i* (A − B X_1 Λ_1 Y_1* − λ_i B)
                = y_i* (A − λ_i B) − y_i* B X_1 (λ_i I_r − Λ_1) Λ_1 (λ_i I_r − Λ_1)^{-1} Y_1*
                = y_i* (A − λ_i B) + y_i* (A − λ_i B) X_1 Λ_1 (λ_i I_r − Λ_1)^{-1} Y_1*
                = y_i* (A − λ_i B) [I_n + X_1 Λ_1 (λ_i I_r − Λ_1)^{-1} Y_1*] = 0.    (8)
A combination of (7) and (8) indicates that (λ_i, x̃_i, ỹ_i) (r < i ≤ n) are the eigenpairs of the shifted GEP (4), which implies that the shift technique (5) indeed keeps the remaining eigenvalues {λ_i}_{i=r+1}^n along with the corresponding left eigenvectors {y_i}_{i=r+1}^n unchanged.    □
Remark 1.
Some remarks on Theorem 1 are given as follows.
(1) 
From (6), we can see that the eigenvalues {λ_i}_{i=1}^r, the right eigenvectors {x_i}_{i=r+1}^n and the left eigenvectors {y_i}_{i=1}^r have changed after implementing the shift technique (5). Moreover, the new right eigenvectors {x̃_i}_{i=r+1}^n are explicitly available in (6), while the new left eigenvectors {ỹ_i}_{i=1}^r can be obtained by solving the homogeneous system ỹ_i* Ã = 0.
(2) 
A similar shift technique can be obtained when the coefficient matrix Ã in (5) is defined by Ã = A − X_1 Λ_1 Y_1* B. In this case, the changes of the left eigenvectors {y_i}_{i=1}^r are explicitly available, while those of the right eigenvectors {x_i}_{i=1}^r are not. Moreover, we need to solve a homogeneous system by a certain numerical method if we want both the left and right eigenvectors.
(3) 
A similar shift technique can be obtained when the coefficient matrix Ã in (5) is defined by Ã = A − B X_1 Λ_1 X_1*. In this case, the left eigenvectors {y_i}_{i=1}^r are not needed, and Y_1 in both the condition Y_1* X_1 = I_r and the relation x̃_i = (I_n − λ_i^{-1} X_1 Λ_1 Y_1*) x_i should be replaced by X_1.
The above theorem and remarks lead to the following corollary directly.
Corollary 1.
Assume that (λ_i, x_i) (1 ≤ i ≤ n) are the eigenpairs of the GEP (1) with λ_i ≠ 0, and (λ_1, x_1) is a converged eigenpair with x_1* x_1 = 1 and λ_1 ≠ λ_j for 2 ≤ j ≤ n. Construct the new GEP (4), where the coefficient matrices Ã and B̃ are defined as
Ã = A − λ_1 B x_1 x_1*,   B̃ = B,    (9)
then (λ̃_i, x̃_i) (1 ≤ i ≤ n) are the eigenpairs of the new GEP (4) with
λ̃_1 = 0 and x̃_1 = x_1, while λ̃_i = λ_i and x̃_i = (I_n − (λ_1/λ_i) x_1 x_1*) x_i for 2 ≤ i ≤ n.    (10)
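As a quick sanity check of the rank-one shift (9), the following Python sketch (a stand-in for the paper's MATLAB setting; the test pencil, its spectrum {2, 3, 5, 7, 11, 13} and the random seed are hypothetical choices of ours) builds a small GEP with known eigenvalues and verifies that λ_1 moves to zero while the rest stay put.

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(1)
n = 6

# Hypothetical test pencil: A = B X diag(lam) X^{-1} enforces A X = B X diag(lam),
# so the columns of X are right eigenvectors and lam holds the eigenvalues.
lam = np.array([2.0, 3.0, 5.0, 7.0, 11.0, 13.0])
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominated, nonsingular
A = B @ X @ np.diag(lam) @ np.linalg.inv(X)

# Shift (9): A~ = A - lam_1 * B x_1 x_1^T with a unit right eigenvector x_1.
x1 = X[:, 0] / np.linalg.norm(X[:, 0])
A_shift = A - lam[0] * np.outer(B @ x1, x1)

shifted = np.sort(sla.eigvals(A_shift, B).real)
print(np.round(shifted, 6))   # lam_1 = 2 is replaced by 0; 3, 5, 7, 11, 13 unchanged
```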
Remark 2.
Some remarks on Corollary 1 are given as follows.
(1) 
If λ_1 = 0, then the shift technique (9) is not needed. If λ_1 = ∞, then the shift technique (9) will fail due to the fact that B x_1 = 0. In practice, the eigenvalues can be sorted in ascending order of modulus, i.e., |λ_1| ≤ |λ_2| ≤ ⋯ ≤ |λ_n|, and we are interested in finding the smallest eigenvalues of the GEP (1).
(2) 
The relation between the left eigenvectors y_i and ỹ_i can also be given as y_i = (I_n − λ_1/(λ_1 − λ_i) y_1 y_1*) ỹ_i for 2 ≤ i ≤ n. However, it fails if λ_i = λ_1 for a certain i. In order to remedy this issue, we should shift this eigenpair (λ_i, x_i) together with the converged eigenpair (λ_1, x_1) by applying the shift technique referred to in Remark 1 (3).
(3) 
We can also shift λ 1 to infinity by using the following shift technique
Ã = A,   B̃ = B − B x_1 x_1*,    (11)
while keeping the corresponding right eigenvector x_1 and the remaining eigenvalues {λ_i}_{i=2}^n unchanged. Moreover, we have the relations x̃_i = (I_n − (λ_i/λ_1) x_1 x_1*) x_i and x_i = (I_n − λ_i/(λ_i − λ_1) x_1 x_1*) x̃_i for 2 ≤ i ≤ n.
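The infinite shift (11) can be checked the same way (same hypothetical construction as before; note that QZ reports the shifted eigenvalue as Inf or as a huge finite value, so we inspect the eigenvalues of smallest modulus):

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(2)
n = 5
lam = np.array([2.0, 3.0, 5.0, 7.0, 11.0])        # hypothetical spectrum
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = B @ X @ np.diag(lam) @ np.linalg.inv(X)

# Shift (11): B~ = B - B x_1 x_1^T gives B~ x_1 = 0, so lam_1 moves to infinity.
x1 = X[:, 0] / np.linalg.norm(X[:, 0])
B_shift = B - np.outer(B @ x1, x1)

ev = sla.eigvals(A, B_shift)
ev = ev[np.argsort(np.abs(ev))]                   # smallest modulus first
print(np.round(ev[:4].real, 6))                   # 3, 5, 7, 11 remain; 2 is gone
```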

3. Deflation Technique

In this section, we deflate the r shifted zeros. To this end, we construct a new GEP of dimension (n − r) × (n − r) whose eigenvalues are the remaining eigenvalues of the shifted GEP (4), i.e., all except the zeros. The following theorem shows the feasibility of the deflation technique under certain assumptions.
Theorem 2.
Assume that (λ_i, x_i, y_i) (1 ≤ i ≤ n) are the eigenpairs of the GEP (1) with λ_i = 0 for 1 ≤ i ≤ r and λ_j ≠ 0 for r < j ≤ n, X, Y ∈ C^{n×r} are both full column rank matrices with A X = 0 and Y* A = 0, H and K are both nonsingular matrices with H E_r = Y and K E_r = X, and R = Y* B X is nonsingular. Construct a new GEP
Â x̂ = λ̂ B̂ x̂  and  ŷ* Â = λ̂ ŷ* B̂,    (12)
where the coefficient matrices A ^ and B ^ are defined as
Â = A_1(r+1:n, r+1:n),   B̂ = B_1(r+1:n, r+1:n),
with
A_1 = H* A K,   B_1 = H* (I_n − B X R^{-1} Y*) B K,
then (λ_i, x̂_i, ŷ_i) (r < i ≤ n) are the eigenpairs of the new GEP (12) with
x_i = K [−R^{-1} S x̂_i ; x̂_i],   y_i = H [−(R^{-1})* T* ŷ_i ; ŷ_i],    (13)
where S = S_1(:, r+1:n) ∈ C^{r×(n−r)}, S_1 = Y* B K, T = T_1(r+1:n, :) ∈ C^{(n−r)×r} and T_1 = H* B X.
Proof. 
We first prove that λ_i (r < i ≤ n) are the eigenvalues of the deflated GEP (12). Let L(λ) = A − λB, L_1(λ) = H* L(λ) K and V(λ) = L_1(λ)(r+1:n, r+1:n). We can easily verify that L_1(λ) shares the same spectrum with L(λ). Moreover, we have the following relations:
E_r^T L_1(λ) E_r = Y* (A − λB) X = −λR,   E_r^T L_1(λ) e_j = Y* (A − λB) K e_j = −λ Y* B K e_j,   e_j^T L_1(λ) E_r = e_j^T H* (A − λB) X = −λ e_j^T H* B X,    (14)
where r < j ≤ n. Based on (14), we obtain
L_1(λ) = [−λ I_r, 0 ; 0, I_{n−r}] [R, S ; −λT, V(λ)].
Let D(λ) = V(λ) + λ T R^{-1} S; then det(L_1(λ)) = (−1)^r λ^r det(R) det(D(λ)). Therefore, λ_i (r < i ≤ n) are the roots of det(D(λ)). Denote L_2(λ) = A_1 − λ B_1; then we have
L_2(λ) = H* A K − λ H* B K + λ (H* B X) R^{-1} (Y* B K)
       = L_1(λ) + λ [R ; T] R^{-1} [R, S]
       = L_1(λ) + λ [R, S ; T, T R^{-1} S]
       = [0, 0 ; 0, D(λ)],
which implies D(λ) = Â − λ B̂.
Now, we prove the relations (13). Denote
ω_i = K^{-1} x_i = [ω_{i1} ; ω_{i2}],
where ω_{i1} ∈ C^r, ω_{i2} ∈ C^{n−r} and r < i ≤ n. Since L_1(λ_i) ω_i = H* L(λ_i) x_i = 0, we have
−λ_i R ω_{i1} − λ_i S ω_{i2} = 0,   −λ_i T ω_{i1} + V(λ_i) ω_{i2} = 0.    (15)
According to the assumption that λ_i ≠ 0 and R is nonsingular, we have ω_{i1} = −R^{-1} S ω_{i2} from the first equation of (15). Therefore, (V(λ_i) + λ_i T R^{-1} S) ω_{i2} = D(λ_i) ω_{i2} = 0, which implies that ω_{i2} is a right eigenvector with respect to λ_i. Without loss of generality, we let ω_{i2} = x̂_i; then the first relation in (13) is obtained. The proof of the second relation in (13) can be given analogously.    □
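The block deflation of Theorem 2 can be exercised on a small example. In the sketch below (hypothetical data with a double zero eigenvalue; the QR-based completion of X and Y to nonsingular K and H is just one admissible choice), the trailing (n − r) × (n − r) pencil carries exactly the nonzero part of the spectrum.

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(5)
n, r = 6, 2
lam = np.array([0.0, 0.0, 4.0, 5.0, 6.0, 7.0])   # r = 2 zero eigenvalues
Xf = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = B @ Xf @ np.diag(lam) @ np.linalg.inv(Xf)

X = Xf[:, :r]                 # full column rank, A X = 0
Y = sla.null_space(A.T)       # orthonormal basis of the left null space, Y^T A = 0

def complete(M):
    """Nonsingular matrix whose first r columns are M (QR-based completion)."""
    Q, _ = np.linalg.qr(np.hstack([M, rng.standard_normal((n, n - r))]))
    return np.hstack([M, Q[:, r:]])

K, H = complete(X), complete(Y)                  # K E_r = X, H E_r = Y
R = Y.T @ B @ X                                  # must be nonsingular

A1 = H.T @ A @ K
B1 = H.T @ (np.eye(n) - B @ X @ np.linalg.inv(R) @ Y.T) @ B @ K
A_hat, B_hat = A1[r:, r:], B1[r:, r:]            # trailing block, as in Theorem 2

print(np.sort(sla.eigvals(A_hat, B_hat).real))   # 4, 5, 6, 7
```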
Remark 3.
Some remarks on Theorem 2 are given below.
(1) 
If we apply the shift technique (5) and solve a homogeneous system y* A = 0 for the columns of Y, as suggested in Remark 1 (1), then the full column rank matrices X, Y ∈ C^{n×r} with A X = 0 and Y* A = 0 are obtained.
(2) 
The nonsingularity of R = Y* B X is essential in Theorem 2. If R is singular, then the deflation technique fails.
From the above theorem, we can obtain the following corollary directly.
Corollary 2.
Assume that (λ_i, x_i, y_i) (1 ≤ i ≤ n) are the eigenpairs of the GEP (1) with λ_1 = 0 and λ_i ≠ 0 (2 ≤ i ≤ n), x, y ∈ C^n are both nonzero vectors with A x = 0 and y* A = 0, H and K are both nonsingular matrices with H e_1 = y and K e_1 = x, and γ = y* B x ≠ 0. Construct a new GEP (12), where the coefficient matrices Â and B̂ are defined as
Â = A_1(2:n, 2:n),   B̂ = B_1(2:n, 2:n),    (16)
with
A_1 = H* A K,   B_1 = H* (I_n − (1/γ) B x y*) B K,    (17)
then (λ_i, x̂_i, ŷ_i) (2 ≤ i ≤ n) are the eigenpairs of the new GEP (12) with
x_i = K [−(1/γ) s x̂_i ; x̂_i],   y_i = H [−(1/γ̄) t* ŷ_i ; ŷ_i],    (18)
where s = s_1(:, 2:n) ∈ C^{1×(n−1)}, s_1 = y* B K, t = t_1(2:n, :) ∈ C^{n−1}, t_1 = H* B x, and γ̄ denotes the conjugate of γ.
Remark 4.
Some remarks on Corollary 2 are given below.
(1) 
There are many choices of the matrices H and K. In actual computation, we choose the nonsingular matrices H and K to be Householder matrices such that H y = ‖y‖_2 e_1 and K x = ‖x‖_2 e_1, which guarantees low computational cost and numerical stability.
(2) 
The condition γ = y* B x ≠ 0 is needed in Corollary 2. If γ = 0, the deflation technique fails. To circumvent this problem, we can shift λ_1 to infinity by using the shift technique (11) without deflation, and continue to compute the next eigenvalue of interest.
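For r = 1, Corollary 2 combined with the Householder choice of Remark 4 (1) gives a particularly cheap deflation step. A sketch follows (hypothetical test pencil with λ_1 = 0 already in place; `householder` is a helper of ours building the symmetric involutory matrix with K e_1 = x):

```python
import numpy as np
import scipy.linalg as sla

def householder(u):
    """Symmetric involutory K = I - 2 v v^T with K e_1 = u (u real, unit norm)."""
    e1 = np.zeros(u.size)
    e1[0] = 1.0
    v = u - e1
    if np.linalg.norm(v) < 1e-14:     # u is already e_1
        return np.eye(u.size)
    v = v / np.linalg.norm(v)
    return np.eye(u.size) - 2.0 * np.outer(v, v)

rng = np.random.default_rng(3)
n = 5
lam = np.array([0.0, 3.0, 5.0, 7.0, 11.0])   # lambda_1 = 0: shift already applied
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = B @ X @ np.diag(lam) @ np.linalg.inv(X)

x = X[:, 0] / np.linalg.norm(X[:, 0])        # A x = 0
y = sla.null_space(A.T)[:, 0]                # y^T A = 0

K, H = householder(x), householder(y)        # K e_1 = x, H e_1 = y
gamma = y @ B @ x                            # deflation condition: gamma != 0

A1 = H.T @ A @ K
B1 = H.T @ (np.eye(n) - np.outer(B @ x, y) / gamma) @ B @ K
A_hat, B_hat = A1[1:, 1:], B1[1:, 1:]        # (16): keep the trailing block

print(np.sort(sla.eigvals(A_hat, B_hat).real))   # 3, 5, 7, 11
```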

4. Shift-Deflation Technique

In this section, we synthesize the shift technique in Section 2 and the deflation technique in Section 3 to deflate some known eigenpairs (Λ_1, X_1, Y_1) ∈ C^{r×r} × C^{n×r} × C^{n×r}, and to find a large number of eigenpairs corresponding to the smallest eigenvalues in modulus of the GEP (1).
We first consider the case r = 1. Assume that (λ_1, x_1) is a simple eigenpair of the GEP (1) with λ_1 ≠ 0 and ‖x_1‖_2 = 1. Define the matrices Ã and B̃ as in (9); then Ã x_1 = 0. Solve y* Ã = 0 for the vector ỹ_1 with ‖ỹ_1‖_2 = 1, choose Householder matrices H and K such that K e_1 = x_1 and H e_1 = ỹ_1, and define the matrices Â and B̂ as in (16) and (17), where the matrices A and B in (17) are replaced by Ã and B̃, respectively. If γ = ỹ_1* B̃ x_1 ≠ 0, the shift-deflation technique can be completed; otherwise, we shift λ_1 to infinity by using the shift technique (11).
If λ_1 is not a simple eigenvalue, that is, r > 1, we should shift all eigenvalues equal to λ_1 by using the shift technique referred to in Remark 1 (3). Moreover, a numerical method may return more than one converged eigenpair at one iteration. Since low computational cost and numerical stability are guaranteed when H and K are chosen as Householder matrices in the case r = 1, we deflate these eigenpairs one by one via the relations (10) and (18). The resulting numerical algorithm is summarized as follows.
The first step in Algorithm 1 can be seen as an inner iteration; therefore, it should have its own stopping criterion; for example,
α_i = ‖A x_i − λ_i B x_i‖_2 / (‖A‖_F + |λ_i| ‖B‖_F) < τ_1,    (19)
where A and B are the coefficient matrices after implementing the shift-deflation technique, and τ_1 is a given tolerance. We call α_i the shift-deflated relative residual norm of the shift-deflated GEP. If we are interested in the accuracy of the approximation λ_i for the original GEP, that is, with the input matrices A and B as the coefficient matrices, we can simply test the following stopping criterion:
σ_i = σ_min(A − λ_i B) < τ_2,    (20)
where τ_2 is also a given tolerance. If we are also interested in the accuracy of the approximation x_i corresponding to λ_i, we can repeatedly use the recursions (18) and (10) to backtrack the computed eigenpair of the shift-deflated GEP through the iterations of Algorithm 1. To this end, we should save all the converged eigenpairs, the scalars γ, the vectors s, and the nonsingular matrices H and K; if H and K are chosen as Householder matrices, only their two defining vectors need to be saved. With the backtracked eigenvector x_i, we can test the accuracy of the approximation x_i with the following stopping criterion:
β_i = ‖A x_i − λ_i B x_i‖_2 / (‖A‖_F + |λ_i| ‖B‖_F) < τ_3,    (21)
where A and B are the input matrices, and τ_3 is a given tolerance. We call β_i the original relative residual norm of the GEP (1) with the input matrices A and B.
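Both residual measures are straightforward to evaluate; a Python sketch (hypothetical pencil and tolerances of ours) for a single converged eigenpair:

```python
import numpy as np
import scipy.linalg as sla

rng = np.random.default_rng(6)
n = 6
lam = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # hypothetical spectrum
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = B @ X @ np.diag(lam) @ np.linalg.inv(X)

ev, V = sla.eig(A, B)
i = int(np.argmin(np.abs(ev)))                   # smallest eigenvalue in modulus
mu, x = ev[i].real, V[:, i].real

# Relative residual norm (the alpha_i test) ...
alpha = np.linalg.norm(A @ x - mu * B @ x) / (
    np.linalg.norm(A, "fro") + abs(mu) * np.linalg.norm(B, "fro"))
# ... and smallest singular value of A - mu B (the sigma_i test).
sigma = sla.svdvals(A - mu * B)[-1]

print(alpha < 1e-10, sigma < 1e-8)               # both stopping tests pass
```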
Algorithm 1 Shift-deflation technique for the GEP.
Input: matrices A, B and the number k of the desired eigenpairs.
Output: k approximate eigenpairs and their relative residuals.
1: Seek some (denoted by r ≥ 1) eigenpairs {(λ_i, x_i)}_{i=1}^r with the smallest eigenvalues in modulus of the GEP with coefficient matrices A and B by a certain numerical method;
2: Backtrack the original eigenvectors by using the recursions (18) and (10), and compute their original relative residuals if necessary;
3: If k approximate eigenpairs have been obtained, stop; otherwise, set l = 1;
4: Compute x_1 = x_l/‖x_l‖_2. If λ_l is a multiple eigenvalue (of multiplicity m ≥ 2), shift all eigenvalues equal to λ_l by using the shift technique referred to in Remark 1 (3), and go to step 6; otherwise, set m = 1;
5: If λ_l ≠ 0, compute the shifted matrices Ã and B̃ as in (9); otherwise, set Ã = A and B̃ = B;
6: Compute the eigenvectors x̃_i (l + m ≤ i ≤ r) as in (10);
7: Solve the homogeneous system y* Ã = 0 for m linearly independent unit vectors {ỹ_i}_{i=1}^m, and set j = 1;
8: Choose Householder matrices H and K such that K e_1 = x_1 and H e_1 = ỹ_j;
9: Compute γ = ỹ_j* B̃ x_1. If γ = 0, shift the eigenvalue λ_{l+j−1} to infinity by using the shift technique (11) and set A = Ã, B = B̃; otherwise, compute the matrices Â and B̂ as in (16) and (17) and the eigenvectors x̂_i (l + j ≤ i ≤ r) as in (18), and set A = Â, B = B̂;
10: If j < m, set j = j + 1 and go to step 8;
11: If l < r, set l = l + 1 and go to step 4; otherwise, go to step 1.
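The r = 1 path through Algorithm 1 can be sketched end to end in a few lines of Python (dense `scipy.linalg.eig` stands in for 'eigs', the test pencil and seed are hypothetical, and multiple eigenvalues and the γ = 0 fallback are handled only minimally):

```python
import numpy as np
import scipy.linalg as sla

def householder(u):
    """Symmetric involutory matrix with first column u (u real, unit norm)."""
    e1 = np.zeros(u.size)
    e1[0] = 1.0
    v = u - e1
    if np.linalg.norm(v) < 1e-14:
        return np.eye(u.size)
    v = v / np.linalg.norm(v)
    return np.eye(u.size) - 2.0 * np.outer(v, v)

def shift_deflate_step(A, B):
    """One r = 1 pass: return the converged eigenvalue and the deflated pencil."""
    n = A.shape[0]
    ev, V = sla.eig(A, B)
    i = int(np.argmin(np.abs(ev)))            # smallest eigenvalue in modulus
    mu, x = ev[i].real, V[:, i].real
    x = x / np.linalg.norm(x)
    At = A - mu * np.outer(B @ x, x) if mu != 0 else A    # shift (9)
    y = sla.null_space(At.T)[:, 0]            # y^T At = 0
    gamma = y @ B @ x
    if abs(gamma) < 1e-12:                    # fall back to shift (11), no deflation
        return mu, A, B - np.outer(B @ x, x)
    K, H = householder(x), householder(y)
    A1 = H.T @ At @ K
    B1 = H.T @ (np.eye(n) - np.outer(B @ x, y) / gamma) @ B @ K
    return mu, A1[1:, 1:], B1[1:, 1:]         # deflation (16)-(17)

rng = np.random.default_rng(4)
n = 8
lam = np.arange(1.0, n + 1.0)                 # hypothetical spectrum 1, ..., 8
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + n * np.eye(n)
A = B @ X @ np.diag(lam) @ np.linalg.inv(X)

found = []
for _ in range(4):                            # compute the 4 smallest eigenvalues
    mu, A, B = shift_deflate_step(A, B)
    found.append(mu)
print(np.round(found, 6))                     # 1, 2, 3, 4
```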

5. Numerical Results

In this section, we report some numerical examples to illustrate the effectiveness of the shift-deflation technique for the GEP. All examples are performed in MATLAB R2015b on an Intel Core 2.9 GHz PC with 4 GB memory under the Windows 7 system. For simplicity, we use the MATLAB built-in function ‘eigs’ to seek some eigenpairs with the smallest eigenvalues in modulus in the first step of Algorithm 1, and ‘svds’ to compute σ_min. The integer r in Algorithm 1 is randomly chosen but does not exceed a given threshold r_max, due to the use of the MATLAB built-in function ‘eigs’. We denote by n the dimension of the GEP (1) and by k the number of desired eigenpairs.
Example 1.
We consider the GEP (1) where the coefficient matrices are given as
A = [3 1 0 2 0 9 ; 0 1 0 0 0 0 ; 0 0 1 0 0 3 ; 1 0 0 0 0 0 ; 0 1 0 0 0 0 ; 0 0 1 0 0 0],
B = [1 1 1 0 0 0 ; 0 1 0 0 0 0 ; 0 0 0 0 0 0 ; 0 0 0 1 0 0 ; 0 0 0 0 1 0 ; 0 0 0 0 0 1].
We can easily obtain all eigenpairs by using the MATLAB built-in function ‘eig’, which are shown in Table 1.
From Table 1, we can see that it is not necessary to apply the shift technique to λ_1 = 0, and the deflation condition scalar is γ = y_1* B x_1 = 1 when we deflate this zero eigenvalue. By performing Algorithm 1 to seek all six eigenpairs, we obtain the numerical results in Table 2. From the first and last columns of Table 2, we find that the proposed deflation technique is highly accurate.
Example 2.
We consider the GEP (1) whose coefficient matrices come from the Harwell–Boeing test matrices [14] bcsstk07 and bcsstm07. These matrices are of size 420 × 420.
We implement Algorithm 1 with k = 10 and r_max = 4, and show the numerical results in Table 3. From the last two columns of Table 3, we can see that the computed eigenpairs are quite accurate.
Example 3.
In this example, the coefficient matrices A and B are symmetric and sparse, and given by
A = [1 1 ; 1 2 1 ; ⋱ ; 1 n−1 1 ; 1 n],   B = [1 1 ; 1 1 1 ; ⋱ ; 1 1 1 ; 1 1],

i.e., A is the symmetric tridiagonal matrix with diagonal entries 1, 2, …, n and unit off-diagonal entries, and B is the symmetric tridiagonal matrix with all nonzero entries equal to 1.
It is obvious that some structural properties of the coefficient matrices, such as symmetry and sparsity, will no longer hold after performing the shift-deflation technique. However, we can still use some properties of the input coefficient matrices implicitly. In the above case, we can keep implementing sparse operations even after performing several shift-deflation steps, as long as all converged eigenpairs, the scalars γ (or the matrices R), the vectors s (or the matrices S) and the nonsingular matrices H and K are saved during the iterations. The numerical results for Algorithm 1 with the parameters n = 10,000, r_max = 10 and k = 200 are shown in Figure 1.
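The storage remark above can be isolated: when H is a Householder matrix, only its defining vector v need be saved, and products with H then cost O(n) rather than O(n²). A minimal check (hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 8
v = rng.standard_normal(n)
v = v / np.linalg.norm(v)              # saved Householder vector, unit norm
w = rng.standard_normal(n)

H = np.eye(n) - 2.0 * np.outer(v, v)   # explicit Householder matrix: O(n^2) storage
Hw = w - 2.0 * v * (v @ w)             # same product from v alone: O(n) work

print(np.allclose(H @ w, Hw))          # True
```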
From Figure 1, we can see that both the original smallest singular values σ_i and the original relative residual norms β_i retain high accuracy even though the shift-deflation technique is performed 200 times. Moreover, the original relative residual norm β_i has an accuracy similar to (slightly larger in most cases than) the shift-deflated relative residual norm α_i obtained with the MATLAB built-in function ‘eigs’. Therefore, the numerical results show that the shift-deflation technique for the large-scale GEP is effective and robust.

Author Contributions

Methodology, X.S.; Software, W.W., X.S. and A.L.; Supervision, X.C.; Writing—original draft, W.W. and X.C. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the National Natural Science Foundation of China (Grant Nos. 12001396 and 12201302), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20170591 and BK20200268), the Natural Science Foundation of Jiangsu Higher Education Institutions of China (Grant Nos. 21KJB110017 and 20KJB110005), the China Postdoctoral Science Foundation (Grant No. 2018M642130) and Qing Lan Project of the Jiangsu Higher Education Institutions.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Saad, Y. Numerical Methods for Large Eigenvalue Problems; Manchester University Press: Manchester, UK, 1992.
2. Poedts, S.; Meijer, P.M.; Goedbloed, J.P.; van der Vorst, H.; Jakoby, A. Parallel magnetohydrodynamics on the CM-5. In High-Performance Computing and Networking; Springer: Berlin/Heidelberg, Germany, 1994; pp. 365–370.
3. Yu, I.-W. Subspace iteration for eigen-solution of fluid-structure interaction problems. J. Press. Vessel Technol. ASME 1987, 109, 244–248.
4. Brebbia, C.A.; Venturini, W.S. Boundary Element Techniques: Applications in Fluid Flow and Computational Aspects; Computational Mechanics Publications: Southampton, UK, 1987.
5. Moler, C.B.; Stewart, G.W. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 1973, 10, 241–256.
6. Li, J.-F.; Li, W.; Vong, S.-W.; Luo, Q.-L.; Xiao, M. A Riemannian optimization approach for solving the generalized eigenvalue problem for nonsquare matrix pencils. J. Sci. Comput. 2020, 82, 67.
7. Saad, Y. Numerical solution of large nonsymmetric eigenvalue problems. Comput. Phys. Comm. 1989, 53, 71–90.
8. Rommes, J. Arnoldi and Jacobi–Davidson methods for generalized eigenvalue problems Ax = λBx with singular B. Math. Comput. 2008, 77, 995–1015.
9. Najafi, H.S.; Moosaei, H.; Moosaei, M. A new computational harmonic projection algorithm for large unsymmetric generalized eigenproblems. Appl. Math. Sci. 2008, 2, 1327–1334.
10. Jia, Z.-X.; Zhang, Y. A refined shift-invert Arnoldi algorithm for large unsymmetric generalized eigenproblems. Comput. Math. Appl. 2002, 44, 1117–1127.
11. Bai, Z.-Z.; Miao, C.-Q. On local quadratic convergence of inexact simplified Jacobi–Davidson method. Linear Algebra Appl. 2017, 520, 215–241.
12. Bai, Z.-Z.; Miao, C.-Q. On local quadratic convergence of inexact simplified Jacobi–Davidson method for interior eigenpairs of Hermitian eigenproblems. Appl. Math. Lett. 2017, 72, 23–28.
13. Li, J.-F.; Li, W.; Duan, X.-F.; Xiao, M. Newton's method for the parameterized generalized eigenvalue problem with nonsquare matrix pencils. Adv. Comput. Math. 2021, 47, 29.
14. Duff, I.S.; Grimes, R.G.; Lewis, J.G. Sparse matrix test problems. ACM Trans. Math. Softw. 1989, 15, 1–14.
Figure 1. Numerical results of Example 3.
Table 1. The eigenpairs of Example 1.

i      1    2    3    4    5    6
λ_i    0    1    1    2    3    ∞

(the corresponding right eigenvectors x_i and left eigenvectors y_i are omitted here)
Table 2. Numerical results of Example 1.

Computed Eigenvalues    α_i              σ_i
0                       0                0
1.0000                  0.7476 × 10^−16  0.4342 × 10^−16
1.0000                  0.6000 × 10^−16  0.1462 × 10^−15
2.0000                  0.7793 × 10^−16  0.7610 × 10^−16
3.0000                  0.9278 × 10^−16  0.6686 × 10^−15
0.6525 × 10^+16         0                0.1083 × 10^−15
Table 3. Numerical results of Example 2.

r    α_i              β_i              σ_i
1    0.3041 × 10^−18  0.3041 × 10^−18  0.6831 × 10^−7
4    0.5173 × 10^−17  0.3549 × 10^−17  0.4019 × 10^−7
     0.9864 × 10^−17  0.2849 × 10^−16  0.9069 × 10^−7
     0.4854 × 10^−17  0.1315 × 10^−16  0.2009 × 10^−7
     0.6373 × 10^−17  0.1133 × 10^−16  0.6716 × 10^−7
3    0.1214 × 10^−16  0.1419 × 10^−16  0.5347 × 10^−9
     0.4649 × 10^−17  0.9685 × 10^−17  0.1777 × 10^−6
     0.1473 × 10^−16  0.2073 × 10^−16  0.1500 × 10^−6
2    0.1525 × 10^−16  0.2238 × 10^−16  0.9081 × 10^−7
     0.1005 × 10^−16  0.1829 × 10^−16  0.3821 × 10^−7

Wei, W.; Chen, X.; Shi, X.; Luo, A. A Shift-Deflation Technique for Computing a Large Quantity of Eigenpairs of the Generalized Eigenvalue Problems. Symmetry 2022, 14, 2547. https://doi.org/10.3390/sym14122547
