Article

Weighted Block Golub-Kahan-Lanczos Algorithms for Linear Response Eigenvalue Problem

1 School of Science, Jiangnan University, Wuxi 214122, China
2 College of Computer and Information Science, Fujian Agriculture and Forestry University, Fuzhou 350002, China
3 School of Mathematical Sciences, Shanghai Key Laboratory of PMMP, East China Normal University, Shanghai 200241, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 53; https://doi.org/10.3390/math7010053
Submission received: 30 November 2018 / Revised: 25 December 2018 / Accepted: 29 December 2018 / Published: 7 January 2019
(This article belongs to the Special Issue Mathematics and Engineering)

Abstract: In order to compute all or some of the eigenvalues lying in a cluster, we propose a weighted block Golub-Kahan-Lanczos algorithm for the linear response eigenvalue problem. Error bounds for the approximations to an eigenvalue cluster, as well as to its corresponding eigenspace, are established and show the advantages of the block method. A practical thick-restart strategy is applied to the block algorithm to eliminate the increasing computational and memory costs, as well as the numerical instability. Numerical examples illustrate the effectiveness of our new algorithms.

1. Introduction

In this paper, we are interested in solving the linear response eigenvalue problem (LREP):
$$ H z := \begin{bmatrix} 0 & M \\ K & 0 \end{bmatrix} \begin{bmatrix} u \\ v \end{bmatrix} = \lambda \begin{bmatrix} u \\ v \end{bmatrix} = \lambda z, $$
where $K$ and $M$ are $N \times N$ real symmetric positive definite matrices. Such a problem arises from studying the excitation energy of many-particle systems in computational quantum chemistry and physics [1,2,3]. It is also known as the Bethe-Salpeter (BS) eigenvalue problem [4] or the random phase approximation (RPA) eigenvalue problem [5]. There is immense past and recent work on developing efficient numerical algorithms and attractive theories for LREP [6,7,8,9,10,11,12,13,14,15].
Since all the eigenvalues of $H$ are real and nonzero and appear in pairs $\{\lambda, -\lambda\}$ [6], we order the eigenvalues in ascending order, i.e.,
$$ -\lambda_1 \le \cdots \le -\lambda_N < \lambda_N \le \cdots \le \lambda_1. $$
In this paper, we focus on a small portion of the positive eigenvalues of LREP, i.e., $\lambda_i$, $i = k, k+1, \dots, \ell$, with $1 \le k \le \ell \le N$ and $\ell - k + 1 \ll N$, together with their corresponding eigenvectors. We only consider the real case; all the results can be easily extended to the complex case.
The weighted Golub-Kahan-Lanczos method (wGKL) for LREP was introduced in [16]. At the $j$-th iteration it recursively produces a much smaller projection $\begin{bmatrix} 0 & B_j \\ B_j^T & 0 \end{bmatrix}$ of $H$, where $B_j \in \mathbb{R}^{j\times j}$ is upper bidiagonal. Afterwards, approximate eigenpairs of $H$ can be constructed from the singular value decomposition of $B_j$. The convergence analysis shows that running $k$ iterations of wGKL is equivalent to running $2k$ iterations of a weighted Lanczos algorithm for $H$ [16]. Actually, $B_j$ can also be a lower bidiagonal matrix, and the same discussion applies as in the upper bidiagonal case. In the following, we only consider the upper bidiagonal case.
It is well known that the single-vector Lanczos method is widely used for computing a small number of extreme eigenvalues, and that it may converge very slowly when the wanted eigenvalues lie in a cluster [17]. Instead, a block Lanczos method with a suitable block size is capable of computing a cluster of eigenvalues, including multiple eigenvalues, very quickly. Motivated by this idea, we develop a block version of the wGKL method of [16] in order to efficiently find all or some of the positive eigenvalues lying in a cluster for LREP. Based on the standard block Lanczos convergence theory in [17], error bounds for the approximations to an eigenvalue cluster, as well as to its corresponding eigenspace, are established to illustrate the advantage of our weighted block Golub-Kahan-Lanczos algorithm (wbGKL).
As the size of the Krylov subspace increases, the storage demands and computational costs of a simple version of a block Lanczos method grow sharply, and its numerical stability may suffer [18]. Several kinds of efficient restarting strategies have been developed for the classic Lanczos method to eliminate these effects, such as the implicit restart method [19] and the thick restart method [20]. In order to make our block method more practical, and to exploit the special structure of LREP, we apply the thick restart strategy to our block method.
The rest of this paper is organized as follows. Section 2 gives some necessary preliminaries for later use. In Section 3, the weighted block Golub-Kahan-Lanczos algorithm (wbGKL) for LREP is presented, and its convergence analysis is discussed. Section 4 proposes the thick-restart weighted block Golub-Kahan-Lanczos algorithm (wbGKL-TR). Numerical examples are presented in Section 5 to illustrate the efficiency of our new algorithms. Finally, some conclusions are given in Section 6.
Throughout this paper, $\mathbb{R}^{m\times n}$ is the set of all $m \times n$ real matrices, $\mathbb{R}^n = \mathbb{R}^{n\times 1}$, and $\mathbb{R} = \mathbb{R}^1$. $I_n$ (or simply $I$ if its dimension is clear from the context) is the $n \times n$ identity matrix, and $0_{m\times n}$ is the $m \times n$ zero matrix. The superscript "$T$" denotes the transpose. $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $\|\cdot\|_2$ denotes the 2-norm of a matrix or a vector. For a matrix $X \in \mathbb{R}^{m\times n}$, $\mathrm{rank}(X)$ denotes the rank of $X$, and $\mathcal{R}(X) = \mathrm{span}(X)$ denotes the column space of $X$; the submatrices $X_{i:j,:}$ and $X_{:,k:\ell}$ of $X$ are composed of rows $i$ to $j$ and columns $k$ to $\ell$, respectively. For matrices or scalars $X_i$, $\mathrm{diag}(X_1, \dots, X_k)$ denotes the block diagonal matrix with $i$-th diagonal block $X_i$.

2. Preliminaries

For a symmetric positive definite matrix $W \in \mathbb{R}^{N\times N}$, the $W$-inner product is defined as follows:
$$ \langle x, y \rangle_W := y^T W x, \qquad x, y \in \mathbb{R}^N. $$
If $\langle x, y \rangle_W = 0$, then we write $x \perp_W y$ and say that $x$ and $y$ are $W$-orthogonal. The projector $\Pi_W$ is called the $W$-orthogonal projector onto a subspace $\mathcal{Y}$ if for any $y \in \mathbb{R}^N$,
$$ \Pi_W y \in \mathcal{Y}, \qquad (I - \Pi_W) y \perp_W \mathcal{Y}. $$
For two subspaces $\mathcal{X}, \mathcal{Y} \subseteq \mathbb{R}^N$, suppose $k = \dim(\mathcal{X}) \le \dim(\mathcal{Y}) = \ell$. If $X \in \mathbb{R}^{N\times k}$ and $Y \in \mathbb{R}^{N\times \ell}$ are $W$-orthonormal bases of $\mathcal{X}$ and $\mathcal{Y}$, respectively, i.e.,
$$ X^T W X = I_k, \quad \mathcal{X} = \mathcal{R}(X) \qquad \text{and} \qquad Y^T W Y = I_\ell, \quad \mathcal{Y} = \mathcal{R}(Y), $$
and $\nu_j$ for $j = 1, \dots, k$ with $\nu_1 \ge \cdots \ge \nu_k$ are the singular values of $Y^T W X$, then the $W$-canonical angles $\theta_W^{(j)}(\mathcal{X}, \mathcal{Y})$ from $\mathcal{X}$ to $\mathcal{Y}$ are defined by
$$ 0 \le \theta_W^{(j)}(\mathcal{X}, \mathcal{Y}) = \arccos \nu_j \le \pi/2, \qquad \text{for } j = 1, \dots, k. $$
If $k = \ell$, these angles are said to be between $\mathcal{X}$ and $\mathcal{Y}$. Obviously, $\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}) \le \cdots \le \theta_W^{(k)}(\mathcal{X}, \mathcal{Y})$. Set
$$ \Theta_W(\mathcal{X}, \mathcal{Y}) = \mathrm{diag}(\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}), \dots, \theta_W^{(k)}(\mathcal{X}, \mathcal{Y})). $$
In particular, if $k = 1$, i.e., $\mathcal{X}$ is spanned by a single vector, there is only one $W$-canonical angle from $\mathcal{X}$ to $\mathcal{Y}$. In the following, we may use a matrix in one or both arguments of $\Theta_W(\cdot, \cdot)$, i.e., $\Theta_W(X, \mathcal{Y})$, with the understanding that it means the subspace spanned by the columns of the matrix argument.
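For concreteness, the $W$-canonical angles can be computed exactly as in the definition above. The following is a minimal MATLAB sketch (the function name is ours, not from the paper): it $W$-orthonormalizes the given bases via Cholesky factors of their Gram matrices and then reads the angles off the singular values of $Y^T W X$.

    function theta = w_canonical_angles(X, Y, W)
    % W-canonical angles from span(X) to span(Y), W symmetric positive definite.
    X = X / chol(X' * W * X);   % now X'*W*X = I (chol returns R with R'*R = G)
    Y = Y / chol(Y' * W * Y);   % now Y'*W*Y = I
    nu = svd(Y' * W * X);       % singular values nu_1 >= ... >= nu_k
    nu = min(max(nu, 0), 1);    % guard against rounding slightly outside [0,1]
    theta = acos(nu);           % theta^(1) <= ... <= theta^(k)
    end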
The following two lemmas are important for our later analysis; for proofs and more details, the reader is referred to [12,16].
Lemma 1.
([12] Lemma 3.2). Let $\mathcal{X}$ and $\mathcal{Y}$ be two subspaces of $\mathbb{R}^N$ with equal dimensions $\dim(\mathcal{X}) = \dim(\mathcal{Y}) = k$. Suppose $\theta_W^{(1)}(\mathcal{X}, \mathcal{Y}) < \pi/2$. Then, for any set $\{y_1, y_2, \dots, y_{k_1}\}$ of basis vectors in $\mathcal{Y}$, where $1 \le k_1 \le k$, there is a set $\{x_1, x_2, \dots, x_{k_1}\}$ of linearly independent vectors in $\mathcal{X}$ such that $\Pi_W x_j = y_j$ for $1 \le j \le k_1$, where $\Pi_W$ is the $W$-orthogonal projector onto $\mathcal{Y}$.
Lemma 2.
([16] Proposition 3.1). The matrix $MK$ has $N$ positive eigenvalues $\lambda_1^2 \ge \lambda_2^2 \ge \cdots \ge \lambda_N^2$ with $\lambda_j > 0$. The corresponding right eigenvectors $\xi_1, \dots, \xi_N$ can be chosen $K$-orthonormal, and the corresponding left eigenvectors $\eta_1, \dots, \eta_N$ can be chosen $M$-orthonormal. In particular, for given $\{\xi_j\}$, one can choose $\eta_j = \lambda_j^{-1} K \xi_j$, and for given $\{\eta_j\}$, $\xi_j = \lambda_j^{-1} M \eta_j$, for $j = 1, \dots, N$.

3. Weighted Block Golub-Kahan-Lanczos Algorithm

3.1. Weighted Block Golub-Kahan-Lanczos Algorithm

In this section, we introduce the weighted block Golub-Kahan-Lanczos algorithm (wbGKL) for LREP, which is a block version of the weighted Golub-Kahan-Lanczos algorithm [16]. Algorithm 1 gives the process of recursively generating the $M$-orthonormal matrix $\mathbf{X}_n$, the $K$-orthonormal matrix $\mathbf{Y}_n$, and the block bidiagonal matrix $\mathbf{B}_n$. Given $Y_1 \in \mathbb{R}^{N\times n_b}$ with $Y_1^T K Y_1 = I_{n_b}$, and denoting $E_n^T = [0_{n_b\times(n-1)n_b}, I_{n_b}]$ and
$$ \mathbf{X}_n = [X_1, \dots, X_n], \qquad \mathbf{Y}_n = [Y_1, \dots, Y_n], \qquad \mathbf{B}_n = \begin{bmatrix} A_1 & B_1 & & \\ & A_2 & \ddots & \\ & & \ddots & B_{n-1} \\ & & & A_n \end{bmatrix}, $$
we have the following relations from Algorithm 1:
$$ K \mathbf{Y}_n = \mathbf{X}_n \mathbf{B}_n, \qquad M \mathbf{X}_n = \mathbf{Y}_n \mathbf{B}_n^T + Y_{n+1} B_n^T E_n^T, \tag{1} $$
and
$$ \mathbf{X}_n^T M \mathbf{X}_n = I_{n n_b} = \mathbf{Y}_n^T K \mathbf{Y}_n. $$
Here and in the sequel, boldface denotes the concatenated matrices, while $A_j$ and $B_j$ without boldface are the $n_b \times n_b$ blocks generated at step $j$ of Algorithm 1.
Remark 1.
In Algorithm 1, we only consider the case that $\mathrm{rank}(\tilde{X}_j) = \mathrm{rank}(\tilde{Y}_{j+1}) = n_b$; no further treatment is provided for the cases $\mathrm{rank}(\tilde{X}_j) < n_b$ or $\mathrm{rank}(\tilde{Y}_{j+1}) < n_b$. Because $K$ and $M$ are both symmetric positive definite, the two Cholesky factors $W$ in Step 2 are both invertible.
Algorithm 1: wbGKL
    1. Choose $Y_1$ satisfying $Y_1^T K Y_1 = I_{n_b}$, and set $W = I_{n_b}$, $B_0 = I_{n_b}$, $X_0 = 0_{N\times n_b}$. Compute $F = K Y_1$.
    2. For $j = 1, 2, \dots, n$
              $\tilde{X}_j = F W - X_{j-1} B_{j-1}$
              $F = M \tilde{X}_j$
             Do the Cholesky decomposition $\tilde{X}_j^T F = W^T W$
              $A_j = W$, $W = \mathrm{inv}(W)$, $X_j = \tilde{X}_j W$         % $W = \mathrm{inv}(W)$ means $W \leftarrow W^{-1}$
              $\tilde{Y}_{j+1} = F W - Y_j A_j^T$
              $F = K \tilde{Y}_{j+1}$
             Do the Cholesky decomposition $\tilde{Y}_{j+1}^T F = W^T W$
              $B_j = W^T$, $W = \mathrm{inv}(W)$, $Y_{j+1} = \tilde{Y}_{j+1} W$
         End
Remark 2.
As $j$ increases in Step 2, the $M$-orthogonality of $X_j$ and the $K$-orthogonality of $Y_j$ will gradually be lost. Thus, in practice, we can add a re-orthogonalization process in each iteration to eliminate this defect. The same strategy is applied in the following algorithms.
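To make the recursion concrete, here is a minimal MATLAB sketch of Algorithm 1 (the function name and interface are ours); it omits the rank-deficiency handling of Remark 1 and the re-orthogonalization of Remark 2.

    function [X, Y, B] = wbgkl(K, M, Y1, n)
    % Weighted block Golub-Kahan-Lanczos sketch: K, M are N-by-N s.p.d.,
    % Y1 is N-by-nb with Y1'*K*Y1 = I. Returns X (N x n*nb, M-orthonormal),
    % Y (N x (n+1)*nb, K-orthonormal), and the block bidiagonal B (n*nb x n*nb).
    [N, nb] = size(Y1);
    X = zeros(N, n*nb);  Y = zeros(N, (n+1)*nb);  B = zeros(n*nb);
    Y(:, 1:nb) = Y1;
    Xprev = zeros(N, nb);  Bprev = eye(nb);  W = eye(nb);  F = K * Y1;
    for j = 1:n
        c  = (j-1)*nb + (1:nb);               % current block of columns
        Xt = F / W - Xprev * Bprev;           % X~_j = F*inv(W) - X_{j-1}*B_{j-1}
        F  = M * Xt;
        Aj = chol(Xt' * F);                   % X~_j'*M*X~_j = Aj'*Aj
        Xj = Xt / Aj;                         % X_j = X~_j*inv(Aj), M-orthonormal
        X(:, c) = Xj;  B(c, c) = Aj;
        Yt = F / Aj - Y(:, c) * Aj';          % Y~_{j+1} = M*X_j - Y_j*Aj'
        F  = K * Yt;
        W  = chol(Yt' * F);                   % Y~_{j+1}'*K*Y~_{j+1} = W'*W
        Bprev = W';                           % B_j = W'
        if j < n, B(c, c+nb) = Bprev; end     % superdiagonal block of B_n
        Y(:, c+nb) = Yt / W;                  % Y_{j+1} = Y~_{j+1}*inv(W)
        Xprev = Xj;
    end
    end

In exact arithmetic, K*Y(:,1:n*nb) equals X*B, and X'*M*X and Y'*K*Y are identity matrices, which is the relation (1); the final value of Bprev is the residual block $B_n$ appearing there.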
From (1), we have
$$ \begin{bmatrix} 0 & M \\ K & 0 \end{bmatrix} \begin{bmatrix} \mathbf{Y}_n & 0 \\ 0 & \mathbf{X}_n \end{bmatrix} = \begin{bmatrix} \mathbf{Y}_n & 0 \\ 0 & \mathbf{X}_n \end{bmatrix} \begin{bmatrix} 0 & \mathbf{B}_n^T \\ \mathbf{B}_n & 0 \end{bmatrix} + \begin{bmatrix} Y_{n+1} \\ 0 \end{bmatrix} B_n^T E_{2n}^T $$
with $E_{2n}^T = [0_{n_b\times(2n-1)n_b}, I_{n_b}]$. Then, approximate eigenpairs of $H$ can be obtained by solving a small eigenvalue problem for $\begin{bmatrix} 0 & \mathbf{B}_n^T \\ \mathbf{B}_n & 0 \end{bmatrix}$. Suppose $\mathbf{B}_n$ has the singular value decomposition
$$ \mathbf{B}_n = \Phi \Sigma_n \Psi^T, \tag{2} $$
where $\Phi = [\phi_1, \phi_2, \dots, \phi_{n n_b}]$, $\Psi = [\psi_1, \psi_2, \dots, \psi_{n n_b}]$, and $\Sigma_n = \mathrm{diag}(\sigma_1, \sigma_2, \dots, \sigma_{n n_b})$ with $\sigma_1 \ge \cdots \ge \sigma_{n n_b} > 0$. Thus, we can take $\pm\sigma_j$ ($1 \le j \le n n_b$) as the Ritz values of $H$ and
$$ \tilde{z}_j = \frac{1}{\sqrt{2}} \begin{bmatrix} \mathbf{Y}_n \psi_j \\ \pm \mathbf{X}_n \phi_j \end{bmatrix}, \qquad 1 \le j \le n n_b, $$
as the corresponding $\mathcal{K}$-orthonormal Ritz vectors, where $\mathcal{K} = \begin{bmatrix} K & 0 \\ 0 & M \end{bmatrix}$.
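Continuing the MATLAB sketch after Algorithm 1, the Ritz pairs can be assembled as follows (a hedged sketch with our variable names; the vectors are stacked as $z = [y; x]$ to match the block partition of $H$):

    [Phi, Sigma, Psi] = svd(B);            % B = Phi*Sigma*Psi', sigma_1 >= ... > 0
    sigma = diag(Sigma);                   % Ritz values of H are +sigma_j and -sigma_j
    Yn = Y(:, 1:n*nb);                     % drop the residual block Y_{n+1}
    Zp = [Yn*Psi;  X*Phi] / sqrt(2);       % Ritz vectors for +sigma_j
    Zm = [Yn*Psi; -X*Phi] / sqrt(2);       % Ritz vectors for -sigma_j
    % Each column z of Zp satisfies z'*blkdiag(K, M)*z = 1 (K-orthonormality).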

3.2. Convergence Analysis

In this section, we first consider the convergence analysis when using the first few $\sigma_j$ as approximations to the first few $\lambda_j$; then, similar theories are presented for using the last few $\sigma_j$ as approximations to the last few $\lambda_j$. A block Lanczos method with a suitable block size, not smaller than the size of an eigenvalue cluster, can compute all eigenvalues in the cluster. We therefore consider the $i$-th to $(i+n_b-1)$-st eigenpairs of LREP, in which the $k$-th to $\ell$-th eigenvalues form a cluster, as in the following figure, with $1 \le i \le k \le \ell \le i+n_b-1 \le n n_b$ and $k \ll n$.
[Figure: the squared eigenvalues $\lambda_1^2 \ge \cdots \ge \lambda_N^2$ on a line, with the cluster $\lambda_k^2, \dots, \lambda_\ell^2$ lying inside the window $\lambda_i^2, \dots, \lambda_{i+n_b-1}^2$.]
Here, the squares of the eigenvalues of LREP are listed. Motivated by [12,17], we analyze the convergence of the clustered eigenvalues and their corresponding eigenspace, and give error bounds for the approximate eigenpairs belonging to the eigenvalue cluster together, instead of separately for each individual eigenpair.
We first give some notation and equations, which are critical for our main theorem. Note that from (1), we get
$$ M K \mathbf{Y}_n = \mathbf{Y}_n \mathbf{B}_n^T \mathbf{B}_n + Y_{n+1} B_n^T A_n E_n^T. \tag{3} $$
Since (2) is the singular value decomposition of $\mathbf{B}_n$, the eigenvalues of $\mathbf{B}_n^T \mathbf{B}_n$ are $\sigma_j^2$ with associated eigenvectors $\psi_j$ for $1 \le j \le n n_b$.
From Lemma 2, if we let $\Xi = [\xi_1, \dots, \xi_N]$ and $\Gamma = [\eta_1, \dots, \eta_N]$, then $\Gamma = K \Xi \Lambda^{-1}$, and
$$ M K \Xi = \Xi \Lambda^2, \tag{4} $$
where $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_N)$.
Write $\Xi$ and $\Lambda^2$ in the partitioned form
$$ \Xi = [\,\Xi_1, \Xi_2, \Xi_3\,], \qquad \Lambda^2 = \mathrm{diag}(\Lambda_1^2, \Lambda_2^2, \Lambda_3^2), $$
where $\Xi_2 = \Xi_{:,\,i:i+n_b-1}$ and $\Lambda_2^2 = \mathrm{diag}(\lambda_i^2, \dots, \lambda_{i+n_b-1}^2)$ correspond to the window containing the cluster.
Let $\check{\Xi}_2 = \Xi_{:,k:\ell}$ and $\check{\Lambda}_2^2 = \mathrm{diag}(\lambda_k^2, \dots, \lambda_\ell^2)$. Denote by $C_j$ the first-kind Chebyshev polynomial of degree $j$, $0 \le j \le n$.
In the following, we assume
$$ \theta_K^{(1)}(Y_1, \Xi_2) < \pi/2, \tag{5} $$
i.e., $\mathrm{rank}(Y_1^T K \Xi_2) = n_b$. Then, from Lemma 1, there exists $Z \in \mathbb{R}^{N\times(\ell-k+1)}$ with $\mathcal{R}(Z) \subseteq \mathcal{R}(Y_1)$ such that
$$ \Xi_2 \Xi_2^T K Z = \check{\Xi}_2. \tag{6} $$
Theorem 1.
Suppose $\theta_K^{(1)}(Y_1, \Xi_2) < \pi/2$, and let $Z$ satisfy (6). Then we have
$$ \|\mathrm{diag}(\lambda_k^2 - \sigma_k^2, \dots, \lambda_\ell^2 - \sigma_\ell^2)\|_F \le (\lambda_k^2 - \lambda_N^2)\,\pi_{i,k,\ell}^2\, \frac{\|\tan^2\Theta_K(\check{\Xi}_2, Z)\|_F}{C_{n-k}^2(1 + 2\gamma_{i,\ell})} \tag{7} $$
with
$$ \gamma_{i,\ell} = \frac{\lambda_\ell^2 - \lambda_{i+n_b}^2}{\lambda_{i+n_b}^2 - \lambda_N^2}, \qquad \pi_{i,k,\ell} = \frac{\max_{i+n_b \le j \le N} \prod_{m=1}^{k-1} |\sigma_m^2 - \lambda_j^2|}{\min_{k \le t \le \ell} \prod_{m=1}^{k-1} |\sigma_m^2 - \lambda_t^2|}, $$
and
$$ \|\sin\Theta_K(\check{\Xi}_2, \mathbf{Y}_n \Psi_{:,k:\ell})\|_F \le \pi_{i,k}\, \frac{\sqrt{1 + c^2\|A_n^T B_n\|_2^2/\delta^2}}{C_{n-i}(1 + 2\gamma_{i,\ell})}\, \|\tan\Theta_K(\check{\Xi}_2, Z)\|_F, \tag{8} $$
where the constant $c$ lies between $1$ and $\pi/2$, with $c = 1$ if $k = \ell$, and
$$ \delta = \min_{\substack{k \le j \le \ell, \\ p < k \text{ or } p > \ell}} |\lambda_j^2 - \sigma_p^2|, \qquad \pi_{i,k} = \prod_{j=1}^{i-1} \frac{\lambda_j^2 - \lambda_N^2}{\lambda_j^2 - \lambda_k^2}. $$
In particular, if $\sigma_{k-1}^2 \ge \lambda_k^2$, then
$$ \pi_{i,k,\ell} = \prod_{m=1}^{k-1} \frac{|\sigma_m^2 - \lambda_N^2|}{|\sigma_m^2 - \lambda_k^2|}. $$
Proof. 
Let $K = L L^T$ be the Cholesky factorization of $K$. Multiplying by $L^T$ from the left, (4) can be rewritten as $L^T M L (L^T \Xi) = (L^T \Xi) \Lambda^2$, so $(\lambda_j^2, L^T \xi_j)$ is an eigenpair of $L^T M L$ for $j = 1, \dots, N$, and $L^T \xi_1, \dots, L^T \xi_N$ are orthonormal. Applying the same process to (3), we have
$$ L^T M L V_n = V_n \mathbf{B}_n^T \mathbf{B}_n + V_{n+1} B_n^T A_n E_n^T, \tag{9} $$
where $V_n = L^T \mathbf{Y}_n$, $V_{n+1} = L^T Y_{n+1}$, and $V_n^T V_n = I_{n n_b}$, which can be seen as the relation generated by applying the standard Lanczos process to $L^T M L$. Thus, $\sigma_1^2, \dots, \sigma_{n n_b}^2$ are the Ritz values of $L^T M L$, with corresponding orthonormal Ritz vectors $V_n \psi_1, \dots, V_n \psi_{n n_b}$.
Premultiplying Equation (6) by $L^T$, we have $L^T \Xi_2 \Xi_2^T L (L^T Z) = L^T \check{\Xi}_2$. Consequently, the conditions of the block Lanczos convergence results, Theorems 4.1 and 5.1 in [17], are satisfied. Thus, using Theorem 5.1 in [17], one has
$$ \|\mathrm{diag}(\lambda_k^2 - \sigma_k^2, \dots, \lambda_\ell^2 - \sigma_\ell^2)\|_F \le (\lambda_k^2 - \lambda_N^2)\,\pi_{i,k,\ell}^2\, \frac{\|\tan^2\Theta(L^T \check{\Xi}_2, L^T Z)\|_F}{C_{n-k}^2(1 + 2\gamma_{i,\ell})}. $$
Then the bound (7) is easily obtained by using ([21] Theorem 4.2)
$$ \Theta(L^T \check{\Xi}_2, L^T Z) = \Theta_K(\check{\Xi}_2, Z). \tag{10} $$
Let $\Pi_n = V_n V_n^T$; then $\Pi_n$ is the orthogonal projector onto the block Krylov subspace spanned by the columns of $V_n$, and from (9) we have
$$ \|\Pi_n L^T M L (I - \Pi_n)\|_2 = \|V_n V_n^T L^T M L (I - V_n V_n^T)\|_2 = \|V_n (\mathbf{B}_n^T \mathbf{B}_n V_n^T + E_n A_n^T B_n V_{n+1}^T) - V_n \mathbf{B}_n^T \mathbf{B}_n V_n^T\|_2 = \|V_n E_n A_n^T B_n V_{n+1}^T\|_2 = \|A_n^T B_n\|_2. $$
Consequently, applying Theorem 4.1 in [17], we get
$$ \|\sin\Theta(L^T \check{\Xi}_2, V_n \Psi_{:,k:\ell})\|_F \le \pi_{i,k}\, \frac{\sqrt{1 + \|A_n^T B_n\|_2^2/\delta^2}}{C_{n-i}(1 + 2\gamma_{i,\ell})}\, \|\tan\Theta(L^T \check{\Xi}_2, L^T Z)\|_F. $$
Then the bound (8) is derived by using $\Theta(L^T \check{\Xi}_2, V_n \Psi_{:,k:\ell}) = \Theta_K(\check{\Xi}_2, \mathbf{Y}_n \Psi_{:,k:\ell})$ and (10). ☐
Theorem 1 bounds the errors of the approximations to an eigenvalue cluster, including multiple eigenvalues. It can also be applied to the single-eigenvalue case; the following corollary is derived by setting $k = \ell = i$, except for the left equality in (11), which needs to be proved.
Corollary 1.
Suppose $\theta_K^{(1)}(Y_1, \Xi_2) < \pi/2$. Then for $1 \le i \le n n_b$, there exists a vector $y \in \mathcal{R}(Y_1)$ such that $\Xi_2 \Xi_2^T K y = \xi_i$, and
$$ \lambda_i^2 - \sigma_i^2 \le (\lambda_i^2 - \lambda_N^2)\,\pi_{i,j}^2\, \frac{\tan^2\theta_K(\xi_i, y)}{C_{n-i}^2(1 + 2\gamma_i)} $$
with
$$ \gamma_i = \frac{\lambda_i^2 - \lambda_{i+n_b}^2}{\lambda_{i+n_b}^2 - \lambda_N^2}, \qquad \pi_{i,j} = \max_{i+n_b \le j \le N} \prod_{m=1}^{i-1} \frac{|\sigma_m^2 - \lambda_j^2|}{|\sigma_m^2 - \lambda_i^2|}, $$
and
$$ \Big[\Big(1 - \frac{\sigma_i^2}{\lambda_i^2}\Big) + \frac{\sigma_i^2}{\lambda_i^2} \sin^2\theta_M(\eta_i, \mathbf{X}_n \phi_i)\Big]^{1/2} = \sin\theta_K(\xi_i, \mathbf{Y}_n \psi_i) \le \pi_i\, \frac{\sqrt{1 + \|A_n^T B_n\|_2^2/\delta^2}}{C_{n-i}(1 + 2\gamma_i)}\, \tan\theta_K(\xi_i, y) \tag{11} $$
with
$$ \delta = \min_{j \ne i} |\lambda_j^2 - \sigma_i^2|, \qquad \pi_i = \prod_{j=1}^{i-1} \frac{\lambda_j^2 - \lambda_N^2}{\lambda_j^2 - \lambda_i^2}. $$
Proof. 
We only prove the left equality in (11). From (4) and Lemma 2, we have $\Xi = M K \Xi \Lambda^{-2} = M \Gamma \Lambda^{-1}$. If we let $Z_1 = (\mathbf{Y}_n \psi_i)^T K \xi_i$ and $Z_2 = (\mathbf{X}_n \phi_i)^T M \eta_i$, then we get $Z_1 = \frac{\sigma_i}{\lambda_i} Z_2$ by using $K \mathbf{Y}_n \Psi = \mathbf{X}_n \mathbf{B}_n \Psi = \mathbf{X}_n \Phi \Sigma_n$. Thus
$$ \sin^2\theta_K(\xi_i, \mathbf{Y}_n \psi_i) = 1 - \cos^2\theta_K(\xi_i, \mathbf{Y}_n \psi_i) = 1 - Z_1^T Z_1 = 1 - \frac{\sigma_i^2}{\lambda_i^2} Z_2^T Z_2 = 1 - \frac{\sigma_i^2}{\lambda_i^2} \cos^2\theta_M(\eta_i, \mathbf{X}_n \phi_i) = 1 - \frac{\sigma_i^2}{\lambda_i^2} + \frac{\sigma_i^2}{\lambda_i^2} \sin^2\theta_M(\eta_i, \mathbf{X}_n \phi_i). $$
Then,
$$ \sin\theta_K(\xi_i, \mathbf{Y}_n \psi_i) = \Big[1 - \frac{\sigma_i^2}{\lambda_i^2} + \frac{\sigma_i^2}{\lambda_i^2} \sin^2\theta_M(\eta_i, \mathbf{X}_n \phi_i)\Big]^{1/2}. $$
 ☐
Next, we consider using the last few $\sigma_j$ to approximate the last few eigenvalues $\lambda_{N - n n_b + j}$, $j = k, \dots, \ell$, where $\lambda_{N - n n_b + k}$ to $\lambda_{N - n n_b + \ell}$ form a cluster within $\lambda_{\hat{i}}$ to $\lambda_{\hat{i}+n_b-1}$, as described in the following figure, where $N + 1 - n n_b \le \hat{i} \le \hat{k} \le \hat{\ell} \le \hat{i} + n_b - 1 \le N$, with $\hat{k} = N - n n_b + k$ and $\hat{\ell} = N - n n_b + \ell$.
[Figure: the squared eigenvalues near the bottom of the spectrum, with the cluster $\lambda_{\hat{k}}^2, \dots, \lambda_{\hat{\ell}}^2$ lying inside the window $\lambda_{\hat{i}}^2, \dots, \lambda_{\hat{i}+n_b-1}^2$.]
Similarly to the above discussion for the first few eigenvalues, we can also obtain error bounds for the approximations to the last few eigenpairs belonging to the eigenvalue cluster together. We use the same notation, except that $\hat{\Lambda}_2^2 = \mathrm{diag}(\lambda_{\hat{k}}^2, \dots, \lambda_{\hat{\ell}}^2)$ and $\hat{\Xi}_2 = \Xi_{:,\hat{k}:\hat{\ell}}$. Assuming $\theta_K^{(1)}(Y_1, \Xi_2) < \pi/2$, then from Lemma 1, there exists $\hat{Z} \in \mathbb{R}^{N\times(\ell-k+1)}$ with $\mathcal{R}(\hat{Z}) \subseteq \mathcal{R}(Y_1)$ such that
$$ \Xi_2 \Xi_2^T K \hat{Z} = \hat{\Xi}_2. \tag{12} $$
Theorem 2.
Suppose $\theta_K^{(1)}(Y_1, \Xi_2) < \pi/2$ and let $\hat{Z}$ satisfy (12). Then we have
$$ \|\mathrm{diag}(\sigma_k^2 - \lambda_{\hat{k}}^2, \dots, \sigma_\ell^2 - \lambda_{\hat{\ell}}^2)\|_F \le (\lambda_1^2 - \lambda_{\hat{\ell}}^2)\,\hat{\pi}_{\hat{i},\hat{k},\hat{\ell}}^2\, \frac{\|\tan^2\Theta_K(\hat{\Xi}_2, \hat{Z})\|_F}{C_{n-N+\hat{\ell}-1}^2(1 + 2\hat{\gamma}_{\hat{i},\hat{\ell}})} \tag{13} $$
with
$$ \hat{\gamma}_{\hat{i},\hat{\ell}} = \frac{\lambda_{\hat{i}-1}^2 - \lambda_{\hat{k}}^2}{\lambda_1^2 - \lambda_{\hat{i}-1}^2}, \qquad \hat{\pi}_{\hat{i},\hat{k},\hat{\ell}} = \frac{\max_{1 \le j \le \hat{i}-1} \prod_{m=\ell+1}^{n n_b} |\sigma_m^2 - \lambda_j^2|}{\min_{\hat{k} \le t \le \hat{\ell}} \prod_{m=\ell+1}^{n n_b} |\sigma_m^2 - \lambda_t^2|}, $$
and
$$ \|\sin\Theta_K(\hat{\Xi}_2, \mathbf{Y}_n \Psi_{:,k:\ell})\|_F \le \hat{\pi}_{\hat{i},\hat{\ell}}\, \frac{\sqrt{1 + \hat{c}^2\|A_n^T B_n\|_2^2/\hat{\delta}^2}}{C_{n+\hat{i}+n_b-N-2}(1 + 2\hat{\gamma}_{\hat{i},\hat{\ell}})}\, \|\tan\Theta_K(\hat{\Xi}_2, \hat{Z})\|_F, \tag{14} $$
where the constant $\hat{c}$ lies between $1$ and $\pi/2$, with $\hat{c} = 1$ if $k = \ell$, and
$$ \hat{\delta} = \min_{\substack{\hat{k} \le j \le \hat{\ell}, \\ p < k \text{ or } p > \ell}} |\lambda_j^2 - \sigma_p^2|, \qquad \hat{\pi}_{\hat{i},\hat{\ell}} = \prod_{j=\hat{i}+n_b}^{N} \frac{\lambda_1^2 - \lambda_j^2}{\lambda_{\hat{\ell}}^2 - \lambda_j^2}. $$
Remark 3.
Similarly to Corollary 1, Theorem 2 can also be applied to the single-eigenvalue case; we omit the details here.
Remark 4.
In Theorems 1 and 2, we use the Frobenius norm to estimate the accuracy of the eigenpair approximations; in fact, any unitarily invariant norm can be used.
Remark 5.
Compared with the single-vector weighted Golub-Kahan-Lanczos method in [16], our convergence results show the superiority of the block version. For instance, in Corollary 1, the convergence rate of the approximate eigenvalue $\sigma_i$ is proportional to $C_{n-i}^{-2}(1 + 2\gamma_i)$ with $\gamma_i = \frac{\lambda_i^2 - \lambda_{i+n_b}^2}{\lambda_{i+n_b}^2 - \lambda_N^2}$, which is obviously better than $C_{n-i}^{-2}(1 + 2\tilde{\gamma}_i)$ with $\tilde{\gamma}_i = \frac{\lambda_i^2 - \lambda_{i+1}^2}{\lambda_{i+1}^2 - \lambda_N^2}$ in ([16] Theorem 3.4), since $\gamma_i \ge \tilde{\gamma}_i$. The additional cost of the block version is paid off by the improvement generated by $\gamma_i$, especially when the desired eigenvalues lie in a well-separated cluster [12].

4. Thick Restart

As the number of iterations increases, Algorithm 1 may encounter the dilemma that the amount of computation and storage increases sharply while the numerical stability gradually weakens. In this section, we apply the thick-restart strategy [20] to improve the algorithm. After running $n$ iterations, Algorithm 1 yields the following relations for LREP:
$$ K \mathbf{Y}_n = \mathbf{X}_n \mathbf{B}_n, \qquad M \mathbf{X}_n = \mathbf{Y}_n \mathbf{B}_n^T + Y_{n+1} B_n^T E_n^T, \tag{15} $$
with $\mathbf{X}_n^T M \mathbf{X}_n = I_{n n_b} = \mathbf{Y}_n^T K \mathbf{Y}_n$.
Recall the SVD (2), and let $\Phi_k$ and $\Psi_k$ be the first $k n_b$ columns of $\Phi$ and $\Psi$, respectively, i.e.,
$$ \Phi_k = [\phi_1, \phi_2, \dots, \phi_{k n_b}], \qquad \Psi_k = [\psi_1, \psi_2, \dots, \psi_{k n_b}]. $$
Thus it follows that
$$ \mathbf{B}_n \Psi_k = \Phi_k \Sigma_k \quad \text{and} \quad \mathbf{B}_n^T \Phi_k = \Psi_k \Sigma_k, \tag{16} $$
where $\Sigma_k = \mathrm{diag}(\sigma_1, \dots, \sigma_{k n_b})$.
To use the approximate eigenvectors of $H$ for the thick restart, we post-multiply (15) by $\Psi_k$ and $\Phi_k$, respectively, and get
$$ K \mathbf{Y}_n \Psi_k = \mathbf{X}_n \mathbf{B}_n \Psi_k, \qquad M \mathbf{X}_n \Phi_k = \mathbf{Y}_n \mathbf{B}_n^T \Phi_k + Y_{n+1} B_n^T E_n^T \Phi_k. \tag{17} $$
Using (16), and letting $\hat{Y}_k = \mathbf{Y}_n \Psi_k$, $\hat{X}_k = \mathbf{X}_n \Phi_k$, $\hat{B}_k = \Sigma_k$, $\hat{Y}_{k+1} = Y_{n+1}$, and $U^T = E_n^T \Phi_k$, (17) can be rewritten as
$$ K \hat{Y}_k = \hat{X}_k \hat{B}_k, \qquad M \hat{X}_k = \hat{Y}_k \hat{B}_k^T + \hat{Y}_{k+1} B_n^T U^T, \tag{18} $$
with $\hat{X}_k^T M \hat{X}_k = I_{k n_b} = \hat{Y}_k^T K \hat{Y}_k$.
Next, $\hat{X}_{k+1}$ and $\hat{Y}_{k+2}$ will be generated. Firstly, we compute
$$ \tilde{X}_{k+1} = K \hat{Y}_{k+1} - \hat{X}_k \hat{X}_k^T M K \hat{Y}_{k+1} = K \hat{Y}_{k+1} - \hat{X}_k U B_n. $$
From the second equation in (18), we know $\tilde{X}_{k+1}^T M \hat{X}_k = 0$. Do the Cholesky decomposition $\tilde{X}_{k+1}^T M \tilde{X}_{k+1} = W^T W$, and set $\hat{A}_{k+1} = W$, $W = \mathrm{inv}(W)$. Compute $\hat{X}_{k+1} = \tilde{X}_{k+1} W$, and let
$$ \hat{\mathbf{X}}_{k+1} = [\hat{X}_k, \hat{X}_{k+1}], \qquad \hat{\mathbf{B}}_{k+1} = \begin{bmatrix} \hat{B}_k & U B_n \\ 0 & \hat{A}_{k+1} \end{bmatrix}; $$
then we have
$$ K \hat{\mathbf{Y}}_{k+1} = \hat{\mathbf{X}}_{k+1} \hat{\mathbf{B}}_{k+1} \quad \text{with} \quad \hat{\mathbf{X}}_{k+1}^T M \hat{\mathbf{X}}_{k+1} = I_{(k+1)n_b}, \tag{19} $$
where $\hat{\mathbf{Y}}_{k+1} = [\hat{Y}_k, \hat{Y}_{k+1}]$.
Secondly, from the above equation, we can compute
$$ \tilde{Y}_{k+2} = M \hat{X}_{k+1} - \hat{Y}_k \hat{Y}_k^T K M \hat{X}_{k+1} - \hat{Y}_{k+1} \hat{Y}_{k+1}^T K M \hat{X}_{k+1} = M \hat{X}_{k+1} - \hat{Y}_{k+1} \hat{A}_{k+1}^T. $$
Again using (19), it is easily seen that $\tilde{Y}_{k+2}^T K \hat{\mathbf{Y}}_{k+1} = 0$. Similarly, do the Cholesky decomposition $\tilde{Y}_{k+2}^T K \tilde{Y}_{k+2} = W^T W$, and let $\hat{B}_{k+1} = W^T$, $W = \mathrm{inv}(W)$. Compute $\hat{Y}_{k+2} = \tilde{Y}_{k+2} W$; then we get
$$ M \hat{\mathbf{X}}_{k+1} = \hat{\mathbf{Y}}_{k+1} \hat{\mathbf{B}}_{k+1}^T + \hat{Y}_{k+2} \hat{B}_{k+1}^T E_{k+1}^T \quad \text{with} \quad \hat{\mathbf{Y}}_{k+1}^T K \hat{\mathbf{Y}}_{k+1} = I_{(k+1)n_b}. \tag{20} $$
Continuing the same procedure for $\hat{X}_{k+2}, \dots, \hat{X}_n$ and $\hat{Y}_{k+3}, \dots, \hat{Y}_{n+1}$, we obtain the new $M$-orthonormal matrix $\hat{\mathbf{X}}_n \in \mathbb{R}^{N\times n n_b}$, the new $K$-orthonormal matrix $\hat{\mathbf{Y}}_n \in \mathbb{R}^{N\times n n_b}$, and the new matrix $\hat{\mathbf{B}}_n \in \mathbb{R}^{n n_b \times n n_b}$, together with the relations
$$ K \hat{\mathbf{Y}}_n = \hat{\mathbf{X}}_n \hat{\mathbf{B}}_n, \qquad M \hat{\mathbf{X}}_n = \hat{\mathbf{Y}}_n \hat{\mathbf{B}}_n^T + \hat{Y}_{n+1} \hat{B}_n^T E_n^T, $$
with $\hat{\mathbf{X}}_n^T M \hat{\mathbf{X}}_n = I_{n n_b} = \hat{\mathbf{Y}}_n^T K \hat{\mathbf{Y}}_n$, and
$$ \hat{\mathbf{B}}_n = \begin{bmatrix} \hat{B}_k & U B_n & & & \\ & \hat{A}_{k+1} & \hat{B}_{k+1} & & \\ & & \ddots & \ddots & \\ & & & \ddots & \hat{B}_{n-1} \\ & & & & \hat{A}_n \end{bmatrix}. $$
Note that $\hat{\mathbf{B}}_n$ is no longer a block bidiagonal matrix. Algorithm 2 is our thick-restart weighted block Golub-Kahan-Lanczos algorithm for LREP.
Remark 6.
Actually, from the construction of $\hat{\mathbf{B}}_n$, we can see that the procedure for getting $\hat{X}_{k+2}, \dots, \hat{X}_n$ and $\hat{Y}_{k+3}, \dots, \hat{Y}_{n+1}$ is the same as applying Algorithm 1 to $\hat{Y}_{k+2}$ for $n-k-1$ iterations; thus, we use Algorithm 1 directly in the restarting Step 2 of the following Algorithm 2.
Algorithm 2: wbGKL-TR
    1. Given an initial guess $Y_1$ satisfying $Y_1^T K Y_1 = I_{n_b}$, a tolerance $tol$, an integer $k$ (the number of blocks of approximate eigenvectors we want to add to the solving subspace), an integer $n$ (the block dimension of the solving subspace), and $w$ (the desired number of eigenpairs);
    2. Apply Algorithm 1 from the current point to generate the rest of $\mathbf{X}_n$, $\mathbf{Y}_{n+1}$, and $\mathbf{B}_n$. If it is the first cycle, the current point is $Y_1$; otherwise it is $\hat{Y}_{k+2}$;
    3. Compute an SVD of $\mathbf{B}_n$ as in (2), and select the $w$ ($w \le n n_b$) wanted singular values $\sigma_j$ with their associated left singular vectors $\phi_j$ and right singular vectors $\psi_j$. Form the approximate eigenpairs of $H$; if the stopping criterion is satisfied, stop, else continue;
    4. Generate the new $\hat{\mathbf{X}}_{k+1}$, $\hat{Y}_{k+2}$, and $\hat{\mathbf{B}}_{k+1}$:
    Compute $\hat{Y}_k = \mathbf{Y}_n \Psi_k$, $\hat{X}_k = \mathbf{X}_n \Phi_k$, $\hat{B}_k = \Sigma_k$, $\hat{Y}_{k+1} = Y_{n+1}$, $U^T = E_n^T \Phi_k$;
    Compute $\tilde{X}_{k+1} = K \hat{Y}_{k+1} - \hat{X}_k U B_n$, do the Cholesky decomposition $\tilde{X}_{k+1}^T M \tilde{X}_{k+1} = W^T W$, set $\hat{A}_{k+1} = W$, $W = \mathrm{inv}(W)$, $\hat{X}_{k+1} = \tilde{X}_{k+1} W$;
    Compute $\tilde{Y}_{k+2} = M \hat{X}_{k+1} - \hat{Y}_{k+1} \hat{A}_{k+1}^T$, do the Cholesky decomposition $\tilde{Y}_{k+2}^T K \tilde{Y}_{k+2} = W^T W$, set $\hat{B}_{k+1} = W^T$, $W = \mathrm{inv}(W)$, $\hat{Y}_{k+2} = \tilde{Y}_{k+2} W$;
    Let $\mathbf{X}_{k+1} = \hat{\mathbf{X}}_{k+1} = [\hat{X}_k, \hat{X}_{k+1}]$, $\mathbf{B}_{k+1} = \hat{\mathbf{B}}_{k+1} = \begin{bmatrix} \hat{B}_k & U B_n \\ 0 & \hat{A}_{k+1} \end{bmatrix}$, $\mathbf{Y}_{k+2} = [\hat{Y}_k, \hat{Y}_{k+1}, \hat{Y}_{k+2}]$, and go to Step 2.
Remark 7.
In Step 3, we compute the harmonic Ritz pairs after $n$ iterations; in practice, we do the computation at each iteration $j = 1, \dots, n$. When restarting, the information chosen to be added to the solving subspaces consists of the wanted $w$ singular values of $\mathbf{B}_n$ together with their corresponding left and right singular vectors. In practice, we use the MATLAB command "sort" to choose the $w$ smallest or the $w$ largest ones; which singular values to choose depends on the desired eigenvalues of $H$.
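As a concrete illustration of Step 4, here is a minimal MATLAB sketch of one restart contraction (variable names are ours; Xn, Yn1, Phi, S, Psi come from the previous cycle, and Bn is the $n_b \times n_b$ residual block $B_n$, i.e., the final value of Bprev in the wbgkl sketch above):

    kb  = k*nb;
    Yk  = Yn1(:, 1:n*nb) * Psi(:, 1:kb);    % \hat{Y}_k = Y_n * Psi_k
    Xk  = Xn * Phi(:, 1:kb);                % \hat{X}_k = X_n * Phi_k
    Bk  = S(1:kb, 1:kb);                    % \hat{B}_k = Sigma_k (diagonal)
    Yk1 = Yn1(:, n*nb+1:end);               % \hat{Y}_{k+1} = Y_{n+1}
    U   = Phi(end-nb+1:end, 1:kb)';         % U, since U' = E_n'*Phi_k (last nb rows)
    Xt  = K*Yk1 - Xk*(U*Bn);                % X~_{k+1} = K*\hat{Y}_{k+1} - \hat{X}_k*U*B_n
    Ak1 = chol(Xt' * (M*Xt));               % X~'*M*X~ = Ak1'*Ak1
    Xk1 = Xt / Ak1;                         % \hat{X}_{k+1}, M-orthonormal
    Yt  = M*Xk1 - Yk1*Ak1';                 % Y~_{k+2} = M*\hat{X}_{k+1} - \hat{Y}_{k+1}*Ak1'
    R   = chol(Yt' * (K*Yt));               % Y~'*K*Y~ = R'*R, so \hat{B}_{k+1} = R'
    Yk2 = Yt / R;                           % \hat{Y}_{k+2}, K-orthonormal
    Xhat = [Xk, Xk1];                       % restart point for Algorithm 1:
    Bhat = [Bk, U*Bn; zeros(nb, kb), Ak1];  % \hat{B}_{k+1} as in Step 4
    Yhat = [Yk, Yk1, Yk2];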
At the end of this section, we list the computational costs in a generic cycle of four algorithms: the weighted block Golub-Kahan-Lanczos algorithm, the thick-restart weighted block Golub-Kahan-Lanczos algorithm, the block Lanczos algorithm [12], and the thick-restart block Lanczos algorithm [12], denoted by wbGKL, wbGKL-TR, BLan, and BLan-TR, respectively. Detailed pseudocodes of BLan and BLan-TR can be found in [12].
The comparisons are presented in Table 1 and Table 2. Here, a "block vector" denotes an $N \times n_b$ rectangular matrix, and "mvb" denotes the number of products of an $N \times N$ matrix with a block vector. "dpb" denotes the number of dot products of two block vectors $X$ and $Y$, i.e., $X^T Y$. "saxpyb" denotes the number of additions of two block vectors or multiplications of a block vector by an $n_b \times n_b$ small matrix. "Ep($2n \times 2n$)(with sorting)" means the number of $2n \times 2n$ eigenvalue problems, with sorting of the eigenvalues and their corresponding eigenvectors, in one cycle. Similarly, "Sp($n \times n$)" denotes the number of $n \times n$ singular value decompositions in one cycle. Because wbGKL and BLan are non-restart algorithms, we just count the first $n$ Lanczos iterations.

5. Numerical Examples

In this section, two numerical experiments are carried out using MATLAB 8.4 (R2014b) on a laptop with an Intel Core i5-6200U CPU (2.3 GHz) and 8 GB of memory under the Windows 10 operating system.
Example 1.
In this example, we check the bounds established in Theorems 1 and 2. For simplicity, we take $N = 100$, the number of weighted block Golub-Kahan-Lanczos steps $n = 20$, and $K = M$ the diagonal matrix $\mathrm{diag}(\lambda_1, \lambda_2, \dots, \lambda_N)$, where
$$ \lambda_1 = 11 + \rho, \quad \lambda_2 = 11, \quad \lambda_3 = 11 - \rho, \quad \lambda_{N-2} = 1 + \rho, \quad \lambda_{N-1} = 1, \quad \lambda_N = 1 - \rho, \quad \lambda_j = 5 + \frac{5(N-j+1)}{N-3}, \quad j = 4, \dots, N-3, $$
and $i = k = 1$, $\ell = 3$, $\hat{i} = \hat{k} = N-2$, $\hat{\ell} = N$, $n_b = 3$. There are three positive eigenvalue clusters: $\{\lambda_1, \lambda_2, \lambda_3\}$, $\{\lambda_4, \dots, \lambda_{N-3}\}$, and $\{\lambda_{N-2}, \lambda_{N-1}, \lambda_N\}$. Obviously, $\Xi = \Gamma = K^{-1/2}$.
We seek two groups of approximate eigenpairs; the first is related to the first cluster and the second to the last cluster, i.e., $\{\sigma_1, \sigma_2, \sigma_3\}$ approximate $\{\lambda_1, \lambda_2, \lambda_3\}$, and $\{\sigma_{n n_b - 2}, \sigma_{n n_b - 1}, \sigma_{n n_b}\}$ approximate $\{\lambda_{N-2}, \lambda_{N-1}, \lambda_N\}$. In order to see the effect of $\rho$ on the upper bounds of the approximation errors in the weighted block Golub-Kahan-Lanczos method for LREP, we vary the parameter $\rho > 0$, which controls the tightness among the eigenvalues within $\{\lambda_1, \lambda_2, \lambda_3\}$ and $\{\lambda_{N-2}, \lambda_{N-1}, \lambda_N\}$. First, we choose the same matrix $Y_0$ as in [12,17], i.e.,
$$ Y_0 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \frac{1}{N} & \sin 1 & \cos 1 \\ \vdots & \vdots & \vdots \\ \frac{N-n_b}{N} & \sin(N-n_b) & \cos(N-n_b) \end{bmatrix}. $$
Obviously, $\mathrm{rank}(Y_0) = n_b$ and $\mathrm{rank}(Y_0^T K \Xi_{:,1:3}) = n_b$. Since $K$ is symmetric positive definite, we do the Cholesky decomposition $Y_0^T K Y_0 = W^T W$ and let $Y_1 = Y_0 W^{-1}$; hence, $Y_1$ satisfies (5), i.e., $Y_1^T K \Xi_{:,1:3}$ is nonsingular. We take $Z = Y_1 (\Xi_{:,1:3}^T K Y_1)^{-1}$; then $Z$ satisfies (6). We execute the weighted block Golub-Kahan-Lanczos method with full re-orthogonalization for LREP in MATLAB, and check the bounds in (7), (8), (13), and (14). Since the approximate eigenvalues are $\{\sigma_1, \sigma_2, \sigma_3\}$ and $\{\sigma_{n n_b - 2}, \sigma_{n n_b - 1}, \sigma_{n n_b}\}$, we have $\pi_{i,k,\ell} = \pi_{i,k} = \hat{\pi}_{\hat{i},\hat{k},\hat{\ell}} = \hat{\pi}_{\hat{i},\hat{\ell}} = 1$ and $c = \hat{c} = 1$, and we measure the following two groups of errors:
$$ \varepsilon_{11} = \|\mathrm{diag}(\lambda_1^2 - \sigma_1^2, \lambda_2^2 - \sigma_2^2, \lambda_3^2 - \sigma_3^2)\|_F, \qquad \varepsilon_{21} = (\lambda_1^2 - \lambda_N^2)\, \frac{\|\tan^2\Theta_K(\Xi_{:,1:3}, Z)\|_F}{C_{n-1}^2(1 + 2\gamma_{1,3})}, $$
$$ \varepsilon_{31} = \|\sin\Theta_K(\Xi_{:,1:3}, \mathbf{Y}_n \Psi_{:,1:3})\|_F, \qquad \varepsilon_{41} = \frac{\sqrt{1 + \|A_n^T B_n\|_2^2/\delta^2}}{C_{n-1}(1 + 2\gamma_{1,3})}\, \|\tan\Theta_K(\Xi_{:,1:3}, Z)\|_F, $$
and
$$ \varepsilon_{12} = \|\mathrm{diag}(\sigma_{n n_b - 2}^2 - \lambda_{N-2}^2, \sigma_{n n_b - 1}^2 - \lambda_{N-1}^2, \sigma_{n n_b}^2 - \lambda_N^2)\|_F, \qquad \varepsilon_{22} = (\lambda_1^2 - \lambda_N^2)\, \frac{\|\tan^2\Theta_K(\Xi_{:,N-2:N}, Z)\|_F}{C_{n-1}^2(1 + 2\hat{\gamma}_{N-2,N})}, $$
$$ \varepsilon_{32} = \|\sin\Theta_K(\Xi_{:,N-2:N}, \mathbf{Y}_n \Psi_{:,n n_b - 2:n n_b})\|_F, \qquad \varepsilon_{42} = \frac{\sqrt{1 + \|A_n^T B_n\|_2^2/\hat{\delta}^2}}{C_{n-1}(1 + 2\hat{\gamma}_{N-2,N})}\, \|\tan\Theta_K(\Xi_{:,N-2:N}, Z)\|_F. $$
Here, $\varepsilon_{21}$ and $\varepsilon_{41}$ are upper bounds of $\varepsilon_{11}$ and $\varepsilon_{31}$, and $\varepsilon_{22}$ and $\varepsilon_{42}$ are upper bounds of $\varepsilon_{12}$ and $\varepsilon_{32}$. Table 3 and Table 4 report the results for $\varepsilon_{ij}$, $i = 1, \dots, 4$, $j = 1, 2$, as the parameter $\rho$ goes to 0. From the two tables, we can see that the bounds for the eigenvalues lying in a cluster, and for their corresponding eigenvectors, are sharp, and that they are not sensitive to $\rho$ as $\rho$ goes to 0.
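For reproducibility, the setup of this example can be coded along the following lines (a minimal MATLAB sketch under our reading of the construction above; in particular, the formula for the interior eigenvalues is our reconstruction of the garbled original):

    N = 100; n = 20; nb = 3; rho = 1e-3;
    lam = [11+rho; 11; 11-rho; 5 + 5*(N-(4:N-3)'+1)/(N-3); 1+rho; 1; 1-rho];
    K = diag(lam); M = K;                 % K = M = diag(lambda_1,...,lambda_N)
    Xi = diag(1./sqrt(lam));              % Xi = Gamma = K^{-1/2}
    j  = (1:N-nb)';
    Y0 = [eye(nb); j/N, sin(j), cos(j)];  % the starting block Y_0
    W  = chol(Y0' * K * Y0);              % Y_0'*K*Y_0 = W'*W
    Y1 = Y0 / W;                          % Y_1'*K*Y_1 = I, so Y_1 satisfies (5)
    Z  = Y1 / (Xi(:, 1:3)' * K * Y1);     % Z = Y_1*(Xi(:,1:3)'*K*Y_1)^{-1}, satisfies (6)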
Example 2.
In this example, we test the effectiveness of our weighted block Golub-Kahan-Lanczos algorithms. Four algorithms are tested, i.e., wbGKL, wbGKL-TR, BLan, and BLan-TR. We choose 3 test problems used in [12,13], which are listed in Table 5. All the matrices $K$ and $M$ in the problems are symmetric positive definite. Specifically, Test 1 and Test 2, which are derived via the turboTDDFT command in QUANTUM ESPRESSO [22], come from the linear response study of Na2 and of the silane (SiH4) compound, respectively. The matrices $K$ and $M$ in Test 3 are from the University of Florida Sparse Matrix Collection [23], where the order of $K$ is $N = 9604$, and $M$ is the leading $N \times N$ principal submatrix of finan512.
We aim to compute the 5 smallest positive eigenvalues and the 5 largest eigenvalues, i.e., $\lambda_i$ for $i = 1, \dots, 5, N-4, \dots, N$, together with their associated eigenvectors. The initial guess is chosen as $Y_0 = \mathrm{eye}(N, n_b)$ with block size $n_b = 3$, where eye is the MATLAB command. As in Example 1, since $K$ is symmetric positive definite, we do the Cholesky decomposition $Y_0^T K Y_0 = W^T W$ and let $Y_1 = Y_0 W^{-1}$; hence, $Y_1$ satisfies $Y_1^T K Y_1 = I_{n_b}$. In wbGKL-TR and BLan-TR, we select $n = 30$ and $k = 20$; i.e., a restart occurs once the dimension of the solving subspace is larger than 90, and the information of 60 Ritz vectors is kept. For wbGKL and BLan, because there is no restart, we compute the approximate eigenpairs when the number of Lanczos iterations equals $30 + 10(j-1)$, $j = 1, 2, \dots$; hence, the Lanczos iterations are of the same amount as in wbGKL-TR and BLan-TR. The following relative eigenvalue error and relative residual 1-norm for each of the 10 approximate eigenpairs are calculated:
$$ e(\sigma_j) := \frac{|\lambda_j - \sigma_j|}{\lambda_j}, \quad j = 1, \dots, 5, \qquad e(\sigma_j) := \frac{|\lambda_{N - n n_b + j} - \sigma_j|}{\lambda_{N - n n_b + j}}, \quad j = n n_b - 4, \dots, n n_b, $$
$$ r(\sigma_j) := \frac{\|H \tilde{z}_j - \sigma_j \tilde{z}_j\|_1}{(\|H\|_1 + \sigma_j)\|\tilde{z}_j\|_1}, \quad j = 1, \dots, 5, \ n n_b - 4, \dots, n n_b, $$
where the “exact” eigenvalues $\lambda_j$ are calculated by the MATLAB function eig. A calculated approximate eigenpair $(\sigma_j, \tilde{z}_j)$ is regarded as converged if $r(\sigma_j) \le tol = 10^{-8}$.
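In code, these measures read as follows for a single approximate eigenpair (a minimal MATLAB sketch with our variable names); the product $H\tilde{z}_j$ is formed blockwise so that $H$ never has to be built explicitly:

    % (sig, zt) is an approximate eigenpair of H = [0 M; K 0], zt = [u; v];
    % lam is the corresponding "exact" eigenvalue from eig.
    u  = zt(1:N);  v = zt(N+1:end);
    e  = abs(lam - sig) / lam;               % relative eigenvalue error e(sigma_j)
    Hz = [M*v; K*u];                         % H*zt, computed blockwise
    H1 = max([sum(abs(K)), sum(abs(M))]);    % ||H||_1 = max column sum of [0 M; K 0]
    r  = norm(Hz - sig*zt, 1) / ((H1 + sig) * norm(zt, 1));   % relative residual r(sigma_j)
    converged = (r <= 1e-8);                 % tol = 10^{-8}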
Table 6 and Table 7 give the number of Lanczos iterations (denoted by iter) and the CPU time in seconds (denoted by CPU) for the four algorithms; Table 6 is for the 5 smallest positive eigenvalues, and Table 7 is for the 5 largest eigenvalues. From the two tables, one can see that, whether for the smallest or the largest eigenvalues, the iteration numbers of the four algorithms are competitive, but wbGKL and wbGKL-TR cost significantly less time than BLan and BLan-TR; in particular, wbGKL-TR consumes the least time. Because BLan and BLan-TR need to compute the eigenvalues of $\begin{bmatrix} 0 & T_n \\ D_n & 0 \end{bmatrix}$, which is a nonsymmetric matrix, these two algorithms are slower than wbGKL and wbGKL-TR. Due to the savings in the orthogonalization procedure and the much smaller singular value problem for $\mathbf{B}_n$, wbGKL-TR is the fastest algorithm.
The accuracy of the last two approximate eigenpairs in Test 1 is shown in Figure 1. From the figure, we can see that, for the last two eigenpairs, wbGKL and BLan require almost the same number of iterations to obtain the same accuracy, and wbGKL-TR and BLan-TR likewise need almost the same number of iterations, namely one or two more restarts than wbGKL and BLan. On the one hand, without solving a nonsymmetric eigenproblem, wbGKL and wbGKL-TR save much more time than BLan and BLan-TR. On the other hand, since the dimension of the solving subspace for wbGKL-TR is bounded by $n n_b$, the savings in the orthogonalization process and in the much smaller singular value decomposition are sufficient to cover the additional restart steps.

6. Conclusions

In this paper, we presented a weighted block Golub-Kahan-Lanczos algorithm to compute a desired small portion of the smallest or largest positive eigenvalues lying in a cluster. Convergence analysis is established in Theorems 1 and 2, which bound the errors of the eigenvalue and eigenvector approximations belonging to an eigenvalue cluster. These results also show the advantages of the block algorithm over the single-vector version. To make the new algorithm more practical, we introduced a thick-restart strategy to eliminate the numerical difficulties caused by the block method. Numerical examples demonstrate the efficiency of our new restart algorithm.

Author Contributions

Conceptualization, G.C.; Data curation, H.Z. and Z.T.; Formal analysis, H.Z. and Z.T.; Methodology, H.Z.; Project administration, H.Z. and G.C.; Resources, H.Z.; Visualization, H.Z. and Z.T.; Writing—original draft, H.Z.; Writing—review and editing, Z.T. and G.C.

Funding

This work was financially supported by the National Natural Science Foundation of China (Nos. 11701225, 11601081, 11471122), the Fundamental Research Funds for the Central Universities (No. JUSRP11719), the Natural Science Foundation of Jiangsu Province (No. BK20170173), and the Research Fund for Distinguished Young Scholars of Fujian Agriculture and Forestry University (No. xjq201727).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Casida, M.E. Time-Dependent Density Functional Response Theory for Molecules. In Recent Advances in Density Functional Methods; Chong, D.P., Ed.; World Scientific: Singapore, 1995.
  2. Onida, G.; Reining, L.; Rubio, A. Electronic excitations: Density-functional versus many-body Green's-function approaches. Rev. Mod. Phys. 2002, 74, 601–659.
  3. Rocca, D. Time-Dependent Density Functional Perturbation Theory: New Algorithms with Applications to Molecular Spectra. Ph.D. Thesis, The International School for Advanced Studies, Trieste, Italy, 2007.
  4. Shao, M.; da Jornada, F.H.; Yang, C.; Deslippe, J.; Louie, S.G. Structure preserving parallel algorithms for solving the Bethe-Salpeter eigenvalue problem. Linear Algebra Appl. 2016, 488, 148–167.
  5. Ring, P.; Ma, Z.; Giai, V.N.; Vretenar, D.; Wandelt, A.; Gao, L. The time-dependent relativistic mean-field theory and the random phase approximation. Nucl. Phys. A 2001, 694, 249–268.
  6. Bai, Z.; Li, R.-C. Minimization principles for the linear response eigenvalue problem I: Theory. SIAM J. Matrix Anal. Appl. 2012, 33, 1075–1100.
  7. Bai, Z.; Li, R.-C. Minimization principles for the linear response eigenvalue problem II: Computation. SIAM J. Matrix Anal. Appl. 2013, 34, 392–416.
  8. Bai, Z.; Li, R.-C. Minimization principles and computation for the generalized linear response eigenvalue problem. BIT Numer. Math. 2014, 54, 31–54.
  9. Li, T.; Li, R.-C.; Lin, W.-W. A symmetric structure-preserving ΓQR algorithm for linear response eigenvalue problems. Linear Algebra Appl. 2017, 520, 191–214.
  10. Teng, Z.; Li, R.-C. Convergence analysis of Lanczos-type methods for the linear response eigenvalue problem. J. Comput. Appl. Math. 2013, 247, 17–33.
  11. Teng, Z.; Lu, L.; Li, R.-C. Perturbation of partitioned linear response eigenvalue problems. Electron. Trans. Numer. Anal. 2015, 44, 624–638.
  12. Teng, Z.; Zhang, L.-H. A block Lanczos method for the linear response eigenvalue problem. Electron. Trans. Numer. Anal. 2017, 46, 505–523.
  13. Teng, Z.; Zhou, Y.; Li, R.-C. A block Chebyshev-Davidson method for linear response eigenvalue problems. Adv. Comput. Math. 2016, 42, 1103–1128.
  14. Zhang, L.-H.; Lin, W.-W.; Li, R.-C. Backward perturbation analysis and residual-based error bounds for the linear response eigenvalue problem. BIT Numer. Math. 2014, 55, 869–896.
  15. Zhang, L.-H.; Xue, J.; Li, R.-C. Rayleigh-Ritz approximation for the linear response eigenvalue problem. SIAM J. Matrix Anal. Appl. 2014, 35, 765–782.
  16. Zhong, H.; Xu, H. Weighted Golub-Kahan-Lanczos bidiagonalization algorithms. Electron. Trans. Numer. Anal. 2017, 47, 153–178.
  17. Li, R.-C.; Zhang, L.-H. Convergence of the block Lanczos method for eigenvalue clusters. Numer. Math. 2015, 131, 83–113.
  18. Chapman, A.; Saad, Y. Deflated and augmented Krylov subspace techniques. Numer. Linear Algebra Appl. 1996, 4, 43–66.
  19. Lehoucq, R.B.; Sorensen, D.C. Deflation techniques for an implicitly restarted Arnoldi iteration. SIAM J. Matrix Anal. Appl. 1996, 17, 789–821.
  20. Wu, K.; Simon, H. Thick-restart Lanczos method for large symmetric eigenvalue problems. SIAM J. Matrix Anal. Appl. 2000, 22, 602–616.
  21. Knyazev, A.V.; Argentati, M.E. Principal angles between subspaces in an A-based scalar product: Algorithms and perturbation estimates. SIAM J. Sci. Comput. 2002, 23, 2008–2040.
  22. Giannozzi, P.; Baroni, S.; Bonini, N.; Calandra, M.; Car, R.; Cavazzoni, C.; Ceresoli, D.; Chiarotti, G.L.; Cococcioni, M.; Dabo, I.; et al. QUANTUM ESPRESSO: A modular and open-source software project for quantum simulations of materials. J. Phys. Condens. Matter 2009, 21, 395502.
  23. Davis, T.; Hu, Y. The University of Florida Sparse Matrix Collection. ACM Trans. Math. Softw. 2011, 38, 1:1–1:25.
Figure 1. Errors and residuals of the 2 smallest positive eigenvalues for Test 1 in Example 2.
Table 1. Main computational costs per cycle of wbGKL and wbGKL-TR.

                                 wbGKL     wbGKL-TR (1st cycle)   wbGKL-TR (other cycles)
    mvb                          2n+1      2n+1                   2(n-k)
    dpb                          2n+1      2n+1                   2(n-k)
    saxpyb                       8n        8n                     8(n-k)+2k(2n+1)
    block vector updates         2n+2      2n+2                   2n+2
    Ep(2n x 2n)(with sorting)    0         0                      0
    Sp(n x n)                    1         1                      1
Table 2. Main computational costs per cycle of BLan and BLan-TR.

                                 BLan      BLan-TR (1st cycle)    BLan-TR (other cycles)
    mvb                          2n+1      2n+1                   2(n-k)
    dpb                          2n+1      2n+1                   2(n-k)
    saxpyb                       6n        6n                     6(n-k)+2k(2n+1)
    block vector updates         2n+2      2n+2                   2n+2
    Ep(2n x 2n)(with sorting)    1         1                      1
    Sp(n x n)                    0         0                      0
Table 3. ε11, ε31 together with their upper bounds ε21, ε41 of Example 1.

    ρ        ε11               ε21               ε31               ε41
    10^-1    4.0295 × 10^-13   2.6773 × 10^-10   1.2491 × 10^-10   2.6260 × 10^-6
    10^-2    5.1238 × 10^-14   5.4555 × 10^-11   6.1184 × 10^-11   1.1407 × 10^-6
    10^-3    7.1054 × 10^-14   4.6711 × 10^-11   5.7698 × 10^-11   1.0520 × 10^-6
    10^-4    2.4449 × 10^-13   4.5993 × 10^-11   5.7370 × 10^-11   1.0436 × 10^-6
    10^-5    2.1552 × 10^-13   4.5922 × 10^-11   5.7338 × 10^-11   1.0427 × 10^-6
Table 4. ε12, ε32 together with their upper bounds ε22, ε42 of Example 1.

    ρ        ε12               ε22               ε32               ε42
    10^-1    7.1089 × 10^-16   6.0352 × 10^-11   1.9393 × 10^-10   8.8823 × 10^-7
    10^-2    1.3688 × 10^-15   3.5913 × 10^-11   1.9562 × 10^-10   6.8797 × 10^-7
    10^-3    3.9968 × 10^-15   3.4113 × 10^-11   1.9580 × 10^-10   6.7081 × 10^-7
    10^-4    4.8495 × 10^-15   3.3938 × 10^-11   1.9582 × 10^-10   6.6912 × 10^-7
    10^-5    8.1221 × 10^-15   3.3920 × 10^-11   1.9582 × 10^-10   6.6895 × 10^-7
Table 5. The matrices K and M in Tests 1–3.

    Problems   N      K      M
    Test 1     1862   Na2    Na2
    Test 2     5660   SiH4   SiH4
    Test 3     9604   fv1    finan512
Table 6. Computing the 5 smallest positive eigenvalues for Tests 1–3.

                 Test 1            Test 2            Test 3
    Algorithms   CPU       iter    CPU       iter    CPU       iter
    wbGKL        1.5070    149     25.7848   319     15.9308   379
    wbGKL-TR     1.0746    179     20.3593   359     5.1302    589
    BLan         4.6739    149     87.1670   349     43.9506   379
    BLan-TR      2.1243    163     39.1306   393     19.9677   592
Table 7. Computing the 5 largest eigenvalues for Tests 1–3.

                 Test 1            Test 2            Test 3
    Algorithms   CPU       iter    CPU       iter    CPU       iter
    wbGKL        0.6387    79      12.4658   179     1.0639    109
    wbGKL-TR     0.5284    79      9.9093    179     0.8774    109
    BLan         1.4634    79      27.4028   179     6.7574    109
    BLan-TR      1.0151    82      18.3415   186     4.1298    113
