Abstract
In this paper, we propose a shift-deflation technique for generalized eigenvalue problems. The technique consists of two stages: the shift of converged eigenvalues to zeros, and the deflation of these shifted eigenvalues. By performing the above technique, we construct a new generalized eigenvalue problem of lower dimension that shares the same eigenvalues as the original generalized eigenvalue problem except for the converged ones. In addition, we consider the relations between the eigenvectors before and after performing the technique. Finally, numerical experiments show the effectiveness and robustness of the proposed method.
1. Introduction
In this paper, we consider the computation of a large number of eigenpairs of the large-scale generalized eigenvalue problem (GEP)
$Ax = \lambda Bx, \qquad y^*A = \lambda y^*B, \qquad (1)$
where $A, B \in \mathbb{C}^{n \times n}$ are the coefficient matrices, and the notation $*$ denotes the conjugate transposition. The scalar $\lambda \in \mathbb{C}$ is an eigenvalue of the GEP (1) if and only if $\lambda$ is a root of $\det(A - \lambda B)$, where $\det(\cdot)$ denotes the determinant of a matrix. The nonzero vectors x and y are called the right and left eigenvectors corresponding to $\lambda$, respectively. Together, $(\lambda, x)$ or $(\lambda, x, y)$ is called an eigenpair of the GEP (1). Suppose that $(\lambda_i, x_i, y_i)$, $i = 1, \ldots, r$, are r eigenpairs of the GEP (1), and let $X = [x_1, \ldots, x_r]$, $Y = [y_1, \ldots, y_r]$ and $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_r)$,
where $\mathrm{diag}(\lambda_1, \ldots, \lambda_r)$ denotes a diagonal matrix with diagonal entries $\lambda_1, \ldots, \lambda_r$; then the pair $(\Lambda, X)$ (or the triplet $(\Lambda, X, Y)$) is also called an eigenpair of the GEP (1), which satisfies $AX = BX\Lambda$ and $Y^*A = \Lambda Y^*B$.
The GEP (1) arises in a number of applications, such as structural analysis [1], magneto-hydrodynamics [2], fluid–structure interaction [3] and boundary integral equations [4]. For small- and medium-sized GEPs, the eigenpairs can be computed by the QZ algorithm [5], the Riemannian nonlinear conjugate gradient method [6], and so on. For the large-scale GEP, the methods in [7,8,9,10,11,12,13] only find a few extreme eigenpairs or interior eigenpairs with eigenvalues close to a given shift. In order to compute a cluster of eigenvalues and the associated eigenvectors successively, it is necessary to develop a shift-deflation technique for the GEP (1).
Assume that we have already computed some eigenvalues of the GEP (1); our goal is to construct a new GEP with coefficient matrices whose eigenvalues are exactly those of the GEP (1) except for the computed eigenvalues $\lambda_1, \ldots, \lambda_r$. This is done in two stages, namely, shift and deflation. In the shift stage, we shift the converged eigenvalues to zeros while keeping the remaining eigenvalues unchanged. For this purpose, we define a new GEP with coefficient matrices whose eigenvalues are r zeros and $\lambda_{r+1}, \ldots, \lambda_n$. In the deflation stage, we deflate the r shifted zeros of the shifted GEP. For this purpose, we construct a new GEP with coefficient matrices whose eigenvalues are exactly $\lambda_{r+1}, \ldots, \lambda_n$. The relationship between the eigenvectors of the two GEPs is also shown in this paper.
Throughout this paper, we use the following notation. $I_n$ denotes the identity matrix of order n, while $e_j$ and $E_j$ denote the j-th column and the first j columns of the identity matrix, respectively. The superscript $*$ denotes the conjugate transpose of a vector or a matrix. $\|\cdot\|_2$ denotes the Euclidean vector norm, and $\|\cdot\|_F$ denotes the Frobenius matrix norm. We also adopt the following MATLAB notation: A(i:j, k:l) denotes the submatrix of the matrix A consisting of the intersection of rows i to j and columns k to l, while A(i:j, :) and A(:, k:l) select rows i to j and columns k to l of A, respectively.
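For readers less familiar with this indexing convention, the following short MATLAB snippet illustrates it on an arbitrary example matrix (the matrix values themselves carry no meaning).

```matlab
% Illustration of the MATLAB indexing notation used throughout the paper.
A = magic(5);        % an arbitrary 5-by-5 example matrix
S = A(2:4, 1:3);     % submatrix: intersection of rows 2..4 and columns 1..3
R = A(2:4, :);       % rows 2 to 4 of A (all columns)
C = A(:, 1:3);       % columns 1 to 3 of A (all rows)
```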
2. Shift Technique
In this section, we describe how to shift the converged eigenvalues of the GEP (1) to zeros while keeping the corresponding eigenvectors and the remaining eigenvalues unchanged.
Theorem 1.
Proof.
We first verify the case of . From the assumption , we have and
Remark 1.
Some remarks on Theorem 1 are given as follows.
- (1)
- (2)
- A similar shift technique can be observed where the coefficient matrix in (5) is defined by . In this case, the changes of the left eigenvectors are available, while those of the right eigenvectors are not. Moreover, we need to solve a homogeneous system by a certain numerical method if we want both the left and right eigenvectors.
- (3)
- A similar shift technique can be observed where the coefficient matrix in (5) is defined by . In this case, the left eigenvectors are not needed, and in both the condition and the relation should be replaced by .
The above theorem and remarks lead to the following corollary directly.
Corollary 1.
Remark 2.
Some remarks on Corollary 1 are given as follows.
- (1)
- (2)
- The relation between the left eigenvectors and can also be given as when . However, this relation fails if for a certain i. To remedy this issue, we should shift this eigenpair together with the converged eigenpair by applying the shift technique referred to in Remark 1 (3).
- (3)
- We can also shift to infinity by using the shift technique (11), while keeping the corresponding right eigenvector and the remaining eigenvalues unchanged. Moreover, we have the relation that and when .
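As a concrete illustration of the shift stage for a single converged eigentriplet, the following MATLAB sketch applies a Wielandt-type rank-one update that sends the converged eigenvalue to zero while leaving the remaining eigenvalues and their right eigenvectors unchanged. The function name and the particular rank-one form are our own choices for illustration; they are not necessarily identical to the coefficient matrix defined in (5) or in Corollary 1.

```matlab
% A minimal sketch (assumed Wielandt-type variant, not taken verbatim from
% Equation (5)) of shifting one converged simple eigenvalue of A*x = lambda*B*x
% to zero while keeping the other eigenvalues and right eigenvectors fixed.
function Atil = shift_to_zero(A, B, lam1, x1, y1)
    gamma = y1' * (B * x1);            % deflation condition scalar (assumed nonzero)
    if abs(gamma) < eps
        error('y1''*B*x1 is numerically zero: this shift is not applicable.');
    end
    Atil = A - (lam1 / gamma) * (B * x1) * (y1' * B);   % rank-one update of A
end
```

Indeed, with this update, $\tilde{A}x_1 = Ax_1 - \lambda_1 Bx_1 = 0$, so $(0, x_1)$ is an eigenpair of the shifted pencil $(\tilde{A}, B)$, while for any other eigenpair $(\mu, z)$ with $\mu \neq \lambda_1$ the biorthogonality relation $y_1^*Bz = 0$ gives $\tilde{A}z = Az = \mu Bz$.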
3. Deflation Technique
In this section, we deflate the r shifted zeros. To this end, we construct a new GEP of dimension n − r whose eigenvalues are the eigenvalues of the shifted GEP (4) except for the r zeros. The following theorem shows the feasibility of the deflation technique under certain assumptions.
Theorem 2.
Assume that are the eigenpairs of the GEP (1) with and where and , are both full column rank matrices with and , H and K are both nonsingular matrices with and , and is nonsingular. Construct a new GEP
where the coefficient matrices and are defined as
with
then, are the eigenpairs of the new GEP (12) with
where and .
Proof.
We first prove that are the eigenvalues of the deflated GEP (12). Let , and . We can easily verify that shares the same spectrum with . Moreover, we have the following relations,
where . Based on (14), we obtain
Let , then . Therefore, are the roots of . Denote , then we have
which implies .
According to the assumption that and R is nonsingular, we have from the first equation of (15). Therefore, we have , which implies that is a right eigenvector with respect to . Without loss of generality, we let ; then, the first relation in (13) is obtained. The proof of the second relation in (13) is analogous. □
Remark 3.
Some remarks on Theorem 2 are given below.
- (1)
- If we apply the shift technique (5) and solve a homogeneous system for columns of Y as suggested by Remark 1 (1), the full column rank matrices are obtained with and .
- (2)
- The nonsingularity of R is essential in Theorem 2. If R is singular, then the deflation technique fails.
From the above theorem, we can obtain the following corollary directly.
Corollary 2.
Assume that are the eigenpairs of the GEP (1) with and , are both nonzero vectors with , H and K are both nonsingular and matrices with , and . Construct a new GEP (12) where the coefficient matrices and are defined as
with
then are the eigenpairs of the new GEP (12) with
where and $\bar{\gamma}$ denotes the complex conjugate of γ.
Remark 4.
Some remarks on Corollary 2 are given below.
- (1)
- There are many possible choices of the matrices H and K. In actual computation, we choose the nonsingular matrices H and K to be Householder matrices such that and , which ensures low computational cost and numerical stability; see the sketch after this remark.
- (2)
- The condition γ ≠ 0 is needed in Corollary 2. If γ = 0, the deflation technique fails. To circumvent this problem, we can shift the converged eigenvalue to infinity by using the shift technique (11) without deflation, and continue to compute the next eigenvalue of interest.
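The following MATLAB sketch shows one standard way of constructing such a Householder matrix H with $Hx = \alpha e_1$; the analogous call builds K from y. The function name and the sign convention are our own choices for illustration.

```matlab
% A minimal sketch of building a Householder (Hermitian, unitary) matrix H
% such that H*x = alpha*e_1 for a given column vector x; the analogous call
% constructs K from y.
function [H, alpha] = householder_to_e1(x)
    n = length(x);
    e1 = [1; zeros(n-1, 1)];
    if x(1) == 0, phase = 1; else, phase = x(1) / abs(x(1)); end
    alpha = -phase * norm(x);          % sign chosen to avoid cancellation
    v = x - alpha * e1;
    if norm(v) == 0                    % x is (numerically) the zero vector
        H = eye(n);
    else
        v = v / norm(v);
        H = eye(n) - 2 * (v * v');     % reflector: H = H' and H*H = I
    end
end
```

Only the Householder vector v and the scalar alpha need to be stored, which is what makes this choice of H and K cheap and numerically stable.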
4. Shift-Deflation Technique
In this section, we combine the shift technique in Section 2 and the deflation technique in Section 3 to deflate some known eigenpairs , and to find a large number of eigenpairs corresponding to the eigenvalues of smallest modulus of the GEP (1).
We first consider the situation where r = 1. Assume that is a simple eigenpair of the GEP (1) with and . Define the matrices and as in (9); then, . Solve for the vector s with , choose Householder matrices H and K such that , and define the matrices and as in (16) and (17), where the matrices A and B in (17) are replaced by and , respectively. If , the shift-deflation technique can be completed; otherwise, we shift to infinity by using the shift technique (11).
If is not a simple eigenvalue, that is, , we should shift all eigenvalues equal to by using the shift technique referred to in Remark 1 (3). Moreover, we may obtain more than one converged eigenpair by a certain numerical method at one iteration. Since a low computational cost and numerical stability can be guaranteed by choosing H and K as Householder matrices when r = 1, we try to deflate these eigenpairs one by one with the relations (10) and (18); a sketch of one such sweep is given after Algorithm 1. A numerical algorithm is summarized as follows.
The first step in Algorithm 1 can be seen as an inner iteration; therefore, it should have its own stopping criterion; for example,
where A and B are the coefficient matrices after implementing the shift-deflation technique, and is a given tolerance. Then, we can denote as the shift-deflated relative residual norm of the shift-deflated GEP. If we are interested in the accuracy of the approximation of the original GEP, that is, the coefficient matrices are the input matrices A and B, we can simply test the following stopping criterion:
where is also a given tolerance. If we are also interested in the accuracy of the approximation corresponding to , we can repeatedly use the recursions (18) and (10) to backtrack with the computed eigenpair of the shift-deflated GEP through the iterations in Algorithm 1. To this end, we should save all the converged eigenpairs, the scalars , the vectors s, and the nonsingular matrices H and K. If we choose H and K as Householder matrices, we only need to save the two corresponding Householder vectors instead of the full matrices. With the backtracked eigenvector , we can test the accuracy of the approximation with the following stopping criterion
where A and B are the input matrices, and is a given tolerance. Then, we can denote as the original relative residual norm of the GEP (1) with the input matrices A and B.
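As an illustration, the following MATLAB sketch tests one common form of relative residual norm for an approximate eigenpair (lam, x) of a pencil (A, B); the precise scaling used in the criteria above may differ, so this is an assumed form rather than the exact one.

```matlab
% A minimal sketch (assumed scaling) of a relative residual test for an
% approximate eigenpair (lam, x) of the pencil (A, B) against a tolerance tol.
function ok = residual_test(A, B, lam, x, tol)
    res    = A*x - lam*(B*x);                   % eigen-residual vector
    relres = norm(res) / ((norm(A, 'fro') + abs(lam)*norm(B, 'fro')) * norm(x));
    ok     = (relres <= tol);
end
```

Called with the shift-deflated coefficient matrices, this yields the shift-deflated relative residual norm; called with the input matrices A and B and the backtracked eigenvector, it yields the original relative residual norm.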
| Algorithm 1 Shift-deflation technique for the GEP. |
| Input: matrices A, B and the number k of the desired eigenpairs. Output: k approximate eigenpairs and their relative residuals.
|
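To make the above description concrete, the following MATLAB sketch outlines one possible realization of a single shift-then-deflate sweep for r = 1, assuming a simple converged eigentriplet. It reuses the functions shift_to_zero and householder_to_e1 from the earlier sketches; the Schur-complement-type construction of the deflated pencil is our own illustrative choice and is not necessarily identical to the constructions (16) and (17) or to Algorithm 1 itself.

```matlab
% A sketch of one shift-then-deflate sweep for a single converged simple
% eigentriplet (lam1, x1, y1) of A*x = lambda*B*x.  The deflated pencil
% (Ad, Bd) has the eigenvalues of (A, B) with lam1 removed; it is one possible
% realization, not necessarily the construction of Equations (16) and (17).
function [Ad, Bd] = shift_deflate_sweep(A, B, lam1, x1, y1)
    n = size(A, 1);
    gamma = y1' * (B * x1);                   % deflation condition scalar
    if abs(gamma) < eps
        error('gamma is numerically zero: deflation fails; shift to infinity instead.');
    end
    Atil = shift_to_zero(A, B, lam1, x1, y1); % shift lam1 to zero (earlier sketch)
    H = householder_to_e1(x1);                % H*x1 = alpha*e1 (earlier sketch)
    K = householder_to_e1(y1);                % K*y1 = beta*e1
    KAH = K * Atil * H;                       % first row and first column vanish
    KBH = K * B * H;                          % (1,1) entry is nonzero since gamma ~= 0
    Ad = KAH(2:n, 2:n);                                          % deflated A
    Bd = KBH(2:n, 2:n) - KBH(2:n, 1) * KBH(1, 2:n) / KBH(1, 1);  % deflated B
end
```

Under these assumptions, one can check on a small example that eig(Ad, Bd) returns the eigenvalues of (A, B) with lam1 removed.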
5. Numerical Results
In this section, we report some numerical examples to illustrate the effectiveness of the shift-deflation technique for the GEP. All examples are performed in MATLAB R2015b on an Intel Core 2.9 GHz PC with 4 GB of memory under the Windows 7 system. For simplicity, we use the MATLAB built-in function ‘eigs’ to seek some eigenpairs with eigenvalues of smallest modulus in the first iteration of Algorithm 1, and ‘svds’ to compute the smallest singular values. The integer r in Algorithm 1 is randomly chosen but does not exceed a given threshold, owing to the use of the MATLAB built-in function ‘eigs’. We denote by n the dimension of the GEP (1) and by k the number of desired eigenpairs.
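The following short MATLAB sketch illustrates the calling sequences assumed here for these two built-in functions (R2015b syntax). The example pencil is arbitrary and serves only to make the snippet runnable, and it is our assumption that ‘svds’ is applied to the shifted matrix A - lam*B.

```matlab
% Illustration (assumed calling sequences, MATLAB R2015b syntax) of the two
% built-in solvers used inside Algorithm 1, on an arbitrary example pencil.
n = 500; rng(0);
A = sprandn(n, n, 0.02) + 2*speye(n);    % arbitrary sparse test matrices
B = speye(n);
r = 5;                                   % number of eigenpairs per 'eigs' call
[V, D] = eigs(A, B, r, 'sm');            % r eigenvalues of smallest modulus
lam = D(1, 1);
smin = svds(A - lam*B, 1, 0);            % singular value of A - lam*B closest to 0
                                         % (may warn: A - lam*B is nearly singular)
```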
Example 1.
We consider the GEP (1) where the coefficient matrices are given as
All eigenpairs can easily be obtained by using the MATLAB built-in function ‘eig’; they are shown in Table 1.
Table 1.
The eigenpairs of Example 1.
From Table 1, we can see that it is not necessary to apply the shift technique to . Thus, the deflation condition scalar is if we deflate this to zero. By performing Algorithm 1 to seek all six eigenpairs, we obtain the numerical results shown in Table 2. From the first and last columns of Table 2, we find that the proposed deflation technique is highly accurate.
Table 2.
Numerical results of Example 1.
Example 2.
We consider the GEP (1) where the coefficient matrices come from the Harwell-Boeing test matrices [14] bcsstk07 and bcsstm07. These matrices are of order 420.
We implement Algorithm 1 with and , and show the numerical results in Table 3. From the last two columns of Table 3, we can see that the computed eigenpairs are quite accurate.
Table 3.
Numerical results of Example 2.
Example 3.
In this example, the coefficient matrices A and B are symmetric and sparse, and given by
It is obvious that some structural properties of the coefficient matrices, such as symmetry and sparsity, will no longer hold after performing the shift-deflation technique. However, we can still exploit some properties of the input coefficient matrices implicitly. In the above case, we can keep using sparse operations even after performing the shift-deflation technique several times, as long as all converged eigenpairs, the scalars (or the matrices R), the vectors s (or the matrices S) and the nonsingular matrices H and K are saved during the iterations (see the sketch below). The numerical results for Algorithm 1 with the parameters n = 10,000, and are shown in Figure 1.
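To illustrate how the sparsity of the input matrices can be exploited implicitly, the following MATLAB sketch applies the shifted operator of the assumed Wielandt-type form from the earlier sketches as a function handle, without ever forming the dense shifted matrix; the same idea extends to the stored deflation factors.

```matlab
% A sketch of applying the (assumed) shifted operator implicitly: only the
% sparse matrices A and B, the converged triplet (lam1, x1, y1) and the scalar
% gamma are stored, so every application reduces to sparse matrix-vector
% products plus two inner products.
Bx1     = B * x1;
gamma   = y1' * Bx1;
Atil_mv = @(v) A*v - (lam1/gamma) * Bx1 * (y1' * (B*v));   % v -> Atil*v
```

Such a handle can then be passed to iterative eigensolvers that require only matrix-vector products.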
Figure 1.
Numerical results of Example 3.
From Figure 1, we can see that both the original smallest singular values and the original relative residual norms retain high accuracy even though the shift-deflation technique is performed 200 times. Moreover, the original relative residual norm is similar to (and slightly larger than, in most cases) the shift-deflated relative residual norm obtained with the MATLAB built-in function ‘eigs’. Therefore, the numerical results show that the shift-deflation technique for the large-scale GEP is effective and robust.
Author Contributions
Methodology, X.S.; Software, W.W., X.S. and A.L.; Supervision, X.C.; Writing—original draft, W.W. and X.C. All authors have read and agreed to the published version of the manuscript.
Funding
The research was supported by the National Natural Science Foundation of China (Grant Nos. 12001396 and 12201302), the Natural Science Foundation of Jiangsu Province (Grant Nos. BK20170591 and BK20200268), the Natural Science Foundation of Jiangsu Higher Education Institutions of China (Grant Nos. 21KJB110017 and 20KJB110005), the China Postdoctoral Science Foundation (Grant No. 2018M642130) and Qing Lan Project of the Jiangsu Higher Education Institutions.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Saad, Y. Numerical Methods for Large Eigenvalue Problems; Manchester University Press: Manchester, UK, 1992.
- Poedts, S.; Meijer, P.M.; Goedbloed, J.P.; van der Vorst, H.; Jakoby, A. Parallel magnetohydrodynamics on the CM-5. In High-Performance Computing and Networking; Springer: Berlin/Heidelberg, Germany, 1994; pp. 365–370.
- Yu, I.-W. Subspace iteration for eigen-solution of fluid-structure interaction problems. J. Press. Vessel Technol. ASME 1987, 109, 244–248.
- Brebbia, C.A.; Venturini, W.S. Boundary Element Techniques: Applications in Fluid Flow and Computational Aspects; Computational Mechanics Publications: Southampton, UK, 1987.
- Moler, C.B.; Stewart, G.W. An algorithm for generalized matrix eigenvalue problems. SIAM J. Numer. Anal. 1973, 10, 241–256.
- Li, J.-F.; Li, W.; Vong, S.-W.; Luo, Q.-L.; Xiao, M. A Riemannian optimization approach for solving the generalized eigenvalue problem for nonsquare matrix pencils. J. Sci. Comput. 2020, 82, 67.
- Saad, Y. Numerical solution of large nonsymmetric eigenvalue problems. Comput. Phys. Comm. 1989, 53, 71–90.
- Rommes, J. Arnoldi and Jacobi-Davidson methods for generalized eigenvalue problems Ax = λBx with singular B. Math. Comput. 2008, 77, 995–1015.
- Najafi, H.S.; Moosaei, H.; Moosaei, M. A new computational harmonic projection algorithm for large unsymmetric generalized eigenproblems. Appl. Math. Sci. 2008, 2, 1327–1334.
- Jia, Z.-X.; Zhang, Y. A refined shift-invert Arnoldi algorithm for large unsymmetric generalized eigenproblems. Comput. Math. Appl. 2002, 44, 1117–1127.
- Bai, Z.-Z.; Miao, C.-Q. On local quadratic convergence of inexact simplified Jacobi-Davidson method. Linear Algebra Appl. 2017, 520, 215–241.
- Bai, Z.-Z.; Miao, C.-Q. On local quadratic convergence of inexact simplified Jacobi-Davidson method for interior eigenpairs of Hermitian eigenproblems. Appl. Math. Lett. 2017, 72, 23–28.
- Li, J.-F.; Li, W.; Duan, X.-F.; Xiao, M. Newton’s method for the parameterized generalized eigenvalue problem with nonsquare matrix pencils. Adv. Comput. Math. 2021, 47, 29.
- Duff, I.S.; Grimes, R.G.; Lewis, J.G. Sparse matrix test problems. ACM Trans. Math. Softw. 1989, 15, 1–14.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).