Efficient Reduction Algorithms for Banded Symmetric Generalized Eigenproblems via Sequentially Semiseparable (SSS) Matrices
Abstract
1. Introduction
2. Semiseparable and SSS Matrices
Fast Matrix-Matrix Multiplication
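As a concrete illustration of the SSS representation this section relies on, the following NumPy sketch assembles a matrix from its generators and applies it to a vector in linear time with one forward and one backward sweep. It is our own demo under one common generator convention (that of Chandrasekaran et al.: diagonal blocks D_i, upper generators U_i, W_i, V_i, lower generators P_i, R_i, Q_i); the fast matrix-matrix product follows the same recursions applied column-block by column-block.

```python
import numpy as np

rng = np.random.default_rng(0)
n_blk, m, r = 6, 4, 2          # number of blocks, block size, generator rank

# Random SSS generators, in one common convention:
#   C_{ii} = D_i
#   C_{ij} = U_i W_{i+1} ... W_{j-1} V_j^T   for i < j   (upper part)
#   C_{ij} = P_i R_{i-1} ... R_{j+1} Q_j^T   for i > j   (lower part)
D = [rng.standard_normal((m, m)) for _ in range(n_blk)]
U = [rng.standard_normal((m, r)) for _ in range(n_blk)]
V = [rng.standard_normal((m, r)) for _ in range(n_blk)]
W = [rng.standard_normal((r, r)) for _ in range(n_blk)]
P = [rng.standard_normal((m, r)) for _ in range(n_blk)]
Q = [rng.standard_normal((m, r)) for _ in range(n_blk)]
R = [rng.standard_normal((r, r)) for _ in range(n_blk)]

def sss_dense(D, U, V, W, P, Q, R):
    """Assemble the dense matrix from its SSS generators (for checking only)."""
    n, mb, rk = len(D), D[0].shape[0], U[0].shape[1]
    C = np.zeros((n * mb, n * mb))
    for i in range(n):
        for j in range(n):
            if i == j:
                blk = D[i]
            elif i < j:
                T = np.eye(rk)
                for k in range(i + 1, j):
                    T = T @ W[k]
                blk = U[i] @ T @ V[j].T
            else:
                T = np.eye(rk)
                for k in range(i - 1, j, -1):
                    T = T @ R[k]
                blk = P[i] @ T @ Q[j].T
            C[i*mb:(i+1)*mb, j*mb:(j+1)*mb] = blk
    return C

def sss_matvec(D, U, V, W, P, Q, R, x):
    """Linear-time product C @ x via one forward and one backward sweep."""
    n, mb, rk = len(D), D[0].shape[0], U[0].shape[1]
    xb = x.reshape(n, mb)
    g = np.zeros((n, rk))        # forward states, accumulate the lower part
    h = np.zeros((n, rk))        # backward states, accumulate the upper part
    for i in range(n):
        g[i] = Q[i].T @ xb[i] + (R[i] @ g[i-1] if i > 0 else 0.0)
    for i in range(n - 1, -1, -1):
        h[i] = V[i].T @ xb[i] + (W[i] @ h[i+1] if i < n - 1 else 0.0)
    b = np.zeros((n, mb))
    for i in range(n):
        b[i] = D[i] @ xb[i]
        if i > 0:
            b[i] += P[i] @ g[i-1]
        if i < n - 1:
            b[i] += U[i] @ h[i+1]
    return b.ravel()
```

The cost of `sss_matvec` is linear in the number of blocks, versus quadratic for the dense product, which is the source of the speedups discussed in this section.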
3. The Reduction Algorithm
3.1. The SSS Representation of C
3.2. Recompression of C
Complexity
- Compute a low-rank approximation of , of dimensions or . Using the ID, it costs flops.
- Compute , where X is an matrix and is of dimension . It costs flops.
- Compute , where is of dimension and is of dimension . It costs flops.
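To illustrate the low-rank step above, here is one way to compute an interpolative decomposition (ID) with SciPy's `scipy.linalg.interpolative` module; the rank and matrix sizes are our own choices for the demo, not values from the paper.

```python
import numpy as np
import scipy.linalg.interpolative as sli

rng = np.random.default_rng(1)
m, n, k = 60, 40, 5

# A matrix of exact rank k, so a rank-k ID reproduces it to roundoff.
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))

# Column ID of fixed rank k: A ≈ A[:, idx[:k]] @ P,
# where P is an interpolation matrix containing a k-by-k identity.
idx, proj = sli.interp_decomp(A, k)
B = A[:, idx[:k]]                               # selected "skeleton" columns
A_id = sli.reconstruct_matrix_from_id(B, idx, proj)
```

The ID is attractive here because the skeleton columns are actual columns of the original block, which keeps the SSS generators well conditioned after recompression.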
4. Banded Reduction for Symmetric SSS Matrix
- Work on the last two block rows. The off-diagonal block is
- Work on the second and third block rows. The off-diagonal block is. We can compute an orthogonal matrix of dimension such that . Compute , define , and update . This introduces a bulge at positions and , and C looks like. The bulge is computed as , and is updated. The standard chasing algorithm [43] can then eliminate the bulge; the chasing process does not affect the top-left part of C, which is represented in SSS form. Note that , , for , and the block tridiagonal matrix is defined by and or . After bulge chasing, matrix C has the following form:
- Define and . Finally, we obtain the block tridiagonal matrix.
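For the scalar (non-block) case, the chasing step referred to above is the classical Givens-based band tridiagonalization of Schwarz [43]. The following NumPy sketch is our own demo of that scalar algorithm, not the paper's block method: each sweep zeroes the outermost band entry of a column and chases the resulting bulge down the band until it falls off the matrix.

```python
import numpy as np

def givens(a, b):
    """Rotation (c, s) with -s*a + c*b = 0."""
    r = np.hypot(a, b)
    return (a / r, b / r) if r > 0 else (1.0, 0.0)

def band_to_tridiagonal(A, b):
    """Reduce a symmetric band matrix with half-bandwidth b to tridiagonal
    form by Givens rotations with bulge chasing (Schwarz-style).
    Demo version: applies full n-by-n rotations; a real implementation
    only touches the O(b) entries each rotation affects."""
    A = np.array(A, dtype=float, copy=True)
    n = A.shape[0]
    for bw in range(b, 1, -1):          # peel off one outer diagonal at a time
        for k in range(n - bw):
            # Zero A[k+bw, k] with rows (k+bw-1, k+bw); each rotation may
            # create a bulge bw positions further down, which we chase.
            col, i, j = k, k + bw - 1, k + bw
            while j < n:
                c, s = givens(A[i, col], A[j, col])
                G = np.eye(n)
                G[i, i] = c; G[i, j] = s
                G[j, i] = -s; G[j, j] = c
                A = G @ A @ G.T          # zeroes A[j, col]; bulge at (j+bw, i)
                col, i, j = i, j + bw - 1, j + bw
    return A
```

Because every update is an orthogonal similarity, the spectrum is preserved exactly, which the block algorithm above exploits in the same way.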
Algorithm 1 (Symmetric banded reduction for a symmetric SSS matrix). Assume that C is an block symmetric SSS matrix whose generators are and , for , and each generator is a matrix.
Inputs: generators and , for .
Outputs: a block tridiagonal matrix defined in and .
Complexity
- Step 2: the QR factorization of a tall matrix costs flops. It is executed times in total, and thus costs flops.
- Step 3: computing costs flops, and the product of and with costs flops. Since this step executes times, it costs flops in total.
- Step 4:
- Computing the bulge costs flops. This occurs times, costing flops in total.
- Each bulge-chasing step costs flops.
- The k-th step requires bulge-chasing steps. Since , the total cost is flops.
- Step 11: it costs flops.
5. Numerical Results
- A = rand(n); B = rand(n);
- A = (A + A')/2; B = (B + B')/2 + eye(n);
- A = triu(tril(A,r), -r); B = triu(tril(B,r), -r);
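The MATLAB generator above can be mirrored in NumPy as follows (the function name is ours; `np.triu`/`np.tril` take the same diagonal-offset argument as MATLAB's `triu`/`tril`):

```python
import numpy as np

def banded_test_pair(n, r, seed=0):
    """NumPy analogue of the MATLAB snippet above: a random symmetric
    banded pair (A, B) with half-bandwidth r; the identity shift biases
    B toward positive definiteness, as in the MATLAB code."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n))
    B = rng.random((n, n))
    A = (A + A.T) / 2                     # symmetrize
    B = (B + B.T) / 2 + np.eye(n)         # symmetrize and shift
    A = np.triu(np.tril(A, r), -r)        # keep only the band |i - j| <= r
    B = np.triu(np.tril(B, r), -r)
    return A, B
```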
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Anderson, E.; Bai, Z.; Bischof, C.; Blackford, S.; Demmel, J.; Dongarra, J.; Du Croz, J.; Greenbaum, A.; Hammarling, S.; McKenney, A.; et al. LAPACK Users’ Guide, 3rd ed.; SIAM: Philadelphia, PA, USA, 1999. [Google Scholar]
- Wilkinson, J. Some recent advances in numerical linear algebra. In The State of the Art in Numerical Analysis; Academic Press: New York, NY, USA, 1977. [Google Scholar]
- Dhillon, I.S.; Parlett, B.N. Multiple representations to compute orthogonal eigenvectors of symmetric tridiagonal matrices. Linear Algebra Appl. 2004, 387, 1–28. [Google Scholar] [CrossRef] [Green Version]
- Brebner, M.; Grad, J. Eigenvalues of Ax = λBx for real symmetric matrices A and B computed by reduction to a pseudosymmetric form and the HR process. Linear Algebra Appl. 1982, 43, 99–118. [Google Scholar] [CrossRef] [Green Version]
- Tisseur, F. Tridiagonal-diagonal reduction of symmetric indefinite pairs. SIAM J. Matrix Anal. Appl. 2004, 26, 215–232. [Google Scholar] [CrossRef] [Green Version]
- Crawford, C. Reduction of a band-symmetric generalized eigenvalue problem. Commun. ACM 1973, 16, 41–44. [Google Scholar] [CrossRef]
- Lang, B. Efficient reduction of banded hermitian positive definite generalized eigenvalue problems to banded standard eigenvalue problems. SIAM J. Sci. Comput. 2019, 41, C52–C72. [Google Scholar] [CrossRef]
- Rippl, M.; Lang, B.; Huckle, T. Parallel eigenvalue computation for banded generalized eigenvalue problems. Parallel Comput. 2019, 88, 102542. [Google Scholar] [CrossRef]
- Marek, A.; Blum, V.; Johanni, R.; Havu, V.; Lang, B.; Auckenthaler, T.; Heinecke, A.; Bungartz, H.; Lederer, H. The ELPA library: Scalable parallel eigenvalue solutions for electronic structure theory and computational science. J. Phys. Condens. Matter 2014, 26, 213201. [Google Scholar] [CrossRef]
- Chandrasekaran, S.; Dewilde, P.; Gu, M.; Pals, T.; Sun, X.; van der Veen, A.J.; White, D. Fast Stable Solvers for Sequentially Semi-Separable Linear Systems of Equations; Technical Report; University of California: Berkeley, CA, USA, 2003. [Google Scholar]
- Chandrasekaran, S.; Dewilde, P.; Gu, M.; Pals, T.; Sun, X.; van der Veen, A.J.; White, D. Some fast algorithms for sequentially semiseparable representation. SIAM J. Matrix Anal. Appl. 2005, 27, 341–364. [Google Scholar] [CrossRef] [Green Version]
- Singh, V. The inverse of a certain block matrix. Bull. Aust. Math. Soc. 1979, 20, 161–163. [Google Scholar] [CrossRef] [Green Version]
- Chameleon Software Homepage. Available online: https://solverstack.gitlabpages.inria.fr/chameleon/ (accessed on 22 March 2022).
- Agullo, E.; Augonnet, C.; Dongarra, J.; Ltaief, H.; Namyst, R.; Thibault, S.; Tomov, S. A hybridization methodology for high-performance linear algebra software for GPUs. In GPU Computing Gems Jade Edition; Elsevier: Amsterdam, The Netherlands, 2012; pp. 473–484. [Google Scholar]
- Chandrasekaran, S.; Gu, M. A divide-and-conquer algorithm for the eigendecomposition of symmetric block-diagonal plus semiseparable matrices. Numer. Math. 2004, 96, 723–731. [Google Scholar] [CrossRef]
- Chandrasekaran, S.; Gu, M. Fast and stable eigendecomposition of symmetric banded plus semi-separable matrices. Linear Algebra Appl. 2000, 313, 107–114. [Google Scholar] [CrossRef] [Green Version]
- Vandebril, R.; Van Barel, M.; Mastronardi, N. Matrix Computations and Semiseparable Matrices, Volume I: Linear Systems; Johns Hopkins University Press: Baltimore, MD, USA, 2008. [Google Scholar]
- Hackbusch, W. A sparse matrix arithmetic based on ℋ-matrices. Part I: Introduction to ℋ-matrices. Computing 1999, 62, 89–108. [Google Scholar] [CrossRef]
- Hackbusch, W.; Grasedyck, L.; Börm, S. An introduction to hierarchical matrices. Math. Bohem. 2002, 127, 229–241. [Google Scholar] [CrossRef]
- Hackbusch, W.; Khoromskij, B. A sparse matrix arithmetic based on ℋ-matrices. Part II: Application to multi-dimensional problems. Computing 2000, 64, 21–47. [Google Scholar] [CrossRef]
- Börm, S.; Grasedyck, L.; Hackbusch, W. Introduction to hierarchical matrices with applications. Eng. Anal. Bound. Elem. 2003, 27, 405–422. [Google Scholar] [CrossRef] [Green Version]
- Hackbusch, W.; Khoromskij, B.; Sauter, S. On ℋ²-matrices. In Lectures on Applied Mathematics; Bungartz, H., Hoppe, R.H.W., Zenger, C., Eds.; Springer: Berlin/Heidelberg, Germany, 2000; pp. 9–29. [Google Scholar]
- Hackbusch, W.; Börm, S. Data-sparse approximation by adaptive ℋ²-matrices. Computing 2002, 69, 1–35. [Google Scholar] [CrossRef]
- Eidelman, Y.; Gohberg, I. On a new class of structured matrices. Integral Equ. Oper. Theory 1999, 34, 293–324. [Google Scholar] [CrossRef]
- Eidelman, Y.; Gohberg, I.; Gemignani, L. On the fast reduction of a quasiseparable matrix to Hessenberg and tridiagonal forms. Linear Algebra Appl. 2007, 420, 86–101. [Google Scholar] [CrossRef] [Green Version]
- Vandebril, R.; Van Barel, M.; Mastronardi, N. Matrix Computations and Semiseparable Matrices, Volume II: Eigenvalue and Singular Value Methods; Johns Hopkins University Press: Baltimore, MD, USA, 2008. [Google Scholar]
- Chandrasekaran, S.; Dewilde, P.; Gu, M.; Lyons, W.; Pals, T. A fast solver for HSS representations via sparse matrices. SIAM J. Matrix Anal. Appl. 2006, 29, 67–81. [Google Scholar] [CrossRef] [Green Version]
- Li, S.; Gu, M.; Cheng, L.; Chi, X.; Sun, M. An accelerated divide-and-conquer algorithm for the bidiagonal SVD problem. SIAM J. Matrix Anal. Appl. 2014, 35, 1038–1057. [Google Scholar] [CrossRef]
- Liao, X.; Li, S.; Lu, Y.; Roman, J.E. A parallel structured divide-and-conquer algorithm for symmetric tridiagonal eigenvalue problems. IEEE Trans. Parallel Distrib. Syst. 2020, 32, 367–378. [Google Scholar] [CrossRef]
- Zhang, J.; Su, Q.; Tang, B.; Wang, C.; Li, Y. DPSNet: Multitask Learning Using Geometry Reasoning for Scene Depth and Semantics. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–12. [Google Scholar] [CrossRef] [PubMed]
- Rebrova, E.; Chávez, G.; Liu, Y.; Ghysels, P.; Li, X.S. A study of clustering techniques and hierarchical matrix formats for kernel ridge regression. In Proceedings of the 2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Vancouver, BC, Canada, 21–25 May 2018; pp. 883–892. [Google Scholar]
- Chávez, G.; Liu, Y.; Ghysels, P.; Li, X.S.; Rebrova, E. Scalable and memory-efficient kernel ridge regression. In Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), New Orleans, LA, USA, 18–22 May 2020; pp. 956–965. [Google Scholar]
- Erlandson, L.; Cai, D.; Xi, Y.; Chow, E. Accelerating parallel hierarchical matrix-vector products via data-driven sampling. In Proceedings of the 2020 IEEE International Parallel and Distributed Processing Symposium (IPDPS), New Orleans, LA, USA, 18–22 May 2020; pp. 749–758. [Google Scholar]
- Cai, D.; Chow, E.; Erlandson, L.; Saad, Y.; Xi, Y. SMASH: Structured matrix approximation by separation and hierarchy. Numer. Linear Algebra Appl. 2018, 25, e2204. [Google Scholar] [CrossRef] [Green Version]
- Kailath, T. Fredholm resolvents, Wiener-Hopf equations, and Riccati differential equations. IEEE Trans. Inf. Theory 1969, 15, 665–672. [Google Scholar] [CrossRef]
- Asplund, E. Inverses of matrices aij which satisfy aij = 0 for j > i + p. Math. Scand. 1959, 7, 57–60. [Google Scholar] [CrossRef] [Green Version]
- Barrett, W.; Feinsilver, P. Inverses of banded matrices. Linear Algebra Appl. 1981, 41, 111–130. [Google Scholar] [CrossRef] [Green Version]
- Vandebril, R.; Van Barel, M.; Mastronardi, N. A note on the representation and definition of semiseparable matrices. Numer. Linear Algebra Appl. 2005, 12, 839–858. [Google Scholar] [CrossRef] [Green Version]
- Dewilde, P.; van der Veen, A. Time-Varying Systems and Computations; Kluwer Academic Publishers: Amsterdam, The Netherlands, 1998. [Google Scholar]
- Chandrasekaran, S.; Dewilde, P.; Gu, M.; Pals, T.; van der Veen, A.J. Fast stable solver for sequentially semi-separable linear systems of equations. In Proceedings of the International Conference on High-Performance Computing, Bangalore, India, 18–21 December 2002; pp. 545–554. [Google Scholar]
- Gu, M.; Eisenstat, S.C. Efficient algorithms for computing a strong-rank revealing QR factorization. SIAM J. Sci. Comput. 1996, 17, 848–869. [Google Scholar] [CrossRef] [Green Version]
- Cheng, H.; Gimbutas, Z.; Martinsson, P.; Rokhlin, V. On the compression of low rank matrices. SIAM J. Sci. Comput. 2005, 26, 1389–1404. [Google Scholar] [CrossRef] [Green Version]
- Schwarz, H.R. Tridiagonalization of a symmetric band matrix. Numer. Math. 1968, 12, 231–241. [Google Scholar] [CrossRef]
- Li, S.; Jiang, H.; Dong, D.; Huang, C.; Liu, J.; Liao, X.; Chen, X. Efficient data redistribution algorithms from irregular to block cyclic data distribution. IEEE Trans. Parallel Distrib. Syst. 2022. [Google Scholar] [CrossRef]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Yuan, F.; Li, S.; Jiang, H.; Wang, H.; Chen, C.; Du, L.; Yang, B. Efficient Reduction Algorithms for Banded Symmetric Generalized Eigenproblems via Sequentially Semiseparable (SSS) Matrices. Mathematics 2022, 10, 1676. https://doi.org/10.3390/math10101676