Search Results (3)

Search Parameters:
Keywords = orthogonality of consecutive residual vector

22 pages, 346 KB  
Article
Two Extrapolation Techniques on Splitting Iterative Schemes to Accelerate the Convergence Speed for Solving Linear Systems
by Chein-Shan Liu and Botong Li
Algorithms 2025, 18(7), 440; https://doi.org/10.3390/a18070440 - 18 Jul 2025
Abstract
For the splitting iterative scheme used to solve a system of linear equations, an equivalent form in terms of descent and residual vectors is formulated. We propose an extrapolation technique based on the new formulation, such that a new splitting iterative scheme (NSIS) can be generated from the original one simply by inserting an acceleration parameter before the descent vector. The spectral radius of the NSIS is proven to be smaller than that of the original scheme, so the NSIS converges faster. The orthogonality of consecutive residual vectors is built into the second NSIS, from which a stepwise varying orthogonalization factor can be derived explicitly. Multiplying the descent vector by this factor, the second NSIS is proven to be absolutely convergent; the modification is based on the maximal reduction of the residual-vector norm. Two-parameter and three-parameter NSIS are investigated, wherein the optimal value of one parameter is obtained by a maximization technique. The splitting iterative schemes are unified to have the same iterative form, but are endowed with different governing equations for the descent vector. Several examples are examined to exhibit the performance of the proposed extrapolation techniques used in the NSIS.
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)
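The acceleration idea described in this abstract can be sketched in a few lines. Starting from a splitting with M = diag(A), the iteration x_{k+1} = x_k + M^{-1} r_k is modified by inserting an acceleration parameter alpha in front of the descent vector. The sketch below is a generic Jacobi-type illustration, not the paper's NSIS; the test matrix and the fixed value alpha = 1.05 are assumptions for demonstration (the paper derives its parameter from a spectral-radius analysis).

```python
import numpy as np

def split_iterate(A, b, alpha=1.0, tol=1e-10, max_it=500):
    """Splitting iteration x_{k+1} = x_k + alpha * u_k with descent vector
    u_k = M^{-1} r_k, residual r_k = b - A x_k, and M = diag(A) (Jacobi)."""
    M_inv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    for k in range(max_it):
        r = b - A @ x                     # residual vector
        if np.linalg.norm(r) < tol:
            return x, k
        x = x + alpha * (M_inv * r)       # acceleration parameter alpha
    return x, max_it

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x_plain, k_plain = split_iterate(A, b, alpha=1.0)   # original scheme
x_nsis,  k_nsis  = split_iterate(A, b, alpha=1.05)  # extrapolated scheme
```

With alpha = 1 the scheme reduces to the original splitting iteration; the whole extrapolation is the single scalar multiplying the descent vector.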
22 pages, 353 KB  
Article
Numerical Simulations of Complex Helmholtz Equations Using Two-Block Splitting Iterative Schemes with Optimal Values of Parameters
by Chein-Shan Liu, Chih-Wen Chang and Chia-Cheng Tsai
AppliedMath 2024, 4(4), 1256-1277; https://doi.org/10.3390/appliedmath4040068 - 9 Oct 2024
Abstract
For a two-block splitting iterative scheme to solve the complex linear system resulting from the complex Helmholtz equation, the iterative form using the descent vector and the residual vector is formulated. We propose splitting iterative schemes by considering the perpendicularity of consecutive residual vectors. The two-block splitting iterative schemes are proven to be absolutely convergent, and the residual is minimized at each iteration step. Single and double parameters in the two-block splitting iterative schemes are derived explicitly by utilizing the orthogonality condition or the minimality conditions. Several simulations of complex Helmholtz equations are performed to exhibit the performance of the proposed two-block iterative schemes endowed with optimal values of the parameters. The primary novelty and major contribution of this paper lie in using the orthogonality condition of residual vectors to optimize the iterative process. The proposed method might fill a gap in the current literature, where existing iterative methods either lack explicit parameter optimization or struggle with high wave numbers and large damping constants in the complex Helmholtz equation. The two-block splitting iterative scheme provides an efficient and convergent solution, even in challenging cases.
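The perpendicularity condition on consecutive residuals can be made explicit. Writing r_{k+1} = r_k - alpha_k A u_k with descent vector u_k = M^{-1} r_k, requiring r_{k+1} · r_k = 0 yields alpha_k = (r_k · r_k)/(r_k · A u_k). The following is a minimal real-valued sketch with a Jacobi-type splitting; the paper treats the complex two-block system arising from the Helmholtz equation, so the matrix and splitting here are illustrative assumptions.

```python
import numpy as np

def orth_split_iterate(A, b, tol=1e-10, max_it=500):
    """Splitting iteration whose stepwise factor alpha_k enforces
    orthogonality of consecutive residuals: r_{k+1} . r_k = 0."""
    M_inv = 1.0 / np.diag(A)              # Jacobi-type splitting M = diag(A)
    x = np.zeros_like(b)
    for k in range(max_it):
        r = b - A @ x
        if np.linalg.norm(r) < tol:
            return x, k
        u = M_inv * r                      # descent vector
        Au = A @ u
        # From r_{k+1} = r_k - alpha_k * A u_k and r_{k+1} . r_k = 0:
        alpha = (r @ r) / (r @ Au)
        x = x + alpha * u
    return x, max_it

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, k = orth_split_iterate(A, b)
```

Because the parameter varies from step to step, no tuning is needed: each alpha_k is computed from the current residual alone.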
23 pages, 832 KB  
Article
Re-Orthogonalized/Affine GMRES and Orthogonalized Maximal Projection Algorithm for Solving Linear Systems
by Chein-Shan Liu, Chih-Wen Chang and Chung-Lun Kuo
Algorithms 2024, 17(6), 266; https://doi.org/10.3390/a17060266 - 15 Jun 2024
Abstract
GMRES is one of the most powerful and popular methods for solving linear systems in the Krylov subspace; we examine it from two viewpoints: maximizing the decrease in the length of the residual vector, and maintaining the orthogonality of consecutive residual vectors. A stabilization factor, η, measuring the deviation from orthogonality of the residual vectors, is inserted into GMRES to preserve the orthogonality automatically. The re-orthogonalized GMRES (ROGMRES) method guarantees absolute convergence, even when orthogonality is gradually lost during the GMRES iteration. When η<1/2, the residual lengths of GMRES and GMRES(m) no longer decrease; hence, η<1/2 can be adopted as a stopping criterion to terminate the iterations. We prove that η=1 for the ROGMRES method; it automatically keeps the orthogonality and maintains the maximal reduction of the length of the residual vector. We improve GMRES by seeking the descent vector that minimizes the residual in a larger space, the affine Krylov subspace. The resulting orthogonalized maximal projection algorithm (OMPA) is identified as having good performance. We further derive the iterative formulas by extending the GMRES method to the affine Krylov subspace; these equations differ slightly from those derived by Saad and Schultz (1986). The affine GMRES method is combined with the orthogonalization technique to generate a powerful affine GMRES (A-GMRES) method with high performance.
(This article belongs to the Special Issue Numerical Optimization and Algorithms: 2nd Edition)
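For context, the residual-minimizing viewpoint on GMRES that this abstract starts from can be demonstrated with a textbook Arnoldi-based implementation: at step k the iterate minimizes the residual norm over the Krylov subspace K_k(A, b). This is plain GMRES, not the paper's ROGMRES, OMPA, or A-GMRES; the test matrix and tolerances are illustrative assumptions.

```python
import numpy as np

def gmres_minimal(A, b, m=20, tol=1e-10):
    """Plain (unrestarted) GMRES via the Arnoldi process: the k-th iterate
    minimizes ||b - A x|| over the Krylov subspace K_k(A, b), with x0 = 0."""
    n = len(b)
    Q = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(b)
    Q[:, 0] = b / beta
    x = np.zeros(n)
    for k in range(m):
        v = A @ Q[:, k]
        for j in range(k + 1):            # modified Gram-Schmidt
            H[j, k] = Q[:, j] @ v
            v -= H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # small least-squares problem: min_y || beta * e1 - H_k y ||
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)[0]
        x = Q[:, :k + 1] @ y
        if np.linalg.norm(b - A @ x) < tol or H[k + 1, k] < 1e-14:
            return x, k + 1               # converged or happy breakdown
        Q[:, k + 1] = v / H[k + 1, k]
    return x, m

A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0, 4.0])
x, k = gmres_minimal(A, b)
```

For a nonsingular n-by-n system, the exact solution is reached in at most n steps in exact arithmetic; the paper's ROGMRES addresses what happens when the orthogonality underlying this minimization is lost in floating point.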