Article

Two Block Splitting Iteration Methods for Solving Complex Symmetric Linear Systems from Complex Helmholtz Equation

by Yanan Zhu 1, Naimin Zhang 1,* and Zhen Chao 2
1 College of Mathematics and Physics, Wenzhou University, Wenzhou 325035, China
2 Department of Mathematics, Western Washington University, Bellingham, WA 98225, USA
* Author to whom correspondence should be addressed.
Mathematics 2024, 12(12), 1888; https://doi.org/10.3390/math12121888
Submission received: 18 May 2024 / Revised: 14 June 2024 / Accepted: 15 June 2024 / Published: 18 June 2024

Abstract

In this paper, we study the improved block splitting (IBS) iteration method and its accelerated variant, the accelerated improved block splitting (AIBS) iteration method, for solving linear systems of equations stemming from the discretization of the complex Helmholtz equation. We conduct a comprehensive convergence analysis and derive optimal iteration parameters aimed at minimizing the spectral radius of the iteration matrix. Through numerical experiments, we validate the efficiency of both iteration methods.

1. Introduction

In this paper, we consider the complex partial differential Equation (PDE) governing the 2D field variable u ( x , y ) :
$-\nabla^2 u + \sigma u = f, \qquad \nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}, \qquad \sigma \in \mathbb{C},$
where, importantly, the parameter σ is a complex-valued constant. The PDE in (1) will then be referred to as the complex Helmholtz equation. The complex Helmholtz equation arises in periodically forced systems, ranging from electrochemical impedance spectroscopy to unsteady slow viscous flows, and in a broad area that has become known as diffusion wave field theory [1]. Discretizations of the complex Helmholtz Equation (1) using, e.g., finite difference [2,3,4], finite element [5,6], or spectral element methods [7], together with appropriate boundary conditions, result in a large, sparse, possibly complex-valued, and highly indefinite linear system:
$Ax \equiv (W + iT)x = b,$
where $W, T \in \mathbb{R}^{n \times n}$ are symmetric positive semi-definite matrices with at least one of them positive definite, $x, b \in \mathbb{C}^n$, and $i = \sqrt{-1}$ is the imaginary unit. Linear systems of this form arise widely in scientific computing and engineering applications, such as electromagnetism, wave propagation, quantum mechanics, molecular scattering, and structural dynamics (see [8,9,10,11,12,13,14]).
To solve the complex linear system (2), we often transform it into the following equivalent real block two-by-two form:
$Bz \equiv \begin{pmatrix} W & -T \\ T & W \end{pmatrix} \begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix} \equiv \tilde{b},$
where $x = u + iv$, $b = f + ig$, $u, v, f, g \in \mathbb{R}^n$, and $\tilde{b} \in \mathbb{R}^{2n}$. Throughout this paper, $(\cdot)^*$ denotes the conjugate transpose of a matrix or vector, $(\cdot)^T$ denotes the transpose of a matrix or vector, and $\rho(\cdot)$ denotes the spectral radius of a matrix.
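To make the transformation concrete, the following minimal MATLAB sketch (ours, not part of the paper) assembles the real block form (3) for a small test system and recovers the complex solution; the matrices and the right-hand side used here are placeholders:

```matlab
% Minimal sketch: the complex system (W + i*T)x = b versus its real block form.
n  = 4;
W  = full(gallery('tridiag', n, -1, 3, -1));   % symmetric positive definite
T  = eye(n);                                   % symmetric positive semi-definite
bc = (1 + 1i) * ones(n, 1);                    % complex right-hand side b = f + i*g

f = real(bc);  g = imag(bc);
B      = [W, -T; T, W];                        % real block two-by-two matrix
btilde = [f; g];
z      = B \ btilde;                           % z = [u; v]
x      = z(1:n) + 1i * z(n+1:end);             % recover the complex solution x = u + i*v

norm((W + 1i*T)*x - bc)                        % residual check, should be near round-off
```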
For solving complex symmetric linear systems, many iteration methods, including stationary iteration methods and Krylov subspace iteration methods (often used with preconditioners), have been proposed in the literature. In this paper, we are concerned with block splitting iteration methods with parameter acceleration. In recent years, many authors have developed block splitting iteration methods for solving complex symmetric linear systems. In 2013, for solving the real-valued linear system (3), Bai [15] established a class of rotated block triangular preconditioners and analyzed the eigenproperties of the corresponding preconditioned matrices. Following this, Salkuyeh et al. [16] discussed the generalized successive overrelaxation (GSOR) iteration method in 2014, with convergence analysis. Then, to improve the convergence speed of GSOR, in 2015, Edalatpour et al. [17] proposed an accelerated GSOR (AGSOR) iteration method with two parameters and determined the optimal parameters. In 2018, Li et al. [18] constructed a symmetric block triangular splitting (SBTS) iteration method based on two splittings and gave the convergence conditions as well as the optimal parameters. In 2019, Axelsson and Salkuyeh [19] derived the transformed matrix iteration (TMIT) method and gave the convergence conditions; they also showed that the optimal parameter of the TMIT method lies in the interval (0, 1). In 2021, Siahkolaei et al. [20] gave an upper bound for the spectral radius of the iteration matrix of the TMIT method and then obtained the parameter which minimizes this upper bound.
Recently, Huang [21] studied a new block splitting (NBS) iteration method and a parameterized block splitting (PBS) iteration method to solve the real-valued linear system (3). In this paper, we continue to study block splitting (BS) iteration methods for solving (3). We construct the improved block splitting (IBS) iteration method and give the corresponding convergence analysis. Furthermore, using a parameter acceleration technique for the IBS iteration method, we construct the accelerated improved block splitting (AIBS) iteration method based on the splitting used in the AGSOR iteration method.
The remainder of this paper is organized as follows. In Section 2, we construct the IBS iteration method to solve the real-valued linear system (3) and give the convergence analysis. In Section 3, we study the AIBS iteration method and give the convergence analysis. In Section 4, we present some numerical examples to demonstrate the effectiveness of the IBS and AIBS iteration methods. Finally, some conclusions are drawn in Section 5.

2. IBS Iteration Method and Its Convergence Analysis

In this section, we study the IBS iteration method for solving the real-valued linear system (3). First, we recall the NBS and PBS iteration methods. Multiply both sides of (3) from the left by the block matrix:
$P = \begin{pmatrix} I & \tilde\alpha I \\ 0 & I \end{pmatrix},$
where $\tilde\alpha$ is a positive constant, and denote:
$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} I & I \\ 0 & \tilde\alpha I \end{pmatrix}\begin{pmatrix} d \\ e \end{pmatrix},$
where $d, e \in \mathbb{R}^n$. Then, (3) can be written as:
$\begin{pmatrix} W + \tilde\alpha T & (1+\tilde\alpha^2)W \\ T & \tilde\alpha W + T \end{pmatrix}\begin{pmatrix} d \\ e \end{pmatrix} = \begin{pmatrix} f + \tilde\alpha g \\ g \end{pmatrix}.$
Split the coefficient matrix of (6) as follows:
$\begin{pmatrix} W + \tilde\alpha T & (1+\tilde\alpha^2)W \\ T & \tilde\alpha W + T \end{pmatrix} = \begin{pmatrix} W + \tilde\alpha T & 0 \\ T & \tilde\alpha W + T \end{pmatrix} - \begin{pmatrix} 0 & -(1+\tilde\alpha^2)W \\ 0 & 0 \end{pmatrix} \equiv \tilde M(\tilde\alpha) - \tilde N(\tilde\alpha).$
Then, it yields the following NBS iteration:
$\begin{pmatrix} d^{(k+1)} \\ e^{(k+1)} \end{pmatrix} = \tilde\Gamma(\tilde\alpha)\begin{pmatrix} d^{(k)} \\ e^{(k)} \end{pmatrix} + \tilde M(\tilde\alpha)^{-1}\begin{pmatrix} f + \tilde\alpha g \\ g \end{pmatrix},$
where the iteration matrix is:
$\tilde\Gamma(\tilde\alpha) = \tilde M(\tilde\alpha)^{-1}\tilde N(\tilde\alpha) = \begin{pmatrix} 0 & -(1+\tilde\alpha^2)(W+\tilde\alpha T)^{-1}W \\ 0 & (1+\tilde\alpha^2)(\tilde\alpha W+T)^{-1}T(W+\tilde\alpha T)^{-1}W \end{pmatrix}.$
To derive the PBS method, we multiply both sides of (3) from the left by the matrix P in (4) and denote:
$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} I & I \\ 0 & \tilde\beta I \end{pmatrix}\begin{pmatrix} \bar d \\ \bar e \end{pmatrix},$
where $\tilde\beta$ is a positive constant and $\bar d, \bar e \in \mathbb{R}^n$. Then, (3) can be written as:
$\begin{pmatrix} \tilde\alpha T + W & (\tilde\alpha\tilde\beta+1)W + (\tilde\alpha-\tilde\beta)T \\ T & \tilde\beta W + T \end{pmatrix}\begin{pmatrix} \bar d \\ \bar e \end{pmatrix} = \begin{pmatrix} f + \tilde\alpha g \\ g \end{pmatrix}.$
The PBS method splits the coefficient matrix of (11) as follows:
$\begin{pmatrix} \tilde\alpha T + W & (\tilde\alpha\tilde\beta+1)W + (\tilde\alpha-\tilde\beta)T \\ T & \tilde\beta W + T \end{pmatrix} = \begin{pmatrix} \tilde\alpha T + W & 0 \\ T & \tilde\beta W + T \end{pmatrix} - \begin{pmatrix} 0 & -[(\tilde\alpha\tilde\beta+1)W + (\tilde\alpha-\tilde\beta)T] \\ 0 & 0 \end{pmatrix} \equiv \tilde M(\tilde\alpha,\tilde\beta) - \tilde N(\tilde\alpha,\tilde\beta),$
and the iteration matrix is:
$\tilde\Gamma(\tilde\alpha,\tilde\beta) = \tilde M(\tilde\alpha,\tilde\beta)^{-1}\tilde N(\tilde\alpha,\tilde\beta) = \begin{pmatrix} 0 & -(\tilde\alpha T+W)^{-1}[(\tilde\alpha\tilde\beta+1)W + (\tilde\alpha-\tilde\beta)T] \\ 0 & (\tilde\beta W+T)^{-1}T(\tilde\alpha T+W)^{-1}[(\tilde\alpha\tilde\beta+1)W + (\tilde\alpha-\tilde\beta)T] \end{pmatrix}.$
Remark 1.
For the NBS method, the optimal parameter $\tilde\alpha^*$, which minimizes the upper bound of the spectral radius $\rho(\tilde\Gamma(\tilde\alpha))$, is $\tilde\alpha^* = 1$; see Theorem 2.1 in [21]. Furthermore, for the PBS method, although it may not be optimal, the parameter $\tilde\alpha$ is still taken as $\tilde\alpha^* = 1$; see Theorem 3.2 in [21].
Following the idea of the NBS and PBS methods, we take the parameter $\tilde\alpha = 1$ directly; then, (6) can be written as:
$\begin{pmatrix} W+T & 2W \\ T & W+T \end{pmatrix}\begin{pmatrix} d \\ e \end{pmatrix} = \begin{pmatrix} f+g \\ g \end{pmatrix}.$
Split the coefficient matrix of (14) as follows:
$\begin{pmatrix} W+T & 2W \\ T & W+T \end{pmatrix} = \begin{pmatrix} W+T & 0 \\ T & \alpha(W+T) \end{pmatrix} - \begin{pmatrix} 0 & -2W \\ 0 & (\alpha-1)(W+T) \end{pmatrix} \equiv M(\alpha) - N(\alpha),$
where $\alpha$ is a positive constant. Then, it yields the following IBS iteration:
$\begin{pmatrix} d^{(k+1)} \\ e^{(k+1)} \end{pmatrix} = \Gamma(\alpha)\begin{pmatrix} d^{(k)} \\ e^{(k)} \end{pmatrix} + M(\alpha)^{-1}\begin{pmatrix} f+g \\ g \end{pmatrix},$
where the iteration matrix is:
$\Gamma(\alpha) = M(\alpha)^{-1}N(\alpha) = \begin{pmatrix} 0 & -2(W+T)^{-1}W \\ 0 & \dfrac{\alpha-1}{\alpha}I + \dfrac{2}{\alpha}(W+T)^{-1}T(W+T)^{-1}W \end{pmatrix}.$
IBS iteration algorithm: Given initial vectors $d^{(0)}, e^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \ldots$, until the sequence $\{((d^{(k)})^T, (e^{(k)})^T)^T\}_{k=0}^{\infty}$ converges, compute:
$\begin{cases} (W+T)d^{(k+1)} = -2We^{(k)} + f + g, \\ \alpha(W+T)e^{(k+1)} = -Td^{(k+1)} + (\alpha-1)(W+T)e^{(k)} + g, \\ u^{(k+1)} = d^{(k+1)} + e^{(k+1)}, \\ v^{(k+1)} = e^{(k+1)}. \end{cases}$
From the iteration scheme (18), it can be seen that two linear subsystems with the symmetric positive definite coefficient matrix T + W need to be solved at each iteration step; they can be solved by the Cholesky factorization [22].
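As an illustration of how the two subsystems with coefficient matrix W + T can be handled, the following MATLAB sketch (ours, not from the paper) gives one possible realization of the IBS iteration: the Cholesky factor of W + T is computed once outside the loop, and the function name ibs and the stopping test are our own choices:

```matlab
function [x, k] = ibs(W, T, b, alpha, tol, maxit)
% One possible realization of the IBS iteration (a sketch).
% The Cholesky factor of W + T is computed once and reused in both solves.
n   = size(W, 1);
f   = real(b);  g = imag(b);
WpT = W + T;
R   = chol(WpT);                       % W + T = R'*R, R upper triangular
d   = zeros(n, 1);  e = zeros(n, 1);
for k = 1:maxit
    d    = R \ (R' \ (-2*(W*e) + f + g));
    rhs2 = -(T*d) + (alpha - 1)*(WpT*e) + g;
    e    = (R \ (R' \ rhs2)) / alpha;
    x    = (d + e) + 1i*e;             % u = d + e, v = e
    if norm(b - (W + 1i*T)*x) <= tol*norm(b)
        return
    end
end
end
```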
Next, we give the convergence analysis for the IBS method, and discuss the optimal parameter α * that minimizes the spectral radius ρ ( Γ ( α ) ) .
Theorem 2.
Let W be symmetric positive definite, T be symmetric positive semi-definite, and $\alpha > 0$. Then:
(1) 
The eigenvalues of the iteration matrix $\Gamma(\alpha)$ are as follows:
$\lambda = 0, \qquad \lambda = 1 - \frac{1+u_k^2}{\alpha(1+u_k)^2}, \quad k = 1, 2, \ldots, n,$
where $0 \le u_1 \le u_2 \le \cdots \le u_n$ are the eigenvalues of the matrix $S = W^{-1/2} T W^{-1/2}$.
(2) 
$\rho(\Gamma(\alpha)) < 1$ if and only if $\alpha$ satisfies the following conditions:
(a) 
If $u_n < 1$:
$\alpha > \frac{1+u_1^2}{2(1+u_1)^2};$
(b) 
If $u_1 > 1$:
$\alpha > \frac{1+u_n^2}{2(1+u_n)^2};$
(c) 
If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, $s \in \{1, 2, \ldots, n-1\}$, then:
$\alpha > \max\left\{\frac{1+u_1^2}{2(1+u_1)^2}, \frac{1+u_n^2}{2(1+u_n)^2}\right\}.$
(3) 
$\rho(\Gamma(\alpha))$ has the following expression:
(a) If $u_n < 1$ or $u_1 > 1$:
$\rho(\Gamma(\alpha)) = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|\right\};$
(b) If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, $s \in \{1, 2, \ldots, n-1\}$, then:
$\rho(\Gamma(\alpha)) = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|, \left|1 - \frac{1+u_j^2}{\alpha(1+u_j)^2}\right|\right\},$
where
$j = \begin{cases} s, & u_s \ge \dfrac{1}{u_{s+1}}, \\ s+1, & u_s < \dfrac{1}{u_{s+1}}. \end{cases}$
Proof. 
Let:
$P_1 = \begin{pmatrix} W^{-1/2} & 0 \\ 0 & W^{-1/2} \end{pmatrix}.$
Then, by (17), the following holds:
$\Gamma(\alpha) = \begin{pmatrix} 0 & -2(W+T)^{-1}W \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(W+T)^{-1}T(W+T)^{-1}W \end{pmatrix} = \begin{pmatrix} 0 & -2W^{-1/2}(I+W^{-1/2}TW^{-1/2})^{-1}W^{1/2} \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}W^{-1/2}(I+W^{-1/2}TW^{-1/2})^{-1}W^{-1/2}TW^{-1/2}(I+W^{-1/2}TW^{-1/2})^{-1}W^{1/2} \end{pmatrix} = P_1 \begin{pmatrix} 0 & -2(I+W^{-1/2}TW^{-1/2})^{-1} \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(I+W^{-1/2}TW^{-1/2})^{-1}W^{-1/2}TW^{-1/2}(I+W^{-1/2}TW^{-1/2})^{-1} \end{pmatrix} P_1^{-1}.$
That is, $\Gamma(\alpha)$ is similar to the following matrix $\tilde\Gamma(\alpha)$:
$\tilde\Gamma(\alpha) = \begin{pmatrix} 0 & -2(I+S)^{-1} \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(I+S)^{-1}S(I+S)^{-1} \end{pmatrix},$
in which $S = W^{-1/2}TW^{-1/2}$. Then, $\Gamma(\alpha)$ and $\tilde\Gamma(\alpha)$ have the same eigenvalues. Let $S = U^T \Sigma U$ be the eigendecomposition of S, where $U \in \mathbb{R}^{n\times n}$ is an orthogonal matrix and $\Sigma = \mathrm{diag}(u_1, u_2, \ldots, u_n)$. Denote:
$P_2 = \begin{pmatrix} U^T & 0 \\ 0 & U^T \end{pmatrix},$
and $\hat\Gamma(\alpha) = P_2^{-1}\tilde\Gamma(\alpha)P_2$. Then, it holds that:
$\hat\Gamma(\alpha) = \begin{pmatrix} U & 0 \\ 0 & U \end{pmatrix}\begin{pmatrix} 0 & -2(I+S)^{-1} \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(I+S)^{-1}S(I+S)^{-1} \end{pmatrix}\begin{pmatrix} U^T & 0 \\ 0 & U^T \end{pmatrix} = \begin{pmatrix} 0 & -2(I+\Sigma)^{-1} \\ 0 & \frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(I+\Sigma)^{-1}\Sigma(I+\Sigma)^{-1} \end{pmatrix}.$
Notice that $\Gamma(\alpha)$, $\tilde\Gamma(\alpha)$ and $\hat\Gamma(\alpha)$ have the same eigenvalues, and $\frac{\alpha-1}{\alpha}I + \frac{2}{\alpha}(I+\Sigma)^{-1}\Sigma(I+\Sigma)^{-1}$ is a diagonal matrix with main diagonal elements $\frac{\alpha-1}{\alpha} + \frac{2u_k}{\alpha(1+u_k)^2} = 1 - \frac{1+u_k^2}{\alpha(1+u_k)^2}$, $k = 1, 2, \ldots, n$. Then, it is easy to see that the eigenvalues of $\Gamma(\alpha)$ are:
$0, \qquad 1 - \frac{1+u_k^2}{\alpha(1+u_k)^2}, \quad k = 1, 2, \ldots, n.$
Hence, the conclusion (1) is proved.
Next, we prove conclusion (2). By (28), it holds that:
$\rho(\Gamma(\alpha)) < 1 \iff \left|1 - \frac{1+u_k^2}{\alpha(1+u_k)^2}\right| < 1 \iff -1 < 1 - \frac{1+u_k^2}{\alpha(1+u_k)^2} < 1.$
Notice that for any $\alpha > 0$, $1 - \frac{1+u_k^2}{\alpha(1+u_k)^2} < 1$ holds obviously. Now, we consider the solution to $-1 < 1 - \frac{1+u_k^2}{\alpha(1+u_k)^2}$, i.e., $\alpha > \frac{1+u_k^2}{2(1+u_k)^2}$. Define $f(u) = \frac{1+u^2}{2(1+u)^2}$, $u \in [u_1, u_n]$. Then, the derivative of $f(u)$ is $\frac{df(u)}{du} = \frac{u-1}{(1+u)^3}$. We discuss the following three cases:
(a)
If $u_n < 1$, then $\frac{df(u)}{du} < 0$, i.e., $f(u)$ is monotonically decreasing with respect to u, which means $\max_{1\le k\le n} f(u_k) = \frac{1+u_1^2}{2(1+u_1)^2}$, so $\alpha > \frac{1+u_1^2}{2(1+u_1)^2}$;
(b)
If $u_1 > 1$, then $\frac{df(u)}{du} > 0$, i.e., $f(u)$ is monotonically increasing with respect to u, which means $\max_{1\le k\le n} f(u_k) = \frac{1+u_n^2}{2(1+u_n)^2}$, so $\alpha > \frac{1+u_n^2}{2(1+u_n)^2}$;
(c)
If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, then $f(u)$ is monotonically decreasing on $[u_1, u_s]$ and monotonically increasing on $[u_{s+1}, u_n]$, respectively, so it holds that $\max_{1\le k\le n} f(u_k) = \max\left\{\frac{1+u_1^2}{2(1+u_1)^2}, \frac{1+u_n^2}{2(1+u_n)^2}\right\}$, which yields $\alpha > \max\left\{\frac{1+u_1^2}{2(1+u_1)^2}, \frac{1+u_n^2}{2(1+u_n)^2}\right\}$.
Hence, conclusion (2) is proved.
Next, we prove conclusion (3). Define $s(u) = 1 - \frac{1+u^2}{\alpha(1+u)^2}$, $u \in [u_1, u_n]$; then, we have $\frac{ds(u)}{du} = -\frac{2(u-1)}{\alpha(1+u)^3}$. We discuss the following two cases:
(a)
If $u_n < 1$, then $\frac{ds(u)}{du} > 0$, $s(u)$ is monotonically increasing, and it is easy to see that:
$\rho(\Gamma(\alpha)) = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|\right\}.$
Similarly, when $u_1 > 1$, $s(u)$ is monotonically decreasing, and $\rho(\Gamma(\alpha))$ has the same form as (23).
(b)
If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, then $s(u)$ is monotonically increasing on $[u_1, u_s]$ and monotonically decreasing on $[u_{s+1}, u_n]$, respectively. In this case, $s(u)$ has the maximum $s(u)_{\max} = \max\left\{1 - \frac{1+u_s^2}{\alpha(1+u_s)^2},\; 1 - \frac{1+u_{s+1}^2}{\alpha(1+u_{s+1})^2}\right\}$. Then, we consider the following two cases.
Case I: When $\alpha \le \min\left\{\frac{1+u_s^2}{(1+u_s)^2}, \frac{1+u_{s+1}^2}{(1+u_{s+1})^2}\right\}$, i.e., $s(u)_{\max} \le 0$, we have:
$\rho(\Gamma(\alpha)) = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|\right\}.$
Case II: When $\alpha > \min\left\{\frac{1+u_s^2}{(1+u_s)^2}, \frac{1+u_{s+1}^2}{(1+u_{s+1})^2}\right\}$, i.e., $s(u)_{\max} > 0$, we consider the following two subcases.
If $u_s \ge \frac{1}{u_{s+1}}$, then $s(u_s) \ge s(u_{s+1})$. When $s(u_{s+1}) \ge 0$, then $s(u_{s+1}) \le s(u_s)$; when $s(u_{s+1}) < 0$, then $|s(u_{s+1})| \le |s(u_n)|$. Thus, we have:
$\rho(\Gamma(\alpha)) = \max\left\{|s(u_1)|, |s(u_n)|, s(u_s)\right\} = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|, \left|1 - \frac{1+u_s^2}{\alpha(1+u_s)^2}\right|\right\}.$
If $u_s < \frac{1}{u_{s+1}}$, then $s(u_{s+1}) > s(u_s)$. When $s(u_s) \ge 0$, then $s(u_s) < s(u_{s+1})$; when $s(u_s) < 0$, then $|s(u_s)| \le |s(u_1)|$. Thus, we have:
$\rho(\Gamma(\alpha)) = \max\left\{|s(u_1)|, |s(u_n)|, s(u_{s+1})\right\} = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|, \left|1 - \frac{1+u_{s+1}^2}{\alpha(1+u_{s+1})^2}\right|\right\}.$
Then, it holds by the above discussion that:
$\rho(\Gamma(\alpha)) = \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|, \left|1 - \frac{1+u_j^2}{\alpha(1+u_j)^2}\right|\right\},$
where
$j = \begin{cases} s, & u_s \ge \dfrac{1}{u_{s+1}}, \\ s+1, & u_s < \dfrac{1}{u_{s+1}}, \end{cases}$
which finishes the proof. □
Theorem 2 gives the convergence conditions for the IBS method. Now, we investigate the optimal parameter α * that minimizes ρ ( Γ ( α ) ) .
Theorem 3.
Assume that the conditions of Theorem 2 are satisfied and $0 \le u_1 \le u_2 \le \cdots \le u_n$ are the eigenvalues of $S = W^{-1/2}TW^{-1/2}$. Then, the optimal parameter $\alpha^*$ which minimizes $\rho(\Gamma(\alpha))$ and the corresponding optimal convergence factor $\min_\alpha \rho(\Gamma(\alpha))$ are given by:
(1) 
If $u_n < 1$ or $u_1 > 1$, then:
$\alpha^* = \frac{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}{2(1+u_1)^2(1+u_n)^2},$
$\min_\alpha \rho(\Gamma(\alpha)) = \rho(\Gamma(\alpha^*)) = \begin{cases} \dfrac{(1+u_1^2)(1+u_n)^2 - (1+u_n^2)(1+u_1)^2}{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}, & u_n < 1, \\[2mm] \dfrac{(1+u_n^2)(1+u_1)^2 - (1+u_1^2)(1+u_n)^2}{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}, & u_1 > 1. \end{cases}$
(2) 
If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, $s \in \{1, 2, \ldots, n-1\}$, then:
$\alpha^* = \begin{cases} \dfrac{(1+u_1^2)(1+u_j)^2 + (1+u_j^2)(1+u_1)^2}{2(1+u_1)^2(1+u_j)^2}, & u_1 \le \dfrac{1}{u_n}, \\[2mm] \dfrac{(1+u_j^2)(1+u_n)^2 + (1+u_n^2)(1+u_j)^2}{2(1+u_j)^2(1+u_n)^2}, & u_1 > \dfrac{1}{u_n}, \end{cases}$
$\min_\alpha \rho(\Gamma(\alpha)) = \rho(\Gamma(\alpha^*)) = \begin{cases} \dfrac{(1+u_1^2)(1+u_j)^2 - (1+u_j^2)(1+u_1)^2}{(1+u_1^2)(1+u_j)^2 + (1+u_j^2)(1+u_1)^2}, & u_1 \le \dfrac{1}{u_n}, \\[2mm] \dfrac{(1+u_n^2)(1+u_j)^2 - (1+u_j^2)(1+u_n)^2}{(1+u_j^2)(1+u_n)^2 + (1+u_n^2)(1+u_j)^2}, & u_1 > \dfrac{1}{u_n}, \end{cases}$
where
$j = \begin{cases} s, & u_s \ge \dfrac{1}{u_{s+1}}, \\ s+1, & u_s < \dfrac{1}{u_{s+1}}. \end{cases}$
Proof. 
(1) If $u_n < 1$, then:
$\alpha^* = \arg\min_\alpha \rho(\Gamma(\alpha)) = \arg\min_\alpha \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|\right\}.$
Define $s_u(\alpha) = 1 - \frac{1+u^2}{\alpha(1+u)^2}$, $s_{u_1}(\alpha) = 1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}$, and $s_{u_n}(\alpha) = 1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}$; then, $\alpha^* = \arg\min_\alpha \max\{|s_{u_1}(\alpha)|, |s_{u_n}(\alpha)|\}$. Notice that $\frac{ds_u(\alpha)}{d\alpha} = \frac{1+u^2}{\alpha^2(1+u)^2} > 0$; then, $s_u(\alpha)$ is monotonically increasing with respect to $\alpha$, and $-s_u(\alpha)$ is monotonically decreasing with respect to $\alpha$. As shown in Figure 1, where the arrow points to the position of the optimal parameter, it is easy to see that the optimal $\alpha^*$ satisfies $1 - \frac{1+u_n^2}{\alpha^*(1+u_n)^2} = \frac{1+u_1^2}{\alpha^*(1+u_1)^2} - 1$, and after some algebra we have $\alpha^* = \frac{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}{2(1+u_1)^2(1+u_n)^2}$.
Substituting $\alpha^*$ into $\rho(\Gamma(\alpha)) = 1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}$, we have:
$\rho(\Gamma(\alpha^*)) = \frac{(1+u_1^2)(1+u_n)^2 - (1+u_n^2)(1+u_1)^2}{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}.$
Similarly, if $u_1 > 1$, $\alpha^*$ has the same expression as (29). Substituting $\alpha^*$ into $\rho(\Gamma(\alpha^*)) = 1 - \frac{1+u_1^2}{\alpha^*(1+u_1)^2}$, we have:
$\rho(\Gamma(\alpha^*)) = \frac{(1+u_n^2)(1+u_1)^2 - (1+u_1^2)(1+u_n)^2}{(1+u_1^2)(1+u_n)^2 + (1+u_n^2)(1+u_1)^2}.$
(2) If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, let $s_{u_j}(\alpha) = 1 - \frac{1+u_j^2}{\alpha(1+u_j)^2}$; then:
$\alpha^* = \arg\min_\alpha \rho(\Gamma(\alpha)) = \arg\min_\alpha \max\left\{\left|1 - \frac{1+u_1^2}{\alpha(1+u_1)^2}\right|, \left|1 - \frac{1+u_n^2}{\alpha(1+u_n)^2}\right|, \left|1 - \frac{1+u_j^2}{\alpha(1+u_j)^2}\right|\right\},$
where
$j = \begin{cases} s, & u_s \ge \dfrac{1}{u_{s+1}}, \\ s+1, & u_s < \dfrac{1}{u_{s+1}}. \end{cases}$
Notice that $s_u(\alpha)$ is monotonically increasing with respect to $\alpha$ and $-s_u(\alpha)$ is monotonically decreasing with respect to $\alpha$. We now discuss the following two cases:
When $u_1 \le \frac{1}{u_n}$, then $\frac{1+u_n^2}{\alpha(1+u_n)^2} - 1 \le \frac{1+u_1^2}{\alpha(1+u_1)^2} - 1$, i.e., $-s_{u_n}(\alpha) \le -s_{u_1}(\alpha)$:
As shown in Figure 2, the image of $-s_{u_1}(\alpha)$ lies above that of $-s_{u_n}(\alpha)$, and the arrow points to the position of the optimal parameter. It is easy to see that the optimal $\alpha^*$ satisfies $1 - \frac{1+u_j^2}{\alpha^*(1+u_j)^2} = \frac{1+u_1^2}{\alpha^*(1+u_1)^2} - 1$; after some algebra, we have $\alpha^* = \frac{(1+u_1^2)(1+u_j)^2 + (1+u_j^2)(1+u_1)^2}{2(1+u_1)^2(1+u_j)^2}$. Substituting $\alpha^*$ into $\rho(\Gamma(\alpha^*)) = 1 - \frac{1+u_j^2}{\alpha^*(1+u_j)^2}$, we have:
$\rho(\Gamma(\alpha^*)) = \frac{(1+u_1^2)(1+u_j)^2 - (1+u_j^2)(1+u_1)^2}{(1+u_1^2)(1+u_j)^2 + (1+u_j^2)(1+u_1)^2}.$
When $u_1 > \frac{1}{u_n}$, then $\frac{1+u_1^2}{\alpha(1+u_1)^2} - 1 < \frac{1+u_n^2}{\alpha(1+u_n)^2} - 1$, i.e., $-s_{u_1}(\alpha) < -s_{u_n}(\alpha)$.
As shown in Figure 3, the image of $-s_{u_n}(\alpha)$ lies above that of $-s_{u_1}(\alpha)$, and the arrow points to the position of the optimal parameter. The optimal $\alpha^*$ satisfies $1 - \frac{1+u_j^2}{\alpha^*(1+u_j)^2} = \frac{1+u_n^2}{\alpha^*(1+u_n)^2} - 1$; after some algebra, we have $\alpha^* = \frac{(1+u_j^2)(1+u_n)^2 + (1+u_n^2)(1+u_j)^2}{2(1+u_j)^2(1+u_n)^2}$. Substituting $\alpha^*$ into $\rho(\Gamma(\alpha^*)) = 1 - \frac{1+u_j^2}{\alpha^*(1+u_j)^2}$, we have:
$\rho(\Gamma(\alpha^*)) = \frac{(1+u_n^2)(1+u_j)^2 - (1+u_j^2)(1+u_n)^2}{(1+u_j^2)(1+u_n)^2 + (1+u_n^2)(1+u_j)^2},$
which finishes the proof. □
Remark 4.
From Theorem 3, we see that if $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, $s \in \{1, 2, \ldots, n-1\}$, then:
$\alpha^* = \begin{cases} \dfrac{(1+u_1^2)(1+u_j)^2 + (1+u_j^2)(1+u_1)^2}{2(1+u_1)^2(1+u_j)^2}, & u_1 \le \dfrac{1}{u_n}, \\[2mm] \dfrac{(1+u_j^2)(1+u_n)^2 + (1+u_n^2)(1+u_j)^2}{2(1+u_j)^2(1+u_n)^2}, & u_1 > \dfrac{1}{u_n}. \end{cases}$
However, it is generally very expensive to compute $u_s$ and $u_{s+1}$ satisfying $1 \in [u_s, u_{s+1}]$. In our numerical experiments, when $u_1 < 1 < u_n$, we approximate $\alpha^*$ by taking $u_j = 1$, which yields the following practical optimal parameter:
$\alpha^* = \begin{cases} \dfrac{2(1+u_1^2) + (1+u_1)^2}{4(1+u_1)^2}, & u_1 \le \dfrac{1}{u_n}, \\[2mm] \dfrac{2(1+u_n^2) + (1+u_n)^2}{4(1+u_n)^2}, & u_1 > \dfrac{1}{u_n}. \end{cases}$
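In practice, the quantities $u_1$ and $u_n$ needed above can be obtained from the generalized eigenvalue problem $Tx = uWx$, which has the same eigenvalues as $S = W^{-1/2}TW^{-1/2}$. The following MATLAB sketch (ours, with variable names chosen only for illustration) evaluates the practical parameter of Remark 4 together with the exact formula of Theorem 3:

```matlab
% Sketch: practical IBS parameter alpha* from Theorem 3 and Remark 4.
u  = sort(eig(full(T), full(W)));      % eigenvalues of S; use eigs for large problems
u1 = u(1);  un = u(end);

if un < 1 || u1 > 1                    % Theorem 3, case (1)
    alpha_star = ((1+u1^2)*(1+un)^2 + (1+un^2)*(1+u1)^2) / (2*(1+u1)^2*(1+un)^2);
elseif u1 <= 1/un                      % Remark 4: u_1 < 1 < u_n, approximate u_j by 1
    alpha_star = (2*(1+u1^2) + (1+u1)^2) / (4*(1+u1)^2);
else
    alpha_star = (2*(1+un^2) + (1+un)^2) / (4*(1+un)^2);
end
```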

3. AIBS Iteration Method and Its Convergence Analysis

In this section, inspired by the idea of the AGSOR iteration method, we construct the AIBS iteration method, which generalizes the IBS method and has a faster convergence speed.
First, we recall the AGSOR iteration method. Multiplying both sides of ( 3 ) from the left by the block matrix:
$\Omega = \begin{pmatrix} \alpha I & 0 \\ 0 & \beta I \end{pmatrix},$
where $\alpha$ and $\beta$ are positive constants and I is the $n \times n$ identity matrix, then (3) can be written as:
$\begin{pmatrix} \alpha W & -\alpha T \\ \beta T & \beta W \end{pmatrix}\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} \alpha f \\ \beta g \end{pmatrix}.$
Split the coefficient matrix of (35) as follows:
$\begin{pmatrix} \alpha W & -\alpha T \\ \beta T & \beta W \end{pmatrix} = \begin{pmatrix} W & 0 \\ \beta T & W \end{pmatrix} - \begin{pmatrix} (1-\alpha)W & \alpha T \\ 0 & (1-\beta)W \end{pmatrix} \equiv \tilde M(\alpha,\beta) - \tilde N(\alpha,\beta).$
Then, it yields the AGSOR iteration, whose iteration matrix is:
$\tilde\Gamma(\alpha,\beta) = \tilde M(\alpha,\beta)^{-1}\tilde N(\alpha,\beta) = \begin{pmatrix} (1-\alpha)I & \alpha W^{-1}T \\ (\alpha-1)\beta W^{-1}T & (1-\beta)I - \alpha\beta W^{-1}TW^{-1}T \end{pmatrix}.$
Now, we construct the AIBS iteration method, which is just a preconditioned AGSOR (PAGSOR) iteration method. Recalling (14) in Section 2, the linear system (3) can be written in the following form:
$\begin{pmatrix} W+T & 2W \\ T & W+T \end{pmatrix}\begin{pmatrix} d \\ e \end{pmatrix} = \begin{pmatrix} f+g \\ g \end{pmatrix}.$
Multiplying both sides of (36) from the left by the matrix $\Omega$, we have:
$\begin{pmatrix} \alpha(W+T) & 2\alpha W \\ \beta T & \beta(W+T) \end{pmatrix}\begin{pmatrix} d \\ e \end{pmatrix} = \begin{pmatrix} \alpha(f+g) \\ \beta g \end{pmatrix}.$
Similar to AGSOR, split the coefficient matrix of (37) as follows:
$\begin{pmatrix} \alpha(W+T) & 2\alpha W \\ \beta T & \beta(W+T) \end{pmatrix} = \begin{pmatrix} W+T & 0 \\ \beta T & W+T \end{pmatrix} - \begin{pmatrix} (1-\alpha)(W+T) & -2\alpha W \\ 0 & (1-\beta)(W+T) \end{pmatrix} \equiv M(\alpha,\beta) - N(\alpha,\beta).$
Then, it yields the following AIBS (PAGSOR) iteration:
$\begin{pmatrix} d^{(k+1)} \\ e^{(k+1)} \end{pmatrix} = \Gamma(\alpha,\beta)\begin{pmatrix} d^{(k)} \\ e^{(k)} \end{pmatrix} + M(\alpha,\beta)^{-1}\begin{pmatrix} \alpha(f+g) \\ \beta g \end{pmatrix},$
where the iteration matrix is:
$\Gamma(\alpha,\beta) = M(\alpha,\beta)^{-1}N(\alpha,\beta) = \begin{pmatrix} (1-\alpha)I & -2\alpha(W+T)^{-1}W \\ (\alpha-1)\beta(W+T)^{-1}T & (1-\beta)I + 2\alpha\beta(W+T)^{-1}T(W+T)^{-1}W \end{pmatrix}.$
AIBS iteration algorithm: Given initial vectors $d^{(0)}, e^{(0)} \in \mathbb{R}^n$, for $k = 0, 1, 2, \ldots$, until the sequence $\{((d^{(k)})^T, (e^{(k)})^T)^T\}_{k=0}^{\infty}$ converges, compute:
$\begin{cases} (W+T)d^{(k+1)} = (1-\alpha)(W+T)d^{(k)} - 2\alpha W e^{(k)} + \alpha f + \alpha g, \\ (W+T)e^{(k+1)} = -\beta T d^{(k+1)} + (1-\beta)(W+T)e^{(k)} + \beta g, \\ u^{(k+1)} = d^{(k+1)} + e^{(k+1)}, \\ v^{(k+1)} = e^{(k+1)}. \end{cases}$
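For completeness, the AIBS scheme above can be realized in MATLAB in the same way as the IBS one, again reusing a single Cholesky factor of W + T; this is a sketch of ours (the function name aibs and the stopping test are not prescribed by the paper):

```matlab
function [x, k] = aibs(W, T, b, alpha, beta, tol, maxit)
% One possible realization of the AIBS iteration (a sketch).
n   = size(W, 1);
f   = real(b);  g = imag(b);
WpT = W + T;
R   = chol(WpT);                        % W + T = R'*R
d   = zeros(n, 1);  e = zeros(n, 1);
for k = 1:maxit
    d = R \ (R' \ ((1-alpha)*(WpT*d) - 2*alpha*(W*e) + alpha*(f + g)));
    e = R \ (R' \ (-beta*(T*d) + (1-beta)*(WpT*e) + beta*g));
    x = (d + e) + 1i*e;                 % u = d + e, v = e
    if norm(b - (W + 1i*T)*x) <= tol*norm(b)
        return
    end
end
end
```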
Remark 5.
The IBS iteration method is a special case of the AIBS one with $\alpha = 1$: the AIBS iteration matrix $\Gamma(1, \beta)$ in (38) coincides with the IBS iteration matrix $\Gamma(\alpha)$ in (17) when $\beta = 1/\alpha$.
Next, we discuss the convergence properties of the AIBS iteration method.
Lemma 1
([23]). Both roots of the real quadratic equation $x^2 - rx + t = 0$ are less than one in modulus if and only if $|t| < 1$ and $|r| < 1 + t$.
Lemma 2.
Let W be symmetric positive definite and T be symmetric positive semi-definite, and let $0 \le u_1 \le u_2 \le \cdots \le u_n$ be the eigenvalues of $S = W^{-1/2}TW^{-1/2}$. Let $\xi_k = \frac{2u_k}{(1+u_k)^2}$, $k = 1, 2, \ldots, n$, and $s \in \{1, 2, \ldots, n-1\}$. Then, the maximum $\xi_{\max}$ and the minimum $\xi_{\min}$ of the $\xi_k$ satisfy the following equations:
$\xi_{\max} = \begin{cases} \dfrac{2u_n}{(1+u_n)^2}, & u_n < 1, \\[2mm] \dfrac{2u_1}{(1+u_1)^2}, & u_1 > 1, \\[2mm] \max\left\{\dfrac{2u_s}{(1+u_s)^2}, \dfrac{2u_{s+1}}{(1+u_{s+1})^2}\right\}, & 0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n, \end{cases}$
$\xi_{\min} = \begin{cases} \dfrac{2u_1}{(1+u_1)^2}, & u_n < 1, \\[2mm] \dfrac{2u_n}{(1+u_n)^2}, & u_1 > 1, \\[2mm] \min\left\{\dfrac{2u_1}{(1+u_1)^2}, \dfrac{2u_n}{(1+u_n)^2}\right\}, & 0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n. \end{cases}$
Proof. 
Define $g(u) = \frac{2u}{(1+u)^2}$, $u \in [u_1, u_n]$; then, we have $\frac{dg(u)}{du} = \frac{2(1-u)}{(1+u)^3}$. We now discuss the following three cases:
(1)
If $u_n < 1$, then $\frac{dg(u)}{du} > 0$, i.e., $g(u)$ is monotonically increasing with respect to u:
$\xi_{\max} = \max_{1\le k\le n} g(u_k) = \frac{2u_n}{(1+u_n)^2},$
$\xi_{\min} = \min_{1\le k\le n} g(u_k) = \frac{2u_1}{(1+u_1)^2}.$
(2)
If $u_1 > 1$, then $\frac{dg(u)}{du} < 0$, i.e., $g(u)$ is monotonically decreasing with respect to u:
$\xi_{\max} = \max_{1\le k\le n} g(u_k) = \frac{2u_1}{(1+u_1)^2},$
$\xi_{\min} = \min_{1\le k\le n} g(u_k) = \frac{2u_n}{(1+u_n)^2}.$
(3)
If $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, then $g(u)$ is monotonically increasing and monotonically decreasing on $[u_1, u_s]$ and $[u_{s+1}, u_n]$, respectively:
$\xi_{\max} = \max_{1\le k\le n} g(u_k) = \max\left\{\frac{2u_s}{(1+u_s)^2}, \frac{2u_{s+1}}{(1+u_{s+1})^2}\right\},$
$\xi_{\min} = \min_{1\le k\le n} g(u_k) = \min\left\{\frac{2u_1}{(1+u_1)^2}, \frac{2u_n}{(1+u_n)^2}\right\}.$
Thus, $\xi_{\max}$ and $\xi_{\min}$ satisfy (39) and (40), which finishes the proof. □
Remark 6.
In Lemma 2, when $0 \le u_1 \le \cdots \le u_s \le 1 \le u_{s+1} \le \cdots \le u_n$, $s \in \{1, 2, \ldots, n-1\}$, we have $\xi_{\max} = \max\left\{\frac{2u_s}{(1+u_s)^2}, \frac{2u_{s+1}}{(1+u_{s+1})^2}\right\}$. Since it is difficult to compute $u_s$ and $u_{s+1}$ satisfying $1 \in [u_s, u_{s+1}]$, in our numerical experiments, when $u_1 < 1 < u_n$, we take $\xi_{\max} = \frac{1}{2}$, which is equivalent to setting $u_s = 1$ or $u_{s+1} = 1$.
Theorem 7.
Assume that the conditions of Lemma 2 are satisfied. Then, the AIBS iteration method is convergent if the parameters α and β satisfy:
$0 < c < b < \frac{c(1+\xi_{\min})}{2} + 2,$
where $b = \alpha + \beta$ and $c = \alpha\beta$.
Proof. 
Let:
$P_1 = \begin{pmatrix} W^{-1/2} & 0 \\ 0 & W^{-1/2} \end{pmatrix}.$
Then, by Equation (38), it holds that:
$\Gamma(\alpha,\beta) = \begin{pmatrix} (1-\alpha)I & -2\alpha(W+T)^{-1}W \\ (\alpha-1)\beta(W+T)^{-1}T & (1-\beta)I + 2\alpha\beta(W+T)^{-1}T(W+T)^{-1}W \end{pmatrix} = P_1 \begin{pmatrix} (1-\alpha)I & -2\alpha(I+W^{-1/2}TW^{-1/2})^{-1} \\ \Gamma_1 & \Gamma_2 \end{pmatrix} P_1^{-1},$
where
$\Gamma_1 = (\alpha-1)\beta(I+W^{-1/2}TW^{-1/2})^{-1}W^{-1/2}TW^{-1/2},$
$\Gamma_2 = (1-\beta)I + 2\alpha\beta(I+W^{-1/2}TW^{-1/2})^{-1}W^{-1/2}TW^{-1/2}(I+W^{-1/2}TW^{-1/2})^{-1}.$
That is, $\Gamma(\alpha,\beta)$ is similar to the following matrix $\tilde\Gamma(\alpha,\beta)$:
$\tilde\Gamma(\alpha,\beta) = \begin{pmatrix} (1-\alpha)I & -2\alpha(I+S)^{-1} \\ (\alpha-1)\beta(I+S)^{-1}S & (1-\beta)I + 2\alpha\beta(I+S)^{-1}S(I+S)^{-1} \end{pmatrix},$
in which $S = W^{-1/2}TW^{-1/2}$. Then, $\Gamma(\alpha,\beta)$ and $\tilde\Gamma(\alpha,\beta)$ have the same eigenvalues. Let $S = U^T\Sigma U$ be the eigendecomposition of S, where $U \in \mathbb{R}^{n\times n}$ is an orthogonal matrix and $\Sigma = \mathrm{diag}(u_1, u_2, \ldots, u_n)$. Denote:
$P_2 = \begin{pmatrix} U^T & 0 \\ 0 & U^T \end{pmatrix},$
and $\hat\Gamma(\alpha,\beta) = P_2^{-1}\tilde\Gamma(\alpha,\beta)P_2$. Then, it holds that:
$\hat\Gamma(\alpha,\beta) = \begin{pmatrix} (1-\alpha)I & -2\alpha(I+\Sigma)^{-1} \\ (\alpha-1)\beta(I+\Sigma)^{-1}\Sigma & (1-\beta)I + 2\alpha\beta(I+\Sigma)^{-1}\Sigma(I+\Sigma)^{-1} \end{pmatrix}.$
Notice that the non-zero blocks of $\hat\Gamma(\alpha,\beta)$ are all diagonal matrices; define:
$T_k(\alpha,\beta) = \begin{pmatrix} 1-\alpha & -\dfrac{2\alpha}{1+u_k} \\ \dfrac{(\alpha-1)\beta u_k}{1+u_k} & 1-\beta + \dfrac{2\alpha\beta u_k}{(1+u_k)^2} \end{pmatrix};$
then, it is easy to see that:
$\rho(\Gamma(\alpha,\beta)) = \rho(\hat\Gamma(\alpha,\beta)) = \max_{1\le k\le n} \rho(T_k(\alpha,\beta)).$
The eigenvalues $\lambda$ of the matrix $T_k(\alpha,\beta)$ satisfy the following equation:
$\lambda^2 - \left(2 - (\alpha+\beta) + \frac{2\alpha\beta u_k}{(1+u_k)^2}\right)\lambda + \alpha\beta - (\alpha+\beta) + 1 = 0.$
Let:
$b = \alpha + \beta, \qquad c = \alpha\beta, \qquad \xi_k = \frac{2u_k}{(1+u_k)^2}, \quad k = 1, 2, \ldots, n.$
Then, (41) can be written as:
$\lambda^2 - (2 - b + c\xi_k)\lambda + c - b + 1 = 0.$
Now, by Lemma 1, $|\lambda| < 1$ if and only if:
$|c - b + 1| < 1, \qquad |2 - b + c\xi_k| < c - b + 2.$
Solving the above inequalities yields:
$0 < c < b < \frac{c(1+\xi_k)}{2} + 2.$
Noticing that $\frac{c(1+\xi)}{2} + 2$ is monotonically increasing with respect to $\xi$, we have:
$0 < c < b < \frac{c(1+\xi_{\min})}{2} + 2,$
which finishes the proof. □
Remark 8.
Theorem 7 gives the convergence conditions for the AIBS iteration method. Now, we discuss the optimal parameters $\alpha^*$ and $\beta^*$ which minimize $\rho(\Gamma(\alpha,\beta))$. The analysis is quite involved when $c - b + 1 < 0$, so in this paper, and for this problem, we assume $c - b + 1 \ge 0$. Together with the convergence conditions in Theorem 7, we now assume $b - 1 \le c < b$ to investigate the local optimal parameters.
Theorem 9.
Assume that the conditions of Lemma 2 are satisfied, and $b - 1 \le c < b$, where $b = \alpha + \beta$, $c = \alpha\beta$. Then, the optimal parameters $\alpha^*$ and $\beta^*$ which minimize $\rho(\Gamma(\alpha,\beta))$ are given by:
$\alpha^* = \frac{b^* + \sqrt{(b^*)^2 - 4c^*}}{2}, \qquad \beta^* = \frac{b^* - \sqrt{(b^*)^2 - 4c^*}}{2},$
where
$b^* = \frac{4\left(1 + \sqrt{(1-\xi_{\min})(1-\xi_{\max})}\right)}{\left(\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}\right)^2}, \qquad c^* = \frac{4}{\left(\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}\right)^2},$
and the corresponding optimal convergence factor is given by:
$\min_{\alpha,\beta}\rho(\Gamma(\alpha,\beta)) = \rho(\Gamma(\alpha^*,\beta^*)) = \frac{\sqrt{1-\xi_{\min}} - \sqrt{1-\xi_{\max}}}{\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}}.$
Proof. 
The two roots of (42) are given by:
$\lambda_1(b,c,\xi_k) = \frac{1}{2}\left(\varphi(b,c,\xi_k) + \sqrt{\varphi(b,c,\xi_k)^2 - 4(c-b+1)}\right),$
$\lambda_2(b,c,\xi_k) = \frac{1}{2}\left(\varphi(b,c,\xi_k) - \sqrt{\varphi(b,c,\xi_k)^2 - 4(c-b+1)}\right),$
where
$\varphi(b,c,\xi_k) = 2 + c\xi_k - b.$
Let $\lambda(b,c,\xi_k)$ be the larger modulus of the two roots $\lambda_1(b,c,\xi_k)$ and $\lambda_2(b,c,\xi_k)$, that is:
$\lambda(b,c,\xi_k) = \max\{|\lambda_1(b,c,\xi_k)|, |\lambda_2(b,c,\xi_k)|\}.$
Then, we discuss the following two cases.
(1)
If $\Delta = (2 + c\xi_k - b)^2 - 4(c-b+1) < 0$, then:
$|\lambda_1(b,c,\xi_k)| = |\lambda_2(b,c,\xi_k)| = \sqrt{c-b+1},$
that is:
$\lambda(b,c,\xi_k) = \sqrt{c-b+1}.$
(2)
If $\Delta = (2 + c\xi_k - b)^2 - 4(c-b+1) \ge 0$, then $\lambda_1(b,c,\xi_k)$ and $\lambda_2(b,c,\xi_k)$ are real, and it holds that:
$\lambda(b,c,\xi_k) = \begin{cases} |\lambda_1(b,c,\xi_k)|, & 2 + c\xi_k - b \ge 0, \\ |\lambda_2(b,c,\xi_k)|, & 2 + c\xi_k - b < 0. \end{cases}$
From (42), we know:
$\lambda_1(b,c,\xi_k)\lambda_2(b,c,\xi_k) = c - b + 1;$
then, it is easy to see that:
$\lambda(b,c,\xi_k) \ge \sqrt{c-b+1}.$
Denote $\Gamma(b,c) = \Gamma(\alpha,\beta)$; then, the spectral radius of the AIBS iteration matrix can be defined by:
$\rho(\Gamma(b,c)) = \max_{1\le k\le n}\{\lambda(b,c,\xi_k)\}.$
Let:
$\lambda_i(b,c) = \max_{1\le k\le n}\{|\lambda_i(b,c,\xi_k)|\}, \quad i = 1, 2.$
Then, it holds that:
$\rho(\Gamma(b,c)) = \max\{\lambda_1(b,c), \lambda_2(b,c)\}.$
When $\Delta \ge 0$ and $2 + c\xi_k - b \ge 0$, i.e., $2 + c\xi_k - b \ge 2\sqrt{c-b+1}$, then:
$|\lambda_1(b,c,\xi_k)| \ge |\lambda_2(b,c,\xi_k)|.$
When $\Delta \ge 0$ and $2 + c\xi_k - b < 0$, i.e., $2 + c\xi_k - b \le -2\sqrt{c-b+1}$, then:
$|\lambda_1(b,c,\xi_k)| \le |\lambda_2(b,c,\xi_k)|.$
After some algebra, we have that when $\Delta \ge 0$ and $2 + c\xi_k - b \ge 0$, then:
$\rho(\Gamma(b,c)) = \lambda_1(b,c) = \frac{1}{2}\left(\varphi(b,c,\xi_{\max}) + \sqrt{\varphi(b,c,\xi_{\max})^2 - 4(c-b+1)}\right);$
when $\Delta \ge 0$ and $2 + c\xi_k - b < 0$, then:
$\rho(\Gamma(b,c)) = \lambda_2(b,c) = \frac{1}{2}\left(-\varphi(b,c,\xi_{\min}) + \sqrt{\varphi(b,c,\xi_{\min})^2 - 4(c-b+1)}\right).$
It is easy to see that there exist two values $\bar\xi, \hat\xi$ $(0 \le \hat\xi \le \bar\xi \le \frac{1}{2})$ satisfying the following equations:
$2 + c\bar\xi - b = 2\sqrt{c-b+1},$
$2 + c\hat\xi - b = -2\sqrt{c-b+1}.$
Now, we prove $\bar\xi, \hat\xi \in [\xi_{\min}, \xi_{\max}]$. Define $h(\xi) = 2 + c\xi - b$, $\xi \in [0, \frac{1}{2}]$; $h(\xi)$ is monotonically increasing with respect to $\xi$. If $\bar\xi > \xi_{\max}$, then we have $h(\bar\xi) > h(\xi_{\max})$, that is:
$2 + c\bar\xi - b > 2 + c\xi_{\max} - b;$
by (45) and (47), it holds that:
$2\sqrt{c-b+1} > 2 + c\xi_{\max} - b \ge 0,$
which means $\Delta < 0$, contradicting $\Delta \ge 0$. If $\hat\xi < \xi_{\min}$, then $h(\hat\xi) < h(\xi_{\min})$, that is:
$2 + c\hat\xi - b < 2 + c\xi_{\min} - b;$
by (46) and (48), it holds that:
$-2\sqrt{c-b+1} < 2 + c\xi_{\min} - b < 0,$
which means $\Delta < 0$, contradicting $\Delta \ge 0$. Hence, $\bar\xi, \hat\xi \in [\xi_{\min}, \xi_{\max}]$.
From (47) and (48), by some algebra, it holds that:
$b = \frac{4\left(1 + \sqrt{(1-\bar\xi)(1-\hat\xi)}\right)}{\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2},$
$c = \frac{4}{\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2}.$
Then, we have:
$\lambda_1(b,c) = \frac{\left(\sqrt{\xi_{\max}-\bar\xi} + \sqrt{\xi_{\max}-\hat\xi}\right)^2}{\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2}, \qquad \lambda_2(b,c) = \frac{\left(\sqrt{\hat\xi-\xi_{\min}} + \sqrt{\bar\xi-\xi_{\min}}\right)^2}{\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2}.$
For convenience, let $\lambda_1(b,c) = \lambda_1(\bar\xi,\hat\xi)$ and $\lambda_2(b,c) = \lambda_2(\bar\xi,\hat\xi)$. Without loss of generality, suppose $\xi_{\min} < \xi_{\max}$. In fact, if $\xi_{\min} = \xi_{\max}$, then $\bar\xi = \hat\xi = \xi_{\min} = \xi_{\max}$ and $\lambda_1(\bar\xi,\hat\xi) = \lambda_2(\bar\xi,\hat\xi) = 0$, i.e., $\rho(\Gamma(b,c)) = 0$, which is the trivial extreme case. It is easy to see that the following results hold:
$\begin{cases} \lambda_1(\bar\xi,\hat\xi) = \lambda_2(\bar\xi,\hat\xi), & \bar\xi + \hat\xi = \xi_{\min} + \xi_{\max}, \\ \lambda_1(\bar\xi,\hat\xi) < \lambda_2(\bar\xi,\hat\xi), & \bar\xi + \hat\xi > \xi_{\min} + \xi_{\max}, \\ \lambda_1(\bar\xi,\hat\xi) > \lambda_2(\bar\xi,\hat\xi), & \bar\xi + \hat\xi < \xi_{\min} + \xi_{\max}. \end{cases}$
Then, we have:
$\rho(\Gamma(b,c)) = \begin{cases} \lambda_1(\bar\xi,\hat\xi), & \bar\xi + \hat\xi \le \xi_{\min} + \xi_{\max}, \\ \lambda_2(\bar\xi,\hat\xi), & \bar\xi + \hat\xi > \xi_{\min} + \xi_{\max}. \end{cases}$
Notice:
$\frac{\partial\lambda_1(\bar\xi,\hat\xi)}{\partial\bar\xi} = \frac{\sqrt{\xi_{\max}-\bar\xi} + \sqrt{\xi_{\max}-\hat\xi}}{\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}}\, Q_1, \qquad \frac{\partial\lambda_2(\bar\xi,\hat\xi)}{\partial\bar\xi} = \frac{\sqrt{\hat\xi-\xi_{\min}} + \sqrt{\bar\xi-\xi_{\min}}}{\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}}\, Q_2,$
where
$Q_1 = \frac{\xi_{\max} - 1 + \sqrt{(\xi_{\max}-\bar\xi)(\xi_{\max}-\hat\xi)} - \sqrt{(1-\bar\xi)(1-\hat\xi)}}{\sqrt{(\xi_{\max}-\bar\xi)(1-\bar\xi)}\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2} < 0,$
$Q_2 = \frac{1 - \xi_{\min} + \sqrt{(\hat\xi-\xi_{\min})(\bar\xi-\xi_{\min})} + \sqrt{(1-\bar\xi)(1-\hat\xi)}}{\sqrt{(1-\bar\xi)(\bar\xi-\xi_{\min})}\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2} > 0;$
then, it holds that:
$\frac{\partial\lambda_1(\bar\xi,\hat\xi)}{\partial\bar\xi} < 0, \qquad \frac{\partial\lambda_2(\bar\xi,\hat\xi)}{\partial\bar\xi} > 0.$
Analogously, it also holds that:
$\frac{\partial\lambda_1(\bar\xi,\hat\xi)}{\partial\hat\xi} < 0, \qquad \frac{\partial\lambda_2(\bar\xi,\hat\xi)}{\partial\hat\xi} > 0.$
Similar to the proof of Theorem 2.5 in [24], ρ ( Γ ( b , c ) ) has a minimum at ξ ¯ + ξ ^ = ξ min + ξ max .
First, we prove that $\rho(\Gamma(b,c))$ has no minimum at $\bar\xi + \hat\xi \neq \xi_{\min} + \xi_{\max}$; see the following two cases.
Assume that $\rho(\Gamma(b,c))$ has a minimum at $\bar\xi + \hat\xi > \xi_{\min} + \xi_{\max}$. By (52), it holds that $\rho(\Gamma(b,c)) = \lambda_2(\bar\xi,\hat\xi)$. Let $\xi^* = \xi_{\min} + \xi_{\max} - \hat\xi$; then, $\bar\xi > \xi^*$. By the above monotonicity of the function $\lambda_2(\bar\xi,\hat\xi)$, we have $\lambda_2(\bar\xi,\hat\xi) > \lambda_2(\xi^*,\hat\xi)$, which contradicts the assumption.
Assume that $\rho(\Gamma(b,c))$ has a minimum at $\bar\xi + \hat\xi < \xi_{\min} + \xi_{\max}$. By (52), it holds that $\rho(\Gamma(b,c)) = \lambda_1(\bar\xi,\hat\xi)$. Let $\xi^{**} = \xi_{\min} + \xi_{\max} - \hat\xi$; then, $\bar\xi < \xi^{**}$. By the above monotonicity of the function $\lambda_1(\bar\xi,\hat\xi)$, we have $\lambda_1(\bar\xi,\hat\xi) > \lambda_1(\xi^{**},\hat\xi)$, which contradicts the assumption.
It is easy to see from the above two cases that ρ ( Γ ( b , c ) ) may have a minimum only at ξ ¯ + ξ ^ = ξ min + ξ max .
When $\bar\xi + \hat\xi = \xi_{\min} + \xi_{\max}$, from (51) and (52), it holds that:
$\rho(\Gamma(b,c)) = \lambda_1(\bar\xi,\hat\xi) = \lambda_2(\bar\xi,\hat\xi) = \frac{\left(\sqrt{\xi_{\max}-\bar\xi} + \sqrt{\xi_{\max}-\hat\xi}\right)^2}{\left(\sqrt{1-\bar\xi} + \sqrt{1-\hat\xi}\right)^2} = \frac{\left(\sqrt{\xi_{\max}-\bar\xi} + \sqrt{\bar\xi-\xi_{\min}}\right)^2}{\left(\sqrt{1-\bar\xi} + \sqrt{1+\bar\xi-\xi_{\min}-\xi_{\max}}\right)^2} = \frac{\xi_{\max} - \xi_{\min} + 2\sqrt{-\bar\xi^2 + (\xi_{\min}+\xi_{\max})\bar\xi - \xi_{\min}\xi_{\max}}}{2 - \xi_{\max} - \xi_{\min} + 2\sqrt{-\bar\xi^2 + (\xi_{\min}+\xi_{\max})\bar\xi + 1 - \xi_{\max} - \xi_{\min}}}.$
Let $t = -\bar\xi^2 + (\xi_{\min}+\xi_{\max})\bar\xi$ (so that $t \ge \xi_{\min}\xi_{\max}$), and define a function $\phi(t)$:
$\phi(t) = \rho(\Gamma(b,c)) = \frac{\xi_{\max} - \xi_{\min} + 2\sqrt{t - \xi_{\min}\xi_{\max}}}{2 - \xi_{\max} - \xi_{\min} + 2\sqrt{t + 1 - \xi_{\max} - \xi_{\min}}}.$
When $t = \xi_{\min}\xi_{\max}$, i.e., $-\bar\xi^2 + (\xi_{\min}+\xi_{\max})\bar\xi = \xi_{\min}\xi_{\max}$, it holds that
$(\xi_{\max}-\bar\xi)(\bar\xi-\xi_{\min}) = 0,$
so $\bar\xi = \xi_{\max}$ and $\hat\xi = \xi_{\min}$.
When $t > \xi_{\min}\xi_{\max}$, it holds that $\frac{d\phi(t)}{dt} > 0$, which means $\phi(t)$ is increasing with respect to t.
From the above analysis, we see that $\phi(t)$ attains its minimum at $t = \xi_{\min}\xi_{\max}$, which means $\rho(\Gamma(b,c))$ has the minimum:
$\min_{b,c}\rho(\Gamma(b,c)) = \frac{\sqrt{1-\xi_{\min}} - \sqrt{1-\xi_{\max}}}{\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}}$
at $\bar\xi = \xi_{\max}$ and $\hat\xi = \xi_{\min}$. Furthermore, from (49) and (50), we obtain the corresponding optimal parameters:
$b^* = \frac{4\left(1 + \sqrt{(1-\xi_{\min})(1-\xi_{\max})}\right)}{\left(\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}\right)^2}, \qquad c^* = \frac{4}{\left(\sqrt{1-\xi_{\min}} + \sqrt{1-\xi_{\max}}\right)^2};$
noticing that $b^* = \alpha^* + \beta^*$ and $c^* = \alpha^*\beta^*$, it then holds that:
$\alpha^* = \frac{b^* + \sqrt{(b^*)^2 - 4c^*}}{2}, \qquad \beta^* = \frac{b^* - \sqrt{(b^*)^2 - 4c^*}}{2},$
which finishes the proof. □
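As a small worked complement to Lemma 2, Remark 6, and Theorem 9, the following MATLAB sketch (ours) computes $\xi_{\min}$, $\xi_{\max}$ and the resulting AIBS parameters from the extreme eigenvalues $u_1$, $u_n$ of S (obtained, e.g., as in the sketch after Remark 4); the variable names are our own:

```matlab
% Sketch: AIBS parameters alpha*, beta* from Theorem 9 (with Remark 6 when u1 < 1 < un).
xi = @(u) 2*u ./ (1 + u).^2;
if un < 1
    xi_max = xi(un);  xi_min = xi(u1);
elseif u1 > 1
    xi_max = xi(u1);  xi_min = xi(un);
else                                    % u1 < 1 < un: take xi_max = 1/2 (Remark 6)
    xi_max = 1/2;     xi_min = min(xi(u1), xi(un));
end
den        = (sqrt(1 - xi_min) + sqrt(1 - xi_max))^2;
b_star     = 4*(1 + sqrt((1 - xi_min)*(1 - xi_max))) / den;
c_star     = 4 / den;
alpha_star = (b_star + sqrt(b_star^2 - 4*c_star)) / 2;
beta_star  = (b_star - sqrt(b_star^2 - 4*c_star)) / 2;
rho_opt    = (sqrt(1 - xi_min) - sqrt(1 - xi_max)) / (sqrt(1 - xi_min) + sqrt(1 - xi_max));
```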

4. Numerical Results

In this section, we present some numerical examples to demonstrate the effectiveness of the IBS iteration method and the AIBS iteration method. All the computations are implemented in MATLAB (version R2021a) on a PC with 16.0 GB memory, an Intel(R) Core(TM) i5-10500 CPU @ 1.19 GHz, and Windows 10 as the operating system. We denote the elapsed CPU time (in seconds) by CPU, the number of iterations by IT, and the norm of the relative residual by RES, respectively. In our computations, all the initial guesses are taken to be zero vectors, and the computations are terminated once the following stopping criterion is satisfied:
$RES = \frac{\|b - Ax^{(k)}\|_2}{\|b\|_2} \le 10^{-10}.$
We compare the AIBS and IBS methods with the NBS, PBS, AGSOR, and PMHSS methods, including the corresponding preconditioned GMRES methods. In each iteration of the AIBS, IBS, NBS, PBS, AGSOR, and PMHSS methods, we use the Cholesky factorization to solve the subsystems. In addition, we use preconditioned restarted GMRES(20) in our numerical tests. The iteration number k of GMRES means $k = (i-1)\times 20 + p$, where i is the number of restarts and p is the iteration number of the last restart, respectively.
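To illustrate how a block splitting matrix can be supplied to MATLAB's gmres as a preconditioner, the following sketch (ours; the helper apply_M_inv is our own and not part of the paper) applies GMRES(20) to the transformed system (14) with the IBS splitting matrix $M(\alpha)$ as the preconditioner, implemented through two triangular solves with a precomputed Cholesky factor; it assumes W, T, f, g, and alpha are already available:

```matlab
% Sketch: M(alpha) = [W+T, 0; T, alpha*(W+T)] as a preconditioner for GMRES(20).
n    = size(W, 1);
A14  = [W+T, 2*W; T, W+T];                          % coefficient matrix of (14)
rhs  = [f+g; g];
R    = chol(W + T);
Minv = @(r) apply_M_inv(r, R, T, alpha, n);
[z, flag, relres, iter] = gmres(A14, rhs, 20, 1e-10, 100, Minv);
u = z(1:n) + z(n+1:end);  v = z(n+1:end);           % x = u + i*v

function y = apply_M_inv(r, R, T, alpha, n)
% Applies M(alpha)^{-1} to a vector r via two triangular solves.
y1 = R \ (R' \ r(1:n));
y2 = (R \ (R' \ (r(n+1:end) - T*y1))) / alpha;
y  = [y1; y2];
end
```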
Example 1.
Consider the following complex symmetric linear system Equation [25,26]:
$\left[\left(K + \frac{3-\sqrt{3}}{\tau}I\right) + i\left(K + \frac{3+\sqrt{3}}{\tau}I\right)\right]u = b,$
where $\tau$ is the time step-size and K is the five-point centered difference matrix approximating the negative Laplacian operator with homogeneous Dirichlet boundary conditions, on a uniform mesh in the unit square $[0,1]\times[0,1]$ with the mesh-size $h = \frac{1}{m+1}$. The matrix $K \in \mathbb{R}^{n\times n}$ possesses the tensor-product form $K = I \otimes V_m + V_m \otimes I$, with $V_m = h^{-2}\,\mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m\times m}$. Hence, K is an $n \times n$ block tridiagonal matrix, with $n = m^2$. We take:
$W = K + \frac{3-\sqrt{3}}{\tau}I \quad \text{and} \quad T = K + \frac{3+\sqrt{3}}{\tau}I,$
and the right-hand side vector b whose $\bar j$-th entry $b_{\bar j}$ is given by:
$b_{\bar j} = \frac{(1-i)\bar j}{\tau(\bar j+1)^2}, \quad \bar j = 1, 2, \ldots, n.$
We take $\tau = h$ and multiply both sides of the system of equations by $h^2$ to normalize the coefficient matrix and the right-hand side vector.
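For reference, the test problem can be assembled in MATLAB as follows (a sketch of ours; the mesh parameter m is a free choice):

```matlab
% Sketch: assembling the test problem of Example 1.
m   = 64;  n = m^2;  h = 1/(m+1);  tau = h;
Vm  = (1/h^2) * gallery('tridiag', m, -1, 2, -1);    % h^(-2)*tridiag(-1,2,-1)
K   = kron(speye(m), Vm) + kron(Vm, speye(m));       % discrete negative Laplacian
W   = K + (3 - sqrt(3))/tau * speye(n);
T   = K + (3 + sqrt(3))/tau * speye(n);
jj  = (1:n)';
b   = (1 - 1i) * jj ./ (tau * (jj + 1).^2);
% Normalization used in the experiments: multiply both sides by h^2.
W = h^2*W;  T = h^2*T;  b = h^2*b;
```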
Table 1 lists the optimal parameters for the AIBS, IBS, PBS [21], NBS [21], AGSOR [17], and PMHSS [27] methods, where $\alpha^*$ and $\beta^*$ denote the optimal parameters of each method. Table 2 and Table 3 list the IT, CPU, and RES of each iteration method for Example 1.
Example 2.
Consider the following complex Helmholtz Equation [28,29]:
$-\nabla^2 u + \sigma_1 u + i\sigma_2 u = f,$
where $\sigma_1, \sigma_2$ are real coefficient functions and u satisfies Dirichlet boundary conditions in $D = [0,1]\times[0,1]$. The above equation describes the propagation of damped time-harmonic waves. We take H to be the five-point centered difference matrix approximating the negative Laplacian operator on a uniform mesh with mesh-size $h = \frac{1}{m+1}$. The matrix $H \in \mathbb{R}^{n\times n}$ possesses the tensor-product form $H = I \otimes V_m + V_m \otimes I$, with $V_m = h^{-2}\,\mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{m\times m}$. Hence, H is an $n\times n$ block-tridiagonal matrix, with $n = m^2$. This leads to the complex symmetric linear system:
$Ax \equiv (H + \sigma_1 I + i\sigma_2 I)x = b.$
In addition, we set $\sigma_1 = 10^3$, $\sigma_2 = 10^4$, and the right-hand side vector b to be $b = (1+i)A\mathbf{1}$, with $\mathbf{1}$ being the vector with all entries equal to 1. We normalize the system by multiplying both sides by $h^2$.
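The corresponding MATLAB construction (again a sketch of ours) is:

```matlab
% Sketch: assembling the test problem of Example 2.
m      = 64;  n = m^2;  h = 1/(m+1);
Vm     = (1/h^2) * gallery('tridiag', m, -1, 2, -1);
H      = kron(speye(m), Vm) + kron(Vm, speye(m));    % discrete negative Laplacian
sigma1 = 1e3;  sigma2 = 1e4;
W      = H + sigma1*speye(n);                        % real part of the coefficient matrix
T      = sigma2*speye(n);                            % imaginary part
A      = W + 1i*T;
b      = (1 + 1i) * (A * ones(n, 1));
% Normalization used in the experiments: multiply both sides by h^2.
W = h^2*W;  T = h^2*T;  A = h^2*A;  b = h^2*b;
```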
Table 4 lists the optimal parameters for the AIBS, IBS, PBS [21], NBS [21], AGSOR [17], and PMHSS [27] methods, where $\alpha^*$ and $\beta^*$ denote the optimal parameters of each method. Table 5 and Table 6 list the IT, CPU, and RES of each iteration method for Example 2.

5. Conclusions

For solving complex symmetric linear systems arising from the complex Helmholtz equation, based on the NBS and AGSOR iteration methods, we construct the IBS and AIBS iteration methods for the equivalent real-valued form. We analyze the convergence of the two iteration methods and obtain the corresponding optimal parameters that minimize the spectral radius of the iteration matrix. Our numerical results show that, both as stationary iterations and as preconditioners to accelerate GMRES, the IBS and AIBS methods outperform several existing iteration methods, such as the NBS, PBS, AGSOR, and PMHSS methods. The techniques in this paper are developed for the 2D problem; applying them to 3D problems will be our future work.

Author Contributions

Conceptualization, N.Z.; Methodology, Y.Z., N.Z. and Z.C.; Validation, Y.Z. and Z.C.; Writing—original draft, Y.Z. and N.Z.; Writing—review & editing, Z.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Mandelis, A. Diffusion-Wave Fields: Mathematical Methods and Green Functions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  2. Singer, I.; Turkel, E. High-order finite difference methods for the Helmholtz equation. Comput. Methods Appl. Mech. Eng. 1998, 163, 343–358. [Google Scholar] [CrossRef]
  3. Wu, Z.; Alkhalifah, T. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation. J. Comput. Phys. 2018, 365, 350–361. [Google Scholar] [CrossRef]
  4. Fu, Y. Compact fourth-order finite difference schemes for Helmholtz equation with high wave numbers. J. Comput. Math. 2008, 26, 98–111. [Google Scholar]
  5. Oberai, A.; Pinsky, P. A multiscale finite element method for the Helmholtz equation. Comput. Methods Appl. Mech. Eng. 1998, 154, 281–297. [Google Scholar] [CrossRef]
  6. Oberai, A.; Pinsky, P. A residual-based finite element method for the Helmholtz equation. Int. J. Num. Meth. Eng. 2000, 49, 399–419. [Google Scholar] [CrossRef]
  7. Mehdizadeh, O.; Paraschivoiu, M. Investigation of a two-dimensional spectral element method for Helmholtz’s equation. J. Comput. Phys. 2003, 189, 111–129. [Google Scholar] [CrossRef]
  8. Feriani, A.; Perotti, F.; Simoncini, V. Iterative system solvers for the frequency analysis of linear mechanical systems. Comput. Methods Appl. Mech. Engrg. 2000, 190, 1719–1739. [Google Scholar] [CrossRef]
  9. Hiptmair, R. Finite elements in computational electromagnetism. Acta Numer. 2003, 11, 237–339. [Google Scholar] [CrossRef]
  10. Howle, V.E.; Vavasis, S.A. An iterative method for solving complex-symmetric systems arising in electrical power modeling. SIAM J. Matrix Anal. Appl. 2005, 26, 1150–1178. [Google Scholar] [CrossRef]
  11. Rees, T.; Dollar, H.S.; Wathen, A.J. Optimal solvers for PDE-constrained optimization. SIAM J. Sci. Comput. 2010, 32, 271–298. [Google Scholar] [CrossRef]
  12. Bai, Z.-Z. Block preconditioners for elliptic PDE-constrained optimization problems. Computing 2010, 91, 379–395. [Google Scholar] [CrossRef]
  13. Axelsson, O.; Neytcheva, M.; Ahmad, B. A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algor. 2014, 66, 811–841. [Google Scholar] [CrossRef]
  14. Benzi, M.; Bertaccini, D. Block preconditioning of real-valued iterative algorithms for complex linear systems. IMA J. Numer. Anal. 2008, 28, 598–618. [Google Scholar] [CrossRef]
  15. Bai, Z.-Z. Rotated block triangular preconditioning based on PMHSS. Sci. China Math. 2013, 56, 2523–2538. [Google Scholar] [CrossRef]
  16. Salkuyeh, D.K.; Hezari, D.; Edalatpour, V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 2015, 92, 802–815. [Google Scholar] [CrossRef]
  17. Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 2015, 20, 37–52. [Google Scholar]
  18. Li, X.-A.; Zhang, W.-H.; Wu, Y.-J. On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 2018, 79, 131–137. [Google Scholar] [CrossRef]
  19. Axelsson, O.; Salkuyeh, D.K. A new version of a preconditioning method for certain two-by-two block matrices with square blocks. BIT 2018, 59, 321–342. [Google Scholar] [CrossRef]
  20. Siahkolaei, T.S.; Salkuyeh, D.K. On the parameter selection in the transformed matrix iteration method. Numer. Algor. 2020, 86, 179–189. [Google Scholar] [CrossRef]
  21. Huang, Z.-G. Efficient block splitting iteration methods for solving a class of complex symmetric linear systems. J. Comput. Appl. Math. 2021, 395, 113574. [Google Scholar] [CrossRef]
  22. Golub, G.H.; Van Loan, C.F. Matrix Computations. In Johns Hopkins Studies in the Mathematical Science, 3rd ed.; Johns Hopkins University Press: Baltimore, MD, USA, 1996. [Google Scholar]
  23. Young, D.M. Iterative Solution of Large Linear Systems; Academic Press: New York, NY, USA, 1971. [Google Scholar]
  24. Chao, Z.; Zhang, N.-M.; Lu, Y.-Z. Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2014, 266, 52–60. [Google Scholar] [CrossRef]
  25. Axelsson, O.; Kucherov, A. Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7, 197–218. [Google Scholar] [CrossRef]
  26. Bai, Z.-Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
  27. Bai, Z.-Z.; Benzi, M.; Chen, F. Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 2013, 33, 343–369. [Google Scholar] [CrossRef]
  28. Bertaccini, D. Efficient preconditioning for sequences of parametric complex symmetric linear systems. Electron. Tran. Numer. Anal. 2004, 18, 49–64. [Google Scholar]
  29. Li, X.; Yang, A.-L.; Wu, Y.-J. Lopsided PMHSS iteration method for a class of complex symmetric linear systems. Numer. Algor. 2014, 66, 555–568. [Google Scholar] [CrossRef]
Figure 1. The image of $s_u(\alpha)$, where $u = u_1, u_n$.
Figure 2. The image of $s_u(\alpha)$, where $u = u_j, u_1, u_n$.
Figure 3. The image of $s_u(\alpha)$, where $u = u_j, u_1, u_n$.
Table 1. The optimal parameters for Example 1.

Method          m = 32    m = 64    m = 128   m = 256
AIBS    α*      1.7909    1.7562    1.7354    1.7233
        β*      1.0034    1.0048    1.0058    1.0065
IBS     α*      0.5579    0.5687    0.5754    0.5792
PBS     α*      1         1         1         1
        β*      3.1391    2.8092    2.6385    2.5517
NBS     α*      1         1         1         1
AGSOR   α*      0.8283    0.7882    0.7626    0.7480
        β*      0.2438    0.2225    0.2100    0.2032
PMHSS   α*      1         1         1         1
Table 2. Numerical results of stationary iterations for Example 1.

Method        m = 32          m = 64          m = 128         m = 256
AIBS   IT     10              11              11              11
       CPU    0.0106          0.0307          0.1780          1.2865
       RES    5.9254×10^-13   9.6256×10^-14   1.2124×10^-13   9.6584×10^-14
IBS    IT     12              13              13              13
       CPU    0.0205          0.0298          0.1972          1.8338
       RES    4.0848×10^-13   1.4752×10^-13   2.0660×10^-13   1.7505×10^-13
PBS    IT     17              18              19              19
       CPU    0.0126          0.0557          0.3170          2.2667
       RES    4.8159×10^-13   3.9531×10^-13   1.5480×10^-13   1.3762×10^-13
NBS    IT     34              35              35              35
       CPU    0.0153          0.0693          0.5070          3.7428
       RES    1.7149×10^-12   4.7500×10^-13   2.4915×10^-13   1.2721×10^-13
AGSOR  IT     26              29              31              33
       CPU    0.0113          0.0653          0.4588          3.9342
       RES    9.1573×10^-13   4.0411×10^-13   2.9290×10^-13   1.2160×10^-13
PMHSS  IT     36              36              36              35
       CPU    0.0156          0.0679          0.5744          3.9953
       RES    2.2443×10^-12   1.0691×10^-12   4.1579×10^-13   2.5996×10^-13
Table 3. Numerical results of preconditioned GMRES for Example 1.

Method              m = 32          m = 64          m = 128         m = 256
AIBS-GMRES   IT     9               9               10              10
             CPU    0.0462          0.0567          0.2596          1.7734
             RES    1.8233×10^-11   8.2612×10^-11   9.0014×10^-12   3.4665×10^-11
IBS-GMRES    IT     9               9               10              10
             CPU    0.0498          0.0580          0.2668          2.3085
             RES    4.7762×10^-12   7.6997×10^-11   1.5088×10^-11   5.8511×10^-11
PBS-GMRES    IT     10              12              12              13
             CPU    0.0515          0.0698          0.4258          2.9442
             RES    7.3239×10^-11   5.8871×10^-12   5.7710×10^-11   1.6185×10^-11
NBS-GMRES    IT     9               9               10              10
             CPU    0.0487          0.0593          0.3193          2.4691
             RES    2.5049×10^-12   4.2859×10^-11   8.4617×10^-12   3.3517×10^-11
AGSOR-GMRES  IT     14              16              18              20
             CPU    0.0680          0.0781          0.4317          3.6026
             RES    4.4083×10^-11   5.6892×10^-11   4.5164×10^-11   6.8716×10^-11
PMHSS-GMRES  IT     15              16              17              17
             CPU    0.0499          0.0653          0.4575          3.4580
             RES    1.8291×10^-11   5.6628×10^-11   2.6098×10^-11   6.9464×10^-11
Table 4. The optimal parameters for Example 2.

Method          m = 32    m = 64    m = 128   m = 256
AIBS    α*      1.4756    1.4757    1.4757    1.2826
        β*      1.0336    1.0337    1.0337    1.0934
IBS     α*      0.6660    0.6660    0.6854    0.7318
PBS     α*      1         1         1         1
        β*      1.4525    1.4542    1.4542    1.4542
NBS     α*      1         1         1         1
AGSOR   α*      0.3963    0.2370    0.1969    0.1873
        β*      0.0791    0.1420    0.1721    0.1810
PMHSS   α*      1         1         1         1
Table 5. Numerical results of stationary iterations for Example 2.

Method        m = 32          m = 64          m = 128         m = 256
AIBS   IT     13              13              14              14
       CPU    0.0142          0.0303          0.2544          1.6005
       RES    1.7848×10^-11   2.5251×10^-12   1.7833×10^-13   9.0874×10^-14
IBS    IT     17              17              17              19
       CPU    0.0172          0.0515          0.2861          2.2295
       RES    2.6696×10^-11   3.5821×10^-12   4.3296×10^-13   6.9801×10^-14
PBS    IT     19              24              24              24
       CPU    0.0181          0.0517          0.3498          5.0836
       RES    1.5969×10^-11   2.9966×10^-12   5.0604×10^-13   6.6493×10^-14
NBS    IT     26              31              31              30
       CPU    0.0176          0.0617          0.4955          3.4326
       RES    1.6634×10^-11   2.3180×10^-12   3.5384×10^-13   9.3083×10^-14
AGSOR  IT     98              138             143             142
       CPU    0.0430          0.2715          2.3142          25.0304
       RES    9.5904×10^-12   8.8873×10^-13   1.1951×10^-13   1.9752×10^-14
PMHSS  IT     53              53              53              57
       CPU    0.0416          0.1432          0.9691          12.1306
       RES    2.7446×10^-11   3.5964×10^-12   4.6031×10^-13   8.7414×10^-14
Table 6. Numerical results of preconditioned GMRES for Example 2.

Method              m = 32          m = 64          m = 128         m = 256
AIBS-GMRES   IT     12              12              13              14
             CPU    0.0527          0.0664          0.3105          2.4856
             RES    1.5324×10^-11   4.9631×10^-11   3.8500×10^-11   8.9480×10^-11
IBS-GMRES    IT     12              12              13              14
             CPU    0.0591          0.0638          0.3695          2.4913
             RES    1.6402×10^-11   5.2164×10^-11   3.6414×10^-11   8.1989×10^-11
PBS-GMRES    IT     12              13              13              14
             CPU    0.0650          0.0704          0.4306          3.1274
             RES    5.4437×10^-11   4.3436×10^-11   9.4839×10^-11   2.7788×10^-11
NBS-GMRES    IT     11              12              13              14
             CPU    0.0720          0.0865          0.4490          3.1868
             RES    7.4649×10^-11   2.9526×10^-11   2.5275×10^-11   6.5381×10^-11
AGSOR-GMRES  IT     41              79              103             110
             CPU    0.0540          0.4017          2.3971          16.6325
             RES    9.6830×10^-11   9.8920×10^-11   9.4586×10^-11   8.4524×10^-11
PMHSS-GMRES  IT     21              23              23              27
             CPU    0.1185          0.1212          0.7072          5.4326
             RES    8.4213×10^-11   3.4984×10^-11   5.4485×10^-11   5.7106×10^-11
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
