Numerical Simulations of Complex Helmholtz Equations Using Two-Block Splitting Iterative Schemes with Optimal Values of Parameters

1 Center of Excellence for Ocean Engineering, National Taiwan Ocean University, Keelung 202301, Taiwan
2 Department of Mechanical Engineering, National United University, Miaoli 36063, Taiwan
3 Bachelor Degree Program in Ocean Engineering and Technology, National Taiwan Ocean University, Keelung 202301, Taiwan
* Author to whom correspondence should be addressed.
AppliedMath 2024, 4(4), 1256-1277; https://doi.org/10.3390/appliedmath4040068
Submission received: 6 August 2024 / Revised: 18 September 2024 / Accepted: 20 September 2024 / Published: 9 October 2024

Abstract

For two-block splitting iterative schemes applied to the complex linear system arising from the complex Helmholtz equation, we formulate the iteration in terms of a descent vector and a residual vector. We propose splitting iterative schemes based on the perpendicularity of consecutive residual vectors. The two-block splitting iterative schemes are proven to be absolutely convergent, and the residual is minimized at each iteration step. The single and double parameters in the two-block splitting iterative schemes are derived explicitly from the orthogonality condition or the minimality conditions. Several simulations of complex Helmholtz equations are performed to exhibit the performance of the proposed two-block iterative schemes endowed with optimal values of the parameters. The primary novelty and major contribution of this paper lie in using the orthogonality condition of residual vectors to optimize the iterative process. The proposed method may fill a gap in the current literature, where existing iterative methods either lack explicit parameter optimization or struggle with high wave numbers and large damping constants in the complex Helmholtz equation. The two-block splitting iterative scheme provides an efficient and convergent solution, even in challenging cases.

1. Introduction

The complex Helmholtz equations appear in many physical applications, e.g., scattering problems, electromagnetics, acoustics, damped propagation of time-harmonic waves, electrochemical impedance spectroscopy, unsteady slow viscous flows, etc. [1,2]. Discretization with the finite difference method [3,4,5], the finite element method [6,7], or the spectral element method [8] leads to complex symmetric linear systems.
Consider a complex Helmholtz equation, as follows:
$$-\Delta u(x,y) + \sigma u(x,y) = p(x,y), \quad (x,y) \in \Omega, \qquad (1)$$
where $\Omega := \{(x,y):\, 0 < x < 1,\ 0 < y < 1\}$; $\sigma = \sigma_1 + i\sigma_2$, with $i^2 = -1$ and $\sigma_2 \geq 0$, is a complex-valued wave number; $u(x,y) = w(x,y) + iv(x,y)$ is a complex function depicting the solution of Equation (1).
Physically, the complex Helmholtz equation describes a damped wave equation, known as the Telegraph equation, as follows:
$$\frac{1}{c^2} \frac{\partial^2 U(x,y,t)}{\partial t^2} + \frac{\gamma}{c^2} \frac{\partial U(x,y,t)}{\partial t} - \Delta U(x,y,t) = 0, \qquad (2)$$
where c is the speed of the wave, and γ is a constant damping coefficient. Let $U(x,y,t) = \mathrm{Re}[e^{i\omega t} u(x,y)]$ be a time-harmonic solution of Equation (2), where Re denotes the real part. With the complex wave number $\sigma = \sigma_1 + i\sigma_2 = -\omega^2/c^2 + i\omega\gamma/c^2$, $u(x,y)$ is a solution of Equation (1) with $p(x,y) = 0$ whenever $U(x,y,t)$ is a solution of Equation (2). The solution of the complex Helmholtz equation with complex wave number σ can be understood as a wave that is attenuated while it propagates: the larger the imaginary part $\sigma_2$ of the wave number, the stronger the damping effect.
Equations (1) and (2) arise in many applications. A fast multipole method for the complex Helmholtz equation, which handles numerous real-world applications in computational electromagnetics, can be used as a building block of other fast solvers [9]; a preconditioner in a special two-by-two block form solves the real system formulation of complex Helmholtz equations [10]; a parameterized Uzawa method for the complex Helmholtz equation addresses the standard saddle point problem [11]; and the Telegraph equation has been used to investigate the effect of viral spread on tumor cells and to determine the role of the extracellular matrix in facilitating viral spread [12]. On the basis of the Telegraph equation for the distributed parameters of a lossy transmission line, an observer allows the accurate detection and localization of a transmission fault [13], and an integral decomposition enables one to find the frequency shift of the generalized Telegraph equation with a moving point-wise harmonic source [14].
After a five-point finite difference discretization of Equation (1), it becomes a complex linear system, as follows:
$$(K + \sigma_1 I_n + i\sigma_2 I_n)(w + iv) = f + ig. \qquad (3)$$
Here, $K = I_{n_0} \otimes S + S \otimes I_{n_0}$ is the centered difference matrix approximation of the negative Laplacian operator in Equation (1), where ⊗ is the Kronecker tensor product, $n_0$ is the number of interior grid points in the x and y directions, respectively, and $n = n_0^2$ is the total number of interior grid points inside the unit square. The grid spacing is $h = \Delta x = \Delta y = 1/(n_0 + 1)$, and $S = \mathrm{tridiag}(-1, 2, -1)/h^2$; $w$ consists of the nodal values of the variable $w(x,y)$, and is a vectorization of $w(x_i, y_j)$ at all inner nodal points $(x_i, y_j) = (ih, jh)$, $i = 1, \ldots, n_0$, $j = 1, \ldots, n_0$; $v$ consists of the nodal values of the variable $v(x,y)$, and is a vectorization of $v(x_i, y_j)$ at all inner nodal points [15].
Let
$$W := K + \sigma_1 I_n, \quad T := \sigma_2 I_n, \qquad (4)$$
where $W, T \in \mathbb{R}^{n \times n}$ are symmetric positive definite and symmetric positive semi-definite, respectively; Equation (3) is re-written as
$$(W + iT)(w + iv) = f + ig. \qquad (5)$$
Upon letting
$$A = \begin{bmatrix} W & -T \\ T & W \end{bmatrix}, \quad y = \begin{bmatrix} w \\ v \end{bmatrix}, \quad h = \begin{bmatrix} f \\ g \end{bmatrix}, \qquad (6)$$
Equation (5) can be written as
$$A y = h, \quad h, y \in \mathbb{R}^N, \; A \in \mathbb{R}^{N \times N}, \qquad (7)$$
where N = 2n.
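To make the setup concrete, the following minimal NumPy sketch (ours, not the authors' Fortran 77 code; all names are illustrative) assembles K, the blocks W and T of Equation (4), and the real two-block form of Equations (6) and (7) for small $n_0$:

```python
import numpy as np

def assemble_blocks(n0, sigma1, sigma2):
    """Five-point discretization on the unit square: K = I (x) S + S (x) I,
    W = K + sigma1*I_n, T = sigma2*I_n (dense matrices, for small n0)."""
    h = 1.0 / (n0 + 1)                                    # grid spacing
    S = (2.0 * np.eye(n0)
         - np.eye(n0, k=1) - np.eye(n0, k=-1)) / h**2     # tridiag(-1,2,-1)/h^2
    K = np.kron(np.eye(n0), S) + np.kron(S, np.eye(n0))   # negative Laplacian
    n = n0 * n0
    W = K + sigma1 * np.eye(n)
    T = sigma2 * np.eye(n)
    return W, T

def real_block_system(W, T, f, g):
    """Real two-block form (6)-(7): A = [[W, -T], [T, W]], h = [f; g]."""
    A = np.block([[W, -T], [T, W]])
    h = np.concatenate([f, g])
    return A, h
```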
For any (M, N) splitting of A given by
$$A = M - N, \qquad (8)$$
with M being nonsingular, an iterative scheme for Equation (7) is [16]
$$M y^{(k+1)} = N y^{(k)} + h, \qquad (9)$$
where $y^{(k)}$ is the kth step value of y. The convergence is guaranteed if
$$\rho = \rho(G) < 1, \qquad (10)$$
where $G = M^{-1} N$ is the iteration matrix, and ρ is the spectral radius of G.
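For illustration only, the generic iteration (9) together with the convergence test (10) can be coded as follows (a sketch of ours with dense solves, not the paper's implementation):

```python
import numpy as np

def mn_splitting_solve(M, N, h, y0, tol=1e-8, maxit=500):
    """Generic (M, N) splitting iteration M y^{k+1} = N y^k + h.
    Convergence requires rho(M^{-1} N) < 1, which is checked here."""
    G = np.linalg.solve(M, N)                   # iteration matrix G = M^{-1} N
    rho = np.max(np.abs(np.linalg.eigvals(G)))  # spectral radius of G
    A = M - N
    y = y0.copy()
    for k in range(maxit):
        y = np.linalg.solve(M, N @ y + h)
        if np.linalg.norm(h - A @ y) < tol:     # residual-based stopping rule
            break
    return y, rho, k
```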
The (M, N) splitting iterative scheme includes the Jacobi method, the Gauss–Seidel method, the successive over-relaxation (SOR) method, and the accelerated over-relaxation (AOR) method as special cases. The SOR method was developed in [17]; the AOR method in [18] is a generalization of SOR.
Many iteration methods have been proposed to solve complex symmetric linear systems, such as the generalized successive over-relaxation (GSOR) method [19], the accelerated GSOR (AGSOR) method [20], the symmetric block triangular splitting (SBTS) method [21], and the improved block splitting (IBS) method, as well as its acceleration AIBS [22]. Additionally, the scale-splitting (SCSP) method [23] has been generalized to the two-parameter two-step scale-splitting (TTSCSP) method [24]. A double-step method was used to solve the complex Helmholtz equation in [25].
For the coefficient matrix given by
$$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \qquad (11)$$
Equation (7) is a two-block linear system. With $A_{11} \in \mathbb{R}^{r \times r}$ of rank r, $A_{12} \in \mathbb{R}^{r \times (n-r)}$, $A_{21} \in \mathbb{R}^{(m-r) \times r}$, $A_{22} \in \mathbb{R}^{(m-r) \times (n-r)}$, and $B = A_{21} A_{11}^{-1}$, Darvishi and Khosro-Aghdam [26] derived the optimal value $\omega_{opt}$ of the relaxation parameter for the symmetric SOR method, given explicitly in terms of $\|B\|$.
In the SOR method, A is decomposed as
$$A = D - U - L, \qquad (13)$$
where D is a nonsingular diagonal matrix, and U and L are strictly upper and strictly lower triangular matrices, respectively. The SOR method with $\omega_{opt}$ can be written as [17]
$$(D - \omega_{opt} L) y^{(k+1)} = \omega_{opt} h + [(1 - \omega_{opt}) D + \omega_{opt} U] y^{(k)}. \qquad (14)$$
Many efficient iterative methods similar to the SOR-like and AOR-like methods, as well as their spectral analysis, have been studied in the literature [27,28,29,30,31].
Given an initial guess $y^{(0)}$ for an iterative scheme, we use Equation (7) to obtain a residual vector $r^{(0)} = h - A y^{(0)}$. According to the information in $r^{(0)}$, we attempt to search for a good descent vector $u$, which corrects the solution to $y = y^{(0)} + u$, so that the new residual is decreased, obeying $\|r\| < \|r^{(0)}\|$. The residual vector and the descent vector are two fundamental concepts in the area of iterative schemes.
Let
$$\mathcal{K}_m(A, r^{(0)}) := \mathrm{Span}\{r^{(0)}, A r^{(0)}, \ldots, A^{m-1} r^{(0)}\} \qquad (15)$$
be an m-dimensional Krylov subspace; the GMRES method in [32] employs the Petrov–Galerkin condition
$$r^{(0)} - A u \perp \mathcal{L}_m = A \mathcal{K}_m \qquad (16)$$
to search for $u \in \mathcal{K}_m$ via a perpendicularity property, which is a crucial ingredient in the development of many iterative algorithms in the Krylov subspace. However, this property is rarely used in the (M, N) splitting iterative methods. In this paper, we adopt this concept to determine the optimal values of the parameters appearing in splitting iterative schemes for solving two-block complex linear systems.
Based on the different splittings of coefficient matrices, we are going to develop six types of iterative algorithms to solve the complex linear system:
$$\text{Algorithm 1}: \; A = \begin{bmatrix} W & -T \\ T & W \end{bmatrix}, \quad M = \begin{bmatrix} \tfrac{1}{\omega} W & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} W \end{bmatrix}, \quad N = \begin{bmatrix} (\tfrac{1}{\omega} - 1) W & T \\ (\tfrac{\eta}{\omega^2} - 1) T & (\tfrac{1}{\omega} - 1) W \end{bmatrix}, \qquad (17)$$
$$\text{Algorithm 2}: \; \tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix}, \quad M = \begin{bmatrix} D & 0 \\ T & \eta D \end{bmatrix}, \quad N = \begin{bmatrix} 0 & -2W \\ 0 & (\eta - 1) D \end{bmatrix}, \qquad (18)$$
$$\text{Algorithm 3}: \; \tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix}, \quad M = \begin{bmatrix} \tfrac{1}{\omega} D & 0 \\ T & \tfrac{1}{\omega} D \end{bmatrix}, \quad N = \begin{bmatrix} (\tfrac{1}{\omega} - 1) D & -2W \\ 0 & (\tfrac{1}{\omega} - 1) D \end{bmatrix}, \qquad (19)$$
$$\text{Algorithm 4}: \; \tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix}, \quad M = \begin{bmatrix} D & 0 \\ \tfrac{\eta}{\alpha} T & \tfrac{1}{\alpha} D \end{bmatrix}, \quad N = \begin{bmatrix} 0 & -2W \\ (\tfrac{\eta}{\alpha} - 1) T & (\tfrac{1}{\alpha} - 1) D \end{bmatrix}, \qquad (20)$$
$$\text{Algorithm 5}: \; \tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix}, \quad M = \begin{bmatrix} D & 0 \\ \tfrac{\eta}{\alpha} T & \tfrac{1}{\alpha} D \end{bmatrix}, \quad N = \begin{bmatrix} 0 & -2W \\ (\tfrac{\eta}{\alpha} - 1) T & (\tfrac{1}{\alpha} - 1) D \end{bmatrix}, \qquad (21)$$
$$\text{Algorithm 6}: \; \tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix}, \quad M = \begin{bmatrix} \tfrac{1}{\omega} D & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} D \end{bmatrix}, \quad N = \begin{bmatrix} (\tfrac{1}{\omega} - 1) D & -2W \\ (\tfrac{\eta}{\omega^2} - 1) T & (\tfrac{1}{\omega} - 1) D \end{bmatrix}, \qquad (22)$$
where D = W + T. Algorithms 2 and 3 have a single parameter, while Algorithms 1 and 4–6 have two parameters. Algorithms 1 and 6 have the same splitting form; however, Algorithm 1 is applied to the original system with the coefficient matrix A, while Algorithm 6 is applied to a transformed system, to be introduced in Section 4, with the preconditioned coefficient matrix $\tilde{A}$. Algorithms 4 and 5 have the same splitting of $\tilde{A}$; however, η in Algorithm 4 is a free parameter, while in Algorithm 5, α and η are both obtained from the orthogonality condition. We will develop novel methods to determine the values of these parameters, such that the iteration process is optimized. We must emphasize that, given ad hoc values of the parameters, the (M, N) iterative schemes are divergent in general.
This paper presents the following novel contributions:
(a)
The two-block splitting iterative methods for complex linear systems are formulated to preserve orthogonality and to maximize the reduction in the residual vector's length.
(b)
The values of parameters in the splitting iteration methods are determined by the orthogonality condition and the minimality conditions.
(c)
We prove that the proposed two-block iterative schemes are absolutely convergent.
(d)
A numerical simulation of the complex Helmholtz equation is advanced by highly accurate and efficient single-parameter SOR-like and two-parameter AOR-like two-block splitting iteration methods.
(e)
The optimal values of parameters can improve the accuracy and accelerate the convergence for the complex Helmholtz equation with a high wave number and large damping effect.

2. Mathematical Preliminaries

Equation (7) is equivalent to
$$A u = r^{(k)}, \qquad (23)$$
where
$$u = y - y^{(k)} \qquad (24)$$
is the descent vector, and
$$r^{(k)} = h - A y^{(k)} \qquad (25)$$
is the kth step residual vector.
Lemma 1.
Any (M, N) splitting iterative scheme (9) for solving Equation (7) can be formulated as
$$y^{(k+1)} = y^{(k)} + u^{(k)}, \quad M u^{(k)} = r^{(k)}. \qquad (26)$$
Proof. 
It follows from Equation (9) that
$$M y^{(k+1)} = M y^{(k)} + (N - M) y^{(k)} + h; \qquad (27)$$
with the help of Equations (8) and (25), it becomes
$$M [y^{(k+1)} - y^{(k)}] = r^{(k)}. \qquad (28)$$
We end the proof of Lemma 1. □
Definition 1.
The splitting iterative scheme (26) is said to be orthogonal if
$$r^{(k+1)} \cdot (A u^{(k)}) = 0, \qquad (29)$$
where the dot between two vectors signifies the inner product.
Theorem 1.
If the splitting iterative scheme (26) is orthogonal and the descent vector $u^{(k)}$ is bounded, then
$$\|r^{(k+1)}\|^2 < \|r^{(k)}\|^2 \qquad (30)$$
holds, and the scheme is absolutely convergent.
Proof. 
Multiplying the first equation in (26) by A and using Equation (25) yields
$$r^{(k+1)} = r^{(k)} - A u^{(k)}. \qquad (31)$$
If condition (29) is satisfied and $u^{(k)}$ is bounded, then by taking the inner product of $A u^{(k)}$ with Equation (31), we can derive
$$r^{(k)} \cdot (A u^{(k)}) = \|A u^{(k)}\|^2, \qquad (32)$$
which is called an orthogonality condition.
Taking the squared norm of Equation (31) yields
$$\|r^{(k+1)}\|^2 = \|r^{(k)}\|^2 - 2 r^{(k)} \cdot (A u^{(k)}) + \|A u^{(k)}\|^2. \qquad (33)$$
Utilizing Equation (32), we have
$$\|r^{(k+1)}\|^2 = \|r^{(k)}\|^2 - \|A u^{(k)}\|^2 < \|r^{(k)}\|^2, \qquad (34)$$
due to $\|A u^{(k)}\|^2 > 0$. Equation (30) is proven.
By means of Equation (31), one has
$$r^{(k)} = r^{(k+1)} + A u^{(k)}. \qquad (35)$$
Since $r^{(k+1)}$ and $A u^{(k)}$ are perpendicular according to Equation (29), $r^{(k)}$, $r^{(k+1)}$ and $A u^{(k)}$ constitute the three sides of a right triangle. By the Pythagorean theorem, we have
$$\|r^{(k)}\|^2 = \|r^{(k+1)}\|^2 + \|A u^{(k)}\|^2. \qquad (36)$$
Obviously, Equation (34) holds throughout the iteration process. The orthogonality condition guarantees that the residual is strictly decreased step-by-step, and thus the splitting iterative scheme (26) is absolutely convergent when $u^{(k)}$ is bounded. □
Equation (32) motivates us to define
$$\xi^{(k)} = \frac{r^{(k)} \cdot (A u^{(k)})}{\|A u^{(k)}\|^2}. \qquad (37)$$
Corollary 1.
The splitting iterative method in Equation (26) diverges if
$$\xi^{(k)} < \frac{1}{2}. \qquad (38)$$
Proof. 
Equation (33), through $\xi^{(k)}$ in Equation (37), can be written as
$$\|r^{(k+1)}\|^2 = \|r^{(k)}\|^2 + (1 - 2\xi^{(k)}) \|A u^{(k)}\|^2. \qquad (39)$$
If $\xi^{(k)} < 1/2$, then
$$(1 - 2\xi^{(k)}) \|A u^{(k)}\|^2 > 0.$$
Hence, $\|r^{(k+1)}\|^2 > \|r^{(k)}\|^2$, and the splitting iterative method (26) diverges. □
Corollary 2.
For the orthogonal splitting iterative scheme (26), which has ξ ( k ) = 1 , the reduction in the residual vector’s length is maximized.
Proof. 
We begin with
$$y^{(k+1)} = y^{(k)} + \xi^{(k)} u^{(k)}; \qquad (40)$$
hence, it is easy to deduce
$$r^{(k+1)} = r^{(k)} - \xi^{(k)} A u^{(k)}, \qquad (41)$$
$$\|r^{(k+1)}\|^2 = \|r^{(k)}\|^2 - [2 \xi^{(k)} r^{(k)} \cdot (A u^{(k)}) - (\xi^{(k)})^2 \|A u^{(k)}\|^2]. \qquad (42)$$
To reduce the residual vector's length maximally, we consider the following maximization problem:
$$\max_{\xi^{(k)}} F = 2 \xi^{(k)} r^{(k)} \cdot (A u^{(k)}) - (\xi^{(k)})^2 \|A u^{(k)}\|^2; \qquad (43)$$
the maximality condition $dF/d\xi^{(k)} = 0$ leads to Equation (37). By inserting $\xi^{(k)} = 1$ for the orthogonal splitting iterative scheme into Equation (39), we can attain Equation (34); $\|A u^{(k)}\|^2$ is the maximal value of F. □
The above results strongly suggest choosing the values of the parameters appearing in the splitting iterative scheme to satisfy the equality $\xi^{(k)} = 1$, i.e., the orthogonality condition (32), at each iteration step. By applying the minimum residual technique to the Hermitian and skew-Hermitian splitting (HSS) iteration scheme, Yang [33] determined the parameters explicitly. In their treatment of nonstationary upper and lower triangular splitting iteration methods for linear inverse problems, Cui et al. [34] chose the parameters by minimizing the residual norm. The orthogonality condition (32) is more fundamental than the minimum residual technique.
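For monitoring, $\xi^{(k)}$ of Equation (37) is a one-line computation; the helper below (a sketch of ours) lets one verify $\xi^{(k)} = 1$ at every step, as reported later in Tables 3 and 10:

```python
import numpy as np

def xi(r, Au):
    """xi^{(k)} = r^{(k)}.(A u^{(k)}) / ||A u^{(k)}||^2, Eq. (37);
    the splitting scheme is orthogonal when this ratio equals 1."""
    return float(r @ Au) / float(Au @ Au)
```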

3. Generalized AOR-like Iterative Scheme

Like the accelerated over-relaxation (AOR) method [18], we consider two parameters (ω and η) in M:
$$M = \begin{bmatrix} \tfrac{1}{\omega} W & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} W \end{bmatrix}, \qquad (44)$$
which is inserted into Equation (26) to accelerate the convergence of the splitting iterative scheme.
Then, we split A in Equation (6) as follows:
$$A = \begin{bmatrix} W & -T \\ T & W \end{bmatrix} = M - N = \begin{bmatrix} \tfrac{1}{\omega} W & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} W \end{bmatrix} - \begin{bmatrix} (\tfrac{1}{\omega} - 1) W & T \\ (\tfrac{\eta}{\omega^2} - 1) T & (\tfrac{1}{\omega} - 1) W \end{bmatrix}. \qquad (45)$$
Because there are two parameters in M, we can adopt Corollary 2 to find the optimal values of ω and η by using the following minimization problem derived from Equation (33):
$$\min_{\omega, \eta} f = \|A u^{(k)}\|^2 - 2 r^{(k)} \cdot (A u^{(k)}), \qquad (46)$$
that is,
$$\frac{\partial f}{\partial \omega} = 0, \quad \frac{\partial f}{\partial \eta} = 0; \qquad (47)$$
they are called minimality conditions.
We apply Lemma 1 to Equation (7). Let
$$y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}, \qquad (48)$$
where $y \in \mathbb{R}^N$, $y_1, y_2 \in \mathbb{R}^n$ and N = 2n.
The iterations of $y_1$ and $y_2$ are given by
$$y_1^{(k+1)} = y_1^{(k)} + u_1^{(k)}, \qquad (49)$$
$$y_2^{(k+1)} = y_2^{(k)} + u_2^{(k)}, \qquad (50)$$
$$M \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} \tfrac{1}{\omega} W & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} W \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} r_1^{(k)} \\ r_2^{(k)} \end{bmatrix}, \qquad (51)$$
where
$$r_1^{(k)} = f - W y_1^{(k)} + T y_2^{(k)}, \qquad (52)$$
$$r_2^{(k)} = g - T y_1^{(k)} - W y_2^{(k)}. \qquad (53)$$
Because of
$$M^{-1} = \begin{bmatrix} \omega B & 0 \\ -\eta C & \omega B \end{bmatrix}, \qquad (54)$$
where $B = W^{-1}$ and $C = B T B$, $u_1^{(k)}$ and $u_2^{(k)}$ are obtained from Equation (51) by
$$u_1^{(k)} = \omega B r_1^{(k)}, \qquad (55)$$
$$u_2^{(k)} = \omega B r_2^{(k)} - \eta C r_1^{(k)}. \qquad (56)$$
Theorem 2.
For the two-parameter splitting iterative scheme with M given by Equation (44), the optimal values of ω and η satisfying the minimality conditions in Equation (47) are
$$\omega = \frac{a_{22} b_1 - a_{12} b_2}{a_{11} a_{22} - a_{12} a_{21}}, \quad \eta = \frac{a_{11} b_2 - a_{21} b_1}{a_{11} a_{22} - a_{12} a_{21}}, \qquad (57)$$
where
$$a_{11} = \|v^{(k)}\|^2 + \|w^{(k)}\|^2, \quad a_{12} = v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}, \quad b_1 = r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)}, \qquad (58)$$
$$a_{21} = a_{12}, \quad a_{22} = \|z^{(k)}\|^2 + \|q^{(k)}\|^2, \quad b_2 = r_1^{(k)} \cdot z^{(k)} + r_2^{(k)} \cdot q^{(k)}, \qquad (59)$$
in which
$$v^{(k)} = r_1^{(k)} - T B r_2^{(k)}, \quad z^{(k)} = T C r_1^{(k)}, \quad w^{(k)} = r_2^{(k)} + T B r_1^{(k)}, \quad q^{(k)} = -T B r_1^{(k)}. \qquad (60)$$
Proof. 
We can derive
$$A \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} \omega v^{(k)} + \eta z^{(k)} \\ \omega w^{(k)} + \eta q^{(k)} \end{bmatrix}, \qquad (61)$$
where $v^{(k)}$, $z^{(k)}$, $w^{(k)}$ and $q^{(k)}$ are given in Equation (60).
By Equation (61), we have
$$r^{(k)} \cdot (A u^{(k)}) = \omega [r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)}] + \eta [r_1^{(k)} \cdot z^{(k)} + r_2^{(k)} \cdot q^{(k)}], \qquad (62)$$
$$\|A u^{(k)}\|^2 = \omega^2 [\|v^{(k)}\|^2 + \|w^{(k)}\|^2] + \eta^2 [\|z^{(k)}\|^2 + \|q^{(k)}\|^2] + 2\omega\eta [v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}]. \qquad (63)$$
Let
$$f = \|A u^{(k)}\|^2 - 2 r^{(k)} \cdot (A u^{(k)}) = \omega^2 [\|v^{(k)}\|^2 + \|w^{(k)}\|^2] + \eta^2 [\|z^{(k)}\|^2 + \|q^{(k)}\|^2] + 2\omega\eta [v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}] - 2\omega [r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)}] - 2\eta [r_1^{(k)} \cdot z^{(k)} + r_2^{(k)} \cdot q^{(k)}]. \qquad (64)$$
Inserting it into Equation (47) yields
$$a_{11} \omega + a_{12} \eta = b_1, \quad a_{21} \omega + a_{22} \eta = b_2, \qquad (65)$$
where the coefficients are given in Equations (58) and (59). Solving ω and η from Equation (65) renders Equation (57). □
As explored in Corollary 2, the minimized value of $f = -F$, obtained from Equation (46), is $-\|A u^{(k)}\|^2$. Therefore, when the values of the parameters ω and η are optimized, they lead to
$$\|A u^{(k)}\|^2 - 2 r^{(k)} \cdot (A u^{(k)}) = -\|A u^{(k)}\|^2, \qquad (66)$$
which is just the orthogonality condition in Equation (32). In this regard, the iterative scheme based on the minimality conditions in Equation (47) is also orthogonal, so the iterative scheme is absolutely convergent according to Theorem 1.
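As an illustration, Theorem 2 translates directly into a few dot products and a 2 × 2 solve. The following NumPy sketch is ours (the paper's experiments use Fortran 77); it assumes $B = W^{-1}$ and $C = B T B$ have been precomputed:

```python
import numpy as np

def theorem2_params(r1, r2, T, B, C):
    """Optimal (omega, eta) of Theorem 2 via the normal equations (65).
    B = W^{-1}, C = B T B; the auxiliary vectors follow Eq. (60)."""
    v = r1 - T @ (B @ r2)
    z = T @ (C @ r1)
    w = r2 + T @ (B @ r1)
    q = -(T @ (B @ r1))
    a11 = v @ v + w @ w
    a12 = a21 = v @ z + w @ q
    a22 = z @ z + q @ q
    b1 = r1 @ v + r2 @ w
    b2 = r1 @ z + r2 @ q
    det = a11 * a22 - a12 * a21    # assumed nonzero at a regular step
    omega = (a22 * b1 - a12 * b2) / det
    eta = (a11 * b2 - a21 * b1) / det
    return omega, eta
```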

4. Optimal Splitting Iterative Schemes for Transformed Linear System

According to the works in [22,35], we adopt the same transformed linear system. Let
$$P = \begin{bmatrix} I_n & I_n \\ 0 & I_n \end{bmatrix}. \qquad (67)$$
Let
$$y = P \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}, \quad \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = P h = P \begin{bmatrix} f \\ g \end{bmatrix}, \qquad (68)$$
where P is a left preconditioner of system (7). When Equation (7) is left-multiplied by P and y is replaced by Px, we arrive at a transformed two-block linear system:
$$\begin{bmatrix} D & 2W \\ T & D \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}, \qquad (69)$$
where D = W + T. To differentiate Equation (69) from Equation (7), the coefficient matrix is denoted by $\tilde{A}$.
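Forming the transformed system costs only matrix and vector additions; a minimal sketch (ours) is:

```python
import numpy as np

def transform_system(W, T, f, g):
    """Blocks of the transformed system (69): D = W + T and, from P in
    Eq. (67), b1 = f + g, b2 = g; Atilde = [[D, 2W], [T, D]]."""
    D = W + T
    return D, f + g, g.copy()
```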

4.1. Single-Parameter Splitting Iterative Schemes

4.1.1. First Single-Parameter Splitting Iterative Scheme

According to [22], the splitting of $\tilde{A}$ is given as follows:
$$\tilde{A} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix} = M - N = \begin{bmatrix} D & 0 \\ T & \eta D \end{bmatrix} - \begin{bmatrix} 0 & -2W \\ 0 & (\eta - 1) D \end{bmatrix}. \qquad (70)$$
Now we apply Lemma 1 to Equation (69). The iterations of $x_1$ and $x_2$ read as
$$x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}, \qquad (71)$$
$$x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}, \qquad (72)$$
$$M \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} D & 0 \\ T & \eta D \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} r_1^{(k)} \\ r_2^{(k)} \end{bmatrix}, \qquad (73)$$
where
$$r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}, \qquad (74)$$
$$r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}. \qquad (75)$$
Because of
$$M^{-1} = \begin{bmatrix} B & 0 \\ -\tfrac{1}{\eta} C & \tfrac{1}{\eta} B \end{bmatrix}, \qquad (76)$$
where $B = D^{-1}$ and $C = B T B$, $u_1^{(k)}$ and $u_2^{(k)}$ are obtained from Equation (73) as follows:
$$u_1^{(k)} = B r_1^{(k)}, \qquad (77)$$
$$u_2^{(k)} = \frac{1}{\eta} B r_2^{(k)} - \frac{1}{\eta} C r_1^{(k)}. \qquad (78)$$
Through some operations, we can derive
$$\tilde{A} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} r_1^{(k)} + \tfrac{2}{\eta} v^{(k)} \\ w^{(k)} + \tfrac{1}{\eta} z^{(k)} \end{bmatrix}, \qquad (79)$$
where
$$v^{(k)} = W B r_2^{(k)} - W C r_1^{(k)}, \quad w^{(k)} = T B r_1^{(k)}, \quad z^{(k)} = r_2^{(k)} - w^{(k)}. \qquad (80)$$
Theorem 3.
If the splitting iterative scheme in Equations (71)–(73) is used to solve Equation (5), the optimal value of η satisfying the orthogonality condition (32) is determined by
$$a_0 \eta^2 + b_0 \eta + c_0 = 0, \qquad (81)$$
where
$$a_0 = \|w^{(k)}\|^2 - r_2^{(k)} \cdot w^{(k)}, \qquad (82)$$
$$b_0 = 2 r_1^{(k)} \cdot v^{(k)} + (2 w^{(k)} - r_2^{(k)}) \cdot z^{(k)}, \qquad (83)$$
$$c_0 = 4 \|v^{(k)}\|^2 + \|z^{(k)}\|^2. \qquad (84)$$
If $a_0 = 0$, η is given by
$$\eta = -\frac{c_0}{b_0}. \qquad (85)$$
If $a_0 \neq 0$, η is given by
$$\eta = \frac{-b_0 + \sqrt{b_0^2 - 4 a_0 c_0}}{2 a_0}. \qquad (86)$$
Proof. 
We take Equation (32) in Theorem 1 as the basis to derive the optimal value of η. By Equation (79), we have
$$r^{(k)} \cdot (\tilde{A} u^{(k)}) = \|r_1^{(k)}\|^2 + \frac{2}{\eta} r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)} + \frac{1}{\eta} r_2^{(k)} \cdot z^{(k)}, \qquad (87)$$
$$\|\tilde{A} u^{(k)}\|^2 = \|r_1^{(k)}\|^2 + \frac{4}{\eta} r_1^{(k)} \cdot v^{(k)} + \frac{4}{\eta^2} \|v^{(k)}\|^2 + \|w^{(k)}\|^2 + \frac{2}{\eta} w^{(k)} \cdot z^{(k)} + \frac{1}{\eta^2} \|z^{(k)}\|^2. \qquad (88)$$
By inserting them into Equation (32), we can derive Equation (81), which leads to Equations (85) and (86). □
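In code, Theorem 3 amounts to three matrix-vector products followed by a quadratic solve; a sketch of ours (assuming the discriminant is nonnegative, as the theorem presumes) is:

```python
import numpy as np

def theorem3_eta(r1, r2, W, T, B, C):
    """eta from the quadratic (81); B = D^{-1}, C = B T B, vectors per (80).
    Uses the branch (85) when a0 = 0 and the root (86) otherwise."""
    v = W @ (B @ r2) - W @ (C @ r1)
    w = T @ (B @ r1)
    z = r2 - w
    a0 = w @ w - r2 @ w
    b0 = 2 * (r1 @ v) + (2 * w - r2) @ z
    c0 = 4 * (v @ v) + z @ z
    if abs(a0) < 1e-14:            # degenerate case, Eq. (85)
        return -c0 / b0
    return (-b0 + np.sqrt(b0**2 - 4 * a0 * c0)) / (2 * a0)
```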

4.1.2. Second Single-Parameter Splitting Iterative Scheme

We address a special SOR-like single-parameter splitting iterative scheme [31] with
$$M = \begin{bmatrix} \tfrac{1}{\omega} D & 0 \\ T & \tfrac{1}{\omega} D \end{bmatrix}, \quad N = \begin{bmatrix} (\tfrac{1}{\omega} - 1) D & -2W \\ 0 & (\tfrac{1}{\omega} - 1) D \end{bmatrix}. \qquad (89)$$
Theorem 4.
For the single-parameter splitting iterative scheme for Equation (5) with Equation (89) for M, the optimal value of ω satisfying the orthogonality condition (32) is determined by
$$a \omega^3 + b \omega^2 + c \omega + d = 0, \qquad (90)$$
where
$$a = \|z^{(k)}\|^2 + \|q^{(k)}\|^2, \quad b = 2 v^{(k)} \cdot z^{(k)} + 2 w^{(k)} \cdot q^{(k)}, \quad c = \|v^{(k)}\|^2 + \|w^{(k)}\|^2 - r_1^{(k)} \cdot z^{(k)} - r_2^{(k)} \cdot q^{(k)}, \quad d = -r_1^{(k)} \cdot v^{(k)} - r_2^{(k)} \cdot w^{(k)}, \qquad (91)$$
in which
$$v^{(k)} = r_1^{(k)} + 2 W B r_2^{(k)}, \quad z^{(k)} = -2 W C r_1^{(k)}, \quad w^{(k)} = r_2^{(k)} + T B r_1^{(k)}, \quad q^{(k)} = -T B r_1^{(k)}. \qquad (92)$$
Proof. 
By using $M u^{(k)} = r^{(k)}$, $u_1^{(k)}$ and $u_2^{(k)}$ are obtained as follows:
$$u_1^{(k)} = \omega B r_1^{(k)}, \quad u_2^{(k)} = \omega B r_2^{(k)} - \omega^2 C r_1^{(k)}. \qquad (93)$$
We can derive
$$\tilde{A} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} \omega v^{(k)} + \omega^2 z^{(k)} \\ \omega w^{(k)} + \omega^2 q^{(k)} \end{bmatrix}, \qquad (94)$$
where $v^{(k)}$, $z^{(k)}$, $w^{(k)}$ and $q^{(k)}$ are given in Equation (92). □
With Equation (94), we have
$$r^{(k)} \cdot (\tilde{A} u^{(k)}) = \omega r_1^{(k)} \cdot v^{(k)} + \omega^2 r_1^{(k)} \cdot z^{(k)} + \omega r_2^{(k)} \cdot w^{(k)} + \omega^2 r_2^{(k)} \cdot q^{(k)}, \qquad (95)$$
$$\|\tilde{A} u^{(k)}\|^2 = \omega^2 \|v^{(k)}\|^2 + \omega^4 \|z^{(k)}\|^2 + 2 \omega^3 v^{(k)} \cdot z^{(k)} + \omega^2 \|w^{(k)}\|^2 + \omega^4 \|q^{(k)}\|^2 + 2 \omega^3 w^{(k)} \cdot q^{(k)}. \qquad (96)$$
Inserting them into Equation (32), we can derive
$$\omega^3 [\|z^{(k)}\|^2 + \|q^{(k)}\|^2] + 2 \omega^2 [v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}] + \omega [\|v^{(k)}\|^2 + \|w^{(k)}\|^2 - r_1^{(k)} \cdot z^{(k)} - r_2^{(k)} \cdot q^{(k)}] - r_1^{(k)} \cdot v^{(k)} - r_2^{(k)} \cdot w^{(k)} = 0, \qquad (97)$$
which leads to Equation (90).
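The cubic (90) can be solved by any root finder. In the sketch below (ours), numpy.roots returns all roots and we keep the real one that minimizes f of Equation (46); this root-selection rule is our assumption, as the theorem does not single one out:

```python
import numpy as np

def theorem4_omega(r1, r2, W, T, B, C):
    """omega from the cubic (90) of Theorem 4; B = D^{-1}, C = B T B,
    auxiliary vectors per Eq. (92)."""
    v = r1 + 2 * (W @ (B @ r2))
    z = -2 * (W @ (C @ r1))
    w = r2 + T @ (B @ r1)
    q = -(T @ (B @ r1))
    a = z @ z + q @ q
    b = 2 * (v @ z + w @ q)
    c = v @ v + w @ w - r1 @ z - r2 @ q
    d = -(r1 @ v) - r2 @ w
    roots = np.roots([a, b, c, d])
    real = roots[np.abs(roots.imag) < 1e-10].real
    def f(om):  # f = ||A~u||^2 - 2 r.(A~u), cf. Eq. (46)
        rAu = om * (r1 @ v + r2 @ w) + om**2 * (r1 @ z + r2 @ q)
        Au2 = om**2 * (v @ v + w @ w) + 2 * om**3 * (v @ z + w @ q) \
              + om**4 * (z @ z + q @ q)
        return Au2 - 2 * rAu
    return min(real, key=f)     # real root with the largest residual drop
```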

4.2. Two-Parameter Splitting Iterative Schemes

4.2.1. First Two-Parameter Splitting Iterative Scheme

We consider
$$M = \begin{bmatrix} D & 0 \\ \tfrac{\eta}{\alpha} T & \tfrac{1}{\alpha} D \end{bmatrix}, \quad N = \begin{bmatrix} 0 & -2W \\ (\tfrac{\eta}{\alpha} - 1) T & (\tfrac{1}{\alpha} - 1) D \end{bmatrix}, \qquad (98)$$
where α and η are parameters.
We have
$$M^{-1} = \begin{bmatrix} B & 0 \\ -\eta C & \alpha B \end{bmatrix}; \qquad (99)$$
hence, with $u^{(k)} = M^{-1} r^{(k)}$, $u_1^{(k)}$ and $u_2^{(k)}$ are obtained as follows:
$$u_1^{(k)} = B r_1^{(k)}, \qquad (100)$$
$$u_2^{(k)} = \alpha B r_2^{(k)} - \eta C r_1^{(k)}. \qquad (101)$$
We can derive
$$\tilde{A} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} r_1^{(k)} + \alpha v^{(k)} - \eta z^{(k)} \\ \alpha r_2^{(k)} + (1 - \eta) w^{(k)} \end{bmatrix}, \qquad (102)$$
where
$$v^{(k)} = 2 W B r_2^{(k)}, \quad z^{(k)} = 2 W C r_1^{(k)}, \quad w^{(k)} = T B r_1^{(k)}. \qquad (103)$$
Theorem 5.
For the two-parameter splitting iterative scheme (26) with Equation (98) for M, when η is given, the sufficient condition for Equation (32) is
$$\alpha = \frac{-b_0 + \sqrt{b_0^2 - 4 a_0 c_0}}{2 a_0}, \qquad (104)$$
where
$$a_0 = \|v^{(k)}\|^2 + \|r_2^{(k)}\|^2, \quad b_0 = r_1^{(k)} \cdot v^{(k)} - 2 \eta v^{(k)} \cdot z^{(k)} - \|r_2^{(k)}\|^2 + 2 (1 - \eta) r_2^{(k)} \cdot w^{(k)}, \quad c_0 = \eta^2 \|z^{(k)}\|^2 - \eta r_1^{(k)} \cdot z^{(k)} + (1 - \eta)^2 \|w^{(k)}\|^2 + (\eta - 1) r_2^{(k)} \cdot w^{(k)}, \qquad (105)$$
and η is given in a suitable range, such that $b_0^2 - 4 a_0 c_0 \geq 0$.
Proof. 
With Equation (102), we have
$$r^{(k)} \cdot (\tilde{A} u^{(k)}) = \|r_1^{(k)}\|^2 + \alpha r_1^{(k)} \cdot v^{(k)} - \eta r_1^{(k)} \cdot z^{(k)} + \alpha \|r_2^{(k)}\|^2 + (1 - \eta) r_2^{(k)} \cdot w^{(k)}, \qquad (106)$$
$$\|\tilde{A} u^{(k)}\|^2 = \|r_1^{(k)}\|^2 + \alpha^2 \|v^{(k)}\|^2 + \eta^2 \|z^{(k)}\|^2 + 2 \alpha r_1^{(k)} \cdot v^{(k)} - 2 \eta r_1^{(k)} \cdot z^{(k)} - 2 \alpha \eta v^{(k)} \cdot z^{(k)} + \alpha^2 \|r_2^{(k)}\|^2 + (1 - \eta)^2 \|w^{(k)}\|^2 + 2 \alpha (1 - \eta) r_2^{(k)} \cdot w^{(k)}. \qquad (107)$$
By inserting them into Equation (32), we can derive
$$a_0 \alpha^2 + b_0 \alpha + c_0 = 0, \qquad (108)$$
where a 0 , b 0 and c 0 are given in Equation (105). If b 0 2 4 a 0 c 0 0 can be satisfied for the given value of η , α is determined by Equation (104). □
The iterative scheme in Theorem 5 has a drawback: it requires us to specify the value of η in advance. To remove this drawback, we prove the following result.
Theorem 6.
For the two-parameter splitting iterative scheme (26) with Equation (98) for M, the sufficient conditions to satisfy Equation (32) are
$$\eta = \frac{-e_0 + \sqrt{e_0^2 - 4 d_0 f_0}}{2 d_0}, \qquad (109)$$
$$\alpha = -\frac{g_0}{a_0}, \qquad (110)$$
where
$$a_0 = \|v^{(k)}\|^2 + \|r_2^{(k)}\|^2, \quad d_0 = \|z^{(k)}\|^2 + \|w^{(k)}\|^2, \quad e_0 = r_2^{(k)} \cdot w^{(k)} - r_1^{(k)} \cdot z^{(k)} - 2 \|w^{(k)}\|^2, \quad f_0 = \|w^{(k)}\|^2 - r_2^{(k)} \cdot w^{(k)}, \quad g_0 = r_1^{(k)} \cdot v^{(k)} - 2 \eta v^{(k)} \cdot z^{(k)} - \|r_2^{(k)}\|^2 + 2 (1 - \eta) r_2^{(k)} \cdot w^{(k)}. \qquad (111)$$
Proof. 
By setting $c_0 = 0$ in Equation (105), we can derive
$$d_0 \eta^2 + e_0 \eta + f_0 = 0, \qquad (112)$$
where $d_0$, $e_0$ and $f_0$ are given in Equation (111). By means of Equation (112), η is derived in Equation (109).
Owing to $c_0 = 0$, Equation (108) reduces to
$$a_0 \alpha^2 + g_0 \alpha = 0, \qquad (113)$$
where g 0 is given in Equation (111). Hence, we can derive α in Equation (110). □
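Theorem 6 thus removes the free parameter of Theorem 5: η is fixed by zeroing $c_0$, and α then follows linearly. A sketch of ours (assuming $e_0^2 - 4 d_0 f_0 \geq 0$):

```python
import numpy as np

def theorem6_params(r1, r2, W, T, B, C):
    """(eta, alpha) of Theorem 6: eta from the quadratic (112), then
    alpha = -g0/a0 from (113). B = D^{-1}, C = B T B, vectors per (103)."""
    v = 2 * (W @ (B @ r2))
    z = 2 * (W @ (C @ r1))
    w = T @ (B @ r1)
    d0 = z @ z + w @ w
    e0 = r2 @ w - r1 @ z - 2 * (w @ w)
    f0 = w @ w - r2 @ w
    eta = (-e0 + np.sqrt(e0**2 - 4 * d0 * f0)) / (2 * d0)
    a0 = v @ v + r2 @ r2
    g0 = r1 @ v - 2 * eta * (v @ z) - r2 @ r2 + 2 * (1 - eta) * (r2 @ w)
    return eta, -g0 / a0
```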

4.2.2. Second Two-Parameter Splitting Iterative Scheme

In this section, we extend the generalized AOR-like method in Section 3 to the transformed linear system (69). Let
$$M = \begin{bmatrix} \tfrac{1}{\omega} D & 0 \\ \tfrac{\eta}{\omega^2} T & \tfrac{1}{\omega} D \end{bmatrix}, \quad N = \begin{bmatrix} (\tfrac{1}{\omega} - 1) D & -2W \\ (\tfrac{\eta}{\omega^2} - 1) T & (\tfrac{1}{\omega} - 1) D \end{bmatrix}. \qquad (114)$$
Because of
$$M^{-1} = \begin{bmatrix} \omega B & 0 \\ -\eta C & \omega B \end{bmatrix}, \qquad (115)$$
where $B = D^{-1}$ and $C = B T B$, $u_1^{(k)}$ and $u_2^{(k)}$ are obtained from Equation (26) by
$$u_1^{(k)} = \omega B r_1^{(k)}, \qquad (116)$$
$$u_2^{(k)} = \omega B r_2^{(k)} - \eta C r_1^{(k)}. \qquad (117)$$
Theorem 7.
For the two-parameter splitting iterative scheme (26) with Equation (114) for M, the optimal values of ω and η satisfying the minimality conditions in Equation (47) are
$$\omega = \frac{a_{22} b_1 - a_{12} b_2}{a_{11} a_{22} - a_{12} a_{21}}, \quad \eta = \frac{a_{11} b_2 - a_{21} b_1}{a_{11} a_{22} - a_{12} a_{21}}, \qquad (118)$$
where
$$a_{11} = \|v^{(k)}\|^2 + \|w^{(k)}\|^2, \quad a_{12} = v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}, \quad b_1 = r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)}, \qquad (119)$$
$$a_{21} = a_{12}, \quad a_{22} = \|z^{(k)}\|^2 + \|q^{(k)}\|^2, \quad b_2 = r_1^{(k)} \cdot z^{(k)} + r_2^{(k)} \cdot q^{(k)}, \qquad (120)$$
in which
$$v^{(k)} = r_1^{(k)} + 2 W B r_2^{(k)}, \quad z^{(k)} = -2 W C r_1^{(k)}, \quad w^{(k)} = r_2^{(k)} + T B r_1^{(k)}, \quad q^{(k)} = -T B r_1^{(k)}. \qquad (121)$$
Proof. 
We can derive
$$\tilde{A} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} D & 2W \\ T & D \end{bmatrix} \begin{bmatrix} u_1^{(k)} \\ u_2^{(k)} \end{bmatrix} = \begin{bmatrix} \omega v^{(k)} + \eta z^{(k)} \\ \omega w^{(k)} + \eta q^{(k)} \end{bmatrix}, \qquad (122)$$
where $v^{(k)}$, $z^{(k)}$, $w^{(k)}$ and $q^{(k)}$ are given in Equation (121).
As with Equation (61), we have
$$r^{(k)} \cdot (\tilde{A} u^{(k)}) = \omega [r_1^{(k)} \cdot v^{(k)} + r_2^{(k)} \cdot w^{(k)}] + \eta [r_1^{(k)} \cdot z^{(k)} + r_2^{(k)} \cdot q^{(k)}], \qquad (123)$$
$$\|\tilde{A} u^{(k)}\|^2 = \omega^2 [\|v^{(k)}\|^2 + \|w^{(k)}\|^2] + \eta^2 [\|z^{(k)}\|^2 + \|q^{(k)}\|^2] + 2 \omega \eta [v^{(k)} \cdot z^{(k)} + w^{(k)} \cdot q^{(k)}]. \qquad (124)$$
Inserting $f = \|\tilde{A} u^{(k)}\|^2 - 2 r^{(k)} \cdot (\tilde{A} u^{(k)})$ into Equation (47) yields
$$a_{11} \omega + a_{12} \eta = b_1, \quad a_{21} \omega + a_{22} \eta = b_2, \qquad (125)$$
where the coefficients are given by Equations (119) and (120). Solving ω and η from Equation (125) renders Equation (118). □

5. Pseudo-Codes

To distinguish the algorithms, we name the splitting iterative method based on Theorem 2 as Algorithm 1, and likewise Algorithm 2 (Theorem 3), Algorithm 3 (Theorem 4), Algorithm 4 (Theorem 5), Algorithm 5 (Theorem 6), and Algorithm 6 (Theorem 7).
Algorithm 1. Splitting iterative scheme via Theorem 2
1: Give $W$, $T$, $f$, $g$, $y_1^{(0)}$, $y_2^{(0)}$, and $\varepsilon$
2: Compute $B = W^{-1}$, $C = B T B$
3: Do $k = 0, 1, \ldots$
4: $r_1^{(k)} = f - W y_1^{(k)} + T y_2^{(k)}$
5: $r_2^{(k)} = g - T y_1^{(k)} - W y_2^{(k)}$
6: Compute $\omega^{(k)}$ and $\eta^{(k)}$ via Equations (57)–(59)
7: $u_1^{(k)} = \omega^{(k)} B r_1^{(k)}$
8: $u_2^{(k)} = \omega^{(k)} B r_2^{(k)} - \eta^{(k)} C r_1^{(k)}$
9: $y_1^{(k+1)} = y_1^{(k)} + u_1^{(k)}$
10: $y_2^{(k+1)} = y_2^{(k)} + u_2^{(k)}$
11: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
12: Otherwise, go to 3
The computational kernel of Algorithm 1 encompasses the computation of $\omega^{(k)}$ and $\eta^{(k)}$ via Equations (57)–(59), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of n-vectors with $10n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $W$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 10n$ scalar multiplications.
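The pseudo-code translates line-for-line into NumPy. The sketch below is ours (the paper's experiments use Fortran 77) and reuses the theorem2_params helper sketched in Section 3:

```python
import numpy as np

def algorithm1(W, T, f, g, y1, y2, eps=1e-8, maxit=200):
    """Sketch of Algorithm 1 (Theorem 2). B = W^{-1} and C = B T B are
    formed once, matching the one-time cost noted above."""
    B = np.linalg.inv(W)
    C = B @ T @ B
    for k in range(maxit):
        r1 = f - W @ y1 + T @ y2               # steps 4-5: residual blocks
        r2 = g - T @ y1 - W @ y2
        if np.sqrt(r1 @ r1 + r2 @ r2) < eps:   # step 11: convergence test
            break
        omega, eta = theorem2_params(r1, r2, T, B, C)   # step 6
        u1 = omega * (B @ r1)                  # steps 7-8: descent blocks
        u2 = omega * (B @ r2) - eta * (C @ r1)
        y1 = y1 + u1                           # steps 9-10: update
        y2 = y2 + u2
    return y1, y2, k
```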
Algorithm 2. Splitting iterative scheme via Theorem 3
1: Give $W$, $T$, $f$, $g$, $x_1^{(0)} = y_1^{(0)} - y_2^{(0)}$, $x_2^{(0)} = y_2^{(0)}$, and $\varepsilon$
2: Compute $D = W + T$, $B = D^{-1}$, $C = B T B$
3: Compute $b_1 = f + g$, $b_2 = g$
4: Do $k = 0, 1, \ldots$
5: $r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}$
6: $r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}$
7: Compute $\eta^{(k)}$ via Equations (82)–(86)
8: $u_1^{(k)} = B r_1^{(k)}$
9: $u_2^{(k)} = \frac{1}{\eta^{(k)}} B r_2^{(k)} - \frac{1}{\eta^{(k)}} C r_1^{(k)}$
10: $x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}$
11: $x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}$
12: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 2 encompasses the computation of $\eta^{(k)}$ via Equations (82)–(86), which requires three matrix-vector products with $3n^2$ scalar multiplications and seven inner products of n-vectors with $7n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $D$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 7n$ scalar multiplications.
Algorithm 3. Splitting iterative scheme via Theorem 4
1: Give $W$, $T$, $f$, $g$, $x_1^{(0)} = y_1^{(0)} - y_2^{(0)}$, $x_2^{(0)} = y_2^{(0)}$, and $\varepsilon$
2: Compute $D = W + T$, $B = D^{-1}$, $C = B T B$
3: Compute $b_1 = f + g$, $b_2 = g$
4: Do $k = 0, 1, \ldots$
5: $r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}$
6: $r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}$
7: Compute $\omega^{(k)}$ via Equations (90) and (91)
8: $u_1^{(k)} = \omega^{(k)} B r_1^{(k)}$
9: $u_2^{(k)} = \omega^{(k)} B r_2^{(k)} - (\omega^{(k)})^2 C r_1^{(k)}$
10: $x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}$
11: $x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}$
12: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 3 encompasses the computation of $\omega^{(k)}$ via Equations (90) and (91), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of n-vectors with $10n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $D$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 10n$ scalar multiplications.
Algorithm 4. Splitting iterative scheme via Theorem 5
1: Give $W$, $T$, $f$, $g$, $x_1^{(0)} = y_1^{(0)} - y_2^{(0)}$, $x_2^{(0)} = y_2^{(0)}$, η, and $\varepsilon$
2: Compute $D = W + T$, $B = D^{-1}$, $C = B T B$
3: Compute $b_1 = f + g$, $b_2 = g$
4: Do $k = 0, 1, \ldots$
5: $r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}$
6: $r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}$
7: Compute $\alpha^{(k)}$ via Equations (104) and (105)
8: $u_1^{(k)} = B r_1^{(k)}$
9: $u_2^{(k)} = \alpha^{(k)} B r_2^{(k)} - \eta C r_1^{(k)}$
10: $x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}$
11: $x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}$
12: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 4 encompasses the computation of $\alpha^{(k)}$ via Equations (104) and (105), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of n-vectors with $10n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $D$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 10n$ scalar multiplications.
Algorithm 5. Splitting iterative scheme via Theorem 6
1: Give $W$, $T$, $f$, $g$, $x_1^{(0)} = y_1^{(0)} - y_2^{(0)}$, $x_2^{(0)} = y_2^{(0)}$, and $\varepsilon$
2: Compute $D = W + T$, $B = D^{-1}$, $C = B T B$
3: Compute $b_1 = f + g$, $b_2 = g$
4: Do $k = 0, 1, \ldots$
5: $r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}$
6: $r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}$
7: Compute $\alpha^{(k)}$ and $\eta^{(k)}$ via Equations (109)–(111)
8: $u_1^{(k)} = B r_1^{(k)}$
9: $u_2^{(k)} = \alpha^{(k)} B r_2^{(k)} - \eta^{(k)} C r_1^{(k)}$
10: $x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}$
11: $x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}$
12: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 5 encompasses the computation of $\alpha^{(k)}$ and $\eta^{(k)}$ via Equations (109)–(111), which requires three matrix-vector products with $3n^2$ scalar multiplications and thirteen inner products of n-vectors with $13n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $D$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 13n$ scalar multiplications.
Algorithm 6. Splitting iterative scheme via Theorem 7
1: Give $W$, $T$, $f$, $g$, $x_1^{(0)} = y_1^{(0)} - y_2^{(0)}$, $x_2^{(0)} = y_2^{(0)}$, and $\varepsilon$
2: Compute $D = W + T$, $B = D^{-1}$, $C = B T B$
3: Compute $b_1 = f + g$, $b_2 = g$
4: Do $k = 0, 1, \ldots$
5: $r_1^{(k)} = b_1 - D x_1^{(k)} - 2W x_2^{(k)}$
6: $r_2^{(k)} = b_2 - T x_1^{(k)} - D x_2^{(k)}$
7: Compute $\omega^{(k)}$ and $\eta^{(k)}$ via Equations (118)–(120)
8: $u_1^{(k)} = \omega^{(k)} B r_1^{(k)}$
9: $u_2^{(k)} = \omega^{(k)} B r_2^{(k)} - \eta^{(k)} C r_1^{(k)}$
10: $x_1^{(k+1)} = x_1^{(k)} + u_1^{(k)}$
11: $x_2^{(k+1)} = x_2^{(k)} + u_2^{(k)}$
12: If $\sqrt{\|r_1^{(k+1)}\|^2 + \|r_2^{(k+1)}\|^2} < \varepsilon$, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 6 encompasses the computation of $\omega^{(k)}$ and $\eta^{(k)}$ via Equations (118)–(120), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of n-vectors with $10n$ scalar multiplications. $B$ is an $n \times n$ matrix obtained by inverting the $n \times n$ matrix $D$; $C$ requires a product of three matrices with $n^3$ scalar multiplications; however, these are computed only once. $r_1^{(k)}$ and $r_2^{(k)}$ require four matrix-vector products with $4n^2$ scalar multiplications; $u_1^{(k)}$ and $u_2^{(k)}$ require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is low, with $10n^2 + 10n$ scalar multiplications.
Except for Algorithms 1 and 6, whose two parameters are determined by the minimality conditions in Equation (47), the parameters in Algorithms 2–5 are determined by the orthogonality condition (32).

6. Examples of Complex Linear System

The complex Helmholtz equation, Equation (1), is solved in this section. To demonstrate the efficiency and accuracy of the proposed iterative algorithms, several examples are examined. All numerical computations are carried out in Fortran 77 under Microsoft Windows 10 Developer Studio on an Intel Core i7-3770 (2.80 GHz) with 8 GB of memory. The precision is $10^{-16}$.
When we compare the computed results with those of other iterative methods in the literature, we take the same convergence criterion; the resulting complex linear systems are the same, and the algorithms use the same precision and the same discretization scheme.

6.1. Example 1

In [21,36], a complex linear system (5) was considered with
$$W = K + \frac{3 - \sqrt{3}}{h} I_n, \quad T = K + \frac{3 + \sqrt{3}}{h} I_n, \qquad (126)$$
$$f_j = \frac{j}{h (j + 1)^2}, \quad g_j = -\frac{j}{h (j + 1)^2}, \quad j = 1, \ldots, n. \qquad (127)$$
Under the convergence criterion with ε = 10 8 , the number of steps (NS) and CPU times in seconds obtained by different algorithms are compared in Table 1. For Algorithm 4, we take η = 1.9 .
The number of steps (NS) is compared in Table 2, under $\|r\|/\|b\| \leq \varepsilon = 10^{-6}$ and $n_0 = 32$, where $\|b\| = \sqrt{\|b_1\|^2 + \|b_2\|^2}$, $b_1 = f + g = 0$ and $b_2 = g$. For Algorithm 4, we take η = 1.8; its CPU time is 25.24 s. Using the data reported in [21], we compare the NS obtained by different methods in Table 2. HSS was developed in [36], MHSS in [37], and SBTS in [21]; the GSOR result was chosen according to Table 1 in [19].
To check the orthogonality condition (32), we compute $\xi^{(k)}$ from Equation (37). If $\xi^{(k)} = 1$ at each iteration, the orthogonality condition (32) is preserved. For Algorithm 4, we compute $\alpha^{(k)}$ via the orthogonality condition (32) for each specified value of η; as shown in Table 3, the computed values are $\xi^{(k)} = 1$ at every iteration step, which indicates that the orthogonality condition (32) is automatically satisfied.
We must emphasize that the values of all parameters are optimized, either by the minimality conditions in Equation (47) or by the orthogonality condition (32). Only in Algorithm 4 is there a free parameter η. In Table 4, under $\|r\|/\|b\| \leq \varepsilon = 10^{-6}$ and $n_0 = 20$, we investigate the influence of η on the NS. The best value is η = 1.8 or 1.9.

6.2. Example 2

Next, we consider [21]
$$-\Delta u(x,y) + (\sigma_1 + i \sigma_2) u(x,y) = p(x,y), \quad 0 < x < 1, \; 0 < y < 1, \qquad (128)$$
where we take $\sigma_1 = 10^3$ and $\sigma_2 = 10^4$. Equation (128) is discretized into a complex linear system (5), where $w = v = 1_n = (1, \ldots, 1)^T$ are taken to be the exact solutions, so that $f = W 1_n - T 1_n$ and $g = W 1_n + T 1_n$.
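A short script (ours, not the authors' Fortran 77 setup; it reuses the assemble_blocks helper sketched in Section 1) that builds this manufactured-solution right-hand side might read:

```python
import numpy as np

# Example 2 setup sketch: exact solution w = v = 1_n, so f and g follow
# from (W + iT)(w + iv) = f + ig as stated in the text.
n0, sigma1, sigma2 = 32, 1.0e3, 1.0e4
W, T = assemble_blocks(n0, sigma1, sigma2)   # helper sketched in Section 1
ones = np.ones(n0 * n0)
f = W @ ones - T @ ones
g = W @ ones + T @ ones
```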
The number of steps (NS), under $\|r\|/\|b\| \leq \varepsilon = 10^{-10}$ and $n_0 = 32$, where $\|b\| = \sqrt{\|b_1\|^2 + \|b_2\|^2}$, $b_1 = f + g$ and $b_2 = g$, is compared in Table 5. For Algorithm 4, we take η = 1.5. Using the data obtained from [22], Table 5 lists the NS for the methods AIBS, IBS, PBS, NBS [35], AGSOR [20], and PMHSS [38].

6.3. Example 3

We first consider a real solution of Equation (1) with
$$u(x,y) = \sin(\kappa \pi x) \cos(\kappa \pi y), \quad 0 < x < 1, \; 0 < y < 1, \qquad (129)$$
where we take $\sigma_1 = -2\kappa^2$, and $\kappa > 0$ is the wave number.
We apply Algorithms 1 and 6 to solve the complex Helmholtz equation with high wave numbers. Table 6 lists the maximum error (ME) and the NS, under $\|r\| \leq \varepsilon = 10^{-10}$ and $n_0 = 19$, where ME is defined by
$$ME = \max_{1 \leq i \leq n_0, \; 1 \leq j \leq n_0} |u_e(x_i, y_j) - u_N(x_i, y_j)|, \qquad (130)$$
in which $u_e(x_i, y_j)$ are the exact solutions computed from Equation (129) at all inner nodal points, while $u_N(x_i, y_j)$ are the numerical solutions computed at all inner nodal points. All CPU times obtained by Algorithms 1 and 6 are smaller than 0.5 s, because the NS is at most a few steps.
Table 7 lists the maximum error (ME) and the NS for different values of $\sigma_2$. All CPU times obtained by Algorithms 1 and 6 are smaller than 0.5 s, because the NS is only one step.

6.4. Example 4

We consider a complex solution of Equation (1):
$$u(x,y) = \sin(\kappa \pi x) \sin(\kappa \pi y) + i (x^2 - x)(y^2 - y), \quad 0 < x < 1, \; 0 < y < 1, \qquad (131)$$
where we take $\sigma_1 = -2\kappa^2$, and $\kappa > 0$ is the wave number.
We apply Algorithms 1 and 6 to solve the complex Helmholtz equation with high wave numbers. Table 8 lists the maximum error (ME), NS and CPU, under $\|r\| \leq \varepsilon = 10^{-10}$ and $n_0 = 19$.
Table 9 lists the maximum error (ME), NS and CPU for different values of σ 2 .
For the case with $\sigma_1 = -800$ and $\sigma_2 = 10$ of this example, Algorithm 1 converges in seven steps. The orthogonality condition (32) automatically holds, as reflected in Table 10 by the values of $\xi^{(k)}$ computed from Equation (37). At each iteration, $\xi^{(k)} = 1$ means that the orthogonality condition (32) is preserved.

6.5. Example 5

Finally, we consider a complex solution of the modified Helmholtz equation:
$$u(x,y) = \cosh(\kappa \pi x) \sinh(\kappa \pi y) + i (x^2 - x)(y^2 - y), \quad 0 < x < 1, \; 0 < y < 1, \qquad (132)$$
where we take $\sigma_1 = 2\kappa^2$, and $\kappa > 0$ is the wave number.
We apply Algorithm 3 to solve this problem with different wave numbers. Table 11 lists the maximum error (ME), NS and CPU, under $\|r\| \leq \varepsilon = 10^{-10}$ and $n_0 = 19$.

7. Conclusions

By using the two-block splitting iterative method to solve the complex Helmholtz equation, the orthogonality condition was formulated to accelerate the convergence. When the two-block splitting iterative method is orthogonal, it must be absolutely convergent. As usual, the complex Helmholtz equation was transformed into a two-block complex symmetric linear system. Six different iterative algorithms were developed, whose parameters were optimized and obtained explicitly using the orthogonality condition and the minimization of the residual norm. Algorithms 1 and 6 were based on the minimality conditions to determine the optimal values of two parameters, while the other algorithms were based on the orthogonality condition to derive the optimal values of the parameters. Algorithm 1 was formulated for the original complex symmetric linear system, while the other algorithms were formulated for the transformed complex symmetric linear system. Even for a high wave number and a large damping constant of the complex Helmholtz equation, the proposed two-block iterative methods together with the optimal values of the parameters can generate highly accurate results within a few iterations. From the practical numerical simulations, we found that Algorithm 1 outperforms Algorithm 6.
Because the values of all parameters were optimized, the iterative algorithms automatically preserved the orthogonality condition, which is the main reason for the fast convergence of Algorithms 1–6 proposed in this paper.

Author Contributions

Conceptualization, C.-S.L. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., C.-W.C. and C.-C.T.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-C.T.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L., C.-W.C. and C.-C.T.; Writing—original draft, C.-S.L. and C.-W.C.; Writing—review & editing, C.-S.L. and C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-C.T.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-S.L. All authors have read and agreed to the published version of the manuscript.

Funding

This study was partially supported by the National Science and Technology Council under grant NSTC 113-2221-E-019-043-MY3, which is gratefully acknowledged.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abrahamsson, L.; Kreiss, H.O. Numerical solution of the coupled mode equations in duct acoustics. J. Comput. Phys. 1994, 111, 1–14. [Google Scholar] [CrossRef]
  2. Mandelis, A. Diffusion-Wave Fields: Mathematical Methods and Green Functions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
  3. Singer, I.; Turkel, E. High-order finite difference methods for the Helmholtz equation. Comput. Meth. Appl. Mech. Eng. 1998, 163, 343–358. [Google Scholar] [CrossRef]
  4. Wu, Z.; Alkhalifah, T. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation. J. Comput. Phys. 2018, 365, 350–361. [Google Scholar] [CrossRef]
  5. Fu, Y. Compact fourth-order finite difference schemes for Helmholtz equation with high wave numbers. J. Comput. Math. 2008, 26, 98–111. [Google Scholar] [CrossRef]
  6. Oberai, A.; Pinsky, P. A multiscale finite element method for the Helmholtz equation. Comput. Meth. Appl. Mech. Eng. 1998, 154, 281–297. [Google Scholar] [CrossRef]
  7. Oberai, A.; Pinsky, P. A residual-based finite element method for the Helmholtz equation. Int. J. Numer. Methods Eng. 2000, 49, 399–419. [Google Scholar] [CrossRef]
  8. Mehdizadeh, O.; Paraschivoiu, M. Investigation of a two-dimensional spectral element method for Helmholtz’s equation. J. Comput. Phys. 2003, 189, 111–129. [Google Scholar] [CrossRef]
  9. Cho, M.H.; Cai, W. A wideband fast multipole method for the two-dimensional complex Helmholtz equation. Comput. Phys. Commun. 2010, 181, 2086–2090. [Google Scholar] [CrossRef]
  10. Axelsson, O.; Karátson, J.; Magoulès, F. Superlinear convergence using block preconditioners for the real system formulation of complex Helmholtz equations. J. Comput. Appl. Math. 2018, 340, 424–431. [Google Scholar] [CrossRef]
  11. Ai, X.; Liao, W.; Wang, X. Optimized parameterized Uzawa methods for solving complex Helmholtz equations. Comput. Math. Appl. 2024, 164, 34–44. [Google Scholar] [CrossRef]
  12. Malinzi, J. A mathematical model for oncolytic virus spread using the telegraph equation. Commun. Nonlinear Sci. Numer. Simul. 2021, 102, 105944. [Google Scholar] [CrossRef]
  13. Benabdelhadi, A.; Chaoui, F.Z.; Ghani, D.; Giri, F. Observer design for collocated-boundary measurements of transmission line governed by telegraph equations with application to fault detection. IFAC-PapersOnLine 2024, 58, 793–798. [Google Scholar] [CrossRef]
  14. Pietrzak, T.; Horzela, A.; Górska, K. The generalized telegraph equation with moving harmonic source: Solvability using the integral decomposition technique and wave aspects. Int. J. Heat Mass Transf. 2024, 225, 125373. [Google Scholar] [CrossRef]
  15. Liu, C.S.; El-Zahar, E.R.; Chang, C.W. Dynamical optimal values of parameters in the SSOR, AOR and SAOR testing using the Poisson linear equations. Mathematics 2023, 11, 3828. [Google Scholar] [CrossRef]
  16. Varga, R.S. Matrix Iterative Analysis; Springer: Berlin, Germany, 2000. [Google Scholar]
  17. Young, D.M. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
  18. Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157. [Google Scholar] [CrossRef]
  19. Salkuyeh, D.K.; Hezari, D.; Edalatpour, V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 2015, 92, 802–815. [Google Scholar] [CrossRef]
  20. Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 2015, 20, 37–52. Available online: https://hrcak.srce.hr/140386 (accessed on 1 July 2015).
  21. Li, X.A.; Zhang, W.H.; Wu, Y.J. On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 2018, 79, 131–137. [Google Scholar] [CrossRef]
  22. Zhu, Y.; Zhang, N.M.; Chao, Z. Two block splitting iteration methods for solving complex symmetric linear systems from complex Helmholtz equation. Mathematics 2024, 12, 1888. [Google Scholar] [CrossRef]
  23. Hezari, D.; Salkuyeh, D.K.; Edalatpour, V. A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algor. 2016, 73, 927–955. [Google Scholar] [CrossRef]
  24. Salkuyeh, D.K.; Siahkolaei, T.S. Two-parameter TSCSP method for solving complex symmetric system of linear equations. Calcolo 2018, 55, 8. [Google Scholar] [CrossRef]
  25. Siahkolaei, T.S.; Salkuyeh, D.K. A new double-step method for solving complex Helmholtz equation. Hacet. J. Math. Stat. 2020, 49, 1245–1260. [Google Scholar] [CrossRef]
  26. Darvishi, M.T.; Khosro-Aghdam, R. Determination of the optimal value of relaxation parameter in symmetric SOR method for rectangular coefficient matrices. Appl. Math. Comput. 2006, 181, 1018–1025. [Google Scholar] [CrossRef]
  27. Darvishi, M.T.; Hessari, P. Symmetric SOR method for augmented systems. Appl. Math. Comput. 2006, 183, 409–415. [Google Scholar] [CrossRef]
  28. Zhang, G.F.; Lu, Q.H. On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2008, 219, 51–58. [Google Scholar] [CrossRef]
  29. Darvishi, M.T.; Hessari, P. A modified symmetric successive overrelaxation method for augmented systems. Comput. Math. Appl. 2011, 61, 3128–3135. [Google Scholar] [CrossRef]
  30. Chao, Z.; Zhang, N.; Lu, Y. Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2014, 266, 52–60. [Google Scholar] [CrossRef]
  31. Golub, G.H.; Wu, X.; Yuan, J.Y. SOR-like methods for augmented systems. BIT 2001, 41, 71–85. [Google Scholar] [CrossRef]
  32. Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  33. Yang, A.L. On the convergence of the minimum residual HSS iteration method. Appl. Math. Lett. 2019, 94, 210–216. [Google Scholar] [CrossRef]
  34. Cui, J.; Peng, G.; Lu, Q.; Huang, Z. A class of nonstationary upper and lower triangular splitting iteration methods for ill-posed inverse problems. IAENG Int. J. Comput. Sci. 2020, 47, 118–129. [Google Scholar]
  35. Huang, Z.G. Efficient block splitting iteration methods for solving a class of complex symmetric linear systems. J. Comput. Appl. Math. 2021, 395, 113574. [Google Scholar] [CrossRef]
  36. Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
  37. Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 2011, 56, 297–317. [Google Scholar] [CrossRef]
  38. Bai, Z.Z.; Benzi, M.; Chen, F. Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 2013, 33, 343–369. [Google Scholar] [CrossRef]
Table 1. Example 1: (NS, CPU) obtained by Algorithms 1–6 with different $n_0$, where $N = 2n = 2n_0^2$.

| $n_0$ | 10 | 20 | 25 | 30 | 35 | 40 |
| Algorithm 1 | (22, 0.47) | (25, 1.72) | (26, 6.38) | (25, 17.52) | (26, 41.47) | (26, 125.53) |
| Algorithm 2 | (22, 0.41) | (23, 1.59) | (29, 5.60) | (36, 18.08) | (36, 40.46) | (36, 103.52) |
| Algorithm 3 | (18, 0.34) | (24, 1.59) | (31, 4.89) | (26, 18.36) | (30, 41.32) | (30, 112.41) |
| Algorithm 4 | (16, 0.42) | (15, 1.46) | (16, 5.40) | (16, 16.36) | (16, 39.15) | (16, 107.39) |
| Algorithm 5 | (21, 0.40) | (22, 1.59) | (22, 5.47) | (19, 16.32) | (18, 40.65) | (19, 109.11) |
| Algorithm 6 | (23, 0.41) | (22, 1.72) | (21, 5.73) | (19, 17.21) | (19, 41.97) | (18, 115.17) |
Table 2. Example 1: NS obtained by different methods.

| Method | HSS | MHSS | SBTS | GSOR | Algorithm 4 |
| NS | 65 | 54 | 31 | 22 | 12 |
Table 3. Example 1: $\xi^{(k)}$ obtained by Algorithm 4.

| k | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| $\xi^{(k)}$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Table 4. Example 1: NS obtained by Algorithm 4 with $n_0 = 20$.

| η | 1 | 1.2 | 1.3 | 1.4 | 1.5 | 1.6 | 1.7 | 1.8 | 1.9 | 2 |
| NS | 30 | 28 | 26 | 23 | 19 | 16 | 14 | 13 | 13 | 23 |
Table 5. Example 2: NS obtained by different methods.

| Method | AIBS | IBS | PBS | NBS | AGSOR | PMHSS | Algorithm 4 | Algorithm 6 |
| NS | 13 | 17 | 19 | 26 | 98 | 53 | 20 | 25 |
Table 6. For Example 3, solved by Algorithms 1 and 6 with different values of κ, where $\sigma_1 = -2\kappa^2$ and $\sigma_2 = 20$.

| κ | 10 | 15 | 20 | 25 | 30 |
| ME (Algorithm 1) | $5.90 \times 10^{-13}$ | $4.82 \times 10^{-12}$ | $5.88 \times 10^{-15}$ | $5.87 \times 10^{-12}$ | $2.34 \times 10^{-14}$ |
| NS (Algorithm 1) | 2 | 2 | 1 | 4 | 2 |
| ME (Algorithm 6) | $8.78 \times 10^{-13}$ | $2.73 \times 10^{-13}$ | $5.88 \times 10^{-15}$ | $1.48 \times 10^{-12}$ | $5.82 \times 10^{-14}$ |
| NS (Algorithm 6) | 2 | 2 | 1 | 2 | 2 |
Table 7. For Example 3, solved by Algorithms 1 and 6 with different values of $\sigma_2$, where κ = 20 and $\sigma_1 = -800$.

| $\sigma_2$ | 10 | 20 | 30 | 50 | 80 | 100 |
| ME (Algorithm 1) | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-10}$ | $5.88 \times 10^{-11}$ | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-15}$ |
| NS (Algorithm 1) | 1 | 1 | 1 | 1 | 1 | 1 |
| ME (Algorithm 6) | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-10}$ | $5.88 \times 10^{-11}$ | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-15}$ | $5.88 \times 10^{-15}$ |
| NS (Algorithm 6) | 1 | 1 | 1 | 1 | 1 | 1 |
Table 8. For Example 4, solved by Algorithms 1 and 6 with different values of κ, where $\sigma_1 = -2\kappa^2$ and $\sigma_2 = 20$.

| κ | 10 | 15 | 20 | 25 | 30 |
| ME (Algorithm 1) | $2.75 \times 10^{-11}$ | $7.70 \times 10^{-11}$ | $3.87 \times 10^{-11}$ | $1.01 \times 10^{-10}$ | $9.10 \times 10^{-11}$ |
| NS (Algorithm 1) | 13 | 22 | 10 | 90 | 22 |
| CPU (Algorithm 1) | 1.12 | 1.20 | 1.14 | 1.65 | 1.20 |
| ME (Algorithm 6) | $1.27 \times 10^{-10}$ | $1.57 \times 10^{-10}$ | $7.40 \times 10^{-11}$ | $6.76 \times 10^{-11}$ | $2.45 \times 10^{-10}$ |
| NS (Algorithm 6) | 66 | 285 | 60 | 286 | 203 |
| CPU (Algorithm 6) | 1.67 | 3.30 | 1.59 | 3.23 | 2.64 |
Table 9. For Example 4, solved by Algorithms 1 and 6 with different values of $\sigma_2$, where κ = 20 and $\sigma_1 = -800$.

| $\sigma_2$ | 10 | 15 | 20 | 25 | 30 | 40 |
| ME (Algorithm 1) | $5.50 \times 10^{-11}$ | $1.63 \times 10^{-10}$ | $6.09 \times 10^{-11}$ | $1.46 \times 10^{-10}$ | $5.12 \times 10^{-11}$ | $3.58 \times 10^{-11}$ |
| NS (Algorithm 1) | 7 | 8 | 10 | 12 | 17 | 25 |
| CPU (Algorithm 1) | 1.06 | 1.12 | 1.52 | 1.26 | 1.20 | 1.26 |
| ME (Algorithm 6) | $1.57 \times 10^{-10}$ | $1.11 \times 10^{-10}$ | $7.40 \times 10^{-11}$ | $9.66 \times 10^{-11}$ | $6.74 \times 10^{-11}$ | $4.70 \times 10^{-11}$ |
| NS (Algorithm 6) | 206 | 113 | 60 | 88 | 269 | 567 |
| CPU (Algorithm 6) | 3.11 | 2.10 | 1.39 | 1.78 | 3.10 | 5.27 |
Table 10. Example 4: $\xi^{(k)}$ obtained by Algorithm 1.

| k | 1 | 2 | 3 | 4 | 5 | 6 | 7 |
| $\xi^{(k)}$ | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
Table 11. For Example 5, solved by Algorithm 3 with different values of κ, where $\sigma_1 = 2\kappa^2$ and $\sigma_2 = 50$.

| κ | 2 | 5 | 7 | 10 | 15 |
| ME | $2.72 \times 10^{-12}$ | $3.54 \times 10^{-11}$ | $8.60 \times 10^{-12}$ | $3.05 \times 10^{-12}$ | $2.12 \times 10^{-12}$ |
| NS (Algorithm 3) | 30 | 78 | 35 | 65 | 75 |
| CPU (Algorithm 3) | 1.32 | 1.65 | 1.39 | 1.59 | 1.66 |