1. Introduction
The complex Helmholtz equations appear in many physical applications, e.g., scattering problems, electromagnetics, acoustics, damped propagation of time-harmonic waves, electrochemical impedance spectroscopy, and unsteady slow viscous flows [1,2]. Discretizations with the finite difference method [3,4,5], the finite element method [6,7], and the spectral element method [8] lead to complex symmetric linear systems.
Consider a complex Helmholtz equation, as follows:
$$\Delta u(x,y) + k^2 u(x,y) = f(x,y), \quad (x,y) \in \Omega, \qquad (1)$$
where $\Omega = (0,1) \times (0,1)$ is the unit square; $k = k_1 + i k_2$, with $k_1 > 0$ and $k_2 \ge 0$, is a complex-valued wave number; $u(x,y)$ is a complex function depicting the solution of Equation (1).
Physically, the complex Helmholtz equation describes a damped wave equation, known as the Telegraph equation, as follows:
$$\frac{\partial^2 w}{\partial t^2} + 2\gamma \frac{\partial w}{\partial t} = c^2 \Delta w, \qquad (2)$$
where $c$ is the speed of the wave, and $\gamma \ge 0$ is a constant damping coefficient. Let $w(x,y,t) = \operatorname{Re}[u(x,y)e^{-i\omega t}]$ be a time-harmonic solution of Equation (2), where $\operatorname{Re}$ denotes the real part. Let the complex wave number be given by $k^2 = (\omega^2 + 2i\gamma\omega)/c^2$; $u$ is a solution of Equation (1) with $f = 0$, when $w$ is a solution of Equation (2). The solution of the complex Helmholtz equation with complex wave number $k$ can be understood as a wave that is attenuated while it propagates: the larger the imaginary part $k_2$ of the wave number, the stronger the damping effect.
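To make the damping mechanism concrete, the following minimal sketch computes the complex wave number from assumed Telegraph parameters (the values of $c$, $\gamma$, and $\omega$ are illustrative, not taken from the paper) and shows that a plane wave $e^{ikx}$ decays at the rate $e^{-k_2 x}$:

```python
import numpy as np

# Illustrative Telegraph parameters (assumed values, not from the paper).
c, gamma, omega = 1.0, 0.5, 10.0      # wave speed, damping, angular frequency

k = np.sqrt((omega**2 + 2j * gamma * omega) / c**2)  # principal square root
k1, k2 = k.real, k.imag               # k = k1 + i*k2, with k1 > 0, k2 >= 0

# |exp(i*k*x)| = exp(-k2*x): the wave is attenuated while it propagates.
x = np.linspace(0.0, 1.0, 5)
print(k1, k2)
print(np.abs(np.exp(1j * k * x)))     # monotonically decaying amplitudes
```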
Many applications of Equations (1) and (2) are found in the literature. A fast multipole method for the complex Helmholtz equation, which deals with numerous real-world applications in computational electromagnetics, can be used as a building block of other fast solvers [9]; a preconditioner in a special two-by-two block form solves the real-system formulation of complex Helmholtz equations [10]; a parameterized Uzawa method for the complex Helmholtz equation addresses the standard saddle-point problem [11]; and the Telegraph equation is utilized to investigate the effect of viral spread on tumor cells and to determine the role of the extracellular matrix in facilitating viral spread [12]. On the basis of the Telegraph equation for the distributed parameters of a lossy transmission line, an observer allows the accurate detection and localization of a transmission fault [13], and an integral decomposition enables finding the frequency shift of the generalized Telegraph equation with a moving point-wise harmonic source [14].
After a five-point finite difference discretization of Equation (1), it becomes a complex linear system, as follows:
$$(K_n - k^2 I_n)\mathbf{u} = \mathbf{f}; \qquad (3)$$
$K_n = I_m \otimes T_m + T_m \otimes I_m$ is the centered difference matrix approximation of the negative Laplacian operator in Equation (1), where $\otimes$ is the Kronecker tensor product, $T_m = \operatorname{tridiag}(-1, 2, -1)/h^2$, $m$ is the number of interior grid points in the $x$ and $y$ directions, respectively, and $n = m^2$ is the total number of interior grid points inside the unit square. The grid spacing is $h = 1/(m+1)$, and the inner nodal points are $(x_i, y_j) = (ih, jh)$, $i, j = 1, \ldots, m$; $\mathbf{u} \in \mathbb{C}^n$ consists of nodal values of the variable $u$, and is a vectorization of $u$ at all inner nodal points; $\mathbf{f} \in \mathbb{C}^n$ consists of nodal values of $-f$, and is a vectorization of $-f$ at all inner nodal points [15].
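As a hedged sketch of this discretization (the identifiers follow the notation above; the chosen $m$ and $k$ are illustrative), the matrix $K_n$ can be assembled with sparse Kronecker products:

```python
import numpy as np
import scipy.sparse as sp

m = 50                                  # interior grid points per direction
n = m * m                               # total interior grid points
h = 1.0 / (m + 1)                       # grid spacing on the unit square

# 1D centered-difference matrix T_m and the 2D negative-Laplacian matrix K_n.
Tm = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m)) / h**2
Im = sp.identity(m)
Kn = sp.kron(Im, Tm) + sp.kron(Tm, Im)  # n x n matrix via Kronecker products

# Discrete complex Helmholtz system (K_n - k^2 I_n) u = f of Equation (3).
k = 10.0 + 1.0j                         # illustrative complex wave number
A = (Kn - k**2 * sp.identity(n)).tocsr()
```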
Let
$$K_n - k^2 I_n = W + iT, \qquad (4)$$
where $W, T \in \mathbb{R}^{n \times n}$ are symmetric positive definite and symmetric positive semi-definite, respectively; Equation (3) is re-written as
$$(W + iT)\mathbf{u} = \mathbf{f}. \qquad (5)$$
Upon letting
$$\mathbf{u} = \mathbf{y} + i\mathbf{z}, \quad \mathbf{f} = \mathbf{b}_1 + i\mathbf{b}_2, \qquad (6)$$
Equation (5) can be written as
$$\mathbf{A}\mathbf{x} = \mathbf{b}, \qquad (7)$$
where $\mathbf{x} = (\mathbf{y}^{\mathrm{T}}, \mathbf{z}^{\mathrm{T}})^{\mathrm{T}} \in \mathbb{R}^{2n}$ and $\mathbf{b} = (\mathbf{b}_1^{\mathrm{T}}, \mathbf{b}_2^{\mathrm{T}})^{\mathrm{T}} \in \mathbb{R}^{2n}$.
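A minimal sketch of this real two-block reformulation ($W$, $T$, $\mathbf{b}_1$, $\mathbf{b}_2$ as defined above; the random test data are illustrative) is as follows:

```python
import numpy as np

def two_block_form(W, T, b1, b2):
    """Recast (W + i*T)(y + i*z) = b1 + i*b2 as the 2n x 2n real system (7)."""
    A = np.block([[W, -T], [T, W]])
    b = np.concatenate([b1, b2])
    return A, b

# Consistency check on a small random instance (illustrative data).
rng = np.random.default_rng(0)
n = 4
W = rng.standard_normal((n, n)); W = W @ W.T + n * np.eye(n)  # SPD
T = np.eye(n)                                                  # SPSD
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = (W + 1j * T) @ u
A, b = two_block_form(W, T, f.real, f.imag)
x = np.linalg.solve(A, b)
assert np.allclose(x[:n] + 1j * x[n:], u)   # recovers y = Re(u), z = Im(u)
```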
For any splitting of $\mathbf{A}$ given by $\mathbf{A} = \mathbf{M} - \mathbf{N}$, with $\mathbf{M}$ being nonsingular, an iterative scheme for Equation (7) is [16]
$$\mathbf{x}^{(k+1)} = \mathbf{M}^{-1}\mathbf{N}\mathbf{x}^{(k)} + \mathbf{M}^{-1}\mathbf{b},$$
where $\mathbf{x}^{(k)}$ is the $k$th step value of $\mathbf{x}$. The convergence is guaranteed if
$$\rho(\mathbf{M}^{-1}\mathbf{N}) < 1,$$
where $\mathbf{M}^{-1}\mathbf{N}$ is the iteration matrix, and $\rho(\mathbf{M}^{-1}\mathbf{N})$ is the spectral radius of $\mathbf{M}^{-1}\mathbf{N}$.
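A generic sketch of this scheme and its convergence test, written for dense matrices (the tolerance and iteration cap are illustrative defaults):

```python
import numpy as np

def spectral_radius(M, N):
    """rho(M^{-1} N); convergence of the splitting iteration needs rho < 1."""
    return np.max(np.abs(np.linalg.eigvals(np.linalg.solve(M, N))))

def splitting_iterate(M, N, b, x0, tol=1e-10, maxit=10_000):
    """Iterate x^{(k+1)} = M^{-1} N x^{(k)} + M^{-1} b for A = M - N."""
    A = M - N
    x = x0.copy()
    for _ in range(maxit):
        x = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            break
    return x
```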
The $\mathbf{A} = \mathbf{M} - \mathbf{N}$ splitting iterative scheme includes the Jacobi method, the Gauss–Seidel method, the successive over-relaxation (SOR) method, and the accelerated over-relaxation (AOR) method as special cases. The SOR method was developed in [17]; the AOR method in [18] is a generalization of SOR.
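For reference, the following sketch expresses these classical methods as choices of $\mathbf{M}$ in $\mathbf{A} = \mathbf{M} - \mathbf{N}$ (with $\mathbf{N} = \mathbf{M} - \mathbf{A}$), using the decomposition $\mathbf{A} = \mathbf{D} - \mathbf{L} - \mathbf{U}$ recalled below; the AOR parameterization (acceleration $r$, relaxation $\omega$) follows the standard convention [18]:

```python
import numpy as np

def classical_M(A, method, omega=1.0, r=1.0):
    """M for classical splittings, with A = D - L - U (D diagonal,
    L strictly lower, U strictly upper); N is recovered as N = M - A."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    if method == "jacobi":
        return D
    if method == "gauss-seidel":
        return D - L
    if method == "sor":                  # relaxation parameter omega
        return D / omega - L
    if method == "aor":                  # acceleration r, relaxation omega
        return (D - r * L) / omega
    raise ValueError(method)
```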
Many iterative methods have been proposed to solve complex symmetric linear systems, such as the generalized successive over-relaxation (GSOR) method [19], the accelerated GSOR (AGSOR) method [20], the symmetric block triangular splitting (SBTS) method [21], and the improved block splitting (IBS) method, as well as its acceleration AIBS [22]. Additionally, the scale-splitting (SCSP) method [23] has been further generalized to the two-parameter two-step scale-splitting (TTSCSP) method [24]. A double-step method was used to solve the complex Helmholtz equation in [25].
For the coefficient matrix given by
$$\mathbf{A} = \begin{bmatrix} W & -T \\ T & W \end{bmatrix},$$
Equation (7) is a two-block linear system. For a two-block linear system whose blocks satisfy suitable rank conditions, with the off-diagonal block of rank $r$, Darvishi and Khosro-Aghdam [26] derived the optimal value of the relaxation parameter for the symmetric SOR method.
In the SOR method, $\mathbf{A}$ is decomposed into
$$\mathbf{A} = \mathbf{D} - \mathbf{L} - \mathbf{U},$$
where $\mathbf{D}$ is a nonsingular diagonal matrix, and $\mathbf{U}$ and $\mathbf{L}$ are strictly upper and strictly lower triangular matrices, respectively. The SOR method with relaxation parameter $\omega$ can be written as [17]
$$(\mathbf{D} - \omega\mathbf{L})\mathbf{x}^{(k+1)} = [(1-\omega)\mathbf{D} + \omega\mathbf{U}]\mathbf{x}^{(k)} + \omega\mathbf{b}.$$
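One SOR sweep in exactly this splitting form can be sketched as follows (the lower-triangular system is solved with a dense solver for brevity):

```python
import numpy as np

def sor_step(A, b, x, omega):
    """One SOR step: (D - omega*L) x_new = [(1-omega)D + omega*U] x + omega*b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # so that A = D - L - U
    U = -np.triu(A, 1)
    rhs = ((1.0 - omega) * D + omega * U) @ x + omega * b
    return np.linalg.solve(D - omega * L, rhs)   # lower-triangular system
```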
Similar to the SOR-like methods and the AOR-like methods, many efficient iterative methods, as well as their spectral analysis, have been studied in the literature [27,28,29,30,31].
Given an initial guess $\mathbf{x}_0$ for an iterative scheme, we use Equation (7) to obtain a residual vector $\mathbf{r}_0 = \mathbf{b} - \mathbf{A}\mathbf{x}_0$. According to the information in $\mathbf{r}_0$, we attempt to search for a good descent vector $\mathbf{d}$, which corrects the solution to $\mathbf{x} = \mathbf{x}_0 + \mathbf{d}$, so that the new residual is decreased, obeying the rule $\|\mathbf{b} - \mathbf{A}\mathbf{x}\| < \|\mathbf{r}_0\|$. The residual vector and the descent vector are two fundamental concepts in the area of iterative schemes.
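In code, this residual-monitoring rule amounts to the following small sketch (the acceptance test is the only assumption):

```python
import numpy as np

def correct(A, b, x0, d):
    """Accept the descent vector d only if it shortens the residual."""
    r0 = b - A @ x0
    r1 = b - A @ (x0 + d)
    return (x0 + d, r1) if np.linalg.norm(r1) < np.linalg.norm(r0) else (x0, r0)
```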
Let $\mathbb{K}_m := \operatorname{span}\{\mathbf{r}_0, \mathbf{A}\mathbf{r}_0, \ldots, \mathbf{A}^{m-1}\mathbf{r}_0\}$ be an $m$-dimensional Krylov subspace; the GMRES method in [32] employs the Petrov–Galerkin condition $\mathbf{b} - \mathbf{A}\mathbf{x} \perp \mathbf{A}\mathbb{K}_m$ to search for the descent vector via a perpendicular property, which is a crucial ingredient in the development of many iterative algorithms in the Krylov subspace. However, this property is rarely used in the $\mathbf{A} = \mathbf{M} - \mathbf{N}$ splitting iterative methods. In this paper, we will adopt this concept to determine the optimal values of the parameters appearing in the splitting iterative schemes to solve two-block complex linear systems.
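A minimal sketch of this perpendicular property (a plain least-squares realization over an unorthogonalized Krylov basis, for illustration only; practical GMRES uses the Arnoldi process):

```python
import numpy as np

def krylov_descent(A, r0, m):
    """Minimize ||r0 - A d|| over d in K_m = span{r0, A r0, ..., A^{m-1} r0};
    the minimizer enforces (r0 - A d) perpendicular to A K_m."""
    V = np.empty((r0.size, m))
    v = r0.copy()
    for j in range(m):                  # unorthogonalized Krylov basis
        V[:, j] = v                     # (ill-conditioned for large m;
        v = A @ v                       # Arnoldi is used in practice)
    c, *_ = np.linalg.lstsq(A @ V, r0, rcond=None)
    return V @ c                        # descent vector d
```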
Based on different splittings of the coefficient matrix, we are going to develop six types of iterative algorithms to solve the complex linear system in the two-block form of Equation (7). While Algorithms 2 and 3 have a single parameter, Algorithms 1 and 4–6 have two parameters. Algorithms 1 and 6 have the same splitting form; however, Algorithm 1 is applied to the original system with the coefficient matrix $\mathbf{A}$, while Algorithm 6 is applied to a transformed system, to be introduced in Section 4, with a preconditioned coefficient matrix. Algorithms 4 and 5 have the same splitting of $\mathbf{A}$; however, one parameter in Algorithm 4 is a free parameter, while in Algorithm 5 both parameters are obtained by the orthogonality condition. We will develop novel methods to determine the values of these parameters, such that the iteration process is optimized. We must emphasize that, given ad hoc values of the parameters, the splitting iterative schemes are divergent in general.
This paper presents the following novel contributions:
- (a)
The two-block splitting iterative methods for complex linear systems are formulated to preserve the orthogonality and to maximally reduce the length of the residual vector.
- (b)
The values of parameters in the splitting iteration methods are determined by the orthogonality condition and the minimality conditions.
- (c)
We prove that the proposed two-block iterative schemes are absolutely convergent.
- (d)
A numerical simulation of the complex Helmholtz equation is advanced by highly accurate and efficient single-parameter SOR-like and two-parameter AOR-like two-block splitting iteration methods.
- (e)
The optimal values of parameters can improve the accuracy and accelerate the convergence for the complex Helmholtz equation with a high wave number and large damping effect.
5. Pseudo-Codes
To distinguish the algorithms, we name the splitting iterative method based on Theorem 2 as Algorithm 1; similarly, Algorithm 2 (Theorem 3), Algorithm 3 (Theorem 4), Algorithm 4 (Theorem 5), Algorithm 5 (Theorem 6), and Algorithm 6 (Theorem 7).
Algorithm 1. Splitting iterative scheme via Theorem 2
1: Give the system data, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Do k = 0, 1, …
4: …
5: …
6: Compute the two parameters via Equations (57)–(59)
7: …
8: …
9: …
10: …
11: If the convergence criterion is satisfied, stop
12: Otherwise, go to 3
The computational kernel of Algorithm 1 encompasses the computations of the two parameters via Equations (57)–(59), which require three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of $n$-vectors with $10n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
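The per-iteration structure shared by Algorithms 1–6 can be summarized by the following hedged skeleton; `params` stands in for the paper's optimal-parameter formulas (e.g., Equations (57)–(59) for Algorithm 1) and `make_M` for the chosen splitting matrix, both purely hypothetical placeholders. It uses the identity $\mathbf{M}\mathbf{x}^{(k+1)} = \mathbf{N}\mathbf{x}^{(k)} + \mathbf{b} \Leftrightarrow \mathbf{x}^{(k+1)} = \mathbf{x}^{(k)} + \mathbf{M}^{-1}\mathbf{r}^{(k)}$, with $\mathbf{r}^{(k)} = \mathbf{b} - \mathbf{A}\mathbf{x}^{(k)}$.

```python
import numpy as np

def two_block_solver(A, b, x0, params, make_M, eps=1e-10, maxit=1000):
    """Hedged skeleton of Algorithms 1-6: per step, choose the parameters,
    form the splitting matrix M, and correct x by the descent vector M^{-1} r.
    `params` and `make_M` are hypothetical placeholders, not the paper's formulas."""
    x = x0.copy()
    r = b - A @ x                               # residual vector
    for _ in range(maxit):
        alpha, beta = params(A, r)              # placeholder parameter formulas
        x = x + np.linalg.solve(make_M(alpha, beta), r)
        r = b - A @ x                           # new residual
        if np.linalg.norm(r) <= eps * np.linalg.norm(b):
            break
    return x
```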
Algorithm 2. Splitting iterative scheme via Theorem 3
1: Give the system data, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Compute the initial vectors
4: Do k = 0, 1, …
5: …
6: …
7: Compute the parameter via Equations (82)–(86)
8: …
9: …
10: …
11: …
12: If the convergence criterion is satisfied, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 2 encompasses the computation of the parameter via Equations (82)–(86), which requires three matrix-vector products with $3n^2$ scalar multiplications and seven inner products of $n$-vectors with $7n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
Algorithm 3. Splitting iterative scheme via Theorem 4
1: Give the system data, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Compute the initial vectors
4: Do k = 0, 1, …
5: …
6: …
7: Compute the parameter via Equations (90) and (91)
8: …
9: …
10: …
11: …
12: If the convergence criterion is satisfied, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 3 encompasses the computation of the parameter via Equations (90) and (91), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of $n$-vectors with $10n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
Algorithm 4. Splitting iterative scheme via Theorem 5
1: Give the system data, the free parameter, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Compute the initial vectors
4: Do k = 0, 1, …
5: …
6: …
7: Compute the parameter via Equations (104) and (105)
8: …
9: …
10: …
11: …
12: If the convergence criterion is satisfied, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 4 encompasses the computation of the parameter via Equations (104) and (105), which requires three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of $n$-vectors with $10n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
Algorithm 5. Splitting iterative scheme via Theorem 6
1: Give the system data, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Compute the initial vectors
4: Do k = 0, 1, …
5: …
6: …
7: Compute the two parameters via Equations (109)–(111)
8: …
9: …
10: …
11: …
12: If the convergence criterion is satisfied, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 5 encompasses the computations of the two parameters via Equations (109)–(111), which require three matrix-vector products with $3n^2$ scalar multiplications and thirteen inner products of $n$-vectors with $13n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
Algorithm 6. Splitting iterative scheme via Theorem 7
1: Give the system data, an initial guess, and the convergence criterion
2: Compute the one-time quantities
3: Compute the initial vectors
4: Do k = 0, 1, …
5: …
6: …
7: Compute the two parameters via Equations (118)–(120)
8: …
9: …
10: …
11: …
12: If the convergence criterion is satisfied, stop
13: Otherwise, go to 4
The computational kernel of Algorithm 6 encompasses the computations of the two parameters via Equations (118)–(120), which require three matrix-vector products with $3n^2$ scalar multiplications and ten inner products of $n$-vectors with $10n$ scalar multiplications. The inversion of an $n \times n$ matrix requires $O(n^3)$ scalar multiplications, and a product of three matrices requires $O(n^3)$ scalar multiplications; however, these quantities are computed only once, before the iteration starts. Two further vectors require four matrix-vector products with $4n^2$ scalar multiplications; another two vectors require three matrix-vector products with $3n^2$ scalar multiplications. In each iteration, the computational complexity is thus low, with $O(n^2)$ scalar multiplications.
Besides Algorithms 1 and 6, whose two parameters are determined by the minimality conditions in Equation (47), the values of the parameters in Algorithms 2–5 are determined by the orthogonality condition in Equation (32).