Abstract
For a two-block splitting iterative scheme to solve the system of complex linear equations resulting from the complex Helmholtz equation, the iterative form in terms of the descent vector and the residual vector is formulated. We propose splitting iterative schemes by considering the perpendicular property of consecutive residual vectors. The two-block splitting iterative schemes are proven to be absolutely convergent, and the residual is minimized at each iteration step. Single and double parameters in the two-block splitting iterative schemes are derived explicitly by utilizing the orthogonality condition or the minimality conditions. Some simulations of complex Helmholtz equations are performed to exhibit the performance of the proposed two-block iterative schemes endowed with optimal values of the parameters. The primary novelty and major contribution of this paper lie in using the orthogonality condition of residual vectors to optimize the iterative process. The proposed method might fill a gap in the current literature, where existing iterative methods either lack explicit parameter optimization or struggle with high wave numbers and large damping constants in the complex Helmholtz equation. The two-block splitting iterative scheme provides an efficient and convergent solution, even in challenging cases.
1. Introduction
The complex Helmholtz equations appear in many physical applications, e.g., scattering problems, electromagnetics, acoustics, damped propagation of time-harmonic waves, electrochemical impedance spectroscopy, unsteady slow viscous flows, etc. [1,2]. Discretizations with the finite difference method [3,4,5], finite element method [6,7], and spectral element method [8] lead to complex symmetric linear systems.
Consider a complex Helmholtz equation, as follows:
$$\Delta u(x,y) + k^2 u(x,y) = f(x,y), \quad (x,y) \in \Omega, \tag{1}$$
where $\Omega$ is the unit square; $k = k_r + \mathrm{i}k_i$, with $k_r > 0$ and $k_i > 0$, is a complex-valued wave number; $u(x,y)$ is a complex function depicting the solution of Equation (1).
Physically, the complex Helmholtz equation describes a damped wave equation, known as the Telegraph equation, as follows:
$$w_{tt}(x,y,t) + 2a\,w_t(x,y,t) = c^2 \Delta w(x,y,t), \tag{2}$$
where c is the speed of the wave, and a is a constant damping coefficient. Let $w(x,y,t) = \operatorname{Re}[u(x,y)e^{-\mathrm{i}\omega t}]$ be a time-harmonic solution of Equation (2), where $\operatorname{Re}$ denotes the real part. Let the complex wave number $k$ be given by $k^2 = (\omega^2 + 2\mathrm{i}a\omega)/c^2$; $u(x,y)$ is a solution of Equation (1) with $f = 0$, when $w(x,y,t)$ is a solution of Equation (2). The solution of the complex Helmholtz equation with a complex wave number can be understood as a wave that is attenuated while it propagates. The larger the imaginary part of the wave number, the stronger the damping effect.
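To make this reduction explicit, the following is a worked sketch under the assumed time convention $e^{-\mathrm{i}\omega t}$ and the damped wave form stated above; the paper's own sign conventions may differ:

```latex
% Time-harmonic reduction of the Telegraph equation (assumed conventions)
\begin{align*}
  w(x,y,t) &= \operatorname{Re}\!\left[u(x,y)\,e^{-\mathrm{i}\omega t}\right]
  && \text{(time-harmonic ansatz)}\\
  -\omega^{2}u - 2\mathrm{i}a\omega\,u &= c^{2}\Delta u
  && \text{(insert into } w_{tt} + 2a\,w_t = c^{2}\Delta w \text{)}\\
  \Delta u + k^{2}u &= 0,
  \qquad k^{2} = \frac{\omega^{2} + 2\mathrm{i}a\omega}{c^{2}}
  && \text{(rearrange)}
\end{align*}
```

In particular, $k_i > 0$ whenever $a > 0$ and $\omega > 0$, which matches the statement that a larger imaginary part of the wave number yields stronger damping.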
Many applications of Equations (1) and (2) are given as follows. A fast multipole method for the complex Helmholtz equation deals with numerous real-world applications in computational electromagnetics and can be used as a building block of other fast solvers [9]; a preconditioner in a special two-by-two block form solves the real system formulation of complex Helmholtz equations [10]; a parameterized Uzawa method for the complex Helmholtz equation addresses the standard saddle point problem [11]; and the Telegraph equation is utilized to investigate the effect of viral spread on tumor cells and to determine the role of the extracellular matrix in facilitating viral spread [12]. On the basis of the Telegraph equation for the distributed parameters of a lossy transmission line, an observer allows the accurate detection and localization of a transmission fault [13], and an integral decomposition enables one to find the frequency shift of the generalized Telegraph equation with a moving point-wise harmonic source [14].
After a five-point finite difference discretization of Equation (1), it becomes a complex linear system, in which $\mathbf{K} = \mathbf{I}_m \otimes \mathbf{V}_m + \mathbf{V}_m \otimes \mathbf{I}_m$ is the centered difference matrix approximation of the negative Laplacian operator in Equation (1), where $\otimes$ is the Kronecker tensor product, m is the number of interior grid points in the x and y directions, respectively, and $n = m^2$ is the total number of interior grid points inside the unit square. The grid spacing is $h = 1/(m+1)$, and $\mathbf{V}_m = \operatorname{tridiag}(-1,2,-1)/h^2$; $\mathbf{x}$ consists of nodal values of the real part of $u$, and is a vectorization of it at all inner nodal points $(x_i, y_j)$; $\mathbf{y}$ consists of nodal values of the imaginary part of $u$, and is a vectorization of it at all inner nodal points [15].
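As an illustration of this discretization, the following minimal Python sketch assembles K via Kronecker products; the grid size m and the wave number k are placeholder values, and forming the dense complex matrix $\mathbf{K} - k^2\mathbf{I}_n$ is an assumption for illustration only:

```python
import numpy as np

def negative_laplacian(m):
    """Five-point finite-difference matrix of the negative Laplacian.

    m interior grid points per direction, spacing h = 1/(m+1);
    returns the n x n matrix K = I_m (x) V_m + V_m (x) I_m with n = m^2.
    """
    h = 1.0 / (m + 1)
    V = (np.diag(2.0 * np.ones(m))
         - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2   # tridiag(-1, 2, -1)/h^2
    I = np.eye(m)
    return np.kron(I, V) + np.kron(V, I)

# Example: a complex Helmholtz coefficient matrix K - k^2 I (placeholder values)
m = 8                          # hypothetical grid size
k = 10.0 + 1.0j                # hypothetical complex wave number k = k_r + i k_i
K = negative_laplacian(m)
A = K - k**2 * np.eye(m * m)   # complex symmetric matrix of the discretized system
```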
For any splitting of $\mathbf{A}$ given by
$$\mathbf{A} = \mathbf{M} - \mathbf{N},$$
with $\mathbf{M}$ being nonsingular, an iterative scheme for Equation (7) is [16]
$$\mathbf{x}^{k+1} = \mathbf{M}^{-1}(\mathbf{N}\mathbf{x}^k + \mathbf{b}),$$
where $\mathbf{x}^k$ is the kth step value of $\mathbf{x}$. The convergence is guaranteed if
$$\rho(\mathbf{M}^{-1}\mathbf{N}) < 1,$$
where $\mathbf{M}^{-1}\mathbf{N}$ is the iteration matrix, and $\rho(\mathbf{M}^{-1}\mathbf{N})$ is its spectral radius.
The splitting iterative scheme includes the Jacobi method, the Gauss–Seidel method, the successive over-relaxation (SOR) method, and the accelerated over-relaxation (AOR) method as special cases. The SOR method was developed in [17]; the AOR method in [18] is a generalization of SOR.
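A generic splitting iteration, together with the spectral-radius convergence test above, can be sketched as follows (dense linear algebra for clarity; this is not the paper's implementation):

```python
import numpy as np

def splitting_iteration(M, N, b, x0, tol=1e-10, maxit=1000):
    """Generic splitting iteration x^{k+1} = M^{-1}(N x^k + b) for A = M - N."""
    x = x0.copy()
    for k in range(maxit):
        x_new = np.linalg.solve(M, N @ x + b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, maxit

def converges(M, N):
    """Check the sufficient condition rho(M^{-1} N) < 1."""
    rho = max(abs(np.linalg.eigvals(np.linalg.solve(M, N))))
    return rho < 1.0
```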
Many iterative methods have been proposed to solve complex symmetric linear systems, such as the generalized successive over-relaxation (GSOR) method [19], the accelerated GSOR (AGSOR) method [20], the symmetric block triangular splitting (SBTS) method [21], and the improved block splitting (IBS) method, as well as its acceleration, AIBS [22]. Additionally, the scale-splitting (SCSP) method [23] was further generalized to the two-parameter two-step scale-splitting (TTSCSP) method [24]. A double-step method was used to solve the complex Helmholtz equation in [25].
For a coefficient matrix given in a two-by-two block form, Equation (7) is a two-block linear system. Under suitable conditions on the blocks, with the off-diagonal block having rank r, Darvishi and Khosro-Aghdam [26] derived the optimal value of the relaxation parameter for the symmetric SOR method.
In the SOR method, $\mathbf{A}$ is decomposed to
$$\mathbf{A} = \mathbf{D} - \mathbf{L} - \mathbf{U},$$
where $\mathbf{D}$ is a nonsingular diagonal matrix, and $\mathbf{U}$ and $\mathbf{L}$ are strictly upper and lower triangular matrices, respectively. The SOR method with $\mathbf{M} = \mathbf{D}/\omega - \mathbf{L}$ can be written as [17]
$$(\mathbf{D} - \omega\mathbf{L})\mathbf{x}^{k+1} = [(1-\omega)\mathbf{D} + \omega\mathbf{U}]\mathbf{x}^k + \omega\mathbf{b}.$$
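For concreteness, a minimal SOR sketch under this splitting, assuming $\mathbf{A}$ has a nonzero diagonal and $\omega$ is a user-chosen relaxation parameter:

```python
import numpy as np

def sor(A, b, omega, x0, tol=1e-10, maxit=5000):
    """SOR: (D - omega*L) x^{k+1} = [(1-omega)*D + omega*U] x^k + omega*b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)          # strictly lower part, so that A = D - L - U
    U = -np.triu(A, 1)           # strictly upper part
    M = D - omega * L
    N = (1.0 - omega) * D + omega * U
    x = x0.copy()
    for k in range(maxit):
        x = np.linalg.solve(M, N @ x + omega * b)
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            return x, k + 1
    return x, maxit
```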
Similar to the SOR-like and AOR-like methods, many efficient iterative methods, as well as their spectral analyses, have been studied in the literature [27,28,29,30,31].
Given an initial guess $\mathbf{x}^0$ for an iterative scheme, we use Equation (7) to obtain a residual vector $\mathbf{r}^0 = \mathbf{b} - \mathbf{A}\mathbf{x}^0$. According to the information of $\mathbf{r}^k$, we attempt to search for a good descent vector $\mathbf{d}^k$, which corrects the solution to $\mathbf{x}^{k+1} = \mathbf{x}^k + \mathbf{d}^k$, so that the new residual can be decreased to abide by the rule $\|\mathbf{r}^{k+1}\| < \|\mathbf{r}^k\|$. The residual vector and the descent vector are two very fundamental concepts in the area of iterative schemes.
Let
$$\mathcal{K}_m := \operatorname{span}\{\mathbf{r}^0, \mathbf{A}\mathbf{r}^0, \ldots, \mathbf{A}^{m-1}\mathbf{r}^0\}$$
be an m-dimensional Krylov subspace; the GMRES method in [32] employs the Petrov–Galerkin method
$$\mathbf{b} - \mathbf{A}\mathbf{x}^k \perp \mathbf{A}\mathcal{K}_m$$
to search for $\mathbf{x}^k$ via a perpendicular property, which is a crucial ingredient in the development of many iterative algorithms in the Krylov subspace. However, this property is rarely used in splitting iterative methods. In this paper, we will adopt this concept to determine the optimal values of the parameters appearing in the splitting iterative schemes to solve two-block complex linear systems.
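In its simplest one-direction form, the perpendicular property fixes a step length in closed form: choosing α so that the new residual is orthogonal to $\mathbf{A}\mathbf{d}$ also minimizes $\|\mathbf{r} - \alpha\mathbf{A}\mathbf{d}\|$. The sketch below uses real arithmetic, consistent with the two-block (real) formulation; it is an illustration, not the paper's full scheme:

```python
import numpy as np

def optimal_step(A, r, d):
    """Step length making r_new = r - alpha*(A d) orthogonal to A d.

    This alpha also minimizes ||r - alpha * A d|| (minimum residual).
    """
    v = A @ d
    alpha = np.dot(r, v) / np.dot(v, v)
    r_new = r - alpha * v
    # the new residual is perpendicular to A d by construction
    assert abs(np.dot(r_new, v)) < 1e-10 * np.dot(v, v)
    return alpha, r_new
```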
Based on different splittings of the coefficient matrices, we are going to develop six types of iterative algorithms to solve the complex linear system in its two-block form. While Algorithms 2 and 3 have a single parameter, Algorithms 1 and 4–6 have two parameters. Algorithms 1 and 6 have the same splitting form; however, Algorithm 1 is applied to the original system with the coefficient matrix $\mathbf{A}$, while Algorithm 6 is applied to a transformed system, to be introduced in Section 4, with a preconditioned coefficient matrix. Algorithms 4 and 5 have the same splitting; however, in Algorithm 4, $\gamma$ is a free parameter, while in Algorithm 5, $\eta$ and $\gamma$ are obtained by the orthogonality condition. We will develop novel methods to determine the values of these parameters, such that the iteration process is optimized. We must emphasize that, given ad hoc values of the parameters, the iterative schemes are in general divergent.
This paper presents the following novel contributions:
- (a) The two-block splitting iterative methods for complex linear systems are formulated to preserve the orthogonality and to maximize the reduction in the residual vector's length.
- (b) The values of the parameters in the splitting iteration methods are determined by the orthogonality condition and the minimality conditions.
- (c) We prove that the proposed two-block iterative schemes are absolutely convergent.
- (d) A numerical simulation of the complex Helmholtz equation is advanced by highly accurate and efficient single-parameter SOR-like and two-parameter AOR-like two-block splitting iteration methods.
- (e) The optimal values of the parameters can improve the accuracy and accelerate the convergence for the complex Helmholtz equation with a high wave number and a large damping effect.
2. Mathematical Preliminaries
Lemma 1.
In terms of the residual vector
$$\mathbf{r}^k = \mathbf{b} - \mathbf{A}\mathbf{x}^k \tag{25}$$
and the descent vector $\mathbf{d}^k$, the splitting iterative scheme can be written as
$$\mathbf{x}^{k+1} = \mathbf{x}^k + \mathbf{d}^k, \quad \mathbf{M}\mathbf{d}^k = \mathbf{r}^k. \tag{26}$$
Proof.
From the splitting iterative scheme, $\mathbf{M}\mathbf{x}^{k+1} = \mathbf{N}\mathbf{x}^k + \mathbf{b}$; subtracting $\mathbf{M}\mathbf{x}^k$ from both sides and using $\mathbf{N} = \mathbf{M} - \mathbf{A}$ yields $\mathbf{M}(\mathbf{x}^{k+1} - \mathbf{x}^k) = \mathbf{b} - \mathbf{A}\mathbf{x}^k = \mathbf{r}^k$, which is Equation (26). □
Definition 1.
The splitting iterative scheme (26) is said to be orthogonal if
$$\mathbf{r}^{k+1} \cdot (\mathbf{A}\mathbf{d}^k) = 0, \tag{29}$$
where the dot between two vectors signifies the inner product.
Theorem 1.
If the splitting iterative scheme (26) is orthogonal and the descent vector $\mathbf{d}^k$ is bounded, then
$$\|\mathbf{r}^{k+1}\| < \|\mathbf{r}^k\| \tag{30}$$
for the absolute convergence.
Proof.
Multiplying the first one in Equation (26) by $\mathbf{A}$ and using Equation (25) yields
$$\mathbf{r}^{k+1} = \mathbf{r}^k - \mathbf{A}\mathbf{d}^k. \tag{31}$$
If condition (29) is satisfied and $\mathbf{d}^k$ is bounded, then by taking the inner product of $\mathbf{A}\mathbf{d}^k$ with Equation (31), we can derive
$$\mathbf{r}^k \cdot (\mathbf{A}\mathbf{d}^k) = \|\mathbf{A}\mathbf{d}^k\|^2, \tag{32}$$
which is called an orthogonality condition.
Taking the squared norm of Equation (31) yields
$$\|\mathbf{r}^{k+1}\|^2 = \|\mathbf{r}^k\|^2 - 2\,\mathbf{r}^k \cdot (\mathbf{A}\mathbf{d}^k) + \|\mathbf{A}\mathbf{d}^k\|^2. \tag{33}$$
Utilizing Equation (32), we have
$$\|\mathbf{r}^{k+1}\|^2 = \|\mathbf{r}^k\|^2 - \|\mathbf{A}\mathbf{d}^k\|^2 < \|\mathbf{r}^k\|^2,$$
due to $\|\mathbf{A}\mathbf{d}^k\|^2 > 0$. Equation (30) is proven.
By means of Equation (31), one has
$$\mathbf{r}^k = \mathbf{r}^{k+1} + \mathbf{A}\mathbf{d}^k.$$
Since $\mathbf{r}^{k+1}$ and $\mathbf{A}\mathbf{d}^k$ are perpendicular according to Equation (29), $\mathbf{r}^k$, $\mathbf{r}^{k+1}$, and $\mathbf{A}\mathbf{d}^k$ constitute the three sides of a right triangle. By using the Pythagorean theorem, we have
$$\|\mathbf{r}^k\|^2 = \|\mathbf{r}^{k+1}\|^2 + \|\mathbf{A}\mathbf{d}^k\|^2. \tag{34}$$
Obviously, Equation (34) holds during the iteration process. The orthogonality condition guarantees that the residual is strictly decreased step-by-step, and thus the splitting iterative scheme (26) is absolutely convergent when $\mathbf{d}^k$ is bounded. □
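Theorem 1 and the Pythagorean identity (34) are easy to verify numerically; the following sketch uses a random symmetric positive definite system and the simple choice of scaling $\mathbf{r}^k$ by the orthogonality-enforcing step length (an assumption for illustration, not one of the six schemes developed below):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
Q = rng.standard_normal((n, n))
A = Q @ Q.T + n * np.eye(n)       # random symmetric positive definite matrix
b = rng.standard_normal(n)
x = np.zeros(n)

r = b - A @ x
for k in range(5):
    d = r.copy()                          # simple illustrative descent direction
    v = A @ d
    alpha = np.dot(r, v) / np.dot(v, v)   # enforce r_new . (A d) = 0
    x += alpha * d
    r_new = b - A @ x
    # Pythagorean identity (34): ||r||^2 = ||r_new||^2 + ||alpha * A d||^2
    lhs = np.dot(r, r)
    rhs = np.dot(r_new, r_new) + alpha**2 * np.dot(v, v)
    print(k, np.linalg.norm(r_new) < np.linalg.norm(r), abs(lhs - rhs) < 1e-8 * lhs)
    r = r_new
```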
Equation (32) motivates us to define
$$a_0 := \frac{\mathbf{r}^k \cdot (\mathbf{A}\mathbf{d}^k)}{\|\mathbf{A}\mathbf{d}^k\|^2}. \tag{37}$$
Corollary 1.
The splitting iterative method in Equation (26) diverges if
$$a_0 < \frac{1}{2}.$$
Proof.
By Equation (33) and the definition (37), $\|\mathbf{r}^{k+1}\|^2 = \|\mathbf{r}^k\|^2 - (2a_0 - 1)\|\mathbf{A}\mathbf{d}^k\|^2$; hence, if $a_0 < 1/2$, then $\|\mathbf{r}^{k+1}\| > \|\mathbf{r}^k\|$, and the residual grows at each iteration step. □
Corollary 2.
For the orthogonal splitting iterative scheme (26), which has $a_0 = 1$, the reduction in the residual vector's length is maximized.
Proof.
We begin with
$$\|\mathbf{r}^k\|^2 - \|\mathbf{r}^{k+1}\|^2 = 2\,\mathbf{r}^k \cdot (\mathbf{A}\mathbf{d}^k) - \|\mathbf{A}\mathbf{d}^k\|^2;$$
hence, it is easy to deduce
$$F := \|\mathbf{r}^k\|^2 - \|\mathbf{r}^{k+1}\|^2 = (2a_0 - 1)\|\mathbf{A}\mathbf{d}^k\|^2. \tag{39}$$
To reduce the residual vector's length maximally, we consider the following maximization problem:
$$\max F;$$
using the maximality condition leads to $a_0 = 1$ in Equation (37). By inserting $a_0 = 1$ for the orthogonal splitting iterative scheme into Equation (39), we can attain Equation (34); $\|\mathbf{A}\mathbf{d}^k\|^2$ is the maximal value of F. □
The above results strongly suggest choosing the values of the parameters appearing in the splitting iterative scheme to satisfy the equality $a_0 = 1$, i.e., the orthogonality condition (32), at each iteration step. By applying the minimum residual technique to the Hermitian and skew-Hermitian splitting (HSS) iteration scheme, Yang [33] determined the parameters explicitly. In the treatment of nonstationary upper and lower triangular splitting iteration methods for linear inverse problems, Cui et al. [34] chose the parameters by minimizing the residual norm. The orthogonality condition (32) is more fundamental than the minimum residual technique.
3. Generalized AOR-like Iterative Scheme
Like the accelerated over-relaxation (AOR) method [18], we consider two parameters, $\eta$ and $\gamma$, in the splitting matrix $\mathbf{M}$, which is inserted into Equation (26) to accelerate the convergence speed of the splitting iterative scheme.
Then, we split $\mathbf{A}$ in Equation (6) accordingly.
Because there are two parameters in $\mathbf{M}$, we can adopt Corollary 2 to find the optimal values of $\eta$ and $\gamma$ by using the following minimization problem derived from Equation (33):
$$\min_{\eta,\,\gamma} \|\mathbf{r}^{k+1}\|^2, \tag{46}$$
that is,
$$\frac{\partial}{\partial \eta}\|\mathbf{r}^{k+1}\|^2 = 0, \qquad \frac{\partial}{\partial \gamma}\|\mathbf{r}^{k+1}\|^2 = 0; \tag{47}$$
they are called minimality conditions.
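When the descent vector splits into two contributions, $\mathbf{d} = \eta\,\mathbf{d}_1 + \gamma\,\mathbf{d}_2$, the minimality conditions (47) reduce to a symmetric 2×2 normal-equation system for $(\eta, \gamma)$. The following is a sketch under this assumption; the directions $\mathbf{d}_1$ and $\mathbf{d}_2$ stand in for whatever the chosen splitting produces:

```python
import numpy as np

def optimal_two_parameters(A, r, d1, d2):
    """Solve min_{eta,gamma} ||r - eta*(A d1) - gamma*(A d2)||^2.

    Setting the partial derivatives to zero (the minimality conditions)
    gives a symmetric 2x2 normal-equation system; d1 and d2 are assumed
    to yield linearly independent directions A d1 and A d2.
    """
    v1, v2 = A @ d1, A @ d2
    G = np.array([[np.dot(v1, v1), np.dot(v1, v2)],
                  [np.dot(v1, v2), np.dot(v2, v2)]])
    rhs = np.array([np.dot(r, v1), np.dot(r, v2)])
    eta, gamma = np.linalg.solve(G, rhs)
    r_new = r - eta * v1 - gamma * v2
    return eta, gamma, r_new
```

By construction, the optimized residual is orthogonal to both $\mathbf{A}\mathbf{d}_1$ and $\mathbf{A}\mathbf{d}_2$, and hence to $\mathbf{A}\mathbf{d}$; this is exactly how the minimality conditions recover the orthogonality condition (32).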
We apply Lemma 1 to Equation (7). Let
where , and .
The iterations of $\mathbf{x}^k$ and $\mathbf{y}^k$ are given by
where
Because of
where and , and are obtained from Equation (51) by
Theorem 2.
Proof.
We can derive
where , , and are given in Equation (60).
As explored in Corollary 2, the minimized value of $\|\mathbf{r}^{k+1}\|^2$, which can be obtained by Equation (46), is $\|\mathbf{r}^k\|^2 - \|\mathbf{A}\mathbf{d}^k\|^2$. Therefore, when the values of the parameters $\eta$ and $\gamma$ are optimized, they lead to
$$\mathbf{r}^k \cdot (\mathbf{A}\mathbf{d}^k) = \|\mathbf{A}\mathbf{d}^k\|^2,$$
which is just the orthogonality condition in Equation (32). In this regard, the iterative scheme based on the minimality conditions in Equation (47) is also orthogonal, such that the iterative scheme is absolutely convergent according to Theorem 1.
4. Optimal Splitting Iterative Schemes for Transformed Linear System
According to the works in [22,35], we take the same transformed linear system as follows. Let
Let
where $\mathbf{P}$ is a left preconditioner of system (7). When Equation (7) is left-multiplied by $\mathbf{P}$, we come to the transformed two-block linear system (69). To differentiate Equation (69) from Equation (7), the coefficient matrix of the transformed system is denoted by $\hat{\mathbf{A}}$.
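Schematically, the transformation is a left multiplication; the sketch below uses an inverse-diagonal stand-in for $\mathbf{P}$, since the block preconditioner of [22,35] is not reproduced here:

```python
import numpy as np

def left_precondition(A, b, P=None):
    """Transform A x = b into A_hat x = b_hat with A_hat = P A, b_hat = P b.

    If no P is supplied, an inverse-diagonal (Jacobi-type) matrix is used
    as a placeholder stand-in for the block preconditioner of [22,35].
    """
    if P is None:
        P = np.diag(1.0 / np.diag(A))    # placeholder choice of P
    return P @ A, P @ b
```

Because $\mathbf{P}$ is nonsingular, the solution is unchanged, but the splitting of $\hat{\mathbf{A}}$, and hence the spectral radius of the iteration matrix, can be much more favorable.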
4.1. Single-Parameter Splitting Iterative Schemes
4.1.1. First Single-Parameter Splitting Iterative Scheme
According to [22], the splitting of is given as follows:
Now we apply Lemma 1 to Equation (69). The iterations of $\mathbf{x}^k$ and $\mathbf{y}^k$ read as
where
Because of
where and , and are obtained from Equation (73) as follows:
Through some operations, we can derive
where
Theorem 3.
4.1.2. Second Single-Parameter Splitting Iterative Scheme
We address a special SOR-like single-parameter splitting iterative scheme [31] with
Theorem 4.
Proof.
4.2. Two-Parameter Splitting Iterative Schemes
4.2.1. First Two-Parameter Splitting Iterative Scheme
We consider
where $\eta$ and $\gamma$ are parameters.
We have
hence, with , and are obtained as follows:
We can derive
where
Theorem 5.
Proof.
The iterative scheme in Theorem 5 has a drawback: it requires us to specify the value of $\gamma$ beforehand. To improve it, we prove the following result.
Theorem 6.
4.2.2. Second Two-Parameter Splitting Iterative Scheme
In this section, we extend the generalized AOR-like method in Section 3 to the transformed linear system (69). Let
Theorem 7.
5. Pseudo-Codes
To distinguish the algorithms, we name the splitting iterative method based on Theorem 2 as Algorithm 1, and similarly Algorithm 2 (Theorem 3), Algorithm 3 (Theorem 4), Algorithm 4 (Theorem 5), Algorithm 5 (Theorem 6), and Algorithm 6 (Theorem 7).
| Algorithm 1. Splitting iterative scheme via Theorem 2 |
| 1: Give , , , , , , and |
| 2: Compute , |
| 3: Do |
| 4: |
| 5: |
| 6: Compute $\eta$ and $\gamma$ via Equations (57)–(59) |
| 7: |
| 8: |
| 9: |
| 10: |
| 11: If , stop |
| 12: Otherwise, go to 3 |
The computational kernel of Algorithm 1 encompasses the computations of $\eta$ and $\gamma$ via Equations (57)–(59), which require three matrix–vector products and ten inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
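The overall flow of Algorithm 1 can be mirrored by a short generic driver; this is a sketch under the assumptions of Section 3, with two descent directions per step and $\eta$, $\gamma$ re-optimized from the minimality conditions (47); the splitting-specific construction of the directions is abstracted into a callback and is not the paper's exact Equations (57)–(59):

```python
import numpy as np

def two_parameter_scheme(A, b, make_directions, x0, eps=1e-10, maxit=1000):
    """Generic two-parameter splitting iteration (illustrative sketch).

    make_directions(x, r) -> (d1, d2) supplies the splitting-specific
    descent directions; eta and gamma are re-optimized at every step from
    the 2x2 normal-equation system of the minimality conditions (47).
    """
    x = x0.copy()
    r = b - A @ x
    nb = np.linalg.norm(b)
    for k in range(maxit):
        d1, d2 = make_directions(x, r)
        v1, v2 = A @ d1, A @ d2
        G = np.array([[v1 @ v1, v1 @ v2], [v1 @ v2, v2 @ v2]])
        eta, gamma = np.linalg.solve(G, np.array([r @ v1, r @ v2]))
        x += eta * d1 + gamma * d2
        r = r - eta * v1 - gamma * v2        # residual update, Eq. (31)
        if np.linalg.norm(r) < eps * nb:     # stopping criterion
            return x, k + 1
    return x, maxit
```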
| Algorithm 2. Splitting iterative scheme via Theorem 3 |
| 1: Give , , , , , , and |
| 2: Compute , , |
| 3: Compute , |
| 4: Do |
| 5: |
| 6: |
| 7: Compute the optimal parameter via Equations (82)–(86) |
| 8: |
| 9: |
| 10: |
| 11: |
| 12: If , stop |
| 13: Otherwise, go to 4 |
The computational kernel of Algorithm 2 encompasses the computation of the optimal parameter via Equations (82)–(86), which requires three matrix–vector products and seven inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
| Algorithm 3. Splitting iterative scheme via Theorem 4 |
| 1: Give , , , , , , and |
| 2: Compute , , |
| 3: Compute , |
| 4: Do |
| 5: |
| 6: |
| 7: Compute the optimal parameter via Equations (90) and (91) |
| 8: |
| 9: |
| 10: |
| 11: |
| 12: If , stop |
| 13: Otherwise, go to 4 |
The computational kernel of Algorithm 3 encompasses the computation of the optimal parameter via Equations (90) and (91), which requires three matrix–vector products and ten inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
| Algorithm 4. Splitting iterative scheme via Theorem 5 |
| 1: Give , , , , , , , and |
| 2: Compute , , |
| 3: Compute , |
| 4: Do |
| 5: |
| 6: |
| 7: Compute $\eta$ via Equations (104) and (105) |
| 8: |
| 9: |
| 10: |
| 11: |
| 12: If , stop |
| 13: Otherwise, go to 4 |
The computational kernel of Algorithm 4 encompasses the computation of $\eta$ via Equations (104) and (105), which requires three matrix–vector products and ten inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
| Algorithm 5. Splitting iterative scheme via Theorem 6 |
| 1: Give , , , , , , and |
| 2: Compute , , |
| 3: Compute , |
| 4: Do |
| 5: |
| 6: |
| 7: Compute $\eta$ and $\gamma$ via Equations (109)–(111) |
| 8: |
| 9: |
| 10: |
| 11: |
| 12: If , stop |
| 13: Otherwise, go to 4 |
The computational kernel of Algorithm 5 encompasses the computations of $\eta$ and $\gamma$ via Equations (109)–(111), which require three matrix–vector products and thirteen inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
| Algorithm 6. Splitting iterative scheme via Theorem 7 |
| 1: Give , , , , , , and |
| 2: Compute , , |
| 3: Compute , |
| 4: Do |
| 5: |
| 6: |
| 7: Compute $\eta$ and $\gamma$ via Equations (118)–(120) |
| 8: |
| 9: |
| 10: |
| 11: |
| 12: If , stop |
| 13: Otherwise, go to 4 |
The computational kernel of Algorithm 6 encompasses the computations of $\eta$ and $\gamma$ via Equations (118)–(120), which require three matrix–vector products and ten inner products of n-vectors. The matrix inversion and the product of three matrices that appear in the iteration matrices are computed only one time. Updating the solution vectors requires four matrix–vector products, and updating the residual vectors requires three matrix–vector products; hence, the computational complexity of each iteration is low.
6. Examples of Complex Linear System
The complex Helmholtz equation (Equation (1)) is solved in this section. To demonstrate the efficiency and accuracy of the proposed iterative algorithms, several examples are examined. All the numerical computations are carried out in Fortran 77 using Microsoft Developer Studio under Windows 10, on an Intel Core i7-3770 CPU (2.80 GHz) with 8 GB memory. The precision is .
When we compare the computed results with other iterative methods in the literature, we take the same convergence criterion; the resulting complex linear systems are the same, and the algorithms use the same precision and the same discretization scheme.
6.1. Example 1
Under the convergence criterion with , the number of steps (NS) and CPU times in seconds obtained by different algorithms are compared in Table 1. For Algorithm 4, we take .
Table 1.
Example 1: (NS, CPU) obtained by Algorithms 1–6 with different , where .
The number of steps (NS) is compared in Table 2, under and , where , and . For Algorithm 4, we take . The CPU time of Algorithm 4 is 25.24 s. By using the data reported in [21], we compare the NS obtained by different methods in Table 2. HSS was developed in [36], MHSS was developed in [37], and SBTS was developed in [21]. The GSOR results were chosen according to Table 1 in [19].
Table 2.
Example 1: NS obtained by different methods.
To check the orthogonality condition (32), we can compute $a_0$ in Equation (37). At each iteration, if $a_0 = 1$, then the orthogonality condition (32) is preserved. For Algorithm 4, we compute $\eta$ via the orthogonality condition (32) for each specified value of $\gamma$; at each iteration step, as shown in Table 3, the values of $a_0$ are computed, which indicate that the orthogonality condition (32) is automatically satisfied.
Table 3.
Example 1: $a_0$ obtained by Algorithm 4.
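The check reported in Table 3 can be scripted directly; the following is a sketch of the indicator $a_0$ of Equation (37):

```python
import numpy as np

def orthogonality_indicator(A, r, d):
    """a0 = r.(A d)/||A d||^2; a0 = 1 means condition (32) holds exactly."""
    v = A @ d
    return np.dot(r, v) / np.dot(v, v)
```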
We must emphasize that the values of all parameters are optimized, either by the minimality conditions in Equation (47) or by the orthogonality condition (32). Only for Algorithm 4 does there exist a free parameter $\gamma$. In Table 4, under and , we investigate the influence of $\gamma$ on the NS. The best value is or .
Table 4.
Example 1: NS obtained by Algorithm 4 with .
6.2. Example 2
Next, we consider [21]
where we take and . Equation (128) is discretized to a complex linear system (5), where are supposed to be the exact solutions, and .
The number of steps (NS), under and , where , and , is compared in Table 5. For Algorithm 4, we take . By using the data obtained from [22], Table 5 lists NS for the methods of AIBS, IBS, PBS, NBS [35], AGSOR [20], and PMHSS [38].
Table 5.
Example 2: NS obtained by different methods.
6.3. Example 3
We first consider a real solution of Equation (1) with
where we take , and is the wave number.
We apply Algorithms 1 and 6 to solve the complex Helmholtz equation with high wave numbers. Table 6 lists the maximum error (ME) and the NS, under and , where ME is defined by
$$\mathrm{ME} := \max_{1 \le i,j \le m} \left| u(x_i, y_j) - \hat{u}(x_i, y_j) \right|,$$
in which $u(x_i, y_j)$ are exact solutions computed from Equation (129) at all inner nodal points, while $\hat{u}(x_i, y_j)$ are numerical solutions computed at all inner nodal points. All CPU times obtained by Algorithms 1 and 6 are smaller than 0.5 s, because the NS is only one or two steps.
Table 6.
For Example 3, solved by Algorithms 1 and 6 with different values of , where and .
Table 7 lists the maximum error (ME) and the NS for different values of . All CPU times obtained by Algorithms 1 and 6 are smaller than 0.5 s, because the NS is only one step.
Table 7.
For Example 3, solved by Algorithms 1 and 6 with different values of , where , and .
6.4. Example 4
We consider a complex solution of Equation (1):
where we take , and is the wave number.
We apply Algorithm 1 to solve the complex Helmholtz equation with high wave numbers. Table 8 lists the maximum error (ME), NS and CPU, under and .
Table 8.
For Example 4, solved by Algorithms 1 and 6 with different values of , where and .
Table 9 lists the maximum error (ME), NS and CPU for different values of .
Table 9.
For Example 4, solved by Algorithms 1 and 6 with different values of , where , and .
For the case with and in this example, Algorithm 1 converges within seven steps. The orthogonality condition (32) automatically holds, as reflected in Table 10 by the values of $a_0$ computed by Equation (37). At each iteration, $a_0 = 1$ means that the orthogonality condition (32) is preserved.
Table 10.
Example 4: $a_0$ obtained by Algorithm 1.
6.5. Example 5
Finally, we consider a complex solution of a modified Helmholtz equation:
where we take , and is the wave number.
We apply Algorithm 3 to solve this problem with different wave numbers. Table 11 lists the maximum error (ME), NS and CPU, under and .
Table 11.
For Example 5, solved by Algorithm 3 with different values of , where and .
7. Conclusions
By using the two-block splitting iterative method to solve the complex Helmholtz equation, the orthogonality condition was formulated to accelerate the convergence speed. When the two-block splitting iterative method is orthogonal, it must be absolutely convergent. As usual, the complex Helmholtz equation was transformed into the solution of a two-block complex symmetric linear system. Six different iterative algorithms were developed, whose parameters were optimized and obtained explicitly using the orthogonality condition and the minimization techniques of the residual norm. Algorithms 1 and 6 were based on the minimality conditions to determine the optimal values of two parameters, while the other algorithms were based on the orthogonality condition to derive the optimal values of the parameters. Algorithm 1 was formulated for the original complex symmetric linear system, while the other algorithms were formulated for the transformed complex symmetric linear system. Even for a high wave number and a large damping constant of the complex Helmholtz equation, the proposed two-block iterative methods, together with the optimal values of the parameters, can generate highly accurate simulation results within a small number of iterations. From the practical numerical simulations, we found that Algorithm 1 outperforms Algorithm 6.
Because the values of all parameters were optimized, the iterative algorithms automatically preserved the orthogonality condition, which is the main reason for the fast convergence of Algorithms 1–6 proposed in this paper.
Author Contributions
Conceptualization, C.-S.L. and C.-W.C.; Methodology, C.-S.L. and C.-W.C.; Software, C.-S.L., C.-W.C. and C.-C.T.; Validation, C.-S.L. and C.-W.C.; Formal analysis, C.-S.L. and C.-W.C.; Investigation, C.-S.L., C.-W.C. and C.-C.T.; Resources, C.-S.L. and C.-W.C.; Data curation, C.-S.L., C.-W.C. and C.-C.T.; Writing—original draft, C.-S.L. and C.-W.C.; Writing—review & editing, C.-S.L. and C.-W.C.; Visualization, C.-S.L., C.-W.C. and C.-C.T.; Supervision, C.-S.L. and C.-W.C.; Project administration, C.-W.C.; Funding acquisition, C.-S.L. All authors have read and agreed to the published version of the manuscript.
Funding
This study was partially supported by the National Science and Technology Council under grant NSTC 113-2221-E-019-043-MY3, which is gratefully acknowledged.
Data Availability Statement
The data presented in this study are available on request from the corresponding author.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Abrahamsson, L.; Kreiss, H.O. Numerical solution of the coupled mode equations in duct acoustics. J. Comput. Phys. 1994, 111, 1–14. [Google Scholar] [CrossRef]
- Mandelis, A. Diffusion-Wave Fields: Mathematical Methods and Green Functions; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Singer, I.; Turkel, E. High-order finite difference methods for the Helmholtz equation. Comput. Meth. Appl. Mech. Eng. 1998, 163, 343–358. [Google Scholar] [CrossRef]
- Wu, Z.; Alkhalifah, T. A highly accurate finite-difference method with minimum dispersion error for solving the Helmholtz equation. J. Comput. Phys. 2018, 365, 350–361. [Google Scholar] [CrossRef]
- Fu, Y. Compact fourth-order finite difference schemes for Helmholtz equation with high wave numbers. J. Comput. Math. 2008, 26, 98–111. [Google Scholar] [CrossRef]
- Oberai, A.; Pinsky, P. A multiscale finite element method for the Helmholtz equation. Comput. Meth. Appl. Mech. Eng. 1998, 154, 281–297. [Google Scholar] [CrossRef]
- Oberai, A.; Pinsky, P. A residual-based finite element method for the Helmholtz equation. Int. J. Numer. Methods Eng. 2000, 49, 399–419. [Google Scholar] [CrossRef]
- Mehdizadeh, O.; Paraschivoiu, M. Investigation of a two-dimensional spectral element method for Helmholtz’s equation. J. Comput. Phys. 2003, 189, 111–129. [Google Scholar] [CrossRef]
- Cho, M.H.; Cai, W. A wideband fast multipole method for the two-dimensional complex Helmholtz equation. Comput. Phys. Commun. 2010, 181, 2086–2090. [Google Scholar] [CrossRef]
- Axelsson, O.; Karátson, J.; Magoulès, F. Superlinear convergence using block preconditioners for the real system formulation of complex Helmholtz equations. J. Comput. Appl. Math. 2018, 340, 424–431. [Google Scholar] [CrossRef]
- Ai, X.; Liao, W.; Wang, X. Optimized parameterized Uzawa methods for solving complex Helmholtz equations. Comput. Math. Appl. 2024, 164, 34–44. [Google Scholar] [CrossRef]
- Malinzi, J. A mathematical model for oncolytic virus spread using the telegraph equation. Commun. Nonlinear Sci. Numer. Simul. 2021, 102, 105944. [Google Scholar] [CrossRef]
- Benabdelhadi, A.; Chaoui, F.Z.; Ghani, D.; Giri, F. Observer design for collocated-boundary measurements of transmission line governed by telegraph equations with application to fault detection. IFAC-PapersOnLine 2024, 58, 793–798. [Google Scholar] [CrossRef]
- Pietrzak, T.; Horzela, A.; Górska, K. The generalized telegraph equation with moving harmonic source: Solvability using the integral decomposition technique and wave aspects. Int. J. Heat Mass Transf. 2024, 225, 125373. [Google Scholar] [CrossRef]
- Liu, C.S.; El-Zahar, E.R.; Chang, C.W. Dynamical optimal values of parameters in the SSOR, AOR and SAOR testing using the Poisson linear equations. Mathematics 2023, 11, 3828. [Google Scholar] [CrossRef]
- Varga, R.S. Matrix Iterative Analysis; Springer: Berlin, Germany, 2000. [Google Scholar]
- Young, D.M. Iterative methods for solving partial difference equations of elliptic type. Trans. Am. Math. Soc. 1954, 76, 92–111. [Google Scholar] [CrossRef]
- Hadjidimos, A. Accelerated overrelaxation method. Math. Comput. 1978, 32, 149–157. [Google Scholar] [CrossRef]
- Salkuyeh, D.K.; Hezari, D.; Edalatpour, V. Generalized successive overrelaxation iterative method for a class of complex symmetric linear system of equations. Int. J. Comput. Math. 2015, 92, 802–815. [Google Scholar] [CrossRef]
- Edalatpour, V.; Hezari, D.; Salkuyeh, D.K. Accelerated generalized SOR method for a class of complex systems of linear equations. Math. Commun. 2015, 20, 37–52. Available online: https://hrcak.srce.hr/140386 (accessed on 1 July 2015).
- Li, X.A.; Zhang, W.H.; Wu, Y.J. On symmetric block triangular splitting iteration method for a class of complex symmetric system of linear equations. Appl. Math. Lett. 2018, 79, 131–137. [Google Scholar] [CrossRef]
- Zhu, Y.; Zhang, N.M.; Chao, Z. Two block splitting iteration methods for solving complex symmetric linear systems from complex Helmholtz equation. Mathematics 2024, 12, 1888. [Google Scholar] [CrossRef]
- Hezari, D.; Salkuyeh, D.K.; Edalatpour, V. A new iterative method for solving a class of complex symmetric system of linear equations. Numer. Algor. 2016, 73, 927–955. [Google Scholar] [CrossRef]
- Salkuyeh, D.K.; Siahkolaei, T.S. Two-parameter TSCSP method for solving complex symmetric system of linear equations. Calcolo 2018, 55, 8. [Google Scholar] [CrossRef]
- Siahkolaei, T.S.; Salkuyeh, D.K. A new double-step method for solving complex Helmholtz equation. Hacet. J. Math. Stat. 2020, 49, 1245–1260. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Khosro-Aghdam, R. Determination of the optimal value of relaxation parameter in symmetric SOR method for rectangular coefficient matrices. Appl. Math. Comput. 2006, 181, 1018–1025. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Hessari, P. Symmetric SOR method for augmented systems. Appl. Math. Comput. 2006, 183, 409–415. [Google Scholar] [CrossRef]
- Zhang, G.F.; Lu, Q.H. On generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2008, 219, 51–58. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Hessari, P. A modified symmetric successive overrelaxation method for augmented systems. Comput. Math. Appl. 2011, 61, 3128–3135. [Google Scholar] [CrossRef]
- Chao, Z.; Zhang, N.; Lu, Y. Optimal parameters of the generalized symmetric SOR method for augmented systems. J. Comput. Appl. Math. 2014, 266, 52–60. [Google Scholar] [CrossRef]
- Golub, G.H.; Wu, X.; Yuan, J.Y. SOR-like methods for augmented systems. BIT 2001, 41, 71–85. [Google Scholar] [CrossRef]
- Saad, Y. Iterative Methods for Sparse Linear Systems, 2nd ed.; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
- Yang, A.L. On the convergence of the minimum residual HSS iteration method. Appl. Math. Lett. 2019, 94, 210–216. [Google Scholar] [CrossRef]
- Cui, J.; Peng, G.; Lu, Q.; Huang, Z. A class of nonstationary upper and lower triangular splitting iteration methods for ill-posed inverse problems. IAENG Int. J. Comput. Sci. 2020, 47, 118–129. [Google Scholar]
- Huang, Z.G. Efficient block splitting iteration methods for solving a class of complex symmetric linear systems. J. Comput. Appl. Math. 2021, 395, 113574. [Google Scholar] [CrossRef]
- Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111. [Google Scholar] [CrossRef]
- Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algor. 2011, 56, 297–317. [Google Scholar] [CrossRef]
- Bai, Z.Z.; Benzi, M.; Chen, F. Preconditioned MHSS iteration methods for a class of block two-by-two linear systems with applications to distributed control problems. IMA J. Numer. Anal. 2013, 33, 343–369. [Google Scholar] [CrossRef]