Article

On C-To-R-Based Iteration Methods for a Class of Complex Symmetric Weakly Nonlinear Equations

Min-Li Zeng 1 and Guo-Feng Zhang 2
1 School of Mathematics and Finance, Putian University, Putian 351100, China
2 School of Mathematics and Statistics, Lanzhou University, Lanzhou 730000, China
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(2), 208; https://doi.org/10.3390/math8020208
Submission received: 31 December 2019 / Revised: 3 February 2020 / Accepted: 4 February 2020 / Published: 6 February 2020
(This article belongs to the Special Issue Computational Methods in Applied Analysis and Mathematical Modeling)

Abstract

To avoid solving complex linear systems directly, we first rewrite the complex-valued nonlinear system in an equivalent real-valued form (C-to-R). Then, based on the separability of the linear and nonlinear terms, we present a C-to-R-based Picard iteration method and a nonlinear C-to-R-based splitting (NC-to-R) iteration method for solving a class of large sparse complex symmetric weakly nonlinear equations. At each step of the inner iteration of the new methods, one only needs to solve real subsystems with the same symmetric positive definite coefficient matrix. Therefore, computational workload and storage are saved in actual implementations. The conditions guaranteeing local convergence are studied in detail. Quasi-optimal parameters are also proposed for both the C-to-R-based Picard iteration method and the NC-to-R iteration method. Numerical experiments are performed to show the efficiency of the new methods.

1. Introduction

We consider the iterative solution of systems of nonlinear equations of the form
$$A u = \phi(u), \quad \text{or} \quad F(u) = A u - \phi(u) = 0, \qquad (1)$$
where $A = W + iT \in \mathbb{C}^{n \times n}$ is a large, sparse, complex symmetric matrix, with $W \in \mathbb{R}^{n \times n}$ and $T \in \mathbb{R}^{n \times n}$ being the real and imaginary parts of the coefficient matrix $A$, respectively. Here, we assume that $W$ and $T$ are both symmetric positive semidefinite (SPSD) and that at least one of them is symmetric positive definite (SPD). The right-hand-side function $\phi : \mathbb{D} \subset \mathbb{C}^n \to \mathbb{C}^n$ is continuously differentiable on the open convex domain $\mathbb{D}$ in the $n$-dimensional complex space $\mathbb{C}^n$, and $u \in \mathbb{C}^n$ is the unknown vector. When the linear term $Au$ strongly dominates the nonlinear term $\phi(u)$ in a certain norm [1], we say that the system of nonlinear Equations (1) is weakly nonlinear. Here and in the sequel, we assume that the Jacobian matrix of the nonlinear function $\phi(u)$ at the solution point $u_* \in \mathbb{D}$, denoted by $\phi'(u_*)$, is non-Hermitian and negative semidefinite.
Weakly nonlinear equations of the form (1) arise in many areas of scientific computing and engineering applications, e.g., nonlinear ordinary and partial differential equations, nonlinear integral and integro-differential equations, nonlinear optimization and variational problems, saddle point problems from image processing, and so on. For more details, see [2,3,4,5,6,7,8].
By substituting $u = x + iy$, with $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$, we can rewrite the system of nonlinear Equations (1) as
$$(Wx - Ty) + i(Tx + Wy) = R(u) + i\, I(u),$$
where $R(u) = \mathrm{real}(\phi(u)) \in \mathbb{R}^n$ and $I(u) = \mathrm{imag}(\phi(u)) \in \mathbb{R}^n$ are the real and imaginary parts of $\phi(u)$, respectively. Hence, we can reformulate the complex nonlinear system (1) as the following block two-by-two real form,
$$C \begin{bmatrix} x \\ y \end{bmatrix} := \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} R(u) \\ I(u) \end{bmatrix} \equiv \Phi(u). \qquad (2)$$
When $R(u)$ and $I(u)$ are both constant vectors, the system (2) reduces to a linear system with a structured block two-by-two coefficient matrix. Many methods have been proposed to solve the linear system (2) efficiently. In particular, when the blocks $W$ and $T$ are both SPSD and at least one of them is SPD, some classical iterative methods can be found in the existing literature, for example, the block preconditioned methods [9,10] and the additive block diagonal preconditioned method [11]. There is also a family of preconditioners based on the modified Hermitian and skew-Hermitian splitting (MHSS), such as the preconditioned MHSS (PMHSS) iteration method [12,13]. Other methods, such as the preconditioned GSOR (PGSOR) iteration method [14] and the complex-to-real (C-to-R) preconditioner [15], have also attracted much interest. For more efficient methods, we refer to the works in [16,17,18].
When $Au$ strongly dominates the nonlinear term $\phi(u)$, and $R(u)$ and $I(u)$ depend on the variable vector $u$, we say that the system (2) is weakly nonlinear. One of the most classical and important solvers for the system of nonlinear Equations (1) is the Newton method [3,6,19], which can be described as
$$u^{(k+1)} = u^{(k)} - F'(u^{(k)})^{-1} F(u^{(k)}), \quad k = 0, 1, 2, \ldots. \qquad (3)$$
It can be seen that the dominant task in implementing the Newton method is to solve the following equation at each iteration step,
$$F'(u^{(k)})\, s^{(k)} = -F(u^{(k)}), \quad \text{with} \quad u^{(k+1)} = u^{(k)} + s^{(k)}, \qquad (4)$$
and to recompute the Jacobian matrix $F'(u^{(k)})$ at every iteration step. When the Jacobian matrix $F'(u^{(k)})$ is large and sparse, one usually uses either a splitting relaxation method [6] or a Krylov subspace method [5,20,21] to compute an approximate update vector $s^{(k)}$. However, these methods are still expensive in both computational workload and storage.
To improve the efficiency of the Newton iteration method, Bai and Guo [4] used the Hermitian and skew-Hermitian splitting (HSS) method to solve the Newton equations approximately, yielding the Newton-HSS method, and Guo and Duff [22] analyzed its Kantorovich-type semilocal convergence. Many efficient HSS-based methods have been developed since then. For example, Bai and Yang [1] presented the nonlinear HSS-like iteration method based on the Hermitian and skew-Hermitian (HS) splitting of the non-Hermitian coefficient matrix of the linear term $Au$. Some variants of the HSS-based methods for nonlinear equations can be found in the literature, e.g., the lopsided preconditioned modified HSS (LPMHSS) iteration method [23], the Newton-MHSS method [24], the accelerated Newton-GPSS iteration method [25], the preconditioned modified Newton-MHSS method [26], the modified Newton-SHSS method [27], the modified Newton-DPMHSS method [28], and so on. See [4,29,30,31,32] for more details.
In this paper, we concentrate on efficient methods for the equivalent real-valued nonlinear system (2). By utilizing the C-to-R preconditioning technique proposed in [16], we first construct a C-to-R-based Picard iteration method. This method is actually an inexact Picard method with the C-to-R iterative method as the inner iteration, so convergence results for the Picard iteration method can be used directly. To further improve the efficiency, we then introduce a nonlinear C-to-R splitting iteration method for solving the weakly nonlinear equations. The local convergence of both new methods is analyzed in detail, and the way to determine the theoretically optimal parameters is also studied.
The organization of this paper is as follows. In Section 2, we first present the C-to-R-based Picard iteration method, and then give its convergence results and the choice of the quasi-optimal parameter. In Section 3, we construct the nonlinear C-to-R-based splitting iteration method for solving the nonlinear system (2) and give a detailed theoretical analysis of its convergence properties; theoretically optimal parameters are also proposed. Numerical experiments are given in Section 4 to illustrate the feasibility and effectiveness of the new methods. Finally, a brief conclusion and some remarks are drawn in Section 5.
Throughout this paper, we use $\rho(B)$ to denote the spectral radius of a matrix $B$ and $\|\cdot\|$ to denote a consistent matrix (or vector) norm. We write $u = x + iy \in \mathbb{C}^n$, with $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$ being its real and imaginary parts, respectively.

2. The C-To-R-Based Picard Iteration Method

In this section, we propose the C-to-R-based Picard iteration method. To begin with, following [16], we recall the C-to-R preconditioner for the block two-by-two coefficient matrix in Equation (2),
$$B(\alpha) = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}, \qquad (5)$$
where $\alpha$ is a positive constant. Applying the preconditioner $B(\alpha)$ at each iteration step requires solving the generalized residual equation
$$B(\alpha) \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} f \\ g \end{bmatrix},$$
or, equivalently,
$$\alpha(\alpha W + T)x + T(\alpha x - y) = f, \qquad T(\alpha x - y) + (\alpha W + T)y = \alpha g,$$
i.e.,
$$(\alpha W + T)(\alpha x - y) = f - \alpha g, \qquad \alpha(\alpha W + T)x = f - T(\alpha x - y).$$
Therefore, the implementation for solving the above equations can be summarized as Algorithm 1.
Algorithm 1
(The C-to-R iteration method). Let $\alpha$ be a given positive constant. Use the following steps to solve the generalized residual equation.
Step 1.
Solve $(\alpha W + T)z = f - \alpha g$ to obtain $z$.
Step 2.
Solve $(\alpha W + T)x = \frac{1}{\alpha}(f - Tz)$ to obtain $x$.
Step 3.
Compute $y = \alpha x - z$.
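For concreteness, a minimal MATLAB sketch of Algorithm 1 could read as follows. The function name ctor_solve and its interface are ours (illustrative, not from the paper); it assumes $W$ and $T$ are given as sparse matrices with $\alpha W + T$ SPD, and it exploits the fact that Steps 1 and 2 share the same coefficient matrix by factoring it only once.

```matlab
% Minimal sketch of Algorithm 1 (identifiers are illustrative, not from the paper).
% Solves B(alpha)*[x; y] = [f; g] via two SPD solves with alpha*W + T.
function [x, y] = ctor_solve(W, T, alpha, f, g)
    L = chol(alpha*W + T, 'lower');       % SPD; in practice factor once and reuse
    z = L' \ (L \ (f - alpha*g));         % Step 1: (alpha*W + T) z = f - alpha*g
    x = L' \ (L \ ((f - T*z) / alpha));   % Step 2: (alpha*W + T) x = (f - T*z)/alpha
    y = alpha*x - z;                      % Step 3: recover y
end
```

In an iterative method, the Cholesky factor would of course be computed once outside the loop and reused; for very large problems, the two solves can instead be performed inexactly by the conjugate gradient method.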
The preconditioner $B(\alpha)$ can be viewed as the splitting matrix of the following matrix splitting,
$$\begin{bmatrix} W & -T \\ T & W \end{bmatrix} = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix} - \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix}. \qquad (6)$$
On the other hand, because $Au$ strongly dominates the nonlinear term $\phi(u)$, the Picard iteration method can be used, based on the separability of the linear and nonlinear terms, i.e.,
$$A u^{(k+1)} = \phi(u^{(k)}). \qquad (7)$$
Equivalently, the iteration in the real-valued form can be described as
$$C \begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} := \begin{bmatrix} W & -T \\ T & W \end{bmatrix} \begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = \begin{bmatrix} R(u^{(k)}) \\ I(u^{(k)}) \end{bmatrix} = \Phi(u^{(k)}), \qquad (8)$$
where $u^{(k)} = x^{(k)} + i y^{(k)} \in \mathbb{C}^n$ with $x^{(k)}, y^{(k)} \in \mathbb{R}^n$. Therefore, to improve the efficiency, we can use the splitting iterative method based on (6) as the inner iteration to solve (8) approximately at each Picard step. We describe the C-to-R-based Picard iteration method in Algorithm 2.
Algorithm 2
(The C-to-R-based Picard iteration method). Let $\phi : \mathbb{D} \subset \mathbb{C}^n \to \mathbb{C}^n$ be a continuously differentiable function and $A = W + iT \in \mathbb{C}^{n \times n}$ be a large, sparse, complex symmetric matrix, where $W \in \mathbb{R}^{n \times n}$ and $T \in \mathbb{R}^{n \times n}$ are the real and imaginary parts of $A$, respectively. Given an initial guess $u^{(0)} \in \mathbb{C}^n$, for $k = 0, 1, 2, \ldots$, until $\{u^{(k)}\}$ converges, compute the next iterate $u^{(k+1)}$ according to the following steps.
Step 1.
Set $\tilde{u}^{(k,0)} := u^{(k)}$.
Step 2.
For a given positive constant $\alpha$ and $l = 0, 1, 2, \ldots, l_k - 1$, set $x^{(k,l)} = \mathrm{real}(\tilde{u}^{(k,l)})$ and $y^{(k,l)} = \mathrm{imag}(\tilde{u}^{(k,l)})$, then solve the following subsystem
$$\begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix} \begin{bmatrix} x^{(k,l+1)} \\ y^{(k,l+1)} \end{bmatrix} = \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x^{(k,l)} \\ y^{(k,l)} \end{bmatrix} + \begin{bmatrix} R(\tilde{u}^{(k,l)}) \\ I(\tilde{u}^{(k,l)}) \end{bmatrix}$$
to obtain $x^{(k,l+1)}$ and $y^{(k,l+1)}$, and set $\tilde{u}^{(k,l+1)} = x^{(k,l+1)} + i\, y^{(k,l+1)}$.
Step 3.
Set $u^{(k+1)} = \tilde{u}^{(k,l_k)}$.
We can also describe Algorithm 2 in detailed implementation steps as Algorithm 3.
Algorithm 3
(The detailed implementation of the C-to-R-based Picard iteration method). Given an initial guess $u^{(0)} \in \mathbb{C}^n$ and a sequence of positive integers $\{l_k\}_{k=0}^{\infty}$, use the following iteration steps to compute $u^{(k+1)}$ for $k = 0, 1, 2, \ldots$, until $u^{(k)}$ satisfies the stopping criterion:
Step 1.
Set $r^{(k)} = \phi(u^{(k)}) - A u^{(k)}$.
Step 2.
Given an initial guess $\tilde{z}^{(k,0)} \in \mathbb{C}^n$, for $l = 0, 1, 2, \ldots, l_k - 1$:
Step 3.
Compute $\tilde{r}^{(k,l)} = r^{(k)} - A \tilde{z}^{(k,l)}$, and set $\tilde{f}^{(k,l)} = \mathrm{real}(\tilde{r}^{(k,l)})$, $\tilde{g}^{(k,l)} = \mathrm{imag}(\tilde{r}^{(k,l)})$.
Step 4.
Solve $(\alpha W + T)\, z^{(k,l)} = \tilde{f}^{(k,l)} - \alpha \tilde{g}^{(k,l)}$ to obtain $z^{(k,l)}$.
Step 5.
Solve $(\alpha W + T)\, x^{(k,l)} = \frac{1}{\alpha}(\tilde{f}^{(k,l)} - T z^{(k,l)})$ to obtain $x^{(k,l)}$.
Step 6.
Compute $y^{(k,l)} = \alpha x^{(k,l)} - z^{(k,l)}$.
Step 7.
Set $\tilde{z}^{(k,l+1)} = \tilde{z}^{(k,l)} + x^{(k,l)} + i\, y^{(k,l)}$. If $\tilde{z}^{(k,l+1)}$ satisfies the inner stopping criterion, go to Step 8; otherwise, return to Step 3.
Step 8.
Set $u^{(k+1)} = u^{(k)} + \tilde{z}^{(k,l_k)}$.
From Algorithm 3, we see that the main workload lies in solving linear subsystems with the same coefficient matrix $\alpha W + T$ in both Step 4 and Step 5. Since $\alpha W + T$ is SPD under our assumptions, we can solve these systems either exactly by a sparse Cholesky factorization or inexactly by a symmetric Krylov subspace method (e.g., the conjugate gradient method).
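Putting the pieces together, the outer-inner structure of Algorithm 3 can be sketched in MATLAB as follows. The code and all identifiers are ours (a sketch, not the authors' implementation); phi is a function handle returning $\phi(u)$, the inner loop is capped and uses a simple relative-residual test in place of the inner stopping criterion with tolerance $\eta$, and the factorization of $\alpha W + T$ is computed once and reused in every inner step.

```matlab
% Illustrative sketch of the C-to-R-based Picard method (Algorithm 3).
% W, T: sparse real/imaginary parts of A; phi: handle for phi(u); u: initial guess.
function u = ctor_picard(W, T, phi, alpha, u, tol, eta, maxit)
    A  = W + 1i*T;
    L  = chol(alpha*W + T, 'lower');            % factor once, reuse throughout
    spd_solve = @(b) L' \ (L \ b);
    r  = phi(u) - A*u;  r0 = norm(r);           % outer residual r^(k)
    for k = 1:maxit
        if norm(r) <= tol*r0, break; end
        zt = zeros(size(u));                    % inner iterate z~(k,0) = 0
        for l = 1:100                           % inner C-to-R sweeps (capped)
            rt = r - A*zt;                      % Step 3: inner residual
            if norm(rt) <= eta*norm(r), break; end
            f  = real(rt);  g = imag(rt);
            z  = spd_solve(f - alpha*g);        % Step 4
            x  = spd_solve((f - T*z)/alpha);    % Step 5
            zt = zt + x + 1i*(alpha*x - z);     % Steps 6-7: update z~
        end
        u = u + zt;                             % Step 8
        r = phi(u) - A*u;
    end
end
```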
Next, we denote
$$L(\alpha) = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}^{-1} \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix};$$
then the local convergence result for the C-to-R-based Picard iteration method can be summarized in the following theorem.
Theorem 1.
Denote
$$\theta(\alpha) = \| L(\alpha) \|, \quad \mu = \| A^{-1} \|, \quad \text{and} \quad \nu = \mu\, \| \phi'(u_*) \|.$$
Then, for any initial guess $u^{(0)} \in \mathbb{C}^n$ and any sequence of positive integers $l_k$, $k = 0, 1, 2, \ldots$, if $\nu < 1$ and $l_0 > \ln\!\left(\frac{1-\nu}{1+\nu}\right) / \ln \theta(\alpha)$, the iteration sequence generated by the C-to-R-based Picard method converges to the exact solution $u_*$, and it holds that
$$\limsup_{k \to \infty} \| u^{(k)} - u_* \|^{1/k} \le \nu + (1 + \nu)\, \theta(\alpha)^{l_*},$$
where $l_* = \liminf_{k \to \infty} l_k$. In particular, if $l_k \to \infty$ as $k \to \infty$, it follows that
$$\limsup_{k \to \infty} \| u^{(k)} - u_* \|^{1/k} \le \nu;$$
that is, the C-to-R-based Picard method converges R-linearly, with R-factor at most $\nu$.
Proof. 
The result follows immediately from the corresponding results in [1]. □
Theorem 1 shows that the convergence rate of the C-to-R-based Picard iteration method depends on the quantities $\theta(\alpha)$ and $\nu$. Owing to the weak nonlinearity, $\theta(\alpha)$ is the dominant quantity. Therefore, we can obtain a quasi-optimal parameter by minimizing $\theta(\alpha) = \| L(\alpha) \|$. In other words, we need to find $\alpha$ such that the eigenvalues of the matrix
$$\begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}^{-1} \begin{bmatrix} W & -T \\ T & W \end{bmatrix}$$
cluster around 1. Therefore, we can use the results in [33,34] to obtain the quasi-optimal parameter $\alpha$ in the following theorem.
Theorem 2.
Let $\alpha$ be a given positive constant and assume that the conditions of Theorem 1 are satisfied. Then, the optimal parameter minimizing $\theta(\alpha)$ is $\alpha^* = \sqrt{8 - 4\sqrt{2}}$.
Remark 1.
Because of the extra term $\phi'(u_*)$, we regard the above parameter only as a quasi-optimal parameter. In actual implementations, we use this value as a suggestion; the true optimal parameter may vary depending on the weakly nonlinear term.
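As a quick numerical illustration (our own check, not part of the paper), one can observe how the eigenvalues of $B(\alpha)^{-1} C$ cluster for a small model pair $W$, $T$:

```matlab
% Illustrative check (ours): eigenvalue clustering of B(alpha)^{-1} C.
n = 100;  e = ones(n, 1);
W = spdiags([-e 2*e -e], -1:1, n, n);    % SPD tridiagonal model matrix
T = speye(n);                            % SPSD model matrix
C = [W, -T; T, W];
alpha = 0.8;                             % a representative parameter value
B = [alpha^2*W + 2*alpha*T, -T; T, W];
lam = eig(full(B\C));
fprintf('eigenvalues lie in [%.3f, %.3f]\n', min(real(lam)), max(real(lam)));
```

For commuting $W$ and $T$, as in this toy example, the eigenvalues are real and confined to a short interval around 1, which is the behavior the quasi-optimal parameter aims for.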

3. The Nonlinear C-to-R-Based Splitting Iteration Method

In this section, we further introduce the NC-to-R method for solving the block two-by-two nonlinear Equations (2), making use of the nonlinear fixed-point equation
$$\begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} R(u) \\ I(u) \end{bmatrix}. \qquad (9)$$
The NC-to-R method can be described as Algorithm 4.
Algorithm 4
(The nonlinear C-to-R splitting iteration method). Let $\phi : \mathbb{D} \subset \mathbb{C}^n \to \mathbb{C}^n$ be a continuously differentiable function and $A = W + iT \in \mathbb{C}^{n \times n}$ be a large, sparse, complex symmetric matrix, where $W \in \mathbb{R}^{n \times n}$ and $T \in \mathbb{R}^{n \times n}$ are the real and imaginary parts of $A$, respectively. Given an initial guess $u^{(0)} \in \mathbb{C}^n$, for $k = 0, 1, 2, \ldots$, until $\{u^{(k)}\}$ converges, compute the next iterate $u^{(k+1)}$ by solving the subsystem
$$\begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix} \begin{bmatrix} x^{(k+1)} \\ y^{(k+1)} \end{bmatrix} = \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x^{(k)} \\ y^{(k)} \end{bmatrix} + \begin{bmatrix} R(u^{(k)}) \\ I(u^{(k)}) \end{bmatrix}$$
to obtain $u^{(k+1)} = x^{(k+1)} + i\, y^{(k+1)}$.
The detailed implementation of the NC-to-R method is given in Algorithm 5.
Algorithm 5
(The detailed implementation of the NC-to-R method). Given an initial guess $u^{(0)} \in \mathbb{C}^n$, for $k = 0, 1, 2, \ldots$, until $\{u^{(k)}\}$ converges, compute the next iterate $u^{(k+1)}$ according to the following steps.
Step 1.
Set $x^{(k)} = \mathrm{real}(u^{(k)})$, $y^{(k)} = \mathrm{imag}(u^{(k)})$, and compute $\tilde{R}(u^{(k)}) = R(u^{(k)}) + (\alpha^2 - 1)W x^{(k)} + 2\alpha T x^{(k)}$.
Step 2.
Solve $(\alpha W + T)\, z = \tilde{R}(u^{(k)}) - \alpha I(u^{(k)})$ to obtain $z$.
Step 3.
Solve $(\alpha W + T)\, \tilde{z}_1 = \frac{1}{\alpha}(\tilde{R}(u^{(k)}) - T z)$ to obtain $\tilde{z}_1$.
Step 4.
Compute $\tilde{z}_2 = \alpha \tilde{z}_1 - z$.
Step 5.
Set $u^{(k+1)} = \tilde{z}_1 + i\, \tilde{z}_2$.
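Analogously, each NC-to-R step costs two SPD solves with the same matrix $\alpha W + T$. A minimal MATLAB sketch of Algorithm 5 (our illustrative code, with the same conventions and caveats as the sketch in Section 2) is:

```matlab
% Illustrative sketch of the NC-to-R method (Algorithm 5); identifiers are ours.
function u = nctor(W, T, phi, alpha, u, tol, maxit)
    A  = W + 1i*T;
    L  = chol(alpha*W + T, 'lower');            % factor once, reuse
    spd_solve = @(b) L' \ (L \ b);
    r0 = norm(phi(u) - A*u);
    for k = 1:maxit
        p  = phi(u);  x = real(u);
        Rt = real(p) + (alpha^2 - 1)*(W*x) + 2*alpha*(T*x);  % Step 1: R~(u^(k))
        z  = spd_solve(Rt - alpha*imag(p));     % Step 2
        z1 = spd_solve((Rt - T*z)/alpha);       % Step 3
        u  = z1 + 1i*(alpha*z1 - z);            % Steps 4-5
        if norm(phi(u) - A*u) <= tol*r0, break; end
    end
end
```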
Next, we focus on the convergence analysis of the NC-to-R method. By utilizing the Ostrowski theorem (Theorem 10.1.3 in [6]), we can establish the local convergence theory for the NC-to-R method in the following theorem.
Theorem 3.
Assume that $\phi : \mathbb{D} \subset \mathbb{C}^n \to \mathbb{C}^n$ is F-differentiable at a point $u_* \in \mathbb{D}$ such that $A u_* = \phi(u_*)$. Denote
$$\tilde{T}(\alpha; u_*) = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}^{-1} \left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} + \Phi'(u_*) \right),$$
where $\Phi'(u_*)$ is the Jacobian of the function
$$\Phi(u) = \begin{bmatrix} R(u) \\ I(u) \end{bmatrix}$$
defined in (2). If $\rho(\tilde{T}(\alpha; u_*)) < 1$, then $u_* \in \mathbb{D}$ is a point of attraction of the NC-to-R iteration method.
Proof. 
By making use of (9), we can rewrite the nonlinear system (1) as
$$w = \Psi(w),$$
where
$$\Psi(w) = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}^{-1} \left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} w + \Phi(w) \right),$$
with
$$\Phi(w) = \begin{bmatrix} R(u) \\ I(u) \end{bmatrix} \quad \text{and} \quad w = \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} \mathrm{real}(u) \\ \mathrm{imag}(u) \end{bmatrix}.$$
Then, the NC-to-R method can be expressed as
$$w^{(k+1)} = \Psi(w^{(k)}), \quad k = 0, 1, 2, \ldots.$$
After some simple algebraic computations, we have
$$\Psi'(w_*) = \begin{bmatrix} \alpha^2 W + 2\alpha T & -T \\ T & W \end{bmatrix}^{-1} \left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} + \Phi'(u_*) \right) = \tilde{T}(\alpha; u_*).$$
Since $\phi : \mathbb{D} \subset \mathbb{C}^n \to \mathbb{C}^n$ is F-differentiable at the point $u_* \in \mathbb{D}$, its real and imaginary parts are also F-differentiable there, and hence $\Phi$ and $\Psi$ are F-differentiable at the corresponding point $w_*$ [35]. Therefore, by the Ostrowski theorem (Theorem 10.1.3 in [6]), we conclude that if $\rho(\tilde{T}(\alpha; u_*)) < 1$, then $u_*$ is a point of attraction of the NC-to-R method. □
At the end of this section, we give a strategy to determine the optimal iteration parameter $\alpha$, following the strategy proposed in [36].
In the following, we use $\mathrm{tr}(\cdot)$ to denote the trace of a matrix. First, we partition the matrix $\Phi'(u_*)$ as
$$\Phi'(u_*) = \begin{bmatrix} D_{11}(u_*) & D_{12}(u_*) \\ D_{21}(u_*) & D_{22}(u_*) \end{bmatrix};$$
then the conjugate transpose of $\Phi'(u_*)$ can be expressed as
$$(\Phi'(u_*))^* = \begin{bmatrix} (D_{11}(u_*))^* & (D_{21}(u_*))^* \\ (D_{12}(u_*))^* & (D_{22}(u_*))^* \end{bmatrix},$$
where $D_{11}(\cdot)$, $D_{12}(\cdot)$, $D_{21}(\cdot)$, and $D_{22}(\cdot)$ have the same size as the matrices $W$ and $T$.
The following theorem gives a strategy to choose the optimal parameter for the NC-to-R method.
Theorem 4.
Assume that the conditions of Theorem 3 are satisfied. Denote
$$\delta = \mathrm{tr}\big(\Phi'(u_*)(\Phi'(u_*))^*\big) = \mathrm{tr}\big(D_{11}(u_*)(D_{11}(u_*))^*\big) + \mathrm{tr}\big(D_{12}(u_*)(D_{12}(u_*))^*\big) + \mathrm{tr}\big(D_{21}(u_*)(D_{21}(u_*))^*\big) + \mathrm{tr}\big(D_{22}(u_*)(D_{22}(u_*))^*\big),$$
$$\eta = \mathrm{tr}\big(W D_{11}(u_*)\big) + \mathrm{tr}\big((D_{11}(u_*))^* W\big), \qquad \xi = \mathrm{tr}\big(T D_{11}(u_*)\big) + \mathrm{tr}\big((D_{11}(u_*))^* T\big),$$
$$a = \mathrm{tr}(W^2), \qquad b = \mathrm{tr}(WT) + \mathrm{tr}(TW), \qquad c = \mathrm{tr}(T^2).$$
Then, the optimal parameter $\alpha_{opt}$ for the NC-to-R method satisfies
$$h'(\alpha_{opt}) = 0 \quad \text{and} \quad h''(\alpha_{opt}) > 0,$$
where
$$h(\alpha) = a\alpha^4 + 2b\alpha^3 + (4c - 2a + \eta)\alpha^2 + 2(\xi - b)\alpha + (a + \delta - \eta).$$
Here, $h'(\alpha)$ and $h''(\alpha)$ are the first and second derivatives of $h(\alpha)$ with respect to the variable $\alpha$, respectively.
Proof. 
From Theorem 3, the iteration matrix of the NC-to-R method is $\tilde{T}(\alpha; u_*) = B(\alpha)^{-1}\big(B(\alpha) - C + \Phi'(u_*)\big)$. If $\alpha$ is chosen such that $B(\alpha) - C + \Phi'(u_*)$ is close to the zero matrix, then $\tilde{T}(\alpha; u_*)$ is approximately zero and the method converges fast. Therefore, we seek the $\alpha$ at which the squared Frobenius norm $\| B(\alpha) - C + \Phi'(u_*) \|_F^2$ attains its minimum.
By direct algebraic computation, it follows that
$$\| B(\alpha) - C + \Phi'(u_*) \|_F^2 = \mathrm{tr}\left( \left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} + \Phi'(u_*) \right) \left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} + \Phi'(u_*) \right)^* \right)$$
$$= \mathrm{tr}\left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix}^* \right) + \mathrm{tr}\left( \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix} (\Phi'(u_*))^* \right) + \mathrm{tr}\left( \Phi'(u_*) \begin{bmatrix} (\alpha^2 - 1)W + 2\alpha T & 0 \\ 0 & 0 \end{bmatrix}^* \right) + \mathrm{tr}\big( \Phi'(u_*)(\Phi'(u_*))^* \big).$$
Then, by using the notation in the theorem and after some simple algebraic computations, we obtain
$$\| B(\alpha) - C + \Phi'(u_*) \|_F^2 = a\alpha^4 + 2b\alpha^3 + (4c - 2a + \eta)\alpha^2 + 2(\xi - b)\alpha + (a + \delta - \eta) = h(\alpha).$$
Therefore, setting the first derivative of $h(\alpha)$ to zero gives $h'(\alpha_{opt}) = 0$, and the root that also satisfies the second-derivative condition $h''(\alpha) > 0$ is the desired optimal parameter $\alpha_{opt}$. □
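If the Jacobian block $D_{11}(u_*)$ and the quantity $\delta$ were available (in practice one would evaluate them at the current iterate, since $u_*$ is unknown), the quartic $h(\alpha)$ could be minimized numerically. A sketch under these assumptions (our code; dense trace computations, for moderate problem sizes only):

```matlab
% Sketch (ours): quasi-optimal alpha of Theorem 4 via the critical points of h.
% D11 is the (1,1) block of Phi'(u*); delta = tr(Phi'(u*) * Phi'(u*)^*).
% Assumes a positive real critical point of h exists.
function alpha_opt = nctor_alpha(W, T, D11, delta)
    a   = trace(W*W);  b = 2*trace(W*T);  c = trace(T*T);  % tr(WT) = tr(TW) here
    eta = 2*real(trace(W*D11));        % tr(W*D11) + tr(D11^* W)
    xi  = 2*real(trace(T*D11));        % tr(T*D11) + tr(D11^* T)
    h   = [a, 2*b, 4*c - 2*a + eta, 2*(xi - b), a + delta - eta];  % h(alpha)
    dh  = polyder(h);                  % h'(alpha), a cubic
    rts = roots(dh);
    rts = rts(abs(imag(rts)) < 1e-10 & real(rts) > 0);    % positive real roots
    [~, idx]  = min(polyval(h, real(rts)));  % minimizer, where h'' > 0 holds
    alpha_opt = real(rts(idx));
end
```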

4. Numerical Experiments

In this section, we verify the effectiveness of the C-to-R-based iteration methods by numerical experiments. All tests are performed in MATLAB R2017a (version 9.2.0.538062) in double precision, on a personal computer with a 2.40 GHz central processing unit (Intel(R) Core(TM) 2 Duo CPU), 4.00 GB memory, and a 64-bit Windows operating system. In our calculations, the stopping criterion for the proposed methods is that the current relative residual satisfies $\| r_k \|_2 / \| r_0 \|_2 < 10^{-6}$, where $r_k$ is the residual at the $k$-th iteration and $u^{(k)}$ is the $k$-th approximate solution of Equation (1). We use the zero vector as the initial guess. To show the advantages of the new methods, we compare the C-to-R-based methods with the methods listed in Table 1, which gives the abbreviations and the corresponding full descriptions. All parameter choices in our experiments are listed in Table 2, and we classify the cases accordingly.
In addition, we set the stopping criterion for the inner iteration process of all the methods to be
$$\frac{\| F'(u^{(k)})\, s^{(k, l_k)} + F(u^{(k)}) \|_2}{\| F(u^{(k)}) \|_2} \le \eta_k,$$
where $l_k$ is the number of inner iteration steps and $\eta_k$ is the prescribed inner tolerance. Here, we simply fix $\eta_k = 0.1$ for all $k$.
Example 1.
Consider the following time-dependent nonlinear equation [1,37]:
$$\begin{cases} u_t - (\beta_1 + i\gamma_1)(u_{xx} + u_{yy}) + \varrho u = (\beta_2 + i\gamma_2)\sin\big(1 + u_x^2 + u_y^2\big), & (t, x, y) \in (0, 1] \times \Omega, \\ u(0, x, y) = u_0(x, y), & (x, y) \in \Omega, \\ u(t, x, y) = 0, & (t, x, y) \in (0, 1] \times \partial\Omega, \end{cases}$$
where $\Omega = (0, 1) \times (0, 1)$, with $\partial\Omega$ being its boundary, and $\varrho$ is a positive constant that measures the magnitude of the reaction term.
By applying the centered finite difference scheme with spatial step size $h = \frac{1}{N+1}$ and the implicit scheme with temporal step size $\Delta t = h$, we obtain the following nonlinear equations
$$A u = \phi(u),$$
where
$$A = h(1 + \varrho \Delta t)\, I_n + (\beta_1 + i\gamma_1)\frac{\Delta t}{h}\big(A_N \otimes I_N + I_N \otimes A_N\big), \qquad A_N = \mathrm{tridiag}(-1, 2, -1) \in \mathbb{R}^{N \times N},$$
$$\phi(u) = (\beta_2 + i\gamma_2)\, h \Delta t \cdot \sin\big(1 + u_x^2 + u_y^2\big).$$
Here $n = N^2$ and $\otimes$ denotes the Kronecker product. $I_n$ and $I_N$ are identity matrices of order $n$ and $N$, respectively.
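For reference, the coefficient matrices of Example 1 can be assembled with Kronecker products as follows (our illustrative script; the parameter values shown are an arbitrary sample case, and the nonlinear term $\phi(u)$ is formed analogously from the grid function):

```matlab
% Illustrative assembly of Example 1 (ours); dt = h as stated in the text.
N = 64;  n = N^2;  h = 1/(N + 1);  dt = h;
rho = 10;  beta1 = 1;  gamma1 = 1;                     % e.g., Case 1.2
e  = ones(N, 1);
AN = spdiags([-e 2*e -e], -1:1, N, N);                 % tridiag(-1, 2, -1)
K  = kron(AN, speye(N)) + kron(speye(N), AN);          % A_N (x) I_N + I_N (x) A_N
W  = h*(1 + rho*dt)*speye(n) + beta1*(dt/h)*K;         % real part of A
T  = gamma1*(dt/h)*K;                                  % imaginary part of A
A  = W + 1i*T;                                         % A = W + iT, n = N^2
```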
Our numerical results are presented for problem sizes $N = 16$, 32, 64, 128, 256, and 512 (i.e., $n = 16^2, 32^2, 64^2, 128^2, 256^2$, and $512^2$). First, we search for the experimental optimal parameter that minimizes the inner iteration count by varying the parameter from 0.1 to 1 with step size 0.1. If the iteration count keeps decreasing as $\alpha$ decreases, we expand the search region to [0.01, 0.1] with step size 0.01, and further to [0.001, 0.01] with step size 0.001. The resulting experimental optimal parameters for all the proposed methods, with respect to the different cases and mesh grids, are given in Table 3.
The numerical results, along with it_out (the outer iteration count), IT (the total inner iteration count of the corresponding method), and CPU (the elapsed CPU time in seconds), are shown in Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. If the total inner iteration count exceeds 500, the elapsed CPU time exceeds 500 seconds, or the computation runs out of memory (especially for the Newton-based methods), we denote the corresponding results by "-" in the tables.
From Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9, we find that the outer iteration counts of the Newton-based methods are smaller than those of the Picard-based methods. However, the Newton-based methods require more CPU time than the Picard-based methods.
In addition, we find that the HSS-based iteration methods are more efficient than the Krylov subspace-based methods in terms of CPU time. Furthermore, the N-HSS method can solve the proposed problem efficiently with the optimal experimental parameters shown in Table 3, keeping the iteration counts steady as the mesh grid is refined. However, we also find that the CPU time of the N-HSS method increases rapidly as the mesh grid is refined.
Notably, the C-to-R-based iterative methods (i.e., P-C and N-C) need the fewest iterations and the least CPU time. In particular, the NC-to-R method not only keeps a steady iteration count with respect to different mesh grids, but its CPU time also grows very slowly as the mesh grid is refined.
Therefore, we conclude that the C-to-R-based iterative methods are the methods of choice among all the tested methods for solving this class of complex symmetric weakly nonlinear equations.

5. Concluding Remarks

In this paper, we focus on numerical methods for solving a class of weakly nonlinear complex symmetric equations. First, we rewrite the original system in an equivalent real-valued form. Then, we propose a C-to-R-based Picard iteration method; since this method is actually an inexact Picard iteration method, its local convergence can be obtained by making use of existing results. To further improve the efficiency, we construct a nonlinear C-to-R-based splitting iteration method. The convergence results and the theoretically optimal parameters are analyzed in detail. To illustrate the feasibility and efficiency of the new methods, we perform numerical experiments comparing them with some classical methods. The numerical results show that the new methods are the most efficient among all the tested methods.

Author Contributions

M.-L.Z. deduced the theory, implemented the algorithms with the numerical examples, reviewed the manuscript, and wrote the paper; G.-F.Z. provided some innovative advice. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (Nos. 11901324, 11771193, and 11626136), the Natural Science Foundation of Fujian Province (No. 2016J05016), and the Scientific Research Project of Putian University (No. 2015061).

Acknowledgments

The authors would like to thank the referees for the comments and constructive suggestions, which were valuable in improving the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Bai, Z.Z.; Yang, X. On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 2009, 59, 2923–2936.
2. Bai, Z.Z. A class of two-stage iterative methods for systems of weakly nonlinear equations. Numer. Algorithms 1997, 14, 295.
3. Bai, Z.Z. On the convergence of parallel chaotic nonlinear multisplitting Newton-type methods. J. Comput. Appl. Math. 1997, 80, 317–334.
4. Bai, Z.Z.; Guo, X.P. On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 2010, 28, 235–260.
5. Kelley, C.T. Iterative Methods for Linear and Nonlinear Equations; SIAM: Philadelphia, PA, USA, 1995; Volume 16.
6. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 1970; Volume 30.
7. Aidara, S. Anticipated backward doubly stochastic differential equations with non-Lipschitz coefficients. Appl. Math. Nonlinear Sci. 2019, 4, 9–20.
8. Hassan, S.S.; Reddy, M.P.; Rout, R.K. Dynamics of the modified n-degree Lorenz system. Appl. Math. Nonlinear Sci. 2019, 4, 315–330.
9. Bai, Z.Z. Block preconditioners for elliptic PDE-constrained optimization problems. Computing 2011, 91, 379–395.
10. Bai, Z.Z. Structured preconditioners for nonsingular matrices of block two-by-two structures. Math. Comput. 2006, 75, 791–815.
11. Bai, Z.Z.; Chen, F.; Wang, Z.Q. Additive block diagonal preconditioning for block two-by-two linear systems of skew-Hamiltonian coefficient matrices. Numer. Algorithms 2013, 62, 655–675.
12. Bai, Z.Z.; Benzi, M.; Chen, F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010, 87, 93–111.
13. Bai, Z.Z.; Benzi, M.; Chen, F. On preconditioned MHSS iteration methods for complex symmetric linear systems. Numer. Algorithms 2011, 56, 297–317.
14. Hezari, D.; Edalatpour, V.; Salkuyeh, D.K. Preconditioned GSOR iterative method for a class of complex symmetric system of linear equations. Numer. Linear Algebra Appl. 2015, 22, 761–776.
15. Axelsson, O.; Kucherov, A. Real valued iterative methods for solving complex symmetric linear systems. Numer. Linear Algebra Appl. 2000, 7, 197–218.
16. Axelsson, O.; Neytcheva, M.; Ahmad, B. A comparison of iterative methods to solve complex valued linear algebraic systems. Numer. Algorithms 2014, 66, 811–841.
17. Axelsson, O.; Lukáš, D. Preconditioning methods for eddy-current optimally controlled time-harmonic electromagnetic problems. J. Numer. Math. 2019, 27, 1–21.
18. Liao, L.D.; Zhang, G.F. Efficient preconditioner and iterative method for large complex symmetric linear algebraic systems. East Asian J. Appl. Math. 2017, 7, 530–547.
19. Sherman, A.H. On Newton-iterative methods for the solution of systems of nonlinear equations. SIAM J. Numer. Anal. 1978, 15, 755–771.
20. Axelsson, O. A generalized conjugate gradient, least square method. Numer. Math. 1987, 51, 209–227.
21. Saad, Y. Iterative Methods for Sparse Linear Systems; SIAM: Philadelphia, PA, USA, 2003; Volume 82.
22. Guo, X.P.; Duff, I.S. Semilocal and global convergence of the Newton–HSS method for systems of nonlinear equations. Numer. Linear Algebra Appl. 2011, 18, 299–315.
23. Li, C.X.; Wu, S.L. On LPMHSS-based iteration methods for a class of weakly nonlinear systems. Comput. Appl. Math. 2018, 37, 1232–1249.
24. Yang, A.L.; Wu, Y.J. Newton-MHSS methods for solving systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algebra Control Optim. 2012, 2, 839.
25. Li, X.; Wu, Y.J. Accelerated Newton-GPSS methods for systems of nonlinear equations. J. Comput. Anal. Appl. 2014, 17, 245–254.
26. Zhong, H.X.; Chen, G.L.; Guo, X.P. On preconditioned modified Newton-MHSS method for systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algorithms 2015, 69, 553–567.
27. Xie, F.; Wu, Q.B.; Dai, P.F. Modified Newton–SHSS method for a class of systems of nonlinear equations. Comput. Appl. Math. 2019, 38, 19–37.
28. Chen, M.H.; Dou, W.; Wu, Q.B. DPMHSS-based iteration methods for solving weakly nonlinear systems with complex coefficient matrices. Appl. Numer. Math. 2019, 146, 328–341.
29. An, H.B.; Bai, Z.Z. A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations. Appl. Numer. Math. 2007, 57, 235–252.
30. Zeng, M.L.; Zhang, G.F. A class of preconditioned TGHSS-based iteration methods for weakly nonlinear systems. East Asian J. Appl. Math. 2016, 6, 367–383.
31. Wang, J.; Guo, X.P.; Zhong, H.X. MN-DPMHSS iteration method for systems of nonlinear equations with block two-by-two complex Jacobian matrices. Numer. Algorithms 2018, 77, 167–184.
32. Amiri, A.; Darvishi, M.T.; Cordero, A.; Torregrosa, J.R. An efficient iterative method based on two-stage splitting methods to solve weakly nonlinear systems. Mathematics 2019, 7, 815.
33. Liao, L.D.; Zhang, G.F.; Li, R.X. Optimizing and improving of the C-to-R method for solving complex symmetric linear systems. Appl. Math. Lett. 2018, 82, 79–84.
34. Axelsson, O.; Liang, Z.Z. Parameter modified versions of preconditioning and iterative inner product free refinement methods for two-by-two block matrices. Linear Algebra Appl. 2019, 582, 403–429.
35. Schwartz, J.T.; Karcher, H. Nonlinear Functional Analysis; CRC Press: Boca Raton, FL, USA, 1969.
36. Huang, Y.M. A practical formula for computing optimal parameters in the HSS iteration methods. J. Comput. Appl. Math. 2014, 255, 142–149.
37. Xie, F.; Lin, R.F.; Wu, Q.B. Modified Newton-DSS method for solving a class of systems of nonlinear equations with complex symmetric Jacobian matrices. Numer. Algorithms 2019, 1–25.
Table 1. Abbreviations and the corresponding descriptions of the tested methods.

Abbreviation | Description
N-G | Newton method using the GMRES method as the inner iteration
N-H | Newton method using the HSS method as the inner iteration
HSS-N-G | Newton method using the HSS preconditioned GMRES method as the inner iteration
P-G | Picard method using the GMRES method as the inner iteration
P-H | Picard method using the HSS method as the inner iteration
HSS-P-G | Picard method using the HSS preconditioned GMRES method as the inner iteration
N-HSS | nonlinear HSS-like method
P-C | Picard method using the C-to-R iterative method as the inner iteration
CP-P-G | Picard method using the C-to-R preconditioned GMRES method as the inner iteration
N-C | nonlinear C-to-R-based iteration method
Table 2. Cases with respect to different choices of the parameters.

ϱ | (β₁+iγ₁, β₂+iγ₂) = (1+i, 1+i) | (β₁+iγ₁, β₂+iγ₂) = (0.5+i, 1+0.5i)
1 | Case 1.1 | Case 2.1
10 | Case 1.2 | Case 2.2
100 | Case 1.3 | Case 2.3
Table 3. The experimental optimal parameters with respect to different cases and mesh grids.

Case | Method | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
Case 1.1 | N-H | 0.6 | 0.4 | 0.2 | 0.2 | 0.2 | 0.1
Case 1.1 | HSS-N-G | 0.6 | 0.5 | 1 | 1 | 1 | 1
Case 1.1 | P-H | 0.7 | 0.4 | 0.2 | 0.08 | 0.01 | 0.01
Case 1.1 | HSS-P-G | 0.3 | 0.3 | 0.3 | 0.3 | 0.3 | 0.3
Case 1.1 | N-HSS | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1
Case 1.1 | P-C | 0.6 | 0.7 | 0.7 | 0.7 | 0.8 | 0.8
Case 1.1 | CP-P-G | 0.6 | 0.7 | 0.8 | 0.7 | 0.7 | 0.7
Case 1.1 | N-C | 0.001 | 0.4 | 0.1 | 0.5 | 1 | 1.3
Case 1.2 | N-H | 0.5 | 0.4 | 0.2 | 0.2 | 0.1 | 0.1
Case 1.2 | HSS-N-G | 1 | 1 | 1 | 1 | 1 | 1
Case 1.2 | P-H | 0.8 | 0.8 | 0.3 | 0.2 | 0.1 | 0.1
Case 1.2 | HSS-P-G | 0.8 | 0.8 | 0.3 | 0.2 | 0.1 | 0.1
Case 1.2 | N-HSS | 0.1 | 0.2 | 0.2 | 0.3 | 0.4 | 0.4
Case 1.2 | P-C | 0.7 | 0.7 | 0.7 | 0.7 | 0.8 | 0.8
Case 1.2 | CP-P-G | 0.6 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8
Case 1.2 | N-C | 0.001 | 0.04 | 0.1 | 0.5 | 1 | 1.3
Case 1.3 | N-H | 1.4 | 1 | 0.3 | 0.3 | 0.2 | 0.2
Case 1.3 | HSS-N-G | 0.9 | 0.9 | 0.8 | 0.8 | 0.8 | 0.7
Case 1.3 | P-H | 1.2 | 0.7 | 0.4 | 0.2 | 0.1 | 0.1
Case 1.3 | HSS-P-G | 1.2 | 0.7 | 0.4 | 0.2 | 0.1 | 0.1
Case 1.3 | N-HSS | 0.1 | 0.1 | 0.05 | 0.02 | 0.01 | 0.008
Case 1.3 | P-C | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8
Case 1.3 | CP-P-G | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8
Case 1.3 | N-C | 0.001 | 0.02 | 0.07 | 0.2 | 0.5 | 0.8
Case 2.1 | N-H | 0.5 | 0.3 | 0.2 | 0.2 | 0.1 | 0.1
Case 2.1 | HSS-N-G | 0.8 | 0.3 | 0.2 | 0.1 | 0.1 | 0.008
Case 2.1 | P-H | 0.5 | 0.3 | 0.2 | 0.1 | 0.1 | 0.08
Case 2.1 | HSS-P-G | 0.5 | 0.3 | 0.2 | 0.1 | 0.1 | 0.08
Case 2.1 | N-HSS | 0.001 | 0.02 | 0.02 | 0.03 | 0.04 | 0.04
Case 2.1 | P-C | 0.5 | 0.6 | 0.7 | 0.7 | 0.8 | 0.8
Case 2.1 | CP-P-G | 0.5 | 0.6 | 0.7 | 0.7 | 0.8 | 0.8
Case 2.1 | N-C | 0.001 | 0.05 | 0.2 | 0.4 | 0.7 | 0.5
Case 2.2 | N-H | 0.5 | 0.3 | 0.2 | 0.2 | 0.1 | 0.1
Case 2.2 | HSS-N-G | 0.5 | 0.3 | 0.2 | 0.2 | 0.1 | 0.1
Case 2.2 | P-H | 0.5 | 0.3 | 0.2 | 0.1 | 0.1 | 0.08
Case 2.2 | HSS-P-G | 0.5 | 0.3 | 0.2 | 0.1 | 0.1 | 0.08
Case 2.2 | N-HSS | 0.04 | 0.07 | 0.07 | 0.08 | 0.08 | 0.09
Case 2.2 | P-C | 0.6 | 0.6 | 0.7 | 0.7 | 0.8 | 0.8
Case 2.2 | CP-P-G | 0.6 | 0.7 | 0.8 | 0.8 | 0.8 | 0.9
Case 2.2 | N-C | 0.001 | 0.04 | 0.2 | 0.4 | 0.6 | 0.8
Case 2.3 | N-H | 0.9 | 0.5 | 0.3 | 0.2 | 0.2 | 0.1
Case 2.3 | HSS-N-G | 0.8 | 0.8 | 0.3 | 0.3 | 0.2 | 0.2
Case 2.3 | P-H | 0.9 | 0.9 | 0.3 | 0.2 | 0.2 | 0.1
Case 2.3 | HSS-P-G | 0.9 | 0.9 | 0.3 | 0.2 | 0.2 | 0.1
Case 2.3 | N-HSS | 0.02 | 0.05 | 0.05 | 0.06 | 0.07 | 0.07
Case 2.3 | P-C | 0.8 | 0.8 | 0.8 | 0.8 | 0.8 | 0.8
Case 2.3 | CP-P-G | 0.8 | 0.8 | 0.9 | 0.9 | 0.9 | 0.9
Case 2.3 | N-C | 0.001 | 0.02 | 0.08 | 0.2 | 0.5 | 0.6
Table 4. Iteration counts and elapsed CPU time for Case 1.1.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 7 | 8 | 8 | - | - | -
N-G | IT | 94 | 194 | 327 | - | - | -
N-G | CPU | 0.078 | 0.359 | 8.823 | - | - | -
N-H | it_out | 6 | 8 | 9 | - | - | -
N-H | IT | 48 | 74 | 135 | - | - | -
N-H | CPU | 0.041 | 0.631 | 4.531 | - | - | -
HSS-N-G | it_out | 8 | 9 | 10 | - | - | -
HSS-N-G | IT | 31 | 48 | 98 | - | - | -
HSS-N-G | CPU | 0.419 | 3.406 | 36.891 | - | - | -
P-G | it_out | 7 | 8 | 8 | 8 | - | -
P-G | IT | 76 | 146 | 240 | 379 | - | -
P-G | CPU | 0.046 | 0.139 | 3.578 | 20.891 | - | -
P-H | it_out | 6 | 8 | 9 | 8 | 8 | -
P-H | IT | 41 | 80 | 145 | 327 | 401 | -
P-H | CPU | 0.004 | 0.101 | 1.124 | 25.135 | 231.103 | -
HSS-P-G | it_out | 8 | 10 | 10 | 10 | - | -
HSS-P-G | IT | 30 | 44 | 55 | 87 | - | -
HSS-P-G | CPU | 0.344 | 3.309 | 9.219 | 75.641 | - | -
N-HSS | IT | 27 | 28 | 29 | 32 | 33 | 33
N-HSS | CPU | 0.016 | 0.047 | 0.735 | 4.203 | 35.741 | 302.214
P-C | it_out | 6 | 7 | 8 | 7 | 6 | 5
P-C | IT | 12 | 14 | 16 | 17 | 17 | 15
P-C | CPU | 0.001 | 0.016 | 0.111 | 3.031 | 6.516 | 20.406
CP-P-G | it_out | 6 | 8 | 8 | 8 | - | -
CP-P-G | IT | 8 | 12 | 12 | 12 | - | -
CP-P-G | CPU | 0.001 | 0.088 | 0.484 | 2.931 | - | -
N-C | IT | 17 | 7 | 9 | 7 | 7 | 6
N-C | CPU | 0.001 | 0.016 | 0.047 | 0.406 | 1.047 | 4.516
Table 5. Iteration counts and elapsed CPU time for Case 1.2.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 6 | 7 | 8 | - | - | -
N-G | IT | 78 | 166 | 314 | - | - | -
N-G | CPU | 0.047 | 0.319 | 8.406 | - | - | -
N-H | it_out | 6 | 8 | 9 | - | - | -
N-H | IT | 56 | 75 | 140 | - | - | -
N-H | CPU | 0.051 | 0.638 | 4.559 | - | - | -
HSS-N-G | it_out | 7 | 8 | 9 | - | - | -
HSS-N-G | IT | 31 | 56 | 88 | - | - | -
HSS-N-G | CPU | 0.406 | 6.75 | 32.422 | - | - | -
P-G | it_out | 6 | 7 | 8 | 8 | - | -
P-G | IT | 64 | 129 | 235 | 365 | - | -
P-G | CPU | 0.034 | 0.129 | 3.422 | 19.406 | - | -
P-H | it_out | 6 | 7 | 9 | 9 | 8 | -
P-H | IT | 37 | 111 | 136 | 145 | 218 | -
P-H | CPU | 0.003 | 0.162 | 0.903 | 18.344 | 142.541 | -
HSS-P-G | it_out | 7 | 9 | 10 | 10 | - | -
HSS-P-G | IT | 24 | 48 | 53 | 67 | - | -
HSS-P-G | CPU | 0.281 | 3.297 | 9.215 | 70.516 | - | -
N-HSS | IT | 26 | 27 | 28 | 34 | 33 | 33
N-HSS | CPU | 0.016 | 0.047 | 0.735 | 4.203 | 35.741 | 302.214
P-C | it_out | 6 | 7 | 9 | 7 | 6 | 6
P-C | IT | 12 | 14 | 18 | 18 | 17 | 18
P-C | CPU | 0.001 | 0.016 | 0.128 | 3.037 | 6.516 | 24.194
CP-P-G | it_out | 6 | 7 | 8 | 8 | - | -
CP-P-G | IT | 7 | 12 | 12 | 12 | - | -
CP-P-G | CPU | 0.001 | 0.086 | 0.484 | 2.931 | - | -
N-C | IT | 17 | 8 | 8 | 6 | 7 | 7
N-C | CPU | 0.001 | 0.021 | 0.041 | 0.401 | 1.047 | 4.844
Table 6. Iteration counts and elapsed CPU time for Case 1.3.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 4 | 5 | 6 | - | - | -
N-G | IT | 46 | 101 | 199 | - | - | -
N-G | CPU | 0.031 | 0.219 | 6.531 | - | - | -
N-H | it_out | 6 | 6 | 7 | - | - | -
N-H | IT | 25 | 53 | 84 | - | - | -
N-H | CPU | 0.023 | 0.441 | 2.917 | - | - | -
HSS-N-G | it_out | 4 | 5 | 7 | - | - | -
HSS-N-G | IT | 21 | 33 | 60 | - | - | -
HSS-N-G | CPU | 0.313 | 2.406 | 25.813 | - | - | -
P-G | it_out | 6 | 6 | 6 | 8 | - | -
P-G | IT | 42 | 78 | 138 | 304 | - | -
P-G | CPU | 0.016 | 0.078 | 1.641 | 16.172 | - | -
P-H | it_out | 6 | 6 | 7 | 9 | 9 | -
P-H | IT | 26 | 38 | 85 | 139 | 203 | -
P-H | CPU | 0.002 | 0.045 | 0.564 | 18.078 | 133.11 | -
HSS-P-G | it_out | 6 | 7 | 8 | 10 | - | -
HSS-P-G | IT | 16 | 23 | 36 | 54 | - | -
HSS-P-G | CPU | 0.263 | 1.906 | 8.859 | 68.609 | - | -
N-HSS | IT | 27 | 27 | 27 | 27 | 28 | 28
N-HSS | CPU | 0.016 | 0.047 | 0.734 | 3.984 | 31.406 | 277.66
P-C | it_out | 5 | 5 | 6 | 7 | 7 | 7
P-C | IT | 14 | 14 | 17 | 20 | 20 | 20
P-C | CPU | 0.001 | 0.016 | 0.093 | 3.127 | 7.364 | 24.969
CP-P-G | it_out | 5 | 5 | 6 | 7 | - | -
CP-P-G | IT | 9 | 10 | 12 | 12 | - | -
CP-P-G | CPU | 0.001 | 0.069 | 0.444 | 2.916 | - | -
N-C | IT | 17 | 8 | 6 | 8 | 7 | 7
N-C | CPU | 0.001 | 0.021 | 0.036 | 0.414 | 1.047 | 4.844
Table 7. Iteration counts and elapsed CPU time for Case 2.1.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 7 | 8 | 8 | - | - | -
N-G | IT | 93 | 186 | 314 | - | - | -
N-G | CPU | 0.076 | 0.338 | 8.408 | - | - | -
N-H | it_out | 6 | 7 | 9 | - | - | -
N-H | IT | 36 | 59 | 116 | - | - | -
N-H | CPU | 0.036 | 0.538 | 4.156 | - | - | -
HSS-N-G | it_out | 4 | 8 | 8 | - | - | -
HSS-N-G | IT | 17 | 34 | 46 | - | - | -
HSS-N-G | CPU | 0.308 | 5.672 | 23.408 | - | - | -
P-G | it_out | 6 | 7 | 8 | 8 | - | -
P-G | IT | 66 | 130 | 242 | 379 | - | -
P-G | CPU | 0.036 | 0.125 | 3.579 | 20.891 | - | -
P-H | it_out | 6 | 7 | 9 | 9 | 9 | -
P-H | IT | 32 | 53 | 107 | 122 | 163 | -
P-H | CPU | 0.003 | 0.072 | 0.812 | 16.984 | 91.38 | -
HSS-P-G | it_out | 7 | 9 | 10 | 10 | - | -
HSS-P-G | IT | 23 | 33 | 46 | 73 | - | -
HSS-P-G | CPU | 0.274 | 2.063 | 9.063 | 71.75 | - | -
N-HSS | IT | 26 | 27 | 28 | 28 | 29 | 29
N-HSS | CPU | 0.016 | 0.047 | 0.735 | 3.985 | 34.399 | 278.93
P-C | it_out | 6 | 7 | 8 | 7 | 6 | 5
P-C | IT | 12 | 14 | 16 | 17 | 17 | 15
P-C | CPU | 0.001 | 0.016 | 0.111 | 3.031 | 6.516 | 20.406
CP-P-G | it_out | 6 | 8 | 8 | 8 | - | -
CP-P-G | IT | 9 | 12 | 12 | 12 | - | -
CP-P-G | CPU | 0.001 | 0.088 | 0.484 | 2.931 | - | -
N-C | IT | 17 | 6 | 9 | 7 | 7 | 5
N-C | CPU | 0.001 | 0.016 | 0.047 | 0.406 | 1.047 | 4.297
Table 8. Iteration counts and elapsed CPU time for Case 2.2.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 6 | 7 | 8 | - | - | -
N-G | IT | 77 | 161 | 302 | - | - | -
N-G | CPU | 0.043 | 0.266 | 7.516 | - | - | -
N-H | it_out | 6 | 7 | 9 | - | - | -
N-H | IT | 31 | 50 | 103 | - | - | -
N-H | CPU | 0.031 | 0.438 | 3.735 | - | - | -
HSS-N-G | it_out | 7 | 8 | 8 | - | - | -
HSS-N-G | IT | 24 | 34 | 46 | - | - | -
HSS-N-G | CPU | 0.328 | 5.359 | 23.406 | - | - | -
P-G | it_out | 6 | 7 | 8 | 8 | - | -
P-G | IT | 65 | 127 | 235 | 365 | - | -
P-G | CPU | 0.036 | 0.118 | 3.469 | 19.406 | - | -
P-H | it_out | 6 | 7 | 9 | 9 | 9 | -
P-H | IT | 32 | 53 | 107 | 122 | 153 | -
P-H | CPU | 0.003 | 0.072 | 0.812 | 16.984 | 88.38 | -
HSS-P-G | it_out | 7 | 9 | 9 | 10 | - | -
HSS-P-G | IT | 22 | 32 | 39 | 68 | - | -
HSS-P-G | CPU | 0.266 | 2.052 | 9.891 | 70.953 | - | -
N-HSS | IT | 26 | 27 | 27 | 29 | 29 | 30
N-HSS | CPU | 0.016 | 0.047 | 0.734 | 3.993 | 34.399 | 279.93
P-C | it_out | 6 | 7 | 9 | 6 | 7 | 7
P-C | IT | 12 | 14 | 18 | 18 | 21 | 21
P-C | CPU | 0.001 | 0.016 | 0.128 | 3.072 | 7.406 | 24.766
CP-P-G | it_out | 6 | 7 | 8 | 8 | - | -
CP-P-G | IT | 9 | 10 | 12 | 12 | - | -
CP-P-G | CPU | 0.001 | 0.063 | 0.484 | 2.931 | - | -
N-C | IT | 17 | 6 | 10 | 6 | 8 | 7
N-C | CPU | 0.001 | 0.016 | 0.052 | 0.401 | 1.063 | 4.844
Table 9. Iteration counts and elapsed CPU time for Case 2.3.

Method | Quantity | 16×16 | 32×32 | 64×64 | 128×128 | 256×256 | 512×512
N-G | it_out | 4 | 4 | 5 | - | - | -
N-G | IT | 45 | 81 | 168 | - | - | -
N-G | CPU | 0.029 | 0.188 | 4.75 | - | - | -
N-H | it_out | 5 | 6 | 7 | - | - | -
N-H | IT | 18 | 29 | 62 | - | - | -
N-H | CPU | 0.016 | 0.259 | 2.453 | - | - | -
HSS-N-G | it_out | 4 | 4 | 5 | - | - | -
HSS-N-G | IT | 17 | 26 | 33 | - | - | -
HSS-N-G | CPU | 0.308 | 2.641 | 19.203 | - | - | -
P-G | it_out | 6 | 6 | 6 | 7 | - | -
P-G | IT | 41 | 72 | 133 | 270 | - | -
P-G | CPU | 0.016 | 0.072 | 1.613 | 15.234 | - | -
P-H | it_out | 5 | 6 | 7 | 8 | 8 | -
P-H | IT | 18 | 48 | 62 | 128 | 191 | -
P-H | CPU | 0.002 | 0.068 | 0.632 | 17.953 | 113.11 | -
HSS-P-G | it_out | 6 | 6 | 7 | 8 | - | -
HSS-P-G | IT | 13 | 22 | 27 | 45 | - | -
HSS-P-G | CPU | 0.242 | 1.603 | 7.703 | 65.250 | - | -
N-HSS | IT | 26 | 27 | 27 | 28 | 27 | 28
N-HSS | CPU | 0.016 | 0.047 | 0.734 | 3.985 | 30.813 | 277.36
P-C | it_out | 5 | 6 | 6 | 6 | 7 | 7
P-C | IT | 14 | 17 | 17 | 18 | 21 | 21
P-C | CPU | 0.001 | 0.018 | 0.109 | 3.072 | 7.406 | 24.766
CP-P-G | it_out | 5 | 4 | 6 | 6 | - | -
CP-P-G | IT | 9 | 8 | 9 | 10 | - | -
CP-P-G | CPU | 0.001 | 0.063 | 0.431 | 2.663 | - | -
N-C | IT | 17 | 7 | 7 | 8 | 8 | 8
N-C | CPU | 0.001 | 0.016 | 0.047 | 0.414 | 1.103 | 4.984
