Article

An Efficient Iterative Method Based on Two-Stage Splitting Methods to Solve Weakly Nonlinear Systems

by Abdolreza Amiri 1, Mohammad Taghi Darvishi 1,*, Alicia Cordero 2 and Juan Ramón Torregrosa 2

1 Department of Mathematics, Razi University, Kermanshah 67149, Iran
2 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(9), 815; https://doi.org/10.3390/math7090815
Submission received: 17 July 2019 / Revised: 18 August 2019 / Accepted: 19 August 2019 / Published: 3 September 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract: In this paper, an iterative method for solving large, sparse systems of weakly nonlinear equations is presented. This method is based on the Hermitian/skew-Hermitian splitting (HSS) scheme. Under suitable assumptions, we establish a convergence theorem for this method. In addition, it is shown that any faster and less time-consuming two-stage splitting method that satisfies the convergence theorem can be used in place of the HSS inner iterations. Numerical results, such as CPU times, show the robustness of our new method, which is easy to use, fast and convenient, and yields accurate solutions.

1. Introduction

For $G : D \subset \mathbb{C}^m \to \mathbb{C}^m$, we consider the following system of nonlinear equations:
$$G(x) = 0. \qquad (1)$$
Equations like (1) arise in several areas of scientific computing, in particular when finite element or finite difference techniques are used to discretize nonlinear boundary value problems, integral equations and certain nonlinear partial differential equations. Finding the roots of systems like (1) has widespread applications in numerical and applied mathematics. There are many iterative schemes to solve (1). The most common one is the classical second-order Newton scheme, which solves (1) iteratively as
$$x^{(n+1)} = x^{(n)} - G'(x^{(n)})^{-1} G(x^{(n)}), \qquad n = 0, 1, \ldots, \qquad (2)$$
where $G'(x^{(n)})$ is the Jacobian matrix of G evaluated at the nth iterate. To avoid computing the inverse of the Jacobian matrix $G'(x)$, Equation (2) is rewritten as
$$G'(x^{(n)}) (x^{(n+1)} - x^{(n)}) = -G(x^{(n)}). \qquad (3)$$
Equation (3) is a system of linear equations. Hence, setting $s^{(n)} = x^{(n+1)} - x^{(n)}$, we have to solve the following system of equations:
$$G'(x^{(n)}) s^{(n)} = -G(x^{(n)}), \qquad (4)$$
whence $x^{(n+1)} = x^{(n)} + s^{(n)}$. Thus, with this approach, at each step we have to solve a system of linear equations of the form
$$Ax = b, \qquad (5)$$
which is usually solved by an iterative scheme.
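To make this concrete, here is a minimal Python sketch of Newton's method with the correction step (4) solved as a linear system; the map G and its Jacobian below are illustrative stand-ins, not taken from the paper:

```python
# A minimal sketch of Newton's method (2)-(4): each step solves the linear
# system G'(x) s = -G(x). The map G and Jacobian G' are illustrative.
import numpy as np

A = np.array([[4.0, -1.0],
              [-1.0, 4.0]])

def G(x):
    # toy weakly nonlinear map: G(x) = phi(x) - A x with phi(x) = exp(-x)
    return np.exp(-x) - A @ x

def G_prime(x):
    # Jacobian of G: -diag(exp(-x)) - A
    return -np.diag(np.exp(-x)) - A

x = np.zeros(2)
for n in range(20):
    s = np.linalg.solve(G_prime(x), -G(x))   # Newton correction (4)
    x = x + s
    if np.linalg.norm(G(x)) < 1e-12:
        break
print(x, np.linalg.norm(G(x)))
```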
Furthermore, an inexact Newton method [1,2,3,4] is a generalization of Newton's method for solving (1) in which, at the nth iteration, the step $s^{(n)}$ from the current approximate solution $x^{(n)}$ must satisfy a condition such as
$$\|G(x^{(n)}) + G'(x^{(n)}) s^{(n)}\| \le \eta_n \|G(x^{(n)})\|,$$
for a “forcing term” $\eta_n \in [0, 1)$. Let us consider system (1) in which $G(x)$ can be separated into linear and nonlinear terms, $Ax$ and $\varphi(x)$, respectively; that is,
$$G(x) = \varphi(x) - Ax \quad \text{or} \quad Ax = \varphi(x). \qquad (6)$$
In (6), the $m \times m$ complex matrix A is positive definite, large and sparse. In addition, the vector-valued function $\varphi : D \subset \mathbb{C}^m \to \mathbb{C}^m$ is continuously differentiable, x is an m-vector and D is an open set. When the norm of the linear part $Ax$ strongly dominates the norm of the nonlinear part $\varphi(x)$ in a specific norm, system (6) is called a weakly nonlinear system [5,6]. Bai [5] used the separability and strong dominance between the linear and nonlinear parts and introduced the following iterative scheme:
$$A x^{(n+1)} = \varphi(x^{(n)}). \qquad (7)$$
Equation (7) is a system of linear equations. When the matrix A is positive definite, Axelsson et al. [7] solved it by a class of nested iteration methods. To solve positive definite linear systems, Bai et al. [8] applied the Hermitian/skew-Hermitian splitting (HSS) iterative scheme. For solving large, sparse, non-Hermitian positive definite linear systems, Li et al. [9] used an asymmetric Hermitian/skew-Hermitian splitting (AHSS) iterative scheme. Moreover, to improve the robustness of the HSS method, several HSS-based iterative algorithms have been introduced. Bai and Yang [10] presented the Picard-HSS and HSS-like methods to solve (7) when the matrix A is positive definite. Based on the matrix multi-splitting technique, block and asynchronous two-stage methods were introduced by Bai et al. [11]. The Picard circulant and skew-circulant splitting (Picard-CSCS) algorithm and the nonlinear CSCS-like iterative algorithm were presented by Zhu and Zhang [12] for the case where the coefficient matrix A is a Toeplitz matrix. A class of lopsided Hermitian/skew-Hermitian splitting (LHSS) algorithms and a class of nonlinear LHSS-like algorithms were used by Zhu [6] to solve large, sparse weakly nonlinear systems.
It must be noted that system (6) is a special form of system (1). Generally, system (6) is nonlinear. If we classify Picard-HSS and nonlinear HSS-like iterative methods as Jacobian-free schemes, then in many cases they are not as successful as Jacobian-dependent schemes such as Newton's method. Most methods for solving nonlinear systems need to compute or approximate the Jacobian matrix at the current point in each step, which is a very time-consuming process, especially when the Jacobian matrices $\varphi'(x^{(n)})$ are dense. Therefore, any scheme that does not need to compute the Jacobian matrix and can solve a wider range of problems than the existing ones is welcome. In fact, Jacobian-free methods for solving nonlinear systems are very important and form an attractive area of research.
In this paper, we present a new iterative method to solve weakly nonlinear systems. Even though the new algorithm uses some notions of the aforementioned algorithms, it differs from all of them because it has three important characteristics. First, the new algorithm is fully Jacobian-free. Second, it is easy to use. Finally, it is very successful in solving weakly nonlinear systems. The new iterative method is a synergistic combination of high-order Newton-like methods and a special splitting of the coefficient matrix A in (5).
The rest of this paper is organized as follows: in the following section, we present our new algorithm. We prove the convergence of our algorithm in Section 3. We apply our algorithm to solve some problems in Section 4. In Section 5, we conclude with some comments and discussions.

2. The New Algorithm

In the linear system $Ax = b$, we suppose that $A = H + S$, where $H = \frac{1}{2}(A + A^*)$, $S = \frac{1}{2}(A - A^*)$, and $A^*$ is the conjugate transpose of the matrix A. Hence, H and S are, respectively, the Hermitian and skew-Hermitian parts of A. Given an initial guess $x_0 \in \mathbb{C}^n$ and positive constants $\alpha$ and $tol$, the HSS scheme [8] computes $x_l$ for $l = 1, 2, \ldots$ as
$$(\alpha I + H) x_{l+\frac{1}{2}} = (\alpha I - S) x_l + b,$$
$$(\alpha I + S) x_{l+1} = (\alpha I - H) x_{l+\frac{1}{2}} + b, \qquad (8)$$
where I is the identity matrix. The stopping criterion for (8) is $\|b - A x_l\| \le tol \cdot \|b - A x_0\|$, for given $x_0$ and $tol$.
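The following is a minimal Python sketch of the HSS iteration (8); dense solves stand in for the sparse factorizations one would use in practice, and all names are illustrative:

```python
# A minimal sketch of the HSS iteration (8) for A x = b.
import numpy as np

def hss_solve(A, b, alpha, tol=1e-10, max_it=500):
    n = A.shape[0]
    H = (A + A.conj().T) / 2                  # Hermitian part of A
    S = (A - A.conj().T) / 2                  # skew-Hermitian part of A
    I = np.eye(n)
    x = np.zeros(n, dtype=complex)
    r0 = np.linalg.norm(b - A @ x)
    for _ in range(max_it):
        # first half-step: shift the Hermitian part
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        # second half-step: shift the skew-Hermitian part
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * r0:
            break
    return x
```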
Bai and Guo [13] used the HSS scheme as inner iterations to generate an inexact version of Newton's method (a code sketch follows the steps below):
(1) Consider the initial guess $x^{(0)}$, $\alpha$, $tol$ and the sequence $\{l_n\}_{n=0}^{\infty}$ of positive integers.
(2) For $n = 1, 2, \ldots$ until $\|G(x^{(n)})\| \le tol \, \|G(x^{(0)})\|$ do:
(2.1) Set $s_0^{(n)} = 0$.
(2.2) For $l = 1, 2, \ldots, l_n - 1$, apply Algorithm HSS as
$$(\alpha I + H(x^{(n)})) s_{l+\frac{1}{2}}^{(n)} = (\alpha I - S(x^{(n)})) s_l^{(n)} - G(x^{(n)}),$$
$$(\alpha I + S(x^{(n)})) s_{l+1}^{(n)} = (\alpha I - H(x^{(n)})) s_{l+\frac{1}{2}}^{(n)} - G(x^{(n)}),$$
where $H(x^{(n)})$ and $S(x^{(n)})$ are the Hermitian and skew-Hermitian parts of the Jacobian $G'(x^{(n)})$, and obtain $s_{l_n}^{(n)}$ such that
$$\|G(x^{(n)}) + G'(x^{(n)}) s_{l_n}^{(n)}\| \le \eta_n \|G(x^{(n)})\|, \quad \text{for some } \eta_n \in [0, 1).$$
(2.3) Set $x^{(n+1)} = x^{(n)} + s_{l_n}^{(n)}$.
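Assuming the hss_solve routine sketched above as the inner solver, this Newton-HSS structure can be outlined as follows; G and G_prime are user-supplied callables, and the tolerances are illustrative:

```python
# A hedged sketch of the Newton-HSS structure, reusing hss_solve from the
# previous listing as the inner solver for G'(x) s = -G(x).
import numpy as np

def newton_hss(G, G_prime, x0, alpha, tol=1e-10, eta=1e-1, max_outer=50):
    x = np.asarray(x0, dtype=complex)
    g0 = np.linalg.norm(G(x))
    for _ in range(max_outer):
        if np.linalg.norm(G(x)) <= tol * g0:
            break
        # inner HSS iterations approximately solve the Newton equation,
        # playing the role of the forcing-term condition with eta_n = eta
        s = hss_solve(G_prime(x), -G(x), alpha, tol=eta)
        x = x + s
    return x
```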
In addition, to solve weakly nonlinear problems, one can use the Picard-HSS method, a simple and Jacobian-free method, which is described as follows [10].

2.1. Picard-HSS Iteration Method

Suppose that $\varphi : D \subset \mathbb{C}^n \to \mathbb{C}^n$ is a continuous function and $A \in \mathbb{C}^{n \times n}$ is a positive definite matrix. For an initial guess $x^{(0)}$ and a sequence of positive integers $\{l_n\}_{n=0}^{\infty}$, the Picard-HSS iterative method computes $x^{(n+1)}$ for $n = 0, 1, 2, \ldots$ by the following scheme, until the stopping criterion is satisfied [10] (a code sketch follows the steps):
(1) Set $x_0^{(n)} := x^{(n)}$;
(2) For $l = 0, 1, 2, \ldots, l_n - 1$, obtain $x_{l+1}^{(n)}$ by solving
$$(\alpha I + H) x_{l+\frac{1}{2}}^{(n)} = (\alpha I - S) x_l^{(n)} + \varphi(x^{(n)}),$$
$$(\alpha I + S) x_{l+1}^{(n)} = (\alpha I - H) x_{l+\frac{1}{2}}^{(n)} + \varphi(x^{(n)}); \qquad (9)$$
(3) Set $x^{(n+1)} := x_{l_n}^{(n)}$.
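A compact sketch of this Picard-HSS loop, under the same illustrative conventions as the previous listings (dense solves, a fixed inner count l_n):

```python
# The outer Picard step freezes b = phi(x^(n)) and runs l_n HSS sweeps (9)
# on A x = b; phi is a user-supplied callable.
import numpy as np

def picard_hss(A, phi, x0, alpha, l_n=10, tol=1e-12, max_outer=100):
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.asarray(x0, dtype=complex)
    r0 = np.linalg.norm(A @ x - phi(x))
    for _ in range(max_outer):
        b = phi(x)                           # frozen nonlinear term b^(n)
        for _ in range(l_n):                 # l_n HSS inner sweeps
            x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
            x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(A @ x - phi(x)) <= tol * r0:
            break                            # outer stopping criterion
    return x
```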
The numbers $l_n$, $n = 0, 1, 2, \ldots$ depend on the problem, so in practice they are difficult to determine in actual computations. A modified form of the Picard-HSS iteration scheme, called the nonlinear HSS-like method, has been presented [10] to avoid inner iterations, as follows.

2.2. Nonlinear HSS-Like Iteration Method

For a given $x^{(0)} \in D \subset \mathbb{C}^n$, obtain $x^{(n+1)}$, $n = 0, 1, 2, \ldots$ from the following [10], until the stopping condition is satisfied:
$$(\alpha I + H) x^{(n+\frac{1}{2})} = (\alpha I - S) x^{(n)} + \varphi(x^{(n)}),$$
$$(\alpha I + S) x^{(n+1)} = (\alpha I - H) x^{(n+\frac{1}{2})} + \varphi(x^{(n+\frac{1}{2})}). \qquad (10)$$
However, in this method, it is necessary to evaluate the nonlinear term $\varphi(x)$ at each half-step, which is too costly when $\varphi(x)$ is complicated.
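For comparison, a sketch of the nonlinear HSS-like sweep (10), in which φ is re-evaluated at the half-step so that no inner iteration counts are needed; names and limits are again illustrative:

```python
import numpy as np

def nonlinear_hss_like(A, phi, x0, alpha, tol=1e-12, max_it=1000):
    n = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(n)
    x = np.asarray(x0, dtype=complex)
    r0 = np.linalg.norm(A @ x - phi(x))
    for _ in range(max_it):
        # phi is evaluated twice per sweep: at x and at the half-step
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + phi(x))
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + phi(x_half))
        if np.linalg.norm(A @ x - phi(x)) <= tol * r0:
            break
    return x
```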

2.3. Our Proposed Iterative Scheme

To solve (6) without computing Jacobian matrices, we present a new algorithm. This algorithm is a strong tool for solving weakly nonlinear problems, like the Picard-HSS and nonlinear HSS-like algorithms, but, in comparison with them, it solves a wider range of nonlinear systems. First, we rewrite (7) as
$$A x^{(n+1)} = A x^{(n)} - A x^{(n)} + \varphi(x^{(n)}),$$
that is,
$$A x^{(n+1)} - A x^{(n)} = -A x^{(n)} + \varphi(x^{(n)}).$$
After computing $x^{(n)}$, set $b^{(n)} = \varphi(x^{(n)})$ and $G_n(x) = b^{(n)} - Ax$. Then, by intermediate iterations, obtain $x^{(n+1)}$ as follows:
• Let $x_0^{(n)} = x^{(n)}$ and, until $\|G_n(x_k^{(n)})\| \le tol_n \|G_n(x_0^{(n)})\|$, solve
$$A s_k^{(n)} = G_n(x_k^{(n)}), \qquad (11)$$
where $s_k^{(n)} = x_{k+1}^{(n)} - x_k^{(n)}$ (k is the counter of the iterations (11)).
• For solving (11), one may use any inner solver; here, we use the HSS scheme. For the initial value $x_0^{(n)}$ and $k = 1, 2, \ldots, k_n - 1$, until
$$\|G_n(x_k^{(n)})\| \le tol_n \|G_n(x_0^{(n)})\|, \qquad (12)$$
apply the HSS scheme as:
(1) Set $s_{k,0}^{(n)} = 0$.
(2) For $l = 0, 1, 2, \ldots, l_{k_n} - 1$, apply Algorithm HSS (l is the counter of the HSS iterations):
$$(\alpha I + H) s_{k,l+\frac{1}{2}}^{(n)} = (\alpha I - S) s_{k,l}^{(n)} + G_n(x_k^{(n)}),$$
$$(\alpha I + S) s_{k,l+1}^{(n)} = (\alpha I - H) s_{k,l+\frac{1}{2}}^{(n)} + G_n(x_k^{(n)}), \qquad (13)$$
and obtain $s_{k,l_{k_n}}^{(n)}$ such that
$$\|G_n(x_k^{(n)}) - A s_{k,l_{k_n}}^{(n)}\| \le \eta_k^{(n)} \|G_n(x_k^{(n)})\|, \qquad \eta_k^{(n)} \in [0, 1). \qquad (14)$$
(3) Set $x_{k+1}^{(n)} = x_k^{(n)} + s_{k,l_{k_n}}^{(n)}$ ($l_{k_n}$ is the number of HSS inner iterations required to satisfy (14)).
• Finally, set $x_0^{(n+1)} = x_{k_n}^{(n)}$ ($k_n$ is the number of iterations (11) required in the nth step to satisfy (12)), $b^{(n+1)} = \varphi(x_0^{(n+1)})$, $G_{n+1}(x) = b^{(n+1)} - Ax$, and apply steps 3–14 of Algorithm 1 again until the following stopping criterion is achieved:
$$\|A x^{(n)} - \varphi(x^{(n)})\| \le tol \, \|A x^{(0)} - \varphi(x^{(0)})\|.$$
Algorithm 1: JFHSS Algorithm
[Algorithm 1 appears as a figure in the original article and is not reproduced here; its steps are those listed above.]
We call this new method the JFHSS (Jacobian-free HSS) algorithm; its steps are shown in Algorithm 1. In addition, we call the intermediate iterations Newton-like iterations because they use the same procedure as an inexact Newton method, except that, since the function used here is $b^{(n)} - Ax$ for $n = 1, 2, \ldots$, we do not need to compute any Jacobian; in fact, the Jacobian is just the matrix A. For this reason, we also call this iterative method a "Jacobian-free method".
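The following hedged Python sketch outlines the three-level structure of JFHSS — outer updates of b^(n) = φ(x^(n)), Newton-like corrections A s = G_n(x), and HSS sweeps (13) as the innermost solver; tolerances and loop limits are illustrative, not the paper's settings:

```python
import numpy as np

def jfhss(A, phi, x0, alpha, tol=1e-12, tol_n=1e-1, eta=1e-1,
          max_outer=100, max_newton=50, max_hss=200):
    m = A.shape[0]
    H = (A + A.conj().T) / 2
    S = (A - A.conj().T) / 2
    I = np.eye(m)
    x = np.asarray(x0, dtype=complex)
    r0 = np.linalg.norm(A @ x - phi(x))
    for _ in range(max_outer):                       # outer iterations (n)
        b = phi(x)                                   # b^(n), frozen
        g0 = np.linalg.norm(b - A @ x)               # ||G_n(x_0^(n))||
        for _ in range(max_newton):                  # Newton-like iterations (k)
            g = b - A @ x                            # G_n(x_k^(n))
            if np.linalg.norm(g) <= tol_n * g0:
                break                                # criterion (12)
            s = np.zeros(m, dtype=complex)
            for _ in range(max_hss):                 # HSS inner sweeps (l)
                s_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ s + g)
                s = np.linalg.solve(alpha * I + S, (alpha * I - H) @ s_half + g)
                if np.linalg.norm(g - A @ s) <= eta * np.linalg.norm(g):
                    break                            # inexactness test (14)
            x = x + s                                # Newton-like update
        if np.linalg.norm(A @ x - phi(x)) <= tol * r0:
            break                                    # outer stopping criterion
    return x
```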
Since the JFHSS scheme uses many HSS inner iterations, one may use another splitting scheme instead of the HSS method. In fact, if a faster and less time-consuming splitting method is available that satisfies the convergence theorem presented in the next section, then it can be used instead of the HSS algorithm. One such method, proposed in [14], is the GPSS (generalized positive-definite and skew-Hermitian splitting) algorithm, which uses a positive-definite and skew-Hermitian splitting instead of a Hermitian and skew-Hermitian one. Let H and S be the Hermitian and skew-Hermitian parts of A; then, the GPSS algorithm splits A as $A = P_1 + P_2$, where $P_1$ and $P_2$ are, respectively, positive definite and skew-Hermitian matrices. In fact, we have
$$P_1 = D + 2 L_G, \qquad P_2 = K + L_G^* - L_G + S, \qquad (15)$$
or
$$P_1 = D + 2 L_G^*, \qquad P_2 = K + L_G - L_G^* + S, \qquad (16)$$
where G and K are, respectively, the Hermitian and Hermitian positive semidefinite parts of H, that is, $H = G + K$; in addition, D and $L_G$ are the diagonal matrix and the strictly lower triangular matrix of G, respectively (see [14]).
Thus, to solve the linear system (5), for an initial guess $x_0 \in \mathbb{C}^n$ and positive constants $\bar{\alpha}$ and $tol$, the GPSS iteration scheme computes $x_l$ for $l = 1, 2, \ldots$ (until the stopping criterion is satisfied) by
$$(\bar{\alpha} I + P_1) x_{l+\frac{1}{2}} = (\bar{\alpha} I - P_2) x_l + b,$$
$$(\bar{\alpha} I + P_2) x_{l+1} = (\bar{\alpha} I - P_1) x_{l+\frac{1}{2}} + b, \qquad (17)$$
where $\bar{\alpha}$ is a given positive constant and I denotes the identity matrix. In addition, if we use the GPSS scheme instead of the HSS one in Algorithm 1, we denote the new method by JFGPSS (Jacobian-free GPSS).
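A small sketch of how the splitting (15) can be formed for the case K = 0 (so G = H); np.tril(..., -1) extracts the strictly lower triangular part L_G, and all names are illustrative:

```python
import numpy as np

def gpss_parts(A):
    H = (A + A.conj().T) / 2            # Hermitian part (= G when K = 0)
    S = (A - A.conj().T) / 2            # skew-Hermitian part
    D = np.diag(np.diag(H))             # diagonal matrix of G
    L_G = np.tril(H, -1)                # strictly lower triangular part of G
    P1 = D + 2 * L_G                    # positive definite part
    P2 = L_G.conj().T - L_G + S         # skew-Hermitian part of the splitting
    assert np.allclose(P1 + P2, A)      # check A = P1 + P2
    return P1, P2
```

The GPSS sweep itself then has exactly the two half-step form of (8), with $(P_1, P_2, \bar{\alpha})$ in place of $(H, S, \alpha)$.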

3. Convergence of the New Method

As we mentioned in the first section, for solving a nonlinear system, if one can separate (1) into linear and nonlinear terms, $Ax$ and $\varphi(x)$, and $Ax$ is strongly dominant over the nonlinear term, then the Picard-HSS and nonlinear HSS-like methods can solve the problem. However, in many cases, even weakly nonlinear ones, they may fail to solve the problem. Thus, to obtain a more useful method for solving (6), we presented a new iterative method based on two-stage splitting methods. Now, we prove that Algorithm 1 converges to the solution of a weakly nonlinear problem (6). In the following theorem, we prove the convergence of the JFHSS scheme.
Theorem 1.
Let $x^{(0)} \in \mathbb{C}^n$ and let $\varphi : D \subset \mathbb{C}^n \to \mathbb{C}^n$ be a G-differentiable function on an open set $N_0 \subset D$ on which $\varphi'(x)$ is continuous and $\max_{x \in N_0} \|A^{-1} \varphi'(x)\| = L < 1$. Let us suppose that $H = \frac{1}{2}(A + A^*)$ and $S = \frac{1}{2}(A - A^*)$ are the Hermitian and skew-Hermitian parts of the positive definite matrix A, that M is an upper bound for $\|A^{-1} G(x^{(0)})\|$, and that $l_{k_n}$, the number of HSS inner iterations at which the stopping criterion (14) is satisfied, obeys
$$l_{*n} > \frac{\ln\left( (1-\eta)(1-\eta^{k_{n-1}}) L^{-1} - 1 \right)}{\ln(\theta)}, \qquad (18)$$
with $l_{*n} = \liminf_{k_n} l_{k_n}$ for $n = 1, 2, 3, \ldots$, where $\eta$ is the tolerance in the Newton-like intermediate iterations, with $L < (1-\eta)^2$, and $\theta = \|T\|$, where T is the HSS inner iteration matrix, which can be written as
$$T = (\alpha I + S)^{-1} (\alpha I - H) (\alpha I + H)^{-1} (\alpha I - S).$$
Then, the sequence of iterates $\{x^{(k)}\}_{k=0}^{\infty}$ generated by the JFHSS scheme in Algorithm 1 is well-defined and converges to $x^*$ satisfying $G(x^*) = 0$; moreover,
$$\|x^{(n+1)} - x^{(n)}\| \le \delta M \rho^n, \qquad (19)$$
$$\|x^{(n+1)} - x^{(0)}\| \le \frac{\delta M}{1 - \rho}, \qquad (20)$$
where $\delta = \limsup_{n} \frac{1 + \theta^{l_{*n}}}{1 - \eta}$ and $\rho = \limsup_{n} \rho_n$ for $\rho_n = \frac{1 + \theta^{l_{*n}}}{1 - \eta} L + \eta^{k_{n-1}}$.
Proof. 
Note that $\|T\| \le \max_{\lambda_i \in \lambda(H)} \frac{|\alpha - \lambda_i|}{\alpha + \lambda_i} < 1$ (see [8]), where $\lambda(H)$ is the spectrum of H and $\alpha$ is the positive constant in the HSS inner iterations of the JFHSS scheme. Based on Algorithm 1, $l_k$ HSS sweeps started from the zero vector produce $s_k^{(n)} = (I - T^{l_k}) A^{-1} G_n(x_k^{(n)})$, so we can express $x^{(n+1)}$ as
$$x^{(n+1)} = x_{k_n}^{(n)} = x_{k_n-1}^{(n)} + (I - T^{l_{k_n}}) A^{-1} G_n(x_{k_n-1}^{(n)}) = x_{k_n-2}^{(n)} + (I - T^{l_{k_n-1}}) A^{-1} G_n(x_{k_n-2}^{(n)}) + (I - T^{l_{k_n}}) A^{-1} G_n(x_{k_n-1}^{(n)}) = \cdots = x_0^{(n)} + \sum_{j=1}^{k_n} (I - T^{l_j}) A^{-1} G_n(x_{j-1}^{(n)}) = x^{(n)} + \sum_{j=1}^{k_n} (I - T^{l_j}) A^{-1} G_n(x_{j-1}^{(n)}). \qquad (21)$$
In the last equality, we used $x_0^{(n)} = x^{(n)}$. If we use $\eta' = \eta / \mathrm{cond}(A)$ in (14) instead of $\eta$, where $\mathrm{cond}(A) = \|A\| \|A^{-1}\|$, then $\eta' \le \eta < 1$. Because of (14), we have
$$\|G_n(x_{k_n}^{(n)})\| \le \|G_n(x_{k_n}^{(n)}) - G_n(x_{k_n-1}^{(n)}) - G_n'(x_{k_n-1}^{(n)})(x_{k_n}^{(n)} - x_{k_n-1}^{(n)})\| + \|G_n(x_{k_n-1}^{(n)}) + G_n'(x_{k_n-1}^{(n)})(x_{k_n}^{(n)} - x_{k_n-1}^{(n)})\| = \|G_n(x_{k_n-1}^{(n)}) - A(x_{k_n}^{(n)} - x_{k_n-1}^{(n)})\| \le \eta' \|G_n(x_{k_n-1}^{(n)})\|,$$
so
$$\|A^{-1} G_n(x_{k_n}^{(n)})\| \le \|A^{-1}\| \|G_n(x_{k_n}^{(n)})\| \le \eta' \|A^{-1}\| \|G_n(x_{k_n-1}^{(n)})\| \le \eta' \|A^{-1}\| \|A\| \|A^{-1} G_n(x_{k_n-1}^{(n)})\| = \eta \|A^{-1} G_n(x_{k_n-1}^{(n)})\|.$$
Therefore, by mathematical induction, we can obtain
$$\|A^{-1} G_n(x_{k_n}^{(n)})\| \le \eta^{k_n} \|A^{-1} G_n(x_0^{(n)})\|. \qquad (22)$$
Then, from (21), and since $\|I - T^{l_j}\| \le 1 + \theta^{l_j} \le 1 + \theta^{l_{*n}}$ for $j = 1, 2, \ldots, k_n$, we have
$$\|x^{(n+1)} - x^{(n)}\| \le \sum_{j=1}^{k_n} \|I - T^{l_j}\| \|A^{-1} G_n(x_{j-1}^{(n)})\| \le (1 + \eta + \eta^2 + \cdots + \eta^{k_n-1})(1 + \theta^{l_{*n}}) \|A^{-1} G_n(x_0^{(n)})\| = \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \|A^{-1} G_n(x_0^{(n)})\|. \qquad (23)$$
Thus, from the last inequality, since $G_n(x) = b^{(n)} - Ax$ and $b^{(n)} = \varphi(x^{(n)})$, we have
$$\|x^{(n+1)} - x^{(n)}\| \le \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \|A^{-1}(b^{(n)} - A x^{(n)})\| \le \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \left( \|A^{-1}(\varphi(x^{(n)}) - \varphi(x^{(n-1)}))\| + \|A^{-1}(\varphi(x^{(n-1)}) - A x^{(n)})\| \right). \qquad (24)$$
Then, by using the multivariable Mean Value Theorem (see [15]), we can write
$$\|A^{-1}(\varphi(x^{(n)}) - \varphi(x^{(n-1)}))\| \le \max_{x \in S} \|A^{-1} \varphi'(x)\| \, \|x^{(n)} - x^{(n-1)}\|,$$
where $S = \{x : x = t x^{(n)} + (1-t) x^{(n-1)}, \; 0 \le t \le 1\}$. Thus,
$$\|A^{-1}(\varphi(x^{(n)}) - \varphi(x^{(n-1)}))\| \le L \|x^{(n)} - x^{(n-1)}\|. \qquad (25)$$
For the right-hand side of (24), note that $\varphi(x^{(n-1)}) - A x^{(n)} = b^{(n-1)} - A x^{(n)} = G_{n-1}(x^{(n)})$; using (22) for $n - 1$, and (25), we have
$$\|x^{(n+1)} - x^{(n)}\| \le \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \left( L \|x^{(n)} - x^{(n-1)}\| + \|A^{-1} G_{n-1}(x^{(n)})\| \right) \le \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \left( L \|x^{(n)} - x^{(n-1)}\| + \eta^{k_{n-1}} \|A^{-1} G_{n-1}(x_0^{(n-1)})\| \right). \qquad (26)$$
If, in the last inequality of (26), we use (23) in the form $\|x^{(n)} - x^{(n-1)}\| \le \frac{1 - \eta^{k_{n-1}}}{1 - \eta} (1 + \theta^{l_{*n-1}}) \|A^{-1} G_{n-1}(x_0^{(n-1)})\|$, then
$$\|x^{(n+1)} - x^{(n)}\| \le \frac{1 - \eta^{k_n}}{1 - \eta} (1 + \theta^{l_{*n}}) \left( L \frac{1 - \eta^{k_{n-1}}}{1 - \eta} (1 + \theta^{l_{*n-1}}) + \eta^{k_{n-1}} \right) \|A^{-1} G_{n-1}(x_0^{(n-1)})\|. \qquad (27)$$
Since $1 - \eta^{k_n} < 1$ for $n = 1, 2, \ldots$, by the definitions of $\rho$ and $\delta$ we have
$$\|x^{(n+1)} - x^{(n)}\| \le \delta \rho \|A^{-1} G_{n-1}(x_0^{(n-1)})\|. \qquad (28)$$
By mathematical induction, and since $\|A^{-1} G_0(x_0^{(0)})\| \le M$,
$$\|x^{(n+1)} - x^{(n)}\| \le \delta \rho^n \|A^{-1} G_0(x_0^{(0)})\| \le \delta M \rho^n,$$
which yields (19). By the stopping criterion (18), we must have $\rho < 1$, and then, using (19), it is easy to deduce
$$\|x^{(n+1)} - x^{(0)}\| \le \|x^{(n+1)} - x^{(n)}\| + \|x^{(n)} - x^{(n-1)}\| + \cdots + \|x^{(1)} - x^{(0)}\| \le \delta M (\rho^n + \cdots + \rho + 1) \le \frac{\delta M}{1 - \rho},$$
which is relation (20).
Thus, the sequence $\{x^{(n)}\}$ lies in a ball with center $x^{(0)}$ and radius $r = \frac{\delta M}{1 - \rho}$. From (28), the sequence $\{x^{(n)}\}$ is also a Cauchy sequence and converges to its limit point $x^*$. From the iteration
$$x_1^{(n)} = x_0^{(n)} + (I - T^{l_1}) A^{-1} G_n(x_0^{(n)}),$$
when $n \to \infty$, $\|x_0^{(n)} - x^*\| \to 0$ and $\|x_1^{(n)} - x^*\| \to 0$. Moreover, as $\|T\| < 1$, the matrix $(I - T^{l_1}) A^{-1}$ is nonsingular, so $G_n(x_0^{(n)}) = G(x^{(n)}) \to 0$ and, by continuity,
$$G(x^*) = 0,$$
which completes the proof. ☐
Note that, in some applications, the right-hand side of (18) may turn out to be negative; this shows that, in such cases, for all $l_{*n} \ge 1$ we must have $\rho < 1$.
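As a tiny numeric illustration of this remark (the values of η, L, θ and k_{n-1} below are illustrative, not from the paper):

```python
import math

eta, L, theta, k_prev = 0.1, 0.3, 0.6, 2
arg = (1 - eta) * (1 - eta**k_prev) / L - 1    # argument of ln in (18)
bound = math.log(arg) / math.log(theta)
print(arg, bound)   # arg = 1.97 > 1, so bound ≈ -1.33: any l* >= 1 works
```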
In addition, it is easy to deduce from the above theorem that any iterative method whose iteration matrix satisfies $\|T\| < 1$ can be used instead of the HSS method. For the JFGPSS case, the proof is similar, except that, in the inner iteration, the iteration matrix is
$$T = (\bar{\alpha} I + P_2)^{-1} (\bar{\alpha} I - P_1) (\bar{\alpha} I + P_1)^{-1} (\bar{\alpha} I - P_2).$$
The following result shows the convergence of a JFGPSS algorithm:
Theorem 2.
Let $x^{(0)} \in \mathbb{C}^n$ and let $\varphi : D \subset \mathbb{C}^n \to \mathbb{C}^n$ be a G-differentiable function on an open set $N_0 \subset D$ on which $\varphi'(x)$ is continuous and $\max_{x \in N_0} \|A^{-1} \varphi'(x)\| = L < 1$. Let us suppose that $P_1$ and $P_2$ are the generalized positive-definite and skew-Hermitian splitting parts of the positive definite matrix A as in (15) and (16), that M is an upper bound for $\|A^{-1} G(x^{(0)})\|$, and that $l_{k_n}$, the number of GPSS inner iterations at which the stopping criterion (14) is satisfied, obeys
$$l_{*n} > \frac{\ln\left( (1-\eta)(1-\eta^{k_{n-1}}) L^{-1} - 1 \right)}{\ln(\theta)},$$
with $l_{*n} = \liminf_{k_n} l_{k_n}$ for $n = 1, 2, 3, \ldots$, where $\eta$ is the tolerance in the Newton-like intermediate iterations, with $L < (1-\eta)^2$, and $\theta = \|T\|$, where T is the GPSS inner iteration matrix, which can be written as
$$T = (\bar{\alpha} I + P_2)^{-1} (\bar{\alpha} I - P_1) (\bar{\alpha} I + P_1)^{-1} (\bar{\alpha} I - P_2).$$
Then, the sequence of iterates $\{x^{(k)}\}_{k=0}^{\infty}$ generated by the JFGPSS scheme in Algorithm 1 is well-defined and converges to $x^*$ satisfying $G(x^*) = 0$; moreover,
$$\|x^{(n+1)} - x^{(n)}\| \le \delta M \rho^n, \qquad \|x^{(n+1)} - x^{(0)}\| \le \frac{\delta M}{1 - \rho},$$
where $\delta = \limsup_{n} \frac{1 + \theta^{l_{*n}}}{1 - \eta}$ and $\rho = \limsup_{n} \rho_n$ for $\rho_n = \frac{1 + \theta^{l_{*n}}}{1 - \eta} L + \eta^{k_{n-1}}$.
Proof. 
Let us note that, in this theorem, we also have $\|T\| < 1$ (for more details, see [16]). The rest of the proof is similar to that of Theorem 1. ☐
In the next section, we apply our new iterative method on some weakly nonlinear systems of equations.

4. Application

Now, we use JFHSS and JFGPSS algorithms for solving some nonlinear systems. These examples show that JFHSS and JFGPSS methods perform better than nonlinear HSS-like and Picard-HSS methods.
Example 1.
Consider the following two-dimensional nonlinear convection-diffusion equation
$$-(u_{xx} + u_{yy}) + q (u_x + u_y) = f(x, y), \quad (x, y) \in \Omega,$$
$$u(x, y) = h(x, y), \quad (x, y) \in \partial\Omega,$$
where $\Omega = (0, 1) \times (0, 1)$, $\partial\Omega$ is its boundary and q is a positive constant measuring the magnitude of the convective term. We solve this problem for each of the following cases:
Case 1: $f(x, y) = e^{u(x, y)}$, $h(x, y) = 0$.
Case 2: $f(x, y) = e^{u(x, y)} + \sin\left(1 + u_x(x, y) + u_y(x, y)\right)$, $h(x, y) = e^{x+y}$.
To discretize this convection-diffusion equation, we use the central difference scheme for the convective term and the five-point finite difference scheme for the diffusive term. This yields the following nonlinear system:
$$H(u) = M u + h^2 \psi(u), \qquad (29)$$
where $h = \frac{1}{N+1}$ is the equidistant step size, with N a given natural number, $M = A_N \otimes I_N + I_N \otimes A_N$ and $B = C_N \otimes I_N + I_N \otimes C_N$, with the tridiagonal matrices $A_N = \mathrm{tridiag}(-1 - qh/2, \; 2, \; -1 + qh/2)$ and $C_N = \mathrm{tridiag}(-1/h, \; 0, \; 1/h)$, and $I_N$ the $N \times N$ identity matrix. For case 1, we have $\psi(u) = \phi(u)$ and, for case 2, $\psi(u) = \sin(1 + Bu) + \phi(u)$, where $\phi(u) = (e^{u_1}, e^{u_2}, \ldots, e^{u_n})^T$; moreover, $\otimes$ is the Kronecker product symbol, $n = N \times N$ and $\sin(u)$ means $(\sin(u_1), \sin(u_2), \ldots, \sin(u_n))^T$. To apply the Picard-HSS, nonlinear HSS-like, JFHSS and JFGPSS methods for solving (29), the stopping criterion for the outer iteration in all methods is chosen as
$$\frac{\|M u^{(n)} + h^2 \psi(u^{(n)})\|}{\|M u^{(0)} + h^2 \psi(u^{(0)})\|} \le 10^{-12}. \qquad (30)$$
Meanwhile, the stopping criterion for the Newton-like iterations (in the JFHSS and JFGPSS methods) is
$$\frac{\|G_n(u_{k_n}^{(n)})\|}{\|G_n(u_0^{(n)})\|} \le 10^{-1}, \qquad (31)$$
and the stopping criterion for the HSS and GPSS processes in each Newton-like inner iteration is
$$\|G_n(u_k^{(n)}) - A s_{k, l_{k_n}}^{(n)}\| \le \eta \|G_n(u_k^{(n)})\|, \qquad (32)$$
where $\{u^{(n)}\}$ is the sequence generated by the JFHSS method, and $k_n$ and $l_{k_n}$ are, respectively, the numbers of Newton-like inner iterations and of HSS and GPSS inner iterations required to satisfy (31) and (32).
Moreover, to avoid computing the Jacobian in the Picard-HSS method, we propose the following stopping criterion for the inner iterations:
$$\|G(u^{(n)}) + A s_{l_n}^{(n)}\| \le \eta \|G(u^{(n)})\|. \qquad (33)$$
To use the JFGPSS method, we apply the following decomposition of the matrix M in Equation (29):
$$P_1 = D + 2 L_G, \qquad P_2 = L_G^* - L_G + S. \qquad (34)$$
In this case, K = 0, so G = H is the Hermitian part of M, and $S = \frac{1}{2}(M - M^*)$ is the skew-Hermitian part of M.
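Under the stated discretization (with the Kronecker structure of M as described above), a hedged SciPy sketch of assembling M and its splitting parts could be as follows; the matrix B for case 2 would be assembled analogously from C_N:

```python
# Assembling M = A_N ⊗ I_N + I_N ⊗ A_N with A_N tridiagonal, per the text.
import numpy as np
import scipy.sparse as sp

def build_M(N, q):
    h = 1.0 / (N + 1)
    main = 2.0 * np.ones(N)
    lower = (-1.0 - q * h / 2) * np.ones(N - 1)   # subdiagonal of A_N
    upper = (-1.0 + q * h / 2) * np.ones(N - 1)   # superdiagonal of A_N
    A_N = sp.diags([lower, main, upper], [-1, 0, 1], format="csr")
    I_N = sp.identity(N, format="csr")
    return (sp.kron(A_N, I_N) + sp.kron(I_N, A_N)).tocsr()

M = build_M(40, 1000)
H = (M + M.T) / 2      # Hermitian (here: symmetric) part of M
S = (M - M.T) / 2      # skew-Hermitian part of M
```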
Numerical results for $q = 1000$, $q = 2000$ and initial points $u^{(0)} = \bar{1}$, $u^{(0)} = 4 \times \bar{1}$ for both cases, as well as $u^{(0)} = 12 \times \bar{1}$ for case 1 and $u^{(0)} = 13 \times \bar{1}$ for case 2, and different values of N for the JFHSS, JFGPSS, nonlinear HSS-like and Picard-HSS schemes, are reported in Table 1 and Table 2. Other numerical results, such as the CPU time (total CPU time), the numbers of outer and inner iteration steps (denoted as $IT_{out}$ and $IT_{inn}$, respectively), and the 2-norm of the function at the last step (denoted by $\|F(u^{(n)})\|$), are also presented in these tables. For the JFHSS and JFGPSS algorithms, the values of $IT_{int}$ and $IT_{inn}$ are reported: $IT_{int}$ is the total number of intermediate iterations of the Newton-like method, while $IT_{inn}$ is the total number of inner HSS or GPSS iterations used in the Newton-like iterations, divided by the total number of Newton-like iterations.
Except for $u^{(0)} = \bar{1}$, which is relatively close to the solution (in case 1, the true solution u is near zero and, in case 2, almost all coordinates $u_i$, $i = 1, 2, \ldots, n$, of the solution satisfy $0 \le u_i \le 1$), the nonlinear HSS-like method could not perform the iterations at all for the other initial points of Table 1 and Table 2, whereas the JFHSS and JFGPSS methods easily solved the problem for all points in both cases. Picard-HSS could not solve the problem for these three initial points and, in all cases, fails, especially for $q > 500$.
Numerical results show that the numbers of inner iterations for JFHSS and nonlinear HSS-like are almost the same, while for JFGPSS they are lower than for these two methods. For example, in Table 1, for $u^{(0)} = \bar{1}$, $q = 1000$ and $N = 40$, the numbers of inner iterations for the JFHSS and JFGPSS methods are, respectively, 133 and 96, and the total number of iterations of the nonlinear HSS-like method (note that there is only one kind of iteration in the nonlinear HSS-like method) is 127. However, the nonlinear HSS-like method needs many more evaluations of the nonlinear term $\psi(u)$ than the JFHSS method (only 12 function evaluations are required for the JFHSS method, compared to 254 for the nonlinear HSS-like method). Thus, the JFHSS and JFGPSS methods can significantly reduce the computational cost of evaluating the nonlinear term, especially when the nonlinear part is complicated; e.g., in Example 2, the gap between the computational cost of the nonlinear HSS-like method and that of the JFHSS method widens, since the problem has a more complicated nonlinear term.
It must be noted that, in the inner iterations, for solving the linear systems related to the Hermitian part (in the HSS scheme) and the skew-Hermitian part (in both HSS and GPSS schemes), we employed, respectively, the conjugate gradient (CG) method and the Lanczos method (for more details, see [17]).
In this example, $\eta = tol$ was used at all steps; in most cases, we obtained equal numbers of Newton-like and outer iterations at each step. However, in general, choosing equal $\eta$ and $tol$ does not always lead to equal numbers of Newton-like and outer iterations. For example, cases of increased nonlinearity (e.g., for the initial value $u^{(0)} = 12 \times \bar{1}$, the nonlinear term $h^2 \psi(u)$ is large in the first steps) result in different numbers of Newton-like and outer iterations. In all tables of this paper, "a, b" denotes the number $a \cdot 10^b$.
The optimal value of the parameter $\alpha$, which minimizes the bound on the spectral radius of the iteration matrices, is important because it also improves the convergence speed of the Picard-HSS, nonlinear HSS-like, JFHSS and JFGPSS methods. There are no general results to determine the optimal $\alpha$ and $\bar{\alpha}$, so we need to obtain the optimal values of the parameters $\alpha$ and $\bar{\alpha}$ experimentally. However, Bai and Golub [8] proved that the spectral radius of the HSS iteration matrix obtained from the coefficient matrix M in (29) is bounded by $\|T\| \le \sigma(\alpha) \equiv \max_{\lambda_i \in \lambda(H)} \frac{|\alpha - \lambda_i|}{\alpha + \lambda_i} < 1$, and the minimum of this bound is attained at
$$\alpha = \alpha^* = \sqrt{\lambda_{min}(H) \, \lambda_{max}(H)},$$
where $\lambda_{min}(H)$ and $\lambda_{max}(H)$ are, respectively, the smallest and largest eigenvalues of the Hermitian matrix H. Usually, in an HSS scheme, $\alpha_{opt} \ne \alpha^* \equiv \arg\min_{\alpha > 0}\{\sigma(\alpha)\}$ and $\rho(T(\alpha^*)) \ge \rho(T(\alpha_{opt}))$. When q or $qh/2$ is small, $\sigma(\alpha)$ is close to $\rho(T(\alpha))$; in this case, $\alpha^*$ is close to $\alpha_{opt}$ and can be a good estimate of $\alpha_{opt}$. However, when q or $qh/2$ is large (the skew-Hermitian part is dominant), $\sigma(\alpha)$ deviates too much from $\rho(T(\alpha))$, so using $\alpha^*$ is not useful. In this case, $\rho(T(\alpha))$ attains its minimum at an $\alpha_{opt}$ that is far from $\alpha^*$ but close to $qh/2$ (see [8]).
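For instance, assuming the sparse matrix H from the previous listing, α* can be estimated with SciPy's sparse Hermitian eigensolver (a hedged sketch; for large N one would use cheaper extremal-eigenvalue estimates):

```python
from scipy.sparse.linalg import eigsh

lam_max = eigsh(H, k=1, which="LA", return_eigenvectors=False)[0]
lam_min = eigsh(H, k=1, which="SA", return_eigenvectors=False)[0]
alpha_star = (lam_min * lam_max) ** 0.5   # alpha* = sqrt(lambda_min * lambda_max)
print(alpha_star)
```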
In the GPSS case, the spectral radius of $T(\bar{\alpha})$ is bounded by $V(\bar{\alpha}) = \|(\bar{\alpha} I - P_1)(\bar{\alpha} I + P_1)^{-1}\|_2$. Since $V(\bar{\alpha}) < 1$ (see [18]), the GPSS inner iterations converge unconditionally to the exact solution in each inner iteration of the JFGPSS scheme. However, when $P_1 \in \mathbb{C}^{n \times n}$ is a general positive definite matrix, we have no formula to compute $\bar{\alpha}^* \equiv \arg\min_{\bar{\alpha} > 0}\{V(\bar{\alpha})\}$, the value that minimizes the bound on the iteration matrix $T(\bar{\alpha})$, nor do we have a formula for $\bar{\alpha}_{opt}$, the value that minimizes $\|T(\bar{\alpha})\|$.
In Table 3, the experimentally determined optimal values $\alpha_{opt}$ and $\bar{\alpha}_{opt}$ are reported (tested in increments of 0.25). In addition, the corresponding spectral radii of the iteration matrices $T(\alpha)$ and $T(\bar{\alpha})$ for the HSS and GPSS algorithms used as inner iterations to solve (29) are reported in this table. One can see that the spectral radius of the GPSS method is smaller than that of the HSS scheme in all cases, which results in faster convergence.
Example 2 ([10]).
We consider the two-dimensional nonlinear convection-diffusion equation
$$-(u_{xx} + u_{yy}) + q e^{x+y} (x u_x + y u_y) = u e^u + \sin\left(1 + u_x^2 + u_y^2\right), \quad (x, y) \in \Omega,$$
$$u(x, y) = 0, \quad (x, y) \in \partial\Omega,$$
where $\Omega = (0, 1) \times (0, 1)$, $\partial\Omega$ is its boundary and q is a positive constant measuring the magnitude of the convective term. By applying the upwind finite difference scheme on the equidistant discretization grid (step size $h = \frac{1}{N+1}$), with the central difference scheme for the convective term, we obtain a system of nonlinear equations of the general form (for more details, see [10])
$$H(x) = M x - h^2 \psi(x).$$
We selected the zero vector $u^{(0)} = \bar{0} = (0, 0, \ldots, 0)^T$ as the initial guess. In addition, (31) and (32) are again used, respectively, as the stopping criteria for the inner iterations and the Newton-like iterations in the JFHSS method, and (30) for the outer iterations in the JFHSS, Picard-HSS and nonlinear HSS-like methods. Moreover, to avoid computing the Jacobian in the Picard-HSS and nonlinear HSS-like methods, we used (33). As in Example 1, one can use other iterative methods instead of HSS in Algorithm 1 whose iteration matrices have smaller spectral radii and thus yield faster convergence.
Numerical results for $N = 32, 48, 64$, optimal $\alpha$ and different values of q for the JFHSS, Picard-HSS and nonlinear HSS-like schemes are reported in Table 4. We adopted the experimentally optimal parameters $\alpha$ that give the least CPU times for these iterative methods. One can see that JFHSS performs better than the nonlinear HSS-like and Picard-HSS methods in all cases.

5. Conclusions

In this paper, an iterative method based on two-stage splitting methods has been proposed to solve weakly nonlinear systems, and a convergence property of this method has been investigated. This method is a combination of an inexact Newton method and the Hermitian and skew-Hermitian splitting (or generalized positive-definite and skew-Hermitian splitting) scheme. The advantage of our new method, Picard-HSS and nonlinear HSS-like over methods like Newton's method is that they do not need the explicit construction and accurate computation of the Jacobian matrix. Hence, computational work and computer memory may be saved in actual applications; moreover, numerical results show that the JFHSS and JFGPSS methods perform better than the other two.
Numerical results show that the JFHSS and JFGPSS iteration algorithms are effective, robust and feasible nonlinear solvers for a class of weakly nonlinear systems. Moreover, employing these algorithms to solve nonlinear systems is simple, accurate, fast, flexible and convenient, with small computational cost. In addition, it must be noted that, even though our inner iteration schemes in this paper are the HSS and GPSS methods, another inner solver can be used, subject to the condition that its iteration matrix satisfies $\|T\| < 1$.

Author Contributions

The contributions of the authors are roughly equal.

Funding

This research received no external funding.

Acknowledgments

The third and fourth authors have been partially supported by the Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22 and Generalitat Valenciana PROMETEO/2016/089.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Shen, W.; Li, C. Kantorovich-type convergence criterion for inexact Newton methods. Appl. Numer. Math. 2009, 59, 1599–1611.
2. An, H.-B.; Bai, Z.-Z. A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations. Appl. Numer. Math. 2007, 57, 235–252.
3. Eisenstat, S.C.; Walker, H.F. Globally convergent inexact Newton methods. SIAM J. Optim. 1994, 4, 393–422.
4. Gomes-Ruggiero, M.A.; Lopes, V.L.R.; Toledo-Benavides, J.V. A globally convergent inexact Newton method with a new choice for the forcing term. Ann. Oper. Res. 2008, 157, 193–205.
5. Bai, Z.-Z. A class of two-stage iterative methods for systems of weakly nonlinear equations. Numer. Algorithms 1997, 14, 295–319.
6. Zhu, M.-Z. Modified iteration methods based on the Asymmetric HSS for weakly nonlinear systems. J. Comput. Anal. Appl. 2013, 15, 188–195.
7. Axelsson, O.; Bai, Z.-Z.; Qiu, S.-X. A class of nested iteration schemes for linear systems with a coefficient matrix with a dominant positive definite symmetric part. Numer. Algorithms 2004, 35, 351–372.
8. Bai, Z.-Z.; Golub, G.H.; Ng, M.K. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J. Matrix Anal. Appl. 2003, 24, 603–626.
9. Li, L.; Huang, T.-Z.; Liu, X.-P. Asymmetric Hermitian and skew-Hermitian splitting methods for positive definite linear systems. Comput. Math. Appl. 2007, 54, 147–159.
10. Bai, Z.-Z.; Yang, X. On HSS-based iteration methods for weakly nonlinear systems. Appl. Numer. Math. 2009, 59, 2923–2936.
11. Bai, Z.-Z.; Migallón, V.; Penadés, J.; Szyld, D.B. Block and asynchronous two-stage methods for mildly nonlinear systems. Numer. Math. 1999, 82, 1–20.
12. Zhu, M.-Z.; Zhang, G.-F. On CSCS-based iteration methods for Toeplitz system of weakly nonlinear equations. J. Comput. Appl. Math. 2011, 235, 5095–5104.
13. Bai, Z.-Z.; Guo, X.-P. On Newton-HSS methods for systems of nonlinear equations with positive-definite Jacobian matrices. J. Comput. Math. 2010, 28, 235–260.
14. Li, X.; Wu, Y.-J. Accelerated Newton-GPSS methods for systems of nonlinear equations. J. Comput. Anal. Appl. 2014, 17, 245–254.
15. Edwards, C.H. Advanced Calculus of Several Variables; Academic Press: New York, NY, USA, 1973.
16. Cao, Y.; Tan, W.-W.; Jiang, M.-Q. A generalization of the positive-definite and skew-Hermitian splitting iteration. Numer. Algebra Control Optim. 2012, 2, 811–821.
17. Bai, Z.-Z.; Golub, G.H.; Ng, M.K. On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. Linear Algebra Appl. 2008, 428, 413–440.
18. Bai, Z.-Z.; Golub, G.H.; Lu, L.-Z.; Yin, J.-F. Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J. Sci. Comput. 2005, 26, 844–863.
Table 1. Results for JFHSS, JFGPSS, nonlinear HSS-like and Picard-HSS methods of Example 1, Case 1 (η = tol = 0.1). A dash (–) indicates that the method failed to solve the problem.

| q, u^(0) | Method | Quantity | N = 30 | N = 40 | N = 60 | N = 70 | N = 80 | N = 100 |
|---|---|---|---|---|---|---|---|---|
| q = 1000, u^(0) = 1̄ | JFHSS | CPU | 0.65 | 1.81 | 7.46 | 13.21 | 24.45 | 59.32 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 9 | 11.08 | 10.75 | 10.75 | 10.41 | 10.91 |
| | | ‖F(u^(n))‖ | 1.86, −11 | 3.35, −11 | 1.70, −11 | 3.41, −11 | 3.54, −11 | 2.43, −11 |
| | JFGPSS | CPU | 0.63 | 1.46 | 5.79 | 9.84 | 17.28 | 44.50 |
| | | IT_out | 12 | 12 | 11 | 11 | 11 | 11 |
| | | IT_int | 14 | 12 | 11 | 11 | 11 | 11 |
| | | IT_inn | 8.78 | 8 | 7.64 | 7.45 | 7.90 | 8.73 |
| | | ‖F(u^(n))‖ | 5.45, −11 | 1.89, −11 | 7.69, −11 | 1.02, −10 | 9.63, −11 | 5.09, −11 |
| | Nonlinear HSS-like | CPU | 0.82 | 2.03 | 8.26 | 14.60 | 24.65 | 61.35 |
| | | IT | 129 | 127 | 123 | 124 | 128 | 126 |
| | | ‖F(u^(n))‖ | 1.45, −10 | 1.53, −10 | 1.25, −10 | 1.10, −10 | 8.60, −11 | 8.91, −11 |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 1̄ | JFHSS | CPU | 1.04 | 2.71 | 11.32 | 19.87 | 31.48 | 76.13 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 16.08 | 14.67 | 14.25 | 14.17 | 14 | 14.08 |
| | | ‖F(u^(n))‖ | 1.47, −10 | 9.30, −11 | 7.92, −11 | 8.80, −11 | 9.56, −11 | 6.56, −11 |
| | JFGPSS | CPU | 0.85 | 2.20 | 8.57 | 14.26 | 23.50 | 54.90 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 14.42 | 12.42 | 10.58 | 10 | 9.84 | 9.91 |
| | | ‖F(u^(n))‖ | 1.57, −10 | 8.49, −11 | 3.33, −11 | 2.80, −11 | 2.38, −11 | 4.23, −11 |
| | Nonlinear HSS-like | CPU | 1.32 | 2.94 | 12.11 | 20.51 | 33.88 | 80.97 |
| | | IT | 188 | 172 | 167 | 166 | 165 | 165 |
| | | ‖F(u^(n))‖ | 3.24, −10 | 2.50, −10 | 2.07, −10 | 2.32, −10 | 2.037, −10 | 1.81, −10 |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 1000, u^(0) = 4×1̄ | JFHSS | CPU | 0.80 | 2.24 | 9.34 | 14.56 | 23.77 | 60.21 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 11.08 | 11 | 10.67 | 10.75 | 10.50 | 11.25 |
| | | ‖F(u^(n))‖ | 1.94, −10 | 1.70, −10 | 9.33, −11 | 9.94, −11 | 1.15, −10 | 8.77, −11 |
| | JFGPSS | CPU | 0.56 | 1.51 | 6.55 | 12.47 | 21.03 | 55.50 |
| | | IT_out | 12 | 12 | 11 | 12 | 11 | 11 |
| | | IT_int | 12 | 12 | 11 | 12 | 11 | 11 |
| | | IT_inn | 8.92 | 8.34 | 8.72 | 8.75 | 9.55 | 10.63 |
| | | ‖F(u^(n))‖ | 9.76, −11 | 7.68, −11 | 4.60, −10 | 6.35, −11 | 3.73, −10 | 3.78, −10 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 4×1̄ | JFHSS | CPU | 0.99 | 2.51 | 11.20 | 19.45 | 32.23 | 77.58 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 16.08 | 14.67 | 14.25 | 14.17 | 14 | 14.08 |
| | | ‖F(u^(n))‖ | 5.88, −10 | 3.71, −10 | 3.20, −10 | 3.57, −10 | 3.75, −10 | 2.69, −10 |
| | JFGPSS | CPU | 0.85 | 2.20 | 8.58 | 14.02 | 23.22 | 54.94 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 14.42 | 12.41 | 10.58 | 9.92 | 9.84 | 9.84 |
| | | ‖F(u^(n))‖ | 6.26, −10 | 3.44, −10 | 1.31, −10 | 1.63, −10 | 1.08, −10 | 2.08, −10 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 1000, u^(0) = 12×1̄ | JFHSS | CPU | 0.81 | 2.23 | 10.41 | 18.89 | 31.31 | 81.74 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 14 |
| | | IT_inn | 10.85 | 12.83 | 11.28 | 11.71 | 11.64 | 12.93 |
| | | ‖F(u^(n))‖ | 1.47, −8 | 1.05, −8 | 7.50, −9 | 4.55, −9 | 3.08, −9 | 3.29, −9 |
| | JFGPSS | CPU | 0.66 | 1.70 | 7.95 | 14.30 | 25.44 | 63.80 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 14 |
| | | IT_inn | 8.78 | 8 | 7.86 | 8.64 | 9.07 | 9.92 |
| | | ‖F(u^(n))‖ | 8.02, −9 | 3.11, −8 | 3.40, −9 | 2.32, −9 | 1.61, −9 | 6.16, −10 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 12×1̄ | JFHSS | CPU | 1.06 | 2.90 | 13.55 | 21.72 | 38.94 | 87.18 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 13 | 14 | 13 |
| | | IT_inn | 14.93 | 14.36 | 14.86 | 14.62 | 14.57 | 15 |
| | | ‖F(u^(n))‖ | 2.06, −8 | 1.48, −8 | 9.76, −9 | 6.96, −9 | 6.58, −9 | 5.72, −9 |
| | JFGPSS | CPU | 0.95 | 2.45 | 10.03 | 17.81 | 29.06 | 69.57 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 13 |
| | | IT_inn | 13.71 | 11.64 | 10.71 | 10.85 | 10.64 | 11.31 |
| | | ‖F(u^(n))‖ | 1.91, −8 | 1.30, −8 | 6.08, −9 | 3.26, −9 | 6.35, −9 | 3.23, −9 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
Table 2. Results for JFHSS, JFGPSS, nonlinear HSS-like and Picard-HSS methods of Example 1, Case 2 (η = tol = 0.1). A dash (–) indicates that the method failed to solve the problem.

| q, u^(0) | Method | Quantity | N = 30 | N = 40 | N = 60 | N = 70 | N = 80 | N = 100 |
|---|---|---|---|---|---|---|---|---|
| q = 1000, u^(0) = 1̄ | JFHSS | CPU | 0.73 | 2.03 | 9.54 | 16.99 | 27.27 | 65.47 |
| | | IT_out | 11 | 11 | 11 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 13 | 13 | 13 |
| | | IT_inn | 11.25 | 11.41 | 11.92 | 11.23 | 10.92 | 12 |
| | | ‖F(u^(n))‖ | 1.64, −10 | 1.18, −10 | 1.43, −11 | 1.32, −11 | 1.95, −11 | 1.56, −11 |
| | JFGPSS | CPU | 0.57 | 1.53 | 7.19 | 12.59 | 19.42 | 53.59 |
| | | IT_out | 11 | 11 | 11 | 11 | 11 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 13 |
| | | IT_inn | 9 | 8.25 | 8.75 | 8.92 | 8.25 | 9 |
| | | ‖F(u^(n))‖ | 5.45, −11 | 8.19, −11 | 8.56, −11 | 7.6, −11 | 5.27, −11 | 4.81, −12 |
| | Nonlinear HSS-like | CPU | 0.82 | 2.30 | 9.86 | 14.38 | 29.31 | 59.91 |
| | | IT | 128 | 128 | 123 | 124 | 121 | 126 |
| | | ‖F(u^(n))‖ | 1.81, −10 | 1.43, −10 | 1.25, −10 | 1.10, −10 | 1.15, −10 | 1.06, −10 |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 1̄ | JFHSS | CPU | 0.98 | 2.63 | 11.73 | 20.51 | 36.07 | 77.20 |
| | | IT_out | 11 | 11 | 11 | 11 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 13 | 13 |
| | | IT_inn | 16 | 15 | 14.50 | 14.67 | 14.30 | 14.62 |
| | | ‖F(u^(n))‖ | 2.49, −10 | 2.26, −10 | 2.03, −10 | 2.30, −10 | 2.61, −11 | 1.74, −11 |
| | JFGPSS | CPU | 0.88 | 2.26 | 8.61 | 15.08 | 25.83 | 60.7 |
| | | IT_out | 11 | 11 | 11 | 11 | 12 | 11 |
| | | IT_int | 12 | 12 | 12 | 12 | 13 | 12 |
| | | IT_inn | 14.33 | 12.08 | 10.58 | 10.66 | 9.69 | 11 |
| | | ‖F(u^(n))‖ | 2.04, −10 | 2.91, −10 | 1.91, −10 | 1.09, −10 | 1.64, −11 | 1.72, −10 |
| | Nonlinear HSS-like | CPU | 1.15 | 3.85 | 12.52 | 19.61 | 37.70 | 79.26 |
| | | IT | 187 | 171 | 166 | 166 | 164 | 164 |
| | | ‖F(u^(n))‖ | 3.68, −10 | 3.07, −10 | 2.48, −10 | 2.17, −10 | 2.42, −10 | 2.13, −10 |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 1000, u^(0) = 4×1̄ | JFHSS | CPU | 0.72 | 2.28 | 9.39 | 16.62 | 28.53 | 67.23 |
| | | IT_out | 11 | 11 | 11 | 12 | 11 | 11 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 11.41 | 11.33 | 11.41 | 11.75 | 12.08 | 12.34 |
| | | ‖F(u^(n))‖ | 1.62, −10 | 2.01, −10 | 1.64, −10 | 1.92, −10 | 2.89, −10 | 2.47, −10 |
| | JFGPSS | CPU | 0.69 | 1.97 | 8.85 | 16.53 | 26.53 | 70.80 |
| | | IT_out | 11 | 11 | 11 | 11 | 12 | 11 |
| | | IT_int | 12 | 12 | 12 | 12 | 13 | 12 |
| | | IT_inn | 10.91 | 11.16 | 11 | 11.42 | 11.42 | 12.34 |
| | | ‖F(u^(n))‖ | 2.18, −10 | 1.22, −10 | 1.21, −10 | 8.35, −11 | 1.15, −10 | 1.17, −10 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 4×1̄ | JFHSS | CPU | 0.97 | 2.59 | 11.06 | 20.24 | 33.75 | 79.80 |
| | | IT_out | 11 | 11 | 11 | 11 | 11 | 11 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 15.92 | 15.08 | 14.50 | 14.58 | 14.66 | 14.92 |
| | | ‖F(u^(n))‖ | 9.65, −10 | 4.15, −10 | 4.31, −10 | 4.40, −10 | 3.82, −10 | 3.39, −10 |
| | JFGPSS | CPU | 0.88 | 2.18 | 8.75 | 14.60 | 25.05 | 64.79 |
| | | IT_out | 11 | 11 | 11 | 11 | 11 | 11 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 14.50 | 12.42 | 10.66 | 10.42 | 10.58 | 11.08 |
| | | ‖F(u^(n))‖ | 5.06, −10 | 3.61, −10 | 2.53, −10 | 3.32, −10 | 2.95, −10 | 1.97, −10 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 1000, u^(0) = 13×1̄ | JFHSS | CPU | 0.77 | 2.33 | 10.82 | 19.56 | 32.13 | 85.32 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 14 |
| | | IT_inn | 10.85 | 12.83 | 11.28 | 11.71 | 11.64 | 12.92 |
| | | ‖F(u^(n))‖ | 1.44, −8 | 1.36, −8 | 7.47, −9 | 4.56, −9 | 3.8, −9 | 3.54, −9 |
| | JFGPSS | CPU | 0.65 | 1.74 | 8.00 | 14.54 | 25.02 | 64.19 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 14 |
| | | IT_inn | 8.78 | 8 | 8.28 | 8.64 | 9.07 | 9.86 |
| | | ‖F(u^(n))‖ | 8.03, −9 | 1.44, −8 | 3.35, −9 | 4.76, −9 | 1.69, −9 | 1.085, −9 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
| q = 2000, u^(0) = 13×1̄ | JFHSS | CPU | 1.08 | 2.97 | 11.27 | 22.45 | 39.98 | 89.15 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 14 |
| | | IT_inn | 14.93 | 14.35 | 14.43 | 14.62 | 14.57 | 15 |
| | | ‖F(u^(n))‖ | 2.01, −8 | 1.49, −8 | 8.73, −9 | 6.97, −9 | 6.57, −9 | 5.72, −9 |
| | JFGPSS | CPU | 0.99 | 2.41 | 10.15 | 17.98 | 29.33 | 67.45 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 14 | 14 | 14 | 14 | 14 | 13 |
| | | IT_inn | 13.78 | 11.64 | 10.71 | 10.86 | 10.64 | 11.31 |
| | | ‖F(u^(n))‖ | 1.70, −8 | 1.30, −8 | 6.02, −9 | 3.23, −9 | 6.34, −9 | 3.21, −9 |
| | Nonlinear HSS-like | – | – | – | – | – | – | – |
| | Picard-HSS | – | – | – | – | – | – | – |
Table 3. Optimal value of α for HSS and GPSS inner iterations for different values of N and q of Example 1.

| q | Quantity | N = 30 | N = 40 | N = 50 | N = 60 | N = 70 | N = 80 | N = 90 | N = 100 |
|---|---|---|---|---|---|---|---|---|---|
| q = 1000 | HSS α_opt | 18 | 15 | 10.5 | 9 | 8 | 6 | 5.75 | 5.75 |
| | ρ(T(α_opt)) | 0.7226 | 0.6930 | 0.6743 | 0.6613 | 0.6513 | 0.6485 | 0.6459 | 0.6467 |
| | α* | 0.4047 | 0.3062 | 0.2462 | 0.2059 | 0.1769 | 0.1551 | 0.1381 | 0.1244 |
| | ρ(T(α*)) | 0.8971 | 0.9211 | 0.9360 | 0.9461 | 0.9535 | 0.9590 | 0.9634 | 0.9669 |
| | qh/2 | 16.1290 | 12.1951 | 9.8039 | 8.1967 | 7.0423 | 6.1728 | 5.4945 | 4.9505 |
| | ρ(T(qh/2)) | 0.7236 | 0.6974 | 0.6783 | 0.6674 | 0.6608 | 0.6574 | 0.6562 | 0.6569 |
| | GPSS ᾱ_opt | 11.25 | 9.5 | 8.5 | 7.5 | 7 | 6.5 | 6 | 5.5 |
| | ρ(T(ᾱ_opt)) | 0.5428 | 0.5140 | 0.5076 | 0.4983 | 0.4959 | 0.4902 | 0.4982 | 0.4983 |
| q = 2000 | HSS α_opt | 26 | 22 | 16 | 13.5 | 12 | 10 | 8.75 | 8 |
| | ρ(T(α_opt)) | 0.7911 | 0.7663 | 0.6499 | 0.7399 | 0.7373 | 0.7302 | 0.7302 | 0.7242 |
| | α* | 0.1638 | 0.0938 | 0.0606 | 0.0424 | 0.0313 | 0.0241 | 0.0191 | 0.0155 |
| | ρ(T(α*)) | 0.9579 | 0.9757 | 0.9842 | 0.9889 | 0.9918 | 0.9937 | 0.9950 | 0.9959 |
| | qh/2 | 32.2581 | 24.39 | 19.61 | 16.3934 | 14.0845 | 12.35 | 10.99 | 9.9010 |
| | ρ(T(qh/2)) | 0.7953 | 0.77 | 0.7512 | 0.7439 | 0.7343 | 0.728 | 0.7282 | 0.7270 |
| | GPSS ᾱ_opt | 15 | 13 | 11 | 10 | 9 | 8 | 7.5 | 7 |
| | ρ(T(ᾱ_opt)) | 0.6424 | 0.6212 | 0.6144 | 0.6063 | 0.6036 | 0.6028 | 0.6090 | 0.6033 |
Table 4. Results of JFHSS, nonlinear HSS-like and Picard-HSS methods for Example 2 (η = tol = 0.1).

| N | Method | Quantity | q = 50 | q = 100 | q = 200 | q = 400 | q = 1200 | q = 2000 |
|---|---|---|---|---|---|---|---|---|
| N = 32 | | α_opt | 1.4 | 1.6 | 2.5 | 8 | 21.5 | 34 |
| | JFHSS | CPU | 1.23 | 1.42 | 1.29 | 1.53 | 1.71 | 1.86 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 11.34 | 11.67 | 12 | 12.75 | 16.25 | 20.34 |
| | | ‖F(u^(n))‖ | 1.54, −14 | 2.2, −14 | 1.47, −14 | 6.87, −15 | 8.22, −15 | 1.3, −14 |
| | Nonlinear HSS-like | CPU | 2.03 | 2.39 | 2.25 | 2.31 | 2.42 | 2.45 |
| | | IT | 129 | 137 | 140 | 146 | 160 | 167 |
| | | ‖F(u^(n))‖ | 1.1, −14 | 2.25, −14 | 2.31, −14 | 2.23, −14 | 2.4, −14 | 2.3, −14 |
| | Picard-HSS | CPU | 7.96 | 8.31 | 7.76 | 8 | 8.60 | 8.86 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 121.1 | 131.91 | 126.75 | 145.34 | 146.34 | 147 |
| | | ‖F(u^(n))‖ | 1.1, −14 | 1.24, −14 | 1.57, −14 | 1.96, −14 | 1.84, −14 | 1.6, −14 |
| N = 48 | | α_opt | 0.8 | 1.4 | 2.6 | 4.8 | 13 | 20.5 |
| | JFHSS | CPU | 5.25 | 5.31 | 5.5 | 5.93 | 6.21 | 6.28 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 13.66 | 14.58 | 15.083 | 16.08 | 17.34 | 17.58 |
| | | ‖F(u^(n))‖ | 2.42, −14 | 6.04, −15 | 6.36, −15 | 1.96, −14 | 6.15, −15 | 8.60, −15 |
| | Nonlinear HSS-like | CPU | 8.87 | 11.828 | 10.02 | 10.31 | 11.28 | 11.85 |
| | | IT | 161 | 209 | 178 | 186 | 201 | 207 |
| | | ‖F(u^(n))‖ | 1.5, −14 | 1.59, −14 | 1.46, −14 | 1.57, −14 | 1.615, −14 | 1.46, −14 |
| | Picard-HSS | CPU | 50.81 | 50.01 | 51.85 | 53.34 | 56.32 | 59.95 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 177.16 | 179.1 | 183.50 | 189.34 | 202.75 | 213.25 |
| | | ‖F(u^(n))‖ | 7.7, −15 | 9.67, −15 | 1.11, −14 | 1.23, −14 | 1.22, −14 | 1.26, −14 |
| N = 64 | | α_opt | 0.7 | 1 | 1.8 | 3.3 | 8.9 | 14.2 |
| | JFHSS | CPU | 21.68 | 18.23 | 18.65 | 19.156 | 20.53 | 21.39 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_int | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 21 | 17.39 | 18.17 | 18.75 | 19.91 | 20.84 |
| | | ‖F(u^(n))‖ | 1.61, −14 | 6.73, −15 | 9.15, −15 | 8.39, −15 | 7.7, −15 | 4.71, −15 |
| | Nonlinear HSS-like | CPU | 38.57 | 31.78 | 33.50 | 34.65 | 36.56 | 37.70 |
| | | IT | 246 | 206 | 213 | 221 | 235 | 242 |
| | | ‖F(u^(n))‖ | 1.17, −14 | 1.26, −14 | 1.26, −14 | 1.16, −14 | 1.19, −14 | 1.22, −14 |
| | Picard-HSS | CPU | 219.54 | 217.45 | 266.83 | 225.37 | 228.60 | 248.35 |
| | | IT_out | 12 | 12 | 12 | 12 | 12 | 12 |
| | | IT_inn | 219.54 | 248.58 | 230.75 | 252 | 258.75 | 264.50 |
| | | ‖F(u^(n))‖ | 6.12, −15 | 7.7, −15 | 8.9, −15 | 1.0, −14 | 1.1, −14 | 1.1, −14 |
