Extended Convergence Analysis of the Newton–Hermitian and Skew–Hermitian Splitting Method

Many problems in diverse disciplines such as applied mathematics, mathematical biology, chemistry, economics, and engineering, to mention a few, reduce to solving a nonlinear equation or a system of nonlinear equations. Various iterative methods are then considered to generate a sequence of approximations converging to a solution of such problems. The goal of this article is two-fold: On the one hand, we present a correct convergence criterion for the Newton–Hermitian and skew-Hermitian splitting (NHSS) method under the Kantorovich theory, since the criterion given in Numer. Linear Algebra Appl., 2011, 18, 299–315 is not correct. Indeed, the radius of convergence cannot be defined under the given criterion, since the discriminant of the quadratic polynomial from which this radius is derived is negative (see Remark 1 and the conclusions of the present article for more details). On the other hand, we extend the corrected convergence criterion using our idea of recurrent functions. Numerical examples involving convection–diffusion equations further validate the theoretical results.


Introduction
Numerous problems in computational disciplines can, by means of mathematical modelling, be reduced to solving a system of nonlinear equations F(x) = 0 with n equations in n unknowns [1][2][3][4][5][6][7][8][9][10][11]. Here, F is a continuously differentiable nonlinear mapping defined on a convex subset Ω of the n-dimensional complex linear space C^n into C^n. In general, the corresponding Jacobian matrix F'(x) is sparse, non-symmetric and positive definite. The solution methods for the nonlinear problem F(x) = 0 are iterative in nature, since an exact solution x* can be obtained only in a few special cases. In the rest of the article, some well-established standard results and notations are used to establish our results (see [3][4][5][6][10][11][12][13][14] and the references therein). Undoubtedly, some of the best known methods for generating a sequence approximating x* are the inexact Newton (IN) methods [1][2][3][5][6][7][8][9][10][11][12][13][14]. The IN algorithm involves the steps given in the following. Furthermore, if A is sparse, non-Hermitian and positive definite, the Hermitian and skew-Hermitian splitting (HSS) algorithm [4] for solving the linear system Ax = b is given by,
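For the reader's convenience, the HSS iteration referred to above alternates two shifted half-steps based on the Hermitian part H = (A + A*)/2 and the skew-Hermitian part S = (A − A*)/2 of A. A minimal sketch in Python follows; the 3 × 3 test matrix, the shift α and the tolerances are illustrative choices, not taken from the article.

```python
import numpy as np

def hss(A, b, alpha, x0, tol=1e-10, max_iter=500):
    """One-parameter HSS iteration for Ax = b (Bai, Golub and Ng).

    Alternates the two shifted half-steps
        (alpha*I + H) x_{k+1/2} = (alpha*I - S) x_k       + b
        (alpha*I + S) x_{k+1}   = (alpha*I - H) x_{k+1/2} + b
    where H and S are the Hermitian and skew-Hermitian parts of A.
    """
    n = A.shape[0]
    I = np.eye(n)
    H = (A + A.conj().T) / 2          # Hermitian part of A
    S = (A - A.conj().T) / 2          # skew-Hermitian part of A
    x = x0.astype(complex)
    for k in range(max_iter):
        x_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ x + b)
        x = np.linalg.solve(alpha * I + S, (alpha * I - H) @ x_half + b)
        if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
            return x, k + 1
    return x, max_iter

# Small non-symmetric test matrix whose Hermitian part is positive definite
A = np.array([[4.0, 1.0, 0.0],
              [-1.0, 4.0, 1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x, its = hss(A, b, alpha=2.0, x0=np.zeros(3))
```

Since the Hermitian part here is positive definite, the HSS iteration converges for any α > 0; the shift α = 2 is an arbitrary demonstration value.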
Set l = l + 1. The Newton-HSS algorithm [5] combines appropriately both the IN and HSS methods for the solution of large nonlinear systems of equations with positive definite Jacobian matrix. The algorithm is as follows:

Algorithm NHSS (The Newton-HSS method [5])
• Step 1: Choose an initial guess x_0 and positive constants α and tol; set k = 0. • Step 2: where H_k and S_k are the Hermitian and skew-Hermitian parts of J_k.
-Compute J_k, H_k and S_k for the new x_k. Please note that η_k varies at each iterative step, unlike the fixed positive constant value used in [5]. Further observe that if d_{k,ℓ_k} in (6) is given in terms of d_{k,0}, we get where T_k := T(α, k), B_k := B(α, k) and Using the above expressions for T_k and d_{k,ℓ_k}, we can write the Newton-HSS in (6) as A Kantorovich-type semi-local convergence analysis was presented in [7] for NHSS. However, there are shortcomings:
(i) The semi-local sufficient convergence criterion provided in (15) of [7] is false. The details are given in Remark 1. Accordingly, Theorem 3.2 in [7], as well as all the following results based on (15) in [7], are inaccurate. Further, the upper bound function g_3 (to be defined later) on the norm of the initial point is not the best that can be used under the conditions given in [7].
(ii) The convergence domain of NHSS is small in general, even if we use the corrected sufficient convergence criterion (12). That is why, using our technique of recurrent functions, we present a new semi-local convergence criterion for NHSS, which improves the corrected convergence criterion (12) (see also Section 3 and Section 4, Example 4.4).
(iii) Example 4.5, taken from [7], is provided to show, as in [7], that convergence can be attained even if these criteria are not checked or are not satisfied, since these criteria are not necessary. The convergence criteria presented here are only sufficient.
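To illustrate how the inner HSS iterations are coupled with the outer inexact Newton steps in Algorithm NHSS, here is a rough Python sketch: the inner loop for J(x_k) d = −F(x_k) is stopped once the inner residual drops below η_k‖F(x_k)‖. The toy nonlinear system, the shift α and all tolerances are illustrative assumptions, not the article's test problems.

```python
import numpy as np

def newton_hss(F, J, x0, alpha, eta=0.1, tol=1e-10, max_outer=50):
    """Sketch of Newton-HSS: inexact Newton with HSS inner iterations.

    The inner HSS loop for J(x_k) d = -F(x_k) stops as soon as
    ||F(x_k) + J(x_k) d|| <= eta * ||F(x_k)||   (inexact Newton condition).
    """
    x = x0.copy()
    for _ in range(max_outer):
        Fx = F(x)
        if np.linalg.norm(Fx) <= tol:
            break
        A, b = J(x), -Fx
        n = A.shape[0]
        I = np.eye(n)
        H = (A + A.T) / 2             # Hermitian part of the Jacobian
        S = (A - A.T) / 2             # skew-Hermitian part
        d = np.zeros(n)
        while np.linalg.norm(b - A @ d) > eta * np.linalg.norm(Fx):
            d_half = np.linalg.solve(alpha * I + H, (alpha * I - S) @ d + b)
            d = np.linalg.solve(alpha * I + S, (alpha * I - H) @ d_half + b)
        x = x + d
    return x

# Toy system with a non-symmetric Jacobian whose symmetric part is
# positive definite; its unique root is the origin, since F(0) = 0.
M = np.array([[3.0, 1.0], [-1.0, 3.0]])
F = lambda x: M @ x + np.tanh(x)
J = lambda x: M + np.diag(1.0 / np.cosh(x) ** 2)
x_star = newton_hss(F, J, x0=np.array([1.0, -1.0]), alpha=3.0)
```

The fixed η = 0.1 used here is only for demonstration; as noted above, the analysis in this article allows η_k to vary from step to step.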
The rest of the note is organized as follows. Section 2 contains the semi-local convergence analysis of NHSS under the Kantorovich theory. In Section 3, we present the semi-local convergence analysis using our idea of recurrent functions. Numerical examples are discussed in Section 4. The article ends with a few concluding remarks.

Semi-Local Convergence Analysis
To make the paper as self-contained as possible, we present some results from [3] (see also [7]). The semi-local convergence of NHSS is based on the conditions (A). Let x_0 ∈ C^n and F : Ω ⊂ C^n → C^n be G-differentiable on an open neighborhood Ω_0 ⊂ Ω on which F'(x) is continuous and positive definite. Suppose F'(x) = H(x) + S(x), where H(x) and S(x) are as in (2) with x_k = x.
(A_1) There exist positive constants β, γ and δ such that (A_2) There exist nonnegative constants L_h and L_s such that for all x, y ∈ U(x_0, r) ⊂ Ω_0, Next, we present the corrected version of Theorem 3.2 in [7].
Theorem 1. Assume that conditions (A) hold with the constants satisfying and with ℓ* = lim inf_{k→∞} ℓ_k satisfying ℓ* > ⌊ln η / ln((τ + 1)θ)⌋ (here ⌊·⌋ represents the largest integer less than or equal to the corresponding real number), τ ∈ (0, (1 − θ)/θ) and Then, the iteration sequence {x_k}_{k=0}^∞ generated by Algorithm NHSS is well defined and converges to x*, so that F(x*) = 0.
Proof. We simply follow the proof of Theorem 3.2 in [7] but use the correct function ḡ_3 instead of the incorrect function g_3 defined in the following remark.
We need to show the following auxiliary result of majorizing sequences for NHSS using the aforementioned notation.

Lemma 1.
Let β, γ, δ, L_0, L be positive constants and η ∈ [0, 1). Suppose that where η_0, ξ are given by (19) and (20), respectively. Then, the sequence {s_k} defined in (21) is nondecreasing, bounded from above by and converges to its unique least upper bound s* which satisfies

Proof. Notice that by (18) or equivalently by (21) and Suppose that (26), (28) and (29) hold true. Next, we shall show that they are true with k replaced by k + 1. It suffices to show Estimate (30) motivates us to introduce recurrent functions f_k defined on the interval [0, 1) by Then, we must show instead of (30) that We need a relationship between two consecutive functions f_k: where Notice that g(q) = 0. It follows from (32) and (34) that Then, since it suffices to show instead of (32). We get by (31) that so, we must show that which reduces to showing that which is true by (22). Hence, the induction for (26), (28) and (29) is completed. It follows that the sequence {s_k} is nondecreasing, bounded above by s** and as such it converges to its unique least upper bound s* which satisfies (25).
We need the following result.

Lemma 2 ([14]). Suppose that conditions (A) hold. Then, the following assertions also hold: where L = L_h + L_s.
Next, we show how to improve Lemma 2 and the rest of the results in [3,7]. Notice that it follows from (i) in Lemma 2 that there exists L_0 > 0 such that We have that L_0 ≤ L holds true and L/L_0 can be arbitrarily large [2,12]. Then, we have the following improvement of Lemma 2.

Lemma 3.
Suppose that conditions (A) hold. Then, the following assertions also hold: Proof. (ii) We have It follows from the Banach lemma on invertible operators [1] that F'(x) is nonsingular, so that (44) holds.

Remark 2.
The new estimates (ii) and (iii) are more precise than the corresponding ones in Lemma 2, if L_0 < L.
Proof. If we follow the proof of Theorem 3.2 in [3,7] but use (44) instead of (41) for the upper bound on the norms ‖F'(x_k)^{-1}‖, we arrive at where so by (21) We also have that It follows from Lemma 1 and (49) that the sequence {x_k} is complete in the Banach space C^n and as such it converges to some x* ∈ Ū(x_0, r) (since Ū(x_0, r) is a closed set).
Remark 3. Replace condition (A_2) by (A_2)': There exist nonnegative constants L_h and L_s such that for all x, y ∈ U(x_0, r) ⊂ Ω_1, since The majorizing sequence {t_n} in [3,7] is defined by Next, we show that our sequence {s_n} is tighter than {t_n}.

Proposition 1.
Under the conditions of Theorems 1 and 2, the following items hold: (i) s_n ≤ t_n, (ii) s_{n+1} − s_n ≤ t_{n+1} − t_n and (iii) s* ≤ t* = lim_{k→∞} t_k ≤ r_2.

Remark 4.
Majorizing sequences using L̄ or L̄_0 are even tighter than the sequence {s_n}.

Special Cases and Numerical Examples
Example 1. The semi-local convergence of inexact Newton methods was presented in [14] under the conditions More recently, Shen and Li [11] substituted g_1(η) with g_2(η), where Estimate (22) can be replaced by a stronger one, but directly comparable to (20). Indeed, let us define a scalar sequence {u_n} (less tight than {s_n}) by where ρ = γL_0(1 + η)µ. Moreover, define recurrent functions f_k on the interval [0, 1) by Then, following the proof of Lemma 1, we obtain: Lemma 4. Let β, γ, δ, L_0, L be positive constants and η ∈ [0, 1/2). Suppose that Then, the sequence {u_k} defined by (52) is nondecreasing, bounded from above by u** = c/(1 − q) and converges to its unique least upper bound u* which satisfies c ≤ u* ≤ u**.

Proposition 2.
Suppose that conditions (A) and (54) hold with r = min{r_1, u*}. Then, the sequence {x_n} generated by Algorithm NHSS is well defined and converges to x* which satisfies F(x*) = 0.
These bound functions are used to obtain semi-local convergence results for the Newton-HSS method as a subclass of these techniques. In Figures 1 and 2, we can see the graphs of the four bound functions g_1, g_2, ḡ_3 and g_4. Clearly, our bound function ḡ_3 improves all the earlier results. Moreover, as noted before, function g_3 cannot be used, since it is an incorrect bound function.

Figure 2. Graphs of g_1(t) (Violet), g_2(t) (Green), ḡ_3(t) (Red) and g_4(t) (Blue).
In the second example we compare the convergence criteria (22) and (12).
The next example is used for the reason already mentioned in (iii) of the introduction.

Example 3. Consider the two-dimensional nonlinear convection-diffusion equation [7] −(u_xx + u_yy) + q(u_x + u_y) = −e^u, (x, y) ∈ Ω, u(x, y) = 0, (x, y) ∈ ∂Ω, (56) where Ω = (0, 1) × (0, 1) and ∂Ω is the boundary of Ω. Here, q > 0 is a constant that controls the magnitude of the convection terms (see [7,15,16]). As in [7], we use the classical five-point finite difference scheme with second-order central differences for both the convection and diffusion terms. If N denotes the number of interior nodes along one coordinate direction, then h = 1/(N + 1) and Re = qh/2 denote the equidistant step-size and the mesh Reynolds number, respectively. Applying the above scheme to (56), we obtain the following system of nonlinear equations: where the coefficient matrix Ā is given by Here, ⊗ is the Kronecker product, and T_x and T_y are the tridiagonal matrices In our computations, N is chosen as 99, so that the total number of nodes is 100 × 100. We use α = qh/2 as in [7], and we consider two choices for η_k, i.e., η_k = 0.1 and η_k = 0.01 for all k.
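To make the discretization concrete, the following sketch assembles the five-point matrix via Kronecker products on a small mesh and checks that its Hermitian part is positive definite. The one-dimensional stencil tridiag(−1 − Re, 2, −1 + Re), scaled by 1/h², is the standard second-order central-difference choice and is stated here as an assumption about the form of Ā, T_x and T_y.

```python
import numpy as np

def convection_diffusion_matrix(N, q):
    """Five-point central-difference matrix (sketch) for
    -(u_xx + u_yy) + q(u_x + u_y) on the unit square.

    Assumed stencil: tridiag(-1 - Re, 2, -1 + Re) in each direction,
    scaled by 1/h^2, with mesh Reynolds number Re = q*h/2.
    """
    h = 1.0 / (N + 1)
    Re = q * h / 2.0                   # mesh Reynolds number
    I = np.eye(N)
    # One-dimensional tridiagonal stencil, identical in x and y
    T = (2.0 * np.eye(N)
         + (-1.0 - Re) * np.eye(N, k=-1)
         + (-1.0 + Re) * np.eye(N, k=1))
    # Two-dimensional operator as a Kronecker sum
    return (np.kron(I, T) + np.kron(T, I)) / h**2

A = convection_diffusion_matrix(N=10, q=600.0)
H = (A + A.T) / 2                      # Hermitian part of the matrix
eigs = np.linalg.eigvalsh(H)
```

With central differencing, the convection terms land entirely in the skew-Hermitian part, so the Hermitian part above is the discrete Laplacian and is positive definite for any q, as required by the HSS theory.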
The results obtained in our computations are given in Figures 3-6. The total number of inner iterations is denoted by IT, the total number of outer iterations is denoted by OT and the total CPU time is denoted by t.

Conclusions
A major problem for iterative methods is the fact that the convergence domain is small in general, limiting the applicability of these methods. The same is true, in particular, for the Newton-Hermitian and skew-Hermitian splitting method and its variants, such as NHSS and other related methods [4][5][6][11][13][14]. Motivated by the work in [7] (see also [4][5][6][11][13][14]), we:

(a) Extend the convergence domain of the NHSS method without additional hypotheses. This is done in Section 3 using our new idea of recurrent functions. Examples where the new sufficient convergence criteria hold (but not the previous ones) are given in Section 4 (see also the remarks in Section 3).

(b) Show that the sufficient convergence criterion (16) given in [7] is false. Therefore, the rest of the results based on (16) do not hold. We have revisited the proofs to rectify this problem. Fortunately, the results hold if (16) is replaced with (12). This can easily be observed in the proof of Theorem 3.2 in [7]. Notice that the issue related to criterion (16) does not show up in Example 4.5, where convergence is established because the validity of (16) is not checked there. The convergence criteria obtained here are sufficient but not necessary.

Along the same lines, our technique in Section 3 can be used to extend the applicability of other iterative methods discussed in [1][2][3][4][5][6][8][9][12][13][14][15][16].