Direct Comparison between Two Third Convergence Order Schemes for Solving Equations

Abstract: We provide a comparison between two schemes for solving equations on Banach spaces. Comparisons between schemes of the same convergence order are usually given using numerical examples, which can go in favor of either scheme. However, we do not know in advance, under the same set of conditions, which scheme has the largest ball of convergence, the tightest error bounds, or the best information on the location of the solution. We present a technique that allows us to achieve this objective. Numerical examples are also given to further justify the theoretical results. Our technique can be used to compare other schemes of the same convergence order.


Introduction
In this study we compare two third convergence order schemes for solving the nonlinear equation G(x) = 0 (1), where G : D ⊂ B 1 → B 2 is a continuously differentiable nonlinear operator and D stands for a nonempty open subset of B 1 . Here B 1 and B 2 denote Banach spaces. It is desirable to obtain a unique solution p of (1). However, this can rarely be achieved in closed form, so researchers develop iterative schemes which converge to p. Among the popular schemes are the Chebyshev-type scheme (3) and the simplified Chebyshev-type scheme (4).
The analysis of these schemes uses assumptions on the fourth order derivative of G, which does not appear in these schemes. These assumptions reduce the applicability of the schemes. For example, let B 1 = B 2 = R and D = [−1/2, 3/2], and define G on D by G(t) = t^3 log t^2 + t^5 − t^4 for t ≠ 0 and G(0) = 0. Then, we get G'(t) = 3t^2 log t^2 + 5t^4 − 4t^3 + 2t^2 and G'''(t) = 6 log t^2 + 60t^2 − 24t + 22, where the solution is p = 1. Obviously, G'''(t) is not bounded on D. Hence, the convergence of the above schemes is not guaranteed by the earlier studies. In this study we use only assumptions on the first derivative to prove our results. The advantages of our approach include: a larger radius of convergence (i.e., more initial points) and tighter upper bounds on ‖x k − p‖ (i.e., fewer iterates to achieve a desired error tolerance). It is worth noting that these advantages are obtained without any additional conditions. So far, comparisons between iterative schemes of the same order have been given using numerical examples. However, no direct theoretical comparison has been given that tells us in advance, under the same set of convergence conditions, which scheme has the largest radius, the tightest error bounds and the best results on the uniqueness of the solution. The novelty of our paper is that we introduce a technique under which we can show that scheme (4) is better than scheme (3). The same technique can be used to draw conclusions about other schemes of the same order.
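As a sketch of the motivating example, the unboundedness of the third derivative can be checked numerically. Here G(t) = t^3 log t^2 + t^5 − t^4 (with G(0) = 0) is an assumed closed form recovered by integrating the quoted G'(t), and the expression for G''' is obtained by symbolic differentiation:

```python
import math

# Motivating example (assumed closed form): G(t) = t^3 ln(t^2) + t^5 - t^4,
# G(0) = 0, on D = [-1/2, 3/2], with solution p = 1.
def G(t):
    return 0.0 if t == 0.0 else t**3 * math.log(t**2) + t**5 - t**4

# Third derivative, computed symbolically:
# G'''(t) = 6 ln(t^2) + 60 t^2 - 24 t + 22.
def G3(t):
    return 6.0 * math.log(t**2) + 60.0 * t**2 - 24.0 * t + 22.0

print(G(1.0))                 # p = 1 is a zero of G
for t in (1e-1, 1e-4, 1e-8):  # G''' diverges as t -> 0, so it is unbounded on D
    print(t, G3(t))
```

The logarithmic term dominates near the origin, which is why any Lipschitz-type bound on the third (or fourth) derivative fails on D.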
Notice also that scheme (3) requires two derivative evaluations, one inverse and one operator evaluation. However, scheme (4) is less expensive, requiring two function evaluations and one inverse. Both schemes have been studied in the literature under assumptions reaching the fourth derivative, which does not appear in these schemes. In contrast, we use only conditions on the first derivative, which does appear in the schemes.
Throughout this paper U(x, r) stands for the open ball with center x and radius r > 0, and Ū(x, r) denotes the closure of U(x, r).
The rest of the paper is structured as follows. The convergence analysis of the schemes is given in Section 2 and examples are given in Section 3.

Ball Convergence
In this section we present the ball convergence analysis of scheme (2), scheme (3) and scheme (4), respectively. To achieve this, we introduce certain functions and parameters. Suppose that there exists a continuous and increasing function ω 0 defined on the interval [0, ∞) with values in itself such that the equation ω 0 (t) − 1 = 0 has a least positive zero, denoted R 0 . Suppose also that there exist functions ω and ω 1 defined on [0, R 0 ) with values in [0, ∞). Define functions g 1 α and h 1 α on [0, R 0 ); suppose that the corresponding equations have least zeros denoted by R 1 α and R 2 α in (0, R 0 ). Define a radius of convergence R α as R α = min{R 1 α , R 2 α }. It follows that 0 ≤ g 1 α (s) < 1 for all s ∈ [0, R α ). Set e n = ‖x n − p‖. We introduce a set of conditions (A) under which the ball convergence for all schemes will be obtained.
(A2) There exists a continuous and increasing function ω 0 such that the parameter R 0 , defined in (5), exists.
(A3) There exist continuous and increasing functions ω and ω 1 on the interval [0, R 0 ) such that R 1 α and R 2 α exist and are given by (6) and (7), respectively, where R α is defined by (8).
Next, the main ball convergence result for scheme (2) is displayed.
Then, the following assertions hold true, where the functions g 1 α , g 2 α were introduced earlier and R α is defined in (8). Moreover, the vector p is the only zero of Equation (1) in the set D 1 introduced in condition (A5).

Remark 1.

1.
In view of (A3) and the estimate ‖G'(p) −1 G'(x)‖ = ‖G'(p) −1 [(G'(x) − G'(p)) + G'(p)]‖ ≤ 1 + ω 0 (‖x − p‖), condition (C 5 ) can be dropped and ω 1 can be replaced by ω 1 (t) = 1 + ω 0 (t).

2.
The results obtained here can be used for operators G satisfying autonomous differential equations [3] of the form G'(x) = P(G(x)), where P is a continuous operator. Then, since G'(p) = P(G(p)) = P(0), we can apply the results without actually knowing p. For example, let G(x) = e^x − 1. Then, we can choose P(x) = x + 1.
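This remark can be illustrated with a minimal numerical check: for G(x) = e^x − 1 and P(x) = x + 1 we indeed have G'(x) = P(G(x)) everywhere, so G'(p) = P(0) = 1 is available without knowing the solution p:

```python
import math

# Autonomous-equation check: for G(x) = exp(x) - 1,
# G'(x) = exp(x) = G(x) + 1 = P(G(x)) with P(x) = x + 1.
G = lambda x: math.exp(x) - 1.0
P = lambda x: x + 1.0

for x in (-0.5, 0.0, 0.7, 1.3):
    assert abs(P(G(x)) - math.exp(x)) < 1e-12  # G'(x) == P(G(x))

# At the solution p (here p = 0, but we need not know it):
print(P(0.0))  # G'(p) = P(G(p)) = P(0) = 1.0
```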

3.
If ω 0 and ω are linear functions, say ω 0 (t) = L 0 t and ω(t) = Lt for some L 0 > 0 and L > 0, then the radius r 1 = 2/(2L 0 + L) was shown by us to be the convergence radius of Newton's method [5,6] x n+1 = x n − G'(x n ) −1 G(x n ) for each n = 0, 1, 2, · · · (19) under the conditions (C 1 )–(C 4 ). It follows from these definitions that the convergence radius R α of method (2) cannot be larger than the convergence radius r 1 of the second order Newton's method (19). As already noted in [5,6], r 1 is at least as large as the convergence ball given by Rheinboldt [14], r R = 2/(3L 1 ), where L 1 is the Lipschitz constant on D. In particular, for L 0 < L 1 we have r R < r 1 , and r R /r 1 → 1/3 as L 0 /L 1 → 0. That is, our convergence ball r 1 is at most three times larger than Rheinboldt's. The same value for r R was given by Traub [15].
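The radius comparison in this remark can be sketched numerically. The constants L0, L and L1 below are hypothetical values chosen for illustration (with L0 < L1), not data from the paper:

```python
# Newton radius r1 = 2/(2 L0 + L) versus Rheinboldt's/Traub's radius
# rR = 2/(3 L1). The Lipschitz-type constants here are assumed values.
def newton_radius(L0, L):
    return 2.0 / (2.0 * L0 + L)

def rheinboldt_radius(L1):
    return 2.0 / (3.0 * L1)

L0, L, L1 = 0.1, 1.0, 1.0       # hypothetical constants with L0 < L1
r1 = newton_radius(L0, L)
rR = rheinboldt_radius(L1)
print(r1, rR, r1 / rR)          # r1 > rR; the ratio tends to 3 as L0/L1 -> 0
```

Shrinking L0 toward 0 (with L = L1 fixed) pushes the ratio r1/rR toward its supremum 3, matching the "at most three times larger" claim.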

4.
It is worth noticing that method (2) does not change when we use the weaker conditions of Theorem 1 instead of the stronger conditions used in [10]. Moreover, we can compute the computational order of convergence (COC) defined by

ξ = ln( ‖x n+1 − p‖ / ‖x n − p‖ ) / ln( ‖x n − p‖ / ‖x n−1 − p‖ ),

or the approximate computational order of convergence (ACOC)

ξ 1 = ln( ‖x n+1 − x n ‖ / ‖x n − x n−1 ‖ ) / ln( ‖x n − x n−1 ‖ / ‖x n−1 − x n−2 ‖ ),

which does not require knowledge of p. This way we obtain in practice the order of convergence in a way that avoids estimates involving derivatives higher than the first Fréchet derivative of operator G.

5.
We also use ḡ 2 instead of (17) to obtain ‖y n − p‖ ≤ ḡ 2 (e n ) e n ≤ e n < R̄.
Moreover, p is the unique solution of Equation (1) in the set D 1 .
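The COC formula from Remark 1 can be illustrated with a minimal sketch. The example assumes Newton's method applied to the scalar equation f(x) = x^2 − 2 with known solution p = √2; these choices are for illustration only:

```python
import math

# Computational order of convergence (COC) for Newton's method on
# f(x) = x^2 - 2, with solution p = sqrt(2) and e_n = |x_n - p|:
# xi = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}).
p = math.sqrt(2.0)
xs = [1.5]
for _ in range(3):                              # three Newton steps
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

e = [abs(x - p) for x in xs]
coc = math.log(e[3] / e[2]) / math.log(e[2] / e[1])
print(coc)  # close to 2, the order of Newton's method
```

Replacing the errors e_n by the differences ‖x n − x n−1 ‖ gives the ACOC variant, which needs no knowledge of p.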

Remark 2. We have ḡ 2 (t) ≤ g 2 (t) for each t ∈ [0, R 0 ).
Hence, the radius R̄ of scheme (4) is at least as large as that of scheme (3), whereas the convergence ratio of scheme (4) is at least as small as that of scheme (3) (see also the numerical examples).

Numerical Examples
We compute the radii for α = −1. Example 1. Let us consider a system of differential equations governing the motion of an object. We need the Fréchet derivative to compute the function ω 0 (see (A2)) and the functions ω, ω 1 (see (A3)). Notice that, using the (A) conditions, we get ω 0 (t) = (e − 1)t, ω(t) = e^{1/(e−1)} t, ω 1 (t) = e^{1/(e−1)} . The radii are given in Table 1.
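As a sketch for Example 1, the linear majorant functions give L0 = e − 1 and L = e^{1/(e−1)}, so the Newton radius r1 = 2/(2L0 + L) of Remark 1 can be evaluated directly (the radii of schemes (2)-(4) themselves come from (6)-(8) and are not reproduced here):

```python
import math

# Example 1 constants: w0(t) = (e - 1) t and w(t) = e^(1/(e-1)) t,
# i.e. L0 = e - 1 and L = e^(1/(e-1)); Newton radius r1 = 2/(2 L0 + L).
L0 = math.e - 1.0
L = math.e ** (1.0 / (math.e - 1.0))
r1 = 2.0 / (2.0 * L0 + L)
print(r1)  # roughly 0.3827
```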
Example 2. Let B 1 = B 2 = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm. Let D = U(0, 1) and define the function F on D. We get that x * = 0, so ω 0 (t) = 7.5t, ω(t) = 15t and ω 1 (t) = 2. The radii are given in Table 2.
The parameters are given in Table 3, which compares the Chebyshev-type scheme (3) with the Two-Step Newton method (4).

Conclusions
A new technique is introduced that allows us to compare schemes of the same convergence order under the same set of conditions. Hence, we know how to choose in advance, among all third convergence order schemes, the one providing the largest choice of initial points, the fewest iterates for a predetermined error tolerance, and the best information on the location of the solution. This technique can be used on other schemes along the same lines. In particular, we have shown that scheme (4) is better to use than scheme (3) under the conditions (A).