Local Convergence of an Optimal Eighth Order Method under Weak Conditions

We study the local convergence of an eighth order Newton-like method for approximating a locally-unique solution of a nonlinear equation. Earlier studies, such as that of Chen et al. (2015), show convergence under hypotheses on the seventh derivative or even higher, although only the first derivative and a divided difference appear in the method. In this study, convergence is shown under hypotheses on the first derivative only; hence, the applicability of the method is expanded. Finally, numerical examples are provided to show that our results apply to equations in cases where the earlier studies do not.


Introduction
In this study, we are concerned with the problem of approximating a locally-unique solution x* of the equation

F(x) = 0, (1)

where F is a differentiable function defined on a convex subset D of S with values in S, and S is R or C.
Many problems from applied sciences, including engineering, can be solved by finding the solutions of equations of the form (1) using mathematical modeling [2][3][4][5][6][7]. Except in special cases, the solutions of these equations cannot be found in closed form. This is the main reason why the most commonly-used solution methods are iterative. The convergence analysis of iterative methods is usually divided into two categories: semi-local and local convergence analysis. The semi-local convergence matter is, based on the information around an initial point, to give criteria ensuring the convergence of the iteration procedure. A very important problem in the study of iterative procedures is the radius of convergence: in general, this radius is small, so it is important to enlarge it. Another important problem is to find more precise error estimates on the distances |x_n − x*|.
The most popular method for approximating a simple solution x* of Equation (1) is undoubtedly Newton's method, given by:

x_{n+1} = x_n − F(x_n)/F′(x_n), n = 0, 1, 2, . . . , (2)

provided that F′ does not vanish in D [2,13]. To obtain a higher order of convergence, many methods have been proposed. We study the local convergence of the three-step method defined for each n = 0, 1, 2, . . . by:

where x_0 is an initial point, β ∈ S and:

The eighth order of convergence for Method (3) was established in [1], when β ∈ S, using Taylor expansions and hypotheses reaching up to the eighth derivative of F, although only the first derivative and the divided difference appear in the method. The method is also optimal in the sense of Traub, with efficiency index 8^{1/4} ≈ 1.682 [4]. The advantages of Method (3) over other competing methods were also shown in [1]. However, the hypotheses on higher order derivatives limit the applicability of these methods. As a motivational example, define a function F on D by:

Then, we have that:

and:

Then, the third derivative F′′′ is clearly unbounded on D. Hence, the results in [1] cannot be applied to show the convergence of Method (3), or of its special cases, since they require hypotheses on the third derivative of F or higher. Notice that, in particular, there is a plethora of iterative methods for approximating solutions of nonlinear equations. These results show that if the initial point x_0 is sufficiently close to the solution x*, then the sequence {x_n} converges to x*. However, how close to the solution x* should the initial guess x_0 be? These local results give no information on the radius of the convergence ball for the corresponding method. The same technique can be used for other methods.
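Newton's iteration (2) can be sketched in a few lines; the test function below is purely illustrative and is not the paper's motivational example.

```python
# Newton's method (Equation (2)): x_{n+1} = x_n - F(x_n)/F'(x_n).
def newton(F, dF, x0, tol=1e-12, max_iter=50):
    """Iterate Newton's method from x0 until |F(x)| < tol."""
    x = x0
    for _ in range(max_iter):
        fx = F(x)
        if abs(fx) < tol:
            return x
        x = x - fx / dF(x)  # requires dF(x) != 0 on D
    return x

# Illustrative use: solve x^3 - 2 = 0 (root 2^(1/3) ~ 1.2599)
root = newton(lambda x: x ** 3 - 2.0, lambda x: 3.0 * x ** 2, x0=1.0)
```

The quadratic convergence of (2) is visible in practice: the number of correct digits roughly doubles per step once the iterate is inside the convergence ball.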
In the present study, we study the local convergence of Method (3) using hypotheses only on the first derivative of the function F. We also provide the radius of the convergence ball, computable error bounds on the distances involved and a uniqueness-of-solution result using Lipschitz constants. Such results were not given in [1] or the earlier related studies [8][9][10][11][12]. This way, we expand the applicability of Method (3).
The rest of the paper is organized as follows: We present the local convergence analysis of Method (3) in Section 2. Numerical examples are given in the concluding Section 3.

Local Convergence
In this section, we present the local convergence analysis of Method (3). It is convenient for the analysis that follows to introduce some functions and parameters. Define functions g_1, p and h_p on the interval [0, 1/L_0) by:

and the parameter r_1 by:

Moreover, define functions g_2 and h_2 on the interval [0, r_p) by:

and:

Then, we get h_2(0) = −1 < 0 and h_2(t) → ∞ as t → r_p^−. Denote by r_2 the smallest zero of the function h_2 on the interval (0, r_p). Furthermore, define functions q and h_q on the interval [0, r_p) by:

h_q(t) = q(t) − 1.
We have that h_q(0) = −1 < 0 and h_q(t) → ∞ as t → r_p^−. Denote by r_q the smallest zero of the function h_q on the interval (0, r_p). Finally, define functions g_3 and h_3 on the interval [0, r_q) by:

and:

We get that h_3(0) = −1 < 0 and h_3(t) → ∞ as t → r_q^−. Denote by r_3 the smallest zero of the function h_3 on the interval (0, r_q). Set:

Then, we have that:

and for each t ∈ [0, r):

and:

Let U(γ, ρ) and Ū(γ, ρ) stand, respectively, for the open and closed balls in S with center γ ∈ S and radius ρ > 0. Next, we present the local convergence analysis of Method (3) using the preceding notation.
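The defining formulas for the radii were lost in this copy. In related Lipschitz-based local convergence analyses, the Newton radius r_1 typically takes the closed form r_1 = 2/(2L_0 + L), where L_0 is the center-Lipschitz constant and L the Lipschitz constant; that formula is an assumption here, used only for illustration.

```python
# Hedged sketch: the closed form r1 = 2/(2*L0 + L) is assumed
# (standard in Lipschitz-based local convergence analyses),
# not taken from this paper's lost display equations.
def newton_radius(L0, L):
    """Convergence radius for Newton's method under constants L0 <= L."""
    return 2.0 / (2.0 * L0 + L)

def rheinboldt_radius(L):
    """Rheinboldt's (and Traub's) classical radius 2/(3L)."""
    return 2.0 / (3.0 * L)

r1 = newton_radius(0.5, 1.0)   # -> 1.0
rR = rheinboldt_radius(1.0)    # -> 0.666...
```

Note that using the center-Lipschitz constant L_0 separately from L is precisely what enlarges the radius: for L_0 < L, the denominator 2L_0 + L is smaller than 3L, so r_1 > r_R.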
Theorem 1. Let F : D ⊂ S → S be a differentiable function, and let [·, ·; F] : D × D → L(S) be a divided difference of order one. Suppose that there exist:

and:

where the radius r is defined by Equation (4).
(c) The radius r_1 was shown in [2,3] to be the convergence radius for Newton's method (2) under the conditions (11) and (13). It follows from Equation (4) and the definition of r_1 that the convergence radius r of Method (3) cannot be larger than the convergence radius r_1 of the second order Newton's method (2). As already noted, r_1 is at least as large as the convergence ball given by Rheinboldt [15]:

r_R = 2/(3L).

In particular, for L_0 < L, we have that r_R < r_1 and r_R/r_1 → 1/3 as L_0/L → 0. That is, our convergence ball r_1 is at most three times larger than Rheinboldt's. The same value for r_R is given by Traub [4]. (d) It is worth noticing that Method (3) does not change if we use the conditions of Theorem 1 instead of the stronger conditions given in [1]. Moreover, for the error bounds, in practice, we can use the computational order of convergence (COC) [16]:

ξ = ln(|x_{n+2} − x*|/|x_{n+1} − x*|) / ln(|x_{n+1} − x*|/|x_n − x*|), for each n = 0, 1, 2, . . . ,

or the approximate computational order of convergence (ACOC) [16]:

ξ* = ln(|x_{n+2} − x_{n+1}|/|x_{n+1} − x_n|) / ln(|x_{n+1} − x_n|/|x_n − x_{n−1}|), for each n = 1, 2, . . . .

This way, we obtain, in practice, the order of convergence in a way that avoids the bounds involving estimates higher than the first Fréchet derivative.
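The COC and ACOC estimators above are straightforward to compute from a few consecutive iterates; the check below uses Newton's method (order 2) on a simple equation, chosen here only for illustration.

```python
import math

# Standard COC and ACOC estimators (the formulas cited as [16] in the text).
def coc(xs, x_star):
    """COC from the last three iterates and the known solution x*."""
    e = [abs(x - x_star) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """ACOC from the last four iterates; no knowledge of x* is needed."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Sanity check on Newton's method (order 2) applied to x^2 - 2 = 0:
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
# coc(xs, math.sqrt(2)) and acoc(xs) are both close to 2
```

ACOC is the more practical of the two, since it avoids any reference to the (usually unknown) solution x*.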

Numerical Example and Applications
We present numerical examples in this section.
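The section breaks off here in this copy. As an illustration of the kind of problem targeted by first-derivative-only hypotheses, the following sketch applies Newton's method (2) to a function whose third derivative is unbounded near the origin; this specific function is an assumption chosen for illustration, not necessarily the paper's example.

```python
import math

# Illustrative (assumed) example: f(x) = x^3 ln(x^2) + x^5 - x^4, with
# f(1) = 0. Its third derivative contains ln(x^2), hence is unbounded
# near 0, so results requiring bounded higher derivatives do not apply,
# while hypotheses on the first derivative alone still do.
def f(x):
    return x ** 3 * math.log(x * x) + x ** 5 - x ** 4

def df(x):
    return 3 * x * x * math.log(x * x) + 2 * x * x + 5 * x ** 4 - 4 * x ** 3

x = 0.8
for _ in range(10):
    x = x - f(x) / df(x)
# x converges to the solution x* = 1
```

Despite the unbounded third derivative, the iteration converges rapidly from a nearby starting point, consistent with a local convergence theory built on the first derivative only.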