Convergence Analysis of Weighted-Newton Methods of Optimal Eighth Order in Banach Spaces

We generalize a family of optimal eighth-order weighted-Newton methods to Banach spaces and study their local convergence. Previous studies employ Taylor expansions involving higher-order derivatives, which may not exist or may be very expensive to compute. The hypotheses of the present study, in contrast, are based on the first Fréchet derivative only, thereby expanding the applicability of the methods. The new analysis also provides the radius of convergence, error bounds and estimates on the uniqueness of the solution; such estimates are not provided by approaches that use Taylor expansions of higher-order derivatives. Moreover, the order of convergence of the methods is verified by using the computational order of convergence or the approximate computational order of convergence, without using higher-order derivatives. Numerical examples are provided to verify the theoretical results and to show the good convergence behavior.


Introduction
In this work, we generate a sequence {x_n} for approximating a locally unique solution α of the nonlinear equation F(x) = 0, (1) where F is a Fréchet-differentiable operator defined on a closed convex subset D of a Banach space B_1 with values in a Banach space B_2. Many problems in computational sciences can be written in the form (1); see, for example, [1,2]. The solutions of such equations are rarely attainable in closed form, which is why most methods for solving them are iterative in nature. An important part of the construction of an iterative method is the study of its convergence analysis.
In general, the convergence domain is small. Therefore, it is important to enlarge the convergence domain without using extra hypotheses. Knowledge of the radius of convergence is useful because it indicates how difficult it is to choose suitable initial points. Another important problem is to find more precise error estimates on ||x_{n+1} − x_n|| or ||x_n − α||. Many authors have studied the convergence analysis of iterative methods; see, for example, [1-7].
There are numerous higher-order iterative methods for solving a scalar equation f(x) = 0 (see, for example, [2]). In contrast, higher-order methods are rare for the multi-dimensional case, that is, for approximating a solution of F(x) = 0. One possible reason is that the construction of higher-order methods for systems is a difficult task. Another is that not every method developed for a single equation can be generalized to systems of nonlinear equations. Recently, a family of optimal eighth-order methods for solving a scalar equation f(x) = 0 has been proposed in [16], given by (3), where φ_4(x_n, y_n) is any optimal fourth-order scheme with Newton's iteration y_n as its base, and f[·, ·] is Newton's first-order divided difference. In particular, the following optimal fourth-order schemes were considered in the second step of (3): the Ostrowski method (see [12]), an Ostrowski-like method (see [12]) and the Kung-Traub method (see [15]). Motivated by these methods, defined on the real line, we propose analogous methods for Banach space valued operators. It can be observed that the above family of eighth-order methods can be easily extended for solving (1); in view of this, we study method (3) in the Banach space setting. The iterative methods corresponding to the fourth-order schemes (4)-(6) in the Banach space setting are given as x_{n+1} = Ψ_8(x_n, y_n, z_n).
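For orientation, the classical fourth-order Ostrowski scheme (4), with Newton's iteration y_n as its base, can be sketched for a single scalar equation f(x) = 0. This is a minimal illustration of the base step φ_4 only, not the Banach-space method itself; the function names are ours.

```python
def ostrowski(f, df, x0, tol=1e-14, max_iter=100):
    """Fourth-order Ostrowski scheme for a scalar equation f(x) = 0:
    Newton predictor y_n, then a weighted correction using f(y_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        y = x - fx / df(x)  # Newton base step y_n
        fy = f(y)
        # x_{n+1} = y_n - [f(x_n) / (f(x_n) - 2 f(y_n))] * f(y_n) / f'(x_n)
        x = y - (fx / (fx - 2.0 * fy)) * fy / df(x)
    return x
```

Each step uses one derivative and two function evaluations, which is what makes the scheme optimal in the Kung-Traub sense for fourth order.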
In each of the above cases, we have that … Here, L(B_1, B_2) stands for the space of bounded linear operators from B_1 into B_2. Methods (7)-(9) require four inverses and four function evaluations at each step.
The rest of the paper is organized as follows. Section 2 presents the local convergence analysis of the proposed methods, including the radius of convergence, computable error bounds and uniqueness results. Numerical examples verifying the theoretical results are presented in Section 3. Finally, the methods are applied to solve systems of nonlinear equations in Section 4.
We have that h_1(0) = −1 < 0 and h_1(t) → +∞ as t tends from the left to the right endpoint of its interval of definition. By applying Bolzano's theorem to the function h_1, we deduce that the equation h_1(t) = 0 has solutions in this interval; let r_1 be the smallest such zero. Moreover, define the functions p and h_p on the same interval. We get h_p(0) = −1 < 0 and h_p(t) → +∞ at the right endpoint; let r_p be the smallest solution of the equation h_p(t) = 0 in that interval. Furthermore, define the functions g_2 and h_2 on the interval [0, r_p). We obtain h_2(0) = −1 < 0 and h_2(t) → +∞ as t → r_p^−; let r_2 be the smallest solution of the equation h_2(t) = 0 in (0, r_p). Define the functions q and h_q on the interval (0, r_p), and the functions ϕ and ψ on the interval [0, r_p), respectively. Let r_q and r_ψ be the smallest solutions of the equations h_q(t) = 0 and ψ(t) = 0 in their respective intervals. Finally, define the functions g_3 and h_3 on the interval [0, r_0), where r_0 = min{r_q, r_ψ}, and let r_3 be the smallest solution of the equation h_3(t) = 0 in (0, r_0). We take r = min{r_1, r_2, r_3} (12) to be the radius of convergence for method (7). Then, for each t ∈ [0, r), it follows that 0 ≤ g_1(t) < 1, 0 ≤ g_2(t) < 1 and 0 ≤ g_3(t) < 1. The local convergence analysis of method (7), method (8) and method (9) is based on the conditions (A), where … is given in (11).
(a_4) There exist continuous and increasing functions λ …
(a_5) Ū(α, r) ⊆ D, where r is given by (12) for method (7), by (30) for method (8) and by (31) for method (9).
(a_6) There exists R ≥ r such that …
Next, we first present the local convergence analysis of method (7) based on the conditions (A).
Theorem 1. Assume that the conditions (A) hold. Then, the sequence {x_n} generated for x_0 ∈ U(α, r) \ {α} by method (7) is well defined in U(α, r), remains in U(α, r) for each n = 0, 1, 2, …, and converges to α, so that
||y_n − α|| ≤ g_1(||x_n − α||) ||x_n − α|| < ||x_n − α|| < r,
||z_n − α|| ≤ g_2(||x_n − α||) ||x_n − α|| < ||x_n − α||,
||x_{n+1} − α|| ≤ g_3(||x_n − α||) ||x_n − α|| < ||x_n − α||,
where the functions g_i are defined previously. Moreover, the solution α of the equation F(x) = 0 is unique in D_1.
)dθ for some y* ∈ D_1 such that F(y*) = 0. By (a_3) and (a_6), we have in turn that … Next, we shall show the local convergence of method (8) in an analogous way, but with the functions g_2, ϕ, g_3 replaced by ḡ_2, ϕ_1, ḡ_3, where ḡ_2(t) = 1 + µ(t) … We shall use the same notation for r_1 as in (12), but notice that r̄_2 and r̄_3 correspond to the smallest positive solutions of the equations h̄_2(t) = 0 and h̄_3(t) = 0, respectively. Set r̄ = min{r_1, r̄_2, r̄_3}. (30) The local convergence analysis of method (8) is given by the following theorem:

Theorem 2. Assume that the conditions (A) hold. Then, the conclusions of Theorem 1 also hold for method (8), with the functions ḡ_2, ḡ_3 and the radius r̄ replacing g_2, g_3 and r, respectively.
Proof. Proceeding as in Theorem 1, and using the second and third substeps of method (8), we get (as in Theorem 1) that … Denote by r̃_2, r̃_3 the smallest positive solutions of the equations h̃_2(t) = 0 and h̃_3(t) = 0, and set r̃ = min{r_1, r̃_2, r̃_3}. (31) Then, we have:

Theorem 3. Assume that the conditions (A) hold. Then, the conclusions of Theorem 1 also hold for method (9), with the functions g̃_2, g̃_3 and the radius r̃ replacing g_2, g_3 and r, respectively.
Proof. Notice that from the second and third substeps of method (9) we obtain …

Remark 1. Methods (7)-(9) are not affected when we use the conditions of Theorems 1-3 instead of the stronger conditions used in ([16], Theorem 1). Moreover, we can compute the computational order of convergence (COC) [18], defined by
COC = ln( ||x_{n+1} − α|| / ||x_n − α|| ) / ln( ||x_n − α|| / ||x_{n−1} − α|| ),
or the approximate computational order of convergence (ACOC) [9], given by
ACOC = ln( ||x_{n+1} − x_n|| / ||x_n − x_{n−1}|| ) / ln( ||x_n − x_{n−1}|| / ||x_{n−1} − x_{n−2}|| ).
In this way, we obtain the order of convergence in practice.
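Both quantities in Remark 1 can be computed directly from stored iterates. A minimal scalar sketch (the helper names are ours; COC needs the solution α, ACOC does not):

```python
import math

def coc(xs, alpha):
    """Computational order of convergence (COC) from the last three
    iterates of xs and the known solution alpha."""
    e = [abs(x - alpha) for x in xs[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

def acoc(xs):
    """Approximate COC (ACOC): the same quotient built from successive
    differences of the last four iterates, so alpha is not needed."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])
```

For a quadratically convergent Newton sequence, for example, both quantities approach 2 as long as the differences stay above rounding level.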

Numerical Examples
Here, we demonstrate the theoretical results established in Section 2. We use the divided difference given by F[x, y] = (1/2)(F′(x) + F′(y)) or F[x, y] = ∫₀¹ F′(y + τ(x − y)) dτ.
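As a quick check of the first realization: for the averaged divided difference, the secant relation F[x, y](x − y) = F(x) − F(y) holds exactly whenever F′ is affine, since the average of endpoint derivatives is then the exact value of the integral form (trapezoidal rule). A sketch with a hypothetical quadratic map F (our choice, not one of the paper's examples):

```python
import numpy as np

def F(v):
    # hypothetical quadratic test map F: R^2 -> R^2
    x, y = v
    return np.array([x**2 + y - 3.0, x + y**2 - 5.0])

def dF(v):
    # exact Jacobian F'(v); affine in v because F is quadratic
    x, y = v
    return np.array([[2.0 * x, 1.0], [1.0, 2.0 * y]])

def dd_average(x, y):
    # averaged divided difference F[x, y] = (F'(x) + F'(y)) / 2
    return 0.5 * (dF(x) + dF(y))

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
lhs = dd_average(x, y) @ (x - y)   # F[x, y](x - y)
rhs = F(x) - F(y)                  # agrees exactly for this quadratic F
```

For general smooth F the relation holds only approximately for the averaged form, while the integral form satisfies it by construction.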
Example 1. Suppose that the motion of an object in three dimensions is governed by the system of differential equations … Then, the solution of the system is given, for v = (x, y, z)^T, by the function … The Fréchet derivative is given by … Then, for α = (0, 0, 0)^T, we have the functions λ of condition (a_4) … The parameters r_1, r_2, r_3, r̄_2, r̄_3, r̃_2 and r̃_3 for methods (7)-(9) are given in Table 1.
Table 1. Numerical results for Example 1.

Applications
Lastly, we apply methods (7)-(9) to solve systems of nonlinear equations in R^m. Their performance is also compared with some existing methods: Newton's method (NM), the sixth-order methods proposed by Grau et al. [12] and by Sharma and Arora [15], and the eighth-order Triple-Newton method [14]. These methods are given as follows. Grau-Grau-Noguera method: This method requires two inverses and three function evaluations.
Grau-Grau-Noguera method: It requires two inverses and three function evaluations.
Sharma-Arora Method: The method requires one inverse and three function evaluations.
Triple-Newton Method: This method requires three inverses and three function evaluations.
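For reference, the Newton baseline (NM) used in the comparisons can be sketched for systems as follows; this is a minimal illustration using a linear solve in place of an explicit inverse, and the example system in the usage note is ours, not one of the paper's test problems.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=100):
    """Newton's method for F(x) = 0 in R^m:
    x_{n+1} = x_n - J(x_n)^{-1} F(x_n), implemented via np.linalg.solve."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(J(x), Fx)
    return x
```

Each step costs one function evaluation and one Jacobian factorization; the higher-order methods above trade additional evaluations and inverses per step for a higher order of convergence.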
Example 4. Let us consider the system of nonlinear equations: … Computations are performed in the programming package Mathematica using multiple-precision arithmetic. For every method, we record the number of iterations (n) needed to converge to the solution, using the stopping criterion ||F(x_n)|| < 10^{−350}. In order to verify the theoretical order of convergence, we calculate the approximate computational order of convergence (ACOC) using formula (33). For the computation of the divided difference, we use the formula (see [12])
F[x, y]_{ij} = ( f_i(x_1, …, x_j, y_{j+1}, …, y_m) − f_i(x_1, …, x_{j−1}, y_j, …, y_m) ) / (x_j − y_j), 1 ≤ i, j ≤ m.
Numerical results are displayed in Tables 4 and 5, which include:

• The dimension (m) of the system of equations.
• The required number of iterations (n).
• The value of ||F(x_n)|| for the approximation to the corresponding solution, wherein N(−h) denotes N × 10^{−h}.
• The approximate computational order of convergence (ACOC).

From the numerical results shown in Tables 4 and 5, it is clear that the methods possess stable convergence behavior. Moreover, the small values of ||F(x_n)||, in comparison with the other methods, show the accuracy of the presented methods. The computational order of convergence also supports the theoretical order of convergence. Similar numerical tests, carried out for a number of other problems, confirmed the above conclusions to a large extent.
Let U(a, b) and Ū(a, b) stand, respectively, for the open and closed balls in B_1 with center a ∈ D and radius b > 0.

Table 2. Numerical results for Example 2.

Table 3. Numerical results for Example 3.

Table 4. Comparison of performance of methods for Example 4. (ACOC: approximate computational order of convergence.)

Table 5. Comparison of performance of methods for Example 5.