Ball Comparison between Three Sixth Order Methods for Banach Space Valued Operators

Abstract: Three methods of sixth order convergence are studied for approximating the solution of an equation defined on a finite dimensional Euclidean space. This convergence, established via Taylor series, requires the existence of derivatives of, at least, order seven, although only derivatives of order one are involved in the methods. Moreover, with the earlier approaches we have no estimates on the error distances, no conclusions about the uniqueness of the solution in any domain, and the convergence domain is not sufficiently large. Hence, these methods have limited usage. This paper introduces a new technique, on a general Banach space setting, based only on the first derivative and Lipschitz-type conditions, that allows the study of the convergence. In addition, we find usable error distances as well as uniqueness results for the solution. A comparison between the convergence balls of the three methods, not possible to derive with the previous approaches, is also given. The technique can be used with other methods available in the literature, consequently improving their applicability. Several numerical examples compare these methods and illustrate the convergence criteria.


Introduction
Let F : Ω ⊂ X → Y be a Fréchet differentiable operator, where X, Y are two Banach spaces and Ω ⊂ X is open, convex, and non-void. To solve F(x) = 0, we study the local convergence of the three step methods (1)–(3), defined for σ = 0, 1, 2, . . . . Applications of F(x) = 0 are discussed in the standard books [1][2][3][4]. The definition of the Fréchet derivative can be found, for example, in [5]. These methods use two operator evaluations, two Fréchet derivative evaluations, and two linear operator inversions. The sixth convergence order of the methods was established in Cordero et al. [6], Soleymani et al. [7], and Esmaeili and Ahmadi [8], respectively. The conclusions were obtained for the special case X = Y = R^i, using Taylor series with hypotheses up to the seventh derivative, even though it does not appear in the methods. Thus, these hypotheses restrict the applicability of the methods. Let us consider a motivational example. Define the function F on X = Y = R and D = [−1/2, 3/2] by

F(κ) = κ^3 ln κ^2 + κ^5 − κ^4 if κ ≠ 0, and F(0) = 0,

which leads to

F'(κ) = 3κ^2 ln κ^2 + 5κ^4 − 4κ^3 + 2κ^2.

We note that F'''(κ) is not bounded on D. Therefore, results requiring the existence of F'''(κ) or higher derivatives cannot be applied to study the convergence of Equations (1)–(3). Moreover, no computable error bounds on ‖x_σ − x*‖, where x* solves the equation F(x) = 0, nor any information regarding the uniqueness of the solution, are provided using Lipschitz-type functions. Similar types of problems can be found in [9][10][11][12][13][14][15]. Furthermore, the convergence criteria cannot be compared, since they are based on different hypotheses. We address all these problems by using only the first derivative. Moreover, we rely on the computational order of convergence (COC) or the approximated computational order of convergence (ACOC) [16][17][18] to determine the convergence order without requiring derivatives of order higher than one. The new technique uses the same set of conditions for the three methods.
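The unboundedness of the third derivative in the motivational example can be checked numerically. The sketch below assumes F(κ) = κ^3 ln κ^2 + κ^5 − κ^4 for κ ≠ 0 and F(0) = 0 (an antiderivative consistent with the stated F'(κ)), together with the hand-computed third derivative F'''(κ) = 6 ln κ^2 + 60κ^2 − 24κ + 22, whose logarithmic term blows up as κ → 0:

```python
import math

# Motivational function on D = [-1/2, 3/2]; the solution of F(x) = 0 is x* = 1.
# Assumed form, consistent with the stated first derivative:
#   F(k) = k^3 * ln(k^2) + k^5 - k^4  (k != 0),  F(0) = 0
def F(k):
    return k**3 * math.log(k**2) + k**5 - k**4 if k != 0 else 0.0

# Hand-computed third derivative:
#   F'''(k) = 6*ln(k^2) + 60*k^2 - 24*k + 22
def F3(k):
    return 6.0 * math.log(k**2) + 60.0 * k**2 - 24.0 * k + 22.0

# |F'''| grows without bound as k -> 0 inside D, while F' stays bounded there.
```

The logarithmic term dominates near zero, so any convergence analysis requiring a bounded third (or seventh) derivative on D fails, even though the first derivative itself remains well behaved.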
Furthermore, it can also be used to extend the applicability of other methods along the same lines. Local convergence results are important because they demonstrate the degree of difficulty in choosing initial points within the so-called convergence ball that is in the region from which we can pick the initial points ensuring the convergence of the iterative method. In general, the convergence ball is small and, furthermore, decreases when the convergence order of the method increases. Therefore, it is very important to extend the radius of the convergence ball, but without imposing additional hypotheses that may limit the applicability of the method. This is the main motivation for this paper that accomplishes this objective under weaker hypotheses than previous methods. It must be noted that the number of required iterations to achieve a certain error tolerance is a distinct issue. This information is also provided, as well as the uniqueness of the solution that are not clearly addressed in previous works. In fact, when applying the previous methods, we do not have sufficient information for establishing an educated guess about the convergence ball from where the initial choice point must be picked. Therefore, with those methods, the initial point may, or may not, result in convergence toward the results.
The rest of the paper is organized as follows. Section 2 analyzes the local convergence of the proposed technique. Section 3 discusses several numerical experiments. Section 4 presents the conclusions.

Local Convergence
Let us introduce some real functions and parameters to be used in the local convergence analysis.
Notice also that g_1, h_1, g_2, h_2, r_1, and r_2 are the same as in Theorem 1. The functions g_3 and q appear due to the estimates for method (2). Hence, we arrive at the following theorem.
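In practice, each radius r_i is the smallest positive root of the corresponding equation h_i(t) = 0. As a minimal sketch, the code below assumes the simple linear Lipschitz majorants w_0(t) = L_0 t and w(t) = L t, for which the first-step function takes the classical Newton form g_1(t) = L t/(2(1 − L_0 t)); the root of h_1(t) = g_1(t) − 1 is then found by bisection and agrees with the closed form r_1 = 2/(2L_0 + L). The constants L_0 and L used here are placeholders, not values from the general analysis:

```python
import math

# Assumed linear majorants (an illustrative special case only):
#   w0(t) = L0*t, w(t) = L*t  =>  g1(t) = L*t / (2*(1 - L0*t))
L0 = math.e - 1.0                   # placeholder Lipschitz constants
L = math.exp(1.0 / (math.e - 1.0))

def h1(t):
    # h1(t) = g1(t) - 1; its smallest positive root is the radius r1
    return L * t / (2.0 * (1.0 - L0 * t)) - 1.0

# h1 is negative near 0 and tends to +infinity as t -> 1/L0,
# and is increasing on (0, 1/L0), so bisection locates the root.
a, b = 1e-12, (1.0 / L0) * (1.0 - 1e-12)
for _ in range(200):
    m = 0.5 * (a + b)
    if h1(m) < 0.0:
        a = m
    else:
        b = m
r1 = 0.5 * (a + b)
```

With these placeholder constants, r1 agrees with the closed form 2/(2L_0 + L) (about 0.38); the convergence ball is then B(x*, r) with r the minimum of the relevant radii.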

Theorem 2.
Suppose that the conditions (A) hold, but with r_2 and g_3^(2) replaced by r and g_3, respectively. Then, the same conclusions hold for method (2), but with (16) replaced accordingly. Finally, for the local convergence of method (3), we introduce the corresponding functions and denote r = min{r_2, r_3}.
These functions are defined due to the estimates for method (3).

Theorem 3. Let us consider hypotheses (A), but with g_2, g_3, and r_3 replaced by g_2, g_3, and r, respectively. Then, the conclusions of Theorem 1 hold for method (3), but with (15) and (16) replaced accordingly.

Numerical Examples
The theoretical results developed in the previous sections are illustrated numerically in this section. We denote the methods (1)–(3) by (CM), (SM), and (EA), respectively. We consider two real life problems and two standard nonlinear problems, illustrated in Examples 1–4. The results are listed in Tables 1–5; Table 3 contains the values of ψ_i and φ_i (in radians) for Example 3. Additionally, we obtain the COC, approximated by means of

λ = ln( ‖x_{σ+1} − x*‖ / ‖x_σ − x*‖ ) / ln( ‖x_σ − x*‖ / ‖x_{σ−1} − x*‖ ),

or the ACOC [18], given by

λ* = ln( ‖x_{σ+1} − x_σ‖ / ‖x_σ − x_{σ−1}‖ ) / ln( ‖x_σ − x_{σ−1}‖ / ‖x_{σ−1} − x_{σ−2}‖ ).

Example 2 involves a mapping of u = (u_1, u_2, u_3)^T with Fréchet derivative such that, for x* = (0, 0, 0)^T and F'(x*) = F'(x*)^{−1} = diag{1, 1, 1}, we have w_0(t) = (e − 1)t and w(t) = e^{1/(e−1)} t. Among the three methods, the largest radius of convergence belongs to the method CM. Example 3 is the kinematic synthesis problem for steering [22,23]. In Table 3, we present the values of ψ_i and φ_i (in radians).
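The COC can be computed from any convergent sequence of iterates. The sketch below applies it to plain Newton iterates (used only as a stand-in solver, since the displays of methods (1)–(3) are not reproduced here) on the scalar motivational function from the Introduction, whose solution is x* = 1; the computed order is close to 2, as expected for Newton's method:

```python
import math

# Scalar motivational function (assumed form, consistent with its stated F'):
def F(x):
    return x**3 * math.log(x**2) + x**5 - x**4 if x != 0 else 0.0

def Fp(x):
    return 3*x**2 * math.log(x**2) + 2*x**2 + 5*x**4 - 4*x**3 if x != 0 else 0.0

# Newton iteration from x0 = 1.2 toward the solution x* = 1
x, xs = 1.2, [1.2]
for _ in range(10):
    x = x - F(x) / Fp(x)
    xs.append(x)
    if abs(F(x)) < 1e-14:
        break

# COC: lambda = ln(e_{s+1}/e_s) / ln(e_s/e_{s-1}), with e_s = |x_s - x*|
xstar = 1.0
errs = [abs(v - xstar) for v in xs]
errs = [e for e in errs if e > 1e-14]  # drop errors at machine-precision level
coc = math.log(errs[-1] / errs[-2]) / math.log(errs[-2] / errs[-3])
```

The same computation applied to iterates of the sixth order methods would return a value close to 6, which is how the c-order is verified in the tables without using derivatives of order higher than one.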
We list the radii of convergence for Example 4 in Table 5.

Conclusions
We have introduced a new technique for proving convergence relying on hypotheses only on the first derivative (the only one used in these methods), in contrast to earlier studies using Taylor series and hypotheses up to the seventh derivative. Moreover, the new technique provides usable error analysis for Banach space valued operators. To recover the convergence order without using Taylor series, we rely on the COC and ACOC, which require only the first derivative.
Four numerical examples compare the radii of the convergence balls for these methods, showing that our results can be used in cases not possible before. The technique can also be used, in an analogous way, to extend the applicability of other iterative methods that use inverses.