Two-Step Solver for Nonlinear Equations

In this paper, we present a two-step solver for nonlinear equations with a nondifferentiable operator. The method is built on two methods of convergence order 1 + √2. We study the local and the semilocal convergence under weaker conditions in order to extend the applicability of the solver. Finally, we present a numerical example that confirms the theoretical results.


Introduction
A plethora of real-life applications from various areas, including Computational Science and Engineering, are converted via mathematical modeling into equations set in abstract spaces such as the n-dimensional Euclidean, Hilbert, Banach, and other spaces [1,2]. Researchers then face the great challenge of finding a solution x* of the equation in closed form. However, this task is generally very difficult to achieve. This is why iterative methods are developed, to generate a sequence approximating x* under suitable initial conditions.
One then considers methods that mix Newton and secant steps to increase the order of convergence; this is our first objective in this paper. Moreover, the study of iterative methods involves local convergence, where knowledge about the solution x* is used to determine upper bounds on the distances and the radius of convergence. Local results are important because they indicate how difficult the choice of initial points is. In the semilocal convergence analysis, we instead use knowledge surrounding the initial point to find sufficient conditions for convergence. It turns out that in both cases the convergence region is small, limiting the applicability of iterative methods. That is why we use our ideas of the center-Lipschitz condition, in combination with the notion of the restricted convergence region, to present local as well as semilocal improvements that extend the applicability of iterative methods.
Let E1, E2 be Banach spaces and Ω ⊆ E1 a convex set. Let F : Ω → E2 be differentiable in the Fréchet sense, and let G : Ω → E2 be continuous; its differentiability is not assumed. Then, we study the equation

F(x) + G(x) = 0.    (1)

This problem was considered by several authors. Most of them used one-step methods for finding an approximate solution of (1), for example, Newton-type methods [14], difference methods [4,5], and combined methods [1-3,11].
We proposed a two-step method [6,10,12] for the numerical solution of (1), in which Q(x, y) denotes a first-order divided difference of the operator G at the points x and y. This method is related to methods with convergence order 1 + √2 [7,13].
Recall that if an operator Q(x, y) satisfies Q(x, y)(x − y) = G(x) − G(y) for all x, y ∈ Ω with x ≠ y, then we call it a divided difference of G.
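As an illustration, one standard way to build such a divided difference for an operator G : R^m → R^m is the componentwise construction sketched below, in which column j of Q(x, y) switches the arguments from y to x one coordinate at a time. This sketch, and the nondifferentiable operator G used in it, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def divided_difference(G, x, y):
    """First-order divided difference Q(x, y) of G : R^m -> R^m.

    Column j is built from arguments that switch from y to x one
    coordinate at a time, so the sum over columns telescopes and
    Q(x, y) @ (x - y) == G(x) - G(y) holds (requires x_j != y_j).
    """
    m = x.size
    Q = np.zeros((m, m))
    z_prev = y.copy()               # z^(0) = y
    for j in range(m):
        z = z_prev.copy()
        z[j] = x[j]                 # z^(j): first j+1 coordinates taken from x
        Q[:, j] = (G(z) - G(z_prev)) / (x[j] - y[j])
        z_prev = z                  # after the loop, z^(m) = x
    return Q

# Hypothetical nondifferentiable operator containing |.| terms.
def G(v):
    return np.array([abs(v[0]) + v[1], v[0] - abs(v[1])])

x = np.array([0.3, -0.2])
y = np.array([0.1, 0.4])
Q = divided_difference(G, x, y)
```

The defining identity Q(x, y)(x − y) = G(x) − G(y) then holds to rounding error, because the column sums telescope.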
Two-step methods have some advantages over one-step methods. First, they usually require fewer iterations to find an approximate solution. Second, at each iteration they solve two similar linear problems, so the increase in computational complexity is small. That is why they are often used for solving nonlinear problems [2,6,8-10,12,13].
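Since the display of method (2) is not reproduced in this excerpt, the following sketch shows only the general pattern such two-step schemes share: one linear operator A_n per iteration, applied in two solves. The operator H, the builder build_A, and the test problem are hypothetical, not the paper's.

```python
import numpy as np

def two_step_solve(H, build_A, x0, y0, tol=1e-10, max_iter=50):
    """Generic two-step pattern (a sketch, not necessarily method (2) itself):

        x_{n+1} = x_n     - A_n^{-1} H(x_n)
        y_{n+1} = x_{n+1} - A_n^{-1} H(x_{n+1})

    The same operator A_n = build_A(x_n, y_n) serves both solves, so in
    practice a single LU factorization per iteration suffices.
    """
    x, y = np.asarray(x0, float), np.asarray(y0, float)
    for n in range(1, max_iter + 1):
        A = build_A(x, y)                      # one operator per iteration
        x_new = x - np.linalg.solve(A, H(x))
        y_new = x_new - np.linalg.solve(A, H(x_new))
        if np.linalg.norm(x_new - x) <= tol and np.linalg.norm(H(x_new)) <= tol:
            return x_new, n
        x, y = x_new, y_new
    return x, max_iter

# Hypothetical smooth test problem: solve v**2 - 2 = 0 componentwise,
# with A_n taken as the Jacobian evaluated at the midpoint (x_n + y_n)/2.
H = lambda v: v**2 - 2.0
build_A = lambda x, y: np.diag(x + y)          # Jacobian of v**2 - 2 at (x + y)/2
root, iters = two_step_solve(H, build_A, [1.0, 1.5], [1.1, 1.6])
```

Because both linear systems of an iteration share A_n, a factorization computed once (e.g., with scipy.linalg.lu_factor) can be reused for the second solve; this is the small increase in computational complexity mentioned above.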
In [6,10,12], the convergence analysis of the proposed method was carried out under classical and generalized Lipschitz conditions, and a superquadratic order of convergence was established. Numerical results for method (2) were presented in [10,12].

Local Convergence
From now on, by differentiable we mean differentiable in the Fréchet sense. Moreover, F and G are as defined previously.
Corollary 1. Assume the hypotheses of Theorem 2 hold. Then, by (12) and (13), for large n and a_{n−1} < 1, the preceding inequalities together with (14) relate method (2) to the equation t² − 2t − α = 0, whose positive solution t* = 1 + √(1 + α) gives the order of convergence of method (2).

Remark 1. To relate Theorem 1 and Corollary 2 in [12] to our Theorem 2 and Corollary 1, respectively, note that under (3)-(5) the constant B1 can replace B in these results. Then, Ω0 ⊆ Ω, and the new convergence radius is at least as large as the old one, which justifies the advantages claimed in the Introduction of this study.
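Numerically, the positive root of the characteristic equation t² − 2t − α = 0 can be checked as follows; here α = 1, the value that yields the order 1 + √2 quoted in the abstract (using other values of α is an assumption for illustration only).

```python
import math

def convergence_order(alpha):
    """Positive root of t**2 - 2*t - alpha = 0, i.e. t* = 1 + sqrt(1 + alpha)."""
    return 1.0 + math.sqrt(1.0 + alpha)

t = convergence_order(1.0)       # 1 + sqrt(2), approximately 2.4142
residual = t**2 - 2.0*t - 1.0    # near 0: t satisfies the characteristic equation
```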
with sequences {t_n}_{n≥0} and {s_n}_{n≥0} given in (22) and (23). Hence, the new results extend the applicability of method (2).
Remark 4. Section 3 contains existence results. The uniqueness results are omitted, since they can be found in [2,6], but with the center-Lipschitz constants replacing the larger Lipschitz constants.
Let us present the results obtained by method (2) for the approximate solution of the considered system of nonlinear equations. We chose the initial approximations x0 = (0.1; 0.1; 0.1)d (d is a real number) and y0 = x0 + 0.0001. The iterative process was stopped under the conditions ‖x_{n+1} − x_n‖ ≤ 10^{−10} and ‖H(x_{n+1})‖ ≤ 10^{−10}. We used the Euclidean norm. The obtained results are shown in Table 1.
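For readers who wish to reproduce an experiment of this shape, the sketch below runs a two-step divided-difference iteration with exactly this stopping rule and records ‖x_n − x_{n−1}‖ at each iteration, as in Table 1. The three-equation nonsmooth system H is a hypothetical stand-in (the paper's actual system is not reproduced in this excerpt), and the iteration is a generic divided-difference scheme, not necessarily method (2).

```python
import numpy as np

def divided_difference(G, x, y):
    """Componentwise first-order divided difference Q of G : R^m -> R^m,
    satisfying Q(x, y) @ (x - y) == G(x) - G(y)."""
    m = x.size
    Q = np.zeros((m, m))
    z_prev = y.copy()
    for j in range(m):
        z = z_prev.copy()
        z[j] = x[j]
        Q[:, j] = (G(z) - G(z_prev)) / (x[j] - y[j])
        z_prev = z
    return Q

# Hypothetical nonsmooth system H(v) = 0 in three variables.
def H(v):
    x, y, z = v
    return np.array([x**2 + x + abs(x) - 1.0,
                     y**2 + y + abs(y) + x - 1.5,
                     z**2 + z + abs(z) + y - 1.5])

d = 1.0
x = np.array([0.1, 0.1, 0.1]) * d      # x0 as in the text
y = x + 0.0001                          # y0 as in the text
steps = []                              # the column of Table 1: ||x_n - x_{n-1}||
for n in range(100):
    # Guard against coinciding coordinates before forming Q.
    y = np.where(x == y, y + 1e-12, y)
    A = divided_difference(H, x, y)
    x_new = x - np.linalg.solve(A, H(x))
    y_new = x_new - np.linalg.solve(A, H(x_new))
    steps.append(np.linalg.norm(x_new - x))          # Euclidean norm
    # Stopping rule from the text: both criteria at 1e-10.
    if steps[-1] <= 1e-10 and np.linalg.norm(H(x_new)) <= 1e-10:
        x = x_new
        break
    x, y = x_new, y_new
```

The list `steps` then plays the role of the ‖x_n − x_{n−1}‖ column reported in Table 1.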

Conclusions
The convergence region of iterative methods is, in general, small under Lipschitz-type conditions, leading to a limited choice of initial points. Therefore, extending the choice of initial points without imposing additional conditions that are more restrictive than before is extremely important in computational sciences. This difficult task has been achieved by defining a more restricted region in which the iterates lie, ensuring Lipschitz constants that are at least as small as in previous works. Hence, we obtain a wider choice of initial points, fewer iterations to achieve a predetermined error tolerance, and better knowledge of where the solution lies. These benefits come without additional cost, because the new Lipschitz constants are special cases of the old ones. This technique can be applied to other iterative methods.

Table 1. Value of ‖x_n − x_{n−1}‖ for each iteration.