A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives of up to the tenth order, not appearing in the algorithms, were utilized to show convergence. Moreover, no computable error estimates, radius of convergence, or results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms. In this way we extend the applicability of these algorithms. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria, and it can be used on other algorithms as well. Numerical experiments are utilized to test the convergence criteria.
In this study, we consider the problem of finding a solution of the nonlinear equation
where is a continuously differentiable nonlinear operator acting between the Banach spaces and , and D stands for an open, nonempty, convex subset of One would like to obtain a solution of (1) in closed form, but this can rarely be done, so most researchers and practitioners develop iterative algorithms that converge to a solution. It is worth noticing that a plethora of problems from diverse disciplines, such as applied mathematics, mathematical biology, chemistry, economics, physics, engineering, and scientific computing, reduce to solving an equation like (1) [1,2,3,4]. Therefore, the study of these algorithms in the general setting of a Banach space is important. At this generality we cannot use these algorithms to find solutions of multiplicity greater than one, since we assume the invertibility of the first derivative at the solution. There is an extensive literature on algorithms for solving systems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Our technique can be used to study the local convergence of these algorithms along the same lines. Algorithms (2) and (3) when (i.e., when ) cannot be used to solve underdetermined systems in this form. However, if these derivatives are replaced by Moore–Penrose inverses (as in the case of Newton’s and other algorithms [1,3,4]), then the modified algorithms can be used to solve underdetermined systems too, and a similar local convergence analysis can be carried out; we do not pursue this task here. We also cannot discuss local versus global convergence in the setting of a Banach space. However, we refer the reader to subdivision solvers that are global and guaranteed to find all solutions (when ) [2,3,5,6,18,20,24]. Using these ideas, one could also make use of Algorithms (2) and (3); we do not pursue this here.
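As a minimal illustration of such an iterative algorithm (the ninth-order methods (2) and (3) follow the same pattern with more sub-steps), consider Newton's method for a scalar equation; the test equation and starting point below are hypothetical choices, not taken from this paper.

```python
# Sketch of a basic iterative algorithm for f(x) = 0:
# Newton's method, x_{n+1} = x_n - f(x_n) / f'(x_n).
# The equation f(x) = x**3 - 2 (solution 2**(1/3)) is an
# illustrative example only.

def newton(f, df, x0, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)  # requires f'(x) invertible near the solution
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(root)  # close to 2 ** (1/3)
```

The invertibility of the derivative in the division step is exactly the assumption that rules out solutions of multiplicity greater than one; replacing the inverse by a Moore–Penrose pseudoinverse is what allows underdetermined systems.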
In this paper we study efficient ninth-order algorithms studied in , defined for by
The analysis in  uses assumptions on the tenth-order derivatives of However, assumptions on higher-order derivatives reduce the applicability of Algorithms (2) and (3). For example, let Define f on D by
Then, we get
and Obviously, is not bounded on Hence, the convergence of Algorithms (2) and (3) is not guaranteed by the analysis in .
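The specific function is not legible above, so the following reconstruction is an assumption: the standard motivating example of this type in the local-convergence literature takes f(t) = t³ ln t² + t⁵ − t⁴ (with f(0) = 0) on D = [−1/2, 3/2], whose third derivative f‴(t) = 6 ln t² + 60t² − 24t + 22 is unbounded near t = 0 even though t* = 1 solves f(t) = 0 with f′(1) = 3 invertible.

```python
import math

# Hypothetical reconstruction of the standard example:
# f(t) = t^3 ln(t^2) + t^5 - t^4, f(0) = 0, on D = [-1/2, 3/2].
def fppp(t):
    # third derivative: 6 ln(t^2) + 60 t^2 - 24 t + 22
    return 6 * math.log(t * t) + 60 * t * t - 24 * t + 22

# |f'''| blows up as t -> 0, so convergence criteria based on third
# (or higher) order derivatives fail for this f.
for t in (1e-1, 1e-3, 1e-6):
    print(t, fppp(t))
```

Since the ln t² term dominates near the origin, any analysis requiring bounded derivatives of order three or higher cannot cover this function, whereas first-derivative conditions can.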
We are looking for a ball centered at and of a certain radius such that, if one chooses a starting point from inside this ball, then the convergence of the method to is guaranteed. That is, we are interested in the ball convergence of these methods. Moreover, we also obtain upper bounds on the radius of convergence and results on the uniqueness of not provided in . Our technique can be used to enlarge the applicability of other algorithms in a similar manner [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24].
The rest of the paper is organized as follows. The convergence analysis of Algorithms (2) and (3) is given in Section 2, and examples are given in Section 3.
2. Ball Convergence
We present the ball convergence of Algorithms (2) and (3), which is based on some real functions and positive parameters. Let
Suppose there exists a continuous and increasing function on S with values in itself satisfying such that equation
has a least positive zero denoted by We verify the existence of solutions for some functions that follow based on the Intermediate Value Theorem (IVT). Set Define functions on as
where function on is continuous and increasing with We get and with Denote by the least zero of equation in
Suppose that equation
has a least positive zero denoted by where Set and Define functions and on as
where is defined on and is also continuous and increasing. We get again and as Denote by the least zero of equation on
Suppose that equation
has a least positive zero denoted by Set and Define functions and on as
Then, we get and as Denote by the least solution of equation in
Suppose that equation
has a least positive zero denoted by Set and Define functions and on as
We have and as Denote by the least solution of equation in Consider a radius of convergence R as given by
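Numerically, the radius definitions above amount to finding the least positive zero of each equation of the form g(t) − 1 = 0, whose existence the Intermediate Value Theorem guarantees, and taking the minimum. The sketch below illustrates this with a hypothetical majorant function of Newton type, g₁(t) = Lt / (2(1 − L₀t)) for Lipschitz data w₀(t) = L₀t and w(t) = Lt; these particular functions are assumptions for illustration, not the "g" functions of the paper.

```python
def least_positive_zero(h, t_max, tol=1e-12):
    """Least positive zero of h on (0, t_max) by bisection,
    assuming h(0+) < 0 and a single sign change (IVT)."""
    lo, hi = 1e-15, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical Lipschitz constants and Newton-type majorant function.
L0, L = 2.0, 3.0
g1 = lambda t: L * t / (2.0 * (1.0 - L0 * t))
R = least_positive_zero(lambda t: g1(t) - 1.0, t_max=1.0 / L0 - 1e-9)
print(R)  # closed form for this g1: 2 / (2 L0 + L) = 2/7
```

For a method with several sub-steps, each g-function yields its own radius and R is the minimum of them, as in (8).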
By these definitions, we have for
Finally, define and its closure. We shall use the notation for all
The following conditions (A) shall be used.
is continuously differentiable and there exists a simple solution of equation with being invertible.
There exists a continuous and increasing function from S into itself with such that for all
There exist continuous and increasing functions from into S with such that for each
There exists a continuous function from into S such that for all
Next, the local convergence result for algorithm (2) follows.
Under the conditions (A) further consider choosing Then, sequence exists, stays in with Moreover, the following estimates hold true
where the “” functions are as introduced earlier and R is defined by (8). Furthermore, is the only solution of equation in the set given in (A6).
Consider By (A1) and (A2)
so, by the Banach lemma on invertible operators , with
Setting we obtain from Algorithm (2) (first sub-step for ) that exists. Then, using Algorithm (2) (first sub-step for ), (A1), (8), (A3), (18) and (13) (for )
so and (14) is true for We must show is invertible, so and exist by Algorithm (2) for Indeed, we have by (A2) and (19)
so is invertible,
Then, using the second sub-step of Algorithm (2), (8), (13) (for ), (18) (for ), (19) and (20), we first have
So, we get by using also the triangle inequality
so and (15) is true for By the third sub-step of Algorithm (2) for we write
Then, using (8), (13) (for ), (18) (for ) and (19)–(22), we get
so and (16) holds true for Similarly, if we exchange the role of with we first obtain
So, we get that
so and (17) is true for Hence, estimates (14)–(17) are true for Suppose (14)–(17) are true for then, by switching by in the previous estimates, we immediately obtain that these estimates hold for , completing the induction. Moreover, by the estimate
with we obtain and Let with Set In view of (A2) and (A6), we get
so, from the invertibility of G and the estimate we conclude that □
In a similar way, we provide the local convergence analysis for Algorithm (3). This time, the functions “g” and “h” are, respectively, for
and solving equation A radius of convergence is defined as in (8)
Estimates (9)–(13) also hold with these changes. This time we are using the estimates
Moreover, we can write
Then, we can write
( is simply replacing in the definition of ), so
Hence, with these changes, we present the local convergence analysis of Algorithm (3).
Under the conditions (A), the conclusions of Theorem 1 hold but with replaced by respectively.
We can compute  the computational order of convergence (COC), defined by
or the approximate computational order of convergence
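A hedged sketch of both quantities, assuming the standard definitions: the COC is the logarithmic quotient of successive errors eₙ = ‖xₙ − x*‖, and the ACOC replaces errors by successive differences ‖xₙ₊₁ − xₙ‖ so that the solution need not be known. The demo equation below is an illustrative choice, not from the paper.

```python
import math

# COC : rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}), e_n = |x_n - x*|
# ACOC: the same quotient built from differences |x_{n+1} - x_n|

def coc(xs, x_star):
    """Computational order of convergence from the last 3 iterates."""
    e0, e1, e2 = (abs(x - x_star) for x in xs[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """Approximate COC from the last 4 iterates (x* not needed)."""
    d1, d2, d3 = (abs(xs[i + 1] - xs[i]) for i in (-4, -3, -2))
    return math.log(d3 / d2) / math.log(d2 / d1)

# Demo: Newton's method for x^2 = 2 (order 2 expected).
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(0.5 * (x + 2.0 / x))
print(coc(xs, math.sqrt(2)), acoc(xs))  # both close to 2
```

For the ninth-order methods studied here one expects both estimates to approach 9 on well-behaved problems, within the limits of floating-point precision.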
In this way, we obtain the order of convergence in practice without resorting to the computation of the higher-order derivatives appearing in the method or in the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of such results.
3. Numerical Examples
Let us consider a system of differential equations governing the motion of an object and given by
with initial conditions Let Let Define function H on D for by
The Fréchet-derivative is defined by
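The specific operator is not legible above, so the following is an assumed reconstruction of the standard example used with this motion problem: H(x, y, z) = (eˣ − 1, ((e − 1)/2)y² + y, z) on the closed unit ball, with solution x* = (0, 0, 0) and diagonal Fréchet derivative H′(x, y, z) = diag(eˣ, (e − 1)y + 1, 1).

```python
import math

# Hypothetical reconstruction of the standard example for this
# system: solution x* = (0, 0, 0), H'(x*) = I (invertible).
E = math.e

def H(v):
    x, y, z = v
    return (math.exp(x) - 1.0, 0.5 * (E - 1.0) * y * y + y, z)

def dH_diag(v):
    """Diagonal of the Frechet derivative H'(x, y, z)."""
    x, y, z = v
    return (math.exp(x), (E - 1.0) * y + 1.0, 1.0)

print(H((0.0, 0.0, 0.0)))        # (0.0, 0.0, 0.0): x* solves H = 0
print(dH_diag((0.0, 0.0, 0.0)))  # (1.0, 1.0, 1.0): H'(x*) = I
```

With this choice, the functions in the (A) conditions can be taken linear and constant, which is what makes the radii below computable in closed form.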
Notice that, using the (A) conditions, we get for The radii are
Let the space of continuous functions defined on be equipped with the max norm. Let Define function H on D by
We have that
Then, we get so and Then, the radii are
Returning to the motivational example in the introduction of this study, we have for and Then, the radii are
Conceptualization, S.R. and I.K.A.; methodology, S.R., I.K.A. and S.G.; software, I.K.A. and S.G.; validation, S.R., I.K.A. and S.G.; formal analysis, S.R., I.K.A. and S.G.; investigation, S.R., I.K.A. and S.G.; resources, S.R., I.K.A. and S.G.; data curation, S.R. and S.G.; writing—original draft preparation, S.R., I.K.A. and S.G.; writing—review and editing, I.K.A. and S.G.; visualization, S.R., I.K.A. and S.G.; supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Des. 2012, 29, 265–279. [Google Scholar] [CrossRef]
Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorithms 2017. [Google Scholar] [CrossRef]
Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Des. 2017, 90, 37–47. [Google Scholar] [CrossRef]
Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007. [Google Scholar]
Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2015, 71, 1–23. [Google Scholar] [CrossRef]