Abstract
A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives not appearing in the algorithms, up to the 10th order, were utilized to show convergence. Moreover, no computable error estimates, radius of convergence, or results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms; in this way we extend their applicability. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria, and it can be used on other algorithms. Numerical experiments are utilized to test the convergence criteria.
1. Introduction
In this study, we consider the problem of finding a solution x* of the nonlinear equation

F(x) = 0, (1)

where F : D ⊆ B1 → B2 is a continuously differentiable nonlinear operator acting between the Banach spaces B1 and B2, and D stands for an open nonempty convex subset of B1. One would like to obtain a solution of (1) in closed form. However, this can rarely be done, so most researchers and practitioners develop iterative algorithms which converge to x*. It is worth noticing that a plethora of problems from diverse disciplines such as applied mathematics, mathematical biology, chemistry, economics, physics, engineering and scientific computing reduce to solving an equation like (1) [1,2,3,4]. Therefore, the study of these algorithms in the general setting of a Banach space is important. At this level of generality we cannot use these algorithms to find solutions of multiplicity greater than one, since we assume the invertibility of F′(x*). There is an extensive literature on algorithms for solving systems [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Our technique can be used to look at the local convergence of these algorithms along the same lines. Algorithms (2) and (3) cannot be used in this form to solve underdetermined systems. However, if the derivatives involved are replaced by Moore–Penrose inverses (as in the case of Newton's and other algorithms [1,3,4]), then these modified algorithms can be used to solve underdetermined systems too, and a similar local convergence analysis can be carried out; we do not pursue this task here. We also do not discuss local versus global convergence in the setting of a Banach space; however, we refer the reader to subdivision solvers that are global and guaranteed to find all solutions [2,3,5,6,18,20,24]. Using those ideas one could again make use of Algorithms (2) and (3), but we do not pursue this here either.
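The Moore–Penrose modification just mentioned can be sketched as follows. This is only an illustrative Newton-type iteration on a hypothetical underdetermined test system, not Algorithms (2) or (3); the function names, test problem, and tolerances here are our own choices.

```python
import numpy as np

def newton_pinv(F, J, x0, tol=1e-10, max_iter=50):
    """Newton-type iteration x_{n+1} = x_n - J(x_n)^+ F(x_n), where J^+ is the
    Moore-Penrose pseudoinverse, so that underdetermined systems (fewer
    equations than unknowns) can also be handled."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x = x - np.linalg.pinv(J(x)) @ F(x)
        if np.linalg.norm(F(x)) < tol:
            break
    return x

# Hypothetical underdetermined system: one equation, two unknowns.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0])   # solutions form the unit circle
J = lambda v: np.array([[2.0*v[0], 2.0*v[1]]])      # 1x2 Jacobian
sol = newton_pinv(F, J, [2.0, 1.0])                 # converges to a point on the circle
```

When the Jacobian is square and invertible, `np.linalg.pinv` reduces to the ordinary inverse, so the sketch specializes to the classical Newton iteration.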
In this paper we study efficient ninth-order algorithms studied in [23], defined for each n = 0, 1, 2, … by
and
where
The analysis in [23] uses assumptions on the 10th order derivatives of F. However, assumptions on higher order derivatives reduce the applicability of Algorithms (2) and (3). For example, let B1 = B2 = ℝ and D = [−1/2, 3/2], and define f on D by

f(t) = t³ ln t² + t⁵ − t⁴ if t ≠ 0, f(0) = 0.

Then, we get

f′(t) = 3t² ln t² + 5t⁴ − 4t³ + 2t²,
f″(t) = 6t ln t² + 20t³ − 12t² + 10t,
f‴(t) = 6 ln t² + 60t² − 24t + 22,

and f(1) = 0. Obviously, f‴ is not bounded on D. Hence, the convergence of Algorithms (2) and (3) is not guaranteed by the analysis in [23].
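A quick numerical check illustrates this unboundedness; here we assume the standard motivational choice f(t) = t³ ln t² + t⁵ − t⁴ (extended by f(0) = 0) on D = [−1/2, 3/2], whose third derivative f‴(t) = 6 ln t² + 60t² − 24t + 22 diverges as t → 0:

```python
import math

def f(t):
    # f(t) = t^3 ln t^2 + t^5 - t^4, extended by f(0) = 0
    return t**3 * math.log(t * t) + t**5 - t**4 if t != 0.0 else 0.0

def f3(t):
    # third derivative for t != 0: f'''(t) = 6 ln t^2 + 60 t^2 - 24 t + 22
    return 6.0 * math.log(t * t) + 60.0 * t * t - 24.0 * t + 22.0

# f(1) = 0, yet f''' -> -infinity as t -> 0, so f''' is unbounded on D.
samples = [f3(10.0 ** (-k)) for k in range(1, 8)]
```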
We are looking for a ball centered at x* and of a certain radius such that, if one chooses an initial point from inside this ball, then the convergence of the method to x* is guaranteed. That is, we are interested in the ball convergence of these methods. Moreover, we also obtain upper bounds on the radius of convergence and results on the uniqueness of x*, which were not provided in [23]. Our technique can be used to enlarge the applicability of other algorithms in a similar manner [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24].
2. Ball Convergence
We present the ball convergence of Algorithms (2) and (3), which is based on some scalar functions and positive parameters. Let S = [0, ∞).
Suppose there exists a continuous and increasing function on S with values in itself such that the equation
has a least positive zero denoted by We verify the existence of solutions for some functions that follow based on the Intermediate Value Theorem (IVT). Set Define functions on as
and
where function on is continuous and increasing with We get and with Denote by the least zero of equation in
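The least positive zeros invoked throughout this section can be computed by bisection, which is exactly the IVT argument in algorithmic form. The linear majorant w0(t) = L0·t below is a hypothetical choice, used purely for illustration:

```python
def least_positive_zero(h, t_max, tol=1e-12):
    """Least positive zero of a continuous, increasing h with h(0) < 0 < h(t_max),
    located by bisection (the Intermediate Value Theorem guarantees existence)."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical majorant w0(t) = L0 * t: then w0(t) - 1 = 0 has least zero 1 / L0.
L0 = 2.0
rho = least_positive_zero(lambda t: L0 * t - 1.0, t_max=10.0)   # expect 0.5
```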
Suppose that equation
has a least positive zero denoted by where Set and Define functions and on as
and
where is defined on and is also continuous and increasing. We get again and as Denote by the least zero of equation on
Suppose equation
has a least positive zero denoted by Set and Define functions and on as
and
Then, we get and as Denote by the least solution of equation in
Suppose that equation
has a least positive zero denoted by Set and Define functions and on as
and
We have and as Denote by the least solution of equation in Consider a radius of convergence R as given by
By these definitions, we have for
and
Finally, define and its closure. We shall use the notation for all
The following conditions (A) shall be used.
- (A1)
- F is continuously differentiable on D and there exists a simple solution x* of equation (1) with F′(x*) being invertible.
- (A2)
- There exists a continuous and increasing function from S into itself with such that for all . Set
- (A3)
- There exist continuous and increasing functions from into S with such that for each . Set
- (A4)
- There exists a continuous function from into S such that for all
- (A5)
- where R is defined in (8), and exist.
- (A6)
- There exists such that . Set
Next, the local convergence result for algorithm (2) follows.
Theorem 1.
Under the conditions (A), further consider choosing the initial point in the ball of radius R about x*, excluding x* itself. Then, the sequence generated by Algorithm (2) exists, stays in that ball, and converges to x*. Moreover, the following estimates hold true
and
where the “g” functions are as introduced earlier and R is defined by (8). Furthermore, x* is the only solution of equation (1) in the set given in (A6).
Proof.
Consider By (A1) and (A2)
so by a lemma of Banach on invertible operators [20] with
Setting we obtain from algorithm (2) (first sub-step for ) that exists. Then, using Algorithm (2) (first substep for ), (A1), (8), (A3), (18) and (13) (for )
so and (14) is true for We must show is invertible, so and exist by Algorithm (2) for Indeed, we have by (A2) and (19)
so is invertible,
Then, using the second sub-step of Algorithm (2), (8), (13) (for ), (18) (for ), (19) and (20), we first have
So, we get by using also the triangle inequality
so and (15) is true for By the third sub-step of algorithm (2) for we write
Then, using (8), (13) (for ), (18) (for ), and (19)–(22), we get
so and (16) holds true for Similarly, if we exchange the role of with we first obtain
So, we get that
so and (17) is true for n = 0. Hence, estimates (14)–(17) are true for n = 0. Suppose (14)–(17) are true for some n; then, by switching the roles of the corresponding iterates in the previous estimates, we immediately obtain that these estimates hold for n + 1, completing the induction. Moreover, by the estimate
with we obtain and Let with Set In view of (A2) and (A6), we get
so, from the invertibility of G and the estimate, we conclude that □
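The Banach lemma on invertible operators used at the start of the proof states that if ‖A⁻¹(B − A)‖ < 1 for an invertible A, then B is invertible with ‖B⁻¹A‖ ≤ 1/(1 − ‖A⁻¹(B − A)‖). A small numerical illustration, with arbitrary test matrices of our own choosing:

```python
import numpy as np

A = np.array([[4.0, 1.0], [0.0, 3.0]])            # invertible "reference" operator
B = A + np.array([[0.2, -0.1], [0.1, 0.2]])       # small perturbation of A

q = np.linalg.norm(np.linalg.inv(A) @ (B - A), 2) # contraction factor ||A^-1 (B - A)||
assert q < 1.0                                    # hypothesis of the Banach lemma

lhs = np.linalg.norm(np.linalg.inv(B) @ A, 2)     # ||B^-1 A||
bound = 1.0 / (1.0 - q)                           # Neumann-series bound
```

The bound follows from writing B = A(I + A⁻¹(B − A)) and expanding (I + M)⁻¹ as a Neumann series when ‖M‖ < 1.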
In a similar way we provide the local convergence analysis for Algorithm (3). This time the functions “g” and “h” are, respectively, for
where
and
where
and solving equation A radius of convergence is defined as in (8)
Moreover, we can write
so
Then, we can write
so
where
so
and
( is simply replacing in the definition of ), so
Hence, with these changes, we present the local convergence analysis of method (3).
Theorem 2.
Under the conditions (A), the conclusions of Theorem 1 hold, but with the “g” and “h” functions replaced by the ones defined above, respectively.
Remark 1.
We can compute [17] the computational order of convergence (COC) defined by

COC = ln( ‖x_{n+1} − x*‖ / ‖x_n − x*‖ ) / ln( ‖x_n − x*‖ / ‖x_{n−1} − x*‖ ),

or the approximate computational order of convergence (ACOC)

ACOC = ln( ‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖ ) / ln( ‖x_n − x_{n−1}‖ / ‖x_{n−1} − x_{n−2}‖ ).

In this way we obtain in practice the order of convergence without resorting to the computation of higher order derivatives appearing in the method or in the sufficient convergence criteria usually appearing in the Taylor expansions for the proofs of those results.
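Assuming the standard definitions above, ACOC can be estimated directly from an iterate history, with no knowledge of x*. The sketch below applies it to Newton's method on the hypothetical test equation t² − 2 = 0, whose order should approach 2:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from iterates xs:
    rho_n = ln(|x_{n+1}-x_n| / |x_n-x_{n-1}|) / ln(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return [math.log(d[i + 1] / d[i]) / math.log(d[i] / d[i - 1])
            for i in range(1, len(d) - 1)]

# Newton's method for t^2 - 2 = 0 (a hypothetical test problem, order 2).
xs = [1.5]
for _ in range(4):                 # a few steps; more would hit machine precision
    t = xs[-1]
    xs.append(t - (t * t - 2.0) / (2.0 * t))
orders = acoc(xs)                  # entries approach 2
```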
3. Numerical Examples
Example 1.
Let us consider a system of differential equations governing the motion of an object and given by

G1′(x) = eˣ, G2′(y) = (e − 1) y + 1, G3′(z) = 1,

with initial conditions G1(0) = G2(0) = G3(0) = 0. Let B1 = B2 = ℝ³ and D = B̄(0, 1), and take x* = (0, 0, 0)ᵀ. Define function H on D for v = (x, y, z)ᵀ by

H(v) = ( eˣ − 1, ((e − 1)/2) y² + y, z )ᵀ.

The Fréchet derivative is defined by

H′(v) = [ [eˣ, 0, 0], [0, (e − 1) y + 1, 0], [0, 0, 1] ].
Notice that using the (A) conditions, we get for The radii are
Example 2.
Let B1 = B2 = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm, and let D = B̄(0, 1). Define function H on D by
We have that
Then, we get so and Then the radii are
Example 3.
Returning to the motivational example from the introduction of this study, we have x* = 1. Then, the radii are
Author Contributions
Conceptualization, S.R. and I.K.A.; methodology, S.R., I.K.A. and S.G.; software, I.K.A. and S.G.; validation, S.R., I.K.A. and S.G.; formal analysis, S.R., I.K.A. and S.G.; investigation, S.R., I.K.A. and S.G.; resources, S.R., I.K.A. and S.G.; data curation, S.R. and S.G.; writing—original draft preparation, S.R., I.K.A. and S.G.; writing—review and editing, I.K.A. and S.G.; visualization, S.R., I.K.A. and S.G.; supervision, I.K.A. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Aizenshtein, M.; Bartoň, M.; Elber, G. Global solutions of well-constrained transcendental systems using expression trees and a single solution test. Comput. Aided Geom. Des. 2012, 29, 265–279. [Google Scholar] [CrossRef]
- Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Algorithms 2017. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
- Van Sosin, B.; Elber, G. Solving piecewise polynomial constraint systems with decomposition and a subdivision-based solver. Comput. Aided Des. 2017, 90, 37–47. [Google Scholar] [CrossRef]
- Argyros, I.K. Computational Theory of Iterative Methods; Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier: New York, NY, USA, 2007. [Google Scholar]
- Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Algorithms 2015, 71, 1–23. [Google Scholar] [CrossRef]
- Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
- Bartoň, M. Solving polynomial systems using no-root elimination blending schemes. Comput. Aided Des. 2011, 43, 1870–1878. [Google Scholar] [CrossRef]
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
- Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of nonlinear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar]
- Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef]
- Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
- Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38. [Google Scholar] [CrossRef]
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA 2017, 74, 147–163. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
- Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300. [Google Scholar] [CrossRef]
- Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point-parametric Chebyshev-Halley-type methods of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).