Abstract
We provide a comparison between two schemes for solving equations on Banach spaces. Comparisons between schemes of the same convergence order have so far been given using numerical examples, which can go in favor of either scheme. However, we do not know in advance, under the same set of conditions, which scheme has the larger ball of convergence, tighter error bounds, or better information on the location of the solution. We present a technique that allows us to achieve this objective. Numerical examples are also given to further justify the theoretical results. Our technique can be used to compare other schemes of the same convergence order.
1. Introduction
In this study we compare two third convergence order schemes for solving the nonlinear equation

G(x) = 0, (1)

where G : D ⊂ B₁ → B₂ is a continuously differentiable nonlinear operator and D stands for an open nonempty subset of B₁. Here B₁ and B₂ denote Banach spaces. It is desirable to obtain a unique solution p of (1) in closed form. However, this can rarely be achieved, so researchers develop iterative schemes which converge to p. Some popular schemes are
Chebyshev-Type Scheme:
Simplified Chebyshev-Type Scheme:
The analysis of these schemes uses assumptions on the fourth-order derivatives of G, which do not appear in these schemes. The assumptions on fourth-order derivatives reduce the applicability of these schemes. For example, let B₁ = B₂ = ℝ and D = [−1/2, 3/2]. Define G on D by

G(t) = t³ log t² + t⁵ − t⁴ if t ≠ 0, and G(0) = 0.

Then, we get

G‴(t) = 6 log t² + 60t² − 24t + 22,

where the solution is p = 1. Obviously, G‴ is not bounded on D. Hence, the convergence of the above schemes is not guaranteed by the earlier studies. In this study we use only assumptions on the first derivative to prove our results. The advantages of our approach include: a larger radius of convergence for each scheme (i.e., more initial points) and tighter upper bounds on the error ‖xₙ − p‖ (i.e., fewer iterates to achieve a desired error tolerance). It is worth noting that these advantages are obtained without any additional conditions [1–35].
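This behavior can be checked numerically. The sketch below assumes the classical choice G(t) = t³ log t² + t⁵ − t⁴ (with G(0) = 0) on D = [−1/2, 3/2]; it verifies that p = 1 solves G(t) = 0 while the third derivative blows up near 0:

```python
import math

def G(t):
    # Classical example: G(t) = t^3 log(t^2) + t^5 - t^4, with G(0) = 0
    return 0.0 if t == 0 else t**3 * math.log(t**2) + t**5 - t**4

def G3(t):
    # Third derivative (t != 0): G'''(t) = 6 log(t^2) + 60 t^2 - 24 t + 22
    return 6 * math.log(t**2) + 60 * t**2 - 24 * t + 22

# p = 1 is a solution, yet G''' is unbounded on D = [-1/2, 3/2]:
print(G(1.0))        # 0.0
for t in (1e-1, 1e-4, 1e-8):
    print(t, G3(t))  # increasingly negative as t -> 0
```

Since G‴ is unbounded near 0, any convergence theorem requiring bounded third or fourth derivatives on D cannot be applied, which motivates using conditions on the first derivative only.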
So far, comparisons between iterative schemes of the same order have been given using numerical examples [1–35]. However, no direct theoretical comparison has been given, so that we know in advance, under the same set of convergence conditions, which scheme has the largest radius, tighter error bounds and better results on the uniqueness of the solution. The novelty of our paper is that we introduce a technique under which we can show that scheme (4) is the best when compared to scheme (3). The same technique can be used to draw conclusions on other schemes of the same order.
Notice also that scheme (3) requires two derivative evaluations, one inverse and one operator evaluation. However, scheme (4) is less expensive, requiring two function evaluations and one inverse. Both schemes have been studied in the literature under assumptions reaching the fourth derivative, which does not appear in these schemes. However, we use only conditions on the first derivative, which does appear in the schemes.
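Since only the first derivative appears in the schemes, they are straightforward to implement. As an illustrative stand-in (not necessarily the exact scheme (3) or (4), whose printed forms are not reproduced here), the sketch below implements a well-known third-order two-step scheme with a frozen derivative, matching the cost profile of one inverse and two operator evaluations per step:

```python
import numpy as np

def frozen_two_step(G, dG, x0, tol=1e-12, max_iter=50):
    """Two-step Newton scheme with a frozen derivative (third order):
        y_n     = x_n - G'(x_n)^{-1} G(x_n)
        x_{n+1} = y_n - G'(x_n)^{-1} G(y_n)
    Only one Jacobian factorization (inverse) per iteration."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        J = np.atleast_2d(dG(x))
        y = x - np.linalg.solve(J, np.atleast_1d(G(x)))
        x_new = y - np.linalg.solve(J, np.atleast_1d(G(y)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Scalar test problem: G(t) = t^3 - 2 with root 2^(1/3)
root = frozen_two_step(lambda t: t**3 - 2, lambda t: 3 * t**2, 1.0)
print(root)  # approx 1.259921
```

The same skeleton applies verbatim to systems: G then maps vectors to vectors and dG returns the Jacobian matrix.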
Throughout this paper, U(x, ρ) stands for the open ball with center x and radius ρ > 0, and Ū(x, ρ) denotes its closure.
2. Ball Convergence
We present the ball convergence analysis of scheme (2), scheme (3) and scheme (4), respectively, in this section.
To achieve this, we introduce certain functions and parameters. Suppose that there exists a continuous and increasing function defined on the interval [0, ∞) with values in itself such that the equation
has a real positive zero. Suppose that there exist continuous and increasing functions defined on the same interval with values in it. Define the corresponding functions as
and
Suppose that equation
has a least zero denoted by in Moreover, define functions on as
and
Suppose equation
has a least zero denoted by in Define a radius of convergence as
It follows that
for all t ∈ [0, r). We introduce a set of conditions under which the ball convergence for all schemes will be obtained.
- (A1)
- is differentiable; there exists a simple zero p of equation
- (A2)
- There exists a continuous and increasing function defined on with values in such that, for all, provided exists and is defined in (5). Set
- (A3)
- There exist continuous and increasing functions and on the interval with values in such that, for all, and
- (A4)
- (A5)
- There exists such that. Set
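Computationally, the radius construction above amounts to locating least positive zeros of scalar equations. Below is a minimal sketch, assuming for illustration the hypothetical constant-slope data w0(t) = L0·t and w(t) = L·t, for which the Newton-type majorant g1(t) = L·t / (2(1 − L0·t)) has the known least positive zero 2/(2·L0 + L):

```python
def least_positive_zero(h, upper, tol=1e-12):
    """Scan (0, upper] for the first sign change of h, then bisect.
    This mimics finding the least positive zero of equations of the
    form g(t) - 1 = 0 in the radius-of-convergence construction."""
    n = 10000
    step = upper / n
    a, fa = step, h(step)
    for i in range(2, n + 1):
        b = i * step
        fb = h(b)
        if fa * fb <= 0:  # sign change: least zero lies in [a, b]
            while b - a > tol:
                m = 0.5 * (a + b)
                if fa * h(m) <= 0:
                    b = m
                else:
                    a, fa = m, h(m)
            return 0.5 * (a + b)
        a, fa = b, fb
    raise ValueError("no sign change located")

# Hypothetical data w0(t) = L0*t, w(t) = L*t; g1(t) = 1 should give 2/(2*L0 + L)
L0, L = 2.0, 3.0
r = least_positive_zero(lambda t: L * t / (2 * (1 - L0 * t)) - 1, upper=0.999 / L0)
print(r, 2 / (2 * L0 + L))  # both close to 2/7
```

In practice the functions g₂, g₃, … attached to the later substeps are handled the same way, and the radius of convergence is the minimum of the zeros so obtained.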
Next, the main ball convergence result for scheme (2) is displayed.
Theorem 1.
Proof.
The proof is based on induction, which assists us in showing (12) and (13). For the initial point chosen in the ball U(p, r), we can use (A1), (A2), (8) and (9) to see that
so, by the Banach perturbation result for invertible operators [], with
so the first iterates exist by scheme (2) for n = 0. We use the identity
and the second condition in (A3) to obtain
Next, in view of (8), (10) (for ), (A1), (A3), (14) (for ), scheme (2) and (16), the following are obtained in sequence
leading to the verification of (12) for n = 0. Moreover, as in (16), the following are obtained in sequence
leading to the verification of (13). Thus, estimates (12) and (13) hold true for n = 0. Suppose they hold true for all indices smaller than n. Then, by exchanging the iterates accordingly in the preceding calculations, we terminate the induction for (12) and (13). Next, by the estimate
from which we deduce the convergence to p. Further, to show uniqueness, let q be a solution of Equation (1) in the region given by (A5), and define the corresponding average of the derivative along the segment joining p and q. By (A1) and (A5), we get
so the invertibility is implied, which, together with the estimate, leads to the conclusion that the two solutions coincide. □
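The Banach perturbation step used above can be checked numerically. This sketch verifies the standard bound ‖B⁻¹‖ ≤ ‖A⁻¹‖ / (1 − ‖A⁻¹‖‖B − A‖), valid whenever ‖A⁻¹‖‖B − A‖ < 1, for a small random perturbation of the identity:

```python
import numpy as np

# Banach perturbation lemma: if ||A^{-1}|| * ||B - A|| < 1, then B is
# invertible and ||B^{-1}|| <= ||A^{-1}|| / (1 - ||A^{-1}|| * ||B - A||).
rng = np.random.default_rng(0)
A = np.eye(3)
E = 0.1 * rng.standard_normal((3, 3))   # small perturbation
B = A + E

inv_norm_A = np.linalg.norm(np.linalg.inv(A), 2)
theta = inv_norm_A * np.linalg.norm(E, 2)
bound = inv_norm_A / (1 - theta)        # Banach lemma bound (theta < 1)
actual = np.linalg.norm(np.linalg.inv(B), 2)
print(theta, actual, bound)             # theta < 1 and actual <= bound
```

This is exactly the mechanism that guarantees the invertibility of the derivative at each iterate inside the convergence ball.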
Remark 1.
- 1.
- In view of () and the estimate above, condition () can be dropped and can be replaced by
- 2.
- The results obtained here can be used for operators F satisfying autonomous differential equations [] of the form F′(x) = P(F(x)), where P is a continuous operator. Then, since F′(p) = P(F(p)) = P(0), we can apply the results without actually knowing the solution p. For example, let F(x) = eˣ − 1. Then, we can choose P(x) = x + 1.
- 3.
- If the functions in (A2) and (A3) are linear, say w₀(t) = L₀t and w(t) = Lt for some L₀ > 0 and L > 0, then the radius r_A = 2/(2L₀ + L) was shown by us to be the convergence radius of Newton's method [,]

x_{n+1} = x_n − F′(x_n)⁻¹F(x_n) for each n = 0, 1, 2, … (19)

under the conditions ()–(). It follows from the definition of r that the convergence radius r of method (2) cannot be larger than the convergence radius r_A of the second-order Newton's method (19). As already noted in [,], r_A is at least as large as the convergence ball given by Rheinboldt [],

r_R = 2/(3L̄),

where L̄ is the Lipschitz constant on D. In particular, for L₀ < L̄ we have that

r_R < r_A

and

r_R/r_A → 1/3 as L₀/L̄ → 0.

That is, our convergence ball r_A is at most three times larger than Rheinboldt's. The same value for r_R was given by Traub [].
- 4.
- It is worth noticing that method (2) does not change when we use the conditions of Theorem 1 instead of the stronger conditions used in []. Moreover, we can compute the computational order of convergence (COC), defined by

ξ = ln(‖x_{n+2} − p‖ / ‖x_{n+1} − p‖) / ln(‖x_{n+1} − p‖ / ‖x_n − p‖),

or the approximate computational order of convergence (ACOC),

ξ₁ = ln(‖x_{n+2} − x_{n+1}‖ / ‖x_{n+1} − x_n‖) / ln(‖x_{n+1} − x_n‖ / ‖x_n − x_{n−1}‖).

This way we obtain in practice the order of convergence in a way that avoids bounds involving estimates higher than the first Fréchet derivative of the operator.
- 5.
- 6.
- The ball convergence result for scheme (3) is clearly obtained from Theorem 1 for
- 7.
- In our earlier works with other schemes, we assumed additional conditions to show the existence of the needed zeros using the intermediate value theorem. But those impose initial conditions that are not necessary with our approach. This way, we further expand the applicability of schemes (2) and (3). The same is true for scheme (4), whose ball convergence follows.
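The COC and ACOC of Remark 1 can be estimated directly from iterates, without derivatives beyond the first. A minimal sketch, illustrated with Newton's method on a scalar equation (whose order should approach 2):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from iterates:
        ln(|x_{n+2} - x_{n+1}| / |x_{n+1} - x_n|)
        / ln(|x_{n+1} - x_n| / |x_n - x_{n-1}|)."""
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton's method on t^2 - 2 = 0; the ACOC should approach 2.
xs = [1.5]
for _ in range(6):
    t = xs[-1]
    xs.append(t - (t * t - 2) / (2 * t))
print(acoc(xs[:5]))  # close to 2
```

The COC formula is computed the same way with ‖x_n − p‖ in place of ‖x_{n+1} − x_n‖ when the solution p is known.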
Theorem 2.
Under the conditions of Theorem 1, its conclusions hold for scheme (3) for
To deal with scheme (4), our real functions and parameters are as follows. Moreover, we suppose that the equation
has a least solution in denoted by and set Define functions and on as
and
Suppose this equation has a least positive zero. Define a radius of convergence by
Hence, we arrive at:
Theorem 3.
Moreover, p is the unique solution of Equation (1) in the set
3. Numerical Examples
We compute the radii provided that
Example 1.
Let us consider a system of differential equations governing the motion of an object and given by

F₁′(x) = eˣ, F₂′(y) = (e − 1)y + 1, F₃′(z) = 1,

with initial conditions F₁(0) = F₂(0) = F₃(0) = 0. Let B₁ = B₂ = ℝ³ and D = Ū(0, 1). Define function F on D for v = (x, y, z)ᵀ by

F(v) = (eˣ − 1, ((e − 1)/2)y² + y, z)ᵀ.

We need the Fréchet derivative, defined by

F′(v) = diag(eˣ, (e − 1)y + 1, 1),

to compute the function of condition (A2) and the functions of condition (A3). Notice that, using the (A) conditions, we get p = (0, 0, 0)ᵀ. The radii are given in Table 1.
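Assuming the classical concrete form of this example, F(v) = (eˣ − 1, ((e − 1)/2)y² + y, z)ᵀ on D = Ū(0, 1) (the explicit formulas are an assumption here), the solution p = (0, 0, 0)ᵀ can be verified numerically:

```python
import numpy as np

E = np.e  # Euler's number

def F(v):
    # Assumed operator of Example 1: F(x, y, z) = (e^x - 1, ((e-1)/2) y^2 + y, z)
    x, y, z = v
    return np.array([np.exp(x) - 1, (E - 1) / 2 * y**2 + y, z])

def dF(v):
    # Frechet derivative: a diagonal Jacobian
    x, y, z = v
    return np.diag([np.exp(x), (E - 1) * y + 1, 1.0])

# Newton iteration toward the solution p = (0, 0, 0)
v = np.array([0.5, 0.5, 0.5])
for _ in range(10):
    v = v - np.linalg.solve(dF(v), F(v))
print(v)  # near (0, 0, 0)
```

Any of the third-order schemes discussed above can be substituted for the Newton step with the same F and dF.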
Example 2.
Let B₁ = B₂ = C[0, 1], the space of continuous functions defined on [0, 1], equipped with the max norm. Let D = Ū(0, 1). Define function F on D by

F(φ)(x) = φ(x) − 5∫₀¹ xθφ(θ)³ dθ.

We have that

F′(φ(ξ))(x) = ξ(x) − 15∫₀¹ xθφ(θ)²ξ(θ) dθ, for each ξ ∈ D.

Then, we get that p = 0. The radii are given in Table 2.
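A discretized version of this example can be solved numerically. The sketch below assumes the operator F(φ)(x) = φ(x) − 5∫₀¹ xθφ(θ)³ dθ (an assumed concrete form), discretizes the integral with the trapezoid rule, and verifies convergence to p = 0 from a starting function inside a small ball:

```python
import numpy as np

n = 64
theta = np.linspace(0, 1, n)
w = np.full(n, 1.0 / (n - 1))            # trapezoid quadrature weights
w[0] = w[-1] = 0.5 / (n - 1)
x = theta.copy()                          # evaluation grid for x

def F(phi):
    # Discretized F(phi)(x) = phi(x) - 5 * x * int_0^1 theta * phi(theta)^3 d(theta)
    return phi - 5.0 * x * np.sum(w * theta * phi**3)

def dF(phi):
    # Frechet derivative on the grid: I - 15 * outer(x, w * theta * phi^2)
    return np.eye(n) - 15.0 * np.outer(x, w * theta * phi**2)

phi = 0.05 * np.ones(n)                   # start inside a small ball around p = 0
for _ in range(10):
    phi = phi - np.linalg.solve(dF(phi), F(phi))
print(np.max(np.abs(phi)))                # max norm: essentially zero
```

Note that starting too far from p = 0 may lead to a different solution of the discretized equation, which is consistent with the finite radius of convergence computed in this section.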
4. Conclusions
A new technique is introduced allowing us to compare schemes of the same convergence order under the same set of conditions. Hence, we know how to choose in advance, among all third convergence order schemes, the one providing the largest choice of initial points, the least number of iterates for a predetermined error tolerance, and the best information on the location of the solution. This technique can be used on other schemes along the same lines. In particular, we have shown that scheme (4) is better to use than scheme (3) under the conditions (A).
Author Contributions
Conceptualization, S.R., I.K.A. and S.G.; Data curation, S.R., I.K.A. and S.G.; Formal analysis, S.R., I.K.A. and S.G.; Funding acquisition, S.R., I.K.A. and S.G.; Investigation, S.R., I.K.A. and S.G.; Methodology, S.R., I.K.A. and S.G.; Project administration, S.R., I.K.A. and S.G.; Resources, S.R., I.K.A. and S.G.; Software, S.R., I.K.A. and S.G.; Supervision, S.R., I.K.A. and S.G.; Validation, S.R., I.K.A. and S.G.; Visualization, S.R., I.K.A. and S.G.; Writing—original draft, S.R., I.K.A. and S.G.; Writing—review and editing, S.R., I.K.A. and S.G. All authors have equally contributed to this work. All authors read and approved this final manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Amat, S.; Busquier, S.; Negra, M. Adaptive approximation of nonlinear operators. Numer. Funct. Anal. Optim. 2004, 25, 397–405. [Google Scholar] [CrossRef]
- Amat, S.; Argyros, I.K.; Busquier, S.; Magreñán, A.A. Local convergence and the dynamics of a two-point four parameter Jarratt-like method under weak conditions. Numer. Alg. 2017. [Google Scholar] [CrossRef]
- Argyros, I.K. Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Company: New York, NY, USA, 2007. [Google Scholar]
- Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Volume III: Newton’s Method Defined on Not Necessarily Bounded Domain; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
- Argyros, I.K.; George, S. Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications; Volume IV: Local Convergence of a Class of Multi-Point Super-Halley Methods; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
- Argyros, I.K.; George, S.; Magreñán, A.A. Local convergence for multi-point parametric Chebyshev-Halley-type methods of higher convergence order. J. Comput. Appl. Math. 2015, 282, 215–224. [Google Scholar] [CrossRef]
- Argyros, I.K.; Magreñán, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
- Argyros, I.K.; Magreñán, A.A. A study on the local convergence and the dynamics of Chebyshev-Halley-type methods free from second derivative. Numer. Alg. 2015, 71, 1–23. [Google Scholar] [CrossRef]
- Argyros, I.K.; Regmi, S. Undergraduate Research at Cameron University on Iterative Procedures in Banach and Other Spaces; Nova Science Publishers: New York, NY, USA, 2019. [Google Scholar]
- Babajee, D.K.R.; Davho, M.E.; Darvishi, M.T.; Karami, A.; Barati, A. Analysis of two Chebyshev-like third order methods free from second derivatives for solving systems of nonlinear equations. J. Comput. Appl. Math. 2010, 233, 2002–2012. [Google Scholar] [CrossRef]
- Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
- Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–88. [Google Scholar] [CrossRef]
- Choubey, N.; Panday, B.; Jaiswal, J.P. Several two-point with memory iterative methods for solving nonlinear equations. Afrika Matematika 2018, 29, 435–449. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Alg. 2010, 55, 87–99. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
- Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374. [Google Scholar] [CrossRef]
- Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685. [Google Scholar] [CrossRef]
- Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of nonlinear equations. Appl. Math. Comput. 2015, 266, 1093–1101. [Google Scholar]
- Fang, X.; Ni, Q.; Zeng, M. A modified quasi-Newton method for nonlinear equations. J. Comput. Appl. Math. 2018, 328, 44–58. [Google Scholar] [CrossRef]
- Fousse, L.; Hanrot, G.; Lefèvre, V.; Pélissier, P.; Zimmermann, P. MPFR: A multiple-precision binary floating-point library with correct rounding. ACM Trans. Math. Softw. 2007, 33, 15. [Google Scholar] [CrossRef]
- Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef]
- Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis: Selected Topics in Numerical Analysis; LAP LAMBERT Academic Publishing: Saarbrucken, Germany, 2010; ISBN 978-3-8433-6793-6. [Google Scholar]
- Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934. [Google Scholar] [CrossRef]
- Magreñán, A.A. Different anomalies in a Jarratt family of iterative root finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
- Magreñán, A.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 29–38. [Google Scholar] [CrossRef]
- Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Ostrowski, A.M. Solution of Equation and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
- Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. In Mathematical Models and Numerical Solvers; Tikhonov, A.N., Ed.; Banach Center: Warsaw, Poland, 1977; pp. 129–142. [Google Scholar]
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multi-Point Methods for Solving Nonlinear Equations; Elsevier/Academic Press: Amsterdam, The Netherlands; Boston, MA, USA; Heidelberg, Germany; London, UK; New York, NY, USA, 2013. [Google Scholar]
- Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110. [Google Scholar]
- Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA 2017, 74, 147–163. [Google Scholar] [CrossRef]
- Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).