Abstract
This paper develops efficient equation solvers for real- and complex-valued functions. An earlier study by Lee and Kim used Taylor-type expansions and hypotheses on derivatives of order higher than one, even though no derivatives appear in the suggested method. However, there are many cases where the computation of the fourth derivative is expensive, or the derivative is unbounded or does not even exist. In contrast, the convergence analysis proposed here uses only the first derivative of the function. Hence, we extend the applicability of the earlier scheme, and we study computable radii of convergence and error bounds based on Lipschitz constants. Furthermore, the range of starting points is explored to determine how close the initial guess must be to guarantee convergence. Several numerical examples where the earlier results cannot be applied illustrate the new technique.
Keywords:
divided difference; radius of convergence; Kung–Traub method; local convergence; Lipschitz constant; Banach space
MSC:
47J05; 47J25; 65H10; 65G99
1. Introduction
We look for a unique root $x^*$ of the equation:
$$\Omega(x) = 0, \qquad (1)$$
where $\Omega : D \subseteq S \rightarrow S$ is a continuous operator defined on a convex subset $D$ of $S$ with values in $S$, and $S = \mathbb{R}$ or $S = \mathbb{C}$. This is a relevant issue, since several problems from mathematics, physics, chemistry, and engineering can be reduced to Equation (1).
In general, either the lack or the intractability of analytic solutions forces researchers to adopt iterative techniques. However, with this type of approach we encounter problems such as slow convergence, convergence to an undesired root, divergence, computational inefficiency, or outright failure (see Traub [1] and Petković et al. [2]). The study of the convergence of iterative algorithms can be classified into two categories, namely semi-local and local convergence analysis. The semi-local analysis is based on information in a neighborhood of the starting point and gives criteria guaranteeing the convergence of the iteration. Therefore, a relevant issue is the convergence domain, as well as the radius of convergence of the algorithm.
Herein, we deal with the second case, that is, the local convergence analysis. Let us consider a fourth-order algorithm defined for each $n = 0, 1, 2, \ldots$ as:
where $x_0$ is an initial point, $k$ is an arbitrary natural number, $\beta \neq 0$, and $H$ is a continuous weight function satisfying suitable conditions (see [3]). The fourth-order convergence of Method (2) was studied by Lee and Kim [3] with Taylor series, hypotheses up to the fourth-order derivative of the function $\Omega$, and hypotheses on the first and second partial derivatives of the function $H$. However, only divided differences of the first order appear in (2). Favorable computations were also given with the related Kung–Traub methods [1] of the form:
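To give a concrete feel for such derivative-free schemes, the sketch below implements one step of a classical Kung–Traub-type fourth-order method: an auxiliary point $w = x + \beta\,\Omega(x)$, a secant substep, and an inverse quadratic interpolation through the three computed points, which is the construction underlying methods of this family. This is an illustrative sketch only, with our own function names and default $\beta$; it is not claimed to reproduce the exact formulas (2) and (3).

```python
# Illustrative sketch (not the paper's exact formulation): one step of a
# classical Kung-Traub-type fourth-order derivative-free scheme.  It uses
# three evaluations of f: at x, at w = x + beta*f(x), and at the secant
# iterate y, then applies inverse quadratic interpolation through the
# three points, evaluated at the function value 0.
def kt4_step(f, x, beta=0.01):
    fx = f(x)
    if fx == 0:
        return x
    w = x + beta * fx                    # auxiliary point; beta != 0 is free
    fw = f(w)
    y = x - fx * (w - x) / (fw - fx)     # secant substep: y = x - f(x)/f[x, w]
    fy = f(y)
    if fy == 0:
        return y
    # Inverse quadratic interpolation through (x, fx), (w, fw), (y, fy).
    return (x * fw * fy / ((fx - fw) * (fx - fy))
            + w * fx * fy / ((fw - fx) * (fw - fy))
            + y * fx * fw / ((fy - fx) * (fy - fw)))
```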
Notice that (3) is obtained from (2) if we define the function $H$ by a particular weight choice (see [3]). The assumptions on the derivatives of $\Omega$ and $H$ restrict the applicability of Algorithms (2) and (3). For instance, let us consider the function $\Omega$ on $D = \left[-\frac{1}{2}, \frac{3}{2}\right]$ defined as:
$$\Omega(x) = \begin{cases} x^3 \ln x^2 + x^5 - x^4, & x \neq 0,\\ 0, & x = 0. \end{cases}$$
From this expression, we obtain:
$$\Omega'''(x) = 6 \ln x^2 + 60x^2 - 24x + 22.$$
We find that $\Omega'''$ is unbounded on $D$ at the point $x = 0$; this is verified symbolically in the sketch below. Therefore, the results in [3] cannot be applied to the analysis of the convergence of Methods (2) or (3). Notice that numerous algorithms and convergence results are available in the literature [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15]. Nonetheless, practice shows that the initial guess must lie in a neighborhood of the root for convergence to be achieved. However, how close to the root must the starting point be? Indeed, such local results give no information about the radii of the convergence balls.
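The unboundedness of the third derivative can be checked symbolically. The short SymPy script below, assuming the form of $\Omega$ stated above, differentiates it three times; the surviving logarithmic term diverges as $x \to 0$.

```python
import sympy as sp

x = sp.symbols('x', real=True, nonzero=True)
Omega = x**3 * sp.log(x**2) + x**5 - x**4

# Third derivative: an expression equivalent to
#   6*log(x**2) + 60*x**2 - 24*x + 22,
# whose logarithmic term is unbounded as x -> 0.
print(sp.simplify(sp.diff(Omega, x, 3)))
```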
We broaden the applicability of Methods (2) and (3) by using assumptions only on the first derivative of the function $\Omega$. Moreover, we estimate computable radii of convergence and error bounds from Lipschitz constants. Additionally, we discuss the range of initial estimates, which tells us how close to the root the initial guess must be to guarantee the convergence of (2). This problem was not addressed in [3], but it is of capital importance in practical applications.
2. Convergence Analysis
Let and be given constants. Furthermore, we consider that , are continuous functions such that:
for each with and that and h are nondecreasing functions on the interval , , respectively. For the local convergence analysis of (2), we need to introduce a few functions and parameters. Let us define the parameters and given by:
and function on the interval by:
From the above functions, it is easy to see that , and for . Moreover, we consider the functions q and on as:
It is straightforward to find that and that as . By the intermediate value theorem, we know that has zeros in the interval . Let us assume that is the smallest zero of function on , and set:
Furthermore, let us define functions and on such that:
and:
Suppose that:
From (8), we have that and from (10) that as . Further, we assume that R is the smallest zero of function on . Therefore, we have that for each :
Let us denote by and the open and closed balls in with center and of radius , respectively.
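Since the radii introduced above are defined as the smallest positive zeros of scalar functions built from the Lipschitz-type majorants, they can be computed numerically. The sketch below scans for the first sign change and refines it with Brent's method; the sample majorant $g_1(t) = Lt/(2(1 - L_0 t))$ and the constants are illustrative stand-ins of Newton type, not the exact functions of this section.

```python
from scipy.optimize import brentq

def smallest_positive_zero(phi, t_max, samples=10000):
    """Locate the smallest zero of phi on (0, t_max] by scanning for a
    sign change and refining the bracket with Brent's method."""
    prev_t = t_max / samples
    prev_v = phi(prev_t)
    for i in range(2, samples + 1):
        t = t_max * i / samples
        v = phi(t)
        if v == 0:
            return t
        if prev_v * v < 0:                 # sign change: zero bracketed
            return brentq(phi, prev_t, t)
        prev_t, prev_v = t, v
    return None                            # no zero found on (0, t_max]

# Illustrative majorant of Newton type: g1(t) = L*t / (2*(1 - L0*t)).
# The zero of g1(t) - 1 recovers the classical radius 2/(2*L0 + L).
L0, L = 1.0, 1.5
r1 = smallest_positive_zero(lambda t: L * t / (2 * (1 - L0 * t)) - 1,
                            t_max=0.99 / L0)
print(r1, 2 / (2 * L0 + L))                # both approximately 0.5714
```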
Theorem 1.
Let us assume that is a differentiable function and is a divided difference of first order of Ω. Furthermore, we consider that h and H are functions satisfying (4), (9), , and that for each , we have:
Then, the sequence obtained for by (2) is well defined, remains in for each and converges to , so that:
and . Moreover, the limit point is the unique root of equation in
Proof.
By hypotheses , (14), (17) and (19), we further obtain:
so that:
which leads to (20) for and . We need to show that . Using (15) and the definition of R, we obtain:
From the Banach lemma on invertible functions [7,14], it follows that and:
In view of (14) and (18), we have:
and similarly:
since . Then, using the second substep of Method (2), together with (11), (14), (16), (25) and (27), we obtain:
and so, (21) is true for and . Next, we need to show that and , for . Using (14) and (15), and the definition of R, we obtain:
Hence, and:
Similarly, we have that:
Adopting (13), we get:
Hence, we have:
Furthermore, is well defined by (24), (32) and (34). Using the third substep of (2), (12), (27) (for ), (28), (32) and (34), we get:
showing that (22) is true for and . Replacing , , and by , , and , respectively, in the preceding estimates, we arrive at (20)–(22). From the estimates, we conclude that and . Finally, to prove uniqueness, let such that . We assume . Adopting (15), we get:
Therefore, , and in view of the identity , we conclude that . □
Remark 1.
- (a) We note that Method (2) does not change if we adopt the conditions of Theorem 1 instead of the stronger ones given in [3].
- (b) In practice, for the error bounds, we can consider the computational order of convergence (COC) [10]:
$$\xi = \frac{\ln\left(|x_{n+1} - x^*|/|x_n - x^*|\right)}{\ln\left(|x_n - x^*|/|x_{n-1} - x^*|\right)}, \qquad (37)$$
or the approximated computational order of convergence (ACOC) [10]:
$$\xi^* = \frac{\ln\left(|x_{n+1} - x_n|/|x_n - x_{n-1}|\right)}{\ln\left(|x_n - x_{n-1}|/|x_{n-1} - x_{n-2}|\right)}. \qquad (38)$$
In this way, we obtain the order of convergence without resorting to bounds that involve estimates of derivatives higher than the first Fréchet derivative.
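Both quantities can be evaluated directly from the iterate history. The helper below is a minimal sketch with our own function names; it returns the full sequences, whose last entries are the values typically reported in tables.

```python
import math

def coc(xs, root):
    """Computational order of convergence (37); requires the exact root."""
    e = [abs(x - root) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

def acoc(xs):
    """Approximated computational order of convergence (38); root-free."""
    d = [abs(xs[n + 1] - xs[n]) for n in range(len(xs) - 1)]
    return [math.log(d[n + 1] / d[n]) / math.log(d[n] / d[n - 1])
            for n in range(1, len(d) - 1)]
```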
3. Numerical Examples
We consider several choices of the weight function to solve a variety of univariate problems, as depicted in Examples 1–3.
Table 1, Table 2 and Table 3 display the minimum number of iterations necessary to obtain the required accuracy for the zeros of the functions in Examples 1–3. Moreover, we also include the initial guess, the radius of convergence of the corresponding function, and the theoretical order of convergence. Additionally, we calculate the approximated orders of convergence by means of (37) and (38).
Table 1.
Radii of convergence according to the adopted weight function.
Table 2.
Radii of convergence according to the adopted weight function.
Table 3.
Radii of convergence according to the adopted weight function.
All computations used a software package with multiple precision arithmetic, adopting $\epsilon$ as the error tolerance and the stopping criteria:
$|x_{n+1} - x_n| < \epsilon$ and $|\Omega(x_{n+1})| < \epsilon$.
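Under these stopping criteria, a generic multiple-precision driver can reproduce the iteration counts reported in the tables. The sketch below uses mpmath; the working precision, tolerance, and iteration cap are illustrative choices, since the paper's exact values are not reproduced here. Any one-step map, such as kt4_step from the earlier sketch, can be supplied.

```python
from mpmath import mp, mpf

mp.dps = 200                      # working precision (illustrative choice)

def solve(step, f, x0, eps=mpf('1e-100'), max_iter=100):
    """Iterate x_{n+1} = step(f, x_n) until |x_{n+1} - x_n| < eps and
    |f(x_{n+1})| < eps; return the approximation and iteration count."""
    x = mpf(x0)
    for n in range(1, max_iter + 1):
        x_new = step(f, x)
        if abs(x_new - x) < eps and abs(f(x_new)) < eps:
            return x_new, n
        x = x_new
    return x, max_iter
```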
Example 1.
Let . Let us define the function Ω on by:
Consequently, we obtain and . We obtain different radii of convergence when using distinct types of weight functions (for details, please see [3]); the COC and the ACOC are presented in Table 1.
Example 2.
Let (approximated root), and let us define the function Ω on by:
As a consequence, we get and . We obtain different radii of convergence when using several weight functions (for details, please see [3]); the COC and the ACOC are listed in Table 2.
Example 3.
Using the example of the introduction, we have $\Omega(x) = x^3 \ln x^2 + x^5 - x^4$ on $D = \left[-\frac{1}{2}, \frac{3}{2}\right]$, and the required zero is $x^* = 1$. We obtain different radii of convergence by adopting distinct types of weight functions (for details, please see [3]); the COC and the ACOC are given in Table 3.
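For this example, the driver and step sketched earlier can be combined; the starting point $0.8 \in D$ is our own illustrative choice.

```python
from mpmath import mpf, log

def omega(x):
    # The function of the introduction, as stated above; Omega(0) = 0.
    return x**3 * log(x**2) + x**5 - x**4 if x != 0 else mpf(0)

# Reuses solve() and kt4_step() from the earlier sketches.
root, iters = solve(kt4_step, omega, mpf('0.8'))
print(root, iters)    # expected to approach x* = 1
```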
4. Conclusions
Determining the range or interval around the required root that ensures convergence of an iterative method is one of the most difficult problems in computational analysis. This paper addressed this problem and expanded the applicability of Methods (2) and (3) using hypotheses only on the functions appearing in these techniques. Further, we provided the radii of ball convergence and error bounds using Lipschitz conditions. This type of study was not addressed in the earlier work. With the help of the radius of convergence, we can find the range of initial estimates telling us how close to the root an initial guess must be to guarantee the convergence of Methods (2) and (3). Finally, the applicability of the new approach was illustrated with several numerical examples.
Author Contributions
All co-authors contributed to the conceptualization, methodology, validation, formal analysis, writing the original draft preparation, and editing.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
- Petković, M.S.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2013.
- Lee, M.Y.; Kim, Y.I. A family of fast derivative-free fourth order multipoint optimal methods for nonlinear equations. Int. J. Comput. Math. 2012, 89, 2081–2093.
- Amat, S.; Busquier, S.; Plaza, S. Dynamics of the King and Jarratt iterations. Aequ. Math. 2005, 69, 212–223.
- Amat, S.; Busquier, S.; Plaza, S. Chaotic dynamics of a third-order Newton-type method. J. Math. Anal. Appl. 2010, 366, 24–32.
- Amat, S.; Hernández, M.A.; Romero, N. A modified Chebyshev's iterative method with at least sixth order of convergence. Appl. Math. Comput. 2008, 206, 164–174.
- Argyros, I.K. Convergence and Application of Newton-Type Iterations; Springer: Berlin/Heidelberg, Germany, 2008.
- Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: River Edge, NJ, USA, 2013.
- Behl, R.; Motsa, S.S. Geometric construction of eighth-order optimal families of Ostrowski's method. Recent Theor. Appl. Approx. Theory 2015, 2015, 614612.
- Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
- Kanwar, V.; Behl, R.; Sharma, K.K. Simply constructed family of a Ostrowski's method with optimal order of convergence. Comput. Math. Appl. 2011, 62, 4021–4027.
- Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root-finding methods. Appl. Math. Comput. 2014, 233, 29–38.
- Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224.
- Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Pol. Acad. Sci. Banach Cent. Publ. 1978, 3, 129–142.
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).