Abstract
Solving problems in various disciplines such as biology, chemistry, economics, medicine, physics, and engineering, to mention a few, often reduces to solving an equation, and finding its solution is one of the greatest challenges. It typically requires an iterative method that generates a sequence approximating the solution. That is why, in this work, we analyze the local convergence of a high-order iterative method for finding the solution of a nonlinear equation. We extend the applicability of previous results using only the first derivative, which is the only derivative that actually appears in the method. This is in contrast to earlier works that require derivatives of order higher than one, which do not appear in the method. Moreover, we study the dynamics of some members of the family in order to highlight the differences between them.
1. Introduction
Mathematics is always changing, and the way we teach it changes as well, as can be seen in the literature. Moreover, in advanced mathematics we need to consider different alternatives, since we are all aware of the difficulties that students encounter. In this paper, we present a study of iterative methods that can be used for teaching at the postgraduate level.
In the present work, we are focused on the problem of solving the equation
giving an approximate solution , where is differentiable and or . There exist several studies related to this problem, since iterative methods are needed to find the solution. We refer the reader to the book by Petković et al. [] for a collection of relevant methods. The method of interest in this case is:
where a starting point is chosen, parameters , , and
If we consider only the first two steps of the method in Equation (2), we obtain King's class of methods, which has order of convergence 4 []. However, the convergence results for Equation (2) have limited usage, since they assume the existence of fifth-order derivatives that do not appear in the method. Moreover, no computable error bounds on or uniqueness results are given. Furthermore, the choice of the initial point is a shot in the dark. As an example, consider the function
Then is unbounded on . Hence, there is no guarantee that the method in Equation (2) converges to under the results in [].
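For concreteness, the classical King family, which the first two steps of Equation (2) reduce to, can be sketched as follows. Here β is the free parameter of the family; this is only the two-step fourth-order scheme, not the full method in Equation (2), whose weight functions are not reproduced here.

```python
def king_step(f, df, x, beta=0.0):
    """One step of King's fourth-order family (beta = 0 gives Ostrowski's method)."""
    fx = f(x)
    y = x - fx / df(x)          # Newton substep
    fy = f(y)
    # King correction: fourth order of convergence for any real beta
    return y - fy / df(x) * (fx + beta * fy) / (fx + (beta - 2.0) * fy)

# Example: approximate sqrt(2) as the root of f(x) = x**2 - 2
x = 1.5
for _ in range(3):
    x = king_step(lambda t: t * t - 2.0, lambda t: 2.0 * t, x)
```

Starting from x₀ = 1.5, a few iterations already agree with √2 to machine precision, which illustrates the fourth-order behavior.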
Our technique can also be used to extend the applicability of other methods defined in [,,]. The novelty of our work, compared to others such as [,,,,,,,,,,,,,,], is that we impose weaker conditions, involving only the first derivative, to guarantee the convergence of the described method. These conditions are given in Section 2, and the dynamical study appears in Section 3.
2. Local Convergence Analysis
In this section we carry out the local convergence analysis of the method in Equation (2). Given and , we denote by and , respectively, the open and closed balls in R. Moreover, we require the parameters , , , , , , and . We need to define some parameters and functions in order to analyze the local convergence. Consider the functions defined on the interval by
Then, and as . Function has zeros in the interval by the intermediate value theorem. Let be the smallest such zero. Define functions and on the interval by
and
By these definitions, and as . For this reason, function has a smallest zero . Moreover, define functions on by
and
Suppose that
We can see that and as .
Denote by the smallest such zero of in . Set min. Then, we have that
and
Moreover, by simple algebraic manipulations we can write the previous , and in view of the definition of , , , , and as
and
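The radii above are defined as smallest positive zeros of auxiliary scalar functions, whose existence follows from the intermediate value theorem. A minimal numerical sketch of that computation follows; h is a placeholder for any of the auxiliary functions, which are not reproduced here.

```python
def smallest_zero(h, a, b, steps=10_000, tol=1e-12):
    """Smallest zero of h on (a, b): scan for the first sign change, then bisect."""
    step = (b - a) / steps
    lo = a + step                 # stay off the left endpoint, where h may blow up
    hlo = h(lo)
    for k in range(2, steps + 1):
        hi = a + k * step
        hhi = h(hi)
        if hlo * hhi <= 0.0:      # sign change: a zero lies in [lo, hi]
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if hlo * h(mid) <= 0.0:
                    hi = mid
                else:
                    lo, hlo = mid, h(mid)
            return 0.5 * (lo + hi)
        lo, hlo = hi, hhi
    return None                   # no sign change detected on the grid
```

For example, for h(t) = t² − t on (0, 2) the routine returns the smallest positive zero t = 1.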
Next, we can give the local convergence result for the method in Equation (2) using the preceding notation.
Theorem 1.
Let be a convex subset and a differentiable function. Consider the divided difference of order one , and for each the constants , , , , , ρ, , and such that
and
where the radius r is given previously
Then, for , the method in Equation (2) generates a well-defined sequence , all terms of which belong to (), and the sequence converges to . Furthermore, the following estimates hold
and
using the functions and defined before Theorem 1. Moreover, is the only solution of in for such that .
Proof.
We shall show the estimates in Equations (21)–(23) using mathematical induction. By hypothesis , the definition of r, and Equation (15), we get that
From the Banach Lemma on invertible operators and Equation (24) it follows that is invertible and
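For completeness, recall the form of the Banach lemma on invertible operators being used here: if a bounded linear operator $A$ satisfies $\|I - A\| < 1$, then $A$ is invertible and

$$\|A^{-1}\| \leq \frac{1}{1 - \|I - A\|}.$$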
Then, is well defined by the method in Equation (2) for . Moreover, we can write
Using Equation (14), we can express
At this point, we have that, by Equation (18),
Then, by Equation (32) the function is invertible and
It also follows that is well defined from the method in Equation (2) for . Then, using the method in Equation (2) for , Equations (6), (25), (29), (30), and (33), we have that
which shows Equation (22) for and . Next, we derive estimates on , , , and . Assume that . We regard the expressions and as quadratic polynomials in (or ). Their discriminants are, respectively, and , which are less than zero by Equation (13). Consequently,
and
Then, is well defined. Besides, we have, by Equations (35) and (36), that
so Equation (37) reduces to showing that for
where
The inequality of Equation (38) is satisfied for all , if (i.e., ), and the discriminant of is
or
where,
But the discriminant of is given by
Moreover, we have for . Then, by Descartes' rule of signs, has two positive zeros. Denote by the smallest such zero, which can be given in closed form using the quadratic formula, arriving at the definition of given in Equation (20). Hence, Equation (38) holds for all provided that Equation (13) is satisfied. Then, is well defined by the method in Equation (2) for , and we get, using Equations (8), (25), and (29), that
Then, using Equations (7), (34) and (39)–(42), and the method in Equation (2) for , we obtain that
which shows the inequality in Equation (23) for . By simply replacing by in the previous estimates, we see that the estimates in Equations (21)–(23) hold for all . Then, from , , , we get that and . Finally, to show the uniqueness part, suppose that there exists with . Define . We get, using Equation (15), that
It follows from Equation (44) that Q is invertible. Then, we conclude that from the estimate . □
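In the proof, the radius is obtained in closed form as the smallest positive zero of a quadratic via the quadratic formula. A minimal helper showing that computation; the actual coefficients, which depend on the Lipschitz constants, are not reproduced here.

```python
import math

def smallest_positive_root(a, b, c):
    """Smallest positive root of a*t**2 + b*t + c = 0, or None if there is none."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None  # no real roots
    d = math.sqrt(disc)
    roots = sorted(t for t in ((-b - d) / (2.0 * a), (-b + d) / (2.0 * a)) if t > 0.0)
    return roots[0] if roots else None
```

For instance, for t² − 3t + 2 = 0 (roots 1 and 2) the helper returns 1, the smaller positive root, mirroring how the radius in Equation (20) is selected.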
3. Dynamical Analysis
In this section, the method in Equation (2) is applied to three different families of functions, and its behavior is analyzed as the parameter changes, using techniques that appear in [,,,].
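A dynamical plane of the kind shown in the figures below can be produced by iterating the method from a grid of complex starting points and coloring each point by the root it reaches. The following sketch uses only the King-type substeps with the free parameter β, and the test function f(z) = z² − 1 is purely illustrative; the paper's three families and its full three-step method are not reproduced here.

```python
import numpy as np

def king_step(f, df, z, beta=0.0):
    """One King-type step (beta = 0 gives Ostrowski's method)."""
    fz = f(z)
    y = z - fz / df(z)
    fy = f(y)
    return y - fy / df(z) * (fz + beta * fy) / (fz + (beta - 2.0) * fy)

def dynamical_plane(f, df, roots, beta=0.0, n=200, maxiter=10, tol=1e-6):
    """Label each grid point by the 1-based index of the root it converges to;
    0 means no convergence within maxiter iterations."""
    xs = np.linspace(-2.0, 2.0, n)
    ys = np.linspace(-2.0, 2.0, n)
    plane = np.zeros((n, n), dtype=int)
    for i, b in enumerate(ys):
        for j, a in enumerate(xs):
            z = complex(a, b)
            for _ in range(maxiter):
                try:
                    z = king_step(f, df, z, beta)
                except (ZeroDivisionError, OverflowError):
                    break
                hit = next((k + 1 for k, r in enumerate(roots) if abs(z - r) < tol), 0)
                if hit:
                    plane[i, j] = hit
                    break
    return plane
```

Rendering `plane` with an image plot (one color per label) reproduces the familiar basin-of-attraction pictures; raising `maxiter` enlarges the colored regions, as observed in the experiments below.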
3.1. Exponential Family
The method has been applied to the function by considering the corresponding equation . This equation has a solution at the point , which is the only attractive fixed point of the method. Figure 1 shows how the method changes with the parameter. The dynamical planes represent the behavior of the method in the complex domain.

Figure 1.
Method Representation. (a) = 0.01, = 0. (b) = 0.01, = 0.01. (c) = 0.01, = 0.1. (d) = 0.01, = 1.
Figure 2 shows the symmetry, with respect to the imaginary axis, of the region of convergence to the solution . Small islands of convergence appear outside the main region. For high values of , it is necessary to increase the maximum number of iterations to achieve convergence.
Figure 2.
Dynamical planes associated with the method. (a) = 0.01, = 0.01, maxiter = 10. (b) = 0.01, = 0.1, maxiter = 10. (c) = 0.01, = 1, maxiter = 10. (d) = 0.01, = 1, maxiter = 20.
3.2. Sine Family
The method can be applied to the function with the equation . In this case, the equation has periodic solutions , , , , …, which coincide with the fixed points of the method. Figure 3 shows how the method changes with the parameter. The dynamical planes represent the behavior of the method in the complex domain, as shown in Figure 4, where the region of convergence shrinks for high values of .
Figure 3.
Method Representation. (a) = 0.01, = 0. (b) = 0.01, = 0.01. (c) = 0.01, = 0.1. (d) = 0.01, = 1.

Figure 4.
Dynamical planes associated with the method. (a) = 0.01, = 0.01, maxiter = 10. (b) = 0.01, = 0.1, maxiter = 10. (c) = 0.01, = 1, maxiter = 10. (d) = 0.01, = 10, maxiter = 10.
3.3. Polynomial Family
The method was applied to the function with the equation . The attracting (sink) fixed points obtained in this case by the method are , , i.e., the solutions of the previous equation. Figure 5 shows how the method changes with the parameter. The dynamical planes represent the behavior of the method in the complex domain, as shown in Figure 6, where the region of convergence becomes more complex for larger values of .

Figure 5.
Method Representation. (a) = 0.01, = 0. (b) = 0.01, = 0.01. (c) = 0.01, = 0.1. (d) = 0.01, = 1.
Figure 6.
Dynamical planes associated with the method. (a) = 0.01, = 0.01, maxiter = 10. (b) = 0.01, = 0.1, maxiter = 10. (c) = 0.01, = 1, maxiter = 10. (d) = 0.01, = 10, maxiter = 25.
4. Conclusions
The study of high-order iterative methods is very important, since problems from all disciplines require the solution of equations. This solution is found as the limit of sequences generated by such methods, since closed-form solutions can rarely be found. In the literature, the convergence order is usually established using expensive Taylor expansions and high-order derivatives, without computable error estimates on or uniqueness results. It is worth noticing that these high-order derivatives do not appear in the methods themselves. Moreover, the choice of the initial point is a “shot in the dark”. Hence, the applicability of such methods is very limited. To address all these problems, we have developed a technique using hypotheses only on the first derivative, which actually appears in the method, together with Lipschitz-type conditions. This allows us to extend the applicability of the method and to find a radius of convergence, computable error estimates, and uniqueness results based on Lipschitz constants. Although we demonstrated our technique on the method in Equation (2), it can clearly be used to extend the applicability of other methods along the same lines. In view of the parameters involved in the method, its dynamics have also been explored in several interesting cases.
Author Contributions
All authors have equally contributed to this work. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Acknowledgments
Research supported in part by Programa de Apoyo a la investigación de la fundación Séneca–Agencia de Ciencia y Tecnología de la Región de Murcia 20928/PI/18 and by Spanish MINECO project PGC2018-095896-B-C21.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations: A Survey; Elsevier: Amsterdam, The Netherlands, 2013. [Google Scholar]
- Hueso, J.L.; Martinez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420. [Google Scholar] [CrossRef]
- Behl, R.; Kanwar, V.; Kim, Y.I. Higher-order families of multiple root finding methods suitable for non-convergent cases and their dynamics. Math. Model. Anal. 2019, 24, 422–444. [Google Scholar] [CrossRef]
- Amat, S.; Busquier, S.; Gutiérrez, J.M. Geometric constructions of iterative functions to solve nonlinear equations. J. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef]
- Argyros, I.K. Computational Theory of Iterative Methods. Series: Studies in Computational Mathematics, 15; Chui, C.K., Wuytack, L., Eds.; Elsevier Publ. Co.: New York, NY, USA, 2007. [Google Scholar]
- Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications: A Contemporary Study; CRC Press: Boca Raton, FL, USA; Taylor & Francis Group: Abingdon, UK, 2017. [Google Scholar]
- Argyros, I.K.; Magreñán, Á.A. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Elsevier: Amsterdam, The Netherlands, 2017. [Google Scholar]
- Argyros, I.K.; Hilout, S. Computational Methods in Nonlinear Analysis. Efficient Algorithms, Fixed Point Theory and Applications; World Scientific: Singapore, 2013. [Google Scholar]
- Argyros, I.K.; Hilout, S. Numerical Methods in Nonlinear Analysis; World Scientific Publ. Comp.: Hackensack, NJ, USA, 2013. [Google Scholar]
- Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
- Rheinboldt, W.C. An adaptive continuation process for solving systems of nonlinear equations. Banach Ctr. Publ. (Polish Academy of Sciences) 1978, 3, 129–142. [Google Scholar] [CrossRef]
- Sharma, J.R. Improved Chebyshev–Halley methods with sixth and eighth order of convergence. Appl. Math. Comput. 2015, 256, 119–124. [Google Scholar] [CrossRef]
- Sharma, R. Some fifth and sixth order iterative methods for solving nonlinear equations. Int. J. Eng. Res. Appl. 2014, 4, 268–273. [Google Scholar]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice–Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Madhu, K.; Jayaraman, J. Higher Order Methods for Nonlinear Equations and Their Basins of Attraction. Mathematics 2016, 4, 22. [Google Scholar] [CrossRef]
- Sanz-Serna, J.M.; Zhu, B. Word series high-order averaging of highly oscillatory differential equations with delay. Appl. Math. Nonlinear Sci. 2019, 4, 445–454. [Google Scholar] [CrossRef]
- Pandey, P.K. A new computational algorithm for the solution of second order initial value problems in ordinary differential equations. Appl. Math. Nonlinear Sci. 2018, 3, 167–174. [Google Scholar] [CrossRef]
- Magreñán, Á.A. Different anomalies in a Jarratt family of iterative root–finding methods. Appl. Math. Comput. 2014, 233, 29–38. [Google Scholar]
- Magreñán, Á.A. A new tool to study real dynamics: The convergence plane. Appl. Math. Comput. 2014, 248, 215–224. [Google Scholar] [CrossRef]
- Magreñán, Á.A.; Argyros, I.K. On the local convergence and the dynamics of Chebyshev-Halley methods with six and eight order of convergence. J. Comput. Appl. Math. 2016, 298, 236–251. [Google Scholar] [CrossRef]
- Lotfi, T.; Magreñán, Á.A.; Mahdiani, K.; Rainer, J.J. A variant of Steffensen-King’s type family with accelerated sixth-order convergence and high efficiency index: Dynamic study and approach. Appl. Math. Comput. 2015, 252, 347–353. [Google Scholar] [CrossRef]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).