Abstract
A general optimal iterative method, for approximating the solution of nonlinear equations, of steps with order of convergence is presented. Cases and correspond to Newton’s and Ostrowski’s schemes, respectively. The basins of attraction of the proposed schemes on different test functions are analyzed and compared with the corresponding to other known methods. The dynamical planes showing the different symmetries of the basins of attraction of new and known methods are presented. The performance of different methods on some test functions is shown.
1. Introduction
Nonlinear equations and systems appear in many problems in computer science and other disciplines when they are modeled mathematically. In general, these equations have no analytical solution and we must resort to iterative processes to approximate their solutions.
In the literature, methods or families of iterative schemes, designed from different procedures, have proliferated in recent years to approximate the simple or multiple roots of a nonlinear equation $f(x)=0$, where $f \colon D \subseteq \mathbb{R} \to \mathbb{R}$ is a real function defined on an open interval $D$. In the books [1,2], we can find many of the iterative schemes published in recent years, classified from different points of view. With them, we can get an overview of the state of the art in the research area of iterative processes for solving nonlinear equations. Of course, each scheme has a different behavior, which is characterized by efficiency criteria and by the stability criteria provided by the tools of complex dynamics.
The efficiency of these schemes has been widely studied, usually based on the efficiency index presented by Ostrowski [3]. The expression of this index, $I = p^{1/d}$, involves the order of convergence $p$ of the method and the number of functional evaluations per iteration, $d$. The methods with higher convergence order are not necessarily the most efficient, as the complexity of the iterative expression also plays an important role. In [4], Kung and Traub conjectured that the order of convergence of an iterative method without memory, which uses $d$ functional evaluations per iteration, is less than or equal to $2^{d-1}$. When an iterative scheme reaches this upper bound, it is said to be an optimal method. Many optimal schemes with different orders of convergence have been designed in recent years (see, e.g., [5,6,7,8,9,10] and the references therein).
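As a quick numerical illustration (not taken from the paper), the efficiency index can be evaluated for the schemes discussed here: Newton (order 2, two evaluations), Ostrowski (order 4, three evaluations) and an optimal eighth-order scheme (four evaluations):

```python
def efficiency_index(p, d):
    """Ostrowski's efficiency index I = p**(1/d) for a method of order p
    using d functional evaluations per iteration."""
    return p ** (1.0 / d)

newton_I = efficiency_index(2, 2)     # optimal with 2 evaluations
ostrowski_I = efficiency_index(4, 3)  # optimal with 3 evaluations
eighth_I = efficiency_index(8, 4)     # optimal with 4 evaluations
```

Note that the index grows with each optimal order, which is why optimal high-order schemes are attractive despite their more involved iterative expressions.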
One of the most widely used iterative schemes is Newton's method, thanks to its efficiency and simplicity:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$$
Under certain conditions, this method has quadratic convergence and it is optimal.
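As a minimal sketch (assuming only a differentiable $f$ and an initial guess; the function names are illustrative, not from the paper), Newton's iteration can be implemented as follows:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    """Newton's method: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: the positive root of f(x) = x^2 - 2, i.e., sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
```

Each iteration uses two functional evaluations, $f(x_n)$ and $f'(x_n)$, so the quadratic order reaches the Kung–Traub bound.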
It is known that Newton's method can be composed with itself an arbitrary number $k$ of times, obtaining a new scheme of order $2^k$ (see [11]):
$$z_1 = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad z_{j+1} = z_j - \frac{f(z_j)}{f'(z_j)}, \qquad x_{n+1} = z_k,$$
for $j = 1, \ldots, k-1$. This method is not optimal according to the Kung–Traub conjecture [4], since each additional step requires two new functional evaluations. We modify expression (2) to obtain an optimal iterative scheme of order $2^k$. To do this, we approximate the derivatives appearing in (2) after the first step by polynomials satisfying certain conditions.
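For illustration (a sketch, not the authors' code), composing $k$ Newton substeps per iteration gives order $2^k$ at the cost of $2k$ functional evaluations, which is exactly why the plain composition fails to be optimal:

```python
def composed_newton(f, df, x0, k=2, tol=1e-12, max_iter=50):
    """Apply k chained Newton substeps per outer iteration.
    Order 2**k, but 2*k functional evaluations, hence non-optimal."""
    x = x0
    for _ in range(max_iter):
        x_old = x
        for _ in range(k):  # k Newton corrections, each using f and f'
            x = x - f(x) / df(x)
        if abs(x - x_old) < tol:
            break
    return x

# Root of x^3 - x - 1 (approximately 1.3247) with two substeps (order 4).
root = composed_newton(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1,
                       x0=1.5, k=2)
```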
In the remainder of this paper, we discuss the following points. In Section 2, the iterative expression of the proposed schemes is obtained, while its order of convergence is proven in Section 3. Section 4 is devoted to analyzing the dynamical planes of the proposed methods and of other known ones when they are applied to different nonlinear equations. The numerical results are described in Section 5. The paper finishes with some conclusions and the references used in this manuscript.
2. Design of the Class of Iterative Procedures
Now, we design the proposed family and prove its convergence. From expression (2), approximating the derivative after the first step, we obtain the following iterative expression:
where and , for , and polynomials satisfy
- for .
- .
Now, we obtain an explicit expression for , , which we use in each step. To simplify the expression, we take . Thus,
From the conditions that the polynomial must satisfy, it is easy to observe that , and that we need the expression of , since . Thus, we need to solve
which is equivalent to
where are the divided differences of and in f, that is, .
By using algebraic operations between the two last rows, we obtain
Then, the coefficient matrix of the system is a confluent Vandermonde matrix; thus, it is invertible and the system can be solved.
Let us see how a system with a confluent Vandermonde matrix is solved; this can be found in more detail in [12,13]. To solve the previous system, we denote by , , define and the confluent Vandermonde matrix
Let be a polynomial, where is the maximum exponent of , that is,
We define and
Then, we have
Since is involved in our system, we use the transpose of to obtain the values of parameters .
Thus, in our case, we need the first component of the following matrix product:
It follows that
We now determine and for . We have,
Therefore, the expressions of , and when , are
Thus, by grouping the terms properly, we get that has the form
Thus, we get the form of the term in each step.
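As a small numerical illustration of the kind of system solved above (a sketch, not the authors' symbolic derivation; the nodes and test function are hypothetical), a confluent Vandermonde system encoding Hermite-type interpolation conditions can be solved directly:

```python
def solve3(A, b):
    """Gaussian elimination with partial pivoting for a 3x3 linear system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    n = 3
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= m * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

# Confluent Vandermonde system for a quadratic m(t) = c0 + c1*t + c2*t^2
# with m(x0) = f(x0), m'(x0) = f'(x0), m(x1) = f(x1); here f(t) = t^2 - 3t + 1.
x0, x1 = 1.0, 2.0
f = lambda t: t * t - 3 * t + 1
A = [[1, x0, x0 ** 2],
     [0, 1, 2 * x0],      # derivative row: the "confluent" condition
     [1, x1, x1 ** 2]]
b = [f(x0), 2 * x0 - 3, f(x1)]
c = solve3(A, b)  # recovers the coefficients [1, -3, 1] of f itself
```

Since the test data come from a quadratic, the interpolating quadratic reproduces it exactly, confirming that the confluent system is uniquely solvable.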
Some of the members of this class are as follows.
The two-step method has the iterative expression
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n)}{f(x_n) - 2 f(y_n)}\,\frac{f(y_n)}{f'(x_n)}.$$
This is Ostrowski's method, with order of convergence four. We denote scheme (4) by M4.
Let us now look at the three-step method:
where
whose order of convergence is 8. We denote method (5) by M8.
Let us remark that the extension of this family for solving nonlinear systems is not straightforward, except in the first step, as it involves vectorial interpolation polynomials.
3. Convergence Analysis
Let $f \colon D \to \mathbb{R}$ be a sufficiently differentiable function in an open interval $D$ that contains a zero of $f$. We consider the divided difference operator
$$f[x, y] = \int_0^1 f'\big(x + t(y - x)\big)\, dt,$$
as defined by the Genocchi–Hermite formula [14]. By using the Taylor expansion of $f'$ around $x$, we obtain
$$f[x, y] = f'(x) + \frac{1}{2} f''(x)(y - x) + \frac{1}{6} f'''(x)(y - x)^2 + O\big((y - x)^3\big),$$
which we employ to prove the following theorem, where the order of convergence of the method defined in (3) is established.
Theorem 1.
Let $f \colon D \subseteq \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function in a neighbourhood $D$ of $\alpha$, such that $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. Then, choosing an initial estimation $x_0$ close enough to $\alpha$, the sequence generated by the $k$-step method (3) converges to $\alpha$ with order of convergence $2^k$.
Proof.
We prove this result by induction on the number of steps. The one-step method defined in (3) is Newton's scheme, so we know that it has order of convergence 2. Now, we assume that the $i$-step method has order of convergence $2^i$, where $1 \leq i < k$, and we prove that the $(i+1)$-step method has order $2^{i+1}$.
We denote and , as defined in (3). First, we calculate the expression of the polynomial .
Moreover,
where is a point in the interpolation interval. Thus, we know that
Evaluating the last expression in , we have
Defining , we obtain
Using the Taylor expansion of around :
Then, around has the following expression:
Using the last part of the expression of , we deduce that
We assume that the $i$-step method has order of convergence $2^i$, that is, . Taking , we obtain
Now, we calculate . We have that
From the above expression, we obtain
Then,
As and is satisfied,
We need to calculate now the expression of .
Thus,
From the last expression, we obtain
Thus, we have proven that the $(i+1)$-step method (3) has order of convergence $2^{i+1}$; therefore, by induction, the $k$-step method has order of convergence $2^k$. □
Our $k$-step method carries out $k+1$ functional evaluations per iteration, since we evaluate the derivative of $f$ only at $x_n$ and the image of $f$ at $x_n$ and at the approximations obtained in the intermediate steps. For this reason, the proposed family of methods (3) is optimal according to the conjecture of Kung and Traub, because $2^{(k+1)-1} = 2^k$.
4. Complex Dynamics
The order of convergence is not the only criterion that must be analyzed in an iterative scheme. The relevance of a method also depends on how it behaves according to the initial estimations, so it is necessary to analyze its dynamics, that is, the dynamics of the rational operator obtained when the scheme is applied to polynomials of low degree.
The dynamical study of a family of iterative methods allows classifying them, based on their behavior with respect to the initial approximations taken, into stable methods and chaotic ones. This analysis also provides important information on the reliability of the methods [15,16,17,18,19].
In this section, we focus on studying the complex dynamics of the methods M4, defined in (4), and M8, defined in (5), members of the proposed family (3). Now, we recall some concepts of complex dynamics; these contents can be enlarged by consulting classical texts (see, e.g., [20]).
Given a rational function $R \colon \hat{\mathbb{C}} \to \hat{\mathbb{C}}$, where $\hat{\mathbb{C}}$ is the Riemann sphere, the orbit of a point $z_0 \in \hat{\mathbb{C}}$ is defined as
$$\{z_0,\, R(z_0),\, R^2(z_0),\, \ldots,\, R^n(z_0),\, \ldots\}.$$
A point $z^* \in \hat{\mathbb{C}}$ is called a fixed point of $R$ if $R(z^*) = z^*$. For the proposed methods, the roots of the polynomial are fixed points, but there may be other fixed points, different from the roots, which we call strange fixed points. A critical point is a point $z$ such that $R'(z) = 0$, and it is called a free critical point when it is different from the roots of the polynomial. The stability of a fixed point $z^*$ depends on the value of the derivative of $R$ at it: $z^*$ is called an attractor if $|R'(z^*)| < 1$, a superattractor if $R'(z^*) = 0$, a repulsor if $|R'(z^*)| > 1$ and parabolic if $|R'(z^*)| = 1$.
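This classification can be expressed directly in code. The following sketch (an illustrative helper, not from the paper) labels a fixed point from the modulus of the multiplier $R'(z^*)$:

```python
def classify_fixed_point(multiplier, tol=1e-12):
    """Label a fixed point z* of a rational map R from |R'(z*)|."""
    m = abs(multiplier)
    if m < tol:
        return "superattractor"
    if m < 1:
        return "attractor"
    if abs(m - 1) < tol:
        return "parabolic"
    return "repulsor"

# Newton's operator on p(z) = z**2 - 1 has multiplier 0 at the roots z = +-1,
# so both roots are superattracting fixed points.
label = classify_fixed_point(0.0)
```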
The basin of attraction of an attractor $z^*$ is defined as the set of points whose orbit tends to $z^*$:
$$\mathcal{A}(z^*) = \{z_0 \in \hat{\mathbb{C}} \,:\, R^n(z_0) \to z^*,\ n \to \infty\}.$$
Theorem 2.
(Cayley Quadratic Test (CQT)).
Let $O_p$ be the rational operator obtained from a general iteration formula applied to the quadratic polynomial $q(z) = (z - a)(z - b)$, with $a \neq b$. Suppose that $O_p$ is conjugate to the operator $z \mapsto z^p$ by the Möbius transformation $h(z) = \frac{z - a}{z - b}$. Then, $p$ is the order of the iterative method associated to $O_p$. Moreover, the corresponding Julia set is the circumference with center zero and radius 1.
Methods M4 and M8 verify the Cayley quadratic test; that is, the operator associated with each of them can be transformed by the Möbius transformation into the operator $z \mapsto z^p$, with $p$ the order of the method. It follows that they are stable methods, where the only superattracting fixed points are the roots of the quadratic polynomial, the strange fixed points are repulsors and there are no free critical points.
In addition, when the Cayley quadratic test is satisfied, the Julia set of these methods on a quadratic polynomial is the circumference of center zero and unit radius.
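The conjugacy behind the Cayley test can be checked numerically. For Newton's method, the operator on $(z-a)(z-b)$ is $N(z) = (z^2 - ab)/(2z - a - b)$, and the sketch below (illustrative, not from the paper) verifies the identity $h(N(z)) = h(z)^2$ at an arbitrary point:

```python
# Cayley quadratic test for Newton's method on p(z) = (z - a)(z - b):
# conjugating the Newton operator by the Moebius map h(z) = (z - a)/(z - b)
# turns it into z -> z**2.
a, b = 1.0 + 0.0j, -2.0 + 1.0j

def newton_op(z):
    """Newton operator for p(z) = (z - a)(z - b)."""
    return (z * z - a * b) / (2 * z - a - b)

def h(z):
    """Moebius transformation sending a -> 0 and b -> infinity."""
    return (z - a) / (z - b)

z = 0.3 + 0.7j  # arbitrary test point
lhs = h(newton_op(z))
rhs = h(z) ** 2
```

The identity holds exactly in exact arithmetic, so the two sides agree to rounding error; the same check with exponent $p$ applies to any method satisfying the test.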
Now, we construct the dynamical planes of methods M4 and M8 on other equations, and we compare them with those obtained with other known schemes of orders 4 and 8. We also compare with Newton's method, since we generate our methods from it. The first of the two fourth-order methods used in the comparison is Jarratt's scheme, which can be found in [21] and whose iterative expression is
$$y_n = x_n - \frac{2}{3}\,\frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{1}{2}\left[\frac{3 f'(y_n) + f'(x_n)}{3 f'(y_n) - f'(x_n)}\right]\frac{f(x_n)}{f'(x_n)},$$
and the second is a method of King's family, with a fixed value of the parameter $\beta$, which can be found in [22]:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(x_n) + \beta f(y_n)}{f(x_n) + (\beta - 2) f(y_n)}\,\frac{f(y_n)}{f'(x_n)}.$$
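A sketch of King's family follows, assuming its classical two-step form (the particular value of the parameter $\beta$ used in the paper's comparison is not reproduced here, so the default below is merely illustrative):

```python
def king(f, df, x0, beta=3.0, tol=1e-12, max_iter=50):
    """King's fourth-order family; beta is a free real parameter."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                     # Newton predictor
        fy = f(y)
        x_new = y - (fx + beta * fy) / (fx + (beta - 2) * fy) * fy / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Root of x^3 - x - 1 (approximately 1.3247) from x0 = 1.5.
root = king(lambda x: x**3 - x - 1, lambda x: 3 * x**2 - 1, x0=1.5)
```

For $\beta = -2$ the family reduces to Ostrowski's method, which is why both share the same evaluation count and optimal fourth order.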
On the other hand, methods of order 8 used in the comparison are the method denoted by , defined in [23], whose expression is
and the method denoted by , defined in [24], which is defined as
where
which, as we can see in [24], is obtained from the King family of schemes.
We apply all these methods on the following equations, which are also the equations that we use in the numerical experiments.
- ;
- ;
- ; and
- .
We select a region of the complex plane and establish a mesh on it. Each point of the mesh is taken as an initial approximation for the iterative method and painted in a certain color depending on the root to which it converges. For the dynamical planes, we use a maximum of 40 iterations per point. Fixed points are represented by white circles, while superattractors are represented by stars. All the dynamical planes show different kinds of symmetries, independently of the number of attracting fixed points of the related operator.
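The mesh-and-color procedure can be sketched as follows (a toy version for Newton's method on $z^3 - 1$, not the authors' implementation; the region and mesh size are illustrative, while the divergence threshold of 800 and the 40-iteration cap match the text):

```python
import cmath

def basin_index(z, roots, max_iter=40, tol=1e-6):
    """Iterate the Newton map for p(z) = z**3 - 1 from z; return the index
    of the root reached, or -1 for divergence / non-convergence."""
    for _ in range(max_iter):
        if abs(z) > 800:               # divergence threshold used in the paper
            return -1
        try:
            z = z - (z**3 - 1) / (3 * z**2)
        except ZeroDivisionError:      # critical point: derivative vanishes
            return -1
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
# Label a small mesh over the square [-2, 2] x [-2, 2]; each label would be
# mapped to a color when rendering the dynamical plane.
n = 21
grid = [basin_index(complex(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1)), roots)
        for i in range(n) for j in range(n)]
```

Rendering `grid` as an image, one color per label, reproduces the familiar three-fold symmetric basins of Newton's method for the cubic.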
We can examine the dynamical planes associated with the different methods mentioned above for the equation $\cos(x) - x = 0$ in Figure 1. In this case, the points of the plane that converge to the approximate solution 0.73908513 are colored orange; the points that tend to infinity, which we determine as those whose iterates are greater than 800 in absolute value, are colored blue; and the points that do not converge are colored black. The dynamical planes that have a more complex structure are those shown in Figure 1c,f, while those corresponding to the rest of the methods present fewer intersections between the different basins.
Let us now see what happens in the case of the equation , whose dynamical planes are shown in Figure 2. In this case, there are more noticeable changes among the different dynamical planes, although in all cases we have the same superattracting fixed points, to each of which we associate one of the colors represented in the planes, except for black, which is associated with the initial points that do not converge to any superattracting fixed point, and blue, which is associated with the initial estimates whose iterates, in absolute value, become greater than 800.
In this case, it can be seen that the planes that stand out most for their non-convergence zones in the center are those shown in Figure 2c,f; for this reason, these methods would not be recommended for initial estimations located in that zone. On the other hand, we can see that Newton's method has a larger black area than the Jarratt-type methods or the methods proposed in this paper. If we analyze the dynamical planes associated with and , we can see that the plane is a little simpler in the case of the method; if we compare the and methods, the dynamical plane is considerably simpler in the case of the scheme. For this equation, we do not study the strange fixed points, since that would require solving polynomials of high degree. In conclusion, these figures highlight that the most stable method for this example is the scheme.
Let us now analyze the dynamical planes associated with the equation , shown in Figure 3. In this case, the initial estimations converging to the solution 0 are in orange and those diverging to ∞ are in blue, the latter determined as the initial estimates whose iterates, in absolute value, are greater than 800. We note that most of the planes are similar, except those associated with the methods and , since they have a much larger basin of attraction for the point 0 than in the rest of the cases, that of the method being a little greater. For this reason, it is more convenient in this case to take one of these two methods, since a greater number of initial estimations converge to the solution that we are looking for.
Let us now look at the dynamical planes associated with the equation in Figure 4. In this case, we show in purple the initial estimates that converge to the solution , in green those that converge to the solution , in orange those that converge to the solution and in blue the initial estimates that diverge to ∞, determined as those whose iterates in absolute value are greater than 800.
In this example, we observe that most of the planes are similar, except for those associated with the methods , and ; among these three, the one with the largest convergence region to ∞ is the method , so it would be less convenient to use it. If we look at the dynamical planes associated with the and methods, we can see that they are similar, although the method stands out slightly, as it has a smaller blue region.
After commenting on the dynamical planes of these cases, we see that, in general, for these examples, the methods that have given the best results and have been the most convenient are M4 and M8, since they have a larger basin of attraction in the cases we are interested in, they do not generate other basins of attraction from strange fixed points and their dynamical planes are, in general, not as complex as those of the other methods.
5. Numerical Experiments
In this section, we solve some nonlinear equations in order to compare our proposed methods with other schemes of the same order of convergence and to confirm the information obtained in the previous section.
For the computational calculations, we used Matlab R2020b with variable precision arithmetic of 1000 digits, iterating from an initial estimate $x_0$ with a stopping criterion requiring that both the distance between consecutive iterates and the residual of the function be below a prescribed tolerance.
In the different tables, we show, for each test function and iterative scheme, the approximation of the root, the value of the function at the last iteration, the number of iterations required, the distance between the last two iterations, the computational time in seconds and the approximate computational order of convergence (ACOC), defined in [25], the expression of which is
$$p \approx \mathrm{ACOC} = \frac{\ln\left(|x_{k+1} - x_k| \,/\, |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| \,/\, |x_{k-1} - x_{k-2}|\right)}.$$
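In code, the ACOC of [25] can be estimated from the last four iterates of a run; the sketch below (illustrative, not the paper's Matlab script) checks it on Newton's method, where the estimate should approach 2:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four
    iterates of a convergent sequence (Cordero-Torregrosa)."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Newton iterates for f(x) = x^2 - 2 from x0 = 1.5.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))
order = acoc(xs)
```

Note that in double precision only the first few iterates are usable: once the error reaches rounding level, consecutive differences vanish and the logarithms become meaningless, which is why the paper computes the ACOC under 1000-digit arithmetic.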
We work with the same methods used in the previous section and with equations:
- Equation , which has an approximate solution at , and we take as the initial estimate for all methods .
- Equation , which has a solution at , and we take as the initial estimate for all methods .
- Equation , which has a solution at , and we take as the initial estimate for all methods .
- Equation , which has a solution at , and we take as the initial estimate for all methods .
In Table 1, we show the results obtained for equation . In this case, all the methods converge to the solution, although there are differences between them. As expected, Newton's method performs more iterations than the rest and thus also takes longer. It also has the largest distance between the last two iterations.
Table 1.
Results for the equation .
Among the fourth-order methods, we can see that all the results are similar in terms of the number of iterations, the computational time and the value of the function at the last iteration.
If we now look at the results of the methods of order 8, we can see that they perform the same number of iterations, but in this case the method takes less computational time, followed by the method; the same happens with the norm of the function evaluated at the last iteration, i.e., in this case the method gives better results than the method, and the latter better results than the method.
The conclusion of this numerical experiment is that all the methods obtain similar results, although it would be advisable in this case to use the method, as it takes the least computational time, performs one of the lowest numbers of iterations, obtains the highest order and is by far the method that obtains the closest approximation to the solution, as can be seen in Columns 2 and 3 of Table 1.
The results obtained for the equation are shown in Table 2. In this case, not all the methods converge to the solution, since the methods and do not converge taking as initial estimate , as can be seen in the dynamical planes associated with these methods for this equation. This fact is denoted in the table as n.c.
Table 2.
Results for the equation .
Let us now comment on the methods that do converge. As in the previous case, Newton's method behaves differently from the rest, although here the difference is more notable, since the number of iterations has grown considerably, as has the computational time. It is also the method that converges with the greatest distance between the last two iterations. Among the fourth-order methods, the results are similar in terms of the number of iterations, the computational time and the value of the function at the last iteration. If we now look at the eighth-order methods, we can see that they perform the same number of iterations, but in this case the method takes less computational time; what stands out most from this table is that the norm of the function evaluated at the last iteration of the method is smaller than for the rest of the methods, and considerably smaller than in the case of the method. As a conclusion of this numerical experiment, we obtain that the converging methods give similar results in most cases, although the method stands out, since it performs one of the lowest numbers of iterations, obtains a higher order and is by far the method that obtains the closest approximation to the solution, as can be seen in Columns 2 and 3 of Table 2.
The results obtained for the equation are presented in Table 3. In this case, not all the methods converge to the solution, since the Newton, and methods do not converge taking as initial estimation , as can be seen in the dynamical planes associated with these methods for this equation. Let us discuss the methods that do converge. Among the fourth-order methods, the results in terms of the number of iterations, the computational time and the value of the function at the last iteration are similar. If we now look at the eighth-order methods, we can observe that the method performs fewer iterations and also has the shortest computational time. What stands out most in this table is the value of the ACOC of the method, which in this case increases to 11 instead of 8, although the method also increases to 9 and has the smallest value of the norm of the function evaluated at the last iteration; since it performs one more iteration than the method, it is logical for this to happen.
Table 3.
Results for the equation .
As a conclusion of this numerical experiment, we obtain similar results among the converging methods, although the and methods stand out due to their dynamical planes, and especially the method, since it performs the fewest iterations, obtains the highest order and takes the least time.
We show the results obtained for the equation in Table 4. In this case, all the methods converge to the solution, as can be seen in the dynamical planes associated with these methods for this equation.
Table 4.
Results for the function .
Let us comment on the methods in the order in which they appear in the table. As we can see, Newton's method is the one that performs the most iterations to reach the same tolerance, although for this example its ACOC increases by one unit.
If we look at the methods of order 4, we can see that in all of them the value of the ACOC is 5, when the expected value is 4. Among these methods, the one that performs the most iterations, and therefore takes the longest, is the method. If we look at the and methods, the results are similar, although the method gives a better approximation in this case with the same number of iterations and also takes less time; thus, for this example, the most convenient method of order 4 would be the method.
Regarding the methods of order 8, we can see that the method performs more iterations than the rest of the schemes and is also one of the methods that uses the most computational time. We also observe that the method has the shortest computational time, although what stands out most in this table is the value of the ACOC of the method, since in this case it increases to 11 instead of 8, although the and methods also increase to 9. In addition, the method obtains a more accurate approximation than the rest of the order 8 methods.
As a conclusion of this numerical experiment, we obtain that there are notable differences among the methods. The least convenient in this case are the Newton and King-type methods, while the most convenient, for their results and dynamical planes, are the and methods: the former for its dynamical planes, and the latter because it performs the fewest iterations, obtains the highest order and takes the least time.
After performing these experiments, we can conclude that in these cases the most recommendable methods are M4 and M8, because they are the only methods, together with one of the comparison schemes, that converge to the solution in all cases, and because they stand out for their numerical results in all the examples; of the two proposed methods, the one that stands out most is M8.
6. Conclusions
In this manuscript, we design a family of optimal iterative methods with $k$ steps and order of convergence $2^k$, for $k \geq 1$.
From this class, we can select an optimal iterative scheme with the desired order of convergence. Some classical methods are members of this class.
We work, from the dynamical and numerical points of view, with the elements of this family of orders 4 and 8, comparing the results obtained with those of other known methods. The results provided by these two elements of the family are very satisfactory, both numerically (number of iterations, error bounds, etc.) and in terms of stability (amplitude of the convergence basins, existence of strange fixed points, etc.).
Author Contributions
Conceptualization, A.C. and J.R.T.; methodology, A.C.; software, P.T.-N.; validation, A.C., J.R.T. and P.T.-N.; formal analysis, A.C.; investigation, P.T.-N.; writing—original draft preparation, P.T.-N.; writing—review and editing, A.C. and J.R.T.; and supervision, J.R.T. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; Springer: Cham, Switzerland, 2017. [Google Scholar]
- Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Boston, MA, USA, 2013. [Google Scholar]
- Ostrowski, A.M. Solutions of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multi-point iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Behl, R.; Chun, C.; Alshormani, A.S.; Motsa, S.S. A General Way to Construct a New Optimal Scheme with Eighth-Order Convergence for Nonlinear Equations. Int. J. Comput. Methods 2020, 17, 1843011. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Martínez, T.H.; Torregrosa, J.R. Mean-based iterative methods for solving nonlinear chemistry problems. J. Math. Chem. 2020, 58, 555–572. [Google Scholar] [CrossRef]
- Cordero, A.; Moscoso-Martínez, M.; Torregrosa, J.R. Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations. Algorithms 2021, 14, 101. [Google Scholar] [CrossRef]
- Herceg, D.; Herceg, Dj. Eighth order family of iterative methods for nonlinear equations and their basins of attraction. J. Comput. Appl. Math. 2018, 343, 458–480. [Google Scholar] [CrossRef]
- Solaiman, O.S.; Arrifin, S.; Hashim, I. Optimal fourth- and eighth-order of convergence derivative-free modifications of King’s method. J. King Saud Univ. 2019, 31, 1499–1504. [Google Scholar] [CrossRef]
- Zhanlav, T.; Otgondorj, K.; Mijiddorj, R. Constructive Theory of Designing Optimal Eighth-Order Derivative-Free Methods for Solving Nonlinear Equations. Am. J. Comput. Math. 2020, 10, 100–117. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
- Meyer, C.D. Matrix Analysis and Applied Linear Algebra; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
- Moucouf, M.; Zriaa, S. New approaches for solving linear confluent Vandermonde systems and inverse of their corresponding matrices via Taylor’s expansion. arXiv 2020, arXiv:2010.03862v1. [Google Scholar]
- Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970. [Google Scholar]
- Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef]
- Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Wide stability in a new family of optimal fourth-order iterative methods. Comput. Math. Methods 2019, 2019, e1023. [Google Scholar] [CrossRef]
- Cordero, A.; Guasp, L.; Torregrosa, J.R. Fixed Point Root-Finding Methods of Fourth-Order of Convergence. Symmetry 2019, 11, 769. [Google Scholar] [CrossRef]
- Gdawiec, K.; Kotarski, W.; Lisowska, A. On the Robust Newton’s Method with the Mann Iteration and the Artistic Patterns from Its Dynamics. Nonlinear Dyn. 2021, 104, 297–331. [Google Scholar] [CrossRef]
- Usurelu, G.I.; Bejenaru, A.; Postolache, M. Newton-like Methods and Polynomiographic Visualization of Modified Thakur Processes. Int. J. Comput. Math. 2020. [Google Scholar] [CrossRef]
- Blanchard, P. Complex Analytic Dynamics on the Riemann Sphere. Bull. AMS 1984, 11, 85–141. [Google Scholar] [CrossRef]
- Jarratt, P. Some efficient fourth-order multipoint methods for solving equations. BIT 1969, 9, 119–124. [Google Scholar] [CrossRef]
- King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
- Neta, B.; Johnson, A.N. High order nonlinear solver. J. Comput. Methods Sci. Eng. 2008, 8, 245–250. [Google Scholar] [CrossRef]
- Chun, C.; Neta, B. Comparative study of eighth-order methods for finding simple roots of nonlinear equations. Numer. Algor. 2017, 74, 1169–1201. [Google Scholar] [CrossRef]
- Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).