A General Optimal Iterative Scheme with Arbitrary Order of Convergence

A general optimal iterative method for approximating the solutions of nonlinear equations, with (n+1) steps and order of convergence 2^{n+1}, is presented. Cases n = 0 and n = 1 correspond to Newton's and Ostrowski's schemes, respectively. The basins of attraction of the proposed schemes on different test functions are analyzed and compared with those of other known methods. The dynamical planes showing the different symmetries of the basins of attraction of new and known methods are presented. The performance of the different methods on several test functions is shown.


Introduction
Nonlinear equations and systems appear in many problems in computer science and other disciplines when they are modeled mathematically. In general, these equations have no analytical solution and we must resort to iterative processes to approximate their solutions.
In the literature, methods and families of iterative schemes, designed by different procedures, have proliferated in recent years to approximate the simple or multiple roots α of a nonlinear equation f(x) = 0, where f : D ⊆ R → R is a real function defined on an open interval D. In the books [1,2], we can find many of the iterative schemes published in recent years, classified from different points of view. With them, we can get an overview of the state of the art in the research on iterative processes for solving nonlinear equations. Of course, each scheme behaves differently; this behavior is characterized by efficiency criteria and by the stability criteria provided by the tools of complex dynamics.
The efficiency of these schemes has been widely studied, usually based on the efficiency index introduced by Ostrowski [3]. The expression of this index, I = p^{1/d}, involves the order of convergence p of the method and the number of functional evaluations per iteration, d. The methods with the highest convergence order are not necessarily the most efficient, as the complexity of the iterative expression also plays an important role. In [4], Kung and Traub conjectured that the order of convergence of an iterative method without memory that uses n functional evaluations per iteration is less than or equal to 2^{n−1}. When an iterative scheme reaches this upper bound, it is said to be an optimal method. Many optimal schemes with different orders of convergence have been designed in recent years (see, e.g., [5][6][7][8][9][10] and the references therein).
One of the most used iterative schemes is Newton's method, thanks to its efficiency and simplicity. Under certain conditions, this method has quadratic convergence and is optimal. It is known that Newton's method can be composed with itself an arbitrary number of times, n + 1, obtaining a new scheme of order 2^{n+1}, n = 1, 2, 3, . . . (see [11]).
This composed scheme can be written as

y_1 = x_k − f(x_k)/f'(x_k),
y_2 = y_1 − f(y_1)/f'(y_1),
. . .
y_n = y_{n−1} − f(y_{n−1})/f'(y_{n−1}),
x_{k+1} = y_n − f(y_n)/f'(y_n),   for k = 0, 1, 2, . . .   (2)

This method is not optimal according to the Kung–Traub conjecture [4]. We modify expression (2) to obtain an optimal iterative scheme of order 2^{n+1}. To do this, we approximate the derivatives appearing in (2) after the first step by polynomials satisfying certain conditions. The rest of this paper is organized as follows. In Section 2, the iterative expression of the proposed schemes is obtained, while its order of convergence is proven in Section 3. Section 4 is devoted to analyzing the dynamical planes of the proposed methods and other known ones when they are applied to different nonlinear equations. The numerical results are described in Section 5. We finish the paper with some conclusions and the references used in this manuscript.
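As an illustration of scheme (2), the following minimal Python sketch composes Newton's method n + 1 times per iteration and applies it to one of the test equations used later in the paper, cos(x) − x = 0; the function names and tolerances are our own choices, not taken from the text.

```python
import math

# Sketch of scheme (2): Newton's method composed n+1 times per iteration.
# Order 2^(n+1), but not optimal: every substep evaluates both f and f'.
def composed_newton(f, df, x0, n=1, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        y = x
        for _ in range(n + 1):            # n+1 Newton substeps
            y = y - f(y) / df(y)
        if abs(y - x) < tol:
            return y
        x = y
    return x

# One of the paper's test equations: cos(x) - x = 0, with x0 = 1 and n = 1
root = composed_newton(lambda x: math.cos(x) - x,
                       lambda x: -math.sin(x) - 1.0, 1.0, n=1)
```

With n = 1 this performs two Newton substeps per iteration, giving fourth-order convergence at the cost of four functional evaluations, which is why the scheme is not optimal.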

Design of the Class of Iterative Procedures
Now, we design the proposed family and prove its convergence. Starting from expression (2) and approximating the derivative in each step after the first one by the derivative of an interpolation polynomial, we obtain the following iterative expression:

y_1 = x_k − f(x_k)/f'(x_k),
y_j = y_{j−1} − f(y_{j−1})/P'_j(y_{j−1}),   j = 2, . . . , n,
x_{k+1} = y_n − f(y_n)/P'_{n+1}(y_n),   (3)

where P_j denotes the polynomial interpolating the data available up to the corresponding step.
From the conditions that the polynomial must satisfy, it is easy to observe that P_{n+1}(y_n) = a_0 = f(y_n) and that we only need the expression of a_1, since P'_{n+1}(y_n) = a_1. Thus, we must solve a linear system whose data involve the divided differences f[y_j, y_n] of f at y_j and y_n, that is, f[y_j, y_n] = (f(y_j) − f(y_n))/(y_j − y_n). By using algebraic operations between the last two rows, we obtain that the coefficient matrix of the system is a confluent Vandermonde matrix; thus, it is invertible, and the system can be solved.
Let us see how a system with a confluent Vandermonde coefficient matrix is solved; this can be found in more detail in [12,13]. To solve the previous system, we denote b_i = y_i − y_n, define g_j(x) = 1/P_j(x) and build the matrix V from these quantities. Since V^T is the matrix involved in our system, we use the transpose of V^{−1} to obtain the values of the parameters a_i.
Thus, in our case, we need the first component of the corresponding matrix product. It follows that we must determine L_{0,1}(0) and L_{j,0}(0) for j = 0, . . . , n − 1. Once the expressions of L_{0,0}(0), L_{0,1}(x) and L_{j,0} for j ≠ 0 are obtained and the terms are grouped properly, we get the form of the coefficient a_1 at each step.
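As a numerical sketch of this construction, the code below solves a small confluent Vandermonde system to recover a_1 = P'(y_n). The interpolation conditions used here (P matches f at every y_i and f' at y_0, with P written in powers of (x − y_n)) are our reading of the construction described above and should be checked against the original displayed equations; f = exp is only a test function.

```python
import numpy as np

# Hypothetical sketch: estimate f'(y_n) as the coefficient a_1 of an
# interpolation polynomial P of degree len(ys), written in powers of
# (x - y_n), that matches f at every y_i and f' at y_0. The resulting
# linear system has a confluent Vandermonde matrix, hence is invertible.
def derivative_estimate(f, df, ys):
    yn = ys[-1]
    m = len(ys) + 1                                  # number of coefficients
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i, y in enumerate(ys):                       # rows: P(y_i) = f(y_i)
        A[i] = [(y - yn) ** k for k in range(m)]
        b[i] = f(y)
    A[-1] = [k * (ys[0] - yn) ** (k - 1) if k else 0.0 for k in range(m)]
    b[-1] = df(ys[0])                                # row: P'(y_0) = f'(y_0)
    a = np.linalg.solve(A, b)
    return a[1]                                      # a_1 = P'(y_n)

a1 = derivative_estimate(np.exp, np.exp, [0.0, 0.1])  # estimates exp'(0.1)
```

For f = exp with nodes 0 and 0.1, the returned a_1 approximates exp'(0.1) ≈ 1.1052 with an O(h²) error, which is enough to preserve the convergence order of the previous steps.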
Some of the members of this class are as follows.
The two-step method has the iterative expression

y_k = x_k − f(x_k)/f'(x_k),
x_{k+1} = y_k − (f(y_k)/f'(x_k)) · f(x_k)/(f(x_k) − 2f(y_k)).   (4)

This is Ostrowski's method, with order of convergence four. We denote scheme (4) by M4.
Let us now look at the three-step method (5), whose order of convergence is eight; we denote method (5) by M8. Let us remark that the extension of this family to nonlinear systems is not straightforward, except for the first step, as it involves vectorial interpolation polynomials.

Convergence Analysis
Let f : D ⊂ R → R be a sufficiently differentiable function in an open interval D that contains a zero α of f. We consider the divided difference operator as defined by Genocchi–Hermite [14]. By using Taylor expansions of the functions involved, we prove the following theorem, where the order of convergence of the method defined in (3) is established.
Theorem 1. Then, choosing an initial estimation x_0 close to α, the sequence {x_k} generated by the (n+1)-step method (3) converges to α with order of convergence 2^{n+1}.
Proof. We prove this result by induction on the number of steps. The one-step method defined in (3) is Newton's scheme, so it has order of convergence 2. Now, we assume that the i-step method has order of convergence 2^i for i ≤ j, and we prove that the (j+1)-step method has order 2^{j+1}. We denote y_0 = x_0, and y_i is defined in (3). First, we calculate the expression of the polynomial P_{j+1}.
where ξ is a point in the interpolation interval. Evaluating the resulting expression at y_j and using the Taylor expansion of f(y_i) around α, we deduce the expression of P'_{j+1}(y_j). By the induction hypothesis, the i-step method has order of convergence 2^i for i ≤ j; substituting the corresponding error expressions into the last part of the expression of P_{j+1} and into the (j+1)-th step of (3), we obtain the error equation of the (j+1)-step method. Thus, we prove that the (j+1)-step method (3) has order of convergence 2^{j+1}; by induction, the (n+1)-step method has order of convergence 2^{n+1}.
In this case, our (n+1)-step method carries out n + 2 functional evaluations per iteration, since we evaluate the derivative of f at x_0 and the image of f at the approximations x_0, y_1, . . . , y_n. For this reason, the proposed family of methods (3) is optimal according to the conjecture of Kung and Traub, because 2^{n+1} = 2^{(n+2)−1}.

Complex Dynamics
The order of convergence is not the only criterion that must be analyzed in an iterative scheme. The relevance of the method also depends on how it behaves according to the initial estimations, so it is necessary to analyze the dynamics of the method, that is, the dynamics of the rational operator obtained when the scheme is applied to low-degree polynomials.
The dynamical study of a family of iterative methods allows classifying them, based on their behavior with respect to the initial approximations taken, into stable methods and chaotic ones. This analysis also provides important information on the reliability of the methods [15][16][17][18][19].
In this section, we focus on studying the complex dynamics of the methods M4, defined in (4), and M8, defined in (5), members of the proposed family (3). First, we recall some concepts of complex dynamics; these contents can be extended by consulting classical books (see, e.g., [20]).
Given a rational function R : Ĉ → Ĉ, where Ĉ is the Riemann sphere, the orbit of a point z_0 ∈ Ĉ is defined as {z_0, R(z_0), R²(z_0), . . .}. For the proposed methods, the roots of the polynomial are fixed points, but there may be other fixed points, different from the roots, which we call strange fixed points. A critical point z_0 is a point such that R'(z_0) = 0, and it is called a free critical point when it is different from the roots of f(x). The stability of a fixed point z_0 depends on the value of the derivative of R at it: z_0 is called an attractor if |R'(z_0)| < 1, a superattractor if R'(z_0) = 0, a repulsor if |R'(z_0)| > 1 and parabolic if |R'(z_0)| = 1. The basin of attraction of an attractor α is defined as the set of points whose orbits tend to α: A(α) = {z_0 ∈ Ĉ : R^n(z_0) → α, n → ∞}.

Theorem 2. (Cayley Quadratic Test (CQT)).
Let Op(z) be the rational operator obtained from a general iteration formula applied to the quadratic polynomial q(z) = (z − α_1)(z − α_2), with α_1 ≠ α_2. Suppose that Op(z) is conjugate to the operator z → z^p by the Möbius transformation M(z) = (z − α_1)/(z − α_2). Then, p is the order of the iterative method associated with Op(z). Moreover, the corresponding Julia set J(Op(z)) is the circumference with center zero and radius 1.
Methods M4 and M8 satisfy the Cayley quadratic test; that is, the operator associated with each of them can be transformed by the Möbius transformation into the operator z^p, p being the order of the method. It follows that they are stable methods: the only superattracting fixed points are the roots of the quadratic polynomial, the strange fixed points are repulsive and there are no free critical points.
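This can be checked numerically for M4: applying Ostrowski's method to q(z) = z² − 1 and conjugating by the Möbius map M(z) = (z − 1)/(z + 1) yields exactly z ↦ z⁴. The Ostrowski formula used below is the standard one, assumed to coincide with (4).

```python
# Numerical check of the Cayley quadratic test for M4 on q(z) = z**2 - 1:
# the Moebius map M(z) = (z - 1)/(z + 1) conjugates the fixed-point
# operator Op into z -> z**4, where 4 is the order of the method.
def op_m4(z):                       # Ostrowski operator for q(z) = z^2 - 1
    q, dq = z * z - 1.0, 2.0 * z
    y = z - q / dq                  # Newton predictor
    qy = y * y - 1.0
    return y - qy * q / (dq * (q - 2.0 * qy))

def moebius(z):
    return (z - 1.0) / (z + 1.0)

# At several test points, M(Op(z)) agrees with M(z)**4 up to rounding error
errors = [abs(moebius(op_m4(z)) - moebius(z) ** 4)
          for z in (0.3 + 0.7j, 2.0 - 1.0j, -1.5 + 0.2j)]
```

In exact arithmetic one finds Op(z) = (z⁴ + 6z² + 1)/(4z³ + 4z), and M(Op(z)) simplifies to (z − 1)⁴/(z + 1)⁴ = M(z)⁴, in agreement with the test.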
In addition, when the Cayley quadratic test is satisfied, the Julia set of these methods on a quadratic polynomial is the circumference with center zero and unit radius. Now, we construct the dynamical planes of methods M4 and M8 on other equations, and we compare them with those obtained with other known schemes of orders 4 and 8. We compare the proposed schemes with Newton's method, since we generate our methods from it. The two fourth-order methods used in the comparison are Jarratt's scheme (17), which can be found in [21], and King's family method with β = 1, denoted (18), which can be found in [22]. Method (17) is denoted by J4 and method (18) by K4.
On the other hand, the eighth-order methods used in the comparison are the method denoted by J8, defined in [23], and the method denoted by K8, defined in [24], which, as we can see in [24], is obtained from King's family of schemes.
We apply all these methods to the following equations, which are also the equations that we use in the numerical experiments.
We select a region of the complex plane and establish a mesh. Each point of the mesh is taken as an initial approximation for the iterative method and painted in a certain color depending on where it converges to. For the dynamical planes, we choose a mesh of 400 × 400 points and a maximum of 40 iterations per point. Fixed points are represented by white circles, while superattractors are represented by stars. All dynamical planes show different kinds of symmetries, independently of the number of attracting fixed points of the related operator.
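The construction just described can be sketched as follows for Newton's method on cos(x) − x = 0, one of the test equations; the iteration budget follows the text, while the mesh is coarser here to keep the sketch fast, and only a monochrome convergent/non-convergent classification is produced, without the colors and markers of the actual figures.

```python
import numpy as np

# Sketch of a dynamical plane: each mesh point is an initial guess for
# Newton's method on f(z) = cos(z) - z; a point is marked 1 when its orbit
# converges (the "orange" basin in Figure 1) and 0 when it escapes
# (|z| > 800, "blue") or never settles within the iteration budget ("black").
def dynamical_plane(nx=201, ny=201, xlim=(-4.0, 4.0), ylim=(-4.0, 4.0),
                    max_iter=40):
    xs = np.linspace(*xlim, nx)
    ys = np.linspace(*ylim, ny)
    Z = xs[None, :] + 1j * ys[:, None]
    with np.errstate(all="ignore"):                        # escapes overflow
        for _ in range(max_iter):
            Z = Z - (np.cos(Z) - Z) / (-np.sin(Z) - 1.0)   # Newton step
            Z = np.where(np.abs(Z) > 800.0, np.nan, Z)     # mark escapes
        residual = np.abs(np.cos(Z) - Z)
    return np.where(residual < 1e-6, 1, 0)

plane = dynamical_plane()
```

Plotting `plane` with any image routine reproduces the basin structure qualitatively; the published figures additionally distinguish the basins of different attractors by color.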
We can examine the dynamical planes associated with the different methods mentioned above for the equation cos(x) − x = 0 in Figure 1. In this case, the points of the plane that converge to the approximate solution 0.73908513 are colored orange; the points that tend to infinity, determined as those whose iterates exceed 800 in absolute value, are colored blue; and the points that do not converge are colored black. The dynamical planes with a more complex structure are those associated with the K4 and K8 methods, in Figure 1c,f, while those corresponding to the rest of the methods present fewer intersections between the different basins.
Let us now see what happens in the case of the equation (x − 1)^6 − 1 = 0, whose dynamical planes are shown in Figure 2. In this case, there are more noticeable changes between the different dynamical planes, although in all cases we have the same superattracting fixed points, to which we associate each of the colors represented in the planes, except for black, which is associated with the initial points that do not converge to any superattracting fixed point, and blue, which is associated with the initial estimates whose iterates, in absolute value, are greater than 800.
In this case, it can be seen that the planes that stand out most for their non-convergence zones in the center are those associated with the methods K4 and K8, as shown in Figure 2c,f; for this reason, these methods would not be recommended for initial estimations in that zone. On the other hand, we can see that Newton's method has a larger black area than the Jarratt-type methods or the methods proposed in this paper. If we analyze the dynamical planes associated with M4 and J4, we can see that the plane is a little simpler in the case of the M4 method, but, if we compare the M8 and J8 methods, the dynamical plane is considerably simpler in the case of the M8 scheme. For this equation, we do not study the strange fixed points, since this would require solving polynomials of high degree. In conclusion, these figures highlight that the most stable method for this example is the M8 scheme.
Let us now analyze the dynamical planes associated with the equation arctan(x) = 0, shown in Figure 3. In this case, the initial estimations converging to the solution 0 are painted orange and those converging to ∞, determined as the initial estimates whose iterates exceed 800 in absolute value, are painted blue. We note that most of the planes are similar, except those associated with methods M4 and M8, since they have a much larger basin of attraction for the point 0 than in the rest of the cases, with that of method M4 being a little greater. For this reason, it is more convenient in this case to take one of these two methods, since a greater number of initial estimations converge to the solution we are looking for.
Let us now look at the dynamical planes associated with the equation arctan(x) − 2x/(x² + 1) = 0 in Figure 4. In this case, we paint in purple the initial estimates that converge to the solution x = 0, in green those that converge to the solution x ≈ −1.3917, in orange those that converge to the solution x ≈ 1.3917 and in blue the initial estimates that diverge to ∞, determined as those whose iterates exceed 800 in absolute value.
In this example, we observe that most of the planes are similar, except for those associated with methods M4, J4 and M8; among these three, the one with the largest area of convergence to ∞ is method J4, so it would be less convenient to use this method. If we look at the dynamical planes associated with the M4 and M8 methods, we can see that they are similar, although we favor the M8 method slightly, as it has a smaller blue region.
After commenting on the dynamical planes of these cases, we see that, in general, for these examples, the methods that give good results and are the most convenient are the M4 and M8 methods, since they have a larger basin of attraction in the cases we are interested in, they do not generate other basins of attraction from strange fixed points and their dynamical planes are, in general, not as complex as the others.


Numerical Experiments
In this section, we solve some nonlinear equations to compare our proposed methods with other schemes of the same order of convergence and to confirm the information obtained in the previous section.
In the different tables, we show, for each test function and iterative scheme, the approximation of the root, the value of the function at the last iteration, the number of iterations required, the distance between the last two iterations, the computational time in seconds and the approximated computational order of convergence (ACOC), defined in [25], whose expression is

p ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).

We work with the same methods used in the previous section and with the following equations:
• Equation cos(x) − x = 0, which has an approximate solution at x ≈ 0.73908513; we take x_0 = 1 as the initial estimate for all methods.
• Equation (x − 1)^6 − 1 = 0, which has a solution at x = 2; we take x_0 = 1.5 as the initial estimate for all methods.
• Equation arctan(x) = 0, which has a solution at x = 0; we take x_0 = 1.5 as the initial estimate for all methods.
• Equation arctan(x) − 2x/(x² + 1) = 0, which has a solution at x = 0; we take x_0 = 0.4 as the initial estimate for all methods.
In Table 1, we show the results obtained for the equation cos(x) − x = 0. In this case, all methods converge to the solution, although there are differences between them. As expected, Newton's method performs more iterations than the rest and thus also takes longer. It also has the largest distance between the last two iterations.
Among the fourth-order methods, we can see that all the results are similar in terms of the number of iterations, the computational time and the value of the function at the last iteration.
If we now look at the results of the eighth-order methods, we can see that they perform the same number of iterations, but the M8 method requires less computational time, followed by the J8 method; the same ordering holds for the norm of the equation evaluated at the last iteration. That is, in this case, the M8 method gives better results than the J8 method, and the latter better results than the K8 method.
The conclusion of this numerical experiment is that all methods obtain similar results, although it would be advisable in this case to use the M8 method, as it is the one that takes the least computational time, performs among the fewest iterations and attains the highest order; it is also by far the method that obtains the closest approximation to the solution, as can be seen in Columns 2 and 3 of Table 1.

The results obtained for the equation (x − 1)^6 − 1 = 0 are shown in Table 2. In this case, not all the methods converge to the solution: methods K4 and K8 do not converge taking x_0 = 1.5 as the initial estimate, as can be seen in the dynamical planes associated with these methods for this equation. This fact is denoted in the table as n.c.
Let us now comment on the methods that do converge. In the previous case, Newton's method differed from the rest of the methods, and the same happens here, although the difference is more notable, since the number of iterations has grown considerably, as has the computational time. This is also the method that converges with the greatest distance between the last two iterations. Among the fourth-order methods, the results are similar in terms of number of iterations, computational time and the value of the function at the last iteration.

If we now look at the eighth-order methods, we can see that they perform the same number of iterations, but the M8 method requires less computational time. What stands out most in this table is that the norm of the equation evaluated at the last iteration of the M8 method is smaller than for the rest of the methods, and considerably smaller than in the case of the J8 method. As a conclusion of this numerical experiment, among the converging methods we obtain similar results in most cases, although the M8 method stands out, since it performs among the fewest iterations and attains a higher order, and it is by far the method that obtains the closest approximation to the solution, as can be seen in Columns 2 and 3 of Table 2.

The results obtained for the equation arctan(x) = 0 are presented in Table 3. In this case, not all the methods converge to the solution: the Newton, K4 and K8 methods do not converge taking x_0 = 1.5 as the initial estimation, as can be seen in the dynamical planes associated with these methods for this equation. Let us discuss the methods that do converge. Among the fourth-order methods, the results in terms of number of iterations, computational time and the value of the equation at the last iteration are similar.
If we now look at the eighth-order methods, we can observe that the M8 method performs fewer iterations and also has the shortest computational time. What stands out most in this table is the value of the ACOC of the M8 method, which in this case increases to 11 instead of 8; it is true that the J8 method also increases to 9 and has the smallest value of the norm of the equation evaluated at the last iteration, but, since it performs one more iteration than the M8 method, this is to be expected.
As a conclusion of this numerical experiment, we obtain similar results among the converging methods, although the M4 and M8 methods stand out due to their dynamical planes, especially the M8 method, since it performs the fewest iterations, attains the highest order and takes the least time. We show the results obtained for the equation arctan(x) − 2x/(x² + 1) = 0 in Table 4. In this case, all methods converge to the solution, as can be seen in the dynamical planes associated with these methods for this equation.
Let us comment on the methods in the order they appear in the table. As we can see, Newton's method is the one that performs the most iterations to reach the same tolerance, although it is true that, for this example, its ACOC increases by one unit.
If we look at the fourth-order methods, we can see that, for all of them, the value of the ACOC is 5 when the expected value is 4. Among these methods, the one that performs the most iterations, and therefore takes the longest, is the K4 method. If we look at the M4 and J4 methods, the results are similar, although the M4 method gives a better approximation in this case with the same number of iterations and also takes less time; thus, for this example, the most convenient fourth-order method would be M4.
Regarding the eighth-order methods, we can see that the K8 method performs more iterations than the rest of the schemes and is also one of the methods that uses the most computational time. We also observe that the M8 method has the shortest computational time. What stands out most in this table is the value of the ACOC of the M8 method, since in this case it increases to 11 instead of 8, although it is true that the J8 and K8 methods also increase to 9. In addition, the M8 method obtains a more accurate approximation than the rest of the eighth-order methods.
As a conclusion of this numerical experiment, there are notable differences among the methods. The less convenient ones in this case are the Newton and King-type methods; the more convenient ones, for their results and dynamical planes, are the M4 and M8 methods: M4 especially for its dynamical planes, and M8 because it performs the fewest iterations, attains the greatest order and takes the least time.

After performing these experiments, we can conclude that, in these cases, the most recommendable methods are M4 and M8, because they are the only methods, together with J4, that converge to the solution in all cases, and because they stand out for their numerical results in all examples; of the two proposed methods, M8 stands out the most.
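The ACOC values discussed above can be reproduced directly from the sequence of iterates; the following minimal sketch applies the definition to Newton's method on the first test equation, where the computed values should approach the theoretical order 2 (the helper name and iteration counts are our own choices).

```python
import math

# ACOC from consecutive iterates, following the definition in the text:
# p_k = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
def acoc(xs):
    d = [abs(b - a) for a, b in zip(xs, xs[1:])]   # consecutive distances
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Newton iterates for cos(x) - x = 0 from x0 = 1
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (math.cos(x) - x) / (-math.sin(x) - 1.0))

p_est = acoc(xs)     # values approach 2, Newton's theoretical order
```

Note that only a handful of iterations can be used: once the error reaches machine precision, consecutive distances vanish and the logarithms in the definition are no longer meaningful.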

Conclusions
In this manuscript, we design a family of optimal iterative methods with n + 1 steps and order of convergence 2^{n+1}, for n = 0, 1, 2, . . .
From this class, we can select an optimal iterative scheme with the desired order of convergence. Some classical methods are members of this class.
We work, from the dynamical and numerical points of view, with the elements of this family of orders 4 and 8, comparing the results obtained with those of other known methods. The results provided by these two elements of the family are very satisfactory, both numerically (number of iterations, error bounds, etc.) and in terms of stability (amplitude of the convergence basins, existence of strange fixed points, etc.).