Purely Iterative Algorithms for Newton’s Maps and General Convergence

The aim of this paper is to study the local dynamical behaviour of a broad class of purely iterative algorithms for Newton's maps. In particular, we describe the nature and stability of fixed points and provide a type of scaling theorem. Based on those results, we apply a rigidity theorem in order to study the parameter space of cubic polynomials, for a large class of new root-finding algorithms. Finally, we study the relations between critical points and the parameter space.


Introduction
The computation of solutions of equations of the form Ψ(z) = α is a classic problem that arises in different areas of mathematics, and in particular in numerical analysis.
Here Ψ : C → C is a complex function, and usually it is assumed that α = 0. Since the nature of the problem depends on the space where the equation is defined and where its possible solutions live, it is ambitious to expect a unified theory that provides exact, or even approximate, solutions to this class of equations. Moreover, depending on the objective being addressed, solving such an equation can be very different in nature, as can the techniques used to solve it: compare, for instance, the Picard-Lindelöf theorem on existence and uniqueness of solutions of ordinary differential equations (see for example [1]) with the fundamental theorem of algebra in complex analysis. If we turn our attention to explicit solutions, the problem becomes even more difficult.
Consider a complex polynomial f and the classical Newton's method N f (z) = z − f(z)/f'(z). Higher-order methods have also been extensively used and studied in order to approach the equation f(z) = 0. The iterative function N f defines a rational map on the extended complex plane (Riemann sphere) C = C ∪ {∞}. The simple roots of the equation f(z) = 0, in other words the roots of f(z) = 0 that are not roots of the derivative f'(z), are superattracting fixed points of N f : if ζ is a simple root of f, then N f (ζ) = ζ and N f '(ζ) = 0. For a review of the dynamics of Newton's method, see for instance [2,3]. More generally, Poly d and Rat k denote the space of polynomials of degree d and the space of rational functions of degree k, respectively. By a root-finding algorithm (or root-finding method) we mean a rational map T f : Poly d → Rat k such that the roots of the polynomial f are attracting fixed points of T f . A root-finding algorithm T f has order σ if the local degree of T f at every simple root of f is σ.
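The defining property just stated can be checked numerically. The sketch below (the sample polynomial f(z) = z**2 − 1 is an assumption, not taken from the paper) verifies that a simple root is a fixed point of N f with vanishing multiplier:

```python
# Check that a simple root of f is a superattracting fixed point of
# Newton's map N_f(z) = z - f(z)/f'(z): it is fixed and N_f'(root) = 0.

def newton_map(f, df, z):
    """One step of Newton's method applied to f."""
    return z - f(z) / df(z)

def numerical_derivative(g, z, h=1e-6):
    """Central-difference derivative of a complex map g at z."""
    return (g(z + h) - g(z - h)) / (2 * h)

# Example: f(z) = z**2 - 1 has the simple root z = 1.
f = lambda z: z**2 - 1
df = lambda z: 2 * z
N = lambda z: newton_map(f, df, z)

assert abs(N(1.0) - 1.0) < 1e-12                 # the root is a fixed point
assert abs(numerical_derivative(N, 1.0)) < 1e-5  # multiplier N_f'(1) = 0
```

The vanishing multiplier is exactly what makes the convergence of Newton's method quadratic near a simple root.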
In this paper we study the dynamical aspects of the family (1), where a 0 , a 1 , a 2 , b 0 and b 1 are real numbers. Depending on those parameters, this family has order 2, 3, 4 or 5. This family can also be viewed as a generalization of c-iterative functions (for a definition see Example 5 below). In [4], C. McMullen proved a rigidity theorem implying that a purely iterative root-finding algorithm which is generally convergent for cubic polynomials is conformally conjugate to a generating map. Applying this result, J. Hawkins in [5] obtained an explicit expression for the rational maps which are generating, so it is natural to ask which of the maps T f are generating maps. We use that rigidity result to show that, over the space of cubic polynomials, the maps T f that generate a generally convergent algorithm are restricted to Halley's method applied to the cubic polynomial.
The paper is organized as follows. Section 2 contains some basic notions of the classic theory of complex dynamics. In addition to establishing the notation and main examples, Section 3 contains the definition of purely iterative algorithm for Newton's maps that will be used throughout the article. Section 4 is devoted to the study of the nature of fixed points. In Section 5 we study the order of convergence of T f , and in Section 6 we provide the results about scaling theorems. We give the result concerning maps that generate generally convergent root-finding algorithms for cubic polynomials in Section 7. In Section 8 we describe the relation between critical points and the parameter space. The last section summarizes the conclusions.

Basic Notions in Complex Dynamics
We refer the reader to [6] or [7][8][9][10][11][12][13][14][15][16] for the basic notions of the classic Fatou-Julia theory of complex dynamics used here (as references for the Fatou-Julia theory see for instance P. Blanchard [17] and J. Milnor [18]). We give a short summary. Let R = P/Q be a rational map of the extended complex plane into itself, where P and Q are polynomials with no common factors.
• A point ζ is called a fixed point of R if R(ζ) = ζ, and the multiplier of R at a fixed point ζ is the complex number λ(ζ) = R'(ζ).
• At each point z j of a cycle, the derivative (R n )' has the same value, called the multiplier of the cycle.

•
An n−cycle {z 0 , z 1 , . . . , z n−1 } is said to be attracting, repelling or indifferent depending on the value of the associated multiplier (same conditions as for fixed points).

•
The Julia set of a rational map R, denoted J(R), is the closure of the set of repelling periodic points. Its complement is the Fatou set F(R). If z 0 is an attracting fixed point of R, then its basin of attraction B(z 0 ) is contained in the Fatou set and J(R) = ∂B(z 0 ), where ∂ denotes the topological boundary.
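The partition of the sphere into basins whose common boundary is J(R) can be visualized with a few lines of code. A minimal sketch, assuming the standard example R = N f for f(z) = z**3 − 1 (not a polynomial fixed by the paper):

```python
# Iterate Newton's map for z**3 - 1 on a coarse grid and record which
# cube root of unity each starting point converges to; the three basins
# B(z0) tile the plane, and their common boundary is the Julia set.
import cmath

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]

def newton_z3(z):
    return z - (z**3 - 1) / (3 * z**2)

def basin(z, max_iter=100, tol=1e-8):
    """Index of the root that z converges to, or None (near the Julia set)."""
    for _ in range(max_iter):
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
        if z == 0:          # critical point of f: Newton step undefined
            return None
        z = newton_z3(z)
    return None

# Coarse 20x20 grid on [-2,2]^2; almost every point lands in some basin.
hits = [basin(complex(-2 + 4 * a / 19, -2 + 4 * b / 19))
        for a in range(20) for b in range(20)]
assert {0, 1, 2}.issubset(set(hits))   # all three basins are populated
```

Replacing the coarse grid by one pixel per point and coloring by basin index produces the familiar pictures of J(N f ).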

Definitions and Notations
Now we recall the definition of purely iterative algorithms due to S. Smale in [19]. Let P d be the space of all polynomials of degree less than or equal to d . For every k ≥ 1, define the space J k = C k+2 and the map (2), where P and Q are polynomials in the k + 2 variables z, ξ 0 , . . . , ξ k , with no common factors. A purely iterative algorithm is a rational endomorphism T f : C → C that depends on f ∈ P d and takes the form (3), for a rational map as in (2). Consider a modification of the preceding definition. Let Rat d be the space consisting of the rational maps of degree less than or equal to d , and define a subset V ⊂ Rat d . Since Newton's method applied to z d − 1 is a rational map that satisfies the conditions defining V, we conclude that V ≠ ∅ for every d ≥ 2.
Let G : J k → C be the rational map defined as in (4), where P and Q are polynomials in k + 2 variables with no common factors. We define a rational endomorphism T f : C → C , depending on R ∈ V ⊂ Rat d , by the Formula (5). The following is proved in [6].
Theorem 1. For every G : J k → C defined as before, there exists a complex polynomial f of degree d such that for every R ∈ V, R = N f , where N f is the Newton method. Also, there exists a linear space H of dimension d + 1 such that V is contained in Rat d ∩ H.
Theorem 1 motivates the following definition.
Definition 1. Let S f : C → C be the rational endomorphism depending on f ∈ P d , given by where N f is Newton's map applied to f , and G is defined by the Formula (4). A rational endomorphism S f as above will be called a purely iterative algorithm for Newton's maps.

Remark 1.
Note that the degree of the polynomials P and Q in (4) does not depend on the degree of f .
In this paper we consider the family of purely iterative algorithms for Newton's maps given by the Formula (5), where a 0 , a 1 , a 2 , b 0 and b 1 are real numbers. The resulting family is exactly the Formula (1) above.

Example 1.
The family of purely iterative algorithms for Newton's maps (1) includes several important families of root-finding algorithms.
This method has been studied over the last decades [20].
For a study of dynamical and numerical properties of Halley's method, see for instance [21,22].

3.
Whittaker's iterative method, also known as the convex acceleration of Whittaker's method (see [23,24]), is an iterative map of order of convergence two. According to (4) and (5), Whittaker's method is a purely iterative algorithm for Newton's maps when considering k = 1 and the polynomials

4.
Newton's method for multiple roots is obtained by considering a 0 = b 0 = 1, a 1 = a 2 = 0 and b 1 = −1. Indeed, note that P(z, ξ 0 , ξ 1 ) = z − ξ 0 , with Q as above. This method has been studied by several authors; see for example [25,26] and, more recently, [27,28]. 5. The following method, which may be new and is denoted by SH2 f , is a modification of the super-Halley method (for a study of this method see for instance [29]). Again, it follows from (4) and (5) that SH2 f is a purely iterative algorithm for Newton's maps. 6.
In this case
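Several of the concrete methods listed above can be experimented with numerically. As a hedged sketch (the sample polynomial f(z) = z**3 − 2 and the closed-form Halley iteration are illustrative choices, not taken from the paper's general family (1)), the following compares second-order Newton with third-order Halley:

```python
# Newton's method (order 2) versus Halley's method (order 3) on the
# sample polynomial f(z) = z**3 - 2, whose real root is 2**(1/3).

f   = lambda z: z**3 - 2
df  = lambda z: 3 * z**2
d2f = lambda z: 6 * z

def newton(z):
    return z - f(z) / df(z)

def halley(z):
    # H_f(z) = z - 2 f f' / (2 f'^2 - f f'')
    return z - 2 * f(z) * df(z) / (2 * df(z)**2 - f(z) * d2f(z))

root = 2 ** (1 / 3)
zn, zh = 1.0, 1.0
for _ in range(4):
    zn, zh = newton(zn), halley(zh)

# After the same number of steps, the third-order iterates are much
# closer to the root than the second-order ones.
assert abs(zh - root) < abs(zn - root) < 1e-6
```

The gap between the two errors widens rapidly with each step, which is exactly what the order of convergence measures.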
Remark 2. Note that a purely iterative algorithm for Newton's maps may not be a root-finding algorithm. For instance, consider the polynomial f (z) = (z − 1) 2 (z + 1) and the purely iterative Newton's map defined with b 1 < −2. In this case the multiplier at the root 1 can be computed explicitly, and it follows that 1 is a repelling fixed point of the associated rational map.

The Nature of Fixed Points
In order to ensure that T f is a root-finding algorithm (see Remark 2), some restrictions on the choice of the real parameters a 0 , a 1 , a 2 , b 0 and b 1 are required. Let m ≥ 1 be an integer and define l m as in (6), where a = a 0 + a 1 + a 2 and b = a 1 + 2a 2 .
Theorem 2. Suppose that condition (7) holds for every m ≥ 1. Let f : C → C be a complex polynomial of degree d ≥ 2, denote by α i its zeros and by m i ≥ 1 their multiplicities. Then T f defined in (1) is a root-finding algorithm. Moreover, (a) each root α i of multiplicity m i ≥ 1 is an attracting fixed point of T f with multiplier λ m i = 1 − l m i ; assuming that a 0 = b 0 , every simple root is a superattracting fixed point of T f .
(b) T f has a repelling fixed point at ∞.
(c) If a 1 = a 2 = 0, b 0 , b 1 ≠ 0 and a 0 /b 1 < 0, then the extraneous fixed points of T f are the zeros of f ' which are not zeros of f . More precisely, if β is a zero of order n ≥ 2 of f ', then it is a repelling fixed point of T f .

Remark 3.
If a 0 = 0, then by Formula (6) we have that λ 1 = 1; that is, the simple roots of a complex polynomial are parabolic fixed points of T f . In this case T f cannot be a root-finding algorithm, so from now on we assume a 0 ≠ 0.
Proof. (a) First note that the factor (z − N f (z)) in (1) implies that T f (α i ) = α i for every i. If f has a zero α of multiplicity m, then α is a (super)attracting fixed point of Newton's method with multiplier 1 − 1/m. It follows that T f '(α) = 1 − l m . Consequently, α is an attracting fixed point with multiplier satisfying 0 < 1 − l m < 1. Supposing that a 0 = b 0 , we have that l 1 = 1, which implies that α is a superattracting fixed point.
(b) Note that the degree d polynomial f can be written as f (z) = c d z d + · · · + c 1 z + c 0 with c d ≠ 0. Therefore, when |z| tends to ∞, we have f (z) ∼ c d z d , and Newton's method applied to f behaves like that of c d z d . Carrying this into the Formula (1), a computation of the multiplier of T f at the fixed point ∞ shows that its modulus is greater than one, and the proof is complete.
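Part (a) can be illustrated in the familiar special case of Newton's method, using the polynomial f(z) = (z − 1)²(z + 1) considered above: at the root 1 of multiplicity m = 2 the multiplier is 1 − 1/m = 1/2, so convergence is only linear. A minimal numerical sketch (not from the paper):

```python
# Multiplier 1 - 1/m of Newton's map at a root of multiplicity m = 2,
# and the resulting linear (error-halving) convergence.

f  = lambda z: (z - 1)**2 * (z + 1)
df = lambda z: 2 * (z - 1) * (z + 1) + (z - 1)**2

def N(z):
    return z - f(z) / df(z)

# Estimate the multiplier N_f'(1) by central differences.
h = 1e-6
multiplier = (N(1 + h) - N(1 - h)) / (2 * h)
assert abs(multiplier - 0.5) < 1e-4      # 1 - 1/m with m = 2

# Linear convergence: the error is roughly halved at each step.
z, errs = 1.3, []
for _ in range(5):
    z = N(z)
    errs.append(abs(z - 1))
assert all(0.3 < errs[i + 1] / errs[i] < 0.7 for i in range(4))
```

The same experiment at the simple root −1 would give multiplier 0, i.e. superattraction, matching the dichotomy in part (a).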
Example 2. Now we give some examples of Theorem 2: 1.

3.
Remark 2 considered an example of a purely iterative algorithm for Newton's maps that is not a root-finding algorithm. In that case condition (7) is not satisfied when b 1 < −2. Indeed, l 2 = 14/(16 + 8b 1 ), and so for b 1 < −2 the condition (7) is not satisfied. 4.
The root-finding algorithm SH2 f has order of convergence 3 and does not satisfy a 0 = b 0 . The following table summarizes the examples above.

Order of Convergence
This section describes the order of convergence of T f defined in (1). Throughout this section, N [n] f denotes the nth derivative of Newton's method. Lemma 1. Consider T f as a root-finding algorithm applied to a degree d polynomial f . Then: 1.
If a 0 = b 0 , then T f is at least of order 2.

2.
If a 0 = b 0 and b 1 = a 1 − a 0 2 , then T f is at least of order 3.

4.
If the condition in (3) is satisfied and additionally a 1 = 0 and N [4] f (α) = 0 for every simple root α of f , then T f has order 5.
Proof. Recall that a 0 ≠ 0. Let α be a simple root of the polynomial f . Since Newton's method is an order 2 root-finding algorithm, N f (α) = α and N f '(α) = 0. To prove part (1), write the Formula (1) as displayed. Thus, if a 0 = b 0 we have that T f (α) = α and T f '(α) = 0, so T f is a root-finding algorithm of order at least 2, which proves part (1).
Finally, to prove part (4), consider the Taylor expansion of Newton's method around the simple root α of the polynomial f . Combining those computations with the hypotheses b 0 = a 0 , b 1 = a 1 − a 0 2 and a 1 = −2a 2 implies that the lower-order terms vanish. Since N [4] f (α) = 24µ 3 = 0, we have that µ 3 = 0. Additionally, since α is a simple root we have f '(α) ≠ 0, and so T f is a root-finding algorithm of order at least 5. This concludes part (4), and the proof of the Lemma. Corollary 1. Let f be a complex polynomial and denote by α i its roots. Suppose that T f is a root-finding algorithm with order of convergence equal to two, and that the order does not depend on the multiplicity of the roots α i . Then T f is Newton's method for multiple roots.

Proof. By part (a) of Theorem 2 we have that
Since the order of convergence of the root-finding algorithm T f is 2, for every root α i we have λ m = 1 − l m = 0 and a 0 = b 0 . This holds for every m ∈ N if and only if b 1 = −a 0 and a 1 = a 2 = 0. This concludes the proof.
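The order statements in this section can be checked empirically via the computational order of convergence ln(e_{n+1}/e_n)/ln(e_n/e_{n-1}). A sketch, assuming Newton's method on the sample polynomial f(z) = z**2 − 2 (neither taken from the paper):

```python
# Estimate the order of convergence sigma from three consecutive errors:
# sigma ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}); Newton's method should give ~2.
import math

f, df = (lambda z: z**2 - 2), (lambda z: 2 * z)
N = lambda z: z - f(z) / df(z)

root, z = math.sqrt(2), 2.0
errors = []
for _ in range(3):
    z = N(z)
    errors.append(abs(z - root))

coc = math.log(errors[2] / errors[1]) / math.log(errors[1] / errors[0])
assert abs(coc - 2) < 0.2
```

The same three-error quotient applied to a member of the family (1) gives a quick numerical consistency check against Lemma 1.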
Note that this set of parameters gives convergence of order 3. The following table shows the iterations of order three, where z n+1 = T f (z n ) and z 0 = 1 + i.
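Such an iteration table can be regenerated in a few lines. As a hedged stand-in (the paper's exact polynomial and parameters are not reproduced here), the loop below prints the orbit of z 0 = 1 + i under a third-order method, Halley's iteration applied to f(z) = z**3 − 1:

```python
# Print an iteration table z_0, z_1, ... for a third-order method from
# the starting point z_0 = 1 + i; the orbit converges to the root 1.
f   = lambda z: z**3 - 1
df  = lambda z: 3 * z**2
d2f = lambda z: 6 * z

def halley(z):
    return z - 2 * f(z) * df(z) / (2 * df(z)**2 - f(z) * d2f(z))

z = 1 + 1j
for n in range(6):
    print(f"z_{n} = {z:.10f}")
    z = halley(z)

assert abs(z - 1) < 1e-10
```

With third-order convergence the number of correct digits roughly triples per step, so six iterations already exhaust double precision.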

Conjugacy Classes of the Schemes
We next prove an extension of the Scaling Theorem for purely iterative algorithms for Newton's maps. Let R 1 , R 2 : C → C be two rational maps. Then R 1 and R 2 are conjugate if there exists a Möbius transformation T : C → C such that R 1 • T(z) = T • R 2 (z) for all z.
Conjugacy plays a central role in understanding the behavior of classes of maps under iteration, in the following sense. Suppose that we wish to describe both the quantitative and the qualitative behavior of the map z → T f (z), where T f is an iterative function resulting from an iterative method z n+1 = Φ(z n ).
Let f be an arbitrary analytic function. Since conjugacy preserves fixed points, cycles and their character (whether (super)attracting, repelling, or indifferent), as well as their basins of attraction, it is worthwhile to try to construct a parameterized family (or families) of polynomials f a , as simple as possible, so that for a suitable choice of the complex parameter a there exists a conjugacy between T f (z) and T f a (z).
In order to describe the conjugacy classes of T f , we recall the following useful result (see [39], Section 5, Theorem 1).
If g(z) = f • A(z), then T f • A = A • T g ; that is, T f is analytically conjugate to T g by A.
Proof. Assume that there exists a constant λ ∈ C * such that g(z) = λ f • A(z). According to Theorem 3, we have A(N g (z)) = αN g (z) + β = N f (A(z)). The desired identity follows, which completes the proof.
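The classical Newton case of the Scaling Theorem is easy to verify numerically. A sketch, with an arbitrary affine map and an arbitrary cubic (both illustrative assumptions):

```python
# Scaling Theorem for Newton's method: if g = f o A with A(z) = a z + b
# affine, then A o N_g = N_f o A, i.e. A conjugates N_g to N_f.

f  = lambda z: z**3 - 2 * z + 1
df = lambda z: 3 * z**2 - 2

a, b = 2 - 1j, 0.5 + 3j          # an arbitrary affine map A(z) = a z + b
A  = lambda z: a * z + b
g  = lambda z: f(A(z))
dg = lambda z: a * df(A(z))      # chain rule

Nf = lambda z: z - f(z) / df(z)
Ng = lambda z: z - g(z) / dg(z)

for z in (0.3 + 0.7j, -1.2, 2j):
    assert abs(A(Ng(z)) - Nf(A(z))) < 1e-9
```

The identity holds exactly; the tolerance only absorbs floating-point rounding. The practical payoff is the one described above: up to affine conjugacy, it suffices to study T f on a normalized family of polynomials.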

Methods Generally Convergent for Cubic Polynomials
A purely iterative rational root-finding algorithm T f is generally convergent if T n f (z) converges to a root of the polynomial f for almost all complex polynomials of degree d ≥ 2 and almost all initial conditions. C. McMullen proved that if d > 3, then no generally convergent root-finding algorithm exists. Moreover, he proved the following result: Theorem 5 ([4]). Every generally convergent algorithm for cubic polynomials is obtained by specifying a rational map R in such a way that 1.

2.
Aut(R) contains those Möbius maps that permute the roots of unity.
Moreover, the generated algorithm has the displayed form, where φ c is a Möbius transformation that sends the roots of unity to the points 1, (−1 − √(1 − 4c))/2 and (−1 + √(1 − 4c))/2. So the following definition is natural: a rational map R generates a generally convergent algorithm if it is convergent for the cubic polynomial whose roots are the roots of unity, and its associated automorphism group contains the Möbius transformations which permute the roots of unity.
As an example, consider Halley's method applied to the family of cubic polynomials f λ (z) = z 3 + (λ − 1)z − λ. Hawkins's theorem implies that if a rational map R generates a generally convergent algorithm, then zero is a fixed point of R. Hence the condition H f (0) = 0 implies that λ = 0 or λ = 1. Since the group of automorphisms must also contain the Möbius transformations that permute the roots of unity, λ cannot be 0. Thus λ = 1, and we obtain Halley's method applied to z 3 − 1.
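A direct computation shows that Halley's method applied to z**3 − 1 simplifies to H(z) = z(z³ + 2)/(2z³ + 1), and the symmetry demanded by McMullen's theorem can then be checked numerically:

```python
# Halley's map for z**3 - 1 is H(z) = z (z^3 + 2) / (2 z^3 + 1).
# Check that Aut(H) contains the rotations z -> w z by cube roots of
# unity and the inversion z -> 1/z, i.e. Möbius maps permuting the
# roots of unity.
import cmath

H = lambda z: z * (z**3 + 2) / (2 * z**3 + 1)
w = cmath.exp(2j * cmath.pi / 3)          # primitive cube root of unity

for z in (0.4 + 0.9j, 1.7 - 0.2j, -0.8 + 1.1j):
    assert abs(H(w * z) - w * H(z)) < 1e-12   # rotational symmetry
    assert abs(H(1 / z) - 1 / H(z)) < 1e-12   # inversion symmetry
```

These identities are exactly the condition "Aut(R) contains the Möbius maps permuting the roots of unity" from Theorem 5, specialized to this map.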
McMullen's theorem tells us how to generate a generally convergent iterative algorithm by finding the map R. The following question is then natural: when does the family T f contain rational maps that generate generally convergent algorithms?
The following theorem is due to J. Hawkins ([5], Theorem 1) and describes explicitly, according to their degree, the rational maps which generate generally convergent algorithms for cubic polynomials. In our family, 0 is a pole of order 2 of the relevant expression; therefore, if a 1 = a 2 = 0 and b 1 = 0, we are able to remove the poles. It is easy to see that this family of methods has order of convergence at least 4. We have seen that for the special case a 1 = −3a 0 /4 the family has fifth order of convergence and takes the form G(z) = z 5 (3 + 2z)/(2 + 3z).

Study of the Fixed Points and Their Stability
It is clear that z = 0 and z = ∞ are fixed points of G(z, a 0 , a 1 ), related to the roots a and b respectively. Now we focus our attention on the extraneous fixed points (those points which are fixed points of T f but are not solutions of the equation f (z) = 0). First of all, we notice that z = 1 is an extraneous fixed point, which is associated with the original convergence to infinity. Moreover, there are also two other extraneous fixed points, which correspond to the roots of the polynomial q(z) = (6a0² + 8a0a1) + (9a0² + 16a0a1 + 12a1²)z + (6a0² + 8a0a1)z², whose analytical expressions, depending on a 0 and a 1 , we denote ex 1 (a 0 , a 1 ) and ex 2 (a 0 , a 1 ). The relations between the extraneous fixed points are described in the following result. Lemma 2. The number of simple extraneous fixed points of G(z, a 0 , a 1 ) is three, except in the following cases: If a 1 = −a 0 /2, then ex 1 (a 0 , a 1 ) = ex 2 (a 0 , a 1 ) = −1, which is not a fixed point, so there is only one extraneous fixed point.

(iii)
If a 1 = −3a 0 /4, then ex 1 (a 0 , a 1 ) = ex 2 (a 0 , a 1 ) = 0, which is a fixed point related to the root a, so there is only one extraneous fixed point.
Regarding the stability of these extraneous fixed points, the first derivative of G(z, a 0 , a 1 ) must be calculated; up to a nonvanishing rational factor, G'(z, a 0 , a 1 ) is proportional to 2z³(1 + z)² q(z). Taking into account this form of the derivative, it is immediate that the origin and ∞ are superattracting fixed points for every value of a 0 and a 1 .
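The coincidences in Lemma 2 can be verified directly from the polynomial q(z) written above (a quick numerical sketch, with a 0 = 1 chosen for illustration):

```python
# Check the degenerate cases of Lemma 2 using the polynomial
# q(z) = (6 a0^2 + 8 a0 a1) + (9 a0^2 + 16 a0 a1 + 12 a1^2) z
#        + (6 a0^2 + 8 a0 a1) z^2.

def q(z, a0, a1):
    return ((6 * a0**2 + 8 * a0 * a1)
            + (9 * a0**2 + 16 * a0 * a1 + 12 * a1**2) * z
            + (6 * a0**2 + 8 * a0 * a1) * z**2)

a0 = 1.0

# Case a1 = -a0/2: q(z) = 2 a0^2 (1 + z)^2, so ex1 = ex2 = -1.
a1 = -a0 / 2
assert abs(q(-1, a0, a1)) < 1e-12
# q'(-1) = 0 as well, confirming -1 is a double root.
assert abs((9*a0**2 + 16*a0*a1 + 12*a1**2)
           + 2 * (6*a0**2 + 8*a0*a1) * (-1)) < 1e-12

# Case a1 = -3 a0/4: the constant and quadratic coefficients vanish,
# leaving z = 0 as the only root.
a1 = -3 * a0 / 4
assert abs(q(0, a0, a1)) < 1e-12
assert abs(6 * a0**2 + 8 * a0 * a1) < 1e-12
```

The same few lines, with symbolic a 0 and a 1 , reproduce the factorizations 2a0²(1 + z)² and (15/4)a0² z behind cases of Lemma 2.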
The stability of the other fixed points is more complicated and will be treated separately. We focus first on the extraneous fixed point z = 1, which is related to the original convergence to ∞; the following result describes its behavior.
Lemma 3. The behavior of z = 1 is the following: if a 0 = −10a 1 /17 and a 1 ≠ 0, then z = 1 is an indifferent fixed point.
In the rest of the cases, z = 1 is repelling.
Due to the complexity of the stability functions of the extraneous fixed points, characterizing their stability domains analytically is not affordable. We use the graphical tools of the software Mathematica in order to obtain the regions of stability of each of them in the complex plane. In Figure 1 the stability region of z = 1 can be observed, and in Figures 2 and 3 the regions of stability of ex 1 (a 0 , a 1 ) and ex 2 (a 0 , a 1 ) are shown. These stability regions are drawn in 3D: we study the behavior of the derivative G', which depends on two parameters, so we need three axes to observe it. Taking into account these regions, the following result summarizes the behavior of the extraneous fixed points. These figures are important, as can be seen in [40][41][42], because they shed light on the stability of the extraneous fixed points: if there is no region in which they are attracting, they cannot cause problematic behavior.
As a conclusion we can remark that the number and the stability of the fixed points depend on the parameters a 0 and a 1 .

Study of the Critical Points and Parameter Spaces
In this section we compute the critical points and show the parameter spaces associated to the free critical points. It is well known that there is at least one critical point associated with each invariant Fatou component. The critical points of the family are the solutions of G'(z, a 0 , a 1 ) = 0. By solving this equation, it is clear that z = 0 and z = ∞ are critical points; they are related to the roots of the polynomial p(z) and have their own Fatou components. Moreover, there exist critical points not related to the roots; these points are called free critical points. The relations between the free critical points are described in the following result. It is easy to see that z = −1 is a pre-periodic point, as it is the pre-image of the fixed point z = 1 related to the convergence to infinity, and the other free critical points are conjugate: cr 1 (a 0 , a 1 ) = 1/cr 2 (a 0 , a 1 ). So there are only two independent free critical points and only one is not pre-periodic. Without loss of generality, we consider in this paper the free critical point cr 1 (a 0 , a 1 ). In order to find the best members of the family in terms of stability, the parameter space corresponding to this independent free critical point will be shown.
The study of the orbits of the critical points gives insight into the dynamical behavior of an iterative method. More precisely, to determine whether there exists any attracting extraneous fixed point or periodic orbit, the following question must be answered: for which values of the parameters are the orbits of the free critical points attracted to periodic orbits? In order to answer this question we draw the parameter space; the main difficulty is that there are two free parameters, a 0 and a 1 . To deal with this, we use a variant of the algorithm that appears in [43] and similar works, in which the horizontal axis holds the possible real values of a 0 and the vertical one the possible values of a 1 . When the critical point is used as the initial estimate, for each value of the parameters the color of the point tells us where the orbit has converged: to a fixed point, to an attracting periodic orbit, or even to infinity.
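The scanning algorithm just described can be sketched for a family that we can write down explicitly. The sketch below uses Newton's method on f λ (z) = z³ + (λ − 1)z − λ (the cubic family of Section 7), whose only free critical point is z = 0 since f'' (z) = 6z; the map G(z, a 0 , a 1 ) itself is not reproduced here, so this is a stand-in, not the paper's computation:

```python
# One "pixel" per parameter: iterate the free critical point z = 0 of
# Newton's method for f_lam(z) = z**3 + (lam - 1) z - lam and record
# whether its orbit reaches a root of f_lam.

def newton_step(z, lam):
    """One Newton step for f_lam."""
    fz = z**3 + (lam - 1) * z - lam
    dfz = 3 * z**2 + (lam - 1)
    return z - fz / dfz

def critical_orbit_converges(lam, max_iter=200, tol=1e-9):
    """True if the orbit of the free critical point z = 0 reaches a root."""
    z = 0.0
    for _ in range(max_iter):
        if 3 * z**2 + (lam - 1) == 0:    # Newton step undefined here
            return False
        zn = newton_step(z, lam)
        if abs(zn - z) < tol:            # orbit has stalled; accept only
            # if the limit is actually a root of f_lam
            return abs(zn**3 + (lam - 1) * zn - lam) < 1e-6
        z = zn
    return False

# Coarse parameter grid (one boolean per lam); a fine grid colored by the
# index of the attracting limit reproduces the parameter-plane pictures.
grid = [complex(re, im) for re in (-1, 0, 1, 2) for im in (-1, 0, 1)]
fates = {lam: critical_orbit_converges(lam) for lam in grid}
assert fates[complex(2, 0)]    # a generic parameter: the orbit finds a root
```

Replacing the boolean by the index of the limiting root or cycle, and the coarse grid by one per pixel, yields exactly the colored parameter planes described in the text.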
In Figure 4, the parameter space associated to cr 1 (a 0 , a 1 ) is shown. The algorithm used to draw this parameter space is similar to the one used in [44]: a point is painted in cyan if, taking z 0 = cr 1 (a 0 , a 1 ), the iteration converges to 0 (which is related to one root), in magenta if it converges to ∞ (which is related to the other root), and in yellow if it converges to 1 (which is related to ∞). Other colors used are red, for convergence to an extraneous fixed point, and further colors, including black, for cycles. Now we show these anomalies using dynamical planes, where convergence to 0 (after a maximum of 2000 iterations and with a tolerance of 10 −6 ) appears in magenta, convergence to ∞ (with the same maximum of iterations and tolerance) appears in cyan, and the zones with no convergence to the roots appear in black. First, in Figures 5 and 6 the dynamical planes associated with values of a 0 and a 1 for which there are no convergence problems are shown. Those choices of parameter pairs are good ones, since all points converge to the roots of the original equation.
Focusing attention on the region shown in Figure 4, it is evident that there exist members of the family with complicated behavior. In Figure 7, the dynamical plane of a member of the family with regions of convergence to one of the extraneous fixed points is shown. In this case, there exist regions of points whose iterations do not converge to any of the roots of the original equation, so these parameter values are not a good choice.
On the other hand, in Figures 4 and 8, the dynamical plane of a member of the family with regions of convergence to z = 1 (related to ∞) is shown. Again, there exist regions of points whose iterations do not converge to any of the roots of the original equation, so these parameter values are not a good choice. Finally, in Figures 9 and 10, some dynamical planes of members of the family with convergence to different attracting cycles are shown.
Sharkovsky's theorem [31] states that the existence of an orbit of period 3 guarantees orbits of every period.

Conclusions
This article discusses the purely iterative algorithms for Newton's maps T f given by the Formula (1), which were proposed in [6]. This family represents a large class of root-finding algorithms, including the best-known ones and those of high order of convergence. Depending on the parameters a 0 , a 1 , a 2 , b 0 and b 1 , the family T f may fail to define a root-finding algorithm. To avoid this difficulty, we obtained a characterization, in terms of those parameters, of when it is effectively a root-finding algorithm. The scaling theorem has the advantage of reducing the dimension of the parameter space, and it is useful for plotting the parameter space in low dimension, among other things. We give a classification of the extraneous and indifferent fixed points of T f in terms of the parameters a 0 , a 1 , a 2 , b 0 and b 1 . We then use those results and Hawkins's theorem to conclude that, within the family T f , the rational map that generates a generally convergent root-finding algorithm is Halley's method applied to cubic polynomials. This shows that the rigidity is even stronger and is not obtained only in terms of conjugation.

Conflicts of Interest:
The authors declare no conflict of interest.