Memorizing Schröder's Method as an Efficient Strategy for Estimating Roots of Unknown Multiplicity

In this paper, we propose, to the best of our knowledge, the first iterative scheme with memory in the literature for finding roots of unknown multiplicity. It improves the efficiency of a similar procedure without memory due to Schröder and can be considered as a seed to generate higher-order methods with similar characteristics. Once its order of convergence is studied, its stability is analyzed, showing its good properties, and it is compared numerically, in terms of basins of attraction, with similar schemes without memory for finding multiple roots.


Introduction
There exist in the literature (see, for example, References [1-8]) numerous iterative methods without memory, involving derivatives or not, designed to estimate the multiple roots of a nonlinear equation f(x) = 0, but most of them need the knowledge of the multiplicity m of these roots.
It is well-known that Schröder's method [9],

x_{k+1} = x_k − f(x_k) f'(x_k) / (f'(x_k)^2 − f(x_k) f''(x_k)), k = 0, 1, ...,

is able to converge quadratically to a multiple solution of a nonlinear equation, that is, a value α ∈ R such that f(α) = 0 and f^(j)(α) = 0, j = 1, 2, ..., m − 1, with m being the multiplicity of the root. This scheme was originally deduced by applying Newton's method to the quotient g(x) = f(x)/f'(x), and it is denoted along this manuscript by SM1. Notice that SM1 requires three function evaluations per step. Similarly, the derivative-free Traub-Steffensen method can be applied to g,

x_{k+1} = x_k − g(x_k)/g[w_k, x_k], w_k = x_k + γ g(x_k), k = 0, 1, ...,

with γ being a real parameter; however, since g itself involves f', the resulting scheme requires four function evaluations per step and is no longer derivative-free. This Traub-Steffensen method on g is too expensive and is not considered further.
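As an illustration, SM1 can be sketched in a few lines of code (a minimal sketch of Newton's method applied to g = f/f'; the test function, with a triple root at x = 1, is our own choice and does not appear in the paper):

```python
def schroder_sm1(f, df, d2f, x0, tol=1e-12, max_iter=100):
    """Schroeder's method SM1: Newton's method applied to g(x) = f(x)/f'(x).
    It converges quadratically to a root of any (unknown) multiplicity, at
    the cost of three function evaluations (f, f', f'') per step."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        denom = dfx * dfx - fx * d2fx
        if denom == 0:
            break
        step = fx * dfx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# Triple root at x = 1; the multiplicity is never supplied
root = schroder_sm1(lambda x: (x - 1)**3 * (x + 2),
                    lambda x: 3*(x - 1)**2 * (x + 2) + (x - 1)**3,
                    lambda x: 6*(x - 1) * (x + 2) + 6*(x - 1)**2,
                    x0=2.0)
```

Note that the denominator f'^2 − f f'' vanishes only mildly near a multiple root (both terms are of the same order in x − α), so the quotient remains well defined in floating-point arithmetic until very close to the root.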
The main advantage of Schröder's scheme is its independence of the knowledge of the multiplicity of the root, in contrast with the modified Newton's method for multiple roots,

x_{k+1} = x_k − m f(x_k)/f'(x_k), k = 0, 1, ...,

where m is the multiplicity of α, which must be known in this case. This scheme is also due to Schröder (see also Reference [9]), and we denote it by SM2. It is second-order convergent and, therefore, optimal in the sense of the Kung-Traub conjecture, as it uses two new functional evaluations per iteration (see Reference [10]). However, it needs the knowledge of the multiplicity, while SM1 does not; nevertheless, the main drawback of SM1 is its low efficiency, as it needs to evaluate three nonlinear functions (f(x), f'(x) and f''(x)) per iteration. Our aim in this manuscript is twofold: on the one hand, we would like to increase the efficiency of SM1, keeping its ability to find multiple roots of multiplicity m without knowing m; on the other hand, we want to combine in the same algorithm the capability to find multiple roots with the use of more than one previous iterate. So, we propose an iterative scheme with memory for estimating multiple roots of unknown multiplicity. As far as we know, there is no iterative procedure in the literature satisfying these properties.
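For comparison, SM2 is even simpler to sketch, but the multiplicity m must be passed in explicitly (again a minimal sketch with our own test function):

```python
def schroder_sm2(f, df, m, x0, tol=1e-12, max_iter=100):
    """Modified Newton's method SM2: x_{k+1} = x_k - m*f(x_k)/f'(x_k).
    Second order and optimal (two evaluations per step), but the
    multiplicity m of the sought root must be known in advance."""
    x = x0
    for _ in range(max_iter):
        dfx = df(x)
        if dfx == 0:
            break
        step = m * f(x) / dfx
        x -= step
        if abs(step) < tol:
            break
    return x

# Triple root at x = 1; m = 3 has to be supplied by the user
root = schroder_sm2(lambda x: (x - 1)**3, lambda x: 3*(x - 1)**2, m=3, x0=2.0)
```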
In the analysis of the convergence of the proposed scheme, some aspects must be taken into account: as it is an iterative method with memory, the error in several previous iterations must be considered, and the multiplicity m of the root should also be a key element of the proof, although its specific value is not known. Regarding this fact, it should be noticed that f^(q)(α) = 0 for q = 1, 2, ..., m − 1 and f^(m)(α) ≠ 0. So, the Taylor expansions around α of f and f' appearing in the iterative expression should take this information into account.
On the other hand, as our proposed scheme is an iterative procedure that uses three previous iterates to calculate the next one, it is necessary to express the error equation in terms of their corresponding errors and, from it, to deduce the order of convergence. This is done by means of a classical result by Ortega and Rheinboldt [11], presented below.

Theorem 1. Let ψ be an iterative method with memory that generates a sequence {x_k} of approximations to the root α, and let this sequence converge to α. If there exist a nonzero constant η and positive numbers t_i, i = 0, 1, ..., m, such that the inequality

|e_{k+1}| ≤ η |e_k|^{t_0} |e_{k−1}|^{t_1} ··· |e_{k−m}|^{t_m}

holds, then the R-order of convergence of the iterative method ψ satisfies the inequality O_R(ψ, α) ≥ p, where p is the unique positive root of the equation

p^{m+1} − t_0 p^m − t_1 p^{m−1} − ··· − t_m = 0.

In this manuscript, Section 2 is devoted to the design and convergence analysis of the proposed derivative-free iterative method with memory to find multiple roots (without the knowledge of the multiplicity). In Section 3, its stability is analyzed in order to deduce its dependence on the initial estimations for both simple and multiple roots. In Section 4, the numerical performance of the method is checked on several test functions, together with the corresponding basins of attraction, in comparison with the existing Schröder methods.
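Theorem 1 can be put to work numerically. The helper below (our own illustration, not part of the paper) finds the unique positive root of the indicial equation by bisection; with t_0 = t_1 = t_2 = 1, corresponding to an error relation e_{k+1} ~ e_k e_{k−1} e_{k−2}, it recovers the order p ≈ 1.839 obtained later for the proposed scheme:

```python
def r_order(t):
    """Unique positive root of p**(m+1) - t[0]*p**m - ... - t[m] = 0.
    By the Ortega-Rheinboldt result, an iteration with memory whose errors
    satisfy |e_{k+1}| <= eta * |e_k|**t[0] * ... * |e_{k-m}|**t[m] has
    R-order at least this value."""
    m = len(t) - 1
    poly = lambda p: p**(m + 1) - sum(ti * p**(m - i) for i, ti in enumerate(t))
    lo, hi = 1.0, 2.0
    while poly(hi) < 0:        # enlarge the bracket if needed
        hi *= 2.0
    for _ in range(200):       # plain bisection on [lo, hi]
        mid = (lo + hi) / 2.0
        if poly(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

p = r_order([1, 1, 1])         # p**3 = p**2 + p + 1
```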

Design and Convergence Analysis
Our starting point is the derivative-free scheme with memory due to Traub [12],

x_{k+1} = x_k − f(x_k)/(f[x_k, x_{k−1}] + f[x_k, x_{k−2}] − f[x_{k−1}, x_{k−2}]), k = 0, 1, ..., (1)

where f[·,·] denotes the usual divided difference and x_0, x_{−1} and x_{−2} are its initial estimations; its order of convergence is p ≈ 1.839. The denominator of (1) is the derivative at x_k of the second-degree Newton interpolating polynomial through the last three iterates.
To estimate the multiple roots of f(x) = 0, we define the auxiliary function g(x) = f(x)/f'(x) and then apply Traub's method (1) to g(x) = 0, obtaining what we call the gTM method: an iterative scheme with memory that is proven to converge to any multiple root of f with the same order as the original Traub scheme, without the knowledge of its multiplicity m and using two new functional evaluations per iteration.
Theorem 2. Let f : C → C be an analytic function in a neighborhood of the multiple zero α of f, with unknown multiplicity m ∈ N − {1}. Then, for initial guesses x_0, x_{−1} and x_{−2} sufficiently close to α, the iteration function gTM defined in (2) has order of convergence p ≈ 1.8393, the only real root of p^3 − p^2 − p − 1 = 0, its error equation being

e_{k+1} = C e_{k−2} e_{k−1} e_k + O_3(e_{k−2}, e_{k−1}, e_k),

where C is a constant depending on m and on c_j = (m!/(m+j)!) f^(m+j)(α)/f^(m)(α), j = 1, 2, 3, ..., and O_3(e_{k−2}, e_{k−1}, e_k) denotes the remaining terms of the error equation, with products of powers of e_{k−2}, e_{k−1} and e_k whose exponents sum at least 3.
Proof. Let α be a multiple zero of f(x) with multiplicity m, and let e_k = x_k − α be the error at the kth iterate. Expanding f(x_k) and f'(x_k) around x = α by means of Taylor series, we have

f(x_k) = (f^(m)(α)/m!) e_k^m [1 + c_1 e_k + c_2 e_k^2 + c_3 e_k^3 + O(e_k^4)]

and

f'(x_k) = (f^(m)(α)/(m−1)!) e_k^{m−1} [1 + ((m+1)/m) c_1 e_k + ((m+2)/m) c_2 e_k^2 + O(e_k^3)],

where c_j = (m!/(m+j)!) f^(m+j)(α)/f^(m)(α), j = 1, 2, 3, ... By using these expressions, we have

g(x_k) = f(x_k)/f'(x_k) = (e_k/m) [1 − (c_1/m) e_k + O(e_k^2)].

In a similar way, the divided differences of g appearing in (2) can be expanded in terms of e_{k−2}, e_{k−1} and e_k, where O_3(e_{k−2}, e_k) denotes that the neglected terms of the corresponding expansion have products of powers of e_{k−2} and e_k whose exponents sum at least 3. Therefore, from expression (2), the error equation

e_{k+1} = C e_{k−2} e_{k−1} e_k + O_3(e_{k−2}, e_{k−1}, e_k)

is obtained, where C depends on m and on the coefficients c_j, and then the order of convergence is the only real root of the polynomial p^3 − p^2 − p − 1, that is, p ≈ 1.83929, by applying Theorem 1 with t_0 = t_1 = t_2 = 1. So, the proof is finished.
The main advantage of this scheme is its ability to find simple, as well as multiple, roots of a nonlinear function without the knowledge of the multiplicity, with better efficiency than SM1. Indeed, by using Ostrowski's efficiency index [13], I_gTM = 1.839^{1/2} ≈ 1.356 > I_SM1 = 2^{1/3} ≈ 1.26, where each index I is calculated as p^{1/d}, with p being the order of convergence of the method and d the number of new functional evaluations per iteration.
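The comparison of efficiency indices reduces to a one-line computation per method (values as used above; SM2 is included for completeness):

```python
# Ostrowski's efficiency index I = p**(1/d): p = order of convergence,
# d = number of new functional evaluations per iteration.
I_gTM = 1.83929 ** (1 / 2)   # gTM: p ~ 1.839, d = 2 (f and f')
I_SM1 = 2.0 ** (1 / 3)       # SM1: p = 2, d = 3 (f, f', f'')
I_SM2 = 2.0 ** (1 / 2)       # SM2: p = 2, d = 2, but m must be known
```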
In the next section, a dynamical analysis is made on this scheme, in order to show its qualitative performance on simple and multiple roots.As it is an iterative method with memory, multidimensional real dynamics must be used.

Qualitative Study of the Proposed Iterative Methods with Memory for Multiple Roots
Let us remark that our method uses three previous iterates in order to generate the next one; therefore, it can be expressed in general as x_{k+1} = Υ(x_{k−2}, x_{k−1}, x_k), k = 0, 1, ..., where x_0, x_{−1} and x_{−2} are the initial estimations. By means of the procedure defined in Reference [14], this method can be described as a discrete real multidimensional dynamical system, and its qualitative behavior can be analyzed.
A key element of the qualitative study of the dynamical system is the characterization of its fixed points in terms of their stability. In order to calculate the fixed points of Υ, an auxiliary vectorial function M : R^3 → R^3 can be defined, related to Υ by means of

M(w, z, x) = (z, x, Υ(w, z, x)),

where w = x_{k−2}, z = x_{k−1} and x = x_k. Fixed points (w, z, x) of M satisfy w = z = x and x = Υ(w, z, x). In the following, we define some basic dynamical concepts as a direct extension of those used in complex discrete dynamics (see Reference [15]).
Let us consider M : R^3 → R^3, a vectorial rational function obtained by the application of an iterative method on a scalar polynomial p(x). If a fixed point (w, z, x) of operator M is different from (r, r, r), where r is a zero of p(x), then the fixed point is called strange. Moreover, the orbit of a point x* ∈ R^3 is the set of successive images of x* by the vector function, that is, O(x*) = {x*, M(x*), ..., M^n(x*), ...}. Finally, a point x* ∈ R^3 is called periodic with period p if M^p(x*) = x* and M^q(x*) ≠ x* for q = 1, 2, ..., p − 1. We should notice that a fixed point is a 1-periodic point.
It is also known that the qualitative performance of a fixed point of M is classified in terms of its asymptotical behavior, which can be analyzed by means of the Jacobian matrix M', as stated in the next result (see, for instance, Reference [16]). Moreover, if there exist an eigenvalue λ_i of the Jacobian matrix M' evaluated at a fixed point x* satisfying |λ_i| < 1 and another one λ_j such that |λ_j| > 1, then x* is called a saddle fixed point. As an extension of the concept in one-dimensional dynamics, if the eigenvalues of M'(x*) satisfy λ_j = 0 for all j = 1, 2, ..., m, then the fixed point x* is not only attracting but superattracting; in that case, the method has quadratic convergence, at least on the class of nonlinear functions that give rise to the rational function (see Reference [12]).
By considering x* an attracting fixed point of M, its basin of attraction A(x*) is defined as the set of preimages of any order, A(x*) = {x ∈ R^3 : M^n(x) → x* when n → ∞}. The qualitative performance of different iterative schemes designed for solving nonlinear equations with multiple roots has been studied by different authors (see, for example, References [17-19]). It has been carried out by using discrete complex dynamics, as all these schemes are without memory. In these studies, it has been observed that, when an iterative method (without memory) designed for finding multiple roots acts on a nonlinear function with both simple and multiple roots, it is quite usual that the basins of attraction of the simple roots are narrower than those of the multiple roots. Indeed, it may happen that those simple roots define fixed points of the rational function that are repulsive, so that the iterative method is able to find only the multiple roots.
The following qualitative analysis is made on p(x) = (x + 1)(x − 1) m , m ≥ 1, so that the capability of the scheme to find both simple and multiple roots (with multiplicity m) is tested.
Theorem 4. Let TM be the multidimensional rational operator associated with method gTM when it is applied on the polynomial p(x) = (x + 1)(x − 1)^m. Then, TM has only one strange fixed point, with all its components equal to (1 − m)/(1 + m), and it is a saddle point. Moreover, both fixed points corresponding to the roots of p(x) are superattracting.
Proof. By definition of the multidimensional dynamical system, TM(w, z, x) = (z, x, gTM(w, z, x)). In order to calculate the fixed points of TM, the equation TM(w, z, x) = (w, z, x) must be solved. By means of algebraic manipulations, it is reduced to w = z = x together with a polynomial equation whose solutions are the roots of p(x) and x = (1 − m)/(1 + m). So, the only fixed points are the roots of p(x) and the strange fixed point w = z = x = (1 − m)/(1 + m), which depends on the multiplicity of the root. In order to analyze the stability of these fixed points, we calculate the Jacobian matrix TM'(w, z, x). The eigenvalues of TM'(w, z, x), when (w, z, x) = (1, 1, 1) or (w, z, x) = (−1, −1, −1), are all equal to zero; then, these are superattracting fixed points.
Regarding the strange fixed point ((1 − m)/(1 + m), (1 − m)/(1 + m), (1 − m)/(1 + m)), in order to avoid an indetermination, it is necessary to calculate a simplified rational operator by forcing w = z = x; the reduced Jacobian matrix then has two zero eigenvalues, and the third one is 2 > 1. So, the strange fixed point is always a saddle point and, therefore, lies on the boundary of the basins of attraction.
A very useful tool to visualize the analytical results is the dynamical plane of the system, composed of the set of the different basins of attraction. Here, the dynamical plane of the proposed method gTM is built by calculating the orbit of a mesh of 800 × 800 starting points (z, x) for a fixed value of w in the starting grid. As the iterative scheme needs to be started with three initial estimations, we generate a mesh of dynamical planes, each one of them with a fixed value of w in the interval [−1.75, 1.75]. In these phase portraits, each point of the mesh is painted in a different color (orange or green in this case), depending on the attractor it converges to (marked as a white star), with a tolerance of 10^{−3}. In addition, points appear in black if their orbit has not reached any attracting fixed point in a maximum of 500 iterations. As the fixed value of w changes along a vector of values belonging to [−1.75, 1.75], this yields a composition of figures for each multiplicity, giving rise to a kind of contour plot.
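The construction just described can be mimicked, without plotting, by the following sketch: it iterates a small grid of (z, x) starting points with a gTM-style iteration on p(x) = (x + 1)(x − 1)^2 for a fixed w and labels each point with the root it approaches. The grid size, tolerance, and concrete iteration are our own choices; the paper uses an 800 × 800 mesh and colored dynamical planes instead.

```python
def basin_grid(n=21, lim=1.75, w=0.0, max_iter=500, tol=1e-3):
    """Plot-free miniature of a dynamical plane: label each (z, x) grid
    point by the root of p(x) = (x+1)(x-1)**2 it converges to under a
    Traub-with-memory iteration on g = p/p', or None if it does not
    converge (the black points of the figures)."""
    p  = lambda x: (x + 1) * (x - 1)**2
    dp = lambda x: (x - 1)**2 + 2 * (x + 1) * (x - 1)
    g  = lambda x: p(x) / dp(x)          # Schroeder quotient g = p/p'
    roots = (-1.0, 1.0)
    labels = {}
    step = 2 * lim / (n - 1)
    for i in range(n):
        for j in range(n):
            z, x = -lim + i * step, -lim + j * step
            try:
                xs = [w, z, x]
                gs = [g(v) for v in xs]
                for _ in range(max_iter):
                    a, b, c = xs
                    ga, gb, gc = gs
                    denom = ((gc - gb) / (c - b) + (gc - ga) / (c - a)
                             - (gb - ga) / (b - a))
                    x_new = c - gc / denom
                    xs, gs = [b, c, x_new], [gb, gc, g(x_new)]
                    hit = [r for r in roots if abs(x_new - r) < tol]
                    if hit:
                        labels[(z, x)] = hit[0]
                        break
                else:
                    labels[(z, x)] = None     # no convergence
            except (ZeroDivisionError, OverflowError):
                labels[(z, x)] = None         # degenerate starting triple
    return labels

planes = basin_grid()
```

In a full implementation, the dictionary would be rendered as an image, one color per root, repeated for each value of w.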
In Figure 1, we show the performance of the gTM scheme on p(x), that is, of the rational operator TM, for simple roots. Observing the behavior of the different plots, with the three first iterations each varying in [−2, 2], stable behavior is noticed. The basins of attraction of the roots are the only ones; they are wide, and the only different performance (better than the others in terms of the simplicity of the boundary between the basins) is the case w = 0, where the rational function is simplified. In all cases, it is observed that the only possible behavior of method gTM is convergence to the roots. On the other hand, in Figure 2, we show a very similar performance when one of the roots is double and the other one is simple. The basins of attraction are equally wide, and the behavior remains very similar when other multiplicities are explored. Also in this case, it can be seen that there is only convergence to the roots, as darker areas correspond only to slower convergence, due to the higher complexity of the boundary of the basins of attraction. In the next section, the numerical and dynamical performance of our proposed scheme is tested on several nonlinear functions of increasing complexity.

Numerical Performance and Dynamical Tests
In this section, we compare three methods: SM2 (which requires the knowledge of the multiplicity), SM1, and gTM (derived from Traub's method). The last two do not require the knowledge of the multiplicity, but they do require extra functional evaluations per iteration step (three in the case of SM1, two in the case of gTM).
The methods are compared both qualitatively, via the basins of attraction figures, and quantitatively, via several measures. The first measure is the CPU run-time to run the method on points in a 6-by-6 square centered at the origin; we divided the square by uniformly distributed horizontal and vertical lines and took all points of intersection as initial points for the iterative process. For gTM, a method with memory, we had to take two additional starting points, x_{−1} = x_0 + d and x_{−2} = x_0 + 2d, where d is the spacing of the lines. Another criterion collected by the code is the average number of iterations per point (AIPP) but, since the methods require different numbers of functional evaluations per step, we took the average number of function evaluations per point (AFPP) instead. The third criterion is the number of divergent points (DP), that is, the number of points for which the method did not converge within 40 iterations using a tolerance of 10^{−7}.
The functions used for our comparative study are: Notice that all but one are polynomials of various degrees and various multiplicities.
In Figures 3-8, we have plotted the basins of attraction of the three methods for each test function. Each figure has three sub-figures: the left-most is the Schröder method using the multiplicity (SM2), the middle one the Schröder method not requiring the knowledge of the multiplicity (SM1), and the right-most Traub's method for multiple roots (gTM). Based on Figure 3, it is clear that SM1 and SM2 have similar basins, while gTM has more lobes on the boundary between the two basins. From Figure 4, we notice that gTM is better than SM1. In the next three figures, gTM is the best, with wider basins of attraction and narrower black areas of no convergence to the roots. This performance holds even for the non-polynomial function f_5. However, in Figure 8, it can be noticed that the basins of attraction of method SM2 are wider than those of our gTM method.
We now refer to the data in Tables 1-3. The CPU run-time in seconds is given in Table 1. It is clear that SM2 is consistently faster than the others. If the multiplicity is not known, then gTM is faster than SM1, except for the first example; on average, gTM is faster than SM1. The average number of function evaluations per point (see Table 2) is the highest for SM1 in all examples. Note that the last example is the hardest for all methods. The number of divergent points (Table 3) is the lowest for gTM for examples 1, 3, and 4. SM1 has the most divergent points for the first 6 examples but, on the last example, gTM performed poorly and became third overall. The method SM2 was the best, on average, in the three categories, followed by gTM in two of them.

Conclusions
A new iterative scheme with memory, with the ability to find both simple and multiple roots (without the need of knowing the multiplicity), has been constructed. It is, as far as we know, the first method with these properties in the literature. Its order of convergence has been proven to be approximately 1.84, with two new functional evaluations per iteration; this makes the scheme more efficient than the Schröder scheme without memory, SM1, which has similar properties. Using multidimensional real discrete dynamics and low-degree polynomials with simple and multiple roots, the stability of the proposed scheme has been analyzed, showing wide areas of convergence to both kinds of roots.
In the last section, running the Schröder and gTM methods on several examples has allowed us to conclude that, if the multiplicity is known in advance, then SM1 and gTM cannot compete with SM2, even though gTM is better than SM1. However, when the multiplicity is not known, the proposed gTM method shows very good performance and better efficiency than SM1, in terms of execution time, computational cost, and wideness of the basins of attraction.

Theorem 3.
Let M : R^m → R^m be of class C^2, and assume that x* is a k-periodic point. Let λ_1, λ_2, ..., λ_m be the eigenvalues of the Jacobian matrix M'(x*) at the periodic point x*. Then, it holds that: (a) if all the eigenvalues λ_j verify |λ_j| < 1, then x* is attracting; (b) if one eigenvalue λ_{j0} verifies |λ_{j0}| > 1, then x* is unstable, that is, repelling or saddle; (c) if all the eigenvalues λ_j verify |λ_j| > 1, then x* is repelling.

Figure 1 .
Figure 1. Dynamical planes of the TM rational operator on p(x), for m = 1.

Table 1 .
CPU run-time (sec) for each method on the test functions.

Table 2 .
Average number of function-evaluations per point (AFPP) for each method on the test functions.

Table 3 .
Number of divergent points (DP) for each method on the test functions.