On Derivative Free Multiple-Root Finders with Optimal Fourth Order Convergence

A number of optimal-order multiple-root techniques that require derivative evaluations in their formulas have been proposed in the literature. However, derivative-free optimal techniques for multiple roots are seldom obtained. Motivated by this fact, we present a class of optimal fourth order methods for computing multiple roots without using derivatives in the iteration. The methodology is based on two steps, of which the first is the well-known Traub–Steffensen iteration and the second is a Traub–Steffensen-like iteration. Effectiveness is validated on different problems, which show the robust convergent behavior of the proposed methods. It is shown that the new derivative-free methods are good competitors to their existing counterparts that need derivative information.

A number of modified methods, with or without the base of Newton's method, have been elaborated and analyzed in the literature; see [2][3][4][5][6][7][8][9][10][11][12][13][14]. These methods use derivatives of either first order or both first and second order in the iterative scheme. In contrast, higher order methods that compute multiple roots without derivatives are yet to be examined. Such methods are very useful in problems where the derivative ψ′ is cumbersome to evaluate or costly to compute. The derivative-free counterpart of the classical Newton method (1) is the Traub–Steffensen method [15]. It replaces the derivative ψ′ in the Newton method (1) with the approximation ψ′(u_k) ≈ ψ[v_k, u_k], where v_k = u_k + βψ(u_k) and ψ[v_k, u_k] = (ψ(v_k) − ψ(u_k))/(v_k − u_k) is a first order divided difference. Thereby, the method (1) takes the form of the Traub–Steffensen scheme u_{k+1} = u_k − µ ψ(u_k)/ψ[v_k, u_k]. The Traub–Steffensen method (2) is a prominent improvement of the Newton method, since it maintains quadratic convergence without using any derivative.
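As a minimal illustration, the Traub–Steffensen step for a root of multiplicity µ can be sketched as follows. This is our own hedged Python sketch, not code from the paper: the function name, tolerances and safeguards are our choices, and the multiplicity-µ step u_{k+1} = u_k − µψ(u_k)/ψ[v_k, u_k] is assumed from the modified Newton form.

```python
def traub_steffensen(psi, u0, mu=1, beta=0.01, tol=1e-12, maxit=100):
    """Derivative-free Traub-Steffensen iteration for a root of multiplicity mu.

    The derivative in the (modified) Newton step is replaced by the
    first order divided difference psi[v_k, u_k], where v_k = u_k + beta*psi(u_k).
    """
    u = u0
    for _ in range(maxit):
        fu = psi(u)
        if fu == 0:                      # already at a root
            return u
        v = u + beta * fu                # auxiliary point v_k
        if v == u:                       # step below floating-point resolution
            return u
        dd = (psi(v) - fu) / (v - u)     # divided difference psi[v_k, u_k]
        u_new = u - mu * fu / dd         # Traub-Steffensen step
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    return u
```

For a double root, e.g. ψ(u) = (u − 2)², the iteration with µ = 2 converges rapidly from u₀ = 2.5, without any derivative evaluation.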
Unlike Newton-like methods, Traub–Steffensen-like methods are difficult to construct. Recently, a family of two-step Traub–Steffensen-like methods with fourth order convergence was proposed in [16]. In terms of computational cost, the methods of [16] use three function evaluations per iteration and thus possess optimal fourth order convergence according to the Kung–Traub conjecture (see [17]). This hypothesis states that multi-point methods without memory requiring m functional evaluations can attain convergence order 2^{m−1}, called the optimal order; such methods are usually known as optimal methods. Our aim in this work is to develop derivative-free multiple-root methods of good computational efficiency, that is, methods of higher convergence order requiring as little computational work as possible. Consequently, we introduce a class of Traub–Steffensen-like derivative-free fourth order methods that require three new evaluations of the function ψ per iteration and therefore have optimal fourth order convergence according to the Kung–Traub conjecture. The iterative formula consists of two steps, with the Traub–Steffensen iteration (2) in the first step and a Traub–Steffensen-like iteration in the second step. Performance is tested numerically on many problems of different kinds. Moreover, comparison with existing modified Newton-like methods verifies the robust and efficient nature of the proposed methods.
We summarize the contents of the paper. In Section 2, the fourth order iteration scheme is formulated and its convergence order is studied separately for different cases. The main result, showing the unification of the different cases, is established in Section 3. Section 4 contains the basins of attraction drawn to assess the convergence domains of the new methods. In Section 5, numerical experiments are performed on different problems to demonstrate the accuracy and efficiency of the methods. Concluding remarks are reported in Section 6.

Development of a Novel Scheme
Researchers have used different approaches to develop higher order iterative methods for solving nonlinear equations, among them the interpolation, sampling, composition, geometrical, Adomian, and weight-function approaches. Of these, the weight-function approach has been the most popular in recent times; see, for example, Refs. [10,13,14,18,19] and references therein. Using this approach, we consider a two-step iterative scheme for finding a multiple root with multiplicity µ ≥ 2, in which the weight function G : C → C is analytic in a neighborhood of the origin. The iterative scheme is weighted by the factor G(h), hence the name weight-factor or weight-function technique.
Note that x_k and y_k are one-to-µ multi-valued functions, so we consider their principal analytic branches [18]; hence, it is convenient to treat them as the principal root. For example, in the case of x_k, the principal root is obtained by taking the principal branch of the argument, with −π < Arg(p) ≤ π for p ∈ C; this convention agrees with that of the Log[p] command of Mathematica [20], to be employed later in the sections on basins of attraction and numerical experiments. The case of y_k is treated similarly.
In the sequel, we prove the fourth order convergence of the proposed iterative scheme (3). For simplicity, the results are obtained separately for cases depending upon the multiplicity µ. First, we consider the case µ = 2. Theorem 1. Assume that u = α is a zero with multiplicity µ = 2 of the function ψ(u), where ψ : C → C is sufficiently differentiable in a domain containing α. Suppose that the initial point u_0 is sufficiently close to α; then, the order of convergence of the scheme (3) is at least four, provided that the weight function G(h) satisfies the conditions G(0) = 0, G′(0) = 1, G″(0) = 6 and |G‴(0)| < ∞.
Proof. Assume that ε_k = u_k − α is the error at the k-th stage. Expanding ψ(u_k) about α using the Taylor series, keeping in mind that ψ(α) = 0, ψ′(α) = 0 and ψ″(α) ≠ 0, we have ψ(u_k) = (ψ″(α)/2!) ε_k² (1 + F_1 ε_k + F_2 ε_k² + F_3 ε_k³ + ⋯), where F_m = (2!/(2+m)!) ψ^{(2+m)}(α)/ψ″(α) for m ∈ ℕ.
Similarly, the Taylor series expansion of ψ(v_k) about α is obtained. By using (4) and (5) in the first step of (3), we obtain the error in the first-step iterate. Then, using (4), (5) and (7), we further obtain the expansions in (8) and (9), and using (8) we obtain the expansion of h. The Taylor expansion of the weight function G(h) in a neighborhood of the origin up to third-order terms is given by G(h) = G(0) + G′(0)h + (G″(0)/2!)h² + (G‴(0)/3!)h³. Using (4)–(11) in the last step of (3), we arrive at the error equation; the expressions of φ_1 and φ_2, being very lengthy, have not been produced explicitly.
We can obtain at least fourth order convergence if we set the coefficients of ε_k, ε_k² and ε_k³ simultaneously equal to zero. Some simple calculations then yield the conditions (13). Using (13) in (12), we obtain the final error equation. Thus, the theorem is proved.
Next, we prove the following theorem for the case µ = 3.
Proof. Taking into account that ψ(α) = 0, ψ′(α) = 0, ψ″(α) = 0 and ψ‴(α) ≠ 0, we obtain the Taylor expansions (15) and (16), where F_m for m ∈ ℕ is defined analogously. Then, using (15) and (16) in the first step of (3), we obtain the error in the first-step iterate. From (15), (16), and (18), the subsequent expansions follow, and using (19) we obtain the expansion of h. Developing the weight function G(h) about the origin by the Taylor series expansion and then using (15)–(22) in the last step of (3), we arrive at the error equation (23). To obtain fourth order convergence, it is sufficient to set the coefficients of ε_k, ε_k², and ε_k³ simultaneously equal to zero. This process yields the stated conditions on G, and the error equation (23) then takes its final form. Hence, the result is proved.

Remark 1.
We can observe from the above results that three conditions on G(h) are required in each of the cases µ = 2, 3 to attain fourth order convergence of the method (3). These cases also satisfy the common conditions G(0) = 0, G′(0) = µ/2, G″(0) = 3µ. Their error equations also contain a term involving the parameter β. However, for the cases µ ≥ 4, the error equation does not contain a β term. We shall prove this fact in the next section.

Main Result
We shall prove the convergence order of scheme (3) for multiplicity µ ≥ 4 by the following theorem: Theorem 3. Under the assumptions of Theorem 1, the convergence order of scheme (3) for µ ≥ 4 is at least four, provided that G(0) = 0, G′(0) = µ/2, G″(0) = 3µ and |G‴(0)| < ∞. Moreover, the error in the scheme is given by the error equation stated in the proof, where F_m for m ∈ ℕ is defined analogously.
Proof. Expansion of ψ(z_k) around α yields (29). Using (26), (27) and (29), we obtain the subsequent expansions, and using (30) we obtain the expansion of h. Developing the weight function G(h) about the origin by the Taylor series expansion and using (26)–(33) in the last step of (3), we get the error equation, where χ_n = χ_n(β, F_1, F_2, F_3, G(0), G′(0), G″(0), G‴(0)) when µ = 4, 5 and χ_n = χ_n(F_1, F_2, F_3, G(0), G′(0), G″(0), G‴(0)) when µ ≥ 6. Fourth order convergence can be attained if we set the coefficients of ε_k, ε_k² and ε_k³ simultaneously equal to zero. The resulting equations yield the required conditions on G. As a result, the error equation takes the form (36). This proves the result.

Remark 2.
The proposed scheme (3) achieves fourth-order convergence under the conditions on the weight function G(h) shown in Theorems 1–3. This convergence rate is attained by using only three functional evaluations per iteration, viz. ψ(u_k), ψ(v_k) and ψ(z_k).

Remark 3.
Note that the parameter β, which is used in v_k, appears only in the error equations of the cases µ = 2, 3 but not for µ ≥ 4 (see Equation (36)). However, for µ ≥ 4, we have observed that this parameter appears in the terms of order ε_k⁵ and higher. Such terms are difficult to compute in general; however, we do not need them in order to establish the required fourth order of convergence. Note also that Theorems 1–3 are presented to show the difference in the error expressions. Nevertheless, the weight function G(h) satisfies the common conditions G(0) = 0, G′(0) = µ/2, G″(0) = 3µ for every µ ≥ 2.

Some Special Cases
Based on the various forms of the function G(h) that satisfy the conditions of Theorem 3, numerous special cases of the family (3) can be explored. The following are some simple forms, with the corresponding methods expressed as Method 1 (M1), Method 2 (M2), Method 3 (M3) and Method 4 (M4). Note that, in all of the above cases, z_k has the same form, given by the Traub–Steffensen step (2).
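The specific forms of G(h) used in M1–M4 are not reproduced here, but any candidate weight function can be checked numerically against the common conditions G(0) = 0, G′(0) = µ/2, G″(0) = 3µ of Theorem 3. A minimal sketch, using a hypothetical polynomial weight function of our own construction (not one of the paper's special cases):

```python
# Hypothetical weight function satisfying the conditions of Theorem 3:
# G(0) = 0, G'(0) = mu/2, G''(0) = 3*mu, with bounded third derivative.
mu = 4
G = lambda h: (mu / 2) * h + (3 * mu / 2) * h**2

# Verify the conditions by central finite differences at the origin.
h = 1e-5
G0 = G(0.0)
G1 = (G(h) - G(-h)) / (2 * h)            # approximates G'(0)
G2 = (G(h) - 2 * G(0.0) + G(-h)) / h**2  # approximates G''(0)

print(G0, G1, G2)
```

Any G passing this check (for the chosen µ) generates a member of the family (3) with the fourth order convergence proved above.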

Basins of Attraction
In this section, we study the complex geometry of the methods considered above with a tool, namely the basin of attraction, by applying the methods to some complex polynomials ψ(z). The basin of attraction of a root is an important geometrical tool for comparing the convergence regions of iterative methods [21][22][23]. To start with, let us recall some basic ideas concerning this graphical tool.
Let R : C → C be a rational mapping on the Riemann sphere. We define the orbit of a point z_0 ∈ C as the set {z_0, R(z_0), R²(z_0), …, Rⁿ(z_0), …}. A point z_0 ∈ C is a fixed point of the rational function R if it satisfies the equation R(z_0) = z_0. A point z_0 is said to be periodic with period m > 1 if R^m(z_0) = z_0, where m is the smallest such integer. A fixed point z_0 is called attracting if |R′(z_0)| < 1, repelling if |R′(z_0)| > 1, neutral if |R′(z_0)| = 1 and superattracting if |R′(z_0)| = 0. Assume that z*_ψ is an attracting fixed point of the rational map R. Then, the basin of attraction of z*_ψ is defined as the set of initial points whose orbits tend to z*_ψ. The union of points whose orbits tend to an attracting fixed point is called the Fatou set. The complementary set, called the Julia set, is the closure of the set of repelling fixed points and establishes the boundaries between the basins of the roots. Attraction basins allow us to assess which starting points converge to a given root of a polynomial when we apply an iterative method, so we can visualize which points are good choices as starting points and which are not.
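The classification above can be computed directly; the following small helper (our own illustration, not code from the paper) estimates |R′(z₀)| at a fixed point by a central difference:

```python
def classify_fixed_point(R, z0, h=1e-6):
    """Classify a fixed point z0 of the map R by the magnitude of R'(z0)."""
    d = abs((R(z0 + h) - R(z0 - h)) / (2 * h))  # |R'(z0)| via central difference
    if d == 0:
        return "superattracting"
    if d < 1:
        return "attracting"
    if d > 1:
        return "repelling"
    return "neutral"
```

For example, for R(z) = z² the fixed point 0 is superattracting (R′(0) = 0), while the fixed point 1 is repelling (R′(1) = 2).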
We select z_0 as an initial point belonging to D, where D is a rectangular region in C containing all the roots of the equation ψ(z) = 0. An iterative method starting from a point z_0 ∈ D may converge to a zero of the function ψ(z) or may diverge. To assess the basins, we use a tolerance of 10⁻³ as the stopping criterion for convergence, restricted to 25 iterations. If this tolerance is not achieved within the allowed iterations, the procedure is terminated and the iteration started from z_0 is declared divergent. While drawing the basins, the following criterion is adopted: a color is allotted to every initial guess z_0 in the attraction basin of a zero. If the iterative formula beginning at the point z_0 converges, then z_0 is painted with the color assigned to that basin of attraction; if the formula fails to converge within the allowed number of iterations, then z_0 is painted black.
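The drawing procedure can be sketched as follows. For brevity, this sketch uses only the first-step Traub–Steffensen iteration (2) as a stand-in for the full two-step method (whose second step is not reproduced here), applied to ψ₁(z) = (z² + z + 1)² of Test problem 1 below; the tolerance 10⁻³ and the cap of 25 iterations follow the text:

```python
def basin_index(z0, roots, mu=2, beta=0.01, tol=1e-3, maxit=25):
    """Return the index of the root approached by the iteration started at z0
    within tol, or -1 (painted black) on non-convergence."""
    psi = lambda z: (z * z + z + 1) ** 2   # psi_1(z): both roots have mu = 2
    z = z0
    for _ in range(maxit):
        fz = psi(z)
        if fz == 0:
            break
        w = z + beta * fz                  # auxiliary point
        dd = (psi(w) - fz) / (w - z)       # divided difference psi[w, z]
        z = z - mu * fz / dd               # first-step (Traub-Steffensen) iterate
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i                   # converged to root i
    return -1

roots = [complex(-0.5, -0.866025), complex(-0.5, 0.866025)]
```

Scanning z₀ over a grid covering the rectangle D and coloring each point by its returned index reproduces pictures of the kind shown in Figures 1–3.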
To view the complex dynamics, the proposed methods are applied to the following three problems: Test problem 1. Consider the polynomial ψ_1(z) = (z² + z + 1)² having the two zeros {−0.5 − 0.866025i, −0.5 + 0.866025i} with multiplicity µ = 2. The attraction basins for this polynomial are shown in Figures 1–3, corresponding to the choices 0.01, 10⁻⁴ and 10⁻⁶ of the parameter β. A color is assigned to each basin of attraction of a zero; in particular, red and green have been allocated to the basins of attraction of the zeros −0.5 − 0.866025i and −0.5 + 0.866025i, respectively.
Estimation of β values plays an important role in selecting those members of the family (3) which possess good convergence behavior; this is also the reason why different values of β have been chosen to assess the basins. The above graphics clearly indicate that the basins become wider for smaller values of the parameter β. Moreover, the black zones (which indicate divergence) also diminish as β assumes smaller values. Thus, we conclude this section with the remark that the convergence of the proposed methods is better for smaller values of the parameter β.

For comparison, the following existing fourth order methods are considered: the Sharma–Sharma method (SSM), the Zhou–Chen–Song method (ZCSM), the Soleymani–Babajee–Lotfi method (SBLM) and the Kansal–Kanwar–Bhatia method (KKBM), where p = µ/(µ + 2). Computations are performed in the programming package Mathematica [20] on a PC with the following specifications: Intel(R) Pentium(R) CPU B960 @ 2.20 GHz (32-bit operating system), Microsoft Windows 7 Professional and 4 GB RAM. Numerical tests are performed by choosing the value −0.01 for the parameter β in the new methods. The tabulated results of the methods displayed in Table 1 include: (i) the number of iterations (k) required to obtain the desired solution satisfying the condition |u_{k+1} − u_k| + |ψ(u_k)| < 10⁻¹⁰⁰, (ii) the estimated error |u_{k+1} − u_k| in the first three consecutive iterations, (iii) the calculated convergence order (CCO), and (iv) the time consumed (CPU time in seconds) in the execution of a program, measured by the command "TimeUsed[ ]". The calculated convergence order is computed by the well-known formula (see [24]) CCO = ln(|u_{k+1} − u_k| / |u_k − u_{k−1}|) / ln(|u_k − u_{k−1}| / |u_{k−1} − u_{k−2}|). The problems considered for numerical testing are shown in Table 2. From the computed results in Table 1, we can observe the good convergence behavior of the proposed methods. The reason for the good convergence is the increase in the accuracy of the successive approximations, as is evident from the values of the differences |u_{k+1} − u_k|; this also indicates the stable nature of the methods. Moreover, the approximations to the solutions computed by the proposed methods have accuracy greater than or equal to that of the approximations computed by the existing counterparts. A value of 0 for |u_{k+1} − u_k| indicates that the stopping criterion |u_{k+1} − u_k| + |ψ(u_k)| < 10⁻¹⁰⁰ has been satisfied at this stage. The calculated convergence order, shown in the second-to-last column of each table, verifies the theoretical fourth order of convergence. The robustness of the new algorithms can also be judged from the fact that their CPU time is less than that of the existing techniques.
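The CCO formula can be evaluated directly from three consecutive differences of iterates; a minimal sketch (the fourth order error sequence below is synthetic, for illustration only):

```python
from math import log

def cco(u):
    """Calculated convergence order from the last four iterates u[-4:]."""
    d1 = abs(u[-3] - u[-4])
    d2 = abs(u[-2] - u[-3])
    d3 = abs(u[-1] - u[-2])
    return log(d3 / d2) / log(d2 / d1)

# Synthetic iterates mimicking fourth order convergence to the root alpha = 0,
# with errors e_{k+1} ~ e_k^4 (so the errors equal the iterates themselves).
iterates = [1e-1, 1e-4, 1e-16, 1e-64]
```

Applied to a genuinely fourth order sequence such as the one above, the formula returns a value close to 4.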
This conclusion is also confirmed by similar numerical experiments on many other problems of different kinds.

Conclusions
We have proposed a family of fourth order derivative-free numerical methods for obtaining multiple roots of nonlinear equations. The convergence analysis has been carried out under standard assumptions and proves convergence of order four. The important feature of the designed scheme is its optimal order of convergence, which is rare to achieve in derivative-free methods. Some special cases of the family have been explored and employed to solve nonlinear equations. The performance has been compared with existing techniques of a similar nature. The numerical results have shown the presented derivative-free methods to be good competitors to the already established optimal fourth order techniques that use derivative information in the algorithm. We conclude with the remark that the proposed derivative-free methods can be a better alternative to existing Newton-type methods when derivatives are costly to evaluate.