On a Class of Optimal Fourth Order Multiple Root Solvers without Using Derivatives

Abstract: Many optimal-order multiple-root techniques involving derivatives have been proposed in the literature. On the contrary, optimal-order multiple-root techniques without derivatives are almost nonexistent. With this as a motivational factor, here we develop a family of optimal fourth-order derivative-free iterative schemes for computing multiple roots. The procedure is based on two steps, of which the first is the Traub–Steffensen iteration and the second is a Traub–Steffensen-like iteration. The theoretical results proved for particular cases of the family are symmetric to each other. This feature leads us to prove a general result that establishes the fourth-order convergence. Efficacy is demonstrated on different test problems, which verifies the efficient convergence behavior of the new methods. Moreover, the comparison of performance shows that the presented derivative-free techniques are good competitors to the existing optimal fourth-order methods that use derivatives.

In such methods, one is required to determine derivatives of either the first order or of both the first and second order. Contrary to this, higher-order derivative-free methods to compute multiple roots are yet to be investigated. These methods are important in problems where the derivative f′ is complicated to evaluate or expensive to compute. The basic derivative-free method is the Traub–Steffensen method [16], which replaces the derivative f′ in the classical Newton method with the first-order divided difference f[s_k, t_k] = (f(s_k) − f(t_k))/(s_k − t_k), where s_k = t_k + β f(t_k) and β is a nonzero free parameter. In this way, the modified Newton method in Equation (1), t_{k+1} = t_k − m f(t_k)/f′(t_k), transforms into the modified Traub–Steffensen derivative-free method t_{k+1} = t_k − m f(t_k)/f[s_k, t_k]. (2) The modified Traub–Steffensen method in Equation (2) is a noticeable improvement over the Newton method, because it preserves the convergence of order two without using any derivative.
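As a concrete illustration, the modified Traub–Steffensen iteration in Equation (2) can be sketched in a few lines of Python. The guards against degenerate divided differences, the tolerances, and the sample function are our own choices for this sketch, not part of the method itself; the default β = 0.01 matches the value used later in the numerical section.

```python
def traub_steffensen(f, t0, m=1, beta=0.01, tol=1e-10, max_iter=100):
    """Modified Traub-Steffensen method (Equation (2)):
    t_{k+1} = t_k - m*f(t_k)/f[s_k, t_k], with s_k = t_k + beta*f(t_k)."""
    t = t0
    for _ in range(max_iter):
        ft = f(t)
        s = t + beta * ft
        if s == t:  # step fell below floating-point resolution: t is a numerical root
            return t
        dd = (f(s) - ft) / (s - t)  # first-order divided difference f[s_k, t_k]
        if dd == 0:
            return t
        t_new = t - m * ft / dd
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Double root t = 2 of f(t) = (t - 2)^2 (t + 1), so m = 2:
root = traub_steffensen(lambda t: (t - 2)**2 * (t + 1), t0=2.5, m=2)
```

Even for the double root, the iteration keeps its second-order convergence while evaluating only f, never f′.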
In this work, we aim to design derivative-free multiple-root methods of high computational efficiency, i.e., methods of higher convergence order that use as few function evaluations as possible. Proceeding in this way, we introduce a class of derivative-free fourth-order methods that require three new pieces of information of the function f per iteration, and hence possess optimal fourth-order convergence in the sense of the Kung–Traub conjecture [17]. This conjecture states that multi-point iterative functions without memory based on n function evaluations may attain at most the convergence order 2^{n−1}. The methods achieving this convergence order are usually called optimal methods. The new iterative scheme uses the modified Traub–Steffensen iteration in Equation (2) in the first step and a Traub–Steffensen-like iteration in the second step. The methods are examined numerically on many practical problems of different kinds. The comparison of performance with existing techniques requiring derivative evaluations verifies the efficient character of the new methods in terms of accuracy and consumed CPU time.
The rest of the paper is organized as follows. In Section 2, the fourth-order scheme is proposed and its convergence order is studied for particular cases. The main result for the general case is established in Section 3. Numerical tests to demonstrate the applicability and efficiency of the methods are presented in Section 4; a comparison of performance with already established methods is also shown there. In Section 5, conclusions are drawn.

Formulation of Method
To compute a multiple root with multiplicity m ≥ 1, consider the following two-step iterative scheme: where and H : ℂ^2 → ℂ is analytic in a neighborhood of (0, 0). Notice that this is a two-step scheme whose first step is the Traub–Steffensen iteration in Equation (2) and whose second step is a Traub–Steffensen-like iteration. The second step is weighted by the factor H(x, y); thus, we call it a weight factor or, more appropriately, a weight function.
In the sequel, we study the convergence of the proposed iterative scheme in Equation (3). For clarity, the results are obtained separately for different cases depending on the multiplicity m. First, for the case m = 1, the following theorem is proved: Theorem 1. Assume that f : ℂ → ℂ is an analytic function in a domain containing a zero (say, α) of multiplicity m = 1. Suppose that the initial point t_0 is close enough to α; then, the convergence order of Equation (3) is at least 4, provided that H_00 = 0, H_10 = 1, H_01 = 0, H_20 = 2, H_11 = 11, and H_02 = 0, where Proof. Assume that the error at the kth stage is e_k = t_k − α. Using the Taylor expansion of f(t_k) about α and keeping in mind that f(α) = 0 and f′(α) ≠ 0, we have where for n ∈ ℕ. Similarly, we have the Taylor expansion of f(s_k) about α where e_{s_k} = s_k − α = e_k + β f′(α) e_k (1 + A_1 e_k + A_2 e_k^2 + A_3 e_k^3 + A_4 e_k^4 + ⋯). Then, the first step of Equation (3) yields Expanding f(z_k) about α, it follows that Using Equations (4), (5) and (7) in x_k and y_k, after some simple calculations, we have and Developing H(x_k, y_k) by a Taylor series in the neighborhood of the origin (0, 0), Inserting Equations (4)–(10) into the second step of Equation (3), some simple calculations yield where δ = δ(β, A_1, A_2, A_3, H_00, H_10, H_01, H_20, H_11, H_02). The expression of δ is not reproduced explicitly here, since it is very lengthy.
It is clear from Equation (11) that we obtain at least fourth-order convergence if we set the coefficients of e_k, e_k^2, and e_k^3 simultaneously equal to zero. Then, solving the resulting equations, one gets As a result, the error equation is given by Thus, the theorem is proved.
Next, we state the conditions for m = 2 in the following theorem: Theorem 2. Using the hypotheses of Theorem 1, the order of convergence of the scheme in Equation (3) for m = 2 is at least 4. Proof. Assume that the error at the kth stage is e_k = t_k − α. Using the Taylor expansion of f(t_k) about α and keeping in mind that f(α) = 0, f′(α) = 0, and f″(α) ≠ 0, we have where for n ∈ ℕ. where Then, the first step of Equation (3) yields Expanding Using Equations (14), (15) and (17) in x_k and y_k, after some simple calculations, we have and Developing the weight function H(x_k, y_k) by a Taylor series in the neighborhood of the origin (0, 0), Inserting Equations (14)–(20) into the second step of Equation (3), some simple calculations yield where the expression of φ is not reproduced explicitly, since it is very lengthy. It is clear from Equation (21) that we obtain at least fourth-order convergence if we set the coefficients of e_k, e_k^2, and e_k^3 simultaneously equal to zero. Then, solving the resulting equations, one gets As a result, the error equation is given by Thus, the theorem is proved.
Below, we state the theorems (without proof) for the cases m = 3, 4, 5, as the proofs are similar to those of the theorems proved above.
for n ∈ N.
for n ∈ N.
for n ∈ N.
Remark 1. We can observe from the above results that the number of conditions on H_ij is 6, 4, 3, 3, 3 corresponding to the cases m = 1, 2, 3, 4, 5, respectively, for attaining fourth-order convergence of the method in Equation (3). The cases m = 3, 4, 5 satisfy the common conditions H_00 = 0, H_10 = m − H_01, and H_20 = 4m − H_02 − 2H_11. Nevertheless, their error equations differ from each other, as the parameter β does not appear in the equations for m = 4, 5. It has been observed that for m ≥ 4 the conditions on H_ij are always three in number and the error equation in each such case does not contain a β term. This type of symmetry in the results helps us to prove the general result, which is presented in the next section.

Main Result
For multiplicity m ≥ 4, we prove the order of convergence of the scheme in Equation (3) by the following theorem: Theorem 6. Assume that the function f : ℂ → ℂ is analytic in a domain containing a zero α of multiplicity m ≥ 4. Further, suppose that the initial estimate t_0 is close enough to α. Then, the convergence of the iteration scheme in Equation (3) is of order four, provided that H_00 = 0, H_10 = m − H_01, and H_20 = 4m − H_02 − 2H_11, wherein |H_01|, |H_11|, |H_02| < ∞. Moreover, the error in the scheme is given by Proof. Taking into account that f^(j)(α) = 0 for j = 0, 1, 2, . . . , m − 1 and f^(m)(α) ≠ 0, and developing f(t_k) about α in a Taylor series, we obtain where K_n = (m!/(m + n)!) f^(m+n)(α)/f^(m)(α) for n ∈ ℕ.
In addition, from the expansion of f(s_k) about α, it follows that where e_{s_k} = s_k − α = e_k + (β f^(m)(α)/m!) e_k^m (1 + K_1 e_k + K_2 e_k^2 + K_3 e_k^3 + K_4 e_k^4 + ⋯). From the first step of Equation (3), Expansion of f(z_k) around α yields Using Equations (23), (24) and (26) in the expressions of x_k and y_k, we have (27) and Developing H(x_k, y_k) in a Taylor series in the neighborhood of the origin (0, 0), Inserting Equations (23)–(29) into the second step of Equation (3), it follows that It is clear that we can obtain at least fourth-order convergence if the coefficients of e_k, e_k^2, and e_k^3 vanish. On solving the resulting equations, we get Then, the error of Equation (30) is given by Thus, the theorem is proved. Remark 2. The scheme in Equation (3) reaches fourth-order convergence provided that the conditions of Theorems 1–3 and 6 are satisfied. This convergence rate is achieved by using only three function evaluations, viz. f(t_k), f(s_k), and f(z_k), per iteration. Therefore, the scheme in Equation (3) is optimal by the Kung–Traub hypothesis [17].

Remark 3.
It is important to note that the parameter β, which is used in s_k, appears only in the error equations of the cases m = 1, 2, 3 but not for m ≥ 4. For m ≥ 4, we have observed that this parameter appears in the coefficients of e_k^5 and higher-order terms. However, such terms are not needed to establish the required fourth-order convergence.

Some Special Cases
We can generate many iterative schemes as special cases of the family in Equation (3), based on forms of the function H(x, y) that satisfy the conditions of Theorems 1, 2 and 6. However, we restrict ourselves to choices of low-degree polynomials or simple rational functions, such that the resulting methods converge to the root with order four for m ≥ 1. Accordingly, the following simple forms are considered: (1) Let us choose the function which satisfies the conditions of Theorems 1, 2 and 6. Then, the corresponding fourth-order iterative scheme is given by (2) Next, consider the rational function satisfying the conditions of Theorems 1, 2 and 6. Then, the corresponding fourth-order iterative scheme is given by (3) Consider another rational function satisfying the conditions of Theorems 1, 2 and 6, which is given by The corresponding fourth-order iterative scheme is given by For each of the above cases, . For future reference, the proposed methods in Equations (33)–(35) are denoted by NM1, NM2, and NM3, respectively.

Numerical Results
To validate the theoretical results proved in the previous sections, the special cases NM1, NM2, and NM3 of the new family were tested numerically by implementing them on some nonlinear equations. Moreover, they were compared with some existing optimal fourth-order methods that use derivatives in their formulas. We considered, for example, the methods by Li et al. [7,8], Sharma and Sharma [9], Zhou et al. [10], Soleymani et al. [12], and Kansal et al. [14]. The methods are expressed as follows: Li–Liao–Cheng method (LLC): Kansal–Kanwar–Bhatia method (KKB): where p = m/(m + 2). Computational work was performed in the programming package Mathematica [18] on a PC with an Intel(R) Pentium(R) CPU B960 @ 2.20 GHz (32-bit operating system), Microsoft Windows 7 Professional, and 4 GB RAM. Performance of the new methods was tested with the parameter value β = 0.01. The tabulated results obtained by the methods for each problem include: (a) the number of iterations (k) required to obtain the solution using the stopping criterion |t_{k+1} − t_k| + |f(t_k)| < 10^{−100}; (b) the estimated error |t_{k+1} − t_k| in the first three iterations; (c) the calculated convergence order (CCO); and (d) the elapsed CPU time in seconds in the execution of a program, measured by the command "TimeUsed[ ]". The calculated convergence order (CCO), used to confirm the theoretical convergence order, was computed by the formula (see [19]) The following numerical examples were chosen for experimentation: Example 1. The Planck law of radiation, used to calculate the energy density in an isothermal black body [20], is stated as where λ is the wavelength of the radiation, c is the speed of light, T is the absolute temperature of the black body, k is Boltzmann's constant, and h is Planck's constant. The problem is to determine the wavelength λ corresponding to the maximum energy density φ(λ). Differentiating Equation (37) leads to φ′(λ) = [8πchλ^{−6}/(e^{ch/(λkT)} − 1)] × [(ch/(λkT)) e^{ch/(λkT)}/(e^{ch/(λkT)} − 1) − 5] = A · B
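Since the CCO formula of [19] is not reproduced above, the following sketch shows the standard difference-based estimate computed from four consecutive iterates; the synthetic fourth-order error sequence is our own illustration, not data from the tables.

```python
import math

def cco(t0, t1, t2, t3):
    """Difference-based estimate of the computational convergence order
    from four consecutive iterates (cf. [19]):
        CCO ~ ln(|t3 - t2| / |t2 - t1|) / ln(|t2 - t1| / |t1 - t0|)."""
    return (math.log(abs(t3 - t2) / abs(t2 - t1)) /
            math.log(abs(t2 - t1) / abs(t1 - t0)))

# Synthetic iterates converging to 0 with e_{k+1} = e_k^4 (fourth order):
iterates = [0.1, 0.1**4, 0.1**16, 0.1**64]
order = cco(*iterates)  # close to 4
```

In the tables, such an estimate is expected to settle near 4 once the iterates enter the region of fast convergence.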
(say). Note that a maximum of φ occurs when B = 0, that is, when (ch/(λkT)) e^{ch/(λkT)}/(e^{ch/(λkT)} − 1) = 5.
Setting t = ch/(λkT), the above equation assumes the form 1 − t/5 = e^{−t}. (38) Define f_1(t) = e^{−t} − 1 + t/5. (39) The root t = 0 is trivial and thus is not taken up for discussion. Observe that for t = 5 the left-hand side of Equation (38) is zero, while the right-hand side is e^{−5} ≈ 6.74 × 10^{−3}. Thus, we expect another root to occur somewhere near t = 5. In fact, the desired root of Equation (39) is α ≈ 4.96511423174427630369, and the initial guess is taken as t_0 = 5.5. The wavelength of radiation (λ) corresponding to the maximum energy density is then λ ≈ ch/(αkT). The results so obtained are shown in Table 1.
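As a cross-check on Example 1, the reduced equation e^{−t} = 1 − t/5 can be solved with the basic Traub–Steffensen iteration of Equation (2) (a second-order sketch of our own, not one of the fourth-order methods NM1–NM3), using the same β = 0.01 and t_0 = 5.5 as in the experiments:

```python
import math

def f1(t):
    # Reduced Planck problem: f1(t) = exp(-t) - 1 + t/5, nontrivial root near t = 5
    return math.exp(-t) - 1.0 + t / 5.0

t, beta = 5.5, 0.01
for _ in range(100):
    ft = f1(t)
    s = t + beta * ft
    if s == t:  # divided-difference step below machine resolution
        break
    dd = (f1(s) - ft) / (s - t)
    if dd == 0:
        break
    t_new = t - ft / dd  # m = 1: the nontrivial root of f1 is simple
    if abs(t_new - t) < 1e-13:
        t = t_new
        break
    t = t_new

# t ~ 4.96511423...; Wien's displacement then gives lambda = c*h/(t*k*T)
```

A handful of iterations suffice to reach full double precision from t_0 = 5.5.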

Example 2.
Consider the van der Waals equation (see [15]), which describes the behavior of a real gas by introducing two parameters a_1 and a_2 into the ideal gas equation. Finding the volume V in terms of the remaining parameters requires solving the equation One can choose values of n, P, and T, for a given set of values of a_1 and a_2 of a particular gas, such that the equation has three roots. Using a particular set of values, we obtain the function f_2(t) = t^3 − 5.22t^2 + 9.0825t − 5.2675, which has three roots, of which one is a simple zero α = 1.72 and the other is a multiple zero α = 1.75 of multiplicity two. Our desired zero is α = 1.75. The methods were tested with the initial guess t_0 = 2.5. Computed results are given in Table 2.
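Example 2 can likewise be reproduced with the plain second-order Traub–Steffensen iteration of Equation (2) applied with m = 2 (again our own sketch, not NM1–NM3). Because the double root 1.75 lies close to the simple root 1.72, the divided difference loses accuracy in double precision, so the sketch mimics the paper's multiprecision setting with Python's decimal module:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # multiprecision keeps the divided difference meaningful

def f2(t):
    # f2(t) = t^3 - 5.22 t^2 + 9.0825 t - 5.2675 = (t - 1.75)^2 (t - 1.72)
    return (t**3 - Decimal("5.22") * t**2
            + Decimal("9.0825") * t - Decimal("5.2675"))

t, beta, m = Decimal("2.5"), Decimal("0.01"), 2  # seek the double root 1.75
for _ in range(100):
    ft = f2(t)
    s = t + beta * ft
    if s == t or ft == 0:
        break
    dd = (f2(s) - ft) / (s - t)  # divided difference f[s_k, t_k]
    if dd == 0:
        break
    t_new = t - m * ft / dd
    if abs(t_new - t) < Decimal("1e-30"):
        t = t_new
        break
    t = t_new
```

Despite the nearby simple zero, the multiplicity-aware step converges from t_0 = 2.5 to the double root 1.75 rather than to 1.72.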
Example 3. Next, we consider a standard nonlinear test function, which is defined as The function f_3 has a multiple zero at α = 1.8411027704926161 . . . of multiplicity three. We select the initial approximation t_0 = 1.6 to obtain the zero of this function. Numerical results are exhibited in Table 3.

Example 4.
Lastly, we consider another standard test function, which is defined as The function f_4 has a multiple zero at t = i of multiplicity four. We choose the initial approximation t_0 = 1.2i to obtain the zero of this function. Numerical results are displayed in Table 4. From the computed results shown in Tables 1–4, we observe good convergence behavior of the proposed methods, similar to that of the existing methods. The reason for the good convergence is the increase in accuracy of the successive approximations per iteration, as is evident from the numerical results. This also points to the stable nature of the methods. It is also clear that the approximations to the solutions obtained by the new methods have accuracies greater than or equal to those computed by the existing methods. We display the value 0 of |t_{k+1} − t_k| at the stage when the stopping criterion |t_{k+1} − t_k| + |f(t_k)| < 10^{−100} has been satisfied. From the computational order of convergence shown in the penultimate column of each table, we verify the theoretical fourth order of convergence.
The efficient nature of the presented methods is also reflected in the fact that the CPU time consumed by them is less than the time taken by the existing methods (a result confirmed by similar numerical experiments on many other problems). Applications requiring repeated evaluation of roots (such as those tackled in [21–24]) may also benefit greatly from the use of the proposed methods (NM1–NM3, Equations (33)–(35)).

Conclusions
In this paper, we have proposed a family of fourth-order derivative-free numerical methods for obtaining multiple roots of nonlinear equations. An analysis of convergence was carried out, which proved the order four under standard assumptions on the function whose zeros we seek. In addition, the designed scheme satisfies the Kung–Traub hypothesis of optimal order of convergence. Some special cases were established and employed to solve nonlinear equations, including those arising in practical problems. The new methods were compared with existing techniques of the same order. The numerical results show that the presented derivative-free methods are good competitors to the existing optimal fourth-order techniques that require derivative evaluations. We conclude with the remark that derivative-free methods are good alternatives to Newton-type schemes in cases where derivatives are expensive to compute or difficult to obtain.