An Efficient Class of Traub–Steffensen-Type Methods for Computing Multiple Zeros

Abstract: Numerous higher-order methods with derivative evaluations are available in the literature for computing multiple zeros. However, higher-order methods without derivatives are very rare for multiple zeros. Encouraged by this fact, we present a family of third-order derivative-free iterative methods for multiple zeros that require only three function evaluations per iteration. The convergence behavior of the proposed class is illustrated by means of a graphical tool, namely basins of attraction. The applicability of the methods is demonstrated through numerical experimentation on different functions, which illustrates their efficient behavior. Comparison of the numerical results shows that the presented iterative methods are good competitors to the existing techniques.

Many higher-order methods, either independent or based on the modified Newton method [1]

x_{n+1} = x_n - m f(x_n)/f'(x_n),   (1)

which is quadratically convergent, have been proposed and analyzed in the literature; see, for example, [2-15] and references therein. Such methods require the evaluation of derivatives of first order, or of both first and second order. However, higher-order derivative-free methods to handle the case of multiple roots are yet to be explored. The main reason for the non-availability of such methods is the difficulty of establishing their order of convergence. Derivative-free methods are important in situations where the derivative of the function f is difficult or expensive to compute. One such method is the classical Traub-Steffensen method [16], which replaces the derivative f' in the classical Newton method with the difference-quotient approximation

f[x_n, w_n] = (f(w_n) - f(x_n))/(w_n - x_n),

a first-order divided difference, where w_n = x_n + β f(x_n) for a nonzero parameter β. In this way, the modified Newton method (1) becomes the modified Traub-Steffensen method

x_{n+1} = x_n - m f(x_n)/f[x_n, w_n].   (2)

The modified Traub-Steffensen method (2) is a noteworthy improvement of Newton's method, because it retains quadratic convergence without using any derivative.
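As a quick illustration, the modified Traub-Steffensen iteration (2) can be sketched as follows; the test function, the choice β = -0.01, and the stopping rule are illustrative assumptions, not taken from the paper.

```python
def traub_steffensen(f, x0, m, beta=-0.01, tol=1e-12, max_iter=100):
    """Modified Traub-Steffensen method (2) for a zero of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx                   # auxiliary point w_n = x_n + beta*f(x_n)
        if fx == 0 or w == x:               # converged, or step below machine precision
            return x
        dd = (f(w) - fx) / (w - x)          # divided difference f[x_n, w_n]
        x_new = x - m * fx / dd             # x_{n+1} = x_n - m f(x_n)/f[x_n, w_n]
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x - 2)^3 has a zero of multiplicity 3 at x = 2.
root = traub_steffensen(lambda x: (x - 2.0) ** 3, x0=1.5, m=3)
```

Note that no derivative of f appears anywhere: only values of f at x_n and at the auxiliary point w_n are used.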
In this work, we introduce a two-step family of third-order derivative-free methods for computing multiple zeros that requires three evaluations of the function f per iteration. The iterative scheme uses the Traub-Steffensen iteration (2) in the first step and a Traub-Steffensen-like iteration in the second step. The procedure is based on the simple approach of using weight factors in the iterative scheme, and many special methods of the family can be generated depending on the forms of the weight factors. The efficacy of these methods is tested on various numerical problems of different natures. In comparison with existing techniques that use derivative evaluations, the new derivative-free methods are computationally more efficient.
The rest of the paper is organized as follows. In Section 2, the scheme of third-order methods is developed and its convergence is studied. Section 3 is divided into two parts: in Section 3.1, an initial study of the dynamics of the methods with the help of basins of attraction is presented; to demonstrate the applicability and efficacy of the new methods, numerical experiments are performed in Section 3.2, where a comparison with existing methods of the same order is also given. Concluding remarks are given in Section 4.

The Method
With a known multiplicity m ≥ 1, we consider the following two-step scheme for multiple roots based on the Traub-Steffensen method (2):

y_n = x_n - m f(x_n)/f[x_n, w_n],
x_{n+1} = y_n - H(u) f(x_n)/f[x_n, w_n],   (3)

where the function H : C → C is analytic in a neighborhood of 0, with u = (f(y_n)/f(x_n))^{1/m}. The convergence order is established by the following theorem.

Theorem 1. Let f : C → C be an analytic function in a region enclosing a multiple zero (say, α) of multiplicity m. If the initial approximation x_0 is sufficiently close to α, then the iteration scheme defined by (3) attains third order of convergence, provided that H(0) = 0 and H'(0) = m.
Proof. Let the error at the n-th iteration be e_n = x_n - α. Using Taylor's expansion of f(x_n) about α, we have

f(x_n) = (f^{(m)}(α)/m!) e_n^m (1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)),   (4)

where C_k = (m!/(m+k)!) f^{(m+k)}(α)/f^{(m)}(α) for k ∈ N.
The analogous Taylor expansion of f(w_n) about α gives (5). Using (4) and (5), the first step of scheme (3) yields y_n - α = O(e_n^2). Expansion of f(y_n) about α then leads to an expression whose coefficients involve X_1 = (m + 1)C_1^2 - 2mC_2, and consequently u = (f(y_n)/f(x_n))^{1/m} = ((y_n - α)/e_n)(1 + O(e_n)) = O(e_n). Expanding H(u) in the neighborhood of 0 as H(u) = H(0) + H'(0)u + O(u^2) and using (4) and (7) in this expansion, the last step of (3) gives, under the conditions H(0) = 0 and H'(0) = m,

x_{n+1} - α = (y_n - α) - H(u) f(x_n)/f[x_n, w_n] = (y_n - α) - u e_n (1 + O(e_n)) + O(e_n^3) = O(e_n^3),

since u e_n = (y_n - α)(1 + O(e_n)). Thus, the theorem is proved.

Some Concrete Forms of H(u)
We can obtain numerous methods of the family (3) based on forms of the function H(u) that satisfy the conditions of Theorem 1, but we limit the choices to low-order polynomials or simple functions. Accordingly, six simple forms are chosen, and the corresponding methods, denoted Method 1 (M1) through Method 6 (M6), are obtained by substituting each of these forms into the scheme (3). In each of the above cases, u = (f(y_n)/f(x_n))^{1/m}.
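To make the family concrete, the following sketch implements scheme (3) with the simplest admissible weight H(u) = m u, which satisfies H(0) = 0 and H'(0) = m. Whether this particular choice coincides with one of M1-M6 is an assumption; the weight, the test polynomial, and the parameter values are illustrative only.

```python
def family_method(f, x0, m, beta=-0.01, tol=1e-12, max_iter=50):
    """Two-step scheme (3) with the illustrative weight H(u) = m*u."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx
        if fx == 0 or w == x:               # converged, or step below machine precision
            return x
        dd = (f(w) - fx) / (w - x)          # divided difference f[x_n, w_n]
        y = x - m * fx / dd                 # first (Traub-Steffensen) step
        u = (f(y) / fx) ** (1.0 / m)        # u = (f(y_n)/f(x_n))^(1/m)
        x_new = y - (m * u) * fx / dd       # second step with H(u) = m*u
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x^2 - 1)^2 has zeros +/-1, each of multiplicity 2.
root = family_method(lambda x: (x * x - 1.0) ** 2, x0=1.3, m=2)
```

Each pass evaluates f three times (at x_n, w_n, and y_n), in agreement with the cost count used in the efficiency discussion.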

Remark 1.
Computational efficiency of an iterative method is measured by the efficiency index E = s^{1/d} (see [17]), where s is the order of convergence and d is the computational cost, measured as the number of new pieces of information required by the method per iterative step. A "piece of information" is typically any evaluation of the function f or one of its derivatives. The presented third-order methods require three function evaluations per iteration, so their efficiency index is E = 3^{1/3} ≈ 1.442, which equals the efficiency index of existing third-order methods such as those of Dong [5], Halley [7], Chebyshev [7], Osada [12], and Victory and Neta [14]. Note, however, that these existing third-order methods require derivative evaluations, which is not the case for the proposed methods. Also note that the efficiency index of the new methods is better than that of the modified Newton method and the derivative-free Traub-Steffensen method (E = 2^{1/2} ≈ 1.414).
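The comparison of efficiency indices in Remark 1 is easy to verify numerically:

```python
# Efficiency index E = s**(1/d): order s achieved at a cost of d new
# function/derivative evaluations per iteration (Remark 1).
E_new = 3 ** (1 / 3)     # proposed methods: order 3, three f-evaluations
E_newton = 2 ** (1 / 2)  # modified Newton / Traub-Steffensen: order 2, cost 2

print(round(E_new, 3), round(E_newton, 3))  # 1.442 1.414
```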

Numerical Tests
In this section, we first plot the basins of attraction of the zeros of some polynomials when the proposed iterative methods are applied to them. Next, we verify the theoretical results by applying the methods to some nonlinear functions, including ones that arise in practical problems.

Basins of Attraction
Analysis of complex dynamical behavior gives important information about the convergence and stability of an iterative scheme; see, e.g., [2,3,18-21]. Here, we directly describe the dynamics of the iterative methods M1-M6 by means of a visual display of the basins of attraction of the zeros of a polynomial P(z). A deep dynamical study of the proposed six methods, with a detailed stability analysis of the fixed points and their behavior, requires a separate article, and so such a complete analysis is not a subject of discussion in the present work.
To start with, let us take the initial point z_0 in a rectangular region R ⊂ C that contains all the zeros of the polynomial P(z). The iterative method starting from a point z_0 in this rectangle either converges to a zero of P(z) or eventually diverges. The stopping criterion for convergence is a tolerance of 10^{-3} within a maximum of 25 iterations; if the required tolerance is not achieved in 25 iterations, we conclude that the method starting at z_0 does not converge to any root. The strategy adopted is as follows: a distinct color is allocated to the basin of attraction of each zero. If the iteration initiated at z_0 converges to a zero, then z_0 is painted with the color assigned to that zero's basin; if it fails to converge within 25 iterations, z_0 is painted black.
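The basin-scanning strategy just described can be sketched as follows. The tolerance and iteration cap follow the text; the iteration itself uses the family (3) with the illustrative weight H(u) = m u (an assumption, not necessarily one of M1-M6), applied to P_1(z) = (z^2 - 1)^2.

```python
import numpy as np

M, BETA = 2, 1e-2                        # multiplicity of the zeros of P1; parameter beta
P = lambda z: (z * z - 1.0) ** 2         # P1(z) = (z^2 - 1)^2, zeros +/-1 of multiplicity 2

def step(z):
    """One iteration of scheme (3) with the illustrative weight H(u) = m*u."""
    fz = P(z)
    w = z + BETA * fz
    if fz == 0 or w == z:
        return z
    dd = (P(w) - fz) / (w - z)           # divided difference P[z_n, w_n]
    y = z - M * fz / dd
    u = (P(y) / fz) ** (1.0 / M)         # principal branch for complex arguments
    return y - (M * u) * fz / dd

def basin_grid(roots=(1.0, -1.0), n=400, box=2.0, tol=1e-3, max_iter=25):
    """Label each grid point with the index of the root it reaches, else -1 (black)."""
    xs = np.linspace(-box, box, n)
    labels = np.full((n, n), -1, dtype=int)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(max_iter):
                try:
                    z_new = step(z)
                except (OverflowError, ZeroDivisionError):
                    break                # numerically exploding iterate: divergent
                if abs(z_new - z) < tol:
                    z = z_new
                    break
                z = z_new
                if abs(z) > 1e8:         # treat escaping iterates as divergent
                    break
            for k, r in enumerate(roots):
                if abs(z - r) < 1e-2:
                    labels[j, i] = k     # row j = Im(z_0), column i = Re(z_0)
            # unclassified points keep the label -1 and are painted black
    return labels
```

Coloring the resulting label array (e.g., green for label 0, red for label 1, black for -1) reproduces the kind of pictures shown in the figures.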
We plot the basins of attraction of the methods M1-M6 (for the choices β = 10^{-2}, 10^{-4}, 10^{-6}) for the following three polynomials.

Problem 1. In the first example, we consider the polynomial P_1(z) = (z^2 - 1)^2, which has zeros {±1} of multiplicity 2. To draw the basins, we use a grid of 400 × 400 points in a rectangle D ⊂ C of size [-2, 2] × [-2, 2] and assign the color green to each initial point in the basin of attraction of the zero -1 and the color red to each point in the basin of attraction of the zero 1. The basins obtained for the methods M1-M6 are shown in Figures 1-3, corresponding to β = 10^{-2}, 10^{-4}, 10^{-6}. Looking at the behavior of the methods, we see that methods M2 and M4 possess the fewest divergent points, followed by M3 and M5; on the contrary, method M6 has the most divergent points, followed by M1. Notice that the basins improve as the parameter β assumes smaller values.

From these graphics, one can easily evaluate the behavior and stability of any method. If we choose an initial point z_0 in a zone where distinct basins of attraction touch each other, it is impossible to predict which root will be reached by the iterative method starting at z_0; hence, such a zone is not a good choice for z_0. Neither the black zones nor the regions where different colors meet are suitable for the initial guess z_0 when we want to obtain a particular root. The most intricate pictures appear when there are very complicated frontiers between the basins of attraction; they correspond to the cases where the method is more demanding with respect to the initial point and its dynamical behavior is more unpredictable. We conclude this section with the remark that the convergence behavior of the proposed methods depends on the value of the parameter β: the smaller the value of β, the better the convergence of the method.

Applications
The above six methods M1-M6 of the family (3) are applied to solve a few nonlinear equations, which not only illustrates the methods in practice but also serves to verify the theoretical results we have derived. To verify the theoretical order of convergence, we obtain the computational order of convergence (COC) using the formula (see [22])

COC = ln(|x_{n+1} - x_n| / |x_n - x_{n-1}|) / ln(|x_n - x_{n-1}| / |x_{n-1} - x_{n-2}|).

Performance is compared with some well-known third-order methods requiring derivative evaluations, namely Dong's method [5], Halley's method (HM) [7], Chebyshev's method (CM) [7], Osada's method (OM) [12], and the Victory-Neta method (VNM) [14]; their iteration formulas can be found in the cited references.
All computations are performed in the programming package Mathematica using multiple-precision arithmetic. Performance of the new methods is tested with the parameter value β = -0.01. The numerical results displayed in Tables 1-4 include: (i) the values of the last three consecutive errors |x_{n+1} - x_n|; (ii) the number of iterations (n) required to converge to the solution such that |x_{n+1} - x_n| + |f(x_n)| < 10^{-100}; (iii) the COC; and (iv) the elapsed CPU time (CPU-time) in seconds.
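For reference, the COC reported in the tables can be estimated from the last four iterates of a run. This sketch uses a standard difference-based estimate and synthetic data, not the actual iterates from the tables.

```python
import math

def coc(xs):
    """Estimate the computational order of convergence from the last four
    iterates of a convergent sequence, using consecutive differences
    e_n = |x_{n+1} - x_n|: rho = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})."""
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)

# Synthetic third-order sequence converging to 0: e_{n+1} = e_n^3.
seq = [1e-1, 1e-3, 1e-9, 1e-27]
print(coc(seq))  # close to 3 for a third-order sequence
```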
The following examples are chosen for the numerical tests.

Example 1 (Eigenvalue problem). Finding the eigenvalues of a large sparse matrix is one of the most challenging tasks in applied mathematics and engineering. Furthermore, calculating the zeros of the characteristic polynomial of a square matrix of order greater than 4 is another big challenge. We therefore consider a 9 × 9 matrix M; its characteristic polynomial has one multiple zero at α = 3 of multiplicity 4. We find this zero with the initial approximation x_0 = 2.8. Numerical results are shown in Table 1.

Example 2. The relationship between the Mach number before a corner (i.e., M_1) and after the corner (i.e., M_2) is given by an equation (see [23]) involving b = (γ + 1)/(γ - 1), where γ is the specific heat ratio of the gas. For a special case study, we solve the equation for M_2 given that M_1 = 1.5, γ = 1.4, and δ = 10°. We consider this case seven times and obtain the required nonlinear function, which has a zero at α = 1.8411027704926161... of multiplicity 7. The required zero is determined using the initial approximation x_0 = 1.5. Numerical results are shown in Table 2.
The function f_3 has a multiple zero at α = 0 of multiplicity 3. We choose the initial approximation x_0 = 0.5 for obtaining this zero. Numerical results are displayed in Table 3.
The function f_4 has a complex zero at α = i of multiplicity 4. We choose the initial approximation x_0 = 1.25i for obtaining this zero. Numerical results are displayed in Table 4. From the numerical results we observe that the accuracy of the successive approximations increases, which shows the stable nature of the methods. Moreover, like the existing methods, the present methods show consistent convergence behavior. We display the value '0' of |x_{n+1} - x_n| for the iteration at which |x_{n+1} - x_n| + |f(x_n)| < 10^{-100}. The calculation of the computational order of convergence also confirms that the order of convergence of the methods is preserved. The efficient nature of the proposed methods can be observed from the fact that the CPU time they consume is less than that taken by the existing methods. In addition, the new methods are more accurate, since their errors become much smaller with increasing n compared to those of the existing techniques. The main purpose of developing the new derivative-free methods for different types of nonlinear equations is to illustrate the accuracy of the approximate solution and the stability of the convergence to the required solution. Similar numerical experiments, carried out on several other problems of various types, have confirmed these conclusions to a large extent.

Conclusions
In this study, we have proposed a class of third-order derivative-free methods for solving nonlinear equations with multiple roots of known multiplicity. The convergence analysis has established third-order convergence under standard assumptions on the nonlinear function whose zeros we seek. Some special cases of the class are presented; these are employed to solve nonlinear equations and are compared with existing techniques. The numerical results show that the presented derivative-free methods are good competitors to the existing third-order techniques that require derivative evaluations in their algorithms.