A Family of Derivative-Free Algorithms for Multiple Roots of the Van der Waals Problem

Abstract: There are a good number of higher-order iterative methods for computing multiple zeros of nonlinear equations in the literature. Most of them require first- or higher-order derivatives of the involved function. High-order derivative-free methods for multiple zeros are undoubtedly more difficult to construct than methods for simple zeros or methods that use first-order derivatives. This study presents an optimal family of fourth-order derivative-free techniques for multiple zeros that requires just three evaluations of the function Φ per iteration. The approximations of the derivative(s) are based on symmetric divided differences. We also demonstrate the application of the new algorithms to the Van der Waals, Planck radiation law, Manning (isentropic supersonic flow) and complex-root problems. Numerical results reveal that the proposed derivative-free techniques are more efficient than other existing methods in terms of CPU time, residual error, computational order of convergence, number of iterations and the difference between two consecutive iterations.


Introduction
Finding the multiple zeros of nonlinear equations is an important and challenging task in the fields of numerical analysis and the applied sciences [1,2]. In this study, we consider iterative methods to find a multiple root α (with known multiplicity n > 1) of a nonlinear equation of the form

Φ(t) = 0,

where Φ : D ⊂ C → C is an analytic function in a domain D surrounding the required zero α.
Several higher-order techniques have been developed and analyzed in the literature (see [3][4][5][6][7][8][9][10][11][12][13]). Most of them are based on the modified Newton's method [14], which is given by:

t_{k+1} = t_k − n Φ(t_k)/Φ′(t_k), k = 0, 1, 2, … (1)

It has second-order convergence and is one of the best-known one-point iterative methods for multiple zeros. However, it requires the evaluation of the first-order derivative at each step, and finding the derivative is not always an easy task. In addition, derivative-free methods are important in cases where the derivative Φ′ of the function Φ is either very small, does not exist, or is not easy to evaluate.
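As an illustration of the modified Newton iteration above, here is a minimal Python sketch; the test function (a hypothetical polynomial with a double root at t = 2) is our own choice, not one of the paper's benchmark problems.

```python
# Sketch of the modified Newton method (1) for a zero of known multiplicity n:
#     t_{k+1} = t_k - n * phi(t_k) / phi'(t_k)

def modified_newton(phi, dphi, t0, n, tol=1e-12, max_iter=50):
    """Return an approximation to a root of multiplicity n of phi."""
    t = t0
    for _ in range(max_iter):
        d = dphi(t)
        if d == 0.0:            # derivative vanished; cannot continue
            break
        step = n * phi(t) / d
        t -= step
        if abs(step) < tol:     # two consecutive iterates agree
            break
    return t

phi  = lambda t: (t - 2.0) ** 2 * (t + 1.0)                     # double root at 2
dphi = lambda t: 2.0 * (t - 2.0) * (t + 1.0) + (t - 2.0) ** 2   # its derivative

root = modified_newton(phi, dphi, t0=3.0, n=2)
```

Note that the correct multiplicity n = 2 must be supplied: with n = 1 the plain Newton method would degrade to linear convergence at this double root.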
Traub and Steffensen [15] proposed the following derivative-free approximation for Φ′ in the Newton method (1):

Φ′(t_k) ≈ Φ[u_k, t_k] = (Φ(u_k) − Φ(t_k))/(u_k − t_k),

where Φ[u_k, t_k] is a divided difference and u_k = t_k + b Φ(t_k), b ∈ R \ {0}. Then, the recursive scheme (1) takes the form of the Traub-Steffensen method, which is defined as below:

t_{k+1} = t_k − n Φ(t_k)/Φ[u_k, t_k]. (2)

Recently, some higher-order derivative-free methods have been presented in the literature (see [16][17][18][19]). Kumar et al. [16] suggested a second-order one-point derivative-free scheme. In addition, Behl et al. [17] and Kumar et al. [18,19] developed fourth-order convergent derivative-free methods for multiple zeros. The methods of [17][18][19] require three functional evaluations per iteration; therefore, by the Kung-Traub hypothesis [20], they have optimal convergence order.
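The Traub-Steffensen scheme (2) can be sketched in the same way; as before, the test function with a double root at t = 2 is a hypothetical example of our own, and the choice b = 0.01 is one admissible value of the free parameter.

```python
# Sketch of the Traub-Steffensen method (2): the derivative in (1) is replaced
# by the divided difference phi[u_k, t_k] = (phi(u_k) - phi(t_k)) / (u_k - t_k)
# with u_k = t_k + b * phi(t_k).

def traub_steffensen(phi, t0, n, b=0.01, tol=1e-12, max_iter=100):
    """Derivative-free iteration for a root of multiplicity n of phi."""
    t = t0
    for _ in range(max_iter):
        ft = phi(t)
        u = t + b * ft
        if u == t:                         # phi(t) is numerically zero
            break
        dd = (phi(u) - ft) / (u - t)       # divided difference phi[u, t]
        if dd == 0.0:
            break
        step = n * ft / dd
        t -= step
        if abs(step) < tol:
            break
    return t

phi = lambda t: (t - 2.0) ** 2 * (t + 1.0)   # double root at 2
root = traub_steffensen(phi, t0=3.0, n=2)
```

No derivative of phi is evaluated anywhere: each iteration costs two function evaluations, Φ(t_k) and Φ(u_k).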
The purpose of this study is to design new efficient derivative-free techniques that achieve a high order of convergence with a minimum number of evaluations of the involved function. Following these ideas, we derive two-step derivative-free techniques with fourth-order convergence. The new methods use only three evaluations of the involved function per iteration, so the scheme is optimal in the sense of the Kung-Traub hypothesis [20]. The algorithm is based on the Traub-Steffensen method (2), modified in the second step by a Traub-Steffensen-like iteration. Numerical results also demonstrate the superiority of our methods over the existing ones.

Development of Scheme
For n > 1, we propose the following two-step iterative approach, scheme (3). The results are calculated for different values of the multiplicity n. First, we consider the case n = 2 and establish the fourth-order convergence in the following Theorem 1.
Theorem 1. Let t = α be a multiple zero of Φ with multiplicity n = 2, and assume that Φ : D ⊂ C → C is an analytic function in a domain D containing a neighborhood of the required zero α. Then, the algorithm (3) has fourth-order convergence, provided the weight function Q satisfies the conditions given in (12) below.

Proof. Let e_k = t_k − α denote the error at the k-th step. Expanding Φ(t_k) in a Taylor series about α, with Φ(α) = 0, Φ′(α) = 0 and Φ″(α) ≠ 0, we have:

Φ(t_k) = (Φ″(α)/2!) e_k^2 (1 + C_1 e_k + C_2 e_k^2 + C_3 e_k^3 + ⋯), (4)

where C_m = (2!/(2 + m)!) Φ^(2+m)(α)/Φ″(α) for m ∈ N.
Similarly, the Taylor series expansion of Φ(u_k) about α gives (5). Inserting expressions (4) and (5) in the first step of (3), we obtain the expansion of v_k − α. Expanding Φ(v_k) in a Taylor series about α gives (7), and using (4), (5) and (7) we obtain the expansions (8) and (9). From the expressions (8) and (9), x_k = O(e_k) and y_k = O(e_k), respectively. We therefore expand the weight function Q(x_k, y_k) in a Taylor series in the neighborhood of (0, 0):

Q(x_k, y_k) = Q_00 + Q_10 x_k + Q_01 y_k + Q_20 x_k^2 + Q_11 x_k y_k + Q_02 y_k^2 + ⋯. (10)

Inserting (4)-(10) in the last step of (3), we obtain the error equation (11), whose coefficients ψ_m are not reproduced explicitly due to their considerable length.
We set the coefficients of e_k, e_k^2 and e_k^3 to zero simultaneously and solve the resulting equations. This yields the conditions (12) on the coefficients of Q, in which Q_02, Q_11 ∈ R remain free. Using expression (12) in (11), we obtain the final error equation, which confirms fourth-order convergence. This proves Theorem 1. □

Theorem 2. Under the hypotheses of Theorem 1, but with multiplicity n = 3, the algorithm (3) has at least fourth-order convergence, provided the weight function satisfies the conditions given in (20). Proof. As in Theorem 1, the Taylor series expansion of Φ(t_k) about α gives (13). Similarly, the Taylor series expansion of Φ(u_k) about α provides (14), where e_{u_k} = u_k − α.
Using (13) and (14) in the first step of (3), we obtain the expansion of v_k − α. Expanding Φ(v_k) in a Taylor series about α gives (16), and from the expressions (13), (14) and (16) we obtain the expansions (17) and (18). Using (10) and (13)-(18) in the last step of (3), we obtain the error equation (19). We set the coefficients of e_k^2 and e_k^3 to zero and solve the resulting equations, which yields the conditions (20). Using expression (20) in (19), we obtain the final error equation, which confirms at least fourth-order convergence. This proves Theorem 2. □

Generalization of the Method
For multiplicity n ≥ 4, we establish the following Theorem 3 for the method (3).
where C_m = (n!/(n + m)!) Φ^(n+m)(α)/Φ^(n)(α) for m ∈ N.
If we set the coefficients of e_k, e_k^2 and e_k^3 equal to zero and solve the resulting equations, we get the conditions (28). Using expression (28) in (27), we obtain the final error equation, which confirms fourth-order convergence. Hence, the theorem is proved. □

Remark 1. The algorithm (3) attains fourth-order convergence provided the conditions of Theorem 3 are satisfied. Only three function evaluations, namely Φ(t_k), Φ(u_k) and Φ(v_k), are used per iteration in order to achieve this convergence rate. Therefore, by the Kung-Traub hypothesis [20], the convergence order of our algorithm (3) is optimal.

Remark 2.
It is worth noting that the parameter b, which is employed in u_k, appears in the error equations only for the cases n = 2 and n = 3, and not for n ≥ 4. For n ≥ 4, it occurs in the terms of order e_k^5 and higher. In general, such terms are expensive to compute, and they are not needed to establish the desired fourth-order convergence.

Some Special Cases
We have explored several weight functions Q(x, y) that satisfy the conditions of Theorems 1-3. Some of the important simple forms are given below:

(1) Q(x_k, y_k) = ((4 + 3n)x_k + 8(1 + n)x_k^2 + n y_k) / (4n).
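For reference, special form (1) of the weight function can be written as code. The grouping of the fraction (the whole numerator over 4n) is our reading of the flattened formula above.

```python
# Weight function, special form (1), with the whole numerator over 4n:
#     Q(x, y) = ((4 + 3n) x + 8 (1 + n) x^2 + n y) / (4 n)
def Q1(x, y, n):
    return ((4 + 3 * n) * x + 8 * (1 + n) * x ** 2 + n * y) / (4 * n)
```

By construction, Q1(0, 0, n) = 0 for every multiplicity n, and Q1 is quadratic in x and linear in y.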

Numerical Results
We choose the combinations of the weight functions (30)-(32) with scheme (3) for the numerical experiments. The examples not only illustrate the feasibility and effectiveness of our methods, but also confirm the theoretical results. In order to verify the computational order of convergence (COC), we use the following formula (see [21]):

COC ≈ ln(|t_{k+1} − t_k| / |t_k − t_{k−1}|) / ln(|t_k − t_{k−1}| / |t_{k−1} − t_{k−2}|).

The performance of the new algorithms is compared with the following six known methods: (i) the Li-Liao-Cheng method (LLC) [5]; (ii) the Li-Cheng-Neta method (LCN) [6], where α_1 = −(1/2)(n/(n + 2))^n n(n^4 + 4n^3 − 16n − 16) … .
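The COC can be estimated from four successive iterates, as in [21]. A minimal Python sketch follows; the helper name `coc` and the sample sequence are our own illustration, not the paper's data.

```python
import math

def coc(ts):
    """Computational order of convergence from the last four iterates:
    ln(|t4 - t3| / |t3 - t2|) / ln(|t3 - t2| / |t2 - t1|)."""
    t1, t2, t3, t4 = ts[-4:]
    e1, e2, e3 = abs(t2 - t1), abs(t3 - t2), abs(t4 - t3)
    return math.log(e3 / e2) / math.log(e2 / e1)

# Sample sequence converging quadratically to 0 (each error is the square
# of the previous one), so the estimate should be close to 2:
order = coc([1e-2, 1e-4, 1e-8, 1e-16])
```

For the fourth-order methods considered here, the COC reported in the tables should approach 4 as the iterates converge.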
For each method we record: the number of iterations k; the multiplicity of the corresponding function; the first three estimated errors |t_{k+1} − t_k| in the iterations; the computational order of convergence (COC); and the CPU time.

(Table of test problems and their multiplicities.)

Table 3. Results of the methods for problem Φ_1(t). Table 7. Results of the methods for problem Φ_1(t). (Columns: Methods, k, |t_2 − t_1|, |t_3 − t_2|, |t_4 − t_3|, COC, CPU.)

Tables 3-7 show that the proposed techniques exhibit consistent convergence behavior. Our methods require the same or a smaller number of iterations for the considered problems in comparison with the other mentioned methods, and the estimated errors of the presented algorithms are smaller than those of the other methods. Furthermore, our methods produce the results in a shorter span of time than the existing ones.

Conclusions
This study proposed optimal derivative-free numerical techniques for multiple zeros of nonlinear equations. Fourth-order convergence was established under standard hypotheses. The applicability of the new techniques was illustrated on five nonlinear equations derived from real-life situations. The performance of our methods was compared with that of existing methods of identical order. The numerical results show that the new derivative-free algorithms are superior to the existing ones.