Symmetry
  • Article
  • Open Access

11 March 2022

A Family of Derivative Free Algorithms for Multiple-Roots of Van Der Waals Problem

1 Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai 601103, India
2 Department of Mathematical Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Nonlinear Analysis and Applications, Geometry of Banach Spaces and Symmetry

Abstract

There are a good number of higher-order iterative methods for computing multiple zeros of nonlinear equations in the literature. Most of them require the first- or higher-order derivatives of the involved function. High-order derivative-free methods for multiple zeros are undoubtedly more difficult to obtain than methods for simple zeros or methods that use derivatives. This study presents an optimal family of fourth-order derivative-free techniques for multiple zeros that requires just three evaluations of the function Φ per iteration. The approximations of the derivatives are based on symmetric divided differences. We also demonstrate the application of the new algorithms to the Van der Waals, Planck radiation law, Manning isentropic supersonic flow and complex root problems. Numerical results reveal that the proposed derivative-free techniques are more efficient than other existing methods in terms of CPU time, residual error, computational order of convergence, number of iterations and the difference between two consecutive iterations.

1. Introduction

Finding the multiple zeros of nonlinear equations is an important and challenging task in numerical analysis and the applied sciences [1,2]. In this study, we consider iterative methods to find a multiple root α (with known multiplicity n > 1) of a nonlinear equation of the following form:
$$\Phi(t) = 0,$$
where $\Phi : D \subseteq \mathbb{C} \to \mathbb{C}$ is an analytic function in a region $D$ surrounding the required zero $\alpha$.
Several higher-order techniques have been developed and analyzed in the literature (see [3,4,5,6,7,8,9,10,11,12,13]). Most of them are based on the modified Newton’s method [14], which is given by:
$$t_{k+1} = t_k - n\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad k = 0, 1, 2, \ldots \tag{1}$$
It has second-order convergence and is one of the best-known one-point iterative methods for multiple zeros. However, it requires the evaluation of the first-order derivative at each step, and finding the derivative is not always an easy task. Derivative-free methods are therefore important in cases where the derivative $\Phi'$ of the function $\Phi$ is either very small, does not exist, or is not easy to evaluate.
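As a minimal illustration of iteration (1), the following Python sketch applies the modified Newton method to a function with a double root; the test function, starting guess and tolerances are our own illustrative choices, not taken from the paper.

```python
# Sketch of the modified Newton method (1) for a root of known multiplicity n.
import math

def modified_newton(phi, dphi, t, n, tol=1e-12, max_iter=50):
    """Iterate t_{k+1} = t_k - n * Phi(t_k) / Phi'(t_k)."""
    for _ in range(max_iter):
        ft = phi(t)
        if ft == 0.0:
            break
        step = n * ft / dphi(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Phi(t) = (t - 1)^2 e^t has a double root at t = 1 (illustrative choice):
phi = lambda t: (t - 1.0) ** 2 * math.exp(t)
dphi = lambda t: ((t - 1.0) ** 2 + 2.0 * (t - 1.0)) * math.exp(t)
root = modified_newton(phi, dphi, 2.0, n=2)
```

Without the factor n, plain Newton only converges linearly to a multiple root; with it, quadratic convergence is restored.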
Traub–Steffensen [15] proposed the following derivative free method:
$$\Phi'(t_k) \simeq \frac{\Phi\left(t_k + b\,\Phi(t_k)\right) - \Phi(t_k)}{b\,\Phi(t_k)}, \qquad b \in \mathbb{R} \setminus \{0\},$$
or
$$\Phi'(t_k) \simeq \Phi[u_k, t_k],$$
for $\Phi'$ in the Newton method (1). Here, $\Phi[u_k, t_k] = \dfrac{\Phi(u_k) - \Phi(t_k)}{u_k - t_k}$ is a first-order divided difference and $u_k = t_k + b\,\Phi(t_k)$. Then, the recursive scheme (1) takes the form of the Traub–Steffensen method, which is defined as below:
$$t_{k+1} = t_k - n\,\frac{\Phi(t_k)}{\Phi[u_k, t_k]}. \tag{2}$$
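Scheme (2) replaces the derivative by the divided difference $\Phi[u_k, t_k]$. A minimal Python sketch follows; the test function, parameter $b$ and starting point are our own illustrative assumptions.

```python
# Derivative-free Traub-Steffensen iteration (2): the divided difference
# Phi[u_k, t_k], with u_k = t_k + b*Phi(t_k), replaces Phi'(t_k) in (1).
def traub_steffensen(phi, t, n, b=0.01, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        ft = phi(t)
        if ft == 0.0:
            break
        u = t + b * ft
        if u == t:                      # Phi(t_k) fell below floating-point spacing
            break
        dd = (phi(u) - ft) / (u - t)    # divided difference Phi[u_k, t_k]
        step = n * ft / dd
        t -= step
        if abs(step) < tol:
            break
    return t

# (t - 2)^2 has a double root at t = 2 (illustrative choice):
root = traub_steffensen(lambda t: (t - 2.0) ** 2, 2.5, n=2)
```

The guard `u == t` is needed in finite precision: once $\Phi(t_k)$ underflows the floating-point spacing at $t_k$, the divided difference can no longer be formed.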
Recently, some higher-order derivative-free methods have been presented in the literature (see [16,17,18,19]). Kumar et al. [16] suggested a second-order one-point derivative-free scheme. In addition, Behl et al. [17] and Kumar et al. [18,19] advanced fourth-order convergent derivative-free methods for multiple zeros. The methods of [17,18,19] require three functional evaluations per iteration and therefore, according to the Kung–Traub hypothesis [20], they have optimal convergence order.
The purpose of this study is to design new efficient derivative-free techniques that achieve high-order convergence with a minimum number of evaluations of the involved function. Following these ideas, we derive two-step derivative-free techniques with fourth-order convergence. The new methods consume only three evaluations of the involved function per iteration, so they are optimal schemes in the sense of the Kung–Traub hypothesis [20]. The algorithm is based on the Traub–Steffensen method (2) and is further modified in the second step by using a Traub–Steffensen-like iteration. Numerical results also demonstrate the superiority of our methods over the existing ones.

2. Development of Scheme

For n > 1 , we propose the following iterative approach:
$$\begin{aligned} v_k &= t_k - n\,\frac{\Phi(t_k)}{\Phi[u_k, t_k]}, \\ t_{k+1} &= v_k - n\,Q(x_k, y_k)\,\frac{\Phi(t_k)}{\Phi[u_k, t_k] + \Phi[v_k, u_k]}, \end{aligned} \tag{3}$$
where $x_k = \sqrt[n]{\Phi(v_k)/\Phi(t_k)}$ and $y_k = \sqrt[n]{\Phi(v_k)/\Phi(u_k)}$.
The results are calculated for different values of n. First, we consider the case n = 2 and establish fourth-order convergence in the following Theorem 1.
Theorem 1.
Consider that $t = \alpha$ is a zero of $\Phi$ of multiplicity $n = 2$. We also assume that $\Phi : D \subseteq \mathbb{C} \to \mathbb{C}$ is an analytic function in a region $D$ that contains the vicinity of the required zero $\alpha$. Then, the algorithm (3) has fourth-order convergence if
$$Q_{00} = 0, \quad Q_{10} = \frac{5}{4}, \quad Q_{01} = \frac{1}{4}, \quad Q_{20} = 6 - Q_{02} - 2Q_{11}, \qquad Q_{11}, Q_{02} \in \mathbb{R},$$
where $Q_{ij} = \dfrac{\partial^{\,i+j}}{\partial x^i\,\partial y^j}\,Q(x_k, y_k)\Big|_{(x_k = 0,\; y_k = 0)}$, for $0 \le i, j \le 2$.
Proof. 
The error at the k-th stage is given by $\epsilon_k = t_k - \alpha$. Adopting the Taylor series expansion of the function $\Phi(t_k)$ about $\alpha$, with the assumptions $\Phi(\alpha) = 0$, $\Phi'(\alpha) = 0$ and $\Phi^{(2)}(\alpha) \ne 0$, we have:
$$\Phi(t_k) = \frac{\Phi^{(2)}(\alpha)}{2!}\,\epsilon_k^2\left(1 + N_1\epsilon_k + N_2\epsilon_k^2 + N_3\epsilon_k^3 + N_4\epsilon_k^4 + \cdots\right), \tag{4}$$
where $N_m = \dfrac{2!}{(2+m)!}\,\dfrac{\Phi^{(2+m)}(\alpha)}{\Phi^{(2)}(\alpha)}$ for $m \in \mathbb{N}$.
Similarly, we have the following Taylor’s series expansion of Φ ( u k ) about α :
$$\Phi(u_k) = \frac{\Phi^{(2)}(\alpha)}{2!}\,\epsilon_{u_k}^2\left(1 + N_1\epsilon_{u_k} + N_2\epsilon_{u_k}^2 + N_3\epsilon_{u_k}^3 + N_4\epsilon_{u_k}^4 + \cdots\right), \tag{5}$$
where $\epsilon_{u_k} = u_k - \alpha = \epsilon_k + b\,\dfrac{\Phi^{(2)}(\alpha)}{2!}\,\epsilon_k^2\left(1 + N_1\epsilon_k + N_2\epsilon_k^2 + N_3\epsilon_k^3 + \cdots\right)$.
By inserting expressions (4) and (5) in the first step of (3), we obtain:
$$\epsilon_{v_k} = v_k - \alpha = \frac{1}{2}\left(\frac{b\,\Phi^{(2)}(\alpha)}{2} + N_1\right)\epsilon_k^2 - \frac{1}{16}\left(\left(b\,\Phi^{(2)}(\alpha)\right)^2 - 8\,b\,\Phi^{(2)}(\alpha)\,N_1 + 12N_1^2 - 16N_2\right)\epsilon_k^3 + O(\epsilon_k^4). \tag{6}$$
Expanding $\Phi(v_k)$ in a Taylor series about $\alpha$, it follows that:
$$\Phi(v_k) = \frac{\Phi^{(2)}(\alpha)}{2!}\,\epsilon_{v_k}^2\left(1 + N_1\epsilon_{v_k} + N_2\epsilon_{v_k}^2 + N_3\epsilon_{v_k}^3 + \cdots\right). \tag{7}$$
Using (4), (5) and (7), we have:
$$\begin{aligned} x_k ={}& \frac{1}{2}\left(\frac{b\,\Phi^{(2)}(\alpha)}{2} + N_1\right)\epsilon_k - \frac{1}{16}\left(\left(b\,\Phi^{(2)}(\alpha)\right)^2 - 6\,b\,\Phi^{(2)}(\alpha)\,N_1 + 16\left(N_1^2 - N_2\right)\right)\epsilon_k^2 \\ & + \frac{1}{64}\left(\left(b\,\Phi^{(2)}(\alpha)\right)^3 - 22\,b\,\Phi^{(2)}(\alpha)\,N_1^2 + 4\left(29N_1^3 + 14\,b\,\Phi^{(2)}(\alpha)\,N_2\right) - 2N_1\left(3\left(b\,\Phi^{(2)}(\alpha)\right)^2 + 104N_2\right) + 96N_3\right)\epsilon_k^3 + O(\epsilon_k^4), \end{aligned} \tag{8}$$
and
$$\begin{aligned} y_k ={}& \frac{1}{2}\left(\frac{b\,\Phi^{(2)}(\alpha)}{2} + N_1\right)\epsilon_k - \frac{1}{16}\left(3\left(b\,\Phi^{(2)}(\alpha)\right)^2 - 2\,b\,\Phi^{(2)}(\alpha)\,N_1 + 16\left(N_1^2 - N_2\right)\right)\epsilon_k^2 \\ & + \frac{1}{64}\left(7\left(b\,\Phi^{(2)}(\alpha)\right)^3 + 24\,b\,\Phi^{(2)}(\alpha)\,N_2 - 14\,b\,\Phi^{(2)}(\alpha)\,N_1^2 + 116N_1^3 - 2N_1\left(11\left(b\,\Phi^{(2)}(\alpha)\right)^2 + 104N_2\right) + 96N_3\right)\epsilon_k^3 + O(\epsilon_k^4). \end{aligned} \tag{9}$$
From the expressions (8) and (9), we have $x_k = O(\epsilon_k)$ and $y_k = O(\epsilon_k)$, respectively. Then, we expand the weight function $Q(x_k, y_k)$ by a Taylor series in a neighborhood of $(0, 0)$ in the following way:
$$Q(x_k, y_k) \approx Q_{00} + x_k\,Q_{10} + y_k\,Q_{01} + \frac{1}{2}x_k^2\,Q_{20} + x_k y_k\,Q_{11} + \frac{1}{2}y_k^2\,Q_{02}. \tag{10}$$
Inserting (4)–(10) in the last step of (3), we obtain
$$\begin{aligned} \epsilon_{k+1} ={}& -\frac{2}{3}\,Q_{00}\,\epsilon_k + \frac{1}{36}\left(b\,\Phi^{(2)}(\alpha)\left(9 + 10Q_{00} - 6Q_{01} - 6Q_{10}\right) + 6\left(3 + 2Q_{00} - 2Q_{01} - 2Q_{10}\right)N_1\right)\epsilon_k^2 \\ & + \frac{1}{432}\Big(\left(b\,\Phi^{(2)}(\alpha)\right)^2\left(27 + 56Q_{00} - 84Q_{01} + 9Q_{02} - 48Q_{10} + 18Q_{11} + 9Q_{20}\right) \\ & + 12\,b\,\Phi^{(2)}(\alpha)\left(18 + 14Q_{00} + 5Q_{01} - 3Q_{02} - Q_{10} - 6Q_{11} - 3Q_{20}\right)N_1 \\ & - 12\left(20Q_{00} + 3\left(9 - 10Q_{01} + Q_{02} - 10Q_{10} + 2Q_{11} + Q_{20}\right)\right)N_1^2 \\ & + 144\left(3 + 2Q_{00} - 2Q_{01} - 2Q_{10}\right)N_2\Big)\epsilon_k^3 + \psi_m\,\epsilon_k^4 + O(\epsilon_k^5), \end{aligned} \tag{11}$$
where
$$\begin{aligned} \psi_m ={}& \psi_m(b, Q_{00}, Q_{10}, Q_{01}, Q_{20}, Q_{11}, Q_{02}, N_1, N_2, N_3) \\ ={}& \frac{1}{5184}\Big(81\left(b\,\Phi^{(2)}(\alpha)\right)^3 + 328\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{00} - 816\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{01} + 207\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{02} \\ & - 312\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{10} + 306\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{11} + 99\left(b\,\Phi^{(2)}(\alpha)\right)^3 Q_{20} \\ & - 12\,b\,\Phi^{(2)}(\alpha)\left(40Q_{00} + 3\left(45 + 5Q_{01} - 29Q_{02} - 19Q_{10} - 46Q_{11} - 17Q_{20}\right)\right)N_1^2 \\ & + 72\left(81 + 72Q_{00} - 131Q_{01} + 27Q_{02} - 131Q_{10} + 54Q_{11} + 27Q_{20}\right)N_1^3 \\ & + 144\,b\,\Phi^{(2)}(\alpha)\left(36 + 24Q_{00} + 7Q_{01} - 6Q_{02} - 5Q_{10} - 12Q_{11} - 6Q_{20}\right)N_2 \\ & - 6N_1\Big(\left(b\,\Phi^{(2)}(\alpha)\right)^2\left(135 + 216Q_{00} - 202Q_{01} - 75Q_{02} - 154Q_{10} - 78Q_{11} - 3Q_{20}\right) \\ & + 48\left(45 + 34Q_{00} - 51Q_{01} + 6Q_{02} - 51Q_{10} + 12Q_{11} + 6Q_{20}\right)N_2\Big) \\ & + 7776N_3 + 5184Q_{00}N_3 - 5184Q_{01}N_3 - 5184Q_{10}N_3\Big). \end{aligned}$$
We set the coefficients of $\epsilon_k$, $\epsilon_k^2$ and $\epsilon_k^3$ simultaneously equal to zero and solve the resulting equations. Then, we obtain:
$$Q_{00} = 0, \quad Q_{10} = \frac{5}{4}, \quad Q_{01} = \frac{1}{4}, \quad Q_{20} = 6 - Q_{02} - 2Q_{11}, \tag{12}$$
where Q 02 , Q 11 R .
By using expression (12) in (11), we have:
$$\epsilon_{k+1} = \frac{1}{192}\left(b\,\Phi^{(2)}(\alpha) + 2N_1\right)\left(\left(b\,\Phi^{(2)}(\alpha)\right)^2\left(3 + 4Q_{02} + 4Q_{11}\right) + 2\,b\,\Phi^{(2)}(\alpha)\left(11 + 4Q_{02} + 4Q_{11}\right)N_1 + 62N_1^2 - 24N_2\right)\epsilon_k^4 + O(\epsilon_k^5).$$
Hence, Theorem 1 is proved. □
Theorem 2.
We adopt the hypotheses of Theorem 1 in the same sense. Then, the algorithm (3) has at least fourth-order convergence for n = 3 if
$$Q_{00} = 0, \quad Q_{10} = \frac{4}{3} - Q_{01}, \quad Q_{20} = \frac{16}{3} - Q_{02} - 2Q_{11},$$
where $Q_{01}, Q_{02}, Q_{11} \in \mathbb{R}$.
Proof. 
Keeping in mind that $\Phi(\alpha) = 0$, $\Phi'(\alpha) = 0$, $\Phi^{(2)}(\alpha) = 0$ and $\Phi^{(3)}(\alpha) \ne 0$, we expand $\Phi(t_k)$ about $\alpha$ with the help of a Taylor series:
$$\Phi(t_k) = \frac{\Phi^{(3)}(\alpha)}{3!}\,\epsilon_k^3\left(1 + \bar N_1\epsilon_k + \bar N_2\epsilon_k^2 + \bar N_3\epsilon_k^3 + \bar N_4\epsilon_k^4 + \cdots\right), \tag{13}$$
where $\bar N_m = \dfrac{3!}{(3+m)!}\,\dfrac{\Phi^{(3+m)}(\alpha)}{\Phi^{(3)}(\alpha)}$ for $m \in \mathbb{N}$.
Similarly, the Taylor series expansion of $\Phi(u_k)$ about $\alpha$ provides the following expression:
$$\Phi(u_k) = \frac{\Phi^{(3)}(\alpha)}{3!}\,\epsilon_{u_k}^3\left(1 + \bar N_1\epsilon_{u_k} + \bar N_2\epsilon_{u_k}^2 + \bar N_3\epsilon_{u_k}^3 + \bar N_4\epsilon_{u_k}^4 + \cdots\right), \tag{14}$$
where $\epsilon_{u_k} = u_k - \alpha$.
Using (13) and (14) in the first step of (3), we get:
$$\epsilon_{v_k} = v_k - \alpha = \frac{\bar N_1}{3}\,\epsilon_k^2 + \frac{1}{18}\left(3\,b\,\Phi^{(3)}(\alpha) - 8\bar N_1^2 + 12\bar N_2\right)\epsilon_k^3 + \left(\frac{16}{27}\bar N_1^3 + \frac{\bar N_1}{9}\left(2\,b\,\Phi^{(3)}(\alpha) - 13\bar N_2\right) + \bar N_3\right)\epsilon_k^4 + O(\epsilon_k^5). \tag{15}$$
Expanding $\Phi(v_k)$ in a Taylor series about $\alpha$ gives:
$$\Phi(v_k) = \frac{\Phi^{(3)}(\alpha)}{3!}\,\epsilon_{v_k}^3\left(1 + \bar N_1\epsilon_{v_k} + \bar N_2\epsilon_{v_k}^2 + \bar N_3\epsilon_{v_k}^3 + \bar N_4\epsilon_{v_k}^4 + \cdots\right). \tag{16}$$
From the expressions (13), (14) and (16), we have:
$$x_k = \frac{\bar N_1}{3}\,\epsilon_k + \left(\frac{b\,\Phi^{(3)}(\alpha)}{6} - \frac{5}{9}\bar N_1^2 + \frac{2}{3}\bar N_2\right)\epsilon_k^2 + \left(\frac{23}{27}\bar N_1^3 + \frac{\bar N_1}{18}\left(3\,b\,\Phi^{(3)}(\alpha) - 32\bar N_2\right) + \bar N_3\right)\epsilon_k^3 + O(\epsilon_k^4), \tag{17}$$
$$y_k = \frac{\bar N_1}{3}\,\epsilon_k + \left(\frac{b\,\Phi^{(3)}(\alpha)}{6} - \frac{5}{9}\bar N_1^2 + \frac{2}{3}\bar N_2\right)\epsilon_k^2 + \left(\frac{23}{27}\bar N_1^3 + \frac{2}{9}\bar N_1\left(\frac{b\,\Phi^{(3)}(\alpha)}{2} - 8\bar N_2\right) + \bar N_3\right)\epsilon_k^3 + O(\epsilon_k^4). \tag{18}$$
By using (10) and (13)–(18) in the last step of (3), we obtain:
$$\begin{aligned} \epsilon_{k+1} ={}& -\frac{3}{4}\,Q_{00}\,\epsilon_k + \frac{1}{12}\left(4 + 3Q_{00} - 3Q_{01} - 3Q_{10}\right)\bar N_1\,\epsilon_k^2 \\ & + \frac{1}{144}\Big(2\left(32 + 24Q_{00} - 36Q_{01} + 3Q_{02} - 36Q_{10} + 6Q_{11} + 3Q_{20}\right)\bar N_1^2 \\ & + 3\left(b\,\Phi^{(3)}(\alpha)\left(8 + 9Q_{00} - 6Q_{01} - 6Q_{10}\right) + 8\left(3Q_{00} - 3Q_{01} - 3Q_{10} + 4\right)\bar N_2\right)\Big)\epsilon_k^3 + \varphi_m\,\epsilon_k^4 + O(\epsilon_k^5), \end{aligned} \tag{19}$$
where $\varphi_m = \varphi_m(b, Q_{00}, Q_{10}, Q_{01}, Q_{20}, Q_{11}, Q_{02}, \bar N_1, \bar N_2, \bar N_3)$.
We set the coefficients of ϵ k 2 and ϵ k 3 to zero and solve the resulting equations. Then, we obtain:
$$Q_{00} = 0, \quad Q_{10} = \frac{4}{3} - Q_{01}, \quad Q_{20} = \frac{16}{3} - Q_{02} - 2Q_{11}. \tag{20}$$
Adopting the expression (20) in (19), we have the following error equation:
$$\epsilon_{k+1} = \frac{\bar N_1}{72}\left(b\,\Phi^{(3)}(\alpha)\left(3Q_{01} - 2\right) + 16\bar N_1^2 - 8\bar N_2\right)\epsilon_k^4 + O(\epsilon_k^5).$$
Hence, Theorem 2 is proved. □

3. Generalization of the Method

For the multiplicity $n \ge 4$, we establish the following Theorem 3 for the method (3).
Theorem 3.
Under the hypotheses of Theorem 1, the algorithm (3) for the case $n \ge 4$ has at least fourth-order convergence if:
$$Q_{00} = 0, \quad Q_{10} = \frac{n+1}{n} - Q_{01}, \quad Q_{20} = \frac{4}{n} + 4 - Q_{02} - 2Q_{11}, \qquad Q_{01}, Q_{02}, Q_{11} \in \mathbb{R}.$$
Moreover, the error equation of (3) is given by
$$\epsilon_{k+1} = \left(\frac{9+n}{2n^3}\,P_1^3 - \frac{1}{n^2}\,P_1P_2\right)\epsilon_k^4 + O(\epsilon_k^5).$$
Proof. 
Keeping in mind that $\Phi^{(j)}(\alpha) = 0$ for $j = 0, 1, 2, \ldots, n-1$, and $\Phi^{(n)}(\alpha) \ne 0$, we have the following Taylor series expansion of $\Phi(t_k)$ about $\alpha$:
$$\Phi(t_k) = \frac{\Phi^{(n)}(\alpha)}{n!}\,\epsilon_k^n\left(1 + P_1\epsilon_k + P_2\epsilon_k^2 + P_3\epsilon_k^3 + P_4\epsilon_k^4 + \cdots\right), \tag{21}$$
where $P_m = \dfrac{n!}{(m+n)!}\,\dfrac{\Phi^{(m+n)}(\alpha)}{\Phi^{(n)}(\alpha)}$ for $m \in \mathbb{N}$.
Similarly, expanding $\Phi(u_k)$ about $\alpha$ leads us to
$$\Phi(u_k) = \frac{\Phi^{(n)}(\alpha)}{n!}\,\epsilon_{u_k}^n\left(1 + P_1\epsilon_{u_k} + P_2\epsilon_{u_k}^2 + P_3\epsilon_{u_k}^3 + P_4\epsilon_{u_k}^4 + \cdots\right), \tag{22}$$
where $\epsilon_{u_k} = u_k - \alpha = \epsilon_k + b\,\dfrac{\Phi^{(n)}(\alpha)}{n!}\,\epsilon_k^n\left(1 + P_1\epsilon_k + P_2\epsilon_k^2 + P_3\epsilon_k^3 + \cdots\right)$.
Using the expressions (21) and (22) in the first step of Equation (3), we obtain
$$\epsilon_{v_k} = \begin{cases} \dfrac{P_1}{4}\,\epsilon_k^2 + \dfrac{1}{16}\left(8P_2 - 5P_1^2\right)\epsilon_k^3 + \left(\dfrac{25}{64}P_1^3 - P_1P_2 + \dfrac{1}{16}\left(b\,\Phi^{(4)}(\alpha) + 12P_3\right)\right)\epsilon_k^4 + O(\epsilon_k^5), & \text{if } n = 4, \\[2mm] \dfrac{P_1}{n}\,\epsilon_k^2 + \dfrac{1}{n^2}\left(2nP_2 - (1+n)P_1^2\right)\epsilon_k^3 + \dfrac{1}{n^3}\left((1+n)^2P_1^3 - n(4+3n)P_1P_2 + 3n^2P_3\right)\epsilon_k^4 + O(\epsilon_k^5), & \text{if } n \ge 5. \end{cases} \tag{23}$$
Expansion of Φ ( v k ) around α yields:
$$\Phi(v_k) = \frac{\Phi^{(n)}(\alpha)}{n!}\,\epsilon_{v_k}^n\left(1 + P_1\epsilon_{v_k} + P_2\epsilon_{v_k}^2 + P_3\epsilon_{v_k}^3 + P_4\epsilon_{v_k}^4 + \cdots\right). \tag{24}$$
Using (21), (22) and (24) in the expressions of x k and y k , we have:
$$x_k = \begin{cases} \dfrac{P_1}{4}\,\epsilon_k + \dfrac{1}{8}\left(4P_2 - 3P_1^2\right)\epsilon_k^2 + \dfrac{1}{128}\left(67P_1^3 - 152P_1P_2 + 8\left(b\,\Phi^{(4)}(\alpha) + 12P_3\right)\right)\epsilon_k^3 + O(\epsilon_k^4), & \text{if } n = 4, \\[2mm] \dfrac{P_1}{n}\,\epsilon_k + \dfrac{1}{n^2}\left(2nP_2 - (2+n)P_1^2\right)\epsilon_k^2 + \dfrac{1}{2n^3}\left(\left(2n^2 + 7n + 7\right)P_1^3 - 2n(7+3n)P_1P_2 + 6n^2P_3\right)\epsilon_k^3 + O(\epsilon_k^4), & \text{if } n \ge 5, \end{cases} \tag{25}$$
and
$$y_k = \begin{cases} \dfrac{P_1}{4}\,\epsilon_k + \dfrac{1}{8}\left(4P_2 - 3P_1^2\right)\epsilon_k^2 + \dfrac{1}{128}\left(67P_1^3 - 152P_1P_2 + 8\left(b\,\Phi^{(4)}(\alpha) + 12P_3\right)\right)\epsilon_k^3 + O(\epsilon_k^4), & \text{if } n = 4, \\[2mm] \dfrac{P_1}{n}\,\epsilon_k + \dfrac{1}{n^2}\left(2nP_2 - (2+n)P_1^2\right)\epsilon_k^2 + \dfrac{1}{2n^3}\left(\left(2n^2 + 7n + 7\right)P_1^3 - 2n(7+3n)P_1P_2 + 6n^2P_3\right)\epsilon_k^3 + O(\epsilon_k^4), & \text{if } n \ge 5. \end{cases} \tag{26}$$
Inserting expressions (10) and (21)–(26) in the second step of (3), we have:
$$\begin{aligned} \epsilon_{k+1} ={}& -\frac{n}{n+1}\,Q_{00}\,\epsilon_k + \frac{1}{n+1}\left(n + 1 + nQ_{00} - nQ_{01} - nQ_{10}\right)P_1\,\epsilon_k^2 \\ & + \frac{1}{2n^2(n+1)}\Big(\left(2(n+1)^2 + 2n(n+1)Q_{00} - 2n(n+3)Q_{01} + nQ_{02} - 2n(n+3)Q_{10} + 2nQ_{11} + nQ_{20}\right)P_1^2 \\ & + 8\left(n + 1 + nQ_{00} - nQ_{01} - nQ_{10}\right)P_2\Big)\epsilon_k^3 + \phi_m\,\epsilon_k^4 + O(\epsilon_k^5), \end{aligned} \tag{27}$$
where $\phi_m = \phi_m(b, Q_{00}, Q_{10}, Q_{01}, Q_{20}, Q_{11}, Q_{02}, P_1, P_2, P_3)$ for $n = 4$ and
$\phi_m = \phi_m(Q_{00}, Q_{10}, Q_{01}, Q_{20}, Q_{11}, Q_{02}, P_1, P_2, P_3)$ for $n \ge 5$.
If we set coefficients ϵ k , ϵ k 2 and ϵ k 3 equal to zero and solve the resulting equations, we get:
$$Q_{00} = 0, \quad Q_{10} = \frac{n+1}{n} - Q_{01}, \quad Q_{20} = \frac{4}{n} + 4 - Q_{02} - 2Q_{11}. \tag{28}$$
Adopting the expression (28) in (27), we have the following error equation:
$$\epsilon_{k+1} = \left(\frac{9+n}{2n^3}\,P_1^3 - \frac{1}{n^2}\,P_1P_2\right)\epsilon_k^4 + O(\epsilon_k^5). \tag{29}$$
Hence, the theorem is proved. □
Remark 1. 
The algorithm (3) reaches fourth-order convergence provided the conditions of Theorem 3 are satisfied. Only three function evaluations, namely $\Phi(t_k)$, $\Phi(u_k)$ and $\Phi(v_k)$, are used per iteration in order to achieve this convergence rate. Therefore, the Kung–Traub hypothesis [20] confirms the optimal convergence of our algorithm (3).
Remark 2. 
It is worth noting that the parameter b, which is employed in $u_k$, appears in the error equations of the cases n = 2 and n = 3, but not for $n \ge 4$. For $n \ge 4$, we have noticed that it occurs in the terms of order $\epsilon_k^5$ and higher. In general, such terms are expensive to compute; furthermore, they are not required to demonstrate the desired fourth-order convergence.

Some Special Cases

We have explored several cases of the weight function Q(x, y) that satisfy the conditions of Theorems 1–3. However, some important simple forms are given below:
$$Q(x_k, y_k) = \frac{(4+3n)\,x_k + 8(1+n)\,x_k^2 + n\,y_k}{4n}, \tag{30}$$
$$Q(x_k, y_k) = \frac{(4+3n)^3\,x_k + n\,y_k\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}{4n(4+3n)\left(8x_k + n(8x_k - 3) - 4\right) - 32n^2(1+n)\,y_k}, \tag{31}$$
$$Q(x_k, y_k) = \frac{(4+3n)^3\,x_k + n\,y_k\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}{-32n\left(4 + 7n + 3n^2\right)x_k + (4+3n)^3\,x_k^2 + 4n\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}. \tag{32}$$
The corresponding technique to each of the above forms can be expressed as follows:
  • Method 1 (M1):
    $$t_{k+1} = v_k - n\,\frac{(4+3n)\,x_k + 8(1+n)\,x_k^2 + n\,y_k}{4n}\cdot\frac{\Phi(t_k)}{\Phi[u_k, t_k] + \Phi[v_k, u_k]}.$$
  • Method 2 (M2):
    $$t_{k+1} = v_k + n\,\frac{(4+3n)^3\,x_k + n\,y_k\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}{4n(4+3n)\left(8x_k + n(8x_k - 3) - 4\right) - 32n^2(1+n)\,y_k}\cdot\frac{\Phi(t_k)}{\Phi[u_k, t_k] + \Phi[v_k, u_k]}.$$
  • Method 3 (M3):
    $$t_{k+1} = v_k - n\,\frac{(4+3n)^3\,x_k + n\,y_k\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}{-32n\left(4 + 7n + 3n^2\right)x_k + (4+3n)^3\,x_k^2 + 4n\left(16 + 8n(3+y_k) + n^2(9+8y_k)\right)}\cdot\frac{\Phi(t_k)}{\Phi[u_k, t_k] + \Phi[v_k, u_k]}.$$
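As a concrete illustration, Method 1 (M1) can be sketched in Python as follows. The quadratic test function and starting point are our own illustrative choices; the default b = 0.5 corresponds to the variant M1a used later in the experiments.

```python
# Sketch of Method 1 (M1): the two-step derivative-free scheme (3) with the
# weight function (30). Phi has a root of known multiplicity n.
def m1(phi, t, n, b=0.5, iters=4):
    for _ in range(iters):
        ft = phi(t)
        if ft == 0.0:
            break
        u = t + b * ft
        if u == t:                          # step below floating-point resolution
            break
        dd_ut = (phi(u) - ft) / (u - t)     # Phi[u_k, t_k]
        v = t - n * ft / dd_ut              # first step of (3)
        fv, fu = phi(v), phi(u)
        dd_vu = (fv - fu) / (v - u)         # Phi[v_k, u_k]
        x = (fv / ft) ** (1.0 / n)          # x_k = (Phi(v_k)/Phi(t_k))^(1/n)
        y = (fv / fu) ** (1.0 / n)          # y_k = (Phi(v_k)/Phi(u_k))^(1/n)
        q = ((4 + 3 * n) * x + 8 * (1 + n) * x ** 2 + n * y) / (4 * n)
        t = v - n * q * ft / (dd_ut + dd_vu)  # second step of (3)
    return t

# (t - 2)^2 has a double root at t = 2 (illustrative choice):
root = m1(lambda t: (t - 2.0) ** 2, 2.1, n=2)
```

Starting from t = 2.1, a single step already reduces the error from 1e-1 to roughly 2e-6, consistent with fourth-order convergence; a second step reaches machine precision in double arithmetic.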

4. Numerical Results

We choose the combinations of (30)–(32) with scheme (3) for b = 0.5, denoted by M1a, M2a and M3a, respectively. In addition, we consider the combinations of (30)–(32) in the scheme (3) with b = 0.4, denoted by M1b, M2b and M3b, respectively. The examples not only illustrate the feasibility and effectiveness of our methods, but also confirm the theoretical results. In order to verify the computational order of convergence (COC), we use the following formula (see [21]):
$$\mathrm{COC} = \frac{\ln\left|\left(t_{k+2} - \alpha\right)/\left(t_{k+1} - \alpha\right)\right|}{\ln\left|\left(t_{k+1} - \alpha\right)/\left(t_k - \alpha\right)\right|}, \qquad k = 1, 2, \ldots \tag{33}$$
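Formula (33) can be evaluated directly from three consecutive iterates; the synthetic quadratically convergent sequence below is our own example, chosen so the expected order is exactly 2.

```python
# Computational order of convergence (COC), formula (33), from three
# consecutive iterates t_k, t_{k+1}, t_{k+2} and the known root alpha.
import math

def coc(tk, tk1, tk2, alpha):
    return math.log(abs((tk2 - alpha) / (tk1 - alpha))) / \
           math.log(abs((tk1 - alpha) / (tk - alpha)))

# Synthetic sequence with errors 1e-2, 1e-4, 1e-8 (exact quadratic decay):
order = coc(1e-2, 1e-4, 1e-8, 0.0)
```

For a fourth-order method the computed value approaches 4 as the iterates enter the region of fast convergence.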
The performance of the new algorithms is compared with the following six known methods:
(i) 
Li–Liao–Cheng method (LLC) [5]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \frac{n(n-2)\left(\frac{n}{n+2}\right)^{-n}\Phi'(v_k) - n^2\,\Phi'(t_k)}{\Phi'(t_k) - \left(\frac{n}{n+2}\right)^{-n}\Phi'(v_k)}\cdot\frac{\Phi(t_k)}{2\,\Phi'(t_k)}.$$
(ii) 
Li–Cheng–Neta method (LCN) [6]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \alpha_1\,\frac{\Phi(t_k)}{\Phi'(v_k)} - \frac{\Phi(t_k)}{\alpha_2\,\Phi'(t_k) + \alpha_3\,\Phi'(v_k)},$$
where
$$\alpha_1 = -\frac{1}{2}\left(\frac{n}{n+2}\right)^n \frac{n\left(n^4 + 4n^3 - 16n - 16\right)}{n^3 - 4n + 8}, \quad \alpha_2 = -\frac{\left(n^3 - 4n + 8\right)^2}{n\left(n^4 + 4n^3 - 4n^2 - 16n + 16\right)\left(n^2 + 2n - 4\right)}, \quad \alpha_3 = \frac{n^2\left(n^3 - 4n + 8\right)}{\left(\frac{n}{n+2}\right)^n\left(n^4 + 4n^3 - 4n^2 - 16n + 16\right)\left(n^2 + 2n - 4\right)}.$$
(iii) 
Sharma–Sharma method (SSM) [7]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \frac{n}{8}\left[\left(n^3 - 4n + 8\right) - (n+2)^2\left(\frac{n}{n+2}\right)^n\frac{\Phi'(t_k)}{\Phi'(v_k)}\left(2(n-1) - (n+2)\left(\frac{n}{n+2}\right)^n\frac{\Phi'(t_k)}{\Phi'(v_k)}\right)\right]\frac{\Phi(t_k)}{\Phi'(t_k)}.$$
(iv) 
Zhou–Chen–Song method (ZCS) [8]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \frac{n}{8}\left[n^3\left(\frac{n+2}{n}\right)^{2n}\left(\frac{\Phi'(v_k)}{\Phi'(t_k)}\right)^2 - 2n^2(n+3)\left(\frac{n+2}{n}\right)^n\frac{\Phi'(v_k)}{\Phi'(t_k)} + \left(n^3 + 6n^2 + 8n + 8\right)\right]\frac{\Phi(t_k)}{\Phi'(t_k)}.$$
(v) 
Soleymani–Babajee–Lotfi method (SBM) [10]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \frac{\Phi'(v_k)\,\Phi(t_k)}{q_1\left(\Phi'(v_k)\right)^2 + q_2\,\Phi'(v_k)\,\Phi'(t_k) + q_3\left(\Phi'(t_k)\right)^2},$$
where
$$q_1 = \frac{1}{16}\,n^{3-n}(n+2)^n, \qquad q_2 = \frac{8 - n(n+2)\left(n^2 - 2\right)}{8n}, \qquad q_3 = \frac{1}{16}\,(n-2)\,n^{n-1}(n+2)^{3-n}.$$
(vi) 
Kansal–Kanwar–Bhatia method (KKB) [13]:
$$v_k = t_k - \frac{2n}{n+2}\,\frac{\Phi(t_k)}{\Phi'(t_k)}, \qquad t_{k+1} = t_k - \frac{n}{4}\,\Phi(t_k)\left[1 + \frac{n^4\,p^{-2n}\left(\frac{\Phi'(v_k)}{\Phi'(t_k)}\right)^2\left(p^n - 1\right)}{8\left(2p^n + n\left(p^n - 1\right)\right)}\right]\times\frac{1}{\Phi'(t_k)}\left[4 - 2n + n^2\left(p^{-n} - 1\right) - \frac{p^n\left(2p^n + n\left(p^n - 1\right)\right)^2\,\Phi'(t_k)}{\Phi'(v_k)}\right],$$
where $p = \dfrac{n}{n+2}$.
The calculations are performed in Mathematica [22] using multiple-precision arithmetic. We consider five nonlinear problems for the comparisons, which are depicted in Table 1. The numerical results in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 include:
Table 1. The considered five nonlinear problems for numerical experiments.
Table 2. Multiplicity of considered problems in Table 1.
Table 3. Results of the methods for problem $\Phi_1(t)$.
Table 4. Results of the methods for problem $\Phi_2(t)$.
Table 5. Results of the methods for problem $\Phi_3(t)$.
Table 6. Results of the methods for problem $\Phi_4(t)$.
Table 7. Results of the methods for problem $\Phi_5(t)$.
  • The multiplicity of the corresponding function.
  • The number of iterations $(k)$ based on the stopping criterion $|t_{k+1} - t_k| + |\Phi(t_k)| < 10^{-100}$.
  • The first three estimated errors $|t_{k+1} - t_k|$ in the iterations.
  • The computational order of convergence (COC) using (33).
  • The CPU time required for the execution of the program, computed by the Mathematica command “TimeUsed[ ]”.
The configurations of the used computer for the calculation work are given below:
Processor: Intel(R) Pentium(R) CPU B960@2.20GHz,
Make: HP
Installed memory (RAM): 4 GB
Windows edition: Windows 7 Professional
System type: 32-bit operating system.
The multiplicity of the above considered functions is calculated by the following formula:
$$n = \frac{t_k - t_0}{d_k - d_0}, \tag{34}$$
where $d_k = \dfrac{\Phi(t_k)}{g_k}$ and $g_k = \dfrac{\Phi\left(t_k + \Phi(t_k)\right) - \Phi(t_k)}{\Phi(t_k)}$. We applied this formula in our method M1 and obtained the multiplicity; the obtained results are depicted in Table 2. Similarly, we can apply M2 and M3.
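A short sketch of estimate (34) in Python: $g_k$ is a forward difference with step $h = \Phi(t_k)$, so $d_k \approx \Phi(t_k)/\Phi'(t_k) \approx \epsilon_k/n$ near a root of multiplicity n. The cubic test function and the two sample points are our own illustrative choices.

```python
# Derivative-free multiplicity estimate (34): d_k = Phi(t_k)/g_k, where
# g_k = (Phi(t_k + Phi(t_k)) - Phi(t_k)) / Phi(t_k) approximates Phi'(t_k).
def multiplicity_estimate(phi, t0, tk):
    def d(t):
        ft = phi(t)
        g = (phi(t + ft) - ft) / ft   # forward difference with step Phi(t)
        return ft / g
    return (tk - t0) / (d(tk) - d(t0))

# (x - 1)^3 has a root of multiplicity 3 at x = 1 (our own test function):
n_est = multiplicity_estimate(lambda x: (x - 1.0) ** 3, 1.1, 1.05)
```

The estimate is not an integer in general; rounding it to the nearest integer recovers the multiplicity once the sample points are reasonably close to the root.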
Remark 3. 
The numerical results in Table 3, Table 4, Table 5, Table 6 and Table 7 show that the proposed techniques exhibit consistent convergence behavior. Our methods require the same or a smaller number of iterations on the considered problems in comparison to the other mentioned methods, and the tables demonstrate that the estimated errors of the presented algorithms are smaller than those of the other methods. Furthermore, our methods produce the results in a shorter time than the existing ones.

5. Conclusions

This study proposed optimal derivative-free numerical techniques for multiple zeros of nonlinear equations. Fourth-order convergence was established under standard hypotheses. The applicability of the new techniques was illustrated on five nonlinear equations arising from real-life situations. The performance of our methods was compared with that of other existing methods of identical order. The numerical results show that the new derivative-free algorithms are superior to the existing ones.

Author Contributions

All authors have contributed equally to the development of the paper. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. D-130-713-1443.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (D-130-713-1443). The authors, therefore, acknowledge with thanks DSR technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Sahlan, M.N.; Afshari, H. Three new approaches for solving a class of strongly nonlinear two-point boundary value problems. Bound. Value Probl. 2021, 2021, 60. [Google Scholar] [CrossRef]
  3. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  4. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367. [Google Scholar] [CrossRef]
  5. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
  6. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef] [Green Version]
  7. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  8. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Math. Appl. 2011, 235, 4199–4206. [Google Scholar] [CrossRef] [Green Version]
  9. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef] [Green Version]
  10. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  11. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef] [Green Version]
  12. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comp. Appl. Math. 2018, 333, 131–156. [Google Scholar] [CrossRef]
  13. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
  14. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef] [Green Version]
  15. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
  16. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709. [Google Scholar] [CrossRef]
  17. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An Optimal Derivative Free Family of Chebyshev-Halley’s Method for Multiple Zeros. Mathematics 2021, 9, 546. [Google Scholar] [CrossRef]
  18. Kumar, S.; Kumar, D.; Sharma, J.R.; Jäntschi, L. A Family of Derivative Free Optimal Fourth Order Methods for Computing Multiple Roots. Symmetry 2020, 12, 1969. [Google Scholar] [CrossRef]
  19. Kumar, S.; Kumar, D.; Sharma, J.R.; Argyros, I.K. An efficient class of fourth-order derivative-free method for multiple-roots. Int. J. Nonlinear Sci. Numer. Simul. 2021, 2021, 000010151520200161. [Google Scholar] [CrossRef]
  20. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  21. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  22. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  23. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  24. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
