Article

Optimal Fourth, Eighth and Sixteenth Order Methods by Using Divided Difference Techniques and Their Basins of Attraction and Its Application

by Yanlin Tao 1,† and Kalyanasundaram Madhu 2,*,†
1 School of Computer Science and Engineering, Qujing Normal University, Qujing 655011, China
2 Department of Mathematics, Saveetha Engineering College, Chennai 602105, India
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2019, 7(4), 322; https://doi.org/10.3390/math7040322
Submission received: 26 February 2019 / Revised: 24 March 2019 / Accepted: 26 March 2019 / Published: 30 March 2019
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems)

Abstract

The principal objective of this work is to propose fourth, eighth and sixteenth order schemes for solving a nonlinear equation. In terms of computational cost, per iteration the fourth order method uses two evaluations of the function and one evaluation of the first derivative; the eighth order method uses three evaluations of the function and one evaluation of the first derivative; and the sixteenth order method uses four evaluations of the function and one evaluation of the first derivative. Thus all of these methods satisfy the Kung–Traub optimality conjecture. In addition, the theoretical convergence properties of our schemes are fully explored with the help of the main theorem that establishes the convergence order. The performance and effectiveness of our optimal iteration functions are compared with existing competitors on some standard academic problems. The conjugacy maps of the presented methods and of other existing eighth order methods are discussed, and their basins of attraction are also given to demonstrate their dynamical behavior in the complex plane. As applications, we apply the new schemes to find the optimal launch angle in a projectile motion problem and to solve Planck's radiation law problem.

1. Introduction

One of the most frequent problems in engineering, scientific computing and applied mathematics in general is that of solving a nonlinear equation f(x) = 0. In most cases, whenever real problems are faced, such as weather forecasting, accurate positioning of satellite systems in the desired orbit, measurement of earthquake magnitudes and other high-level engineering problems, only approximate solutions can be obtained; only in rare cases is it possible to solve the governing equations exactly. The most familiar method for solving a nonlinear equation is Newton's iteration method. The local order of convergence of Newton's method is two, and it is an optimal method with two function evaluations per iterative step.
In the past decade, several higher order iterative methods have been developed and analyzed for solving nonlinear equations that improve classical methods such as Newton's method, Chebyshev's method, Halley's iteration method, etc. As the order of convergence increases, so does the number of function evaluations per step. Hence, an index called the efficiency index was introduced in [1] to measure the balance between these quantities. Kung and Traub [2] conjectured that the order of convergence of any multi-point method without memory using d function evaluations cannot exceed the bound $2^{d-1}$, the optimal order. Thus the optimal order for three evaluations per iteration is four, for four evaluations per iteration it is eight, and so on. Recently, some fourth and eighth order optimal iterative methods have been developed (see [3,4,5,6,7,8,9,10,11,12,13,14] and references therein). A more extensive list of references, as well as a survey of the progress made in the class of multi-point methods, can be found in the recent book by Petkovic et al. [11].
This paper is organized as follows. Optimal fourth, eighth and sixteenth order methods are developed by using divided difference techniques in Section 2. In Section 3, the convergence order is analyzed. In Section 4, numerical examples are used to compare the proposed methods with other known optimal methods. The projectile motion problem and Planck's radiation law problem are discussed in Section 5, where the presented methods are applied together with some existing ones. In Section 6, we obtain the conjugacy maps of these methods to make a comparison from a dynamical point of view. In Section 7, the proposed methods are studied in the complex plane using basins of attraction. Section 8 gives concluding remarks.

2. Design of Optimal Fourth, Eighth and Sixteenth Order Methods

Definition 1
([15]). If the sequence $\{x_n\}$ tends to a limit $x^*$ in such a way that
$$ \lim_{n \to \infty} \frac{x_{n+1} - x^*}{(x_n - x^*)^p} = C $$
for some $p \ge 1$, then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If $p = 1$, $p = 2$ or $p = 3$, the convergence is said to be linear, quadratic or cubic, respectively.
Let $e_n = x_n - x^*$; then the relation
$$ e_{n+1} = C e_n^p + O(e_n^{p+1}) = O(e_n^p) $$
is called the error equation. The value of p is called the order of convergence of the method.
Definition 2
([1]). The Efficiency Index is given by
$$ EI = p^{1/d}, $$
where d is the total number of new function evaluations (the values of f and its derivatives) per iteration.
Let $x_{n+1} = \psi(x_n)$ define an Iterative Function (IF). Let $x_{n+1}$ be determined by new information at $x_n, \phi_1(x_n), \ldots, \phi_i(x_n)$, $i \ge 1$. No old information is reused. Thus,
$$ x_{n+1} = \psi\big(x_n, \phi_1(x_n), \ldots, \phi_i(x_n)\big). $$
Then $\psi$ is called a multipoint IF without memory.
The Newton (also called Newton–Raphson) IF (2ndNR) is given by
$$ \psi_{2ndNR}(x) = x - \frac{f(x)}{f'(x)}. \qquad (4) $$
The 2ndNR IF is a one-point IF with two function evaluations, and it satisfies the Kung–Traub conjecture with $d = 2$. Further, $EI_{2ndNR} = 1.414$.

2.1. An Optimal Fourth Order Method

We attempt to obtain a new optimal fourth order IF as follows. Consider the two-step Newton method
$$ \psi_{4thNR}(x) = \psi_{2ndNR}(x) - \frac{f(\psi_{2ndNR}(x))}{f'(\psi_{2ndNR}(x))}. $$
This scheme has fourth order convergence but requires four function evaluations, so it is not an optimal method. To obtain an optimal method we need to remove one function evaluation while preserving the convergence order, and so we estimate $f'(\psi_{2ndNR}(x))$ by the polynomial
$$ q(t) = a_0 + a_1 (t - x) + a_2 (t - x)^2, \qquad (6) $$
which satisfies
$$ q(x) = f(x), \qquad q'(x) = f'(x), \qquad q(\psi_{2ndNR}(x)) = f(\psi_{2ndNR}(x)). $$
On imposing the above conditions on Equation (6), we obtain three linear equations in the three unknowns $a_0$, $a_1$ and $a_2$. Let us define the divided differences
$$ f[y, x] = \frac{f(y) - f(x)}{y - x}, \qquad f[y, x, x] = \frac{f[y, x] - f'(x)}{y - x}. $$
From the conditions we get $a_0 = f(x)$, $a_1 = f'(x)$ and $a_2 = f[\psi_{2ndNR}(x), x, x]$, respectively, by using divided difference techniques. Now we have the estimate
$$ f'(\psi_{2ndNR}(x)) \approx q'(\psi_{2ndNR}(x)) = a_1 + 2 a_2\,(\psi_{2ndNR}(x) - x). $$
Finally, we propose a new optimal fourth order method as
$$ \psi_{4thYM}(x) = \psi_{2ndNR}(x) - \frac{f(\psi_{2ndNR}(x))}{f'(x) + 2 f[\psi_{2ndNR}(x), x, x]\,(\psi_{2ndNR}(x) - x)}. \qquad (7) $$
The efficiency index of the method (7) is $EI_{4thYM} = 1.587$.
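To make the construction concrete, the following is a minimal sketch of one step of the fourth order method (7). It is not the authors' code; Python is used purely for illustration (the experiments in Section 4 were run in Matlab with multiprecision arithmetic), and the helper name ym4_step is hypothetical.

```python
# Sketch of one iteration of the fourth order method (7).
# Assumes f is smooth and f'(x) != 0 near the simple root.
def ym4_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                      # Newton point, psi_2ndNR(x)
    fy = f(y)
    fyx = (fy - fx) / (y - x)             # divided difference f[y, x]
    fyxx = (fyx - dfx) / (y - x)          # divided difference f[y, x, x]
    return y - fy / (dfx + 2.0 * fyxx * (y - x))

# Usage on the test function f3(x) = x^3 + 4x^2 - 10 with x0 = 0.9
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x
x = 0.9
for _ in range(4):
    x = ym4_step(f, df, x)
print(x)   # approaches 1.3652300134140968...
```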

2.2. An Optimal Eighth Order Method

Next, we attempt to obtain a new optimal eighth order IF in the following way:
$$ \psi_{8thYM}(x) = \psi_{4thYM}(x) - \frac{f(\psi_{4thYM}(x))}{f'(\psi_{4thYM}(x))}. $$
This scheme has eighth order convergence but requires five function evaluations, so it is not an optimal method. To obtain an optimal method we need to remove one function evaluation while preserving the convergence order, and so we estimate $f'(\psi_{4thYM}(x))$ by the polynomial
$$ q(t) = b_0 + b_1 (t - x) + b_2 (t - x)^2 + b_3 (t - x)^3, \qquad (8) $$
which satisfies
$$ q(x) = f(x), \qquad q'(x) = f'(x), \qquad q(\psi_{2ndNR}(x)) = f(\psi_{2ndNR}(x)), \qquad q(\psi_{4thYM}(x)) = f(\psi_{4thYM}(x)). $$
On imposing the above conditions on (8), we obtain four linear equations in the four unknowns $b_0$, $b_1$, $b_2$ and $b_3$. From the conditions we get $b_0 = f(x)$ and $b_1 = f'(x)$. To find $b_2$ and $b_3$, we solve the following equations:
$$ f(\psi_{2ndNR}(x)) = f(x) + f'(x)(\psi_{2ndNR}(x) - x) + b_2 (\psi_{2ndNR}(x) - x)^2 + b_3 (\psi_{2ndNR}(x) - x)^3, $$
$$ f(\psi_{4thYM}(x)) = f(x) + f'(x)(\psi_{4thYM}(x) - x) + b_2 (\psi_{4thYM}(x) - x)^2 + b_3 (\psi_{4thYM}(x) - x)^3. $$
Thus, by applying divided differences, the above equations simplify to
$$ b_2 + b_3 (\psi_{2ndNR}(x) - x) = f[\psi_{2ndNR}(x), x, x], $$
$$ b_2 + b_3 (\psi_{4thYM}(x) - x) = f[\psi_{4thYM}(x), x, x]. $$
Solving these two equations, we have
$$ b_2 = \frac{f[\psi_{2ndNR}(x), x, x]\,(\psi_{4thYM}(x) - x) - f[\psi_{4thYM}(x), x, x]\,(\psi_{2ndNR}(x) - x)}{\psi_{4thYM}(x) - \psi_{2ndNR}(x)}, \qquad b_3 = \frac{f[\psi_{4thYM}(x), x, x] - f[\psi_{2ndNR}(x), x, x]}{\psi_{4thYM}(x) - \psi_{2ndNR}(x)}. $$
Differentiating $q(t)$ and evaluating at $\psi_{4thYM}(x)$, we have the estimate
$$ f'(\psi_{4thYM}(x)) \approx q'(\psi_{4thYM}(x)) = b_1 + 2 b_2 (\psi_{4thYM}(x) - x) + 3 b_3 (\psi_{4thYM}(x) - x)^2. $$
Finally, we propose a new optimal eighth order method as
$$ \psi_{8thYM}(x) = \psi_{4thYM}(x) - \frac{f(\psi_{4thYM}(x))}{f'(x) + 2 b_2 (\psi_{4thYM}(x) - x) + 3 b_3 (\psi_{4thYM}(x) - x)^2}. \qquad (12) $$
The efficiency index of the method (12) is $EI_{8thYM} = 1.682$. Remark that the method resembles a particular case of the method of Khan et al. [16]; however, they used weight functions to develop their methods, whereas we use divided difference techniques to develop the proposed methods. One may regard the methods 4thYM and 8thYM as reconstructions of the methods of Khan et al. [16].
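A corresponding sketch of one step of the eighth order method (12) is given below. Again this is an illustrative Python sketch rather than the authors' implementation; it simply reuses the divided differences f[·, x, x] and the coefficients b2, b3 derived above.

```python
# Sketch of one iteration of the eighth order method (12).
def ym8_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                                   # psi_2ndNR(x)
    fy = f(y)
    fyxx = ((fy - fx) / (y - x) - dfx) / (y - x)       # f[y, x, x]
    z = y - fy / (dfx + 2.0 * fyxx * (y - x))          # psi_4thYM(x)
    fz = f(z)
    fzxx = ((fz - fx) / (z - x) - dfx) / (z - x)       # f[z, x, x]
    b3 = (fzxx - fyxx) / (z - y)
    b2 = (fyxx * (z - x) - fzxx * (y - x)) / (z - y)
    return z - fz / (dfx + 2.0 * b2 * (z - x) + 3.0 * b3 * (z - x) ** 2)
```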

2.3. An Optimal Sixteenth Order Method

Next, we attempt to obtain a new optimal sixteenth order IF in the following way:
$$ \psi_{16thYM}(x) = \psi_{8thYM}(x) - \frac{f(\psi_{8thYM}(x))}{f'(\psi_{8thYM}(x))}. $$
This scheme has sixteenth order convergence but requires six function evaluations, so it is not an optimal method. To obtain an optimal method we need to remove one function evaluation while preserving the convergence order, and so we estimate $f'(\psi_{8thYM}(x))$ by the polynomial
$$ q(t) = c_0 + c_1 (t - x) + c_2 (t - x)^2 + c_3 (t - x)^3 + c_4 (t - x)^4, \qquad (13) $$
which satisfies
$$ q(x) = f(x), \quad q'(x) = f'(x), \quad q(\psi_{2ndNR}(x)) = f(\psi_{2ndNR}(x)), \quad q(\psi_{4thYM}(x)) = f(\psi_{4thYM}(x)), \quad q(\psi_{8thYM}(x)) = f(\psi_{8thYM}(x)). $$
On imposing the above conditions on (13), we obtain five linear equations in the five unknowns $c_0$, $c_1$, $c_2$, $c_3$ and $c_4$. From the conditions we get $c_0 = f(x)$ and $c_1 = f'(x)$. To find $c_2$, $c_3$ and $c_4$, we solve the following equations:
$$ f(\psi_{2ndNR}(x)) = f(x) + f'(x)(\psi_{2ndNR}(x) - x) + c_2 (\psi_{2ndNR}(x) - x)^2 + c_3 (\psi_{2ndNR}(x) - x)^3 + c_4 (\psi_{2ndNR}(x) - x)^4, $$
$$ f(\psi_{4thYM}(x)) = f(x) + f'(x)(\psi_{4thYM}(x) - x) + c_2 (\psi_{4thYM}(x) - x)^2 + c_3 (\psi_{4thYM}(x) - x)^3 + c_4 (\psi_{4thYM}(x) - x)^4, $$
$$ f(\psi_{8thYM}(x)) = f(x) + f'(x)(\psi_{8thYM}(x) - x) + c_2 (\psi_{8thYM}(x) - x)^2 + c_3 (\psi_{8thYM}(x) - x)^3 + c_4 (\psi_{8thYM}(x) - x)^4. $$
Thus, by applying divided differences, the above equations simplify to
$$ c_2 + c_3 (\psi_{2ndNR}(x) - x) + c_4 (\psi_{2ndNR}(x) - x)^2 = f[\psi_{2ndNR}(x), x, x], $$
$$ c_2 + c_3 (\psi_{4thYM}(x) - x) + c_4 (\psi_{4thYM}(x) - x)^2 = f[\psi_{4thYM}(x), x, x], $$
$$ c_2 + c_3 (\psi_{8thYM}(x) - x) + c_4 (\psi_{8thYM}(x) - x)^2 = f[\psi_{8thYM}(x), x, x]. \qquad (14) $$
Solving Equation (14), we have
With $S_1 = \psi_{2ndNR}(x) - x$, $S_2 = \psi_{4thYM}(x) - x$, $S_3 = \psi_{8thYM}(x) - x$ and $D = -S_1^2 S_2 + S_1 S_2^2 + S_1^2 S_3 - S_2^2 S_3 - S_1 S_3^2 + S_2 S_3^2$,
$$ c_2 = \frac{f[\psi_{2ndNR}(x),x,x]\,(-S_2^2 S_3 + S_2 S_3^2) + f[\psi_{4thYM}(x),x,x]\,(S_1^2 S_3 - S_1 S_3^2) + f[\psi_{8thYM}(x),x,x]\,(-S_1^2 S_2 + S_1 S_2^2)}{D}, $$
$$ c_3 = \frac{f[\psi_{2ndNR}(x),x,x]\,(S_2^2 - S_3^2) + f[\psi_{4thYM}(x),x,x]\,(-S_1^2 + S_3^2) + f[\psi_{8thYM}(x),x,x]\,(S_1^2 - S_2^2)}{D}, $$
$$ c_4 = \frac{f[\psi_{2ndNR}(x),x,x]\,(-S_2 + S_3) + f[\psi_{4thYM}(x),x,x]\,(S_1 - S_3) + f[\psi_{8thYM}(x),x,x]\,(-S_1 + S_2)}{D}. $$
Differentiating $q(t)$ and evaluating at $\psi_{8thYM}(x)$, we have the estimate
$$ f'(\psi_{8thYM}(x)) \approx q'(\psi_{8thYM}(x)) = c_1 + 2 c_2 (\psi_{8thYM}(x) - x) + 3 c_3 (\psi_{8thYM}(x) - x)^2 + 4 c_4 (\psi_{8thYM}(x) - x)^3. $$
Finally, we propose a new optimal sixteenth order method as
$$ \psi_{16thYM}(x) = \psi_{8thYM}(x) - \frac{f(\psi_{8thYM}(x))}{f'(x) + 2 c_2 (\psi_{8thYM}(x) - x) + 3 c_3 (\psi_{8thYM}(x) - x)^2 + 4 c_4 (\psi_{8thYM}(x) - x)^3}. \qquad (16) $$
The efficiency index of the method (16) is $EI_{16thYM} = 1.741$.
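For the sixteenth order method (16), the coefficients c2, c3, c4 may either be taken from the closed forms above or obtained by solving the small linear system (14) numerically; the sketch below (illustrative Python, not the authors' code, helper name ym16_step hypothetical) takes the latter route.

```python
# Sketch of one iteration of the sixteenth order method (16); c2, c3, c4 are
# obtained by solving the 3x3 system (14) instead of using the closed forms.
import numpy as np

def ym16_step(f, df, x):
    fx, dfx = f(x), df(x)
    g = lambda t, ft: ((ft - fx) / (t - x) - dfx) / (t - x)   # f[t, x, x]
    y = x - fx / dfx                                          # psi_2ndNR(x)
    fy = f(y)
    z = y - fy / (dfx + 2.0 * g(y, fy) * (y - x))             # psi_4thYM(x)
    fz = f(z)
    b3 = (g(z, fz) - g(y, fy)) / (z - y)
    b2 = (g(y, fy) * (z - x) - g(z, fz) * (y - x)) / (z - y)
    w = z - fz / (dfx + 2.0 * b2 * (z - x) + 3.0 * b3 * (z - x) ** 2)  # psi_8thYM(x)
    fw = f(w)
    S = np.array([y - x, z - x, w - x])
    rhs = np.array([g(y, fy), g(z, fz), g(w, fw)])
    c2, c3, c4 = np.linalg.solve(np.column_stack([np.ones(3), S, S**2]), rhs)
    d = w - x
    return w - fw / (dfx + 2.0 * c2 * d + 3.0 * c3 * d**2 + 4.0 * c4 * d**3)
```

In standard double precision the benefit of orders beyond eight is limited by round-off; the multiprecision arithmetic used in Section 4 is needed to observe the full sixteenth order behavior.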

3. Convergence Analysis

In this section, we carry out the convergence analysis of the proposed IFs with the help of the Mathematica software.
Theorem 1.
Let $f : D \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently smooth function having continuous derivatives. If $f(x)$ has a simple root $x^*$ in the open interval D and $x_0$ is chosen in a sufficiently small neighborhood of $x^*$, then the method 4thYM (7) has local fourth order convergence and the method 8thYM (12) has local eighth order convergence.
Proof. 
Let $e = x - x^*$ and $c[j] = \frac{f^{(j)}(x^*)}{j!\, f'(x^*)}$, $j = 2, 3, 4, \ldots$. Expanding $f(x)$ and $f'(x)$ about $x^*$ by Taylor's method, we have
$$ f(x) = f'(x^*)\big(e + c[2] e^2 + c[3] e^3 + c[4] e^4 + c[5] e^5 + c[6] e^6 + c[7] e^7 + c[8] e^8 + \cdots\big) \qquad (17) $$
and
$$ f'(x) = f'(x^*)\big(1 + 2 c[2] e + 3 c[3] e^2 + 4 c[4] e^3 + 5 c[5] e^4 + 6 c[6] e^5 + 7 c[7] e^6 + 8 c[8] e^7 + 9 c[9] e^8 + \cdots\big). \qquad (18) $$
Thus,
ψ 2 n d N R ( x ) = x + c [ 2 ] e 2 + 2 c [ 2 ] 2 + 2 c [ 3 ] e 3 + 4 c [ 2 ] 3 7 c [ 2 ] c [ 3 ] + 3 c [ 4 ] e 4 + ( 8 c [ 2 ] 4 + 20 c [ 2 ] 2 c [ 3 ] 6 c [ 3 ] 2 10 c [ 2 ] c [ 4 ] + 4 c [ 5 ] ) e 5 + ( 16 c [ 2 ] 5 52 c [ 2 ] 3 c [ 3 ] + 28 c [ 2 ] 2 c [ 4 ] 17 c [ 3 ] c [ 4 ] + c [ 2 ] ( 33 c [ 3 ] 2 13 c [ 5 ] ) + 5 c [ 6 ] ) e 6 2 ( 16 c [ 2 ] 6 64 c [ 2 ] 4 c [ 3 ] 9 c [ 3 ] 3 + 36 c [ 2 ] 3 c [ 4 ] + 6 c [ 4 ] 2 + 9 c [ 2 ] 2 ( 7 c [ 3 ] 2 2 c [ 5 ] ) + 11 c [ 3 ] c [ 5 ] + c [ 2 ] ( 46 c [ 3 ] c [ 4 ] + 8 c [ 6 ] ) 3 c [ 7 ] ) e 7 + ( 64 c [ 2 ] 7 304 c [ 2 ] 5 c [ 3 ] + 176 c [ 2 ] 4 c [ 4 ] + 75 c [ 3 ] 2 c [ 4 ] + c [ 2 ] 3 ( 408 c [ 3 ] 2 92 c [ 5 ] ) 31 c [ 4 ] c [ 5 ] 27 c [ 3 ] c [ 6 ] + c [ 2 ] 2 ( 348 c [ 3 ] c [ 4 ] + 44 c [ 6 ] ) + c [ 2 ] ( 135 c [ 3 ] 3 + 64 c [ 4 ] 2 + 118 c [ 3 ] c [ 5 ] 19 c [ 7 ] ) + 7 c [ 8 ] ) e 8 + .
Expanding f ( ψ 2 n d N R ( x ) ) about x by Taylor’s method, we have
f ( ψ 2 n d N R ( x ) ) = f ( x ) ( c [ 2 ] e 2 + 2 c [ 2 ] 2 + 2 c [ 3 ] e 3 + 5 c [ 2 ] 3 7 c [ 2 ] c [ 3 ] + 3 c [ 4 ] e 4 2 ( 6 c [ 2 ] 4 12 c [ 2 ] 2 c [ 3 ] + 3 c [ 3 ] 2 + 5 c [ 2 ] c [ 4 ] 2 c [ 5 ] ) e 5 + ( 28 c [ 2 ] 5 73 c [ 2 ] 3 c [ 3 ] + 34 c [ 2 ] 2 c [ 4 ] 17 c [ 3 ] c [ 4 ] + c [ 2 ] ( 37 c [ 3 ] 2 13 c [ 5 ] ) + 5 c [ 6 ] ) e 6 2 ( 32 c [ 2 ] 6 103 c [ 2 ] 4 c [ 3 ] 9 c [ 3 ] 3 + 52 c [ 2 ] 3 c [ 4 ] + 6 c [ 4 ] 2 + c [ 2 ] 2 ( 80 c [ 3 ] 2 22 c [ 5 ] ) + 11 c [ 3 ] c [ 5 ] + c [ 2 ] ( 52 c [ 3 ] c [ 4 ] + 8 c [ 6 ] ) 3 c [ 7 ] ) e 7 + ( 144 c [ 2 ] 7 552 c [ 2 ] 5 c [ 3 ] + 297 c [ 2 ] 4 c [ 4 ] + 75 c [ 3 ] 2 c [ 4 ] + 2 c [ 2 ] 3 ( 291 c [ 3 ] 2 67 c [ 5 ] ) 31 c [ 4 ] c [ 5 ] 27 c [ 3 ] c [ 6 ] + c [ 2 ] 2 ( 455 c [ 3 ] c [ 4 ] + 54 c [ 6 ] ) + c [ 2 ] ( 147 c [ 3 ] 3 + 73 c [ 4 ] 2 + 134 c [ 3 ] c [ 5 ] 19 c [ 7 ] ) + 7 c [ 8 ] ) e 8 + . )
Using Equations (17)–(20) in the divided difference formulas, we have
f [ ψ 2 n d N R ( x ) , x , x ] = f ( x ) ( c [ 2 ] + 2 c [ 3 ] e + c [ 2 ] c [ 3 ] + 3 c [ 4 ] e 2 + 2 ( c [ 2 ] 2 c [ 3 ] + c [ 3 ] 2 + c [ 2 ] c [ 4 ] + 2 c [ 5 ] ) e 3 + 4 c [ 2 ] 3 c [ 3 ] 3 c [ 2 ] 2 c [ 4 ] + 7 c [ 3 ] c [ 4 ] + c [ 2 ] ( 7 c [ 3 ] 2 + 3 c [ 5 ] ) + 5 c [ 6 ] e 4 + ( 8 c [ 2 ] 4 c [ 3 ] 6 c [ 3 ] 3 + 4 c [ 2 ] 3 c [ 4 ] + 4 c [ 2 ] 2 ( 5 c [ 3 ] 2 c [ 5 ] ) + 10 c [ 3 ] c [ 5 ] + 4 c [ 2 ] ( 5 c [ 3 ] c [ 4 ] + c [ 6 ] ) + 6 ( c [ 4 ] 2 + c [ 7 ] ) ) e 5 + ( 16 c [ 2 ] 5 c [ 3 ] 4 c [ 2 ] 4 c [ 4 ] 25 c [ 3 ] 2 c [ 4 ] + 17 c [ 4 ] c [ 5 ] + c [ 2 ] 3 ( 52 c [ 3 ] 2 + 5 c [ 5 ] ) + c [ 2 ] 2 ( 46 c [ 3 ] c [ 4 ] 5 c [ 6 ] ) + 13 c [ 3 ] c [ 6 ] + c [ 2 ] ( 33 c [ 3 ] 3 14 c [ 4 ] 2 26 c [ 3 ] c [ 5 ] + 5 c [ 7 ] ) + 7 c [ 8 ] ) e 6 + . )
Substituting Equations (18)–(21) into Equation (7), we obtain, after simplifications,
ψ 4 t h Y M ( x ) = x + c [ 2 ] 3 c [ 2 ] c [ 3 ] e 4 2 2 c [ 2 ] 4 4 c [ 2 ] 2 c [ 3 ] + c [ 3 ] 2 + c [ 2 ] c [ 4 ] e 5 + ( 10 c [ 2 ] 5 30 c [ 2 ] 3 c [ 3 ] + 12 c [ 2 ] 2 c [ 4 ] 7 c [ 3 ] c [ 4 ] + 3 c [ 2 ] ( 6 c [ 3 ] 2 c [ 5 ] ) ) e 6 2 ( 10 c [ 2 ] 6 40 c [ 2 ] 4 c [ 3 ] 6 c [ 3 ] 3 + 20 c [ 2 ] 3 c [ 4 ] + 3 c [ 4 ] 2 + 8 c [ 2 ] 2 ( 5 c [ 3 ] 2 c [ 5 ] ) + 5 c [ 3 ] c [ 5 ] + c [ 2 ] ( 26 c [ 3 ] c [ 4 ] + 2 c [ 6 ] ) ) e 7 + ( 36 c [ 2 ] 7 178 c [ 2 ] 5 c [ 3 ] + 101 c [ 2 ] 4 c [ 4 ] + 50 c [ 3 ] 2 c [ 4 ] + 3 c [ 2 ] 3 ( 84 c [ 3 ] 2 17 c [ 5 ] ) 17 c [ 4 ] c [ 5 ] 13 c [ 3 ] c [ 6 ] + c [ 2 ] 2 ( 209 c [ 3 ] c [ 4 ] + 20 c [ 6 ] ) + c [ 2 ] ( 91 c [ 3 ] 3 + 37 c [ 4 ] 2 + 68 c [ 3 ] c [ 5 ] 5 c [ 7 ] ) ) e 8 + .
Expanding f ( ψ 4 t h Y M ( x ) ) about x by Taylor’s method, we have
f ( ψ 4 t h Y M ( x ) ) = f ( x ) ( c [ 2 ] 3 c [ 2 ] c [ 3 ] e 4 2 2 c [ 2 ] 4 4 c [ 2 ] 2 c [ 3 ] + c [ 3 ] 2 + c [ 2 ] c [ 4 ] e 5 + ( 10 c [ 2 ] 5 30 c [ 2 ] 3 c [ 3 ] + 12 c [ 2 ] 2 c [ 4 ] 7 c [ 3 ] c [ 4 ] + 3 c [ 2 ] ( 6 c [ 3 ] 2 c [ 5 ] ) ) e 6 2 ( 10 c [ 2 ] 6 40 c [ 2 ] 4 c [ 3 ] 6 c [ 3 ] 3 + 20 c [ 2 ] 3 c [ 4 ] + 3 c [ 4 ] 2 + 8 c [ 2 ] 2 ( 5 c [ 3 ] 2 c [ 5 ] ) + 5 c [ 3 ] c [ 5 ] + c [ 2 ] ( 26 c [ 3 ] c [ 4 ] + 2 c [ 6 ] ) ) e 7 + ( 37 c [ 2 ] 7 180 c [ 2 ] 5 c [ 3 ] + 101 c [ 2 ] 4 c [ 4 ] + 50 c [ 3 ] 2 c [ 4 ] + c [ 2 ] 3 ( 253 c [ 3 ] 2 51 c [ 5 ] ) 17 c [ 4 ] c [ 5 ] 13 c [ 3 ] c [ 6 ] + c [ 2 ] 2 ( 209 c [ 3 ] c [ 4 ] + 20 c [ 6 ] ) + c [ 2 ] ( 91 c [ 3 ] 3 + 37 c [ 4 ] 2 + 68 c [ 3 ] c [ 5 ] 5 c [ 7 ] ) ) e 8 + . )
Now,
f [ ψ 4 t h Y M ( x ) , x , x ] = f ( x ) ( c [ 2 ] + 2 c [ 3 ] e + 3 c [ 4 ] e 2 + 4 c [ 5 ] e 3 + c [ 2 ] 3 c [ 3 ] c [ 2 ] c [ 3 ] 2 + 5 c [ 6 ] e 4 + 4 c [ 2 ] 4 c [ 3 ] + 8 c [ 2 ] 2 c [ 3 ] 2 2 c [ 3 ] 3 + 2 c [ 2 ] 3 c [ 4 ] 4 c [ 2 ] c [ 3 ] c [ 4 ] + 6 c [ 7 ] e 5 + ( 10 c [ 2 ] 5 c [ 3 ] 8 c [ 2 ] 4 c [ 4 ] + 28 c [ 2 ] 2 c [ 3 ] c [ 4 ] 11 c [ 3 ] 2 c [ 4 ] + c [ 2 ] 3 ( 30 c [ 3 ] 2 + 3 c [ 5 ] ) + 2 c [ 2 ] ( 9 c [ 3 ] 3 2 c [ 4 ] 2 3 c [ 3 ] c [ 5 ] ) + 7 c [ 8 ] ) e 6 + . )
Substituting Equations (19)–(21), (23) and (24) into Equation (12), we obtain, after simplifications,
$$ \psi_{8thYM}(x) - x^* = c[2]^2\,\big(c[2]^2 - c[3]\big)\,\big(c[2]^3 - c[2] c[3] + c[4]\big)\, e^8 + O(e^9). \qquad (25) $$
Hence, from Equations (22) and (25), we conclude that the convergence orders of 4thYM and 8thYM are four and eight, respectively. □
The following theorem is stated without proof; the proof can be worked out with the help of Mathematica.
Theorem 2.
Let $f : D \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently smooth function having continuous derivatives. If $f(x)$ has a simple root $x^*$ in the open interval D and $x_0$ is chosen in a sufficiently small neighborhood of $x^*$, then the method (16) has local sixteenth order convergence and it satisfies the error equation
$$ \psi_{16thYM}(x) - x^* = c[2]^4\,\big(c[2]^2 - c[3]\big)^2\,\big(c[2]^3 - c[2] c[3] + c[4]\big)\,\big(c[2]^4 - c[2]^2 c[3] + c[2] c[4] - c[5]\big)\, e^{16} + O(e^{17}). $$

4. Numerical Examples

In this section, numerical results on some test functions are compared for the new methods 4thYM, 8thYM and 16thYM with some existing eighth order methods and Newton's method. Numerical computations have been carried out in Matlab with 500 significant digits. We have used the stopping criterion $|x_N - x_{N-1}| < \epsilon$ for the iterative process, where $\epsilon = 10^{-50}$ and N is the number of iterations required for convergence. The computational order of convergence is given by ([17])
$$ \rho = \frac{\ln\left|(x_N - x_{N-1})/(x_{N-1} - x_{N-2})\right|}{\ln\left|(x_{N-1} - x_{N-2})/(x_{N-2} - x_{N-3})\right|}. $$
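As an illustration, ρ can be evaluated directly from the last four stored iterates; the helper below is a hypothetical Python sketch, not code from the paper.

```python
import math

# Computational order of convergence from iterates x[N-3], x[N-2], x[N-1], x[N]
def computational_order(xs):
    e1 = abs(xs[-1] - xs[-2])
    e2 = abs(xs[-2] - xs[-3])
    e3 = abs(xs[-3] - xs[-4])
    return math.log(e1 / e2) / math.log(e2 / e3)
```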
We consider the following iterative methods for solving nonlinear equations for the purpose of comparison: ψ 4 t h S B , a method proposed by Sharma et al. [18]:
$$ y = x - \frac{2 f(x)}{3 f'(x)}, \qquad \psi_{4thSB}(x) = x - \left(-\frac{1}{2} + \frac{9}{8}\frac{f'(x)}{f'(y)} + \frac{3}{8}\frac{f'(y)}{f'(x)}\right)\frac{f(x)}{f'(x)}. \qquad (26) $$
ψ 4 t h C L N D , a method proposed by Chun et al. [19]:
$$ y = x - \frac{2 f(x)}{3 f'(x)}, \qquad \psi_{4thCLND}(x) = x - \frac{16 f(x) f'(x)}{-5 f'(x)^2 + 30 f'(x) f'(y) - 9 f'(y)^2}. \qquad (27) $$
ψ 4 t h S J , a method proposed by Singh et al. [20]:
$$ y = x - \frac{2}{3}\frac{f(x)}{f'(x)}, \qquad \psi_{4thSJ}(x) = x - \left(\frac{17}{8} - \frac{9}{4}\frac{f'(y)}{f'(x)} + \frac{9}{8}\left(\frac{f'(y)}{f'(x)}\right)^2\right)\left(\frac{7}{4} - \frac{3}{4}\frac{f'(y)}{f'(x)}\right)\frac{f(x)}{f'(x)}. \qquad (28) $$
ψ 8 t h K T , a method proposed by Kung-Traub [2]:
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = y - \frac{f(y) f(x)}{(f(x) - f(y))^2}\,\frac{f(x)}{f'(x)}, $$
$$ \psi_{8thKT}(x) = z - \frac{f(x)}{f'(x)}\,\frac{f(x) f(y) f(z)}{(f(x) - f(y))^2}\,\frac{f(x)^2 + f(y)(f(y) - f(z))}{(f(x) - f(z))^2 (f(y) - f(z))}. \qquad (29) $$
ψ 8 t h L W , a method proposed by Liu et al. [8]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = y - \frac{f(x)}{f(x) - 2 f(y)}\,\frac{f(y)}{f'(x)}, $$
$$ \psi_{8thLW}(x) = z - \frac{f(z)}{f'(x)}\left[\left(\frac{f(x) - f(y)}{f(x) - 2 f(y)}\right)^2 + \frac{f(z)}{f(y) - f(z)} + \frac{4 f(z)}{f(x) + f(z)}\right]. \qquad (30) $$
ψ 8 t h P N P D , a method proposed by Petkovic et al. [11]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = x - \frac{f(y)\, f(x)^2}{(f(x) - f(y))^2\, f'(x)} - \frac{f(x)}{f'(x)}, $$
$$ \psi_{8thPNPD}(x) = z - \frac{f(z)}{f'(x)}\left[\varphi(t) + \frac{f(z)}{f(y) - f(z)} + \frac{4 f(z)}{f(x)}\right], \qquad (31) $$
where $\varphi(t) = 1 + 2t + 2t^2 - t^3$ and $t = \frac{f(y)}{f(x)}$.
ψ 8 t h S A 1 , a method proposed by Sharma et al. [12]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = y - \left(3 - \frac{2 f[y,x]}{f'(x)}\right)\frac{f(y)}{f'(x)}, \qquad \psi_{8thSA1}(x) = z - \frac{f(z)}{f'(x)}\,\frac{f'(x) - f[y,x] + f[z,y]}{2 f[z,y] - f[z,x]}. \qquad (32) $$
ψ 8 t h S A 2 , a method proposed by Sharma et al. [13]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = y - \frac{f(y)}{2 f[y,x] - f'(x)}, \qquad \psi_{8thSA2}(x) = z - \frac{f[z,y]}{f[z,x]}\,\frac{f(z)}{2 f[z,y] - f[z,x]}. \qquad (33) $$
ψ 8 t h C F G T , a method proposed by Cordero et al. [6]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = y - \frac{f(y)}{f'(x)}\,\frac{1}{1 - 2t + t^2 - t^3/2}, \qquad \psi_{8thCFGT}(x) = z - \frac{1 + 3r}{1 + r}\,\frac{f(z)}{f[z,y] + f[z,x,x]\,(z - y)}, \qquad r = \frac{f(z)}{f(x)}. \qquad (34) $$
ψ 8 t h C T V , a method proposed by Cordero et al. [7]
$$ y = x - \frac{f(x)}{f'(x)}, \qquad z = x - \frac{1 - t}{1 - 2t}\,\frac{f(x)}{f'(x)}, \qquad \psi_{8thCTV}(x) = z - \left(\frac{1 - t}{1 - 2t} - v\right)^2\frac{1}{1 - 3v}\,\frac{f(z)}{f'(x)}, \qquad v = \frac{f(z)}{f(y)}. \qquad (35) $$
Table 1 shows the efficiency indices of the new methods with some known methods.
The following test functions and their simple zeros for our study are given below [10]:
$$ f_1(x) = \sin(2\cos x) - 1 - x^2 + e^{\sin(x^3)}, \qquad x^* = -0.7848959876612125352\ldots $$
$$ f_2(x) = x e^{x^2} - \sin^2 x + 3\cos x + 5, \qquad x^* = -1.2076478271309189270\ldots $$
$$ f_3(x) = x^3 + 4x^2 - 10, \qquad x^* = 1.3652300134140968457\ldots $$
$$ f_4(x) = \sin x + \cos x + x, \qquad x^* = -0.4566247045676308244\ldots $$
$$ f_5(x) = x - 2\sin x, \qquad x^* = 1.8954942670339809471\ldots $$
$$ f_6(x) = x^2 + \sin\!\left(\frac{x}{5}\right) - \frac{1}{4}, \qquad x^* = 0.4099920179891371316\ldots $$
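For reference, this test set can be transcribed directly into code. The sketch below is illustrative Python (roots truncated to double precision, whereas the experiments in this section use 500 significant digits); the pairs can be fed to any of the step functions sketched in Section 2.

```python
import math

# (function, reference root) pairs for the test functions f1, ..., f6
tests = [
    (lambda x: math.sin(2.0 * math.cos(x)) - 1.0 - x**2
               + math.exp(math.sin(x**3)),             -0.7848959876612125),
    (lambda x: x * math.exp(x**2) - math.sin(x)**2
               + 3.0 * math.cos(x) + 5.0,              -1.2076478271309189),
    (lambda x: x**3 + 4.0 * x**2 - 10.0,                1.3652300134140968),
    (lambda x: math.sin(x) + math.cos(x) + x,          -0.4566247045676308),
    (lambda x: x - 2.0 * math.sin(x),                   1.8954942670339809),
    (lambda x: x**2 + math.sin(x / 5.0) - 0.25,         0.4099920179891371),
]
```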
Table 2 shows the corresponding results for f1–f6. We observe that the proposed method 4thYM converges in a smaller or equal number of iterations and with the least error when compared to the other methods. Note that the 4thSB and 4thSJ methods diverge for the function f5. Hence, the proposed method 4thYM can be considered competitive with the existing equivalent methods.
Also, Table 3, Table 4 and Table 5 show the corresponding results for f1–f6. The computational order of convergence agrees with the theoretical order of convergence for all the functions. Note that the 8thPNPD method diverges for the function f5, while all the other compared methods converge. Moreover, f1 has the least error with 8thCFGT, f2 with 8thCTV, f3 and f4 with 8thYM, f5 with 8thSA2, and f6 with 8thCFGT. The proposed 16thYM method converges in fewer iterations and with the least error for all the tested functions. Hence, 16thYM can be considered competitive with the existing equivalent methods.

5. Applications to Some Real World Problems

5.1. Projectile Motion Problem

We consider the classical projectile problem [21,22] in which a projectile is launched from a tower of height h > 0, with initial speed v and at an angle θ with respect to the horizontal, onto a hill defined by a function ω, called the impact function, which depends on the horizontal distance x. We wish to find the optimal launch angle θm which maximizes the horizontal distance. In our calculations we neglect air resistance.
The path function y = P ( x ) that describes the motion of the projectile is given by
$$ P(x) = h + x\tan\theta - \frac{g x^2}{2 v^2}\sec^2\theta. \qquad (36) $$
When the projectile hits the hill, P(x) = ω(x); for each value of θ there is a value of x at which this happens. We wish to find the value of θ that maximizes this x.
$$ \omega(x) = P(x) = h + x\tan\theta - \frac{g x^2}{2 v^2}\sec^2\theta. \qquad (37) $$
Differentiating Equation (37) implicitly w.r.t. θ, we have
$$ \omega'(x)\,\frac{dx}{d\theta} = x\sec^2\theta + \frac{dx}{d\theta}\tan\theta - \frac{g}{v^2}\left(x^2\sec^2\theta\tan\theta + x\,\frac{dx}{d\theta}\sec^2\theta\right). \qquad (38) $$
Setting $dx/d\theta = 0$ in Equation (38), we have
$$ x_m = \frac{v^2}{g}\cot\theta_m \qquad (39) $$
or
$$ \theta_m = \arctan\left(\frac{v^2}{g\, x_m}\right). \qquad (40) $$
An enveloping parabola is a path that encloses and intersects all possible paths. Henelsmith [23] derived an enveloping parabola by maximizing the height of the projectile for a given horizontal distance x, which gives the path that encloses all possible paths. Let $w = \tan\theta$; then Equation (36) becomes
$$ y = P(x) = h + x w - \frac{g x^2}{2 v^2}\left(1 + w^2\right). \qquad (41) $$
Differentiating Equation (41) with respect to w and setting $dy/dw = 0$, Henelsmith obtained
$$ \frac{dy}{dw} = x - \frac{g x^2}{v^2}\, w = 0 \;\Longrightarrow\; w = \frac{v^2}{g x}, $$
so that the enveloping parabola is defined by
$$ y_m = \rho(x) = h + \frac{v^2}{2 g} - \frac{g x^2}{2 v^2}. $$
The solution of the projectile problem requires first finding xm, which satisfies ρ(x) = ω(x), and then solving for θm in Equation (40): we want the point at which the enveloping parabola ρ intersects the impact function ω, and then the angle θ that corresponds to this point on the enveloping parabola. We choose a linear impact function ω(x) = 0.4x with h = 10 and v = 20, and we let g = 9.8. Then we apply our IFs starting from x0 = 30 to solve the nonlinear equation
$$ f(x) = \rho(x) - \omega(x) = h + \frac{v^2}{2 g} - \frac{g x^2}{2 v^2} - 0.4 x, $$
whose root is $x_m = 36.102990117\ldots$ and
$$ \theta_m = \arctan\left(\frac{v^2}{g\, x_m}\right) = 48.5^{\circ}. $$
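The computation can be reproduced in a few lines of code. The sketch below is illustrative Python (not the authors' code) using the parameter values assumed above (h = 10, v = 20, g = 9.8, ω(x) = 0.4x, x0 = 30) and the fourth order step of Section 2.1, restated here for self-containment.

```python
import math

# One step of the fourth order method (7), restated for self-containment.
def ym4_step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    fyxx = ((fy - fx) / (y - x) - dfx) / (y - x)
    return y - fy / (dfx + 2.0 * fyxx * (y - x))

h, v, g = 10.0, 20.0, 9.8
f = lambda x: h + v**2 / (2.0 * g) - g * x**2 / (2.0 * v**2) - 0.4 * x
df = lambda x: -g * x / v**2 - 0.4

x = 30.0
for _ in range(4):
    x = ym4_step(f, df, x)
theta_m = math.degrees(math.atan(v**2 / (g * x)))
print(x, theta_m)   # about 36.10299 and 48.5 degrees
```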
Figure 1 shows the intersection of the path function, the enveloping parabola and the linear impact function for this application. The approximate solutions are calculated correct to 500 significant figures. The stopping criterion $|x_N - x_{N-1}| < \epsilon$, where $\epsilon = 10^{-50}$, is used. Table 6 shows that the proposed method 16thYM converges better than the other compared methods. Also, we observe that the computational order of convergence agrees with the theoretical order of convergence.

5.2. Planck’s Radiation Law Problem

We consider the following Planck’s radiation law problem found in [24]:
$$ \varphi(\lambda) = \frac{8 \pi c h \lambda^{-5}}{e^{ch/(\lambda k T)} - 1}, \qquad (44) $$
which calculates the energy density within an isothermal blackbody. Here, λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, k is Boltzmann's constant, h is Planck's constant and c is the speed of light. Suppose we would like to determine the wavelength λ which corresponds to the maximum energy density φ(λ). From (44), we get
$$ \varphi'(\lambda) = \frac{8 \pi c h \lambda^{-6}}{e^{ch/(\lambda k T)} - 1}\left(\frac{(ch/(\lambda k T))\, e^{ch/(\lambda k T)}}{e^{ch/(\lambda k T)} - 1} - 5\right) = A \cdot B. $$
It can be checked that a maximum of φ occurs when B = 0, that is, when
$$ \frac{(ch/(\lambda k T))\, e^{ch/(\lambda k T)}}{e^{ch/(\lambda k T)} - 1} = 5. $$
Here, putting $x = ch/(\lambda k T)$, the above equation becomes
$$ 1 - \frac{x}{5} = e^{-x}. \qquad (45) $$
Define
$$ f(x) = e^{-x} - 1 + \frac{x}{5}. \qquad (46) $$
The aim is to find a root of the equation f(x) = 0. Obviously, the root x = 0 is not taken for discussion. As argued in [24], the left-hand side of (45) is zero for x = 5 while $e^{-5} \approx 6.74 \times 10^{-3}$. Hence, another root of the equation f(x) = 0 is expected to occur near x = 5. The approximate root of Equation (46), obtained with $x_0 = 3$, is $x^* \approx 4.96511423174427630369$. Consequently, the wavelength of radiation λ corresponding to the maximum energy density is approximately
$$ \lambda \approx \frac{c h}{4.96511423174427630369\; k T}. $$
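For reference, the root can be reproduced in a few lines. The following sketch is illustrative Python (not the authors' code), using the plain Newton iteration 2ndNR from x0 = 3 rather than the higher order schemes; the constants c, h, k, T remain symbolic.

```python
import math

f = lambda x: math.exp(-x) - 1.0 + x / 5.0
df = lambda x: -math.exp(-x) + 0.2

x = 3.0
for _ in range(8):                 # Newton's method (2ndNR)
    x = x - f(x) / df(x)
print(x)                           # about 4.9651142317442763
```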
Table 7 shows that the proposed method 16thYM converges better than the other compared methods. Also, we observe that the computational order of convergence agrees with the theoretical order of convergence.
Hereafter, we will study the optimal fourth and eighth order methods along with Newton’s method.

6. Corresponding Conjugacy Maps for Quadratic Polynomials

In this section, we discuss the rational maps $R_p$ arising from 2ndNR and from the proposed methods 4thYM and 8thYM applied to a generic polynomial with simple roots.
Theorem 3.
(2ndNR) [18]. For the rational map $R_p(z)$ arising from Newton's method (4) applied to $p(z) = (z - a)(z - b)$, $a \ne b$, $R_p(z)$ is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to
S ( z ) = z 2 .
Theorem 4.
(4thYM). For the rational map $R_p(z)$ arising from the proposed method (7) applied to $p(z) = (z - a)(z - b)$, $a \ne b$, $R_p(z)$ is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to
S ( z ) = z 4 .
Proof. 
Let $p(z) = (z - a)(z - b)$, $a \ne b$, and let M be the Möbius transformation given by $M(z) = (z - a)/(z - b)$, with inverse $M^{-1}(z) = (z b - a)/(z - 1)$, which may be considered as a map from $\mathbb{C} \cup \{\infty\}$. We then have
$$ S(z) = M \circ R_p \circ M^{-1}(z) = M\!\left(R_p\!\left(\frac{z b - a}{z - 1}\right)\right) = z^4. \;\;\square $$
Theorem 5.
(8thYM). For the rational map $R_p(z)$ arising from the proposed method (12) applied to $p(z) = (z - a)(z - b)$, $a \ne b$, $R_p(z)$ is conjugate via the Möbius transformation $M(z) = (z - a)/(z - b)$ to
S ( z ) = z 8 .
Proof. 
Let $p(z) = (z - a)(z - b)$, $a \ne b$, and let M be the Möbius transformation given by $M(z) = (z - a)/(z - b)$, with inverse $M^{-1}(z) = (z b - a)/(z - 1)$, which may be considered as a map from $\mathbb{C} \cup \{\infty\}$. We then have
$$ S(z) = M \circ R_p \circ M^{-1}(z) = M\!\left(R_p\!\left(\frac{z b - a}{z - 1}\right)\right) = z^8. \;\;\square $$
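Theorems 3–5 can also be checked symbolically. The sketch below is illustrative Python/SymPy (not part of the paper) and verifies the Newton case of Theorem 3; the 4thYM and 8thYM cases are verified the same way by replacing the Newton map with the corresponding iteration function.

```python
import sympy as sp

z, a, b = sp.symbols('z a b')
p = (z - a) * (z - b)
R = z - p / sp.diff(p, z)                 # Newton's rational map R_p(z)
Minv = (z * b - a) / (z - 1)              # inverse of M(z) = (z - a)/(z - b)
S = (R.subs(z, Minv) - a) / (R.subs(z, Minv) - b)
print(sp.simplify(S))                     # prints z**2, i.e. S(z) = z^2
```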
Remark 1.
The corresponding conjugacy maps for the methods (29)–(35) are stated without proof; they can be worked out with the help of Mathematica.
Remark 2.
All the maps obtained above are of the form $S(z) = z^p R(z)$, where R(z) is either unity or a rational function and p is the order of the method.

7. Basins of Attraction

The study of the dynamical behavior of the rational function associated with an iterative method gives important information about the convergence and stability of the method. The basic definitions and dynamical concepts of rational functions can be found in [4,25].
We take a square $[-2, 2] \times [-2, 2]$ of $256 \times 256$ points and we apply our iterative methods starting at every $z^{(0)}$ in the square. If the sequence generated by the iterative method reaches a zero $z_j$ of the polynomial with tolerance $|f(z^{(k)})| < 10^{-4}$ within a maximum of 100 iterations, we decide that $z^{(0)}$ is in the basin of attraction of this zero. If the iterative method starting at $z^{(0)}$ reaches a zero in N iterations ($N \le 100$), then we mark this point $z^{(0)}$ with a color if $|z^{(N)} - z_j| < 10^{-4}$. If N > 50, we conclude that the starting point has diverged and we assign a dark blue color. Let ND be the number of diverging points, and we count the number of starting points which converge in 1, 2, 3, 4, 5 or more than 5 iterations. In the following, we describe the basins of attraction of Newton's method and of some higher order Newton-type methods for finding the complex roots of the polynomials $p_1(z) = z^2 - 1$, $p_2(z) = z^3 - 1$ and $p_3(z) = z^5 - 1$.
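A minimal sketch of this classification (illustrative Python, with the grid, tolerance and iteration cap assumed from the description above) is shown for Newton's method applied to p1(z) = z^2 - 1; the higher order methods are handled identically by swapping the iteration step.

```python
import numpy as np

roots = [1.0, -1.0]                       # zeros of p1(z) = z^2 - 1

def newton_basin(z, max_iter=100, tol=1e-4):
    """Return (index of attracting root or -1, iterations used)."""
    for n in range(1, max_iter + 1):
        if z == 0:                        # derivative vanishes; treat as divergent
            return -1, n
        z = z - (z * z - 1.0) / (2.0 * z) # Newton step for z^2 - 1
        for j, r in enumerate(roots):
            if abs(z - r) < tol:
                return j, n
    return -1, max_iter                   # no root reached: divergent point

xs = np.linspace(-2.0, 2.0, 256)
grid = np.array([[newton_basin(complex(x, y))[0] for x in xs] for y in xs])
```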
Figure 2 and Figure 3 show the polynomiographs of the methods for the polynomial p1(z). We can see that the methods 2ndNR, 4thYM, 8thSA2 and 8thYM performed very nicely. The methods 4thSB, 4thSJ, 8thKT, 8thLW, 8thPNPD, 8thSA1, 8thCFGT and 8thCTV show some chaotic behavior near the boundary points. The method 4thCLND is sensitive to the choice of the initial guess in this case.
Figure 2 and Figure 4 show the polynomiographs of the methods for the polynomial p2(z). We can see that the methods 2ndNR, 4thYM, 8thSA2 and 8thYM performed very nicely. The methods 4thSB, 8thKT, 8thLW and 8thCTV show some chaotic behavior near the boundary points. The methods 4thCLND, 4thSJ, 8thPNPD, 8thSA1 and 8thCFGT are sensitive to the choice of the initial guess in this case.
Figure 2 and Figure 5 show the polynomiographs of the methods for the polynomial p3(z). We can see that the methods 4thYM, 8thSA2 and 8thYM show some chaotic behavior near the boundary points. The methods 2ndNR, 4thSB, 4thCLND, 4thSJ, 8thKT, 8thLW, 8thPNPD, 8thSA1, 8thCFGT and 8thCTV are sensitive to the choice of the initial guess in this case. In Table 8, Table 9 and Table 10, we classify the number of converging and diverging grid points for each iterative method.
We note that a point z0 belongs to the Julia set if and only if the dynamics in a neighborhood of z0 displays sensitive dependence on the initial conditions, so that nearby initial conditions lead to wildly different behavior after a number of iterations. This is why some of the methods produce divergent points. The common boundaries of these basins of attraction constitute the Julia set of the iteration function. It is clear that one has to use quantitative measures to distinguish between the methods, since just viewing the basins of attraction can lead to different conclusions.
In order to summarize the results, we have compared the mean number of iterations and the total number of functional evaluations (TNFE) for each polynomial and each method in Table 11. The best method based on the comparison in Table 11 is 8thSA2. The method with the fewest functional evaluations per point is 8thSA2, followed closely by 4thYM. The fastest method is 8thSA2, followed closely by 8thYM. The method with the highest number of functional evaluations, and the slowest, is 8thPNPD.

8. Concluding Remarks and Future Work

In this work, we have developed optimal fourth, eighth and sixteenth order iterative methods for solving nonlinear equations using divided difference approximations. The methods require three function evaluations to reach fourth order convergence, four function evaluations to reach eighth order convergence and five function evaluations to reach sixteenth order convergence. In the sense of the convergence analysis and the numerical examples, the Kung–Traub conjecture is satisfied. We have tested some examples using the proposed schemes and some known schemes, which illustrate the superiority of the proposed method 16thYM. Also, the proposed methods and some existing methods have been applied to the projectile motion problem and to Planck's radiation law problem. The results obtained are interesting and encouraging for the new method 16thYM. The numerical experiments suggest that the new methods are a valuable alternative for solving nonlinear equations. Finally, we have also compared the basins of attraction of various fourth and eighth order methods in the complex plane.
Future work includes:
  • We are investigating the extension of the proposed scheme to optimal methods of arbitrarily high order based on Newton's method, as in [26].
  • We are also investigating derivative-free variants, together with their dynamical behavior and local convergence, as in [27,28].

Author Contributions

The contributions of both the authors have been similar. Both of them have worked together to develop the present manuscript.

Funding

This paper is supported by three project funds: 1. National College Students Innovation and entrepreneurship training program of Ministry of Education of the People’s Republic of China in 2017: Internet Animation Company in Minority Areas–Research Model of “Building Dream” Animation Company. (Project number: 201710684001). 2. Yunnan Provincial Science and Technology Plan Project University Joint Project 2017: Research on Boolean Satisfiability Dividing and Judging Method Based on Clustering and Partitioning (Project number: 2017FH001-056). 3. Qujing Normal college scientific research fund special project (Project number: 2018zx003).

Acknowledgments

The authors would like to thank the editors and referees for the valuable comments and for the suggestions to improve the readability of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  2. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar]
  3. Amat, S.; Busquier, S.; Plaza, S. Dynamics of a family of third-order iterative methods that do not require using second derivatives. Appl. Math. Comput. 2004, 154, 735–746. [Google Scholar]
  4. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 3–35. [Google Scholar]
  5. Babajee, D.K.R.; Madhu, K.; Jayaraman, J. A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations. Afrika Matematika 2016, 27, 865–876. [Google Scholar] [CrossRef]
  6. Cordero, A.; Fardi, M.; Ghasemi, M.; Torregrosa, J.R. Accelerated iterative methods for finding solutions of nonlinear equations and their dynamical behavior. Calcolo 2014, 51, 17–30. [Google Scholar]
  7. Cordero, A.; Torregrosa, J.R.; Vasileva, M.P. A family of modified ostrowski’s methods with optimal eighth order of convergence. Appl. Math. Lett. 2011, 24, 2082–2086. [Google Scholar]
  8. Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454. [Google Scholar]
  9. Madhu, K. Some New Higher Order Multi-Point Iterative Methods and Their Applications to Differential and Integral Equation and Global Positioning System. Ph.D. Thesis, Pondicherry University, Kalapet, India, June 2016. [Google Scholar]
  10. Madhu, K.; Jayaraman, J. Higher order methods for nonlinear equations and their basins of attraction. Mathematics 2016, 4, 22. [Google Scholar]
  11. Petkovic, M.S.; Neta, B.; Petkovic, L.D.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2012. [Google Scholar]
  12. Sharma, J.R.; Arora, H. An efficient family of weighted-newton methods with optimal eighth order convergence. Appl. Math. Lett. 2014, 29, 1–6. [Google Scholar]
  13. Sharma, J.R.; Arora, H. A new family of optimal eighth order methods with dynamics for nonlinear equations. Appl. Math. Comput. 2016, 273, 924–933. [Google Scholar]
  14. Soleymani, F.; Khratti, S.K.; Vanani, S.K. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2011, 25, 847–853. [Google Scholar]
  15. Wait, R. The Numerical Solution of Algebraic Equations; John Wiley and Sons: Hoboken, NJ, USA, 1979. [Google Scholar]
  16. Khan, Y.; Fardi, M.; Sayevand, K. A new general eighth-order family of iterative methods for solving nonlinear equations. Appl. Math. Lett. 2012, 25, 2262–2266. [Google Scholar]
  17. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar]
  18. Sharma, R.; Bahl, A. An optimal fourth order iterative method for solving nonlinear equations and its dynamics. J. Complex Anal. 2015, 2015, 259167. [Google Scholar]
  19. Chun, C.; Lee, M.Y.; Neta, B.; Dzunic, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438. [Google Scholar] [CrossRef]
  20. Singh, A.; Jaiswal, J.P. Several new third-order and fourth-order iterative methods for solving nonlinear equations. Int. J. Eng. Math. 2014, 2014, 828409. [Google Scholar]
  21. Babajee, D.K.R.; Madhu, K. Comparing two techniques for developing higher order two-point iterative methods for solving quadratic equations. SeMA J. 2018, 1–22. [Google Scholar] [CrossRef]
  22. Kantrowitz, R.; Neumann, M.M. Some real analysis behind optimization of projectile motion. Mediterr. J. Math. 2014, 11, 1081–1097. [Google Scholar]
  23. Henelsmith, N. Finding the Optimal Launch Angle; Whitman College: Walla Walla, WA, USA, 2016. [Google Scholar]
  24. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  25. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar]
  26. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Generating optimal derivative free iterative methods for nonlinear equations by using polynomial interpolation. Appl. Math. Comput. 2013, 57, 1950–1956. [Google Scholar] [CrossRef]
  27. Argyros, I.K.; Magrenan, A.A.; Orcos, L. Local convergence and a chemical application of derivative free root finding methods with one parameter based on interpolation. J. Math. Chem. 2016, 54, 1404–1416. [Google Scholar]
  28. Zafar, F.; Cordero, A.; Torregrosa, J.R. An efficient family of optimal eighth-order multiple root finders. Mathematics 2018, 6, 310. [Google Scholar]
Figure 1. The enveloping parabola with linear impact function.
Figure 2. Basins of attraction of 2ndNR for the polynomials p1(z), p2(z), p3(z).
Figure 3. Basins of attraction for p1(z) = z^2 - 1.
Figure 4. Basins of attraction for p2(z) = z^3 - 1.
Figure 5. Basins of attraction for p3(z) = z^5 - 1.
Table 1. Comparison of Efficiency Indices.
Methods    p     d    EI
2ndNR      2     2    1.414
4thSB      4     3    1.587
4thCLND    4     3    1.587
4thSJ      4     3    1.587
4thYM      4     3    1.587
8thKT      8     4    1.682
8thLW      8     4    1.682
8thPNPD    8     4    1.682
8thSA1     8     4    1.682
8thSA2     8     4    1.682
8thCFGT    8     4    1.682
8thCTV     8     4    1.682
8thYM      8     4    1.682
16thYM     16    5    1.741
Table 2. Numerical results for nonlinear equations.
Methods    f1(x), x0 = -0.9                         f2(x), x0 = -1.6
           N    |x1 - x0|    |xN - xN-1|    ρ       N    |x1 - x0|    |xN - xN-1|    ρ
2 n d N R (4)70.10807.7326  × 10 74 1.9990.20449.2727  × 10 74 1.99
4 t h S B (26)40.11509.7275  × 10 64 3.9950.33431.4237  × 10 65 3.99
4 t h C L N D (27)40.11501.4296  × 10 64 3.9950.38011.1080  × 10 72 3.99
4 t h S J (28)40.11503.0653  × 10 62 3.9950.31909.9781  × 10 56 3.99
4 t h Y M (7)40.11506.0046  × 10 67 3.9950.37377.2910  × 10 120 4.00
Methods    f3(x), x0 = 0.9                          f4(x), x0 = -1.9
2 n d N R (4)80.62631.3514  × 10 72 2.0081.95291.6092  × 10 72 1.99
4 t h S B (26)50.50184.5722  × 10 106 3.9951.59406.0381  × 10 92 3.99
4 t h C L N D (27)50.50124.7331  × 10 108 3.9951.58942.7352  × 10 93 3.99
4 t h S J (28)50.47673.0351  × 10 135 3.9951.57769.5025  × 10 95 3.99
4 t h Y M (7)50.47352.6396  × 10 156 3.9951.55191.4400  × 10 102 3.99
Methods    f5(x), x0 = 1.2                          f6(x), x0 = 0.8
2 n d N R (4)92.41231.3564  × 10 83 1.9980.30563.2094  × 10 72 1.99
4 t h S B (26) Diverge 50.38012.8269  × 10 122 3.99
4 t h C L N D (27)140.05666.8760  × 10 134 3.9950.38127.8638  × 10 127 3.99
4 t h S J (28) Diverge 50.37801.4355  × 10 114 3.99
4 t h Y M (7)61.28872.3155  × 10 149 3.9950.38401.1319  × 10 143 3.99
Table 3. Numerical results for nonlinear equations.
Methods    f1(x), x0 = -0.9                         f2(x), x0 = -1.6
           N    |x1 - x0|    |xN - xN-1|    ρ       N    |x1 - x0|    |xN - xN-1|    ρ
8 t h K T (29)30.11511.6238  × 10 61 7.9140.38767.2890  × 10 137 7.99
8 t h L W (30)30.11514.5242  × 10 59 7.9140.39041.1195  × 10 170 8.00
8 t h P N P D (31)30.11518.8549  × 10 56 7.8740.37342.3461  × 10 85 7.99
8 t h S A 1 (32)30.11513.4432  × 10 60 7.8840.39838.4343  × 10 121 8.00
8 t h S A 2 (33)30.11516.9371  × 10 67 7.9940.39275.9247  × 10 225 7.99
8 t h C F G T (34)30.11511.1715  × 10 82 7.7750.15322.0650  × 10 183 7.99
8 t h C T V (35)30.11514.4923  × 10 61 7.9440.39252.3865  × 10 252 7.99
8 t h Y M (12)30.11511.1416  × 10 70 7.9640.38968.9301  × 10 163 8.00
16 t h Y M (16)30.1151015.9930.39233.5535  × 10 85 16.20
Table 4. Numerical results for nonlinear equations.
Methods    f3(x), x0 = 0.9                          f4(x), x0 = -1.9
           N    |x1 - x0|    |xN - xN-1|    ρ       N    |x1 - x0|    |xN - xN-1|    ρ
8 t h K T (29)40.46595.0765  × 10 216 7.9941.44615.5095  × 10 204 8.00
8 t h L W (30)40.46602.7346  × 10 213 7.9941.46203.7210  × 10 146 8.00
8 t h P N P D (31)40.38219.9119  × 10 71 8.0241.38582.0603  × 10 116 7.98
8 t h S A 1 (32)40.44921.5396  × 10 122 8.0041.41702.2735  × 10 136 7.99
8 t h S A 2 (33)40.46524.1445  × 10 254 7.9841.43392.5430  × 10 166 7.99
8 t h C F G T (34)40.46542.4091  × 10 260 7.9941.44174.7007  × 10 224 7.99
8 t h C T V (35)40.46523.8782  × 10 288 8.0041.39573.7790  × 10 117 7.99
8 t h Y M (12)40.46533.5460  × 10 309 7.9941.44172.9317  × 10 229 7.99
16 t h Y M (16)30.46523.6310  × 10 154 16.1331.44341.8489  × 10 110 16.36
Table 5. Numerical results for nonlinear equations.
Methods    f5(x), x0 = 1.2                          f6(x), x0 = 0.8
           N    |x1 - x0|    |xN - xN-1|    ρ       N    |x1 - x0|    |xN - xN-1|    ρ
8 t h K T (29)51.87872.6836  × 10 182 7.9940.38986.0701  × 10 234 7.99
8 t h L W (30)640.51564.6640  × 10 161 7.9940.38986.1410  × 10 228 7.99
8 t h P N P D (31) Diverge 40.38943.6051  × 10 190 7.99
8 t h S A 1 (32)7891.98022.1076  × 10 215 9.0040.39015.9608  × 10 245 8.00
8 t h S A 2 (33)40.71615.3670  × 10 128 7.9940.39008.3398  × 10 251 8.61
8 t h C F G T (34)52.854107.9940.390007.99
8 t h C T V (35)50.61921.6474  × 10 219 9.0040.39011.0314  × 10 274 8.00
8 t h Y M (12)40.77331.3183  × 10 87 7.9840.39001.2160  × 10 286 7.99
16 t h Y M (16)40.6985016.1030.39001.1066  × 10 143 15.73
Table 6. Results of the projectile problem.
IF         N    Error               CPU time (s)    ρ
2ndNR      7    4.3980 × 10^-76     1.074036        1.99
4thYM      4    4.3980 × 10^-76     0.902015        3.99
8thKT      3    1.5610 × 10^-66     0.658235        8.03
8thLW      3    7.8416 × 10^-66     0.672524        8.03
8thPNPD    3    4.2702 × 10^-57     0.672042        8.05
8thSA1     3    1.2092 × 10^-61     0.654623        8.06
8thCTV     3    3.5871 × 10^-73     0.689627        8.02
8thYM      3    4.3980 × 10^-76     0.618145        8.02
16thYM     3    0                   0.512152        16.01
Table 7. Results of Planck's radiation law problem.
IF         N    Error                CPU time (s)    ρ
2ndNR      7    1.8205 × 10^-70      0.991020        2.00
4thYM      5    1.4688 × 10^-181     0.842220        4.00
8thKT      4    4.0810 × 10^-288     0.808787        7.99
8thLW      4    3.1188 × 10^-268     0.801304        7.99
8thPNPD    4    8.0615 × 10^-260     0.800895        7.99
8thSA1     4    1.9335 × 10^-298     0.791706        8.00
8thCTV     4    5.8673 × 10^-282     0.831006        8.00
8thYM      4    2.5197 × 10^-322     0.855137        8.00
16thYM     3    8.3176 × 10^-153     0.828053        16.52
Table 8. Results for the polynomial p1(z) = z^2 - 1.
IF N = 1 N = 2 N = 3 N = 4 N = 5 N > 5 N D
2ndNR4516782823,27220,54813,3680
4thSB34022,78429,0566836292835920
4thCLND37224,60029,1406512222426881076
4thSJ30019,81628,0085844296886000
4thYM52031,10027,520482812083600
8thKT468444,528984038201408125624
8thLW445243,23611,4083520154013800
8thPNPD273239,76813,11234801568487616
8thSA1432845,82481362564148432000
8thSA215,68045,7843696376000
8thCFGT961643,7167744291698056464
8thCTV712448,232746418926321920
8thYM834850,7925572824000
Table 9. Results for the polynomial p2(z) = z^3 - 1.
IF N = 1 N = 2 N = 3 N = 4 N = 5 N > 5 N D
2ndNR0224290811,30219,17031,9320
4thSB160981627,4389346545213,3246
4thCLND17011,24228,6109984420211,3287176
4thSJ138776025,0928260505819,2281576
4thYM27018,06430,3749862368832780
8thKT206634,24811,7526130447868620
8thLW209233,96812,1804830303094360
8thPNPD110625,71211,2583854190621,70010,452
8thSA1160836,48812,486371817809456872
8thSA2643246,850912022306402640
8thCFGT368840,74013,6964278139017447395
8thCTV353043,55411,7243220141220960
8thYM381643,59612,464363613027220
Table 10. Results for the polynomial p3(z) = z^5 - 1.
IF N = 1 N = 2 N = 3 N = 4 N = 5 N > 5 N D
2ndNR210012224106791852,188638
4thSB76385015,45818,026553222,5945324
4thCLND86447618,15017,774543419,61612,208
4thSJ62309411,71616,840568228,14219,900
4thYM142795627,42815,850572684340
8thKT95017,88420,8925675402416,111217
8thLW103218,76420,6225056344616,6161684
8thPNPD49612,77021,4726576243421,78814,236
8thSA169226,21215,0244060183417,7148814
8thSA2266241,40012,9144364189223040
8thCFGT200821,19423,7346180395884621953
8thCTV180236,63013,222411220967674350
8thYM173627,80821,1365804270463480
Table 11. Mean number of iterations (Nμ) and TNFE for each polynomial and each method.
IF         Nμ for p1(z)    Nμ for p2(z)    Nμ for p3(z)    Average    TNFE
2ndNR      4.7767          6.4317          9.8531          7.0205     14.0410
4thSB      3.0701          4.5733          9.2701          5.6378     16.9135
4thCLND    3.6644          8.6354          12.8612         8.3870     25.1610
4thSJ      3.7002          7.0909          14.5650         8.4520     25.3561
4thYM      2.6366          3.1733          4.0183          3.2760     9.8282
8thKT      2.3647          3.1270          4.4501          3.3139     13.2557
8thLW      2.3879          3.5209          6.3296          4.0794     16.3178
8thPNPD    2.9959          10.5024         12.3360         8.6114     34.4457
8thSA1     2.5097          4.5787          9.7899          5.6262     22.5044
8thSA2     1.8286          2.1559          2.5732          2.1859     8.7436
8thCFGT    2.1683          2.8029          3.4959          2.8223     11.2894
8thCTV     2.1047          2.4708          3.9573          2.8442     11.3770
8thYM      1.9828          2.3532          3.3617          2.5659     10.2636
