
# Development of Optimal Eighth Order Derivative-Free Methods for Multiple Roots of Nonlinear Equations

by Janak Raj Sharma 1,*, Sunil Kumar 1 and Ioannis K. Argyros 2,*
1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Authors to whom correspondence should be addressed.
Symmetry 2019, 11(6), 766; https://doi.org/10.3390/sym11060766
Submission received: 27 April 2019 / Revised: 30 May 2019 / Accepted: 1 June 2019 / Published: 5 June 2019
(This article belongs to the Special Issue Symmetry with Operator Theory and Equations)

## Abstract

A number of higher order iterative methods with derivative evaluations have been developed in the literature for computing multiple zeros. However, higher order methods without derivatives for multiple zeros are difficult to obtain, and hence such methods are rare in the literature. Motivated by this fact, we present a family of eighth order derivative-free methods for computing multiple zeros. Per iteration the methods require only four function evaluations; therefore, they are optimal in the sense of the Kung-Traub conjecture. Stability of the proposed class is demonstrated by means of a graphical tool, namely, basins of attraction. Boundaries of the basins are fractal-like shapes, about which the basins are symmetric. Applicability of the methods is demonstrated on different nonlinear functions, which illustrates their efficient convergence behavior. Comparison of the numerical results shows that the new derivative-free methods are good competitors to the existing optimal eighth-order techniques that require derivative evaluations.
MSC:
65H05; 41A25; 49M15

## 1. Introduction

Approximating a root (say, $\alpha$) of a function is a very challenging task. It is also very important in many diverse areas such as mathematical biology, physics, chemistry, economics and engineering, to mention a few [1,2,3,4]. This is the case since problems from these areas reduce to finding $\alpha$. Researchers utilize iterative methods for approximating $\alpha$, since closed form solutions cannot be obtained in general. In particular, we consider derivative-free methods to compute a multiple root (say, $\alpha$) with multiplicity $m$, i.e., $f^{(j)}(\alpha) = 0$, $j = 0, 1, 2, \ldots, m-1$, and $f^{(m)}(\alpha) \neq 0$, of the equation $f(x) = 0$.
A number of higher order methods, either standalone or based on Newton's method for multiple roots ([5])
$x_{k+1} = x_k - m \frac{f(x_k)}{f'(x_k)}$ (1)
have been proposed and analyzed in the literature; see [6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. Such methods require the evaluation of derivatives. However, higher order derivative-free methods for the case of multiple roots are yet to be investigated. The main reason for the non-availability of such methods is the difficulty in obtaining their order of convergence. Derivative-free methods are important in situations where the derivative of the function f is complicated to evaluate or expensive to obtain. One such derivative-free method is the classical Traub-Steffensen method [1], which replaces the derivative $f'$ in the classical Newton method by a suitable approximation based on a finite difference quotient,
$f'(x_k) \simeq \frac{f(x_k + \beta f(x_k)) - f(x_k)}{\beta f(x_k)} = f[w_k, x_k],$
where $w_k = x_k + \beta f(x_k)$ and $\beta \in \mathbb{R} \setminus \{0\}$. In this way the modified Newton method (1) becomes the modified Traub-Steffensen method
$x_{k+1} = x_k - m \frac{f(x_k)}{f[w_k, x_k]}.$ (2)
The modified Traub-Steffensen method (2) is a noticeable improvement of Newton’s iteration, since it preserves the order of convergence without using any derivative.
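To make the divided-difference idea concrete, the following Python sketch implements the modified Traub-Steffensen method (2); the function names and the cubic test function are illustrative, not taken from the paper.

```python
def traub_steffensen(f, x0, m, beta=0.01, tol=1e-12, max_iter=100):
    """Modified Traub-Steffensen method (2): x_{k+1} = x_k - m f(x_k)/f[w_k, x_k]."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:                          # landed exactly on the root
            return x
        w = x + beta * fx                    # w_k = x_k + beta f(x_k)
        dd = (f(w) - fx) / (beta * fx)       # divided difference f[w_k, x_k] ~ f'(x_k)
        if dd == 0:                          # f(w) == f(x) numerically: stop
            return x
        x_new = x - m * fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# (x - 2)^3 has a zero of multiplicity m = 3 at x = 2
root = traub_steffensen(lambda x: (x - 2.0) ** 3, x0=3.0, m=3)
```

Note that no derivative of f is evaluated; the divided difference built from the auxiliary point $w_k$ plays the role of $f'(x_k)$.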
Very recently, Sharma et al. [25] developed a family of three-point derivative-free methods with seventh order convergence to compute multiple zeros. The techniques of [25] require four function evaluations per iteration and, therefore, according to the Kung-Traub hypothesis they do not possess optimal convergence [26]. According to this hypothesis, multipoint methods without memory based on $n$ function evaluations have optimal order $2^{n-1}$. In this work, we introduce a family of eighth order derivative-free methods for computing multiple zeros that require four function evaluations per iteration; hence the family has optimal eighth order convergence in the sense of the Kung-Traub hypothesis. Such methods are usually known as optimal methods. The iterative scheme uses the modified Traub-Steffensen iteration (2) in the first step and Traub-Steffensen-like iterations in the second and third steps.
The rest of the paper is organized as follows. In Section 2, the optimal family of eighth order is developed and its local convergence is studied. In Section 3, the basins of attraction are analyzed to assess the convergence regions of the new methods. In order to check the performance and to verify the theoretical results, some numerical tests are performed in Section 4. A comparison with existing methods of the same order requiring derivatives is also shown in that section. Section 5 contains the concluding remarks.

## 2. Development of Method

Given a known multiplicity $m > 1$, we consider the following three-step scheme for multiple roots:
$y_k = x_k - m \frac{f(x_k)}{f[w_k, x_k]},$
$z_k = y_k - m h (A_1 + A_2 h) \frac{f(x_k)}{f[w_k, x_k]},$
$x_{k+1} = z_k - m u t \, G(h, t) \frac{f(x_k)}{f[w_k, x_k]},$ (3)
where $h = \frac{u}{1+u}$, $u = \left(\frac{f(y_k)}{f(x_k)}\right)^{1/m}$, $t = \left(\frac{f(z_k)}{f(y_k)}\right)^{1/m}$ and $G : \mathbb{C}^2 \to \mathbb{C}$ is analytic in a neighborhood of $(0, 0)$. Note that this is a three-step scheme whose first step is the Traub-Steffensen iteration (2), while the next two steps are Traub-Steffensen-like iterations. Notice also that the third step is weighted by the factor $G(h, t)$, which is therefore called the weight factor or weight function.
We shall find conditions under which the scheme (3) achieves eighth order convergence. In order to do this, let us prove the following theorem:
Theorem 1.
Let $f : \mathbb{C} \to \mathbb{C}$ be an analytic function in a domain enclosing a multiple zero (say, α) with multiplicity m. Suppose that the initial guess $x_0$ is sufficiently close to α; then the local order of convergence of scheme (3) is at least 8, provided that $A_1 = 1$, $A_2 = 3$, $G_{00} = 1$, $G_{10} = 2$, $G_{01} = 1$, $G_{20} = -4$, $G_{11} = 4$, $G_{30} = -72$, $|G_{02}| < \infty$ and $|G_{21}| < \infty$, where $G_{ij} = \left. \frac{\partial^{i+j}}{\partial h^i \partial t^j} G(h, t) \right|_{(0,0)}$, $i, j \in \{0, 1, 2, 3, 4\}$.
Proof.
Let $e_k = x_k - \alpha$ be the error in the k-th iteration. Taking into account that $f^{(j)}(\alpha) = 0$, $j = 0, 1, 2, \ldots, m-1$, and $f^{(m)}(\alpha) \neq 0$, the Taylor expansion of $f(x_k)$ about $\alpha$ yields
$f(x_k) = \frac{f^{(m)}(\alpha)}{m!} e_k^m \left(1 + C_1 e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 + C_5 e_k^5 + C_6 e_k^6 + C_7 e_k^7 + C_8 e_k^8 + \cdots \right),$ (4)
where $C_n = \frac{m!}{(m+n)!} \frac{f^{(m+n)}(\alpha)}{f^{(m)}(\alpha)}$ for $n \in \mathbb{N}$.
Using (4) in $w k = x k + β f ( x k )$, we obtain that
$w_k - \alpha = x_k - \alpha + \beta f(x_k) = e_k + \beta \frac{f^{(m)}(\alpha)}{m!} e_k^m \left(1 + C_1 e_k + C_2 e_k^2 + C_3 e_k^3 + C_4 e_k^4 + C_5 e_k^5 + C_6 e_k^6 + C_7 e_k^7 + C_8 e_k^8 + \cdots \right).$ (5)
Expanding $f(w_k)$ about $\alpha$,
$f(w_k) = \frac{f^{(m)}(\alpha)}{m!} e_{w_k}^m \left(1 + C_1 e_{w_k} + C_2 e_{w_k}^2 + C_3 e_{w_k}^3 + C_4 e_{w_k}^4 + \cdots \right),$ (6)
where $e_{w_k} = w_k - \alpha$.
Then the first step of (3) yields
$e_{y_k} = y_k - \alpha = \frac{C_1}{m} e_k^2 + \frac{-(1+m) C_1^2 + 2 m C_2}{m^2} e_k^3 + \frac{(1+m)^2 C_1^3 - m (4+3m) C_1 C_2 + 3 m^2 C_3}{m^3} e_k^4 - \frac{(1+m)^3 C_1^4 - 2 m (3+5m+2m^2) C_1^2 C_2 + 2 m^2 (2+m) C_2^2 + 2 m^2 (3+2m) C_1 C_3}{m^4} e_k^5 + \frac{1}{m^5} \big( (1+m)^4 C_1^5 - m (1+m)^2 (8+5m) C_1^3 C_2 + m^2 (9+14m+5m^2) C_1^2 C_3 + m^2 C_1 ((12+16m+5m^2) C_2^2 - m^2 C_4) - m^3 ((12+5m) C_2 C_3 + m^2 C_5) \big) e_k^6 - P_1 e_k^7 + P_2 e_k^8 + O(e_k^9),$ (7)
where
$P_1 = \frac{1}{m^6} \big( (1+m)^5 C_1^6 - 2 m (1+m)^3 (5+3m) C_1^4 C_2 + 6 m^2 (1+m)^2 (2+m) C_1^3 C_3 + m^2 (1+m) C_1^2 (3 (8+10m+3m^2) C_2^2 - 2 m^2 C_4) - m^3 C_1 (4 (9+11m+3m^2) C_2 C_3 + m^2 (1+m) C_5) + m^3 (-2 (2+m)^2 C_2^3 + 2 m^2 C_2 C_4 + m (3 (3+m) C_3^2 + m^2 C_6)) \big),$
$P_2 = \frac{1}{m^7} \big( (1+m)^6 C_1^7 - m (1+m)^4 (12+7m) C_1^5 C_2 + m^2 (1+m)^3 (15+7m) C_1^4 C_3 + m^2 (1+m)^2 C_1^3 (2 (20+24m+7m^2) C_2^2 - 3 m^2 C_4) - m^3 (1+m) C_1^2 (3 (24+27m+7m^2) C_2 C_3 + m^2 (1+m) C_5) + m^3 C_1 (-(2+m)^2 (8+7m) C_2^3 + 2 m^2 (4+3m) C_2 C_4 + m ((27+30m+7m^2) C_3^2 + m^2 (1+m) C_6)) + m^4 ((36+32m+7m^2) C_2^2 C_3 + m^2 (2+m) C_2 C_5 - m^2 (3 C_3 C_4 + m C_7)) \big).$
Expanding $f(y_k)$ about $\alpha$, we have that
$f(y_k) = \frac{f^{(m)}(\alpha)}{m!} e_{y_k}^m \left(1 + C_1 e_{y_k} + C_2 e_{y_k}^2 + C_3 e_{y_k}^3 + C_4 e_{y_k}^4 + \cdots \right).$ (8)
Also,
$u = \frac{C_1}{m} e_k + \frac{-(2+m) C_1^2 + 2 m C_2}{m^2} e_k^2 + \frac{(7+7m+2m^2) C_1^3 - 2 m (7+3m) C_1 C_2 + 6 m^2 C_3}{2 m^3} e_k^3 - \frac{1}{6 m^4} \big( (34+51m+29m^2+6m^3) C_1^4 - 6 m (17+16m+4m^2) C_1^2 C_2 + 12 m^2 (3+m) C_2^2 + 12 m^2 (5+2m) C_1 C_3 \big) e_k^4 + \frac{1}{24 m^5} \big( (209+418m+355m^2+146m^3+24m^4) C_1^5 - 4 m (209+306m+163m^2+30m^3) C_1^3 C_2 + 12 m^2 (49+43m+10m^2) C_1^2 C_3 + 12 m^2 C_1 ((53+47m+10m^2) C_2^2 - 2 m (1+m) C_4) - 24 m^3 ((17+5m) C_2 C_3 + m^2 C_5) \big) e_k^5 + O(e_k^6)$ (9)
and
$h = \frac{C_1}{m} e_k + \frac{-(3+m) C_1^2 + 2 m C_2}{m^2} e_k^2 + \frac{(17+11m+2m^2) C_1^3 - 2 m (11+3m) C_1 C_2 + 6 m^2 C_3}{2 m^3} e_k^3 - \frac{1}{6 m^4} \big( (142+135m+47m^2+6m^3) C_1^4 - 6 m (45+26m+4m^2) C_1^2 C_2 + 12 m^2 (5+m) C_2^2 + 24 m^2 (4+m) C_1 C_3 \big) e_k^4 + \frac{1}{24 m^5} \big( (1573+1966m+995m^2+242m^3+24m^4) C_1^5 - 4 m (983+864m+271m^2+30m^3) C_1^3 C_2 + 12 m^2 (131+71m+10m^2) C_1^2 C_3 + 12 m^2 C_1 ((157+79m+10m^2) C_2^2 - 2 m (1+m) C_4) - 24 m^3 ((29+5m) C_2 C_3 + m^2 C_5) \big) e_k^5 + O(e_k^6).$ (10)
By inserting (4)–(10) in the second step of (3), we have
$e_{z_k} = z_k - \alpha = -\frac{(-1+A_1) C_1}{m} e_k^2 - \frac{(1+A_2+m-A_1 (4+m)) C_1^2 + 2 (-1+A_1) m C_2}{m^2} e_k^3 + \sum_{n=1}^{5} \delta_n e_k^{n+3} + O(e_k^9),$ (11)
where $\delta_n = \delta_n(A_1, A_2, m, C_1, C_2, C_3, \ldots, C_8)$, $n = 1, 2, 3, 4, 5$. The expressions of $\delta_n$ are not reproduced explicitly since they are very lengthy.
In order to obtain fourth-order convergence, the coefficients of $e_k^2$ and $e_k^3$ should be equal to zero. From the expression (11), this is possible only for the following values of $A_1$ and $A_2$:
$A_1 = 1 \quad \text{and} \quad A_2 = 3.$ (12)
Then, the error Equation (11) is given by
$e_{z_k} = \frac{(19+m) C_1^3 - 2 m C_1 C_2}{2 m^3} e_k^4 + \sum_{n=1}^{4} \phi_n e_k^{n+4} + O(e_k^9),$
where $ϕ n = ϕ n ( m , C 1 , C 2 , C 3 , … , C 8 )$, $n = 1 , 2 , 3 , 4 .$
Expansion of $f(z_k)$ about $\alpha$ leads us to the expression
$f(z_k) = \frac{f^{(m)}(\alpha)}{m!} e_{z_k}^m \left(1 + C_1 e_{z_k} + C_2 e_{z_k}^2 + C_3 e_{z_k}^3 + C_4 e_{z_k}^4 + \cdots \right),$
and so $t = \left(\frac{f(z_k)}{f(y_k)}\right)^{1/m}$ yields
$t = \frac{(19+m) C_1^2 - 2 m C_2}{2 m^2} e_k^2 - \frac{(163+57m+2m^2) C_1^3 - 6 m (19+m) C_1 C_2 + 6 m^2 C_3}{3 m^3} e_k^3 + \frac{1}{24 m^4} \big( (5279+3558m+673m^2+18m^3) C_1^4 - 12 m (593+187m+6m^2) C_1^2 C_2 + 24 m^2 (56+3m) C_1 C_3 + 12 m^2 (3 (25+m) C_2^2 + 2 m C_4) \big) e_k^4 - \frac{1}{60 m^5} \big( (47457+46810m+16635m^2+2210m^3+48m^4) C_1^5 - 20 m (4681+2898m+497m^2+12m^3) C_1^3 C_2 + 60 m^2 (429+129m+4m^2) C_1^2 C_3 + 60 m^2 C_1 ((537+147m+4m^2) C_2^2 - 2 m C_4) - 60 m^3 (2 (55+2m) C_2 C_3 + m (1+m) C_5) \big) e_k^5 + O(e_k^6).$ (13)
Expanding $G(h, t)$ in a neighborhood of the origin $(0, 0)$ by Taylor series, it follows that
$G(h, t) \approx G_{00} + G_{10} h + G_{01} t + \frac{1}{2!} \left(G_{20} h^2 + 2 G_{11} h t + G_{02} t^2\right) + \frac{1}{3!} \left(G_{30} h^3 + 3 G_{21} h^2 t + 3 G_{12} h t^2 + G_{03} t^3\right) + \frac{1}{4!} \left(G_{40} h^4 + 4 G_{31} h^3 t + 6 G_{22} h^2 t^2 + 4 G_{13} h t^3 + G_{04} t^4\right),$ (14)
where $G_{ij} = \left. \frac{\partial^{i+j}}{\partial h^i \partial t^j} G(h, t) \right|_{(0,0)}$, $i, j \in \{0, 1, 2, 3, 4\}$.
By using (4), (6), (9), (10), (13) and (14) in the third step of (3), we have
$e_{k+1} = -\frac{(G_{00}-1) C_1 ((19+m) C_1^2 - 2 m C_2)}{2 m^4} e_k^4 + \sum_{n=1}^{4} \psi_n e_k^{n+4} + O(e_k^9),$ (15)
where $\psi_n = \psi_n(m, G_{00}, G_{10}, G_{01}, G_{20}, G_{11}, G_{02}, G_{30}, G_{21}, C_1, C_2, \ldots, C_8)$, $n = 1, 2, 3, 4$.
It is clear from Equation (15) that we will obtain at least eighth order convergence if we choose $G_{00} = 1$, $\psi_1 = 0$, $\psi_2 = 0$ and $\psi_3 = 0$. Substituting $G_{00} = 1$ into $\psi_1 = 0$, we get
$G_{10} = 2.$ (16)
By using $G_{00} = 1$ and (16) in $\psi_2 = 0$, we obtain
$G_{01} = 1 \quad \text{and} \quad G_{20} = -4.$ (17)
Using $G_{00} = 1$, (16) and (17) in $\psi_3 = 0$, we obtain
$G_{11} = 4 \quad \text{and} \quad G_{30} = -72.$ (18)
Inserting $G_{00} = 1$ and (16)–(18) in (15), we obtain the error equation
$e_{k+1} = -\frac{1}{48 m^7} C_1 \left((19+m) C_1^2 - 2 m C_2\right) \big( (3 G_{02} (19+m)^2 + 2 (-1121 - 156 m - 7 m^2 + 3 G_{21} (19+m))) C_1^4 - 12 m (-52 + G_{21} - 4 m + G_{02} (19+m)) C_1^2 C_2 + 12 (-2 + G_{02}) m^2 C_2^2 - 24 m^2 C_1 C_3 \big) e_k^8 + O(e_k^9).$ (19)
Thus, the eighth order convergence is established. □
Remark 1.
It is important to note that the weight function $G(h, t)$ plays a significant role in the attainment of the desired convergence order of the proposed family of methods. However, only $G_{02}$ and $G_{21}$ are involved in the error Equation (19); $G_{12}$, $G_{03}$, $G_{40}$, $G_{31}$, $G_{22}$, $G_{13}$ and $G_{04}$ do not affect it. So, we can treat them as free (dummy) parameters.
Remark 2.
The error Equation (19) shows that the proposed scheme (3) reaches eighth-order convergence by using only four function evaluations, namely $f(x_k)$, $f(w_k)$, $f(y_k)$ and $f(z_k)$, per iteration. Therefore, the scheme (3) is optimal according to the Kung-Traub hypothesis [26], provided the conditions of Theorem 1 are satisfied.
Remark 3.
Notice that the parameter β, which is used in the iteration $w_k$, does not appear in the expression (7) of $e_{y_k}$, nor in the later expressions. We have observed that this parameter appears in the terms $e_k^m$ and of higher order. However, these terms are difficult to compute in general. Moreover, we do not need them in order to establish the eighth order of convergence.

#### Some Particular Forms of the Proposed Family

(1)
Let us consider the following function $G(h, t)$, which satisfies the conditions of Theorem 1:
$G(h, t) = 1 + 2 h + t - 2 h^2 + 4 h t - 12 h^3.$
Then, the corresponding eighth-order iterative scheme is given by
$x_{k+1} = z_k - m u t \left(1 + 2 h + t - 2 h^2 + 4 h t - 12 h^3\right) \frac{f(x_k)}{f[w_k, x_k]}.$ (20)
(2)
Next, consider the rational function
$G(h, t) = \frac{1 + 2 h + 2 t - 2 h^2 + 6 h t - 12 h^3}{1 + t},$
satisfying the conditions of Theorem 1. Then, the corresponding eighth-order iterative scheme is given by
$x_{k+1} = z_k - m u t \left(\frac{1 + 2 h + 2 t - 2 h^2 + 6 h t - 12 h^3}{1 + t}\right) \frac{f(x_k)}{f[w_k, x_k]}.$ (21)
(3)
Consider another rational function satisfying the conditions of Theorem 1, which is given by
$G(h, t) = \frac{1 + 3 h + t + 5 h t - 14 h^3 - 12 h^4}{1 + h}.$
Then, the corresponding eighth-order iterative scheme is given by
$x_{k+1} = z_k - m u t \left(\frac{1 + 3 h + t + 5 h t - 14 h^3 - 12 h^4}{1 + h}\right) \frac{f(x_k)}{f[w_k, x_k]}.$ (22)
(4)
Next, we suggest another rational function satisfying the conditions of Theorem 1, which is given by
$G(h, t) = \frac{1 + 3 h + 2 t + 8 h t - 14 h^3}{(1 + h)(1 + t)}.$
Then, the corresponding eighth-order iterative scheme is given by
$x_{k+1} = z_k - m u t \left(\frac{1 + 3 h + 2 t + 8 h t - 14 h^3}{(1 + h)(1 + t)}\right) \frac{f(x_k)}{f[w_k, x_k]}.$ (23)
(5)
Lastly, we consider yet another function satisfying the conditions of Theorem 1:
$G(h, t) = \frac{1 + t - 2 h (2 + t) - 2 h^2 (6 + 11 t) + h^3 (4 + 8 t)}{2 h^2 - 6 h + 1}.$
Then, the corresponding eighth-order method is given as
$x_{k+1} = z_k - m u t \left(\frac{1 + t - 2 h (2 + t) - 2 h^2 (6 + 11 t) + h^3 (4 + 8 t)}{2 h^2 - 6 h + 1}\right) \frac{f(x_k)}{f[w_k, x_k]}.$ (24)
In each of the above cases, $y_k = x_k - m \frac{f(x_k)}{f[w_k, x_k]}$ and $z_k = y_k - m h (1 + 3 h) \frac{f(x_k)}{f[w_k, x_k]}$. For future reference, the proposed methods (20), (21), (22), (23) and (24) are denoted by M-1, M-2, M-3, M-4 and M-5, respectively.
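As an illustration of how scheme (3) translates into code, the sketch below implements method M-1 in Python. The real m-th roots used for u and t assume the function-value ratios stay positive (as with the even-multiplicity test function here); this is a sketch under those assumptions, not the authors' implementation.

```python
def method_m1(f, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    """Method M-1: scheme (3) with weight G(h,t) = 1 + 2h + t - 2h^2 + 4ht - 12h^3."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        w = x + beta * fx
        dd = (f(w) - fx) / (beta * fx)              # f[w_k, x_k]
        if dd == 0:                                  # numerical underflow near the root
            return x
        y = x - m * fx / dd                          # first step: Traub-Steffensen
        fy = f(y)
        if fy == 0:
            return y
        u = (fy / fx) ** (1.0 / m)
        h = u / (1.0 + u)
        z = y - m * h * (1.0 + 3.0 * h) * fx / dd    # second step, A1 = 1, A2 = 3
        fz = f(z)
        if fz == 0:
            return z
        t = (fz / fy) ** (1.0 / m)
        G = 1 + 2*h + t - 2*h**2 + 4*h*t - 12*h**3   # weight function of (20)
        x_new = z - m * u * t * G * fx / dd          # third step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# (x - 2)^4 has a zero of multiplicity m = 4 at x = 2
root = method_m1(lambda x: (x - 2.0) ** 4, x0=2.1, m=4)
```

Each iteration evaluates f at the four points $x_k$, $w_k$, $y_k$, $z_k$ only, in line with the optimality count of Theorem 1.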

## 3. Complex Dynamics of Methods

Our aim here is to analyze the complex dynamics of the new methods using a graphical tool, namely the basins of attraction of the zeros of a polynomial $P(z)$ in the complex plane. Analysis of the basins of attraction gives important information about the stability and convergence of iterative methods. This idea was floated initially by Vrscay and Gilbert [27]. In recent times, many researchers have used this concept in their work; see, for example, [28,29,30] and the references therein. To start with, let us recall some basic dynamical concepts for the rational function associated with an iterative method. Let $\phi : \hat{\mathbb{C}} \to \hat{\mathbb{C}}$ be a rational function; the orbit of a point $x_0 \in \hat{\mathbb{C}}$ is defined as the set
$\{x_0, \phi(x_0), \ldots, \phi^m(x_0), \ldots\}$
of successive images of $x_0$ under the rational function.
The dynamical behavior of the orbit of a point can be classified according to its asymptotic behavior. In this way, a point $x_0$ is a fixed point of $\phi$ if it satisfies $\phi(x_0) = x_0$. Moreover, $x_0$ is called a periodic point of period $p > 1$ if $\phi^p(x_0) = x_0$ but $\phi^k(x_0) \neq x_0$ for each $k < p$. Also, a point $x_0$ is called pre-periodic if it is not periodic but there exists a $k > 0$ such that $\phi^k(x_0)$ is periodic. There exist different types of fixed points depending on the associated multiplier $|\phi'(x_0)|$. Taking the associated multiplier into account, a fixed point $x_0$ is called: (a) attractor if $|\phi'(x_0)| < 1$, (b) superattractor if $|\phi'(x_0)| = 0$, (c) repulsor if $|\phi'(x_0)| > 1$ and (d) parabolic if $|\phi'(x_0)| = 1$.
If $\alpha$ is an attracting fixed point of the rational function $\phi$, its basin of attraction $A(\alpha)$ is defined as the set of pre-images of any order,
$A(\alpha) = \{x_0 : \phi^m(x_0) \to \alpha \text{ as } m \to \infty\}.$
The set of points whose orbits tend to an attracting fixed point $\alpha$ is called the Fatou set. Its complement, called the Julia set, is the closure of the set consisting of the repelling fixed points, and it establishes the borders between the basins of attraction. That is, the basin of attraction of any fixed point belongs to the Fatou set, and the boundaries of these basins of attraction belong to the Julia set.
The initial point $z_0$ is taken in a rectangular region $R \subset \mathbb{C}$ that contains all the zeros of the polynomial $P(z)$. The iterative method, when started from a point $z_0$ in the rectangle, either converges to a zero of $P(z)$ or eventually diverges. The stopping criterion for convergence is taken as $10^{-3}$, up to a maximum of 25 iterations. If the required tolerance is not achieved in 25 iterations, we conclude that the method starting at the point $z_0$ does not converge to any root. The strategy adopted is as follows: a color is allocated to each initial point $z_0$ in the basin of attraction of a zero. If the iteration initiated at $z_0$ converges, the point is painted with the color assigned to the corresponding basin; otherwise, if it fails to converge in 25 iterations, the point is painted black.
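The coloring strategy just described can be sketched programmatically. For brevity, the iteration below uses only the first (Traub-Steffensen) step of scheme (3) rather than the full eighth-order method, and the grid is kept very small; all names and the grid parameters are illustrative.

```python
def basin_grid(P, roots, m, beta=0.01, n=21, extent=2.0, tol=1e-3, max_iter=25):
    """For each point of an n-by-n grid on [-extent, extent]^2, return the index of
    the root its orbit reaches (the 'color'), or -1 (black) if no root is reached."""
    step = 2.0 * extent / (n - 1)
    grid = []
    for i in range(n):                       # rows: imaginary part
        row = []
        for j in range(n):                   # columns: real part
            z = complex(-extent + j * step, -extent + i * step)
            for _ in range(max_iter):
                fz = P(z)
                if fz == 0:
                    break
                w = z + beta * fz
                dd = (P(w) - fz) / (beta * fz)   # derivative-free divided difference
                if dd == 0:
                    break
                z = z - m * fz / dd
                if abs(z) > 1e6:             # treat far-escaped points as divergent
                    break
            color = -1                       # black: not close to any root
            for k, r in enumerate(roots):
                if abs(z - r) < tol:
                    color = k
                    break
            row.append(color)
        grid.append(row)
    return grid

# basins of P1(z) = (z^2 - 1)^2 (zeros -1 and 1, each of multiplicity 2)
basin = basin_grid(lambda z: (z * z - 1) ** 2, [-1.0, 1.0], m=2)
```

In the actual figures, each color index would be mapped to a pixel color (e.g., red and green for the two zeros of $P_1$) over a much finer grid.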
To view the complex geometry, we analyze the basins of attraction of the proposed methods M-i (i $= 1, 2, \ldots, 5$) on the following polynomials:
Test problem 1. Consider the polynomial $P_1(z) = (z^2 - 1)^2$, having the two zeros $\{-1, 1\}$ with multiplicity $m = 2$. The basins of attraction for this polynomial are shown in Figure 1, Figure 2 and Figure 3 for the choices $\beta = 0.01, 10^{-6}, 10^{-10}$. A color is assigned to each basin of attraction of a zero: in particular, red and green for the zeros $-1$ and 1, respectively. Looking at the behavior of the methods, we see that the methods M-2 and M-4 possess fewer divergent points and therefore have better convergence than the rest of the methods. Observe that there is only a small difference among the basins of the remaining methods for the same value of $\beta$. Note also that the basins become larger as the parameter $\beta$ assumes smaller values.
Test problem 2. Let $P_2(z) = (z^3 + z)^2$, having the three zeros $\{-i, 0, i\}$ with multiplicity $m = 2$. The basins of attraction for this polynomial are shown in Figure 4, Figure 5 and Figure 6 for the choices $\beta = 0.01, 10^{-6}, 10^{-10}$. A color is allocated to each basin of attraction of a zero: green, red and blue for the basins of the zeros $-i$, $i$ and 0, respectively. From the graphics, we see that the methods M-2 and M-4 have better convergence due to a smaller number of divergent points. Also observe that, in each case, the basins get broader for smaller values of $\beta$. The basins of methods M-1 and M-3 are almost the same, while method M-5 has more divergent points.
Test problem 3. Let $P_3(z) = (z^2 - \frac{1}{4})(z^2 + \frac{9}{4})$, having the four simple zeros $\{-\frac{1}{2}, \frac{1}{2}, -\frac{3}{2} i, \frac{3}{2} i\}$. To see the dynamical view, we allocate the colors green, red, blue and yellow to the basins of the zeros $-\frac{1}{2}$, $\frac{1}{2}$, $-\frac{3}{2} i$ and $\frac{3}{2} i$, respectively. The basins of attraction for this polynomial are shown in Figure 7, Figure 8 and Figure 9 for the choices $\beta = 0.01, 10^{-6}, 10^{-10}$. Looking at the graphics, we conclude that the methods M-2 and M-4 have better convergence behavior, since they have a smaller number of divergent points. The remaining methods have almost similar basins for the same value of $\beta$. Notice also that the basins become larger for smaller values of $\beta$.
From these graphics one can easily evaluate the behavior and stability of any method. If we choose an initial point $z_0$ in a zone where distinct basins of attraction touch each other, it is impractical to predict which root is going to be reached by the iterative method starting at $z_0$; hence, such a choice of $z_0$ is not a good one. Both the black zones and the zones where different colors meet are unsuitable for choosing the initial guess $z_0$ when we want to obtain a particular root. The most striking pictures appear when the frontiers between the basins of attraction are very intricate; these correspond to the cases where the method is more demanding with respect to the initial point and its dynamical behavior is more unpredictable. We conclude this section with the remark that the convergence behavior of the proposed methods depends upon the value of the parameter $\beta$: the smaller the value of $\beta$, the better the convergence of the method.

## 4. Numerical Results

In this section, we apply the methods M-1 to M-5 of family (3) to solve a few nonlinear equations, which not only illustrates the methods practically but also serves to verify the validity of the theoretical results we have derived. The theoretical order of convergence is verified by calculating the computational order of convergence (COC) using the formula (see [31])
$\text{COC} = \frac{\ln |(x_{k+2} - \alpha)/(x_{k+1} - \alpha)|}{\ln |(x_{k+1} - \alpha)/(x_k - \alpha)|} \quad \text{for } k = 1, 2, \ldots$ (25)
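Formula (25) is straightforward to evaluate once three consecutive iterates and the root are known; a minimal sketch (names illustrative):

```python
import math

def coc(x0, x1, x2, alpha):
    """Computational order of convergence (25) from three consecutive iterates."""
    return (math.log(abs((x2 - alpha) / (x1 - alpha)))
            / math.log(abs((x1 - alpha) / (x0 - alpha))))

# errors 1e-2, 1e-4, 1e-8 follow e_{k+1} = e_k^2, so the COC should be close to 2
order = coc(2 + 1e-2, 2 + 1e-4, 2 + 1e-8, 2.0)
```

For an eighth-order method the computed value should approach 8 as the iterates near the root.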
Performance is compared with some existing eighth-order methods that require derivative evaluations in their formulas. In particular, we choose the methods by Zafar et al. [19] and Behl et al. [23,24]. These methods are expressed as follows:
Zafar et al. method (ZM-1):
$y_k = x_k - m \frac{f(x_k)}{f'(x_k)},$
$z_k = y_k - m u_k \left(\frac{1 - 5 u_k^2 + 8 u_k^3}{1 - 2 u_k}\right) \frac{f(x_k)}{f'(x_k)},$
$x_{k+1} = z_k - m u_k v_k (1 + 2 u_k)(v_k + 1)(2 w_k + 1) \frac{f(x_k)}{f'(x_k)}.$
Zafar et al. method (ZM-2):
$y_k = x_k - m \frac{f(x_k)}{f'(x_k)},$
$z_k = y_k - m u_k \left(6 u_k^3 - u_k^2 + 2 u_k + 1\right) \frac{f(x_k)}{f'(x_k)},$
$x_{k+1} = z_k - m u_k v_k e^{v_k} e^{2 w_k} (1 + 2 u_k) \frac{f(x_k)}{f'(x_k)},$
where $u_k = \left(\frac{f(y_k)}{f(x_k)}\right)^{1/m}$, $v_k = \left(\frac{f(z_k)}{f(y_k)}\right)^{1/m}$ and $w_k = \left(\frac{f(z_k)}{f(x_k)}\right)^{1/m}$.
Behl et al. method (BM-1):
$y_k = x_k - m \frac{f(x_k)}{f'(x_k)},$
$z_k = y_k - m \frac{f(x_k)}{f'(x_k)} u_k (1 + 2 u_k - u_k^2),$
$x_{k+1} = z_k + m \frac{f(x_k)}{f'(x_k)} \frac{w_k u_k}{1 - w_k} \left(-1 - 2 u_k + 6 u_k^3 - \frac{1}{6} (85 + 21 m + 2 m^2) u_k^4 - 2 v_k\right).$
Behl et al. method (BM-2):
$y_k = x_k - m \frac{f(x_k)}{f'(x_k)},$
$z_k = y_k - m \frac{f(x_k)}{f'(x_k)} u_k (1 + 2 u_k),$
$x_{k+1} = z_k - m \frac{f(x_k)}{f'(x_k)} \frac{w_k u_k}{1 - w_k} \left(\frac{1 + 9 u_k^2 + 2 v_k + u_k (6 + 8 v_k)}{1 + 4 u_k}\right),$
where $u_k = \left(\frac{f(y_k)}{f(x_k)}\right)^{1/m}$, $v_k = \left(\frac{f(z_k)}{f(x_k)}\right)^{1/m}$ and $w_k = \left(\frac{f(z_k)}{f(y_k)}\right)^{1/m}$.
Behl et al. method (BM-3):
$y_k = x_k - m \frac{f(x_k)}{f'(x_k)},$
$z_k = y_k - m u \frac{f(x_k)}{f'(x_k)} \left(\frac{1 + \gamma u}{1 + (\gamma - 2) u}\right),$
$x_{k+1} = z_k - \frac{m}{2} u v \frac{f(x_k)}{f'(x_k)} \left(1 - \frac{(2 v + 1)\left(2 u (2 \gamma - \gamma^2 + 4 \gamma u + u + 4) - 2 \gamma + 5\right)}{2 \gamma + 2 (\gamma^2 - 6 \gamma + 6) u - 5}\right),$
where $u = \left(\frac{f(y_k)}{f(x_k)}\right)^{1/m}$, $v = \left(\frac{f(z_k)}{f(y_k)}\right)^{1/m}$ and $\gamma = \frac{1}{3}$.
All computations are performed in the programming package Mathematica [32] on a PC with an Intel(R) Pentium(R) CPU B960 @ 2.20 GHz (32-bit operating system, Microsoft Windows 7 Professional, 4 GB RAM) using multiple-precision arithmetic. The performance of the new methods is tested by choosing the value of the parameter $\beta = 0.01$. The choice of the initial approximation $x_0$ in the examples is obtained by using the procedure proposed in [33]. For example, the procedure, when applied to the function of Example 2 in the interval [2, 3.5] using the statements
f[x_] = x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4 + 15927x^3 + 6993x^2 - 24732x + 12960;
a = 2; b = 3.5; k = 1; x0 = 0.5*(a + b + Sign[f[a]]*NIntegrate[Tanh[k*f[x]], {x, a, b}])
in the programming package Mathematica, yields a close initial approximation $x_0 = 3.20832$ to the root $\alpha = 3$.
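The same construction of $x_0$ can be reproduced outside Mathematica; the Python sketch below replaces NIntegrate with a plain trapezoidal rule (the step count and helper names are illustrative assumptions, so the resulting value may differ somewhat from the quoted one).

```python
import math

def initial_guess(f, a, b, k=1.0, n=4000):
    """x0 = (a + b + sign(f(a)) * integral of tanh(k f(x)) over [a, b]) / 2."""
    h = (b - a) / n
    s = 0.5 * (math.tanh(k * f(a)) + math.tanh(k * f(b)))
    for i in range(1, n):
        s += math.tanh(k * f(a + i * h))
    integral = h * s                         # trapezoidal approximation
    return 0.5 * (a + b + math.copysign(1.0, f(a)) * integral)

def f2(x):
    """Characteristic polynomial of Example 2."""
    return (x**9 - 29*x**8 + 349*x**7 - 2261*x**6 + 8455*x**5
            - 17663*x**4 + 15927*x**3 + 6993*x**2 - 24732*x + 12960)

x0 = initial_guess(f2, 2.0, 3.5)             # a starting point inside [2, 3.5]
```

Because tanh saturates to ±1 away from the zeros of f, the integral biases the midpoint of [a, b] toward the sign change region of f.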
The numerical results displayed in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 contain: (i) the values of the first three consecutive errors $|x_{k+1} - x_k|$, (ii) the number of iterations $(k)$ needed to converge to the required solution with the stopping criterion $|x_{k+1} - x_k| + |f(x_k)| < 10^{-100}$, (iii) the computational order of convergence (COC) using (25) and (iv) the elapsed CPU time (CPU-time) in seconds, computed by the Mathematica command “TimeUsed[ ]”. Further, $a \times e{\pm b}$ means $a \times 10^{\pm b}$ in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6.
The following examples are chosen for numerical tests:
Example 1.
We consider the Planck’s radiation law problem [34]:
$φ ( λ ) = 8 π c h λ − 5 e c h / λ k T − 1$
which determines the energy density with in an isothermal black body. Here, c is the speed of light, λ is the wavelength of the radiation, k is Boltzmann’s constant, T is the absolute temperature of the black body and h is the Planck’s constant. Suppose, we would like to calculate wavelength λ which corresponds to maximum energy density $φ ( λ )$. From (26), we get
$\varphi'(\lambda) = \frac{8 \pi c h \lambda^{-6}}{e^{c h / \lambda k T} - 1} \left(\frac{(c h / \lambda k T) \, e^{c h / \lambda k T}}{e^{c h / \lambda k T} - 1} - 5\right) = A \cdot B.$
It can be seen that a maximum value of φ occurs when $B = 0$, that is, when
$\frac{(c h / \lambda k T) \, e^{c h / \lambda k T}}{e^{c h / \lambda k T} - 1} = 5.$
Then, setting $x = c h / \lambda k T$, the above equation becomes
$1 - \frac{x}{5} = e^{-x}.$ (27)
We consider this equation repeated four times and obtain the required nonlinear function
$f_1(x) = \left(e^{-x} - 1 + \frac{x}{5}\right)^4.$
The aim is to find a multiple root of the equation $f_1(x) = 0$. Obviously, one of the multiple roots, $x = 0$, is not taken into account. As argued in [34], the left-hand side of (27) is zero for $x = 5$ and the right-hand side is $e^{-5} \approx 6.74 \times 10^{-3}$. Hence, it is expected that another multiple root of the equation $f_1(x) = 0$ might exist near $x = 5$. The calculated value of this multiple root is $\alpha \approx 4.96511423$, obtained with $x_0 = 3.5$. As a result, the wavelength λ corresponding to the maximum energy density is approximately given as
$\lambda \approx \frac{c h}{4.96511423 \, k T}.$
Numerical results are shown in Table 1.
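As a quick sanity check on the reported root, one can evaluate the residual of the unraised Equation (27) at $\alpha \approx 4.96511423$; a minimal sketch with illustrative variable names:

```python
import math

alpha = 4.96511423                        # multiple root of f1 reported above
g = math.exp(-alpha) - 1 + alpha / 5      # residual of e^{-x} = 1 - x/5
f1_at_alpha = g ** 4                      # f1(x) = (e^{-x} - 1 + x/5)^4
```

Both residuals come out tiny, consistent with α being a zero of $f_1$ of multiplicity 4 (up to the displayed digits of α).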
Example 2.
Finding the eigenvalues of a large sparse matrix is a challenging task in applied mathematics and engineering. Even calculating the roots of the characteristic equation of a square matrix of order greater than 4 is a big challenge. So, we consider the following 9 × 9 matrix (see [23]):
$M = \frac{1}{8} \begin{pmatrix} -12 & 0 & 0 & 19 & -19 & 76 & -19 & 18 & 437 \\ -64 & 24 & 0 & -24 & 24 & 64 & -8 & 32 & 376 \\ -16 & 0 & 24 & 4 & -4 & 16 & -4 & 8 & 92 \\ -40 & 0 & 0 & -10 & 50 & 40 & 2 & 20 & 242 \\ -4 & 0 & 0 & -1 & 41 & 4 & 1 & 2 & 25 \\ -40 & 0 & 0 & 18 & -18 & 104 & -18 & 20 & 462 \\ -84 & 0 & 0 & -29 & 29 & 84 & 21 & 42 & 501 \\ 16 & 0 & 0 & -4 & 4 & -16 & 4 & 16 & -92 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24 \end{pmatrix}.$
The characteristic polynomial of the matrix M is given as
$f_2(x) = x^9 - 29 x^8 + 349 x^7 - 2261 x^6 + 8455 x^5 - 17663 x^4 + 15927 x^3 + 6993 x^2 - 24732 x + 12960.$
This function has one multiple zero, $\alpha = 3$, of multiplicity 4. We find this zero with the initial approximation $x_0 = 3.2$. Numerical results are shown in Table 2.
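That α = 3 is a zero of multiplicity 4 can be confirmed in exact integer arithmetic by checking that $f_2$ and its first three derivatives vanish at 3 while the fourth does not (helper names are illustrative):

```python
# coefficients of f2 in descending powers of x
coeffs = [1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960]

def poly_eval(c, x):
    """Horner evaluation of a polynomial with descending coefficients c."""
    r = 0
    for a in c:
        r = r * x + a
    return r

def poly_deriv(c):
    """Descending coefficients of the derivative polynomial."""
    n = len(c) - 1
    return [a * (n - i) for i, a in enumerate(c[:-1])]

values = []                               # f2(3), f2'(3), f2''(3), f2'''(3), f2''''(3)
d = coeffs
for _ in range(5):
    values.append(poly_eval(d, 3))
    d = poly_deriv(d)
```

Since all quantities are integers, the check is free of rounding error.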
Example 3.
Consider an isentropic supersonic flow along a sharp expansion corner (see [2]). The relationship between the Mach number before the corner (i.e., $M_1$) and after the corner (i.e., $M_2$) is given by
$\delta = b^{1/2} \left( \tan^{-1} \left(\frac{M_2^2 - 1}{b}\right)^{1/2} - \tan^{-1} \left(\frac{M_1^2 - 1}{b}\right)^{1/2} \right) - \left( \tan^{-1} (M_2^2 - 1)^{1/2} - \tan^{-1} (M_1^2 - 1)^{1/2} \right),$
where $b = \frac{\gamma + 1}{\gamma - 1}$ and γ is the specific heat ratio of the gas.
For a special case study, we solve the equation for $M_2$ given that $M_1 = 1.5$, $\gamma = 1.4$ and $\delta = 10^{\circ}$. In this case, we have
$\tan^{-1} \frac{\sqrt{5}}{2} - \tan^{-1} \sqrt{x^2 - 1} + \sqrt{6} \left( \tan^{-1} \sqrt{\frac{x^2 - 1}{6}} - \tan^{-1} \frac{1}{2} \sqrt{\frac{5}{6}} \right) - \frac{11}{63} = 0,$
where $x = M_2$.
We consider this equation repeated ten times and obtain the required nonlinear function
$f_3(x) = \left( \tan^{-1} \frac{\sqrt{5}}{2} - \tan^{-1} \sqrt{x^2 - 1} + \sqrt{6} \left( \tan^{-1} \sqrt{\frac{x^2 - 1}{6}} - \tan^{-1} \frac{1}{2} \sqrt{\frac{5}{6}} \right) - \frac{11}{63} \right)^{10}.$
The above function has a zero at $\alpha = 1.8411027704926161\ldots$ of multiplicity 10. This zero is calculated using the initial approximation $x_0 = 2$. Numerical results are shown in Table 3.
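Under the reconstruction of the equation above, the quoted zero can be checked by evaluating the left-hand side before raising it to the tenth power; the residual should be small (the function name is illustrative, and the check assumes the reconstruction of the equation is faithful to the displayed digits):

```python
import math

def g(x):
    """Left-hand side of the Mach-number equation (before the 10th power)."""
    return (math.atan(math.sqrt(5) / 2) - math.atan(math.sqrt(x**2 - 1))
            + math.sqrt(6) * (math.atan(math.sqrt((x**2 - 1) / 6))
                              - math.atan(0.5 * math.sqrt(5 / 6)))
            - 11 / 63)

alpha = 1.8411027704926161
residual = g(alpha)
```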
Example 4.
The van der Waals equation of state,
$\left(P + \frac{a_1 n^2}{V^2}\right) (V - n a_2) = n R T,$
explains the behavior of a real gas by introducing into the ideal gas equation two parameters, $a_1$ and $a_2$, specific to each gas. The determination of the volume V of the gas in terms of the remaining parameters requires the solution of a nonlinear equation in V:
$P V^3 - (n a_2 P + n R T) V^2 + a_1 n^2 V = a_1 a_2 n^3.$
Given the parameters $a_1$ and $a_2$ of a particular gas, one can find values of n, P and T such that this equation has three real zeros. By using particular values (see [23]), we obtain the nonlinear equation
$x^3 - 5.22 x^2 + 9.0825 x - 5.2675 = 0,$
where $x = V$. This equation has a multiple root $\alpha = 1.75$ of multiplicity 2. We further increase the multiplicity of this root to 8 by repeating this equation four times, and so obtain the nonlinear function
$f_4(x) = (x^3 - 5.22 x^2 + 9.0825 x - 5.2675)^4.$
The initial guess chosen to obtain the solution 1.75 is $x_0 = 1.5$. Numerical results are shown in Table 4.
Example 5.
Next, we consider a standard nonlinear test function from Behl et al. [17], which is defined by
$f_5(x) = \left(-\sqrt{1 - x^2} + x + \cos \frac{\pi x}{2} + 1\right)^6.$
The function $f_5$ has a multiple zero at $\alpha = -0.728584046\ldots$ of multiplicity 6. We select the initial approximation $x_0 = -0.76$ to obtain this zero. Numerical results are exhibited in Table 5.
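As with the previous examples, the quoted zero can be sanity-checked by evaluating the expression inside the sixth power (the function name is illustrative, and the check assumes the reconstruction of $f_5$ above):

```python
import math

def base5(x):
    """The expression inside the sixth power of f5."""
    return -math.sqrt(1 - x**2) + x + math.cos(math.pi * x / 2) + 1

alpha = -0.728584046
residual = base5(alpha)
```

A small residual here corresponds to an even smaller value of $f_5(\alpha)$ itself, since the expression is raised to the sixth power.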
Example 6.
Lastly, we consider another standard test function, which is defined as
$f_6(x) = x (x^2 + 1) \left(2 e^{x^2 + 1} + x^2 - 1\right) \cosh^2 \frac{\pi x}{2}.$
This function has a multiple zero $\alpha = i$ of multiplicity 4. We choose the initial approximation $x_0 = 1.5 i$ to compute this zero. The computed results are displayed in Table 6.
From the numerical values of the errors we observe that the accuracy of the successive approximations increases as the iteration proceeds, which points to the stable nature of the methods. Also, like the existing methods, the new methods show consistent convergence behavior. At the stage when the stopping criterion $|x_{k+1} - x_k| + |f(x_k)| < 10^{-100}$ has been satisfied, we display the value ‘0’ for $|x_{k+1} - x_k|$. From the computational order of convergence shown in the penultimate column of each table, we verify the theoretical eighth order of convergence. However, this is not true for the existing eighth-order methods BM-1 and BM-2, since their eighth order convergence is not maintained. The efficient nature of the proposed methods can be observed from the fact that the amount of CPU time they consume is less than the time taken by the existing methods. In addition, the new methods are more accurate, because the error becomes much smaller with increasing k compared with the error of the existing techniques. The main purpose of implementing the new derivative-free methods for solving different types of nonlinear equations is purely to illustrate the exactness of the approximate solution and the stability of the convergence to the required solution. Similar numerical experiments, performed for many other problems, have confirmed this conclusion to a good extent.

## 5. Conclusions

In the foregoing study, we have proposed what is, to the best of our knowledge, the first class of optimal eighth order derivative-free iterative methods for solving nonlinear equations with multiple roots. A local convergence analysis has been carried out, which establishes order eight under standard assumptions on the function whose zeros we seek. Some special cases of the class are presented and implemented to solve nonlinear equations, including ones arising in practical problems. The methods are compared with existing techniques of the same order. The numerical results show that the presented derivative-free methods are good competitors to the existing optimal eighth-order techniques that require derivative evaluations in their algorithms. We conclude with the remark that derivative-free techniques are good alternatives to Newton-type iterations in cases where derivatives are expensive to compute or difficult to evaluate.
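The derivative-free idea referred to above can be illustrated by the basic multiplicity-aware Traub-Steffensen step on which such schemes build: $x_{k+1} = x_k - m\,f(x_k)/f[w_k, x_k]$ with $w_k = x_k + \beta f(x_k)$, where the derivative is replaced by a first-order divided difference. The following sketch shows only this second-order base step, not the eighth-order family itself; the test function and parameter values are illustrative choices:

```python
def traub_steffensen(f, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    """Derivative-free Traub-Steffensen iteration for a root of multiplicity m:
    f'(x) is replaced by the divided difference f[w, x] with w = x + beta*f(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx
        if w == x:  # step underflows: divided difference unavailable, x is converged
            break
        dd = (f(w) - fx) / (w - x)  # first-order divided difference f[w, x]
        x_new = x - m * fx / dd
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Triple root at x = 1 of f(x) = (x - 1)^3 (illustrative example)
root = traub_steffensen(lambda x: (x - 1.0)**3, x0=1.5, m=3)
```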

## Author Contributions

Methodology, J.R.S.; Writing—review & editing, J.R.S.; Investigation, S.K.; Data Curation, S.K.; Conceptualization, I.K.A.; Formal analysis, I.K.A.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
2. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992.
3. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer-Verlag: New York, NY, USA, 2008.
4. Argyros, I.K.; Magreñán, Á.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017.
5. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
6. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
7. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
8. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367.
9. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
10. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
11. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
12. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
13. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
14. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
15. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
16. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
17. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algorithms 2016, 71, 775–796.
18. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comput. Appl. Math. 2017.
19. Zafar, F.; Cordero, A.; Quratulain, R.; Torregrosa, J.R. Optimal iterative methods for finding multiple roots of nonlinear equations using free parameters. J. Math. Chem. 2017.
20. Zafar, F.; Cordero, A.; Sultana, S.; Torregrosa, J.R. Optimal iterative methods for finding multiple roots of nonlinear equations using weight functions and dynamics. J. Comput. Appl. Math. 2018, 342, 352–374.
21. Zafar, F.; Cordero, A.; Torregrosa, J.R. An efficient family of optimal eighth-order multiple root finders. Mathematics 2018, 6, 310.
22. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algorithms 2018, 77, 1249–1272.
23. Behl, R.; Zafar, F.; Alshomrani, A.S.; Junjua, M.U.D.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Meth. 2018.
24. Behl, R.; Alshomrani, A.S.; Motsa, S.S. An optimal scheme for multiple roots of nonlinear equations with eighth-order convergence. J. Math. Chem. 2018.
25. Sharma, J.R.; Kumar, D.; Argyros, I.K. An efficient class of Traub-Steffensen-like seventh order multiple-root solvers with applications. Symmetry 2019, 11, 518.
26. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
27. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16.
28. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
29. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
30. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algorithms 2015, 68, 261–288.
31. Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
32. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003.
33. Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699.
34. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006.
Figure 1. Basins of attraction for methods M-1 to M-5 ($\beta = 0.01$) for the polynomial $P_1(z)$.
Figure 2. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-6}$) for the polynomial $P_1(z)$.
Figure 3. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-10}$) for the polynomial $P_1(z)$.
Figure 4. Basins of attraction for methods M-1 to M-5 ($\beta = 0.01$) for the polynomial $P_2(z)$.
Figure 5. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-6}$) for the polynomial $P_2(z)$.
Figure 6. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-10}$) for the polynomial $P_2(z)$.
Figure 7. Basins of attraction for methods M-1 to M-5 ($\beta = 0.01$) for the polynomial $P_3(z)$.
Figure 8. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-6}$) for the polynomial $P_3(z)$.
Figure 9. Basins of attraction for methods M-1 to M-5 ($\beta = 10^{-10}$) for the polynomial $P_3(z)$.
Table 1. Performance of methods for Example 1.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $2.13$ | $4.82\times 10^{-8}$ | $4.27\times 10^{-67}$ | 4 | 8.000 | 0.608 |
| ZM−2 | $6.43$ | $5.30\times 10^{-7}$ | $6.10\times 10^{-59}$ | 4 | 8.000 | 0.671 |
| BM−1 | $1.03\times 10^{-1}$ | $3.34\times 10^{-6}$ | $9.73\times 10^{-20}$ | 5 | 3.000 | 0.687 |
| BM−2 | $1.03\times 10^{-1}$ | $3.35\times 10^{-6}$ | $9.82\times 10^{-20}$ | 5 | 3.000 | 0.702 |
| BM−3 | $1.85$ | $2.44\times 10^{-8}$ | $1.15\times 10^{-69}$ | 4 | 8.000 | 0.640 |
| M−1 | $1.65$ | $1.86\times 10^{-8}$ | $3.08\times 10^{-70}$ | 4 | 8.000 | 0.452 |
| M−2 | $9.64\times 10^{-1}$ | $1.86\times 10^{-9}$ | $5.08\times 10^{-78}$ | 4 | 8.000 | 0.453 |
| M−3 | $1.64$ | $1.81\times 10^{-8}$ | $2.80\times 10^{-70}$ | 4 | 8.000 | 0.468 |
| M−4 | $9.55\times 10^{-1}$ | $1.84\times 10^{-9}$ | $5.09\times 10^{-78}$ | 4 | 8.000 | 0.437 |
| M−5 | $1.65$ | $1.86\times 10^{-8}$ | $3.29\times 10^{-70}$ | 4 | 8.000 | 0.421 |
Table 2. Performance of methods for Example 2.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $2.24\times 10^{-1}$ | $3.06\times 10^{-8}$ | $3.36\times 10^{-62}$ | 4 | 8.000 | 0.140 |
| ZM−2 | $6.45\times 10^{-1}$ | $1.99\times 10^{-6}$ | $5.85\times 10^{-48}$ | 4 | 8.000 | 0.187 |
| BM−1 | $9.85\times 10^{-3}$ | $4.51\times 10^{-7}$ | $4.14\times 10^{-20}$ | 5 | 3.000 | 0.140 |
| BM−2 | $9.86\times 10^{-3}$ | $4.52\times 10^{-7}$ | $4.18\times 10^{-20}$ | 5 | 3.000 | 0.140 |
| BM−3 | $1.97\times 10^{-1}$ | $5.21\times 10^{-9}$ | $4.23\times 10^{-69}$ | 4 | 8.000 | 0.125 |
| M−1 | $2.07\times 10^{-1}$ | $6.58\times 10^{-8}$ | $5.78\times 10^{-59}$ | 4 | 8.000 | 0.125 |
| M−2 | $1.21\times 10^{-1}$ | $2.12\times 10^{-9}$ | $1.01\times 10^{-70}$ | 4 | 8.000 | 0.110 |
| M−3 | $2.05\times 10^{-1}$ | $6.68\times 10^{-8}$ | $7.64\times 10^{-59}$ | 4 | 8.000 | 0.125 |
| M−4 | $1.20\times 10^{-1}$ | $2.24\times 10^{-9}$ | $1.79\times 10^{-70}$ | 4 | 8.000 | 0.109 |
| M−5 | $2.07\times 10^{-1}$ | $8.86\times 10^{-8}$ | $7.65\times 10^{-58}$ | 4 | 8.000 | 0.093 |
Table 3. Performance of methods for Example 3.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $3.19\times 10^{-2}$ | $2.77\times 10^{-16}$ | 0 | 3 | 7.995 | 2.355 |
| ZM−2 | $7.25\times 10^{-2}$ | $5.76\times 10^{-14}$ | 0 | 3 | 7.986 | 2.371 |
| BM−1 | $5.84\times 10^{-4}$ | $1.78\times 10^{-11}$ | $5.08\times 10^{-34}$ | 4 | 3.000 | 2.683 |
| BM−2 | $5.84\times 10^{-4}$ | $1.78\times 10^{-11}$ | $5.09\times 10^{-34}$ | 4 | 3.000 | 2.777 |
| BM−3 | $3.07\times 10^{-2}$ | $4.39\times 10^{-17}$ | 0 | 3 | 8.002 | 2.324 |
| M−1 | $3.05\times 10^{-2}$ | $4.52\times 10^{-16}$ | 0 | 3 | 7.993 | 1.966 |
| M−2 | $1.96\times 10^{-2}$ | $2.65\times 10^{-17}$ | 0 | 3 | 7.996 | 1.982 |
| M−3 | $3.04\times 10^{-2}$ | $5.46\times 10^{-16}$ | 0 | 3 | 7.993 | 1.965 |
| M−4 | $1.96\times 10^{-2}$ | $3.05\times 10^{-17}$ | 0 | 3 | 7.996 | 1.981 |
| M−5 | $3.05\times 10^{-2}$ | $5.43\times 10^{-16}$ | 0 | 3 | 7.992 | 1.903 |
Table 4. Performance of methods for Example 4.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $2.21\times 10^{-1}$ | $1.83\times 10^{-1}$ | $7.19\times 10^{-3}$ | 6 | 8.000 | 0.124 |
| ZM−2 | Fails | — | — | — | — | — |
| BM−1 | $1.15$ | $1.06$ | $5.83\times 10^{-2}$ | 7 | 3.000 | 0.109 |
| BM−2 | $2.44\times 10^{-2}$ | $4.15\times 10^{-3}$ | $5.41\times 10^{-4}$ | 7 | 3.000 | 0.110 |
| BM−3 | $2.67\times 10^{-2}$ | $3.06\times 10^{-3}$ | $9.21\times 10^{-4}$ | 5 | 7.988 | 0.109 |
| M−1 | $3.55\times 10^{-2}$ | $2.32\times 10^{-3}$ | $1.42\times 10^{-10}$ | 5 | 8.000 | 0.084 |
| M−2 | $3.05\times 10^{-2}$ | $7.06\times 10^{-3}$ | $2.94\times 10^{-3}$ | 6 | 8.000 | 0.093 |
| M−3 | $3.30\times 10^{-2}$ | $5.82\times 10^{-4}$ | $4.26\times 10^{-5}$ | 5 | 8.000 | 0.095 |
| M−4 | $2.95\times 10^{-2}$ | $1.22\times 10^{-2}$ | $6.70\times 10^{-3}$ | 6 | 8.000 | 0.094 |
| M−5 | $5.01\times 10^{-2}$ | $1.20\times 10^{-2}$ | $5.06\times 10^{-6}$ | 5 | 8.000 | 0.089 |
Table 5. Performance of methods for Example 5.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $1.02\times 10^{-2}$ | $1.56\times 10^{-14}$ | 0 | 3 | 7.983 | 0.702 |
| ZM−2 | $2.40\times 10^{-2}$ | $5.32\times 10^{-14}$ | $7.45\times 10^{-89}$ | 4 | 8.000 | 0.873 |
| BM−1 | $2.55\times 10^{-4}$ | $7.84\times 10^{-11}$ | $2.26\times 10^{-30}$ | 5 | 3.000 | 0.920 |
| BM−2 | $2.55\times 10^{-4}$ | $7.84\times 10^{-11}$ | $2.26\times 10^{-30}$ | 5 | 3.000 | 0.795 |
| BM−3 | $9.57\times 10^{-3}$ | $2.50\times 10^{-15}$ | 0 | 3 | 7.989 | 0.671 |
| M−1 | $9.44\times 10^{-3}$ | $2.07\times 10^{-14}$ | 0 | 3 | 7.982 | 0.593 |
| M−2 | $5.96\times 10^{-3}$ | $1.02\times 10^{-15}$ | 0 | 3 | 7.990 | 0.608 |
| M−3 | $9.42\times 10^{-3}$ | $2.48\times 10^{-14}$ | 0 | 3 | 7.982 | 0.562 |
| M−4 | $5.95\times 10^{-3}$ | $1.18\times 10^{-15}$ | 0 | 3 | 7.989 | 0.530 |
| M−5 | $9.44\times 10^{-3}$ | $2.62\times 10^{-14}$ | 0 | 3 | 7.982 | 0.499 |
Table 6. Performance of methods for Example 6.
| Methods | $\vert x_2 - x_1\vert$ | $\vert x_3 - x_2\vert$ | $\vert x_4 - x_3\vert$ | $k$ | COC | CPU-Time |
|---|---|---|---|---|---|---|
| ZM−1 | $1.38\times 10^{-2}$ | $5.09\times 10^{-4}$ | $2.24\times 10^{-27}$ | 4 | 8.000 | 1.217 |
| ZM−2 | $3.13\times 10^{-2}$ | $4.80\times 10^{-3}$ | $7.00\times 10^{-20}$ | 4 | 7.998 | 1.357 |
| BM−1 | $4.76\times 10^{-5}$ | $1.26\times 10^{-36}$ | 0 | 3 | 8.000 | 0.874 |
| BM−2 | $4.76\times 10^{-5}$ | $2.57\times 10^{-36}$ | 0 | 3 | 8.000 | 0.889 |
| BM−3 | $1.37\times 10^{-2}$ | $4.98\times 10^{-4}$ | $3.38\times 10^{-28}$ | 4 | 8.000 | 1.201 |
| M−1 | $7.34\times 10^{-6}$ | $1.14\times 10^{-41}$ | 0 | 3 | 8.000 | 0.448 |
| M−2 | $8.25\times 10^{-6}$ | $4.84\times 10^{-41}$ | 0 | 3 | 8.000 | 0.452 |
| M−3 | $7.71\times 10^{-6}$ | $2.09\times 10^{-41}$ | 0 | 3 | 8.000 | 0.460 |
| M−4 | $8.68\times 10^{-6}$ | $8.58\times 10^{-41}$ | 0 | 3 | 8.000 | 0.468 |
| M−5 | $8.32\times 10^{-6}$ | $4.03\times 10^{-41}$ | 0 | 3 | 8.000 | 0.436 |
