Article

An Efficient Class of Traub–Steffensen-Type Methods for Computing Multiple Zeros

Deepak Kumar 1, Janak Raj Sharma 1 and Clemente Cesarano 2
1 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, Sangrur, India
2 Section of Mathematics, International Telematic University UNINETTUNO, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
* Author to whom correspondence should be addressed.
Axioms 2019, 8(2), 65; https://doi.org/10.3390/axioms8020065
Submission received: 5 May 2019 / Revised: 21 May 2019 / Accepted: 22 May 2019 / Published: 25 May 2019
(This article belongs to the Special Issue Special Functions and Their Applications)

Abstract

Numerous higher-order methods with derivative evaluations are available in the literature for computing multiple zeros. However, higher-order methods without derivatives are very rare for multiple zeros. Encouraged by this fact, we present a family of third-order derivative-free iterative methods for multiple zeros that require only three function evaluations per iteration. Convergence of the proposed class is demonstrated by means of a graphical tool, namely basins of attraction. Applicability of the methods is demonstrated through numerical experiments on functions of different kinds, which illustrate their efficient behavior. Comparison of the numerical results shows that the presented iterative methods are good competitors to the existing techniques.

1. Introduction

We consider derivative-free methods to find a multiple root (say, α) of multiplicity m, i.e., $f^{(r)}(\alpha) = 0$ for $r = 0, 1, 2, \ldots, m-1$ and $f^{(m)}(\alpha) \neq 0$, of a nonlinear equation $f(x) = 0$.
Many higher-order methods, either standalone or based on the quadratically convergent modified Newton method [1],
$x_{n+1} = x_n - m\,\dfrac{f(x_n)}{f'(x_n)},$  (1)
have been proposed and analyzed in the literature; see, for example, [2,3,4,5,6,7,8,9,10,11,12,13,14,15] and the references therein. Such methods require the evaluation of derivatives of first order, or of first and second order. However, higher-order derivative-free methods for the case of multiple roots are yet to be explored. The main reason for the non-availability of such methods is the difficulty of establishing their order of convergence. Derivative-free methods are important in situations where the derivative of the function f is difficult or expensive to compute. One such derivative-free method is the classical Traub–Steffensen method [16], which replaces the derivative $f'$ in the classical Newton method with an approximation based on the difference quotient
$f'(x_n) \approx \dfrac{f(x_n + \beta f(x_n)) - f(x_n)}{\beta f(x_n)}, \qquad \beta \in \mathbb{R}\setminus\{0\},$
or
$f'(x_n) \approx f[x_n, w_n],$
where $w_n = x_n + \beta f(x_n)$ and $f[x_n, w_n] = \dfrac{f(w_n) - f(x_n)}{w_n - x_n}$ is a first-order divided difference.
In this way the modified Newton method (1) becomes the modified Traub–Steffensen method
$x_{n+1} = x_n - m\,\dfrac{f(x_n)}{f[x_n, w_n]}.$  (2)
The modified Traub–Steffensen method (2) is a noteworthy improvement over Newton's method, since it preserves quadratic convergence without using any derivative.
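To make the iteration concrete, the following short Python sketch applies the modified Traub–Steffensen step (2) to a simple test problem; the test function, the value of β, the starting point, and the stopping tolerance are illustrative choices made here, not taken from the paper.

```python
# Minimal sketch of the modified Traub-Steffensen iteration (2):
#   w_n = x_n + beta*f(x_n),  x_{n+1} = x_n - m*f(x_n)/f[x_n, w_n].
# The test function, beta, x0, and tolerance are illustrative choices.

def traub_steffensen(f, x0, m, beta=0.01, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        w = x + beta * fx                   # auxiliary point w_n
        fw = f(w)
        if w == x or fw == fx:              # divided difference would break down
            break
        dd = (fw - fx) / (w - x)            # f[x_n, w_n]
        x_new = x - m * fx / dd             # modified Traub-Steffensen step (2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: f(x) = (x**2 - 1)**2 has a double root (m = 2) at x = 1.
print(traub_steffensen(lambda x: (x**2 - 1)**2, x0=1.3, m=2))
```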
In this work, we introduce a two-step family of third-order derivative-free methods for computing multiple zeros that requires only three evaluations of the function f per iteration. The iterative scheme uses the Traub–Steffensen iteration (2) in the first step and a Traub–Steffensen-like iteration in the second step. The procedure is based on a simple approach of using weight factors in the iterative scheme, and many special methods of the family can be generated depending on the form of the weight factor. The efficacy of these methods is tested on various numerical problems of different natures. In comparison with existing techniques that use derivative evaluations, the new derivative-free methods are computationally more efficient.
The rest of the paper is organized as follows. In Section 2, the scheme of third-order methods is developed and its convergence is studied. Section 3 is divided into two parts: Section 3.1 and Section 3.2. In Section 3.1, an initial study of the dynamics of the methods with the help of basins of attraction is presented. To demonstrate the applicability and efficacy of the new methods, some numerical experiments are performed in Section 3.2, where a comparison with existing methods of the same order is also shown. Concluding remarks are given in Section 4.

2. The Method

With the multiplicity $m \geq 1$ known in advance, we consider the following two-step scheme for multiple roots based on the Traub–Steffensen method (2):
$y_n = x_n - m\,\dfrac{f(x_n)}{f[x_n, w_n]}, \qquad x_{n+1} = y_n - H(u)\,\dfrac{f(x_n)}{f[x_n, w_n]},$  (3)
where the function $H(u) : \mathbb{C} \to \mathbb{C}$ is analytic in a neighborhood of 0, with $u = \left(\dfrac{f(y_n)}{f(x_n)}\right)^{1/m}$. The convergence order is established in the following theorem:
Theorem 1.
Let $f : \mathbb{C} \to \mathbb{C}$ be an analytic function in a region enclosing a multiple zero (say, α) of multiplicity m. Assume that the initial approximation $x_0$ is sufficiently close to α. Then the iteration scheme defined by (3) attains third order of convergence, provided that $H(0) = 0$ and $H'(0) = m$.
Proof. 
Let the error at the n-th iteration be $e_n = x_n - \alpha$. Using the Taylor expansion of $f(x_n)$ about α, we have
$f(x_n) = \dfrac{f^{(m)}(\alpha)}{m!}\, e_n^m\left(1 + C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)\right),$  (4)
where $C_k = \dfrac{m!}{(m+k)!}\,\dfrac{f^{(m+k)}(\alpha)}{f^{(m)}(\alpha)}$ for $k \in \mathbb{N}$.
Also
$w_n - \alpha = x_n - \alpha + \beta f(x_n) = e_n\left(1 + \beta\,\dfrac{f^{(m)}(\alpha)}{m!}\, e_n^{m-1}\right) + \beta\,\dfrac{f^{(m)}(\alpha)}{m!}\, e_n^m\left(C_1 e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)\right).$
Then Taylor’s expansion of f ( w n ) about α is given as
$f(w_n) = \dfrac{f^{(m)}(\alpha)}{m!}\,(w_n - \alpha)^m\left(1 + C_1(w_n - \alpha) + C_2(w_n - \alpha)^2 + C_3(w_n - \alpha)^3 + O\big((w_n - \alpha)^4\big)\right).$  (5)
Using (4) and (5), the first step of the scheme (3) yields
$y_n - \alpha = \dfrac{C_1\beta}{m}\, e_n^2 - \dfrac{\beta^2}{m^2}\big((m+1)C_1^2 - 2mC_2\big)\, e_n^3 + O(e_n^4).$  (6)
Expansion of f ( y n ) about α leads us to the expression
$f(y_n) = \dfrac{f^{(m)}(\alpha)}{m!}\left(\dfrac{\beta C_1}{m}\right)^{m} e_n^{2m}\left(1 - \dfrac{\beta X_1}{C_1}\, e_n + \dfrac{\beta}{2m C_1^2}\big(\beta(m-1)X_1^2 + 2C_1^4\big)\, e_n^2 - \dfrac{\beta X_1}{6m^2 C_1^3}\big(6\beta C_1^4(m+1) + \beta^2(m-1)(m-2)X_1^2\big)\, e_n^3 + O(e_n^4)\right),$  (7)
where $X_1 = (m+1)C_1^2 - 2mC_2$.
Expanding $H(u)$ in a neighborhood of 0 and using (4) and (7) in the expansion of $u = \left(\dfrac{f(y_n)}{f(x_n)}\right)^{1/m}$, we have
$H(u) = H(0) + H'(0)\left[\dfrac{C_1\beta}{m}\, e_n - \dfrac{\beta}{m^2}\big(\beta X_1 + C_1^2\big)\, e_n^2 + \dfrac{\beta C_1}{2m^3}\big((2\beta + 1)X_1 + 2C_1^2\big)\, e_n^3\right] + O(e_n^4).$  (8)
Using the above expressions in the last step of (3), it follows that
$e_{n+1} = \dfrac{H(0)}{m}\, e_n + \dfrac{\beta C_1}{m^2}\big(m + H(0) - H'(0)\big)\, e_n^2 - \dfrac{1}{m^3}\Big(\beta^2\big((1+m)C_1^2 + 2C_2\big)\big(m + H(0)\big) + H'(0)\big(H'(0) - 2\beta C_1^2\big)\Big)\, e_n^3 + O(e_n^4).$  (9)
It is clear that third order of convergence is attained only if $H(0) = 0$ and $H'(0) = m$. With these conditions, the error Equation (9) becomes
$e_{n+1} = \dfrac{2\beta C_1^2}{m^2}\, e_n^3 + O(e_n^4).$  (10)
Thus, the theorem is proved. □

Some Concrete Forms of H(u)

Numerous methods of the family (3) can be obtained by choosing forms of the function $H(u)$ that satisfy the conditions of Theorem 1, but we limit the choices to low-order polynomials or simple functions. Accordingly, the following simple forms are chosen:
(1) $H(u) = m u$,
(2) $H(u) = \dfrac{m u}{1 + u}$,
(3) $H(u) = \dfrac{m u}{1 - u}$,
(4) $H(u) = \dfrac{m u}{1 + m u}$,
(5) $H(u) = m \log(u + 1)$,
(6) $H(u) = m\left(e^{u} - 1\right)$.
The corresponding method for each of the above forms can be expressed as:
  • Method 1 (M1):
    $x_{n+1} = y_n - m u\,\dfrac{f(x_n)}{f[x_n, w_n]}.$
  • Method 2 (M2):
    $x_{n+1} = y_n - \dfrac{m u}{1 + u}\,\dfrac{f(x_n)}{f[x_n, w_n]}.$
  • Method 3 (M3):
    $x_{n+1} = y_n - \dfrac{m u}{1 - u}\,\dfrac{f(x_n)}{f[x_n, w_n]}.$
  • Method 4 (M4):
    $x_{n+1} = y_n - \dfrac{m u}{1 + m u}\,\dfrac{f(x_n)}{f[x_n, w_n]}.$
  • Method 5 (M5):
    $x_{n+1} = y_n - m \log(u + 1)\,\dfrac{f(x_n)}{f[x_n, w_n]}.$
  • Method 6 (M6):
    $x_{n+1} = y_n - m\left(e^{u} - 1\right)\dfrac{f(x_n)}{f[x_n, w_n]}.$
In each of the above cases, $u = \left(\dfrac{f(y_n)}{f(x_n)}\right)^{1/m}$ and $y_n = x_n - m\,\dfrac{f(x_n)}{f[x_n, w_n]}$.
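For illustration, the whole family (3) together with the six weight functions above can be implemented in a few lines. The Python sketch below is not part of the paper: the helper names, the test problem, and the numerical settings (β, tolerance, starting point) are illustrative assumptions, and it assumes real iterates with $f(y_n)/f(x_n) \geq 0$ so that the m-th root stays real.

```python
import math

# Sketch of the two-step family (3):
#   y_n     = x_n - m*f(x_n)/f[x_n, w_n],
#   x_{n+1} = y_n - H(u)*f(x_n)/f[x_n, w_n],  u = (f(y_n)/f(x_n))**(1/m).
# Weight functions of methods M1-M6; each satisfies H(0) = 0 and H'(0) = m.
def make_weights(m):
    return {
        "M1": lambda u: m * u,
        "M2": lambda u: m * u / (1 + u),
        "M3": lambda u: m * u / (1 - u),
        "M4": lambda u: m * u / (1 + m * u),
        "M5": lambda u: m * math.log(1 + u),
        "M6": lambda u: m * (math.exp(u) - 1),
    }

def family3(f, x0, m, H, beta=0.01, tol=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        w = x + beta * fx
        if w == x:                          # f(x) too small to form the divided difference
            return x
        dd = (f(w) - fx) / (w - x)          # f[x_n, w_n]
        y = x - m * fx / dd                 # first (Traub-Steffensen) step
        u = (f(y) / fx) ** (1.0 / m)        # weight variable u
        x_new = y - H(u) * fx / dd          # second step with weight H(u)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: triple root (m = 3) of f(x) = (x**3 - x)**3 at x = 1, method M1.
f = lambda x: (x**3 - x)**3
print(family3(f, x0=1.2, m=3, H=make_weights(3)["M1"]))
```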
Remark 1.
The computational efficiency of an iterative method is measured by the efficiency index $E = s^{1/d}$ (see [17]), where s is the order of convergence and d is the computational cost, measured as the number of new pieces of information required per iterative step; a "piece of information" is typically an evaluation of the function f or one of its derivatives. The presented third-order methods require three function evaluations per iteration, so their efficiency index is $E = 3^{1/3} \approx 1.442$, which equals the efficiency index of existing third-order methods such as Dong [5], Halley [7], Chebyshev [7], Osada [12], and Victory and Neta [14]. Note, however, that these existing third-order methods require derivative evaluations, which is not the case for the proposed methods. Note also that the efficiency index of the new methods is better than that of the modified Newton method and of the derivative-free Traub–Steffensen method ($E = 2^{1/2} \approx 1.414$).

3. Numerical Tests

In this section, we first plot the basins of attraction of the zeros of some polynomials when the proposed iterative methods are applied to them. Next, we verify the theoretical results by applying the methods to some nonlinear functions, including ones that arise in practical problems.

3.1. Basins of Attraction

Analysis of the complex dynamical behavior gives important information about the convergence and stability of an iterative scheme; see, e.g., [2,3,18,19,20,21]. Here, we describe the dynamics of the iterative methods M1–M6 by visualizing the basins of attraction of the zeros of a complex polynomial $P(z)$. A deeper dynamical study of the proposed six methods, with a detailed stability analysis of the fixed points and their behavior, would require a separate article and is therefore not pursued in the present work.
To start, let the initial point $z_0$ lie in a rectangular region $R \subset \mathbb{C}$ that contains all the zeros of the polynomial $P(z)$. The iterative method starting from a point $z_0$ in the rectangle either converges to a zero of $P(z)$ or eventually diverges. The stopping criterion for convergence is $10^{-3}$, with a maximum of 25 iterations; if the required tolerance is not achieved within 25 iterations, we conclude that the method starting at $z_0$ does not converge to any root. The strategy adopted is as follows: a distinct color is assigned to the basin of attraction of each zero; if the iteration started at $z_0$ converges, the point is painted with the color of the corresponding basin, otherwise (if it fails to converge within 25 iterations) it is painted black.
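A rough Python sketch of this scanning procedure is given below for method M1 applied to $P_1(z) = (z^2 - 1)^2$; the grid size, region, iteration limit, and tolerance follow the description above, while the function names and the rendering step are illustrative assumptions (the figures in the paper may have been produced with different software).

```python
import numpy as np

# Basin-of-attraction scan: each point z0 of a 400 x 400 grid on [-2,2] x [-2,2]
# is iterated with method M1 for P1(z) = (z**2 - 1)**2 (m = 2); the point gets the
# index of the root reached within 25 iterations (tolerance 1e-3), or -1 (black).

def m1_step(z, f, m, beta):
    fz = f(z)
    w = z + beta * fz
    dd = (f(w) - fz) / (w - z)              # f[z_n, w_n]
    y = z - m * fz / dd
    u = (f(y) / fz) ** (1.0 / m)
    return y - m * u * fz / dd              # method M1: H(u) = m*u

def basin_index(z0, f, roots, m, beta, tol=1e-3, max_iter=25):
    z = z0
    for _ in range(max_iter):
        try:
            z = m1_step(z, f, m, beta)
        except (ZeroDivisionError, OverflowError):
            return -1
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k                    # index of the attracting zero
    return -1                               # no convergence in 25 iterations

f = lambda z: (z**2 - 1)**2
roots = [-1.0, 1.0]
xs = np.linspace(-2.0, 2.0, 400)
ys = np.linspace(-2.0, 2.0, 400)
image = np.array([[basin_index(complex(x, y), f, roots, m=2, beta=1e-2)
                   for x in xs] for y in ys])
# 'image' can now be rendered, e.g., with matplotlib's imshow, one color per index.
```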
We plot the basins of attraction of the methods M1–M6 (for the choices $\beta = 10^{-2}, 10^{-4}, 10^{-6}$) for the following three polynomials:
Problem 1.
In the first example, we consider the polynomial $P_1(z) = (z^2 - 1)^2$, which has the zeros $\{\pm 1\}$, each of multiplicity 2. To draw the basins, we use a grid of 400 × 400 points in a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ and assign green to each initial point in the basin of attraction of the zero −1 and red to each point in the basin of attraction of the zero 1. The basins obtained for the methods M1–M6 are shown in Figure 1, Figure 2 and Figure 3, corresponding to $\beta = 10^{-2}, 10^{-4}, 10^{-6}$. Looking at the behavior of the methods, we see that M2 and M4 possess the fewest divergent points, followed by M3 and M5; on the contrary, M6 has the most divergent points, followed by M1. Notice that the basins become better as the parameter β assumes smaller values.
Problem 2.
Let us consider the polynomial $P_2(z) = (z^3 - z)^3$, with zeros $\{0, \pm 1\}$, each of multiplicity 3. To examine the dynamical view, we consider a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ with 400 × 400 grid points and assign the colors red, green, and blue to the basins of attraction of −1, 0, and 1, respectively. The basins for this problem are shown in Figure 4, Figure 5 and Figure 6, corresponding to the parameter choices $\beta = 10^{-2}, 10^{-4}, 10^{-6}$ in the proposed methods. Observing the behavior of the methods, we see that M3, M2, and M4 show better convergence behavior, since they have the fewest divergent points; on the other hand, M6 contains large black regions, followed by M1 and M5, indicating initial points from which the methods do not converge within 25 iterations. Also observe that the basins improve for smaller values of β.
Problem 3.
Lastly, we consider the polynomial $P_3(z) = (z^4 - 1)^2$, with zeros $\{\pm 1, \pm i\}$, each of multiplicity 2. To see the dynamical view, we consider a rectangle $D = [-2, 2] \times [-2, 2] \subset \mathbb{C}$ with 400 × 400 grid points and assign the colors red, blue, yellow, and green to the basins of attraction of −1, 1, i, and −i, respectively. The basins for this problem are shown in Figure 7, Figure 8 and Figure 9, corresponding to the parameter choices $\beta = 10^{-2}, 10^{-4}, 10^{-6}$ in the proposed methods. Observing the behavior of the methods, we see that M3, M2, and M4 show better convergence behavior, since they have the fewest divergent points; on the other hand, M6 contains large black regions, followed by M1 and M5, indicating initial points from which the methods do not converge within 25 iterations. Also observe that the basins become larger for smaller values of β.
From these graphics, one can easily judge the behavior and stability of a method. If we choose an initial point $z_0$ in a zone where distinct basins of attraction touch each other, it is impossible to predict which root the iteration starting at $z_0$ will reach, so such a choice of $z_0$ is not a good one. Both the black zones and the zones where different colors meet are unsuitable for the initial guess $z_0$ when a particular root is sought. The most intricate pictures arise where the frontiers between basins of attraction are most complicated; these correspond to the cases where the method is most demanding with respect to the initial point and where its dynamical behavior is most unpredictable. We conclude this section by remarking that the convergence behavior of the proposed methods depends on the value of the parameter β: the smaller the value of β, the better the convergence of the method.

3.2. Applications

The above six methods M1–M6 of the family (3) are applied to solve a few nonlinear equations, which not only illustrates the methods in practice but also serves to verify the theoretical results derived above. To check the theoretical order of convergence, we compute the computational order of convergence (COC) using the formula (see [22])
$\mathrm{COC} = \dfrac{\ln\left|(x_{n+1} - \alpha)/(x_n - \alpha)\right|}{\ln\left|(x_n - \alpha)/(x_{n-1} - \alpha)\right|}.$
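As a small worked illustration (not from the paper), the COC can be evaluated directly from three consecutive iterates once α is known; the sample numbers below are artificial iterates of a third-order process.

```python
import math

# Computational order of convergence (COC) from three consecutive iterates,
# following the formula above; alpha is the known zero.
def coc(x_prev, x_curr, x_next, alpha):
    return (math.log(abs((x_next - alpha) / (x_curr - alpha)))
            / math.log(abs((x_curr - alpha) / (x_prev - alpha))))

# Artificial iterates with e_{n+1} ~ e_n**3 toward alpha = 0: COC should be ~3.
print(coc(1e-2, 1e-6, 1e-18, alpha=0.0))
```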
Performance is compared with some well-known third-order methods requiring derivative evaluations such as Dong [5], Halley [7], Chebyshev [7], Osada [12], and Victory and Neta [14]. These methods are expressed as:
  • Dong’s method (DM):
    $y_n = x_n - \sqrt{m}\,\dfrac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - m\left(1 - \dfrac{1}{\sqrt{m}}\right)^{1-m}\dfrac{f(y_n)}{f'(x_n)}.$
  • Halley’s method (HM):
    $x_{n+1} = x_n - \dfrac{f(x_n)}{\dfrac{m+1}{2m}\, f'(x_n) - \dfrac{f(x_n) f''(x_n)}{2 f'(x_n)}}.$
  • Chebyshev’s method (CM):
    $x_{n+1} = x_n - \dfrac{m(3-m)}{2}\,\dfrac{f(x_n)}{f'(x_n)} - \dfrac{m^2}{2}\,\dfrac{f^2(x_n) f''(x_n)}{f'^3(x_n)}.$
  • Osada’s method (OM):
    $x_{n+1} = x_n - \dfrac{1}{2}\, m(m+1)\,\dfrac{f(x_n)}{f'(x_n)} + \dfrac{1}{2}\,(m-1)^2\,\dfrac{f'(x_n)}{f''(x_n)}.$
  • Victory-Neta method (VNM):
    $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \dfrac{f(y_n)}{f'(x_n)}\cdot\dfrac{f(x_n) + A f(y_n)}{f(x_n) + B f(y_n)},$
    where
    $A = \mu^{2m} - \mu^{m+1}, \qquad B = -\dfrac{\mu^{m}(m-2)(m-1) + 1}{(m-1)^2}, \qquad \mu = \dfrac{m}{m-1}.$
All computations are performed in the programming package Mathematica using multiple-precision arithmetic. The performance of the new methods is tested with the parameter value β = 0.01. The numerical results displayed in Table 1, Table 2, Table 3 and Table 4 include: (i) the values of the last three consecutive errors $|x_{n+1} - x_n|$; (ii) the number of iterations n required to converge to the solution, such that $|x_{n+1} - x_n| + |f(x_n)| < 10^{-100}$; (iii) the COC; and (iv) the elapsed CPU time (CPU-time) in seconds.
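The experiments themselves were run in Mathematica; the following Python/mpmath sketch only mimics the multiple-precision setup and the stopping rule $|x_{n+1} - x_n| + |f(x_n)| < 10^{-100}$ for method M1. The working precision, test function, starting point, and the guard against a vanishing divided difference are illustrative assumptions, not the paper's actual code.

```python
from mpmath import mp, mpf, fabs

mp.dps = 500                                  # working precision (decimal digits), illustrative

# Method M1 with the stopping rule used in the experiments:
#   stop when |x_{n+1} - x_n| + |f(x_n)| < 10^-100 (checked on the following pass).
def m1(f, x0, m, beta=mpf("0.01"), tol=mpf("1e-100"), max_iter=50):
    x_prev, x = None, mpf(x0)
    for n in range(max_iter):
        fx = f(x)
        if x_prev is not None and fabs(x - x_prev) + fabs(fx) < tol:
            return x, n
        w = x + beta * fx
        if w == x:                            # f(x) below working precision; stop
            return x, n
        dd = (f(w) - fx) / (w - x)            # f[x_n, w_n]
        y = x - m * fx / dd
        u = (f(y) / fx) ** (mpf(1) / m)
        x_prev, x = x, y - m * u * fx / dd    # M1: H(u) = m*u
    return x, max_iter

# Example: double root (m = 2) of f(x) = (x**2 - 1)**2 at x = 1.
root, iters = m1(lambda x: (x**2 - 1)**2, x0=mpf("1.3"), m=2)
print(iters, root)
```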
The following examples are chosen for numerical tests:
Example 1 (Eigenvalue problem).
Finding the eigenvalues of a large sparse matrix is one of the most challenging tasks in applied mathematics and engineering. Moreover, computing the zeros of the characteristic polynomial of a square matrix of order greater than 4 is another big challenge. We therefore consider the following 9 × 9 matrix
$M = \dfrac{1}{8}\begin{pmatrix}
12 & 0 & 0 & 19 & 19 & 76 & 19 & 18 & 437 \\
64 & 24 & 0 & 24 & 24 & 64 & 8 & 32 & 376 \\
16 & 0 & 24 & 4 & 4 & 16 & 4 & 8 & 92 \\
40 & 0 & 0 & 10 & 50 & 40 & 2 & 20 & 242 \\
4 & 0 & 0 & 1 & 41 & 4 & 1 & 2 & 25 \\
40 & 0 & 0 & 18 & 18 & 104 & 18 & 20 & 462 \\
84 & 0 & 0 & 29 & 29 & 84 & 21 & 42 & 501 \\
16 & 0 & 0 & 4 & 4 & 16 & 4 & 16 & 92 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24
\end{pmatrix}.$
The characteristic polynomial of the matrix (M) is given as
$f_1(x) = x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4 + 15927x^3 + 6993x^2 - 24732x + 12960.$
This function has one multiple zero at α = 3 of multiplicity 4. We compute this zero starting from the initial approximation $x_0 = 2.8$. Numerical results are shown in Table 1.
Example 2 (Isentropic supersonic flow).
Consider isentropic supersonic flow along a sharp expansion corner. The relationship between the Mach number before the corner (i.e., $M_1$) and after the corner (i.e., $M_2$) is given by (see [23])
$\delta = b^{1/2}\left[\tan^{-1}\left(\dfrac{M_2^2 - 1}{b}\right)^{1/2} - \tan^{-1}\left(\dfrac{M_1^2 - 1}{b}\right)^{1/2}\right] - \left[\tan^{-1}\left(M_2^2 - 1\right)^{1/2} - \tan^{-1}\left(M_1^2 - 1\right)^{1/2}\right],$
where $b = \dfrac{\gamma + 1}{\gamma - 1}$ and γ is the specific heat ratio of the gas.
As a particular case study, we solve the equation for $M_2$ given that $M_1 = 1.5$, $\gamma = 1.4$, and $\delta = 10^{\circ}$. In this case, we have
$\tan^{-1}\dfrac{\sqrt{5}}{2} - \tan^{-1}\sqrt{x^2 - 1} + \sqrt{6}\left(\tan^{-1}\sqrt{\dfrac{x^2 - 1}{6}} - \tan^{-1}\dfrac{1}{2}\sqrt{\dfrac{5}{6}}\right) - \dfrac{11}{63} = 0,$
where $x = M_2$.
Raising this expression to the seventh power, we obtain the required nonlinear function
$f_2(x) = \left[\tan^{-1}\dfrac{\sqrt{5}}{2} - \tan^{-1}\sqrt{x^2 - 1} + \sqrt{6}\left(\tan^{-1}\sqrt{\dfrac{x^2 - 1}{6}} - \tan^{-1}\dfrac{1}{2}\sqrt{\dfrac{5}{6}}\right) - \dfrac{11}{63}\right]^7.$
The above function has a zero at α = 1.8411027704926161 of multiplicity 7. The required zero is computed using the initial approximation $x_0 = 1.5$. Numerical results are shown in Table 2.
Example 3.
Next, we consider a standard nonlinear test function defined by
$f_3(x) = \dfrac{x^4}{12} + \dfrac{x^2}{2} + x + e^x(x - 3) + \sin x + 3.$
The function $f_3$ has a multiple zero at α = 0 of multiplicity 3. We choose the initial approximation $x_0 = 0.5$ for computing the zero of the function. Numerical results are displayed in Table 3.
Example 4.
Lastly, we consider a standard nonlinear test function defined by
$f_4(x) = 2(x^2 + 1)\left(2x\, e^{x^2 + 1} + x^3 - x\right)\cosh^2\dfrac{\pi x}{2}.$
The function $f_4$ has a complex zero at α = i of multiplicity 4. We choose the initial approximation $x_0 = 1.25i$ for computing the zero of the function. Numerical results are displayed in Table 4.
From the numerical results, we observe that the accuracy of the successive approximations increases, which points to the stable nature of the methods. Moreover, like the existing methods, the present methods show consistent convergence behavior. We display the value '0' of $|x_{n+1} - x_n|$ at the iteration for which $|x_{n+1} - x_n| + |f(x_n)| < 10^{-100}$ is first satisfied. The calculated computational order of convergence also confirms that the theoretical order of the methods is preserved. The efficient nature of the proposed methods is reflected in the fact that the CPU time they consume is less than that of the existing methods. In addition, the new methods are more accurate, since their errors become much smaller with increasing n than those of the existing techniques. The main purpose of developing the new derivative-free methods and testing them on different types of nonlinear equations is to illustrate the accuracy of the approximate solution and the stability of the convergence to the required solution. Numerical experiments carried out on several further problems of different kinds confirmed these conclusions to a large extent.

4. Conclusions

In this study, we have proposed a class of third-order derivative-free methods for solving nonlinear equations having multiple roots with known multiplicity. The convergence analysis establishes third-order convergence under standard assumptions on the nonlinear function whose zeros are sought. Some special cases of the class are presented; these are applied to a number of nonlinear equations and compared with existing techniques. The numerical results show that the presented derivative-free methods are good competitors to the existing third-order techniques that require derivative evaluations in their algorithms.

Author Contributions

All authors contributed equally to this study.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
  2. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532.
  3. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algor. 2016, 71, 775–796.
  4. Behl, R.; Zafar, F.; Alshormani, A.S.; Junjua, M.U.D.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Meth. 2018, 15.
  5. Dong, C. A basic theorem of constructing an iterative formula of the higher order for computing multiple roots of an equation. Math. Numer. Sin. 1982, 11, 445–450.
  6. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
  7. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
  8. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
  9. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
  10. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
  11. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170.
  12. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133.
  13. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
  14. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
  15. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
  16. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
  17. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966.
  18. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16.
  19. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46.
  20. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599.
  21. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2015, 68, 261–288.
  22. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93.
  23. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992.
Figure 1. Basins of attraction for polynomial $P_1(z)$ for $\beta = 10^{-2}$.
Figure 2. Basins of attraction for polynomial $P_1(z)$ for $\beta = 10^{-4}$.
Figure 3. Basins of attraction for polynomial $P_1(z)$ for $\beta = 10^{-6}$.
Figure 4. Basins of attraction for polynomial $P_2(z)$ for $\beta = 10^{-2}$.
Figure 5. Basins of attraction for polynomial $P_2(z)$ for $\beta = 10^{-4}$.
Figure 6. Basins of attraction for polynomial $P_2(z)$ for $\beta = 10^{-6}$.
Figure 7. Basins of attraction for polynomial $P_3(z)$ for $\beta = 10^{-2}$.
Figure 8. Basins of attraction for polynomial $P_3(z)$ for $\beta = 10^{-4}$.
Figure 9. Basins of attraction for polynomial $P_3(z)$ for $\beta = 10^{-6}$.
Table 1. Comparison of performance of methods for Example 1.

Methods   |x_3 − x_2|     |x_4 − x_3|     |x_5 − x_4|     n   COC      CPU-Time
DM        9.90×10^-11     1.52×10^-31     5.49×10^-94     5   3.0000   0.01942
HM        5.84×10^-10     4.61×10^-29     2.24×10^-86     5   3.0000   0.01950
CM        9.54×10^-10     2.47×10^-28     4.30×10^-84     5   3.0000   0.01885
OM        1.26×10^-9      6.52×10^-28     8.94×10^-83     5   3.0000   0.01955
VNM       2.50×10^-10     2.92×10^-30     4.68×10^-90     5   3.0000   0.02250
M1        1.51×10^-12     3.91×10^-37     0               4   3.0000   0.00775
M2        5.15×10^-12     2.30×10^-35     0               4   3.0000   0.01165
M3        2.32×10^-13     7.01×10^-40     0               4   3.0000   0.01175
M4        4.73×10^-11     3.59×10^-32     1.57×10^-95     5   3.0000   0.01285
M5        2.94×10^-12     3.57×10^-36     0               4   3.0000   0.01775
M6        6.71×10^-13     2.55×10^-38     0               4   3.0000   0.01115
Table 2. Comparison of performance of methods for Example 2.

Methods   |x_3 − x_2|     |x_4 − x_3|     |x_5 − x_4|     n   COC      CPU-Time
DM        6.14×10^-9      1.48×10^-26     2.06×10^-79     5   3.0000   0.17575
HM        5.10×10^-8      1.20×10^-23     1.54×10^-70     5   3.0000   0.20354
CM        5.98×10^-8      2.17×10^-23     1.04×10^-69     5   3.0000   0.23560
OM        6.30×10^-8      2.63×10^-23     1.91×10^-69     5   3.0000   0.23225
VNM       2.45×10^-8      1.17×10^-24     1.28×10^-73     5   3.0000   0.24875
M1        1.28×10^-15     4.70×10^-47     0               4   3.0000   0.11121
M2        2.95×10^-15     8.62×10^-46     0               4   3.0000   0.12100
M3        3.86×10^-16     6.45×10^-49     0               4   3.0000   0.10925
M4        4.96×10^-14     1.23×10^-41     0               4   3.0000   0.11725
M5        2.00×10^-15     2.24×10^-46     0               4   3.0000   0.12125
M6        7.54×10^-16     7.21×10^-48     0               4   3.0000   0.12525
Table 3. Comparison of performance of methods for Example 3.

Methods   |x_3 − x_2|     |x_4 − x_3|     |x_5 − x_4|     n   COC      CPU-Time
DM        1.02×10^-9      3.43×10^-29     1.31×10^-87     5   3.0000   0.12925
HM        2.58×10^-8      1.09×10^-24     8.36×10^-74     5   3.0000   0.14450
CM        2.85×10^-8      1.65×10^-24     3.16×10^-73     5   3.0000   0.13275
OM        3.13×10^-8      2.39×10^-24     1.06×10^-72     5   3.0000   0.12500
VNM       5.37×10^-8      7.00×10^-27     1.56×10^-80     5   3.0000   0.21355
M1        1.88×10^-13     9.27×10^-41     0               4   3.0000   0.05475
M2        6.24×10^-13     5.05×10^-39     0               4   3.0000   0.05852
M3        3.10×10^-14     2.06×10^-43     0               4   3.0000   0.05075
M4        3.15×10^-12     1.09×10^-36     0               4   3.0000   0.05475
M5        3.60×10^-13     8.07×10^-40     0               4   3.0000   0.05475
M6        8.56×10^-14     6.54×10^-42     0               4   3.0000   0.06253
Table 4. Comparison of performance of methods for Example 4.

Methods   |x_3 − x_2|     |x_4 − x_3|     |x_5 − x_4|     n   COC      CPU-Time
DM        7.61×10^-9      1.42×10^-25     9.14×10^-76     5   3.0000   1.047
HM        6.17×10^-8      1.12×10^-22     6.66×10^-67     5   3.0000   2.141
CM        7.82×10^-8      2.81×10^-22     1.31×10^-65     5   3.0000   1.985
OM        8.97×10^-8      4.78×10^-22     7.22×10^-65     5   3.0000   1.906
VNM       2.42×10^-8      5.53×10^-24     6.59×10^-71     5   3.0000   1.219
M1        7.10×10^-12     7.96×10^-35     0               4   3.0000   0.390
M2        1.88×10^-11     2.20×10^-33     3.54×10^-99     5   3.0000   0.516
M3        1.72×10^-12     5.66×10^-37     0               4   3.0000   0.500
M4        1.22×10^-10     1.22×10^-30     1.21×10^-90     5   3.0000   0.578
M5        1.20×10^-11     4.74×10^-34     0               4   3.0000   0.359
M6        3.80×10^-12     9.18×10^-36     0               4   3.0000   0.313
