Article

One-Point Optimal Family of Multiple Root Solvers of Second-Order

1. Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal, Sangrur 148106, India
2. Section of Mathematics, International Telematic University UNINETTUNO, Corso Vittorio Emanuele II, 39, 00186 Roma, Italy
* Authors to whom correspondence should be addressed.
Mathematics 2019, 7(7), 655; https://doi.org/10.3390/math7070655
Submission received: 6 June 2019 / Revised: 13 July 2019 / Accepted: 17 July 2019 / Published: 21 July 2019
(This article belongs to the Special Issue Multivariate Approximation for solving ODE and PDE)

Abstract

This manuscript develops a one-point family of iterative functions for approximating the multiple zeros of nonlinear equations. The family, constructed by means of weight functions, has optimal second-order convergence according to the Kung–Traub conjecture. The convergence behavior is analyzed by establishing the essential conditions on the weight function. The well-known modified Newton method is a member of the proposed family for particular choices of the weight function. The dynamical behavior of different members is illustrated by using a technique called the "basin of attraction". Several practical problems are given to compare different methods of the presented family.

1. Introduction

Solving nonlinear equations h(x) = 0, where h : Ω ⊂ ℝ → ℝ is defined on an open interval Ω, is a delightful and demanding task in many applied scientific branches, such as mathematical biology, physics, chemistry, economics, and engineering, to name a few [1,2,3,4]. This is mainly because problems from these areas typically reduce to finding the root of a nonlinear equation. The great importance of this subject has led to the development of many numerical methods, most of them iterative in nature (see [5,6,7]). With advances in computer hardware and software, the topic of solving nonlinear equations by numerical methods has gained additional significance. Since closed-form solutions cannot be obtained in general, researchers resort to iterative methods for approximating a solution. In particular, here we consider iterative methods to compute a multiple root (say, α) of the equation h(x) = 0 with multiplicity m ≥ 1, i.e., h^(k)(α) = 0 for k = 0, 1, …, m − 1 and h^(m)(α) ≠ 0.
There is a plethora of iterative methods with different orders of convergence constructed to approximate the zeros of the equation h(x) = 0 (see [8,9,10,11,12,13,14,15,16,17,18]). The computational efficiency index, defined by Ostrowski in [19], is a very effective mechanism for ranking iterative algorithms in terms of their convergence order p_c and the number of function evaluations d required per iteration. It is formulated as I = p_c^(1/d): the higher the computational efficiency index of an iterative scheme, the better the scheme.
This idea is made precise by Kung–Traub's conjecture [20], which imposes an upper bound on the convergence order attainable with a fixed number of function evaluations. According to this conjecture, an iterative scheme which requires d function evaluations per iteration can attain at most the convergence order p_c = 2^(d−1). Iterative methods that attain this bound are called optimal.
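For concreteness, these two measures can be sketched in a few lines of code (the function names here are our own):

```python
# Ostrowski's efficiency index I = p_c**(1/d) and the Kung-Traub optimal
# order 2**(d - 1); the function names are our own.
def efficiency_index(p_c, d):
    return p_c ** (1.0 / d)

def optimal_order(d):
    return 2 ** (d - 1)

# A one-point method using d = 2 evaluations per iteration (h and h')
# is optimal when its order equals 2**(2 - 1) = 2:
print(optimal_order(2), round(efficiency_index(2, 2), 4))  # 2 1.4142
```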
The most basic and widely used method is the well-known modified Newton’s method:
x_{n+1} = x_n − m h(x_n)/h′(x_n),  n = 0, 1, 2, ….    (1)
This method efficiently finds the required zero of multiplicity m with a quadratic order of convergence, provided that the initial approximation x_0 is sufficiently close to the zero [8]. In Traub's terminology (see [2]), Newton's method (1) is called a one-point method. This classical method attracts many researchers because of its wide applicability to problems formulated as nonlinear equations, differential equations, integral equations, systems of nonlinear algebraic equations, and even random operator equations. However, a common issue and the main obstacle in the use of Newton's method is its sensitivity to the initial guess, which must be sufficiently close to the exact solution to guarantee convergence. Developing a criterion for selecting such initial guesses is quite difficult, and a more effective, globally convergent iterative technique is yet to be discovered. Other important higher-order methods based on Newton's method (1) have been developed in [11,13,14,21,22,23,24,25,26,27,28].
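A minimal sketch of iteration (1) in code, assuming the multiplicity m is known in advance (the function names and tolerances are our own choices):

```python
# Minimal sketch of the modified Newton iteration (1); function names and
# tolerances are our own choices.
def modified_newton(h, dh, x0, m, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = m * h(x) / dh(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# h(x) = (x - 2)**2 * (x + 1) has a zero of multiplicity m = 2 at x = 2:
h  = lambda x: (x - 2) ** 2 * (x + 1)
dh = lambda x: 2 * (x - 2) * (x + 1) + (x - 2) ** 2
root = modified_newton(h, dh, 3.0, m=2)
print(round(root, 6))  # 2.0
```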
Recently, Chicharro et al. [28] used the weight function technique to design a class of optimal second-order one-point iterative methods for simple roots, including Newton's method. In this paper, we apply this technique to develop a class of optimal second-order one-point methods for multiple roots. The new proposed family contains the modified Newton's method and many other efficient methods, obtained by selecting particular weight functions. Therefore, over a wide range of initial approximations, we can select those methods from the family which converge toward the exact zero even when Newton's method does not.
The rest of the paper is organized as follows. In Section 2, the technique of the second-order method is developed and its convergence is studied. In Section 3, the basins of attractors are studied to check the stability of the methods. Numerical experiments on different equations are performed in Section 4 to demonstrate the applicability and efficiency of the presented methods. We finish the manuscript with some valuable conclusions in Section 5.

2. The Method

For a known multiplicity m ≥ 1, we consider the following one-step scheme for multiple roots:
x_{n+1} = x_n − G(ν_n),    (2)
where G : ℂ → ℂ is differentiable in a neighborhood of 0 and ν_n = h(x_n)/h′(x_n).
In the next result, we prove a theorem for the order of convergence of the scheme (2).
Theorem 1.
Let f : ℂ → ℂ be a differentiable function in a region containing a multiple zero (say, α) of multiplicity m. Suppose that the initial approximation x_0 is sufficiently close to α. Then, the iteration scheme defined by (2) has second-order convergence, provided that G(0) = 0, G′(0) = m, and |G″(0)| < ∞, and the error satisfies
e_{n+1} = ((2 m C_1 − G″(0))/(2 m²)) e_n² + O(e_n³),    (3)
where e_n = x_n − α and C_k = (m!/(m + k)!) · (f^(m+k)(α)/f^(m)(α)) for k ∈ ℕ.
Proof. 
Let the error at the n-th iteration be e_n = x_n − α. Using the Taylor expansions of f(x_n) and f′(x_n) about α, we have
f(x_n) = (f^(m)(α)/m!) e_n^m [1 + C_1 e_n + C_2 e_n² + C_3 e_n³ + O(e_n⁴)]    (4)
and
f′(x_n) = (f^(m)(α)/m!) e_n^(m−1) [m + (m+1) C_1 e_n + (m+2) C_2 e_n² + (m+3) C_3 e_n³ + O(e_n⁴)].    (5)
By using (4) and (5), we obtain
ν_n = e_n/m − (C_1/m²) e_n² + (((1 + m) C_1² − 2 m C_2)/m³) e_n³ + O(e_n⁴).    (6)
Expanding the weight function G(ν_n) about the origin by a Taylor series, we have
G(ν_n) ≈ G(0) + ν_n G′(0) + (1/2) ν_n² G″(0).
Employing this expansion together with (6) in the scheme (2), we obtain the error equation
e_{n+1} = −G(0) + (1 − G′(0)/m) e_n + ((2 C_1 G′(0) − G″(0))/(2 m²)) e_n² + O(e_n³).    (7)
To obtain second-order convergence, the constant term and the coefficient of e_n in (7) must vanish simultaneously. This is possible when G(0) and G′(0) take the values
G(0) = 0,  G′(0) = m.    (8)
By using these values in (7), the error equation becomes
e_{n+1} = ((2 m C_1 − G″(0))/(2 m²)) e_n² + O(e_n³).    (9)
Hence, the second-order convergence is established. □
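The error equation can be checked numerically. As a sketch (the test function and weight below are our own choices), take h(x) = (x − 1)³(x + 2), so m = 3 and C_1 = 1/3, and the weight G(ν) = m(e^ν − 1), for which G″(0) = m; the predicted asymptotic constant is then (2mC_1 − G″(0))/(2m²) = −1/18:

```python
# Numerical check of the error equation (a sketch; test function and weight
# are our own choices). For h(x) = (x - 1)**3 * (x + 2): m = 3 and
# C1 = (3!/4!) * h^(4)(1)/h^(3)(1) = (1/4)*(24/18) = 1/3; for
# G(nu) = m*(exp(nu) - 1), G''(0) = m, so the predicted constant is
# (2*m*C1 - G''(0))/(2*m**2) = -1/18.
import math

m, alpha = 3, 1.0
h  = lambda x: (x - 1.0) ** 3 * (x + 2.0)
dh = lambda x: 3 * (x - 1.0) ** 2 * (x + 2.0) + (x - 1.0) ** 3
G  = lambda nu: m * (math.exp(nu) - 1.0)

x0 = 1.01                               # e0 = 0.01
x1 = x0 - G(h(x0) / dh(x0))
ratio = (x1 - alpha) / (x0 - alpha) ** 2
print(round(ratio, 3))  # -0.056, close to -1/18
```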

Some Particular Forms of G ( ν n )

Numerous methods of the family (2) can be obtained from forms of the function G(ν) that satisfy the conditions of Theorem 1. However, we limit the choices to some simple functions. Accordingly, the following forms were chosen:
(1) G(ν_n) = m ν_n (1 + a_1 ν_n);
(2) G(ν_n) = m ν_n/(1 + a_2 ν_n);
(3) G(ν_n) = m ν_n/(1 + a_3 m ν_n);
(4) G(ν_n) = m (e^(ν_n) − 1);
(5) G(ν_n) = m log(ν_n + 1);
(6) G(ν_n) = m sin ν_n;
(7) G(ν_n) = ν_n/(1/√m + a_4 ν_n)²;
(8) G(ν_n) = (ν_n² + ν_n)/(1/m + a_5 ν_n);
where a_1, a_2, a_3, a_4, and a_5 are arbitrary constants.
The corresponding methods to each of the above forms are defined as follows:
Method 1 (M1):
x_{n+1} = x_n − m ν_n (1 + a_1 ν_n).
Method 2 (M2):
x_{n+1} = x_n − m ν_n/(1 + a_2 ν_n).
Method 3 (M3):
x_{n+1} = x_n − m ν_n/(1 + a_3 m ν_n).
Method 4 (M4):
x_{n+1} = x_n − m (e^(ν_n) − 1).
Method 5 (M5):
x_{n+1} = x_n − m log(ν_n + 1).
Method 6 (M6):
x_{n+1} = x_n − m sin ν_n.
Method 7 (M7):
x_{n+1} = x_n − ν_n/(1/√m + a_4 ν_n)².
Method 8 (M8):
x_{n+1} = x_n − (ν_n² + ν_n)/(1/m + a_5 ν_n).
Remark 1.
The scheme (2) defines a one-point family of second-order methods which needs only two function evaluations per iteration, namely h(x_n) and h′(x_n).
Remark 2.
Note that the modified Newton's method (1) is a special case of the methods M1, M2, M3, and M7 when the corresponding constants a_1, a_2, a_3, and a_4 are set to zero.

3. Complex Dynamics of Methods

Our goal here is to study the complex dynamics of the new methods with the help of a graphical tool, the basin of attraction of the zeros of a polynomial P(z) in the Argand plane. The nature of the basins of attraction provides important insight into the stability and convergence of iterative methods. This idea was initially introduced by Vrscay and Gilbert [29], and in recent times many researchers have used it in their work; see, for example, [30,31]. We consider the special cases of G(ν_n) in (2) listed above to analyze the basins of attraction.
The starting approximation z_0 is taken in a rectangular region R ⊂ ℂ that contains all the zeros of P(z). A method starting from a point z_0 in this rectangle either converges to a zero of P(z) or eventually diverges. The stopping criterion is an error tolerance of 10^(−3) within a maximum of 25 iterations; points that do not converge within this limit are declared divergent.
To show complex geometry, we checked the basins of attraction of the methods M1–M8 on the following four polynomials:
Problem 1.
In this problem, we took the polynomial P_1(z) = (z² + 4)², which has zeros {±2i} of multiplicity 2. We used a mesh of 400 × 400 points in a rectangular frame D ⊂ ℂ of size [−2, 2] × [−2, 2], coloring the basin of 2i green and the basin of −2i red. Each initial point in the green region converges to 2i, and each point in the red region converges to −2i. The basins obtained for the methods M1–M8 are shown in Figure 1. Analyzing the behavior of the methods, we see that the methods M5 and M6 possess the fewest divergent points, followed by M1, M4, M8, and M2. On the contrary, the method M3 has the highest number of divergent points, followed by M7.
Problem 2.
Let us consider the polynomial P_2(z) = (z³ − z)³, which has zeros {0, ±1} of multiplicity 3. To see the dynamical structure, we considered a rectangular frame D = [−2, 2] × [−2, 2] ⊂ ℂ with 400 × 400 mesh points and assigned the colors red, green, and blue to the points in the basins of attraction of −1, 0, and 1, respectively. The basins obtained for the methods M1–M8 are shown in Figure 2. Analyzing the behavior of the methods, we observe that the methods M5 and M6 have the widest convergence regions, followed by M1, M4, M8, M2, M3, and M7.
Problem 3.
Next, consider the polynomial P_3(z) = (z⁴ − 6z² + 8)², which has the four zeros {±2, ±√2} (±√2 ≈ ±1.414) of multiplicity 2. To view the attraction basins, we considered a rectangular frame D = [−2, 2] × [−2, 2] ⊂ ℂ with 400 × 400 mesh points and assigned the colors red, blue, green, and yellow to the points in the basins of −2, −√2, √2, and 2, respectively. The basins obtained for the methods M1–M8 are shown in Figure 3. Observing the behavior of the methods, we see that the methods M5, M8, M2, M3, M4, M1, and M6 possess few divergent points and therefore show good convergence. On the contrary, the method M7 has the highest number of divergent points.
Problem 4.
Lastly, we consider the polynomial P_4(z) = z⁶ − (1/2)z⁵ + (11/4)(1 + i)z⁴ − (1/4)(19 + 3i)z³ + (1/4)(11 + 5i)z² − (1/4)(11 + i)z + 3/2 − 3i, which has the six simple zeros {1, −1 + 2i, −(1 + i)/2, i, −(3/2)i, 1 − i}. To view the attraction basins, we considered a rectangle D = [−2, 2] × [−2, 2] ⊂ ℂ with 300 × 300 grid points and assigned the colors red, green, yellow, blue, cyan, and purple to the points in the basins of 1, −1 + 2i, −(1 + i)/2, i, −(3/2)i, and 1 − i, respectively. The basins obtained for the methods M1–M8 are shown in Figure 4. Analyzing the basins of the methods, we observe that the methods M5, M8, M2, M3, M4, and M6 possess few divergent points. On the contrary, the methods M1 and M7 have a higher number of divergent points.
The basin graphics allow us to judge the behavior and suitability of a method in applications. If we pick an initial point z_0 in a zone where several basins of attraction touch one another, it is difficult to predict which root will be reached by an iterative method starting at z_0; hence, choosing z_0 in such a zone is not a good choice. Both the dark zones and the zones with many mixed colors are unsuitable for choosing z_0 when a specific root is required. The most striking pictures arise when the borders between basins of attraction are highly intricate; these borders correspond to the cases where the method is most demanding with respect to the initial point.
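The basin computation described above can be sketched as follows (a reduced 41 × 41 grid, method M4 applied to P_1(z) = (z² + 4)², and our own divergence guards):

```python
# Basin-of-attraction sketch: method M4 on P1(z) = (z**2 + 4)**2 over a
# coarse grid (the paper uses 400x400; the guards below are our own).
import cmath

P1_roots = [2j, -2j]

def basin_index(z0, roots, m=2, max_iter=25, tol=1e-3):
    """Index of the root of P1 reached from z0; -1 signals divergence."""
    z = z0
    for _ in range(max_iter):
        if abs(z) > 1e6:                  # far from all roots: divergent
            return -1
        dP = 4 * z * (z * z + 4)          # derivative of P1
        if dP == 0:
            return -1
        nu = (z * z + 4) ** 2 / dP
        if abs(nu) > 700:                 # guard against exp overflow
            return -1
        z -= m * (cmath.exp(nu) - 1)      # weight function of M4
        for k, r in enumerate(roots):
            if abs(z - r) < tol:
                return k
    return -1

# Basin labels over a 41 x 41 grid on [-2, 2] x [-2, 2]:
grid = [[basin_index(complex(x, y) / 10, P1_roots)
         for x in range(-20, 21)] for y in range(-20, 21)]
print(basin_index(1 + 1j, P1_roots))  # 0 (converges to 2j)
```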

4. Numerical Results

In this section, we demonstrate the efficiency, effectiveness, and convergence behavior of the new family of methods by applying them to some practical problems. We take the special cases M1–M8 of the proposed class and choose a_1 = 1/10, a_2 = 1/4, a_3 = 1/10, a_4 = 1/10, and a_5 = 1/5 in the numerical work. Since the constants a_1, a_2, a_3, a_4, and a_5 are arbitrary, there is no particular reason for these values; they were chosen at random. The proposed methods are compared with the existing modified Newton method (1), denoted MNM.
To verify the theoretical results, we calculate the computational order of convergence (COC) using the formula (see [32])
COC ≈ ln |(x_{n+1} − α)/(x_n − α)| / ln |(x_n − α)/(x_{n−1} − α)|.
The computational work was performed in the programming software Mathematica using multiple-precision arithmetic. The numerical results displayed in Table 1, Table 2, Table 3 and Table 4 include: (i) the number of iterations (n) required to converge to the solution such that |x_{n+1} − x_n| + |f(x_n)| < 10^(−100); (ii) the last three consecutive errors |e_n| = |x_{n+1} − x_n|; (iii) the residual error |f(x_n)|; and (iv) the computational order of convergence (COC).
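The COC formula can be sketched directly in code (the sample iterates below are artificial, chosen to mimic quadratic convergence, where each error is roughly the square of the previous one):

```python
# Computational order of convergence (COC) from three consecutive iterates,
# following the formula above; the sample iterates are artificial.
import math

def coc(x_prev, x_curr, x_next, alpha):
    e0, e1, e2 = (abs(v - alpha) for v in (x_prev, x_curr, x_next))
    return math.log(e2 / e1) / math.log(e1 / e0)

# Errors 1e-2, 1e-4, 1e-8 are consistent with quadratic convergence:
print(coc(1.0 + 1e-2, 1.0 + 1e-4, 1.0 + 1e-8, 1.0))  # ≈ 2.0
```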
For testing, we chose four test problems as follows:
Example 1 (Eigenvalue problem).
The eigenvalue problem becomes difficult when the characteristic polynomial involves a large square matrix; finding the zeros of the characteristic equation of a square matrix of order greater than 4 can be a challenging task. We therefore consider the following 9 × 9 matrix
M = (1/8) [ 12   0   0  19  19   76  19  18  437
            64  24   0  24  24   64   8  32  376
            16   0  24   4   4   16   4   8   92
            40   0   0  10  50   40   2  20  242
             4   0   0   1  41    4   1   0   25
            40   0   0  18  18  104  18  20  462
            84   0   0  29  29   84  21  42  501
            16   0   0   4   4   16   4  16   92
             0   0   0   0   0    0   0   0   24 ].
The characteristic polynomial of the matrix M is given as
h_1(x) = x⁹ − 29x⁸ + 349x⁷ − 2261x⁶ + 8455x⁵ − 17663x⁴ + 15927x³ + 6993x² − 24732x + 12960.
This function has one multiple zero α = 3 of multiplicity 4. We chose the initial approximation x_0 = 2.75 and obtained the numerical results shown in Table 1.
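As a sketch (the helper names are our own), the multiplicity of α = 3 can be confirmed with exact integer arithmetic by repeated synthetic division of h_1 by (x − 3):

```python
# Exact-arithmetic check that alpha = 3 is a zero of h1 with multiplicity
# exactly 4: synthetic division by (x - 3) leaves remainder 0 exactly four
# times before a nonzero remainder appears. Helper names are our own.
def synthetic_division(coeffs, r):
    """Divide a polynomial (leading coefficient first) by (x - r)."""
    out = [coeffs[0]]
    for c in coeffs[1:]:
        out.append(c + r * out[-1])
    return out[:-1], out[-1]          # quotient, remainder (= value at r)

h1 = [1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960]
coeffs, multiplicity, rem = h1, 0, 0
while rem == 0:
    coeffs, rem = synthetic_division(coeffs, 3)
    if rem == 0:
        multiplicity += 1
print(multiplicity)  # 4
```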
Example 2 (Beam Designing Model).
Here, we consider a beam positioning problem (see [4]) in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit, such that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 5.
The problem is: what is the distance along the floor from the base of the wall to the bottom of the beam? Suppose y is the distance from the floor to the point where the beam touches the edge of the box, and let x be the distance from the bottom of the box to the bottom of the beam. Then, for a particular value of r, we have
x⁴ + 4x³ − 24x² + 16x + 16 = 0.
The relevant non-negative solution of the equation is the root x = 2 of multiplicity 2. We consider the initial approximation x_0 = 3 to find this root. The numerical results of the various methods are shown in Table 2.
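A sketch of the modified Newton iteration (m = 2) on the beam equation from x_0 = 3 (the iteration count is our own choice):

```python
# Modified Newton (m = 2) on the beam equation, starting from x0 = 3;
# the fixed iteration count is our own choice.
h  = lambda x: x**4 + 4*x**3 - 24*x**2 + 16*x + 16
dh = lambda x: 4*x**3 + 12*x**2 - 48*x + 16
x, m = 3.0, 2
for _ in range(8):
    x -= m * h(x) / dh(x)
print(round(x, 4))  # 2.0
```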
Example 3.
The Van der Waals equation of state, which can be expressed as
(P + a_1 n²/V²)(V − n a_2) = n R T,
describes the behavior of a real gas by augmenting the ideal gas law with two additional parameters, a_1 and a_2, specific to each gas. To determine the volume V of the gas in terms of the remaining parameters, we must solve the nonlinear equation in V:
P V³ − (n a_2 P + n R T) V² + a_1 n² V − a_1 a_2 n³ = 0.
Given the constants a_1 and a_2 of a particular gas, one can find values of n, P, and T such that this equation has three real roots. Using particular values, we obtain the nonlinear function
f_1(x) = x³ − 5.22x² + 9.0825x − 5.2675,
which has three real roots: a multiple zero α = 1.75 of multiplicity two and a simple zero ξ = 1.72. Our desired zero is α = 1.75. We consider the initial guess x_0 = 1.8 for this problem. The numerical results of the various methods are shown in Table 3.
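A sketch of the modified Newton iteration (m = 2) on f_1 from x_0 = 1.8 (the iteration count and tolerance are our own choices):

```python
# Modified Newton (m = 2) on the Van der Waals polynomial f1, starting
# from x0 = 1.8; iteration count and tolerance are our own choices.
h  = lambda x: x**3 - 5.22*x**2 + 9.0825*x - 5.2675
dh = lambda x: 3*x**2 - 10.44*x + 9.0825
x, m = 1.8, 2
for _ in range(10):
    step = m * h(x) / dh(x)
    x -= step
    if abs(step) < 1e-6:
        break
print(round(x, 4))  # 1.75 (the double root, not the simple zero 1.72)
```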
Example 4.
Lastly, we consider the test function
h_4(x) = (x² + 1)(2x e^(x²+1) + x³ − x) cosh²(πx/2).
The function has a multiple zero α = i of multiplicity 4. We chose the initial approximation x_0 = 1.25i to obtain this zero; the numerical results of the various methods are shown in Table 4.
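A quick numerical check (a sketch) that α = i has multiplicity 4: since h_4(α + ε) ≈ C ε⁴ for small ε, halving ε should divide |h_4| by roughly 2⁴ = 16:

```python
# Numerical multiplicity check for h4 (a sketch): h4(i + e) ~ C*e**4 for
# small e, so halving e should divide |h4| by roughly 2**4 = 16.
import cmath

def h4(x):
    return (x ** 2 + 1) * (2 * x * cmath.exp(x ** 2 + 1) + x ** 3 - x) \
        * cmath.cosh(cmath.pi * x / 2) ** 2

alpha = 1j
ratio = abs(h4(alpha + 2e-3)) / abs(h4(alpha + 1e-3))
print(round(ratio, 2))  # 16.0
```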

5. Conclusions

In this paper, we presented a new one-point family of iterative methods with quadratic convergence for computing multiple roots with known multiplicity, based on the weight function technique. The convergence analysis established second order under some assumptions on the nonlinear function whose zeros are sought. Some efficient and simple members of the class were presented, and their stability was tested by analyzing the complex dynamics using a graphical tool, namely the basin of attraction. The methods were employed to solve some real-world problems, such as the eigenvalue problem, the beam positioning problem, and the Van der Waals equation of state, and were also compared with existing methods. The numerical comparison revealed that the presented methods have good convergence behavior, comparable to the well-known modified Newton's method.

Author Contributions

All the authors contributed equally; they worked together to develop the present manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
  3. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  4. Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012. [Google Scholar]
  5. Cesarano, C. Generalized special functions in the description of fractional diffusive equations. Commun. Appl. Ind. Math. 2019, 10, 31–40. [Google Scholar] [CrossRef] [Green Version]
  6. Cesarano, C. Multi-dimensional Chebyshev polynomials: A non-conventional approach. Commun. Appl. Ind. Math. 2019, 10, 1–19. [Google Scholar] [CrossRef]
  7. Cesarano, C.; Ricci, P.E. Orthogonality Properties of the Pseudo-Chebyshev Functions (Variations on a Chebyshev’s Theme). Mathematics 2019, 7, 180. [Google Scholar] [CrossRef]
  8. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef]
  9. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367. [Google Scholar] [CrossRef]
  10. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef] [Green Version]
  11. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  12. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
  13. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef] [Green Version]
  14. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133. [Google Scholar] [CrossRef] [Green Version]
  15. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  16. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  17. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335. [Google Scholar] [CrossRef]
  18. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef] [Green Version]
  19. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  20. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  21. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth–order family of three–point modified Newton–like multiple–root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140. [Google Scholar] [CrossRef]
  22. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. An eighth-order family of optimal multiple root finders and its dynamics. Numer. Algor. 2018, 77, 1249–1272. [Google Scholar] [CrossRef]
  23. Zafar, F.; Cordero, A.; Rana, Q.; Torregrosa, J.R. Optimal iterative methods for finding multiple roots of nonlinear equations using free parameters. J. Math. Chem. 2017. [Google Scholar] [CrossRef]
  24. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comput. Appl. Math. 2018, 333, 131–156. [Google Scholar] [CrossRef]
  25. Behl, R.; Zafar, F.; Alshomrani, A.S.; Junjua, M.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate function. Int. J. Comput. Math. 2019, 16, 1843002. [Google Scholar] [CrossRef]
  26. Behl, R.; Alshomrani, A.S.; Motsa, S.S. An optimal scheme for multiple roots of nonlinear equations with eighth-order convergence. J. Math. Chem. 2018. [Google Scholar] [CrossRef]
  27. Zafar, F.; Cordero, A.; Torregrosa, J.R. An efficient family of optimal eighth-order multiple root finder. Mathematics 2018, 6, 310. [Google Scholar] [CrossRef]
  28. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Generating root-finder iterative methods of second-order: Convergence and stability. Axioms 2019, 8, 55. [Google Scholar] [CrossRef]
  29. Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16. [Google Scholar] [CrossRef]
  30. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  31. Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
  32. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
Figure 1. Basins of attraction of M1–M8 for polynomial P 1 ( z ) .
Figure 2. Basins of attraction of M1–M8 for polynomial P 2 ( z ) .
Figure 3. Basins of attraction of M1–M8 for polynomial P 3 ( z ) .
Figure 4. Basins of attraction of M1–M8 for polynomial P 4 ( z ) .
Figure 5. Beam situating problem.
Table 1. Comparison of performance of methods for Example 1.
Method   n   |e_{n-2}|      |e_{n-1}|      |e_n|          |f(x_n)|        COC
MNM      7   1.70×10^-21    6.84×10^-43    1.11×10^-85    5.90×10^-681    2.000
M1       7   2.79×10^-21    1.90×10^-42    8.77×10^-85    9.90×10^-674    2.000
M2       7   2.99×10^-24    1.56×10^-48    4.27×10^-97    8.37×10^-773    2.000
M3       6   2.17×10^-13    6.50×10^-27    5.80×10^-54    3.68×10^-428    2.000
M4       7   3.39×10^-18    4.18×10^-36    6.32×10^-72    3.53×10^-570    2.000
M5       6   5.79×10^-15    3.77×10^-30    1.60×10^-60    5.51×10^-481    2.000
M6       7   1.93×10^-21    8.86×10^-43    1.86×10^-85    3.69×10^-679    2.000
M7       6   2.28×10^-13    7.15×10^-27    7.03×10^-54    1.71×10^-427    2.000
M8       7   6.63×10^-20    1.26×10^-39    4.60×10^-79    1.09×10^-627    2.000
Table 2. Comparison of performance of methods for Example 2.
Method   n   |e_{n-2}|      |e_{n-1}|      |e_n|          |f(x_n)|        COC
MNM      7   1.61×10^-20    6.50×10^-41    1.06×10^-81    1.86×10^-324    2.000
M1       7   3.37×10^-21    2.55×10^-42    1.47×10^-84    5.64×10^-336    2.000
M2       7   7.19×10^-18    1.94×10^-35    1.41×10^-70    1.34×10^-279    2.000
M3       7   2.52×10^-18    2.22×10^-36    1.73×10^-72    2.63×10^-287    2.000
M4       6   1.85×10^-22    4.10×10^-47    2.02×10^-96    5.76×10^-388    2.000
M5       7   6.46×10^-16    2.09×10^-31    2.17×10^-62    1.34×10^-246    2.000
M6       7   1.23×10^-20    3.75×10^-41    3.52×10^-82    2.31×10^-326    2.000
M7       7   1.35×10^-17    7.17×10^-35    2.01×10^-69    6.02×10^-275    2.000
M8       6   1.34×10^-17    9.03×10^-36    4.08×10^-72    1.66×10^-287    2.000
Table 3. Comparison of performance of methods for Example 3.
Method   n    |e_{n-2}|      |e_{n-1}|      |e_n|          |f(x_n)|        COC
MNM      10   2.22×10^-22    8.24×10^-43    1.13×10^-83    1.36×10^-331    2.000
M1       10   1.49×10^-22    3.70×10^-43    2.27×10^-84    2.22×10^-334    2.000
M2       10   1.52×10^-21    3.88×10^-41    2.52×10^-80    3.43×10^-318    2.000
M3       10   1.05×10^-21    1.83×10^-41    5.64×10^-81    8.54×10^-321    2.000
M4       10   3.11×10^-24    1.59×10^-46    4.15×10^-91    2.39×10^-361    2.000
M5       10   8.82×10^-21    1.32×10^-39    2.93×10^-77    6.32×10^-306    2.000
M6       10   2.42×10^-22    9.73×10^-43    1.58×10^-83    5.16×10^-331    2.000
M7       10   1.95×10^-21    6.42×10^-41    6.92×10^-80    1.95×10^-316    2.000
M8        9   9.79×10^-17    1.57×10^-31    4.02×10^-61    2.10×10^-241    2.000
Table 4. Comparison of performance of methods for Example 4.
Method   n   |e_{n-2}|      |e_{n-1}|      |e_n|          |f(x_n)|        COC
MNM      7   2.87×10^-15    2.75×10^-30    2.51×10^-60    5.82×10^-478    2.000
M1       7   2.88×10^-15    2.76×10^-30    2.53×10^-60    6.21×10^-478    2.000
M2       7   3.41×10^-15    3.95×10^-30    5.30×10^-60    2.45×10^-475    2.000
M3       7   4.43×10^-15    6.84×10^-30    1.63×10^-59    2.14×10^-471    2.000
M4       7   5.70×10^-15    1.16×10^-29    4.75×10^-59    1.24×10^-467    2.000
M5       7   5.48×10^-15    1.07×10^-29    4.06×10^-59    3.53×10^-468    2.000
M6       7   2.99×10^-15    2.98×10^-30    2.95×10^-60    2.10×10^-477    2.000
M7       7   4.48×10^-15    6.97×10^-30    1.69×10^-59    2.90×10^-471    2.000
M8       7   3.37×10^-15    3.82×10^-30    4.91×10^-60    1.29×10^-475    2.000

Share and Cite

MDPI and ACS Style

Kumar, D.; Sharma, J.R.; Cesarano, C. One-Point Optimal Family of Multiple Root Solvers of Second-Order. Mathematics 2019, 7, 655. https://doi.org/10.3390/math7070655
