Abstract
We propose an optimal derivative-free method of second order, a new derivative-free version of the modified Newton's method, for computing the multiple zeros of nonlinear single-variable functions. Iterative methods without derivatives for multiple zeros are not easy to obtain, and hence such methods are rare in the literature. Motivated by this fact, we develop a family of optimal second-order derivative-free methods for multiple zeros that require only two function evaluations per iteration. The stability of the methods is validated through complex dynamics by drawing their basins of attraction. Moreover, the applicability of the methods is demonstrated on several different functions. The numerical results show that the new derivative-free methods are good alternatives to existing optimal second-order techniques that require derivative calculations.
MSC:
65H10; 65J10; 65Y20; 41A25
1. Introduction
Finding the roots of nonlinear functions is a significant problem in numerical mathematics and has numerous applications in different areas of the applied sciences [,]. In this paper, we consider iterative techniques to locate a multiple root of multiplicity m—that means and —of a nonlinear equation .
The most widely used method for finding the multiple root of is the quadratically convergent modified Newton’s method (MNM) (see []):
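The modified Newton step just referenced can be sketched in code. This is an illustrative sketch, not the paper's implementation: the test function (a double root at x = 2), the multiplicity and the starting point are our assumptions.

```python
def modified_newton(g, gprime, x0, m, tol=1e-12, max_iter=50):
    """Modified Newton's method (MNM) for a root of known multiplicity m:
    x_{n+1} = x_n - m*g(x_n)/g'(x_n). The factor m restores quadratic
    convergence, which plain Newton loses at a multiple root."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:
            break
        step = m * gx / gprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative test function: a double root (m = 2) at x = 2.
g = lambda x: (x - 2.0) ** 2 * (x + 1.0)
gp = lambda x: 2.0 * (x - 2.0) * (x + 1.0) + (x - 2.0) ** 2
root = modified_newton(g, gp, x0=3.0, m=2)
```

With m = 1 the scheme reduces to the ordinary Newton iteration, which would converge only linearly on this example.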
In the literature, there are many iterative methods of different orders of convergence to approximate the multiple roots of (see, for example [,,,,,,,,,,,,,,,]). Such methods require the evaluation of derivatives of either first order, or first and second order. The motivation for developing higher-order methods is closely related to the Kung–Traub conjecture [], which establishes an upper bound on the order of convergence attainable with a specific number of functional evaluations: p <= 2^(d-1), where p is the order of convergence and d represents the number of function evaluations per iteration. Methods that attain this bound are called optimal methods.
Contrary to the methods that require derivative evaluation, derivative-free techniques for the case of multiple roots are exceptionally uncommon. The principal issue in generating such techniques is the difficulty of establishing their convergence order. Derivative-free techniques are significant in circumstances where the derivative of the function g is hard to evaluate or is costly to compute. One such derivative-free scheme is the classical Traub–Steffensen iteration [], which replaces the derivative in the traditional Newton iteration with a suitable approximation based on a difference quotient:
where g[w_n, x_n] = (g(w_n) - g(x_n))/(w_n - x_n) is the first-order divided difference, with w_n = x_n + β g(x_n) and β a nonzero real parameter.
Then the modified Newton's method (1) becomes the modified Traub–Steffensen method
The modified Traub–Steffensen method (2) is a noticeable improvement over Newton's method, because it maintains quadratic convergence without using any derivative.
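A minimal sketch of the modified Traub–Steffensen iteration (2) follows, under the same illustrative assumptions as before; the test function, β = 0.01 and the starting point are our choices, not the paper's.

```python
def modified_traub_steffensen(g, x0, m, beta=0.01, tol=1e-10, max_iter=50):
    """Derivative-free modified Traub-Steffensen method:
        w_n     = x_n + beta*g(x_n)
        x_{n+1} = x_n - m*g(x_n)/g[w_n, x_n],
    where g[w_n, x_n] = (g(w_n) - g(x_n))/(w_n - x_n) replaces g'(x_n)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        if w == x:  # g(x) too small to form the divided difference
            break
        dd = (g(w) - gx) / (w - x)  # first-order divided difference
        if dd == 0.0:
            break
        step = m * gx / dd
        x -= step
        if abs(step) < tol:
            break
    return x

# Same illustrative double root at x = 2, now without any derivative.
g = lambda x: (x - 2.0) ** 2 * (x + 1.0)
root = modified_traub_steffensen(g, x0=3.0, m=2)
```

Only two evaluations of g are needed per iteration (at x_n and w_n), which is what makes the scheme optimal in the Kung–Traub sense.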
The principal objective of this work is to design a general class of derivative-free multiple-root methods, including (2), by utilising the weight-function approach. Particular methods are obtained when specific weight functions (which may depend on one or more parameters) are chosen. The rest of the paper is organised as follows. In Section 2, the second-order scheme is constructed and its convergence is studied. In Section 3, basins of attraction are displayed to assess the stability of the methods. Numerical experiments on various equations are performed in Section 4 to demonstrate the applicability and efficiency of the schemes introduced here. Concluding remarks are given in Section 5.
2. Formulation of Method
Given a known multiplicity , we consider the following one-step scheme for multiple roots with the first step as the Traub–Steffensen iteration (2):
where the function is analytic in a neighbourhood of zero with .
In the sequel, we shall study the convergence results of the proposed iterative scheme (3). For clarity, the results are obtained separately for different cases depending upon the multiplicity m.
Theorem 1.
Assume that is an analytic function in a domain containing a multiple zero (say, α) with multiplicity . Suppose that the initial point is close enough to α; then the convergence order of the Formula (3) is at least 2, provided that , and .
Proof.
Assume that the error at the n-th stage is . Using the Taylor expansion of about and keeping in mind that , and , we have
where for .
Similarly, we have the Taylor expansion of about
where
Expanding the weight function in a neighbourhood of the origin by a Taylor series, we have
In order to obtain second order convergence, the constant term and the coefficient of in (7) must simultaneously be equal to zero. This is possible only for the following values of and :
By using the above values in (7), the error relation is given by
Thus, the theorem is proven. ☐
Theorem 2.
Assume that is an analytic function in a domain containing a multiple zero (say, α) with multiplicity . Suppose that the initial point is close enough to α; then the convergence order of the Formula (3) is at least 2, provided that , and .
Proof.
Suppose that the error at the n-th iteration is . Using the Taylor expansion of about and keeping in mind that , , and , we have
where for .
Additionally, the Taylor expansion of about is
where
Expanding the weight function in a neighbourhood of the origin by a Taylor series, we have
In order to obtain second order convergence, the constant term and the coefficient of in (13) must simultaneously be equal to zero. This is possible only for the following values of and :
By using the above values in (13), the error relation is given by
Thus, the theorem is proven. ☐
Remark 1.
We can observe from the above results that, in each of the cases considered, the number of conditions on required to attain quadratic convergence of the method (3) is 2. These cases also satisfy the common conditions , . Nevertheless, their error equations differ from each other, as the parameter β does not appear in the equation for . It has been observed that when , the error equation in each such case does not contain a β term. We shall prove this fact in the next theorem.
For the multiplicity , we establish the order of convergence of the scheme (3) by the following theorem:
Theorem 3.
Let be an analytic function in a region enclosing a multiple zero (say, α) with multiplicity m. Assume that initial guess is sufficiently close to α; then the iteration scheme defined by (3) has second order of convergence, provided that , and .
Proof.
Let the error at the n-th iteration be . Using the Taylor expansion of about , we have
Using (16) in , we obtain that
The Taylor expansion of about is given as
Expanding the weight function in a neighbourhood of the origin by a Taylor series, we have
In order to obtain second order convergence, the constant term and the coefficient of in (21) must simultaneously be equal to zero. This is possible only for the following values of and
By using the above values in (21), the error relation is given by
Hence, the second order convergence is established. This completes the proof of the theorem. ☐
Remark 2.
It is important to note that the parameter β, which is used in , appears only in the error equations of the cases but not in that for . For we have observed that this parameter appears in the coefficients of and higher-order terms. However, such terms are not needed to establish the required quadratic convergence.
Numerous methods of the family (3) can be generated by choosing forms of the function that satisfy the conditions (22). However, we restrict the choices to low-order polynomials and simple rational functions. Accordingly, the following simple forms, satisfying the conditions (22), are chosen:
- (1)
- (2)
- (3)
- ,
- (4)
- (5)
- (6)
- (7)
- (8)
- ,
where , , , and are free parameters.
The method corresponding to each of the above forms is defined as follows:
Method 1 (M1):
Method 2 (M2):
Method 3 (M3):
Method 4 (M4):
Method 5 (M5):
Method 6 (M6):
Method 7 (M7):
Method 8 (M8):
Remark 3.
The scheme (3) defines a family of second order methods which utilises two function evaluations; namely, and . This family is, therefore, optimal in the sense of the Kung–Traub conjecture [].
3. Basins of Attraction
Our aim here is to analyse the new methods by means of a graphical tool, namely the basins of attraction of the zeros of a polynomial in the Argand plane. Examining the basins of attraction produced by an iterative scheme provides significant insight into its convergence behaviour. This idea was first introduced by Vrscay and Gilbert []. Recently, many researchers have utilised this idea in their work; see, for example, [,] and the references therein. We consider different choices of the function of the family (3) to study the basins of attraction.
We select as the initial point an element of D, where D is a rectangular region of containing all the roots of the equation . An iterative method beginning at a point may converge to a zero of the function or diverge. To compute the basins, we use as the stopping criterion for convergence, with a maximum of 25 iterations. If this tolerance is not achieved within the allowed iterations, the procedure is terminated and the iteration started from is declared divergent. While drawing the basins, the following convention is adopted: a colour is allotted to every initial guess in the attraction basin of each zero. If the iterative formula started at a point converges, the point is painted with the colour assigned to the attained zero; if the formula fails to converge within the required number of iterations, the point is drawn in black.
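The procedure just described can be sketched as follows. This is a hedged illustration, not the paper's exact experimental code: the polynomial (z^2 - 1)^2 with double roots at ±1, the grid resolution, β = 0.01 and the tolerance are all our assumptions.

```python
import numpy as np

def basin_indices(p, roots, m, beta=0.01, n=41, extent=2.0,
                  max_iter=25, tol=1e-3):
    """For each point of an n x n grid over the square [-extent, extent]^2
    of the complex plane, run the modified Traub-Steffensen iteration and
    record the index of the root it reaches, or -1 if it fails to converge
    within max_iter iterations (a point that would be painted black)."""
    xs = np.linspace(-extent, extent, n)
    idx = np.full((n, n), -1)
    for i, re in enumerate(xs):
        for j, im in enumerate(xs):
            z = complex(re, im)
            for _ in range(max_iter):
                hits = [k for k, r in enumerate(roots) if abs(z - r) < tol]
                if hits:                      # converged: colour by root
                    idx[i, j] = hits[0]
                    break
                pz = p(z)
                w = z + beta * pz
                if w == z:
                    break
                dd = (p(w) - pz) / (w - z)    # divided difference
                if dd == 0:
                    break
                z = z - m * pz / dd
    return idx

# Illustrative polynomial with double roots (m = 2) at z = 1 and z = -1.
p = lambda z: (z * z - 1.0) ** 2
idx = basin_indices(p, roots=[1.0, -1.0], m=2)
```

Mapping each cell of idx to a colour (and -1 to black) produces a picture of the kind shown in the figures of this section.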
We draw the basins of attraction by applying the methods M1–M8 (choosing ) on the following two polynomials:
Problem 1.

In the first example, we consider the polynomial , which has zeros with multiplicity 2. In this case, we use a grid of points in a rectangle of size and assign the colours cyan, green, yellow, red and blue to the initial points in the basins of attraction of the zeros "," "," "0," "" and "2." Basins obtained for the methods M1–M8 are shown in Figure 1, Figure 2 and Figure 3 corresponding to . Observing the behaviour of the methods, we see that the method M5 possesses the smallest number of divergent points, followed by M4 and M6. On the contrary, the methods M3 and M7 have the highest numbers of divergent points, followed by M2 and M8. Notice that the basins become wider as the parameter β assumes smaller values.


Figure 1.
Basins of attraction of M1–M8 for polynomial .

Figure 2.
Basins of attraction of M1–M8 for polynomial .

Figure 3.
Basins of attraction of M1–M8 for polynomial .
Problem 2.

Let us take the polynomial having zeros with multiplicity 3. To see the dynamical view, we consider a rectangle with grid points and allocate the colours cyan, green, yellow, red, purple and blue to the initial points in the basins of attraction of the zeros "," "," "1," "," "" and "." Basins obtained for the methods M1–M8 are shown in Figure 4, Figure 5 and Figure 6 corresponding to the parameter choices in the proposed methods. Observing the behaviour of the methods, we see that the methods M3 and M6 possess the smallest numbers of divergent points, followed by M1, M4, M8, M2 and M7.

Figure 4.
Basins of attraction of M1–M8 for polynomial .


Figure 5.
Basins of attraction of M1–M8 for polynomial .

Figure 6.
Basins of attraction of M1–M8 for polynomial .
From the graphics, we can judge the behaviour and suitability of a method depending on the circumstances. If we choose an initial point in a region where several basins of attraction meet one another, it is difficult to predict which root will be attained by an iterative method started at , so selecting in such a region is not a good choice. Both the black regions and the regions with many different colours are unsuitable for choosing when we need to obtain a particular root. The most visually striking pictures arise when the boundaries between the basins of attraction are highly intricate; these correspond to the cases in which the method is most demanding with respect to the initial point and its dynamical behaviour is most unpredictable. We close this section with the remark that the convergence behaviour of the proposed methods depends on the value of the parameter : the smaller the value of , the better the convergence of the method.
4. Numerical Results
This section is devoted to demonstrating the convergence behaviour of the presented family. In this regard, we consider the special cases M1–M8 of the proposed class, obtained by choosing , , , and . Their performance is compared with that of the classical modified Newton's method (1). We select four test functions for the comparison.
To check the theoretical order of convergence, we obtain the computational order of convergence (COC) using the formula (see []):
All computations were performed in the programming package Mathematica using multiple-precision arithmetic with 4096 significant digits. Numerical results displayed in Table 1, Table 2, Table 3 and Table 4 include: (i) number of iterations required to converge to the solution such that , (ii) values of last three consecutive errors , (iii) residual error and (iv) computational order of convergence (COC).
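The elided COC formula is not reproduced here; a common difference-based variant, which uses only consecutive iterates and needs no knowledge of the exact root, can be sketched as follows (the synthetic sequence used in the demonstration is our assumption, not data from the paper's tables):

```python
import math

def coc(iterates):
    """Approximate computational order of convergence from the last four
    iterates x_{n-2}, x_{n-1}, x_n, x_{n+1}:
        rho = ln(|x_{n+1}-x_n| / |x_n-x_{n-1}|)
              / ln(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)."""
    a, b, c, d = iterates[-4:]
    return (math.log(abs(d - c) / abs(c - b))
            / math.log(abs(c - b) / abs(b - a)))

# Synthetic, roughly quadratically convergent iterates toward the root 2.
rho = coc([2.1, 2.01, 2.0001, 2.00000001])  # rho is close to 2
```

For a genuinely second-order method the computed rho should approach 2 as the iteration proceeds, which is how the COC column of the tables is read.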
Example 1
(Eigenvalue problem). One of the challenging tasks in linear algebra is the computation of the eigenvalues of a large square matrix. Finding the zeros of the characteristic equation of a square matrix of order greater than 4 is another difficult job. So, we consider the following 9 × 9 matrix.

Table 1.
Comparison of the performances of methods for Example 1.

Table 2.
Comparison of the performances of methods for Example 2.

Table 3.
Comparison of the performances of methods for Example 3.

Table 4.
Comparison of the performances of methods for Example 4.
The characteristic polynomial of the matrix is given as
This function has one multiple zero with multiplicity 4. We choose the initial approximation and obtain the numerical results shown in Table 1.
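Since the paper's 9 × 9 matrix and its characteristic polynomial are not reproduced here, the following hedged sketch illustrates the same task on a small matrix of our own choosing: its characteristic polynomial has a double eigenvalue, which we locate with the derivative-free modified Traub–Steffensen iteration (2). The matrix, β = 0.01 and the starting point are all assumptions.

```python
import numpy as np

# A small illustrative matrix (not the paper's): eigenvalues are 1, 3, 3,
# so lambda = 3 is a multiple zero (m = 2) of the characteristic polynomial.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
c = np.poly(A)                  # coefficients of l^3 - 7*l^2 + 15*l - 9
g = lambda x: np.polyval(c, x)  # characteristic polynomial g(lambda)

# Modified Traub-Steffensen iteration for the double eigenvalue.
x, beta, m = 3.5, 0.01, 2
for _ in range(50):
    gx = g(x)
    if abs(gx) < 1e-8:          # rounding noise in polyval dominates below this
        break
    w = x + beta * gx
    x -= m * gx / ((g(w) - gx) / (w - x))
```

The same recipe applies to the paper's 9 × 9 example: form the characteristic polynomial, then feed it to the derivative-free scheme with the known multiplicity.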
Example 2
(Beam Designing Model). We consider a beam positioning problem (see []) in which a beam of length r units leans against the edge of a cubical box with sides of length 1 unit, such that one end of the beam touches the wall and the other end touches the floor, as shown in Figure 7.

Figure 7.
Beam positioning problem.
The problem is: what should be the distance along the floor from the base of the wall to the bottom of the beam? Suppose y is the distance along the beam from the floor to the edge of the box, and let x be the distance from the bottom of the box to the bottom of the beam. Then, for a given value of r, we have
The positive solution of the equation is a double root . We consider the initial guess to find the root. Numerical results of various methods are shown in Table 2.
Example 3.
The van der Waals equation of state, whose expression is
describes the behaviour of a real gas by introducing into the ideal gas equation two additional parameters, and , specific to each gas. In order to determine the volume V of the gas in terms of the remaining parameters, we are required to solve the nonlinear equation in V.
Given the constants and of a particular gas, one can find values for n, P and T, such that this equation has three real roots. By using the particular values, we obtain the following nonlinear function
having three roots, of which one is a multiple zero of multiplicity two and the other is a simple zero . However, we seek the multiple zero . We consider the initial guess to obtain this zero. The numerical results so obtained are shown in Table 3.
Example 4.
Lastly, consider the test function
The function has a multiple zero of multiplicity 4. We choose the initial approximation for obtaining the zero of the function. Numerical results of the various methods are shown in Table 4.
5. Conclusions
In the foregoing work, we have developed a class of optimal second order derivative-free methods for solving nonlinear univariate equations. The convergence analysis has established second order convergence under standard assumptions on the nonlinear function whose zeros are sought. Some useful special cases of the class were discussed. Their stability was tested by analysing the complex dynamics shown in their basins of attraction. We have noticed from the graphics that the basins become wider as the parameter assumes smaller values. The methods were employed to solve some nonlinear univariate equations and were also compared with existing techniques. We conclude the paper with the remark that derivative-free techniques may prove to be good options for Newton-like methods in cases where derivatives are expensive to evaluate.
Author Contributions
All authors contributed equally to this work; they worked together to develop the present manuscript. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Argyros, I.K. Convergence and Applications of Newton-Type Iterations; Springer: New York, NY, USA, 2008. [Google Scholar]
- Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
- Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef]
- Kumar, D.; Sharma, J.R.; Cesarano, C. One-point optimal family of multiple root solvers of second-Order. Mathematics 2019, 7, 655. [Google Scholar] [CrossRef]
- Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367. [Google Scholar] [CrossRef]
- Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 70, 387–400. [Google Scholar] [CrossRef]
- Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
- Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
- Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef]
- Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
- Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three-point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2015, 68, 261–288. [Google Scholar] [CrossRef]
- Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef]
- Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133. [Google Scholar] [CrossRef]
- Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
- Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
- Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
- Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335. [Google Scholar] [CrossRef]
- Yun, B.I. A non-iterative method for solving non-linear equations. Appl. Math. Comput. 2008, 198, 691–699. [Google Scholar] [CrossRef]
- Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef]
- Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
- Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982. [Google Scholar]
- Vrscay, E.R.; Gilbert, W.J. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math. 1988, 52, 1–16. [Google Scholar] [CrossRef]
- Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
- Varona, J.L. Graphic and numerical comparison between iterative methods. Math. Intell. 2002, 24, 37–46. [Google Scholar] [CrossRef]
- Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
- Zachary, J.L. Introduction to Scientific Programming: Computational Problem Solving Using Maple and C; Springer: New York, NY, USA, 2012. [Google Scholar]
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).