Article

Optimal Derivative-Free One-Point Algorithms for Computing Multiple Zeros of Nonlinear Equations

Sunil Kumar, Jai Bhagwan and Lorentz Jäntschi

1 Department of Mathematics, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Chennai 601103, India
2 Department of Mathematics, Government Post Graduate College, Rohtak 124001, India
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
4 Institute of Doctoral Studies, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
* Authors to whom correspondence should be addressed.
Symmetry 2022, 14(9), 1881; https://doi.org/10.3390/sym14091881
Submission received: 18 August 2022 / Revised: 3 September 2022 / Accepted: 6 September 2022 / Published: 8 September 2022
(This article belongs to the Section Mathematics)

Abstract
In this paper, we describe iterative derivative-free algorithms for the multiple roots of a nonlinear equation. Many researchers have evaluated the multiple roots of a nonlinear equation using the first- or second-order derivative of the function. However, calculating the function's derivative at each iteration is laborious. Taking this as motivation, we develop second-order algorithms that use no derivatives. The convergence analysis is first carried out for particular values of the multiplicity before reaching a general conclusion. According to the Kung–Traub hypothesis, the new algorithms have optimal convergence, since only two functions need to be evaluated at every step. The order of convergence is investigated using Taylor's series expansion. Moreover, the applicability of the methods and comparisons with existing ones are demonstrated on three real-life problems (Kepler's equation, the Van der Waals equation, and a continuous stirred-tank reactor problem) and three standard academic problems, including root-clustering and complex-root problems. Finally, the computational outcomes show that our approaches use the least processing time compared with the methods already in use, effectively confirming the theoretical conclusions of this study.

1. Introduction

There are many studies regarding the solution of nonlinear equations and systems; see, for example, Traub [1], the first chapter of the book in Ref. [2], and Refs. [3,4,5,6,7]. One of the most demanding of these tasks is finding the multiple roots of a given nonlinear equation; hence, various iterative algorithms have been suggested and studied by researchers [8,9,10,11,12,13,14] to find the simple and multiple roots of nonlinear equations. A root with multiplicity λ > 1 is called a multiple zero, also known as a multiple point or repeated root; λ = 1 corresponds to a simple zero. Solving a nonlinear equation with multiple zeros is a difficult task. In this article, we are interested in developing iterative algorithms for finding a multiple root η with multiplicity λ of a nonlinear equation Φ(y) = 0, i.e., Φ^(j)(η) = 0 for j = 0, 1, 2, …, λ − 1 and Φ^(λ)(η) ≠ 0.
In the literature [15,16], iterative algorithms using multiple steps (and points) to enhance the solution are referred to as multi-point iterations. In Ref. [15], the author presented a new Hermite interpolation-based iterative technique with a convergence order of 1 + √3, which requires only two function evaluations at each step. In Ref. [16], the author reviewed the most important theoretical results on Newton's method concerning the convergence properties, error estimates, numerical stability, and computational complexity of the algorithm.
Due to some interesting factors, many scholars have become interested in such algorithms: first, they can exceed the efficiency index of one-point algorithms, and second, multi-step algorithms reduce the number of iterations and increase the order of convergence, reducing the computational load in numerical work. To find the multiple roots of a nonlinear equation, many scholars [17,18,19,20,21,22,23,24,25] developed higher-order iterative algorithms employing the first-order derivative. In the literature [17,18,20,21,22,24,25], researchers have developed optimal fourth-order methods requiring one function and two derivative evaluations per step. In Ref. [19], the authors presented six new fourth-order methods: the first four require one function and three derivative evaluations per iteration, and the last two require one function and two derivative evaluations per iteration. In Ref. [23], the authors presented a class of two-point sixth-order multiple-zero finders requiring two function and two derivative evaluations per step.
The modified Newton's method [26], defined as

$$ y_{q+1} = y_q - \lambda\, \frac{\Phi(y_q)}{\Phi'(y_q)}, \qquad q = 0, 1, \ldots, \tag{1} $$

where λ is the multiplicity of the root η and Φ′(y_q) is the first derivative of Φ at y_q, is one of the fundamental methods for finding multiple roots of a function. Scheme (1) has a quadratic order of convergence if the multiplicity is known.
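To make this baseline concrete, here is a minimal Python sketch of iteration (1); the function and parameter names are ours, and the derivative Φ′ must be supplied in closed form:

```python
def modified_newton(phi, dphi, y0, lam, tol=1e-12, max_iter=100):
    """Modified Newton's method (1) for a root of multiplicity lam."""
    y = y0
    for _ in range(max_iter):
        fy, dfy = phi(y), dphi(y)
        if dfy == 0:                     # derivative vanishes at a multiple root
            break
        step = lam * fy / dfy            # lam * Phi(y_q) / Phi'(y_q)
        y -= step
        if abs(step) < tol:
            break
    return y

# Example: double root of Phi(y) = (y - 1)**2 at y = 1
print(modified_newton(lambda y: (y - 1)**2, lambda y: 2 * (y - 1), y0=2.0, lam=2))
```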
On the other hand, derivative-free algorithms for handling multiple roots are extremely rare in contrast to the algorithms that require derivative evaluations. The principal difficulty in developing derivative-free algorithms is studying their convergence order. Derivative-free algorithms are important for any complex problem in which the derivative of the function Φ is difficult or expensive to evaluate. In the literature, starting from the second-order modified Traub–Steffensen technique [1], some authors [27,28,29,30,31,32,33,34] derived optimal derivative-free iterative algorithms for obtaining the multiple roots of an equation. The Traub–Steffensen technique [1] is given below.
Traub–Steffensen method (TM):

$$ y_{q+1} = y_q - \lambda\, \frac{\Phi(y_q)}{\Phi[z_q, y_q]}, \qquad q = 0, 1, \ldots, \tag{2} $$

where z_q = y_q + βΦ(y_q) with β ∈ ℝ∖{0}, and Φ[z_q, y_q] = (Φ(z_q) − Φ(y_q))/(z_q − y_q). Without evaluating any derivatives, the modified Traub–Steffensen technique maintains its second order of convergence.
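A direct transcription of (2) in the same sketch style (β is any nonzero real; the default below is our arbitrary choice):

```python
def traub_steffensen(phi, y0, lam, beta=0.01, tol=1e-12, max_iter=100):
    """Modified Traub-Steffensen method (2): derivative-free, order two."""
    y = y0
    for _ in range(max_iter):
        fy = phi(y)
        z = y + beta * fy                # z_q = y_q + beta * Phi(y_q)
        if z == y:                       # Phi(y_q) ~ 0: divided difference undefined
            break
        dd = (phi(z) - fy) / (z - y)     # Phi[z_q, y_q]
        if dd == 0:
            break
        step = lam * fy / dd
        y -= step
        if abs(step) < tol:
            break
    return y
```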
Recently, Kumar et al. [30] and Kansal et al. [32] presented second-order optimal schemes, defined as follows.

Kumar method (M):

$$ y_{q+1} = y_q - H(\theta), \qquad q = 0, 1, \ldots, \tag{3} $$

where H : D ⊂ ℂ → ℂ is a weight function and θ = Φ(y_q)/Φ[z_q, y_q].

Kansal method (KM):

$$ y_{q+1} = y_q - \lambda\, \frac{(1 - a)\,\Phi(z_q) + a\,\Phi(y_q)}{\Phi[z_q, y_q]}, \qquad q = 0, 1, \ldots, \tag{4} $$

where a ∈ ℝ.
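Scheme (4) differs from (2) only in the numerator of the correction term; a sketch under the same conventions (names and defaults are ours):

```python
def kansal(phi, y0, lam, a, beta=0.01, tol=1e-12, max_iter=100):
    """Kansal et al. scheme (4): a weighted derivative-free step."""
    y = y0
    for _ in range(max_iter):
        fy = phi(y)
        z = y + beta * fy
        if z == y:
            break
        fz = phi(z)
        dd = (fz - fy) / (z - y)         # Phi[z_q, y_q]
        if dd == 0:
            break
        step = lam * ((1 - a) * fz + a * fy) / dd
        y -= step
        if abs(step) < tol:
            break
    return y
```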
Constructing iterative methods with fewer function evaluations thus plays a key role; for this purpose, optimal methods are introduced that satisfy the Kung–Traub conjecture [35]. Motivated by this, the goal of this work is to develop a method that can find multiple roots with multiplicity λ ≥ 1. The salient feature of the proposed method is its simple structure: it requires no derivative computations and no additional function evaluations. Moreover, the main advantage of the proposed scheme is that it is equally suitable for both categories of roots, viz., simple and multiple, which demonstrates the generality of the proposed method.
The rest of the paper is organized as follows. Section 2 presents the construction of the new scheme and its convergence analysis. Some special cases of the new scheme are studied in Section 3, where their stability is also verified. Finally, conclusions are drawn in Section 4.

2. Development of the Novel Scheme

Here, we present a new one-point algorithm of order two, requiring two functional evaluations per iteration; this means the proposed algorithm supports the Kung–Traub conjecture [35]. Let us consider the scheme for multiplicity λ ≥ 1:

$$ y_{q+1} = y_q - (\alpha_1 + \alpha_2 G)\, \frac{\Phi(y_q)}{\Phi[z_q, y_q]}, \qquad q = 0, 1, \ldots, \tag{5} $$

where z_q = y_q + βΦ(y_q) with β ∈ ℝ∖{0}, G = Φ(z_q)/Φ(y_q), Φ[z_q, y_q] = (Φ(z_q) − Φ(y_q))/(z_q − y_q), and the parameters α1 and α2 are arbitrary.
The convergence of (5) is established in Theorems 1–3: the case λ = 1 is discussed in Theorem 1, λ = 2 in Theorem 2, and λ ≥ 3 in Theorem 3. The convergence analysis of scheme (5) is split into these cases because the behaviour of the parameter β changes as the multiplicity of the root increases.
Theorem 1.
Let Φ : ℂ → ℂ represent an analytic function in the vicinity of a simple zero (say, η), i.e., λ = 1. Consider that the initial guess y_0 is sufficiently close to η; then, the scheme defined by (5) has a second order of convergence, provided that α1 = 1 and α2 = 0.
Proof. 
Let ε_q = y_q − η be the error at the q-th iteration. Using Taylor's expansion of Φ(y_q) and Φ(z_q) about η, and taking into account that Φ(η) = 0 and Φ′(η) ≠ 0, we have

$$ \Phi(y_q) = \Phi'(\eta)\,\varepsilon_q \left( 1 + A_1 \varepsilon_q + A_2 \varepsilon_q^2 + A_3 \varepsilon_q^3 + \cdots \right), \tag{6} $$

$$ \Phi(z_q) = \Phi'(\eta)\,\varepsilon_{z_q} \left( 1 + A_1 \varepsilon_{z_q} + A_2 \varepsilon_{z_q}^2 + A_3 \varepsilon_{z_q}^3 + \cdots \right), \tag{7} $$

where ε_{z_q} = z_q − η and $A_n = \frac{1}{(1+n)!}\,\frac{\Phi^{(1+n)}(\eta)}{\Phi'(\eta)}$ for n ∈ ℕ. Using (6) and (7) in (5), we have

$$ \varepsilon_{q+1} = y_{q+1} - \eta = \left( 1 - \alpha_1 - \alpha_2 \left( 1 + \beta \Phi'(\eta) \right) \right) \varepsilon_q + \left( \alpha_1 + \alpha_2 + \alpha_1 \beta \Phi'(\eta) \right) A_1\, \varepsilon_q^2 + O(\varepsilon_q^3). \tag{8} $$

Equation (8) shows second-order convergence if we set α1 = 1 and α2 = 0. With these values, the final error equation is

$$ \varepsilon_{q+1} = \left( 1 + \beta \Phi'(\eta) \right) A_1\, \varepsilon_q^2 + O(\varepsilon_q^3). \tag{9} $$
Thus, the theorem is proved. □
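The error Equation (8) can also be checked symbolically. The following SymPy sketch truncates the Taylor series of Φ about the simple zero (with d standing for Φ′(η); names are ours) and reproduces (9):

```python
import sympy as sp

eps, beta, a1, a2, A1, A2, d = sp.symbols('epsilon beta alpha1 alpha2 A1 A2 d')

def Phi(e):                              # Phi(eta + e) near a simple zero, truncated
    return d * e * (1 + A1 * e + A2 * e**2)

y = eps                                  # work in the error variable: y_q = eta + eps
z = y + beta * Phi(y)                    # z_q - eta
G = Phi(z) / Phi(y)
dd = (Phi(z) - Phi(y)) / (z - y)         # divided difference Phi[z_q, y_q]
e_next = sp.series(y - (a1 + a2 * G) * Phi(y) / dd, eps, 0, 3).removeO()
print(sp.factor(e_next.subs({a1: 1, a2: 0})))   # -> A1*eps**2*(beta*d + 1), i.e. (9)
```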
Theorem 2.
Assume the hypothesis of Theorem 1 for multiplicity λ = 2. Then, scheme (5) has a second order of convergence, provided that α1 = 3/2 and α2 = 1/2.
Proof. 
Consider the error at the q-th iteration, ε_q = y_q − η. Expanding Φ(y_q) and Φ(z_q) about η by Taylor's series, and taking into account that Φ(η) = 0, Φ′(η) = 0, and Φ″(η) ≠ 0, we have

$$ \Phi(y_q) = \frac{\Phi''(\eta)}{2!}\,\varepsilon_q^2 \left( 1 + \bar{A}_1 \varepsilon_q + \bar{A}_2 \varepsilon_q^2 + \bar{A}_3 \varepsilon_q^3 + \cdots \right), \tag{10} $$

$$ \Phi(z_q) = \frac{\Phi''(\eta)}{2!}\,\varepsilon_{z_q}^2 \left( 1 + \bar{A}_1 \varepsilon_{z_q} + \bar{A}_2 \varepsilon_{z_q}^2 + \bar{A}_3 \varepsilon_{z_q}^3 + \cdots \right), \tag{11} $$

where ε_{z_q} = z_q − η and $\bar{A}_n = \frac{2}{(2+n)!}\,\frac{\Phi^{(2+n)}(\eta)}{\Phi''(\eta)}$ for n ∈ ℕ.

By using (10) and (11) in (5), we have

$$ \varepsilon_{q+1} = y_{q+1} - \eta = \frac{1}{2}\left( 2 - \alpha_1 - \alpha_2 \right) \varepsilon_q + \frac{1}{8}\left( (\alpha_1 - 3\alpha_2)\,\beta \Phi''(\eta) + 2(\alpha_1 + \alpha_2)\,\bar{A}_1 \right) \varepsilon_q^2 + O(\varepsilon_q^3). \tag{12} $$

If we take α2 = 2 − α1, then (12) becomes

$$ \varepsilon_{q+1} = \frac{1}{4}\left( (2\alpha_1 - 3)\,\beta \Phi''(\eta) + 2\bar{A}_1 \right) \varepsilon_q^2 + O(\varepsilon_q^3). \tag{13} $$

The parameter β is still present in (13). To make this error equation free from β, we take α1 = 3/2 (and hence α2 = 1/2). Then, error Equation (13) reduces to

$$ \varepsilon_{q+1} = \frac{\bar{A}_1}{2}\,\varepsilon_q^2 + O(\varepsilon_q^3). \tag{14} $$
Thus, the theorem is proved. □
Theorem 3.
Use the hypothesis of Theorem 1 for multiplicity λ ≥ 3. Then, scheme (5) has convergence order 2, provided that α2 = λ − α1.
Proof. 
Taking into account that Φ^(j)(η) = 0 for j = 0, 1, 2, …, λ − 1 and Φ^(λ)(η) ≠ 0, and developing Φ(y_q) and Φ(z_q) about η in Taylor's series, we get

$$ \Phi(y_q) = \frac{\Phi^{(\lambda)}(\eta)}{\lambda!}\,\varepsilon_q^{\lambda} \left( 1 + \bar{\bar{A}}_1 \varepsilon_q + \bar{\bar{A}}_2 \varepsilon_q^2 + \bar{\bar{A}}_3 \varepsilon_q^3 + \cdots \right), \tag{15} $$

$$ \Phi(z_q) = \frac{\Phi^{(\lambda)}(\eta)}{\lambda!}\,\varepsilon_{z_q}^{\lambda} \left( 1 + \bar{\bar{A}}_1 \varepsilon_{z_q} + \bar{\bar{A}}_2 \varepsilon_{z_q}^2 + \bar{\bar{A}}_3 \varepsilon_{z_q}^3 + \cdots \right), \tag{16} $$

where ε_{z_q} = z_q − η and $\bar{\bar{A}}_n = \frac{\lambda}{(\lambda+n)!}\,\frac{\Phi^{(\lambda+n)}(\eta)}{\Phi^{(\lambda)}(\eta)}$ for n ∈ ℕ.

Inserting (15) and (16) into (5), we have

$$ \varepsilon_{q+1} = y_{q+1} - \eta = \frac{1}{\lambda}\left( \lambda - \alpha_1 - \alpha_2 \right) \varepsilon_q + \frac{(\alpha_1 + \alpha_2)}{\lambda^2}\,\bar{\bar{A}}_1\, \varepsilon_q^2 + O(\varepsilon_q^3). \tag{17} $$

Take α2 = λ − α1. Then, (17) becomes

$$ \varepsilon_{q+1} = \frac{\bar{\bar{A}}_1}{\lambda}\,\varepsilon_q^2 + O(\varepsilon_q^3). \tag{18} $$
Hence, the theorem is proved. □
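The same symbolic check works here; a sketch for λ = 3 with the condition α2 = λ − α1 imposed from the start (c stands for Φ^(3)(η)/3!; names are ours), confirming that both β and α1 drop out of (18):

```python
import sympy as sp

eps, beta, a1, B1, B2, B3, c = sp.symbols('epsilon beta alpha1 B1 B2 B3 c')
a2 = 3 - a1                              # Theorem 3 condition with lambda = 3

def Phi(e):                              # Phi(eta + e) near a zero of multiplicity 3
    return c * e**3 * (1 + B1 * e + B2 * e**2 + B3 * e**3)

y = eps                                  # error variable: y_q = eta + eps
z = y + beta * Phi(y)
G = Phi(z) / Phi(y)
dd = (Phi(z) - Phi(y)) / (z - y)
e_next = y - (a1 + a2 * G) * Phi(y) / dd
print(sp.series(e_next, eps, 0, 3))      # -> B1*eps**2/3 + O(eps**3), i.e. (18)
```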
Remark 1.
The presented scheme (5) reaches second-order convergence provided that the conditions of Theorems 1, 2, and 3 are satisfied. This convergence rate is achieved using only two function evaluations, viz., Φ(y_q) and Φ(z_q), per iteration. Therefore, the scheme in Equation (5) is optimal according to the Kung–Traub hypothesis [35].
Remark 2.
For numeric work, the presented scheme (5) is given by

$$ y_{q+1} = y_q - \frac{(\lambda + 1)\,\Phi(y_q) + (\lambda - 1)\,\Phi(z_q)}{2\,\Phi[z_q, y_q]}, \tag{19} $$

wherein we have considered α1 = (λ + 1)/2 and α2 = (λ − 1)/2. The final scheme (19) satisfies the conditions of Theorems 1–3 above. In what follows, (19) is denoted by NM.
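A minimal Python sketch of NM (the names and the stopping tolerance are our choices, not part of the scheme):

```python
def nm(phi, y0, lam, beta, tol=1e-12, max_iter=100):
    """Proposed scheme NM, Equation (19): derivative-free, optimal order two."""
    y = y0
    for _ in range(max_iter):
        fy = phi(y)
        z = y + beta * fy                # z_q = y_q + beta * Phi(y_q)
        if z == y:                       # divided difference would be undefined
            break
        fz = phi(z)
        dd = (fz - fy) / (z - y)         # Phi[z_q, y_q]
        step = ((lam + 1) * fy + (lam - 1) * fz) / (2 * dd)
        y -= step
        if abs(step) < tol:
            break
    return y

# Example: the Van der Waals polynomial of Section 3 factors as
# (y - 1.75)**2 * (y - 1.72), with a double root at 1.75.
print(nm(lambda y: (y - 1.75)**2 * (y - 1.72), y0=2.2, lam=2, beta=0.5))
```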

3. Numerical Results

In this section, the new algorithm NM is applied to some real-life practical problems. To check its performance, we run NM with β = 1, β = 1/2, and β = 1/3; the resulting algorithms are denoted NM1, NM2, and NM3, respectively. To verify the theoretical order of convergence, we used the computational order of convergence (COC) formula (see Ref. [36])
$$ \text{COC} = \frac{\ln\left| (y_{q+1} - y_q)/(y_q - y_{q-1}) \right|}{\ln\left| (y_q - y_{q-1})/(y_{q-1} - y_{q-2}) \right|}, \qquad q = 2, 3, \ldots $$
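From a stored list of iterates, COC can be computed directly; a small helper (ours):

```python
from math import log

def coc(iterates):
    """Computational order of convergence from the last four iterates."""
    y0, y1, y2, y3 = iterates[-4:]       # y_{q-2}, y_{q-1}, y_q, y_{q+1}
    return log(abs((y3 - y2) / (y2 - y1))) / log(abs((y2 - y1) / (y1 - y0)))
```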
The performances of NM1, NM2, and NM3 are compared with the modified Traub–Steffensen method (2) and the Kansal et al. method (4). Method (4) is denoted by KM1, KM2, KM3, and KM4 for a = 6/7, 2/3, 3/4, and 5/6, respectively; these values of the parameter a give the best numerical results, as argued in Ref. [32].
All numerical problems given in Table 1 were solved with Mathematica 9 using the stopping criterion |y_{q+1} − y_q| + |Φ(y_q)| < 10^−100. To assess the performance of the presented algorithms, we display (i) the number of iterations required, (ii) the last four differences between consecutive iterations |y_q − y_{q−1}|, and (iii) the computational convergence order (COC) in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, as well as (iv) the CPU time (in seconds) needed to satisfy the stopping criterion in Table 8.
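As an illustration of this setup, the following is a minimal high-precision driver for NM2 on the Van der Waals problem Φ2 (a sketch assuming the mpmath library; the precision and loop bound are our choices):

```python
from mpmath import mp, mpf

mp.dps = 150                             # enough digits for the 1e-100 criterion

def phi2(y):                             # Van der Waals problem of Table 1
    return y**3 - mpf('5.22') * y**2 + mpf('9.0825') * y - mpf('5.2675')

y, lam, beta = mpf('2.2'), 2, mpf('0.5') # NM2: beta = 1/2, double root at 1.75
iterations = 0
while iterations < 100:
    fy = phi2(y)
    z = y + beta * fy
    if z == y:
        break
    fz = phi2(z)
    dd = (fz - fy) / (z - y)
    y_next = y - ((lam + 1) * fy + (lam - 1) * fz) / (2 * dd)
    iterations += 1
    if abs(y_next - y) + abs(fy) < mpf('1e-100'):
        y = y_next
        break
    y = y_next
print(iterations, y)                     # the computed root approaches 1.75
```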
The computed results in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 and Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 show that the suggested algorithms NM2 and NM3 exhibit good convergence characteristics in all considered problems, whereas in problem 1 the existing methods KM1, KM2, KM3, and KM4 do not preserve their order of convergence. For problem 1, the efficient nature of the new methods is also displayed in Figure 1. In problem 2, the new methods NM1 and NM2 take fewer iterations to converge to the required root for a given initial guess, and the time consumption of all methods, shown in Figure 2, demonstrates the robust character of the new methods. In problem 3, the entry D indicates that a method diverges; for the second initial guess, all considered methods, including the new method NM1, fail to converge to the required root, except the new methods NM2 and NM3. Figure 3 illustrates the efficient character of NM2 and NM3. Problems 4–6 and Figure 4, Figure 5 and Figure 6 also confirm the stability of the new methods. The increase in accuracy of the subsequent approximations, visible from the values of the differences |y_q − y_{q−1}|, explains the algorithms' strong convergence and also suggests that NM2 and NM3 are stable. Additionally, the accuracy of the approximations computed by the suggested algorithms is higher than or equal to that of the approximations computed by the existing algorithms. The computed convergence order, shown in the last column of Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7, verifies the theoretical second order of convergence of the methods. Table 8, which shows that the CPU time of the new algorithms is smaller than that of the existing algorithms, can also be used to assess their robustness (see Figure 4, Figure 5 and Figure 6). Many other problems also confirm the stability of the new algorithms. Since the new methods use the first-order divided difference Φ[z_q, y_q], a drawback is that, if at some stage q we have z_q = y_q, the divided difference is undefined and the methods may fail to converge. This section concludes that the new methods NM2 and NM3 are more stable and efficient.

4. Conclusions

A novel second-order optimal derivative-free family for finding multiple roots of nonlinear equations has been presented, and some new optimal methods were generated based on the parameter β. For comparison, the well-known modified Traub–Steffensen method was also taken into account. The order of convergence was examined and, by testing the proposed methods on some practical problems, the stability of the methods was confirmed. The efficiency of the presented algorithms can be observed from the fact that their errors are less than or equal to those of the existing methods. The CPU time consumed by the new methods NM2 and NM3 is less than that taken by existing methods of the same nature, which highlights the value of the presented algorithms. The main advantage of the new algorithms NM2 and NM3 is that they are suitable for both categories of roots, viz., simple and multiple. So, we conclude this study by remarking that the new algorithms NM2 and NM3 are good options for finding the simple and multiple roots of nonlinear equations.

Author Contributions

Conceptualization, methodology, S.K.; Formal analysis, validation, resources, L.J.; Software, writing—original draft preparation, J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
2. Iliev, A.; Kyurkchiev, N. Nontrivial Methods in Numerical Analysis. In Selected Topics in Numerical Analysis; Lambert Academic Publishing: Chişinău, Moldova, 2010.
3. Soleymani, F. Some optimal iterative methods and their with memory variants. J. Egypt. Math. Soc. 2013, 21, 133–141.
4. Hafiz, M.A.; Bahgat, M.S.M. Solving nonsmooth equations using family of derivative-free optimal methods. J. Egypt. Math. Soc. 2013, 21, 38–43.
5. Sihwail, R.; Solaiman, O.S.; Omar, K.; Ariffin, K.A.Z.; Alswaitti, M.; Hashim, I. A hybrid approach for solving systems of nonlinear equations using Harris Hawks optimization and Newton's method. IEEE Access 2021, 9, 95791–95807.
6. Sihwail, R.; Solaiman, O.S.; Ariffin, K.A.Z. New robust hybrid Jarratt-Butterfly optimization algorithm for nonlinear models. J. King Saud Univ. Comput. Inf. Sci. 2022, in press.
7. Solaiman, O.S.; Hashim, I. Optimal eighth-order solver for nonlinear equations with applications in chemical engineering. Intell. Autom. Soft Comput. 2021, 27, 379–390.
8. Galantai, A.; Hegedus, C.J. A study of accelerated Newton methods for multiple polynomial roots. Numer. Algor. 2010, 54, 219–243.
9. Halley, E. A new exact and easy method of finding the roots of equations generally and that without any previous reduction. Philos. Trans. R. Soc. Lond. 1694, 18, 136–148.
10. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269.
11. Neta, B.; Johnson, A.N. High-order nonlinear solver for multiple roots. Comput. Math. Appl. 2008, 55, 2012–2017.
12. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335.
13. Akram, S.; Zafar, F.; Yasmin, N. An optimal eighth-order family of iterative methods for multiple roots. Mathematics 2019, 7, 672.
14. Akram, S.; Akram, F.; Junjua, M.U.D.; Arshad, M.; Afzal, T. A family of optimal eighth order iteration functions for multiple roots and its dynamics. J. Math. 2021, 2021, 5597186.
15. Frontini, M. Hermite interpolation and a new iterative method for the computation of the roots of non-linear equations. Calcolo 2003, 40, 109–119.
16. Galantai, A. The theory of Newton's method. J. Comput. Appl. Math. 2000, 124, 25–44.
17. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
18. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292.
19. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135.
20. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881.
21. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206.
22. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353.
23. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
24. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367.
25. Behl, R.; Alsolami, A.J.; Pansera, B.A.; Al-Hamdan, W.M.; Salimi, M.; Ferrara, M. A new optimal family of Schröder's method for multiple zeros. Mathematics 2019, 7, 1076.
26. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365.
27. Sharma, J.R.; Kumar, S.; Argyros, I.K. Development of optimal eighth order derivative-free methods for multiple roots of nonlinear equations. Symmetry 2019, 11, 766.
28. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452.
29. Sharma, J.R.; Kumar, S.; Jäntschi, L. On derivative free multiple-root finders with optimal fourth order convergence. Mathematics 2020, 8, 1091.
30. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709.
31. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038.
32. Kansal, M.; Alshomrani, A.S.; Bhalla, S.; Behl, R.; Salimi, M. One parameter optimal derivative-free family to find the multiple roots of algebraic nonlinear equations. Mathematics 2020, 8, 2223.
33. Behl, R.; Cordero, A.; Torregrosa, J.R. A new higher-order optimal derivative free scheme for multiple roots. J. Comput. Appl. Math. 2021, 404, 113773.
34. Kumar, S.; Kumar, D.; Kumar, R. Development of cubically convergent iterative derivative free methods for computing multiple roots. SeMA J. 2022.
35. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
36. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
37. Danby, J.M.A.; Burkardt, T.M. The solution of Kepler's equation. I. Celest. Mech. 1983, 40, 95–107.
38. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Englewood Cliffs, NJ, USA, 1999.
39. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2.
40. Zeng, Z. Computing multiple roots of inexact polynomials. Math. Comput. 2005, 74, 869–903.
Figure 1. Bar chart of problem Φ1(y).
Figure 2. Bar chart of problem Φ2(y).
Figure 3. Bar chart of problem Φ3(y).
Figure 4. Bar chart of problem Φ4(y).
Figure 5. Bar chart of problem Φ5(y).
Figure 6. Bar chart of problem Φ6(y).
Table 1. The following numerical problems are considered for experimentation.

Problem | Function | Root | Multiplicity | Initial Guesses
Kepler's problem [37] | Φ1(y) = y − (1/4) sin(y) − π/5 | 0.8093… | 1 | 0.6 & 1
Van der Waals problem [38] | Φ2(y) = y^3 − 5.22y^2 + 9.0825y − 5.2675 | 1.75 | 2 | 2.2 & 2.5
Continuous stirred-tank reactor [38,39] | Φ3(y) = y^4 + 11.50y^3 + 47.49y^2 + 83.06325y + 51.23266875 | −2.85 | 2 | −3.5 & −3.8
Academic problem [29] | Φ4(y) = y^4/12 + y^2/2 + y + e^y (y − 3) + sin(y) + 3 | 0 | 3 | −0.2 & 0.6
Complex root problem [28] | Φ5(y) = y (y^2 + 1) (2e^{y^2+1} + y^2 − 1) cosh^3(πy/2) | i | 5 | 0.9i & 1.1i
Root-clustering problem [40] | Φ6(y) = (y − 1)^20 (y − 2)^15 (y − 3)^10 (y − 4)^5 | 3 | 10 | 2.9
 | | 1 | 20 | 0.7
Table 2. Performance of methods for problem 1.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = 0.6
TM | 6 | 1.96 × 10^−6 | 4.15 × 10^−13 | 1.87 × 10^−26 | 3.78 × 10^−53 | 2
KM1 | 35 | 8.28 × 10^−91 | 9.79 × 10^−94 | 1.16 × 10^−96 | 1.37 × 10^−99 | 1
KM2 | 39 | 4.03 × 10^−90 | 1.11 × 10^−92 | 3.07 × 10^−95 | 8.46 × 10^−98 | 1
KM3 | 38 | 1.20 × 10^−91 | 2.48 × 10^−94 | 5.14 × 10^−97 | 1.06 × 10^−99 | 1
KM4 | 35 | 8.02 × 10^−89 | 1.11 × 10^−91 | 1.52 × 10^−94 | 2.10 × 10^−97 | 1
NM1 | 6 | 1.38 × 10^−8 | 3.60 × 10^−18 | 2.44 × 10^−37 | 1.13 × 10^−75 | 2
NM2 | 6 | 4.38 × 10^−7 | 1.23 × 10^−14 | 9.74 × 10^−30 | 6.08 × 10^−60 | 2
NM3 | 6 | 8.01 × 10^−7 | 5.09 × 10^−14 | 2.05 × 10^−28 | 3.32 × 10^−57 | 2

y0 = 1
TM | 6 | 1.91 × 10^−6 | 3.96 × 10^−13 | 1.70 × 10^−26 | 3.13 × 10^−53 | 2
KM1 | 35 | 9.42 × 10^−91 | 1.11 × 10^−93 | 1.32 × 10^−96 | 1.56 × 10^−99 | 1
KM2 | 39 | 5.37 × 10^−90 | 1.48 × 10^−92 | 4.09 × 10^−95 | 1.13 × 10^−97 | 1
KM3 | 38 | 1.49 × 10^−91 | 3.09 × 10^−94 | 6.39 × 10^−97 | 1.32 × 10^−99 | 1
KM4 | 35 | 9.30 × 10^−89 | 1.28 × 10^−91 | 1.77 × 10^−94 | 2.44 × 10^−97 | 1
NM1 | 6 | 7.68 × 10^−9 | 1.11 × 10^−18 | 2.33 × 10^−38 | 1.02 × 10^−77 | 2
NM2 | 6 | 3.74 × 10^−7 | 8.99 × 10^−15 | 5.18 × 10^−30 | 1.72 × 10^−60 | 2
NM3 | 6 | 7.22 × 10^−7 | 4.12 × 10^−14 | 1.35 × 10^−28 | 1.43 × 10^−57 | 2
Table 3. Performance of methods for problem 2.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = 2.2
TM | 10 | 4.28 × 10^−9 | 3.05 × 10^−16 | 1.55 × 10^−30 | 4.03 × 10^−59 | 2
KM1 | 10 | 4.37 × 10^−9 | 3.19 × 10^−16 | 1.69 × 10^−30 | 4.77 × 10^−59 | 2
KM2 | 10 | 4.50 × 10^−9 | 3.37 × 10^−16 | 1.90 × 10^−30 | 5.99 × 10^−59 | 2
KM3 | 10 | 4.44 × 10^−9 | 3.29 × 10^−16 | 1.80 × 10^−30 | 5.42 × 10^−59 | 2
KM4 | 10 | 4.39 × 10^−9 | 3.21 × 10^−16 | 1.72 × 10^−30 | 4.91 × 10^−59 | 2
NM1 | 10 | 5.05 × 10^−10 | 4.25 × 10^−18 | 3.01 × 10^−34 | 1.51 × 10^−66 | 2
NM2 | 10 | 1.99 × 10^−9 | 6.60 × 10^−17 | 7.26 × 10^−32 | 8.79 × 10^−62 | 2
NM3 | 10 | 2.74 × 10^−9 | 1.25 × 10^−16 | 2.62 × 10^−31 | 1.14 × 10^−60 | 2

y0 = 2.5
TM | 11 | 2.05 × 10^−12 | 7.01 × 10^−23 | 8.19 × 10^−44 | 1.12 × 10^−85 | 2
KM1 | 11 | 2.23 × 10^−12 | 8.31 × 10^−23 | 1.55 × 10^−43 | 2.21 × 10^−85 | 2
KM2 | 11 | 2.50 × 10^−12 | 1.04 × 10^−22 | 1.81 × 10^−43 | 5.43 × 10^−85 | 2
KM3 | 11 | 2.38 × 10^−12 | 9.43 × 10^−23 | 1.48 × 10^−43 | 3.66 × 10^−85 | 2
KM4 | 11 | 2.26 × 10^−12 | 8.55 × 10^−23 | 1.22 × 10^−43 | 2.47 × 10^−85 | 2
NM1 | 9 | 3.52 × 10^−12 | 2.06 × 10^−22 | 7.11 × 10^−43 | 8.42 × 10^−84 | 2
NM2 | 10 | 2.46 × 10^−8 | 1.01 × 10^−14 | 1.70 × 10^−27 | 4.80 × 10^−53 | 2
NM3 | 11 | 1.61 × 10^−13 | 4.29 × 10^−25 | 3.07 × 10^−48 | 1.58 × 10^−94 | 2
Table 4. Performance of methods for problem 3.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = −3.5
TM | 7 | 1.65 × 10^−7 | 3.61 × 10^−16 | 1.73 × 10^−33 | 4.00 × 10^−68 | 2
KM1 | 7 | 2.60 × 10^−7 | 1.31 × 10^−15 | 3.29 × 10^−32 | 2.09 × 10^−65 | 2
KM2 | 7 | 3.96 × 10^−7 | 4.29 × 10^−15 | 5.03 × 10^−31 | 6.90 × 10^−63 | 2
KM3 | 7 | 3.36 × 10^−7 | 2.68 × 10^−15 | 1.71 × 10^−31 | 6.96 × 10^−64 | 2
KM4 | 7 | 2.77 × 10^−7 | 1.55 × 10^−15 | 4.90 × 10^−32 | 4.87 × 10^−65 | 2
NM1 | 9 | 8.03 × 10^−8 | 1.54 × 10^−16 | 5.63 × 10^−34 | 7.54 × 10^−69 | 2
NM2 | 7 | 4.91 × 10^−8 | 5.74 × 10^−17 | 7.83 × 10^−35 | 1.46 × 10^−70 | 2
NM3 | 7 | 2.27 × 10^−8 | 1.23 × 10^−17 | 3.61 × 10^−36 | 3.11 × 10^−73 | 2

y0 = −3.8
TM | D | D | D | D | D | D
KM1 | D | D | D | D | D | D
KM2 | D | D | D | D | D | D
KM3 | D | D | D | D | D | D
KM4 | D | D | D | D | D | D
NM1 | D | D | D | D | D | D
NM2 | 7 | 3.28 × 10^−6 | 2.56 × 10^−13 | 1.56 × 10^−27 | 5.81 × 10^−56 | 2
NM3 | 8 | 1.30 × 10^−10 | 4.01 × 10^−22 | 3.82 × 10^−45 | 3.48 × 10^−91 | 2

D means that the method is diverging.
Table 5. Performance of methods for problem 4.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = −0.2
TM | 6 | 1.62 × 10^−6 | 2.18 × 10^−13 | 3.96 × 10^−27 | 1.31 × 10^−54 | 2
KM1 | 6 | 1.62 × 10^−6 | 2.19 × 10^−13 | 4.00 × 10^−27 | 1.33 × 10^−54 | 2
KM2 | 6 | 1.63 × 10^−6 | 2.21 × 10^−13 | 4.05 × 10^−27 | 1.37 × 10^−54 | 2
KM3 | 6 | 1.62 × 10^−6 | 2.20 × 10^−13 | 4.03 × 10^−27 | 1.35 × 10^−54 | 2
KM4 | 6 | 1.62 × 10^−6 | 2.19 × 10^−13 | 4.01 × 10^−27 | 1.34 × 10^−54 | 2
NM1 | 6 | 1.65 × 10^−6 | 2.28 × 10^−13 | 4.33 × 10^−27 | 1.56 × 10^−54 | 2
NM2 | 6 | 1.64 × 10^−6 | 2.24 × 10^−13 | 4.18 × 10^−27 | 1.45 × 10^−54 | 2
NM3 | 6 | 1.63 × 10^−6 | 2.23 × 10^−13 | 4.13 × 10^−27 | 1.42 × 10^−54 | 2

y0 = 0.6
TM | 6 | 1.82 × 10^−6 | 2.76 × 10^−13 | 6.35 × 10^−27 | 3.36 × 10^−54 | 2
KM1 | 6 | 1.69 × 10^−6 | 2.39 × 10^−13 | 4.74 × 10^−27 | 1.87 × 10^−54 | 2
KM2 | 6 | 1.53 × 10^−6 | 1.95 × 10^−13 | 3.16 × 10^−27 | 8.33 × 10^−55 | 2
KM3 | 6 | 1.60 × 10^−6 | 2.13 × 10^−13 | 3.78 × 10^−27 | 1.19 × 10^−54 | 2
KM4 | 6 | 1.67 × 10^−6 | 2.33 × 10^−13 | 4.51 × 10^−27 | 1.70 × 10^−54 | 2
NM1 | 6 | 3.26 × 10^−6 | 8.84 × 10^−15 | 6.51 × 10^−30 | 3.53 × 10^−60 | 2
NM2 | 6 | 1.05 × 10^−6 | 9.27 × 10^−14 | 7.16 × 10^−28 | 4.27 × 10^−56 | 2
NM3 | 6 | 1.27 × 10^−6 | 1.34 × 10^−13 | 1.49 × 10^−27 | 1.84 × 10^−55 | 2
Table 6. Performance of methods for problem 5.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = 0.9i
TM | 7 | 3.34 × 10^−12 | 2.98 × 10^−24 | 2.37 × 10^−48 | 1.50 × 10^−96 | 2
KM1 | 7 | 3.35 × 10^−12 | 3.00 × 10^−24 | 2.40 × 10^−48 | 1.54 × 10^−96 | 2
KM2 | 7 | 3.37 × 10^−12 | 3.03 × 10^−24 | 2.44 × 10^−48 | 1.59 × 10^−96 | 2
KM3 | 7 | 3.36 × 10^−12 | 3.01 × 10^−24 | 2.42 × 10^−48 | 1.57 × 10^−96 | 2
KM4 | 7 | 3.36 × 10^−12 | 3.00 × 10^−24 | 2.41 × 10^−48 | 1.54 × 10^−96 | 2
NM1 | 7 | 3.45 × 10^−12 | 3.17 × 10^−24 | 2.67 × 10^−48 | 1.91 × 10^−96 | 2
NM2 | 7 | 3.41 × 10^−12 | 3.09 × 10^−24 | 2.55 × 10^−48 | 1.74 × 10^−96 | 2
NM3 | 7 | 3.39 × 10^−12 | 3.07 × 10^−24 | 2.52 × 10^−48 | 1.69 × 10^−96 | 2

y0 = 1.1i
TM | 6 | 8.96 × 10^−7 | 2.14 × 10^−13 | 1.22 × 10^−26 | 4.00 × 10^−53 | 2
KM1 | 6 | 8.93 × 10^−7 | 2.13 × 10^−13 | 1.20 × 10^−26 | 3.87 × 10^−53 | 2
KM2 | 6 | 8.88 × 10^−7 | 2.10 × 10^−13 | 1.18 × 10^−26 | 3.71 × 10^−53 | 2
KM3 | 6 | 8.90 × 10^−7 | 2.11 × 10^−13 | 1.19 × 10^−26 | 3.78 × 10^−53 | 2
KM4 | 6 | 8.92 × 10^−7 | 2.12 × 10^−13 | 1.20 × 10^−26 | 3.85 × 10^−53 | 2
NM1 | 6 | 8.88 × 10^−7 | 2.10 × 10^−13 | 1.18 × 10^−26 | 3.71 × 10^−53 | 2
NM2 | 6 | 8.89 × 10^−7 | 2.11 × 10^−13 | 1.18 × 10^−26 | 3.72 × 10^−53 | 2
NM3 | 6 | 8.88 × 10^−7 | 2.10 × 10^−13 | 1.18 × 10^−26 | 3.71 × 10^−53 | 2
Table 7. Performance of methods for problem 6.

Method | q | |y_{q−3} − y_{q−4}| | |y_{q−2} − y_{q−3}| | |y_{q−1} − y_{q−2}| | |y_q − y_{q−1}| | COC

y0 = 2.9, λ = 10
TM | 8 | 4.74 × 10^−11 | 4.50 × 10^−21 | 4.05 × 10^−41 | 3.27 × 10^−81 | 2
KM1 | 8 | 4.74 × 10^−11 | 4.50 × 10^−21 | 4.05 × 10^−41 | 3.27 × 10^−81 | 2
KM2 | 8 | 4.74 × 10^−11 | 4.50 × 10^−21 | 4.04 × 10^−41 | 3.27 × 10^−81 | 2
KM3 | 8 | 4.74 × 10^−11 | 4.50 × 10^−21 | 4.04 × 10^−41 | 3.27 × 10^−81 | 2
KM4 | 8 | 4.74 × 10^−11 | 4.50 × 10^−21 | 4.05 × 10^−41 | 3.27 × 10^−81 | 2
NM1 | 8 | 4.74 × 10^−11 | 4.49 × 10^−21 | 4.03 × 10^−41 | 3.24 × 10^−81 | 2
NM2 | 8 | 4.74 × 10^−11 | 4.49 × 10^−21 | 4.03 × 10^−41 | 3.26 × 10^−81 | 2
NM3 | 8 | 4.74 × 10^−11 | 4.49 × 10^−21 | 4.04 × 10^−41 | 3.26 × 10^−81 | 2

y0 = 0.7, λ = 20
TM | 8 | 2.52 × 10^−10 | 6.88 × 10^−20 | 5.13 × 10^−39 | 2.85 × 10^−77 | 2
KM1 | 8 | 2.50 × 10^−10 | 6.75 × 10^−20 | 4.93 × 10^−39 | 2.63 × 10^−77 | 2
KM2 | 8 | 2.46 × 10^−10 | 6.57 × 10^−20 | 4.67 × 10^−39 | 2.37 × 10^−77 | 2
KM3 | 8 | 2.48 × 10^−10 | 6.65 × 10^−20 | 4.78 × 10^−39 | 2.48 × 10^−77 | 2
KM4 | 8 | 2.49 × 10^−10 | 6.72 × 10^−20 | 4.90 × 10^−39 | 2.60 × 10^−77 | 2
NM1 | 8 | 2.24 × 10^−10 | 5.45 × 10^−20 | 3.22 × 10^−39 | 1.12 × 10^−77 | 2
NM2 | 8 | 2.42 × 10^−10 | 6.36 × 10^−20 | 4.38 × 10^−39 | 2.08 × 10^−77 | 2
NM3 | 8 | 2.45 × 10^−10 | 6.48 × 10^−20 | 4.55 × 10^−39 | 2.25 × 10^−77 | 2
Table 8. CPU time (in seconds) consumed by the methods for the considered problems (averages over 10 runs).

Problem | TM | KM1 | KM2 | KM3 | KM4 | NM1 | NM2 | NM3
Φ1(y), y0 = 0.6 | 0.6241 | 1.8892 | 2.1255 | 2.1376 | 2.1061 | 0.4217 | 0.4537 | 0.4641
Φ1(y), y0 = 1 | 0.6417 | 2.0912 | 2.3059 | 2.2125 | 2.0249 | 0.4998 | 0.5014 | 0.5267
Φ2(y), y0 = 2.2 | 0.0622 | 0.0631 | 0.0763 | 0.0872 | 0.0801 | 0.0591 | 0.0613 | 0.0624
Φ2(y), y0 = 2.5 | 0.0590 | 0.0781 | 0.0782 | 0.0772 | 0.0792 | 0.0499 | 0.0581 | 0.0589
Φ3(y), y0 = −3.5 | 0.0779 | 0.0723 | 0.0781 | 0.0776 | 0.0765 | 0.0813 | 0.0697 | 0.0682
Φ3(y), y0 = −3.8 | − | − | − | − | − | − | 0.0741 | 0.0782
Φ4(y), y0 = −0.2 | 0.0498 | 0.0501 | 0.0598 | 0.0500 | 0.0499 | 0.0458 | 0.0478 | 0.0489
Φ4(y), y0 = 0.6 | 0.0671 | 0.0690 | 0.0688 | 0.0683 | 0.0670 | 0.0655 | 0.0670 | 0.0669
Φ5(y), y0 = 0.9i | 0.5997 | 0.5675 | 0.5567 | 0.5431 | 0.5520 | 0.5317 | 0.5236 | 0.5224
Φ5(y), y0 = 1.1i | 0.5431 | 0.5155 | 0.4997 | 0.5107 | 0.4923 | 0.4991 | 0.4897 | 0.4872
Φ6(y), y0 = 2.9, λ = 10 | 0.1199 | 0.1123 | 0.1251 | 0.1370 | 0.1357 | 0.1080 | 0.1091 | 0.1110
Φ6(y), y0 = 0.7, λ = 20 | 0.1210 | 0.1321 | 0.1298 | 0.1234 | 0.1299 | 0.1199 | 0.1201 | 0.1190
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
