Communication

Generalization of Liu–Zhou Method for Multiple Roots of Applied Science Problems

Sunil Kumar, Monika Khatri, Muktak Vyas, Ashwini Kumar, Priti Dhankhar and Lorentz Jäntschi
1 Department of Mathematics, Chandigarh University, Mohali 140413, India
2 Faculty of Management and Commerce, Poornima University, Jaipur 303905, India
3 Faculty of Engineering and Technology, Poornima University, Jaipur 303905, India
4 Department of Mathematics, Government College, Hisar 125001, India
5 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(3), 523; https://doi.org/10.3390/math13030523
Submission received: 6 January 2025 / Revised: 30 January 2025 / Accepted: 3 February 2025 / Published: 5 February 2025
(This article belongs to the Section E2: Control Theory and Mechanics)

Abstract

Some optimal and non-optimal iterative approaches for computing multiple zeros of nonlinear functions have recently been published in the literature for the case in which the multiplicity θ of the root is known. Here, we present a new family of iterative algorithms for multiple zeros that is distinct from the existing approaches. Some special cases of the new family are presented, and it is found that the existing Liu–Zhou methods are special cases of the new family. To check the consistency and stability of the new methods, we consider a continuous stirred tank reactor problem, an isentropic supersonic flow problem, an eigenvalue problem, a complex root problem, and a standard test problem in the numerical section, and we find that the new methods compare favorably with other existing fourth-order methods. The errors reported in the numerical section confirm the robust character of the new methods.

1. Introduction

With the rapid growth of numerical computation, numerous physical and technical applications [1,2,3] have demonstrated the significance of solving nonlinear equations. Such problems arise in a variety of domains within the natural and physical sciences, such as those involving heat and fluid flow, initial and boundary value problems, and global positioning systems. Except for a few nonlinear equations, finding the solution by an analytical approach is practically impossible. Iterative methods thus offer a desirable alternative for solving problems of this type.
To find the multiple roots of a nonlinear equation of the form $f(t) = 0$, where $f(t)$ is a real function defined in a domain $D \subseteq \mathbb{R}$, the modified Newton method [4,5,6] is used very commonly. The modified Newton method is given by
$$t_{i+1} = t_i - \theta\,\frac{f(t_i)}{f'(t_i)}, \qquad i = 0, 1, 2, \ldots \tag{1}$$
With the multiplicity $\theta \geq 1$, $\theta \in \mathbb{N}$, given in advance, the scheme (1) is optimal in the sense of the Kung–Traub conjecture [7] and converges quadratically to the desired multiple root.
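As a minimal illustration (added here, not part of the original article), the modified Newton iteration (1) can be implemented as follows; the test function, tolerance, and starting point are assumptions chosen only for the example.

```python
def modified_newton(f, df, t0, theta, tol=1e-8, max_iter=50):
    """Modified Newton iteration (1): t_{i+1} = t_i - theta*f(t_i)/f'(t_i)
    for a root of multiplicity theta. A multiple root limits the attainable
    accuracy in double precision, so the tolerance is kept modest."""
    t = t0
    for _ in range(max_iter):
        step = theta * f(t) / df(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Illustrative example: f(t) = (t - 1)**2 * (t + 2) has a double root at t = 1.
f  = lambda t: (t - 1)**2 * (t + 2)
df = lambda t: 2*(t - 1)*(t + 2) + (t - 1)**2
print(modified_newton(f, df, t0=1.5, theta=2))  # approximately 1.0
```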
Many scientists have been working on iterative methods for finding multiple roots with higher-order convergence and efficiency in recent decades. Numerous higher-order optimal and non-optimal methods have been developed in the literature by Behl et al. [8], Behl et al. [9], Behl et al. [10], Dong [11], Geum et al. [12], Hansen [13], Hueso et al. [14], Kansal et al. [15], Li et al. [16], Li et al. [17], Liu and Zhou [18], Neta [19,20], Osada [21], Sharma and Kumar [22,23], Sharma and Sharma [24], Sharifi et al. [25], Soleymani et al. [26], Soleymani and Babajee [27], Thukral [28], Victory and Neta [29], and Zhou et al. [30,31]. These methods are two-step and three-step methods with convergence orders of three, four, six, and eight. Motivated by this, our aim in this work is to develop optimal multi-point iterative methods of higher convergence order that require only a small number of computations per iteration.
Taking into account the aforementioned concerns, we offer a family that, according to the Kung–Traub hypothesis [7], has optimal fourth-order convergence and requires three pieces of function information per iteration. The proposed approach consists of two steps, the first of which uses the Newton iteration (1) and the second of which uses a Newton-type correction. The scheme is distinctive in that each iteration calls for one function evaluation and two derivative evaluations. The main benefit of the new family of methods is that the Liu–Zhou scheme [18] is a special case of the family.
The information contained in the rest of this article is outlined as follows. In Section 2, simple definitions are given. Section 3 develops the fourth-order generalized iterative scheme and examines its convergence. Section 4 examines a few practical science problems to examine the methodological stability and validate the theoretical findings. This part also includes a comparison with existing methods and some graphs displaying the calculated outcomes. Concluding remarks are reported in Section 5.

2. Basic Definitions

2.1. Multiple Root

A root $t^*$ of $f(t) = 0$ is a zero of multiplicity $\theta$ if, for $t \neq t^*$, we can write $f(t) = (t - t^*)^{\theta} g(t)$, where $g(t^*) \neq 0$. The function $f \in C^{\theta}[a, b]$ has a zero of multiplicity $\theta$ at $t^*$ in $(a, b)$ if
$$f(t^*) = f'(t^*) = f''(t^*) = \cdots = f^{(\theta - 1)}(t^*) = 0,$$
but $f^{(\theta)}(t^*) \neq 0$. If $\theta = 1$, the root is called a simple zero.
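For example (an illustration added here, not from the original text), the function $f(t) = (t - 1)^3 (t + 2)$ has a zero of multiplicity $\theta = 3$ at $t^* = 1$, since
$$f(1) = f'(1) = f''(1) = 0, \qquad f'''(1) = 3!\,(1 + 2) = 18 \neq 0.$$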

2.2. Order of Convergence

Let $\{t_i\}_{i \geq 0}$ be a sequence of iterative points which converges to $t^*$. Then, the convergence is said to be of order $p$, $p \geq 1$, if there exist $M > 0$ and $i_0$ such that
$$\left\| t_{i+1} - t^* \right\| \leq M \left\| t_i - t^* \right\|^{p} \quad \text{for all } i \geq i_0$$
or
$$\left\| e_{i+1} \right\| \leq M \left\| e_i \right\|^{p} \quad \text{for all } i \geq i_0,$$
where $e_i = t_i - t^*$. The convergence is linear if $p = 1$ and $0 < M < 1$, and quadratic if $p = 2$.

2.3. Error Equation

Let $e_i = t_i - t^*$ be the error in the $i$th iteration; we designate the relation
$$e_{i+1} = L\, e_i^{p} + O\!\left(e_i^{p+1}\right)$$
as the error equation. Here, $L$ is the asymptotic error constant, $p$ is the order of convergence, and $O(e_i^{p+1})$ denotes terms of order higher than $e_i^{p}$.

2.4. Computational Order of Convergence

Assume that $t_{i+2}$, $t_{i+1}$, and $t_i$ are three consecutive iterates close to $t^*$, where $t^*$ is the zero of the function $f$. Then, the following formula is used to approximate the computational order of convergence (COC) (see [32]):
$$\mathrm{COC} = \frac{\ln\left| (t_{i+2} - t^*) / (t_{i+1} - t^*) \right|}{\ln\left| (t_{i+1} - t^*) / (t_i - t^*) \right|}, \qquad i = 1, 2, \ldots \tag{2}$$
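As a small illustration (added here, not part of the original text), Formula (2) can be evaluated directly from three consecutive iterates once $t^*$ is known; the iterate values in the example are assumed numbers.

```python
import math

def coc(t_prev, t_curr, t_next, t_star):
    """Computational order of convergence (2) from three consecutive iterates."""
    num = math.log(abs((t_next - t_star) / (t_curr - t_star)))
    den = math.log(abs((t_curr - t_star) / (t_prev - t_star)))
    return num / den

# Iterates approaching t* = 1 roughly quadratically (illustrative numbers):
print(coc(1.02, 1.0002, 1.00000002, 1.0))  # approximately 2
```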

2.5. Kung–Traub Hypothesis

The Kung–Traub hypothesis [7] states that an iterative method without memory which uses $p + 1$ function evaluations per step can have convergence order at most $2^{p}$. Methods that attain this bound are called optimal methods.

3. Formulation of Scheme

The design and convergence analysis of the suggested scheme, which is the primary contribution of this paper, are covered in this section. To find multiple zeros with multiplicity $\theta > 1$, we consider the following optimal fourth-order family:
$$u_i = t_i - \theta\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = u_i - \theta\, G(h_i)\,\frac{f(t_i)}{f'(t_i)}, \tag{3}$$
where $h_i = \dfrac{x_i}{a_1 + a_2 x_i}$ with $a_1 + a_2 x_i \neq 0$, $x_i = \left(\dfrac{f'(u_i)}{f'(t_i)}\right)^{\frac{1}{\theta - 1}}$, the parameters $a_1$ and $a_2$ are not simultaneously zero, and the function $G(h)$ is analytic in a neighborhood of $0$. Note that the Newton-type correction in the second step is multiplied by the factor $G(h)$, which is therefore called the weight function.
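A minimal sketch of one iteration of Scheme (3) is given below (added for illustration; the function names are assumptions, and the derivative ratio is assumed to admit a real $(\theta-1)$-th root).

```python
def scheme3_step(f, df, t, theta, G, a1=1.0, a2=0.0):
    """One iteration of Scheme (3):
        u     = t - theta*f(t)/f'(t)
        t_new = u - theta*G(h)*f(t)/f'(t),
    with x = (f'(u)/f'(t))**(1/(theta-1)) and h = x/(a1 + a2*x)."""
    ft, dft = f(t), df(t)
    u = t - theta * ft / dft
    x = (df(u) / dft) ** (1.0 / (theta - 1))
    h = x / (a1 + a2 * x)
    return u - theta * G(h) * ft / dft
```

For instance, choosing G(h) = h + 2*theta*h**2/(theta - 1) with a1 = 1 and a2 = 0 reproduces method (15) discussed below.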
In what follows, we examine conditions under which Scheme (3) achieves the highest feasible order of convergence with the minimum number of function evaluations. The computer algebra system Mathematica (version 12.0) was used to manage the lengthy calculations.
Let $e_i = t_i - t^*$ be the error at the $i$th iteration. Taylor's expansion about $t^*$ yields
$$f(t_i) = \frac{f^{(\theta)}(t^*)}{\theta!}\, e_i^{\theta}\left(1 + D_1 e_i + D_2 e_i^2 + D_3 e_i^3 + D_4 e_i^4 + \cdots\right) \tag{4}$$
and
$$f'(t_i) = \frac{f^{(\theta)}(t^*)}{\theta!}\, e_i^{\theta - 1}\left(\theta + (\theta + 1) D_1 e_i + (\theta + 2) D_2 e_i^2 + (\theta + 3) D_3 e_i^3 + (\theta + 4) D_4 e_i^4 + \cdots\right), \tag{5}$$
where $D_n = \dfrac{\theta!}{(\theta + n)!}\,\dfrac{f^{(\theta + n)}(t^*)}{f^{(\theta)}(t^*)}$ for $n \in \mathbb{N}$.
Using (4) and (5), we have
$$e_{u_i} = u_i - t^* = \frac{D_1}{\theta}\, e_i^2 + \frac{2\theta D_2 - (1 + \theta) D_1^2}{\theta^2}\, e_i^3 + \frac{1}{\theta^3}\left((1 + \theta)^2 D_1^3 - \theta (4 + 3\theta) D_1 D_2 + 3\theta^2 D_3\right) e_i^4 + O(e_i^5). \tag{6}$$
Expanding $f'(u_i)$ about $t^*$ gives
$$f'(u_i) = \frac{f^{(\theta)}(t^*)}{\theta!}\, e_{u_i}^{\theta - 1}\left(\theta + (\theta + 1) D_1 e_{u_i} + (\theta + 2) D_2 e_{u_i}^2 + (\theta + 3) D_3 e_{u_i}^3 + (\theta + 4) D_4 e_{u_i}^4 + \cdots\right). \tag{7}$$
Then, we obtain that
$$x_i = \left(\frac{f'(u_i)}{f'(t_i)}\right)^{\frac{1}{\theta - 1}} = \frac{D_1}{\theta}\, e_i + \frac{2(\theta - 1) D_2 - (\theta + 1) D_1^2}{\theta(\theta - 1)}\, e_i^2 + \eta_1 e_i^3 + \eta_2 e_i^4 + O(e_i^5), \tag{8}$$
where $\eta_1 = \eta_1(\theta, D_1, D_2, D_3)$ and $\eta_2 = \eta_2(\theta, D_1, D_2, D_3, D_4)$.
Expanding function G ( h i ) in the neighborhood of the origin, we have that
$$G(h_i) = G(0) + h_i\, G'(0) + \frac{1}{2} h_i^2\, G''(0) + \frac{1}{6} h_i^3\, G'''(0) + O(h_i^4), \tag{9}$$
where $h_i = \dfrac{x_i}{a_1 + a_2 x_i}$.
By using Equations (4)–(6), (8) and (9) in Scheme (3), we have
$$\begin{aligned} e_{i+1} ={}& G(0)\, e_i + \frac{a_1 + a_1 G(0) - G'(0)}{a_1 \theta}\, D_1 e_i^2 \\ &+ \frac{1}{2 a_1^2 (\theta - 1) \theta^2}\Big(\big((2 a_2 G'(0) - G''(0))(\theta - 1) - 2 a_1^2 (1 + G(0))(\theta^2 - 1) + 2 a_1 G'(0)(\theta^2 + 2\theta - 1)\big) D_1^2 \\ &\qquad + 4 a_1 \big(a_1 + a_1 G(0) - G'(0)\big)(\theta - 1)\theta\, D_2\Big) e_i^3 + \gamma\, e_i^4 + O(e_i^5), \tag{10} \end{aligned}$$
where
$$\begin{aligned} \gamma ={}& \frac{1}{6 a_1^3 (\theta - 1)^2 \theta^3}\Big(\big((6 a_2^2 G'(0) - 6 a_2 G''(0) + G'''(0))(\theta - 1)^2 + 6 a_1^3 (1 + G(0))(\theta^2 - 1)^2 \\ &\quad - 3 a_1 (2 a_2 G'(0) - G''(0))(1 - 4\theta + \theta^2 + 2\theta^3) - 3 a_1^2 G'(0)\,\theta\,(5 + 7\theta^2 + 2\theta^3)\big) D_1^3 \\ &\quad - 6 a_1 (\theta - 1)\theta \big(2 (2 a_2 G'(0) - G''(0))(\theta - 1) + a_1 G'(0)(4 - 8\theta - 3\theta^2) + a_1^2 (1 + G(0))(4 + \theta + 3\theta^2)\big) D_1 D_2 \\ &\quad + 18 a_1^2 \big(a_1 + a_1 G(0) - G'(0)\big)(\theta - 1)^2 \theta^2\, D_3\Big). \end{aligned}$$
To attain the optimal fourth-order convergence, the coefficients of $e_i$, $e_i^2$, and $e_i^3$ in Equation (10) must vanish. After simple calculations, this yields the following conditions:
$$G(0) = 0, \qquad G'(0) = a_1, \qquad G''(0) = \frac{2\left(2 a_1^2 \theta + a_1 a_2 \theta - a_1 a_2\right)}{\theta - 1}, \qquad \theta \neq 1. \tag{11}$$
Under these conditions, the error Equation (10) reduces to
$$e_{i+1} = \frac{D_1}{6 a_1^3 (\theta - 1)^2 \theta^3}\Big(\big(6 a_1 a_2^2 (\theta - 1)^2 - G'''(0)(\theta - 1)^2 + 24 a_1^2 a_2 (\theta - 1)\theta + 3 a_1^3 (2 + \theta + 8\theta^2 + \theta^3)\big) D_1^2 - 6 a_1^3 (\theta - 1)\theta^2 D_2\Big) e_i^4 + O(e_i^5). \tag{12}$$
The following theorem states the above results:
Theorem 1.
Let $f : \mathbb{C} \to \mathbb{C}$ be an analytic function in a neighborhood of a multiple zero $t^*$ with multiplicity $\theta > 1$. If the initial guess $t_0$ is sufficiently near to $t^*$, then Scheme (3) has a local order of convergence of at least four, provided that $G(0) = 0$, $G'(0) = a_1$, $G''(0) = \dfrac{2\left(2 a_1^2 \theta + a_1 a_2 \theta - a_1 a_2\right)}{\theta - 1}$, and $|G'''(0)| < \infty$.

Special Members of Scheme (3)

By choosing various weight functions $G(h_i)$ that satisfy (11), we obtain particular members of our suggested Scheme (3); several members of the family are specified in this section. Two corresponding simple forms of $G(h_i)$ are given by
$$G(h_i) = \frac{a_1 h_i\left(a_2 h_i (\theta - 1) + \theta + 2 a_1 h_i \theta - 1\right)}{\theta - 1}, \tag{13}$$
$$G(h_i) = \frac{a_1 h_i (\theta - 1)}{\theta + h_i\left(a_2 - 2 a_1 \theta - a_2 \theta\right) - 1}. \tag{14}$$
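A quick symbolic check (a sketch added here using SymPy, not part of the original text) confirms that the form (13) satisfies the conditions (11) of Theorem 1:

```python
import sympy as sp

h, a1, a2, th = sp.symbols('h a1 a2 theta', positive=True)

# Weight function (13)
G = a1*h*(a2*h*(th - 1) + th - 1 + 2*a1*h*th) / (th - 1)

G0 = G.subs(h, 0)
G1 = sp.diff(G, h).subs(h, 0)
G2 = sp.diff(G, h, 2).subs(h, 0)

print(sp.simplify(G0))                                                # 0
print(sp.simplify(G1 - a1))                                           # 0
print(sp.simplify(G2 - 2*(2*a1**2*th + a1*a2*th - a1*a2)/(th - 1)))   # 0
```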
Based on the values of the parameters $a_1$ and $a_2$, we present the following special members of the family (3):
(1) Combining $a_1 = 1$, $a_2 = 0$, and (13) in Expression (3), we have
$$t_{i+1} = u_i - \theta\left(h_i + \frac{2\theta h_i^2}{\theta - 1}\right)\frac{f(t_i)}{f'(t_i)}. \tag{15}$$
It is very important to remember that method (15) above is a Liu–Zhou Method [18]. This demonstrates that the Liu–Zhou method [18] is a special case of our family, which is given in (3).
(2) Combining $a_1 = 1$, $a_2 = 0$, and (14) in Expression (3), we have
$$t_{i+1} = u_i - \theta\,\frac{h_i (\theta - 1)}{\theta - 2 h_i \theta - 1}\,\frac{f(t_i)}{f'(t_i)}. \tag{16}$$
Again, it is important to notice that method (16) above is a Liu–Zhou method [18]. This demonstrates that Liu–Zhou methods [18] are a special case of our family (3).
(3) Using $a_1 = 1$, $a_2 = 1$, and (13) in (3), we have
$$t_{i+1} = u_i - \theta\left(h_i + \frac{(3\theta - 1) h_i^2}{\theta - 1}\right)\frac{f(t_i)}{f'(t_i)}. \tag{17}$$
(4) Using $a_1 = 1$, $a_2 = -1$, and (13) in (3), we obtain
$$t_{i+1} = u_i - \theta\left(h_i + \frac{(1 + \theta) h_i^2}{\theta - 1}\right)\frac{f(t_i)}{f'(t_i)}. \tag{18}$$
(5) Using $a_1 = 1$, $a_2 = -2$, and (13) in (3), we have
$$t_{i+1} = u_i - \theta\left(h_i + \frac{2 h_i^2}{\theta - 1}\right)\frac{f(t_i)}{f'(t_i)}. \tag{19}$$
(6) Using $a_1 = 1$, $a_2 = -3$, and (13) in (3), we have
$$t_{i+1} = u_i - \theta\left(h_i - \frac{(\theta - 3) h_i^2}{\theta - 1}\right)\frac{f(t_i)}{f'(t_i)}. \tag{20}$$
In each of the above cases, $u_i = t_i - \theta\,\dfrac{f(t_i)}{f'(t_i)}$. For the numerical work in the following, the proposed methods (15)–(20) are denoted by LZ-1, LZ-2, M-1, M-2, M-3, and M-4, respectively.
Remark 1.
The new family (3) only requires three functional evaluations (viz., $f(t_i)$, $f'(t_i)$, and $f'(u_i)$) per iteration to reach fourth-order convergence. According to the hypothesis of Kung and Traub [7], the methods therefore have optimal fourth-order convergence.
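For concreteness, a minimal self-contained sketch of method (15) (LZ-1) is given below (added for illustration; the precision, tolerance, and the use of the CSTR test function $f_1$ that appears later in Table 1 are assumptions made only for this example).

```python
from mpmath import mp, mpf
mp.dps = 50  # illustrative working precision (decimal digits)

def lz1(f, df, t0, theta, tol=mpf('1e-20'), max_iter=10):
    """Method (15) (LZ-1): u_i = t_i - theta*f(t_i)/f'(t_i),
    t_{i+1} = u_i - theta*(h + 2*theta*h**2/(theta-1))*f(t_i)/f'(t_i),
    where h = (f'(u_i)/f'(t_i))**(1/(theta-1))  (a1 = 1, a2 = 0)."""
    t = mpf(t0)
    for _ in range(max_iter):
        ft, dft = f(t), df(t)
        u = t - theta*ft/dft
        h = (df(u)/dft) ** (mpf(1)/(theta - 1))
        t_new = u - theta*(h + 2*theta*h**2/(theta - 1))*ft/dft
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# CSTR test function f_1 from Table 1 (double root near t = -2.85):
f1  = lambda t: t**4 + mpf('11.50')*t**3 + mpf('47.49')*t**2 + mpf('83.06325')*t + mpf('51.23266875')
df1 = lambda t: 4*t**3 + 3*mpf('11.50')*t**2 + 2*mpf('47.49')*t + mpf('83.06325')
print(lz1(f1, df1, t0=-3, theta=2))
```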

4. Numerical Simulation

This section applies the new methods LZ-1, LZ-2, M-1, M-2, M-3, and M-4 to a few basic science problems and illustrates their convergence behavior and computational effectiveness. Their performance is also compared with existing approaches; for this purpose, we chose the methods of Li et al. [16,17], Sharma–Sharma [24], Zhou et al. [30], Soleymani et al. [26], and Kansal et al. [15]. These methods are expressed as follows:
Li–Liao–Cheng method (LLC):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \frac{\theta(\theta - 2)\left(\frac{\theta}{\theta + 2}\right)^{-\theta} f'(u_i) - \theta^2 f'(t_i)}{f'(t_i) - \left(\frac{\theta}{\theta + 2}\right)^{-\theta} f'(u_i)}\;\frac{f(t_i)}{2 f'(t_i)}.$$
Li–Cheng–Neta method (LCN):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \alpha_1\,\frac{f(t_i)}{f'(u_i)} - \frac{f(t_i)}{\alpha_2 f'(t_i) + \alpha_3 f'(u_i)},$$
where
$$\alpha_1 = \frac{1}{2}\left(\frac{\theta}{\theta + 2}\right)^{\theta}\frac{\theta\left(\theta^4 + 4\theta^3 - 16\theta - 16\right)}{\theta^3 - 4\theta + 8}, \qquad \alpha_2 = \frac{\left(\theta^3 - 4\theta + 8\right)^2}{\theta\left(\theta^4 + 4\theta^3 - 4\theta^2 - 16\theta + 16\right)\left(\theta^2 + 2\theta - 4\right)}, \qquad \alpha_3 = \frac{\theta^2\left(\theta^3 - 4\theta + 8\right)}{\left(\frac{\theta}{\theta + 2}\right)^{\theta}\left(\theta^4 + 4\theta^3 - 4\theta^2 - 16\theta + 16\right)\left(\theta^2 + 2\theta - 4\right)}.$$
Sharma–Sharma method (SM):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \frac{\theta}{8}\left[\left(\theta^3 - 4\theta + 8\right) - (\theta + 2)^2\left(\frac{\theta}{\theta + 2}\right)^{\theta}\frac{f'(t_i)}{f'(u_i)}\left(2(\theta - 1) - (\theta + 2)\left(\frac{\theta}{\theta + 2}\right)^{\theta}\frac{f'(t_i)}{f'(u_i)}\right)\right]\frac{f(t_i)}{f'(t_i)}.$$
Zhou–Chen–Song method (ZCS):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \frac{\theta}{8}\left[\theta^3\left(\frac{\theta + 2}{\theta}\right)^{2\theta}\left(\frac{f'(u_i)}{f'(t_i)}\right)^2 - 2\theta^2(\theta + 3)\left(\frac{\theta + 2}{\theta}\right)^{\theta}\frac{f'(u_i)}{f'(t_i)} + \left(\theta^3 + 6\theta^2 + 8\theta + 8\right)\right]\frac{f(t_i)}{f'(t_i)}.$$
Soleymani–Babajee–Lotfi method (SBL):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \frac{f'(u_i)\, f(t_i)}{q_1\left(f'(u_i)\right)^2 + q_2\, f'(u_i) f'(t_i) + q_3\left(f'(t_i)\right)^2},$$
where
$$q_1 = \frac{1}{16}\,\theta^{\,3 - \theta}\,(\theta + 2)^{\theta}, \qquad q_2 = \frac{8 - \theta(\theta + 2)\left(\theta^2 - 2\right)}{8\theta}, \qquad q_3 = \frac{1}{16}\,(\theta - 2)\,\theta^{\,\theta - 1}\,(\theta + 2)^{3 - \theta}.$$
Kansal–Kanwar–Bhatia method (KKB):
$$u_i = t_i - \frac{2\theta}{\theta + 2}\,\frac{f(t_i)}{f'(t_i)}, \qquad t_{i+1} = t_i - \frac{\theta}{4}\, f(t_i)\left[1 + \frac{\theta^4 p^{2\theta}}{\left(p^{\theta} - 1\right)^2}\left(\frac{f'(u_i)}{f'(t_i)}\right)^2 - \frac{p^{\theta} - 1}{8\left(2 p^{\theta} + \theta\left(p^{\theta} - 1\right)\right)}\right] \times \left[\frac{4 - 2\theta + \theta^2\left(p^{\theta} - 1\right)}{f'(t_i)\, p^{\theta}} - \frac{\left(2 p^{\theta} + \theta\left(p^{\theta} - 1\right)\right)^2}{2 f'(t_i)\, f'(u_i)}\right],$$
where $p = \dfrac{\theta}{\theta + 2}$.
The various problems considered for numerical testing are shown in Table 1. Computations were carried out in Mathematica using multiple-precision arithmetic. The numerical results displayed in Table 2, Table 3, Table 4, Table 5 and Table 6 include the following: (i) the number of iterations ($i$) required to obtain the desired solution using the stopping criterion $|t_{i+1} - t_i| + |f(t_i)| < 10^{-350}$, (ii) the estimated errors $|t_{i+1} - t_i|$ for the first three iterations, and (iii) the computational order of convergence (COC). Table 7 displays the CPU time used in the execution of a program, as computed by the Mathematica command "TimeUsed[ ]". The computational order of convergence (COC) is calculated using Formula (2).
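As a rough sketch of such an experimental driver (added here for illustration; it is not the authors' script, and the precision and threshold below are deliberately milder assumptions than the $10^{-350}$ criterion quoted above), the loop records the successive differences of the kind reported in Tables 2–6:

```python
from mpmath import mp, mpf, nstr

mp.dps = 400          # illustrative working precision; the 10^-350 criterion needs even more
TOL = mpf('1e-100')   # illustrative threshold of the same form as in the text

def run(step, f, t0, max_iter=15):
    """Iterate t_{i+1} = step(t_i), recording |t_{i+1} - t_i| and stopping
    when |t_{i+1} - t_i| + |f(t_i)| < TOL."""
    t = mpf(t0)
    diffs = []
    for _ in range(max_iter):
        t_new = step(t)
        diffs.append(abs(t_new - t))
        done = abs(t_new - t) + abs(f(t)) < TOL
        t = t_new
        if done:
            break
    return t, diffs

# Example: modified Newton step (1) on f_1 from Table 1 (theta = 2).
f1  = lambda t: t**4 + mpf('11.50')*t**3 + mpf('47.49')*t**2 + mpf('83.06325')*t + mpf('51.23266875')
df1 = lambda t: 4*t**3 + 3*mpf('11.50')*t**2 + 2*mpf('47.49')*t + mpf('83.06325')
root, diffs = run(lambda t: t - 2*f1(t)/df1(t), f1, t0=-3)
print(nstr(root, 20), [nstr(d, 3) for d in diffs])
```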
The numerical data shown in Table 2, Table 3, Table 4, Table 5 and Table 6 demonstrate that the proposed approaches produce successive approximations of increasing precision, which illustrates the methods' excellent convergence behavior. The theoretical fourth-order convergence of the new methods is strongly supported by the computational order of convergence shown in the last column of each table. In addition, the CPU times displayed in Table 7 demonstrate that the new techniques are computationally more efficient than the considered existing techniques of the same order. We also show the time required by each method in bar graphs: Figure 1a–e give the graphical representation of the data in Table 7. Similar numerical testing carried out for many other problems has confirmed the above conclusions to a large extent.

5. Conclusions

We have proposed a fourth-order family of iterative algorithms that are computationally effective for identifying multiple roots in applied science problems. In accordance with the Kung–Traub conjecture, the methods converge to the required root with fourth-order convergence using three function evaluations per iteration, so the new family is optimal. The procedure is distinctive in the sense that no such generalized algorithm is available in the literature. The main benefit of the new family is that the existing Liu–Zhou approach is a special case of it. A convergence analysis was carried out, which proves fourth-order convergence under standard assumptions on the function whose zeros we are seeking. Numerical tests were performed to evaluate performance. Additionally, the new methods were applied to many other problems, further supporting their effectiveness.

Author Contributions

Conceptualization, methodology, S.K. and A.K.; software, writing, M.K. and M.V.; draft preparation, P.D.; formal analysis, validation, resources, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Azarmanesh, M.; Dejam, M.; Azizian, P.; Yesiloz, G.; Mohamad, A.A.; Sanati-Nezhad, A. Passive microinjection within high-throughput microfluidics for controlled actuation of droplets and cells. Sci. Rep. 2019, 9, 6723. [Google Scholar] [CrossRef]
  2. Dejam, M. Advective-diffusive-reactive solute transport due to non-Newtonian fluid flows in a fracture surrounded by a tight porous medium. Int. J. Heat Mass Transf. 2019, 128, 1307–1321. [Google Scholar] [CrossRef]
  3. Nikpoor, M.H.; Dejam, M.; Chen, Z.; Clarke, M. Chemical-Gravity-Thermal Diffusion Equilibrium in Two-Phase Non-isothermal Petroleum Reservoirs. Energy Fuel 2016, 30, 2021–2034. [Google Scholar] [CrossRef]
  4. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  6. Petkovic, M.S.; Neta, B.; Petkovic, L.D.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: New York, NY, USA, 2013. [Google Scholar]
  7. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  8. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. On developing fourth-order optimal families of methods for multiple roots and their dynamics. Appl. Math. Comput. 2015, 265, 520–532. [Google Scholar] [CrossRef]
  9. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R.; Kanwar, V. An optimal fourth-order family of methods for multiple roots and its dynamics. Numer. Algor. 2016, 71, 775–796. [Google Scholar] [CrossRef]
  10. Behl, R.; Zafar, F.; Alshormani, A.S.; Junjua, M.U.D.; Yasmin, N. An optimal eighth-order scheme for multiple zeros of univariate functions. Int. J. Comput. Meth. 2018, 16, 1843002. [Google Scholar] [CrossRef]
  11. Dong, C. A family of multipoint iterative functions for finding multiple roots of equations. Int. J. Comput. Math. 1987, 21, 363–367. [Google Scholar] [CrossRef]
  12. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef]
  13. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  14. Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880–892. [Google Scholar] [CrossRef]
  15. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
  16. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
  17. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef]
  18. Liu, B.; Zhou, X. A new family of fourth-order methods for multiple roots of nonlinear equations. Nonlinear Anal. Model. Control 2013, 21, 143–152. [Google Scholar] [CrossRef]
  19. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef]
  20. Neta, B. Extension of Murakami’s high-order non-linear solver to multiple roots. Int. J. Comput. Math. 2010, 87, 1023–1031. [Google Scholar] [CrossRef]
  21. Osada, N. An optimal multiple root-finding method of order three. J. Comput. Appl. Math. 1994, 51, 131–133. [Google Scholar] [CrossRef]
  22. Sharma, J.R.; Kumar, S. An excellent numerical technique for multiple roots. Math. Comput. Simul. 2021, 182, 316–324. [Google Scholar] [CrossRef]
  23. Sharma, J.R.; Kumar, S. A class of computationally efficient numerical algorithms for locating multiple zeros. Afr. Mat. 2021, 32, 853–864. [Google Scholar] [CrossRef]
  24. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  25. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  26. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  27. Soleymani, F.; Babajee, D.K.R. Computing multiple zeros using a class of quartically convergent methods. Alex. Eng. J. 2013, 52, 531–541. [Google Scholar] [CrossRef]
  28. Thukral, R. A new family of fourth-order iterative methods for solving nonlinear equations with multiple roots. J. Numer. Math. Stoch. 2014, 6, 37–44. [Google Scholar]
  29. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1983, 12, 329–335. [Google Scholar] [CrossRef]
  30. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef]
  31. Zhou, X.; Chen, X.; Song, Y. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comput. 2013, 219, 6030–6038. [Google Scholar] [CrossRef]
  32. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  33. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972. [Google Scholar]
  34. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  35. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M. An Optimal Fourth Order Derivative-Free Numerical Algorithm for Multiple Roots. Symmetry 2020, 12, 1038. [Google Scholar] [CrossRef]
Figure 1. (a) Bar diagram for function f1(t). (b) Bar diagram for function f2(t). (c) Bar diagram for function f3(t). (d) Bar diagram for function f4(t). (e) Bar diagram for function f5(t).
Table 1. Test functions.

Continuous stirred tank reactor problem [33]:
f_1(t) = t^4 + 11.50t^3 + 47.49t^2 + 83.06325t + 51.23266875;  root t^* = −2.85, θ = 2, t_0 = −3.

Standard test problem:
f_2(t) = t^4/12 + t^2/2 + t + e^t(t − 3) + sin(t) + 3;  root t^* = 0, θ = 3, t_0 = 0.6.

Isentropic supersonic flow problem [34]:
f_3(t) = [tan^{−1}(√5/2) − tan^{−1}(√(t^2 − 1)) + √6 (tan^{−1}(√((t^2 − 1)/6)) − tan^{−1}((1/2)√(5/6))) − 11/63]^3;  root t^* = 1.8411294068…, θ = 3, t_0 = 1.5.

Eigenvalue problem [10]:
f_4(t) = (t − 8)(t − 5)(t − 4)(t − 3)^4(t^2 − 1);  root t^* = 3, θ = 4, t_0 = 2.3.

Complex root problem [35]:
f_5(t) = t(t^2 + 1)(2e^{t^2 + 1} + t^2 − 1) cosh^4(πt/2);  root t^* = i, θ = 6, t_0 = 1.1i.
Table 2. Performance of methods for function f_1(t).

Methods   i   |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      COC
LLC       4   1.18 × 10^−5    2.22 × 10^−22    2.75 × 10^−89    4.000
LCN       4   1.18 × 10^−5    2.22 × 10^−22    2.75 × 10^−89    4.000
SM        4   1.18 × 10^−5    2.23 × 10^−22    2.80 × 10^−89    4.000
ZCS       4   1.19 × 10^−5    2.26 × 10^−22    2.98 × 10^−89    4.000
SBL       4   1.18 × 10^−5    2.22 × 10^−22    2.75 × 10^−89    4.000
KKB       4   1.18 × 10^−5    2.18 × 10^−22    2.59 × 10^−89    4.000
LZ-1      4   1.26 × 10^−5    2.97 × 10^−22    9.04 × 10^−89    4.000
LZ-2      4   1.17 × 10^−5    2.16 × 10^−22    2.48 × 10^−89    4.000
M-1       4   1.32 × 10^−5    3.54 × 10^−22    1.85 × 10^−89    4.000
M-2       4   1.22 × 10^−5    2.58 × 10^−22    5.14 × 10^−89    4.000
M-3       4   1.19 × 10^−5    2.34 × 10^−22    3.43 × 10^−89    4.000
M-4       4   1.18 × 10^−5    2.20 × 10^−22    2.69 × 10^−89    4.000
Table 3. Performance of methods for function f_2(t).

Methods   i   |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      COC
LLC       5   2.02 × 10^−4    2.11 × 10^−17    2.51 × 10^−69    4.000
LCN       5   2.02 × 10^−4    2.12 × 10^−17    2.54 × 10^−69    4.000
SM        5   2.02 × 10^−4    2.12 × 10^−17    2.60 × 10^−69    4.000
ZCS       5   2.02 × 10^−4    2.15 × 10^−17    2.75 × 10^−69    4.000
SBL       5   2.02 × 10^−4    2.13 × 10^−17    2.62 × 10^−69    4.000
KKB       5   2.02 × 10^−4    2.08 × 10^−17    2.31 × 10^−69    4.000
LZ-1      5   1.39 × 10^−4    5.16 × 10^−18    9.75 × 10^−72    4.000
LZ-2      5   1.37 × 10^−4    3.05 × 10^−18    7.40 × 10^−73    4.000
M-1       5   1.40 × 10^−4    6.93 × 10^−18    4.12 × 10^−71    4.000
M-2       5   1.38 × 10^−4    3.96 × 10^−18    2.68 × 10^−72    4.000
M-3       5   1.38 × 10^−4    3.27 × 10^−18    1.05 × 10^−72    4.000
M-4       5   1.37 × 10^−4    3.05 × 10^−18    7.40 × 10^−73    4.000
Table 4. Performance of methods for function f_3(t).

Methods   i   |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      COC
LLC       5   8.19 × 10^−4    2.64 × 10^−15    2.87 × 10^−61    4.000
LCN       5   8.19 × 10^−4    2.61 × 10^−15    2.73 × 10^−61    4.000
SM        5   8.19 × 10^−4    2.55 × 10^−15    2.44 × 10^−61    4.000
ZCS       5   8.19 × 10^−4    2.40 × 10^−15    1.79 × 10^−61    4.000
SBL       5   8.19 × 10^−4    2.53 × 10^−15    2.34 × 10^−61    4.000
KKB       5   8.19 × 10^−4    2.85 × 10^−15    4.22 × 10^−61    4.000
LZ-1      5   3.99 × 10^−5    6.65 × 10^−20    5.13 × 10^−79    4.000
LZ-2      5   3.97 × 10^−5    3.86 × 10^−20    3.44 × 10^−80    4.000
M-1       5   4.01 × 10^−5    8.91 × 10^−20    2.17 × 10^−78    4.000
M-2       5   3.98 × 10^−5    5.09 × 10^−20    1.36 × 10^−79    4.000
M-3       5   3.97 × 10^−5    4.16 × 10^−20    5.02 × 10^−80    4.000
M-4       5   3.97 × 10^−5    3.86 × 10^−20    3.44 × 10^−80    4.000
Table 5. Performance of methods for function f_4(t).

Methods   i   |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      COC
LLC       5   1.42 × 10^−3    6.70 × 10^−13    3.33 × 10^−50    4.000
LCN       5   1.42 × 10^−3    6.72 × 10^−13    3.37 × 10^−50    4.000
SM        5   1.42 × 10^−3    6.76 × 10^−13    3.48 × 10^−50    4.000
ZCS       5   1.42 × 10^−3    6.83 × 10^−13    3.67 × 10^−50    4.000
SBL       5   1.42 × 10^−3    6.87 × 10^−13    3.78 × 10^−50    4.000
KKB       5   1.42 × 10^−3    6.53 × 10^−13    2.93 × 10^−50    4.000
LZ-1      5   7.07 × 10^−4    4.35 × 10^−14    6.26 × 10^−55    4.000
LZ-2      5   7.02 × 10^−4    1.94 × 10^−14    1.12 × 10^−56    4.000
M-1       5   7.10 × 10^−4    6.59 × 10^−14    4.90 × 10^−54    4.000
M-2       5   7.04 × 10^−4    2.86 × 10^−14    7.88 × 10^−56    4.000
M-3       5   7.02 × 10^−4    2.08 × 10^−14    1.62 × 10^−56    4.000
M-4       5   7.02 × 10^−4    1.97 × 10^−14    1.23 × 10^−56    4.000
Table 6. Performance of methods for function f_5(t).

Methods   i   |t_2 − t_1|     |t_3 − t_2|      |t_4 − t_3|      COC
LLC       5   1.75 × 10^−5    3.01 × 10^−20    2.66 × 10^−79    4.000
LCN       5   1.75 × 10^−5    3.02 × 10^−20    2.68 × 10^−79    4.000
SM        5   1.75 × 10^−5    3.03 × 10^−20    2.73 × 10^−79    4.000
ZCS       5   1.75 × 10^−5    3.05 × 10^−20    2.79 × 10^−79    4.000
SBL       5   1.76 × 10^−5    3.16 × 10^−20    3.28 × 10^−79    4.000
KKB       5   1.74 × 10^−5    2.94 × 10^−20    2.40 × 10^−79    4.000
LZ-1      5   8.39 × 10^−6    9.14 × 10^−22    1.29 × 10^−85    4.000
LZ-2      5   6.91 × 10^−6    2.76 × 10^−22    7.01 × 10^−88    4.000
M-1       5   9.81 × 10^−6    2.29 × 10^−21    6.85 × 10^−84    4.000
M-2       5   7.43 × 10^−6    4.34 × 10^−22    5.06 × 10^−87    4.000
M-3       5   6.95 × 10^−6    2.87 × 10^−22    8.33 × 10^−88    4.000
M-4       5   7.01 × 10^−6    3.02 × 10^−22    1.04 × 10^−87    4.000
Table 7. CPU time consumed by methods.

Methods   f_1(t)   f_2(t)   f_3(t)   f_4(t)   f_5(t)
LLC       0.1562   0.8491   1.8570   0.4217   1.4506
LCN       0.1557   0.8897   2.0283   0.4534   2.2164
SM        0.1884   0.8433   2.0442   0.4681   2.1681
ZCS       0.1714   0.8265   2.0129   0.4575   2.2165
SBL       0.1873   1.0342   2.3187   0.4567   2.4653
KKB       0.1723   0.8587   2.0284   0.4052   2.0287
LZ-1      0.0392   0.6867   1.5138   0.2029   1.2015
LZ-2      0.0399   0.7221   1.6533   0.2347   1.3261
M-1       0.0648   0.7024   1.6592   0.2184   1.3326
M-2       0.0637   0.6870   1.6387   0.2392   1.3427
M-3       0.0461   0.7107   1.6534   0.2501   1.3419
M-4       0.0624   0.7231   1.6239   0.2562   1.3426
