Article

Optimal Fourth-Order Methods for Multiple Zeros: Design, Convergence Analysis and Applications

1 Department of Mathematics, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
2 Department of Mathematics, Sant Longowal Institute of Engineering and Technology, Longowal 148106, India
3 Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
4 Institute of Doctoral Studies, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
* Author to whom correspondence should be addressed.
Axioms 2024, 13(3), 143; https://doi.org/10.3390/axioms13030143
Submission received: 5 February 2024 / Revised: 20 February 2024 / Accepted: 22 February 2024 / Published: 23 February 2024
(This article belongs to the Special Issue Applied Mathematics and Numerical Analysis: Theory and Applications)

Abstract: Nonlinear equations are frequently encountered in many areas of applied science and engineering, and they require efficient numerical methods for their solution. To ensure quick and precise root approximation, this study presents derivative-free iterative methods for finding multiple zeros with an optimal fourth-order convergence rate. Furthermore, the study explores applications of the methods in both real-life and academic contexts. In particular, we examine the convergence of the methods by applying them to several problems, namely the Van der Waals equation of state, Planck’s law of radiation, the Manning equation for isentropic supersonic flow and some academic problems. Numerical results reveal that the proposed derivative-free methods are more efficient and consistent than existing methods.

1. Introduction

Diverse areas of numerical analysis and optimization present challenges for the development of derivative-free methods for solving nonlinear equations with simple and multiple roots. Conventional iterative methods direct the search for solutions using first-order derivatives (in multi-point classes) or even higher-order derivatives [1,2,3,4]. However, in real-world scenarios, evaluating derivatives can be computationally costly, impractical, or even impossible due to the absence of formal mathematical formulations. This limitation makes it challenging to apply traditional approaches to complex systems and real-world problems. Derivative-free techniques, which rely only on function evaluations, address these challenges. Constructing such algorithms with high efficiency, robustness and fast convergence nevertheless remains a significant challenge, so new approaches that solve such problems without explicit derivatives must be developed. The one-point modified Traub–Steffensen method [5,6] is one of the most well-known derivative-free methods for multiple roots; it is given by
t_{k+1} = t_k − n ϝ(t_k)/ϝ[u_k, t_k],   (1)
where n is the known multiplicity of the root α, i.e., ϝ^{(j)}(α) = 0 for j = 0, 1, 2, …, n − 1 and ϝ^{(n)}(α) ≠ 0. Here, u_k = t_k + b ϝ(t_k) with b ∈ ℝ∖{0}, and ϝ[u_k, t_k] = (ϝ(u_k) − ϝ(t_k))/(u_k − t_k) is a first-order divided difference.
Multiple roots can also be used to assess the stability of a system. A dynamical system with several roots has multiple equilibrium points, and determining which of these points are stable helps to explain how the system behaves in different situations. The multiple roots of nonlinear equations thus provide valuable information in a variety of disciplines, such as stability analysis, system analysis and optimization. Finding several roots helps us to better understand a problem and develop better solutions.
Very recently, some methods with or without derivatives have been presented in the literature (see [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30]). In this article, we devise a two-step derivative-free technique that attains the fourth order of convergence. The suggested approach uses just three function evaluations per iteration, making it optimal in the sense of the Kung–Traub hypothesis [31]. The methodology is based on the Traub–Steffensen method (1), which is further modified in the second stage by a Traub–Steffensen-like iteration. The methods are applied to real-life problems, i.e., Planck’s law of radiation [1], the Van der Waals equation of state [1], the Manning equation for isentropic supersonic flow [2] and some academic problems [32,33].

2. Development of Method

Based on the Traub–Steffensen method (1), we propose the following iterative scheme for n > 1:
v_k = t_k − n ϝ(t_k)/ϝ[u_k, t_k],
t_{k+1} = v_k − n Q(x_k) ϝ(t_k)/(ϝ[v_k, t_k] + ϝ[v_k, u_k]),  k = 0, 1, 2, …,   (2)
where u_k = t_k + b ϝ(t_k), b ∈ ℝ∖{0}, Q : ℂ → ℂ is a weight function and x_k = (ϝ(v_k)/ϝ(t_k))^{1/n}. Note that x_k is a one-to-n multi-valued function, so we consider its principal analytic branch; that is, we treat x_k as the principal root, given by x_k = exp((1/n) log(ϝ(v_k)/ϝ(t_k))), with log(ϝ(v_k)/ϝ(t_k)) = ln|ϝ(v_k)/ϝ(t_k)| + i Arg(ϝ(v_k)/ϝ(t_k)) for −π < Arg(ϝ(v_k)/ϝ(t_k)) ≤ π.
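The principal branch described above maps directly to complex arithmetic; a minimal sketch (the helper name is ours) uses Python's cmath, whose log already returns the principal branch with −π < Arg z ≤ π:

```python
import cmath

# Principal n-th root used for x_k = (f(v_k)/f(t_k))^(1/n); a sketch.
# cmath.log returns the principal branch (-pi < Arg z <= pi), so
# exp(log(z)/n) is exactly the principal root described in the text.
def principal_nth_root(z, n):
    if z == 0:
        return 0j
    return cmath.exp(cmath.log(z) / n)

print(principal_nth_root(16, 2))    # positive ratio: ordinary square root
print(principal_nth_root(-8, 3))    # complex branch, not the real root -2
```

For a negative real ratio the principal cube root is 1 + √3·i rather than −2, which is exactly the branch choice the scheme relies on.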
The convergence is discussed separately for different cases depending upon the multiplicity n. First, we consider the case n = 2 and show that the following holds.
Theorem 1.
Suppose t = α is a multiple zero of ϝ with n = 2 and t_0 is an initial guess sufficiently close to the root α. Suppose ϝ : D ⊆ ℂ → ℂ is analytic in a region enclosing the zero α. Then, Scheme (2) has a convergence order of four if Q(0) = 0, Q′(0) = 1 and Q″(0) = 4.
Proof. 
Assume that at the k-th stage the error is e_k = t_k − α. Using the Taylor expansion of ϝ(t_k) about α together with ϝ(α) = 0, ϝ′(α) = 0 and ϝ^{(2)}(α) ≠ 0, we have
ϝ(t_k) = (ϝ^{(2)}(α)/2!) e_k^2 (1 + M_1 e_k + M_2 e_k^2 + M_3 e_k^3 + M_4 e_k^4 + ⋯),   (3)
where M_m = (2!/(2 + m)!) ϝ^{(2+m)}(α)/ϝ^{(2)}(α) for m ∈ ℕ.
Similarly, expanding ϝ(u_k) about α,
ϝ(u_k) = (ϝ^{(2)}(α)/2!) e_{u_k}^2 (1 + M_1 e_{u_k} + M_2 e_{u_k}^2 + M_3 e_{u_k}^3 + M_4 e_{u_k}^4 + ⋯),   (4)
where e_{u_k} = u_k − α = e_k + b (ϝ^{(2)}(α)/2!) e_k^2 (1 + M_1 e_k + M_2 e_k^2 + M_3 e_k^3 + ⋯).
Then, the first step of (2) yields
e_{v_k} = v_k − α = (1/2)(b ϝ^{(2)}(α)/2 + M_1) e_k^2 − (1/16)((b ϝ^{(2)}(α))^2 − 8 b ϝ^{(2)}(α) M_1 + 12 M_1^2 − 16 M_2) e_k^3 + O(e_k^4).   (5)
Expanding ϝ(v_k) about α, it follows that
ϝ(v_k) = (ϝ^{(2)}(α)/2!) e_{v_k}^2 (1 + M_1 e_{v_k} + M_2 e_{v_k}^2 + M_3 e_{v_k}^3 + ⋯).   (6)
Using (3) and (6) in the expression of x_k and simplifying, we have
x_k = (1/2)(b ϝ^{(2)}(α)/2 + M_1) e_k − (1/16)((b ϝ^{(2)}(α))^2 − 6 b ϝ^{(2)}(α) M_1 + 16(M_1^2 − M_2)) e_k^2 + (1/64)((b ϝ^{(2)}(α))^3 − 22 b ϝ^{(2)}(α) M_1^2 + 4(29 M_1^3 + 14 b ϝ^{(2)}(α) M_2) − 2 M_1 (3 (b ϝ^{(2)}(α))^2 + 104 M_2) + 96 M_3) e_k^3 + O(e_k^4).   (7)
The Taylor expansion of the weight function Q(x_k) in a neighborhood of the origin up to third-order terms is given by
Q(x_k) ≈ Q(0) + x_k Q′(0) + (1/2) x_k^2 Q″(0) + (1/6) x_k^3 Q‴(0).   (8)
Inserting (3)–(8) in the last step of (2) and then performing some simple calculations yield
e_{k+1} = Q(0) e_k + (1/4)(b ϝ^{(2)}(α)(1 + 2Q(0) − Q′(0)) + 2(1 + Q(0) − Q′(0)) M_1) e_k^2 − (1/32)((b ϝ^{(2)}(α))^2 (2 + 10Q(0) − 6Q′(0) + Q″(0)) − 4 b ϝ^{(2)}(α)(4 + 4Q(0) − 4Q′(0)) M_1 + 4(6 + 8Q(0) − 10Q′(0) + Q″(0)) M_1^2 − 32(1 + Q(0) − Q′(0)) M_2) e_k^3 + ψ e_k^4 + O(e_k^5),   (9)
where ψ = ψ(b, Q(0), Q′(0), Q″(0), Q‴(0), M_1, M_2, M_3). The expression of ψ is not written out explicitly since it is very lengthy.
Equating the coefficients of e_k, e_k^2 and e_k^3 simultaneously to zero and solving the resulting equations, one obtains
Q(0) = 0, Q′(0) = 1 and Q″(0) = 4.   (10)
Now, by using (10) in (9), we have
e_{k+1} = (1/384)(b ϝ^{(2)}(α) + 2 M_1)((b ϝ^{(2)}(α))^2 (Q‴(0) − 3) + 4 b ϝ^{(2)}(α)(Q‴(0) − 9) M_1 + 4(Q‴(0) − 27) M_1^2 + 48 M_2) e_k^4 + O(e_k^5).   (11)
Hence, Theorem 1 is proved. □
Now, we state Theorem 2 for the case n = 3 without proof, since the proof is analogous to that of Theorem 1.
Theorem 2.
By adopting the statement of Theorem 1, method (2) for n = 3 has at least a convergence order of four if Q(0) = 0, Q′(0) = 2/3 and Q″(0) = 8/3. Then, the error equation corresponding to n = 3 is given by
e_{k+1} = (1/108)((Q‴(0) − 24) N_1^3 + 12 N_1 N_2) e_k^4 + O(e_k^5),
where N_m = (n!/(n + m)!) ϝ^{(n+m)}(α)/ϝ^{(n)}(α) for m ∈ ℕ.
Remark 1.
It is important to note that the parameter b, used in u_k = t_k + b ϝ(t_k), appears in the error equation for the case n = 2. On the other hand, we have observed that for n = 3 it occurs only in the terms of order e_k^5 and higher. We need not obtain the terms of order e_k^5 and higher to prove the fourth-order convergence of a method. We shall prove these facts in the next section as a generalized result.

3. Generalized Result

For the multiplicity n ≥ 3, we state the following theorem for Scheme (2):
Theorem 3.
Using the statement of Theorem 1, Scheme (2) for the case n ≥ 3 has at least a convergence order of four if Q(0) = 0, Q′(0) = 2/n and Q″(0) = 8/n. Moreover, the error equation of Scheme (2) is given by
e_{k+1} = (1/(12 n^2))(((n − 2) Q‴(0) − (n^2 + 8n − 9)) T_1^3 + 12 (n − 2) T_1 T_2) e_k^4 + O(e_k^5),
where T_m = (n!/(m + n)!) ϝ^{(m+n)}(α)/ϝ^{(n)}(α) for m ∈ ℕ.
Proof. 
Keeping in mind that ϝ^{(j)}(α) = 0, j = 0, 1, 2, …, n − 1, and ϝ^{(n)}(α) ≠ 0, the expansion of ϝ(t_k) about α is
ϝ(t_k) = (ϝ^{(n)}(α)/n!) e_k^n (1 + T_1 e_k + T_2 e_k^2 + T_3 e_k^3 + T_4 e_k^4 + ⋯).   (12)
Similarly, for ϝ(u_k) about α,
ϝ(u_k) = (ϝ^{(n)}(α)/n!) e_{u_k}^n (1 + T_1 e_{u_k} + T_2 e_{u_k}^2 + T_3 e_{u_k}^3 + T_4 e_{u_k}^4 + ⋯),   (13)
where e_{u_k} = u_k − α = e_k + b (ϝ^{(n)}(α)/n!) e_k^n (1 + T_1 e_k + T_2 e_k^2 + T_3 e_k^3 + ⋯).
From the first step of Equation (2),
e_{v_k} = v_k − α = (T_1/n) e_k^2 + (1/n^2)(2n T_2 − (1 + n) T_1^2) e_k^3 + (1/n^3)((1 + n)^2 T_1^3 − n(4 + 3n) T_1 T_2 + 3 n^2 T_3) e_k^4 + O(e_k^5).   (14)
Expansion of ϝ(v_k) around α yields
ϝ(v_k) = (ϝ^{(n)}(α)/n!) e_{v_k}^n (1 + T_1 e_{v_k} + T_2 e_{v_k}^2 + T_3 e_{v_k}^3 + T_4 e_{v_k}^4 + ⋯).   (15)
Using (12) and (15) in the expression of x_k, we have
x_k = (T_1/n) e_k + (1/n^2)(2n T_2 − (2 + n) T_1^2) e_k^2 + (1/(2 n^3))((7 + 7n + 2n^2) T_1^3 − 2n(7 + 3n) T_1 T_2 + 6 n^2 T_3) e_k^3 + O(e_k^4).   (16)
Inserting (8) and (12)–(16) in the second step of (2), we then have
e_{k+1} = (n/2) Q(0) e_k + (1/(2n))(2 + n Q(0) − n Q′(0)) T_1 e_k^2 − (1/(4 n^2))((4(n + 1) + 2n(n + 1) Q(0) − 2n(n + 3) Q′(0) + n Q″(0)) T_1^2 − 4n(2 + n Q(0) − n Q′(0)) T_2) e_k^3 + ϕ e_k^4 + O(e_k^5),   (17)
where ϕ = ϕ(Q(0), Q′(0), Q″(0), Q‴(0), T_1, T_2, T_3).
Set the coefficients of e_k, e_k^2 and e_k^3 equal to zero. Then, solving the resulting equations, we obtain
Q(0) = 0, Q′(0) = 2/n and Q″(0) = 8/n.   (18)
Then, error Equation (17) is given by
e_{k+1} = (1/(12 n^2))(((n − 2) Q‴(0) − (n^2 + 8n − 9)) T_1^3 + 12 (n − 2) T_1 T_2) e_k^4 + O(e_k^5).   (19)
Thus, the theorem is proved. □
Remark 2.
The proposed Scheme (2) reaches fourth-order convergence provided that the conditions of Theorems 1–3 are satisfied. Only three function evaluations, ϝ(t_k), ϝ(u_k) and ϝ(v_k), are used per iteration to achieve this convergence rate. Consequently, Scheme (2) is optimal in the sense of the Kung–Traub hypothesis [31].

Some Special Cases

Based on the forms of weight function Q(x_k) that meet the requirements of Theorems 1–3, we can develop numerous iterative methods as special cases of the family (2). We restrict ourselves, however, to simple forms such as low-degree polynomials or straightforward rational functions. These choices should be such that the methods converge to the root with fourth order for n ≥ 2. Keeping this in view, the following are some simple forms:
(1) Q(x_k) = 2 x_k (1 + 2 x_k)/n,  (2) Q(x_k) = 2 x_k/(n − 2 n x_k),  (3) Q(x_k) = 2 x_k (1 + 2 x_k)/(n − 4 x_k^2).
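As a quick sanity check, the three forms can be verified numerically against the conditions Q(0) = 0, Q′(0) = 2/n and Q″(0) = 8/n of Theorem 3; this sketch (not part of the paper) uses finite differences at a sample multiplicity n = 3, with names of our choosing:

```python
# Finite-difference check (a sketch) that each candidate weight function
# satisfies Q(0) = 0, Q'(0) = 2/n, Q''(0) = 8/n.
n = 3                                      # sample multiplicity
forms = [
    lambda x: 2*x*(1 + 2*x)/n,             # form (1)
    lambda x: 2*x/(n - 2*n*x),             # form (2)
    lambda x: 2*x*(1 + 2*x)/(n - 4*x**2),  # form (3)
]
h = 1e-5
for Q in forms:
    q0 = Q(0.0)                                # should be 0
    d1 = (Q(h) - Q(-h)) / (2*h)                # central difference ~ Q'(0)
    d2 = (Q(h) - 2*Q(0.0) + Q(-h)) / h**2      # second difference ~ Q''(0)
    print(q0, round(d1, 6), round(d2, 3))      # expect ~0, ~2/3, ~8/3
```

All three forms agree at the origin up to second order, which is why they generate methods of the same (fourth) convergence order.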
The corresponding method to each of the aforementioned forms is as follows:
  • Method 1 (M1):
    t_{k+1} = v_k − 2 x_k (1 + 2 x_k) ϝ(t_k)/(ϝ[v_k, t_k] + ϝ[v_k, u_k]).
  • Method 2 (M2):
    t_{k+1} = v_k − (2 x_k/(1 − 2 x_k)) ϝ(t_k)/(ϝ[v_k, t_k] + ϝ[v_k, u_k]).
  • Method 3 (M3):
    t_{k+1} = v_k − (2 n x_k (1 + 2 x_k)/(n − 4 x_k^2)) ϝ(t_k)/(ϝ[v_k, t_k] + ϝ[v_k, u_k]).
Note that in all the above cases, we have
v_k = t_k − n ϝ(t_k)/ϝ[u_k, t_k],  u_k = t_k + b ϝ(t_k),  b ∈ ℝ∖{0}.

4. Numerical Results

The proposed methods M1, M2 and M3 are applied to solve some practical and academic problems displayed in Table 1, which not only demonstrates the methods in practice but also serves to verify the validity of the theoretical results that we have developed. The chosen practical problems are the Van der Waals equation of state [1], Planck’s law of radiation [1] and the Manning equation for isentropic supersonic flow [2]. Let us describe them briefly. The Van der Waals equation of state aims to capture the differences between the behavior of real gas molecules and ideal gas molecules. Planck’s law, a foundational equation in quantum physics, describes the spectral distribution of energy radiated by a black body at a given temperature; it explains the radiation of an observed black body and how temperature influences it. The empirical Manning equation in open-channel flow is used to calculate the flow rate of a fluid through a channel, taking the shape, slope and roughness of the channel into account. This equation is primarily utilized in isentropic supersonic flow, where the fluid travels at a high velocity and the pressure waves it generates move at or above the speed of sound.
New methods are tested by taking the parameter values b = −0.5 and −1. To verify the theoretical order of convergence, we use the following formula to obtain the approximated computational order of convergence (ACOC) (see [34]):
ACOC = ln|(t_{k+2} − α)/(t_{k+1} − α)| / ln|(t_{k+1} − α)/(t_k − α)|, for each k = 1, 2, …   (20)
Performance is compared with some well-known optimal fourth-order methods with and without first derivatives. In all the considered methods, multiplicity is known a priori. For ready reference, these methods are expressed as follows:
  • Method by Li et al. [23] (LLC):
    v_k = t_k − (2n/(n + 2)) ϝ(t_k)/ϝ′(t_k),
    t_{k+1} = t_k − (n^2 ϝ′(t_k) − n(n − 2)(n/(n + 2))^{−n} ϝ′(v_k)) ϝ(t_k)/(2 ϝ′(t_k)((n/(n + 2))^{−n} ϝ′(v_k) − ϝ′(t_k))).
  • Method by Li et al. [24] (LCN):
    v_k = t_k − (2n/(n + 2)) ϝ(t_k)/ϝ′(t_k),
    t_{k+1} = t_k − α_1 ϝ(t_k)/ϝ′(v_k) − ϝ(t_k)/(α_2 ϝ′(t_k) + α_3 ϝ′(v_k)),
    where
    α_1 = −(1/2)(n/(n + 2))^n n(n^4 + 4n^3 − 16n − 16)/(n^3 − 4n + 8),
    α_2 = −(n^3 − 4n + 8)^2/(n(n^4 + 4n^3 − 4n^2 − 16n + 16)(n^2 + 2n − 4)),
    α_3 = n^2(n^3 − 4n + 8)/((n/(n + 2))^n (n^4 + 4n^3 − 4n^2 − 16n + 16)(n^2 + 2n − 4)).
  • Method by Sharma and Sharma [26] (SSM):
    v_k = t_k − (2n/(n + 2)) ϝ(t_k)/ϝ′(t_k),
    t_{k+1} = t_k − (n/8)((n^3 − 4n + 8) − (n + 2)^2 (n/(n + 2))^n (ϝ′(t_k)/ϝ′(v_k)) × (2(n − 1) − (n + 2)(n/(n + 2))^n ϝ′(t_k)/ϝ′(v_k))) ϝ(t_k)/ϝ′(t_k).
  • Method by Zhou et al. [30] (ZCS):
    v_k = t_k − (2n/(n + 2)) ϝ(t_k)/ϝ′(t_k),
    t_{k+1} = t_k − (n/8)(n^3 ((n + 2)/n)^{2n} (ϝ′(v_k)/ϝ′(t_k))^2 − 2n^2(n + 3)((n + 2)/n)^n (ϝ′(v_k)/ϝ′(t_k)) + (n^3 + 6n^2 + 8n + 8)) ϝ(t_k)/ϝ′(t_k).
  • Method by Kansal et al. [16] (KKB):
    v_k = t_k − (2n/(n + 2)) ϝ(t_k)/ϝ′(t_k),
    t_{k+1} = t_k − n 4 ϝ ( t k ) 1 + n 4 p 2 n p n 1 ϝ ( v k ) ϝ ( t k ) 2 ( p n 1 ) 8 ( 2 p n + n ( p n 1 ) ) × 4 2 n + n 2 ( p n 1 ) ϝ ( t k ) p n ( 2 p n + n ( p n 1 ) ) 2 ϝ ( t k ) ϝ ( v k ) ,
    where p = n/(n + 2).
  • Method by Sharma et al. [27] (SKJ):
    v_k = t_k − n ϝ(t_k)/ϝ[u_k, t_k],
    t_{k+1} = v_k − (x_k + n x_k^2 + (n − 1) w_k + n w_k x_k) ϝ(t_k)/ϝ[u_k, t_k],
    where w_k = (ϝ(v_k)/ϝ(u_k))^{1/n}.
  • Method by Behl et al. [14] (BAM):
    v_k = t_k − n ϝ(t_k)/ϝ[u_k, t_k],
    t_{k+1} = v_k − n (w_k + x_k^2/(1 − 2 x_k)) ϝ(t_k)/ϝ[u_k, t_k],
    where w_k = (ϝ(v_k)/ϝ(u_k))^{1/n}.
  • Method by Kumar et al. [20] (KKS):
    v_k = t_k − n ϝ(t_k)/ϝ[u_k, t_k],
    t_{k+1} = v_k − ((n + 2) x_k/(1 − 2 x_k)) ϝ(t_k)/(ϝ[u_k, t_k] + ϝ[v_k, u_k]).
Multiple-precision arithmetic is used in all computations, carried out with the programming tool Mathematica [35]. Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 present numerical data such as the following (in the tables, a(−b) stands for a × 10^{−b}):
(1) The multiplicity n of the relevant function in Table 2.
(2) The number of iterations (k) needed to obtain a solution with |t_k − t_{k−1}| + |ϝ(t_k)| < 10^{−100}.
(3) The estimated error |t_k − t_{k−1}| in the last three iterations.
(4) The approximated computational order of convergence (ACOC) utilizing (20).
(5) “D”, which represents the divergent nature of the iterative methods in Table 8.
The following formula,
n = (t_k − t_0)/(d_k − d_0),  where d_k = ϝ(t_k)/g_k,  g_k = (ϝ(t_k + ϝ(t_k)) − ϝ(t_k))/ϝ(t_k),
is used to compute the multiplicity of the functions considered above. Using the new method M1, we applied this formula to obtain the multiplicities shown in Table 2. Methods M2 and M3 can be utilized equally well.
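The multiplicity estimate above is easy to sketch in code; the helper name and the test function (chosen so that both sample points lie close to the root, where the estimate is meaningful) are our own:

```python
# Sketch of the multiplicity estimate n ~ (t_k - t_0)/(d_k - d_0), where
# d_k = f(t_k)/g_k and g_k is a derivative-free (Steffensen-style) slope.
def estimate_multiplicity(f, t0, tk):
    def d(t):
        ft = f(t)
        g = (f(t + ft) - ft) / ft      # g_k ~ f'(t_k) when f(t_k) is small
        return ft / g                  # d_k ~ (t - root)/n near the root
    return (tk - t0) / (d(tk) - d(t0))

# Hypothetical test function with a triple root at t = 1.
f = lambda t: (t - 1.0)**3 * (t + 2.0)
n_est = estimate_multiplicity(f, 1.02, 1.01)
```

Since d_k behaves like (t_k − α)/n near the root, the quotient of differences cancels the unknown root α and leaves the multiplicity; rounding n_est to the nearest integer gives the value used by the methods.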
It can be seen from the numerical results displayed in Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 that the new methods exhibit consistent behavior in all six problems, while the existing methods do not show such behavior. The existing approaches may either converge slowly to the root or fail to converge. Since our methods M1 (b = −0.5), M3 (b = −0.5) and M3 (b = −1) provide outcomes in many circumstances where the existing methods fail, they can be regarded as superior in this regard. It should be noted that the new methods use first-order divided differences in the denominator; a drawback of the methods is therefore that if, at some stage, the denominator becomes very small or zero, the methods may fail to converge. However, such instances are rare in practice.

5. Conclusions

In this study, we have proposed optimal derivative-free fourth-order numerical methods for solving nonlinear equations with multiple roots. The convergence has been investigated under standard hypotheses, and the order of convergence has been shown to be four. The new algorithms have been applied to nonlinear equations arising in real-life situations as well as to academic problems, and they have been compared with existing methods of the same order. Numerical results show that the new derivative-free methods are strong rivals to the well-known fourth-order methods. There is much more to be done in the future. For example, our future work will include exploring efficient iterative methods of still higher orders of convergence and their analyses. Another direction is to develop efficient methods for solving systems of nonlinear equations and their applications in diverse domains of applied science and engineering.

Author Contributions

Conceptualization, methodology, software, writing—original draft preparation, S.K. and J.R.S.; formal analysis, validation, resources, L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data is contained within the article.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
  2. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  3. Amorós, C.; Argyros, I.K.; González, R.; Magreñán, Á.A.; Orcos, L.; Sarría, I. Study of a High Order Family: Local Convergence and Dynamics. Mathematics 2019, 7, 225. [Google Scholar] [CrossRef]
  4. Maroju, P.; Magreñán, Á.A.; Sarría, Í.; Kumar, A. Local convergence of fourth and fifth order parametric family of iterative methods in Banach spaces. J. Math. Chem. 2020, 58, 686–705. [Google Scholar] [CrossRef]
  5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  6. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr 1933, 16, 64–72. [Google Scholar] [CrossRef]
  7. Zafar, F.; Iqbal, S.; Nawaz, T. A Steffensen type optimal eighth order multiple root finding scheme for nonlinear equations. J. Comput. Math. Data Sci. 2023, 7, 100079. [Google Scholar] [CrossRef]
  8. Behl, R.; Alsolami, A.J.; Pansera, B.A.; Al-Hamdan, W.M.; Salimi, M.; Ferrara, M. A new optimal family of Schröder’s method for multiple zeros. Mathematics 2019, 7, 1076. [Google Scholar] [CrossRef]
  9. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1976, 27, 257–269. [Google Scholar] [CrossRef]
  10. Neta, B.; Johnson, A.N. High-order nonlinear solver for multiple roots. Comput. Math. Appl. 2008, 55, 2012–2017. [Google Scholar] [CrossRef]
  11. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1982, 12, 329–335. [Google Scholar] [CrossRef]
  12. Akram, S.; Zafar, F.; Yasmin, N. An optimal eighth-order family of iterative methods for multiple roots. Mathematics 2019, 7, 672. [Google Scholar] [CrossRef]
  13. Akram, S.; Akram, F.; Junjua, M.U.D.; Arshad, M.; Afzal, T. A family of optimal Eighth order iteration functions for multiple roots and its dynamics. J. Math. 2021, 2021, 5597186. [Google Scholar] [CrossRef]
  14. Behl, R.; Alharbi, S.K.; Mallawi, F.O.; Salimi, M. An Optimal Derivative-Free Ostrowski’s Scheme for Multiple Roots of Nonlinear Equations. Mathematics 2020, 8, 1809. [Google Scholar] [CrossRef]
  15. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Moysi, A. An Optimal Derivative Free Family of Chebyshev–Halley’s Method for Multiple Zeros. Mathematics 2021, 9, 546. [Google Scholar] [CrossRef]
  16. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
  17. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709. [Google Scholar] [CrossRef]
  18. Kumar, S.; Kumar, D.; Sharma, J.R.; Argyros, I.K. An efficient class of fourth-order derivative-free method for multiple-roots. Int. J. Non. Sci. Numer. Simul. 2021, 24, 265–275. [Google Scholar] [CrossRef]
  19. Kumar, S.; Kumar, D.; Sharma, J.R.; Jäntschi, L. A Family of Derivative Free Optimal Fourth Order Methods for Computing Multiple Roots. Symmetry 2020, 12, 1969. [Google Scholar] [CrossRef]
  20. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038. [Google Scholar] [CrossRef]
  21. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef]
  22. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comp. Appl. Math. 2018, 333, 131–156. [Google Scholar] [CrossRef]
  23. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar] [CrossRef]
  24. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef]
  25. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef]
  26. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  27. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452. [Google Scholar] [CrossRef]
  28. Sharma, J.R.; Kumar, S.; Jäntschi, L. On Derivative Free Multiple-Root Finders with Optimal Fourth Order Convergence. Mathematics 2020, 8, 1091. [Google Scholar] [CrossRef]
  29. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  30. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef]
  31. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  32. Kumar, S.; Behl, R.; Alrajhi, A. Efficient Fourth-Order Scheme for Multiple Zeros: Applications and Convergence Analysis in Real-Life and Academic Problems. Mathematics 2023, 11, 3146. [Google Scholar] [CrossRef]
  33. Zeng, Z. Computing multiple roots of inexact polynomials. Math. Comput. 2005, 74, 869–903. [Google Scholar] [CrossRef]
  34. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth–order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  35. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
Table 1. Problems for numerical experiment.
Problems | Root | Initial Guess
Van der Waals problem [1]: ϝ_1(t) = t^3 − 5.22 t^2 + 9.0825 t − 5.2675 | 1.75 | 2.3
Planck’s law radiation problem [1]: ϝ_2(t) = (e^{−t} − 1 + t/5)^3 | 4.9651142317… | 5.4
Manning problem for isentropic supersonic flow [2]: ϝ_3(t) = (tan^{−1}(√5/2) − tan^{−1}(√(t^2 − 1)) + √6 (tan^{−1}(√((t^2 − 1)/6)) − tan^{−1}((1/2)√(5/6))) − 11/63)^4 | 1.8411294068… | 1.5
Complex root problem: ϝ_4(t) = t(t^2 + 1)(2e^{t^2+1} + t^2 − 1) cosh^3(πt/2) | i | 1.1i
Academic problem [33]: ϝ_5(t) = (t − 2)^{15}(t − 4)^5(t − 3)^{10}(t − 1)^{20} | 1 | 0.8
Non-differentiable problem [32]: ϝ_6(t) = (t^2 + t − 1)(t − 3)^4 e^{|t−1|} | 3 | 1.1
Table 2. Multiplicity of problems taken into consideration in Table 1.
Problems | Multiplicity
ϝ_1(t) | 2
ϝ_2(t) | 3
ϝ_3(t) | 4
ϝ_4(t) | 5
ϝ_5(t) | 20
ϝ_6(t) | 4
Table 3. Numerical results of methods for ϝ 1 ( t ) .
Methods         k   |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             6   3.77(−6)              2.81(−18)             8.64(−67)         4.000
LCN             6   3.77(−6)              2.81(−18)             8.64(−67)         4.000
SSM             6   5.32(−6)              1.20(−17)             3.15(−64)         4.000
ZCS             6   1.09(−5)              2.60(−16)             8.49(−59)         4.000
KKB             6   2.49(−6)              3.99(−19)             2.63(−70)         4.000
SKJ             6   1.86(−4)              3.01(−15)             2.10(−54)         4.000
BAM             6   2.96(−7)              5.35(−23)             5.70(−86)         4.000
KKS             5   2.36(−3)              1.22(−7)              1.03(−24)         4.000
M1 (b = −0.5)   6   8.99(−7)              1.36(−20)             7.16(−76)         4.000
M1 (b = −1)     5   2.98(−4)              1.56(−10)             1.23(−35)         4.000
M2 (b = −0.5)   5   1.45(−3)              1.07(−8)              3.02(−29)         4.000
M2 (b = −1)     5   1.15(−4)              4.08(−13)             6.49(−47)         4.000
M3 (b = −0.5)   6   1.92(−7)              1.57(−33)             7.03(−88)         4.000
M3 (b = −1)     5   2.09(−4)              2.16(−11)             2.52(−39)         4.000
Table 4. Numerical results of methods for ϝ 2 ( t ) .
Methods         k   |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             4   1.95(−5)              1.17(−22)             1.51(−91)         4.000
LCN             4   1.95(−5)              1.17(−22)             1.51(−91)         4.000
SSM             4   1.95(−5)              1.17(−22)             1.53(−91)         4.000
ZCS             4   1.96(−5)              1.18(−22)             1.58(−91)         4.000
KKB             4   1.95(−5)              1.16(−22)             1.44(−91)         4.000
SKJ             3   4.35(−1)              2.76(−6)              8.00(−27)         4.000
BAM             3   4.35(−1)              2.41(−6)              3.85(−27)         4.000
KKS             3   4.35(−1)              2.42(−6)              3.93(−27)         4.000
M1 (b = −0.5)   3   4.35(−1)              2.38(−6)              4.45(−27)         4.000
M1 (b = −1)     3   4.35(−1)              2.02(−6)              2.29(−27)         4.000
M2 (b = −0.5)   3   4.35(−1)              2.15(−6)              2.44(−27)         4.000
M2 (b = −1)     3   4.35(−1)              1.87(−6)              1.40(−27)         4.000
M3 (b = −0.5)   3   4.35(−1)              2.30(−6)              3.68(−27)         4.000
M3 (b = −1)     3   4.35(−1)              1.97(−6)              1.96(−27)         4.000
Table 5. Numerical results of methods for ϝ 3 ( t ) .
Methods         k   |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             4   1.07(−3)              1.14(−14)             1.46(−58)         4.000
LCN             4   1.07(−3)              1.13(−14)             1.43(−58)         4.000
SSM             4   1.07(−3)              1.12(−14)             1.35(−58)         4.000
ZCS             4   1.07(−3)              1.10(−14)             1.23(−58)         4.000
KKB             4   1.07(−3)              1.19(−14)             1.82(−58)         4.000
SKJ             4   2.64(−5)              6.95(−21)             3.34(−83)         4.000
BAM             4   2.63(−5)              4.59(−21)             4.23(−84)         4.000
KKS             4   2.63(−5)              4.57(−21)             4.18(−84)         4.000
M1 (b = −0.5)   4   3.94(−5)              3.44(−20)             2.00(−80)         4.000
M1 (b = −1)     4   5.23(−5)              1.07(−19)             1.87(−78)         4.000
M2 (b = −0.5)   4   3.91(−5)              2.23(−20)             2.34(−81)         4.000
M2 (b = −1)     4   5.16(−5)              6.76(−20)             2.00(−79)         4.000
M3 (b = −0.5)   4   3.93(−5)              3.13(−20)             1.26(−80)         4.000
M3 (b = −1)     4   5.21(−5)              9.68(−20)             1.15(−78)         4.000
Table 6. Numerical results of methods for ϝ 4 ( t ) .
Methods         k   |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             4   2.15(−5)              7.98(−20)             1.50(−77)         4.000
LCN             4   2.15(−5)              8.01(−20)             1.53(−77)         4.000
SSM             4   2.16(−5)              8.08(−20)             1.59(−77)         4.000
ZCS             4   2.16(−5)              8.19(−20)             1.68(−77)         4.000
KKB             4   2.14(−5)              7.62(−20)             1.23(−77)         4.000
SKJ             4   9.91(−6)              1.90(−21)             2.57(−84)         4.000
BAM             4   7.65(−6)              4.15(−22)             3.58(−87)         4.000
KKS             4   7.59(−6)              4.01(−22)             3.15(−87)         4.000
M1 (b = −0.5)   4   1.43(−5)              8.16(−21)             8.74(−82)         4.000
M1 (b = −1)     4   1.98(−5)              3.00(−20)             1.60(−79)         4.000
M2 (b = −0.5)   4   9.31(−6)              9.09(−22)             8.28(−86)         4.000
M2 (b = −1)     4   1.07(−5)              1.60(−21)             7.90(−85)         4.000
M3 (b = −0.5)   4   1.33(−5)              5.65(−21)             1.85(−82)         4.000
M3 (b = −1)     4   1.79(−5)              1.89(−20)             2.31(−80)         4.000
Table 7. Numerical results of methods for ϝ 5 ( t ) .
Methods         k   |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             4   2.58(−4)              2.16(−10)             1.09(−38)         4.000
LCN             4   2.58(−4)              2.16(−10)             1.09(−38)         4.000
SSM             4   2.59(−4)              2.19(−10)             1.14(−38)         4.000
ZCS             4   2.59(−4)              2.19(−10)             1.16(−38)         4.000
KKB             4   2.51(−4)              1.88(−10)             6.10(−39)         4.000
SKJ             4   3.05(−3)              5.24(−10)             4.67(−37)         4.000
BAM             4   8.98(−4)              7.29(−13)             3.17(−49)         4.000
KKS             4   8.98(−4)              7.29(−13)             3.17(−49)         4.000
M1 (b = −0.5)   4   3.05(−3)              5.24(−10)             4.67(−37)         4.000
M1 (b = −1)     4   3.05(−3)              5.24(−10)             4.68(−37)         4.000
M2 (b = −0.5)   4   8.98(−4)              7.29(−13)             3.17(−49)         4.000
M2 (b = −1)     4   8.98(−4)              7.29(−13)             3.17(−49)         4.000
M3 (b = −0.5)   4   2.95(−3)              4.40(−10)             2.23(−37)         4.000
M3 (b = −1)     4   2.95(−3)              4.40(−10)             2.23(−37)         4.000
Table 8. Numerical results of methods for ϝ 6 ( t ) .
Methods         k    |t_{k−2} − t_{k−3}|   |t_{k−1} − t_{k−2}|   |t_k − t_{k−1}|   ACOC
LLC             D    D                     D                     D                 D
LCN             D    D                     D                     D                 D
SSM             D    D                     D                     D                 D
ZCS             D    D                     D                     D                 D
KKB             D    D                     D                     D                 D
SKJ             D    D                     D                     D                 D
BAM             D    D                     D                     D                 D
KKS             D    D                     D                     D                 D
M1 (b = −0.5)   35   8.94(−4)              4.62(−15)             3.29(−60)         4.000
M1 (b = −1)     D    D                     D                     D                 D
M2 (b = −0.5)   D    D                     D                     D                 D
M2 (b = −1)     D    D                     D                     D                 D
M3 (b = −0.5)   6    1.23(−2)              1.39(−10)             2.29(−60)         4.000
M3 (b = −1)     7    4.15(−6)              1.82(−24)             6.70(−98)         4.000

Cite as: Kumar, S.; Sharma, J.R.; Jäntschi, L. Optimal Fourth-Order Methods for Multiple Zeros: Design, Convergence Analysis and Applications. Axioms 2024, 13, 143. https://doi.org/10.3390/axioms13030143
