Article

Numerical Solution of Nonlinear Problems with Multiple Roots Using Derivative-Free Algorithms

Sunil Kumar, Janak Raj Sharma, Jai Bhagwan and Lorentz Jäntschi
1
Department of Mathematics, University Centre for Research and Development, Chandigarh University, Mohali 140413, Punjab, India
2
Department of Mathematics, Sant Longowal Institute of Engineering Technology, Longowal 148106, Punjab, India
3
Department of Mathematics, Pt. NRS Government College, Rohtak 124001, Haryana, India
4
Department of Physics and Chemistry, Technical University of Cluj-Napoca, 400114 Cluj-Napoca, Romania
5
Institute of Doctoral Studies, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania
*
Author to whom correspondence should be addressed.
Symmetry 2023, 15(6), 1249; https://doi.org/10.3390/sym15061249
Submission received: 5 May 2023 / Revised: 5 June 2023 / Accepted: 9 June 2023 / Published: 12 June 2023
(This article belongs to the Section Chemistry: Symmetry/Asymmetry)

Abstract

In the study of systems' dynamics the presence of symmetry dramatically reduces the complexity, while in chemistry, symmetry plays a central role in the analysis of the structure, bonding, and spectroscopy of molecules. In a more general context, the principle of equivalence, a principle of local symmetry, dictates the dynamics of gravity and of space-time itself. In certain instances, especially in the presence of symmetry, we end up having to deal with an equation with multiple roots. A variety of optimal methods have been proposed in the literature for multiple roots with known multiplicity, all of which need derivative evaluations in their formulations. However, optimal methods without derivatives are few. Motivated by this feature, here we present a novel optimal family of fourth-order methods for multiple roots with known multiplicity, which do not use any derivative. The scheme of the new iterative family consists of two steps, namely a Traub-Steffensen iteration and a Traub-Steffensen-like iteration with a weight factor. According to the Kung-Traub conjecture, the new algorithms satisfy the optimality criterion. Taylor series expansion is used to examine the order of convergence. We also demonstrate the application of the new algorithms to real-life problems, i.e., the Van der Waals problem, the Manning problem, the Planck law radiation problem, and Kepler's problem. Furthermore, the performance comparisons have shown that the given derivative-free algorithms are competitive with existing optimal fourth-order algorithms that require derivative information.

1. Introduction

Simple systems often embed a good amount of symmetry. Take for instance the characteristic polynomial (ChP) of hydrocarbons [1]. Considering three cases here, propane (ChP is $x^3 - 2x$), normal butane (ChP is $x^4 - 3x^2 + 1$) and isobutane (ChP is $x^4 - 3x^2$), one easily notices that the highest symmetry is in isobutane. At the same time, isobutane is the one having multiple roots in its characteristic polynomial. The same symmetry is responsible for the presence of the multiple roots in the ChP of 2,2,4,4-Tetramethylpentane (ChP is $x^9 - 8x^7 + 15x^5$, see a_25 in [2]). One should notice that, in the selected cases, the multiple root is trivial ($x = 0$); however, in more complex cases, the multiple root is in general no longer trivial.
Much research has been conducted on the solution of nonlinear equations and systems of nonlinear equations. There are numerous publications on the topic, including those given in references [3,4,5,6,7,8,9,10], and Traub's book [11] has a whole chapter devoted to it. It can be particularly difficult to find multiple roots of a given nonlinear equation; hence, many scholars have proposed several iterative algorithms for this purpose (see Refs. [12,13,14,15,16,17,18,19]). Multiple zero, repeated root, or multiple point are other names for a root having multiplicity m. The root is referred to as a simple zero when $m = 1$. It is very challenging to solve a nonlinear equation with multiple zeros. The goal of this article is to build iterative algorithms for a given nonlinear equation $\chi(t) = 0$ to find a multiple root $\bar{a}$ with a multiplicity of m, i.e., $\chi^{(j)}(\bar{a}) = 0$, $j = 0, 1, 2, \ldots, m-1$, and $\chi^{(m)}(\bar{a}) \neq 0$.
The use of multiple steps to improve the solution in iterative algorithms is commonly referred to as multi-point iterations in literature. Many scholars are now interested in these algorithms because of some interesting aspects. These can first overcome the one-point algorithms’ low efficiency index, and can also minimize the number of iterations and improve the order of convergence with multiple steps, which also lessens the computational burden in numerical work. Many researchers [20,21,22,23,24,25,26,27,28] have developed higher-order iterative techniques using the first-order derivative to locate the multiple roots of a nonlinear problem. One function and two derivative evaluations are needed per iteration for the optimal fourth-order methods established in the literature [20,21,23,24,25,27,28]. Li et al. have introduced six new fourth-order methods in [22]. The first four methods require one function and three-derivative evaluations per iteration, whereas the last two require one function and two derivative evaluations per iteration. A class of two-point sixth-order multiple-zero finders, using two functions and two derivative evaluations per step, has been proposed in [26].
In science and engineering, iterative techniques without derivatives are useful tools in finding multiple roots of complicated equations. These techniques do not rely on the computation of derivatives, which in the case of complex systems can be time-consuming and computationally expensive. For any complex problem, when the derivative of function χ is difficult to calculate or is expensive to evaluate, then derivative-free algorithms are important [29,30]. Derivative-free iterative approaches are crucial for optimizing complex systems and resolving difficult engineering problems [31]. In the literature, researchers [32,33,34,35,36,37,38,39] have also developed the derivative-free multiple root iterative algorithms that are based on second-order modified Traub-Steffensen iteration [11]. The modified Traub-Steffensen method is given by
$$t_{p+1} = t_p - m\,\frac{\chi(t_p)}{\chi[v_p, t_p]}, \quad p = 0, 1, 2, \ldots, \tag{1}$$
where $v_p = t_p + \beta\chi(t_p)$, $\beta \in \mathbb{R}\setminus\{0\}$, and $\chi[v_p, t_p] = \frac{\chi(v_p) - \chi(t_p)}{v_p - t_p}$. For $\beta = 1$ and $m = 1$ this method is Steffensen's method [40]. Note that this method is obtained from Newton's method
$$t_{p+1} = t_p - m\,\frac{\chi(t_p)}{\chi'(t_p)}, \tag{2}$$
by replacing the derivative $\chi'(t_p)$ with the divided difference $\chi[v_p, t_p]$.
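To make the iteration concrete, the following is a minimal Python sketch of the modified Traub-Steffensen iteration (1). It is our own illustration rather than code from the literature; the function names (traub_steffensen, chi) and the tolerance settings are arbitrary choices.

```python
# Minimal sketch of the modified Traub-Steffensen iteration (1).
# Illustration only; names and tolerances are our own choices.

def traub_steffensen(chi, t0, m, beta=0.01, tol=1e-12, max_iter=100):
    """Approximate a root of chi with known multiplicity m."""
    t = t0
    for _ in range(max_iter):
        v = t + beta * chi(t)                 # v_p = t_p + beta*chi(t_p)
        if v == t:                            # chi(t) is numerically zero
            return t
        dd = (chi(v) - chi(t)) / (v - t)      # chi[v_p, t_p]
        t_new = t - m * chi(t) / dd           # iteration (1)
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t

# Example: chi(t) = (t - 1)^2 has the double root t = 1 (m = 2).
print(traub_steffensen(lambda t: (t - 1.0) ** 2, t0=2.0, m=2))
```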
Our goal in this study is to develop efficient derivative-free algorithms for multiple roots with known multiplicity. We describe a class of derivative-free fourth-order algorithms that require only three new evaluations of the function χ per iteration and thus possess optimal fourth-order convergence in the sense of the Kung-Traub conjecture [41]. The scheme uses iteration (1) in the first step, and a Traub-Steffensen-like iteration equipped with a weight function in the second step. The algorithms are tested numerically on several real-life problems such as the Van der Waals problem, the Manning (isentropic supersonic flow) problem, the Planck law radiation problem, Kepler's problem, etc. The performance of the proposed methods, in terms of accuracy and CPU time, is compared with that of existing methods that require derivative evaluations.
Let us briefly introduce here the practical problems that we will consider for numerical testing. Their mathematical forms are given in later sections. The Van der Waals equation [42] corrects the deviations from ideal gas behavior, allowing for more accurate predictions of gas properties under non-ideal conditions. It is used to understand the behavior of gases at high pressures and low temperatures, where intermolecular attractions and molecular volume become significant. The Manning isentropic supersonic flow problem has multiple uses; among them, the gas and particle dynamics in first-generation needle-free drug delivery devices [43] are of medical interest. The Planck law radiation problem relates to the principles of phototherapy and photochemotherapy [44]. The use of the ChP in structure-activity studies is reported in previous works [45,46,47,48]. Kepler's problem [49] has significant applications in celestial mechanics, spacecraft navigation, and astrodynamics.

2. Development of Method

We consider a fourth-order family with a simple and compact structure for multiple zeros with $m \ge 2$, which is described by the following:
$$z_p = t_p - m\,\frac{\chi(t_p)}{\chi[v_p, t_p]}, \qquad t_{p+1} = z_p - \omega(x_p, y_p)\,\frac{\chi(t_p)}{\chi[v_p, t_p]}, \tag{3}$$
where $x_p = \left(\frac{\chi(z_p)}{\chi(t_p)}\right)^{1/m}$, $y_p = \left(\frac{\chi(v_p)}{\chi(t_p)}\right)^{1/m}$ and $\omega : \mathbb{C}^2 \to \mathbb{C}$ is a differentiable function in the vicinity of $(0, 0)$. The second step is weighted by the factor $\omega(x_p, y_p)$; hence, $\omega$ is called the weight function.
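As an illustration of how the two-step scheme (3) translates into code, the sketch below performs one iteration for a user-supplied weight function. It is our own Python rendering (not the authors' Mathematica implementation), and it takes the principal m-th root when forming $x_p$ and $y_p$.

```python
# Sketch of one step of scheme (3) for a user-supplied weight omega(x, y).
# Our own illustration; the paper's experiments use Mathematica instead.

def scheme3_step(chi, t, m, omega, beta=0.01):
    v = t + beta * chi(t)
    if v == t:                                # chi(t) is numerically zero
        return t
    dd = (chi(v) - chi(t)) / (v - t)          # chi[v_p, t_p]
    z = t - m * chi(t) / dd                   # first (Traub-Steffensen) step
    x = (chi(z) / chi(t)) ** (1.0 / m)        # x_p = (chi(z_p)/chi(t_p))^(1/m)
    y = (chi(v) / chi(t)) ** (1.0 / m)        # y_p = (chi(v_p)/chi(t_p))^(1/m)
    return z - omega(x, y) * chi(t) / dd      # second (weighted) step
```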
In what follows, we will prove the convergence results of the scheme (3). For a better understanding (as we will point out in Remark 1), a few results are proved separately depending on the multiplicity m. First, we consider the case $m = 2$ and prove the following theorem:
Theorem 1.
Assume that $\chi : \mathbb{C} \to \mathbb{C}$ is a function, analytic in a domain surrounding a zero $\bar{a}$ of multiplicity $m = 2$, and let the initial guess $t_0$ be sufficiently close to $\bar{a}$. Then, scheme (3) has at least a convergence order of 4, provided that $\omega_{00} = 0$, $\omega_{10} = 3$, $\omega_{01} = 0$, $\omega_{20} = 8$, $\omega_{11} = -1$ and $\omega_{02} = 0$, where $\omega_{ij} = \frac{\partial^{i+j}}{\partial x^i \partial y^j}\,\omega(x_p, y_p)\big|_{(x_p = 0,\; y_p = 0)}$, for $0 \le i, j \le 2$.
Proof
Assume that $e_p = t_p - \bar{a}$ is the error at the $p$-th step. Using the Taylor series expansion of $\chi(t_p)$ about $\bar{a}$, and taking into account that $\chi(\bar{a}) = 0$, $\chi'(\bar{a}) = 0$ and $\chi^{(2)}(\bar{a}) \neq 0$, we have
$$\chi(t_p) = \frac{\chi^{(2)}(\bar{a})}{2!}\, e_p^2\left(1 + A_1 e_p + A_2 e_p^2 + A_3 e_p^3 + A_4 e_p^4 + \cdots\right), \tag{4}$$
where $A_n = \frac{2!}{(2+n)!}\,\frac{\chi^{(2+n)}(\bar{a})}{\chi^{(2)}(\bar{a})}$ for $n \in \mathbb{N}$.
Similarly, the expansion of $\chi(v_p)$ about $\bar{a}$ can be written as follows:
$$\chi(v_p) = \frac{\chi^{(2)}(\bar{a})}{2!}\, e_{v_p}^2\left(1 + A_1 e_{v_p} + A_2 e_{v_p}^2 + A_3 e_{v_p}^3 + A_4 e_{v_p}^4 + \cdots\right), \tag{5}$$
where $e_{v_p} = v_p - \bar{a} = e_p + \beta\,\frac{\chi^{(2)}(\bar{a})}{2!}\, e_p^2\left(1 + A_1 e_p + A_2 e_p^2 + A_3 e_p^3 + A_4 e_p^4 + \cdots\right)$.
Then, Equation (3) becomes
$$\begin{aligned} e_{z_p} = z_p - \bar{a} = {} & \frac{1}{2}\left(\frac{\beta\chi^{(2)}(\bar{a})}{2} + A_1\right)e_p^2 - \frac{1}{16}\left((\beta\chi^{(2)}(\bar{a}))^2 - 8\beta\chi^{(2)}(\bar{a})A_1 + 12A_1^2 - 16A_2\right)e_p^3 \\ & + \frac{1}{64}\Big((\beta\chi^{(2)}(\bar{a}))^3 - 20\beta\chi^{(2)}(\bar{a})A_1^2 + 72A_1^3 + 64\beta\chi^{(2)}(\bar{a})A_2 - 10A_1\left((\beta\chi^{(2)}(\bar{a}))^2 + 16A_2\right) + 96A_3\Big)e_p^4 + O(e_p^5). \end{aligned} \tag{6}$$
Similarly, expanding χ ( z p ) about a ¯ gives us
$$\chi(z_p) = \frac{\chi^{(2)}(\bar{a})}{2!}\, e_{z_p}^2\left(1 + A_1 e_{z_p} + A_2 e_{z_p}^2 + \cdots\right). \tag{7}$$
Using (4), (5) and (7), we have
$$\begin{aligned} x_p = {} & \frac{1}{2}\left(\frac{\beta\chi^{(2)}(\bar{a})}{2} + A_1\right)e_p - \frac{1}{16}\left((\beta\chi^{(2)}(\bar{a}))^2 - 6\beta\chi^{(2)}(\bar{a})A_1 + 16(A_1^2 - A_2)\right)e_p^2 \\ & + \frac{1}{64}\Big((\beta\chi^{(2)}(\bar{a}))^3 - 22\beta\chi^{(2)}(\bar{a})A_1^2 + 4\left(29A_1^3 + 14\beta\chi^{(2)}(\bar{a})A_2\right) - 2A_1\left(3(\beta\chi^{(2)}(\bar{a}))^2 + 104A_2\right) + 96A_3\Big)e_p^3 + O(e_p^4) \end{aligned} \tag{8}$$
and
$$y_p = 1 + \frac{\beta\chi^{(2)}(\bar{a})}{2}\, e_p\left(1 + \frac{3}{2}A_1 e_p + \frac{1}{4}\left(\beta\chi^{(2)}(\bar{a})A_1 + 8A_2\right)e_p^2 + O(e_p^3)\right). \tag{9}$$
Next, we develop ω ( x p , y p ) by Taylor series in the neighborhood of origin ( 0 , 0 ) :
$$\omega(x_p, y_p) \approx \omega_{00} + x_p\,\omega_{10} + y_p\,\omega_{01} + \frac{1}{2}x_p^2\,\omega_{20} + x_p y_p\,\omega_{11} + \frac{1}{2}y_p^2\,\omega_{02}, \tag{10}$$
where $\omega_{ij} = \frac{\partial^{i+j}}{\partial x^i \partial y^j}\,\omega(x_p, y_p)\big|_{(x_p = 0,\; y_p = 0)}$, for $0 \le i, j \le 2$. Inserting (4)–(10) in the second step of (3), we get
$$\begin{aligned} e_{p+1} = {} & -\frac{1}{4}\left(\omega_{02} + 2\omega_{00} + 2\omega_{01}\right)e_p \\ & + \frac{1}{16}\Big(\beta\chi^{(2)}(\bar{a})\left(4 + 2\omega_{00} - 2\omega_{01} - 3\omega_{02} - 2\omega_{10} - 2\omega_{11}\right) + 2\left(4 + 2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11}\right)A_1\Big)e_p^2 \\ & - \frac{1}{64}\Big((\beta\chi^{(2)}(\bar{a}))^2\left(4 + 2\omega_{00} - 2\omega_{01} + \omega_{02} - 4\omega_{10} + \omega_{20}\right) + 4\beta\chi^{(2)}(\bar{a})\left(-8 - 4\omega_{00} + 2\omega_{02} + \omega_{10} + 3\omega_{11} + \omega_{20}\right)A_1 \\ & \qquad + 4\left(12 + 6\omega_{00} + 6\omega_{01} + 3\omega_{02} - 10\omega_{10} - 10\omega_{11} + \omega_{20}\right)A_1^2 - 16\left(4 + 2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11}\right)A_2\Big)e_p^3 \\ & + \eta\, e_p^4 + O(e_p^5), \end{aligned} \tag{11}$$
where η = η ( β , A 1 , A 2 , A 3 , ω 00 , ω 10 , ω 01 , ω 20 , ω 11 , ω 02 ) . The expression of η is very lengthy, so it is not produced here.
It is clear from (11) that if we set the coefficients of $e_p$, $e_p^2$ and $e_p^3$ simultaneously equal to zero, then after some simple calculations one gets
$$\omega_{00} = 0, \quad \omega_{10} = 3, \quad \omega_{01} = 0, \quad \omega_{20} = 8, \quad \omega_{11} = -1, \quad \omega_{02} = 0. \tag{12}$$
Consequently, the error Equation (11) is given by
$$e_{p+1} = \frac{1}{16}\left(\frac{\beta\chi^{(2)}(\bar{a})}{2} + A_1\right)\left((\beta\chi^{(2)}(\bar{a}))^2 + 3\beta\chi^{(2)}(\bar{a})A_1 + 11A_1^2 - 4A_2\right)e_p^4 + O(e_p^5). \tag{13}$$
Hence, the result is proved. □
Next, we consider the case m = 3 and prove the following theorem:
Theorem 2.
Assuming the hypothesis of Theorem 1, the convergence order of (3) for $m = 3$ is at least 4, if $\omega_{01} = -2\omega_{00}$, $\omega_{20} = 12$, $\omega_{11} = 3 - \omega_{10}$ and $\omega_{02} = 2\omega_{00}$, wherein $|\omega_{00}|, |\omega_{10}| < \infty$.
Proof
Let $\chi(\bar{a}) = 0$, $\chi'(\bar{a}) = 0$, $\chi''(\bar{a}) = 0$ and $\chi^{(3)}(\bar{a}) \neq 0$. Then, expanding $\chi(t_p)$ about $\bar{a}$ using Taylor's expansion,
$$\chi(t_p) = \frac{\chi^{(3)}(\bar{a})}{3!}\, e_p^3\left(1 + B_1 e_p + B_2 e_p^2 + B_3 e_p^3 + B_4 e_p^4 + \cdots\right), \tag{14}$$
where $B_n = \frac{3!}{(3+n)!}\,\frac{\chi^{(3+n)}(\bar{a})}{\chi^{(3)}(\bar{a})}$ for $n \in \mathbb{N}$.
Similarly, we can expand $\chi(v_p)$ about $\bar{a}$ as
$$\chi(v_p) = \frac{\chi^{(3)}(\bar{a})}{3!}\, e_{v_p}^3\left(1 + B_1 e_{v_p} + B_2 e_{v_p}^2 + B_3 e_{v_p}^3 + B_4 e_{v_p}^4 + \cdots\right), \tag{15}$$
where $e_{v_p} = v_p - \bar{a} = e_p + \beta\,\frac{\chi^{(3)}(\bar{a})}{3!}\, e_p^3\left(1 + B_1 e_p + B_2 e_p^2 + B_3 e_p^3 + B_4 e_p^4 + \cdots\right)$.
Then, the first step of (3) yields
$$e_{z_p} = z_p - \bar{a} = \frac{B_1}{3}e_p^2 + \frac{1}{18}\left(3\beta\chi^{(3)}(\bar{a}) - 8B_1^2 + 12B_2\right)e_p^3 + \frac{1}{27}\left(16B_1^3 + 3B_1\left(2\beta\chi^{(3)}(\bar{a}) - 13B_2\right) + 27B_3\right)e_p^4 + O(e_p^5). \tag{16}$$
The expansion of χ ( z p ) about a ¯ is
$$\chi(z_p) = \frac{\chi^{(3)}(\bar{a})}{3!}\, e_{z_p}^3\left(1 + B_1 e_{z_p} + B_2 e_{z_p}^2 + B_3 e_{z_p}^3 + B_4 e_{z_p}^4 + \cdots\right). \tag{17}$$
Then, from (14), (15) and (17), it follows that
$$x_p = \frac{B_1}{3}e_p + \frac{1}{18}\left(3\beta\chi^{(3)}(\bar{a}) - 10B_1^2 + 12B_2\right)e_p^2 + \frac{1}{27}\left(23B_1^3 + \frac{3}{2}B_1\left(3\chi^{(3)}(\bar{a})\beta - 32B_2\right) + 27B_3\right)e_p^3 + O(e_p^4) \tag{18}$$
and
$$y_p = 1 + \frac{\chi^{(3)}(\bar{a})}{3!}\,\beta\, e_p^2\left(1 + \frac{4}{3}B_1 e_p + \frac{5}{3}B_2 e_p^2 + O(e_p^3)\right). \tag{19}$$
By using (10) and (14)–(19) in the last step of (3), we have
$$\begin{aligned} e_{p+1} = {} & -\frac{1}{6}\left(2\omega_{00} + 2\omega_{01} + \omega_{02}\right)e_p + \frac{1}{18}\left(6 + 2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11}\right)B_1 e_p^2 \\ & - \frac{1}{108}\Big(2\left(24 + 8\omega_{00} + 8\omega_{01} + 4\omega_{02} - 12\omega_{10} - 12\omega_{11} + \omega_{20}\right)B_1^2 \\ & \qquad - 3\Big(\beta\chi^{(3)}(\bar{a})\left(2\omega_{00} - \omega_{02} - 2(-3 + \omega_{10} + \omega_{11})\right) + 4\left(6 + 2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11}\right)B_2\Big)\Big)e_p^3 \\ & + \phi\, e_p^4 + O(e_p^5), \end{aligned} \tag{20}$$
where ϕ = ϕ ( β , B 1 , B 2 , B 3 , ω 00 , ω 10 , ω 01 , ω 20 , ω 11 , ω 02 ) .
The above Equation (20) will yield at least fourth-order convergence if the coefficients of $e_p$, $e_p^2$ and $e_p^3$ satisfy the following conditions:
$$\omega_{01} = -2\omega_{00}, \quad \omega_{20} = 12, \quad \omega_{11} = 3 - \omega_{10}, \quad \omega_{02} = 2\omega_{00}. \tag{21}$$
Then, the final form of the error Equation (20) is given by
$$e_{p+1} = \frac{B_1}{54}\left((\omega_{10} - 6)\beta\chi^{(3)}(\bar{a}) + 12B_1^2 - 6B_2\right)e_p^4 + O(e_p^5). \tag{22}$$
Thus, the theorem is proved. □
Next, we state the results for the cases m = 4 , 5 , 6 in the form of corollaries. The proofs are similar to the above-proved Theorems 1 and 2.
Corollary 1.
Assuming the hypothesis of Theorem 1, the convergence order of (3) for $m = 4$ is at least 4, provided that $\omega_{20} = 16$, $\omega_{11} = 4 - \omega_{10}$ and $\omega_{02} = -2(\omega_{00} + \omega_{01})$, where $|\omega_{00}|, |\omega_{10}|, |\omega_{01}| < \infty$. Moreover, the error equation is given by
$$e_{p+1} = \frac{1}{384}\left(4(\omega_{01} + 2\omega_{00})\beta\chi^{(4)}(\bar{a}) + 39C_1^3 - 24C_1C_2\right)e_p^4 + O(e_p^5),$$
where $C_n = \frac{4!}{(4+n)!}\,\frac{\chi^{(4+n)}(\bar{a})}{\chi^{(4)}(\bar{a})}$ for $n \in \mathbb{N}$.
Corollary 2.
Assuming the hypothesis of Theorem 1, the convergence order of (3) for $m = 5$ is at least 4, provided that $\omega_{20} = 20$, $\omega_{11} = 5 - \omega_{10}$ and $\omega_{02} = -2(\omega_{00} + \omega_{01})$, wherein $|\omega_{00}|, |\omega_{10}|, |\omega_{01}| < \infty$. Then, the final error equation is given by
$$e_{p+1} = \frac{1}{125}\left(7D_1^3 - 5D_1D_2\right)e_p^4 + O(e_p^5),$$
where $D_n = \frac{5!}{(5+n)!}\,\frac{\chi^{(5+n)}(\bar{a})}{\chi^{(5)}(\bar{a})}$ for $n \in \mathbb{N}$.
Corollary 3.
Assuming the hypothesis of Theorem 1, the convergence order of (3) for $m = 6$ is at least 4, provided that $\omega_{20} = 24$, $\omega_{11} = 6 - \omega_{10}$ and $\omega_{02} = -2(\omega_{00} + \omega_{01})$, where $|\omega_{00}|, |\omega_{10}|, |\omega_{01}| < \infty$. Moreover, the error equation of the method is
$$e_{p+1} = \frac{1}{144}\left(5E_1^3 - 4E_1E_2\right)e_p^4 + O(e_p^5),$$
where $E_n = \frac{6!}{(6+n)!}\,\frac{\chi^{(6+n)}(\bar{a})}{\chi^{(6)}(\bar{a})}$ for $n \in \mathbb{N}$.
Remark 1.
Notice that the parameter β, which is used in the expression of $v_p$, appears only in the error equations for $m = 2, 3, 4$, but not for $m = 5$. However, we have noticed that for $m \ge 5$, this parameter appears in terms involving $e_p^5$ and higher orders. In general, such terms are difficult to calculate. Moreover, we do not require these to demonstrate the required fourth-order convergence. For these reasons the convergence conditions for $m \ge 5$ are explored separately in the next section.

3. Main Result

For the reason stated in Remark 1, we prove the following unified result for $m \ge 5$:
Theorem 3.
Assuming the hypothesis of Theorem 1, the convergence order of (3) for $m \ge 5$ is at least 4, provided that $\omega_{20} = 4m$, $\omega_{11} = m - \omega_{10}$ and $\omega_{02} = -2(\omega_{00} + \omega_{01})$, wherein $|\omega_{00}|, |\omega_{10}|, |\omega_{01}| < \infty$. Moreover, the error is given by
$$e_{p+1} = \frac{(9 + m)K_1^3 - 2mK_1K_2}{2m^3}\, e_p^4 + O(e_p^5).$$
Proof
Keeping in mind that $\chi^{(j)}(\bar{a}) = 0$, $j = 0, 1, 2, \ldots, m-1$, and $\chi^{(m)}(\bar{a}) \neq 0$, the Taylor series expansion of $\chi(t_p)$ about $\bar{a}$ is given by
$$\chi(t_p) = \frac{\chi^{(m)}(\bar{a})}{m!}\, e_p^m\left(1 + K_1 e_p + K_2 e_p^2 + K_3 e_p^3 + K_4 e_p^4 + \cdots\right), \tag{23}$$
where $K_n = \frac{m!}{(m+n)!}\,\frac{\chi^{(m+n)}(\bar{a})}{\chi^{(m)}(\bar{a})}$ for $n \in \mathbb{N}$.
Similarly, the expansion of $\chi(v_p)$ about $\bar{a}$ gives
$$\chi(v_p) = \frac{\chi^{(m)}(\bar{a})}{m!}\, e_{v_p}^m\left(1 + K_1 e_{v_p} + K_2 e_{v_p}^2 + K_3 e_{v_p}^3 + K_4 e_{v_p}^4 + \cdots\right), \tag{24}$$
where $e_{v_p} = v_p - \bar{a} = e_p + \beta\,\frac{\chi^{(m)}(\bar{a})}{m!}\, e_p^m\left(1 + K_1 e_p + K_2 e_p^2 + K_3 e_p^3 + K_4 e_p^4 + \cdots\right)$.
From the first step of (3)
$$e_{z_p} = z_p - \bar{a} = \frac{K_1}{m}e_p^2 + \frac{-(1+m)K_1^2 + 2mK_2}{m^2}e_p^3 + \frac{(1+m)^2K_1^3 - m(4+3m)K_1K_2 + 3m^2K_3}{m^3}e_p^4 + O(e_p^5). \tag{25}$$
Expanding χ ( z p ) about a ¯ , we have
$$\chi(z_p) = \frac{\chi^{(m)}(\bar{a})}{m!}\, e_{z_p}^m\left(1 + K_1 e_{z_p} + K_2 e_{z_p}^2 + K_3 e_{z_p}^3 + K_4 e_{z_p}^4 + \cdots\right). \tag{26}$$
Using (23), (24) and (26), we have that
$$x_p = \frac{K_1}{m}e_p + \frac{-(2+m)K_1^2 + 2mK_2}{m^2}e_p^2 + \frac{(7 + 7m + 2m^2)K_1^3 - 2m(7 + 3m)K_1K_2 + 6m^2K_3}{2m^3}e_p^3 + O(e_p^4) \tag{27}$$
and
$$y_p = 1 + \frac{\chi^{(m)}(\bar{a})}{m!}\,\beta\, e_p^{\,m-1}\left(1 + \frac{(m+1)K_1}{m}\, e_p + \frac{(m+2)K_2}{m}\, e_p^2 + \frac{(m+3)K_3}{m}\, e_p^3 + O(e_p^4)\right). \tag{28}$$
Putting (10) and (23)–(28) in the last step of (3), it follows that
$$\begin{aligned} e_{p+1} = {} & -\frac{1}{2m}\left(2\omega_{00} + 2\omega_{01} + \omega_{02}\right)e_p + \frac{1}{2m^2}\left(2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11} + 2m\right)K_1 e_p^2 \\ & - \frac{1}{2m^3}\Big(\big(\omega_{02} - 6\omega_{10} - 6\omega_{11} + \omega_{20} + 2m + m\omega_{02} - 2m\omega_{10} - 2m\omega_{11} + 2m^2 + 2(1+m)\omega_{00} + 2(1+m)\omega_{01}\big)K_1^2 \\ & \qquad - 2m\left(2\omega_{00} + 2\omega_{01} + \omega_{02} - 2\omega_{10} - 2\omega_{11} + 2m\right)K_2\Big)e_p^3 + \psi\, e_p^4 + O(e_p^5), \end{aligned} \tag{29}$$
where ψ = ψ ( m , K 1 , K 2 , K 3 , ω 00 , ω 10 , ω 01 , ω 20 , ω 11 , ω 02 ) .
Equation (29) will yield at least fourth-order convergence if the coefficients of $e_p$, $e_p^2$ and $e_p^3$ satisfy the following conditions:
$$\omega_{20} = 4m, \quad \omega_{11} = m - \omega_{10}, \quad \omega_{02} = -2(\omega_{00} + \omega_{01}). \tag{30}$$
Then, Equation (29) is given by
$$e_{p+1} = \frac{(9 + m)K_1^3 - 2mK_1K_2}{2m^3}\, e_p^4 + O(e_p^5). \tag{31}$$
Thus, the theorem is proved. □
Remark 2.
If the criteria of the above theorems are satisfied, the proposed scheme (3) achieves fourth-order convergence. Only three functional evaluations, viz. $\chi(t_p)$, $\chi(v_p)$ and $\chi(z_p)$, are used per iteration to achieve this convergence rate. So the iterative scheme (3) is optimal according to the Kung-Traub conjecture [41].

Some Special Cases

We can construct various special iterative schemes of (3) based on the function $\omega(x, y)$, which satisfies the conditions explored in the preceding theorems and corollaries. However, we shall limit our options to low-degree polynomials or simple rational functions. These selections allow the resulting algorithms for $m \ge 2$ to converge to the root with order four. The following simple forms of $\omega(x, y)$ are chosen:
$$\begin{aligned} (1)\quad & \omega(x_p, y_p) = x_p\left(3(m-1) + 2m\,x_p + (3 - 2m)\,y_p\right), \\ (2)\quad & \omega(x_p, y_p) = \frac{2m\,x_p^2 + x_p^3 y_p + x_p\left(1 + x_p^3\right)\left(3(-1 + m + y_p) - 2m\,y_p\right)}{1 + x_p^3}, \\ (3)\quad & \omega(x_p, y_p) = \frac{x_p\left(-3 + (3x_p - 3)\,y_p^3 + m\left(3 + y_p + y_p^2 + 2\left(x_p + y_p(x_p + x_p y_p - y_p^2)\right)\right)\right)}{1 + y_p + y_p^2}, \\ (4)\quad & \omega(x_p, y_p) = \frac{1}{1 + x_p + y_p + x_p y_p}\Big(\left(5y_p - 4 + 2m(-1 + y_p)\right)x_p^3 - x_p(1 + y_p)\big(3(1 - y_p)(1 + x_p) + m\,x_p(2y_p - 5) + m(2y_p - 3)\big)\Big). \end{aligned}$$
The corresponding algorithm to each of the above forms can be demonstrated as follows:
Method 1 (NM1):
$$t_{p+1} = z_p - x_p\left(3(m-1) + 2m\,x_p + (3 - 2m)\,y_p\right)\frac{\chi(t_p)}{\chi[v_p, t_p]}.$$
Method 2 (NM2):
$$t_{p+1} = z_p - \frac{2m\,x_p^2 + x_p^3 y_p + x_p\left(1 + x_p^3\right)\left(3(-1 + m + y_p) - 2m\,y_p\right)}{1 + x_p^3}\,\frac{\chi(t_p)}{\chi[v_p, t_p]}.$$
Method 3 (NM3):
$$t_{p+1} = z_p - \frac{1}{1 + y_p + y_p^2}\, x_p\left(-3 + (3x_p - 3)\,y_p^3 + m\left(3 + y_p + y_p^2 + 2\left(x_p + y_p(x_p + x_p y_p - y_p^2)\right)\right)\right)\frac{\chi(t_p)}{\chi[v_p, t_p]}.$$
Method 4 (NM4):
$$t_{p+1} = z_p - \frac{1}{1 + x_p + y_p + x_p y_p}\Big(\left(5y_p - 4 + 2m(-1 + y_p)\right)x_p^3 - x_p(1 + y_p)\big(3(1 - y_p)(1 + x_p) + m\,x_p(2y_p - 5) + m(2y_p - 3)\big)\Big)\frac{\chi(t_p)}{\chi[v_p, t_p]}.$$
In each case above, $z_p = t_p - m\,\frac{\chi(t_p)}{\chi[v_p, t_p]}$, $x_p = \left(\frac{\chi(z_p)}{\chi(t_p)}\right)^{1/m}$ and $y_p = \left(\frac{\chi(v_p)}{\chi(t_p)}\right)^{1/m}$.
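A small sketch of how one of these special cases can be driven in practice is given below. It is our own illustration: it reuses the scheme3_step sketch given after scheme (3), implements the NM1 weight factor, and applies the stopping criterion used later in the numerical section (with a looser tolerance, since this is plain double precision rather than the multiprecision Mathematica setting of the paper).

```python
# Sketch (assumes scheme3_step from the earlier snippet): the NM1 weight
# factor and a simple driver using the stopping criterion of Section 4.

def omega_nm1(m):
    # weight of NM1: x * (3(m - 1) + 2m*x + (3 - 2m)*y)
    return lambda x, y: x * (3 * (m - 1) + 2 * m * x + (3 - 2 * m) * y)

def solve(chi, t0, m, omega, beta=0.01, tol=1e-10, max_iter=50):
    t = t0
    for k in range(1, max_iter + 1):
        t_new = scheme3_step(chi, t, m, omega, beta)
        if abs(t_new - t) + abs(chi(t_new)) < tol:   # stopping criterion
            return t_new, k
        t = t_new
    return t, max_iter

# Double root t = 1 (m = 2) of (t - 1)^2 (t^2 + 1):
root, iters = solve(lambda t: (t - 1.0) ** 2 * (t ** 2 + 1.0), 2.0, 2, omega_nm1(2))
print(root, iters)
```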

4. Numerical Results

To check the validity and stability of the new algorithms, we have considered some real-life problems which confirm the results shown in the preceding sections. Moreover, the new algorithms are compared with existing fourth-order algorithms that use derivatives in their formulas. The following five schemes are taken for comparison:
Li et al. method [21] (LLC):
$$z_p = t_p - \frac{2m}{m+2}\,\frac{\chi(t_p)}{\chi'(t_p)}, \qquad t_{p+1} = t_p - \frac{\left[m(m-2)\left(\frac{m}{m+2}\right)^{-m}\chi'(z_p) - m^2\,\chi'(t_p)\right]\chi(t_p)}{2\left[\chi'(t_p) - \left(\frac{m}{m+2}\right)^{-m}\chi'(z_p)\right]\chi'(t_p)}.$$
Li et al. method [22] (LCN):
$$z_p = t_p - \frac{2m}{m+2}\,\frac{\chi(t_p)}{\chi'(t_p)}, \qquad t_{p+1} = t_p - \alpha_1\,\frac{\chi(t_p)}{\chi'(z_p)} - \frac{\chi(t_p)}{\alpha_2\,\chi'(t_p) + \alpha_3\,\chi'(z_p)},$$
where
$$\alpha_1 = -\frac{1}{2}\left(\frac{m}{m+2}\right)^m\frac{m(m^4 + 4m^3 - 16m - 16)}{m^3 - 4m + 8}, \qquad \alpha_2 = -\frac{(8 - 4m + m^3)^2}{m(m^2 + 2m - 4)(m^4 + 4m^3 - 4m^2 - 16m + 16)}, \qquad \alpha_3 = \frac{m^2(m^3 - 4m + 8)\left(\frac{m}{m+2}\right)^{-m}}{(m^2 + 2m - 4)(m^4 + 4m^3 - 4m^2 - 16m + 16)}.$$
Sharma-Sharma method [23] (SS):
$$z_p = t_p - \frac{2m}{m+2}\,\frac{\chi(t_p)}{\chi'(t_p)}, \qquad t_{p+1} = t_p - \frac{m}{8}\left[(m^3 - 4m + 8) - (2+m)^2\left(\frac{m}{m+2}\right)^m\frac{\chi'(t_p)}{\chi'(z_p)}\left(2(m-1) - (2+m)\left(\frac{m}{m+2}\right)^m\frac{\chi'(t_p)}{\chi'(z_p)}\right)\right]\frac{\chi(t_p)}{\chi'(t_p)}.$$
Zhou et al. method [24] (ZCS):
$$z_p = t_p - \frac{2m}{m+2}\,\frac{\chi(t_p)}{\chi'(t_p)}, \qquad t_{p+1} = t_p - \frac{m}{8}\left[m^3\left(\frac{m+2}{m}\right)^{2m}\left(\frac{\chi'(z_p)}{\chi'(t_p)}\right)^2 - 2m^2(3+m)\left(\frac{2+m}{m}\right)^m\frac{\chi'(z_p)}{\chi'(t_p)} + \left(m^3 + 6m^2 + 8m + 8\right)\right]\frac{\chi(t_p)}{\chi'(t_p)}.$$
Soleymani et al. method [25] (SBL):
$$z_p = t_p - \frac{2m}{m+2}\,\frac{\chi(t_p)}{\chi'(t_p)}, \qquad t_{p+1} = t_p - \frac{\chi'(z_p)\,\chi(t_p)}{q_1\left(\chi'(z_p)\right)^2 + q_2\,\chi'(z_p)\chi'(t_p) + q_3\left(\chi'(t_p)\right)^2},$$
where
$$q_1 = \frac{1}{16}\, m^{3-m}(m+2)^m, \qquad q_2 = \frac{8 - m(m+2)(m^2 - 2)}{8m}, \qquad q_3 = \frac{1}{16}(m-2)\, m^{m-1}(m+2)^{3-m}.$$
The numerical work of this study is performed in the software Mathematica [50]. In the computations, we use the value 0.01 for the parameter β. The motivation for taking a small value is clear from the Traub-Steffensen formula (1), which is obtained by replacing the derivative in the Newton formula (2) with a divided difference, as shown in the Introduction: small values of β make the divided difference a more accurate approximation of the derivative. The results displayed in Table 1, Table 2, Table 3, Table 4 and Table 5 are reported in respect of the following:
(i)
The number of iterations ($p + 1$) taken by the algorithms to satisfy the stopping criterion $|t_{p+1} - t_p| + |\chi(t_{p+1})| < 10^{-100}$.
(ii)
The errors of the first three iterations, $|t_{p+1} - t_p|$.
(iii)
The calculated convergence order (CCO).
(iv)
The total time (in seconds) consumed by the algorithms.
To calculate the convergence order (CCO), we use the formula (see [51])
$$\mathrm{CCO} = \frac{\log\left|e_{p+2}/e_{p+1}\right|}{\log\left|e_{p+1}/e_p\right|}, \quad \text{for each } p = 1, 2, \ldots,$$
where $e_p = t_p - \bar{a}$.
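For instance, the CCO can be evaluated from three consecutive errors as in the short sketch below (our own illustration; the error values shown are made-up numbers decreasing roughly with fourth order).

```python
# Sketch: computational order of convergence (CCO) from three consecutive errors.
import math

def cco(e_p, e_p1, e_p2):
    return math.log(abs(e_p2 / e_p1)) / math.log(abs(e_p1 / e_p))

# Hypothetical errors shrinking roughly as e -> e^4:
print(cco(1e-2, 5e-9, 3e-34))   # prints a value close to 4
```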
Let us consider the following problems for the testing:
Problem 1: The Van der Waals equation for a gas [42,52] is given by
$$\left(P + \frac{a_1 n^2}{V^2}\right)\left(V - n a_2\right) = nRT, \tag{33}$$
where R is the universal gas constant, T the temperature, P the pressure, V the volume, n the number of moles, and $a_1$, $a_2$ are parameters whose values depend on the particular gas. To calculate the volume V, we can write (33) as
$$PV^3 - (n a_2 P + nRT)V^2 + a_1 n^2 V - a_1 a_2 n^2 = 0. \tag{34}$$
One can find values of n, P, T, $a_1$ and $a_2$ of a particular gas [52] such that Equation (34) has three roots. So, by using a specific set of values, we have
$$\chi_1(t) = t^3 - 5.22\,t^2 + 9.0825\,t - 5.2675,$$
that has three roots a ¯ = 1.72 , 1.75 , 1.75 . So, our desired root is a ¯ = 1.75 with m = 2 . The methods are tested for two initial guesses t 0 = 2.45 , 3 . Computed results are given in Table 6. Figure 1 and Figure 2 represent the graph of the errors committed by methods as iteration proceeds for χ 1 ( t ) at t 0 = 2.45 and t 0 = 3 , respectively. However, some graphs overlap each other. So, to make it more clear, we further draw the graphs in Figure 3 and Figure 4 for the following methods: LLC, LCN, SS, ZCS, SBL, and Figure 5 and Figure 6 for the methods: NM1, NM2, NM3 and NM4. To make the pictures more clear, the bending portion of the lines is shown in sub-figures within the Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6. It can be observed from the graphs that newly proposed methods are good competitors to existing Newton-like methods. Figure 1, Figure 2, Figure 3, Figure 4, Figure 5 and Figure 6 are drawn using the Mathematica software. These figures are the geometrical representation of the errors shown in columns 3, 4, and 5 of Table 6 that help us visualize methods’ behavior for different initial guesses.
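As a quick sanity check in plain double precision (so the very small errors in Table 6 cannot be reproduced; the paper's experiments use Mathematica with multiprecision arithmetic), the earlier Python sketches can be applied to $\chi_1$ with the NM1 weight:

```python
# Quick double-precision check of NM1 on chi_1 (reuses solve and omega_nm1
# from the earlier sketches); the iterates approach the double root 1.75.
chi1 = lambda t: t ** 3 - 5.22 * t ** 2 + 9.0825 * t - 5.2675
root, iters = solve(chi1, 2.45, 2, omega_nm1(2))
print(root, iters)
```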
Table 1. Numerical problems 2–5.

Problem 2: Isentropic supersonic flow problem [53]
$\chi_2(t) = \left[\tan^{-1}\frac{\sqrt{5}}{2} - \tan^{-1}\left(\sqrt{t^2 - 1}\right) + \sqrt{6}\left(\tan^{-1}\sqrt{\frac{t^2 - 1}{6}} - \tan^{-1}\frac{1}{2}\sqrt{\frac{5}{6}}\right) - \frac{11}{63}\right]^3$
Root: 1.8411…   Multiplicity: 3   Initial guesses: 1.5 and 1.7   Results: Table 2

Problem 3: Planck law of radiation problem [54]
$\chi_3(t) = \left(e^{-t} - 1 + \frac{t}{5}\right)^4$
Root: 4.9651…   Multiplicity: 4   Initial guesses: 2.5 and 5.5   Results: Table 3

Problem 4: Kepler's problem [49]
$\chi_4(t) = \left(t - \frac{1}{4}\sin(t) - \frac{\pi}{5}\right)^5$
Root: 0.8093…   Multiplicity: 5   Initial guesses: 1 and 1.4   Results: Table 4

Problem 5: Complex root problem
$\chi_5(t) = t\left(t^2 + 1\right)\left(2e^{t^2 + 1} + t^2 - 1\right)\cosh^4\left(\frac{\pi t}{2}\right)$
Root: i   Multiplicity: 6   Initial guesses: 1.1 i and 1.3 i   Results: Table 5
Table 2. Numerical results of methods for problem 2.

Methods   p + 1   |t2 − t1|   |t3 − t2|   |t4 − t3|   CCO   CPU-Time

t0 = 1.5
LLC   4   8.19 × 10^−4   2.64 × 10^−15   2.87 × 10^−61   4   2.1224
LCN   4   8.19 × 10^−4   2.61 × 10^−15   2.73 × 10^−61   4   2.4660
SS    4   8.19 × 10^−4   2.55 × 10^−15   2.44 × 10^−61   4   2.4653
ZCS   4   8.19 × 10^−4   2.40 × 10^−15   1.79 × 10^−61   4   2.4492
SBL   4   8.19 × 10^−4   2.53 × 10^−15   2.34 × 10^−61   4   2.7765
NM1   4   2.76 × 10^−5   8.29 × 10^−21   6.75 × 10^−83   4   1.8522
NM2   4   2.76 × 10^−5   8.05 × 10^−21   5.83 × 10^−83   4   1.8878
NM3   4   2.76 × 10^−5   8.29 × 10^−21   6.75 × 10^−83   4   1.8867
NM4   4   2.76 × 10^−5   7.69 × 10^−21   4.65 × 10^−83   4   1.9042

t0 = 1.7
LLC   4   7.01 × 10^−6   1.42 × 10^−23   2.43 × 10^−94   4   2.6054
LCN   4   7.00 × 10^−6   1.40 × 10^−23   2.27 × 10^−94   4   2.8556
SS    4   6.98 × 10^−6   1.36 × 10^−23   1.93 × 10^−94   4   2.9173
ZCS   4   6.93 × 10^−6   1.24 × 10^−23   1.27 × 10^−94   4   2.8868
SBL   4   6.97 × 10^−6   1.34 × 10^−23   1.82 × 10^−94   4   3.2764
NM1   4   4.04 × 10^−6   3.82 × 10^−24   3.04 × 10^−96   4   2.4214
NM2   4   3.99 × 10^−6   3.51 × 10^−24   2.11 × 10^−96   4   2.5285
NM3   4   4.04 × 10^−6   3.82 × 10^−24   3.04 × 10^−96   4   2.5432
NM4   4   3.90 × 10^−6   3.09 × 10^−24   1.21 × 10^−96   4   2.4967
Table 3. Numerical results of methods for problem 3.

Methods   p + 1   |t2 − t1|   |t3 − t2|   |t4 − t3|   CCO   CPU-Time

t0 = 2.5
LLC   5   2.94   8.02 × 10^−3   4.02 × 10^−12   4   0.8893
LCN   5   2.93   7.94 × 10^−3   3.86 × 10^−12   4   1.2483
SS    5   2.90   7.78 × 10^−3   3.56 × 10^−12   4   1.2791
ZCS   5   2.86   7.55 × 10^−3   3.16 × 10^−12   4   1.2172
SBL   5   2.77   7.03 × 10^−3   2.37 × 10^−12   4   1.4986
NM1   5   4.68   9.00 × 10^−4   9.07 × 10^−17   4   0.5946
NM2   5   4.58   8.65 × 10^−4   7.67 × 10^−17   4   0.6385
NM3   5   4.66   8.94 × 10^−4   8.82 × 10^−17   4   0.6843
NM4   5   4.87   9.62 × 10^−4   1.15 × 10^−16   4   0.6682

t0 = 5.5
LLC   4   4.91 × 10^−5   5.70 × 10^−21   1.03 × 10^−84   4   0.6545
LCN   4   4.91 × 10^−5   5.70 × 10^−21   1.03 × 10^−84   4   1.0772
SS    4   4.92 × 10^−5   5.71 × 10^−21   1.04 × 10^−84   4   1.0301
ZCS   4   4.92 × 10^−5   5.72 × 10^−21   1.05 × 10^−84   4   1.0142
SBL   4   4.92 × 10^−5   5.73 × 10^−21   1.06 × 10^−84   4   1.2646
NM1   3   5.57 × 10^−6   1.33 × 10^−25   0   4   0.4534
NM2   3   5.53 × 10^−6   1.28 × 10^−25   0   4   0.5173
NM3   3   5.57 × 10^−6   1.33 × 10^−25   0   4   0.5238
NM4   3   5.47 × 10^−6   1.21 × 10^−25   0   4   0.4923
Table 4. Numerical results of methods for problem 4.

Methods   p + 1   |t2 − t1|   |t3 − t2|   |t4 − t3|   CCO   CPU-Time

t0 = 1
LLC   4   1.24 × 10^−5   2.79 × 10^−22   7.12 × 10^−89   4   0.8422
LCN   4   1.24 × 10^−5   2.74 × 10^−22   6.60 × 10^−89   4   0.9675
SS    4   1.23 × 10^−5   2.61 × 10^−22   5.40 × 10^−89   4   0.9833
ZCS   4   1.21 × 10^−5   2.45 × 10^−22   4.14 × 10^−89   4   0.9215
SBL   4   1.15 × 10^−5   1.94 × 10^−22   1.57 × 10^−89   4   1.1863
NM1   4   5.37 × 10^−6   2.28 × 10^−24   7.42 × 10^−98   4   0.5865
NM2   4   4.99 × 10^−6   1.54 × 10^−24   1.39 × 10^−98   4   0.6246
NM3   4   5.38 × 10^−6   2.29 × 10^−24   7.48 × 10^−98   4   0.5934
NM4   4   4.44 × 10^−6   8.08 × 10^−24   8.88 × 10^−100   4   0.6440

t0 = 1.4
LLC   4   5.08 × 10^−4   7.79 × 10^−16   4.30 × 10^−63   4   0.8114
LCN   4   5.03 × 10^−4   7.44 × 10^−16   3.56 × 10^−63   4   1.0320
SS    4   4.88 × 10^−4   6.57 × 10^−16   2.15 × 10^−63   4   1.0146
ZCS   4   4.69 × 10^−4   5.57 × 10^−16   1.11 × 10^−63   4   0.9998
SBL   4   3.95 × 10^−4   2.72 × 10^−16   6.10 × 10^−65   4   1.1393
NM1   4   6.89 × 10^−4   6.17 × 10^−16   3.96 × 10^−64   4   0.6155
NM2   4   6.56 × 10^−4   4.59 × 10^−16   1.10 × 10^−64   4   0.6442
NM3   4   6.89 × 10^−4   6.19 × 10^−16   4.03 × 10^−64   4   0.6561
NM4   4   6.08 × 10^−4   2.85 × 10^−16   1.38 × 10^−65   4   0.6240
Table 5. Numerical results of methods for problem 5.

Methods   p + 1   |t2 − t1|   |t3 − t2|   |t4 − t3|   CCO   CPU-Time

t0 = 1.1 i
LLC   4   1.75 × 10^−5   3.01 × 10^−20   2.66 × 10^−79   4   1.7604
LCN   4   1.75 × 10^−5   3.02 × 10^−20   2.68 × 10^−79   4   2.4491
SS    4   1.75 × 10^−5   3.03 × 10^−20   2.73 × 10^−79   4   2.4653
ZCS   4   1.75 × 10^−5   3.05 × 10^−20   2.79 × 10^−79   4   2.5426
SBL   4   1.76 × 10^−5   3.16 × 10^−20   3.28 × 10^−79   4   3.1042
NM1   4   6.74 × 10^−6   2.93 × 10^−22   1.05 × 10^−87   4   0.5311
NM2   4   6.69 × 10^−6   2.82 × 10^−22   8.87 × 10^−88   4   0.5380
NM3   4   6.74 × 10^−6   2.93 × 10^−22   1.05 × 10^−87   4   0.5935
NM4   4   6.63 × 10^−6   2.66 × 10^−22   6.92 × 10^−88   4   0.5468

t0 = 1.3 i
LLC   4   1.31 × 10^−4   9.41 × 10^−17   2.53 × 10^−65   4   1.6234
LCN   4   1.31 × 10^−4   9.42 × 10^−17   2.54 × 10^−65   4   2.5432
SS    4   1.31 × 10^−4   9.43 × 10^−17   2.56 × 10^−65   4   2.6216
ZCS   4   1.31 × 10^−4   9.45 × 10^−17   2.58 × 10^−65   4   2.6217
SBL   4   1.31 × 10^−4   9.57 × 10^−17   2.75 × 10^−65   4   3.1824
NM1   4   2.86 × 10^−5   9.54 × 10^−20   1.18 × 10^−77   4   0.5162
NM2   4   2.86 × 10^−5   9.41 × 10^−20   1.10 × 10^−77   4   0.6243
NM3   4   2.86 × 10^−5   9.54 × 10^−20   1.18 × 10^−77   4   0.5922
NM4   4   2.86 × 10^−5   9.23 × 10^−20   9.98 × 10^−78   4   0.5610
Table 6. Numerical results of methods for problem 1.

Methods   p + 1   |t2 − t1|   |t3 − t2|   |t4 − t3|   CCO   CPU-Time

t0 = 2.45
LLC   6   8.47 × 10^−2   7.16 × 10^−3   1.61 × 10^−5   4   0.0787
LCN   6   8.47 × 10^−2   7.16 × 10^−3   1.61 × 10^−5   4   0.0786
SS    6   8.63 × 10^−2   7.67 × 10^−3   2.17 × 10^−5   4   0.1095
ZCS   6   8.96 × 10^−2   8.83 × 10^−3   4.03 × 10^−5   4   0.0786
SBL   6   8.47 × 10^−2   7.16 × 10^−3   1.61 × 10^−5   4   0.0938
NM1   6   9.21 × 10^−2   9.65 × 10^−3   6.36 × 10^−5   4   0.0753
NM2   6   9.11 × 10^−2   9.25 × 10^−3   5.27 × 10^−5   4   0.0821
NM3   6   9.22 × 10^−2   9.68 × 10^−3   6.43 × 10^−5   4   0.0778
NM4   6   8.94 × 10^−2   8.68 × 10^−3   3.94 × 10^−5   4   0.0782

t0 = 3
LLC   6   1.53 × 10^−1   1.71 × 10^−2   2.26 × 10^−4   4   0.0982
LCN   6   1.53 × 10^−1   1.71 × 10^−2   2.26 × 10^−4   4   0.1102
SS    6   1.56 × 10^−1   1.81 × 10^−2   2.80 × 10^−4   4   0.1096
ZCS   6   1.61 × 10^−1   2.02 × 10^−2   4.32 × 10^−4   4   0.1145
SBL   6   1.53 × 10^−1   1.71 × 10^−2   2.26 × 10^−4   4   0.0944
NM1   6   1.69 × 10^−1   2.22 × 10^−2   6.10 × 10^−4   4   0.0682
NM2   6   1.67 × 10^−1   2.14 × 10^−2   5.37 × 10^−4   4   0.0924
NM3   6   1.69 × 10^−1   2.22 × 10^−2   6.15 × 10^−4   4   0.0947
NM4   6   1.63 × 10^−1   2.03 × 10^−2   4.37 × 10^−4   4   0.0936
The rest of the problems (2–5) are shown in Table 1. For each problem, this table lists the considered problem and its function, the desired root, the multiplicity of the root, the initial guesses, and the table containing the numerical results of the corresponding problem.
We can see from the computed results in Table 6 and Table 2, Table 3, Table 4 and Table 5 that the new methods have good convergence behavior. The increase in precision per iteration, as seen in the numerical results, is the reason for this good convergence; it also reflects the stable nature of the methods. It is also clear from the computed results that the accuracy attained by the new methods is at least as good as that of the existing methods. We display the value 0 in Table 3 at the stage when the stopping criterion has been satisfied. The computed convergence order in each problem, shown in the tables, verifies the theoretical convergence order of four. The efficiency of the new methods can also be judged by the fact that the CPU time they require is less than that required by the existing ones. This is shown in Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, where we draw a bar chart of the time consumed by the methods.
Note that, in general, the new methods are more efficient than the existing ones. The new methods have also been applied to many different practical problems to confirm their consistency. We conclude this section with the remark that the new derivative-free iterative schemes are more effective.

5. Conclusions

In the presence of symmetry in a system, when deriving the equations characterizing the system, we often end up having to deal with equations with multiple roots. This study presents a family of optimal methods for solving nonlinear equations with multiple roots. Some special cases of the family have been presented. The main advantage of the new methods is that they are derivative-free. We have employed the methods on some nonlinear real-life problems, viz. the Van der Waals problem, the Manning problem, the Planck law radiation problem, and Kepler's problem. The new methods have also been compared with existing methods of the same order. It has been observed that the performance of the methods may be the same geometrically even if they are different mathematically. Finally, we conclude this study with a remark: the derivative-free algorithms presented here can be a better choice than existing Newton-type algorithms in cases where derivatives are difficult to obtain or expensive to compute.

Author Contributions

Conceptualization, methodology, S.K. and J.R.S.; Formal analysis, validation, resources, L.J.; Software, writing-original draft preparation, J.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

We would like to express our gratitude to the anonymous reviewers for their help with the publication of this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Haruo, H. Topological Index. A Newly Proposed Quantity Characterizing the Topological Nature of Structural Isomers of Saturated Hydrocarbons. Bull. Chem. Soc. Jpn. 1971, 44, 2332–2339. [Google Scholar] [CrossRef] [Green Version]
  2. Jäntschi, L.; Bolboacă, S.D.; Furdui, C.M. Characteristic and counting polynomials: Modelling nonane isomers properties. Mol. Simulat. 2009, 35, 220–227. [Google Scholar] [CrossRef]
  3. Gander, W. On Halley’s iteration method. Am. Math. Mon. 1985, 92, 131–134. [Google Scholar] [CrossRef]
  4. Proinov, P.D. General local convergence theory for a class of iterative processes and its applications to Newton’s method. J. Complex. 2009, 25, 38–62. [Google Scholar] [CrossRef] [Green Version]
  5. Proinov, P.D.; Ivanov, S.I.; Petković, M.S. On the convergence of Gander’s type family of iterative methods for simultaneous approximation of polynomial zeros. Appl. Math. Comput. 2019, 349, 168–183. [Google Scholar] [CrossRef]
  6. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comput. 2014, 226, 635–660. [Google Scholar] [CrossRef] [Green Version]
  7. McNamee, J.M.; Pan, V.Y. Numerical Methods for Roots of Polynomials—Part II; Elsevier Science: Amsterdam, The Netherlands, 2013. [Google Scholar]
  8. Soleymani, F. Some optimal iterative methods and their with memory variants. J. Egypt. Math. Soc. 2013, 21, 133–141. [Google Scholar] [CrossRef] [Green Version]
  9. Hafiz, M.A.; Bahgat, M.S.M. Solving nonsmooth equations using family of derivative-free optimal methods. J. Egypt. Math. Soc. 2013, 21, 38–43. [Google Scholar] [CrossRef] [Green Version]
  10. Sihwail, R.; Solaiman, O.S.; Ariffin, K.A.Z. New robust hybrid Jarratt-Butterfly optimization algorithm for nonlinear models. J. King Saud Univ. Comput. Inf. Sci. 2022, 34, 8207–8220. [Google Scholar] [CrossRef]
  11. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  12. Galantai, A.; Hegedus, C.J. A study of accelerated Newton methods for multiple polynomial roots. Numer. Algorithms 2010, 54, 219–243. [Google Scholar] [CrossRef]
  13. Halley, E. A new exact and easy method of finding the roots of equations generally and that without any previous reduction. Philos. Trans. R. Soc. Lond. 1694, 18, 136–147. [Google Scholar] [CrossRef] [Green Version]
  14. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1976, 27, 257–269. [Google Scholar] [CrossRef]
  15. Neta, B.; Johnson, A.N. High-order nonlinear solver for multiple roots. Comput. Math. Appl. 2008, 55, 2012–2017. [Google Scholar] [CrossRef] [Green Version]
  16. Victory, H.D.; Neta, B. A higher order method for multiple zeros of nonlinear functions. Int. J. Comput. Math. 1982, 12, 329–335. [Google Scholar] [CrossRef]
  17. Akram, S.; Zafar, F.; Yasmin, N. An optimal eighth-order family of iterative methods for multiple roots. Mathematics 2019, 7, 672. [Google Scholar] [CrossRef] [Green Version]
  18. Akram, S.; Akram, F.; Junjua, M.U.D.; Arshad, M.; Afzal, T. A family of optimal Eighth order iteration functions for multiple roots and its dynamics. J. Math. 2021, 5597186. [Google Scholar] [CrossRef]
  19. Ivanov, S.I. Unified convergence analysis of Chebyshev-Halley methods for multiple polynomial zeros. Mathematics 2022, 12, 135. [Google Scholar] [CrossRef]
  20. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef] [Green Version]
  21. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar] [CrossRef]
  22. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef] [Green Version]
  23. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
  24. Zhou, X.; Chen, X.; Song, Y. Constructing higher-order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef] [Green Version]
  25. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
  26. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef] [Green Version]
  27. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
  28. Behl, R.; Alsolami, A.J.; Pansera, B.A.; Al-Hamdan, W.M.; Salimi, M.; Ferrara, M. A new optimal family of Schröder’s method for multiple zeros. Mathematics 2019, 7, 1076. [Google Scholar] [CrossRef] [Green Version]
  29. Soleymani, F. Efficient optimal eighth-order derivative-free methods for nonlinear equations. Jpn. J. Ind. Appl. Math. 2013, 30, 287–306. [Google Scholar] [CrossRef]
  30. Larson, J.; Menickelly, M.; Wild, S. Derivative-free optimization methods. Acta Numer. 2019, 28, 287–404. [Google Scholar] [CrossRef] [Green Version]
  31. Moré, J.J.; Wild, S.M. Benchmarking Derivative-Free Optimization Algorithms. SIAM J. Optim. 2009, 20, 172–191. [Google Scholar] [CrossRef] [Green Version]
  32. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452. [Google Scholar] [CrossRef] [Green Version]
  33. Sharma, J.R.; Kumar, S.; Argyros, I.K. Development of optimal eighth order derivative-free methods for multiple roots of nonlinear equations. Symmetry 2019, 11, 766. [Google Scholar] [CrossRef] [Green Version]
  34. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Agarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038. [Google Scholar] [CrossRef]
  35. Kansal, M.; Alshomrani, A.S.; Bhalla, S.; Behl, R.; Salimi, M. One parameter optimal derivative-free family to find the multiple roots of algebraic nonlinear equations. Mathematics 2020, 8, 2223. [Google Scholar] [CrossRef]
  36. Behl, R.; Cordero, A.; Torregrosa, J.R. A new higher-order optimal derivative free scheme for multiple roots. J. Comput. Appl. Math. 2022, 404, 113773. [Google Scholar] [CrossRef]
  37. Kumar, S.; Kumar, D.; Kumar, R. Development of cubically convergent iterative derivative free methods for computing multiple roots. SeMA J. 2022. [Google Scholar] [CrossRef]
  38. Kumar, D.; Sharma, J.R.; Cesarano, C. An Efficient Class of Traub–Steffensen-Type Methods for Computing Multiple Zeros. Axioms 2019, 8, 65. [Google Scholar] [CrossRef] [Green Version]
  39. Zafar, F.; Iqbal, S.; Nawaz, T. A Steffensen type optimal eighth order multiple root finding scheme for nonlinear equations. J. Comp. Math. Data Sci. 2023, 7, 100079. [Google Scholar] [CrossRef]
  40. Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 16, 64–72. [Google Scholar] [CrossRef]
  41. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
  42. Moriguchi, I.; Kanada, Y.; Komatsu, K. Van der Waals Volume and the Related Parameters for Hydrophobicity in Structure-Activity Studies. Chem. Pharm. Bull. 1976, 24, 1799–1806. [Google Scholar] [CrossRef] [Green Version]
  43. Quinlan, N.; Kendall, M.; Bellhouse, B.; Ainsworth, R.W. Investigations of gas and particle dynamics in first generation needle-free drug delivery devices. Shock Waves 2001, 10, 395–404. [Google Scholar] [CrossRef]
  44. Parrish, J.A. Photobiologic principles of phototherapy and photochemotherapy of psoriasis. Pharmacol. Therapeut. 1981, 15, 439–446. [Google Scholar] [CrossRef]
  45. Balasubramanian, K. Integration of Graph Theory and Quantum Chemistry for Structure-Activity Relationships. SAR QSAR Environ. Res. 1994, 2, 59–77. [Google Scholar] [CrossRef]
  46. Basak, S.C.; Bertelsen, S.; Grunwald, G.D. Application of graph theoretical parameters in quantifying molecular similarity and structure-activity relationships. J. Chem. Inf. Comput. Sci. 1994, 34, 270–276. [Google Scholar] [CrossRef]
  47. Ivanciuc, O. Chemical Graphs, Molecular Matrices and Topological Indices in Chemoinformatics and Quantitative Structure-Activity Relationships. Curr. Comput. Aided Drug Des. 2013, 9, 153–163. [Google Scholar] [CrossRef]
  48. Matsuzaka, Y.; Uesawa, Y. Ensemble Learning, Deep Learning-Based and Molecular Descriptor-Based Quantitative Structure–Activity Relationships. Molecules 2023, 28, 2410. [Google Scholar] [CrossRef] [PubMed]
  49. Danby, J.M.A.; Burkardt, T.M. The solution of Kepler’s equation. I. Celest. Mech. 1983, 40, 95–107. [Google Scholar] [CrossRef]
  50. Wolfram, S. The Mathematica Book, 5th ed.; Wolfram Media: Champaign, IL, USA, 2003. [Google Scholar]
  51. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  52. Hueso, J.L.; Martinez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2014, 53, 880–892. [Google Scholar] [CrossRef] [Green Version]
  53. Hoffman, J.D. Numerical Methods for Engineers and Scientists; McGraw-Hill Book Company: New York, NY, USA, 1992. [Google Scholar]
  54. Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: New Delhi, India, 2006. [Google Scholar]
Figure 1. Error of all methods for $\chi_1(t)$ at $t_0 = 2.45$.
Figure 2. Error of all methods for $\chi_1(t)$ at $t_0 = 3$.
Figure 3. Error of LLC, LCN, SS, ZCS, SBL for $\chi_1(t)$ at $t_0 = 2.45$.
Figure 4. Error of LLC, LCN, SS, ZCS, SBL for $\chi_1(t)$ at $t_0 = 3$.
Figure 5. Error of NM1, NM2, NM3, NM4 for $\chi_1(t)$ at $t_0 = 2.45$.
Figure 6. Error of NM1, NM2, NM3, NM4 for $\chi_1(t)$ at $t_0 = 3$.
Figure 7. Bar chart for $\chi_1(t)$.
Figure 8. Bar chart for $\chi_2(t)$.
Figure 9. Bar chart for $\chi_3(t)$.
Figure 10. Bar chart for $\chi_4(t)$.
Figure 11. Bar chart for $\chi_5(t)$.