Article

A New Third-Order Family of Multiple Root-Findings Based on Exponential Fitted Curve

1 University Institute of Engineering and Technology, Panjab University, Chandigarh 160014, India
2 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
3 Indian Institute of Technology, Kharagpur 721302, India
4 Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(3), 156; https://doi.org/10.3390/a16030156
Submission received: 17 February 2023 / Revised: 2 March 2023 / Accepted: 9 March 2023 / Published: 12 March 2023
(This article belongs to the Special Issue Mathematical Modelling in Engineering and Human Behaviour)

Abstract
In this paper, we present a new third-order family of iterative methods for computing the multiple roots of nonlinear equations when the multiplicity (m ≥ 1) is known in advance. There is a plethora of third-order point-to-point methods available in the literature, but our methods are based on a geometric derivation and converge to the required zero even when the derivative becomes zero, or close to zero, in the vicinity of the required zero. We use an exponential fitted curve together with tangency conditions for the development of our schemes. The well-known Chebyshev, Halley, super-Halley and Chebyshev–Halley methods are special members of our family for m = 1. Complex dynamics techniques allow us to see the relation between the elements of the family of iterative schemes and the wideness of the basins of attraction of the simple and multiple roots, on quadratic polynomials. Several applied problems are considered in order to demonstrate the performance of our methods and for comparison with the existing ones. Based on the numerical outcomes, we deduce that our methods exhibit better performance than the earlier methods, even in the case of multiple roots of high multiplicity.

1. Introduction

Finding the solutions of nonlinear models that often arise in engineering, chemistry, economics and the applied sciences is one of the most fascinating and difficult problems in the field of numerical analysis. In general, locating such solutions exactly is not possible, so we focus on iterative methods. One of the most celebrated and popular methods, with second order of convergence, is the Newton–Raphson method, which is given by
$$x_{\sigma+1}=x_\sigma-\frac{f(x_\sigma)}{f'(x_\sigma)}.\qquad(1)$$
However, it has two major issues: the first one is that (1) fails when the first derivative f'(x) equals zero, or is close to zero, in the vicinity of the required zero of f(x); the second one is that it loses the second order of convergence in the case of multiple roots. In order to overcome both problems, Schröder [1,2] suggested a second-order modification of Newton's method for multiple roots, for the case when the multiplicity m ≥ 1 of the required zero is known in advance, defined as
$$x_{\sigma+1}=x_\sigma-m\,\frac{f(x_\sigma)}{f'(x_\sigma)}.\qquad(2)$$
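As a minimal sketch, Schröder's iteration (2) can be implemented in a few lines; the test function and starting point below are illustrative choices, not taken from the paper:

```python
def schroder(f, df, x0, m, tol=1e-12, max_iter=100):
    """Schroder's modification of Newton's method for a zero of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:                      # landed exactly on the root
            return x
        step = m * fx / df(x)            # x_{s+1} = x_s - m f / f'
        x -= step
        if abs(step) < tol:
            return x
    return x

# double root at x = 2: plain Newton is only linear here, Schroder stays quadratic
root = schroder(lambda x: (x - 2.0)**2, lambda x: 2.0 * (x - 2.0), 3.0, m=2)
```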
Later on, several high-order modifications of scheme (2) have been suggested and analyzed. A third-order Chebyshev-type method for multiple zeros, suggested in [3,4], is defined as
$$x_{\sigma+1}=x_\sigma-\frac{m(3-m)}{2}\,\frac{f(x_\sigma)}{f'(x_\sigma)}-\frac{m^2}{2}\,\frac{f^2(x_\sigma)f''(x_\sigma)}{f'^3(x_\sigma)}.\qquad(3)$$
We denote this method by CS for the computational comparison. A detailed local convergence analysis of this method, for polynomial zeros, has been presented in [5]. Other variants of Chebyshev's scheme for multiple zeros can be found in [6].
In addition, Obreshkov [3] and, later, Hansen and Patrick [7] proposed the following modified Halley's method for multiple zeros:

$$x_{\sigma+1}=x_\sigma-\frac{2m\,v_\sigma}{m+1-2m\,A(x_\sigma)\,v_\sigma},\qquad(4)$$

where A(x_σ) = f''(x_σ)/(2f'(x_σ)) and v_σ = f(x_σ)/f'(x_σ); this scheme is denoted by HS. Theorem 4.4 of [8] gives exact bounds of the convergence domain, together with error estimates and an estimate of the asymptotic error constant of Halley's method for multiple roots.
Ostrowski [9] presented the following square-root scheme for multiple roots:

$$x_{\sigma+1}=x_\sigma-\frac{\sqrt{m}\,v_\sigma}{\sqrt{1-2A(x_\sigma)v_\sigma}}.\qquad(5)$$
Method (5) is denoted by O S in the computational work.
Further, Osada [10] suggested the following well-known third-order method for multiple roots:

$$x_{\sigma+1}=x_\sigma-\frac{1}{2}m(m+1)\,v_\sigma+\frac{1}{2}(m-1)^2\,\frac{f'(x_\sigma)}{f''(x_\sigma)}.\qquad(6)$$
For computational purposes, we denote method (6) by ONS.
Chun and Neta [11] proposed another third-order method, defined by

$$x_{\sigma+1}=x_\sigma-\frac{2m^2\,f^2(x_\sigma)f''(x_\sigma)}{m(3-m)\,f(x_\sigma)f'(x_\sigma)f''(x_\sigma)+(m-1)^2\,f'^3(x_\sigma)}.\qquad(7)$$
Expression (7) is denoted by CN for the sake of computational comparison. For a detailed historical survey of these methods, we refer to [12].
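As an illustration, the modified Halley scheme (4) can be coded directly from v_σ and A(x_σ); the polynomial below, with a triple root at x = 2, is our own illustrative choice:

```python
def halley_multiple(f, df, d2f, x0, m, tol=1e-12, max_iter=100):
    """Modified Halley's method (4) for a root of multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if fx == 0:
            return x
        v = fx / dfx                       # v_sigma = f / f'
        A = d2f(x) / (2.0 * dfx)           # A(x_sigma) = f'' / (2 f')
        step = 2.0 * m * v / (m + 1.0 - 2.0 * m * A * v)
        x -= step
        if abs(step) < tol:
            return x
    return x

# f(x) = (x - 2)^3 (x + 1): triple root at 2, simple root at -1
f   = lambda x: (x - 2.0)**3 * (x + 1.0)
df  = lambda x: (x - 2.0)**2 * (4.0 * x + 1.0)
d2f = lambda x: (x - 2.0) * (12.0 * x - 6.0)
root = halley_multiple(f, df, d2f, 2.5, m=3)
```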
However, the first problem remains in schemes (2) to (7): the first-order derivative should be neither zero nor close to zero at points near the required root. More information about these and other one-point schemes for simple and multiple roots can be found in the texts by Traub [4], Amat and Busquier [13], Ortega and Rheinboldt [14], Ostrowski [15] and Petković et al. [16], among others.
So, in order to overcome these problems, we suggest a new third-order family of iterative methods in the next section. The beauty of our scheme is that Chebyshev, Halley and super-Halley are not only special cases of our scheme when m = 1; our methods also work for multiple roots and in failure cases (f'(x_λ) = 0 at some x_λ different from the root). The development and order of convergence of our scheme are presented in Section 2. Its stability is analyzed with the techniques of complex discrete dynamics in Section 3. In Section 4, we consider several real-life problems to check the effectiveness of our methods and compare them with the existing ones. We finish in Section 5 with some conclusions.

2. Construction of Higher-Order Scheme

The proposed iterative scheme is defined by fitting the function f(x) in the following form:

$$p_\sigma\left(y^{1/m}e^{-\alpha(x-x_\sigma)}-f^{1/m}(x_\sigma)\right)^2+q_\sigma(x-x_\sigma)\left(y^{1/m}e^{-\alpha(x-x_\sigma)}-f^{1/m}(x_\sigma)\right)+\left(y^{1/m}e^{-\alpha(x-x_\sigma)}-f^{1/m}(x_\sigma)\right)+r_\sigma(x-x_\sigma)+s_\sigma=0,\quad y=y(x),\qquad(8)$$
where α ∈ ℝ, |α| < ∞. By adopting the following tangency conditions at x = x_σ,

$$y(x_\sigma)=f(x_\sigma),\qquad y'(x_\sigma)=f'(x_\sigma),\qquad y''(x_\sigma)=f''(x_\sigma),\qquad(9)$$
we can easily obtain the values of the disposable parameters s_σ, r_σ and p_σ (the parameter q_σ remains free), which are given by

$$s_\sigma=0,\qquad(10)$$

$$r_\sigma=-\frac{f^{1/m}(x_\sigma)\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)}{m f(x_\sigma)},\qquad(11)$$

$$p_\sigma=-\frac{2q_\sigma m f(x_\sigma)\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)+(1-m)f'^2(x_\sigma)+m f(x_\sigma)f''(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{2\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)^2 f^{1/m}(x_\sigma)}.\qquad(12)$$
Therefore, at an estimation x_{σ+1} of the root,

$$y(x_{\sigma+1})\approx 0,\qquad(13)$$

and it follows from (13) that the iterative expression that estimates the root is

$$x_{\sigma+1}=x_\sigma-\frac{f^{1/m}(x_\sigma)\left(p_\sigma f^{1/m}(x_\sigma)-1\right)}{r_\sigma-q_\sigma f^{1/m}(x_\sigma)}.\qquad(14)$$
If expression (8) is an exponentially fitted straight line, then p_σ = q_σ = 0 and, from (14), we obtain

$$x_{\sigma+1}=x_\sigma-\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}.\qquad(15)$$
This is a one-parameter family of modified Newton's methods. In order to preserve quadratic convergence, the parameter should be chosen so that the denominator is largest in magnitude. For α = 0 and m = 1, it reduces to the classical Newton's method.
So, the general iterative expression of our parametric class of iterative procedures is

$$x_{\sigma+1}=x_\sigma-\left(1+\frac{\tfrac{1}{2}L_f}{1+q_\sigma\,\dfrac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}}\right)\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)},\qquad(16)$$

where q_σ is a parameter and

$$L_f=\frac{m f(x_\sigma)f''(x_\sigma)-(m-1)f'^2(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)^2}.$$
This scheme does not fail if f'(x_σ) is very small, or even zero, in the vicinity of the root.
For particular values of q_σ in (16), one can obtain:

1. For q_σ = 0, we have
$$x_{\sigma+1}=x_\sigma-\left(1+\frac{1}{2}L_f\right)\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}.\qquad(17)$$

2. For $q_\sigma=-\dfrac{m f(x_\sigma)f''(x_\sigma)-(m-1)f'^2(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{2m f(x_\sigma)\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)}$, we obtain
$$x_{\sigma+1}=x_\sigma-\frac{2}{2-L_f}\,\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}.\qquad(18)$$

3. For $q_\sigma=-\dfrac{m f(x_\sigma)f''(x_\sigma)-(m-1)f'^2(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{m f(x_\sigma)\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)}$, it yields
$$x_{\sigma+1}=x_\sigma-\left(1+\frac{1}{2}\,\frac{L_f}{1-L_f}\right)\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}.\qquad(19)$$

4. For $q_\sigma=-\beta\,\dfrac{m f(x_\sigma)f''(x_\sigma)-(m-1)f'^2(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{m f(x_\sigma)\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)}$, we get
$$x_{\sigma+1}=x_\sigma-\left(1+\frac{1}{2}\,\frac{L_f}{1-\beta L_f}\right)\frac{m f(x_\sigma)}{f'(x_\sigma)-m\alpha f(x_\sigma)}.\qquad(20)$$

5. For q_σ = ±∞, expression (16) reduces to (15).
It can be checked that, for α = 0 and m = 1, formulas (15) and (17)–(20) reduce to the classical Newton, Chebyshev, Halley, super-Halley and Chebyshev–Halley methods, respectively. In general, the parameter α is chosen so as to give a larger value to the denominator. We can further derive some new families of multipoint iterative methods, free from second derivatives, by discretizing the second-order derivative involved in family (14).
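To illustrate the claimed robustness, the following sketch (assuming the forms of L_f and (18) given above) implements the Halley-type member (18) with α = 1 and m = 2 and applies it to f(x) = x³ − 5.22x² + 9.0825x − 5.2675, the test function of Example 1 below, starting at x₀ = 1.73, where f'(x₀) = 0 and Newton-type denominators vanish:

```python
def family_member_18(f, df, d2f, x0, m, alpha=1.0, ftol=1e-13, max_iter=200):
    """Halley-type member (18) of family (16), with q_sigma chosen as in case 2."""
    x = x0
    for _ in range(max_iter):
        fx, dfx, d2fx = f(x), df(x), d2f(x)
        if abs(fx) < ftol:                   # residual small enough: accept x
            return x
        D = dfx - m * alpha * fx             # shifted denominator: nonzero even when f' = 0
        Lf = (m * fx * d2fx - (m - 1) * dfx**2
              - 2 * m * alpha * fx * dfx + (m * alpha * fx)**2) / D**2
        x -= (2.0 / (2.0 - Lf)) * m * fx / D
    return x

f   = lambda x: x**3 - 5.22 * x**2 + 9.0825 * x - 5.2675   # double root 1.75, simple root 1.72
df  = lambda x: 3.0 * x**2 - 10.44 * x + 9.0825
d2f = lambda x: 6.0 * x - 10.44
root = family_member_18(f, df, d2f, 1.73, m=2)             # f'(1.73) = 0: classical methods fail here
```

The residual-based stopping rule is a pragmatic choice for this double-root example, where evaluating the polynomial near the root is limited by floating-point cancellation.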
In the next result, we prove that scheme (16) attains third order of convergence for all α ∈ ℝ.
Theorem 1.
Let us suppose that x = ξ is a multiple solution of multiplicity m ≥ 1 of the function f. Consider that the function f : D ⊆ ℂ → ℂ is analytic in a region D surrounding the required zero ξ. Then, scheme (16) has third order of convergence, and it satisfies the error equation

$$e_{\sigma+1}=\left[\frac{c_1\left(q_\sigma-3\alpha\right)}{m}+\frac{\alpha\left(3\alpha-2q_\sigma\right)}{2}-\frac{c_2}{m}+\frac{(m+3)c_1^2}{2m^2}\right]e_\sigma^3+O\!\left(e_\sigma^4\right).\qquad(21)$$
Proof. 
Let x = ξ be a multiple zero of f(x) and let $c_k=\frac{m!}{(m+k)!}\,\frac{f^{(m+k)}(\xi)}{f^{(m)}(\xi)}$, k = 1, 2, …, be the asymptotic error constants. Expanding f(x_σ), f'(x_σ) and f''(x_σ) about x = ξ by Taylor's series, we obtain

$$f(x_\sigma)=\frac{f^{(m)}(\xi)}{m!}\,e_\sigma^m\left(1+c_1e_\sigma+c_2e_\sigma^2+c_3e_\sigma^3+c_4e_\sigma^4+O(e_\sigma^5)\right),\qquad(22)$$

$$f'(x_\sigma)=\frac{f^{(m)}(\xi)}{m!}\,e_\sigma^{m-1}\left(m+(m+1)c_1e_\sigma+(m+2)c_2e_\sigma^2+(m+3)c_3e_\sigma^3+(m+4)c_4e_\sigma^4+O(e_\sigma^5)\right),\qquad(23)$$

and

$$f''(x_\sigma)=\frac{f^{(m)}(\xi)}{m!}\,e_\sigma^{m-2}\left(m^2-m+m(m+1)c_1e_\sigma+(m^2+3m+2)c_2e_\sigma^2+(m^2+5m+6)c_3e_\sigma^3+(m^2+7m+12)c_4e_\sigma^4+O(e_\sigma^5)\right),\qquad(24)$$

respectively.
By using expressions (22)–(24), we have

$$L_f=\frac{m f(x_\sigma)f''(x_\sigma)-(m-1)f'^2(x_\sigma)-2m\alpha f(x_\sigma)f'(x_\sigma)+m^2\alpha^2 f^2(x_\sigma)}{\left(f'(x_\sigma)-m\alpha f(x_\sigma)\right)^2}=\left(\frac{2c_1}{m}-2\alpha\right)e_\sigma-\frac{3}{m}\left(m\alpha^2-2c_2-2\alpha c_1+\frac{(m+1)c_1^2}{m}\right)e_\sigma^2+O\!\left(e_\sigma^3\right).\qquad(25)$$
By using the above value of L_f and expressions (22)–(24) in (16), we get

$$e_{\sigma+1}=\left[\frac{c_1\left(q_\sigma-3\alpha\right)}{m}+\frac{\alpha\left(3\alpha-2q_\sigma\right)}{2}-\frac{c_2}{m}+\frac{(m+3)c_1^2}{2m^2}\right]e_\sigma^3+O\!\left(e_\sigma^4\right).\qquad(26)$$
Expression (26) demonstrates the third order of convergence for all α and q_σ. □

3. Stability Analysis

In the previous section, we have designed a modification of the Chebyshev–Halley scheme able to find multiple roots of nonlinear equations, holding the third order of convergence of its original partners. Our aim in this section is to check the role of the parameter of the new class in the stability of the resulting schemes: are these methods also capable of finding simple roots? How far can the initial estimation be in order to assure convergence? Are these situations different depending on the value of q_σ, that is, is the stability different for the members of the set of parametric procedures?
In order to arrange this analysis, we apply our proposed schemes to the nonlinear function p(z) = (z − 1)^m (z + 1), with z = 1 as a multiple root of multiplicity m and a simple root at z = −1. In all cases, α = 1, although qualitatively similar results are found for other values of α. This is the simplest nonlinear function containing two roots, one simple and one m-multiple. Although the results cannot be directly extrapolated to any nonlinear function, several analyses of different nonlinear problems, in the numerical section, confirm these results.
First, we introduce some dynamical concepts used in this paper (see, for instance, [17]). Let R : Ĉ → Ĉ be a rational function, where Ĉ is the Riemann sphere; the orbit of a point z₀ is given as

$$\left\{z_0,\,R(z_0),\,R^2(z_0),\ldots,R^n(z_0),\ldots\right\},$$
where the k-th composition of the map R with itself is denoted by R k .
We analyze the map R by classifying the starting points according to the asymptotic behavior of their orbits. In these terms, a point z₀ is called a fixed point of R if R(z₀) = z₀; it is a periodic point of period p > 1 if R^p(z₀) = z₀ and R^k(z₀) ≠ z₀ for k < p.
On the other hand, a point z₀ is called a critical point of R if R'(z₀) = 0. The asymptotic behavior of the critical points is a key fact for analyzing the stability of a method: a classical result by Fatou and Julia (see, for instance, [18]) states that each immediate basin of attraction holds at least one critical point; that is, in the connected component of the basin of attraction containing the attractor there is also a critical point.
Moreover, a fixed point z₀ of R is said to be attracting if |R'(z₀)| < 1, superattracting if |R'(z₀)| = 0, repulsive if |R'(z₀)| > 1 and parabolic if |R'(z₀)| = 1.
Indeed, when R also depends on one or several parameters, |R'(z₀, α₁, …, α_k)| is not a scalar but a function of the α_i, i = 1, 2, …, k. Then, |R'(z₀, α₁, …, α_k)| is called the stability function of the fixed point, and it gives us the character of the fixed point in terms of the values of α_i, for i = 1, 2, …, k.
On the other hand, let us also remark that, when fixed or critical points are found that do not correspond to the roots of the polynomial p(z), they are called strange fixed points and free critical points, respectively.
The basin of attraction of an attractor α is defined as

$$\mathcal{A}(\alpha)=\left\{z_0\in\widehat{\mathbb{C}}\;:\;R^n(z_0)\rightarrow\alpha,\;n\rightarrow\infty\right\}.$$
The Fatou set of the rational function R is the set of points z ∈ Ĉ whose orbits tend to an attractor (fixed point or periodic orbit). Its complement in Ĉ is the Julia set, J(R). So, the basin of attraction of any attracting fixed point belongs to the Fatou set, and the boundary of the basin of attraction belongs to the Julia set.
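As a toy illustration of these definitions (not taken from the paper), consider Newton's map for p(z) = z² − 1: both roots are superattracting fixed points, so each lies in its own immediate basin, together with a critical point (itself):

```python
def R(z):
    """Newton's map for p(z) = z^2 - 1: R(z) = z - p(z)/p'(z) = (z^2 + 1)/(2z)."""
    return (z * z + 1) / (2 * z)

def dR(z):
    """R'(z) = (z^2 - 1)/(2 z^2): it vanishes exactly at the roots z = +1 and z = -1."""
    return (z * z - 1) / (2 * z * z)

# z = +1 and z = -1 are fixed (R(z) = z) and superattracting (|R'(z)| = 0);
# the boundary between the two basins (the imaginary axis) is the Julia set.
```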

3.1. Fixed Points and Stability

When the general class (20) is applied to the polynomial p(z) with multiplicity m = 2, a rational function M(z) is obtained, depending on the free complex parameter q_σ, which we denote by β. This rational operator is

$$M(z)=\frac{N(z)}{2z^2\left(3z^3-3(\beta-3)+4(\beta-1)z^4-12(\beta-1)z^3+(3-9\beta)z^2+2(7\beta-9)z\right)},$$

where N(z) = 8(β−1)z⁷ − 4(7β−6)z⁶ − 18(β−1)z⁵ + (65β−54)z⁴ + (43β−59)z³ − 9(3β−5)z² + (77−37β)z + 6β + 21. In order to keep bounded the set of seeds converging to the multiple root, a Möbius transformation is applied, obtaining the conjugate rational function of M by h, which we denote by M_H:

$$M_H(z,\beta)=\left(h\circ M\circ h^{-1}\right)(z),$$

where h(z) = (z − 1)/(z + 1). It sends the multiple root to z = 0, the simple one to z = ∞, and the divergence of the original scheme to z = 1. Operators M and M_H have the same qualitative stability properties.
By solving the equation M_H(z, β) = z, the fixed points of the operator M_H are obtained. Two of them, z = 0 and z = ∞, come from the roots of the polynomial p(z); the rest of them, if they exist, are strange fixed points. The asymptotic behavior of all the fixed points (both multiple and simple, strange or not) plays a key role in the stability of the iterative methods involved, as convergence to fixed points different from the roots is an important drawback for an iterative method; so, we proceed below with this analysis.
To analyze the stability of a fixed point z_F of M_H(z, β), we calculate the first derivative of the operator and evaluate it at the fixed point. Let us recall that the absolute value of this derivative, |M_H'(z_F, β)|, is the stability function of the fixed point z_F; it gives us information about the asymptotic behavior of z_F. In general, the stability of fixed points other than the multiple root of p(z) depends on the value of the parameter β. From it, the following result can be stated.
Theorem 2.
The rational function M_H(z, β) has seven fixed points:
(a) 
Coming from the roots of p(z), z = 0 is always superattracting, and z = ∞ is a fixed point only if β ≠ −1; moreover, z = ∞ is attracting only if ℜ(β) > −1/2. It is parabolic if ℜ(β) = −1/2 and repulsive in other cases.
(b) 
z = 1 is a parabolic strange fixed point, for any β ∈ ℂ.
(c) 
There also exist four strange fixed points, denoted by s_i(β), i = 1, 2, 3, 4, corresponding to the roots of the fourth-degree polynomial q(z) = (2β+1)z⁴ + (34−28β)z³ + (57−30β)z² + 4(6β−13)z + 8. These points can be attracting in small sets inside the region [−1, 4] × [−2.5, 2.5] of the complex plane.
Proof. 
By solving the equation M_H(z, β) = z, it is found that

$$(z-1)^2\,z\left((2\beta+1)z^4+(34-28\beta)z^3+(57-30\beta)z^2+4(6\beta-13)z+8\right)=0.$$

So, z = 0, z = 1 (double) and the roots of the polynomial q(z) = (2β+1)z⁴ + (34−28β)z³ + (57−30β)z² + 4(6β−13)z + 8 are fixed points of M_H(z, β). To check whether z = ∞ (coming from the simple root of p(z)) is a fixed point, the inverse MH (IMH) operator

$$I_{MH}(z,\beta)=\frac{1}{M_H(1/z,\beta)}$$
is defined. Therefore, it can be checked that I M H ( 0 , β ) = 0 and then, z = is a fixed point of M H ( z , β ) .
Now, the stability of these fixed points must be analyzed. It is straightforward to check that M_H'(0, β) = 0, so z = 0 is superattracting; moreover, |M_H'(1, β)| = 1, which yields the parabolic character of z = 1 (the divergence of the original operator, previous to the Möbius map). To check the stability of the fixed point coming from the simple root of p(z), we use again the inverse operator, getting that
$$\left|I_{MH}'(0,\beta)\right|=\left|\frac{\beta}{\beta+1}\right|.$$
This allows us to conclude that the simple root of p(z) (z = ∞) is superattracting if β = 0 and attracting if ℜ(β) > −1/2, but also that it is not a fixed point of M_H if β = −1. Regarding the rest of the strange fixed points, s_i(β), i = 1, 2, 3, 4, they can be attracting, or even superattracting, in some small areas of the complex plane, where their stability functions satisfy |M_H'(s_i(β), β)| < 1 for some i = 1, 2, 3, 4. □
In Figure 1, the stability functions of the strange fixed points are presented. The color code is as follows: s₁(β) is attracting for values of β in the orange regions; s₂(β) is attracting where β belongs to the red areas; blue corresponds to values of β where s₃(β) is attracting; and those where s₄(β) is attracting are presented in purple. In all cases, the vertices of the cones correspond to the values of β where the related strange fixed point is superattracting.
So, it has been proven that the multiple root is always a superattracting fixed point, whereas the simple root can be repulsive (if ℜ(β) < −1/2), parabolic (for ℜ(β) = −1/2) and attracting elsewhere. Moreover, there exist strange fixed points that can be attracting in small areas close to β = 0.
Let us remark that, if we aim to converge only to the multiple root, values of β with ℜ(β) < −1 assure avoiding the basins of attraction of the strange fixed points. However, if both the simple and the multiple root are sought, then the best values of the parameter are β = 0 (where the simple root is superattracting, as is the multiple one) or ℜ(β) > 4, where all the strange fixed points s_i(β) are repulsive.

3.2. Dynamical Planes

Each value of the parameter corresponds to a member of the class of iterative methods. When β is fixed, the iterative process can be visualized in a dynamical plane, obtained by iterating the chosen element of the family under study with each point of the complex plane as initial estimation. In this section, we have used a rectangular mesh of 800 × 800 points, that is, a partition of the real and imaginary intervals into 800 subintervals. The nodes of this rectangular mesh are our starting points for the iterative processes. We represent in blue those points whose orbit converges to infinity, and in orange the points converging to z = 0, corresponding to the multiple root of p(z) (with a tolerance of 10⁻³). Other colors (green, red, etc.) are used for those points whose orbit converges to one of the strange fixed points, marked with a white star in the figures if they are attracting, or with a white circle if they are repulsive or parabolic. Moreover, a mesh point appears in black if it reaches the maximum of 200 iterations without converging to any of the fixed points. The routines used appear in [19].
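A minimal version of this experiment can be sketched as follows; here, instead of the full family, we iterate the one-parameter modified Newton scheme (15), with α = 1 and m = 2, on p(z) = (z − 1)²(z + 1), and we count converged seeds on a coarse grid rather than plotting an 800 × 800 plane (grid size, window and tolerances are illustrative choices):

```python
def step15(z, m=2, alpha=1.0):
    """One step of the modified Newton scheme (15) on p(z) = (z-1)^m (z+1)."""
    f = (z - 1)**m * (z + 1)
    df = m * (z - 1)**(m - 1) * (z + 1) + (z - 1)**m
    return z - m * f / (df - m * alpha * f)

def basin_counts(n=81, max_iter=60, tol=1e-6):
    """Classify each grid node of [-3,3]x[-3,3] by the limit of its orbit."""
    counts = {"multiple": 0, "simple": 0, "other": 0}
    nodes = [-3.0 + 6.0 * i / (n - 1) for i in range(n)]
    for re in nodes:
        for im in nodes:
            z = complex(re, im)
            for _ in range(max_iter):
                try:
                    z = step15(z)
                except (ZeroDivisionError, OverflowError):
                    break
            if abs(z - 1) < tol:
                counts["multiple"] += 1      # basin of the multiple root z = 1
            elif abs(z + 1) < tol:
                counts["simple"] += 1        # basin of the simple root z = -1
            else:
                counts["other"] += 1         # no convergence within max_iter
    return counts

counts = basin_counts()
```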
We plot the dynamical planes related to the rational function M_H(z, β) for β ∈ {−1, 0, 1, 2, 3, 5}. These planes can be visualized in Figure 2. In Figure 2a, the orange area is the basin of attraction of z = 0 (multiple root), and black points correspond to the attracting area of the parabolic fixed point z = 1 and to the basin of a periodic orbit of period 2; in the cases β = 0 and β = 5 (Figure 2b and f, respectively), the basin of attraction of the simple root (z = ∞) appears in blue, and the attracting area of the parabolic fixed point z = 1 in black; in Figure 2c–e, the basin of attraction of a strange fixed point appears in green.
The information deduced from these planes confirms the theoretical results of Theorem 2 regarding the stability of the strange fixed points. The parabolic character of z = 1 is the main drawback of the methods, as it is placed in the Julia set. However, in practice, rounding errors will move an initial estimation lying in the basin of z = 1 towards that of another, attracting, fixed point.

4. Numerical Results

Here, we check the effectiveness of our newly proposed iterative methods. We employ some elements of our schemes: specifically, scheme (18) for α = 1, α = 1/2, α = 1/10 and scheme (19) for α = 1, α = 1/2, α = 1/4, denoted by MHS1, MHS2, MHS3, MSHS1, MSHS2 and MSHS3, respectively, to solve the nonlinear equations of Examples 1–5.
We contrast our methods with the existing schemes of the same order, (3), (4), (5), (6) and (7), denoted by CS, HS, OS, ONS and CN, respectively. In Table 1, Table 2 and Table 3, we display the values of the absolute residual errors |f(x_{σ+1})|, the number of iterations needed to attain the desired accuracy, and the absolute errors |x_{σ+1} − x_σ|. Further, it is known that the formula [20]
$$\rho\approx\frac{\ln\left|\left(x_{\sigma+1}-\xi\right)/\left(x_\sigma-\xi\right)\right|}{\ln\left|\left(x_\sigma-\xi\right)/\left(x_{\sigma-1}-\xi\right)\right|}$$

gives the computational order of convergence, COC. This estimation requires the exact zero ξ of the nonlinear function to be solved; however, there are several practical situations where the exact solution is not accessible. Therefore, Cordero and Torregrosa [21] suggested the following estimation of the order of convergence, known as ACOC:

$$\rho\approx\frac{\ln\left|\check{e}_{\sigma+1}/\check{e}_\sigma\right|}{\ln\left|\check{e}_\sigma/\check{e}_{\sigma-1}\right|},$$

where ě_σ = x_σ − x_{σ−1}, so that the exact zero is not needed. In the tables, *, Div and F stand for convergence to an undesired root, divergence and failure, respectively. Moreover, dne stands for "does not exist".
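The ACOC is straightforward to compute from the last four iterates; the synthetic sequence in the usage line below mimics cubic convergence and is purely illustrative:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence (Cordero-Torregrosa),
    built from the differences e_k = x_k - x_{k-1} of the last four iterates."""
    e = [abs(b - a) for a, b in zip(xs[-4:], xs[-3:])]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# errors 1e-1 -> 1e-3 -> 1e-9 -> 1e-27 shrink cubically, so ACOC should be close to 3
rho = acoc([2 + 1e-1, 2 + 1e-3, 2 + 1e-9, 2 + 1e-27])
```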
During the numerical experiments, carried out in the programming language Mathematica (version 9), all computations have been done with 1000 digits of mantissa, which minimizes round-off errors. In the tables, (b1 ± b2) denotes b1 × 10^{±b2}. The configuration of the computer used is given below:
  • Processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60GHz
  • RAM: 8.00 GB
  • System type: 64-bit operating system, x64-based processor.
Now, we explore the performance of our proposed methods, in comparison with known ones, on some chemical and engineering problems, that can be found in [22,23,24,25], among others.
Example 1.
Van der Waals equation of state (case of failure):

$$\left(P+\frac{a_1n^2}{V^2}\right)\left(V-na_2\right)=nRT$$

describes the behaviour of a real gas, where a₁ and a₂ are parameters that depend on the particular gas. In order to calculate the volume V of the gas, we need the solution of the preceding expression in terms of the remaining constants,

$$PV^3-\left(na_2P+nRT\right)V^2+a_1n^2V-a_1a_2n^3=0.$$

By choosing particular values of the gas parameters a₁ and a₂, together with n, P and T, it yields

$$f_1(x)=x^3-5.22x^2+9.0825x-5.2675.$$
Function f₁ has three zeros; among them, ξ = 1.75 is a multiple zero of multiplicity m = 2 and ξ = 1.72 is a simple zero. At the initial guess x₀ = 1.73, which is quite close to our desired zero, all the known methods fail to work because the derivative vanishes at this initial point. Our methods, however, work even if the derivative is zero or close to zero, so they improve on the existing one-point third-order methods. The results can be found in Table 1, Table 2 and Table 3.
Example 2.
Planck's radiation problem (case of failure and divergence):
Consider Planck's radiation law, which determines the spectral density of the electromagnetic radiation emitted by a black body at a given temperature and at thermal equilibrium,

$$G(y)=\frac{8\pi chy^{-5}}{e^{ch/(ykT)}-1},$$

where T, y, k, h and c denote the absolute temperature of the black body, the wavelength of the radiation, the Boltzmann constant, Planck's constant and the speed of light in the medium (vacuum), respectively. To evaluate the wavelength y which results in the maximum energy density G(y), we set G'(y) = 0. Then, we obtain the equation

$$\frac{\left(ch/(ykT)\right)e^{ch/(ykT)}}{e^{ch/(ykT)}-1}=5.$$

Further, the nonlinear equation is reformulated by setting x = ch/(ykT) as follows:

$$f_2(x)=\left(e^{-x}-1+\frac{x}{5}\right)^3.$$
The exact zero of multiplicity m = 3 is ξ ≈ 4.965114, and from this solution one can easily find the wavelength y through the relation x = ch/(ykT). Here, the derivative vanishes at x = log 5, so methods CS, HS, OS, ONS and CN fail to work. With another choice of initial guess, x₀ = 1.61, the derivative is not zero but very close to zero. In this case, methods CS, ONS and CN diverge from the required root. On the other hand, our methods converge under these conditions. Methods HS and OS also converge to the required root, but they are slower than our method MSHS3, consuming 12 and 6 iterations, respectively, as compared with only 5 iterations for MSHS3. In this regard, see the computational results in Table 1, Table 2 and Table 3.
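The failure point of this example can be reproduced with the one-parameter modified Newton scheme (15), which remains well defined at x₀ = log 5 where f₂'(x₀) = 0; the choice α = 1 is illustrative, and (15) is only the second-order member of the family, but it already survives the failure point:

```python
import math

def modified_newton(f, df, x0, m, alpha=1.0, tol=1e-10, max_iter=100):
    """Scheme (15): x_{s+1} = x_s - m f / (f' - m*alpha*f)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        step = m * fx / (df(x) - m * alpha * fx)   # denominator nonzero even when f' = 0
        x -= step
        if abs(step) < tol:
            return x
    return x

g   = lambda x: math.exp(-x) - 1.0 + x / 5.0       # f2(x) = g(x)^3
dg  = lambda x: -math.exp(-x) + 0.2                # dg = 0 exactly at x = log 5
f2  = lambda x: g(x)**3
df2 = lambda x: 3.0 * g(x)**2 * dg(x)

root = modified_newton(f2, df2, math.log(5.0), m=3)
```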
Example 3.
Jumping and oscillating problem, with an infinite number of roots:
Here, we choose the following function for the computational results:

$$f_3(x)=\sin^m x,\qquad m=5.$$
Function f₃ has an infinite number of zeros, but our desired zero is ξ = 0, with multiplicity five. With the help of this example, we aim to show that the convergence of an iterative method is not always guaranteed, even if we choose a nearby initial point. For example, methods CS and CN with x₀ = 1.5 are unsuitable because of numerical instability, which occurs due to the large value of the correction factor; they converge to 442π and 5π, respectively, far away from the required zero. On the other hand, Osada's method ONS diverges for the starting point x₀ = 1.5. Therefore, care must be taken to ensure that the root obtained is the desired one. There are no such problems with our methods. The details of the computational results can be found in Table 1, Table 2 and Table 3.
Example 4.
Convergence to an undesired root:
We consider an academic problem, which is given as follows:

$$f_4(x)=\left(e^{-x}+\sin x\right)^3.\qquad(31)$$

Expression (31) has an infinite number of roots, all with multiplicity three, our desired root being ξ ≈ 3.183063. It can be seen that CS, ONS and CN converge to the undesired zero 6.281314 after a finite number of iterations when x₀ = 4.4. On the other hand, with initial guess x₀ = 1.7, methods CS, ONS and CN consume 1280 iterations, as compared with 6 iterations for our methods. The computational outcomes are given in Table 1, Table 2 and Table 3.
Example 5.
Finally, we consider an academic problem with a high order of multiplicity, m = 100, which is defined by

$$f_5(x)=\left((x-1)^3-1\right)^{100}.\qquad(32)$$

Expression (32) has a zero x = 2 of multiplicity 100. We show the computational results for function f₅ in Table 1, Table 2 and Table 3. From the numerical results, we conclude that our methods MSHS2 and MSHS3 need the same number of iterations to reach the desired accuracy, but they have smaller residual errors and smaller differences between two consecutive iterations, as compared with the other existing methods.
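For a multiplicity as high as m = 100, the gain of a multiplicity-aware step over plain Newton is easy to observe with a small experiment on f₅ (our own sketch, not the paper's Mathematica setup; the stopping rule also guards against floating-point underflow of u¹⁰⁰):

```python
def solve_f5(multiplier, x0=2.5, tol=1e-12, max_iter=5000):
    """Iterate x <- x - multiplier * f5/f5'; multiplier=100 is Schroder's step (2),
    multiplier=1 is plain Newton (linear here, with rate 1 - 1/100)."""
    x = x0
    for k in range(1, max_iter + 1):
        u = (x - 1.0)**3 - 1.0
        f, fp = u**100, 100.0 * u**99 * 3.0 * (x - 1.0)**2
        if f == 0.0 or fp == 0.0:        # exact zero, or u**100 underflowed to 0
            return x, k
        dx = multiplier * f / fp
        x -= dx
        if abs(dx) < tol:
            return x, k
    return x, max_iter

root_s, iters_s = solve_f5(100)          # quadratic: a handful of iterations
root_n, iters_n = solve_f5(1)            # linear crawl: hundreds of iterations
```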

5. Concluding Remarks

In this paper, we have suggested a new family of point-to-point third-order iterative methods that works for multiple roots (m ≥ 1) when the multiplicity of the root is known in advance; moreover, its members converge to the required root even in cases of failure, divergence or oscillation, retaining the third order of convergence. In addition, some of the most popular third-order methods, namely Chebyshev, Halley and super-Halley, are all special cases of our scheme when m = 1. Our scheme is based on a geometrical derivation, rather than on interpolation, weight functions or analytic approaches. Complex dynamics has been used to check the stability of the class of iterative methods. Finally, based on the numerical results, we conclude that our methods perform better not only in cases of failure, divergence and oscillation, but also in the convergent cases. In the future, we will work on multipoint variants of our family (16), by estimating the second-order derivatives involved.

Author Contributions

Conceptualization, V.K. and R.B.; software, M.R.; validation, A.C. and J.R.T.; formal analysis, V.K.; investigation, R.B. and M.R.; writing—original draft preparation, M.R.; writing—review and editing, A.C. and J.R.T.; supervision, V.K. and R.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors would like to thank the anonymous reviewers for their comments and suggestions, as they have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math. Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef] [Green Version]
  2. Proinov, P.D.; Ivanov, S.I. Convergence of Schröder’s method for polynomial zeros of unknown multiplicity. C. R. Acad. Bulg. Sci. 2013, 66, 1073–1080. [Google Scholar] [CrossRef]
  3. Obreshkov, N. On the numerical solution of equations. Annuaire Univ. Sofia Fac. Sci. Phys. Math. 1963, 56, 73–83. (In Bulgarian) [Google Scholar]
  4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall Series in Automatic Computation: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  5. Ivanov, S.I. On the convergence of Chebyshev’s method for multiple polynomial zeros. Results. Math. 2016, 69, 93–103. [Google Scholar] [CrossRef]
  6. Neta, B. New third order nonlinear solvers for multiple roots. Appl. Math. Comput. 2008, 202, 162–170. [Google Scholar] [CrossRef] [Green Version]
  7. Hansen, E.; Patrick, M. A family of root finding methods. Numer. Math. 1977, 27, 257–269. [Google Scholar] [CrossRef]
  8. Ivanov, S.I. A general approach to the study of the convergence of Picard iteration with an application to Halley’s method for multiple zeros of analytic functions. J. Math. Anal. Appl. 2022, 513, 126238. [Google Scholar] [CrossRef]
  9. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  10. Osada, N. An optimal multiple root-finding method of order three. Comput. Appl. Math. 1994, 51, 131–133. [Google Scholar] [CrossRef] [Green Version]
  11. Chun, C.; Neta, B. A third-order modification of Newton’s method for multiple roots. Appl. Math. Comput. 2009, 211, 474–479. [Google Scholar] [CrossRef] [Green Version]
  12. Ivanov, S.I. Unified Convergence Analysis of Chebyshev-Halley Methods for Multiple Polynomial Zeros. Mathematics 2022, 10, 135. [Google Scholar] [CrossRef]
  13. Amat, S.; Busquier, S. Advances in Iterative Methods for Nonlinear Equations; Springer: Switzerland, Switzerland, 2016. [Google Scholar]
  14. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinears Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  15. Ostrowski, A.M. Solution of Equations and System of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
  16. Petković, M.; Neta, B.; Petković, L.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
  17. Blanchard, P. Complex Analytic Dynamics on the Riemann Sphere. Bull. AMS 1984, 11, 85–141. [Google Scholar] [CrossRef] [Green Version]
  18. Beardon, A.F. Iteration of rational functions. In Graduate Texts in Mathematics; Springer: New York, NY, USA, 1991. [Google Scholar]
  19. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing Dynamical and Parameters Planes of Iterative Families and Methods. Sci. World J. 2013, 2013, 780153. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Weerakoon, S.; Fernando, T.G.I. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  21. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  22. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  23. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2. [Google Scholar]
  24. Shacham, M. An improved memory method for the solution of a nonlinear equation. Chem. Eng. Sci. 1989, 44, 1495–1501. [Google Scholar] [CrossRef]
  25. Balaji, G.V.; Seader, J.D. Application of interval Newton’s method to chemical engineering problems. Rel. Comput. 1995, 1, 215–223. [Google Scholar] [CrossRef]
Figure 1. Stability functions of z = and strange fixed points of the MH operator.
Figure 2. Dynamical planes corresponding to the MH(z, β) operator.
Table 1. Comparison of the iterative methods based on the absolute residual error |f(x6)| and the absolute difference between two consecutive iterations |x7 - x6|, for each initial guess (I.G.). Here a(±b) denotes a × 10^(±b); F denotes failure, Div denotes divergence, and * stands for convergence to an undesired root.

Methods:      CS | HS | OS | ONS | CN | MHS1 | MHS2 | MHS3 | MSHS1 | MSHS2 | MSHS3

Ex. (1), I.G. = 1.73
  |f(x6)|:    F | F | F | F | F | 1.3(-15) | 3.0(-10) | 2.0(-6) | 7.7(-102) | 2.0(-67) | 2.2(-45)
  |x7 - x6|:  F | F | F | F | F | 2.1(-7) | 1.0(-4) | 8.9(-3) | 1.6(-50) | 2.6(-33) | 2.7(-22)

Ex. (2), I.G. = log 5
  |f(x6)|:    F | F | F | F | F | 3.2(-97) | 2.0(-228) | 3.8(-179) | 2.6(-122) | 2.7(-404) | 2.9(-924)
  |x7 - x6|:  F | F | F | F | F | 3.5(-32) | 6.5(-76) | 1.7(-59) | 1.5(-40) | 1.5(-134) | 7.4(-308)

Ex. (2), I.G. = 1.6
  |f(x6)|:    Div | 9.8(-2) | 7.2(-464) | Div | Div | 2.8(-97) | 1.4(-228) | 1.3(-179) | 2.1(-122) | 4.7(-404) | 1.0(-926)
  |x7 - x6|:  Div | 9.3(-1) | 2.2(-154) | Div | Div | 3.4(-32) | 5.7(-76) | 1.2(-59) | 1.4(-40) | 1.9(-134) | 1.1(-308)

Ex. (3), I.G. = 1.5
  |f(x6)|:    2.9(-274) * | 2.2(-129) | 7.6(-837) | Div | 3.8(-582) * | 4.8(-593) | 2.2(-652) | 2.1(-294) | 4.7(-1197) | 8.4(-1217) | 1.4(-1304)
  |x7 - x6|:  2.0(-55) * | 1.9(-26) | 6.0(-168) | Div | 5.2(-117) * | 3.4(-119) | 4.7(-131) | 1.8(-59) | 8.6(-240) | 6.1(-244) | 1.7(-261)

Ex. (4), I.G. = 4.4
  |f(x6)|:    1.9(-142) * | 3.8(-375) | 2.2(-694) | 1.9(-142) * | 1.9(-142) * | 1.5(-497) | 1.5(-613) | 4.2(-462) | 5.9(-705) | 1.2(-828) | 1.3(-1005)
  |x7 - x6|:  5.8(-48) * | 1.5(-125) | 5.8(-232) | 5.8(-48) * | 5.8(-48) * | 2.4(-166) | 5.1(-205) | 1.6(-154) | 1.7(-235) | 1.0(-276) | 1.0(-335)

Ex. (4), I.G. = 1.7
  |f(x6)|:    1.4(+7) | 5.9(-331) | 1.2(-646) | 1.4(+7) | 1.4(+7) | 4.8(-403) | 1.3(-566) | 8.7(-427) | 3.5(-669) | 1.7(-865) | 3.6(-885)
  |x7 - x6|:  1.5 | 8.1(-111) | 4.7(-216) | 1.5 | 1.5 | 7.5(-135) | 2.3(-189) | 9.2(-143) | 1.5(-223) | 5.3(-289) | 1.5(-295)

Ex. (5)
  |f(x6)|:    5.7(-14,352) | 7.5(-21,328) | 4.7(-34,103) | Div | 5.3(-21,087) | 3.1(-9344) | 1.1(-13,965) | 1.1(-19,577) | 3.2(-23,126) | 5.0(-40,450) | 2.9(-38,495)
  |x7 - x6|:  1.0(-144) | 1.8(-214) | 3.2(-342) | Div | 4.6(-212) | 1.2(-94) | 7.5(-141) | 5.7(-197) | 1.9(-232) | 1.1(-405) | 3.8(-386)
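The residuals above (down to roughly 10^(-40,000)) were clearly computed in multiprecision arithmetic. As a rough, hedged illustration of the kind of third-order multiple-root iteration being compared — not the paper's new MHS/MSHS family, whose formulas appear in the body of the article — the sketch below implements the classical modified Halley scheme from the Hansen–Patrick family [7], which is one standard candidate for the HS column. The test function, multiplicity, and starting guess are invented for the example and are not the paper's Examples (1)–(5).

```python
import math

def halley_multiple(f, df, d2f, x0, m, tol=1e-14, max_iter=50):
    """One classical third-order scheme for a root of multiplicity m
    (modified Halley, from the Hansen-Patrick family):
        x_{k+1} = x_k - f / ( (m+1)/(2m) * f' - f * f'' / (2 f') )
    For m = 1 this reduces to Halley's method. Returns (root, iterations)."""
    x = x0
    for k in range(1, max_iter + 1):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:          # cannot form the correction; stop here
            return x, k
        step = fx / ((m + 1) / (2 * m) * dfx - fx * d2f(x) / (2 * dfx))
        x -= step
        if abs(step) < tol:     # stopping criterion on |x_{k+1} - x_k|
            return x, k
    return x, max_iter

# Invented test problem: double root (m = 2) at x = 2 of f(x) = (x-2)^2 (x+1).
f   = lambda x: (x - 2) ** 2 * (x + 1)
df  = lambda x: 3 * x ** 2 - 6 * x      # f'(x) = 3x (x - 2)
d2f = lambda x: 6 * x - 6
root, its = halley_multiple(f, df, d2f, x0=2.5, m=2)
print(root, its)   # converges to x = 2 in a few iterations
```

Note that plain Newton's method would only converge linearly to this double root; the (m+1)/(2m) weighting is what restores the higher order for multiplicity m > 1.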
Table 2. Comparison of the iterative methods based on the number of iterations, for each initial guess (I.G.). F denotes failure, Div denotes divergence, and * stands for convergence to an undesired root.

Methods:               CS | HS | OS | ONS | CN | MHS1 | MHS2 | MHS3 | MSHS1 | MSHS2 | MSHS3
Ex. (1), I.G. = 1.73:  F | F | F | F | F | 9 | 10 | 11 | 7 | 8 | 8
Ex. (2), I.G. = log 5: F | F | F | F | F | 8 | 7 | 7 | 7 | 6 | 5
Ex. (2), I.G. = 1.61:  Div | 12 | 6 | Div | Div | 8 | 7 | 7 | 7 | 6 | 5
Ex. (3), I.G. = 1.5:   7 * | 8 | 6 | Div | 6 * | 6 | 6 | 7 | 6 | 6 | 6
Ex. (4), I.G. = 4.4:   7 * | 6 | 6 | 7 * | 7 * | 6 | 6 | 6 | 6 | 6 | 5
Ex. (4), I.G. = 1.7:   1280 | 6 | 6 | 1280 | 1280 | 6 | 6 | 6 | 6 | 6 | 6
Ex. (5), I.G. = 1.5:   6 | 6 | 5 | Div | 6 | 7 | 6 | 6 | 6 | 5 | 5
Table 3. Comparison of the iterative methods based on the computational order of convergence (COC), for each initial guess (I.G.). Div denotes divergence, dne that the COC could not be estimated, and * stands for convergence to an undesired root.

Methods:               CS | HS | OS | ONS | CN | MHS1 | MHS2 | MHS3 | MSHS1 | MSHS2 | MSHS3
Ex. (1), I.G. = 1.73:  dne | dne | dne | dne | dne | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000
Ex. (2), I.G. = log 5: dne | dne | dne | dne | dne | 3.000 | 3.000 | 3.000 | 2.999 | 3.002 | 2.993
Ex. (2), I.G. = 1.61:  Div | 3.000 | 3.000 | dne | dne | 3.000 | 3.000 | 3.000 | 2.999 | 3.002 | 3.002
Ex. (3), I.G. = 1.5:   3.000 * | 3.000 | 3.000 | dne | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000
Ex. (4), I.G. = 4.4:   3.000 * | 3.000 | 3.000 | 3.000 * | 3.000 * | 3.000 | 3.000 | 3.000 | 3.002 | 3.000 | 3.000
Ex. (4), I.G. = 1.7:   3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 2.998 | 3.000 | 3.000
Ex. (5):               3.000 | 3.000 | 3.000 | 3.000 | dne | 3.000 | 3.000 | 3.000 | 3.000 | 3.000 | 2.975
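The COC values reported in Table 3 are, in all likelihood, computed with the standard estimator built from three successive differences of iterates (cf. Weerakoon and Fernando [20]). A minimal sketch of that estimator, checked on a synthetic sequence with exact cubic error decay rather than on the paper's examples:

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates:
        rho ~ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)
    """
    d1 = abs(xs[-3] - xs[-4])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-1] - xs[-2])
    return math.log(d3 / d2) / math.log(d2 / d1)

# Synthetic iterates converging to 0 with exact cubic decay e_{k+1} = e_k^3,
# so the estimator should return a value close to 3.
xs = [1e-1, 1e-3, 1e-9, 1e-27]
print(coc(xs))   # ~3.0
```

Because the estimator takes logarithms of ratios of tiny differences, it is sensitive to roundoff; this is another reason the paper's experiments require multiprecision arithmetic to report COC to three decimal places.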
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kanwar, V.; Cordero, A.; Torregrosa, J.R.; Rajput, M.; Behl, R. A New Third-Order Family of Multiple Root-Findings Based on Exponential Fitted Curve. Algorithms 2023, 16, 156. https://doi.org/10.3390/a16030156
