# An Optimal Iterative Technique for Multiple Root Finder of Nonlinear Problems

by Ramandeep Behl 1,*, Sonia Bhalla 2 and Majed Aali Alsulami 1
1 Department of Mathematics Sciences, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Department of Mathematics, Chandigarh University, Gharuan, Mohali 140413, India
* Author to whom correspondence should be addressed.
Math. Comput. Appl. 2022, 27(5), 74; https://doi.org/10.3390/mca27050074
Submission received: 11 July 2022 / Revised: 21 August 2022 / Accepted: 24 August 2022 / Published: 29 August 2022

## Abstract

In this paper, an optimal higher-order iterative technique to approximate the multiple roots of a nonlinear equation is presented. The proposed technique has several special properties: it is a two-point method that does not involve any derivatives, has an optimal convergence order of four, is cost-effective, is more stable, and yields better numerical results. In addition, we adopt the weight function approach at both substeps (which provides a more general form of two-point methods). Firstly, the convergence order is studied for multiplicity $m = 2, 3$ by Taylor's series expansion, and then general convergence for $m \geq 4$ is proved. We demonstrate the applicability of our methods on six numerical problems: the first is the well-known Van der Waals ideal gas problem, the second studies the blood rheology model, the third is chosen from linear algebra (namely, an eigenvalue problem), and the remaining three are academic problems. On the basis of the obtained CPU times, computational orders of convergence, and absolute errors between two consecutive iterations, we conclude that our methods give better results than earlier studies.
MSC: 65G99; 65H10

## 1. Introduction

Finding the multiple roots of a nonlinear equation $g(x) = 0$ is one of the most difficult tasks, and multiple roots play an important role in computer science, applied mathematics, physics, applied chemistry, and engineering. For example, the ideal gas law [1] describes the relationship between molecular size, attraction forces, and the behavior of a real gas. The analytical solution of such an equation is either complicated or non-existent, so we have to turn to iterative methods. One of the most famous iterative techniques is the modified Newton's method (MNM) [2,3], which is defined as
$x_{s+1} = x_s - m\,\frac{g(x_s)}{g'(x_s)}, \quad s = 0, 1, \cdots.$
Its order of convergence is quadratic, provided the multiplicity m of the required root is known in advance.
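As a quick illustration, the modified Newton step can be sketched in a few lines of Python; the test function, its derivative, and the starting guess are assumptions chosen for this sketch:

```python
def modified_newton(g, dg, x, m, tol=1e-12, max_iter=50):
    """Modified Newton's method: x_{s+1} = x_s - m * g(x_s) / g'(x_s)."""
    for _ in range(max_iter):
        gx, dgx = g(x), dg(x)
        if gx == 0.0 or dgx == 0.0:   # landed on the root (or a flat spot)
            break
        step = m * gx / dgx
        x -= step
        if abs(step) < tol:
            break
    return x

# g(x) = (x - 1)^2 (x + 2) has a double root at x = 1, so m = 2.
g  = lambda x: (x - 1)**2 * (x + 2)
dg = lambda x: 2*(x - 1)*(x + 2) + (x - 1)**2
root = modified_newton(g, dg, x=1.5, m=2)
```

With the correct multiplicity m the iteration recovers quadratic convergence at the multiple root; with m = 1 it would degrade to linear convergence.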
The main problem of this method is the use of the first-order derivative at each substep. There are several occasions in real life problems where finding the derivative is either quite complicated or time consuming or does not exist. In those cases, it is always fruitful to use a derivative free method. Thus, Traub–Steffensen [4] suggested a derivative free scheme, which is defined by
$x_{s+1} = x_s - m\,\frac{g(x_s)}{g[\mu_s, x_s]},$
where $\mu_s = x_s + \alpha g(x_s)$, $\alpha\,(\neq 0) \in \mathbb{R}$.
Later on, Kumar et al. [5] and Kansal et al. [6] suggested the following second-order one point derivative free schemes:
$x_{s+1} = x_s - G(\theta), \quad \theta = \frac{g(x_s)}{g[\mu_s, x_s]},$
and
$x_{s+1} = x_s - m\,\frac{(1 - a)\,g(\mu_s) + a\,g(x_s)}{g[\mu_s, x_s]}, \quad a \in \mathbb{R},$
respectively, where $\mu_s = x_s + \alpha g(x_s)$, $\alpha\,(\neq 0) \in \mathbb{R}$.
Since all of the above three iterative schemes are one-point, they have several issues regarding their convergence order and efficiency (more details can be found in [2,3]). Then, researchers turned towards multi-point derivative free methods for known and unknown multiplicity [7,8]. Some of the important schemes are given below.
Hueso et al. [9] developed a fourth-order derivative-free method, which is given by
$y_s = x_s - b\,\frac{g(x_s)}{g[x_s + g(x_s)^q,\, x_s]}, \quad x_{s+1} = x_s - \big(a_1 + a_2\,h(y_s, x_s) + a_3\,h(x_s, y_s) + a_4\,h(y_s, x_s)^2\big)\,\frac{g(x_s)}{g[x_s + g(x_s)^q,\, x_s]},$
where $h(y_s, x_s) = \frac{g[y_s + g(y_s)^q,\, y_s]}{g[x_s + g(x_s)^q,\, x_s]}$ and the values of the constants $q, a_1, a_2, a_3, a_4$ can be found in [9].
Baccouch [10] proposed many higher-order multi-point methods. One of the fourth-order derivative-free methods is given by
$x_{s+1} = x_s - m\bigg[\big((m^2 - 6m + 1)\,g_{2,s} + 6\,g_{1,s} - 3\,g(x_s) - 2\,g_{-1,s}\big)\big(g(x_s)\big)^2 + 4m^2(m - 2)\,\frac{g_{1,s} - 2\,g(x_s) + g_{-1,s}}{\big(g_{1,s} - g_{-1,s}\big)^3}\,\big(g(x_s)\big)^3 - 16m^3\bigg(\frac{\big(g_{1,s} - 2\,g(x_s) + g_{-1,s}\big)^2}{\big(g_{1,s} - g_{-1,s}\big)^5} - \frac{g_{2,s} - 3\,g_{1,s} + 3\,g(x_s) - g_{-1,s}}{6\,\big(g_{1,s} - g_{-1,s}\big)^4}\bigg)\big(g(x_s)\big)^4\bigg],$
where
$g_{1,s} = g(x_s + g(x_s)), \quad g_{2,s} = g(x_s + 2g(x_s)), \quad g_{-1,s} = g(x_s - g(x_s)).$
We denote the scheme (6) by $(BM)$.
In 2019, Sharma et al. [11] proposed the following fourth-order derivative-free scheme:
$z_s = x_s - m\,\frac{g(x_s)}{g[v_s, x_s]}, \quad x_{s+1} = z_s - H(t_s, y_s)\,\frac{g(x_s)}{g[v_s, x_s]},$
where $v_s = x_s + \beta g(x_s)$, $t_s = \left(\frac{g(z_s)}{g(x_s)}\right)^{\frac{1}{m}}$, and $y_s = \left(\frac{g(z_s)}{g(v_s)}\right)^{\frac{1}{m}}$. The details of the weight function $H(t_s, y_s)$ and its conditions can be found in [11].
In 2020, Sharma et al. [12] suggested a new derivative-free scheme, which is given below:
$z_s = x_s - m\,\frac{g(x_s)}{g[v_s, x_s]}, \quad x_{s+1} = z_s - G(h_s)\left(\frac{1}{y_s} + 1\right)\frac{g(x_s)}{g[v_s, x_s]},$
where $v_s = x_s + \beta g(x_s)$, $u_s = \left(\frac{g(z_s)}{g(x_s)}\right)^{\frac{1}{m}}$, $h_s = \frac{u_s}{1 + u_s}$, and $y_s = \left(\frac{g(v_s)}{g(x_s)}\right)^{\frac{1}{m}}$.
In 2020, Kumar et al. [13] presented a new fourth-order derivative free scheme, which is defined by
$y_s = x_s - m\,\frac{g(x_s)}{g[v_s, x_s]}, \quad x_{s+1} = y_s - t_s\,\frac{(\alpha_1 + \alpha_2 t_s)\,g(x_s)}{\alpha_3\,g[v_s, x_s] + \alpha_4\,g[y_s, v_s]},$
where $v_s = x_s + \beta g(x_s)$, $t_s = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}$, and the values of the parameters $\alpha_1, \alpha_2, \alpha_3,$ and $\alpha_4$ are depicted in [13].
In 2020, Behl et al. [14] presented the following derivative free family of fourth-order iterative methods:
$y_s = x_s - m\,\frac{g(x_s)}{g[u_s, x_s]}, \quad x_{s+1} = y_s + \frac{(t_s + z_s)(y_s - x_s)}{2\,(1 - 2t_s)},$
where $u_s = x_s + \alpha g(x_s)$, $t_s = \left(\frac{g(y_s)}{g(x_s)}\right)^{1/m}$, and $z_s = \left(\frac{g(y_s)}{g(u_s)}\right)^{1/m}$.
In 2021, Behl et al. [15] suggested a new fourth-order derivative-free variant of the Chebyshev–Halley family, which is defined as follows:
$y_s = x_s - m\,\frac{g(x_s)}{g[u_s, x_s]}, \quad x_{s+1} = y_s + m\,\frac{g(x_s)}{g[u_s, x_s]}\left(1 + \frac{\zeta}{1 - 2\beta\zeta}\,\zeta^2 - H(\tau)\right), \quad \beta \in \mathbb{R},$
where $u_s = x_s + \alpha g(x_s)$, $\tau = \left(\frac{g(y_s)}{g(u_s)}\right)^{1/m}$, and $\zeta = \left(\frac{g(y_s)}{g(x_s)}\right)^{1/m}$. The values and hypotheses of the weight function can be found in [15].
Very recently, in 2022, Behl [16] proposed another fourth-order derivative-free scheme, which is given by
$t_s = x_s - m\,H(\zeta), \quad x_{s+1} = t_s - m\,\zeta\left(\frac{1}{2}\eta + b\,\eta\theta + M(\theta)\right),$
where $\mu_s = x_s + \alpha g(x_s)$, $\alpha, b \in \mathbb{R}$, and $\zeta = \frac{g(x_s)}{g[\mu_s, x_s]}$. The two multi-valued functions are given as $\theta = \left(\frac{g(t_s)}{g(x_s)}\right)^{\frac{1}{m}}$ and $\eta = \left(\frac{g(t_s)}{g(\mu_s)}\right)^{\frac{1}{m}}$. The hypotheses and conditions on the weight function M are described in [16]. Some other higher-order derivative-free techniques can be found in [10,17]. From the above discussion, it is clear that derivative-free multi-point methods for multiple roots are in demand.
Thus, motivated in the same direction, we suggest a new and more general scheme, which can produce better and faster numerical results. Our scheme has the following properties: optimal order of convergence, derivative-free, flexible at both substeps, cost-effective, and more stable. It is based on a weight function approach and is not only optimal and derivative-free but also flexible at both substeps. With a suitable choice of weight functions at the first and second substeps, we can construct many new and existing techniques. For example, if we choose $b = 0$ in Expression (12), then it is a special case of our scheme. We illustrate the applicability of our methods on six numerical problems. On the basis of the obtained results, we found that our methods demonstrate better results than earlier studies in terms of CPU time, computational order of convergence, and absolute errors between two consecutive iterations.

## 2. Suggested Higher-Order Scheme and Its Analysis

Here, we suggest a new fourth-order iterative technique for multiple zeros $(m \geq 2)$, which is given by
$y_s = x_s - m\,H(\tau), \quad x_{s+1} = y_s - m\,\tau\big(Q(\zeta) + M(\vartheta)\big),$
where $\mu_s = x_s + \theta g(x_s)$ and $\tau = \frac{g(x_s)}{g[\mu_s, x_s]}$. In addition, the three weight functions $H : \mathbb{C} \to \mathbb{C}$, $Q : \mathbb{C} \to \mathbb{C}$, and $M : \mathbb{C} \to \mathbb{C}$ are analytic in a neighborhood of the origin $(0)$. Moreover, $\zeta = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}$ and $\vartheta = \left(\frac{g(y_s)}{g(\mu_s)}\right)^{\frac{1}{m}}$ are two multi-valued maps. We adopt the principal root (see [18]), which can be obtained by $\zeta = \exp\left(\frac{1}{m}\log\frac{g(y_s)}{g(x_s)}\right)$, with $\log\frac{g(y_s)}{g(x_s)} = \log\left|\frac{g(y_s)}{g(x_s)}\right| + i\arg\frac{g(y_s)}{g(x_s)}$ for $-\pi < \arg\frac{g(y_s)}{g(x_s)} \leq \pi$. The choice of $\arg(z)$ for $z \in \mathbb{C}$ agrees with $\log(z)$, as mentioned in the numerical section. In an analogous way, we obtain $\zeta = \left|\frac{g(y_s)}{g(x_s)}\right|^{\frac{1}{m}} \exp\left(\frac{i}{m}\arg\frac{g(y_s)}{g(x_s)}\right) = O(e_s)$.
By choosing $b = 0$ and $H(\tau) = \tau^2$ in Expressions (12) and (13), respectively, Behl's scheme [16] turns out to be a special case of our scheme. In Theorems 1–3, we demonstrate the convergence analysis of (13), without adopting any extra evaluation of g or $g'$ at other points.
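For real problems whose residual stays sign-definite near the root, scheme (13) with the $M_1$ weight functions of Section 2.2 can be sketched as follows; the test function, multiplicity, and starting point are assumptions for this sketch (complex arithmetic with the principal branch would be needed in general):

```python
def scheme_13_m1(g, x, m, theta=-0.01, d1=1.0, a1=2.0, c=1.0,
                 tol=1e-12, max_iter=50):
    """Two-point scheme: y_s = x_s - m*H(tau),
    x_{s+1} = y_s - m*tau*(Q(zeta) + M(vartheta)), using the M1 weights
    H(t) = t + d1*t^3, M(v) = -a1 + v/2 + c*v^2, Q(z) = a1 + z/2 + (2-c)*z^2,
    which satisfy the conditions of Theorems 1-3."""
    for _ in range(max_iter):
        gx = g(x)
        mu = x + theta * gx
        if gx == 0.0 or mu == x or g(mu) == gx:
            break                                  # converged or degenerate data
        tau = gx / ((g(mu) - gx) / (mu - x))       # g(x_s) / g[mu_s, x_s]
        y = x - m * (tau + d1 * tau**3)            # first substep
        gy = g(y)
        zeta = (gy / gx) ** (1.0 / m)              # principal root, positive ratios here
        vth = (gy / g(mu)) ** (1.0 / m)
        Q = a1 + zeta / 2 + (2 - c) * zeta**2
        M = -a1 + vth / 2 + c * vth**2
        x_new = y - m * tau * (Q + M)              # second substep
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Double root x = 2 of g(x) = (x - 2)^2 (x + 1), so m = 2.
root = scheme_13_m1(lambda x: (x - 2)**2 * (x + 1), x=2.2, m=2)
```

Only three evaluations of g per iteration are needed ($g(x_s)$, $g(\mu_s)$, $g(y_s)$), which is what makes the fourth order optimal in the Kung–Traub sense.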
Theorem 1.
Assume that a map $g : D \subset \mathbb{C} \to \mathbb{C}$ is analytic in a region $D$ surrounding the required zero. Let $x = \eta$ (say) be a multiple solution of multiplicity m = 2. Then, the newly constructed scheme (13) has fourth-order convergence under the following conditions:
$H_0 = 0, \quad H_1 = 1, \quad H_2 = 0, \quad M_0 = -Q_0, \quad M_1 = Q_1 = \frac{1}{2}, \quad Q_2 = 4 - M_2.$
It satisfies the following error equation:
$e_{s+1} = -\frac{\alpha_0^2\theta + \alpha_1}{48\,\alpha_0^3}\Big[-18\,\alpha_1\alpha_0^2\theta + 12\,\alpha_2\alpha_0 - 33\,\alpha_1^2 + \big(\alpha_1^2 + \alpha_0^4\theta^2 + 2\,\alpha_1\alpha_0^2\theta\big)M_3 - 12\,\alpha_0^2\theta\,M_2\big(\alpha_0^2\theta + \alpha_1\big) - 2\,\alpha_0^2 H_3 + \big(\alpha_0^4\theta^2 + 2\,\alpha_1\alpha_0^2\theta + \alpha_1^2\big)Q_3\Big]e_s^4 + O(e_s^5),$
where $|Q_0| < \infty$, $|M_2| < \infty$, $|M_3| < \infty$, $|Q_3| < \infty$, and $|H_3| < \infty$. Note that $H_0$, $M_0$, and $Q_0$ denote the values of $H$, $M$, and $Q$ at the origin $(0)$. The subscripts $j = 1, 2, 3$ in $H_j$ represent the first-order, second-order, and third-order derivatives, respectively, at the origin $(0)$. The quantities $M_j$ and $Q_j$ are defined in a similar fashion.
Proof.
We assume that $e_s = x_s - \eta$ and $\alpha_i = \frac{g^{(2+i)}(\eta)}{(2+i)!}$, $0 \leq i \leq 4$ $(i \in \mathbb{W})$, are the error in the sth iteration and the asymptotic constants, respectively. We choose the Taylor's series expansions of g at the two points $x = x_s$ and $x = \mu_s = x_s + \theta g(x_s)$ in the neighborhood of $\eta$ with the hypotheses $g(\eta) = g'(\eta) = 0$ and $g''(\eta) \neq 0$. Then, we obtain
$g(x_s) = e_s^2\big(\alpha_0 + \alpha_1 e_s + \alpha_2 e_s^2 + \alpha_3 e_s^3 + \alpha_4 e_s^4 + O(e_s^5)\big)$
and
$g(\mu_s) = e_s^2\Big[\alpha_0 + \big(2\alpha_0^2\theta + \alpha_1\big)e_s + \big(\alpha_0^3\theta^2 + 5\alpha_1\alpha_0\theta + \alpha_2\big)e_s^2 + \big(5\alpha_1\alpha_0^2\theta^2 + 6\alpha_2\alpha_0\theta + 3\alpha_1^2\theta + \alpha_3\big)e_s^3 + \big(\alpha_1\alpha_0^3\theta^3 + 8\alpha_2\alpha_0^2\theta^2 + 7\alpha_1^2\alpha_0\theta^2 + 7\alpha_3\alpha_0\theta + 7\alpha_1\alpha_2\theta + \alpha_4\big)e_s^4 + O(e_s^5)\Big].$
By using Equations (15) and (16), we have
$\tau = \frac{g(x_s)}{g[\mu_s, x_s]} = \frac{1}{2}e_s - \frac{\alpha_0^2\theta + \alpha_1}{4\,\alpha_0}\,e_s^2 + \frac{\alpha_0^4\theta^2 - 4\,\alpha_1\alpha_0^2\theta - 4\,\alpha_2\alpha_0 + 3\,\alpha_1^2}{8\,\alpha_0^2}\,e_s^3 + O(e_s^4).$
It is clear from Expression (17) that $\tau = O(e_s)$. Thus, we can easily expand $H(\tau)$ in the neighborhood of the origin $(0)$ in the following way:
$H(\tau) = H_0 + H_1\tau + \frac{1}{2!}H_2\tau^2 + \frac{1}{3!}H_3\tau^3,$
where $H_j = H^{(j)}(0)$, $0 \leq j \leq 3$ $(j \in \mathbb{W})$.
The Expressions (17) and (18) provide the following error expression:
$\tilde{e}_s = y_s - \eta = -2H_0 + (1 - H_1)e_s + \left(H_1\,\frac{\alpha_0^2\theta + \alpha_1}{2\,\alpha_0} - \frac{H_2}{4}\right)e_s^2 + O(e_s^3).$
From (19), we observe that the scheme will attain at least second-order convergence when
$H_0 = 0, \quad H_1 = 1.$
By using Expression (20) in (19), we obtain
$\tilde{e}_s = \frac{1}{4}\left(2\,\alpha_0\theta + \frac{2\,\alpha_1}{\alpha_0} - H_2\right)e_s^2 + O(e_s^3).$
By adopting Taylor's series expansions, we have
$g(y_s) = \tilde{e}_s^{\,2}\big(\alpha_0 + \alpha_1\tilde{e}_s + \alpha_2\tilde{e}_s^{\,2}\big) + O(e_s^5).$
From Expressions (17), (18) and (22), we further yield
$\zeta = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{2}} = \frac{1}{4}\left(2\,\alpha_0\theta + \frac{2\,\alpha_1}{\alpha_0} - H_2\right)e_s - \frac{1}{24}\left(3\,\frac{2\,\alpha_0^4\theta^2 - 6\,\alpha_1\alpha_0^2\theta - 8\,\alpha_2\alpha_0 + 8\,\alpha_1^2 - H_2\big(2\,\alpha_0^3\theta + 3\,\alpha_1\alpha_0\big)}{\alpha_0^2} + H_3\right)e_s^2 + O(e_s^3),$
and
$\vartheta = \left(\frac{g(y_s)}{g(\mu_s)}\right)^{\frac{1}{2}} = \frac{1}{4}\left(2\,\alpha_0\theta + \frac{2\,\alpha_1}{\alpha_0} - H_2\right)e_s - \frac{1}{24}\left(3\,\frac{6\,\alpha_0^4\theta^2 - 2\,\alpha_1\alpha_0^2\theta - 8\,\alpha_2\alpha_0 + 8\,\alpha_1^2 - H_2\big(4\,\alpha_0^3\theta + 3\,\alpha_1\alpha_0\big)}{\alpha_0^2} + H_3\right)e_s^2 + O(e_s^3).$
From Expressions (23) and (24), we have $\zeta = \vartheta = O(e_s)$. Thus, we expand $Q(\zeta)$ and $M(\vartheta)$ in the neighborhood of the origin $(0)$, which are defined as:
$M(\vartheta) = M_0 + M_1\vartheta + \frac{1}{2!}M_2\vartheta^2 + \frac{1}{3!}M_3\vartheta^3,$
and
$Q(\zeta) = Q_0 + Q_1\zeta + \frac{1}{2!}Q_2\zeta^2 + \frac{1}{3!}Q_3\zeta^3,$
where $M_j = M^{(j)}(0)$, $Q_j = Q^{(j)}(0)$, and $0 \leq j \leq 3$ $(j \in \mathbb{W})$.
By using Expressions (15)–(26) in scheme (13), we obtain
$e_{s+1} = -(M_0 + Q_0)\,e_s + \sum_{i=0}^{2} A_i\,e_s^{i+2} + O(e_s^5),$
where $A_i = A_i\big(\theta, \alpha_1, \alpha_2, \alpha_3, \alpha_4, H_2, H_3, M_0, M_1, M_2, M_3, Q_0, Q_1, Q_2, Q_3\big)$.
From (27), we observe that the scheme will attain at least second-order convergence when
$M_0 = -Q_0,$
where $Q_0 \in \mathbb{R}$.
The terms $A_0$ and $A_1$ should be simultaneously zero for fourth-order convergence. We can attain this if
$H_2 = 0, \quad M_1 = Q_1 = \frac{1}{2}, \quad Q_2 = 4 - M_2,$
where $M 2 ∈ R$.
We have the following error equation by adopting (28) in (27):
$e_{s+1} = -\frac{\alpha_0^2\theta + \alpha_1}{48\,\alpha_0^3}\Big[-18\,\alpha_1\alpha_0^2\theta + 12\,\alpha_2\alpha_0 - 33\,\alpha_1^2 + \big(\alpha_1^2 + \alpha_0^4\theta^2 + 2\,\alpha_1\alpha_0^2\theta\big)M_3 - 12\,\alpha_0^2\theta\,M_2\big(\alpha_0^2\theta + \alpha_1\big) - 2\,\alpha_0^2 H_3 + \big(\alpha_0^4\theta^2 + 2\,\alpha_1\alpha_0^2\theta + \alpha_1^2\big)Q_3\Big]e_s^4 + O(e_s^5),$
where $M_3, H_3, Q_3 \in \mathbb{R}$. We deduce from Expression (29) that our scheme (13) attains fourth-order convergence for $\theta \in \mathbb{R}$ and $m = 2$ with the same number of function evaluations. Hence, Expression (13) is an optimal scheme. □
Theorem 2.
Under the same conditions as Theorem 1, the suggested iterative technique (13) has fourth-order convergence when $m = 3$. It satisfies the following error equation:
$e_{s+1} = -\frac{18\,\beta_2\beta_1\beta_0 - 27\,\beta_1\beta_0^3\theta + 36\,\beta_1^3 + 2\,\beta_1\beta_0^2 H_3 - \beta_1^3 M_3 - \beta_1^3 Q_3}{162\,\beta_0^3}\,e_s^4 + O(e_s^5).$
Proof.
We assume that $e_s = x_s - \eta$ and $\beta_i = \frac{g^{(3+i)}(\eta)}{(3+i)!}$, $0 \leq i \leq 4$ $(i \in \mathbb{W})$, are the error in the sth iteration and the asymptotic constants, respectively. We choose the Taylor's series expansions of g at the two points $x = x_s$ and $x = \mu_s = x_s + \theta g(x_s)$ in the neighborhood of $\eta$ with the hypotheses $g(\eta) = g'(\eta) = g''(\eta) = 0$ and $g'''(\eta) \neq 0$. Then, we obtain
$g(x_s) = e_s^3\big(\beta_0 + \beta_1 e_s + \beta_2 e_s^2 + \beta_3 e_s^3 + \beta_4 e_s^4 + O(e_s^5)\big)$
and
$g(\mu_s) = e_s^3\Big[\beta_0 + \beta_1 e_s + \big(\beta_2 + 3\,\beta_0^2\theta\big)e_s^2 + \big(7\,\beta_0\beta_1\theta + \beta_3\big)e_s^3 + \big(8\,\beta_2\beta_0\theta + 3\,\beta_0^3\theta^2 + 4\,\beta_1^2\theta + \beta_4\big)e_s^4 + O(e_s^5)\Big],$
respectively.
By using the Expressions (30) and (31), we have
$\tau = \frac{g(x_s)}{g[\mu_s, x_s]} = \frac{1}{3}e_s - \frac{\beta_1}{9\,\beta_0}e_s^2 - \frac{6\,\beta_2\beta_0 + 9\,\beta_0^3\theta - 4\,\beta_1^2}{27\,\beta_0^2}\,e_s^3 + O(e_s^4).$
It is clear from Expression (32) that $\tau = O(e_s)$. Thus, we can expand $H(\tau)$ in the neighborhood of the origin $(0)$ in the following way:
$H(\tau) = H_0 + H_1\tau + \frac{1}{2!}H_2\tau^2 + \frac{1}{3!}H_3\tau^3,$
where $H_j = H^{(j)}(0)$, $0 \leq j \leq 3$ $(j \in \mathbb{W})$.
With the help of Expressions (32) and (33), we further have
$\tilde{e}_s = y_s - \eta = -3H_0 + (1 - H_1)e_s + \left(\frac{\beta_1 H_1}{3\,\beta_0} - \frac{H_2}{6}\right)e_s^2 + O(e_s^3).$
From (34), we observe that the scheme will attain at least the 2nd-order of convergence, when
$H_0 = 0, \quad H_1 = 1.$
By using Expression (35) in (34), we obtain
$\tilde{e}_s = \left(\frac{\beta_1}{3\,\beta_0} - \frac{H_2}{6}\right)e_s^2 + O(e_s^3).$
By adopting Taylor's series expansions, we have
$g(y_s) = \tilde{e}_s^{\,3}\big(\beta_0 + \beta_1\tilde{e}_s + \beta_2\tilde{e}_s^{\,2} + O(\tilde{e}_s^{\,3})\big).$
From Expressions (32), (33) and (37), we further yield
$\zeta = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{3}} = \left(\frac{\beta_1}{3\,\beta_0} - \frac{H_2}{6}\right)e_s + O(e_s^2),$
and
$\vartheta = \left(\frac{g(y_s)}{g(\mu_s)}\right)^{\frac{1}{3}} = \left(\frac{\beta_1}{3\,\beta_0} - \frac{H_2}{6}\right)e_s + O(e_s^2).$
From Expressions (38) and (39), we have $ζ = ϑ = O ( e s )$. Thus, we expand $Q ( ζ )$ and $M ( ϑ )$ in the neighborhood of origin $( 0 )$, which is defined as:
$M(\vartheta) = M_0 + M_1\vartheta + \frac{1}{2!}M_2\vartheta^2 + \frac{1}{3!}M_3\vartheta^3,$
and
$Q(\zeta) = Q_0 + Q_1\zeta + \frac{1}{2!}Q_2\zeta^2 + \frac{1}{3!}Q_3\zeta^3.$
By adopting Expressions (30)–(40) in scheme (13), we obtain
$e_{s+1} = -(M_0 + Q_0)\,e_s + \sum_{i=0}^{2} B_i\,e_s^{i+2} + O(e_s^5),$
where $B_i = B_i\big(\theta, \beta_1, \beta_2, \beta_3, \beta_4, H_2, H_3, M_0, M_1, M_2, M_3, Q_0, Q_1, Q_2, Q_3\big)$.
From (42), we observe that the scheme will attain at least second-order convergence when
$M_0 = -Q_0.$
The coefficients of $e_s^2$ and $e_s^3$ should be simultaneously zero in order to deduce fourth-order convergence. This can easily be obtained with the following values:
$H_2 = 0, \quad M_1 = Q_1 = \frac{1}{2}, \quad Q_2 = 4 - M_2.$
We have the following error equation by adopting (43) in (42):
$e_{s+1} = -\frac{18\,\beta_2\beta_1\beta_0 - 27\,\beta_1\beta_0^3\theta + 36\,\beta_1^3 + 2\,\beta_1\beta_0^2 H_3 - \beta_1^3 M_3 - \beta_1^3 Q_3}{162\,\beta_0^3}\,e_s^4 + O(e_s^5),$
where $M_3, H_3, Q_3 \in \mathbb{R}$. We deduce from Expression (44) that our scheme (13) attains fourth-order convergence for $\theta \in \mathbb{R}$ and $m = 3$ with the same number of function evaluations. Hence, (13) is an optimal scheme. □

#### 2.1. General Error Equation of Technique (13)

Theorem 3.
Under the same conditions as Theorem 1, the suggested scheme (13) has fourth-order convergence when m ≥ 4. It satisfies the following error equation:
$e_{s+1} = \frac{2\,H_3\gamma_1\gamma_0^2 - M_3\gamma_1^3 - Q_3\gamma_1^3 - 6m\,\gamma_1\gamma_2\gamma_0 + (3m + 27)\,\gamma_1^3}{6\,m^3\gamma_0^3}\,e_s^4 + O(e_s^5).$
Proof.
We assume that $e_s = x_s - \eta$ and $\gamma_i = \frac{g^{(m+i)}(\eta)}{(m+i)!}$, $0 \leq i \leq 4$ $(i \in \mathbb{W})$, are the error in the sth iteration and the asymptotic constants, respectively. We choose the Taylor's series expansions of g at the two points $x = x_s$ and $x = \mu_s = x_s + \theta g(x_s)$ in the neighborhood of $\eta$ with the hypotheses $g(\eta) = g'(\eta) = g''(\eta) = \cdots = g^{(m-1)}(\eta) = 0$ and $g^{(m)}(\eta) \neq 0$. Then, we obtain
$g(x_s) = e_s^m\big(\gamma_0 + \gamma_1 e_s + \gamma_2 e_s^2 + \gamma_3 e_s^3 + \gamma_4 e_s^4 + O(e_s^5)\big)$
and
$g(\mu_s) = e_s^m\big(\gamma_0 + \gamma_1 e_s + \gamma_2 e_s^2 + \Gamma e_s^3 + O(e_s^4)\big),$
respectively, where
$\Gamma = \begin{cases} 4\,\gamma_0^2\theta + \gamma_3, & m = 4, \\ \gamma_3, & m > 4. \end{cases}$
By using Expressions (45) and (46), we have
$\tau = \frac{g(x_s)}{g[\mu_s, x_s]} = \frac{1}{m}e_s - \frac{\gamma_1}{m^2\gamma_0}e_s^2 + \frac{(m+1)\,\gamma_1^2 - 2m\,\gamma_0\gamma_2}{m^3\gamma_0^2}\,e_s^3 + O(e_s^4).$
It is clear from the Expression (47) that $τ = O ( e s )$. Thus, we can expand $H ( ζ )$ in the neighborhood of origin $( 0 )$ in the following way:
$H(\tau) = H_0 + H_1\tau + \frac{1}{2!}H_2\tau^2 + \frac{1}{3!}H_3\tau^3,$
where $H_j = H^{(j)}(0)$, $0 \leq j \leq 3$ $(j \in \mathbb{W})$.
With the help of Expressions (47) and (48), we further have
$\tilde{e}_s = y_s - \eta = -mH_0 + (1 - H_1)e_s + \left(\frac{\gamma_1 H_1}{m\,\gamma_0} - \frac{H_2}{2m}\right)e_s^2 + O(e_s^3).$
From (49), we observe that the scheme will attain at least second-order convergence when
$H_0 = 0, \quad H_1 = 1.$
By using Expression (50) in (49), we obtain
$\tilde{e}_s = \left(\frac{\gamma_1}{m\,\gamma_0} - \frac{H_2}{2m}\right)e_s^2 + O(e_s^3).$
By adopting Taylor's series expansions, we obtain
$g(y_s) = \tilde{e}_s^{\,m}\big(\gamma_0 + \gamma_1\tilde{e}_s + \gamma_2\tilde{e}_s^{\,2} + \gamma_3\tilde{e}_s^{\,3} + \gamma_4\tilde{e}_s^{\,4} + O(\tilde{e}_s^{\,5})\big).$
By using (47), (48) and (52), we further yield
$\zeta = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}} = -\frac{1}{2m}\left(H_2 - \frac{2\,\gamma_1}{\gamma_0}\right)e_s + O(e_s^2),$
and
$\vartheta = \left(\frac{g(y_s)}{g(\mu_s)}\right)^{\frac{1}{m}} = -\frac{1}{2m}\left(H_2 - \frac{2\,\gamma_1}{\gamma_0}\right)e_s + O(e_s^2).$
From the Expressions (53) and (54), we have $ζ = ϑ = O ( e s )$. Thus, we expand $Q ( ζ )$ and $M ( ϑ )$ in the neighborhood of origin $( 0 )$, which is defined as:
$M(\vartheta) = M_0 + M_1\vartheta + \frac{1}{2!}M_2\vartheta^2 + \frac{1}{3!}M_3\vartheta^3,$
and
$Q(\zeta) = Q_0 + Q_1\zeta + \frac{1}{2!}Q_2\zeta^2 + \frac{1}{3!}Q_3\zeta^3.$
By adopting Expressions (45)–(55) in scheme (13), we obtain
$e_{s+1} = -(M_0 + Q_0)\,e_s + \sum_{i=0}^{2} C_i\,e_s^{i+2} + O(e_s^5),$
where $C_i = C_i\big(\theta, \gamma_1, \gamma_2, \gamma_3, \gamma_4, H_2, H_3, M_0, M_1, M_2, M_3, Q_0, Q_1, Q_2, Q_3\big)$.
From (57), we observe that the scheme will attain at least second-order convergence when
$M_0 = -Q_0.$
The terms $C_0$ and $C_1$ should be simultaneously zero for fourth-order convergence. We can attain this by choosing the following values:
$H_2 = 0, \quad M_1 = Q_1 = \frac{1}{2}, \quad Q_2 = 4 - M_2.$
We have the final asymptotic error equation by adopting (58) in (57), which is given by
$e_{s+1} = \frac{2\,H_3\gamma_1\gamma_0^2 - M_3\gamma_1^3 - Q_3\gamma_1^3 - 6m\,\gamma_1\gamma_2\gamma_0 + (3m + 27)\,\gamma_1^3}{6\,m^3\gamma_0^3}\,e_s^4 + O(e_s^5),$
where $M_3, H_3, Q_3 \in \mathbb{R}$. We deduce from Expression (59) that our scheme (13) attains fourth-order convergence for $\theta \in \mathbb{R}$ and $m \geq 4$ with the same number of function evaluations. Hence, (13) is an optimal scheme. □
Remark 1.
It seems from (59) (for $m \geq 4$) that θ is not involved in this expression. However, it actually appears in the coefficient of $e_s^5$. We do not need to calculate the coefficient of $e_s^5$ here because the optimal fourth order of convergence has already been obtained; moreover, the calculation of the $e_s^5$ term is quite rigorous and consumes a huge amount of time. Nonetheless, the role of θ can be seen in (29) and (44).
Remark 2.
We can easily obtain Behl’s scheme [16] as a special case of our scheme, by choosing $b = 0$ and $H ( τ ) = τ 2$ in the Expressions (12) and (13), respectively.

#### 2.2. Some Special Cases of the Proposed Scheme

Here, we choose the following weight functions $H(\tau)$, $M(\vartheta)$, and $Q(\zeta)$, which satisfy the conditions of Theorems 1–3:
$M_1: \quad H(\tau) = \tau + d_1\tau^3, \quad M(\vartheta) = -a_1 + \frac{1}{2}\vartheta + c\,\vartheta^2, \quad Q(\zeta) = a_1 + \frac{1}{2}\zeta + (2 - c)\,\zeta^2.$
$M_2: \quad H(\tau) = \frac{a\tau + b_2\tau^3}{a + b_3\tau^2}, \quad M(\vartheta) = \frac{a_2 + b_1\vartheta + c_1\vartheta^2}{u_1 + \frac{(2b_1 - u_1)u_1}{2a_2}\vartheta + w\,\vartheta^2}, \quad Q(\zeta) = \frac{-a_2 + b_1\zeta + (2u_1 - c_1)\zeta^2}{u_1 + \frac{(-2b_1 + u_1)u_1}{2a_2}\zeta + w\,\zeta^2}.$
$M_3: \quad H(\tau) = \tau + d_1\tau^3, \quad M(\vartheta) = \frac{a_2 + b_1\vartheta + c_1\vartheta^2}{u_1 + \frac{(2b_1 - u_1)u_1}{2a_2}\vartheta + w\,\vartheta^2}, \quad Q(\zeta) = \frac{-a_2 + b_1\zeta + (2u_1 - c_1)\zeta^2}{u_1 + \frac{(-2b_1 + u_1)u_1}{2a_2}\zeta + w\,\zeta^2}.$
$M_4: \quad H(\tau) = \frac{a\tau + b_2\tau^3}{a + b_3\tau^2}, \quad M(\vartheta) = -a_1 + \frac{1}{2}\vartheta + c\,\vartheta^2, \quad Q(\zeta) = a_1 + \frac{1}{2}\zeta + (2 - c)\,\zeta^2.$
Here, $d_1, a_1, c, a, b_2, b_3, a_2, b_1, c_1, u_1, w \in \mathbb{R}$. For the numerical work, we choose $d_1 = 1$, $a_1 = 2$, $c = 1$, $a = 2$, $b_2 = 1$, $b_3 = 1$, $a_2 = 1$, $b_1 = 1$, $c_1 = 1$, $u_1 = 2$, $w = 2$ in the above weight functions.
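The conditions of Theorems 1–3 can be checked numerically for the parameter values above; a small finite-difference sketch for the $M_1$ weights (the step size h is our choice):

```python
# M1 weights with the parameter values quoted above (d1 = 1, a1 = 2, c = 1).
d1, a1, c = 1.0, 2.0, 1.0
H = lambda t: t + d1 * t**3
M = lambda v: -a1 + v / 2 + c * v**2
Q = lambda z: a1 + z / 2 + (2 - c) * z**2

h = 1e-5
d1st = lambda f: (f(h) - f(-h)) / (2 * h)           # central 1st derivative at 0
d2nd = lambda f: (f(h) - 2 * f(0) + f(-h)) / h**2   # central 2nd derivative at 0

# Conditions: H0 = 0, H1 = 1, H2 = 0, M0 = -Q0, M1 = Q1 = 1/2, Q2 = 4 - M2.
ok = (abs(H(0)) < 1e-10 and
      abs(d1st(H) - 1) < 1e-8 and
      abs(d2nd(H)) < 1e-6 and
      abs(M(0) + Q(0)) < 1e-10 and
      abs(d1st(M) - 0.5) < 1e-8 and
      abs(d1st(Q) - 0.5) < 1e-8 and
      abs(d2nd(Q) - (4 - d2nd(M))) < 1e-4)
```

The same check passes for the $M_2$–$M_4$ combinations, since they reuse these components.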

## 3. Numerical Experiments

In this section, the proposed schemes $M_1$–$M_4$ are tested on some academic and application-oriented problems. The attained outcomes are compared with the methods already developed by Zafar et al. [19], Sharma et al. [12], Behl [16], and Kansal et al. [6], respectively. All of the above-mentioned existing schemes are listed below:
Zafar et al. scheme ($F M 1$) [19]:
$y_s = x_s - m\,\frac{g(x_s)}{g'(x_s)}, \quad x_{s+1} = y_s - m\,u_s\,\frac{4u_s + 1}{(u_s + 1)^2}\,\frac{g(x_s)}{g'(x_s)}, \quad s = 0, 1, 2, \ldots,$
where
$u_s = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}.$
Zafar et al. scheme ($F M 2$) [19]:
$y_s = x_s - m\,\frac{2\,g(x_s)}{2\,g'(x_s) + m\,g(x_s)}, \quad x_{s+1} = y_s - m\,u_s\left(1 + 2u_s + \frac{11}{2}u_s^2\right)\frac{g(x_s)}{g'(x_s) + m\,g(x_s)}, \quad s = 0, 1, 2, \ldots,$
where
$u_s = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}.$
Sharma et al. scheme ($S M 1$) [12]:
$y_s = x_s - m\,\frac{g(x_s)}{g[x_s, \mu_s]}, \quad x_{s+1} = y_s - \frac{m}{2}\,h_s(1 + 3h_s)\left(1 + \frac{1}{v_s}\right)\frac{g(x_s)}{g[x_s, \mu_s]}, \quad s = 0, 1, 2, \ldots,$
where
$\mu_s = x_s + \gamma g(x_s), \ \gamma \in \mathbb{R}, \quad u_s = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}, \quad v_s = \left(\frac{g(\mu_s)}{g(x_s)}\right)^{\frac{1}{m}}, \quad h_s = \frac{u_s}{1 + u_s}.$
Behl scheme ($R M$) [16]:
$y_s = x_s - m\,H(\tau), \quad x_{s+1} = y_s - m\,\tau\left(\zeta^2 + \frac{1}{10}\zeta\vartheta + M(\vartheta)\right), \quad s = 0, 1, 2, \ldots,$
where
$\mu_s = x_s + \theta g(x_s), \quad \tau = \frac{g(x_s)}{g[\mu_s, x_s]}, \quad \vartheta = \left(\frac{g(y_s)}{g(x_s)}\right)^{\frac{1}{m}}, \quad \zeta = \left(\frac{g(y_s)}{g(\mu_s)}\right)^{\frac{1}{m}}, \quad H(\tau) = \tau + \tau^3, \quad M(\vartheta) = -\frac{\vartheta\,(-7.6\,\vartheta - 1)}{7.6\,\vartheta + 2}.$
Kansal et al. scheme ($T M$) [6]:
$\mu_s = x_s + \gamma g(x_s), \quad x_{s+1} = x_s - m\,\frac{\frac{1}{4}g(\mu_s) + \frac{3}{4}g(x_s)}{g[\mu_s, x_s]}, \quad s = 0, 1, 2, \ldots.$
In addition to the above methods, we also compare our methods with another fourth-order derivative-free scheme (6) proposed by Baccouch [10], denoted by $(BM)$.
In all the experimental work, we consider the value $\gamma = -0.01$. The outcomes of the experiments were obtained with the software Mathematica 10 using 10,000 multiple-precision digits of mantissa on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz with 8 GB of RAM on a 64-bit operating system. The stopping criterion is $|x_s - x_{s-1}| + |g(x_s)| \leq 10^{-200}$. The following tables show that our methods illustrate better results than the earlier studies in view of the errors between two consecutive iterations $e_s = |x_s - x_{s-1}|$, CPU time, and the ACOC (approximate computational order of convergence), denoted by $\rho$. The following approach is adopted to calculate the ACOC.
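The ACOC formula itself is not reproduced in the extracted text; the estimate commonly used in this literature (due to Cordero and Torregrosa), based on three consecutive iterates, is

```latex
\rho \approx \frac{\ln\big(|x_{s+1} - x_s| \,/\, |x_s - x_{s-1}|\big)}
                  {\ln\big(|x_s - x_{s-1}| \,/\, |x_{s-1} - x_{s-2}|\big)}.
```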
Furthermore, the iterative process stops after three iterations, and each numerical problem is tested against different initial values. It is important to note that $b(\pm a)$ means $b \times 10^{\pm a}$ in the following tables.
Example 1.
Firstly, we tested the methods on the Van der Waals ideal gas equation [15]
$\left(P + \frac{a\,n^2}{V^2}\right)(V - nb) = nRT,$
which describes the behavior of a particular gas for particular values of a and b. The values of $n, R,$ and T can be calculated from a and b. For a specific case, this equation reduces to the following nonlinear equation for the volume of the gas (V), written in the variable x:
$g_1(x) = x^3 - 5.22x^2 + 9.0825x - 5.2675.$
One of the required zeros of $g_1(x)$, of multiplicity m = 2, is x = 1.75. Table 1 presents the obtained results of the different iterative methods for the starting point $x_0$ = 1.9. It is easily observed from the table that the proposed methods $M_1, M_2, M_3$, and $M_4$ have smaller absolute functional errors than the other methods. In addition, the order of convergence is not achieved by method $F M_2$ even up to seven iterations. Furthermore, our method $M_4$ consumes the lowest CPU time compared with the other mentioned methods.
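The stated root and its multiplicity are easy to verify directly; a short Python check (the derivatives are computed by hand for this sketch):

```python
g1   = lambda x: x**3 - 5.22 * x**2 + 9.0825 * x - 5.2675
dg1  = lambda x: 3 * x**2 - 10.44 * x + 9.0825    # g1'
d2g1 = lambda x: 6 * x - 10.44                    # g1''

x = 1.75
# g1 and g1' vanish at x = 1.75 while g1''(1.75) = 0.06 != 0,
# so the multiplicity of this zero is exactly m = 2.
residuals = (abs(g1(x)), abs(dg1(x)), abs(d2g1(x)))
```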
Example 2.
Next, we consider the study of the blood rheology model [20], which investigates the physical and flow characteristics of blood. In reality, blood is a non-Newtonian fluid and is referred to as a Casson fluid. According to the Casson fluid model, basic fluids flow in tubes in such a way that the wall-to-wall region experiences a velocity gradient and the fluid's central core moves as a plug with minimal deformation. The following function is taken into consideration as a nonlinear equation to examine the plug flow of Casson fluids:
$H = 1 - \frac{16}{7}\sqrt{x} + \frac{4}{3}x - \frac{1}{21}x^4;$
here, we consider $H = 0.40$ to compute the flow rate reduction, which reduces to the nonlinear equation
$g(x) = \frac{x^8}{441} - \frac{8x^5}{63} - \frac{2857144357\,x^4}{50000000000} + \frac{16x^2}{9} - \frac{906122449\,x}{250000000} + \frac{3}{10}.$
To make the function have multiple roots, we take $g_2(x)$ as
$g_2(x) = \left(\frac{x^8}{441} - \frac{8x^5}{63} - \frac{2857144357\,x^4}{50000000000} + \frac{16x^2}{9} - \frac{906122449\,x}{250000000} + \frac{3}{10}\right)^4.$
By applying the proposed schemes, we obtained the required zero x = 0.08643356… of multiplicity m = 4 of the function $g_2(x)$. Table 2 presents the obtained results of the different iterative methods for the starting point $x_0$ = 0.22. It is easily observed from the table that the proposed methods $M_1, M_2, M_3$, and $M_4$ have smaller absolute functional errors than the other methods.
Example 3.
Since eigenvalues play a significant role in linear algebra, they have many applications in real-life problems such as image processing and product quality. Sometimes, it is a tough task to evaluate the eigenvalues of a larger matrix. Thus, we consider the following ninth-order matrix:
$B = \frac{1}{8}\begin{pmatrix} -12 & 0 & 0 & 19 & -19 & 76 & -19 & 18 & 437 \\ -64 & 24 & 0 & -24 & 24 & 64 & -8 & 32 & 376 \\ -16 & 0 & 24 & 4 & -4 & 16 & -4 & 8 & 92 \\ -40 & 0 & 0 & -10 & 50 & 40 & 2 & 20 & 242 \\ -4 & 0 & 0 & -1 & 41 & 4 & 1 & 2 & 25 \\ -40 & 0 & 0 & 18 & -18 & 104 & -18 & 20 & 462 \\ -84 & 0 & 0 & -29 & 29 & 84 & 21 & 42 & 501 \\ 16 & 0 & 0 & -4 & 4 & -16 & 4 & 16 & -92 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 24 \end{pmatrix}.$
The characteristic equation of matrix B forms the following polynomial equation:
$g_3(x) = x\left(x^8 - 29x^7 + 349x^6 - 2261x^5 + 8455x^4 - 17663x^3 + 15927x^2 + 6993x - 24732\right) + 12960.$
This function has a zero x = 3 of multiplicity m = 4. Table 3 and Table 4 report the results of the proposed schemes, which are much better than the available techniques in view of absolute functional errors, order of convergence, and CPU time. We choose two starting points, $x_0$ = 2.8 and $x_0$ = 3.1, for a better comparison; the initial guess $x_0$ = 2.8 lies to the left of the required root, and the other to its right. Furthermore, although method $F M_2$ consumes the lowest CPU time, its convergence toward the required zero is very slow, and it does not attain the required convergence order.
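Expanding $g_3$ gives a monic ninth-degree polynomial with integer coefficients (the expansion is ours), and the multiplicity of the zero x = 3 can then be confirmed exactly by repeated synthetic division:

```python
# g3 expanded: x^9 - 29x^8 + 349x^7 - 2261x^6 + 8455x^5 - 17663x^4
#              + 15927x^3 + 6993x^2 - 24732x + 12960
coeffs = [1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960]

def deflate(c, r):
    """Synthetic division of the polynomial c by (x - r).
    Returns (quotient coefficients, remainder)."""
    q = [c[0]]
    for a in c[1:]:
        q.append(a + r * q[-1])
    return q[:-1], q[-1]

# Count how many times (x - 3) divides g3; integer arithmetic keeps this exact.
mult, c = 0, coeffs
while True:
    q, rem = deflate(c, 3)
    if rem != 0:
        break
    mult, c = mult + 1, q
# mult is now the multiplicity of the zero x = 3.
```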
Example 4.
Now, we examine the suggested methods on the following academic problem, having multiplicity 4 at the root $z = i$:
$g_4(z) = z\,(z^2 + 1)\left(2e^{z^2 + 1} + z^2 - 1\right)\cosh^2\left(\frac{\pi z}{2}\right).$
The results with initial values $x 0 = 1.2 i$, and $x 0 = 0.9 i$, respectively, are shown in Table 5 and Table 6. It is clear from the tables that our methods are showing much better results not only in the case of absolute residual errors but also in CPU timing.
Example 5.
Next, the following academic problem has been considered:
$g_5(x) = \left(x - \sqrt{5}\right)^4\left(x + \frac{1}{2}\right),$
which has a zero $x = \sqrt{5} \approx 2.23607$ of multiplicity 4. The suggested methods are tested with the starting value $x_0$ = 1.4, and the attained results are presented in Table 7. We found from the numerical results that our methods $M_1, M_2, M_3$, and $M_4$ have better numerical results than the methods $S M_1, F M_1,$ and $F M_2$. Method $M_2$ not only consumes the lowest CPU time but also performs much better than the existing ones.
Example 6.
Lastly, the following academic problem with large multiplicity has been considered:
$g_6(x) = e^x - \sum_{l=0}^{9} \frac{x^l}{l!},$
which has a zero x = 0 of multiplicity 10. All the proposed and earlier methods are examined with the initial value $x_0$ = 1. The achieved outcomes are shown in Table 8, which clearly demonstrates that the proposed methods outperform the other methods. Moreover, the fourth-order methods $F M_1$ and $F M_2$ do not work for this example of higher multiplicity.
Overall, we observe from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 that proposed techniques have lower residual errors and CPU time in contrast to other methods with the same number of iterations.

## 4. Conclusions

• We constructed a new two-step, derivative-free, and cost-effective iterative technique for multiple zeros $(m \geq 2)$.
• The presented scheme uses three different weight functions (at both substeps) in order to obtain a more general form of two-point methods.
• Several new cases are depicted in Section 2.
• Behl’s scheme [16] is obtained as a special case of our scheme, by choosing $b = 0$ and $H ( τ ) = τ 2$ in the Expressions (12) and (13), respectively.
• Since our scheme (13) consumes only three values of g at different points, it attains the maximum bound (optimal level) of the Kung–Traub conjecture.
• From Table 7, it is confirmed that methods $F M_1$ and $F M_2$ diverge from the required solution, whereas our methods do not exhibit this behavior. Moreover, $M_4$ not only converges to the required solution but also has the lowest absolute error among the depicted techniques.
• Finally, we deduce from Table 1, Table 2, Table 3, Table 4, Table 5, Table 6, Table 7 and Table 8 that our schemes are more stable and cost effective. These methods could be a better alternative to the earlier studies.

## Author Contributions

Conceptualization, R.B. and S.B.; Methodology, R.B. and S.B.; Validation, R.B. and S.B.; writing—original draft preparation, R.B. and S.B.; writing—review & editing, F.M. and M.A.A. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Kansal, M.; Cordero, A.; Torregrosa, J.R.; Bhalla, S. A stable class of modified Newton-like methods for multiple roots and their dynamics. Int. J. Nonlinear Sci. Numer. Simul. 2020, 21, 603–621. [Google Scholar] [CrossRef]
2. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Space; Academic Press: New York, NY, USA, 1973. [Google Scholar]
3. Petkovic, M.; Neta, B.; Petkovic, L.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Academic Press: Cambridge, MA, USA, 2012. [Google Scholar]
4. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
5. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal one-point iterative function free from derivatives for multiple roots. Mathematics 2020, 8, 709. [Google Scholar] [CrossRef]
6. Kansal, M.; Alshomrani, A.S.; Bhalla, S.; Behl, R.; Salimi, M. One Parameter Optimal Derivative-Free Family to Find the Multiple Roots of Algebraic Nonlinear Equations. Mathematics 2020, 8, 2223. [Google Scholar] [CrossRef]
7. Jaiswal, J.P. An Optimal Order Method for Multiple Roots in Case of Unknown Multiplicity. Algorithms 2016, 9, 10. [Google Scholar] [CrossRef]
8. Ignatova, B.; Kyurkchiev, N.; Iliev, A. Multipoint algorithms arising from optimal in the sense of Kung-Traub iterative procedures for numerical solution of nonlinear equations. Gen. Math. Notes 2011, 11, 4–79. [Google Scholar]
9. Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. J. Math. Chem. 2015, 53, 880–892. [Google Scholar] [CrossRef]
10. Baccouch, M. A Family of High Order Derivative-Free Iterative Methods for Solving Root-Finding Problems. Int. J. Appl. Comput. Math. 2019, 5, 1–31. [Google Scholar] [CrossRef]
11. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452. [Google Scholar] [CrossRef]
12. Sharma, J.R.; Kumar, S.; Jäntschi, L. On Derivative Free Multiple-Root Finders with Optimal Fourth Order Convergence. Mathematics 2020, 8, 1091. [Google Scholar] [CrossRef]
13. Kumar, S.; Kumar, D.; Sharma, J.R.; Cesarano, C.; Aggarwal, P.; Chu, Y.M. An optimal fourth order derivative-free numerical algorithm for multiple roots. Symmetry 2020, 12, 1038. [Google Scholar] [CrossRef]
14. Behl, R.; Alharbi, S.K.; Mallawi, F.O.; Salimi, M. An Optimal Derivative-Free Ostrowski’s Scheme for Multiple Roots of Nonlinear Equations. Mathematics 2020, 8, 1809. [Google Scholar] [CrossRef]
15. Behl, R.; Bhalla, S.; Magreñán, Á.; Moysi, A. An Optimal Derivative Free Family of Chebyshev–Halley’s Method for Multiple Zeros. Mathematics 2021, 9, 546. [Google Scholar] [CrossRef]
16. Behl, R. A Derivative Free Fourth-Order Optimal Scheme for Applied Science Problems. Mathematics 2022, 10, 1372. [Google Scholar] [CrossRef]
17. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
18. Ahlfors, L.V. Complex Analysis; McGraw-Hill Book, Inc.: New York, NY, USA, 1979. [Google Scholar]
19. Zafar, F.; Cordero, A.; Torregrosa, J.R. Stability analysis of a family of optimal fourth-order methods for multiple roots. Numer. Algorithms 2019, 81, 947–981. [Google Scholar] [CrossRef]
20. Fournier, R.L. Basic Transport Phenomena in Biomedical Engineering; Taylor & Francis: New York, NY, USA, 2007. [Google Scholar]
Table 1. The outcomes of Example 1 based on various methods.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $1.9(-2)$ | $5.7(-4)$ | $4.6(-9)$ | $1.3(-59)$ | $4.000$ | $0.454$ |
| $FM_1$ | $1.9(-2)$ | $5.7(-4)$ | $4.6(-9)$ | $1.4(-59)$ | $4.000$ | $0.469$ |
| $FM_2$ | $1.4(-2)$ | $7.1(-5)$ | $1.2(-14)$ | $7.3(-55)$ | $6.148$ | $0.359$ |
| $RM$ | $1.9(-2)$ | $-6.1(-5)$ | $7.2(-9)$ | $7.7(-58)$ | $4.000$ | $0.547$ |
| $BM$ | $1.9(-2)$ | $1.2(-4)$ | $2.5(-9)$ | $6.3(-62)$ | $4.000$ | $0.434$ |
| $TM$ | $3.4(-2)$ | $9.0(-3)$ | $1.1(-3)$ | $1.2(-11)$ | $2.000$ | $0.406$ |
| $M_1$ | $1.6(-1)$ | $2.8(-4)$ | $1.5(-10)$ | $5.2(-72)$ | $4.000$ | $0.516$ |
| $M_2$ | $1.7(-2)$ | $3.6(-4)$ | $4.9(-10)$ | $8.8(-68)$ | $4.000$ | $0.438$ |
| $M_3$ | $1.7(-2)$ | $3.6(-4)$ | $4.8(-10)$ | $8.2(-68)$ | $4.000$ | $0.437$ |
| $M_4$ | $1.6(-2)$ | $2.8(-4)$ | $1.5(-10)$ | $4.9(-72)$ | $4.000$ | $0.421$ |
Table 2. The outcomes of Example 2 based on various methods.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $2.7(-2)$ | $7.0(-7)$ | $3.6(-25)$ | $4.6(-389)$ | $4.000$ | $0.406$ |
| $FM_1$ | $2.6(-2)$ | $6.0(-7)$ | $2.0(-25)$ | $3.0(-393)$ | $4.000$ | $0.375$ |
| $FM_2$ | $2.6(-3)$ | $6.7(-14)$ | $3.0(-56)$ | $2.2(-898)$ | $4.000$ | $0.360$ |
| $RM$ | $2.7(-2)$ | $8.2(-7)$ | $8.4(-25)$ | $9.3(-383)$ | $4.000$ | $0.406$ |
| $BM$ | $3.0(-2)$ | $3.7(-5)$ | $1.1(-16)$ | $5.4(-247)$ | $4.000$ | $0.719$ |
| $TM$ | $1.1(-2)$ | $6.3(-5)$ | $2.1(-9)$ | $4.0(-69)$ | $2.000$ | $0.421$ |
| $M_1$ | $2.8(-2)$ | $4.4(-7)$ | $3.2(-26)$ | $5.3(-407)$ | $4.000$ | $0.347$ |
| $M_2$ | $2.7(-2)$ | $4.5(-7)$ | $3.7(-26)$ | $8.6(-406)$ | $4.000$ | $0.324$ |
| $M_3$ | $2.8(-2)$ | $5.1(-7)$ | $6.8(-26)$ | $2.3(-401)$ | $4.000$ | $0.338$ |
| $M_4$ | $2.7(-2)$ | $3.8(-7)$ | $1.6(-26)$ | $5.6(-412)$ | $4.000$ | $0.406$ |
Table 3. The outcomes of Example 3 based on various methods.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $7.7(-5)$ | $6.2(-18)$ | $2.6(-70)$ | $4.2(-1115)$ | $4.000$ | $0.468$ |
| $FM_1$ | $9.9(-5)$ | $1.7(-17)$ | $1.4(-68)$ | $9.4(-1088)$ | $4.000$ | $0.500$ |
| $FM_2$ | $3.4(-2)$ | $6.8(-8)$ | $2.5(-15)$ | $3.4(-237)$ | $1.330$ | $0.422$ |
| $RM$ | $8.5(-5)$ | $1.2(-17)$ | $5.6(-69)$ | $2.0(-1093)$ | $4.000$ | $0.563$ |
| $BM$ | $4.1(-2)$ | $2.2(-2)$ | $6.7(-2)$ | $6.4(-10)$ | * | * |
| $TM$ | $7.2(-3)$ | $1.2(-5)$ | $3.5(-11)$ | $5.2(-85)$ | $2.000$ | $0.500$ |
| $M_1$ | $7.7(-5)$ | $4.9(-18)$ | $8.2(-71)$ | $1.1(-1123)$ | $4.000$ | $0.453$ |
| $M_2$ | $7.2(-5)$ | $3.3(-18)$ | $1.4(-71)$ | $4.5(-1136)$ | $4.000$ | $0.438$ |
| $M_3$ | $7.9(-5)$ | $6.0(-18)$ | $2.0(-70)$ | $2.0(-1117)$ | $4.000$ | $0.499$ |
| $M_4$ | $6.9(-5)$ | $2.5(-18)$ | $4.2(-72)$ | $1.1(-1144)$ | $4.000$ | $0.531$ |

* The order of convergence is not attained in the first four iterations.
Table 4. The outcomes of Example 3 based on various methods with starting point $x_0 = 3.1$.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $5.9(-3)$ | $2.0(-10)$ | $2.9(-40)$ | $1.7(-634)$ | $4.000$ | $0.625$ |
| $FM_1$ | $6.1(-3)$ | $2.3(-10)$ | $5.2(-40)$ | $2.4(-630)$ | $4.000$ | $0.474$ |
| $FM_2$ | $3.3(-6)$ | $5.8(-12)$ | $7.4(-47)$ | $5.3(-369)$ | $6.000$ | $0.460$ |
| $RM$ | $6.0(-3)$ | $3.0(-10)$ | $1.8(-39)$ | $2.6(-621)$ | $4.000$ | $0.502$ |
| $BM$ | $1.6(-3)$ | $2.3(-10)$ | $1.2(-37)$ | $3.4(-583)$ | $4.000$ | $1.187$ |
| $TM$ | $2.9(-3)$ | $2.0(-6)$ | $9.6(-13)$ | $1.8(-97)$ | $2.000$ | $0.567$ |
| $M_1$ | $6.0(-3)$ | $1.8(-10)$ | $1.4(-40)$ | $4.5(-640)$ | $4.000$ | $0.500$ |
| $M_2$ | $5.9(-3)$ | $1.4(-10)$ | $4.9(-41)$ | $1.8(-647)$ | $4.000$ | $0.416$ |
| $M_3$ | $6.0(-3)$ | $1.9(-10)$ | $2.2(-40)$ | $8.8(-637)$ | $4.000$ | $0.478$ |
| $M_4$ | $5.9(-3)$ | $1.3(-10)$ | $2.8(-41)$ | $1.2(-651)$ | $4.000$ | $0.463$ |
Table 5. The outcomes of Example 4 based on various methods with starting point $x_0 = 1.2i$.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $1.5(-4)$ | $2.8(-16)$ | $3.2(-63)$ | $1.9(-1000)$ | $4.000$ | $0.687$ |
| $FM_1$ | $1.7(-4)$ | $4.1(-16)$ | $1.5(-62)$ | $6.8(-990)$ | $4.000$ | $0.641$ |
| $FM_2$ | $3.6(-2)$ | $1.5(-3)$ | $3.6(-12)$ | $1.2(-182)$ | $4.000$ | $1.155$ |
| $RM$ | $1.5(-4)$ | $2.9(-16)$ | $4.0(-63)$ | $1.1(-958)$ | $4.000$ | $0.656$ |
| $BM$ | $1.2(-1)$ | $1.5(-2)$ | $7.5(-7)$ | $1.9(-92)$ | $4.708$ | $2.921$ |
| $TM$ | $7.3(-3)$ | $1.7(-5)$ | $1.0(-10)$ | $3.7(-81)$ | $2.000$ | $0.459$ |
| $M_1$ | $1.1(-4)$ | $4.5(-17)$ | $1.1(-66)$ | $7.2(-1057)$ | $4.000$ | $0.688$ |
| $M_2$ | $1.1(-4)$ | $1.2(-16)$ | $8.1(-65)$ | $1.5(-1026)$ | $4.000$ | $0.625$ |
| $M_3$ | $1.2(-4)$ | $6.6(-17)$ | $5.8(-66)$ | $4.2(-1045)$ | $4.000$ | $0.563$ |
| $M_4$ | $1.3(-4)$ | $9.2(-17)$ | $2.2(-65)$ | $1.1(-1035)$ | $4.000$ | $0.594$ |
Table 6. The outcomes of Example 4 based on various methods with starting point $x_0 = 0.9i$.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $9.5(-3)$ | $3.7(-9)$ | $9.6(-35)$ | $8.5(-545)$ | $4.000$ | $0.688$ |
| $FM_1$ | $9.6(-3)$ | $3.9(-9)$ | $1.1(-34)$ | $7.2(-544)$ | $4.000$ | $0.718$ |
| $FM_2$ | $1.1(-2)$ | $9.9(-5)$ | $1.2(-8)$ | $1.6(-126)$ | $1.335$ | $1.547$ |
| $RM$ | $9.3(-3)$ | $3.7(-9)$ | $1.0(-34)$ | $4.3(-544)$ | $4.000$ | $0.828$ |
| $BM$ | $1.8(-4)$ | $1.5(-14)$ | $8.3(-55)$ | $8.2(-860)$ | $4.000$ | $2.640$ |
| $TM$ | $4.4(-3)$ | $6.4(-6)$ | $1.4(-11)$ | $4.9(-88)$ | $2.000$ | $0.547$ |
| $M_1$ | $9.4(-3)$ | $1.9(-9)$ | $3.8(-36)$ | $2.7(-568)$ | $4.000$ | $0.814$ |
| $M_2$ | $9.5(-3)$ | $2.7(-9)$ | $1.8(-35)$ | $3.9(-557)$ | $4.000$ | $0.625$ |
| $M_3$ | $9.4(-3)$ | $2.2(-9)$ | $7.1(-36)$ | $1.0(-563)$ | $4.000$ | $0.704$ |
| $M_4$ | $9.6(-3)$ | $2.4(-9)$ | $1.0(-35)$ | $4.4(-561)$ | $4.000$ | $0.735$ |
Table 7. The outcomes of Example 5 based on various methods.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $1.6(-2)$ | $8.6(-9)$ | $7.5(-34)$ | $1.2(-534)$ | $4.000$ | $0.469$ |
| $FM_1$ | $1.5(-2)$ | $6.3(-9)$ | $2.1(-34)$ | $1.3(-543)$ | $4.000$ | $0.469$ |
| $FM_2$ | $2.3(-1)$ | $5.2(-1)$ | $1.2(-1)$ | $1.0(-19)$ | $1.326$ | $0.479$ |
| $RM$ | $1.5(-2)$ | $1.1(-8)$ | $2.7(-33)$ | $4.4(-525)$ | $4.000$ | $0.453$ |
| $BM$ | $2.1(-1)$ | $5.2(-1)$ | $3.1(-2)$ | $8.6(-28)$ | $4.239$ | $0.485$ |
| $TM$ | $1.0(-1)$ | $2.6(-3)$ | $1.6(-6)$ | $6.5(-50)$ | $2.000$ | $0.312$ |
| $M_1$ | $1.3(-2)$ | $2.4(-9)$ | $3.0(-36)$ | $1.3(-573)$ | $4.000$ | $0.579$ |
| $M_2$ | $1.3(-2)$ | $2.0(-9)$ | $1.2(-36)$ | $4.1(-580)$ | $4.000$ | $0.437$ |
| $M_3$ | $1.3(-2)$ | $3.3(-9)$ | $1.2(-35)$ | $8.3(-564)$ | $4.000$ | $0.516$ |
| $M_4$ | $1.2(-2)$ | $1.1(-9)$ | $9.4(-38)$ | $1.9(-578)$ | $4.000$ | $0.468$ |
Table 8. The outcomes of Example 6 based on various methods.

| Methods | $\|e_2\|$ | $\|e_3\|$ | $\|e_4\|$ | $\|g(e_4)\|$ | $\rho$ | CPU Time |
|---|---|---|---|---|---|---|
| $SM_1$ | $4.8(-6)$ | $2.1(-27)$ | $8.2(-113)$ | $1.2(-4544)$ | $4.000$ | $1.266$ |
| $FM_1$ | divergent | divergent | divergent | divergent | * | * |
| $FM_2$ | divergent | divergent | divergent | divergent | * | * |
| $RM$ | $3.6(-7)$ | $3.0(-30)$ | $1.5(-122)$ | $0.4(-4212)$ | $4.000$ | $0.860$ |
| $BM$ | $1.7(-5)$ | $2.9(-25)$ | $2.9(-104)$ | $1.3(-4202)$ | $4.000$ | $13.359$ |
| $TM$ | $9.7(-3)$ | $8.6(-7)$ | $6.7(-15)$ | $4.0(-311)$ | $2.000$ | $0.531$ |
| $M_1$ | $3.6(-7)$ | $2.9(-30)$ | $1.3(-122)$ | $2.1(-4920)$ | $4.000$ | $1.172$ |
| $M_2$ | $1.3(-6)$ | $3.2(-30)$ | $1.1(-124)$ | $3.0(-5026)$ | $4.000$ | $1.156$ |
| $M_3$ | $3.6(-7)$ | $2.9(-30)$ | $1.3(-122)$ | $5.0(-4920)$ | $4.000$ | $1.189$ |
| $M_4$ | $4.3(-7)$ | $8.9(-33)$ | $1.6(-135)$ | $1.2(-5465)$ | $4.000$ | $1.219$ |

* There is no need to calculate these values in the case of divergence.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

## Share and Cite

MDPI and ACS Style

Behl, R.; Bhalla, S.; Mallawi, F.; Alsulami, M.A. An Optimal Iterative Technique for Multiple Root Finder of Nonlinear Problems. Math. Comput. Appl. 2022, 27, 74. https://doi.org/10.3390/mca27050074