# One Parameter Optimal Derivative-Free Family to Find the Multiple Roots of Algebraic Nonlinear Equations

1. School of Mathematics, Thapar Institute of Engineering and Technology, Punjab 147004, India
2. Department of Mathematics, King Abdulaziz University, Jeddah 21589, Saudi Arabia
3. Department of Mathematics, Chandigarh University, Gharuan, Mohali 140413, India
4. Department of Mathematics & Statistics, McMaster University, Hamilton, ON L8S 4K1, Canada
5. Center for Dynamics, Faculty of Mathematics, Technische Universität Dresden, 01062 Dresden, Germany
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(12), 2223; https://doi.org/10.3390/math8122223
Received: 17 November 2020 / Revised: 4 December 2020 / Accepted: 6 December 2020 / Published: 14 December 2020

## Abstract

In this study, we construct a one-parameter optimal derivative-free iterative family to find the multiple roots of an algebraic nonlinear function. Many researchers have developed higher order iterative techniques that use new function evaluations or the first- or second-order derivatives of the function to evaluate the multiple roots of a nonlinear equation. However, the evaluation of the derivative at each iteration is a cumbersome task. With this motivation, we design a second-order family without the derivative of the function and without any additional function evaluation. The proposed family is optimal, as it satisfies the convergence order of Kung and Traub's conjecture. Here, we use one parameter a for the construction of the scheme, and for $a = 1$, the modified Traub method is a special case. The order of convergence is analyzed by Taylor's series expansion. Further, the efficiency of the suggested family is explored with some numerical tests. The obtained results are found to be more efficient than those of earlier schemes. Moreover, the basins of attraction of the proposed and earlier schemes are also analyzed.
MSC:
65G99; 65H10

## 1. Introduction

One of the challenging tasks of the scientific and engineering fields is to solve nonlinear equations. Many problems, such as the design of electric circuits, the parachutist problem, and the ideal gas law, are formulated as nonlinear equations. Despite their significance, scalar equations are often difficult to solve by analytical methods, which leads to the development of modern numerical methods. Many scholars [2,3,4,5,6,7,8,9,10] constructed higher order iterative techniques to solve nonlinear equations. Here, we concentrate on finding a multiple root of a function $g : D \subset \mathbb{R} \to \mathbb{R}$ such that $g(x_r) = g'(x_r) = g''(x_r) = \dots = g^{(m-1)}(x_r) = 0$ and $g^{(m)}(x_r) \neq 0$, where $x_r$ is the exact root of the function g with multiplicity m.
One of the most relevant and basic methods to find a multiple root of a function is the modified Newton's method ($MNM$) , defined as:
$x_{k+1} = x_k - m \, \frac{g(x_k)}{g'(x_k)}, \quad k = 0, 1, 2, \dots,$
where m is the multiplicity of the root $x_r$ and $g'(x_k)$ is the first derivative of the function g. The convergence order of $MNM$ is quadratic if the multiplicity is known in advance. Another important iterative technique to find multiple roots of nonlinear equations is the Schröder method , defined as:
$x_{k+1} = x_k - \frac{g'(x_k) \, g(x_k)}{(g'(x_k))^2 - g(x_k) \, g''(x_k)}, \quad k = 0, 1, 2, \dots.$
For this scheme, the multiplicity of the root is not required in advance, but it involves the first-order and second-order derivatives of the function at each step. Many more researchers [13,14,15,16,17,18,19,20,21,22] developed higher order iterative schemes involving the first-order derivative to find the multiple roots of a scalar equation. The evaluation of derivatives at each step is time consuming and sometimes a difficult job for a complex problem. Therefore, some authors [23,24,25,26] introduced derivative-free iterative techniques to find the multiple roots of an equation, such as the second-order modified Traub–Steffensen method ($TM$) , which is defined as:
$w_k = x_k + \beta g(x_k), \quad x_{k+1} = x_k - m \, \frac{g(x_k)}{g[w_k, x_k]}, \quad \beta \in \mathbb{R} \setminus \{0\}, \quad k = 0, 1, 2, \dots,$
where $g[w_k, x_k] = \frac{g(w_k) - g(x_k)}{w_k - x_k}$. The second order of convergence of the modified Traub–Steffensen method is maintained without the evaluation of any derivative.
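As a concrete illustration, the modified Traub–Steffensen iteration (3) can be sketched in a few lines of code; the function name, the sample equation $g(x) = (x-2)^2$, and the stopping safeguards are illustrative choices, not taken from the paper:

```python
def traub_steffensen(g, x0, m, beta=-0.01, tol=1e-12, max_iter=50):
    """Derivative-free second-order iteration (3) for a root of multiplicity m."""
    xk = x0
    for _ in range(max_iter):
        gx = g(xk)
        wk = xk + beta * gx              # auxiliary point w_k = x_k + beta * g(x_k)
        if wk == xk:                     # g(x_k) so small that the divided difference degenerates
            break
        dd = (g(wk) - gx) / (wk - xk)    # divided difference g[w_k, x_k]
        x_next = xk - m * gx / dd
        if abs(x_next - xk) < tol:
            return x_next
        xk = x_next
    return xk

# Double root at x = 2 of g(x) = (x - 2)^2:
root = traub_steffensen(lambda x: (x - 2.0) ** 2, x0=3.0, m=2)
```

With the known multiplicity m supplied, the iterates approach the double root quadratically even though no derivative of g is ever evaluated.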
Very recently, Kumar et al.  developed the optimal second-order scheme represented as:
$x_{k+1} = x_k - G(\theta), \quad k = 0, 1, 2, \dots,$
where $G(\theta) : D \subset \mathbb{C} \to \mathbb{C}$ is a weight function and $\theta = \frac{g(x_k)}{g[w_k, x_k]}$. On the other hand, the construction of iterative methods with fewer function evaluations plays a key role. For this, optimal methods are introduced that satisfy the Kung–Traub conjecture, meaning the order of convergence is $2^{n-1}$, where n represents the number of function evaluations per iteration. Motivated by this, we suggest a second-order optimal derivative-free iterative family. The salient features of the proposed family are its simple structure, no additional function evaluation, a single parameter, and no computation of derivatives. The rest of the paper is organized as follows. Section 2 elaborates on the construction and convergence analysis of the proposed family. The numerical performance of the suggested family is shown in Section 3. The basins of attraction of the suggested methods and of methods available in the literature are exhibited in Section 4. Finally, conclusions are drawn in Section 5.

## 2. Construction of the Second-Order Scheme

Here, we construct an optimal second-order family of iterative methods for multiple zeros of multiplicity $m \geq 2$, defined by:
$w_k = x_k + \beta g(x_k), \quad x_{k+1} = x_k - m \, \frac{(1 - a) \, g(w_k) + a \, g(x_k)}{g[w_k, x_k]}, \quad k = 0, 1, 2, \dots, \quad a \in \mathbb{R}.$
For $a = 1$, the modified Traub method is a special case of the scheme (5).
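The scheme (5) translates directly into code. The sketch below is illustrative (the helper name, the test function with a triple root, and the degeneracy safeguards are our own choices); setting a = 1 drops the $g(w_k)$ term and recovers the modified Traub–Steffensen step:

```python
def family5(g, x0, m, a, beta=-0.01, tol=1e-12, max_iter=50):
    """One-parameter derivative-free iteration (5); a = 1 recovers modified Traub."""
    xk = x0
    for _ in range(max_iter):
        gx = g(xk)
        wk = xk + beta * gx
        if wk == xk:                     # divided difference would degenerate
            break
        gw = g(wk)
        dd = (gw - gx) / (wk - xk)       # g[w_k, x_k]
        x_next = xk - m * ((1 - a) * gw + a * gx) / dd
        if abs(x_next - xk) < tol:
            return x_next
        xk = x_next
    return xk

# Triple root at x = 1 of g(x) = (x - 1)^3, with the illustrative choice a = 2/3:
root = family5(lambda x: (x - 1.0) ** 3, x0=1.2, m=3, a=2.0 / 3.0)
```

Per iteration the scheme uses only the two function values $g(x_k)$ and $g(w_k)$, matching the optimality count in the Kung–Traub sense.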

#### Convergence Analysis

Using the following Lemmas 1 and 2 and Theorem 1, we show that the constructed family (5) attains second-order convergence for all $a \in \mathbb{R}$ without any additional evaluation of the function or of its derivative.
Lemma 1.
Suppose $x r$ is a solution of multiplicity $m = 2$ of the function g. Consider that a function $g : D ⊂ C → C$ is analytic in $D$ surrounding the required zero $x r$. Then, the proposed scheme (5) has second-order convergence with the following error equation:
where $C_i = \frac{2!}{(2+i)!} \, \frac{g^{(2+i)}(x_r)}{g^{(2)}(x_r)}, \; i = 1, 2, \dots$.
Proof.
Let $e k = x k − x r$ be the error in the kth iteration. We use Taylor’s series expansions for function $g ( x k )$ around $x = x r$ with the assumption $g ( x r ) = g ′ ( x r ) = 0$ and $g ″ ( x r ) ≠ 0$, which are given by:
Set $e_w = w_k - x_r$, and expand $g(w_k)$ in a Taylor series about $x_r$. We have:
Now,
Substitute the values of Equations (6)–(8) into the scheme (5). One obtains the following error equation:
Hence, the scheme (5) has second-order convergence for $m = 2$. □
Lemma 2.
Assuming the same hypotheses of Lemma 1, the scheme given by (5) is of second-order convergence for $m = 3$. It satisfies the following error equation:
Proof.
Let $e k = x k − x r$ be the error in the kth iteration. We use the Taylor series expansions for function $g ( x k )$ around $x = x r$ with the assumption $g ( x r ) = g ′ ( x r ) = g ″ ( x r ) = 0$ and $g ‴ ( x r ) ≠ 0$, which are given by:
where $b_i = \frac{3!}{(3+i)!} \, \frac{g^{(3+i)}(x_r)}{g^{(3)}(x_r)}, \; i = 1, 2, \dots$.
Set $e_w = w_k - x_r$, and expand $g(w_k)$ in a Taylor series about $x_r$. We have:
Now,
Substitute the values of Equations (10)–(12) into the scheme (5). One obtains the following error equation:
Hence, the scheme (5) has second-order convergence for $m = 3$. □
Theorem 1.
Assuming the same hypotheses as in Lemma 1, the scheme given by (5) is of second-order convergence for $m \geq 3$. It satisfies the following error equation:
Proof.
Let $e_k = x_k - x_r$ be the error in the kth iteration. We use the Taylor series expansions for the function $g(x_k)$ around $x = x_r$ with the assumption $g(x_r) = g'(x_r) = g''(x_r) = \dots = g^{(m-1)}(x_r) = 0$ and $g^{(m)}(x_r) \neq 0$, which are given by:
where $C_i = \frac{m!}{(m+i)!} \, \frac{g^{(m+i)}(x_r)}{g^{(m)}(x_r)}, \; i = 1, 2, \dots$.
Set $e_w = w_k - x_r$, and expand $g(w_k)$ in a Taylor series about $x_r$. We have:
Now,
Substitute the values of Equations (10)–(12) into the scheme (5). One obtains the following error equation:
Hence, the scheme (5) has second-order convergence with multiplicity $m ≥ 3$. □

## 3. Numerical Illustration

In this section, the proposed derivative-free schemes (5) are verified on some numerical problems for different values of the parameter a. Here, we take $a = \frac{6}{7}, \frac{2}{3}, \frac{3}{4}$, and $\frac{5}{6}$ and denote the resulting methods as $PM1$, $PM2$, $PM3$, and $PM4$, respectively. The results are compared with those of existing methods, namely the modified Traub method  and the Kumar methods , as shown below.
The modified Traub method, denoted $TM$:
$w_k = x_k + \beta g(x_k), \quad x_{k+1} = x_k - m \, \frac{g(x_k)}{g[w_k, x_k]}.$
The Kumar methods:
$\theta = \frac{g(x_k)}{g[w_k, x_k]}, \quad x_{k+1} = x_k - G(\theta).$
The functions $G(\theta)$ can be found in Table 1.
In all numerical problems, the parameter $\beta = -0.01$ is considered. The numerical problems were performed with Mathematica 10 using 1000 multiple precision digits of the mantissa with the stopping criterion $|x_k - x_{k-1}| + |g(x_k)| \leq 10^{-100}$. To assess the performance of the proposed methods, we display the error between two consecutive iterations $e_k = |x_{k+1} - x_k|$, the functional error $|g(x_k)|$ at the kth iteration, and the approximate computational order of convergence (ACOC), denoted as $\rho$, in Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7. The following standard formula is used to evaluate the ACOC:
$\rho \approx \frac{\ln \left( |x_{k+1} - x_k| / |x_k - x_{k-1}| \right)}{\ln \left( |x_k - x_{k-1}| / |x_{k-1} - x_{k-2}| \right)}.$
Notice that $b(\pm a)$ stands for $b \times 10^{\pm a}$ in all the tables.
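As a sketch, the ACOC can be estimated from the last three differences of the iterates, using the standard definition $\rho \approx \ln(e_{k+1}/e_k)/\ln(e_k/e_{k-1})$ with $e_k = |x_{k+1} - x_k|$; the Newton iterates for $\sqrt{2}$ below are only an illustrative stand-in sequence:

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from a list of iterates."""
    e = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

# Newton's method for sqrt(2) is a second-order sequence, so rho should be near 2.
xs = [2.0]
for _ in range(4):
    xs.append(0.5 * (xs[-1] + 2.0 / xs[-1]))
rho = acoc(xs)
```

For a genuinely second-order sequence the estimate stabilizes near 2, which is the value reported in the $\rho$ column of the tables below.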
Example 1.
Consider the van der Waals equation of the ideal gas :
$\left( P + \frac{a n^2}{V^2} \right) (V - n b) = n R T,$
which describes the behavior of a real gas in terms of the parameters $a, b$ specific to the particular gas. The remaining parameters $n, R, T$ are then determined along with $a, b$. This leads to a nonlinear equation in the volume of the gas (V), represented as x in the following equation:
$g_1(x) = x^3 - 5.22 x^2 + 9.0825 x - 5.2675.$
The desired root is $x r = 1.75$ of multiplicity $m = 2$. Table 2 depicts the performance of different schemes with initial guess $x 0 = 1.8$.
Example 2.
Here, we consider the following nonlinear function of multiplicity $m = 3$:
The nonlinear equation is tested on the proposed methods and the methods introduced in  by taking the initial guess $x 0 = 1.2$. The proposed methods converge to the desired root $x r = 0.257530285439860 …$, and the computational results are shown in Table 3.
Example 3.
To find the eigenvalues of a matrix whose order is greater than four, we need to solve its characteristic equation. Determining the roots of such a higher order characteristic equation is a difficult task with a purely algebraic approach. Therefore, one of the best ways is to use numerical techniques. Now, consider the following square matrix of order nine,
whose characteristic equation is modeled as a nonlinear equation as:
$g_3(x) = x^9 - 29 x^8 + 349 x^7 - 2261 x^6 + 8455 x^5 - 17663 x^4 + 15927 x^3 + 6993 x^2 - 24732 x + 12960.$
The root of this equation is $x_r = 3$ with multiplicity $m = 4$. Table 4 depicts the better performance of the proposed scheme in comparison with existing techniques for the initial guess $x_0 = 2.5$.
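The stated multiplicity can be cross-checked numerically: $g_3$ and its first three derivatives should vanish at x = 3, while the fourth derivative should not. A quick check, using numpy's polynomial helpers as an independent stand-in:

```python
import numpy as np

# Coefficients of g3 in descending powers of x.
c = np.array([1, -29, 349, -2261, 8455, -17663, 15927, 6993, -24732, 12960], dtype=float)
values = []
p = c.copy()
for _ in range(5):
    values.append(np.polyval(p, 3.0))  # evaluate current derivative at x = 3
    p = np.polyder(p)                  # differentiate once more
# values[0..3] vanish (root of multiplicity 4); values[4] is nonzero
```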
Example 4.
Consider another standard nonlinear equation as follows:
$g_4(x) = x (x^2 + 1) \left( 2 e^{x^2 + 1} + x^2 - 1 \right) \cosh^2 \left( \frac{\pi x}{2} \right),$
which has the root $x_r = i$ of multiplicity four. The results are obtained with the initial guess $x_0 = 1.5i$ and shown in Table 5.
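Since the root is complex, the iteration must run in complex arithmetic. A sketch for Example 4, with the illustrative choices a = 2/3 and beta = -0.01:

```python
import cmath

def g4(x):
    return (x * (x**2 + 1)
            * (2 * cmath.exp(x**2 + 1) + x**2 - 1)
            * cmath.cosh(cmath.pi * x / 2) ** 2)

xk, m, a, beta = 1.5j, 4, 2.0 / 3.0, -0.01
for _ in range(12):
    gx = g4(xk)
    wk = xk + beta * gx
    if wk == xk:            # divided difference would degenerate
        break
    gw = g4(wk)
    dd = (gw - gx) / (wk - xk)
    xk = xk - m * ((1 - a) * gw + a * gx) / dd
# xk now approximates the root i of multiplicity 4
```

Starting on the imaginary axis, the iterates stay on it and approach $i$; the divided difference replaces the complex derivative without any special handling.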
Example 5.
Now, we consider the problem of a continuous stirred tank reactor , shown in Figure 1.
Here, components A and R are fed to the reactor at rates of Q and $q − Q$, respectively. The reaction schemes developed in these reactors are:
$A + R \to B, \quad B + R \to C, \quad C + R \to D, \quad D + R \to E.$
Douglas  converted this problem into a mathematical expression:
$K_c \, \frac{2.98 \, (t + 2.25)}{(t + 1.45)(t + 2.85)^2 (t + 4.35)} = -1,$
where $K_c$ is the gain of the proportional controller. The control system is stable for values of $K_c$ for which the roots of this equation have negative real parts. If we take $K_c = 0$, we obtain the poles of the open-loop transfer function as the solutions of the following univariate equation:
$g 5 ( x ) = x 4 + 11.50 x 3 + 47.49 x 2 + 83.06325 x + 51.23266875 = 0 .$
Clearly, the function $g_5(x)$ has three distinct roots, $-1.45$, $-2.85$, and $-4.35$, of which $-2.85$ has multiplicity $m = 2$. The computational results are demonstrated in Table 6. It can be concluded that the suggested methods give better results, in terms of smaller residual and functional errors, with a similar number of iterations compared with the methods developed by Traub and Kumar.
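The stated factorization of $g_5$ can be cross-checked numerically; here numpy's companion-matrix root finder serves as an independent stand-in:

```python
import numpy as np

# Coefficients of g5 in descending powers of x; the sorted real parts should be
# approximately -4.35, -2.85 (double), -1.45.
coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]
r = np.sort(np.roots(coeffs).real)
```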
Example 6.
Next, we consider the study of the multi-factor effect in which the trajectory of an electron in the air gap between two parallel plates  is defined as:
Here, $e$, $m$, $y_0$, and $v_0$ are the charge of the electron at rest, the mass of the electron at rest, and the position and velocity of the electron at time $t_0$, respectively, and $E_0 \sin(\omega t + \theta)$ is the RF electric field between the plates. By choosing particular values of these parameters, we have the following expression:
This problem has root $x r = − 0.30909 …$ with multiplicity $m = 5$. We tested the problem at initial guess $x 0 = 1.2$, and the results are shown in Table 7.

## 4. Basin of Attraction

In this section, we explore the convergence behavior of the proposed family and of earlier techniques for finding multiple roots of a nonlinear equation in the complex plane by means of basins of attraction. A basin of attraction is a significant tool that depicts the stability behavior of methods on the complex plane. It covers a wide range of initial guesses for finding the roots of an equation, which leads to a better understanding of the performance of iterative techniques. Many researchers [31,32,33,34,35,36] have adopted the basin of attraction idea to check the effectiveness of developed iterative techniques.
Given a rational function $R : \hat{\mathbb{C}} \to \hat{\mathbb{C}}$, where $\hat{\mathbb{C}}$ is the Riemann sphere, the orbit of a point $z_0 \in \hat{\mathbb{C}}$ is defined as the set:
$\{ z_0, R(z_0), R^2(z_0), \dots, R^n(z_0), \dots \}.$
A point $z_0 \in \hat{\mathbb{C}}$ is called a fixed point of $R(z)$ if it verifies $R(z_0) = z_0$. Moreover, $z_0$ is called a periodic point of period $p > 1$ if $R^p(z_0) = z_0$ but $R^k(z_0) \neq z_0$ for each $k < p$. Moreover, a point $z_0$ is called pre-periodic if it is not periodic, but there exists a $k > 0$ such that $R^k(z_0)$ is periodic.
There exist different types of fixed points depending on their associated multiplier $| R ′ ( z 0 ) |$. Taking the associated multiplier into account, a fixed point $z 0$ is called:
• a superattractor if $| R ′ ( z 0 ) | = 0$
• an attractor if $| R ′ ( z 0 ) | < 1$
• a repulsor if $| R ′ ( z 0 ) | > 1$
• and parabolic if $| R ′ ( z 0 ) | = 1$.
The fixed points that do not correspond to the roots of the polynomial $p(z)$ are called strange fixed points. On the other hand, a critical point $z_0$ is a point that satisfies $R'(z_0) = 0$.
The basin of attraction of an attractor $\alpha$ is defined as the set:
$\mathcal{A}(\alpha) = \{ z_0 \in \hat{\mathbb{C}} : R^n(z_0) \to \alpha \text{ as } n \to \infty \}.$
The Fatou set of the rational function R, $F R ,$ is the set of points $z ∈ C ^$ whose orbits tend to an attractor (fixed point, periodic orbit, or infinity). Its complement in $C ^$ is the Julia set, $J R$. That means that the basin of attraction of any fixed point belongs to the Fatou set, and the boundaries of these basins of attraction belong to the Julia set.
In this direction, we consider a nonlinear equation $p(z)$ on the square $[-3, 3] \times [-3, 3]$ with $300 \times 300$ mesh points. Each mesh point is used as an initial value to analyze the convergence behavior of the suggested methods $PM1$, $PM2$, $PM3$, and $PM4$ and of the comparison methods $TM$, $DM1$, $DM2$, $DM3$, $DM4$, $DM5$, $DM6$, and $DM7$. Different colors (red, green, yellow, ...) represent the roots of $p(z)$. If, starting from a mesh point, a scheme converges to a root of the equation within a tolerance of $10^{-3}$ in fewer than 100 iterations, the point is colored accordingly. Otherwise, the black color is assigned, which means that the mesh point does not converge to any root. The following multiple-root problems are studied.
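The basin computation described above can be sketched as follows; as a stand-in for the compared schemes, the iterated map here is plain Newton for $p(z) = z^3 - 1$, with each mesh point labeled by the index of the root it reaches, or -1 for the black, non-convergent points:

```python
import numpy as np

def basin_indices(n=300, max_iter=100, tol=1e-3):
    """Label each mesh point of [-3,3]^2 by the root its Newton orbit reaches."""
    xs = np.linspace(-3.0, 3.0, n)
    re, im = np.meshgrid(xs, xs)
    z = re + 1j * im
    roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])
    with np.errstate(all="ignore"):          # inf/nan simply mark divergence
        for _ in range(max_iter):
            z = z - (z**3 - 1.0) / (3.0 * z**2)   # Newton step for z^3 - 1
        d = np.abs(z[..., None] - roots)          # distance to each root
    idx = d.argmin(axis=-1)
    converged = d.min(axis=-1) < tol
    idx[~converged] = -1                          # "black" non-convergent points
    return idx
```

Mapping each label to a color (and -1 to black) reproduces the kind of basin plot shown in Figures 2-5; swapping the Newton step for any of the compared schemes changes only the update line.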
Problem 1.
Consider the polynomial equation $p(z) = (z^3 + 2z - i)^3$. The roots of this equation are $-1.61803i$, $i$, and $0.618034i$, each with multiplicity $m = 3$. It is clear from Figure 2 that the suggested methods $PM1$, $PM2$, $PM3$, and $PM4$ have a larger convergence area than the methods $TM$, $DM1$, $DM2$, $DM3$, $DM4$, $DM5$, $DM6$, and $DM7$, respectively.
Problem 2.
Here, we consider the non-polynomial function $p(z) = \left( z^3 + \frac{1}{z} \right)^m$, whose roots are $-0.707107 + 0.707107i$, $-0.707107 - 0.707107i$, $0.707107 + 0.707107i$, and $0.707107 - 0.707107i$. The basins for Problem 2 are plotted with multiplicity $m = 4$ and $m = 3$ in Figure 3 and Figure 4, respectively. It is observed that the suggested methods $PM1$, $PM2$, $PM3$, and $PM4$ converge to the roots (shown in red), with only a small divergence region (shown in black), whereas method $TM$ diverges over the whole region in Figure 3. Similarly, methods $DM1$, $DM2$, $DM3$, $DM4$, $DM5$, $DM6$, and $DM7$ fail to converge at at least one point, so their graphs are not shown here. Moreover, Figure 4 reveals that for multiplicity $m = 3$, the proposed methods have more convergence points than the earlier methods.
Problem 3.
Lastly, we examine $p(z) = ((z - 1)^2 + 1)^2$ with roots $1 + i$ and $1 - i$ of multiplicity $m = 2$. The basins of attraction in Figure 5 show that the proposed schemes achieve convergence performance at least as good as that of the earlier methods.

## 5. Conclusions

A new second-order optimal derivative-free family for finding the multiple roots of nonlinear equations is suggested. On the basis of the parameter a, some new optimal methods are generated. The well known modified Traub method is a special case for $a = 1$. The convergence order is analyzed. Moreover, the efficiency of the proposed methods is verified on some numerical illustrations. The results are compared with those of the modified Traub method and other existing methods. The obtained results of the developed methods are better than those of the earlier schemes. The basins of attraction of the proposed schemes also show better stability.

## Author Contributions

M.K., R.B., and S.B.: conceptualization, methodology, validation, writing—original draft preparation, writing—review & editing. A.S.A. and M.S.: review and editing. All authors read and agreed to the published version of the manuscript.

## Funding

The Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under Grant No. (FP-29-42).

## Acknowledgments

This work was funded by the Deanship of Scientific Research (DSR), King Abdulaziz University, Jeddah, under grant No. (FP-29-42). The authors, therefore, acknowledge with thanks DSR technical and financial support.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Chapra, S.C.; Canale, R.P. Numerical Methods for Engineers, 7th ed.; Mc Graw Hill Publications: New York, NY, USA, 2015. [Google Scholar]
2. Bi, W.; Ren, H.; Wu, Q. Three-step iterative methods with eighth-order convergence for solving nonlinear equations. J. Comput. Appl. Math. 2009, 255, 105–112. [Google Scholar] [CrossRef][Green Version]
3. Bi, W.; Wu, Q.; Ren, H. A new family of eighth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2009, 214, 236–245. [Google Scholar] [CrossRef]
4. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. New modifications of Potra-Pták’s method with optimal fourth and eighth order of convergence. J. Comput. Appl. Math. 2010, 234, 2969–2976. [Google Scholar] [CrossRef][Green Version]
5. Behl, R.; Salimi, M.; Ferrara, M.; Sharifi, S.; Samaher, K.A. Some real life applications of a newly constructed derivative free iterative scheme. Symmetry 2019, 11, 239. [Google Scholar] [CrossRef][Green Version]
6. Salimi, M.; Lotfi, T.; Sharifi, S.; Siegmund, S. Optimal Newton-Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. 2017, 94, 1759–1777. [Google Scholar] [CrossRef][Green Version]
7. Salimi, M.; Nik Long, N.M.A.; Sharifi, S.; Pansera, B.A. A multi-point iterative method for solving nonlinear equations with optimal order of convergence. Jpn. J. Ind. Appl. Math. 2018, 35, 497–509. [Google Scholar] [CrossRef]
8. Matthies, G.; Salimi, M.; Sharifi, S.; Varona, J.L. An optimal eighth-order iterative method with its dynamics. Jpn. J. Ind. Appl. Math. 2016, 33, 751–766. [Google Scholar] [CrossRef][Green Version]
9. Sharifi, S.; Ferrara, M.; Salimi, M.; Siegmund, S. New modification of Maheshwari method with optimal eighth order of convergence for solving nonlinear equations. Open Math. Former. Cent. Eur. J. Math. 2016, 14, 443–451. [Google Scholar]
10. Lotfi, T.; Sharifi, S.; Salimi, M.; Siegmund, S. A new class of three point methods with optimal convergence order eight and its dynamics. Numer. Algor. 2016, 68, 261–288. [Google Scholar] [CrossRef]
11. Schröder, E. Über unendlich viele Algorithmen zur Auflösung der Gleichungen. Math Ann. 1870, 2, 317–365. [Google Scholar] [CrossRef][Green Version]
12. Traub, J.F. Iterative Method for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964. [Google Scholar]
13. Li, S.; Liao, X.; Cheng, L. A new fourth-order iterative method for finding multiple roots of nonlinear equations. Appl. Math. Comput. 2009, 215, 1288–1292. [Google Scholar]
14. Li, S.G.; Cheng, L.Z.; Neta, B. Some fourth-order nonlinear solvers with closed formulae for multiple roots. Comput. Math. Appl. 2010, 59, 126–135. [Google Scholar] [CrossRef][Green Version]
15. Sharma, J.R.; Sharma, R. Modified Jarratt method for computing multiple roots. Appl. Math. Comput. 2010, 217, 878–881. [Google Scholar] [CrossRef]
16. Zhou, X.; Chen, X.; Song, Y. Constructing higher order methods for obtaining the multiple roots of nonlinear equations. J. Comput. Appl. Math. 2011, 235, 4199–4206. [Google Scholar] [CrossRef][Green Version]
17. Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774. [Google Scholar] [CrossRef][Green Version]
18. Soleymani, F.; Babajee, D.K.R.; Lotfi, T. On a numerical technique for finding multiple zeros and its dynamics. J. Egypt. Math. Soc. 2013, 21, 346–353. [Google Scholar] [CrossRef]
19. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400. [Google Scholar] [CrossRef][Green Version]
20. Geum, Y.H.; Kim, Y.I.; Neta, B. Constructing a family of optimal eighth-order modified Newton-type multiple-zero finders along with the dynamics behind their purely imaginary extraneous fixed points. J. Comput. Appl. Math. 2018, 333, 131–156. [Google Scholar] [CrossRef]
21. Kansal, M.; Kanwar, V.; Bhatia, S. On some optimal multiple root-finding methods and their dynamics. Appl. Appl. Math. 2015, 10, 349–367. [Google Scholar]
22. Behl, R.; Alsolami, A.J.; Pansera, B.A.; Al-Hamdan, W.M.; Salimi, M.; Ferrara, M. A new optimal family of Schröder’s method for multiple zeros. Mathematics 2019, 7, 1076. [Google Scholar] [CrossRef][Green Version]
23. Hueso, J.L.; Martínez, E.; Teruel, C. Determination of multiple roots of nonlinear equations and applications. Math. Chem. 2015, 53, 880–892. [Google Scholar] [CrossRef][Green Version]
24. Sharma, J.R.; Kumar, S.; Jäntschi, L. On a class of optimal fourth order multiple root solvers without using derivatives. Symmetry 2019, 11, 1452. [Google Scholar] [CrossRef][Green Version]
25. Sharma, J.R.; Kumar, S.; Jäntschi, L. On Derivative Free Multiple-Root Finders with Optimal Fourth Order Convergence. Mathematics 2020, 8, 1091. [Google Scholar] [CrossRef]
26. Kumar, D.; Sharma, J.R.; Argyros, I.K. Optimal One-Point Iterative Function Free from Derivatives for Multiple Roots. Mathematics 2020, 8, 709. [Google Scholar] [CrossRef]
27. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651. [Google Scholar] [CrossRef]
28. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications; Prentice Hall PTR: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
29. Douglas, J.M. Process Dynamics and Control; Prentice Hall: Englewood Cliffs, NJ, USA, 1972; Volume 2. [Google Scholar]
30. Maroju, P.; Magreñán, Á.A.; Motsa, S.S.; Sarría, I. Second derivative free sixth order continuation method for solving nonlinear equations with applications. J. Math. Chem. 2018, 56, 2099–2116. [Google Scholar] [CrossRef]
31. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
32. Neta, B.; Scott, M.; Chun, C. Basins of attraction for several methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2012, 218, 10548–10556. [Google Scholar]
33. Moysi, A.; Argyros, I.K.; Regmi, S.; González, D.; Magreñán, Á.A.; Sicilia, J.A. Convergence and Dynamics of a Higher-Order Method. Symmetry 2020, 12, 420. [Google Scholar] [CrossRef][Green Version]
34. Cordero, A.; Ramos, H.; Torregrosa, J.R. Some variants of Halley’s method with memory and their applications for solving several chemical problems. J. Math. Chem. 2020, 58, 751–774. [Google Scholar] [CrossRef]
35. Zafar, F.; Cordero, A.; Junjua, M.; Torregrosa, J.R. Optimal eighth-order iterative methods for approximating multiple zeros of nonlinear functions. Rev. Real Acad. Cienc. Exactas Fís. Nat. Ser. Mat. 2020, 114, 1–17. [Google Scholar] [CrossRef]
36. Zafar, F.; Cordero, A.; Torregrosa, J.R. A family of optimal fourth-order methods for multiple roots of nonlinear equations. Math. Methods Appl. Sci. 2020, 43, 7869–7884. [Google Scholar] [CrossRef]
Figure 1. Continuous stirred tank reactor.
Figure 2. Comparisons of the convergence plane at $β = − 0.01$ for Problem 1.
Figure 3. Comparisons of the convergence plane at $β = − 0.01$ for Problem 2 with $m = 4$.
Figure 4. Comparisons of the convergence plane at $β = − 0.01$ for Problem 2 with $m = 3$.
Figure 5. Comparisons of the convergence plane at $β = − 0.01$ for Problem 3.
Table 1. The Kumar methods.
Method: $G(\theta)$
$DM1$: $m \theta \left( 1 + \frac{1}{10} \theta \right)$
$DM2$: $\frac{m \theta}{1 + \frac{1}{4} \theta}$
$DM3$: $\frac{m \theta}{1 + \frac{m}{10} \theta}$
$DM4$: $m \left( e^{\theta} - 1 \right)$
$DM5$: $m \log ( \theta + 1 )$
$DM6$:
$DM7$: $\frac{\theta^2 + \theta}{\frac{1}{m} + \frac{1}{5} \theta}$
Table 2. Numerical results of Example 1.
Schemes$| x 5 − x 4 |$$| x 6 − x 5 |$$| x 7 − x 6 |$$| g ( x 7 ) |$$ρ$
$T M$$1.9 ( − 8 )$$6.2 ( − 15 )$$6.3 ( − 28 )$$1.4 ( − 108 )$$2.000$
$D M 1$$1.8 ( − 8 )$$5.4 ( − 15 )$$4.9 ( − 28 )$$4.8 ( − 109 )$$2.000$
$D M 2$$2.2 ( − 8 )$$8.4 ( − 15 )$$1.2 ( − 27 )$$1.7 ( − 107 )$$2.000$
$D M 3$$2.2 ( − 8 )$$7.9 ( − 15 )$$1.1 ( − 27 )$$1.0 ( − 107 )$$2.000$
$D M 4$$1.4 ( − 8 )$$3.2 ( − 15 )$$1.7 ( − 28 )$$7.3 ( − 111 )$$2.000$
$D M 5$$2.6 ( − 8 )$$1.1 ( − 14 )$$2.2 ( − 27 )$$2.1 ( − 106 )$$2.000$
$D M 6$$2.3 ( − 8 )$$8.8 ( − 15 )$$1.3 ( − 27 )$$2.4 ( − 107 )$$2.000$
$D M 7$$1.3 ( − 8 )$$2.9 ( − 15 )$$1.3 ( − 28 )$$2.7 ( − 111 )$$2.000$
$P M 1$$1.9 ( − 8 )$$6.2 ( − 15 )$$6.3 ( − 28 )$$1.4 ( − 108 )$$2.000$
$P M 2$$1.9 ( − 8 )$$6.2 ( − 15 )$$6.3 ( − 28 )$$1.4 ( − 108 )$$2.000$
$P M 3$$1.9 ( − 8 )$$6.2 ( − 15 )$$6.3 ( − 28 )$$1.4 ( − 108 )$$2.000$
$P M 4$$1.9 ( − 8 )$$6.2 ( − 15 )$$6.3 ( − 28 )$$1.4 ( − 108 )$$2.000$
It is clear from the above table that our proposed methods are as competent as the existing ones.
Table 3. Numerical results of Example 2.
Schemes$| x 5 − x 4 |$$| x 6 − x 5 |$$| x 7 − x 6 |$$| g ( x 7 ) |$$ρ$
$T M$$6.0 ( − 10 )$$3.3 ( − 26 )$$1.0 ( − 40 )$$5.5 ( − 242 )$$2.000$
$D M 1$$6.7 ( − 10 )$$5.7 ( − 20 )$$4.2 ( − 40 )$$5.9 ( − 238 )$$2.000$
$D M 2$$4.5 ( − 11 )$$2.0 ( − 23 )$$4.1 ( − 48 )$$2.8 ( − 289 )$$2.000$
$D M 3$$1.3 ( − 10 )$$1.1 ( − 22 )$$8.1 ( − 47 )$$4.3 ( − 282 )$$2.000$
$D M 4$$1.1 ( − 13 )$$2.9 ( − 27 )$$2.2 ( − 54 )$$1.1 ( − 322 )$$2.000$
$D M 5$$2.9 ( − 8 )$$6.1 ( − 17 )$$2.7 ( − 34 )$$8.5 ( − 204 )$$2.000$
$D M 6$$1.2 ( − 9 )$$3.0 ( − 20 )$$2.0 ( − 41 )$$4.1 ( − 248 )$$2.000$
$D M 7$$3.4 ( − 11 )$$2.6 ( − 22 )$$1.5 ( − 44 )$$7.4 ( − 264 )$$2.000$
$P M 1$$8.4 ( − 17 )$$6.7 ( − 34 )$$4.1 ( − 68 )$$2.2 ( − 406 )$$2.000$
$P M 2$$1.5 ( − 21 )$$2.0 ( − 43 )$$3.7 ( − 87 )$$1.2 ( − 520 )$$2.000$
$P M 3$$8.8 ( − 16 )$$7.2 ( − 32 )$$4.8 ( − 64 )$$5.5 ( − 382 )$$2.000$
$P M 4$$1.2 ( − 16 )$$1.2 ( − 33 )$$1.4 ( − 67 )$$3.9 ( − 403 )$$2.000$
From the above numerical results, we conclude that our proposed method $PM2$ is the fastest among all the above-mentioned schemes, as it has the lowest residual error.
Table 4. Numerical results of Example 3.
Schemes$| x 5 − x 4 |$$| x 6 − x 5 |$$| x 7 − x 6 |$$| g ( x 7 ) |$$ρ$
$T M$$6.0 ( − 13 )$$8.5 ( − 26 )$$1.7 ( − 51 )$$1.8 ( − 407 )$$2.000$
$D M 1$$5.1 ( − 13 )$$6.9 ( − 26 )$$1.2 ( − 51 )$$2.3 ( − 408 )$$2.000$
$D M 2$$5.2 ( − 13 )$$4.7 ( − 26 )$$3.8 ( − 52 )$$3.5 ( − 413 )$$2.000$
$D M 3$$3.2 ( − 13 )$$1.4 ( − 26 )$$2.6 ( − 53 )$$6.8 ( − 423 )$$2.000$
$D M 4$$8.4 ( − 14 )$$2.5 ( − 27 )$$2.3 ( − 54 )$$1.2 ( − 429 )$$2.000$
$D M 5$$2.0 ( − 13 )$$4.6 ( − 27 )$$2.4 ( − 54 )$$1.6 ( − 431 )$$2.000$
$D M 6$$3.0 ( − 13 )$$1.3 ( − 26 )$$2.2 ( − 53 )$$1.5 ( − 423 )$$2.000$
$D M 7$$2.9 ( − 13 )$$2.5 ( − 26 )$$1.7 ( − 52 )$$4.8 ( − 415 )$$2.000$
$P M 1$$8.3 ( − 15 )$$1.6 ( − 29 )$$6.5 ( − 59 )$$7.7 ( − 467 )$$2.000$
$P M 2$$1.5 ( − 22 )$$5.2 ( − 45 )$$6.4 ( − 90 )$$7.1 ( − 715 )$$2.000$
$P M 3$$2.7 ( − 17 )$$1.8 ( − 34 )$$7.3 ( − 69 )$$2.0 ( − 546 )$$2.000$
$P M 4$$3.1 ( − 15 )$$2.3 ( − 30 )$$1.3 ( − 60 )$$1.8 ( − 480 )$$2.000$
Based on the above computational results, we find that our method $P M 2$ has the lowest residual error among all the other mentioned methods.
Table 5. Numerical results of Example 4.
Schemes$| x 5 − x 4 |$$| x 6 − x 5 |$$| x 7 − x 6 |$$| g ( x 7 ) |$$ρ$
$T M$$4.0 ( − 15 )$$5.2 ( − 30 )$$9.1 ( − 60 )$$1.8 ( − 473 )$$2.000$
$D M 1$$4.6 ( − 15 )$$7.1 ( − 30 )$$1.7 ( − 59 )$$2.3 ( − 471 )$$2.000$
$D M 2$$8.4 ( − 15 )$$2.4 ( − 29 )$$2.0 ( − 58 )$$8.7 ( − 463 )$$2.000$
$D M 3$$2.3 ( − 14 )$$1.8 ( − 28 )$$1.1 ( − 56 )$$9.1 ( − 449 )$$2.000$
$D M 4$$5.4 ( − 14 )$$1.1 ( − 27 )$$3.9 ( − 55 )$$2.7 ( − 436 )$$2.000$
$D M 5$$4.3 ( − 14 )$$6.7 ( − 28 )$$1.6 ( − 55 )$$2.0 ( − 439 )$$2.000$
$D M 6$$2.4 ( − 14 )$$2.0 ( − 28 )$$1.4 ( − 56 )$$6.4 ( − 448 )$$2.000$
$D M 7$$9.3 ( − 15 )$$2.9 ( − 29 )$$2.9 ( − 58 )$$1.9 ( − 461 )$$2.000$
$P M 1$$8.9 ( − 17 )$$2.7 ( − 33 )$$2.3 ( − 66 )$$3.3 ( − 526 )$$2.000$
$P M 2$$6.0 ( − 22 )$$1.2 ( − 43 )$$4.9 ( − 87 )$$1.2 ( − 691 )$$2.000$
$P M 3$$8.6 ( − 19 )$$2.4 ( − 37 )$$2.0 ( − 74 )$$9.3 ( − 591 )$$2.000$
$P M 4$$3.9 ( − 17 )$$5.0 ( − 34 )$$8.3 ( − 68 )$$8.6 ( − 538 )$$2.000$
It is clear from the above table that our methods, namely $PM1$, $PM2$, $PM3$, and $PM4$, perform far better than the existing methods in terms of both the absolute difference between two consecutive iterations and the absolute residual error. Scheme $PM2$ converges fastest to the required root among all the methods considered.
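The last column $\rho$ reports the computational order of convergence, which is $2.000$ for every scheme, as expected for optimal second-order methods. A standard way to estimate this order from three consecutive differences of iterates is the approximate computational order of convergence (the exact formula used in the paper is not shown in this section, so the expression below is the usual ACOC and should be read as an assumption):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence, estimated from
    the last four iterates via ratios of consecutive differences:
    rho ~ log(|x_{n+1}-x_n| / |x_n-x_{n-1}|)
        / log(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton's method on g(x) = x^2 - 2 converges quadratically,
# so the estimated order should come out close to 2.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
rho = acoc(xs)
```

In the tables, such an estimate is evaluated with high-precision arithmetic, since the differences quickly fall below double precision.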
Table 6. Numerical results of Example 5.

| Schemes | $\vert x_5 - x_4 \vert$ | $\vert x_6 - x_5 \vert$ | $\vert x_7 - x_6 \vert$ | $\vert g(x_7) \vert$ | $\rho$ |
|---|---|---|---|---|---|
| $TM$ | $9.9(-23)$ | $1.3(-46)$ | $2.3(-94)$ | $9.8(-379)$ | $2.000$ |
| $DM1$ | $2.3(-20)$ | $3.3(-41)$ | $7.0(-83)$ | $2.0(-331)$ | $2.000$ |
| $DM2$ | $6.5(-17)$ | $4.7(-34)$ | $2.5(-68)$ | $9.8(-273)$ | $2.000$ |
| $DM3$ | $3.6(-18)$ | $1.1(-36)$ | $1.1(-73)$ | $2.7(-294)$ | $2.000$ |
| $DM4$ | $6.7(-21)$ | $1.2(-41)$ | $3.7(-83)$ | $2.7(-331)$ | $2.000$ |
| $DM5$ | $5.4(-13)$ | $6.8(-26)$ | $1.1(-51)$ | $1.8(-205)$ | $2.000$ |
| $DM6$ | $3.0(-16)$ | $1.2(-32)$ | $1.7(-65)$ | $3.0(-261)$ | $2.000$ |
| $DM7$ | $5.7(-16)$ | $1.0(-31)$ | $3.2(-63)$ | $2.2(-251)$ | $2.000$ |
| $PM1$ | $4.4(-22)$ | $3.7(-45)$ | $2.6(-91)$ | $3.7(-366)$ | $2.000$ |
| $PM2$ | $1.8(-21)$ | $8.9(-44)$ | $2.2(-88)$ | $3.5(-354)$ | $2.000$ |
| $PM3$ | $1.0(-21)$ | $2.5(-44)$ | $1.5(-89)$ | $6.2(-359)$ | $2.000$ |
| $PM4$ | $5.4(-22)$ | $5.8(-45)$ | $6.9(-91)$ | $2.0(-364)$ | $2.000$ |
Our methods perform better than the existing methods (except $TM$) in terms of the absolute difference between two consecutive iterations and the absolute residual error, and they remain competitive with the scheme $TM$.
Table 7. Numerical results of Example 6.

| Schemes | $\vert x_5 - x_4 \vert$ | $\vert x_6 - x_5 \vert$ | $\vert x_7 - x_6 \vert$ | $\vert g(x_7) \vert$ | $\rho$ |
|---|---|---|---|---|---|
| $TM$ | $2.2(-10)$ | $1.4(-20)$ | $5.6(-41)$ | $2.5(-406)$ | $2.000$ |
| $DM1$ | $1.2(-9)$ | $3.8(-19)$ | $3.8(-38)$ | $3.4(-378)$ | $2.000$ |
| $DM2$ | $2.4(-14)$ | $1.9(-28)$ | $1.3(-56)$ | $1.7(-562)$ | $2.000$ |
| $DM3$ | $2.1(-13)$ | $1.6(-26)$ | $9.9(-53)$ | $3.2(-523)$ | $2.000$ |
| $DM4$ | $2.8(-8)$ | $1.4(-16)$ | $3.5(-33)$ | $2.2(-329)$ | $2.000$ |
| $DM5$ | $4.8(-14)$ | $8.9(-28)$ | $3.0(-55)$ | $2.2(-548)$ | $2.000$ |
| $DM6$ | $1.8(-14)$ | $1.2(-28)$ | $5.1(-57)$ | $3.4(-566)$ | $2.000$ |
| $DM7$ | $2.2(-10)$ | $1.4(-20)$ | $5.6(-41)$ | $2.5(-406)$ | $2.000$ |
| $PM1$ | $3.7(-15)$ | $3.9(-30)$ | $4.2(-60)$ | $1.4(-597)$ | $2.000$ |
| $PM2$ | $1.2(-11)$ | $4.2(-23)$ | $4.9(-46)$ | $6.0(-457)$ | $2.000$ |
| $PM3$ | $1.1(-14)$ | $3.3(-29)$ | $3.0(-58)$ | $4.1(-579)$ | $2.000$ |
| $PM4$ | $1.9(-17)$ | $1.0(-34)$ | $2.9(-69)$ | $3.7(-689)$ | $2.000$ |
It is clear from the above table that our methods, namely $PM1$, $PM2$, $PM3$, and $PM4$, perform far better than the existing methods in terms of both the absolute difference between two consecutive iterations and the absolute residual error. Scheme $PM4$ converges fastest to the required root among all the methods considered.
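Throughout the tables, the residual $\vert g(x_7) \vert$ is far smaller than the last difference $\vert x_7 - x_6 \vert$. This is expected near a multiple root: if $g(x) = (x - r)^m h(x)$ with $h(r) \neq 0$, then $\vert g(x) \vert \approx \vert h(r) \vert \, \vert x - r \vert^m$, so the residual exponent is roughly $m$ times the error exponent. A quick numerical check of this scaling (the function, root, and multiplicity below are assumptions chosen for illustration, not one of the paper's examples):

```python
# Near a root r of multiplicity m, g(x) = (x - r)^m * h(x) with
# h(r) != 0, so |g(x)| ~ |h(r)| * |x - r|^m: the residual shrinks
# m times faster (in the exponent) than the error.
# Illustrative g with a triple root at r = 1.
m, r = 3, 1.0
g = lambda x: (x - r) ** m * (x + 2.0)    # h(x) = x + 2, so h(r) = 3

err = 1e-4                                 # distance from the root
residual = abs(g(r + err))                 # ~ 3 * err**m
ratio = residual / err ** m                # ~ h(r) = 3
```

This also explains why $PM2$'s residual in Table 4, $7.1(-715)$, is so much smaller than the others even though its last difference, $6.4(-90)$, is only modestly smaller.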
 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
