An Efficient Family of Optimal Eighth-Order Multiple Root Finders

Abstract: Finding a repeated zero of a nonlinear equation f(x) = 0, f : I ⊆ R → R, has always attracted much interest and attention due to its wide applications in many fields of science and engineering. Modified Newton's method is usually applied to solve this kind of problem. Keeping in view that very few optimal higher-order convergent methods exist for multiple roots, we present a new family of optimal eighth-order convergent iterative methods for multiple roots with known multiplicity involving a multivariate weight function. The numerical performance of the proposed methods is analyzed extensively along with their basins of attraction. Real-life models from life science, engineering, and physics are considered for the sake of comparison. The numerical experiments and dynamical analysis show that our proposed methods are efficient for determining multiple roots of nonlinear equations.


Introduction
It is well known that Newton's method converges only linearly to non-simple roots of a nonlinear equation. For obtaining multiple roots of a univariate nonlinear equation with quadratic order of convergence, Schröder [1] modified Newton's method, with prior knowledge of the multiplicity m ≥ 1 of the root, as follows: x_{n+1} = x_n − m f(x_n)/f′(x_n), n = 0, 1, 2, . . . (1) Scheme (1) can determine the desired multiple root with quadratic convergence and is optimal in the sense of the Kung-Traub conjecture [2], which states that any multipoint method without memory can reach a convergence order of at most 2^(p−1) for p functional evaluations.
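As a quick illustration, Schröder's scheme (1) can be sketched in a few lines of Python; the function names and the sample polynomial here are illustrative only, not taken from the paper:

```python
def modified_newton(f, df, x0, m, tol=1e-12, max_iter=50):
    """Schroder's modified Newton iteration x_{n+1} = x_n - m*f(x_n)/f'(x_n)
    for a root of known multiplicity m; converges quadratically."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0.0 or abs(fx) < tol:
            break
        x -= m * fx / dfx
    return x

# f(x) = (x - 2)**3 has a zero of multiplicity m = 3 at x = 2;
# plain Newton would creep toward it linearly, while scheme (1) does not.
root = modified_newton(lambda x: (x - 2)**3, lambda x: 3*(x - 2)**2, x0=3.0, m=3)
```

With the multiplicity supplied exactly, the very first step here lands on the root, since m·f/f′ = (x − 2) for this polynomial.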
In the last few decades, many researchers have worked to develop iterative methods for finding multiple roots with greater efficiency and higher order of convergence. Among them, Li et al. [3] in 2009, Sharma and Sharma [4] and Li et al. [5] in 2010, Zhou et al. [6] in 2011, Sharifi et al. [7] in 2012, Soleymani et al. [8], Soleymani and Babajee [9], Liu and Zhou [10] and Zhou et al. [11] in 2013, Thukral [12] in 2014, Behl et al. [13] and Hueso et al. [14] in 2015, and Behl et al. [15] in 2016 presented optimal fourth-order methods for multiple zeros. Additionally, Li et al. [5] (among other optimal methods) and Neta [16] presented non-optimal fourth-order iterative methods. In recent years, efforts have been made to obtain optimal schemes with convergence order greater than four for multiple zeros of multiplicity m ≥ 1 of a univariate function. Some of these efforts only succeeded in developing iterative schemes of at most sixth-order convergence in the case of multiple zeros; see, for example, [17,18]. Only a few multipoint iterative schemes with optimal eighth-order convergence for multiple zeros have been proposed, all very recently.
Behl et al. [19] proposed a family of optimal eighth-order iterative methods for multiple roots involving univariate and bivariate weight functions, given as: where the weight functions Q : C → C and G : C² → C are analytic in neighborhoods of 0 and (0, 0), respectively, and a_1 and a_2 are nonzero complex free parameters.
A second optimal eighth-order scheme involving parameters has been proposed by Zafar et al. [20], given as follows: where B_1, B_2 ∈ R are free parameters and the weight functions H : C → C, P : C → C and G : C → C are analytic in a neighborhood of 0. Recently, Geum et al. [21] presented another optimal eighth-order method for multiple roots: Behl et al. [22] also developed an optimal eighth-order method involving free parameters and a univariate weight function as follows: where α_1, α_2 ∈ R are two free disposable parameters. Most recently, Behl et al. [23] presented an optimal eighth-order method involving univariate weight functions, given as: where B_1, B_2 ∈ R are free parameters. Motivated by the research going on in this direction, and aiming to provide more stable optimal higher-order methods, we propose a new family of optimal eighth-order iterative methods for finding simple as well as multiple zeros of a univariate nonlinear function with multiplicity m ≥ 1. The derivation of the proposed class is based on a univariate and a trivariate weight function. In addition, our proposed methods not only converge faster but also produce smaller residual errors. We demonstrate the efficiency and robustness of the proposed methods through numerical tests on several applied-science problems, observing that our methods give better numerical results than the existing methods. Further, the dynamical performance of these methods on the above-mentioned problems supports the theoretical results, showing good behavior in terms of dependence on initial estimates.
The rest of the paper is organized as follows: Section 2 provides the construction of the new family of iterative methods and the convergence analysis proving the eighth order of convergence. In Section 3, some special cases of the new family are defined. In Section 4, the numerical performance of some special cases of the new family is compared with that of existing methods. The numerical comparison is carried out using nonlinear equations that appear in the modeling of a predator-prey system, beam design, an electric circuit, and an eigenvalue problem. Additionally, some dynamical planes are provided to compare the stability of the new methods with that of known methods. Finally, conclusions are stated in Section 5.

Construction of the Family
This section is devoted to the main contribution of this study, the design and convergence analysis of the proposed scheme. We consider the following optimal eighth-order class for finding multiple zeros with multiplicity m ≥ 1: where G : C → C and H : C³ → C are analytic functions in a neighborhood of 0 and of (0, 0, 0), respectively. In the next result, we demonstrate that the order of convergence of the proposed family reaches the optimal order eight.
Theorem 1. Let x = ξ be a zero with multiplicity m ≥ 1 of the involved function f. In addition, assume that f : C → C is an analytic function in a region enclosing the multiple zero ξ. The proposed class defined by Equation (7) has an optimal eighth order of convergence when the following conditions are satisfied: where
Proof. Let e_n = x_n − ξ be the error at the nth step. By expanding f(x_n) and f′(x_n) about x = ξ using Taylor series, we have: and: respectively, where By inserting Equations (9) and (10) into the first substep of Equation (7), we obtain: where With the help of Taylor series expansion and Equation (11), we get: Using Equations (9) and (12), we have: It is clear from Equation (13) that u_n is of order one. Therefore, we can expand the weight function G(u_n) in a neighborhood of the origin by Taylor series up to third-order terms for eighth-order convergence as follows: Now, by inserting Equations (11)-(14) into the second substep of the proposed class (Equation (7)), we obtain: where In order to obtain fourth-order convergence, the coefficients of e_n^2 and e_n^3 must simultaneously vanish. Thus, from Equation (15), we obtain the following values of G(0) and G′(0): Using Equation (16), we have: where With the help of Equation (17) and Taylor series expansion, we have: where: Using Equations (9), (12) and (18), we further obtain: where the coefficients depend on m and c_2, . . ., c_6 for j = 1, 2, 3, 4, and: where Hence, it is clear that t_n and w_n are of order 2 and 3, respectively. Therefore, we can expand the weight function H(u_n, t_n, w_n) in a neighborhood of (0, 0, 0) by Taylor series up to second-order terms as follows: where Using Equations (9)-(21) in the last substep of the proposed scheme (Equation (7)), we have: where From Equation (22), it is clear that we can easily obtain at least cubic order of convergence for: Moreover, E_1 = 0 for H_100 = 0, and we also have:
Thus, we take: Then, by inserting Equation (24), it results that E_2 = 0 and: Therefore, by taking: we have at least sixth-order convergence. Additionally, for H_020 = 1: which further yields: Finally, we take: where Then, by substituting Equations (23), (24), (26) and (28) into Equation (22), we obtain the following optimal asymptotic error constant: Equation (29) reveals that the proposed scheme (Equation (7)) reaches optimal eighth-order convergence using only four functional evaluations (namely, f(x_n), f′(x_n), f(y_n) and f(z_n)) per iteration. This completes the proof.

Some Special Cases of Weight Function
In this section, we discuss some special cases of our proposed class (7) obtained by assigning different kinds of weight functions. The following cases present some particular members of the proposed family.
Case 1: Let us describe the following polynomial weight functions, taken directly from the hypotheses of Theorem 1: where H_001 and G_3 are free parameters. Case 1A: When H_001 = 2, G_3 = 0, we obtain the corresponding optimal eighth-order iterative method as follows: Case 2: Next, we suggest a mixture of rational and polynomial weight functions satisfying condition (8) as follows: where a_0, G_3 and H_001 are free parameters. Case 2A: When a_0 = 2, H_001 = 2, G_3 = 12, the corresponding optimal eighth-order iterative scheme is given by: Case 3: Now, we suggest another rational and polynomial weight function satisfying Equation (8) as follows: where a_3 = −2(a_0 − 1) + G_3/12, a_4 = 2a_0 + (a_0 − 6)G_3/12, and a_0, H_001 and G_3 are free parameters. Case 3A: By choosing a_0 = 4, a_3 = −5, a_4 = 6, H_001 = 2, G_3 = 12, the corresponding optimal eighth-order iterative scheme is given by: In a similar way, several new and interesting optimal eighth-order schemes for multiple zeros can be developed by considering other weight functions that satisfy the conditions of Theorem 1.

Numerical Experiments
This section is devoted to demonstrating the efficiency, effectiveness, and convergence behavior of the presented family. In this regard, we consider some of the special cases of the proposed class, namely, Equations (31), (33) and (35), denoted by NS1, NS2, and NS3, respectively. In addition, we choose four test problems for comparison: The first is a predator-prey model, the second is a beam designing problem, the third is an electric circuit model with a simple zero, and the last is an eigenvalue problem. We compare our methods with other existing robust schemes of the same order on the basis of the difference between two consecutive iterations, the residual error of the function, the computational order of convergence ρ, and the asymptotic error constant η. We have chosen the eighth-order iterative methods for multiple zeros given by Behl et al. [19,23]. We take the particular case (Equation (27)) of the family by Behl et al. [19] with a_1 = 1, a_2 = −2, G_02 = 2m, and denote it by BM1: From the eighth-order family of Behl et al. [23], we consider the following special case, denoted by BM2: Tables 1-4 display the iteration index n, the error between consecutive iterations |x_{n+1} − x_n|, the residual error |f(x_n)|, the computational order of convergence ρ, and the asymptotic error constant η. We carried out our calculations with 1000 significant digits to minimize round-off error. All numerical values in Tables 1-4 are displayed up to 7 significant digits with exponent. Finally, the approximated zeros in Examples 1-4 are displayed up to 30 significant digits, although at least 1000 significant digits were computed.
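For reference, the diagnostics reported in the tables can be computed from the iterate history as in the sketch below. This is a plain-Python illustration: the order formula is the error-based one attributed to Jay [24], and the synthetic data are not taken from the paper's experiments.

```python
import math

def convergence_diagnostics(xs, xi, p=8):
    """Given iterates xs and the exact root xi, return the computational
    order of convergence rho = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}) and the
    asymptotic error constant eta ~ e_{n+1} / e_n**p for a pth-order method."""
    e = [abs(x - xi) for x in xs]
    rho = math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])
    eta = e[-1] / e[-2]**p
    return rho, eta

# Synthetic errors obeying e_{n+1} = 0.5 * e_n**2, i.e., a second-order
# method with asymptotic error constant 0.5.
errs = [1e-1]
for _ in range(3):
    errs.append(0.5 * errs[-1]**2)
xs = [2.0 + e for e in errs]          # synthetic iterates around root 2.0
rho, eta = convergence_diagnostics(xs, 2.0, p=2)
```

For these synthetic iterates the recovered order is 2 and the recovered constant is 0.5, as expected; in the paper's tables the same quantities are computed with p = 8.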
For the computations, we used the programming package Maple 16 with multiple-precision arithmetic. Further, a(±b) stands for a × 10^(±b) in Tables 1-4. We now describe the real-life problems chosen for comparing the schemes.
Example 1 (Predator-Prey Model). Let us consider a predator-prey model with ladybugs as predators and aphids as prey [25]. Let x be the prey density and let the predation rate P(x) be the number of aphids eaten by a ladybug per unit time per unit area. The predation rate usually depends on the prey density and is given as: P(x) = Kx³/(a³ + x³). Let the growth of the aphids obey the Malthusian model; therefore, the growth rate of aphids per hour is: G(x) = rx. The problem is to find the aphid density x for which: P(x) = G(x). This gives: Kx³/(a³ + x³) = rx. Let K = 30 aphids eaten per hour, a = 20 aphids, and r = 2^(−1/3) per hour. Thus, excluding the trivial solution x = 0 and dividing through by r, we are required to find the zero of: f_1(x) = x³ − 30 · 2^(1/3) x² + 8000. The desired zero of f_1 is 25.198420997897463295344212145564 with m = 2. We choose x_0 = 20.
Example 2 (Beam Designing Model). We consider a beam positioning problem (see [26]) in which an r-meter-long beam leans against the edge of a cubical box with sides of length 1 m each, such that one end touches the wall and the other touches the floor, as shown in Figure 1. What should be the distance along the floor from the base of the wall to the bottom of the beam? Let y be the distance in meters along the beam from the floor to the edge of the box and let x be the distance in meters from the bottom of the box to the bottom of the beam. Then, for a given value of r, we have: x² + x²/(x − 1)² = r², which for r = 2√2 reduces to: f_2(x) = x⁴ − 2x³ − 6x² + 16x − 8 = 0. The physically relevant positive solution of the equation is a double root x = 2. We consider the initial guess x_0 = 1.7.
Example 3 (The Shockley Diode Equation and Electric Circuit). Let us consider an electric circuit consisting of a diode and a resistor. By Kirchhoff's voltage law, the source voltage V_S is equal to the sum of the voltage drops across the diode, V_D, and the resistor, V_R: V_S = V_D + V_R. Let the source voltage be V_S = 0.5 V and, from Ohm's
law, V_R = IR, where R is the resistance. Additionally, the voltage drop across the diode is given by the Shockley diode equation as follows: I = I_S (e^(V_D/(nV_T)) − 1), where I is the diode current in amperes, I_S is the saturation current (amperes), n is the emission or ideality constant (1 ≤ n ≤ 2 for a silicon diode), V_T is the thermal voltage, and V_D is the voltage applied across the diode. Solving Equation (40) for V_D and using all the values in Equation (38), we obtain: V_S = IR + nV_T ln(I/I_S + 1). Now, for the given values of n, V_T, R and I_S, we have the following equation [27]: −0.5 + 0.1I + 1.4 ln(I + 1) = 0.
Replacing I with x, we have f_3(x) = −0.5 + 0.1x + 1.4 ln(x + 1). The true root of the equation is 0.389977198390077586586453532646. We take x_0 = 0.5. Example 4 (Eigenvalue Problem). One of the challenging tasks of linear algebra is to calculate the eigenvalues of a large square matrix, especially when the required eigenvalues are the zeros of the characteristic polynomial obtained from the determinant of a square matrix of order greater than 4. Let us consider the following 9 × 9 matrix: The corresponding characteristic polynomial of matrix A is given as follows: This function has one multiple zero at ξ = 3 of multiplicity 4, with initial approximation x_0 = 3.1. In Tables 1-4, we show the numerical results obtained by applying the different methods for approximating the multiple roots of f_1(x)-f_4(x). The obtained values confirm the theoretical results.
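Assuming the test functions take the forms f_1(x) = x³ − 30·2^(1/3)x² + 8000, f_2(x) = x⁴ − 2x³ − 6x² + 16x − 8 and f_3(x) = −0.5 + 0.1x + 1.4 ln(x + 1) (the first two are reconstructions chosen to match the roots and multiplicities stated above), their zeros can be verified even with Schröder's second-order iteration (1), used here as a stand-in for the proposed eighth-order schemes:

```python
import math

def modified_newton(f, df, x0, m, tol=1e-10, max_iter=60):
    # Schroder's iteration for a zero of known multiplicity m.
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0.0 or abs(fx) < tol:
            break
        x -= m * fx / dfx
    return x

c = 30 * 2**(1/3)                                   # from K = 30, r = 2**(-1/3)
f1 = lambda x: x**3 - c*x**2 + 8000                 # predator-prey, double root
df1 = lambda x: 3*x**2 - 2*c*x

f2 = lambda x: x**4 - 2*x**3 - 6*x**2 + 16*x - 8    # beam problem, double root
df2 = lambda x: 4*x**3 - 6*x**2 - 12*x + 16

f3 = lambda x: -0.5 + 0.1*x + 1.4*math.log(x + 1)   # diode current, simple root
df3 = lambda x: 0.1 + 1.4/(x + 1)

r1 = modified_newton(f1, df1, x0=20.0, m=2)
r2 = modified_newton(f2, df2, x0=1.7, m=2)
r3 = modified_newton(f3, df3, x0=0.5, m=1)
```

With the initial guesses of Examples 1-3, the iterates settle on 25.1984…, 2 and 0.38997…, in agreement with the roots quoted above (double precision limits the attainable accuracy near the double roots; the paper's tables use 1000-digit arithmetic instead).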
From the tables, it can be observed that our proposed schemes NS1, NS2, and NS3 perform better than the other similar methods in approximating the multiple roots of f_1, f_2 and f_4. Only in the example with a simple zero does Behl's scheme BM1 perform slightly better than the other methods.

Dynamical Planes
The dynamical behavior of the methods on the test functions is presented in Figures 2-9. The dynamical planes have been generated using the routines published in Reference [28]. We used a mesh of 400 × 400 points in the region [−100, 100] × [−100, 100] of the complex plane. We painted in orange the points whose orbit converges to the multiple root, and in black those points whose orbit either diverges or converges to a strange fixed point or a cycle. We used a tolerance of 10^(−3) and a maximum of 80 iterations. The multiple root is represented in the figures by a white star. Figures 2-9 show the convergence and divergence regions of the new schemes NS1, NS2, and NS3 in comparison with the other schemes of the same order. In the cases of f_1(x) and f_2(x), we observe that the new schemes are more stable than BM1 and BM2, as they are almost divergence-free, and they also converge faster than BM1 and BM2 on their common regions of convergence. In the case of f_3(x), BM1 performs better; however, NS1, NS2, and NS3 have an edge over BM2: despite behaving analogously to BM2 in this region, the new schemes show more robustness. Similarly, in the case of f_4(x), it can be clearly observed that the divergence region of BM1 is bigger than those of NS1, NS2, and NS3. Additionally, these schemes perform better than BM2 wherever they converge. The same behavior can be observed in the numerical comparison of these methods in Tables 1-4. As a future extension, we plan to construct a new optimal eighth-order method whose stability analysis allows choosing the optimal weight function for the best possible results.
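The basin pictures described above can be reproduced in principle with a short script. The sketch below uses Schröder's second-order iteration as a stand-in dynamical system (the NS/BM iteration formulas are not reproduced here) on the polynomial (z² − 1)², which has double roots at ±1; the tolerance and iteration cap mirror the values quoted above, while the grid and region are scaled down:

```python
def basin_grid(f, df, roots, m, extent=2.0, n=60, tol=1e-3, max_iter=80):
    """Label each grid point with the index of the root its Schroder orbit
    reaches (within tol), or -1 if it fails to converge in max_iter steps."""
    xs = [-extent + 2 * extent * k / (n - 1) for k in range(n)]
    labels = [[-1] * n for _ in range(n)]
    for j, im in enumerate(xs):          # rows: imaginary part
        for i, re in enumerate(xs):      # cols: real part
            z = complex(re, im)
            for _ in range(max_iter):
                dz = df(z)
                if dz == 0:
                    break
                z -= m * f(z) / dz       # Schroder step for multiplicity m
                hit = next((k for k, r in enumerate(roots) if abs(z - r) < tol), None)
                if hit is not None:
                    labels[j][i] = hit
                    break
    return labels

# (z**2 - 1)**2 has double roots at +1 and -1; with m = 2, the Schroder step
# reduces to Newton's step for z**2 - 1, whose basins are the two half-planes.
f = lambda z: (z * z - 1)**2
df = lambda z: 4 * z * (z * z - 1)
grid = basin_grid(f, df, roots=[1.0, -1.0], m=2)
```

Coloring `grid` (e.g., with an image routine) reproduces the familiar half-plane basins; replacing the Schröder step with any of the NS or BM iterations yields the planes of Figures 2-9.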

Conclusions
In this manuscript, a new general class of optimal eighth-order methods for solving nonlinear equations with multiple roots was presented. This family was obtained using the weight-function procedure with two functions: one univariate and another depending on three variables. To reach this optimal order, some conditions on the functions and their derivatives must be imposed. Several special cases were selected and applied to different real problems, and their performance was compared with that of other known methods of the same order of convergence. Finally, their dependence on initial estimates was analyzed through their basins of attraction.

Note: the computational order of convergence ρ is computed by the formula of Jay [24]; the tables also report the absolute residual error of the corresponding function, |f(x_n)|, and the asymptotic error constant η ≈ e_n / e_{n−1}^8.

Table 1 .
Comparison of different multiple root finding methods for f 1 (x).

Table 2 .
Comparison of different multiple root finding methods for f 2 (x).

Table 3 .
Comparison of different multiple root finding methods for f 3 (x).

Table 4 .
Comparison of different multiple root finding methods for f 4 (x).