An Optimal Iterative Technique for Multiple Root Finder of Nonlinear Problems

In this paper, an optimal higher-order iterative technique to approximate the multiple roots of a nonlinear equation is presented. The proposed technique has several notable properties: it is a two-point method, involves no derivatives, attains the optimal fourth order of convergence, and is cost-effective, more stable, and numerically accurate. In addition, we adopt a weight-function approach at both substeps, which yields a more general family of two-point methods. The convergence order is first studied for multiplicity m = 2, 3 by Taylor series expansion, and then the general convergence for m ≥ 4 is proved. We demonstrate the applicability of our methods on six numerical problems: the first is the well-known Van der Waals gas problem, the second arises in a blood rheology model, the third is taken from linear algebra (an eigenvalue problem), and the remaining three are academic problems. On the basis of the obtained CPU times, computational orders of convergence, and absolute errors between two consecutive iterations, we conclude that our methods give better results than earlier studies.


Introduction
Finding the multiple roots of a nonlinear equation g(x) = 0 is one of the most difficult tasks in numerical analysis, and multiple roots play an important role in computer science, applied mathematics, physics, applied chemistry, and engineering. For example, the Van der Waals equation of state [1] describes the behavior of a real gas by accounting for molecular size and attraction forces. The solution of such an equation by an analytical approach is either complicated or does not exist, so we must turn to iterative methods. One of the most famous iterative techniques is the modified Newton's method (MNM) [2,3], defined as x_{s+1} = x_s − m g(x_s)/g′(x_s). Its order of convergence is quadratic, provided the multiplicity m of the required root is known in advance.
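The step above can be sketched in a few lines. This is a minimal illustration of the modified Newton method x_{s+1} = x_s − m·g(x_s)/g′(x_s); the test function g(x) = (x − 2)²(x + 1), its derivative, and the starting point are illustrative choices, not taken from the paper.

```python
def modified_newton(g, dg, x0, m, tol=1e-12, max_iter=50):
    """Approximate a multiple root of g with known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:          # exact root hit; avoid 0/0 at the multiple root
            break
        step = m * gx / dg(x)
        x -= step
        if abs(step) < tol:
            break
    return x

g = lambda x: (x - 2.0) ** 2 * (x + 1.0)
dg = lambda x: 2.0 * (x - 2.0) * (x + 1.0) + (x - 2.0) ** 2  # product rule

root = modified_newton(g, dg, x0=2.5, m=2)
```

With the correct m, the iteration regains quadratic convergence even though g′ also vanishes at the multiple root.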
The main drawback of this method is the use of the first-order derivative at each substep. In many real-life problems, the derivative is quite complicated, time-consuming to evaluate, or does not exist. In such cases, it is fruitful to use a derivative-free method. Thus, Traub and Steffensen [4] suggested a derivative-free scheme, defined by x_{s+1} = x_s − m g(x_s)/g[µ_s, x_s], where µ_s = x_s + αg(x_s), α ≠ 0, α ∈ R, and g[µ_s, x_s] denotes the first-order divided difference. Later on, Kumar et al. [5] and Kansal et al. [6] suggested second-order one-point derivative-free schemes with the same µ_s = x_s + αg(x_s), α ≠ 0, α ∈ R. Since all three of the above iterative schemes are one-point, they have several limitations regarding convergence order and efficiency (more details can be found in [2,3]). Researchers therefore turned to multi-point derivative-free methods for known and unknown multiplicity [7,8]. Some of the important schemes are given below.
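The Traub-Steffensen step can be sketched as follows, with the divided difference g[µ, x] = (g(µ) − g(x))/(µ − x) replacing the derivative; the test function g(x) = (x − 1.5)² and the parameter values are illustrative assumptions, not taken from the paper.

```python
def traub_steffensen(g, x0, m, alpha=0.01, tol=1e-12, max_iter=100):
    """Derivative-free iteration for a root of known multiplicity m."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if gx == 0.0:
            break
        mu = x + alpha * gx
        if mu == x:            # alpha*g(x) below machine precision: converged
            break
        divided_diff = (g(mu) - gx) / (mu - x)
        step = m * gx / divided_diff
        x -= step
        if abs(step) < tol:
            break
    return x

root = traub_steffensen(lambda x: (x - 1.5) ** 2, x0=2.0, m=2)
```

The iteration uses only evaluations of g itself, which is the point of the derivative-free construction.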
Hueso et al. [9] developed a fourth-order derivative-free method in which the derivative is approximated by h(y_s, x_s) = g[y_s + g(y_s)^q, y_s] / g[x_s + g(x_s)^q, x_s]; the values of the constants q, a_1, a_2, a_3, a_4 can be found in [9].
Baccouch [10] proposed many higher-order multi-point methods; we denote one of the fourth-order derivative-free schemes by (BM). In 2019, Sharma et al. [11] proposed a fourth-order derivative-free scheme with v_s = x_s + βg(x_s) and the multi-valued ratios t_s = (g(z_s)/g(x_s))^{1/m} and y_s; the details of the weight function H(t_s, y_s) and its conditions can be found in [11].
In 2020, Sharma et al. [12] suggested another derivative-free scheme. In the same year, Kumar et al. [13] presented a fourth-order derivative-free scheme with v_s = x_s + βg(x_s) and t_s = (g(y_s)/g(x_s))^{1/m}; the values of the parameters α_1, α_2, α_3, and α_4 are given in [13].
In 2020, Behl et al. [14] presented a derivative-free family of fourth-order iterative methods. In 2021, Behl et al. [15] suggested a fourth-order derivative-free variant of the Chebyshev-Halley family, defined with u_s = x_s + αg(x_s), τ = (g(y_s)/g(u_s))^{1/m}, and ζ = (g(y_s)/g(x_s))^{1/m}; the values and hypotheses on the weight function can be found in [15]. Very recently, in 2022, Behl [16] proposed another fourth-order derivative-free scheme involving two multi-valued functions (principal m-th roots); the hypotheses and conditions on the weight function M are described in [16]. Some other higher-order derivative-free techniques can be found in [10,17].

From the above discussion, it is clear that derivative-free multi-point methods for multiple roots are in demand. Motivated in the same direction, we suggest a new and more general scheme that produces better and faster numerical results. Our scheme has an optimal order of convergence and is derivative-free, flexible at both substeps, cost-effective, and more stable. It is based on a weight-function approach, and it is not only optimal and derivative-free but also flexible at both substeps: with suitable choices of the weight functions at the first and second substeps, we can construct many new and existing techniques. For example, if we choose b = 0 in Expression (12), then it becomes a special case of our scheme. We illustrate the applicability of our methods on six numerical problems. On the basis of the obtained results, we find that our methods give better results than earlier studies in terms of CPU time, computational order of convergence, and absolute errors between two consecutive iterations.

Suggested Higher-Order Scheme and Its Analysis
Here, we suggest a new fourth-order iterative technique for multiple zeros of multiplicity m ≥ 2. Three weight functions H : C → C, Q : C → C, and M : C → C are analytic in a neighborhood of the origin. Moreover, τ = (g(y_s)/g(x_s))^{1/m} and ϑ = (g(y_s)/g(µ_s))^{1/m} are two multi-valued maps. We adopt the principal root (see [18]), which is obtained by ζ = exp((1/m) log(z)); the choice of arg(z) for z ∈ C agrees with log(z), as mentioned in the numerical section. In an analogous way, we obtain the other principal root. By choosing b = 0 and H(τ) = τ² in Expressions (12) and (13), respectively, Behl's scheme [16] turns out to be a special case of our scheme. In Theorems 1-3, we demonstrate the convergence analysis of (13) without adopting any extra evaluation of g or g′ at other points.
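The principal m-th root above can be illustrated directly with the principal branch of the complex logarithm, for which arg(z) ∈ (−π, π]; the sample value is an illustrative choice.

```python
import cmath

def principal_root(z, m):
    """Principal m-th root of a nonzero complex z: exp((1/m) * Log z)."""
    return cmath.exp(cmath.log(z) / m)

# Principal cube root of -8: 2*exp(i*pi/3) = 1 + sqrt(3)*i, not the real root -2
w = principal_root(-8.0 + 0j, 3)
```

This is why the choice of branch matters: the principal cube root of −8 is complex, whereas the real root −2 lies on a different branch.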
Theorem 1. Assume that a map g : D ⊂ C → C is analytic in a region D surrounding the required zero, and let x = η be a multiple solution of multiplicity m = 2. Then, the newly constructed scheme (13) has fourth-order convergence under the stated conditions on the weight functions, and it satisfies the corresponding error equation. Here, H_0, M_0, and Q_0 denote the values of H, M, and Q at the origin, and the subscripts j = 1, 2, 3 in H_j denote the first-, second-, and third-order derivatives, respectively, at the origin; the M_j and Q_j are defined in a similar fashion.

Proof.
We assume that e_s = x_s − η and the c_k are the error in the sth iteration and the asymptotic constants, respectively. We choose the Taylor series expansions of g at the two points x = x_s and x = µ_s = x_s + θg(x_s) in a neighborhood of η, with the hypotheses g(η) = g′(η) = 0 and g″(η) ≠ 0. Then, we obtain Expressions (15) and (16). By using Equations (15) and (16), we have (17). It is clear from Expression (17) that τ = O(e_s). Thus, we can easily expand H(τ) in a neighborhood of the origin, as in (18). Expressions (17) and (18) provide the error expression (19). From (19), we observe that the scheme attains at least second-order convergence when condition (20) holds. By using Expression (20) in (19), we obtain (21). By adopting Taylor series expansions, we have (22). From Expressions (17), (18), and (22), we further obtain (23) and (24). From Expressions (23) and (24), we have ζ = ϑ = O(e_s). Thus, we expand Q(ζ) and M(ϑ) in a neighborhood of the origin as in (25) and (26), where M_j = M_j(0), Q_j = Q_j(0), and 0 ≤ j ≤ 3 (j ∈ W). By using Expressions (15)-(26) in scheme (13), we obtain (27). From (27), we observe that the scheme attains at least second-order convergence; the terms A_0 and A_1 should simultaneously be zero for fourth-order convergence, which we attain under condition (28), where M_2 ∈ R.
By adopting (28) in (27), we obtain the error Equation (29), where M_3, H_3, Q_3 ∈ R. We deduce from Expression (29) that our scheme (13) attains fourth-order convergence for θ ∈ R and m = 2 with the same number of functional evaluations. Hence, Expression (13) is an optimal scheme.
Theorem 2. Under the conditions of Theorem 1, the suggested iterative technique (13) has fourth-order convergence when m = 3, and it satisfies the corresponding error equation.

Proof. We assume that ẽ_s = x_s − η and the β_k are the error in the sth iteration and the asymptotic constants, respectively. We choose the Taylor series expansions of g at the two points x = x_s and x = µ_s = x_s + θg(x_s) in a neighborhood of η, with the hypotheses g(η) = g′(η) = g″(η) = 0 and g‴(η) ≠ 0. Then, we obtain Expressions (30) and (31), respectively. By using Expressions (30) and (31), we have (32). It is clear from Expression (32) that τ = O(e_s). Thus, we can expand H(τ) in a neighborhood of the origin as in (33). With the help of Expressions (32) and (33), we further have (34). From (34), we observe that the scheme attains at least second-order convergence when condition (35) holds. By using Expression (35) in (34), we obtain (36). By adopting Taylor series expansions, we have the expansion (37) of g(y_s) in terms of the β_k.
From Expressions (32), (33), and (37), we further obtain (38) and ϑ = (g(y_s)/g(µ_s))^{1/m} as in (39). From Expressions (38) and (39), we have ζ = ϑ = O(e_s). Thus, we expand Q(ζ) and M(ϑ) in a neighborhood of the origin as in (40) and (41). By adopting Expressions (30)-(41) in scheme (13), we obtain (42). From (42), we observe that the scheme attains at least second-order convergence; the coefficients of e_s² and e_s³ should simultaneously be zero in order to deduce fourth-order convergence, which is easily obtained by the values in (43). By adopting (43) in (42), we obtain the error Equation (44), where M_3, H_3, Q_3 ∈ R. We deduce from Expression (44) that our scheme (13) attains fourth-order convergence for θ ∈ R and m = 3 with the same number of functional evaluations. Hence, (13) is an optimal scheme.

Theorem 3. Under the conditions of Theorem 1, the suggested scheme (13) has fourth-order convergence when m ≥ 4, and it satisfies the corresponding error equation.

With the help of Expressions (47) and (48), we further have (49).
From (49), we observe that the scheme attains at least second-order convergence when H_0 = 0 and H_1 = 1.
The terms C_0 and C_1 should simultaneously be zero for fourth-order convergence, which we attain by choosing the values in (58). By adopting (58) in (57), we obtain the final asymptotic error Equation (59), where M_2, M_3, H_3 ∈ R. We deduce from Expression (59) that our scheme (13) attains fourth-order convergence for θ ∈ R and m ≥ 4 with the same number of functional evaluations. Hence, (13) is an optimal scheme.

Remark 1. It appears from (59) (for m ≥ 4) that θ is not involved in this expression; however, it actually appears in the coefficient of e_s⁵. We do not need to calculate this coefficient, because the optimal fourth order of convergence is already obtained; moreover, the calculation of the e_s⁵ term is quite rigorous and consumes a huge amount of time. Nonetheless, the role of θ can be seen in (29) and (44).

Numerical Experiments
In this section, the proposed schemes M1-M4 are verified on some academic and application-oriented problems. The attained outcomes are compared with the methods developed by Zafar et al. [19], Sharma et al. [12], Behl [16], and Kansal et al. [6], respectively. All of the above-mentioned existing schemes are listed below. Zafar et al. scheme (FM_1) [19]:
In all the experimental work, we take γ = −0.01. The outcomes were obtained with the software Mathematica 10 using 10,000 multiple-precision digits of mantissa, on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00GHz 1.19 GHz with 8 GB RAM on a 64-bit operating system. The stopping criterion is |x_s − x_{s−1}| + |g(x_s)| ≤ 10^{−200}. The following tables show that our methods give better results than the earlier studies in terms of the errors between two consecutive iterations e_s = |x_s − x_{s−1}|, CPU time, and the ACOC (approximate computational order of convergence), denoted ρ. The following approach is adopted to calculate the ACOC.
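The standard ACOC estimate from three consecutive error terms e_s = |x_s − x_{s−1}| is ρ ≈ ln(e_{s+1}/e_s) / ln(e_s/e_{s−1}); a minimal sketch follows, where the iterate values are illustrative (a quadratically convergent toy sequence), not taken from the paper's tables.

```python
import math

def acoc(x):
    """Approximate computational order of convergence from the last four iterates."""
    e1 = abs(x[-3] - x[-4])
    e2 = abs(x[-2] - x[-3])
    e3 = abs(x[-1] - x[-2])
    return math.log(e3 / e2) / math.log(e2 / e1)

# Toy sequence converging to 0 with the error squared at each step
iterates = [1e-1, 1e-2, 1e-4, 1e-8]
rho = acoc(iterates)   # close to 2 for this quadratically convergent sequence
```

For the fourth-order schemes in the tables, ρ computed this way should approach 4.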
Furthermore, the iterative process stops after three iterations, and each numerical example is tested with different initial values. It is important to note that b(±a) means b × 10^{±a} in the following tables.
Example 1. Firstly, we tested the methods on the Van der Waals equation of state [15], (P + an²/V²)(V − nb) = nRT, which describes the behavior of a particular gas for particular values of a and b; the values of n, R, and T are calculated with the help of a and b. Hence, Equation (2) formulates the nonlinear equation for the volume of the gas (V) in terms of the variable x. One of the required zeros, of multiplicity m = 2, of g_1(x) is x = 1.75. Table 1 reports the results of the different iterative methods for the starting point x_0 = 1.9. It is easily observed from the table that the proposed methods M1, M2, M3, and M4 have smaller absolute functional errors than the other methods. In addition, the order of convergence is not achieved by method FM_2 even after seven iterations. Furthermore, our method M4 consumes the lowest CPU time among the mentioned methods.

Example 2. Next, we consider the blood rheology model [20], which investigates the physical and flow characteristics of blood. In reality, blood is a non-Newtonian fluid and is modeled as a Casson fluid. According to the Casson fluid model, basic fluids flow in tubes in such a way that the wall region experiences a velocity gradient while the fluid's central core moves as a plug with minimal deformation. A nonlinear equation describing the plug flow of Casson fluids is considered; here, we take H = 0.40 to compute the flow-rate reduction, which reduces it to a nonlinear equation, and g_2(x) is then defined so that it has multiple roots. By applying the proposed schemes, we obtained the required zero x = 0.08643356... of multiplicity m = 4 of the function g_2(x). Table 2 reports the results of the different iterative methods for the starting point x_0 = 0.22. It is easily observed from the table that the proposed methods M1, M2, M3, and M4 have smaller absolute functional errors than the other methods.
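The paper's g_1(x) is not reproduced here; as a stand-in (an assumption) we use a cubic with the same double root x = 1.75, namely g_1(x) = (x − 1.75)²(x − c) with the simple root c = 1.72 chosen arbitrarily, and check that the derivative-free Traub-Steffensen iteration from the introduction converges from the paper's starting point x_0 = 1.9 with multiplicity m = 2.

```python
def g1(x):
    # Hypothetical stand-in for the Van der Waals polynomial: double root at 1.75
    return (x - 1.75) ** 2 * (x - 1.72)

def iterate(g, x0, m, alpha=0.01, steps=12):
    """Traub-Steffensen iteration x -= m*g(x)/g[mu, x], mu = x + alpha*g(x)."""
    x = x0
    for _ in range(steps):
        gx = g(x)
        mu = x + alpha * gx
        if gx == 0.0 or mu == x:   # converged to machine precision
            break
        x -= m * gx / ((g(mu) - gx) / (mu - x))
    return x

root = iterate(g1, x0=1.9, m=2)
```

Even with the nearby simple root at 1.72, the iteration from 1.9 settles on the double root because the m = 2 correction targets it.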
Example 3. Since eigenvalues play a significant role in linear algebra, with many real-life applications such as image processing and product quality, and since it can be a tough task to evaluate the eigenvalues of a larger matrix, we consider a ninth-order matrix B. The characteristic equation of B forms a polynomial equation with a zero x = 3 of multiplicity m = 4. Tables 3 and 4 report the results of the proposed schemes, which are much better than the available techniques in terms of absolute functional errors, order of convergence, and CPU time. We choose two starting points, x_0 = 2.8 and x_0 = 3.1, for a better comparison: one initial guess (x_0 = 2.8) lies to the left of the required root and the other to the right. Furthermore, although method FM_2 consumes the lowest CPU time, its convergence toward the required zero is very slow, and it does not attain the required convergence order.

Example 4. Now, we examine the suggested methods on the following academic problem, having a root z = i of multiplicity 4: g_4(z) = z(z² + 1)(2e^{z²+1} + z² − 1) cosh²(πz/2).
The results with initial values x_0 = 1.2i and x_0 = 0.9i are shown in Tables 5 and 6, respectively. It is clear from the tables that our methods show much better results, not only in absolute residual errors but also in CPU time.

Example 5. Next, the following academic problem is considered, which has a zero x = 2.23607... of multiplicity 4. The suggested methods are tested with the starting value x_0 = 1.4, and the attained results are reported in Table 7. We found that our methods M1, M2, M3, and M4 give better numerical results than the methods SM_1, FM_1, and FM_2. Method M2 not only consumes the lowest CPU time but also performs much better than the existing methods.

Example 6. Lastly, the following academic problem with large multiplicity is considered.
It has a zero x = 0 of multiplicity 10. All the proposed and earlier methods are examined with the initial value x_0 = 1. The achieved outcomes are shown in Table 8, which clearly demonstrates the superior results of our methods over the other methods. Moreover, the fourth-order methods FM_1 and FM_2 do not work for this example of higher multiplicity. Overall, we observe from Tables 1-8 that the proposed techniques have lower residual errors and CPU times than the other methods for the same number of iterations.

Conclusions
• We constructed a new two-step, derivative-free, and cost-effective iterative technique for multiple zeros (m ≥ 2).
• The presented scheme uses three different weight functions (at both substeps) in order to obtain a more general form of two-point methods.
• Several new cases are depicted in Section 2.
• Behl's scheme [16] is obtained as a special case of our scheme by choosing b = 0 and H(τ) = τ² in Expressions (12) and (13), respectively.
• Since our scheme (13) consumes only three evaluations of g at different points, the maximum bound (optimal level) of our scheme is achieved according to the Kung-Traub conjecture.
• From Table 7, it is confirmed that methods FM_1 and FM_2 diverge from the required solution, whereas our methods do not exhibit this behavior. In particular, M4 not only converges to the required solution but also has the lowest absolute error among the depicted techniques.
• Finally, we deduce from Tables 1-8 that our schemes are more stable and cost-effective. These methods could be a better alternative to earlier studies.