1. Introduction
Finding the multiple roots of a nonlinear equation g(x) = 0 is one of the most difficult tasks, since multiple roots play an important role in the areas of computer science, applied mathematics, physics, applied chemistry, and engineering. For example, the ideal gas law [1] describes the relationship between molecular size, attraction forces, and the behavior of a real gas. The solution of such an equation by an analytical approach is either complicated or almost non-existent. Therefore, we have to focus on iterative methods. One of the most famous iterative techniques is the modified Newton's method (MNM) [2,3], which is defined as
      x_{s+1} = x_s - m g(x_s)/g'(x_s),  s = 0, 1, 2, ...
Its order of convergence is quadratic, provided that the multiplicity m of the required root is known in advance.
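To fix ideas, the following is a minimal Python sketch of MNM; the test function, tolerance, and stopping rule are illustrative choices and are not taken from this paper.

```python
import math

def modified_newton(g, dg, x0, m, tol=1e-12, max_iter=50):
    """Modified Newton's method (MNM): x_{s+1} = x_s - m * g(x_s) / g'(x_s)."""
    x = x0
    for _ in range(max_iter):
        d = dg(x)
        if d == 0.0:              # derivative vanished (e.g., exactly at the root)
            break
        step = m * g(x) / d
        x -= step
        if abs(step) < tol:       # two consecutive iterates are close enough
            break
    return x

# Illustration: g(x) = (x - 1)^2 * exp(x) has a zero x = 1 of multiplicity m = 2.
g = lambda x: (x - 1) ** 2 * math.exp(x)
dg = lambda x: (x - 1) * (x + 1) * math.exp(x)
print(modified_newton(g, dg, x0=2.0, m=2))  # converges quadratically to 1.0
```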
The main problem with this method is the use of the first-order derivative at each substep. There are several occasions in real-life problems where finding the derivative is either quite complicated or time consuming, or the derivative does not exist at all. In those cases, it is always fruitful to use a derivative-free method. Thus, Traub and Steffensen [4] suggested a derivative-free scheme, which is defined by
      x_{s+1} = x_s - g(x_s)/g[w_s, x_s],  s = 0, 1, 2, ...,
      where w_s = x_s + β g(x_s), β being a nonzero real constant, and g[w_s, x_s] = (g(w_s) - g(x_s))/(w_s - x_s) denotes the first-order divided difference.
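As a rough illustration of the divided-difference idea behind such derivative-free schemes, here is a small Python sketch of a Traub–Steffensen-type iteration for a simple root; the parameter β, the test function, and the stopping rule are assumptions made for demonstration, and the multiple-root variants cited below build on the same divided difference.

```python
def traub_steffensen(g, x0, beta=0.01, tol=1e-12, max_iter=100):
    """Derivative-free iteration: g'(x) is replaced by the divided difference
    g[w, x] = (g(w) - g(x)) / (w - x), with w = x + beta * g(x)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        w = x + beta * gx
        denom = g(w) - gx
        if denom == 0.0:              # divided difference undefined or iteration converged
            break
        step = gx * (w - x) / denom   # g(x) / g[w, x]
        x -= step
        if abs(step) < tol:
            break
    return x

# Simple-root illustration: g(x) = x^3 - 2 has the root 2^(1/3).
print(traub_steffensen(lambda x: x ** 3 - 2, x0=1.5))
```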
Later on, Kumar et al. [5] and Kansal et al. [6] suggested the following second-order one-point derivative-free schemes:
      and
      
      respectively, where 
.
Since all of the above three iterative schemes are one-point, they have several issues regarding their convergence order and efficiency (more details can be found in [2,3]). Therefore, researchers turned towards multi-point derivative-free methods for known and unknown multiplicity [7,8]. Some of the important schemes are given below.
Hueso et al. [9] developed a fourth-order derivative-free method, which is given by
      
      where the definitions of the involved terms and the values of the other constants can be found in [9].
Baccouch [10] proposed many higher-order multi-point methods. One of his fourth-order derivative-free methods is given by
      
      where
      
We refer to the scheme (6) again in the numerical section.
In 2019, Sharma et al. [11] proposed the following fourth-order derivative-free scheme:
      where the details of the involved terms, the weight function, and its conditions can be found in [11].
In 2020, Sharma et al. [12] suggested a new derivative-free scheme, which is given below:
      where the involved terms are defined in [12].
In 2020, Kumar et al. [13] presented a new fourth-order derivative-free scheme, which is defined by
      
      where the values of the involved parameters are given in [13].
In 2020, Behl et al. [14] presented the following derivative-free family of fourth-order iterative methods:
      where the involved terms are defined in [14].
In 2021, Behl et al. [15] suggested a new fourth-order derivative-free variant of the Chebyshev–Halley family, which is defined as follows:
      where the involved terms, as well as the values and hypotheses of the weight function, can be found in [15].
Very recently, in 2022, Behl [16] proposed another fourth-order derivative-free scheme, which is given by
      
      where two multi-valued functions are involved; the hypotheses and conditions on the weight function M are described in [16]. Some other higher-order derivative-free techniques can be found in [10,17]. From the above discussion, it is clear that derivative-free multi-point methods for multiple roots are in demand.
Thus, motivated in the same direction, we suggest a new and more general scheme that can produce better and faster numerical results. Our scheme has the following properties: optimal order of convergence, a derivative-free construction, flexibility at both substeps, cost effectiveness, and greater stability. It is based on a weight-function approach. The best part of our scheme is that it is not only optimal and derivative-free but also flexible at both substeps. With a suitable choice of weight functions at the first and second substeps, we can construct many new and existing techniques; for example, a particular choice in the Expression (12) yields an existing method as a special case of our scheme. We illustrate the applicability of our methods on six numerical problems. On the basis of the obtained results, we found that our methods demonstrate better results than earlier studies in terms of CPU time, computational order of convergence, absolute errors, and differences between two consecutive iterations.
  2. Suggested Higher-Order Scheme and Its Analysis
Here, we suggest a new fourth-order iterative technique for multiple zeros, which is given by
      
      where the first and second substeps involve three weight functions, each analytic in a neighborhood of the origin (0), together with two multi-valued maps. For the multi-valued maps, we adopt the principal root (see [18]); this choice of branch agrees with the one mentioned in the numerical section, and the second multi-valued expression is treated in an analogous way.
It is clear that, by choosing particular weight functions in the Expressions (12) and (13), respectively, Behl's scheme [16] turns out to be a special case of our scheme. In Theorems 1–3, we demonstrate the convergence analysis of (13) without adopting any extra evaluations of g or its derivatives at other points.
Theorem 1. Assume that the map g is analytic in a region surrounding the required zero, and let α (say) be a multiple solution of multiplicity m = 2. Then, the newly constructed scheme (13) has fourth-order convergence under the following conditions, and it satisfies the following error equation, where the unsubscripted symbols denote the values of the weight functions at the origin (0) and the subscripts 1, 2, and 3 denote their first-, second-, and third-order derivatives, respectively, evaluated at the origin (0); the remaining weight functions are treated in a similar fashion.
Proof. We assume that e_s = x_s - α is the error at the sth iteration and that c_j denotes the corresponding asymptotic error constants. We choose the Taylor series expansions of g at two different points in the neighborhood of α, with the hypotheses g(α) = g'(α) = 0 and g''(α) ≠ 0. Then, we obtain
        
        and
        
By using Equations (15) and (16), we have
        
It is clear from the Expression (17) that the involved argument is of the order of e_s. Thus, we can easily expand the corresponding weight function in the neighborhood of the origin (0) in the following way:
        
        where 
.
The Expressions (17) and (18) provide the following error expression:
        
From (19), we observe that the scheme will attain at least second-order convergence when
        
By using Expression (20) in (19), we obtain
        
By adopting Taylor’s series expansions, we have
        
From Expressions (17), (18) and (22), we further obtain
        
        and
        
From the Expressions (23) and (24), we see that the arguments of the remaining weight functions approach the origin as e_s → 0. Thus, we expand these weight functions in the neighborhood of the origin (0), which are defined as:
        
        and
        
        where 
, 
 and 
.
By using Expressions (15)–(26) in scheme (13), we obtain
        
        where 
.
From (27), we observe that the scheme will attain at least second-order convergence when
        
        where 
.
The coefficients of the second- and third-order error terms should be simultaneously zero for fourth-order convergence. We can attain this if
        
        where 
.
We have the following error equation by adopting (28) in (27):
        
        where 
. We deduce from Expression (29) that our scheme (13) attains fourth-order convergence for m = 2 while using the same number of functional evaluations. Hence, Expression (13) is an optimal scheme.    □
Theorem 2. Under the same conditions as in Theorem 1, the suggested iterative technique (13) has fourth-order convergence when m = 3. It satisfies the following error equation:
Proof. We assume that e_s = x_s - α is the error at the sth iteration and that c_j denotes the corresponding asymptotic error constants. We choose the Taylor series expansions of g at two different points in the neighborhood of α, with the hypotheses g(α) = g'(α) = g''(α) = 0 and g'''(α) ≠ 0. Then, we obtain
        
        and
        
        respectively.
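For the reader's convenience, expansions of this type (used throughout Theorems 1–3) follow the standard pattern for a zero α of multiplicity m: writing e_s = x_s - α and denoting the normalized Taylor coefficients by c_j (the usual notation in such proofs, assumed here), one has

```latex
g(x_s) = \frac{g^{(m)}(\alpha)}{m!}\, e_s^{m}
         \Bigl( 1 + c_1 e_s + c_2 e_s^{2} + c_3 e_s^{3} + O(e_s^{4}) \Bigr),
\qquad
c_j = \frac{m!}{(m+j)!}\, \frac{g^{(m+j)}(\alpha)}{g^{(m)}(\alpha)},
```

and the expansion at the second point is obtained in the same way, with e_s replaced by the corresponding error.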
By using the Expressions (30) and (31), we have
        
It is clear from the Expression (32) that the involved argument is of the order of e_s. Thus, we can expand the corresponding weight function in the neighborhood of the origin (0) in the following way:
        
        where 
.
With the help of Expressions (32) and (33), we further have
        
From (34), we observe that the scheme will attain at least second-order convergence when
        
By using Expression (35) in (34), we obtain
        
By adopting Taylor’s series expansions, we have
        
From Expressions (32), (33) and (37), we further obtain
        
        and
        
From Expressions (38) and (39), we see that the arguments of the remaining weight functions approach the origin as e_s → 0. Thus, we expand these weight functions in the neighborhood of the origin (0), which are defined as:
        
        and
        
By adopting Expressions (30)–(40) in scheme (13), we obtain
        
        where 
.
From (42), we observe that the scheme will attain at least second-order convergence when
        
The coefficients of the second- and third-order error terms should be simultaneously zero in order to deduce the fourth-order convergence. This can easily be obtained by the following values:
        
We have the following error equation by adopting (43) in (42):
        
        where 
. We deduce from Expression (44) that our scheme (13) attains fourth-order convergence for m = 3 while using the same number of functional evaluations. Hence, (13) is an optimal scheme.    □
   2.1. General Error Equation of Technique (13)
Theorem 3. Under the same conditions as in Theorem 1, the suggested scheme (13) has fourth-order convergence when m ≥ 4. It satisfies the following error equation:
Proof. We assume that e_s = x_s - α is the error at the sth iteration and that c_j denotes the corresponding asymptotic error constants. We choose the Taylor series expansions of g at two different points in the neighborhood of α, with the hypotheses g(α) = g'(α) = ⋯ = g^{(m-1)}(α) = 0 and g^{(m)}(α) ≠ 0. Then, we obtain
          
          and
          
          respectively, where
          
By using the Expressions (45) and (46), we have
          
It is clear from the Expression (47) that the involved argument is of the order of e_s. Thus, we can expand the corresponding weight function in the neighborhood of the origin (0) in the following way:
          
          where 
.
With the help of Expressions (47) and (48), we further have
          
From (49), we observe that the scheme will attain at least second-order convergence when
          
By using Expression (50) in (49), we obtain
          
By adopting Taylor’s series expansions, we obtain
          
By using (47), (48) and (52), we further obtain
          
          and
          
From the Expressions (53) and (54), we see that the arguments of the remaining weight functions approach the origin as e_s → 0. Thus, we expand these weight functions in the neighborhood of the origin (0), which are defined as:
          
          and
          
By adopting Expressions (45)–(55) in scheme (13), we obtain
          
          where 
.
From (57), we observe that the scheme will attain at least second-order convergence when
          
The coefficients of the second- and third-order error terms should be simultaneously zero for fourth-order convergence. We can attain this by choosing the following values:
          
We have the final asymptotic error equation by adopting (58) in (57), which is given by
          
          where 
. We deduce from Expression (59) that our scheme (13) attains fourth-order convergence for m ≥ 4 while using the same number of functional evaluations. Hence, (13) is an optimal scheme.    □
Remark 1. It seems from (59) (for m ≥ 4) that θ is not involved in this expression. However, it actually appears in the coefficient of the next higher-order error term. Here, we do not need to calculate that coefficient because the optimal fourth-order convergence has already been established; moreover, its calculation is quite rigorous and consumes a huge amount of time. Nonetheless, the role of θ can be seen in (29) and (44).
Remark 2. We can easily obtain Behl's scheme [16] as a special case of our scheme by choosing the corresponding weight functions in the Expressions (12) and (13), respectively.
2.2. Some Special Cases of the Proposed Scheme
Here, we choose the following weight functions, which satisfy the conditions of Theorems 1–3:
        where the involved parameters are free. For the numerical work, we choose particular values of these parameters in the above weight functions.
  3. Numerical Experiments
In this section, the proposed schemes are verified on some academic and application-oriented problems. Here, the attained outcomes are compared with the already developed methods by Zafar et al. [19], Sharma et al. [12], Behl [16], and Kansal et al. [6], respectively. All of the above-mentioned existing schemes are listed below:
First Zafar et al. scheme [19]:
      where
      
Second Zafar et al. scheme [19]:
      where
      
Sharma et al. scheme [12]:
      where
      
Behl scheme [16]:
      where
      
Kansal et al. scheme [6]:
In addition to the above methods, we also compare our methods with another fourth-order derivative-free scheme, namely (6), proposed by Baccouch [10].
In all of the experimental work, we consider a fixed value of the involved free parameter. The outcomes of the experiments have been obtained with the software Mathematica 10, using 10,000 multiple-precision digits of mantissa, on an Intel(R) Core(TM) i5-1035G1 CPU @ 1.00 GHz (1.19 GHz) with 8 GB of RAM under a 64-bit operating system. The stopping criterion is based on the difference between two consecutive iterations. The following tables show that our methods yield better results than the earlier studies in terms of the errors between two consecutive iterations, CPU time, and the ACOC (approximate computational order of convergence). The following approach is adopted to calculate the ACOC:
      ρ ≈ ln(|x_{s+1} - x_s| / |x_s - x_{s-1}|) / ln(|x_s - x_{s-1}| / |x_{s-1} - x_{s-2}|).
Furthermore, the iterative process stops after three iterations, and each numerical example is tested against different initial values. It is important to note that, in the following tables, the shorthand a(-b) stands for a × 10^(-b).
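A small Python helper that evaluates the ACOC estimate given above from the last four iterates; the function name and the list-based interface are illustrative and are not part of the original computational setup.

```python
import math

def acoc(iterates):
    """Approximate computational order of convergence from the last four
    iterates x_{s-2}, x_{s-1}, x_s, x_{s+1}."""
    if len(iterates) < 4:
        raise ValueError("at least four iterates are required")
    x0, x1, x2, x3 = iterates[-4:]
    num = math.log(abs(x3 - x2) / abs(x2 - x1))
    den = math.log(abs(x2 - x1) / abs(x1 - x0))
    return num / den
```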
Example 1. Firstly, we tested the methods on the Van der Waals ideal gas equation [15], which describes the behavior of a particular gas for some particular values of the constants a and b. The remaining values and the temperature T are calculated with the help of a and b. Hence, the Equation (2) formulates the nonlinear equation for the volume of the gas (V) in terms of the variable x. One of the required zeros of multiplicity m = 2 of this function is x = 1.75. Table 1 presents the results of the different iterative methods for the starting point x_0 = 1.9. It is easily observed from the table that the proposed methods have smaller absolute functional errors than the other methods. In addition, the order of convergence is not achieved by one of the compared methods even after seven iterations. Furthermore, one of our methods consumes the lowest CPU time compared to the other mentioned methods.
Example 2. Next, we consider the blood rheology model [20], which investigates the physical and flow characteristics of blood. In reality, blood is a non-Newtonian fluid and is modeled as a Casson fluid. According to the Casson fluid model, basic fluids flow in tubes in such a way that the region near the wall experiences a velocity gradient while the fluid's central core moves as a plug with minimal deformation. The following function is taken into consideration as a nonlinear equation to examine the plug flow of Casson fluids. Here, we consider a particular value of the involved parameter to compute the flow rate reduction, which reduces the problem to a nonlinear equation. To make this function have multiple roots, we take the function g(x) accordingly. By applying the proposed schemes, we obtained the required zero x = 0.08643356… of multiplicity m = 4 of this function. Table 2 presents the results of the different iterative methods for the starting point x_0 = 0.22. It is easily observed from the table that the proposed methods have smaller absolute functional errors than the other methods.
Example 3. Since eigenvalues play a significant role in linear algebra, they have many applications in real-life problems such as image processing and product quality. Sometimes, it is a tough task to evaluate the eigenvalues of a large matrix. Thus, we consider the following ninth-order matrix B. The characteristic equation of the matrix B forms the following polynomial equation, which has a zero x = 3 of multiplicity m = 4. Table 3 and Table 4 report the results; the proposed schemes perform much better than the available techniques in terms of absolute functional errors, order of convergence, and CPU time. We choose two starting points, x_0 = 2.8 and x_0 = 3.1, for a better comparison; the initial guess x_0 = 2.8 lies on the left-hand side of the required root, and the other lies on the right-hand side. Furthermore, although one of the compared methods consumes the lowest CPU time, its convergence toward the required zero is very slow, and it does not attain the required convergence order.
Example 4. Now, we examine the suggested methods on the following academic problem, which has a root of multiplicity 4. The results for the two initial values are shown in Table 5 and Table 6, respectively. It is clear from the tables that our methods show much better results, not only in terms of absolute residual errors but also in CPU time.
Example 5. Next, the following academic problem has been considered, which has a zero x = 2.23607… of multiplicity 4. The suggested methods are tested with the starting value x_0 = 1.4, and the attained results are presented in Table 7. We found from the numerical results that our methods give better numerical results than the compared methods. One of our methods not only consumes the lowest CPU time but also performs much better than the existing ones.
Example 6. Lastly, the following academic problem with large multiplicity has been considered, which has a zero x = 0 of multiplicity 10. All of the proposed and earlier methods are examined with the initial value x_0 = 1. The achieved outcomes are shown in Table 8, which clearly demonstrates the superior results of our methods in comparison with the other methods. Moreover, some of the compared fourth-order methods do not work for this example of higher multiplicity.