A Derivative-Free Fourth-Order Optimal Scheme for Applied Science Problems

Abstract: We suggest a new and cost-effective iterative scheme for nonlinear equations. The main features of the presented scheme are that it does not involve any derivatives in its structure, it achieves optimal fourth-order convergence, it offers more flexibility for obtaining new members, and it is a two-point, cost-effective and more stable scheme that yields better numerical results. The derivation of our scheme is based on the weight function technique. The convergence order is studied in three main theorems. We have demonstrated the applicability of our methods on four numerical problems. Two of them are real-life cases, while the third is a root clustering problem and the fourth is an academic problem. The obtained numerical results illustrate preferable outcomes as compared to the existing methods in terms of absolute residual errors, CPU timing, approximated zeros and the absolute error difference between two consecutive iterations.


Introduction
Most applied science problems are nonlinear in nature because nature itself is nonlinear rather than simple or linear. The solutions of nonlinear problems are more complicated than those of linear and simple problems. Therefore, we consider a nonlinear problem of the following form:

f(x) = 0,

where f : D ⊂ C → C is an analytic function. Such equations originate from applied and computer science, engineering, statistics, economics, chemistry, biology, physics, etc. (see details in [1][2][3]). Iterative methods are also applied to compute approximate solutions of stationary and evolutionary problems associated with differential and partial differential equations (more details in [4,5]). Exact solutions of such problems are almost non-existent. Thus, we have to rely on approximate solutions that can be obtained with the help of iterative methods. One of the most famous schemes is Newton's method, which is given by

x_{σ+1} = x_σ − f(x_σ)/f'(x_σ), σ = 0, 1, 2, . . . .

Undoubtedly, this scheme has second-order convergence and is a widely used method for nonlinear equations. However, there are several problems with this scheme. Some of the major ones are: it is a one-point method (for the associated convergence and efficiency problems, details are given in [1][2][3]), it has only a linear order of convergence for multiple zeros, and it requires the calculation of the first-order derivative at each substep. Finding the derivative is quite a rigorous task because, sometimes, the derivative of a function consumes a large amount of time in achieving the final result or does not exist.
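For concreteness, a minimal Python sketch of the classical Newton iteration above is given below; the test function, its derivative, the tolerance and the starting guess are illustrative choices, not values taken from this paper.

```python
# Minimal sketch of Newton's method: x_{sigma+1} = x_sigma - f(x_sigma)/f'(x_sigma).
# The test function, its derivative, the tolerance and x0 are illustrative assumptions.

def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)          # one Newton step
    return x

# Example: simple zero of f(x) = x**3 - 2 (the cube root of 2); converges quadratically.
root = newton(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
print(root)
```

For a zero of multiplicity greater than one, this same iteration degrades to linear convergence, which is one of the drawbacks listed above.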
Therefore, higher-order optimal derivative-free methods came into demand. Then, some scholars suggested a few such methods that have fourth-order optimal convergence. Some of the most important members are given below.
In 2015, Hueso et al. [6] suggested the method (3), in which the derivative is approximated by a first-order finite difference; we denote it by (HM) (with q = 1).
In 2019, Sharma et al. [7] proposed the method (4), which we denote by (SM1). The scheme (4) is one of the best methods among those proposed by Sharma et al. [7]. In the same year, Sharma et al. [8] gave the method (5), which is one of the best schemes among the methods presented by Sharma et al. [8]. We call it (SM2).
In 2020, Kumar et al. [9] presented the method (6), which we denote by (KM); it is one of the best schemes among those given by Kumar et al. [9]. In 2020, Behl et al. [10] suggested the method (7), which involves a 1/m-th power of a function ratio and is called (BM). Some other higher-order derivative-free techniques can be found in [11][12][13][14][15]. We aspire to suggest a new two-step, more general and cost-effective family of iterative methods. The new scheme is derivative-free and has optimal fourth-order convergence. The derivation of this two-step scheme is based on the weight function technique. Further, we present three main theorems, Theorems 1-3, which demonstrate the fourth-order convergence for m ≥ 2 when the multiplicity m is known in advance. The applicability of our methods is illustrated on four numerical problems. Two of them are real-life problems, the third is a root clustering problem (which originates from applied mathematics) and the last one is an academic problem. The numerical outcomes demonstrate preferable results in terms of absolute residual errors, CPU timing, approximated zeros and the absolute error difference between two consecutive iterations, in contrast to previous studies.
The rest of the paper is summarized as follows. Section 2 includes the construction as well as the convergence analysis of our scheme. The convergence analysis is studied thoroughly in three Theorems 1-3. Section 3 is devoted to the numerical experiments, where we illustrate the efficiency and convergence of our scheme. In addition, we also propose three weight functions that satisfy the hypotheses of Theorems 1-3. Further, four numerical problems are chosen to confirm the theoretical results. Finally, the concluding remarks are presented in Section 4.

Construction of Higher-Order Scheme
We suggest a new form of iterative scheme that has fourth-order optimal convergence for multiple zeros. It is the two-point scheme (8), in which µ_σ = x_σ + α f(x_σ), α, b ∈ R, and m ≥ 2 is the known multiplicity of the needed zero. Further, the maps H : C → C and M : C → C are weight functions, analytic in a neighborhood of the origin (0). Moreover, we consider two multi-valued maps, each involving the 1/m-th power of a ratio of values of f. We adopt the principal root (see [16]), given by θ = exp[(1/m) log(z)], with log(z) = log|z| + i arg(z). The choice of arg(z) for z ∈ C agrees with that of log(z), which is depicted in the numerical section. In an analogous way, we obtain the second multi-valued map. Here, e_σ = x_σ − ξ denotes the error at the σth iteration. In Theorems 1-3, we demonstrate the convergence analysis of our scheme (8) without adopting any extra value of f at some other points.
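Since the arguments of the weight functions are multi-valued 1/m-th powers, their evaluation relies on the principal branch of the complex logarithm described above. The following is a minimal sketch of this principal m-th root in Python; it uses the standard cmath module (whose log returns the principal branch with arg(z) in (−π, π]), and the sample values of z and m are illustrative only.

```python
import cmath

def principal_root(z: complex, m: int) -> complex:
    """Principal m-th root: exp((1/m) * log(z)), where log is the principal
    branch of the complex logarithm (cmath.log uses arg(z) in (-pi, pi])."""
    return cmath.exp(cmath.log(z) / m)

# Illustrative values only; the ratios actually fed to the weight functions
# in scheme (8) are built from values of f, as described above.
print(principal_root(complex(-8.0, 0.0), 3))   # approx 1 + 1.732j, not -2
```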
Theorem 1. We assume that x = ξ is a multiple zero of order two (m = 2) of the function f. Consider the map f : D ⊂ C → C, which is analytic in D in a neighborhood of the needed zero ξ. Then, our scheme (8) attains the fourth order of convergence if the weight functions H and M satisfy the required conditions, where |H(0)| < ∞ and |M(0)| < ∞, and scheme (8) satisfies the error equation given in (23), where λ_2 = f''(ξ).
Proof. We assume that e_σ = x_σ − ξ and a_l = (2!/(2 + l)!) f^(2+l)(ξ)/f''(ξ), l ∈ N. Proceeding as in the proofs of Theorems 2 and 3 below, with m = 2, we arrive at the error equation (23), from which the fourth order of convergence follows.

Theorem 2.
Suppose that x = ξ is a multiple solution of order three (m = 3) of the function f. Consider the map f : D ⊂ C → C, which is analytic in D surrounding the needed zero ξ. Then, our scheme (8) attains fourth-order convergence if the weight functions H and M satisfy the required conditions, where |H(0)| < ∞ and |M(0)| < ∞, and scheme (8) satisfies the error equation given in (38) below, in which e_σ = x_σ − ξ denotes the error in the σth iteration and the remaining coefficients are the corresponding asymptotic constants.

Proof. We choose the Taylor's series expansions for f at the two different points x = x_σ and x = µ_σ, respectively, with λ_3 = f'''(ξ), obtaining expressions (25) and (26). By using expressions (25) and (26) in scheme (8), we obtain (27). Next, from expression (27), we have ζ = O(e_σ). Thus, we expand the weight function H(ζ) in the neighborhood of the origin (0) as in (28). By using expressions (27) and (28) in scheme (8), we obtain (29). From (29), we observe that the scheme will attain at least second-order convergence when condition (30) holds. Substituting expression (30) into (29), we have (31). Again, with the help of Taylor's series expansions, we obtain (32). From expressions (25), (26) and (32), we further obtain (33). From expression (33), we have θ = O(e_σ). Thus, we expand M(θ) in the neighborhood of the origin (0). By using expressions (25)-(35) in scheme (8), we obtain (36). The coefficients of e_σ, e_σ^2 and e_σ^3 should simultaneously be zero in order to deduce the fourth-order convergence. This can easily be attained by the values given in (37). We obtain the error equation (38) by adopting (37) in (36). We deduce from expression (38) that our scheme (8) attains the fourth order of convergence for α ∈ R and m = 3. In addition, we have attained this convergence order without adopting any extra value of f at some other points. Hence, (8) is an optimal scheme.
General Error Form of the Proposed Scheme

Theorem 3. Under the same suppositions as in Theorem 1, our scheme (8) attains the fourth order of convergence for m ≥ 4, and scheme (8) satisfies the error equation given in (52) below.

Proof. We consider that e_σ = x_σ − ξ and C_k = (m!/(m + k)!) f^(m+k)(ξ)/f^(m)(ξ)
, 1 ≤ k ≤ 4, (k ∈ N), denote the error in the σth iteration and the asymptotic constants, respectively. We choose the Taylor's series expansions for f at the two different points x = x_σ and x = µ_σ in the neighborhood of ξ, obtaining expressions (39) and (40). Inserting expressions (39) and (40) into scheme (8), we obtain (41). Next, from expression (41), we have ζ = O(e_σ). Thus, we expand the weight function H(ζ) in the neighborhood of the origin (0) as in (42). By using expressions (41) and (42) in scheme (8), we obtain (43). From (43), we observe that scheme (8) will attain at least second-order convergence if condition (44) holds. Substituting expression (44) into (43), we have (45). Again, with the help of Taylor's series expansions, we have (46). By adopting (41), (42) and (46), we further obtain (47). From expression (47), we have θ = O(e_σ). Thus, we expand M(θ) in the neighborhood of the origin (0). By using expressions (39)-(49) in scheme (8), we obtain (50). The coefficients of e_σ, e_σ^2 and e_σ^3 should simultaneously be zero in order to deduce the fourth-order convergence. This can easily be attained by the values given in (51). By adopting (51) in (50), we obtain the error equation (52). We deduce from expression (52) that our scheme (8) attains the fourth order of convergence for α ∈ R and m ≥ 4. In addition, we have attained this convergence order without adopting any extra value of f at some other points. Hence, (8) is an optimal scheme.

Remark 1. It seems from (52) (for m ≥ 4) that α and b are not involved in this expression. However, they actually appear in the coefficient of e_σ^5. Here, we do not need to calculate the coefficient of e_σ^5, because the optimal fourth order of convergence is already obtained. Further, the calculation of the e_σ^5 term is quite rigorous and consumes a large amount of time. Nonetheless, the role of α and b can be observed in (23) and (38).
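To make the Taylor-series bookkeeping used in these proofs concrete, the following SymPy sketch expands f about ξ in terms of the asymptotic constants C_k and recovers the well-known error equation e_{σ+1} = (C_1/m) e_σ^2 + O(e_σ^3) of the classical modified Newton step x − m f(x)/f'(x). This step is used here only as an illustration of the technique; it is not scheme (8), whose weight functions remain symbolic in the theorems, and the multiplicity m = 4 is an arbitrary illustrative choice.

```python
import sympy as sp

e, C1, C2, C3 = sp.symbols('e C1 C2 C3')
m = 4  # illustrative multiplicity

# Taylor expansion of f about the zero xi, written with the asymptotic constants
# C_k = (m!/(m+k)!) * f^(m+k)(xi)/f^(m)(xi); the common factor f^(m)(xi)/m!
# cancels in the correction term m*f/f', so it is omitted here.
f  = e**m * (1 + C1*e + C2*e**2 + C3*e**3)
fp = sp.diff(f, e)

# Error after one modified Newton step x_new = x - m*f(x)/f'(x):
e_new = sp.series(e - m*f/fp, e, 0, 4).removeO()
print(sp.expand(e_new))   # leading term C1*e**2/m, i.e. second-order convergence
```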

Numerical Experimentation
We demonstrate the efficiency and convergence of some members of our scheme (8). Therefore, we choose three weight functions, each of which satisfies the conditions provided in Theorems 1-3. Using these weight functions in our scheme (8), we obtain the methods denoted by (PM1) (with b = 2), (PM2) (with b = 1/10) and (PM3) (with b = 1/10), respectively. We consider two applied science problems, one root clustering problem and an academic problem for the numerical tests. There are no fixed criteria for the comparison of two different iterative methods. However, we consider the following six different aspects for the comparison:
1. The approximated zero x_σ;
2. The absolute residual error |f(x_σ)|;
3. The difference between two consecutive iterations |x_{σ+1} − x_σ|;
4. The number of iterations required for attaining accuracy up to 10^(−100);
5. The CPU timing;
6. The computational order of convergence (COC) based on the accuracy.
The values of the above-mentioned parameters are depicted in Tables 1-8, along with the initial guesses. The values of x_σ, |f(x_σ)|, COC and |x_{σ+1} − x_σ| were calculated in Mathematica 9 with a minimum of 3000 significant digits, which minimizes the round-off error. However, we depict these values up to 15 (with exponent), 2 (with exponent), 6 and 2 (with exponent) significant digits, respectively. We adopted the following rules

COC = ln(|x_{σ+1} − ξ|/|x_σ − ξ|) / ln(|x_σ − ξ|/|x_{σ−1} − ξ|)

and

ACOC = ln(|x_{σ+1} − x_σ|/|x_σ − x_{σ−1}|) / ln(|x_σ − x_{σ−1}|/|x_{σ−1} − x_{σ−2}|)

in order to calculate the computational order of convergence (COC) and the approximate computational order of convergence (ACOC) [17], respectively. Further, the CPU timing is obtained by the command "AbsoluteTiming[]" in Mathematica 9. We executed the same program five times, and the average time is depicted in Table 7. The notation b_1(±b_2) stands for b_1 × 10^(±b_2) in Tables 1-6. The configuration of the adopted laptop is as follows: Processor: Intel(R) Core(TM)2 Duo CPU T6400 @ 2.00 GHz; Manufacturer: HP; Installed memory (RAM): 4.00 GB; Windows edition: Windows 7 Professional; System type: 64-bit Operating System.
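The following is a minimal Python sketch of how the COC and ACOC rules above can be evaluated from an iteration history. It works in ordinary double precision with a made-up error sequence, purely for illustration; the values reported in the tables were computed in Mathematica 9 with at least 3000 significant digits, as stated above.

```python
import math

def coc(errors):
    """COC from the last three absolute errors |x_sigma - xi| of a run."""
    e0, e1, e2 = errors[-3], errors[-2], errors[-1]
    return math.log(e2 / e1) / math.log(e1 / e0)

def acoc(xs):
    """ACOC from the last four iterates, using only |x_{sigma+1} - x_sigma|."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 4, len(xs) - 1)]
    return math.log(d[2] / d[1]) / math.log(d[1] / d[0])

# Illustrative fourth-order error sequence, e_{sigma+1} ~ e_sigma**4:
errs = [1e-2, 1e-8, 1e-32]
print(coc(errs))   # approximately 4
```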
In order to maintain uniformity in the comparison of the iterative methods, we choose β = 1/2 in the existing as well as our methods. We consider five existing methods for comparison, namely (3)-(7), i.e., HM, SM1, SM2, KM and BM. The details of these methods are given in the Introduction.

Remark 2. For certain specific values of the weight functions, we can obtain Behl's scheme [18] as a special case of our method.
Example 1 (Eigenvalue problem). The computation of eigenvalues and eigenvectors is one of the most basic and challenging problems of linear algebra, and many properties of an object can be determined with the help of an eigenvalue problem. It is not always practical to obtain the eigenvalues by purely analytical linear algebra techniques; thus, we have to rely on numerical techniques, which provide an approximate zero. Therefore, we choose a 9 × 9 square matrix whose characteristic polynomial f_1(x) has multiple zeros; in particular, f_1(x) has a multiple zero at x = 3 with multiplicity m = 4. The computational results, along with the starting guesses, are depicted in Tables 1 and 2.
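The 9 × 9 matrix and its characteristic polynomial are not reproduced here, so the following sketch only illustrates the general workflow on a small hypothetical matrix: forming the characteristic polynomial with NumPy and observing a repeated eigenvalue, which is precisely the situation in which f_1 possesses a multiple zero.

```python
import numpy as np

# Hypothetical 4x4 matrix with eigenvalue 3 of multiplicity 2 (illustration only;
# the paper's 9x9 matrix with a zero of multiplicity m = 4 at x = 3 is not shown here).
A = np.array([[3.0, 1.0, 0.0, 0.0],
              [0.0, 3.0, 0.0, 0.0],
              [0.0, 0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

coeffs = np.poly(A)            # coefficients of the characteristic polynomial
print(coeffs)                  # x^4 - 9x^3 + 29x^2 - 39x + 18
print(np.roots(coeffs))        # eigenvalue 3 appears twice -> multiple zero
```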
From Table 1, we can conclude that methods PM2 and PM3 display the most outstanding behavior among the mentioned methods in terms of the accuracy of the iterate x_σ, the difference between two consecutive iterations and the absolute residual errors. Further, the residual errors of the other methods are almost two times larger (in terms of the exponent) than those of PM2 and PM3. We can observe from Table 2 that, from the second iteration onward, our suggested methods PM2 and PM3 are closer to the desired root than the mentioned ones. In addition, the other existing methods have almost three times larger residual errors (in terms of the exponent), which demonstrates the better performance of our methods PM2 and PM3.

Example 2 (Continuous stirred tank reactor (CSTR)). Here, we consider another problem of applied science, namely an isothermal continuous stirred tank reactor (CSTR). The components M_1 and M_2 are fed to the reactor at rates of B_1 and B_2 − B_1, respectively. In this way, we have the reaction scheme (55) (for details, see [19]). Douglas [20] proposed this model while designing a simple model for feedback control systems, and transformed expression (55) into mathematical form; in the resulting equation, R_C1 denotes the gain of the proportional controller. For the particular value R_C1 = 0, we obtain

f_2(x) = x^4 + 11.50x^3 + 47.49x^2 + 83.06325x + 51.23266875.
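Because f_2 is given explicitly, its zeros can be checked directly. The short sketch below (our own check, not part of the original computations) indicates that f_2 factors as (x + 1.45)(x + 2.85)^2(x + 4.35), so the zero approached from x_0 = −2.8 and x_0 = −2.9 is the double root x = −2.85.

```python
import numpy as np

# CSTR polynomial f_2 from the text (R_C1 = 0):
coeffs = [1.0, 11.50, 47.49, 83.06325, 51.23266875]

roots = np.roots(coeffs)
print(np.sort(roots))          # approx -4.35, -2.85 (twice), -1.45

# Cross-check the double root directly: f_2(-2.85) and f_2'(-2.85)
# are both ~0 up to rounding, confirming an (at least) double root.
p = np.poly1d(coeffs)
print(p(-2.85), p.deriv()(-2.85))
```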
The desired zero of f_2 is x = −2.85 with multiplicity m = 2, and the computational results are depicted in Tables 3 and 4. From Table 3, we find that the lowest residual error among the existing methods (HM, SM1, SM2, KM, BM) is 7.3(−47), whereas our methods PM1, PM2 and PM3 attain 8.0(−82), 1.8(−79) and 2.4(−78), respectively. Thus, we can say that the existing methods have almost two times larger residual errors (in terms of the exponent) than our methods. This also indicates the faster convergence of our methods PM1, PM2 and PM3 as compared to the others. Our techniques PM1, PM2 and PM3 also perform much better in terms of x_σ and |x_{σ+1} − x_σ| as compared to the other existing ones.

Table 3. Behavior of iterative methods on the CSTR problem f_2 with x_0 = −2.8.

We can observe from Table 4 that our method PM1 has the lowest residual error, 4.0(−174), as compared to SM1 with 3.2(−122) (which is the lowest among the other existing methods HM, SM2, KM and BM). This clearly indicates that PM1 has the fastest convergence and the smallest residual error among them. Our methods PM1, PM2 and PM3 also have an almost two times lower error difference |x_{σ+1} − x_σ| (in terms of the exponent) and a better x_σ as compared to the other existing ones.

Table 4. Behavior of iterative methods on the CSTR problem f_2 with x_0 = −2.9.

Example 3 (Root clustering problem). We chose a root clustering problem similar to that of Zeng [21]. The zeros of f_3 are x = 1, 2, 3 and x = 4, of multiplicities m = 30, 150, 191 and m = 95, respectively. All of the zeros are quite close to each other; therefore, this is known as a root clustering problem. We chose x = 3, the multiple zero of multiplicity 191, for the numerical experiment. The computational results are depicted in Table 5, along with the initial approximation.
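Evaluating a polynomial with multiplicities as high as 191 in double precision underflows near the sought zero, which is one reason the computations above use at least 3000 significant digits. Assuming the product form implied by the stated zeros and multiplicities, f_3(x) = (x − 1)^30 (x − 2)^150 (x − 3)^191 (x − 4)^95 (an assumption, since the paper's expression for f_3 is not reproduced here), the sketch below sidesteps the underflow by working with the logarithmic derivative f_3'/f_3 and applies the classical modified Newton step with m = 191. It merely illustrates the difficulty of the example; it is not scheme (8), and the starting guess is illustrative.

```python
# Assumed form of f_3 (not reproduced in the text): product of (x - r)^m factors.
ROOTS_MULTS = [(1.0, 30), (2.0, 150), (3.0, 191), (4.0, 95)]

def log_derivative(x):
    """f_3'(x)/f_3(x) = sum_i m_i / (x - r_i); avoids under/overflow of f_3 itself."""
    return sum(m / (x - r) for r, m in ROOTS_MULTS)

def modified_newton(x0, m=191, steps=20, tol=1e-9):
    """Classical modified Newton x_new = x - m / (f'(x)/f(x)) for a zero of multiplicity m."""
    x = x0
    for _ in range(steps):
        step = m / log_derivative(x)
        x -= step
        if abs(step) < tol:      # stop once the correction is negligible
            break
    return x

print(modified_newton(2.9))      # converges towards the clustered zero x = 3
```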
Undoubtedly, SM2 demonstrates slightly better behavior than our methods and the other existing methods, as shown in Table 5. However, the difference is not as large as the advantage shown by our methods in the previous Tables 1-4. Our results are also significantly close to SM2 in terms of |x_{σ+1} − x_σ|; it is merely a difference of four significant digits in the case of PM1.

Example 4 (Academic problem). We chose another academic problem, f_4, whose zero is x = 0 with multiplicity m = 3. The computational results are depicted in Table 6, along with the initial approximation.
Undoubtedly, SM2 demonstrates slightly better behavior than our methods and the other existing methods, as shown in Table 6. However, the difference is not as large as the advantage shown by our methods in the previous Tables 1-4. Our results are also significantly close to SM2 in terms of |x_{σ+1} − x_σ|, with a difference of only four significant digits in the case of PM1.

Table 6. Behavior of iterative methods on f_4 with x_0 = 0.1.

From Table 7, we find that PM1 has the lowest average execution time for attaining the desired accuracy. The average execution times of methods HM and SM1, respectively, are two and three times those of PM1, PM2 and PM3. Further, PM1, PM2 and PM3 also consume less CPU time (on average) as compared to SM2, KM and BM.

Remark 4.
On the basis of the number of iterations reported in Table 8, we conclude that PM2 requires the fewest average iterations (in order to attain the desired accuracy) as compared to the existing methods. In addition, the average number of iterations of our methods PM1 and PM3, namely 4, is also lower than 4.3 (which is the lowest among the existing methods). Thus, we deduce that our method PM2 is the fastest among the mentioned methods.

From Table 9, it is straightforward to say that methods PM1, PM2 and PM3 exhibit a consistent COC (except in Example 4), in contrast to the other existing methods. The calculation of the COC is based on the number of iterations (which is depicted in Table 8, corresponding to the methods and examples). The abbreviations T.T. and A.T. stand for total timing and average timing, respectively. The abbreviations T.Iter. and A.Iter. stand for total iterations and average iterations, respectively.

Table 9. COC based on the number of iterations required in order to attain the desired accuracy.

Concluding Remarks
• We have suggested a new two-step, derivative-free and cost-effective iterative scheme for multiple zeros (m ≥ 2).
• Our scheme is based on the weight function technique. By using weight functions at both substeps, we provide more flexibility for generating more general new schemes. Several new and existing special cases are depicted in the numerical Section 3 and in Remark 2, respectively.
• Our scheme (8) consumes only three values of f at different points. Thus, the optimality of our scheme is confirmed by the Kung-Traub conjecture.
• Our methods have the lowest residual errors, a more stable COC, smaller differences between two consecutive iterations and better approximate zeros as compared to the existing ones (see Tables 1-4 and 9).
• Undoubtedly, SM2 demonstrates slightly better behavior than our methods and the other existing methods in Example 3. However, our results are considerably close to SM2 in terms of |x_{σ+1} − x_σ|, with a difference of only four significant digits in the case of PM1.
• PM1 requires the lowest execution time to obtain the numerical results. The execution times of methods HM and SM1, respectively, are two and three times those of our methods PM1, PM2 and PM3. Thus, we deduce that our schemes are cost-effective.
• The average number of iterations of our methods PM2 and PM3 is lower than 4.6 (the lowest among the existing methods).
• Finally, we conclude from Tables 1-8 that scheme (8) is more stable and cost-effective and could be a better substitute for the existing methods.
• Our scheme cannot be used for the solution of nonlinear systems. In the future, we can work in two directions: either the extension to eighth-order convergence for multiple roots or the extension to nonlinear systems.