Derivative-Free Families of With- and Without-Memory Iterative Methods for Solving Nonlinear Equations and Their Engineering Applications

Abstract: In this paper, we propose a new fifth-order family of derivative-free iterative methods for solving nonlinear equations. Numerous iterative schemes found in the existing literature either diverge or fail to work when the function derivative is zero. However, the proposed family of methods works successfully even in such scenarios. We extend this idea to memory-based iterative methods by utilizing self-accelerating parameters derived from the current and previous approximations. As a result, we increase the convergence order from five to ten without requiring additional function evaluations. Analytical convergence proofs of the proposed family of derivative-free methods, both with and without memory, are provided. Furthermore, numerical experimentation on diverse problems reveals the effectiveness and good performance of the proposed methods when compared with well-known existing methods.


Introduction
Developing iterative methods to solve nonlinear equations poses an interesting and significant challenge in the fields of applied mathematics and engineering. In practice, analytical techniques often fall short in determining the roots of nonlinear problems. As a result, researchers have developed a variety of iterative methods to solve nonlinear equations.
Multi-point iterations surpass the limitations of one-point algorithms, demonstrating superior convergence rates and computational efficiency, thereby emerging as the most powerful technique for root finding. The development of iterative methods for finding the roots of nonlinear equations holds a crucial position in the field of numerical analysis, generating considerable interest and significance. The Newton-Raphson method stands as a widely renowned iterative approach that operates without memory, defined as follows [20]:

s_{n+1} = s_n − Θ(s_n)/Θ′(s_n),

where Θ is the function and Θ′ is its derivative. The Newton-Raphson method requires the evaluation of two functions in each iteration and exhibits a second-order convergence rate. It aligns with the Kung-Traub conjecture [21], which asserts that a memoryless multi-point method can achieve a maximum order of 2^{γ−1} by performing γ function evaluations per iteration. The Chebyshev-Halley [21] and Ostrowski [22] methods are two iterative techniques devised for solving nonlinear equations, of third and fourth order, respectively. Researchers commonly strive to enhance the convergence rate of iterative methods. However, as the convergence rate improves, the associated increase in the number of required function evaluations may lead to a reduction in the efficiency index of these methods. The efficiency index of an iterative method quantifies its performance, defined by [21,22]:

E = ρ^{1/γ},

where ρ symbolizes the convergence rate of the iterative method and γ denotes the number of function and derivative evaluations performed per iteration. Recent advancements in the field have witnessed remarkable contributions to the development of iterative techniques for solving nonlinear equations. Kumar et al. [23] introduced a derivative-free fifth-order method, while Choubey et al. [24] presented an eighth-order approach that removes the derivatives by employing techniques such as divided differences and weight functions. Sharma et al. [25] proposed an optimal fourth-order iterative method that incorporates derivatives, and Panday et al. [26] formulated both fourth- and eighth-order optimal iterative approaches. Singh and Singh [27] devised an optimal eighth-order method in 2021, while Solaiman and Hashim [28] introduced an optimal eighth-order approach employing the modified Halley's method. Chanu and Panday [29] contributed a non-optimal tenth-order method for solving nonlinear equations. The methods developed in [22,30,31] require evaluation of the first- and second-order derivatives, which is a cumbersome task. Moreover, when the derivative value is zero, these methods cannot be applied. The exploration of nonlinear equation solving has also led to the formulation of derivative-free methods with memory, as showcased by B. Neta [32], who utilized Traub's method and Newton's method. Furthermore, Chanu et al. [33] proposed optimal memoryless techniques of fourth and eighth orders, extending them to incorporate memory. In the pursuit of resolving nonlinear equations with multiple roots, Thangkhenpau et al. [34] introduced a novel scheme that offers both with- and without-memory variants.
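As a concrete anchor for the discussion above, the Newton-Raphson step s_{n+1} = s_n − Θ(s_n)/Θ′(s_n) can be sketched in a few lines (illustrative helper names; the stopping rule is our assumption, not part of the classical definition):

```python
def newton(theta, dtheta, s0, tol=1e-12, max_iter=50):
    """Newton-Raphson: s_{n+1} = s_n - theta(s_n)/dtheta(s_n)."""
    s = s0
    for _ in range(max_iter):
        step = theta(s) / dtheta(s)
        s -= step
        if abs(step) < tol:  # stop once successive iterates agree
            break
    return s

# Root of s^2 - 2: two evaluations (theta and dtheta) per iteration
root = newton(lambda s: s * s - 2.0, lambda s: 2.0 * s, 1.0)
```

Note that the iteration breaks down whenever `dtheta(s)` returns zero, which is exactly the failure mode the derivative-free methods discussed in this paper avoid.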
In this research article, we present a novel fifth-order derivative-free iterative method for computing simple roots of nonlinear equations. Moreover, it excels in scenarios where the derivative is zero at the initial or successive iterative approximations. The method exhibits an efficiency index of 5^{1/4} ≈ 1.4953. Building upon this foundation, we extend the technique to a tenth-order method with memory by incorporating self-accelerating parameters without requiring any additional function evaluation. The efficiency index of the tenth-order method with memory is 10^{1/4} ≈ 1.7783. The subsequent sections of this document are organized as follows. Section 2 delves into the utilization of divided difference and weight function techniques in the formulation of the methods, while also analyzing the convergence rates of both the with- and without-memory approaches. Section 3 presents a thorough examination of numerical tests, comparing the proposed method with other well-established techniques. Finally, Section 4 concludes the study, offering a summary of the findings and their implications.
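The two efficiency indices quoted above follow directly from E = ρ^{1/γ} with γ = 4 function evaluations per iteration; a one-line check:

```python
def efficiency_index(rho, gamma):
    """E = rho**(1/gamma): rho = convergence order, gamma = evaluations/iteration."""
    return rho ** (1.0 / gamma)

ei_without_memory = efficiency_index(5, 4)   # ~ 1.4953
ei_with_memory = efficiency_index(10, 4)     # ~ 1.7783
```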

Construction of New Iterative Schemes and Their Convergence Analysis
In this section, we develop novel iterative techniques of both fifth and tenth order, which are specifically designed for solving nonlinear equations without the need for derivatives. The new three-step fifth-order without-memory iterative method is outlined below, where Θ[s_n, w_n] = (Θ(s_n) − Θ(w_n))/(s_n − w_n), P : C → C is an analytic function in a neighbourhood of 0 with t_n = Θ(z_n)/Θ(y_n), and K : C × C → C is another analytic function in a neighbourhood of (0, 0). This new family (3) requires four function evaluations at each iteration.
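The weight functions P and K belong to scheme (3) itself, whose full formulas appear in the displayed equations. Purely as an illustration of the derivative-free building block, the sketch below shows a Steffensen-type first step in which Θ′(s_n) is replaced by the divided difference Θ[s_n, w_n] with w_n = s_n + βΘ(s_n); this is a simplified stand-in, not the proposed fifth-order family:

```python
def divided_difference(theta, a, b):
    """First-order divided difference Theta[a, b]."""
    return (theta(a) - theta(b)) / (a - b)

def steffensen_type_solve(theta, s0, beta=0.01, tol=1e-10, max_iter=50):
    """Derivative-free iteration: w = s + beta*Theta(s),
    s_next = s - Theta(s)/Theta[s, w] (no Theta' needed)."""
    s = s0
    for _ in range(max_iter):
        fs = theta(s)
        if abs(fs) < tol:
            break
        w = s + beta * fs                       # auxiliary point
        s = s - fs / divided_difference(theta, s, w)
    return s
```

Because only function values enter the divided difference, the step remains well defined even where the derivative vanishes, which is the motivation stated above.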

Proof of Theorem 1.
Let ξ be the simple root of Θ(s) = 0 and let e_n = s_n − ξ be the error of the nth iteration. Using the Taylor expansion, we obtain the expansion below, where e_n = s_n − ξ and c_j = Θ^{(j)}(ξ)/(j! Θ′(ξ)), j = 2, 3, 4, .... Now, using Equation (5) in the first step of the method given by (3), we obtain the corresponding expansions. Next, using Equations (5)-(7), the divided difference Θ[s_n, w_n] can be expressed as shown. By using Equations (5)-(8) in the second step of method (3), we obtain the next pair of expansions. Now, after using Equations (5) to (9), we obtain the third step z_n and Θ(z_n). We then obtain Θ(s_n) by using Equations (5), (10) and (12). Finally, by using Equations (5)-(15), we obtain the fourth step. After substituting the values P(0) = 0, P′(0) = 1 and P″(0) = 2, we derive the error equation (17). By examining Equation (17), we can infer that the method described by Equation (3) exhibits fifth-order convergence. This completes the proof.
Based on the conditions on P(t_n) and K(u_n, v_n) presented in Theorem 1, we adopt the following particular forms of the weight functions within the newly proposed method described by Equation (3).

Parametric Family of Three-Point With-Memory Methods and Its Convergence Analysis
We shall now proceed to enhance the method described by Equation (3) by incorporating the with-memory feature, introducing two additional parameters. Upon analyzing Equation (4), it becomes evident that the convergence order of method (3) can reach ten; accordingly, we can transform the error Equation (4) into the following form: To derive the method with memory, we introduce the parameters α = α_n and β = β_n, which evolve as the iteration progresses according to the formulas given below. In the method (3), we make use of the following approximation: Let us define Newton's interpolating polynomials of fourth and fifth degrees as follows: Now, we can express the iterative method with memory as follows:

Remark 1. It is important to note that the approach of iteratively calculating independent parameters, as employed in this method, is commonly known as a self-accelerating method. Prior to initiating the iterative process, it is crucial to determine the initial values of α_0 and β_0, as highlighted in [35].
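The self-accelerating parameters are built from derivatives of Newton's interpolating polynomials through the stored nodes; the paper's exact update formulas for α_n and β_n appear in the displayed equations. The sketch below only shows the generic ingredients (helper names are ours): a divided-difference table and the derivative N′(x) of the resulting interpolating polynomial:

```python
def divided_differences(xs, ys):
    """Newton divided-difference coefficients f[x0], f[x0,x1], ..."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_poly_derivative(xs, ys, x):
    """Derivative N'(x) of the Newton interpolating polynomial through (xs, ys)."""
    coef = divided_differences(xs, ys)
    # N(x) = sum_k coef[k] * prod_{i<k} (x - xs[i]); differentiate by product rule
    deriv = 0.0
    for k in range(1, len(xs)):
        for m in range(k):          # term with factor (x - xs[m]) removed
            term = coef[k]
            for i in range(k):
                if i != m:
                    term *= (x - xs[i])
            deriv += term
    return deriv
```

Since the nodes s_{n−1}, w_{n−1}, y_{n−1}, z_{n−1}, s_n are already available from previous steps, evaluating such a derivative costs no additional function evaluations, which is what raises the order without lowering the efficiency index.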
Our objective is to analyze the convergence properties of the method with memory. Specifically, we are interested in investigating the behavior of the sequence s_n as it converges to the root ξ of Θ, as well as determining the rate at which this convergence occurs. To quantify the convergence rate, we define the difference between s_n and ξ as e_n = s_n − ξ. If the sequence s_n approaches the root ξ with order p, we can express this as e_{n+1} ∼ e_n^p. In order to establish the convergence order of the method (20), we can make use of the following lemma, which has been presented in [36]. Let us consider the following theorem.
Theorem 2. If the initial estimate s_0 is in close vicinity to the unique root ξ of the real and sufficiently smooth function Θ, then method (20) possesses a convergence rate of at least 10.8151.

Proof of Theorem 2.
Let us assume that the iterative process described in (20) produces a sequence of estimations denoted by s_n. If this sequence converges to the root ξ of the function Θ with convergence order q, we can deduce the following:

e_{n+1} ∼ e_n^q, where e_n = s_n − ξ. (21)

Since e_n ∼ e_{n−1}^q, this gives

e_{n+1} ∼ (e_{n−1}^q)^q = e_{n−1}^{q^2}. (22)

Let us assume that the iterative sequences w_n, y_n and z_n have orders q_1, q_2 and q_3, respectively. Then, Equations (21) and (22) give the following:

e_{n,w} ∼ (e_{n−1}^q)^{q_1} = e_{n−1}^{q q_1},
e_{n,y} ∼ (e_{n−1}^q)^{q_2} = e_{n−1}^{q q_2},
e_{n,z} ∼ (e_{n−1}^q)^{q_3} = e_{n−1}^{q q_3}.

By Theorem 1, we can write the corresponding error relations. Using Lemma 1, we obtain the following: Comparing the powers of e_{n−1} in Equations (23)-(30), (24)-(31), (25)-(32) and (22)-(33), we obtain the following system of equations: After solving the above system of equations, we obtain q_1 = 2.4538, q_2 = 4.9075, q_3 = 7.3613 and q = 10.8151. Thus, the proof is complete.

Numerical Discussion
In this section, our objective is to elucidate the efficacy of recently introduced iterative families of methods through their application to various nonlinear equations.We will compare the results with some well-known existing methods available in the literature.In particular, the following iterative methods, in addition to Newton's method, are considered for comparison.
The well-known fourth-order multipoint without-memory Ostrowski's method (OM) is given as [22]: In 2020, Nouri et al. [30] developed the following fifth-order method (NRTM): The fifth-order method (GM) developed by Grau et al. is given as [31]: We selected the methods of [30,31] for comparison so as to have a fair and uniform comparison, because, like our proposed methods, they have the same order of convergence and the same number of function evaluations per iteration.
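For reference, the classical two-point Ostrowski scheme (OM) evaluates Θ(s_n), Θ′(s_n) and Θ(y_n) per iteration; a direct transcription (the stopping rule is our assumption):

```python
def ostrowski(theta, dtheta, s0, tol=1e-14, max_iter=20):
    """Ostrowski's fourth-order method:
    y    = s - Theta(s)/Theta'(s)
    s_new = y - [Theta(s)/(Theta(s) - 2*Theta(y))] * Theta(y)/Theta'(s)."""
    s = s0
    for _ in range(max_iter):
        fs = theta(s)
        if abs(fs) < tol:
            break
        dfs = dtheta(s)
        y = s - fs / dfs                      # Newton predictor
        fy = theta(y)
        s = y - fy * fs / (dfs * (fs - 2.0 * fy))  # Ostrowski corrector
    return s
```

With three evaluations per iteration and order four, OM attains the Kung-Traub optimal order 2^{3−1} = 4.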
The nonlinear test functions utilized for comparative analysis, along with their corresponding initial approximations, are provided below.
We consider the following Planck's radiation law problem, which calculates the energy density within an isothermal blackbody and is given by [37]: where λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, B is the Boltzmann constant, P is the Planck constant and c is the speed of light.We are interested in determining the wavelength λ that corresponds to maximum energy density v(λ).
From (38), we obtain the derivative condition. After that, if s = cP/(λBT), then (40) is satisfied if Θ_7(s) = 0, as given in (41). Therefore, the solutions of Θ_7(s) = 0 give the wavelength λ of maximum radiation by means of the following formula: λ = cP/(s* BT), where s* is a solution of (41).
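Assuming the usual form of this problem, maximizing v(λ) leads to the nonlinear equation Θ_7(s) = e^{−s} − 1 + s/5 = 0, whose nontrivial root is s* ≈ 4.9651. The sketch below (plain Newton steps with CODATA constants, not the proposed methods) recovers s* and, through λT = cP/(s*B), Wien's displacement constant ≈ 2.898 × 10^{−3} m·K:

```python
import math

# Assumed form of the Planck problem: Theta7(s) = exp(-s) - 1 + s/5
def theta7(s):
    return math.exp(-s) - 1.0 + s / 5.0

def dtheta7(s):
    return -math.exp(-s) + 0.2

s = 5.0                      # initial guess near the nontrivial root
for _ in range(20):          # plain Newton iteration
    s -= theta7(s) / dtheta7(s)

# lambda_max = c*P/(s* B T)  =>  lambda_max * T = c*P/(s* B)
c, P, B = 2.99792458e8, 6.62607015e-34, 1.380649e-23
wien = c * P / (s * B)       # ~ 2.898e-3 m*K (Wien's displacement law)
```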
Example 8: In the study of the multi-factor effect, the trajectory of an electron in the air gap between two parallel plates is given by [37]: where e and m are the charge and the mass of the electron at rest, s_0 and υ_0 are the position and velocity of the electron at time t_0, and E_0 sin(ωt + α) is the RF electric field between the plates. We choose particular parameters in the expression in order to deal with a simpler expression, which is defined as follows, with s_0 = −0.309.
The comparative results for all the methods, including NM (1), NDM1 (3), NDM2 (20), NRTM (36) and GM (37), are summarized in Tables 1-8. In these tables, we present the following metrics for each of the compared methods after the first three full iterations on every test function: the approximated roots (s_n), the absolute residual error (|Θ(s_n)|), the difference between the last two successive iterations (|s_n − s_{n−1}|) and the computational order of convergence (COC). Also, in Figure 1, we provide a comparison of the methods based on the error in consecutive iterations, |s_n − s_{n−1}|, after the first three iterations. The COC is determined using the following equation [38]:

COC ≈ ln(|s_{n+1} − s_n| / |s_n − s_{n−1}|) / ln(|s_n − s_{n−1}| / |s_{n−1} − s_{n−2}|).

For all numerical calculations, the programming software Mathematica 12.2 was utilized. For the with-memory method NDM2, we selected the parameter values α_0 = 0.1 and β_0 = 0.01 to start the initial iteration.
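The COC needs only four successive iterates; a minimal sketch (applied here to plain Newton iterates for s² − 2, not to the paper's test functions):

```python
import math

def coc(s3, s2, s1, s0):
    """Computational order of convergence from four successive iterates."""
    return (math.log(abs(s3 - s2) / abs(s2 - s1))
            / math.log(abs(s2 - s1) / abs(s1 - s0)))

# Newton iterates for s^2 - 2 starting from s0 = 1
it = [1.0]
for _ in range(4):
    s = it[-1]
    it.append(s - (s * s - 2.0) / (2.0 * s))
order = coc(it[4], it[3], it[2], it[1])  # ~2, Newton's theoretical order
```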
Table 1. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for Θ_1(s).

From all the numerical results in Tables 1-8 and from Figure 1, it is concluded that the proposed family of methods NDM1 and NDM2 is highly competitive and converges rapidly toward the roots, with the smallest absolute residual errors and the smallest errors in consecutive iterations compared with the other existing methods. Additionally, the numerical results indicate that the computational order of convergence supports the theoretical convergence order of the newly presented family of methods on the test functions.

Conclusions
In this paper, we have presented two families of iterative methods for solving nonlinear equations. The first is a memoryless, derivative-free family of fifth-order methods, while the second is a three-point family of with-memory methods. We obtained the fifth-order family of methods by employing a composition technique along with a modified Newton's method and a weight function approach. The memory-based extension of the fifth-order family employs two acceleration parameters calculated using Newton interpolating polynomials, enhancing the convergence order from five to ten without requiring additional function evaluations and thereby increasing the efficiency index from 1.495 to 1.778. Analysis of the numerical results has revealed the effectiveness and enhanced capabilities of the newly proposed methods in terms of minimal absolute residual error and minimal error in consecutive iterations. The results demonstrate that the proposed methods NDM1 and NDM2 exhibit faster convergence with smaller asymptotic constant values compared to other existing methods. Overall, the newly proposed work offers a fast convergence speed, making it a promising alternative for solving nonlinear equations.

Figure 1. Comparison of the methods based on the error in consecutive iterations, |s_n − s_{n−1}|, after the first three iterations.

Table 3. Comparisons of without-memory and with-memory methods after the first three (n = 3) iterations for Θ_3(s).