Improving the Computational Efficiency of a Variant of Steffensen’s Method for Nonlinear Equations

Steffensen-type methods with memory were originally designed to solve nonlinear equations without any additional functional evaluations per computing step. In this paper, a variant of Steffensen's method is proposed which is derivative-free and with memory. Using an acceleration technique via interpolation polynomials of appropriate degrees, the computational efficiency index of this scheme is improved. It is shown that the new scheme is quite fast and has a high efficiency index. Finally, numerical investigations are brought forward to uphold the theoretical discussions.


Introduction
One of the commonly encountered topics in computational mathematics is solving a nonlinear algebraic equation. The equation can be presented in the scalar case as f(x) = 0, or, more generally, as a system of nonlinear algebraic equations. Finding the solutions (if they exist) cannot, in general, be done analytically. In some cases, analytic techniques give only the real solutions, while the complex zeros must also be found and reported. As such, numerical techniques are a viable choice for solving such nonlinear problems. Each of the existing computational procedures has its own domain of validity with some pros and cons [1,2].
Two classes of methods, with and without the use of derivatives, are known to be useful depending on the application at hand [3]. Derivative-involved methods offer a larger attraction basin along with a simple coding effort for higher-dimensional problems, whereas in derivative-free methods the region from which the initial approximations may be chosen is smaller, and the extension to higher-dimensional problems relies on a divided-difference operator matrix, which is basically dense. However, the ease of not computing derivatives and, subsequently, Jacobians makes derivative-free methods more practical in several problems [4][5][6][7].
Here, an attempt is made at developing a computational method which is not only efficient in terms of the computational efficiency index, but also in terms of larger domains for the choice of the initial guesses/approximations for starting the proposed numerical method.
Steffensen's method [8] for solving nonlinear scalar equations has quadratic convergence for simple zeros and is given by:

x_{k+1} = x_k − f(x_k)/f[x_k, w_k],  w_k = x_k + β f(x_k),  k = 0, 1, 2, . . . , (1)

where the two-point divided difference is defined by:

f[x, y] = (f(x) − f(y))/(x − y).

This scheme needs two function evaluations per cycle. Scheme (1) is an excellent tool for constructing efficient iterative methods for nonlinear equations, because it is derivative-free with a free parameter. This parameter can, first of all, enlarge the attraction basins of Equation (1) or of any of its subsequent methods and, second, can directly affect the improvement of the R-order of convergence and the efficiency index.
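As an illustration, scheme (1) can be sketched in a few lines of Python. The test equation, tolerance, and helper name are ours, not from the paper:

```python
def steffensen(f, x0, beta=0.1, tol=1e-12, maxit=50):
    """Derivative-free Steffensen iteration (scheme (1)) with free parameter beta."""
    x = x0
    for _ in range(maxit):
        fx = f(x)
        if abs(fx) < tol:           # stop once the residual is small enough
            break
        w = x + beta * fx           # auxiliary point w_k = x_k + beta f(x_k)
        dd = (f(w) - fx) / (w - x)  # two-point divided difference f[w_k, x_k]
        x = x - fx / dd             # Steffensen step
    return x

# Example: the simple zero alpha = 2 of f(x) = x^2 - 4 from x0 = 1.7
root = steffensen(lambda x: x * x - 4.0, 1.7)
```

Note that each cycle uses exactly two function evaluations, f(x_k) and f(w_k), as stated above.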
Recall that Kung and Traub conjectured that an iterative method without memory based on m function evaluations per iteration can attain at most the optimal convergence order 2^{m−1} [9,10].
The term "with memory" means that values of the function associated with previously computed approximations of the root are used in subsequent iterations. This is unlike methods "without memory", which use only the current values to find the next estimate. As such, in a method with memory, the results calculated up to the current iteration must be stored and then recalled as the process proceeds.
Before proceeding with the idea of improving the speed of convergence, the efficiency index, and the attraction basins, we provide a short literature review of some existing methods with accelerated convergence order. Traub [11] proposed the following two-point method with memory of order 2.414:

x_{k+1} = x_k − f(x_k)/f[x_k, w_k],  w_k = x_k + γ_k f(x_k),  k = 1, 2, . . . ,

where γ_k = −1/f[x_k, x_{k−1}]. This is one of the pioneering and fundamental methods with memory for solving nonlinear equations.
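A minimal sketch of this with-memory idea follows; the secant-based accelerator update is the standard formulation of Traub's scheme, while the starting value gamma0 and the test equation are our choices:

```python
def traub_with_memory(f, x0, gamma0=0.01, tol=1e-12, maxit=50):
    """Traub-type Steffensen iteration with memory: the free parameter gamma
    is re-estimated each cycle from the two most recent iterates, lifting
    the order from 2 to about 2.414 at no extra evaluation cost."""
    x, gamma = x0, gamma0
    fx = f(x)
    for _ in range(maxit):
        if abs(fx) < tol:
            break
        w = x + gamma * fx                         # w_k = x_k + gamma_k f(x_k)
        fw = f(w)
        x_new = x - fx * (w - x) / (fw - fx)       # x_k - f(x_k)/f[x_k, w_k]
        fx_new = f(x_new)                          # becomes f(x_k) of the next cycle
        gamma = -(x_new - x) / (fx_new - fx)       # gamma_{k+1} = -1/f[x_{k+1}, x_k]
        x, fx = x_new, fx_new
    return x
```

Because f(x_{k+1}) is carried over into the next cycle, the accelerator update costs no additional function evaluation.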
Džunić [12] suggested an effective bi-parametric iterative method with memory with R-order of convergence (3 + √17)/2 ≈ 3.56. Moreover, Džunić and Petković [13] derived a cubically convergent Steffensen-like method with memory in which z_{k−1} = x_{k−1} + β_{k−1} f(x_{k−1}), the accelerator depending on a second-order Newton interpolation polynomial. Various Steffensen-type methods are proposed in [14][15][16][17]. In fact, it is possible to improve the performance of the aforementioned methods by considering several more sub-steps, improving the computational efficiency index via multi-step iterative methods. However, this procedure is computationally burdensome. Thus, the motivation here is to ask whether it is possible to improve the performance of numerical methods in terms of the computational efficiency index, basins of attraction, and rate of convergence without adding more sub-steps, i.e., to propose a one-step solver.
Hence, the aim of this paper is to design a one-step method with memory which is quite fast and has an improved efficiency index, based on a modification of the one-step method of Steffensen (Equation (1)), increasing the convergence order to 3.90057 without any additional functional evaluations.
The rest of this paper is organized as follows: In Section 2, we develop the one-point Steffensen-type iterative scheme (Equation (1)) with memory, which was proposed in [18]. We present the main goal in Section 3 by approximating the acceleration parameters involved in the contributed scheme by Newton's interpolating polynomials and thus improving the R-order of convergence. Numerical reports are presented in Section 4 to confirm the theoretical results. Some discussions are given in Section 5.

An Iterative Method
The following iterative method without memory was proposed in [18] to improve the performance of (1) by having more free parameters; its error equation is given in (6), where c_n = f^(n)(α)/(n! f′(α)). Using the error Equation (6), to derive Steffensen-type iterative methods with memory, we calculate the parameters β = β_k and ζ = ζ_k by Formula (7) for k = 1, 2, 3, · · · , where the quantities involved are approximations to f′(α) and c_2, respectively, and α is a simple zero of f(x). In fact, Equation (7) shows a way to minimize the asymptotic error constant of Equation (6) by driving this coefficient closer and closer to zero as the iterative method converges to the true solution.
The initial estimates β_0 and ζ_0 must be chosen before starting the iterations. We state Newton's interpolating polynomials of degrees four and five, passing through the saved points, as in Equation (8). Recall that N(t), an interpolation polynomial for a given set of data points, is also known as Newton's divided-difference interpolation polynomial because its coefficients are calculated using Newton's divided differences; here, the data points are the saved iterates. Now, using some modification of Equation (5), we present the scheme given in Equation (9). Note that the accelerator parameters β_k, ζ_k are updated and then used in the iterative method from the second iteration onward, viz., k ≥ 2. This means that the third line of Equation (9) is imposed at the beginning, and after that the computed values are stored and used in the subsequent iterates. For k = 1, the degrees of the Newton interpolation polynomials would be two and three. However, for k ≥ 2, interpolations of degrees four and five as given in Equation (8) can be used to increase the convergence order.
Additionally, this acceleration of convergence is attained without the use of any more functional evaluations and without imposing more steps. Thus, the proposed scheme with memory (Equation (9)) can be attractive for solving nonlinear equations.
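The accelerators in (9) come from Newton interpolation polynomials built on the stored points. The divided-difference machinery behind Equation (8) can be sketched as follows (generic helpers with names of our choosing; the paper additionally differentiates these polynomials at the current iterate, which we omit here):

```python
def divided_differences(xs, ys):
    """Coefficients of the Newton form of the interpolating polynomial
    through the points (xs[i], ys[i]) (top edge of the divided-difference table)."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, t):
    """Evaluate the Newton-form polynomial at t by Horner-like nesting."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (t - xs[k]) + coef[k]
    return result

# A degree-3 Newton polynomial through 4 points reproduces x^3 exactly:
nodes = [0.0, 1.0, 2.0, 3.0]
coef = divided_differences(nodes, [t ** 3 for t in nodes])
value = newton_eval(coef, nodes, 1.5)   # equals 1.5**3 = 3.375
```

In the with-memory scheme, the degree-four and degree-five tables are rebuilt each cycle from the saved iterates, so no new function evaluations are required.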

Convergence Analysis
In this section, we establish the convergence of Equation (9) using Taylor series expansions and extensive symbolic computations.

Theorem 1.
Let the function f(x) be sufficiently differentiable in a neighborhood of its simple zero α, and let the initial approximation x_0 be sufficiently close to α. Then the R-order of convergence of the one-step method with memory (Equation (9)) is 3.90057.
Proof. The proof uses the definition of the error as the difference between the k-th estimate and the exact zero, along with symbolic computations. Let the sequences {x_k} and {w_k} have convergence orders r and p, respectively, as expressed in Equations (10) and (11). Therefore, using Equations (10) and (11), we obtain Equations (12) and (13). The error relations for the accelerating parameters β_k and ζ_k of Equation (9) can now be written as in Equations (14) and (15). On the other hand, by using a symbolic language and extensive computations, one finds the error terms (16) and (17) for the quantities appearing in the fundamental error Equation (6). Combining Equations (14)-(17), we arrive at Equations (18) and (19). We now compare the exponents on the left- and right-hand sides of Equations (12) and (19), and of Equations (13) and (18), respectively. Thus, we obtain the nonlinear system of Equations (20) for the final R-orders. The positive real solution of (20) is r = 3.90057 and p = 1.9502. Therefore, the R-order of convergence of Equation (9) is 3.90057.
Since improving the R-order of convergence is useless if the whole computational method is expensive, researchers basically judge a new scheme by its computational efficiency index, a tool providing a trade-off between the whole computational cost and the attained R-order. Assuming the cost of each functional evaluation is one, the efficiency index is defined as EI = p^{1/θ}, where θ is the whole computational cost per cycle [19].
The computational efficiency index of Equation (9) is therefore EI = 3.90057^{1/2} ≈ 1.97499, since it uses two function evaluations per cycle. However, this improved computational efficiency is reported while ignoring the number of multiplications and divisions per computing cycle. By imposing a slight weight for such calculations, one may once again obtain the improved computational efficiency of (9) in contrast to the existing schemes of the same type.
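The claimed indices are easy to verify numerically. The orders below are the ones quoted in this paper, and all four methods use θ = 2 function evaluations per cycle:

```python
# Efficiency index EI = p**(1/theta) with theta = 2 evaluations per cycle
orders = {
    "Steffensen, p = 2": 2.0,
    "Traub with memory, p = 2.414": 2.414,
    "Dzunic, p = (3 + sqrt(17))/2": (3.0 + 17.0 ** 0.5) / 2.0,
    "proposed, p = 3.90057": 3.90057,
}
ei = {name: p ** 0.5 for name, p in orders.items()}
for name, value in ei.items():
    print(f"{name}: EI = {value:.5f}")
```

The last entry reproduces the value 1.97499 stated above for the proposed scheme.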

Numerical Computations
In this section, we compare the convergence performance of Equation (9) with three well-known iterative methods on four test problems, carried out numerically in Mathematica 11.1 [20].
We denote Equations (1), (3), (5), and (9) by SM, DZ, PM, and M4, respectively. We compare our method with the others using β_0 = 0.1 and ζ_0 = 0.1. Here, the computational order of convergence (coc) has been computed by the following formula [21]:

coc ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).

Recall that, using a complex initial approximation, one is able to find the complex roots of nonlinear equations using (9). Experiment 1. Let us consider the following nonlinear test function: where α = 2 and x_0 = 1.7.

Experiment 2.
We take into account the following nonlinear test function: where α = 1 and x_0 = 0.7.

Experiment 3.
We consider the following test problem now: where α ≈ 1.8467200 and x_0 = 4.
Tables 1-4 show that the proposed Equation (9) is of order 3.90057 and is more advantageous than the other methods listed due to its fast speed and better accuracy.
For better comparison, we present the absolute residual errors |f(x)| for each test function in Tables 1-4. Additionally, we compute the computational order of convergence. Note that we have used multiple-precision arithmetic with 2000 significant digits in order to observe the asymptotic error constant and the coc as clearly as possible.
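The coc formula is easy to apply to any stored list of iterates. A small sketch follows; since the paper's iterate tables are not reproduced here, Newton's method on x^2 − 4 serves as a stand-in generator of a quadratically convergent sequence:

```python
import math

def coc(xs):
    """Computational order of convergence from the last four iterates:
    coc ~ ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|) / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|)."""
    e2 = abs(xs[-1] - xs[-2])
    e1 = abs(xs[-2] - xs[-3])
    e0 = abs(xs[-3] - xs[-4])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Stand-in iterate generator: Newton's method (order 2) on f(x) = x^2 - 4
xs = [3.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 4.0) / (2.0 * x))
order = coc(xs)   # close to 2 for this quadratically convergent sequence
```

In practice the estimate is taken from the last few iterates computed in high-precision arithmetic, exactly as done for Tables 1-4.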
The results obtained by our proposed method M4 are efficient and show better performance than the other existing methods.
A significant challenge in executing high-order nonlinear solvers is finding initial approximations to start the iterations when high-accuracy computation is needed. Mostly based on interval mathematics, one can find a close enough guess to start the process; there are also other ways to determine real initial approximations. One idea for finding such initial guesses, given in [22], is based on the built-in Mathematica 11.1 command NDSolve[] applied to the nonlinear function on the interval D = [a, b].
Following this, a short piece of Mathematica code along these lines gives a list of initial approximations in the working interval for Experiment 4. To check the position of the zero and the graph of the function, similar code can be used to obtain Figure 1.
We observe that the two self-accelerating parameters β_0 and ζ_0 have to be selected before the iterative procedure is started; thereafter, they are calculated by using information from the current and previous iterations (see, e.g., [23]). The initial estimates β_0 and ζ_0 should be chosen as small positive values. We use β_0 = ζ_0 = 0.1 whenever required.
As a harder test problem, for the nonlinear function g(x) = 2x + 0.5 sin(20πx) − x^2, we can simply find a list of estimates as initial guesses using the above piece of code, beginning {−0.185014, −0.162392, −0.0935912, −0.0535277, . . .}; see Figure 2.
After a number of iterates, the (nonzero) free parameters start converging to particular values which make the coefficient in Equation (6) zero, and hence make the numerical scheme converge with a high R-order.
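The NDSolve-based trick is Mathematica-specific; an equally simple grid scan for sign changes finds comparable starting points. A sketch follows, using the harder test function g from the text; the scanning interval [−1, 3] and the helper name are our choices:

```python
import math

def initial_guesses(g, a, b, n=2000):
    """Scan [a, b] with n equal subintervals and return a guess from each
    subinterval where g changes sign (a grid-based stand-in for the
    NDSolve-based trick of locating starting points)."""
    guesses = []
    xs = [a + i * (b - a) / n for i in range(n + 1)]
    for x0, x1 in zip(xs, xs[1:]):
        if g(x0) == 0.0:                 # grid point happens to be a zero
            guesses.append(x0)
        elif g(x0) * g(x1) < 0.0:        # sign change: a zero lies inside
            guesses.append(0.5 * (x0 + x1))
    return guesses

# The harder test function from the text, scanned on [-1, 3]:
g = lambda x: 2.0 * x + 0.5 * math.sin(20.0 * math.pi * x) - x ** 2
guesses = initial_guesses(g, -1.0, 3.0)
```

The scan should recover guesses close to the values quoted above (e.g., near −0.185 and −0.162), together with the zeros near 0 and 2.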

Ending Comments
In this paper, we have constructed a one-step method with memory to solve nonlinear equations. By using two self-accelerating parameters, our scheme, equipped with Newton's interpolation polynomials and requiring no additional functional evaluations, possesses the high computational efficiency index 1.97499, which is higher than those of many existing methods.
The efficacy of our scheme is confirmed by several numerical examples. The results in Tables 1-4 show that our method (M4) is valuable for finding an adequate estimate of the exact solution of nonlinear equations.

Author Contributions:
The authors contributed equally to this paper.

Funding: This research received no external funding.