A New Derivative-Free Method to Solve Nonlinear Equations

Abstract: A new high-order derivative-free method for the solution of a nonlinear equation is developed. The novelty is the use of Traub's method as a first step. The order of convergence is proven and demonstrated numerically. It is also shown that the method has far fewer divergent points and runs faster than an optimal eighth-order derivative-free method.


Introduction
In engineering and applied science, we encounter the problem of solving a nonlinear equation $f(x) = 0$. For example, the Colebrook equation [1] is solved to find the friction factor, and critical values of a nonlinear function are found by solving such an equation. Another example is given by Ricceri [2], where the first eigenvalue of the Helmholtz equation is found by minimizing a functional. See also [3]. Most numerical solution methods are based on Newton's scheme, i.e., starting with an initial guess $x_0$ for the root $\xi$, we create a sequence $\{x_n\}$ via
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$
The convergence is quadratic, that is, $e_{n+1} = C e_n^2$, where $e_n = x_n - \xi$. To increase the order, one has to include higher derivatives; for example, Halley's scheme [4] uses the first and second derivatives and is of cubic order. In order to avoid higher derivatives, one can use multipoint methods; see Petković et al. [5].
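Newton's scheme above can be sketched as follows; the sample function $f(x) = x^2 - 2$ and the starting point are illustrative choices, not taken from the paper:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's scheme: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / fprime(x)
    return x

# Illustrative example: solve x^2 - 2 = 0 starting from x0 = 1.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Each step requires one function and one derivative evaluation, which motivates the derivative-free alternatives discussed next.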
Derivative-free methods are either linear (such as Picard's), superlinear (such as the secant method) or even quadratic, such as Steffensen's method [6], given by
$$x_{n+1} = x_n - \frac{f(x_n)^2}{f(x_n + f(x_n)) - f(x_n)}.$$
Just as multistep methods are usually based on Newton steps, derivative-free multistep methods are usually based on Steffensen's method as the first step. There are several derivative-free methods based on Steffensen's method for simple and multiple roots; see Kansal et al. [7] for such a family of methods for multiple roots and Zhanlav and Otgondorj [8] for simple roots. In a recent article, Neta [9] has shown that there is a better choice for a first step, even though it is NOT of second order. Traub's method [10], given by
$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}] + f[x_n, x_{n-1}, x_{n-2}](x_n - x_{n-1})}, \qquad (5)$$
is of order 1.839, and it runs faster and has better dynamics than several other derivative-free methods. Clearly, one cannot get optimal methods (see Kung and Traub [11]) this way. Kung and Traub [11] conjectured that multipoint methods without memory using $d$ function evaluations can have an order no larger than $2^{d-1}$. The efficiency index is defined as $I = p^{1/d}$, where $p$ is the order. Thus, an optimal method of order 8 has an efficiency index of $I = 8^{1/4} = 1.6817$ and an optimal method of order 4 has an efficiency index of $I = 4^{1/3} = 1.5874$, both better than Newton's method, for which $I = \sqrt{2} = 1.4142$. The efficiency index of an optimal method cannot reach the value 2. In fact, realistically, one uses methods of order at most 8. For high-order derivative-free methods based on Steffensen's method as a first step, see Zhanlav and Otgondorj [8] and the references there. Such methods are especially useful when the derivative is very expensive to evaluate and, of course, when the function is non-differentiable.
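Traub's method (5) uses only function values and the two previous iterates. A minimal sketch is given below; the test function $x^3 - x - 2$ and starting point are illustrative choices:

```python
def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(b) - f(a)) / (b - a)

def traub(f, x0, tol=1e-12, max_iter=50):
    """Traub's derivative-free method with memory (order about 1.839):
    x_{n+1} = x_n - f(x_n) / (f[x_n,x_{n-1}] + f[x_n,x_{n-1},x_{n-2}](x_n - x_{n-1}))."""
    # Additional starting values, chosen as in Remark 1 of the paper.
    x2, x1, x = x0 + 0.02, x0 + 0.01, x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        d1 = dd(f, x, x1)                      # f[x_n, x_{n-1}]
        d2 = (d1 - dd(f, x1, x2)) / (x - x2)   # f[x_n, x_{n-1}, x_{n-2}]
        x2, x1, x = x1, x, x - f(x) / (d1 + d2 * (x - x1))
    return x

# Illustrative example: solve x^3 - x - 2 = 0 starting near 1.5.
root = traub(lambda x: x**3 - x - 2.0, 1.5)
```

Only one new function evaluation is needed per step, since $f(x_{n-1})$ and $f(x_{n-2})$ are reused from memory.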
Here, we develop a derivative-free method with memory based on Traub's method (5) as the first step; the other two steps are based on replacing the derivative by the derivative of the Newton interpolating polynomial of degree 3. In the next section, we discuss the order of the scheme and the computational order of convergence, COC, defined by
$$\mathrm{COC} = \frac{\ln \left| (x_{n+1} - \alpha)/(x_n - \alpha) \right|}{\ln \left| (x_n - \alpha)/(x_{n-1} - \alpha) \right|}, \qquad (6)$$
where $\alpha$ is the final approximation for the zero $\xi$.
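The COC in (6) can be estimated from three consecutive stored iterates once the final approximation $\alpha$ is known. A small sketch, using Newton iterates on an illustrative function (where the COC should be close to the quadratic order 2):

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence (6) from the last three iterates."""
    x_prev, x_cur, x_next = iterates[-3], iterates[-2], iterates[-1]
    num = math.log(abs((x_next - alpha) / (x_cur - alpha)))
    den = math.log(abs((x_cur - alpha) / (x_prev - alpha)))
    return num / den

# Newton iterates for the illustrative function f(x) = x^2 - 2 from x0 = 1.
xs = [1.0]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
alpha = 2.0 ** 0.5
estimate = coc(xs, alpha)  # early iterates, before rounding error dominates
```

In practice the early iterates are used, since once the error reaches machine precision the ratios in (6) become meaningless.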

New Method
We suggest a 3-step method having (5) as the first step. The method is
$$y_n = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}] + f[x_n, x_{n-1}, x_{n-2}](x_n - x_{n-1})}, \quad z_n = y_n - \frac{f(y_n)}{f'(y_n)}, \quad x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}. \qquad (7)$$
The derivatives in the last two steps are approximated by the derivative of the Newton interpolating polynomial of degree 3 through four available nodes $a, b, c, d$:
$$P_3'(t) = f[a, b] + f[a, b, c]\bigl((t-a)+(t-b)\bigr) + f[a, b, c, d]\bigl((t-a)(t-b)+(t-a)(t-c)+(t-b)(t-c)\bigr). \qquad (8)$$
The other two steps are of the same order as Newton's method, i.e., $e_z = O(e_y^2)$ and $e_{n+1} = O(e_z^2)$. Therefore, the order of the method is $4 \times 1.839 = 7.356$. The efficiency index $I = p^{1/d} = 7.356^{1/3} = 1.945$ is higher than that of the 3-step optimal eighth-order method. This is typical of methods with memory.
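To make the construction concrete, here is a sketch of one possible implementation. The node sets used for the degree-3 interpolating polynomial at each step are a plausible choice (the most recent four points available), not necessarily the authors' exact one, and the test function is illustrative:

```python
def divided_diffs(f, nodes):
    """Coefficients c_k = f[n_0, ..., n_k] of the Newton form."""
    table = [f(t) for t in nodes]
    coeffs = [table[0]]
    for k in range(1, len(nodes)):
        table = [(table[i + 1] - table[i]) / (nodes[i + k] - nodes[i])
                 for i in range(len(table) - 1)]
        coeffs.append(table[0])
    return coeffs

def newton_poly_deriv(f, nodes, t):
    """Derivative at t of the degree-3 Newton interpolating polynomial."""
    c = divided_diffs(f, nodes)
    n0, n1, n2 = nodes[0], nodes[1], nodes[2]
    return (c[1] + c[2] * ((t - n0) + (t - n1))
            + c[3] * ((t - n0) * (t - n1) + (t - n0) * (t - n2)
                      + (t - n1) * (t - n2)))

def new_method(f, x0, tol=1e-12, max_iter=20):
    """Traub first step (5), then two Newton-like steps with f'
    replaced by the derivative of a degree-3 Newton polynomial."""
    x2, x1, x = x0 + 0.02, x0 + 0.01, x0   # starting values as in Remark 1
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        d1 = (f(x) - f(x1)) / (x - x1)
        d2 = (d1 - (f(x1) - f(x2)) / (x1 - x2)) / (x - x2)
        y = x - f(x) / (d1 + d2 * (x - x1))      # Traub step (5)
        fy = f(y)
        if fy == 0.0 or y == x:                  # already converged
            x = y
            break
        z = y - fy / newton_poly_deriv(f, [x2, x1, x, y], y)
        fz = f(z)
        if fz == 0.0 or z == y:
            x = z
            break
        x2, x1, x = x1, x, z - fz / newton_poly_deriv(f, [x1, x, y, z], z)
    return x

# Illustrative test problem: x^3 - x - 2 = 0, root near 1.5214.
root = new_method(lambda x: x**3 - x - 2.0, 1.5)
```

Each iteration uses only the three new function values $f(x_n)$, $f(y_n)$, $f(z_n)$; everything else is reused from memory, which is what pushes the efficiency index above that of optimal methods without memory.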
In Table 1, we list the computational order of convergence as defined by (6) for 16 different nonlinear functions. The values range from 6.622 to 7.394, with an average value of 6.872.

Dynamics Study of the Methods
The idea of comparing methods via their basins of attraction was initially discussed by Stewart [12]. This is better than comparing methods on the basis of running several nonlinear functions from a single initial value. In the last decade, many papers have appeared using the idea of basins of attraction to compare the efficiency of methods. See, for example, Chun and Neta [13,14] and the references there.
In this section, we describe the experiments with our method as compared to TZKO [8]. We chose four polynomials and one non-polynomial function, all having their roots within a 6 by 6 square centered at the origin. The square is divided horizontally and vertically by equally spaced lines, and we took the intersections of these lines as the initial points in the complex plane for the iterative schemes. The code collected the number of iterations and function evaluations needed to converge within a tolerance of $10^{-7}$, as well as the root to which each sequence converged. If a sequence did not converge within 40 iterations, we marked that initial point as divergent. We also collected the CPU run time needed to execute the code on all initial points, using a Dell Optiplex 990 desktop computer.
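The basin-of-attraction experiment described above can be sketched as follows. For brevity, this demo runs Newton's method on the illustrative function $z^3 - 1$ over a coarse grid, rather than the paper's methods and examples:

```python
import numpy as np

def basin_grid(f, fprime, n=41, half_width=3.0, tol=1e-7, max_iter=40):
    """Iterate from each grid point of a square centered at the origin;
    record the root reached and the iteration count (-1 = divergent)."""
    side = np.linspace(-half_width, half_width, n)
    roots = {}                                  # root -> color index
    counts = np.full((n, n), -1, dtype=int)
    root_idx = np.full((n, n), -1, dtype=int)
    for i, re in enumerate(side):
        for j, im in enumerate(side):
            z = complex(re, im)
            for k in range(max_iter):
                dz = fprime(z)
                if dz == 0:
                    break                       # cannot continue; divergent
                z_new = z - f(z) / dz
                if abs(z_new - z) < tol:
                    # match the limit to a known root via a rounded key
                    key = (round(z_new.real, 4), round(z_new.imag, 4))
                    idx = roots.setdefault(key, len(roots))
                    counts[i, j], root_idx[i, j] = k + 1, idx
                    break
                z = z_new
    return roots, counts, root_idx

roots, counts, _ = basin_grid(lambda z: z**3 - 1, lambda z: 3 * z**2)
```

Coloring each grid point by `root_idx` (with brightness given by `counts`) produces basin plots like those in Figure 1.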
We ran all methods on the following five examples, four of which are polynomials.

Remark 1. The additional starting values are $x_{-1} = x_0 + 0.01$ and $x_{-2} = x_0 + 0.02$.
It is clear from these tables that our method runs faster (see Table 2), uses fewer function evaluations per point (see Table 3) and has far fewer divergent points (see Table 4). In fact, for 3 out of the 5 examples, our method had NO divergent points. We now take an example known to be hard, namely the Wilkinson-type polynomial
$$q(z) = z\left(z^2 - \tfrac{1}{4}\right)\left(z^2 - 1\right)\left(z^2 - \tfrac{9}{4}\right)\left(z^2 - 4\right), \qquad (10)$$
which has roots at $z = 0, \pm 1/2, \pm 1, \pm 3/2, \pm 2$. Our method ran fast and had no divergent points. TZKO required more than double the CPU run time of our method and had 166,138 divergent points. The plots of the basins for this example are given in Figure 1.
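The polynomial (10) and its nine roots can be checked directly (a trivial sanity sketch, since the half-integer roots are exactly representable in floating point):

```python
# Wilkinson-type polynomial q(z) from (10) and its nine roots.
def q(z):
    return z * (z**2 - 0.25) * (z**2 - 1) * (z**2 - 2.25) * (z**2 - 4)

roots = [0.0, 0.5, -0.5, 1.0, -1.0, 1.5, -1.5, 2.0, -2.0]
residuals = [abs(q(r)) for r in roots]
```

The closely clustered real roots are what make the basins of this example hard to resolve.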
Version March 1, 2021 submitted to Mathematics

Here
$$f[a, b] = \frac{f(b) - f(a)}{b - a} \qquad (9)$$
is the divided difference. Let us denote the errors $e_n = x_n - \xi$, $e_y = y_n - \xi$ and $e_z = z_n - \xi$. The error in the first step is given by Traub as $e_y = C e_n^{1.839}$.

Figure 1. Our method (left) and TZKO (right) for the roots of the polynomial q(z) in (10).

Table 1. Computational order of convergence for several functions using our new method.