Article

A New Derivative-Free Method to Solve Nonlinear Equations

Department of Applied Mathematics, Naval Postgraduate School, Monterey, CA 93943, USA
Mathematics 2021, 9(6), 583; https://doi.org/10.3390/math9060583
Submission received: 3 February 2021 / Revised: 1 March 2021 / Accepted: 3 March 2021 / Published: 10 March 2021
(This article belongs to the Special Issue Iterative Methods for Solving Nonlinear Equations and Systems 2020)

Abstract

A new high-order derivative-free method for the solution of a nonlinear equation is developed. The novelty is the use of Traub’s method as a first step. The order is proven and demonstrated. It is also shown that the method has far fewer divergent points and runs faster than an optimal eighth-order derivative-free method.

1. Introduction

In engineering and applied science, we encounter the problem of solving a nonlinear equation
f(x) = 0.
Examples include the Colebrook equation [1], used to find the friction factor, and finding critical values of some nonlinear function. Another example is given by Ricceri [2], where the first eigenvalue of the Helmholtz equation is found by minimizing a functional; see also [3]. Most numerical solution methods are based on Newton’s scheme: starting with an initial guess x_0 for the root \xi, we create a sequence \{x_n\} via
x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.
The convergence is quadratic, that is,
|x_{n+1} - \xi| \le C_2 |x_n - \xi|^2.
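As a concrete baseline, Newton’s scheme can be written in a few lines of Python (the helper name `newton` and the test problem x^2 - 2 are ours, for illustration only):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:  # stop once the correction is negligible
            break
    return x

# Quadratic convergence: the error is roughly squared at every iteration.
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```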
To increase the order, one has to include higher derivatives; for example, Halley’s scheme [4] uses the first and second derivatives and is of cubic order. To avoid higher derivatives, one can use multipoint methods; see Petković et al. [5].
Derivative-free methods are either linear (such as Picard’s), superlinear (such as the secant method), or even quadratic, such as Steffensen’s method [6], given by
w_n = x_n + \gamma f(x_n), \qquad x_{n+1} = x_n - \frac{\gamma f(x_n)^2}{f(w_n) - f(x_n)}.
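For illustration, a minimal sketch of Steffensen’s iteration in Python (the function name and test equation are ours; \gamma = 1 recovers the classical scheme):

```python
def steffensen(f, x0, gamma=1.0, tol=1e-12, max_iter=100):
    """Derivative-free Steffensen iteration with parameter gamma."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        w = x + gamma * fx                      # auxiliary point w_n
        x = x - gamma * fx * fx / (f(w) - fx)   # next iterate x_{n+1}
    return x

root = steffensen(lambda x: x * x - 2.0, 1.0)
```

Note that only values of f are used, no derivatives, yet the convergence is quadratic.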
Because multistep methods are usually built on Newton steps, derivative-free methods typically use Steffensen’s method as the first step. There are several derivative-free methods based on Steffensen’s method for simple and multiple roots; see Kansal et al. [7] for such a family of methods for multiple roots and Zhanlav and Otgondorj [8] for simple roots. In a recent article, Neta [9] has shown that there is a better choice for a first step, even though it is NOT of second order: Traub’s method [10], given by
x_{n+1} = x_n - \frac{f(x_n)}{\frac{f(x_{n-2}) - f(x_n)}{x_{n-2} - x_n} - \frac{f(x_{n-2}) - f(x_{n-1})}{x_{n-2} - x_{n-1}} + \frac{f(x_{n-1}) - f(x_n)}{x_{n-1} - x_n}},  (5)
is of order 1.839, and it runs faster and has better dynamics than several other derivative-free methods. Clearly, one cannot get optimal methods (see Kung and Traub [11]) this way. Kung and Traub [11] conjectured that multipoint methods without memory using d function evaluations can have an order no larger than 2^{d-1}. The efficiency index is defined as I = p^{1/d}, where p is the order. Thus, an optimal eighth-order method has an efficiency index of I = 8^{1/4} ≈ 1.6818 and an optimal fourth-order method has I = 4^{1/3} ≈ 1.5874, both better than Newton’s method, for which I = \sqrt{2} ≈ 1.4142. The efficiency index of an optimal method cannot reach a value of 2. In fact, realistically, one uses methods of order at most 8. For high-order derivative-free methods based on Steffensen’s method as a first step, see Zhanlav and Otgondorj [8] and the references therein. Such methods are especially useful when the derivative is very expensive to evaluate and, of course, when the function is not differentiable.
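Traub’s method (5) can be sketched directly with first-order divided differences (the helper names are ours; the offsets 0.01 and 0.02 for the two extra starting values follow Remark 1 below):

```python
def dd(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(a) - f(b)) / (a - b)

def traub(f, x0, tol=1e-12, max_iter=100):
    """Traub's method with memory, order ~1.839: the derivative is replaced
    by the derivative of the quadratic interpolant through x_n, x_{n-1}, x_{n-2}."""
    xm2, xm1, x = x0 + 0.02, x0 + 0.01, x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # Denominator of (5): f[x_{n-2},x_n] - f[x_{n-2},x_{n-1}] + f[x_{n-1},x_n]
        denom = dd(f, xm2, x) - dd(f, xm2, xm1) + dd(f, xm1, x)
        xm2, xm1, x = xm1, x, x - fx / denom
    return x

root = traub(lambda x: x ** 3 - 10.0, 4.0)
```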
Here, we develop a derivative-free method with memory that uses Traub’s method (5) as the first step; the other two steps replace the derivative by the derivative of the Newton interpolating polynomial of degree 3. In the next section, we discuss the order of the scheme and the computational order of convergence, COC, defined by
COC = \frac{\ln \left| \frac{x_i - \alpha}{x_{i-1} - \alpha} \right|}{\ln \left| \frac{x_{i-1} - \alpha}{x_{i-2} - \alpha} \right|},  (6)
where α is the final approximation for the zero ξ .
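Given the iterates and the final approximation, the COC (6) can be computed by a small utility of our own:

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence (6) from the last three iterates."""
    e0, e1, e2 = (abs(x - alpha) for x in iterates[-3:])
    return math.log(e2 / e1) / math.log(e1 / e0)

# A sequence whose error is squared at each step should report COC ~ 2.
estimate = coc([1e-2, 1e-4, 1e-8], 0.0)
```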

2. New Method

We suggest a 3-step method having (5) as the first step. The method is
y_n = x_n - \frac{f(x_n)}{\frac{f(x_{n-2}) - f(x_n)}{x_{n-2} - x_n} - \frac{f(x_{n-2}) - f(x_{n-1})}{x_{n-2} - x_{n-1}} + \frac{f(x_{n-1}) - f(x_n)}{x_{n-1} - x_n}},
z_n = y_n - \frac{f(y_n)}{f'(y_n)},
x_{n+1} = z_n - \frac{f(z_n)}{f'(z_n)}.
The derivatives in the last two steps are approximated by the derivative of the Newton interpolating polynomial of degree 3:
f'(y_n) = f[y_n, x_n] + f[y_n, x_n, x_{n-1}](y_n - x_n) + f[y_n, x_n, x_{n-1}, x_{n-2}](y_n - x_n)(y_n - x_{n-1}),
and
f'(z_n) = f[z_n, y_n] + f[z_n, y_n, x_n](z_n - y_n) + f[z_n, y_n, x_n, x_{n-1}](z_n - y_n)(z_n - x_n),
where the brackets denote divided differences of the appropriate order, e.g., f[a, b] = (f(a) - f(b))/(a - b).
Let us denote the errors e_n = x_n - \xi, e_y = y_n - \xi, and e_z = z_n - \xi. The error in the first step is given by Traub:
e_y = C e_n^{1.839}.
The other two steps are of the same order as Newton’s method, i.e., e_z \sim e_y^2 and e_{n+1} \sim e_z^2. Therefore, the order of the method is 4 \times 1.839 = 7.356. The efficiency index I = p^{1/d} = 7.356^{1/3} = 1.945 is higher than that of a 3-step optimal eighth-order method. This is typical of methods with memory.
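The three steps above can be sketched in Python as follows (a minimal sketch: the helper names `d1`, `d2`, `d3`, `new_method` are ours, and the two extra starting values follow Remark 1 below):

```python
def d1(f, a, b):            # f[a, b]
    return (f(a) - f(b)) / (a - b)

def d2(f, a, b, c):         # f[a, b, c]
    return (d1(f, a, b) - d1(f, b, c)) / (a - c)

def d3(f, a, b, c, d):      # f[a, b, c, d]
    return (d2(f, a, b, c) - d2(f, b, c, d)) / (a - d)

def new_method(f, x0, tol=1e-10, max_iter=50):
    """Three-step derivative-free method with memory: Traub's method (5)
    as the first step, then two Newton-like steps whose derivatives come
    from the degree-3 Newton interpolating polynomial."""
    xm2, xm1, x = x0 + 0.02, x0 + 0.01, x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        # First step: Traub's method (5).
        y = x - fx / (d1(f, xm2, x) - d1(f, xm2, xm1) + d1(f, xm1, x))
        fy = f(y)
        if abs(fy) < tol:
            x = y
            break
        # Second step: Newton-like step with interpolated f'(y_n).
        fpy = (d1(f, y, x) + d2(f, y, x, xm1) * (y - x)
               + d3(f, y, x, xm1, xm2) * (y - x) * (y - xm1))
        z = y - fy / fpy
        fz = f(z)
        if abs(fz) < tol:
            x = z
            break
        # Third step: Newton-like step with interpolated f'(z_n).
        fpz = (d1(f, z, y) + d2(f, z, y, x) * (z - y)
               + d3(f, z, y, x, xm1) * (z - y) * (z - x))
        xm2, xm1, x = xm1, x, z - fz / fpz
    return x

root = new_method(lambda x: x ** 3 - x ** 2 - 1.0, 4.0)
```

For a cubic test function the degree-3 interpolant is exact, so the last two steps reduce to genuine Newton steps.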
In Table 1, we list the computational order of convergence as defined by (6) for 16 different nonlinear functions. The values range from 6.622 to 7.394 with an average value of 6.872.

3. Dynamics Study of the Methods

The use of basins of attraction to compare root-finding methods was initially discussed by Stewart [12]. This is more informative than comparing methods by running them on several nonlinear functions from one particular initial value. In the last decade, many papers have used basins of attraction to compare the efficiency of methods; see, for example, Chun and Neta [13,14] and the references therein.
In this section, we describe the experiments comparing our method with the method of Zhanlav and Otgondorj [8], denoted TZKO.
We chose four polynomials and one non-polynomial function, all having their roots within a 6-by-6 square centered at the origin. The square is divided by equally spaced horizontal and vertical lines, and we took the intersection points of these lines as the initial points in the complex plane for the iterative schemes. The code collected the number of iterations and function evaluations needed to converge within a tolerance of 10^{-7}, and the root to which each sequence converged. If a sequence did not converge within 40 iterations, the initial point is counted as divergent. We also collected the CPU run time to execute the code on all initial points using a Dell Optiplex 990 desktop computer.
We ran all methods on the following five examples, four of which are polynomials:
  • z^2 - 1
  • z^3 - 1
  • z^4 - 1
  • z^5 - 1
  • (e^{z+1} - 1)(z - 1)
Remark 1.
The additional starting values are x_{-1} = x_0 + 0.01 and x_{-2} = x_0 + 0.02.
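The experiment just described can be sketched as follows (for brevity this sketch of ours uses Newton’s method as the iteration and a coarse 61-by-61 grid; the paper’s runs use the compared derivative-free methods on a much finer grid):

```python
def basin_stats(f, fprime, n=61, half_width=3.0, tol=1e-7, max_iter=40):
    """Run an iteration from every node of an n-by-n grid on the square
    [-half_width, half_width]^2 and count the divergent starting points."""
    step = 2.0 * half_width / (n - 1)
    pts = [-half_width + step * k for k in range(n)]
    divergent = 0
    for re in pts:
        for im in pts:
            z = complex(re, im)
            converged = False
            for _ in range(max_iter):
                fz = f(z)
                if abs(fz) < tol:
                    converged = True
                    break
                dz = fprime(z)
                if dz == 0:
                    break          # iteration undefined: count as divergent
                z -= fz / dz
            if not converged:
                divergent += 1
    return divergent

# Newton on z^3 - 1: almost all of the 3721 starting points converge
# to one of the three cube roots of unity within 40 iterations.
bad = basin_stats(lambda z: z ** 3 - 1, lambda z: 3 * z ** 2)
```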
It is clear from these tables that our method runs faster (see Table 2), uses fewer function evaluations per point (see Table 3), and has far fewer divergent points (see Table 4). In fact, for 3 out of the 5 examples, our method had NO divergent points. We now take an example known to be hard, namely the Wilkinson-type polynomial
q(z) = z (z^2 - 1/4)(z^2 - 1)(z^2 - 9/4)(z^2 - 4),  (10)
which has roots at z = 0, \pm 1/2, \pm 1, \pm 3/2, \pm 2. Our method runs fast and had no divergent points. TZKO requires more than double the CPU run time of our method and had 166,138 divergent points. The basins for this example are plotted in Figure 1.

4. Conclusions

We have developed a derivative-free method with memory that uses Traub’s method as the first step. The method is of order 7.356 and has an efficiency index of 1.945, which is higher than that of any optimal eighth-order method. We have shown that our method is faster, uses fewer function evaluations per point, and has far fewer divergent points than the optimal eighth-order derivative-free method TZKO [8].

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Colebrook, C.F. Turbulent flows in pipes, with particular reference to the transition between the smooth and rough pipe laws. J. Inst. Civ. Eng. 1939, 11, 130.
  2. Ricceri, B. A class of equations with three solutions. Mathematics 2020, 8, 478.
  3. Treantă, S. Gradient structures associated with a polynomial differential equation. Mathematics 2020, 8, 535.
  4. Halley, E. A new, exact and easy method of finding the roots of equations generally and that without any previous reduction. Philos. Trans. R. Soc. Lond. 1694, 18, 136–148.
  5. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for the Solution of Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2012.
  6. Steffensen, J.F. Remarks on iteration. Scand. Actuar. J. 1933, 1, 64–72.
  7. Kansal, M.; Alshomrani, A.S.; Bhalla, S.; Behl, R.; Salimi, M. One parameter optimal derivative-free family to find the multiple roots of algebraic nonlinear equations. Mathematics 2020, 8, 2223.
  8. Zhanlav, T.; Otgondorj, K. Comparison of some optimal derivative-free three-point iterations. J. Numer. Anal. Approx. Theory 2020, 49, 76–90.
  9. Neta, B. Basin attractors for derivative-free methods to find simple roots of nonlinear equations. J. Numer. Anal. Approx. Theory 2020, 49, 177–189.
  10. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: New York, NY, USA, 1964.
  11. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 634–651.
  12. Stewart, B.D. Attractor Basins of Various Root-Finding Methods. Master’s Thesis, Naval Postgraduate School, Department of Applied Mathematics, Monterey, CA, USA, June 2001.
  13. Chun, C.; Neta, B. Comparative study of methods of various orders for finding simple roots of nonlinear equations. J. Appl. Anal. Comput. 2019, 9, 400–427.
  14. Chun, C.; Neta, B. Comparative study of methods of various orders for finding repeated roots of nonlinear equations. J. Comput. Appl. Math. 2018, 340, 11–42.
Figure 1. Our method (left) and TZKO (right) for the roots of the polynomial q ( z ) (10).
Table 1. Computational order of convergence for several functions using our new method.
Index | f(x)                               | x_0  | Iterations | COC
1     | (e^{x+3} - 1)(x - 1)               | 10.0 | 8          | 6.78
2     | x^3 + 4x^2 - 10                    | -2.6 | 3          | 7.048
3     | sin^2(x) - x^2 + 1                 | 2.0  | 3          | 6.622
4     | (x - 1)^3 - 1                      | 3.5  | 4          | 6.728
5     | x^3 - 10                           | 4.0  | 4          | 6.901
6     | x e^{x^2} - sin^2(x) + 3 cos(x) + 5| -1.0 | 3          | 7.048
7     | e^{x^2 + 7x - 30} - 1              | 4.0  | 9          | 6.793
8     | sin(x) - x/2                       | 2.0  | 3          | 6.848
9     | x^5 + x - 10,000                   | 4.0  | 4          | 6.780
10    | sqrt(x) - 1/x - 3                  | 9.0  | 3          | 6.674
11    | e^x + x - 20                       | 0.0  | 5          | 7.049
12    | ln(x) + sqrt(x) - 5                | 10.0 | 3          | 6.659
13    | x^3 - x^2 - 1                      | 4.0  | 4          | 6.912
14    | x^5 - 1                            | 10.0 | 7          | 6.749
15    | (e^{x+1} - 1)(x - 1)               | 5.0  | 5          | 7.394
16    | (e^{x+3} - 1)(e^{x-1} - 1)         | 15.0 | 14         | 6.964
Table 2. CPU time (msec) for each example (1–5) and each of the methods.
Method | Ex1     | Ex2     | Ex3     | Ex4     | Ex5     | Average
TZKO   | 290.886 | 545.869 | 621.265 | 745.541 | 334.541 | 507.620
Neta   | 201.156 | 277.703 | 391.063 | 435.844 | 302.438 | 321.641
Table 3. Average number of function evaluations per point for each example (1–5) and each of the methods.
Method | Ex1   | Ex2   | Ex3   | Ex4   | Ex5   | Average
TZKO   | 11.85 | 19.52 | 25.60 | 29.26 | 12.25 | 19.70
Neta   | 6.77  | 8.01  | 10.72 | 11.02 | 8.37  | 8.98
Table 4. Number of black (divergent) points for each example (1–5) and each of the methods.
Table 4. Number of black points for each example (1–5) and each of the methods.
MethodEx1Ex2Ex3Ex4Ex5Average
TZKO236416,67427,74533,419264016,568
Neta4870002542606
Neta, B. A New Derivative-Free Method to Solve Nonlinear Equations. Mathematics 2021, 9, 583. https://doi.org/10.3390/math9060583