Three new iterative methods for solving scalar nonlinear equations using the weight function technique are presented. The first is a two-step fifth order method with four function evaluations, obtained by improving a two-step Newton's method having the same number of function evaluations; this raises the efficiency index from 1.414 to 1.495. The second is a three-step method with one additional function evaluation, producing eighth order accuracy with efficiency index 1.516. The third is a new optimal two-step fourth order method with efficiency index 1.587. In terms of efficiency index, all three methods improve on Newton's method and many other equivalent higher order methods. Convergence analyses establish that the methods have fifth, eighth and fourth order convergence, respectively. Numerical examples confirm that the proposed methods are efficient and perform better than several equivalent and optimal methods. Seven application problems are solved to illustrate their efficiency and performance.
Newton’s method; nonlinear equation; multi-point iteration; optimal order; higher order method; efficiency index
65H05; 65D05; 41A25
This paper concerns the numerical solution of nonlinear equations of the general form . Such equations appear frequently in real world problems, yet they rarely admit closed form solutions, which is why their numerical solution attracts much attention. One of the common problems encountered in science and engineering is: given a single variable function , find the values of x for which . The roots of such nonlinear equations may be real or complex. Two general types of methods are available for finding the roots of algebraic and transcendental equations: direct methods, which are not always applicable, and iterative methods, which are based on the concept of successive approximation. In an iterative method, the general procedure is to start with an initial approximation near the root and generate a sequence of iterates which converges in the limit to the true solution. The most efficient existing root-solvers are based on multi-point iterations, since these overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency.
To determine the solution of nonlinear equations, many iterative methods have been proposed; see [1,2,3] and the references therein. The construction of iterative methods for nonlinear equations is one of the vital areas of research in numerical analysis. Among them, the most familiar iterative method without memory is the Newton–Raphson method, given by
This method is optimal with efficiency index (EI) 1.414. Another well known method is Halley's iteration method, given by
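Both classical iterations can be sketched in a few lines of code. The cubic test function below is an illustrative assumption, not one of the paper's test problems:

```python
def newton(f, df, x0, tol=1e-12, max_iter=100):
    # Newton-Raphson: x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= fx / df(x)
    return x

def halley(f, df, d2f, x0, tol=1e-12, max_iter=100):
    # Halley: x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        x -= 2 * fx * dfx / (2 * dfx ** 2 - fx * d2f(x))
    return x

# Illustrative problem (assumed): f(x) = x**3 - 2, real root 2**(1/3).
r1 = newton(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.0)
r2 = halley(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, lambda x: 6 * x, 1.0)
```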
To accelerate the convergence of Newton's method, many authors have modified it, as can be seen in [4,5]. Significant among these are the arithmetic mean Newton's method and the harmonic mean Newton's method, both having cubic convergence. These two-step methods are respectively given as follows:
The efficiency index of the methods (3) and (4) is 1.442 with three function evaluations per iteration.
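For concreteness, both variants can be sketched as below, assuming the standard arithmetic-mean and harmonic-mean forms of methods (3) and (4) from [4,5]; the test function is again an illustrative assumption:

```python
def amn_step(f, df, x):
    # Arithmetic mean Newton (3): replace f'(x_n) by the average of
    # f'(x_n) and f'(y_n), where y_n is the ordinary Newton iterate.
    y = x - f(x) / df(x)
    return x - 2 * f(x) / (df(x) + df(y))

def hmn_step(f, df, x):
    # Harmonic mean Newton (4): use the harmonic mean of f'(x_n), f'(y_n).
    y = x - f(x) / df(x)
    return x - f(x) * (df(x) + df(y)) / (2 * df(x) * df(y))

# Illustrative problem (assumed): f(x) = x**3 - 2, real root 2**(1/3).
f, df = lambda x: x ** 3 - 2, lambda x: 3 * x ** 2
xa = xh = 1.0
for _ in range(5):
    xa, xh = amn_step(f, df, xa), hmn_step(f, df, xh)
```

Each step costs three function evaluations (one f and two f'), which gives the efficiency index 1.442 quoted above.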
Recently, some fourth and eighth order optimal iterative methods have been developed in [6,7]. A more extensive list of references, as well as a survey of the progress made in the class of multi-point methods, can be found in the recent book by Petkovic et al. In the recent past, many higher order optimal and non-optimal iterative methods have been developed using the idea of weight functions (see [7,9,10,11,12,13]).
The main objective of this paper is to construct multi-step iterative formulas without memory with improved convergence order and better efficiency index. We therefore present three new Newton-type iterative methods having fifth, eighth and fourth order convergence, whose efficiency indices are 1.495, 1.516 and 1.587, respectively. Among the three, the fourth order method is a class of optimal methods. Section 2 discusses the preliminaries and Section 3 presents the construction of the new methods. Section 4 analyses the convergence order of the proposed methods. In Section 5, the performance of the new methods is compared with some well known equivalent methods. Seven real life application problems are taken up in Section 6, where all the listed methods and the proposed methods are numerically verified. Finally, conclusions are given in Section 7.
The following definitions are required for the ensuing convergence analysis.
().If the sequence tends to a limit in such a way that
for , then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If , or , the convergence is said to be linear, quadratic or cubic, respectively.
Let , then the relation
is called the error equation. The value of p is called the order of convergence of the method.
where d is the total number of new function evaluations (the values of f and its derivatives) per iteration.
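As a quick sanity check, the efficiency indices quoted throughout this paper follow directly from the definition EI = p^(1/d):

```python
def efficiency_index(p, d):
    # EI = p**(1/d): order p achieved with d function evaluations.
    return p ** (1.0 / d)

# Values quoted in the text:
newton_ei = efficiency_index(2, 2)  # classical Newton's method
fifth_ei = efficiency_index(5, 4)   # new fifth order method
eighth_ei = efficiency_index(8, 5)  # new eighth order method
fourth_ei = efficiency_index(4, 3)  # new optimal fourth order method
```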
Let define an Iterative Function (I.F.). Let be determined by new information at and no old information is reused. Thus, is called a multi-point I.F. without memory.
Kung–Traub Conjecture: Let be an iterative function without memory with d evaluations. Then , where is the maximum order.
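The conjectured bound can be made concrete in a short sketch: with d evaluations the maximum order is 2^(d-1), so an optimal method attains efficiency index 2^((d-1)/d):

```python
def optimal_order(d):
    # Kung-Traub conjecture: an optimal d-evaluation method without
    # memory attains order at most 2**(d - 1).
    return 2 ** (d - 1)

def optimal_ei(d):
    # Efficiency index of a method attaining the conjectured bound.
    return optimal_order(d) ** (1.0 / d)

# d = 3 evaluations: optimal order 4, EI = 4**(1/3), which is the bound
# met by the fourth order method proposed later in the paper.
```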
We state below a theorem which helps us to find out the order of the multi-point methods.
().Let be iterative functions with the orders respectively. Then the composition of iterative functions defines the iterative method of order .
3. Construction of New Methods
Consider the two-step Newton’s method discussed in  given below:
As per Theorem 1, method (7) has fourth order convergence and requires four function evaluations. However, the efficiency index of (7) does not increase and remains equal to that of Newton's method.
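A minimal sketch of the two-step Newton method (7) illustrates this: two full Newton substeps give order four, but at four evaluations per iteration the efficiency index stays at 4^(1/4) = 2^(1/2). The test function is an illustrative assumption:

```python
def two_step_newton(f, df, x0, iters=3):
    # Method (7): two full Newton substeps per iteration.
    # Order 4, but four evaluations, so EI = 4**(1/4) = 2**(1/2).
    x = x0
    for _ in range(iters):
        y = x - f(x) / df(x)   # first Newton substep
        x = y - f(y) / df(y)   # second Newton substep
    return x

# Illustrative problem (assumed): f(x) = x**3 - 2, real root 2**(1/3).
root = two_step_newton(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, 1.0)
```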
New Fifth Order Method: Our aim is to improve the order and efficiency index of (7) by proposing a modification of this method. We achieve this by introducing a weight function as follows:
where , , is chosen as per the requirement of the error term in order to produce fifth order convergence; details are found in the next section. As a consequence, the order of convergence has improved from four to five with four function evaluations and the efficiency index has increased from to .
New Eighth Order Method: Further, we extend the method by taking one more weighted Newton step, obtaining a new eighth order method with one additional function evaluation as follows:
where is a weight function. The efficiency index of this method is , which is better than that of the methods (7) and (8).
3.1. Further Development
Class of Optimal Fourth Order Methods: A new two-step optimal iterative method of order four, requiring three function evaluations per iteration and using a weight function, is presented. The new class satisfies the Kung–Traub conjecture and is given below:
where . The above method (10) has fourth order convergence.
4. Convergence Analysis
In order to establish the convergence of the proposed methods (8) and (9), we prove the following theorem with the help of Mathematica software.
Let be sufficiently smooth functions in the neighborhood of the root. If has a simple root in the open interval D and is chosen in a sufficiently small neighborhood of , then the methods (8) and (9) have local fifth and eighth-order convergence, when
These fifth and eighth order methods respectively satisfy the following error bounds:
Expanding about and taking into account (14), we have
Expanding the weight functions about 1, we get
Finally, using Equations (14)–(17) in (8), we have
which shows fifth order convergence.
Again, expanding the weight functions about 1, we get
Now, expanding by using Taylor's series about and taking into account (4), we have
Now, using Equations (16), (19) and (20) in (9), we have
which shows eighth order convergence. □
The following theorem can be proved similarly to the above theorem with the help of the Mathematica software, and hence the proof is not given.
Let be a simple zero of a sufficiently differentiable function on an open interval D. If is sufficiently close to , then the method (10) has convergence order four when
and it satisfies the error equation .
A Special Case of the Optimal Fourth Order Method: Different choices of in (21) produce different members of the fourth-order class. A particular case from the class (10), satisfying (21) with a specific weight function for the choice of , is given in the following:
5. Numerical Examples
In this section, several numerical examples are considered to confirm the convergence order and to illustrate the performance of the new methods , and . The new methods are compared with some existing methods such as , , , , , , and , which are given below. Note that all computations are carried out in Matlab using variable precision arithmetic with 500 decimal digits of accuracy. The number of iterations and the CPU time in seconds are listed under the condition that , where . In addition, to verify the theoretical order of convergence, we calculate the computational order of convergence defined by
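The computational order of convergence can be estimated from four successive iterates. A small sketch follows; the quadratic test problem is an illustrative assumption, chosen so the estimate should come out near 2:

```python
import math

def coc(xs):
    # Computational order of convergence from the last four iterates:
    # rho = ln(|x_{n+1}-x_n| / |x_n-x_{n-1}|)
    #       / ln(|x_n-x_{n-1}| / |x_{n-1}-x_{n-2}|)
    e2 = abs(xs[-1] - xs[-2])
    e1 = abs(xs[-2] - xs[-3])
    e0 = abs(xs[-3] - xs[-4])
    return math.log(e2 / e1) / math.log(e1 / e0)

# Sanity check (assumed example): Newton's method on f(x) = x**2 - 2
# is quadratically convergent, so coc should be close to 2.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x ** 2 - 2) / (2 * x))
```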
For demonstrating the numerical results, a few equivalent methods from the literature are given below:
A fourth order optimal method proposed by Sharifi, Babajee and Soleymani is given by
Another fourth order optimal method proposed by Chun et al. is given by
A fifth order method proposed by Fang et al. is given by
An optimal eighth order method proposed by Petkovic et al. (8thPNPDM)  is given by
A non-optimal eighth order method proposed by Parimala et al. (8thPKJ)  is given by
The following examples are used for numerical verification:
Table 1 shows the efficiency index of the new methods together with some known methods. Table 2 and Table 3 display the initial value , the number of iterations (N), the computational order of convergence (), and the CPU time (in seconds) for all the listed methods. From the computational results, we observe that the proposed methods , and require fewer iterations than the other equivalent methods for most of the test functions. In addition, the computational order of convergence coincides with the theoretical results. The presented methods produce converging roots for all the functions, whereas the method and the method diverge for the functions , , and , , respectively.
6. Some Real Life Applications
In this section we give some applications and compare the proposed methods to other well known methods:
Application 1: We consider the classical projectile problem, in which a projectile is launched from a tower of height , with initial speed v, at an angle with respect to the horizontal, onto a hill defined by the impact function , which depends on the horizontal distance x. We wish to find the optimal launch angle that maximizes the horizontal distance. In our calculations, air resistance is neglected.
The path function that describes the motion of the projectile is given by
When the projectile hits the hill, there is, for each launch angle, a value of x for which . We wish to find the value of that maximizes x.
Differentiating Equation (29) implicitly w.r.t. , we have
An enveloping parabola is a path that encloses and intersects all possible paths. This enveloping parabola is obtained by maximizing the height of the projectile for a given horizontal distance x which will give the path that encloses all possible paths. Let , then Equation (28) becomes
Differentiating Equation (33) w.r.t. w and setting , Henelsmith obtained
so that the enveloping parabola is defined by .
The solution of the projectile problem requires first finding the value which satisfies , and then solving for in Equation (32), because we want to find the point at which the enveloping parabola intersects the impact function , and then the that corresponds to this point on the enveloping parabola. We choose a linear impact function with and . We let . Then we apply our I.F.s starting from to solve the nonlinear equation
whose root is given by and .
Figure 1 shows the intersection of the path function, the enveloping parabola and the linear impact function for this application when method is applied.
Application 2: The depth of embedment x of a sheet-pile wall is governed by the equation:
It can be rewritten as
An engineer has estimated the depth to be . Here we find the root of the equation with initial guess 2.5 and compare some well known methods with our methods.
Application 3: The vertical stress generated at point in an elastic continuum under the edge of a strip footing supporting a uniform pressure q is given by Boussinesq’s formula  to be:
A scientist is interested in estimating the value of x at which the vertical stress will be 25 percent of the footing stress q. The initial estimate is . The above equation can be rewritten, for equal to 25 percent of the footing stress q, as:
Now we find the root of the equation with initial guess 0.4 and compare some well known methods to our methods.
Application 4: Many problems in science and engineering that involve determining an unknown appearing implicitly give rise to a root-finding problem. Planck's radiation law problem, appearing in [25,26], is one of them; it is given by
which calculates the energy density within an isothermal blackbody. Here, is the wavelength of the radiation; T is the absolute temperature of the blackbody; k is Boltzmann's constant; h is Planck's constant; and c is the speed of light. Suppose we would like to determine the wavelength which corresponds to the maximum energy density . From Equation (35), we get
It can be checked that a maxima for occurs when , that is when
Here, taking , the above equation becomes
Let us define
The aim is to find a root of the equation . Obviously, one of the roots is not taken up for discussion. As argued in , the left-hand side of Equation (36) is zero for and . Hence, it is expected that another root of the equation might occur near . The approximate root of Equation (37) is given by . Consequently, the wavelength of radiation () for which the energy density is maximum is approximated as .
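A hedged sketch of this computation: assuming Equation (37) takes the standard reduced form e^(-x) + x/5 - 1 = 0 (the form used, e.g., in Bradie's textbook), a plain Newton iteration started from x0 = 5 recovers the nontrivial root, which in turn gives the wavelength of maximum energy density via lambda = hc/(x k T) (Wien's displacement law):

```python
import math

# Reduced Planck equation (assumed standard form of Eq. (37)):
# f(x) = exp(-x) + x/5 - 1, nontrivial root near x = 5.
f = lambda x: math.exp(-x) + x / 5 - 1
df = lambda x: -math.exp(-x) + 1 / 5

x = 5.0  # initial guess near the expected root
for _ in range(10):
    x -= f(x) / df(x)  # plain Newton iteration
```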
Application 5: Study of the multipactor effect :
The trajectory of an electron in the air gap between two parallel plates is given by
where is the electric field between the plates at time , and are the position and velocity of the electron, and e and m are the charge and the rest mass of the electron, respectively. For particular parameters, one can deal with the following simpler expression:
The required zero of the above function is .
Application 6: Van der Waals equation representing a real gas is given by :
Here, a and b are parameters specific for each gas. This equation reduces to a nonlinear equation given by
By using the particular values for unknown constants, one can obtain the following nonlinear function
having three zeros. Two of them are complex and the third one is real. Our desired root is
Application 7: Fractional conversion in a chemical reactor: In the following expression,
x represents the fractional conversion of species A in a chemical reactor. The required zero for this problem is .
Table 4, Table 5, Table 6, Table 7, Table 8, Table 9 and Table 10 display the numerical results with respect to the number of iterations (N), , the order of convergence and the CPU time (in seconds). The numerical experiments on the above real life problems demonstrate the validity and applicability of the proposed methods. The presented methods take less CPU time and an equal number of iterations compared with the equivalent methods, which shows that the proposed methods are well suited to all the application problems. In most cases, the proposed methods perform better than the existing methods.
We have presented modifications of Newton's method producing fifth, eighth and fourth order convergence for solving nonlinear equations. At each iteration, the methods require four, five and three function evaluations, respectively. The optimal methods and diverge for the functions , , and , , respectively, for some initial points; for these functions, the proposed methods converge, even though two of them are non-optimal. Moreover, the proposed new methods and require fewer iterations and less CPU time for convergence when compared with the other methods. The method also performs well compared with equivalent methods. The table of efficiency indices shows that the new algorithms have better efficiency than classical Newton's method and other existing non-optimal methods. Seven application problems are solved, for which the new methods produce better results than the other compared methods. For all the applications, the proposed methods consume less CPU time and perform on par with the other compared methods with respect to iteration number and residual error. For application problems 1 and 4, the method diverges, whereas the proposed methods converge. Hence, the new methods can be considered very good competitors to Newton's method and many other existing equivalent optimal and non-optimal methods.
Conceptualization, validation, writing—review and editing, resources, supervision—J.J.; Methodology, software, formal analysis, data curation, writing—original draft preparation, visualization—P.S.
This research received no external funding.
The authors would like to thank the anonymous reviewers for their constructive comments and useful suggestions which greatly helped to improve this paper.
Conflicts of Interest
The authors declare no conflict of interest.
Jaiswal, J.P. Some class of third and fourth-order iterative methods for solving nonlinear equations. J. Appl. Math. 2014, 2014, 1–17.
Soleymani, F. Some optimal iterative methods and their with memory variants. J. Egyp. Math. 2013, 21, 133–141.
Wang, X.; Zhang, T. Higher-order Newton-type iterative methods with and without memory for solving nonlinear equations. Math. Commun. 2014, 19, 91–109.
Ozban, A. Some new variants of Newton's method. Appl. Math. Lett. 2004, 17, 677–682.
Weerakoon, S.; Fernando, T.G.I. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett. 2000, 13, 87–93.
Sharma, J.R.; Arora, H. An efficient family of weighted-Newton methods with optimal eighth order convergence. Appl. Math. Lett. 2014, 29, 1–6.
Soleymani, F.; Khratti, S.K.; Karimi Vanani, S. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett. 2011, 25, 847–853.
Petkovic, M.S.; Neta, B.; Petkovic, L.D.; Dzunic, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2012.
Liu, L.; Wang, X. Eighth-order methods with high efficiency index for solving nonlinear equations. Appl. Math. Comput. 2010, 215, 3449–3454.
Taher, L.; Tahereh, E. A new optimal eighth-order Ostrowski-type family of iterative methods for solving nonlinear equations. Chin. J. Math. 2014, 2014, 369713.
Parimala, S.; Madhu, K.; Jayaraman, J. A new class of optimal eighth order method with two weight functions for solving nonlinear equation. J. Nonlinear Anal. Appl. 2018, 2018, 83–94.
Parimala, S.; Madhu, K.; Jayaraman, J. Revisit of Ostrowski's method and two new higher order methods for solving nonlinear equation. Int. J. Math. Appl. 2018, 6, 263–270.
Sharma, J.R.; Goyal, R.K. Fourth-order derivative-free methods for solving nonlinear equations. Int. J. Comput. Math. 2006, 83, 101–106.
Wait, R. The Numerical Solution of Algebraic Equations; John Wiley & Sons: Hoboken, NJ, USA, 1979.
Ostrowski, A.M. Solutions of Equations and System of Equations; Academic Press: New York, NY, USA, 1960.
Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 1974, 21, 643–651.
Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964.
Noor, M.A.; Waseem, M.; Noor, K.I.; Al-Said, E. Variational iteration technique for solving a system of nonlinear equations. Optim. Lett. 2013, 7, 7991–8007.
Sharifi, M.; Babajee, D.K.R.; Soleymani, F. Finding the solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl. 2012, 63, 764–774.
Chun, C.; Lee, M.Y.; Neta, B.; Dzunic, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438.
Fang, L.; Sun, L.; He, G. An efficient Newton-type method with fifth-order convergence for solving nonlinear equations. Comput. Appl. Math. 2008, 27, 269–274.
Parimala, S.; Madhu, K.; Jayaraman, J. Optimal fourth order methods with its multi-step version for nonlinear equation and their dynamics. Communicated.
Kantrowitz, R.; Neumann, M.M. Some real analysis behind optimization of projectile motion. Mediterr. J. Math. 2014, 11, 1081–1097.
Griffiths, D.V.; Smith, I.M. Numerical Methods for Engineers, 2nd ed.; Chapman and Hall/CRC (Taylor and Francis Group): Boca Raton, FL, USA, 2011.
Bradie, B. A Friendly Introduction to Numerical Analysis; Pearson Education Inc.: London, UK, 2006.
Jain, D. Families of Newton-like methods with fourth-order convergence. Int. J. Comput. Math. 2013, 90, 1072–1082.