Improved Higher Order Compositions for Nonlinear Equations

Abstract: In the present study, two new compositions of convergence order six are presented for solving nonlinear equations. The first method is obtained from the third-order method of Homeier using linear interpolation, and the second from the third-order method of Traub using divided differences. The first method requires three evaluations of the function and one evaluation of the first derivative, thereby enhancing the efficiency index. In the second method, the number of derivative computations is reduced by approximating one derivative with divided differences. Various numerical experiments are performed which demonstrate the accuracy and efficacy of the proposed methods.


Introduction
The design and conceptualization of higher order iterative methods for solving nonlinear equations is of great importance in numerical analysis and many scientific branches [1–6]. A plethora of iterative methods [7–14] have been developed by various researchers to solve nonlinear equations of the form

f(x) = 0, (1)

where f : D ⊂ R → R is a continuously differentiable nonlinear function defined on an open interval D. One of the most widely used iterative methods is Newton's method with quadratic convergence, which is given as

x_{k+1} = x_k − f(x_k)/f′(x_k), k = 0, 1, 2, . . . . (2)

Many other applications, such as transportation, electron theory, the geometric theory of the relativistic string, chemical speciation, chemical engineering, and queuing models, also generate numerous such equations [15–17]. In most cases, the problems transformed into nonlinear equations cannot be solved analytically. In order to approximate their solutions numerically, adequate iterative methods are taken into consideration. The recent trend is to develop higher order iterative methods for solving nonlinear equations of the form (1), as they provide an efficient approximation and more accuracy in finding the solution. Higher-order iterative methods are important because many applications require faster convergence. At the same time, it is very important to maintain an equilibrium between the convergence rate and the operational cost. Newton's method has been modified in a number of ways, at the additional cost of evaluations of the function and derivative and of changes in the points of iteration, in order to increase its efficiency index and order of convergence. Many researchers have proposed numerous higher order methods to improve the convergence of Newton's method.
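As an illustration of the baseline iteration (2), the following minimal Python sketch applies Newton's method to the classical test equation f(x) = x³ + 4x² − 10 = 0, whose root 1.365230013414097 is also quoted with the test functions in Section 5 (function names here are illustrative, not from the paper):

```python
def newton(f, df, x0, tol=1e-14, max_iter=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Classical test equation x^3 + 4x^2 - 10 = 0,
# root = 1.365230013414097... (quadratic convergence expected).
f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
root = newton(f, df, 1.5)
```

Starting from x_0 = 1.5, the iteration reaches machine precision in a handful of steps, consistent with quadratic convergence.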
Neta [18] developed a sixth-order iterative method (NEM), given by (3) for k = 0, 1, 2, . . . . This method requires three evaluations of f and one evaluation of its first derivative f′ per iteration.
A sixth-order variant of the Jarratt method (KLM) was developed by Kou and Li [19]; it is given by (4) for k = 0, 1, 2, . . . . This method requires two evaluations of f and two evaluations of f′ per iteration.
The first sixth-order method of Singh [20] (SM1) is given by (5). This method also requires two evaluations each of f and f′ per iteration.
The second sixth-order method of Singh [20] (SM2) is given by (6). This method utilizes three evaluations of f and one evaluation of f′ per iteration.
Sharma et al. [21] proposed a sixth-order iterative method (SSM), given by (7) for k = 0, 1, 2, . . . . This method utilizes two evaluations of the function f and two evaluations of the first-order derivative f′ per step.
Motivated by the ongoing research in this direction, we develop and analyze two higher order iterative methods for solving nonlinear equations using the techniques of linear interpolation and divided differences [5,22–27]. The first sixth-order method is obtained by introducing a third step and approximating its derivative by linear interpolation. In a similar manner, a third step is added to a second third-order method, but its derivative is approximated by divided differences up to second order, leading again to a sixth-order method. Convergence analysis of both methods is established. The efficiency index of the first proposed method is enhanced from 1.43097 to 1.56508, and the second method involves one less computation of the derivative. This is the novelty behind the present work. Various nonlinear equations are solved, and the comparison results indicate better performance of the first presented scheme over the existing ones [18–21].
The contents of the paper are summarized as follows. Section 2 contains preliminaries, definitions and auxiliary results. Section 3 establishes the first sixth-order method, along with its convergence analysis using linear interpolation. The development and analysis of the second sixth-order method using divided differences are presented in Section 4. In Section 5, numerical examples are worked out to ascertain the theoretical postulates by comparing the proposed methods with the existing methods. Section 6 contains the concluding remarks.

Preliminaries
In order to make the study as self-contained as possible, we include some standard definitions and results.

Definition 1. Let {v_k} be a sequence convergent to some parameter ψ. Then, the convergence is called:
(i) Linear, if there exist a parameter l, 0 < l < 1, and a natural number k_0 such that |v_{k+1} − ψ| ≤ l|v_k − ψ| for each k ≥ k_0.
(ii) Of convergence order q, q ≥ 2, if there exist a parameter L > 0 and a natural number k_0 such that |v_{k+1} − ψ| ≤ L|v_k − ψ|^q for each k ≥ k_0.
Definition 2. Let ψ be a root of the function f. Suppose that v_{k−1}, v_k, v_{k+1}, v_{k+2} are consecutive iterates close to ψ. Then, the computational order of convergence ρ is defined by the formula

ρ ≈ ln(|v_{k+1} − ψ|/|v_k − ψ|) / ln(|v_k − ψ|/|v_{k−1} − ψ|).

A second type of convergence order, the approximate computational order of convergence α, is defined by the formula

α ≈ ln(|v_{k+2} − v_{k+1}|/|v_{k+1} − v_k|) / ln(|v_{k+1} − v_k|/|v_k − v_{k−1}|).

The efficiency index q^{1/δ}, where q is the convergence order and δ is the total number of new function evaluations per iteration, is often utilized to compare different methods.
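These definitions can be checked numerically. The Python sketch below (with illustrative names) estimates α from four consecutive Newton iterates, where a value close to 2 is expected, and evaluates the efficiency indices 6^{1/5} and 6^{1/4} that appear later in the paper:

```python
import math

def newton_iterates(f, df, x0, n):
    """Generate n Newton steps x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))
    return xs

def acoc(v0, v1, v2, v3):
    """Approximate computational order of convergence (Definition 2)
    estimated from four consecutive iterates."""
    return (math.log(abs(v3 - v2) / abs(v2 - v1))
            / math.log(abs(v2 - v1) / abs(v1 - v0)))

f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
xs = newton_iterates(f, df, 1.5, 4)
alpha = acoc(*xs[-4:])     # close to 2 for Newton's method

# Efficiency indices q^(1/delta) discussed in the paper:
ei_five = 6 ** (1 / 5)     # ~1.43097 (order six, five evaluations)
ei_four = 6 ** (1 / 4)     # ~1.56508 (order six, four evaluations)
```

The estimate α stabilizes near the theoretical order once the iterates are close to the root; too many extra iterations hit machine precision and make the difference quotients meaningless.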
Next, we restate Taylor's expansion formula for real functions.
Lemma 1. Let f : R → R be m-times differentiable in an interval S. Then, for all x, d ∈ S the following expansion holds for some ξ between x and d:

f(x) = Σ_{j=0}^{m−1} (f^{(j)}(d)/j!) (x − d)^j + (f^{(m)}(ξ)/m!) (x − d)^m.

Development of First Sixth Order Iterative Method
In this section, we propose a three-step iterative method for solving the nonlinear Equation (1), built from the third-order Newton-type composition given by Homeier [28]. This method is given as follows:

y_k = x_k − f(x_k)/(2 f′(x_k)),
x_{k+1} = x_k − f(x_k)/f′(y_k), k = 0, 1, 2, . . . . (8)

The third-order method (8) is extended to a sixth-order iterative method by adding a Newton-like step, yielding method (9), where k = 0, 1, 2, . . . and the initial approximation x_0 is chosen suitably. The efficiency index of this method is 6^{1/5} = 1.43097. The foremost aim of our study is to develop a novel sixth-order iterative method with a higher efficiency index. To this end, we reduce the number of evaluations using a linear interpolation formula on the previously computed values. This simplification gives the approximation (11). Substituting (11) in (9), the new three-step sixth-order method (12) is obtained. This method utilizes two evaluations of the function f and two evaluations of the first-order derivative f′ at each step. The convergence analysis of the sixth-order method (12) is established in the next theorem.
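Because the displayed formulas were lost in extraction, the Python sketch below shows one plausible instance of this construction: Homeier's scheme followed by a Newton-like step in which f′(z_k) is replaced by a linear interpolant through (x_k, f′(x_k)) and (y_k, f′(y_k)). The interpolation choice and all names are assumptions for illustration and need not coincide with the paper's exact formulas (11) and (12):

```python
def composed_method(f, df, x0, tol=1e-14, max_iter=30):
    """Illustrative three-step composition: Homeier's third-order
    scheme plus a Newton-like step with an interpolated derivative.
    A sketch of the construction pattern, not the paper's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:
            return x
        dfx = df(x)
        y = x - fx / (2 * dfx)          # Homeier predictor
        dfy = df(y)
        z = x - fx / dfy                # third-order corrector
        # linear interpolation of f' through (x, f'(x)), (y, f'(y))
        dfz = dfx + (dfy - dfx) * (z - x) / (y - x)
        x_new = z - f(z) / dfz          # Newton-like third step
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: x**3 + 4 * x**2 - 10
df = lambda x: 3 * x**2 + 8 * x
root = composed_method(f, df, 1.5)
```

Per iteration this sketch evaluates f twice (at x_k and z_k) and f′ twice (at x_k and y_k), matching the evaluation count stated for method (12).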
Theorem 1. Let f : D ⊂ R → R be a sufficiently differentiable function in an open interval D and let x_0 be a close approximation to its simple root ψ ∈ D. Then, the iterative method (12) satisfies an error equation of the form e_{k+1} = C e_k^6 + O(e_k^7) for k = 2, 3, . . ., where e_k = x_k − ψ and the constant C depends on the derivatives of f at ψ.

Proof.
Let e_k = x_k − ψ be the error in the kth iterate. Applying the Taylor expansion of f(x_k) and f′(x_k) about ψ, we obtain (14) and (15). Substituting (14) and (15) in the first substep of (12) yields the expansion of y_k − ψ. The Taylor expansion about ψ then gives the expansions of f(y_k) and f′(y_k). Substituting (14), (15) and (17) in the second substep of (12) renders the expansion of z_k − ψ. Expanding f(z_k) about ψ by Taylor expansion, we obtain (19). In view of (11), we obtain (20). Substituting (19) and (20) in the last substep of (12), we obtain the stated error equation.

The efficiency index of the method (12) is enhanced to 6^{1/4} = 1.56508, which is better than that of method (9).

Development of Second Sixth Order Iterative Method
This section describes another sixth-order iterative method for solving nonlinear equations, together with its convergence analysis. Traub [29] proposed a third-order iterative method, given for k = 0, 1, 2, . . . as:

y_k = x_k − f(x_k)/f′(x_k),
x_{k+1} = y_k − f(y_k)/f′(x_k). (21)

The new sixth-order iterative method (22), obtained by extending (21) in a similar manner as in the previous section, is defined for k = 0, 1, 2, . . ., where x_0 is a suitably chosen initial approximation close to the root. This technique requires three evaluations of the function and two evaluations of the derivative per iteration. Here, the number of derivative evaluations is reduced by approximating f′(z_k) using divided differences up to second order. Expanding f′(z_k) by Taylor expansion about y_k up to second order, we obtain (23). Here, f[z_k, y_k] = (f(z_k) − f(y_k))/(z_k − y_k) denotes the divided difference of first order [30]. Similarly, the approximation of f′(y_k) is given by (24). Differentiating (23) and substituting the approximations of f′(y_k) and f″(y_k) in (24), we obtain (25). Then, substituting (25) in the last substep of (22), the new three-step sixth-order method (26) is obtained. This method utilizes three evaluations of the function f and one evaluation of the first-order derivative f′ at each step. The next theorem establishes the convergence of the iterative method (26).

Theorem 2. Let f : D ⊂ R → R, D being an open interval, be a sufficiently differentiable function, and let x_0 be a close approximation to its simple root ψ ∈ D. Then, the iterative method (26) satisfies an error equation of the form e_{k+1} = C̃ e_k^6 + O(e_k^7) for k = 2, 3, . . ., where e_k = x_k − ψ.
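Since the exact expressions (23)–(25) are not legible in this copy, the Python fragment below only illustrates the standard divided-difference machinery the text describes: a second-order estimate of a derivative at z from function values at three nearby points (all names are illustrative assumptions):

```python
import math

def dd1(f, a, b):
    """First-order divided difference f[a, b]."""
    return (f(b) - f(a)) / (b - a)

def dd2(f, a, b, c):
    """Second-order divided difference f[a, b, c]."""
    return (dd1(f, b, c) - dd1(f, a, b)) / (c - a)

def df_approx(f, x, y, z):
    """Derivative estimate at z from the quadratic Newton interpolant
    through x, y, z:
        p(t)  = f(x) + f[x,y](t - x) + f[x,y,z](t - x)(t - y),
        p'(z) = f[x,y] + f[x,y,z]((z - x) + (z - y))."""
    return dd1(f, x, y) + dd2(f, x, y, z) * ((z - x) + (z - y))

f = math.sin
x, y, z = 1.0, 1.01, 1.02
approx = df_approx(f, x, y, z)
exact = math.cos(z)
err = abs(approx - exact)   # error O(h^2) in the node spacing h
```

Using the second-order term makes the estimate markedly more accurate than the first-order divided difference f[y, z] alone, which is the effect exploited when replacing f′(z_k) in (22).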

Proof.
Let e_k = x_k − ψ be the error in the kth iterate. Applying the Taylor expansion of f(x_k) and f′(x_k) about ψ, we obtain (28) and (29). Substituting (28) and (29) in the first substep of (26) yields the expansion of y_k − ψ. The Taylor expansion about ψ gives (31). Substituting (28), (29) and (31) in the second substep of (26) renders the expansion of z_k − ψ. Expanding f(z_k) about ψ by Taylor expansion, we obtain (33). From (25), we obtain (34). Substituting (33) and (34) in the last substep of (26), we obtain the stated error equation.

The method (26) is better than (22), as it requires one less evaluation of the derivative at each iteration.

Numerical Testing
In this section, the applicability of the proposed methods (12) and (26), henceforth denoted GM1 and GM2, respectively, is demonstrated on various nonlinear equations, thus validating the theoretical results obtained so far. Such nonlinear equations arise in diverse areas of science and engineering [5,6]. The results are compared with the methods SM1, SM2, NEM, KLM and SSM given by (5), (6), (3), (4) and (7), respectively. The test functions are displayed in Table 1, together with the roots correct to 15 decimal places. Comparisons of the number of iterations and the total number of function evaluations are displayed in Tables 2 and 3, respectively.

Table 1. Test functions f(x) and their roots α (e.g., α = 1.365230013414097).

Table 2. Comparison of the number of iterations.

All the computations are performed in the programming package Mathematica [31] using 600 significant digits on an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz with 8 GB of RAM running Windows 10 Pro, version 2017. It can be observed that the accuracy of the numerical approximations to the root produced by the proposed method GM1 is higher than that of the existing methods in most of the examples, while GM2 is competitive with the other methods. Thus, the numerical experiments demonstrate the novelty and applicability of the present study.

Conclusions
The current study includes the development of two sixth-order compositions for solving nonlinear equations. This has been done by adding a Newton-like step and approximating the resulting derivative by linear interpolation and by divided differences, respectively. The enhancement of the efficiency index of the first iterative method from 1.43097 to 1.56508 establishes the motivation behind the presented work. The second method involves one less evaluation of the derivative of the function, thereby increasing its applicability. Numerical results corroborate the advantage of the proposed methods over existing ones of the same order. In the future, we intend to extend these methods to Banach space-valued operators and equations.

Table 3.
Comparison of the number of function evaluations. The comparison results for |x_{k+1} − x_k| and |f(x_k)| for all considered examples are displayed in Tables 4 and 5, respectively, up to the third iteration.

Table 4.
Comparison of |x k+1 − x k | for all methods.

Table 5.
Comparison of | f (x k )| for all methods.