Article

Efficient Two-Step Fifth-Order and Its Higher-Order Algorithms for Solving Nonlinear Systems with Applications

by Parimala Sivakumar and Jayakumar Jayaraman *,†
Department of Mathematics, Pondicherry Engineering College, Pondicherry 605014, India
* Author to whom correspondence should be addressed.
† Both authors have read and approved the final manuscript.
Axioms 2019, 8(2), 37; https://doi.org/10.3390/axioms8020037
Submission received: 20 February 2019 / Revised: 20 March 2019 / Accepted: 26 March 2019 / Published: 1 April 2019

Abstract: This manuscript presents a new two-step weighted Newton's algorithm with convergence order five for approximating solutions of systems of nonlinear equations. This algorithm needs evaluation of two vector functions and two Fréchet derivatives per iteration. Furthermore, it is improved into a general multi-step algorithm, with one more vector-function evaluation per step, with convergence order 3k + 5, k ≥ 1. Error analysis providing the order of convergence of the algorithms and their computational efficiency, based on computational cost, are discussed. Numerical implementation on some test problems is included, and a comparison with well-known equivalent algorithms is presented. To verify the applicability of the proposed algorithms, we have implemented them on 1-D and 2-D Bratu problems. The presented algorithms perform better than many existing algorithms and are equivalent to a few available algorithms.

1. Introduction

The design of iterative algorithms for solving nonlinear systems of equations is an essential and challenging task in the domain of numerical analysis. Nonlinearity is a phenomenon that occurs throughout physical and natural events: phenomena with an inherently nonlinear character arise frequently in fluid and plasma mechanics, gas dynamics, combustion, ecology, biomechanics, elasticity, relativity, chemical reactions, economic modeling, transportation theory, etc. For this reason, much current mathematical research attaches importance to the development and analysis of methods for nonlinear systems. Hence, finding a solution α of the nonlinear system Φ(x) = 0 is a classical and difficult problem that unlocks the behavior of many application problems in science and engineering, wherein Φ : D ⊆ ℝⁿ → ℝⁿ is a sufficiently Fréchet differentiable function on an open convex set D.
In the last decade, many iterative techniques have been proposed for solving systems of nonlinear equations; see, for instance, [1,2,3,4,5,6,7] and the references therein. The most popular method for obtaining a solution α ∈ D is Newton's method (2ndNM)

$$x^{(r+1)} = G_{2ndNM}(x^{(r)}) = x^{(r)} - [\Phi'(x^{(r)})]^{-1}\,\Phi(x^{(r)}), \qquad r = 0, 1, 2, \ldots,$$

where G_method denotes the corresponding iterative scheme and [Φ′(x^(r))]^{−1} denotes the inverse of the first Fréchet derivative Φ′(x^(r)). The 2ndNM method needs one function evaluation, one derivative evaluation and one matrix inversion per iteration, and produces second-order convergence.
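As an illustration (not part of the paper's algorithms), a minimal NumPy sketch of the 2ndNM iteration follows; in practice one solves a linear system rather than forming the inverse explicitly. The function names and tolerances are our own.

```python
import numpy as np

def newton_2ndNM(phi, dphi, x0, tol=1e-12, max_iter=100):
    """Newton's method for Phi(x) = 0: x <- x - [Phi'(x)]^{-1} Phi(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(dphi(x), phi(x))  # solve Phi'(x) dx = Phi(x)
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```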
Another classical scheme for solving systems of nonlinear equations is the two-step third-order variant of Newton's method (3rdTM) proposed by Traub [8], given by

$$x^{(r+1)} = G_{3rdTM}(x^{(r)}) = G_{2ndNM}(x^{(r)}) - [\Phi'(x^{(r)})]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big). \tag{1}$$
The fourth-order Newton's method (4thNR), formed using two steps and given by

$$x^{(r+1)} = G_{4thNR}(x^{(r)}) = G_{2ndNM}(x^{(r)}) - \big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \tag{2}$$

was proposed by Noor et al. [5] based on the variational iteration technique. Abad et al. [1] combined the Newton and Traub methods to obtain a fifth-order method (5thACT) with three steps, given below:

$$x^{(r+1)} = G_{5thACT}(x^{(r)}) = G_{3rdTM}(x^{(r)}) - \big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(G_{3rdTM}(x^{(r)})\big). \tag{3}$$
Sharma and Arora [6] proposed an eighth-order three-step method (8thSA) given by

$$\begin{aligned} x^{(r+1)} = G_{8thSA}(x^{(r)}) &= z(x^{(r)}) - \Big[\tfrac{7}{2}I - G(x^{(r)})\big(4I - \tfrac{3}{2}G(x^{(r)})\big)\Big]\big[\Phi'(x^{(r)})\big]^{-1}\,\Phi(z^{(r)}), \\ \text{where}\quad z(x^{(r)}) &= G_{2ndNM}(x^{(r)}) - \Big[\tfrac{13}{4}I - G(x^{(r)})\big(\tfrac{7}{2}I - \tfrac{5}{4}G(x^{(r)})\big)\Big]\big[\Phi'(x^{(r)})\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \\ G(x^{(r)}) &= \big[\Phi'(x^{(r)})\big]^{-1}\,\Phi'\big(G_{2ndNM}(x^{(r)})\big). \end{aligned} \tag{4}$$
Recently, Madhu et al. [9] proposed a fifth-order two-step method (5thMBJ) given by

$$\begin{aligned} x^{(r+1)} = G_{5thMBJ}(x^{(r)}) &= G_{2ndNM}(x^{(r)}) - H_1(x^{(r)})\,\big[\Phi'(x^{(r)})\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \\ H_1(x^{(r)}) &= 2I - \tau(x^{(r)}) + \tfrac{5}{4}\big(\tau(x^{(r)}) - I\big)^2, \qquad \tau(x^{(r)}) = \big[\Phi'(x^{(r)})\big]^{-1}\,\Phi'\big(G_{2ndNM}(x^{(r)})\big). \end{aligned} \tag{5}$$
Furthermore, this method was extended, using one additional function evaluation per step, to produce convergence order 3k + 5 (k ≥ 1); the resulting (3k+5)thMBJ method is given below:

$$\begin{aligned} x^{(r+1)} = G_{(3k+5)thMBJ}(x^{(r)}) &= \mu_k(x^{(r)}), \\ \mu_j(x^{(r)}) &= \mu_{j-1}(x^{(r)}) - H_2(x^{(r)})\,\big[\Phi'(x^{(r)})\big]^{-1}\,\Phi\big(\mu_{j-1}(x^{(r)})\big), \\ H_2(x^{(r)}) &= 2I - \tau(x^{(r)}) + \tfrac{3}{2}\big(\tau(x^{(r)}) - I\big)^2, \\ \mu_0(x^{(r)}) &= G_{5thMBJ}(x^{(r)}), \qquad j = 1, 2, \ldots, k, \quad k \geq 1. \end{aligned} \tag{6}$$
To derive the order of convergence of such iterative methods, higher-order derivatives are used, even though these derivatives do not appear in the formulas themselves. The solutions of the equation Φ(x) = 0 can rarely be found in closed form, so most methods for solving such equations are iterative, and convergence analysis is an essential part of establishing the order of any iterative method. When the convergence domain is narrow, one needs additional hypotheses to enlarge it; moreover, choosing good initial approximations requires knowledge of the convergence radius. The applicability of these methods therefore depends on the assumptions made on the higher-order derivatives of the function. Many research papers concerned with the local and semi-local convergence of iterative methods are available in [10,11,12,13]. One of the most attractive features of any numerical algorithm is how it deals with large-scale systems of nonlinear equations. Recently, Gonglin Yuan et al. [14,15] proposed conjugate gradient algorithms, globally convergent under suitable conditions, that possess some good properties for solving unconstrained optimization problems.
The primary goal of this manuscript is to propose higher-order iterative techniques that incur low computational cost for very large nonlinear systems. Motivated by the different methods available in the literature, a new efficient two-step method with convergence order five is proposed for solving systems of nonlinear equations. The new method consists of two function evaluations, two Fréchet-derivative evaluations and two inverse evaluations per iteration. Its order can be increased by three units whenever an additional step is included. This idea is generalized to obtain new multi-step methods of order 3k + 5, k ≥ 1, the convergence order increasing by three units for every step, while each new step requires only one new function evaluation.
The remaining part of this manuscript is organized as follows. Section 2 presents two new algorithms, one with convergence order five and the other its multi-step version of order 3k + 5, k ≥ 1. In Section 3, the convergence analysis of the new algorithms is presented. Computational efficiencies of the proposed algorithms are calculated based on their computational cost, and a comparison with other methods in terms of an efficiency ratio is reported in Section 4. Numerical results for some test problems, and a comparison with a few available equivalent methods, are given in Section 5. Two application problems, the 1-D and 2-D Bratu problems, are solved using the presented methods in Section 6. Section 7 contains a short conclusion.

2. Development of Algorithms

Efficient fifth-order algorithm:
The method proposed below is a two-step weighted iterative algorithm (denoted 5thPJ), which is based on algorithm (2):

$$\begin{aligned} G_{2ndNM}(x^{(r)}) &= x^{(r)} - \big[\Phi'(x^{(r)})\big]^{-1}\,\Phi(x^{(r)}), \\ x^{(r+1)} = G_{5thPJ}(x^{(r)}) &= G_{2ndNM}(x^{(r)}) - H_1(x^{(r)})\,\big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \\ \text{where}\quad H_1(x^{(r)}) &= I + \tfrac{1}{4}\big(\tau(x^{(r)}) - I\big)^2, \qquad \tau(x^{(r)}) = \big[\Phi'(x^{(r)})\big]^{-1}\,\Phi'\big(G_{2ndNM}(x^{(r)})\big), \end{aligned} \tag{7}$$

where I denotes the n × n identity matrix. In the above algorithm, the weight function H₁(x^(r)) is chosen so as to produce fifth-order convergence while keeping the same number of function evaluations as in method (2).
Efficient (3k+5)th-order algorithm:
With the 5thPJ algorithm as the base, we introduce new steps to obtain the following algorithm (denoted (3k+5)thPJ), evaluating one new function whenever a new step is included:

$$\begin{aligned} x^{(r+1)} = G_{(3k+5)thPJ}(x^{(r)}) &= \mu_k(x^{(r)}), \\ \mu_j(x^{(r)}) &= \mu_{j-1}(x^{(r)}) - H_2(x^{(r)})\,\big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(\mu_{j-1}(x^{(r)})\big), \\ \text{where}\quad H_2(x^{(r)}) &= I + \tfrac{1}{2}\big(\tau(x^{(r)}) - I\big)^2, \qquad \mu_0(x^{(r)}) = G_{5thPJ}(x^{(r)}), \quad j = 1, 2, \ldots, k, \quad k \geq 1. \end{aligned} \tag{8}$$

The above algorithm has convergence order 3k + 5. The case k = 0 reproduces the 5thPJ algorithm.
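A sketch of one 5thPJ iteration (k = 0) and its multi-step extension per (7) and (8) is given below, assuming SciPy is available; reusing the LU factors of Φ′(x^(r)) and Φ′(G_2ndNM(x^(r))) makes each additional step cost only one new function evaluation, as claimed above. The driver names are ours.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def pj_step(phi, dphi, x, k=0):
    """One iteration of 5thPJ (k = 0) or (3k+5)thPJ (k >= 1), cf. (7)-(8)."""
    n = x.size
    I = np.eye(n)
    Jx = lu_factor(dphi(x))                 # factor Phi'(x) once
    y = x - lu_solve(Jx, phi(x))            # y = G_2ndNM(x)
    Jy_mat = dphi(y)
    Jy = lu_factor(Jy_mat)                  # factor Phi'(y) once
    tau = lu_solve(Jx, Jy_mat)              # tau = [Phi'(x)]^{-1} Phi'(y)
    T2 = (tau - I) @ (tau - I)              # (tau - I)^2
    mu = y - (I + 0.25 * T2) @ lu_solve(Jy, phi(y))  # 5thPJ step, H1 = I + (1/4)(tau-I)^2
    H2 = I + 0.5 * T2                                # H2 = I + (1/2)(tau-I)^2
    for _ in range(k):                      # each extra step raises the order by 3
        mu = mu - H2 @ lu_solve(Jy, phi(mu))
    return mu
```

Each pass of the loop reuses both factorizations and evaluates Φ once, matching the cost analysis of Section 4.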

3. Convergence Analysis

To prove the theoretical convergence, the usual (n-dimensional) Taylor expansion is applied. We recall some important results from [3]:
Let Φ : D ⊆ ℝⁿ → ℝⁿ be sufficiently Fréchet differentiable in D. Assume that the ith derivative of Φ at u ∈ ℝⁿ, i ≥ 1, is the i-linear function Φ^(i)(u) : ℝⁿ × ⋯ × ℝⁿ → ℝⁿ such that Φ^(i)(u)(v₁, …, v_i) ∈ ℝⁿ. Given α + h ∈ ℝⁿ, which lies near the solution α of the system of nonlinear equations Φ(x) = 0, one may apply Taylor's expansion (whenever the Jacobian matrix Φ′(α) is nonsingular) to get

$$\Phi(\alpha + h) = \Phi'(\alpha)\left[h + \sum_{i=2}^{p-1} C_i h^i\right] + O(h^p),$$

where C_i = (1/i!)[Φ′(α)]^{−1}Φ^(i)(α), i ≥ 2. It is noted that C_i h^i ∈ ℝⁿ since Φ^(i)(α) ∈ L(ℝⁿ × ⋯ × ℝⁿ, ℝⁿ) and [Φ′(α)]^{−1} ∈ L(ℝⁿ). Expanding Φ′(α + h) using Taylor's series, we get

$$\Phi'(\alpha + h) = \Phi'(\alpha)\left[I + \sum_{i=2}^{p-1} i\,C_i h^{i-1}\right] + O(h^{p-1}),$$

where I denotes the identity matrix; we remark that iC_i h^{i−1} ∈ L(ℝⁿ). The error at the rth iteration is denoted by E^(r) = x^(r) − α. The equation E^(r+1) = L E^{(r)^p} + O(E^{(r)^{p+1}}) is called the error equation, where L is a p-linear function, L ∈ L(ℝⁿ × ⋯ × ℝⁿ, ℝⁿ), and p denotes the order of convergence. Also, E^{(r)^p} = (E₁^(r), E₂^(r), …, E_n^(r))^p.
By using the above concepts, the following theorems are established.
Theorem 1.
Let Φ : D ⊆ ℝⁿ → ℝⁿ be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ ℝⁿ. Choose x^(0) close enough to α, where α is a solution of the system Φ(x) = 0 and Φ′(x) is continuous and nonsingular at α. Then the iterative formula (7) produces a sequence {x^(r)}_{r≥0} that converges to α locally with order five. The error expression is found to be

$$E^{(r+1)} = G_{5thPJ}(x^{(r)}) - \alpha = L_1E^{(r)^5} + O(E^{(r)^6}), \qquad L_1 = 4C_2^4 - C_2C_3C_2.$$
Proof. 
Applying Taylor’s formula around α we get
$$\Phi(x^{(r)}) = \Phi'(\alpha)\left[E^{(r)} + C_2E^{(r)^2} + C_3E^{(r)^3} + C_4E^{(r)^4} + C_5E^{(r)^5} + C_6E^{(r)^6}\right] + O(E^{(r)^7}) \tag{9}$$

and we express the first-order derivative of Φ(x^(r)) as

$$\Phi'(x^{(r)}) = \Phi'(\alpha)\left[I + 2C_2E^{(r)} + 3C_3E^{(r)^2} + 4C_4E^{(r)^3} + 5C_5E^{(r)^4} + 6C_6E^{(r)^5}\right] + O(E^{(r)^6}), \tag{10}$$

where C_i = (1/i!)[Φ′(α)]^{−1}Φ^(i)(α), i = 2, 3, …, and E^(r) = x^(r) − α. Inverting the series (10), we get

$$[\Phi'(x^{(r)})]^{-1} = \left[I + X_1E^{(r)} + X_2E^{(r)^2} + X_3E^{(r)^3} + X_4E^{(r)^4} + X_5E^{(r)^5}\right][\Phi'(\alpha)]^{-1}, \tag{11}$$
where

$$X_1 = -2C_2, \qquad X_2 = 4C_2^2 - 3C_3,$$

$$X_3 = -8C_2^3 + 6C_2C_3 + 6C_3C_2 - 4C_4,$$

$$X_4 = -5C_5 + 9C_3^2 + 8C_2C_4 + 8C_4C_2 + 16C_2^4 - 12C_2^2C_3 - 12C_3C_2^2 - 12C_2C_3C_2$$

and

$$\begin{aligned} X_5 = {}& -32C_2^5 + 24C_3C_2^3 + 24C_2C_3C_2^2 - 16C_4C_2^2 + 24C_2^2C_3C_2 - 18C_3^2C_2 - 16C_2C_4C_2 + 10C_5C_2 \\ & + 24C_2^3C_3 - 18C_3C_2C_3 - 18C_2C_3^2 + 12C_4C_3 - 16C_2^2C_4 + 12C_3C_4 + 10C_2C_5 - 6C_6. \end{aligned}$$
Then

$$\begin{aligned} [\Phi'(x^{(r)})]^{-1}\Phi(x^{(r)}) = {}& E^{(r)} - C_2E^{(r)^2} + 2\big(C_2^2 - C_3\big)E^{(r)^3} + \big({-3C_4} - 4C_2^3 + 4C_2C_3 + 3C_3C_2\big)E^{(r)^4} \\ & + \big(6C_3^2 + 8C_2^4 - 8C_2^2C_3 - 6C_2C_3C_2 - 6C_3C_2^2 + 6C_2C_4 + 4C_4C_2 - 4C_5\big)E^{(r)^5} \\ & + \big({-5C_6} - 2C_2C_5 - 14C_2^2C_4 + 9C_3C_4 + 16C_2^3C_3 - 12C_3C_2C_3 - 12C_2C_3^2 + 8C_4C_3 \\ & \qquad - 16C_2^5 + 12C_3C_2^3 + 12C_2C_3C_2^2 - 8C_4C_2^2 + 12C_2^2C_3C_2 - 9C_3^2C_2 - 8C_2C_4C_2 \\ & \qquad + 5C_5C_2 + 10C_2C_5\big)E^{(r)^6} + O(E^{(r)^7}). \end{aligned}$$
Furthermore, we obtain

$$\begin{aligned} G_{2ndNM}(x^{(r)}) = {}& \alpha + C_2E^{(r)^2} + 2\big({-C_2^2} + C_3\big)E^{(r)^3} + \big(3C_4 + 4C_2^3 - 4C_2C_3 - 3C_3C_2\big)E^{(r)^4} \\ & + \big({-6C_3^2} - 8C_2^4 + 8C_2^2C_3 + 6C_2C_3C_2 + 6C_3C_2^2 - 6C_2C_4 - 4C_4C_2 + 4C_5\big)E^{(r)^5} \\ & + \big(5C_6 + 2C_2C_5 + 14C_2^2C_4 - 9C_3C_4 - 16C_2^3C_3 + 12C_3C_2C_3 + 12C_2C_3^2 - 8C_4C_3 \\ & \qquad + 16C_2^5 - 12C_3C_2^3 - 12C_2C_3C_2^2 + 8C_4C_2^2 - 12C_2^2C_3C_2 + 9C_3^2C_2 + 8C_2C_4C_2 \\ & \qquad - 5C_5C_2 - 10C_2C_5\big)E^{(r)^6} + O(E^{(r)^7}). \end{aligned} \tag{12}$$
Equation (12) leads to

$$\begin{aligned} \Phi\big(G_{2ndNM}(x^{(r)})\big) = \Phi'(\alpha)\Big[& C_2E^{(r)^2} + 2\big({-C_2^2} + C_3\big)E^{(r)^3} + \big(3C_4 + 5C_2^3 - 4C_2C_3 - 3C_3C_2\big)E^{(r)^4} \\ & + \big({-6C_3^2} - 12C_2^4 + 12C_2^2C_3 + 6C_2C_3C_2 + 6C_3C_2^2 - 6C_2C_4 - 4C_4C_2 + 4C_5\big)E^{(r)^5} \\ & + \big(5C_6 + 2C_2C_5 + 14C_2^2C_4 - 9C_3C_4 - 16C_2^3C_3 + 12C_3C_2C_3 + 12C_2C_3^2 - 8C_4C_3 \\ & \qquad + 16C_2^5 - 12C_3C_2^3 - 12C_2C_3C_2^2 + 8C_4C_2^2 - 12C_2^2C_3C_2 + 9C_3^2C_2 + 8C_2C_4C_2 \\ & \qquad - 5C_5C_2 - 10C_2C_5\big)E^{(r)^6}\Big] + O(E^{(r)^7}). \end{aligned} \tag{13}$$
Also,

$$\Phi'\big(G_{2ndNM}(x^{(r)})\big) = \Phi'(\alpha)\left[I + P_1E^{(r)^2} + P_2E^{(r)^3} + P_3E^{(r)^4}\right] + O(E^{(r)^5}), \tag{14}$$

where P₁ = 2C₂², P₂ = 4C₂C₃ − 4C₂³ and P₃ = 2C₂(4C₂³ − 4C₂C₃ − 3C₃C₂ + 3C₄) + 3C₃C₂².
Combining (11) and (14), one gets

$$\begin{aligned} \tau(x^{(r)}) = [\Phi'(x^{(r)})]^{-1}\,\Phi'\big(G_{2ndNM}(x^{(r)})\big) = {}& I - 2C_2E^{(r)} + \big(6C_2^2 - 3C_3\big)E^{(r)^2} + \big(10C_2C_3 + 6C_3C_2 - 16C_2^3 - 4C_4\big)E^{(r)^3} \\ & + \big(2C_2(4C_2^3 - 4C_2C_3 - 3C_3C_2 + 3C_4) - 15C_3C_2^2 - 20C_2^2C_3 + 32C_2^4 - 5C_5 \\ & \qquad + 9C_3^2 + 8C_2C_4 + 8C_4C_2 - 12C_2C_3C_2\big)E^{(r)^4} + O(E^{(r)^5}). \end{aligned} \tag{15}$$
Then we get

$$H_1(x^{(r)}) = I + C_2^2E^{(r)^2} + \big(3C_2C_3 - 6C_2^3\big)E^{(r)^3} + \Big(25C_2^4 - 19C_2^2C_3 + \tfrac{9}{4}C_3^2 - 6C_2C_3C_2 + 4C_2C_4\Big)E^{(r)^4} + O(E^{(r)^5}). \tag{16}$$
After calculating the inverse of Φ′(G_2ndNM(x^(r))) and using (13), we get

$$\begin{aligned} \big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big) = {}& C_2E^{(r)^2} + 2\big(C_3 - C_2^2\big)E^{(r)^3} + \big(3C_4 + 3C_2^3 - 3C_3C_2 - 4C_2C_3\big)E^{(r)^4} \\ & + \big(4C_5 - 6C_2C_4 - 4C_4C_2 + 6C_2^2C_3 + 4C_2C_3C_2 + 6C_3C_2^2 - 6C_3^2 - 4C_2^4\big)E^{(r)^5} \\ & + O(E^{(r)^6}). \end{aligned} \tag{17}$$
We get the required error estimate by using (12), (16) and (17) in (7):

$$E^{(r+1)} = G_{5thPJ}(x^{(r)}) - \alpha = \big(4C_2^4 - C_2C_3C_2\big)E^{(r)^5} + O(E^{(r)^6}).$$
The above result agrees with fifth-order convergence.  □
Theorem 2.
Let Φ : D ⊆ ℝⁿ → ℝⁿ be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ ℝⁿ. Choose x^(0) close enough to α, where α is a solution of the system Φ(x) = 0 and Φ′(x) is continuous and nonsingular at α. Then the iterative formula (8) produces a sequence {x^(r)} that converges to α locally with order 3k + 5, where k is a positive integer, k ≥ 1.
Proof. 
Taylor's expansion of Φ(μ_{j−1}(x^(r))) around α gives

$$\Phi\big(\mu_{j-1}(x^{(r)})\big) = \Phi'(\alpha)\Big[\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + C_2\big(\mu_{j-1}(x^{(r)}) - \alpha\big)^2 + \cdots\Big]. \tag{18}$$
Expanding H₂(x^(r)), we obtain

$$H_2(x^{(r)}) = I + \tfrac{1}{2}\big(\tau(x^{(r)}) - I\big)^2 = I + 2C_2^2E^{(r)^2} + \big(6C_2C_3 - 12C_2^3\big)E^{(r)^3} + \cdots. \tag{19}$$
Evaluating [Φ′(G_2ndNM(x^(r)))]^{−1} and combining with (19) yields

$$H_2(x^{(r)})\,\big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1} = \left[I + L_2E^{(r)^3} + \cdots\right][\Phi'(\alpha)]^{-1}, \qquad L_2 = 2C_2C_3 - 8C_2^3. \tag{20}$$
Equations (18) and (20) lead to

$$\begin{aligned} H_2(x^{(r)})\,\big[\Phi'\big(G_{2ndNM}(x^{(r)})\big)\big]^{-1}\,\Phi\big(\mu_{j-1}(x^{(r)})\big) &= \left[I + L_2E^{(r)^3} + \cdots\right]\Big[\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + C_2\big(\mu_{j-1}(x^{(r)}) - \alpha\big)^2 + \cdots\Big] \\ &= \big(\mu_{j-1}(x^{(r)}) - \alpha\big) + L_2E^{(r)^3}\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + C_2\big(\mu_{j-1}(x^{(r)}) - \alpha\big)^2 + \cdots. \end{aligned} \tag{21}$$
Substituting (21) in (8) and calculating the error term, we get

$$\begin{aligned} \mu_j(x^{(r)}) - \alpha &= \big(\mu_{j-1}(x^{(r)}) - \alpha\big) - \Big[\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + L_2E^{(r)^3}\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + C_2\big(\mu_{j-1}(x^{(r)}) - \alpha\big)^2 + \cdots\Big] \\ &= -L_2E^{(r)^3}\big(\mu_{j-1}(x^{(r)}) - \alpha\big) + \cdots. \end{aligned} \tag{22}$$
It has already been proved that μ₀(x^(r)) − α = L₁E^{(r)^5} + O(E^{(r)^6}). Therefore, taking j = 1, 2, … in (22), we get

$$\begin{aligned} \mu_1(x^{(r)}) - \alpha &= -L_2E^{(r)^3}\big(\mu_0(x^{(r)}) - \alpha\big) + \cdots = -L_2L_1E^{(r)^8} + \cdots, \\ \mu_2(x^{(r)}) - \alpha &= -L_2E^{(r)^3}\big(\mu_1(x^{(r)}) - \alpha\big) + \cdots = L_2(L_2L_1)E^{(r)^{11}} + \cdots = L_2^2L_1E^{(r)^{11}} + \cdots. \end{aligned}$$
Proceeding by induction, we get

$$\mu_k(x^{(r)}) - \alpha = (-L_2)^kL_1E^{(r)^{3k+5}} + O\big(E^{(r)^{3k+6}}\big), \qquad k \geq 1.$$

The above result shows that the method has order of convergence 3k + 5.  □

4. Computational Efficiency of the Algorithms

The efficiency index of an iterative method is measured using Ostrowski's definition [16], EI = p^{1/d}, where p denotes the order of convergence and d denotes the number of functional evaluations per iteration. The algorithms considered here are compared with the proposed algorithms in terms of computational cost. Evaluating the Jacobian Φ′ requires n² scalar function evaluations (all the elements of the matrix Φ′), while evaluating Φ requires n scalar function evaluations (the coordinate functions of Φ). Moreover, any iterative method for systems of nonlinear equations needs one or more matrix inversions, which means a few linear systems must be solved, and the number of operations required for solving these linear systems dominates the computational cost of an iteration. For this reason, Cordero et al. [3] introduced the computational efficiency index (CE), in which Ostrowski's efficiency index is combined with the number of products and quotients required per iteration. It is defined as CE = p^{1/(d+op)}, where op is the number of products and quotients per iteration; the details of its calculation are given in the next paragraph.
The total computational cost for a nonlinear system of n equations in n unknowns is counted as follows. For an iterative function, n scalar functions φ_i (1 ≤ i ≤ n) are evaluated, and the computation of a divided difference requires n(n − 1) scalar function evaluations, where Φ(x) and Φ(y) are computed separately; n² quotients are added for the divided difference. For an inverse linear operator, (n³ − n)/3 products and divisions are taken in the LU decomposition and n² products and divisions for solving the two triangular linear systems. Moreover, n² products are required for multiplying a matrix by a vector or a scalar, and n products for multiplying a vector by a scalar.
The computational cost and efficiency of the proposed algorithms are compared with the methods given by (1) to (6) and with the algorithms presented recently by Sharma et al. [17], which are given below. A fifth-order three-step method (5thSD) is given by

$$\begin{aligned} x^{(r+1)} = G_{5thSD}(x^{(r)}) &= z(x^{(r)}) - \psi(x^{(r)}, y^{(r)})\,\Phi(z^{(r)}), \\ \text{where}\quad z(x^{(r)}) &= G_{2ndNM}(x^{(r)}) - \big[\Phi'(x^{(r)})\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \\ \psi(x^{(r)}, y^{(r)}) &= \Big(2I - \big[\Phi'(x^{(r)})\big]^{-1}\,[z^{(r)}, y^{(r)}; \Phi]\Big)\big[\Phi'(x^{(r)})\big]^{-1}, \end{aligned}$$

with y^(r) = G_2ndNM(x^(r)) and [·, ·; Φ] the first-order divided difference operator.
An eighth-order four-step method (8thSD) is given by

$$\begin{aligned} x^{(r+1)} = G_{8thSD}(x^{(r)}) &= w(x^{(r)}) - \psi(x^{(r)}, y^{(r)})\,\Phi(w^{(r)}), \\ \text{where}\quad w(x^{(r)}) &= z(x^{(r)}) - \psi(x^{(r)}, y^{(r)})\,\Phi(z^{(r)}), \\ z(x^{(r)}) &= G_{2ndNM}(x^{(r)}) - \Big(3I - 2\big[\Phi'(x^{(r)})\big]^{-1}\,[y^{(r)}, x^{(r)}; \Phi]\Big)\big[\Phi'(x^{(r)})\big]^{-1}\,\Phi\big(G_{2ndNM}(x^{(r)})\big), \\ \psi(x^{(r)}, y^{(r)}) &= \Big(2I - \big[\Phi'(x^{(r)})\big]^{-1}\,[z^{(r)}, y^{(r)}; \Phi]\Big)\big[\Phi'(x^{(r)})\big]^{-1}. \end{aligned}$$
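The Sharma-Kumar schemes use the first-order divided difference operator [y, x; Φ]. A common componentwise construction, which satisfies [y, x; Φ](y − x) = Φ(y) − Φ(x), is sketched below; [17] may use an equivalent variant, so treat this definition as an assumption.

```python
import numpy as np

def divided_difference(phi, y, x):
    """Matrix [y, x; Phi]: column j uses coordinates of y up to j and of x after j,
    so that the telescoping sum gives [y, x; Phi](y - x) = Phi(y) - Phi(x)."""
    n = x.size
    M = np.zeros((n, n))
    z = np.array(x, dtype=float)
    f_prev = phi(z)
    for j in range(n):
        z[j] = y[j]                          # swap in y_j, one coordinate at a time
        f_curr = phi(z)
        M[:, j] = (f_curr - f_prev) / (y[j] - x[j])   # assumes y_j != x_j
        f_prev = f_curr
    return M
```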
Table 1 displays the computational cost and computational efficiency (CE) of various methods.
To compare the CE of the considered iterative methods, we calculate the following ratio [17], where C_method, CE_method and p_method denote, respectively, the computational cost, computational efficiency and convergence order of a method:

$$R_{method_1;method_2} = \frac{\log(CE_{method_1})}{\log(CE_{method_2})} = \frac{C_{method_2}\,\log(p_{method_1})}{C_{method_1}\,\log(p_{method_2})}. \tag{23}$$
It is clear that when R_{method1;method2} > 1, iterative method 1 is more efficient than method 2.
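The crossover values quoted below can be reproduced directly from the Table 1 cost polynomials; a small script (our own) for the first comparison:

```python
import math

# Costs C(n) transcribed from Table 1; orders p are 2 and 5.
C_2ndNM = lambda n: n**3/3 + 2*n**2 + 2*n/3
C_5thPJ = lambda n: 2*n**3/3 + 8*n**2 + 7*n/3

def R(c1, p1, c2, p2, n):
    """Ratio (23): R_{1;2} = C_2(n) log(p_1) / (C_1(n) log(p_2))."""
    return c2(n) * math.log(p1) / (c1(n) * math.log(p2))

# Smallest n with R_{5thPJ;2ndNM} > 1; prints 32, as stated below.
print(next(n for n in range(2, 200) if R(C_5thPJ, 5, C_2ndNM, 2, n) > 1))
```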
5thPJ versus 2ndNM: The ratio (23) is given by

$$R_{5thPJ;2ndNM} = \frac{\left(\frac{1}{3}n^3 + 2n^2 + \frac{2}{3}n\right)\log 5}{\left(\frac{2}{3}n^3 + 8n^2 + \frac{7}{3}n\right)\log 2}.$$

Based on the computation, we have R_{5thPJ;2ndNM} > 1 for n ≥ 32. Thus, we conclude that CE_5thPJ > CE_2ndNM for n ≥ 32.
5thPJ versus 4thNR: In this case, the ratio (23) is given by

$$R_{5thPJ;4thNR} = \frac{\left(\frac{2}{3}n^3 + 4n^2 + \frac{4}{3}n\right)\log 5}{\left(\frac{2}{3}n^3 + 8n^2 + \frac{7}{3}n\right)\log 4}.$$

It has been verified that R_{5thPJ;4thNR} > 1 for n ≥ 32. Hence, we have CE_5thPJ > CE_4thNR for n ≥ 32.
5thPJ versus 8thSA: Here the ratio (23) is given by

$$R_{5thPJ;8thSA} = \frac{\left(\frac{1}{3}n^3 + 10n^2 + \frac{23}{3}n\right)\log 5}{\left(\frac{2}{3}n^3 + 8n^2 + \frac{7}{3}n\right)\log 8}.$$

It can be checked that R_{5thPJ;8thSA} > 1 for n = 2, which implies that CE_5thPJ > CE_8thSA for n = 2.
5thPJ versus 8thSD: Here the ratio (23) is given by

$$R_{5thPJ;8thSD} = \frac{\left(\frac{1}{3}n^3 + 15n^2 + \frac{17}{3}n\right)\log 5}{\left(\frac{2}{3}n^3 + 8n^2 + \frac{7}{3}n\right)\log 8}.$$

It has been verified that R_{5thPJ;8thSD} > 1 for 2 ≤ n ≤ 9. Hence, we have CE_5thPJ > CE_8thSD for 2 ≤ n ≤ 9.
8thPJ versus 2ndNM: The ratio (23) is given by

$$R_{8thPJ;2ndNM} = \frac{\left(\frac{1}{3}n^3 + 2n^2 + \frac{2}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 2}.$$

It can be checked that R_{8thPJ;2ndNM} > 1 for n ≥ 13. Thus, we conclude that CE_8thPJ > CE_2ndNM for n ≥ 13.
8thPJ versus 4thNR: The ratio (23) is given by

$$R_{8thPJ;4thNR} = \frac{\left(\frac{2}{3}n^3 + 4n^2 + \frac{4}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 4}.$$

It has been verified that R_{8thPJ;4thNR} > 1 for n ≥ 13. Hence, we conclude that CE_8thPJ > CE_4thNR for n ≥ 13.
8thPJ versus 5thACT: The ratio (23) is given by

$$R_{8thPJ;5thACT} = \frac{\left(\frac{2}{3}n^3 + 5n^2 + \frac{7}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 5}.$$

Based on the computation, we have R_{8thPJ;5thACT} > 1 for n ≥ 19. Thus, we conclude that CE_8thPJ > CE_5thACT for n ≥ 19.
8thPJ versus 5thPJ: The ratio (23) is given by

$$R_{8thPJ;5thPJ} = \frac{\left(\frac{2}{3}n^3 + 8n^2 + \frac{7}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 5}.$$

It can be checked that R_{8thPJ;5thPJ} > 1 for n ≥ 2, which implies that CE_8thPJ > CE_5thPJ for n ≥ 2.
8thPJ versus 8thSA: The ratio (23) is given by

$$R_{8thPJ;8thSA} = \frac{\left(\frac{1}{3}n^3 + 10n^2 + \frac{23}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 8}.$$

It has been verified that R_{8thPJ;8thSA} > 1 for n = 2 and 3. Hence, we conclude that CE_8thPJ > CE_8thSA for n = 2 and 3.
8thPJ versus 8thSD: The ratio (23) is given by

$$R_{8thPJ;8thSD} = \frac{\left(\frac{1}{3}n^3 + 15n^2 + \frac{17}{3}n\right)\log 8}{\left(\frac{2}{3}n^3 + 10n^2 + \frac{13}{3}n\right)\log 8}.$$

Based on the computation, we have R_{8thPJ;8thSD} > 1 for 2 ≤ n ≤ 15, which implies that CE_8thPJ > CE_8thSD for 2 ≤ n ≤ 15.
Consolidating the above ratios, the following theorem is stated to show the superiority of the proposed methods.
Theorem 3.
The computational efficiencies of the 5thPJ and 8thPJ methods satisfy: (a) CE_5thPJ > CE_2ndNM, CE_4thNR, CE_8thSA and CE_8thSD for n ≥ 32, n ≥ 32, n = 2 and 2 ≤ n ≤ 9, respectively; (b) CE_8thPJ > CE_2ndNM, CE_4thNR, CE_5thACT, CE_5thPJ, CE_8thSA and CE_8thSD for n ≥ 13, n ≥ 13, n ≥ 19, n ≥ 2, n = 2 and 3, and 2 ≤ n ≤ 15, respectively.
Remark 1.
The ratios of the proposed method 5thPJ with 5thACT, 5thMBJ, 8thMBJ and 5thSD, and of the proposed method 8thPJ with 5thMBJ, 8thMBJ and 5thSD, do not satisfy the required condition R_{method1;method2} > 1.

5. Numerical Results

The numerical performance of the presented methods is compared with Newton's method and some known methods given at the beginning of this paper. All numerical calculations are done in MATLAB for the test examples given below, working to 500-digit precision when calculating the approximate numerical solutions. The following error residual is adopted to stop the iteration:

$$err_{min} = \|x^{(r+1)} - x^{(r)}\|_2 < 10^{-100}.$$
The following formula is used to find the approximated computational order of convergence p_c:

$$p_c \approx \frac{\log\big(\|x^{(r+1)} - x^{(r)}\|_2 \,/\, \|x^{(r)} - x^{(r-1)}\|_2\big)}{\log\big(\|x^{(r)} - x^{(r-1)}\|_2 \,/\, \|x^{(r-1)} - x^{(r-2)}\|_2\big)}.$$
Here N denotes the number of iterations needed to reach the minimum residual (err_min), n_inv represents the total number of Fréchet-derivative inversions, and n_total represents the total number of scalar evaluations of Φ and Φ′ needed to reach the minimum residual, as in [7].
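A small helper (illustrative, not the paper's code) that evaluates p_c from the last four iterates:

```python
import numpy as np

def computational_order(history):
    """Approximate p_c from iterates [..., x^(r-2), x^(r-1), x^(r), x^(r+1)]."""
    x_rm2, x_rm1, x_r, x_rp1 = history[-4:]
    e0 = np.linalg.norm(x_rp1 - x_r)     # ||x^(r+1) - x^(r)||_2
    e1 = np.linalg.norm(x_r - x_rm1)     # ||x^(r)   - x^(r-1)||_2
    e2 = np.linalg.norm(x_rm1 - x_rm2)   # ||x^(r-1) - x^(r-2)||_2
    return np.log(e0 / e1) / np.log(e1 / e2)
```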
Test Example 1 (TE1):
$$\Phi(y_1, y_2) = \big(y_1 + \exp(y_2) - \cos(y_2),\; 3y_1 - y_2 - \sin(y_2)\big).$$

The Jacobian matrix is given by

$$\Phi'(y) = \begin{pmatrix} 1 & \exp(y_2) + \sin(y_2) \\ 3 & -1 - \cos(y_2) \end{pmatrix}.$$

The initial approximation is taken as y^(0) = (1.5, 2)^T and the analytic solution is given by α = (0, 0)^T.
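For instance, TE1 can be run through the pj_step sketch of Section 2 (hypothetical driver; the paper's computations were done in MATLAB with 500-digit precision, whereas this uses double precision):

```python
import numpy as np

phi = lambda y: np.array([y[0] + np.exp(y[1]) - np.cos(y[1]),
                          3*y[0] - y[1] - np.sin(y[1])])
dphi = lambda y: np.array([[1.0, np.exp(y[1]) + np.sin(y[1])],
                           [3.0, -1.0 - np.cos(y[1])]])

x = np.array([1.5, 2.0])                    # y^(0)
for _ in range(6):
    x = pj_step(phi, dphi, x)               # 5thPJ iterations toward alpha = (0, 0)
print(x)
```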
Test Example 2 (TE2):
$$\begin{aligned} y_2y_3 + y_4(y_2 + y_3) &= 0, \\ y_1y_3 + y_4(y_1 + y_3) &= 0, \\ y_1y_2 + y_4(y_1 + y_2) &= 0, \\ y_1y_2 + y_1y_3 + y_2y_3 &= 1. \end{aligned}$$
The above system is solved by taking the starting approximation y^(0) = (0.5, 0.5, 0.5, −0.2)^T.
The solution is α ≈ (0.577350, 0.577350, 0.577350, −0.288675)^T. The Jacobian matrix is given by
$$\Phi'(y) = \begin{pmatrix} 0 & y_3 + y_4 & y_2 + y_4 & y_2 + y_3 \\ y_3 + y_4 & 0 & y_1 + y_4 & y_1 + y_3 \\ y_2 + y_4 & y_1 + y_4 & 0 & y_1 + y_2 \\ y_2 + y_3 & y_1 + y_3 & y_1 + y_2 & 0 \end{pmatrix}.$$
Test Example 3 (TE3):
$$\cos y_2 - \sin y_1 = 0, \qquad y_3^{\,y_1} - \frac{1}{y_2} = 0, \qquad \exp y_1 - y_3^2 = 0.$$
The solution of the above system is α ≈ (0.909569, 0.661227, 1.575834)^T. The initial vector for the iteration is taken as x^(0) = (1, 0.5, 1.5)^T. The Jacobian matrix is given by

$$\Phi'(y) = \begin{pmatrix} -\cos y_1 & -\sin y_2 & 0 \\ y_3^{\,y_1}\ln y_3 & 1/y_2^2 & y_1\,y_3^{\,y_1 - 1} \\ \exp y_1 & 0 & -2y_3 \end{pmatrix}.$$
Test Example 4 (TE4): The following boundary value problem is considered:

$$y'' + y^3 = 0, \qquad y(0) = 0, \quad y(1) = 1,$$
where the interval [0, 1] is divided by the uniform mesh

$$u_0 = 0 < u_1 < u_2 < \cdots < u_{m-1} < u_m = 1, \qquad u_{j+1} = u_j + h, \quad h = 1/m.$$

Denote y₀ = y(u₀) = 0, y₁ = y(u₁), …, y_{m−1} = y(u_{m−1}), y_m = y(u_m) = 1.
Discretizing the second derivative by the difference formula

$$y'' \approx \frac{y_{r-1} - 2y_r + y_{r+1}}{h^2}, \qquad r = 1, 2, 3, \ldots, m-1,$$

we obtain m − 1 nonlinear equations in m − 1 variables:

$$y_{r-1} - 2y_r + y_{r+1} + h^2y_r^3 = 0, \qquad r = 1, 2, 3, \ldots, m-1.$$
The above equations are solved by taking m = 16 (i.e., n = 15) and y^(0) = (1, 1, …, 1)^T as the initial approximation. The resulting Jacobian matrix, with 43 non-zero elements, is

$$\Phi'(y) = \begin{pmatrix} 3h^2y_1^2 - 2 & 1 & 0 & \cdots & 0 & 0 \\ 1 & 3h^2y_2^2 - 2 & 1 & \cdots & 0 & 0 \\ 0 & 1 & 3h^2y_3^2 - 2 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 & 3h^2y_{14}^2 - 2 & 1 \\ 0 & 0 & \cdots & 0 & 1 & 3h^2y_{15}^2 - 2 \end{pmatrix}.$$
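A sketch of the discretized residual and its tridiagonal Jacobian for TE4 (m = 16, so n = 15); the helper names are ours.

```python
import numpy as np

m = 16
h = 1.0 / m

def phi_te4(y):
    """Residuals y_{r-1} - 2 y_r + y_{r+1} + h^2 y_r^3 with y_0 = 0, y_m = 1."""
    yy = np.concatenate(([0.0], y, [1.0]))          # attach boundary values
    return yy[:-2] - 2.0*yy[1:-1] + yy[2:] + h**2 * y**3

def dphi_te4(y):
    """Tridiagonal Jacobian: 3 h^2 y_r^2 - 2 on the diagonal, 1 on the off-diagonals."""
    n = y.size
    return (np.diag(3.0*h**2*y**2 - 2.0)
            + np.diag(np.ones(n - 1), 1)
            + np.diag(np.ones(n - 1), -1))
```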
The following solution is obtained for the differential equation at the given mesh points.
α = { 0.065997633200364677 , 0.131994143490292748 , 0.197981670725993839 , 0.263938884538034848 , 0.329824274254574844 , 0.395569509201723100 , 0.461072959646730428 , 0.526193524526372529 , 0.590744978992414345 , 0.654491128910354268 , 0.717142134576548678 , 0.778352432953974123 , 0.837720734425024994 , 0.894792581480763658 , 0.949065916629282713 } T .
Test Example 5 (TE5): The following large nonlinear system is considered:

$$y_iy_{i+1} - 1 = 0, \quad i = 1, 2, \ldots, 14, \qquad y_{15}y_1 - 1 = 0.$$

The solution is α = (1, 1, 1, …, 1)^T. Choosing the initial vector y^(0) = (1.5, 1.5, 1.5, …, 1.5)^T, we obtain the Jacobian matrix

$$\Phi'(y) = \begin{pmatrix} y_2 & y_1 & 0 & \cdots & 0 & 0 \\ 0 & y_3 & y_2 & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & 0 & y_{15} & y_{14} \\ y_{15} & 0 & \cdots & 0 & 0 & y_1 \end{pmatrix}.$$
The numerical results for test examples TE1-TE5 are displayed in Table 2, from which we conclude that the 8thPJ algorithm is the most efficient, with the least residual error and the least CPU time. It is also observed from the results that the proposed algorithms require fewer iterations than the compared methods for test example TE3. Moreover, when comparing the number of function evaluations (n_total) and inverse evaluations (n_inv), the 8thPJ algorithm is the best among the compared methods.

6. Applications

6.1. Bratu Problem in One Dimension

Consider the one-dimensional Bratu problem stated as [18]

$$\frac{d^2U}{dx^2} + \lambda\exp\big(U(x)\big) = 0, \qquad \lambda > 0, \quad 0 < x < 1, \tag{24}$$

subject to the conditions U(0) = U(1) = 0. Two bifurcated exact solutions exist for values λ < λ_c, one solution for λ = λ_c, and no solution for λ > λ_c. Here λ_c is known as the critical value; it equals 8(η² − 1), where η denotes the fixed point of coth(x). The analytic solution of Equation (24) is found to be

$$U(x) = -2\log_e\left[\frac{\cosh\big((x - \tfrac{1}{2})\tfrac{\theta}{2}\big)}{\cosh\big(\tfrac{\theta}{4}\big)}\right]. \tag{25}$$
The constant θ has to be found such that it satisfies the boundary conditions. The critical value of λ is obtained by a procedure similar to that in [19]. Substituting Equation (25) into (24) and collocating at x = ½ (the midpoint of [0, 1]; one could collocate at a different point, possibly giving a higher-order approximation, but x = ½ distributes the interval evenly over the region), we get

$$\theta^2 = 2\lambda\cosh^2\left(\frac{\theta}{4}\right). \tag{26}$$
Differentiating Equation (26) with respect to θ and setting dλ/dθ = 0, it is seen that λ_c satisfies

$$\theta = \frac{1}{2}\,\lambda_c\cosh\left(\frac{\theta}{4}\right)\sinh\left(\frac{\theta}{4}\right). \tag{27}$$
Substituting the value of λ_c from (27) into (26), one gets the value of θ_c from

$$\frac{\theta_c}{4} = \coth\left(\frac{\theta_c}{4}\right), \tag{28}$$
for which θ_c = 4.798714560 can be obtained using an iterative method. Then the value λ_c = 3.513830720 is obtained from Equation (26). Figure 1 displays this critical value of λ_c.
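Both constants can be reproduced with a scalar Newton iteration on g(θ) = θ/4 − coth(θ/4) from (28), followed by λ_c from (26); this is a quick check, not the paper's code.

```python
import numpy as np

g  = lambda t: t/4 - 1/np.tanh(t/4)                 # Equation (28): theta/4 = coth(theta/4)
dg = lambda t: 0.25 + 0.25/np.sinh(t/4)**2          # derivative of g
theta = 4.0
for _ in range(20):                                  # scalar Newton iteration
    theta -= g(theta)/dg(theta)
lam_c = theta**2 / (2*np.cosh(theta/4)**2)           # Equation (26) solved for lambda
print(theta, lam_c)   # ~4.79871456 and ~3.51383072
```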
Equation (24) is discretized using the finite-difference scheme to obtain

$$F_j(U) = \frac{U_{j+1} - 2U_j + U_{j-1}}{h^2} + \lambda\exp(U_j) = 0, \qquad j = 1(1)M-1, \tag{29}$$

subject to the conditions U₀ = U_M = 0, with mesh length h = 1/M. There are M − 1 difference equations. The Jacobian matrix is sparse, with three non-zero diagonals (the main diagonal and the two adjacent sub-diagonals). Starting from the initial vector U^(0) = (0, 0, …, 0)^T, the solution of the above finite-difference equations converges to the solution of the 1-D Bratu problem.
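A sketch of the discretized system (29) and its tridiagonal Jacobian (names ours):

```python
import numpy as np

def bratu_1d(U, lam, h):
    """F_j = (U_{j+1} - 2 U_j + U_{j-1})/h^2 + lam exp(U_j), with U_0 = U_M = 0."""
    UU = np.concatenate(([0.0], U, [0.0]))           # zero Dirichlet boundary
    return (UU[2:] - 2.0*UU[1:-1] + UU[:-2]) / h**2 + lam * np.exp(U)

def bratu_1d_jac(U, lam, h):
    n = U.size
    return (np.diag(-2.0/h**2 + lam*np.exp(U))       # main diagonal
            + np.diag(np.ones(n - 1)/h**2, 1)        # super-diagonal
            + np.diag(np.ones(n - 1)/h**2, -1))      # sub-diagonal
```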
We take M = 100 and experiment with 350 values of λ in the interval (0, 3.5] (mesh length 0.01). For every λ, let M_λ be the minimum number of iterations for which ‖U^(r+1) − U^(r)‖₂ < 10^{−13}, the computed value U^(r) being calculated to 13-decimal-place accuracy, and let M̄_λ denote the average iteration number over the 350 values of λ. Table 3 presents the numerical results for the one-dimensional Bratu problem, where N denotes the number of iterations for convergence. It is observed that, for the higher-order methods, a greater number of λ-points converge within fewer iterations; for the proposed 5thPJ and 8thPJ methods, most λ-points converge within four iterations. Moreover, the average iteration number (M̄_λ) decreases as the order of the method increases. Therefore, 5thPJ and 8thPJ are the best among the methods taken for comparison, since they have the least average iteration number (M̄_λ).

6.2. Bratu Problem in Two Dimension

The following partial differential equation, representing the two-dimensional Bratu problem [19], is considered:

$$\frac{\partial^2U}{\partial x^2} + \frac{\partial^2U}{\partial y^2} + \lambda\exp(U) = 0, \qquad (x, y) \in D = [0, 1] \times [0, 1], \tag{30}$$
satisfying the conditions

$$U(x, y) = 0, \qquad (x, y) \in \Gamma, \tag{31}$$
where Γ represents the boundary of D. Two bifurcated exact solutions of the planar Bratu problem exist for values λ < λ_c, one solution for λ = λ_c, and no solution for λ > λ_c. The analytic solution of Equation (30) is found to be

$$U(x, y) = 2\log_e\left[\frac{\cosh\big(\tfrac{\theta}{4}\big)\cosh\big((x - \tfrac{1}{2})(y - \tfrac{1}{2})\theta\big)}{\cosh\big((x - \tfrac{1}{2})\tfrac{\theta}{2}\big)\cosh\big((y - \tfrac{1}{2})\tfrac{\theta}{2}\big)}\right]. \tag{32}$$
The constant θ must be found such that it satisfies the boundary conditions. Substituting Equation (32) into (30) and collocating at (x, y) = (½, ½), the center of the domain D (one could collocate at a different point, possibly giving a higher-order approximation, but the center distributes the interval evenly over the region), we get

$$\theta^2 = \lambda\cosh^2\left(\frac{\theta}{4}\right). \tag{33}$$
Differentiating Equation (33) with respect to θ and setting dλ/dθ = 0, it is seen that λ_c satisfies

$$\theta = \frac{1}{4}\,\lambda_c\cosh\left(\frac{\theta}{4}\right)\sinh\left(\frac{\theta}{4}\right). \tag{34}$$
Substituting the value of λ_c from (34) into (33), one gets the value of θ_c from

$$\frac{\theta_c}{4} = \coth\left(\frac{\theta_c}{4}\right), \tag{35}$$
for which θ_c = 4.798714561 can be obtained using an iterative method. Then the value λ_c = 7.027661438 is obtained from Equation (34). Figure 2 displays this critical value of λ_c.
Equation (30) is discretized using the finite-difference five-point formula, with mesh length h on both axes, to obtain the nonlinear equations

$$F(U_{i,j}) = -\big(4U_{i,j} - \lambda h^2\exp(U_{i,j})\big) + U_{i+1,j} + U_{i-1,j} + U_{i,j+1} + U_{i,j-1}, \tag{36}$$

where U_{i,j} denotes U at (x_i, y_j), x_i = ih, y_j = jh, i, j = 1(1)M, subject to the conditions U_{i,0} = U_{i,M+1} = U_{0,j} = U_{M+1,j} = 0 for i, j = 0(1)M+1. Equation (36) represents a set of M × M nonlinear equations in U_{i,j}, which are then solved using the iterative methods. Taking M = 10 and M = 20, we experiment with 700 values of λ in the interval (0, 7] (mesh length 0.01). For every λ, let M_λ be the minimum number of iterations for which ‖U^(r+1) − U^(r)‖₂ < 10^{−11}, the computed value U^(r) being calculated to 13-decimal-place accuracy, and let M̄_λ denote the average iteration number over the 700 values of λ.
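A vectorized sketch of the five-point residual (36) on the M × M interior grid, with the zero boundary handled by padding (our own helper):

```python
import numpy as np

def bratu_2d(U, lam, h):
    """F(U)_{i,j} = U_{i+1,j} + U_{i-1,j} + U_{i,j+1} + U_{i,j-1}
       - (4 U_{i,j} - lam h^2 exp(U_{i,j})) on an M x M interior grid."""
    P = np.pad(U, 1)                                 # zero Dirichlet boundary
    return (P[2:, 1:-1] + P[:-2, 1:-1] + P[1:-1, 2:] + P[1:-1, :-2]
            - 4.0*U + lam * h**2 * np.exp(U))
```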
Table 4 and Table 5 present the numerical results for this problem, where N denotes the number of iterations for convergence of the solution of the nonlinear Equation (36). It is observed from Table 5 that the presented 8thPJ algorithm converges for all the mesh points in two iterations. With respect to the lowest mean iteration number (M̄_λ), the 8thPJ algorithm is the best among the algorithms taken for comparison, while performing equivalently with a few same-order algorithms for M = 20.

7. Conclusions

Newton-type algorithms produce second- and fourth-order convergence for single and two-step methods, respectively. By using an appropriate weight function, we have improved the fourth-order Newton method to fifth order while keeping the same number of function evaluations. Furthermore, a multi-step version producing higher-order convergence has been proposed, with the fifth-order method as its base, to solve nonlinear systems of equations. The new schemes do not use second- or higher-order Fréchet derivatives to achieve their high convergence order. Computational efficiencies of the proposed algorithms have been calculated based on computational cost, and the comparison with other methods in terms of the efficiency ratio shows that the presented algorithms are better than many existing ones. Through several examples, we have illustrated numerically that the new schemes are superior to Newton's method and to some recently proposed fourth-, fifth-, and eighth-order algorithms. We also infer from the computational results that the proposed methods have robust and efficient convergence behavior. To check the applicability of the new algorithms, we have implemented them on the one-dimensional and two-dimensional Bratu problems; the results of these applications indicate that the presented methods perform better than some available algorithms.

Author Contributions

Conceptualization, validation, Writing—Review and Editing, resources, supervision—J.J.; Methodology, software, formal analysis, data curation, Writing—Original Draft preparation, visualization—P.S.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and useful suggestions which greatly helped to improve this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Abad, M.F.; Cordero, A.; Torregrosa, J.R. Fourth and fifth-order methods for solving nonlinear systems of equations: An application to the global positioning system. Abstr. Appl. Anal. 2013, 2013, 586708. [Google Scholar] [CrossRef]
  2. Babajee, D.K.R.; Madhu, K.; Jayaraman, J. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. Algorithms 2015, 8, 895–909. [Google Scholar] [CrossRef]
  3. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  4. Madhu, K. Sixth order Newton-type method for solving system of nonlinear equations and its applications. Appl. Math. E-Notes 2017, 17, 221–230. [Google Scholar]
  5. Noor, M.A.; Waseem, M.; Noor, K.I.; Al-Said, E. Variational iteration technique for solving a system of nonlinear equations. Optim. Lett. 2013, 7, 991–1007. [Google Scholar] [CrossRef]
  6. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2017, 74, 147–163. [Google Scholar] [CrossRef]
  7. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323. [Google Scholar] [CrossRef]
  8. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Upper Saddle River, NJ, USA, 1964. [Google Scholar]
  9. Madhu, K.; Babajee, D.K.R.; Jayaraman, J. An improvement to double-step Newton method and its multi-step version for solving system of nonlinear equations and its applications. Numer. Algorithms 2017, 74, 593–607. [Google Scholar] [CrossRef]
  10. Amat, S.; Busquier, S.; Gutierrez, J.M. Third-order iterative methods with applications to Hammerstein equations: A unified approach. J. Comput. Appl. Math. 2011, 235, 2936–2943. [Google Scholar] [CrossRef]
  11. Amat, S.; Busquier, S.; Gutierrez, J.M. Geometric constructions of iterative methods to solve nonlinear equations. Comput. Appl. Math. 2003, 157, 197–205. [Google Scholar] [CrossRef]
  12. Argyros, I.K.; Magrenan, A.A. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA, 2017. [Google Scholar]
  13. Sharma, J.R.; Argyros, I.K.; Kumar, S. Ball convergence of an efficient eighth order iterative method under weak conditions. Mathematics 2018, 6, 260. [Google Scholar] [CrossRef]
  14. Yuan, G.; Duan, X.; Liu, W.; Wang, X.; Cui, Z.; Sheng, Z. Two new PRP Conjugate Gradient Algorithms for Minimization Optimization Models. PLoS ONE 2015, 10, 1–24. [Google Scholar] [CrossRef] [PubMed]
  15. Yuan, G.; Zhang, M. A three-terms Polak-Ribière-Polyak conjugate gradient algorithm for large-scale nonlinear equations. J. Comput. Appl. Math. 2015, 286, 186–195. [Google Scholar] [CrossRef]
  16. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  17. Sharma, J.R.; Kumar, D. On a class of efficient higher order Newton-like methods. Math. Model. Anal. 2019, 24, 105–126. [Google Scholar] [CrossRef]
  18. Buckmire, R. Investigations of nonstandard Mickens-type finite-difference schemes for singular boundary value problems in cylindrical or spherical coordinates. Num. Methods Partial Differ. Equ. 2003, 19, 380–398. [Google Scholar] [CrossRef]
  19. Odejide, S.A.; Aregbesola, Y.A.S. A note on two dimensional Bratu problem. Kragujev. J. Math. 2006, 29, 49–56. [Google Scholar]
Figure 1. Variation of θ for 1-D Bratu Problem.
Figure 2. Variation of θ for 2-D Bratu Problem.
Table 1. Computational cost and computational efficiency.

| Method | Computational Cost (C_method) | Computational Efficiency (CE) |
|---|---|---|
| 2ndNM | (1/3)n³ + 2n² + (2/3)n | 2^{1/C_2ndNM} |
| 4thNR | (2/3)n³ + 4n² + (4/3)n | 4^{1/C_4thNR} |
| 5thACT | (2/3)n³ + 5n² + (7/3)n | 5^{1/C_5thACT} |
| 5thMBJ | (1/3)n³ + 7n² + (11/3)n | 5^{1/C_5thMBJ} |
| 5thSD | (1/3)n³ + 8n² + (8/3)n | 5^{1/C_5thSD} |
| 5thPJ | (2/3)n³ + 8n² + (7/3)n | 5^{1/C_5thPJ} |
| 8thSA | (1/3)n³ + 10n² + (23/3)n | 8^{1/C_8thSA} |
| 8thMBJ | (1/3)n³ + 9n² + (17/3)n | 8^{1/C_8thMBJ} |
| 8thSD | (1/3)n³ + 15n² + (17/3)n | 8^{1/C_8thSD} |
| 8thPJ | (2/3)n³ + 10n² + (13/3)n | 8^{1/C_8thPJ} |
Table 2. Comparison of results for test examples.

| TE | Method | N | p_c | CPU time | err_min | Φ | Φ′ | n_inv | n_total |
|---|---|---|---|---|---|---|---|---|---|
| TE1 | 2ndNM | 10 | 1.99 | 2.504327 | 1.038 × 10^{−103} | 10 | 10 | 10 | 40 |
| | 4thNR | 6 | 3.99 | 1.957957 | 5.384 × 10^{−207} | 12 | 12 | 12 | 48 |
| | 5thACT | 6 | 4.99 | 2.397614 | 2.280 × 10^{−289} | 18 | 12 | 12 | 60 |
| | 5thMBJ | 6 | 4.99 | 2.241982 | 1.095 × 10^{−315} | 12 | 12 | 6 | 48 |
| | 5thPJ | 6 | 4.99 | 2.237277 | 0 | 12 | 12 | 12 | 48 |
| | 8thSA | 5 | 7.79 | 2.218581 | 0 | 15 | 10 | 5 | 50 |
| | 8thMBJ | 5 | 7.79 | 2.092063 | 0 | 15 | 10 | 5 | 50 |
| | 8thPJ | 5 | 7.90 | 2.074542 | 0 | 15 | 10 | 10 | 50 |
| | 11thPJ | 4 | 10.95 | 2.060324 | 4.362 × 10^{−154} | 16 | 8 | 8 | 48 |
| TE2 | 2ndNM | 8 | 2.00 | 3.023825 | 3.928 × 10^{−145} | 8 | 8 | 8 | 128 |
| | 4thNR | 5 | 4.03 | 2.589849 | 2.988 × 10^{−291} | 10 | 10 | 10 | 160 |
| | 5thACT | 4 | 5.15 | 2.916807 | 5.083 × 10^{−102} | 12 | 8 | 8 | 144 |
| | 5thMBJ | 4 | 5.15 | 2.840291 | 5.083 × 10^{−102} | 8 | 8 | 4 | 128 |
| | 5thPJ | 4 | 5.12 | 2.790838 | 5.714 × 10^{−121} | 8 | 8 | 8 | 128 |
| | 8thSA | 4 | 8.01 | 3.331801 | 0 | 12 | 8 | 4 | 144 |
| | 8thMBJ | 4 | 8.80 | 3.082510 | 0 | 12 | 8 | 4 | 144 |
| | 8thPJ | 4 | 8.60 | 3.071780 | 0 | 12 | 8 | 8 | 144 |
| | 11thPJ | 3 | 11.78 | 3.089214 | 9.138 × 10^{−106} | 12 | 6 | 6 | 120 |
| TE3 | 2ndNM | 9 | 2.00 | 2.907378 | 1.010 × 10^{−107} | 9 | 9 | 9 | 90 |
| | 4thNR | 5 | 4.00 | 2.401273 | 1.010 × 10^{−107} | 10 | 10 | 10 | 100 |
| | 5thACT | 6 | 5.00 | 3.457730 | 0 | 18 | 12 | 12 | 138 |
| | 5thMBJ | 5 | 3.92 | 3.086405 | 7.523 × 10^{−106} | 10 | 10 | 5 | 100 |
| | 5thPJ | 5 | 3.92 | 3.017444 | 2.109 × 10^{−143} | 10 | 10 | 10 | 100 |
| | 8thSA | 5 | 6.02 | 3.776025 | 0 | 15 | 10 | 5 | 115 |
| | 8thMBJ | 5 | 6.02 | 3.327364 | 0 | 15 | 10 | 5 | 115 |
| | 8thPJ | 4 | 5.86 | 2.945875 | 1.938 × 10^{−104} | 12 | 8 | 8 | 92 |
| | 11thPJ | 4 | 8.09 | 3.231956 | 4.484 × 10^{−228} | 16 | 8 | 8 | 104 |
| TE4 | 2ndNM | 8 | 2.00 | 7.892696 | 4.963 × 10^{−114} | 8 | 8 | 8 | 240 |
| | 4thNR | 5 | 3.99 | 7.841182 | 1.410 × 10^{−228} | 10 | 10 | 10 | 300 |
| | 5thACT | 5 | 4.05 | 11.754446 | 2.580 × 10^{−195} | 15 | 10 | 10 | 375 |
| | 5thMBJ | 5 | 4.02 | 14.253392 | 1.001 × 10^{−215} | 10 | 10 | 5 | 300 |
| | 5thPJ | 5 | 4.02 | 12.536327 | 1.030 × 10^{−253} | 10 | 10 | 10 | 300 |
| | 8thSA | 4 | 5.90 | 15.208284 | 4.511 × 10^{−155} | 12 | 8 | 4 | 300 |
| | 8thMBJ | 4 | 5.90 | 14.127740 | 4.511 × 10^{−155} | 12 | 8 | 4 | 300 |
| | 8thPJ | 4 | 5.93 | 14.014342 | 1.533 × 10^{−193} | 12 | 8 | 8 | 300 |
| | 11thPJ | 4 | 8.65 | 15.638620 | 0 | 16 | 8 | 8 | 360 |
| TE5 | 2ndNM | 9 | 1.99 | 6.273565 | 8.969 × 10^{−179} | 9 | 9 | 9 | 405 |
| | 4thNR | 5 | 4.00 | 5.562079 | 8.969 × 10^{−179} | 10 | 10 | 10 | 450 |
| | 5thACT | 5 | 5.00 | 7.186739 | 1.399 × 10^{−304} | 15 | 10 | 10 | 525 |
| | 5thMBJ | 5 | 5.00 | 8.735878 | 1.399 × 10^{−304} | 10 | 10 | 5 | 450 |
| | 5thPJ | 5 | 4.99 | 7.749309 | 0 | 10 | 10 | 10 | 450 |
| | 8thSA | 4 | 7.99 | 8.651872 | 3.680 × 10^{−226} | 12 | 8 | 4 | 420 |
| | 8thMBJ | 4 | 7.99 | 8.002011 | 3.680 × 10^{−226} | 12 | 8 | 4 | 420 |
| | 8thPJ | 4 | 7.99 | 8.069185 | 1.358 × 10^{−272} | 12 | 8 | 8 | 420 |
| | 11thPJ | 4 | 10.68 | 8.213288 | 0 | 16 | 8 | 8 | 480 |
Table 3. Comparison of the number of λ values requiring N iterations to converge in the 1-D Bratu problem.

| Method | N = 2 | N = 3 | N = 4 | N = 5 | N > 5 | M̄_λ |
|---|---|---|---|---|---|---|
| 2ndNM | 0 | 12 | 115 | 142 | 81 | 4.93 |
| 4thNR | 12 | 257 | 76 | 4 | 1 | 3.21 |
| 5thACT | 1 | 136 | 171 | 37 | 5 | 3.71 |
| 5thMBJ | 23 | 263 | 60 | 3 | 1 | 3.14 |
| 5thPJ | 23 | 276 | 48 | 2 | 1 | 3.10 |
| 8thSA | 23 | 263 | 60 | 3 | 1 | 3.13 |
| 8thMBJ | 23 | 262 | 61 | 2 | 2 | 3.14 |
| 8thPJ | 23 | 276 | 48 | 1 | 2 | 3.09 |
Table 4. Comparison of the number of λ values requiring N iterations to converge in the 2-D Bratu problem (M = 10).

| Method | N = 2 | N = 3 | N = 4 | N = 5 | N > 5 | M̄_λ |
|---|---|---|---|---|---|---|
| 2ndNM | 0 | 101 | 520 | 79 | 0 | 3.96 |
| 4thNR | 101 | 599 | 0 | 0 | 0 | 2.85 |
| 5thACT | 200 | 500 | 0 | 0 | 0 | 2.71 |
| 5thMBJ | 121 | 579 | 0 | 0 | 0 | 2.82 |
| 5thPJ | 120 | 580 | 0 | 0 | 0 | 2.82 |
| 8thSA | 514 | 186 | 0 | 0 | 0 | 2.26 |
| 8thMBJ | 514 | 186 | 0 | 0 | 0 | 2.26 |
| 8thPJ | 441 | 259 | 0 | 0 | 0 | 2.37 |
Table 5. Comparison of the number of λ values requiring N iterations to converge in the 2-D Bratu problem (M = 20).

| Method | N = 2 | N = 3 | N = 4 | N = 5 | M̄_λ |
|---|---|---|---|---|---|
| 2ndNM | 1 | 212 | 487 | 0 | 3.69 |
| 4thNR | 213 | 487 | 0 | 0 | 2.69 |
| 5thACT | 419 | 281 | 0 | 0 | 2.40 |
| 5thMBJ | 217 | 483 | 0 | 0 | 2.69 |
| 5thPJ | 217 | 483 | 0 | 0 | 2.69 |
| 8thSA | 700 | 0 | 0 | 0 | 2 |
| 8thMBJ | 700 | 0 | 0 | 0 | 2 |
| 8thPJ | 700 | 0 | 0 | 0 | 2 |
