Efficient Two-Step Fifth-Order and Its Higher-Order Algorithms for Solving Nonlinear Systems with Applications

Abstract: This manuscript presents a new two-step weighted Newton-type algorithm with convergence order five for approximating solutions of systems of nonlinear equations. The algorithm requires two vector-function evaluations and two Fréchet-derivative evaluations per iteration. It is then extended to a general multi-step algorithm of convergence order 3k + 5, k ≥ 1, at the cost of one additional vector-function evaluation per step. Error analysis establishing the order of convergence of the algorithms and their computational efficiency, based on computational cost, is presented. Numerical experiments on several test problems are included, together with comparisons against well-known algorithms of the same order. To verify the applicability of the proposed algorithms, we have implemented them on the 1-D and 2-D Bratu problems. The presented algorithms perform better than many existing algorithms and are comparable to a few available algorithms.


Introduction
The design of iterative algorithms for solving nonlinear systems of equations is a much-needed and challenging task in numerical analysis. Nonlinearity is inherent in many physical and natural events: phenomena with an inherently nonlinear character occur frequently in fluid and plasma mechanics, gas dynamics, combustion, ecology, biomechanics, elasticity, relativity, chemical reactions, economic modeling, transportation theory, and so on. Consequently, much current mathematical research emphasizes the development and analysis of methods for nonlinear systems. Finding a solution α of the nonlinear system Φ(x) = 0, where Φ : D ⊂ R n → R n is a sufficiently Fréchet-differentiable function on an open convex set D, is therefore a classical and difficult problem that unlocks the behavior pattern of many application problems in science and engineering.
In deriving the order of convergence of iterative methods, higher-order derivatives are often used even though such derivatives do not appear in the formulas themselves. The solutions of the equation Φ(x) = 0 can rarely be found in closed form, so most methods for solving such equations are iterative, and convergence analysis is an essential part of establishing the order of any iterative method. When the convergence domain is narrow, additional hypotheses are needed to enlarge it; moreover, choosing initial approximations requires knowledge of the convergence radius. The applicability of the methods therefore depends on the assumptions made on the higher-order derivatives of the function. Many papers concerned with the local and semi-local convergence of iterative methods are available in [10][11][12][13]. One of the most attractive features of any numerical algorithm is how it handles large-scale systems of nonlinear equations. Recently, Gonglin Yuan et al. [14,15] proposed conjugate gradient algorithms, globally convergent under suitable conditions, with good properties for solving unconstrained optimization problems.
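The baseline against which all such schemes are measured is Newton's method, x^(r+1) = x^(r) − [Φ′(x^(r))]^(−1) Φ(x^(r)). A minimal self-contained sketch follows; the 2 × 2 test system and starting point are our own illustration, not taken from the paper:

```python
def newton(phi, jac, x, tol=1e-12, max_iter=50):
    """Newton's method for a 2x2 nonlinear system Phi(x) = 0.
    The linear system J * dx = -Phi(x) is solved by Cramer's rule."""
    for r in range(max_iter):
        f = phi(x)
        if max(abs(f[0]), abs(f[1])) < tol:
            return x, r
        J = jac(x)
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-f[0] * J[1][1] + f[1] * J[0][1]) / det
        dx1 = (-f[1] * J[0][0] + f[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x, max_iter

# Illustrative system: x^2 + y^2 = 1, x = y, with root (1/sqrt(2), 1/sqrt(2)).
phi = lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] - v[1]]
jac = lambda v: [[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]]
root, iters = newton(phi, jac, [1.0, 0.5])
```

The quadratic convergence of this iteration is the starting point that the higher-order schemes of this paper improve upon.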
The primary goal of this manuscript is to propose higher-order iterative techniques that incur less computational cost for very large nonlinear systems. Motivated by the various methods available in the literature, a new efficient two-step method of convergence order five is proposed for solving systems of nonlinear equations. This method requires two function evaluations, two Fréchet-derivative evaluations, and two inverse evaluations per iteration. Its order can be increased by three units whenever an additional step is included; this idea is generalized to obtain new multi-step methods of order 3k + 5, k ≥ 1, each new step raising the convergence order by three units at the cost of only one new function evaluation.
The remaining part of this manuscript is organized as follows. Section 2 presents two new algorithms, one with convergence order five and the other its multi-step version of order 3k + 5, k ≥ 1. Section 3 presents the convergence analysis of the new algorithms. In Section 4, the computational efficiencies of the proposed algorithms are calculated based on computational cost, and a ratio-based comparison with other methods is reported. Numerical results for some test problems and comparisons with a few available equivalent methods are given in Section 5. Two application problems, the 1-D and 2-D Bratu problems, are solved using the presented methods in Section 6. Section 7 contains a short conclusion.

Efficient fifth-order algorithm:
The method proposed below is a two-step weighted iterative algorithm (denoted 5thPJ), built on algorithm (2), where I denotes the n × n identity matrix. In this algorithm, a weight function H1(x^(r)) is chosen so as to produce fifth-order convergence while keeping the same number of function evaluations as in method (2).

Efficient (3k + 5)th-order algorithm:
With the 5thPJ algorithm as the base, we introduce new steps to obtain the following algorithm (denoted (3k + 5)thPJ), evaluating one new function whenever a new step is included, where H2(x^(r)) = I + (1/2)(τ(x^(r)) − I)^2, µ0(x^(r)) = G 5thPJ(x^(r)), j = 1, 2, ..., k, k ≥ 1.
The above algorithm has convergence order 3k + 5; the case k = 0 reproduces the 5thPJ algorithm.
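The displayed formulas of the 5thPJ scheme are given in (7) and are not reproduced here. Purely to illustrate the cost pattern named above (two residual evaluations, two Jacobian evaluations, and two linear solves per iteration), the sketch below runs a two-step composed Newton iteration. It shares that per-iteration cost pattern but is emphatically not the authors' 5thPJ method:

```python
def step(phi, jac, v):
    """One Newton correction v - J(v)^{-1} Phi(v) for a 2x2 system,
    with the linear solve done by Cramer's rule."""
    f, J = phi(v), jac(v)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    d0 = (f[0] * J[1][1] - f[1] * J[0][1]) / det
    d1 = (f[1] * J[0][0] - f[0] * J[1][0]) / det
    return [v[0] - d0, v[1] - d1]

def two_step(phi, jac, x, tol=1e-12, max_iter=50):
    """NOT the authors' 5thPJ scheme: a composed Newton iteration with the
    same per-iteration cost pattern (two residuals, two Jacobians, two
    linear solves, plus one residual for the stopping test)."""
    for r in range(max_iter):
        if max(abs(c) for c in phi(x)) < tol:
            return x, r
        y = step(phi, jac, x)          # predictor: Newton step from x
        x = step(phi, jac, y)          # corrector: Newton step from y
    return x, max_iter

# Illustrative system: x^2 + y^2 = 1, x*y = 1/4.
phi = lambda v: [v[0] ** 2 + v[1] ** 2 - 1.0, v[0] * v[1] - 0.25]
jac = lambda v: [[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]]
root, iters = two_step(phi, jac, [1.0, 0.3])
```

The 5thPJ scheme replaces the plain second step by a weighted one with H1(x^(r)), raising the order from four to five at the same evaluation cost.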

Convergence Analysis
To prove the theoretical convergence, the usual n-dimensional Taylor expansion is applied. We recall some important results from [3]. Let Φ : D ⊆ R n −→ R n be sufficiently Fréchet differentiable in D. Assume that the ith derivative of Φ at u ∈ R n, i ≥ 1, is the i-linear function Φ (i) (u) : R n × · · · × R n −→ R n such that Φ (i) (u)(v 1 , . . . , v i ) ∈ R n. For α + h ∈ R n lying near the solution α of the system Φ(x) = 0, one may apply Taylor's expansion (whenever the Jacobian matrix Φ′(α) is nonsingular) to get

Φ(α + h) = Φ′(α)[h + Σ_{i=2}^{p−1} C i h^i] + O(h^p), where C i = (1/i!)[Φ′(α)]^{−1} Φ (i) (α), i ≥ 2.

It is noted that C i h^i ∈ R n, since Φ (i) (α) ∈ L(R n × · · · × R n , R n ) and [Φ′(α)]^{−1} ∈ L(R n ). Expanding Φ′(α + h) in Taylor's series, we get

Φ′(α + h) = Φ′(α)[I + Σ_{i=2}^{p−1} i C i h^{i−1}] + O(h^{p−1}),

where I denotes the identity matrix. We remark that i C i h^{i−1} ∈ L(R n ). The error equation is written as

e^(r+1) = L (e^(r))^p + O((e^(r))^{p+1}),

where e^(r) = x^(r) − α, L is a p-linear function L ∈ L(R n × · · · × R n , R n ), and p denotes the order of convergence.

By using the above concepts, the following theorems are established.

Theorem 1. Let Φ : D ⊆ R n −→ R n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of α ∈ R n, where α is a solution of the system Φ(x) = 0 at which Φ′ is continuous and nonsingular. If x^(0) is chosen close enough to α, then the sequence {x^(r)} r≥0 produced by the iterative formula (7) converges to α locally with order five.

Proof. Applying Taylor's formula around α, with E^(r) = x^(r) − α, we expand Φ(x^(r)) and the first-order differential Φ′(x^(r)). Inverting the latter series gives (12), from which we obtain (13). Combining (11) and (14), one gets (15), and then (16). After calculating the inverse of Φ′(G 2ndNM(x^(r))) and using (13), we get (17). The required error estimate follows by using (12), (16) and (17) in (7); the result agrees with fifth-order convergence.

Theorem 2. Under the hypotheses of Theorem 1, the sequence produced by the multi-step formula (8) converges to α locally with order 3k + 5, k ≥ 1.

Proof. Taylor's expansion of Φ(µ j−1(x^(r))) around α gives (18). Expanding further, evaluating [Φ′(G 2ndNM(x^(r)))]^{−1}, and combining with (19) yields (20). Equations (18) and (20) lead to (21); substituting (21) in (8) and calculating the error term, we get (22). Proceeding by induction, we obtain the general error expression, which shows that the method has order of convergence 3k + 5.

Computational Efficiency of the Algorithms
The efficiency index of an iterative method is measured using Ostrowski's definition [16], EI = p^(1/d), where p denotes the order of convergence and d the number of functional evaluations per iteration. The algorithms considered here are compared with the proposed algorithms in terms of computational cost. Evaluating the Jacobian Φ′ requires n² scalar function evaluations (all the entries of the matrix Φ′), and evaluating Φ requires n scalar function evaluations (the coordinate functions of Φ). Moreover, any iterative method for systems of nonlinear equations needs one or more matrix inversions, which means a few linear systems must be solved, and the number of operations required for solving these linear systems dominates the measured computational cost. For this reason, Cordero et al. [3] introduced the computational efficiency index (CE), which combines Ostrowski's efficiency index with the number of products and quotients required per iteration: CE = p^(1/(d + op)), where op is the number of products-quotients per iteration; the details of its calculation are given in the next paragraph.
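Both indices are straightforward to evaluate. The sketch below computes EI = p^(1/d) and CE = p^(1/(d+op)); the sample counts for Newton's method at n = 2 (d = n² + n = 6, op = (n³ − n)/3 + n² = 6) follow the cost model of this section, but their use here is our own illustration:

```python
def efficiency_index(p, d):
    """Ostrowski's efficiency index EI = p^(1/d):
    p = order of convergence, d = functional evaluations per iteration."""
    return p ** (1.0 / d)

def computational_efficiency(p, d, op):
    """Cordero et al.'s computational efficiency CE = p^(1/(d + op)):
    op = products and quotients per iteration."""
    return p ** (1.0 / (d + op))

# Newton's method for n = 2: order p = 2, d = 6 evaluations, op = 6.
ei_newton = efficiency_index(2, 6)
ce_newton = computational_efficiency(2, 6, 6)
```

Since op only adds to the exponent's denominator, CE ≤ EI always, which is why CE discriminates more sharply between methods of equal order.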
The total computational cost for a nonlinear system of n equations in n unknowns is calculated as follows. For an iterative function, n scalar functions φ i (1 ≤ i ≤ n) are evaluated, and the computation of a divided difference requires n(n − 1) scalar function evaluations, where Φ(x) and Φ(y) are computed separately; n² quotients are added for the divided difference. Calculating an inverse linear operator takes (n³ − n)/3 products and divisions for the LU decomposition and n² products and divisions for solving the two triangular linear systems. Moreover, n² products are required for multiplying a matrix by a vector or a scalar, and n products for multiplying a vector by a scalar.
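Under this cost model, the per-iteration counts for the classical Newton's method (one residual, one Jacobian, one LU solve) can be tallied directly; applying the model to Newton here is our illustration, not a count quoted from the paper:

```python
def newton_counts(n):
    """Per-iteration cost of Newton's method for an n x n system under the
    cost model above: returns (d, op), the scalar function evaluations and
    the products/quotients."""
    d = n + n * n                  # n for Phi, n^2 for the Jacobian Phi'
    lu = (n ** 3 - n) // 3         # LU decomposition of the Jacobian
    tri = n * n                    # forward and backward triangular solves
    return d, lu + tri

d, op = newton_counts(10)          # d = 110, op = 430 for a 10 x 10 system
```

The cubic LU term quickly dominates, which is why methods that reuse a factorization across inner steps gain efficiency for large n.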
The computational cost and efficiency of the proposed algorithms are compared with the methods given by (1) to (6) and with the algorithms recently presented by Sharma et al. [17], namely a fifth-order three-step method (5thSD) and an eighth-order four-step method (8thSD). To compare the CE of the considered iterative methods, we calculate the following ratio [17]:

R method1;method2 = log CE method1 / log CE method2 = (C method2 log p method1) / (C method1 log p method2),

where C method and CE method denote respectively the computational cost and the computational efficiency of the method; R method1;method2 > 1 implies CE method1 > CE method2.
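The ratio above is simple to evaluate once orders and costs are known. A short sketch, assuming the standard form R = (C₂ log p₁)/(C₁ log p₂); the orders and costs used below are placeholder values of our own, not figures from the paper:

```python
import math

def efficiency_ratio(p1, c1, p2, c2):
    """R_{1;2} = log(CE_1)/log(CE_2) = (c2*log p1)/(c1*log p2),
    where CE = p^(1/C) with C the total cost per iteration.
    R > 1 is equivalent to CE_1 > CE_2."""
    return (c2 * math.log(p1)) / (c1 * math.log(p2))

# Placeholder comparison: an order-8 method costing 300 operations versus
# an order-5 method costing 250 operations per iteration.
r = efficiency_ratio(8.0, 300.0, 5.0, 250.0)
```

Because log is monotone, comparing R against 1 is exactly equivalent to comparing the two CE values directly, while avoiding tiny exponents.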
It can be checked that R 5thPJ;8thSA > 1 for n = 2, which implies that CE 5thPJ > CE 8thSA for n = 2. 5thPJ versus 8thSD: here the ratio (23) is evaluated in the same manner.
It can be checked that R 8thPJ;5thPJ > 1 for n ≥ 2, which implies that CE 8thPJ > CE 5thPJ for n ≥ 2.
It has been verified that R 8thPJ;8thSA > 1 for n = 2 and 3. Hence, we conclude that CE 8thPJ > CE 8thSA for n = 2 and 3.
Based on the computation, we have R 8thPJ;8thSD > 1 for 2 ≤ n ≤ 15, which implies that CE 8thPJ > CE 8thSD for 2 ≤ n ≤ 15.
Consolidating the above ratios, the following theorem is stated to show the superiority of the proposed methods.

Theorem. (a) CE 5thPJ > CE 2ndNM, CE 4thNR, CE 8thSA and CE 8thSD for n ≥ 32, n ≥ 32, n = 2 and 2 ≤ n ≤ 9, respectively. (b) CE 8thPJ > CE 2ndNM, CE 4thNR, CE 5thACT, CE 5thPJ, CE 8thSA and CE 8thSD for n ≥ 13, n ≥ 13, n ≥ 19, n ≥ 2, n = 2 and 3, and 2 ≤ n ≤ 15, respectively.

Remark 1.
The ratios of the proposed method 5thPJ to 5thACT, 5thMBJ, 8thMBJ and 5thSD, and of the proposed method 8thPJ to 5thMBJ, 8thMBJ and 5thSD, do not satisfy the required condition R method1;method2 > 1.

Numerical Results
The numerical performance of the presented methods is compared with Newton's method and with the known methods given at the beginning of this paper. All numerical calculations are carried out in MATLAB for the test examples given below, using 500-digit precision while computing the approximate numerical solutions. The following error residual is adopted to stop the iteration, and the following formula is used to find the approximated computational order of convergence p c. Here N denotes the number of iterations needed to obtain the minimum residual (err min), n inv represents the total number of Fréchet-derivative inverses, and n total represents the total number of function evaluations (Φ and Φ′) needed to reach the minimum residual, as in [7].
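The display of the p c formula did not survive extraction; the widely used approximation estimates the order from ratios of successive differences of iterates. A minimal sketch on a scalar Newton run (the test function x² − 2 and starting point are our own choices):

```python
import math

def acoc(xs):
    """Approximated computational order of convergence:
    p_c ~ ln(|x_{r+1}-x_r| / |x_r-x_{r-1}|) /
          ln(|x_r-x_{r-1}| / |x_{r-1}-x_{r-2}|),
    computed from the last three differences of the iterate list."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Scalar Newton iterates for f(x) = x^2 - 2 starting at x0 = 1.5.
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

p_c = acoc(xs)    # close to the theoretical order 2 of Newton's method
```

In practice the estimate is computed from the last few iterates before roundoff flattens the differences, exactly as done for the tables in this section.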
The numerical results for the test examples TE1-TE5 are displayed in Table 2, from which we conclude that the 8thPJ algorithm is the most efficient one, with the least residual error and the least CPU time. It is also observed from the results that the proposed algorithms require fewer iterations than the compared methods for test example TE3. Moreover, when comparing the number of functional evaluations (n total) and inverse evaluations (n inv), the 8thPJ algorithm is the best among the compared methods. As found in Table 3, the higher the order of a method, the greater the number of points in λ that converge within fewer iterations; for the proposed 5thPJ and 8thPJ methods, a greater number of points in λ converge within four iterations. The average iteration number (M λ) also decreases as the order of the method increases. Therefore, the 5thPJ and 8thPJ methods are found to be better among the methods taken for comparison, since they have the least average iteration number (M λ).

Bratu Problem in Two Dimension
The following partial differential equation representing the two-dimensional Bratu problem [19] is considered, subject to the stated boundary conditions, where Γ represents the boundary of D. The planar Bratu problem has two known bifurcated exact solutions for values λ < λ c, one solution for λ = λ c, and no solution for λ > λ c. The analytic solution of Equation (30) is given by (32), where the constant θ must be found such that it satisfies the boundary conditions. Substituting Equation (32) into (30), we convert it by collocation at (x, y) = (1/2, 1/2), the center of the domain D. One could instead collocate at a different point (x, y), which may give a higher-order approximation, but collocating at (x, y) = (1/2, 1/2) gives an equal distribution of the interval throughout the region; this yields Equation (33). Differentiating Equation (33) with respect to θ and setting dλ/dθ = 0 gives (34). Substituting the value of λ c from (34) into (33), the value θ c = 4.798714561 can be obtained using an iterative method, and then the value λ c = 7.027661438 is obtained from Equation (34). Figure 2 displays this critical value λ c.
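The critical values quoted above can be reproduced numerically. Assuming our reading of (33)-(34), namely the collocation relation λ(θ) = θ²/cosh²(θ/4), whose stationarity condition dλ/dθ = 0 reduces to (θ/4) tanh(θ/4) = 1 (this relation is reconstructed to match the quoted θ c and λ c, not taken verbatim from the paper), a short bisection recovers both constants:

```python
import math

def bratu_critical():
    """Assumed one-point collocation relation for the 2-D Bratu problem:
    lambda(theta) = theta^2 / cosh^2(theta/4).  Setting d(lambda)/d(theta)
    to zero reduces to t*tanh(t) = 1 with t = theta/4, solved by bisection."""
    lo, hi = 0.5, 2.0                      # bracket: g(0.5) < 0 < g(2.0)
    for _ in range(200):                   # bisect to machine precision
        mid = 0.5 * (lo + hi)
        if mid * math.tanh(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    theta_c = 4.0 * t
    lam_c = theta_c ** 2 / math.cosh(t) ** 2
    return theta_c, lam_c

theta_c, lam_c = bratu_critical()
```

The computed pair matches the values θ c = 4.798714561 and λ c = 7.027661438 reported in the text.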
Here U i,j denotes U at (x i , y j ), x i = ih, y j = jh, i, j = 1(1)M, subject to the conditions U i,0 = U i,M+1 = U 0,j = U M+1,j = 0 for i, j = 0(1)M + 1. Equation (36) represents a set of M × M nonlinear equations in U i,j, which are then solved using the iterative methods. Taking M = 10 and M = 20, we experiment with 700 values of λ in the interval λ ∈ (0, 7] (mesh length 0.01). For every λ, the iteration is run until U i,j is obtained to 13-decimal-place accuracy, and M λ denotes the average iteration number over the 700 values of λ. Tables 4 and 5 present the numerical results for this problem, where N denotes the number of iterations for convergence of the solution of the nonlinear system (36). It is observed from Table 5 that the presented 8thPJ algorithm converges for all the mesh points in two iterations. With respect to the mean iteration number, the 8thPJ algorithm is the best among the algorithms taken for comparison, since it has the least M λ, while performing equivalently to a few same-order algorithms for M = 20.
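The same Newton machinery applies in one dimension, where the discretized Jacobian is tridiagonal. As a compact, self-contained illustration of the finite-difference approach (the mesh size, λ = 1, and tolerances are our own choices; the paper's experiments use the 2-D system (36)), here is a Newton solver for u'' + λ e^u = 0, u(0) = u(1) = 0:

```python
import math

def bratu_1d(lam, m=50, tol=1e-10, max_iter=30):
    """Solve u'' + lam*exp(u) = 0, u(0) = u(1) = 0 by central differences
    and Newton's method; the tridiagonal Jacobian systems are solved with
    the Thomas algorithm.  Returns the interior values u_1..u_m."""
    h = 1.0 / (m + 1)
    u = [0.0] * m                          # initial guess u = 0 (lower branch)
    for _ in range(max_iter):
        # residual F_i = (u_{i-1} - 2u_i + u_{i+1})/h^2 + lam*exp(u_i)
        f = []
        for i in range(m):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < m - 1 else 0.0
            f.append((left - 2.0 * u[i] + right) / h ** 2 + lam * math.exp(u[i]))
        if max(abs(v) for v in f) < tol:
            break
        # tridiagonal Jacobian: off-diagonals 1/h^2, diagonal -2/h^2 + lam*e^u
        a = [1.0 / h ** 2] * m             # sub-diagonal
        b = [-2.0 / h ** 2 + lam * math.exp(u[i]) for i in range(m)]
        c = [1.0 / h ** 2] * m             # super-diagonal
        d = [-v for v in f]                # solve J * du = -F
        for i in range(1, m):              # Thomas algorithm: forward sweep
            w = a[i] / b[i - 1]
            b[i] -= w * c[i - 1]
            d[i] -= w * d[i - 1]
        du = [0.0] * m                     # back substitution
        du[-1] = d[-1] / b[-1]
        for i in range(m - 2, -1, -1):
            du[i] = (d[i] - c[i] * du[i + 1]) / b[i]
        u = [u[i] + du[i] for i in range(m)]
    return u

u = bratu_1d(1.0)                          # lower-branch solution for lambda = 1
```

For λ = 1 < λ c the iteration converges in a handful of steps to the symmetric lower-branch solution, whose maximum 2 ln cosh(θ/4) ≈ 0.140 sits at x = 1/2; the 2-D solver in the experiments replaces the Thomas solve by a banded solve of the M × M system.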

Conclusions
Newton-type algorithms produce second- and fourth-order convergence for one- and two-step methods, respectively. By using an appropriate weight function, we have improved the fourth-order Newton-type method to fifth order while keeping the same number of function evaluations. Furthermore, a multi-step version producing higher-order convergence has been proposed, with the fifth-order method as its base, for solving nonlinear systems of equations. The new schemes achieve higher-order convergence without using second- or higher-order Fréchet derivatives. Computational efficiencies of the proposed algorithms were calculated based on computational cost, and the ratio-based comparison with other methods shows that the presented algorithms are better than many other algorithms. Through several examples, we have illustrated numerically that the new schemes are superior to Newton's method and to some recently proposed fourth-, fifth-, and eighth-order algorithms; the computational results also indicate that the proposed methods have robust and efficient convergence behavior. To check the application aspect of the new algorithms, we have implemented them on the one-dimensional and two-dimensional Bratu problems, where the results indicate that the presented methods perform better than some available algorithms.
Funding: This research received no external funding.

Acknowledgments:
The authors would like to thank the anonymous reviewers for their constructive comments and useful suggestions which greatly helped to improve this paper.

Conflicts of Interest:
The authors declare no conflict of interest.