Higher-Order Iteration Schemes for Solving Nonlinear Systems of Equations

Abstract: We present a three-step family of iterative methods for solving systems of nonlinear equations. The family generalizes the well-known fourth-order King's family to the multidimensional case. The convergence analysis of the methods is provided under mild conditions. The theoretical discussion is supported by numerical experiments on several application-oriented problems, whose results demonstrate the validity and reliability of the suggested methods.


Introduction
Systems of nonlinear equations (SNEs) arise in numerous phenomena across science and engineering. Given a nonlinear system F(X) = 0, where F : R^k → R^k is a Fréchet differentiable function with F(X) = (f_1(X), f_2(X), . . . , f_k(X))^T and X = (x_1, x_2, . . . , x_k)^T ∈ R^k, we are interested in computing a vector X* = (x*_1, x*_2, . . . , x*_k)^T such that F(X*) = 0. The classical Newton's method [1] is the most famous procedure for solving SNEs. It is given by

X^(k+1) = X^(k) − [F′(X^(k))]^{−1} F(X^(k)), k = 0, 1, 2, . . . .   (1)

It converges quadratically if the function F is continuously differentiable and the initial approximation is close enough to the solution. The literature contains a variety of higher-order methods that improve on the convergence order of Newton's scheme. In what follows, m-F and m-F′ denote m evaluations of F and of the Jacobian F′ per iteration, respectively. For example, several authors have proposed cubically convergent methods [2-5] requiring 2-F′, 1-F, and two matrix inversions per step. In [6], the authors developed another family of third-order methods, one of which requires 1-F′ and 3-F, whereas the other requires 1-F′ and 4-F evaluations and two matrix inversions per iteration. In [7], Darvishi and Barati utilized 2-F, 2-F′, and two matrix inversions per step to propose a new third-order scheme. Similarly, several third-order methods have been proposed in [8,9] that require 2-F, 1-F′, and one matrix inversion. Babajee et al. [10] presented a fourth-order method that consumes 1-F, 2-F′, and two matrix inversions per iteration. Another fourth-order method is developed in [11] using two evaluations of the function and of the Jacobian and one matrix inversion, whereas the authors of [12] propose a further fourth-order method utilizing 3-F, 1-F′, and one matrix inversion per iteration.
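As a point of reference for the higher-order schemes discussed below, the Newton iteration (1) can be sketched in a few lines. This is an illustrative implementation (in Python, rather than the multiple-precision setting used in the experiments later); the 2 × 2 test system is not taken from the paper.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton's method for F(X) = 0; quadratic convergence
    near a simple root when the Jacobian J is non-singular."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Solve the linear system J(x) d = -F(x) rather than forming J^{-1}
        d = np.linalg.solve(J(x), -F(x))
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Illustrative 2x2 system (not from the paper): x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Solving the linear system at each step, instead of explicitly inverting the Jacobian, is the standard way to realize the "matrix inversion" counted in the cost comparisons above.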
A fifth-order method in [13] requires three function evaluations and only one Jacobian evaluation, together with the solution of three linear systems with the same coefficient matrix per iteration.
In pursuit of faster algorithms, researchers have also developed fifth- and sixth-order methods, for example, in [6,14-16]. In [15], Narang et al. extended Babajee's existing fourth-order scheme [17] to SNEs and developed a sixth-order convergent family of Chebyshev-Halley-type methods. Their scheme requires two F and two F′ evaluations and the solution of two linear systems per iteration. One can notice that attempts to improve the order of convergence of an iterative method usually increase the computational cost per iteration. The cost is especially high if the method involves the second-order Fréchet derivative F″(X), which is a major limitation of higher-order methods. Thus, when developing new iterative methods, we should try to keep the computational cost low. With this intention, we have developed a three-step sixth-order family of methods requiring two F, two F′, and one matrix inversion per iteration. This family compares favorably in efficiency with existing methods and has been found particularly effective in solving large-scale nonlinear systems.
The outline of the manuscript is as follows. In Section 2, a new class of sixth-order schemes and its convergence analysis are presented. In Section 3, we present numerous illustrative examples to validate the theoretical results. Finally, Section 4 contains some conclusions.

Design of the King's Family for the Multidimensional Case
In this section, we propose a new three-point extension of King's method [18-21] having sixth-order convergence. For this purpose, we consider the well-known fourth-order King's family, which is given by

y_k = x_k − f(x_k) / f′(x_k),
x_{k+1} = y_k − [f(x_k) + α f(y_k)] / [f(x_k) + (α − 2) f(y_k)] · f(y_k) / f′(x_k),   (2)

where α is a real parameter and u_k = f(y_k)/f(x_k). For α = 0, one obtains the well-known Ostrowski's method [22-24].
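For reference, the scalar family (2) is straightforward to implement. The following sketch uses an illustrative test function not from the paper; setting α = 0 reduces it to Ostrowski's method.

```python
def king(f, df, x0, alpha=0.0, tol=1e-14, max_iter=25):
    """One-dimensional King's family (fourth order).
    alpha = 0 gives Ostrowski's method."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:          # already at a root
            return x
        y = x - fx / df(x)     # Newton predictor step
        fy = f(y)
        # Weighted corrector step of King's family
        x_new = y - (fx + alpha * fy) / (fx + (alpha - 2.0) * fy) * fy / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: root of f(x) = x^3 - 2 (the cube root of 2), with alpha = 1/2
root = king(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0, alpha=0.5)
```

Note that only one derivative evaluation f′(x_k) is reused across both substeps, which is what makes the family cheap per iteration.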
Let us now modify the method (2) for SNEs. The scalar quotient u_k = f(y_k)/f(x_k) has no direct multidimensional analogue, so we rewrite it in terms of a divided difference operator, replacing u_k by

U^(k) = I − [x^(k), y^(k); F] F′(x^(k))^{−1},

where [·, ·; F] is a divided difference of order one. Finally, we rewrite the scheme (2) for SNEs with one additional sub-step, obtaining the proposed three-step family (3), where α is a free parameter. It is now necessary to analyze the convergence conditions of this modified King's class of methods. In Theorem 1, we demonstrate the convergence order of the scheme (3). We use the following results and notation [25] to prove the convergence.
Let F : Ω ⊆ R^k → R^k be sufficiently differentiable in Ω. The qth derivative of F at ω ∈ Ω, q ≥ 1, can be viewed as a q-linear function F^(q)(ω) : R^k × · · · × R^k → R^k, such that F^(q)(ω)(v_1, . . . , v_q) ∈ R^k. It is easy to observe that
1. F^(q)(ω)(v_1, . . . , v_{q−1}, ·) ∈ L(R^k);
2. F^(q)(ω)(v_{σ(1)}, . . . , v_{σ(q)}) = F^(q)(ω)(v_1, . . . , v_q), for any permutation σ of {1, 2, . . . , q}.
Using the above relations, we introduce the notation C_q = (1/q!) [F′(ξ*)]^{−1} F^(q)(ξ*), q ≥ 2. Applying Taylor's expansion for ξ* + h ∈ R^k in a neighborhood of a solution ξ* of the given nonlinear system, one gets

F(ξ* + h) = F′(ξ*)[ h + C_2 h^2 + C_3 h^3 + · · · + C_p h^p ] + O(h^{p+1}).   (4)

Similarly, we can express F′ as

F′(ξ* + h) = F′(ξ*)[ I + 2C_2 h + 3C_3 h^2 + · · · + pC_p h^{p−1} ] + O(h^p),   (5)

where I denotes the identity matrix; note that qC_q h^{q−1} ∈ L(R^k). From Equation (5), we obtain

[F′(ξ* + h)]^{−1} = [ I + X_1 h + X_2 h^2 + X_3 h^3 + · · · ] [F′(ξ*)]^{−1} + O(h^p),   (6)

where X_1 = −2C_2, X_2 = 4C_2^2 − 3C_3, X_3 = −8C_2^3 + 6C_2C_3 + 6C_3C_2 − 4C_4, etc. Let us denote e^(k) = x^(k) − ξ* as the error at the kth iteration. Then, the error equation is given as follows,

e^(k+1) = L (e^(k))^p + O((e^(k))^{p+1}),

where p is the order of convergence, L is a p-linear function, and (e^(k))^p stands for the p-tuple (e^(k), e^(k), · · · , e^(k)).

Theorem 1. Let F : Ω ⊆ R^k → R^k be a sufficiently differentiable function defined on a convex set Ω containing the zero ξ* of F. Assume that F′(x) is continuous and non-singular at ξ*. If the initial guess x^(0) is close enough to ξ*, the iterative scheme (3) attains sixth-order convergence for each α.
Proof. Let e^(k) = x^(k) − ξ* be the error at the kth iteration. Expanding F(x^(k)) and F′(x^(k)) by Taylor's expansion in a neighborhood of ξ*, we get

F(x^(k)) = F′(ξ*)[ e^(k) + C_2 (e^(k))^2 + C_3 (e^(k))^3 + C_4 (e^(k))^4 + C_5 (e^(k))^5 + C_6 (e^(k))^6 ] + O((e^(k))^7),   (7)

and

F′(x^(k)) = F′(ξ*)[ I + 2C_2 e^(k) + 3C_3 (e^(k))^2 + 4C_4 (e^(k))^3 + 5C_5 (e^(k))^4 + 6C_6 (e^(k))^5 ] + O((e^(k))^6),   (8)

where I is the identity matrix of size k × k. From (8), the inverse of the Jacobian expands as

[F′(x^(k))]^{−1} = [ I − 2C_2 e^(k) + (4C_2^2 − 3C_3)(e^(k))^2 + · · · ] [F′(ξ*)]^{−1}.   (9)

Inserting (7) and (9) in the first substep of (3), which is a Newton step, we obtain

y^(k) − ξ* = C_2 (e^(k))^2 + 2(C_3 − C_2^2)(e^(k))^3 + O((e^(k))^4).   (10)

Expanding F(y^(k)) and the divided difference [x^(k), y^(k); F] about ξ* in the same manner and substituting the resulting expansions into the second and third substeps of (3), all the terms up to order five cancel for every value of α, and the error equation reduces to

e^(k+1) = L(α)(e^(k))^6 + O((e^(k))^7),

where L(α) is a 6-linear function depending on the parameter α and on the constants C_q. Therefore, the scheme (3) has sixth-order convergence.
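The order established in Theorem 1 can be verified numerically via the computational order of convergence, ρ ≈ ln(‖x^(k+1) − x^(k)‖/‖x^(k) − x^(k−1)‖) / ln(‖x^(k) − x^(k−1)‖/‖x^(k−1) − x^(k−2)‖). A minimal sketch follows, illustrated with a Newton sequence of known order two rather than with the scheme (3) itself:

```python
import math

def coc(xs):
    """Estimate the computational order of convergence from the
    last few iterates xs of a scalar sequence."""
    # Differences between consecutive approximations
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    # rho ~ ln(d_k / d_{k-1}) / ln(d_{k-1} / d_{k-2})
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton iterates for f(x) = x^2 - 2 starting at x = 2 (order ~ 2)
xs = [2.0]
for _ in range(5):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
order = coc(xs)
```

The same estimator, applied componentwise with norms, yields the computational convergence orders reported in the tables of the next section.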

Numerical Experiments
Here, we check the efficiency and effectiveness of our scheme on real-life and standard academic test problems. We consider five examples, detailed in Examples 1-5 below together with their starting points and the zeros of the considered nonlinear systems. We employ our sixth-order scheme (3), called (PM), and compare its computational performance with existing methods of the same order. First, we compare (3) with the sixth-order families proposed by Abbasbandy et al. [26] and Hueso et al. [27]; we choose their best expressions (8) and (14)-(15), for t_1 = −9/4 and s_2 = 9/8, denoted by (AM_6) and (HM_6), respectively. Moreover, the newly proposed scheme is compared with the sixth-order iterative families of Sharma and Arora [28] and Wang and Li [29]; out of these works we choose the methods (13) and (6), called (SM_6) and (WM_6), respectively. Finally, we compare (3) with the sixth-order methods suggested by Narang et al. [15] and Lotfi et al. [16]; we consider their methods (3.1), for λ = 1, β = 2, p = 1, and q = 3/2, and (5), called (MM_6) and (LM_6), respectively. All the iterative expressions are mentioned below.
Method SM 6 : where s and t are real parameters.
Method LM_6: The abscissas t_j and the weights w_j appearing in this method are known and are depicted in Table 1 for t = 8 (for more details, please see Example 1). In Tables 2-6, we report the iteration index (n), the residual errors ‖F(x^(k))‖, the error ‖x^(k+1) − x^(k)‖, and the computational order of convergence. In addition, η is the last calculated value of ‖x^(k) − x^(k−1)‖ / ‖x^(k−1) − x^(k−2)‖^6, which estimates the asymptotic error constant. Finally, a comparison of the number of iterations taken by the different methods on the numerical Examples 1-5 is depicted in Table 7.
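The Gauss-Legendre abscissas and weights referred to here (and tabulated in Table 1) can be generated rather than copied. A sketch using NumPy, under the assumption that the integral equation of Example 1 lives on [0, 1], so the standard nodes on [−1, 1] are shifted accordingly:

```python
import numpy as np

# Gauss-Legendre nodes and weights on [-1, 1] for m = 8 points
m = 8
nodes, weights = np.polynomial.legendre.leggauss(m)

# Affine shift to the interval [0, 1], as used in the
# integral-equation discretization of Example 1 (assumed interval)
t = 0.5 * (nodes + 1.0)
w = 0.5 * weights
```

With 8 points the rule integrates polynomials up to degree 15 exactly, so the quadrature error is negligible compared with the iteration error in the tables.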
All the computations have been performed in Mathematica 9 using multiple-precision arithmetic with 1000 digits of mantissa, which minimizes round-off errors. In all the tables, a(±b) stands for a × 10^(±b). The stopping criterion for the programs is based on the size of the difference between consecutive approximations.

Example 1. Using the Gauss-Legendre quadrature formula, we transform the considered integral equation into a finite-dimensional problem, where t_j and w_j denote the abscissas and weights, respectively; they are determined for t = 8 nodes. Denoting y(t_i) by y_i (i = 1, 2, . . . , 8), we obtain a nonlinear system of eight equations. The abscissas t_j and the weights w_j are depicted in Table 1. The numerical results, given in Table 2, show that the proposed methods PM1_6 and PM2_6 have better residual errors in comparison with the existing ones. In addition, smaller asymptotic error constants also belong to our methods PM1_6 and PM2_6.

Example 2. Let us consider the Van der Pol equation [30], which governs the flow of current in a vacuum tube, defined as

y″ − µ(1 − y^2)y′ + y = 0,

with boundary conditions y(0) = 0, y(2) = 1. Further, we divide the interval [0, 2] into n subintervals of length h = 2/n with nodes x_i = ih, i = 0, 1, . . . , n, and we set y_i = y(x_i), i = 0, 1, . . . , n.
Let us consider µ = 1/2 and a suitable initial approximation y^(0). The numerical results are displayed in Table 3. It is found that the newly proposed methods perform better in all aspects, whereas the existing methods show larger residual errors.

Example 3. The 2D Bratu problem [31,32] is defined as

u_xx + u_tt + C e^u = 0, (x, t) ∈ Ω = [0, 1] × [0, 1],

with boundary conditions u = 0 on ∂Ω.
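A finite-difference residual for the Van der Pol boundary value problem of Example 2 can be sketched as follows. The grid size n, the helper name vdp_residual, and the use of central differences for both y″ and y′ are illustrative assumptions, not specifics from the paper:

```python
import numpy as np

def vdp_residual(y, mu=0.5, n=20, a=0.0, b=2.0, ya=0.0, yb=1.0):
    """Residual of a central finite-difference discretization of the
    Van der Pol BVP y'' - mu*(1 - y^2)*y' + y = 0, y(a)=ya, y(b)=yb.
    y holds the n-1 interior unknowns (grid size n is assumed)."""
    h = (b - a) / n
    # Full grid including the two boundary values
    Y = np.concatenate(([ya], y, [yb]))
    F = np.empty(n - 1)
    for i in range(1, n):
        ypp = (Y[i + 1] - 2.0 * Y[i] + Y[i - 1]) / h**2   # central y''
        yp = (Y[i + 1] - Y[i - 1]) / (2.0 * h)            # central y'
        F[i - 1] = ypp - mu * (1.0 - Y[i]**2) * yp + Y[i]
    return F

# Linear initial guess between the boundary values, at the interior nodes
y0 = np.linspace(0.0, 1.0, 21)[1:-1]
r = vdp_residual(y0)
```

Feeding this residual (and a corresponding Jacobian) to a Newton-type iteration such as (3) reproduces the setup of Example 2.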
Using finite-difference discretization, the given nonlinear PDE can be reduced to a SNE. Let Θ_{i,j} = u(x_i, t_j) be the numerical solution at the grid points of the mesh, let M_1 and M_2 be the number of steps in the x and t directions, respectively, and let h and k be the respective step sizes. To solve the given PDE, we apply the central-difference formula to u_xx and u_tt in the following way,

u_xx ≈ (Θ_{i+1,j} − 2Θ_{i,j} + Θ_{i−1,j}) / h^2,  u_tt ≈ (Θ_{i,j+1} − 2Θ_{i,j} + Θ_{i,j−1}) / k^2.   (27)

By using expression (27) in (26) with h = k and after some simplification, we have

Θ_{i+1,j} + Θ_{i−1,j} + Θ_{i,j+1} + Θ_{i,j−1} − 4Θ_{i,j} + C h^2 e^{Θ_{i,j}} = 0,   (28)

for all interior grid points. By choosing M_1 = M_2 = 11, C = 0.1, and h = 1/11, we obtain a system of nonlinear equations on a 10 × 10 interior grid (100 unknowns), with the initial guess x_0 = 0.1(sin(πh)sin(πk), sin(2πh)sin(2πk), . . . , sin(10πh)sin(10πk))^T. Numerical estimations are given in Table 4. The numerical results demonstrate that the new methods have much better error estimates and computational order of convergence than their competitors.

Example 4. Consider another typical nonlinear problem, namely Fisher's equation [33] with homogeneous Neumann boundary conditions and diffusion coefficient H,

u_t = H u_xx + u(1 − u).   (29)

Again using finite-difference discretization, Equation (29) reduces to a SNE. Let Θ_{i,j} = u(x_i, t_j) be its approximate solution at the grid points of the mesh, let M_1 and M_2 be the number of steps in the x and t directions, and let h and k be the respective step sizes. We apply the central-difference formula to u_xx, a backward difference to u_t(x_i, t_j), and a forward difference to u_x(x_i, t_j) (for the Neumann boundary conditions). By adopting these approximations in (29) and after some simplification, we obtain the nonlinear system (31) over the interior grid points. Taking M_1 = M_2 = 5, h = k = 1/5, and H = 1, the problem reduces to a nonlinear system on a 5 × 5 grid (25 unknowns), with the initial guess x_0 = (1 + i/25)^T, i = 1, 2, . . . , 25. All the numerical results are shown in Table 5.
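The 5-point Bratu residual (28) can be sketched directly. The function name, the padding trick for the zero Dirichlet boundary, and the plain double loop are illustrative choices; the constants C = 0.1 and h = 1/11 follow Example 3:

```python
import numpy as np

def bratu_residual(U, C=0.1, h=1.0 / 11.0):
    """Residual of the 5-point finite-difference discretization of
    u_xx + u_tt + C*exp(u) = 0 with u = 0 on the boundary (h = k assumed).
    U is an m x m array of interior grid values."""
    m = U.shape[0]
    # Pad with the zero Dirichlet boundary values
    P = np.pad(U, 1)
    F = np.empty_like(U)
    for i in range(m):
        for j in range(m):
            F[i, j] = (P[i + 2, j + 1] + P[i, j + 1]          # x-neighbors
                       + P[i + 1, j + 2] + P[i + 1, j]        # t-neighbors
                       - 4.0 * P[i + 1, j + 1]
                       + h**2 * C * np.exp(P[i + 1, j + 1]))
    return F

# 10 x 10 interior grid as in Example 3, zero initial guess
R = bratu_residual(np.zeros((10, 10)))
```

Flattening U and F into vectors of length 100 puts this residual in the form F(X) = 0 expected by the iterative schemes compared in Table 4.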
Numerical results show that our methods have better computational efficiency than the existing schemes, in terms of residual errors and of the difference between consecutive approximations.

Example 5. Let us consider the following nonlinear system. To obtain a large SNE of 200 equations, we choose n = 200 and the initial approximation x^(0) = (1.25, 1.25, 1.25, · · · , 1.25)^T (200 times) for this problem. The required solution of this problem is ξ* = (1, 1, 1, · · · , 1)^T (200 times). The obtained results can be observed in Table 6. It can be easily seen that the proposed scheme performs well in this case, in terms of error estimates, as compared to the available methods of the same nature.