Generalized High-Order Classes for Solving Nonlinear Systems and Their Applications

Abstract: A generalized high-order class for approximating the solutions of nonlinear systems of equations is introduced. First, from a fourth-order iterative family for solving nonlinear equations, we propose an extension to nonlinear systems of equations that preserves the order of convergence but replaces the Jacobian with a divided difference in the weight functions for systems. The proposed GH family of methods is designed from this fourth-order family using both the composition and the weight function techniques. The resulting family has order of convergence 9. The performance of a particular iterative method from each family is analyzed by solving different test systems and also Fisher's problem, showing the good performance of the new methods.


Introduction
During the last few decades, there has been a great deal of interest in designing iterative schemes for approximating the solutions of both nonlinear equations and nonlinear systems of equations. Since the problem of finding a zero x* ∈ D of a system of nonlinear equations F(x) = 0, where F : D ⊆ R^n → R^n, appears throughout science and engineering, iterative methods are a natural tool for approximating its solutions.
There is an extensive literature on iterative methods for solving nonlinear equations; good overviews can be found in [1,2]. However, the extension of schemes from equations to systems of equations is not always straightforward. Based on the Kung–Traub conjecture for nonlinear equations [3], several optimal three-step methods can be found in recent works [4,5]. Other interesting methods in the literature reach even higher orders of convergence [6].
In this paper, we present a new family of iterative methods for solving nonlinear systems of equations with convergence order nine. This class, named the GH family, has two weight functions and four steps in its iterative expression. It requires one evaluation of the Jacobian matrix and four functional evaluations of the nonlinear function per iteration. First, a fourth-order family with a single weight function is proposed; this family is the basis for designing the iterative schemes of the GH family by means of a composition-type technique. The development of the families is covered in Section 2. In order to check the ability of the proposed schemes to solve nonlinear systems of equations, Section 3 shows the numerical results obtained when the fourth-order scheme is used for solving Fisher's partial differential equation and when the ninth-order family is applied to several test functions. Finally, Section 4 collects the main conclusions of the work.
Some basic definitions must be recalled in order to analyze the order of convergence of the methods. Further details, as well as the notation used in this work, can be found in [4,7].

The GH Family for Solving Systems of Nonlinear Equations
In [8], a new family of iterative methods for solving nonlinear equations is introduced. Its iterative expression, denoted by (1), involves a weight function G(η_k) with η_k = f(y_k)/f(x_k). Its order of convergence is analyzed in the following result; a complete proof can be found in [8]. Our purpose in this paper is to extend this result to the case of multidimensional problems.
Theorem 1. Let f : Ω ⊂ R → R be a sufficiently differentiable real function in an open interval Ω and let x* ∈ Ω be a simple root of f(x) = 0. If x_0 is close enough to x* and G(η) satisfies the conditions G(0) = G′(0) = 1 and G″(0) = 4, then all the iterative methods of family (1) converge to x* with fourth order of convergence.

In order to extend family (1) for solving nonlinear systems, an alternative definition of the variable η_k is required; it is obtained by rewriting the quotient f(y_k)/f(x_k) in terms of a divided difference, which is meaningful in the multidimensional case. The extension of family (1) for solving systems of nonlinear equations, denoted by (3), then uses the variable η_k defined in (4), where G : R^{n×n} → R^{n×n} is a matrix weight function. Furthermore, if X = R^{n×n} denotes the space of all n × n real matrices, then we can define (see [9]) G : X → X such that its Fréchet derivatives satisfy:

(a) G′(u)(v) = G_1 uv, where G′ : X → L(X), G_1 ∈ R, and L(X) denotes the space of linear mappings from X to itself.

Moreover, the definition of η_k in (4) uses the divided difference operator of F on R^n, [·, ·; F] : Ω × Ω ⊂ R^n × R^n → L(R^n), defined in [10] by the secant condition [x, y; F](x − y) = F(x) − F(y). The next result shows the order of convergence of family (3).
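Since the multidimensional variable η_k relies on this divided difference operator, a small numerical sketch may help. The Python code below (our illustration; the paper's experiments use Matlab) builds a first-order divided difference matrix componentwise, a standard construction that satisfies the defining secant condition [y, x; F](y − x) = F(y) − F(x); the test function F is an arbitrary example, not one taken from the paper.

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference operator [y, x; F] on R^n,
    built componentwise: column j mixes the first coordinates of y
    with the last coordinates of x. By telescoping, the resulting
    matrix satisfies [y, x; F](y - x) = F(y) - F(x) exactly."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        zj  = np.concatenate((y[:j + 1], x[j + 1:]))   # y_1..y_{j+1}, x_{j+2}..x_n
        zjm = np.concatenate((y[:j], x[j:]))           # y_1..y_j, x_{j+1}..x_n
        M[:, j] = (F(zj) - F(zjm)) / (y[j] - x[j])
    return M

# Illustrative nonlinear function F(x) = (x1^2 + x2 - 3, x1 + x2^2 - 5)
F = lambda v: np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])
x = np.array([1.0, 2.0])
y = np.array([1.5, 1.7])
M = divided_difference(F, y, x)
# The secant condition holds up to rounding error:
assert np.allclose(M @ (y - x), F(y) - F(x))
```

Any operator satisfying the secant condition is admissible; this componentwise choice is only one standard realization.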

Theorem 2.
Let the nonlinear function F : Ω ⊆ R^n → R^n be sufficiently Fréchet differentiable in an open convex set Ω and let x* ∈ Ω be a solution of the system F(x) = 0. It must also be satisfied that F′(x) is continuous and nonsingular at x*. Let us suppose that the initial guess x^(0) ∈ R^n is close enough to x* and that G(η_k) satisfies G(0) = I, G_1 = 1, G_2 = 4, and |G_3| < +∞, where I denotes the identity matrix of size n × n. Then, family (3) converges to x* with order of convergence 4.

Proof. Let us denote by e^(k) = x^(k) − x* the error at iteration k. By using Taylor series expansions around x*, F(x^(k)) and F′(x^(k)) can be expanded in powers of e^(k); in the same way, an expansion of [F′(x^(k))]^(−1) is obtained. As [F′(x^(k))]^(−1)F′(x^(k)) = I, we have X_1 = I, and the remaining coefficients follow from (5) and (6). The divided difference operator is given by the Genocchi–Hermite formula [10]; expanding F′(x + th) in Taylor series around x and integrating, an expansion of [x + h, x; F] is obtained. In particular, for y^(k) given by the first step of family (3), expansion (12) is obtained, with coefficients S_i, i ≥ 2. Now, the value of η_k and, by using (12), its successive powers are computed. Using the Taylor expansion of G(η_k) around 0 yields the expansion of the weight function and, finally, the error equation of family (3). By applying the conditions G(0) = I, G_1 = 1, and G_2 = 4, the error equation shows that the iterative family (3) is fourth-order convergent.
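The Genocchi–Hermite representation used in the proof, [x + h, x; F] = ∫₀¹ F′(x + th) dt, can be checked numerically. The sketch below (our illustration; the function F and the points are arbitrary choices, not the paper's) approximates the integral by a composite trapezoidal rule and verifies the secant identity that the representation implies.

```python
import numpy as np

def F(v):
    # Illustrative quadratic map F(x) = (x1^2 + x2 - 3, x1 + x2^2 - 5)
    return np.array([v[0]**2 + v[1] - 3.0, v[0] + v[1]**2 - 5.0])

def jacobian(v):
    # Exact Jacobian of F above
    return np.array([[2*v[0], 1.0], [1.0, 2*v[1]]])

def genocchi_hermite_dd(x, h, m=2001):
    """Approximate [x + h, x; F] = integral_0^1 F'(x + t h) dt
    by the composite trapezoidal rule on m equally spaced nodes."""
    ts = np.linspace(0.0, 1.0, m)
    Js = np.array([jacobian(x + t * h) for t in ts])
    dt = ts[1] - ts[0]
    return ((Js[0] + Js[-1]) / 2 + Js[1:-1].sum(axis=0)) * dt

x = np.array([1.0, 2.0])
h = np.array([0.5, -0.3])
DD = genocchi_hermite_dd(x, h)
# The integral representation reproduces the secant equation
# [x + h, x; F] h = F(x + h) - F(x):
assert np.allclose(DD @ h, F(x + h) - F(x), atol=1e-8)
```

For this quadratic F, the integrand is linear in t, so the trapezoidal rule is exact up to rounding; for general F, the quadrature error decreases as m grows.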
In order to design a higher-order class with the same structure, we now consider the four-step iterative family (13), where G(η_k) and H(τ_k) are two matrix weight functions. Let us recall that the iterative scheme (13), from now on called the GH family, has been obtained by composing the iterative family (3) with itself. The convergence of the GH family is analyzed in the next result.

Theorem 3.
Let the nonlinear function F : Ω ⊆ R^n → R^n be sufficiently Fréchet differentiable in an open convex set Ω and let x* ∈ Ω be a solution of the system F(x) = 0. It must also be satisfied that F′(x) is continuous and nonsingular at x*. Let us suppose that the initial estimation x^(0) ∈ R^n is close enough to x* and that the weight functions G(η_k) and H(τ_k) satisfy suitable conditions on their values and derivatives at 0, analogous to those of Theorem 2. Then, all the iterative methods of family (13) converge to x* with order of convergence 9.
Proof. By using the developments in the proof of Theorem 2 (with more terms in the error expressions) and proceeding in the same way, we obtain for the second step of family (13) an error expansion, labeled (14), whose remainder is of order 10 in e^(k). The coefficients K_j in (14) are obtained using expressions (7), (9), (11), and (12); from these expressions, the coefficients in (14) are computed up to terms of order 6.
Therefore, the Taylor development of the variable τ_k of the weight function H is obtained. When the conditions in the theorem for the weight function H are applied, we get the corresponding coefficients, with P_1 = N_1. Finally, the error equation of the GH family follows. As it can be proven that T_i = 0 for 4 ≤ i ≤ 8, the GH family has order of convergence 9.

Numerical Experience
In this section, we apply the iterative families (3) and (13) to several nonlinear systems of equations. In particular, the performance of family (3) is checked by solving Fisher's partial differential equation.

Application of Family (3) to Fisher's Equation
Fisher's equation [11], v_t = D v_xx + r v (1 − v/c), represents a model of diffusion in population dynamics, where D > 0 is the diffusion constant, r is the growth rate of the species, and c is the carrying capacity. In this section, a specific case of Fisher's equation is solved using iterative methods. The domain of x is the interval [−25, 50]. The boundary conditions are v(−25, t) = 1 and v(50, t) = 0 for t > 0, while the initial condition (17) is

v(x, 0) = 1 for x < −10, 0 for −10 ≤ x ≤ 10, 1/4 for 10 < x < 20, and 0 for x ≥ 20.
Discretizing (16) by using divided differences, the problem can be solved as a family of nonlinear systems. For this purpose, we first select a grid of points in the domain, (x_i, t_j) ∈ [−25, 50] × [0, T_max], where x_i = −25 + ih, i = 0, 1, …, nx, are the nodes in the spatial variable, t_j = jk, j = 0, 1, …, nt, are the nodes in time, h and k are the spatial and time steps, respectively, and nx and nt are the numbers of subintervals for the variables x and t, respectively. Then, an approximation of the solution at each point (x_i, t_j) of the mesh is obtained, that is, v_{i,j} ≈ v(x_i, t_j). Applying backward differences to the time derivative and central differences to the spatial one, the finite difference scheme for the approximated problem is

(v_{i,j} − v_{i,j−1})/k = D (v_{i+1,j} − 2v_{i,j} + v_{i−1,j})/h² + r v_{i,j}(1 − v_{i,j}/c),   (18)

for i = 1, …, nx − 1, j = 1, …, nt. After some algebraic manipulations, (18) results in (19), where λ = k/h². Depending on the number of subintervals used in the discretization of the variable x, a nonlinear system of size (nx − 1) × (nx − 1) is obtained by solving (19). The nonlinear system (20), defined for a fixed j, involves a matrix A which, due to the central differences, is tridiagonal. Each system gives an approximated solution of the problem at a time step t_j from the one obtained at the instant t_{j−1}, so we begin solving the systems using the solution at t_0 provided by (17).
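One time step of this implicit discretization can be sketched as a nonlinear system solved by plain Newton iteration. In the Python sketch below the parameter values D = r = c = 1, the time step k, and the coarse grid are illustrative assumptions (the paper's specific case and methods S4 and G4_1 are not reproduced here); the piecewise initial profile follows the description in the text.

```python
import numpy as np

# Illustrative parameters (assumed, not the paper's exact choices)
D, r, c = 1.0, 1.0, 1.0
a, b, nx = -25.0, 50.0, 20
h = (b - a) / nx
k = 0.1
lam = k / h**2                      # lambda = k / h^2
xs = a + h * np.arange(nx + 1)

def initial_condition(x):
    # Piecewise-constant initial profile described in the text
    if x < -10: return 1.0
    if x <= 10: return 0.0
    if x < 20:  return 0.25
    return 0.0

v_prev = np.array([initial_condition(x) for x in xs])

def residual(v_in):
    # Implicit backward-time / central-space scheme; v_in are the
    # interior unknowns, boundaries fixed at v(-25)=1, v(50)=0.
    v = np.concatenate(([1.0], v_in, [0.0]))
    lap = v[2:] - 2 * v[1:-1] + v[:-2]
    return v[1:-1] - v_prev[1:-1] - lam * D * lap \
           - k * r * v[1:-1] * (1 - v[1:-1] / c)

def jac(v_in):
    # Tridiagonal Jacobian of the residual
    n = len(v_in)
    J = np.zeros((n, n))
    np.fill_diagonal(J, 1 + 2 * lam * D - k * r * (1 - 2 * v_in / c))
    idx = np.arange(n - 1)
    J[idx, idx + 1] = -lam * D
    J[idx + 1, idx] = -lam * D
    return J

v = v_prev[1:-1].copy()
for _ in range(20):                 # Newton iteration for one time step
    v -= np.linalg.solve(jac(v), residual(v))
    if np.linalg.norm(residual(v)) < 1e-12:
        break
assert np.linalg.norm(residual(v)) < 1e-10
```

Advancing in time simply repeats this solve with v_prev replaced by the newly computed profile; higher-order schemes such as family (3) replace the Newton step inside the loop.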
Using iterative methods for solving nonlinear systems, such as family (3), system (20) can be solved. In order to compare the numerical results, another iterative scheme with the same order of convergence as family (3) has been chosen: the method by Sharma et al. [12], denoted in this work by S4, which is fourth-order convergent and whose iterative expression involves the matrix L_k = −I + (9/4)[F′(y^(k))]^(−1)F′(x^(k)) + (3/4)[F′(x^(k))]^(−1)F′(y^(k)).
On the other hand, in order to check the numerical behavior of family (3), it is necessary to select a weight function satisfying the conditions of Theorem 2, thus obtaining a particular method of this family. Several functions satisfy the conditions of the theorem; three of them, collected in (22), are used below. An efficiency comparison between the proposed schemes is given in terms of the computational cost and the number of functional evaluations; this comparison helps to choose the most efficient method of family (3). For this purpose, we can use the efficiency index defined by Ostrowski [13] as I = p^(1/d), where p is the order of the method and d is the number of new functional evaluations per iteration required by the method. The computational cost of solving a linear system of equations depends on its size; as the proposed methods can be used for solving large systems of equations, this cost must be taken into account. Thus, we compare the performance of the methods with the computational efficiency index introduced in [7] as CE = p^(1/(d+op)), where op is the number of operations (products and quotients) per iteration. Table 1 summarizes the computational efficiency index and the number of functional evaluations for each method, where G4_1, G4_2, and G4_3 denote the iterative schemes resulting from family (3) with the weight functions in (22), respectively.
For each method, Table 1 shows the number of distinct evaluations of the function (F), of the Jacobian matrix (F′), and of the divided differences used by the method in each iteration (nDD). All the methods need n and n² scalar evaluations to compute F and F′, respectively, and n(n − 1)/2 for each divided difference operator. Regarding the operational cost, Mv is the number of matrix–vector products, with n² operations each. To compute an inverse linear operator, one may solve an n × n linear system of equations, performing an LU decomposition and solving two triangular systems, with a total cost of (1/3)n³ + n² − (1/3)n operations. However, for solving r linear systems with the same coefficient matrix, the LU decomposition is computed only once, so the computational cost is only (1/3)n³ + rn² − (1/3)n operations. The values s1 and s2 are the numbers of linear systems that each scheme solves per iteration with coefficient matrix F′(x^(k)) or another matrix, respectively.

As the results in Table 1 show, among the methods belonging to family (3), the most efficient is G4_1; hence, the numerical experiments for the family are carried out with this method. In addition, method S4 requires more functional evaluations than the other ones, as it computes two Jacobian matrices in each iteration.
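The cost model above is easy to encode. The following sketch computes CE = p^(1/(d+op)) using the LU cost formula from the text; the particular evaluation and operation counts in ce_example are assumed for illustration and are not the exact entries of Table 1.

```python
def lu_cost(n, r=1):
    """Products/quotients for solving r linear systems sharing one LU
    factorization: n^3/3 + r*n^2 - n/3, as counted in the text."""
    return n**3 / 3 + r * n**2 - n / 3

def comp_efficiency(p, d, op):
    """Computational efficiency index CE = p^(1/(d + op)) from [7]."""
    return p ** (1.0 / (d + op))

def ce_example(n, p=4):
    # Hypothetical fourth-order scheme evaluating F twice, F' once,
    # one divided difference, and solving 3 systems with matrix F'(x).
    d = 2 * n + n**2 + n * (n - 1) / 2   # functional evaluations
    op = lu_cost(n, r=3)                 # operational cost
    return comp_efficiency(p, d, op)

for n in (5, 10, 50):
    print(n, ce_example(n))
```

As expected from Figure 1, the index decreases toward 1 as n grows, since the O(n³) factorization cost dominates the exponent.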
The results of Table 1 are illustrated in Figure 1, where the value of log_4(CE) for the four methods is represented as the size of the system (n) varies. As we can see, for small values of n, the indices of G4_1 and S4 show a similar performance; meanwhile, when the value of n increases, the computational efficiency index of every method decreases, but the index of method G4_1 remains greater than the rest.

Figure 1. Values of log_4(CE) from Table 1 for methods G4_1, G4_2, G4_3, and S4 and different sizes of the system.

We have solved system (20) using methods S4 and G4_1 for nx = 20. For the numerical experiments, we use Matlab R2017b with variable precision arithmetic of 1000 digits of mantissa. The results of applying the methods to the nonlinear system are collected in Table 2, varying the values of nt and T_max. For every run, the iterative procedure stops when ‖F(x^(k+1))‖ < 10^(−6) or when the number of iterations reaches 50. The value of iter represents the mean number of iterations needed when all the columns have been calculated, and the terms a(b) represent the value a · 10^b. Moreover, the elapsed time in seconds to obtain the solution of the problem after 10 consecutive executions is shown.

The results in Table 2 show the good performance of method G4_1 for solving Fisher's problem. Method G4_1 needs only two iterations to calculate a solution of the system, its mean number of iterations always being lower than that of method S4. For a fixed value of T_max, when nt increases, so does the elapsed time, but the approximation to the solution is better, since ‖F(x^(k+1))‖ is smaller. In addition, the elapsed time is lower for method G4_1, so it reaches the solution with more computational efficiency and arithmetical precision than the other scheme.

Figure 2. Computational efficiency indices of Table 3 for methods NLM8, SLB8, and GH9 and different sizes of the system.
The results obtained from applying the methods to the nonlinear systems are collected in Tables 4–6. The stopping criterion is now a difference between two consecutive iterates lower than 10^(−200), or ‖F(x^(k+1))‖ < 10^(−200), with a maximum of 50 iterations. The results have been calculated using Matlab R2017b with variable precision arithmetic of 2000 digits of mantissa; in this way, the numerical noise is kept far enough away not to affect the final results. The approximated computational order of convergence (ACOC) [15] is a computational approximation of the theoretical order of each method, defined as

ACOC ≈ ln(‖x^(k+1) − x^(k)‖/‖x^(k) − x^(k−1)‖) / ln(‖x^(k) − x^(k−1)‖/‖x^(k−1) − x^(k−2)‖).

For every nonlinear system, the highest value of the ACOC corresponds to the GH family, as expected. In general, our proposed scheme converges in fewer iterations than the other tested methods, with very competitive error estimates.
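The ACOC can be computed directly from the stored iterates of a run. The sketch below implements the standard estimator of [15] and checks it on scalar Newton iteration for f(x) = x² − 2, whose theoretical order is 2 (the example function and starting point are our own illustrative choices).

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence (ACOC):
    rho_k = ln(||x_{k+1}-x_k|| / ||x_k-x_{k-1}||)
          / ln(||x_k-x_{k-1}|| / ||x_{k-1}-x_{k-2}||)."""
    d = [np.linalg.norm(np.atleast_1d(np.subtract(b, a)))
         for a, b in zip(iterates, iterates[1:])]
    return [np.log(d[k + 1] / d[k]) / np.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Newton iteration for f(x) = x^2 - 2 starting at x0 = 1.5;
# a few steps suffice before double precision is exhausted.
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

estimates = acoc(xs)   # the last estimate is close to 2
```

In the paper's multiprecision setting (2000-digit arithmetic), more terms of the sequence remain usable, so the estimate stabilizes near the theoretical order.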

Conclusions
Two families of iterative methods for solving nonlinear systems of equations have been introduced, the first one being a multidimensional extension of a previous scalar class of iterative methods. Both schemes are designed via the matrix weight function procedure, with only one evaluation of the Jacobian and its estimation by divided differences for systems. Two more steps are added to the fourth-order class, keeping the Jacobian matrix evaluated at the same point, which allows the GH family to reach order of convergence 9. The numerical tests confirm the quality of the iterative schemes for solving systems of equations of moderate and large size, improving the results of several methods in the literature.