Constructing Frozen Jacobian Iterative Methods for Solving Systems of Nonlinear Equations, Associated with ODEs and PDEs, Using the Homotopy Method

A homotopy method is presented for the construction of frozen Jacobian iterative methods. Frozen Jacobian iterative methods are attractive because the inversion of the Jacobian, in the form of an LU factorization, is performed only once per cycle of the iterative method. We embed parameters in the iterative methods with the help of the homotopy method: the values of these parameters are chosen so that a better convergence rate is achieved. The proposed homotopy technique is general and can construct different families of iterative methods for solving weakly nonlinear systems of equations. Further iterative methods are also proposed for solving general systems of nonlinear equations.


Introduction
Iterative methods for solving nonlinear equations and systems of nonlinear equations are a perennially attractive research topic. Newton's method [1][2][3] is a classical iterative method for solving systems of nonlinear equations and offers quadratic convergence. Many authors have developed further iterative methods for solving either nonlinear equations or systems of nonlinear equations [4][5][6][7]. Here, we focus on the idea of the frozen Jacobian: this idea is computationally attractive for designing iterative methods because the inversion of the Jacobian (via LU factorization) is performed a single time per cycle of the iterative method. Many researchers have proposed frozen Jacobian iterations (see [8][9][10][11][12] and references therein). In our view, the use of perturbation techniques in connection with the homotopy method can be very effective for developing new frozen Jacobian iterative methods for solving systems of nonlinear equations. The homotopy method can be traced back to the work of Poincaré [13]. A careful review of the relevant literature shows that much work has been done on designing iterative methods for solving nonlinear scalar equations using the homotopy perturbation method, but not for systems of nonlinear equations. The reason is that the iterative methods constructed via homotopy techniques usually contain higher order derivatives. From a computational point of view, the evaluation of higher order Fréchet derivatives is expensive for a general system of nonlinear equations. However, for systems of weakly nonlinear equations, associated with wide classes of ordinary and partial differential equations, all higher order Fréchet derivatives possess a diagonal structure; as a consequence, the resulting computational cost is mild.
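As a baseline for the methods discussed below, the classical Newton iteration for a system can be sketched as follows (a minimal NumPy sketch; the function names, the test system and the starting point are illustrative, not taken from the paper):

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=50):
    """Classical Newton iteration: x_{k+1} = x_k - J(x_k)^{-1} F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))  # one linear solve per iteration
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Example: F(x, y) = (x^2 + y^2 - 1, x - y) has a root at (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2*v[0], 2*v[1]], [1.0, -1.0]])
root = newton_system(F, J, [1.0, 0.5])
```

Note that each iteration requires a fresh Jacobian evaluation and a fresh linear solve; the frozen Jacobian methods below are designed precisely to avoid this repeated cost.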
The core idea of homotopy (or homotopy continuation) was developed for solving hard problems numerically. The nonlinear problem is approximated around some initial guess to construct a linear model, or a model that can be solved easily. Once we have a linear model, we construct a homotopy function that maps the linear model to the nonlinear model with the help of auxiliary parameters. The embedding of auxiliary parameters in the homotopy function lets us solve stiff problems iteratively, using the solution of the easier problem as an initial guess for the next, harder problem. The gradual change in the parameters allows a hard problem to be solved in small, manageable steps. The scenario described above is one face of homotopy methods or, more precisely, homotopy continuation methods. The other is to use the homotopy method to approximate the solution of nonlinear problems analytically [14][15][16][17][18][19][20]. Many authors have used homotopy methods for constructing iterative methods for solving nonlinear equations [7,21]. For the construction of iterative methods, they build a homotopy between a linear model of the nonlinear equation and the nonlinear equation itself, by expanding it around some initial guess. The constructed homotopy contains auxiliary parameters, which can be used for different purposes. Some authors use the homotopy parameters to enhance the convergence by changing the dynamics of the iterative method, and give estimates for the optimal parameters [22]. We now present our proposals in detail. Noor [21] proposed the following homotopy iterative method for solving nonlinear equations. Let f(x) = 0 be a scalar nonlinear equation, and let x_0 be an initial guess. We perform a linearization of f(x) around a point γ: f(x) = f(γ) + f′(γ)(x − γ) + g(x), where f(γ) + f′(γ)(x − γ) is the linear approximation of f(x) around γ and g(x) is the nonlinear part of f(x). Equation (1) can then be written as L(x) + N(x) = c, where L and N denote the linear and nonlinear parts, respectively, and c collects the constant terms.
When assuming that L is a linear function, according to [21], a homotopy can be defined between L(v) − L(x_0) and L(v) + N(v) − c: where H(v, q, T) : R × [0, 1] → R, q ∈ [0, 1] is an embedding parameter, α is a real free parameter and T is a suitable operator. In Equation (3), L(v) − L(x_0) and L(v) + N(v) − c are called homotopic, i.e., we have established a homotopy between the linear model and the nonlinear model. When q = 0, we have to solve the linear model H(v, 0, T) = L(v) − L(x_0), and for q = 1, the target is the nonlinear system in Equation (2), that is, H(v, 1, T) = α(N(v) − c). As q changes from zero to one, the linear problem continuously deforms into the original nonlinear problem. Further, we assume that the solution of Equation (3) can be expressed as a power series in q: v = v_0 + q v_1 + q² v_2 + ⋯. Notice that the solution of the target nonlinear problem can then be approximated by evaluating this series at q = 1. In 2010, Yongyan et al. [22] proposed a two-parameter homotopy method for the construction of iterative methods for solving scalar nonlinear equations. The authors in [22] use the auxiliary homotopy parameters to change the dynamics of the proposed iterative methods and provide optimal values of the auxiliary parameters for enhancing the convergence speed, without altering the convergence order of the iterative method. It is worth noting that they used the auxiliary parameters to change the path of the converging sequences of successive approximations, in order to obtain fast convergence. Inspired by their mathematical model for homotopy, we introduce a generalization of the two-parameter homotopy for solving a system of weakly nonlinear equations. However, our primary motivation is to use the auxiliary parameters to increase the convergence order of the iterative methods. Secondly, our proposal addresses the construction of iterative methods for solving a system of nonlinear equations, instead of a single nonlinear equation. In summary, we use homotopy techniques to develop mathematical models that we then use for designing higher order iterative methods with the help of new parameters.
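The power-series step can be made concrete on a scalar toy problem. Here we take f(x) = x² − 2 with the simple homotopy H(v, q) = v² − x_0² + q(x_0² − 2) (an illustrative choice, not the paper's Equation (3)) and equate the coefficients of powers of q to zero, exactly as done later in the construction:

```python
# Homotopy H(v, q) = v^2 - x0^2 + q (x0^2 - 2) for f(x) = x^2 - 2,
# with the series ansatz v(q) = x0 + q v1 + q^2 v2 + q^3 v3 + ...
# Substituting and equating each power of q to zero gives, in turn:
x0 = 1.5
v1 = (2 - x0**2) / (2 * x0)     # q^1:  2 x0 v1 + x0^2 - 2 = 0
v2 = -v1**2 / (2 * x0)          # q^2:  v1^2 + 2 x0 v2 = 0
v3 = -v1 * v2 / x0              # q^3:  2 v1 v2 + 2 x0 v3 = 0
approx = x0 + v1 + v2 + v3      # evaluate the truncated series at q = 1
```

Each coefficient equation is linear in the newest unknown, which is what makes the construction tractable; three terms already approximate √2 to about five digits from x_0 = 1.5.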

Construction of Iterative Methods
We start from the case where m = 3, when Equation (8) can be written as: The Taylor series expansion of F(·) around x_0 gives: where z = y − x_0 and C_j = F^(j)(x_0), i.e., C_j is the j-th order Fréchet derivative. Using y(q) = x_0 + ∑_{i=1}^∞ q^i x_i in Equation (10), we get: The linear approximation L(y) can be written as:

L(y) = F′(y_θ) x_0 − F′(y_θ) y_θ + F(y_θ) + F′(y_θ) x_1 q + F′(y_θ) x_2 q² + F′(y_θ) x_3 q³ + F′(y_θ) x_4 q⁴ + F′(y_θ) x_5 q⁵ + O(q⁶). (12)

Simplification of Equation (9) gives: By equating to zero the coefficients of the powers of q, we find five equations. By solving Equations (14)-(18), we obtain:

Iterative Method I
By adding x_0 and x_1, we get the following iterative method: If α_0 = 1, then the order of convergence of Equation (19) is three.

Iterative Method II
Summing x_0, x_1 and x_2, we deduce the following iterative method: The convergence order of Equation (20) is four. Notice that we already have the information F′(u_1), and we want to use this information in a computationally efficient way to enhance the convergence order. This means we can add more steps of the form F′(u_0) φ_3 = F′(u_1) φ_2 to the structure of the iterative scheme. Thus, we can enhance the convergence order of the iterative method in Equation (20) by including more parameters, owing to the inclusion of additional steps. We construct a model of the iterative method Equation (20) as: The order of convergence of the iterative method Equation (21) is five.
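The computational pattern behind all of these methods — factor the Jacobian once, then reuse the LU factors across several correction steps — can be sketched as follows. This is a generic frozen-Jacobian (chord-type) multi-step cycle, not the paper's exact scheme; the test system and all names are illustrative:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def frozen_jacobian_cycle(F, J, x0, m=3):
    """One cycle of a frozen-Jacobian multi-step iteration: factor J(x0)
    once with LU, then reuse the factors for m correction steps, each of
    which costs only two triangular solves."""
    lu_piv = lu_factor(J(x0))          # single LU factorization per cycle
    x = np.asarray(x0, dtype=float)
    for _ in range(m):
        x = x - lu_solve(lu_piv, F(x))  # no new Jacobian, no new factorization
    return x

# Example system: x^2 + y^2 = 4, x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
x = np.array([2.0, 0.6])
for _ in range(5):                     # a few outer cycles, refreshing J each time
    x = frozen_jacobian_cycle(F, J, x, m=3)
```

For an n × n dense system, the factorization costs O(n³) while each extra frozen step costs only O(n²), which is why adding multi-steps is cheap relative to a full Newton iteration.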

Iterative Method III
The summation of x_0, x_1, x_2 and x_3 induces the following iterative method: The order of convergence of the iterative method Equation (22) is five. We extend the model of the iterative method Equation (22) by including more free parameters, so that we can enhance the convergence order. If the free parameter takes the value 7/2, then the order of convergence of the iterative method Equation (23) is six. Further iterative methods can be constructed by adding more x_i terms and following essentially the same basic ideas.

Application to Systems of Nonlinear Equations Associated with Ordinary and Partial Differential Equations
According to our analysis, the parametric homotopy method has the potential to provide an effective model for the construction of higher order iterative methods for solving systems of nonlinear equations. The unknown parameters in the homotopy method and in the extended models (based on the models provided by the homotopy) can be used to enhance the convergence order of the iterative method. The iterative method in Equation (21) can be employed effectively for solving a general system of nonlinear equations. On the contrary, the iterative method in Equation (23) is only efficient for solving weakly nonlinear systems associated with nonlinear ordinary differential equations (ODEs) and partial differential equations (PDEs). In fact, many ODEs and PDEs can be written as: where L(·) is a linear differential operator, f(·) is a nonlinear function, y is a dependent variable and x is an independent variable. Let A be the discrete approximation of the linear differential operator L(·). The nonlinear problem Equation (24) can then be written in the form of a system of nonlinear equations: It should be noticed that there are discretizations of Equation (24) that do not correspond to Equation (25). Here, we explicitly assume that the numerical methods used for approximating Equation (24) lead to a representation as in Equation (25). The higher order Fréchet derivatives of Equation (25) can be computed as: where diag(·) denotes the diagonal matrix that keeps the input vector as its main diagonal. The iterative method Equation (23) for weakly nonlinear systems can be written as: (27) where ⊙ denotes element-wise multiplication between two vectors.
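The diagonal structure of the higher order Fréchet derivatives can be illustrated for a weakly nonlinear system of the assumed form F(x) = A x + f(x) − b, with f applied component-wise (a minimal sketch; the matrix A, the nonlinearity and all names are illustrative):

```python
import numpy as np

# Weakly nonlinear system F(x) = A x + f(x) - b with a component-wise f.
# The first Fréchet derivative is A + diag(f'(x)); every higher derivative
# is purely diagonal, so it can be stored as a vector and applied
# element-wise instead of as a dense multilinear form.
n = 5
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))      # discrete 1D Laplacian stand-in
b = np.ones(n)
f   = lambda x: x**2                     # component-wise nonlinearity
fp  = lambda x: 2.0 * x                  # f'(x): diagonal of F'(x) - A
fpp = lambda x: 2.0 * np.ones_like(x)    # f''(x): second derivative, diagonal

x = np.zeros(n)
Fx = A @ x + f(x) - b                    # residual
Jx = A + np.diag(fp(x))                  # first Fréchet derivative (full matrix)

# Applying F''(x) to a vector w costs only one element-wise product:
w = np.ones(n)
Fpp_w = fpp(x) * w
```

This is why, for this class of problems, evaluating a higher order Fréchet derivative costs no more scalar function evaluations than evaluating F itself.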

Computational Cost of Equation (27)
The iterative method in Equation (27) uses five function evaluations, one LU factorization, the solution of six lower and upper triangular systems, four matrix-vector multiplications, six vector-vector multiplications and three scalar-vector multiplications. For weakly nonlinear systems Equation (25), the Fréchet derivatives are diagonal matrices, but they can be stored as vectors in order to perform element-wise vector-vector multiplications. In conclusion, the number of scalar function evaluations for the nonlinear function in Equation (25) equals that for each of its Fréchet derivatives.

Convergence Analysis
We will only provide the proof of convergence of the iterative method in Equation (23); the convergence proofs of the other iterative methods are similar. To perform the convergence analysis, we use the symbolic algebraic tools of Maple 2015.
Theorem 1. Let F : Γ ⊆ R^n → R^n be a sufficiently Fréchet differentiable function on an open convex neighborhood Γ of u^* ∈ R^n with F(u^*) = 0 and det(F′(u^*)) ≠ 0, where F′(u) denotes the Fréchet derivative of F(u). Let D_1 = F′(u^*) and D_s = (1/s!) F′(u^*)^{-1} F^(s)(u^*) for s ≥ 2, where F^(s)(u) denotes the s-th order Fréchet derivative of F(u). Then, the iterative method in Equation (23), with an initial guess in the neighborhood of u^*, converges to u^* with local order of convergence at least six and error e_1 = L e_0^6 + O(e_0^7), where e_0 = u_0 − u^*, e_0^p = (e_0, e_0, …, e_0) (p times) and L is a six-linear function, i.e., L ∈ L(R^n × ⋯ × R^n, R^n).

Proof. The Taylor series expansion of F(u_0) around u^* is:

F(u_0) = F′(u^*) [ e_0 + D_2 e_0^2 + D_3 e_0^3 + D_4 e_0^4 + O(e_0^5) ],

where e_k = u_k − u^*. Computing the Fréchet derivative of F with respect to e_0, we deduce:

F′(u_0) = F′(u^*) [ I + 2 D_2 e_0 + 3 D_3 e_0^2 + 4 D_4 e_0^3 + O(e_0^4) ],

where I is the identity matrix. Computing its inverse using the symbolic mathematical package Maple, we obtain:

F′(u_0)^{-1} = [ I − 2 D_2 e_0 + (4 D_2^2 − 3 D_3) e_0^2 + O(e_0^3) ] F′(u^*)^{-1}.

By using all of the values of the φ's, we get the following error equation.
The error equation, Equation (30), directly implies that the local order of convergence of the iterative method in Equation (23) is at least six.

Numerical Testing
To show the effectiveness of our proposed multi-step iterative method, we solve a carefully chosen set of ODEs and PDEs. It is also important to verify the claimed order of convergence numerically, so we adopt the following definition to compute the computational order of convergence (COC): Consider a system of four nonlinear equations: Let w = [w_1, w_2, w_3, w_4]^T be a constant vector; then F′(x) and the action F″(x)w can be written as: In Table 1, we report the COCs of different iterative methods, and the numerical results confirm the theoretically claimed convergence orders. The last two columns of Table 1 are similar because the structures of the iterative methods in Equations (21) and (22) are similar, and the iterative method in Equation (22) has an extra parameter that does not enhance the convergence order. Table 2 shows the COC for different values of m for the iterative method in Equation (31), and the COC satisfies the claimed formula 2(m + 1). The numerical computations in Tables 1 and 2 are performed in Mathematica 10 by setting $MinPrecision = 82,000 and $MinPrecision = 7000, respectively. We next consider the three-dimensional nonlinear problem Equation (35), where f(·) is a nonlinear function of u(x, y, z) and p(x, y, z) is the source term. The discretization of Equation (35) is performed by using the Chebyshev pseudo-spectral collocation method [23][24][25][26][27].
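The COC computation used in the tables above follows the standard three-term definition, COC ≈ ln(e_{k+1}/e_k) / ln(e_k/e_{k-1}); a minimal sketch (the artificial error sequence is only a sanity check, not data from the paper):

```python
import numpy as np

def coc(errors):
    """Computational order of convergence from three successive error norms:
    COC ~= ln(e_{k+1}/e_k) / ln(e_k/e_{k-1})."""
    e0, e1, e2 = errors[-3:]
    return np.log(e2 / e1) / np.log(e1 / e0)

# Sanity check with an artificial cubically convergent sequence e_{k+1} = e_k^3:
errs = [1e-2]
for _ in range(2):
    errs.append(errs[-1] ** 3)
order = coc(errs)   # ~= 3 for this sequence
```

In practice the error norms shrink extremely fast for high-order methods, which is why the tables are computed with very high working precision ($MinPrecision) in Mathematica.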
The discretized form of Equation (35) is: where T_•• is the Chebyshev pseudo-spectral collocation discretization of the second order derivative, I is the identity matrix and ⊗ is the Kronecker product. We assume the solution of Equation (35) is u = sin(x + y + z) and take f(u) = u². In Table 3, we show the absolute error in the solution of Equation (36). In all of the cases, we perform a single iteration and obtain the maximum reduction in the absolute error when m = 5 and the grid size is 12 × 12 × 12. In one iteration, we perform the LU factorization only once and then solve lower and upper triangular systems for the linear subproblems, which makes the procedure computationally inexpensive.
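The Kronecker-product assembly of the 3D operator can be sketched as follows. For brevity we use a finite-difference second-derivative matrix in place of the Chebyshev pseudo-spectral one (the Kronecker structure is identical); the grid size and all names are illustrative:

```python
import numpy as np

# Second-derivative matrix T on an interior grid (finite-difference stand-in
# for the Chebyshev pseudo-spectral differentiation matrix).
n = 6
h = 1.0 / (n + 1)
T = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2
I = np.eye(n)

# 3D operator on the tensor-product grid: T(x)I(x)I + I(x)T(x)I + I(x)I(x)T,
# where (x) denotes the Kronecker product.
L3 = (np.kron(np.kron(T, I), I)
      + np.kron(np.kron(I, T), I)
      + np.kron(np.kron(I, I), T))

# The discrete weakly nonlinear system then reads L3 @ U + f(U) = P, with f
# applied component-wise, matching the diagonal-derivative structure above.
```

Note that the assembled matrix is n³ × n³, so a single LU factorization is already expensive; this is precisely why reusing its factors across the multi-steps pays off.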

2D Nonlinear Wave Equation
The following 2D nonlinear wave equation is studied:
where f(u) = u^s is the nonlinear function and c, s are constants. The source term p(x, y) can be computed by assuming the solution u = exp(−t) sin(x + y). Dirichlet boundary conditions are imposed. We use the Chebyshev pseudo-spectral collocation method to discretize Equation (37), obtaining the following system of nonlinear equations.
Table 4 shows that the number of steps needed to obtain a reduction in the absolute error in the solution of Equation (38) depends on the step size of the spectral approximation.

Verification of the Mesh-Independence Principle
In [28], the authors proved the mesh-independence principle for the classical Newton method, when solving systems of nonlinear equations associated with a general class of nonlinear boundary value problems, and stated conditions on the discretization of the nonlinear boundary value problems. In simple words, suppose we have two discretizations (one finer than the other) of a nonlinear boundary value problem, and suppose that we use the Newton method to solve the two associated systems of nonlinear equations, one for each discretization. Then the magnitudes of the absolute errors for both discretizations, plotted against the number of Newton iterations, will be almost equal. Tables 4 and 5 show that the multi-step iterative methods do not obey the mesh-independence principle when we perform a single evaluation of the Jacobian and a single inversion. Table 6 shows that the Newton method and the iterative method in Equation (27) almost verify the mesh-independence principle. Notice that the iterative method in Equation (31) is the multi-step version of the iterative method in Equation (27). The multi-step iterative methods do not satisfy the mesh-independence property for the problem Equation (35), while Newton's method and the iterative method in Equation (27) have this very nice feature. Remark. In a single iteration of the multi-step iterative methods, we use the frozen Jacobian at the initial guess, and we employ its LU factors repeatedly in the multi-step part to solve the related lower and upper triangular systems. The idea of the frozen Jacobian is fruitful because the computation of a new Jacobian and its LU factorization are computationally expensive operations. One may now ask whether the applicability of such methods is limited. What we have observed in numerical simulations is the following: when one has to solve a stiff initial or boundary value problem, it is recommended, at the start of the simulations, to perform more than one iteration with a limited number of multi-steps. In fact, in the case of stiff problems, recomputing the Jacobian repeatedly is crucial for the convergence of the whole iterative process.

Conclusions
Homotopy methods provide a way to construct models for the development of higher order iterative methods for solving systems of nonlinear equations. We have shown that the inclusion of parameters in the homotopy can help to enhance the convergence order of the proposed techniques. In general, the constructed methods contain higher order Fréchet derivatives that are computationally unsuitable for a general system of nonlinear equations. However, for particular classes of ODEs and PDEs, we have shown that the computational cost of the higher order Fréchet derivatives is not high. The numerical results show the validity and effectiveness of the proposed multi-step iterative methods.

Table 1 .
Verification of the convergence order of different iterative methods for the problem Equation (33); the initial guess is x_k = 15/10. COC, computational order of convergence.

Table 3 .
Multi-step iterative method Equation (31): absolute error in the solution of Equation (36) versus different grid sizes. Number of iterations = 1; initial guess U = 0.

Table 6 .
Newton method and iterative method Equation (27): absolute error in the solution of Equation (38) versus different grid sizes. Number of iterations = 4; number of steps = 0; s = 3, c = 1; initial guess U