Memory in a New Variant of King’s Family for Solving Nonlinear Systems

In the recent literature, very few high-order Jacobian-free methods with memory for solving nonlinear systems appear. In this paper, we introduce a new variant of King's family with order four to solve nonlinear systems, along with its convergence analysis. The proposed family requires two divided difference operators and only one matrix inversion per iteration. Furthermore, we extend the proposed scheme to sixth order of convergence at the cost of two additional functional evaluations. In addition, these schemes are further extended to methods with memory. We illustrate their applicability by performing numerical experiments on a wide variety of practical problems, including large-sized ones. It is observed that these methods produce approximations of greater accuracy and are more efficient in practice, compared with the existing methods.


Introduction
Nonlinear systems of equations, Ψ(x) = 0, Ψ : D ⊆ R^n → R^n, appear very frequently in many areas of Engineering and Science, so finding their solutions is a very challenging task. Solving them by analytical means is very hard or rarely possible, and many authors have therefore estimated the solutions of nonlinear systems of equations using iterative techniques. One of the oldest and simplest iterative methods is Newton's method [1,2], defined as

x^(j+1) = x^(j) − [Ψ'(x^(j))]^(−1) Ψ(x^(j)), j = 0, 1, 2, . . . (1)
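As a point of reference, Newton's iteration (1) can be sketched as follows; the test system and starting point used below are illustrative choices of ours, not examples from the paper:

```python
import numpy as np

def newton(F, Jac, x0, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0:
    x^(j+1) = x^(j) - [F'(x^(j))]^(-1) F(x^(j)).
    A linear system is solved instead of forming the inverse explicitly."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        x = x - np.linalg.solve(Jac(x), Fx)
    return x
```

Note that solving the linear system with `np.linalg.solve` mirrors standard practice: the inverse Jacobian in (1) is never formed explicitly.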
where Ψ'(x^(j)) denotes the Jacobian matrix of Ψ evaluated at x^(j). This method has quadratic convergence, provided the initial guess is chosen close to the solution. There are many higher-order techniques available in the literature [3-5] that have Newton's method as a predictor step. In many realistic situations, the first-order Fréchet derivative Ψ'(x) fails to exist or is time-consuming to compute. For such situations, Traub [2] introduced a Jacobian-free method, with order two, defined by

x^(j+1) = x^(j) − [w^(j), x^(j); Ψ]^(−1) Ψ(x^(j)), j = 0, 1, 2, . . . (2)

where [w^(j), x^(j); Ψ] is the first-order divided difference of Ψ, w^(j) = x^(j) + βΨ(x^(j)), and β ≠ 0 is an arbitrary constant. For β = 1, method (2) reduces to the multidimensional extension of Steffensen's method, presented by Samanskii in [6]. After that, many researchers developed higher-order Jacobian-free methods [7-10]. On the other hand, accelerating the order of convergence of iterative schemes while keeping the same number of computations has become a popular line of research. The resulting schemes are known as methods "with memory", i.e., iterative methods that use data from more than one previous iteration. However, there is very little literature [11-14] on methods with memory for solving nonlinear systems. With this motivation, we develop new iterative schemes that attain an order of convergence as high as possible while keeping the number of functional evaluations per iteration as low as possible.
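The Jacobian-free character of scheme (2) can be seen in a minimal sketch: the divided difference below is the standard componentwise first-order operator, and the test system is an illustrative choice of ours:

```python
import numpy as np

def divided_difference(F, y, x):
    """First-order divided difference operator [y, x; F] as an n x n matrix.
    Column j uses the first j+1 components of y and the remaining ones of x,
    so that the secant identity [y, x; F](y - x) = F(y) - F(x) holds."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        z_hi = np.concatenate([y[:j + 1], x[j + 1:]])
        z_lo = np.concatenate([y[:j], x[j:]])
        M[:, j] = (F(z_hi) - F(z_lo)) / (y[j] - x[j])
    return M

def traub_steffensen(F, x0, beta=1.0, tol=1e-12, max_iter=50):
    """Jacobian-free Traub-Steffensen iteration, scheme (2):
    x^(j+1) = x^(j) - [w^(j), x^(j); F]^(-1) F(x^(j)),
    with w^(j) = x^(j) + beta * F(x^(j))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        w = x + beta * Fx
        M = divided_difference(F, w, x)
        x = x - np.linalg.solve(M, Fx)
    return x
```

With beta = 1 this is the multidimensional Steffensen method of Samanskii; no derivative of F is ever evaluated.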
This manuscript is organized as follows. In Section 2, we construct new methods of fourth and sixth order and carry out their convergence analysis. The construction and convergence study of their corresponding iterative families with memory is presented in Section 3. In Section 4, various numerical tests have been performed to check the theoretical results and to compare the properties of the presented algorithms with some similar existing methods. Some conclusions close the paper.

Design of the New Class
For constructing the new methods, we consider the known King's family [15] of iterative schemes to solve ψ(x) = 0, given by

y^(j) = x^(j) − ψ(x^(j))/ψ'(x^(j)),
x^(j+1) = y^(j) − [ψ(x^(j)) + αψ(y^(j))] / [ψ(x^(j)) + (α − 2)ψ(y^(j))] · ψ(y^(j))/ψ'(x^(j)), (3)

where α ∈ R. If α = 0, the well-known Ostrowski's method is obtained [2]. By dividing numerator and denominator of the second step of (3) by ψ(x^(j)), the scheme can be rewritten in terms of the quotient τ^(j) = ψ(y^(j))/ψ(x^(j)). By using the first step of Equation (3) and the divided difference operator, the derivative can be replaced, giving (4). This idea allows the generalization of the family to the multidimensional case, and it was first used in [16]. Now, using the binomial expansion up to two terms of τ(x^(j), z^(j)) in (4), it can be expressed as in (5)-(7). Finally, using Equations (5)-(7) in (3), we get (8), and considering two more steps with two additional functional evaluations of ψ, we propose the following new variant of King's family in the multidimensional case, scheme (9), involving the divided difference operator [·, ·; Ψ]^(−1) and parameters γ, δ, α ∈ R. Let us observe that all the linear systems solved per iteration share the same coefficient matrix, which yields better efficiency.

Analysis of the Convergence
To explore the convergence of (9), let us recall the results appearing in [1] about Taylor series expansions of vector functions. Let Ψ : D ⊆ R^n → R^n be d-times Fréchet differentiable in a convex set D ⊆ R^n. Then, for any x, h ∈ R^n, the following expression holds:

Ψ(x + h) = Ψ(x) + Ψ'(x)h + (1/2!)Ψ''(x)h^2 + · · · + (1/(d − 1)!)Ψ^(d−1)(x)h^(d−1) + R_d, (10)

where ‖R_d‖ ≤ (1/d!) sup_{0<t<1} ‖Ψ^(d)(x + th)‖ ‖h‖^d. Consider the mapping [·, ·; Ψ] : D × D ⊆ R^n × R^n → L(R^n), i.e., the first-order divided difference operator of Ψ on R^n, which can be expressed by the Genocchi-Hermite formula [17],

[x + h, x; Ψ] = ∫_0^1 Ψ'(x + uh) du. (11)

By developing Ψ'(x^(j) + uh) in Taylor series at x^(j) and integrating, we get

[x^(j) + h, x^(j); Ψ] = Ψ'(x^(j)) + (1/2)Ψ''(x^(j))h + (1/6)Ψ'''(x^(j))h^2 + O(h^3). (12)

If we define e^(j) = x^(j) − x*, we can develop Ψ(x^(j)) and its derivatives in a neighbourhood of x*, where x* ∈ R^n satisfies Ψ(x*) = 0. Assuming that [Ψ'(x*)]^(−1) exists, we get

Ψ(x^(j)) = Ψ'(x*)[e^(j) + C_2 (e^(j))^2 + C_3 (e^(j))^3 + · · ·], (13)

where C_k = (1/k!)[Ψ'(x*)]^(−1)Ψ^(k)(x*), k ≥ 2. From Equation (13), the derivatives of Ψ(x^(j)) can be written as

Ψ'(x^(j)) = Ψ'(x*)[I + 2C_2 e^(j) + 3C_3 (e^(j))^2 + · · ·], (14)
Ψ''(x^(j)) = Ψ'(x*)[2C_2 + 6C_3 e^(j) + · · ·], (15)
and
Ψ'''(x^(j)) = Ψ'(x*)[6C_3 + 24C_4 e^(j) + · · ·], (16)

being I the identity matrix of order n. Now, the order of convergence of (9) can be demonstrated through the following result. Theorem 1. Let x* ∈ R^n be a solution of the system Ψ(x) = 0 and Ψ : D ⊆ R^n → R^n, being Ψ differentiable enough in an open neighborhood D of x* at which Ψ'(x*) is non-singular. Then, for an initial guess x^(0) sufficiently close to x* and α ∈ R, iterative scheme (9) has, at least, fourth order of convergence, provided γ = δ.
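The Genocchi-Hermite representation can be checked numerically: approximating the integral by a Gauss-Legendre rule and testing the secant identity [y, x; Ψ](y − x) = Ψ(y) − Ψ(x), which follows from the fundamental theorem of calculus. The quadratic map used in the usage below is our own illustration (chosen so the quadrature is exact):

```python
import numpy as np

def genocchi_hermite_dd(Jac, y, x, nodes=5):
    """Approximate the divided difference operator via the Genocchi-Hermite
    formula [y, x; F] = integral_0^1 F'(x + u(y - x)) du, using a
    Gauss-Legendre rule mapped from [-1, 1] to [0, 1]."""
    t, w = np.polynomial.legendre.leggauss(nodes)
    u = 0.5 * (t + 1.0)   # nodes shifted to [0, 1]
    w = 0.5 * w           # weights scaled accordingly
    return sum(wi * Jac(x + ui * (y - x)) for wi, ui in zip(w, u))
```

For polynomial Ψ of low degree the quadrature is exact, so the secant identity holds to machine precision.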
Proof. Let e^(j) = x^(j) − x* be the error in the approximation x^(j), and let us denote analogously the errors of the intermediate steps, e^(j)_y and e^(j)_z. In view of Equations (11), (12) and (14)-(16), we set the expansion (20). Employing Equations (13) and (20), one gets the error equation (21) of the first step of scheme (9). Using (20) and (22), we obtain (23), and from Equation (23), we get (24). Again, using a Taylor series around x*, we obtain (25). Substituting Equations (22)-(25), the error equation up to the second step of technique (9) is (26). Again, developing e^(j)_z by Taylor series around x* and using Equation (23), the error equation of (9) is given by (27). Hence, (9) has order of convergence four, provided γ = δ.
Let us remark that the proposed method achieves fourth order of convergence. However, the error equation depends on the value of the two parameters γ and δ, which would allow increasing the order by taking appropriate values. Moreover, more steps with a similar shape can be added that increase the final order of convergence by several units, as can be seen in the following section.

Development and Convergence Analysis of Sixth-Order Scheme
Now, we propose the following new iterative procedure by introducing additional steps in the proposed technique (9) to achieve sixth-order convergence.
Next, we prove the convergence of scheme (28) in the following result.
Theorem 2. Let Ψ : D ⊆ R^n → R^n be differentiable enough in D, an open neighborhood of x*, which satisfies Ψ(x*) = 0. Consider that the initial guess x^(0) is sufficiently close to the required zero x* and that Ψ'(x*) is non-singular. Then, the local order of convergence of {x^(j)}, generated by (28), is six for all α ∈ R, if the real parameters γ and δ satisfy γ = δ.

Methods with Memory
As has been stated in (27), the convergence order of the scheme is four if (δ − γ)Ψ'(x*) ≠ −I. On the other hand, if we choose δ and γ such that δ − γ = −[Ψ'(x*)]^(−1), then the order of convergence must be at least five. However, further acceleration of the convergence is not possible in the absence of knowledge about [Ψ'(x*)]^(−1). Nevertheless, we can estimate [Ψ'(x*)]^(−1) using the already available data, which leads to an acceleration of the order of convergence. This idea was suggested by Traub in [2] and later used and extended by Petković et al. in [18], in that case for scalar equations. Motivated by this fact, we approximate [Ψ'(x*)]^(−1) by matrices B^(j) built, as in (31), from the current and previous available data. In this manner, we extend the Jacobian-free methods (9) and (28) to schemes with memory for solving nonlinear systems. Thus, we can define the new methods with memory as (32) and (33), respectively. Here, the divided differences involved are of the form [·, x^(j); Ψ], and γ_1 and δ_1 are arbitrary constants satisfying δ_1 − γ_1 = 1.

Convergence Analysis of Methods with Memory
Now, we state and prove the R-order of convergence of the schemes with memory (32) and (33). Theorem 3. Let Ψ : D ⊆ R^n → R^n be differentiable enough in an open convex neighborhood D of x*, solution of Ψ(x) = 0, let the matrix B^(j) be recursively calculated by the form given in (31), and let the initial guess x^(0) be close enough to the solution x*. Then, the R-orders of convergence of the iterative schemes (32) and (33) are 2 + √5 ≈ 4.236 and 3 + √10 ≈ 6.162, respectively, if δ_1 − γ_1 = 1.
Proof. Let {x^(j)} be the sequence of approximations generated by the iterative expression (32), converging to the solution x* of Ψ(x) = 0 with R-order at least r. Then,

‖e^(j+1)‖ ≈ D^(j,r) ‖e^(j)‖^r,

being {D^(j,r)} a sequence tending to the asymptotic error constant D_r when j → ∞. Let us also remark that the notation s ≈ t means that the magnitudes of s and t have the same order. Further on,

e^(j+1) ≈ D^(j,r) (D^(j−1,r) (e^(j−1))^r)^r = D^(j,r) (D^(j−1,r))^r (e^(j−1))^(r^2).
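The R-orders quoted in Theorem 3 are the positive roots of the indicial equation obtained by matching exponents in the relation above. Assuming the usual memory-type error recurrence e^(j+1) ∼ (e^(j))^b · e^(j−1), with b = 4 for (32) and b = 6 for (33) (an assumption on our part, consistent with the stated orders, since the detailed error equations are abbreviated here), r solves r^2 − br − 1 = 0:

```python
import math

def r_order(b):
    """Positive root of r**2 - b*r - 1 = 0, the indicial equation for an
    error recurrence of the form e_(j+1) ~ (e_j)**b * (e_(j-1))."""
    return (b + math.sqrt(b * b + 4.0)) / 2.0
```

Indeed, r_order(4) = 2 + √5 ≈ 4.236 and r_order(6) = 3 + √10 ≈ 6.162.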
Once this convergence order has been stated, the performance of these procedures must be checked on different kinds of problems. In the following section, several real-life problems (some of them big-sized) are solved by using these techniques.

Numerical Experiments
In this section, we consider several numerical problems to show the performance of the proposed methods. The new schemes (32), namely PM^1_4 and PM^2_4 for α = 1/2 and α = 1/4, respectively, are considered and compared with existing techniques with memory proposed by Sharma and Petković (SM_4, for c = −0.01) [12] and Narang et al. (MM^1_4) [14]. The proposed schemes (33) for α = 1/2 and α = 1/4 are denoted by PM^3_6 and PM^4_6, respectively, and compared with existing schemes with memory, namely SM^2_6 and MM_6, proposed by Sharma and Arora [13] and Narang et al. [14], respectively. For a fair comparison, the performance of the new methods as well as the existing ones is tested with the same initial estimation of the accelerating matrix, B^(0) = −0.001I.
The numerical results have been obtained with γ_1 = 1 and δ_1 = 2, satisfying δ_1 − γ_1 = 1. To numerically check the order of convergence proven theoretically, we display the iteration index j, the residual ‖Ψ(x^(j))‖, the distance between the two last iterations ‖x^(j+1) − x^(j)‖, and the approximated computational order of convergence (ACOC) defined in [19],

ρ ≈ ln(‖x^(j+1) − x^(j)‖ / ‖x^(j) − x^(j−1)‖) / ln(‖x^(j) − x^(j−1)‖ / ‖x^(j−1) − x^(j−2)‖),

where x^(j−2), x^(j−1), x^(j), and x^(j+1) are four consecutive approximations in the iterative process. All numerical computations were done in Mathematica 11 [20] with multi-precision arithmetic, using 2000 digits of mantissa, with the aim of minimizing round-off errors. In all tables, b(±c) denotes b × 10^(±c).
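The ACOC can be computed from any four consecutive iterates; a scalar sketch follows (the quadratically convergent test sequence in the usage is our own, generated by Newton's method on x^2 − 2 = 0):

```python
import math

def acoc(x):
    """Approximated computational order of convergence (ACOC) estimated
    from the last four iterates of the sequence x:
    rho = ln(d2/d1) / ln(d1/d0), with d_k the successive differences."""
    d2 = abs(x[-1] - x[-2])
    d1 = abs(x[-2] - x[-3])
    d0 = abs(x[-3] - x[-4])
    return math.log(d2 / d1) / math.log(d1 / d0)
```

For vector iterates, the absolute values would simply be replaced by norms of the differences.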

Example 1.
Let us firstly consider the problem of kinematic synthesis for steering, described in [21-24]. It is modeled as a nonlinear system whose coefficients ψ_i and φ_i, given in radians, can be seen in Table 1. The numerical results, obtained with the considered initial estimation x^(0), are displayed in Table 2. It can be observed in Table 2 that the error estimations of the proposed methods are better than those of the known methods from the first iteration. Moreover, the estimated order of convergence coincides with the theoretical one for all schemes.

Example 2.

The comparative study of the different methods for Example 2 is collected in Table 3. In this case, the best results have been obtained by schemes MM_4 and MM_6, although those obtained by our proposed methods are very similar.

Example 3.
Let us consider another nonlinear problem, the coupled Burgers equations [26], stated in (45), in the intervals x ∈ [0, 5] and 0 ≤ t ≤ 1/4. Again, by using finite differences, Equation (45) is reduced to a nonlinear system. Let w_{i,j} = u(x_i, t_j) and r_{i,j} = v(x_i, t_j) be, respectively, the estimated solutions at the nodes of the mesh, let the number of subintervals in x and t be denoted by M and N, respectively, and let h_1 and h_2 be the corresponding step sizes. Central differences are applied to the spatial and temporal derivatives. We consider M = 9 and N = 9, which yields a system of size 128, with the initial estimation pts = Range[−0.8, 0.8, 0.025] and x_0 = Drop[Drop[Join[pts, pts], 1], −1], evaluated in Mathematica. The matrix plots of the two divided differences used in the proposed iterative scheme are shown in Figures 1 and 2. In Figure 3, the approximation of the solution is represented. In Figure 4, the blue line shows the exact solution u(x, t) = e^(−t) sin x and the dotted red points represent the approximated solution.

Example 4.

Let us now consider a nonlinear integral equation for x ∈ C[0, 1], s, t ∈ [0, 1]. We use the Gauss-Legendre quadrature formula, ∫_0^1 f(t) dt ≈ Σ_{j=1}^{8} w_j f(t_j), to transform this integral equation into a finite-dimensional problem. The nodes t_j and weights w_j, j = 1, 2, . . . , 8, are determined by means of Legendre polynomials (see Table 4). Denoting the approximation of x(t_i) by x_i, i = 1, 2, . . . , 8, one gets the system of nonlinear equations F(x) = (f_1(x), . . . , f_8(x))^T. The results obtained with the initial guess x^(0) = (9/10, . . . , 9/10)^T are presented in Table 5. In this example, we observe again that the proposed methods, in the fourth-order case as well as in the sixth-order one, get the best error estimations from the first iterations.
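The shifted Gauss-Legendre rule used to discretize the integral equation is easy to reproduce; a short sketch, assuming only the standard rule on [−1, 1]:

```python
import numpy as np

def gauss_legendre_01(m):
    """m-point Gauss-Legendre nodes t_j and weights w_j on [0, 1],
    obtained by the affine shift u = (t + 1)/2 of the rule on [-1, 1]."""
    t, w = np.polynomial.legendre.leggauss(m)
    return 0.5 * (t + 1.0), 0.5 * w
```

With m = 8, the rule integrates polynomials up to degree 15 exactly, which is why it is a natural choice for discretizing the integral operator.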
Example 5.

To transform problem (46) into a nonlinear system of size 50 × 50, with step size h = 1/51, a finite difference discretization is used. In the tests made, the initial guess x^(0) = (1/10, 1/10, . . . , 1/10)^T has been used, and the obtained results are shown in Table 6. For this example, method PM^1_4 gets the best error estimations among its partners, and PM^3_6 also achieves the best error estimates from the first iteration.

Concluding Remarks
To summarize, we have developed new Jacobian-free methods of fourth and sixth order for numerically solving systems of nonlinear equations. The convergence of the proposed schemes is accelerated by introducing memory, based on current and previous available data. A wide range of numerical experiments has been carried out, confirming the theoretical results. It is found that the presented methods with memory perform with effectiveness as good as or higher than that of the existing ones.