Impact on Stability by the Use of Memory in Traub-Type Schemes

In this work, two Traub-type methods with memory are introduced by means of accelerating parameters. After including these parameters in Traub's method, the schemes with memory are designed using linear approximations or Newton's interpolation polynomials. In both cases, the parameters use information from the current and the previous iterations, so they define methods with memory. Moreover, they achieve a higher order of convergence than Traub's scheme without any additional functional evaluations. The real dynamical analysis verifies that the proposed methods with memory not only converge faster, but are also more stable than the original scheme. The methods selected by means of this analysis can be applied to nonlinear problems with a wider set of initial estimations than their original counterparts. This also involves a lower number of iterations in the process.


Introduction
Finding the solution of a nonlinear equation f(x) = 0 has been, and still is, present in many fields of science and technology. Because many of these equations cannot be solved analytically, the solution of this kind of problem is approximated by using iterative methods. Among them, the best known is Newton's scheme, with quadratic convergence under some conditions. The role of iterative processes for solving nonlinear problems in many branches of science and engineering has increased exponentially in recent years, due to the applicability of these algorithms to real-life problems. For instance, Shacham et al. [1,2] described the fraction of the nitrogen-hydrogen feed that gets converted to ammonia (known as fractional conversion) in the form of a nonlinear scalar equation. Shacham [3] also described the fractional conversion in a chemical reactor by using a scalar equation. Moreover, Shacham and Kehat [4] gave several examples of real-life problems which are modeled by means of nonlinear scalar equations, such as chemical equilibrium calculations, energy or material balances in a chemical reactor, the isothermal flash problem, azeotropic point calculation, the calculation of gas volume from the Beattie-Bridgeman equation, adiabatic flame temperature, liquid flow rate in a pipe, and pressure drop in a converging-diverging nozzle.
There is an extensive literature on iterative methods for solving nonlinear equations, and the design of new efficient methods is an ongoing issue for scientists. Recently, this design goes hand in hand with dynamical analysis [5][6][7][8][9], which provides knowledge about the stability of the methods involved.
Kung and Traub conjectured in [10] that iterative methods without memory which use d functional evaluations per iteration cannot have order of convergence higher than 2^{d-1}. The inclusion of memory in iterative methods is a powerful technique, since it increases the order of convergence of the method without adding new functional evaluations. Traub designed the first method with memory [11] based on Steffensen's scheme [12], increasing the order of convergence from 2 to 2.41. Several authors have modified iterative methods to include memory, as done in [13] for methods with derivatives, or in [14][15][16] for derivative-free ones.
In this work, we focus on the order of convergence of two Traub-type methods obtained by modifying the iterative expression of Traub's scheme to include memory. Moreover, their dynamical analysis is performed in a similar way as done in [17][18][19][20].
To analyze the order of convergence of each method with memory, we use the following result [21]: Let ϕ be an iterative method with memory that generates a convergent sequence {x_k} of approximations of a zero α of function f, and let e_k = x_k − α. Let us assume that there exist a nonzero constant τ and positive numbers p_i, i = 0, 1, ..., m, such that
e_{k+1} ∼ τ e_k^{p_0} e_{k−1}^{p_1} ··· e_{k−m}^{p_m}
holds. Then, the R-order of convergence of ϕ is at least p, being p the only positive root of the polynomial
p^{m+1} − p_0 p^m − p_1 p^{m−1} − ··· − p_m = 0.

The rest of this manuscript is organized as follows. In Section 2, the basic concepts of dynamics for both the one-dimensional and the multidimensional real cases are recalled, since the analysis of iterative methods with memory requires multidimensional dynamics of real variable. Based on Traub's method, Section 3 collects the construction of two new methods with memory. The first scheme needs two previous iterations, while the second one also needs an intermediate point. The convergence analysis of the methods without and with memory is also performed. To analyze the stability of the introduced methods, Section 4 is devoted to the dynamical study. Finally, some discussion and conclusions about the results are presented in Section 5.
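As an illustration of this result, the R-orders p ≈ 3.30 and p ≈ 3.56 obtained later for TM1 and TM2 are the unique positive roots of the indicial polynomials p^2 − 3p − 1 and p^2 − 3p − 2, which can be checked numerically. A minimal sketch in Python (the helper r_order is ours, not from the paper):

```python
import numpy as np

def r_order(coeffs):
    """Unique positive real root of an indicial polynomial,
    given its coefficients (highest degree first)."""
    roots = np.roots(coeffs)
    positive = [r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0]
    assert len(positive) == 1, "expected exactly one positive root"
    return positive[0]

# TM1: e_{k+1} ~ e_k^3 e_{k-1}    ->  p^2 - 3p - 1 = 0
p_tm1 = r_order([1.0, -3.0, -1.0])
# TM2: e_{k+1} ~ e_k^3 e_{k-1}^2  ->  p^2 - 3p - 2 = 0
p_tm2 = r_order([1.0, -3.0, -2.0])
print(round(p_tm1, 2), round(p_tm2, 2))  # 3.3 3.56
```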

Real Dynamics
The study of the dynamical properties of the rational operator associated with an iterative scheme on low-degree polynomials gives interesting information about the stability of the method. This study is usually carried out by applying tools of complex dynamics. However, as stated in [22], the real dynamics of a method cannot be deduced from its complex dynamics, since the behavior can be different. Therefore, when we have an iterative method with memory, the fundamentals of complex dynamics can be applied, but they must be particularized for the real variable case.
The aim of this paper is to compare an iterative scheme without memory with two other schemes with memory that have been obtained from the first one. Then, we will focus on tools of real dynamics, distinguishing the one-dimensional case, used for the real dynamics of methods without memory, from the multidimensional case, used for the real dynamics of methods with memory.
When an iterative method is applied to a nonlinear equation f(x) = 0, we obtain a rational function whose dynamics may be unknown. Now we recall some basic concepts of real dynamics for the one-dimensional case. For further details see, for example, [23,24].
Let R : R → R be the rational function obtained when an iterative scheme is applied on a polynomial f(x). Then, the orbit of a point x_0 ∈ R is given by {x_0, R(x_0), R^2(x_0), ..., R^n(x_0), ...}. The stability of a fixed point depends on its multiplier, |R'(x)|, where R' denotes the derivative of function R. Therefore, if x_F ∈ R is a fixed point of R, it is called attracting when |R'(x_F)| < 1; repelling if |R'(x_F)| > 1; superattracting if |R'(x_F)| = 0; and neutral when |R'(x_F)| = 1. A critical point x_C satisfies R'(x_C) = 0, and it is called a free critical point when it is different from the roots of f(x).
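These definitions translate directly into code. The following sketch (ours, not from the paper) classifies a fixed point from the modulus of its multiplier, using Newton's operator R(x) = (x + 1/x)/2 on p(x) = x^2 − 1 as a test case, whose fixed points x = ±1 are superattracting:

```python
def classify(Rp, xF):
    """Classify a fixed point xF of a map R from the modulus of its
    multiplier |R'(xF)|; Rp is the derivative of R."""
    m = abs(Rp(xF))
    if m == 0:
        return "superattracting"
    if m < 1:
        return "attracting"
    if m > 1:
        return "repelling"
    return "neutral"

# Newton's operator on p(x) = x^2 - 1: R(x) = (x + 1/x)/2, R'(x) = (1 - 1/x^2)/2
Rp = lambda x: 0.5 * (1.0 - 1.0 / x ** 2)
print(classify(Rp, 1.0), classify(Rp, -1.0))  # superattracting superattracting
```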
When x_F is an attracting fixed point, we define its basin of attraction as the set of pre-images of any order that converge to it, A(x_F) = {x_0 ∈ R : R^n(x_0) → x_F, n → ∞}.

Iterative schemes which use two previous iterations to calculate the following iterate have the general form x_{k+1} = h(x_{k−1}, x_k), k ≥ 1, being x_0 and x_1 two initial estimations. To carry out the dynamical study of a method with memory, we need to build an associated discrete dynamical system [17]. For this purpose, we consider the auxiliary function G : R^2 → R^2 defined as G(x_{k−1}, x_k) = (x_k, h(x_{k−1}, x_k)). Please note that this definition can be easily adapted to schemes with memory which use more than two previous iterations in each step.
As we did in the one-dimensional case, we recall some dynamical concepts. A point (z, x) ∈ R^2 is a fixed point of G when it satisfies G(z, x) = (z, x), so it must verify z = x and x = h(z, x). When a fixed point (z, x) does not correspond to a root of f(x), it is called a strange fixed point.
As in complex dynamics, the orbit of a point x_0 ∈ R^2 is composed of its successive images by G, {x_0, G(x_0), ..., G^m(x_0), ...}, and x_0 can be classified depending on its asymptotical behavior. Hence, x_0 ∈ R^2 is a T-periodic point when G^T(x_0) = x_0 but G^t(x_0) ≠ x_0, for t < T. The stability of the fixed points of G can be analyzed using the following result [25]:

Theorem 2. Let G : R^n → R^n be of class C^2 and x_T a T-periodic point. Let λ_1, λ_2, ..., λ_n be the eigenvalues of G'(x_T), where G' is the Jacobian matrix of G. Then, x_T is attracting if all the eigenvalues satisfy |λ_j| < 1, and it is repelling if all the eigenvalues satisfy |λ_j| > 1.

Moreover, if |λ_j| ≠ 1, for j = 1, 2, ..., n, x_T is hyperbolic. In particular, if there exists an eigenvalue such that |λ_i| < 1 and another such that |λ_j| > 1, the hyperbolic point is called a saddle point. Please note that for T = 1, the T-periodic point is simply a fixed point, and it is denoted by x_F. Finally, the basin of attraction of a T-periodic point x_T is defined as A(x_T) = {x ∈ R^n : G^{mT}(x) → x_T, m → ∞}.
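Theorem 2 can be applied numerically: given the Jacobian matrix at a fixed point, its eigenvalues decide the stability. A minimal sketch (the sample matrices are illustrative, not taken from any method in this paper):

```python
import numpy as np

def classify_nd(jacobian):
    """Stability of a fixed point of G: R^n -> R^n from the eigenvalues
    of the Jacobian matrix G'(x), following Theorem 2."""
    mods = np.abs(np.linalg.eigvals(jacobian))
    if np.all(mods < 1):
        return "attracting"
    if np.all(mods > 1):
        return "repelling"
    if np.any(mods < 1) and np.any(mods > 1):
        return "saddle"
    return "non-hyperbolic"

print(classify_nd([[0.0, 0.0], [0.0, 0.0]]))  # attracting
print(classify_nd([[0.5, 0.0], [0.0, 2.0]]))  # saddle
```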

Traub-Type Methods with Memory
The well-known Traub's method [11] has the iterative expression
y_k = x_k − f(x_k)/f'(x_k),
x_{k+1} = y_k − f(y_k)/f'(x_k), k = 0, 1, ...   (1)
The next result shows the order of convergence of the method through its error equation. If α ∈ I is a simple zero of f and x_0 is near enough to α, then the sequence {x_k} generated by Traub's method (1) converges to α with order of convergence 3, being its error equation e_{k+1} = 2c_2^2 e_k^3 + O(e_k^4), where e_k = x_k − α and c_j = f^{(j)}(α)/(j! f'(α)), j ≥ 2.
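Traub's scheme (1) takes only a few lines of code. A sketch in Python (function, derivative, tolerance and starting point are user-supplied; not tied to any particular example in the paper):

```python
def traub(f, df, x0, tol=1e-12, max_iter=50):
    """Traub's method: a Newton predictor followed by a corrector
    step that reuses the derivative f'(x_k) (frozen derivative)."""
    x = x0
    for _ in range(max_iter):
        d = df(x)
        y = x - f(x) / d          # Newton predictor
        x_new = y - f(y) / d      # corrector with frozen derivative
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# approximate the positive root of f(x) = x^2 - 2
root = traub(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
print(abs(root - 2.0 ** 0.5) < 1e-12)  # True
```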
Although it is not possible to increase the order of convergence of Traub's scheme itself, some changes in its iterative expression allow the inclusion of memory, obtaining methods with a higher order of convergence.
To improve the order of Traub's method, we start by adding an accelerator parameter δ in the first step of its iterative scheme, obtaining the family (2). The order of convergence of the resulting family is set in the following result.

If α ∈ I is a simple zero of f and x_0 is near enough to α, then the iterative family (2) converges to α with order of convergence 3 for any value of parameter δ, being its error equation
e_{k+1} = 2c_2(c_2 + δ) e_k^3 + O(e_k^4).   (3)

Proof. By using Taylor series expansions of f(x_k) and f'(x_k) around α, and substituting them in the iterative expression (2), the error Equation (3) is obtained.

We study how to increase the order of the class by analyzing its error equation: if δ = −c_2, the order increases in, at least, one unit. Unfortunately, α is not known, so c_2 = f''(α)/(2f'(α)) cannot be computed exactly. Therefore, we need to approximate f'(α) and f''(α), transforming the iterative scheme into one with memory. If linear approximations of f'(α) and f''(α), based on values already computed at the current and previous iterations, are applied, the accelerator parameter becomes the expression δ_k given in (5). Then, we have an iterative scheme with memory whose order of convergence is analyzed below. For simplicity, the Traub-type method with memory (2) with δ_k defined by (5) will be denoted by TM1.

If α ∈ I is a simple zero of f and x_0 is near enough to α, then the iterative method TM1 converges to α with order of convergence p ≈ 3.30, and its error equation is e_{k+1} = C e_k^3 e_{k−1} + O_4(e_k, e_{k−1}), for some constant C, where O_4(e_k, e_{k−1}) indicates that the sum of the exponents of e_k and e_{k−1} in the rejected terms of the development is at least 4.
Proof. From the error Equation (3), e_{k+1} = 2c_2(δ_k + c_2) e_k^3 + O(e_k^4). By using Taylor series expansions of the terms involved in δ_k, and taking the lower order terms, we obtain δ_k + c_2 ∼ C_1 e_{k−1}, for some constant C_1. Then,
e_{k+1} ∼ C e_k^3 e_{k−1}.   (6)

Let the R-order of the method be at least p. Then, e_{k+1} ∼ D_{k,p} e_k^p, where D_{k,p} tends to the asymptotic error constant, D_p, when k → ∞. Analogously, e_k ∼ D_{k−1,p} e_{k−1}^p, so
e_{k+1} ∼ D_{k,p} (D_{k−1,p})^p e_{k−1}^{p^2}.   (7)

In the same way, relation (6) satisfies
e_{k+1} ∼ C (D_{k−1,p})^3 e_{k−1}^{3p+1}.   (8)

Finally, the exponents of e_{k−1} in (7) and (8) must be the same, so the following equation is obtained: p^2 − 3p − 1 = 0. The only positive solution of this equation, p = (3 + √13)/2 ≈ 3.30, gives the order of convergence of the method.
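The theoretical order can be contrasted with the approximated computational order of convergence (ACOC) obtained from consecutive iterates. The sketch below (ours) runs Traub's scheme (1) on f(x) = x^2 − 2 in exact rational arithmetic, so that the tiny errors of a third-order method are not destroyed by rounding; the ACOC recovers an order close to 3, and the same experiment with TM1 iterates would approach 3.30:

```python
from fractions import Fraction
from math import log

def traub_step(x):
    """One step of Traub's method on f(x) = x^2 - 2, in exact arithmetic."""
    f, df = x * x - 2, 2 * x
    y = x - f / df
    return y - (y * y - 2) / df

xs = [Fraction(3, 2)]
for _ in range(4):
    xs.append(traub_step(xs[-1]))

# ACOC: p ~ ln(d_{k+1}/d_k) / ln(d_k/d_{k-1}), with d_k = |x_k - x_{k-1}|
d = [float(abs(xs[i + 1] - xs[i])) for i in range(4)]
acoc = log(d[3] / d[2]) / log(d[2] / d[1])
print(round(acoc, 1))  # 3.0
```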
Based on Theorem 5, the inclusion of memory increases the order of convergence of the family, since the order of convergence of (1) and (2) is 3, a lower value. In Section 4.2 the dynamical study of this family is performed, in order to check the stability of its members.
Proceeding analogously to the TM1 case, two accelerating parameters δ_1 and δ_2 are included in Traub's original method (1), obtaining the iterative scheme (9), whose error equation is shown in the next result. If α ∈ I is a simple zero of f and x_0 is near enough to α, then the iterative family (9) has order of convergence 3, for any values of parameters δ_1 and δ_2, and its error equation has the form e_{k+1} = C(δ_1, δ_2) e_k^3 + O(e_k^4), where the coefficient C depends on δ_1, δ_2 and the constants c_j, being e_k = x_k − α and c_j = f^{(j)}(α)/(j! f'(α)), j ≥ 2.
Proof. By expanding f(x_k) and f'(x_k) in Taylor series around α, expanding f(y_k) around α as well, and substituting these developments in (9), the error equation is obtained. If we study how to increase the order of convergence of this class, we can easily verify that if δ_1 = −c_2 and δ_2 = −2c_2, the order of convergence increases, at least, up to 5. We need to approximate f'(α) and f''(α), and it could be done again by using linear approximations, as in the TM1 family. However, if we want to get an order of convergence closer to 5, it is preferable to use higher-order approximations.
For this purpose, we use Newton's interpolation polynomial of second degree, set through the three available approximations x_k, x_{k−1} and y_{k−1}, to interpolate f. Denoted by N_2(t) = N_2(t; x_k, x_{k−1}, y_{k−1}), it is defined by
N_2(t) = f(x_k) + f[x_k, x_{k−1}](t − x_k) + f[x_k, x_{k−1}, y_{k−1}](t − x_k)(t − x_{k−1}),   (11)
where f[·,·] and f[·,·,·] denote the divided differences of first and second order. Now, if we set the approximations f'(α) ≈ N_2'(x_k) and f''(α) ≈ N_2''(x_k), we have the accelerator parameters δ_{1,k} and δ_{2,k} given in (12). The Traub-type family with memory (9), taking the expression of the two accelerator parameters in (12), is called TM2 method.
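Newton's interpolation polynomial of second degree is built from divided differences, and its derivatives at x_k are straightforward to evaluate. A sketch of its construction (the node values and the sample function are illustrative):

```python
def newton_poly2(xk, xk1, yk1, f):
    """Newton's interpolation polynomial of degree 2 through the nodes
    x_k, x_{k-1}, y_{k-1}, returned together with its two derivatives."""
    f0, f1, f2 = f(xk), f(xk1), f(yk1)
    dd1 = (f1 - f0) / (xk1 - xk)        # f[x_k, x_{k-1}]
    dd1b = (f2 - f1) / (yk1 - xk1)      # f[x_{k-1}, y_{k-1}]
    dd2 = (dd1b - dd1) / (yk1 - xk)     # f[x_k, x_{k-1}, y_{k-1}]
    N2 = lambda t: f0 + dd1 * (t - xk) + dd2 * (t - xk) * (t - xk1)
    dN2 = lambda t: dd1 + dd2 * ((t - xk) + (t - xk1))
    d2N2 = 2 * dd2                      # constant second derivative
    return N2, dN2, d2N2

f = lambda x: x ** 3 - 2 * x            # sample nonlinear function
N2, dN2, d2N2 = newton_poly2(1.0, 1.5, 1.2, f)
# N2 interpolates f at the three nodes
print(all(abs(N2(t) - f(t)) < 1e-12 for t in (1.0, 1.5, 1.2)))  # True
```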
In the next theorem, we prove how much the order has increased with respect to Traub's method: if α ∈ I is a simple zero of f and x_0 is near enough to α, then the iterative method TM2 converges to α with order of convergence p ≈ 3.56.
Proof. Let us denote e_{k,y} = y_k − α, for all k. Let N_2(t) be defined as in (11), and consider the Taylor developments of f at the nodes in terms of the errors at each iterative step of the method. Using these developments in N_2(t), evaluating the derivatives of N_2(t) at the point x_k and neglecting the higher-order terms in e_k, the relations (15) and (16) between the accelerator parameters and the errors are obtained. On the other hand, let the R-order of the method be at least p. Then, e_{k+1} ∼ D_{k,p} e_k^p, where D_{k,p} tends to D_p, the asymptotic error constant, when k → ∞.
In the same way, e_k ∼ D_{k−1,p} e_{k−1}^p. Using relations (15) and (16) in (13), the error e_{k+1} can be expressed in terms of e_{k−1} in two ways, (17) and (18). Finally, if we match the exponents of (17) and (18), the following equation is obtained: p^2 − 3p − 2 = 0. The only positive solution of the previous equation is p = (3 + √17)/2 ≈ 3.56. Then, TM2 method has order of convergence p ≈ 3.56.

Real Dynamics of Traub's Method
In [5], Amat et al. carried out a dynamical study of Traub's method when it is applied to polynomials of second and third degree. Taking this study on quadratic polynomials as a reference, we construct and analyze the bifurcation diagrams and the dynamical lines of this method. This analysis allows us to compare the dynamical features of Traub's method with the corresponding ones of TM1 and TM2.
Consider the next result, which allows the generalization of the dynamical study of some iterative schemes.

Theorem 8 (Scaling theorem).
Let f(x) be an analytic function, and let A(x) = αx + β, with α ≠ 0, be an affine map. Let g(x) = λ(f ∘ A)(x), with λ ≠ 0. Then, the fixed-point operator M_f is analytically conjugated to M_g by A, i.e., (A ∘ M_g ∘ A^{−1})(x) = M_f(x).

In addition, it is possible to analyze the dynamics of a family of polynomials with just the analysis of a few cases.

Theorem 9. Let q(x) = a_1 x^2 + a_2 x + a_3, a_1 ≠ 0, be a generic quadratic polynomial with simple roots. Then, q(x) can be reduced to p(x) = x^2 + c, where c = 4a_1 a_3 − a_2^2, by means of an affine map. This affine transformation induces a conjugation between M_q and M_p, the fixed-point operators corresponding to polynomials q(x) and p(x), respectively.
Traub's method satisfies the Scaling Theorem, as proved in [5]. Moreover, according to Theorem 9, the study of the family of polynomials p_c(x) = x^2 + c can be generalized to any quadratic polynomial.
If we apply Traub's method on p_c(x) = x^2 + c, c ∈ R, we obtain the fixed-point operator M_c : R → R, which depends on c:
M_c(x) = (3x^4 − 6cx^2 − c^2)/(8x^3).
Solving M_c(x) = x we get, for all c < 0, two superattracting fixed points, x_{F1,2}(c) = ±√(−c), which are the roots of p_c(x), and two strange fixed points, x_{F3,4}(c) = ±√(−c/5). We have plotted in Figure 1 the bifurcation diagrams for the fixed points of M_c. Figure 1a represents the superattracting fixed points x_{F1}(c) and x_{F2}(c), while Figure 1b represents the strange fixed points x_{F3}(c) and x_{F4}(c). The plots have been generated by iterating Traub's method taking as initial estimation each fixed point with a small perturbation. For each c, the successive values of x_k are plotted from iterate #500 to #700 in order to represent the advanced state of the orbit of each initial guess. The bifurcation diagrams show values of c where the method has more stability. In all cases, when c > 0 there are no points because the fixed points are complex, so c cannot take positive values for any fixed point. As x_{F1,2}(c) are superattracting, in Figure 1a all the points converge to one of the two roots. The same convergence to the roots is observed in Figure 1b, because x_{F3,4}(c) are repelling, so there are no strange attracting points.
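For Traub's method applied on p_c(x) = x^2 + c, clearing denominators in the fixed-point equation M_c(x) = x leads to the quartic 5x^4 + 6cx^2 + c^2 = 0, whose real roots for c < 0 are the four fixed points. A sketch for c = −1 (ours), estimating each multiplier |M'_c(x_F)| by a centered difference:

```python
import numpy as np

c = -1.0
# fixed points of Traub's operator on p_c(x) = x^2 + c:
# M_c(x) = x  <=>  5x^4 + 6c x^2 + c^2 = 0  (for x != 0)
fixed = np.sort(np.roots([5.0, 0.0, 6.0 * c, 0.0, c * c]).real)
print(fixed)  # approximately [-1, -0.4472, 0.4472, 1]

def M(x):
    """Traub's fixed-point operator on p_c: Newton step y, then corrector."""
    y = x - (x * x + c) / (2.0 * x)
    return y - (y * y + c) / (2.0 * x)

h = 1e-6
for xF in fixed:
    mult = abs((M(xF + h) - M(xF - h)) / (2.0 * h))  # centered difference for |M'|
    print(round(xF, 4), "repelling" if mult > 1 else "attracting")
```

The roots ±1 turn out to be attracting (in fact superattracting) and the strange fixed points ±√(1/5) repelling, in agreement with the bifurcation diagrams.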
As represented in [19], the dynamical lines of Traub's method applied on p_c(x) are shown in Figure 2 for different values of c. Using the software MATLAB R2017b, the interval x_0 ∈ [−5, 5] has been divided into 500 points, which have been used as initial estimations to iterate Traub's method. When a point in the interval converges to one of the roots of p_c(x), it is painted in orange or blue: orange points correspond to the basin of attraction of x_{F1}(c), while the blue ones belong to x_{F2}(c). The points that do not converge to any root are painted in black. The stopping criteria are a maximum of 50 iterations and a difference between two consecutive iterations lower than 10^{-3}.

The convergence plane [26] gathers into one graphic the complete behavior of every member of a family of one-dimensional iterative methods. Figure 3 shows the convergence plane of Traub's method. The basins of attraction of the fixed points have been colored in blue or orange, taking values of the parameter c ∈ [−30, 0] and initial estimations x_0 ∈ [−30, 30]. As an added value, the superattracting fixed points and the strange fixed points are also represented with black and white lines, respectively, since they depend on the value of c. For every value of c < 0, each initial guess converges to a superattracting fixed point. Almost every initial guess x_0 > 0 tends to x_{F1}(c), and almost every initial guess x_0 < 0 tends to x_{F2}(c). There is a thin region of initial estimations that converge to the farther superattracting fixed point; the width of this region is set by the value of the strange fixed points x_{F3}(c) and x_{F4}(c).
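The dynamical lines can be reproduced without graphics by recording, for each initial estimation in [−5, 5], the root of p_c(x) reached by Traub's method. A sketch in Python (ours): labels +1/−1/0 stand for the orange, blue and black points, and convergence is detected through the residual |p_c(x_k)| < 10^{-3}, a slight variation of the criterion used in the text:

```python
def traub_label(x0, c, tol=1e-3, max_iter=50):
    """Basin label of x0 for Traub's method on p_c(x) = x^2 + c:
    +1 / -1 for convergence to +sqrt(-c) / -sqrt(-c), 0 otherwise."""
    if c >= 0:
        return 0                      # no real roots to converge to
    x = x0
    for _ in range(max_iter):
        if x == 0.0:
            return 0                  # operator undefined at x = 0
        y = x - (x * x + c) / (2.0 * x)
        x = y - (y * y + c) / (2.0 * x)
        if abs(x * x + c) < tol:      # close enough to a root of p_c
            return 1 if x > 0 else -1
    return 0

c = -1.0
labels = [traub_label(-5.0 + 10.0 * i / 499, c) for i in range(500)]
print(labels.count(1), labels.count(-1), labels.count(0))
```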

Multidimensional Real Dynamics of TM1 Method
From now on, we apply TM1 scheme on the family of polynomials p_c(x) = x^2 + c. As TM1 is a method with memory, its fixed-point function depends on the two previous iterates, x_k and x_{k−1}, denoted by x and z, respectively; the resulting fixed-point function is R_c(z, x). Since a fixed point must satisfy z = x and x = g(z, x), the fixed points are those x such that R_c(x, x) = (x, x). Solving this equation, the only fixed points for c ≤ 0 are the roots of p_c(x), x_{F1,2}(c) = (±√(−c), ±√(−c)). When c > 0, all the fixed points are complex, so they are out of this study. To analyze the behavior of the fixed points, let us consider the Jacobian matrix of R_c(z, x) at the point x_{F1}(c) = (−√(−c), −√(−c)). The eigenvalues associated with this fixed point are λ_1 = λ_2 = 0. The same result is obtained with the fixed point x_{F2}(c) = (√(−c), √(−c)). Then, by applying Theorem 2, the two fixed points are attracting. In Figure 4, some dynamical planes associated with TM1 method for different values of c are shown. The implementation of these dynamical planes follows a similar structure as presented in [27]. A mesh of 500 × 500 initial estimations has been used to iterate the method, and each point is painted according to the root it has converged to: blue for (−√(−c), −√(−c)), orange for (√(−c), √(−c)), and black otherwise. The stopping criteria are a maximum of 50 iterations and a difference between two consecutive iterations lower than 10^{-3}. An expanding behavior of the basins of attraction can be seen when the value of c decreases. Let us remark that there is no periodic orbit in the black region.

Multidimensional Real Dynamics of TM2 Method
As stated in Section 2, methods with memory that use two previous iterates have the expression x_{k+1} = g(x_{k−1}, x_k). However, as TM2 method uses the interpolation polynomial N_2(t; x_k, x_{k−1}, y_{k−1}), the calculation of the estimation x_{k+1} requires the knowledge of x_k, x_{k−1} and y_{k−1}. Therefore, the general expression of a method with memory that uses three previous points is x_{k+1} = g(x_{k−1}, y_{k−1}, x_k), being g its fixed-point function. Moreover, its fixed-point operator G : R^3 → R^3 is defined by the expression G(x_{k−1}, y_{k−1}, x_k) = (x_k, y_k, x_{k+1}) = (x_k, y_k, g(x_{k−1}, y_{k−1}, x_k)), k = 1, 2, ..., where x_0, y_0 and x_1 are the initial approximations. It will be denoted x = x_k, z = x_{k−1}, xy = y_k and zy = y_{k−1}, for all k. Then, (19) defines a discrete dynamical system whose fixed points (z, zy, x) ∈ R^3 must verify z = x, zy = xy and x = g(z, zy, x).

To compare the stability of TM2 with that of TM1 and Traub's (1) methods, the family of polynomials p_c(x) = x^2 + c is used again, obtaining the fixed-point operator G_c of TM2 scheme applied on p_c(x). Since the conditions (20) are imposed on G_c in order to obtain a real-valued function, the result is a one-dimensional operator G̃_c(x), on which the dynamical analysis of the fixed points is carried out. The fixed points of G̃_c(x) are the roots of p_c(x), x_{F1}(c) = −√(−c) and x_{F2}(c) = √(−c), for c < 0. In addition, x_{F3} = 0 is a strange fixed point. When c > 0, all the fixed points of G̃_c(x) are complex, so they are out of the real dynamics analysis. Due to the form of the Jacobian matrix, the stability of the fixed points can be checked through the derivative G̃'_c(x). By evaluating the fixed points in |G̃'_c(x)|, x_{F1}(c) and x_{F2}(c) are superattracting, and x_{F3} is repelling, because |G̃'_c(0)| = 4.
Computing G̃'_c(x) = 0, four critical points are obtained: the roots of p_c(x) and two free critical points, x_{C1}(c) = −√(2c/3) and x_{C2}(c) = √(2c/3), which are real when c > 0. Following the same procedure as in Traub's method, the bifurcation diagrams for the fixed points have been plotted in Figure 5. On the one hand, Figure 5a confirms that x_{F1,2}(c) are superattracting fixed points when c < 0. On the other hand, Figure 5b illustrates that x_{F3} is a repelling point: if it is taken, with a small perturbation, as the initial estimation of the method, the successive iterates do not converge to it, but to the roots of p_c(x).

Figure 5. Bifurcation diagrams for the fixed points of TM2 method.
The parameter lines of a method show the final orbit of each free critical point depending on c. As each point of the lines represents a particular method, they allow the choice of values of the parameter that guarantee convergence to one of the roots, i.e., the selection of the most stable methods of the family. TM2 method only has free critical points when c > 0; however, in this case there are no real fixed points. For this reason, the parameter lines of TM2 method do not give us information about the stability of the family.

Figure 6 shows the dynamical lines of TM2 method when it is applied on p_c(x) for different values of c. The parameters of the representation follow the same structure as the dynamical lines of Figure 2. Figure 6a-c show that, for every c < 0, the orbit of each initial x_0 converges to a root of p_c(x). Moreover, negative initial guesses converge to x_{F1}(c) and the positive ones converge to x_{F2}(c). As expected, TM2 method does not converge when c is positive, as shown in Figure 6d. To analyze the dynamical behavior of the complete family, Figure 7 plots the convergence plane of TM2 method. Following the notation of Figure 3, now there is a white line which represents x_{F3}. This strange fixed point splits the plane into two half-planes, each one containing a different basin of attraction.

Conclusions
In this paper, two Traub-type iterative schemes are designed by introducing accelerating parameters in the iterative expression of Traub's scheme. The use of memory in these parameters gives rise to TM1 and TM2 methods, whose convergence analysis and stability study have been carried out to compare them with Traub's scheme.
In the dynamical study, Traub and TM2 methods have strange fixed points, but their bifurcation diagrams show that they are repelling. In addition, it can be checked in the bifurcation diagrams that the roots of p_c(x) are superattracting points, because the iterates, even when they start close to a strange fixed point, converge to one of the roots.
Regarding the order of convergence of Traub, TM1 and TM2 methods, the use of memory guarantees a higher order of convergence than that of the original method, without any additional functional evaluations. Moreover, the convergence plane in Figure 7 verifies that the methods with memory have greater stability than the corresponding ones without memory.
These good properties of the proposed methods, both in order of convergence and in stability, allow their application to practical problems, relaxing the usually strong conditions imposed on the initial estimations.