Two Iterative Methods with Memory Constructed by the Method of Inverse Interpolation and Their Dynamics

Abstract: In this paper, we obtain two iterative methods with memory by using inverse interpolation. First, using three function evaluations per iteration, we present a two-step iterative method with memory of convergence order 4.5616. Second, a three-step iterative method of order 10.1311 is obtained, which requires four function evaluations per iteration. Herzberger's matrix method is used to prove the convergence order of the new methods. Finally, the new methods are compared with some known methods through numerical computations and basins of attraction to demonstrate their efficiency and performance.


Introduction
Solving nonlinear equations is one of the most important problems in scientific computation. Since the 1960s, many multipoint iterative methods have been proposed for solving nonlinear equations of the form f(x) = 0. The inverse interpolation method and the self-accelerating parameter method are two effective ways to construct iterative methods with memory. An iterative method that uses self-accelerating parameters is called a self-accelerating iterative method with memory. A self-accelerating parameter is a variable parameter, which can be constructed by Newton interpolation or Hermite interpolation. Many efficient self-accelerating iterative methods have been presented in recent years; see [1][2][3][4][5][6][7][8][9][10][11]. Džunić et al. [1], Soleymani et al. [2] and Sharma et al. [3] have proposed derivative-free iterative methods with one self-accelerating parameter for solving nonlinear equations. We [4][5][6] have obtained Newton-type iterative methods with memory using one simple self-accelerating parameter constructed from the iterative sequences. By increasing the number of self-accelerating parameters, Cordero et al. [7], Lotfi et al. [8] and Zafar et al. [9] have obtained iterative methods with high efficiency. Chicharro et al. [10] have analyzed the stability of iterative methods with memory using dynamical theory. Narang et al. [11] have presented a class of Steffensen-type methods with memory for solving nonlinear systems. However, the self-accelerating parameter becomes very complex if it is constructed from a high-order interpolation polynomial; to save computing time, the self-accelerating parameter should therefore have a simple structure. If an iterative method with memory is constructed by using an inverse interpolation polynomial, we call it an inverse interpolation method with memory.
Neta [12] has derived a very fast inverse interpolation iterative method with memory, given by (1). Method (1) is denoted by Neta's method (NETM) in this paper. Petković and Neta [13] have shown that the convergence order of NETM is at least 10.1311. Inspired by Neta's idea, Petković et al. [14] have presented the two-step iterative method with memory (4), where N(x_k) and φ(t) are defined by (2) and (3), respectively. The convergence order of method (4) is 4.562; method (4) is denoted by Petković's method (PETM) in this paper. In this paper, two new iterative methods with memory, constructed by the inverse interpolation method, are proposed for solving nonlinear equations. We construct a two-step iterative method of order 4.5616 using three function evaluations. To further improve the convergence order, a three-step iterative method of order 10.1311 is obtained, which requires four function evaluations. Herzberger's matrix method is used to prove the order of convergence of the new methods. Finally, numerical experiments are employed to support the theory developed in this work, and the basins of attraction of the existing methods and our methods are presented and compared to illustrate their performance.

Two Inverse Interpolation Iterative Methods with Memory
Using an inverse interpolation rational polynomial, we construct a two-step iterative method with memory. Let (5) be a rational polynomial satisfying the conditions (6)-(8). From (6)-(8), we get (9) and the system (10). Solving system (10), we obtain (11) and (12), where φ(t) is a rational function given by (13). From (5), (9), (11) and (12), we get (14). In the next step, x_{k+1} can be obtained by carrying out the same calculation as y_k, with y_{k−1} replaced by y_k; this gives (15). Together, (14) and (15) yield a new two-step method with memory, (16), where φ(t) is defined by (13). We will use Herzberger's matrix method [15] to determine the convergence order of Method (16).
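Although the displayed formulas for the rational polynomial and for φ(t) are not reproduced above, the underlying idea of inverse interpolation can be sketched in a few lines: interpolate x as a function of y = f(x) through the stored iterates and evaluate the interpolant at y = 0. The sketch below is illustrative only; it uses a Newton divided-difference polynomial rather than the paper's rational function, and the test equation, starting points, and tolerances are assumptions.

```python
# Sketch of inverse interpolation for root finding: interpolate x as a
# function of y = f(x) through the stored iterates and evaluate at y = 0.
# This illustrates the general idea only; the paper's specific rational
# polynomial and step structure are not reproduced here.

def inverse_interpolation_step(xs, ys):
    """Next approximation from the nodes (ys[i], xs[i]) via Newton divided
    differences on the swapped pairs, evaluated at y = 0."""
    n = len(xs)
    coef = list(xs)  # divided-difference coefficients of x as a polynomial in y
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (ys[i] - ys[i - j])
    # Horner evaluation of the Newton form at y = 0
    p = coef[-1]
    for i in range(n - 2, -1, -1):
        p = p * (0.0 - ys[i]) + coef[i]
    return p

def solve_with_memory(f, x0, x1, x2, tol=1e-10, max_iter=20):
    """Keep the last three iterates (the 'memory') and repeat the step."""
    xs = [x0, x1, x2]
    for _ in range(max_iter):
        ys = [f(x) for x in xs]
        x_new = inverse_interpolation_step(xs, ys)
        if abs(x_new - xs[-1]) < tol:
            return x_new
        xs = xs[1:] + [x_new]
    return xs[-1]

root = solve_with_memory(lambda x: x**3 - 2.0, 1.0, 1.5, 1.3)
```

With three nodes this is the inverse quadratic interpolation step familiar from Brent's method; the methods in the paper refine the same principle with rational interpolants and reused function values.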

Theorem 1.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ R → R for an open interval I. If an initial approximation x_0 is sufficiently close to α, then the order of convergence of the two-step Method (16) with memory is at least 4.5616.
Proof. By Herzberger's matrix method [15], the lower bound of the order of a single-step s-point method x_k = G(x_{k−1}, . . . , x_{k−s}) is the spectral radius of a nonnegative matrix associated with the method, and the lower bound of the order of an s-step method G = G_1 • G_2 • · · · • G_s is the spectral radius of the product of the per-step matrices, M^(s) = M_1 · M_2 · · · M_s. According to Method (16), we get the following matrices.
The matrix M^(2) corresponding to Method (16) follows. The characteristic polynomial of the matrix M^(2) has its largest positive root, and hence the spectral radius of M^(2), equal to approximately 4.5616. We conclude that the convergence order of Method (16) with memory is at least 4.5616.
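The matrices for Method (16) are not reproduced above, but the spectral-radius computation at the heart of Herzberger's method is easy to sketch. The snippet below shows the mechanics on a classical stand-in example, the secant method, whose associated matrix [[1, 1], [1, 0]] has spectral radius (1 + √5)/2 ≈ 1.618; the matrices of Method (16) would be substituted in the same way.

```python
import numpy as np

# Herzberger's approach: a lower bound for the R-order of a method with
# memory is the spectral radius of an associated nonnegative matrix; for
# an s-step method it is the spectral radius of the product M_1*...*M_s.
def order_lower_bound(*matrices):
    """Spectral radius of the product of the given per-step matrices."""
    prod = np.eye(matrices[0].shape[0])
    for m in matrices:
        prod = prod @ m
    return max(abs(np.linalg.eigvals(prod)))

# Classical illustration (not the matrices of Method (16)): the secant
# method x_{k+1} = G(x_k, x_{k-1}) has the matrix [[1, 1], [1, 0]],
# whose spectral radius is the golden ratio (1 + sqrt(5))/2.
M_secant = np.array([[1.0, 1.0], [1.0, 0.0]])
print(order_lower_bound(M_secant))  # ≈ 1.6180
```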
In order to improve the computational efficiency, we construct a three-step iterative method by using an inverse interpolation rational polynomial. Let (17) be a rational polynomial satisfying the conditions (18)-(21). From (18)-(21), we get (22) and the system (23). Solving system (23), we obtain (24) and (25), where φ(t) is defined by (13). From (16), (21) and (23)-(25), we have the first steps of the method. Next, z_k can be obtained by carrying out the same calculation as y_k, with y_{k−1} replaced by y_k, and then x_{k+1} can be obtained by carrying out the same calculation as z_k, with z_{k−1} replaced by z_k. Together, (27)-(29) yield a new three-step method with memory, (30). Now, we give the order of convergence of Method (30) in the following theorem.

Theorem 2.
Let α ∈ I be a simple zero of a sufficiently differentiable function f : I ⊂ R → R for an open interval I. If an initial approximation x_0 is sufficiently close to α, then the order of convergence of the three-step Method (30) with memory is at least 10.1311.
The following test functions were used in the numerical experiments.
The absolute errors |x_k − a| in the first four iterations are given in Table 1, where a is the exact root computed with 3600 significant digits. For Methods (1), (4), (16) and (30), the initial approximation y_{−1} is calculated by y_{−1} = N(x_0), and z_{−1} is calculated by z_{−1} = y_{−1} + |f(x_0)|/10. The approximated computational order of convergence (ACOC) is defined as in [16]. Table 1 lists the numerical results for f_i(x) (i = 1, · · · , 6) and shows that they are in concordance with the theory developed in this paper. The computing accuracy of our Method (16) is better than that of NETM for the nonlinear equations f_i (i = 1, 2, 4, 5, 6), and the computing accuracy of our Method (30) is better than that of PETM for f_i (i = 1, 2, 4, 6).
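The ACOC formula itself is not displayed above; the snippet below uses the standard definition based on three successive differences (an assumption consistent with the usual definition cited in the literature) and checks it on Newton's method for x^2 − 2, where the estimates should approach the theoretical order 2.

```python
import math

# Approximated computational order of convergence (ACOC) from successive
# iterates, using the standard definition
#   rho_k = ln(|x_{k+1}-x_k| / |x_k-x_{k-1}|)
#         / ln(|x_k-x_{k-1}| / |x_{k-1}-x_{k-2}|).

def acoc(xs):
    """ACOC estimates from a list of at least four iterates."""
    d = [abs(xs[i + 1] - xs[i]) for i in range(len(xs) - 1)]
    return [math.log(d[k + 1] / d[k]) / math.log(d[k] / d[k - 1])
            for k in range(1, len(d) - 1)]

# Example: Newton's method on f(x) = x^2 - 2, started at 1.5; the ACOC
# estimates should be close to 2 (quadratic convergence).
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))
estimates = acoc(xs)
print(estimates)
```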

Dynamical Analysis
The dynamical properties of the rational function give us important information about numerical features of the iterative method, such as its stability and reliability. In this section, we compare our Methods (16) and (30) with Newton's method (2), NETM (1), PETM (4) and WANM by using the basins of attraction for the complex polynomials f(z) = z^k − 1, k = 2, 3, 4, 5, 6. To generate the basins of attraction of the zeros of a polynomial under an iterative method, we take a grid of 300 × 300 points in the rectangle D = [−3.0, 3.0] × [−3.0, 3.0] ⊂ C and use these points as starting values z_0. If the sequence generated by the iterative method reaches a zero z* of the polynomial with tolerance |z_k − z*| < 10^{−5} within a maximum of 25 iterations, we decide that z_0 is in the basin of attraction of that zero and paint the point in the blue color assigned to that root. Within a basin of attraction, the number of iterations needed to reach the solution is shown in darker or brighter colors (the fewer the iterations, the brighter the color). Black denotes lack of convergence to any of the roots within the maximum number of iterations, or convergence to infinity. The parameter used in iterative Method (30) is λ_0 = 0.001. All the figures were created with the Mathematica computer algebra system. The computer specifications are Microsoft Windows 7, Intel(R) Core(TM) i3-2350M CPU, 1.79 GHz, with 2 GB of RAM. Figure 1 shows that our Method (16) and Newton's method converge globally for the complex polynomial f(z) = z^2 − 1. Compared with the other methods, our Method (16) has the fewest diverging points in Figure 2. Figure 3 shows that the convergence speed of our Method (30) is faster than that of the other methods. Figures 1-5 show that the stability of Method (16) is better than that of the other methods and that WANM has the worst stability; the basins of attraction of our Method (16) are larger than those of the other methods.
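The basin-generation scheme described above can be sketched as follows. Newton's method on f(z) = z^3 − 1 stands in for the compared methods, and the grid is reduced from 300 × 300 for speed; the tolerance 10^{−5} and the 25-iteration cap follow the text. Plotting (mapping the root index to a color and the iteration count to brightness) is omitted.

```python
import numpy as np

# Compute basin-of-attraction data for Newton's method on f(z) = z^3 - 1
# over a grid in [-3, 3] x [-3, 3]: a start point z0 belongs to the basin
# of the root it reaches within 25 iterations and tolerance 1e-5;
# unconverged points are marked -1 (painted black in the figures).

ROOTS = np.exp(2j * np.pi * np.arange(3) / 3)  # the cube roots of unity
TOL, MAX_ITER = 1e-5, 25

def basin_index(z0):
    """Return (root index or -1, iterations used) for the start point z0."""
    z = complex(z0)
    for k in range(MAX_ITER):
        for i, r in enumerate(ROOTS):
            if abs(z - r) < TOL:
                return i, k
        if z == 0.0:  # derivative 3z^2 vanishes: no convergence
            return -1, MAX_ITER
        z = z - (z**3 - 1.0) / (3.0 * z**2)  # Newton step
    return -1, MAX_ITER

n = 121  # reduced from 300 for speed
axis = np.linspace(-3.0, 3.0, n)
index = np.empty((n, n), dtype=int)  # which basin each point belongs to
iters = np.empty((n, n), dtype=int)  # shading: fewer iterations = brighter
for a, y in enumerate(axis):
    for b, x in enumerate(axis):
        index[a, b], iters[a, b] = basin_index(x + 1j * y)
```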
On the whole, our Methods (16) and (30) perform better than the other methods considered in this paper.

Conclusions
In this paper, two new iterative methods with memory, constructed by the inverse interpolation method, are proposed for solving nonlinear equations. We first propose a two-step iterative method with convergence order 4.5616, which requires three function evaluations. By adding one function evaluation, a three-step method of convergence order 10.1311 is obtained. The new methods are compared in performance with existing methods on numerical examples, and the numerical results confirm the theoretical analysis. The basins of attraction of the existing methods and our methods are presented and compared to illustrate their performance; the basin of attraction of our Method (16) is larger than those of the other methods.