Article

Stability Analysis of Jacobian-Free Newton’s Iterative Method

by Abdolreza Amiri 1, Alicia Cordero 2,*, Mohammad Taghi Darvishi 1 and Juan R. Torregrosa 2
1 Department of Mathematics, Faculty of Science, Razi University, 67149 Kermanshah, Iran
2 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, 46022 València, Spain
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(11), 236; https://doi.org/10.3390/a12110236
Submission received: 3 October 2019 / Revised: 30 October 2019 / Accepted: 2 November 2019 / Published: 6 November 2019

Abstract: It is well known that scalar iterative methods with derivatives are considerably more stable than their derivative-free partners, where stability is understood as a measure of the wideness of the set of converging initial estimations. In the multivariate case, multidimensional dynamical analysis allows us to address this task; we carry it out on different Jacobian-free variants of Newton's method, whose estimations of the Jacobian matrix have increasing order. The respective basins of attraction and the number of fixed and critical points give us valuable information in this sense.

1. Introduction

Let $F(x) = 0$, $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$, be a system of nonlinear equations. Usually, this kind of problem cannot be solved analytically, and a solution is approached by means of iterative techniques. The best-known iterative algorithm is Newton's method, with second order of convergence and iterative expression
$$x^{(k+1)} = x^{(k)} - \left[ F'(x^{(k)}) \right]^{-1} F(x^{(k)}), \quad k = 0, 1, \ldots, \qquad (1)$$
from an estimation $x^{(0)} \in D$. This iterative scheme needs the evaluation of the nonlinear function and its associated Jacobian matrix at each iteration. However, sometimes the size of the system or the specific properties of the problem do not allow the Jacobian matrix $F'(x)$ to be evaluated, or even calculated, at each iteration (for example, if $F$ is an error function); in these cases, some approximation of the Jacobian matrix can be used. The most usual one is a divided difference matrix, that is, a linear operator $[x^{(k)}, y^{(k)}; F]$ satisfying the condition (see [1,2])
$$[x, y; F]\,(x - y) = F(x) - F(y). \qquad (2)$$
In this case, when $F'(x^{(k)})$ in Newton's method is replaced by $[x^{(k)}, y^{(k)}; F]$, where $y^{(k)} = x^{(k)} + F(x^{(k)})$, we obtain the so-called Steffensen's method [1], also with order of convergence two. To compute in practice the elements of the divided difference operator, the following first-order divided difference operator
$$[x^{(k)}, y^{(k)}; F]^1_{i,j} = \frac{F_i(y_1, \ldots, y_{j-1}, y_j, x_{j+1}, \ldots, x_n) - F_i(y_1, \ldots, y_{j-1}, x_j, x_{j+1}, \ldots, x_n)}{y_j - x_j}, \qquad (3)$$
or the symmetric second-order one,
$$[x^{(k)}, y^{(k)}; F]^2_{i,j} = \frac{F_i(y_1, \ldots, y_{j-1}, y_j, x_{j+1}, \ldots, x_n) - F_i(y_1, \ldots, y_{j-1}, x_j, x_{j+1}, \ldots, x_n) + F_i(x_1, \ldots, x_{j-1}, x_j, y_{j+1}, \ldots, y_n) - F_i(x_1, \ldots, x_{j-1}, y_j, y_{j+1}, \ldots, y_n)}{2\,(y_j - x_j)}, \qquad (4)$$
are proposed in [3].
Let us remark that operator (4) is symmetric and can be used to evaluate the divided difference even when the problem is nonsymmetric. However, the number of evaluations of scalar functions in the computation of (4) is higher than in (3). Moreover, when divided difference (3) is used as an approximation of the Jacobian matrices appearing in an iterative method, the procedure usually does not preserve its order of convergence.
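For illustration, the following minimal NumPy sketch builds the matrix of operator (3) column by column and uses it in a Steffensen-type step with $y = x + F(x)$; the helper names and the quadratic test system are our own, not code from the paper.

```python
import numpy as np

def forward_divided_difference(F, x, y):
    """Matrix of operator (3): column j uses the mixed arguments
    (y_1,...,y_j, x_{j+1},...,x_n) and (y_1,...,y_{j-1}, x_j,...,x_n)."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        upper = np.concatenate([y[:j + 1], x[j + 1:]])
        lower = np.concatenate([y[:j], x[j:]])
        M[:, j] = (F(upper) - F(lower)) / (y[j] - x[j])
    return M

def steffensen_step(F, x):
    """One Steffensen-type step: F'(x) replaced by [x, y; F] with y = x + F(x)."""
    y = x + F(x)
    A = forward_divided_difference(F, x, y)
    return x - np.linalg.solve(A, F(x))

# Hypothetical test system q(x) = (x1^2 - 1, x2^2 - 1)
q = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
x = np.array([1.5, 2.0])
for _ in range(6):
    x = steffensen_step(q, x)
print(x)   # approximately the root (1, 1)
```

Each column of the matrix only requires evaluating $F$ at two mixed points, which is what makes the scheme Jacobian-free.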
The authors in [4] proposed to replace $y = x + F(x)$ in (2) by $y = x + \Gamma(x)$, being $\Gamma(x) = (f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m)^T$, with $m \in \mathbb{N}$. Then, the divided difference
$$[x, x + \Gamma(x); F]\,\Gamma(x) = F(x + \Gamma(x)) - F(x) \qquad (5)$$
becomes an approximation of $F'(x)$ of order $m$. It was also shown that, by choosing a suitable value of $m$, the order of convergence of any iterative method can be preserved with a reduced computational cost. So, the Jacobian-free variants of Newton's scheme that we analyze keep the second order of convergence of the original method.
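In one dimension, the effect of the $m$th power can be seen directly: with $\Gamma(x) = f(x)^m$, the error of the divided difference with respect to $f'(x)$ behaves like $O(|f(x)|^m)$ near a root. A small sketch with our own test function and evaluation point, chosen close to the root $x = 1$:

```python
# Scalar illustration of (5): [x, x + f(x)^m; f] approximates f'(x) with order m.
f  = lambda x: x**6 - 1.0          # hypothetical test function
df = lambda x: 6.0 * x**5          # its exact derivative, for comparison

x = 1.01                           # near the root, so |f(x)| is small (about 0.06)
for m in (1, 2, 3):
    g = f(x)**m                    # Gamma(x) in one dimension
    approx = (f(x + g) - f(x)) / g
    print(m, abs(approx - df(x)))  # each extra power shrinks the error by roughly |f(x)|
```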
Our aim is to see whether, beyond the order of convergence of the method, the use of different divided differences to replace the Jacobian matrix in the iterative expression of Newton's method can affect the dependence of the modified scheme on the initial estimations needed to converge.
By using a Taylor expansion of the divided difference operator (5), the authors in [4] proved the following results.
Theorem 1.
(See [4]) Let $F: D \subseteq \mathbb{R}^n \to \mathbb{R}^n$ be a nonlinear operator with coordinate functions $f_i$, $i = 1, 2, \ldots, n$, and let $m \in \mathbb{N}$ be such that $m \geq 1$. Let us consider the divided difference operator $[x + \Gamma(x), x; F]$, where $\Gamma(x) = (f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m)^T$. Then the order of the divided difference $[x + \Gamma(x), x; F]$ as an approximation of the Jacobian matrix $F'(x)$ is $m$.
Corollary 1.
(See [4]) Under the same assumptions as in Theorem 1, the order of the central divided difference operator
$$[x + \Gamma(x), x - \Gamma(x); F] \qquad (6)$$
is $2m$ as an approximation of $F'(x)$, being $\Gamma(x) = (f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m)^T$.
Based on these results, a new technique was presented in [4] to transform iterative schemes for solving nonlinear systems into Jacobian-free ones, preserving the order of convergence in all cases. The key element of this approach is the $m$th power of the coordinate functions of $F(x)$, whose value must be chosen according to the order of convergence of the first step of the iterative method. This general procedure was checked, both theoretically and numerically, showing the preservation of the order of convergence and very precise results when appropriate values of $m$ were employed.
The authors in [5] also showed that the order of the approximation of $F'(x)$ can be improved (in terms of efficiency) by means of Richardson extrapolation, as can be seen in the following result.
Lemma 1.
(See [5]) The divided difference
$$R[x, x + \Gamma(x); F] := \frac{1}{3}\left( 2^2 \left[ x + \tfrac{1}{2}\Gamma(x),\, x - \tfrac{1}{2}\Gamma(x); F \right] - \left[ x + \Gamma(x),\, x - \Gamma(x); F \right] \right), \qquad (7)$$
which is obtained by Richardson extrapolation of (6), is an approximation of order $4m$ of $F'(x)$.
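In the same one-dimensional setting as before, a quick sketch (again with our own test function) illustrates how the central difference of Corollary 1 and its Richardson combination (7) improve the approximation of the derivative:

```python
# Scalar illustration of Corollary 1 and Lemma 1: with g = f(x)^m, the central
# difference has error O(|f(x)|^{2m}) and its Richardson combination O(|f(x)|^{4m}).
f  = lambda x: x**6 - 1.0
df = lambda x: 6.0 * x**5

def central(x, g):                 # one-dimensional version of [x + g, x - g; f]
    return (f(x + g) - f(x - g)) / (2.0 * g)

x = 1.05
for m in (1, 2):
    g = f(x)**m
    richardson = (4.0 * central(x, g / 2) - central(x, g)) / 3.0
    print(m, abs(central(x, g) - df(x)), abs(richardson - df(x)))
```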
Although the design and convergence analysis of iterative methods for solving nonlinear problems has been a fruitful area of research in the last decades, only recently has the study of their stability become usual (see, for example, [6,7,8,9,10]). So, when a method is presented, not only its order of convergence and efficiency are important, but also its dependence on the initial estimations used to converge. This is known as the stability analysis of the iterative scheme.
The study of the stability of an iterative procedure has mostly been carried out by using techniques from complex discrete dynamics, which are very useful in the scalar case. Nevertheless, they frequently do not provide enough information when systems of nonlinear equations must be solved. This is the reason why the authors in [11] applied, for the first time, real multidimensional discrete dynamics to analyze the performance of vectorial iterative methods on polynomial systems. In this way, it was possible to draw conclusions about their stability properties: their dependence on the initial estimation used and the simplicity or complexity of the sets of convergent initial guesses (known as the Fatou set) and their boundaries (the Julia set). These procedures have been employed in recent years to analyze new and existing vectorial iterative schemes; see, for instance, [5,12,13,14]. In this paper, we apply these techniques to the vectorial rational functions obtained by applying different Jacobian-free variants of Newton's method to several low-degree polynomial systems. These vectorial rational functions are also called multidimensional fixed point functions in the literature.
The polynomial systems used in this study are defined by the nonlinear functions:
$$q(x) = \begin{cases} q_1(x) = x_1^2 - 1 \\ q_2(x) = x_2^2 - 1 \end{cases}, \qquad r(x) = \begin{cases} r_1(x) = x_1^6 - 1 \\ r_2(x) = x_2^6 - 1 \end{cases}, \qquad p(x) = \begin{cases} p_1(x) = x_1^2 - x_2 - 1 \\ p_2(x) = x_2^2 - x_1 - 1 \end{cases}.$$
By using uncoupled systems, such as $q(x) = 0$ and $r(x) = 0$, and a coupled one, such as $p(x) = 0$, we can generalize the performance of the proposed methods to other nonlinear systems. Moreover, let us remark that these results can be obtained by using similar systems of size $n > 2$, but we analyze the case $n = 2$ in order to use two-dimensional plots to visualize the analytical findings.
In the next section, the dynamical behavior of the fixed point functions of Jacobian-free versions of Newton's method applied to $q(x)$, $r(x)$ and $p(x)$ is studied when forward, central and Richardson extrapolation-type divided differences are used. To this end, some dynamical concepts must be introduced.
Definition 1.
(See [11]) Let $G: \mathbb{R}^n \to \mathbb{R}^n$ be a vectorial function. The orbit of the point $x^{(0)} \in \mathbb{R}^n$ is defined as the set of successive images of $x^{(0)}$ by the vectorial function, $\{ x^{(0)}, G(x^{(0)}), \ldots, G^m(x^{(0)}), \ldots \}$.
The dynamical behavior of the orbit of a point of $\mathbb{R}^n$ can be classified depending on its asymptotic behavior. In this way, a point $x^* \in \mathbb{R}^n$ is a fixed point of $G$ if $G(x^*) = x^*$.
The following are well-known results in discrete dynamics; in this paper we use them to study the stability of nonlinear operators.
Theorem 2.
(See [15]) Let $G: \mathbb{R}^n \to \mathbb{R}^n$ be $\mathcal{C}^2$. Assume that $x^*$ is a $k$-periodic point, $k \geq 1$. Let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be the eigenvalues of $G'(x^*)$.
(a) If all eigenvalues $\lambda_j$ satisfy $|\lambda_j| < 1$, then $x^*$ is attracting.
(b) If one eigenvalue satisfies $|\lambda_{j_0}| > 1$, $j_0 \in \{1, 2, \ldots, n\}$, then $x^*$ is unstable, that is, a repelling or a saddle point.
(c) If all eigenvalues $\lambda_j$ satisfy $|\lambda_j| > 1$, then $x^*$ is repelling.
Also, a fixed point is called hyperbolic if all the eigenvalues $\lambda_j$ of $G'(x^*)$ satisfy $|\lambda_j| \neq 1$. If there exist an eigenvalue with $|\lambda_j| < 1$ and another with $|\lambda_i| > 1$, the hyperbolic point is called a saddle point.
Let us note that the entries of $G'(x^*)$ are the partial derivatives of each coordinate function of the vectorial rational operator that defines the iterative scheme. When the calculation of the spectrum of $G'(x^*)$ is difficult, the following result, which is consistent with the previous theorem, can be used.
Proposition 1.
(See [11]) Let $x^*$ be a fixed point of $G$. Then,
(a) If $\left| \dfrac{\partial g_i(x^*)}{\partial x_j} \right| < \dfrac{1}{n}$ for all $i, j \in \{1, \ldots, n\}$, then $x^* \in \mathbb{R}^n$ is attracting.
(b) If $\dfrac{\partial g_i(x^*)}{\partial x_j} = 0$ for all $i, j \in \{1, \ldots, n\}$, then $x^* \in \mathbb{R}^n$ is superattracting.
(c) If $\left| \dfrac{\partial g_i(x^*)}{\partial x_j} \right| > \dfrac{1}{n}$ for all $i, j \in \{1, \ldots, n\}$, then $x^*$ is unstable and lies in the Julia set.
In this paper, we only use Theorem 2 to investigate the stability of the fixed points. Let us consider an iterative method for finding the roots of a nonlinear system $F(x) = 0$; it generates a multidimensional fixed point operator $G(x)$. A fixed point $x^*$ of $G(x)$ is called a strange fixed point if it is not a root of the nonlinear function $F(x)$. The basin of attraction of $x^*$ (which may be a root of $F(x)$ or a strange fixed point) is the set of pre-images of any order,
$$\mathcal{A}(x^*) = \left\{ x^{(0)} \in \mathbb{R}^n : G^m(x^{(0)}) \to x^*, \ m \to \infty \right\}.$$
Definition 2.
A point $x \in \mathbb{R}^n$ is a critical point of $G(x)$ if the eigenvalues $\lambda_j$ of $G'(x)$ are null for all $j = 1, 2, \ldots, n$.
The critical points play an important role in this study, since a classical result of Julia and Fatou establishes that, in the connected component of any basin of attraction containing an attracting fixed point, there is always at least one critical point.
Obviously, a superattracting fixed point is also a critical point. A critical point that is not a root of the function $F(x)$ is called a free critical point.
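In practice, the classification of Theorem 2 can be checked numerically by estimating $G'(x^*)$ with finite differences and examining the moduli of its eigenvalues. A minimal sketch (the function names, step size and tolerance are our own choices):

```python
import numpy as np

def numerical_jacobian(G, x, h=1e-6):
    """Central finite-difference estimate of G'(x)."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (G(x + e) - G(x - e)) / (2.0 * h)
    return J

def classify_fixed_point(G, x_star, tol=1e-8):
    """Classify a fixed point of G by the moduli of the eigenvalues of G'(x*)."""
    mags = np.abs(np.linalg.eigvals(numerical_jacobian(G, x_star)))
    if np.all(mags < tol):
        return "superattracting (also a critical point)"
    if np.all(mags < 1):
        return "attracting"
    if np.all(mags > 1):
        return "repelling"
    if np.any(mags > 1):
        return "unstable (saddle or repelling)"
    return "non-hyperbolic"

# e.g., Newton's fixed point operator on q(x) acts coordinate-wise as (x_j^2 + 1) / (2 x_j)
G = lambda x: (x**2 + 1.0) / (2.0 * x)
print(classify_fixed_point(G, np.array([1.0, 1.0])))   # superattracting
```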
The motivation of this work is to analyze the stability of the Jacobian-free variants of Newton's method for the simplest nonlinear systems. It is known that, in general, divided differences are less stable than Jacobian matrices, but we study how increasing the precision of the estimation of the Jacobian matrix affects the stability of the methods and the wideness of the basins of attraction of the roots.

2. Jacobian-Free Variants of Newton’s Method

In this section, we study the dynamical properties of Jacobian-free Newton's method when different divided differences are used. For this purpose, we analyze the dynamical concepts on the polynomial systems $q(x) = 0$, $r(x) = 0$ and $p(x) = 0$. The dynamical concepts on two-dimensional systems can be extended to the $n$-dimensional case (see [11] for how the dynamics of a multidimensional iterative method can be analyzed), so, in order to visualize the analytical results graphically, we investigate the two-dimensional case. From now on, the modified Newton's scheme that results from replacing the Jacobian matrix in Newton's method by the forward divided difference (5) is denoted by FMNm, for $m = 1, 2, \ldots$. In a similar way, when the central divided difference (6) is used to replace the Jacobian matrix in Newton's procedure, the resulting modified schemes are denoted by CMNm, for $m = 1, 2, \ldots$. Also, the modified Newton's method obtained by using divided difference (7) is denoted by RMNm, for $m = 1, 2, \ldots$.
Let us remark that Newton's method has quadratic convergence and, by using the mentioned approximations of the Jacobian matrix, this order is preserved, even in the case $m = 1$ (in which the scheme is known as Steffensen's method).
We apply the proposed families FMNm, CMNm and RMNm to the polynomial systems $q(x) = 0$, $r(x) = 0$ and $p(x) = 0$. In the following sections, the coordinate functions of the different classes of iterative methods, together with their fixed and critical points, are summarized.

2.1. Second-Degree Polynomial System $q(x) = 0$

Proposition 2.
The coordinate functions of the fixed point operator $\lambda^{1,m}(x)$ associated to FMNm, for $m = 1, 2, \ldots$, on the polynomial system $q(x) = 0$ are
$$\lambda_j^{1,m}(x) = x_j - \frac{q_j(x)}{2 x_j + q_j(x)^m}, \quad j = 1, 2.$$
Moreover,
(a) For $m = 1, 2, \ldots$, the only fixed points are the roots of $q(x)$: $(1, 1)$, $(1, -1)$, $(-1, 1)$ and $(-1, -1)$, which are also superattracting. There are no strange fixed points in this case.
(b) The components of the free critical points $c^n_{\lambda^{1,m}} = (k, l)$ are roots of $2 q_j(x) + (2 + 2m)\, x_j\, q_j(x)^m + q_j(x)^{2m} = 0$, provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
Remark 1.
Except for $m = 1$, which is a case with 12 free critical points, for $m > 1$ there exist 32 free critical points of the fixed point operator associated to FMNm. In particular, the components of the free critical points of the fixed point function $\lambda^{1,m}(x)$, $m = 1, 2, \ldots, 6$, are
$(k, l)$, $k, l \in \{-3.73205, -0.267949, 1, -1\}$, for $m = 1$,
$(k, l)$, $k, l \in \{-2.11724, -1.13867, 0.191305, 0.760289, 1, -1\}$, for $m = 2$,
$(k, l)$, $k, l \in \{-1.85429, -1.2072, -0.614069, -0.14308, 1, -1\}$, for $m = 3$,
$(k, l)$, $k, l \in \{-1.74245, -1.24306, 0.112834, 0.541487, 1, -1\}$, for $m = 4$,
$(k, l)$, $k, l \in \{-1.67929, -1.26615, -0.496325, -0.0927087, 1, -1\}$, for $m = 5$,
$(k, l)$, $k, l \in \{-1.63822, -1.28266, 0.0785163, 0.464271, 1, -1\}$, for $m = 6$,
provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
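The component values above can be reproduced by solving the equation of Proposition 2(b) numerically; a short SymPy script (ours, not the authors' code):

```python
import sympy as sp

x = sp.symbols('x')
for m in range(1, 4):                                   # m = 1, 2, 3; extend to 6 if desired
    q = x**2 - 1
    eq = sp.expand(2*q + (2 + 2*m)*x*q**m + q**(2*m))   # equation of Proposition 2(b)
    roots = [r for r in sp.Poly(eq, x).nroots() if abs(sp.im(r)) < 1e-12]
    print(m, sorted(sp.re(r) for r in roots))           # real components, including ±1
```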
Figure 1 shows the dynamical behavior of the fixed point function $\lambda^{1,m}(x)$ for $m = 1, 2, \ldots, 6$. These figures have been obtained with the routines described in [16]. To draw them, a mesh of $400 \times 400$ points has been used, 200 was the maximum number of iterations involved and $10^{-3}$ the tolerance used as stopping criterion. In this paper, we use a white star to mark the roots of the nonlinear polynomial system and a white square for the free critical points. Figure 1 shows that, the greater $m$ is, the smaller the basins of attraction become, in spite of having a better approximation of the Jacobian matrix. The color assigned to each basin of attraction corresponds to one of the roots of the nonlinear system. The black area indicates no convergence within the maximum number of iterations, or divergence.
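A basin plot of this kind can be sketched directly from Proposition 2 with the parameters quoted above ($400 \times 400$ mesh, at most 200 iterations, tolerance $10^{-3}$); the plotting window $[-3, 3]^2$ and the color coding are our own choices, and this is not the routine of [16]:

```python
import numpy as np
import matplotlib.pyplot as plt

np.seterr(all="ignore")               # poles and overflow simply end up in the black region

def fmn_q(x, m):
    """Coordinate-wise fixed point operator lambda^{1,m} on q(x) (Proposition 2)."""
    qx = x**2 - 1.0
    return x - qx / (2.0 * x + qx**m)

roots = [np.array(r, dtype=float) for r in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
m, N, max_it, tol = 2, 400, 200, 1e-3     # N = 400 as in the text; reduce it for a quick test

xs = np.linspace(-3.0, 3.0, N)
basin = np.zeros((N, N))                  # 0 = no convergence (black)
for i, a in enumerate(xs):
    for j, b in enumerate(xs):
        z = np.array([a, b])
        for _ in range(max_it):
            z = fmn_q(z, m)
            d = [np.linalg.norm(z - r) for r in roots]
            k = int(np.argmin(d))
            if d[k] < tol:
                basin[j, i] = k + 1       # color index of the reached root
                break
plt.imshow(basin, extent=[-3, 3, -3, 3], origin="lower")
plt.show()
```

Points whose orbit never comes within the tolerance of a root after 200 iterations remain in the black region.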
To see the behavior of the vectorial function in the black area of the dynamical planes, we visualize the orbit of the rational function corresponding to the starting point $(-3, -3)$ after 200 iterations. This orbit appears as a yellow circle for each iterate and yellow lines between each pair of consecutive iterates. In Figure 1a, corresponding to $m = 1$, its value at the 200th iteration is $(-213.0141, -213.0141)$. Figure 1b, which corresponds to $m = 2$, shows a lower rate of divergence (or convergence to infinity), its value at the 200th iteration being $(-8.6818, -8.6818)$. This effect increases with $m$ but, for $m \geq 3$, it is observed that
$$\frac{\left\| \lambda^{1,m}(x^{(0)}) - \lambda^{1,m}(x^{(200)}) \right\|}{\left\| \lambda^{1,m+1}(x^{(0)}) - \lambda^{1,m+1}(x^{(200)}) \right\|} \approx 1.$$
The vectors on the top of Figure 1c–f correspond to the last iterate with the starting point $x^{(0)} = (-3, -3)$.
Proposition 3.
The coordinate functions of the fixed point operator $\lambda^{2,m}(x)$ associated to CMNm, for $m = 1, 2, \ldots$, and RMNm, for $m = 1, 2, \ldots$, on the polynomial system $q(x) = 0$ are
$$\lambda_j^{2}(x) = \frac{1 + x_j^2}{2 x_j}, \quad j = 1, 2,$$
which are the same components as the fixed point function of Newton's method on $q(x)$.
Since $q(x)$ is a second-degree polynomial system, approximations of the Jacobian matrix of order equal to or higher than two are exact, and the fixed point functions of the CMNm and RMNm methods coincide with that of Newton's method. In this case, $(1, 1)$, $(1, -1)$, $(-1, 1)$ and $(-1, -1)$ are superattracting fixed points and there are no strange fixed points or free critical points. Figure 2 shows the resulting dynamical plane, which coincides with that of Newton's method.
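This can be verified symbolically: for the uncoupled quadratic components $q_j(x_j) = x_j^2 - 1$ the central divided difference acts coordinate-wise and cancels to the exact derivative, so the operator reduces to Newton's one for every $m$. A short SymPy check (ours):

```python
import sympy as sp

x = sp.symbols('x')
q = x**2 - 1
for m in range(1, 5):
    g = q**m                                                 # component of Gamma(x)
    central = sp.cancel(((x + g)**2 - (x - g)**2) / (2*g))   # -> 2*x, the exact derivative
    print(m, sp.simplify(x - q/central))                     # -> (x**2 + 1)/(2*x), Newton's operator
```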

2.2. Sixth-Degree Polynomial System $r(x) = 0$

The corresponding results for the FMNm, CMNm and RMNm classes applied to the sixth-degree polynomial system $r(x) = 0$ are summarized in the following propositions. They collect, in each case, the iteration function and the fixed and critical points.
Proposition 4.
The coordinate functions of the fixed point operator $h^{1,m}(x)$ associated to FMNm on $r(x)$, for $m = 1, 2, \ldots$, are
$$h_j^{1,m}(x) = x_j - \frac{r_j(x)^{1+m}}{\left( x_j + r_j(x)^m \right)^6 - x_j^6}, \quad j = 1, 2.$$
Moreover,
(a) For $m \geq 1$, the fixed points are $(1, 1)$, $(1, -1)$, $(-1, 1)$ and $(-1, -1)$, which are also superattracting. There are no strange fixed points in this case.
(b) The coordinates of the free critical points $c^n_{h^{1,m}} = (k, l)$ are the roots of the polynomial
$$\left( x_j^6 - \left( x_j + r_j(x)^m \right)^6 \right)^2 + r_j(x)^{1+m} \left( -6 x_j^5 + 6 \left( 1 + 6 m\, x_j^5\, r_j(x)^{m-1} \right) \left( x_j + r_j(x)^m \right)^5 \right) + 6 (1 + m)\, x_j^5\, r_j(x)^m \left( x_j^6 - \left( x_j + r_j(x)^m \right)^6 \right),$$
for $j = 1, 2$ and $m = 1, 2, \ldots$, provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
Remark 2.
Except for $m = 1$, in which case the fixed point function $h^{1,m}(x)$ has 12 free critical points, the multidimensional rational function of FMNm, for $m = 2, 3, \ldots$, has 32 free critical points. The components of the free critical points in this case are
$(k, l)$, $k, l \in \{-1.3191, -0.290716, 1, -1\}$ for $m = 1$,
$(k, l)$, $k, l \in \{-1.20592, -1.01655, 0.289432, 0.978576, 1, -1\}$ for $m = 2$,
$(k, l)$, $k, l \in \{-1.17593, -1.03983, -0.934077, -0.288186, 1, -1\}$ for $m = 3$,
$(k, l)$, $k, l \in \{-1.16205, -1.0543, 0.286976, 0.897504, 1, -1\}$ for $m = 4$,
$(k, l)$, $k, l \in \{-1.15401, -1.064, -0.868592, -0.285801, 1, -1\}$ for $m = 5$,
$(k, l)$, $k, l \in \{-1.14876, -1.07101, 0.284659, 0.845166, 1, -1\}$ for $m = 6$,
provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
Figure 3 shows the dynamical planes of the fixed point function $h^{1,m}(x)$ for $m = 1, 2, \ldots, 6$. Inside the black areas, there are regions where the orbits tend toward the poles of the rational function (those points that make the denominator of the elements of the rational function null). In fact, the orbits of points in these areas get very close to the roots, where the vectorial rational function becomes numerically singular. Figure 3a,b show two points in these black areas.
Figure 4 and Figure 5 show details of the dynamical planes of the FMN2 and FMN3 methods. These figures show the difference between the basins of attraction of an odd and an even member of the family FMNm. Because of the symmetry, the transpose of the basin of attraction of $(1, -1)$ coincides with that of $(-1, 1)$, so we only depict one of them.
The analysis for the cases of central divided differences is made in the following result.
Proposition 5.
The coordinate functions of the fixed point operator $h^{2,m}(x)$ associated to CMNm, for $m = 1, 2, \ldots$, on $r(x)$ are
$$h_j^{2,m}(x) = x_j - \frac{r_j(x)}{6 x_j^5 + 20 x_j^3\, r_j(x)^{2m} + 6 x_j\, r_j(x)^{4m}}, \quad j = 1, 2.$$
Moreover,
(a) The fixed points are $(1, 1)$, $(1, -1)$, $(-1, 1)$ and $(-1, -1)$, which are also superattracting, and there are no strange fixed points.
(b) The components of the free critical points $c^n_{h^{2,m}} = (k, l)$ in this case are the roots of the polynomial $P^j_{h^{2,m}}(x)$, provided that $Q^j_{h^{2,m}} \neq 0$, being
$$P^j_{h^{2,m}}(x) = 15 x_j^{10} + 30 (3 + 4m)\, x_j^8\, r_j(x)^{2m} - 3 r_j(x)^{4m} + (221 + 72m)\, x_j^6\, r_j(x)^{4m} + 6 x_j^2\, r_j(x)^{2m} \left( -5 + 3 r_j(x)^{6m} \right) + 15 x_j^4 \left( -1 + 8 r_j(x)^{6m} \right), \quad j = 1, 2,$$
$$Q^j_{h^{2,m}} = 2 \left( 3 x_j^5 + 10 x_j^3\, r_j(x)^{2m} + 3 x_j\, r_j(x)^{4m} \right)^2, \quad j = 1, 2,$$
where $k$ and $l$ do not both belong to $\{-1, 1\}$.
Remark 3.
The fixed point rational function $h^{2,m}(x)$ has, for all $m$, 32 free critical points. The components of the free critical points in this case are
$(k, l)$, $k, l \in \{-0.424661, 0.424661, -0.984594, 0.984594, 1, -1\}$ for $m = 1$,
$(k, l)$, $k, l \in \{-0.417373, 0.417373, -0.913322, 0.913322, 1, -1\}$ for $m = 2$,
$(k, l)$, $k, l \in \{-0.410772, 0.410772, -0.864585, 0.864585, 1, -1\}$ for $m = 3$,
$(k, l)$, $k, l \in \{-0.404767, 0.404767, -0.830398, 0.830398, 1, -1\}$ for $m = 4$,
$(k, l)$, $k, l \in \{-0.399279, 0.399279, -0.804535, 0.804535, 1, -1\}$ for $m = 5$,
$(k, l)$, $k, l \in \{-0.394237, 0.394237, -0.783908, 0.783908, 1, -1\}$ for $m = 6$,
provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
Central divided differences show very stable behavior. All the free critical points are inside the basins of attraction of the roots, which means that no behavior other than convergence to the roots (or divergence) is possible. In fact, the basins of attraction of the roots in this case are much bigger than those of the forward divided difference. The black areas near the boundaries of the basins of attraction are regions of slow convergence or divergence. Figure 6a,b show the slow convergence of the point $(-1.5, 1.5)$ towards $(-1, 1)$; let us observe that, for $m = 1$, the point $(-1.5, 1.5)$ is closer to the boundary, so its speed is higher than for $m = 2$. The behavior of the function $h^{2,m}(x)$ in the black area between the basins of attraction and near the $x$ and $y$ axes is similar for $m = 1, 2, \ldots$; in fact, by choosing points in this area, the iteration is slowly divergent (see Figure 6c,d).
Let us remark that, although this scheme is quite stable, the basins of attraction of the roots are much smaller than those of Newton's method on $r(x)$, where there are no black regions.
Finally, the following result gives us information about the stability of the sixth-degree system when higher-order estimations of the Jacobian are made.
Proposition 6.
The coordinate functions of the fixed point operator $h^{3,m}(x)$ associated to RMNm on $r(x)$, for $m = 1, 2, \ldots$, are
$$h_j^{3,m}(x) = x_j - \frac{2 r_j(x)}{12 x_j^5 - 3 x_j\, r_j(x)^{4m}}, \quad j = 1, 2.$$
(a) For $m = 1, 2, \ldots$, the fixed points are $(1, 1)$, $(1, -1)$, $(-1, 1)$ and $(-1, -1)$, which are also superattracting, and there are no strange fixed points in this case.
(b) The components of the free critical points $c^n_{h^{3,m}} = (k, l)$ are the roots of the polynomial
$$P^j_{h^{3,m}}(x) = -40 x_j^4 + 40 x_j^{10} + 2 r_j(x)^{4m} - 2 (7 + 24m)\, x_j^6\, r_j(x)^{4m} + 3 x_j^2\, r_j(x)^{8m},$$
for $j = 1, 2$ and $m = 1, 2, \ldots$, provided that $4 x_j^5 - x_j\, r_j(x)^{4m} \neq 0$ and $k$ and $l$ do not both belong to $\{-1, 1\}$.
Remark 4.
The fixed point function $h^{3,m}(x)$ on $r(x)$ has, for all $m$, 60 free critical points. The components of the free critical points, for $m = 1, 2, \ldots, 6$, are
$(k, l)$, $k, l \in \{-0.467908, 0.467908, -1.1047, 1.1047, -1.23916, 1.23916, 1, -1\}$ for $m = 1$,
$(k, l)$, $k, l \in \{-0.446836, 0.446836, -1.10721, 1.10721, -1.18013, 1.18013, 1, -1\}$ for $m = 2$,
$(k, l)$, $k, l \in \{-0.431784, 0.431784, -1.16201, 1.16201, -1.10964, 1.10964, 1, -1\}$ for $m = 3$,
$(k, l)$, $k, l \in \{-0.420071, 0.420071, -1.15298, 1.15298, -1.11137, 1.11137, 1, -1\}$ for $m = 4$,
$(k, l)$, $k, l \in \{-0.41049, 0.41049, -1.1475, 1.1475, -1.11265, 1.11265, 1, -1\}$ for $m = 5$,
$(k, l)$, $k, l \in \{-0.402394, 0.402394, -1.14379, 1.14379, -1.11364, 1.11364, 1, -1\}$ for $m = 6$,
provided that $k$ and $l$ do not both belong to $\{-1, 1\}$.
In this case, we obtain approximations of order $4, 8, 12, \ldots$ for $m = 1, 2, 3, \ldots$, respectively; nevertheless, the basins of attraction are smaller than those of the central divided difference (and, of course, than those of Newton's method). Figure 7 shows the dynamical planes for $m = 1, 2, 3, 4$. As in the previous cases, in the black area the convergence is very slow (or the method diverges).
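Since $r(x)$ is uncoupled, the central and Richardson divided differences act coordinate-wise, and the denominators stated in Propositions 5 and 6 can be verified symbolically; a SymPy sketch (ours):

```python
import sympy as sp

x = sp.symbols('x')
r = x**6 - 1
D = lambda t: sp.cancel(((x + t)**6 - (x - t)**6) / (2*t))   # symmetric difference on r_j
for m in range(1, 4):
    g = r**m
    central    = D(g)                                        # denominator in Proposition 5
    richardson = sp.cancel((4*D(g/2) - D(g)) / 3)            # half the denominator in Proposition 6
    print(m,
          sp.simplify(central - (6*x**5 + 20*x**3*r**(2*m) + 6*x*r**(4*m))),   # 0
          sp.simplify(richardson - (12*x**5 - 3*x*r**(4*m)) / 2))              # 0
```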

2.3. Second-Degree Polynomial System $p(x) = 0$

The results for the FMNm, CMNm and RMNm classes applied to the second-degree polynomial system $p(x) = 0$ are stated in the following propositions. They collect, in each case, the iteration function and the fixed and critical points.
Proposition 7.
The coordinate functions of the fixed point operator $k^{1,m}(x)$ associated to FMNm on $p(x)$, for $m = 1, 2, \ldots$, are
$$k_1^{1,m}(x) = x_1 - \frac{\left( 2 x_2 + p_2(x)^m \right) p_1(x) + p_2(x)}{\left( 2 x_1 + p_1(x)^m \right) \left( 2 x_2 + p_2(x)^m \right) - 1}, \qquad k_2^{1,m}(x) = x_2 - \frac{p_1(x) + \left( 2 x_1 + p_1(x)^m \right) p_2(x)}{\left( 2 x_1 + p_1(x)^m \right) \left( 2 x_2 + p_2(x)^m \right) - 1}, \qquad m = 1, 2, \ldots,$$
obtained by applying the divided difference operator (3) with $y = x + \Gamma(x)$, $\Gamma(x) = (p_1(x)^m, p_2(x)^m)^T$ (a symbolic check of this divided difference matrix is sketched after the proposition).
Moreover,
(a) For $m \geq 1$, the only fixed points are the roots $(-1, 0)$, $(0, -1)$, $\left( \tfrac{1 - \sqrt{5}}{2}, \tfrac{1 - \sqrt{5}}{2} \right)$ and $\left( \tfrac{1 + \sqrt{5}}{2}, \tfrac{1 + \sqrt{5}}{2} \right)$, which are superattracting.
(b) For $m = 1$, the free critical points are $(0.632026, 0.460233)$, $(0.460233, 0.632026)$, $(0.203287, 0.294659)$ and $(0.294659, 0.203287)$.
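The compact coefficients in Proposition 7 can be checked by building the divided difference matrix of definition (3) on $p(x)$ with $y = x + \Gamma(x)$ and simplifying; a SymPy sketch (ours):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p1, p2 = x1**2 - x2 - 1, x2**2 - x1 - 1
for m in range(1, 4):
    y1, y2 = x1 + p1**m, x2 + p2**m
    # operator (3): column j mixes the arguments (y_1, ..., y_j, x_{j+1}, ..., x_n)
    A = sp.Matrix([
        [((y1**2 - x2 - 1) - (x1**2 - x2 - 1)) / (y1 - x1),
         ((y1**2 - y2 - 1) - (y1**2 - x2 - 1)) / (y2 - x2)],
        [((x2**2 - y1 - 1) - (x2**2 - x1 - 1)) / (y1 - x1),
         ((y2**2 - y1 - 1) - (x2**2 - y1 - 1)) / (y2 - x2)],
    ])
    target = sp.Matrix([[2*x1 + p1**m, -1], [-1, 2*x2 + p2**m]])
    print(m, sp.simplify(A - target))        # zero matrix for each m
```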
The calculation of the free critical points of $k^{1,m}(x)$ for $m > 1$ is very complicated, so it has been provided only for $m = 1$. Figure 8 shows the dynamical planes of the fixed point function $k^{1,m}(x)$ for $m = 1, 2, 3, 4$. Results similar to those for the polynomial systems $q(x) = 0$ and $r(x) = 0$ are obtained in this case: as $m$ increases, the basins of attraction shrink. To study the behavior of the vectorial function in the black area of the dynamical planes, we visualize the diverging orbits of $k^{1,m}(x)$ with the starting point $(0, 3)$ after 200 iterations. The vectors on the top of Figure 8a–d show the last vectorial iterate with the starting point $(0, 3)$.
The following proposition shows that, using CMNm, for $m = 1, 2, \ldots$, and RMNm, for $m = 1, 2, \ldots$, on the polynomial system $p(x) = 0$, we obtain, as was the case for the system $q(x) = 0$, the same fixed point function as classical Newton's method.
Proposition 8.
The coordinate functions of the fixed point operator $k^{2,m}(x)$ (denoted by $k^2(x)$) associated to CMNm, for $m = 1, 2, \ldots$, and RMNm, for $m = 1, 2, \ldots$, on the polynomial system $p(x)$ are
$$k_1^{2}(x) = \frac{1 + 2 (1 + x_1^2) x_2 + x_2^2}{-1 + 4 x_1 x_2}, \qquad k_2^{2}(x) = \frac{1 + x_1^2 + 2 x_1 (1 + x_2^2)}{-1 + 4 x_1 x_2},$$
which are the same components as the fixed point function of Newton's method on $p(x)$. Moreover, the fixed points of $k^2(x)$ are the roots $(-1, 0)$, $(0, -1)$, $\left( \tfrac{1 - \sqrt{5}}{2}, \tfrac{1 - \sqrt{5}}{2} \right)$ and $\left( \tfrac{1 + \sqrt{5}}{2}, \tfrac{1 + \sqrt{5}}{2} \right)$, which are superattracting. There are no strange fixed points nor free critical points in this case.
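Newton's fixed point operator on $p(x)$, and hence the expressions in Proposition 8, can be recovered with a few lines of SymPy (our own check):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
p = sp.Matrix([x1**2 - x2 - 1, x2**2 - x1 - 1])
X = sp.Matrix([x1, x2])
k2 = sp.simplify(X - p.jacobian(X).inv() * p)   # Newton's fixed point operator on p
print(k2[0])   # equals k_1^2 of Proposition 8, up to algebraic rearrangement
print(k2[1])   # equals k_2^2 of Proposition 8, up to algebraic rearrangement
```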
Figure 9 shows the dynamical plane of the fixed point function $k^2(x)$, corresponding to the CMNm and RMNm methods for any $m = 1, 2, \ldots$, as well as to their original partner, Newton's scheme.

3. Conclusions

In this paper, several Jacobian-free variants of Newton's method have been introduced, by using forward, central and Richardson divided differences based on an element-by-element power of the nonlinear function $F(x)$, $\Gamma(x) = (f_1(x)^m, f_2(x)^m, \ldots, f_n(x)^m)^T$. As far as we know, these Jacobian-free variants of Newton's method had not been analyzed until now. We conclude that better estimations of the Jacobian do not always yield greater stability. In fact, the best scheme in terms of numerical efficiency and wideness of the sets of converging initial points is CMNm. This central-difference method does not need to calculate and evaluate the Jacobian matrix, as Newton's method does, and provides similar basins of attraction. Although Richardson's method reaches good convergence results, it has a computational cost that discourages its use.

Author Contributions

The contribution of the authors to this manuscript can be defined as: conceptualization, A.C. and J.R.T.; methodology, M.T.D.; software, A.A.; formal analysis, M.T.D.; investigation, A.A.; writing—original draft preparation, A.A.; writing—review and editing, A.C. and J.R.T.; supervision, A.C. and J.R.T.

Funding

This research was partially supported by Spanish Ministerio de Ciencia, Innovación y Universidades PGC2018-095896-B-C22 and Generalitat Valenciana PROMETEO/2016/089.

Acknowledgments

The authors would like to thank the anonymous reviewers for their constructive comments and suggestions that have improved the final version of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
3. Grau-Sánchez, M.; Noguera, M.; Amat, S. On the approximation of derivatives using divided difference operators preserving the local convergence order of iterative methods. J. Comput. Appl. Math. 2013, 237, 363–372.
4. Amiri, A.R.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Preserving the order of convergence: Low-complexity Jacobian-free iterative schemes for solving nonlinear systems. J. Comput. Appl. Math. 2018, 337, 87–97.
5. Amiri, A.R.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. Stability analysis of Jacobian-free iterative methods for solving nonlinear systems by using families of mth power divided differences. J. Math. Chem. 2019, 57, 1344–1373.
6. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 3–35.
7. Neta, B.; Chun, C.; Scott, M. Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2014, 227, 567–592.
8. Geum, Y.H.; Kim, Y.I.; Neta, B. A class of two-point sixth-order multiple-zero finders of modified double-Newton type and their dynamics. Appl. Math. Comput. 2015, 270, 387–400.
9. Amat, S.; Busquier, S.; Magreñán, Á.A. Reducing Chaos and Bifurcations in Newton-Type Methods. In Abstract and Applied Analysis; Hindawi: London, UK, 2013.
10. Magreñán, Á.A.; Argyros, I.K. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: New York, NY, USA, 2019.
11. Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412.
12. Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110.
13. García Calcines, J.M.; Gutiérrez, J.M.; Hernández Paricio, L.J.; Rivas Rodríguez, M.T. Graphical representations for the homogeneous bivariate Newton's method. Appl. Math. Comput. 2015, 269, 988–1006.
14. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional stability analysis of a family of biparametric iterative methods. J. Math. Chem. 2017, 55, 1461–1480.
15. Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
16. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameter planes of iterative families and methods. Sci. World J. 2013, 2013.
Figure 1. Dynamical planes for FMNm, $m = 1, 2, \ldots, 6$, on $q(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $q(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 2. Dynamical plane of CMNm, RMNm and Newton's method on $q(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $q(x)$ (marked as white stars).
Figure 3. Dynamical planes of the FMNm method for $m = 1, 2, \ldots, 6$ on $r(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $r(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 4. Basins of attraction of $(1, 1)$, $(1, -1)$ and $(-1, -1)$ for the FMN2 method on $r(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $r(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 5. Basins of attraction of $(1, 1)$, $(1, -1)$ and $(-1, -1)$ for the FMN3 method on $r(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $r(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 6. Dynamical planes for the CMNm method, $m = 1, 2, 3, 4$, on $r(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $r(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 7. Dynamical planes for the RMNm method, $m = 1, 2, 3, 4$, on $r(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $r(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 8. Dynamical planes for the FMNm method, $m = 1, 2, 3, 4$, on $p(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $p(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
Figure 9. Dynamical plane of CMNm, RMNm and Newton's method on $p(x)$. Green, orange, red and blue areas are the basins of attraction of the roots of $p(x)$ (marked as white stars); the black area denotes divergence and white squares are free critical points.
