Article

A New High-Order Jacobian-Free Iterative Method with Memory for Solving Nonlinear Systems

Ramandeep Behl 1, Alicia Cordero 2, Juan R. Torregrosa 2,* and Sonia Bhalla 3

1 Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah 21589, Saudi Arabia
2 Institute for Multidisciplinary Mathematics, Universitat Politècnica de València, 46022 València, Spain
3 Department of Mathematics, Chandigarh University, Mohali 140413, India
* Author to whom correspondence should be addressed.
Mathematics 2021, 9(17), 2122; https://doi.org/10.3390/math9172122
Submission received: 27 July 2021 / Revised: 24 August 2021 / Accepted: 27 August 2021 / Published: 1 September 2021
(This article belongs to the Special Issue Application of Iterative Methods for Solving Nonlinear Equations)

Abstract: We used a Kurchatov-type accelerator to construct an iterative method with memory for solving nonlinear systems, with sixth-order convergence. It was developed from an initial scheme without memory, with order of convergence four. There exist few multidimensional schemes using more than one previous iterate in the recent literature, and most of them have low orders of convergence. The proposed scheme showed its efficiency and robustness in several numerical tests, including large nonlinear systems, where it was compared with existing procedures of high order of convergence. In addition, we show that the proposed scheme has very stable qualitative behavior, by means of the analysis of an associated multidimensional real rational function and also by means of a comparison of its basins of attraction with those of the comparison methods.

1. Introduction

New and efficient iterative techniques are needed for obtaining the solution ξ of a system of nonlinear equations of the form

F(x) = 0,   (1)

where F: Ω ⊆ R^n → R^n, Ω being an open convex set. Such systems appear in scientific, engineering and many other models (details can be found in the articles [1,2,3,4,5]).
The search for a solution ξ ∈ R^n of the system F(x) = 0 is a much more complex problem than finding a solution of a scalar equation f(x) = 0. As in the scalar case, we can transform the original system into an equivalent one of the form

x = G(x),

for a given vector function G: R^n → R^n, whose coordinate functions we denote by g_i, i = 1, 2, ..., n. Starting from an initial approximation x^{(0)} = (x_1^{(0)}, x_2^{(0)}, ..., x_n^{(0)})^T, we can generate a sequence of vectors of R^n by means of the iterative formula

x^{(j+1)} = G(x^{(j)}), j = 0, 1, ...,   (2)

where x^{(j)} = (x_1^{(j)}, x_2^{(j)}, ..., x_n^{(j)})^T ∈ R^n.
We say that the process is convergent if {x^{(j)}} → ξ when j → ∞; then, ξ = (ξ_1, ξ_2, ..., ξ_n)^T ∈ R^n will be, under certain conditions on the function G, a solution of the system x = G(x). The vector ξ is called a fixed point of the function G, and the algorithm described by Equation (2) is a fixed-point method.
A very important concept for iterative methods is their order of convergence, which provides a measure of the speed of convergence. Let {x^{(j)}}_{j≥0} be a sequence of vectors of R^n tending to the solution ξ of the nonlinear system as j tends to infinity. The convergence of this sequence is said to be
(i) linear, if there exist M, 0 < M < 1, and j_0 ∈ N such that ||x^{(j+1)} - ξ|| ≤ M ||x^{(j)} - ξ|| for all j ≥ j_0;
(ii) of order p, if there exist M > 0 and j_0 ∈ N such that ||x^{(j+1)} - ξ|| ≤ M ||x^{(j)} - ξ||^p for all j ≥ j_0.
This definition is independent of the norm of R^n used.
We denote by e^{(j)} = x^{(j)} - ξ the error of the j-th iteration, and we call error equation the expression e^{(j+1)} = L (e^{(j)})^p + O((e^{(j)})^{p+1}), where L is a p-linear function, L ∈ L(R^n × ... × R^n, R^n), and p is the order of convergence of the method.
The best-known fixed-point iterative method is Newton's scheme:

x^{(j+1)} = x^{(j)} - [F'(x^{(j)})]^{-1} F(x^{(j)}), j = 0, 1, ...,

where F' denotes the Jacobian matrix of the operator F. However, there are many practical situations where the Jacobian matrix is computationally expensive to evaluate, or requires a great deal of time to be given or calculated. Therefore, derivative-free methods are quite popular for finding the roots of nonlinear equations and systems of nonlinear equations.
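For reference, the following is a minimal sketch of Newton's scheme in Python with NumPy; the test system, the tolerance and the function names are our own choices for illustration:

```python
import numpy as np

def newton(F, Jac, x0, tol=1e-12, max_iter=100):
    # Newton's scheme: solve F'(x) dx = -F(x) and update x <- x + dx.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(Jac(x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Test system: F(x, y) = (x^2 + y^2 - 1, x - y), with a root at (1/sqrt(2), 1/sqrt(2)).
F = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[0] - v[1]])
J = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [1.0, -1.0]])
print(newton(F, J, [1.0, 0.5]))  # ~ [0.70710678, 0.70710678]
```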
The first multidimensional derivative-free method was proposed by Samanskii in [6], by replacing the Jacobian matrix F' with a divided difference operator:

x^{(j+1)} = x^{(j)} - [x^{(j)} + F(x^{(j)}), x^{(j)}; F]^{-1} F(x^{(j)}), j = 0, 1, ...

This scheme keeps the quadratic order of convergence of Newton's procedure. It is the vectorial extension of the scalar Steffensen's method.
Later on, Traub defined a class of iterative methods (known as the Traub-Steffensen family) [7], given by

x^{(j+1)} = x^{(j)} - [u^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}), j = 0, 1, ...,   (3)

where u^{(j)} = x^{(j)} + β F(x^{(j)}). The class (3) can be easily recovered from Newton's well-known method [7] by replacing the Jacobian matrix with the operator [u^{(j)}, x^{(j)}; F] ≈ F'(x^{(j)}). Let us remark that, for the particular value β = 1 in expression (3), the deduced scheme is Samanskii's one.
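Family (3) can be sketched in a few lines of NumPy. The coordinate-wise first-order divided difference used below is one standard realization of [y, x; F] (see, e.g., [5]); the helper names and the default β = 0.01 are our choices, and no safeguard against coinciding coordinates is included:

```python
import numpy as np

def divided_difference(F, y, x):
    # Coordinate-wise first-order divided difference [y, x; F] as an n x n matrix:
    # column j uses F evaluated with the first j components taken from y.
    y, x = np.asarray(y, dtype=float), np.asarray(x, dtype=float)
    n = x.size
    M = np.empty((n, n))
    for j in range(n):
        u = np.concatenate([y[:j + 1], x[j + 1:]])
        v = np.concatenate([y[:j], x[j:]])
        M[:, j] = (F(u) - F(v)) / (y[j] - x[j])  # assumes y_j != x_j
    return M

def traub_steffensen(F, x0, beta=0.01, tol=1e-12, max_iter=100):
    # Family (3): x <- x - [u, x; F]^{-1} F(x), with u = x + beta * F(x).
    # beta = 1 recovers Samanskii's (vectorial Steffensen's) method.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        u = x + beta * F(x)
        dx = np.linalg.solve(divided_difference(F, u, x), -F(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x
```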
In recent years, different scalar iterative schemes with memory have been designed (a good overview can be found in [8]), mostly derivative-free ones. These have been constructed with increasing orders of convergence and, therefore, with increasing computational complexity. In terms of stability, some researchers compared the size of the set of initial points converging to the same attractor, using complex discrete dynamics techniques. In [9], the authors observed that seventh-order iterative schemes with memory showed better stability properties than many optimal eighth-order procedures without memory. This graphical comparison was subsequently used by different authors; see, for example, the works of Wang et al. [10] and Cordero et al. [11] in 2016, or the investigations of Bakhtiari et al. [12] and Howk et al. [13] in the following years.
Regarding nonlinear vectorial problems, some methods with memory have been developed that improve the convergence rate of Steffensen's method or Steffensen-type methods at the expense of additional evaluations of vector functions, divided differences, or changes in the iteration points. In past and recent years, a few high-order multi-point extensions of Steffensen's method or Steffensen-type methods have been proposed and analyzed in the available literature [14,15,16,17] for solving nonlinear systems of equations. All these modifications aim at increasing the local order of convergence, and thereby the efficiency index, since they usually do not involve new functional evaluations. Therefore, these constructions occasionally possess a better order of convergence and efficiency index, but there are very few iterative schemes of this kind in the literature, due in part to their recent development and also to the difficulty of their design and convergence analysis.
In 2020, Chicharro et al. [18] proposed such an extension, which is given by

w^{(j)} = x^{(j)} - [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F]^{-1} F(x^{(j)}),
x^{(j+1)} = x^{(j)} - [w^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}),   (4)

and its higher-order version is

w^{(j)} = x^{(j)} - [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F]^{-1} F(x^{(j)}),
y^{(j)} = x^{(j)} - [w^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}),
x^{(j+1)} = y^{(j)} - [w^{(j)}, y^{(j)}; F]^{-1} F(y^{(j)}).   (5)

These versions have third and fifth order of convergence, respectively. An extension of this type, based on Kurchatov's divided difference operator, was first reported by Chicharro et al. in [18].
The authors developed in [19] a technique that, using multidimensional real discrete dynamics tools, is able to analyze the stability of iterative schemes with memory, not only in graphical terms, but essentially in analytical terms. Using this technique, the stability of the fixed and critical points of the secant, Steffensen's and Kurchatov's methods (among others) was studied in [19]. It was also used to analyze other procedures, such as those described in [20], the one defined by Choubey et al. in [21] and those by Chicharro et al. in [22,23,24].
The aim of this work was to produce two new schemes, without and with memory, of orders four and six, respectively. Our scheme is also based on Kurchatov's divided difference operator. However, it does not only have a higher order of convergence than the recent scheme (5): it achieves this without any additional functional evaluation of F or F' (the Jacobian matrix of F) and without another iterative substep. We also provide a deep analysis of the suggested schemes regarding the order of convergence (Sections 2 and 3) and their stability properties, constructing an associated multidimensional discrete dynamical system. In this way, the good performance in terms of convergence to the searched roots and the wideness of their basins of attraction is proven in Section 4, both on polynomial and on non-polynomial functions. In addition, in Section 5 we compare our methods with recent existing methods of similar iterative structure on several numerical problems. On the basis of the results, we found that our methods perform better than the existing ones in terms of residual errors, the difference between two consecutive iterations, and the stability of the computational order of convergence. Finally, some conclusions and the references used bring this manuscript to an end.

2. Construction and Convergence of New Iterative Schemes

Combining the Traub-Steffensen family of methods with a second step involving different divided-difference operators, we propose the class of iterative schemes described as

y^{(j)} = x^{(j)} - [u^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}),
x^{(j+1)} = y^{(j)} - [y^{(j)}, x^{(j)}; F]^{-1} [u^{(j)}, x^{(j)}; F] [u^{(j)}, y^{(j)}; F]^{-1} F(y^{(j)}),   (6)

where u^{(j)} = x^{(j)} + β F(x^{(j)}), β ∈ R. In order to analyze the convergence of scheme (6), we need the definition of the divided difference operator as well as its Taylor expansion (more details can be found in [5]).
Lemma 1.
Suppose F: Ω ⊆ R^n → R^n is k-times Fréchet differentiable in an open convex set Ω. Then, for any x, h ∈ R^n, the following expression holds:

F(x + h) = F(x) + F'(x)h + (1/2!) F''(x)h^2 + (1/3!) F'''(x)h^3 + ... + (1/(k-1)!) F^{(k-1)}(x)h^{k-1} + R_k,   (7)

where

||R_k|| ≤ sup_{0≤t≤1} ||F^{(k)}(x + th)|| ||h||^k and h^k = (h, h, ...(k times)..., h).
Now, we can obtain the following Taylor series expansion of the divided difference operator, by adopting the Genocchi-Hermite formula [5]:

[x + h, x; F] = ∫_0^1 F'(x + th) dt, for all x, h ∈ R^n.

Then, we have

[x + h, x; F] = ∫_0^1 F'(x + th) dt = F'(x) + (1/2!) F''(x)h + (1/3!) F'''(x)h^2 + (1/4!) F^{(4)}(x)h^3 + O(h^4).   (8)
Now, we are in a position to analyze the convergence order of the proposed scheme (6), as we can see in the following result. In it, I denotes the identity matrix of size n × n, and F^{(k)}(x) can be considered as a k-linear operator:

F^{(k)}(x): R^n × ... × R^n → R^n, k = 1, 2, ...

To deepen one's understanding of the concepts of Taylor expansion in several variables, we suggest references [5,25].
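As a quick numerical sanity check of expansion (8), the sketch below evaluates the Genocchi-Hermite integral by Gauss-Legendre quadrature and compares it against the truncated expansion F'(x) + (1/2!)F''(x)h for a quadratic map, where the series terminates; the map, the point and the step are arbitrary choices of ours:

```python
import numpy as np

# Quadratic test map F(v) = (v1^2 + v2, v1*v2) and its Jacobian F'.
F  = lambda v: np.array([v[0]**2 + v[1], v[0] * v[1]])
Fp = lambda v: np.array([[2.0 * v[0], 1.0], [v[1], v[0]]])

def dd_integral(x, h, m=8):
    # Genocchi-Hermite form: [x+h, x; F] = \int_0^1 F'(x + t h) dt.
    t, w = np.polynomial.legendre.leggauss(m)
    t, w = 0.5 * (t + 1.0), 0.5 * w  # map nodes/weights from [-1, 1] to [0, 1]
    return sum(wi * Fp(x + ti * h) for ti, wi in zip(t, w))

x, h = np.array([0.7, 0.3]), np.array([0.1, -0.2])
D = dd_integral(x, h)
# Secant property: [x+h, x; F] h = F(x+h) - F(x).
print(np.linalg.norm(D @ h - (F(x + h) - F(x))))   # ~ 1e-16
# For quadratic F, (8) terminates: D = F'(x) + (1/2) F''(x) h exactly, where
# (F''(x) h) has rows (H_i h)^T, H_i being the Hessians of the components of F.
half = 0.5 * np.array([[2.0 * h[0], 0.0], [h[1], h[0]]])
print(np.linalg.norm(D - (Fp(x) + half)))          # ~ 1e-16
```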
Theorem 1.
Let F: Ω ⊆ R^n → R^n be a sufficiently differentiable function in an open convex neighborhood Ω of a zero ξ of F. Suppose that F'(x) is continuous and nonsingular at x = ξ, and that the initial guess x^{(0)} is close enough to ξ. Then, the iterative schemes defined by (6) have fourth order of convergence for every β ≠ 0. They satisfy the following error equation:

e^{(j+1)} = T_2 (I + β F'(ξ)) (2T_2^2 - T_3) (I + β F'(ξ)) (e^{(j)})^4 + O((e^{(j)})^5),

where e^{(j)} = x^{(j)} - ξ, j = 1, 2, ..., and T_k = (1/k!) [F'(ξ)]^{-1} F^{(k)}(ξ), k = 2, 3, 4, ...
Proof. 
Let e^{(j)} = x^{(j)} - ξ be the error of the j-th iteration, ξ ∈ R^n being a solution of F(x) = 0. Then, developing F(x^{(j)}) and its derivatives in a neighborhood of ξ, we have

F(x^{(j)}) = F'(ξ)[e^{(j)} + T_2 (e^{(j)})^2 + T_3 (e^{(j)})^3 + T_4 (e^{(j)})^4 + T_5 (e^{(j)})^5] + O((e^{(j)})^6),   (9)

F'(x^{(j)}) = F'(ξ)[I + 2T_2 e^{(j)} + 3T_3 (e^{(j)})^2 + 4T_4 (e^{(j)})^3 + 5T_5 (e^{(j)})^4] + O((e^{(j)})^5),   (10)

F''(x^{(j)}) = F'(ξ)[2T_2 + 6T_3 e^{(j)} + 12T_4 (e^{(j)})^2 + 20T_5 (e^{(j)})^3] + O((e^{(j)})^4),   (11)

and

F'''(x^{(j)}) = F'(ξ)[6T_3 + 24T_4 e^{(j)} + 60T_5 (e^{(j)})^2] + O((e^{(j)})^3).   (12)

By adopting expressions (8)-(12), we obtain

[u^{(j)}, x^{(j)}; F] = F'(x^{(j)}) + (1/2!) F''(x^{(j)})(u^{(j)} - x^{(j)}) + (1/3!) F'''(x^{(j)})(u^{(j)} - x^{(j)})^2 + (1/4!) F^{(4)}(x^{(j)})(u^{(j)} - x^{(j)})^3 + O((u^{(j)} - x^{(j)})^4)
= F'(ξ)[I + (2T_2 + β T_2 F'(ξ)) e^{(j)} + (3T_3 + β T_2 F'(ξ) T_2 + 3β T_3 F'(ξ) + β^2 T_3 F'(ξ)^2)(e^{(j)})^2 + (β T_2 F'(ξ) T_3 + 3β T_3 F'(ξ) T_2 + 6β T_4 F'(ξ) + β^2 T_3 F'(ξ) T_2 + β^2 T_3 F'(ξ) T_2 F'(ξ) + 4β^2 T_4 F'(ξ)^2 + 4T_4)(e^{(j)})^3 + O((e^{(j)})^4)].   (13)
The inverse of [u^{(j)}, x^{(j)}; F] is given by

[u^{(j)}, x^{(j)}; F]^{-1} = [I + Z_1 e^{(j)} + Z_2 (e^{(j)})^2 + O((e^{(j)})^3)] F'(ξ)^{-1},   (14)

where

Z_1 = -2T_2 - β T_2 F'(ξ),
Z_2 = -3T_3 + β T_2 F'(ξ) T_2 + 2β T_2^2 F'(ξ) - 3β T_3 F'(ξ) - β^2 T_3 F'(ξ)^2 + β^2 T_2 F'(ξ) T_2 F'(ξ) + 4T_2^2.
Using expressions (9) and (14), we get

[u^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}) = e^{(j)} - T_2 (I + β F'(ξ)) (e^{(j)})^2 - [2T_3 - 2T_2^2 - 2β T_2^2 F'(ξ) + 3β T_3 F'(ξ) + β^2 T_3 F'(ξ)^2 - β^2 T_2 F'(ξ) T_2 F'(ξ)] (e^{(j)})^3 + O((e^{(j)})^4).   (15)

Now, using expression (15) in the first substep of our proposed scheme (6), we have

y^{(j)} - ξ = T_2 (I + β F'(ξ)) (e^{(j)})^2 + [2T_3 - 2T_2^2 - 2β T_2^2 F'(ξ) + 3β T_3 F'(ξ) + β^2 T_3 F'(ξ)^2 - β^2 T_2 F'(ξ) T_2 F'(ξ)] (e^{(j)})^3 + O((e^{(j)})^4).   (16)
In a similar fashion as in expression (13), we can develop F(y^{(j)}) and its derivatives in a neighborhood of ξ, obtaining

[u^{(j)}, y^{(j)}; F] = F'(y^{(j)}) + (1/2!) F''(y^{(j)})(u^{(j)} - y^{(j)}) + (1/3!) F'''(y^{(j)})(u^{(j)} - y^{(j)})^2 + (1/4!) F^{(4)}(y^{(j)})(u^{(j)} - y^{(j)})^3 + O((u^{(j)} - y^{(j)})^4)
= F'(ξ)[I + T_2 (I + β F'(ξ)) e^{(j)} + (2T_2^2 (I + β F'(ξ)) + T_2 (β F'(ξ) T_2 - T_2 - β T_2 F'(ξ)) + T_3 (I + β F'(ξ))^2)(e^{(j)})^2 + O((e^{(j)})^3)].   (17)
By using the property [u^{(j)}, y^{(j)}; F]^{-1} [u^{(j)}, y^{(j)}; F] = I, we obtain

[u^{(j)}, y^{(j)}; F]^{-1} = [I - T_2 (I + β F'(ξ)) e^{(j)} + (-2T_2^2 (I + β F'(ξ)) - T_2 (β F'(ξ) T_2 - T_2 - β T_2 F'(ξ)) - T_3 (I + β F'(ξ))^2 + T_2 (I + β F'(ξ)) T_2 (I + β F'(ξ)))(e^{(j)})^2 + O((e^{(j)})^3)] F'(ξ)^{-1},   (18)
and similarly, we have

[y^{(j)}, x^{(j)}; F]^{-1} = [I - T_2 e^{(j)} - (β T_2^2 F'(ξ) + T_3)(e^{(j)})^2 + O((e^{(j)})^3)] F'(ξ)^{-1}.   (19)
By using expressions (14) and (17)-(19) in the second substep of our scheme (6), the error equation of scheme (6) becomes:

e^{(j+1)} = T_2 (I + β F'(ξ)) (2T_2^2 - T_3) (I + β F'(ξ)) (e^{(j)})^4 + O((e^{(j)})^5).   (20)

Therefore, (6) is a parametric family of fourth-order iterative methods. □
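A minimal NumPy sketch of one iteration of scheme (6) follows, reusing the divided_difference helper defined in Section 1 (our coordinate-wise realization of the operator; no safeguards are included, so it fails if two argument points share a coordinate):

```python
import numpy as np

def step6(F, x, beta=0.01):
    # One step of scheme (6):
    #   y  = x - [u, x; F]^{-1} F(x),  with  u = x + beta * F(x),
    #   x+ = y - [y, x; F]^{-1} [u, x; F] [u, y; F]^{-1} F(y).
    u = x + beta * F(x)
    A = divided_difference(F, u, x)   # [u, x; F]
    y = x - np.linalg.solve(A, F(x))
    B = divided_difference(F, y, x)   # [y, x; F]
    C = divided_difference(F, u, y)   # [u, y; F]
    return y - np.linalg.solve(B, A @ np.linalg.solve(C, F(y)))
```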

3. Extension to a Higher-Order Scheme with Memory

In this section, we construct a new scheme with memory based on our scheme (6), without using any new functional evaluations. It is straightforward to see from the error Equation (20) that we can obtain a higher order of convergence by choosing β = -[F'(ξ)]^{-1}. However, the required solution ξ is unknown, and we want to increase the order of convergence without additional evaluations of the vector function or the Jacobian matrix. Therefore, we use Kurchatov's divided difference operator [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F], one of the most efficient such operators, to approximate F'(ξ):

F'(ξ) ≈ [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F].   (21)

Then, we define

B := A^{(j)} = -[2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F]^{-1} ≈ -F'(ξ)^{-1}.

Hence, from our scheme (6) we can deduce the following iterative method with memory:

y^{(j)} = x^{(j)} - [u^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}),
x^{(j+1)} = y^{(j)} - [y^{(j)}, x^{(j)}; F]^{-1} [u^{(j)}, x^{(j)}; F] [u^{(j)}, y^{(j)}; F]^{-1} F(y^{(j)}),   (22)

where u^{(j)} = x^{(j)} - [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F]^{-1} F(x^{(j)}).
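A sketch of scheme (22) in NumPy follows, again reusing the divided_difference helper from Section 1; the function name pm6 and the stopping rule are ours:

```python
import numpy as np

def pm6(F, x_prev, x, max_iter=10, tol=1e-13):
    # Scheme (22): beta is replaced by the matrix -[2x - x_old, x_old; F]^{-1},
    # built from Kurchatov's divided difference; no Jacobian is ever evaluated.
    # x_prev plays the role of x^{(j-1)}: two starting guesses are required.
    for _ in range(max_iter):
        K = divided_difference(F, 2.0 * x - x_prev, x_prev)  # Kurchatov operator
        u = x - np.linalg.solve(K, F(x))                     # u = x + A F(x)
        A = divided_difference(F, u, x)                      # [u, x; F]
        y = x - np.linalg.solve(A, F(x))
        B = divided_difference(F, y, x)                      # [y, x; F]
        C = divided_difference(F, u, y)                      # [u, y; F]
        x_prev, x = x, y - np.linalg.solve(B, A @ np.linalg.solve(C, F(y)))
        if np.linalg.norm(F(x)) < tol:
            break
    return x
```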
In the following result, we analyze the convergence order of scheme (22) with memory, which we denote by PM6.
Theorem 2.
Let F: Ω ⊆ R^n → R^n be a sufficiently differentiable function in an open convex neighborhood Ω of a zero ξ of F. Suppose that F'(x) is continuous and nonsingular at ξ, and that the initial guesses x^{(-1)} and x^{(0)} are close enough to the required solution ξ. Then, the iterative scheme defined by (22) has sixth order of convergence.
Proof. 
In a similar way as in expressions (13) and (17), we expand [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F] in a neighborhood of ξ as

[2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F] = F'(x^{(j-1)}) + (1/2!) F''(x^{(j-1)}) [2(e^{(j)} - e^{(j-1)})] + (1/3!) F'''(x^{(j-1)}) [2(e^{(j)} - e^{(j-1)})]^2 + O((e^{(j-1)}, e^{(j)})^3)
= F'(ξ)[I + 2T_2 e^{(j)} - 2T_3 e^{(j-1)} e^{(j)} + T_3 (e^{(j-1)})^2 + 4T_3 (e^{(j)})^2] + O((e^{(j-1)}, e^{(j)})^3),   (23)

where O((e^{(j-1)}, e^{(j)})^3) stands for terms in which the sum of the exponents of e^{(j-1)} and e^{(j)} is at least 3. The inverse of [2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F] is given by

[2x^{(j)} - x^{(j-1)}, x^{(j-1)}; F]^{-1} = [I - 2T_2 e^{(j)} + 2T_3 e^{(j-1)} e^{(j)} - T_3 (e^{(j-1)})^2 + 4(T_2^2 - T_3)(e^{(j)})^2 + O((e^{(j-1)}, e^{(j)})^3)] F'(ξ)^{-1},   (24)

which further yields

I + A^{(j)} F'(ξ) = 2T_2 e^{(j)} - 2T_3 e^{(j-1)} e^{(j)} + T_3 (e^{(j-1)})^2 - 4(T_2^2 - T_3)(e^{(j)})^2 + O((e^{(j-1)}, e^{(j)})^3).   (25)

Therefore, it is clear that I + A^{(j)} F'(ξ) ∼ e^{(j)}; adopting this value in expression (20), with β replaced by A^{(j)}, we have

e^{(j+1)} ∼ e^{(j)} · e^{(j)} · (e^{(j)})^4 = (e^{(j)})^6.   (26)

Hence, the proof is complete, and the proposed scheme with memory (22) has sixth-order convergence. □

4. A Qualitative Study of Iterative Methods with Memory: New and Known

In this section, we analyze the stability of the proposed scheme with memory in its scalar form, since this kind of study already yields multidimensional operators to be analyzed in the scalar case, and it does not support the direct use of vectorial schemes. The performance of the scheme on systems of nonlinear equations is checked in Section 5, where it is compared with other known schemes with memory.
The expression of a scalar fixed-point iterative method with memory, using two previous iterates to calculate the following estimation, is

x_{j+1} = Φ(x_{j-1}, x_j), j ≥ 1,

x_0 and x_1 being the starting estimations. We use the technique presented in [19,20] to describe any method with memory as a discrete real vectorial dynamical system, in order to analyze its qualitative behavior.
In order to calculate the fixed points of an iterative method with iteration function Φ, an auxiliary multidimensional fixed-point function Γ: R^2 → R^2 can be defined, related to Φ by means of

Γ(x_{j-1}, x_j) = (x_j, x_{j+1}) = (x_j, Φ(x_{j-1}, x_j)), j = 1, 2, ...,

x_0 and x_1 being, again, the initial estimations. Therefore, a fixed point of this operator is obtained when not only x_{j+1} = x_j, but also x_{j-1} = x_j. From the function Γ: R^2 → R^2, a discrete dynamical system in R^2 is thus defined; the fixed points (z, x) of Γ satisfy z = x and x = Φ(z, x), where z and x play the roles of x_{j-1} and x_j, respectively. In what follows, we recall some basic dynamical concepts that are direct extensions of those used in complex discrete dynamics analysis (see [26]).
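The following minimal Python sketch illustrates this embedding: a scalar method with memory Φ becomes the planar map Γ(z, x) = (x, Φ(z, x)). The secant method on p(x) = x^2 - 2 is used purely as an illustration (our choice, not a method studied in this paper):

```python
# Embed a one-step-memory scalar iteration x_{j+1} = Phi(x_{j-1}, x_j)
# as the planar dynamical system Gamma(z, x) = (x, Phi(z, x)).
def make_gamma(phi):
    return lambda z, x: (x, phi(z, x))

p = lambda x: x * x - 2.0
phi_secant = lambda z, x: x - p(x) * (x - z) / (p(x) - p(z))  # secant step
gamma = make_gamma(phi_secant)

z, x = 1.0, 2.0             # the two starting estimations x_0, x_1
for _ in range(8):
    z, x = gamma(z, x)      # orbit of (x_0, x_1) under Gamma
print(z, x)                 # both components tend to the fixed point sqrt(2)
```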
Let us consider a vectorial rational function Γ: R^2 → R^2, usually obtained by applying an iterative method to a scalar polynomial p(x). Then, a fixed point (z, x) of the operator Γ that is different from (r, r), where r is a zero of p(x), is called a strange fixed point. On the other hand, the orbit of a point x̄ ∈ R^2 is defined as the set of its successive images by the vector function, that is, orbit(x̄) = {x̄, Γ(x̄), ..., Γ^n(x̄), ...}. Moreover, a point x* ∈ R^2 is called k-periodic if Γ^k(x*) = x* and Γ^p(x*) ≠ x* for p = 1, 2, ..., k - 1.
The qualitative performance of a point of R^2 is classified depending on its asymptotic behavior. Thus, the stability of fixed points of vectorial operators satisfies the statements appearing in the following result (see, for instance, [27]).
Theorem 3.
Let Γ from R^m to R^m be of class C^2. Assume x* is a k-periodic point. Let λ_1, λ_2, ..., λ_m be the eigenvalues of the Jacobian matrix Γ'(x*). Then, it holds that
(a) If all the eigenvalues λ_j verify |λ_j| < 1, then x* is attracting.
(b) If one eigenvalue λ_{j_0} verifies |λ_{j_0}| > 1, then x* is unstable, that is, repelling or saddle.
(c) If all the eigenvalues λ_j verify |λ_j| > 1, then x* is repelling.
Moreover, a fixed point is said to be hyperbolic if all the eigenvalues λ_j of Γ'(x*) satisfy |λ_j| ≠ 1. Specifically, if there exist an eigenvalue λ_i satisfying |λ_i| < 1 and another one λ_j such that |λ_j| > 1, the fixed point is called a saddle point.
There is a key difference between the study of the stability of a fixed point x* in scalar and in vectorial dynamics. In the scalar case, if |Γ'(x*)| < 1, then x* is attracting (in particular, it is superattracting if |Γ'(x*)| = 0), and it is repelling when |Γ'(x*)| > 1, Γ being the scalar rational function related to the iterative scheme on a low-degree polynomial p(x). In the vectorial case, the character of the fixed points is determined by means of the eigenvalues of the Jacobian matrix Γ' (see Theorem 3). Nevertheless, sometimes the Jacobian is not well defined at the fixed points. In these cases, we impose on the rational operator Γ the condition that all its arguments are equal (z = x), so that it reduces to a real-valued function; the stability of the fixed point can then be inferred from the absolute value of the first derivative of this function at the fixed point.
By considering an attracting fixed point x* of the function Γ, we define its basin of attraction A(x*) as the set of its preimages of any order:

A(x*) = {x_0 ∈ R^2 : Γ^m(x_0) → x*, m → ∞}.

A key element in the stability analysis of an iterative method is the set of critical points of its associated rational function Γ: if Γ'(x) satisfies det(Γ'(x)) = 0, then x is called a critical point. A critical point (c, c) such that c is not a root of p(x) is called a free critical point. Another way to obtain critical points is to find those points that make the eigenvalues of Γ' null; as an extension of the scalar case, if they are not composed of roots of the polynomial p(x), they are named free critical points. Indeed, Julia and Fatou [26] proved that there is at least one critical point associated with each basin of attraction. Therefore, by studying the orbits of the free critical points, all the attracting elements can be found. This result is valid for both complex and real iterative functions.
In this section, we analyze the performance on quadratic polynomials of three different schemes with memory: our proposed scheme (22), denoted by PM6; the known scheme (5), taken from Chicharro et al. [18] and denoted by AM5, which uses Kurchatov's divided difference in order to introduce memory; and the scheme

y^{(j)} = x^{(j)} - [u^{(j)}, x^{(j)}; F]^{-1} F(x^{(j)}),
x^{(j+1)} = y^{(j)} - (3I - G^{(j)} (3I - G^{(j)})) [u^{(j)}, x^{(j)}; F]^{-1} F(y^{(j)}),

where G^{(j)} = [u^{(j)}, x^{(j)}; F]^{-1} [y^{(j)} + c F(y^{(j)}), y^{(j)}; F], u^{(j)} = x^{(j)} + γ F(x^{(j)}) and γ = -[u^{(j-1)}, x^{(j-1)}; F]^{-1}. This scheme is due to Petković and Sharma [15]; we denote it by SM4.45 and use it with c = 0.01.
These schemes have several similarities: they are vectorial schemes with memory, and they include divided difference operators in their iterative expressions, mainly used to define their respective accelerating parameters. However, we are going to see that the use of these elements does not by itself determine the wideness of the sets of initial estimations converging to the roots, when real multidimensional discrete dynamics tools are used.
In order to extend the results to any quadratic polynomial, the first analysis is shown for PM6 on p(x) = x^2 - c, so that the value of c yields a situation with real, complex or multiple roots for c > 0, c < 0 or c = 0, respectively. The multidimensional rational function Γ in this particular case will be denoted in what follows by PM. This analysis can be summarized in the following result.
Theorem 4.
The multidimensional rational operator associated with the proposed scheme PM6, when it is applied on the polynomial p(x) = x^2 - c, c ≠ 0, is

PM(z, x) = ( x, (c^5 + 15c^4 x^2 + 130c^3 x^4 + 214c^2 x^6 + 141c x^8 + 11x^10) / (4c^4 x + 56c^3 x^3 + 192c^2 x^5 + 200c x^7 + 60x^9) ),

and it is

PM(z, x) = ( x, 11x/60 )

for c = 0. Moreover, PM satisfies:
(a) There are no strange fixed points.
(b) If c < 0, there are eight different components of free critical points, defined as ±√(s_i), i = 1, 2, 3, 4, s_i being the (real) roots of the polynomial s(t) = c^4 + 32c^3 t + 210c^2 t^2 + 360c t^3 + 165t^4. If c > 0, there are no free critical points.
Proof. 
Let us remark that the operator PM(z, x) can be obtained by directly applying the method PM6 to the polynomial p(x). Moreover, we know that the fixed points of PM must have equal components. This is the reason why, when we force consecutive iterates to be equal (x = z), the only fixed points are those composed of the roots x = ±√c, that is, (√c, √c) and (-√c, -√c).
Regarding the critical points, the Jacobian matrix PM' is

PM'(z, x) = [ 0, 1 ; 0, ((x^2 - c)^5 (c^4 + 32c^3 x^2 + 210c^2 x^4 + 360c x^6 + 165x^8)) / (4x^2 (c^4 + 14c^3 x^2 + 48c^2 x^4 + 50c x^6 + 15x^8)^2) ],

with eigenvalues 0 and ((x^2 - c)^5 (c^4 + 32c^3 x^2 + 210c^2 x^4 + 360c x^6 + 165x^8)) / (4x^2 (c^4 + 14c^3 x^2 + 48c^2 x^4 + 50c x^6 + 15x^8)^2).
By definition, the components of the critical points are those values making the eigenvalues of PM' null. By using the change of variables t = x^2 in the second factor of the numerator of the non-null eigenvalue, we get s(t) = c^4 + 32c^3 t + 210c^2 t^2 + 360c t^3 + 165t^4. This polynomial only has real roots for c < 0, denoted by s_i, i = 1, 2, 3, 4. Then, if c < 0, there exist eight different components of free critical points, (±√(s_i), ±√(s_j)), i, j ∈ {1, 2, 3, 4}. □
A very useful tool to visualize the analytical results is the dynamical plane of the system, composed of the set of the different basins of attraction. It can be drawn by means of the programs presented in [28], after some changes to adapt them to schemes with memory. The dynamical plane of a method is built by calculating the orbit of a mesh of 400 × 400 starting points (z, x) (although z does not appear in the rational function PM) and painting each of them in a different color (orange and green in this case) depending on the attractor they converge to (marked with a white star), with a tolerance of 10^{-3}. Additionally, points appear in black if their orbit has not reached any attracting fixed point in a maximum of 80 iterations. In Figure 1, we show the dynamical planes of this method for selected values of c, in order to show its performance.
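The sketch below reproduces this procedure with NumPy and Matplotlib for the operator PM of Theorem 4; the mesh size, tolerance and iteration cap follow the text, while the color encoding is simplified and the value c = 4 is our choice:

```python
import numpy as np
import matplotlib.pyplot as plt

c = 4.0                                            # real simple roots at +/- 2
num = lambda x: (c**5 + 15*c**4*x**2 + 130*c**3*x**4 + 214*c**2*x**6
                 + 141*c*x**8 + 11*x**10)
den = lambda x: (4*c**4*x + 56*c**3*x**3 + 192*c**2*x**5
                 + 200*c*x**7 + 60*x**9)

N, lim, tol, itmax = 400, 4.0, 1e-3, 80
grid = np.linspace(-lim, lim, N)
img = np.zeros((N, N))                             # 0 stays black: no convergence
for i, x0 in enumerate(grid):                      # vertical axis: x
    for j, z0 in enumerate(grid):                  # horizontal axis: z (unused by PM,
        x = x0                                     # kept to mirror the (z, x) mesh)
        for _ in range(itmax):
            d = den(x)
            if d == 0.0:
                break
            x = num(x) / d                         # PM(z, x) = (x, R(x))
            if abs(x - np.sqrt(c)) < tol:  img[i, j] = 1; break
            if abs(x + np.sqrt(c)) < tol:  img[i, j] = 2; break
plt.imshow(img, extent=[-lim, lim, -lim, lim], origin="lower")
plt.xlabel("z"); plt.ylabel("x"); plt.show()
```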
Let us remark that, since by definition all the fixed points have equal components, they always appear on the bisector of the first and third quadrants of the dynamical plane. It can be observed that, when there are no real roots (c < 0, Figure 1a), no other attracting element appears; when c = 0, the only root is multiple and the convergence is linear, so there is global convergence to x = 0, as can be seen in Figure 1b. In Figure 1c, the convergence to the roots is also observed to be global, their basins of attraction being two symmetrical half-planes, which is exactly the same behavior as that of Newton's method on quadratic polynomials.
Moreover, let us remark that when c > 0 (the case of real simple roots), there are no free critical points, so the only possible behavior of the method is convergence to the roots. The reason is that in each basin of attraction there must be a critical point; if the only critical points are the roots of the corresponding basins, then no other convergence is possible.

Comparisons with Other Methods with Memory for Nonlinear Problems

Here we compare the stability of the proposed method PM6 with that of the known AM5 and SM4.45 schemes, firstly showing their performance on quadratic polynomials. The dynamical planes are plotted in the complex plane, starting the iterative methods with memory with an initial value of the accelerating parameter of 0.01 in each case, for any initial guess z ∈ C defined in a mesh of 400 × 400 points, and with a maximum of 80 iterations.
As shown in Figure 2 and Figure 3, all three methods have been used to estimate the complex second and third roots of unity. It can be observed that the performances of PM6 and AM5 were very similar for quadratic polynomials, showing global convergence, similar to that of Newton's scheme.
However, this global convergence is kept on cubic polynomials in the case of PM6, whereas in the case of AM5, black areas of no convergence to the roots appear. The iterative method SM4.45 shows these black regions both for quadratic and cubic polynomials, and they are bigger in the cubic case.
This good performance of the set of converging initial estimations of the proposed method PM6 is also shown in the case of non-polynomial equations. Let us notice the performance of all the known and new methods on the complex rational function f(z) = (z^2 - 1)/(z^2 + 1) + 1, z ∈ C, with a zero at ξ = 0. The basins of attraction of the root appearing in Figure 4, being similar, are wider in the case of PM6 than for AM5 or SM4.45.
Additionally, in Figure 5 the basin of attraction of ξ ≈ 0.25753, the simple root of g(z) = z^2 - exp(z) - 3z + 2, is very wide in the case of PM6, with small areas of no convergence to the root, in comparison with those of the known methods AM5 and SM4.45.
Thus, it can be concluded that the proposed method PM6 has a very stable performance, both in real and in complex spaces, and both for polynomial and non-polynomial functions, in the scalar case. In spite of using the same accelerating parameter as the AM5 method, the new scheme has proven to be better than the known ones in terms of order of convergence and also in terms of stability. In the next section, its numerical performance on nonlinear systems of different sizes is demonstrated.

5. Numerical Experiments

This section shows the validity of the proposed scheme with memory on some numerical problems. The proposed method (22) was applied to these problems and compared with the existing techniques with memory AM5 [18] and SM4.45 [15]. In all the numerical problems, the parameter was initialized as β = B^{(0)} = 0.01 I, I being the identity matrix; for the method SM4.45, the additional parameter was taken as c = 0.01. All the numerical tests were conducted using Mathematica 10, with 400 digits of mantissa in multiple-precision arithmetic. For all the examples, we include in the respective tables the following information: the differences ||x^{(j+1)} - x^{(j)}||, j = 0, 1, 2; the residuals ||F(x^{(j)})||, j = 1, 2, 3; the approximated computational order of convergence (ACOC) [29], denoted by ρ and defined as

ρ = ln( ||x^{(j+1)} - x^{(j)}|| / ||x^{(j)} - x^{(j-1)}|| ) / ln( ||x^{(j)} - x^{(j-1)}|| / ||x^{(j-1)} - x^{(j-2)}|| ), for each j = 2, 3, ...;

and the CPU time.
Further on, the iterative procedure was stopped after three iterations, and the problems were tested on three different initial values. Notice that b(±a) stands for b × 10^{±a} in all the tables.
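A minimal NumPy helper computing the ACOC ρ from a stored list of iterates (the function name is ours):

```python
import numpy as np

def acoc(iterates):
    # iterates = [x0, x1, x2, x3, ...]; returns the ACOC estimates rho_j, j >= 2.
    d = [np.linalg.norm(iterates[k + 1] - iterates[k])
         for k in range(len(iterates) - 1)]
    return [np.log(d[k] / d[k - 1]) / np.log(d[k - 1] / d[k - 2])
            for k in range(2, len(d))]
```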
Example 1.
Consider the following system of nonlinear equations in four unknowns:

x_2 x_3 + x_4 (x_2 + x_3) = 0,
x_1 x_3 + x_4 (x_1 + x_3) = 0,
x_1 x_2 + x_4 (x_1 + x_2) = 0,
x_1 x_2 + x_1 x_3 + x_2 x_3 = 1.

The approximate solution is (0.57735, 0.57735, 0.57735, -0.28867)^T. Table 1 shows that the proposed scheme, for all the different initial guesses, converged to the solution much faster than the methods AM5 and SM4.45. Clearly, the residual error, the functional error and the computational time of the proposed method PM6 were superior to those of AM5 and SM4.45.
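A short sketch of this example in NumPy, reusing the pm6 and divided_difference helpers defined above; the auxiliary second starting point required by the memory scheme is our choice:

```python
import numpy as np

def F1(v):
    x1, x2, x3, x4 = v
    return np.array([x2 * x3 + x4 * (x2 + x3),
                     x1 * x3 + x4 * (x1 + x3),
                     x1 * x2 + x4 * (x1 + x2),
                     x1 * x2 + x1 * x3 + x2 * x3 - 1.0])

x0 = np.array([1.0, 1.0, 1.0, 1.0])   # first initial guess of Table 1
x_prev = x0 + 0.01                    # auxiliary previous iterate (our choice)
print(pm6(F1, x_prev, x0))            # expected ~ (0.57735, 0.57735, 0.57735, -0.28868)
```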
Example 2.
Consider another, 200 × 200, system of nonlinear equations, defined as:

x_i x_{i+1} - 1 = 0, i = 1, 2, ..., n - 1,
x_n x_1 - 1 = 0, n = 200.

The exact solution of this problem is (1, 1, ..., 1)^T. This system of nonlinear equations was examined with the initial guesses (1.1, ..., 1.1)^T, (0.3, ..., 0.3)^T and (0.8, ..., 0.8)^T; the results are shown in Table 2. The proposed technique performed better than the existing scheme AM5, whereas the scheme SM4.45 diverged for the initial guess (0.3, ..., 0.3)^T. In the case of a large nonlinear system of equations, the proposed method demonstrated an efficient order of convergence and a short CPU time when compared with the other schemes.
Example 3.
Next, we consider the transcendental system of equations shown below:

-2 Σ_{j=1}^{100} x_j^2 + 2x_i^2 + tan^{-1}(x_i) + 1 = 0, i = 1, 2, ..., 100.

This problem has an approximate solution (0.0736323, 0.0736323, ..., 0.0736323)^T. The performance of the proposed method was better than those of the other methods, as shown in Table 3.
Example 4.
We also tested the proposed method on another well-known nonlinear problem, Fisher's equation [30], which has many applications in chemistry, heat and mass transfer, biology and ecology. Basically, it describes the process of interaction between diffusion and reaction. This nonlinear problem, with homogeneous Neumann boundary conditions and diffusion coefficient D, can be defined as:

u_t = D u_{xx} + u(1 - u),
u(x, 0) = 1.5 + 0.5 cos(πx), 0 ≤ x ≤ 1,
u_x(0, t) = 0, t ≥ 0,
u_x(1, t) = 0, t ≥ 0.   (31)

Applying a finite difference discretization to Problem (31) leads to a system of nonlinear equations. Suppose w_{i,j} = u(x_i, t_j) is its approximate solution at the grid points of the mesh. Let M and N be the numbers of steps in the x and t directions, and h and k the respective step sizes. We use the central difference u_{xx}(x_i, t_j) ≈ (w_{i+1,j} - 2w_{i,j} + w_{i-1,j}) / h^2 to approximate the second-order partial derivative, the backward difference u_t(x_i, t_j) ≈ (w_{i,j} - w_{i,j-1}) / k for the first-order derivative with respect to t, and the forward difference u_x(x_i, t_j) ≈ (w_{i+1,j} - w_{i,j}) / h for the first-order derivative with respect to x. The solution of the system is obtained by taking M = 21 steps along the x-axis and N = 21 steps along the t-axis, which forms a nonlinear system of size 400, with the initial vector x^{(0)} = (i/(M - 1)^2)^T, i = 1, 2, ..., M - 1. The results computed by the different methods are shown in Table 4. It can be noticed that the lowest execution time and residual error at the third iteration correspond to the proposed method PM6. Moreover, the approximate solution is plotted in Figure 6.
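The sketch below illustrates the assembly of one backward-Euler time step of (31) as a nonlinear system F(w) = 0, solved with the pm6 helper defined above. This is a simplification of the example (one time level at a time, rather than the full 400-unknown space-time system of the text), and D = 1, k = 0.01 and the auxiliary starting point are our assumptions:

```python
import numpy as np

D, M = 1.0, 21                              # diffusion coefficient (assumed) and x-nodes
h, k = 1.0 / (M - 1), 0.01                  # space and time step sizes (k is assumed)
xg = np.linspace(0.0, 1.0, M)
w_old = 1.5 + 0.5 * np.cos(np.pi * xg)      # initial condition u(x, 0)

def F_step(w):
    # Backward Euler in t, central differences in x; Neumann conditions
    # enforced by one-sided differences at the boundary nodes.
    r = np.empty(M)
    r[1:-1] = ((w[1:-1] - w_old[1:-1]) / k
               - D * (w[2:] - 2.0 * w[1:-1] + w[:-2]) / h**2
               - w[1:-1] * (1.0 - w[1:-1]))
    r[0] = (w[1] - w[0]) / h                # u_x(0, t) = 0
    r[-1] = (w[-1] - w[-2]) / h             # u_x(1, t) = 0
    return r

w = pm6(F_step, w_old + 1e-2, w_old.copy())  # solution at the first time level
print(w[:5])
```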

6. Conclusions

The number of iterative schemes with memory for solving multidimensional problems is low, partly due to the difficulty of the task, and partly due to the lack of efficiency of the resulting methods when the usual techniques employed in the design of scalar schemes with memory, such as high-degree interpolation polynomials, are used. With the procedure used here, the iterative expression of the scheme with memory remains simple, and the order of the original method is increased by 50%; thus, the efficiency is greatly improved. Moreover, it has been proven, by means of the associated multidimensional discrete dynamical system, that it is a very stable scheme, with wide basins of attraction and global convergence on quadratic polynomials. Its performance on other nonlinear functions was also found to be very stable in comparison with other known schemes. The numerical tests confirmed these results, even for large nonlinear systems and applied problems, such as Fisher's partial differential equation.

Author Contributions

Conceptualization, R.B.; methodology, A.C.; software, S.B.; validation, J.R.T.; formal analysis, R.B.; investigation, J.R.T.; writing—original draft preparation, S.B.; writing—review and editing, A.C.; supervision, J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Burden, R.L.; Faires, J.D. Numerical Analysis; PWS Publishing Company: Boston, MA, USA, 2001.
2. Grosan, C.; Abraham, A. A new approach for solving nonlinear equations systems. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 698–714.
3. Moré, J.J. A collection of nonlinear model problems. In Computational Solution of Nonlinear Systems of Equations; Lectures in Applied Mathematics; Allgower, E.L., Georg, K., Eds.; American Mathematical Society: Providence, RI, USA, 1990; Volume 26, pp. 723–762.
4. Tsoulos, I.G.; Stavrakoudis, A. On locating all roots of systems of nonlinear equations inside bounded domain using global optimization methods. Nonlinear Anal. Real World Appl. 2010, 11, 2465–2471.
5. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
6. Samanskii, V. On a modification of the Newton method. Ukrain. Math. 1967, 19, 133–138.
7. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
8. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for the Solution of Nonlinear Equations; Elsevier: Amsterdam, The Netherlands, 2012.
9. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Torregrosa, J.R. An efficient two-parametric family with memory for nonlinear equations. Numer. Algorithms 2015, 68, 323–335.
10. Wang, X.; Zhang, T.; Qin, Y. Efficient two-step derivative-free iterative methods with memory and their dynamics. Int. J. Comput. Math. 2016, 93, 1423–1446.
11. Cordero, A.; Lotfi, T.; Torregrosa, J.R.; Assari, P.; Taher-Khani, S. Some new bi-accelerator two-point methods for solving nonlinear equations. J. Comput. Appl. Math. 2016, 35, 251–267.
12. Bakhtiari, P.; Cordero, A.; Lotfi, T.; Mahdiani, K.; Torregrosa, J.R. Widening basins of attraction of optimal iterative methods for solving nonlinear equations. Nonlinear Dyn. 2017, 87, 913–938.
13. Howk, C.L.; Hueso, J.L.; Martínez, E.; Teruel, C. A class of efficient high-order iterative methods with memory for nonlinear equations and their dynamics. Math. Meth. Appl. Sci. 2018, 41, 7263–7282.
14. Sharma, J.R.; Arora, H. Efficient higher order derivative-free multipoint methods with and without memory for systems of nonlinear equations. Int. J. Comput. Math. 2018, 95, 920–938.
15. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems of nonlinear equations. Numer. Algor. 2016, 71, 457–474.
16. Narang, M.; Bathia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with memory for solving systems of nonlinear equations. Comput. Appl. Math. 2019, 352, 23–39.
17. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Iterative methods with memory for solving systems of nonlinear equations using a second order approximation. Mathematics 2019, 7, 1069.
18. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104.
19. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715.
20. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King's family of iterative methods with memory. Comput. Appl. Math. 2017, 318, 504–514.
21. Choubey, N.; Cordero, A.; Jaiswal, J.P.; Torregrosa, J.R. Dynamical techniques for analyzing iterative schemes with memory. Complexity 2018, 2018.
22. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Stability and applicability of iterative methods with memory. J. Math. Chem. 2019, 57, 1282–1300.
23. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the choice of the best members of the Kim family and the improvement of its convergence. Math. Meth. Appl. Sci. 2020, 43, 8051–8066.
24. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. Impact on stability by the use of memory in Traub-type schemes. Mathematics 2020, 8, 274.
25. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Alg. 2010, 55, 87–99.
26. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. AMS 1984, 11, 85–141.
27. Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
28. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing dynamical and parameters planes of iterative families and methods. Sci. World J. 2013, 2013.
29. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
30. Sauer, T. Numerical Analysis, 2nd ed.; Pearson: Boston, MA, USA, 2012.
Figure 1. Dynamical planes of the PM6 method on p(x), for different values of c.
Figure 2. Complex dynamical planes of the new and known methods on x^2 - 1.
Figure 3. Complex dynamical planes of the new and known methods on x^3 - 1.
Figure 4. Complex dynamical planes of the new and known methods on f(z) = (z^2 - 1)/(z^2 + 1) + 1, with ξ = 0.
Figure 5. Complex dynamical planes of the new and known methods on g(z) = z^2 - exp(z) - 3z + 2, with ξ ≈ 0.25753.
Figure 6. Approximate solution of Fisher's equation, t ∈ [0, 3].
Table 1. Convergence behavior of the schemes for Example 1.

x(0)               Scheme   ||x(1)-x(0)||  ||x(2)-x(1)||  ||x(3)-x(2)||  ||F(x(1))||  ||F(x(2))||  ||F(x(3))||  ρ      CPU Time
(1, 1, 1, 1)^T     PM6      8.4(-1)        2.7(-11)       1.0(-70)       1.8(-1)      5.4(-11)     2.1(-70)     6.040  0.422
                   AM5      2.1(-1)        1.7(-7)        2.4(-40)       4.6(-1)      3.5(-7)      4.9(-40)     5.395  0.549
                   SM4.45   7.9(-2)        1.3(-8)        6.9(-42)       1.7(-1)      2.7(-8)      1.4(-41)     4.917  0.577
(0.5, ..., 0.5)^T  PM6      1.6(-3)        1.9(-24)       2.4(-152)      3.2(-3)      3.8(-24)     4.7(-152)    6.112  0.626
                   AM5      1.5(-2)        4.8(-16)       5.7(-86)       3.0(-2)      9.6(-16)     1.1(-85)     5.176  1.641
                   SM4.45   3.6(-4)        2.2(-21)       1.8(-99)       7.2(-4)      4.3(-21)     3.6(-99)     4.534  26.58
(0.4, ..., 0.4)^T  PM6      3.3(-2)        1.6(-14)       8.3(-91)       6.4(-2)      3.1(-14)     1.7(-90)     6.185  0.532
                   AM5      1.3(-1)        5.1(-10)       1.9(-54)       2.5(-1)      1.0(-9)      3.7(-54)     5.282  0.621
                   SM4.45   1.6(-1)        2.1(-8)        8.5(-42)       3.1(-1)      4.3(-8)      1.7(-41)     4.859  0.682
Table 2. Convergence behavior of the schemes for Example 2.

x(0)               Scheme   ||x(1)-x(0)||  ||x(2)-x(1)||  ||x(3)-x(2)||  ||F(x(1))||  ||F(x(2))||  ||F(x(3))||  ρ      CPU Time
(1.1, ..., 1.1)^T  PM6      2.9(-4)        6.4(-29)       7.3(-177)      5.8(-4)      1.3(-28)     1.5(-176)    6.000  143.215
                   AM5      3.2(-3)        5.1(-19)       5.2(-98)       6.4(-3)      1.0(-18)     1.0(-97)     5.000  313.076
                   SM4.45   1.8(-4)        3.4(-22)       2.8(-101)      3.5(-4)      6.9(-22)     5.6(-101)    4.465  180.219
(0.3, ..., 0.3)^T  PM6      7.5            4.2(-3)        6.4(-22)       19.0         8.5(-3)      1.3(-21)     6.000  107.329
                   AM5      3.9            3.2(-3)        5.3(-19)       6.8          6.4(-3)      1.1(-18)     5.000  120.001
                   SM4.45   d              d              d              d            d            d            d      d
(0.8, ..., 0.8)^T  PM6      1.0(-2)        1.1(-19)       2.1(-121)      2.0(-2)      2.2(-19)     4.3(-121)    5.999  92.843
                   AM5      4.0(-2)        1.7(-13)       2.1(-70)       8.1(-2)      3.4(-13)     4.1(-70)     5.000  93.905
                   SM4.45   3.3(-3)        3.0(-16)       2.5(-75)       6.5(-3)      5.9(-16)     5.0(-75)     4.529  191.609

(d stands for divergence.)
Table 3. Convergence behavior of the schemes for Example 3.

x(0)               Scheme   ||x(1)-x(0)||  ||x(2)-x(1)||  ||x(3)-x(2)||  ||F(x(1))||  ||F(x(2))||  ||F(x(3))||  ρ      CPU Time
(0.3, ..., 0.3)^T  PM6      6.6(-2)        2.2(-8)        4.6(-44)       2.0          6.3(-7)      1.3(-42)     5.511  638.578
                   AM5      8.4(-2)        7.6(-7)        3.9(-31)       2.5          2.1(-5)      1.1(-29)     3.558  726.233
                   SM4.45   5.2(-1)        8.6(-3)        3.7(-10)       23.0         2.4(-1)      1.0(-8)      4.134  759.5
(0.1, ..., 0.1)^T  PM6      7.4(-4)        5.7(-20)       2.9(-98)       2.1(-2)      1.6(-18)     8.1(-97)     4.859  677.171
                   AM5      2.7(-3)        3.5(-14)       1.4(-59)       7.6(-2)      9.8(-13)     4.0(-58)     4.167  685.561
                   SM4.45   2.4(-3)        1.4(-12)       9.7(-54)       6.9(-2)      4.1(-11)     2.7(-52)     4.461  736.844
(0.5, ..., 0.5)^T  PM6      5.4(-1)        1.0(-4)        4.4(-19)       21.0         2.9(-2)      1.2(-17)     5.653  897.173
                   AM5      5.5(-1)        2.5(-3)        2.5(-14)       22.0         7.1(-3)      7.1(-13)     4.694  809.123
                   SM4.45   1.4            1.3(-1)        4.5(-5)        86.0         3.9          1.3(-3)      3.352  1204.64
Table 4. Convergence behavior of the schemes for Example 4.

Scheme   ||x(1)-x(0)||  ||x(2)-x(1)||  ||x(3)-x(2)||  ||F(x(1))||  ||F(x(2))||  ||F(x(3))||  ρ       CPU Time
PM6      2.8            1.5(-7)        1.9(-51)       2.4(-2)      3.9(-5)      8.9(-49)     6.0413  1333.85
AM5      2.7            5.0(-6)        3.0(-35)       1.9(-2)      9.2(-5)      1.3(-32)     5.0929  1415.90
SM4.45   4.9            5.0(-5)        5.3(-27)       5.7(-2)      1.5(-2)      2.6(-24)     4.3948  3606.19