Article

Convergence and Stability of a New Parametric Class of Iterative Processes for Nonlinear Systems

by Alicia Cordero 1,*, Javier G. Maimó 2, Antmel Rodríguez-Cabral 2,3 and Juan R. Torregrosa 1

1 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera, s/n, 46022 Valencia, Spain
2 Instituto Tecnológico de Santo Domingo (INTEC), Av. Los Próceres 49, Santo Domingo 10602, Dominican Republic
3 Escuela de Matemáticas, Universidad Autónoma de Santo Domingo (UASD), Ciudad Universitaria, Av. Alma Mater, Santo Domingo 10105, Dominican Republic
* Author to whom correspondence should be addressed.
Algorithms 2023, 16(3), 163; https://doi.org/10.3390/a16030163
Submission received: 8 February 2023 / Revised: 10 March 2023 / Accepted: 14 March 2023 / Published: 16 March 2023
(This article belongs to the Special Issue 2022 and 2023 Selected Papers from Algorithms Editorial Board Members)

Abstract:
In this manuscript, we carry out a study on the generalization of a known family of multipoint scalar iterative processes for approximating the solutions of nonlinear systems. The convergence analysis of the proposed class under various smoothness conditions is provided. We also study the stability of this family, analyzing the fixed and critical points of the rational operator resulting from applying the family to low-degree polynomials, as well as the basins of attraction and the orbits (periodic or not) that these points produce. This dynamical study also allows us to observe which members of the family are more stable and which show chaotic behavior. Graphical analyses of dynamical planes, parameter lines and bifurcation planes are also presented. Numerical tests are performed on different nonlinear systems in order to check the theoretical results and to compare the proposed schemes with other known ones.

1. Introduction

Many papers deal with methods and families of iterative schemes to approximate the solution of nonlinear equations $f(x)=0$, where $f: I \subseteq \mathbb{R} \rightarrow \mathbb{R}$ is a function defined on an open interval $I$. Each of them behaves differently in terms of order of convergence, stability and efficiency. Among the existing methods in the literature, in the present manuscript we focus on the family of iterative processes (ACTV) for approximating the solutions of nonlinear equations, proposed by Artidiello et al. in [1]. This family was constructed by adding a Newton step to Ostrowski's scheme and using a divided difference operator. The family thus has a three-step iterative expression with an arbitrary complex parameter $\alpha$, and its order of convergence is at least six. Its iterative expression is
$$
\begin{aligned}
y_k &= x_k - \frac{f(x_k)}{f'(x_k)},\\
z_k &= y_k - \frac{f(y_k)}{2f[x_k,y_k]-f'(x_k)},\\
x_{k+1} &= z_k - \left[\alpha + (1+\alpha)u_k + (1-\alpha)v_k\right]\frac{f(z_k)}{f'(x_k)}, \quad k=0,1,\ldots,
\end{aligned} \tag{1}
$$
where
$$
u_k = 1 - \frac{f[x_k,y_k]}{f'(x_k)} \quad \text{and} \quad v_k = \frac{f'(x_k)}{f[x_k,y_k]}, \quad k=0,1,2,\ldots
$$
The divided difference operator is defined as
$$
f[x,y] = \frac{f(x)-f(y)}{x-y}, \quad x,y \in I.
$$
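For readers who want to experiment with the family, the three steps of (1) translate directly into code. The following is a minimal sketch under our own naming and stopping criterion; it is not the implementation used in the paper:

```python
def actv_scalar(f, df, x0, alpha, tol=1e-12, max_iter=50):
    """Sketch of one member of the scalar ACTV family (1)."""
    x = x0
    for _ in range(max_iter):
        y = x - f(x) / df(x)                      # Newton step
        if y == x:                                # already at a root
            return x
        fxy = (f(x) - f(y)) / (x - y)             # divided difference f[x, y]
        z = y - f(y) / (2.0 * fxy - df(x))        # Ostrowski-type step
        u = 1.0 - fxy / df(x)
        v = df(x) / fxy
        x_new = z - (alpha + (1 + alpha) * u + (1 - alpha) * v) * f(z) / df(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# approximate the positive root of f(x) = x^2 - 2
root = actv_scalar(lambda t: t**2 - 2.0, lambda t: 2.0 * t, 1.5, alpha=1.0)
```

With $f(x)=x^2-2$, $x_0=1.5$ and $\alpha=1$, very few iterations suffice, in line with the high order of convergence of the family.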
By using tools of complex dynamics, the stability of this family was studied by Moscoso [2], where it was observed that there is good dynamic behavior in the case of α = 1 . In Section 2, we present the multidimensional extension of family (1) and prove its convergence order.
In the stability analysis (Section 3), we determine whether the fixed points of the associated rational operator are of attracting, repulsive or saddle nature; on the other hand, we search for the values of the parameter for which free critical points may appear. In the bifurcation analysis of free critical points (Section 4), we calculate the parameter lines generated from the mentioned free critical points, we generate the bifurcation planes for specific intervals of the parameter $\alpha$ and, as a consequence of these studies, we generate the dynamical planes for members of the family with stable and unstable behavior. In Section 5, some numerical problems are considered to confirm the theoretical results. The proposed schemes for different values of the parameter are compared with Newton's method and some known sixth-order techniques, namely $C6_1$, $C6_2$, $B6$, $PSH6_1$, $PSH6_2$ and $XH6$, introduced by Cordero et al. in [3], Cordero et al. in [4], Behl et al. in [5], Capdevila et al. in [6], and Xiao and Yin in [7].
The iterative expressions of these methods for solving a nonlinear system $F(x)=0$, $F:\Omega\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$, are shown below. Newton's scheme is the best-known iterative algorithm,
$$
x^{(k+1)} = x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}), \quad k=0,1,\ldots,
$$
where $F'(x)$ denotes the Jacobian matrix associated to $F$.
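Newton's scheme above can be sketched as follows; note that the Jacobian linear system is solved at each step rather than inverting $F'(x^{(k)})$ explicitly. The test system and all names are illustrative, not taken from the paper:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, max_iter=100):
    """Newton's method: solve J(x^k) dx = F(x^k), then x^{k+1} = x^k - dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(J(x), F(x))
        x = x - dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# illustrative system: x1^2 + x2^2 - 1 = 0, x1 - x2 = 0
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
sol = newton_system(F, J, [1.0, 0.5])
```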
The following sixth-order iterative scheme (see [3]) is named $C6_1$. It uses three evaluations of $F$ and two of $F'$ per iteration:
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= y^{(k)} - F'(x^{(k)})^{-1}\left[2I - F'(y^{(k)})F'(x^{(k)})^{-1}\right]F(y^{(k)}),\\
x^{(k+1)} &= z^{(k)} - F'(y^{(k)})^{-1}F(z^{(k)}), \quad k=0,1,\ldots
\end{aligned}
$$
The following scheme, introduced in [4], is a modified Newton–Jarratt composition with sixth-order convergence that evaluates $F$ and $F'$ twice each per iteration. It is denoted by $C6_2$:
$$
\begin{aligned}
z^{(k)} &= x^{(k)} - \tfrac{2}{3}F'(x^{(k)})^{-1}F(x^{(k)}),\\
y^{(k)} &= x^{(k)} - \tfrac{1}{2}\left[3F'(z^{(k)}) - F'(x^{(k)})\right]^{-1}\left[3F'(z^{(k)}) + F'(x^{(k)})\right]F'(x^{(k)})^{-1}F(x^{(k)}),\\
x^{(k+1)} &= y^{(k)} - \left[-\tfrac{1}{2}F'(x^{(k)}) + \tfrac{3}{2}F'(z^{(k)})\right]^{-1}F(y^{(k)}), \quad k=0,1,\ldots
\end{aligned}
$$
Algorithm (5) was constructed by Behl et al. in [5] and is denoted by $B6$:
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - \tfrac{2}{3}F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= x^{(k)} - \left[a_1 I + a_2\left(F'(y^{(k)})^{-1}F'(x^{(k)})\right)^2\right]F'(x^{(k)})^{-1}F(x^{(k)}),\\
x^{(k+1)} &= z^{(k)} - \left[b_2 F'(x^{(k)}) + b_3 F'(y^{(k)})\right]^{-1}\left[F'(x^{(k)}) + b_1 F'(y^{(k)})\right]F'(x^{(k)})^{-1}F(z^{(k)}),
\end{aligned} \tag{5}
$$
where $k \geq 0$, $a_1 = a_2 + 1 = 5/8$, $a_2 = -3/8$, $b_2 = 1 - b_3 + b_1 = -\frac{1}{2}(1+3b_1)$, $b_3 = \frac{1}{2}(3+5b_1)$ and $b_1$ is a free parameter. This class of iterative processes achieves convergence order six with two evaluations of $F$ and two of $F'$ per iteration. For our comparison, we use two versions of method $B6$: one with $a_2=-\frac{3}{8}$, $a_1=\frac{5}{8}$, $b_1=-\frac{3}{5}$, $b_3=0$, $b_2=\frac{2}{5}$, and the other one with $a_2=-\frac{3}{8}$, $a_1=\frac{5}{8}$, $b_1=1$, $b_3=4$, $b_2=-2$.
Capdevila et al. in [6] introduced the following class of iterative methods, which we call $PSH6_1$. The elements of this family have order of convergence six and need three evaluations of the function $F$, one of the Jacobian matrix $F'$ and one divided difference $[x,y;F]$ per iteration:
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= y^{(k)} - \left[I + 2t^{(k)} + \tfrac{1}{2}\alpha \left(t^{(k)}\right)^2\right]F'(x^{(k)})^{-1}F(y^{(k)}),\\
x^{(k+1)} &= z^{(k)} - \left[I + 2t^{(k)} + \tfrac{1}{2}\alpha \left(t^{(k)}\right)^2\right]F'(x^{(k)})^{-1}F(z^{(k)}), \quad k=0,1,\ldots,
\end{aligned}
$$
where $\alpha$ is free and $t^{(k)} = I - F'(x^{(k)})^{-1}\left[x^{(k)}, y^{(k)}; F\right]$. For the numerical results, we take $\alpha=0$ and $\alpha=10$.
Also introduced by Capdevila et al. in [6], we work with the following scheme, denoted $PSH6_2$, with the same order of convergence and the same number of functional evaluations per iteration as $PSH6_1$:
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= y^{(k)} - \left[I + 2\left(I + \alpha t^{(k)}\right)^{-1}t^{(k)}\right]F'(x^{(k)})^{-1}F(y^{(k)}),\\
x^{(k+1)} &= z^{(k)} - \left[I + 2\left(I + \alpha t^{(k)}\right)^{-1}t^{(k)}\right]F'(x^{(k)})^{-1}F(z^{(k)}), \quad k=0,1,\ldots
\end{aligned}
$$
In this case, we take $\alpha = 10$.
Finally, we use the method called $XH6$, introduced by Xiao and Yin in [7]. It needs two evaluations of $F$, at $x^{(k)}$ and $z^{(k)}$, and two of $F'$, at $x^{(k)}$ and $y^{(k)}$, per iteration:
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - \tfrac{2}{3}F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= x^{(k)} - \tfrac{1}{2}\left[-I + \tfrac{9}{4}F'(y^{(k)})^{-1}F'(x^{(k)}) + \tfrac{3}{4}F'(x^{(k)})^{-1}F'(y^{(k)})\right]F'(x^{(k)})^{-1}F(x^{(k)}),\\
x^{(k+1)} &= z^{(k)} - \tfrac{1}{2}\left[3F'(y^{(k)})^{-1} - F'(x^{(k)})^{-1}\right]F(z^{(k)}), \quad k=0,1,\ldots
\end{aligned}
$$

Multidimensional Real Dynamics Concepts

Discrete dynamics is a very useful tool to study the stability of iterative schemes for solving nonlinear systems. An exhaustive description of this tool can be found in the book [8]. A resource used for the stability analysis of iterative schemes for solving nonlinear systems is to analyze the dynamical behavior of the vectorial rational operator obtained by applying the iterative expression to low-degree polynomial systems. This technique generally uses quadratic or cubic polynomials [9].
For scalar iterative processes, the tools to be used are those of real or complex discrete dynamics. However, here we handle a family of vectorial iterative methods, so real multidimensional dynamics must be used to analyze its stability; see [6]. We proceed by taking a system of quadratic polynomials on which we apply our method in order to obtain the associated multidimensional rational operator and perform the analysis of the fixed and critical points, so as to select members of the family with good stability.
Some concepts used in this study are presented, see for instance [10].
Let $G:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be the operator obtained from the iterative scheme applied on a polynomial system $p(x)$. The set of successive images of $x^{(0)}$ through $G$, $\left\{x^{(0)}, G(x^{(0)}), \ldots, G^m(x^{(0)}), \ldots\right\}$, is called the orbit of $x^{(0)} \in \mathbb{R}^n$. A point $x^* \in \mathbb{R}^n$ is a fixed point of $G$ if $G(x^*) = x^*$. Of course, the roots of $p(x)$ are fixed points of $G$, but there may be fixed points of $G$ that are not solutions of the system $p(x)=0$; we refer to them as strange fixed points. A point $x$ satisfying $G^k(x)=x$ and $G^p(x)\neq x$ for $p<k$, $k\geq 1$, is called a periodic point of period $k$. For classifying the stability of fixed or periodic points, we use the following result.
Theorem 1
([8], p. 558). Let $G:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be of class $C^2$. Assume that $x^*$ is a periodic point of period $k\geq 1$. If $\lambda_1,\lambda_2,\ldots,\lambda_n$ are the eigenvalues of $G'(x^*)$, we have the following:
(a) 
If all eigenvalues $\lambda_k$ verify $|\lambda_k|<1$, then $x^*$ is an attracting point.
(b) 
If one eigenvalue $\lambda_{k_0}$ is such that $|\lambda_{k_0}|>1$, then $x^*$ is unstable, that is, repulsive or saddle.
(c) 
If all eigenvalues $\lambda_k$ verify $|\lambda_k|>1$, then $x^*$ is a repulsive point.
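Theorem 1 translates directly into a small classification routine. In the sketch below the label names are ours, and points with some modulus exactly equal to 1 are left undetermined:

```python
import numpy as np

def classify_point(eigenvalues):
    """Classify a fixed/periodic point from the eigenvalue moduli (Theorem 1)."""
    mods = np.abs(np.asarray(eigenvalues))
    if np.all(mods < 1):
        return "attracting"       # case (a)
    if np.all(mods > 1):
        return "repulsive"        # case (c)
    if np.any(mods > 1):
        return "saddle"           # case (b): unstable, mixed moduli
    return "undetermined"         # some modulus equal to 1
```

For instance, `classify_point([2.0, 0.5])` returns `"saddle"`, the mixed situation of case (b).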
The set of preimages of any order of an attracting fixed point $x^*$ of the multidimensional rational function $G$,
$$
\mathcal{A}(x^*) = \left\{ x^{(0)} \in \mathbb{R}^n : G^m(x^{(0)}) \rightarrow x^*,\ m\rightarrow\infty \right\},
$$
is the basin of attraction of $x^*$.
The solutions of $G'(x)=0$ are called critical points of the operator $G$. Critical points other than the roots of $p(x)$ are called free critical points. The critical points are important for our analysis because of the following result from Julia and Fatou (see [11,12,13]).
Theorem 2
(Julia and Fatou). Let G be a rational function. The immediate basin of attraction of a periodic (or fixed) attractor point contains at least one critical point.

2. Family ACTV for Nonlinear Systems

Taking into account the iterative expression of family (1), we can extend this expression in a natural way for solving nonlinear systems $F(x)=0$: we replace the scalar $f$ by the vectorial $F$ and $f[x,y]$ by the divided difference operator $[x,y;F]$. The resulting expression is
$$
\begin{aligned}
y^{(k)} &= x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),\\
z^{(k)} &= y^{(k)} - \left[2[x^{(k)},y^{(k)};F] - F'(x^{(k)})\right]^{-1}F(y^{(k)}),\\
x^{(k+1)} &= z^{(k)} - \left[\alpha F'(x^{(k)})^{-1} + (1-\alpha)[x^{(k)},y^{(k)};F]^{-1}\right]F(z^{(k)})\\
&\quad - (1+\alpha)\left[I_n - F'(x^{(k)})^{-1}[x^{(k)},y^{(k)};F]\right]F'(x^{(k)})^{-1}F(z^{(k)}),
\end{aligned} \tag{9}
$$
where $I_n$ is the $n\times n$ identity matrix.
The mapping $[\cdot,\cdot;F]:\Omega\times\Omega\subseteq\mathbb{R}^n\times\mathbb{R}^n\rightarrow \mathcal{L}(\mathbb{R}^n)$ such that
$$
[x,y;F](x-y) = F(x)-F(y), \quad \text{for any } x,y\in\Omega,
$$
is the divided difference operator of $F$ on $\mathbb{R}^n$ (see [14]).
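One standard way to realize this operator in practice builds the matrix column by column from mixed evaluations of $F$, so that the defining identity $[x,y;F](x-y)=F(x)-F(y)$ holds by telescoping. The code below is our sketch of that componentwise construction, not necessarily the divided difference used in the paper's experiments:

```python
import numpy as np

def divided_difference(F, x, y):
    """First-order divided difference [x, y; F], built column by column."""
    n = len(x)
    M = np.zeros((n, n))
    for j in range(n):
        # v_hi agrees with x up to component j and with y afterwards;
        # v_lo switches to y one component earlier
        v_hi = np.concatenate((x[:j + 1], y[j + 1:]))
        v_lo = np.concatenate((x[:j], y[j:]))
        M[:, j] = (F(v_hi) - F(v_lo)) / (x[j] - y[j])
    return M

F = lambda v: np.array([v[0]**2 - v[1], v[0] + v[1]**2])
x = np.array([1.0, 2.0]); y = np.array([0.5, 1.0])
M = divided_difference(F, x, y)
# secant identity: M @ (x - y) equals F(x) - F(y)
```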
The proof of the main result is based on the Genocchi–Hermite formula (see [14]),
$$
[x,y;F] = \int_0^1 F'(x + t(y-x))\,dt, \quad \text{for all } (x,y)\in\Omega\times\Omega. \tag{10}
$$
By developing $F'(x+th)$ in Taylor series around $x$, we obtain
$$
\int_0^1 F'(x+th)\,dt = F'(x) + \frac{1}{2}F''(x)h + \frac{1}{6}F'''(x)h^2 + O(h^3).
$$
Denoting by $e = x-\xi$, where $\xi$ is a zero of $F(x)$, and assuming that $F'(\xi)$ is invertible, we obtain
$$
\begin{aligned}
F(x) &= F'(\xi)\left[e + C_2e^2 + C_3e^3 + C_4e^4 + C_5e^5\right] + O(e^6),\\
F'(x) &= F'(\xi)\left[I + 2C_2e + 3C_3e^2 + 4C_4e^3 + 5C_5e^4\right] + O(e^5),\\
F''(x) &= F'(\xi)\left[2C_2 + 6C_3e + 12C_4e^2 + 20C_5e^3\right] + O(e^4),\\
F'''(x) &= F'(\xi)\left[6C_3 + 24C_4e + 60C_5e^2\right] + O(e^3),
\end{aligned}
$$
where $C_q = \frac{1}{q!}F'(\xi)^{-1}F^{(q)}(\xi)$, $q\geq 2$. Replacing these expressions in the Genocchi–Hermite formula, and denoting the second point of the divided difference by $y = x+h$ and the error of $y$ by $e_y = y-\xi$, we obtain
$$
[x,y;F] = F'(\xi)\left[I + C_2\left(e_y + e\right) + C_3e^2 + O(e^3)\right].
$$
In particular, if $y$ is the Newton approximation, i.e., $h = y - x = -F'(x)^{-1}F(x)$, we obtain
$$
[x,y;F] = F'(\xi)\left[I + C_2e + \left(C_2^2 + C_3\right)e^2 + O(e^3)\right].
$$

Convergence Analysis

Theorem 3.
Let $F:\Omega\subseteq\mathbb{R}^n\rightarrow\mathbb{R}^n$ be differentiable enough in an open convex neighborhood $\Omega$ of $\xi$, a root of $F(x)=0$. Consider a seed $x^{(0)}$ close enough to the solution $\xi$, and assume that $F'(x)$ is continuous and invertible at $\xi$. Then, scheme (9) has local convergence of order six for all $\alpha\in\mathbb{R}$, with error equation
$$
e^{(k+1)} = \left[(5+\alpha)C_2^5 - C_2^2C_3C_2 - C_3C_2^3 + C_3^2C_2\right](e^{(k)})^6 + O((e^{(k)})^7),
$$
where $C_k = \frac{1}{k!}F'(\xi)^{-1}F^{(k)}(\xi)$, $k=2,3,\ldots$, and $e^{(k)} = x^{(k)}-\xi$.
Proof. 
From
$$
y^{(k)} = x^{(k)} - F'(x^{(k)})^{-1}F(x^{(k)}),
$$
we perform the Taylor series of $F(x^{(k)})$ and $F'(x^{(k)})$ around $\xi$,
$$
F(x^{(k)}) = F'(\xi)\left[e^{(k)} + C_2(e^{(k)})^2 + C_3(e^{(k)})^3 + C_4(e^{(k)})^4 + C_5(e^{(k)})^5 + C_6(e^{(k)})^6 + C_7(e^{(k)})^7\right] + O((e^{(k)})^8), \tag{12}
$$
$$
F'(x^{(k)}) = F'(\xi)\left[I + 2C_2e^{(k)} + 3C_3(e^{(k)})^2 + 4C_4(e^{(k)})^3 + 5C_5(e^{(k)})^4 + 6C_6(e^{(k)})^5 + 7C_7(e^{(k)})^6\right] + O((e^{(k)})^7).
$$
We suppose that the Jacobian matrix $F'(\xi)$ is nonsingular and calculate the Taylor expansion of $F'(x^{(k)})^{-1}$ as follows:
$$
F'(x^{(k)})^{-1} = \left[I + X_2e^{(k)} + X_3(e^{(k)})^2 + X_4(e^{(k)})^3 + X_5(e^{(k)})^4 + X_6(e^{(k)})^5 + X_7(e^{(k)})^6\right]F'(\xi)^{-1} + O((e^{(k)})^7), \tag{14}
$$
where $X_2, X_3, X_4, X_5, X_6, X_7$ are unknowns such that
$$
F'(x^{(k)})^{-1}F'(x^{(k)}) = I.
$$
Then, it can be proven that
$$
\begin{aligned}
F'(x^{(k)})^{-1} = \Big[ I &- 2C_2e^{(k)} + \left(4C_2^2 - 3C_3\right)(e^{(k)})^2 + \left(6C_3C_2 + 6C_2C_3 - 4C_4 - 8C_2^3\right)(e^{(k)})^3 \\
&+ \left(16C_2^4 + 9C_3^2 - 12C_3C_2^2 - 12C_2C_3C_2 - 12C_2^2C_3 + 8C_4C_2 + 8C_2C_4 - 5C_5\right)(e^{(k)})^4 \\
&+ \big({-32}C_2^5 - 18C_3^2C_2 + 24C_3C_2^3 + 24C_2C_3C_2^2 + 24C_2^2C_3C_2 - 16C_4C_2^2 - 16C_2C_4C_2 + 10C_5C_2 \\
&\quad - 18C_3C_2C_3 - 18C_2C_3^2 + 12C_4C_3 + 24C_2^3C_3 - 16C_2^2C_4 + 12C_3C_4 + 10C_2C_5 - 6C_6\big)(e^{(k)})^5 \\
&+ \big(64C_2^6 + 36C_3^2C_2^2 - 48C_3C_2^4 - 48C_2C_3C_2^3 - 48C_2^2C_3C_2^2 + 32C_4C_2^3 + 32C_2C_4C_2^2 - 20C_5C_2^2 \\
&\quad + 36C_3C_2C_3C_2 + 36C_2C_3^2C_2 - 24C_4C_3C_2 - 48C_2^3C_3C_2 + 32C_2^2C_4C_2 - 24C_3C_4C_2 - 20C_2C_5C_2 \\
&\quad + 12C_6C_2 - 48C_2^4C_3 - 27C_3^3 + 36C_3C_2^2C_3 + 36C_2C_3C_2C_3 + 36C_2^2C_3^2 - 24C_4C_2C_3 - 24C_2C_4C_3 \\
&\quad + 15C_5C_3 - 24C_3C_2C_4 - 24C_2C_3C_4 + 16C_4^2 + 32C_2^3C_4 - 20C_2^2C_5 + 15C_3C_5 + 12C_2C_6 - 7C_7\big)(e^{(k)})^6 \Big]F'(\xi)^{-1} \\
&+ O((e^{(k)})^7), \tag{15}
\end{aligned}
$$
and multiplying expressions (12) and (15), we obtain
$$
\begin{aligned}
F'(x^{(k)})^{-1}F(x^{(k)}) = e^{(k)} &- C_2(e^{(k)})^2 + 2\left(C_2^2 - C_3\right)(e^{(k)})^3 + \left(3C_3C_2 + 4C_2C_3 - 3C_4 - 4C_2^3\right)(e^{(k)})^4 \\
&+ \left({-6}C_3C_2^2 - 8C_2^2C_3 + 6C_3^2 - 6C_2C_3C_2 + 6C_2C_4 + 4C_4C_2 + 8C_2^4 - 4C_5\right)(e^{(k)})^5 \\
&+ \big({-16}C_2^5 - 9C_3^2C_2 + 12C_3C_2^3 + 12C_2C_3C_2^2 + 12C_2^2C_3C_2 - 8C_4C_2^2 - 8C_2C_4C_2 + 5C_5C_2 \\
&\quad - 12C_3C_2C_3 - 12C_2C_3^2 + 8C_4C_3 + 16C_2^3C_3 - 12C_2^2C_4 + 9C_3C_4 + 8C_2C_5 - 5C_6\big)(e^{(k)})^6 + O((e^{(k)})^7). \tag{16}
\end{aligned}
$$
Taking into account that $e^{(k)} = x^{(k)}-\xi$, the expansion of the error at the first step of family (9) is
$$
\begin{aligned}
y^{(k)} - \xi = C_2(e^{(k)})^2 &- 2\left(C_2^2 - C_3\right)(e^{(k)})^3 - \left(3C_3C_2 + 4C_2C_3 - 3C_4 - 4C_2^3\right)(e^{(k)})^4 \\
&+ \left(6C_3C_2^2 + 8C_2^2C_3 - 6C_3^2 + 6C_2C_3C_2 - 6C_2C_4 - 4C_4C_2 - 8C_2^4 + 4C_5\right)(e^{(k)})^5 \\
&+ \big(16C_2^5 + 9C_3^2C_2 - 12C_3C_2^3 - 12C_2C_3C_2^2 - 12C_2^2C_3C_2 + 8C_4C_2^2 + 8C_2C_4C_2 - 5C_5C_2 \\
&\quad + 12C_3C_2C_3 + 12C_2C_3^2 - 8C_4C_3 - 16C_2^3C_3 + 12C_2^2C_4 - 9C_3C_4 - 8C_2C_5 + 5C_6\big)(e^{(k)})^6 + O((e^{(k)})^7).
\end{aligned}
$$
For $z^{(k)}$, we calculate $[x^{(k)},y^{(k)};F]$ up to order six by using the Genocchi–Hermite formula (10), obtaining
$$
\begin{aligned}
\left[x^{(k)},y^{(k)};F\right] = F'(\xi)\Big[ I &+ C_2e^{(k)} + \left(C_2^2 + C_3\right)(e^{(k)})^2 + \left(C_4 + C_3C_2 + 2C_2C_3 - 2C_2^3\right)(e^{(k)})^3 \\
&+ \left(C_5 + C_4C_2 + 3C_2C_4 + 2C_3^2 - 3C_2C_3C_2 - 4C_2^2C_3 - C_3C_2^2 + 4C_2^4\right)(e^{(k)})^4 \\
&+ \big(C_6 - 8C_2^5 + 6C_2C_3C_2^2 + 8C_2^3C_3 + 6C_2^2C_3C_2 - C_4C_2^2 - 6C_2^2C_4 - 4C_2C_4C_2 \\
&\quad - 6C_2C_3^2 - C_3^2C_2 - 2C_3C_2C_3 + 2C_4C_3 + 3C_3C_4 + C_5C_2 + 4C_2C_5\big)(e^{(k)})^5 \\
&+ \big(C_7 + 16C_2^6 + 9C_2C_3^2C_2 + 12C_2C_3C_2C_3 + 12C_2^2C_3^2 - C_3C_2C_3C_2 - C_3^2C_2^2 \\
&\quad - 12C_2C_3C_2^3 - 12C_2^2C_3C_2^2 - 12C_2^3C_3C_2 - 16C_2^4C_3 + 4C_3C_2^4 + 8C_2C_4C_2^2 + 8C_2^2C_4C_2 \\
&\quad + 12C_2^3C_4 + C_4C_2^3 - 8C_2C_4C_3 - 9C_2C_3C_4 - 3C_3C_2C_4 - C_3C_4C_2 - C_4C_3C_2 - 2C_4C_2C_3 \\
&\quad + 5C_2C_6 + C_6C_2 - 2C_3^3 + 4C_3C_5 + 2C_5C_3 + 3C_4^2 - 5C_2C_5C_2 - 8C_2^2C_5 - C_5C_2^2\big)(e^{(k)})^6 \Big] + O((e^{(k)})^7).
\end{aligned}
$$
Following a procedure similar to the one used in (14) and (15), we have
$$
\begin{aligned}
\left[2[x^{(k)},y^{(k)};F] - F'(x^{(k)})\right]^{-1} = \Big[ I &+ \left(C_3 - 2C_2^2\right)(e^{(k)})^2 + \left(2C_4 - 2C_3C_2 - 4C_2C_3 + 4C_2^3\right)(e^{(k)})^3 \\
&+ \left({-4}C_2^4 + 6C_2^2C_3 + 6C_2C_3C_2 - 6C_2C_4 - 2C_4C_2 + 3C_5 - 3C_3^2\right)(e^{(k)})^4 \\
&+ \big({-2}C_4C_2^2 + 8C_2^2C_4 + 8C_2C_4C_2 - 2C_4C_3 - 4C_3C_4 + 8C_3C_2^3 - 4C_2C_3C_2^2 \\
&\quad - 8C_2^2C_3C_2 - 4C_2^3C_3 - 2C_5C_2 - 8C_2C_5 - 2C_3C_2C_3 + 8C_2C_3^2 + 4C_6\big)(e^{(k)})^5 \\
&+ \big(8C_2^6 - 4C_2^4C_3 + 8C_2^2C_3C_2^2 - 4C_2C_3C_2^3 - 24C_3C_2^4 + 4C_2^3C_3C_2 - 10C_2^2C_3^2 \\
&\quad - 2C_2C_3C_2C_3 + 10C_3^2C_2^2 + 12C_3C_2C_3C_2 + 16C_3C_2^2C_3 - 10C_2C_3^2C_2 + 10C_2C_4C_3 \\
&\quad - 6C_4C_2C_3 - 2C_4C_3C_2 - 4C_3C_2C_4 + 10C_2C_3C_4 - 4C_2C_4C_2^2 + 10C_4C_2^3 - 4C_2^3C_4 \\
&\quad - 12C_2^2C_4C_2 - C_5C_3 - 5C_3C_5 - 10C_2C_6 - 2C_6C_2 - 3C_3^3 + 10C_2C_5C_2 + 10C_2^2C_5 \\
&\quad - 4C_5C_2^2 - 2C_4^2\big)(e^{(k)})^6 \Big]F'(\xi)^{-1} + O((e^{(k)})^7). \tag{17}
\end{aligned}
$$
Now, we obtain the expansion of $F(y^{(k)})$. Since
$$
\begin{aligned}
\left(y^{(k)}-\xi\right)^2 &= C_2^2(e^{(k)})^4 + \left({-4}C_2^3 + 2C_2C_3 + 2C_3C_2\right)(e^{(k)})^5 \\
&\quad + \left(12C_2^4 - 11C_2^2C_3 + 4C_3^2 + 3C_2C_4 - 4C_2C_3C_2 + 3C_4C_2 - 7C_3C_2^2\right)(e^{(k)})^6 + O((e^{(k)})^7),\\
\left(y^{(k)}-\xi\right)^3 &= C_2^3(e^{(k)})^6 + O((e^{(k)})^7),
\end{aligned}
$$
we have
$$
\begin{aligned}
F(y^{(k)}) &= F'(\xi)\left[\left(y^{(k)}-\xi\right) + C_2\left(y^{(k)}-\xi\right)^2 + O\!\left(\left(y^{(k)}-\xi\right)^3\right)\right] \\
&= F'(\xi)\Big[C_2(e^{(k)})^2 + 2\left(C_3 - C_2^2\right)(e^{(k)})^3 + \left(3C_4 + 5C_2^3 - 3C_3C_2 - 4C_2C_3\right)(e^{(k)})^4 \\
&\quad + \left(4C_5 - 6C_2C_4 + 10C_2^2C_3 - 6C_3^2 - 4C_4C_2 + 8C_2C_3C_2 - 12C_2^4 + 6C_3C_2^2\right)(e^{(k)})^5 \\
&\quad + \big(28C_2^5 - 27C_2^3C_3 + 16C_2C_3^2 + 15C_2^2C_4 - 9C_3C_4 - 8C_2C_5 + 5C_6 - 16C_2^2C_3C_2 + 9C_3^2C_2 \\
&\quad\quad + 11C_2C_4C_2 - 5C_5C_2 - 18C_2C_3C_2^2 + 8C_4C_2^2 - 12C_3C_2^3 - 8C_4C_3 + 12C_3C_2C_3\big)(e^{(k)})^6\Big] + O((e^{(k)})^7). \tag{18}
\end{aligned}
$$
Considering the results obtained in (16)–(18), the second step has the error equation
$$
\begin{aligned}
z^{(k)} - \xi &= \left(C_2^3 - C_3C_2\right)(e^{(k)})^4 + \left({-2}C_4C_2 + 2C_2^2C_3 + 2C_2C_3C_2 + 4C_3C_2^2 - 2C_3^2 - 4C_2^4\right)(e^{(k)})^5 \\
&\quad + \big(10C_2^5 - 5C_2^3C_3 - 8C_2^2C_3C_2 - 8C_2C_3C_2^2 - 9C_3C_2^3 + 4C_2C_3^2 + 6C_3^2C_2 + 8C_3C_2C_3 \\
&\quad\quad + 3C_2^2C_4 + 3C_2C_4C_2 + 6C_4C_2^2 - 3C_3C_4 - 4C_4C_3 - 3C_5C_2\big)(e^{(k)})^6 + O((e^{(k)})^7).
\end{aligned}
$$
To obtain the error equation of the third step, we need the calculation of $[x^{(k)},y^{(k)};F]^{-1}$ and $F(z^{(k)})$, since the other elements were previously obtained. Following a process similar to that seen in (17), and developing only up to order two, we have
$$
\left[x^{(k)},y^{(k)};F\right]^{-1} = \left[I - C_2e^{(k)} - C_3(e^{(k)})^2 + O((e^{(k)})^3)\right]F'(\xi)^{-1}.
$$
For the calculation of $F(z^{(k)})$, we are only interested in the terms up to order six, so applying the development seen in (18), we obtain
$$
F(z^{(k)}) = F'(\xi)\left(z^{(k)} - \xi\right) + O\!\left(\left(z^{(k)}-\xi\right)^2\right).
$$
The resulting error equation for the family of methods (9) is
$$
e^{(k+1)} = \left[(5+\alpha)C_2^5 - C_2^2C_3C_2 - C_3C_2^3 + C_3^2C_2\right](e^{(k)})^6 + O((e^{(k)})^7). \qquad \square
$$
Once the convergence order of the proposed class of iterative methods is proven, we undertake a complexity analysis, taking into account the cost of solving the linear systems and the rest of the computational effort, not only for the proposed class but also for Newton's scheme and the sixth-order schemes presented in the Introduction. In order to calculate it, let us remark that the computational cost (in terms of products/quotients) of solving an $n\times n$ linear system is
$$
\frac{1}{3}n^3 + n^2 - \frac{1}{3}n,
$$
but if another linear system with the same coefficient matrix is solved, the cost increases only by $n^2$ operations. Moreover, a matrix–vector product costs $n^2$ operations. On these bases, the computational effort of each scheme is presented in Table 1. As the ACTV class depends on the parameter $\alpha$, we consider $\alpha=1$, as this value eliminates one of the terms in the iterative expression, reducing the complexity.
Observing the data in Table 1, there seems to be a great difference between Newton's method and the sixth-order ones, with $PSH6_2$, $B6$ and $C6_2$ being the most costly, in this order. Our proposed scheme ACTV for $\alpha=1$ stays in the middle of the table. However, this must be seen in contrast with the order of convergence $p$ of each scheme; from this point of view, the comparison among the methods is clearer.
With the information provided by Table 1, we represent in Figure 1 and Figure 2 the performance of the efficiency index $I = p^{1/C}$ of each method, where $p$ is the order of the corresponding scheme and $C$ its computational cost. This index was introduced by Traub in [15] in order to classify the procedures by their computational complexity.
In Figure 1, we observe that the best scheme is Newton's, our proposed procedure (dashed line in the figure) being third in efficiency. This situation changes for bigger sizes of the system (see Figure 2), as ACTV for $\alpha=1$ achieves second place, very close to Newton's, improving on the rest of the schemes of the same order of convergence. Our concern now is the following: is it possible to find values of the parameter $\alpha$ such that this performance is held or even improved? The improvement can be in terms of the wideness of the set of converging initial estimations. This is the reason why we analyze the stability of the class of iterative methods.
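The cost formula and Traub's index can be evaluated directly. In the sketch below, the cost assigned to the sixth-order scheme is a hypothetical placeholder (one LU factorization reused for three right-hand sides), chosen only to illustrate the comparison; it does not reproduce the exact entries of Table 1:

```python
from fractions import Fraction

def lu_cost(n):
    """Products/quotients needed to solve one n x n linear system (LU)."""
    return Fraction(n**3, 3) + n**2 - Fraction(n, 3)

def efficiency_index(p, C):
    """Traub's index I = p^(1/C) for order p and computational cost C."""
    return p ** (1.0 / float(C))

n = 20
newton_C = lu_cost(n)                 # Newton: one new linear system per step
sixth_C = lu_cost(n) + 2 * n**2       # hypothetical: same LU, two extra RHSs
I_newton = efficiency_index(2, newton_C)
I_sixth = efficiency_index(6, sixth_C)
```

For this illustrative cost model and $n=20$, the sixth-order index already exceeds Newton's, matching the qualitative trend observed for larger system sizes in Figure 2.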

3. Stability Analysis

Let us consider a polynomial system of $n$ variables, $q(x)=0$, $q:\mathbb{R}^n\rightarrow\mathbb{R}^n$, where $q_i(x) = x_i^2 - 1$, $i=1,2,\ldots,n$. From now on, we denote by $ACTV^6(x,\alpha) = \left(actv_1^6(x,\alpha), actv_2^6(x,\alpha), \ldots, actv_n^6(x,\alpha)\right)$ the vectorial rational function obtained when class (9) is applied on $q(x)$. As $q(x)=0$ is uncoupled, all functions $actv_j^6(x,\alpha)$ are analogous, differing only in the index $j=1,2,\ldots,n$. Their expressions are
$$
actv_j^6(x,\alpha) = \frac{p(x,\alpha)}{128x_j^5\left(1+x_j^2\right)^2\left(1+3x_j^2\right)}, \quad j=1,2,\ldots,n,
$$
where
$$
p(x,\alpha) = 1 + \alpha - 2(1+3\alpha)x_j^2 + (11+15\alpha)x_j^4 + (404-20\alpha)x_j^6 + 5(155+3\alpha)x_j^8 + (782-6\alpha)x_j^{10} + (77+\alpha)x_j^{12}.
$$
There are values of $\alpha$ for which the operator coordinates simplify; we show the particular case $\alpha = 1$:
$$
actv_j^6(x,1) = \frac{1 - 7x_j^2 + 34x_j^4 + 90x_j^6 + 125x_j^8 + 13x_j^{10}}{64x_j^5\left(1+x_j^2\right)^2}, \quad j=1,2,\ldots,n.
$$
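As a quick numerical check of the superattracting character of the roots of $q(x)$, one can iterate the coordinate operator for $\alpha=1$ from an arbitrary seed:

```python
def actv6_alpha1(x):
    """Coordinate rational operator actv_j^6(x, 1) shown above."""
    num = 1 - 7*x**2 + 34*x**4 + 90*x**6 + 125*x**8 + 13*x**10
    den = 64 * x**5 * (1 + x**2)**2
    return num / den

x = 0.7                      # arbitrary seed away from the fixed points
for _ in range(20):
    x = actv6_alpha1(x)
# the orbit is attracted to a root of x^2 - 1
```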
By determining and analyzing the corresponding fixed points of the operator, we present a synthesis of the most relevant results.
Theorem 4.
The roots of $q(x)$ are the components of the $2^n$ superattracting fixed points of $ACTV^6(x,\alpha)$ associated to the class of iterative methods (9). The same happens with the real roots of $l(t) = -1-\alpha + (1+5\alpha)t^2 - (10+10\alpha)t^4 + (10\alpha-286)t^6 - (421+5\alpha)t^8 + (\alpha-307)t^{10}$, depending on $\alpha$:
(a) 
If $\alpha<-1$ or $\alpha>307$, there are two real roots of $l(t)$, denoted by $l_i(\alpha)$, $i=1,2$. The strange fixed points $\left(l_{\sigma_1}(\alpha), l_{\sigma_2}(\alpha), \ldots, l_{\sigma_n}(\alpha)\right)$, where $\sigma_i\in\{1,2\}$, are repulsive points. However, if any of the components is replaced by $\pm 1$, then the corresponding strange fixed points are saddle points.
(b) 
$ACTV^6(x,\alpha)$ has no strange fixed points for $-1\leq\alpha\leq 307$.
Proof. 
To calculate the fixed points of $ACTV^6(x,\alpha)$, we solve $actv_j^6(x,\alpha) = x_j$, which leads to
$$
\left(x_j^2-1\right)l(x_j) = 0, \quad \text{with } l(t) = -1-\alpha + (1+5\alpha)t^2 - (10+10\alpha)t^4 + (10\alpha-286)t^6 - (421+5\alpha)t^8 + (\alpha-307)t^{10},
$$
for $j=1,2,\ldots,n$; that is, $x_j=\pm 1$ and the roots of $l(t)$, provided that $t\neq 0$.
At most, two of the roots of $l(t)$ are real, depending on $\alpha$. The qualitative performance of $ACTV^6(x,\alpha)$ is deduced from the eigenvalues of the Jacobian of $ACTV^6(x,\alpha)$ evaluated at the fixed points. Due to the uncoupled nature of the polynomial system, these eigenvalues coincide with the partial derivatives of the coordinate functions of the rational operator:
$$
Eig_j\left(l_j(\alpha),\ldots,l_j(\alpha)\right) = \frac{\left(-1+l_j(\alpha)^2\right)^5\left[5(1+\alpha) + 3l_j(\alpha)^6(77+\alpha) + 3l_j(\alpha)^4(65+17\alpha) + l_j(\alpha)^2(49+37\alpha)\right]}{128\,l_j(\alpha)^6\left(1+l_j(\alpha)^2\right)^3\left(1+3l_j(\alpha)^2\right)^2}.
$$
We calculate the absolute values of these eigenvalues only where the fixed points are real; it is clear that the fixed points with components $l_j(\alpha)=\pm 1$ are superattracting.
Plotting some of the eigenvalues, if $\alpha<-1$, the strange fixed points of $ACTV^6(x,\alpha)$ whose combinations have some component equal to $+1$ or $-1$ are saddle points, while the combinations of real roots coming from $l(t)$ are repulsors, because all the eigenvalues are greater than 1 in absolute value (see Figure 3a); if $\alpha>307$, a similar behavior is observed (see Figure 3b). □
Once the existence and stability of the strange fixed points of $ACTV^6(x,\alpha)$ have been studied, our aim is to show whether there exists any other attracting behavior different from that of the fixed points.

4. Bifurcation Analysis of Free Critical Points

In the following result, we summarize the most relevant results about critical points.
Theorem 5.
$ACTV^6(x,\alpha)$ has as free critical points
$$
\left(cr_{\sigma_1}(\alpha), cr_{\sigma_2}(\alpha), \ldots, cr_{\sigma_n}(\alpha)\right), \quad \sigma_i\in\{1,2,\ldots,m\},\ m\leq 6,
$$
which make null the entries of the Jacobian matrix, for $j=1,2,\ldots,n$, with $cr_j(\alpha)\neq\pm 1$ for all $j$. Specifically,
(a) 
If $\alpha\in(-\infty,-77]\cup\{-5\}\cup[1,\infty)$, no free critical points exist.
(b) 
If $\alpha\in(-77,-5)\cup(-5,1)$, then the two real roots of the polynomial $k(x) = 5+5\alpha + (49+37\alpha)x^2 + (195+51\alpha)x^4 + (231+3\alpha)x^6$ are the components of the free critical points.
Proof. 
The non-null components of the Jacobian matrix of $ACTV^6(x,\alpha)$ are
$$
\frac{\partial\, actv_j^6(x,\alpha)}{\partial x_j} = \frac{\left(-1+x_j^2\right)^5\left[5(1+\alpha) + 3x_j^6(77+\alpha) + 3x_j^4(65+17\alpha) + x_j^2(49+37\alpha)\right]}{128x_j^6\left(1+x_j^2\right)^3\left(1+3x_j^2\right)^2}, \quad j=1,2,\ldots,n.
$$
Then, the real roots of $5(1+\alpha) + x_j^2(49+37\alpha) + 3x_j^4(65+17\alpha) + 3x_j^6(77+\alpha)$ are the components of the free critical points, provided that they are not null. □

4.1. Parameter Line and Bifurcation Plane

Now, we use a graphical tool that helps us identify for which values of the parameter there might be convergence to the roots, divergence or any other performance. Real parameter lines, for $n=2$, are presented in Figure 4 and Figure 5 (see Theorem 5). In these figures, a different free critical point is employed as a seed for each member of the class, using $-77<\alpha<-5$ and $-5<\alpha<1$ to ensure the existence of real critical points.
To generate them, a mesh of $1000\times 1000$ points is made in $[0,1]\times\,]{-77},-5[$ for the first figure and $[0,1]\times\,]{-5},1[$ for the next one. We use $[0,15]$ in Figure 4a to increase the interval where $\alpha$ is defined, and $[0,1]$ in Figure 4b and Figure 5, allowing a better visualization. The color corresponding to each value of $\alpha$ is red if the corresponding critical point converges to one of the roots of the polynomial system, blue in the case of divergence, and black in other cases (chaos or periodic orbits). In addition, we use 500 as the limit of iterations and a tolerance of $10^{-3}$.
The global performance of each pair of free critical points is similar, so only $\left(cr_{i_1}(\alpha), cr_{i_1}(\alpha)\right)$ is shown in Figure 4. In Figure 4a, only a small black region shows non-convergence to the roots (red color). Now, we show the parameter line for $\alpha\in(-5,1)$.
In the line shown in Figure 5, it is observed that the whole zone shows global convergence to the roots. Therefore, it is a good area for choosing $\alpha$.
The concept of bifurcation is important in nonlinear systems, since it allows us to study the behavior of the solutions of a family of iterative methods. In dynamical systems, a bifurcation occurs when a small variation in the values of the system parameters (the bifurcation parameters) causes a qualitative or topological change in the behavior of the system. Feigenbaum or bifurcation diagrams are used to analyze the changes of each member of the class on $q(x)$, by using each critical point of the function as a seed and observing its performance for different ranges of $\alpha$. By using a mesh with 4000 subintervals in each axis and after 1000 iterations, different behaviors can be observed.
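A coarse version of such a bifurcation diagram can be sketched as follows. The paper uses 4000 subintervals and 1000 iterations; here the mesh and transient are reduced, and the seed $x_0=0.36$ is taken from the region of the phase space discussed in the next paragraphs:

```python
import numpy as np

def actv6(x, a):
    """Coordinate operator actv_j^6(x, alpha) on q(x) (expression above)."""
    p = (1 + a - 2*(1 + 3*a)*x**2 + (11 + 15*a)*x**4 + (404 - 20*a)*x**6
         + 5*(155 + 3*a)*x**8 + (782 - 6*a)*x**10 + (77 + a)*x**12)
    return p / (128 * x**5 * (1 + x**2)**2 * (1 + 3*x**2))

def bifurcation_tail(alphas, x0, discard=200, keep=50):
    """For each alpha, iterate from x0, discard a transient, keep the tail."""
    tails = []
    for a in alphas:
        x, tail = x0, []
        for i in range(discard + keep):
            if not np.isfinite(x) or abs(x) > 1e3:   # treat as divergence
                break
            x = actv6(x, a)
            if i >= discard:
                tail.append(x)
        tails.append(tail)
    return tails

tails = bifurcation_tail(np.linspace(-73.5, -72.5, 100), x0=0.36)
```

Plotting each tail against its $\alpha$ reproduces the qualitative shape of a Feigenbaum diagram: a single branch where there is convergence and split or dense branches where periodic orbits and chaos appear.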
Figure 6 shows the bifurcation diagrams in the black area of the parameter line of Figure 4b, specifically when $\alpha\in(-73.5,-72.5)$. In Figure 6a, a general convergence to one of the roots appears. However, a quadruple-period orbit can be found in a small interval around $\alpha=-73$. It includes not only periodic but also chaotic behavior (strange attractors, blue regions).
To obtain the strange attractors, we plot in Figure 7 and Figure 8 the orbits of 1000 initial guesses close to the point $x=(0.36,0.36)$ in the $(x_1,x_2)$-space, by iterating $ACTV^6((x_1,x_2),\alpha)$. The value of the parameter used is $\alpha=-73.25$, lying in the blue region. For each seed, the first 500 iterations are ignored; the following 400 appear in blue and the last 100 in magenta. We see in Figure 7 and Figure 8 that a parabolic fixed point bifurcates into periodic orbits with increasing periods, and therefore falls into a dense orbit (chaotic behavior) in a small area of the $(x_1,x_2)$-space.
For values of $\alpha\in(-76.9,-76.5)$, the bifurcation diagrams can be observed in Figure 9, related to the black region of Figure 4b. Again, a general convergence to one of the roots can be observed, but a sixth-order periodic orbit appears in a small interval around $\alpha=-76.8$. It includes chaotic behavior (blue regions) besides the periodic performance, and strange attractors can be found in it. To represent this, we plot in Figure 10, in the $(x_1,x_2)$-space, the orbit of $x^{(0)}=(0.0001,0.0001)$ under $ACTV^6((x_1,x_2),\alpha)$, for $\alpha=-76.9$, lying in the blue region.

4.2. Dynamical Planes

The tool with which we can graphically visualize most of the obtained information is the dynamical plane; in it, we represent the basins of attraction of the attracting fixed points for several values of the parameter $\alpha$. This can only be done when the nonlinear system has dimension 2, although the results of the dynamical analysis are valid for any dimension.
To calculate the dynamical planes for the systems, a grid of points is defined in the real plane, and each point is used as an initial estimate of the iterative method for a fixed $\alpha$. If the iterative method converges to some zero of the polynomial system from a particular point of the grid, then that point is assigned a certain color; in particular, if it converges to one of the roots of $q(x)$, the predominant colors are orange, blue, green or brown, depending on which of the four roots $(\pm 1,\pm 1)$ the orbit converges to. If an initial estimate has not converged to any root of the polynomial in at most 100 iterations, it is colored black. The colors in the basins are darker or lighter, indicating that the orbit of a certain initial estimate converges to the fixed point of the basin with more or fewer iterations, the lighter colors corresponding to fewer iterations.
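The procedure just described can be sketched as follows for the uncoupled system $q(x)$. Colors are replaced by integer labels, and the mesh is much coarser than the one used in the figures:

```python
import numpy as np

def actv6(x, a):
    """Coordinate operator actv_j^6(x, alpha) on q(x) (expression above)."""
    p = (1 + a - 2*(1 + 3*a)*x**2 + (11 + 15*a)*x**4 + (404 - 20*a)*x**6
         + 5*(155 + 3*a)*x**8 + (782 - 6*a)*x**10 + (77 + a)*x**12)
    return p / (128 * x**5 * (1 + x**2)**2 * (1 + 3*x**2))

def dynamical_plane(a, lim=5.0, m=21, max_it=100, tol=1e-8):
    """Label 0..3 for convergence to (1,1), (1,-1), (-1,1), (-1,-1); -1 otherwise."""
    grid = np.linspace(-lim, lim, m)
    plane = -np.ones((m, m), dtype=int)
    for i, x1 in enumerate(grid):
        for j, x2 in enumerate(grid):
            z = np.array([x1, x2], dtype=float)
            for _ in range(max_it):
                if np.any(z == 0) or not np.all(np.isfinite(z)) or np.max(np.abs(z)) > 1e3:
                    break
                z = actv6(z, a)
                if np.max(np.abs(np.abs(z) - 1.0)) < tol:
                    plane[i, j] = int(z[0] < 0) * 2 + int(z[1] < 0)
                    break
    return plane

plane = dynamical_plane(1.0)
```

Mapping the integer labels to the four basin colors (and the remaining `-1` entries to black) yields a coarse version of the dynamical planes in the figures.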
For values of $\alpha$ with stable behavior, our graphs have a grid of $500\times 500$ points, with 100 as the maximum number of iterations and $[-5,5]$ as the limits for both axes (see Figure 11); for values of $\alpha$ in unstable regions, the interval is $[-40,40]$ in both axes in Figure 12 and $[-30,30]$ in Figure 13. Periodic orbits are also observed in Figure 13.
In Figure 11a, we see four basins of attraction corresponding to the roots of the polynomial system, with very stable behavior. However, each basin of attraction has more than one connected component for α = −5 and α = −40, as can be seen in Figure 11b,c, respectively. This behavior intensifies for lower values of α, close to the instability zones seen in the parameter line of Figure 4b, as shown in Figure 11d.
In Figure 12, we can see chaotic behavior when the initial estimates are taken in the black zones, producing orbits with erratic behavior that do not lead to the expected solution. Finally, in Figure 13, the phase space for α = −73 is plotted, in which the following three periodic orbits are painted in yellow:
  • { ( 21.0469 , 1 ) , ( 0.368161 , 1 ) , ( 0.361236 , 1 ) , ( 17.9769 , 1 ) } ,
  • { ( 1 , 17.9769 ) , ( 1 , 0.368161 ) , ( 1 , 0.361236 ) , ( 1 , 21.0469 ) } ,
  • { ( 21.0469 , 21.0469 ) , ( 0.368161 , 0.368161 ) , ( 0.361236 , 0.361236 ) , ( 17.9769 , 17.9769 ) } .
We can observe three attracting orbits whose coordinates are symmetric.
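The periodicity of orbits such as those in Figure 13 can be checked numerically by comparing iterates that are p steps apart along the tail of a computed orbit. A generic sketch follows (with a synthetic orbit, since the ACTV6 iteration itself is not reproduced in this section):

```python
import numpy as np

def find_period(orbit, tol=1e-6, max_period=8):
    """Smallest p such that points p steps apart coincide (within tol)
    along the tail of a precomputed orbit; None if no p <= max_period fits."""
    tail = np.asarray(orbit[-2 * max_period:])
    for p in range(1, max_period + 1):
        if np.all(np.linalg.norm(tail[p:] - tail[:-p], axis=-1) < tol):
            return p
    return None

# Synthetic example: a 4-cycle in the plane, repeated to mimic a converged orbit.
cycle = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
orbit = cycle * 10
print(find_period(orbit))  # detects period 4
```

Applied to the tail of an orbit computed with ACTV6 for α = −73, this kind of check confirms the period of the attracting cycles plotted in yellow.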

5. Numerical Results

We work with the following test functions and their known zeros:
(1)
F1(x1, x2) = ( e^{x1} e^{x2} + x1 cos(x2), x1 + x2 − 1 ), ξ̄1 ≈ ( 3.46750, −2.46750 ).
(2)
F2(x1, x2, x3) = ( sin(x1) + x2^2 + log(x3) − 7, 3x1 + 2^{x2} − x3^3 + 1, x1 + x2 + x3 − 5 ), ξ̄1 ≈ ( −2.21537, 2.49969, 4.71567 ).
(3)
F3(x1, x2) = ( x1 + e^{x2} − cos(x2), 3x1 − x2 − sin(x2) ), ξ̄1 = ( 0, 0 ).
(4)
F4(x1, x2, x3, x4) = ( x2 x3 + x4 (x2 + x3), x1 x3 + x4 (x1 + x3), x1 x2 + x4 (x1 + x2), x1 x2 + x1 x3 + x2 x3 − 1 ), ξ̄1 ≈ ( 0.57735, 0.57735, 0.57735, −0.28868 ).
(5)
F5(x) = 0, with components arctan(x_i) + 1 − 2( Σ_{k=1}^{20} x_k^2 − x_i^2 ) = 0, i = 1, 2, …, 20; ξ̄1 ≈ ( 0.1757, 0.1757, …, 0.1757 ), ξ̄2 ≈ ( −0.1496, −0.1496, …, −0.1496 ).
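The reported zeros can be sanity-checked by evaluating the residual of a test function at the approximate root. For example, assuming F5 reads arctan(x_i) + 1 − 2(Σ_k x_k^2 − x_i^2) with two symmetric zeros near 0.1757 and −0.1496 (signs reconstructed here), the 4-digit approximations leave residual norms of the order of 10^{−3}:

```python
import numpy as np

def F5(x):
    # Assumed reading of the test function:
    # arctan(x_i) + 1 - 2 * (sum_k x_k^2 - x_i^2), i = 1, ..., 20
    s = np.sum(x ** 2)
    return np.arctan(x) + 1.0 - 2.0 * (s - x ** 2)

xi1 = np.full(20, 0.1757)    # first reported zero (4 digits)
xi2 = np.full(20, -0.1496)   # second reported zero (4 digits)
print(np.linalg.norm(F5(xi1)), np.linalg.norm(F5(xi2)))  # both small
```

With more digits of the roots, the residuals shrink accordingly; this is a quick way to catch typos in tabulated zeros.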
The numerical computations were carried out in MATLAB R2022b, using variable-precision arithmetic with 2000 digits; the most relevant results are shown in the tables below, which contain the following data:
  • k is the number of iterations performed ("-" appears if there is no convergence or the maximum number of iterations allowed is exceeded).
  • x̄ is the obtained solution.
  • ρ is the approximated computational order of convergence (ACOC), defined in [16] as
    ρ = ln( ‖x^{(k+1)} − x^{(k)}‖ / ‖x^{(k)} − x^{(k−1)}‖ ) / ln( ‖x^{(k)} − x^{(k−1)}‖ / ‖x^{(k−1)} − x^{(k−2)}‖ ),  k = 2, 3, …
    (if the value of ρ for the last iterations is not stable, then "-" appears in the table).
  • ϵ_aprox is the norm of the difference between the last two iterates, ‖x^{(k+1)} − x^{(k)}‖.
  • ϵ_f is the norm of the function F evaluated at the last iterate, ‖F(x^{(k+1)})‖ (if the error estimates are very far from zero, or NaN or infinity is obtained, "-" is placed).
The iterative process stops when one of the following three criteria is satisfied:
(i)
‖x^{(k+1)} − x^{(k)}‖ < 10^{−100};
(ii)
‖F(x^{(k+1)})‖ < 10^{−100};
(iii)
100 iterations are reached.
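The ACOC and the stopping criteria above are straightforward to implement from the sequence of step norms ‖x^{(k+1)} − x^{(k)}‖. A small sketch:

```python
import math

def acoc(step_norms):
    """ACOC from the last three step norms e_j = ||x^(j+1) - x^(j)||,
    following the formula above."""
    e2, e1, e0 = step_norms[-1], step_norms[-2], step_norms[-3]
    return math.log(e2 / e1) / math.log(e1 / e0)

def stop(step_norm, f_norm, k, tol=1e-100, kmax=100):
    # Stopping rules (i)-(iii) of the text.
    return step_norm < tol or f_norm < tol or k >= kmax

# For an order-p method, e_{j+1} ~ C * e_j^p; synthetic data with p = 6:
errs = [1e-2]
for _ in range(2):
    errs.append(errs[-1] ** 6)
print(acoc(errs))  # close to 6
```

Note that checking convergence of order six down to 10^{−100} requires extended precision (hence the 2000-digit arithmetic used by the authors); the double-precision sketch above only illustrates the formulas.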
The results shown in the tables indicate that, for the stable values α = 1 and α = −5, the expected behavior is obtained. For the parameter values whose dynamical planes present instability (α = −73.25 and α = −76.89), the convergence is in some examples somewhat poorer than expected: in Tables 2 and 5 the computational order of these members is reduced, and in Table 6 they require more iterations than the other methods of the same order. The behavior in Table 3 is within the normal range.
If the initial point is selected in the black area, these unstable family members do not converge to the solution (see Table 4). In this table, we also observe that Newton's method does not converge to the desired solution from the initial estimate x(0) = (17.76, 17.78), unlike the stable members of the ACTV family.
Although Newton's method is faster (except in the case of F3) than the sixth-order methods, its error estimates are improved upon by the stable members of the proposed family. Moreover, there are cases where Newton's method fails because the initial estimate is far from the sought roots; in these cases, the stable proposed methods are still able to converge.

6. Conclusions

In this manuscript, we extend a family of iterative methods, initially designed to solve nonlinear equations, to nonlinear systems, maintaining the order of convergence. By means of multidimensional real dynamics techniques, we establish which members of the family are stable and which show chaotic behavior, taking some of these cases for the numerical experiments.
On the other hand, the dynamical study reveals that there are no strange fixed points of attracting nature; however, in a very small interval of values of the parameter α, we find some periodic orbits and chaos. In the numerical tests, we compare the method with some existing ones in the literature of equal and lower order, verifying that the proposed schemes comply with the theoretical results. In short, the proposed family is very stable.
Therefore, we conclude that our aim has been achieved: we selected members of the proposed class of iterative methods that improve on Newton's method and other known sixth-order schemes in terms of the wideness of their basins of attraction.

Author Contributions

Conceptualization, A.C. and J.R.T.; methodology, J.G.M.; software, A.R.-C.; validation, A.C., J.G.M. and J.R.T.; formal analysis, J.R.T.; investigation, A.C.; resources, A.R.-C.; writing—original draft preparation, A.R.-C.; writing—review and editing, A.C. and J.R.T.; visualization, J.G.M.; supervision, A.C. and J.R.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Not applicable.

Acknowledgments

The authors thank the reviewers for their corrections and comments that have helped to improve this document.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Artidiello, S. Design, Implementation and Convergence of Iterative Methods for Solving Nonlinear Equations and Systems Using Weight Functions. Ph.D. Thesis, Universitat Politècnica de València, Valencia, Spain, 2014.
  2. Cordero, A.; Moscoso, M.E.; Torregrosa, J.R. Chaos and Stability in a New Iterative Family for Solving Nonlinear Equations. Algorithms 2021, 14, 101.
  3. Cordero, A.; Hueso, J.L.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
  4. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton–Jarratt composition. Numer. Algorithms 2010, 55, 87–99.
  5. Behl, R.; Sarría, Í.; González, R.; Magreñán, Á.A. Highly efficient family of iterative methods for solving nonlinear models. J. Comput. Appl. Math. 2019, 346, 110–132.
  6. Capdevila, R.; Cordero, A.; Torregrosa, J.R. A New Three-Step Class of Iterative Methods for Solving Nonlinear Systems. Mathematics 2019, 7, 121.
  7. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300.
  8. Robinson, R.C. An Introduction to Dynamical Systems: Continuous and Discrete; American Mathematical Society: Providence, RI, USA, 2012.
  9. Geum, Y.H.; Kim, Y.I.; Neta, B. A sixth-order family of three-point modified Newton-like multiple-root finders and the dynamics behind their extraneous fixed points. Appl. Math. Comput. 2016, 283, 120–140.
  10. Devaney, R.L. An Introduction to Chaotic Dynamical Systems; Advances in Mathematics and Engineering; CRC Press: Boca Raton, FL, USA, 2003.
  11. Julia, G. Mémoire sur l'itération des fonctions rationnelles. J. Math. Pures Appl. 1918, 8, 47–245.
  12. Fatou, P. Sur les équations fonctionnelles. Bull. Soc. Math. Fr. 1919, 47, 161–271.
  13. Fatou, P. Sur les équations fonctionnelles. Bull. Soc. Math. Fr. 1920, 48, 208–314.
  14. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: Cambridge, MA, USA, 1970.
  15. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
  16. Cordero, A.; Torregrosa, J.R. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698.
Figure 1. Efficiency index IO for systems of size n = 2 to n = 10 .
Figure 2. Efficiency index IO for systems of size n = 10 to n = 50 .
Figure 3. Eigenvalues associated with the fixed points. (a) Eig_j( l1(α), …, l1(α), α ) for α < 1. (b) Eig_j( l1(α), …, l1(α), α ) for α > 307.
Figure 4. Parameter line of ACTV6(x, α) in α ∈ (−77, −5). (a) α ∈ (−77, −5). (b) α ∈ (−77, −72).
Figure 5. Parameter line of ACTV6(x, α) in α ∈ (−5, 1).
Figure 6. Feigenbaum diagrams of ACTV6(x, α), for −73.5 < α < −72.5, from different critical points. (a) ( cri1(α), cri1(α) ) and ( cri2(α), cri2(α) ). (b) ( cri1(α), cri1(α) ), a detail. (c) ( cri2(α), cri2(α) ), a detail.
Figure 7. Strange attractors of ACTV6(x, α) for α in the blue quadruple-period cascade. (a) α = −73.25. (b) α = −73.25.
Figure 8. Details of the strange attractors of ACTV6(x, α). (a) α = −73.25, a detail. (b) α = −73.25, a detail.
Figure 9. Feigenbaum diagrams of ACTV6(x, α), for −76.9 < α < −76.6, from different critical points. (a) ( cri1(α), cri1(α) ) and ( cri2(α), cri2(α) ). (b) ( cri1(α), cri1(α) ), a detail. (c) ( cri2(α), cri2(α) ), a detail.
Figure 10. Strange attractors of ACTV6(x, α) for α in the blue quadruple-period cascade. (a) α = −76.89. (b) α = −76.89, a detail. (c) α = −76.89, further detail.
Figure 11. Dynamical planes for some stable values of the parameter α. (a) α = 1. (b) α = −5. (c) α = −40. (d) α = −72.
Figure 12. Unstable dynamical planes of ACTV6(x, α) on q(x). (a) α = −73.25. (b) α = −73.25.
Figure 13. Periodic orbits for parameter α = −73 for different initial values. (a) x(0) = (0.361236, 1). (b) x(0) = (1, 0.361236). (c) x(0) = (21.0469, 21.0469).
Table 1. Computational cost (products/quotients) of the proposed and comparison methods.

Method | Complexity C
Newton | (1/3) n^3 + n^2 − (1/3) n
C6_1 | (2/3) n^3 + 5 n^2 − (2/3) n
C6_2 | n^3 + 4 n^2 − n
B6 | n^3 + 8 n^2 − n
PSH6_1 | (1/3) n^3 + 11 n^2 − (1/3) n
PSH6_2 | (5/3) n^3 + 9 n^2 − (2/3) n
XH6 | (2/3) n^3 + 7 n^2 − (2/3) n
ACTV6 | (2/3) n^3 + 5 n^2 − (2/3) n
Table 2. Results for function F1, using as seed x(0) = (2.5, 1.5).

Iterative Method | k | ρ | ϵ_aprox | ϵ_f | ξ̄ | CPU time
ACTV6, α = 1 | 4 | 6.0038 | 8.580 × 10^{−76} | 0 | ξ̄1 | 3.5022
ACTV6, α = −5 | 4 | 6.0001 | 8.576 × 10^{−110} | 0 | ξ̄1 | 3.5053
ACTV6, α = −73.25 | 4 | 5.883 | 1.015 × 10^{−38} | 1.299 × 10^{−229} | ξ̄1 | 3.8991
ACTV6, α = −76.89 | 4 | 5.874 | 6.0584 × 10^{−38} | 6.1856 × 10^{−225} | ξ̄1 | 3.4959
Newton | 9 | 2.0000 | 7.299 × 10^{−146} | 1.964 × 10^{−291} | ξ̄1 | 1.2855
C6_1 | 4 | 6.0011 | 3.393 × 10^{−101} | 0 | ξ̄1 | 1.6573
C6_2 | 4 | 6.0011 | 1.044 × 10^{−88} | 0 | ξ̄1 | 1.5505
B6, b1 = 3/5 | 4 | 6.004 | 2.675 × 10^{−75} | 0 | ξ̄1 | 1.7605
B6, b1 = 1 | 4 | 6.0008 | 6.720 × 10^{−92} | 0 | ξ̄1 | 1.5495
PSH6_1, α = 0 | 4 | 6.0101 | 3.503 × 10^{−65} | 0 | ξ̄1 | 3.6422
PSH6_1, α = 10 | 4 | 6.0000 | 1.013 × 10^{−125} | 0 | ξ̄1 | 3.5589
PSH6_2, α = 10 | 4 | 6.159 | 2.556 × 10^{−34} | 8.874 × 10^{−203} | ξ̄1 | 3.9764
XH6 | 4 | 6.0026 | 1.161 × 10^{−80} | 0 | ξ̄1 | 1.6411
Table 3. Results for function F2, with initial estimation x(0) = (2, 2, 4).

Iterative Method | k | ρ | ϵ_aprox | ϵ_f | ξ̄ | CPU time
ACTV6, α = 1 | 4 | 6.0096 | 7.045 × 10^{−158} | 0 | ξ̄1 | 6.4105
ACTV6, α = −5 | 4 | 6.0214 | 5.875 × 10^{−179} | 0 | ξ̄1 | 6.1329
ACTV6, α = −73.25 | 4 | 6.0161 | 1.622 × 10^{−106} | 0 | ξ̄1 | 6.2416
ACTV6, α = −76.89 | 4 | 6.0162 | 1.447 × 10^{−105} | 0 | ξ̄1 | 6.0655
Newton | 8 | 2.0004 | 9.136 × 10^{−113} | 3.933 × 10^{−225} | ξ̄1 | 1.4579
C6_1 | 4 | 5.995 | 7.360 × 10^{−143} | 0 | ξ̄1 | 2.4032
C6_2 | 4 | 6.0076 | 3.126 × 10^{−188} | 0 | ξ̄1 | 2.1022
B6, b1 = 3/5 | 4 | 6.007 | 2.544 × 10^{−149} | 0 | ξ̄1 | 2.3387
B6, b1 = 1 | 4 | 5.9824 | 4.643 × 10^{−201} | 0 | ξ̄1 | 2.2433
PSH6_1, α = 0 | 4 | 6.0048 | 5.895 × 10^{−119} | 0 | ξ̄1 | 5.7749
PSH6_1, α = 10 | 4 | 5.9613 | 2.647 × 10^{−165} | 0 | ξ̄1 | 6.0901
PSH6_2, α = 10 | 4 | 5.9973 | 5.492 × 10^{−46} | 2.478 × 10^{−273} | ξ̄1 | 7.0308
XH6 | 4 | 6.0019 | 9.690 × 10^{−153} | 0 | ξ̄1 | 2.5955
Table 4. Results for function F3 and initial guess x(0) = (17.76, 17.78).

Iterative Method | k | ρ | ϵ_aprox | ϵ_f | ξ̄ | CPU time
ACTV6, α = 1 | 10 | 5.9096 | 1.148 × 10^{−35} | 4.953 × 10^{−211} | ξ̄1 | 8.2697
ACTV6, α = −5 | 79 | 6.0503 | 1.697 × 10^{−177} | 0 | ξ̄1 | 65.9107
ACTV6, α = −73.25 | - | - | - | - | - | -
ACTV6, α = −76.89 | - | - | - | - | - | -
Newton | - | - | - | - | - | -
C6_1 | - | - | - | - | - | -
C6_2 | - | - | - | - | - | -
B6, b1 = 3/5 | 6 | 5.9119 | 1.353 × 10^{−35} | 2.050 × 10^{−210} | ξ̄1 | 2.4022
B6, b1 = 1 | - | - | - | - | - | -
PSH6_1, α = 0 | - | - | - | - | - | -
PSH6_1, α = 10 | - | - | - | - | - | -
PSH6_2, α = 10 | 26 | 5.994 | 7.5229 × 10^{−84} | 0 | ξ̄1 | 26.0787
XH6 | - | - | - | - | - | -
Table 5. Results for function F4 and initial approximation x(0) = (1, 1, 1, 1).

Iterative Method | k | ρ | ϵ_aprox | ϵ_f | ξ̄ | CPU time
ACTV6, α = 1 | 4 | 6.0917 | 6.448 × 10^{−61} | 0 | ξ̄1 | 9.0684
ACTV6, α = −5 | 4 | 7.0212 | 5.485 × 10^{−75} | 0 | ξ̄1 | 9.1124
ACTV6, α = −73.25 | 4 | 4.7204 | 2.972 × 10^{−41} | 3.558 × 10^{−239} | ξ̄1 | 8.6135
ACTV6, α = −76.89 | 4 | 4.3741 | 3.677 × 10^{−41} | 6.341 × 10^{−236} | ξ̄1 | 8.8727
Newton | 9 | 2.0083 | 2.927 × 10^{−144} | 4.469 × 10^{−290} | ξ̄1 | 2.0165
C6_1 | 4 | 6.3232 | 1.707 × 10^{−91} | 0 | ξ̄1 | 2.5118
C6_2 | 4 | 6.2666 | 4.519 × 10^{−110} | 0 | ξ̄1 | 2.4156
B6, b1 = 3/5 | 4 | 6.3173 | 4.001 × 10^{−93} | 0 | ξ̄1 | 2.4766
B6, b1 = 1 | 4 | 6.2481 | 5.909 × 10^{−118} | 0 | ξ̄1 | 2.3827
PSH6_1, α = 0 | 4 | 5.7907 | 5.398 × 10^{−50} | 2.505 × 10^{−292} | ξ̄1 | 8.5868
PSH6_1, α = 10 | 4 | 7.0301 | 1.819 × 10^{−70} | 0 | ξ̄1 | 8.7981
PSH6_2, α = 10 | 4 | 4.726 | 3.477 × 10^{−37} | 7.995 × 10^{−207} | ξ̄1 | 8.9312
XH6 | 4 | 6.3144 | 5.747 × 10^{−94} | 0 | ξ̄1 | 2.5340
Table 6. Results for function F5 using as initial estimation x(0) = (1, 1, …, 1).

Iterative Method | k | ρ | ϵ_aprox | ϵ_f | ξ̄ | CPU time
ACTV6, α = 1 | 5 | 6.000 | 3.605 × 10^{−128} | 0 | ξ̄1 | 427.4885
ACTV6, α = −5 | 5 | 5.9965 | 1.036 × 10^{−181} | 0 | ξ̄1 | 411.3637
ACTV6, α = −73.25 | 7 | 6.0011 | 4.997 × 10^{−81} | 0 | ξ̄1 | 625.0273
ACTV6, α = −76.89 | 16 | 6.000 | 8.800 × 10^{−139} | 0 | ξ̄2 | 1367.1656
Newton | 11 | 2.000 | 5.758 × 10^{−148} | 2.829 × 10^{−294} | ξ̄1 | 24.0160
C6_1 | 5 | 5.9999 | 2.773 × 10^{−113} | 0 | ξ̄1 | 42.2095
C6_2 | 5 | 6.000 | 9.4818 × 10^{−155} | 0 | ξ̄1 | 35.5499
B6, b1 = 3/5 | 5 | 5.9999 | 1.685 × 10^{−116} | 0 | ξ̄1 | 45.1993
B6, b1 = 1 | 5 | 6.000 | 1.018 × 10^{−173} | 0 | ξ̄1 | 40.3692
PSH6_1, α = 0 | 5 | 5.9991 | 2.420 × 10^{−87} | 0 | ξ̄1 | 482.4615
PSH6_1, α = 10 | 5 | 5.9627 | 2.360 × 10^{−162} | 0 | ξ̄1 | 598.1063
PSH6_2, α = 10 | 5 | 5.9723 | 1.127 × 10^{−48} | 2.564 × 10^{−285} | ξ̄1 | 593.7798
XH6 | 5 | 5.9999 | 6.072 × 10^{−118} | 0 | ξ̄1 | 59.5617
