Abstract
In this paper, new tools for the dynamical analysis of iterative schemes with memory for solving nonlinear systems of equations are proposed. These tools are consistent with those of the scalar case and provide interesting results about the symmetry and width of the basins of attraction of different iterative procedures with memory existing in the literature.
1. Introduction
Iterative methods for solving nonlinear systems of equations are a fundamental tool in applied mathematics, since in most problems it is complicated or impossible to solve these systems analytically. For this reason, iterative methods are used: given an initial estimate close enough to the solution, they produce a sequence of approximations to it.
As mentioned above, the initial estimate must be close to the solution to guarantee convergence, but this is not always the case. This is why dynamical studies are becoming increasingly important, since they show the behaviour of the method for different initial estimations.
The study of the stability of iterative fixed-point methods can be carried out by means of real or complex dynamics tools applied to a rational operator that results from applying the iterative scheme to low-degree polynomials. These dynamical techniques can be used to compare or to deepen the understanding of known iterative methods, as can be seen in [1,2,3], and to analyze the qualitative properties of new iterative methods without memory (see, for example, [4,5,6,7]) or with memory (see, for instance, [8,9]). They also change when the method is multidimensional, as we can see in [8,10,11,12,13,14,15].
In this paper, we are going to lay the foundations for our future work in the study of the dynamics of iterative methods with memory for approximating the solutions of nonlinear systems.
In the first section, we present the theoretical concepts and the results obtained. Then, in the second section, we apply these results to some multidimensional iterative schemes with memory. We choose two different systems to observe the behaviour of these iterative schemes.
2. Theoretical Concepts
Let F(x) = 0 be a system of nonlinear equations, where F is an operator defined from ℝ^n to ℝ^n.
The standard form of an iterative method with memory that uses only two previous iterations to calculate the next one is:
where the first two iterates are the initial estimations.
From here on, we assume that, when introducing memory into the iterative method, the operator depends not only on the current iterate but also on the variable of the previous step; otherwise, the study would reduce to the dynamics of an iterative method without memory for systems.
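As a minimal illustration of such a scheme, the following sketch iterates a generic map that uses the two previous iterates to compute the next one. The secant step for f(x) = x² − 2 is our own illustrative choice of step, not one of the methods studied in this paper; all names are ours.

```python
# Hypothetical sketch: a fixed-point iteration with memory needs TWO
# starting guesses, and each new iterate depends on the two previous ones.
# The secant step for f(x) = x^2 - 2 is an illustrative stand-in only.
def secant_step(x_prev, x_curr):
    f = lambda t: t**2 - 2.0
    return x_curr - f(x_curr) * (x_curr - x_prev) / (f(x_curr) - f(x_prev))

def iterate_with_memory(M, x0, x1, tol=1e-10, max_iter=50):
    x_prev, x_curr = x0, x1
    for _ in range(max_iter):
        x_next = M(x_prev, x_curr)      # next iterate uses both previous ones
        if abs(x_next - x_curr) < tol:
            return x_next
        x_prev, x_curr = x_curr, x_next
    return x_curr

root = iterate_with_memory(secant_step, 1.0, 2.0)
```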
A function defined from ℝ^{2n} to ℝ^n cannot have fixed points, since a point and its image by the function must coincide for the point to be fixed. Therefore, an auxiliary function O is defined as follows:
If is a fixed point of O, then
and by the definition of O, one has
Thus, the discrete dynamical system is defined as
where is the operator associated with the vectorial iterative method with memory.
Then, a point is a fixed point of O if both conditions above hold. If a fixed point of operator O is not a solution of the system, it is called a strange fixed point.
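Assuming, as in the multidimensional approach of [10], that the auxiliary operator has the form O(z, x) = (x, M(z, x)), the following toy sketch illustrates the construction; the one-dimensional secant-type M is our own choice, so the names and the map itself are illustrative only. Iterating O drives both blocks of the point to a common value, which is then a root of the system.

```python
# Sketch of the auxiliary operator O(z, x) = (x, M(z, x)) that turns a
# scheme with memory into an ordinary discrete dynamical system.
# M is a toy secant step for f(x) = x^2 - 2, NOT a method from the paper.
def f(t):
    return t**2 - 2.0

def M(z, x):                       # one step of the method with memory
    return x - f(x) * (x - z) / (f(x) - f(z))

def O(z, x):
    return (x, M(z, x))

# A fixed point of O must satisfy O(z, x) = (z, x), i.e. z = x and
# M(x, x) = x.  We iterate O until both blocks coincide numerically;
# the checking is done BEFORE stepping to avoid a 0/0 in M.
z, x = 1.0, 2.0
for _ in range(60):
    if abs(x - z) < 1e-12:
        break
    z, x = O(z, x)
```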
The basin of attraction of a fixed point is defined as the set of pre-images of any order such that
To study the character of the fixed points, we use the following result from [16].
Theorem 1.
Let be of class and x a fixed point. Let , , …, be the eigenvalues of , where is the Jacobian matrix of G.
- If , for , then x is attracting.
- If , for , then x is superattracting.
- If one eigenvalue has , then x is repelling or saddle.
- If , for , then x is repelling.
If one eigenvalue of satisfies , then x is not hyperbolic and we cannot conclude anything about the character of this fixed point.
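Theorem 1 can be applied numerically. The sketch below (all names are ours, and the Jacobian is approximated by central finite differences) classifies a fixed point by the eigenvalue moduli of the Jacobian, returning "not hyperbolic" when some modulus is close to 1, in which case nothing can be concluded.

```python
import numpy as np

# Classify a fixed point x of a smooth map G via Theorem 1, using a
# finite-difference approximation of the Jacobian G'(x).
def numerical_jacobian(G, x, h=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (G(x + e) - G(x - e)) / (2 * h)   # central differences
    return J

def classify_fixed_point(G, x, tol=1e-6):
    moduli = np.abs(np.linalg.eigvals(numerical_jacobian(G, x)))
    if np.all(moduli < tol):
        return "superattracting"
    if np.any(np.abs(moduli - 1.0) < tol):
        return "not hyperbolic"          # no conclusion possible
    if np.all(moduli < 1.0):
        return "attracting"
    if np.all(moduli > 1.0):
        return "repelling"
    return "saddle"

# Example: Newton's method for F(x) = (x1^2 - 2, x2^2 - 3); the root is a
# superattracting fixed point of the Newton operator.
def newton_G(x):
    F = np.array([x[0]**2 - 2.0, x[1]**2 - 3.0])
    J = np.diag([2.0 * x[0], 2.0 * x[1]])
    return x - np.linalg.solve(J, F)

label = classify_fixed_point(newton_G, np.array([np.sqrt(2.0), np.sqrt(3.0)]))
```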
We want to obtain a more specific result for determining the character of the fixed points of operator O. To do this, we calculate the Jacobian matrix of O, denoted by which has size . The result is matrix
We denote the matrices
and
So, matrix is defined as a block matrix
We need to obtain the eigenvalues of the Jacobian matrix evaluated at the fixed points in order to determine their character (Theorem 1). It is easy to see that
By applying a result of [17] for calculating the determinant of a block matrix, we obtain
Then, is an eigenvalue of if
which is the same as
These calculations are summarized in the following result.
Theorem 2.
The eigenvalues of are those satisfying:
In particular, is an eigenvalue of if 0 is an eigenvalue of since
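Hedging on the exact notation (we write A and B below for the blocks of the Jacobian of O containing the derivatives with respect to the old and the current variable, respectively, which is our labelling), Theorem 2 can be checked numerically: every eigenvalue λ of the block matrix [[0, I], [A, B]] satisfies det(λ²I − λB − A) = 0, which is the standard companion linearization of that quadratic eigenvalue problem.

```python
import numpy as np

# Numerical check of Theorem 2 with random blocks A, B (our notation):
# eigenvalues lam of [[0, I], [A, B]] satisfy det(lam^2 I - lam B - A) = 0.
rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

Op = np.block([[np.zeros((n, n)), np.eye(n)],
               [A,                B]])

eigs = np.linalg.eigvals(Op)                    # 2n eigenvalues
residuals = [abs(np.linalg.det(lam**2 * np.eye(n) - lam * B - A))
             for lam in eigs]
```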
Another relevant concept in a dynamical study is that of critical points. In this case, we use the following definition of this type of point.
Definition 1.
Vector is a critical point of if all the eigenvalues of are 0.
This is a restrictive definition of a critical point, since it is usually sufficient that the determinant of the Jacobian matrix vanishes; however, in this case, if we do not use the above definition, we obtain surfaces of critical points because of the form of the operator.
To study the eigenvalues of , we use Theorem 2. In particular, for at least one of the eigenvalues to be 0, it must be satisfied that ; this is equivalent to
From this, we obtain the following result.
Theorem 3.
The determinant of is zero if and only if it satisfies
3. Experimental Results
In this section, we present the dynamical study of two simple vectorial methods with memory: Kurchatov’s scheme, ref. [18], whose expression is as follows
and Steffensen’s method with memory, ref. [19], whose expression is
This study is performed on two polynomial systems of different degrees.
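As an illustration, the following sketch implements Kurchatov's scheme for systems in its standard form from the literature, x_{k+1} = x_k − [2x_k − x_{k−1}, x_{k−1}; F]⁻¹ F(x_k), where [u, v; F] is the usual componentwise first-order divided-difference operator. The test system and all identifier names are our own choices, not taken from the paper.

```python
import numpy as np

# First-order divided-difference operator [u, v; F], defined columnwise by
# [u,v;F]_j = (F(u_1..u_j, v_{j+1}..v_n) - F(u_1..u_{j-1}, v_j..v_n)) / (u_j - v_j).
def divided_difference(F, u, v):
    n = len(u)
    DD = np.zeros((n, n))
    for j in range(n):
        w_hi = np.concatenate([u[:j + 1], v[j + 1:]])
        w_lo = np.concatenate([u[:j], v[j:]])
        DD[:, j] = (F(w_hi) - F(w_lo)) / (u[j] - v[j])
    return DD

# Kurchatov's derivative-free scheme with memory for systems:
# x_{k+1} = x_k - [2 x_k - x_{k-1}, x_{k-1}; F]^{-1} F(x_k).
def kurchatov(F, x0, x1, tol=1e-12, max_iter=50):
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for _ in range(max_iter):
        DD = divided_difference(F, 2 * x - x_prev, x_prev)
        x_prev, x = x, x - np.linalg.solve(DD, F(x))
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x

# Illustrative coupled test system (our choice) with root (1, 1).
F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])
root = kurchatov(F, [0.5, 0.5], [2.0, 2.0])
```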
3.1. Uncoupled Third Order System
We perform this dynamical study on a system of cubic polynomials in order to use graphical tools. However, these results can easily be extended to higher-dimensional systems. The system, denoted by , is as follows:
where . The only root with real components of this system is .
We choose a polynomial of degree 3 instead of one of degree 2, as is usually done in dynamical studies, because a second-degree polynomial system gives an operator that does not depend on the previous iteration , so the study would be the same as that of an iterative method without memory for solving nonlinear systems.
To make the study simpler, we denote the previous iterate by z and the current one by x, as was done in the theoretical study.
3.1.1. Kurchatov’s Method
The operator of Kurchatov's method on the cubic system is
Theorem 4.
The only fixed point of operator has equal components and has superattractor character.
Proof.
We calculate matrices and appearing in the dynamical study,
and
If we evaluate these matrices at the fixed point, we get
and
By Theorem 2, it follows that . From the above relationship, it follows that the only eigenvalue associated with the fixed point is . So, the fixed point is a superattracting point. □
Regarding the critical points of operator , we have
Theorem 5.
Operator has four types of critical points, denoted by , which have the following form.
The notation of Table 1 is understood in such a way that, for example, the points C2(z, x) are those that verify that and , and the other components are arbitrary.
Table 1.
Types of points of Kurchatov’s scheme that are critical points for .
Proof.
To do this, we calculate the eigenvalues of for any point and obtain those satisfying the condition that all their eigenvalues are 0.
It is obtained that the critical points are those and that satisfy these two expressions
It follows that the four types of points defined in Table 1 are critical points of operator . □
We have that the points of type are preimages of the only fixed point since . We are only going to study the orbit of the points of type and , since the points converge to the fixed point for any value of z, and the critical points of type admit a study symmetrical to that of the critical points of type .
- Operator evaluated at the critical points of type has the following form:
The convergence of these points depends only on and , as we can see in the expression of . For this reason, we draw convergence planes of these points in these two variables.
Now, we describe how the planes of convergence are generated [4]. To draw these planes for the points of type , we determine which of these points belong to the basins of attraction of the attracting fixed points, that is, which of these points converge to the attractor.
We make a mesh of points of the set . On one of the axes, we have , and on the other, , and with them we construct our points of type . We take each of these points and apply the operator to it. If it converges to the only attracting fixed point, which is , then we paint it orange. As convergence criterion, we require that the distance from the iterate to the fixed point be less than in fewer than 40 iterations. If this is not verified, the mesh point is painted black.
As can be seen in Figure 1, we have slower convergence when approaches the value 0, because of the shape of the operator, but we still have convergence. In the rest of the cases, the convergence to the point is clear.
Figure 1.
Behaviour of critical points .
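The convergence-plane procedure just described can be sketched as follows, with illustrative names and a toy one-dimensional step standing in for the actual operator. The 40-iteration budget follows the text, while the mesh set [−20, 20] and the tolerance are our assumptions, since those values are not fully recoverable here.

```python
import numpy as np

# Sketch of a convergence plane: iterate a map `step` with memory from each
# mesh point; cells whose orbit reaches `attractor` within `tol` in at most
# `max_iter` iterations are marked 1 (orange), the rest 0 (black).
# Bounds, tolerance and the toy `step` used below are assumptions.
def convergence_plane(step, attractor, lo=-20.0, hi=20.0, npts=200,
                      tol=1e-3, max_iter=40):
    grid = np.linspace(lo, hi, npts)
    plane = np.zeros((npts, npts), dtype=int)
    for i, a in enumerate(grid):
        for j, b in enumerate(grid):
            z, x = a, b                  # build the starting point from the axes
            for _ in range(max_iter):
                z, x = x, step(z, x)     # one step of the method with memory
                if abs(x - attractor) < tol:
                    plane[i, j] = 1
                    break
    return plane

# Toy usage: a contraction x -> x/2 whose attractor is 0, so every mesh
# point should be painted orange (marked 1).
plane = convergence_plane(lambda z, x: x / 2.0, 0.0, npts=20)
```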
- Next, we evaluate the operator at the critical points of type . In this case, the operator is:
The critical points of type depend on variables and . For this reason, we draw the convergence plane of the critical points depending on these variables.
As in the previous case, Figure 2 shows that if any of the variables approaches the value 0 we have slow convergence, but in the rest of the points the convergence to the point is clear.
Figure 2.
Behaviour of critical points .
To conclude the dynamical study of Kurchatov’s method for this system, let us draw some dynamical planes in order to see the behaviour of the points in general.
To draw these planes, given that we have an operator with 4 variables, what we have done is to select a parameter a, so that . We try different values of a to see which one gives the best results. Usually, testing with small values of a gives good results. Thus, our variables would be and , and the variables z are a variation of these.
To make the dynamical planes, we have chosen a mesh of points, where each point of the mesh is taken as the starting point, and we study its orbit. If the seed converges to , it is painted orange, and if it does not converge, it is painted black. We say the orbit converges to the point when the distance from the iterate to it is less than , within at most 40 iterations.
We have tested with different values of a over a wide range and obtained the same dynamical plane for the different values of a (Figure 3). As we can see in Figure 3, all initial points converge to the root, showing the good stability properties of this iterative scheme with memory, even in this multidimensional case.
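The dynamical-plane construction with the parameter a can be sketched in the same spirit. Here we assume the slice z = x + (a, a), which is our reading of "the variables z are a variation of these"; the toy contraction, root list, bounds and tolerance are likewise our own illustrative choices.

```python
import numpy as np

# Sketch of a dynamical plane for a 4-variable operator (z1, z2, x1, x2):
# the mesh gives x = (x1, x2) and z is sliced as z = x + (a, a)  (assumed).
# Cells are coloured by the index of the root the orbit converges to,
# or -1 (black) if no convergence within `max_iter` iterations.
def dynamical_plane(step, roots, a=0.1, lo=-20.0, hi=20.0, npts=50,
                    tol=1e-3, max_iter=40):
    grid = np.linspace(lo, hi, npts)
    plane = np.full((npts, npts), -1, dtype=int)
    for i, x1 in enumerate(grid):
        for j, x2 in enumerate(grid):
            x = np.array([x1, x2])
            z = x + a                    # z is a variation of x
            for _ in range(max_iter):
                z, x = x, step(z, x)
                hits = [k for k, r in enumerate(roots)
                        if np.linalg.norm(x - r) < tol]
                if hits:
                    plane[i, j] = hits[0]    # colour index of the basin
                    break
    return plane

# Toy usage: a contraction towards the single root (0, 0); every cell
# should receive colour index 0.
plane = dynamical_plane(lambda z, x: x / 2.0, [np.array([0.0, 0.0])], npts=10)
```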
Figure 3.
Dynamical plane of Kurchatov’s scheme for .
3.1.2. Steffensen’s Scheme
In this part, we perform the dynamical study of Steffensen’s scheme with memory for system . Operator obtained by Steffensen’s method is
Theorem 6.
Operator has four fixed points, which are
- fixed point , with ,
- strange fixed point , being ,
- strange fixed point , being ,
- strange fixed point , with .
The strange fixed points are not hyperbolic, and the fixed point is a superattractor point.
Proof.
In order to study the character of these fixed points, we need to obtain matrices and .
We denote by for the following expression:
Thus,
We denote by for expression:
Thus,
Let us now obtain the character of the fixed points , for .
- For the point associated to , the related matrices are and . From Theorem 2, we conclude that the fixed point is a superattracting point.
- For the point associated to , we obtain and . By applying Theorem 2, the eigenvalues of this point are the values satisfying . It follows that the eigenvalues are 0 and 1, so we cannot conclude anything about the character of this strange fixed point, as it is not hyperbolic.
- For the fixed point associated to , the matrices are and . The eigenvalues associated with this fixed point are those values that satisfy . It follows that the eigenvalues are 0 and 1, so again the point is not hyperbolic.
- Finally, let us study the character of the fixed point associated with . The matrices for this fixed point are and . So, the eigenvalues of the fixed point associated with are the values that satisfy . It follows that the eigenvalues are 0 and 1, so again the point is not hyperbolic. □
Now, let us calculate the critical points.
Theorem 7.
The critical points of operator are the vectors and of one of the following 16 types, which we denote by for . Table 2 summarizes the different types of critical points obtained.
Table 2.
Types of points of Steffensen’s scheme that are critical points for .
We are working with systems of equations with real variables, so it is assumed that the critical points have real numbers as their components.
Proof.
If we define , , as
then, we can check that
Additionally, it follows that all eigenvalues are zero if the point has one of the forms given in Table 2. □
Table 3.
Symmetry relation between .
As we can see in Table 3, there is a certain symmetry relation between the following .
For that reason, and because the operator also satisfies a certain symmetry in its components, we only study the behaviour of some of the types of critical points.
- The asymptotic behaviour of the critical points is analysed with the following plane, where the convergence to the fixed points is shown in different colours. If the distance from the iterate to a fixed point is less than , we say that the iteration is in the basin of attraction of that fixed point. A point is painted orange if the critical point converges to , blue if it converges to the strange fixed point , red if it converges to the strange fixed point and green if it converges to the point . If the points are painted black, they have not converged to any of the fixed points in fewer than 40 iterations. In this case, we have a fixed value , and the value depends on , so the variables of the axes are and , as shown in Figure 4.
Figure 4.
Convergence of the critical points of type .
- In a similar way to the previous case, we study the convergence of the critical points of type and of type . In these cases, the value is also fixed, and the value depends on ; for this reason, the variables of the axes are and , as in the previous cases and as can be seen in Figure 5. The behaviour of both types of critical points is the same; for that reason, we only show one dynamical plane.
Figure 5.
Convergence of the critical points of type and .
- For the critical points of type , the convergence study is similar to the previous ones, but in this case none of the variables are fixed, and it is and that depend on and , respectively; for this reason, the dynamical plane has as axis variables the values of and , as shown in Figure 6.
Figure 6.
Convergence of the critical points of type .
- For the critical points of type and , we also have as variables on the axes the values of and , as shown in Figure 7. In this case, we have decided to show only one dynamical plane because the behaviour of both types of critical points is the same.
Figure 7.
Convergence of the critical points of type and .
- For the critical points of type , and , we also have as variables on the axes the values of and . The behaviour of these three types of critical points is the same; for that reason, we only show one dynamical plane (Figure 8).
Figure 8. Convergence of the critical points of type , and .
3.2. A Coupled Second-Order System
Now, we are going to solve another system with a more complicated structure, since its variables cannot be separated; that is, the first component of the operator does not depend only on the first components of x and z (and similarly for the second component); instead, both components of the operator depend on both components of the vectors. The next system we solve, denoted by , is
where . The real roots of this system are and .
3.2.1. Kurchatov’s Scheme
If we apply Kurchatov’s scheme to the proposed system, we obtain the following operator:
Theorem 8.
The only fixed points of the operator are and , and both have superattractor character.
Proof.
Now, we calculate the matrices and to obtain the character of these fixed points.
and
If we evaluate the previous matrices at the fixed points, we obtain in both cases the matrix whose components are all 0. So, by Theorem 2, all eigenvalues are 0 for both fixed points. For that reason, both fixed points are superattracting points. □
Theorem 9.
Operator has two types of points that are critical points. These points have one of the following two structures:
- where .
- where .
Proof.
As we can see,
By the form of and , we can see that all the eigenvalues are zero if
□
Let us draw the orbit of these critical points. In this case, we draw on the abscissa axis the values of , which is the same value as , and we draw on the other axis the value of since is obtained from and .
To generate these convergence planes of the points of type , we are going to see which of these points belong to the basins of attraction of the attractor fixed points, that is, which of these points converge to the attractor fixed points.
To do this, we make a mesh of points of the set . We made sure that increasing the set did not alter the behaviour. On one of the axes, we have the variable , and on the other, the variable , and with these variables we construct our points of type . We take each of these points of type , and we apply our operator on them.
If this initial point converges to , we paint it orange, and if it converges to , we paint it blue. As convergence criterion, we require that the distance from the iterate to the fixed point be less than in fewer than 40 iterations. If this is not verified, we paint it black.
Figure 9 shows the plane of convergence for the points of type .
Figure 9.
Convergence of the critical points of type .
In the same way that the plane of convergence of the points of type is generated, we generate the plane of convergence of the points of type , which is shown in Figure 10.
Figure 10.
Convergence of the critical points of type .
We observe in these convergence planes (Figure 9 and Figure 10) that there is global convergence to the roots of the system.
To conclude the dynamical study of the Kurchatov method for this system, we draw some dynamical planes in order to see the behaviour of the points in general. To draw these planes, given that we have an operator with 4 variables, what we have done is to select a parameter a, so that . Thus, our variables would be and , and the variables z are a variation of these.
To make the dynamical planes, we have chosen a mesh of points. If the initial point converges to the point , it is painted orange; if it converges to the point , it is painted blue; and if it does not converge to any point, it is painted black.
We have tested with different values of a over a wide range and obtained similar dynamical planes for the different values of a; for that reason, we only show Figure 11. As we can see in this figure, all initial estimations converge to the roots of the polynomial system.
Figure 11.
Dynamical plane of Kurchatov’s scheme with .
3.2.2. Steffensen’s Scheme
If we apply Steffensen’s scheme with memory to system , we obtain the following operator
Theorem 10.
The operator has three fixed points, namely:
- , which is a superattractor point.
- , which is a superattractor point.
- , which is a strange fixed point and not hyperbolic.
Proof.
Let us calculate the matrices and to obtain the character of the fixed points.
For the fixed points associated with the roots, both matrices are the zero matrix. So, by Theorem 2, both eigenvalues are 0. Then, the fixed points associated with the roots are superattracting points. Let us see what happens with the strange fixed point. The matrices are
By Theorem 2, the eigenvalues for that strange fixed point are the values that satisfy the following equation:
So, the eigenvalues are and 0. We cannot determine the character of that non-hyperbolic strange fixed point. □
Theorem 11.
Operator has six types of critical points. These types of points are:
All these points are preimages of one of the fixed points.
Proof.
Since
Then, all the eigenvalues are zero if has one of the types shown in Table 4.
Table 4.
Types of points of Steffensen’s scheme that are critical points for .
- Since evaluated at the , and points is , those types of critical points belong to the basin of attraction of .
- Since evaluated at the , and points is , those types of critical points belong to the basin of attraction of .
□
Below, we draw a dynamical plane for Steffensen’s scheme with memory in the same way as was done for Kurchatov’s scheme. In Figure 12, we can see a black region; this is due to slow convergence to the roots in that region since, as there are no critical points outside the basins of attraction of the roots, there cannot be convergence to any point other than the roots. Here, we also tried different values of the parameter a and obtained similar results, although the larger the parameter, the larger the slow-convergence zone.
Figure 12.
Dynamical plane of Steffensen’s method with .
4. Conclusions
The design of new vectorial iterative schemes with memory for solving nonlinear problems is a developing area of numerical analysis that has expanded in recent years. These methods can be checked numerically but, until now, there was no possibility of analyzing their qualitative performance, as the existing techniques, including both complex and real discrete dynamics, were not defined to overcome the high dimensionality of the rational functions involved. The proposed procedure has been tested with the analysis of the performance of Kurchatov’s and Steffensen’s multidimensional methods on coupled and non-coupled nonlinear polynomial systems. The results obtained show the applicability of this technique and present many possibilities for future research.
Author Contributions
The individual contributions of the authors are as follows: conceptualization, J.R.T.; writing, original draft preparation P.T.-N. and N.G.; validation, A.C. and J.R.T.; formal analysis, A.C.; numerical experiments, P.T.-N. and N.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research was partially supported by Grant PGC2018-095896-B-C22, funded by MCIN/ AEI/10.13039/5011000113033 by “ERDF A way of making Europe”, European Union; by the internal research project ADMIREN of Universidad Internacional de La Rioja (UNIR); and partially supported by Universitat Politècnica de València Contrato Predoctoral PAID-01-20-17 (UPV).
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Acknowledgments
The authors would like to thank the anonymous reviewers for their comments and suggestions, which have improved the final version of this manuscript.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Ardelean, G. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comput. 2011, 218, 88–95.
- Susanto, H.; Karjanto, N. Newton’s method’s basins of attraction revisited. Appl. Math. Comput. 2009, 215, 1084–1090.
- Epureanu, B.; Greenside, H. Fractal Basins of Attraction Associated with a Damped Newton’s Method. SIAM Rev. 1998, 40, 102–109.
- Magreñán, A. A new tool to study real dynamics: The Convergence Plane. Appl. Math. Comput. 2014, 248, 215–224.
- Neta, B.; Chun, C.; Scott, M. Basins of attraction for optimal eighth order methods to find simple roots of nonlinear equations. Appl. Math. Comput. 2014, 227, 567–592.
- Salimi, M.; Lotfi, T.; Sharifi, S.; Siegmund, S. Optimal Newton Secant like methods without memory for solving nonlinear equations with its dynamics. Int. J. Comput. Math. 2017, 94, 1759–1777.
- Soleimani, F.; Soleymani, F.; Shateyi, S. Some Iterative Methods Free from Derivatives and Their Basins of Attraction for Nonlinear Equations. Discret. Dyn. Nat. Soc. 2013, 2013, 301718.
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. Stability of King’s family of iterative methods with memory. Appl. Math. Comput. 2015, 271, 701–715.
- Wang, X.; Zhang, T.; Qin, Y. Efficient two-step derivative-free iterative methods with memory and their dynamics. Int. J. Comput. Math. 2016, 93, 1423–1446.
- Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods with memory. Comput. Appl. Math. 2017, 318, 504–514.
- Chicharro, F.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the effect of the multidimensional weight functions on the stability of iterative processes. Comput. Appl. Math. 2022, 405, 113052.
- Cordero, A.; Jordán, C.; Sanabria-Codesal, E.; Torregrosa, J.R. Design, convergence and stability of a fourth-order class of iterative methods for solving nonlinear vectorial problems. Fractal Fract. 2021, 5, 125.
- Cordero, A.; Soleymani, F.; Torregrosa, J.R. Dynamical analysis of iterative methods for nonlinear systems or how to deal with the dimension? Appl. Math. Comput. 2014, 244, 398–412.
- Wang, X.; Chen, X. Derivative-Free Kurchatov-Type Accelerating Iterative Method for Solving Nonlinear Systems: Dynamics and Applications. Fractal Fract. 2022, 6, 59.
- Liu, Y.; Pang, G. The basin of attraction of the Liu system. Commun. Nonlinear Sci. Numer. Simul. 2011, 16, 2065–2071.
- Robinson, R.C. An Introduction to Dynamical Systems: Continuous and Discrete; Pearson Education: Cranbury, NJ, USA, 2004.
- Silvester, J. Determinants of Block Matrices. Math. Gaz. 2000, 84, 460–467.
- Kurchatov, V.A. On a method of linear interpolation for the solution of functional equations (Russian). Dokl. Akad. Nauk SSSR 1971, 198, 524–526; translation in Soviet Math. Dokl. 1971, 12, 835–838.
- Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Englewood Cliffs, NJ, USA, 1964.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).