Design, Convergence and Stability of a Fourth-Order Class of Iterative Methods for Solving Nonlinear Vectorial Problems

A new parametric family of iterative schemes for solving nonlinear systems is presented. Fourth-order convergence is demonstrated and its stability is analyzed as a function of the parameter values. This study allows us to detect the most stable elements of the class, to find the fractals in the boundary of the basins of attraction and to reject those with chaotic behavior. Some numerical tests show the performance of the new methods, confirm the theoretical results and allow us to compare the proposed schemes with other known ones.


Introduction
Finding the solutions z̄ of a system of nonlinear equations F(z) = 0, where F : D ⊆ R^n → R^n is a real vector function of several variables, is a classical and important problem in science and engineering. In general, there are no analytical methods to find the solutions of these problems, so we must use iterative schemes that approximate them.
The best-known algorithm in this context is the classical Newton scheme, expressed as z^(i+1) = z^(i) − [F′(z^(i))]^(−1) F(z^(i)), i = 0, 1, . . ., where F′ represents the Jacobian matrix associated to the function F. This scheme has quadratic convergence under some conditions on F, F′ and the initial estimation z^(0). Different techniques can be used to design Newton-like iterative schemes, such as direct composition, weight functions, estimation of F′(z) by means of the divided difference operator, etc. Thus, several high-order methods for computing the solutions of F(z) = 0 have been proposed in the literature. These new schemes are designed with the aim of accelerating convergence or improving computational efficiency. For example, the authors recently proposed in [1][2][3] new parametric families of iterative methods and a fast algorithm for solving nonlinear systems. Other researchers have published Jacobian-free iterative methods with interesting orders of convergence; see, for instance, [4][5][6]. In these manuscripts the Jacobian matrix is replaced by the divided difference operator [·, ·; F].
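As a minimal sketch of how the classical Newton scheme above can be implemented for systems, one solves a linear system per iteration instead of forming the inverse Jacobian; the test system used below is illustrative only and is not taken from the examples of this paper.

```python
import numpy as np

def newton_system(F, J, z0, tol=1e-12, max_iter=100):
    """Classical Newton method: z <- z - [F'(z)]^{-1} F(z).

    Solves the linear system J(z) dz = -F(z) at each step rather
    than forming the inverse Jacobian explicitly.
    """
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        dz = np.linalg.solve(J(z), -F(z))
        z = z + dz
        if np.linalg.norm(dz) < tol:
            break
    return z

# Illustrative system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda z: np.array([z[0]**2 + z[1]**2 - 4.0, z[0] * z[1] - 1.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]], [z[1], z[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Solving with `np.linalg.solve` instead of inverting J is the same cost-saving idea exploited later by schemes that reuse one coefficient matrix for several linear systems.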
The weight-function procedure (in this case, with matrix functions) also plays an important role in the design of schemes for solving systems F(z) = 0, as we can see in [7,8]. Iterative processes with memory for systems are also beginning to appear in the literature. These are methods in which the new iterate is obtained from at least the two previous ones. In general, the convergence order is increased without adding functional evaluations [9][10][11].
In [12], the authors presented a class of iterative schemes depending on a parameter, whose first step is y^(i) = z^(i) − [F′(z^(i))]^(−1) F(z^(i)) and whose remaining steps depend on a real parameter β ≠ 0. Let us observe that, at each iteration, only one inverse operator is needed: the three linear systems to be solved per iteration share the same coefficient matrix, and therefore the number of operations (products and quotients) is reduced. For any value of the parameter, these schemes have order of convergence four and, in particular, for β = 1/5 order five is reached. This particular case was used in [13] for determining preliminary orbits of artificial satellites. Family (1) is denoted by M4(β). Now, we recall some concepts used to prove the order of convergence, some of them first introduced in [14].
Let us consider a sequence {z^(i)}_{i≥0} in R^n converging to z̄. Then, it has order of convergence p, p ≥ 1, if there exist K > 0 (0 < K < 1 if p = 1) and i₀ such that ‖z^(i+1) − z̄‖ ≤ K ‖z^(i) − z̄‖^p for all i ≥ i₀. To present a rigorous proof of the order of convergence of a vectorial iterative method, we introduced in [14] the following notation, which we now recall for completeness.
Let z̄ + h ∈ R^n lie in a neighborhood of the solution z̄ of F(z) = 0. Assuming that F′(z̄) is nonsingular, the Taylor expansion of F around z̄ is

F(z̄ + h) = F′(z̄)[h + Σ_{q=2}^{p} C_q h^q] + O(h^{p+1}),

where C_q = (1/q!)[F′(z̄)]^(−1) F^(q)(z̄), q ≥ 2; notice that C_q h^q ∈ R^n. In addition, we can describe the Taylor expansion of F′ around z̄ as

F′(z̄ + h) = F′(z̄)[I + Σ_{q=2}^{p} q C_q h^{q−1}] + O(h^p),

where I is the identity matrix of size n × n. Therefore, q C_q h^{q−1} ∈ L(R^n). From this expansion, we write

[F′(z̄ + h)]^(−1) = [I + A₂ h + A₃ h² + · · ·][F′(z̄)]^(−1) + O(h^p),

where A_j, j = 2, 3, . . ., are obtained by forcing [F′(z̄ + h)]^(−1)[F′(z̄ + h)] = I, which gives A₂ = −2C₂, A₃ = 4C₂² − 3C₃, . . . The equation

e^(i+1) = L (e^(i))^p + O((e^(i))^{p+1}),

where p is the order of convergence, e^(i) = z^(i) − z̄ and L is a p-linear operator, is called the error equation, and we denote (e^(i))^p = (e^(i), e^(i), . . . , e^(i)).
We summarize the contents of this manuscript: the new class of iterative methods for solving nonlinear systems is presented in Section 2, and its convergence is analyzed. As all the methods of the class have the same order, their qualitative properties are studied in Section 3. Section 4 is devoted to numerical tests that confirm the theoretical results and show how the new schemes perform. This manuscript ends with some conclusions.

Construction and Analysis of the Methods
Starting from the damped Newton method and following the ideas presented by Sharma and Arora in [15], we propose a parametric class of multipoint schemes for approximating the solutions of nonlinear systems, whose iterative expression, labeled (5), depends on real parameters α, β, γ, µ, δ to be chosen so as to obtain the highest possible order of convergence. In what follows, we establish the conditions for this convergence.

Theorem 1. Let z̄ ∈ R^n be a solution of F(z) = 0 and let z^(0) be close enough to z̄. Supposing F′ to be nonsingular and continuous at z̄, the sequence {z^(i)}_{i≥0} obtained from (5) converges to z̄ with order four, γ being a free disposable parameter.
Proof. Let us consider the Taylor expansions of F(z^(i)) and of its derivative around z̄, together with the corresponding expansion of [F′(z^(i))]^(−1). By replacing these developments in the two steps of the iterative expression (5), we obtain the error equation of the scheme. Thus, in order to attain the desired order four, the coefficients of the lower-order terms of the error equation must vanish, which produces a system of equations in the parameters. The solutions of this system express α, β, µ and δ in terms of the free parameter γ, and the fourth order of convergence is proven.
We denote this uniparametric class of iterative methods by PM4(γ). Some particular values of γ yield simpler iterative expressions; for instance, γ = 9/8 recovers the scheme of Sharma et al. proposed in [8]. In the following section we discuss, by means of discrete dynamics, which elements of this family have better qualitative properties.

Stability Analysis of Class PM4(γ)
The qualitative behavior of the vectorial rational operator associated to an iterative method applied to a low-degree polynomial system has proven to be a useful and efficient tool for studying the stability of the methods; see, for example, [16][17][18][19] and the references therein. Our aim now is to determine which elements of the class PM4(γ) show low dependence of the convergence on the initial guess. To this end, let us recall some basic definitions from the tools of real multidimensional discrete dynamics.
Let us consider R(z) as the rational fixed-point vectorial function associated to a family of methods acting on a polynomial system p(z) = 0 with n variables, p : R^n → R^n. Many of the following concepts are direct extensions of their counterparts in complex dynamics [16,20].
A point z* ∈ R^n is a fixed point of R if R(z*) = z*, and it is called a strange fixed point when it is not a root of p(z) = 0. The stability of the fixed points was characterized by Robinson [20], who states the character of a k-periodic point z* depending on the eigenvalues λ₁, λ₂, . . . , λₙ of R′(z*): it is repelling if |λ_j| > 1 for all j = 1, 2, . . . , n; unstable or saddle if there exists at least one j₀ such that |λ_{j₀}| > 1; and attracting if |λ_j| < 1 for all j = 1, 2, . . . , n. In addition, a fixed point is called hyperbolic if |λ_j| ≠ 1 for all j.
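Robinson's characterization above can be made operational with a small helper that labels a fixed point from the eigenvalues of R′(z*) (computed elsewhere). This is an illustrative sketch, not code from the paper.

```python
import numpy as np

def classify_fixed_point(eigenvalues, tol=1e-12):
    """Label a fixed point from the eigenvalues of R'(z*).

    attracting (superattracting if all eigenvalues vanish) when all
    |lambda_j| < 1; repelling when all |lambda_j| > 1; saddle when at
    least one modulus exceeds 1 but not all of them do.  The point is
    hyperbolic when no modulus equals 1.
    """
    mods = np.abs(np.asarray(eigenvalues, dtype=complex))
    if np.all(mods < 1 - tol):
        kind = "superattracting" if np.all(mods < tol) else "attracting"
    elif np.all(mods > 1 + tol):
        kind = "repelling"
    elif np.any(mods > 1 + tol):
        kind = "saddle"
    else:
        kind = "non-hyperbolic"
    hyperbolic = bool(np.all(np.abs(mods - 1.0) > tol))
    return kind, hyperbolic
```

For instance, eigenvalues (0.5, 2) give a saddle point, while all-zero eigenvalues identify a superattracting fixed point, the situation of the roots of p(z) in the results below.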
Theorem 2. The rational operator Op4(z, γ) corresponding to PM4(γ) has 2^n fixed points, which are superattracting and whose components are zeros of the polynomial system p(z). This operator also has a number of real strange fixed points, depending on γ, whose components are combinations of the real roots q₁(γ) and q₂(γ) of a polynomial q(t) (being the only ones that are real, if any) and of the roots of p(z):
(a) If γ < 0 or γ > 423/8, the roots q_i(γ), i = 1, 2, are real. Then, the strange fixed points of the form (q_{σ₁}(γ), q_{σ₂}(γ), . . . , q_{σₙ}(γ)), with σ_i ∈ {1, 2}, are repelling. Moreover, if at least one (but not all) of the components of a strange fixed point is equal to ±1, then it is a saddle point.
(b) If 0 ≤ γ ≤ 423/8, the roots of q(t) are not real, so there exist no strange fixed points.
Proof. In order to calculate the fixed points of Op4(z, γ), we solve Op4(z, γ) = z componentwise, for j = 1, 2, . . . , n. Then, z_j = ±1 are components of the fixed points, as are the roots q_i(γ), i = 1, 2, . . . , 6, of q(t); depending on γ, at most two of these roots are real. Then, the eigenvalues Eig_j of Op4′(z, γ), j = 1, 2, . . . , n, are evaluated at each fixed point, and its stability is deduced from them. It is clear that the roots of p(z) are superattracting, as all the eigenvalues vanish at these fixed points. Moreover, when we analyze the absolute value of the eigenvalue Eig_j evaluated at a strange fixed point whose jth component coincides with q_i(γ), we find that it is greater than one when γ < 0 or γ > 423/8.
Therefore, the combinations among the q_i(γ) give rise to strange fixed points, which are repelling. Moreover, all points whose components are q_j(γ) combined with ±1 are saddle points (see Figure 1). Let us remark that all the strange fixed points show the same behavior in each interval, so we have plotted only the performance of |Eig_j(q₁(γ), . . . , q₁(γ))|. Now, we need to study the possible existence of attracting periodic orbits or strange attractors. This is done by analyzing the asymptotic behavior of the free critical points, when they exist.

Bifurcation Diagrams and Critical Points
Firstly, we analyze the critical points of Op4(z, γ). A critical point is a value of z that makes all the eigenvalues of Op4′(z, γ) equal to zero. When it is not also a solution of p(z) = 0, it is called a free critical point. Proof. The proof is straightforward, as critical points are, by definition, those values of z at which all the eigenvalues of Op4′(z, γ) vanish. Let us remark that there are wide sets of values of γ for which there is no free critical point. The relevance of this information lies in the fact (see [21]) that there is a critical point in the basin of attraction of each attracting point. So, the absence of free critical points proves that the only possible behavior is convergence to the roots. We plot the dynamical planes of Op4(z, γ) for different values of γ where this situation happens; see Figure 2.
The plots in Figure 2 have been obtained by using the programs presented in [22], as follows: a maximum of 80 iterations, a mesh of 400 × 400 points, and proximity to the roots as stopping criterion, with a tolerance of 10^(−3). Each point is painted with a color depending on the root it converges to; the color is darker when the number of iterations needed is higher and, finally, the point is painted black if it reaches 80 iterations without satisfying the stopping criterion.
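The procedure just described can be sketched as follows. Since the expression of Op4 is not reproduced in this excerpt, the sketch uses Newton's operator on the uncoupled stand-in system p(z) = (z₁² − 1, z₂² − 1), with a coarse mesh instead of 400 × 400; the structure (iterate, detect the reached root, color by root index, black for no convergence) is the same.

```python
import numpy as np

def newton_op(z):
    # Newton operator for p(z) = (z1^2 - 1, z2^2 - 1); acts componentwise
    return z - (z**2 - 1.0) / (2.0 * z)

def basin_index(z0, roots, max_iter=80, tol=1e-3):
    """Index of the root the orbit of z0 reaches, or -1 (black point)."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        if np.any(z == 0.0):          # Newton operator undefined here
            return -1
        z = newton_op(z)
        for k, r in enumerate(roots):
            if np.linalg.norm(z - r) < tol:
                return k
    return -1

roots = [np.array(r, dtype=float)
         for r in [(1, 1), (1, -1), (-1, 1), (-1, -1)]]
# coarse "dynamical plane": one basin index per starting point
plane = [[basin_index((x, y), roots) for x in np.linspace(-2, 2, 9)]
         for y in np.linspace(-2, 2, 9)]
```

Mapping each index in `plane` to a color, with darker shades for higher iteration counts, reproduces pictures of the kind shown in Figure 2.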
In Figure 2a,b we see only one connected component for each basin of attraction, without divergent behavior; they correspond to γ ≥ 0, γ = 0 being a bifurcation value at which the performance of the rational function changes. These members have the same stable behavior as Newton's scheme, doubling its order of convergence (see [16]).

On the other hand, the orbits of the critical points give us qualitative information about the iterative method involved. In Figure 3 we present two real parameter lines constructed from these orbits (see Theorem 3) for n = 2. We use a free critical point cp as seed, with −153/8 < γ < −177/64 or −177/64 < γ < 0, over a mesh of 500 × 500 points; for better visualization, the interval where γ is defined is also widened. Each value of γ is colored following this pattern: red if cp converges to one of the roots of the system, blue if cp diverges and black otherwise. Moreover, 200 is the maximum number of iterations considered and the tolerance in the convergence to the roots is 10^(−3). In each interval, all the free critical points have the same performance, so we present only cp = (c₁(γ), c₁(γ), . . . , c₁(γ)) in Figure 3 (for the bidimensional case). The parameter line is plotted for −153/8 < γ < −177/64 and −177/64 < γ < 0, since outside these intervals the free critical points have complex components. Let us remark that there is convergence to the roots everywhere, except for a small black region around γ = −18.75 and a narrower one around γ = −16.9.
At this stage, bifurcation diagrams are employed to analyze the changes of performance for different ranges of γ. When Op4(z, γ) acts on a critical point, different performances are found after 500 iterations of the method, for each γ in a mesh of 3000 points. This results in convergence to periodic orbits or to chaotic attractors.
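A bifurcation diagram of this kind is computed by discarding a transient and recording the subsequent iterates for each parameter value. The following sketch illustrates the procedure on the one-dimensional logistic map as a stand-in, since the multidimensional operator Op4 is not reproduced here.

```python
import numpy as np

def bifurcation_samples(f, params, z0=0.5, n_settle=500, n_keep=100):
    """For each parameter value, iterate the map n_settle times to
    discard the transient, then record the next n_keep iterates."""
    samples = {}
    for a in params:
        z = z0
        for _ in range(n_settle):
            z = f(z, a)
        orbit = []
        for _ in range(n_keep):
            z = f(z, a)
            orbit.append(z)
        samples[a] = orbit
    return samples

# Stand-in one-dimensional map (logistic), NOT the operator Op4:
logistic = lambda z, a: a * z * (1.0 - z)
samples = bifurcation_samples(logistic, [2.8, 3.2])
```

Plotting the parameter on the horizontal axis against the recorded iterates on the vertical axis yields the familiar period-doubling picture: a single branch where a fixed point attracts, splitting branches at periodic orbits, and dense vertical segments in the chaotic ranges.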
In Figure 4 we see the bifurcation diagrams corresponding to the black region of the parameter line for −153/8 < γ < −177/64 (Figure 3a). In Figure 4a, convergence to one root is seen, but some period-doubling cascades also appear in a small interval around γ = −18.75, including chaotic behavior (blue regions). There, strange attractors can be found.

To represent these strange attractors, we plot in the (z₁, z₂)-space the orbit of z^(0) = (0.29, 0.29) under Op4((z₁, z₂), γ), for several close values of γ lying in the blue area of Figure 4b. For each γ, 2500 different starting guesses have been used; their first 400 iterates are not plotted, the following 500 appear in blue and the last ones in magenta. In Figure 5 it can be observed how a parabolic strange fixed point, which bifurcates into periodic orbits of doubling period, becomes chaotic, its orbits being dense in small regions of the (z₁, z₂)-space.

This can be checked in the associated dynamical planes. Unstable performance is limited to values of γ in the black regions of Figure 3. In Figure 6b a strange attractor is found for γ = −18.75, the one plotted in Figure 5b,c. Finally, in Figure 6c,d, the phase spaces for γ = −18.45 and γ = −16.911, respectively, are represented; in them, 4-periodic orbits appear in yellow (the elements of each orbit are linked by yellow lines). In all cases, there exist more attracting orbits with symmetric coordinates.
We conclude that the members of the class PM4(γ) are very stable: there exist no attracting strange fixed points, and unstable performance appears only in very narrow intervals of γ.

Numerical Performance
We begin this section by checking the applicability of the proposed methods PM4(γ), analyzing their behavior on some academic nonlinear systems. To this end, we select some members of the class with good qualitative properties and others with stability problems, based on the results obtained in the previous section. Afterwards, we compare their behavior with that of other known methods on the same problems. The maximum number of iterations considered is 1000 and the stopping criterion is ‖z^(i+1) − z^(i)‖ < 10^(−150) or ‖F(z^(i))‖ < 10^(−150). All the calculations have been made in Matlab R2019b with variable precision arithmetic with 1000 digits of mantissa, to minimize round-off errors. In each example we use the Approximated Computational Order of Convergence (ACOC), introduced in [23], which numerically estimates the theoretical order of convergence p and is defined as

ACOC ≈ ln(‖z^(i+1) − z^(i)‖ / ‖z^(i) − z^(i−1)‖) / ln(‖z^(i) − z^(i−1)‖ / ‖z^(i−1) − z^(i−2)‖).
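The ACOC can be estimated directly from the stored iterates. The following sketch (in double precision rather than the 1000-digit arithmetic used in the paper) checks it on Newton's method for the scalar equation z² − 2 = 0, whose theoretical order is two.

```python
import numpy as np

def acoc(iterates):
    """Approximated Computational Order of Convergence from the last
    four iterates of a (scalar or vector) sequence."""
    d = [np.linalg.norm(np.atleast_1d(np.subtract(iterates[k + 1], iterates[k])))
         for k in range(len(iterates) - 1)]
    return np.log(d[-1] / d[-2]) / np.log(d[-2] / d[-3])

# Newton iterates for f(z) = z^2 - 2 from z0 = 1.5
zs = [1.5]
for _ in range(4):
    z = zs[-1]
    zs.append(z - (z * z - 2.0) / (2.0 * z))
order = acoc(zs)   # close to the theoretical order 2
```

With high-precision arithmetic the same formula recovers the order four of the PM4(γ) methods once the iterates are close enough to the solution.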
We compare the proposed family, for different values of the parameter γ, with the class M4(β) of [12] for different values of β; with the extension of Jarratt's scheme to nonlinear systems [24], whose iterative expression is

y^(i) = z^(i) − (2/3) [F′(z^(i))]^(−1) F(z^(i)),
z^(i+1) = z^(i) − (1/2) [3F′(y^(i)) − F′(z^(i))]^(−1) [3F′(y^(i)) + F′(z^(i))] [F′(z^(i))]^(−1) F(z^(i));

and with the recent fourth-order scheme HM4 [1], based on the divided difference operator [·, ·; F] and the identity matrix I.
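As an illustration, the extension of Jarratt's fourth-order scheme, under its standard formulation (a damped Newton predictor y = z − (2/3)[F′(z)]⁻¹F(z) followed by a correction combining F′(y) and F′(z)), translates into code as follows; the test system below is chosen for demonstration and is not one of the paper's examples.

```python
import numpy as np

def jarratt(F, J, z0, tol=1e-12, max_iter=50):
    """Fourth-order Jarratt scheme for nonlinear systems."""
    z = np.asarray(z0, dtype=float)
    for _ in range(max_iter):
        Jz = J(z)
        step = np.linalg.solve(Jz, F(z))               # [F'(z)]^{-1} F(z)
        y = z - (2.0 / 3.0) * step
        Jy = J(y)
        # [3F'(y) - F'(z)]^{-1} [3F'(y) + F'(z)]
        M = np.linalg.solve(3.0 * Jy - Jz, 3.0 * Jy + Jz)
        dz = 0.5 * (M @ step)
        z = z - dz
        if np.linalg.norm(dz) < tol:
            break
    return z

# Demonstration system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda z: np.array([z[0]**2 + z[1]**2 - 4.0, z[0] * z[1] - 1.0])
J = lambda z: np.array([[2 * z[0], 2 * z[1]], [z[1], z[0]]])
root = jarratt(F, J, [2.0, 0.5])
```

Note that each iteration requires one evaluation of F, two Jacobians and two linear solves, the functional cost usually quoted for fourth-order Jarratt-type methods.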
In order to check the efficiency of the proposed methods, we compare the execution time (in seconds) only in Example 1, since in Example 2 there is no critical difference among the methods, in the third example there are many divergent results and, in the fourth example, the execution times are qualitatively equal to those of Example 1.

Example 1. For the first nonlinear system, we use the initial estimation z^(0) = (1, 2)^T.

The numerical results for this test system are displayed in Tables 3 and 4, where we show the same information as in the previous example. The initial estimation used is z^(0) = (1, 4)^T, the solution obtained being z̄ ≈ (0.203, 4.915)^T in all the cases shown. We observe in Tables 3 and 4 that PM4(γ) attains a better ACOC than M4(β = 5) with the same number of iterations, the results being similar to those of M4(β = 1/5, β = 1), Jarratt's scheme and HM4.

We show the numerical results for the initial estimation z^(0) = (1.25, . . . , 1.25)^T in Tables 5 and 6. We note that in this case there are different behaviors depending on the value of γ, which is consistent with the results predicted in the dynamical study. It should be noticed that M4(β = 5) also diverges. In general, the method with the best performance for this example is HM4. We also observe that the stability of the methods depends on the initial condition used. As we can observe in Table 8, we obtain similar or better values of ‖z^(i) − z^(i−1)‖ and ‖F(z^(i))‖, with an ACOC equal to 4 or very close to 4.

Conclusions
A parametric family of two-step iterative methods for solving nonlinear systems has been designed. This class contains other known schemes for particular values of the parameter, and it has fourth order of convergence for any value of the parameter. We have studied the stability of this family on quadratic polynomial systems, in terms of the values of the parameter. This analysis allows us to detect the most stable elements of the class and those with chaotic behavior. Some numerical tests confirm the dynamical and theoretical results.