Generating Root-Finder Iterative Methods of Second Order: Convergence and Stability

Abstract: In this paper, a simple family of one-point iterative schemes for approximating the solutions of nonlinear equations is derived by means of the weight-function procedure. The convergence analysis is presented, showing the sufficient conditions on the weight function. Many known schemes are members of this family for particular choices of the weight function. The dynamical behavior of one of these choices is presented, analyzing the stability of the fixed points and the critical points of the rational function obtained when the iterative expression is applied to low-degree polynomials. Several numerical tests are given to compare different elements of the proposed family on non-polynomial problems.


Introduction
Solving nonlinear equations f(x) = 0, where f : I ⊆ R → R is a real function defined on an open interval I, is a classical problem with many applications in several branches of science and engineering. In general, to calculate the roots of f(x) = 0, we must resort to iterative schemes, which can be classified into methods with or without memory. In this manuscript, we work only with iterative methods without memory, that is, algorithms that, to calculate a new iterate, evaluate the nonlinear function (and its derivatives) only at the previous one.
There exist many iterative methods of different orders of convergence designed to estimate the roots of f(x) = 0 (see, for example, [1,2] and the references therein). The efficiency index, defined by Ostrowski in [3], allows classifying these algorithms in terms of their order of convergence p and the number of functional evaluations d needed per iteration to reach it. It is defined as I = p^{1/d}; the higher the efficiency index of an iterative scheme, the better. Moreover, this concept is complemented by the Kung–Traub conjecture [4], which establishes an upper bound for the order of convergence reachable with a given number of functional evaluations, p ≤ 2^{d−1}. Methods whose order attains this bound are called optimal methods, being the most efficient among those with the same number of functional evaluations per iteration.
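As a quick illustration of these definitions (ours, not from the paper), the efficiency index and the Kung–Traub bound can be computed directly; Newton's method, with order p = 2 and d = 2 evaluations per step, attains the optimal bound 2^{2−1} = 2.

```python
# Efficiency index I = p**(1/d): p = order, d = functional evaluations per step.
def efficiency_index(p, d):
    return p ** (1.0 / d)

# Kung-Traub conjecture: the optimal order with d evaluations is 2**(d - 1).
def optimal_order(d):
    return 2 ** (d - 1)

# Newton's method evaluates f and f' (d = 2) and has order p = 2,
# so it is optimal and its efficiency index is sqrt(2) ~ 1.414.
newton_index = efficiency_index(2, 2)
```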
Newton's method is a well-known technique for solving nonlinear equations,
x_{k+1} = x_k − f(x_k)/f'(x_k), k = 0, 1, . . . ,
starting with an initial guess x_0. This method fascinates many researchers because it is applicable to several kinds of equations, such as nonlinear equations, systems of nonlinear algebraic equations, differential equations, integral equations, and even random operator equations. However, as is well known, its behavior depends strongly on the initial estimation. In this work, we consider the family of weighted iterative schemes
x_{k+1} = x_k − H(t_k), k = 0, 1, . . . , (1)
where t_k = f(x_k)/f'(x_k) and H is a weight function. This family contains, for particular choices of function H, many known iterative schemes; in particular, Newton's method is obtained when H(t) = t.
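For reference, a minimal sketch of Newton's iteration in Python (the function names, tolerance, and stopping rule here are our illustrative choices, not the paper's):

```python
# Newton's method: x_{k+1} = x_k - f(x_k) / f'(x_k), starting from x0.
def newton(f, df, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:          # stop when the residual is small enough
            break
        x = x - fx / df(x)
    return x

# Example: approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```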
The following result establishes the conditions required on the weight function H in order to reach, at least, order two.

Theorem 1. Let x* ∈ I be a simple solution of f(x) = 0, where f is sufficiently differentiable in a neighborhood of x*. If H satisfies H(0) = 0, H'(0) = 1, and |H''(0)| < ∞, then, for initial guesses close enough to x*, the elements of family (1) converge to x* with order at least two, with error equation
e_{k+1} = (c_2 − (1/2)H''(0)) e_k^2 + O(e_k^3), (2)
where c_j = f^{(j)}(x*)/(j! f'(x*)), j ≥ 2.

Proof. Let us denote the error in each iteration of (1) by e_k = x_k − x*, where x* is a simple solution of equation f(x) = 0. By using Taylor series around x*, we have
f(x_k) = f'(x*)[e_k + c_2 e_k^2 + O(e_k^3)], f'(x_k) = f'(x*)[1 + 2c_2 e_k + O(e_k^2)],
and therefore, the weight variable t_k can be expressed as
t_k = f(x_k)/f'(x_k) = e_k − c_2 e_k^2 + O(e_k^3). (3)
Now, by using the Taylor expansion of H and (3), we get
H(t_k) = H(0) + H'(0) t_k + (1/2)H''(0) t_k^2 + O(t_k^3).
Then, the error equation is
e_{k+1} = e_k − H(t_k) = −H(0) + (1 − H'(0)) e_k + (H'(0) c_2 − (1/2)H''(0)) e_k^2 + O(e_k^3).
By applying conditions H(0) = 0, H'(0) = 1, and |H''(0)| < ∞, we obtain the error Equation (2). Therefore, the elements of the family (1) have quadratic order of convergence.
Many authors in the literature have devoted their efforts to analyzing the stability of Newton's method depending on the initial iterates. Good overviews can be found in [8,11]. Likewise, the dynamics of iterative schemes belonging to the family (1) can be studied and compared with that of Newton's method. We focus on an iterative family belonging to (1), called N2_α, which has a parameter in its iterative expression. Its stability is studied in Section 3 depending on the values of the parameter.
The weight function H(t_k) = t_k/(1 + αt_k) gives rise to the proposed iterative family:
x_{k+1} = x_k − f(x_k)/(f'(x_k) + αf(x_k)), k = 0, 1, . . .
Since H(0) = 0, H'(0) = 1, and H''(0) = −2α, its error equation is e_{k+1} = (c_2 + α) e_k^2 + O(e_k^3). Therefore, if we choose α = −c_2, the method increases its order of convergence, but c_2 = (1/2) f''(x*)/f'(x*) and x* is not known. This type of error equation leads us to think of methods with memory, which are beyond the scope of this work.
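A minimal Python sketch of the N2_α step, assuming the iteration x_{k+1} = x_k − f(x_k)/(f'(x_k) + αf(x_k)) that follows from H(t) = t/(1 + αt) with t_k = f(x_k)/f'(x_k) (tolerances and names are our illustrative choices):

```python
# N2_alpha iteration: x_{k+1} = x_k - f(x_k) / (f'(x_k) + alpha * f(x_k)).
# Note alpha = 0 recovers Newton's method, consistent with H(t) = t.
def n2_alpha(f, df, x0, alpha=0.3, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x = x - fx / (df(x) + alpha * fx)
    return x

# Example: positive root of f(x) = x^2 - 2, with alpha = 0.3.
root = n2_alpha(lambda x: x * x - 2, lambda x: 2 * x, 1.0, alpha=0.3)
```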
In this manuscript, we are interested in identifying both the values of α that provide methods with greater stability (while keeping the same order as Newton's method) and those whose corresponding schemes have bad stability properties; hence, we will not use any approximation for the parameter.

Complex Dynamics of the Family N2 α
In this section, a stability analysis of family N2 α is made in the context of complex dynamics. First, we are going to recall some concepts of this theory. Further information can be found in [12].
Let R : Ĉ → Ĉ be a rational function, where Ĉ denotes the Riemann sphere. The orbit of a point z_0 ∈ Ĉ is defined as the set of its successive images by R, i.e., {z_0, R(z_0), R^2(z_0), . . . , R^m(z_0), . . .}. When a point z_0 ∈ Ĉ satisfies R(z_0) = z_0, it is called a fixed point. In particular, a fixed point z_0 with z_0 ≠ z_r, where f(z_r) = 0, is called a strange fixed point of R. If the point z_T ∈ Ĉ satisfies R^T(z_T) = z_T, but R^t(z_T) ≠ z_T for t < T, it is a T-periodic point.
The orbits of the fixed points are classified depending on the value of the corresponding fixed point multiplier, R'(z). If z_0 is a fixed point of R, then it is attracting when |R'(z_0)| < 1. In particular, it is a superattracting point if |R'(z_0)| = 0. The point is called repelling when |R'(z_0)| > 1 and neutral in the case |R'(z_0)| = 1. When the operator has an attracting point z*, we define its basin of attraction, A(z*), as the set A(z*) = {z ∈ Ĉ : R^n(z) → z*, n → ∞}.
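These stability classes can be checked numerically; the following sketch (ours, not the paper's) estimates the multiplier |R'(z_0)| with a finite difference and classifies the fixed point accordingly. The thresholds are illustrative.

```python
# Estimate the fixed-point multiplier |R'(z0)| via a central finite difference.
def multiplier(R, z0, h=1e-6):
    return abs((R(z0 + h) - R(z0 - h)) / (2 * h))

# Classify a fixed point by its multiplier, with a small numerical tolerance.
def classify(R, z0, eps=1e-4):
    m = multiplier(R, z0)
    if m < eps:
        return "superattracting"
    if m < 1 - eps:
        return "attracting"
    if m > 1 + eps:
        return "repelling"
    return "neutral"

# Newton's operator for p(z) = z**2 - 1: N(z) = z - (z*z - 1)/(2*z);
# its fixed points z = +/-1 are superattracting.
N = lambda z: z - (z * z - 1) / (2 * z)
```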
A point z_0 ∈ Ĉ is called a critical point when R'(z_0) = 0, being a free critical point when z_0 ≠ z_r, where z_r satisfies f(z_r) = 0.
In this paper, we use two graphical tools to visualize the dynamical behavior of the fixed points in the complex plane: the dynamical and the parameter planes. We can find reference texts around these dynamic tools in works such as [9,13].
Dynamical planes represent the basins of attraction associated with the fixed points of the rational operator corresponding to an iterative method. For its implementation, the complex plane is divided into a mesh of values where the X-axis represents the real part of a point and the Y-axis its imaginary part. Then, each pair of points of the mesh corresponds to a complex number that is taken as the initial estimation to iterate the method. The initial guess is depicted in a color depending on where its orbit has converged. In this way, the basins of attraction of the fixed or periodic points are observed, and we can determine if the method has wide regions of convergence.
Parameter planes are used to study the dynamics of a family of iterative methods with at least one parameter. In this case, the complex plane is divided into a mesh of complex values for the parameter, the X-axis and the Y-axis being the real and imaginary parts of the parameter, respectively. Each point of the plane corresponds to a value of the parameter and, therefore, to a method belonging to the family. For each value of the parameter, the corresponding method is applied successively starting as the initial estimation with a free critical point of the operator, so there are as many parameter planes as independent (under conjugation) free critical points of the operator. The points in the plane are plotted with a non-black color when there is convergence to any of the attracting fixed points, remaining in black in other cases. This representation allows choosing the values of the parameter that give rise to the most stable methods of the family.
Next, the iterative family N2_α is applied to quadratic polynomials in order to check its dependence on the initial estimations.
Taking into account that the method under study does not satisfy the scaling theorem, the quadratic polynomials under study will be p − (z) = z 2 − 1, p + (z) = z 2 + 1, and p 0 (z) = z 2 . When family N2 α is applied on these polynomials, three rational functions are obtained. Then, we analyze the asymptotic behavior of their fixed and critical points, and we represent the corresponding dynamical and parameter planes. Therefore, some conclusions are reached that can be extended to any quadratic polynomial and, up to some extent, to any nonlinear function.
By applying the family N2_α to the quadratic polynomials under consideration, the resulting rational functions are:
R_−(z) = z − (z² − 1)/(2z + α(z² − 1)), R_+(z) = z − (z² + 1)/(2z + α(z² + 1)), R_0(z) = z − z²/(2z + αz²), (6)
for p_−(z), p_+(z), and p_0(z), respectively. The following result analyzes the number and the asymptotic behavior of the fixed points of the operators (6).

Lemma 1.
The fixed points of the rational operators R_−(z), R_+(z), and R_0(z) in (6) agree with the roots of their associated polynomials, being superattracting for R_−(z) and R_+(z). For operator R_0(z), the fixed point is attracting. In addition, z = ∞ is a strange fixed point of all three operators, being neutral in all cases.

Proof. For the rational operator R_−(z), the fixed points satisfy R_−(z) = z,
or equivalently, (z² − 1)/(2z + α(z² − 1)) = 0. (7)
The solution of (7) is obtained from the numerator by solving z² − 1 = 0. The only solutions are z_{1,−} = −1 and z_{2,−} = 1, the roots of p_−(z). The proof for the rational operator R_+(z) is completely analogous, so we obtain that the fixed points of R_+(z) are the roots of the polynomial p_+(z), denoted by z_{1,+} = −i and z_{2,+} = i.
To check that z = ∞ is a fixed point of any rational operator R_β, β ∈ {−, +, 0}, we define the operator I_β(z) = 1/R_β(1/z). Then, ∞ is a fixed point of R_β when the point z_F = 0 is a fixed point of I_β(z). For operators R_−(z) and R_+(z), we have, respectively,
I_−(z) = z(−αz² + 2z + α)/(z³ − αz² + z + α), I_+(z) = z(αz² + 2z + α)/(−z³ + αz² + z + α).
From this, it is straightforward that I_∓(0) = 0, so infinity is a strange fixed point of both of them.
In the case of polynomial p_0(z), the fixed points are the solutions of R_0(z) = z, that is, of z/(2 + αz) = 0, whose only solution is z_0 = 0. Regarding the operator associated with infinity, we have for R_0(z):
I_0(z) = z(2z + α)/(z + α),
and also, infinity is a strange fixed point of R_0, as I_0(0) = 0. In order to analyze the stability of the fixed points, the derivatives of (6) are required:
R'_∓(z) = (z² ∓ 1)(α²(z² ∓ 1) + 4αz + 2)/(2z + α(z² ∓ 1))², (8)
R'_0(z) = (α²z² + 4αz + 2)/(2 + αz)². (9)
It is easy to prove from the factor (z² ∓ 1) in (8) that z_{j,−} and z_{j,+}, for j = 1, 2, are superattracting points of the corresponding operators. It is also immediate from (9) that z_0 is attracting, since |R'_0(0)| = 1/2 < 1. The stability of infinity is studied through the value of the derivative of I(z) at z = 0 for each operator. For the three considered cases, we have I'_∓(0) = 1 and I'_0(0) = 1, so infinity is a neutral point.
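The stability values stated in Lemma 1 can be verified numerically. In the sketch below (our construction, using finite differences and the sample value α = 0.3), the roots ±1 are superattracting for R_−, the fixed point z_0 = 0 of R_0 has multiplier 1/2, and the derivative of I_−(z) = 1/R_−(1/z) at the origin equals 1, so infinity is neutral.

```python
# Finite-difference check of the stability claims for a sample parameter value.
alpha = 0.3

def R_minus(z):                        # N2_alpha applied to p(z) = z**2 - 1
    p, dp = z * z - 1, 2 * z
    return z - p / (dp + alpha * p)

def R_zero(z):                         # N2_alpha applied to p(z) = z**2,
    return z - z / (2 + alpha * z)     # simplified from z - z**2/(2z + alpha*z**2)

def deriv(R, z, h=1e-6):               # central finite-difference derivative
    return (R(z + h) - R(z - h)) / (2 * h)

I_minus = lambda z: 1.0 / R_minus(1.0 / z)   # operator associated with infinity
```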
The critical points of the rational operators, calculated by solving R'(z) = 0, are presented in the following result.

Lemma 2.
Critical points of operators R_−(z), R_+(z), and R_0(z) (see (6)) satisfy, for α ≠ 0:
cr_{1,−}(α) = (−2 + √(2 + α²))/α, cr_{2,−}(α) = (−2 − √(2 + α²))/α,
cr_{1,+}(α) = (−2 + √(2 − α²))/α, cr_{2,+}(α) = (−2 − √(2 − α²))/α,
cr_{1,0}(α) = (−2 + √2)/α, cr_{2,0}(α) = (−2 − √2)/α,
in addition to the roots of p_−(z) and p_+(z), which are critical points of R_−(z) and R_+(z), respectively. In this case, the root of p_0(z) is not a critical point of R_0.
Let us remark that the case α = 0 corresponds to Newton's method, whose associated complex dynamics are well known. The proof of Lemma 2 uses the same procedures as that of Lemma 1, so we do not expand on it.
As the family N2 α depends on parameter α, it is useful to draw the parameter planes associated with each critical point given in the previous result. This tool will allow us to choose values of α and compare the resulting dynamical planes.
In this paper, all the planes are plotted using the software MATLAB R2018b. For the parameter planes, α is taken over a mesh of 500 × 500 values in a region of the complex plane. Taking a critical point as the initial estimation, the method is iterated until it reaches the maximum of 50 iterations or until there is convergence to any of the fixed points. Convergence is declared when the difference between the iterate and one of the attracting fixed points is lower than 10⁻⁶. When there is convergence to any of them, the initial point is represented in red in the plane; in other cases, it is represented in black. In addition, the intensity of the red color indicates the number of iterations required to reach convergence (the brighter the color, the lower the number of iterations needed). Figures 1–3 show the parameter planes of the considered operators for their associated free critical points. Each parameter plane depicts the values of α in the complex plane that give rise to methods converging to the attracting fixed points, represented as white stars.
In Figure 1a, it is observed that only values of α with a positive real part are depicted in red; Figure 1b is symmetric with respect to the imaginary axis. The rest of the complex plane is black, so we must avoid those values of α in order to select the most stable methods of the family. For the rational operator R_+(z), Figure 2 shows a behavior very similar to Figure 1, but now the values of α that provide more stable methods are located around the imaginary axis, showing a vertical band to the left of the origin in Figure 2a and another one to the right of the origin in Figure 2b. A completely different dynamical behavior is observed in Figure 3, as the critical point cr_{1,0}(α) provides a black parameter plane, while for the point cr_{2,0}(α) the parameter plane is entirely depicted in red; that is, one free critical point belongs to the basin of attraction of the root z_0 for every value of α, and the other one lies in a different basin of attraction, independently of α. Below, some dynamical planes are shown that confirm the previous results for the three operators, as the values of α used to generate them have been chosen from the results obtained in the corresponding parameter planes.
The implementation in MATLAB of the dynamical planes is similar to that of the parameter planes (see [9]). To build them, the real and imaginary parts of the initial estimations are represented on the two axes over a mesh of 500 × 500 points in [−15, 15] × [−15, 15] in the complex plane. The convergence criteria are the same as in the parameter planes, but now different colors are used to indicate to which fixed point the method converges. Orange represents convergence to z_{1,−}, z_{1,+}, and z_0 for the associated operator, and blue represents convergence to z_{2,−} and z_{2,+}. The attracting fixed points are depicted in the dynamical planes with white stars, while the free critical points are depicted with white squares. Figures 4–6 show the dynamical planes corresponding to operators R_−(z), R_+(z), and R_0(z), respectively, for varying values of α. In all cases, the parameter has been chosen in order to show the different performances predicted by Figures 1–3. On the one hand, in Figures 4a,b and 5a,b, the parameter is taken from the red regions shown in Figures 1 and 2, quite close to the attracting fixed points. On the other hand, Figures 4d and 5d correspond to values of the parameter located in the black region of the parameter planes. As was expected, when α is taken from a red region of the parameter plane, the corresponding basins of attraction in the dynamical plane are bigger than in the other cases (Figures 4d and 5d). However, there is a wide black region in all the dynamical planes corresponding to infinity, which generates its own basin of attraction. Furthermore, in most of the dynamical planes, one free critical point belongs to the basin of attraction of infinity, the others remaining in the basin of attraction of a root of the corresponding polynomial. Moreover, in Figures 4d and 5d, those with more black color, the two free critical points belong to the basin of attraction of infinity.
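The plane-generation procedure can be sketched in a few lines of Python (a coarse stand-in for the MATLAB color plots; mesh size, bounds, and tolerances below are our illustrative choices), here for N2_α applied to p_−(z) = z² − 1:

```python
# Dynamical-plane sketch: label each starting point by the root it reaches.
def basin(z, alpha=0.3, max_iter=50, tol=1e-6):
    """Return +1 (converges to z = 1), -1 (to z = -1), or 0 (no convergence)."""
    for _ in range(max_iter):
        p, dp = z * z - 1, 2 * z
        denom = dp + alpha * p
        if abs(denom) < 1e-14:       # iteration undefined: treat as divergent
            return 0
        z = z - p / denom
        if abs(z - 1) < tol:
            return 1
        if abs(z + 1) < tol:
            return -1
    return 0

# A coarse 5x5 "dynamical plane" on [-2, 2] x [-2, 2]; a real run would use a
# much finer mesh (e.g., 500 x 500) and map the labels to colors.
grid = [[basin(complex(x, y)) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
        for y in (-2.0, -1.0, 0.0, 1.0, 2.0)]
```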
From Figure 6, we can observe that the basin of attraction of z_0 is bigger when α is close to z = 0, with one free critical point inside this basin of attraction. As with the other operators, infinity behaves as an attracting point, with a free critical point lying in its basin of attraction. The conclusions observed in the parameter planes can be checked in Figures 4c–6c, where the value α = 0.3, which is close to the origin of the complex plane, gives rise to the broadest basins of attraction. For this reason, we selected this value of α for the numerical experiments in the next section, as it shows the stability of our proposed method.

Numerical Results
In this section, numerical experiments are performed in order to check the efficiency of the N2_α family for calculating the solutions of several nonlinear equations. This section also allows verifying that the stability analysis of the previous section is correct and that, in fact, it is possible to obtain values of α providing methods of the N2_α family with more stability in the root-finding process. For this purpose, let us consider the nonlinear test functions f_1, . . . , f_6, where x* denotes the exact solution of each equation.
The weighted iterative family (1) includes many iterative schemes, due to the generality and simplicity of its structure. In Section 2, we have seen that Newton's method and the N2_α family can be obtained from the family (1) with the weight functions H(t) = t and H(t) = t/(1 + αt), respectively. Moreover, some well-known methods are obtained from other selections of the weight function. One such choice yields the iterative family presented by Kou and Li in [14], which has order of convergence two for any value of the parameters λ and β. According to the results presented by the authors in [14], the method obtained with λ = 1 and β = 0 was used for the numerical comparison, denoting it by KL. On the other hand, another weight function gives rise to an iterative family presented by Noor et al. [15], which converges quadratically for all γ. Let us denote by Noor1 the associated method for γ = 1.
In this section, a numerical comparison between the Newton, KL, and Noor1 methods and our proposed family N2_α is carried out in order to show the performance of different methods with similar structures and the same order of convergence. Taking into account the dynamical results of Section 3, the values of the parameter α that provide the most stable methods of N2_α are those close to the origin of the complex plane. For this reason, the numerical experiments were performed with α = 0.3, denoting the resulting method by N2_{0.3}. Tables 1 and 2 show the results obtained for the considered nonlinear test functions f_{1−6} and different initial estimations. The numerical tests have been performed using the software MATLAB R2018b with variable precision arithmetic with 50 digits of mantissa. These tables show the number of iterations that each method needs to reach convergence, which was set when |x_{k+1} − x_k| < 10⁻¹² or |f(x_{k+1})| < 10⁻¹². If a method did not reach the solution of the equation after a maximum of 200 iterations, it was considered non-convergent. Moreover, a computational approximation of the order of convergence of the methods is given by the ACOC [16], calculated as the quotient
p ≈ ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).
As we can observe in the results provided in Tables 1 and 2, Newton's method did not always converge for the nonlinear test functions considered, especially when the initial estimation was far from the solution. A similar behavior can be observed in the methods KL and Noor1 for some of the proposed examples. In addition, all three known methods often required a higher number of iterations. This fact shows the efficiency of N2_α, as the method was always convergent to the root of the functions and needed fewer iterations.
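The comparison protocol can be reproduced in miniature with the following Python sketch (double precision rather than 50-digit arithmetic, and a sample equation f(x) = e^x − 2 of our own choosing, not necessarily one of f_1, . . . , f_6); note that the ACOC is computed from the last available iterates and becomes unreliable once the errors reach machine precision.

```python
import math

def run(step, x0, tol=1e-12, max_iter=200):
    """Iterate x -> step(x) until |x_{k+1} - x_k| < tol; return all iterates."""
    xs = [x0]
    for _ in range(max_iter):
        xs.append(step(xs[-1]))
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs

def acoc(xs):
    """Approximated computational order of convergence (ACOC) from iterates."""
    e = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

f = lambda x: math.exp(x) - 2          # root: x* = ln 2
df = lambda x: math.exp(x)
newton_step = lambda x: x - f(x) / df(x)
n2_step = lambda x: x - f(x) / (df(x) + 0.3 * f(x))   # N2_{0.3}

xs_newton, xs_n2 = run(newton_step, 1.0), run(n2_step, 1.0)
```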

Conclusions
Starting with a simple weighted iterative structure, a new parametric family with quadratic convergence has been obtained. The asymptotic performance of the family has been analyzed by means of complex dynamics, obtaining the fixed and critical points of the rational functions associated with quadratic polynomials. The parameter planes have been useful to determine the values of α providing the most stable elements of the family. Moreover, the dynamical planes represent the basins of attraction of the attracting points, showing an increase in their width for values of the parameter near zero.
The numerical experiments in Section 4 showed that the proposed family can obtain the zeros of the test functions more efficiently than the other quadratic methods with which it has been compared, including Newton's scheme.