Convergence and Dynamics of a Higher-Order Method

Solving problems in various disciplines such as biology, chemistry, economics, medicine, physics, and engineering, to mention a few, reduces to solving an equation. Finding the solution is one of the greatest challenges: it requires an iterative method that generates a sequence approximating the solution. That is why, in this work, we analyze the local convergence of a high-order iterative method for finding the solution of a nonlinear equation. We extend the applicability of previous results by using only the first derivative, which is the only derivative actually appearing in the method. This is in contrast to earlier works that rely on derivatives of order higher than one, even though these do not appear in the method. Moreover, we study the dynamics of some members of the family in order to exhibit the differences between them.


Introduction
Mathematics is always changing, and the way we teach it changes as well, as can be seen in the literature. Moreover, in advanced mathematics we need to offer different alternatives, since we are all aware of the difficulties that students encounter. In this paper, we present a study of iterative methods that can be used for teaching at the postgraduate level.
In the present work, we focus on the problem of solving the equation g(x) = 0 by approximating a solution x*, where g : Ω ⊆ S → S is differentiable and S = R or S = C. There exist several studies related to this problem, since iterative methods are needed to find the solution.
We refer the reader to the book by Petkovic et al. [1] for a collection of relevant methods. The method of interest here is, for a chosen starting point x_0 and parameters ρ, δ ∈ S,

y_n = x_n − g(x_n)/g'(x_n),
t_n = y_n − g(y_n)(g(x_n) + ρ g(y_n)) / [g'(x_n)(g(x_n) + (ρ − 2) g(y_n))],   (2)
x_{n+1} = t_n − δ g(t_n)/K(t_n),

where g[x_n; y_n] = (g(x_n) − g(y_n))/(x_n − y_n) denotes the divided difference of g and

K(t_n) = g'(x_n) (t_n − y_n)^2 (t_n − x_n) / [(y_n − x_n)(x_n + 2y_n − 3t_n)]
+ g'(t_n) (t_n − y_n)(x_n − t_n) / (x_n + 2y_n − 3t_n)
− g[x_n; y_n] (t_n − x_n)^3 / [(y_n − x_n)(x_n + 2y_n − 3t_n)].
If we consider only the first two steps of the method in Equation (2), we obtain King's class of methods, which has order four [2]. However, Equation (2) has limited usage, since its convergence analysis assumes the existence of derivatives up to the fifth order, which do not appear in the method. Moreover, no computable error bounds on ||x_n − x*|| or uniqueness results are given. Furthermore, the choice of the initial point x_0 is a shot in the dark. As an example, consider a function g on Ω = [−1/2, 3/2] with solution x* = 1 whose higher-order derivatives are unbounded on Ω. Then, there is no guarantee under the results in [2] that the method in Equation (2) converges to x* = 1.
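To make the structure of the scheme concrete, the following sketch implements only its first two steps, i.e., the King-type predictor–corrector of order four (the third step would additionally require the approximation K(t_n)). The function name king_iterate and the tolerances are our own illustrative choices, not part of the original formulation.

```python
def king_iterate(g, gp, x0, rho=0.0, tol=1e-12, max_iter=50):
    """Sketch of the first two steps of Equation (2): King's class
    with parameter rho (rho = 0 gives Ostrowski's method)."""
    x = x0
    for _ in range(max_iter):
        gx = g(x)
        if abs(gx) < tol:
            return x
        y = x - gx / gp(x)                    # Newton-type predictor
        gy = g(y)
        denom = gp(x) * (gx + (rho - 2.0) * gy)
        if denom == 0.0:                      # guard against breakdown
            break
        x = y - gy * (gx + rho * gy) / denom  # King-type corrector
    return x

# Example: approximate the root sqrt(2) of g(x) = x^2 - 2 from x0 = 1.5.
root = king_iterate(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

With ρ = 0 the corrector reduces to Ostrowski's fourth-order method; other values of ρ trace out the rest of King's class.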

Local Convergence Analysis
In this section, we study the local convergence of the method in Equation (2). For v ∈ R and µ > 0, let U(v, µ) and Ū(v, µ) denote, respectively, the open and closed balls in R with center v and radius µ. Moreover, we require parameters L0 > 0, L > 0, M0 > 0, M > 0, γ0 > 0, µ, and δ ∈ R. To carry out the analysis, we first define some scalar parameters and functions. Consider the functions g1 and h1 defined on the interval [0, 1/L0). By the intermediate value theorem, h1 has zeros in this interval; let r1 be the smallest such zero. Next, define the functions g2 and h2 on the interval [0, r1). By these definitions, h2(0) = −1 < 0 and h2(t) → +∞ as t tends to r1 from the left. Consequently, h2 has a smallest zero r2 ∈ (0, r1). Moreover, define the functions g3 and h3 on [0, r2), and let r be the smallest of the resulting zeros, which serves as the radius of convergence. We can then see that 0 ≤ g3(t) < 1 for each t ∈ [0, r).
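In practice, radii such as r1 and r2 are computed numerically as the smallest zeros of scalar functions. As an illustration only, the sketch below applies bisection to an assumed Newton-type majorant h1(t) = Lt/(2(1 − L0 t)) − 1 (our own choice, not the paper's exact definition), which satisfies h1(0) = −1 < 0 and h1(t) → +∞ as t tends to 1/L0 from the left, and whose smallest zero has the closed form 2/(2L0 + L).

```python
def smallest_zero(h, a, b, iters=100):
    """Bisection for the first zero of h on (a, b), assuming
    h(a) < 0 and h crosses zero exactly once before b."""
    lo, hi = a, b
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative majorant of Newton type (assumed form):
L0, L = 1.0, 1.0
h1 = lambda t: L * t / (2.0 * (1.0 - L0 * t)) - 1.0
r1 = smallest_zero(h1, 0.0, 1.0 / L0 - 1e-12)  # closed form: 2/(2*L0 + L)
```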

Dynamical Analysis
In this section, the method in Equation (2) is applied to three different families of functions, and its behavior is analyzed by changing the parameter δ, using techniques that appear in [19–22].

Exponential Family
The method has been applied to the function g(x) = e^x − 1 by considering the corresponding equation g(x) = 0. This equation has a solution at x = 0, which is the only attractive fixed point of the method. In Figure 1 we observe how the method changes with the parameter δ. Dynamical planes represent the behavior of the method in the complex domain.
In Figure 2, the symmetry with respect to the imaginary axis of the region of convergence to the solution x = 0 can be observed. Small islands of convergence appear outside the main region. For high values of δ, it is necessary to increase the maximum number of iterations in order to achieve convergence.
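A dynamical plane of this kind can be approximated by iterating from a grid of complex starting points and recording which of them reach the solution z = 0. The sketch below does this for g(z) = e^z − 1 using only the first two (King-type) steps of Equation (2) with ρ = 0, since the full third step would require K(t_n); the grid size, window, and iteration caps are our own choices.

```python
import cmath

def king_limit(g, gp, z0, rho=0.0, tol=1e-10, max_iter=60):
    """Limit of the King-type two-step iteration from z0, or None."""
    z = z0
    try:
        for _ in range(max_iter):
            gz = g(z)
            if abs(gz) < tol:
                return z
            dz = gp(z)
            y = z - gz / dz
            gy = g(y)
            denom = dz * (gz + (rho - 2.0) * gy)
            z = y - gy * (gz + rho * gy) / denom
    except (OverflowError, ZeroDivisionError):
        return None                  # escaped or hit a breakdown point
    return None

# Coarse "dynamical plane" for g(z) = e^z - 1 on [-2, 2] x [-2, 2]:
g = lambda z: cmath.exp(z) - 1.0
gp = lambda z: cmath.exp(z)
n, hits = 41, 0
for i in range(n):
    for j in range(n):
        z0 = complex(-2.0 + 4.0 * i / (n - 1), -2.0 + 4.0 * j / (n - 1))
        lim = king_limit(g, gp, z0)
        if lim is not None and abs(lim) < 1e-6:
            hits += 1
frac = hits / (n * n)   # fraction of the grid attracted to z = 0
```

Coloring each grid point by the root it reaches (and by the number of iterations) yields pictures of the type shown in the figures.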

Sine Family
The method can be applied to the function g(x) = sin(x) with the equation g(x) = 0. In this case, the equation has the periodic solutions x = kπ, k ∈ Z (that is, . . . , x = −π, x = 0, x = π, . . .), which coincide with the fixed points of the method. Figure 3 shows how the method changes with the parameter δ. The dynamical planes in Figure 4 represent the behavior of the method in the complex domain; for high values of δ, the region of convergence is reduced.
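That the roots x = kπ attract nearby starting points can be checked numerically. Since reproducing Equation (2) in full would require K(t_n), the minimal sketch below uses a plain Newton iteration for sin(x) = 0 as a stand-in, which shares these roots; newton_sin is our own illustrative name.

```python
import math

def newton_sin(x0, tol=1e-12, max_iter=50):
    """Newton's method for sin(x) = 0; the roots x = k*pi coincide
    with the fixed points discussed above."""
    x = x0
    for _ in range(max_iter):
        fx = math.sin(x)
        if abs(fx) < tol:
            return x
        x -= fx / math.cos(x)   # Newton step: x - tan(x)
    return x

root = newton_sin(3.0)          # the nearest root is pi
```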

Polynomial Family
The method was applied to the function g(x) = (x − 1)(x + 1) with the equation g(x) = 0. The sink (attracting) fixed points obtained in this case are x = −1 and x = 1, the solutions of the equation. Figure 5 shows how the method changes with the parameter δ. The dynamical planes in Figure 6 represent the behavior of the method in the complex domain; for larger values of δ, the region of convergence becomes more complex.
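That both roots z = ±1 act as sinks for nearby starting points can again be illustrated with a simple self-contained stand-in: a Newton iteration for g(z) = (z − 1)(z + 1), which shares these roots (the paper's full method would require K(t_n); newton_poly is our own name).

```python
def newton_poly(z0, tol=1e-12, max_iter=60):
    """Newton iteration for g(z) = (z - 1)(z + 1) = z^2 - 1,
    used as a simple stand-in sharing the roots z = +1 and z = -1."""
    z = complex(z0)
    for _ in range(max_iter):
        if abs(z * z - 1.0) < tol:
            return z
        if z == 0:
            return None              # derivative 2z vanishes at the origin
        z = 0.5 * (z + 1.0 / z)      # z - (z^2 - 1)/(2z)
    return None

# Starting points in each half-plane are drawn to the nearest root.
starts = [2.0, -2.0, 0.5 + 0.5j, -0.5 + 0.3j]
limits = [newton_poly(s) for s in starts]
```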

Conclusions
The study of high-order iterative methods is very important, since problems from all disciplines require the solution of equations. This solution is found as the limit of a sequence generated by such a method, since closed-form solutions can rarely be found in general. In the literature, the convergence order is usually established using expensive Taylor expansions and high-order derivatives, without computable error estimates on ||x_n − x*|| or uniqueness results. It is worth noticing that these high-order derivatives do not appear in the methods themselves. Moreover, the initial point is a "shot in the dark". Hence, the applicability of these methods is very limited. To address these problems, we have developed a technique that uses hypotheses only on the first derivative, which actually appears in the method, together with Lipschitz-type conditions. This allows us to extend the applicability of the method and to find a radius of convergence as well as computable error estimates and uniqueness results based on Lipschitz constants. Although we demonstrated our technique on the method in Equation (2), it can clearly be used to extend the applicability of other methods along the same lines. In view of the parameters involved in the method, its dynamics have also been explored in several interesting cases.
Author Contributions: All authors have equally contributed to this work. All authors read and approved the final manuscript.
Funding: This research received no external funding.