Achieving Optimal Order in a Novel Family of Numerical Methods: Insights from Convergence and Dynamical Analysis Results

In this manuscript, we introduce a novel parametric family of multistep iterative methods designed to solve nonlinear equations. This family is derived from a damped Newton's scheme but includes an additional Newton step with a weight function and a "frozen" derivative, that is, the same derivative as in the previous step. Initially, we develop a quad-parametric class with a first-order convergence rate. Subsequently, by restricting one of its parameters, we accelerate the convergence to achieve a third-order uni-parametric family. We thoroughly investigate the convergence properties of this final class of iterative methods, assess its stability through dynamical tools, and evaluate its performance on a set of test problems. We conclude that there exists one optimal fourth-order member of this class, in the sense of Kung–Traub's conjecture. Our analysis includes stability surfaces and dynamical planes, revealing the intricate nature of this family. Notably, our exploration of stability surfaces enables the identification of specific family members suitable for scalar functions with challenging convergence behavior, as they may exhibit periodic orbits and fixed points with attracting behavior in their corresponding dynamical planes. Furthermore, our dynamical study finds members of the family of iterative methods with exceptional stability. This property allows us to converge to the solution of practical problem-solving applications even from initial estimations very far from the solution. We confirm our findings with various numerical tests, demonstrating the efficiency and reliability of the presented family of iterative methods.


Introduction
A multitude of challenges in Computational Sciences and other fields in Science and Technology can be effectively represented as nonlinear equations through mathematical modeling; see, for example, [1][2][3]. Finding solutions ξ to nonlinear equations of the form f(x) = 0 stands as a classical yet formidable problem in the realms of Numerical Analysis, Applied Mathematics, and Engineering. Here, the function f : I ⊂ R → R is assumed to be differentiable enough within the open interval I. Extensive overviews of iterative methods for solving nonlinear equations published in recent years can be found in [4][5][6], and their associated references.
In recent years, many iterative methods have been developed to solve nonlinear equations. The essence of these methods is as follows: if one knows a sufficiently small domain that contains only one root ξ of the equation f(x) = 0, and we select a sufficiently close initial estimate x_0 of the root, we generate a sequence of iterates x_1, x_2, ..., x_k, ..., by means of a fixed point function g(x), which under certain conditions converges to ξ. The convergence of the sequence is guaranteed, among other elements, by the appropriate choice of the function g and the initial approximation x_0.
The general fixed-point iteration x_{k+1} = g(x_k), defined by an iteration function g : I ⊆ R → R and a given initial estimate x_0, includes a large number of iterative schemes. These differ from each other in the way the iteration function g is defined.
Among these methods, Newton's scheme is widely acknowledged as the most renowned approach for locating a solution ξ ∈ I. This scheme is defined by the iterative formula

x_{k+1} = x_k − f(x_k)/f'(x_k),

where k = 0, 1, 2, ..., and f'(x_k) denotes the derivative of the function f evaluated at the k-th iterate.
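Newton's scheme above can be sketched in a few lines of Python. The test function f(x) = x² − 2 and the starting point x_0 = 1.5 are our own illustrative choices, not taken from the paper; the positive root is √2.

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's scheme: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    return x, max_iter

# Illustrative example: f(x) = x^2 - 2, simple root xi = sqrt(2)
root, iters = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
```

With a simple root and a close initial estimate, the iteration reaches machine precision in a handful of steps, reflecting the quadratic convergence discussed below.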
A very important concept of iterative methods is their order of convergence, which provides a measure of the speed of convergence of the iterates. Let {x_k}_{k≥0} be a sequence of real numbers such that lim_{k→∞} x_k = ξ. The convergence is called (see [7]):
(a) Linear, if there exist C, 0 < C < 1, and k_0 ∈ N such that |x_{k+1} − ξ| ≤ C |x_k − ξ|, for all k ≥ k_0;
(b) Of order p, p > 1, if there exist C > 0 and k_0 ∈ N such that |x_{k+1} − ξ| ≤ C |x_k − ξ|^p, for all k ≥ k_0.
We denote by e_k = x_k − ξ the error of the k-th iteration. Moreover, the equation e_{k+1} = C e_k^p + O(e_k^{p+1}) is called the error equation of the iterative method, where p is its order of convergence and C is called the asymptotic error constant.
It is known (see, for example, [4]) that if x_{k+1} = g(x_k) is an iterative point-to-point method with d functional evaluations per step, then the order of convergence of the method is, at most, p = d. On the other hand, Traub proves in [4] that to design a point-to-point method of order p, the iterative expression must contain derivatives of the nonlinear function whose zero we are looking for, at least of order p − 1. This is why point-to-point methods are not efficient if we seek to simultaneously increase the order of convergence and computational efficiency.
These restrictions of point-to-point methods are the starting point for the growing interest of researchers in multipoint methods; see, for example, [4][5][6]. In such schemes, also called predictor-corrector, the (k + 1)-th iterate is obtained by using functional evaluations at the k-th iterate and also at other intermediate points. For example, a two-step multipoint method has the expression y_k = Ψ(x_k), x_{k+1} = Φ(x_k, y_k), k = 0, 1, 2, ... Thus, the main motivation for designing new iterative schemes is to increase the order of convergence without adding many functional evaluations. The first multipoint schemes were designed by Traub in [4]. At that time the concept of optimality had not yet been defined, and the fact of designing multipoint schemes with the same order as classical schemes such as Halley or Chebyshev, but with a much simpler iterative expression and without using second derivatives, was of great importance. The techniques used then have been the seed of those that allowed the appearance of higher-order methods.
In recent years, different authors have developed a large number of optimal schemes for solving nonlinear equations [6,8]. A common way to increase the convergence order of an iterative scheme is to use the composition of methods, based on the following classical result (see [4]): if two iterative methods have orders of convergence p_1 and p_2, then their composition is an iterative method of order p_1 · p_2.
However, this composition necessarily increases the number of functional evaluations. So, to preserve optimality, the number of evaluations must be reduced. There are many techniques used for this purpose by different authors, such as approximating some of the evaluations that have appeared with the composition by means of interpolation polynomials, Padé approximants, inverse interpolation, Adomian polynomials, etc. (see, for example, [6,9,10]). If, after the reduction of functional evaluations, the resulting method is not optimal, the weight function technique, introduced by Chun in [11], can be used to increase its order of convergence.
There are also other ways in the literature to compare different iterative methods with each other. Traub in [4] defined the information efficiency of an iterative method as I(M) = p/d, where p is the order of convergence and d is the number of functional evaluations per iteration. On the other hand, Ostrowski in [12] introduced the so-called efficiency index, EI(M) = p^{1/d}, which, in turn, gives rise to the concept of optimality of an iterative method.
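Both indices are elementary to compute; a short sketch comparing Newton's method (p = 2, d = 2) with an optimal fourth-order scheme (p = 4, d = 3) makes the gap concrete:

```python
def info_efficiency(p, d):
    """Traub's information efficiency I(M) = p/d."""
    return p / d

def efficiency_index(p, d):
    """Ostrowski's efficiency index EI(M) = p^(1/d)."""
    return p ** (1.0 / d)

newton_EI = efficiency_index(2, 2)    # 2^(1/2), about 1.414
optimal4_EI = efficiency_index(4, 3)  # 4^(1/3), about 1.587
```

The optimal fourth-order scheme wins on both measures despite the extra functional evaluation per step.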
Regarding the order of convergence, Kung and Traub in their conjecture (see [13]) establish the highest order that a multipoint iterative scheme without memory can reach. Schemes that attain this limit are called optimal methods. Such a conjecture states that the order of convergence of any memoryless multistep method cannot exceed 2^{d−1} (called the optimal order), where d is the number of functional evaluations per iteration, with efficiency index 2^{(d−1)/d} (called the optimal index). In this sense, Newton's method is optimal. Furthermore, in order to numerically test the behavior of the different iterative methods, Weerakoon and Fernando in [14] introduced the so-called computational order of convergence (COC),

p ≈ COC = ln(|x_{k+1} − ξ| / |x_k − ξ|) / ln(|x_k − ξ| / |x_{k−1} − ξ|),

where x_{k+1}, x_k and x_{k−1} are three consecutive approximations of the root of the nonlinear equation, obtained in the iterative process. However, the value of the zero ξ is not known in practice, which motivated the definition in [15] of the approximate computational order of convergence ACOC,

p ≈ ACOC = ln(|x_{k+1} − x_k| / |x_k − x_{k−1}|) / ln(|x_k − x_{k−1}| / |x_{k−1} − x_{k−2}|).

On the other hand, the dynamical analysis of rational operators derived from iterative schemes, particularly when applied to low-degree nonlinear polynomial equations, has emerged as a valuable tool for assessing the stability and reliability of these numerical methods. This approach is detailed, for instance, in Refs. [16][17][18][19][20] and their associated references.
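The ACOC formula above only needs four consecutive iterates. A minimal sketch, applied to Newton iterates for f(x) = x² − 2 (an illustrative function of our choosing, where quadratic convergence is expected):

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from the last four
    consecutive iterates x_{k-2}, x_{k-1}, x_k, x_{k+1}."""
    d1 = abs(xs[-1] - xs[-2])
    d2 = abs(xs[-2] - xs[-3])
    d3 = abs(xs[-3] - xs[-4])
    return math.log(d1 / d2) / math.log(d2 / d3)

# Newton iterates for f(x) = x^2 - 2 starting from x0 = 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x * x - 2) / (2 * x))

val = acoc(xs)  # close to the theoretical order p = 2
```

In double precision the estimate degrades once the error reaches machine epsilon, which is why the paper uses 200-digit arithmetic for its ACOC values.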
Using the tools of complex discrete dynamics, it is possible to compare different algorithms in terms of their basins of attraction, the dynamical behavior of the rational functions associated with the iterative method on low-degree polynomials, etc. Varona [21], Amat et al. [22], Neta et al. [23], Cordero et al. [24], Magreñán [25], Geum et al. [26], among others, have analyzed many schemes and parametric families of methods under this point of view, obtaining interesting results about their stability and reliability.
The dynamical analysis of an iterative method focuses on the study of the asymptotic behavior of the fixed points (roots, or not, of the equation) of the operator, as well as on the basins of attraction associated with them. In the case of parametric families of iterative methods, the analysis of the free critical points (points where the derivative of the operator vanishes that are not roots of the nonlinear function) and of the stability functions of the fixed points allows us to select the most stable members of these families. Some of the existing works in the literature related to this approach are Refs. [27,28], among others.
In this paper, we introduce a novel parametric family of multistep iterative methods tailored for solving nonlinear equations. This family is constructed by enhancing the traditional Newton's scheme, incorporating an additional Newton step with a weight function and a frozen derivative. As a result, the family is characterized by a two-step iterative expression that relies on four arbitrary parameters.
Our approach yields a third-order uni-parametric family and a fourth-order member. However, in the course of developing these iterative schemes, we initially start with a first-order quad-parametric family. By selectively setting just one parameter, we manage to accelerate its convergence to a third-order scheme, and for a specific value of this parameter, we achieve an optimal member. To substantiate these claims, we conduct a comprehensive convergence analysis for all classes.
The stability of this newly introduced family is rigorously examined using dynamical tools. We construct stability surfaces and dynamical planes to illustrate the intricate behavior of this class. These stability surfaces help us identify specific family members with exceptional behavior, making them well-suited for practical problem-solving applications. To further demonstrate the efficiency and reliability of these iterative schemes, we conduct several numerical tests.
The rest of the paper is organized as follows. In Section 2, we present the proposed class of iterative methods depending on several parameters, which is modified step by step in order to achieve the highest order of convergence. Section 3 is devoted to the dynamical study of the uni-parametric family; by means of this analysis, we find the most stable members, less dependent on their initial estimation. In Section 4, the previous theoretical results are checked by means of numerical tests on several nonlinear problems, using a wide variety of initial guesses and parameter values. Finally, some conclusions are presented.

Convergence Analysis of the Family
In this section, we conduct a convergence analysis of the newly introduced quad-parametric iterative family, with the following iterative expression: where α, β, γ, δ are arbitrary parameters and k = 0, 1, 2, ... Additionally, we present a strategy for simplifying it into a uni-parametric class to enhance convergence speed. Consequently, even though the quad-parametric family has a first-order convergence rate, we employ higher-order Taylor expansions in our proof, as they are instrumental in establishing the convergence rate of the uni-parametric subfamily. In Appendix A, the Mathematica code used for checking it is available.
Theorem 2 (quad-parametric family). Let f : I ⊆ R → R be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f'(x) is continuous and nonsingular at ξ, and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using the expression (3) converges to ξ with order one, being its error equation where e_k = x_k − ξ, and α, β, γ, and δ are free parameters.
Proof. Let us consider ξ as the simple root of the nonlinear function f(x) and x_k = ξ + e_k. Calculating the Taylor expansions of f(x_k) and f'(x_k) around the root ξ, we get and where By a direct division of (4) and (5), Replacing (6) in (3), we have Again, a Taylor expansion of f(y_k) around ξ allows us to get Dividing (8) by (4), we obtain Finally, substituting (6), (7) and (9) in the second step of family (3), we have where being the error equation and the proof is finished.
From Theorem 2, it is evident that the newly introduced quad-parametric family exhibits a convergence order of one, irrespective of the values assigned to α, β, γ, and δ. Nevertheless, we can expedite convergence by fixing only two parameters, effectively reducing the family to a bi-parametric iterative scheme. In Appendix B, the Mathematica code used for checking it is available.
Theorem 3 (bi-parametric family). Let f : I ⊆ R → R be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f'(x) is continuous and nonsingular at ξ, and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using the expression (3) converges to ξ with order three, provided , being its error equation , , q = 2, 3, ..., and α, δ are arbitrary parameters.
Proof. Using the results of Theorem 2 to cancel the terms A_1 and A_2 accompanying e_k and e_k^2 in (12), respectively, it must be satisfied that It is clear that system (13) has infinitely many solutions for where α and δ are free parameters. Therefore, replacing (14) in (11), we obtain that being the error equation and the proof is finished.
According to the findings in Theorem 3, it is evident that the newly introduced bi-parametric family where consistently exhibits third-order convergence for all values of α and δ. Nevertheless, it is noteworthy that by restricting one of the parameters while transitioning to a uni-parametric iterative scheme, not only can we sustain convergence, but we can also enhance performance. This improvement arises from the reduction in the error equation complexity, resulting in more efficient computations.
Corollary 1 (uni-parametric family). Let f : I ⊆ R → R be a sufficiently differentiable function in an open interval I and ξ ∈ I a simple root of the nonlinear equation f(x) = 0. Let us suppose that f'(x) is continuous and nonsingular at ξ and x_0 is an initial estimate close enough to ξ. Then, the sequence {x_k}_{k≥0} obtained by using the expression (17) converges to ξ with order three, provided that ϵ = α^4 δ = 2, being its error equation , , q = 2, 3, ..., and α is an arbitrary parameter. Indeed, α = 1 and, therefore, δ = ϵ = 2 provides the only member of the family with the optimal fourth order of convergence.
Proof. Using the results of Theorem 3 to reduce the expression of A_3 accompanying e_k^3 in (15), it must be satisfied that α^4 δ − 2 = 0 and/or α − 1 = 0. It is easy to show that the first equation has infinitely many solutions for Therefore, replacing (18) in (15), we obtain that and the proof is finished.
Based on the outcomes derived from Corollary 1, it becomes apparent that the recently introduced uni-parametric family, which we will call MCCTU(α), where and δ = 2/α^4, consistently exhibits a convergence order of three, regardless of the chosen value for α. Nevertheless, a remarkable observation emerges when α = 1: in such a case, this member of the family attains an optimal convergence order of four.
Due to the previous results, we have chosen to concentrate our efforts solely on the MCCTU(α) class of iterative schemes moving forward. To pinpoint the most effective members within this family, we will utilize the dynamical techniques outlined in Section 3.

Stability Analysis
This section delves into the examination of the dynamical characteristics of the rational operator linked to the iterative schemes within the MCCTU(α) family. This exploration provides crucial insights into the stability and the dependence of the members of the family with respect to the initial estimations used. To shed light on their performance, we create rational operators and visualize their dynamical planes. These visualizations enable us to discern the behavior of specific methods in terms of the attraction basins of periodic orbits, fixed points, and other relevant dynamics.
Now, we introduce the basic concepts of complex dynamics used in the dynamical analysis of iterative methods. The texts [29,30], among others, provide extensive and detailed information on this topic.
Given a rational function R : Ĉ → Ĉ, where Ĉ is the Riemann sphere, the orbit of a point z_0 ∈ Ĉ is defined as the set {z_0, R(z_0), R^2(z_0), ..., R^n(z_0), ...}. We are interested in the study of the asymptotic behavior of the orbits depending on the initial estimate z_0, analyzed in the dynamical plane of the rational function R defined by the different iterative methods.
To obtain these dynamical planes, we must first classify the fixed or periodic points of the rational operator R.
On the other hand, a fixed point z_0 of R (that is, a point satisfying R(z_0) = z_0) is classified according to the value of |R'(z_0)|: it is attracting if |R'(z_0)| < 1 (superattracting if R'(z_0) = 0), repulsive if |R'(z_0)| > 1, and neutral or parabolic if |R'(z_0)| = 1. The basin of attraction of an attractor z* is defined as the set of pre-images of any order: A(z*) = {z_0 ∈ Ĉ : R^n(z_0) → z*, n → ∞}. The Fatou set consists of the points whose orbits have an attractor (fixed point, periodic orbit or infinity). Its complement in Ĉ is the Julia set, J. Therefore, the Julia set includes all the repulsive fixed points and periodic orbits, and also their pre-images. So, the basin of attraction of any fixed point belongs to the Fatou set. Conversely, the boundaries of the basins of attraction compose the Julia set.
The following classical result, which is due to Fatou [31] and Julia [32], applies to periodic points of any period, including fixed points, considered as periodic points of period one.
Theorem 4 ([31,32]). Let R be a rational function. The immediate basins of attraction of each attracting periodic point contain at least one critical point.
By means of this key result, all the attracting behavior can be found using the critical points as a seed.

Rational Operator
While the fixed-point operator can be formulated for any nonlinear function, our focus here lies on constructing this operator for low-degree nonlinear polynomial equations, in order to get a rational function. This choice stems from the fact that the stability or instability criteria applied to methods on these equations can often be extended to other cases. Therefore, we introduce the following nonlinear equation represented by f(x) = (x − a)(x − b), where a, b ∈ R are the roots of the polynomial.
Let us remark that when MCCTU(α) is directly applied to f(x), the parameter α disappears in the resulting rational expression, so no dynamical analysis can be made. However, if we use the parameter ϵ = α^4 δ appearing in Corollary 1, the same class of iterative methods can be expressed as MCCTU(ϵ), and the dynamical analysis can be made depending on ϵ.
with ϵ ∈ C being an arbitrary parameter.
Proof. Let f(x) be a generic quadratic polynomial function with roots a, b ∈ C. We apply the iterative scheme MCCTU(ϵ) given in (20) on f(x) and obtain a rational function A_f(x, ϵ) that depends on the roots a, b ∈ C and the parameter ϵ ∈ C. Then, by using a Möbius transformation h(x) = (x − a)/(x − b) (see [22,33,34]) on A_f(x, ϵ), which sends the roots a and b to 0 and ∞, respectively, we obtain a rational operator which depends only on the arbitrary parameter ϵ ∈ C, thus completing the proof.
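The effect of a Möbius conjugation of this kind can be verified numerically on a classical baseline: for Newton's method applied to (x − a)(x − b), the conjugated operator collapses to the simple map z ↦ z², independently of a and b (a well-known result; the operator for MCCTU(ϵ) is the one stated in Proposition 1, not reproduced here). A sketch with arbitrary roots a, b:

```python
def newton_quadratic(x, a, b):
    """Newton operator for f(x) = (x - a)(x - b)."""
    return x - (x - a) * (x - b) / (2 * x - a - b)

def mobius(x, a, b):
    """h(x) = (x - a)/(x - b), sending a -> 0 and b -> infinity."""
    return (x - a) / (x - b)

def mobius_inv(z, a, b):
    """Inverse map h^{-1}(z) = (a - b z)/(1 - z)."""
    return (a - b * z) / (1 - z)

# Conjugated operator h o N o h^{-1}; for Newton it reduces to z -> z^2
a, b = 1.0, 3.0
z = 0.4 + 0.2j
conjugated = mobius(newton_quadratic(mobius_inv(z, a, b), a, b), a, b)
```

Because the roots a and b disappear from the conjugated operator, the dynamical study can be carried out on a single rational map instead of one map per quadratic polynomial.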
From Proposition 1, if we set ϵ − 2 = 0, it is easy to show that the rational operator R_f(x, ϵ) simplifies to an expression which does not depend on any free parameter.

Fixed Points
Now, we calculate all the fixed points of R_f(x, ϵ) given by (22), in order to afterwards analyze their character (attracting, repulsive, or neutral/parabolic).
From Proposition 2, we establish that there are seven fixed points. Among these, 0 and ∞ come from the roots a and b of f(x), and ex_1 = 1 comes from the divergence of the original scheme, previous to the Möbius transformation. In the particular case of ϵ = 2 (using Equation (24)), all the strange fixed points are repulsive.
Proof. We prove this result by analyzing the stability of the fixed points found in Proposition 2. This must be done by evaluating R'_f(x, ϵ) at each fixed point: if its absolute value is lower than, equal to, or greater than one, the point is called attracting, neutral, or repulsive, respectively.
The cases of x = 0 and x = ∞ are straightforward from the expression of R'_f(x, ϵ). Moreover, ex_1 cannot be a superattractor, as
It is clear that 0 and ∞ are always superattracting fixed points, but the stability of the remaining fixed points depends on the values of ϵ. According to Proposition 3, two strange fixed points can become superattractors. This implies that there would exist basins of attraction for them, potentially causing the method to fail to converge to the solution. However, even when they are only attracting (which can be the case of ex_1), these basins of attraction exist.
As we have stated previously, Figure 1 represents the stability function of the strange fixed point ex_1. In this figure, the zone of attraction is the yellow area and the repulsion zone corresponds to the grey area. For values of ϵ within the grey disk, ex_1 is repulsive, whereas for values of ϵ outside the grey disk, ex_1 becomes attracting. So, it is natural to select values within the grey disk, as a repulsive divergence improves the performance of the iterative scheme.
Similar conclusions can be stated from the stability region of the strange fixed points ex_{4,5}(ϵ), appearing in Figure 2. When a value of the parameter ϵ is taken in the yellow area of Figure 2, both points are simultaneously attracting, so there are at least four different basins of attraction.
However, basins of attraction can also appear when there exist attracting periodic orbits of any period. To detect this kind of behavior, the role of the critical points is crucial.

Critical Points
Now, we obtain the critical points of R_f(x, ϵ). Proposition 4. The critical points of R_f(x, ϵ) are x = 0, x = ∞, and also: .
From Proposition 4, we establish that, in general, there are five critical points. The free critical point cr_1 = −1 is a pre-image of the strange fixed point ex_1 = 1. Therefore, the stability of cr_1 corresponds to the stability of ex_1 (see Section 3.2). Note that if Equation (24) is satisfied, the only remaining free critical point is cr_1. Since cr_1 is the pre-image of ex_1, it would be repulsive.
Then, we use the only independent free critical point cr_2(ϵ) (equivalently, cr_3(ϵ), as they are conjugate) to generate the parameter plane. This is a graphical representation of the global stability performance of the members of the class of iterative methods. In a definite area of the complex plane, a mesh of 500 × 500 points is generated. Each one of these points is used as a value of the parameter ϵ, i.e., we get a particular element of the family. For each one of these values, we take as initial guess the critical point cr_2(ϵ) and calculate its orbit. If it converges to x = 0 or x = ∞, then the point corresponding to this value of ϵ is represented in red; otherwise, it is left in black. So, schemes convergent to the original roots of the quadratic equation appear in the red stable area, and the black area corresponds to schemes of the class that are not able to converge to them, by reason of an attracting strange fixed point or periodic orbit. This performance can be seen in Figure 3.
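The parameter-plane loop just described (mesh of parameter values, orbit of the free critical point, red/black classification) can be sketched generically. Since R_f(x, ϵ) and cr_2(ϵ) are specific to the paper, the sketch below uses the classical quadratic family z ↦ z² + c, whose free critical point is z = 0 and whose only attractor away from the bounded regime is ∞, as a stand-in; the loop structure is the same.

```python
def parameter_plane(op, crit, xs, ys, max_iter=80, escape=1e6):
    """For each parameter c on the mesh, iterate op(z, c) from the critical
    point crit(c); mark True ('red') when the orbit reaches the designated
    attractor (here, infinity), False ('black') otherwise."""
    grid = []
    for y in ys:
        row = []
        for x in xs:
            c = complex(x, y)
            z = crit(c)
            converged = False
            for _ in range(max_iter):
                z = op(z, c)
                if abs(z) > escape:   # attractor reached
                    converged = True
                    break
            row.append(converged)
        grid.append(row)
    return grid

def mesh(lo, hi, n):
    step = (hi - lo) / (n - 1)
    return [lo + k * step for k in range(n)]

# Stand-in family: z -> z^2 + c with free critical point z = 0
xs = mesh(-2.0, 0.5, 50)
ys = mesh(-1.2, 1.2, 50)
plane = parameter_plane(lambda z, c: z * z + c, lambda c: 0.0, xs, ys)
```

By Theorem 4, seeding each orbit at a critical point guarantees that every attracting behavior of the family shows up somewhere on the mesh, which is exactly why the paper iterates cr_2(ϵ).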

Dynamical Planes
A dynamical plane is defined as a mesh in a limited domain of the complex plane, where each point corresponds to a different initial estimate x_0. The graphical representation shows the method's convergence starting from x_0 within a maximum of 80 iterations and with 10^{−3} as the tolerance. Fixed points appear as a white circle '○', critical points as a white square '□', and a white asterisk '*' symbolizes an attracting point. Additionally, the basins of attraction are depicted in different colors. To generate this graph, we use MATLAB R2020b with a resolution of 400 × 400 pixels.
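A dynamical plane for a concrete, well-known case — Newton's method on p(z) = z³ − 1, whose three roots produce the classical three-basin picture — can be sketched as follows; the paper's planes use the operator R_f(x, ϵ) instead, on a 400 × 400 MATLAB mesh.

```python
import cmath

def newton_cubic(z):
    """Newton operator for p(z) = z^3 - 1."""
    return z - (z ** 3 - 1) / (3 * z ** 2)

def dynamical_plane(op, roots, xs, ys, max_iter=80, tol=1e-3):
    """Label each initial estimate z0 on the mesh with the index of the
    root its orbit approaches within tol, or -1 when no root is reached."""
    plane = []
    for y in ys:
        row = []
        for x in xs:
            z = complex(x, y)
            label = -1
            for _ in range(max_iter):
                try:
                    z = op(z)
                except ZeroDivisionError:   # derivative vanishes at z = 0
                    break
                hits = [i for i, r in enumerate(roots) if abs(z - r) < tol]
                if hits:
                    label = hits[0]
                    break
            row.append(label)
        plane.append(row)
    return plane

roots = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]  # cube roots of 1
xs = [k * 0.1 - 2.0 for k in range(41)]
ys = [k * 0.1 - 2.0 for k in range(41)]
plane = dynamical_plane(newton_cubic, roots, xs, ys)
```

Coloring each label differently reproduces the familiar fractal basin boundaries; the same loop with R_f in place of the Newton operator yields the planes of Figures 4 and 5.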
Here, we analyze the stability of various MCCTU(ϵ) methods using dynamical planes. We consider methods with ϵ values both inside and outside the stability surface of ex_1, specifically, in the red and black areas of the parameter plane represented in Figure 3a. Secondly, some schemes outside the stability region (in black in the parameter plane) are provided for ϵ ∈ {100, 15, −15, 30}. Their dynamical planes are shown in Figure 5. Each of these members has specific characteristics: in Figure 5a, the widest basin of attraction (in green) corresponds to ex_1 = 1, which is attracting for this value of ϵ, while the basin of x = 0 is a very narrow area around the point; for ϵ = 15, we observe in Figure 5b three different basins of attraction, two of them belonging to attracting periodic orbits of period 2 (one of them is plotted in yellow in the figure); Figure 5c corresponds to ϵ = −15, inside the stability area of ex_{4,5}(ϵ) (see Figure 2), where both are simultaneously attracting; finally, for ϵ = 30, the widest basin of attraction corresponds to an attracting periodic orbit of period 2, see Figure 5d.

Numerical Results
In this section, we conduct several numerical tests to validate the theoretical convergence and stability results of the MCCTU(α) family obtained in the previous sections. We use both stable and unstable methods from (20) and apply them to ten nonlinear test equations, with their expressions and corresponding roots provided in Table 1.

We aim to demonstrate the theoretical results by testing the MCCTU(α) family. Specifically, we evaluate three representative members of the family with δ = 2/α^4 and α = 1, α = 2, and α = 100. Therefore, in all cases, ϵ = 2.
While performing these numerical tests, we start the iterations with different initial estimates: close (x_0 ≈ ξ), far (x_0 ≈ 3ξ), and very far (x_0 ≈ 10ξ) from the root ξ. This approach allows us to evaluate how sensitive the methods are to the initial estimation when finding a solution.
The calculations are performed using the MATLAB R2020b programming package with variable precision arithmetic set to 200 digits of mantissa (in Appendix C, an example with double-precision arithmetic is included). For each method, we analyze the number of iterations (iter) required to converge to the solution, with stopping criteria defined as |x_{k+1} − x_k| < 10^{−100} or |f(x_{k+1})| < 10^{−100}. Here, |x_{k+1} − x_k| represents the error estimation between two consecutive iterations, and |f(x_{k+1})| is the residual error of the nonlinear test function.
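The dual stopping criterion can be encoded directly in the iteration loop. A minimal Python sketch, using standard double precision and a tolerance of 10⁻¹² instead of the 200-digit, 10⁻¹⁰⁰ setting of the paper (the test function is again an illustrative choice):

```python
def iterate(step, f, x0, tol=1e-12, max_iter=50):
    """Run one-point iteration x_{k+1} = step(x_k) until
    |x_{k+1} - x_k| < tol or |f(x_{k+1})| < tol."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = step(x)
        if abs(x_new - x) < tol or abs(f(x_new)) < tol:
            return x_new, k
        x = x_new
    return x, max_iter   # the 'nc' (non-convergent) case in the tables

f = lambda x: x * x - 2
newton_step = lambda x: x - f(x) / (2 * x)
root, iters = iterate(newton_step, f, 1.5)
```

Returning the iteration count alongside the approximation mirrors the 'iter' column reported in Tables 2 and 3.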
To check the theoretical order of convergence (p), we calculate the approximate computational order of convergence (ACOC) as described by Cordero and Torregrosa in [15]. In the numerical results, if the ACOC values do not stabilize throughout the iterative process, it is marked as '-'; and if any method fails to converge within a maximum of 50 iterations, it is marked as 'nc'.

First Experiment: Stability Analysis of MCCTU(α) Family
In this experiment, we conducted a stability analysis of the MCCTU(α) family by considering values of α both within the stability regions (α = 2) and outside of them (α = 100), setting δ = 2/α^4. The methods analyzed are of order 3, consistent with the theoretical convergence results. A special case occurs when α = 0: the associated method cannot be applied, since the denominator in the relation δ = 2/α^4 becomes zero and δ is therefore not defined, growing without bound as α → 0.
The numerical performance of the iterative methods MCCTU(2) and MCCTU(100) is presented in Tables 2 and 3, using initial estimates that are close, far, and very far from the root. This approach enables us to assess the stability and reliability of the methods under various initial conditions. From the analysis of the first experiment, it is evident that the MCCTU(2) method exhibits robust performance. For initial estimates close to the root (x_0 ≈ ξ), the method consistently converges to the solution with very low errors, achieving convergence in three or four iterations, and the ACOC value stabilizes at 3. For initial estimates that are far (x_0 ≈ 3ξ), the number of iterations increases, but the method still converges to the solution in nine out of ten cases. For initial estimates that are very far (x_0 ≈ 10ξ), the method holds a similar performance, converging to the solution in eight out of ten cases. It is notable that as the initial condition moves further away, the method shows a slight difficulty in finding the solution. This slight dependence is understandable given the complexity of the nonlinear functions f_4 and f_7. Nonetheless, the method is shown to be stable and robust, with a convergence order of 3, verifying the theoretical results.
On the other hand, the MCCTU(100) method encounters significant difficulties in finding the solution. As the initial conditions move further away, the number of iterations increases. Despite lacking good stability characteristics, the method converges to the solution for initial estimates close to the root. However, for initial estimates that are far and very far from the root, it fails to converge in four out of ten cases. Additionally, the method never stabilizes the ACOC value in any case. These results confirm the theoretical instability of the method, as α = 100 lies outside the stability surface studied in Section 3.

Second Experiment: Efficiency Analysis of MCCTU(α) Family
In this experiment, we conduct a comparative study between an optimal method of the MCCTU(α) family and the fifteen fourth-order methods mentioned in the introduction of Section 4, to contrast their numerical performances on nonlinear equations. We consider the method associated with α = 1 and δ = 2, denoted as MCCTU(1), as the optimal stable member of the MCCTU(α) family with fourth order of convergence.
Thus, in Tables 4-14, we present the numerical results for the sixteen known methods, considering initial estimates that are close, far, and very far from the root, as well as the ten test equations. In Tables 4-7, we observe that MCCTU(1) consistently converges to the solution for initial estimates close to the root (x_0 ≈ ξ), with a similar number of iterations as other methods across all equations. The theoretical convergence order is confirmed by the ACOC, which is close to 4. However, what about the dependence of MCCTU(1) on the initial estimates? To answer this, we analyze the method for initial estimates far and very far from the solution, specifically for x_0 ≈ 3ξ and x_0 ≈ 10ξ, respectively. The results are shown in Tables 8-15.

The results presented in Tables 8-11 are promising. MCCTU(1) converges to the solution in nine out of the ten nonlinear equations, even when the initial estimate is far from the root (x_0 ≈ 3ξ). In these cases, the ACOC consistently stabilizes and approaches 4. Only in one instance, for the function f_4, does MCCTU(1) fail to converge, similar to the other thirteen methods. For this particular equation, only two methods successfully approximate the root. In the remaining equations, MCCTU(1) converges to the solution with a comparable number of iterations to other methods and even requires fewer iterations than Ostrowski's method, as seen with function f_10. Therefore, we confirm that this method is robust, consistent with the stability results shown in previous sections. The results presented in Tables 12-15 confirm the exceptional robustness of the MCCTU(1) method for initial estimates that are very far from the root (x_0 ≈ 10ξ), as the method converges in eight out of ten cases. A slight dependence on the initial estimate is observed for functions f_4 and f_7, where the method does not converge; however, in these two cases, the other methods also fail to approximate the solution, except for the ACCT2 method, which converges to the root of function f_7 in 50 iterations. The complexity of the nonlinear equations plays a significant role in finding their solutions. Moreover, in the cases where the MCCTU(1) method converges to the roots, it does so with a comparable number of iterations to other methods and often with fewer iterations, as seen in function f_2. Additionally, for these cases, the ACOC consistently stabilizes at values close to 4.
Therefore, based on the results of the second experiment, we conclude that the MCCTU(α) family demonstrates impressive numerical performance when using the optimal stable member with α = 1 as a representative, highlighting its robustness and efficiency even under challenging initial conditions. Overall, the selected MCCTU(1) method exhibits low errors and requires a similar or smaller number of iterations compared to the other methods. In certain cases, as the complexity of the nonlinear equation increases, the MCCTU(1) method outperforms Ostrowski's method and others. The theoretical convergence order is also confirmed by the ACOC, which is always close to 4.

Conclusions
The development of the parametric family of multistep iterative schemes MCCTU(α), based on the damped Newton scheme, has proven to be an effective strategy for solving nonlinear equations. The inclusion of an additional Newton step with a weight function and a "frozen" derivative significantly improved the convergence speed, raising a first-order class to a uniparametric third-order family.
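The structural template behind this construction, a Newton predictor followed by a corrector that reuses (freezes) the same derivative, can be illustrated with the classical two-step frozen-derivative scheme below. This is not the MCCTU(α) iteration itself, whose damping parameter and weight function are defined earlier in the paper; it is only a sketch of the template, with unit damping and a trivial weight:

```python
def two_step_frozen(f, df, x0, tol=1e-12, max_iter=100):
    """Two-step Newton scheme in which the corrector reuses f'(x_k).

    With unit damping and a trivial weight function this is the classical
    third-order scheme; MCCTU(alpha) adds a damping parameter and a weight
    function on the corrector step.
    """
    x = x0
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            return x, k
        y = x - fx / dfx        # Newton predictor (damping factor 1 here)
        x = y - f(y) / dfx      # corrector with the frozen derivative f'(x_k)
    return x, max_iter

root, iters = two_step_frozen(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Freezing the derivative keeps the per-step cost at two function evaluations and one derivative evaluation while still raising the order above Newton's method, which is the efficiency trade-off the family exploits.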
The numerical results confirm the robustness of the MCCTU(1) method for initial estimates close to the root (x 0 ≈ ξ), with very low errors and convergence in three or four iterations. As the initial estimates move further away (x 0 ≈ 3ξ and x 0 ≈ 10ξ), the method continues to show solid performance, converging in most cases and confirming its theoretical stability and robustness.
Through the analysis of stability surfaces and dynamical planes, specific members of the MCCTU(α) family with exceptional stability were identified. These members are particularly suitable for scalar functions with challenging convergence behavior, whose dynamical planes may exhibit attracting periodic orbits and strange fixed points. The MCCTU(1) member stood out for its optimal and stable performance.
In the comparative analysis, the MCCTU(1) method demonstrated superior numerical performance in many cases, requiring a similar or smaller number of iterations than well-established fourth-order methods such as Ostrowski's method. This superiority is especially notable in the more complex nonlinear equations, where MCCTU(1) outperforms several alternative methods.
The theoretical convergence order of the MCCTU(α) family was confirmed by calculating the approximate computational order of convergence (ACOC). In most cases, the ACOC value stabilized close to 3, validating the effectiveness and accuracy of the proposed methods both theoretically and practically. Additionally, it was confirmed that the convergence order of the method associated with α = 1 is optimal, achieving fourth-order convergence.
Finally, the analysis revealed that certain members of the MCCTU(α) family, particularly those with α values outside the stability surface, exhibited significant instability. These methods struggled to converge to the solution, especially when the initial estimates were far or very far from the root. For instance, the method with α = 100 failed to stabilize and did not meet the convergence criteria in four out of ten cases. Additionally, the ACOC values for this method did not stabilize, confirming its theoretical instability. This highlights the importance of selecting parameter values within the stability regions to ensure reliable performance.

Figure 3. Parameter plane of cr 2 (ϵ) on domain D 1 and a detail on D 2 .
Firstly, examples of methods within the stability region are provided for ϵ ∈ {1, 2, 10, 5 + 5i}. Their dynamical planes, along with their respective basins of attraction, are shown in Figure 4. Let us remark that all selected values of ϵ lie in the red area of the parameter plane and have only two basins of attraction, corresponding to x = 0 (orange in the figures) and x = ∞ (blue in the figures).
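A dynamical plane of this kind is generated by classifying a grid of initial points according to the basin of attraction they fall into. A minimal sketch of that classification, using Newton's method on p(z) = z² − 1 as a stand-in map, since the family's actual rational operator and its fixed points 0 and ∞ are defined earlier in the paper:

```python
def newton_step(z):
    # One Newton step for p(z) = z**2 - 1 (roots +1 and -1); a stand-in
    # for the family's rational operator, which is defined in the paper.
    return z - (z * z - 1) / (2 * z)

def basin(z0, roots=(1.0, -1.0), tol=1e-8, max_iter=50):
    """Index of the root z0 converges to, or -1 if it does not converge."""
    z = z0
    for _ in range(max_iter):
        if abs(z) < 1e-14:          # derivative vanishes; mark as non-convergent
            return -1
        z = newton_step(z)
        for i, r in enumerate(roots):
            if abs(z - r) < tol:
                return i
    return -1

# Classify a small grid of initial points, as in a dynamical plane;
# a plotting routine would assign one color per basin index.
grid = [complex(x / 10, y / 10) for x in range(-20, 21) for y in range(-20, 21)]
counts = [sum(1 for z in grid if basin(z) == i) for i in (0, 1)]
```

For this stand-in map the two basins are the right and left half-planes, so `counts` splits the off-axis grid evenly; for the family's operator the same loop, run over the chosen domain, produces the planes shown in Figure 4.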

Figure 4. Dynamical planes for some stable methods.

Nonlinear test equations and corresponding roots.

Numerical performance of iterative methods on nonlinear equations for x 0 close to ξ (1/4).

Numerical performance of iterative methods on nonlinear equations for x 0 close to ξ (2/4).

Numerical performance of iterative methods on nonlinear equations for x 0 close to ξ (3/4).

Numerical performance of iterative methods on nonlinear equations for x 0 close to ξ (4/4).

Numerical performance of iterative methods on nonlinear equations for x 0 far from ξ (4/4).

Numerical performance of iterative methods on nonlinear equations for x 0 very far from