Article

A New Perspective on the Convergence of Mean-Based Methods for Nonlinear Equations

by Alicia Cordero, María Emilia Maldonado Machuca and Juan R. Torregrosa *
Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(21), 3525; https://doi.org/10.3390/math13213525
Submission received: 7 September 2025 / Revised: 17 October 2025 / Accepted: 29 October 2025 / Published: 3 November 2025

Abstract

Many problems in science, engineering, and economics require the solution of nonlinear equations, often arising from attempts to model natural systems and predict their behavior. In this context, iterative methods provide an effective approach to approximate the roots of nonlinear functions. This work introduces five new parametric families of multipoint iterative methods specifically designed for solving nonlinear equations. Each family is built upon a two-step scheme: the first step applies the classical Newton method, while the second incorporates a convex mean, a weight function, and a frozen derivative (i.e., the same derivative from the previous step). The careful design of the weight function was essential to ensure fourth-order convergence while allowing arbitrary parameter values. The proposed methods are theoretically analyzed and dynamically characterized using tools such as stability surfaces, parameter planes, and dynamical planes on the Riemann sphere. These analyses reveal regions of stability and divergence, helping identify suitable parameter values that guarantee convergence to the root. Moreover, a general result proves that all the proposed optimal parametric families of iterative methods are topologically equivalent under conjugation. Numerical experiments confirm the robustness and efficiency of the methods, often surpassing classical approaches in terms of convergence speed and accuracy. Overall, the results demonstrate that convex-mean-based parametric methods offer a flexible and stable framework for the reliable numerical solution of nonlinear equations.

1. Introduction

In mathematics and engineering, many real-world and physical phenomena are modeled using nonlinear equations or systems. Solving such problems has led to the development of numerous numerical methods, which serve as fundamental tools for approximating solutions that cannot be found exactly or analytically.
Among these, iterative methods play a crucial role in finding the roots of nonlinear equations f ( x ) = 0 . Since a variety of iterative strategies exist, it becomes essential to evaluate their convergence order, stability, and computational efficiency. These aspects allow us to compare methods and choose the most appropriate one for a given problem.
Iterative methods are typically classified according to whether they are single-step or multi-step, with or without memory, and whether or not they require derivatives. In this context, we present new multipoint iterative methods aimed at approximating the zeros of nonlinear functions. These methods are inspired by the work in [1], which investigates and enhances Newton-type methods by incorporating convex combinations of classical means, achieving third-order convergence.
Motivated by these results, as well as by the contributions of Chun (2005) [2], King [3], Ostrowski [4], and more recently Artidiello [5], Abdullah et al. [6], and Zein [7], who designed multipoint schemes by starting from Newton’s method and then applying suitable correction steps, we introduce new families of iterative schemes based on composition and weight functions. These weight functions combine data from multiple evaluations of the function and its derivatives across iterations, improving both accuracy and efficiency. The proposed methods achieve fourth-order convergence and satisfy the optimality condition postulated by the Kung–Traub conjecture [8], which states
The order of convergence p of an iterative method without memory cannot exceed $2^{d-1}$, where d is the number of functional evaluations per iteration.
A method that reaches this bound is called optimal. This and other criteria described below allow us to classify the iterative methods.
Building upon these foundations, this paper introduces a unified theoretical and dynamical framework for mean-based iterative methods with weight functions. The main contributions are summarized as follows:
  • We propose five new parametric families of two-step, fourth-order, multipoint iterative methods. Each family combines (i) a Newton predictor, (ii) a convex-mean-based corrector, (iii) a frozen derivative, and (iv) a flexible weight function H ( μ ) with a free parameter β .
  • We derive explicit conditions on H ( μ ) that guarantee fourth-order convergence for various symmetric means (arithmetic, harmonic, counterharmonic).
  • We conduct a detailed dynamical analysis on the Riemann sphere, including stability surfaces, parameter planes, and dynamical planes, to study how the free parameter β affects convergence and stability.
  • We identify safe regions for the parameter β that ensure convergence to the root and prevent undesirable attractors, offering practical guidelines for parameter selection.
  • Finally, we present extensive numerical comparisons with recent optimal fourth-order methods (e.g., Artidiello [5], Zein [7], Zhao et al. [9,10]), showing that the proposed schemes exhibit comparable or superior accuracy and robustness.
One of the most widely used and foundational approaches in nonlinear root-finding is Newton’s method. It is defined as
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$
provided $f'(x_n) \neq 0$. Under appropriate smoothness conditions and for simple roots, the method exhibits quadratic convergence, meaning
$$|x_{n+1} - \alpha| \leq C\,|x_n - \alpha|^2,$$
for some constant $C > 0$, with $\alpha$ being a root of $f(x) = 0$.
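The iteration above can be sketched in a few lines; the test function, starting guess, and tolerance below are illustrative choices, not taken from the paper.

```python
# A minimal sketch of Newton's method, assuming f is smooth and
# f'(x_n) != 0 along the whole iteration.
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Return an approximation to a root of f and the number of iterations used."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n
        x = x - fx / df(x)   # Newton step: x_{n+1} = x_n - f(x_n)/f'(x_n)
    return x, max_iter

# Example: the simple root sqrt(2) of f(x) = x^2 - 2
root, iters = newton(lambda x: x*x - 2.0, lambda x: 2.0*x, x0=1.5)
```

The quadratic convergence is visible in practice: the number of correct digits roughly doubles per step, so machine precision is reached in a handful of iterations.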
In 2000, Weerakoon and Fernando [11] proposed a third-order variant of Newton’s method. This method replaces the rectangular approximation in the integral form of Newton’s method with a trapezoidal approximation, reducing truncation error and improving convergence. Their method, known as the trapezoidal or arithmetic mean method, is defined as
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{2 f(x_n)}{f'(x_n) + f'(y_n)}, \quad n = 0, 1, 2, \ldots$$
This method laid the seed for subsequent generalizations using other types of means. Researchers such as Chicharro et al. [12] and Cordero et al. [1] expanded this idea by incorporating various mathematical means to construct families of third-order methods:
$$x_{n+1} = x_n - \frac{f(x_n)}{M_m\left(f'(x_n), f'(y_n)\right)}, \quad n = 0, 1, 2, \ldots,$$
where y n denotes the Newton step and M m ( x , y ) represents the chosen mean applied to the values x and y.

1.1. Types of Means

Below are the different types of convex averages used in the literature for the design and analysis of various iterative methods. These concepts constitute a fundamental reference and will serve as a methodological basis for the development of our own iterative procedures.
Arithmetic Mean M A : The arithmetic mean of two real numbers x and y is given by
$$M_A(x, y) = \frac{x + y}{2}.$$
This mean appears in the trapezoidal scheme (2).
Harmonic Mean M H : The harmonic mean of two positive real numbers x and y is defined as
$$M_H(x, y) = \frac{2xy}{x + y}.$$
This mean is particularly sensitive to small values and is known for its use in rates and resistances. In the context of iterative methods, its reciprocal nature often yields improved stability under specific conditions. The following scheme arises by replacing the arithmetic mean in (2) with the harmonic one, as carried out in [13]:
$$x_{n+1} = x_n - \frac{f(x_n)\left(f'(x_n) + f'(y_n)\right)}{2 f'(x_n) f'(y_n)}, \quad n = 0, 1, 2, \ldots$$
Counterharmonic Mean M C : The counterharmonic mean is given by
$$M_C(x, y) = \frac{x^2 + y^2}{x + y},$$
and is always greater than or equal to the arithmetic mean. It accentuates larger values, making it suitable when higher magnitudes dominate the behavior of the function. When this mean is incorporated into iterative schemes, the obtained method, presented in [1,14], is
$$x_{n+1} = x_n - \frac{\left(f'(x_n) + f'(y_n)\right) f(x_n)}{\left(f'(x_n)\right)^2 + \left(f'(y_n)\right)^2}, \quad n = 0, 1, 2, \ldots$$
Different authors have proven that all these schemes have an order of convergence of three.
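The three mean-based schemes above differ only in the mean applied to $f'(x_n)$ and $f'(y_n)$, so they can be sketched with a single routine taking the mean as an argument; the test function and starting point below are illustrative.

```python
# Sketch of the mean-based third-order family: the corrector replaces f'(x_n)
# in Newton's method by a mean M of f'(x_n) and f'(y_n).
def mean_based_method(f, df, x0, mean, max_iter=50, tol=1e-13):
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            break
        y = x - f(x) / df(x)                 # Newton predictor
        x = x - f(x) / mean(df(x), df(y))    # corrector with the chosen mean
    return x

arithmetic      = lambda u, v: (u + v) / 2
harmonic        = lambda u, v: 2*u*v / (u + v)
counterharmonic = lambda u, v: (u*u + v*v) / (u + v)

f  = lambda x: x**3 - 2          # simple root 2**(1/3)
df = lambda x: 3*x**2
roots = [mean_based_method(f, df, 1.0, m)
         for m in (arithmetic, harmonic, counterharmonic)]
```

With the arithmetic mean this reproduces the trapezoidal scheme, while the harmonic and counterharmonic choices give the other two variants above.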

1.2. Some Characteristics of the Iterative Methods

To analyze an iterative method in depth, it is essential to understand certain concepts related to the mathematical notation used, the order of convergence, the efficiency index, and the computational order of convergence, as well as the fundamental theorems and conjectures that support the correct formulation of the proposed new multipoint methods. Each of these aspects is detailed below.
Order of convergence
The speed at which a sequence $\{x_n\}$ approaches a solution $\alpha$ is quantified by the order of convergence p. Formally, the sequence $\{x_n\}$ is said to converge to $\alpha$ with order $p \geq 1$ and asymptotic error constant $C > 0$ if
$$\lim_{n\to\infty} \frac{|x_{n+1} - \alpha|}{|x_n - \alpha|^p} = C.$$
This limit establishes an asymptotic relation that describes how the errors e n = x n α decay as the number of iterations increases. Specifically,
  • If p = 1 and 0 < C < 1 , the convergence is linear;
  • If p = 2 , the convergence is quadratic;
  • If p = 3 , the convergence is cubic;
  • For p > 3 , the method has higher-order convergence.
In practice, a high value of p implies faster convergence toward the root of f ( x ) = 0 , assuming that the constant C remains reasonably small. However, higher-order methods often require more functional evaluations per iteration, increasing the computational cost.
The error equation of an iterative method can be expressed as
$$e_{n+1} = C e_n^p + O(e_n^{p+1}), \quad C \in \mathbb{R},$$
where C is the asymptotic error constant and O ( e n p + 1 ) denotes higher-order terms that become negligible as n increases. This expression is central to the local convergence analysis of iterative schemes.
Numerical estimation of the order of convergence
Since the exact root α is typically unknown, practical estimation of the convergence order relies on approximate values of the iterates. Two widely used techniques are
  • The computational order of convergence (COC), defined in [11];
  • The approximate computational order of convergence (ACOC), defined in [15].
These tools are commonly used in numerical experimentation to assess the performance of iterative schemes.
The classical estimate (COC) [11], assuming knowledge of the root α , is given by
$$p_{\mathrm{COC}} = \frac{\ln\left|\dfrac{x_{n+1} - \alpha}{x_n - \alpha}\right|}{\ln\left|\dfrac{x_n - \alpha}{x_{n-1} - \alpha}\right|}, \quad n \geq 2.$$
When α is unknown, the ACOC [15] formula provides a root-free estimate using only the iterates:
$$p_{\mathrm{ACOC}} = \frac{\ln\left|\dfrac{x_{n+1} - x_n}{x_n - x_{n-1}}\right|}{\ln\left|\dfrac{x_n - x_{n-1}}{x_{n-1} - x_{n-2}}\right|}, \quad n \geq 3.$$
This tool allows us to approximate the theoretical order of convergence p without requiring knowledge of the exact solution. Its reliability increases as the iterates approach the root, provided that the errors remain sufficiently small to avoid numerical cancellation or round-off issues.
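The ACOC estimate can be sketched directly from its definition, assuming the last four iterates are available and pairwise distinct; the Newton iterates used below are an illustrative example whose estimated order should approach 2.

```python
import math

# Sketch of the ACOC estimate: only the iterates are used, so no knowledge
# of the exact root alpha is required.
def acoc(xs):
    """Approximate computational order of convergence from a list of iterates."""
    x3, x2, x1, x0 = xs[-1], xs[-2], xs[-3], xs[-4]
    num = math.log(abs(x3 - x2) / abs(x2 - x1))
    den = math.log(abs(x2 - x1) / abs(x1 - x0))
    return num / den

# Newton iterates for f(x) = x^2 - 2 starting at 1.5
xs = [1.5]
for _ in range(4):
    x = xs[-1]
    xs.append(x - (x*x - 2.0) / (2.0*x))
p = acoc(xs)   # expected to be close to 2 for Newton's method
```

As noted above, the estimate is only reliable while the differences between consecutive iterates remain well above round-off level.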
Efficiency index of an iterative method
To assess the computational efficiency of an iterative method, one must consider its convergence order and the number of functional evaluations per iteration. Ostrowski (1973) [4] introduced the efficiency index I, defined as
$$I = p^{1/d},$$
where p is the order of convergence and d is the total number of functional evaluations per iteration (including derivatives, if applicable). This index provides a comparative efficiency measure across methods with varying orders and computational demands.
More recently, the concept has been extended to the computational efficiency index (CEI) [16], which includes not only functional evaluations but also the products/quotients of the iterative method.
$$CEI = p^{1/(d + op)},$$
where op refers to the number of products/quotients performed in each iteration.
These indicators allow different iterative methods to be compared regarding convergence speed and total computational cost.
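Both indices are one-line formulas; the sketch below compares Newton's method ($p = 2$, $d = 2$) with an optimal fourth-order two-step scheme ($p = 4$, $d = 3$), giving $I \approx 1.414$ and $I \approx 1.587$, respectively. The op counts in the CEI example are illustrative assumptions, not figures from the paper.

```python
# Efficiency index I = p**(1/d) and computational efficiency index
# CEI = p**(1/(d + op)).
def efficiency_index(p, d):
    return p ** (1.0 / d)

def cei(p, d, op):
    return p ** (1.0 / (d + op))

newton_I   = efficiency_index(2, 2)   # Newton: order 2, evaluations f and f'
optimal4_I = efficiency_index(4, 3)   # optimal 4th order: f(x_n), f(y_n), f'(x_n)

# Hypothetical operation counts, for illustration only
newton_cei   = cei(2, 2, 1)
optimal4_cei = cei(4, 3, 3)
```

The comparison shows why optimal fourth-order schemes are attractive: they improve the index over Newton's method without extra functional evaluations.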
In this manuscript, Section 2 presents some known fourth-order iterative methods used in the numerical section for comparison. In Section 3, the new schemes are presented and their order of convergence is proven. Section 4 deals with the dynamical analysis of one of the proposed families of iterative schemes and a general result showing that the performance of all the fourth-order families is equivalent under conjugation. The best method in terms of stability is compared in the numerical section with known schemes. Two academic examples and two applied problems confirm the theoretical results. The manuscript ends with some conclusions and the references.

2. Some Fourth-Order Methods in the Literature

In recent decades, there has been an urgent need to develop iterative methods with high orders of convergence that do not require new functional evaluations or derivatives. Since Traub’s initial contributions [17] with his method, various approaches have been proposed to address this challenge. The following iterative expression defines Traub’s method:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - \frac{f(y_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$$
This scheme achieves cubic convergence without the need to evaluate the second derivative. Similarly, Jarratt [18] introduced a two-step iterative scheme:
$$y_n = x_n - \frac{2}{3}\frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = x_n - \frac{1}{2}\,\frac{3f'(y_n) + f'(x_n)}{3f'(y_n) - f'(x_n)}\,\frac{f(x_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots$$
This also avoids evaluating second derivatives and achieves fourth-order convergence.
Based on these ideas, many multipoint methods have been developed to achieve even higher convergence orders. The specialized literature, including the works of Chun [2], Ostrowski [4], and King [3], among others, offers a wide range of fourth-order schemes based on the adjustment of parameters and weight functions. These schemes will serve as benchmarks for comparison with the methods proposed in this work.
Among them, it is worth highlighting the family of fourth-order methods introduced by Artidiello [5]. It is based on a weight function H ( μ ) that generalizes several known schemes. Its formulation follows a two-step scheme:
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)}, \qquad x_{n+1} = y_n - H(\mu_n)\,\frac{f(y_n)}{f'(x_n)}, \quad n = 0, 1, 2, \ldots,$$
where $\mu = \frac{f(y)}{f(x)}$.
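As a concrete instance of scheme (7), the weight $H(\mu) = \frac{1}{1 - 2\mu}$ satisfies $H(0) = 1$, $H'(0) = 2$, $|H''(0)| < \infty$, and recovers Ostrowski's method. The sketch below is an illustrative implementation, with test function and seed chosen arbitrarily.

```python
# Sketch of the two-step family (7): Newton predictor plus a weighted
# corrector that reuses (freezes) the derivative f'(x_n).
def weighted_two_step(f, df, x0, H, max_iter=20, tol=1e-14):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                  # Newton predictor
        mu = f(y) / fx                    # mu_n = f(y_n)/f(x_n)
        x = y - H(mu) * f(y) / dfx        # corrector with frozen derivative
    return x

# H(mu) = 1/(1 - 2 mu) meets H(0)=1, H'(0)=2 and yields Ostrowski's method
H = lambda mu: 1.0 / (1.0 - 2.0*mu)
root = weighted_two_step(lambda x: x**3 - 2.0, lambda x: 3.0*x**2, 1.0, H)
```

Only three functional evaluations per iteration are needed ($f(x_n)$, $f(y_n)$, $f'(x_n)$), so the scheme is optimal in the Kung–Traub sense.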
Theorem 1
([5]). Let f : I R R be a sufficiently differentiable function on an open interval I containing a simple root α of f ( x ) = 0 , and H : R R be any sufficiently differentiable function satisfying
$$H(0) = 1, \quad H'(0) = 2, \quad |H''(0)| < \infty.$$
Then, for an initial estimate x 0 close enough to α, method (7) converges with an order of at least 4 and its error equation is
$$e_{n+1} = \left[\left(5 - \frac{H''(0)}{2}\right)c_2^3 - c_2 c_3\right] e_n^4 + O(e_n^5),$$
where $c_j = \frac{f^{(j)}(\alpha)}{j!\,f'(\alpha)}$, $j = 2, 3, \ldots$, and $e_n = x_n - \alpha$.
This theoretical framework not only unifies classical methods (such as those of Ostrowski or Chun) through specific choices of H ( μ ) , but also provides a rigorous basis for designing new schemes.
Likewise, Zein [7] proposed a generalized two-step multipoint method that extends classical approaches by introducing additional free parameters. The scheme is given by
$$y_n = x_n - \frac{2f(x_n)}{3f'(x_n)}, \qquad x_{n+1} = x_n - \tau\frac{f(x_n)}{f'(x_n)} - G(\eta_n)\,\frac{f(x_n)}{A f'(x_n) + B f'(y_n)}, \quad n = 0, 1, 2, \ldots,$$
where $\tau$, A, and B are parameters, and $\eta_n = 1 - \frac{f'(y_n)}{f'(x_n)}$.
Theorem 2
([7]). Let $\alpha \in I$ be a simple root of a sufficiently differentiable function $f : \mathbb{R} \to \mathbb{R}$ on an open interval I. If $x_0$ is sufficiently close to $\alpha$, and the weight function $G(\eta)$ satisfies
$$G(0) = (1 - \tau)(A + B), \quad G'(0) = \frac{3A - B}{4} + \tau B, \quad G''(0) = \frac{9A + 3B}{4},$$
then the scheme (8) converges to α with order four and satisfies the error equation
$$e_{n+1} = \left[\frac{405A + 189B - 32\,G'''(0)}{81(A+B)}\,c_2^3 - c_2 c_3 + \frac{1}{9}c_4\right] e_n^4 + O\left(e_n^5\right),$$
where $c_j = \frac{f^{(j)}(\alpha)}{j!\,f'(\alpha)}$, $j = 2, 3, \ldots$, provided that $A + B \neq 0$ and $|G'''(0)| < \infty$.
By selecting appropriate values for the parameters τ , A, and B, and ensuring the necessary conditions on the weight function G ( η n ) , the general method (8) achieves fourth-order convergence. It also encompasses methods such as those of Chun [2], Jarratt [18], Sharma and Bahl [19], Özban and Kaya [10], and Khirallah and Alkhomsan [20] as special cases.
Using expression (7), Table 1 summarizes several iterative methods that achieve fourth-order convergence thanks to the appropriate consideration of weight functions.
Each of these weight functions satisfies the conditions $H(0) = 1$, $H'(0) = 2$, $|H''(0)| < \infty$, so that each corresponding scheme attains fourth-order convergence.
On the other hand, by using (8), the iterative methods that achieve fourth-order convergence by appropriately choosing the values of the parameters $\tau$, A, and B, as well as the values of the weight functions, are summarized below. All methods have as their predictor step $y_n = x_n - \frac{2f(x_n)}{3f'(x_n)}$ and as their correction step the one shown in each of the tables in the following items:
  • If $\tau = 0$, $A = 1$, and $B = 0$, the conditions in (9) imply $G(0) = 1$, $G'(0) = \frac{3}{4}$, and $G''(0) = \frac{9}{4}$. Using these values, different weight functions can be defined, appearing in Table 2.
  • If $\tau = 0$, $A = 0$, and $B = 1$, the conditions in (9) give $G(0) = 1$, $G'(0) = -\frac{1}{4}$, and $G''(0) = \frac{3}{4}$. So, Table 3 shows the resulting scheme.
  • If $\tau = \frac{2}{3}$, $A = 1$, and $B = 1$, the weight function $G(\eta)$ must satisfy $G(0) = \frac{2}{3}$, $G'(0) = \frac{7}{6}$, and $G''(0) = 3$. Table 4 shows the resulting scheme.
  • New methods introduced in [7]:
    (i)
    If $\tau = \frac{2}{3}$, $A = 1$, and $B = -2$, the conditions in (9) imply $G(0) = -\frac{1}{3}$, $G'(0) = -\frac{1}{12}$, and $G''(0) = \frac{3}{4}$. Table 5 shows the resulting scheme.
    (ii)
    If $\tau = \frac{2}{3}$, $A = 4$, and $B = -9$, the conditions in (9) yield $G(0) = -\frac{5}{3}$, $G'(0) = -\frac{3}{4}$, and $G''(0) = \frac{9}{4}$. So, Table 6 shows the method and its notation.
    (iii)
    If $\tau = 1$, $A = 7$, and $B = -15$, the conditions in (9) force $G(0) = 0$, $G'(0) = -6$, and $G''(0) = \frac{9}{2}$. In Table 7 we can see the resulting weight function and method.

3. General Framework of Mean-Based Iterative Methods

As discussed in Section 1.1, the mean-based iterative schemes introduced by [1] exhibit cubic convergence and are therefore not optimal under the Kung–Traub conjecture. Recent studies, such as those by [5,7], have demonstrated that this order of convergence can be increased from three to four by employing appropriate weight functions, without requiring additional functional or derivative evaluations.
In this work, five new families of mean-based iterative methods are proposed, all following this same principle. Each family incorporates a specifically designed weight function to raise the order of convergence from three to four, thereby producing optimal schemes according to the Kung–Traub conjecture. Moreover, the stability analysis of these families shows that there exists a conjugation under which the qualitative performance of all the optimal methods derived from the different means is equivalent. In these terms, the dependence on the initial estimate of all these methods has been unified.
In the following sections, a bounded real parameter $\beta$ is introduced within the weight function, with the aim of defining parametric families of methods and analyzing their dynamical richness. In particular, we study the effect of replacing the weight function $H(\mu_n)$ with a parametric version, where the parameter $\beta$ introduces an additional degree of freedom that enriches the dynamical behavior of the methods. Through this analysis, we identify fixed and critical points, parameter planes, and dynamical planes, making it possible to determine the values of $\beta$ that provide greater stability and efficiency in solving nonlinear equations.
We propose a general multipoint method in which $y_n$ is the Newton step, $M_T$ is an arbitrary mean applied to $f(x_n)$ and $f(y_n)$, and $H(\mu)$ is a weight function, with $\mu = \frac{f(y)}{f(x)}$.
By choosing different symmetric means $M_T$, such as the arithmetic means ($M_A$) and ($M_{Ay}$), the harmonic mean ($M_{Hy}$), and the counterharmonic means ($M_C$) and ($M_{Cy}$), we obtain several iterative schemes with different correction steps, as summarized in Table 8.
Theorem 3.
Let $f : I \subset \mathbb{R} \to \mathbb{R}$ be a sufficiently differentiable function on an open interval I containing a simple root $\alpha \in I$, that is, $f(\alpha) = 0$ and $f'(\alpha) \neq 0$. The multipoint iterative method is defined by a Newton step
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},$$
followed by a corrector step
$$x_{n+1} = x_n - H(\mu_n)\,\frac{M_T\left(f(x_n), f(y_n)\right)}{f'(x_n)}, \quad n = 0, 1, \ldots,$$
where
  • $M_T(f(x_n), f(y_n))$ is a symmetric bivariate function representing a mean ($M_A$, $M_{Ay}$, $M_{Hy}$, $M_C$, or $M_{Cy}$);
  • $\mu = \frac{f(y)}{f(x)}$, which is the variable of the weight function H;
  • $H(\mu)$ is a weight function with a Taylor expansion around $\mu = 0$:
    $$H(\mu) = H(0) + H'(0)\mu + \frac{1}{2}H''(0)\mu^2 + \frac{1}{6}H'''(0)\mu^3 + O(\mu^4).$$
If the coefficients H(0), H'(0), H''(0), and H'''(0) satisfy the specific conditions given in Table 9, then the methods achieve fourth-order convergence.
All methods achieve fourth-order convergence, with the generalized error equation
$$e_{n+1} = \left(\gamma_1 c_2^3 + \gamma_2 c_2 c_3\right) e_n^4 + O(e_n^5),$$
where
$$c_j = \frac{1}{j!}\,\frac{f^{(j)}(\alpha)}{f'(\alpha)}, \quad j = 2, 3, \ldots,$$
and e n = x n α represents the n-th iteration error. The constants γ 1 and γ 2 depend on the chosen mean and on the coefficients of the weight function H ( μ ) appearing in Table 9.
Proof. 
Let e n = x n α denote the error at the n-th iteration. Since f is sufficiently differentiable and α is a simple root, we can use Taylor expansions of f ( x n ) and f ( x n ) around α . In terms of e n , we have
$$f(x_n) = f(\alpha) + f'(\alpha)e_n + \frac{f''(\alpha)}{2!}e_n^2 + \frac{f'''(\alpha)}{3!}e_n^3 + \frac{f^{(4)}(\alpha)}{4!}e_n^4 + O(e_n^5).$$
Since f ( α ) = 0 , it follows that
$$f(x_n) = f'(\alpha)e_n + \frac{f''(\alpha)}{2!}e_n^2 + \frac{f'''(\alpha)}{3!}e_n^3 + \frac{f^{(4)}(\alpha)}{4!}e_n^4 + O(e_n^5).$$
We simplify the calculations using the expression of the constants (12):
$$f(x_n) = f'(\alpha)\left[e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4\right] + O(e_n^5).$$
In a similar way,
$$f'(x_n) = f'(\alpha) + f''(\alpha)e_n + \frac{f'''(\alpha)}{2}e_n^2 + \frac{f^{(4)}(\alpha)}{6}e_n^3 + O(e_n^4) = f'(\alpha)\left[1 + 2c_2 e_n + 3c_3 e_n^2 + 4c_4 e_n^3\right] + O(e_n^4).$$
Therefore,
$$\frac{f(x_n)}{f'(x_n)} = e_n - c_2 e_n^2 + (2c_2^2 - 2c_3)e_n^3 + (-4c_2^3 + 7c_2c_3 - 3c_4)e_n^4 + O(e_n^5),$$
and the error in the approximation of y n becomes
$$e_n^y = e_n - \left[e_n - c_2 e_n^2 + (2c_2^2 - 2c_3)e_n^3 + (-4c_2^3 + 7c_2c_3 - 3c_4)e_n^4 + O(e_n^5)\right] = c_2 e_n^2 - (2c_2^2 - 2c_3)e_n^3 + (4c_2^3 - 7c_2c_3 + 3c_4)e_n^4 + O(e_n^5).$$
In a similar way, we have
$$f(y_n) = f'(\alpha)e_n^y + \frac{f''(\alpha)}{2!}(e_n^y)^2 + \frac{f'''(\alpha)}{3!}(e_n^y)^3 + \frac{f^{(4)}(\alpha)}{4!}(e_n^y)^4 + O((e_n^y)^5),$$
and we rewrite the expansion in terms of normalized coefficients (12):
$$f(y_n) = f'(\alpha)\left[e_n^y + c_2(e_n^y)^2 + c_3(e_n^y)^3 + c_4(e_n^y)^4\right] + O((e_n^y)^5).$$
Therefore,
$$f(x_n) + f(y_n) = f'(\alpha)\left[e_n + 2c_2 e_n^2 + \left(-2c_2^2 + 3c_3\right)e_n^3 + \left(5c_2^3 - 7c_2c_3 + 4c_4\right)e_n^4\right] + O(e_n^5).$$
We calculate, by direct division,
$$\mu_n = \frac{f(y_n)}{f(x_n)} = c_2 e_n + (-3c_2^2 + 2c_3)e_n^2 + (8c_2^3 - 10c_2c_3 + 3c_4)e_n^3 + O(e_n^4).$$
Since μ n = f ( y n ) f ( x n ) 0 as n , we expand the weight function H ( μ ) in a Taylor series around μ = 0 :
$$H(\mu) = H(0) + H'(0)\mu + \frac{1}{2}H''(0)\mu^2 + \frac{1}{6}H'''(0)\mu^3 + O(\mu^4).$$
Substituting the expression in terms of e n , we obtain:
$$H(\mu_n) = H(0) + H'(0)c_2 e_n + \left[\frac{1}{2}c_2^2\left(H''(0) - 6H'(0)\right) + 2c_3 H'(0)\right]e_n^2 + \left[c_2^3\left(8H'(0) - 3H''(0) + \frac{H'''(0)}{6}\right) + 2c_3 c_2\left(H''(0) - 5H'(0)\right) + 3c_4 H'(0)\right]e_n^3 + O(e_n^4).$$
Now, we detail the expansion of M T ( f ( x n ) , f ( y n ) ) using the arithmetic mean M A in Table 8. For the rest of the means, all calculations are analogous.
We return to (13) to substitute it into the expression
$$\frac{f(x_n) + f(y_n)}{2f'(x_n)} = \frac{e_n}{2} - c_2^2 e_n^3 + \left(\frac{9c_2^3}{2} - \frac{7c_2c_3}{2}\right)e_n^4 + O\left(e_n^5\right),$$
multiplied by (14):
$$H(\mu_n)\,\frac{f(x_n) + f(y_n)}{2f'(x_n)} = \frac{H(0)}{2}e_n + \frac{1}{2}c_2 H'(0)e_n^2 + \left[\frac{1}{4}c_2^2\left(-4H(0) - 6H'(0) + H''(0)\right) + c_3 H'(0)\right]e_n^3 + \frac{1}{12}\left[c_2^3\left(54H(0) + 36H'(0) - 18H''(0) + H'''(0)\right) - 6c_3c_2\left(7H(0) + 10H'(0) - 2H''(0)\right) + 18c_4 H'(0)\right]e_n^4 + O(e_n^5).$$
Thus, the error equation of M A in Table 8 expands to
$$x_{n+1} - \alpha = \left(1 - \frac{H(0)}{2}\right)e_n - \frac{1}{2}c_2 H'(0)e_n^2 + \left[c_2^2\left(H(0) + \frac{3}{2}H'(0) - \frac{1}{4}H''(0)\right) - c_3 H'(0)\right]e_n^3 - \frac{1}{12}\left[c_2^3\left(54H(0) + 36H'(0) - 18H''(0) + H'''(0)\right) - 6c_3c_2\left(7H(0) + 10H'(0) - 2H''(0)\right) + 18c_4 H'(0)\right]e_n^4 + O(e_n^5).$$
By solving the system obtained from eliminating the first-, second-, and third-order error terms, we get
$$H(0) = 2, \quad H'(0) = 0, \quad H''(0) = 8, \quad |H'''(0)| < \infty,$$
so, the error equation of the method based on the arithmetic mean M A becomes
$$e_{n+1} = \frac{1}{12}\left[\left(36 - H'''(0)\right)c_2^3 - 12c_2c_3\right]e_n^4 + O(e_n^5).$$
This finishes the proof for the case of the arithmetic mean. The order of convergence of the remaining methods is obtained in a similar way, replacing the mean function M T and using the corresponding values of the coefficients of H ( μ ) presented in Table 9. Proceeding in this manner, the convex-mean-based methods indicated in Table 8 achieve an optimal fourth-order convergence. This can be checked by using the available Supplementary Material of the manuscript. □
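The expansions in this proof can be checked mechanically with a computer algebra system. The sketch below (our own verification, not part of the original derivation) normalizes $f'(\alpha) = 1$, chooses the admissible weight $H(\mu) = 2 + 4\mu^2$ (so that $H(0) = 2$, $H'(0) = 0$, $H''(0) = 8$, $H'''(0) = 0$), and confirms that the error terms of orders one to three vanish for the arithmetic-mean method, with the fourth-order coefficient reducing to $3c_2^3 - c_2c_3$.

```python
import sympy as sp

# Mechanical check of the arithmetic-mean case, assuming f'(alpha) = 1
# (a harmless normalization) and H(mu) = 2 + 4*mu**2.
e, c2, c3, c4 = sp.symbols('e c2 c3 c4')
ORDER = 5  # keep terms up to e**4

def trunc(expr):
    return sp.series(expr, e, 0, ORDER).removeO()

fx  = e + c2*e**2 + c3*e**3 + c4*e**4    # f(x_n) in powers of e_n = x_n - alpha
fpx = sp.diff(fx, e)                     # f'(x_n)

ey = trunc(e - fx/fpx)                   # error of the Newton predictor y_n
fy = trunc(ey + c2*ey**2 + c3*ey**3 + c4*ey**4)   # f(y_n)

mu = trunc(fy/fx)                        # mu_n = f(y_n)/f(x_n)
H  = 2 + 4*mu**2                         # weight with the required coefficients
MA = (fx + fy)/2                         # arithmetic mean of f(x_n), f(y_n)

e_next = sp.expand(trunc(e - H*MA/fpx))  # error e_{n+1} of the corrector
low_order = [sp.simplify(e_next.coeff(e, k)) for k in (1, 2, 3)]
c4_coeff = sp.factor(e_next.coeff(e, 4))  # should equal c2*(3*c2**2 - c3)
```

The same script, with the mean and the coefficient conditions of Table 9 swapped in, can be reused to check the remaining four families.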

4. Dynamical Analysis

The order of convergence of an iterative method is not the only relevant criterion when evaluating its performance. In fact, the dynamics of the method, that is, the behavior of its orbits under different initial estimations, plays a fundamental role in its overall analysis. In this section, we analyze the qualitative properties of the different mean-based optimal families designed, finding the best performance in terms of wideness of the basins of attraction and finding the similarities among them. To achieve this aim, tools from complex analysis are used [21,22,23].
We start from a rational function resulting from the application of an iterative method to a polynomial of low degree p ( z ) , denoted by R : C ^ C ^ , where C ^ = C { } denotes the Riemann sphere. The orbit of a point z 0 C ^ is given by the sequence
$$\left\{ z_0,\; R(z_0),\; R^2(z_0),\; \ldots,\; R^n(z_0),\; \ldots \right\}.$$
We are interested in studying the asymptotic behavior of the orbits of the rational operator R. A point $\hat{z}$ is k-periodic, with $k \geq 1$, if
$$R^k(\hat{z}) = \hat{z}, \qquad \text{and} \qquad R^p(\hat{z}) \neq \hat{z} \ \text{ for } 1 \leq p < k.$$
We say that it is a fixed point of R if R ( z ^ ) = z ^ . If this fixed point is not a root of the polynomial p ( z ) , it is called a strange fixed point. These points are numerically undesirable, since the iterative method can converge on them under certain initial guesses [22].
The asymptotical performance of these fixed points $z^*$ is classified according to the value of $|R'(z^*)|$. If $|R'(z^*)| < 1$, the fixed point is an attractor; if $|R'(z^*)| = 1$, it is called parabolic or indifferent; if $|R'(z^*)| > 1$, it is repelling; and if $|R'(z^*)| = 0$, it is a superattractor.
On the other hand, the study of the basin of attraction of an attractor z * is defined by
$$\mathcal{A}(z^*) = \left\{ z_0 \in \widehat{\mathbb{C}} : \lim_{n\to\infty} R^n(z_0) = z^* \right\}.$$
The Fatou set F is the union of the basins of attraction. The Julia set J is its topological complement in the Riemann sphere and represents the union of the boundaries of the basins of attraction.
A point $z^* \in \widehat{\mathbb{C}}$ is critical for R if $R'(z^*) = 0$. The following classical result, given by Fatou [24] and Julia [25], includes both periodic points (of any period) and fixed points, considered periodic points of unit period.
Theorem 4.
Let R be a rational function. The immediate basins of attraction of each attractive periodic point contain at least one critical point.
Using this key result, any attracting behavior can be found using the critical points as seeds of the iterative process [26].
In order to obtain global results, we prove a Scaling Theorem for the iterative methods designed.

4.1. Conjugacy Classes

Let f and g be two analytic functions defined on the Riemann sphere. An analytic conjugacy between f and g is a diffeomorphism h on the Riemann sphere such that $h \circ f \circ h^{-1} = g$.
We now state a general result that holds for all types of symmetric means described in Table 8.
Theorem 5.
Let $f : \widehat{\mathbb{C}} \to \mathbb{C}$ be an analytic function on the Riemann sphere, let $h(z) = \alpha z + \beta$ be an affine transformation with $\alpha \neq 0$, and let $g(z) = \lambda f(h(z))$, with $\lambda \neq 0$. Let us consider the iterative scheme defined by
$$G_f(z) = y_f - H(\mu_f)\,\frac{M_T\left(f(z), f(y_f)\right)}{f'(z)},$$
where $y_f = z - \frac{f(z)}{f'(z)}$ is Newton's method, with $M_T$ being one of the means that provide the schemes $M_{Ay}$, $M_A$, $M_{Hy}$ or $M_C$, $M_{Cy}$. Here, $H(\mu)$ satisfies the conditions indicated in Table 9.
Then, (17) is analytically conjugate to the analogous method applied to g, that is
$$(h \circ G_g \circ h^{-1})(z) = G_f(z),$$
sharing the same essential dynamics.
Proof. 
To prove the general result, we consider a particular case of the mean. For the rest of the methods, the proof is analogous. We choose the case of M A y , whose scheme is given by
$$x_{n+1} = y_n - H(\mu_n)\,\frac{f(x_n) + f(y_n)}{2f'(x_n)}.$$
As can be seen, its structure is representative of the methods included in Table 8. We know that the affine function $h(z) = \alpha z + \beta$ has an inverse given by $h^{-1}(z) = \frac{z - \beta}{\alpha}$.
By hypothesis, g ( z ) = λ f ( h ( z ) ) = λ f ( α z + β ) . By the chain rule, we obtain
$$g(h^{-1}(z)) = \lambda f(z), \qquad g'(h^{-1}(z)) = \lambda\alpha f'(z).$$
Defining the operator G g ( z ) as
$$G_g(z) = y_g - H(\mu_g)\,\frac{M_T\left(g(z), g(y_g)\right)}{g'(z)},$$
and evaluated at h 1 ( z ) , we obtain
$$G_g(h^{-1}(z)) = h^{-1}(z) - \frac{g(h^{-1}(z))}{g'(h^{-1}(z))} - H\left(\mu_g(h^{-1}(z))\right)\frac{g(h^{-1}(z)) + g\left(h^{-1}(z) - \frac{g(h^{-1}(z))}{g'(h^{-1}(z))}\right)}{2g'(h^{-1}(z))}.$$
Using the identities from (20) and taking into account that
$$h\left(h^{-1}(z) - \frac{g(h^{-1}(z))}{g'(h^{-1}(z))}\right) = z - \frac{f(z)}{f'(z)},$$
we deduce that
$$g\left(h^{-1}(z) - \frac{g(h^{-1}(z))}{g'(h^{-1}(z))}\right) = \lambda f\left(z - \frac{f(z)}{f'(z)}\right).$$
Substituting (20) and (23) into (22), we obtain
$$G_g(h^{-1}(z)) = \frac{z-\beta}{\alpha} - \frac{\lambda f(z)}{\lambda\alpha f'(z)} - H\left(\mu_g(h^{-1}(z))\right)\frac{\lambda f(z) + \lambda f\left(z - \frac{f(z)}{f'(z)}\right)}{2\lambda\alpha f'(z)} = \frac{z-\beta}{\alpha} - \frac{f(z)}{\alpha f'(z)} - H\left(\mu_g(h^{-1}(z))\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2\alpha f'(z)}.$$
Now, we apply the transformation h:
$$h\left(G_g(h^{-1}(z))\right) = \alpha\left[\frac{z-\beta}{\alpha} - \frac{f(z)}{\alpha f'(z)} - H\left(\mu_g(h^{-1}(z))\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2\alpha f'(z)}\right] + \beta = z - \frac{f(z)}{f'(z)} - H\left(\mu_g(h^{-1}(z))\right)\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2f'(z)}.$$
Using (23), we observe that the term $\mu_g(h^{-1}(z))$ transforms as
$$\mu_g(h^{-1}(z)) = \frac{g\left(h^{-1}(z) - \frac{g(h^{-1}(z))}{g'(h^{-1}(z))}\right)}{g(h^{-1}(z))} = \frac{f\left(z - \frac{f(z)}{f'(z)}\right)}{f(z)} = \mu_f(z).$$
Substituting this identity into the previous expression, we finally deduce
$$h\left(G_g(h^{-1}(z))\right) = z - \frac{f(z)}{f'(z)} - H(\mu_f(z))\,\frac{f(z) + f\left(z - \frac{f(z)}{f'(z)}\right)}{2f'(z)} = G_f(z),$$
which proves the desired identity (18), and confirms that G f and G g are analytically conjugate through the affine transformation h ( z ) . □
The same reasoning extends directly to the methods in Table 8, since
  • In each case, the correction term maintains the form $M_T\left(f(z), f(y_f)\right)$, with symmetric combinations based on means;
  • The affine transformation h acts compatibly on both $f(z)$ and $f(y_f)$, preserving the functional structure of the correction;
  • The identity $\mu_g(h^{-1}(z)) = \mu_f(z)$ holds, since it depends only on the ratio $f(y_f)/f(z)$, scaled by $\lambda$.
Therefore, the result holds for the entire family of iterative methods based on symmetric means, as described in Table 8.

4.2. Dynamics of Fourth-Order Classes

As shown in Table 8, five different parametric families of iterative methods are identified, each associated with specific conditions on the weight function H(μ). To satisfy these conditions, polynomial weight functions have been selected in this section. However, other functional forms could also be considered, provided that they meet the required smoothness and boundedness conditions. This framework also allows for the introduction of an additional free parameter β. The inclusion of this parameter provides greater flexibility and allows for a more in-depth dynamical analysis of the proposed schemes.
Table 10 presents the chosen polynomials for the development of this section.
Here, $\mu_n = \frac{f(y_n)}{f(x_n)}$ and β is a free complex parameter.
To analyze the dynamics of these iterative methods, we start with the arithmetic mean family M A y , which is defined as
$$y_n = x_n - \frac{f(x_n)}{f'(x_n)},$$
$$x_{n+1} = y_n - \left(2\mu_n + 2\mu_n^2 + \beta\,\mu_n^3\right)\frac{f(x_n) + f(y_n)}{2 f'(x_n)}.$$
The other cases are studied later in a similar way; this can be checked using the available Supplementary Material of the manuscript.
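As an illustration, the scheme above can be run directly. The following Python sketch (double precision, standing in for the 200-digit arithmetic of Section 5) implements the $M_{A_y}$ iteration with the cubic weight $H(\mu) = 2\mu + 2\mu^2 + \beta\mu^3$; the function names and tolerances are illustrative.

```python
import math

def m_ay(f, df, x0, beta=0.0, tol=1e-13, max_iter=100):
    """Two-step M_Ay method: Newton predictor plus arithmetic-mean corrector
    with weight H(mu) = 2*mu + 2*mu**2 + beta*mu**3 and frozen derivative."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:                  # residual already below tolerance
            break
        dfx = df(x)
        y = x - fx / dfx                   # y_n: Newton step
        fy = f(y)
        mu = fy / fx                       # mu_n = f(y_n)/f(x_n)
        x = y - (2*mu + 2*mu**2 + beta*mu**3) * (fx + fy) / (2*dfx)
    return x

# Academic example: f(x) = cos(x) - x, starting from x0 = 0 (cf. Section 5.1)
root = m_ay(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 0.0)
```

With β = 0 the iteration reaches the root $x^* \approx 0.739085$ in a handful of steps, consistent with fourth-order behavior.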

4.3. Rational Operator

Proposition 1.
Let us consider the quadratic polynomial p(z) = (z − a)(z − b), with roots a and b. The rational operator related to family $M_{A_y}$ given in (26), applied to p(z) after a Möbius map, is
$$R_p(z,\beta) = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (92-\beta)z^2 + (44-3\beta)z + 8 - \beta\right)}{(8-\beta)z^6 + (44-3\beta)z^5 + (92-\beta)z^4 + 96z^3 + 54z^2 + 16z + 2},$$
with β C being an arbitrary parameter.
Proof. 
We apply the iterative scheme M A y to p ( z ) and obtain a rational function A p ( z , β ) that depends on the roots a and b and the parameter β C . Then, we apply a Möbius transformation [23,27,28] on A p ( z , β ) with
$$h(z) = \frac{z-a}{z-b},$$
which satisfies h(a) = 0, h(b) = ∞, and h(∞) = 1. This transformation maps the roots a and b to the points 0 and ∞, respectively, and the divergence of the method to 1. Thus, the new conjugate rational operator is defined as
(28) $R_p(z,\beta) := \left(h \circ A_p(z,\beta) \circ h^{-1}\right)(z)$,
(29) $\displaystyle \phantom{R_p(z,\beta)} = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (92-\beta)z^2 + (44-3\beta)z + 8 - \beta\right)}{(8-\beta)z^6 + (44-3\beta)z^5 + (92-\beta)z^4 + 96z^3 + 54z^2 + 16z + 2},$
which no longer depends on the parameters a and b. □
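Proposition 1 can be spot-checked numerically: applying one $M_{A_y}$ step to p(z) = z² − 1 (so a = 1, b = −1) and conjugating with h must reproduce the closed-form operator. The Python sketch below assumes the weight $H(\mu) = 2\mu + 2\mu^2 + \beta\mu^3$ and writes $R_p$ in a sign-normalized but equivalent form; names are illustrative.

```python
def m_ay_step(x, beta):
    """One M_Ay step applied to p(z) = z**2 - 1."""
    fx, dfx = x*x - 1.0, 2.0*x
    y = x - fx/dfx
    fy = y*y - 1.0
    mu = fy/fx
    return y - (2*mu + 2*mu**2 + beta*mu**3) * (fx + fy) / (2*dfx)

def R(z, beta):
    """Conjugate rational operator of Proposition 1 (sign-normalized form)."""
    num = z**4 * (2*z**6 + 16*z**5 + 54*z**4 + 96*z**3
                  + (92 - beta)*z**2 + (44 - 3*beta)*z + 8 - beta)
    den = ((8 - beta)*z**6 + (44 - 3*beta)*z**5 + (92 - beta)*z**4
           + 96*z**3 + 54*z**2 + 16*z + 2)
    return num / den

h = lambda x: (x - 1.0) / (x + 1.0)        # Moebius map: 1 -> 0, -1 -> inf

# h(one M_Ay step at x0) should equal R(h(x0)) for any x0 and beta
gap = abs(h(m_ay_step(2.0, 1.0)) - R(h(2.0), 1.0))
```

The gap is of the order of rounding error for the tested pairs (x₀, β), supporting the algebra of the proof.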
Thus, this transformation facilitates the analysis of the dynamics of iterative methods by allowing the standardization of roots and the structural study of dynamical planes and their stability regions [16].

4.4. Fixed Points of the Operator

Now, we calculate all the fixed points of $R_p(z,\beta)$, to subsequently analyze their character (attractive, repulsive, neutral, or parabolic). Taking into account that the method has order four, the points z = 0 and z = ∞ are always superattracting fixed points, since they come from the roots of the polynomial.
It is easy to prove that the fixed points of $R_p(z,\beta)$ are z = 0, z = ∞, and nine strange fixed points:
  • z = 1, which is a strange fixed point if $\beta \neq \frac{312}{5}$, and
  • The roots of the polynomial
$$P_\beta(t) = 2 + 18t + 72t^2 + (160+\beta)t^3 + (208+3\beta)t^4 + (160+\beta)t^5 + 72t^6 + 18t^7 + 2t^8,$$
    denoted by $ex_i(\beta)$, $i = 1, 2, \ldots, 8$, for any β ∈ ℂ.
Now, we study the stability of the strange fixed point z = 1 .
Proposition 2.
The strange fixed point z = 1 of $R_p(z,\beta)$ has the following character:
  • If $\beta = \frac{312}{5}$, z = 1 is not a fixed point.
  • If $\left|\beta - \frac{312}{5}\right| > \frac{1024}{5}$, z = 1 is an attractor.
  • If $\left|\beta - \frac{312}{5}\right| = \frac{1024}{5}$, z = 1 is parabolic.
  • If $\left|\beta - \frac{312}{5}\right| < \frac{1024}{5}$, z = 1 is repulsive.
Proof. 
As seen in the previous section, the behavior of a fixed point $z^*$ can be determined according to the value of the stability function $|R_p'(z^*,\beta)|$. The expression of the operator $R_p'(z,\beta)$ is
$$R_p'(z,\beta) = \frac{2z^3(z+1)^8\left[(32-4\beta)z^4 + (156-7\beta)z^3 + (248+12\beta)z^2 + (156-7\beta)z + 32 - 4\beta\right]}{\left[(8-\beta)z^6 + (44-3\beta)z^5 + (92-\beta)z^4 + 96z^3 + 54z^2 + 16z + 2\right]^2}.$$
Therefore,
$$R_p'(1,\beta) = \frac{1024}{312 - 5\beta}.$$
If $\beta = \frac{312}{5}$, then z = 1 is not a fixed point. To determine whether it is attractive or repulsive for the remaining values of β, we solve
$$\left|\frac{1024}{312 - 5\beta}\right| \le 1 \iff 1024^2 \le |312 - 5\beta|^2.$$
Expressing the right side in terms of $\Re(\beta)$ and $\Im(\beta)$,
$$|312 - 5(\Re(\beta) + i\,\Im(\beta))|^2 = (312 - 5\,\Re(\beta))^2 + 25\,\Im(\beta)^2.$$
So,
$$1024^2 \le 312^2 - 3120\,\Re(\beta) + 25\,\Re(\beta)^2 + 25\,\Im(\beta)^2.$$
By simplifying, we get
$$\left(\Re(\beta) - \frac{312}{5}\right)^2 + \Im(\beta)^2 \ge \left(\frac{1024}{5}\right)^2,$$
and thus,
$$\left|\beta - \frac{312}{5}\right| \ge \frac{1024}{5}.$$
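The stability disk just derived is easy to certify in code: on the circle $|\beta - 312/5| = 1024/5$ the multiplier of z = 1 has modulus one, outside it z = 1 attracts, and inside it repels. A minimal Python check (sample points are illustrative):

```python
import cmath

def stability_z1(beta):
    """Multiplier of the strange fixed point z = 1 (stability function)."""
    return 1024.0 / (312.0 - 5.0*beta)

center, radius = 312.0/5.0, 1024.0/5.0

# modulus of the multiplier at eight points on the critical circle
on_circle = [abs(stability_z1(center + radius*cmath.exp(2j*cmath.pi*t/8)))
             for t in range(8)]
outside = abs(stability_z1(center + 2*radius))   # expected < 1 (attracting)
inside  = abs(stability_z1(0.0))                 # beta = 0, expected > 1 (repulsive)
```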
Graphically, the behavior of the fixed point z = 1 is visualized in Mathematica using the graph of the function $\left|\frac{1024}{312-5\beta}\right|$. For each stability function, its 3D representation (called stability surface) is constructed. In this context, the graphical representation distinguishes the orange regions as zones of the complex plane where the strange fixed point is attracting, $|R_p'(ex_i(\beta))| < 1$; the gray regions as zones of repulsion, $|R_p'(ex_i(\beta))| > 1$; the vertex of the cone, where the point is superattracting, $|R_p'(ex_i(\beta))| = 0$; and parabolic zones at the boundary, $|R_p'(ex_i(\beta))| = 1$.
In Figure 1, the attraction zone is the yellow area and the repulsion zone corresponds to the gray area. That is, for values of β within the disk, z = 1 is repulsive, while for values of β outside the gray disk, z = 1 becomes attractive. Therefore, it is natural to select values within the gray disk: since z = 1 is the image of the divergence of the method, its repulsive character improves the performance of the iterative scheme.
For the eight roots $ex_i(\beta)$, $i = 1, \ldots, 8$, of the polynomial $P_\beta(t)$, we obtain the following results:
  • $|R_p'(ex_1(\beta))| \neq 0$, for all values of β;
  • $|R_p'(ex_{2,3}(\beta))| = 0$, for $\beta_1 \approx 0.408822$ and $\beta_2 \approx 18.0802$;
  • $|R_p'(ex_{4,6}(\beta))| = 0$, for $\beta_{3,4} \approx 0.782768 \pm 1.1103i$;
  • $|R_p'(ex_5(\beta))| \neq 0$, for all values of β;
  • $|R_p'(ex_{7,8}(\beta))| = 0$, for $\beta_5 \approx 78.5858$.
In Figure 2, we represent the stability functions of the strange fixed points $ex_i(\beta)$, $i = 1, 2, \ldots, 8$.
From Figure 2, the following conclusions are drawn:
  • Since the derivative of the operator at the strange fixed points $ex_{1,5}(\beta)$ cannot vanish, it can be seen in Figure 2a that the resulting surface has only one gray region. This indicates that these fixed points are repulsive throughout the analyzed range, which is desirable, as it prevents convergence to these strange fixed points.
  • Furthermore, at the points $ex_{2,3}(\beta)$, we obtain $\beta_1 \approx 0.408822$ and $\beta_2 \approx 18.0802$. Figure 2b shows an inverted cone-shaped surface (in yellow), representing an attractor inside the cone and a superattractor at its vertex $\beta_2$ (the surface for $\beta_1$ is similar, so it is omitted). The associated unstable domain is approximately $[0, 0.2] \times [0, 0.2]$, indicating a small but localized region. Similarly, by setting the derivative of the operator at the roots $ex_{4,6}(\beta)$ to zero, we obtain $\beta_3 \approx 0.782768 + 1.1103i$ and $\beta_4 \approx 0.782768 - 1.1103i$. Figure 2c,d show behavior qualitatively similar to that of $\beta_2$, with a comparable domain.
  • By setting the derivative of the operator at the roots $ex_{7,8}(\beta)$ to zero, we obtain $\beta_5 \approx 78.5858$. As illustrated in Figure 2e, a considerably wider region of attraction appears, approximately $[0, 100] \times [0, 100]$, indicating that the method shows marked instability for these values of β.
Therefore, to ensure the robustness of the method, values of β where some root e x i ( β ) is an attractor or superattractor should be avoided. In contrast, values of β where all strange fixed points are repulsors are preferable to ensure stable numerical behavior.
Just as we have studied the strange fixed points, we must also analyze the critical points since, recalling Theorem 4, each basin of attraction of an attractive periodic point (of any period) contains at least one critical point.

4.5. Critical Points of $R_p(z, \beta)$

Proposition 3.
The critical points of the rational operator $R_p(z,\beta)$ are z = 0 and z = ∞, directly related to the zeros of the polynomial p(z), together with the free critical points z = −1 and the four roots $z_{1,2}(\beta)$, $z_{3,4}(\beta)$ of the palindromic quartic equation
$$(32-4\beta)z^4 + (156-7\beta)z^3 + (248+12\beta)z^2 + (156-7\beta)z + 32 - 4\beta = 0,$$
which, by means of the substitution $w = z + \frac{1}{z}$, can be written as
$$z_{1,2}(\beta) = \frac{w_-(\beta) \pm \sqrt{w_-(\beta)^2 - 4}}{2}, \qquad z_{3,4}(\beta) = \frac{w_+(\beta) \pm \sqrt{w_+(\beta)^2 - 4}}{2},$$
where the auxiliary functions
$$w_\pm(\beta) = \frac{7\beta - 156 \pm \sqrt{369\beta^2 - 1800\beta + 784}}{8(8-\beta)}$$
are algebraic simplifications used for ease of notation.
Thus, there are five free critical points, except for β = 0, $\beta = \frac{312}{5}$, and β = 8, where only three free critical points exist.
Proof. 
To prove the result, we recall that $R_p'(z,\beta)$ was presented in (31). It is easily observed that its zeros are z = 0, z = −1, and the four roots $z_{1,2}(\beta)$ and $z_{3,4}(\beta)$ of the fourth-degree palindromic polynomial in the numerator of $R_p'(z,\beta)$, together with z = ∞, which corresponds to the other root of p(z).
Now, let us observe that for certain values of β, only three free critical points exist. One such case is β = 0, where the derivative of the operator simplifies to
$$R_p'(z,0) = \frac{2z^3(z+1)^6\left(8z^2 + 23z + 8\right)}{(2z+1)^2\left(2z^3 + 6z^2 + 4z + 1\right)^2}.$$
Here, the free critical points are z = −1 and the pair
$$z = \frac{1}{16}\left(-23 \pm \sqrt{273}\right).$$
When $\beta = \frac{312}{5}$, the derivative of the operator becomes
$$R_p'\!\left(z, \frac{312}{5}\right) = -\frac{10z^3(z+1)^8\left(272z^2 + 895z + 272\right)}{\left(136z^5 + 494z^4 + 420z^3 + 180z^2 + 45z + 5\right)^2}.$$
In this scenario, the free critical points are z = −1 and the pair
$$z = \frac{1}{544}\left(-895 \pm 3\sqrt{56121}\right).$$
And finally, when β = 8, the derivative of the operator becomes
$$R_p'(z,8) = \frac{2z^4(z+1)^8\left(25z^2 + 86z + 25\right)}{\left(10z^5 + 42z^4 + 48z^3 + 27z^2 + 8z + 1\right)^2},$$
whose zeros yield the free critical points z = −1 and the pair
$$z = \frac{1}{25}\left(-43 \pm 6\sqrt{34}\right).$$
To visualize the behavior of the free critical points that depend on β , we plot the parameter planes. In each parameter plane, we use each free critical point as an initial estimation. A mesh of 2000 × 2000 points is defined in the complex plane. Each point of the mesh corresponds to a value of β , that is, a member of the iterative method family, and for each one, we iterate the rational function R p ( z , β ) . If the orbit of the critical point converges to z = 0 or z = in a maximum of 100 iterations, the point is represented in red color; otherwise, it is colored black.
For the free critical point z = −1, we have $R_p(-1, \beta) = 1$, which is a strange fixed point. So, the parameter plane associated with the critical point z = −1 is not of much interest, since we already know the stability of z = 1.
As a first step, we graph in Figure 3 the parameter plane of the pair $z_{1,2}(\beta)$, in the domain $D_1 = [-150, 25] \times [-225, 225]$. In it, a broad region of stable performance around the origin is observed, and in the detail $D_2 = [-150, 25] \times [-60, 60]$, we see a black area related to the stability of the strange fixed points $ex_{7,8}(\beta)$.
Likewise, in Figure 4, the parameter plane of the pair $z_{3,4}(\beta)$ is shown, in the domain $D_3 = [-150, 275] \times [-210, 210]$, with a detail in the domain $D_4 = [-70, 270] \times [-100, 100]$, which reveals a complex region of values of β with no convergence to the roots, nor to strange fixed points, but to periodic orbits of different periods.
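The parameter-plane procedure described above can be sketched in a few lines of Python (the 2000 × 2000 meshes of the paper are computed in Matlab; here the mesh, thresholds, and the choice of the free critical point via the substitution $w = z + 1/z$ are illustrative):

```python
import cmath
import numpy as np

def R(z, beta):
    """Conjugate operator of the M_Ay family (sign-normalized form)."""
    num = z**4 * (2*z**6 + 16*z**5 + 54*z**4 + 96*z**3
                  + (92 - beta)*z**2 + (44 - 3*beta)*z + 8 - beta)
    den = ((8 - beta)*z**6 + (44 - 3*beta)*z**5 + (92 - beta)*z**4
           + 96*z**3 + 54*z**2 + 16*z + 2)
    return num / den

def crit(beta):
    """One free critical point z_1(beta), obtained via w = z + 1/z."""
    disc = cmath.sqrt(369*beta**2 - 1800*beta + 784)
    w = (7*beta - 156 - disc) / (8*(8 - beta))
    return (w - cmath.sqrt(w*w - 4)) / 2

def parameter_plane(z_crit, re, im, n=50, max_iter=100):
    """True (red): the orbit of the critical point reaches 0 or infinity."""
    red = np.zeros((n, n), dtype=bool)
    for i, y in enumerate(np.linspace(im[0], im[1], n)):
        for j, x in enumerate(np.linspace(re[0], re[1], n)):
            beta = complex(x, y)
            z = z_crit(beta)
            for _ in range(max_iter):
                if abs(z) < 1e-8 or abs(z) > 1e6:
                    red[i, j] = True
                    break
                try:
                    z = R(z, beta)
                except ZeroDivisionError:
                    break
            # unclassified points stay black (False)
    return red
```

Plotting the boolean array with red/black coloring reproduces, qualitatively, the stability regions of Figures 3 and 4.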

4.6. Dynamical Planes

In the case of dynamical planes, each point in the defined mesh of the complex plane is considered as a starting point z 0 of the iterative scheme and is represented with different colors depending on the point it converges to. In this case, points that converge to z = are colored blue, and those that converge to z = 0 are colored orange. These dynamical planes have been generated using a grid of 800 × 800 points and a maximum of 100 iterations per point. In these planes, the fixed points are represented by a white circle, the critical points by a white square, and the attracting points by an asterisk.
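The generation of a dynamical plane can be sketched analogously in Python (mesh size, escape thresholds, and the color codes 0/1/2 for orange/blue/black are illustrative choices standing in for the 800 × 800 Matlab grids):

```python
import numpy as np

def R(z, beta):
    """Conjugate operator of the M_Ay family (sign-normalized form)."""
    num = z**4 * (2*z**6 + 16*z**5 + 54*z**4 + 96*z**3
                  + (92 - beta)*z**2 + (44 - 3*beta)*z + 8 - beta)
    den = ((8 - beta)*z**6 + (44 - 3*beta)*z**5 + (92 - beta)*z**4
           + 96*z**3 + 54*z**2 + 16*z + 2)
    return num / den

def basin(z0, beta, max_iter=100):
    """0 -> converges to z = 0 (orange), 1 -> to infinity (blue), 2 -> other (black)."""
    z = complex(z0)
    for _ in range(max_iter):
        if abs(z) < 1e-8:
            return 0
        if abs(z) > 1e8:
            return 1
        try:
            z = R(z, beta)
        except ZeroDivisionError:
            return 1      # denominator vanished: the orbit is sent to infinity
    return 2

def dynamical_plane(beta, n=200, box=2.0):
    grid = np.empty((n, n), dtype=np.uint8)
    for i, y in enumerate(np.linspace(-box, box, n)):
        for j, x in enumerate(np.linspace(-box, box, n)):
            grid[i, j] = basin(complex(x, y), beta)
    return grid
```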
Next, the dynamical planes are plotted based on the values for β obtained from the stability analysis of the strange fixed points of R p ( z , β ) and from the observations in the parameter plane.
In Figure 5, methods with stable behavior can be found for β = 0 and β = 1 , with convergence only to the roots.
A notable case is $\beta = \frac{312}{5}$ since, as established in Proposition 2, when β takes that value, z = 1 is not a fixed point, as observed in Figure 6.
In Figure 6b, it can be clearly seen that z = 1 is no longer a strange fixed point of the method. Moreover, we recall that when $\left|\beta - \frac{312}{5}\right| < \frac{1024}{5}$, z = 1 is repulsive, as shown in Figure 5a,b, and when $\left|\beta - \frac{312}{5}\right| > \frac{1024}{5}$, z = 1 is an attractor (with a green basin of attraction), as shown in Figure 7.
Based on the previous study (see Figure 2e), when considering the value $\beta_5 = 78.5858$, complex dynamical behavior is observed in Figure 8. This behavior is related to the parameter plane of the critical points $z_{1,2}$ and to the superattracting character of the strange fixed points $ex_{7,8}$ for this value of β, which possess their own basins of attraction (in red and green). In this scenario, the method converges to elements different from the roots 0 and ∞, indicating that it is not suitable for root-finding. Therefore, values such as β = 78.5858 should be avoided when applying this method. Likewise, β = 200 provides another example of divergence: Figure 8b represents a value of β inside $D_4$, in the parameter plane of Figure 4. In this case, the black area corresponds to the basin of attraction of a periodic orbit of period 2.
The analysis of the remaining iterative methods presented in Table 10 has been carried out analogously to that described for the arithmetic-mean iterative scheme $M_{A_y}$ defined in Equation (27). However, a more detailed study reveals that all the methods based on convex means exhibit essentially equivalent dynamical behavior. In particular, in the next section, it is proven that the rational operators associated with each method are affine conjugate, which implies that they share the same structure of basins of attraction and Julia sets, up to scale transformations. This is, as far as we know, the first time this has been proven for a set of designed families. This result of dynamical equivalence is presented below in general terms, and the rational operators associated with each family of methods are shown explicitly.

4.7. Dynamic Equivalence Between Methods Based on Convex Means

The study of the dynamical behavior of iterative families constructed using convex means is performed on the generic quadratic polynomial p(z) = (z − a)(z − b), with simple roots a, b ∈ ℂ. For each family, a rational operator $R_p^{(i)}(z,\beta)$ depending on a complex parameter β is defined, which describes the qualitative performance of the iterative method applied to p(z). In what follows, the five rational operators associated with the different convex means considered are presented explicitly.
Proposition 4.
Let p(z) be the reference quadratic polynomial and β ∈ ℂ a free parameter. The normalized rational operators associated with the five iterative families considered are as follows:
(1) Method M A y , defined in (27):
$$R_p^{(A_y)}(z,\beta) = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (92-\beta)z^2 + (44-3\beta)z + 8 - \beta\right)}{(8-\beta)z^6 + (44-3\beta)z^5 + (92-\beta)z^4 + 96z^3 + 54z^2 + 16z + 2}.$$
(2) Method $M_A$:
$$R_p^{(A)}(z,\beta) = \frac{z^4\left(2z^6 + 16z^5 + 54z^4 + 96z^3 + (90-\beta)z^2 + (40-3\beta)z + 6 - \beta\right)}{(6-\beta)z^6 + (40-3\beta)z^5 + (90-\beta)z^4 + 96z^3 + 54z^2 + 16z + 2}.$$
(3) Method $M_{H_y}$:
$$R_p^{(H_y)}(z,\beta) = \frac{z^4\left(z^4 + 7z^3 + 18z^2 + 19z + 7 - 2\beta\right)}{(7-2\beta)z^4 + 19z^3 + 18z^2 + 7z + 1}.$$
(4) Method $M_C$:
$$R_p^{(C)}(z,\beta) = z^4\,\frac{E_\beta(z)}{F_\beta(z)},$$
where
$$E_\beta(z) = z^8 + 11z^7 + 52z^6 + 137z^5 + (218-\beta)z^4 + (211-4\beta)z^3 + (120-7\beta)z^2 + (37-4\beta)z + 5 - \beta,$$
and
$$F_\beta(z) = (5-\beta)z^8 + (37-4\beta)z^7 + (120-7\beta)z^6 + (211-4\beta)z^5 + (218-\beta)z^4 + 137z^3 + 52z^2 + 11z + 1.$$
(5) Method $M_{C_y}$:
$$R_p^{(C_y)}(z,\beta) = z^4\,\frac{W_\beta(z)}{D_\beta(z)},$$
where
$$W_\beta(z) = z^8 + 11z^7 + 52z^6 + 137z^5 + (219-\beta)z^4 + (214-4\beta)z^3 + (124-7\beta)z^2 + 4(10-\beta)z + 6 - \beta,$$
and
$$D_\beta(z) = (6-\beta)z^8 + 4(10-\beta)z^7 + (124-7\beta)z^6 + (214-4\beta)z^5 + (219-\beta)z^4 + 137z^3 + 52z^2 + 11z + 1.$$
Each of these operators represents the rational map induced by the corresponding iterative method on the Riemann sphere. The performance of these operators is analyzed by studying the parameter planes and the associated basins of attraction.
Now we introduce the characteristic quantities that allow us to establish the relationship between the different families: the scale r is defined by the stability function of z = 1 in each family of methods, while the relative scale factor corresponds to the ratio between each scale and that of the arithmetic family M A y .
Theorem 6.
Let $R_p^{(i)}(z,\beta)$ be the rational operator associated with each of the five iterative families considered and $R_p^{(A_y)}(z,\beta)$ the base operator corresponding to the arithmetic mean method $M_{A_y}$. Let us define the characteristic quantities (obtained from the stability analysis of z = 1 for each of the iterative methods):
$$r_{A_y}(\beta) = \frac{1024}{312 - 5\beta}, \quad r_A(\beta) = \frac{1024}{304 - 5\beta}, \quad r_H(\beta) = \frac{80}{26 - \beta}, \quad r_C(\beta) = \frac{2560}{792 - 17\beta}, \quad r_{C_y}(\beta) = \frac{2560}{804 - 17\beta}.$$
Let the relative scale factor also be
$$k_i(\beta) = \frac{r_i(\beta)}{r_{A_y}(\beta)}, \qquad i \in \{A, A_y, H, C, C_y\}.$$
Then, for each i, there exists an affine homothety ψ i ( z ) = k i ( β ) z such that
$$R_p^{(i)}(z,\beta) = \left(\psi_i \circ R_p^{(A_y)} \circ \psi_i^{-1}\right)(z) = k_i(\beta)\, R_p^{(A_y)}\!\left(\frac{z}{k_i(\beta)},\, \beta\right),$$
which proves that the rational operators are affine conjugate. Consequently, the five iterative families are dynamically equivalent: they share the same topology of fixed points, basins of attraction, and Julia sets, differing only by a scale transformation controlled by k i ( β ) .
The relative scale factors are given explicitly by
$$k_A(\beta) = \frac{312-5\beta}{304-5\beta}, \quad k_H(\beta) = \frac{5}{64}\cdot\frac{312-5\beta}{26-\beta}, \quad k_C(\beta) = \frac{5}{2}\cdot\frac{312-5\beta}{792-17\beta}, \quad k_{C_y}(\beta) = \frac{5}{2}\cdot\frac{312-5\beta}{804-17\beta}.$$
Each function k i ( β ) is strictly increasing on its domain, which implies that the relative scale factor varies monotonically with the parameter β.
Proof. 
The result follows from the structural relationship among the rational operators associated with each iterative family. The proof is organized in three complementary steps.
(i) Construction of the rational operators. Applying each method to the generic quadratic polynomial p ( z ) yields a rational operator R p ( i ) ( z , β ) after a Möbius map, whose algebraic form depends on the convex mean employed and on the complex parameter β . All these operators have the same rational nature and algebraic degree, differing only in the coefficients associated with higher-order terms, which act as scale factors in the dynamical plane.
(ii) Establishment of the affine relation. The quantities r i ( β ) characterize the dynamical scale of each method, as they describe the stability function of z = 1 as a strange fixed point (coming from the divergence before Möbius transformation). From them, the area of the complex plane for a repulsive performance of z = 1 is defined. This is the bound of the red area of the parameter planes where the stable performance of the methods is defined. Defining the relative scale factor
$$k_i(\beta) = \frac{r_i(\beta)}{r_{A_y}(\beta)}, \qquad i \in \{A, A_y, H, C, C_y\},$$
one observes that substituting $z \mapsto z / k_i(\beta)$ in the base operator $R_p^{(A_y)}(z,\beta)$, and multiplying by $k_i(\beta)$, reproduces exactly the rational structure of $R_p^{(i)}(z,\beta)$. That is,
$$R_p^{(i)}(z,\beta) = k_i(\beta)\, R_p^{(A_y)}\!\left(\frac{z}{k_i(\beta)},\, \beta\right) = \left(\psi_i \circ R_p^{(A_y)} \circ \psi_i^{-1}\right)(z),$$
where ψ i ( z ) = k i ( β ) z is an affine map. This identity proves that all operators are affine conjugate, sharing the same algebraic and dynamical structure and differing only by a global scale transformation controlled by k i ( β ) .
(iii) Monotonicity of the scale factor. From the explicit expressions of r i ( β ) , the scale factors k i ( β ) take the following closed forms:
$$k_A(\beta) = \frac{312-5\beta}{304-5\beta}, \quad k_H(\beta) = \frac{5}{64}\cdot\frac{312-5\beta}{26-\beta}, \quad k_C(\beta) = \frac{5}{2}\cdot\frac{312-5\beta}{792-17\beta}, \quad k_{C_y}(\beta) = \frac{5}{2}\cdot\frac{312-5\beta}{804-17\beta}.$$
Differentiation yields
$$k_A'(\beta) = \frac{40}{(304-5\beta)^2} > 0, \quad k_H'(\beta) = \frac{5}{64}\cdot\frac{182}{(26-\beta)^2} > 0, \quad k_C'(\beta) = \frac{5}{2}\cdot\frac{1344}{(792-17\beta)^2} > 0, \quad k_{C_y}'(\beta) = \frac{5}{2}\cdot\frac{1284}{(804-17\beta)^2} > 0.$$
So, each k i ( β ) is strictly increasing on its domain. Therefore, the affine transformation varies continuously and monotonically with the parameter β .
So, the rational operators R p ( i ) and R p ( A y ) are affine conjugate, implying full dynamical equivalence: they share the same structure of fixed points, basins of attraction, and Julia sets, differing only by a uniform scale factor k i ( β ) . Hence, all five iterative families belong to a single affine conjugacy class, and the analysis of one representative (e.g., M A y ) suffices to characterize the global dynamics of the entire family. □
The explicit forms of the scale factors k i ( β ) and their numerical reference values for β = 0 are summarized in Table 11. These results numerically confirm the relations observed in the parameter plane of each family.
Figure 9 presents the unified parameter planes obtained for each iterative family. In the unified parameter plane [29], the white color represents those values of the parameter that are simultaneously red in all the parameter planes of the same family, while the black color marks those values that are black in any of them.
The visual relationship between them is evident: the limit sets remain invariant except for the scale dictated by the factors k i ( β ) . This geometric correspondence provides an empirical verification of the affine conjugacy proven in Theorem 6.
Theorem 6 thus provides a rigorous mathematical foundation for the empirical evidence observed in the parameter spaces: all rational operators derived from the convex means considered are dynamically equivalent by affine conjugation. This equivalence justifies the reduction of the global analysis to a single representative method and establishes a unified framework for classifying convex-mean-based iterative schemes according to their affine dynamical equivalence.

5. Numerical Examples

The iterative methods used in this section are presented in Table 10. This table takes into account all the conditions established in Table 9 with respect to H(μ), with the aim of guaranteeing fourth-order convergence. In particular, the parameter value β = 0 is selected, given its favorable behavior observed in the dynamical analysis.
To evaluate the efficiency of the new iterative methods proposed, a comparison is made with classical and new algorithms in the literature, presented in Section 2.
This section evaluates the performance of the newly proposed multipoint iterative methods in comparison with several well-established fourth-order methods. The comparison includes the following performance indicators:
  • Number of iterations required for convergence (Iter).
  • Approximate computational order of convergence (ACOC).
  • Estimations of the errors (Incr, Incr2), where
$$Incr = |x_{n+1} - x_n|, \qquad Incr2 = |f(x_{n+1})|;$$
  • Efficiency index (EI), calculated with ACOC instead of p;
  • Execution time (Time), measured in seconds.
All the numerical tests have been conducted in Matlab R2024a, with variable-precision arithmetic of 200 digits of mantissa. The implemented algorithm uses as stopping criteria
$$|x_{n+1} - x_n| < 10^{-50}, \quad \text{or} \quad |f(x_{n+1})| < 10^{-50}.$$
If neither of these criteria is met, the procedure ends when the maximum number of 100 iterations is reached. In each table, the best results are marked in bold, for the indicators Incr, Incr2, and Time.
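For reference, this protocol can be mimicked in double precision (which caps attainable residuals far above the $10^{-50}$ used with 200-digit arithmetic, so the tolerance below is relaxed accordingly). The ACOC is estimated from the last consecutive increments; names are illustrative, and the $M_{A_y}$ scheme with β = 0 is used as the test method.

```python
import math

def acoc(xs):
    """Approximate computational order of convergence from consecutive increments."""
    e = [abs(xs[k+1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(e[-1]/e[-2]) / math.log(e[-2]/e[-3])

def solve(f, df, x0, beta=0.0, tol=1e-13, max_iter=100):
    """M_Ay iteration with the double stopping criterion of the experiments."""
    xs = [x0]
    for _ in range(max_iter):
        x = xs[-1]
        fx = f(x)
        if fx == 0.0:
            break
        dfx = df(x)
        y = x - fx/dfx
        fy = f(y)
        mu = fy/fx
        x1 = y - (2*mu + 2*mu**2 + beta*mu**3) * (fx + fy) / (2*dfx)
        xs.append(x1)
        if abs(x1 - x) < tol or abs(f(x1)) < tol:
            break
    return xs

xs = solve(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 0.0)
```

On $f_1(x) = \cos(x) - x$ from $x_0 = 0$, this stops after a handful of iterates, with an ACOC estimate close to the theoretical order four.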

5.1. Academic Example 1: $f_1(x) = \cos(x) - x$

The target root of the problem considered is $x^* \approx 0.7391$, obtained from the proposed nonlinear function. For the iterative process, $x_0 = 0$ was taken as the initial estimate. The results obtained under these conditions are presented in Table 12.
The results reported in Table 12 underscore the strong competitiveness of the newly developed methods when compared with classical and recent schemes. The proposed approaches exhibit comparable, and in several cases superior, performance in terms of both accuracy and computational efficiency. This is reflected not only in the reduced number of iterations required to approximate the root, but also in the exact convergence order of four achieved by several of the new methods. In particular, tiny residual errors were obtained, such as $|f(x_{n+1})| \approx 10^{-197}$ for method $M_{H_y}$ and exactly zero for method $M_{A_y}$.
Nevertheless, it is important to acknowledge the effective performance of the classical iterative schemes. For instance, the MEDJA4 method demonstrated remarkable robustness and efficiency, reaching errors of the order of $10^{-143}$ with relatively low computational cost.
On the other hand, the MED44 method, despite its high estimated order of convergence (ACOC = 4.25), produced significantly larger final errors ($\approx 10^{-18}$). This behavior may indicate the presence of numerical instabilities or sensitivity to the transcendental nature of the problem.

5.2. Academic Example 2: $f_2(x) = e^x - \cos(x) - 2$

The real root of this function is $x^* \approx 0.9488$, with an initial estimate $x_0 = 0.5$. The results are given in Table 13.
In this nonlinear and rapidly varying function, $M_{A_y}$ exhibits the most outstanding precision, with a final error on the order of $10^{-208}$, confirming its high robustness. The classical methods MEDJA4, MEDOS4, and MED44 maintain excellent convergence behavior with very low errors ($\approx 10^{-176}$) and lower execution times.
Overall, methods M C and M C y achieve almost the theoretical order of convergence with a low number of iterations, and their performance may improve with adaptive strategies. In contrast, methods such as M A y offer a balance between precision and convergence, suggesting their suitability for problems demanding extremely high accuracy.
The proposed methods M C , M C y , and M H y demonstrate solid fourth-order behavior, but exhibit larger errors compared to Example 1, indicating sensitivity to the exponential component of f 2 ( x ) .

5.3. Applied Problems

Problem 1.
Chemical Equilibrium in Ammonia Synthesis
The analysis of chemical equilibrium systems using numerical methods has been widely addressed in the scientific literature. Solving complex nonlinear equations that model fractional conversions in reactive processes such as ammonia synthesis requires robust and efficient techniques.
This work analyzes the chemical equilibrium corresponding to the ammonia synthesis reaction from nitrogen and hydrogen, in a molar ratio of 1:3 [30], under standard industrial conditions (500 °C and 250 atm). The equation that describes this reaction is the following:
$$f(x) = x^4 - 7.79075x^3 + 14.7445x^2 + 2.511x - 1.674,$$
where x is the fractional conversion. Among the four roots of this equation, only one, $x_1 \approx 0.27776$, lies within the physical interval [0, 1], and therefore it is the only one with chemical significance.
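The physically meaningful root can be confirmed independently of the iterative schemes, for instance with NumPy's companion-matrix root finder (an illustrative cross-check, not the procedure used in the paper):

```python
import numpy as np

# f(x) = x^4 - 7.79075x^3 + 14.7445x^2 + 2.511x - 1.674
coeffs = [1.0, -7.79075, 14.7445, 2.511, -1.674]
roots = np.roots(coeffs)

# keep only (numerically) real roots inside the physical interval [0, 1]
physical = sorted(r.real for r in roots if abs(r.imag) < 1e-8 and 0.0 <= r.real <= 1.0)
```

Exactly one root falls in [0, 1], $x_1 \approx 0.27776$, matching the fractional conversion reported in the text.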
In Table 14, it can be seen that all methods converge rapidly to the physically meaningful root, demonstrating high efficiency for this type of problem. For these results, we can use x 0 = 0.5 and the same stopping criteria as in the previous examples.
Method $M_{A_y}$ exhibits the best performance in terms of accuracy, reaching a residual on the order of $10^{-208}$, positioning it as the most precise among the set, at the cost of one additional iteration.
Among the classical methods, MEDOS4 and MEDZ4 stand out due to their excellent accuracy, with very small errors ($\approx 10^{-138}$ and $\approx 10^{-145}$, respectively) and ACOC.
Regarding the proposed methods M A , M C , M C y , and M H y , a consistent stability in the order of convergence and a good approximation in the errors can be observed, with results close to the classical methods, though without systematically surpassing them. Their computational performance is acceptable, although slightly inferior in terms of time.
In summary, for this chemical equilibrium problem, the methods with the best overall performance considering accuracy, efficiency, and stability are M A y , MEDOS4, and MEDZ4, all offering an ideal combination of minimal errors, fulfilled theoretical order of convergence, and low execution times.
Problem 2.
Determination of the Maximum in Planck’s Radiation Law
The study of blackbody radiation through numerical methods has been fundamental in the development of quantum physics. As noted in [31], determining the spectral maximum in Planck’s distribution requires advanced techniques to solve nonlinear transcendental equations.
We analyze the equation derived from Planck’s radiation law that determines the wavelength corresponding to the maximum energy density:
$$f_5(x) = e^{-x} - 1 + \frac{x}{5},$$
where $x = \frac{hc}{\lambda k T}$. Among the possible solutions, only $x \approx 4.9651142317$ has physical meaning in this context.
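The sensitivity to the starting point is easy to reproduce in double precision. In the sketch below (illustrative tolerances; the $M_{A_y}$ corrector is used with β = 0 and weight $H(\mu) = 2\mu + 2\mu^2$), plain Newton from the distant guess $x_0 = 1$ drifts to the trivial solution x = 0, which has no physical meaning here, while the mean-based scheme recovers the physical root:

```python
import math

f  = lambda x: math.exp(-x) - 1.0 + x/5.0       # Planck maximum condition
df = lambda x: -math.exp(-x) + 0.2

# Newton from x0 = 1 heads toward the trivial root x = 0
xn = 1.0
for _ in range(50):
    xn -= f(xn) / df(xn)

# M_Ay step (beta = 0) from the same x0 = 1 reaches the physical root instead
x = 1.0
for _ in range(20):
    fx = f(x)
    if abs(fx) < 1e-13:
        break
    dfx = df(x)
    y = x - fx/dfx                              # Newton predictor
    fy = f(y)
    mu = fy/fx
    x = y - (2*mu + 2*mu**2) * (fx + fy) / (2*dfx)
```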
Table 15 shows that all proposed methods $M_A$, $M_{A_y}$, $M_{H_y}$, $M_C$, and $M_{C_y}$ converge to the physically valid root $x^* \approx 4.9651142317$, even when starting from a distant initial condition ($x_0 = 1$).
Method M A y stands out for its quartic convergence (ACOC = 4.0) and a final error of exactly 0 (with 200 digits) in only five iterations, making it the most accurate method. Although it is slightly more computationally expensive, it achieves the highest relative efficiency (EI ≈ 1.587) among the proposed methods.
Methods $M_A$, $M_C$, and $M_{C_y}$ show quadratic convergence (ACOC ≈ 2), with errors of the order of $10^{-67}$ in only four iterations and low execution times. The method $M_{H_y}$ slightly improves the order of convergence (ACOC ≈ 2.67), which represents a good compromise between efficiency and robustness.
In contrast, several classical methods converge to values that do not represent the physically significant root. For example, MEDCH4 converges to a negative value ($x \approx -158.3$) with a clear divergence (Incr2 ≈ $10^{68}$). Meanwhile, the other methods diverge completely, presenting values with no physical meaning.
This highlights that, under unfavorable initial conditions, the proposed methods remain stable, while the classical ones are sensitive. Therefore, for problems such as Planck’s, methods based on the mean offer a more robust and reliable alternative.
In summary, all the analyzed methods proved to be efficient from a distant initial point, with M A y standing out for its stability and performance.

6. Conclusions

This manuscript presents a new perspective on the design, analysis, and dynamical behavior of fourth-order multipoint iterative methods, constructed through convex combinations of classical means and parameterized weight functions. By extending the Newton-type scheme and incorporating arithmetic, harmonic, and contraharmonic means, a versatile family of optimal methods is developed, complying with the Kung–Traub conjecture by achieving order four with a minimal number of functional evaluations. Other means, such as the Heronian and centroidal means, have also been tested, without positive results: the resulting iterative methods do not reach order four, so they are not optimal schemes.
The general formulation, grounded in a solid theoretical framework (Taylor expansions, affine conjugation, and local error analysis), enabled the derivation of explicit conditions on the weight functions to ensure fourth-order convergence. This has been complemented by a rigorous discrete dynamical system analysis using tools such as conjugated rational operators, stability surfaces, parameter planes, and dynamical planes on the Riemann sphere.
The results reveal that the proposed parametric families, particularly those associated with the modified arithmetic mean ( M A y ), exhibit stable convergence behavior over large regions of the complex parameter space β . Nevertheless, several regions of unstable performance have been identified, including attraction basins unrelated to the roots or convergence toward strange fixed points. Such zones, often visualized as black or green regions in the parameter and dynamical planes, must be avoided in practice. In this context, the detailed study of free critical points is fundamental, as each attractive basin must contain at least one critical point. The behavior of these points has provided early insights into the method’s stability and convergence characteristics. Noteworthy cases such as β = 78.5858 or β = 400 illustrate convergence toward undesirable fixed points (e.g., z = 1 ), despite proximity to the root z = 0 , emphasizing the importance of well-informed parameter selection.
A key contribution of this work is the discovery that all iterative families constructed from convex means are conjugate to each other. That is, their associated rational operators are affine conjugate, sharing identical topological structures of fixed points, basins of attraction, and Julia sets, differing only by a global scale factor. This remarkable equivalence implies that the dynamical behavior of the entire class of convex-mean methods can be completely represented by studying a single representative operator, such as M A y . Consequently, this finding provides a unified dynamical framework and reinforces the theoretical coherence of the proposed construction.
Finally, numerical experiments have confirmed the competitiveness of the proposed schemes. In various tests, both academic and applied, the convex-mean-based methods have shown efficient performance, comparable to that of classical and recent methods. In most of the problems considered, the proposed methods converge to the root in approximately three to five iterations. The only exception is the first applied example, which requires a greater number of iterations; even so, it reaches the root with errors of the order of $10^{-208}$. Likewise, in the second applied problem, their robustness is evident, as they are the only methods that converge to the root.
In summary, the proposed class of parametric iterative methods based on convex means not only achieves high computational efficiency but also, in light of the affine dynamical equivalence, constitutes a geometrically consistent and theoretically unified framework for the stable and predictive solution of nonlinear equations.
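To make the two-step structure concrete (Newton predictor, then a mean-based corrector with frozen derivative), the following is a minimal Python sketch of one member of the $M_{Ay}$ family with weight $H(\mu)=2\mu+2\mu^2+\beta\mu^3$ and $\mu_n=f(y_n)/f(x_n)$, applied to the academic test $f(x)=\cos(x)-x$. Function names, tolerances, and the stopping rule are illustrative choices, not the authors' implementation.

```python
import math

def m_ay(f, df, x0, beta=0.0, tol=1e-13, max_iter=25):
    """One member of the M_Ay family: a Newton predictor followed by a
    corrector built from the arithmetic mean of f(x_n) and f(y_n) over
    the frozen derivative f'(x_n), weighted by H(mu)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                       # first step: classical Newton
        fy = f(y)
        mu = fy / fx                           # weight variable mu_n = f(y_n)/f(x_n)
        h = 2 * mu + 2 * mu**2 + beta * mu**3  # weight function of the M_Ay family
        x = y - h * (fx + fy) / (2 * dfx)      # mean-based corrector, frozen f'(x_n)
    return x

root = m_ay(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
```

With $\beta=0$ and $x_0=1$, the sketch reaches the root of $\cos(x)-x$ to machine precision in a handful of iterations, consistent with the three-to-five-iteration behavior reported in the experiments.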

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/math13213525/s1.

Author Contributions

Validation, A.C. and J.R.T.; Formal analysis, J.R.T.; Investigation, M.E.M.M.; Resources, M.E.M.M.; Writing—original draft, M.E.M.M.; Writing—review & editing, A.C. and J.R.T.; Supervision, A.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the European Research Council (ERC) via Horizon Europe Advanced Grant, grant agreement nº 101097688 (“PeroSpiker”).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cordero, A.; Franceschi, J.; Torregrosa, J.R.; Zagati, A.C. A Convex Combination Approach for Mean-Based Variants of Newton’s Method. Symmetry 2019, 11, 1106. [Google Scholar] [CrossRef]
  2. Chun, C. Some fourth-order iterative methods for solving nonlinear equations. Appl. Math. Comput. 2008, 195, 454–459. [Google Scholar] [CrossRef]
  3. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879. [Google Scholar] [CrossRef]
  4. Ostrowski, A.M. Solution of Equations in Euclidean and Banach Spaces; Academic Press: New York, NY, USA, 1973. [Google Scholar]
  5. Artidiello Moreno, S.D.J. Diseño, Implementación y Convergencia de métodos Iterativos para Resolver Ecuaciones y Sistemas no Lineales Utilizando Funciones Peso. Ph.D. Thesis, Universitat Politècnica de València, Valencia, Spain, 2014. [Google Scholar]
  6. Abdullah, S.; Choubey, N.; Dara, S.; Junjua, M.U.D.; Abdullah, T. A robust and optimal iterative algorithm employing a weight function for solving nonlinear equations with dynamics and applications. Axioms 2024, 13, 675. [Google Scholar] [CrossRef]
  7. Zein, A. A New Family of Optimal Fourth-Order Iterative Methods for Solving Nonlinear Equations With Applications. J. Appl. Math. 2024, 2024, 9955247. [Google Scholar] [CrossRef]
  8. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM (JACM) 1974, 21, 643–651. [Google Scholar] [CrossRef]
  9. Zhao, L.; Wang, X.; Guo, W. New families of eighth-order methods with high efficiency index for solving nonlinear equations. WSEAS Trans. Math. 2012, 11, 283–293. [Google Scholar]
  10. Özban, A.Y.; Kaya, B. A new family of optimal fourth-order iterative methods for nonlinear equations. Results Control. Optim. 2022, 8, 100157. [Google Scholar] [CrossRef]
  11. Weerakoon, S.; Fernando, T. A variant of Newton’s method with accelerated third-order convergence. Appl. Math. Lett. 2000, 13, 87–93. [Google Scholar] [CrossRef]
  12. Chicharro, F.I.; Cordero, A.; Martínez, T.H.; Torregrosa, J.R. Mean-based iterative methods for solving nonlinear chemistry problems. J. Math. Chem. 2020, 58, 555–572. [Google Scholar] [CrossRef]
  13. Özban, A.Y. Some new variants of Newton’s method. Appl. Math. Lett. 2004, 17, 677–682. [Google Scholar] [CrossRef]
  14. Ababneh, O.Y. New Newton’s method with third-order convergence for solving nonlinear equations. World Acad. Sci. Eng. Technol. 2012, 61, 1071–1073. [Google Scholar]
  15. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  16. Zúñiga, A.G.; Cordero, A.; Torregrosa, J.R.; Soto, J.P. Diseño y análisis de la convergencia y estabilidad de métodos iterativos para la resolución de ecuaciones no lineales. Rev. Digit. Matemática Educ. Internet 2021, 21, 1–27. [Google Scholar]
  17. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Society: Providence, RI, USA, 1982; Volume 312. [Google Scholar]
  18. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437. [Google Scholar] [CrossRef]
  19. Sharma, R.; Bahl, A. An optimal fourth order iterative method for solving nonlinear equations and its dynamics. J. Complex Anal. 2015, 2015, 259167. [Google Scholar] [CrossRef]
  20. Khirallah, M.; Alkhomsan, A. Convergence and stability of optimal two-step fourth-order and its expanding to sixth order for solving nonlinear equations. Eur. J. Pure Appl. Math. 2022, 15, 971–991. [Google Scholar] [CrossRef]
  21. Blanchard, P. The dynamics of Newton’s method. In Proceedings of Symposia in Applied Mathematics; American Mathematical Society: Providence, RI, USA, 1994; Volume 49, pp. 139–154. [Google Scholar]
  22. Maimo, J.G. Análisis Dinámico y Numérico de Familias de Métodos Iterativos para la Resolución de Ecuaciones no Lineales y su Extensión a Espacios de Banach. Ph.D. Thesis, Universitat Politècnica de València, Valencia, Spain, 2017. [Google Scholar]
  23. Blanchard, P. Complex analytic dynamics on the Riemann sphere. Bull. Am. Math. Soc. 1984, 11, 85–141. [Google Scholar] [CrossRef]
  24. Fatou, P. Sur les frontières de certains domaines. Bull. Soc. Math. Fr. 1923, 51, 16–22. [Google Scholar] [CrossRef]
  25. Julia, G. Mémoire sur l’itération des fonctions rationnelles. J. Math. Pures Appl. 1918, 1, 47–245. [Google Scholar]
  26. Moscoso-Martínez, M.; Chicharro, F.I.; Cordero, A.; Torregrosa, J.R.; Ureña-Callay, G. Achieving optimal order in a novel family of numerical methods: Insights from convergence and dynamical analysis results. Axioms 2024, 13, 458. [Google Scholar] [CrossRef]
  27. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 35. [Google Scholar]
  28. Scott, M.; Neta, B.; Chun, C. Basin attractors for various methods. Appl. Math. Comput. 2011, 218, 2584–2599. [Google Scholar] [CrossRef]
  29. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the choice of the best members of the Kim family and the improvement of its convergence. Math. Method Appl. Sci. 2020, 43, 8051–8066. [Google Scholar] [CrossRef]
  30. Nuevo, D. La síntesis de Amoniaco. Published 19 March 2024 by TECPA Formación de Ingenieros. 2024. Available online: https://www.tecpa.es/la-sintesis-de-amoniaco/ (accessed on 20 July 2025).
  31. González, S.L. Procedimiento didáctico para el estudio de la fórmula de Planck en carreras de ingeniería. Cad. Bras. Ensino Física 2021, 38, 270–292. [Google Scholar] [CrossRef]
Figure 1. Stability function of z = 1 .
Figure 2. Stability surface of strange fixed points $ex_i(\beta)$ in attraction zones.
Figure 3. Plane of parameters of $z_{1,2}(\beta)$ in domain $D_1$ and a detail in $D_2$.
Figure 4. Plane of parameters of $z_{1,2}(\beta)$ in domain $D_3$ and a detail in $D_2$.
Figure 5. Dynamical planes corresponding to methods with stable performance.
Figure 6. Dynamical plane when z = 1 is not a fixed point.
Figure 7. Green basin of attraction of z = 1 .
Figure 8. Dynamical planes corresponding to methods with unstable performance.
Figure 9. Unified parameter planes for the iterative families.
Table 1. Summary of fourth-order iterative methods.
Author | $H(\mu)$ | Iterative Method (with $y_n = x_n - \dfrac{f(x_n)}{f'(x_n)}$ in all cases)
Chun (MEDCH4) [2] | $1+2\mu$ | $x_{n+1} = y_n - \dfrac{f(x_n)+2f(y_n)}{f(x_n)}\cdot\dfrac{f(y_n)}{f'(x_n)}$
Ostrowski (MEDOS4) [4] | $\dfrac{1}{1-2\mu}$ | $x_{n+1} = y_n - \dfrac{f(x_n)}{f(x_n)-2f(y_n)}\cdot\dfrac{f(y_n)}{f'(x_n)}$
King (MEDK4) [3] | $\dfrac{1+(2+\beta)\mu}{1+\beta\mu}$ | $x_{n+1} = y_n - \dfrac{f(x_n)+(2+\beta)f(y_n)}{f(x_n)+\beta f(y_n)}\cdot\dfrac{f(y_n)}{f'(x_n)}$
Kung–Traub (MEDKT4) [8] | $\dfrac{1}{(1-\mu)^2}$ | $x_{n+1} = y_n - \dfrac{f(x_n)^2}{\left(f(x_n)-f(y_n)\right)^2}\cdot\dfrac{f(y_n)}{f'(x_n)}$
Zhao et al. (MEDZ4) [9] | $\dfrac{1+2\mu+\mu^2}{1-4\mu^2}$ | $x_{n+1} = y_n - \dfrac{f(x_n)^2+2f(x_n)f(y_n)+f(y_n)^2}{f(x_n)^2-4f(y_n)^2}\cdot\dfrac{f(y_n)}{f'(x_n)}$
Artidiello (MED44) [5] | $\dfrac{(1+\mu)^2}{1-5\mu^2}$ | $x_{n+1} = y_n - \dfrac{\left(f(x_n)+f(y_n)\right)^2}{f(x_n)^2-5f(y_n)^2}\cdot\dfrac{f(y_n)}{f'(x_n)}$
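All the methods in Table 1 share the same driver and differ only in the weight $H(\mu)$, so the framework can be sketched generically. The helper below assumes the corrector $x_{n+1} = y_n - H(\mu_n)\,f(y_n)/f'(x_n)$ with $\mu_n = f(y_n)/f(x_n)$; the test function and starting point are illustrative, not from the paper.

```python
def weight_method(f, df, x0, H, tol=1e-12, max_iter=50):
    """Generic two-step scheme behind Table 1: Newton predictor plus a
    weighted corrector x_{n+1} = y_n - H(mu_n) f(y_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx           # Newton predictor
        fy = f(y)
        x = y - H(fy / fx) * fy / dfx
    return x

f = lambda x: x**3 - 2.0           # simple cubic with root 2**(1/3)
df = lambda x: 3.0 * x**2

ostrowski = lambda mu: 1.0 / (1.0 - 2.0 * mu)                      # MEDOS4
king = lambda mu, b=1.0: (1.0 + (2.0 + b) * mu) / (1.0 + b * mu)   # MEDK4, beta = 1

r1 = weight_method(f, df, 1.5, ostrowski)
r2 = weight_method(f, df, 1.5, king)
```

Swapping in any other row of the table is a one-line change of the weight lambda.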
Table 2. Known methods corresponding to τ = 0 , A = 1 , and B = 0 .
Author | Weight Function $G(\eta)$ | Iterative Expression
Jarratt's method (MEDJA4) [18] | $G(\eta) = \dfrac{4-3\eta}{4-6\eta}$ | $x_{n+1} = x_n - \dfrac{3f'(y_n)+f'(x_n)}{6f'(y_n)-2f'(x_n)}\cdot\dfrac{f(x_n)}{f'(x_n)}$
Sharma and Bahl (MEDSB4) [19] | $G(\eta) = -\dfrac{1+3\eta}{8}+\dfrac{9}{8-8\eta}$ | $x_{n+1} = x_n - \left(-\dfrac{1}{2}+\dfrac{9}{8}\,\dfrac{f'(x_n)}{f'(y_n)}+\dfrac{3}{8}\,\dfrac{f'(y_n)}{f'(x_n)}\right)\dfrac{f(x_n)}{f'(x_n)}$
Table 3. Known methods corresponding to τ = 0 , A = 0 , and B = 1 .
Author | Weight Function $G(\eta)$ | Iterative Expression
Ozban and Kaya (MEDOK4) [10] | $G(\eta) = \dfrac{16(1-\eta)^2}{3(1-\eta)^2+22(1-\eta)-9}$ | $x_{n+1} = x_n - \dfrac{16f'(y_n)^2}{-9f'(x_n)^2+22f'(x_n)f'(y_n)+3f'(y_n)^2}\cdot\dfrac{f(x_n)}{f'(y_n)}$
Table 4. Known methods corresponding to τ = 2 3 , A = 1 , and B = 1 .
Author | Weight Function $G(\eta)$ | Iterative Expression
Khirallah and Alkhomsan (MEDKA4) [20] | $G(\eta) = \dfrac{85(1-\eta)-41(1-\eta)^2}{66-120\eta}$ | $x_{n+1} = y_n - \dfrac{85f'(x_n)f'(y_n)-41f'(y_n)^2}{-54f'(x_n)^2+120f'(x_n)f'(y_n)}\cdot\dfrac{f(x_n)}{f'(x_n)+f'(y_n)}$
Table 5. Known methods corresponding to $\tau = \frac{2}{3}$, $A = -1$, and $B = 2$.
Author | Weight Function $G(\eta)$ | Iterative Expression
ZM41 [7] | $G(\eta) = \dfrac{39\eta^2+4\eta-32}{36\eta-96}$ | $x_{n+1} = y_n + \dfrac{11f'(x_n)^2-82f'(x_n)f'(y_n)+39f'(y_n)^2}{60f'(x_n)^2+36f'(x_n)f'(y_n)}\cdot\dfrac{f(x_n)}{2f'(y_n)-f'(x_n)}$
Table 6. Known methods corresponding to $\tau = \frac{2}{3}$, $A = -4$, and $B = 9$.
Author | Weight Function $G(\eta)$ | Iterative Expression
ZM42 [7] | $G(\eta) = \dfrac{20+39\eta}{12+18\eta}$ | $x_{n+1} = y_n - \dfrac{59f'(x_n)-39f'(y_n)}{30f'(x_n)-18f'(y_n)}\cdot\dfrac{f(x_n)}{9f'(y_n)-4f'(x_n)}$
Table 7. Known methods corresponding to $\tau = 1$, $A = -7$, and $B = 15$.
Author | Weight Function $G(\eta)$ | Iterative Expression
ZM43 [7] | $G(\eta) = \dfrac{48\eta}{8+3\eta}$ | $x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} - \dfrac{48f'(x_n)-48f'(y_n)}{11f'(x_n)-3f'(y_n)}\cdot\dfrac{f(x_n)}{15f'(y_n)-7f'(x_n)}$
Table 8. Iterative methods defined via different symmetric means M T , along with their abbreviations and corresponding correction steps.
Method | Mean | Correction Step
$M_A$ | Arithmetic | $x_{n+1} = x_n - H(\mu_n)\,\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$
$M_{Ay}$ | Arithmetic with $y_n$ | $x_{n+1} = y_n - H(\mu_n)\,\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$
$M_{Hy}$ | Harmonic with $y_n$ | $x_{n+1} = y_n - H(\mu_n)\,\dfrac{2f(x_n)f(y_n)}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
$M_C$ | Contraharmonic | $x_{n+1} = x_n - H(\mu_n)\,\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
$M_{Cy}$ | Contraharmonic with $y_n$ | $x_{n+1} = y_n - H(\mu_n)\,\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
Table 9. Coefficients of H ( μ ) and error equations.
Mean | Coefficients of $H(\mu)$ | Error Equation
$M_A$ | $H(0)=2,\ H'(0)=0,\ H''(0)=8,\ |H'''(0)|<\infty$ | $e_{n+1} = \frac{1}{12}\left(\left(36+H'''(0)\right)c_2^3-12c_2c_3\right)e_n^4+O(e_n^5)$
$M_{Ay}$ | $H(0)=0,\ H'(0)=2,\ H''(0)=4,\ |H'''(0)|<\infty$ | $e_{n+1} = \frac{1}{12}\left(\left(48+H'''(0)\right)c_2^3-12c_2c_3\right)e_n^4+O(e_n^5)$
$M_{Hy}$ | $H(0)=\frac{1}{2},\ H'(0)=\frac{3}{2},\ |H''(0)|<\infty$ | $e_{n+1} = \left(\left(7+H''(0)\right)c_2^3-c_2c_3\right)e_n^4+O(e_n^5)$
$M_C$ | $H(0)=1,\ H'(0)=2,\ H''(0)=4,\ |H'''(0)|<\infty$ | $e_{n+1} = \left(\frac{5-H'''(0)}{6}\,c_2^3-c_2c_3\right)e_n^4+O(e_n^5)$
$M_{Cy}$ | $H(0)=0,\ H'(0)=1,\ H''(0)=6,\ |H'''(0)|<\infty$ | $e_{n+1} = \left(\frac{6-H'''(0)}{6}\,c_2^3-c_2c_3\right)e_n^4+O(e_n^5)$
Table 10. Families of fourth-order iterative methods based on means.
Mean | $H(\mu)$ | Iterative Scheme
$M_A$ | $2+4\mu^2+\beta\mu^3$ | $x_{n+1} = x_n - \left(2+4\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$
$M_{Ay}$ | $2\mu+2\mu^2+\beta\mu^3$ | $x_{n+1} = y_n - \left(2\mu_n+2\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)+f(y_n)}{2f'(x_n)}$
$M_{Hy}$ | $\frac{1}{2}+\frac{3}{2}\mu+\beta\mu^2$ | $x_{n+1} = y_n - \left(\frac{1}{2}+\frac{3}{2}\mu_n+\beta\mu_n^2\right)\dfrac{2f(x_n)f(y_n)}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
$M_C$ | $1+2\mu+2\mu^2+\beta\mu^3$ | $x_{n+1} = x_n - \left(1+2\mu_n+2\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
$M_{Cy}$ | $\mu+3\mu^2+\beta\mu^3$ | $x_{n+1} = y_n - \left(\mu_n+3\mu_n^2+\beta\mu_n^3\right)\dfrac{f(x_n)^2+f(y_n)^2}{f'(x_n)\left(f(x_n)+f(y_n)\right)}$
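The five families of Table 10 differ only in the base point ($x_n$ or $y_n$), the weight $H(\mu)$, and the mean term, so one iteration of each can be written compactly. The sketch below follows the correction terms of Tables 8 and 10 with $\mu_n = f(y_n)/f(x_n)$; $\beta = 0$ recovers the parameter-free members, and the test function and iteration budget are illustrative.

```python
import math

def family_step(name, x, f, df, beta=0.0):
    """One iteration of each mean-based family; beta = 0 gives the
    parameter-free members. Means are taken over f(x_n), f(y_n), with
    the frozen derivative f'(x_n) in the denominator."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    mu = fy / fx
    arith = (fx + fy) / (2 * dfx)                  # arithmetic mean term
    harm = 2 * fx * fy / (dfx * (fx + fy))         # harmonic mean term
    contra = (fx**2 + fy**2) / (dfx * (fx + fy))   # contraharmonic mean term
    return {
        "MA":  x - (2 + 4 * mu**2 + beta * mu**3) * arith,
        "MAy": y - (2 * mu + 2 * mu**2 + beta * mu**3) * arith,
        "MHy": y - (0.5 + 1.5 * mu + beta * mu**2) * harm,
        "MC":  x - (1 + 2 * mu + 2 * mu**2 + beta * mu**3) * contra,
        "MCy": y - (mu + 3 * mu**2 + beta * mu**3) * contra,
    }[name]

f = lambda x: math.cos(x) - x
df = lambda x: -math.sin(x) - 1.0
roots = {}
for name in ("MA", "MAy", "MHy", "MC", "MCy"):
    x = 1.0
    for _ in range(6):
        if abs(f(x)) < 1e-13:   # stop once the residual is negligible
            break
        x = family_step(name, x, f, df)
    roots[name] = x
```

All five members reach the root of $\cos(x)-x$ from $x_0=1$ within six iterations, in line with the behavior reported for $f_1$ in the numerical section.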
Table 11. Relative scale factors k i ( β ) and numerical reference values ( β = 0 ).
Method | $k_i(\beta)$ | $k_i(0)$
$M_A$ | $\dfrac{312-5\beta}{304-5\beta}$ | 1.026316
$M_{Ay}$ | 1 (reference case) | 1.000000
$M_{Hy}$ | $\dfrac{5}{64}\cdot\dfrac{312-5\beta}{26-\beta}$ | 0.937500
$M_C$ | $\dfrac{5}{2}\cdot\dfrac{312-5\beta}{792-17\beta}$ | 0.984849
$M_{Cy}$ | $\dfrac{5}{2}\cdot\dfrac{312-5\beta}{804-17\beta}$ | 0.970147
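The scale factors are plain rational functions of $\beta$ and can be evaluated directly. The snippet below encodes the $k_i(\beta)$ expressions as read from Table 11 ($M_{Ay}$ is the reference case); the reference values agree with the table's $k_i(0)$ column to within rounding.

```python
def scale_factors(beta):
    """Relative scale factors k_i(beta) of the conjugate families,
    with M_Ay as the reference case (k = 1)."""
    c = 312 - 5 * beta          # common numerator shared by all factors
    return {
        "MA":  c / (304 - 5 * beta),
        "MAy": 1.0,
        "MHy": (5 / 64) * c / (26 - beta),
        "MC":  (5 / 2) * c / (792 - 17 * beta),
        "MCy": (5 / 2) * c / (804 - 17 * beta),
    }

k0 = scale_factors(0.0)         # numerical reference values at beta = 0
```

Since the rational operators of the five families are affinely conjugate, these factors are the only quantitative difference between their dynamical planes.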
Table 12. Performance comparison for $f_1(x) = \cos(x) - x$. The best results are marked in bold.
Method | Iter | Incr | Incr2 | ACOC | EI | Time (s)
$M_A$ | 4 | $6.107\times10^{-32}$ | $1.096\times10^{-126}$ | 3.995 | 1.587 | $5.151\times10^{-2}$
$M_{Ay}$ | 5 | $2.324\times10^{-86}$ | 0.000 | 4.000 | 1.587 | $5.759\times10^{-2}$
$M_{Hy}$ | 5 | **$1.044\times10^{-49}$** | **$1.795\times10^{-197}$** | 4.000 | 1.587 | $6.806\times10^{-2}$
$M_C$ | 4 | $6.098\times10^{-15}$ | $1.588\times10^{-58}$ | 3.840 | 1.566 | **$3.098\times10^{-2}$**
$M_{Cy}$ | 5 | $9.841\times10^{-49}$ | $1.247\times10^{-193}$ | 4.000 | 1.587 | $3.803\times10^{-2}$
MEDJA4 | 4 | $5.37\times10^{-36}$ | $3.276\times10^{-143}$ | 3.998 | 1.587 | $4.025\times10^{-2}$
MEDOS4 | 4 | $1.893\times10^{-35}$ | $5.492\times10^{-141}$ | 3.998 | 1.587 | $4.155\times10^{-2}$
MEDK4 ($\beta=1$) | 4 | $2.133\times10^{-24}$ | $1.632\times10^{-96}$ | 3.980 | 1.585 | $4.469\times10^{-2}$
MEDKT4 | 4 | $1.34\times10^{-29}$ | $1.962\times10^{-117}$ | 3.993 | 1.586 | $4.487\times10^{-2}$
MEDZ4 | 5 | $3.787\times10^{-35}$ | $5.1\times10^{-140}$ | 3.999 | 1.587 | $5.990\times10^{-2}$
MEDCH4 | 4 | $1.893\times10^{-35}$ | $5.492\times10^{-141}$ | 3.998 | 1.587 | $3.784\times10^{-2}$
MED44 | 5 | $1.752\times10^{-18}$ | $6.378\times10^{-74}$ | 4.250 | 1.620 | $5.160\times10^{-2}$
MEDSB4 | 4 | $2.756\times10^{-27}$ | $3.657\times10^{-108}$ | 3.988 | 1.586 | $1.430\times10^{-1}$
MEDOK4 | 4 | $8.188\times10^{-40}$ | $1.500\times10^{-158}$ | 3.999 | 1.587 | $6.739\times10^{-2}$
MEDKA4 | 4 | $1.098\times10^{-41}$ | $4.243\times10^{-166}$ | 3.999 | 1.587 | $6.281\times10^{-2}$
ZEIN41 | 3 | $6.978\times10^{-14}$ | $5.067\times10^{-55}$ | 3.851 | 1.567 | $5.646\times10^{-2}$
ZEIN42 | 5 | $7.834\times10^{-47}$ | $3.976\times10^{-187}$ | 3.999 | 1.587 | $1.005\times10^{-1}$
ZEIN43 | 4 | $2.161\times10^{-38}$ | $3.679\times10^{-153}$ | 3.999 | 1.587 | $6.089\times10^{-2}$
Table 13. Performance comparison for $f_2(x) = e^x - \cos(x) - 2$. The best results are marked in bold.
Method | Iter | Incr | Incr2 | ACOC | EI | Time (s)
$M_A$ | 4 | $9.237\times10^{-34}$ | $6.510\times10^{-133}$ | 4.000 | 1.587 | $5.444\times10^{-2}$
$M_{Ay}$ | 5 | **$2.646\times10^{-97}$** | **$7.787\times10^{-208}$** | 4.000 | 1.587 | $7.244\times10^{-2}$
$M_{Hy}$ | 4 | $1.575\times10^{-14}$ | $1.398\times10^{-55}$ | 3.926 | 1.578 | $6.071\times10^{-2}$
$M_C$ | 4 | $4.862\times10^{-18}$ | $8.841\times10^{-70}$ | 3.964 | 1.583 | $6.263\times10^{-2}$
$M_{Cy}$ | 4 | $1.164\times10^{-14}$ | $3.535\times10^{-56}$ | 3.924 | 1.577 | $6.040\times10^{-2}$
MEDJA4 | 4 | $3.801\times10^{-44}$ | $4.500\times10^{-175}$ | 4.000 | 1.587 | $4.654\times10^{-2}$
MEDOS4 | 4 | $2.037\times10^{-44}$ | $3.549\times10^{-176}$ | 4.000 | 1.587 | $5.681\times10^{-2}$
MEDK4 ($\beta=1$) | 4 | $4.612\times10^{-28}$ | $4.044\times10^{-110}$ | 3.995 | 1.587 | $3.920\times10^{-2}$
MEDKT4 | 4 | $1.738\times10^{-34}$ | $5.026\times10^{-136}$ | 3.999 | 1.587 | **$3.818\times10^{-2}$**
MEDZ4 | 4 | $1.256\times10^{-25}$ | $3.421\times10^{-101}$ | 4.042 | 1.593 | $4.764\times10^{-2}$
MEDCH4 | 4 | $2.037\times10^{-44}$ | $3.549\times10^{-176}$ | 4.000 | 1.587 | $4.491\times10^{-2}$
MED44 | 4 | $2.842\times10^{-16}$ | $3.140\times10^{-63}$ | 4.109 | 1.602 | $4.625\times10^{-2}$
MEDSB4 | 4 | $2.056\times10^{-31}$ | $1.204\times10^{-123}$ | 3.997 | 1.587 | $1.075\times10^{-1}$
MEDOK4 | 3 | $1.655\times10^{-13}$ | $7.584\times10^{-53}$ | 4.040 | 1.593 | $6.612\times10^{-2}$
MEDKA4 | 3 | $1.991\times10^{-16}$ | $3.320\times10^{-65}$ | 4.074 | 1.597 | $1.479\times10^{-1}$
ZEIN41 | 4 | $1.144\times10^{-48}$ | $2.198\times10^{-193}$ | 4.000 | 1.587 | $1.077\times10^{-1}$
ZEIN42 | 4 | $1.502\times10^{-25}$ | $1.704\times10^{-100}$ | 4.017 | 1.590 | $9.029\times10^{-2}$
ZEIN43 | 4 | $3.846\times10^{-39}$ | $4.685\times10^{-155}$ | 4.001 | 1.588 | $6.522\times10^{-2}$
Table 14. Comparison of iterative methods for the chemical equilibrium problem. The best results are marked in bold.
Method | Iter | Incr | Incr2 | ACOC | EI | Time (s)
$M_A$ | 4 | $1.14\times10^{-33}$ | $5.257\times10^{-131}$ | 3.996 | 1.587 | $1.490\times10^{-1}$
$M_{Ay}$ | 5 | **$8.294\times10^{-130}$** | **$3.893\times10^{-208}$** | 4.000 | 1.587 | $1.331\times10^{-1}$
$M_{Hy}$ | 4 | $4.491\times10^{-32}$ | $2.600\times10^{-124}$ | 3.995 | 1.587 | $1.218\times10^{-1}$
$M_C$ | 4 | $9.663\times10^{-33}$ | $4.140\times10^{-127}$ | 3.995 | 1.587 | $1.058\times10^{-1}$
$M_{Cy}$ | 4 | $2.179\times10^{-32}$ | $1.255\times10^{-125}$ | 3.995 | 1.587 | $9.740\times10^{-2}$
MEDJA4 | 4 | $4.663\times10^{-33}$ | $6.992\times10^{-129}$ | 3.996 | 1.587 | $1.188\times10^{-1}$
MEDOS4 | 4 | $2.592\times10^{-35}$ | $6.632\times10^{-138}$ | 3.997 | 1.587 | $6.170\times10^{-2}$
MEDK4 ($\beta=1$) | 4 | $1.125\times10^{-33}$ | $4.976\times10^{-131}$ | 3.996 | 1.587 | $6.300\times10^{-2}$
MEDKT4 | 4 | $2.05\times10^{-44}$ | $4.044\times10^{-174}$ | 3.999 | 1.587 | $1.018\times10^{-1}$
MEDZ4 | 4 | $5.225\times10^{-37}$ | $4.830\times10^{-145}$ | 4.001 | 1.588 | $7.200\times10^{-2}$
MEDCH4 | 4 | $9.756\times10^{-33}$ | $4.303\times10^{-127}$ | 3.995 | 1.587 | **$4.920\times10^{-2}$**
MED44 | 4 | $7.927\times10^{-41}$ | $6.800\times10^{-161}$ | 3.971 | 1.584 | $7.290\times10^{-2}$
MEDSB4 | 4 | $9.528\times10^{-44}$ | $2.120\times10^{-171}$ | 3.999 | 1.587 | $6.712\times10^{-2}$
MEDOK4 | 4 | $1.166\times10^{-47}$ | $2.229\times10^{-187}$ | 4.000 | 1.587 | $9.728\times10^{-2}$
MEDKA4 | 4 | $1.520\times10^{-48}$ | $5.422\times10^{-191}$ | 4.000 | 1.587 | $1.019\times10^{-1}$
ZEIN41 | 4 | $1.805\times10^{-50}$ | $6.999\times10^{-199}$ | 4.000 | 1.587 | $1.012\times10^{-1}$
ZEIN42 | 3 | $7.338\times10^{-14}$ | $4.841\times10^{-53}$ | 4.095 | 1.600 | $7.179\times10^{-2}$
ZEIN43 | 4 | $1.508\times10^{-51}$ | $2.346\times10^{-203}$ | 4.000 | 1.587 | $1.239\times10^{-1}$
Table 15. Comparison of iterative methods for the Planck problem with $x_0 = 1$. The best results are marked in bold.
Method | Root | Iter | Incr | Incr2 | ACOC | Time (s)
$M_A$ | 4.965 | 4 | $2.578\times10^{-16}$ | $1.08\times10^{-67}$ | 2.008 | 1.2620
$M_{Ay}$ | 4.965 | 5 | **$6.912\times10^{-67}$** | **0.0** | 4.0 | 1.5870
$M_{Hy}$ | 4.965 | 4 | $2.71\times10^{-16}$ | $1.564\times10^{-67}$ | 2.671 | 1.3880
$M_C$ | 4.965 | 4 | $2.82\times10^{-16}$ | $1.689\times10^{-67}$ | 1.981 | 1.2560
$M_{Cy}$ | 4.965 | 4 | $2.943\times10^{-16}$ | $2.089\times10^{-67}$ | 1.915 | **1.2420**
MEDJA4 | $5.609\times10^{-200}$ | 5 | $2.616\times10^{-50}$ | $4.487\times10^{-200}$ | 4.0 | 1.5870
MEDOS4 | $3.471\times10^{-51}$ | 4 | $4.178\times10^{-13}$ | $2.777\times10^{-51}$ | 3.942 | 1.5800
MEDK4 ($\beta=1$) | $5.427\times10^{-115}$ | 15 | $3.081\times10^{-29}$ | $4.342\times10^{-115}$ | 3.997 | 1.5870
MEDKT4 | $1.186\times10^{-79}$ | 5 | $1.364\times10^{-14}$ | $9.913\times10^{-57}$ | 3.953 | 1.581
MEDZ4 | $3.344\times10^{-119}$ | 7 | $4.003\times10^{-30}$ | $2.675\times10^{-119}$ | 3.984 | 1.5850
MEDCH4 | 158.3 | 100 | 1.639 | $5.668\times10^{-68}$ | 5.121 | 1.7240
Artidiello (MED44) | $1.68\times10^{-91}$ | 6 | $2.588\times10^{-23}$ | $1.344\times10^{-91}$ | 3.976 | 1.5840
MEDSB4 | $3.337\times10^{-54}$ | 9 | $5.232\times10^{-14}$ | $2.67\times10^{-54}$ | 3.94 | 1.5205
MEDOK4 | $1.698\times10^{-71}$ | 4 | $4.587\times10^{-18}$ | $1.358\times10^{-71}$ | 3.975 | 1.5597
MEDKA4 | $1.192\times10^{-131}$ | 4 | $5.046\times10^{-33}$ | $9.535\times10^{-132}$ | 3.996 | 1.7656
ZEIN41 | $5.44\times10^{-197}$ | 5 | $1.446\times10^{-49}$ | $4.352\times10^{-197}$ | 4.0 | 1.4886
ZEIN42 | $2.079\times10^{-131}$ | 5 | $2.96\times10^{-33}$ | $1.663\times10^{-131}$ | 4.003 | 1.3412
ZEIN43 | $3.746\times10^{-68}$ | 5 | $2.12\times10^{-17}$ | $2.997\times10^{-68}$ | 4.048 | 1.27701
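The ACOC column in the tables above is the approximated computational order of convergence, estimated from three consecutive increments as $\rho \approx \dfrac{\ln\left(|x_{k+1}-x_k|/|x_k-x_{k-1}|\right)}{\ln\left(|x_k-x_{k-1}|/|x_{k-1}-x_{k-2}|\right)}$. The sketch below illustrates this estimate for the $M_{Ay}$ member with $\beta = 0$, using exact rational arithmetic so that the fourth-order decay of the increments remains visible without a multiprecision library; the test function $x^3-2$ and starting point are illustrative, not from the paper.

```python
from fractions import Fraction
import math

def f(x):               # illustrative test function, simple root 2**(1/3)
    return x**3 - 2

def df(x):
    return 3 * x**2

def m_ay_step(x, beta=0):
    """One M_Ay iteration in exact rational arithmetic."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx
    fy = f(y)
    mu = fy / fx
    h = 2 * mu + 2 * mu**2 + beta * mu**3
    return y - h * (fx + fy) / (2 * dfx)

def log_abs(q):
    """log|q| for a Fraction whose parts may exceed float range."""
    return math.log(abs(q.numerator)) - math.log(q.denominator)

xs = [Fraction(13, 10)]
for _ in range(3):
    xs.append(m_ay_step(xs[-1]))

d = [xs[k + 1] - xs[k] for k in range(3)]       # successive increments
acoc = (log_abs(d[2]) - log_abs(d[1])) / (log_abs(d[1]) - log_abs(d[0]))
```

The estimate lands near 4, matching the theoretical order; `log_abs` works on the huge integers in the exact increments, where a direct `float` conversion would underflow.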
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
