Article

Extended Parametric Family of Two-Step Methods with Applications for Solving Nonlinear Equations or Systems

by
Ioannis K. Argyros
1,*,
Stepan Shakhno
2,* and
Mykhailo Shakhov
2
1
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2
Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
*
Authors to whom correspondence should be addressed.
Axioms 2026, 15(1), 41; https://doi.org/10.3390/axioms15010041
Submission received: 27 November 2025 / Revised: 25 December 2025 / Accepted: 1 January 2026 / Published: 6 January 2026

Abstract

The parametric family of two-step methods, with its special cases, has been introduced in various papers. However, in most cases, the local convergence analysis relies on the existence of derivatives of orders that the method does not require. Moreover, the more challenging semi-local convergence analysis was not introduced for this class of methods. These drawbacks are considered in this paper. We determine the radius of convergence and the uniqueness of the solution based on generalized continuity conditions. We also present the semi-local convergence analysis for this family of methods, which has not been studied before, using majorizing sequences. Numerical experiments and basins of attraction are included to validate the theoretical conditions and demonstrate the stability of the methods.

1. Introduction

Mathematical modeling is widely used to convert problems from diverse disciplines into an equation of the form
F(x) = 0,        (1)
where X and Y denote Banach spaces, D ⊂ X is an open convex subset, and F : D → Y is a continuously differentiable operator in the Fréchet sense. Iterative methods are then developed to approximate a solution x* ∈ D of the equation F(x) = 0, since a closed-form solution can be obtained only in special cases.
Newton's method, defined for x_0 ∈ D and each n = 0, 1, 2, … by
x_{n+1} = x_n − F'(x_n)^{-1} F(x_n),        (2)
is without a doubt the most popular iterative method [1,2,3,4,5]. Here, F'(x) ∈ L(X, Y), the space of bounded linear operators mapping X into Y. Newton's method is of convergence order two, provided that the initial point x_0 is close enough to x* and the operator F'(x) is invertible in a neighborhood of the solution. There is extensive literature on multi-point methods designed to increase the convergence order of an iterative method [6,7,8,9,10,11,12,13,14].
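For readers who want to experiment, Newton's method can be sketched in a few lines; the 2×2 test system and starting point below are our own illustration, not taken from the paper.

```python
import numpy as np

def newton(F, J, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_{n+1} = x_n - F'(x_n)^{-1} F(x_n).

    F returns the residual vector, J its Jacobian; a linear solve
    replaces the explicit inverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(x), F(x))   # solve F'(x) step = F(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Illustrative 2x2 system (our own example): x^2 + y^2 = 4, x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
J = lambda v: np.array([[2 * v[0], 2 * v[1]], [v[1], v[0]]])
root = newton(F, J, [2.0, 0.3])
```
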
In the present paper, we study the convergence of the family of two-step methods with parameters defined for x_0 ∈ D and each n = 0, 1, 2, … by
y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = x_n − p_n F'(x_n)^{-1} F(x_n) − q_n F'(x_n)^{-1} F(y_n)        (3)
or
y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = y_n − p̄_n F'(x_n)^{-1} F(x_n) − q̄_n F'(x_n)^{-1} F(y_n),        (4)
where {p_n}, {q_n}, {p̄_n}, {q̄_n} ⊂ L(X, Y) are sequences depending on F, F' and information from previous iterates. Notice that if p̄_n = p_n − I, then the method (4) reduces to (3). So, it suffices to study only the convergence of the method (3). A plethora of choices exists for the sequences {p_n} and {q_n}. As an example, choose p_n = q_n = I. Then, method (3) reduces to
y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = y_n − F'(x_n)^{-1} F(y_n),        (5)
which is a two-step Newton method of convergence order three using two function evaluations and one inverse per complete step [2]. Methods of convergence order four, of particular interest when X = Y = R^α for a natural number α, whose sequences {p_n}, {q_n} depend on previous iterates, have already been studied by A. Cordero et al. [15] and J. R. Sharma et al. [16]; they are given, respectively, by
y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = y_n − (2 b_n / (1 + λ b_n)) F'(x_n)^{-1} F(x_n) − (1 / (1 + λ b_n)) F'(x_n)^{-1} F(y_n)        (6)
and
y_n = x_n − F'(x_n)^{-1} F(x_n),
x_{n+1} = x_n − γ_1 Ψ(b_n) F'(x_n)^{-1} F(x_n) − γ_2 Ψ(b_n) F'(x_n)^{-1} F(y_n),        (7)
where λ, γ_1, γ_2 ∈ R, b_n = F(y_n)^T F(y_n) / (F(x_n)^T F(x_n)) = ||F(y_n)||^2 / ||F(x_n)||^2, the weight function Ψ : R → R satisfies Ψ(0) = 1/γ_1 = 1/γ_2 and Ψ'(0) = 2 Ψ(0), and the superscript T denotes the transpose of the vector function. If we select p̄_n = 2 b_n / (1 + λ b_n) and q̄_n = 1 / (1 + λ b_n), the method (4) specializes to (6). Moreover, if p_n = γ_1 Ψ(b_n) and q_n = γ_2 Ψ(b_n), then method (3) reduces to (7). Notice that the local convergence of methods (6) and (7) has been shown using Taylor series expansions, requiring the existence of derivatives that do not appear in these methods. As an example, consider the following function:
Let D = [−2, 2]. Define the function f on D by
f(t) = θ_1 t^4 log t^2 + θ_2 t^5 + θ_3 t^6 if t ≠ 0, and f(t) = 0 if t = 0,
where θ_1 ≠ 0 and θ_2 + θ_3 = 0. Notice that f^{(4)}(0) does not exist, and x* = 1 ∈ D solves the equation f(x) = 0. So, the results in [15,16] cannot guarantee convergence.
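A quick numerical check (ours) shows that the two-step method (5), which uses no fourth derivative, still converges on a concrete function of this type; the choices θ_1 = 1, θ_2 = 1, θ_3 = −1, the starting point 1.2, and the reading f(t) = t^4 log t^2 + t^5 − t^6 are our own illustration.

```python
import math

def f(t):
    # f^(4)(0) does not exist because of the t^4 * log(t^2) term,
    # yet t = 1 is a simple zero: f(1) = 0 since theta2 + theta3 = 0.
    if t == 0.0:
        return 0.0
    return t**4 * math.log(t**2) + t**5 - t**6

def fprime(t):
    if t == 0.0:
        return 0.0
    return 4 * t**3 * math.log(t**2) + 2 * t**3 + 5 * t**4 - 6 * t**5

def two_step_newton(x, iters=10):
    # Method (5): y = x - f(x)/f'(x);  x_next = y - f(y)/f'(x)
    for _ in range(iters):
        d = fprime(x)
        y = x - f(x) / d
        x = y - f(y) / d
    return x

x = two_step_newton(1.2)
```
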
Other limitations of this approach have been reported in [16]. The new local convergence analysis is shown using only the operators appearing in the method (3). Moreover, the more important and challenging semi-local convergence of the method (3) (i.e., of the methods (6) and (7) too), not previously studied, is presented using majorizing sequences in a Banach space setting. Furthermore, generalized continuity is used to control the derivative and sharpen the error bounds ||x_{n+1} − x_n|| and ||x* − x_n||. This way, the applicability of these methods is extended. The same technique can be used on other methods, and for the same reasons [1,2,3,4,6,7,8,9,10,11,12,13,14,15,16,17,18,19].
The rest of the paper is structured as follows. Section 2 and Section 3 contain the local and semi-local convergence of method (3), respectively. The numerical experiments appear in Section 4. Final concluding remarks appear in Section 5.

2. Local Convergence Analysis

Let T = [0, +∞). The analysis relies on some real functions with domain T or a subset of it. It is convenient to introduce the following abbreviations:
(FCND)
for a function that is continuous as well as nondecreasing on the interval T or a subset of it;
(SSE)
for the smallest solution of an equation.
Suppose the following:
(H1)
There exists an FCND function g_0 : T → T such that the equation g_0(t) − 1 = 0 has an SSE in T \ {0}, which shall be denoted by s_0. Define the set T_0 = [0, s_0).
(H2)
There exists an FCND function g : T_0 → T such that the function h_1 : T_0 → T defined by
h_1(t) = ∫_0^1 g((1 − θ)t) dθ / (1 − g_0(t))        (8)
is such that the equation h_1(t) − 1 = 0 has an SSE in T_0 \ {0}, which shall be denoted by s_1.
(H3)
Let p ≥ 0 and q ≥ 0 be given constants. The function h_2 : T_0 → T defined by
h_2(t) = [∫_0^1 g((1 − θ)t) dθ + p (1 + ∫_0^1 g_0(θ t) dθ) + q (1 + ∫_0^1 g_0(θ h_1(t) t) dθ) h_1(t)] / (1 − g_0(t))        (9)
is such that the equation h_2(t) − 1 = 0 has an SSE in T_0 \ {0}, which shall be denoted by s_2.
Define the parameter s and the interval T_1, respectively, by
s = min{s_1, s_2}
and
T_1 = [0, s).
The parameter s is shown to be a convergence radius for method (3) in Theorem 1. Then, by the definition of the functions h_1, h_2, s and T_1, we deduce that for each t ∈ T_1
0 ≤ g_0(t) < 1        (10)
and
0 ≤ h_j(t) < 1, j = 1, 2.        (11)
Next, we connect the functions h_1, h_2 and the radius s to the operators appearing in the method (3).
(H4)
There exist a solution x* ∈ D of the equation F(x) = 0 and an invertible operator A ∈ L(X, Y) such that for each u ∈ D
||A^{-1}(F'(u) − A)|| ≤ g_0(||u − x*||).
Define the set D_0 = U(x*, s_0) ∩ D.
(H5)
||A^{-1}(F'(u_2) − F'(u_1))|| ≤ g(||u_2 − u_1||) for each u_1, u_2 ∈ D_0.
(H6)
There exist parameters p ≥ 0 and q ≥ 0 such that for each n = 0, 1, 2, …
||I − p_n|| ≤ p and ||q_n|| ≤ q,
and
(H7)
U[x*, s] ⊂ D.
Remark 1. 
The linear operator A can be chosen to be A = I (the identity operator), or A = F'(x̃) for some x̃ ∈ D other than x*, or A = F'(x*). In the latter case, x* is a simple solution of the equation F(x) = 0. Other selections are also possible. It is worth noting that the conditions (H4) and (H5) do not necessarily imply that the linear operator F'(x*) is invertible.
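For concreteness, the radii s_1, s_2 and s = min{s_1, s_2} can be computed by bisection once g_0 and g are specified; the Lipschitz-type choices g_0(t) = L_0 t, g(t) = L t and the values of L_0, L, p, q below are our own illustration, not taken from the paper.

```python
def smallest_root(h, lo, hi, iters=200):
    """Bisection for the smallest t with h(t) = 1, assuming h is
    continuous and nondecreasing on [lo, hi] with h(lo) < 1 < h(hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

L0, L, p, q = 1.0, 1.0, 0.1, 0.9   # illustrative values (ours)
s0 = 1.0 / L0                      # g0(t) = 1  gives  s0 = 1/L0

# With g0(t) = L0*t and g(t) = L*t the integrals in (8), (9) are explicit:
h1 = lambda t: (L * t / 2) / (1 - L0 * t)
h2 = lambda t: ((L * t / 2) + p * (1 + L0 * t / 2)
                + q * (1 + L0 * h1(t) * t / 2) * h1(t)) / (1 - L0 * t)

s1 = smallest_root(h1, 0.0, s0 * (1 - 1e-12))
s2 = smallest_root(h2, 0.0, s0 * (1 - 1e-12))
s = min(s1, s2)   # convergence radius of method (3) under (H1)-(H7)
```

For these choices s_1 = 2/(L + 2L_0) in closed form, which the bisection reproduces.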
Next, the main result follows for the local convergence analysis of the method (3). Define the set U_0 = U(x*, s) \ {x*}.
Theorem 1. 
Suppose that the conditions (H1)–(H7) hold. Then, the sequence {x_n} starting from x_0 ∈ U_0 and generated by the method (3) converges to the solution x* ∈ D of the equation F(x) = 0. Moreover, the following assertions hold for each n = 0, 1, 2, …
||y_n − x*|| ≤ h_1(||x_n − x*||) ||x_n − x*|| ≤ ||x_n − x*|| < s        (12)
and
||x_{n+1} − x*|| ≤ h_2(||x_n − x*||) ||x_n − x*|| ≤ ||x_n − x*||.        (13)
Proof. 
Mathematical induction shall be used to establish the existence of the iterates {x_n}, {y_n} as well as the assertions (12) and (13). Let w ∈ U_0. Using the conditions (H1), (H4) and (10), we have, in turn, that
||A^{-1}(F'(w) − A)|| ≤ g_0(||w − x*||) ≤ g_0(s) < 1.        (14)
Then, it follows by the estimate (14) and the Banach lemma on invertible operators [1,3,4,5] that F'(w) is an invertible linear operator and
||F'(w)^{-1} A|| ≤ 1 / (1 − g_0(||w − x*||)).        (15)
Notice that the estimate (15) holds if w = x_0, since x_0 ∈ U_0. Consequently, the iterate y_0 is well defined by the first substep of the method (3), and we can write
y_0 − x* = x_0 − x* − F'(x_0)^{-1} F(x_0) = [F'(x_0)^{-1} A] ∫_0^1 A^{-1}(F'(x_0) − F'(x* + θ(x_0 − x*))) dθ (x_0 − x*).        (16)
By (8), (H5), (11) (for j = 1), (15) and (16), we in turn obtain that
||y_0 − x*|| ≤ ∫_0^1 g((1 − θ)||x_0 − x*||) dθ ||x_0 − x*|| / (1 − g_0(||x_0 − x*||)) ≤ h_1(||x_0 − x*||) ||x_0 − x*|| ≤ ||x_0 − x*|| < s,        (17)
which shows the assertion (12) if n = 0 and that the iterate y_0 ∈ U(x*, s). We need the following estimates:
F(y_0) = F(y_0) − F(x*) = ∫_0^1 F'(x* + θ(y_0 − x*)) dθ (y_0 − x*) = ∫_0^1 (F'(x* + θ(y_0 − x*)) − A + A) dθ (y_0 − x*).        (18)
So, by (H4) and (18), it follows that
||A^{-1} F(y_0)|| ≤ ||∫_0^1 A^{-1}(F'(x* + θ(y_0 − x*)) − A) dθ + I|| ||y_0 − x*|| ≤ (1 + ∫_0^1 g_0(θ ||y_0 − x*||) dθ) ||y_0 − x*||.        (19)
By replacing y_0 with x_0 in (19), we also have
||A^{-1} F(x_0)|| ≤ (1 + ∫_0^1 g_0(θ ||x_0 − x*||) dθ) ||x_0 − x*||.        (20)
Notice that the iterate x_1 is well defined by the second substep of the method (3), since the linear operator F'(x_0) is invertible, and we can write
x_1 − x* = x_0 − x* − F'(x_0)^{-1} F(x_0) + (I − p_0) F'(x_0)^{-1} F(x_0) − q_0 F'(x_0)^{-1} F(y_0).        (21)
In view of (9), (11) (for j = 2), (17), and (19)–(21), we obtain in turn that
||x_1 − x*|| ≤ [∫_0^1 g((1 − θ)||x_0 − x*||) dθ + p (1 + ∫_0^1 g_0(θ ||x_0 − x*||) dθ) + q (1 + ∫_0^1 g_0(θ h_1(||x_0 − x*||) ||x_0 − x*||) dθ) h_1(||x_0 − x*||)] ||x_0 − x*|| / (1 − g_0(||x_0 − x*||)) ≤ h_2(||x_0 − x*||) ||x_0 − x*|| ≤ ||x_0 − x*||.        (22)
So, the assertion (13) holds if n = 0 and the iterate x_1 ∈ U(x*, s). The induction is completed if we simply exchange x_0, y_0, x_1 with x_m, y_m, x_{m+1}, respectively, in the preceding calculations. Set c = h_2(||x_0 − x*||). Then, from (13) and since c ∈ [0, 1), we have
||x_{m+1} − x*|| ≤ c ||x_m − x*|| ≤ c^{m+1} ||x_0 − x*|| < s.        (23)
Hence, by (23), lim_{m→+∞} x_m = x* and x_{m+1} ∈ U(x*, s).    □
Next, a set is determined that contains x as the only solution.
Proposition 1. 
Suppose that the condition (H4) holds in the ball U(x*, s_3) for some s_3 > 0, and there exists s_4 ≥ s_3 such that
∫_0^1 g_0(θ s_4) dθ < 1.        (24)
Define the set D_1 = U[x*, s_4] ∩ D. Then, the only solution of the equation F(x) = 0 in the set D_1 is x*.
Proof. 
(See [3])    □
Remark 2. 
Under all the conditions (H1)–(H7), one can set s_3 = s in Proposition 1.

3. Semi-Local Convergence Analysis

Majorizing sequences for {x_n} shall be used to show its convergence to a solution x* ∈ D of the equation F(x) = 0. The majorant functions g_0, g are replaced by v_0, v, and x* by x_0, respectively. Moreover, the formulas are similar to those of the local analysis.
Suppose the following:
(C1)
There exists an FCND function v_0 : T → T such that the equation v_0(t) − 1 = 0 has an SSE in the interval T \ {0}, which shall be denoted by r_0. Set T_2 = [0, r_0).
(C2)
There exists an FCND function v : T_2 → T. Define the sequences {a_n}, {b_n} for each n = 0, 1, 2, … by a_0 = 0, some b_0 ≥ 0, p ≥ 0 and q ≥ 0,
c_n = ∫_0^1 v((1 − θ)(b_n − a_n)) dθ (b_n − a_n),
a_{n+1} = b_n + (p (b_n − a_n) + q c_n) / (1 − v_0(a_n)),
d_{n+1} = ∫_0^1 v((1 − θ)(a_{n+1} − a_n)) dθ (a_{n+1} − a_n) + (1 + v_0(a_n)) (a_{n+1} − b_n)
and
b_{n+1} = a_{n+1} + d_{n+1} / (1 − v_0(a_{n+1})).        (25)
It is shown in Theorem 2 that the sequence {a_n} majorizes {x_n}. But first, let us provide a general convergence condition for it.
(C3)
There exists r ∈ [0, r_0) such that for each n = 0, 1, 2, …, v_0(a_n) < 1 and a_n ≤ r. These conditions, the definition of the sequence {a_n} and induction show that for each n = 0, 1, 2, …, 0 ≤ a_n ≤ b_n ≤ a_{n+1} ≤ r, and there exists r* ∈ [0, r] such that lim_{n→+∞} a_n = r*. It is well known that the limit point r* is the (unique) least upper bound of the sequence {a_n}.
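Condition (C3) can be verified numerically for concrete majorant functions. The sketch below (ours) takes v_0(t) = L_0 t and v(t) = L t, for which the integrals in the definitions of c_n and d_{n+1} have closed forms; L_0, L, p, q and b_0 are illustrative values, not data from the paper.

```python
def majorizing_sequence(L0, L, p, q, b0, n_steps=100):
    """Sequences (25) with v0(t) = L0*t, v(t) = L*t, so that
    int_0^1 v((1-th)*d) dth * d = L*d**2/2 in closed form.
    Returns None if (C3) fails, i.e. v0(a_n) reaches 1."""
    a, b = [0.0], [b0]
    for _ in range(n_steps):
        an, bn = a[-1], b[-1]
        if L0 * an >= 1.0:
            return None
        c = L * (bn - an) ** 2 / 2
        a_next = bn + (p * (bn - an) + q * c) / (1 - L0 * an)
        if L0 * a_next >= 1.0:
            return None
        d = L * (a_next - an) ** 2 / 2 + (1 + L0 * an) * (a_next - bn)
        b_next = a_next + d / (1 - L0 * a_next)
        a.append(a_next)
        b.append(b_next)
    return a, b

# Illustrative data (ours): b0 plays the role of ||F'(x0)^{-1} F(x0)||
a, b = majorizing_sequence(L0=1.0, L=1.0, p=0.1, q=0.9, b0=0.05)
r_star = a[-1]    # numerical estimate of the limit r* of {a_n}
```
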
As in the local analysis, the functions v 0 and v relate to the operators on the method (3).
(C4)
There exist x_0 ∈ D and an invertible operator A ∈ L(X, Y) such that for each u ∈ D
||A^{-1}(F'(u) − A)|| ≤ v_0(||u − x_0||).
If u = x_0, the conditions (C1) and (C4) imply ||A^{-1}(F'(x_0) − A)|| < 1. Therefore, F'(x_0) is invertible. Thus, we can choose b_0 ≥ ||F'(x_0)^{-1} F(x_0)||.
Define the set D_2 = U(x_0, r_0) ∩ D.
(C5)
||A^{-1}(F'(u_2) − F'(u_1))|| ≤ v(||u_2 − u_1||) for each u_1, u_2 ∈ D_2.
(C6)
There exist parameters p ≥ 0 and q ≥ 0 such that for each n = 0, 1, 2, …
||I − p_n|| ≤ p and ||q_n|| ≤ q,
and
(C7)
U[x_0, r*] ⊂ D.
Remark 3. 
The linear operator A can be chosen to be A = I, or A = F'(x̃) for some x̃ ∈ D, or some other choice.
Next, the main semi-local convergence result follows for the method (3).
Theorem 2. 
Suppose that the conditions (C1)–(C7) hold. Then, the sequence {x_n} produced by the method (3) is well defined in the ball U(x_0, r*), remains in U(x_0, r*), and there exists a solution x* ∈ U[x_0, r*] of the equation F(x) = 0 such that for each n = 0, 1, 2, …
||x* − x_n|| ≤ r* − a_n.        (26)
Proof. 
We shall establish the assertions
||y_n − x_n|| ≤ b_n − a_n        (27)
and
||x_{n+1} − y_n|| ≤ a_{n+1} − b_n        (28)
using induction. By the choice of b_0 in the condition (C4) and the first substep of the method (3), we have ||y_0 − x_0|| = ||F'(x_0)^{-1} F(x_0)|| ≤ b_0 = b_0 − a_0 < r*. Thus, the estimate (27) holds if n = 0 and the iterate y_0 ∈ U(x_0, r*). Let u ∈ D_2. Then, the conditions (C1) and (C4) give ||A^{-1}(F'(u) − A)|| ≤ v_0(||u − x_0||) < 1; thus, the linear operator F'(u) is invertible and
||F'(u)^{-1} A|| ≤ 1 / (1 − v_0(||u − x_0||)).        (29)
Notice that for u = x_m, the iterate x_{m+1} is well defined by the second substep of the method (3). Moreover, by subtracting the first substep from the second substep of the method (3), we can write in turn that
x_{m+1} − y_m = (I − p_m) F'(x_m)^{-1} F(x_m) − q_m F'(x_m)^{-1} F(y_m).        (30)
We need the following estimates:
||F'(x_m)^{-1} F(x_m)|| = ||y_m − x_m|| ≤ b_m − a_m,        (31)
F(y_m) = F(y_m) − F(x_m) − F'(x_m)(y_m − x_m) = ∫_0^1 (F'(x_m + θ(y_m − x_m)) − F'(x_m)) dθ (y_m − x_m).        (32)
Hence, by the induction hypotheses and (C5), we in turn have
||A^{-1} F(y_m)|| ≤ ∫_0^1 v((1 − θ)||y_m − x_m||) dθ ||y_m − x_m|| ≤ ∫_0^1 v((1 − θ)(b_m − a_m)) dθ (b_m − a_m) = c_m.        (33)
Summing up (31) and (33) and using the condition (C6) in (30), we obtain, by (25) and (29), that
||x_{m+1} − y_m|| ≤ (||I − p_m|| (b_m − a_m) + ||q_m|| c_m) / (1 − v_0(a_m)) ≤ (p (b_m − a_m) + q c_m) / (1 − v_0(a_m)) = a_{m+1} − b_m        (34)
and
||x_{m+1} − x_0|| ≤ ||x_{m+1} − y_m|| + ||y_m − x_0|| ≤ a_{m+1} − b_m + b_m − a_0 = a_{m+1} < r*.        (35)
So, the assertion (28) holds for n = m and the iterate x_{m+1} ∈ U(x_0, r*). In view of the first substep of the method (3), we can write in turn that
F(x_{m+1}) = F(x_{m+1}) − F(x_m) − F'(x_m)(y_m − x_m) = F(x_{m+1}) − F(x_m) − F'(x_m)(x_{m+1} − x_m) + F'(x_m)(x_{m+1} − y_m)
= ∫_0^1 (F'(x_m + θ(x_{m+1} − x_m)) − F'(x_m)) dθ (x_{m+1} − x_m) + F'(x_m)(x_{m+1} − y_m).        (36)
As in (32), with y_m replaced by x_{m+1}, we obtain
||∫_0^1 A^{-1}(F'(x_m + θ(x_{m+1} − x_m)) − F'(x_m)) dθ (x_{m+1} − x_m)|| ≤ ∫_0^1 v((1 − θ)(a_{m+1} − a_m)) dθ (a_{m+1} − a_m)        (37)
and
||A^{-1} F'(x_m)|| = ||A^{-1}(F'(x_m) − A + A)|| ≤ 1 + v_0(||x_m − x_0||) ≤ 1 + v_0(a_m).        (38)
Then, by (25) and (35)–(38), we obtain
||A^{-1} F(x_{m+1})|| ≤ ∫_0^1 v((1 − θ)(a_{m+1} − a_m)) dθ (a_{m+1} − a_m) + (1 + v_0(a_m))(a_{m+1} − b_m) = d_{m+1}.        (39)
Hence, by the first substep of the method (3) for n = m + 1, we obtain
||y_{m+1} − x_{m+1}|| ≤ ||F'(x_{m+1})^{-1} A|| · ||A^{-1} F(x_{m+1})|| ≤ d_{m+1} / (1 − v_0(a_{m+1})) = b_{m+1} − a_{m+1}        (40)
and
||y_{m+1} − x_0|| ≤ ||y_{m+1} − x_{m+1}|| + ||x_{m+1} − x_0|| ≤ b_{m+1} − a_{m+1} + a_{m+1} − a_0 = b_{m+1} < r*.
The induction for the assertions (27) and (28) is completed and gives the iterates {x_m} ⊂ U(x_0, r*). By the triangle inequality, (27) and (28), we obtain
||x_{m+1} − x_m|| ≤ a_{m+1} − a_m.        (41)
It follows that the sequence {x_m} is Cauchy in the Banach space X (since {a_m} is convergent, hence Cauchy, by the condition (C3)), and as such it converges to some x* ∈ U[x_0, r*]. By letting m → +∞ in (39), we deduce that F(x*) = 0. Furthermore, the triangle inequality and (41) show for each i = 0, 1, 2, … that
||x_{m+i} − x_m|| ≤ a_{m+i} − a_m.        (42)
Finally, by letting i → +∞ in (42), we show (26).    □
Next, a set is determined that contains only one solution of the equation F ( x ) = 0 .
Proposition 2. 
Suppose that there exists a solution z_1 ∈ U(x_0, r_1) of the equation F(x) = 0 for some r_1 > 0; the condition (C4) holds in the ball U(x_0, r_1); and there exists r_2 ≥ r_1 such that
∫_0^1 v_0((1 − θ) r_1 + θ r_2) dθ < 1.
Define the set D_3 = U[x_0, r_2] ∩ D. Then, z_1 is the only solution of the equation F(x) = 0 in the set D_3.
Proof. 
(See [3])    □
Remark 4. 
(a) 
The limit point r* can be replaced by r_0 in (C7) (see (C1)).
(b) 
If all the conditions (C1)–(C7) hold, then we can set r_1 = r* and z_1 = x* in Proposition 2.

4. Numerical Results

This section evaluates the efficiency and convergence behaviour of the analyzed extended parametric family of methods. To validate the theoretical results, the performance of the analyzed schemes is compared with several established iterative methods from the literature.

4.1. Experimental Setup

The numerical computations utilized high-precision arithmetic (specifically, 1000 digits of precision) to minimize round-off errors and verify the theoretical findings. The algorithms were implemented in Python (https://www.python.org/) using the python-flint library for high-precision calculations. Experiments were conducted on a machine with the following specifications:
  • Processor: Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz
  • RAM: 32 GB
  • Operating System: Microsoft Windows 11 Pro (64-bit)
The iterative process for each method terminates when the following stopping criterion is satisfied:
||F(x_k)|| + ||x_k − x_{k−1}|| < 10^{−100}.
To verify the theoretical order of convergence, we compute the Computational Order of Convergence (COC) using the following formula:
COC ≈ log(||x_{k+1} − x_k|| / ||x_k − x_{k−1}||) / log(||x_k − x_{k−1}|| / ||x_{k−1} − x_{k−2}||).
This metric yields a numerical approximation of the asymptotic convergence rate.
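The COC formula translates directly into code; the synthetic error sequence below (ours) mimics fourth-order decay, e_{k+1} = e_k^4, purely to exercise the formula.

```python
import math

def coc(x3, x2, x1, x0):
    """Computational order of convergence from four consecutive scalar
    iterates x0, x1, x2, x3 (oldest to newest), cf. the formula above."""
    num = math.log(abs(x3 - x2) / abs(x2 - x1))
    den = math.log(abs(x2 - x1) / abs(x1 - x0))
    return num / den

# Synthetic sequence converging to 0 with fourth-order errors e_{k+1} = e_k^4
errors = [1e-1, 1e-4, 1e-16, 1e-64]
order = coc(errors[3], errors[2], errors[1], errors[0])
```
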

4.2. Numerical Experiments

We compare representative methods of the parametric family (3) with known fourth-order algorithms. We consider special cases of (7) that achieve fourth-order convergence:
  • Case 1. Linear Weight (LW) [16]. We utilize the linear weight function Ψ(b_k) = 1 + 2 b_k:
    y_k = x_k − F'(x_k)^{-1} F(x_k),
    x_{k+1} = x_k − (1 + 2 b_k) F'(x_k)^{-1} (F(x_k) + F(y_k)).
  • Case 2. Rational Weight (RW) [16]. We utilize the rational weight function Ψ(b_k) = (1 + β b_k) / (1 + (β − 2) b_k). For these experiments, we set β = 1.5:
    y_k = x_k − F'(x_k)^{-1} F(x_k),
    x_{k+1} = x_k − ((1 + 1.5 b_k) / (1 − 0.5 b_k)) F'(x_k)^{-1} (F(x_k) + F(y_k)).
Additionally, (6) is considered.
  • Cordero’s Method (CM) [15]. This is investigated as a special case of the family (6) with parameter λ = −2:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − (2 b_n / (1 − 2 b_n)) F'(x_n)^{-1} F(x_n) − (1 / (1 − 2 b_n)) F'(x_n)^{-1} F(y_n).
These methods are compared with three other fourth-order methods.
  • OM (Ostrowski’s Method) [17] is a classic optimal fourth-order multipoint method requiring two function evaluations and one Jacobian inversion per iteration:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    x_{n+1} = y_n − (2 [y_n, x_n; F] − F'(x_n))^{-1} F(y_n).
  • SM (Singh’s Method) [18] is a three-step fourth-order method defined as follows:
    y_n = x_n − F'(x_n)^{-1} F(x_n),
    z_n = x_n − F'(x_n)^{-1} (F(x_n) + F(y_n)),
    x_{n+1} = z_n − F'(x_n)^{-1} F(z_n).
  • SHM (Sharma’s Method) [19] is a fourth-order modified Newton approach:
    y_n = x_n − (2/3) F'(x_n)^{-1} F(x_n),
    x_{n+1} = x_n − (1/2) [−I + (9/4) F'(y_n)^{-1} F'(x_n) + (3/4) F'(x_n)^{-1} F'(y_n)] F'(x_n)^{-1} F(x_n).
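As an implementation sketch (our code, in double precision rather than the 1000-digit arithmetic used in the experiments), the LW scheme can be written for systems as follows; the small cyclic test system mirrors Example 3 with our own dimension m = 4.

```python
import numpy as np

def lw_method(F, J, x0, tol=1e-12, max_iter=50):
    """Linear Weight (LW) scheme, Psi(b) = 1 + 2b:
        y  = x - F'(x)^{-1} F(x)
        x+ = x - (1 + 2b) F'(x)^{-1} (F(x) + F(y)),  b = ||F(y)||^2 / ||F(x)||^2
    (A production version would factor J(x) once and reuse the factors.)"""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        Jx = J(x)
        y = x - np.linalg.solve(Jx, Fx)
        Fy = F(y)
        b = np.dot(Fy, Fy) / np.dot(Fx, Fx)
        x = x - (1.0 + 2.0 * b) * np.linalg.solve(Jx, Fx + Fy)
    return x

# Small cyclic test system (ours): x_i^2 x_{i+1} - 1 = 0, indices mod m
def F(x):
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    m = x.size
    Jm = np.zeros((m, m))
    idx = np.arange(m)
    Jm[idx, idx] = 2.0 * x * np.roll(x, -1)   # d/dx_i
    Jm[idx, (idx + 1) % m] = x**2             # d/dx_{i+1}
    return Jm

sol = lw_method(F, J, np.full(4, 1.8))
```
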
To clarify the relationship between the analyzed schemes and the theoretical framework, Table 1 classifies each method based on its adherence to the structure defined in (4) and provides the specific defining sequences p̄_n and q̄_n where applicable.

4.3. Computational Efficiency

To objectively compare the performance of the analyzed methods against existing schemes, we utilize the Computational Efficiency Index, denoted CE. Unlike the order of convergence, which solely measures error decay, the Computational Efficiency Index acts as a normalized metric for processing time, allowing for comparisons that are unaffected by the specific machine architecture used. A similar analysis was also provided in [16] for the parametric families (6) and (7).
According to the methodology for high-precision arithmetic contexts, the computational efficiency is defined as follows:
CE = (1/D) · log p / C_total,
where p is the order of convergence of the method, and C_total is the total computational cost per iteration. D acts as a normalization factor for the arithmetic cost; for our analysis, we set D = 10^{−5}.
To compare the efficiency of the analyzed methods against existing schemes, we establish a computational cost function C t o t a l for a system of nonlinear equations of dimension m. This function accounts for evaluations of elementary functions, linear algebra operations, and elementary vector operations.
The total computational cost is expressed as a weighted sum of function evaluations and algebraic operations, normalized to the cost of a single multiplication. The cost function is defined as follows:
C_total = m σ_f + m^2 σ_d + G(m, γ) + E(m, γ),
where the parameters are defined as follows:
  • σ_f is the computational cost of evaluating a single component of F(x).
  • σ_d is the computational cost of evaluating a single component of the Jacobian F'(x).
  • G(m, γ) is the aggregated cost of solving linear systems, where γ is the cost ratio of a division relative to a multiplication.
  • E(m, γ) is the cost of elementary operations, including scalar–vector, vector–vector, and matrix–vector multiplications.
The algebraic cost G(m, γ) depends on the matrix factorization technique. We assume an LU decomposition strategy. Based on standard linear algebra complexities, the cost of one LU decomposition (C_LU) and of the resolution of the necessary triangular systems (C_TS) for a system of size m are given by
C_LU(m, γ) = m(m − 1)(2m − 1)/6 + γ m(m − 1)/2,    C_TS(m, γ) = m(m − 1) + γ m.
Additionally, we explicitly account for the cost of auxiliary operations E(m). The costs for scalar–vector (C_sv), vector–vector inner product (C_vv), and matrix–vector (C_mv) multiplications are defined as follows:
C_sv = m,    C_vv = m,    C_mv = m^2.
By substituting the expressions for C L U and C T S into the method-specific operation counts, we derive the final polynomial cost formulas for the analyzed family (LW, RW, CM) and the comparison methods (SM, SHM, OM).
1. LW and RW
LW and RW require two function evaluations and one Jacobian evaluation. Algebraically, they perform one LU decomposition and two triangular system solutions. The update step involves two vector–vector products and one scalar–vector multiplication ( E ( m ) = 2 C v v + 1 C s v = 3 m ).
C_LW = C_RW = 2 m σ_f + m^2 σ_d + C_LU(m, γ) + 2 C_TS(m, γ) + 3m = 2 m σ_f + m^2 σ_d + (m/6)(2m^2 + 9m + 7 + 3γ(m + 3)),
CE_LW = CE_RW = (1/D) log 4 / C_RW.
2. Cordero’s Method (CM)
Similar to the previous approaches, this method requires two function evaluations, one Jacobian evaluation, one LU decomposition, and two triangular solutions. However, the update formula applies distinct weights to the function vectors, requiring an additional scalar–vector multiplication ( E ( m ) = 4 m ).
C_CM = 2 m σ_f + m^2 σ_d + C_LU(m, γ) + 2 C_TS(m, γ) + 4m = 2 m σ_f + m^2 σ_d + (m/6)(2m^2 + 9m + 13 + 3γ(m + 3)),
CE_CM = (1/D) log 4 / C_CM.
3. Singh’s Method (SM)
This method requires three function evaluations and one Jacobian evaluation. It performs one LU decomposition and three triangular system solutions.
C_SM = 3 m σ_f + m^2 σ_d + C_LU(m, γ) + 3 C_TS(m, γ) = 3 m σ_f + m^2 σ_d + (m/6)(2m^2 + 15m − 17 + 3γ(m + 5)),
CE_SM = (1/D) log 4 / C_SM.
4. Sharma’s Method (SHM)
Sharma’s Method evaluates the Jacobian at two points and performs one function evaluation. It requires two LU decompositions and three triangular solutions. The method also involves matrix–algebraic approximations requiring two matrix–vector multiplications and four scalar–vector multiplications ( E ( m ) = 2 m 2 + 4 m ).
C_SHM = m σ_f + 2 m^2 σ_d + 2 C_LU(m, γ) + 3 C_TS(m, γ) + 2m^2 + 4m = m σ_f + 2 m^2 σ_d + (m/3)(2m^2 + 12m + 4 + 3γ(m + 2)),
CE_SHM = (1/D) log 4 / C_SHM.
5. Ostrowski’s Method (OM)
Ostrowski’s method requires two function evaluations and one Jacobian evaluation. It also involves a divided difference that requires m(m − 1) additional function evaluations.
C_OM = m(m + 1) σ_f + m^2 σ_d + 2 C_LU(m, γ) + 2 C_TS(m, γ) + E(m, γ) = m(m + 1) σ_f + m^2 σ_d + (m/3)(2m^2 + 6m − 5 + 3γ(2m + 1)),
CE_OM = (1/D) log 4 / C_OM.
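The cost polynomials and efficiency indices above can be evaluated numerically. The sketch below (ours) uses the Example 3 parameters m = 300, (σ_f, σ_d) = (2, 0.003), γ = 1.26, and assumes the normalization D = 10^{−5}.

```python
import math

def C_LU(m, g):   # LU decomposition cost; g = gamma (division/multiplication ratio)
    return m * (m - 1) * (2 * m - 1) / 6 + g * m * (m - 1) / 2

def C_TS(m, g):   # forward + backward triangular solves
    return m * (m - 1) + g * m

def CE(p, C, D=1e-5):   # efficiency index CE = (1/D) * log(p) / C_total
    return math.log(p) / (D * C)

m, g, sf, sd = 300, 1.26, 2.0, 0.003   # Example 3 setting

# LW/RW: 2 function evals, 1 Jacobian, 1 LU, 2 triangular solves, E = 3m
C_lw = 2 * m * sf + m**2 * sd + C_LU(m, g) + 2 * C_TS(m, g) + 3 * m
# SHM: 1 function eval, 2 Jacobians, 2 LU, 3 triangular solves, E = 2m^2 + 4m
C_shm = m * sf + 2 * m**2 * sd + 2 * C_LU(m, g) + 3 * C_TS(m, g) + 2 * m**2 + 4 * m

ce_lw, ce_shm = CE(4, C_lw), CE(4, C_shm)
```

Under these assumptions the two indices come out near 1.5 × 10^{−2} and 0.75 × 10^{−2}, i.e., the roughly twofold gap discussed for Example 3.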
To accurately assess the efficiency of the analyzed iterative methods, we evaluated the computational cost of elementary functions using arbitrary-precision arithmetic. Unlike standard floating-point arithmetic, which is heavily optimized by hardware, arbitrary-precision calculations provide a hardware-independent benchmark that reflects the algorithmic complexity of the operations.
The costs were estimated using the python-flint library with a precision of 1000 decimal digits. The computational cost C of each function is defined relative to the time required for a single multiplication operation (x · y):
C(f) = T(f) / T_mul,
where T ( f ) is the average execution time of the function f and T m u l is the average execution time of one multiplication. Table 2 presents the estimated costs used in this study.
As shown in the table, the cost of transcendental functions (e.g., sin(x), e^x) increases significantly at high precision compared to basic arithmetic operations. This weighting is crucial for calculating CE, as it accounts for the heavy penalty of evaluating complex nonlinear functions in high-precision environments.
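A rough way to estimate such relative costs in plain double precision (our sketch; the paper's measurements use python-flint at 1000 digits, so the resulting ratios differ substantially):

```python
import timeit

def relative_cost(stmt, setup="x = 1.2345; y = 6.789", number=200_000):
    """Estimate C(f) = T(f) / T_mul by timing `stmt` against one
    multiplication executed under the same setup."""
    t_mul = timeit.timeit("x * y", setup=setup, number=number)
    t_f = timeit.timeit(stmt, setup=setup, number=number)
    return t_f / t_mul

c_div = relative_cost("x / y")   # analogous to gamma in the cost model
c_sin = relative_cost("math.sin(x)", setup="import math; x = 1.2345; y = 6.789")
```

Timing ratios fluctuate between runs and machines, so such estimates should be averaged over several repetitions.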
To test the applicability and robustness of the analyzed iterative methods, we have prepared a diverse set of numerical examples spanning low- to high-dimensional systems. The first two examples are selected from [16] to benchmark the implementation of the method against existing schemes in the article.

4.3.1. Example 1: Exponential Trigonometric System [16]

We consider the following system of nonlinear equations of dimension two:
F ( x ) = e x 1 2 sin ( x 2 ) = 0 sin ( x 1 ) + e x 2 2 = 0
For this problem, the initial estimate x_0 = (2, 2)^T is used. The approximate solution for this system is x* ≈ (3.14164, 3.14164)^T, and the cost parameters are (σ_f, σ_d) = (52.28, 26.11).
The results are presented in Table 3. The columns report the number of iterations (k) required to meet the stopping criterion, the difference between the two last iterates (||x_k − x_{k−1}||), the residual norm (||F(x_k)||), the Computational Order of Convergence (COC), the computational efficiency (CE), and the CPU time. For all examples, we set γ = 1.26 (see Table 2).
The analyzed methods (RW and LW) demonstrate superior efficiency, converging in only five iterations compared to six for all other methods. This faster convergence is reflected in the CPU time, with LW being the fastest (0.00594 s). The computational efficiency analysis reveals that the analyzed methods, LW and RW, achieve the highest efficiency index. Although SHM and CM achieve extremely small residual norms (e.g., 10^{−272}), this level of precision required an additional iteration, whereas the analyzed methods achieved sufficient accuracy (10^{−118}) one step earlier, satisfying the 10^{−100} tolerance more quickly. CM requires six iterations but remains relatively fast. All methods exhibit a COC of approximately 4, confirming their theoretical fourth-order convergence.
Figure 1 illustrates the convergence behaviour of the iterative methods for Example 1 by plotting the error norms versus the iteration number k. The plots display the logarithm of the step norm, log_10 ||x_n − x_{n−1}||, and the logarithm of the residual norm, log_10 ||F(x_n)||, against the iteration number. It is evident that the analyzed methods, LW (purple) and RW (brown), exhibit the fastest rate of decay (their curves coincide) and reach the required precision levels by the fifth iteration. In contrast, the other algorithms require six iterations to converge, but all of them, except OM, achieve quite high precision. Overall, both plots corroborate the numerical results in Table 3, validating the enhanced performance of the analyzed methods.

4.3.2. Example 2: Trigonometric System [16]

The following system of nonlinear equations is considered:
cos x i j i x j x i = 0 , i = 1 , , m .
For this experiment, m = 10 is selected, and the initial estimate x_0 = (1.5, …, 1.5)^T is used. The parameters needed for the cost evaluation are (σ_f, σ_d) = (27.99, 29.05). Different algorithms yield different solutions.
Table 4 summarizes the performance of several iterative methods for Example 2. The Rational Weight (RW) method demonstrates the highest efficiency, converging in seven iterations with the lowest CPU time of 0.0281 s. Sharma's Method (SHM) requires nine iterations and a longer CPU time (0.0795 s) than RW. The Linear Weight (LW) method and Cordero's Method (CM) converge in 12 and 14 iterations, respectively, with moderate CPU times. Ostrowski's Method (OM) and Singh's Method (SM) are notably slower, requiring 31 and 47 iterations, respectively, and significantly more computational time. For this medium-sized system (m = 10), CM, LW, and RW achieve efficiency scores nearly double that of Sharma's Method and significantly exceeding that of Ostrowski's Method. All methods achieve a computational order of convergence (COC) of 4.0000, consistent with their theoretical fourth-order convergence.
Figure 2 presents the overall convergence rate for all methods in Example 2. The RW method exhibits the fastest decay, confirming its rapid convergence. SHM demonstrates a similar process, while OM and SM display significantly slower convergence.

4.3.3. Example 3: High-Dimensional Cyclic System [9]

To evaluate the method on large-scale problems, the following cyclic system is considered:
$x_i^2 x_{i+1} - 1 = 0, \quad i = 1, \dots, m-1, \qquad \text{and} \qquad x_m^2 x_1 - 1 = 0.$
A dimension of $m = 300$ is selected, with the initial approximation $x_0 = (1.8, \dots, 1.8)^T$ and cost parameters $(\sigma_f, \sigma_d) = (2, 0.003)$.
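As an illustration of the setup (not the fourth-order family analyzed in the paper), the cyclic system can be assembled and solved with a plain Newton iteration. We read the printed equations as $x_i^2 x_{i+1} - 1 = 0$ with wrap-around indexing, which has the root $(1, \dots, 1)^T$; this reading and the small stand-in dimension are our assumptions:

```python
import numpy as np

def F(x):
    # One plausible reading of the printed system: x_i^2 x_{i+1} - 1 = 0,
    # with the index wrapping around (x_{m+1} = x_1); its root is (1, ..., 1).
    return x**2 * np.roll(x, -1) - 1.0

def J(x):
    # Jacobian: dF_i/dx_i = 2 x_i x_{i+1} and dF_i/dx_{i+1} = x_i^2.
    m = x.size
    Jm = np.zeros((m, m))
    xnext = np.roll(x, -1)
    for i in range(m):
        Jm[i, i] = 2.0 * x[i] * xnext[i]
        Jm[i, (i + 1) % m] = x[i]**2
    return Jm

def newton(x0, tol=1e-12, kmax=50):
    x = np.array(x0, dtype=float)
    for k in range(kmax):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            return x, k
        x = x - np.linalg.solve(J(x), fx)
    return x, kmax

m = 20  # small stand-in for the m = 300 experiment
x, iters = newton(np.full(m, 1.8))
print(iters, np.max(np.abs(x - 1.0)))
```

Starting from the symmetric point $(1.8, \dots, 1.8)^T$, the iteration reduces to a scalar Newton step on $x^3 - 1$ and converges to the root in a handful of iterations.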
The numerical results for Example 3, presented in Table 5, indicate that most methods (SM, SHM, CM, LW, RW) converge in six iterations, while Ostrowski's Method (OM) requires seven. Cordero's Method (CM) and the Rational Weight (RW) method are the most efficient in terms of CPU time, taking approximately 0.46 s, whereas SHM is significantly slower at 14.9 s. In the high-dimensional case, the efficiency gap widens drastically: the analyzed methods exhibit efficiency indices approximately twice as large as those of Ostrowski's and Sharma's Methods ($1.49 \times 10^{-2}$ vs. $0.74 \times 10^{-2}$), highlighting the critical benefit of minimizing matrix factorizations in large-scale problems. All methods exhibit a computational order of convergence of 4.0000, validating their theoretical properties on this large-scale problem. In Figure 3, most methods follow a similar rapid decay trajectory, with OM lagging slightly behind, requiring an additional step to reach the same precision.

4.4. Calculating Radius of Convergence

To show how the convergence radius can be calculated, we consider the following problem:
  • Example 4: The motion of a certain object is governed by the initial value problem
    $f_1'(t) = e^t$, $f_2'(t) = 1$ and $f_3'(t) = (e-1)t + 1$,
    with $f_1(0) = f_2(0) = f_3(0) = 0$. Define the mapping $F : D \subset \mathbb{R}^3 \to \mathbb{R}^3$, where $D = U[0,1]$, for $\bar{t} = (t_1, t_2, t_3)^T$ by
    $F(\bar{t}) = (f_1(t_1), f_2(t_2), f_3(t_3))^T.$
    Notice that
    $f_1(t_1) = e^{t_1} - 1$, $f_2(t_2) = t_2$ and $f_3(t_3) = \tfrac{1}{2}(e-1)t_3^2 + t_3$
    solve this initial value problem. According to these definitions, the Fréchet derivative $F'$ of the operator $F$ is given by
    $F'(\bar{t}) = \begin{pmatrix} e^{t_1} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & (e-1)t_3 + 1 \end{pmatrix}.$
The vector $x^* = (0, 0, 0)^T$ solves the equation $F(\bar{t}) = 0$, and $F'(x^*) = I$. Then, for $A = F'(x^*)$, the conditions $(H_1)$, $(H_4)$ and $(H_5)$ are satisfied if
$s_0 = \frac{1}{e-1}, \qquad g_0(t) = (e-1)t \qquad \text{and} \qquad g(t) = e^{\frac{1}{e-1}}\, t.$
We shall calculate the radius of convergence for (6). After defining these functions and the SSE, we can calculate $h_1(t)$ and $h_2(t)$ using the formulas from (H1) and (H2). We set the parameters $p = 0$ and $q = 1$, as they satisfy (H6). We then find $s_1$ and $s_2$ as the SSE of the equations $h_1(t) = 1$ and $h_2(t) = 1$ in $T_0 \setminus \{0\}$, respectively. In this case, $s_1 = 0.5494$ and $s_2 = 0.428$, so the radius of convergence is $s = \min\{s_1, s_2\} = s_2 = 0.428$.
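The bounding functions for this example can be spot-checked numerically. The sketch below samples points in the ball $U[0,1]$ and verifies that $\|F'(\bar{t}) - A\| \le g_0(\|\bar{t}\|)$ with $g_0(t) = (e-1)t$ and $A = F'(x^*) = I$; a Monte Carlo check of this kind supports, but of course does not prove, the continuity condition:

```python
import numpy as np

e = np.e

def Fprime(t):
    # Frechet derivative from the example: diag(e^{t1}, 1, (e-1) t3 + 1).
    return np.diag([np.exp(t[0]), 1.0, (e - 1.0) * t[2] + 1.0])

A = np.eye(3)                    # A = F'(x*) = I at x* = (0, 0, 0)^T
g0 = lambda r: (e - 1.0) * r     # candidate bounding function g0(t) = (e-1) t

rng = np.random.default_rng(0)
worst = -np.inf
for _ in range(5000):
    t = rng.uniform(-1.0, 1.0, 3)
    t /= max(1.0, np.linalg.norm(t))     # keep t inside the ball U[0, 1]
    violation = np.linalg.norm(Fprime(t) - A, 2) - g0(np.linalg.norm(t))
    worst = max(worst, violation)

print("largest observed violation:", worst)   # stays <= 0 if the bound holds
```

For this diagonal derivative the bound can also be verified by hand: $\|F'(\bar{t}) - I\|_2 = \max(|e^{t_1} - 1|,\ (e-1)|t_3|) \le (e-1)\|\bar{t}\|$ on $U[0,1]$.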

4.5. Example 5: Nonlinear Integral Equation [20]

To evaluate the robustness of the analyzed methods on high-dimensional problems derived from real-world applications, we consider the following integral equation on the interval [ 0 , 1 ] :
$u(t) = te + \int_0^1 2t\sigma \exp\!\left(-u(\sigma)^2\right) d\sigma.$
The equation is discretized using a uniform grid with m subintervals and step size $h = 1/m$, with nodes defined as $t_i = ih$ for $i = 1, \dots, m$. The integral is approximated using the trapezoidal rule with weights $w_1 = w_m = h/2$ and $w_j = h$ for $j = 2, \dots, m-1$. This discretization results in a system of nonlinear equations given by
$F_i(u) = u_i - t_i e - 2 t_i \sum_{j=1}^{m} w_j t_j \exp\!\left(-u_j^2\right) = 0, \qquad i = 1, \dots, m.$
For this numerical test, we select a large-scale dimension of $m = 1000$ and use the initial guess $u_0 = (0.1, \dots, 0.1)^T$. The convergence behavior is compared against the reference solution $u^*$. The numerical results, including error norms at each iteration and total CPU time, are summarized in Table 6. This experiment shows how many iterations each method requires to achieve an accuracy of $10^{-15}$.
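A minimal Newton solver for the discretized system illustrates the setup. Plain Newton stands in for the fourth-order family; the kernel sign follows our reading $\exp(-u(\sigma)^2)$ of the printed equation, and the rank-one structure of the Jacobian is an observation of ours, not something used in the paper:

```python
import numpy as np

m = 200                       # stand-in for the m = 1000 experiment
h = 1.0 / m
t = h * np.arange(1, m + 1)   # nodes t_i = i h
w = np.full(m, h)             # trapezoidal weights
w[0] = w[-1] = h / 2.0

def F(u):
    # F_i(u) = u_i - t_i e - 2 t_i sum_j w_j t_j exp(-u_j^2)
    s = np.sum(w * t * np.exp(-u**2))
    return u - np.e * t - 2.0 * t * s

def J(u):
    # dF_i/du_k = delta_ik + 4 t_i w_k t_k u_k exp(-u_k^2): a rank-one update.
    return np.eye(m) + np.outer(4.0 * t, w * t * u * np.exp(-u**2))

u = np.full(m, 0.1)
for k in range(20):
    fu = F(u)
    if np.linalg.norm(fu, np.inf) < 1e-12:
        break
    u = u - np.linalg.solve(J(u), fu)

print("iterations:", k, "u(1) ~", u[-1])
```

Under this reading the solution has the form $u(t) = \alpha t$ with $\alpha = e + (1 - e^{-\alpha^2})/\alpha^2 \approx 2.842$, which the discrete solver reproduces.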
The results presented in Table 6 demonstrate the distinct advantages of the analyzed parametric family methods (LW and RW) in solving high-dimensional systems ( m = 1000 ). Both LW and RW converge rapidly, satisfying the stopping criterion within only three iterations. In contrast, Cordero’s Method (CM) requires four iterations, while Ostrowski’s Method (OM) exhibits instability in the early stages, indicated by an increase in error at the second iteration, and requires six iterations to converge.
In terms of CPU time, the Rational Weight (RW) and Linear Weight (LW) methods are the fastest, with execution times of approximately 0.306 s and 0.317 s, respectively. While Sharma’s Method (SHM) also converges in three iterations, it is computationally expensive, requiring 1.35 s, which is more than four times the cost of the RW method. This confirms that the analyzed family offers an optimal balance between low iteration count and low computational cost per step for large-scale integral equations.

4.6. Basins of Attraction Analysis

The dynamical behavior of the iterative methods (3) is analyzed by generating basins of attraction. This visualization technique enables assessment of a method's stability and sensitivity to the choice of initial guess. The basin of attraction of a root $x^*$ is defined as the set of all initial points $x_0$ in a domain D that converge to $x^*$ under the iterative mapping. If an initial point fails to converge to any root within a specified maximum number of iterations, it is considered divergent.
To generate these basins, we define a mesh of 201 × 201 points within the rectangular domain $D = [-2, 2] \times [-2, 2] \subset \mathbb{R}^2$. Each point $z \in D$ is used as an initial guess $x_0$. If the sequence generated by the iterative method converges to a root $x^*$, the starting point is colored according to that specific root. To provide further insight into the computational cost, the color intensity is scaled by the number of iterations required: brighter colors indicate faster convergence (fewer iterations). Black regions indicate points that diverge or fail to converge within the maximum iteration limit.
For these visual simulations, the stopping criterion is set to $\|F(x_k)\| + \|x_k - x_{k-1}\| < 10^{-3}$, with a maximum of 50 iterations. The basins are examined for three nonlinear systems:
1. System A: $x_1^2 + 4x_2^2 - 4 = 0$, $\quad 4x_1^2 + x_2^2 - 4 = 0$;
2. System B: $x_1^3 - x_2 - 1 = 0$, $\quad x_2^3 - x_1 + 1 = 0$;
3. System C: $x_1^2 + 2x_2^2 - 4 = 0$, $\quad x_1^2 - x_2 - 1 = 0$.
To provide a theoretical verification of the stability regions observed in the basins of attraction, we calculated the precise radii of convergence for Systems A, B, and C. For the methods LW, RW, and CM, the parameters p and q utilized in the local convergence analysis (Condition (H6)) are determined by the limit behavior of the weight functions at the solution, resulting in p = 0 and q = 1 .
The bounding functions $g_0(t)$ and $g(t)$ were determined as linear functions $Lt$ satisfying the generalized continuity conditions $(H_4)$ and $(H_5)$. In our analysis, we selected $g_0(t) = g(t) = Lt$, where $L$ is the Lipschitz constant, and $A = F'(x^*)$. While it is theoretically possible to find a smaller $L_0$ for $g_0(t)$ (since it is restricted to paths originating from $x^*$), using the global constant $L$ provides a rigorous sufficient condition for convergence that holds throughout the entire domain.
The calculated radii are summarized in Table 7. A key observation is the distinction between the polynomial systems:
  • Systems A and C. These systems consist of quadratic equations, so the second Fréchet derivative is constant. Due to the geometric symmetry of the roots, the norm of the inverse Jacobian $\|A^{-1}\|$ is identical for all solutions. Consequently, the constant $L$ and the resulting convergence radius $s$ are the same for every root.
  • System B. This system involves cubic equations, meaning the second derivative changes with x. Furthermore, the roots are not symmetrically distributed. For instance, at the integer root $(1, 0)$, the Jacobian matrix contains a zero on the diagonal, leading to a larger norm $\|A^{-1}\|$ compared to the other roots. This increases the constant $L$ and results in a significantly smaller theoretical radius ($s \approx 0.011$) than that of the more stable roots ($s \approx 0.020$).
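The claim about the integer roots can be checked directly from the Jacobian of System B, $F'(x) = \begin{pmatrix} 3x_1^2 & -1 \\ -1 & 3x_2^2 \end{pmatrix}$. The short script below evaluates $\|A^{-1}\|_2$ at the five (rounded) roots, with the coordinate signs inferred by us from the equations:

```python
import numpy as np

def jac(x1, x2):
    # Jacobian of System B: F(x) = (x1^3 - x2 - 1, x2^3 - x1 + 1).
    return np.array([[3.0*x1**2, -1.0], [-1.0, 3.0*x2**2]])

# Rounded roots, with coordinate signs inferred from the equations themselves.
roots = {"(-0.54,-1.15)": (-0.54, -1.15),
         "(0,-1)":        (0.0, -1.0),
         "(0.68,-0.68)":  (0.68, -0.68),
         "(1,0)":         (1.0, 0.0),
         "(1.15,0.54)":   (1.15, 0.54)}

norms = {name: np.linalg.norm(np.linalg.inv(jac(*p)), 2)
         for name, p in roots.items()}
for name, n in sorted(norms.items(), key=lambda kv: kv[1]):
    print(f"{name}: ||A^-1||_2 = {n:.3f}")
```

At $(1, 0)$ and $(0, -1)$ the Jacobian has a zero diagonal entry and the inverse norm is the largest, consistent with the smaller radii reported in Table 7.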
Figure 4 shows the basins for System A. In this visualization, the theoretical radius of convergence ($s \approx 0.37$), marked by cyan dashed circles for the analyzed family, is clearly visible and fully contained within the stable regions. The stability is quantitatively confirmed by the low divergence counts (Div). The Rational Weight (RW) method exhibits excellent stability with only Div = 489 divergent points (out of 40,401), which is comparable to Ostrowski's Method (Div = 401) and significantly better than the Linear Weight (LW) method (Div = 2881). While both RW and OM are stable, RW displays larger bright areas, indicating a faster rate of convergence. The analyzed methods (LW, RW, CM) exhibit similar morphological shapes, suggesting consistent dynamical behavior across the parametric family.
Table 7. Convergence bounding functions and calculated radii for Systems A, B, and C (Methods: LW, RW, CM with p = 0 , q = 1 ).
| Problem | Solution Vector $x^*$ | $g_0(t) = g(t)$ | $s_1$ | $s_2$ | Radius $s$ |
|---|---|---|---|---|---|
| System A | All 4 roots $(\pm 0.89, \pm 0.89)$ | $1.118t$ | 0.5963 | 0.3703 | 0.3703 |
| System B | Root 1 $(-0.54, -1.15)$ | $20.26t$ | 0.0329 | 0.0204 | 0.0204 |
| | Root 2 $(0.00, -1.00)$ | $37.95t$ | 0.0176 | 0.0109 | 0.0109 |
| | Root 3 $(0.68, -0.68)$ | $21.68t$ | 0.0307 | 0.0191 | 0.0191 |
| | Root 4 $(1.00, 0.00)$ | $37.77t$ | 0.0177 | 0.0110 | 0.0110 |
| | Root 5 $(1.15, 0.54)$ | $20.17t$ | 0.0331 | 0.0205 | 0.0205 |
| System C | Both roots $(\pm 1.41, 1.00)$ | $0.85t$ | 0.7858 | 0.4879 | 0.4879 |
Figure 5 presents the results for System B. This system poses a unique challenge, with five roots and a large constant $L$ resulting in a very small theoretical radius of convergence ($s \approx 0.02$). Consequently, the cyan radius circles are not visible at this scale. Despite this, the actual basins are extensive. The RW method demonstrates superior stability among the analyzed schemes, significantly reducing the divergence area (Div = 1971) compared to Singh's Method (Div = 9640) and Cordero's Method (Div = 5164). Although OM achieves the lowest divergence (Div = 38), the analyzed RW method exhibits larger areas of high brightness, implying a faster convergence speed. This highlights a trade-off in which RW offers competitive stability with enhanced speed.
Figure 6 illustrates the basins for System C, which is computationally the most difficult due to extensive divergent regions across all methods. OM and SNM exhibit narrow corridors of convergence surrounded by significant chaotic boundaries. In contrast, the analyzed methods LW and RW display wider, more uniform basins around the roots, with minimal chaotic interference. The theoretical stability circles ($s \approx 0.4879$) are clearly distinguishable and verify that the immediate neighborhood of the roots is safe. Numerical metrics confirm that the divergence count is lower for the Rational Weight method (Div = 17,631) compared to LW (Div = 20,753) and SNM (Div = 19,767). The analyzed methods (LW, RW, and CM) also appear slightly brighter and maintain similar basin shapes, whereas the comparison methods do not exhibit this common topology.
To complement the visual analysis of the basins of attraction, we computed numerical stability metrics over a 201 × 201 grid (40,401 total points). Table 8 reports the Stability Index ($S_{ind}$), defined as the percentage of initial guesses that successfully converged to any root; the average number of iterations ($I_{avg}$) for these converged points; and the total count of divergent points ($N_{div}$).
The results indicate clear distinctions in robustness across the systems:
  • System A. This system is well behaved for most methods. Ostrowski's Method (OM) and Sharma's Method (SHM) exhibit the highest stability ($S_{ind} \approx 99\%$) with minimal divergence ($N_{div} = 401$). Among the analyzed family, the Rational Weight (RW) method performs best, achieving 98.79% stability, which is comparable to the optimal methods and significantly more robust than Linear Weight (LW) at 92.87%.
  • System B. This system presents a greater challenge due to its cubic nonlinearity. While OM achieves near-perfect stability (99.91%), the analyzed RW method demonstrates strong performance with 95.12% stability, significantly outperforming Singh's Method (SNM), which drops to 76.14%. The LW method shows moderate instability here ($N_{div} = 7511$), confirming the theoretical sensitivity observed in the radius-of-convergence analysis.
  • System C. This system proves the most difficult for all iterative schemes, likely because the geometry of the solution space causes the Jacobian to become singular near the coordinate axes. All methods struggle to exceed 60% stability. However, the RW method again proves to be the most robust of the analyzed family, achieving the highest stability index (56.36%) of all tested methods and effectively minimizing the chaotic regions compared to LW (48.63%).
Overall, the Rational Weight (RW) method consistently provides the best trade-off between stability and convergence speed within the analyzed parametric family, often rivaling established methods like OM and SHM.
Figure 6. Basins of attraction for System C. The roots are marked with white stars. Starting the iterative process from a point within a region of a specific color leads to convergence to the corresponding root. The cyan dashed circles indicate the theoretical radius of convergence $s \approx 0.4879$. Black regions indicate divergence (Div).
Table 8. Numerical stability metrics for Systems A, B, and C. $S_{ind}$: Stability Index (percentage of converged points); $I_{avg}$: average iterations per converged point; $N_{div}$: number of divergent points (out of 40,401 total grid points).
| System | Method | $S_{ind}$ (%) | $I_{avg}$ | $N_{div}$ |
|---|---|---|---|---|
| System A | OM | 99.01 | 3.39 | 401 |
| | SNM | 93.16 | 3.39 | 2765 |
| | SHM | 99.01 | 2.94 | 401 |
| | LW | 92.87 | 4.19 | 2881 |
| | RW | 98.79 | 4.06 | 489 |
| | CM | 96.92 | 3.79 | 1245 |
| System B | OM | 99.91 | 5.53 | 38 |
| | SNM | 76.14 | 5.31 | 9640 |
| | SHM | 97.44 | 5.59 | 1034 |
| | LW | 81.41 | 6.19 | 7511 |
| | RW | 95.12 | 7.17 | 1971 |
| | CM | 87.22 | 6.67 | 5164 |
| System C | OM | 55.64 | 3.57 | 17,921 |
| | SNM | 51.07 | 4.22 | 19,767 |
| | SHM | 55.90 | 3.43 | 17,817 |
| | LW | 48.63 | 5.19 | 20,753 |
| | RW | 56.36 | 4.74 | 17,631 |
| | CM | 55.20 | 5.82 | 18,099 |
Across all three examples, OM is the most stable method, but it has fewer bright areas than the parametric family, indicating slightly slower convergence. The RW method exhibits larger bright areas and fewer divergent points, representing the best trade-off between speed and stability. Additionally, the methods RW, LW, and CM display similar shapes across all examples.

5. Conclusions

We devised an extended parametric family of two-step iterative methods for solving nonlinear equations in Banach spaces. The main theoretical contribution of this work is the formulation of local and semi-local convergence analyses that do not require the existence of high-order derivatives, unlike standard approaches relying on Taylor series expansions. We obtained accurate error estimates, the radius of convergence, and uniqueness-of-solution results by applying generalized continuity conditions to the first Fréchet derivative. The semi-local convergence was validated using the method of majorizing sequences, with convergence criteria depending only on the initial data. This greatly enhances the methods' applicability to equations whose operators lack derivatives of the order required by Taylor-based conditions. The theoretical fourth-order convergence was verified through numerical experiments based on high-precision arithmetic. Comparative analysis revealed that certain members of the analyzed family, especially the Linear Weight (LW) variant, exhibit better performance in CPU time, Computational Efficiency Index, and iteration count than existing strategies such as Ostrowski's and Singh's.
The dynamical analysis further validated these theoretical results by superimposing calculated convergence radii onto basin plots, confirming that the theoretical bounds are strictly contained within stable regions. Notably, among the proposed variations, the Rational Weight (RW) method demonstrated superior global stability, minimizing the number of divergent points even in the most challenging test problems.
Future work will extend this analytical framework, based on majorizing sequences and generalized continuity in Banach spaces, to other classes of higher-order iterative schemes. Applying this tight approach to existing and future methods, which mainly rely on restrictive Taylor-series assumptions, will help characterize their convergence domains and error estimates under weaker conditions and thereby extend their practical applicability. The technique presented in this paper is very general, so it can be used to extend the applicability of other methods [1,2,6,7,8,9,10,11,12,13,15,16,17,18,19,21] analogously. This will be the focus of future research.

Author Contributions

Conceptualization, I.K.A., S.S. and M.S.; methodology, I.K.A., S.S. and M.S.; software, I.K.A., S.S. and M.S.; validation, I.K.A., S.S. and M.S.; formal analysis, I.K.A., S.S. and M.S.; investigation, I.K.A., S.S. and M.S.; resources, I.K.A., S.S. and M.S.; data curation, I.K.A., S.S. and M.S.; writing—original draft preparation, I.K.A., S.S. and M.S.; writing—review and editing, I.K.A., S.S. and M.S.; visualization, I.K.A., S.S. and M.S.; supervision, I.K.A., S.S. and M.S.; project administration, I.K.A., S.S. and M.S.; funding acquisition, I.K.A., S.S. and M.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing is not applicable. No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
  3. Argyros, I.K.; Shakhno, S. Extending the applicability of two-step solvers for solving equations. Mathematics 2019, 7, 62.
  4. Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 32.
  5. Argyros, I.K.; Shakhno, S.; Yarmola, H. Two-step solver for nonlinear equations. Symmetry 2019, 11, 128.
  6. Behl, R.; Martinez, E. A new high-order and efficient family of iterative techniques for nonlinear models. Complexity 2020, 2020, 1706841.
  7. Zhanlav, T.; Otgondorj, K. Higher order Jarratt-like iterations for solving systems of nonlinear equations. Appl. Math. Comput. 2021, 395, 125849.
  8. Singh, H.; Sharma, J.R. Simple yet highly efficient numerical techniques for systems of nonlinear equations. Comput. Appl. Math. 2023, 42, 22.
  9. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
  10. Arroyo, V.; Cordero, A.; Torregrosa, J.R. Approximation of artificial satellites' preliminary orbits: The efficiency challenge. Math. Comput. Model. 2011, 54, 1802–1807.
  11. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Fractal complexity of a new biparametric family of fourth optimal order based on the Ermakov–Kalitkin scheme. Fractal Fract. 2023, 7, 459.
  12. Khirallah, M.Q.; Hafiz, M.A. Solving system of non-linear equations using family of Jarratt methods. Int. J. Differ. Equ. Appl. 2013, 12, 69–83.
  13. Kumar, D.; Sharma, J.R.; Singh, H. Higher order Traub–Steffensen type methods and their convergence analysis in Banach spaces. Int. J. Nonlinear Sci. Numer. Simul. 2023, 24, 1565–1587.
  14. Respondek, J. Numerical approach to the non-linear diofantic equations with applications to the controllability of infinite dimensional dynamical systems. Int. J. Control 2005, 78, 1017–1030.
  15. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904.
  16. Singh, H.; Sharma, J.R. A two-point Newton-like method of optimal fourth order convergence for systems of nonlinear equations. J. Complex. 2025, 86, 101907.
  17. Grau-Sánchez, M.; Grau, Á.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
  18. Singh, A. On a Three-Step Efficient Fourth-Order Method for Computing the Numerical Solution of System of Nonlinear Equations and Its Applications. Proc. Natl. Acad. Sci. India Sect. A Phys. Sci. 2020, 90, 709–716.
  19. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algorithms 2013, 62, 307–323.
  20. Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2023, 93, 203–225.
  21. Ostrowski, A.M. Solution of Equation and Systems of Equations; Academic Press: New York, NY, USA, 1960.
Figure 1. Convergence analysis of different approaches in terms of $\|F_n\|$ and $\|x_n - x_{n-1}\|$ for Example 1.
Figure 2. Convergence analysis of different approaches in terms of $\|F_n\|$ and $\|x_n - x_{n-1}\|$ for Example 2.
Figure 3. Convergence analysis of different approaches in terms of $\|F_n\|$ and $\|x_n - x_{n-1}\|$ for Example 3.
Figure 4. Basins of attraction for System A. The roots are marked with white stars. Starting the iterative process from a point within a region of a specific color leads to convergence to the corresponding root. The cyan dashed circles indicate the theoretical radius of convergence $s \approx 0.37$. Black regions indicate divergence (Div is the number of divergent points).
Figure 5. Basins of attraction for System B. The roots are marked with white stars. Starting the iterative process from a point within a region of a specific color leads to convergence to the corresponding root. The convergence radius is very small ($s \approx 0.01$–$0.02$), so it is not visible. Brighter areas mean faster convergence. Black regions indicate divergence (Div is the number of divergent points).
Table 1. Classification of all comparison methods against the analyzed framework (4), defined by the sequences $\bar{p}_n$ and $\bar{q}_n$.
| Method | Belongs to (4) | Sequences $(\bar{p}_n, \bar{q}_n)$ |
|---|---|---|
| Linear Weight (LW) | Yes | $\bar{p}_n = 2 b_n$, $\quad \bar{q}_n = 1 + 2 b_n$ |
| Rational Weight (RW) | Yes | $\bar{p}_n = \dfrac{2 b_n}{1 - 0.5 b_n}$, $\quad \bar{q}_n = 1 + \dfrac{1.5 b_n}{1 - 0.5 b_n}$ |
| Cordero's Method (CM) | Yes | $\bar{p}_n = \dfrac{2 b_n}{1 - 2 b_n}$, $\quad \bar{q}_n = \dfrac{1}{1 - 2 b_n}$ |
| Ostrowski's Method (OM) | No | — |
| Singh's Method (SNM) | No | — |
| Sharma's Method (SHM) | No | — |
Table 2. Estimation of computational costs of elementary functions using arbitrary-precision arithmetic (1000 decimal digits).
| Function | $xy$ | $x/y$ | $\sqrt{x}$ | $e^x$ | $\cos(x)$ | $\sin(x)$ |
|---|---|---|---|---|---|---|
| $C(f)$ | 1.00 | 1.26 | 1.19 | 22.23 | 27.99 | 29.05 |
Table 3. Comparison of the methods’ performance for Example 1 ( m = 2 ).
| Method | k | $\|x_{k-1} - x_{k-2}\|$ | $\|x_k - x_{k-1}\|$ | $\|F(x_k)\|$ | COC | CPU Time (s) | CE |
|---|---|---|---|---|---|---|---|
| OM | 6 | 8.54 × 10^{-23} | 1.89 × 10^{-112} | 1.89 × 10^{-112} | 4 | 8.68 × 10^{-3} | 3.146 × 10^{2} |
| SM | 6 | 3.31 × 10^{-93} | 0.00 | 0.00 | 4 | 9.20 × 10^{-3} | 3.195 × 10^{2} |
| SHM | 6 | 1.32 × 10^{-67} | 4.55 × 10^{-272} | 4.55 × 10^{-272} | 4 | 1.06 × 10^{-2} | 3.989 × 10^{2} |
| CM | 6 | 1.28 × 10^{-62} | 1.59 × 10^{-252} | 1.59 × 10^{-252} | 4 | 7.44 × 10^{-3} | 4.165 × 10^{2} |
| LW | 5 | 3.89 × 10^{-29} | 1.33 × 10^{-118} | 1.33 × 10^{-118} | 4 | 5.94 × 10^{-3} | 4.190 × 10^{2} |
| RW | 5 | 1.93 × 10^{-29} | 8.11 × 10^{-120} | 8.11 × 10^{-120} | 4 | 7.62 × 10^{-3} | 4.190 × 10^{2} |
Table 4. Comparison of performance for Example 2 ( m = 10 ).
| Method | k | $\|x_{k-1} - x_{k-2}\|$ | $\|x_k - x_{k-1}\|$ | $\|F(x_k)\|$ | COC | CPU Time (s) | CE |
|---|---|---|---|---|---|---|---|
| OM | 31 | 3.92 × 10^{-86} | 4.77 × 10^{-258} | 4.24 × 10^{-257} | 4 | 1.12 × 10^{-1} | 19.53 |
| SM | 47 | 1.22 × 10^{-62} | 3.29 × 10^{-249} | 2.24 × 10^{-248} | 4 | 1.83 × 10^{-1} | 31.55 |
| SHM | 9 | 9.68 × 10^{-29} | 7.49 × 10^{-112} | 3.63 × 10^{-111} | 4 | 7.95 × 10^{-2} | 18.94 |
| CM | 14 | 6.50 × 10^{-40} | 2.75 × 10^{-157} | 2.15 × 10^{-156} | 4 | 6.51 × 10^{-2} | 34.22 |
| LW | 12 | 5.89 × 10^{-33} | 1.27 × 10^{-128} | 6.13 × 10^{-128} | 4 | 5.57 × 10^{-2} | 34.30 |
| RW | 7 | 5.53 × 10^{-47} | 1.44 × 10^{-185} | 1.13 × 10^{-184} | 4 | 2.81 × 10^{-2} | 34.30 |
Table 5. Comparison of performance for Example 3 ( m = 300 ).
| Method | k | $\|x_{k-1} - x_{k-2}\|$ | $\|x_k - x_{k-1}\|$ | $\|F(x_k)\|$ | COC | CPU Time (s) | CE |
|---|---|---|---|---|---|---|---|
| OM | 7 | 1.34 × 10^{-58} | 1.62 × 10^{-176} | 4.86 × 10^{-176} | 4 | 5.59 × 10^{-1} | 7.422 × 10^{-3} |
| SM | 6 | 4.50 × 10^{-51} | 3.15 × 10^{-205} | 9.46 × 10^{-205} | 4 | 8.43 × 10^{-1} | 1.479 × 10^{-2} |
| SHM | 6 | 4.24 × 10^{-64} | 1.24 × 10^{-257} | 3.73 × 10^{-257} | 4 | 1.49 × 10^{1} | 7.431 × 10^{-3} |
| CM | 6 | 1.02 × 10^{-60} | 5.64 × 10^{-244} | 1.69 × 10^{-243} | 4 | 4.59 × 10^{-1} | 1.493 × 10^{-2} |
| LW | 6 | 4.54 × 10^{-57} | 2.18 × 10^{-229} | 6.55 × 10^{-229} | 4 | 8.02 × 10^{-1} | 1.493 × 10^{-2} |
| RW | 6 | 7.10 × 10^{-58} | 1.30 × 10^{-232} | 3.90 × 10^{-232} | 4 | 4.68 × 10^{-1} | 1.493 × 10^{-2} |
Table 6. Errors $\|x_n - x^*\|$ and CPU time (s) for the integral equation ($m = 1000$).
| k | LW | RW | SM | SHM | OM | CM |
|---|---|---|---|---|---|---|
| 1 | 6.39 × 10^{-2} | 1.38 × 10^{-1} | 2.98 × 10^{-1} | 2.39 × 10^{-1} | 2.14 | 8.52 × 10^{-1} |
| 2 | 9.17 × 10^{-7} | 6.85 × 10^{-6} | 1.84 × 10^{-5} | 3.73 × 10^{-6} | 4.56 | 5.48 × 10^{-3} |
| 3 | 4.44 × 10^{-16} | 3.33 × 10^{-16} | 3.33 × 10^{-16} | 3.33 × 10^{-16} | 3.01 × 10^{-1} | 1.72 × 10^{-9} |
| 4 | — | — | — | — | 4.21 × 10^{-4} | 3.33 × 10^{-16} |
| 5 | — | — | — | — | 2.08 × 10^{-13} | — |
| 6 | — | — | — | — | 2.22 × 10^{-16} | — |
| CPU (s) | 3.17 × 10^{-1} | 3.06 × 10^{-1} | 4.17 × 10^{-1} | 1.35 | 8.98 × 10^{-1} | 7.13 × 10^{-1} |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
