Article

Extending the Applicability of a Two-Step Vectorial Method with Accelerators of Order Five for Solving Systems of Equations

1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
3 Department of Mathematics, University of Houston, Houston, TX 77205, USA
4 Department of Mathematics, University of Florida, Gainesville, FL 32603, USA
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1299; https://doi.org/10.3390/math13081299
Submission received: 19 March 2025 / Revised: 11 April 2025 / Accepted: 13 April 2025 / Published: 15 April 2025

Abstract

The local convergence analysis of a two-step vectorial method with accelerators of order five has been given previously. However, the convergence order five was obtained using Taylor series and assumptions on the existence of at least the fifth derivative of the mapping involved, which does not appear in the method. These assumptions limit the applicability of the method. Moreover, no a priori error estimates, radius of convergence, or uniqueness-of-solution results were given. All of these concerns are addressed in this paper. Furthermore, the more challenging semi-local convergence analysis, not previously studied, is presented using majorizing sequences. Both analyses depend on the generalized continuity of the Jacobian of the mapping involved, which is used to control the Jacobian and to sharpen the error distances. Numerical examples validate the sufficient convergence conditions presented in the theory.

1. Introduction

Let $j$ denote a fixed natural number, let $\Omega \subset \mathbb{R}^j$ be an open and convex set, and let $F : \Omega \to \mathbb{R}^j$ stand for a continuously differentiable mapping with Jacobian denoted by $F'$.
Numerous problems from applied mathematics, scientific computing, and engineering can be written using mathematical modeling [1,2,3,4,5,6,7,8,9,10] as a system of equations in the form
$$F(x) = 0. \quad (1)$$
A solution $x^* \in \Omega$ of the system of equations $F(x) = 0$ is attainable in analytical form only in special cases. That explains why most solution schemes for such systems are iterative.
Newton's method is undoubtedly the most popular; it is defined for $x_0 \in \Omega$ and each $n = 0, 1, 2, \ldots$ by
$$x_{n+1} = x_n - F'(x_n)^{-1} F(x_n). \quad (2)$$
Newton's method is of convergence order two and has served as the first substep of higher-order schemes due to its computational efficiency (CE). In particular, Newton's method is the first optimal (vectorial) scheme in the sense of the following conjecture.
Conjecture 1
([5]). The convergence order of any Newton-type method (without memory) defined on $\mathbb{R}^j$ cannot exceed the bound $CE = 2^{s_1 + s_2 - 1}$, $s_2 \le s_1$, where $s_1$ is the number of evaluations of the entries of $F$ per iteration and $s_2$ is the number of evaluations of $F'$. Moreover, an iterative method is called optimal if it reaches $CE$.
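For instance, under this reading of the bound, Newton's method uses $s_1 = 1$ evaluation of $F$ and $s_2 = 1$ evaluation of $F'$ per iteration, so $CE = 2^{1+1-1} = 2$, in agreement with its quadratic order; method (3) below uses $s_1 = 2$ and $s_2 = 1$, giving $CE = 2^{2+1-1} = 4$, which is why order four is optimal for it.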
Two-step optimal Newton-type methods with accelerators of order four have already been studied in Refs. [5,11], respectively, and are defined for $a_1, a_2 \in \mathbb{R}$ by
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \qquad x_{n+1} = x_n - \varphi(y_n) F'(x_n)^{-1} (a_1 F(x_n) + a_2 F(y_n)), \quad (3)$$
where $\varphi$ is a function such that
$$\varphi(0) = \frac{1}{a_1}, \qquad \varphi'(0) = 2 \varphi(0), \qquad a_1 = a_2, \qquad a_1 \neq 0$$
or
$$y_n = x_n - F'(x_n)^{-1} F(x_n), \qquad x_{n+1} = y_n - F'(x_n)^{-1} (p_n F(y_n) + q_n F(x_n)), \quad (4)$$
where, for $a, b \in \mathbb{R}$,
$$u_n = \frac{F(y_n)^T F(y_n)}{F(x_n)^T F(x_n)}, \qquad K_n = \frac{1}{1 + a u_n},$$
$$p_n = K_n (1 + b u_n) \quad \text{and} \quad q_n = 2 K_n u_n.$$
We shall also use the equivalent version of $u_n$ given for $F = (f_1, f_2, \ldots, f_j)$ by
$$u_n = \frac{\sum_{i=1}^{j} f_i^2(y_n)}{\sum_{i=1}^{j} f_i^2(x_n)} = \frac{\|F(y_n)\|^2}{\|F(x_n)\|^2}.$$
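To fix ideas, the following minimal NumPy sketch performs one full step of method (4) as reconstructed above. It is an illustration only, not the authors' implementation; the weights follow the definitions of $u_n$, $K_n$, $p_n$, and $q_n$ just given.

```python
import numpy as np

def method4_step(F, J, x, a, b):
    # One step of method (4): a Newton substep, then the accelerated correction.
    Fx = F(x)
    Jx = J(x)                                  # Jacobian F'(x), reused by both substeps
    y = x - np.linalg.solve(Jx, Fx)            # first substep: Newton
    Fy = F(y)
    u = Fy.dot(Fy) / Fx.dot(Fx)                # u_n = F(y)^T F(y) / F(x)^T F(x)
    K = 1.0 / (1.0 + a * u)                    # accelerator K_n
    p, q = K * (1.0 + b * u), 2.0 * K * u      # accelerators p_n and q_n
    return y - np.linalg.solve(Jx, p * Fy + q * Fx)   # second substep
```

Iterating `method4_step` until $\|F(x_n)\|$ falls below a tolerance reproduces the scheme; in production, a single factorization of $F'(x_n)$ would be shared by the two solves.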
The local convergence order four is established in [11] for method (4) using Taylor series expansions and by assuming the existence of at least the fifth derivative of the mapping $F$, which does not appear in method (4) (or in method (3)). However, several issues limit the applicability of these methods.

1.1. Motivational Issues

(P1)
The convergence order four is shown in [11] by utilizing Taylor series expansions and assuming the existence of at least the fifth derivative of $F$, which does not appear in method (4). In particular, the following local convergence result is shown in Ref. [11] for method (4).
Theorem 1.
Suppose that $F : \mathbb{R}^j \to \mathbb{R}^j$ is sufficiently many times differentiable in a neighborhood of a simple solution $x^*$. Then, the sequence $\{x_n\}$ generated by method (4) converges to $x^*$. Moreover, the following error equation holds:
$$\|x_{n+1} - x^*\| = C \|x_n - x^*\|^4 + O(\|x_n - x^*\|^5),$$
where $C > 0$.
It is worth noting that the proof of this result uses Taylor series expansions and requires the existence of at least the fifth derivative of the mapping $F$, which does not appear in the method.
We look at a toy example where the results of Ref. [11] do not apply.
Let $j = 1$, $\Omega = [-2, 2]$. Define $G : \Omega \to \mathbb{R}$ by
$$G(t) = \begin{cases} m_1 t^4 \log |t| + m_2 t^5 + m_3 t^4, & \text{if } t \neq 0, \\ 0, & \text{if } t = 0, \end{cases}$$
where $m_1 \neq 0$ and $m_2 + m_3 = 0$, so that $G(1) = m_2 + m_3 = 0$. It follows from this definition that the fourth derivative of $G$ does not exist at $t = 0 \in \Omega$, since $G^{(4)}$ is unbounded near zero. Notice that $t = 1 \in \Omega$ solves the equation $G(t) = 0$. Moreover, if $x_0 = 0.9 \in \Omega$, both methods (3) and (4) converge to $t^* = 1$. This observation suggests that the sufficient convergence conditions in [11] or in other studies using Taylor series can be replaced by weaker ones.
(P2)
There are no computable a priori estimates on the error distances $\|x_n - x^*\|$. Hence, we cannot tell in advance how many iterations are required to achieve a desired error tolerance $\varepsilon > 0$.
(P3)
There is no information on the uniqueness of the solution $x^* \in \Omega$ in a neighborhood of it.
(P4)
The more challenging and important semi-local convergence analysis has not been studied previously.

1.2. Innovation

These problems constitute our motivation for this paper. Problems ( P 1 ) ( P 4 ) are addressed as follows.
(P1)
The local convergence analysis is presented using only conditions on the mappings that appear in method (4), i.e., $F$ and $F'$.
(P2)
A natural number $k$ is determined in advance such that $\|x_n - x^*\| \le \varepsilon$ for each $n \ge k$. Moreover, the radius of convergence is given. Consequently, the initial points are picked from a specific neighborhood of $x^*$ such that $\lim_{n \to \infty} x_n = x^*$.
(P3)
A domain is specified containing only one solution of the system of equations $F(x) = 0$.
(P4)
The semi-local convergence analysis is provided using majorizing sequences [1].
Both types of convergence analysis rely on generalized continuity, which is used to control $F'$ and to sharpen the error estimates $\|x_{n+1} - x^*\|$ and $\|x_n - x^*\|$.
Since we will refer to neighborhoods of points in $\mathbb{R}^j$, we use the standard notation for open and closed balls. Given a point $x \in \mathbb{R}^j$ and a radius $r > 0$, we have the following:
  • The open ball of radius $r$ around $x$:
    $B(x, r) = \{ y \in \mathbb{R}^j : \|y - x\| < r \}$,
    which includes all points strictly within distance $r$ from $x$.
  • The closed ball of radius $r$ around $x$:
    $B[x, r] = \{ y \in \mathbb{R}^j : \|y - x\| \le r \}$,
    which also contains the boundary points exactly at distance $r$.
The remainder of the paper is organized as follows. In Section 2, we introduce our notation and establish the local convergence theorems, culminating in the error bounds and uniqueness results. Section 3 contains the semi-local convergence analysis via majorizing sequences. Section 4 discusses the implementation details, including the computational cost and comparisons with classical schemes. Finally, in Section 5, we conclude with numerical experiments that showcase the benefits of our approach, highlighting cases where existing fourth-order methods struggle or require more stringent assumptions.

2. Local Area Convergence

Some convergence conditions are needed. Let $A = [0, +\infty)$.
Suppose the following hold:
(C1)
There exists a continuous, nondecreasing function $w_0 : A \to A$ such that the function $w_0(t) - 1$ has a minimal positive zero. We denote this zero by $R_0$ and set $A_0 = [0, R_0)$.
(C2)
There exists a continuous, nondecreasing function $w : A_0 \to A$ such that, for $g_1 : A_0 \to A$ defined by
$$g_1(t) = \frac{\int_0^1 w((1 - \tau) t) \, d\tau}{1 - w_0(t)},$$
the function $g_1(t) - 1$ has a minimal positive zero in $A_0$. We denote such a zero by $\rho_1$.
(C3)
For $\delta > 0$, the function $h : A_0 \to A$ defined as
$$h(t) = \delta \left[ \left( 1 + \int_0^1 w_0(\tau g_1(t) t) \, d\tau \right) g_1(t) t \right]^2$$
is such that $|a| h(t) - 1$ has a minimal positive zero in $A_0$, which is called $R_1$.
Set $A_1 = [0, R_*)$, where $R_* = \min\{R_0, R_1\}$.
(C4)
For the functions $c : A_1 \to A$, $d : A_1 \to A$, $\bar{w} : A_1 \to A$, and $g_2 : A_1 \to A$ given by
$$c(t) = \frac{|2a - b| \, h(t)}{2 (1 - |a| h(t))},$$
$$d(t) = \frac{2 h(t)}{1 - |a| h(t)},$$
$$\bar{w}(t) = w((1 + g_1(t)) t) \quad \text{or} \quad \bar{w}(t) = w_0(t) + w_0(g_1(t) t),$$
and
$$g_2(t) = \left[ \frac{\int_0^1 w((1 - \tau) g_1(t) t) \, d\tau}{1 - w_0(g_1(t) t)} + \frac{\bar{w}(t) \left( 1 + \int_0^1 w_0(\tau g_1(t) t) \, d\tau \right)}{(1 - w_0(t))(1 - w_0(g_1(t) t))} + \frac{c(t) \left( 1 + \int_0^1 w_0(\tau g_1(t) t) \, d\tau \right)}{1 - w_0(t)} \right] g_1(t) + \frac{d(t) \left( 1 + \int_0^1 w_0(\tau t) \, d\tau \right)}{1 - w_0(t)},$$
the function $g_2(t) - 1$ has a minimal positive zero in $A_1$, which is denoted by $\rho_2$.
Set
$$\rho = \min\{\rho_l\}, \quad l = 1, 2, \quad \text{and} \quad A_2 = [0, \rho). \quad (5)$$
By the conditions (C1)–(C4) and (5), we have the following for each $t \in A_2$:
$$0 \le w_0(t) < 1, \quad (6)$$
$$0 \le |a| h(t) < 1 \quad (7)$$
and
$$0 \le g_l(t) < 1. \quad (8)$$
We show in Theorem 2 that the number ρ is a convergence radius for method (4).
Next, the parameter $\delta$ and the functions $w_0$ and $w$ are associated with $F'$.
(C5)
There exist an invertible linear operator $S \in L(\mathbb{R}^j, \mathbb{R}^j)$ and a solution $x^* \in \Omega$ of the nonlinear system of equations $F(x) = 0$ such that, for each $x \in \Omega$,
$$\|S^{-1} (F'(x) - S)\| \le w_0(\|x - x^*\|).$$
Set $\Omega_0 = \Omega \cap B(x^*, R_0)$.
(C6)
$\|S^{-1} (F'(\tilde{x}) - F'(x))\| \le w(\|\tilde{x} - x\|)$ for each $\tilde{x}, x \in \Omega$.
(C7)
$B[x^*, \rho] \subset \Omega$.
(C8)
There exists a parameter $\delta > 0$ such that
$$0 < \frac{\|S\|^2}{\|F(x)\|^2} \le \delta \quad \text{for each } x \in B(x^*, \rho) \setminus \{x^*\}.$$
Remark 1.
One can choose $S = I$, the identity operator, or $S = F'(\bar{x})$, where $\bar{x} \in \Omega$ is a convenient point other than $x^*$, or $S = F'(x^*)$. The last selection implies that $x^*$ is a simple solution. However, no such assumption is made or implied here. Thus, method (4) can be used to approximate solutions $x^*$ of multiplicity 2, 3, …. Other choices for $S$ are also possible [1,2,7,10].
The main local area convergence result is based on the conditions (C1)–(C8). Set $\Omega_1 = B(x^*, \rho) \setminus \{x^*\}$.
Theorem 2.
Suppose that the conditions (C1)–(C8) hold. Then, the following assertions hold for the sequence $\{x_n\}$ generated by method (4), provided that $x_0 \in \Omega_1$:
$$\{x_n\} \subset B(x^*, \rho), \quad (9)$$
$$\|y_n - x^*\| \le g_1(\|x_n - x^*\|) \|x_n - x^*\| \le \|x_n - x^*\| < \rho \quad (10)$$
and
$$\|x_{n+1} - x^*\| \le g_2(\|x_n - x^*\|) \|x_n - x^*\| \le \|x_n - x^*\|, \quad (11)$$
and $x^* = \lim_{n \to \infty} x_n$.
Proof. 
Notice that item (9) holds if $n = 0$. Induction is used to show items (9)–(11). Let $z \in \Omega_1$. The application of the conditions (C1) and (C5) and the definitions (5) and (6) gives, in turn,
$$\|S^{-1} (F'(z) - S)\| \le w_0(\|z - x^*\|) \le w_0(\rho) < 1. \quad (12)$$
The Banach lemma on invertible linear operators [1,7] and (12) assure the existence of $F'(z)^{-1} \in L(\mathbb{R}^j, \mathbb{R}^j)$ and the estimate
$$\|F'(z)^{-1} S\| \le \frac{1}{1 - w_0(\|z - x^*\|)}. \quad (13)$$
Notice now that, for $z = x_0$, the iterate $y_0$ is well defined by the first substep of method (4) if $n = 0$ and
$$y_0 - x^* = x_0 - x^* - F'(x_0)^{-1} F(x_0) = -F'(x_0)^{-1} S \int_0^1 S^{-1} \left[ F'(x^* + \tau (x_0 - x^*)) - F'(x_0) \right] d\tau \, (x_0 - x^*). \quad (14)$$
In view of the condition (C6), the definition (5), and the estimate (13), we obtain from (14) that
$$\|y_0 - x^*\| \le \frac{\int_0^1 w((1 - \tau) \|x_0 - x^*\|) \, d\tau \, \|x_0 - x^*\|}{1 - w_0(\|x_0 - x^*\|)} \le g_1(\|x_0 - x^*\|) \|x_0 - x^*\| \le \|x_0 - x^*\| < \rho. \quad (15)$$
It follows from the estimate (15) that item (10) holds for $n = 0$ and that the iterate $y_0 \in B(x^*, \rho)$. Notice that, by (13) (for $z = x_k$) and the condition (C8), the accelerators $u_k$, $K_k$, $p_k$, and $q_k$ and the iterate $x_{k+1}$ are well defined. Moreover, we can write, in turn, the following:
$$x_{k+1} - x^* = y_k - x^* - F'(y_k)^{-1} F(y_k) + \left( F'(y_k)^{-1} - F'(x_k)^{-1} \right) F(y_k) + (1 - p_k) F'(x_k)^{-1} F(y_k) - q_k F'(x_k)^{-1} F(x_k). \quad (16)$$
Some estimates are needed before we revisit (16).
By the condition (C8), (15), and the induction hypotheses, we obtain, in turn,
$$\|F(y_k)\| = \|S S^{-1} F(y_k)\| \le \|S\| \|S^{-1} F(y_k)\|.$$
But we have
$$F(y_k) = F(y_k) - F(x^*) = \int_0^1 F'(x^* + \tau (y_k - x^*)) \, d\tau \, (y_k - x^*),$$
so, by (C5),
$$\|S^{-1} F(y_k)\| = \left\| S^{-1} \int_0^1 \left[ F'(x^* + \tau (y_k - x^*)) - S + S \right] d\tau \, (y_k - x^*) \right\| \le \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right) \|y_k - x^*\| \quad (17)$$
and
$$u_k = \frac{\|F(y_k)\|^2}{\|F(x_k)\|^2} \le \delta \left[ \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right) \|y_k - x^*\| \right]^2 = h_k. \quad (18)$$
Thus, by (18) and the condition (C3), we have
$$|a| u_k \le |a| h_k < 1,$$
so
$$|K_k| \le \frac{1}{1 - |a| h_k}, \quad (19)$$
$$1 - p_k = 1 - K_k - b K_k u_k = 1 - K_k - \frac{b}{2} q_k = \frac{(2a - b) u_k}{2 (1 + a u_k)},$$
which gives
$$|1 - p_k| \le \frac{|2a - b| \, h_k}{2 (1 - |a| h_k)} = c_k, \quad (20)$$
and
$$|q_k| \le \frac{2 h_k}{1 - |a| h_k} = d_k. \quad (21)$$
In view of (15), (18)–(21), (5), and (8) for $l = 2$, the estimate (16) gives
$$\|x_{k+1} - x^*\| \le \left[ \frac{\int_0^1 w((1 - \tau) \|y_k - x^*\|) \, d\tau}{1 - w_0(\|y_k - x^*\|)} + \frac{\bar{w}_k \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right)}{(1 - w_0(\|x_k - x^*\|))(1 - w_0(\|y_k - x^*\|))} + \frac{c_k \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right)}{1 - w_0(\|y_k - x^*\|)} \right] \|y_k - x^*\| + \frac{d_k \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right) \|x_k - x^*\|}{1 - w_0(\|x_k - x^*\|)} \le g_2(\|x_k - x^*\|) \|x_k - x^*\| \le \|x_k - x^*\|, \quad (22)$$
where we also used
$$\|S^{-1} (F'(y_k) - F'(x_k))\| \le w(\|y_k - x^*\| + \|x_k - x^*\|) \le w((g_1(\|x_k - x^*\|) + 1) \|x_k - x^*\|) \equiv \bar{w}_k$$
or
$$\|S^{-1} (F'(y_k) - F'(x_k))\| \le \|S^{-1} (F'(y_k) - S)\| + \|S^{-1} (F'(x_k) - S)\| \le w_0(\|y_k - x^*\|) + w_0(\|x_k - x^*\|) \equiv \bar{w}_k$$
and
$$\left\| \left( F'(y_k)^{-1} - F'(x_k)^{-1} \right) F(y_k) \right\| = \left\| F'(y_k)^{-1} (F'(x_k) - F'(y_k)) F'(x_k)^{-1} F(y_k) \right\| \le \|F'(y_k)^{-1} S\| \, \|S^{-1} (F'(x_k) - F'(y_k))\| \, \|F'(x_k)^{-1} S\| \, \|S^{-1} F(y_k)\| \le \frac{\bar{w}_k \left( 1 + \int_0^1 w_0(\tau \|y_k - x^*\|) \, d\tau \right) \|y_k - x^*\|}{(1 - w_0(\|x_k - x^*\|))(1 - w_0(\|y_k - x^*\|))}.$$
Hence, item (11) holds if $n = k$, and the iterate $x_{k+1} \in B(x^*, \rho)$. Moreover, for $\eta = g_2(\|x_0 - x^*\|) \in [0, 1)$, the estimate (22) gives
$$\|x_{k+1} - x^*\| \le \eta \|x_k - x^*\| \le \cdots \le \eta^{k+1} \|x_0 - x^*\| < \rho. \quad (23)$$
Finally, by (23), we deduce that $\lim_{k \to \infty} x_k = x^*$. □
A domain is now specified in which $x^*$ is the only solution of the system of equations $F(x) = 0$.
Proposition 1.
Suppose the following:
There exists a solution $z \in B(x^*, \rho_3)$ of the system of equations $F(x) = 0$ for some $\rho_3 > 0$, and there exists $\rho_4 \ge \rho_3$ such that the condition (C5) holds on $B(x^*, \rho_4)$ and
$$\int_0^1 w_0(\tau \rho_4) \, d\tau < 1. \quad (24)$$
Set $\Omega_2 = \Omega \cap B[x^*, \rho_3]$. Then, the only solution of the system of equations $F(x) = 0$ in the domain $\Omega_2$ is $x^*$.
Proof. 
Let us consider $L_0 = \int_0^1 F'(x^* + \tau (z - x^*)) \, d\tau$, provided that $z \neq x^*$. In view of the condition (C5) and (24), we have
$$\|S^{-1} (L_0 - S)\| \le \int_0^1 w_0(\tau \|z - x^*\|) \, d\tau \le \int_0^1 w_0(\tau \rho_4) \, d\tau < 1. \quad (25)$$
Therefore, $L_0^{-1} \in L(\mathbb{R}^j, \mathbb{R}^j)$. Finally, from the identity
$$z - x^* = L_0^{-1} (F(z) - F(x^*)) = L_0^{-1}(0) = 0,$$
we conclude that $z = x^*$. □
Remark 2. 
(i) 
The function $\bar{w}$ has two versions. So, in practice, we shall use the smaller of the two. Notice that, if the two versions cross on $[0, \rho)$, then $\bar{w}$ is chosen as the smaller one on each subinterval.
(ii) 
One can take $z = x^*$ and $\rho_3 = \rho$ provided that all of the conditions (C1)–(C7) hold in Proposition 1.

3. Semi-Local Area Convergence

In this section, the formulas and calculations are similar to those of the local area convergence. But the roles of $x^*$, $w_0$, and $w$ are now played by $x_0$, $\nu_0$, and $\nu$, respectively.
Suppose the following:
(H1)
There exists a continuous, nondecreasing function $\nu_0 : A \to A$ such that the function $\nu_0(t) - 1$ has a minimal positive zero. Let us denote such a zero by $s$. Set $A_4 = [0, s)$.
(H2)
There exists a continuous and nondecreasing function $\nu : A_4 \to A$.
Define the sequences $\{\alpha_n\}$, $\{b_n\}$, $\{\gamma_n\}$, $\{\mu_n\}$, $\{\bar{c}_n\}$, and $\{\bar{d}_n\}$ for $\alpha_0 = 0$, some $b_0 \ge 0$, and each $n = 0, 1, 2, \ldots$ by
$$\mu_n = \int_0^1 \nu((1 - \tau)(b_n - \alpha_n)) \, d\tau \, (b_n - \alpha_n), \qquad \bar{h}_n = \delta \mu_n^2 \ \text{for some } \delta > 0, \qquad \bar{c}_n = \frac{1 + |b| \bar{h}_n}{1 - |a| \bar{h}_n}, \qquad \bar{d}_n = \frac{2 \bar{h}_n}{1 - |a| \bar{h}_n},$$
$$\alpha_{n+1} = b_n + \frac{\bar{c}_n \mu_n + \bar{d}_n (b_n - \alpha_n)}{1 - \nu_0(\alpha_n)}, \qquad \gamma_{n+1} = \int_0^1 \nu((1 - \tau)(\alpha_{n+1} - \alpha_n)) \, d\tau \, (\alpha_{n+1} - \alpha_n) + (1 + \nu_0(\alpha_n))(\alpha_{n+1} - b_n)$$
and
$$b_{n+1} = \alpha_{n+1} + \frac{\gamma_{n+1}}{1 - \nu_0(\alpha_{n+1})}. \quad (26)$$
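Because condition (H3) below is stated in terms of these scalar sequences, it is natural to generate them numerically. The following sketch does so for user-supplied $\nu_0$, $\nu$, $\delta$, $a$, $b$, and $b_0$; the function names and the simple quadrature rule are our own choices, not part of the paper.

```python
import numpy as np

def majorizing_sequences(nu0, nu, delta, a, b, b0, n_steps=25):
    # Generate alpha_n and b_n from (26); stop if the checks behind
    # condition (H3) fail, i.e., nu0(alpha_n) >= 1 or |a| * h_n >= 1.
    quad = lambda f: np.mean([f(tau) for tau in np.linspace(0.005, 0.995, 100)])
    alpha, beta = 0.0, b0
    history = [(alpha, beta)]
    for _ in range(n_steps):
        if nu0(alpha) >= 1.0:
            raise ValueError("nu0(alpha_n) >= 1: condition (H3) fails")
        mu = quad(lambda tau: nu((1 - tau) * (beta - alpha))) * (beta - alpha)
        h = delta * mu**2
        if abs(a) * h >= 1.0:
            raise ValueError("|a| h_n >= 1: condition (H3) fails")
        c = (1 + abs(b) * h) / (1 - abs(a) * h)
        d = 2 * h / (1 - abs(a) * h)
        alpha_new = beta + (c * mu + d * (beta - alpha)) / (1 - nu0(alpha))
        gamma = (quad(lambda tau: nu((1 - tau) * (alpha_new - alpha))) * (alpha_new - alpha)
                 + (1 + nu0(alpha)) * (alpha_new - beta))
        if nu0(alpha_new) >= 1.0:
            raise ValueError("nu0(alpha_{n+1}) >= 1: condition (H3) fails")
        alpha, beta = alpha_new, alpha_new + gamma / (1 - nu0(alpha_new))
        history.append((alpha, beta))
    return history   # alpha_n should increase to its least upper bound alpha*
```

If the run completes with $\alpha_n$ visibly settling below some $s_0 < s$, condition (H3) is plausible for the chosen data.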
A general convergence condition for the sequence { α n } is needed since it is shown to be majorizing for { x n } in Theorem 3.
(H3)
There exists $s_0 \in [0, s)$ such that, for each $n = 0, 1, 2, \ldots$,
$$\nu_0(\alpha_n) < 1, \qquad |a| \bar{h}_n < 1 \qquad \text{and} \qquad \alpha_n \le s_0.$$
In view of the condition (H3) and (26), it follows that $0 \le \alpha_n \le b_n \le s_0$ and that there exists $\alpha^* \in [0, s_0]$ such that $\lim_{n \to \infty} \alpha_n = \alpha^*$.
It is known that $\alpha^*$ is the unique least upper bound of the sequence $\{\alpha_n\}$.
There exists a relationship between the functions $\nu_0$ and $\nu$ and the mappings appearing in method (4).
(H4)
There exist $x_0 \in \Omega$ and an invertible mapping $S$ such that, for each $u \in \Omega$,
$$\|S^{-1} (F'(u) - S)\| \le \nu_0(\|u - x_0\|).$$
Set $\Omega_3 = \Omega \cap B(x_0, s)$.
Notice that, for $u = x_0$, the conditions (H1) and (H4) give
$$\|S^{-1} (F'(x_0) - S)\| \le \nu_0(0) < 1.$$
Thus, the linear mapping $F'(x_0)$ is invertible. Consequently, the iterate $y_0$ is well defined by the first substep of the method. Thus, we can choose $b_0 \ge \|F'(x_0)^{-1} F(x_0)\|$.
(H5)
$\|S^{-1} (F'(u_2) - F'(u_1))\| \le \nu(\|u_2 - u_1\|)$ for each $u_1, u_2 \in \Omega_3$.
(H6)
There exists $\delta > 0$ such that, for each $x \in \Omega_3$,
$$\frac{\|S\|^2}{\|F(x)\|^2} \le \delta$$
and
(H7)
$B[x_0, \alpha^*] \subset \Omega$.
Remark 3.
Some possible selections for $S$ are the following: $S = I$, $S = F'(x_0)$, or $S = F'(\tilde{x})$, where $\tilde{x} \in \Omega$ is an auxiliary point other than $x_0$. Other choices for $S$ are also possible.
The semi-local area convergence for method (4) can now follow.
Theorem 3.
Suppose that the conditions (H1)–(H7) hold. Then, the sequence $\{x_n\}$ generated by method (4) satisfies the assertions
$$\{x_n\} \subset B(x_0, \alpha^*), \quad (27)$$
$$\|y_n - x_n\| \le b_n - \alpha_n \quad (28)$$
and
$$\|x_{n+1} - y_n\| \le \alpha_{n+1} - b_n. \quad (29)$$
Moreover, the point $x^* = \lim_{n \to \infty} x_n$ is well defined and solves the system of equations $F(x) = 0$.
Proof. 
The assertions (27)–(29) are shown by induction. Notice that, by the definition of $\alpha_0$ and $b_0$, (26), and the method (4), the assertion (27) holds for $n = 0$ and $\|F'(x_0)^{-1} F(x_0)\| \le b_0 = b_0 - \alpha_0 < \alpha^*$. So, the assertions (27) and (28) hold for $n = 0$, and the iterate $y_0 \in B(x_0, \alpha^*)$.
Then, by (H3)–(H6) and the induction hypotheses, we obtain that $F'(x_k)^{-1}$ exists and
$$\|F'(x_k)^{-1} S\| \le \frac{1}{1 - \nu_0(\|x_k - x_0\|)} \le \frac{1}{1 - \nu_0(\alpha_k)}. \quad (30)$$
As in the local case, we need some estimates. We have
$$F(y_k) = F(y_k) - F(x_k) - F'(x_k)(y_k - x_k),$$
so
$$\|S^{-1} F(y_k)\| = \left\| \int_0^1 S^{-1} \left[ F'(x_k + \tau (y_k - x_k)) - F'(x_k) \right] d\tau \, (y_k - x_k) \right\| \le \int_0^1 \nu((1 - \tau) \|y_k - x_k\|) \, d\tau \, \|y_k - x_k\| \le \int_0^1 \nu((1 - \tau)(b_k - \alpha_k)) \, d\tau \, (b_k - \alpha_k) = \mu_k, \quad (31)$$
$$u_k = \frac{\|F(y_k)\|^2}{\|F(x_k)\|^2} \le \delta \mu_k^2 = \bar{h}_k, \quad (32)$$
$$|p_k| = \left| \frac{1 + b u_k}{1 + a u_k} \right| \le \frac{1 + |b| \bar{h}_k}{1 - |a| \bar{h}_k} = \bar{c}_k \quad (33)$$
and
$$|q_k| \le \frac{2 \bar{h}_k}{1 - |a| \bar{h}_k} = \bar{d}_k. \quad (34)$$
Then, by the second substep of (4), (30)–(34), and the triangle inequality, we have, from
$$x_{k+1} - y_k = -F'(x_k)^{-1} (p_k F(y_k) + q_k F(x_k)), \quad (35)$$
that
$$\|x_{k+1} - y_k\| \le \frac{\bar{c}_k \mu_k + \bar{d}_k (b_k - \alpha_k)}{1 - \nu_0(\alpha_k)} = \alpha_{k+1} - b_k \quad (36)$$
and
$$\|x_{k+1} - x_0\| \le \|x_{k+1} - y_k\| + \|y_k - x_0\| \le \alpha_{k+1} - b_k + b_k - \alpha_0 = \alpha_{k+1}, \quad (37)$$
where we also used
$$\|F'(x_k)^{-1} F(x_k)\| = \|y_k - x_k\| \le b_k - \alpha_k. \quad (38)$$
Thus, the assertion (29) holds, and the iterate $x_{k+1} \in B(x_0, \alpha^*)$.
Then, in view of the first substep of the method (4), we can write, in turn, that
$$F(x_{k+1}) = F(x_{k+1}) - F(x_k) - F'(x_k)(y_k - x_k) = F(x_{k+1}) - F(x_k) - F'(x_k)(x_{k+1} - x_k) + F'(x_k)(x_{k+1} - y_k). \quad (39)$$
It follows that
$$\|S^{-1} (F(x_{k+1}) - F(x_k) - F'(x_k)(x_{k+1} - x_k))\| \le \int_0^1 \nu((1 - \tau)(\alpha_{k+1} - \alpha_k)) \, d\tau \, (\alpha_{k+1} - \alpha_k) \quad (40)$$
and
$$\|S^{-1} F'(x_k)\| = \|S^{-1} (F'(x_k) - S + S)\| \le 1 + \|S^{-1} (F'(x_k) - S)\| \le 1 + \nu_0(\|x_k - x_0\|) \le 1 + \nu_0(\alpha_k), \quad (41)$$
where (40) is obtained as (31), with $y_k$ replaced by $x_{k+1}$.
Hence, in view of (40) and (41), (39) gives
$$\|S^{-1} F(x_{k+1})\| \le \gamma_{k+1}. \quad (42)$$
By the first substep of method (4) for $n = k + 1$, (26), (42), and (H5), we have
$$\|y_{k+1} - x_{k+1}\| \le \|F'(x_{k+1})^{-1} S\| \, \|S^{-1} F(x_{k+1})\| \le \frac{\gamma_{k+1}}{1 - \nu_0(\|x_{k+1} - x_0\|)} \le \frac{\gamma_{k+1}}{1 - \nu_0(\alpha_{k+1})} = b_{k+1} - \alpha_{k+1} \quad (43)$$
and
$$\|y_{k+1} - x_0\| \le \|y_{k+1} - x_{k+1}\| + \|x_{k+1} - x_0\| \le b_{k+1} - \alpha_{k+1} + \alpha_{k+1} - \alpha_0 = b_{k+1} < \alpha^*. \quad (44)$$
The induction for the assertions (27)–(29) is completed. Notice that, by the condition (H3), the sequence $\{\alpha_n\}$ is Cauchy, being convergent. But all of the iterates satisfy $x_k \in B(x_0, \alpha^*)$, and (27)–(30) hold. It follows that the sequence $\{x_k\}$ is also Cauchy in $\mathbb{R}^j$ and, as such, it has a limit denoted by $x^* \in B[x_0, \alpha^*]$.
Let $k \to \infty$ in the estimate (42) to obtain $F(x^*) = 0$. Furthermore, by the assertions (28) and (29) and the triangle inequality, one has
$$\|x_{k+1} - x_k\| \le \alpha_{k+1} - \alpha_k. \quad (45)$$
So, for $j = 0, 1, 2, \ldots$ and using the triangle inequality,
$$\|x_{k+j} - x_k\| \le \alpha_{k+j} - \alpha_k. \quad (46)$$
Therefore, we conclude by (46), letting $j \to \infty$, that
$$\|x^* - x_k\| \le \alpha^* - \alpha_k. \quad (47)$$
□
Next, a domain is given with only one solution for the system of equations F ( x ) = 0 .
Proposition 2.
Suppose that there exists a solution $z^* \in B(x_0, s_1)$ of the system of equations $F(x) = 0$ for some $s_1 > 0$, the condition (H2) holds on $B(x_0, s_1)$, and there exists $s_2 \ge s_1$ such that
$$\int_0^1 \nu_0((1 - \tau) s_1 + \tau s_2) \, d\tau < 1. \quad (48)$$
Set $\Omega_5 = \Omega \cap B[x_0, s_2]$.
Then, the only solution of the system of equations $F(x) = 0$ in the domain $\Omega_5$ is $z^*$.
Proof. 
Suppose that there exists a solution $\bar{z} \in \Omega_5$ of the system of equations $F(x) = 0$ satisfying $\bar{z} \neq z^*$. Let us define the linear mapping $L = \int_0^1 F'(z^* + \tau (\bar{z} - z^*)) \, d\tau$. Then, by the condition (H4) and (48), we have, in turn, that
$$\|S^{-1} (L - S)\| \le \int_0^1 \nu_0((1 - \tau) \|z^* - x_0\| + \tau \|\bar{z} - x_0\|) \, d\tau \le \int_0^1 \nu_0((1 - \tau) s_1 + \tau s_2) \, d\tau < 1. \quad (49)$$
It follows from (49) that the linear mapping $L$ is invertible. Thus, by the identity
$$\bar{z} - z^* = L^{-1} (F(\bar{z}) - F(z^*)) = L^{-1}(0) = 0,$$
we deduce that $\bar{z} = z^*$. □
Remark 4. 
(i) 
The limit point $\alpha^*$ in the condition (H7) can be replaced by $s$, given in (H1).
(ii) 
If all of the conditions (H4)–(H7) hold in Proposition 2, then one can take $z^* = x^*$ and $s_1 = \alpha^*$.

4. Numerical Work

In this section, we first consider some alternatives to the conditions (C8) and (H6).
Case $S = I$ and $j = 1$: local area convergence. In view of the estimate
$$F(y_k) = F(y_k) - F(x^*) = \int_0^1 F'(x^* + \tau (y_k - x^*)) \, d\tau \, (y_k - x^*),$$
we obtain
$$S^{-1} F(y_k) = S^{-1} \int_0^1 \left[ F'(x^* + \tau (y_k - x^*)) - S + S \right] d\tau \, (y_k - x^*). \quad (50)$$
Thus, we have from (50) that
$$|S^{-1} F(y_k)| \le \left( 1 + \int_0^1 w_0(\tau |y_k - x^*|) \, d\tau \right) |y_k - x^*|.$$
If $x_k \neq x^*$, we obtain
$$\left| (S(x_k - x^*))^{-1} \left( F(x_k) - F(x^*) - S(x_k - x^*) \right) \right| \le \frac{1}{|x_k - x^*|} \int_0^1 w_0(\tau |x_k - x^*|) \, d\tau \, |x_k - x^*| = \int_0^1 w_0(\tau |x_k - x^*|) \, d\tau \equiv e_k.$$
Thus, $F(x_k) \neq 0$ and
$$\frac{1}{|F(x_k)|} \le \frac{1}{|x_k - x^*| (1 - e_k)}.$$
Hence, we have
$$u_k \le \frac{\left( 1 + \int_0^1 w_0(\tau |y_k - x^*|) \, d\tau \right)^2 |y_k - x^*|^2}{|x_k - x^*|^2 (1 - e_k)^2} \le \frac{\left( 1 + \int_0^1 w_0(\tau |y_k - x^*|) \, d\tau \right)^2 g_1^2(|x_k - x^*|) |x_k - x^*|^2}{|x_k - x^*|^2 (1 - e_k)^2} = \frac{\left( 1 + \int_0^1 w_0(\tau |y_k - x^*|) \, d\tau \right)^2 g_1^2(|x_k - x^*|)}{(1 - e_k)^2}.$$
Consequently, we can drop the condition (C8) and replace the function $h$ by
$$h(t) = \frac{\left( 1 + \int_0^1 w_0(\tau g_1(t) t) \, d\tau \right)^2 g_1^2(t)}{(1 - e(t))^2},$$
where
$$e(t) = \int_0^1 w_0(\tau t) \, d\tau.$$
Semi-local area convergence. We have the estimates
$$|S^{-1} F(y_n)| \le \int_0^1 \nu((1 - \tau)(b_n - \alpha_n)) \, d\tau \, (b_n - \alpha_n),$$
$$F(x_n) = -F'(x_n)(y_n - x_n)$$
and
$$\frac{1}{|F(x_n)|} \le \frac{|F'(x_n)^{-1}|}{|y_n - x_n|}.$$
Thus, we can choose
$$\psi_n = \frac{1}{1 - \nu_0(\alpha_n)},$$
so
$$u_n \le \frac{\psi_n^2 \left( \int_0^1 \nu((1 - \tau)(b_n - \alpha_n)) \, d\tau \right)^2 |y_n - x_n|^2}{|y_n - x_n|^2} = \psi_n^2 \left( \int_0^1 \nu((1 - \tau)(b_n - \alpha_n)) \, d\tau \right)^2.$$
General case: local area convergence. Let us introduce the following conditions:
(C8)′
$$0 < \frac{\|S\|^2}{\|F(x)\|^2} \le \delta_1$$
for some $\delta_1 > 0$ and each $x \in B(x^*, \rho) \setminus \{x^*\}$, or
(C8)′′
$$\|S\| \left( 1 + \int_0^1 w_0(\tau \|x - x^*\|) \, d\tau \right) \|x - x^*\| < \sqrt{2}$$
for each $x \in B(x^*, \rho) \setminus \{x^*\}$.
In this case, notice that
$$\frac{1}{\|F(x)\|^2} = \frac{1}{1 - (1 - \|F(x)\|^2)} = 1 + (1 - \|F(x)\|^2) + (1 - \|F(x)\|^2)^2 + \cdots \le \delta_2$$
for some finite $\delta_2$, since
$$\|F(x)\| = \|S S^{-1} F(x)\| \le \|S\| \|S^{-1} F(x)\| \le \|S\| \left( 1 + \int_0^1 w_0(\tau \|x - x^*\|) \, d\tau \right) \|x - x^*\| < \sqrt{2},$$
or, equivalently, $|1 - \|F(x)\|^2| < 1$. Thus, we can set
$$\delta = \begin{cases} \delta_1, & \text{if (C8)}' \text{ holds}, \\ \delta_2, & \text{if (C8)}'' \text{ holds}. \end{cases}$$
Thus, we have
$$u_n \le \delta_2 \|S\|^2 \left( 1 + \int_0^1 w_0(\tau g_1(\|x_n - x^*\|) \|x_n - x^*\|) \, d\tau \right)^2 g_1^2(\|x_n - x^*\|) \, \|x_n - x^*\|^2.$$
Semi-local area convergence. Suppose that
(H6)′
$$0 < \frac{\|S\|^2}{\|F(x)\|^2} \le \delta_3$$
for some finite $\delta_3 > 0$ and each $x \in B(x_0, \alpha^*)$.
Then,
$$u_k \le \delta_3 \left( \int_0^1 \nu((1 - \tau)(b_k - \alpha_k)) \, d\tau \, (b_k - \alpha_k) \right)^2.$$
Notice that, in view of (55), one can also consider the following hybrid method:
$$y_k = x_k - F'(x_k)^{-1} F(x_k),$$
$$x_{k+1} = y_k - F'(x_k)^{-1} \left( p_k F(y_k) + q_k F(x_k) \right),$$
where $u_k$ is replaced by $\tilde{u}_k$, defined for some finite $i$ by
$$\tilde{u}_k = \|F(y_k)\|^2 M_k,$$
where
$$M_k = 1 + (1 - \|F(x_k)\|^2) + \cdots + (1 - \|F(x_k)\|^2)^i.$$
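In other words, $M_k$ is a partial geometric series approximating $1/\|F(x_k)\|^2$ whenever $|1 - \|F(x_k)\|^2| < 1$, so $\tilde{u}_k$ can be formed without a division. A two-line sketch (the function name and the truncation order $i$ are illustrative):

```python
def M(normF2, i):
    # Partial geometric series 1 + (1 - normF2) + ... + (1 - normF2)^i,
    # approximating 1/normF2 when |1 - normF2| < 1; normF2 = ||F(x_k)||^2.
    return sum((1.0 - normF2) ** n for n in range(i + 1))
```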
To further validate the efficiency and accuracy of the proposed method, we present three numerical examples of varying complexity. These examples serve as benchmarks to compare the convergence behavior, computational efficiency, and numerical stability of different iterative methods.
In all calculations, the default tolerance was set to 10 16 to ensure high precision in the numerical results. The maximum number of iterations for each method was limited to 50 to prevent excessive computational overhead while maintaining convergence efficiency. This limitation was chosen based on empirical observations that most efficient methods reach convergence well within this range, making further iterations redundant and computationally wasteful. Additionally, the reported CPU timing was obtained as the average over 50 independent runs, providing a more stable and representative measure of computational performance by reducing the impact of potential fluctuations in execution time. This approach is valuable in ensuring that the results are not unduly influenced by temporary variations in computational load, background processes, or system-specific execution conditions. Such fluctuations can distort comparative performance assessments, leading to misleading conclusions about the relative efficiency of the evaluated methods. All numerical experiments were conducted using Google Colab’s cloud computing resources. The runtime used for computations was equipped with an Intel Xeon CPU @2.20 GHz, 13 GB RAM, and a Tesla K80 accelerator with 12 GB of GDDR5 VRAM. This environment ensured that performance benchmarks were consistent and comparable across different test cases.
The first example focuses on a small system to illustrate the fundamental properties of the methods, while the second example scales up the problem size to assess medium-sized system performance. The third example examines a large-scale nonlinear system to demonstrate how well the methods handle computational challenges at an increased scale.
In this section, we compare the performance of Method (4) with two established iterative methods for solving systems of nonlinear equations. The first is Method (8) from [12]. The second is a sixth-order method without memory [13]. For completeness, we recall their definitions below as used in the numerical experiments.
1. 
Abbasbandy et al. [12]:
$$y_k = x_k - \frac{2}{3} F'(x_k)^{-1} F(x_k), \qquad A_k = F'(x_k)^{-1} F'(y_k),$$
$$x_{k+1} = x_k - \left( I + \frac{21}{8} A_k - \frac{9}{2} A_k^2 + \frac{15}{8} A_k^3 \right) F'(x_k)^{-1} F(x_k).$$
2. 
A sixth-order method of convergence without memory [13]:
$$y_n = x_n - A_n^{-1} F(x_n), \qquad z_n = y_n - A_n^{-1} F(y_n),$$
$$x_{n+1} = z_n - \left[ 3I - A_n^{-1} B_n \left( 3I - A_n^{-1} B_n \right) \right] A_n^{-1} F(z_n),$$
where
$$A_n = [u_n, v_n; F], \quad B_n = [w_n, s_n; F], \quad u_n = x_n - a F(x_n), \quad v_n = x_n + b F(x_n), \quad w_n = z_n - c F(z_n), \quad s_n = z_n + d F(z_n)$$
and $a, b, c, d \in \mathbb{R}$.
Example 1.
Let $j = 3$ and $\Omega = B(0, 1)$, and define the mapping $F : \Omega \to \mathbb{R}^3$ for $\bar{t} = (t_1, t_2, t_3)^T$, $t_i \in \mathbb{R}$, by
$$F(\bar{t}) = \left( e^{t_1} - 1, \ \frac{e - 1}{2} t_2^2 + t_2, \ t_3 \right)^T.$$
From this definition, it follows that the Jacobian of the mapping $F$ is given by
$$F'(\bar{t}) = \begin{pmatrix} e^{t_1} & 0 & 0 \\ 0 & (e - 1) t_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
Notice that $t^* = (0, 0, 0)^T$ solves the system of equations $F(\bar{t}) = 0$ and that $F'(t^*) = I$. Then, for $S = I$, the conditions (C1)–(C8) hold for $w_0(t) = (e - 1) t$ and $w(t) = e^{\frac{1}{e - 1}} t$, with $R_0 = \frac{1}{e - 1}$.
Next, we compute $\rho$ using (5), yielding the following radii: $R_0 = 0.58198$, $R_1 = 0.31997$, $\rho_1 = 0.38269$, and $\rho_2 = 0.24588$. Among these values, the minimum radius of convergence is $\rho = 0.24588$. The chosen parameters are $a = 3.5$ and $b = 1.5$. For the numerical experiment, the initial guess was set to $t_0 = (3.5, 0.5, 2.5)$.
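The first two of these radii can be reproduced directly from $w_0$ and $w$; the sketch below does so with SciPy root bracketing (our own tooling choice, not the authors'). $R_1$ and $\rho_2$ follow analogously once $\delta$, $a$, and $b$ are fixed.

```python
import numpy as np
from scipy.optimize import brentq

L0 = np.e - 1.0               # w0(t) = L0 * t
L = np.exp(1.0 / (np.e - 1))  # w(t)  = L * t

# R0 is the minimal positive zero of w0(t) - 1.
R0 = brentq(lambda t: L0 * t - 1.0, 1e-12, 1.0)

def g1(t):
    # g1(t) = (int_0^1 w((1 - tau) t) dtau) / (1 - w0(t)) = (L t / 2) / (1 - L0 t)
    return (L * t / 2.0) / (1.0 - L0 * t)

# rho1 is the minimal positive zero of g1(t) - 1.
rho1 = brentq(lambda t: g1(t) - 1.0, 1e-12, R0 - 1e-12)

print(R0, rho1)   # about 0.58198 and 0.38269, matching the reported values
```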
We compare the methods’ performance for this example in Table 1.
Example 2.
Consider the nonlinear system of equations of size 200 given by
$$e^{t_i} - \sum_{\substack{l = 1 \\ l \neq i}}^{j} t_l = 0, \quad i = 1, 2, \ldots, 200.$$
We set the initial estimate $t_0 = \left( \frac{3}{2}, \frac{3}{2}, \overset{200}{\ldots}, \frac{3}{2} \right)^T$, $a = 2.0$, and $b = 2.0$ to obtain the solution $t^* = \left( 0.0050, 0.0050, \overset{200}{\ldots}, 0.0050 \right)^T$. The results are summarized in Table 2.
Example 3.
We analyze a large-scale nonlinear system of 300 equations in 300 variables to demonstrate the method's efficiency and scalability in handling complex computational problems at an increased scale.
To demonstrate its broad applicability to real-world large-scale nonlinear problems, we consider the following system:
$$P(t) = \begin{cases} t_l^2 t_{l+1} - 1 = 0, & 1 \le l \le 299, \\ t_{300}^2 t_1 - 1 = 0, & l = 300. \end{cases}$$
The required solution of this system is $t^* = \left( 1, 1, \overset{300}{\ldots}, 1 \right)^T$. We used $t_0 = \left( \frac{96}{100}, \frac{96}{100}, \overset{300}{\ldots}, \frac{96}{100} \right)^T$ as the initial guess, and the parameters were set to $a = 1$ and $b = 3.5$. Given the complexity of this system, computational efficiency becomes a critical factor. We compare the performance of various methods in Table 3.
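For reference, this cyclic system and its Jacobian are easy to assemble; the sketch below feeds them to the hypothetical `method4_step` helper from Section 1 (all names and the stopping rule are ours):

```python
import numpy as np

def P(t):
    # P_l(t) = t_l^2 * t_{l+1} - 1, with the wrap-around convention t_301 := t_1.
    return t**2 * np.roll(t, -1) - 1.0

def JP(t):
    # Jacobian: dP_l/dt_l = 2 t_l t_{l+1} and dP_l/dt_{l+1} = t_l^2, cyclically.
    n = t.size
    J = np.zeros((n, n))
    idx = np.arange(n)
    J[idx, idx] = 2.0 * t * np.roll(t, -1)
    J[idx, (idx + 1) % n] = t**2
    return J

t = np.full(300, 0.96)                 # initial guess (96/100, ..., 96/100)
for _ in range(50):                    # iteration cap as in the experiments
    t = method4_step(P, JP, t, a=1.0, b=3.5)
    if np.linalg.norm(P(t)) < 1e-16:
        break
print(np.linalg.norm(t - 1.0))         # distance to the solution (1, ..., 1)
```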
The numerical experiments presented demonstrate the efficiency and robustness of the tested methods across different problem sizes. The proposed approach consistently showed reduced computation time while maintaining high accuracy, particularly in large-scale systems. These results confirm the method’s potential for practical applications in solving complex systems of nonlinear equations efficiently.

5. Conclusions

A finer local convergence analysis for method (4) is presented without the Taylor series expansions used in Ref. [11], which in turn bring the drawbacks (P1)–(P4). The new analysis uses generalized continuity assumptions to control the derivative and to sharpen the bounds on the error distances $\|x_n - x^*\|$. Moreover, the rest of the assumptions rely only on the mappings appearing in method (4), i.e., $F$ and $F'$. Furthermore, the more challenging and important semi-local convergence analysis, which has not been studied previously, is also provided by relying on majorizing sequences. The same technique can be used to extend the applicability of other methods, such as (3), or other methods along the same lines [2,3,4,5,6,7,12,14,15,16,17,18,19,20]. This is the direction of our future work.
Numerical experiments complete this paper.

Author Contributions

Conceptualization, I.K.A., S.S., Y.S., S.R. and N.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2022.
  2. Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 2.
  3. Arroyo, V.; Cordero, A.; Torregrosa, J.R. Approximation of artificial satellites’ preliminary orbits: The efficiency challenge. Math. Comput. Model. 2011, 54, 1802–1807.
  4. Behl, R.; Bhalla, S.; Magreñán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249.
  5. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904.
  6. Jarratt, P. Some fourth order multipoint iterative methods for solving equations. Math. Comput. 1966, 20, 434–437.
  7. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2000.
  8. Shakhno, S.M.; Yarmola, H.P.; Shunkin, Y.V. Convergence analysis of the Gauss–Newton–Potra method for nonlinear least squares problems. Mat. Stud. 2018, 50, 211–221.
  9. Sharma, J.R.; Kumar, S. A class of accurate Newton–Jarratt-like methods with applications to nonlinear models. Comput. Appl. Math. 2022, 41, 46.
  10. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
  11. Singh, H.; Sharma, J.R.; Kumar, S. A simple yet efficient two-step fifth-order weighted-Newton method for nonlinear models. Numer. Algorithms 2022, 93, 203–225.
  12. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103.
  13. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377.
  14. Budzko, D.A.; Cordero, A.; Torregrosa, J.R. A new family of iterative methods widening areas of convergence. Appl. Math. Comput. 2015, 252, 405–417.
  15. Chun, C.; Lee, M.Y.; Neta, B.; Džunić, J. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comput. 2012, 218, 6427–6438.
  16. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
  17. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Fractal complexity of a new biparametric family of fourth optimal order based on the Ermakov–Kalitkin scheme. Fractal Fract. 2023, 7, 459.
  18. Grau-Sánchez, M.; Noguera, M.; Gutiérrez, J. On new computational local orders of convergence. Appl. Math. Lett. 2012, 25, 2023–2030.
  19. King, R.F. A family of fourth order methods for nonlinear equations. SIAM J. Numer. Anal. 1973, 10, 876–879.
  20. Kung, H.T.; Traub, J.F. Optimal order of one-point and multipoint iteration. J. ACM 1974, 21, 643–651.
Table 1. Comparison of methods for Example 1.

Method | $\|t_k - t_{k-1}\|$ | $\|F(t_k)\|$ | k (Iterations) | CPU Timing
Abbasbandy [12] | 3.4360 × 10^−20 | 1.3264 × 10^−29 | 6 | 0.0045
Sixth Convergence Order [13] | 2.8065 × 10^−20 | 3.5245 × 10^−31 | 6 | 0.0011
Method (4) | 3.3981 × 10^−21 | 2.6256 × 10^−28 | 6 | 0.0007
Table 2. Comparison of methods for Example 2.

Method | $\|t_k - t_{k-1}\|$ | $\|F(t_k)\|$ | k (Iterations) | CPU Timing
Abbasbandy [12] | 1.9336 × 10^−18 | 2.4127 × 10^−18 | 3 | 59.1256
Sixth Convergence Order [13] | 2.3646 × 10^−18 | 2.3614 × 10^−17 | 3 | 20.8906
Method (4) | 1.9673 × 10^−19 | 1.2241 × 10^−19 | 2 | 4.0766
Table 3. Comparison of methods for Example 3.

Method | $\|t_k - t_{k-1}\|$ | $\|F(t_k)\|$ | k (Iterations) | CPU Timing
Abbasbandy [12] | 4.7921 × 10^−19 | 3.0150 × 10^−19 | 3 | 177.1542
Sixth Convergence Order [13] | 3.3894 × 10^−20 | 5.2938 × 10^−19 | 3 | 92.9707
Method (4) | 4.4983 × 10^−20 | 2.8317 × 10^−19 | 2 | 9.2241