Article

A Study of at Least Sixth Convergence Order Methods Without or with Memory and Divided Differences for Equations Under Generalized Continuity

by
Ioannis K. Argyros
1,
Ramandeep Behl
2,3,
Sattam Alharbi
4,* and
Abdulaziz Mutlaq Alotaibi
4
1
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2
Mathematical Modelling and Applied Computation Research Group (MMAC), Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
3
Department of Mathematics, Saveetha School of Engineering, SIMATS, Chennai 602105, India
4
Department of Mathematics, College of Science and Humanities in Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(5), 799; https://doi.org/10.3390/math13050799
Submission received: 23 January 2025 / Revised: 20 February 2025 / Accepted: 21 February 2025 / Published: 27 February 2025

Abstract

Multistep methods typically rely on Taylor series to establish their convergence order, which requires the existence of derivatives that do not appear in the iterative functions themselves. Other issues are the absence of a priori error estimates and of information about the radius of convergence or the uniqueness of the solution. These restrictions limit the use of such methods, even in cases where the methods do converge. Consequently, local convergence analysis emerges as a more effective approach, since it relies only on criteria involving the operators appearing in the methods. This expands the applicability of such methods, including to non-Euclidean space settings. Furthermore, this work uses majorizing sequences to address the more challenging semi-local convergence analysis, which was not explored in earlier research. We adopt generalized continuity constraints to control the derivatives and obtain sharper error estimates. The sufficient convergence criteria are demonstrated through examples.
MSC:
65H10; 65Y20; 65G99; 41A58

1. Introduction

A scalar nonlinear equation (NE) or a system of nonlinear equations (SNE) can be derived by transforming models from science, engineering, and nature. Such equations are crucial in mathematics [1,2,3,4,5,6]. Solutions to these nonlinear problems enable us to forecast weather, analyze fluid dynamics, model populations, and predict financial markets. Therefore, to solve the following SNE by approximating its solution, we choose an initial approximation $x_0 \in D$ for the equation
$$F(x) = 0. \tag{1}$$
The operator $F$ defined above maps an open subset $D$ of a Banach space $X$ into $X$. It is usually impossible to find analytical solutions to equations such as (1). As a result, we are limited to iterative methods. For instance, one of the most often used iterative techniques is the Newton–Raphson method, which helps us to obtain an approximate solution of such problems.
Using an iterative technique, researchers apply the Newton–Raphson method or similar approaches to achieve the desired accuracy of the solution. Additionally, they study the convergence, extensions or modifications, stability, and basin of attraction. The following are popular single-step iterative methods for solving (1):
$$x_{n+1} = x_n - F'(x_n)^{-1}F(x_n),$$
$$x_{n+1} = x_n - [x_{n-1}, x_n; F]^{-1}F(x_n), \quad \text{and}$$
$$x_{n+1} = x_n - [x_n - F(x_n),\, x_n + F(x_n); F]^{-1}F(x_n).$$
These are Newton's, the secant, and Steffensen's methods, respectively. However, their convergence order (CO) does not exceed two [1,2,4,5,7,8]. The CO can be increased by adding more steps. That is why we consider a sixth-order method without memory and a method of CO 6.60 with memory, as proposed by Cordero et al. [9]. These schemes are defined for $x_0 \in X$ and each $n = 0, 1, 2, 3, \ldots$ by:
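For scalar equations, the three one-step schemes above can be sketched as follows. This is a minimal illustration, not the authors' code; the test function and starting points in the usage note are hypothetical choices.

```python
# Minimal scalar sketches of Newton's, the secant, and Steffensen's methods.
# The divided difference [x, y; f] = (f(y) - f(x)) / (y - x) replaces f'(x)
# in the derivative-free variants.

def newton(f, df, x0, tol=1e-12, itmax=50):
    x = x0
    for _ in range(itmax):
        step = f(x) / df(x)          # F'(x_n)^{-1} F(x_n)
        x -= step
        if abs(step) < tol:
            break
    return x

def secant(f, x0, x1, tol=1e-12, itmax=50):
    for _ in range(itmax):
        dd = (f(x1) - f(x0)) / (x1 - x0)   # [x_{n-1}, x_n; f]
        x0, x1 = x1, x1 - f(x1) / dd
        if abs(x1 - x0) < tol:
            break
    return x1

def steffensen(f, x0, tol=1e-12, itmax=50):
    x = x0
    for _ in range(itmax):
        fx = f(x)
        xp, xm = x + fx, x - fx            # the nodes x_n + f(x_n), x_n - f(x_n)
        if fx == 0.0 or xp == xm:          # f(x) already at roundoff level
            return x
        dd = (f(xp) - f(xm)) / (xp - xm)   # [x_n - f(x_n), x_n + f(x_n); f]
        step = fx / dd
        x -= step
        if abs(step) < tol:
            break
    return x
```

For example, with $f(x) = x^3 - 2$, all three iterations approach $2^{1/3} \approx 1.2599$ from nearby starting points.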
(1)
Sixth CO without memory:
$$y_n = x_n - A_n^{-1}F(x_n),$$
$$z_n = y_n - A_n^{-1}F(y_n),$$
$$x_{n+1} = z_n - \left[3I + A_n^{-1}B_n\left(-3I + A_n^{-1}B_n\right)\right]A_n^{-1}F(z_n), \tag{2}$$
where $A_n = [u_n, v_n; F]$, $B_n = [w_n, s_n; F]$, $u_n = x_n - aF(x_n)$, $v_n = x_n + bF(x_n)$, $w_n = z_n - cF(z_n)$, $s_n = z_n + dF(z_n)$ and $a, b, c, d \in \mathbb{R}$.
Method (2) specializes to the preceding ones provided that we stop at the first substep and choose $a = b = 0$ for Newton's method, $A_n = [x_{n-1}, x_n; F]$ for the secant method, and $a = b = 1$ for Steffensen's method. Thus, the convergence study of (2) covers the preceding methods.
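In the scalar case, method (2) can be sketched as below. This is an illustrative transcription under stated assumptions, not the authors' code: the third-substep operator is read as $3I + A_n^{-1}B_n(-3I + A_n^{-1}B_n)$, which in $\mathbb{R}$ reduces to $3 - 3t + t^2$ with $t = B_n/A_n$; the test problem in the usage note is hypothetical.

```python
# Scalar sketch of the sixth-order method (2) without memory.
# dd(f, x, y) is the first-order divided difference [x, y; f].

def dd(f, x, y):
    return (f(y) - f(x)) / (y - x)

def method2(f, x0, a=1.0, b=1.0, c=1.0, d=1.0, tol=1e-14, itmax=25):
    x = x0
    for _ in range(itmax):
        fx = f(x)
        u, v = x - a * fx, x + b * fx
        if u == v:                       # f(x) already at roundoff level
            return x
        A = dd(f, u, v)                  # A_n = [u_n, v_n; f]
        y = x - fx / A
        z = y - f(y) / A
        fz = f(z)
        w, s = z - c * fz, z + d * fz
        if w == s:                       # f(z) already at roundoff level
            return z
        B = dd(f, w, s)                  # B_n = [w_n, s_n; f]
        t = B / A                        # A_n^{-1} B_n in the scalar case
        x_new = z - (3.0 - 3.0 * t + t * t) * fz / A
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With $a = b = 0$ the first substep reduces to Newton's step, and with $a = b = 1$ to Steffensen's, matching the specializations noted above.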
(2)
6.60 CO with memory:
$$y_n = x_n - K_n^{-1}F(x_n),$$
$$z_n = y_n - K_n^{-1}F(y_n),$$
$$x_{n+1} = z_n - \left[qI + K_n^{-1}B_n\big((3 - 2q)I + (q - 2)K_n^{-1}B_n\big)\right]K_n^{-1}F(z_n), \tag{3}$$
where $K_n = [u_n, v_n; F]$, $u_n = x_n + \delta L_n F(x_n)$, $v_n = x_n - \gamma L_n F(x_n)$, $L_n \in \mathcal{L}(X, X)$ and $[\cdot, \cdot; F] : D \times D \to \mathcal{L}(X, X)$. The parameters $q, \delta, \gamma \in \mathbb{R}$ are usually chosen to optimize the method; authors usually choose $q = \delta = \gamma = 1$. Notice that the notation $[\cdot, \cdot; F]$ stands for the divided difference of order one, which satisfies $[x, y; F](y - x) = F(y) - F(x)$ if $x \neq y$, and $[x, x; F] = F'(x)$ if $F$ is differentiable at $x$.
The CO is shown in [9] by adopting Taylor series and the fifth derivative of F. However, this approach has certain constraints.
  • Motivation:
(P1)
The inverses of derivatives, divided differences, or high-order derivatives are typically needed for the local convergence analysis. The local analysis of convergence (LAC) in [9] requires derivatives up to the fifth order, which do not appear in the method. These limitations restrict the application of the method even in the case $X = \mathbb{R}^m$. As an example, we choose the following fundamental and motivational function $f$ on $X = \mathbb{R}$, with $D = [-2, 2]$, defined as
$$f(t) = \begin{cases} s_1 t^3 \log t^6 + s_2 t^6 + s_3 t^5, & t \neq 0, \\ 0, & t = 0, \end{cases}$$
where $s_1, s_2, s_3 \in \mathbb{R}$ are three parameters with $s_1 \neq 0$ and $s_2 + s_3 = 0$.
We can easily see that the third derivative of $f$ is unbounded on $X$ at $t = 0 \in X$ and that $f(1) = 0$. Thus, the convergence of procedures (2) and (3) is not guaranteed by the local convergence results in [9]. However, if, for instance, $x_0 = 0.1 \in X$, $a = b = c = d = \gamma = \delta = 1$, $q = 3$ and $s_1 = s_2 = 1$, $s_3 = -1$, then the iterative schemes (2) and (3) converge to $x^* = 1 \in X$. This observation implies that these criteria can be weakened.
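The claims about this example can be checked numerically. The sketch below assumes the parameter choices $s_1 = s_2 = 1$ and $s_3 = -1$ (signs reconstructed so that $f(1) = 0$), and uses a plain secant iteration only as a stand-in for schemes (2) and (3):

```python
import math

# The motivational function from (P1) with s1 = s2 = 1, s3 = -1; for t > 0,
# f(t) = t^3 log t^6 + t^6 - t^5 = 6 t^3 ln t + t^6 - t^5.
def f(t):
    if t == 0.0:
        return 0.0
    return t**3 * math.log(t**6) + t**6 - t**5

# Third derivative for t > 0: f'''(t) = 36 ln t + 66 + 120 t^3 - 60 t^2,
# which tends to -infinity as t -> 0+, so no bounded fifth derivative exists
# on any neighbourhood of 0.
def f3(t):
    return 36.0 * math.log(t) + 66.0 + 120.0 * t**3 - 60.0 * t**2

# A derivative-free (secant) iteration still finds the solution x* = 1.
def secant_root(g, x0, x1, itmax=60):
    for _ in range(itmax):
        g0, g1 = g(x0), g(x1)
        if g1 == g0:
            break
        x0, x1 = x1, x1 - g1 * (x1 - x0) / (g1 - g0)
        if abs(x1 - x0) < 1e-13:
            break
    return x1
```

Evaluating `f3` at points approaching zero shows the blow-up of the third derivative, which is exactly why the fifth-derivative hypotheses of [9] fail here while derivative-free iterations still converge.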
(P2)
The results are applicable only on R m .
(P3)
No subset of $D$ is specified that contains $x^*$ as the only solution of (1).
(P4)
The more important and more difficult semi-local convergence analysis (SLAC) is not given in [9].
(P5)
There is no information about the smallest integer $j$ such that $\|e_n\| < \epsilon$ ($\epsilon > 0$) for each $n \geq j$, where $e_n = x_n - x^*$ ($n = 0, 1, 2, 3, \ldots$).
We note that the previous research [9] suffers from the above-mentioned issues (P1)–(P5). This is the primary reason for conducting this investigation; our approach addresses these issues. The generality of the new technique makes it useful for extending the applicability of other methods in the same way [3,6,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]. Method (2) outperforms existing methods in terms of residual error and the absolute error difference between consecutive iterations. Additionally, it requires significantly less computational time and exhibits a more stable CO than the existing methods. That is the novelty of our paper.
The rest of the article includes LAC and SLAC of the method in Section 2 and Section 3, respectively. Section 4 provides numerical examples to illustrate the practical implementation and effectiveness of the methods. These examples demonstrate the accuracy, efficiency, radii of convergence, and convergence of the iterative schemes in solving SNE. Section 5 concludes this study with a summary of findings, key contributions, and potential directions for future research.

2. Local Convergence Analysis

The analysis uses some conditions.
  • Suppose:
(1)
There exist continuous and nondecreasing functions $\lambda, \mu, \psi_0$ defined on $[0, +\infty)$, $[0, +\infty)$ and $[0, +\infty) \times [0, +\infty)$, respectively, so that the equation
$$\psi_0(\lambda(t), \mu(t)) - 1 = 0$$
has a positive smallest solution (SS), which is denoted by r 0 .
(2)
There exist majorant functions μ , μ 1 , λ , λ 1 , p , p 1 (see conditions (Q)) defined on the interval [ 0 , r 0 ) , continuous and nondecreasing, so that the equation
$$g_1(t) - 1 = 0$$
has an SS c 1 ( 0 , r 0 ) , where
$$g_1(t) = \frac{\psi(\mu_1(t), \mu(t))}{1 - \psi_0(\lambda(t), \mu(t))}.$$
(3)
The equation $g_2(t) - 1 = 0$ has an SS $c_2 \in (0, r_0)$, where the function $g_2 : [0, r_0) \to \mathbb{R}$ is given by
$$g_2(t) = \frac{\psi(\lambda(t) + g_1(t)t, \mu(t))\, g_1(t)}{1 - \psi_0(\lambda(t), \mu(t))}.$$
(4)
The equation $g_3(t) - 1 = 0$ has an SS $c_3 \in (0, r_0)$, where the function $g_3 : [0, r_0) \to \mathbb{R}$ is given by
$$g_3(t) = \left\{ \frac{\psi(\mu_1(t) + g_2(t)t, \mu(t))}{1 - \psi_0(\lambda(t), \mu(t))} + \left[ \left( \frac{\psi(p(t), p_1(t))}{1 - \psi_0(\lambda(t), \mu(t))} \right)^2 + \frac{\psi(p(t), p_1(t))}{1 - \psi_0(\lambda(t), \mu(t))} \right] \frac{\lambda_1(t)}{1 - \psi_0(\lambda(t), \mu(t))} \right\} g_2(t).$$
Let us introduce the parameter r * by
$$r^* = \min\{c_i\}, \quad i = 1, 2, 3. \tag{4}$$
The parameter r * is established as the radius of convergence for the method, as demonstrated in Theorem 1. This result confirms that r * represents the threshold within which the method always converges.
Clearly, expression (4) implies that for each $t \in [0, r^*)$,
$$0 \leq \psi_0(\lambda(t), \mu(t)) < 1 \tag{5}$$
and
$$0 \leq g_i(t) < 1. \tag{6}$$
Let us denote by B ( x ¯ , r ¯ ) , B [ x ¯ , r ¯ ] the open and closed balls in X , respectively, with center x ¯ X and of radius r ¯ > 0 . We shall use the same symbol · for the norm of linear operators involved as well as that of the elements of the Banach space X to simplify the presentation.
The real functions and the parameter $r^*$ are connected to the divided differences of $F$ appearing in the method (2) as follows.
  • Suppose:
(H1)
There exist a solution $x^* \in D$ of $F(x) = 0$ and an invertible linear operator $M$.
(H2)
$\|M^{-1}([x, y; F] - M)\| \leq \psi_0(\|x - x^*\|, \|y - x^*\|)$ for each $x, y \in D$.
Set $D_0 = D \cap B(x^*, r_0)$.
(H3)
$\|M^{-1}([x, y; F] - [z, x^*; F])\| \leq \psi(\|x - z\|, \|y - x^*\|)$ for each $x, y, z \in D_0$,
and
(H4)
$B[x^*, R] \subset D$ for some $R > 0$ to be given later.
  • Moreover, we consider:
(Q)
All the iterates on the method (2) exist and
$$\|u_n - x^*\| \leq \lambda(\|e_n\|) =: \lambda_n < r^*, \qquad \|M^{-1}F(z_n)\| \leq \lambda_1(\|e_n\|) =: \lambda_n^1,$$
$$\|v_n - x^*\| \leq \mu(\|e_n\|) =: \mu_n < r^*, \qquad \|u_n - x_n\| \leq \mu_1(\|e_n\|) =: \mu_n^1,$$
$$\|w_n - x^*\| \leq \xi(\|e_n\|) =: \xi_n < r^*, \qquad \|s_n - x^*\| \leq \rho(\|e_n\|) =: \rho_n < r^*,$$
$$\|u_n - w_n\| \leq p(\|e_n\|) =: p_n < r^*, \qquad \|v_n - s_n\| \leq p_1(\|e_n\|) =: p_n^1 < r^*,$$
where the functions $\lambda, \mu, \mu_1, \xi, \rho, p, p_1, \lambda_1$ are continuous and nondecreasing, and $\{\lambda_n\}, \{\mu_n\}, \{\mu_n^1\}, \{\xi_n\}, \{\rho_n\}, \{p_n\}, \{p_n^1\}, \{\lambda_n^1\}$ are nonnegative sequences.
  • The functions and sequences appearing in the conditions ( Q ) are specialized later in terms of the conditions ( H 1 ) ( H 4 ) (see Remarks 1 and 2).
The local convergence result for (2) follows from the following conditions.
Theorem 1. 
Under the conditions ($H_1$)–($H_4$) and ($Q$), the sequence $\{x_n\}$ converges to $x^*$, provided that the initial guess $x_0 \in B(x^*, r^*) \setminus \{x^*\}$.
Proof. 
Mathematical induction is employed to demonstrate the validity of the following items:
$$\{x_n\} \subset B(x^*, r^*), \tag{7}$$
$$\|y_n - x^*\| \leq g_1(\|e_n\|)\|e_n\| \leq \|e_n\| < r^*, \tag{8}$$
$$\|z_n - x^*\| \leq g_2(\|e_n\|)\|e_n\| \leq \|e_n\|, \tag{9}$$
and
$$\|x_{n+1} - x^*\| \leq g_3(\|e_n\|)\|e_n\| \leq \|e_n\|, \tag{10}$$
where the parameter $r^*$ is defined by (4).
  • By hypothesis, x 0 B ( x * , r * ) { x * } . Then, the conditions ( H 1 ) , ( H 2 ) and (4) give in turn
$$\|M^{-1}([u_0, v_0; F] - M)\| \leq \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|) \leq \psi_0(r^*, r^*) < 1. \tag{11}$$
The estimate (11) and the standard lemma on invertible operators due to Banach [2,4,5,6,8,14,19] give that the linear operator A 0 is invertible and
$$\|A_0^{-1}M\| \leq \frac{1}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)}. \tag{12}$$
The first substep of the method (2) implies
$$y_0 - x^* = e_0 - A_0^{-1}F(x_0) = A_0^{-1}\big([u_0, v_0; F] - [x_0, x^*; F]\big)(e_0). \tag{13}$$
It follows in turn by (4), (6), ( H 3 ) , (12) and (13),
$$\|y_0 - x^*\| \leq \frac{\psi(\|u_0 - x_0\|, \|v_0 - x^*\|)\,\|e_0\|}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} \leq g_1(\|e_0\|)\|e_0\| \leq \|e_0\| < r^*.$$
So, the iterate $y_0 \in B(x^*, r^*)$, and the assertion (8) holds for $n = 0$.
The second substep of the method (2) gives, as previously,
$$z_0 - x^* = y_0 - x^* - A_0^{-1}F(y_0) = A_0^{-1}\big(A_0 - [y_0, x^*; F]\big)(y_0 - x^*),$$
so
$$\|z_0 - x^*\| \leq \frac{\psi(\|u_0 - y_0\|, \|v_0 - x^*\|)\,\|y_0 - x^*\|}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} \leq \frac{\psi(\|u_0 - x^*\| + \|y_0 - x^*\|, \|v_0 - x^*\|)\,\|y_0 - x^*\|}{1 - \psi_0(\lambda(\|e_0\|), \mu(\|e_0\|))} \leq \frac{\psi(\lambda_0 + g_1(\|e_0\|)\|e_0\|, \mu_0)\,\|y_0 - x^*\|}{1 - \psi_0(\lambda(\|e_0\|), \mu(\|e_0\|))} \leq g_2(\|e_0\|)\|e_0\| \leq \|e_0\|.$$
Hence, the iterate $z_0 \in B(x^*, r^*)$, and the assertion (9) holds for $n = 0$.
Next, the third substep of the method (2) gives
$$x_1 - x^* = z_0 - x^* - \left[3I + A_0^{-1}B_0\left(-3I + A_0^{-1}B_0\right)\right]A_0^{-1}F(z_0)$$
$$= z_0 - x^* - A_0^{-1}F(z_0) - \left[2I - 3A_0^{-1}B_0 + \left(A_0^{-1}B_0\right)^2\right]A_0^{-1}F(z_0)$$
$$= z_0 - x^* - A_0^{-1}F(z_0) - \left[\left(A_0^{-1}B_0 - I\right)^2 - \left(A_0^{-1}B_0 - I\right)\right]A_0^{-1}F(z_0).$$
The following estimate is needed:
$$\|A_0^{-1}B_0 - I\| = \|A_0^{-1}(B_0 - A_0)\| \leq \frac{\psi(\|u_0 - w_0\|, \|v_0 - s_0\|)}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)}, \tag{15}$$
so by (14) and (15),
$$\|x_1 - x^*\| \leq \left\{ \frac{\psi(\|u_0 - z_0\|, \|v_0 - x^*\|)}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} + \left[ \left( \frac{\psi(\|u_0 - w_0\|, \|v_0 - s_0\|)}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} \right)^2 + \frac{\psi(\|u_0 - w_0\|, \|v_0 - s_0\|)}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} \right] \frac{\lambda_0^1}{1 - \psi_0(\|u_0 - x^*\|, \|v_0 - x^*\|)} \right\} \|z_0 - x^*\| \leq g_3(\|e_0\|)\|e_0\| \leq \|e_0\|,$$
where we also used the conditions ( Q ) ,
$$\|u_0 - z_0\| \leq \|u_0 - x^*\| + \|z_0 - x^*\|$$
and
$$\|M^{-1}F(z_0)\| = \|M^{-1}[z_0, x^*; F](z_0 - x^*)\| \leq \lambda_0^1.$$
Hence, the iterate $x_1 \in B(x^*, r^*)$ and the assertion (10) holds for $n = 0$. The proof of the assertions (7)–(10) is completed by replacing $x_0, y_0, z_0, x_1$ with $x_m, y_m, z_m, x_{m+1}$ in the preceding estimates. Moreover, for $c = g_3(\|e_0\|) \in [0, 1)$, we have
$$\|x_{m+1} - x^*\| \leq c\|x_m - x^*\| < r^*,$$
so the iterate $x_{m+1} \in B(x^*, r^*)$ and $\lim_{m \to \infty} x_m = x^*$. □
Proposition 1. 
Suppose that there is a solution $\bar{y} \in B(x^*, \tau_1)$ of $F(x) = 0$ for some $\tau_1 > 0$, and that
$$\|M^{-1}([x^*, \bar{y}; F] - M)\| \leq \bar{\psi}_0(\|\bar{y} - x^*\|), \tag{16}$$
where $\bar{\psi}_0 : [0, +\infty) \to \mathbb{R}$ is a continuous and nondecreasing function, and there exists $\tau_2 \geq \tau_1$ such that
$$\bar{\psi}_0(\tau_2) < 1. \tag{17}$$
Let $D_1 = D \cap B[x^*, \tau_2]$. Then, $x^*$ is the only solution of $F(x) = 0$ in $D_1$.
Proof. 
We consider the linear operator $T = [\bar{y}, x^*; F]$ for $\bar{y} \neq x^*$. By applying (16) and (17), we have
$$\|M^{-1}(T - M)\| \leq \bar{\psi}_0(\|\bar{y} - x^*\|) \leq \bar{\psi}_0(\tau_2) < 1.$$
Thus, the inverse of $T$ exists and
$$\bar{y} - x^* = T^{-1}\big(F(\bar{y}) - F(x^*)\big) = T^{-1}(0) = 0.$$
Hence, we obtain y ¯ = x * . □
Remark 1. 
The LAC of the method (3) is analyzed in an analogous way in two interesting cases. Let L be any invertible linear operator.
(I) 
Let $L_n = L$. Suppose that the hypotheses ($H_1$)–($H_4$) and ($Q$) hold, but with $a = -\delta L$ and $b = -\gamma L$, and that the functions $\bar{g}_1$ and $\bar{g}_2$ are $\bar{g}_1(t) = g_1(t)$ and $\bar{g}_2(t) = g_2(t)$. In order to define the corresponding function $\bar{g}_3$, notice in turn the calculation
$$x_{n+1} - x^* = z_n - x^* - K_n^{-1}F(z_n) - \left[(3 - 2q)\big(K_n^{-1}B_n - I\big) + (q - 2)\big(K_n^{-1}B_n - I\big)^2\right]K_n^{-1}F(z_n)$$
leading to
$$\|x_{n+1} - x^*\| \leq \left[ \frac{\psi(\|u_n - z_n\|, \|v_n - x^*\|)}{1 - \psi_0(\|u_n - x^*\|, \|v_n - x^*\|)} + \left( |q - 2|\left( \frac{\psi(\|u_n - w_n\|, \|v_n - s_n\|)}{1 - \psi_0(\|u_n - x^*\|, \|v_n - x^*\|)} \right)^2 + |3 - 2q|\frac{\psi(\|u_n - w_n\|, \|v_n - s_n\|)}{1 - \psi_0(\|u_n - x^*\|, \|v_n - x^*\|)} \right) \frac{\lambda_n^1}{1 - \psi_0(\|u_n - x^*\|, \|v_n - x^*\|)} \right] \|z_n - x^*\|.$$
Therefore, we can choose
$$\bar{g}_3(t) = \left\{ \frac{\psi(\mu_1(t) + g_2(t)t, \mu(t))}{1 - \psi_0(\lambda(t), \mu(t))} + \left[ |q - 2|\left( \frac{\psi(p(t), p_1(t))}{1 - \psi_0(\lambda(t), \mu(t))} \right)^2 + |3 - 2q|\frac{\psi(p(t), p_1(t))}{1 - \psi_0(\lambda(t), \mu(t))} \right] \frac{\lambda_1(t)}{1 - \psi_0(\lambda(t), \mu(t))} \right\} \bar{g}_2(t).$$
Let us assume that the equation $\bar{g}_3(t) - 1 = 0$ has an SS $\bar{c}_3 \in (0, r_0)$.
(II) 
Let $L_n = [x_{n-1}, x_n; F]^{-1}$ for each $n = 0, 1, 2, \ldots$ Suppose that there exists $\tilde{\gamma} > 0$ such that
$$\|M^{-1}\| \leq \tilde{\gamma}$$
and
$$\psi_0(r^*, r^*) < 1.$$
Then, notice that
$$\|\delta[x_{n-1}, x_n; F]^{-1}\| \leq \|\delta[x_{n-1}, x_n; F]^{-1}M\|\,\|M^{-1}\| \leq \frac{|\delta|\tilde{\gamma}}{1 - \psi_0(\|x_{n-1} - x^*\|, \|x_n - x^*\|)} \leq \frac{|\delta|\tilde{\gamma}}{1 - \psi_0(r^*, r^*)} = a,$$
and similarly,
$$\|\gamma[x_{n-1}, x_n; F]^{-1}\| \leq \|\gamma[x_{n-1}, x_n; F]^{-1}M\|\,\|M^{-1}\| \leq \frac{|\gamma|\tilde{\gamma}}{1 - \psi_0(r^*, r^*)} = b.$$
Under these choices of $a$, $b$ and $L_n$, the conclusions of Theorem 1 hold for the method (3).
Remark 2. 
The conditions ( Q ) shall be dropped by expressing the scalar sequences in terms of the conditions ( H 1 ) ( H 4 ) and
$$\|I - aM\| \leq a_0, \quad \|I + bM\| \leq b_0, \quad \|M\| \leq a_1, \quad \|I - cM\| \leq c_0, \quad \|I + dM\| \leq d_0,$$
$$\|M^{-1}([x, x^*; F] - M)\| \leq \psi_1(\|x - x^*\|)$$
for each $x \in D_0$, where $a_0, b_0, a_1, c_0, d_0$ are nonnegative parameters and $\psi_1 : [0, r_0) \to \mathbb{R}$ is a continuous and nondecreasing function.
We first determine $\lambda_n$ and $\lambda$:
$$u_n - x^* = e_n - aF(x_n) = e_n - a[x_n, x^*; F](e_n) = \big(I - a[x_n, x^*; F]\big)(e_n),$$
but
$$I - a[x_n, x^*; F] = I - aM\,M^{-1}\big([x_n, x^*; F] - M + M\big) = I - aM\big(M^{-1}([x_n, x^*; F] - M) + I\big) = I - aM - aM\,M^{-1}\big([x_n, x^*; F] - M\big),$$
So, we have
$$\big\|\big(I - a[x_n, x^*; F]\big)(e_n)\big\| \leq \big(\|I - aM\| + |a|\,a_1\psi_1(\|e_n\|)\big)\|e_n\| = \lambda_n.$$
Consequently, we obtain
$$\lambda(t) = \big(a_0 + |a|\,a_1\psi_1(t)\big)t, \qquad \mu(t) = \big(b_0 + |b|\,a_1\psi_1(t)\big)t,$$
$$u_n - x_n = -aF(x_n) = -a[x_n, x^*; F](e_n) = -aM\big(M^{-1}([x_n, x^*; F] - M) + I\big)(e_n) = \big({-aM} - aM\,M^{-1}([x_n, x^*; F] - M)\big)(e_n).$$
Thus,
$$\|u_n - x_n\| \leq \big(a_1|a| + a_1|a|\psi_1(\|e_n\|)\big)\|e_n\| = \mu_n^1,$$
and
$$\mu_1(t) = a_1|a|\big(1 + \psi_1(t)\big)t,$$
$$F(z_n) = F(z_n) - F(x^*) = [z_n, x^*; F](z_n - x^*) = M\big(M^{-1}([z_n, x^*; F] - M) + I\big)(z_n - x^*),$$
so
$$\|M^{-1}F(z_n)\| \leq \big(1 + \psi_1(\|z_n - x^*\|)\big)\|z_n - x^*\| \leq \big(1 + \psi_1(g_2(\|e_n\|)\|e_n\|)\big)g_2(\|e_n\|)\|e_n\| = \lambda_n^1,$$
and
$$\lambda_1(t) = \big(1 + \psi_1(g_2(t)t)\big)g_2(t)t, \qquad w_n - x^* = z_n - x^* - cF(z_n) = \big(I - c[z_n, x^*; F]\big)(z_n - x^*),$$
but
$$I - c[z_n, x^*; F] = I - cM - cM\,M^{-1}\big([z_n, x^*; F] - M\big).$$
Thus, we obtain
$$\|w_n - x^*\| \leq \big(\|I - cM\| + a_1|c|\,\psi_1(\|z_n - x^*\|)\big)\|z_n - x^*\| \leq \big(c_0 + a_1|c|\,\psi_1(g_2(\|e_n\|)\|e_n\|)\big)g_2(\|e_n\|)\|e_n\| = \xi_n.$$
In addition, we have
$$\xi(t) = \big(c_0 + a_1|c|\,\psi_1(g_2(t)t)\big)g_2(t)t,$$
similarly,
$$\|s_n - x^*\| \leq \big(\|I + dM\| + a_1|d|\,\psi_1(\|z_n - x^*\|)\big)\|z_n - x^*\| = \rho_n$$
and
$$\rho(t) = \big(d_0 + a_1|d|\,\psi_1(g_2(t)t)\big)g_2(t)t, \qquad u_n - w_n = (u_n - x^*) + (x^* - w_n),$$
$$\|u_n - w_n\| \leq \|u_n - x^*\| + \|w_n - x^*\| \leq \mu_n^1 + \xi_n = p_n,$$
so
$$p(t) = \mu_1(t) + \xi(t).$$
In a similar way, we obtain
$$\|v_n - s_n\| \leq \|v_n - x^*\| + \|s_n - x^*\| \leq \mu_n + \rho_n = p_n^1.$$
Consequently, we set
$$p_1(t) = \mu(t) + \rho(t).$$
In view of these calculations, the parameter R appearing in condition ( H 4 ) is defined by
$$R = \max\{r^*, \lambda(r^*), \mu(r^*), \xi(r^*), \rho(r^*)\}.$$
(III) 
Possible choices for the linear operator L n can be
$L_n = [x_n, x_{n-1}; F]^{-1}$ (secant [2,4,5,7]);
$L_n = [2x_n - x_{n-1}, x_{n-1}; F]^{-1}$ (Kurchatov [8,10,19]);
$L_n = [x_n, y_{n-1}; F]^{-1}$;
$L_n = [2x_n - y_{n-1}, y_{n-1}; F]^{-1}$ (Kurchatov-type [8,10,19]).
Moreover, for M, we can choose M = I or M = F ( x * ) or M = F ( x 0 ) . Other choices exist [4,5,14].

3. Semi-Local Convergence

Sequences are developed that are shown to be majorizing for the sequence $\{x_n\}$ generated by Formula (2). Let $\lambda, \mu, \phi_0$ be functions such that $\phi_0 : [0, +\infty) \times [0, +\infty) \to \mathbb{R}$ is continuous and nondecreasing and $\lambda, \mu : [0, +\infty) \to \mathbb{R}$ are continuous.
Suppose:
  • ($C_0$) The equation $\phi_0(\lambda(t), \mu(t)) - 1 = 0$ has a positive SS, say $R_0$. Define the scalar sequence $\{\alpha_n\}$, where $\{\delta_n\}, \{\sigma_n\}, \{\lambda_n\}, \{\mu_n\}, \{p_n\}, \{q_n\}$ are given nonnegative sequences, with $\alpha_0 = 0$, $\beta_0 \geq 0$ and for each $n = 0, 1, 2, \ldots$ by
$$\gamma_n = \beta_n + \frac{\phi(\delta_n, \sigma_n)(\beta_n - \alpha_n)}{1 - \phi_0(\lambda_n, \mu_n)}, \qquad e_n = \frac{\phi(p_n, q_n)}{1 - \phi_0(\lambda_n, \mu_n)},$$
$$t_n = \big(1 + \phi_0(\gamma_n, \beta_n)\big)(\gamma_n - \beta_n) + \phi(\delta_n, \sigma_n)(\beta_n - \alpha_n),$$
$$\alpha_{n+1} = \gamma_n + \frac{(1 + e_n + e_n^2)\,t_n}{1 - \phi_0(\lambda_n, \mu_n)},$$
$$l_{n+1} = \phi(\alpha_{n+1} - \beta_n + \delta_n, \sigma_n)(\alpha_{n+1} - \alpha_n) + \big(1 + \phi_0(\lambda_n, \mu_n)\big)(\alpha_{n+1} - \beta_n),$$
$$\beta_{n+1} = \alpha_{n+1} + \frac{l_{n+1}}{1 - \phi_0(\lambda_{n+1}, \mu_{n+1})}. \tag{18}$$
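As a purely illustrative check of the recursion (18), one can compute $\{\alpha_n\}$ for simple hypothetical choices of $\phi_0$, $\phi$ and of the input sequences. These choices are not from the paper; they merely satisfy the nonnegativity and $\phi_0 < 1$ requirements:

```python
# Illustrative computation of the majorizing sequence (18) for hypothetical
# choices: phi0(s, t) = 0.1 (s + t), phi(s, t) = 0.2 (s + t),
# lambda_n = mu_n = alpha_n, and delta_n = sigma_n = p_n = q_n = beta_n - alpha_n.
phi0 = lambda s, t: 0.1 * (s + t)
phi = lambda s, t: 0.2 * (s + t)

alpha, beta = 0.0, 0.1              # alpha_0 = 0, beta_0 >= 0
history = [alpha]
for n in range(30):
    lam = mu = alpha                # lambda_n = lambda(alpha_n), mu_n = mu(alpha_n)
    delta = sigma = p = q = beta - alpha
    denom = 1.0 - phi0(lam, mu)
    gamma = beta + phi(delta, sigma) * (beta - alpha) / denom
    e = phi(p, q) / denom
    t = (1.0 + phi0(gamma, beta)) * (gamma - beta) + phi(delta, sigma) * (beta - alpha)
    alpha_next = gamma + (1.0 + e + e * e) * t / denom
    l_next = (phi(alpha_next - beta + delta, sigma) * (alpha_next - alpha)
              + (1.0 + phi0(lam, mu)) * (alpha_next - beta))
    beta_next = alpha_next + l_next / (1.0 - phi0(alpha_next, alpha_next))
    assert 0.0 <= alpha <= beta <= gamma <= alpha_next   # monotonicity, cf. (20)
    alpha, beta = alpha_next, beta_next
    history.append(alpha)
```

For these sample inputs, the sequence increases monotonically and settles quickly, in line with the convergence behavior that Lemma 1 establishes under the conditions (19).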
A convergence result for the sequence $\{\alpha_n\}$ is essential in order to establish its behavior and stability.
Lemma 1. 
Suppose that there exists $\bar{R} \in [0, R_0]$ such that, with $\lambda(\alpha_n) = \lambda_n$, $\mu(\alpha_n) = \mu_n$ and for each $n = 0, 1, 2, \ldots$,
$$\phi_0(\lambda_n, \mu_n) < 1 \quad \text{and} \quad \alpha_n \leq \bar{R}. \tag{19}$$
Then, we have
$$0 \leq \alpha_n \leq \beta_n \leq \gamma_n \leq \alpha_{n+1} \leq \bar{R}, \tag{20}$$
and there exists R * [ 0 , R ¯ ] so that lim n α n = R * .
Proof. 
The conditions (19) and Formula (18) imply the assertion (20) by induction, from which the rest of the proof follows. □
The functions ϕ 0 , ϕ and the parameter R 0 relate to the method (2) as follows:
  • Suppose:
(C1)
There exist a point $x_0 \in D$ and an invertible linear operator $M$ such that
$$\|A_0^{-1}F(x_0)\| \leq \beta_0.$$
(C2)
$\|M^{-1}([x, y; F] - M)\| \leq \phi_0(\|x - x_0\|, \|y - x_0\|)$ for each $x, y \in D$.
It follows that the linear operator $A_0$ is invertible, since $\|M^{-1}(A_0 - M)\| \leq \phi_0(\|u_0 - x_0\|, \|v_0 - x_0\|) < 1$. Hence, we can choose $\beta_0$ with $\|A_0^{-1}F(x_0)\| \leq \beta_0$.
Set $D_0 = D \cap B(x_0, R_0)$.
(C3)
$\|M^{-1}([x, y; F] - [z, w; F])\| \leq \phi(\|x - z\|, \|y - w\|)$ for each $x, y, z, w \in D_0$.
(C4)
$B[x_0, \bar{R}] \subset D$, where the parameter $\bar{R} \geq R^*$ is specified later.
The following conditions are imposed on the iterates { x n } .
(E)
The iterates { x n } exist and satisfy for each n = 0 , 1 , 2 , , { x n } D ,
$$\|u_n - x_0\| \leq \bar{\lambda}_n \leq \lambda_n < R^*, \qquad \|v_n - x_0\| \leq \bar{\mu}_n \leq \mu_n < R^*,$$
$$\|w_n - x_0\| \leq \bar{\xi}_n \leq \xi_n < R^*, \qquad \|s_n - x_0\| \leq \bar{\rho}_n < R^*,$$
$$\|y_n - u_n\| \leq \bar{\delta}_n \leq \delta_n, \qquad \|v_n - x_n\| \leq \bar{\sigma}_n \leq \sigma_n,$$
$$\|w_n - u_n\| \leq \bar{p}_n \leq p_n, \qquad \|s_n - v_n\| \leq \bar{q}_n \leq q_n,$$
where $\{\bar{p}_n\}, \{\bar{\sigma}_n\}, \{\bar{\delta}_n\}, \{\bar{q}_n\}, \{\bar{\lambda}_n\}, \{\bar{\mu}_n\}, \{\bar{\xi}_n\}, \{\bar{\rho}_n\}$ are nonnegative sequences. In addition, $\phi_0$, $\phi$ and $R_0$ depend on the iterates of the method, and $\{\lambda_n\}, \{\mu_n\}, \{\xi_n\}, \{\rho_n\}, \{\delta_n\}, \{\sigma_n\}, \{p_n\}, \{q_n\}$ are nonnegative sequences that will later be expressed in terms of the conditions ($C_0$)–($C_4$).
  • The main semi-local result for the method (2) follows.
Theorem 2. 
Suppose that the conditions ($C_0$)–($C_4$) and ($E$) hold. Then, there exists a solution $x^* \in B[x_0, R^*]$ of the equation $F(x) = 0$. Furthermore, the following assertions hold:
$$\|y_n - x_n\| \leq \beta_n - \alpha_n, \tag{21}$$
$$\|z_n - y_n\| \leq \gamma_n - \beta_n, \tag{22}$$
$$\|x_{n+1} - z_n\| \leq \alpha_{n+1} - \gamma_n. \tag{23}$$
Proof. 
The assertions (21)–(23) are established by induction. The assertions (21) hold if n = 0 by (18), (2) and ( C 1 ) , since
$$\|y_0 - x_0\| = \|A_0^{-1}F(x_0)\| \leq \beta_0 = \beta_0 - \alpha_0 < R^*.$$
Thus, the iterate $y_0 \in B(x_0, R^*)$. Notice that
$$\|M^{-1}(A_n - M)\| \leq \phi_0(\|u_n - x_0\|, \|v_n - x_0\|) \leq \phi_0(\lambda_n, \mu_n) < 1, \tag{24}$$
so
$$\|A_n^{-1}M\| \leq \frac{1}{1 - \phi_0(\lambda_n, \mu_n)}, \tag{25}$$
and iterates y n , z n , x n + 1 are well defined.
Then, from the second substep of the iterative scheme (2), we obtain
$$z_n - y_n = -A_n^{-1}F(y_n) = -A_n^{-1}\big(F(y_n) - F(x_n) - A_n(y_n - x_n)\big) = -A_n^{-1}\big([y_n, x_n; F] - A_n\big)(y_n - x_n).$$
So, by the conditions ( C 3 ) , ( E ) , (24) and (25), we have
$$\|z_n - y_n\| \leq \frac{\phi(\|y_n - u_n\|, \|x_n - v_n\|)\,\|y_n - x_n\|}{1 - \phi_0(\|u_n - x_0\|, \|v_n - x_0\|)} \leq \frac{\phi(\bar{\delta}_n, \bar{\sigma}_n)\,\|y_n - x_n\|}{1 - \phi_0(\bar{\lambda}_n, \bar{\mu}_n)} \leq \frac{\phi(\delta_n, \sigma_n)(\beta_n - \alpha_n)}{1 - \phi_0(\lambda_n, \mu_n)} = \gamma_n - \beta_n$$
and
$$\|z_n - x_0\| \leq \|z_n - y_n\| + \|y_n - x_0\| \leq \gamma_n - \beta_n + \beta_n - \alpha_0 = \gamma_n < R^*.$$
Thus, the iterate $z_n \in B(x_0, R^*)$ and the assertion (22) holds.
Similarly, from the third substep of the method (2), we obtain in turn, as in the local case,
$$x_{n+1} - z_n = -\big(3I + A_n^{-1}B_n(-3I + A_n^{-1}B_n)\big)A_n^{-1}F(z_n) = -\big((A_n^{-1}B_n - I)^2 - (A_n^{-1}B_n - I) + I\big)A_n^{-1}F(z_n),$$
so
$$\|x_{n+1} - z_n\| \leq \frac{(1 + \bar{e}_n + \bar{e}_n^2)\,\bar{t}_n}{1 - \phi_0(\lambda_n, \mu_n)} \leq \frac{(1 + e_n + e_n^2)\,t_n}{1 - \phi_0(\lambda_n, \mu_n)} = \alpha_{n+1} - \gamma_n$$
and
$$\|x_{n+1} - x_0\| \leq \|x_{n+1} - z_n\| + \|z_n - x_0\| \leq \alpha_{n+1} - \gamma_n + \gamma_n - \alpha_0 = \alpha_{n+1} < R^*,$$
where we used
$$F(z_n) = F(z_n) - F(y_n) + F(y_n) = [z_n, y_n; F](z_n - y_n) + F(y_n) - F(x_n) - A_n(y_n - x_n) = [z_n, y_n; F](z_n - y_n) + \big([y_n, x_n; F] - A_n\big)(y_n - x_n),$$
$$\|M^{-1}F(z_n)\| \leq \big(1 + \phi_0(\|z_n - x_0\|, \|y_n - x_0\|)\big)\|z_n - y_n\| + \phi(\|y_n - u_n\|, \|x_n - v_n\|)\|y_n - x_n\| = \bar{t}_n \leq \big(1 + \phi_0(\gamma_n, \beta_n)\big)(\gamma_n - \beta_n) + \phi(\delta_n, \sigma_n)(\beta_n - \alpha_n) = t_n$$
and
$$\|A_n^{-1}B_n - I\| = \|A_n^{-1}(B_n - A_n)\| = \big\|A_n^{-1}\big([w_n, s_n; F] - [u_n, v_n; F]\big)\big\| \leq \frac{\phi(\|w_n - u_n\|, \|s_n - v_n\|)}{1 - \phi_0(\lambda_n, \mu_n)} = \bar{e}_n \leq \frac{\phi(p_n, q_n)}{1 - \phi_0(\lambda_n, \mu_n)} = e_n.$$
Therefore, the iterate $x_{n+1} \in B(x_0, R^*)$ and the assertion (23) holds.
Using the first substep of method (2), we can express
$$F(x_{n+1}) = F(x_{n+1}) - F(x_n) - A_n(y_n - x_n) = \big([x_{n+1}, x_n; F] - A_n\big)(x_{n+1} - x_n) + A_n(x_{n+1} - y_n)$$
leading to
$$\|M^{-1}F(x_{n+1})\| \leq \phi(\|x_{n+1} - u_n\|, \|x_n - v_n\|)\|x_{n+1} - x_n\| + \big(1 + \phi_0(\|u_n - x_0\|, \|v_n - x_0\|)\big)\|x_{n+1} - y_n\| = \bar{l}_{n+1}$$
$$\leq \phi(\|x_{n+1} - y_n\| + \|y_n - u_n\|, \|x_n - v_n\|)\|x_{n+1} - x_n\| + \big(1 + \phi_0(\lambda_n, \mu_n)\big)(\alpha_{n+1} - \beta_n)$$
$$\leq \phi(\alpha_{n+1} - \beta_n + \delta_n, \sigma_n)(\alpha_{n+1} - \alpha_n) + \big(1 + \phi_0(\lambda_n, \mu_n)\big)(\alpha_{n+1} - \beta_n) = l_{n+1}. \tag{28}$$
Hence, we have in turn
$$\|y_{n+1} - x_{n+1}\| \leq \|A_{n+1}^{-1}M\|\,\|M^{-1}F(x_{n+1})\| \leq \frac{\bar{l}_{n+1}}{1 - \phi_0(\lambda_{n+1}, \mu_{n+1})} \leq \frac{l_{n+1}}{1 - \phi_0(\lambda_{n+1}, \mu_{n+1})} = \beta_{n+1} - \alpha_{n+1}$$
and
$$\|y_{n+1} - x_0\| \leq \|y_{n+1} - x_{n+1}\| + \|x_{n+1} - x_0\| \leq \beta_{n+1} - \alpha_{n+1} + \alpha_{n+1} - \alpha_0 = \beta_{n+1} < R^*.$$
This completes the induction for the assertions (21)–(23), and the sequence $\{x_n\}$ remains in $B(x_0, R^*)$. It follows that $\{x_n\}$ is a Cauchy sequence in the Banach space $X$, and as such it converges to some $x^* \in B[x_0, R^*]$. Notice that $\lim_{n \to \infty} l_{n+1} = 0$. Hence, we obtain $F(x^*) = 0$ by (28) and the continuity of $F$.
Finally, from the estimate
$$\|x_{n+m} - x_n\| \leq \alpha_{n+m} - \alpha_n,$$
we obtain
$$\|x^* - x_n\| \leq R^* - \alpha_n$$
by letting $m \to +\infty$. □
Next, the region of uniqueness for the solution is identified.
Proposition 2. 
Assume the following:
  • There exists a solution $\bar{z} \in B(x_0, k_3)$ of $F(x) = 0$ for some $k_3 > 0$;
  • The condition ($C_2$) is satisfied within the ball $B(x_0, k_3)$;
  • There exists a constant $k_4 \geq k_3$ such that
$$\phi_0(k_3, k_4) < 1. \tag{31}$$
Set $D_2 = D \cap B[x_0, k_4]$.
Then, $\bar{z}$ is the only solution of the equation $F(x) = 0$ in $D_2$.
Proof. 
Let $z^* \in D_2$ with $F(z^*) = 0$ and $z^* \neq \bar{z}$. Define the linear operator $M_1 = [\bar{z}, z^*; F]$. In view of the condition ($C_2$) and (31), one obtains
$$\|M^{-1}(M_1 - M)\| \leq \phi_0(\|\bar{z} - x_0\|, \|z^* - x_0\|) \leq \phi_0(k_3, k_4) < 1.$$
Thus, $M_1$ is invertible, and from $M_1(\bar{z} - z^*) = F(\bar{z}) - F(z^*) = 0$ we deduce that $\bar{z} = z^*$. □
Remark 3. 
Under the conditions of Theorem 2, set $z^* = x^*$ and $k_3 = R^*$.
Remark 4. 
Replace the conditions ($E$) by ($E'$), given by
(E′)
$$\|I - aM\| \leq \bar{\alpha}, \quad \|F(x_0)\| \leq \beta, \quad \|M\| \leq \bar{\gamma}, \quad \|I + bM\| \leq h, \quad \|I - cM\| \leq \mu,$$
$$\|I + dM\| \leq h, \quad \|I - (a + c)M\| \leq l \quad \text{and} \quad \|I + (b + d)M\| \leq s.$$
Next, the scalar sequences appearing in the condition ( E ) are specialized. The following calculations are required:
$$u_n - x_0 = x_n - x_0 - a\big(F(x_n) - F(x_0)\big) - aF(x_0) = \big(I - a[x_n, x_0; F]\big)(x_n - x_0) - aF(x_0),$$
so
$$\|u_n - x_0\| \leq \|I - a[x_n, x_0; F]\|\,\|x_n - x_0\| + |a|\,\|F(x_0)\| \leq \big(\bar{\alpha} + |a|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| + |a|\beta := \bar{\lambda}_n,$$
and thus,
$$\lambda_n := \big(\bar{\alpha} + |a|\bar{\gamma}\,\phi_0(0, \alpha_n)\big)\alpha_n + |a|\beta.$$
In a similar way, we have
$$v_n - x_0 = \big(I + b[x_n, x_0; F]\big)(x_n - x_0) + bF(x_0),$$
$$\|v_n - x_0\| \leq \big(\|I + bM\| + |b|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| + |b|\,\|F(x_0)\| \leq \big(h + |b|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| + |b|\beta = \bar{\mu}_n,$$
so, we obtain
$$\mu_n = \big(h + |b|\bar{\gamma}\,\phi_0(0, \alpha_n)\big)\alpha_n + |b|\beta;$$
$$v_n - x_n = bF(x_n) = b\big(F(x_n) - F(x_0)\big) + bF(x_0) = b[x_n, x_0; F](x_n - x_0) + bF(x_0) = bM\Big(M^{-1}\big([x_n, x_0; F] - M\big) + I\Big)(x_n - x_0) + bF(x_0),$$
$$\|v_n - x_n\| \leq |b|\beta + \big(|b|\bar{\gamma} + |b|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| = \bar{\sigma}_n.$$
Thus, we have
$$\sigma_n = |b|\beta + \big(|b|\bar{\gamma} + |b|\bar{\gamma}\,\phi_0(0, \alpha_n)\big)\alpha_n;$$
$$w_n - u_n = \big(I - (a + c)[z_n, x_n; F]\big)(z_n - x_n) - c[x_n, x_0; F](x_n - x_0) + a[z_n, x_0; F](z_n - x_0) + (a - c)F(x_0)$$
$$= \Big[\big(I - (a + c)M\big) - (a + c)M\,M^{-1}\big([z_n, x_n; F] - M\big)\Big](z_n - x_n) - cM\Big(M^{-1}\big([x_n, x_0; F] - M\big) + I\Big)(x_n - x_0) + aM\Big(M^{-1}\big([z_n, x_0; F] - M\big) + I\Big)(z_n - x_0) + (a - c)F(x_0),$$
which further yields
$$\|w_n - u_n\| \leq \big(l + |a + c|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_n\| + \big(|c|\bar{\gamma} + |c|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| + \big(|a|\bar{\gamma} + |a|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_0\| + |a - c|\,\|F(x_0)\| = \bar{p}_n.$$
Therefore, we obtain
$$p_n = \big(l + |a + c|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)(\gamma_n - \alpha_n) + \big(|c|\bar{\gamma} + |c|\bar{\gamma}\,\phi_0(0, \alpha_n)\big)\alpha_n + \big(|a|\bar{\gamma} + |a|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)\gamma_n + |a - c|\beta;$$
$$s_n - v_n = \Big[\big(I + (b + d)M\big) + (b + d)M\,M^{-1}\big([z_n, x_n; F] - M\big)\Big](z_n - x_n) + dM\Big(M^{-1}\big([x_n, x_0; F] - M\big) + I\Big)(x_n - x_0) - bM\Big(M^{-1}\big([z_n, x_0; F] - M\big) + I\Big)(z_n - x_0) + (d - b)F(x_0).$$
In the similar fashion, we have
$$w_n - x_0 = \big(I - cM - cM\,M^{-1}([z_n, x_0; F] - M)\big)(z_n - x_0) - cF(x_0),$$
so
$$\|w_n - x_0\| \leq \big(\mu + |c|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_0\| + |c|\beta = \bar{\xi}_n,$$
so
$$\xi_n = \big(\mu + |c|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)\gamma_n + |c|\beta.$$
Moreover, we obtain
$$\|s_n - x_0\| \leq \big(\|I + dM\| + |d|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_0\| + |d|\beta = \bar{\rho}_n$$
and
$$\rho_n = \big(h + |d|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)\gamma_n + |d|\beta,$$
$$y_n - u_n = y_n - x_n + aF(x_n) = y_n - x_n - aA_n(y_n - x_n) = (I - aA_n)(y_n - x_n) = \big[(I - aM) + (I - aM)M^{-1}(A_n - M) - M^{-1}(A_n - M)\big](y_n - x_n),$$
so
$$\|y_n - u_n\| \leq \big(\bar{\alpha} + \bar{\alpha}\,\phi_0(\|u_n - x_0\|, \|v_n - x_0\|) + \phi_0(\|u_n - x_0\|, \|v_n - x_0\|)\big)\|y_n - x_n\| \leq \big(\bar{\alpha} + \bar{\alpha}\,\phi_0(\bar{\lambda}_n, \bar{\mu}_n) + \phi_0(\bar{\lambda}_n, \bar{\mu}_n)\big)\|y_n - x_n\| = \bar{\delta}_n,$$
so
$$\delta_n = \big(\bar{\alpha} + \bar{\alpha}\,\phi_0(\lambda_n, \mu_n) + \phi_0(\lambda_n, \mu_n)\big)(\beta_n - \alpha_n),$$
$$\|s_n - v_n\| \leq \big(s + |b + d|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_n\| + \big(|d|\bar{\gamma} + |d|\bar{\gamma}\,\phi_0(0, \|x_n - x_0\|)\big)\|x_n - x_0\| + \big(|b|\bar{\gamma} + |b|\bar{\gamma}\,\phi_0(0, \|z_n - x_0\|)\big)\|z_n - x_0\| + |d - b|\beta = \bar{q}_n.$$
Hence, we obtain
$$q_n = \big(s + |b + d|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)(\gamma_n - \alpha_n) + \big(|d|\bar{\gamma} + |d|\bar{\gamma}\,\phi_0(0, \alpha_n)\big)\alpha_n + \big(|b|\bar{\gamma} + |b|\bar{\gamma}\,\phi_0(0, \gamma_n)\big)\gamma_n + |d - b|\beta.$$
Under these choices,
$$\bar{R} = \max\{R^*, \lambda(R^*), \mu(R^*), \xi(R^*), \rho(R^*)\}.$$
Remark 5. 
Regarding the SLAC of the method (3), let us examine the same two cases as in the local analysis. Suppose that $\|M^{-1}\| \leq \tilde{\tilde{\gamma}}$.
(I) 
$L_n = L$. Then, set $a = -\delta L$ and $b = -\gamma L$. This way, we have the following expressions from the third substep of (3):
$$x_{n+1} - z_n = -\left[qI + \bar{K}_n^{-1}B_n\big((3 - 2q)I + (q - 2)\bar{K}_n^{-1}B_n\big)\right]\bar{K}_n^{-1}F(z_n) = -\left[(q - 2)\big(\bar{K}_n^{-1}B_n - I\big)^2 - \big(\bar{K}_n^{-1}B_n - I\big) + I\right]\bar{K}_n^{-1}F(z_n),$$
so
$$\|x_{n+1} - z_n\| \leq \frac{\big(|q - 2|\,e_n^2 + e_n + 1\big)t_n}{1 - \phi_0(\lambda_n, \mu_n)} = \alpha_{n+1} - \gamma_n.$$
The iterates γ n , β n + 1 are the same as in the method (2).
(II) 
L n = [ x n 1 , x n ; F ] 1 . Notice that
$$\|\delta[x_{n-1}, x_n; F]^{-1}\| \leq \|\delta[x_{n-1}, x_n; F]^{-1}M\|\,\|M^{-1}\| \leq \frac{|\delta|\tilde{\tilde{\gamma}}}{1 - \phi_0(\|x_{n-1} - x_0\|, \|x_n - x_0\|)} = \bar{a}_n \leq \frac{|\delta|\tilde{\tilde{\gamma}}}{1 - \phi_0(R, R)} = a,$$
similarly,
$$\|\gamma[x_{n-1}, x_n; F]^{-1}\| \leq \|\gamma[x_{n-1}, x_n; F]^{-1}M\|\,\|M^{-1}\| \leq b,$$
where $b = \dfrac{|\gamma|\tilde{\tilde{\gamma}}}{1 - \phi_0(R, R)}$.
Then, under cases (I) and (II), the conclusions of Theorem 2 hold for the method (3).

4. Numerical Experiments

4.1. Local Area Convergence

In Example 1, we examined the LAC and presented the computational results in Table 1. These findings were derived from a system of differential equations, with a focus on analyzing LAC.

4.2. Semi-Local Area of Convergence

On the other hand, the remaining examples illustrate the SLAC. Table 2 provides the numerical results of the SLAC for the boundary value problem in Example 2, where we have chosen a larger SNE with 60 equations in 60 unknowns. In Example 3, we investigate another applied science problem, namely the Hammerstein integral equation, to demonstrate the applicability and efficacy of method (2). The values of the abscissas $t_j$ and weights $w_j$ are depicted in Table 3 and the numerical outcomes in Table 4. In the last Example 4, we consider an SNE, and the numerical results are shown in Table 5. We compared method (2) with existing sixth-order methods, selecting the following approaches for evaluation: method (8) from Abbasbandy et al. [24], method (14) from Hueso et al. [25], and method (6) from Wang and Li [26]. Finally, we included method (5) proposed by Lotfi et al. [27] in our comparison.
Moreover, we study the computational order of convergence (COC), determined by the formula
$$\eta = \frac{\ln\left(\|x_{\beta+1} - x^*\| / \|x_\beta - x^*\|\right)}{\ln\left(\|x_\beta - x^*\| / \|x_{\beta-1} - x^*\|\right)}, \quad \text{for } \beta = 1, 2, \ldots,$$
or the approximated COC (ACOC) [15,16], given by
$$\eta^* = \frac{\ln\left(\|x_{\beta+1} - x_\beta\| / \|x_\beta - x_{\beta-1}\|\right)}{\ln\left(\|x_\beta - x_{\beta-1}\| / \|x_{\beta-1} - x_{\beta-2}\|\right)}, \quad \text{for } \beta = 2, 3, \ldots$$
The conditions for terminating the program are specified as follows:
(i)
The difference between successive iterations satisfies $\|x_{\beta+1} - x_\beta\| < \epsilon$;
(ii)
The norm of the operator at the current point satisfies $\|F(x_\beta)\| < \epsilon$.
Here, $\epsilon = 10^{-300}$ represents an extremely small tolerance level, ensuring high precision and stability in the computational results. All computations used Mathematica 11 with multi-precision arithmetic; the computer's configuration details are as follows. Device name: HP; Windows 10 Enterprise; OS build: 19045.2006; processor: Intel(R) Core(TM) i7-4790 CPU @ 3.60 GHz; installed RAM: 8.00 GB (7.89 GB usable); system type: 64-bit operating system, x64-based processor; version: 22H2.
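For reference implementations outside Mathematica, the ACOC formula and the two stopping rules above translate directly into code. The following is a minimal sketch (the helper names `acoc` and `stopped` are ours, not from the paper), estimating $\eta^*$ from the last four iterates:

```python
import numpy as np

def acoc(iterates):
    """Approximated computational order of convergence eta* from the
    last four iterates x_{b-2}, x_{b-1}, x_b, x_{b+1} (oldest first)."""
    xm2, xm1, xb, xb1 = (np.asarray(v, dtype=float) for v in iterates[-4:])
    e_new = np.linalg.norm(xb1 - xb)   # ||x_{b+1} - x_b||
    e_mid = np.linalg.norm(xb - xm1)   # ||x_b - x_{b-1}||
    e_old = np.linalg.norm(xm1 - xm2)  # ||x_{b-1} - x_{b-2}||
    return np.log(e_new / e_mid) / np.log(e_mid / e_old)

def stopped(x_new, x_old, F_new, eps=1e-300):
    """Termination tests (i) and (ii): small step or small residual norm.
    (In double precision a realistic eps is ~1e-15; 1e-300 mirrors the
    multi-precision tolerance used in the paper.)"""
    return (np.linalg.norm(x_new - x_old) < eps) or (np.linalg.norm(F_new) < eps)
```

For a quadratically convergent scalar sequence such as $x_k = 10^{-2^k}$, `acoc` returns a value close to 2, as expected.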
Example 1. 
Let us examine the following system of differential equations:
$$l_1'(\lambda) - l_1(\lambda) - 1 = 0, \quad l_2'(\eta) - (e-1)\eta - 1 = 0, \quad l_3'(\theta) - 1 = 0,$$
which model the motion of a particle in three dimensions, with $\lambda, \eta, \theta \in \Gamma$ and $l_1(0) = l_2(0) = l_3(0) = 0$. The solution $v = (\lambda, \eta, \theta)^{tr}$ is then associated with $\bar{l} = (l_1, l_2, l_3): \Gamma \to \mathbb{R}^3$ given as
$$L(v) = \left(e^\lambda - 1,\ \frac{e-1}{2}\eta^2 + \eta,\ \theta\right)^{tr}.$$
By (33), we obtain
$$L'(v) = \begin{pmatrix} e^\lambda & 0 & 0 \\ 0 & (e-1)\eta + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
For $A = L'((0, 0, 0)^{tr}) = I$ and $[y, x; L] = \int_0^1 L'\big(x + \theta(y - x)\big)\,d\theta$, we have
$$\phi_0(s, t) = \frac{1}{2}(e-1)(s+t), \quad \phi(s, t) = \frac{1}{2}\left(e^{\frac{1}{e-1}} s + (e-1) t\right), \quad \phi_1(t) = \frac{1}{2}(e-1)t,$$
$$a_0 = |1 - a|, \quad b_0 = |1 + b|, \quad c_0 = |1 - c| \quad \text{and} \quad d_0 = |1 + d|.$$
Then, we compute $r^*$ for both methods (see Remarks 1 and 2). For the selection of the functions in the condition $(Q)$, the other parameters are given below:
$$a_1 = a = b = c = d = \gamma = \delta = 1, \quad \text{and} \quad q = 3.$$
The radii of convergence of methods (2) and (3) for Example 1 are depicted in Table 1.
Table 1. Distinct convergence radii for Example 1.

| Methods | $c_0$ | $c_1$ | $c_2$ | $c_3$ | $\bar{c}_3$ | $\min\{c_0, c_1, c_2, c_3, \bar{c}_3\}$ |
|---|---|---|---|---|---|---|
| Method (2) | 0.2616 | 0.1536 | 0.1393 | 0.1164 | - | 0.1164 |
| Method (3) | 0.2616 | 0.1536 | 0.1393 | - | 0.1125 | 0.1125 |
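The operator data of Example 1 are easy to check numerically. The sketch below (our own verification code, not part of the paper) builds the divided difference $[y, x; L] = \int_0^1 L'(x + \theta(y - x))\,d\theta$ by Gauss–Legendre quadrature and confirms the secant equation $[y, x; L](y - x) = L(y) - L(x)$:

```python
import numpy as np

E = np.e

def L(v):
    # Example 1 operator L(v) = (e^lam - 1, (e-1)/2 * eta^2 + eta, theta)^tr
    lam, eta, theta = v
    return np.array([np.exp(lam) - 1.0,
                     0.5 * (E - 1.0) * eta**2 + eta,
                     theta])

def Lp(v):
    # Diagonal Jacobian L'(v)
    lam, eta, _ = v
    return np.diag([np.exp(lam), (E - 1.0) * eta + 1.0, 1.0])

def divided_difference(y, x, nquad=12):
    # [y, x; L] = int_0^1 L'(x + s (y - x)) ds, via Gauss-Legendre on [0, 1]
    s, w = np.polynomial.legendre.leggauss(nquad)
    s, w = 0.5 * (s + 1.0), 0.5 * w
    return sum(wi * Lp(x + si * (y - x)) for si, wi in zip(s, w))

x = np.array([0.1, -0.2, 0.3])
y = np.array([0.4, 0.1, -0.2])
D = divided_difference(y, x)
# Secant equation [y, x; L](y - x) = L(y) - L(x) holds for this choice.
assert np.allclose(D @ (y - x), L(y) - L(x), atol=1e-12)
```

Since $L'$ is diagonal, the first diagonal entry of $[y, x; L]$ coincides with the exact one-dimensional divided difference $(e^{\lambda_y} - e^{\lambda_x})/(\lambda_y - \lambda_x)$.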
Example 2. 
Boundary value problems (BVPs) [17] are fundamental in applied science, involving differential equations with conditions imposed at distinct points, often at the boundaries. Such problems model real-world processes in fluid dynamics, nuclear reactors, heat transfer, optimization, and quantum mechanics, leading us to choose the following BVP (for details, see [18]):
$$v'' + \mu^2 (v')^2 + 1 = 0$$
with $v(0) = 0$, $v(1) = 1$. Divide the interval $[0, 1]$ into $\ell$ parts, which further provides
$$\gamma_0 = 0 < \gamma_1 < \gamma_2 < \cdots < \gamma_{\ell-1} < \gamma_\ell = 1, \quad \gamma_{\tau+1} = \gamma_\tau + j, \quad j = \frac{1}{\ell}.$$
Set $v_0 = v(\gamma_0) = 0$, $v_1 = v(\gamma_1), \ldots, v_{\ell-1} = v(\gamma_{\ell-1})$, $v_\ell = v(\gamma_\ell) = 1$. Then, we have
$$v_\tau' = \frac{v_{\tau+1} - v_{\tau-1}}{2j}, \quad v_\tau'' = \frac{v_{\tau-1} - 2v_\tau + v_{\tau+1}}{j^2}, \quad \tau = 1, 2, 3, \ldots, \ell - 1,$$
by discretization. Thus, we obtain a system of $(\ell - 1) \times (\ell - 1)$ equations:
$$v_{\tau-1} - 2v_\tau + v_{\tau+1} + \frac{\mu^2}{4}\left(v_{\tau+1} - v_{\tau-1}\right)^2 + j^2 = 0.$$
For example, with $\ell = 61$ and $\mu = \frac{1}{2}$, we obtain an SNE of 60 equations in 60 variables. The required solution $x^*$ of system (36) is given below:
$x^* = (0.02757, 0.05468, 0.08134, 0.1076, 0.1333, 0.1587, 0.1836, 0.2081, 0.2322, 0.2559, 0.2791, 0.3020, 0.3245, 0.3465, 0.3682, 0.3895, 0.4104, 0.4310, 0.4511, 0.4710, 0.4904, 0.5095, 0.5282, 0.5466, 0.5646, 0.5822, 0.5996, 0.6165, 0.6332, 0.6495, 0.6654, 0.6811, 0.6964, 0.7114, 0.7260, 0.7404, 0.7544, 0.7681, 0.7815, 0.7945, 0.8073, 0.8198, 0.8319, 0.8437, 0.8553, 0.8665, 0.8775, 0.8881, 0.8985, 0.9085, 0.9183, 0.9277, 0.9369, 0.9458, 0.9544, 0.9627, 0.9707, 0.9785, 0.9859, 0.9931)^{tr}$.
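As a cross-check of this discretization, system (36) can be assembled and solved with off-the-shelf tools; the sketch below (our own code, using SciPy's `fsolve` rather than the paper's methods) reproduces the leading components of $x^*$:

```python
import numpy as np
from scipy.optimize import fsolve

def residual(v_inner, ell=61, mu=0.5):
    # Discrete system (36): v_{t-1} - 2 v_t + v_{t+1}
    #   + (mu^2 / 4) (v_{t+1} - v_{t-1})^2 + j^2 = 0, with step j = 1/ell.
    j = 1.0 / ell
    v = np.concatenate(([0.0], v_inner, [1.0]))  # boundary values v(0)=0, v(1)=1
    return (v[:-2] - 2.0 * v[1:-1] + v[2:]
            + (mu**2 / 4.0) * (v[2:] - v[:-2])**2 + j**2)

ell = 61
v0 = np.linspace(0.0, 1.0, ell + 1)[1:-1]   # linear initial guess, 60 unknowns
sol = fsolve(residual, v0)
print(np.round(sol[:3], 5))   # cf. the leading entries 0.02757, 0.05468, 0.08134
```

The computed discrete solution is monotonically increasing between the boundary values, as expected for this concave profile.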
Table 2 provides a detailed overview of the performance for Example 2, including the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between consecutive iterations. These computational results offer valuable insight into the efficiency and accuracy of the required solution, and allow both the computational time and the convergence behavior of method (2) to be compared with the existing methods.
Table 2. Computational results of Example 2.

| Methods | $x_0$ | $\|F(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta^*$ | CPU Time |
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | $(1.1, \ldots, 1.1)^{tr}$ (60 entries) | $1.2 \times 10^{-134}$ | $4.7 \times 10^{-133}$ | 4 | 5.0541 | 135.752 |
| Wang and Li [26] | $(1.1, \ldots, 1.1)^{tr}$ (60 entries) | $2.1 \times 10^{-177}$ | $1.2 \times 10^{-176}$ | 4 | 6.0281 | 127.473 |
| Abbasbandy [24] | $(1.1, \ldots, 1.1)^{tr}$ (60 entries) | $7.6 \times 10^{-168}$ | $4.3 \times 10^{-167}$ | 4 | 6.0293 | 347.9 |
| Hueso et al. [25] | $(1.1, \ldots, 1.1)^{tr}$ (60 entries) | $6.6 \times 10^{-125}$ | $3.9 \times 10^{-124}$ | 4 | 5.0358 | 222.306 |
| Method (2) | $(1.1, \ldots, 1.1)^{tr}$ (60 entries) | $3.2 \times 10^{-182}$ | $1.7 \times 10^{-181}$ | 4 | 6.0616 | 104.265 |
CPU timing and η * are calculated based on the number of iterations required to reach the desired accuracy.
Example 3. 
We investigate a widely recognized problem, the Hammerstein integral equation, as detailed in [2] (pp. 19–20). The primary objective is to evaluate and contrast the effectiveness and practicality of iterative scheme (2) against the earlier ones of the same CO. The Hammerstein integral equation is given below:
$$x(s) = 1 + \frac{1}{5}\int_0^1 G(s, t)\, x^3(t)\, dt, \quad \text{for } x \in C[0, 1],\ s, t \in [0, 1].$$
The kernel G is given by
$$G(s, t) = \begin{cases} s(1 - t), & s \le t, \\ (1 - s)t, & t \le s. \end{cases}$$
To convert the given equation into a finite-dimensional problem, we utilize the Gauss–Legendre quadrature formula (GLQF), which allows us to approximate integrals with greater accuracy. Then, we obtain
$$\int_0^1 g(t)\, dt \approx \sum_{j=1}^{10} w_j\, g(t_j).$$
The abscissas $t_j$ and weights $w_j$, calculated using the GLQF with 10 points, are shown in Table 3. Let $x_i$ ($i = 1, 2, \ldots, 10$) denote the approximation of $x(t_i)$. This leads to an SNE, which is defined as follows:
$$5x_i - 5 - \sum_{j=1}^{10} a_{ij}\, x_j^3 = 0, \quad i = 1, 2, \ldots, 10,$$
where
$$a_{ij} = \begin{cases} w_j\, t_j (1 - t_i), & j \le i, \\ w_j\, t_i (1 - t_j), & i < j. \end{cases}$$
Table 3. The values $t_j$ (abscissas) and $w_j$ (weights).

| $j$ | $t_j$ | $w_j$ |
|---|---|---|
| 1 | 0.01304673574141413996101799 | 0.03333567215434406879678440 |
| 2 | 0.06746831665550774463395165 | 0.07472567457529029657288816 |
| 3 | 0.16029521585048779688283632 | 0.10954318125799102199776746 |
| 4 | 0.28330230293537640460036703 | 0.13463335965499817754561346 |
| 5 | 0.42556283050918439455758700 | 0.14776211235737643508694649 |
| 6 | 0.57443716949081560544241300 | 0.14776211235737643508694649 |
| 7 | 0.71669769706462359539963297 | 0.13463335965499817754561346 |
| 8 | 0.83970478414951220311716368 | 0.10954318125799102199776746 |
| 9 | 0.93253168334449225536604834 | 0.07472567457529029657288816 |
| 10 | 0.98695326425858586003898201 | 0.03333567215434406879678440 |
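The discretized Hammerstein system is small enough to verify directly. The sketch below (our own check, solving the 10-dimensional SNE with a plain Newton iteration rather than method (2)) regenerates the nodes of Table 3 from the 10-point Gauss–Legendre rule mapped to $[0, 1]$ and computes the solution:

```python
import numpy as np

n = 10
t, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (t + 1.0)   # abscissas mapped from [-1, 1] to [0, 1], cf. Table 3
w = 0.5 * w           # correspondingly scaled weights

# a_ij = w_j t_j (1 - t_i) if j <= i, else w_j t_i (1 - t_j)
A = np.where(np.arange(n)[None, :] <= np.arange(n)[:, None],
             w[None, :] * t[None, :] * (1.0 - t[:, None]),
             w[None, :] * t[:, None] * (1.0 - t[None, :]))

def F(x):
    return 5.0 * x - 5.0 - A @ x**3

def J(x):
    # dF_i/dx_k = 5 delta_ik - 3 a_ik x_k^2
    return 5.0 * np.eye(n) - 3.0 * A * (x**2)[None, :]

x = np.full(n, 1.1)          # initial guess as in Table 4
for _ in range(10):
    x = x - np.linalg.solve(J(x), F(x))
print(np.round(x, 3))        # symmetric solution, cf. x* quoted in the text
```

The kernel is symmetric under $s \mapsto 1 - s$, $t \mapsto 1 - t$, so the computed vector is symmetric about its midpoint, matching the pattern of $x^*$.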
Table 4 provides a detailed overview of the performance for Example 3, including the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between consecutive iterations. These computational results offer valuable insight into the efficiency and accuracy of the required solution, and allow both the computational time and the convergence behavior of method (2) to be compared with the existing methods. Our required solution is given below:
$$x^* = (1.001, 1.006, 1.014, 1.021, 1.026, 1.026, 1.021, 1.014, 1.006, 1.001)^{tr}.$$
Table 4. Computational results of Example 3.

| Methods | $x_0$ | $\|F(x_3)\|$ | $\|x_4 - x_3\|$ | $n$ | $\eta$ | CPU Time |
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | $(1.1, \ldots, 1.1)^{tr}$ (10 entries) | $3.3 \times 10^{-316}$ | $6.7 \times 10^{-317}$ | 3 | 5.1023 | 2.86834 |
| Wang and Li [26] | $(1.1, \ldots, 1.1)^{tr}$ (10 entries) | $4.2 \times 10^{-422}$ | $9.0 \times 10^{-423}$ | 3 | 6.0410 | 2.82447 |
| Abbasbandy [24] | $(1.1, \ldots, 1.1)^{tr}$ (10 entries) | $5.1 \times 10^{-394}$ | $1.1 \times 10^{-394}$ | 3 | 6.0428 | 3.67107 |
| Hueso et al. [25] | $(1.1, \ldots, 1.1)^{tr}$ (10 entries) | $9.8 \times 10^{-251}$ | $2.1 \times 10^{-251}$ | 4 | 5.0099 | 10.559 |
| Method (2) | $(1.1, \ldots, 1.1)^{tr}$ (10 entries) | $4.7 \times 10^{-409}$ | $1.0 \times 10^{-409}$ | 3 | 5.9995 | 3.30025 |
CPU timing and η are calculated based on the number of iterations required to reach the desired accuracy.
Example 4. 
Further, we analyze another larger SNE of 100 equations in 100 variables to demonstrate the method's effectiveness and scalability when applied to more complex and large-scale systems. The analysis highlights the method's capacity to tackle significant computational challenges and its applicability to a wide range of practical problems involving large-scale nonlinear systems. Thus, we consider the following system:
$$P(X) = \begin{cases} x_j^2\, x_{j+1} - 1 = 0, & 1 \le j \le 99, \\ x_{100}^2\, x_1 - 1 = 0, & \text{otherwise}. \end{cases}$$
The required zero of this problem is $x^* = (1, 1, \ldots, 1)^{tr}$ (100 entries). In Table 5, we provide the computational order of convergence (COC), CPU time, number of iterations, residual errors, and the error differences between two consecutive iterations for Example 4.
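This cyclic system is also easy to check independently. The sketch below (our own code, using a plain Newton iteration instead of the sixth-order methods) confirms that the iteration started from the uniform point $(0.96, \ldots, 0.96)^{tr}$ used in Table 5 converges to the all-ones zero:

```python
import numpy as np

# Residual and Jacobian for x_j^2 x_{j+1} - 1 = 0 with cyclic indexing
# (the last row wraps around to x_1); the zero is the all-ones vector.
m = 100

def P(x):
    return x**2 * np.roll(x, -1) - 1.0   # row j couples x_j and x_{j+1}

def JP(x):
    xp = np.roll(x, -1)
    J = np.diag(2.0 * x * xp)            # dP_j/dx_j = 2 x_j x_{j+1}
    J += np.diag(x[:-1]**2, k=1)         # dP_j/dx_{j+1} = x_j^2
    J[-1, 0] = x[-1]**2                  # wrap-around term of the last row
    return J

x = np.full(m, 0.96)                     # starting point 96/100 from Table 5
for _ in range(20):
    x = x - np.linalg.solve(JP(x), P(x))
assert np.allclose(x, np.ones(m), atol=1e-12)
```

Starting from a uniform vector, the Newton updates stay uniform, so the iteration behaves like the scalar recursion $u \mapsto u - (u^3 - 1)/(3u^2)$ and converges rapidly to 1.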
Table 5. Computational results of Example 4.

| Methods | $x_0$ | $\|F(x_4)\|$ | $\|x_5 - x_4\|$ | $n$ | $\eta^*$ | CPU Time |
|---|---|---|---|---|---|---|
| Lotfi et al. [27] | $\left(\frac{96}{100}, \ldots, \frac{96}{100}\right)^{tr}$ (100 entries) | $1.1 \times 10^{-238}$ | $3.7 \times 10^{-239}$ | 4 | 6.0252 | 48.3817 |
| Wang and Li [26] | $\left(\frac{96}{100}, \ldots, \frac{96}{100}\right)^{tr}$ (100 entries) | $1.2 \times 10^{-231}$ | $3.9 \times 10^{-232}$ | 4 | 6.0260 | 51.2171 |
| Abbasbandy [24] | $\left(\frac{96}{100}, \ldots, \frac{96}{100}\right)^{tr}$ (100 entries) | $1.8 \times 10^{-224}$ | $6.0 \times 10^{-225}$ | 4 | 6.0268 | 78.1555 |
| Hueso et al. [25] | $\left(\frac{96}{100}, \ldots, \frac{96}{100}\right)^{tr}$ (100 entries) | $7.3 \times 10^{-158}$ | $2.4 \times 10^{-158}$ | 4 | 5.0318 | 100.235 |
| Method (2) | $\left(\frac{96}{100}, \ldots, \frac{96}{100}\right)^{tr}$ (100 entries) | $2.0 \times 10^{-246}$ | $6.7 \times 10^{-247}$ | 4 | 6.0244 | 50.3581 |
CPU timing and η * are calculated based on the number of iterations required to reach the desired accuracy.

5. Concluding Remarks

A new methodology has been proposed in this study, which expands the use of multistep approaches without relying on Taylor series expansions that impose derivative-based conditions not present in the methods. Other drawbacks of earlier analyses include the absence of a priori, computable error distances and of uniqueness-of-solution results. All of these concerns are addressed in this paper. Indeed, the sufficient convergence criteria of our approach depend solely on the operators involved in the technique. Additionally, our approach establishes the uniqueness of the solution, determines the radii of convergence, and provides a priori upper estimates on the error distances involved in the methods (2) and (3). Furthermore, the SLAC, which was not discussed in earlier papers, is also presented in this work. Method (2) outperforms the existing methods in terms of residual error and the absolute difference between two consecutive iterations. Moreover, method (2) consumes significantly less CPU time than the existing methods and demonstrates a stable CO. In future research, we will explore how this methodology can be utilized to extend other iterative methods in a similar manner due to its generality [7,8,10,11,12,13,14,15,16,17,18,19,28,29,30,31,32,33,34,35,36,37,38,39].

Author Contributions

Conceptualization, R.B. and I.K.A.; methodology, R.B. and I.K.A.; software, R.B. and I.K.A.; validation, R.B. and I.K.A.; formal analysis, R.B. and I.K.A.; investigation, R.B. and I.K.A.; resources, R.B. and I.K.A.; data curation, R.B. and I.K.A.; writing—original draft preparation, R.B. and I.K.A.; writing—review and editing, R.B., I.K.A., S.A. and A.M.A.; visualization, R.B., I.K.A., S.A. and A.M.A.; supervision, R.B. and I.K.A. All authors have read and agreed to the published version of the manuscript.

Funding

This study is supported via funding from Prince Sattam bin Abdulaziz University project number (PSAU/2024/01/31597).

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

The authors extend their appreciation to Prince Sattam bin Abdulaziz University for funding this research work through the project number (PSAU/2024/01/31597).

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

1. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: New York, NY, USA, 1964.
2. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
3. Cordero, A.; Gómez, E.; Torregrosa, J.R. Efficient high-order iterative methods for solving nonlinear systems and their application on heat conduction problems. Complexity 2017, 2017, 6457532.
4. Argyros, I.K.; Magrenán, Á.M. Iterative Methods and Their Dynamics with Applications; CRC Press: New York, NY, USA; Taylor & Francis: Boca Raton, FL, USA, 2018.
5. Argyros, I.K. The Theory and Application of Iteration Methods; CRC Press: New York, NY, USA; Taylor & Francis: Boca Raton, FL, USA, 2022.
6. Hernández, M.A.; Martinez, E. On the semilocal convergence of a three steps Newton-type process under mild convergence conditions. Numer. Algor. 2015, 70, 377–392.
7. Steffensen, J.F. Remarks on iteration. Skand. Aktuarietidskr. 1933, 16, 64–72.
8. Shakhno, S.M. Gauss-Newton-Kurchatov method for the solution of nonlinear least-squares problems. J. Math. Sci. 2020, 247, 58–72.
9. Cordero, A.; Garrido, N.; Torregrosa, J.R.; Triguero-Navarro, P. Design of iterative methods with memory for solving nonlinear systems. Math. Methods Appl. Sci. 2023, 46, 12361–12377.
10. Wang, X.; Jin, Y.; Zhao, Y. Derivative-free iterative methods with some Kurchatov-type accelerating parameters for solving nonlinear systems. Symmetry 2021, 13, 943.
11. Cordero, A.; Torregrosa, J.R. A modified Newton-Jarratt's composition. Numer. Algor. 2010, 55, 87–99.
12. Behl, R.; Bhalla, S.; Magrenán, Á.A.; Kumar, S. An efficient high order iterative scheme for large nonlinear systems with dynamics. J. Comput. Appl. Math. 2022, 404, 113249.
13. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300.
14. Ezquerro, J.A.; Hernández, M.A. New iterations of R-order four with reduced computational cost. BIT Numer. Math. 2009, 49, 325–342.
15. Grau-Sánchez, M.; Grau, A.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266.
16. Grau-Sánchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving systems of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385.
17. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
18. Kou, J.; Li, Y.; Wang, X. Some modifications of Newton's method with fifth-order convergence. J. Comput. Appl. Math. 2007, 209, 146–152.
19. Shakhno, S.M. Nonlinear majorants for investigation of methods of linear interpolation for the solution of nonlinear equations. In Proceedings of the ECCOMAS 2004 European Congress on Computational Methods in Applied Sciences and Engineering, Jyväskylä, Finland, 24–28 July 2004.
20. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. Maximally efficient damped composed Newton-type methods to solve nonlinear systems of equations. Appl. Math. Comput. 2025, 492, 129231.
21. Cordero, A.; Maimó, J.G.; Rodríguez-Cabral, A.; Torregrosa, J.R. Two-step fifth-order efficient Jacobian-free iterative method for solving nonlinear systems. Mathematics 2024, 12, 3341.
22. Singh, H.; Sharma, J.R. A two-point Newton-like method of optimal fourth order convergence for systems of nonlinear equations. J. Complex. 2025, 86, 101907.
23. Kumar, S.; Sharma, J.R.; Jäntschi, L. An optimal family of eighth-order methods for multiple-roots and their complex dynamics. Symmetry 2024, 16, 1045.
24. Abbasbandy, S.; Bakhtiari, P.; Cordero, A.; Torregrosa, J.R.; Lotfi, T. New efficient methods for solving nonlinear systems of equations with arbitrary even order. Appl. Math. Comput. 2016, 287–288, 94–103.
25. Hueso, J.L.; Martínez, E.; Teruel, C. Convergence, efficiency and dynamics of new fourth and sixth order families of iterative methods for nonlinear systems. J. Comput. Appl. Math. 2015, 275, 412–420.
26. Wang, X.; Li, Y. An efficient sixth-order Newton type method for solving nonlinear systems. Algorithms 2017, 10, 45.
27. Lotfi, T.; Bakhtiari, P.; Cordero, A.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934.
28. Wang, X. Fixed-point iterative method with eighth-order constructed by undetermined parameter technique for solving nonlinear systems. Symmetry 2021, 13, 863.
29. Cordero, A.; Villalba, E.G.; Torregrosa, J.R.; Triguero-Navarro, P. Convergence and stability of a parametric class of iterative schemes for solving nonlinear systems. Mathematics 2021, 9, 86.
30. Amiri, A.; Cordero, A.; Darvishi, M.T.; Torregrosa, J.R. A fast algorithm to solve systems of nonlinear equations. J. Comput. Appl. Math. 2019, 354, 242–258.
31. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. A new efficient parametric family of iterative methods for solving nonlinear systems. J. Differ. Equ. Appl. 2019, 25, 1454–1467.
32. Singh, A. An efficient fifth-order Steffensen-type method for solving systems of nonlinear equations. Int. J. Comput. Sci. Math. 2021, 9, 501–514.
33. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
34. Wang, X.; Zhang, T.; Qian, W.; Teng, M. Seventh-order derivative-free iterative method for solving nonlinear systems. Numer. Algor. 2015, 70, 545–558.
35. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071.
36. Sharma, J.R.; Guha, R.K.; Sharma, R. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor. 2013, 62, 307–323.
37. Wang, X.; Chen, X. Derivative-free Kurchatov-type accelerating iterative method for solving nonlinear systems: Dynamics and applications. Fractal Fract. 2022, 6, 59.
38. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Iterative methods with memory for solving systems of nonlinear equations using a second order approximation. Mathematics 2019, 7, 1069.
39. Chicharro, F.I.; Cordero, A.; Garrido, N.; Torregrosa, J.R. On the improvement of the order of convergence of iterative methods for solving nonlinear systems by means of memory. Appl. Math. Lett. 2020, 104, 106277.