Extending the Applicability of Newton-Jarratt-like Methods with Accelerators of Order 2m + 1 for Solving Nonlinear Systems

by Ioannis K. Argyros 1,*, Stepan Shakhno 2 and Mykhailo Shakhov 2
1 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
2 Department of Theory of Optimal Processes, Ivan Franko National University of Lviv, Universytetska Str. 1, 79000 Lviv, Ukraine
* Author to whom correspondence should be addressed.
Axioms 2025, 14(10), 734; https://doi.org/10.3390/axioms14100734
Submission received: 7 August 2025 / Revised: 16 September 2025 / Accepted: 16 September 2025 / Published: 28 September 2025

Abstract

The local convergence analysis of the $(m+1)$-step Newton-Jarratt composite scheme of order $2m+1$ has been given previously. However, that convergence order is obtained using Taylor series and assumptions on the existence of at least the fifth derivative of the mapping involved, although this derivative does not appear in the method. Such assumptions limit the applicability of the method. Moreover, no a priori error estimates, radius of convergence, or uniqueness-of-solution results have been given. These drawbacks are addressed in this paper. In particular, the convergence analysis relies only on the operators appearing in the method, namely the operator and its first derivative. Moreover, a radius of convergence is established, a priori estimates are derived, and the isolation of the solution is discussed using generalized continuity assumptions on the derivative. Furthermore, the more challenging semi-local convergence analysis, not previously studied, is presented using majorizing sequences. Both analyses depend on the generalized continuity of the Jacobian of the mapping involved, which is used to control it and to sharpen the error distances. Numerical examples validate the sufficient convergence conditions presented in the theory.

1. Introduction

The problem of solving nonlinear equations and systems of nonlinear equations is a fundamental and challenging task in numerical analysis with significant applications in many branches of science and engineering. Since closed-form or analytical solutions are rarely available, research has focused heavily on developing and analyzing iterative methods to approximate a solution $x^*$ of an equation of the form
$$F(x) = 0, \qquad (1)$$
where $F$ is a Fréchet-differentiable operator defined on a convex subset $D$ of a Banach space $B_0$ with values in a Banach space $B$. The solution is typically found as the limit of a sequence $\{x_k\}$ generated by an iterative method. The results obtained here hold for any norm, denoted by $\|\cdot\|$. There is a variety of iterative methods for solving such equations and nonlinear systems [1,2,3,4,5]. The most well-known is Newton's method, renowned for its quadratic convergence under standard assumptions [6]. To achieve a higher order of convergence, numerous modified Newton- or Newton-like methods have been proposed in the literature [7,8,9,10]. Among the most efficient and widely studied families are multipoint methods, which can attain a high order of convergence while keeping the number of function evaluations low. In this context, the Newton-Jarratt family and its variants are particularly prominent, offering fourth-order convergence or higher [11,12]. The development of even more efficient high-order multipoint methods continues to be an active area of research [10,13,14,15,16,17,18,19].
Recently, Sharma and Kumar [20] introduced an efficient and elegant class of Newton-Jarratt-like methods. Their approach involves a multi-step scheme designed to increase the order of convergence systematically. As described in their work, the key innovation is that in each step the order of convergence is increased by two at the cost of only a single additional function evaluation. Furthermore, the computationally expensive inverse of the Fréchet derivative, $F'(x_n)^{-1}$, is computed only once per iteration and then reused in all subsequent sub-steps. This design, which they term a method with a "frozen inverse operator", makes the algorithm highly efficient, especially for large systems of equations.
We provide a complete theoretical framework covering both local and the more challenging semi-local convergence. The local analysis establishes a radius of convergence and gives uniqueness results, improving the study in [20] through larger convergence domains and tighter error bounds under our weaker hypotheses. Critically, and as a significant extension of this work, we also present the first semi-local convergence analysis for this class of methods, using majorizing sequences. Owing to this analysis, we can guarantee convergence from a given initial point without assuming the existence of a solution beforehand. This is crucial for practical applications, where the existence of a solution is often not known. The theoretical results are then validated through numerical examples.
We extend the applicability of the multi-step method introduced in [20], which is defined for a natural number $m \ge 2$, $x_0^{(0)} = x_0 \in D$, $j = 2, \dots, m$ and each $n = 0, 1, 2, \dots$ by
$$
\begin{aligned}
y_n^{(0)} &= x_n - \tfrac{2}{3} F'(x_n)^{-1} F(x_n), \\
A_n &= F'(x_n)^{-1} F'(y_n^{(0)}), \\
y_n^{(1)} &= y_n^{(0)} - \tfrac{1}{12}(13 I - 9 A_n) F'(x_n)^{-1} F(x_n), \\
E_n &= E(x_n, y_n^{(0)}) = \tfrac{1}{2}(5 I - 3 A_n), \\
y_n^{(j)} &= y_n^{(j-1)} - E_n F'(x_n)^{-1} F(y_n^{(j-1)}), \\
x_{n+1} &= y_n^{(m)} = y_n^{(m-1)} - E_n F'(x_n)^{-1} F(y_n^{(m-1)}).
\end{aligned} \qquad (2)
$$
The structure of the method begins with a Newton-Jarratt-like predictor step producing $y_n^{(0)}$, followed by a series of additional steps. It is important to note that the operators $A_n$ and $E_n$, and the inverse operator $F'(x_n)^{-1}$, are computed only once per iteration. The steps with $j \ge 2$ use the fixed operator $E_n F'(x_n)^{-1}$ to iteratively improve the solution, enhancing the order of convergence without requiring additional costly derivative evaluations or inversions. The pseudo-code for the particular method is provided in Algorithm 1; a compilable sketch follows it.
Algorithm 1 $(m+1)$-step Newton-Jarratt Method (2)
1:  Input:
2:     Function $F$ and its Jacobian $J$
3:     Initial guess $x_0$
4:     Number of steps of the algorithm $m$
5:     Maximum iterations $n$
6:     Tolerance $\epsilon$
7:  Output:
8:     Approximate root $x^*$
9:     Convergence status (boolean)
10: $x_k \leftarrow x_0$
11: for $k = 1, \dots, n$ do
12:     Compute $f_k = F(x_k)$
13:     Compute Jacobian $J_k = J(x_k)$
14:     Perform LU factorization $J_k = L_k U_k$; store $L_k$ and $U_k$
15:     Solve the linear system $J_k c_k = f_k$ for $c_k$ using the LU factors
16:     $w_{k,0} \leftarrow x_k - \tfrac{2}{3} c_k$
17:     Compute Jacobian $J_{w_{k,0}} = J(w_{k,0})$
18:     Solve the linear system $J_k A_k = J_{w_{k,0}}$ for $A_k$ using the LU factors
19:     $w_{k,1} \leftarrow w_{k,0} - \tfrac{1}{12}(13 I - 9 A_k) c_k$
20:     $E_k \leftarrow \tfrac{1}{2}(5 I - 3 A_k)$
21:     for $j = 2, \dots, m$ do
22:         Compute $f_{w_{k,j-1}} = F(w_{k,j-1})$
23:         Solve the linear system $J_k d = f_{w_{k,j-1}}$ for $d$ using the LU factors
24:         $w_{k,j} \leftarrow w_{k,j-1} - E_k d$
25:     end for
26:     $x_{k+1} \leftarrow w_{k,m}$
27:     if $\|x_k - x_{k+1}\| + \|F(x_{k+1})\| < \epsilon$ then
28:         return $x_{k+1}$, True
29:     end if
30:     $x_k \leftarrow x_{k+1}$
31: end for
32: return $x_n$, False
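For readers who prefer working code, the following is a minimal C++ sketch of Algorithm 1 (our illustration, not the authors' implementation). It uses the Eigen library for the linear algebra and plain double precision; the experiments of Section 4 were instead run with 200-digit Boost.Multiprecision arithmetic, which is omitted here for brevity.

#include <Eigen/Dense>
#include <functional>

using Vec = Eigen::VectorXd;
using Mat = Eigen::MatrixXd;

// (m+1)-step Newton-Jarratt method (2); returns true on convergence.
// F: residual, J: Jacobian, x: initial guess, overwritten by the approximate root.
bool newton_jarratt(const std::function<Vec(const Vec&)>& F,
                    const std::function<Mat(const Vec&)>& J,
                    Vec& x, int m, int max_iter, double eps) {
    const Eigen::Index n = x.size();
    const Mat I = Mat::Identity(n, n);
    for (int k = 0; k < max_iter; ++k) {
        Eigen::PartialPivLU<Mat> lu(J(x));               // single LU factorization per iteration
        Vec c = lu.solve(F(x));                          // J_k c_k = F(x_k)
        Vec w = x - (2.0 / 3.0) * c;                     // y^(0): Jarratt-type predictor
        Mat A = lu.solve(J(w));                          // A_k = J_k^{-1} J(y^(0)), reusing the LU factors
        w -= (1.0 / 12.0) * ((13.0 * I - 9.0 * A) * c);  // y^(1)
        const Mat E = 0.5 * (5.0 * I - 3.0 * A);         // frozen accelerator E_k
        for (int j = 2; j <= m; ++j)                     // each extra step raises the order by two
            w -= E * lu.solve(F(w));
        const double step = (x - w).norm();
        x = w;                                           // x_{k+1} = y^(m)
        if (step + F(x).norm() < eps) return true;
    }
    return false;
}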
However, there are a number of problems with the Taylor series approach which restrict the applicability of the method (2).
  • Motivation for our paper
    (P1)
The existence of at least the fifth derivative is assumed in [20] to show the local convergence, and it is assumed that $B_0 = B = \mathbb{R}^m$, where $m$ is a natural number. Let us consider $m = 1$, $D = [-2, 2]$ and define the function $f : D \to \mathbb{R}$ by
$$f(x) = \begin{cases} s_1 x^5 \ln x + s_2 x^6 + s_3 x^7, & x \neq 0, \\ 0, & x = 0. \end{cases}$$
Here $s_1, s_2, s_3$ are real numbers with $s_1 \neq 0$ and $s_2 + s_3 = 1$. It follows by the definition of the function $f$ that $x^* = 1 \in D$ is a solution of the equation $f(x) = 0$. But the fifth derivative of the function $f$ is not bounded, since it is not continuous at $x = 0 \in D$. Therefore, the results in [20] cannot assure the convergence of the method to $x^*$. However, the method converges to $x^*$ if we take $x_0 = 0.95$. Thus, the sufficient convergence conditions in [20] can be weakened. It is also worth noting that only $F$ and $F'$ appear in the method.
    (P2)
There is no knowledge in advance about a natural number $K$ such that $\|x_n - x^*\| < \varepsilon$ for all $n \ge K$, where $\varepsilon > 0$ is the error tolerance. Thus, the number of iterations $K$ is unknown.
    (P3)
Information about the isolation of $x^*$ is not available.
    (P4)
The most important and more challenging semi-local convergence analysis is not given.
    (P5)
The convergence is established only for $B_0 = B = \mathbb{R}^m$.
The problems (P1)–(P5) appear in studies utilizing the Taylor series approach. In this paper, we address these problems in order to extend the applicability of the method.
  • Novelty of our Paper
    (P1)′
The local convergence is based on the operators $F$ and $F'$, which are the only operators appearing in the method. Moreover, generalized continuity assumptions [21] are used to control the derivative and sharpen the error distances $\|x_n - x^*\|$.
    (P2)′
The number of iterations $K$ is known in advance, since a priori estimates on $\|x_n - x^*\|$ become available.
    (P3)′
Domains containing only one solution are determined.
    (P4)′
    The semi-local convergence is developed using majorizing sequences [4,21,22,23].
    (P5)′
The local and semi-local convergence is given in the more general setting of a Banach space. Notice that although our technique is applied to method (2), it can also be used on other methods exhibiting the problems (P1)–(P5) along the same lines [11,12,15,22,24,25,26,27,28,29,30,31,32,33,34,35,36].
The rest of this paper is organized as follows. Section 2 is devoted to the local convergence analysis of method (2), where we establish a radius of convergence and provide uniqueness results under our weaker hypotheses. The more demanding semi-local convergence analysis is presented in Section 3, based on the use of majorizing sequences. Section 4 contains numerical results validating the theoretical conditions and demonstrating the performance of the method on several problems. Finally, we provide concluding remarks in Section 5.

2. Local Convergence

The analysis uses some scalar functions. Set $T = [0, +\infty)$.
  • Suppose:
    (C1)
There exists a continuous and nondecreasing function $w_0 : T \to T$ such that the function $w_0(t) - 1$ has a smallest zero in the interval $T \setminus \{0\}$. We shall denote such a zero by $R_0$ and set $T_0 = [0, R_0)$.
    (C2)
There exists a continuous and nondecreasing function $w : T_0 \to T$ such that, for the functions
$$\bar w^{(s)} : T_0 \to T, \quad s = 0, 1, \dots, m-1,$$
$$\bar w^{(s)}(t) = w\big((1 + g_s(t))t\big) \quad \text{or} \quad w_0(t) + w_0(g_s(t)t),$$
the functions $g_0, g_1, g_j : T_0 \to T$, $j = 2, \dots, m$, defined by
$$g_0(t) = \frac{\int_0^1 w((1-\theta)t)\,d\theta + \tfrac{1}{3}\Big(1 + \int_0^1 w(\theta t)\,d\theta\Big)}{1 - w_0(t)},$$
$$g_1(t) = \frac{\int_0^1 w_0((1-\theta)t)\,d\theta}{1 - w_0(t)} + \frac{\bar w^{(0)}(t)\Big(1 + \int_0^1 w_0(\theta t)\,d\theta\Big)}{(1 - w_0(t))^2},$$
$$g_j(t) = \Bigg[\frac{\int_0^1 w((1-\theta)g_{j-1}(t)t)\,d\theta}{1 - w_0(g_{j-1}(t)t)} + \frac{\bar w^{(j-1)}(t)\Big(1 + \int_0^1 w_0(\theta g_{j-1}(t)t)\,d\theta\Big)}{(1 - w_0(t))(1 - w_0(g_{j-1}(t)t))} + \frac{\tfrac{3}{2}\,\bar w^{(0)}(t)\Big(1 + \int_0^1 w_0(\theta g_{j-1}(t)t)\,d\theta\Big)}{(1 - w_0(t))^2}\Bigg]\, g_{j-1}(t)$$
are such that the functions $g_\delta(t) - 1$, $\delta = 0, 1, 2, \dots, m$, have smallest zeros in the interval $T_0 \setminus \{0\}$. We shall denote such zeros by $r_\delta$. Set
$$r = \min\{r_\delta\} \quad \text{and} \quad T_1 = [0, r). \qquad (3)$$
This parameter shall be shown to be the radius of convergence for the method (2) in Theorem 1. It follows from the definition of $r$ that for each $t \in T_1$:
$$0 \le w_0(t) < 1, \qquad (4)$$
$$0 \le w_0(g_{j-1}(t)t) < 1, \qquad (5)$$
and
$$0 \le g_\delta(t) < 1. \qquad (6)$$
The real functions $w_0$ and $w$ relate to the operators on the method (2).
    (C3)
There exists a solution $x^* \in D$ of the equation $F(x) = 0$ and an invertible operator $M \in L(B_0, B)$ such that for each $u \in D$
$$\|M^{-1}(F'(u) - M)\| \le w_0(\|u - x^*\|).$$
Define the region $D_0 = S(x^*, R_0) \cap D$.
(C4)
$\|M^{-1}(F'(u_1) - F'(u))\| \le w(\|u_1 - u\|)$ for each $u, u_1 \in D_0$.
(C5)
$S[x^*, r] \subset D$.
Remark 1. 
Some choices of $M$ can be $M = I$, the identity operator on $B_0$, or $M = F'(\tilde x)$ for some auxiliary point $\tilde x \in D$ other than $x^*$, or $M = F'(x^*)$. In the last case, it follows by (C3) that $x^*$ is a simple solution of the equation $F(x) = 0$. The main local convergence result for the method (2) follows next.
Theorem 1. 
Suppose that the conditions (C1)–(C5) hold. Then, the sequence $\{x_n\}$ generated by the method (2) converges to $x^*$, provided that $x_0 \in S_0 := S(x^*, r) \setminus \{x^*\}$. Moreover, the following items hold:
$$\|x_{n+1} - x^*\| \le c^{n+1}\|x_0 - x^*\|, \qquad (7)$$
where
$$c = d^m \in [0, 1), \qquad (8)$$
and, for $e_0 = \|x_0 - x^*\|$,
$$d = \max\{g_1(e_0), g_2(e_0), \dots, g_m(e_0)\} \in [0, 1). \qquad (9)$$
Proof. 
Notice that by the definition of the radius $r$, (4), and (7)–(9), it follows that $r$, $c$, $d$ exist and belong to the interval $[0, 1)$. We shall show by induction that for each $n = 0, 1, 2, \dots$
$$\|y_n^{(0)} - x^*\| \le g_0(\|x_n - x^*\|)\,\|x_n - x^*\| \le \|x_n - x^*\| < r, \qquad (10)$$
$$\|y_n^{(1)} - x^*\| \le g_1(\|x_n - x^*\|)\,\|x_n - x^*\| \le \|x_n - x^*\|, \qquad (11)$$
and for $j = 2, \dots, m$
$$\|y_n^{(j)} - x^*\| \le g_j(\|y_n^{(j-1)} - x^*\|)\,\|y_n^{(j-1)} - x^*\|. \qquad (12)$$
Let $u \in S_0$. By the condition (C3), (3) and (4) we have the estimate
$$\|M^{-1}(F'(u) - M)\| \le w_0(\|u - x^*\|) \le w_0(r) < 1. \qquad (13)$$
It follows by (13) and the Banach lemma on invertible operators that $F'(u)$ is invertible and
$$\|F'(u)^{-1}M\| \le \frac{1}{1 - w_0(\|u - x^*\|)}. \qquad (14)$$
In particular, for $u = x_0$, the operator $F'(x_0)$ is invertible. So, the iterates $y_0^{(0)}, y_0^{(1)}, \dots, y_0^{(m)}$ are well defined by the method (2). Then, by the first substep, we can write in turn
$$y_0^{(0)} - x^* = x_0 - x^* - F'(x_0)^{-1}F(x_0) + \tfrac{1}{3}F'(x_0)^{-1}F(x_0) = [F'(x_0)^{-1}M]\int_0^1 M^{-1}\big(F'(x^* + \theta(x_0 - x^*)) - F'(x_0)\big)\,d\theta\,(x_0 - x^*) + \tfrac{1}{3}[F'(x_0)^{-1}M][M^{-1}F(x_0)]. \qquad (15)$$
We need the estimates
$$\Big\|\int_0^1 M^{-1}\big(F'(x^* + \theta(x_0 - x^*)) - F'(x_0)\big)\,d\theta\Big\| \le \int_0^1 w((1-\theta)\|x_0 - x^*\|)\,d\theta \qquad (16)$$
by (C4), and, since
$$F(x_0) = F(x_0) - F(x^*) = \int_0^1 F'(x^* + \theta(x_0 - x^*))\,d\theta\,(x_0 - x^*),$$
we obtain
$$\|M^{-1}F(x_0)\| = \Big\|\int_0^1 M^{-1}\big(F'(x^* + \theta(x_0 - x^*)) - M + M\big)\,d\theta\,(x_0 - x^*)\Big\| \le \Big(1 + \int_0^1 w_0(\theta\|x_0 - x^*\|)\,d\theta\Big)\|x_0 - x^*\|. \qquad (17)$$
Then, by (3), (6) (for $\delta = 0$), (14) (for $u = x_0$), (16) and (17), the identity (15) can give
$$\|y_0^{(0)} - x^*\| \le \frac{\int_0^1 w((1-\theta)\|x_0 - x^*\|)\,d\theta + \tfrac{1}{3}\Big(1 + \int_0^1 w_0(\theta\|x_0 - x^*\|)\,d\theta\Big)}{1 - w_0(\|x_0 - x^*\|)}\,\|x_0 - x^*\| \le g_0(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\| < r. \qquad (18)$$
Thus, the iterate $y_0^{(0)} \in S(x^*, r)$ and item (10) holds for $n = 0$. Then, by the second substep of the method (2), we get in turn that
$$y_0^{(1)} - x^* = x_0 - x^* - F'(x_0)^{-1}F(x_0) + \Big[\tfrac{1}{3}I - \tfrac{1}{12}(13I - 9A_0)\Big]F'(x_0)^{-1}F(x_0) = x_0 - x^* - F'(x_0)^{-1}F(x_0) + \tfrac{3}{4}\big[F'(x_0)^{-1}F'(y_0^{(0)}) - I\big]F'(x_0)^{-1}F(x_0). \qquad (19)$$
Notice that $F'(x_0)^{-1}F'(y_0^{(0)}) - I = F'(x_0)^{-1}\big(F'(y_0^{(0)}) - F'(x_0)\big)$, so
$$\|M^{-1}(F'(y_0^{(0)}) - F'(x_0))\| \le w(\|y_0^{(0)} - x_0\|) \le w\big(\|x_0 - x^*\| + \|y_0^{(0)} - x^*\|\big) \le w\big((1 + g_0(\|x_0 - x^*\|))\|x_0 - x^*\|\big) = \bar w^{(0)}(\|x_0 - x^*\|) \qquad (20)$$
or
$$\|M^{-1}(F'(y_0^{(0)}) - F'(x_0))\| \le \|M^{-1}(F'(x_0) - M)\| + \|M^{-1}(F'(y_0^{(0)}) - M)\| \le w_0(\|x_0 - x^*\|) + w_0\big(g_0(\|x_0 - x^*\|)\|x_0 - x^*\|\big) = \bar w^{(0)}(\|x_0 - x^*\|). \qquad (21)$$
Hence, the identity (19) can give
$$\|y_0^{(1)} - x^*\| \le \Bigg[\frac{\int_0^1 w_0((1-\theta)\|x_0 - x^*\|)\,d\theta}{1 - w_0(\|x_0 - x^*\|)} + \frac{\bar w^{(0)}(\|x_0 - x^*\|)\Big(1 + \int_0^1 w_0(\theta\|x_0 - x^*\|)\,d\theta\Big)}{(1 - w_0(\|x_0 - x^*\|))^2}\Bigg]\|x_0 - x^*\| \le g_1(\|x_0 - x^*\|)\,\|x_0 - x^*\| \le \|x_0 - x^*\|. \qquad (22)$$
So, the iterate $y_0^{(1)} \in S(x^*, r)$ and item (11) holds for $n = 0$. Then, we can write for $j = 2, \dots, m$ by the method (2), if $n = 0$,
$$y_0^{(j)} - x^* = y_0^{(j-1)} - x^* - F'(y_0^{(j-1)})^{-1}F(y_0^{(j-1)}) + \big(F'(y_0^{(j-1)})^{-1} - F'(x_0)^{-1}\big)F(y_0^{(j-1)}) + (I - E_0)F'(x_0)^{-1}F(y_0^{(j-1)})$$
$$= y_0^{(j-1)} - x^* - F'(y_0^{(j-1)})^{-1}F(y_0^{(j-1)}) + F'(y_0^{(j-1)})^{-1}\big(F'(x_0) - F'(y_0^{(j-1)})\big)F'(x_0)^{-1}F(y_0^{(j-1)}) + \tfrac{3}{2}(A_0 - I)F'(x_0)^{-1}F(y_0^{(j-1)}). \qquad (23)$$
In view of (3), (6) and (20)–(22), the identity (23) can give
$$\|y_0^{(j)} - x^*\| \le \Bigg[\frac{\int_0^1 w((1-\theta)\|y_0^{(j-1)} - x^*\|)\,d\theta}{1 - w_0(\|y_0^{(j-1)} - x^*\|)} + \frac{\bar w^{(j-1)}(\|x_0 - x^*\|)\Big(1 + \int_0^1 w_0(\theta\|y_0^{(j-1)} - x^*\|)\,d\theta\Big)}{(1 - w_0(\|x_0 - x^*\|))(1 - w_0(\|y_0^{(j-1)} - x^*\|))} + \frac{\tfrac{3}{2}\,\bar w^{(0)}(\|x_0 - x^*\|)\Big(1 + \int_0^1 w_0(\theta\|y_0^{(j-1)} - x^*\|)\,d\theta\Big)}{(1 - w_0(\|x_0 - x^*\|))^2}\Bigg]\|y_0^{(j-1)} - x^*\| \le g_j(\|y_0^{(j-1)} - x^*\|)\,\|y_0^{(j-1)} - x^*\| \le \|y_0^{(j-1)} - x^*\|. \qquad (24)$$
Thus, the iterate $y_0^{(j)} \in S(x^*, r)$ and the items (12) hold for $n = 0$ and $j = 2, \dots, m$. Simply exchange $x_0, y_0^{(0)}, \dots, y_0^{(j)}$ by $x_1, y_1^{(0)}, \dots, y_1^{(j)}$, respectively, in the preceding calculations to complete the induction for items (10)–(12). Notice also that all the iterates of the method (2) belong to $S(x^*, r)$. Moreover, the existence of $d$ is guaranteed by (3)–(6) and items (10)–(12). Finally, by letting $n \to +\infty$ in (7), we deduce that $\lim_{n\to\infty} x_n = x^*$. □
The following result provides a region inside which the only solution of the equation F ( x ) = 0 is x * .
Proposition 1. 
Suppose the condition (C3) holds in the ball $S(x^*, R)$ for some $R > 0$ and there exists $R_1 \ge R$ such that
$$\int_0^1 w_0(\theta R_1)\,d\theta < 1. \qquad (25)$$
Define the region $D_1 = S[x^*, R_1] \cap D$. Then, the only solution of the equation $F(x) = 0$ in the region $D_1$ is $x^*$.
Proof. 
Suppose that there exists a solution $z \in D_1$ of the equation $F(x) = 0$ such that $z \neq x^*$. Define the linear operator $H_0 = \int_0^1 F'(x^* + \theta(z - x^*))\,d\theta$. Then, it follows by the condition (C3) and (25) that
$$\|M^{-1}(H_0 - M)\| \le \int_0^1 w_0(\theta\|z - x^*\|)\,d\theta \le \int_0^1 w_0(\theta R_1)\,d\theta < 1.$$
Therefore, the operator $H_0$ is invertible. Finally, from the identity
$$z - x^* = H_0^{-1}\big(F(z) - F(x^*)\big) = H_0^{-1}(0) = 0,$$
we deduce that $z = x^*$. □
Remark 2. 
(1) 
The limit point $r$ can be replaced by $R_0$ in the condition (C5).
(2) 
Under all the conditions (C1)–(C5), one can set $R = r$ and $z = x^*$ in Proposition 1.

3. Semi-Local Convergence

The choice of the initial point $x_0$ is challenging in the general setting of a Banach space. In the special case when $B_0 = B = \mathbb{R}$, one can certainly employ the bisection method [2,3]:
(a)
The midpoint $m = \frac{s + t}{2}$ is taken as an approximation to the solution $x^*$.
(b)
The interval $[s, t]$ is replaced by $[s, m]$ if $f(s)f(m) < 0$, or by $[m, t]$ if $f(m)f(t) < 0$. The convergence of this method can then always be guaranteed.
One can refer to [2,3] and the references therein for more information on the bisection method and its assistance in the determination of initial points. A minimal sketch of the procedure follows.
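Here is a short C++ sketch of this bisection procedure (our illustration; `f` is any continuous scalar function with a sign change on $[s, t]$):

#include <functional>

// Bisection on [s, t]; assumes f is continuous with f(s) * f(t) < 0.
// The returned midpoint can serve as an initial point x0 for method (2).
double bisect(const std::function<double(double)>& f,
              double s, double t, double tol = 1e-8) {
    double fs = f(s);
    while (t - s > tol) {
        const double mid = 0.5 * (s + t);
        const double fm = f(mid);
        if (fs * fm < 0.0) { t = mid; }   // the root lies in [s, mid]
        else { s = mid; fs = fm; }        // the root lies in [mid, t]
    }
    return 0.5 * (s + t);
}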
We shall first recall the definition of a majorizing sequence [4,21,22,23].
Definition 1. 
Let $\{z_n\}$ be a sequence in a complete normed linear space. Suppose that there exists a nonnegative scalar sequence $\{p_n\}$ such that for each $n = 0, 1, 2, \dots$
$$\|z_{n+1} - z_n\| \le p_{n+1} - p_n. \qquad (26)$$
It is worth noting that if this inequality holds, then $p_{n+1} - p_n \ge 0$; hence, the scalar sequence $\{p_n\}$ is nondecreasing. Then, the sequence $\{p_n\}$ is said to be majorizing for $\{z_n\}$. Moreover, if the sequence $\{p_n\}$ converges to some $p^*$, then so does $\{z_n\}$ and
$$\|z_n - z^*\| \le p^* - p_n,$$
where $z^* = \lim_{n\to\infty} z_n$.
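For orientation, we recall a classical instance of this concept (stated only as an illustration; it is not part of the analysis of method (2)). For Newton's method under the Kantorovich assumptions $\|F'(x_0)^{-1}F(x_0)\| \le \eta$ and $\|F'(x_0)^{-1}(F'(u) - F'(v))\| \le L\|u - v\|$, the scalar sequence
$$p_0 = 0, \qquad p_{n+1} = p_n - \frac{\varphi(p_n)}{\varphi'(p_n)}, \qquad \varphi(t) = \frac{L}{2}t^2 - t + \eta,$$
is nondecreasing whenever $2L\eta \le 1$, converges to the smallest zero $p^* = \frac{1 - \sqrt{1 - 2L\eta}}{L}$ of $\varphi$, and majorizes the Newton iterates in the sense of Definition 1 [4,21].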
As in the local case, the analysis relies on some scalar functions. Moreover, the formulas and calculations are the same, but $x^*$, $w_0$, and $w$ are replaced by $x_0$, $v_0$, and $v$, respectively.
  • Suppose:
    (H1)
There exists a continuous and nondecreasing function $v_0 : T \to T$ such that the function $v_0(t) - 1$ has a smallest zero in the interval $T \setminus \{0\}$. Let us denote such a zero by $s$ and set $T_2 = [0, s)$.
    (H2)
There exists a continuous and nondecreasing function $v : T_2 \to T$. Define the scalar sequences $\{\alpha_n^{(0)}\}$, $\{b_n^{(j)}\}$ for $j = 2, \dots, m$; $n = 0, 1, 2, \dots$; $\alpha_0^{(0)} = 0$ and some $b_0^{(0)} \ge 0$ by
$$\bar v_n = v(b_n^{(0)} - \alpha_n^{(0)}) \quad \text{or} \quad v_0(\alpha_n^{(0)}) + v_0(b_n^{(0)}),$$
$$b_n^{(1)} = b_n^{(0)} + \frac{1}{2}\Big(1 + \frac{9}{4}\cdot\frac{\bar v_n}{1 - v_0(\alpha_n^{(0)})}\Big)(b_n^{(0)} - \alpha_n^{(0)}),$$
$$\lambda_n^{(j)} = \int_0^1 v\big((1-\theta)(b_n^{(j-1)} - \alpha_n^{(0)})\big)\,d\theta\,(b_n^{(j-1)} - \alpha_n^{(0)}) + \big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(j-1)} - b_n^{(0)}) + \frac{1}{2}\big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(0)} - \alpha_n^{(0)}),$$
$$b_n^{(j)} = b_n^{(j-1)} + \frac{1}{2(1 - v_0(\alpha_n^{(0)}))}\Big(1 + \frac{3\bar v_n}{1 - v_0(\alpha_n^{(0)})}\Big)\lambda_n^{(j)},$$
$$\xi_{n+1} = \int_0^1 v\big((1-\theta)(\alpha_{n+1}^{(0)} - \alpha_n^{(0)})\big)\,d\theta\,(\alpha_{n+1}^{(0)} - \alpha_n^{(0)}) + \big(1 + v_0(\alpha_n^{(0)})\big)(\alpha_{n+1}^{(0)} - b_n^{(0)}) + \frac{1}{2}\big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(0)} - \alpha_n^{(0)}),$$
$$b_{n+1}^{(0)} = \alpha_{n+1}^{(0)} + \frac{2}{3}\cdot\frac{\xi_{n+1}}{1 - v_0(\alpha_{n+1}^{(0)})},$$
and
$$\alpha_{n+1}^{(0)} = b_n^{(m)}. \qquad (27)$$
    These sequences are shown to be majorizing for the sequences generated by method (2). But let us first present a convergence result for them.
    (H3)
There exists $s_0 \in [0, s)$ such that for each $n = 0, 1, 2, \dots$ and $i = 0, 1, 2, \dots, m$
$$v_0(\alpha_n^{(0)}) < 1, \qquad \alpha_n^{(0)} \le s_0 \qquad \text{and} \qquad b_n^{(i)} \le s_0.$$
This condition and (27) imply that these sequences are nondecreasing and bounded from above by $s_0$, and as such they converge to their unique least upper bound $\alpha^* \in [0, s_0]$, with $0 \le \alpha_n^{(0)} \le b_n^{(0)} \le s_0$ and $b_n^{(j-1)} \le b_n^{(j)} \le s_0$. The functions $v_0$ and $v$ relate to the operators on the method (2).
    (H4)
There exist $x_0 \in D$ and an invertible operator $M \in L(B_0, B)$ such that for each $u \in D$
$$\|M^{-1}(F'(u) - M)\| \le v_0(\|u - x_0\|).$$
Define the region $D_2 = S(x_0, s_0) \cap D$. By this condition and (H1), it follows for $u = x_0$ that $\|M^{-1}(F'(x_0) - M)\| \le v_0(0) < 1$. So the linear operator $F'(x_0)$ is invertible. Therefore, we can take $b_0^{(0)} \ge \frac{2}{3}\|F'(x_0)^{-1}F(x_0)\|$.
    (H5)
$\|M^{-1}(F'(u_1) - F'(u))\| \le v(\|u_1 - u\|)$ for each $u, u_1 \in D_2$.
    (H6)
$S[x_0, \alpha^*] \subset D$.
Remark 3. 
Possible selections for $M$ are $M = I$ or $M = F'(\bar x)$ for some $\bar x \in D$ with $\bar x \neq x_0$.
  • The semi-local analysis for the method (2) follows next.
Theorem 2. 
Suppose the conditions (H1)–(H6) hold. Then, the sequence $\{x_n\}$ generated by method (2) exists and converges to some $x^* \in S[x_0, \alpha^*]$ such that
$$\|x^* - x_n\| \le \alpha^* - \alpha_n^{(0)}. \qquad (28)$$
Proof. 
We shall establish by induction the following items for $i = 2, \dots, m$ and each $n = 0, 1, 2, \dots$:
$$\|y_n^{(0)} - x_n\| \le b_n^{(0)} - \alpha_n^{(0)}, \qquad (29)$$
$$\|y_n^{(1)} - y_n^{(0)}\| \le b_n^{(1)} - b_n^{(0)}, \qquad (30)$$
$$\|y_n^{(i)} - y_n^{(i-1)}\| \le b_n^{(i)} - b_n^{(i-1)}. \qquad (31)$$
Notice that for $i = m$, (31) gives, by the method (2) and (27),
$$\|x_{n+1} - x_n\| \le \alpha_{n+1}^{(0)} - \alpha_n^{(0)}. \qquad (32)$$
Item (29) holds for $n = 0$ by the definition of $b_0^{(0)}$ in (H4) and (27), since
$$\|y_0^{(0)} - x_0\| = \tfrac{2}{3}\|F'(x_0)^{-1}F(x_0)\| \le b_0^{(0)} - \alpha_0^{(0)} = b_0^{(0)} < \alpha^*.$$
Moreover, the iterate $y_0^{(0)} \in S(x_0, \alpha^*)$. Let $u \in S(x_0, \alpha^*)$. Then, we have by (H2) and (H4)
$$\|M^{-1}(F'(u) - M)\| \le v_0(\|u - x_0\|) \le v_0(\alpha^*) < 1,$$
so $F'(u)^{-1} \in L(B, B_0)$ and
$$\|F'(u)^{-1}M\| \le \frac{1}{1 - v_0(\|u - x_0\|)}, \qquad (33)$$
and the iterates $y_0^{(0)}, y_0^{(1)}, \dots, y_0^{(m)} = x_1$ exist by the method (2) for $n = 0$.
Then, we can write by the second substep and the induction hypotheses
$$y_n^{(1)} - y_n^{(0)} = -\tfrac{1}{3}F'(x_n)^{-1}F(x_n) + \tfrac{3}{4}(A_n - I)F'(x_n)^{-1}F(x_n), \qquad (34)$$
which can give
$$\|y_n^{(1)} - y_n^{(0)}\| \le \tfrac{1}{2}\|y_n^{(0)} - x_n\| + \tfrac{9}{8}\cdot\frac{\bar v_n}{1 - v_0(\alpha_n^{(0)})}\|y_n^{(0)} - x_n\| \le \tfrac{1}{2}(b_n^{(0)} - \alpha_n^{(0)}) + \tfrac{9}{8}\cdot\frac{\bar v_n}{1 - v_0(\alpha_n^{(0)})}(b_n^{(0)} - \alpha_n^{(0)}) = b_n^{(1)} - b_n^{(0)},$$
where we also used the estimates
$$\|F'(x_n)^{-1}F(x_n)\| = \tfrac{3}{2}\|y_n^{(0)} - x_n\| \le \tfrac{3}{2}(b_n^{(0)} - \alpha_n^{(0)}),$$
$$\|A_n - I\| = \|F'(x_n)^{-1}(F'(y_n^{(0)}) - F'(x_n))\| \le \|F'(x_n)^{-1}M\|\,\|M^{-1}(F'(y_n^{(0)}) - F'(x_n))\| \le \frac{\bar v_n}{1 - v_0(\|x_n - x_0\|)} \le \frac{\bar v_n}{1 - v_0(\alpha_n^{(0)})}.$$
We also have
$$\|y_n^{(1)} - x_0\| \le \|y_n^{(1)} - y_n^{(0)}\| + \|y_n^{(0)} - x_0\| \le b_n^{(1)} - b_n^{(0)} + b_n^{(0)} - \alpha_0^{(0)} = b_n^{(1)} < \alpha^*.$$
So, the iterate $y_n^{(1)} \in S(x_0, \alpha^*)$.
By the first substep of the method (2), we have the identity
$$F(y_n^{(i-1)}) = F(y_n^{(i-1)}) - F(x_n) - \tfrac{3}{2}F'(x_n)(y_n^{(0)} - x_n) = \big[F(y_n^{(i-1)}) - F(x_n) - F'(x_n)(y_n^{(i-1)} - x_n)\big] + F'(x_n)(y_n^{(i-1)} - y_n^{(0)}) - \tfrac{1}{2}F'(x_n)(y_n^{(0)} - x_n),$$
leading to
$$\|M^{-1}F(y_n^{(i-1)})\| \le \int_0^1 v\big((1-\theta)(b_n^{(i-1)} - \alpha_n^{(0)})\big)\,d\theta\,(b_n^{(i-1)} - \alpha_n^{(0)}) + \big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(i-1)} - b_n^{(0)}) + \tfrac{1}{2}\big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(0)} - \alpha_n^{(0)}) = \lambda_n^{(i)},$$
so, by the third, fourth, …, $i$-th substeps of the method (2),
$$\|y_n^{(i)} - y_n^{(i-1)}\| \le \|E_n\|\,\|F'(x_n)^{-1}M\|\,\|M^{-1}F(y_n^{(i-1)})\| \le \tfrac{1}{2}\|F'(x_n)^{-1}M\|\,\|M^{-1}F(y_n^{(i-1)})\| + \tfrac{3}{2}\|A_n - I\|\,\|F'(x_n)^{-1}M\|\,\|M^{-1}F(y_n^{(i-1)})\| \le \frac{1}{2(1 - v_0(\alpha_n^{(0)}))}\Big(1 + \frac{3\bar v_n}{1 - v_0(\alpha_n^{(0)})}\Big)\lambda_n^{(i)} = b_n^{(i)} - b_n^{(i-1)}$$
and
$$\|y_n^{(i)} - x_0\| \le \|y_n^{(i)} - y_n^{(i-1)}\| + \|y_n^{(i-1)} - x_0\| \le b_n^{(i)} - b_n^{(i-1)} + b_n^{(i-1)} - \alpha_0^{(0)} = b_n^{(i)} < \alpha^*.$$
Thus, the items (31) hold and the iterate $y_n^{(i)} \in S(x_0, \alpha^*)$.
Exchanging $n$ by $n + 1$ in the first substep, we have
$$F(x_{n+1}) = F(x_{n+1}) - F(x_n) - \tfrac{3}{2}F'(x_n)(y_n^{(0)} - x_n) = \big[F(x_{n+1}) - F(x_n) - F'(x_n)(x_{n+1} - x_n)\big] + F'(x_n)(x_{n+1} - y_n^{(0)}) - \tfrac{1}{2}F'(x_n)(y_n^{(0)} - x_n), \qquad (35)$$
so
$$\|M^{-1}F(x_{n+1})\| \le \int_0^1 v\big((1-\theta)(\alpha_{n+1}^{(0)} - \alpha_n^{(0)})\big)\,d\theta\,(\alpha_{n+1}^{(0)} - \alpha_n^{(0)}) + \big(1 + v_0(\alpha_n^{(0)})\big)(\alpha_{n+1}^{(0)} - b_n^{(0)}) + \tfrac{1}{2}\big(1 + v_0(\alpha_n^{(0)})\big)(b_n^{(0)} - \alpha_n^{(0)}) = \xi_{n+1}.$$
Then, it also follows by (27), (33) (for $u = x_{n+1}$) and the first substep of the method (2) that
$$\|y_{n+1}^{(0)} - x_{n+1}\| \le \tfrac{2}{3}\|F'(x_{n+1})^{-1}M\|\,\|M^{-1}F(x_{n+1})\| \le \frac{2}{3}\cdot\frac{\xi_{n+1}}{1 - v_0(\alpha_{n+1}^{(0)})} = b_{n+1}^{(0)} - \alpha_{n+1}^{(0)}$$
and
$$\|y_{n+1}^{(0)} - x_0\| \le \|y_{n+1}^{(0)} - x_{n+1}\| + \|x_{n+1} - x_0\| \le b_{n+1}^{(0)} - \alpha_{n+1}^{(0)} + \alpha_{n+1}^{(0)} - \alpha_0^{(0)} = b_{n+1}^{(0)} < \alpha^*.$$
The induction for items (29)–(32) is completed, and all iterates $\{x_n\}$ belong to $S(x_0, \alpha^*)$. By the triangle inequality, (32) gives
$$\|x_{n+q} - x_n\| \le \alpha_{n+q}^{(0)} - \alpha_n^{(0)}, \qquad q = 0, 1, 2, \dots \qquad (36)$$
Therefore, the sequence $\{x_n\}$ is a Cauchy sequence in the Banach space $B_0$ and, as such, it converges to some $x^* \in S[x_0, \alpha^*]$ (since $S[x_0, \alpha^*]$ is a closed set). Moreover, by letting $n \to \infty$ in (35) and using the continuity of the operator $F$, we obtain $F(x^*) = 0$. Finally, by letting $q \to \infty$ in (36), we show item (28). □
As in the local case, a domain is determined containing only one solution of the equation $F(x) = 0$.
Proposition 2. 
Suppose there exists a solution $z \in S(x_0, \mu)$ of the equation $F(x) = 0$ for some $\mu > 0$, the condition (H4) holds in the ball $S(x_0, \mu)$, and there exists $\mu_1 \ge \mu$ such that
$$\int_0^1 v_0\big((1-\theta)\mu + \theta\mu_1\big)\,d\theta < 1. \qquad (37)$$
Define the domain $D_4 = S[x_0, \mu_1] \cap D$. Then, $z$ is the only solution of the equation $F(x) = 0$ in the domain $D_4$.
Proof. 
Suppose there exists a solution $z_1 \in D_4$ of the equation $F(x) = 0$ such that $z_1 \neq z$. Define the linear operator
$$L = \int_0^1 F'(z + \theta(z_1 - z))\,d\theta.$$
Then, it follows by (H4) and (37) that
$$\|M^{-1}(L - M)\| \le \int_0^1 v_0\big((1-\theta)\|z - x_0\| + \theta\|z_1 - x_0\|\big)\,d\theta \le \int_0^1 v_0\big((1-\theta)\mu + \theta\mu_1\big)\,d\theta < 1.$$
So $L$ is invertible, and from
$$z_1 - z = L^{-1}\big(F(z_1) - F(z)\big) = L^{-1}(0) = 0,$$
we conclude that $z_1 = z$. □
Remark 4. 
(1) 
The limit point $\alpha^*$ can be replaced by $s$ in the condition (H6).
(2) 
Under all the conditions (H1)–(H6), we can take $z = x^*$ and $\mu = \alpha^*$ in Proposition 2.
(3) 
The sufficient semi-local convergence conditions (H1)–(H6) are very general. Clearly, if the functions $v_0$ and $v$ are specialized further, more concrete results can be obtained, including the rate and order of convergence. But in this paper, we wanted to minimize the limitations of our approach.

4. Numerical Results

In this section, we present numerical results to validate the theoretical analysis of the preceding sections. We apply the proposed $(m+1)$-step method (2) to several nonlinear systems of equations to demonstrate its efficiency and confirm its high order of convergence. The numerical experiments were performed in C++ on a machine equipped with an Intel(R) Core(TM) i7-10510U CPU @ 1.80 GHz. For higher-precision calculations, we used the Boost Multiprecision Library, configured to 200-digit floating-point arithmetic, in order to obtain results close to the theoretical ones.
To validate our results, we tested the implemented approach on several test examples. As in paper [20], for all test cases the iteration process is terminated when the following condition is met:
$$\|x_{k+1} - x_k\| + \|F(x_k)\| < 10^{-100}.$$
For each example, we track the error at each iteration. To numerically verify the theoretical convergence rate, we use the computational order of convergence (COC), approximated by the following formula based on four consecutive iterates [17]:
$$\mathrm{COC} = \frac{\log\big(\|x_{k+2} - x_{k+1}\| / \|x_{k+1} - x_k\|\big)}{\log\big(\|x_{k+1} - x_k\| / \|x_k - x_{k-1}\|\big)},$$
where $x_{k-1}$, $x_k$, $x_{k+1}$, and $x_{k+2}$ are iterates close to the final solution.
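The COC can be computed directly from the stored iterates. A minimal sketch (our illustration, in the same C++/Eigen setting as the sketch after Algorithm 1; it assumes at least four iterates have been stored):

#include <Eigen/Dense>
#include <cmath>
#include <vector>

// Computational order of convergence from the last four stored iterates,
// following the formula above; requires x.size() >= 4.
double coc(const std::vector<Eigen::VectorXd>& x) {
    const std::size_t k = x.size() - 3;  // so that x[k-1], x[k], x[k+1], x[k+2] exist
    const double e2 = (x[k + 2] - x[k + 1]).norm();
    const double e1 = (x[k + 1] - x[k]).norm();
    const double e0 = (x[k] - x[k - 1]).norm();
    return std::log(e2 / e1) / std::log(e1 / e0);
}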
We have tested the method (2) on several examples used in [20] and also added some general functions, suggested in [28], that are commonly used for testing algorithms for solving systems of nonlinear equations. We used the same initial approximations as in the previously mentioned articles. The numerical results are presented for problems including the following:
Example 1.
A system of 2 equations given by $e^{x_1^2} + 8x_1\sin x_2 = 0$, $x_1 + x_2 - 1 = 0$, with initial approximation $x_0 = \big(\frac{4}{10}, \frac{4}{10}\big)^T$ (see [32]).
Example 2. 
A system of 3 equations [16] given by $10x_1 + \sin(x_1 + x_2) - 1 = 0$, $8x_2 - \cos^2(x_3 - x_2) - 1 = 0$, $12x_3 + \sin(x_3) - 1 = 0$, with initial approximation $x_0 = (0, 1, 0)^T$.
Example 3. 
A system of $n$ equations [33] given by
$$x_i^2 x_{i+1} - 1 = 0, \quad i = 1, \dots, n-1, \qquad x_n^2 x_1 - 1 = 0,$$
with initial approximation $x_0 = \big(\frac{18}{10}, \dots, \frac{18}{10}\big)^T$, where the solution is $x_i^* = 1$ for $i = 1, \dots, n$. We select $n = 300$.
Example 4. 
A system of three equations:
$$e^{x_1} - 1 = 0, \qquad \frac{e-1}{2}x_2^2 + x_2 = 0, \qquad x_3 = 0,$$
with $x_0 = (3.5, 0.5, 2.5)^T$ and $x^* = (0, 0, 0)^T$.
Example 5. 
Discrete boundary value function [29]:
$$f_i(x) = 2x_i - x_{i-1} - x_{i+1} + \frac{h^2(x_i + t_i + 1)^3}{2},$$
where $h = \frac{1}{n+1}$, $t_i = ih$ for $i = 0, \dots, n$, and the boundary values are $x_0 = x_{n+1} = 0$. The initial approximation is $x_0 = (\xi_j)$, where $\xi_j = t_j(t_j - 1)$. We take $n = 300$. A sketch of this residual and its tridiagonal Jacobian follows.
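A minimal C++/Eigen sketch of this residual and its tridiagonal Jacobian (our illustration; 0-based indexing, with the boundary values $x_0 = x_{n+1} = 0$ handled implicitly):

#include <Eigen/Dense>
#include <cmath>

// Example 5: f_i(x) = 2 x_i - x_{i-1} - x_{i+1} + h^2 (x_i + t_i + 1)^3 / 2,
// with x_0 = x_{n+1} = 0, h = 1/(n+1), t_i = i h.
Eigen::VectorXd F_bvp(const Eigen::VectorXd& x) {
    const int n = static_cast<int>(x.size());
    const double h = 1.0 / (n + 1);
    Eigen::VectorXd f(n);
    for (int i = 0; i < n; ++i) {
        const double ti = (i + 1) * h;
        const double left  = (i > 0)     ? x(i - 1) : 0.0;  // boundary value x_0 = 0
        const double right = (i < n - 1) ? x(i + 1) : 0.0;  // boundary value x_{n+1} = 0
        f(i) = 2.0 * x(i) - left - right
             + 0.5 * h * h * std::pow(x(i) + ti + 1.0, 3);
    }
    return f;
}

// Tridiagonal Jacobian: dF_i/dx_i = 2 + (3/2) h^2 (x_i + t_i + 1)^2, dF_i/dx_{i+-1} = -1.
Eigen::MatrixXd J_bvp(const Eigen::VectorXd& x) {
    const int n = static_cast<int>(x.size());
    const double h = 1.0 / (n + 1);
    Eigen::MatrixXd Jm = Eigen::MatrixXd::Zero(n, n);
    for (int i = 0; i < n; ++i) {
        const double ti = (i + 1) * h;
        Jm(i, i) = 2.0 + 1.5 * h * h * std::pow(x(i) + ti + 1.0, 2);
        if (i > 0)     Jm(i, i - 1) = -1.0;
        if (i < n - 1) Jm(i, i + 1) = -1.0;
    }
    return Jm;
}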
Example 6. 
Discrete integral equation function [29]:
$$f_i(x) = x_i + \frac{h}{2}\Big[(1 - t_i)\sum_{j=1}^{i} t_j(x_j + t_j + 1)^3 + t_i\sum_{j=i+1}^{n}(1 - t_j)(x_j + t_j + 1)^3\Big],$$
where $h = \frac{1}{n+1}$, $t_i = ih$ for $i = 0, \dots, n$, and the boundary values are $x_0 = x_{n+1} = 0$. The initial approximation is $x_0 = (\xi_j)$, where $\xi_j = t_j(t_j - 1)$. We take $n = 300$.
Example 7. 
Broyden tridiagonal function [28]:
$$f_i(x) = (3 - 2x_i)x_i - x_{i-1} - 2x_{i+1} + 1,$$
where $x_0 = x_{n+1} = 0$ and the initial approximation is $x_0 = (-1, \dots, -1)^T$. We take $n = 300$.
Example 8. 
Broyden banded function [28]:
$$f_i(x) = x_i(2 + 5x_i^2) + 1 - \sum_{j \in J_i} x_j(1 + x_j),$$
where $J_i = \{j : j \neq i,\ \max(1, i - m_l) \le j \le \min(n, i + m_u)\}$, $m_l = 5$, $m_u = 1$, and the initial approximation is $x_0 = (-1, \dots, -1)^T$. We take $n = 300$.
The results for each problem are summarized in Table 1, showing the total number of iterations required to meet the stopping criterion ($k$), the elapsed time (e-time), the norm of the difference between the approximate solutions $x_k$ and $x_{k-1}$ ($\|x_k - x_{k-1}\|$), the norm of $F_k$ ($\|F_k\|$), and the calculated COC. This enables a comprehensive comparison of the method's performance for various values of $m$. From the table, we can conclude that the computational order of convergence is quite close to the theoretical one for almost all examples (the theoretical order of convergence for $m = 2, 3, 4$ is 5, 7, and 9, respectively). Both $\|F_k\|$ and $\|x_k - x_{k-1}\|$ decrease well below the prescribed accuracy. Increasing the value of $m$ does not always improve the run time, as it may not reduce the number of iterations. For instance, in Example 8 there is no benefit in increasing $m$ from 3 to 4, since the iteration count is not reduced; however, increasing it from 2 to 3 pays off. In Example 6, it is more important to increase $m$ from 3 to 4, as this leads to a significant reduction in run time.
We also prepared plots that show the values of the previously mentioned norms at each iteration on a logarithmic scale (see Figure 1 and Figure 2). They show graphically the convergence of the method (2) for different values of $m$ and allow a comparison across all examples (colors represent different examples). The lines in Figure 1 show how quickly the function approaches zero, in other words, how quickly the algorithm converges to an approximate solution with accuracy $10^{-100}$. Increasing $m$ from 2 to 4 leads either to faster convergence or to a smaller error. For instance, Example 4 requires one fewer iteration with each increase of $m$ by one. Example 7 does not reduce the number of iterations when $m$ increases from 3 to 4, but the norm of the function is visibly closer to zero. From all these plots we can also conclude that the most significant step the algorithm makes is at the last iteration. A similar situation is shown in Figure 2. Notice that the convergence plots for Example 5 and Example 6 are identical for all tested values of $m$ ($m = 2, 3, 4$).
We have also provided an example of a nonlinear equation involving a mapping $F : \mathbb{R}^3 \to \mathbb{R}^3$ where the majorant functions $w_0$ and $w$ are determined without recourse to high-order differentiability (see Example 9). Moreover, this example yields error bounds on the distances $\|x_n - x^*\|$ that involve Lipschitz-type constants. Notice that such a priori bounds are not available in [20] or other studies using Taylor series.
Example 9. 
Let us consider the mapping $F : \mathbb{R}^3 \to \mathbb{R}^3$, with $D = S[0, 1]$ and $s = (s_1, s_2, s_3)^T$, defined by
$$F(s) = \Big(e^{s_1} - 1,\ \frac{e-1}{2}s_2^2 + s_2,\ s_3\Big)^T.$$
Then, notice that $s^* = 0$ solves the equation $F(s) = 0$. Moreover, the Fréchet derivative of the mapping $F$ is given by
$$F'(s) = \begin{pmatrix} e^{s_1} & 0 & 0 \\ 0 & (e-1)s_2 + 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}.$$
It follows that $F'(s^*) = I$. We shall take $M = F'(s^*)$. Furthermore, the conditions (C1), (C3) and (C4) are validated if we choose $w_0(s) = (e-1)s$, $R_0 = \frac{1}{e-1}$ and $w(s) = e^{\frac{1}{e-1}}s$. For this case, we can calculate the radius of convergence. In the case $m = 2$, we must calculate $g_0(t)$, $g_1(t)$ and $g_2(t)$ using the formulas from (C2). After obtaining these functions, we find the $r_0, r_1, r_2$ for which $g_j(r_j) - 1 = 0$, $j = 0, 1, 2$. Then we find the radius of convergence $r = \min(r_j)$. Notice that there are two options for choosing $\bar w^{(s)}(t)$:
$$\bar w^{(s)}(t) = w\big((1 + g_s(t))t\big) \quad \text{or} \quad w_0(t) + w_0(g_s(t)t).$$
We calculated $r_j$ and $r$ for both options and collected the results in Table 2. So, when we take $\bar w^{(s)}(t) = w((1 + g_s(t))t)$, the radius of convergence is larger in comparison with the selection $\bar w^{(s)}(t) = w_0(t) + w_0(g_s(t)t)$.
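The value $r_0$ in Table 2 can be reproduced with a few lines of code. A minimal C++ sketch (our illustration): for $w_0(t) = (e-1)t$ and $w(t) = e^{\frac{1}{e-1}}t$ the integrals in $g_0$ are available in closed form, and a bisection on $g_0(t) = 1$ over $(0, R_0)$ recovers $r_0 \approx 0.228990$.

#include <cmath>

// g_0 of condition (C2) for Example 9, with w0(t) = (e-1) t and w(t) = e^{1/(e-1)} t,
// so that int_0^1 w((1-q) t) dq = int_0^1 w(q t) dq = c t / 2 with c = e^{1/(e-1)}.
double g0(double t) {
    const double e = std::exp(1.0);
    const double c = std::exp(1.0 / (e - 1.0));
    const double num = 0.5 * c * t + (1.0 / 3.0) * (1.0 + 0.5 * c * t);
    return num / (1.0 - (e - 1.0) * t);
}

// Bisection on g_0(t) = 1 over (0, R0), R0 = 1/(e-1); g_0 increases from 1/3 to +infinity.
double radius_r0() {
    const double e = std::exp(1.0);
    double lo = 0.0, hi = 1.0 / (e - 1.0) - 1e-12;
    while (hi - lo > 1e-9) {
        const double mid = 0.5 * (lo + hi);
        if (g0(mid) < 1.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);  // approximately 0.228990, matching Table 2
}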
We also wanted to test our method on some ordinary and partial differential equations. For this purpose, we take the Lane–Emden equation [30], which reads
$$x''(t) + \frac{2}{t}x'(t) + x^5(t) = 0, \qquad x(0) = 1, \quad x'(0) = 0, \qquad (39)$$
and the Klein–Gordon equation [26]:
$$u_{tt}(z, t) - c^2 u_{zz}(z, t) + f(u) = p(z, t), \qquad -\infty < z < \infty, \quad t > 0. \qquad (40)$$
We compare the results with the $(s-1)$-step version of the Newton–Raphson method (NRM) presented in [35]:
$$y_1 = x_k - F'(x_k)^{-1}F(x_k), \quad y_2 = y_1 - F'(x_k)^{-1}F(y_1), \quad y_3 = y_2 - F'(x_k)^{-1}F(y_2), \quad \dots, \quad x_{k+1} = y_{s-1} = y_{s-2} - F'(x_k)^{-1}F(y_{s-2}). \qquad (41)$$
We also employ the Chebyshev pseudo-spectral collocation method for the temporal and spatial discretizations, as was done in the previously mentioned article, which leads to
$$F(x) = D_t^2 x + \operatorname{diag}\Big(\frac{2}{t}\Big)D_t x + x^5 = 0$$
for (39), and
$$F(u) = \big((I_z \otimes D_t^2) - c^2(D_z^2 \otimes I_t)\big)u + f(u) = 0$$
for (40).
We tested these problems in a slightly different environment; we did not use a high-precision library. We made this choice so that our results would be more representative of a realistic computing environment that relies on standard hardware floating-point types. We also set the dimension of the system to 500 for (39) and 4420 for (40) to test our method on high-dimensional problems, and we used the same initial approximation and domain as in [35]. For (39), the method (2) converges to the solution with accuracy $O(10^{-5})$ after 16 steps, while Newton–Raphson needs 23. For (40), it took only three steps for the Newton-Jarratt-like method to converge with such accuracy, while Newton–Raphson took four. The results of the described experiments are presented in Table 3 and Table 4, in the same format as in [35] for comparison purposes. As one might notice, each iteration of the method (2) is rather expensive in comparison with NRM, since it requires an additional Jacobian evaluation, matrix-vector multiplications, and the solution of a linear system with a matrix right-hand side. In the case of the partial differential equation (40), this plays a crucial role: both algorithms converge in a small number of steps, which also leads to the same theoretical order of convergence, and because of these costly operations it takes more time for the method (2) to converge to the previously mentioned accuracy. However, in the case of (39), the order of convergence of the method (2) is greater than that of NRM, which matters more for the overall run time. So, in this case, the method (2) takes less time to converge.

5. Conclusions

In this work, we have presented a significant extension of the convergence analysis for the class of multi-step Newton-Jarratt-like methods of order $2m+1$ initially proposed in [20]. While the computational efficiency of their scheme is noteworthy, its practical application was limited by a theoretical framework that relied on strong conditions, namely the existence and boundedness of derivatives up to the fifth order. Other limitations of the Taylor series approach, and how to handle them, are given in items (P1)–(P5) and (P1)′–(P5)′ of the Introduction.
The primary contribution of this paper is the removal of these restrictive assumptions. We have developed a new convergence analysis using only conditions on the first Fréchet derivative. This approach, based on generalized Lipschitz-type continuity, not only confirms the local convergence of the method but does so under a much weaker and more practical set of hypotheses. This significantly broadens the range of nonlinear problems to which this efficient method can be applied with confidence.
Furthermore, we have established the first semi-local convergence analysis for this family of methods. By employing the concept of majorizing sequences, we provide criteria that guarantee convergence from an initial guess without prior knowledge of the existence of a solution. This is a crucial step forward, as it provides a practical tool for verifying the method’s applicability to real-world problems where the solution’s location is unknown.
The theoretical results were rigorously validated through numerical experiments. We expanded the range of challenging examples for testing the method (2) by using the large-scale problems from [28], thereby demonstrating the effectiveness of the method. The computational order of convergence (COC) was found to be in close agreement with the theoretical order $2m+1$. We also tested the method on ordinary and partial differential equations and compared the results with a well-known existing method.
In summary, by providing a more robust theoretical foundation with both local and semi-local convergence under weaker assumptions, our work substantially extends the applicability and utility of this powerful class of Newton-Jarratt-like methods for solving systems of nonlinear equations. The ideas in this paper shall be used to extend the applicability of other methods in our future research [11,12,15,22,24,25,26,27,28,29,30,31,32,33,34,35,36].

Author Contributions

Conceptualization, I.K.A., S.S. and M.S.; methodology, I.K.A., S.S. and M.S.; software, I.K.A., S.S. and M.S.; validation, I.K.A., S.S. and M.S.; formal analysis, I.K.A., S.S. and M.S.; investigation, I.K.A., S.S. and M.S.; resources, I.K.A., S.S. and M.S.; data curation, I.K.A., S.S. and M.S.; writing—original draft preparation, I.K.A., S.S. and M.S.; writing—review and editing, I.K.A., S.S. and M.S.; visualization, I.K.A., S.S. and M.S.; supervision, I.K.A., S.S. and M.S.; project administration, I.K.A., S.S. and M.S.; funding acquisition, I.K.A., S.S. and M.S. All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Shakhno, S. Extended Local Convergence for the Combined Newton-Kurchatov Method Under the Generalized Lipschitz Conditions. Mathematics 2019, 7, 207.
  2. Costabile, F.; Gualtieri, M.; Capizzano, S. An iterative method for the computation of the solutions of nonlinear equations. Calcolo 1999, 36, 17–34.
  3. Nisha, S.; Parida, P.K. An improved bisection Newton-like method for enclosing simple zeros of nonlinear equations. SeMA J. 2015, 72, 83–92.
  4. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970.
  5. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice-Hall: Hoboken, NJ, USA, 1964.
  6. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
  7. Darvishi, M.T.; Barati, A. Super cubic iterative methods to solve systems of nonlinear equations. Appl. Math. Comput. 2007, 188, 1678–1685.
  8. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169.
  9. Noor, M.A.; Saleem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106.
  10. Xiao, X.; Yin, H. A new class of methods with higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 264, 300–309.
  11. Behl, R.; Cordero, A.; Motsa, S.S.; Torregrosa, J.R. Stable high-order iterative methods for solving nonlinear models. Appl. Math. Comput. 2017, 303, 70–80.
  12. Cordero, A.; Feng, L.; Magreñán, Á.A.; Torregrosa, J.R. A new fourth-order family for solving nonlinear problems and its dynamics. J. Math. Chem. 2015, 53, 893–910.
  13. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett. 2012, 25, 2369–2374.
  14. Esmaeili, H.; Ahmadi, M. An efficient three-step method to solve system of non linear equations. Appl. Math. Comput. 2015, 266, 1093–1101.
  15. Sharma, J.R.; Sharma, R.; Bahl, A. An improved Newton-Traub composition for solving systems of nonlinear equations. Appl. Math. Comput. 2016, 290, 98–110.
  16. Sharma, J.R.; Arora, H. An efficient derivative-free family of seventh order methods for systems of nonlinear equations. SeMA J. 2016, 73, 39–75.
  17. Weerakoon, S.; Fernando, T.G.I. A Variant of Newton's Method with Accelerated Third-Order Convergence. Appl. Math. Lett. 2000, 13, 87–93.
  18. Xiao, X.Y.; Yin, H.W. Increasing the order of convergence for iterative methods to solve nonlinear systems. Calcolo 2016, 53, 285–300.
  19. Xiao, X.; Yin, H. Accelerating the convergence speed of iterative methods for solving nonlinear systems. Appl. Math. Comput. 2018, 333, 8–19.
  20. Sharma, J.R.; Kumar, S. A class of accurate Newton–Jarratt-like methods with applications to nonlinear models. Comput. Appl. Math. 2022, 41, 46.
  21. Argyros, I.K. The Theory and Application of Iteration Methods, 2nd ed.; Taylor and Francis: Boca Raton, FL, USA, 2022.
  22. Argyros, I.K.; Shakhno, S. Extended Two-Step-Kurchatov Method for Solving Banach Space Valued Nondifferentiable Equations. Int. J. Appl. Comput. Math. 2020, 6, 2.
  23. Argyros, I.K.; Shakhno, S.; Yarmola, H. Two-Step Solver for Nonlinear Equations. Symmetry 2019, 11, 128.
  24. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93.
  25. Cordero, A.; Lotfi, T.; Bakhtiari, P.; Mahdiani, K.; Torregrosa, J.R. Some new efficient multipoint iterative methods for solving nonlinear systems of equations. Int. J. Comput. Math. 2015, 92, 1921–1934.
  26. Jang, T.S. An integral equation formalism for solving the nonlinear Klein–Gordon equation. Appl. Math. Comput. 2014, 243, 322–338.
  27. Madhu, K.; Elango, A.; Landry, R.J.; Al-arydah, M. New Multi-Step Iterative Methods for Solving Systems of Nonlinear Equations and Their Application on GNSS Pseudorange Equations. Sensors 2020, 20, 5976.
  28. Moré, J.J.; Garbow, B.S.; Hillstrom, K.H. Testing Unconstrained Optimization Software. ACM Trans. Math. Softw. 1981, 7, 17–41.
  29. Moré, J.J.; Cosnard, M.Y. Numerical solution of nonlinear equations. ACM Trans. Math. Softw. 1979, 5, 64–85.
  30. Motsa, S.S.; Shateyi, S. New Analytic Solution to the Lane–Emden Equation of Index 2. Math. Probl. Eng. 2012, 614796.
  31. Regmi, S. Optimized Iterative Methods with Applications in Diverse Disciplines; Nova Science Publisher: New York, NY, USA, 2021.
  32. Sharma, J.R.; Arora, H. Improved Newton-like methods for solving systems of nonlinear equations. SeMA J. 2016, 74, 147–163.
  33. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comput. Appl. Math. 2016, 35, 269–284.
  34. Sharma, J.R.; Gupta, P. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl. 2014, 67, 591–601.
  35. Ullah, M.Z.; Serra-Capizzano, S.; Ahmad, F.; Al-Aidarous, E.S. Higher order multi-step iterative method for computing the numerical solution of systems of nonlinear equations: Application to nonlinear PDEs and ODEs. Appl. Math. Comput. 2015, 269, 972–987.
  36. Usman, M.; Iqbal, J.; Khan, A.; Ullah, I.; Khan, H.; Alzabut, J.; Alkhawar, H.M. A new iterative multi-step method for solving nonlinear equation. MethodsX 2025, 15, 103394.
Figure 1. Convergence analysis of $\|F_k\|$ for different values of $m$.
Figure 2. Convergence analysis of $\|x_k - x_{k-1}\|$ for different values of $m$.
Table 1. Performance of the $(m+1)$-step Newton-Jarratt method (2) (Examples 1–8).

Example 1          | m = 2           | m = 3           | m = 4
k                  | 5               | 5               | 4
COC                | 5.010           | 6.995           | 9.003
e-time (s)         | 0.014           | 0.014           | 0.013
||F_k||            | 1.61 × 10^−206  | 1.61 × 10^−206  | 1.61 × 10^−206
||x_k − x_{k−1}||  | 3.11 × 10^−203  | 2.45 × 10^−207  | 1.16 × 10^−160

Example 2          | m = 2           | m = 3           | m = 4
k                  | 4               | 4               | 4
COC                | 4.975           | 6.635           | 8.680
e-time (s)         | 0.018           | 0.020           | 0.021
||F_k||            | 3.28 × 10^−207  | 6.24 × 10^−211  | 8.55 × 10^−215
||x_k − x_{k−1}||  | 2.52 × 10^−154  | 1.22 × 10^−207  | 6.12 × 10^−211

Example 3          | m = 2           | m = 3           | m = 4
k                  | 5               | 5               | 4
COC                | 5.000           | 6.973           | 8.988
e-time (s)         | 17.74           | 28.07           | 24.30
||F_k||            | 0               | 5.77 × 10^−209  | 0
||x_k − x_{k−1}||  | 6.16 × 10^−110  | 1.37 × 10^−208  | 1.62 × 10^−111

Example 4          | m = 2           | m = 3           | m = 4
k                  | 7               | 6               | 5
COC                | 5.396           | 7.240           | 9.150
e-time (s)         | 0.021           | 0.017           | 0.015
||F_k||            | 0               | 0               | 0
||x_k − x_{k−1}||  | 4.59 × 10^−323  | 7.56 × 10^−287  | 4.23 × 10^−101

Example 5          | m = 2           | m = 3           | m = 4
k                  | 4               | 4               | 3
COC                | 4.982           | 6.981           | 8.980
e-time (s)         | 16.16           | 15.65           | 15.66
||F_k||            | 1.58 × 10^−215  | 5.51 × 10^−214  | 1.23 × 10^−213
||x_k − x_{k−1}||  | 4.84 × 10^−210  | 2.52 × 10^−211  | 5.62 × 10^−135

Example 6          | m = 2           | m = 3           | m = 4
k                  | 4               | 4               | 3
COC                | 4.982           | 6.981           | 8.980
e-time (s)         | 162.7           | 160.2           | 118.9
||F_k||            | 2.75 × 10^−211  | 7.19 × 10^−212  | 4.08 × 10^−211
||x_k − x_{k−1}||  | 5.08 × 10^−210  | 3.76 × 10^−211  | 5.62 × 10^−135

Example 7          | m = 2           | m = 3           | m = 4
k                  | 5               | 4               | 4
COC                | 4.788           | 6.912           | 7.004
e-time (s)         | 17.52           | 16.72           | 15.41
||F_k||            | 1.12 × 10^−215  | 1.10 × 10^−215  | 8.89 × 10^−216
||x_k − x_{k−1}||  | 2.31 × 10^−216  | 1.40 × 10^−138  | 1.82 × 10^−216

Example 8          | m = 2           | m = 3           | m = 4
k                  | 6               | 5               | 5
COC                | 4.989           | 6.463           | 8.664
e-time (s)         | 27.17           | 22.84           | 29.26
||F_k||            | 1.21 × 10^−215  | 1.22 × 10^−215  | 1.10 × 10^−215
||x_k − x_{k−1}||  | 1.40 × 10^−216  | 1.43 × 10^−216  | 1.26 × 10^−216
Table 2. Radii of convergence for the Newton-Jarratt method (2).

      | $\bar w^{(s)}(t) = w((1+g_s(t))t)$ | $\bar w^{(s)}(t) = w_0(t) + w_0(g_s(t)t)$
$r_0$ | 0.228990                           | 0.228990
$r_1$ | 0.142601                           | 0.043682
$r_2$ | 0.123565                           | 0.069096
$r$   | 0.123565                           | 0.043682
Table 3. Comparison of performances of the $(m+1)$-step Newton-Jarratt method (NJM) (2) and the multi-step Newton–Raphson method (NRM) (41) for the Lane–Emden Equation (39); initial guess 1; domain [0, 9] [35].

Iterative method                                               | NJM (2) | NRM (41)
Number of iterations                                           | 1       | 1
Size of problem                                                | 500     | 500
Number of steps                                                | 16      | 23
Theoretical convergence order                                  | 31      | 24
Function evaluations per iteration                             | 16      | 23
Linear system solves per iteration (vector right-hand side)    | 16      | 23
Linear system solves per iteration (matrix right-hand side)    | 1       | 0
Jacobian evaluations per iteration                             | 2       | 1
Jacobian LU factorizations per iteration                       | 1       | 1
Matrix-vector multiplications per iteration                    | 15      | 0

Step | ||x_m − x*|| (NJM)  | ||x_m − x*|| (NRM)
1    | 6.8402 × 10^−1      | 6.2052 × 10^−1
2    | 5.5152 × 10^−1      | 5.5165 × 10^−1
3    | 8.1764 × 10^−1      | 6.4046 × 10^−1
4    | 1.5322              | 8.5943 × 10^−1
5    | 1.2030              | 1.1858
6    | 2.3728              | 1.2422
7    | 7.9603 × 10^−1      | 1.3089
8    | 8.2595 × 10^−1      | 2.0382
9    | 8.5373 × 10^−1      | 1.3065
10   | 7.8730 × 10^−1      | 1.8401
11   | 4.7382 × 10^−1      | 9.4516 × 10^−1
12   | 1.6708 × 10^−1      | 8.9284 × 10^−1
13   | 3.7363 × 10^−2      | 9.1266 × 10^−1
14   | 6.1138 × 10^−3      | 9.3032 × 10^−1
15   | 7.9369 × 10^−4      | 9.1482 × 10^−1
16   | 8.5210 × 10^−5      | 6.9886 × 10^−1
17   | —                   | 3.7548 × 10^−1
18   | —                   | 1.4397 × 10^−1
19   | —                   | 4.3176 × 10^−2
20   | —                   | 1.0867 × 10^−2
21   | —                   | 2.3786 × 10^−3
22   | —                   | 4.6137 × 10^−4
23   | —                   | 8.0241 × 10^−5

CPU time (s) | 0.029 | 0.043
Table 4. Comparison of performances of the $(m+1)$-step Newton-Jarratt method (NJM) (2) and the multi-step Newton–Raphson method (NRM) (41) for the Klein–Gordon Equation (40); initial guess $u(z_j, t_j) = 0$; exact solution $u(z, t) = \delta\,\mathrm{sech}(\kappa(z - vt))$ with $\kappa = \sqrt{\frac{k}{c^2 - v^2}}$, $\delta = \sqrt{\frac{2k}{\gamma}}$; $c = 1$, $\gamma = 1$, $v = 0.5$, $k = 0.5$, $n_z = 170$, $n_t = 26$, $z \in [-22, 22]$, $t \in [0, 0.5]$ [35].

Iterative method                                               | NJM (2) | NRM (41)
Number of iterations                                           | 1       | 1
Size of problem                                                | 4420    | 4420
Number of steps                                                | 3       | 4
Theoretical convergence order                                  | 5       | 5
Function evaluations per iteration                             | 3       | 4
Linear system solves per iteration (vector right-hand side)    | 3       | 3
Linear system solves per iteration (matrix right-hand side)    | 1       | 0
Jacobian evaluations per iteration                             | 2       | 1
Jacobian LU factorizations per iteration                       | 1       | 1
Matrix-vector multiplications per iteration                    | 2       | 0

Step | ||x_m − x*|| (NJM)  | ||x_m − x*|| (NRM)
1    | 3.3122 × 10^−1      | 1.1253 × 10^−1
2    | 6.8391 × 10^−3      | 6.8391 × 10^−3
3    | 3.5588 × 10^−5      | 1.5342 × 10^−4
4    | —                   | 1.8717 × 10^−6

CPU time (s) | 5.8 | 1.6