Article

An Improved Convergence Analysis of a Multi-Step Method with High-Efficiency Indices

by
Santhosh George
1,*,†,
Manjusree Gopal
1,†,
Samhitha Bhide
1,† and
Ioannis K. Argyros
2,†
1
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangalore 575025, India
2
Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
*
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Algorithms 2025, 18(8), 483; https://doi.org/10.3390/a18080483
Submission received: 8 July 2025 / Revised: 25 July 2025 / Accepted: 1 August 2025 / Published: 4 August 2025
(This article belongs to the Special Issue Recent Advances in Numerical Algorithms and Their Applications)

Abstract

This paper considers a multi-step method of convergence order five, introduced by Raziyeh and Masoud for solving nonlinear systems. The convergence of the method was originally studied using Taylor series expansion, which requires the function to be six times differentiable. Our convergence analysis, in contrast, does not depend on the Taylor series: it uses derivatives of F only up to order two and is presented in the more general Banach space setting. Semi-local analysis, which was not given in earlier studies, is also discussed. Unlike earlier studies (where two sets of assumptions were used), we use the same set of assumptions for both the semi-local and the local convergence analysis. We also discuss the dynamics of the method and give numerical examples illustrating the theoretical findings.

1. Introduction

Numerous problems in science and engineering can be effectively modeled using linear and nonlinear mathematical equations [1,2,3]. However, unlike the linear case, finding exact solutions of nonlinear equations is generally not feasible, so we have to rely on iterative methods to approximate such solutions. One of the most prominent iterative methods is Newton's method [4], which has quadratic convergence.
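For concreteness, Newton's method for a small system can be sketched as follows (a minimal sketch; the $2\times2$ system $x^2+y^2=1$, $x=y$ is our own toy example, not one from the paper):

```python
# Newton's method s_{n+1} = s_n - F'(s_n)^{-1} F(s_n) for a 2x2 system.

def F(x, y):
    return (x * x + y * y - 1.0, x - y)

def J(x, y):
    # Jacobian of F
    return ((2.0 * x, 2.0 * y), (1.0, -1.0))

def newton(x, y, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        (a, b), (c, d) = J(x, y)
        det = a * d - b * c
        # Solve J * delta = F by Cramer's rule, then update s <- s - delta
        dx = (f1 * d - b * f2) / det
        dy = (a * f2 - f1 * c) / det
        x, y = x - dx, y - dy
        if abs(f1) + abs(f2) < tol:
            break
    return x, y

x, y = newton(1.0, 0.5)  # converges to (1/sqrt(2), 1/sqrt(2))
```

The quadratic convergence is visible in practice: the residual roughly squares at each step.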
In recent years, considerable efforts have been made to improve the convergence properties of iterative methods, aiming for higher convergence rate and better computational efficiency [1,5,6,7,8,9]. Many of these methods are extensions or modifications of Newton’s method [10,11,12] and have been applied to both single-variable and multi-variable nonlinear equations.
The order of convergence of an iterative method is a key measure of its efficiency, indicating how rapidly the method converges to the solution. We say that a sequence $\{s_n\}$ converges to $s^*$ with order of convergence at least $p>0$ [13,14,15] if there exists a constant $Q>0$ such that
$$\|s_{n+1}-s^*\| \le Q\,\|s_n-s^*\|^p. \qquad (1)$$
The efficiency index (EI) and informational efficiency (IE) are two metrics to evaluate and compare the performance of iterative methods. Ostrowski [16] has introduced EI as
$$EI = p^{1/\eta},$$
and Traub [14] has introduced IE as
$$IE = \frac{p}{\eta},$$
where p is the order of convergence and η is the total number of functions and derivative evaluations per iteration.
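These two metrics are straightforward to compute; a minimal sketch (the sample values of $p$ and $\eta$ are illustrative only, not the counts claimed for any particular method):

```python
# Efficiency index EI = p**(1/eta) and informational efficiency IE = p/eta,
# where p is the convergence order and eta is the number of function and
# derivative evaluations per iteration.

def efficiency_index(p, eta):
    return p ** (1.0 / eta)

def informational_efficiency(p, eta):
    return p / eta

# Newton's method in one variable: p = 2, eta = 2 (one F and one F' evaluation)
print(efficiency_index(2, 2))          # ~1.414
print(informational_efficiency(2, 2))  # 1.0
```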
The primary goal of an iterative method is to improve the convergence speed while enhancing the accuracy and overall efficiency of the method.
In this paper, we focus on the convergence analysis of iterative methods for solving equations of the form
$$F(s) = 0, \qquad (2)$$
where $F:\Omega\subseteq X\to Y$ is a nonlinear operator from the Banach space $X$ into the Banach space $Y$, and $\Omega$ is a non-empty open convex set.
The multi-step method for solving nonlinear systems given by Raziyeh and Masoud [17] is defined, for $s_0\in\Omega$ and $n=0,1,2,\ldots$, by
$$t_n = s_n - F'(s_n)^{-1}F(s_n), \qquad s_{n+1} = t_n - F'(s_n)^{-1}\big[2F(v_n) + 3F(z_n) - 3F(t_n)\big], \qquad (3)$$
where $v_n = t_n + F'(s_n)^{-1}F(t_n)$ and $z_n = t_n - F'(s_n)^{-1}F(t_n)$.
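In one variable, a single sweep of this scheme can be sketched as follows. This is our own minimal illustration on the toy equation $f(x)=x^2-2=0$ (not the reference implementation of [17]); the step coefficients $2$, $3$, $-3$ are the ones used in the convergence analysis of this paper:

```python
# One iteration of the multi-step method in one variable:
#   t = s - f(s)/f'(s)                         (Newton step)
#   v = t + f(t)/f'(s),  z = t - f(t)/f'(s)
#   s_next = t - (2*f(v) + 3*f(z) - 3*f(t)) / f'(s)
# Note: only f'(s) is evaluated, so each iteration needs one derivative
# and four function evaluations.

def multistep_iteration(f, df, s):
    ds = df(s)
    t = s - f(s) / ds
    w = f(t) / ds
    v, z = t + w, t - w
    return t - (2.0 * f(v) + 3.0 * f(z) - 3.0 * f(t)) / ds

f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x

s = 1.5
for _ in range(5):
    s = multistep_iteration(f, df, s)
print(s)  # ~ sqrt(2) = 1.4142135623730951
```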
The iterative method (3) has convergence order five, which was established in [17] using Taylor series expansions under the assumption that the operator $F$ is at least six times differentiable. These restrictions are the motivation for our study. It is shown in [17] that method (3) is highly efficient and superior to earlier methods; a comparison with other methods in terms of efficiency is given in [17]. If we follow the analysis in [17], then method (3) cannot be used to approximate a solution of (2) when $F$ is not six times differentiable. For example, consider $q:[-2,2]\to\mathbb{R}$ defined by
$$q(s) = \begin{cases} a s^6\log s^2 + b s^7 + c s^8 & \text{if } s\neq 0,\\ 0 & \text{if } s=0,\end{cases}$$
where $a\neq0$ and $a,b,c$ are real parameters. Notice that $s^* = 0\in[-2,2]$ solves the equation $q(s)=0$. Further, notice that $q^{(6)}$ is unbounded on the interval $[-2,2]$, since $q^{(6)}(s)$ does not exist at $s=0$. Thus, the analysis in [17] cannot assure the convergence of $s_n$ to the solution $s^*=0$. However, method (3) does converge to $s^*$ if, e.g., $a=1$, $b=2$, $c=1$ and the initial guess is $s_0=0.95$. We study the convergence of method (3) without using the Taylor series; thus, our study relaxes the condition that $F$ has to be six times differentiable and requires $F$ to be only two times differentiable. The innovative aspects and advantages of our analysis are as follows:
  • Our analysis is discussed in the Banach space setting.
  • We provide a semi-local analysis, which was not given in the earlier work [17].
  • Earlier studies [17,18] rely on assumptions involving the solution $s^*$ for the local convergence analysis, whereas our assumptions for attaining the convergence order are independent of $s^*$. Using the information about $s^*$ obtained from the semi-local analysis, we establish the convergence order via the local convergence analysis.
This paper is arranged as follows: Section 2 contains the semi-local convergence analysis of the method (3). Section 3 contains the local convergence of the method (3) without using the Taylor series expansion. Section 4 and Section 5 contain numerical examples and the basins of attraction of the method, respectively. The conclusion is given in Section 6.

2. Semi-Local Analysis

We will define scalar majorizing sequences for our semi-local analysis [4].
For $\alpha_0 = 0$, $\beta_0 \ge 0$ and $L_0, L_1 > 0$, define the scalar sequences $\{\alpha_n\}$ and $\{\beta_n\}$ by
$$\alpha_{n+1} = \beta_n + \frac{L_1}{1-L_0\alpha_n}\left[\frac{1}{1-L_0\alpha_n}\left(2 + L_0\beta_n + \frac{L_0L_1(\beta_n-\alpha_n)^2}{4(1-L_0\alpha_n)}\right) + \frac{3}{2(1-L_0\alpha_n)}\left(L_1(\beta_n-\alpha_n) + \frac{L_1^2(\beta_n-\alpha_n)^2}{4(1-L_0\alpha_n)}\right) + \frac{3}{2}\right](\beta_n-\alpha_n)^2,$$
$$\delta_{n+1} = \frac{L_1}{2}(\alpha_{n+1}-\alpha_n)^2 + (1+L_0\alpha_n)(\alpha_{n+1}-\beta_n) \quad\text{and}\quad \beta_{n+1} = \alpha_{n+1} + \frac{\delta_{n+1}}{1-L_0\alpha_{n+1}}. \qquad (4)$$
Lemma 1.
Assume there exists $\mu>0$ such that
$$L_0\alpha_n < 1 \quad\text{and}\quad \alpha_n \le \mu, \qquad \forall n\in\mathbb{N}\cup\{0\}. \qquad (5)$$
Then, the sequences $\{\alpha_n\}$ and $\{\beta_n\}$ defined by (4) converge to some $\lambda\in[\beta_0,\mu]$, and $0\le\alpha_n\le\beta_n\le\alpha_{n+1}\le\lambda$.
Proof. 
The scalar sequences $\{\alpha_n\}$ and $\{\beta_n\}$ are non-decreasing and bounded above by $\mu$. Hence, there exists $\lambda\in[\beta_0,\mu]$ such that $\{\alpha_n\}$ and $\{\beta_n\}$ converge to $\lambda$. □
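The hypotheses of Lemma 1 can be checked numerically for concrete constants by iterating the recurrence (4) directly; a small sketch (the values $\beta_0=0.1$, $L_0=L_1=1$ are our own illustrative choice):

```python
# Iterate the majorizing sequences (4): alpha_0 = 0, beta_0 >= 0.
# If L0*alpha_n stays below 1, both sequences increase monotonically
# toward a common limit lambda, as asserted by Lemma 1.

def majorizing(beta0, L0, L1, n_steps=30):
    alpha, beta = 0.0, beta0
    alphas, betas = [alpha], [beta]
    for _ in range(n_steps):
        d = beta - alpha
        q = 1.0 - L0 * alpha
        bracket = (
            (2.0 + L0 * beta + L0 * L1 * d * d / (4.0 * q)) / q
            + 1.5 * (L1 * d + L1 * L1 * d * d / (4.0 * q)) / q
            + 1.5
        )
        alpha_new = beta + (L1 / q) * bracket * d * d
        delta = 0.5 * L1 * (alpha_new - alpha) ** 2 \
            + (1.0 + L0 * alpha) * (alpha_new - beta)
        beta = alpha_new + delta / (1.0 - L0 * alpha_new)
        alpha = alpha_new
        alphas.append(alpha)
        betas.append(beta)
    return alphas, betas

alphas, betas = majorizing(0.1, 1.0, 1.0)
```

For these constants the two sequences settle quickly to a common limit below $1/L_0$, so condition (5) holds.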
Let U ( s , r ) be the ball centered at s with radius r and U ¯ ( s , r ) be its closure.
For the convergence analysis, we use the following assumptions.
( a 1 )
There exists an initial point $s_0\in\Omega$ such that $\|F'(s_0)^{-1}F(s_0)\| < \beta_0$.
( a 2 )
There exist an operator $G\in B(X,Y)$ (the set of all bounded linear operators from $X$ to $Y$) and a constant $L_0>0$ with
$$\|G^{-1}(F'(s)-G)\| \le L_0\|s-s_0\|, \qquad \forall s\in\Omega.$$
Set $\Omega_1 = \Omega\cap U\!\left(s_0,\frac{1}{L_0}\right)$.
( a 3 )
There exists a constant $L_1>0$ with
$$\|G^{-1}(F'(s)-F'(t))\| \le L_1\|s-t\|, \qquad \forall s,t\in\Omega_1.$$
( a 4 )
$\bar U(s_0,\lambda)\subseteq\Omega$.
From assumption $(a_2)$, for all $s\in\Omega_1$ we have
$$\|G^{-1}(F'(s)-G)\| \le L_0\|s-s_0\| < 1,$$
and hence, by the Banach Lemma (BL) on invertible operators [4], we get $F'(s)^{-1}\in B(Y,X)$ and
$$\|F'(s)^{-1}G\| \le \frac{1}{1-L_0\|s-s_0\|}, \qquad \forall s\in\Omega_1. \qquad (7)$$
Further, we will be using the following mean value theorem (MVT) [4]:
$$F(\nu_1)-F(\nu_2) = \int_0^1 F'\big(\nu_2+\tau(\nu_1-\nu_2)\big)\,d\tau\,(\nu_1-\nu_2), \qquad \forall \nu_1,\nu_2\in\Omega. \qquad (8)$$
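For a smooth scalar function, identity (8) can be verified numerically with a quadrature rule; a quick sanity check using our own choice $f(x)=x^3$:

```python
# Verify f(v1) - f(v2) = ( ∫_0^1 f'(v2 + t*(v1 - v2)) dt ) * (v1 - v2)
# numerically for f(x) = x^3, using the composite trapezoidal rule.

def f(x):
    return x ** 3

def df(x):
    return 3.0 * x ** 2

def mvt_rhs(v1, v2, n=100000):
    h = 1.0 / n
    total = 0.5 * (df(v2) + df(v1))
    for k in range(1, n):
        total += df(v2 + k * h * (v1 - v2))
    return total * h * (v1 - v2)

lhs = f(2.0) - f(0.5)
rhs = mvt_rhs(2.0, 0.5)
print(lhs, rhs)  # both ~ 7.875
```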
Next, we present the main semi-local result, which uses conditions $(a_1)$–$(a_4)$.
Theorem 1.
Under the assumptions $(a_1)$–$(a_4)$, the sequence $\{s_n\}$ defined by method (3) with initial point $s_0$ satisfies
$$\|t_n-s_n\| \le \beta_n-\alpha_n \qquad (9)$$
and
$$\|s_{n+1}-t_n\| \le \alpha_{n+1}-\beta_n. \qquad (10)$$
Moreover, the sequence $\{s_n\}$ converges to some $s^*\in\bar U(s_0,\lambda)$ with $F(s^*)=0$.
Proof. 
We will be using mathematical induction to prove the result.
Using $(a_1)$ and the first step of (3),
$$\|t_0-s_0\| = \|F'(s_0)^{-1}F(s_0)\| < \beta_0 = \beta_0-\alpha_0 \le \lambda.$$
Thus, $t_0\in U(s_0,\lambda)$ and (9) holds for $n=0$.
Using the MVT (8) and the first step of (3), we have
$$\begin{aligned}
F'(s_0)^{-1}F(t_0) &= F'(s_0)^{-1}\big(F(t_0)-F(s_0)+F(s_0)\big)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(s_0+\tau(t_0-s_0)\big)\,d\tau\,(t_0-s_0) + F(s_0)\right)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(s_0+\tau(t_0-s_0)\big)\,d\tau\,(t_0-s_0) - F'(s_0)(t_0-s_0)\right).
\end{aligned}$$
Using assumption $(a_3)$ and (7), we get
$$\|F'(s_0)^{-1}F(t_0)\| \le \|F'(s_0)^{-1}G\|\int_0^1\big\|G^{-1}\big(F'(s_0+\tau(t_0-s_0))-F'(s_0)\big)\big\|\,d\tau\,\|t_0-s_0\| \le \frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2. \qquad (11)$$
Similarly,
$$\begin{aligned}
F'(s_0)^{-1}F(v_0) &= F'(s_0)^{-1}\big(F(v_0)-F(t_0)+F(t_0)\big)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(t_0+\tau(v_0-t_0)\big)\,d\tau\,(v_0-t_0) + F(t_0)\right)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(t_0+\tau(v_0-t_0)\big)\,d\tau + F'(s_0)\right)(v_0-t_0)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(t_0+\tau(v_0-t_0)\big)\,d\tau + F'(s_0)\right)F'(s_0)^{-1}F(t_0).
\end{aligned}$$
So, using assumption $(a_2)$, (7) and (11), we have
$$\begin{aligned}
\|F'(s_0)^{-1}F(v_0)\| &\le \|F'(s_0)^{-1}G\|\left[\int_0^1\big\|G^{-1}\big(F'(t_0+\tau(v_0-t_0))-G+G\big)\big\|\,d\tau + \big\|G^{-1}\big(F'(s_0)-G+G\big)\big\|\right]\big\|F'(s_0)^{-1}F(t_0)\big\|\\
&\le \frac{1}{1-L_0\alpha_0}\left[1 + L_0\Big(\|t_0-s_0\|+\tfrac12\|v_0-t_0\|\Big) + L_0\|s_0-s_0\| + 1\right]\frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2\\
&\le \frac{1}{1-L_0\alpha_0}\left[2 + L_0\|t_0-s_0\| + \frac{L_0L_1}{4(1-L_0\alpha_0)}\|t_0-s_0\|^2 + L_0\alpha_0\right]\frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2, \qquad (12)
\end{aligned}$$
where we used $v_0-t_0 = F'(s_0)^{-1}F(t_0)$.
Similarly, using assumption $(a_3)$, (7) and (11), we get
$$\begin{aligned}
F'(s_0)^{-1}F(z_0) &= F'(s_0)^{-1}\big(F(z_0)-F(t_0)+F(t_0)\big)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(t_0+\tau(z_0-t_0)\big)\,d\tau\,(z_0-t_0) + F(t_0)\right)\\
&= F'(s_0)^{-1}\left(\int_0^1 F'\big(t_0+\tau(z_0-t_0)\big)\,d\tau - F'(s_0)\right)(z_0-t_0)\\
&= -F'(s_0)^{-1}G\int_0^1 G^{-1}\big(F'(t_0+\tau(z_0-t_0))-F'(s_0)\big)\,d\tau\;F'(s_0)^{-1}F(t_0),
\end{aligned}$$
so that
$$\|F'(s_0)^{-1}F(z_0)\| \le \frac{1}{1-L_0\alpha_0}\left(L_1\|t_0-s_0\| + \frac{L_1^2}{4(1-L_0\alpha_0)}\|t_0-s_0\|^2\right)\frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2. \qquad (13)$$
Next, using (11)–(13) in (3), we get
$$\begin{aligned}
\|s_1-t_0\| &= \big\|2F'(s_0)^{-1}F(v_0) + 3F'(s_0)^{-1}F(z_0) - 3F'(s_0)^{-1}F(t_0)\big\|\\
&\le \frac{2}{1-L_0\alpha_0}\left[2 + L_0\|t_0-s_0\| + \frac{L_0L_1}{4(1-L_0\alpha_0)}\|t_0-s_0\|^2 + L_0\alpha_0\right]\frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2\\
&\quad + \frac{3}{1-L_0\alpha_0}\left[L_1\|t_0-s_0\| + \frac{L_1^2}{4(1-L_0\alpha_0)}\|t_0-s_0\|^2\right]\frac{L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2 + \frac{3L_1}{2(1-L_0\alpha_0)}\|t_0-s_0\|^2\\
&\le \frac{L_1}{1-L_0\alpha_0}\left[\frac{1}{1-L_0\alpha_0}\left(2 + L_0(\beta_0-\alpha_0) + \frac{L_0L_1}{4(1-L_0\alpha_0)}(\beta_0-\alpha_0)^2 + L_0\alpha_0\right)\right.\\
&\qquad\left.{}+\frac{3}{2(1-L_0\alpha_0)}\left(L_1(\beta_0-\alpha_0) + \frac{L_1^2}{4(1-L_0\alpha_0)}(\beta_0-\alpha_0)^2\right) + \frac{3}{2}\right](\beta_0-\alpha_0)^2 = \alpha_1-\beta_0.
\end{aligned}$$
Note that $\|s_1-s_0\| \le \|s_1-t_0\| + \|t_0-s_0\| \le \alpha_1-\beta_0+\beta_0-\alpha_0 \le \lambda$. Therefore, $s_1\in U(s_0,\lambda)$ and (10) holds for $n=0$.
Assume that (9) and (10) hold for all $n=0,1,\ldots,j$. This implies that $\|s_n-t_{n-1}\|\le\alpha_n-\beta_{n-1}$ and $\|t_n-s_n\|\le\beta_n-\alpha_n$ for all $n=1,2,\ldots,j$.
To show that (9) holds for all $n$, we consider $F(s_j)$, $j\ge0$; using the first step of (3) and the MVT, we have
$$\begin{aligned}
F(s_j) &= F(s_j)-F(s_{j-1})+F(s_{j-1}) + F'(s_{j-1})(s_j-s_{j-1}) - F'(s_{j-1})(s_j-s_{j-1})\\
&= \int_0^1 F'\big(s_{j-1}+\theta(s_j-s_{j-1})\big)\,d\theta\,(s_j-s_{j-1}) + F'(s_{j-1})(s_{j-1}-t_{j-1})\\
&\qquad + F'(s_{j-1})(s_j-s_{j-1}) - F'(s_{j-1})(s_j-s_{j-1})\\
&= \int_0^1\big[F'\big(s_{j-1}+\theta(s_j-s_{j-1})\big) - F'(s_{j-1})\big]\,d\theta\,(s_j-s_{j-1}) + F'(s_{j-1})(s_j-t_{j-1}).
\end{aligned}$$
Using assumptions $(a_2)$–$(a_3)$ in the above equation, we get
$$\begin{aligned}
\|G^{-1}F(s_j)\| &\le \int_0^1\big\|G^{-1}\big(F'(s_{j-1}+\theta(s_j-s_{j-1}))-F'(s_{j-1})\big)\big\|\,d\theta\,\|s_j-s_{j-1}\| + \|G^{-1}F'(s_{j-1})\|\,\|s_j-t_{j-1}\|\\
&\le \frac{L_1}{2}\|s_j-s_{j-1}\|^2 + \big\|G^{-1}\big(F'(s_{j-1})-G+G\big)\big\|\,\|s_j-t_{j-1}\|\\
&\le \frac{L_1}{2}\|s_j-s_{j-1}\|^2 + \big(1+L_0\|s_{j-1}-s_0\|\big)\|s_j-t_{j-1}\|\\
&\le \frac{L_1}{2}(\alpha_j-\alpha_{j-1})^2 + (1+L_0\alpha_{j-1})(\alpha_j-\beta_{j-1}) = \delta_j.
\end{aligned}$$
Therefore, we have
$$\|t_j-s_j\| \le \|F'(s_j)^{-1}F(s_j)\| \le \|F'(s_j)^{-1}G\|\,\|G^{-1}F(s_j)\| \le \frac{\delta_j}{1-L_0\|s_j-s_0\|} \le \frac{\delta_j}{1-L_0\alpha_j} = \beta_j-\alpha_j$$
and
$$\|t_j-s_0\| \le \|t_j-s_j\| + \|s_j-s_0\| \le \beta_j-\alpha_j+\alpha_j-\alpha_0 = \beta_j < \lambda.$$
So, $t_j\in U(s_0,\lambda)$, and inequality (9) holds for all $j$.
The proof is completed by replacing s 0 , t 0 and s 1 with s j , t j and s j + 1 , respectively, in the above argument.
Since { α n } and { β n } are Cauchy sequences, { s n } and { t n } are also Cauchy by (9) and (10). Hence, we have s n s * U ¯ ( s 0 , λ ) as n .
Now, by (3) we get
$$\|F(s_n)\| = \|F'(s_n)(t_n-s_n)\| \le M(\beta_n-\alpha_n), \qquad (19)$$
where $\|F'(s_n)\|\le M$ for some $M>0$. Letting $n\to\infty$ in (19), we obtain $F(s^*)=0$. □
Next, we study the uniqueness of the solution.
Proposition 1.
Suppose $d\in U(s_0,a_0)$ is a simple solution of Equation (2) for some $a_0>0$, and there exists $a\ge a_0$ such that
$$L_0(a_0+a) < 2.$$
Set $\Omega_1 = \Omega\cap\bar U(s_0,a)$. Then, $d$ is the unique solution of (2) in the region $\Omega_1$.
Proof. 
The proof of the proposition can be found in [19]. □

3. Local Convergence Analysis

We will be using the following extra assumptions in our local analysis.
( a 5 )
Condition (5) of Lemma 1 holds for $\mu = \frac{1}{2L_0}$.
( a 6 )
$\|G^{-1}(F''(s)-F''(t))\| \le L_2\|s-t\|$ for some $L_2>0$ and all $s,t\in\Omega_1$.
( a 7 )
$\|G^{-1}F'(s)\| \le K_1$ for some $K_1>0$ and all $s\in\Omega_1$.
( a 8 )
$\|G^{-1}F''(s)\| \le K_2$ for some $K_2>0$ and all $s\in\Omega_1$.
We obtained from our semi-local analysis that the solution $s^*\in\bar U(s_0,\lambda)\subseteq U(s_0,\frac{1}{2L_0})$. Then, by $(a_2)$, for all $s\in U(s_0,\frac{1}{2L_0})$ we get
$$\|G^{-1}(F'(s)-G)\| \le L_0\|s-s_0\| \le L_0\big(\|s-s^*\|+\|s^*-s_0\|\big) \le L_0\left(\|s-s^*\|+\frac{1}{2L_0}\right) = \frac12 + L_0\|s-s^*\|.$$
Now, using the BL, $F'(s)$ is invertible for all $s\in U(s_0,\frac{1}{2L_0})$, and
$$\|F'(s)^{-1}G\| \le \frac{2}{1-2L_0\|s-s^*\|}. \qquad (21)$$
We will use the following inequality in our study. For all $s,t\in\Omega_1$, by the MVT, we get
$$\|F'(s)^{-1}F(t)\| = \big\|F'(s)^{-1}\big(F(t)-F(s^*)\big)\big\| = \left\|F'(s)^{-1}\int_0^1 F'\big(s^*+\tau(t-s^*)\big)\,d\tau\,(t-s^*)\right\| \le \|F'(s)^{-1}G\|\int_0^1\big\|G^{-1}F'\big(s^*+\tau(t-s^*)\big)\big\|\,d\tau\,\|t-s^*\|.$$
Moreover, using assumption $(a_7)$ and (21), for $s,t\in U(s^*,\frac{1}{2L_0})$, we obtain
$$\|F'(s)^{-1}F(t)\| \le \frac{2K_1}{1-2L_0\|s-s^*\|}\|t-s^*\|. \qquad (22)$$
Remark 1.
We study the local convergence in the ball U ( s * , r ) , which satisfies
$$U(s^*,r)\subseteq U\!\left(s^*,\frac{1}{2L_0}\right)\subseteq\Omega_1. \qquad (23)$$
Hence, hereafter we select s 0 from U ( s * , r ) .
A graphical representation of (23) is given in Figure 1.
As a consequence of (23), all the assumptions we have made remain valid in the local convergence ball. So, we can continue our local analysis independently under the same set of assumptions.
We need the following theorem for our local convergence analysis.
Theorem 2
([20]). Let $F:\Omega\subseteq X\to Y$ be twice differentiable at the point $a$. Then
$$\big(F''(a)\cdot h\big)\cdot k = \big(F''(a)\cdot k\big)\cdot h, \qquad \forall h,k\in X.$$
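Theorem 2 can be illustrated numerically: for a toy map $F:\mathbb{R}^2\to\mathbb{R}^2$ of our own choosing, finite differences confirm that the bilinear operator $F''(a)$ is symmetric up to discretization error:

```python
# Check (F''(a)·h)·k = (F''(a)·k)·h numerically for
# F(x, y) = (x^2 * y, sin(x * y)) via central finite differences.
import math

def F(x, y):
    return (x * x * y, math.sin(x * y))

def dF(x, y, hx, hy, eps=1e-5):
    # directional derivative of F at (x, y) along (hx, hy)
    fp = F(x + eps * hx, y + eps * hy)
    fm = F(x - eps * hx, y - eps * hy)
    return tuple((p - m) / (2 * eps) for p, m in zip(fp, fm))

def d2F(x, y, h, k, eps=1e-4):
    # second directional derivative: (F''(x, y)·h)·k
    gp = dF(x + eps * k[0], y + eps * k[1], *h)
    gm = dF(x - eps * k[0], y - eps * k[1], *h)
    return tuple((p - m) / (2 * eps) for p, m in zip(gp, gm))

a = (0.7, -0.3)
h, k = (1.0, 2.0), (-0.5, 1.5)
hk = d2F(*a, h, k)
kh = d2F(*a, k, h)
print(hk, kh)  # the two tuples agree up to finite-difference error
```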
Proposition 2.
If $s_0\in U(s^*,\tilde r)$, where $\tilde r = \min\left\{\frac{1}{L_1+2L_0},\rho_1\right\}$ and $\rho_1$ is the smallest zero of $h_1$ on $[0,\frac{1}{2L_0})$, with
$$\phi_1(u) = \frac{L_1}{1-2L_0u}\left(1+\frac{2K_1}{1-2L_0u}\right) \quad\text{and}\quad h_1(u) = \phi_1(u)\,u-1,$$
then, under assumptions $(a_1)$–$(a_3)$, we have $t_0,v_0\in U(s^*,\tilde r)$ with
$$\|t_0-s^*\| \le \frac{L_1}{1-2L_0\|s_0-s^*\|}\|s_0-s^*\|^2 \quad\text{and}\quad \|v_0-s^*\| \le \phi_1(\|s_0-s^*\|)\,\|s_0-s^*\|^2.$$
Proof. 
Note that $\phi_1,h_1$ are non-decreasing continuous functions (NDCF) on $[0,\frac{1}{2L_0})$, with
$$h_1(0) = -1 \quad\text{and}\quad \lim_{u\to\frac{1}{2L_0}^-} h_1(u) = +\infty.$$
So, by the Intermediate Value Theorem (IVT), there exists a smallest $\rho_1\in(0,\frac{1}{2L_0})$ such that $h_1(\rho_1)=0$.
Note that
$$t_0-s^* = s_0-s^*-F'(s_0)^{-1}F(s_0) = F'(s_0)^{-1}\int_0^1\big(F'(s_0)-F'(s^*+\tau(s_0-s^*))\big)\,d\tau\,(s_0-s^*).$$
So, by assumption $(a_3)$ and (21), we obtain
$$\|t_0-s^*\| \le \|F'(s_0)^{-1}G\|\int_0^1\big\|G^{-1}\big(F'(s_0)-F'(s^*+\tau(s_0-s^*))\big)\big\|\,d\tau\,\|s_0-s^*\| \le \frac{L_1\|s_0-s^*\|^2}{1-2L_0\|s_0-s^*\|} < \|s_0-s^*\|.$$
Thus, the iterate $t_0\in U(s^*,\tilde r)$.
Also, by (22) and the fact that $\phi_1(u)\,u<1$ for $u\in(0,\rho_1)$, we have
$$\begin{aligned}
\|v_0-s^*\| &= \big\|t_0-s^*+F'(s_0)^{-1}F(t_0)\big\|\\
&\le \frac{L_1}{1-2L_0\|s_0-s^*\|}\|s_0-s^*\|^2 + \frac{2K_1}{1-2L_0\|s_0-s^*\|}\cdot\frac{L_1}{1-2L_0\|s_0-s^*\|}\|s_0-s^*\|^2\\
&= \frac{L_1}{1-2L_0\|s_0-s^*\|}\left(1+\frac{2K_1}{1-2L_0\|s_0-s^*\|}\right)\|s_0-s^*\|^2 = \phi_1(\|s_0-s^*\|)\,\|s_0-s^*\|^2 < \|s_0-s^*\|.
\end{aligned}$$
Hence, the iterate $v_0\in U(s^*,\tilde r)$. □
Proposition 3.
If $s_0\in U(s^*,\tilde{\tilde r})$, where $\tilde{\tilde r} = \min\{\tilde r,\rho_2\}$ and $\rho_2$ is the smallest zero of $h_2$ on $[0,\frac{1}{2L_0})$, with
$$\phi_2(u) = \frac{2L_1^2}{(1-2L_0u)^2}\left(1+\frac{L_1u}{2(1-2L_0u)}\right) \quad\text{and}\quad h_2(u) = \phi_2(u)\,u^2-1,$$
then, under assumptions $(a_1)$–$(a_3)$, we have $z_0\in U(s^*,\tilde{\tilde r})$ with
$$\|z_0-s^*\| \le \phi_2(\|s_0-s^*\|)\,\|s_0-s^*\|^3.$$
Proof. 
Note that $\phi_2,h_2$ are NDCF on $[0,\frac{1}{2L_0})$, with
$$h_2(0) = -1 \quad\text{and}\quad \lim_{u\to\frac{1}{2L_0}^-} h_2(u) = +\infty.$$
So, by the IVT, there exists a smallest $\rho_2\in(0,\frac{1}{2L_0})$ such that $h_2(\rho_2)=0$.
Using assumption $(a_3)$ and (21), we obtain
$$\begin{aligned}
\|z_0-s^*\| &= \big\|t_0-s^*-F'(s_0)^{-1}F(t_0)\big\| = \left\|F'(s_0)^{-1}\int_0^1\big(F'(s_0)-F'(s^*+\tau(t_0-s^*))\big)\,d\tau\,(t_0-s^*)\right\|\\
&\le \frac{2L_1}{1-2L_0\|s_0-s^*\|}\left(\|s_0-s^*\|+\frac12\|t_0-s^*\|\right)\|t_0-s^*\|\\
&\le \frac{2L_1^2}{(1-2L_0\|s_0-s^*\|)^2}\left(1+\frac{L_1\|s_0-s^*\|}{2(1-2L_0\|s_0-s^*\|)}\right)\|s_0-s^*\|^3 = \phi_2(\|s_0-s^*\|)\,\|s_0-s^*\|^3 < \|s_0-s^*\|,
\end{aligned}$$
and hence $z_0\in U(s^*,\tilde{\tilde r})$. □
For the next lemma, we introduce NDCF $\phi,h:[0,\frac{1}{2L_0})\to\mathbb{R}$ defined as
ϕ ( u ) = 2 L 1 L 2 u 3 ( 1 2 L 0 u ) 2 ϕ 1 2 ( u ) + 2 L 1 1 2 L 0 u ) 2 + ϕ 2 2 ( u ) u 2 + 2 K 1 L 1 u 3 ( 1 2 L 0 u ) 3 ϕ 1 2 ( u ) + ϕ 2 2 ( u ) u 2 + L 1 2 L 2 ( 1 2 L 0 u ) 3 ϕ 2 ( u ) u 2 + 1 3 ϕ 2 2 ( u ) u 4 + 1 2 + L 1 u 1 2 L 0 u + K 2 L 1 2 u ( 1 2 L 0 u ) 3 ϕ 2 ( u ) 2 + L 1 u 1 2 L 0 u + 32 K 1 2 K 2 L 1 L 2 ( 1 2 L 0 u ) 5 + 2 K 2 L 1 L 2 ( 1 2 L 0 u ) 3 2 L 1 u 1 2 L 0 u + L 1 1 2 L 0 u + 1 + 2 K 2 2 L 1 2 ( 1 2 L 0 u ) 4 + 4 K 1 K 2 L 1 2 + K 2 3 L 1 ( 1 2 L 0 u ) 4 2 + L 1 u ( 1 2 L 0 u )
and
$$h(u) = \phi(u)\,u^4 - 1.$$
Notice that
$$h(0) = -1 \quad\text{and}\quad \lim_{u\to\frac{1}{2L_0}^-} h(u) = +\infty.$$
So, by the IVT, there exists a smallest $\rho\in(0,\frac{1}{2L_0})$ such that $h(\rho)=0$. Let
$$r = \min\{\tilde{\tilde r},\rho\}.$$
Lemma 2.
If the assumptions $(a_2)$–$(a_8)$ hold and $s_0\in U(s^*,r)\setminus\{s^*\}$, then $s_1\in U(s^*,r)$ and
$$\|s_1-s^*\| \le \phi(r)\,\|s_0-s^*\|^5.$$
Proof. 
Let $s_0\in U(s^*,r)$. By adding and subtracting $F'(s_0)^{-1}F(t_0)$, we have from (3)
$$\begin{aligned}
s_1-s^* &= t_0-s^*-F'(s_0)^{-1}F(t_0) - F'(s_0)^{-1}\big[2F(v_0)+3F(z_0)-4F(t_0)\big]\\
&= t_0-s^*-F'(s_0)^{-1}F(t_0)\\
&\quad - F'(s_0)^{-1}\big[2\big(F(v_0)-F(s^*)\big)+3\big(F(z_0)-F(s^*)\big)-4\big(F(t_0)-F(s^*)\big)\big].
\end{aligned}$$
So, by the MVT and the definitions of $v_0$ and $z_0$, we have
$$\begin{aligned}
s_1-s^* &= t_0-s^*-F'(s_0)^{-1}F(t_0) - F'(s_0)^{-1}\Big[2\int_0^1 F'\big(s^*+\tau(v_0-s^*)\big)\,d\tau\,\big(t_0-s^*+F'(s_0)^{-1}F(t_0)\big)\\
&\quad + 3\int_0^1 F'\big(s^*+\tau(z_0-s^*)\big)\,d\tau\,\big(t_0-s^*-F'(s_0)^{-1}F(t_0)\big) - 4\int_0^1 F'\big(s^*+\tau(t_0-s^*)\big)\,d\tau\,(t_0-s^*)\Big].
\end{aligned}$$
Then, by rearranging, we get
$$\begin{aligned}
s_1-s^* &= t_0-s^*-F'(s_0)^{-1}F(t_0)\\
&\quad - 2F'(s_0)^{-1}\int_0^1\big[F'\big(s^*+\tau(v_0-s^*)\big) - F'\big(s^*+\tau(t_0-s^*)\big)\big]\,d\tau\,(t_0-s^*)\\
&\quad - 2F'(s_0)^{-1}\int_0^1\big[F'\big(s^*+\tau(z_0-s^*)\big) - F'\big(s^*+\tau(t_0-s^*)\big)\big]\,d\tau\,(t_0-s^*)\\
&\quad - 2F'(s_0)^{-1}\int_0^1\big[F'\big(s^*+\tau(v_0-s^*)\big) - F'\big(s^*+\tau(z_0-s^*)\big)\big]\,d\tau\,\big(F'(s_0)^{-1}F(t_0)\big)\\
&\quad - F'(s_0)^{-1}\int_0^1 F'\big(s^*+\tau(z_0-s^*)\big)\,d\tau\,\big(t_0-s^*-F'(s_0)^{-1}F(t_0)\big).
\end{aligned}$$
Combining the first and last terms, and adding and subtracting Æ ( s * ) appropriately, we have
s 1 s * = I Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( z 0 s * ) ) d τ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 0 1 [ ( Æ ( s * + τ ( v 0 s * ) ) Æ ( s * ) ) ( Æ ( s * + τ ( t 0 s * ) ) Æ ( s * ) ) ] d τ ( t 0 s * ) 2 Æ ( s 0 ) 1 0 1 [ ( Æ ( s * + τ ( z 0 s * ) ) Æ ( s * ) ) ( Æ ( s * + τ ( t 0 s * ) ) Æ ( s * ) ) ] d τ ( t 0 s * ) 2 Æ ( s 0 ) 1 0 1 [ ( Æ ( s * + τ ( v 0 s * ) ) Æ ( s * ) ) ( Æ ( s * + τ ( z 0 s * ) ) Æ ( s * ) ) ] d τ ( Æ ( s 0 ) 1 Æ ( t 0 ) ) .
Next, by applying MVT for first derivatives, we have
s 1 s * = I Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( z 0 s * ) ) d τ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) τ d θ d τ ( z 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) τ d θ d τ ( z 0 s * ) ] Æ ( s 0 ) 1 Æ ( t 0 ) .
Applying MVT on the first term, adding and subtracting Æ ( s * ) in other terms, we get
s 1 s * = Æ ( s 0 ) 1 0 1 0 1 Æ ( s * + τ ( z 0 s * ) + θ ( s 0 s * τ ( z 0 s * ) ) ) × ( s 0 s * τ ( z 0 s * ) ) d θ d τ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) Æ ( s 0 ) 1 Æ ( s * ) ( v 0 t 0 ) ( t 0 s * ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ ( z 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) Æ ( s 0 ) 1 Æ ( s * ) ( z 0 t 0 ) ( t 0 s * ) 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ ( z 0 s * ) ] Æ ( s 0 ) 1 Æ ( t 0 ) Æ ( s 0 ) 1 Æ ( s * ) ( v 0 z 0 ) Æ ( s 0 ) 1 Æ ( t 0 ) .
Note that $F''(s^*)(v_0-t_0)(t_0-s^*) = -F''(s^*)(z_0-t_0)(t_0-s^*)$, which can be seen by substituting for $(v_0-t_0)$ and $(z_0-t_0)$.
For convenience, let
A 1 = 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) , A 2 = 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ ( z 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ ( t 0 s * ) ] ( t 0 s * ) ,
and
A 3 = 2 Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ ( v 0 s * ) 0 1 0 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ ( z 0 s * ) ] Æ ( s 0 ) 1 Æ ( t 0 ) .
Then, we obtain
s 1 s * = i = 1 3 A i + Æ ( s 0 ) 1 0 1 0 1 Æ ( s * + τ ( z 0 s * ) + θ ( s 0 s * τ ( z 0 s * ) ) ) Æ ( s * ) × ( s 0 s * τ ( z 0 s * ) ) d θ d τ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) + Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * 1 2 ( z 0 s * ) ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 Æ ( s * ) ( Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 ,
where we have used the relation $v_0-z_0 = 2F'(s_0)^{-1}F(t_0)$.
Let
A 4 = Æ ( s 0 ) 1 0 1 0 1 Æ ( s * + τ ( z 0 s * ) + θ ( s 0 s * τ ( z 0 s * ) ) ) Æ ( s * ) × ( s 0 s * τ ( z 0 s * ) ) d θ d τ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) and A 5 = 1 2 Æ ( s 0 ) 1 Æ ( s * ) ( z 0 s * ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) .
Then, we have
s 1 s * = i = 1 5 A i + Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 Æ ( s * ) ( Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 ,
Since $F(t_0) = \int_0^1 F'\big(s^*+\tau(t_0-s^*)\big)\,d\tau\,(t_0-s^*)$, we have
s 1 s * = i = 1 5 A i + Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 Æ ( s * ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ ( t 0 s * ) ( Æ ( s 0 ) 1 Æ ( t 0 ) ) .
Note that
$$t_0-s^* = s_0-s^*-F'(s_0)^{-1}F(s_0) = F'(s_0)^{-1}\int_0^1\big(F'(s_0)-F'(s^*+\tau(s_0-s^*))\big)\,d\tau\,(s_0-s^*) = F'(s_0)^{-1}\int_0^1\int_0^1 F''\big(s(\theta,\tau)\big)(1-\tau)\,d\theta\,d\tau\,(s_0-s^*)^2, \qquad (27)$$
where $s(\theta,\tau) = s^*+\big(\tau+\theta(1-\tau)\big)(s_0-s^*)$, and hence by (27) we have
s 1 s * = i = 1 5 A i + Æ ( s 0 ) 1 Æ ( s * ) [ ( s 0 s * ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) 2 Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 × 0 1 0 1 Æ ( s θ , τ ) ( 1 τ ) d θ d τ ( s 0 s * ) 2 ( Æ ( s 0 ) 1 Æ ( t 0 ) ) ] .
Add and subtract Æ ( s * ) in the last term appropriately to obtain
s 1 s * = i = 1 6 A i + Æ ( s 0 ) 1 Æ ( s * ) [ ( s 0 s * ) ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 Æ ( s 0 ) 1 Æ ( t 0 ) ] ,
where
A 6 = 2 Æ ( s 0 ) 1 Æ ( s * ) Æ ( s 0 ) 1 0 1 ( Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 0 1 0 1 ( Æ ( s ( θ , τ ) ) Æ ( s * ) ) ( 1 τ ) d θ d τ ( s 0 s * ) 2 ( Æ ( s 0 ) 1 Æ ( t 0 ) ) .
Using Theorem 2 with $h = s_0-s^*$ and $k = t_0-s^*-F'(s_0)^{-1}F(t_0)$, we get $F''(s^*)(s_0-s^*)\big(t_0-s^*-F'(s_0)^{-1}F(t_0)\big) = F''(s^*)\big(t_0-s^*-F'(s_0)^{-1}F(t_0)\big)(s_0-s^*)$.
Therefore, we can write
s 1 s * = i = 1 6 A i + Æ ( s 0 ) 1 Æ ( s * ) [ ( t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) ) ( s 0 s * ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 Æ ( s 0 ) 1 Æ ( t 0 ) ] .
Note that, since
$$\begin{aligned}
t_0-s^*-F'(s_0)^{-1}F(t_0) &= F'(s_0)^{-1}\int_0^1\big(F'(s_0)-F'(s^*+\tau(t_0-s^*))\big)\,d\tau\,(t_0-s^*)\\
&= F'(s_0)^{-1}\int_0^1\int_0^1 F''\big(s^*+\tau(t_0-s^*)+\theta(s_0-s^*-\tau(t_0-s^*))\big)\big(s_0-s^*-\tau(t_0-s^*)\big)\,d\theta\,d\tau\,(t_0-s^*), \qquad (29)
\end{aligned}$$
we have by (28)
s 1 s * = i = 1 6 A i + Æ ( s 0 ) 1 Æ ( s * ) [ Æ ( s 0 ) 1 0 1 0 1 Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) × ( s 0 s * τ ( t 0 s * ) ) d θ d τ ( t 0 s * ) ( s 0 s * ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 Æ ( s 0 ) 1 Æ ( t 0 ) ] .
Add and subtract Æ ( s * ) in the seventh term appropriately again to get
s 1 s * = i = 1 6 A i + Æ ( s 0 ) 1 Æ ( s * ) × [ Æ ( s 0 ) 1 0 1 0 1 Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) Æ ( s * ) × ( s 0 s * τ ( t 0 s * ) ) d θ d τ ( t 0 s * ) ( s 0 s * ) + Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * 1 2 ( t 0 s * ) ) ( t 0 s * ) ( s 0 s * ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 Æ ( s 0 ) 1 Æ ( t 0 ) ] .
Using Theorem 2, with h = ( s 0 s * ) ( t 0 s * ) and k = s 0 s * , we get Æ ( s * ) ( s 0 s * ) ( t 0 s * ) ( s 0 s * ) = Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) .
Then
s 1 s * = i = 1 8 A i + Æ ( s 0 ) 1 Æ ( s * ) [ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 Æ ( s 0 ) 1 Æ ( t 0 ) ] ,
where
A 7 = Æ ( s 0 ) 1 Æ ( s * ) [ Æ ( s 0 ) 1 0 1 0 1 ( Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) Æ ( s * ) ) ( s 0 s * τ ( t 0 s * ) ) d θ d τ ( s 0 s * ) ( t 0 s * ) ] and A 8 = 1 2 Æ ( s 0 ) 1 Æ ( s * ) Æ ( s 0 ) 1 Æ ( s * ) ( t 0 s * ) 2 ( s 0 s * ) .
Now, adding and subtracting ( t 0 s * ) in the last term of (30), we get
s 1 s * = i = 1 8 A i + Æ ( s 0 ) 1 Æ ( s * ) [ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) + Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 × ( ( t 0 s * ) Æ ( s 0 ) 1 Æ ( t 0 ) ) Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) ] .
Combine the terms to get
s 1 s * = i = 1 9 A i + Æ ( s 0 ) 1 Æ ( s * ) I Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ × Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) ,
where
A 9 = Æ ( s 0 ) 1 0 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 × Æ ( s * ) ( s 0 s * ) 2 ( ( t 0 s * ) Æ ( s 0 ) 1 Æ ( t 0 ) ) .
Apply MVT again to get
s 1 s * = i = 1 9 A i + Æ ( s 0 ) 1 Æ ( s * ) Æ ( s 0 ) 1 × [ 0 1 0 1 Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) × ( s 0 s * τ ( t 0 s * ) ) d θ d τ ] Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) = i = 1 10 A i ,
where
A 10 = Æ ( s 0 ) 1 Æ ( s * ) Æ ( s 0 ) 1 [ 0 1 0 1 Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) × ( s 0 s * τ ( t 0 s * ) ) d θ d τ ] Æ ( s 0 ) 1 Æ ( s * ) ( s 0 s * ) 2 ( t 0 s * ) .
Using the assumptions and the inequalities established above, we estimate $\|A_i\|$, $i=1,2,\ldots,10$, as follows.
Using (21) and assumption ( a 6 ) , we get
A 1 = 2 Æ ( s 0 ) 1 G [ 0 1 0 1 G 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ v 0 s * + 0 1 0 1 G 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ t 0 s * ] t 0 s * 2 × 2 1 2 L 0 s 0 s * [ L 2 0 1 0 1 θ τ 2 d τ d θ v 0 s * 2 + L 2 0 1 0 1 θ τ 2 d τ d θ t 0 s * 2 ] t 0 s * 4 L 2 1 2 L 0 s 0 s * 1 6 v 0 s * 2 + t 0 s * 2 t 0 s * 2 L 1 L 2 3 ( 1 2 L 0 s 0 s * ) 2 ϕ 1 2 ( s 0 s * ) + L 1 1 2 L 0 s 0 s * 2 s 0 s * 6 .
and
A 2 = 2 Æ ( s 0 ) 1 G [ 0 1 0 1 G 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ ( z 0 s * ) + 0 1 0 1 G 1 Æ ( s * + θ τ ( t 0 s * ) ) Æ ( s * ) τ d θ d τ ( t 0 s * ) ] t 0 s * 2 × 2 1 2 L 0 s 0 s * 1 6 L 2 z 0 s * 2 + 1 6 L 2 t 0 s * 2 t 0 s * 2 L 1 L 2 3 ( 1 2 L 0 s 0 s * ) 2 ϕ 2 2 ( s 0 s * ) s 0 s * 2 + L 1 1 2 L 0 s 0 s * 2 s 0 s * 6 .
Moreover, using (2), (22) and assumption ( a 6 ) , we get
A 3 = 2 Æ ( s 0 ) 1 G [ 0 1 0 1 G 1 Æ ( s * + θ τ ( v 0 s * ) ) Æ ( s * ) τ d θ d τ v 0 s * + 0 1 0 1 G 1 Æ ( s * + θ τ ( z 0 s * ) ) Æ ( s * ) τ d θ d τ z 0 s * ] × Æ ( s 0 ) 1 Æ ( t 0 ) 2 × 2 1 2 L 0 s 0 s * L 2 6 v 0 s * 2 + L 2 6 z 0 s * 2 2 K 1 1 2 L 0 s 0 s * t 0 s * 2 K 1 L 1 3 ( 1 2 L 0 s 0 s * ) 3 ϕ 1 2 ( s 0 s * ) + ϕ 2 2 ( s 0 s * ) s 0 s * 2 s 0 s * 6 .
By (29), we have
$$\begin{aligned}
\big\|t_0-s^*-F'(s_0)^{-1}F(t_0)\big\| &= \left\|F'(s_0)^{-1}\int_0^1\big(F'(s_0)-F'(s^*+\tau(t_0-s^*))\big)\,d\tau\,(t_0-s^*)\right\|\\
&\le \|F'(s_0)^{-1}G\|\int_0^1\big\|G^{-1}\big(F'(s_0)-F'(s^*+\tau(t_0-s^*))\big)\big\|\,d\tau\,\|t_0-s^*\|\\
&\le \frac{2L_1}{1-2L_0\|s_0-s^*\|}\left(\|s_0-s^*\|+\frac12\|t_0-s^*\|\right)\|t_0-s^*\|\\
&\le \frac{L_1^2}{(1-2L_0\|s_0-s^*\|)^2}\left(2+\frac{L_1\|s_0-s^*\|}{1-2L_0\|s_0-s^*\|}\right)\|s_0-s^*\|^3.
\end{aligned}$$
Furthermore, by (21), (34) and assumption ( a 6 ) ,
A 4 = Æ ( s 0 ) 1 G 0 1 0 1 Æ ( s * + τ ( z 0 s * ) + θ ( s 0 s * τ ( z 0 s * ) ) ) Æ ( s * ) × ( s 0 s * τ ( z 0 s * ) ) d θ d τ t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) 2 L 2 1 2 L 0 s 0 s * 0 1 0 1 ( 1 θ ) τ z 0 s * + θ s 0 s * s 0 s * + τ z 0 s * d τ d θ t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) 2 L 2 1 2 L 0 s 0 s * 0 1 0 1 [ τ z 0 s * s 0 s * + ( 1 θ ) τ 2 z 0 s * 2 + θ s 0 s * 2 ] d τ d θ × t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) 2 L 2 1 2 L 0 s 0 s * 1 2 z 0 s * s 0 s * + 1 6 z 0 s * 2 + 1 2 s 0 s * 2 × L 1 2 ( 1 2 L 0 s 0 s * ) 2 2 + L 1 s 0 s * 1 2 s 0 s * s 0 s * 3 L 1 2 L 2 ( 1 2 L 0 s 0 s * ) 3 ϕ 2 ( s 0 s * ) s 0 s * 2 + 1 3 ϕ 2 2 ( s 0 s * ) s 0 s * 4 + 1 × 2 + L 1 s 0 s * 1 2 L 0 s 0 s * s 0 s * 5 .
Similarly, using (21) and assumption ( a 8 ) ,
A 5 = 1 2 Æ ( s 0 ) 1 G G 1 Æ ( s * ) ( z 0 s * ) t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) 1 2 2 K 2 1 2 L 0 s 0 s * ϕ 2 ( s 0 s * ) s 0 s * 3 × L 1 2 ( 1 2 L 0 s 0 s * ) 2 2 + L 1 s 0 s * 1 2 L 0 s 0 s * s 0 s * 3 K 2 L 1 2 ( 1 2 L 0 s 0 s * ) 3 ϕ 2 ( s 0 s * ) 2 + L 1 s 0 s * 1 2 L 0 s 0 s * s 0 s * 6 .
Then, use (21), (22) and assumptions ( a 6 ) and ( a 7 ) , we get
A 6 = 2 Æ ( s 0 ) 1 G G 1 Æ ( s * ) Æ ( s 0 ) 1 G 0 1 G 1 Æ ( s * + τ ( t 0 s * ) ) d τ × Æ ( s 0 ) 1 G 0 1 0 1 G 1 ( Æ ( s * + τ ( s 0 s * ) + θ ( 1 τ ) ( s 0 s * ) ) Æ ( s * ) ) ( 1 τ ) d θ d τ s 0 s * 2 Æ ( s 0 ) 1 Æ ( t 0 ) 16 K 1 K 2 ( 1 2 L 0 s 0 s * ) 3 0 1 0 1 L 2 τ ( s 0 s * ) + θ ( 1 τ ) ( s 0 s * ) ( 1 τ ) d τ d θ × s 0 s * 2 Æ ( s 0 ) 1 Æ ( t 0 ) 32 K 1 2 K 2 L 1 L 2 ( 1 2 L 0 s 0 s * ) 5 s 0 s * 5 .
Then, using (21), assumptions ( a 6 ) and ( a 8 ) , we get
A 7 = Æ ( s 0 ) 1 G G 1 Æ ( s * ) Æ ( s 0 ) 1 G × 0 1 0 1 G 1 Æ ( s * + τ ( t 0 s * ) + θ ( s 0 s * τ ( t 0 s * ) ) ) Æ ( s * ) × s 0 s * τ ( t 0 s * ) d θ d τ s 0 s * t 0 s * 4 K 2 ( 1 2 L 0 s 0 s * ) 2 [ 0 1 0 1 L 2 τ ( t 0 s * ) + θ ( s 0 s * ) + θ τ ( t 0 s * ) ( s 0 s * + τ t 0 s * ) d τ d θ ] s 0 s * t 0 s * 4 K 2 L 2 ( 1 2 L 0 s 0 s * ) 2 t 0 s * s 0 s * + 1 2 t 0 s * 2 + 1 2 s 0 s * 2 × s 0 s * t 0 s * 2 K 2 L 1 L 2 ( 1 2 L 0 s 0 s * ) 3 2 L 1 s 0 s * 1 2 L 0 s 0 s * + L 1 1 2 L 0 s 0 s * + 1 s 0 s * 5 .
Using (21) and assumption ( a 8 ) ,
A 8 = 1 2 Æ ( s 0 ) 1 G G 1 Æ ( s * ) Æ ( s 0 ) 1 G G 1 Æ ( s * ) t 0 s * 2 s 0 s * 2 K 2 2 L 1 2 ( 1 2 L 0 s 0 s * ) 4 s 0 s * 5 .
Using (21), (34) and the assumptions ( a 7 ) and ( a 8 ) , we get
A 9 = Æ ( s 0 ) 1 G 0 1 G 1 Æ ( s * + τ ( t 0 s * ) ) d τ Æ ( s 0 ) 1 G G 1 Æ ( s * ) × s 0 s * 2 t 0 s * Æ ( s 0 ) 1 Æ ( t 0 ) 4 K 1 K 2 L 1 2 ( 1 2 L 0 s 0 s * ) 4 2 + L 1 s 0 s * 1 2 L 0 s 0 s * s 0 s * 5 .
Finally, using (21) and assumption ( a 8 ) , we get
\[
\begin{aligned}
A_{10} &= \big\| F'(s_0)^{-1} G\, G^{-1} F''(s^*) \big\| \Big\| F'(s_0)^{-1} G \int_0^1\!\!\int_0^1 G^{-1} F''\big(s^* + \tau(t_0 - s^*) + \theta(s_0 - s^* - \tau(t_0 - s^*))\big) \\
&\qquad\qquad \times \big(s_0 - s^* - \tau(t_0 - s^*)\big)\, d\theta\, d\tau \Big\|\, \big\| F'(s_0)^{-1} G\, G^{-1} F''(s^*) \big\|\, \|s_0 - s^*\|^2\, \|t_0 - s^*\| \\
&\le \frac{4 K_2^3 L_1}{(1 - 2L_0\|s_0 - s^*\|)^4}\Big[2 + \frac{L_1\|s_0 - s^*\|}{1 - 2L_0\|s_0 - s^*\|}\Big]\|s_0 - s^*\|^5 .
\end{aligned}
\]
Combining the above estimates for $A_1, \ldots, A_{10}$, we get
\[
\|s_1 - s^*\| \le \sum_{i=1}^{10} A_i \le \phi(\|s_0 - s^*\|)\, \|s_0 - s^*\|^5 .
\]
Now, since $\phi(\|s_0 - s^*\|)\|s_0 - s^*\|^5 < \|s_0 - s^*\| < r$, we have $s_1 \in U(s^*, r)$.
Theorem 3.
If the assumptions $(a_2)$–$(a_8)$ hold, then the sequence $(s_n)$ defined by (3) with $s_0 \in U(s^*, r) \setminus \{s^*\}$ is well defined and
\[
\|s_{n+1} - s^*\| \le \phi(r)\, \|s_n - s^*\|^5 .
\]
In particular, $s_n \in U(s^*, r)$ for all $n \in \mathbb{N} \cup \{0\}$ and $(s_n)$ converges to $s^*$ with order of convergence five.
Proof. 
The proof follows inductively from the previous lemma by replacing $s_0$, $t_0$ and $s_1$ with $s_n$, $t_n$ and $s_{n+1}$, respectively. □
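In practice, the order stated in Theorem 3 is confirmed numerically through the computational order of convergence, $\rho_n \approx \ln\big(\|s_{n+1} - s^*\| / \|s_n - s^*\|\big) / \ln\big(\|s_n - s^*\| / \|s_{n-1} - s^*\|\big)$. As a hedged illustration of the estimator itself (method (3) is not restated in this section), the sketch below applies it to Newton's method on a model scalar equation, where it returns a value near two:

```python
import math

# Computational order of convergence (COC):
#   rho ≈ ln(|e_{n+1}| / |e_n|) / ln(|e_n| / |e_{n-1}|),  with e_n = s_n - s*.
def coc(errors):
    e0, e1, e2 = errors
    return math.log(e2 / e1) / math.log(e1 / e0)

# Model problem: g(s) = s^3 - 2 solved by Newton's method (order two),
# used here only to illustrate the estimator; method (3) would give rho ≈ 5.
g = lambda s: s**3 - 2
dg = lambda s: 3 * s**2

iterates = [1.5]
for _ in range(8):
    s = iterates[-1]
    iterates.append(s - g(s) / dg(s))

s_star = 2 ** (1 / 3)
errors = [abs(s - s_star) for s in iterates[1:4]]
print(round(coc(errors), 2))  # close to 2 for Newton's method
```

Only early iterates are used in the estimate, since later errors fall below machine precision and make the logarithmic ratio unreliable.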
Next, we study the uniqueness of s * .
Proposition 4.
Suppose there exist
(i)
a simple solution $s^* \in U(s^*, \nu)$ of (2), and assumption $(a_3)$ holds;
(ii)
a $\delta > \nu$ such that
\[
\delta < \frac{2}{L} .
\]
Set $\Omega_2 = \overline{U}(s^*, \delta) \cap \Omega$. Then, (2) has a unique solution $s^*$ in $\Omega_2$.
Proof. 
The proof of the proposition can be found in [19]. □

4. Numerical Examples

In this section, we examine two examples and compute the parameters discussed in the theoretical part.
Example 1.
Let $X = \mathbb{R}^3$ with $\Omega = \overline{U}(0, 1)$. Define $F : \Omega \subset X \to X$ for $\omega = (\omega_1, \omega_2, \omega_3)^T$ by
\[
F(\omega) = \Big( \frac{\sin \omega_1}{3},\; \frac{\omega_2^2}{15} + \frac{\omega_2}{3},\; \frac{\omega_3}{3} \Big)^T .
\]
The first derivative is
\[
F'(\omega) = \begin{pmatrix} \frac{\cos \omega_1}{3} & 0 & 0 \\ 0 & \frac{2\omega_2}{15} + \frac{1}{3} & 0 \\ 0 & 0 & \frac{1}{3} \end{pmatrix},
\]
and the second derivative is the bilinear operator
\[
F''(\omega) = \begin{pmatrix} -\frac{\sin \omega_1}{3} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \frac{2}{15} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix} .
\]
Consider the solution $s^* = (0, 0, 0)^T$ and start with the initial point $s_0 = (0, 0, \tfrac{1}{3})^T$. Choosing $G = I$, we have $s^* \in U\big((0, 0, \tfrac{1}{3})^T, \tfrac{1}{2}\big)$. Comparing with the assumptions $(a_2)$–$(a_3)$ and $(a_6)$–$(a_8)$, the constants are found to be $L_0 = L_1 = L_2 = K_1 = K_2 = \frac{1}{3}$. Then the parameters are $\rho_1 = 0.7780542$, $\rho_2 = 0.8549823$, and $\rho = 0.588058733$. Thus, $r = 0.588058733$.
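The root of Example 1 can also be checked numerically. Since method (3) is not restated in this section, the sketch below uses plain Newton iteration as a stand-in; the diagonal Jacobian of this example makes the linear solve trivial.

```python
import math

def F(w):
    # F from Example 1, componentwise.
    w1, w2, w3 = w
    return (math.sin(w1) / 3, w2**2 / 15 + w2 / 3, w3 / 3)

def F_prime_diag(w):
    # F'(w) is diagonal here, so the Newton step reduces to
    # three independent scalar divisions.
    w1, w2, w3 = w
    return (math.cos(w1) / 3, 2 * w2 / 15 + 1 / 3, 1 / 3)

def newton(w, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        f, d = F(w), F_prime_diag(w)
        w = tuple(wi - fi / di for wi, fi, di in zip(w, f, d))
        if max(abs(fi) for fi in F(w)) < tol:
            break
    return w

root = newton((0.0, 0.0, 1.0 / 3.0))
print(root)  # converges to the solution s* = (0, 0, 0)
```

From the stated initial point, a single step already lands on the exact root, because the third component of F is linear and the first two components vanish at the start.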
Example 2.
Let us consider the trajectory of an electron in the air gap between two parallel plates, described by the expression
\[
F(s) = \frac{\pi}{4} - \frac{1}{2}\cos(s) + s .
\]
Let the domain be $\Omega = [-1, 1]$ and take the initial point $s_0 = 0$. Choosing $G = I$, the iterated solution is found to be $s^* \approx -0.30909327$ [21]. Comparing with the assumptions $(a_2)$–$(a_3)$ and $(a_6)$–$(a_8)$, the constants are found to be $L_0 = L_1 = L_2 = K_1 = K_2 = \frac{1}{2}$. Then the parameters are $\rho_1 = 0.53918887$, $\rho_2 = 0.60668011$, and $\rho = 0.38631554$. Thus, $r = 0.38631554$.
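As a quick check on Example 2, the sketch below again uses plain Newton iteration (standing in for method (3)) to recover the cited root from $s_0 = 0$:

```python
import math

# f(s) = pi/4 - cos(s)/2 + s from Example 2, with f'(s) = sin(s)/2 + 1.
def f(s):
    return math.pi / 4 - math.cos(s) / 2 + s

def f_prime(s):
    return math.sin(s) / 2 + 1

def newton(s, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        s -= f(s) / f_prime(s)
        if abs(f(s)) < tol:
            break
    return s

root = newton(0.0)
print(root)  # ≈ -0.30909327
```

Note that the root is negative: at $s \approx -0.309093$, both $s + \pi/4$ and $\tfrac12\cos(s)$ equal about $0.4763$.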

5. Basins of Attraction

To verify the numerical stability of the method, we analyze the dynamics of method (3). The set of all initial points that converge to a specific root is called the Basin of Attraction (BA) [22].
Example 3.
\[
\begin{aligned} a^5 - b &= 0, \\ b^5 - a &= 0, \end{aligned}
\]
with roots $(-1, -1)$, $(0, 0)$, $(1, 1)$.
Example 4.
\[
\begin{aligned} 3a^2 b - b^3 &= 0, \\ a^3 - 3ab^2 - 1 &= 0, \end{aligned}
\]
with roots $\big({-\tfrac{1}{2}}, \tfrac{\sqrt{3}}{2}\big)$, $\big({-\tfrac{1}{2}}, -\tfrac{\sqrt{3}}{2}\big)$, $(1, 0)$.
Example 5.
\[
\begin{aligned} a^2 + b^2 &= 4, \\ 3a^2 + 7b^2 &= 16, \end{aligned}
\]
with roots $(\sqrt{3}, 1)$, $(\sqrt{3}, -1)$, $(-\sqrt{3}, 1)$, $(-\sqrt{3}, -1)$.
The BA for the roots of the given nonlinear systems are shown (Figure 2) on a $401 \times 401$ grid of equidistant points within the rectangular domain $D = \{(a, b) \in \mathbb{R}^2 : -2 \le a \le 2, \ -2 \le b \le 2\}$. Each initial point is assigned a color corresponding to the root to which the iterative method converges. If the method fails to converge or diverges, the point is marked black. The BA are shown with a tolerance of $10^{-8}$, and a maximum of 45 iterations is considered.
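The basin computation just described can be sketched as follows. This is a minimal stand-in: it uses plain Newton iteration rather than method (3), takes the system of Example 5, and colors a deliberately coarse grid (the paper's figures use a $401 \times 401$ grid with the same $10^{-8}$ tolerance and 45-iteration cap):

```python
import math

# Basin-of-attraction sketch for the system of Example 5
# (a^2 + b^2 = 4, 3a^2 + 7b^2 = 16), using plain Newton iteration
# as a stand-in for method (3), which is not reproduced here.
ROOTS = [(math.sqrt(3), 1.0), (math.sqrt(3), -1.0),
         (-math.sqrt(3), 1.0), (-math.sqrt(3), -1.0)]

def newton_basin(a, b, tol=1e-8, max_iter=45):
    """Return the index of the root reached from (a, b), or -1."""
    for _ in range(max_iter):
        f1, f2 = a * a + b * b - 4, 3 * a * a + 7 * b * b - 16
        # Jacobian [[2a, 2b], [6a, 14b]]; solve J * step = f by Cramer's rule.
        det = 2 * a * 14 * b - 2 * b * 6 * a   # = 16ab
        if abs(det) < 1e-14:
            return -1                          # singular Jacobian: mark black
        da = (f1 * 14 * b - f2 * 2 * b) / det
        db = (2 * a * f2 - 6 * a * f1) / det
        a, b = a - da, b - db
        for i, (ra, rb) in enumerate(ROOTS):
            if abs(a - ra) < tol and abs(b - rb) < tol:
                return i
    return -1

# Color a coarse grid over D = [-2, 2] x [-2, 2].
n = 11
grid = [[newton_basin(-2 + 4 * i / (n - 1), -2 + 4 * j / (n - 1))
         for j in range(n)] for i in range(n)]
print(grid[0][0], grid[n - 1][n - 1])
```

Plotting `grid` with a four-color map (plus black for `-1`) reproduces the qualitative picture of Figure 2; axis points with $a = 0$ or $b = 0$ are black because the Jacobian is singular there.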

6. Conclusions

We have presented both semi-local and local convergence analyses of the fifth-order method (3) in a more general Banach space setting. Unlike previous studies, we used the same set of assumptions for both the semi-local and local analysis, and the semi-local analysis is independent of the solution x*. Although our analysis is limited to Lipschitz-type assumptions, it improves the applicability of the considered method. We have given numerical examples to compute the parameters discussed in the theoretical part. To visualize the convergence behavior of the method, its dynamics are presented through examples.

Author Contributions

S.G., M.G., S.B. and I.K.A. contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

Manjusree Gopal and Samhitha Bhide thank the National Institute of Technology Karnataka, India, for financial support.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409. [Google Scholar] [CrossRef]
  2. Yang, X.; He, X. A fully-discrete decoupled finite element method for the conserved Allen–Cahn type phase-field model of three-phase fluid flow system. Comput. Methods Appl. Mech. Eng. 2022, 389, 114376. [Google Scholar] [CrossRef]
  3. Berinde, V.; Takens, F. Iterative Approximation of Fixed Points; Springer: Berlin/Heidelberg, Germany, 2007; Volume 1912. [Google Scholar]
  4. Argyros, I.K. The Theory and Applications of Iteration Methods; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  5. Magreñán, Á.A.; Argyros, I. A Contemporary Study of Iterative Methods: Convergence, Dynamics and Applications; Academic Press: Cambridge, MA, USA, 2018. [Google Scholar]
  6. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  7. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  8. Cordero, A.; Torregrosa, J.R.; Triguero-Navarro, P. First optimal vectorial eighth-order iterative scheme for solving non-linear systems. Appl. Math. Comput. 2025, 498, 129401. [Google Scholar] [CrossRef]
  9. Cordero, A.; Rojas-Hiciano, R.V.; Torregrosa, J.R.; Vassileva, M.P. A highly efficient class of optimal fourth-order methods for solving nonlinear systems. Numer. Algorithms 2024, 95, 1879–1904. [Google Scholar] [CrossRef]
  10. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  11. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  12. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Philadelphia, PA, USA, 2000. [Google Scholar]
  13. Collatz, L. Functional Analysis and Numerical Mathematics; Academic Press: Cambridge, MA, USA, 2014. [Google Scholar]
  14. Traub, J.F. Iterative Methods for the Solution of Equations; American Mathematical Soc.: Providence, RI, USA, 1982; Volume 312. [Google Scholar]
  15. Catinas, E. How many steps still left to x*? SIAM Rev. 2021, 63, 585–624. [Google Scholar] [CrossRef]
  16. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960. [Google Scholar]
  17. Erfanifar, R.; Hajarian, M. A new multi-step method for solving nonlinear systems with high efficiency indices. Numer. Algorithms 2024, 97, 959–984. [Google Scholar] [CrossRef]
  18. Muniyasamy, M.; Chandhini, G.; George, S.; Bate, I.; Senapati, K. On obtaining convergence order of a fourth and sixth order method of Hueso et al. without using Taylor series expansion. J. Comput. Appl. Math. 2024, 452, 116136. [Google Scholar] [CrossRef]
  19. Sadananda, R.; George, S.; Kunnarath, A.; Padikkal, J.; Argyros, I.K. Enhancing the practicality of Newton–Cotes iterative method. J. Appl. Math. Comput. 2023, 69, 3359–3389. [Google Scholar] [CrossRef]
  20. Cartan, H. Differential Calculus; Kershaw Publishing Company: Kershaw, SC, USA, 1971. [Google Scholar]
  21. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  22. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root-finding methods from a dynamical point of view. Scientia 2004, 10, 35. [Google Scholar]
Figure 1. A visual comparison of the different balls discussed.
Figure 2. BA for Example (3), Example (4) and Example (5), respectively.
