Article

Convergence Order of a Class of Jarratt-like Methods: A New Approach

by Ajil Kunnarath 1,†, Santhosh George 1,†, Jidesh Padikkal 1,† and Ioannis K. Argyros 2,*,†
1 Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Karnataka 575025, India
2 Department of Computing and Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2025, 17(1), 56; https://doi.org/10.3390/sym17010056
Submission received: 27 November 2024 / Revised: 19 December 2024 / Accepted: 25 December 2024 / Published: 31 December 2024
(This article belongs to the Section Mathematics)

Abstract

Symmetry and anti-symmetry appear naturally in the study of systems of nonlinear equations arising in numerous fields. The solutions of such equations can be obtained in analytical form only in some special situations. Therefore, algorithms or iterative schemes that approximate the solution are mostly studied. In particular, Jarratt-like methods were introduced with convergence order at least six in Euclidean spaces. We study these methods in the Banach-space setting. Semilocal convergence is studied to obtain the ball containing the solution. The local convergence analysis is performed without the help of the Taylor series, under relaxed differentiability assumptions. Our assumptions for obtaining the convergence order are independent of the solution, whereas earlier studies used assumptions involving the solution for local convergence analysis. We compare the methods numerically with similar-order methods and also study their dynamics.

1. Introduction

Nonlinear equations (NEs) arise when problems in science and engineering are described through mathematical models [1,2,3]. Many problems involving symmetry properties from different areas of science and engineering, such as quantum mechanics, cosmology, data analysis, operational research, finance, and biology [4,5,6], have been converted into NEs in the literature. Since it is difficult to obtain the analytic solution of the NEs for most of the problems, iterative methods are used to find approximations for the solution [7,8]. For obtaining better approximations, researchers in this area are interested in developing efficient iterative methods [2,3,9,10,11,12,13,14,15,16]. A prototype for such iterative methods is the celebrated Newton's method (NM) [17]. Extensions of NM have been developed [17,18,19] for solving univariate as well as multivariate equations.
The order of convergence (OC) is an important measure that depicts the speed of convergence of a method. A sequence { x n } converges to x * with OC p > 0 [20] if there exists a constant M > 0 such that
‖x_{n+1} − x*‖ ≤ M ‖x_n − x*‖^p. (1)
Let Ş : Ω ⊆ X → Y be a nonlinear operator between the Banach spaces X and Y, where Ω is an open convex subset of X. Consider the NE
Ş(x) = 0 (2)
with a solution x * . In [21], a Jarratt-like method based on a weight function was proposed to approximate x * . The method was defined and analyzed in scalar and in Euclidean space settings only. We present a general version of this method, which is defined as follows:
Let H_n = Ş′(x_n) + Ş′(y_n). For x_0 ∈ Ω,
y_n = x_n − (2/3) Ş′(x_n)^{-1} Ş(x_n),
x_{n+1} = x_n − Q(H_n^{-1} Ş′(x_n)) Ş′(x_n)^{-1} Ş(x_n), (3)
where Q : B(X, Y) → B(X, Y) (the set of all bounded linear operators from X into Y) satisfies the following properties:
Q((1/2)I) = I, Q′((1/2)I) = 3I, Q″((1/2)I) = 24I
and
‖Q″(T) − Q″((1/2)I)‖ ≤ L_* ‖T − (1/2)I‖ for all T ∈ B(X, Y), (4)
where L * > 0 . An extension of the method which is of sixth order is defined as follows:
y_n = x_n − (2/3) Ş′(x_n)^{-1} Ş(x_n),
z_n = x_n − Q(H_n^{-1} Ş′(x_n)) Ş′(x_n)^{-1} Ş(x_n),
x_{n+1} = z_n − (−2 Ş′(x_n)^{-1} + 6 H_n^{-1}) Ş(z_n). (5)
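For readers who want to experiment with the schemes, the following is a minimal NumPy sketch of one step of method (3) and of method (5) for a finite-dimensional system. It uses the polynomial weight function Q_1(T) = 12T^2 − 9T + (5/2)I listed in Section 5; the names F, dF, and jarratt_like_step are illustrative choices, not notation from the paper.

```python
import numpy as np

def jarratt_like_step(F, dF, x, order=6):
    """One step of method (3) (order=4, stop after the second substep)
    or of method (5) (order=6), with the weight Q1(T) = 12 T^2 - 9 T + (5/2) I."""
    I = np.eye(x.size)
    Fx, Jx = F(x), dF(x)
    newton = np.linalg.solve(Jx, Fx)        # S'(x_n)^{-1} S(x_n)
    y = x - (2.0 / 3.0) * newton            # first substep
    H = Jx + dF(y)                          # H_n = S'(x_n) + S'(y_n)
    T = np.linalg.solve(H, Jx)              # T = H_n^{-1} S'(x_n)
    Q = 12.0 * T @ T - 9.0 * T + 2.5 * I    # Q_1(T); any admissible Q can be used here
    z = x - Q @ newton                      # second substep
    if order == 4:
        return z                            # method (3)
    Fz = F(z)
    # third substep: x_{n+1} = z_n - (-2 S'(x_n)^{-1} + 6 H_n^{-1}) S(z_n)
    return z + 2.0 * np.linalg.solve(Jx, Fz) - 6.0 * np.linalg.solve(H, Fz)
```

Under the stated conditions on Q, any other admissible weight function can be substituted for Q_1 in the marked line.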
The analysis in [21] has the following limitations:
( 1 )
The method is defined only for functions on R n .
( 2 )
The OC of the method is proved using Taylor series expansion, and to proceed with this, one needs the existence of derivatives of Ş up to order five and seven for methods (3) and (5), respectively.
Let Ş : ℝ → ℝ be such that
Ş(x) = x^8 sin(1/x) if x ≠ 0, and Ş(x) = 0 if x = 0. (6)
Then,
Ş⁗(x) = (1680 x^4 − 180 x^2) sin(1/x) − (840 x^3 − 20 x) cos(1/x) + sin(1/x)
is not continuous at x = 0 (a short numerical check is given right after this list). So, one cannot use the analysis in [21] to obtain the OC of methods (3) and (5).
( 3 )
One cannot predict the number of iterates required to reach a desired accuracy.
( 4 )
The weight function Q needs to be differentiable at least four times.
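As a quick numerical illustration of the counterexample (6) (a sketch assuming the closed form of Ş⁗ reconstructed above): along the points x_k = 1/(2πk + π/2) we have sin(1/x_k) = 1, so Ş⁗(x_k) → 1, while Ş⁗(0) = 0; hence Ş⁗ cannot be continuous at the origin.

```python
import math

def S4(x):
    """Fourth derivative of S(x) = x^8 sin(1/x) for x != 0 (closed form above)."""
    s, c = math.sin(1.0 / x), math.cos(1.0 / x)
    return (1680.0 * x**4 - 180.0 * x**2) * s - (840.0 * x**3 - 20.0 * x) * c + s

# sin(1/x_k) = 1 along x_k = 1/(2*pi*k + pi/2); the values approach 1, not S4(0) = 0.
for k in (1, 10, 100, 1000):
    xk = 1.0 / (2.0 * math.pi * k + math.pi / 2.0)
    print(k, S4(xk))
```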
In our analysis, by using the mean value theorem (MVT) as the key tool, we enhance the applicability of the method. The main advantages of our analysis are as follows:
( 1 )
The method is presented in the Banach-space setting.
( 2 )
OC is obtained using assumptions on the first three derivatives of Ş.
( 3 )
The class of admissible weight functions is enlarged, because we need conditions only up to the second derivative of Q to prove the convergence order.
( 4 )
The number of iterates needed to reach a desired accuracy can be predicted using our approach.
Another advantage of our analysis is that our assumptions for the local convergence analysis do not depend on the unknown solution x * . In earlier studies [22], two sets of assumptions were used, one set of assumptions for semilocal convergence analysis and another for local convergence analysis. We use the same set of assumptions for semilocal and local convergence analysis.
It is possible to increase the OC by increasing the number of steps in the iteration method. The Efficiency Index (EI) [23,24,25] measures the efficiency of a method by weighing the OC against the computational cost per iteration.
According to Ostrowski [25], the EI is given by
I = R^{1/η}, (7)
where R is the OC and η = f + d, with f the number of function evaluations per iteration and d the number of derivative evaluations per iteration.
According to definition (7), the EIs of methods (3) and (5) are 1.578 and 1.565, respectively.
The paper is arranged as follows. Section 2 deals with the semilocal convergence of method (5). [The semilocal convergence of the method was already presented in ([22], Chapter 34) for two particular choices of Q; in this paper, we present the general version of the semilocal convergence with a slight modification of the analysis.] In Section 3, a local analysis of method (3) is presented, and in Section 4, we provide a local analysis of method (5). Section 5 deals with the numerical examples, and Section 6 contains the dynamics of the method. The paper ends with conclusions in Section 7.

2. Semilocal Convergence of Method (5)

We develop scalar majorizing sequences for our semilocal analysis [26]. We define the scalar sequences {α_n}, {β_n}, and {γ_n} using the constants L_0, L_1 > 0 introduced below and L_* from (4). For α_0 = 0 and β_0 ≥ 0, define
q_n = (L_0/2)(α_n + β_n),
γ_n = β_n + (3/2) [ (L_*/48) (L_1(α_n + β_n)/(1 − q_n))^3 + 3 (L_1(α_n + β_n)/(1 − q_n))^2 + (3/2) (L_1(α_n + β_n)/(1 − q_n)) + 1/3 ] (β_n − α_n),
a_n = (1 + L_0(α_n + (1/2)(γ_n − α_n))) (γ_n − α_n) + (3/2)(1 + L_0 α_n)(β_n − α_n),
α_{n+1} = γ_n + [ L_1(β_n − α_n)/((1 − q_n)(1 − L_0 α_n)) + 2/(1 − q_n) ] a_n,
μ_{n+1} = (1 + L_0(α_n + (1/2)(α_{n+1} − α_n))) (α_{n+1} − α_n) + (3/2)(1 + L_0 α_n)(β_n − α_n),
β_{n+1} = α_{n+1} + (2/3) μ_{n+1}/(1 − L_0 α_{n+1}). (8)
Lemma 1. 
Assume there exists μ ≥ 0 with
α_n ≤ μ, L_0 α_n < 1 and q_n < 1 for all n ∈ ℕ ∪ {0}. (9)
Then, {α_n}, {β_n}, and {γ_n} given by Formula (8) are convergent to some λ ∈ [β_0, μ], and 0 ≤ γ_n ≤ α_{n+1} ≤ β_{n+1} ≤ λ.
Proof. 
By (8) and condition (9), it follows that { α n } , { β n } , and { γ n } are bounded above by μ and non-decreasing, so that they converge to some λ . □
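One practical consequence of Lemma 1 is that the number of iterates needed to reach a prescribed accuracy can be estimated before running the method, simply by generating the scalar sequences. The sketch below is based on the reconstruction of (8) and condition (9) given above and should be treated as illustrative; the constants L0, L1, Lstar, beta0 are placeholders to be computed from (a1)–(a3) for the problem at hand.

```python
def majorizing_sequences(L0, L1, Lstar, beta0, n_max=25):
    """Generate (alpha_n, beta_n, gamma_n) from (8) while condition (9)
    holds with mu = 1/(2 L0); return the list of triples, or None if (9) fails."""
    mu = 1.0 / (2.0 * L0)
    alpha, beta = 0.0, beta0
    out = []
    for _ in range(n_max):
        q = 0.5 * L0 * (alpha + beta)
        if not (alpha <= mu and L0 * alpha < 1.0 and q < 1.0):
            return None                                   # condition (9) violated
        s = L1 * (alpha + beta) / (1.0 - q)
        gamma = beta + 1.5 * (Lstar / 48.0 * s**3 + 3.0 * s**2
                              + 1.5 * s + 1.0 / 3.0) * (beta - alpha)
        a = (1.0 + L0 * (alpha + 0.5 * (gamma - alpha))) * (gamma - alpha) \
            + 1.5 * (1.0 + L0 * alpha) * (beta - alpha)
        alpha_new = gamma + (L1 * (beta - alpha) / ((1.0 - q) * (1.0 - L0 * alpha))
                             + 2.0 / (1.0 - q)) * a
        mu_new = (1.0 + L0 * (alpha + 0.5 * (alpha_new - alpha))) * (alpha_new - alpha) \
                 + 1.5 * (1.0 + L0 * alpha) * (beta - alpha)
        beta_new = alpha_new + (2.0 / 3.0) * mu_new / (1.0 - L0 * alpha_new)
        out.append((alpha, beta, gamma))
        alpha, beta = alpha_new, beta_new
    return out
```

Since ‖x_{n+1} − x_n‖ ≤ α_{n+1} − α_n (see the proof of Theorem 1 below), the first index at which α_{n+1} − α_n falls below a given tolerance predicts the number of iterates needed.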
Consider U(x, r) = {s ∈ X : ‖x − s‖ < r} and U̅(x, r) = {s ∈ X : ‖x − s‖ ≤ r}.
Now, assume the following:
(a1) There exist an initial point x_0 ∈ Ω and a constant β_0 ≥ 0 such that ‖(2/3) Ş′(x_0)^{-1} Ş(x_0)‖ ≤ β_0.
(a2) There exist an operator G ∈ B(X, Y), a point x_0 ∈ Ω, and L_0 > 0 with
‖G^{-1}(Ş′(s) − G)‖ ≤ L_0 ‖s − x_0‖ for all s ∈ Ω.
Set Ω_1 = Ω ∩ U(x_0, 1/L_0).
(a3) There exists L_1 > 0 with
‖G^{-1}(Ş′(s) − Ş′(t))‖ ≤ L_1 ‖s − t‖ for all s, t ∈ Ω_1.
(a4) The condition (9) holds for μ = 1/(2 L_0).
(a5) U̅(x_0, λ) ⊆ Ω.
Remark 1. 
  • The choices for G can be G = I (the identity operator) or G = Ş′(x_0); other choices are possible as long as (a2) and (a3) are satisfied.
  • From (a2), we have
    ‖G^{-1} Ş′(s)‖ ≤ 1 + L_0 ‖s − x_0‖. (10)
Theorem 1. 
Assume that the conditions (a1)–(a5) hold. Then, the sequence {x_n} defined in (5) converges to some x* ∈ U(x_0, λ) with Ş(x*) = 0.
Proof. 
First, we prove the following assertions:
‖z_n − y_n‖ ≤ γ_n − β_n, (11)
‖x_{n+1} − z_n‖ ≤ α_{n+1} − γ_n (12)
and
‖y_{n+1} − x_{n+1}‖ ≤ β_{n+1} − α_{n+1}. (13)
We use mathematical induction to prove the result. Using ( a 1 ) we obtain
‖y_0 − x_0‖ ≤ ‖(2/3) Ş′(x_0)^{-1} Ş(x_0)‖ ≤ β_0 = β_0 − α_0. (14)
And by the assumptions of Lemma 1 we have
(L_0/2)(‖x_0 − x_0‖ + ‖y_0 − x_0‖) ≤ (L_0/2)(α_0 + β_0) = q_0 < 1. (15)
Let x ∈ U(x_0, λ). By (a2) and (a4), we have
‖G^{-1}(Ş′(x) − G)‖ ≤ L_0 ‖x − x_0‖ < L_0 λ < 1.
Hence, by the Banach lemma (BL) on invertible operators [26], we obtain Ş′(x)^{-1} ∈ B(Y, X) and
‖Ş′(x)^{-1} G‖ ≤ 1/(1 − L_0 ‖x − x_0‖). (16)
Note that by (15) we can obtain
I 1 2 G 1 H 0 = G 1 ( 1 2 ( Ş ( x 0 ) + Ş ( y 0 ) ) G 1 2 ( L 0 x 0 x 0 + L 0 y 0 x 0 ) < q 0 < 1 .
So by the Banach lemma we obtain
1 2 H 0 1 G 1 1 q 0 H 0 1 G 1 2 ( 1 q 0 ) .
Again, by the second substep of (5), we have
z 0 y 0 = 2 3 Q H 0 1 Ş ( x 0 ) Ş ( x 0 ) 1 Ş ( x 0 ) = 3 2 Q H 0 1 Ş ( x 0 ) 2 3 I ( y 0 x 0 ) .
Note that
Q H 0 1 Ş ( x 0 ) 2 3 I = Q H 0 1 Ş ( x 0 ) I + 1 3 I .
Now, let us define
P_{θ_1}^n = (1/2) I + θ_1 (H_n^{-1} Ş′(x_n) − (1/2) I),
and using the mean value theorem (MVT) and the identity Q((1/2)I) = I, we can write
Q H 0 1 Ş ( x 0 ) I = Q ( H 0 1 Ş ( x 0 ) ) Q 1 2 I = 0 1 Q P θ 1 0 d θ 1 H 0 1 Ş ( x 0 ) 1 2 I = 0 1 Q P θ 1 0 3 I + 3 I d θ 1 H 0 1 Ş ( x 0 ) 1 2 I .
Since Q 1 2 I = 3 I , from (20) we obtain
Q H n 1 Ş ( x n ) I = 0 1 Q P θ 1 0 Q 1 2 I d θ 1 H 0 1 Ş ( x 0 ) 1 2 I + 3 H 0 1 Ş ( x 0 ) 1 2 I .
Again, use MVT on the above equation to obtain
Q H 0 1 Ş ( x 0 ) I = 0 1 0 1 Q 1 2 I + θ 2 P θ 1 0 1 2 I d θ 2 P θ 1 0 1 2 I d θ 1 × H 0 1 Ş ( x 0 ) 1 2 I + 3 H 0 1 Ş ( x 0 ) 1 2 I .
Now, use the identity Q 1 2 I = 24 I to obtain
Q H 0 1 Ş ( x 0 ) I = 0 1 0 1 Q 1 2 I + θ 2 P θ 1 0 1 2 I Q 1 2 I ) ) d θ 2 × P θ 1 0 1 2 I d θ 1 H 0 1 Ş ( x 0 ) 1 2 I + 0 1 24 P θ 1 0 1 2 I d θ 1 H 0 1 Ş ( x 0 ) 1 2 I + 3 H 0 1 Ş ( x 0 ) 1 2 I .
Also, we have
H_0^{-1} Ş′(x_0) − (1/2) I = H_0^{-1}(Ş′(x_0) − (1/2) H_0) = H_0^{-1}(Ş′(x_0) − (1/2)(Ş′(x_0) + Ş′(y_0))) = (1/2) H_0^{-1}(Ş′(x_0) − Ş′(y_0)) (22)
and
P_{θ_1}^0 − (1/2) I = θ_1 (H_0^{-1} Ş′(x_0) − (1/2) I) = (θ_1/2) H_0^{-1}(Ş′(x_0) − Ş′(y_0)). (23)
Now, using (22) and (23) we can write (21) as
Q H 0 1 Ş ( x 0 ) I = 0 1 0 1 Q 1 2 I + θ 2 P θ 1 0 1 2 Q 1 2 I ) ) d θ 2 × θ 1 4 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 2 d θ 1 + 0 1 6 θ 1 ( H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) ) 2 d θ 1 + 3 2 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) .
Taking the norm on both sides of the above equation and using the inequality (4), we obtain
Q H 0 1 Ş ( x 0 ) I 0 1 0 1 L * θ 2 P θ 1 0 1 2 I × θ 1 4 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 2 d θ 2 d θ 1 + 0 1 6 θ 1 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 2 d θ 1 + 3 2 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) .
Using (22) and (23) we obtain
Q H 0 1 Ş ( x 0 ) I L * 48 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 3 + 3 H 0 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 2 + 3 2 H 0 1 G G 1 ( Ş ( x 0 ) Ş ( y 0 ) ) L * 48 H 0 1 G 3 G 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 3 + 3 H 0 1 G 2 G 1 ( Ş ( x 0 ) Ş ( y 0 ) ) 2 + 3 2 H 0 1 G G 1 ( Ş ( x 0 ) Ş ( y 0 ) ) ,
and using (17) and ( a 3 ) we can write
Q H 0 1 Ş ( x 0 ) I L * 48 L 1 x 0 y 0 1 q 0 3 + 3 L 1 x 0 y 0 1 q 0 2 + 3 2 L 1 x 0 y 0 1 q 0 .
Now, from (18), (19), and (24) we obtain
‖z_0 − y_0‖ ≤ (3/2) [ (L_*/48) (L_1 ‖x_0 − y_0‖/(1 − q_0))^3 + 3 (L_1 ‖x_0 − y_0‖/(1 − q_0))^2 + (3/2) (L_1 ‖x_0 − y_0‖/(1 − q_0)) + 1/3 ] ‖y_0 − x_0‖. (25)
Using (25) and (8) we obtain
‖z_0 − y_0‖ ≤ γ_0 − β_0, (26)
and from (14) and (26) we obtain
‖z_0 − x_0‖ ≤ ‖z_0 − y_0‖ + ‖y_0 − x_0‖ ≤ γ_0. (27)
Now, from the 2nd and 3rd substeps of (5) we have
x_1 − z_0 = 2 (Ş′(x_0)^{-1} − 3 H_0^{-1}) Ş(z_0), (28)
Ş ( x 0 ) 1 3 H 0 1 = Ş ( x 0 ) 1 ( H 0 3 Ş ( x 0 ) ) H 0 1 Ş ( z 0 ) = ( Ş ( x 0 ) 1 ( ( Ş ( y 0 ) Ş ( x 0 ) ) H 0 1 2 H 0 1 ) ) Ş ( z 0 ) = ( Ş ( x 0 ) 1 G ( G 1 ( Ş ( y 0 ) Ş ( x 0 ) ) H 0 1 G + 2 H 0 1 G ) ) G 1 Ş ( z 0 ) .
Again, by using ( a 2 ) , (16) and (17), we have
Ş ( x 0 ) 1 3 H 0 1 L 1 y 0 x 0 ( 1 L 0 x 0 x 0 ) ( 1 q 0 ) + 2 1 q 0 G 1 Ş ( z 0 ) .
Now, by using MVT and the fact that Ş ( x 0 ) = 3 2 Ş ( x 0 ) ( y 0 x 0 ) we obtain
Ş ( z 0 ) = Ş ( z 0 ) Ş ( x 0 ) + Ş ( x 0 ) = 0 1 Ş ( x 0 + θ 2 ( z 0 x 0 ) ) d θ 2 ( z 0 x 0 ) 3 2 Ş ( x 0 ) ( y 0 x 0 )
then by using (10) we obtain
G 1 Ş ( z 0 ) = 0 1 G 1 Ş ( x 0 + θ 2 ( z 0 x 0 ) ) d θ 2 ( z 0 x 0 ) + 3 2 G 1 Ş ( x 0 ) ( y 0 x 0 ) 1 + 0 1 L 0 x 0 + θ 2 ( z 0 x 0 ) x 0 d θ 2 z 0 x 0 + 3 2 ( 1 + L 0 x 0 x 0 ) y 0 x 0 1 + L 0 x 0 x 0 + 1 2 z 0 x 0 z 0 x 0 + 3 2 1 + L 0 x 0 x 0 y 0 x 0 1 + L 0 ( α 0 + 1 2 ( γ 0 α 0 ) ) + 3 2 ( 1 + L 0 α 0 ) ( β 0 α 0 ) = a 0 .
Finally, by using (8) and (28)–(30) we obtain
‖x_1 − z_0‖ ≤ α_1 − γ_0, (31)
and using (31) and (27) we obtain
‖x_1 − x_0‖ ≤ ‖x_1 − z_0‖ + ‖z_0 − x_0‖ ≤ α_1 − α_0 = α_1. (32)
Now, from the 2nd substep of (5) we obtain
‖y_1 − x_1‖ = (2/3) ‖Ş′(x_1)^{-1} Ş(x_1)‖ ≤ (2/3) ‖Ş′(x_1)^{-1} G‖ ‖G^{-1} Ş(x_1)‖ (33)
and using the identity Ş ( x 0 ) = 3 2 Ş ( x 0 ) ( y 0 x 0 ) we obtain
G 1 Ş ( x 1 ) = G 1 Ş ( x 1 ) Ş ( x 0 ) 3 2 Ş ( x 0 ) ( y 0 x 0 ) G 1 0 1 Ş ( x 0 + θ 2 ( x 1 x 0 ) ) d θ 2 ( x 1 x 0 ) + 3 2 G 1 Ş ( x 0 ) ( y 0 x 0 ) .
Now, using (10), we obtain
G 1 Ş ( x 1 ) 1 + L 0 x 0 x 0 + 1 2 x 1 x 0 x 1 x 0 + 3 2 ( 1 + L 0 x 0 x 0 ) y 0 x 0 ( 1 + L 0 ( α 0 + 1 2 ( α 1 α 0 ) ) ) ( α 1 α 0 ) + 3 2 ( 1 + L 0 α 0 ) ( β 0 α 0 ) μ 1
and using (33), (34), (16), (8), and (a2) we obtain
‖y_1 − x_1‖ ≤ (2/3) ‖Ş′(x_1)^{-1} G‖ ‖G^{-1} Ş(x_1)‖ ≤ (2/3) μ_1/(1 − L_0 ‖x_1 − x_0‖) ≤ (2/3) μ_1/(1 − L_0 α_1) = β_1 − α_1, (35)
and also
‖y_1 − x_0‖ ≤ ‖y_1 − x_1‖ + ‖x_1 − x_0‖ ≤ β_1. (36)
Finally, if we replace x_0, y_0, z_0, α_0, β_0, γ_0, x_1, y_1, α_1, β_1 in the above estimates by x_n, y_n, z_n, α_n, β_n, γ_n, x_{n+1}, y_{n+1}, α_{n+1}, β_{n+1}, respectively, we see that the induction hypothesis is satisfied. [Notice that, while replacing x_0 by x_n, wherever assumption (a2) is used, the second x_0 in L_0 ‖x_0 − x_0‖ remains x_0 itself. Also, in (27), (32), and (36), x_0 remains x_0 itself.]
Then, we obtain that (11)–(13) are true for all n ∈ ℕ, and
‖x_{n+1} − x_0‖ ≤ α_{n+1} < λ, (37)
‖z_{n+1} − x_0‖ ≤ ‖z_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x_0‖ ≤ γ_{n+1} − α_{n+1} + α_{n+1} − α_0 = γ_{n+1} < λ (38)
and
‖y_{n+1} − x_0‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − x_0‖ ≤ β_{n+1} < λ. (39)
Therefore, from (37)–(39) we can claim that x n , z n , y n U ( x 0 , λ ) for all n N . By condition ( a 4 ) and Lemma 1, the sequences { α n + 1 } , { β n + 1 } , and { γ n + 1 } are Cauchy, and by (11)–(13) we have
‖x_{n+1} − x_n‖ ≤ ‖x_{n+1} − z_n‖ + ‖z_n − y_n‖ + ‖y_n − x_n‖ ≤ α_{n+1} − α_n,
‖z_{n+1} − z_n‖ ≤ ‖z_{n+1} − y_{n+1}‖ + ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − z_n‖ ≤ γ_{n+1} − γ_n,
‖y_{n+1} − y_n‖ ≤ ‖y_{n+1} − x_{n+1}‖ + ‖x_{n+1} − z_n‖ + ‖z_n − y_n‖ ≤ β_{n+1} − β_n,
so that the sequences {x_{n+1}}, {y_{n+1}}, and {z_{n+1}} are also Cauchy and converge to some x* ∈ U̅(x_0, λ) ⊆ U̅(x_0, 1/(2 L_0)). Again, consider (34); since the induction holds, we have
‖G^{-1} Ş(x_{n+1})‖ ≤ (1 + L_0(‖x_n − x_0‖ + (1/2) ‖x_{n+1} − x_n‖)) ‖x_{n+1} − x_n‖ + (3/2)(1 + L_0 ‖x_n − x_0‖) ‖y_n − x_n‖.
Finally, by the continuity of Ş, letting n → +∞ in the above inequality, we obtain Ş(x*) = 0. □
Remark 2. 
The semilocal analysis of method (3) is analogous to that of method (5), so we omit the details. Therefore, we assume x_n → x* ∈ U(x_0, λ) ⊆ U(x_0, 1/(2 L_0)).
Next, a region is given which contains only one solution of Ş ( x ) = 0 .
Proposition 1. 
Suppose the following exist:
(i) A solution z ∈ U(x_0, a) of Ş(x) = 0 for some a > 0;
(ii) b ≥ a such that
L_0 (a + b) < 2.
Set S = Ω ∩ U̅(x_0, b). Then, z is the unique solution of Ş(x) = 0 in the region S.
Proof. 
See [19]. □

3. Local Analysis of (3)

We use the following additional assumptions.
(a6) ‖G^{-1}(Ş″(s) − Ş″(t))‖ ≤ L_2 ‖s − t‖, L_2 > 0, for all s, t ∈ Ω_1.
(a7) ‖G^{-1}(Ş‴(s) − Ş‴(t))‖ ≤ L_3 ‖s − t‖, L_3 > 0, for all s, t ∈ Ω_1.
(a8) ‖G^{-1} Ş′(s)‖ ≤ K_1 for some K_1 > 0 and all s ∈ Ω_1.
(a9) ‖G^{-1} Ş″(s)‖ ≤ K_2 for some K_2 > 0 and all s ∈ Ω_1.
(a10) ‖G^{-1} Ş‴(s)‖ ≤ K_3 for some K_3 > 0 and all s ∈ Ω_1.
From Remark 2, we obtained x* ∈ U(x_0, λ) ⊆ U(x_0, 1/(2 L_0)). Then, by (a2) we can observe that, for x ∈ U(x_0, 1/L_0),
‖G^{-1}(Ş′(x) − G)‖ ≤ L_0 ‖x − x_0‖ ≤ L_0(‖x − x*‖ + ‖x* − x_0‖) ≤ 1/2 + L_0 ‖x − x*‖. (40)
Now, using the BL and (40) we obtain, for all x ∈ U(x*, 1/(2 L_0)), that Ş′(x) is invertible and
‖Ş′(x)^{-1} G‖ ≤ 1/(1 − 2 L_0 ‖x − x*‖). (41)
Now, consider the non-decreasing continuous functions φ_0, φ_*, h_* : [0, 1/(2 L_0)) → ℝ defined as
φ_0(t) = 1 + 2 K_1/(3 (1 − 2 L_0 t)), φ_*(t) = 1/2 + (L_0/2)(1 + φ_0(t)) t
and
h_*(t) = φ_*(t) − 1.
Then, h_*(0) = −1/2 and lim_{t → (1/(2 L_0))^-} h_*(t) = +∞.
By the intermediate value theorem (IVT), there exists ρ_* ∈ (0, 1/(2 L_0)) such that h_*(ρ_*) = 0.
Further, let ϕ , h : [ 0 , ρ * ) R be non-decreasing and continuous functions defined as
ϕ ( t ) = 9 K 2 L 1 2 8 ( 1 2 L 0 t ) 3 + K 1 K 2 L 2 2 ( 1 2 L 0 t ) 3 5 6 + K 1 12 ( 1 2 L 0 t ) + K 1 K 2 ( 1 2 L 0 t ) 4 3 K 2 L 1 4 1 + K 1 ( 1 2 L 0 t ) + L 2 K 1 2 ( 1 2 L 0 t ) 5 6 + K 1 6 ( 1 2 L 0 t ) + 5 K 1 2 L 3 8 ( 1 2 L 0 t ) 3 + L 3 K 1 2 4 ( 1 2 L 0 t ) 3 1 + 4 K 1 2 27 ( 1 2 L 0 t ) 2 + 2 K 1 3 ( 1 2 L 0 t ) + K 1 2 L 1 K 3 4 ( 1 2 L 0 t ) 4 + K 1 3 K 2 L * L 1 1 + ϕ 0 ( t ) 864 ( 1 2 L 0 t ) 3 ( 1 ϕ * ( t ) ) 3 + K 2 3 K 1 4 6 ( 1 ϕ * ( t ) ) ( 1 2 L 0 t ) 6 + K 2 3 K 1 4 9 ( 1 ϕ * ( t ) ) 2 ( 1 2 L 0 t ) 5
and
h(t) = φ(t) t^3 − 1.
Note that h(0) = −1 and lim_{t → ρ_*^-} h(t) = +∞; so, by the IVT, h has a smallest zero in (0, ρ_*), which we denote by r, so that h(r) = 0.
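In practice, ρ_* and r can be located numerically once φ_0, φ_*, and φ have been assembled from the constants L_0, K_1, …; the following sketch (the helper name smallest_zero is an illustrative choice) mirrors the IVT argument: scan for the first sign change and refine it by bisection.

```python
def smallest_zero(h, t_max, samples=10_000, tol=1e-12):
    """Smallest zero of a continuous h on (0, t_max), assuming h(0+) < 0:
    locate the first sign change on a grid, then refine it by bisection."""
    prev = 1e-15
    for k in range(1, samples + 1):
        t = t_max * k / samples
        if h(t) >= 0.0:
            lo, hi = prev, t
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if h(mid) < 0.0:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)
        prev = t
    return None          # no sign change detected in (0, t_max)

# Example usage (phi_star and phi supplied by the user for a concrete problem):
# rho_star = smallest_zero(lambda t: phi_star(t) - 1.0, 1.0 / (2.0 * L0) - 1e-12)
# r        = smallest_zero(lambda t: phi(t) * t**3 - 1.0, rho_star)
```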
We use the following inequalities in our study. For all x, y ∈ Ω_1, by the MVT, we obtain
Ş ( x ) 1 Ş ( y ) = Ş ( x ) 1 ( Ş ( y ) Ş ( x * ) ) Ş ( x ) 1 0 1 Ş ( x * + θ 1 ( y x * ) ) d θ 1 ( y x * ) Ş ( x ) 1 G 0 1 G 1 Ş ( x * + θ 1 ( y x * ) ) d θ 1 y x * .
By (41) and (a8), for x, y ∈ U(x*, 1/(2 L_0)) we obtain
‖Ş′(x)^{-1} Ş(y)‖ ≤ K_1 ‖y − x*‖/(1 − 2 L_0 ‖x − x*‖). (42)
Using (42), we can find an estimate for ‖y_n − x*‖, provided y_n, x_n ∈ U(x*, 1/(2 L_0)), as follows:
‖y_n − x*‖ = ‖x_n − x* − (2/3) Ş′(x_n)^{-1} Ş(x_n)‖ ≤ ‖x_n − x*‖ + (2/3) ‖Ş′(x_n)^{-1} Ş(x_n)‖ ≤ (1 + 2 K_1/(3 (1 − 2 L_0 ‖x_n − x*‖))) ‖x_n − x*‖ = φ_0(‖x_n − x*‖) ‖x_n − x*‖. (43)
Also, using MVT we can write for x , y U ( x * , 1 2 L 0 ) ,
y x * Ş ( x ) 1 Ş ( y ) = y x * Ş ( x ) 1 0 1 Ş ( x * + θ 1 ( y x * ) ) d θ 1 ( y x * ) Ş ( x ) 1 G × 0 1 G 1 ( Ş ( x ) Ş ( x * ) + Ş ( x * ) Ş ( x * + θ 1 ( y x * ) ) d θ 1 y x * Ş ( x ) 1 G 0 1 G 1 [ Ş ( x ) Ş ( x * ) ] d θ 1 ( y x * ) + Ş ( x ) 1 G 0 1 G 1 [ Ş ( x * ) Ş ( x * + θ 1 ( y x * ) ) ] d θ 1 ( y x * ) .
Now, using (41) and ( a 3 ) , we obtain
‖y − x* − Ş′(x)^{-1} Ş(y)‖ ≤ (L_1/(1 − 2 L_0 ‖x − x*‖)) (‖x − x*‖ + (1/2) ‖y − x*‖) ‖y − x*‖.
Remark 3. 
Note that the initial point x_0 considered in the local analysis is not the same as the one considered in the semilocal analysis. From now on, we regard the x_0 of the semilocal analysis as a fixed element of Ω. We study the local convergence in U(x*, r), which satisfies
U(x*, r) ⊆ U(x*, 1/(2 L_0)) ⊆ Ω_1,
see Figure 1.
Consequently, all the assumptions we considered are valid in the local convergence ball, so that we can proceed with the local analysis as an independent analysis with the same assumptions.
Theorem 2. 
Suppose that assumptions (a2), (a3), and (a6)–(a10) hold. Then, the sequence (x_n) defined by (3) with x_0 ∈ U(x*, r) ∖ {x*} is well defined and
‖x_{n+1} − x*‖ ≤ φ(r) ‖x_n − x*‖^4.
In particular, x_n ∈ U(x*, r) for all n ∈ ℕ ∪ {0}, and (x_n) converges to x* with OC four.
Proof. 
Let x_n ∈ U(x*, r). If we add and subtract Ş′(x_n)^{-1} Ş(x_n) in the 2nd step of (3), we obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) Q ( H n 1 Ş ( x n ) ) I Ş ( x n ) 1 Ş ( x n ) = x n x * Ş ( x n ) 1 Ş ( x n ) Q ( H n 1 Ş ( x n ) ) Q 1 2 I Ş ( x n ) 1 Ş ( x n ) ,
where the identity Q 1 2 I = I is used. Again, by (20), we have
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) 0 1 Q P θ 1 n d θ 1 H n 1 Ş ( x n ) 1 2 I Ş ( x n ) 1 Ş ( x n ) = x n x * Ş ( x n ) 1 Ş ( x n ) 0 1 Q P θ 1 d θ 1 H n 1 Ş ( x n ) 1 2 H n Ş ( x n ) 1 Ş ( x n ) .
Now, since
Ş ( x n ) 1 2 H n = 1 2 Ş ( x n ) Ş ( y n ) ,
by MVT and using the relation x n y n = 2 3 Ş ( x n ) Ş ( x n ) , we have
Ş ( x n ) 1 2 H n = 1 2 0 1 Ş ( M θ 2 ) d θ 2 ( x n y n ) = 1 3 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) ,
where M θ 2 = y n + θ 2 ( x n y n ) . Now, if we invoke (48) in (46), we obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) 1 3 0 1 Q P θ 1 n d θ 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 .
Next, we add and subtract the term H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 and use the equality Q 1 2 I = 3 I to obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 1 3 0 1 Q P θ 1 n Q 1 2 I d θ 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 .
By applying MVT on the first derivative of Q, we obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 1 3 0 1 0 1 Q 1 2 I + θ 3 P θ 1 n 1 2 I d θ 3 P θ 1 n 1 2 I d θ 1 × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 1 3 0 1 0 1 Q 1 2 I + θ 3 P θ 1 n 1 2 I d θ 3 θ 1 H n 1 Ş ( x n ) 1 2 H n d θ 1 × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 .
By using (48) and adjusting the terms, we obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 1 9 0 1 0 1 Q 1 2 I + θ 3 P θ 1 n 1 2 I θ 1 d θ 3 d θ 1 × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 1 9 0 1 0 1 Q 1 2 I + θ 3 P θ 1 n 1 2 I Q 1 2 I + Q 1 2 I θ 1 d θ 3 d θ 1 × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) .
Now, using the equalities
Q 1 2 I = 24 I and 1 9 0 1 0 1 24 θ 1 d θ 1 d θ 3 = 4 3 ,
we obtain
x n + 1 x * = x n x * Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 4 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) C 1 ,
where
C 1 = 1 9 0 1 0 1 Q 1 2 I + θ 3 P θ 1 n 1 2 I Q 1 2 I θ 1 d θ 3 d θ 1 × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) .
Now, we add and subtract
1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2
and
1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 2 Ş ( x n ) 1 Ş ( x n )
in (49) to obtain
x n + 1 x * = E + 1 2 Ş ( x n ) 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 + 1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) 4 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( x n ) C 1 ,
where
E = x n x * Ş ( x n ) 1 Ş ( x n ) 1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 2 Ş ( x n ) 1 Ş ( x n ) .
By using MVT we can obtain the following equality:
1 2 Ş ( x n ) 1 H n 1 = 1 2 H n 1 Ş ( x n ) Ş ( y n ) Ş ( x n ) 1 = 1 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 .
Using (51) and rewriting the terms in (50), we have
x n + 1 x * = E + 2 3 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 × 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 + 1 2 Ş ( x n ) 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 × 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 4 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) × H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 C 1 .
By combining the terms accordingly, we obtain
x n + 1 x * = E + 1 2 Ş ( x n ) 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) × Ş ( x n ) 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 4 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) × 1 2 Ş ( x n ) 1 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 C 1 = E C 1 + C 2 + C 3 ,
where
C 2 = 1 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 0 1 Ş ( M θ 2 ) d θ 2 × Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 , C 3 = 4 9 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) × Ş ( x n ) 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) 2 .
Now, if we use MVT twice we obtain
x n x * Ş ( x n ) 1 Ş ( x n ) = 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 ,
where x θ 2 , θ 1 = x * + θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) .
Next, by using the relation a 2 = ( a b ) 2 b 2 + 2 a b with a = x n x * and b = Ş ( x n ) 1 Ş ( x n ) , we obtain
Ş ( x n ) 1 0 1 0 1 Ş ( x θ 2 , θ 1 ) d θ 2 ( 1 θ 1 ) d θ 1 ( x n x * ) 2 = 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 + 2 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) Ş ( x n ) 1 Ş ( x n ) d θ 2 d θ 1 .
Combining (52)–(54) and using the equality 0 1 ( 1 θ 1 ) d θ 1 = 1 2 , we obtain
x n + 1 x * = E 1 + 2 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) d θ 2 d θ 1 × ( x n x * Ş ( x n ) 1 Ş ( x n ) ) Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 Ş ( x n ) 1 [ Ş ( x θ 2 , θ 1 ) Ş ( M θ 2 ) ] ( 1 θ 1 ) d θ 2 d θ 1 × ( Ş ( x n ) 1 Ş ( x n ) ) 2 1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 2 Ş ( x n ) 1 Ş ( x n ) C 1 + C 2 + C 3 ,
where
E 1 = 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 .
Now, substitute (53) in the 2nd term of (55), and add and subtract Ş ( x * ) in the 3rd term of (55) to obtain
x n + 1 x * = E 1 + 2 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) d θ 2 d θ 1 × 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 Ş ( x n ) 1 ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 1 2 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 2 Ş ( x n ) 1 Ş ( x n ) C 1 + C 2 + C 3 .
Again, using 0 1 ( 1 θ 1 ) d θ 1 = 1 2 in the 4th term, we obtain
x n + 1 x * = E 1 + 2 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) d θ 2 d θ 1 × 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 Ş ( x n ) 1 ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 2 0 1 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) ( 1 θ 1 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 d θ 1 2 × Ş ( x n ) 1 Ş ( x n ) C 1 + C 2 + C 3 .
Inserting Ş ( M θ 2 ) appropriately into the 2nd term gives
x n + 1 x * = E 1 + 2 0 1 0 1 Ş ( x n ) 1 [ Ş ( x θ 2 , θ 1 ) Ş ( M θ 2 ) ] ( 1 θ 1 ) d θ 2 d θ 1 × 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + 2 0 1 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) ( 1 θ 1 ) d θ 2 d θ 1 × 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 Ş ( x n ) 1 ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 2 0 1 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) ( 1 θ 1 ) Ş ( x n ) 1 Ş ( x n ) d θ 2 d θ 1 2 × Ş ( x n ) 1 Ş ( x n ) C 1 + C 2 + C 3 .
By taking the 3rd and 5th terms together, we obtain
x n + 1 x * = E 1 + E 2 + 2 0 1 0 1 ( 1 θ 1 ) Ş ( x n ) 1 Ş ( M θ 2 ) d θ 2 d θ 1 × Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) Ş ( x θ 2 , θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 0 1 0 1 ( 1 θ 1 ) Ş ( M θ 2 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 ( 1 θ 1 ) Ş ( x n ) 1 ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 C 1 + C 2 + C 3 ,
where
E 2 = 2 0 1 0 1 ( 1 θ 1 ) Ş ( x n ) 1 [ Ş ( x θ 2 , θ 1 ) Ş ( M θ 2 ) ] d θ 2 d θ 1 × 0 1 0 1 ( 1 θ 1 ) Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( x n x * ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) .
We now add and subtract ( Ş ( x n ) 1 Ş ( x n ) ) 2 in the 3rd term to obtain
x n + 1 x * = E 1 + E 2 + 2 0 1 ( 1 θ 1 ) 0 1 Ş ( x n ) 1 Ş ( M θ 2 ) d θ 2 d θ 1 × Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) Ş ( x θ 2 , θ 1 ) [ ( x n x * ) 2 ( Ş ( x n ) 1 Ş ( x n ) ) 2 ] d θ 2 d θ 1 0 1 0 1 ( 1 θ 1 ) Ş ( M θ 2 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 + 0 1 0 1 ( 1 θ 1 ) Ş ( x θ 2 , θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) + Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 C 1 + C 2 + C 3 = E 1 + E 2 + E 3 + Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) ( [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] [ Ş ( M θ 2 ) Ş ( x * ) ] ) × ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 C 1 + C 2 + C 3 ,
where
E 3 = 2 Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) Ş ( M θ 2 ) d θ 2 d θ 1 × Ş ( x n ) 1 0 1 0 1 ( 1 θ 1 ) Ş ( x θ 2 , θ 1 ) d θ 2 d θ 1 × [ x n x * + Ş ( x n ) 1 Ş ( x n ) ] [ x n x * Ş ( x n ) 1 Ş ( x n ) ] 0 1 0 1 ( 1 θ 1 ) Ş ( M θ 2 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 + 0 1 0 1 ( 1 θ 1 ) Ş ( x θ 2 , θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 Ş ( x n ) 1 Ş ( x n ) .
Now, we expand and apply MVT for the fourth term of (57). Further, add and subtract Ş ( x * ) accordingly as given below. Then,
x n + 1 x * = E 1 + E 2 + E 3 + Ş ( x n ) 1 0 1 0 1 [ Ş ( x * + θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) Ş ( x * ) ] × ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 Ş ( x n ) 1 0 1 0 1 [ Ş ( M θ 2 ) Ş ( x * ) ] × ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 C 1 + C 2 + C 3 .
By regrouping the terms, we obtain
x n + 1 x * = E 1 + E 2 + E 3 + 0 1 0 1 0 1 Ş ( x n ) 1 [ Ş ( x * + θ 3 ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ) Ş ( x * ) ] × ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 0 1 0 1 0 1 Ş ( x n ) 1 [ Ş ( x * + θ 3 ( M θ 2 x * ) ) Ş ( x * ) ] × ( 1 θ 1 ) ( M θ 2 x * ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 + 0 1 0 1 0 1 Ş ( x n ) 1 Ş ( x * ) [ ( 1 θ 1 ) ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ( 1 θ 1 ) ( M θ 2 x * ) ] ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 C 1 + C 2 + C 3 .
Let
E 4 = 0 1 0 1 0 1 Ş ( x n ) 1 [ Ş ( x * + θ 3 ( θ 1 ( x n x * ) + ( 1 θ 1 ) θ 2 ( x n x * ) ) ) Ş ( x * ) ] × ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ( 1 θ 1 ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1
and
E 5 = 0 1 0 1 0 1 Ş ( x n ) 1 [ Ş ( x * + θ 3 ( M θ 2 x * ) ) Ş ( x * ) ] × ( 1 θ 1 ) ( M θ 2 x * ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 .
Then, by the first subequation of (2), we obtain
x n + 1 x * = i = 1 5 E i + 0 1 0 1 0 1 Ş ( x n ) 1 Ş ( x * ) [ ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ( 1 θ 1 ) ( x n x * 2 3 Ş ( x n ) 1 Ş ( x n ) + θ 2 ( 2 3 Ş ( x n ) 1 Ş ( x n ) ) ) ( 1 θ 1 ) ] ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 C 1 + C 2 + C 3 = i = 1 5 E i + 0 1 0 1 Ş ( x n ) 1 Ş ( x * ) [ ( θ 1 ( x n x * ) + θ 2 ( 1 θ 1 ) ( x n x * ) ) ( 1 θ 1 ) ( x n x * 2 3 ( 1 θ 2 ) Ş ( x n ) 1 Ş ( x n ) ) ( 1 θ 1 ) ] ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 2 d θ 1 C 1 + C 2 + C 3 .
By rearranging, we obtain
x n + 1 x * = i = 1 5 E i + 0 1 0 1 0 1 Ş ( x n ) 1 Ş ( x * ) [ ( θ 1 + θ 2 ( 1 θ 1 ) 1 ) ( 1 θ 1 ) ( x n x * ) + 2 3 ( 1 θ 1 ) ( 1 θ 2 ) Ş ( x n ) 1 Ş ( x n ) ] ( Ş ( x n ) 1 Ş ( x n ) ) 2 d θ 3 d θ 2 d θ 1 C 1 + C 2 + C 3 = i = 1 5 E i + Ş ( x n ) 1 Ş ( x * ) 1 6 ( x n x * ) + 1 6 Ş ( x n ) 1 Ş ( x n ) ( Ş ( x n ) 1 Ş ( x n ) ) 2 C 1 + C 2 + C 3 = i = 1 6 E i C 1 + C 2 + C 3 ,
where
E 6 = 1 6 Ş ( x n ) 1 Ş ( x * ) [ Ş ( x n ) 1 Ş ( x n ) ( x n x * ) ] ( Ş ( x n ) 1 Ş ( x n ) ) 2 .
Now, consider
I 1 2 G 1 H n = I 1 2 G 1 Ş ( x n ) + Ş ( y n ) = 1 2 G 1 [ Ş ( x n ) G + Ş ( y n ) G .
Using (40), we can obtain
I 1 2 G 1 H n 1 2 1 2 + L 0 x n x * + 1 2 + L 0 y n x * = 1 2 + L 0 2 1 + ϕ 0 ( x n x * ) x n x * = ϕ * ( x n x * ) < 1 , x n U ( x * , r ) .
Now, by the BL, we obtain
H n 1 G 1 2 1 ϕ * ( x n x * ) .
Using assumptions ( a 2 ) , ( a 3 ) , ( a 6 )–( a 10 ), (41), (42), and (43), we obtain
E 1 0 1 0 1 Ş ( x n ) 1 Ş ( x θ 2 , θ 1 ) ( 1 θ 1 ) d θ 2 d θ 1 x n x * Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 G G 1 Ş ( x θ 2 , θ 1 ) 0 1 0 1 ( 1 θ 1 ) d θ 2 d θ 1 × x n x * Ş ( x n ) 1 Ş ( x n ) 2 K 2 2 ( 1 2 L 0 x n x * ) 3 L 1 2 ( 1 2 L 0 x n x * ) x n x * 2 2 9 L 1 2 K 2 8 ( 1 2 L 0 x n x * ) 3 x n x * 4 ,
E 2 2 Ş ( x n ) 1 G 0 1 0 1 ( 1 θ 1 ) G 1 [ Ş ( x θ 2 , θ 1 ) Ş ( M θ 2 ) ] d θ 2 d θ 1 × Ş ( x n ) 1 G 0 1 0 1 ( 1 θ 1 ) G 1 Ş ( x θ 2 , θ 1 ) x n x * 2 Ş ( x n ) 1 Ş ( x n ) d θ 2 d θ 1 2 Ş ( x n ) 1 G 0 1 0 1 ( 1 θ 1 ) × [ G 1 [ Ş ( x θ 2 , θ 1 ) Ş ( x * ) ] + G 1 [ Ş ( M θ 2 ) Ş ( x * ) ] ] d θ 2 d θ 1 × Ş ( x n ) 1 G 0 1 0 1 ( 1 θ 1 ) G 1 Ş ( x θ 2 , θ 1 ) x n x * 2 Ş ( x n ) 1 Ş ( x n ) d θ 2 d θ 1 .
Now, using (41) and (42) we obtain
E 2 K 1 K 2 L 2 2 ( 1 2 L 0 x n x * ) 3 0 1 0 1 [ ( θ 1 + θ 2 ( 1 θ 1 ) ) ( 1 θ 1 ) ( x n x * ) + ( 1 θ 1 ) ( x n x * ) + ( 1 θ 2 ) ( 1 θ 1 ) ( x n y n ) ] d θ 2 ( 1 θ 1 ) d θ 1 x n x * 3 K 1 K 2 L 2 2 ( 1 2 L 0 x n x * ) 3 5 6 + K 1 12 ( 1 2 L 0 x n x * ) x n x * 4 ,
and
E 3 2 Ş ( x n ) 1 G 2 G 1 Ş ( M θ 2 ) 1 2 × [ 0 1 0 1 G 1 Ş ( x θ 2 , θ 1 ) d θ 2 ( 1 θ 1 ) d θ 1 ( x n x * + Ş ( x n ) 1 Ş ( x n ) ) × x n x * Ş ( x n ) 1 Ş ( x n ) + 0 1 0 1 G 1 ( Ş ( x θ 2 , θ 1 ) Ş ( M θ 2 ) ) ( 1 θ 1 ) d θ 2 d θ 1 × Ş ( x n ) 1 Ş ( x n ) 2 ] Ş ( x n ) 1 Ş ( x n ) .
As in (64), we obtain
E 3 K 2 K 1 ( 1 2 L 0 x n x * ) 3 [ K 2 2 1 + K 1 ( 1 2 L 0 x n x * ) 3 L 1 2 ( 1 2 L 0 x n x * ) + L 2 4 3 + K 1 6 ( 1 2 L 0 x n x * ) K 1 2 ( 1 2 L 0 x n x * ) 2 ] x n x * 4 K 2 K 1 ( 1 2 L 0 x n x * ) 4 [ K 2 2 1 + K 1 ( 1 2 L 0 x n x * ) 3 L 1 2 + L 2 5 6 + K 1 6 ( 1 2 L 0 x n x * ) K 1 2 ( 1 2 L 0 x n x * ) ] x n x * 4 ,
E 4 Ş ( x n ) 1 G 0 1 0 1 0 1 G 1 Ş ( x * + θ 3 ( ( θ 1 + θ 2 ( 1 θ 1 ) ) ( x n x * ) ) Ş ( x * ) d θ 3 | ( θ 1 + θ 2 ( 1 θ 1 ) ) ( 1 θ 1 ) | d θ 2 d θ 1 x n x * Ş ( x n ) 1 Ş ( x n ) 2 K 1 2 L 3 ( 1 2 L 0 x n x * ) 3 0 1 0 1 0 1 ( θ 3 ( 1 θ 1 ) ( θ 1 + θ 2 ( 1 θ 1 ) ) 2 ) d θ 3 d θ 2 d θ 1 x n x * 4 K 1 2 L 3 2 ( 1 2 L 0 x n x * ) 3 0 1 | θ 1 2 ( 1 θ 1 ) | + | θ 1 ( 1 θ 1 ) 2 | + 1 3 ( 1 θ 1 ) 3 | d θ 1 x n x * 4 5 K 1 2 L 3 8 ( 1 2 L 0 x n x * ) 3 x n x * 4 ,
E 5 0 1 0 1 0 1 θ 3 L 3 y n x * + θ 2 ( x n y n ) 2 d θ 3 | 1 θ 1 | d θ 2 d θ 1 × Ş ( x n ) 1 G Ş ( x n ) 1 Ş ( x n ) 2 L 3 K 1 2 4 ( 1 2 L 0 x n x * ) 3 0 1 2 3 | θ 2 1 | Ş ( x n ) 1 Ş ( x n ) + x n x * 2 d θ 2 × x n x * 2 L 3 K 1 2 4 ( 1 2 L 0 x n x * ) 3 1 + 4 K 1 2 27 ( 1 2 L 0 x n x * ) 2 + 2 K 1 3 ( 1 2 L 0 x n x * ) × x n x * 4 ,
E 6 1 6 Ş ( x n ) 1 G G 1 Ş ( x * ) x n x * Ş ( x n ) 1 Ş ( x n ) × Ş ( x n ) 1 Ş ( x n ) 2 K 1 2 L 1 K 3 4 ( 1 2 L 0 x n x * ) 4 x n x * 4 ,
C 1 1 9 L * 0 1 0 1 θ 3 P θ 1 n 1 2 I θ 1 d θ 3 d θ 1 H n 1 G 2 × 0 1 G 1 Ş ( M θ 2 ) 2 d θ 2 Ş ( x n ) 1 Ş ( x n ) 3 K 2 2 9 L * 4 ( 1 ϕ * ( x n x * ) ) 2 0 1 0 1 | θ 3 θ 1 2 | H n 1 Ş ( x n ) 1 2 H n d θ 3 d θ 1 × K 1 3 1 2 L 0 x n x * 3 x n x * 3 K 2 2 432 K 1 3 L * ( 1 2 L 0 x n x * ) 3 ( 1 ϕ * ( x n x * ) ) 2 H n 1 G × G 1 Ş ( x n ) Ş ( x * ) ( Ş ( x n ) Ş ( x * ) x n x * 3 K 1 3 K 2 L * L 1 1 + ϕ 0 ( x n x * ) 864 ( 1 2 L 0 x n x * ) 3 ( 1 ϕ * ( x n x * ) ) 3 x n x * 4 ,
C 2 1 3 H n 1 G Ş ( x n ) 1 G 2 K 2 3 Ş ( x n ) 1 Ş ( x n ) 4 K 2 3 K 1 4 6 ( 1 ϕ * ( x n x * ) ) ( 1 2 L 0 x n x * ) 6 x n x * 4
and
C 3 4 9 H n 1 G 2 Ş ( x n ) 1 G K 2 3 Ş ( x n ) 1 Ş ( x n ) 4 K 2 3 K 1 4 9 ( 1 ϕ * ( x n x * ) ) 2 ( 1 2 L 0 x n x * ) 5 x n x * 4 .
Combining inequalities (63)–(71), we obtain
‖x_{n+1} − x*‖ ≤ Σ_{i=1}^{6} ‖E_i‖ + Σ_{i=1}^{3} ‖C_i‖ ≤ φ(‖x_n − x*‖) ‖x_n − x*‖^4. (72)
Now, since x_0 ∈ U(x*, r) and φ(r) r^3 < 1, by taking n = 0 in (72) we obtain x_1 ∈ U(x*, r). So, by (72) and induction, x_n ∈ U(x*, r) for all n ∈ ℕ ∪ {0}, and hence x_n → x*. By (72) and (1), we obtain OC four. □

4. Local Analysis of Method (5)

Consider the functions ψ , q : [ 0 , ρ * ) R as non-decreasing continuous and defined as
ψ ( t ) = K 2 ϕ ( t ) 2 2 1 2 L 0 t t 2 + L 2 ϕ ( t ) 1 2 L 0 t ϕ ( t ) t 3 4 + 1 2 + L 2 K 1 2 ϕ ( t ) 4 ( 1 ϕ * ( t ) ) ( 1 2 L 0 t ) ϕ 0 ( t ) + 1 + 3 L 1 K 2 ϕ ( t ) 2 ( 1 2 L 0 t ) 2 + K 2 K 1 ( 1 2 L 0 t ) 2 1 + 1 2 ϕ ( t ) t 3 ϕ ( t ) + K 2 2 K 1 3 3 ( 1 ϕ * ( t ) ) ( 1 2 L 0 t ) 4 ϕ ( t )
and
q(t) = ψ(t) t^5 − 1.
Then, q(0) = −1 and lim_{t → ρ_*^-} q(t) = +∞. Then, by the IVT, there exists a smallest root r_2 ∈ (0, ρ_*) such that q(r_2) = 0. Let us define
R_1 = min{r, r_2}.
Theorem 3. 
Suppose that assumptions (a2), (a3), and (a6)–(a10) hold. Then, the sequence (x_n) defined by (5) with x_0 ∈ U(x*, R_1) ∖ {x*} is well defined and
‖x_{n+1} − x*‖ ≤ ψ(R_1) ‖x_n − x*‖^6.
In particular, x_n ∈ U(x*, R_1) for all n ∈ ℕ ∪ {0} and x_n → x*, with OC six.
Proof. 
If we add and subtract Ş ( x n ) 1 Ş ( z n ) in the 3rd substep of (5), we obtain
x n + 1 x * = z n x * Ş ( x n ) 1 Ş ( z n ) 3 Ş ( x n ) 1 + 6 Ş ( x n ) + Ş ( y n ) 1 Ş ( z n ) = z n x * Ş ( x n ) 1 Ş ( z n ) 3 Ş ( x n ) 1 + 6 H n 1 Ş ( z n ) = z n x * Ş ( x n ) 1 Ş ( x n ) + 6 1 2 Ş ( x n ) 1 H n 1 Ş ( z n ) .
Now, using MVT we can write
z n x * Ş ( x n ) 1 Ş ( z n ) = z n x * Ş ( x n ) 1 0 1 Ş ( x * + θ 1 ( z n x * ) ) d θ 1 ( z n x * ) = Ş ( x n ) 1 0 1 Ş ( x n ) Ş ( x * + θ 1 ( z n x * ) ) d θ 1 ( z n x * ) = Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) d θ 2 ( x n x * ) θ 1 ( z n x * ) d θ 1 ( z n x * ) = Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) d θ 2 d θ 1 ( x n x * ) ( z n x * ) D 1 ,
where z θ 2 , θ 1 = x * + θ 1 ( z n x * ) + θ 2 ( x n x * θ 1 ( z n x * ) ) .
and
D 1 = Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) d θ 2 θ 1 d θ 1 ( z n x * ) 2 .
Now, by using (74) and (51) we can write (73) as
x n + 1 x * = D 1 + Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) d θ 2 d θ 1 ( x n x * ) ( z n x * ) 2 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) = D 1 + Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) Ş ( x * ) + Ş ( x * ) d θ 2 d θ 1 × ( x n x * ) ( z n x * ) 2 H n 1 0 1 Ş ( M θ 2 ) Ş ( x * ) + Ş ( x * ) d θ 2 × Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) = D 1 + D 2 D 3 + Ş ( x n ) 1 Ş ( x * ) ( x n x * ) ( z n x * ) 2 H n 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) ,
where
D 2 = Ş ( x n ) 1 0 1 0 1 Ş ( z θ 2 , θ 1 ) Ş ( x * ) d θ 2 d θ 1 ( x n x * ) ( z n x * ) , D 3 = 2 H n 1 0 1 Ş ( M θ 2 ) Ş ( x * ) d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) .
Next, we add and subtract
Ş ( x n ) 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) ( z n x * )
and
Ş ( x n ) 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n )
appropriately into (75) to obtain
x n + 1 x * = D 1 + D 2 D 3 + D 4 + D 5 + 2 1 2 Ş ( x n ) 1 H n 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) ,
where
D 4 = Ş ( x n ) 1 Ş ( x * ) x n x * Ş ( x n ) 1 Ş ( x n ) ( z n x * ) and D 5 = Ş ( x n ) 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) z n x * Ş ( x n ) 1 Ş ( z n ) .
Finally, if we apply (51) again, we obtain
x n + 1 x * = D 1 + D 2 D 3 + D 4 + D 5 D 6 ,
where
D 6 = 2 3 H n 1 0 1 Ş ( M θ 2 ) d θ 2 Ş ( x n ) 1 Ş ( x n ) × Ş ( x n ) 1 Ş ( x * ) Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) .
Now, by using ( a 2 ) , ( a 3 ) , ( a 6 )–( a 10 ), (41)–(43), and (72) we obtain the following inequalities:
D 1 1 2 K 2 Ş ( x n ) 1 G z n x * 2 K 2 ϕ ( x n x * ) 2 2 1 2 L 0 x n x * x n x * 8 ,
D 2 Ş ( x n ) 1 G L 2 0 1 0 1 z θ 2 , θ 1 x * d θ 2 d θ 1 x n x * z n x * L 2 ϕ ( x n x * ) 1 2 L 0 x n x * 0 1 0 1 z θ 2 , θ 1 x * d θ 2 d θ 1 x n x * 5
L 2 ϕ ( x n x * ) 1 2 L 0 x n x * 0 1 0 1 θ 1 ( 1 θ 2 ) ( z n x * ) + θ 2 ( x n x * ) d θ 2 d θ 1 x n x * 5 L 2 ϕ ( x n x * ) 1 2 L 0 x n x * ϕ ( x n x * ) x n x * 3 4 + 1 2 x n x * 6 ,
D 3 H n 1 G L 2 0 1 M θ 2 x * d θ 2 Ş ( x n ) 1 Ş ( x n ) Ş ( x n ) 1 Ş ( z n ) L 2 K 1 2 ϕ ( x n x * ) 4 ( 1 ϕ * ( x n x * ) ) ( 1 2 L 1 x n x * ) 2 × 0 1 ( 1 θ 2 ) ( y n x * ) + θ 2 ( x n x * ) d θ 2 x n x * 5 L 2 K 1 2 ϕ ( x n x * ) 4 ( 1 ϕ * ( x n x * ) ) ( 1 2 L 0 x n x * ) × ϕ 0 ( x n x * ) + 1 x n x * 6 ,
D 4 K 2 Ş ( x n ) 1 G x n x * Ş ( x n ) 1 Ş ( x n ) z n x * 3 L 1 K 2 ϕ ( x n x * ) 2 ( 1 2 L 0 x n x * ) 2 x n x * 6 ,
D 5 K 2 Ş ( x n ) 1 Ş ( x n ) z n x * Ş ( x n ) 1 Ş ( z n ) K 2 K 1 ( 1 2 L 0 x n x * ) 2 1 + 1 2 ϕ ( x n x * ) x n x * 3 × ϕ ( x n x * ) x n x * 6
and
D 6 2 3 H n 1 G Ş ( x n ) 1 G K 2 2 Ş ( x n ) 1 Ş ( x n ) 2 Ş ( x n ) 1 Ş ( z n ) K 2 2 K 1 3 3 ( 1 ϕ * ( x n x * ) ) ( 1 2 L 0 x n x * ) 3 ϕ ( x n x * ) x n x * 6 .
Combining (76)–(81), we obtain
‖x_{n+1} − x*‖ ≤ ψ(‖x_n − x*‖) ‖x_n − x*‖^6. (82)
Now, since x_0 ∈ U(x*, R_1) and ψ(R_1) R_1^5 < 1, by taking n = 0 in (82) we obtain x_1 ∈ U(x*, R_1). So, by (82) and induction, we have x_n ∈ U(x*, R_1) for all n ∈ ℕ ∪ {0}, and hence x_n → x*. By (82) and (1), we obtain OC six. □
Proposition 2. 
Assume the following:
(1) x* ∈ U(x*, r*) is a solution of (2);
(2) (a3) holds;
(3) there exists δ ≥ r* such that
L_1 δ < 2.
Let Ω_2 = Ω ∩ U(x*, δ). Then, x* is the unique solution of (2) in the domain Ω_2.
Proof. 
Analogous to the proof of Proposition 4 in [19]. □

5. Numerical Examples

Consider the following choices of Q [21]:
Q_1(T) = 12 T^2 − 9 T + (5/2) I,  Q_1^*(T) = ‖T − (1/2) I‖^3 I + Q_1(T),
Q_2(T) = 9 T + (3/2) T^{-1} − (13/2) I,  Q_2^*(T) = ‖T − (1/2) I‖^3 I + Q_2(T),
Q_3(T) = 3 (12 I − 16 T)^{-1} + (1/4) I,  Q_3^*(T) = ‖T − (1/2) I‖^3 I + Q_3(T).
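In the scalar case T ∈ ℝ, the requirements in (3) and (4) reduce to Q(1/2) = 1, Q′(1/2) = 3, and Q″(1/2) = 24, with Q″ Lipschitz near 1/2. The following is a quick SymPy check for Q_1 (a sketch; Q_2 and Q_3 can be checked the same way, and since Q_1″ is constant one may take L_* = 0):

```python
import sympy as sp

t = sp.symbols('t')
Q1 = 12 * t**2 - 9 * t + sp.Rational(5, 2)            # scalar version of Q_1
half = sp.Rational(1, 2)
print(Q1.subs(t, half))                                # 1  -> Q(1/2 I) = I
print(sp.diff(Q1, t).subs(t, half))                    # 3  -> Q'(1/2 I) = 3 I
print(sp.diff(Q1, t, 2).subs(t, half))                 # 24 -> Q''(1/2 I) = 24 I
```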
For the following examples (Examples 1 and 2), we calculate the radii r and R_1 for methods (3) and (5) with Q = Q_1, and consequently L_* = 0.
Example 1. 
Let X = Y = ℝ^3 and Ω = U̅(0, 1). Define the function Ş on Ω for x = (x_1, x_2, x_3)^T by
Ş(x) = (sin x_1, x_2^2/5 + x_2, x_3)^T.
The derivatives are
Ş′(x) = diag(cos x_1, (2/5) x_2 + 1, 1),
and the only nonzero entries of the bilinear operator Ş″(x) and of the trilinear operator Ş‴(x) are −sin x_1 and 2/5, and −cos x_1, respectively.
Now, consider x* = (0, 0, 0)^T and the initial point x_0 = (0, 0, 1/3)^T. Choose G = Ş′(x_0) = Ş′(x*) = I; then x* ∈ U((0, 0, 1/3), 1/2). The conditions (a2), (a3), and (a6)–(a10) are validated for L_0 = L_1 = L_2 = L_3 = K_1 = K_2 = K_3 = 1. Then, the parameters are r = 0.11423168 and R_1 = 0.1123702500.
Example 2. 
Consider the expression for the trajectory of an electron in the air gap between two parallel plates, which, for particular parameter values, reduces to
Ş(x) = x − (1/2) cos x + π/4.
The iterated solution x* = −0.30909327 is calculated in [21] using method (5). Now, let Ω = [−1, 1] and consider the initial point x_0 = 0. Choose G = Ş′(x_0) = Ş′(0) = 1; then x* ∈ U(0, 1). The conditions (a2), (a3), and (a6)–(a10) are validated for L_0 = L_1 = L_2 = L_3 = K_1 = K_2 = K_3 = 1/2. Then, the parameters are r = 0.3168574 and R_1 = 0.31062916.
Example 3 
(Planck's radiation law [27]). The spectral density of electromagnetic radiation emitted by a black body can be calculated from Planck's radiation law. Let T, λ, k_B, h, and c denote the absolute temperature of the black body, the wavelength of the radiation, the Boltzmann constant, Planck's constant, and the speed of light in the medium (vacuum), respectively. The law is given as
ϑ(λ) = (8 π h c / λ^5) (e^{c h/(λ k_B T)} − 1)^{-1}.
The wavelength λ which gives the maximum energy density ϑ(λ) is the solution of the equation
(c h/(λ k_B T)) e^{c h/(λ k_B T)} / (e^{c h/(λ k_B T)} − 1) = 5.
Simplifying the equation by setting x = c h/(λ k_B T), we obtain the nonlinear equation
Ş(x) = 5 e^{−x} + x − 5 = 0. (83)
The solution x* of (83) gives the maximizing wavelength λ through
λ = c h/(x* k_B T).
The root is x* = 4.9651142317442. Let Ω = [3, 7] and consider the initial point x_0 = 4. Choose G = Ş′(x_0) = Ş′(4); then x* ∈ U(4, 1/0.6). The conditions (a2), (a3), and (a6)–(a10) are validated for L_0 = L_1 = K_2 = L_3 = 0.3, L_2 = K_3 = 0.2, and K_1 = 1.082. Then, the parameters are r = 0.3203225403657 and R_1 = 0.31506694069213.
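For illustration, here is a hedged scalar run of method (5) with the weight Q_1 on Equation (83), starting from x_0 = 4; the printed values are not taken from the paper and are only indicative.

```python
import math

def S(x):  return 5.0 * math.exp(-x) + x - 5.0        # Equation (83)
def dS(x): return 1.0 - 5.0 * math.exp(-x)

x = 4.0                                               # initial point x_0
for n in range(4):
    y = x - (2.0 / 3.0) * S(x) / dS(x)                # first substep
    H = dS(x) + dS(y)
    T = dS(x) / H
    Q = 12.0 * T**2 - 9.0 * T + 2.5                   # scalar Q_1
    z = x - Q * S(x) / dS(x)                          # second substep
    x = z + 2.0 * S(z) / dS(x) - 6.0 * S(z) / H       # third substep of (5)
    print(n, x)
# the iterates approach x* = 4.9651142317442...
```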
Example 4 
(Kepler's law of planetary motion [27]). A planet revolving about the Sun traces an elliptical path. Using Kepler's law of planetary motion, one can calculate the position (x, y) of the planet at time t. The expressions are
x = u (cos E − e),  y = u √(1 − e^2) sin E.
Here, e is the eccentricity of the ellipse, e = √(1 − v^2/u^2), and E is the eccentric anomaly (E = τ t + e sin E, where τ is the frequency of the orbit). To find E for the given values τ t = 0.01 and e = 0.9995, we need to solve the following nonlinear equation:
Ş(x) = 0.01 − x + 0.9995 sin x.
The solution is x* = 0.3899777749. Let Ω = [0, 0.5] and consider the initial point x_0 = 0.4. Choose G = Ş′(x_0) = Ş′(0.4); then x* ∈ U(0.4, 1/12.2). The conditions (a2), (a3), and (a6)–(a10) are validated for L_0 = 6.1, L_1 = K_2 = L_3 = 0.5474, L_2 = K_3 = 0.3650, and K_1 = 1.4404. Then, the parameters are r = 0.0305154629952 and R_1 = 0.0304663983648.
For the following example, we calculate the iterates of (5) for different choices of Q and for three other sixth-order methods.
Example 5. 
The following system of equations is considered:
3 t_1^2 t_2 + t_2^2 = 1,
t_1^4 + t_1 t_2^3 = 1,
which has solutions (1, 0.2), (0.4, 1.3), and (0.9, 0.3). The iterates are given in Table 1 and Table 2. For convenience, denote ζ_k = (t_1^k, t_2^k) and x_{n,6} = ‖x_{n+1} − x*‖/‖x_n − x*‖^6.
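The system of Example 5 can be plugged directly into the step sketched after method (5) in Section 1. The snippet below (with an arbitrary illustrative starting guess, not necessarily the one used for Table 1) is one way to reproduce such iterate tables.

```python
import numpy as np   # jarratt_like_step as sketched in Section 1

def F(t):
    t1, t2 = t
    return np.array([3.0 * t1**2 * t2 + t2**2 - 1.0,
                     t1**4 + t1 * t2**3 - 1.0])

def dF(t):
    t1, t2 = t
    return np.array([[6.0 * t1 * t2,        3.0 * t1**2 + 2.0 * t2],
                     [4.0 * t1**3 + t2**3,  3.0 * t1 * t2**2]])

x = np.array([2.0, 1.0])                     # illustrative starting guess
for k in range(5):
    x = jarratt_like_step(F, dF, x, order=6)
    print(k + 1, x, np.linalg.norm(F(x)))
```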

6. Dynamics

To visually analyze the convergence region of method (5), we study its dynamics (a similar analysis can be carried out for method (3)).
Example 6. 
u^3 − v = 0,
v^3 − u = 0,
with roots (−1, −1), (0, 0), and (1, 1).
Example 7. 
3 u^2 v − v^3 = 0,
u^3 − 3 u v^2 − 1 = 0,
with roots (−1/2, √3/2), (−1/2, −√3/2), and (1, 0).
Example 8. 
u^2 + v^2 − 4 = 0,
3 u^2 + 7 v^2 − 16 = 0,
with roots (√3, 1), (√3, −1), (−√3, 1), and (−√3, −1).
The basin of attraction (BA) [29] is the set of all initial points whose iterates converge to some root. The visualization of the convergence is achieved by the following procedure.
Method:
  • To plot the basins of attraction for the given equations, we take the domain R = {(u, v) ∈ ℝ^2 : −2 ≤ u ≤ 2, −2 ≤ v ≤ 2}, which contains all the roots of each system, and divide R into 401 × 401 grid points.
  • The point ( u 0 , v 0 ) R 2 whose iterates converge to a particular root is marked in the color assigned to that root.
  • Iterates which do not converge to any root are marked in black.
  • A maximum of 50 iterations are performed.
  • The tolerance is fixed as 10 8 .
We use MATLAB on a Windows machine, 64 bit, 16 cores, with Intel Core i7−10700 CPU @ 2.90 GHz for all computations.
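The computations in the paper are done in MATLAB; purely for illustration, the same procedure can be sketched in Python using the jarratt_like_step routine from Section 1 and Example 6 (grid size, iteration cap, and tolerance follow the bullets above; the array layout and color coding are illustrative choices).

```python
import numpy as np   # jarratt_like_step as sketched in Section 1

roots = np.array([[-1.0, -1.0], [0.0, 0.0], [1.0, 1.0]])   # roots of Example 6

def F(t):
    u, v = t
    return np.array([u**3 - v, v**3 - u])

def dF(t):
    u, v = t
    return np.array([[3.0 * u**2, -1.0],
                     [-1.0, 3.0 * v**2]])

us = np.linspace(-2.0, 2.0, 401)
vs = np.linspace(-2.0, 2.0, 401)
basin = np.zeros((401, 401), dtype=int)      # 0 = non-convergent (black)
for i, u0 in enumerate(us):
    for j, v0 in enumerate(vs):
        x = np.array([u0, v0])
        for _ in range(50):                  # at most 50 iterations
            try:
                x = jarratt_like_step(F, dF, x, order=6)
            except np.linalg.LinAlgError:
                break
            d = np.linalg.norm(roots - x, axis=1)
            if d.min() < 1e-8:               # tolerance 10^{-8}
                basin[j, i] = 1 + int(d.argmin())   # color index of that root
                break
# basin can then be displayed, e.g. with matplotlib.pyplot.imshow.
```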
From the figures (Figure 2, Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7), we can observe that method (5) shows a very good convergence trend. For the choices Q_1^*, Q_2^*, and Q_3^*, the BAs are almost the same as for the choices Q_i, i = 1, 2, 3. Further, note that Q_1^*, Q_2^*, and Q_3^* are not differentiable more than two times, but they satisfy the conditions used in our convergence analysis.

7. Conclusions

We proved the OC of methods (3) and (5) using assumptions only on the first three derivatives of the function Ş. We presented the semilocal convergence, and we used our semilocal analysis to avoid any assumptions on x* when proving the local convergence. All these analyses were performed in the more general Banach-space setting. Computable error bounds and uniqueness results are provided. The analysis enlarges the class of functions to which the methods are applicable. Moreover, it extends the class of methods itself by relaxing the assumptions on the weight function. We presented numerical examples, comparison studies, and the dynamics of methods (3) and (5).

Author Contributions

Conceptualization, S.G., A.K., I.K.A. and J.P.; methodology, S.G., A.K., I.K.A. and J.P.; software, S.G., A.K., I.K.A. and J.P.; validation, S.G., A.K., I.K.A. and J.P.; formal analysis, S.G., A.K., I.K.A. and J.P.; investigation, S.G., A.K., I.K.A. and J.P.; resources, S.G., A.K., I.K.A. and J.P.; data curation, S.G., A.K., I.K.A. and J.P.; writing—original draft preparation, S.G., A.K., I.K.A. and J.P.; writing—review and editing, S.G., A.K., I.K.A. and J.P.; visualization, S.G., A.K., I.K.A. and J.P.; supervision, S.G., A.K., I.K.A. and J.P.; project administration, S.G., A.K., I.K.A. and J.P.; funding acquisition, S.G., A.K., I.K.A. and J.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Moré, J.J. A Collection of Nonlinear Model Problems; Technical Report; Argonne National Lab.: Lemont, IL, USA, 1989. [Google Scholar]
  2. Lin, Y.; Bao, L.; Jia, X. Convergence analysis of a variant of the Newton method for solving nonlinear equations. Comput. Math. Appl. 2010, 59, 2121–2127. [Google Scholar] [CrossRef]
  3. Awawdeh, F. On new iterative method for solving systems of nonlinear equations. Numer. Algorithms 2010, 54, 395–409. [Google Scholar] [CrossRef]
  4. Khlopov, M. Primordial Black Hole Messenger of Dark Universe. Symmetry 2024, 16, 1487. [Google Scholar] [CrossRef]
  5. Kang, S.; Lee, H. Probabilistic Multi-Robot Task Scheduling for the Antarctic Environments with Crevasses. Symmetry 2024, 16, 1229. [Google Scholar] [CrossRef]
  6. Bouzeffour, F. Supersymmetric Quesne-Dunkl Quantum Mechanics on Radial Lines. Symmetry 2024, 16, 1508. [Google Scholar] [CrossRef]
  7. Regmi, S.; Argyros, I.K.; George, S.; Argyros, C.I. Extended Convergence of Three Step Iterative Methods for Solving Equations in Banach Space with Applications. Symmetry 2022, 14, 1484. [Google Scholar] [CrossRef]
  8. Remesh, K.; Argyros, I.K.; Saeed K, M.; George, S.; Padikkal, J. Extending the applicability of Cordero type iterative method. Symmetry 2022, 14, 2495. [Google Scholar] [CrossRef]
  9. Artidiello, S.; Cordero, A.; Torregrosa, J.R.; Vassileva, M.P. Multidimensional generalization of iterative methods for solving nonlinear problems by means of weight-function procedure. Appl. Math. Comput. 2015, 268, 1064–1071. [Google Scholar] [CrossRef]
  10. Balaji, G.V.; Seader, J.D. Application of interval Newton’s method to chemical engineering problems. Reliab. Comput. 1995, 1, 215–223. [Google Scholar] [CrossRef]
  11. Cordero, A.; Hueso, J.L.; Martínez, E.; Torregrosa, J.R. A modified Newton-Jarratt’s composition. Numer. Algorithms 2010, 55, 87–99. [Google Scholar] [CrossRef]
  12. Cordero, A.; Maimó, J.G.; Torregrosa, J.R.; Vassileva, M.P. Solving nonlinear problems by Ostrowski–Chun type parametric families. J. Math. Chem. 2015, 53, 430–449. [Google Scholar] [CrossRef]
  13. Constantinides, A.; Mostoufi, N. Numerical Methods for Chemical Engineers with MATLAB Applications with Cdrom; Prentice Hall PTR: London, UK, 1999. [Google Scholar]
  14. Chun, C.; Ham, Y. Some sixth-order variants of Ostrowski root-finding methods. Appl. Math. Comput. 2007, 193, 389–394. [Google Scholar] [CrossRef]
  15. Sharma, J.R.; Arora, H. Efficient Jarratt-like methods for solving systems of nonlinear equations. Calcolo 2014, 51, 193–210. [Google Scholar] [CrossRef]
  16. Soleymani, F. Regarding the accuracy of optimal eighth-order methods. Math. Comput. Model. 2011, 53, 1351–1357. [Google Scholar] [CrossRef]
  17. Ortega, J.M.; Rheinboldt, W.C. Iterative Solution of Nonlinear Equations in Several Variables; SIAM: Montréal, QC, Canada, 2000. [Google Scholar]
  18. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method for functions of several variables. Appl. Math. Comput. 2006, 183, 199–208. [Google Scholar] [CrossRef]
  19. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  20. Collatz, L. Functional Analysis and Numerical Mathematics; Academic Press: New York, NY, USA, 1966. [Google Scholar]
  21. Alzahrani, A.K.H.; Behl, R.; Alshomrani, A.S. Some higher-order iteration functions for solving nonlinear models. Appl. Math. Comput. 2018, 334, 80–93. [Google Scholar] [CrossRef]
  22. Argyros, I.K.; Argyros, G.I.; Regmi, S.; George, S. Contemporary Algorithms: Theory and Applications: Vol 4; Nova Science Publishers: New York, NY, USA, 2024. [Google Scholar]
  23. Amat, S.; Busquier, S.; Grau, À.; Grau-Sánchez, M. Maximum efficiency for a family of Newton-like methods with frozen derivatives and some applications. Appl. Math. Comput. 2013, 219, 7954–7963. [Google Scholar] [CrossRef]
  24. Grau-Sánchez, M.; Grau, À.; Noguera, M. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math. 2011, 236, 1259–1266. [Google Scholar] [CrossRef]
  25. Ostrowski, A.M. Iterative Solution of Nonlinear Equations in Several Variables; Academic Press: New York, NY, USA, 1970. [Google Scholar]
  26. Argyros, I.K. The Theory and Applications of Iteration Methods; CRC Press: Boca Raton, FL, USA, 2022. [Google Scholar]
  27. Behl, R.; Kansal, M.; Salimi, M. Modified King’s Family for Multiple Zeros of Scalar Nonlinear Functions. Mathematics 2020, 8, 827. [Google Scholar] [CrossRef]
  28. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef]
  29. Amat, S.; Busquier, S.; Plaza, S. Review of some iterative root–finding methods from a dynamical point of view. Sci. Ser. A Math. Sci. 2004, 10, 3–35. [Google Scholar]
Figure 1. Graphical representation of various balls considered.
Figure 2. BA for method (5) for the choices Q_1, Q_2, and Q_3 (left to right) for Example 6.
Figure 3. BA for method (5) for the choices Q_1^*, Q_2^*, and Q_3^* (left to right) for Example 6.
Figure 4. BA for method (5) for the choices Q_1, Q_2, and Q_3 (left to right) for Example 7.
Figure 5. BA for method (5) for the choices Q_1^*, Q_2^*, and Q_3^* (left to right) for Example 7.
Figure 6. BA for method (5) for the choices Q_1, Q_2, and Q_3 (left to right) for Example 8.
Figure 7. BA for method (5) for the choices Q_1^*, Q_2^*, and Q_3^* (left to right) for Example 8.
Table 1. Iterates.
k | Method (5) for Q = Q_1: ζ_k | Ratio x_{n,6} | Method (5) for Q = Q_2: ζ_k | Ratio x_{n,6} | Method (5) for Q = Q_3: ζ_k | Ratio x_{n,6}
0 ( 2.000000 , 1.000000 ) ( 2.000000 , 1.000000 ) ( 2.000000 , 1.000000 )
1 ( 1.224220 , 0.107816 ) 0.001807 ( 1.240647 , 0.136149 ) 0.001868 ( 1.164290 , 0.035469 ) 0.001527
2 ( 1.006587 , 0.288928 ) 0.129886 ( 1.010789 , 0.282265 ) 0.107490 ( 0.995880 , 0.303264 ) 0.346089
3 ( 0.992781 , 0.306439 ) 4.459994 ( 0.992783 , 0.306437 ) 4.150013 ( 0.992780 , 0.306440 ) 5.274025
4 ( 0.992780 , 0.306440 ) 5.509659 ( 0.992780 , 0.306440 ) 5.509465 ( 0.992780 , 0.306440 ) 5.509727
Table 2. Iterates.
k | Sixth-order method in [15]: ζ_k | Ratio x_{n,6} | ON method [19]: ζ_k | Ratio x_{n,6} | NW method [28]: ζ_k | Ratio x_{n,6}
0 ( 2.000000 , 1.000000 ) ( 2.000000 , 1.000000 ) ( 2.000000 , 1.000000 )
1 ( 1.144688 , 0.007820 ) 0.001540 ( 1.127146 , 0.054883 ) 0.004363 ( 1.067979 , 0.174843 ) 0.001211
2 ( 0.993894 , 0.304953 ) 0.327787 ( 0.993328 , 0.305734 ) 0.501670 ( 0.992784 , 0.306436 ) 1.383068
3 ( 0.992780 , 0.306440 ) 5.417373 ( 0.992780 , 0.306440 ) 3.889832 ( 0.992780 , 0.306440 ) 5.509412
4 ( 0.992780 , 0.306440 ) 5.509727 ( 0.992780 , 0.306440 ) 3.916553 ( 0.992780 , 0.306440 ) 5.509412
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
