Article

Extended Convergence of Three Step Iterative Methods for Solving Equations in Banach Space with Applications

by
Samundra Regmi
1,*,
Ioannis K. Argyros
2,
Santhosh George
3 and
Christopher I. Argyros
4
1
Department of Mathematics, University of Houston, Houston, TX 77204, USA
2
Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
3
Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, Mangaluru 575 025, India
4
Department of Computing and Technology, Cameron University, Lawton, OK 73505, USA
*
Author to whom correspondence should be addressed.
Symmetry 2022, 14(7), 1484; https://doi.org/10.3390/sym14071484
Submission received: 17 June 2022 / Revised: 16 July 2022 / Accepted: 17 July 2022 / Published: 20 July 2022

Abstract: Symmetries are vital in the study of physical phenomena such as quantum physics and the micro-world, among others. These phenomena often reduce to solving nonlinear equations in abstract spaces, and such equations are in turn mostly solved iteratively. The objective of this paper is therefore to develop a uniform way of studying three-step iterative methods for solving equations defined on Banach spaces. The convergence is established using only information appearing in these methods. This is in contrast to earlier works, which relied on derivatives of higher order to establish the convergence. Numerical examples complete the paper.
MSC:
65J15; 47H17; 49M15; 65G99; 41A25

1. Introduction

The objective of this paper is to locate a simple solution x* ∈ Ω of the equation
G(x) = 0,        (1)
given that G : Ω ⊆ X → X₁ is a continuous operator, X, X₁ are Banach spaces and the set Ω ≠ ∅. Numerous methods can be represented, for all m = 0, 1, 2, …, by
y_m = x_m − a G′(x_m)⁻¹ G(x_m),
z_m = x_m − A_m G(x_m),        (2)
x_{m+1} = z_m − B_m G(z_m),
where A_m = A(x_m, y_m), A : Ω × Ω → L(X, X₁), A⁻¹ ∈ L(X₁, X), B_m = B(x_m, y_m), B : Ω × Ω → L(X, X₁) and B⁻¹ ∈ L(X₁, X).
  • Special cases:
  • Newton's method (second order) [1,2,3,4,5,6,7,8,9,10]: Set a = 1 and A_m = B_m = O to obtain
    y_m = x_m − G′(x_m)⁻¹ G(x_m).
This method is of order two.
  • Jarratt's method (second order) [10]: Set a = 2/3 and A_m = B_m = O to obtain
    y_m = x_m − (2/3) G′(x_m)⁻¹ G(x_m).
  • Traub-like method (fifth order) [10]: Let a = 1 and A_m = B_m = G′(x_m)⁻¹ to get
    y_m = x_m − G′(x_m)⁻¹ G(x_m),
    z_m = x_m − G′(x_m)⁻¹ G(x_m),
    x_{m+1} = z_m − G′(x_m)⁻¹ G(z_m).
  • Homeier method (third order) [11]: Set a = 1/2, A_m = G′(y_m)⁻¹ and B_m = O to obtain
    y_m = x_m − (1/2) G′(x_m)⁻¹ G(x_m),
    x_{m+1} = x_m − G′(y_m)⁻¹ G(x_m).
  • Cordero–Torregrosa method (third order) [12]: Set a = 1, A_m = 6[G′(x_m) + 4G′((x_m + y_m)/2) + G′(y_m)]⁻¹ and B_m = O, to obtain
    x_{m+1} = x_m − A_m G(x_m),
    or A_m = 2[2G′((3x_m + y_m)/4) − G′((x_m + y_m)/2) + 2G′((x_m + 3y_m)/4)]⁻¹.
  • Noor–Waseem method (third order) [13]: a = 1, A_m = 4[3G′((2x_m + y_m)/3) + G′(y_m)]⁻¹, and B_m = O.
  • Xiao–Yin method (third order) [14]: a = 1, A_m = (2/3)[(3G′(y_m) − G′(x_m))⁻¹ + G′(x_m)⁻¹] and B_m = O.
  • Cordero method (fifth order) [12]: a = 2/3, A_m = (1/2)(3G′(y_m) − G′(x_m))⁻¹(3G′(y_m) + G′(x_m)) G′(x_m)⁻¹ and B_m = ((1/2)G′(y_m) + (1/2)G′(x_m))⁻¹; or a = 1, A_m = 2(G′(y_m) + G′(x_m))⁻¹ and B_m = G′(y_m)⁻¹.
  • Sharma–Arora method (fifth order) [15]: a = 1, A_m = G′(y_m)⁻¹ and B_m = 2G′(y_m)⁻¹ − G′(x_m)⁻¹.
  • Xiao–Yin method (fifth order) [16]: a = 2/3, A_m = (1/4)(3G′(y_m)⁻¹ + G′(x_m)⁻¹) and B_m = (1/3)(3G′(y_m)⁻¹ − G′(x_m)⁻¹).
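All of the special cases above are instances of a single driver loop. The following sketch (our own illustration with hypothetical names, not code from the paper) implements the general three-step scheme (2) for scalar equations, with the parameter a and the operators A_m, B_m supplied as callables; the Traub-like choice A_m = B_m = G′(x_m)⁻¹ is demonstrated on the classical test equation x³ − 2x − 5 = 0.

```python
# Generic three-step scheme (2) for a scalar equation G(x) = 0:
#   y_m     = x_m - a * G'(x_m)^{-1} G(x_m)
#   z_m     = x_m - A(x_m, y_m) * G(x_m)
#   x_{m+1} = z_m - B(x_m, y_m) * G(z_m)
def three_step(G, dG, x0, a, A, B, tol=1e-12, max_iter=50):
    """A(x, y) and B(x, y) return scalars playing the role of A_m and B_m."""
    x = x0
    for _ in range(max_iter):
        y = x - a * G(x) / dG(x)
        z = x - A(x, y) * G(x)
        x_new = z - B(x, y) * G(z)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Traub-like choice: a = 1 and A_m = B_m = G'(x_m)^{-1}.
G = lambda x: x**3 - 2.0 * x - 5.0        # classical test equation
dG = lambda x: 3.0 * x**2 - 2.0
inv = lambda x, y: 1.0 / dG(x)
root = three_step(G, dG, 2.0, a=1.0, A=inv, B=inv)
```

Other special cases are obtained by passing different callables for A and B; the choice A = B = (the zero operator) leaves only the damped Newton substep.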
Other choices are also possible [1,2,3,8,9,14,15,17,18,19]. It is therefore interesting to study the semilocal convergence of these methods, not given in earlier papers, under the same convergence criteria in the Banach space setting using method (2). Earlier papers provided only the local convergence, in the finite-dimensional Euclidean space, and required the existence of derivatives of order one higher than the order of the method. Moreover, these derivatives do not appear in the methods themselves but are only used to show the convergence order.
For example, let X = X₁ = ℝ, Ω = [−0.5, 1.5]. Define the function ξ on Ω by
ξ(x) = 0 if x = 0, and ξ(x) = x³ log x² + x⁵ − x⁴ if x ≠ 0.
Notice that x* = 1. The definition of the function ξ gives
ξ‴(x) = 6 log x² + 60x² − 24x + 22.
However, ξ‴(x) is unbounded on Ω. Thus, the convergence of method (2) is not guaranteed by the earlier analyses. This paper extends the usage of these methods because no conditions on derivatives of high order are used to show convergence. This is the novelty of the paper. The study also includes the semilocal analysis not given in earlier research. Notice that the branching of the solutions cannot be handled using the iterative method (2), since the first step requires that G′(x_m)⁻¹ exists. The paper contains seven sections, including a numerical and a concluding section.
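The blow-up is easy to observe numerically. The following sketch evaluates |ξ‴| along a sequence tending to 0, using the reconstructed formula for ξ‴ above:

```python
import math

# Third derivative of xi(x) = x^3 log x^2 + x^5 - x^4:
#   xi'''(x) = 6 log x^2 + 60 x^2 - 24 x + 22,
# which is unbounded as x -> 0, so analyses requiring a bounded third
# derivative on Omega = [-0.5, 1.5] do not cover this equation.
def xi_third(x):
    return 6.0 * math.log(x * x) + 60.0 * x * x - 24.0 * x + 22.0

values = [abs(xi_third(10.0 ** (-k))) for k in (1, 3, 6, 9)]
# |xi'''| keeps growing along x = 1e-1, 1e-3, 1e-6, 1e-9
```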

2. Real Sequences

Let {p_m} and {q_m} be nonnegative sequences and η ≥ 0 be a given parameter. Set S = [0, ∞). Consider the functions φ₀, φ : S → S to be nondecreasing and continuous, and define the sequences {t_m}, {s_m} and {u_m} by
t₀ = 0, s₀ = η,
u_m = s_m + p_m (s_m − t_m),
t_{m+1} = u_m + q_m (u_m − s_m),        (3)
s_{m+1} = t_{m+1} + φ̄(t_m, s_m, u_m) / (1 − φ₀(t_{m+1})),
where φ̄(t_m, s_m, u_m) = ∫₀¹ φ(θ(t_{m+1} − t_m)) dθ (t_{m+1} − t_m) + (1 + φ₀(t_m)) (t_{m+1} − s_m) + |1 − a| (1 + φ₀(t_m)) (s_m − t_m).
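The recursion (3) is straightforward to evaluate. The sketch below uses illustrative data, not values from the paper: linear functions φ₀(t) = L₀t and φ(t) = Lt, constant bounds p_m = p and q_m = q, and a = 1 (so the |1 − a| term vanishes); for linear φ, ∫₀¹ φ(θd) dθ · d = Ld²/2.

```python
# Majorizing sequence (3) with illustrative data: phi0(t) = L0*t, phi(t) = L*t,
# constant bounds p_m = p, q_m = q, and a = 1 (the |1 - a| term vanishes).
def majorizing(eta, L0, L, p, q, n_steps=50):
    t, s = 0.0, eta
    ts = [t]
    for _ in range(n_steps):
        u = s + p * (s - t)
        t_next = u + q * (u - s)
        if 1.0 - L0 * t_next <= 0.0:      # the denominator in (3) must stay positive
            break
        bar = L * (t_next - t) ** 2 / 2.0 + (1.0 + L0 * t) * (t_next - s)
        s = t_next + bar / (1.0 - L0 * t_next)
        t = t_next
        ts.append(t)
    return ts

ts = majorizing(eta=0.1, L0=1.0, L=1.0, p=0.1, q=0.1)
# the t_m are nondecreasing and settle quickly to their limit tau*
```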
Next, three auxiliary results are given on the convergence of the majorizing sequence (3).
Lemma 1.
Suppose there exists a minimal zero τ₀ of the function φ₀(t) − 1 and
t_m ≤ τ₀ for all m = 0, 1, 2, ….        (4)
Then, the sequence {t_m} is nondecreasing and convergent to some τ* ∈ [0, τ₀]. The limit point τ* is the least upper bound of the sequence {t_m} and it is unique.
Proof. 
The result follows from (3) and (4), since the sequence {t_m} is nondecreasing and bounded above by τ₀.  □
A stronger result follows.
Lemma 2.
Suppose the sequence {t_m} is strictly increasing and
t_m ≤ φ₀⁻¹(1) for all m = 0, 1, 2, ….        (5)
Then, it holds that lim_{m→+∞} t_m = τ* ≤ φ₀⁻¹(1).
Proof. 
Set τ = φ 0 1 ( 1 ) in Lemma 1.  □
Next, we define the sequences {b_m} and {c_m} for all m = 0, 1, 2, … by
b_m = (1 + p_m) q_m
and
c_m = c_m¹ / (1 − φ₀(t_{m+1})),
where c_m¹ = ∫₀¹ φ(θ(1 + p_m)(1 + q_m)(s_m − t_m)) dθ (1 + p_m)(1 + q_m) + |1 − a|(1 + φ₀(t_m)) + (p_m + (1 + p_m)q_m)(1 + φ₀(t_m)), and the functions g_i, i = 1, 2, 3 by
g₁(t) = c¹(t) + t φ₀(η/(1 − t)) − t,
g₂(t) = b(t) − t, g₃(t) = c(t) − t,
and c¹(t) = ∫₀¹ φ(θ(1 + p)(1 + q)λη) dθ (1 + p)(1 + q) + |1 − a|(1 + φ₀(η/(1 − t))) + (p + (1 + p)q)(1 + φ₀(t)), provided that there exist p, q ≥ 0 such that
p_m ≤ p and q_m ≤ q.        (6)
The convergence criteria given so far are very general. However, we can consider stronger ones.
Suppose the functions g_i have minimal zeros λ_i ∈ (0, 1) and set
λ = min{λ_i} and λ̄ = max{p₀, b₀, c₀¹/(1 − φ₀(t₁))}.
Next, we present the third result.
Lemma 3.
Suppose:
p₀ ≤ λ,        (7)
b₀ ≤ λ        (8)
and
c₀¹ / (1 − φ₀(t₁)) ≤ λ.        (9)
Then, the following items hold for all m = 0, 1, 2, …
0 ≤ u_m − s_m ≤ λ^{m+1} η,        (10)
0 ≤ t_{m+1} − u_m ≤ λ^{m+1} η,        (11)
0 ≤ s_{m+1} − t_{m+1} ≤ λ^{m+1} η        (12)
and
τ* = lim_{m→+∞} t_m ≤ η / (1 − λ).        (13)
Proof. 
Items (10)–(12) are shown using induction on m. Using (7)–(9) and the definition of the sequence {t_m},
u₀ − s₀ = p₀(s₀ − t₀) ≤ λη,        (14)
t₁ − u₀ = q₀(u₀ − s₀ + s₀ − t₀) ≤ q₀(p₀(s₀ − t₀) + (s₀ − t₀)) = q₀(1 + p₀)(s₀ − t₀) = b₀η ≤ λη,        (15)
t₁ − t₀ = (t₁ − u₀) + (u₀ − s₀) + (s₀ − t₀) ≤ (q₀(1 + p₀) + (1 + p₀))(s₀ − t₀),
t₁ − s₀ = (t₁ − u₀) + (u₀ − s₀) ≤ [q₀(1 + p₀) + p₀](s₀ − t₀).
Thus,
s₁ − t₁ = c₀¹ / (1 − φ₀(t₁)) ≤ λη.        (16)
By (14)–(16), estimates (10)–(12) hold for m = 0. Assume they are true for all integers smaller than m. Using the induction hypothesis,
s_m ≤ t_m + λ^m η ≤ s_{m−1} + λ^{m−1}η + λ^m η ≤ ⋯ ≤ ((1 − λ^{m+1})/(1 − λ))η < η/(1 − λ)
and
t_{m+1} ≤ s_m + λ^{m+1}η ≤ ((1 − λ^{m+2})/(1 − λ))η < η/(1 − λ).
Moreover, estimates (14)–(16) hold with m replacing 0, since g₁(λ_i) ≤ 0, which is true by the definition of the parameters λ_i and the functions g_i. Hence, the induction for estimates (10)–(12) is completed. Consequently, it follows that lim_{m→+∞} t_m = τ* ≤ η/(1 − λ).  □

3. Semilocal Convergence

The semilocal convergence requires the following conditions.
Assume:
(C1)
There exist x₀ ∈ Ω, η ≥ 0 such that G′(x₀)⁻¹ ∈ L(X₁, X) and
|a| ‖G′(x₀)⁻¹ G(x₀)‖ ≤ η.
(C2)
‖G′(x₀)⁻¹ (G′(v) − G′(x₀))‖ ≤ φ₀(‖v − x₀‖) for all v ∈ Ω.
(C3)
The equation φ₀(t) − 1 = 0 has a minimal positive solution τ₀. Let Ω₀ = U(x₀, τ₀).
(C4)
There exist nondecreasing and continuous functions φ : [0, τ₀) → [0, ∞), p : Ω₀ × Ω₀ → [0, ∞), q : Ω₀ × Ω₀ × Ω₀ → [0, ∞) such that
‖G′(x₀)⁻¹ (G′(v₁) − G′(v))‖ ≤ φ(‖v₁ − v‖),
‖aI − A(v, v₁) G′(v)‖ ≤ p(v, v₁),
‖A(v, v₁) G(v) − B(v, v₁) G(v₂)‖ ≤ q(v, v₁, v₂)
for all v, v₁, v₂ ∈ Ω₀.
(C5)
The conditions of any of the Lemmas in Section 2 hold.
(C6)
U[x₀, τ*] ⊆ Ω.
It is worth noticing that if v₁ = v − a G′(v)⁻¹ G(v), the resulting (C4) conditions will involve a tighter function φ̄ than φ. Moreover, the same proof as that of Theorem 1 goes through (see the numerical Section).
Next, we provide the semilocal convergence.
Theorem 1.
Suppose conditions (C1)–(C6) hold. Then, the sequence {x_m} generated by method (2) exists, {x_m} ⊂ U[x₀, τ*] and there exists x* ∈ U[x₀, τ*] such that G(x*) = 0 and
‖x_m − x*‖ ≤ τ* − t_m.
Proof. 
The iterates y₀, z₀, x₁ exist by (C1) and (2) for m = 0. Then, the estimate
‖y₀ − x₀‖ = |a| ‖G′(x₀)⁻¹ G(x₀)‖ ≤ η = s₀ − t₀
is derived by (C1). Thus, the iterate y₀ ∈ U[x₀, τ*]. Let v ∈ U[x₀, τ*]. Then, by (C2) and (C6),
‖G′(x₀)⁻¹ (G′(v) − G′(x₀))‖ ≤ φ₀(‖v − x₀‖) ≤ φ₀(τ*) < 1,
leading to G′(v)⁻¹ ∈ L(X₁, X) and
‖G′(v)⁻¹ G′(x₀)‖ ≤ 1 / (1 − φ₀(‖v − x₀‖))
by the Banach lemma on linear operators with inverses [6]. Moreover, by (C4) and method (2), the following estimates are obtained in turn:
z₀ = x₀ − a G′(x₀)⁻¹ G(x₀) + a G′(x₀)⁻¹ G(x₀) − A₀ G(x₀)
   = y₀ + (a G′(x₀)⁻¹ − A₀) G(x₀)
   = y₀ + (a G′(x₀)⁻¹ − A₀) G′(x₀) G′(x₀)⁻¹ G(x₀)
   = y₀ − (aI − A₀ G′(x₀)) (y₀ − x₀),
so
‖z₀ − y₀‖ = ‖(aI − A₀ G′(x₀))(y₀ − x₀)‖ ≤ ‖aI − A₀ G′(x₀)‖ ‖y₀ − x₀‖ ≤ p₀(s₀ − t₀) = u₀ − s₀,
x₁ = z₀ − B₀ G(z₀) = x₀ − A₀ G(x₀) + A₀ G(x₀) − B₀ G(z₀) = z₀ + A₀ G(x₀) − B₀ G(z₀),
so
‖x₁ − z₀‖ = ‖A₀ G(x₀) − B₀ G(z₀)‖ ≤ q₀(u₀ − s₀) = t₁ − u₀,
G(x₁) = G(x₁) − G(x₀) − a G′(x₀)(y₀ − x₀)
      = G(x₁) − G(x₀) − G′(x₀)(x₁ − x₀) + G′(x₀)((x₁ − y₀) + (y₀ − x₀)) − a G′(x₀)(y₀ − x₀),
so
‖G′(x₀)⁻¹ G(x₁)‖ ≤ ∫₀¹ φ(θ‖x₁ − x₀‖) dθ ‖x₁ − x₀‖ + (1 + φ₀(‖x₀ − x₀‖)) ‖x₁ − y₀‖ + |1 − a|(1 + φ₀(‖x₀ − x₀‖)) ‖y₀ − x₀‖.
Hence,
‖y₁ − x₁‖ ≤ ‖G′(x₁)⁻¹ G′(x₀)‖ ‖G′(x₀)⁻¹ G(x₁)‖ ≤ Θ / (1 − φ₀(t₁)) = s₁ − t₁,
where Θ = ∫₀¹ φ(θ(t₁ − t₀)) dθ (t₁ − t₀) + (1 + φ₀(t₀))(t₁ − s₀) + |1 − a|(1 + φ₀(t₀))(s₀ − t₀). Moreover,
‖z₀ − x₀‖ = ‖(z₀ − y₀) + (y₀ − x₀)‖ ≤ ‖z₀ − y₀‖ + ‖y₀ − x₀‖ ≤ (u₀ − s₀) + (s₀ − t₀) = u₀ − t₀ ≤ u₀ < τ*
and
‖x₁ − x₀‖ ≤ ‖x₁ − z₀‖ + ‖z₀ − x₀‖ ≤ (t₁ − u₀) + (u₀ − t₀) = t₁ ≤ τ*.
Therefore, the estimates
‖y_m − x_m‖ ≤ s_m − t_m,        (23)
‖z_m − y_m‖ ≤ u_m − s_m,        (24)
‖x_{m+1} − z_m‖ ≤ t_{m+1} − u_m,        (25)
‖y_{m+1} − x_{m+1}‖ ≤ s_{m+1} − t_{m+1}        (26)
and
x_m, y_m, z_m, x_{m+1} ∈ U(x₀, τ*)        (27)
hold for m = 0. The estimates preceding (23) hold with the indices m, m + 1 replacing 0, 1, respectively. Thus, the induction for estimates (23)–(27) is completed.
It follows that the sequence {x_m} is fundamental in the Banach space X, so x* = lim_{m→+∞} x_m exists and x* ∈ U[x₀, τ*]. Then, consider the estimate
‖G′(x₀)⁻¹ G(x_{m+1})‖ ≤ ∫₀¹ φ(θ t_{m+1}) dθ (t_{m+1} − t_m) + (1 + φ₀(t_{m+1}))(t_{m+1} − s_m) + |1 − a|(1 + φ₀(t_m))(s_m − t_m).        (28)
Therefore, G(x*) = 0 follows by letting m → +∞ in (28).  □
Proposition 1.
Suppose:
(i) The point x* ∈ U(x₀, τ₁) for some τ₁ > 0 solves the equation G(x) = 0.
(ii) Condition (C2) holds.
(iii) There exists τ₂ ≥ τ₁ such that
∫₀¹ φ₀((1 − θ)τ₂ + θτ₁) dθ < 1.        (29)
Let Ω₁ = U[x₀, τ₂] ∩ Ω. Then, x* solves Equation (1) uniquely in Ω₁.
Proof. 
Let y* ∈ Ω₁ satisfy G(y*) = 0. Set T = ∫₀¹ G′(x* + θ(y* − x*)) dθ. By applying (29) and (C2), one obtains
‖G′(x₀)⁻¹ (T − G′(x₀))‖ ≤ ∫₀¹ φ₀((1 − θ)‖y* − x₀‖ + θ‖x* − x₀‖) dθ ≤ ∫₀¹ φ₀((1 − θ)τ₂ + θτ₁) dθ < 1.
Then, it follows that y* = x* from 0 = G(y*) − G(x*) = T(y* − x*) and the invertibility T⁻¹ ∈ L(X₁, X).  □
Remark 1.
(i) The point η/(1 − λ), which is in closed form, may replace τ* in condition (C6).
(ii) Proposition 1 does not use all the conditions of Theorem 1. However, if all the conditions are assumed, then set τ₁ = τ*.

4. Local Convergence Analysis

Some auxiliary scalar functions and parameters are first introduced based on which the local convergence analysis of method (2) shall be given. Set S = [ 0 , + ) . Let function ψ 0 : S R be continuous and nondecreasing.
Suppose:
(H1) The equation ψ₀(t) − 1 = 0 has a smallest solution r₀ ∈ S − {0}. Set S₁ = [0, r₀). Let the function ψ : S₁ → ℝ be continuous and nondecreasing. Define the function g₁ : S₁ → ℝ by
g₁(t) = [∫₀¹ ψ((1 − θ)t) dθ + |1 − a|(1 + ∫₀¹ ψ₀((1 − θ)t) dθ)] / (1 − ψ₀(t)).
(H2) The equation
g₁(t) − 1 = 0
has a smallest solution r₁ ∈ S − {0}. Let H₁ ≥ 0 be a parameter. Define the function g₂ : S₁ → ℝ by
g₂(t) = [∫₀¹ ψ((1 − θ)t) dθ + H₁(1 + ∫₀¹ ψ₀((1 − θ)t) dθ)] / (1 − ψ₀(t)).
(H3) The equation
g₂(t) − 1 = 0
has a smallest solution r₂ ∈ S − {0}. Let H₂ ≥ 0 be a parameter. Define the function g₃ : S₁ → ℝ by
g₃(t) = [∫₀¹ ψ((1 − θ)t) dθ + H₂(1 + ∫₀¹ ψ₀((1 − θ)t) dθ) g₂(t)] / (1 − ψ₀(t)).
(H4) The equation
g₃(t) − 1 = 0
has a smallest solution r₃ ∈ S − {0}. The parameter r, defined for j = 1, 2, 3 by
r = min{r_j},        (30)
is proven to be a radius of convergence for method (2) in Theorem 2. Set S₂ = [0, r). In view of these definitions, we have that for all t ∈ S₂
0 ≤ ψ₀(t) < 1        (31)
and
0 ≤ g_j(t) < 1.        (32)
Next, the relationship is given between the aforementioned functions and the operators appearing in method (2). Consider the following conditions.
Suppose:
(A1) There exists a solution x * Ω such that G ( x * ) is invertible.
(A2) ‖G′(x*)⁻¹ (G′(x) − G′(x*))‖ ≤ ψ₀(‖x − x*‖) for all x ∈ Ω. Set U₀ = U(x*, r₀) ∩ Ω.
(A3) ‖G′(x*)⁻¹ (G′(x) − G′(y))‖ ≤ ψ(‖x − y‖),
‖G′(x*)⁻¹ (I − G′(x) A(x, y))‖ ≤ h₁(x, y),
‖G′(x*)⁻¹ (I − G′(x) B(x, y, z))‖ ≤ h₂(x, y, z),
h₁(x, y) ≤ H₁
and
h₂(x, y, z) ≤ H₂
for all x, y, z ∈ U₀, where the functions h₁ : U₀ × U₀ → ℝ and h₂ : U₀ × U₀ × U₀ → ℝ are continuous.
(A4) The parameter r given by Formula (30) exists; and
(A5) U[x*, r] ⊆ Ω.
The main local convergence result follows for the method (2).
Theorem 2.
Suppose conditions (A1)–(A5) hold. Then, the sequence {x_m} produced by method (2) for x₀ ∈ U(x*, r) − {x*} exists in U(x*, r), remains in U(x*, r) for all m = 0, 1, 2, … and converges to x*. Moreover, the following items hold for all m = 0, 1, 2, …
‖y_m − x*‖ ≤ g₁(‖x_m − x*‖) ‖x_m − x*‖ ≤ ‖x_m − x*‖ < r,        (33)
‖z_m − x*‖ ≤ g₂(‖x_m − x*‖) ‖x_m − x*‖ ≤ ‖x_m − x*‖        (34)
and
‖x_{m+1} − x*‖ ≤ g₃(‖x_m − x*‖) ‖x_m − x*‖ ≤ ‖x_m − x*‖,        (35)
where the functions g_j, j = 1, 2, 3 are previously defined and the radius r is given by (30).
Proof. 
Mathematical induction is utilized to prove items (33)–(35). Let w ∈ U(x*, r) − {x*} be arbitrary. By applying conditions (A1) and (A2),
‖G′(x*)⁻¹ (G′(w) − G′(x*))‖ ≤ ψ₀(‖x* − w‖) ≤ ψ₀(r) < 1.        (36)
Then, the linear operator G′(w)⁻¹ exists and
‖G′(w)⁻¹ G′(x*)‖ ≤ 1 / (1 − ψ₀(‖x* − w‖)).        (37)
If w = x₀, then the iterate y₀ exists by method (2). It follows that
y₀ − x* = x₀ − x* − G′(x₀)⁻¹ G(x₀) + (1 − a) G′(x₀)⁻¹ G(x₀).
Then, by applying (A3) and (37) (for w = x 0 )
‖y₀ − x*‖ ≤ [∫₀¹ ψ((1 − θ)‖x₀ − x*‖) dθ + |1 − a|(1 + ∫₀¹ ψ₀((1 − θ)‖x₀ − x*‖) dθ)] ‖x₀ − x*‖ / (1 − ψ₀(‖x₀ − x*‖)) ≤ g₁(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖ < r.
Hence, the iterate y₀ ∈ U(x*, r) and (33) is true for m = 0, where we also use the estimate
G(x₀) = G(x₀) − G(x*) = (∫₀¹ [G′(x* + θ(x₀ − x*)) − G′(x*)] dθ + G′(x*)) (x₀ − x*),
so
‖G′(x*)⁻¹ G(x₀)‖ ≤ (1 + ∫₀¹ ψ₀(θ‖x₀ − x*‖) dθ) ‖x₀ − x*‖.        (39)
Similarly, by the second substep of method (2), we can write
z₀ − x* = x₀ − x* − A₀ G(x₀) = x₀ − x* − G′(x₀)⁻¹ G(x₀) + (G′(x₀)⁻¹ − A₀) G(x₀) = x₀ − x* − G′(x₀)⁻¹ G(x₀) + G′(x₀)⁻¹ (I − G′(x₀) A₀) G(x₀).
By applying (A3), and (37) (for w = x 0 ), we obtain in turn that
‖z₀ − x*‖ ≤ [∫₀¹ ψ((1 − θ)‖x₀ − x*‖) dθ + h₁(x₀, y₀)(1 + ∫₀¹ ψ₀((1 − θ)‖x₀ − x*‖) dθ)] ‖x₀ − x*‖ / (1 − ψ₀(‖x₀ − x*‖)) ≤ g₂(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖.
Thus, the iterate z₀ ∈ U(x*, r) and estimate (34) holds for m = 0. Then, again, by the third substep of method (2), we obtain:
x₁ − x* = x₀ − x* − G′(x₀)⁻¹ G(x₀) + (G′(x₀)⁻¹ − B₀) G(z₀) = x₀ − x* − G′(x₀)⁻¹ G(x₀) + G′(x₀)⁻¹ (I − G′(x₀) B₀) G(z₀).
Consequently,
‖x₁ − x*‖ ≤ [∫₀¹ ψ((1 − θ)‖x₀ − x*‖) dθ ‖x₀ − x*‖ + h₂(x₀, y₀, z₀)(1 + ∫₀¹ ψ₀(θ‖z₀ − x*‖) dθ) ‖z₀ − x*‖] / (1 − ψ₀(‖x₀ − x*‖)) ≤ g₃(‖x₀ − x*‖) ‖x₀ − x*‖ ≤ ‖x₀ − x*‖,
where we also used (39) and
G(z₀) = G(z₀) − G(x*) = ∫₀¹ G′(x* + θ(z₀ − x*)) dθ (z₀ − x*) = (∫₀¹ [G′(x* + θ(z₀ − x*)) − G′(x*)] dθ + G′(x*)) (z₀ − x*).
Hence,
‖G′(x*)⁻¹ G(z₀)‖ ≤ (1 + ∫₀¹ ψ₀(θ‖z₀ − x*‖) dθ) ‖z₀ − x*‖.        (40)
It follows from estimate (40) that the iterate x₁ is well defined and (35) holds for m = 0. Therefore, the induction for assertions (33)–(35) is completed if the iterates x_i, y_i, z_i, x_{i+1} are exchanged with the iterates x₀, y₀, z₀, x₁, respectively, in the previous calculations. Finally, from the estimate
‖x_{i+1} − x*‖ ≤ u ‖x_i − x*‖ < r,
where u = g₃(‖x₀ − x*‖) ∈ [0, 1), we obtain that lim_{i→+∞} x_i = x* and the iterate x_{i+1} ∈ U(x*, r).  □
The uniqueness of the solution result follows.
Proposition 2.
Suppose: there exists a simple solution x* ∈ U(x*, ρ) ⊆ Ω of the equation G(x) = 0 for some ρ > 0 and (A2) holds. Furthermore, suppose the equation ψ₀(t) − 1 = 0 has a smallest positive solution ρ₁. Set Ω₂ = U[x*, ρ₁] ∩ U(x*, ρ). Then, the point x* is the only solution of the equation G(x) = 0 in the region Ω₂.
Proof. 
Let y* ∈ Ω₂ with G(y*) = 0. Define the linear operator T = ∫₀¹ G′(x* + θ(y* − x*)) dθ. Then, by applying condition (A2),
‖G′(x*)⁻¹ (T − G′(x*))‖ ≤ ∫₀¹ ψ₀(θ‖y* − x*‖) dθ < 1.
Hence, T⁻¹ ∈ L(X₁, X), and from T(x* − y*) = G(x*) − G(y*) = 0 we obtain x* − y* = T⁻¹(0) = 0. Therefore, we conclude that y* = x*.  □

5. A Specialization of the Method

Set a = 1, A_m = G′(y_m)⁻¹ and B_m = G′(z_m)⁻¹ for all m = 0, 1, 2, …. Then, method (2) reduces to
y_m = x_m − G′(x_m)⁻¹ G(x_m),
z_m = y_m − G′(y_m)⁻¹ G(y_m),        (42)
x_{m+1} = z_m − G′(z_m)⁻¹ G(z_m).
This is Newton's three-step fifth-order method, also called Traub's extended three-step method. It seems to be the most interesting special case of method (2) to study as an application.
Consider the popular choices:
Semilocal case: φ₀(t) = L₀t and φ(t) = Lt. We can also set p_m = p = 0. However, for determining q_m and q, let us start with
G′(x)⁻¹ (G(x) − G(y)) = G′(x)⁻¹ ∫₀¹ G′(y + θ(x − y)) dθ (x − y),
so that
‖G′(x)⁻¹ (G(x) − G(y))‖ ≤ (1 + L‖y − x‖ / (2(1 − L₀‖x − x₀‖))) ‖y − x‖.
It follows that we can set
q_m = (1 + L(u_m − t_m) / (2(1 − L₀t_m))) (u_m − t_m).
Local case: ψ₀(t) = l₀t and ψ(t) = lt. Then, we obtain h₁ = h₂ = H₁ = H₂ = 0.
These choices are used in the examples of the numerical section.
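Under the reading above, in which each substep of (42) is a full Newton step, the method can be sketched for a scalar equation as follows; the choice G(v) = v³ − α anticipates Example 2 in the next section.

```python
# Method (42): three full Newton substeps per iteration,
#   y = x - G(x)/G'(x),  z = y - G(y)/G'(y),  x_new = z - G(z)/G'(z).
def method_42(G, dG, x0, tol=1e-13, max_iter=25):
    x = x0
    for _ in range(max_iter):
        y = x - G(x) / dG(x)
        z = y - G(y) / dG(y)
        x_new = z - G(z) / dG(z)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

alpha = 0.4
G = lambda v: v**3 - alpha       # the equation of Example 2 below
dG = lambda v: 3.0 * v**2
v = method_42(G, dG, 1.0)        # converges to v* = alpha^{1/3}
```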

6. Numerical Examples

We verify the convergence criteria using method (42). Moreover, we compare the Lipschitz constants L₀, L, K and L₁.
In particular, we use the first example to show that the ratio L₀/L₁ can be arbitrarily small.
Example 1.
Let X = X₁ = Ω = ℝ. Define the function
λ₁(x) = γ₀x + γ₁ + γ₂ sin γ₃x, x₀ = 0,
where γ_j, j = 0, 1, 2, 3 are fixed parameters. It follows that for γ₃ large and γ₂ small, the ratio L₀/L₁ can be arbitrarily small, i.e., L₀/L₁ → 0.
The parameters L₀, L, K and L₁ are computed in the next example, where L₁ is the Lipschitz parameter on Ω used by Kantorovich [6], whereas K is the parameter replacing L if, as noted in Section 3, for v ∈ Ω we choose v₁ = v − a G′(v)⁻¹ G(v) in (C4). Moreover, the convergence conditions by Kantorovich [6] are compared to those of Lemma 1.
Example 2.
Let X = X₁ = ℝ. Define the scalar function G on the interval Ω = U[v₀, 1 − α] for α ∈ (0, 1/2) by
G(v) = v³ − α.
Pick v₀ = 1. Then, the estimates are η = (1 − α)/3,
|G′(v₀)⁻¹ (G′(v) − G′(v₀))| = |v² − v₀²| = |v + v₀| |v − v₀| ≤ (|v − v₀| + 2|v₀|) |v − v₀| = (1 − α + 2) |v − v₀| = (3 − α) |v − v₀|
for all v ∈ Ω, so L₀ = 3 − α, Ω₀ = U(v₀, 1/L₀) ∩ Ω = U(v₀, 1/L₀),
|G′(v₀)⁻¹ (G′(w) − G′(v))| = |w² − v²| = |w + v| |w − v| ≤ (|w − v₀| + |v − v₀| + 2|v₀|) |w − v| ≤ (1/L₀ + 1/L₀ + 2) |w − v| = 2(1 + 1/L₀) |w − v|
for all v, w ∈ Ω₀, and so K = 2(1 + 1/L₀). Similarly,
|G′(v₀)⁻¹ (G′(w) − G′(v))| ≤ (|w − v₀| + |v − v₀| + 2|v₀|) |w − v| ≤ (1 − α + 1 − α + 2) |w − v| = 2(2 − α) |w − v|
for all v, w ∈ Ω, and L₁ = 2(2 − α).
Notice that for all α ∈ (0, 1/2)
L₀ < K < L₁.
Next, set w = v − G′(v)⁻¹ G(v), v ∈ Ω. Then, we have
w + v = v − G′(v)⁻¹ G(v) + v = (5v³ + α)/(3v²).
Define the function Ḡ on the set Ω = [α, 2 − α] by
Ḡ(v) = (5v³ + α)/(3v²).
Then, we obtain by this definition that
Ḡ′(v) = (15v⁴ − 6vα)/(9v⁴) = 5(v − s)(v² + vs + s²)/(3v³),
with s = (2α/5)^{1/3} being the critical point of the function Ḡ. Notice that α < s < 2 − α. It follows that this function is decreasing on the interval (α, s) and increasing on the interval (s, 2 − α), since v² + vs + s² > 0 and v³ > 0. Hence, we can set
K₂ = (5(2 − α)² + α)/(9(2 − α)²)
and
K₂ < L₀.
However, if v ∈ Ω₀ = [1 − 1/L₀, 1 + 1/L₀], then
L = (5ϖ³ + α)/(9ϖ²),
where ϖ = (4 − α)/(3 − α), and K₂ < K for all α ∈ (0, 1/2). Then, the Kantorovich criterion 2L₁η ≤ 1 [6] is not satisfied for all α ∈ (0, 1/2). Therefore, there is no assurance that method (2) converges to v* = α^{1/3}.
Let us test the convergence criteria of Lemma 1 by selecting α = 0.4. Then, we have the following Table 1, verifying the convergence condition t_m ≤ τ₀ of Lemma 1 for τ₀ = 1/L₀.
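For α = 0.4, the comparison of the two criteria is simple arithmetic. The sketch below evaluates the Kantorovich quantity 2L₁η and the bound τ₀ = 1/L₀ from the constants derived above (0.3009 is the limit reported in Table 1):

```python
alpha = 0.4
eta = (1.0 - alpha) / 3.0        # |G'(v0)^{-1} G(v0)| for v0 = 1
L0 = 3.0 - alpha                 # center Lipschitz constant
L1 = 2.0 * (2.0 - alpha)         # Kantorovich Lipschitz constant on Omega

kantorovich = 2.0 * L1 * eta     # classical sufficient condition needs <= 1
tau0 = 1.0 / L0                  # Lemma 1 requires t_m <= tau0

# 2*L1*eta = 1.28 > 1: Kantorovich gives no convergence guarantee here,
# while the Table 1 iterates stay below tau0 = 0.3846..., with limit 0.3009.
```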
Example 3.
Let Ω = U[0, 1] for X = X₁ = C[0, 1]. Then, the boundary value problem [4]
μ″ = −μ³ − γμ²,
μ(0) = 0, μ(1) = 1,
is transformed into the integral equation
μ(s) = s + ∫₀¹ Q(s, s₁)(μ³(s₁) + γμ²(s₁)) ds₁,
where γ is a constant and Q(s, s₁) is Green's function given by
Q(s, s₁) = s₁(1 − s) if s₁ ≤ s, and s(1 − s₁) if s < s₁.
Consider G : Ω → X₁ defined as
[G(x)](s) = x(s) − s − ∫₀¹ Q(s, s₁)(x³(s₁) + γx²(s₁)) ds₁.
Let us pick μ₀(s) = s and Ω = U(μ₀, κ₀). Then, clearly U(μ₀, κ₀) ⊂ U(0, κ₀ + 1), since ‖μ₀‖ = 1. If 2γ < 5, then conditions (H1)–(H4) are satisfied for
L₀ = (2γ + 3κ₀ + 6)/8, L = (γ + 6κ₀ + 3)/4.
Hence, L₀ < L.
The next two examples concern the local convergence of method (2) and the radii r_j, r computed using Formula (30) and the functions g_j.
Example 4.
Consider X = X₁ = C[0, 1] and Ω = U[0, 1]. Consider G : Ω → X₁ given as
G(f)(x) = f(x) − 5∫₀¹ xτ f(τ)³ dτ.
This definition gives
G′(f)(ξ)(x) = ξ(x) − 15∫₀¹ xτ f(τ)² ξ(τ) dτ, for all ξ ∈ Ω.
The max-norm is used. Then, since x* = 0, conditions (A1)–(A5) hold, provided that ℓ₀ = 7.5 and ℓ = 15. Then, the radii are:
r = r₁ = 0.0533, r₂ = 0.1499, and r₃ = 0.1660.
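Generically, each radius r_j is the smallest positive zero of g_j(t) − 1, computable by bisection. The sketch below uses the simplest reading g₁(t) = (ℓt/2)/(1 − ℓ₀t) for a = 1 as an assumed stand-in; the radii printed above presumably come from the paper's exact calibration of the g_j, so the value computed here need not match them.

```python
# Smallest positive zero of g(t) - 1 on [0, 1/l0) by bisection; g is
# increasing there, so the zero is bracketed once g(hi) > 1.
def radius(g, hi, tol=1e-12):
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return lo

l0, l = 7.5, 15.0
g1 = lambda t: (l * t / 2.0) / (1.0 - l0 * t)   # assumed simplest form of g_1 (a = 1)
r1 = radius(g1, hi=1.0 / l0 - 1e-9)
# for this linear data the zero has the closed form 1 / (l0 + l/2)
```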
Example 5.
Consider the system of differential equations
G₁′(μ₁) = e^{μ₁}, G₂′(μ₂) = (e − 1)μ₂ + 1, G₃′(μ₃) = 1
with G₁(0) = G₂(0) = G₃(0) = 0. Let G = (G₁, G₂, G₃)^T. Let X = X₁ = ℝ³, Ω = U[0, 1]. Then, x* = (0, 0, 0)^T. The function G on Ω for μ = (μ₁, μ₂, μ₃)^T is given as
G(μ) = (e^{μ₁} − 1, ((e − 1)/2)μ₂² + μ₂, μ₃)^T.
Then, the Fréchet derivative is given by
G′(μ) = diag(e^{μ₁}, (e − 1)μ₂ + 1, 1).
This definition implies that G′(x*) = I. Let μ ∈ ℝ³ with μ = (μ₁, μ₂, μ₃)^T. Moreover, the norm for Δ ∈ ℝ^{3×3} is
‖Δ‖ = max_{1≤k≤3} Σ_{i=1}^{3} |δ_{k,i}|.
We need to verify conditions (A1)–(A5). To achieve this, we study G(c) = e^c − 1 on Ω = [−1, 1], so that c* = 0, hence G′(c*) = 1, and
|G′(c) − G′(c*)| = |e^c − 1| = |c + c²/2! + ⋯ + c^j/j! + ⋯| ≤ (1 + 1/2! + ⋯ + 1/j! + ⋯)|c − 0| = (e − 1)|c − 0|.
It follows that ℓ₁ = e − 1. Then, Ω₁ = U(x*, 1/(e − 1)) ∩ Ω = U(x*, 1/(e − 1)). This time, we obtain
|G′(c) − G′(c*)| ≤ ℓ₀|c − 0|,
where
ℓ₀ = 1 + 1/((e − 1)2!) + ⋯ + 1/((e − 1)^{j−1} j!) + ⋯ ≈ 1.43 < e − 1.
Then, we obtain for all c ∈ Ω₁
|s| = |c − G′(c)⁻¹G(c)| = |c − 1 + e^{−c}| = |(−c)²/2! + ⋯ + (−c)^j/j! + ⋯| ≤ |c|(|c|/2! + ⋯ + |c|^{j−1}/j! + ⋯) ≤ ℓ₀/(e − 1).
Moreover,
|G′(s) − G′(c*)| = |e^s − 1| ≤ |s|(1 + |s|/2! + ⋯ + |s|^{j−1}/j! + ⋯) ≤ ℓ₂|c − 0|,
where ℓ₂ ≈ 0.49 < 1. We can set ℓ₃ = ℓ₂.
Therefore, the computed radii are r = r₁ = 0.2409, r₂ = 0.3101, and r₃ = 0.3588.
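Since the Fréchet derivative in Example 5 is diagonal, method (42) acts componentwise; the following sketch (our own illustration) runs three Newton substeps per sweep and verifies convergence to x* = (0, 0, 0)^T from a starting point inside U(x*, r):

```python
import math

E = math.e

# G(mu) = (e^{mu1} - 1, ((e-1)/2) mu2^2 + mu2, mu3)^T has the diagonal
# derivative G'(mu) = diag(e^{mu1}, (e-1) mu2 + 1, 1).
def G(mu):
    return [math.exp(mu[0]) - 1.0,
            (E - 1.0) / 2.0 * mu[1] ** 2 + mu[1],
            mu[2]]

def dG_diag(mu):
    return [math.exp(mu[0]), (E - 1.0) * mu[1] + 1.0, 1.0]

def newton_substep(mu):
    g, d = G(mu), dG_diag(mu)
    return [m - gi / di for m, gi, di in zip(mu, g, d)]

mu = [0.2, 0.2, 0.2]                  # starting point inside U(x*, r)
for _ in range(8):                    # eight sweeps of the three-substep method (42)
    mu = newton_substep(newton_substep(newton_substep(mu)))
err = max(abs(m) for m in mu)         # distance to x* = (0, 0, 0) in the max-norm
```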
  • Discussion: It is important to mention some further applications. Notice that the branching of the solutions cannot be handled using the iterative method (2), since the existence of G′(x_m)⁻¹ is required in the first step. It is worth noticing that the present results also apply to the notable references by Singh et al. [10] and Vijayakumar et al. [20,21] involving the solution of differential equations, provided that the Banach space X₁ = X is specialized to be the space of all Bochner integrable functions and the involved operator is defined as a Riemann–Liouville integral, a Riemann–Liouville fractional derivative, or a Caputo fractional derivative of a certain order in the interval (1, 2] [10].
In the references [20,21], the evolution differential inclusions are considered in a Banach space. In particular, the control function belongs to L²(I, X), the Banach space of admissible control functions, with X₁ = X.

7. Conclusions

Sufficient conditions are given that unify the convergence of generalized three-step methods. Their specializations provide a finer convergence analysis, since smaller Lipschitz parameters and tighter real majorizing sequences are used than in [3,4,6,11,12,17,18].
More areas of application can be found in [3,4,6,9,19] and the references therein. These ideas can be immediately extended to include multistep as well as multipoint iterative methods along the same lines. This is the topic of future work.

Author Contributions

Conceptualization, S.R., I.K.A., S.G. and C.I.A.; methodology, S.R., I.K.A., S.G. and C.I.A.; software, S.R., I.K.A., S.G. and C.I.A.; validation, S.R., I.K.A., S.G. and C.I.A.; formal analysis, S.R., I.K.A., S.G. and C.I.A.; investigation, S.R., I.K.A., S.G. and C.I.A.; resources, S.R., I.K.A., S.G. and C.I.A.; data curation, S.R., I.K.A., S.G. and C.I.A.; writing—original draft preparation, S.R., I.K.A., S.G. and C.I.A.; writing—review and editing, S.R., I.K.A., S.G. and C.I.A.; visualization, S.R., I.K.A., S.G. and C.I.A.; supervision, S.R., I.K.A., S.G. and C.I.A.; project administration, S.R., I.K.A., S.G. and C.I.A.; funding acquisition, S.R., I.K.A., S.G. and C.I.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Argyros, I.K.; Hilout, S. Weaker conditions for the convergence of Newton’s method. J. Complex. 2012, 28, 364–387. [Google Scholar] [CrossRef] [Green Version]
  2. Argyros, I.K. Unified Convergence Criteria for Iterative Banach Space Valued Methods with Applications. Mathematics 2021, 9, 1942. [Google Scholar] [CrossRef]
  3. Argyros, I.K.; Magréñan, A.A. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
  4. Argyros, I.K. The Theory and Applications of Iteration Methods, 2nd ed.; Engineering Series; CRC Press, Taylor and Francis Group: Boca Raton, FL, USA, 2022. [Google Scholar]
  5. Ezquerro, J.A.; Hernandez, M.A. Newton’s Method: An Updated Approach of Kantorovich’s Theory; Springer: Cham, Switzerland, 2018. [Google Scholar]
  6. Kantorovich, L.V.; Akilov, G.P. Functional Analysis; Pergamon Press: Oxford, UK, 1982. [Google Scholar]
  7. Nashed, M.Z.; Chen, X. Convergence of Newton-like methods for singular operator equations using outer inverses. Numer. Math. 1993, 66, 235–257. [Google Scholar] [CrossRef]
  8. Proinov, P.D. New general convergence theory for iterative processes and its applications to Newton–Kantorovich type theorems. J. Complex. 2010, 26, 3–42. [Google Scholar] [CrossRef] [Green Version]
  9. Shakhno, S.M.; Gnatyshyn, O.P. On an iterative method of order 1.839… for solving nonlinear least squares problems. Appl. Math. Comput. 2005, 161, 253–264. [Google Scholar]
  10. Singh, A.; Shukla, A.; Vijayakumar, V.; Udhayakumar, R. Asymptotic stability of fractional order (1, 2] stochastic delay differential equations in Banach spaces. Chaos Solitons Fractals 2021, 150, 111095. [Google Scholar] [CrossRef]
  11. Homeier, H.H.H. A modified Newton method with cubic convergence: The multivariate case. J. Comput. Appl. Math. 2004, 169, 161–169. [Google Scholar] [CrossRef] [Green Version]
  12. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math. Comput. 2007, 190, 686–698. [Google Scholar] [CrossRef]
  13. Noor, M.A.; Waseem, M. Some iterative methods for solving a system of nonlinear equations. Comput. Math. Appl. 2009, 57, 101–106. [Google Scholar] [CrossRef] [Green Version]
  14. Verma, R. New Trends in Fractional Programming; Nova Science Publisher: New York, NY, USA, 2019. [Google Scholar]
  15. Sharma, J.R.; Arora, H. Efficient derivative-free numerical methods for solving systems of nonlinear equations. Comp. Appl. Math. 2016, 35, 269–284. [Google Scholar] [CrossRef]
  16. Xiao, X.; Yin, H. Achieving higher order of convergence for solving systems of nonlinear equations. Appl. Math. Comput. 2017, 311, 251–261. [Google Scholar] [CrossRef]
  17. Grau-Sanchez, M.; Grau, A.; Noguera, M. Ostrowski type methods for solving system of nonlinear equations. Appl. Math. Comput. 2011, 218, 2377–2385. [Google Scholar] [CrossRef]
  18. Kou, J.; Wang, X.; Li, Y. Some eight order root finding three-step methods. Commun. Nonlinear Sci. Numer. Simul. 2010, 15, 536–544. [Google Scholar] [CrossRef]
  19. Traub, J.F. Iterative Methods for the Solution of Equations; Prentice Hall: Hoboken, NJ, USA, 1964. [Google Scholar]
  20. Vijayakumar, V.; Murugesu, R. Controllability for a class of second-order evolution differential inclusions without compactness. Appl. Anal. 2019, 98, 1367–1385. [Google Scholar] [CrossRef]
  21. Vijayakumar, V.; Nisar, K.S.; Chalishajar, D.; Shukla, A.; Malik, M.; Alsaadi, A.; Aldosary, S.F. A Note on Approximate Controllability of Fractional Semilinear Integrodifferential Control Systems via Resolvent Operators. Fractal Fract. 2022, 6, 73. [Google Scholar] [CrossRef]
Table 1. Real sequence (42).

n             1        2        3        4        5        6
u_n           0.2330   0.2945   0.3008   0.3009   0.3009   0.3009
s_n           0.2000   0.2896   0.3008   0.3009   0.3009   0.3009
t_{n+1}       0.2341   0.2946   0.3008   0.3009   0.3009   0.3009
L₀ s_n        0.5200   0.7530   0.7820   0.7824   0.7824   0.7824
L₀ u_n        0.6058   0.7658   0.7822   0.7824   0.7824   0.7824
L₀ t_{n+1}    0.6087   0.7659   0.7822   0.7824   0.7824   0.7824

