Article

On the Solution of Generalized Banach Space Valued Equations

by Ramandeep Behl 1,† and Ioannis K. Argyros 2,*,†
1 Department of Mathematics, Faculty of Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia
2 Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2022, 10(1), 132; https://doi.org/10.3390/math10010132
Submission received: 9 December 2021 / Revised: 29 December 2021 / Accepted: 30 December 2021 / Published: 2 January 2022

Abstract:
We develop a class of Steffensen-like schemes for approximating solutions of Banach space valued equations. The sequences generated by these schemes converge to the solution under hypotheses that are weaker than in earlier studies, thus extending the region of applicability of these schemes without additional hypotheses. Benefits include more choices for initial points, the computation of fewer iterates to reach a certain accuracy in the error distances, and more precise knowledge of the solution. Owing to its generality, the technique is applicable to other schemes as well.

1. Introduction

A plethora of problems from optimal control, variational inequalities, mathematical programming and other disciplines can be converted to finding a solution $x^*$ of the generalized equation
$$0 \in F(x) + Q(x), \qquad (1)$$
where $F : B_1 \to B_2$ is a continuous operator, $Q$ is a set-valued operator mapping $B_1$ into subsets of $B_2$ with closed graph, and $B_1, B_2$ are Banach spaces.
The Steffensen-like scheme (SLS)
$$0 \in F(x_n) + F[q_1(x_n), q_2(x_n)](x_{n+1} - x_n) + Q(x_{n+1}), \qquad (2)$$
has been developed (see [1]) to produce a sequence approximating $x^*$, where $F[\cdot,\cdot]$ is a divided difference of order one.
Notice that if $Q \equiv 0$, then SLS reduces to the classical Steffensen scheme (SS)
$$x_{n+1} = x_n - F[q_1(x_n), q_2(x_n)]^{-1} F(x_n),$$
and if $F$ is Fréchet differentiable with $q_j(x) = x$, then SLS becomes Newton's scheme (NS) for solving Equation (1), which is given by
$$0 \in F(x_n) + F'(x_n)(x_{n+1} - x_n) + Q(x_{n+1}).$$
Special choices of the mappings $q_j$ have been studied, such as:
$$q_1(x_n) = x_n, \quad q_2(x_n) = x_n - F'(x_n)^{-1} F(x_n) \quad (\text{Newton's scheme});$$
$$q_1(x_n) = x_n, \quad q_2(x_{n-1}) = x_n - F[x_{n-1}, x_{n-2}]^{-1} F(x_{n-1}) \quad (\text{secant scheme});$$
$$q_1(x_n) = x_n + a_n(x_{n-1} - x_n) \quad \text{and} \quad q_2(x_n) = x_n;$$
$$q_1(x) = x \quad \text{and} \quad q_2(x) = x - F(x).$$
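For orientation, here is a minimal numerical sketch (written for this discussion, not taken from the paper) of the classical Steffensen scheme (SS) in the scalar case $Q \equiv 0$, using the last choice above, $q_1(x) = x$ and $q_2(x) = x - F(x)$; the test equation and starting point are illustrative assumptions.

```python
def F(x):
    # Illustrative test equation F(x) = 0 with solution x* = 2 (not from the paper).
    return x**2 - 4.0

def divided_difference(f, u, v):
    # First-order divided difference F[u, v] = (F(u) - F(v)) / (u - v).
    return (f(u) - f(v)) / (u - v)

def steffensen(f, x, tol=1e-12, max_iter=20):
    # Classical Steffensen scheme: x_{n+1} = x_n - F[q1(x_n), q2(x_n)]^{-1} F(x_n)
    # with q1(x) = x and q2(x) = x - F(x).
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        m = divided_difference(f, x, x - fx)   # F[q1(x), q2(x)]
        x = x - fx / m
        print(f"n = {n + 1}, x_n = {x:.15f}")
    return x

steffensen(F, 2.5)
```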
Dontchev [2] showed convergence of NS under the Aubin continuity of $(F + Q)^{-1}$ together with a Lipschitz continuity condition. Argyros [3] showed convergence of NS when $F$ is $i$ ($i \ge 2$) times Fréchet differentiable.
Convergence of NS under Hölder-like conditions can be found in [4]. For
$$q_1(x_n) = \alpha x_n + (1 - \alpha) x_{n-1} \quad \text{and} \quad q_2(x_n) = x_n, \quad \alpha \in [0, 1),$$
scheme (2) was studied in [5]. Later, a local convergence result was presented in [1], which generalized earlier results [6,7,8].
Based on the aforementioned, there is a need to find unifying convergence criteria under weaker conditions. Moreover, we are concerned with optimization considerations. These ideas and concerns constitute our motivation for this paper. In particular, we extend the region of applicability of SLS by allowing more initial points $x_0$, by requiring fewer iterates to achieve a certain accuracy, and by giving better information on the location of $x^*$. These benefits are obtained without additional hypotheses, by developing the center-Lipschitz idea. The technique may be utilized to extend the applicability of other schemes with the same benefits. Another novelty of our approach is that all the benefits are obtained without hypotheses additional to those of the previous works.
In order to create a self-contained study, we present background and auxiliary definitions and results in Section 2. The local convergence analysis is given for various cases in Section 3, Section 4 and Section 5, with numerical examples in Section 6. A concluding Section 7 completes this study.

2. Background and Auxiliary Results

We present only the material necessary for understanding our results. A more detailed insight into the topics developed can be found in [2,9,10,11]. The distance from a point $x$ to a region $S$ in a Banach space $(B_3, \|\cdot\|)$ is given by
$$\mathrm{DIST}(x, S) := \inf_{y \in S} \|x - y\|.$$
If $A \subseteq B_3$, then the excess $E$ is defined as
$$E(A, S) := \sup_{x \in A} \mathrm{DIST}(x, S).$$
If $T : B_1 \rightrightarrows B_2$ stands for a set-valued operator, then
$$\mathrm{GPH}\, T = \{(u, v) \in B_1 \times B_2 : v \in T(u)\}$$
is the graph of $T$ and $T^{-1}(v) = \{u \in B_1 : v \in T(u)\}$ is the inverse operator of $T$. Let $L(B_1, B_2)$ stand for the space of continuous linear operators mapping $B_1$ into $B_2$. The notation $U[x, r]$ is used to denote the closed ball of center $x \in B_1$ and radius $r > 0$.
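To make the two notions concrete, here is a small illustrative sketch (not from the paper) that evaluates $\mathrm{DIST}$ and the excess $E$ for finite point sets in the plane, where the infimum and supremum reduce to minima and maxima over the listed points.

```python
import math

def dist(x, S):
    # DIST(x, S) = inf_{y in S} ||x - y||, here over a finite set S in R^2.
    return min(math.dist(x, y) for y in S)

def excess(A, S):
    # E(A, S) = sup_{x in A} DIST(x, S), here over a finite set A.
    return max(dist(x, S) for x in A)

A = [(0.0, 0.0), (1.0, 0.0)]
S = [(0.0, 1.0), (2.0, 0.0)]
print(dist((0.0, 0.0), S))   # distance from a point to S
print(excess(A, S))          # excess of A over S
```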
Definition 1
([2]). We say that $T$ is pseudo-Lipschitz about $(x_0, y_0) \in \mathrm{GPH}\, T$ with modulus $\mu$ if there exist parameters $\alpha > 0$ and $\beta > 0$ such that for each $u, v \in U[x_0, \beta]$
$$E\big(T(u) \cap U[y_0, \alpha], T(v)\big) \le \mu \|u - v\|.$$
Remark 1.
(a) 
Let $\{x_n\} \subset B_1$. Then, $\lim_{n \to \infty} x_n = x^*$ (linearly) if, for $e_n = \|x_n - x^*\|$, there exists $c \in [0, 1)$ such that $e_{n+1} \le c\, e_n$ for all $n$.
(b) 
A divided difference $F[\cdot, \cdot] : B_1 \times B_1 \to L(B_1, B_2)$ satisfies
$$F[u, v](u - v) = F(u) - F(v) \quad \text{for each } u, v \in B_1 \text{ with } u \neq v.$$
If $F$ is differentiable, we get $F[x, x] = F'(x)$ [8].
Definition 2.
This definition does not uniquely specify the divided difference. A popular choice is [4,11,12]
$$F[u, v] = \int_0^1 F'\big(u + \tau (v - u)\big)\, d\tau.$$
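As a quick sanity check (an illustrative sketch, not from the paper), the integral choice of Definition 2 can be verified numerically in the scalar case to satisfy $F[u, v](u - v) = F(u) - F(v)$; the test function is an assumption and the quadrature below is a simple midpoint rule.

```python
def F(x):
    return x**3 - 2.0 * x        # illustrative test function (assumption)

def dF(x):
    return 3.0 * x**2 - 2.0      # its derivative F'(x)

def integral_divided_difference(df, u, v, m=1000):
    # F[u, v] = int_0^1 F'(u + tau (v - u)) d tau, midpoint rule with m nodes.
    h = 1.0 / m
    return sum(df(u + (k + 0.5) * h * (v - u)) for k in range(m)) * h

u, v = 0.7, 1.9
lhs = integral_divided_difference(dF, u, v) * (u - v)
rhs = F(u) - F(v)
print(lhs, rhs)   # the two values agree up to quadrature error
```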
We need the auxiliary result on fixed points.
Proposition 1
([1,2]). Consider $f$ to be a set-valued operator. Suppose that the following hypotheses hold for $\sigma_0 \in B_1$, $\rho \ge 0$ and $p \in [0, 1)$:
(i)
$$\mathrm{DIST}\big(\sigma_0, f(\sigma_0)\big) \le (1 - p)\rho.$$
(ii)
$$E\big(f(u) \cap U[\sigma_0, \rho], f(v)\big) \le p \|u - v\| \quad \text{for each } u, v \in U[\sigma_0, \rho].$$
Then, $f$ has a fixed point $x$ in $U[\sigma_0, \rho]$.
Moreover, if $f$ is a single-valued function, then $x$ is the unique fixed point of $f$ in $U[\sigma_0, \rho]$.
The following hypotheses H shall be used. Suppose:
(H1)
Operators $q_j : W \to W$ are $a_j$-center-Lipschitz and
$$q_j(x^*) = x^*, \quad \text{where } a_j \in [0, 1).$$
(H2)
$\|F[x^*, x] - F[q_1(x), q_2(x)]\| \le g\big(\|x^* - q_1(x)\|, \|x^* - q_2(x)\|\big)$ for all $x \in W$ and some function $g : [0, \infty) \times [0, \infty) \to [0, \infty)$, which is continuous and non-decreasing.
(H3)
The operator $Q^{-1}$ is pseudo-Lipschitz about $\big({-F(x^*)}, x^*\big)$.
(H4)
$\|F[u, v]\| \le \delta$ and
$$\mu\big[\delta + g\big(a_1 \alpha, (1 + a_2)\alpha\big)\big] < 1,$$
for all $u, v \in W$.
Remark 2.
In [1], the following hypotheses $(H)'$ were used.
(H1)′
Operators $q_j : W \to W$ are $\bar{a}_j$-center-Lipschitz and
$$q_j(x^*) = x^*, \quad \text{where } \bar{a}_j \in [0, 1).$$
(H2)′
$\|F[x, y] - F[u, v]\| \le \bar{g}\big(\|x - u\|, \|x - v\|\big)$ for all $x, y, u, v \in W$ and some function $\bar{g} : [0, \infty) \times [0, \infty) \to [0, \infty)$, which is continuous and non-decreasing.
(H3)′
= ( H 3 ).
(H4)′
$\|F[u, v]\| \le \delta$ and
$$\mu\big[\delta + \bar{g}\big(\bar{a}_1 \alpha, (1 + \bar{a}_2)\alpha\big)\big] < 1,$$
for all $u, v \in W$.
Then, clearly
$$a_j \le \bar{a}_j, \qquad (8)$$
$$g \le \bar{g}, \qquad (9)$$
$$(H2)' \Rightarrow (H2), \qquad (10)$$
and
$$(H4)' \Rightarrow (H4). \qquad (11)$$
In view of items (8)–(11), the results that follow improve the corresponding ones in [1]. Examples where items (8)–(11) are strict can be found in [13].
Next, we show that the weaker, and actually needed, hypotheses $(H)$ can replace $(H)'$.

3. Convergence for SLS

We shall show the main local convergence result for SLS under hypotheses ( H ) .
Theorem 1.
Under hypotheses $(H)$, choose $c \in (c_0, 1)$, where
$$c_0 = \frac{\mu\, g\big(a_1 \alpha, (1 + a_2)\alpha\big)}{1 - \mu \delta}.$$
Then, there exists $\epsilon > 0$ such that, for any $x_0 \in U[x^*, \epsilon] \setminus \{x^*\}$, the sequence $\{x_n\}$ generated by SLS converges to $x^*$ so that, for $e_n = \|x_n - x^*\|$:
$$e_{n+1} \le c\, e_n. \qquad (12)$$
Some notations and discussion follow to help with the proof of Theorem 1.
Define the set-valued operators $G : B_1 \rightrightarrows B_2$ and $\psi_n : B_1 \rightrightarrows B_1$ as
$$G(\cdot) := F(x^*) + Q(\cdot), \qquad \psi_n(\cdot) := G^{-1}\big(P_n(\cdot)\big),$$
where $P_n : B_1 \to B_2$ is given by
$$P_n(x) := F(x^*) - F(x_n) - F\big[q_1(x_n), q_2(x_n)\big](x - x_n).$$
Notice that the iterate $x_1$ is such that $\psi_0(x_1) = x_1$ if and only if
$$0 \in F(x_0) + F\big[q_1(x_0), q_2(x_0)\big](x_1 - x_0) + Q(x_1).$$
Lemma 1.
Suppose hypotheses $(H)$ hold. Then, there exists $\epsilon > 0$ such that $\psi_0$ has a fixed point $x_1$ in $U[x^*, \epsilon]$ so that
$$e_1 \le c\, e_0. \qquad (13)$$
Proof. 
Choose $\epsilon < c_2$, where
$$c_2 = \min\{\alpha, c_1\},$$
and
$$c_1 = \frac{\beta}{\delta + 2 g\big(a_1 \alpha, (1 + a_2)\alpha\big)}.$$
Using $(H3)$, we get
$$E\big(G^{-1}(u) \cap U[x^*, \alpha], G^{-1}(v)\big) \le \mu \|u - v\|,$$
for each $u, v \in U[0, \beta]$.
By the definition of $P_n$, $(H1)$, $(H2)$ and $(H4)$, we can write in turn, for $x \in U[x^*, \epsilon]$:
$$\begin{aligned}
P_0(x) &= F(x^*) - F(x_0) - F[q_1(x_0), q_2(x_0)](x - x_0)\\
&= F[x^*, x_0](x^* - x) + \big(F[x^*, x_0] - F[q_1(x_0), q_2(x_0)]\big)(x - x_0),
\end{aligned}$$
so that
$$\|P_0(x)\| \le \|F[x^*, x_0]\|\, \|x^* - x\| + \big\|F[x^*, x_0] - F[q_1(x_0), q_2(x_0)]\big\|\, \|x - x_0\| \le \delta \|x^* - x\| + g\big(\|x^* - q_1(x_0)\|, \|x^* - q_2(x_0)\|\big)\, \|x - x_0\|.$$
By (7), (14), (15) and Proposition 1 with $\sigma_0 = x^*$, $p = \mu\delta$ and $\rho = \rho_0 = c\, e_0$, we have in turn that
$$\mathrm{DIST}\big(x^*, \psi_0(x^*)\big) \le E\big(G^{-1}(0) \cap U[x^*, \epsilon], \psi_0(x^*)\big) \le \mu\, g\big(a_1 \alpha, (1 + a_2)\alpha\big)\, e_0 \le c(1 - p)\, e_0$$
(by the choice of $c$ and (16)).
Hence, hypothesis (i) of Proposition 1 is true. Next, by the choice of $\epsilon$ and (15), we conclude that for each $x \in U[x^*, \epsilon]$
$$P_0(x) \in U[0, \beta].$$
Hence, for each $u, v \in U[x^*, \rho_0]$, we get in turn that
$$E\big(\psi_0(u) \cap U[x^*, \rho_0], \psi_0(v)\big) \le E\big(\psi_0(u) \cap U[x^*, \epsilon], \psi_0(v)\big) \le \mu \|P_0(u) - P_0(v)\| \le \mu \big\|F[q_1(x_0), q_2(x_0)]\big\|\, \|u - v\| \le p \|u - v\|,$$
showing (ii) of Proposition 1. Hence, we conclude that $\psi_0(x_1) = x_1 \in U[x^*, \rho_0]$, so that (13) holds.  □
Next, we present the proof of Theorem 1.
Proof of Theorem 1.
Set $\rho := \rho_n = c\, e_n$. Then, mathematical induction together with the proof and application of Lemma 1 implies $\psi_n(x_{n+1}) = x_{n+1} \in U[x^*, \rho_n]$, so (12) holds.  □
Remark 3.
Our results are connected to the ones in [1,4,5,7,8,10,11,12], so they can extend them as follows. Consider the hypothesis (see $(H2)$)
$$\|F[x^*, x] - F[q_1(x), q_2(x)]\| \le \mu\big(\|x^* - q_1(x)\|^{\delta_0} + \|x^* - q_2(x)\|^{\delta_0}\big),$$
for $\mu > 0$, $\delta_0 \in [0, 1]$ and each $x \in W$. Then, we can choose
$$g(s_1, s_2) = \mu\big(s_1^{\delta_0} + s_2^{\delta_0}\big).$$
In the case of $(H2)'$, we first introduce the hypothesis
$$\|F[x, y] - F[u, v]\| \le \bar{\mu}\big(\|x - u\|^{\delta_0} + \|x - v\|^{\delta_0}\big).$$
Then, we can choose
$$\bar{g}(s_1, s_2) = \bar{\mu}\big(s_1^{\delta_0} + s_2^{\delta_0}\big).$$
Clearly, in this case we have
$$\mu \le \bar{\mu},$$
so the same benefits are obtained in this special case too. In particular, with the function $g$ as previously defined, we have:
Proposition 2.
Suppose hypotheses $(H)$ hold. Then, for each
$$c > \frac{\mu \kappa \big(a_1^{\delta_0} + (1 + a_2)^{\delta_0}\big)}{1 - \mu \delta},$$
there exists $\epsilon > 0$ such that the sequence $\{x_n\}$ generated by SLS converges to $x^*$ and
$$e_{n+1} \le c\, e_n^{\delta_0 + 1}.$$
Proof. 
Simply follow the steps of the proof of Theorem 1, but choose the convergence radius $\epsilon < \epsilon_1$, where
$$\epsilon_1 = \min\Big\{\alpha, \Big(\tfrac{1}{c}\Big)^{1/\delta_0}, \tfrac{\beta}{2\delta}, c_1\Big\},$$
and
$$c_1 = \Big(\frac{\beta}{4 \kappa \big(a_1^{\delta_0} + (1 + a_2)^{\delta_0}\big)}\Big)^{1/(\delta_0 + 1)}.$$
Notice that if $\delta_0 = 1$, quadratic convergence is attained.

4. Inexact Scheme

A combination of a derivative and a divided difference has been used in a scheme to solve equations containing a non-differentiable term [4]. We extend the results of Section 3 in an analogous way to solve the generalized equation
$$0 \in F(x) + H(x) + Q(x),$$
where $H$ is differentiable at $x = x^*$ but not necessarily so on $W$. The inexact Steffensen-like scheme (ISLS) corresponding to (20) is defined by
$$0 \in F(x_n) + H(x_n) + \big(F'(x_n) + H[q_1(x_n), q_2(x_n)]\big)(x_{n+1} - x_n) + Q(x_{n+1}).$$
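As a concrete illustration (a hedged sketch under stated assumptions, not the paper's experiment), when $Q \equiv \{0\}$ and the spaces are scalar the scheme above reduces to the iteration $x_{n+1} = x_n - \big(F'(x_n) + H[q_1(x_n), q_2(x_n)]\big)^{-1}\big(F(x_n) + H(x_n)\big)$; the particular $F$, $H$, $q_j$ and starting point below are illustrative choices only.

```python
def F(x):
    return x**2 - 3.0            # smooth part (illustrative assumption)

def dF(x):
    return 2.0 * x               # F'(x)

def H(x):
    return abs(x) / 10.0         # non-differentiable at 0, differentiable at the solution

def dd(h, u, v):
    # Divided difference H[u, v] = (H(u) - H(v)) / (u - v).
    return (h(u) - h(v)) / (u - v)

def isls_scalar(x, steps=8):
    # Scalar version of the inexact scheme with Q = {0},
    # using q1(x) = x and q2(x) = x + F(x) + H(x) (Steffensen-like choice).
    for n in range(steps):
        r = F(x) + H(x)
        if abs(r) < 1e-14:
            break
        q1, q2 = x, x + r
        x = x - r / (dF(x) + dd(H, q1, q2))
        print(f"n = {n + 1}, x_n = {x:.15f}")
    return x

isls_scalar(2.0)
```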
As in Section 3, we use hypotheses ( H ) 1 . Suppose:
(H1)1
= ( H 1 ) .
(H2)1
= $(H2)$ (with $F$ replaced by $H$).
(H3)1
The operator $\big(F'(x^*)(\cdot - x^*) + H(\cdot) + Q(\cdot)\big)^{-1}$ is pseudo-Lipschitz about $\big({-F(x^*)}, x^*\big)$ (see Definition 1).
(H4)1
$\|F'(x) - F'(x^*)\| \le d_0(\|x - x^*\|)$ for each $x \in W$ and some function $d_0 : [0, \infty) \to [0, \infty)$, which is continuous and non-decreasing.
Remark 4.
(a) The following hypotheses $(H)_2$ were used in [1]:
(H1)2
= ( H 1 ) .
(H2)2
= $(H2)$ (with $F$ replaced by $H$).
(H3)2
= ( H 3 ) .
(H4)2
$\|F'(y) - F'(x)\| \le d(\|y - x\|)$ for each $x, y \in W$ and some function $d : [0, \infty) \to [0, \infty)$, which is continuous and non-decreasing.
Notice that
$$d_0 \le d,$$
so the hypotheses $(H)_1$ can be used instead of $(H)_2$ in the proofs in [1] to extend those results too. In particular, we have:
Theorem 2.
Suppose hypotheses $(H)_1$ hold and
$$R = \mu\big[d_0(\alpha) + g\big(a_1 \alpha, (1 + a_2)\alpha\big)\big] < 1.$$
Then, for each $c_1 \in (R, 1)$, there exists $\epsilon_2 > 0$ such that the sequence $\{x_n\}$ generated by ISLS stays in $U[x^*, \epsilon_2]$, $\lim_{n \to \infty} x_n = x^*$ and $e_{n+1} \le c_1\, e_n$.
Proof. 
Simply follow the steps of the proof of Theorem 1, but define
$$G^1(x) := F(x^*) + H(x) + F'(x^*)(x - x^*) + Q(x), \qquad \psi_n^1(x) := (G^1)^{-1}\big(P_n^1(x)\big),$$
where
$$P_n^1(x) := F(x^*) + H(x) + F'(x^*)(x - x^*) - F(x_n) - H(x_n) - \big(F'(x_n) + H[q_1(x_n), q_2(x_n)]\big)(x - x_n). \quad \square$$

5. Convergence of Higher Order Schemes

We shall demonstrate how to extend the ideas of the previous section in the case of some special schemes of high convergence order. The same idea can be used on other schemes.
In particular, we revisit the fourth convergence order scheme defined by
$$\begin{aligned}
z_n &= x_n + F(x_n),\\
y_n &= x_n - F[z_n, x_n]^{-1} F(x_n),\\
x_{n+1} &= y_n - A_n^{-1}\big(F[y_n, x_n] + F[y_n, z_n] - F[z_n, x_n]\big) F[y_n, x_n]^{-1} F(y_n),
\end{aligned} \qquad (23)$$
used to solve the equation
$$F(x) = 0, \qquad (24)$$
where $A_n = F[y_n, x_n] + 2 F[y_n, z_n] - 2 F[z_n, x_n]$ and $F : \Omega \subseteq B_1 \to B_1$ for some open, non-empty set $\Omega$. The fourth order of convergence of scheme (23) was established in [8] in the special case $B_1 = B_2 = \mathbb{R}^i$ using Taylor series requiring the existence of derivatives up to the fifth order (which do not appear in scheme (23)), and this restricts the applicability of scheme (23). Let us consider a motivational example. Define the following function $G$ on $B_1 = B_2 = \mathbb{R}$, $\Omega = [-\tfrac{1}{2}, \tfrac{3}{2}]$, as:
$$G(t) = \begin{cases} t^3 \ln t^2 + t^5 - t^4, & t \neq 0,\\ 0, & t = 0. \end{cases}$$
We get
$$G'(t) = 3 t^2 \ln t^2 + 5 t^4 - 4 t^3 + 2 t^2,$$
$$G''(t) = 6 t \ln t^2 + 20 t^3 - 12 t^2 + 10 t,$$
$$G'''(t) = 6 \ln t^2 + 60 t^2 - 24 t + 22.$$
So, we see that $G'''(t)$ is not bounded on $\Omega$. Therefore, results requiring the existence of $G'''(t)$ or higher derivatives cannot be applied to study the convergence of (23).
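Scheme (23) itself uses only first-order divided differences, so it can be run directly on the function $G$ above. The following minimal scalar sketch (an illustration written for this discussion, not the paper's Mathematica computation) applies (23) with the standard scalar divided difference and a starting point near the solution $t^* = 1$ of $G(t) = 0$.

```python
import math

def G(t):
    # Motivational example: G(t) = t^3 ln t^2 + t^5 - t^4 for t != 0, G(0) = 0; note G(1) = 0.
    return t**3 * math.log(t**2) + t**5 - t**4 if t != 0.0 else 0.0

def dd(f, u, v):
    # Scalar divided difference F[u, v] = (F(u) - F(v)) / (u - v).
    return (f(u) - f(v)) / (u - v)

def scheme23(f, x, steps=5):
    # Scalar version of scheme (23).
    for n in range(steps):
        if abs(f(x)) < 1e-15:
            break
        z = x + f(x)                              # z_n = x_n + F(x_n)
        y = x - f(x) / dd(f, z, x)                # y_n = x_n - F[z_n, x_n]^{-1} F(x_n)
        A = dd(f, y, x) + 2.0 * dd(f, y, z) - 2.0 * dd(f, z, x)
        x = y - (dd(f, y, x) + dd(f, y, z) - dd(f, z, x)) * f(y) / (A * dd(f, y, x))
        print(f"n = {n + 1}, x_n = {x:.15f}, |x_n - 1| = {abs(x - 1.0):.3e}")
    return x

scheme23(G, 1.05)
```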
In order to extend the applicability of scheme (23), we introduce hypotheses only on the divided difference $F[\cdot, \cdot]$ and on $F'(x^*)$.
We first develop some non-negative functions and parameters. Set $D = [0, \infty)$ and consider $\theta_0 \ge 0$, $\theta \ge 0$. Suppose that:
(i)
$\zeta_0(t) - 1$ has a least zero $\tau_0 \in D \setminus \{0\}$, for some function $\zeta_0 : D \to D$, which is continuous and non-decreasing. Set $D_0 = [0, \tau_0)$.
(ii)
$\Gamma_1(t) - 1$ has a least zero $\tau_1 \in D_0 \setminus \{0\}$, where $\Gamma_1 : D_0 \to D$, $\Gamma_1(t) = \dfrac{\zeta(t)}{1 - \zeta_0(t)}$ and the function $\zeta : D_0 \to D$ is continuous and non-decreasing.
(iii)
$\zeta_3(t) - 1$ has a least zero $\tau_2 \in D_0 \setminus \{0\}$, where $\zeta_3(t) = \zeta_1(t) + 2\zeta_2(t) + 2\zeta_0(t)$ and the functions $\zeta_1$ and $\zeta_2$ are as $\zeta$.
(iv)
$\zeta_1(t) - 1$ has a least zero $\tau_3 \in D_0 \setminus \{0\}$. Set $\tau_4 = \min\{\tau_2, \tau_3\}$ and $D_1 = [0, \tau_4]$.
(v)
$\Gamma_2(t) - 1$ has a least zero $\tau_5 \in D_1 \setminus \{0\}$, where $\Gamma_2 : D_1 \to D$ is defined by
$$\Gamma_2(t) = \left[\frac{\zeta_3(t) + 2\zeta_2(t) + 2\zeta(t)}{1 - \zeta_3(t)} + \frac{\big(\zeta_2(t) + \zeta_0(t)\big)\theta}{\big(1 - \zeta_1(t)\big)\big(1 - \zeta_3(t)\big)}\right] \Gamma_1(t),$$
where the functions $\zeta$, $\zeta_0$, $\zeta_1$, $\zeta_2$ and $\zeta_3$ are as given previously.
Next, we shall show that parameter
τ * = min { τ 1 , τ 5 } ,
is a convergence radius for scheme (23). Set D 2 = [ 0 , τ * ) .
By the definition of $\tau^*$, it follows that for each $t \in D_2$:
$$0 \le \zeta_0(t) < 1, \qquad (26)$$
$$0 \le \zeta_1(t) < 1, \qquad (27)$$
$$0 \le \zeta_3(t) < 1, \qquad (28)$$
$$0 \le \Gamma_1(t) < 1, \qquad (29)$$
and
$$0 \le \Gamma_2(t) < 1. \qquad (30)$$
The hypotheses $(C)$ are needed, with the "$\zeta$" functions as previously given and $x^*$ being a simple solution of Equation (24). Suppose:
(C1)
For each $x \in \Omega$
$$\big\|F'(x^*)^{-1}\big(F[x + F(x), x] - F'(x^*)\big)\big\| \le \zeta_0(\|x - x^*\|)$$
and
$$\|I + F[x, x^*]\| \le \theta_0.$$
Set $\Omega_1 = U[x^*, \tau_0] \cap \Omega$.
(C2)
For each $x, y \in \Omega_1$
$$\big\|F'(x^*)^{-1}\big(F[x + F(x), x] - F[x, x^*]\big)\big\| \le \zeta(\|x - x^*\|),$$
$$\big\|F'(x^*)^{-1}\big(F[y, x] - F'(x^*)\big)\big\| \le \zeta_1(\|x - x^*\|),$$
$$\big\|F'(x^*)^{-1}\big(F[y, x] - F[y, x^*]\big)\big\| \le \zeta_2(\|x - x^*\|),$$
and
$$\big\|F'(x^*)^{-1} F[x, x^*]\big\| \le \theta.$$
(C3)
$U[x^*, \bar{\tau}] \subseteq \Omega$, where $\bar{\tau} = \max\{\tau^*, \theta_0 \tau^*\}$.
Next, we present the local convergence analysis of scheme (23) using the hypotheses ( C ) .
Theorem 3.
Under hypotheses $(C)$, further choose $x_0 \in U[x^*, \tau^*] \setminus \{x^*\}$. Then, $\lim_{n \to \infty} x_n = x^*$, where the sequence $\{x_n\}$ is generated by scheme (23).
Proof. 
The following assertions shall be shown based on mathematical induction:
$$\{x_n\} \subset U[x^*, \tau^*], \qquad (31)$$
$$\|y_n - x^*\| \le \Gamma_1(e_n)\, e_n < \tau^*, \qquad (32)$$
and
$$e_{n+1} \le \Gamma_2(e_n)\, e_n. \qquad (33)$$
By hypotheses $(C1)$, (26), (27) and $x_0 \in U[x^*, \tau^*] \setminus \{x^*\}$, we have in turn that
$$\big\|F'(x^*)^{-1}\big(F[x + F(x), x] - F'(x^*)\big)\big\| \le \zeta_0(\|x - x^*\|) \le \zeta_0(\tau^*) < 1.$$
So the Banach lemma for invertible linear operators [8] gives $F[x + F(x), x]^{-1} \in L(B_1, B_2)$ and
$$\big\|F[x + F(x), x]^{-1} F'(x^*)\big\| \le \frac{1}{1 - \zeta_0(\|x - x^*\|)},$$
where we also used
$$\|x + F(x) - x^*\| = \big\|(I + F[x, x^*])(x - x^*)\big\| \le \|I + F[x, x^*]\|\, \|x - x^*\| \le \theta_0 \tau^* \le \bar{\tau}.$$
Notice that for $x = x_0$ the iterate $y_0$ is well defined by the first substep of scheme (23), from which we can also write
$$y_0 - x^* = F[z_0, x_0]^{-1} F'(x^*)\, F'(x^*)^{-1}\big(F[z_0, x_0] - F[x_0, x^*]\big)(x_0 - x^*).$$
In view of $(C2)$, (26), (29), (34) (for $x = x_0$) and (35), we get in turn that
$$\|y_0 - x^*\| \le \frac{\zeta(\|x_0 - x^*\|)\, \|x_0 - x^*\|}{1 - \zeta_0(\|x_0 - x^*\|)} = \Gamma_1(\|x_0 - x^*\|)\, \|x_0 - x^*\| \le \|x_0 - x^*\| < \tau^*,$$
showing (32) for $n = 0$ and $y_0 \in U[x^*, \tau^*]$. Next, we shall show that $A_0^{-1}$ and $F[y_0, x_0]^{-1} \in L(B_1, B_1)$. Using (26), (28) and $(C2)$, we obtain in turn
$$\big\|F'(x^*)^{-1}\big(A_0 - F'(x^*)\big)\big\| \le \big\|F'(x^*)^{-1}\big(F[y_0, x_0] - F'(x^*)\big)\big\| + 2\big\|F'(x^*)^{-1}\big(F[y_0, z_0] - F'(x^*)\big)\big\| + 2\big\|F'(x^*)^{-1}\big(F[z_0, x_0] - F'(x^*)\big)\big\| \le \zeta_1(\|x_0 - x^*\|) + 2\zeta_2(\|x_0 - x^*\|) + 2\zeta_0(\|x_0 - x^*\|) = \zeta_3(\|x_0 - x^*\|) \le \zeta_3(\tau^*) < 1,$$
so
$$\big\|A_0^{-1} F'(x^*)\big\| \le \frac{1}{1 - \zeta_3(\|x_0 - x^*\|)}.$$
Similarly, by using (27) and $(C2)$, we get
$$\big\|F[y_0, x_0]^{-1} F'(x^*)\big\| \le \frac{1}{1 - \zeta_1(\|x_0 - x^*\|)}.$$
Hence, the iterate $x_1$ is well defined by the second substep of scheme (23), from which we can also write
$$x_1 - x^* = y_0 - x^* - A_0^{-1} F(y_0) - A_0^{-1}\big(F[y_0, z_0] - F[z_0, x_0]\big) F[y_0, x_0]^{-1} F(y_0).$$
By (26), (30), $(C2)$ and (37)–(39), we obtain in turn
$$\|x_1 - x^*\| \le \left[\frac{\zeta_3(\|x_0 - x^*\|) + 2\zeta_2(\|x_0 - x^*\|) + 2\zeta(\|x_0 - x^*\|)}{1 - \zeta_3(\|x_0 - x^*\|)} + \frac{\theta\big(\zeta_2(\|x_0 - x^*\|) + \zeta_0(\|x_0 - x^*\|)\big)}{\big(1 - \zeta_3(\|x_0 - x^*\|)\big)\big(1 - \zeta_1(\|x_0 - x^*\|)\big)}\right] \|y_0 - x^*\| \le \Gamma_2(\|x_0 - x^*\|)\, \|x_0 - x^*\| \le \|x_0 - x^*\|,$$
showing (33) for $n = 0$ and $x_1 \in U[x^*, \tau^*]$. Simply exchange $x_0, z_0, y_0, x_1$ by $x_m, z_m, y_m, x_{m+1}$, respectively, in the previous calculations to complete the induction for items (31)–(33). It then follows from the estimate
$$\|x_{m+1} - x^*\| \le l \|x_m - x^*\| < \tau^*,$$
where $l = \Gamma_2(\|x_0 - x^*\|) \in [0, 1)$, that $x_{m+1} \in U[x^*, \tau^*]$ and $\lim_{m \to \infty} x_m = x^*$.  □
Next, concerning the uniqueness of the solution, we give a result not necessarily relying on the hypotheses ( C ) .
Proposition 3.
Suppose: the equation $F(x) = 0$ has a simple solution $x^* \in \Omega$;
for all $x \in \Omega$
$$\big\|F'(x^*)^{-1}\big(F[x, x^*] - F'(x^*)\big)\big\| \le \zeta_4(\|x - x^*\|), \qquad (42)$$
and the function $\zeta_4(t) - 1$ has a smallest positive zero $\tau_6$, where $\zeta_4 : M \to M$ is a continuous and non-decreasing function. Set $\Omega_1 = S[x^*, \bar{\rho}] \cap \Omega$ for some $\bar{\rho} < \tau_6$. Then, the only solution of the equation $F(x) = 0$ in the region $\Omega_1$ is $x^*$.
Proof. 
Set $U = F[x^{**}, x^*]$ for some $x^{**} \in \Omega_1$ with $F(x^{**}) = 0$. Then, using (42), we get
$$\big\|F'(x^*)^{-1}\big(U - F'(x^*)\big)\big\| \le \zeta_4(\|x^{**} - x^*\|) \le \zeta_4(\bar{\rho}) < 1,$$
so $U^{-1} \in L(B_1, B_1)$ and $x^{**} = x^*$ follows from $U(x^{**} - x^*) = F(x^{**}) - F(x^*) = 0$.  □
Remark 5.
Let us consider the choices
$$F[x, y] = \tfrac{1}{2}\big(F'(x) + F'(y)\big) \quad \text{or} \quad F[x, y] = \int_0^1 F'\big(x + \theta(y - x)\big)\, d\theta,$$
or the standard definition of the divided difference when $X = \mathbb{R}^i$ [12].
Moreover, suppose
$$\big\|F'(x^*)^{-1}\big(F'(x) - F'(x^*)\big)\big\| \le \phi_0(\|x - x^*\|)$$
and
$$\big\|F'(x^*)^{-1}\big(F'(x) - F'(y)\big)\big\| \le \phi(\|x - y\|),$$
where the functions $\phi_0 : M \to M$, $\phi : M \to M$ are continuous and non-decreasing. Then, under the first or the second choice above, it can easily be seen that the hypotheses $(C)$ require
$$\zeta_0(t) = \tfrac{1}{2}\big(\phi_0(\theta_0 t) + \phi_0(t)\big), \quad \zeta(t) = \tfrac{1}{2}\big(\phi(\theta t) + \phi_0(t)\big), \quad \zeta_1(t) = \tfrac{1}{2}\big(\phi_0(t) + \phi_0(t)\big), \quad \zeta_2(t) = \zeta(t).$$

6. Numerical Examples

Here, we present computational results based on the theoretical results suggested in this paper. We choose an applied science problem for the computational results, which is illustrated in Example 1. The results are listed in Table 1. Additionally, we obtain the computational order of convergence (COC) approximated by means of
$$\lambda = \frac{\ln \dfrac{\|x_{n+1} - x^*\|}{\|x_n - x^*\|}}{\ln \dfrac{\|x_n - x^*\|}{\|x_{n-1} - x^*\|}}, \quad \text{for } n = 1, 2, \ldots,$$
or the approximate computational order of convergence (ACOC) [13] by
$$\lambda^* = \frac{\ln \dfrac{\|x_{n+1} - x_n\|}{\|x_n - x_{n-1}\|}}{\ln \dfrac{\|x_n - x_{n-1}\|}{\|x_{n-1} - x_{n-2}\|}}, \quad \text{for } n = 2, 3, \ldots$$
The computations are performed with the package Mathematica 11 using multiple precision arithmetic.
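A small helper (illustrative only; it assumes the iterates, and for the COC also the solution, are available as floating-point numbers) that evaluates $\lambda$ and $\lambda^*$ from stored iterates:

```python
import math

def coc(xs, x_star):
    # COC: lambda_n = ln(e_{n+1}/e_n) / ln(e_n/e_{n-1}), with e_n = |x_n - x*|.
    e = [abs(x - x_star) for x in xs]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1]) for n in range(1, len(e) - 1)]

def acoc(xs):
    # ACOC: uses only differences of consecutive iterates, no knowledge of x*.
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return [math.log(d[n] / d[n - 1]) / math.log(d[n - 1] / d[n - 2]) for n in range(2, len(d))]

# Illustrative error sequence decaying with order four (x* = 0); not data from the paper.
iterates = [1e-1, 1e-4, 1e-16, 1e-64]
print(coc(iterates, 0.0))   # approximately [4.0, 4.0]
print(acoc(iterates))       # approximately [4.0]
```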
Example 1.
Let $B_1 = B_2 = \mathbb{R}^3$ and $\Omega = S(0, 1)$. Define $F$ on $\Omega$, for $u = (u_1, u_2, u_3)^T$, as
$$F(u) = F(u_1, u_2, u_3) = \Big(e^{u_1} - 1,\; \frac{e - 1}{2} u_2^2 + u_2,\; u_3\Big)^T.$$
Then, the Fréchet derivative is
$$F'(u) = \begin{pmatrix} e^{u_1} & 0 & 0\\ 0 & (e - 1)u_2 + 1 & 0\\ 0 & 0 & 1 \end{pmatrix}.$$
Hence, for $x^* = (0, 0, 0)^T$ we have $F'(x^*) = F'(x^*)^{-1} = \mathrm{diag}\{1, 1, 1\}$, and we can take
$$\phi_0(t) = (e - 1)t, \quad \phi(t) = e^{\frac{1}{e - 1}}\, t, \quad \theta_0 = \tfrac{1}{2}\Big(3 + e^{\frac{1}{e - 1}}\Big), \quad \text{and} \quad \theta = \tfrac{1}{2}\Big(1 + e^{\frac{1}{e - 1}}\Big).$$
So, we obtain the convergence radii reported in Table 1.
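The radii in Table 1 are the smallest positive zeros of the functions $\zeta_i(t) - 1$ and $\Gamma_j(t) - 1$. The sketch below (an illustration of the mechanism, not a re-run of the paper's Mathematica computation) locates such a zero by a scan followed by bisection; it uses $\phi_0$ and $\theta_0$ from Example 1 together with the choice $\zeta_0(t) = \tfrac{1}{2}(\phi_0(\theta_0 t) + \phi_0(t))$ displayed in Remark 5.

```python
import math

def least_positive_zero(h, t_max=1.0, scan=100_000, bisect_steps=80):
    # Smallest t in (0, t_max] with h(t) = 0, found by scanning for the first sign
    # change and refining by bisection; intended for h(t) = zeta(t) - 1 with zeta
    # continuous, non-decreasing and zeta(0) < 1.
    a = 0.0
    for k in range(1, scan + 1):
        b = k * t_max / scan
        if h(b) >= 0.0:
            for _ in range(bisect_steps):
                m = 0.5 * (a + b)
                a, b = (a, m) if h(m) >= 0.0 else (m, b)
            return 0.5 * (a + b)
        a = b
    return None

e = math.e
phi0 = lambda t: (e - 1.0) * t                          # phi_0(t) from Example 1
theta0 = 0.5 * (3.0 + e ** (1.0 / (e - 1.0)))           # theta_0 from Example 1
zeta0 = lambda t: 0.5 * (phi0(theta0 * t) + phi0(t))    # zeta_0 choice from Remark 5

print(least_positive_zero(lambda t: zeta0(t) - 1.0))    # compare with tau_0 in Table 1
```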
Example 2.
Define the scalar function
$$h(t) = \xi_0 t + \xi_1 + \xi_2 \sin(\xi_3 t), \qquad x_0 = 0,$$
where $\xi_j$, $j = 0, 1, 2, 3$, are scalars. Choose $\phi_0(t) = K t$ and $\phi(t) = L t$. Then, clearly, for $\xi_3$ large and $\xi_2$ small, $K/L$ can be (arbitrarily) small; notice that $K/L \to 0$. In earlier studies [1], $K = L$ was used. Hence, our results constitute a significant improvement without additional hypotheses, since in practice the computation of $L$ requires that of $K$ as a special case.

7. Conclusions

The region of applicability of SLS and ISLS for solving the generalized Equations (1) and (21), respectively, has been extended under weaker or the same hypotheses. Moreover, tighter error distances are obtained, as well as more precise knowledge of the whereabouts of the solution $x^*$. Due to the generality of our idea, it can provide the same benefits when applied to other methods.

Author Contributions

R.B. and I.K.A.: conceptualization; methodology; validation; writing—original draft preparation; writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under Grant No. (KEP-MSc-49-130-42).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

This project was funded by the Deanship of Scientific Research (DSR) at King Abdulaziz University, Jeddah, Saudi Arabia, under grant no. (KEP-MSc-49-130-42). The authors, therefore, acknowledge with thanks DSR for technical and financial support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Hilout, S. Convergence analysis of a family of Steffensen-type methods for generalized equations. J. Math. Anal. Appl. 2008, 339, 753–761. [Google Scholar] [CrossRef] [Green Version]
  2. Dontchev, A.L.; Quincampoix, M.; Zlateva, N. Aubin criterion for metric regularity. J. Convex Anal. 2006, 13, 281–297. [Google Scholar]
  3. Argyros, I.K. On the solution of generalized equations using m(m > 2) Fréchet differential operators. Commun. Appl. Nonlinear Anal. 2002, 09, 85–89. [Google Scholar]
  4. Hernández, M.A.; Rubio, M.J. A modification of Newton’s method for nondifferentiable equations. J. Comput. Appl. Math. 2004, 164–165, 323–330. [Google Scholar] [CrossRef] [Green Version]
  5. Hilout, S.; Piétrus, A. A semilocal convergence of a Secant-type method for solving generalized equations. Positivity 2006, 10, 673–700. [Google Scholar] [CrossRef]
  6. Aubin, J.P.; Frankowska, H. Set-Valued Analysis; Birkhäuser: Boston, MA, USA, 1990. [Google Scholar]
  7. Mordukhovich, B.S. Coderivatives of set-valued mappings: Calculus and applications. Nonlinear Anal. 1997, 30, 3059–3070. [Google Scholar] [CrossRef]
  8. Potra, F.A.; Ptak, V. Nondiscrete Induction and Iterative Processes; Pitman: Boston, MA, USA, 1984. [Google Scholar]
  9. Cǎtinas, E. On some iterative methods for solving nonlinear equations. Rev. Anal. Numér. Théor. Approx. 1994, 23, 17–53. [Google Scholar]
  10. Robinson, S.M. Generalized equations and their solutions, part II: Applications to nonlinear programming. Math. Program. Study 1982, 19, 200–221. [Google Scholar]
  11. UI’m, S. Generalized divided differences. I, II. Eesti NSV Tead. Akad. Toimetised Füls. Mat. 1967, 16, 13–26, 146–156. (In Russian) [Google Scholar]
  12. Schmidt, J.W. Eine übertragung der Regula Falsi and gleicbungen in Banachräumen I. Z. Angew. Math. Mech. 1963, 43, 97–110. [Google Scholar] [CrossRef]
  13. Argyros, I.K.; Magréñan, A.A.M. A Contemporary Study of Iterative Methods; Elsevier (Academic Press): New York, NY, USA, 2018. [Google Scholar]
Table 1. Radii for Example 1.
$\tau_0 = 0.342865$, $\tau_1 = 0.199053$, $\tau_2 = 0.0849919$, $\tau_3 = 0.581976$, $\tau_5 = 0.06169217$, $\tau^* = 0.0849919$.