Article

A Generalized Explicit Iterative Method for Solving Generalized Split Feasibility Problem and Fixed Point Problem in Real Banach Spaces

by Godwin Chidi Ugwunnadi 1,2, Lateef Olakunle Jolaoso 2,3,* and Chibueze Christian Okeke 4
1 Department of Mathematics, University of Eswatini, Private Bag 4, Kwaluseni M201, Eswatini
2 Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, P.O. Box 94, Medunsa 0204, South Africa
3 Department of Mathematics, Federal University of Agriculture, Abeokuta 2240, Ogun State, Nigeria
4 School of Mathematics, University of the Witwatersrand, Private Bag 3, Johannesburg 2050, South Africa
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(2), 335; https://doi.org/10.3390/sym14020335
Submission received: 24 August 2021 / Revised: 22 September 2021 / Accepted: 23 September 2021 / Published: 6 February 2022

Abstract: In this paper, we propose a generalized explicit algorithm for approximating a common solution of the generalized split feasibility problem and the fixed point problem for a demigeneralized mapping in uniformly smooth and 2-uniformly convex real Banach spaces. The generalized split feasibility problem is a general mathematical problem in the sense that it unifies several mathematical models arising in (symmetric and non-symmetric) optimization theory and has many applications in the applied sciences. We design the algorithm so that its convergence analysis does not require a prior estimate of the operator norm. Moreover, we establish the strong convergence of the algorithm and present some computational examples to illustrate the performance of the proposed method. In addition, we apply our result to the image restoration problem and compare it with other algorithms in the literature. This result improves and generalizes many important related results in the contemporary literature.

1. Introduction

Let $C$ and $Q$ be nonempty, closed, and convex subsets of two real Hilbert spaces $H_1$ and $H_2$, respectively, and let $B : H_1 \to H_2$ be a bounded linear operator. The Split Feasibility Problem (shortly, SFP) is defined as
find $v^* \in C$ such that $Bv^* \in Q$. (1)
We denote the set of solutions of the SFP (1) by $SFP(C,Q,B)$, i.e., $SFP(C,Q,B) = \{v^* \in C : Bv^* \in Q\}$. The SFP was first introduced in [1] in the setting of finite-dimensional spaces for modeling inverse problems arising from phase retrieval and medical image reconstruction. Since then, it has been widely studied and extended by many researchers, mainly due to its applications in areas such as radiation therapy treatment planning, signal processing, image restoration, and computer tomography; see, e.g., [2,3,4,5].
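For readers who want to experiment with the SFP itself, the classical CQ-type projection iteration of [2], $x_{k+1} = P_C(x_k - \gamma B^*(Bx_k - P_Q(Bx_k)))$, is easy to prototype; the sketch below is an added illustration with assumed choices of $C$, $Q$, and $B$ (it is not code from this paper).

```python
# Minimal sketch (not from the paper): a CQ-type projection iteration for the SFP
# in finite-dimensional Hilbert spaces,
#   x_{k+1} = P_C(x_k - gamma * B^T (B x_k - P_Q(B x_k))),
# with illustrative choices of C, Q, and B (all assumptions for this demo).
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((20, 30))          # bounded linear operator H1 -> H2
P_C = lambda x: np.clip(x, -1.0, 1.0)      # projection onto the box C = [-1, 1]^30
P_Q = lambda y: np.minimum(y, 2.0)         # projection onto Q = {y : y_i <= 2}

gamma = 1.0 / np.linalg.norm(B, 2) ** 2    # stepsize in (0, 2/||B||^2)
x = rng.standard_normal(30)
for _ in range(500):
    x = P_C(x - gamma * B.T @ (B @ x - P_Q(B @ x)))

print("residual ||Bx - P_Q(Bx)|| =", np.linalg.norm(B @ x - P_Q(B @ x)))
```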
In 2014, Ref. [6] introduced the Generalized Split Feasibility Problem (GSFP) in the framework of real Hilbert spaces as follows:
find $v^* \in C$ such that $0 \in Av^*$ and $Bv^* \in F(T)$, (2)
where $A : H_1 \to 2^{H_1}$ is a maximal monotone operator, $B : H_1 \to H_2$ is a bounded linear operator, $T : H_2 \to H_2$ is a nonexpansive mapping, and $F(T)$ is the set of fixed points of $T$, i.e., $F(T) := \{x \in H_2 : Tx = x\}$. We denote the set of solutions of the GSFP (2) by $\Omega$. Note that, when $A = N_C$ (i.e., the normal cone operator at $C$) and $F(T) = Q$, the GSFP reduces to the SFP. Ref. [6] proposed the following iterative method for solving the GSFP in real Hilbert spaces:
$x_{n+1} = J_{\lambda_n}^A (I - \gamma B^*(I - T)B)x_n, \quad n \in \mathbb{N},$ (3)
where $J_\lambda^A x = (I + \lambda A)^{-1}x$ is the resolvent operator of $A$ and $B^* : H_2 \to H_1$ is the adjoint of $B$. They also proved that the sequence $\{x_n\}$ generated by (3) converges weakly to a solution of the GSFP. Recently, Ref. [7] extended the result of [6] to the setting of uniformly convex and 2-uniformly smooth real Banach spaces. In particular, they proposed the following iterative method for solving the GSFP in real Banach spaces:
$y_n = J_{E_1}^{-1}(J_{E_1}x_n - \gamma B^* J_{E_2}(I - U)Bx_n),$
$x_{n+1} = J_{E_1}^{-1}(\beta_n J_{E_1}x_n + (1 - \beta_n)J_{E_1}TJ_\lambda^A y_n), \quad n \in \mathbb{N},$ (4)
where $0 < a \le \beta_n \le b < 1$, $\gamma \in \left(0, \frac{1}{\|B\|^2}\right)$, $\lambda > 0$, $J_{E_1}$ and $J_{E_2}$ are the normalized duality mappings on the real Banach spaces $E_1$ and $E_2$, respectively, $A : E_1 \to 2^{E_1^*}$ is a maximal monotone operator, and $T : C \to C$ and $U : E_2 \to E_2$ are nonexpansive mappings with $F(U) \neq \emptyset$. The authors proved that the sequence generated by (4) converges weakly to an element of $\Gamma = \Omega \cap F(T)$. Furthermore, Ref. [8] introduced a strong convergence algorithm for finding a common element of the set of solutions of the GSFP and of the common fixed point problem for a countable family of mappings between a real Hilbert space $H$ and a real Banach space $E$, as follows:
$x_1 \in H,$
$z_n = J_{\lambda_n}^A(x_n - \gamma_n B^* J_E(Bx_n - UBx_n)),$
$y_n = (1 - \sigma_n)z_n + \sigma_n \sum_{i=1}^{\infty}\eta_i T_i z_n,$
$x_{n+1} = P_C(\alpha_n x_0 + \beta_n y_n + \delta_n z_n),$ (5)
where $P_C$ is the metric projection from $H$ onto $C$, $J_E : E \to 2^{E^*}$ is the normalized duality mapping on $E$, $B : H \to E$ is a bounded linear operator, $U : E \to E$ is a firmly nonexpansive-like mapping, $\{T_i\}$ is a countable family of demimetric mappings on $C$ with $k_i \in (-\infty, 1)$, and $\{\alpha_n\}, \{\beta_n\}, \{\delta_n\}, \{\eta_i\} \subset (0, 1)$, $\{\lambda_n\}, \{\sigma_n\}, \{\gamma_n\} \subset (0, +\infty)$ are sequences satisfying the following conditions:
(i)
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(ii)
$0 < \liminf_{n\to\infty}\beta_n \le \limsup_{n\to\infty}\beta_n < 1$ and $\alpha_n + \beta_n + \delta_n = 1$;
(iii)
$\sum_{i=1}^{\infty}\eta_i = 1$;
(iv)
$0 < a \le \sigma_n \le b < 1 - k$, where $k = \sup\{k_i : i \in \mathbb{N}\} < 1$;
(v)
$0 < c \le \gamma_n \le \gamma < \frac{2}{\|B\|^2}$ and $0 < \liminf_{n\to\infty}\lambda_n \le \limsup_{n\to\infty}\lambda_n < \infty$.
Very recently, Ref. [9] further introduced a Halpern-type strong convergence algorithm for solving the GSFP in real Banach spaces as follows:
$x_1, u \in E_1,$
$y_n = J_{E_1}^{-1}(J_{E_1}x_n - \gamma B^* J_{E_2}(I - U)Bx_n),$
$x_{n+1} = J_{E_1}^{-1}(\alpha_n J_{E_1}u + (1 - \alpha_n)J_{E_1}Q_{r_n}^A Ty_n), \quad n \in \mathbb{N},$ (6)
where $\{\alpha_n\} \subset (0, 1)$ satisfies $\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = +\infty$, $0 < \gamma < \frac{1 - \tau}{\|B\|^2}$, and $T : E_2 \to E_2$ is a $\tau$-quasi-strictly pseudononspreading mapping such that $F(T) \neq \emptyset$. The authors proved that the sequence $\{x_n\}$ generated by Algorithm (6) converges strongly to an element of $\Omega$ under some mild conditions on the control sequences. Note that, in the methods mentioned above, the stepsize depends on a prior estimate of the norm of the bounded linear operator, i.e., $\|B\|$, which in general is very difficult to estimate (see, e.g., [10]); thus, the following question arises naturally:
Question A: Can we provide an iterative scheme which does not depend on a prior estimate of the norm of the bounded linear operator for solving the generalized split feasibility problem in real Banach spaces?
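To see what such a prior estimate involves: $\|B\|$ is the largest singular value of $B$, which for large or implicitly defined operators is usually approximated iteratively before the main algorithm can even start. The sketch below (an added illustration, not part of any of the cited methods) estimates it by power iteration on $B^*B$.

```python
# Illustrative sketch (not from the paper): approximating ||B|| = sigma_max(B)
# by power iteration on B^T B, the kind of prior estimate that norm-dependent
# stepsize rules require before the main iteration can run.
import numpy as np

def operator_norm(B, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(B.shape[1])
    for _ in range(iters):
        v = B.T @ (B @ v)               # apply B^T B
        v /= np.linalg.norm(v)          # re-normalize the iterate
    return np.linalg.norm(B @ v)        # estimate of the largest singular value

B = np.random.default_rng(1).standard_normal((200, 300))
print(operator_norm(B), np.linalg.norm(B, 2))   # estimate vs. exact spectral norm
```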
On the other hand, Ref. [11] introduced the generalized viscosity implicit rule for approximating a fixed point of a nonexpansive mapping $T : C \to C$ in real Hilbert spaces as follows: given $x_0 \in C$, compute
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T(t_n x_n + (1 - t_n)x_{n+1}), \quad n \ge 0.$ (7)
They also proved that the sequence $\{x_n\}$ generated by (7) converges strongly to a point in $F(T)$. However, the computation required by an implicit method is, in general, not a simple task. To overcome this difficulty, the explicit midpoint method is given by the following finite-difference scheme, which was originally introduced in the books [12,13]:
$y_0 = x_0,$
$\bar{y}_{n+1} = y_n + hf(y_n),$
$y_{n+1} = y_n + hf\left(\frac{y_n + \bar{y}_{n+1}}{2}\right), \quad n \ge 0,$ (8)
where $f : H \to H$ is a contraction mapping and $h \in [0, 1]$ is the mesh size. In 2017, Ref. [14] combined the generalized viscosity implicit midpoint method (7) with the explicit midpoint method (8) for approximating a fixed point of a quasi-nonexpansive mapping $T$. In particular, they introduced the following generalized viscosity explicit midpoint method: for any $x_1 \in C$,
$\bar{x}_{n+1} = \beta_n x_n + (1 - \beta_n)Tx_n,$
$x_{n+1} = \alpha_n f(x_n) + (1 - \alpha_n)T(t_n x_n + (1 - t_n)\bar{x}_{n+1}), \quad n \ge 1.$ (9)
They also showed that the sequence $\{x_n\}$ generated by (9) converges strongly to a fixed point of $T$ under certain assumptions on the parameters $\{\alpha_n\}$, $\{\beta_n\}$, and $\{t_n\}$.
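As a simple added illustration (not taken from the paper) of the predictor–corrector structure behind (8) and (9), the sketch below applies the explicit midpoint scheme (8) to a scalar map; the choice of $f$ and the mesh $h$ are assumptions for the demo.

```python
# Minimal sketch of the explicit midpoint scheme (8): a forward "predictor"
# step y_bar = y + h*f(y) followed by an evaluation of f at the midpoint.
# The map f and the mesh h below are illustrative assumptions.
def explicit_midpoint(f, y0, h, steps):
    y = y0
    for _ in range(steps):
        y_bar = y + h * f(y)              # predictor (forward Euler) step
        y = y + h * f((y + y_bar) / 2.0)  # corrector using the midpoint value
    return y

# Example: f(y) = -y, whose trajectory contracts toward the fixed point 0.
print(explicit_midpoint(lambda y: -y, y0=1.0, h=0.1, steps=50))
```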
Motivated by the above results, in this paper, we provide an affirmative answer to Question A using the above technique in real Banach spaces. In particular, we introduce a generalized explicit method for solving the GSFP, without prior knowledge of the norm of the bounded linear operator, in uniformly smooth and 2-uniformly convex real Banach spaces. The algorithm is designed such that its stepsize is determined self-adaptively at each iteration, so its convergence does not require a prior estimate of the norm of the bounded linear operator. We prove a strong convergence result for the sequence generated by the algorithm and provide numerical examples to illustrate the performance of the iterative method. Furthermore, we apply the algorithm to the image restoration problem and compare its performance with other related methods in the literature.

2. Preliminaries

In this section, we present some preliminary definitions and concepts which are needed in this paper. Let $E$ be a real Banach space with dual $E^*$ and let $S_E := \{x \in E : \|x\| = 1\}$ denote the unit sphere of $E$. We denote the value of $y \in E^*$ at $x \in E$ by $\langle x, y \rangle$. In addition, we denote the strong (resp. weak) convergence of a sequence $\{x_n\} \subset E$ to a point $x \in E$ by $x_n \to x$ (resp. $x_n \rightharpoonup x$).
Let $E_1, E_2$ be two Banach spaces and let $B : E_1 \to E_2$ be a bounded linear operator. The adjoint operator of $B$, denoted by $B^*$, is defined as $B^* : E_2^* \to E_1^*$ with $\langle x, B^*y \rangle = \langle Bx, y \rangle$ for all $x \in E_1$ and $y \in E_2^*$. Then $B^*$ is also a bounded linear operator and $\|B\| = \|B^*\|$.
A Banach space $E$ is said to be smooth if $\lim_{t\to 0}\frac{\|x + ty\| - \|x\|}{t}$ exists for each $x, y \in S_E$; and if, for any $\lambda \in (0, 1)$, $\|\lambda x + (1 - \lambda)y\| < 1$ for all $x, y \in S_E$ with $x \neq y$, then $E$ is called strictly convex. In addition, $E$ is said to be uniformly convex if, for any $\epsilon \in (0, 2]$, there exists $\delta = \delta(\epsilon) > 0$ such that, if $\frac{\|x + y\|}{2} \ge 1 - \delta$, then $\|x - y\| \le \epsilon$ for all $x, y \in S_E$. The modulus of smoothness of $E$ is the function $\rho_E : [0, \infty) \to [0, \infty)$ defined by
$\rho_E(\tau) = \sup\left\{\frac{\|x + \tau y\| + \|x - \tau y\|}{2} - 1 : x, y \in S_E\right\}.$
In addition, $E$ is called uniformly smooth if $\lim_{\tau\to 0}\frac{\rho_E(\tau)}{\tau} = 0$, and $q$-uniformly smooth if there exists a positive real number $C_q$ such that $\rho_E(\tau) \le C_q\tau^q$ for any $\tau > 0$. Hence, every $q$-uniformly smooth Banach space is uniformly smooth. It is known that the $L_p$, $\ell_p$, and $W_p^m$ spaces with $1 < p \le 2$ are $p$-uniformly smooth and 2-uniformly convex (see [15] for more details).
Furthermore, the normalized duality mapping $J : E \to 2^{E^*}$ is defined by
$J(x) := \{y \in E^* : \langle x, y \rangle = \|x\|^2 = \|y\|^2\}, \quad x \in E.$
It is known that J has the following properties (for more details, see [16,17,18]):
(UM1)
If E is smooth, then J is single-valued.
(UM2)
If E is strictly convex, then J is one to one and strictly monotone.
(UM3)
If E is uniformly smooth, then J is uniformly norm to norm continuous on a bounded subset of E.
(UM4)
If E is smooth, strictly convex, and reflexive Banach space, then J is single-valued, one to one and onto.
(UM5)
If E is uniformly smooth and uniformly convex, then the dual space E * is also uniformly smooth and uniformly convex; furthermore, J and J 1 are both uniformly continuous on bounded subsets of E.
(UM6)
If E is a reflexive, strictly convex, and smooth Banach space, then J 1 (the duality mapping from E * into E) is single-valued, one to one and onto.
Let $E$ be a Banach space and let $\phi : E \times E \to [0, \infty)$ denote the Lyapunov functional defined by
$\phi(x, y) = \|x\|^2 - 2\langle x, Jy \rangle + \|y\|^2, \quad x, y \in E.$
The functional ϕ satisfies the following properties (see [19]):
(A1)
$(\|x\| - \|y\|)^2 \le \phi(x, y) \le (\|x\| + \|y\|)^2$, for all $x, y \in E$;
(A2)
$\phi(x, y) = \phi(x, z) + \phi(z, y) + 2\langle x - z, Jz - Jy \rangle$, for all $x, y, z \in E$;
(A3)
$\phi(x, y) = \langle x, Jx - Jy \rangle + \langle y - x, Jy \rangle \le \|x\|\,\|Jx - Jy\| + \|y - x\|\,\|y\|$, for all $x, y \in E$;
(A4)
$\phi(z, J^{-1}(\alpha Jx + (1 - \alpha)Jy)) \le \alpha\phi(z, x) + (1 - \alpha)\phi(z, y)$, where $\alpha \in (0, 1)$ and $x, y, z \in E$.
Remark 1.
If $E$ is strictly convex, then, for all $x, y \in E$, $\phi(x, y) = 0$ if and only if $x = y$ (see Remark 2.1 in [20]).
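For intuition (an added illustration, not part of the paper): in a real Hilbert space the duality mapping $J$ is the identity, so $\phi(x, y) = \|x - y\|^2$, and property (A1) becomes a bound in terms of norms. The snippet below checks both facts numerically.

```python
# Added illustration: in a Hilbert space (J = identity), the Lyapunov functional
# phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2 collapses to the squared distance.
import numpy as np

def phi(x, y):
    return np.dot(x, x) - 2.0 * np.dot(x, y) + np.dot(y, y)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(5), rng.standard_normal(5)
assert np.isclose(phi(x, y), np.linalg.norm(x - y) ** 2)          # phi = ||x - y||^2
nx, ny = np.linalg.norm(x), np.linalg.norm(y)
assert (nx - ny) ** 2 <= phi(x, y) <= (nx + ny) ** 2              # property (A1)
```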
Now, following [21], we introduce another functional $V : E \times E^* \to [0, \infty)$, which is a mild modification of, and closely related to, the Lyapunov functional:
$V(x, x^*) = \|x\|^2 - 2\langle x, x^* \rangle + \|x^*\|^2$ (10)
for all $x \in E$ and $x^* \in E^*$. From the definition of $\phi$, we get
$V(x, x^*) = \phi(x, J^{-1}(x^*)) \quad \text{for all } x \in E \text{ and } x^* \in E^*.$ (11)
For each $x \in E$, the mapping $g$ defined by $g(x^*) = V(x, x^*)$ for all $x^* \in E^*$ is a continuous, convex function from $E^*$ into $\mathbb{R}$. The following lemma states a very important property of $V$.
Lemma 1
([21]). Let E be a reflexive, strictly convex and smooth Banach space and let V be as in (10). Then,
$V(x, x^*) + 2\langle J^{-1}x^* - x, y^* \rangle \le V(x, x^* + y^*)$
for all $x \in E$ and $x^*, y^* \in E^*$.
Let $E$ be a reflexive, strictly convex, and smooth Banach space and $C$ a nonempty, closed, and convex subset of $E$. Then, by [21], for each $x \in E$, there exists a unique element $u \in C$ (denoted by $\Pi_C x$) such that
$\phi(u, x) = \min_{y \in C}\phi(y, x).$
The mapping Π C : E C , defined by Π C x = u , is called the generalized projection operator (see [22]), which has the following important characteristic.
Lemma 2
([23]). Let $C$ be a nonempty, closed, and convex subset of a smooth Banach space $E$ and let $u \in E$. Then, $v = \Pi_C u$ if and only if
$\langle v - w, Ju - Jv \rangle \ge 0, \quad \forall w \in C.$
In the sequel, we shall use the following results.
Lemma 3
([19]). Let $E$ be a uniformly smooth Banach space and $r > 0$. Then, there exists a continuous, strictly increasing, and convex function $g : [0, 2r] \to [0, \infty)$ such that $g(0) = 0$ and
$\phi(u, J^{-1}(tJv + (1 - t)Jw)) \le t\phi(u, v) + (1 - t)\phi(u, w) - t(1 - t)g(\|Jv - Jw\|)$
for all $t \in [0, 1]$, $u \in E$, and $v, w \in B_r := \{z \in E : \|z\| \le r\}$.
Lemma 4
([15]). Let $E$ be a real uniformly convex Banach space and $r > 0$. Let $B_r(0) := \{x \in E : \|x\| \le r\}$. Then, there exists a continuous, strictly increasing, convex function $g : [0, 2r] \to \mathbb{R}$ such that $g(0) = 0$ and
$\|\beta y + (1 - \beta)z\|^2 \le \beta\|y\|^2 + (1 - \beta)\|z\|^2 - \beta(1 - \beta)g(\|y - z\|),$
for all $y, z \in B_r$ and $\beta \in [0, 1]$.
Lemma 5
([20]). Let $E$ be a uniformly convex and smooth Banach space and let $\{u_n\}$ and $\{v_n\}$ be two sequences in $E$. If $\lim_{n\to\infty}\phi(u_n, v_n) = 0$ and either $\{u_n\}$ or $\{v_n\}$ is bounded, then $\lim_{n\to\infty}\|u_n - v_n\| = 0$.
Lemma 6
([15]). Let E be a 2-uniformly convex and smooth real Banach space. Then, there exists a positive real-valued constant α such that
$\alpha\|x - y\|^2 \le \phi(x, y), \quad \forall x, y \in E.$
Lemma 7
([15]). Let $E$ be a 2-uniformly smooth Banach space. Then, there exists $s > 0$ such that, for all $x, y \in E$,
$\|x + y\|^2 \le \|x\|^2 + 2\langle y, Jx \rangle + 2s^2\|y\|^2.$
Definition 1.
A mapping $T : C \to E$ is said to be:
(i)
quasi-nonexpansive (see [20]) if $F(T) \neq \emptyset$ and
$\phi(p, Tx) \le \phi(p, x), \quad \forall x \in C,\ p \in F(T);$
(ii)
firmly nonexpansive type if, for all $x, y \in C$, we have
$\phi(Tx, Ty) + \phi(Ty, Tx) + \phi(Tx, x) + \phi(Ty, y) \le \phi(Tx, y) + \phi(Ty, x);$
(iii)
quasi-$\phi$-strictly pseudocontractive (see [24]) if $F(T) \neq \emptyset$ and there exists a constant $k \in [0, 1)$ such that
$\phi(p, Tx) \le \phi(p, x) + k\phi(x, Tx), \quad \forall x \in C \text{ and } p \in F(T);$
(iv)
$(\eta, s)$-demigeneralized (see [25]) if $F(T) \neq \emptyset$ and there exist $\eta \in (-\infty, 1)$ and $s \in [0, \infty)$ such that, for any $x \in C$ and $q \in F(T)$, we have
$2\langle x - q, Jx - JTx \rangle \ge (1 - \eta)\phi(x, Tx) + s\phi(Tx, x).$
In particular, $T$ is an $(\eta, 0)$-demigeneralized mapping if and only if
$2\langle x - q, Jx - JTx \rangle \ge (1 - \eta)\phi(x, Tx).$
A set-valued operator $A : C \to 2^{E^*}$ is said to be: (i) monotone if, for all $x, y \in C$, $u \in Ax$, and $v \in Ay$,
$\langle x - y, u - v \rangle \ge 0;$
(ii) maximal monotone if $A$ is monotone and its graph, $G(A) := \{(x, y) \in E \times E^* : y \in A(x)\}$, is not properly contained in the graph of any other monotone operator. It is known that, when $A$ is a maximal monotone operator and $r > 0$, the resolvent of $A$ is defined by $Q_r x = (J + rA)^{-1}Jx$ for $x \in E$. Moreover, the set of null points of $A$ is defined by $A^{-1}(0) := \{w \in E : 0 \in Aw\}$. Note that $A^{-1}(0)$ is closed and convex, and $F(Q_r) = A^{-1}(0)$ (see [18]).
Lemma 8
([26,27]). Let $C$ be a nonempty, closed, and convex subset of a strictly convex, smooth, and reflexive Banach space $E$, let $r > 0$, and let $A \subset E \times E^*$ be a monotone operator such that $D(A) \subset C \subset J^{-1}R(J + rA)$. Then, the resolvent of $A$, defined by $Q_r x = (J + rA)^{-1}Jx$ for all $x \in C$, is a firmly nonexpansive type mapping.
Lemma 9
([28]). Let $E$ be a reflexive, smooth, and strictly convex Banach space and $A : E \to 2^{E^*}$ a maximal monotone operator such that $A^{-1}(0) \neq \emptyset$, and let $Q_r = (J + rA)^{-1}J$ for all $r > 0$. Then,
$\phi(u, Q_r v) + \phi(Q_r v, v) \le \phi(u, v), \quad \forall u \in F(Q_r),\ v \in E.$
Lemma 10
([25]). Let $E$ be a smooth Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $\eta$ be a real number with $\eta \in (-\infty, 1)$ and $s$ a real number with $s \in [0, \infty)$. Let $U$ be an $(\eta, s)$-demigeneralized mapping of $C$ into $E$. Then, $F(U)$ is closed and convex.
Lemma 11
([29]). Let $E$ be a smooth Banach space and $C$ a nonempty, closed, and convex subset of $E$. Let $k \in (-\infty, 0]$ and let $T : C \to E$ be a $(k, 0)$-demigeneralized mapping with $F(T) \neq \emptyset$. Let $\lambda$ be a real number in $(0, 1]$ and define $T_\lambda = J^{-1}((1 - \lambda)J + \lambda JT)$, where $J$ is the duality mapping on $E$. Then, $T_\lambda$ is a quasi-nonexpansive mapping of $C$ into $E$ and $F(T) = F(T_\lambda)$.
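The relaxation $T_\lambda$ of Lemma 11 is used throughout the main algorithm; in a Hilbert space it is simply $(1 - \lambda)I + \lambda T$. The short check below is an added illustration with an assumed demo mapping $T$, verifying that $T_\lambda$ keeps the fixed points of $T$ and behaves quasi-nonexpansively toward them.

```python
# Added illustration: in a Hilbert space the relaxed mapping T_lambda is
# (1 - lam)*I + lam*T and F(T_lambda) = F(T) (Lemma 11).  T below is an assumed
# demo mapping with the single fixed point 0.
import numpy as np

T = lambda x: -0.5 * x                      # demo map, F(T) = {0}
lam = 0.3
T_lam = lambda x: (1 - lam) * x + lam * T(x)

x = np.array([1.0, -2.0, 3.0])
print(np.allclose(T_lam(np.zeros(3)), 0.0))             # fixed point of T is kept by T_lam
print(np.linalg.norm(T_lam(x)) <= np.linalg.norm(x))    # no expansion away from F(T) = {0}
```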
Lemma 12
([30]). If $\{a_n\}$ is a sequence of nonnegative real numbers satisfying the inequality
$a_{n+1} \le (1 - \alpha_n)a_n + \alpha_n\sigma_n + \gamma_n, \quad n \ge 0,$
where (i) $\{\alpha_n\} \subset [0, 1]$ and $\sum\alpha_n = \infty$; (ii) $\limsup_{n\to\infty}\sigma_n \le 0$; (iii) $\gamma_n \ge 0$ ($n \ge 0$) and $\sum\gamma_n < \infty$; then $a_n \to 0$ as $n \to \infty$.
Definition 2.
A self-mapping T on a Banach space is said to be demiclosed at y, if for any sequence { x n } which converges weakly to x, and if the sequence { T x n } converges strongly to y, then T ( x ) = y . In particular, if y = 0 , then T is demiclosed at 0.
Lemma 13
([31]). Let $\{a_n\}$ be a sequence of real numbers such that there exists a subsequence $\{a_{n_i}\}$ of $\{a_n\}$ with $a_{n_i} < a_{n_i + 1}$ for all $i \in \mathbb{N}$. Then, there exists a nondecreasing sequence $\{m_k\} \subset \mathbb{N}$ such that $m_k \to \infty$ and the following properties are satisfied by all (sufficiently large) numbers $k \in \mathbb{N}$:
$a_{m_k} \le a_{m_k + 1} \quad \text{and} \quad a_k \le a_{m_k + 1}.$
In fact, $m_k = \max\{j \le k : a_j < a_{j+1}\}$.

3. Results

In this section, we present our algorithm and its convergence analysis as follows:
Theorem 1.
Let $E_1$ and $E_2$ be uniformly smooth and 2-uniformly convex real Banach spaces. Let $T : E_1 \to E_1$ be an $(\eta, 0)$-demigeneralized mapping which is demiclosed at zero, with $\eta \in (-\infty, \lambda]$ and $\lambda \in [0, 1)$, such that $F(T) \neq \emptyset$. Let $U : E_2 \to E_2$ be a $(\theta, 0)$-demigeneralized mapping which is demiclosed at zero, with $\theta \in (-\infty, 0]$, such that $F(U) \neq \emptyset$. Let $A$ be a maximal monotone operator from $E_1$ into $2^{E_1^*}$ such that $A^{-1}(0) \neq \emptyset$, and let $Q_\mu$ be the generalized resolvent operator of $A$ for $\mu > 0$. Let $B : E_1 \to E_2$ be a bijective bounded linear operator with adjoint $B^* : E_2^* \to E_1^*$. Suppose that $\Gamma := F(T) \cap A^{-1}(0) \cap B^{-1}(F(U)) \neq \emptyset$. Let $\{u_n\}$ be a sequence in $E_1$ such that $u_n \to u \in E_1$ and let $\{x_n\} \subset E_1$ be the sequence generated by the following iterative scheme: given $x_1 \in E_1$, compute
$w_n = J_{E_1}^{-1}(J_{E_1}x_n - \gamma_n B^*(J_{E_2}(Bx_n) - J_{E_2}U(Bx_n))),$
$\bar{x}_{n+1} = J_{E_1}^{-1}(\beta_n J_{E_1}(w_n) + (1 - \beta_n)J_{E_1}T_\lambda Q_{\mu_n}(w_n)),$
$y_n = J_{E_1}^{-1}(t_n J_{E_1}(x_n) + (1 - t_n)J_{E_1}\bar{x}_{n+1}),$
$x_{n+1} = J_{E_1}^{-1}[\alpha_n J_{E_1}(u_n) + (1 - \alpha_n)J_{E_1}Q_{\mu_n}y_n], \quad n \ge 1,$ (17)
where $T_\lambda := J_{E_1}^{-1}((1 - \lambda)J_{E_1} + \lambda J_{E_1}T)$, $\{\alpha_n\}, \{\beta_n\}, \{t_n\}$ are sequences in $(0, 1)$, and $\{\mu_n\} \subset (0, \infty)$. Suppose the following conditions are satisfied:
(C1)
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2)
for $\alpha > 0$ as in Lemma 6 and any fixed value $a > 0$, the stepsize $\gamma_n$ is chosen as follows:
$0 < a \le \gamma_n \le \frac{\alpha(1 - \theta)\|Bx_n - UBx_n\|^2}{s^2\|B^*(J_{E_2}(Bx_n) - J_{E_2}(UBx_n))\|^2} - a,$ (18)
if $Bx_n \neq UBx_n$; otherwise, $\gamma_n = \gamma$ ($\gamma \ge 0$ arbitrary);
(C3)
$0 < t \le t_n \le t^* < 1$ and $0 < \beta \le \beta_n \le \beta^* < 1$, where $[t, t^*] \subset (0, 1)$ and $[\beta, \beta^*] \subset (0, 1)$.
Then, $\{x_n\}_{n=1}^{\infty}$ converges strongly to $z^* \in \Gamma$, where $z^* = \Pi_\Gamma u$.
Remark 2.
Since $T : E_1 \to E_1$ and $U : E_2 \to E_2$ are $(\eta, 0)$- and $(\theta, 0)$-demigeneralized mappings, respectively, $F(T)$ and $F(U)$ are closed and convex sets by Lemma 10. Since $B : E_1 \to E_2$ is linear and bounded and $B^{-1}$ exists, $B^{-1}$ is also linear and bounded (continuous). Since $F(U)$ is closed and convex, $B^{-1}(F(U))$ is also closed and convex, and $A^{-1}(0)$ is closed and convex (see [18] for details). Hence, $\Gamma := F(T) \cap A^{-1}(0) \cap B^{-1}(F(U))$ is nonempty, closed, and convex, and therefore $\Pi_\Gamma$ from $E_1$ onto $\Gamma$ is well defined.
Furthermore, $T_\lambda := J_{E_1}^{-1}((1 - \lambda)J_{E_1} + \lambda J_{E_1}T)$ for any $\lambda \in [0, 1)$ is a quasi-nonexpansive mapping and $F(T) = F(T_\lambda)$ by Lemma 11. Let $z \in \Gamma$; then $z \in F(T) = F(T_\lambda)$ and $z \in B^{-1}(F(U))$, so $Bz = UBz$, that is, $Bz - UBz = 0$, and also $z = Q_{\mu_n}z$. In addition, since $Q_{\mu_n}$ is the generalized resolvent for $\mu_n > 0$, Lemma 8 implies that $Q_{\mu_n}$ is a firmly nonexpansive type operator; hence, for $z \in F(Q_{\mu_n})$ and any $x_n \in E_1$, we have $\phi(z, Q_{\mu_n}x_n) \le \phi(z, x_n)$ for all $n \ge 1$.
Proof. 
Let $z \in \Gamma$. Then, from Lemmas 6 and 7 and (15), we have
ϕ ( z , w n ) = ϕ ( z , J E 1 1 ( J E 1 x n γ n B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) ) ) = V ( z , J E 1 x n γ n B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) ) = | | z | | 2 2 z , ( J E 1 x n γ n B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) ) + | | J x n γ n B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 | | z | | 2 2 z , J E 1 x n + 2 γ n z , B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) + | | J E 1 ( x n ) | | 2 2 γ n x n , B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) + γ n 2 s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 = ϕ ( z , x n ) 2 γ n B ( x n ) B ( z ) , ( J E 2 ( B x n ) ) J E 2 ( U B x n ) + γ n 2 s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 ϕ ( z , x n ) γ n ( 1 η ) ϕ ( B x n , U ( B x n ) ) + γ n 2 s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 ϕ ( z , x n ) γ n α ( 1 η ) | | B x n U ( B x n | | 2 + γ n 2 s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 = ϕ ( z , x n ) γ n ( α ( 1 η ) | | ( B x n ) U ( B x n ) | | 2 γ n s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 ) ϕ ( z , x n ) ,
where the last estimation follows from the stepsize rule (18). In addition, from ( y n ) in (17) and (24), we have
  ϕ ( z , y n ) = ϕ ( z , J E 1 1 ( t n J E 1 ( x n ) + ( 1 t n ) J E 1 x ¯ n + 1 ) )   t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , x ¯ n + 1 )   = t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , J E 1 1 ( β n J E 1 w n + ( 1 β n ) J E 1 T λ Q μ n w n ) ) (20) t n ϕ ( z , x n ) + ( 1 t n ) [ β n ϕ ( z , w n ) + ( 1 β n ) ϕ ( z , Q μ n w n ) ] (21) = t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , w n ) (22) ϕ ( z , x n ) ,
and
  ϕ ( z , x n + 1 ) = ϕ ( z , J E 1 1 ( α n J E 1 u n + ( 1 α n ) J E 1 Q μ n y n ) ) (23) α n ϕ ( z , u n ) + ( 1 α n ) ϕ ( z , y n ) (24) α n ϕ ( z , u n ) + ( 1 α n ) ϕ ( z , x n ) .
Since $\{u_n\}$ converges, it is bounded, and so, by (A1), there exists $M > 0$ such that $\sup_n\phi(z, u_n) \le M$. Now, let $M^* = \max\{M, \phi(z, x_1)\}$; in particular, $\phi(z, x_1) \le M^*$. Assuming that $\phi(z, x_k) \le M^*$ for some $k \ge 1$, then, by (24), we have
$\phi(z, x_{k+1}) \le \alpha_k\phi(z, u_k) + (1 - \alpha_k)\phi(z, x_k) \le \alpha_k M^* + (1 - \alpha_k)M^* = M^*.$
Hence, by induction, $\phi(z, x_n) \le M^*$ for all $n \in \mathbb{N}$. Therefore, $\{x_n\}_{n=1}^{\infty}$ is bounded.
Furthermore, from (19), (21) and (23), we get
ϕ ( z , x n + 1 ) α n ϕ ( z , u n ) + ( 1 α n ) [ t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , w n ) ] α n ϕ ( z , u n ) + t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , w n ) α n ϕ ( z , u n ) + ϕ ( z , x n ) γ n ( 1 t n ) ( α ( 1 η ) | | ( B x n ) U ( B x n ) | | 2 γ n s 2 | | B * ( J E 2 ( B x n ) ) J E 2 ( U B x n ) | | 2 ) .
From (C2), we have that γ n α ( 1 η ) | | B x n U ( B x n ) | | 2 s 2 | | B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 a , hence
γ n s 2 | | B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 α ( 1 η ) | | ( B x n ) U ( B x n ) | | 2 a s 2 | | B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 .
This implies that
a s 2 | | B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 α ( 1 η ) | | ( B x n ) U ( B x n ) | | 2 γ n s 2 | | B * ( J E 2 ( B x n ) J E 2 U ( B x n ) ) | | 2 .
Hence, from (25) and (26), we get
γ n a 2 s 2 ( 1 t n ) | | B * ( J E 2 ( B x n ) J E 2 ( U ( B x n ) ) ) | | 2 α n ϕ ( z , u n ) + ϕ ( z , x n ) ϕ ( z , x n + 1 ) .
The remaining part of the proof will be divided into two cases.
Case I: Suppose that $\{\phi(z, x_n)\}_{n=1}^{\infty}$ is a non-increasing sequence of real numbers. Since this sequence is also bounded, it converges, and therefore
$\lim_{n\to\infty}(\phi(z, x_n) - \phi(z, x_{n+1})) = 0.$
Thus, from (C1), (C3), (27) and (28), we have
lim n | | B * ( J E 2 ( B x n ) J E 2 ( U ( B x n ) ) ) | | = 0 .
In addition, combining (C2), (C3) and (25), we have
0 a ( 1 t * ) α ( 1 η ) | | B x n U ( B x n ) | | 2 α n ϕ ( z , x n ) + ϕ ( z , x n ) ϕ ( z , x n + 1 ) + ϵ s 2 | | B * ( J E 2 ( B x n ) J E 2 ( U ( B x n ) ) ) | | 2 .
It follows from (28) and (29) that
lim n | | J E 2 ( B x n ) J E 2 ( U ( B x n ) ) | | = 0
as n . Since E 2 is uniformly smooth, then, from (UM5) and (30), we obtain
lim n | | B ( x n ) U ( B x n ) | | = lim n | | J E 2 1 ( J E 2 ) ( B x n ) J E 2 1 ( J E 2 ( U ( B x n ) ) ) | | = 0 .
Furthermore, using (10) and Lemma 4, we get
ϕ ( z , x ¯ n + 1 ) = ϕ ( z , J E 1 1 ( β n J E 1 w n + ( 1 β n ) J E 1 T λ Q μ n w n ) ) = V ( z , β n J E 1 w n + ( 1 β n ) J E 1 T λ Q μ n w n ) = | | z | | 2 2 z , β n J E 1 w n + ( 1 β n ) J E 1 T λ Q μ n w n + | | β n J E 1 w n + ( 1 β n ) J E 1 T λ Q μ n w n | | 2 = β n z n 2 + ( 1 β n ) z 2 2 β n z , J E 1 w n 2 ( 1 β n ) z , J E 1 T λ Q μ n w n + β n J E 1 w n 2 + ( 1 β n ) J E 1 T λ Q μ n w n 2 β n ( 1 β n ) g ( J E 1 w n J E 1 T λ Q μ n w n ) β n ϕ ( z , w n ) + ( 1 β n ) ϕ ( z , T λ Q μ n w n ) β n ( 1 β n ) g ( | | J E 1 w n J E 1 T λ Q μ n w n | | ) ϕ ( z , x n ) ( 1 t n ) β n ( 1 β n ) g ( | | J E 1 w n J E 1 T λ Q μ n w n | | ) .
In addition, with (20), (23) and (32), we get
ϕ ( z , x n + 1 ) α n ϕ ( z , u n ) + ( 1 α n ) [ t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , x ¯ n + 1 ) ] α n ϕ ( z , u n ) + t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , x ¯ n + 1 ) α n ϕ ( z , u n ) + t n ( z , x n ) + ( 1 t n ) [ ϕ ( z , x n ) β n ( 1 β n ) g ( J E 1 w n J E 1 T λ Q μ n w n ) ] α n ϕ ( z , u n ) + ϕ ( z , x n ) β n ( 1 β n ) g ( J E 1 w n J E 1 T λ Q μ n w n ) .
Thus, from (C3) and (33), we obtain
0 ( 1 t * ) β n ( 1 β * ) ) g ( | | J E 1 w n J E 1 T λ Q μ n w n | | ) ( 1 t n ) β n ( 1 β n ) ) g ( | | J E 1 w n J E 1 T λ Q μ n w n | | ) α n ϕ ( z , u n ) + ϕ ( z , x n ) ϕ ( z , x n + 1 ) .
Then, using (C1), we get
lim n g ( | | J E 1 w n J E 1 T λ Q μ n w n | | ) = 0 .
Using property of g in Lemma 4, we obtain
lim n | | J E 1 w n J E 1 T λ Q μ n w n | | = 0 .
Since E 1 * is uniformly smooth, then, from (34), we get
lim n | | w n T λ Q μ n w n | | = 0 .
In addition, by Lemma 9, (20) and (23), we get
ϕ ( z , x n + 1 ) α n ϕ ( z , u n ) + ( 1 α n ) [ t n ϕ ( z , x n ) + ( 1 t n ) [ β n ϕ ( z , w n ) + ( 1 β n ) [ ϕ ( z , w n ) ϕ ( Q μ n w n , w n ) ] ] ] α n ϕ ( z , u n ) + ϕ ( z , x n ) ( 1 α n ) ( 1 t n ) ( 1 β n ) ϕ ( Q μ n w n , w n ) .
Thus, using (C3), we have
0 < ( 1 t * ) ( 1 β * ) ( 1 α n ) ϕ ( Q μ n w n , w n ) ( 1 t n ) ( 1 β n ) ϕ ( Q μ n w n , w n ) α n ϕ ( z , u n ) + ϕ ( z , x n ) ϕ ( z , x n + 1 ) .
Then, applying (C1) and (28), we get
lim n ϕ ( Q μ n w n , w n ) = 0 .
From Lemma 5, we get
lim n | | Q μ n w n w n | | = 0 .
Since E 1 is uniformly smooth, then
lim n | | J E 1 Q μ n w n J E 1 w n | | = 0 .
It follows from (35) and (36) that
lim n | | T λ Q μ n w n Q μ n w n | | = 0 .
Since $E_1$ is uniformly smooth, $J_{E_1}$ is uniformly norm-to-norm continuous on bounded subsets of $E_1$, and from (38) we obtain
lim n | | J E 1 T λ Q μ n w n J E 1 Q μ n w n | | = 0 .
As we know that T λ = J E 1 1 ( ( 1 λ ) J E 1 + λ J E 1 T ) , thus
| | J E 1 T λ Q μ n w n J E 1 Q μ n w n | | = λ | | J E 1 T Q μ n w n J E 1 Q μ n w n | | .
Since λ > 0 , it follows from (39) that
lim n | | J E 1 T Q μ n w n J E 1 Q μ n w n | | = 0 .
Thus, we have
lim n | | T Q μ n w n Q μ n w n | | = 0 .
In addition, since { α n } ( 0 , 1 ) , then
ϕ ( z , x n + 1 ) α n ϕ ( z , u n ) + ( 1 α n ) ϕ ( z , Q μ n y n ) α n ϕ ( z , u n ) + ( 1 α n ) ϕ ( z , y n ) α n ϕ ( z , u n ) + ϕ ( z , J E 1 1 ( t n J E 1 x n + ( 1 t n ) J E 1 x ¯ n + 1 ) ) = α n ϕ ( z , u n ) + V ( z , t n J E 1 x n + ( 1 t n ) J E 1 x ¯ n + 1 ) = α n ϕ ( z , u n ) + | | z | | 2 2 z , t n J E 1 x n + ( 1 t n ) J E 1 x ¯ n + 1 + | | t n J E 1 x n + ( 1 t n ) J E 1 x ¯ n + 1 | | 2 α n ϕ ( z , u n ) + t n ϕ ( z , x n ) + ( 1 t n ) ϕ ( z , x ¯ n + 1 ) t n ( 1 t n ) | | J E 1 x n J E 1 x ¯ n + 1 | | 2 = α n ϕ ( z , u n ) + ϕ ( z , x n ) t n ( 1 t n ) | | J E 1 x n J E 1 x ¯ n + 1 | | 2 .
It follows from (C1), (C3) and (28) that
0 t ( 1 t * ) | | J E 1 x n J E 1 x ¯ n + 1 | | 2 t n ( 1 t n ) | | J E 1 x n J E 1 x ¯ n + 1 | | 2 α n ϕ ( z , u n ) + ϕ ( z , x n ) ϕ ( z , x n + 1 ) 0 as n .
Hence,
lim n | | J E 1 x n J E 1 x ¯ n + 1 | | = 0 .
This implies that
lim n | | x n x ¯ n + 1 | | = 0 .
In addition, from (A3) and (17), we get
ϕ ( y n , x ¯ n + 1 ) ] t n ϕ ( x n , x ¯ n + 1 ) + ( 1 t n ) ϕ ( x ¯ n + 1 , x ¯ n + 1 ) t n [ | | x n | | | | J E 1 x n J E 1 x ¯ n + 1 | | + | | x ¯ n + 1 | | | | x n x ¯ n + 1 | | ] .
It follows from (42) and (43) that
lim n ϕ ( y n , x ¯ n + 1 ) = 0 .
Thus, from Lemma 5, we get
lim n | | y n x ¯ n + 1 | | = 0 .
Using Lemma 9, ( x n + 1 ) in (17) and by (22), we have
ϕ ( Q μ n y n , y n ) ϕ ( z , y n ) ϕ ( z , Q μ n y n ) = ϕ ( z , y n ) ϕ ( z , x n + 1 ) + ϕ ( z , x n + 1 ) ϕ ( z , Q μ n y n ) ϕ ( z , x n ) ϕ ( z , x n + 1 ) + α n ϕ ( z , u n ) + ( 1 α n ) ϕ ( z , Q μ y n ) ϕ ( z , Q μ y n ) ϕ ( z , x n ) ϕ ( z , x n + 1 ) + α n [ ϕ ( z , u n ) ϕ ( z , Q μ y n ) ] .
It follows from (C1) and (28) that
lim n ϕ ( Q μ n y n , y n ) = 0 ,
and, by Lemma 5, we have
lim n | | Q μ n y n y n | | = 0 .
Combining (17) and (C1), we have
$\phi(Q_{\mu_n}y_n, x_{n+1}) \le \alpha_n\phi(Q_{\mu_n}y_n, u_n) + (1 - \alpha_n)\phi(Q_{\mu_n}y_n, Q_{\mu_n}y_n) \to 0$
as $n \to \infty$; thus, by Lemma 5, we obtain
lim n | | Q μ n y n x n + 1 | | = 0 .
Since
| | x n x n + 1 | | | | x n x ¯ n + 1 | | + | | x ¯ n + 1 y n | | + | | y n Q μ n y n | | + | | Q μ n y n x n + 1 | | ,
then it follows from (43), (45), (47) and (49) that
lim n | | x n x n + 1 | | = 0 .
Furthermore, since $E_1$ is reflexive and $\{x_n\}_{n=1}^{\infty}$ is bounded, there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that $\{x_{n_j}\}$ converges weakly to some $x^*$ in $E_1$. Since $B$ is linear and bounded, it is continuous, so $x_{n_j} \rightharpoonup x^*$ implies $Bx_{n_j} \rightharpoonup Bx^*$. Thus, by (31), $\lim_{n\to\infty}\|Bx_n - U(Bx_n)\| = 0$, and, since $U$ is demiclosed at zero, $Bx^* = U(Bx^*)$, that is, $x^* \in B^{-1}(F(U))$. From (30), we get $\lim_{n\to\infty}\|x_n - w_n\| = 0$; thus $w_{n_j} \rightharpoonup x^*$, and, using (36), $\{Q_{\mu_{n_j}}w_{n_j}\}$ converges weakly to $x^*$. Since $Q_{\mu_n}$ is the generalized resolvent of $A$ in $E_1$, we have
J E 1 w n J E 1 Q μ n w n μ n A Q μ n w n , n 1 .
Since A is monotone, we have
$0 \le \left\langle c - Q_{\mu_{n_j}}w_{n_j},\; d - \frac{J_{E_1}w_{n_j} - J_{E_1}Q_{\mu_{n_j}}w_{n_j}}{\mu_{n_j}} \right\rangle$
for all $(c, d) \in A$. We know from (37) that $\lim_{j\to\infty}\|J_{E_1}w_{n_j} - J_{E_1}Q_{\mu_{n_j}}w_{n_j}\| = 0$; then, since $\mu_{n_j} > 0$ for all $j \ge 1$, we have $\langle c - x^*, d - 0 \rangle \ge 0$ for all $(c, d) \in A$, and, by the maximal monotonicity of $A$, we obtain $x^* \in A^{-1}(0)$. In addition, since $Q_{\mu_{n_j}}w_{n_j} \rightharpoonup x^*$ and we know from (41) that $\lim_{n\to\infty}\|Q_{\mu_n}w_n - T(Q_{\mu_n}w_n)\| = 0$, the demiclosedness of $T$ at zero gives $x^* \in F(T)$. Therefore, $x^* \in \Gamma := F(T) \cap A^{-1}(0) \cap B^{-1}(F(U))$.
Next, we show that { x n } converges strongly to Π Γ u . Letting z * = Π Γ u , since x n j converges weakly to x * , then, using Lemma 2, we get
lim sup n x n z * , J E 1 ( u n ) J E 1 z * = lim j x n j z * , J E 1 ( u n j ) J E 1 z * = x * z * , J E 1 ( u ) J E 1 z * 0 .
Observe that
x n + 1 z * , J E 1 ( u n ) J E 1 z * = x n + 1 x n , J E 1 ( u n ) J E 1 z * + x n z * , J E 1 ( u n ) J E 1 z * .
It follows from (50) and (51) that
lim sup n x n + 1 z * , J E 1 ( u n ) J E 1 ( z * ) 0 .
Finally, using (11), Lemma 1, and (22), we obtain
ϕ ( z * , x n + 1 ) = ϕ ( z * , J E 1 1 ( α n ) J E 1 u n + ( 1 α n ) J E 1 Q μ n y n ) = V ( z * , α n ) J E 1 u n + ( 1 α n ) J E 1 Q μ n y n ) V ( z * , α n J E 1 u n + ( 1 α n ) J E 1 Q μ n y n α n ( J E 1 u n J E 1 z * ) ) 2 x n + 1 z * , α n ( J E 1 u n J E 1 z * ) ( 1 α n ) ϕ ( z * , x n ) + 2 α n x n + 1 z * , J E 1 u n J E 1 z * .
Thus, with the help of (C1), (52) and applying Lemma 12, we get ϕ ( z * , x n ) 0 as n ; then, by Lemma 6, we obtain | | x n z * | | 0 as n . Hence, x n z * : = Π Γ u , where Γ : = F ( T ) A 1 ( 0 ) B 1 ( F ( U ) ) .
Case II: Suppose that { ϕ ( z , x n ) } n = 1 is not a non-increasing sequence. Then, let { x n i } be a subsequence of { x n } such that ϕ ( z , x n i ) < ϕ ( z , x n i + 1 ) for all i N . Then, by Lemma 13, there exists a nondecreasing sequence { m s } N such that m s as s ,
ϕ ( z , x m s ) ϕ ( z , x m s + 1 ) and ϕ ( z , x s ) ϕ ( z , x m s + 1 ) .
Since { ϕ ( z , x m s ) } is bounded, then lim s ϕ ( z , x m s ) exists. Therefore, using the same method of arguments as in Case I (from (30)–(50)), we get
lim s | | x m s + 1 x m s | | = 0 .
Similarly as in the proof of Case 1, we obtain
lim sup m x m s + 1 z * , J E 1 ( u m s ) J E 1 z * 0 .
In addition, from (53), we have
ϕ ( z * , x m s + 1 ) ( 1 α m s ) ϕ ( z * , x m s ) + 2 α m s x m s + 1 z * , J E 1 u m s J E 1 z *
which implies
α m s ϕ ( z * , x m s ) ϕ ( z * , x m s ) ϕ ( z * , x m s + 1 ) + 2 α m s x m s + 1 z * , J E 1 u m s J E 1 z *
and, since α m s > 0 for all s N and ϕ ( z * , x m s ) ϕ ( z * , x m s + 1 ) , then
ϕ ( z * , x m s ) 2 x m s + 1 z * , J E 1 u m s J E 1 z * .
Hence, from (54), we obtain lim s ϕ ( z * , x m s ) = 0 , then, with (55), we have lim s ϕ ( z * , x m s + 1 ) = 0 . However, we know that ϕ ( z * , x s ) ϕ ( z * , x m s + 1 ) for all s N , thus lim s ϕ ( z * , x s ) = 0 . Therefore, using Lemma 6, we obtain | | x s z * | | 0 as s . Hence, x s z * : = Π Γ u , where Γ : = F ( T ) A 1 ( 0 ) B 1 ( F ( U ) ) . This completes the proof. □
We obtain the following results as consequences of our main result.
(i) Let $C$ and $Q$ be nonempty, closed, and convex subsets of $E_1$ and $E_2$, respectively. Take $A = N_C$, the normal cone operator at $C$, which is maximal monotone (see [32]) and defined by
$N_C(x) = \begin{cases} \{w \in E_1^* : \langle w, z - x \rangle \le 0,\ \forall z \in C\}, & \text{if } x \in C, \\ \emptyset, & \text{if } x \notin C; \end{cases}$
then the resolvent operator with respect to $A$ is the generalized projection operator $\Pi_C$. In addition, taking $U = P_Q$, the metric projection from $E_2$ onto $Q$, the GSFP reduces to the SFP. Note that the class of $(0, 0)$-demigeneralized mappings includes the nonexpansive mappings with fixed points. Hence, from Theorem 1, we obtain the following result for solving the SFP in real Banach spaces.
Corollary 1.
Let $C$ and $Q$ be nonempty, closed, and convex subsets of two uniformly smooth and 2-uniformly convex real Banach spaces $E_1$ and $E_2$, respectively. Let $J_{E_1}$ and $J_{E_2}$ be the normalized duality mappings on $E_1$ and $E_2$, respectively. Let $T : E_1 \to E_1$ be an $(\eta, 0)$-demigeneralized mapping which is demiclosed at zero, with $\eta \in (-\infty, \lambda]$ and $\lambda \in [0, 1)$, such that $F(T) \neq \emptyset$. Let $B : E_1 \to E_2$ be a bijective bounded linear operator with adjoint $B^* : E_2^* \to E_1^*$. Suppose that $\Gamma = F(T) \cap SFP(C, Q, B) \neq \emptyset$. Let $\{u_n\}$ be a sequence in $E_1$ such that $u_n \to u \in E_1$, and let $\{x_n\}_{n=1}^{\infty}$ be the sequence in $E_1$ generated by $x_1 \in E_1$ and
$w_n = J_{E_1}^{-1}(J_{E_1}x_n - \gamma_n B^*(J_{E_2}(Bx_n) - J_{E_2}P_Q(Bx_n))),$
$\bar{x}_{n+1} = J_{E_1}^{-1}(\beta_n J_{E_1}(w_n) + (1 - \beta_n)J_{E_1}T_\lambda\Pi_C(w_n)),$
$x_{n+1} = J_{E_1}^{-1}[\alpha_n J_{E_1}(u_n) + (1 - \alpha_n)J_{E_1}\Pi_C J_{E_1}^{-1}(t_n J_{E_1}(x_n) + (1 - t_n)J_{E_1}\bar{x}_{n+1})], \quad n \ge 1,$ (56)
where $T_\lambda = J_{E_1}^{-1}((1 - \lambda)J_{E_1} + \lambda J_{E_1}T)$ and $\{\alpha_n\}, \{\beta_n\}, \{t_n\}$ are sequences in $(0, 1)$. Suppose that the following conditions are satisfied:
(C1)
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2)
for $\alpha > 0$ as in Lemma 6 and any fixed value $a > 0$, the stepsize $\gamma_n$ is chosen as follows:
$0 < a \le \gamma_n \le \frac{\alpha\|Bx_n - P_QBx_n\|^2}{s^2\|B^*(J_{E_2}(Bx_n) - J_{E_2}(P_QBx_n))\|^2} - a,$
if $Bx_n \neq P_QBx_n$; otherwise, $\gamma_n = \gamma$ ($\gamma \ge 0$ arbitrary);
(C3)
$0 < t \le t_n \le t^* < 1$ and $0 < \beta \le \beta_n \le \beta^* < 1$, where $[t, t^*] \subset (0, 1)$ and $[\beta, \beta^*] \subset (0, 1)$.
Then, $\{x_n\}_{n=1}^{\infty}$ converges strongly to $z^* \in \Gamma$, where $z^* = \Pi_\Gamma u$.
(ii) When E 1 = H 1 and E 2 = H 2 , where H 1 and H 2 are real Hilbert spaces, we obtained the following generalized explicit algorithm for solving the GSFP in real Hilbert spaces.
Corollary 2.
Let $H_1$ and $H_2$ be real Hilbert spaces. Let $T : H_1 \to H_1$ be an $(\eta, 0)$-demigeneralized mapping which is demiclosed at zero, with $\eta \in (-\infty, \lambda]$ and $\lambda \in [0, 1)$, such that $F(T) \neq \emptyset$. Let $U : H_2 \to H_2$ be a $(\theta, 0)$-demigeneralized mapping which is demiclosed at zero, with $\theta \in (-\infty, 0]$, such that $F(U) \neq \emptyset$. Let $A$ be a maximal monotone operator from $H_1$ into $2^{H_1}$ such that $A^{-1}(0) \neq \emptyset$, and let $J_\mu^A$ be the resolvent operator of $A$ for $\mu > 0$. Let $B : H_1 \to H_2$ be a bijective bounded linear operator with adjoint $B^* : H_2 \to H_1$. Suppose that $\Gamma = F(T) \cap A^{-1}(0) \cap B^{-1}(F(U)) \neq \emptyset$. Let $\{u_n\}$ be a sequence in $H_1$ such that $u_n \to u \in H_1$, and let $\{x_n\}_{n=1}^{\infty}$ be the sequence in $H_1$ generated by $x_1 \in H_1$ and
$w_n = x_n - \gamma_n B^*(Bx_n - U(Bx_n)),$
$\bar{x}_{n+1} = \beta_n w_n + (1 - \beta_n)T_\lambda J_{\mu_n}^A w_n,$
$x_{n+1} = \alpha_n u_n + (1 - \alpha_n)J_{\mu_n}^A(t_n x_n + (1 - t_n)\bar{x}_{n+1}), \quad n \ge 1,$ (57)
where $T_\lambda = (1 - \lambda)I + \lambda T$, $\{\alpha_n\}, \{\beta_n\}, \{t_n\}$ are sequences in $(0, 1)$, and $\{\mu_n\} \subset (0, \infty)$. Suppose the following conditions are satisfied:
(C1)
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=1}^{\infty}\alpha_n = \infty$;
(C2)
for $\alpha > 0$ as in Lemma 6 and any fixed value $a > 0$, the stepsize $\gamma_n$ is chosen as follows:
$0 < a \le \gamma_n \le \frac{\alpha(1 - \theta)\|Bx_n - UBx_n\|^2}{\|B^*(Bx_n - UBx_n)\|^2} - a,$
if $Bx_n \neq UBx_n$; otherwise, $\gamma_n = \gamma$ ($\gamma \ge 0$ arbitrary);
(C3)
$0 < t \le t_n \le t^* < 1$ and $0 < \beta \le \beta_n \le \beta^* < 1$, where $[t, t^*] \subset (0, 1)$ and $[\beta, \beta^*] \subset (0, 1)$.
Then, $\{x_n\}_{n=1}^{\infty}$ converges strongly to $z^* \in \Gamma$, where $z^* = P_\Gamma u$ and $P_\Gamma$ is the metric projection onto $\Gamma$.
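To make the Hilbert-space scheme (57) concrete, the sketch below implements it in $\mathbb{R}^n$ under illustrative assumptions that are not taken from the paper's experiments: $A$ is the identity operator (so $J_\mu^A x = x/(1+\mu)$ and $A^{-1}(0) = \{0\}$), $B = 2I$ is bijective, $U$ is the projection onto a half-space, $T$ is the projection onto the closed unit ball, and the Hilbert-space constants $\alpha = 1$, $\theta = 0$ are used in the self-adaptive stepsize (C2).

```python
# Added sketch of scheme (57) in R^n under illustrative assumptions (none of these
# choices come from the paper's experiments): A = identity (resolvent x/(1+mu),
# A^{-1}(0) = {0}), B = 2*I, U = projection onto Q = {y : y_i <= 1}, and
# T = projection onto the closed unit ball; in a Hilbert space alpha = 1, theta = 0.
import numpy as np

n = 5
B = 2.0 * np.eye(n)
proj_Q = lambda y: np.minimum(y, 1.0)                   # U = P_Q, (0,0)-demigeneralized
proj_ball = lambda x: x / max(1.0, np.linalg.norm(x))   # T, with F(T) = closed unit ball
resolvent = lambda x, mu: x / (1.0 + mu)                # J_mu^A for A = I

lam = 0.5
T_lam = lambda x: (1 - lam) * x + lam * proj_ball(x)    # T_lambda = (1 - lam)I + lam*T
u = np.ones(n)                                          # anchor sequence u_n = u
x = 10.0 * np.ones(n)

for k in range(1, 200):
    alpha_k, beta_k, t_k, mu_k = 1.0 / (k + 1), 0.5, 0.5, 1.0
    r = B @ x - proj_Q(B @ x)                           # residual B x_n - U(B x_n)
    if np.linalg.norm(r) > 1e-12:
        # self-adaptive stepsize (C2): a value strictly inside (0, ||r||^2/||B* r||^2)
        gamma_k = 0.5 * np.linalg.norm(r) ** 2 / np.linalg.norm(B.T @ r) ** 2
    else:
        gamma_k = 0.0
    w = x - gamma_k * (B.T @ r)
    x_bar = beta_k * w + (1 - beta_k) * T_lam(resolvent(w, mu_k))
    x = alpha_k * u + (1 - alpha_k) * resolvent(t_k * x + (1 - t_k) * x_bar, mu_k)

print(x)   # expected to approach the unique point of Gamma = {0}
```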

4. Numerical Examples

In this section, we present some numerical examples to illustrate the efficiency and performance of the proposed algorithm. We compare the performance of our iteration (17) with iterations (4)–(6). The numerical computations are carried out using MATLAB 2019b on a PC with an Intel(R) Core i7-600 CPU at 2.48 GHz and 8.0 GB RAM.
Example 1.
Let $E_1 = E_2 = \ell_2(\mathbb{R})$, where $\ell_2(\mathbb{R}) = \{u = (u_1, u_2, \ldots, u_k, \ldots),\ u_k \in \mathbb{R} : \sum_{k=1}^{\infty}|u_k|^2 < \infty\}$ and $\|u\|_2 = \left(\sum_{k=1}^{\infty}|u_k|^2\right)^{1/2}$ for all $u \in E_1$. Let $A : \ell_2 \to \ell_2$ and $B : \ell_2 \to \ell_2$ be two mappings defined by
$Au = 2u + (1, 1, 0, 0, \ldots) \quad \text{and} \quad Bu = 3u,$
where $u = (u_1, u_2, \ldots, u_k, \ldots) \in \ell_2$. We see that $A$ is maximal monotone and $B$ is a bounded linear operator. In addition, we define the mappings $T : \ell_2 \to \ell_2$ and $U : \ell_2 \to \ell_2$ by $Tu = \left(\frac{3u_1}{2}, \frac{3u_2}{2}, \ldots, \frac{3u_k}{2}, \ldots\right)$ and $Uv = \left(\frac{v_1}{2}, \frac{v_2}{2}, \ldots, \frac{v_k}{2}, \ldots\right)$ for $u = (u_1, u_2, \ldots, u_k, \ldots) \in \ell_2$ and $v = (v_1, v_2, \ldots, v_k, \ldots) \in \ell_2$. Then, $T$ and $U$ are $\left(\frac{1}{5}, 0\right)$- and $(0, 0)$-demigeneralized mappings, respectively, and demiclosed at zero. We choose $\alpha_n = \frac{1}{n+1}$, $t_n = \frac{2n}{5n+1}$, $\beta_n = \frac{1}{3} - \frac{1}{3(n+1)}$, $\alpha = 0.2$, $\lambda = 0.025$. Thus, our iterative scheme (17) becomes
$w_n = J_{E_1}^{-1}(J_{E_1}(x_n) - \gamma_n B^*(J_{E_2}(Bx_n) - J_{E_2}(UBx_n))),$
$\bar{x}_{n+1} = J_{E_1}^{-1}\left(\frac{n}{3(n+1)}J_{E_1}(w_n) + \frac{2n+3}{3(n+1)}J_{E_1}T_\lambda Q_{\mu_n}(w_n)\right),$
$x_{n+1} = J_{E_1}^{-1}\left[\frac{1}{n+1}J_{E_1}(u_n) + \frac{n}{n+1}J_{E_1}Q_{\mu_n}J_{E_1}^{-1}\left(\frac{2n}{5n+1}J_{E_1}(x_n) + \frac{3n+1}{5n+1}J_{E_1}(\bar{x}_{n+1})\right)\right],$ (58)
where $T_\lambda x = J_{E_1}^{-1}(0.975J_{E_1}(x) + 0.025J_{E_1}(Tx))$. Since $E_1 = E_2 = \ell_2(\mathbb{R})$, the duality mappings $J_{E_i}$ ($i = 1, 2$) and $J_{E_1}^{-1}$ reduce to the identity mappings on $\ell_2$. We compare the performance of (58) with the iterative scheme (6) of [9], (5) of [8], and (4) of [7]. For iteration (6), $U$ is defined as above, which is a 0-quasi-strictly pseudocontractive mapping with $F(U) \neq \emptyset$; in addition, we take $\alpha_n = \frac{1}{n+1}$ and $\gamma = \frac{1}{2\|B\|^2}$. For iteration (5), we consider the case in which $i = 1$ and take $\alpha_n = \frac{1}{n+1}$, $\beta_n = \frac{2n}{3n+3}$, $\gamma_n = 1 - \alpha_n - \beta_n$, $\delta_n = \frac{1}{\|A\|^2}$, while for iteration (4) we choose $\beta_n = \frac{2n}{5n+1}$. We study the convergence of the algorithms using $D_n = \|x_{n+1} - x_n\| < 10^{-6}$ as the stopping criterion. We test the algorithms using the following initial values:
Case I: $x_0 = \left(\frac{1}{2}, \frac{1}{4}, \frac{1}{8}, \ldots\right)$;
Case II: $x_0 = (5, 5, 5, \ldots)$;
Case III: $x_0 = (3, 9, 27, \ldots)$;
Case IV: $x_0 = (2, 2, 0, 0, \ldots)$.
The computational results are shown in Table 1 and Figure 1.
Example 2.
Next, we present an example in real Banach spaces. Let $E_1 = E_2 = L_p([0, 2\pi])$ (for $p \in (1, 2]$) with norm $\|x\|_{L_p} = \left(\int_0^{2\pi}|x(t)|^p\,dt\right)^{1/p}$ and pairing $\langle x, y \rangle = \int_0^{2\pi}x(t)y(t)\,dt$ for all $x, y \in L_p([0, 2\pi])$. Let $C = \{x \in L_p([0, 2\pi]) : \langle 3t^2, x \rangle \le 1\}$. The duality mapping is given by (see [33])
$J_E(x)(t) = |x(t)|^{p-1}\mathrm{sgn}(x(t)).$
Define the operator $A = \partial i_C$ (the subdifferential of the indicator function of $C$); then $A$ is maximal monotone and the resolvent $Q_\mu$ is the projection operator onto $C$, which is given by
$\Pi_C(x)(t) = \begin{cases} x(t) - \dfrac{\langle 3t^2, x \rangle - 1}{\|3t^2\|^2}\,3t^2, & \text{if } \langle 3t^2, x \rangle > 1, \\ x(t), & \text{if } \langle 3t^2, x \rangle \le 1. \end{cases}$
Furthermore, let $Q = \{y \in L_p([0, 2\pi]) : \langle t^3, y \rangle = 0\}$. Then, the projection onto $Q$ is given by
$\Pi_Q(y)(t) = \begin{cases} y(t) - \dfrac{\langle t^3, y \rangle}{\|t^3\|^2}\,t^3, & \text{if } \langle t^3, y \rangle \neq 0, \\ y(t), & \text{if } \langle t^3, y \rangle = 0. \end{cases}$
We set $T = \Pi_C$ and $U = \Pi_Q$; then, $T$ and $U$ are $(0, 0)$-demigeneralized mappings. In addition, let $B : L_p([0, 2\pi]) \to L_p([0, 2\pi])$ be defined by $Bx(t) = \frac{x(t)}{2}$ for all $x \in L_p([0, 2\pi])$. In particular, we take $p = \frac{3}{2}$, so that $E$ is not a real Hilbert space. We choose the following parameters and compare our method with the method of Zi et al. [9] (iteration (6)): for Algorithm (17), we take $\alpha_n = \frac{1}{100(n+1)}$, $\beta_n = \frac{30n}{50n+1}$, $t_n = \frac{3}{5}$, $\lambda = 0.03$, $\alpha = 0.1$, and, for iteration (6), we take $\alpha_n = \frac{1}{100(n+1)}$, $\gamma = \frac{1}{100}$. We test the algorithms for the following initial values:
Case I: $x_0 = \frac{\exp(3t)}{30}$;
Case II: $x_0 = 2t\cos(5t^2)$;
Case III: $x_0 = \frac{t^3}{27}$;
Case IV: $x_0 = 3t\exp(7t)$.
Using $\|x_{n+1} - x_n\|_{L_p} < 10^{-4}$ as the stopping criterion, we plot the error $\|x_{n+1} - x_n\|_{L_p}$ against the number of iterations in each case. The computational results can be seen in Table 2 and Figure 2.
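The duality mapping and the two projections used in Example 2 are straightforward to discretize; the snippet below is an added sketch in which the uniform grid and simple rectangle quadrature are assumptions, evaluated for $p = 3/2$.

```python
# Added sketch: discretized duality mapping and the two projections of Example 2
# on a uniform grid over [0, 2*pi]; grid size and the rectangle quadrature are
# illustrative assumptions.
import numpy as np

N = 2000
t = np.linspace(0.0, 2.0 * np.pi, N)
dt = t[1] - t[0]
pair = lambda f, g: np.sum(f * g) * dt                 # <f, g> = int_0^{2pi} f(t) g(t) dt

def J_E(x, p=1.5):
    return np.abs(x) ** (p - 1) * np.sign(x)           # J_E(x)(t) = |x(t)|^{p-1} sgn(x(t))

a = 3.0 * t ** 2                                       # C = {x : <3t^2, x> <= 1}
def Pi_C(x):
    s = pair(a, x)
    return x if s <= 1.0 else x - (s - 1.0) / pair(a, a) * a

b = t ** 3                                             # Q = {y : <t^3, y> = 0}
def Pi_Q(y):
    s = pair(b, y)
    return y if abs(s) < 1e-12 else y - s / pair(b, b) * b

x0 = t ** 3 / 27.0                                     # Case III initial value
print(pair(a, Pi_C(x0)), pair(b, Pi_Q(x0)))            # approx. 1 and approx. 0
```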
Example 3.
Next, we apply our result to solve the image restoration problem. We compare the performance of our algorithm (17) with iterations (4)–(6). The image restoration problem can be modeled as the linear system
$b = Dx + e,$
where $b \in \mathbb{R}^M$ is the noisy observed data, $e$ is the noise, $D : \mathbb{R}^N \to \mathbb{R}^M$ ($M \le N$) is a bounded linear operator, and $x \in \mathbb{R}^N$ is a vector with $m$ non-zero components. This problem can also be formulated as the following Least Absolute Shrinkage and Selection Operator (LASSO) problem:
$\min_{x \in \mathbb{R}^N}\left\{\|Dx - b\|_2^2 + \lambda\|x\|_1\right\},$
for some regularization parameter $\lambda > 0$. Equivalently, the split feasibility problem (1) can be rewritten as the minimization
$\min_{x \in \mathbb{R}^N}\{f(x) + g(x)\},$
where $f(x) = \|Dx - b\|_2^2$ and $g(x) = \lambda\|x\|_1$. Following Corollary 1, we can apply our algorithm to the image deblurring problem by setting $A = N_C$, $T = P_Q$, $Q = \{b\}$, $B = D$, and $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$, where $t > 0$. In our experiments, we use the grey test images Cameraman ($256 \times 256$) and Moon ($537 \times 358$) from the Image Processing Toolbox in MATLAB, and each test image is degraded by a Gaussian $7 \times 7$ blur kernel with standard deviation 4. In our computation, we take $\alpha_n = \frac{1}{100n}$, $t_n = \frac{15}{100}$, $\beta_n = \frac{29n}{100n+1}$, $\lambda = 0.25$, $\alpha = 0.04$; for (6), we take $\alpha_n = \frac{1}{100n}$, $\gamma = 0.001$; for (5), we take $\alpha_n = \frac{1}{100n}$, $\beta_n = \frac{1}{3}$, $\delta_n = 1 - \frac{1}{3} - \alpha_n$, $\gamma = 0.001$, $\eta_i = 1$, $i = 1$, $\sigma_n = \frac{2n}{100n+1}$; for (4), we take $\beta_n = \frac{2n}{100n+1}$. The maximum number of allowed iterations is set to 1000. We compare the quality of the restored images using the signal-to-noise ratio defined as
$SNR = 20 \times \log_{10}\left(\frac{\|x\|_2}{\|x - x^*\|_2}\right),$
where $x$ is the original image and $x^*$ is the restored image. Typically, the larger the SNR, the better the quality of the restored image. Figure 3 and Figure 4 show the images reconstructed by the iterative algorithms. In Figure 5 and Figure 6, we show the graphs of SNR against the number of iterations for each algorithm. In Table 3, we show the time taken by each iteration to reconstruct the test images.
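For reference, the degradation model and the SNR measure above can be reproduced in a few lines; the sketch below is an added illustration (not the MATLAB code used for the experiments), with a synthetic stand-in image and a standard Gaussian kernel construction as assumptions.

```python
# Added sketch: build a 7x7 Gaussian blur kernel (sigma = 4), degrade an image by
# convolution, and evaluate the SNR measure used to compare restorations.
# The synthetic "image" here is an assumption; the paper uses MATLAB test images.
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=7, sigma=4.0):
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def snr(x, x_star):
    return 20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - x_star))

x = np.random.default_rng(0).random((64, 64))          # stand-in for a test image
blurred = convolve2d(x, gaussian_kernel(), mode="same", boundary="symm")
print("SNR of blurred vs. original:", snr(x, blurred))
```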
From Figure 3 and Figure 4, it can be seen that all the algorithms are efficient for restoring the test images. Moreover, from Table 3, the proposed iteration (17) is faster than (4) and (6) with respect to the time taken to restore the Cameraman and Moon images.

5. Conclusions

In this paper, we introduced a new generalized explicit iterative method for solving generalized split feasibility problems in real Banach spaces. The algorithm is designed such that its stepsize is chosen self-adaptively and does not require prior knowledge of the norm of the bounded linear operator, which is difficult to estimate in general. Furthermore, a strong convergence result is proved and some numerical examples are presented to illustrate the performance of the proposed method. In addition, the algorithm is applied to an image reconstruction problem to show its usefulness and efficiency.

Author Contributions

Conceptualization, L.O.J.; methodology, G.C.U. and L.O.J.; validation, G.C.U. and L.O.J.; formal analysis, G.C.U. and L.O.J.; writing—original draft preparation, G.C.U.; writing—review and editing, L.O.J. and C.C.O.; visualization, L.O.J.; supervision, G.C.U., L.O.J. and C.C.O.; project administration, G.C.U. and C.C.O.; funding acquisition, L.O.J. All authors approved the final manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Sefako Makgatho Health Sciences University Postdoctoral research fund and the APC was funded by the Department of Mathematics and Applied Mathematics, Sefako Makgatho Health Sciences University, Pretoria, South Africa.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors acknowledge with thanks the Sefako Makgatho Health Sciences University for providing funds for the research. L.O.J. also thanks the Federal University of Agriculture, Abeokuta, for hosting him and providing facilities to support the research.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  2. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, T. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  4. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications. Inverse Probl. 2005, 21, 2071–2084.
  5. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  6. Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2014, 23, 205–221.
  7. Ansari, Q.H.; Rehan, A. Iterative methods for generalized split feasibility problems in Banach spaces. Carpathian J. Math. 2017, 33, 9–26.
  8. Song, Y. Iterative methods for fixed point problems and generalized split feasibility problems in Banach spaces. J. Nonlinear Sci. Appl. 2018, 11, 198–217.
  9. Zi, X.; Ma, Z.; Du, W.S. Strong convergence theorems for generalized split feasibility problems in Banach spaces. Mathematics 2020, 8, 892.
  10. Zhao, J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 2015, 64, 2619–2630.
  11. Ke, Y.; Ma, C. The generalized viscosity implicit rules of nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2015, 2015, 190.
  12. Hoffman, J.D. Numerical Methods for Engineers and Scientists, 2nd ed.; Marcel Dekker, Inc.: New York, NY, USA, 2001.
  13. Palais, R.S.; Palais, R.A. Differential Equations, Mechanics, and Computation; American Mathematical Society: Providence, RI, USA, 2009.
  14. Marino, G.; Scardamaglia, B.; Zaccone, R. A general viscosity explicit midpoint rule for quasi-nonexpansive mappings. J. Nonlinear Convex Anal. 2017, 18, 137–148.
  15. Xu, H.K. Inequalities in Banach spaces with applications. Nonlinear Anal. 1991, 16, 1127–1138.
  16. Cioranescu, I. Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems; Kluwer Academic: Dordrecht, The Netherlands, 1990.
  17. Reich, S. A weak convergence theorem for the alternating method with Bregman distance. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Marcel Dekker: New York, NY, USA, 1996; pp. 313–318.
  18. Takahashi, W. Nonlinear Functional Analysis; Yokohama Publishers: Yokohama, Japan, 2000.
  19. Nilsrakoo, W.; Saejung, S. Strong convergence theorems by Halpern–Mann iterations for relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 2011, 217, 6577–6586.
  20. Matsushita, S.; Takahashi, W. A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 2005, 134, 257–266.
  21. Alber, Y.I. Metric and generalized projection operators in Banach spaces: Properties and applications. In Theory and Applications of Nonlinear Operators of Accretive and Monotone Type; Kartsatos, A.G., Ed.; Lecture Notes in Pure and Applied Mathematics; Dekker: New York, NY, USA, 1996; Volume 178, pp. 15–50.
  22. Alber, Y.I.; Guerre-Delabriere, S. On the projection methods for fixed point problems. Analysis 2001, 21, 17–39.
  23. Alber, Y.I.; Reich, S. An iterative method for solving a class of nonlinear operator equations in Banach spaces. Panamer. Math. J. 1994, 4, 39–54.
  24. Zhou, H.; Gao, E. An iterative method of fixed points for closed and quasi-strict pseudocontractions in Banach spaces. J. Appl. Math. Comput. 2010, 33, 227–237.
  25. Takahashi, W.; Wen, C.F.; Yao, J.C. Strong convergence theorem by shrinking projection method for new nonlinear mappings in Banach spaces and applications. Optimization 2017, 66, 609–621.
  26. Browder, F.E. Nonlinear maximal monotone operators in a Banach space. Math. Ann. 1968, 175, 89–113.
  27. Kamimura, S.; Takahashi, W. Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 2002, 13, 938–945.
  28. Kohsaka, F.; Takahashi, W. Strong convergence of an iterative sequence for maximal monotone operators in a Banach space. Abstr. Appl. Anal. 2004, 2004, 239–249.
  29. Khan, A.R.; Ugwunnadi, G.C.; Makukula, Z.G.; Abbas, M. Strong convergence of inertial subgradient extragradient method for solving variational inequality in Banach space. Carpathian J. Math. 2019, 35, 327–338.
  30. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  31. Maingé, P.E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  32. Rockafellar, R.T. On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149, 75–88.
  33. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: Berlin/Heidelberg, Germany, 2009.
Figure 1. Example 1, Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Figure 2. Example 2, Top Left: Case I; Top Right: Case II; Bottom Left: Case III; Bottom Right: Case IV.
Figure 3. Example 3, Top shows original Cameraman image (left) and degraded Cameraman image (middle), recovered image by iteration (17) (right); Bottom shows image recovered by iteration (6) (left), (5) (middle), and (4) (right).
Figure 4. Example 3, Top shows original moon image (left) and degraded moon image (middle), recovered image by iteration (17) (right); Bottom shows image recovered by iteration (6) (left), (5) (middle), and (4) (right).
Figure 5. Example 3, Graphs of SNR values for the test image Cameraman against the number of iterations for iterations (17), (6), (5), and (4).
Figure 6. Example 3, Graphs of SNR values for the test image Moon against the number of iterations for iterations (17), (6), (5), and (4).
Table 1. Computation results for Example 1.
                          UJO alg. (17)   ZMD alg. (6)   Song alg. (5)   AR alg. (4)
Case I     No. of Iter.        15              50             26             35
           CPU time (s)      0.0034          0.0168         0.0072         0.091
Case II    No. of Iter.        16              49             25             35
           CPU time (s)      0.0029          0.0148         0.0082         0.0089
Case III   No. of Iter.        22              47             24             33
           CPU time (s)      0.0084          0.0113         0.0102         0.0116
Case IV    No. of Iter.        10              52             27             37
           CPU time (s)      0.0032          0.0125         0.0075         0.0098
Table 2. Computation results for Example 2.
                          Algorithm (17)   Algorithm (6)
Case I     No. of Iter.         2                2
           CPU time (s)      1.5206           4.1734
Case II    No. of Iter.         2                2
           CPU time (s)      1.3816           5.1309
Case III   No. of Iter.         2                2
           CPU time (s)      2.2795           8.5628
Case IV    No. of Iter.         2                2
           CPU time (s)      1.8884           8.5531
Table 3. Time (seconds) for computing the recovered images in Example 3.
                      Moon       Cameraman
UJO alg. (17)       10.2275       20.3075
ZMD alg. (6)        11.8521       39.2120
Song alg. (5)       10.0642       20.1219
AR alg. (4)         12.2164       21.4980