Article

Iteration Process for Fixed Point Problems and Zeros of Maximal Monotone Operators

1 Department of Mathematics, Pt NRS Government College, Rohtak 124001, India
2 Department of Mathematics, Maharshi Dayanand University, Rohtak 124001, India
3 Department of General Education, China Medical University, Taichung 40402, Taiwan
4 Gh. Mihoc-C. Iacob Institute of Mathematical Statistics and Applied Mathematics, Romanian Academy, 050711 Bucharest, Romania
5 Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania
* Author to whom correspondence should be addressed.
Current address: Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania.
Symmetry 2019, 11(5), 655; https://doi.org/10.3390/sym11050655
Submission received: 7 April 2019 / Revised: 7 May 2019 / Accepted: 9 May 2019 / Published: 10 May 2019
(This article belongs to the Special Issue Fixed Point Theory and Fractional Calculus with Applications)

Abstract:
We introduce an iterative algorithm which converges strongly to a common element of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. Our iterative method is quite general and includes, as special cases, a large number of iterative methods considered in the recent literature. In particular, we apply our algorithm to solve a general system of variational inequalities, the convex feasibility problem, the zero point problem for inverse strongly monotone and maximal monotone mappings, the split common null point problem, the split feasibility problem, the split monotone variational inclusion problem and the split variational inequality problem. Under relaxed conditions on the parameters, we derive algorithms and strong convergence results for these problems. Our results improve and generalize several known results in the recent literature.

1. Introduction

Fixed point theory has proved to be a powerful and effective tool for solving a large number of problems which emerge from real-world applications and can be translated into equivalent fixed point problems. In order to obtain approximate solutions of fixed point problems, various iterative methods have been proposed (see, e.g., [1,2,3,4,5,6,7,8,9,10] and the references therein). One of the important instances of fixed point problems is the problem of finding zeros of nonlinear operators. The most popular method for finding zeros of a maximal monotone operator is the proximal point algorithm (PPA). Rockafellar [11] proved the weak convergence of the PPA, but in general it fails to converge strongly (see [12]). To obtain strong convergence, several authors have proposed modifications of the PPA (see Kamimura and Takahashi [13], Iiduka and Takahashi [14] and the references therein). In [15], Lehdili and Moudafi introduced the prox-Tikhonov regularization method, which combines the Tikhonov method with the PPA to obtain a strongly convergent sequence.
In 2012, Censor, Gibali and Reich [16] (see also [17,18]) introduced a new variational inequality problem, called the common solutions to variational inequality problem (CSVIP), which consists of finding common solutions to unrelated variational inequalities. The significance of studying the CSVIP lies in the fact that it includes the well-known convex feasibility problem (CFP) as a special case. The CFP, which lies at the center of many problems in the physical sciences, such as sensor networking [19], radiation therapy treatment planning [20], computerized tomography [21] and image restoration [22], is to find a point in the intersection of a family of closed convex sets in a Hilbert space.
A special case of the CFP is the split feasibility problem (SFP). In 1994, Censor and Elfving [23] introduced the SFP for modeling phase retrieval problems. This problem has a large number of applications in optimization, signal processing, image reconstruction and intensity-modulated radiation therapy (IMRT). Starting from the SFP, various important split-type problems have been introduced and studied in recent years, for example, the split common null point problem (SCNPP), the split monotone variational inclusion problem (SMVIP) and the split variational inequality problem (SVIP).
Motivated and inspired by the above work, we propose an iterative algorithm for finding a common element of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. As applications, we solve all the problems discussed above under weaker conditions.

2. Preliminaries

Throughout the paper, we assume that $H$ is a Hilbert space with inner product $\langle \cdot , \cdot \rangle$ and norm $\|\cdot\|$, and we let $I$ be the identity mapping on $H$. We denote by $\mathrm{Fix}(T)$ the set of all fixed points of a mapping $T$. A sequence $\{x_n\}$ in $H$ converges to $x \in H$ strongly if $\{\|x_n - x\|\}$ converges to 0 and weakly if $\{\langle x_n - x, y\rangle\}$ converges to 0 for every $y \in H$. We shall use the notations $x_n \to x$ and $x_n \rightharpoonup x$ to indicate strong and weak convergence, respectively. It is important to note that strong convergence always implies weak convergence, but the converse is not true (see [24]). Let $D$ be a nonempty closed convex subset of $H$ and let $P_D$ denote the nearest point projection (metric projection) from $H$ onto $D$; that is, for each $u \in H$, $\|u - P_D u\| \le \|u - v\|$ for all $v \in D$. Furthermore, $P_D$ is characterized by the fact that $P_D u \in D$ and
$$\langle u - P_D u, v - P_D u \rangle \le 0, \quad \forall v \in D. \tag{1}$$
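As a quick numerical illustration of characterization (1) (our own sketch, not part of the original paper), the following Python snippet takes $D$ to be the closed unit ball in $\mathbb{R}^3$, where the metric projection has the closed form $P_D u = u/\max\{1, \|u\|\}$, and checks inequality (1) against many sampled points $v \in D$.

```python
import numpy as np

def proj_ball(u, radius=1.0):
    """Metric projection onto the closed Euclidean ball of the given radius."""
    norm_u = np.linalg.norm(u)
    return u if norm_u <= radius else (radius / norm_u) * u

rng = np.random.default_rng(0)
u = 5.0 * rng.normal(size=3)           # a point (almost surely) outside the ball
Pu = proj_ball(u)
# Characterization (1): <u - P_D u, v - P_D u> <= 0 for every v in D.
for _ in range(1000):
    v = proj_ball(rng.normal(size=3))  # an arbitrary point of D
    assert np.dot(u - Pu, v - Pu) <= 1e-9
```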
Next, we recall the definitions of some well-known operators, which we will use in our paper.
Definition 1.
An operator $S : H \to H$ is said to be
1. Nonexpansive if $\|Su - Sv\| \le \|u - v\|$, for all $u, v \in H$.
2. A contraction if there exists a constant $k \in (0,1)$ such that $\|Su - Sv\| \le k\|u - v\|$, for all $u, v \in H$.
3. $\alpha$-averaged if there exist a constant $\alpha \in (0,1)$ and a nonexpansive mapping $V$ such that $S = (1 - \alpha)I + \alpha V$.
4. $\beta$-inverse strongly monotone (for short, $\beta$-ism) if there exists $\beta > 0$ such that $\langle Su - Sv, u - v \rangle \ge \beta\|Su - Sv\|^2$, for all $u, v \in H$.
5. Firmly nonexpansive if $\langle Su - Sv, u - v \rangle \ge \|Su - Sv\|^2$, for all $u, v \in H$.
It is known that the metric projection $P_D$ is firmly nonexpansive and that every firmly nonexpansive mapping is $(1/2)$-averaged.
An operator $M : H \to 2^H$ is called maximal monotone on $H$ if $M$ is monotone, i.e., $\langle u_1 - v_1, u - v \rangle \ge 0$ for all $u, v \in \mathrm{dom}(M)$, $u_1 \in Mu$ and $v_1 \in Mv$, and there is no other monotone operator whose graph contains the graph of $M$. Further, the resolvent associated with a maximal monotone operator $M$ is the single-valued operator defined as
$$J_\lambda^M = (I + \lambda M)^{-1} : H \to H.$$
It is well known [24] that if $M : H \to 2^H$ is a maximal monotone operator and $\lambda > 0$, then $J_\lambda^M$ is firmly nonexpansive and $\mathrm{Fix}(J_\lambda^M) = M^{-1}0 = \{u \in H : 0 \in Mu\}$.
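For intuition, resolvents are computable in closed form for many simple operators. The sketch below (an illustration under our own choice of operator, not taken from the paper) uses $M = \partial|\cdot|$ on $H = \mathbb{R}$, whose resolvent $J_\lambda^M$ is the soft-thresholding map; iterating this firmly nonexpansive map is exactly the proximal point algorithm and drives the iterate to $\mathrm{Fix}(J_\lambda^M) = M^{-1}0 = \{0\}$.

```python
def resolvent_abs(x: float, lam: float) -> float:
    """J_lam^M = (I + lam*M)^(-1) for M = subdifferential of |.|,
    i.e., the soft-thresholding operator."""
    if x > lam:
        return x - lam
    if x < -lam:
        return x + lam
    return 0.0

x, lam = 7.3, 1.0
for _ in range(20):          # proximal point iterations
    x = resolvent_abs(x, lam)
print(x)                     # 0.0, the unique zero of M
```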
A sequence $\{T_n\}$ of mappings is said to be a strongly nonexpansive sequence [25] if each $T_n$ is nonexpansive and
$$x_n - y_n - (T_n x_n - T_n y_n) \to 0,$$
whenever $\{x_n\}, \{y_n\} \subset H$ are such that $\{x_n - y_n\}$ is bounded and $\|x_n - y_n\| - \|T_n x_n - T_n y_n\| \to 0$. Note that if we put $T_n = T$ for all $n \in \mathbb{N}$, then we recover the definition of a strongly nonexpansive mapping given in [26].
In order to establish our results, we collect several lemmas.
Lemma 1.
Let $F : H \to H$ be a $\beta$-ism operator on $H$. Then $I - 2\beta F$ is nonexpansive.
Proof. 
$$\begin{aligned} \|u - v\|^2 - \|(I - 2\beta F)u - (I - 2\beta F)v\|^2 &= \|u - v\|^2 - \big(\|u - v\|^2 + (2\beta)^2\|F(u) - F(v)\|^2 - 4\beta\langle F(u) - F(v), u - v\rangle\big) \\ &= 4\beta\langle F(u) - F(v), u - v\rangle - 4\beta^2\|F(u) - F(v)\|^2 \\ &= 4\beta\big(\langle F(u) - F(v), u - v\rangle - \beta\|F(u) - F(v)\|^2\big) \ge 0. \end{aligned}$$
Thus $I - 2\beta F$ is nonexpansive. □
Lemma 2.
For all $u, v \in H$, the following inequality holds:
$$\|u + v\|^2 \le \|u\|^2 + 2\langle v, u + v\rangle.$$
Lemma 3
([27]). Suppose $\{a_n\} \subset [0, \infty)$, $\{\gamma_n\} \subset [0, 1]$ and $\{b_n\}$ are three real sequences satisfying $a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n b_n$, $n \ge 0$. Assume that $\sum_{n=0}^{\infty} \gamma_n = \infty$ and $\limsup_{n\to\infty} b_n \le 0$. Then $\lim_{n\to\infty} a_n = 0$.
Lemma 4
([25]). Let $\{V_n\}$ be a sequence of nonexpansive mappings of $D$ into $H$, where $D$ is a nonempty subset of a Hilbert space $H$. Assume that $\{\gamma_n\} \subset [0,1]$ satisfies $\liminf_{n\to\infty} \gamma_n > 0$. Then the sequence $\{W_n\}$ of mappings of $D$ into $H$ defined by $W_n = \gamma_n I + (1 - \gamma_n) V_n$ is a strongly nonexpansive sequence, where $I$ is the identity mapping on $D$.
Lemma 5
([25]). Let $\{S_n\}$ be a sequence of firmly nonexpansive mappings of $D$ into $H$, where $D$ is a nonempty subset of $H$. Then $\{S_n\}$ is a strongly nonexpansive sequence. In particular, $\{J_{\lambda_n}^M = (I + \lambda_n M)^{-1}\}$, the sequence of resolvents of a maximal monotone operator $M$, is a strongly nonexpansive sequence.
Lemma 6 
([25]). Let $C$ and $D$ be two nonempty subsets of a Hilbert space $H$. Let $\{S_n\}$ be a sequence of mappings of $C$ into $H$ and $\{T_n\}$ a sequence of mappings of $D$ into $H$. Suppose that both $\{S_n\}$ and $\{T_n\}$ are strongly nonexpansive sequences and that $T_n(D) \subset C$ for each $n \in \mathbb{N}$. Then $\{S_n T_n\}$ is a strongly nonexpansive sequence.
Lemma 7
([26]). If $\{T_i : 1 \le i \le k\}$ are strongly nonexpansive mappings and $\bigcap_{i=1}^{k} \mathrm{Fix}(T_i) \ne \emptyset$, then $\bigcap_{i=1}^{k} \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1 T_2 \cdots T_k)$.
Lemma 8
([28]). The composition of finitely many averaged mappings is averaged. That is, if $\{T_i : 1 \le i \le k\}$ are averaged mappings, then so is the composition $T_1 T_2 \cdots T_k$. Furthermore, if $\bigcap_{i=1}^{k} \mathrm{Fix}(T_i) \ne \emptyset$, then $\bigcap_{i=1}^{k} \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1 T_2 \cdots T_k)$.
Lemma 9
([29]). Let $T$ be a firmly nonexpansive self-mapping on $H$ with $\mathrm{Fix}(T) \ne \emptyset$. Then, for any $x \in H$, one has $\langle x - Tx, w - Tx \rangle \le 0$ for all $w \in \mathrm{Fix}(T)$.
Lemma 10
([30]). Let $D \subset H$ be a nonempty closed convex set and $V : D \to D$ a nonexpansive mapping. Then $I - V$ is demiclosed at 0; that is, if $\{x_n\} \subset D$ with $x_n \rightharpoonup w$ and $(I - V)x_n \to 0$, then $w \in \mathrm{Fix}(V)$.
Lemma 11
(The Resolvent Identity; [31]). For each $\lambda, \mu > 0$,
$$J_\lambda^A x = J_\mu^A\left(\frac{\mu}{\lambda} x + \left(1 - \frac{\mu}{\lambda}\right) J_\lambda^A x\right).$$
Lemma 12
([32]). Let $\{c_n\}$ be a sequence of real numbers such that there exists a subsequence $\{n_i\}$ of $\{n\}$ with $c_{n_i} < c_{n_i + 1}$ for all $i \in \mathbb{N}$. Then there exists a nondecreasing sequence $\{m_q\} \subset \mathbb{N}$ such that $m_q \to \infty$ and the following properties are satisfied by all (sufficiently large) numbers $q \in \mathbb{N}$:
$$c_{m_q} \le c_{m_q + 1}, \qquad c_q \le c_{m_q + 1}.$$
In fact,
$$m_q = \max\{ j \le q : c_j < c_{j+1} \}.$$

3. Main Results

Theorem 1.
Let $H$ be a real Hilbert space. Let $\{T_i\}_{i=1}^m$ and $V$ be nonexpansive self-mappings on $H$ and let $B_1, B_2 : H \to 2^H$ be maximal monotone mappings such that
$$\Gamma := \bigcap_{i=1}^{m} \mathrm{Fix}(T_i) \cap \mathrm{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 \ne \emptyset.$$
Let $g : H \to H$ be a contraction with coefficient $k \in (0,1)$ and $\{x_n\}$ a sequence defined by $x_0 \in H$ and
$$\begin{cases} y_n = \alpha_n g(x_n) + (1 - \alpha_n) J_{\mu_n}^{B_2} V_n x_n, \\ x_{n+1} = J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_2^n T_1^n y_n, \end{cases} \tag{2}$$
for all $n \ge 0$, where $V_n = (1 - \beta_n) I + \beta_n V$ and $T_i^n = (1 - \gamma_n^i) I + \gamma_n^i T_i$ for $i = 1, 2, \ldots, m$. Suppose that $\{\alpha_n\}$, $\{\beta_n\}$ and $\{\gamma_n^i\}$ are sequences in $(0,1)$ and $\{\rho_n\}$ and $\{\mu_n\}$ are sequences of positive real numbers satisfying the following conditions:
1. $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
2. $0 < \liminf_{n\to\infty} \beta_n \le \limsup_{n\to\infty} \beta_n < 1$;
3. $0 < \liminf_{n\to\infty} \gamma_n^i \le \limsup_{n\to\infty} \gamma_n^i < 1$, for all $i = 1, 2, \ldots, m$;
4. for all sufficiently large $n$, $\min\{\rho_n, \mu_n\} > \varepsilon$ for some $\varepsilon > 0$.
Then the sequence $\{x_n\}$ converges strongly to $x^* \in \Gamma$, where $x^*$ is the unique fixed point of the contraction $P_\Gamma g$.
Proof. 
Set $W_n = J_{\rho_n}^{B_1} T_m^n \cdots T_2^n T_1^n$ and $S_n = J_{\mu_n}^{B_2} V_n$. Clearly, $W_n$ and $S_n$ are nonexpansive for each $n \ge 0$. By Lemmas 4 and 5, for each $n \ge 0$, $W_n$ and $S_n$ are compositions of strongly nonexpansive mappings. Therefore, from Lemma 7, we get
$$\Gamma = \bigcap_{i=1}^m \mathrm{Fix}(T_i) \cap \mathrm{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 = \bigcap_{i=1}^m \mathrm{Fix}(T_i^n) \cap \mathrm{Fix}(V_n) \cap \mathrm{Fix}(J_{\rho_n}^{B_1}) \cap \mathrm{Fix}(J_{\mu_n}^{B_2}) = \mathrm{Fix}(W_n) \cap \mathrm{Fix}(S_n).$$
First, we claim that $\{x_n\}$ is bounded. Take an arbitrary element $x^* \in \Gamma$. Then
$$\begin{aligned} \|x_{n+1} - x^*\| &= \|W_n y_n - x^*\| \le \|y_n - x^*\| \\ &= \|\alpha_n(g(x_n) - g(x^*)) + \alpha_n(g(x^*) - x^*) + (1 - \alpha_n)(S_n x_n - x^*)\| \\ &\le \alpha_n\|g(x_n) - g(x^*)\| + \alpha_n\|g(x^*) - x^*\| + (1 - \alpha_n)\|S_n x_n - x^*\| \\ &\le \alpha_n k\|x_n - x^*\| + \alpha_n\|g(x^*) - x^*\| + (1 - \alpha_n)\|x_n - x^*\| \\ &= (1 - \alpha_n(1 - k))\|x_n - x^*\| + \alpha_n\|g(x^*) - x^*\| \\ &\le \max\left\{\|x_n - x^*\|, \frac{1}{1-k}\|g(x^*) - x^*\|\right\}. \end{aligned}$$
By induction, we have
$$\|x_{n+1} - x^*\| \le \max\left\{\|x_0 - x^*\|, \frac{1}{1-k}\|g(x^*) - x^*\|\right\},$$
which proves the boundedness of $\{x_n\}$, and hence of $\{g(x_n)\}$ and $\{y_n\}$. It is well known that the fixed point set of a nonexpansive mapping is closed and convex, and so is the intersection of such sets. Hence, the metric projection $P_\Gamma$ is well defined. In addition, since $P_\Gamma g : H \to H$ is a contraction mapping, there exists a unique $x^* \in \Gamma$ such that $x^* = P_\Gamma g(x^*)$. In order to prove $x_n \to x^*$ as $n \to \infty$, we examine two possible cases:
Case I. Assume that there exists $n_0 \in \mathbb{N}$ such that the real sequence $\{\|x_n - x^*\|\}$ is nonincreasing for all $n \ge n_0$. Since $\{\|x_n - x^*\|\}$ is bounded, it is convergent. We first show that $y_n - W_n y_n \to 0$. Using the nonexpansiveness of $W_n$ and (2), we obtain
$$0 \le \|y_n - x^*\| - \|W_n y_n - x^*\| \le \alpha_n\|g(x_n) - x^*\| + (1 - \alpha_n)\|S_n x_n - x^*\| - \|x_{n+1} - x^*\| \le \alpha_n\|g(x_n) - x^*\| + \|x_n - x^*\| - \|x_{n+1} - x^*\|. \tag{3}$$
Since $\{g(x_n)\}$ is bounded, $\alpha_n \to 0$ and $\{\|x_n - x^*\|\}$ is convergent, we obtain
$$\|y_n - x^*\| - \|W_n y_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
Also, $\{W_n\}$ is a strongly nonexpansive sequence, so we conclude that
$$y_n - W_n y_n \to 0 \quad \text{as } n \to \infty. \tag{4}$$
We next show that $x_n - S_n x_n \to 0$. From (2), we obtain
$$\|x_{n+1} - x^*\| \le \alpha_n\|g(x_n) - x^*\| + (1 - \alpha_n)\|S_n x_n - x^*\| \le \alpha_n\|g(x_n) - x^*\| + \|S_n x_n - x^*\|. \tag{5}$$
Now, from the nonexpansiveness of $S_n$ and (5), we observe
$$0 \le \|x_n - x^*\| - \|S_n x_n - x^*\| \le \|x_n - x^*\| - \|x_{n+1} - x^*\| + \alpha_n\|g(x_n) - x^*\|. \tag{6}$$
Since $\{g(x_n)\}$ is bounded, $\alpha_n \to 0$ and $\{\|x_n - x^*\|\}$ is convergent, we obtain
$$\|x_n - x^*\| - \|S_n x_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
As $\{S_n\}$ is a strongly nonexpansive sequence, we have
$$x_n - S_n x_n \to 0 \quad \text{as } n \to \infty. \tag{7}$$
Again from (2), we observe
$$\|x_{n+1} - x^*\| \le \alpha_n\|g(x_n) - x^*\| + (1 - \alpha_n)\|J_{\mu_n}^{B_2} V_n x_n - x^*\| \le \alpha_n\|g(x_n) - x^*\| + \|V_n x_n - x^*\|. \tag{8}$$
Using the nonexpansiveness of $V_n$ and (8), we observe
$$0 \le \|x_n - x^*\| - \|V_n x_n - x^*\| \le \|x_n - x^*\| - \|x_{n+1} - x^*\| + \alpha_n\|g(x_n) - x^*\|, \tag{9}$$
so that $\|x_n - x^*\| - \|V_n x_n - x^*\| \to 0$ by the boundedness of $\{g(x_n)\}$, $\alpha_n \to 0$ and the convergence of $\{\|x_n - x^*\|\}$. By Lemma 4, $\{V_n\}$ is a strongly nonexpansive sequence, so we have
$$x_n - V_n x_n \to 0 \quad \text{as } n \to \infty. \tag{10}$$
Also, notice that $x_n - V_n x_n = \beta_n(x_n - V x_n)$. Condition (ii) together with (10) implies that
$$x_n - V x_n \to 0 \quad \text{as } n \to \infty. \tag{11}$$
Now consider
$$\|x_n - J_{\mu_n}^{B_2} x_n\| \le \|x_n - J_{\mu_n}^{B_2} V_n x_n\| + \|J_{\mu_n}^{B_2} V_n x_n - J_{\mu_n}^{B_2} x_n\| \le \|x_n - S_n x_n\| + \|V_n x_n - x_n\|;$$
in view of (7) and (10), we deduce
$$x_n - J_{\mu_n}^{B_2} x_n \to 0 \quad \text{as } n \to \infty. \tag{12}$$
Notice that $y_n - x_n = \alpha_n(g(x_n) - x_n) + (1 - \alpha_n)(S_n x_n - x_n)$. This together with the condition $\alpha_n \to 0$ and (7) implies that
$$y_n - x_n \to 0 \quad \text{as } n \to \infty. \tag{13}$$
Next, we consider
$$\|x_n - W_n x_n\| \le \|x_n - y_n\| + \|y_n - W_n y_n\| + \|W_n y_n - W_n x_n\| \le 2\|x_n - y_n\| + \|y_n - W_n y_n\|;$$
it follows from (4) and (13) that
$$x_n - W_n x_n \to 0 \quad \text{as } n \to \infty. \tag{14}$$
On the other hand, we observe
$$\|x_{n+1} - x^*\| = \|W_n y_n - x^*\| \le \|W_n y_n - W_n x_n\| + \|W_n x_n - x^*\| \le \|y_n - x_n\| + \|W_n x_n - x^*\|. \tag{15}$$
Using the nonexpansiveness of $T_i^n T_{i-1}^n \cdots T_1^n$ for each $i = 1, 2, \ldots, m$ and (15), we obtain
$$0 \le \|x_n - x^*\| - \|T_i^n T_{i-1}^n \cdots T_1^n x_n - x^*\| \le \|x_n - x^*\| - \|W_n x_n - x^*\| \le \|x_n - x^*\| - \|x_{n+1} - x^*\| + \|y_n - x_n\|; \tag{16}$$
in view of the fact that $\{\|x_n - x^*\|\}$ is convergent and using (13), we obtain
$$\|x_n - x^*\| - \|T_i^n T_{i-1}^n \cdots T_1^n x_n - x^*\| \to 0 \quad \text{as } n \to \infty.$$
Also, by Lemma 6, $\{T_i^n T_{i-1}^n \cdots T_1^n\}$ is a strongly nonexpansive sequence for each $i = 1, 2, \ldots, m$. Therefore, we have
$$x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n \to 0 \quad \text{as } n \to \infty, \ \text{for each } i = 1, 2, \ldots, m. \tag{17}$$
Now consider
$$\|x_n - J_{\rho_n}^{B_1} x_n\| \le \|x_n - J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_1^n x_n\| + \|J_{\rho_n}^{B_1} T_m^n T_{m-1}^n \cdots T_1^n x_n - J_{\rho_n}^{B_1} x_n\| \le \|x_n - W_n x_n\| + \|T_m^n T_{m-1}^n \cdots T_1^n x_n - x_n\|.$$
This together with (14) and (17) implies that
$$x_n - J_{\rho_n}^{B_1} x_n \to 0 \quad \text{as } n \to \infty. \tag{18}$$
Choose a fixed number $s$ such that $\varepsilon > s > 0$. Using Lemma 11, for all sufficiently large $n$, we have
$$\begin{aligned} \|x_n - J_s^{B_1} x_n\| &\le \|x_n - J_{\rho_n}^{B_1} x_n\| + \|J_{\rho_n}^{B_1} x_n - J_s^{B_1} x_n\| \\ &= \|x_n - J_{\rho_n}^{B_1} x_n\| + \left\| J_s^{B_1}\left(\frac{s}{\rho_n} x_n + \left(1 - \frac{s}{\rho_n}\right) J_{\rho_n}^{B_1} x_n\right) - J_s^{B_1} x_n \right\| \\ &\le \|x_n - J_{\rho_n}^{B_1} x_n\| + \left\| \frac{s}{\rho_n} x_n + \left(1 - \frac{s}{\rho_n}\right) J_{\rho_n}^{B_1} x_n - x_n \right\| \\ &= \|x_n - J_{\rho_n}^{B_1} x_n\| + \left(1 - \frac{s}{\rho_n}\right)\|J_{\rho_n}^{B_1} x_n - x_n\| \\ &\le 2\|x_n - J_{\rho_n}^{B_1} x_n\|. \end{aligned}$$
Using (18), we obtain
$$x_n - J_s^{B_1} x_n \to 0 \quad \text{as } n \to \infty. \tag{19}$$
Similarly, using (12) and Lemma 11, we can obtain
$$x_n - J_s^{B_2} x_n \to 0 \quad \text{as } n \to \infty. \tag{20}$$
Next, we show that
$$x_n - T_i^n x_n \to 0 \quad \text{as } n \to \infty, \ \text{for each } i = 1, 2, \ldots, m. \tag{21}$$
Clearly, from (17) with $i = 1$, (21) holds. Now for $i = 2, \ldots, m$, we see that
$$\|x_n - T_i^n x_n\| \le \|x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n\| + \|T_i^n T_{i-1}^n \cdots T_1^n x_n - T_i^n x_n\| \le \|x_n - T_i^n T_{i-1}^n \cdots T_1^n x_n\| + \|T_{i-1}^n \cdots T_1^n x_n - x_n\|.$$
Thus, in view of (17), (21) holds for all $i = 1, 2, \ldots, m$.
Observe that $x_n - T_i^n x_n = \gamma_n^i(x_n - T_i x_n)$. Condition (iii) and (21) imply that
$$x_n - T_i x_n \to 0 \quad \text{as } n \to \infty, \ \text{for each } i = 1, 2, \ldots, m. \tag{22}$$
Put $U := \frac{1}{m+3}\left(\sum_{i=1}^m T_i + V + J_s^{B_1} + J_s^{B_2}\right)$. Clearly, $U$ is a convex combination of nonexpansive mappings, so it is itself nonexpansive, and
$$\mathrm{Fix}(U) = \bigcap_{i=1}^m \mathrm{Fix}(T_i) \cap \mathrm{Fix}(V) \cap B_1^{-1}0 \cap B_2^{-1}0 = \Gamma.$$
We observe
$$\begin{aligned} \|x_n - U x_n\| &= \left\| x_n - \frac{1}{m+3}\left(\sum_{i=1}^m T_i x_n + V x_n + J_s^{B_1} x_n + J_s^{B_2} x_n\right) \right\| \\ &= \left\| \frac{1}{m+3}\left(m x_n - \sum_{i=1}^m T_i x_n\right) + \frac{1}{m+3}(x_n - V x_n) + \frac{1}{m+3}(x_n - J_s^{B_1} x_n) + \frac{1}{m+3}(x_n - J_s^{B_2} x_n) \right\| \\ &\le \frac{1}{m+3}\sum_{i=1}^m \|x_n - T_i x_n\| + \frac{1}{m+3}\|x_n - V x_n\| + \frac{1}{m+3}\|x_n - J_s^{B_1} x_n\| + \frac{1}{m+3}\|x_n - J_s^{B_2} x_n\|. \end{aligned}$$
In view of (11), (19), (20) and (22), we obtain
$$x_n - U x_n \to 0 \quad \text{as } n \to \infty. \tag{23}$$
Observe that
$$\|y_n - U y_n\| \le \|y_n - x_n\| + \|x_n - U x_n\| + \|U x_n - U y_n\| \le 2\|y_n - x_n\| + \|x_n - U x_n\|.$$
This together with (13) and (23) implies that
$$y_n - U y_n \to 0 \quad \text{as } n \to \infty. \tag{24}$$
Since $\{y_n\}$ is bounded, it has a subsequence $\{y_{n_i}\}$ that converges weakly to some $z \in H$. Further, Lemma 10 and (24) imply that $z \in \mathrm{Fix}(U) = \Gamma$. It follows that
$$\limsup_{n\to\infty} \langle g(x^*) - x^*, y_n - x^* \rangle = \lim_{i\to\infty} \langle g(x^*) - x^*, y_{n_i} - x^* \rangle = \langle g(x^*) - x^*, z - x^* \rangle = \langle g(x^*) - P_\Gamma g(x^*), z - P_\Gamma g(x^*) \rangle \le 0, \tag{25}$$
where the last inequality follows from (1).
Using Lemma 2, we obtain
$$\begin{aligned} \|y_n - x^*\|^2 &= \|\alpha_n(g(x_n) - x^*) + (1 - \alpha_n)(S_n x_n - x^*)\|^2 \\ &\le (1 - \alpha_n)^2\|S_n x_n - x^*\|^2 + 2\alpha_n\langle g(x_n) - x^*, y_n - x^* \rangle \\ &\le (1 - \alpha_n)^2\|x_n - x^*\|^2 + 2\alpha_n\langle g(x_n) - g(x^*), y_n - x^* \rangle + 2\alpha_n\langle g(x^*) - x^*, y_n - x^* \rangle \\ &\le (1 - \alpha_n)^2\|x_n - x^*\|^2 + 2\alpha_n k\|x_n - x^*\| \cdot \|y_n - x^*\| + E_n \\ &\le (1 - \alpha_n)^2\|x_n - x^*\|^2 + \alpha_n k\left[\|x_n - x^*\|^2 + \|y_n - x^*\|^2\right] + E_n, \end{aligned}$$
where $E_n = 2\alpha_n\langle g(x^*) - x^*, y_n - x^* \rangle$.
It turns out that
$$(1 - \alpha_n k)\|y_n - x^*\|^2 \le \left[(1 - \alpha_n)^2 + \alpha_n k\right]\|x_n - x^*\|^2 + E_n, \qquad \|y_n - x^*\|^2 \le \frac{(1 - \alpha_n)^2 + \alpha_n k}{1 - \alpha_n k}\|x_n - x^*\|^2 + \frac{E_n}{1 - \alpha_n k}.$$
Next, we have
$$\begin{aligned} \|x_{n+1} - x^*\|^2 &\le \|y_n - x^*\|^2 \le \frac{(1 - \alpha_n)^2 + \alpha_n k}{1 - \alpha_n k}\|x_n - x^*\|^2 + \frac{E_n}{1 - \alpha_n k} \\ &\le \left(1 - \frac{2\alpha_n(1 - k)}{1 - \alpha_n k}\right)\|x_n - x^*\|^2 + \frac{\alpha_n^2}{1 - \alpha_n k}\|x_n - x^*\|^2 + \frac{E_n}{1 - \alpha_n k} \\ &= \left(1 - \frac{2\alpha_n(1 - k)}{1 - \alpha_n k}\right)\|x_n - x^*\|^2 + \frac{2\alpha_n(1 - k)}{1 - \alpha_n k}\left(\frac{1}{1 - k}\langle g(x^*) - x^*, y_n - x^* \rangle + \frac{\alpha_n}{2(1 - k)}\|x_n - x^*\|^2\right), \end{aligned}$$
that is,
$$a_{n+1} \le (1 - \gamma_n) a_n + \gamma_n b_n, \tag{26}$$
where $a_n = \|x_n - x^*\|^2$, $\gamma_n = \frac{2\alpha_n(1 - k)}{1 - \alpha_n k}$ and $b_n = \frac{1}{1 - k}\langle g(x^*) - x^*, y_n - x^* \rangle + \frac{\alpha_n}{2(1 - k)}\|x_n - x^*\|^2$. Using (25), the condition $\alpha_n \to 0$ and the boundedness of $\{x_n\}$, we obtain $\limsup_{n\to\infty} b_n \le 0$. Using condition (i), it is easily seen that $\sum_{n=0}^{\infty} \gamma_n = \infty$. Finally, we apply Lemma 3 to (26) to conclude that $x_n \to x^*$ as $n \to \infty$.
Case II. Assume that there exists a subsequence $\{x_{n_j}\}$ of $\{x_n\}$ such that
$$\|x_{n_j} - x^*\| < \|x_{n_j + 1} - x^*\|, \quad \forall j \in \mathbb{N}.$$
Then, by Lemma 12, there exists a nondecreasing sequence of integers $\{m_q\} \subset \mathbb{N}$ such that $m_q \to \infty$ as $q \to \infty$ and
$$\|x_{m_q} - x^*\| \le \|x_{m_q + 1} - x^*\| \quad \text{and} \quad \|x_q - x^*\| \le \|x_{m_q + 1} - x^*\|, \quad \forall q \in \mathbb{N}. \tag{27}$$
Now, using (27) in (3), we have
$$0 \le \|y_{m_q} - x^*\| - \|W_{m_q} y_{m_q} - x^*\| \le \alpha_{m_q}\|g(x_{m_q}) - x^*\| + \|x_{m_q} - x^*\| - \|x_{m_q + 1} - x^*\| \le \alpha_{m_q}\|g(x_{m_q}) - x^*\|;$$
since $\{g(x_{m_q})\}$ is bounded and $\alpha_{m_q} \to 0$, we obtain $\|y_{m_q} - x^*\| - \|W_{m_q} y_{m_q} - x^*\| \to 0$ as $q \to \infty$.
As $\{W_{m_q}\}$ is a strongly nonexpansive sequence, we have $y_{m_q} - W_{m_q} y_{m_q} \to 0$ as $q \to \infty$.
Similarly, using (27) in (6) and (9), we obtain
$$x_{m_q} - S_{m_q} x_{m_q} \to 0 \quad \text{and} \quad x_{m_q} - V_{m_q} x_{m_q} \to 0 \quad \text{as } q \to \infty,$$
respectively. Arguing as in Case I, we obtain
$$x_{m_q} - V x_{m_q} \to 0, \quad x_{m_q} - J_{\mu_{m_q}}^{B_2} x_{m_q} \to 0, \quad y_{m_q} - x_{m_q} \to 0, \quad x_{m_q} - W_{m_q} x_{m_q} \to 0 \quad \text{as } q \to \infty. \tag{28}$$
Using (27) in (16), we have
$$0 \le \|x_{m_q} - x^*\| - \|T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q} - x^*\| \le \|x_{m_q} - x^*\| - \|x_{m_q + 1} - x^*\| + \|y_{m_q} - x_{m_q}\| \le \|y_{m_q} - x_{m_q}\|;$$
it follows from (28) that
$$\|x_{m_q} - x^*\| - \|T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q} - x^*\| \to 0 \quad \text{as } q \to \infty, \ \text{for each } i = 1, 2, \ldots, m.$$
Following similar arguments as in Case I, we have
$$x_{m_q} - T_i^{m_q} T_{i-1}^{m_q} \cdots T_1^{m_q} x_{m_q} \to 0, \quad x_{m_q} - J_s^{B_1} x_{m_q} \to 0, \quad x_{m_q} - J_s^{B_2} x_{m_q} \to 0, \quad x_{m_q} - T_i^{m_q} x_{m_q} \to 0, \quad x_{m_q} - T_i x_{m_q} \to 0, \quad x_{m_q} - U x_{m_q} \to 0, \quad y_{m_q} - U y_{m_q} \to 0 \quad \text{as } q \to \infty,$$
and
$$\limsup_{q\to\infty} \langle g(x^*) - x^*, y_{m_q} - x^* \rangle \le 0. \tag{29}$$
Next, from (26), we have
$$a_{m_q + 1} \le (1 - \gamma_{m_q}) a_{m_q} + \gamma_{m_q} b_{m_q}, \tag{30}$$
where $a_{m_q} = \|x_{m_q} - x^*\|^2$, $b_{m_q} = \frac{1}{1-k}\langle g(x^*) - x^*, y_{m_q} - x^* \rangle + \frac{\alpha_{m_q}}{2(1-k)}\|x_{m_q} - x^*\|^2$ and $\gamma_{m_q} = \frac{2\alpha_{m_q}(1-k)}{1 - \alpha_{m_q} k}$. Thus, (30) and (27) imply that
$$\gamma_{m_q} a_{m_q} \le a_{m_q} - a_{m_q + 1} + \gamma_{m_q} b_{m_q} \le \gamma_{m_q} b_{m_q}.$$
Using the fact that $\gamma_{m_q} > 0$, we obtain $a_{m_q} \le b_{m_q}$, that is,
$$\|x_{m_q} - x^*\|^2 \le \frac{1}{1-k}\langle g(x^*) - x^*, y_{m_q} - x^* \rangle + \frac{\alpha_{m_q}}{2(1-k)}\|x_{m_q} - x^*\|^2.$$
Since $\{x_{m_q}\}$ is bounded and $\alpha_{m_q} \to 0$, it follows from (29) that $\|x_{m_q} - x^*\| \to 0$ as $q \to \infty$.
This together with (30) implies that $\|x_{m_q + 1} - x^*\| \to 0$ as $q \to \infty$. But $\|x_q - x^*\| \le \|x_{m_q + 1} - x^*\|$ for all $q \in \mathbb{N}$, which gives $x_q \to x^*$ as $q \to \infty$. □
Remark 1.
A similar approach has been adopted in the study of consensus problems (see the seminal work [33]).
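To make iteration (2) concrete, here is a minimal numerical sketch in $\mathbb{R}^2$ (our own toy instance, not from the paper): we take $m = 1$, $T_1$ a rotation by $90°$ (a nonexpansive isometry with $\mathrm{Fix}(T_1) = \{0\}$), $V = I$, $B_1 = B_2 = \partial\|\cdot\|_1$ (whose resolvent is componentwise soft-thresholding, so $B_1^{-1}0 = B_2^{-1}0 = \{0\}$), and a contraction $g$. Here $\Gamma = \{0\}$, so Theorem 1 predicts $x_n \to 0$.

```python
import numpy as np

def soft(x, lam):
    """Resolvent of lam * (subdifferential of the l1-norm): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

T1 = lambda x: np.array([-x[1], x[0]])          # rotation by 90 degrees, Fix = {0}
V = lambda x: x                                  # identity
g = lambda x: 0.5 * x + np.array([1.0, -2.0])    # contraction with k = 0.5

x = np.array([5.0, 3.0])
for n in range(200):
    alpha, beta, gamma = 1.0 / (n + 2), 0.5, 0.5
    Vn_x = (1 - beta) * x + beta * V(x)          # V_n x_n
    y = alpha * g(x) + (1 - alpha) * soft(Vn_x, 1.0)   # mu_n = 1
    T1n_y = (1 - gamma) * y + gamma * T1(y)      # T_1^n y_n
    x = soft(T1n_y, 1.0)                         # rho_n = 1
print(x)   # ~ [0, 0], the unique point of Gamma
```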

4. Applications

In this section, we utilize the main result presented in this paper to study many problems in Hilbert spaces.

4.1. Application to a General System of Variational Inequalities

Let $H$ be a real Hilbert space and suppose that for each $i = 1, 2, \ldots, N$ we are given an operator $A_i : H \to H$ and a nonempty closed convex subset $C_i \subset H$. First, we introduce the following general system of variational inequalities in Hilbert space, which aims to find $(x_1^*, x_2^*, \ldots, x_N^*) \in C_1 \times C_2 \times \cdots \times C_N$ such that
$$\begin{cases} \langle \theta_1 A_1 x_2^* + x_1^* - x_2^*, x - x_1^* \rangle \ge 0, & \forall x \in C_1, \\ \langle \theta_2 A_2 x_3^* + x_2^* - x_3^*, x - x_2^* \rangle \ge 0, & \forall x \in C_2, \\ \quad \vdots \\ \langle \theta_{N-1} A_{N-1} x_N^* + x_{N-1}^* - x_N^*, x - x_{N-1}^* \rangle \ge 0, & \forall x \in C_{N-1}, \\ \langle \theta_N A_N x_1^* + x_N^* - x_1^*, x - x_N^* \rangle \ge 0, & \forall x \in C_N, \end{cases} \tag{31}$$
where $\theta_i > 0$ for all $i \in \{1, 2, \ldots, N\}$. Here, $\Omega$ will be used to denote the solution set of (31). In particular, if $N = 2$ and $C_1 = C_2 = C$, then problem (31) reduces to finding $(x_1^*, x_2^*) \in C \times C$ such that
$$\begin{cases} \langle \theta_1 A_1 x_2^* + x_1^* - x_2^*, x - x_1^* \rangle \ge 0, & \forall x \in C, \\ \langle \theta_2 A_2 x_1^* + x_2^* - x_1^*, x - x_2^* \rangle \ge 0, & \forall x \in C, \end{cases} \tag{32}$$
which was considered and studied by Ceng et al. [34]. In particular, if $A_1 = A_2 = A$, $\theta_1 = \theta_2 = \theta$ and $x_1^* = x_2^* = x^*$, then problem (32) reduces to the variational inequality problem of finding $x^* \in C$ such that
$$\langle A x^*, x - x^* \rangle \ge 0, \quad \forall x \in C. \tag{33}$$
Variational inequalities provide an effective framework for solving several important problems appearing in finance, optimization theory, game theory, mechanics and economics.
Another motivation for introducing (31) is that if we choose $x_1^* = x_2^* = \cdots = x_N^* = x^*$ and $\theta_i = 1$ for all $i \in \{1, 2, \ldots, N\}$, then (31) reduces to an important problem, called the common solutions to variational inequality problem (CSVIP), introduced by Censor, Gibali and Reich [16,17].
Lemma 13.
Let $\{C_i\}_{i=1}^N$ be a finite family of closed convex subsets of a real Hilbert space $H$ and let $A_i : H \to H$ be nonlinear mappings, $i = 1, 2, \ldots, N$. For given $x_i^* \in C_i$, $i = 1, 2, \ldots, N$, the tuple $(x_1^*, x_2^*, \ldots, x_N^*)$ is a solution of problem (31) if and only if
$$x_i^* = P_{C_i}(I - \theta_i A_i) x_{i+1}^*, \quad i = 1, 2, \ldots, N-1, \qquad x_N^* = P_{C_N}(I - \theta_N A_N) x_1^*.$$
That is,
$$x_1^* = P_{C_1}(I - \theta_1 A_1) P_{C_2}(I - \theta_2 A_2) \cdots P_{C_{N-1}}(I - \theta_{N-1} A_{N-1}) P_{C_N}(I - \theta_N A_N) x_1^*.$$
Proof. 
We can rewrite (31) as
$$\begin{cases} \langle x_1^* - (x_2^* - \theta_1 A_1 x_2^*), x - x_1^* \rangle \ge 0, & \forall x \in C_1, \\ \langle x_2^* - (x_3^* - \theta_2 A_2 x_3^*), x - x_2^* \rangle \ge 0, & \forall x \in C_2, \\ \quad \vdots \\ \langle x_{N-1}^* - (x_N^* - \theta_{N-1} A_{N-1} x_N^*), x - x_{N-1}^* \rangle \ge 0, & \forall x \in C_{N-1}, \\ \langle x_N^* - (x_1^* - \theta_N A_N x_1^*), x - x_N^* \rangle \ge 0, & \forall x \in C_N. \end{cases} \tag{34}$$
From (1), we find that (34) is equivalent to
$$x_i^* = P_{C_i}(I - \theta_i A_i) x_{i+1}^*, \quad i = 1, 2, \ldots, N-1, \qquad x_N^* = P_{C_N}(I - \theta_N A_N) x_1^*.$$
Therefore, we have
$$x_1^* = P_{C_1}(I - \theta_1 A_1) P_{C_2}(I - \theta_2 A_2) \cdots P_{C_{N-1}}(I - \theta_{N-1} A_{N-1}) P_{C_N}(I - \theta_N A_N) x_1^*. \qquad \square$$
Lemma 14.
Let $\{C_i\}_{i=1}^N$ be a finite family of closed convex subsets of a real Hilbert space $H$ and let $A_i$ be $\eta_i$-ism self-mappings on $H$, $i \in \{1, 2, \ldots, N\}$. Let $T : H \to H$ be the mapping defined by
$$T(x) = P_{C_1}(I - \theta_1 A_1) P_{C_2}(I - \theta_2 A_2) \cdots P_{C_{N-1}}(I - \theta_{N-1} A_{N-1}) P_{C_N}(I - \theta_N A_N) x, \quad x \in H.$$
If $\theta_i \in (0, 2\eta_i)$, $i = 1, 2, \ldots, N$, then $T$ is averaged.
Proof. 
We first prove that $I - \theta_i A_i$ is averaged for each $i \in \{1, 2, \ldots, N\}$.
Note that $I - \theta_i A_i = \left(1 - \frac{\theta_i}{2\eta_i}\right) I + \frac{\theta_i}{2\eta_i}(I - 2\eta_i A_i)$ and $\frac{\theta_i}{2\eta_i} \in (0, 1)$. Thus, applying Lemma 1, $I - 2\eta_i A_i$ is nonexpansive and therefore $I - \theta_i A_i$ is averaged for $\theta_i \in (0, 2\eta_i)$, $i = 1, 2, \ldots, N$. Also, it is well known that $P_{C_i}$ is averaged, hence so is the composition $P_{C_i}(I - \theta_i A_i)$ (see Lemma 8). Applying Lemma 8 again, the mapping $T$ is averaged. □
Theorem 2.
Let $\{C_i\}_{i=1}^N$ be a finite family of closed convex subsets of a real Hilbert space $H$ and let $A_i$ be $\eta_i$-ism self-mappings on $H$, $i \in \{1, 2, \ldots, N\}$. Assume that $\Omega = \mathrm{Fix}(T) \ne \emptyset$, where $T$ is defined in Lemma 14. Let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$\begin{cases} y_n = (1 - \alpha_n) x_n, \\ x_{n+1} = P_{C_1}(I - \theta_1 A_1) P_{C_2}(I - \theta_2 A_2) \cdots P_{C_{N-1}}(I - \theta_{N-1} A_{N-1}) P_{C_N}(I - \theta_N A_N) y_n, \end{cases} \tag{35}$$
where $\theta_i \in (0, 2\eta_i)$. Suppose that $\{\alpha_n\} \subset (0,1)$ satisfies the conditions $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$. Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in \Omega$.
Proof. 
Applying Lemma 14, we have that $T$ is an averaged mapping on $H$. Therefore, by definition, $T = (1 - \gamma) I + \gamma T_1$ for some $\gamma \in (0,1)$ and a nonexpansive mapping $T_1$ with $\mathrm{Fix}(T_1) = \mathrm{Fix}(T)$. Letting $m = 1$, $B_1 = B_2 = g = 0$, $V = I$ and $\gamma_n^1 = \gamma$ in Theorem 1, the conclusion of Theorem 2 is obtained. □
Remark 2.
In [17], Censor, Gibali and Reich proved a weak convergence theorem for solving the CSVIP. If we take $x_1^* = x_2^* = \cdots = x_N^* = z$ and $\theta_i = 1$ for all $i \in \{1, 2, \ldots, N\}$ in (31), then problem (31) reduces to the CSVIP, and through algorithm (35) we obtain a modification of Algorithm 4.1 in [17] with strong convergence, which is often much more desirable than weak convergence.

4.2. Convex Feasibility Problem

Let $C_i$, $i = 1, 2, \ldots, m$, be nonempty closed convex subsets of a real Hilbert space $H$ with $\bigcap_{i=1}^m C_i \ne \emptyset$. The convex feasibility problem (CFP) is to find $x^*$ such that $x^* \in \bigcap_{i=1}^m C_i$.
The most common methods for solving the CFP are projection and reflection methods, which include some well-known methods such as the alternating projection method [35,36,37], the Douglas–Rachford (DR) algorithm [38,39,40] and many extensions [41,42,43]. Most projection and reflection methods can be extended to solve the convex feasibility problem involving any finite number of sets. An exception is the Douglas–Rachford method, for which only the theory of two-set feasibility problems has been investigated. Motivated by this fact, Borwein and Tam [43] introduced the following cyclic Douglas–Rachford method, which can be applied directly to the many-set convex feasibility problem in a Hilbert space.
For any $x_0 \in H$, the cyclic Douglas–Rachford method defines a sequence $\{x_n\}$ by setting
$$x_{n+1} = T_{[C_1 C_2 \cdots C_m]} x_n, \quad n \in \mathbb{N}. \tag{36}$$
Here, $T_{[C_1 C_2 \cdots C_m]}$ is the $m$-set cyclic Douglas–Rachford operator defined as
$$T_{[C_1 C_2 \cdots C_m]} = T_{C_m, C_1} T_{C_{m-1}, C_m} \cdots T_{C_2, C_3} T_{C_1, C_2},$$
where each $T_{C_i, C_j} = \frac{I + R_{C_j} R_{C_i}}{2}$ is a two-set Douglas–Rachford operator and $R_{C_i} = 2P_{C_i} - I$ and $R_{C_j} = 2P_{C_j} - I$ are the reflection operators onto $C_i$ and $C_j$, respectively. However, it is known that the cyclic Douglas–Rachford method may fail to converge strongly (see [44]). We introduce a modification of the cyclic Douglas–Rachford method for which strong convergence is guaranteed.
Theorem 3.
Let $C_1, C_2, \ldots, C_m \subset H$ be closed convex sets with nonempty intersection and let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$\begin{cases} y_n = (1 - \alpha_n) x_n, \\ x_{n+1} = T_m^n T_{m-1}^n \cdots T_2^n T_1^n y_n, \end{cases} \tag{37}$$
where $T_i^n = (1 - \gamma_n^i) I + \gamma_n^i R_{C_{i+1}} R_{C_i}$ for $i = 1, 2, \ldots, m$ and $C_{m+1} := C_1$. Suppose that $\{\alpha_n\}$ and $\{\gamma_n^i\} \subset (0,1)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \gamma_n^i \le \limsup_{n\to\infty} \gamma_n^i < 1$, for all $i = 1, 2, \ldots, m$.
Then the sequence $\{x_n\}$ converges strongly to a point $x^*$ such that $P_{C_i} x^* \in \bigcap_{i=1}^m C_i$ for $i = 1, 2, \ldots, m$.
Proof. 
Set $T_i = R_{C_{i+1}} R_{C_i}$ for $i = 1, 2, \ldots, m$. By Proposition 4.2 in [24], $R_{C_i}$ and $R_{C_{i+1}}$ are nonexpansive; therefore their composition $T_i$ is nonexpansive.
Further, $\bigcap_{i=1}^m C_i \subset \bigcap_{i=1}^m \mathrm{Fix}(T_i)$. Putting $B_1 = B_2 = g = 0$ and $V = I$ in Theorem 1, the sequence $\{x_n\}$ converges strongly to a point $x^*$ in $\bigcap_{i=1}^m \mathrm{Fix}(T_i)$. By Corollary 4.3.17 (iii) in [45], $P_{C_i} x^* \in C_i \cap C_{i+1}$ for each $i = 1, 2, \ldots, m$; in particular, $P_{C_i} x^* \in C_{i+1}$ for each $i$. Further, using inequality (1) with $D = C_{i+1}$, $u = x^*$ and $v = P_{C_i} x^* \in C_{i+1}$, we have
$$0 \ge \sum_{i=1}^m \langle x^* - P_{C_{i+1}} x^*, P_{C_i} x^* - P_{C_{i+1}} x^* \rangle = \frac{1}{2}\sum_{i=1}^m\left(\|x^* - P_{C_{i+1}} x^*\|^2 + \|P_{C_i} x^* - P_{C_{i+1}} x^*\|^2 - \|x^* - P_{C_i} x^*\|^2\right) = \frac{1}{2}\sum_{i=1}^m \|P_{C_i} x^* - P_{C_{i+1}} x^*\|^2 \ge 0,$$
where the middle equality uses the identity $2\langle a, b\rangle = \|a\|^2 + \|b\|^2 - \|a - b\|^2$ and the last equality uses the cyclic convention $C_{m+1} = C_1$. Thus $P_{C_i} x^* = P_{C_{i+1}} x^*$ for each $i$, and therefore $P_{C_i} x^* \in \bigcap_{i=1}^m C_i$ for each $i$. □
Remark 3.
By taking $\gamma_n^i = \frac{1}{2}$ for all $i = 1, 2, \ldots, m$ in the operator $T_m^n T_{m-1}^n \cdots T_2^n T_1^n$, we obtain the cyclic Douglas–Rachford operator.
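A minimal sketch of the modified scheme (37) (our own illustration on a toy instance: two halfplanes in $\mathbb{R}^2$, not an example from the paper) is given below; by Theorem 3, the projections $P_{C_i} x_n$ of the limit lie in the intersection.

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Projection onto the halfspace {z : <a, z> <= b}."""
    viol = np.dot(a, x) - b
    return x if viol <= 0 else x - (viol / np.dot(a, a)) * a

# C1 = {x : x1 <= 1}, C2 = {x : x2 <= 1}; C1 ∩ C2 is nonempty.
sets = [(np.array([1.0, 0.0]), 1.0), (np.array([0.0, 1.0]), 1.0)]
P = [lambda x, ab=ab: proj_halfspace(x, *ab) for ab in sets]
R = [lambda x, Pi=Pi: 2.0 * Pi(x) - x for Pi in P]   # reflections R_{C_i}
m = len(sets)

x = np.array([4.0, 3.0])
for n in range(500):
    alpha, gamma = 1.0 / (n + 2), 0.5     # gamma_n^i = 1/2 gives the cyclic DR step
    y = (1 - alpha) * x
    for i in range(m):                    # apply T_1^n, then T_2^n, ..., C_{m+1} := C_1
        y = (1 - gamma) * y + gamma * R[(i + 1) % m](R[i](y))
    x = y
print(P[0](x), P[1](x))                   # both projections lie in C1 ∩ C2
```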

4.3. Zeros of Inverse Strongly Monotone and Maximal Monotone Mappings

Very recently, based on Yamada's hybrid steepest descent method, Tian and Jiang [46] introduced an iterative algorithm and proved a weak convergence theorem for zero points of an inverse strongly monotone mapping and fixed points of a nonexpansive mapping in Hilbert space. Moreover, using this algorithm, they also constructed the following scheme and obtained a weak convergence theorem for common zeros of an inverse strongly monotone mapping and a maximal monotone mapping:
$$\begin{cases} z_n = (1 - \lambda_n) x_n + \lambda_n J_r^{B_1} x_n, \\ x_{n+1} = (I - \mu \delta_n F) z_n. \end{cases}$$
Now, we combine the hybrid steepest descent method, the proximal point algorithm and the viscosity approximation method to obtain the following strong convergence result.
Theorem 4.
Let $M : H \to 2^H$ be a maximal monotone mapping and let $F$ be a $\theta$-ism mapping of $H$ into itself such that $M^{-1}0 \cap F^{-1}0 \ne \emptyset$. Let $g : H \to H$ be a contraction with coefficient $k \in (0,1)$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$\begin{cases} y_n = \alpha_n g(x_n) + (1 - \alpha_n) x_n, \\ z_n = (1 - \lambda_n) y_n + \lambda_n J_r^M y_n, \\ x_{n+1} = (I - \eta \delta_n F) z_n, \quad n \ge 0. \end{cases}$$
Suppose that $\{\lambda_n\} \subset (0, 2)$, $\{\eta \delta_n\} \subset (0, 2\theta)$ and $\{\alpha_n\} \subset (0,1)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2$;
(iii) $0 < \liminf_{n\to\infty} \eta \delta_n \le \limsup_{n\to\infty} \eta \delta_n < 2\theta$.
Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in M^{-1}0 \cap F^{-1}0$.
Proof. 
First, we rewrite $I - \eta \delta_n F$ as
$$I - \eta \delta_n F = \left(1 - \frac{\eta \delta_n}{2\theta}\right) I + \frac{\eta \delta_n}{2\theta}(I - 2\theta F).$$
Using Lemma 1, $I - 2\theta F$ is nonexpansive. Also, it is easily proven that $\mathrm{Fix}(I - 2\theta F) = F^{-1}0$.
Further, we observe that
$$(1 - \lambda_n) I + \lambda_n J_r^M = \left(1 - \frac{\lambda_n}{2}\right) I + \frac{\lambda_n}{2}(2 J_r^M - I).$$
By Proposition 4.2 in [24], $2J_r^M - I$ is nonexpansive. Also note that $\mathrm{Fix}(2J_r^M - I) = M^{-1}0$. Now, taking $m = 2$, $T_1 = 2J_r^M - I$, $T_2 = I - 2\theta F$, $\gamma_n^1 = \frac{\lambda_n}{2}$, $\gamma_n^2 = \frac{\eta \delta_n}{2\theta}$, $B_1 = B_2 = 0$ and $V = I$ in Theorem 1 yields the conclusion of Theorem 4. □
Remark 4.
Theorem 4 improves Tian and Jiang's result ([46] Theorem 4.4) from a weak to a strong convergence theorem. Also, $\{\lambda_n\}$ is confined to $(0,1)$ in ([46] Theorem 4.4), whereas in Theorem 4 we relax $\{\lambda_n\} \subset (0,1)$ to $\{\lambda_n\} \subset (0,2)$.
Theorem 5.
Let $S$ be a $\theta$-ism mapping of $H$ into itself and let $B_1, B_2 : H \to 2^H$ be maximal monotone mappings such that $S^{-1}0 \cap B_1^{-1}0 \cap B_2^{-1}0 \ne \emptyset$. Let $g : H \to H$ be a contraction with coefficient $k \in (0,1)$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H$ and
$$x_{n+1} = J_{\rho_n}^{B_1}\big(\alpha_n g(x_n) + (1 - \alpha_n) J_{\mu_n}^{B_2}(x_n - \lambda_n S x_n)\big), \quad n \ge 0. \tag{38}$$
Suppose that $\{\alpha_n\} \subset (0,1)$, $\{\lambda_n\} \subset (0, 2\theta)$ and $\{\rho_n\}, \{\mu_n\} \subset (0, \infty)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < 2\theta$;
(iii) for all sufficiently large $n$, $\min\{\rho_n, \mu_n\} > \varepsilon$ for some $\varepsilon > 0$.
Then the sequence $\{x_n\}$ converges strongly to a point $x^* \in S^{-1}0 \cap B_1^{-1}0 \cap B_2^{-1}0$.
Proof. 
First, we rewrite
$$I - \lambda_n S = \left(1 - \frac{\lambda_n}{2\theta}\right) I + \frac{\lambda_n}{2\theta}(I - 2\theta S).$$
By using Lemma 1, $I - 2\theta S$ is nonexpansive, and it is easily proven that $\mathrm{Fix}(I - 2\theta S) = S^{-1}0$. Putting $V = I - 2\theta S$, $\beta_n = \frac{\lambda_n}{2\theta}$ and $T_i = I$ for all $i = 1, 2, \ldots, m$ in Theorem 1, the conclusion of Theorem 5 is obtained. □
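As an illustration of scheme (38) (a toy instance of our own choosing, not from the paper), take $H = \mathbb{R}^2$, $S = I - P_K$ with $K$ the closed unit ball (so $S$ is $1$-ism and $S^{-1}0 = K$), and $B_1 = N_{C_1}$, $B_2 = N_{C_2}$ normal cones of simple sets, whose resolvents are the projections $P_{C_1}$, $P_{C_2}$; the iteration then locates the point $x^* = P_\Gamma g(x^*)$ of $\Gamma = K \cap C_1 \cap C_2$, here $x^* = (0.5, 0)$.

```python
import numpy as np

proj_K = lambda x: x if np.linalg.norm(x) <= 1.0 else x / np.linalg.norm(x)
S = lambda x: x - proj_K(x)                         # 1-ism, S^{-1}0 = unit ball K
J_B1 = lambda x: np.array([max(x[0], 0.0), x[1]])   # P_{C1}, C1 = {x : x1 >= 0}
J_B2 = lambda x: np.clip(x, -0.5, 0.5)              # P_{C2}, C2 = [-1/2, 1/2]^2
g = lambda x: 0.5 * x + np.array([2.0, 0.0])        # contraction with k = 0.5

x = np.array([3.0, -2.0])
for n in range(2000):
    alpha = (n + 2.0) ** -0.7        # alpha_n -> 0 with sum alpha_n = infinity
    lam = 1.0                        # lam in (0, 2*theta) with theta = 1
    x = J_B1(alpha * g(x) + (1 - alpha) * J_B2(x - lam * S(x)))
print(x)   # ~ [0.5, 0.0], the unique x* with x* = P_Gamma(g(x*))
```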
Remark 5.
1. 
Theorem 5 improves and extends Iiduka and Takahashi's result ([14] Theorem 4.3). By taking $B_1 = 0$, $B_2 = B$, $\mu_n = r$ and $g = x_0$ in Theorem 5, we obtain ([14] Theorem 4.3) without the extra conditions $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ and $\sum_{n=1}^{\infty} |\lambda_{n+1} - \lambda_n| < \infty$ assumed in ([14] Theorem 4.3).
2. 
If we take $B_1 = S = 0$ and $g = x_0$ in Theorem 5, we obtain Kamimura and Takahashi's result ([13] Theorem 1). We also remove the superfluous condition $\lim_{n\to\infty} r_n = \infty$ assumed in ([13] Theorem 1). Hence our result improves the result of Kamimura and Takahashi.
3. 
The alternating resolvent method studied in Bauschke et al. [47] deals essentially with a special case of algorithm (38). In fact, if we take $g = S = 0$, then (38) becomes
$$x_{n+1} = J_{\rho_n}^{B_1}\big((1 - \alpha_n) J_{\mu_n}^{B_2} x_n\big), \quad n \ge 0. \tag{39}$$
We can rewrite (39) as
$$x_{n+1} = J_{\gamma_n}^{A_n} J_{\mu_n}^{B_2} x_n, \quad n \ge 0, \tag{40}$$
where $\gamma_n = \frac{\rho_n}{1 - \alpha_n}$ and $A_n = B_1 + \frac{\alpha_n}{\rho_n} I$ is the Tikhonov regularization of $B_1$. Thus Theorem 5 extends and improves the result of Bauschke et al. [47] from a weak to a strong convergence theorem by using the prox-Tikhonov method.
4. 
Theorem 5 also improves the convergence result studied in Lehdili and Moudafi [15]. In fact, if we take $B_2 = 0$ in (40), then (40) becomes
$$x_{n+1} = J_{\gamma_n}^{A_n} x_n, \quad n \ge 0, \tag{41}$$
which is the prox-Tikhonov algorithm presented by Lehdili and Moudafi [15].

4.4. Split Common Null Point Problem

Let $H_1$ and $H_2$ be two real Hilbert spaces. Given two set-valued operators $A_1 : H_1 \to 2^{H_1}$ and $A_2 : H_2 \to 2^{H_2}$ and a bounded linear operator $U : H_1 \to H_2$, the split common null point problem (SCNPP) is the problem of finding
$$\hat{x} \in H_1 \text{ such that } 0 \in A_1(\hat{x}) \text{ and } 0 \in A_2(U\hat{x}). \tag{42}$$
In [48], Byrne et al. introduced this problem for finding such a solution $\hat{x}$ when $A_1$ and $A_2$ are maximal monotone.
Using the fact that $0 \in A(x)$ if and only if $x \in \mathrm{Fix}(J_\mu^A)$, problem (42) is equivalent to the problem of finding
$$\hat{x} \in H_1 \text{ such that } \hat{x} \in \mathrm{Fix}(J_\mu^{A_1}) \text{ and } U\hat{x} \in \mathrm{Fix}(J_\mu^{A_2}),$$
where $\mu > 0$. Here, $\Psi$ will be used to denote the solution set of (42).
Lemma 15.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $U : H_1 \to H_2$ be a bounded linear operator and $S : H_2 \to H_2$ a firmly nonexpansive mapping. Then $U^*(I - S)U$ is $\frac{1}{\|U\|^2}$-ism.
Proof. 
Since $S$ is firmly nonexpansive, using Proposition 4.2 in [24], $I - S$ is firmly nonexpansive. Therefore, for all $x, y \in H_1$, we obtain
$$\langle U^*(I - S)Ux - U^*(I - S)Uy, x - y \rangle = \langle U^*\big((I - S)Ux - (I - S)Uy\big), x - y \rangle = \langle (I - S)Ux - (I - S)Uy, Ux - Uy \rangle \ge \|(I - S)Ux - (I - S)Uy\|^2.$$
Also,
$$\|U^*(I - S)Ux - U^*(I - S)Uy\|^2 = \langle U^*\big((I - S)Ux - (I - S)Uy\big), U^*\big((I - S)Ux - (I - S)Uy\big) \rangle = \langle (I - S)Ux - (I - S)Uy, UU^*\big((I - S)Ux - (I - S)Uy\big) \rangle \le \|U\|^2\|(I - S)Ux - (I - S)Uy\|^2.$$
Combining the above inequalities, we obtain
$$\langle U^*(I - S)Ux - U^*(I - S)Uy, x - y \rangle \ge \frac{1}{\|U\|^2}\|U^*(I - S)Ux - U^*(I - S)Uy\|^2.$$
Thus $U^*(I - S)U$ is $\frac{1}{\|U\|^2}$-ism. □
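Lemma 15 is easy to check numerically. The sketch below (our own Monte-Carlo test with an arbitrary matrix $U$ and $S$ a box projection, which is firmly nonexpansive; not an example from the paper) samples random pairs and verifies the $\frac{1}{\|U\|^2}$-ism inequality.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.normal(size=(3, 2))                 # bounded linear operator
S = lambda y: np.clip(y, -1.0, 1.0)         # projection onto a box: firmly nonexpansive
G = lambda x: U.T @ (U @ x - S(U @ x))      # G = U*(I - S)U
beta = 1.0 / np.linalg.norm(U, 2) ** 2      # claimed ism constant 1/||U||^2

for _ in range(10_000):
    x, y = 3.0 * rng.normal(size=2), 3.0 * rng.normal(size=2)
    lhs = np.dot(G(x) - G(y), x - y)
    rhs = beta * np.linalg.norm(G(x) - G(y)) ** 2
    assert lhs >= rhs - 1e-9                # <Gx - Gy, x - y> >= beta ||Gx - Gy||^2
```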
Theorem 6.
Let $H_1$ and $H_2$ be two real Hilbert spaces. Let $A_1 : H_1 \to 2^{H_1}$ and $A_2 : H_2 \to 2^{H_2}$ be two set-valued maximal monotone operators. Let $U : H_1 \to H_2$ be a bounded linear operator and $g : H_1 \to H_1$ a contraction with coefficient $k \in (0,1)$. Let $\Psi \ne \emptyset$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H_1$ and
$$x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) J_\mu^{A_1}\big(I + \lambda_n U^*(J_\mu^{A_2} - I)U\big) x_n, \quad n \ge 0.$$
Suppose that $\{\alpha_n\} \subset (0,1)$ and $\{\lambda_n\} \subset \left(0, \frac{2}{\|U\|^2}\right)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \frac{2}{\|U\|^2}$.
Then the sequence $\{x_n\}$ converges strongly to a point in $\Psi$.
Proof. 
Let $\hat{x}$ solve the SCNPP, i.e., $\hat{x} \in \Psi$. Then $\hat{x} \in H_1$ with $0 \in A_1(\hat{x})$ and $0 \in A_2(U\hat{x})$. Note that $0 \in A_2(U\hat{x})$ if and only if $U\hat{x} \in \mathrm{Fix}(J_\mu^{A_2})$.
Therefore $(I - J_\mu^{A_2})U\hat{x} = 0$ and so $U^*(I - J_\mu^{A_2})U\hat{x} = 0$, which means $\hat{x} \in (U^*(I - J_\mu^{A_2})U)^{-1}0$. Thus $\Psi \subset A_1^{-1}0 \cap (U^*(I - J_\mu^{A_2})U)^{-1}0$.
Now let $\hat{x} \in A_1^{-1}0 \cap (U^*(I - J_\mu^{A_2})U)^{-1}0$, which implies
$$U^*(I - J_\mu^{A_2})U\hat{x} = 0. \tag{43}$$
Choose $z \in \Psi$. Then $Uz \in \mathrm{Fix}(J_\mu^{A_2})$. An application of Lemma 9 yields
$$\langle (I - J_\mu^{A_2})U\hat{x}, Uz - J_\mu^{A_2} U\hat{x} \rangle \le 0. \tag{44}$$
Using (43) and (44), we have
$$\|(I - J_\mu^{A_2})U\hat{x}\|^2 = \langle (I - J_\mu^{A_2})U\hat{x}, U\hat{x} - Uz \rangle + \langle (I - J_\mu^{A_2})U\hat{x}, Uz - J_\mu^{A_2} U\hat{x} \rangle \le \langle (I - J_\mu^{A_2})U\hat{x}, U\hat{x} - Uz \rangle = \langle U^*(I - J_\mu^{A_2})U\hat{x}, \hat{x} - z \rangle = 0.$$
Therefore $U\hat{x} \in \mathrm{Fix}(J_\mu^{A_2})$, i.e., $0 \in A_2(U\hat{x})$, and thus $\hat{x} \in \Psi$. Hence $\Psi = A_1^{-1}0 \cap (U^*(I - J_\mu^{A_2})U)^{-1}0$.
Also, using Lemma 15, $U^*(I - J_\mu^{A_2})U$ is $\frac{1}{\|U\|^2}$-ism.
Now, putting $B_1 = 0$, $B_2 = A_1$, $S = U^*(I - J_\mu^{A_2})U$ and $\mu_n = \mu$ in Theorem 5, the conclusion of Theorem 6 is obtained. □
Remark 6.
1. 
Theorem 6 generalizes and improves the result in ([49] Theorem 5.1). Indeed, the result in ([49] Theorem 5.1) considers the special case $\lambda_n = \gamma$ for all $n$. Moreover, we assume that $\lambda_n \in \left(0, \frac{2}{\|U\|^2}\right)$, while in ([49] Theorem 5.1) $\gamma$ was assumed to lie in $\left(0, \frac{1}{\|U\|^2}\right)$, which is a more restrictive condition.
2. 
If we take $g = x_0$ and $\lambda_n = \gamma$ in Theorem 6, we obtain the result of Byrne et al. ([48] Theorem 4.5).

4.5. Split Feasibility Problem

Let $C$ and $Q$ be nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively. The split feasibility problem (SFP) [23] is defined as finding a point $\hat{x}$ satisfying
$$\hat{x} \in C \text{ and } U\hat{x} \in Q, \tag{45}$$
where $U : H_1 \to H_2$ is a bounded linear operator. In [50], Byrne gave the following algorithm, called the CQ algorithm, for solving the SFP (45):
$$x_{n+1} = P_C\big(I - \gamma U^*(I - P_Q)U\big) x_n,$$
where $\gamma \in \left(0, \frac{2}{\|U\|^2}\right)$.
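A minimal sketch of the CQ iteration (our own toy data, not from the paper: box constraints for $C$ and $Q$ and a fixed $2 \times 2$ matrix $U$, chosen so that the SFP is consistent) reads as follows.

```python
import numpy as np

U = np.array([[2.0, 0.0],
              [1.0, 1.0]])
proj_C = lambda x: np.clip(x, -1.0, 1.0)   # C = [-1, 1]^2
proj_Q = lambda y: np.clip(y, 0.0, 2.0)    # Q = [0, 2]^2
gamma = 1.9 / np.linalg.norm(U, 2) ** 2    # gamma in (0, 2/||U||^2)

x = np.array([5.0, -4.0])
for _ in range(1000):
    Ux = U @ x
    x = proj_C(x - gamma * U.T @ (Ux - proj_Q(Ux)))
print(x, U @ x)   # x in C with U x in Q
```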
Let $h : H \to (-\infty, \infty]$ be a proper lower semicontinuous convex function. Then the subdifferential of $h$ can be defined as
$$\partial h(x) = \{ y \in H : h(x) + \langle z - x, y \rangle \le h(z), \ \forall z \in H \}, \quad x \in H.$$
By Rockafellar's theorem [51], $\partial h$ is a maximal monotone operator of $H$ into itself. For a closed convex subset $C$ of $H$, the indicator function $i_C$ can be defined as
$$i_C(x) = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$$
Also recall that the normal cone of $C$ at a point $x \in C$ can be defined as
$$N_C(x) = \{ y \in H : \langle y, z - x \rangle \le 0, \ \forall z \in C \}.$$
Since $i_C : H \to (-\infty, \infty]$ is a proper lower semicontinuous convex function, $\partial i_C$ is a maximal monotone operator. It is also known that $\partial i_C = N_C$ (see [24] Ex. 16.12). Using Theorem 1 and the equality
$$(I + r \partial i_C)^{-1} = (I + r N_C)^{-1} = P_C,$$
valid for every closed convex subset $C$ of $H$ and every $r > 0$, we solve the SFP as follows:
Theorem 7.
Suppose the solution set of the SFP (45) is nonempty. Let $g : H_1 \to H_1$ be a contraction with coefficient $k \in (0,1)$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H_1$ and
$$x_{n+1} = \alpha_n g(x_n) + (1 - \alpha_n) P_C\big(I - \lambda_n U^*(I - P_Q)U\big) x_n, \quad n \ge 0.$$
Suppose that $\{\alpha_n\} \subset (0,1)$ and $\{\lambda_n\} \subset \left(0, \frac{2}{\|U\|^2}\right)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \lambda_n \le \limsup_{n\to\infty} \lambda_n < \frac{2}{\|U\|^2}$.
Then the sequence $\{x_n\}$ converges strongly to a point in the solution set of the SFP (45).
Proof. 
Put $A_1 = N_C$ and $A_2 = N_Q$ in Theorem 6, which yields the conclusion of Theorem 7. □
Remark 7.
1. 
Theorem 7 extends and improves the result in ([52] Corollary 3.7). In fact, taking $g = u$ (a constant) and $\lambda_n = \gamma$ for all $n$ in Theorem 7, we obtain the result in ([52] Corollary 3.7) without the extra condition $\sum_{n=1}^{\infty} |\alpha_{n+1} - \alpha_n| < \infty$ which was assumed in ([52] Corollary 3.7).
2. 
Theorem 7 also improves the result in ([53] Theorem 1).

4.6. Split Monotone Variational Inclusion Problem and Fixed Point Problem for Strictly Pseudocontractive Maps

Let $H_1$ and $H_2$ be two real Hilbert spaces and let $M_1 : H_1 \to 2^{H_1}$ and $M_2 : H_2 \to 2^{H_2}$ be two set-valued maximal monotone operators.
Let $U : H_1 \to H_2$ be a bounded linear operator and let $f_1 : H_1 \to H_1$ and $f_2 : H_2 \to H_2$ be two ism mappings. The split monotone variational inclusion problem (SMVIP) is to find $\hat{x} \in H_1$ such that
$$0 \in f_1(\hat{x}) + M_1(\hat{x}) \tag{46}$$
and
$$\hat{y} = U\hat{x} \in H_2 \text{ such that } 0 \in f_2(\hat{y}) + M_2(\hat{y}). \tag{47}$$
Also, it can easily be proven that (see, e.g., Moudafi [54])
$$0 \in f_1(\hat{x}) + M_1(\hat{x}) \iff \hat{x} = J_\lambda^{M_1}(I - \lambda f_1)\hat{x}$$
and
$$0 \in f_2(\hat{y}) + M_2(\hat{y}) \iff \hat{y} = J_\lambda^{M_2}(I - \lambda f_2)\hat{y}.$$
Let $K$ be a nonempty closed convex subset of a Hilbert space $H$. A mapping $S : K \to K$ is said to be $\theta$-strictly pseudocontractive if there exists $\theta$ with $0 \le \theta < 1$ such that
$$\|Sx - Sy\|^2 \le \|x - y\|^2 + \theta\|(I - S)x - (I - S)y\|^2, \quad \forall x, y \in K.$$
It can be observed that $I - S$ is $\frac{1-\theta}{2}$-ism. In fact, in a Hilbert space we have
$$\|Sx - Sy\|^2 = \|(x - y) - ((I - S)x - (I - S)y)\|^2 = \|x - y\|^2 + \|(I - S)x - (I - S)y\|^2 - 2\langle x - y, (I - S)x - (I - S)y \rangle.$$
Hence, combining this identity with the definition above, we have
$$\langle x - y, (I - S)x - (I - S)y \rangle \ge \frac{1 - \theta}{2}\|(I - S)x - (I - S)y\|^2.$$
Moudafi [54] introduced the SMVIP (46) and (47) and gave an iterative algorithm for solving this problem. Very recently, Shehu and Ogbuisi [55] proposed an iterative algorithm for solving the SMVIP which also solves a fixed point problem for strictly pseudocontractive maps in a real Hilbert space.
The following result of Shehu and Ogbuisi [55] is a consequence of our Theorem 1.
Theorem 8.
Let $H_1$ and $H_2$ be two real Hilbert spaces and let $M_1 : H_1 \to 2^{H_1}$ and $M_2 : H_2 \to 2^{H_2}$ be two set-valued maximal monotone operators. Let $U : H_1 \to H_2$ be a bounded linear operator. Let $f_1 : H_1 \to H_1$ be $\nu_1$-ism and $f_2 : H_2 \to H_2$ be $\nu_2$-ism. Let $S : H_1 \to H_1$ be a $\theta$-strictly pseudocontractive mapping with $\mathrm{Fix}(S) \cap \Lambda \ne \emptyset$, where $\Lambda$ is the solution set of (46) and (47). Let $\{x_n\}$ be a sequence defined by $x_0 \in H_1$ and
$$\begin{cases} z_n = (1 - \alpha_n) x_n, \\ y_n = J_\lambda^{M_1}(I - \lambda f_1)\big(z_n + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U z_n\big), \\ x_{n+1} = (1 - \delta_n) y_n + \delta_n S y_n, \quad n \ge 0, \end{cases}$$
where $\lambda \in (0, 2\nu)$, $\nu = \min\{\nu_1, \nu_2\}$, and $\eta \in \left(0, \frac{1}{L}\right)$ with $L$ the spectral radius of the operator $U^*U$ and $U^*$ the adjoint of $U$. Suppose that $\{\alpha_n\} \subset (0,1)$ and $\{\delta_n\} \subset (0, 1 - \theta)$ satisfy
(i) $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=0}^{\infty} \alpha_n = \infty$;
(ii) $0 < \liminf_{n\to\infty} \delta_n \le \limsup_{n\to\infty} \delta_n < 1 - \theta$.
Then the sequence $\{x_n\}$ converges strongly to a point in $\mathrm{Fix}(S) \cap \Lambda$.
Proof. 
With arguments similar to those in the proof of Lemma 14, we can easily show that $I - \lambda f_1$ and $I - \lambda f_2$ are averaged mappings on $H_1$ and $H_2$, respectively. Further, in view of Lemma 3.3 in [56], $I + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U$ is an averaged mapping on $H_1$. Also, applying Lemma 8, the operator $J_\lambda^{M_1}(I - \lambda f_1)$ is averaged. Therefore, the composition $R$ is averaged, where $R := J_\lambda^{M_1}(I - \lambda f_1)\big(I + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U\big)$. Thus, by definition, $R = (1 - \gamma^1) I + \gamma^1 T_1$ for some $\gamma^1 \in (0,1)$ and a nonexpansive mapping $T_1$ with $\mathrm{Fix}(T_1) = \mathrm{Fix}(R)$.
Also, we note that
$$(1 - \delta_n) I + \delta_n S = (1 - \gamma_n^2) I + \gamma_n^2 T_2,$$
where $\gamma_n^2 = \frac{\delta_n}{1 - \theta}$ and $T_2 = I - (1 - \theta)(I - S)$.
Note that $I - S$ is $\frac{1-\theta}{2}$-ism. Therefore, using Lemma 1, $T_2$ is nonexpansive. Also, it is easily proven that $\mathrm{Fix}(T_2) = \mathrm{Fix}(S)$.
Now let $\hat{x} \in \Lambda$; then $\hat{x} \in \mathrm{Fix}(J_\lambda^{M_1}(I - \lambda f_1))$ and $U\hat{x} \in \mathrm{Fix}(J_\lambda^{M_2}(I - \lambda f_2))$.
It is obvious that $U\hat{x} \in \mathrm{Fix}(J_\lambda^{M_2}(I - \lambda f_2))$ implies $\hat{x} \in \mathrm{Fix}\big(I + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U\big)$. Therefore, $\hat{x} \in \mathrm{Fix}(J_\lambda^{M_1}(I - \lambda f_1)) \cap \mathrm{Fix}\big(I + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U\big)$. Using Lemma 8, $\hat{x} \in \mathrm{Fix}(R)$. Thus $\Lambda \subset \mathrm{Fix}(R)$.
Now let $\hat{x} \in \mathrm{Fix}(R)$. Using Lemma 8, $\hat{x} \in \mathrm{Fix}(J_\lambda^{M_1}(I - \lambda f_1)) \cap \mathrm{Fix}\big(I + \eta U^*(J_\lambda^{M_2}(I - \lambda f_2) - I)U\big)$. It follows from Lemma 3.3 in [57] that
$$\hat{x} \in \mathrm{Fix}(J_\lambda^{M_1}(I - \lambda f_1)) \text{ and } U\hat{x} \in \mathrm{Fix}(J_\lambda^{M_2}(I - \lambda f_2)).$$
Therefore, $\hat{x} \in \Lambda$, and hence $\Lambda = \mathrm{Fix}(R)$. Thus $\mathrm{Fix}(S) \cap \Lambda = \mathrm{Fix}(S) \cap \mathrm{Fix}(R) = \mathrm{Fix}(T_2) \cap \mathrm{Fix}(T_1)$.
Now taking $B_1 = B_2 = g = 0$, $V = I$, $m = 2$, $\gamma_n^1 = \gamma^1$ and $\gamma_n^2 = \frac{\delta_n}{1 - \theta}$ in Theorem 1 yields the desired result. □

4.7. Split Variational Inequality Problem (SVIP)

The SVIP [16] can be formulated as follows:
$$\text{find a point } \hat{x} \in C \text{ such that } \langle f_1(\hat{x}), x - \hat{x} \rangle \ge 0, \text{ for all } x \in C, \tag{48}$$
and such that
$$\hat{y} = U\hat{x} \in Q \text{ solves } \langle f_2(\hat{y}), y - \hat{y} \rangle \ge 0, \text{ for all } y \in Q, \tag{49}$$
where $C$ and $Q$ are nonempty closed convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, $U : H_1 \to H_2$ is a bounded linear operator, and $f_1 : H_1 \to H_1$ and $f_2 : H_2 \to H_2$ are two given operators. If we denote the solution sets of the VIPs in (48) and (49) by $\mathrm{SOL}(f_1, C)$ and $\mathrm{SOL}(f_2, Q)$, respectively, then the solution set of the SVIP can be written as
$$\Phi = \{ \hat{x} \in \mathrm{SOL}(f_1, C) : U\hat{x} \in \mathrm{SOL}(f_2, Q) \}.$$
As mentioned in [54], if we choose $M_1 = N_C$ and $M_2 = N_Q$ in the SMVIP (46) and (47), respectively, then we recover the SVIP (48)–(49), where $N_C$ and $N_Q$ are the normal cones of the closed convex sets $C$ and $Q$, respectively.
Theorem 9.
Let $H_1$ and $H_2$ be two real Hilbert spaces and let $U : H_1 \to H_2$ be a bounded linear operator. Let $f_1 : H_1 \to H_1$ be $\nu_1$-ism and $f_2 : H_2 \to H_2$ be $\nu_2$-ism. Assume that $\Phi \ne \emptyset$ and let $\{x_n\}$ be a sequence defined by $x_0 \in H_1$ and
$$\begin{cases} y_n = (1 - \alpha_n) x_n, \\ x_{n+1} = P_C(I - \lambda f_1)\big(y_n + \eta U^*(P_Q(I - \lambda f_2) - I)U y_n\big), \quad n \ge 0, \end{cases}$$
where $\lambda \in (0, 2\nu)$, $\nu = \min\{\nu_1, \nu_2\}$, and $\eta \in \left(0, \frac{1}{L}\right)$ with $L$ the spectral radius of the operator $U^*U$ and $U^*$ the adjoint of $U$. Suppose that $\{\alpha_n\}$ is a real sequence in $(0,1)$ satisfying the conditions $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=0}^{\infty} \alpha_n = \infty$. Then the sequence $\{x_n\}$ converges strongly to a point in $\Phi$.
Proof. 
Put $M_1 = N_C$, $M_2 = N_Q$ and $S = I$ in Theorem 8, which yields the desired result. □
Remark 8.
Theorem 9 improves and extends Censor et al.'s result ([16] Theorem 6.3), where it was assumed that, for all $\hat{x} \in \mathrm{SOL}(f_1, C)$,
$$\langle f_1(x), P_C(I - \lambda f_1)(x) - \hat{x} \rangle \ge 0, \quad \forall x \in H_1.$$
We drop this assumption in our result. Furthermore, our result extends Censor et al.'s result ([16] Theorem 6.3) from weak to strong convergence.

5. Concluding Remarks

In this article, we presented a new iterative algorithm for finding a common point of the fixed point sets of nonexpansive mappings and the sets of zeros of maximal monotone mappings. Further, we introduced a new general system of variational inequalities, which includes some existing general systems of variational inequalities, and showed that our algorithm converges strongly to a solution of this variational inequality problem. We also gave a modification of the cyclic Douglas–Rachford method to solve the convex feasibility problem in such a way that strong convergence is guaranteed. In addition, we combined the hybrid steepest descent method, the proximal point algorithm and the viscosity approximation method to obtain a common zero of maximal monotone and inverse strongly monotone mappings. Further, we improved and extended many results on various split-type problems, such as the split common null point problem, the split feasibility problem, the split monotone variational inclusion problem and the split variational inequality problem. The applicability of our algorithm is not limited to the problems discussed above; it can further be used to solve many other important problems, for instance, the quasi-variational inclusion problem, convex minimization problems, the lasso problem, equilibrium problems and many more. Since in this paper we have worked in a Hilbert space, it is a natural question for future research to extend our results to Banach spaces.

Author Contributions

Each author participated in conceptualization, validation, formal analysis, investigation, writing (original draft preparation), and writing (review and editing).

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chugh, R.; Malik, P.; Kumar, V. On a new faster implicit fixed point iterative scheme in convex metric spaces. J. Funct. Spaces 2015, 2015.
  2. Khan, A.R.; Kumar, V.; Narwal, S.; Chugh, R. Random iterative algorithms and almost sure stability in Banach spaces. Filomat 2017, 31, 3611–3626.
  3. Kumar, V.; Hussain, N.; Malik, P.; Chugh, R. Jungck-type implicit iterative algorithms with numerical examples. Filomat 2017, 31, 2303–2320.
  4. Yao, Y.; Agarwal, R.P.; Postolache, M.; Liou, Y.C. Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 2014, 183.
  5. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
  6. Yao, Y.; Liou, Y.C.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
  7. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019, 1–11.
  8. Yao, Y.; Postolache, M.; Yao, J.C. An iterative algorithm for solving generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
  9. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019.
  10. Yao, Y.; Noor, M.A.; Liou, Y.C.; Kang, S.M. Iterative algorithms for generalized variational inequalities. Abstr. Appl. Anal. 2012, 1–10.
  11. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
  12. Güler, O. On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 1991, 29, 403–419.
  13. Kamimura, S.; Takahashi, W. Approximating solutions of maximal monotone operators in Hilbert spaces. J. Approx. Theory 2000, 106, 226–240.
  14. Iiduka, H.; Takahashi, W. Strong convergence theorems for nonexpansive nonself-mappings and inverse-strongly-monotone mappings. J. Convex Anal. 2004, 11, 69–79.
  15. Lehdili, N.; Moudafi, A. Combining the proximal algorithm and Tikhonov regularization. Optimization 1996, 37, 239–252.
  16. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  17. Censor, Y.; Gibali, A.; Reich, S. A von Neumann alternating method for finding common solutions to variational inequalities. Nonlinear Anal. 2012, 75, 4596–4603.
  18. Censor, Y.; Gibali, A.; Reich, S.; Sabach, S. Common solutions to variational inequalities. Set-Valued Var. Anal. 2012, 20, 229–247.
  19. Blatt, D.; Hero III, A.O. Energy based sensor network source localization via projection onto convex sets (POCS). IEEE Trans. Signal Process. 2006, 54, 3614–3619.
  20. Censor, Y.; Altschuler, M.D.; Powlis, W.D. On the use of Cimmino's simultaneous projections method for computing a solution of the inverse problem in radiation therapy treatment planning. Inverse Probl. 1988, 4, 607–623.
  21. Herman, G.T. Fundamentals of Computerized Tomography: Image Reconstruction from Projections, 2nd ed.; Springer: London, UK, 2009.
  22. Combettes, P.L. The convex feasibility problem in image recovery. Adv. Imaging Electron Phys. 1996, 95, 155–270.
  23. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  24. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011.
  25. Aoyama, K.; Kimura, Y.; Takahashi, W.; Toyoda, M. On a strongly nonexpansive sequence in Hilbert spaces. J. Nonlinear Convex Anal. 2007, 8, 471–489.
  26. Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houston J. Math. 1977, 3, 459–470.
  27. Xu, H.K. Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66, 240–256.
  28. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  29. Huang, Y.Y.; Hong, C.C. A unified iterative treatment for solutions of problems of split feasibility and equilibrium in Hilbert spaces. Abstr. Appl. Anal. 2013, 613928.
  30. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge University Press: London, UK, 1990.
  31. Barbu, V. Nonlinear Semigroups and Differential Equations in Banach Spaces; Noordhoff: Groningen, The Netherlands, 1976.
  32. Mainge, P.-E. Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 2008, 16, 899–912.
  33. Shang, Y. Resilient consensus of switched multi-agent systems. Syst. Control Lett. 2018, 122, 12–18.
  34. Ceng, L.C.; Wang, C.; Yao, J.C. Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67, 375–390.
  35. Bauschke, H.H.; Borwein, J.M. On the convergence of von Neumann's alternating projection algorithm. Set-Valued Anal. 1993, 1, 185–212.
  36. Borwein, J.M.; Li, G.; Yao, L. Analysis of the convergence rate for the cyclic projection algorithm applied to basic semi-algebraic convex sets. SIAM J. Optim. 2014, 24, 498–527.
  37. Gubin, L.G.; Polyak, B.T.; Raik, E.V. The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7, 1–24.
  38. Douglas, J.; Rachford, H.H. On the numerical solution of the heat conduction problem in 2 and 3 space variables. Trans. Am. Math. Soc. 1956, 82, 421–439.
  39. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
  40. Lions, P.L.; Mercier, B. Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 1979, 16, 964–979.
  41. Aragón Artacho, F.J.; Censor, Y.; Gibali, A. The cyclic Douglas–Rachford algorithm with r-sets-Douglas–Rachford operators. Optim. Methods Softw. 2018.
  42. Bauschke, H.H.; Noll, D.; Phan, H.M. Linear and strong convergence of algorithms involving averaged nonexpansive operators. J. Math. Anal. Appl. 2015, 421, 1–20.
  43. Borwein, J.M.; Tam, M.K. A cyclic Douglas–Rachford iteration scheme. J. Optim. Theory Appl. 2014, 160, 1–29.
  44. Aragón Artacho, F.J.; Borwein, J.M.; Tam, M.K. Recent results on Douglas–Rachford methods for combinatorial optimization problems. J. Optim. Theory Appl. 2014, 163, 1–30.
  45. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics; Springer: Heidelberg, Germany, 2012; Volume 2057.
  46. Tian, M.; Jiang, B.N. Weak convergence theorem for zero points of inverse strongly monotone mapping and fixed points of nonexpansive mapping in Hilbert space. Optimization 2017, 66, 1689–1698.
  47. Bauschke, H.H.; Combettes, P.L.; Reich, S. The asymptotic behavior of the composition of two resolvents. Nonlinear Anal. 2005, 60, 283–301.
  48. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. The split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  49. Thong, D.V. Viscosity approximation methods for solving fixed point problems and split common fixed point problems. J. Fixed Point Theory Appl. 2017, 19, 1481–1499.
  50. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  51. Rockafellar, R.T. Characterization of the subdifferentials of convex functions. Pac. J. Math. 1966, 17, 497–510.
  52. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
  53. Deepho, J.; Kumam, P. A viscosity approximation method for the split feasibility problem. Trans. Engng. Tech. 2015, 69–77.
  54. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
  55. Shehu, Y.; Ogbuisi, F.U. An iterative method for solving split monotone variational inclusion and fixed point problems. RACSAM 2016, 110, 503–518.
  56. Takahashi, W.; Xu, H.K.; Yao, J.C. Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 2015, 23, 205–221.
  57. Kraikaew, R.; Saejung, S. On split common fixed point problems. J. Math. Anal. Appl. 2014, 415, 513–524.
