Article

Strong Convergence Theorems of Viscosity Iterative Algorithms for Split Common Fixed Point Problems

College of Science, Civil Aviation University of China, Tianjin 300300, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(1), 14; https://doi.org/10.3390/math7010014
Submission received: 19 November 2018 / Revised: 17 December 2018 / Accepted: 19 December 2018 / Published: 24 December 2018
(This article belongs to the Special Issue Fixed Point Theory and Related Nonlinear Problems with Applications)

Abstract

In this paper, we propose a viscosity approximation method to solve the split common fixed point problem and consider the bounded perturbation resilience of the proposed method in general Hilbert spaces. Under some mild conditions, we prove that our algorithms strongly converge to a solution of the split common fixed point problem, which is also the unique solution of the variational inequality problem. Finally, we show the convergence and effectiveness of the algorithms by two numerical examples.

1. Introduction

Let $H_1$ and $H_2$ be two real Hilbert spaces with the inner product $\langle \cdot, \cdot \rangle$ and the induced norm $\|\cdot\|$.
The split feasibility problem (SFP for short) is as follows:
$$\text{Find a point } x^* \in C \text{ such that } Ax^* \in Q, \tag{1}$$
where $C$ and $Q$ are nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and $A$ is a bounded linear operator of $H_1$ into $H_2$.
If the set of solutions of problem (1) is nonempty, then $x^*$ solves problem (1) if and only if
$$x^* = P_C(I - \tau A^*(I - P_Q)A)x^*, \tag{2}$$
where $\tau > 0$, $P_C$ denotes the metric projection of $H_1$ onto $C$ and $A^*$ is the adjoint operator of $A$.
Many problems in engineering and technology can be modeled by problem (1), and many authors have shown that the SFP has applications in real life, such as image reconstruction, signal processing and intensity-modulated radiation therapy (see [1,2,3]).
In 1994, Censor and Elfving [4] introduced an algorithm to solve the SFP in finite-dimensional Euclidean spaces. In 2002, Byrne [5] improved the algorithm of Censor and Elfving and presented a new method, called the CQ algorithm, for solving the SFP (1) as follows:
$$x_{n+1} = P_C(I - \tau A^*(I - P_Q)A)x_n, \quad n \in \mathbb{N}. \tag{3}$$
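To make the scheme concrete, the following is a minimal sketch of the CQ algorithm (3) in Python, assuming for illustration that $C$ and $Q$ are boxes so their metric projections are simple clips; the matrix, sets and stepsize below are our own illustrative choices, not data from the paper.

```python
import numpy as np

# Minimal sketch of the CQ iteration (3) for the SFP with illustrative data.
A = np.array([[2.0, 0.0], [0.0, 3.0]])

def proj_C(x):
    return np.clip(x, -4.0, 4.0)      # P_C for the (assumed) box C = [-4, 4]^2

def proj_Q(y):
    return np.clip(y, 6.0, 12.0)      # P_Q for the (assumed) box Q = [6, 12]^2

L = np.linalg.norm(A.T @ A, 2)        # L = ||A^T A|| (spectral norm)
tau = 1.0 / L                         # a stepsize in (0, 2/L)

x = np.random.rand(2)
for _ in range(1000):
    # x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n)
    x = proj_C(x - tau * A.T @ (A @ x - proj_Q(A @ x)))
print(x)                              # a point of C whose image A x is close to Q
```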
The split common fixed point problem (shortly, SCFPP) is formulated as follows:
$$\text{Find a point } x^* \in \mathrm{Fix}(U) \text{ such that } Ax^* \in \mathrm{Fix}(T), \tag{4}$$
where $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are nonlinear mappings; here, $\mathrm{Fix}(U)$ denotes the set of fixed points of the mapping $U$. We use $S$ to denote the solution set of problem (4).
Note that, since every closed convex subset of a Hilbert space is the fixed point set of its associated metric projection, the SFP becomes a special case of the SCFPP by taking $U := P_C$ and $T := P_Q$.
In 2007, Censor and Segal [6] first studied the SCFPP and proposed the following iterative algorithm to solve it:
$$x_{n+1} = U(x_n - \tau A^*(I - T)Ax_n), \quad n \in \mathbb{N}, \tag{5}$$
where $\tau$ is a properly chosen stepsize. Algorithm (5) was originally designed to solve problem (4) for directed mappings.
In 2010, Moudafi [7] proposed an iterative method to solve the SCFPP for quasi-nonexpansive mappings. In 2014, combining the Moudafi method with the Halpern iterative method, Kraikaew and Saejung [8] proposed a new iterative algorithm, which does not involve the projection operator, to solve the split common fixed point problem. More specifically, their algorithm generates a sequence $\{x_n\}$ via the recursion:
$$x_{n+1} = \alpha_n x_0 + (1 - \alpha_n)U(x_n - \tau A^*(I - T)Ax_n), \quad n \in \mathbb{N},$$
where $x_0 \in H_1$ is a fixed element and $U$ and $T$ are quasi-nonexpansive operators.
Recently, many authors have studied the SCFPP, the generalized SCFPP and some related problems (see, for instance, [3,4,5,9,10,11,12,13]) and have also proposed many algorithms to solve the SCFPP (see [14,15,16,17] and the references therein).
On the other hand, the bounded perturbation resilience and superiorization of iterative methods have been studied by some authors (see [18,19,20,21,22,23]). These problems have received much attention because of their applications in convex feasibility problems [24], image reconstruction [25], inverse problems of radiation therapy [26] and so on.
Let $\mathcal{P}$ denote an algorithmic operator and suppose that the iteration $x_{n+1} = \mathcal{P}x_n$ is replaced by
$$x_{n+1} = \mathcal{P}(x_n + \beta_n \nu_n),$$
where $\{\beta_n\}$ is a sequence of nonnegative real numbers and $\{\nu_n\}$ is a sequence in $H$ such that
$$\sum_{n=0}^{\infty}\beta_n < \infty \quad \text{and} \quad \|\nu_n\| \le M. \tag{6}$$
If the perturbed algorithm still converges, then the algorithm $\mathcal{P}$ is said to be bounded perturbation resilient [19].
In 2016, Jin, Censor and Jiang [21] introduced the projected scaled gradient method (PSG for short) with bounded perturbations for solving the following minimization problem:
$$\min_{x \in C} f(x), \tag{7}$$
where $f$ is a continuously differentiable convex function. The PSG method generates a sequence $\{x_n\}$ defined by
$$x_{n+1} = P_C(x_n - \gamma_n D(x_n)\nabla f(x_n) + e(x_n)), \quad n \ge 0, \tag{8}$$
where $D(x_n)$ is a diagonal scaling matrix and $e(x_n)$ denotes the sequence of outer perturbations satisfying $\sum_{n=0}^{\infty}\|e(x_n)\| < \infty$.
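As an illustration, here is a minimal sketch of the PSG iteration (8) on a toy problem; the objective $f(x) = \frac{1}{2}\|x - b\|^2$, the box $C$, the constant stepsize and the geometrically decaying errors are all assumptions made for this example.

```python
import numpy as np

# Sketch of PSG (8) for min over C of f(x) = 0.5*||x - b||^2 (illustrative data).
b = np.array([2.0, -1.0])
grad_f = lambda x: x - b                    # gradient of f
P_C = lambda x: np.clip(x, 0.0, 1.0)        # projection onto the box C = [0, 1]^2

x = np.zeros(2)
for n in range(200):
    D = np.eye(2)                           # diagonal scaling matrix (identity here)
    gamma = 0.5                             # constant stepsize
    e = 0.5 ** n * np.ones(2)               # summable outer perturbations e(x_n)
    x = P_C(x - gamma * D @ grad_f(x) + e)
print(x)  # approaches P_C(b) = (1, 0) as the perturbations vanish
```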
Recently, Xu [22] considered superiorization techniques for the relaxed PSG as follows:
$$x_{n+1} = (1 - \tau_n)x_n + \tau_n P_C(x_n - \gamma_n D(x_n)\nabla f(x_n) + e(x_n)), \quad n \ge 0, \tag{9}$$
where $\{\tau_n\}$ is a sequence in $[0,1]$.
Recently, for solving the minimization problem $\min_{x \in H} f(x) + g(x)$ for the sum of two convex functions, Guo and Cui [20] considered the modified proximal gradient method:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n)\,\mathrm{prox}_{\lambda_n g}(I - \lambda_n \nabla f)(x_n) + e(x_n), \quad n \ge 0, \tag{10}$$
and, under suitable conditions, they proved some strong convergence theorems for the method. The definition of the proximal operator $\mathrm{prox}_{\lambda\varphi}$ is as follows.
Definition 1 
(see [27]). Let $\Gamma_0(H)$ be the space of functions on a real Hilbert space $H$ that are proper, lower semicontinuous and convex. The proximal operator of $\varphi \in \Gamma_0(H)$ is defined by
$$\mathrm{prox}_{\varphi}(x) = \mathop{\mathrm{arg\,min}}_{\nu \in H}\left\{\varphi(\nu) + \frac{1}{2}\|\nu - x\|^2\right\}, \quad x \in H.$$
The proximal operator of $\varphi$ of order $\lambda > 0$ is defined as the proximal operator of $\lambda\varphi$, that is,
$$\mathrm{prox}_{\lambda\varphi}(x) = \mathop{\mathrm{arg\,min}}_{\nu \in H}\left\{\varphi(\nu) + \frac{1}{2\lambda}\|\nu - x\|^2\right\}, \quad x \in H.$$
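As a concrete illustration (not taken from the paper), for $H = \mathbb{R}$ and $\varphi(\nu) = |\nu|$, the minimization above can be solved in closed form and gives the well-known soft-thresholding operator:
$$\mathrm{prox}_{\lambda|\cdot|}(x) = \begin{cases} x - \lambda, & x > \lambda, \\ 0, & |x| \le \lambda, \\ x + \lambda, & x < -\lambda. \end{cases}$$
Such closed forms are what make proximal gradient methods like (10) practical for concrete problems.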
Now, we propose a viscosity method for the problem (4) as follows:
$$x_{n+1} = \alpha_n h(x_n + e(x_n)) + (1 - \alpha_n)U(x_n - \tau_n A^*(I - T)Ax_n + e(x_n)), \quad n \ge 0. \tag{11}$$
If we treat the above algorithm as the basic algorithm $\mathcal{P}$, its bounded perturbation is a sequence $\{x_n\}$ generated by the iterative process:
$$y_n = x_n + \beta_n\nu_n, \qquad x_{n+1} = \alpha_n h(y_n + e(y_n)) + (1 - \alpha_n)U(y_n - \tau_n A^*(I - T)Ay_n + e(y_n)), \quad n \ge 0. \tag{12}$$
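The following is a minimal runnable sketch of the perturbed iteration (12) in Python; here $U = P_C$ and $T = P_Q$ are projections (which are $\frac{1}{2}$-averaged), and the specific sets, contraction, perturbations and errors are illustrative choices satisfying the conditions of Theorem 2 below, not the paper's data.

```python
import numpy as np

# Sketch of the perturbed viscosity iteration (12) with illustrative data.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
L = np.linalg.norm(A.T @ A, 2)

def U(x):                                   # P_C for the (assumed) ball ||x|| <= 4
    return x / max(1.0, np.linalg.norm(x) / 4.0)

def T(y):                                   # P_Q for the (assumed) box [6, 12]^2
    return np.clip(y, 6.0, 12.0)

h = lambda x: 0.1 * x                       # an (assumed) 0.1-contraction

x = np.random.rand(2)
for n in range(2000):
    alpha = 1.0 / (n + 2)                   # condition (i)
    tau = 1.0 / L                           # condition (ii): tau < 2/L since gamma_2 = 1/2
    beta_nu = 0.5 ** n * np.ones(2)         # beta_n * nu_n, summable beta_n, bounded nu_n (6)
    e = 0.5 ** n * np.ones(2)               # summable errors, condition (iii)
    y = x + beta_nu
    x = alpha * h(y + e) + (1 - alpha) * U(y - tau * A.T @ (A @ y - T(A @ y)) + e)
print(x)  # should approach the unique solution of the VI (13), here (3, 2)
```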
In this paper, mainly based on the above works [6,20,22], we prove that our main iterative method (11) is bounded perturbation resilient and that, under some mild conditions, our algorithms converge strongly to a solution of the split common fixed point problem, which is also the unique solution of the variational inequality problem (13). Finally, we give two numerical examples to demonstrate the effectiveness of our iterative schemes.

2. Preliminaries

Let $\{x_n\}$ be a sequence in a real Hilbert space $H$. We adopt the following notations:
(1)
Denote $\{x_n\}$ converging weakly to $x$ by $x_n \rightharpoonup x$ and $\{x_n\}$ converging strongly to $x$ by $x_n \to x$.
(2)
Denote the weak $\omega$-limit set of $\{x_n\}$ by $\omega_w(x_n) := \{x : x_{n_j} \rightharpoonup x \text{ for some subsequence } \{x_{n_j}\} \text{ of } \{x_n\}\}$.
Definition 2.
A mapping $F: H \to H$ is said to be:
(i) 
Lipschitz if there exists a positive constant $L$ such that
$$\|Fx - Fy\| \le L\|x - y\|, \quad \forall x, y \in H.$$
In particular, if $L = 1$, then we say that $F$ is nonexpansive, namely,
$$\|Fx - Fy\| \le \|x - y\|, \quad \forall x, y \in H.$$
If $L \in [0,1)$, then we say that $F$ is contractive.
(ii) 
$\alpha$-averaged mapping (shortly, $\alpha$-av) if
$$F = (1 - \alpha)I + \alpha T,$$
where $\alpha \in [0,1)$ and $T: H \to H$ is nonexpansive.
Definition 3.
A mapping $B: H \to H$ is said to be:
(i) 
monotone if
$$\langle Bx - By, x - y\rangle \ge 0, \quad \forall x, y \in H.$$
(ii) 
$\eta$-strongly monotone if there exists a positive constant $\eta$ such that
$$\langle Bx - By, x - y\rangle \ge \eta\|x - y\|^2, \quad \forall x, y \in H.$$
(iii) 
$\alpha$-inverse strongly monotone (shortly, $\alpha$-ism) if there exists a positive constant $\alpha$ such that
$$\langle Bx - By, x - y\rangle \ge \alpha\|Bx - By\|^2, \quad \forall x, y \in H.$$
In particular, if $\alpha = 1$, then we say that $B$ is firmly nonexpansive, namely,
$$\langle Bx - By, x - y\rangle \ge \|Bx - By\|^2, \quad \forall x, y \in H.$$
Using the Cauchy–Schwarz inequality, it is easy to deduce that $B$ is $\frac{1}{\alpha}$-Lipschitz if it is $\alpha$-ism.
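In detail, for $Bx \ne By$, combining the $\alpha$-ism inequality with the Cauchy–Schwarz inequality gives
$$\alpha\|Bx - By\|^2 \le \langle Bx - By, x - y\rangle \le \|Bx - By\|\,\|x - y\|,$$
and dividing both sides by $\alpha\|Bx - By\|$ yields $\|Bx - By\| \le \frac{1}{\alpha}\|x - y\|$.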
Now, we give the following lemmas and propositions needed in the proof of the main results.
Lemma 1 
([28]). Let $H$ be a real Hilbert space. Then, the following inequality holds:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle x + y, y\rangle, \quad \forall x, y \in H.$$
Lemma 2 
([29]). Let $h: H \to H$ be a $\rho$-contraction with $\rho \in (0,1)$ and $T: H \to H$ be a nonexpansive mapping. Then,
(i) 
$I - h$ is $(1 - \rho)$-strongly monotone, that is,
$$\langle (I - h)x - (I - h)y, x - y\rangle \ge (1 - \rho)\|x - y\|^2, \quad \forall x, y \in H.$$
(ii) 
$I - T$ is monotone, that is,
$$\langle (I - T)x - (I - T)y, x - y\rangle \ge 0, \quad \forall x, y \in H.$$
Proposition 1 
([30]).
(i) 
If $T_1, T_2, \ldots, T_n$ are averaged mappings, then $T_nT_{n-1}\cdots T_1$ is averaged. In particular, if $T_i$ is $\alpha_i$-av for each $i = 1, 2$, where $\alpha_i \in (0,1)$, then $T_2T_1$ is $(\alpha_1 + \alpha_2 - \alpha_1\alpha_2)$-av.
(ii) 
If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then we have
$$\bigcap_{i=1}^N \mathrm{Fix}(T_i) = \mathrm{Fix}(T_1\cdots T_N).$$
(iii) 
A mapping $T$ is nonexpansive if and only if $I - T$ is $\frac{1}{2}$-ism.
(iv) 
If $T$ is $\nu$-ism, then, for any $\tau > 0$, $\tau T$ is $\frac{\nu}{\tau}$-ism.
(v) 
$T$ is averaged if and only if $I - T$ is $\nu$-ism for some $\nu > \frac{1}{2}$. Indeed, for any $0 < \alpha < 1$, $T$ is $\alpha$-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-ism.
Lemma 3 
([31]). Let $H$ be a real Hilbert space and $T: H \to H$ be a nonexpansive mapping with $\mathrm{Fix}(T) \ne \emptyset$. If $\{x_n\}$ is a sequence in $H$ weakly converging to $x$ and $\{(I - T)x_n\}$ converges strongly to $y$, then $(I - T)x = y$. In particular, if $y = 0$, then $x \in \mathrm{Fix}(T)$.
Lemma 4 
([32] or [33]). Assume that $\{s_n\}$ is a sequence of nonnegative real numbers such that
$$s_{n+1} \le (1 - \gamma_n)s_n + \gamma_n\delta_n, \qquad s_{n+1} \le s_n - \eta_n + \varphi_n, \quad n \ge 0,$$
where $\{\gamma_n\}$ is a sequence in $(0,1)$, $\{\eta_n\}$ is a sequence of nonnegative real numbers and $\{\delta_n\}$ and $\{\varphi_n\}$ are two sequences in $\mathbb{R}$ such that
(i) 
$\sum_{n=0}^{\infty}\gamma_n = \infty$;
(ii) 
$\lim_{n\to\infty}\varphi_n = 0$;
(iii) 
$\lim_{k\to\infty}\eta_{n_k} = 0$ implies $\limsup_{k\to\infty}\delta_{n_k} \le 0$ for any subsequence $\{n_k\} \subset \{n\}$.
Then, $\lim_{n\to\infty}s_n = 0$.
Lemma 5.
Assume that $A: H_1 \to H_2$ is a bounded linear operator and $A^*$ is the corresponding adjoint operator of $A$. Let $T: H_2 \to H_2$ be a nonexpansive mapping. If there exists a point $z \in H_1$ such that $Az \in \mathrm{Fix}(T)$, then
$$(I - T)Ax = 0 \iff A^*(I - T)Ax = 0, \quad \forall x \in H_1.$$
Proof. 
It is clear that $(I - T)Ax = 0$ implies $A^*(I - T)Ax = 0$ for all $x \in H_1$.
To see the converse, let $x \in H_1$ be such that $A^*(I - T)Ax = 0$. Take $z \in H_1$ with $Az \in \mathrm{Fix}(T)$. Since $T$ is nonexpansive, we have
$$\|TAx - Az\|^2 = \|TAx - TAz\|^2 \le \|Ax - Az\|^2$$
and
$$\|TAx - Az\|^2 = \|Ax - TAx - (Ax - Az)\|^2 = \|Ax - TAx\|^2 - 2\langle Ax - TAx, Ax - Az\rangle + \|Ax - Az\|^2.$$
Combining the above two formulas, we have
$$\|Ax - TAx\|^2 \le 2\langle Ax - TAx, Ax - Az\rangle = 2\langle A^*(I - T)Ax, x - z\rangle = 0,$$
and hence $(I - T)Ax = 0$.
This completes the proof. □

3. The Main Results

In 2000, Moudafi [34] proposed the viscosity approximation method:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n)Nx_n, \quad n \ge 0,$$
which converges strongly to a fixed point $x^*$ of the nonexpansive mapping $N$ (see [35,36]). In 2004, Xu [29] further proved that $x^* \in \mathrm{Fix}(N)$ is also the unique solution of the following variational inequality problem:
$$\langle (I - h)x^*, \tilde{x} - x^*\rangle \ge 0, \quad \forall \tilde{x} \in \mathrm{Fix}(N), \tag{13}$$
where $h: H \to H$ is a $\rho$-contraction. By Lemma 2, $I - h$ is strongly monotone; hence, the solution of problem (13) is unique.
In this section, we present a viscosity iterative method for solving problem (4). Meanwhile, the algorithm approximates the unique solution of the variational inequality problem (13).
Putting $e_n := e(x_n)$, we can rewrite the iteration (11) as follows:
$$x_{n+1} = \alpha_n h(x_n + e_n) + (1 - \alpha_n)U(x_n - \tau_n A^*(I - T)Ax_n + e_n) = \alpha_n h(x_n) + (1 - \alpha_n)U(x_n - \tau_n A^*(I - T)Ax_n) + \tilde{e}_n, \quad n \ge 0, \tag{14}$$
where
$$\tilde{e}_n = \alpha_n\big(h(x_n + e_n) - h(x_n)\big) + (1 - \alpha_n)\big(U(x_n - \tau_n A^*(I - T)Ax_n + e_n) - U(x_n - \tau_n A^*(I - T)Ax_n)\big).$$
Since $U$ is nonexpansive and $h$ is contractive, it is easy to get
$$\|\tilde{e}_n\| \le \alpha_n\|h(x_n + e_n) - h(x_n)\| + (1 - \alpha_n)\|U(x_n - \tau_n A^*(I - T)Ax_n + e_n) - U(x_n - \tau_n A^*(I - T)Ax_n)\| \le (\alpha_n\rho + 1 - \alpha_n)\|e_n\| \le \|e_n\|. \tag{15}$$
Theorem 1.
Let $H_1, H_2$ be two real Hilbert spaces and $A: H_1 \to H_2$ be a bounded linear operator with $L = \|A^*A\|$, where $A^*$ is the adjoint of $A$. Suppose that $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are two averaged mappings with coefficients $\gamma_1$ and $\gamma_2$, respectively. Assume that problem (4) is consistent (i.e., $S \ne \emptyset$). Let $h: H_1 \to H_1$ be a $\rho$-contraction with $0 \le \rho < 1$. For any $x_0 \in H_1$, define the sequence $\{x_n\}$ by (14). If the following conditions are satisfied:
(i) 
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(ii) 
$0 < \liminf_{n\to\infty}\tau_n \le \limsup_{n\to\infty}\tau_n < \frac{1}{\gamma_2 L}$;
(iii) 
$\sum_{n=0}^{\infty}\|e_n\| < \infty$.
Then, the sequence $\{x_n\}$ converges strongly to a point $x^* \in S$, which is also the unique solution of the variational inequality problem (13).
Proof. 
Set $V_{\tau_n} := U(I - \tau_n A^*(I - T)A)$. Then, by Proposition 1, it follows that $U(I - \tau_n A^*(I - T)A)$ is $(\gamma_1 + (1 - \gamma_1)\gamma_2\tau_n L)$-av whenever $0 < \tau_n < \frac{1}{\gamma_2 L}$.
Step 1. Show that $\{x_n\}$ is bounded. For any $z \in S$, we have
$$\begin{aligned}
\|x_{n+1} - z\| &= \|\alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n + \tilde{e}_n - z\| \\
&= \|\alpha_n(h(x_n) - z) + (1 - \alpha_n)(V_{\tau_n}x_n - z) + \tilde{e}_n\| \\
&\le \alpha_n\|h(x_n) - h(z)\| + \alpha_n\|h(z) - z\| + (1 - \alpha_n)\|V_{\tau_n}x_n - z\| + \|\tilde{e}_n\| \\
&\le \alpha_n\rho\|x_n - z\| + \alpha_n\|h(z) - z\| + (1 - \alpha_n)\|x_n - z\| + \|\tilde{e}_n\| \\
&= (1 - \alpha_n(1 - \rho))\|x_n - z\| + \alpha_n(1 - \rho)\,\frac{\|h(z) - z\| + \|\tilde{e}_n\|/\alpha_n}{1 - \rho}.
\end{aligned}$$
Note that condition (iii) and (15) imply that $\sum_{n=0}^{\infty}\|\tilde{e}_n\| < \infty$ and, from conditions (i), (iii) and $\alpha_n > 0$, it is easy to show that $\{\|\tilde{e}_n\|/\alpha_n\}$ is bounded. Therefore, $M_1 := \sup_{n\in\mathbb{N}}\{\|h(z) - z\| + \|\tilde{e}_n\|/\alpha_n\}$ is finite. Thus, an induction argument shows that
$$\|x_n - z\| \le \max\left\{\|x_0 - z\|, \frac{M_1}{1 - \rho}\right\},$$
so the sequence $\{x_n\}$ is bounded and so are $\{h(x_n)\}$, $\{V_{\tau_n}x_n\}$ and $\{A^*(I - T)Ax_n\}$.
Step 2. Show that, for any sequence $\{n_k\} \subset \{n\}$, if $\eta_{n_k} \to 0$, then $\lim_{k\to\infty}\|x_{n_k} - V_{\tau_{n_k}}x_{n_k}\| = 0$ (the sequence $\{\eta_n\}$ is defined below). First, if $z \in S$, then we have
$$\begin{aligned}
\|x_{n+1} - z\|^2 ={}& \|\alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n + \tilde{e}_n - z\|^2 \\
={}& \|\alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n - z\|^2 + 2\langle \alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n - z, \tilde{e}_n\rangle + \|\tilde{e}_n\|^2 \\
\le{}& \alpha_n^2\|h(x_n) - z\|^2 + (1 - \alpha_n)^2\|V_{\tau_n}x_n - z\|^2 + 2\alpha_n(1 - \alpha_n)\langle h(x_n) - z, V_{\tau_n}x_n - z\rangle \\
& + \big(2\alpha_n\|h(x_n) - z\| + 2(1 - \alpha_n)\|x_n - z\| + \|\tilde{e}_n\|\big)\|\tilde{e}_n\| \\
\le{}& 2\alpha_n^2\big(\|h(x_n) - h(z)\|^2 + \|h(z) - z\|^2\big) + (1 - \alpha_n)^2\|V_{\tau_n}x_n - z\|^2 \\
& + 2\alpha_n(1 - \alpha_n)\langle h(x_n) - z, V_{\tau_n}x_n - z\rangle + M_2\|\tilde{e}_n\| \\
\le{}& 2\alpha_n^2\big(\|h(x_n) - h(z)\|^2 + \|h(z) - z\|^2\big) + (1 - \alpha_n)^2\|V_{\tau_n}x_n - z\|^2 \\
& + 2\alpha_n(1 - \alpha_n)\big(\|h(x_n) - h(z)\|\,\|x_n - z\| + \langle h(z) - z, V_{\tau_n}x_n - z\rangle\big) + M_2\|\tilde{e}_n\| \\
\le{}& \big(1 - \alpha_n(2 - \alpha_n(1 + 2\rho^2) - 2(1 - \alpha_n)\rho)\big)\|x_n - z\|^2 \\
& + 2\alpha_n(1 - \alpha_n)\langle h(z) - z, V_{\tau_n}x_n - z\rangle + 2\alpha_n^2\|h(z) - z\|^2 + M_2\|\tilde{e}_n\|,
\end{aligned} \tag{17}$$
where $M_2 := \sup_{n\in\mathbb{N}}\{2\alpha_n\|h(x_n) - z\| + 2(1 - \alpha_n)\|x_n - z\| + \|\tilde{e}_n\|\}$.
Second, we can rewrite $V_{\tau_n}$ as
$$V_{\tau_n} = U(I - \tau_n A^*(I - T)A) = (1 - w_n)I + w_nW_n, \tag{18}$$
where $w_n = \gamma_1 + (1 - \gamma_1)\gamma_2\tau_n L$ and $W_n$ is nonexpansive. By condition (ii), we get $\gamma_1 < \liminf_{n\to\infty}w_n \le \limsup_{n\to\infty}w_n < 1$. Thus, it follows from (14), (17) and (18) that
$$\begin{aligned}
\|x_{n+1} - z\|^2 ={}& \|\alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n + \tilde{e}_n - z\|^2 \\
\le{}& \|\alpha_n h(x_n) + (1 - \alpha_n)V_{\tau_n}x_n - z\|^2 + M_2\|\tilde{e}_n\| \\
={}& \|V_{\tau_n}x_n - z + \alpha_n(h(x_n) - V_{\tau_n}x_n)\|^2 + M_2\|\tilde{e}_n\| \\
={}& \|V_{\tau_n}x_n - z\|^2 + \alpha_n^2\|h(x_n) - V_{\tau_n}x_n\|^2 + 2\alpha_n\langle V_{\tau_n}x_n - z, h(x_n) - V_{\tau_n}x_n\rangle + M_2\|\tilde{e}_n\| \\
={}& \|(1 - w_n)x_n + w_nW_nx_n - z\|^2 + \alpha_n^2\|h(x_n) - V_{\tau_n}x_n\|^2 \\
& + 2\alpha_n\langle V_{\tau_n}x_n - z, h(x_n) - V_{\tau_n}x_n\rangle + M_2\|\tilde{e}_n\| \\
={}& (1 - w_n)\|x_n - z\|^2 + w_n\|W_nx_n - W_nz\|^2 - w_n(1 - w_n)\|W_nx_n - x_n\|^2 \\
& + \alpha_n^2\|h(x_n) - V_{\tau_n}x_n\|^2 + 2\alpha_n\langle V_{\tau_n}x_n - z, h(x_n) - V_{\tau_n}x_n\rangle + M_2\|\tilde{e}_n\| \\
\le{}& \|x_n - z\|^2 - w_n(1 - w_n)\|W_nx_n - x_n\|^2 + \alpha_n^2\|h(x_n) - V_{\tau_n}x_n\|^2 \\
& + 2\alpha_n\langle V_{\tau_n}x_n - z, h(x_n) - V_{\tau_n}x_n\rangle + M_2\|\tilde{e}_n\|.
\end{aligned}$$
Furthermore, set
$$s_n = \|x_n - z\|^2, \qquad \gamma_n = \alpha_n\big(2 - \alpha_n(1 + 2\rho^2) - 2(1 - \alpha_n)\rho\big),$$
$$\delta_n = \frac{1}{2 - \alpha_n(1 + 2\rho^2) - 2(1 - \alpha_n)\rho}\left[2\alpha_n\|h(z) - z\|^2 + M_2\frac{\|\tilde{e}_n\|}{\alpha_n} + 2(1 - \alpha_n)\langle h(z) - z, V_{\tau_n}x_n - z\rangle\right],$$
$$\eta_n = w_n(1 - w_n)\|W_nx_n - x_n\|^2,$$
$$\varphi_n = \alpha_n^2\|h(x_n) - V_{\tau_n}x_n\|^2 + 2\alpha_n\langle V_{\tau_n}x_n - z, h(x_n) - V_{\tau_n}x_n\rangle + M_2\|\tilde{e}_n\|.$$
Using conditions (i) and (iii), it is easy to get $\gamma_n \to 0$, $\sum_{n=0}^{\infty}\gamma_n = \infty$ and $\varphi_n \to 0$. In order to complete the proof via Lemma 4, it suffices to verify that $\eta_{n_k} \to 0$ as $k \to \infty$ implies
$$\limsup_{k\to\infty}\delta_{n_k} \le 0$$
for any subsequence $\{n_k\} \subset \{n\}$. Indeed, $\eta_{n_k} \to 0$ as $k \to \infty$ implies $\|W_{n_k}x_{n_k} - x_{n_k}\| \to 0$ as $k \to \infty$ by condition (ii), since $\{w_n\}$ is bounded away from $0$ and $1$. Thus, from (18), it follows that
$$\|x_{n_k} - V_{\tau_{n_k}}x_{n_k}\| = w_{n_k}\|x_{n_k} - W_{n_k}x_{n_k}\| \to 0. \tag{20}$$
Step 3. Show that
$$\omega_w(x_{n_k}) \subset S, \tag{21}$$
where $\omega_w(x_{n_k})$ is the set of all weak cluster points of $\{x_{n_k}\}$. To see (21), we proceed as follows.
Take $\tilde{x} \in \omega_w(x_{n_k})$ and assume that $\{x_{n_{k_j}}\}$ is a subsequence of $\{x_{n_k}\}$ weakly converging to $\tilde{x}$. Without loss of generality, we still use $\{x_{n_k}\}$ to denote $\{x_{n_{k_j}}\}$. Assume $\tau_{n_k} \to \tau$. Then, we have $0 < \tau < \frac{1}{\gamma_2 L}$. Setting $V = U(I - \tau A^*(I - T)A)$, we deduce that
$$\|V_{\tau_{n_k}}x_{n_k} - Vx_{n_k}\| = \|U(x_{n_k} - \tau_{n_k}A^*(I - T)Ax_{n_k}) - U(x_{n_k} - \tau A^*(I - T)Ax_{n_k})\| \le |\tau_{n_k} - \tau|\,\|A^*(I - T)Ax_{n_k}\|. \tag{22}$$
Since $\tau_{n_k} \to \tau$ as $k \to \infty$, it follows immediately from (22) that
$$\|V_{\tau_{n_k}}x_{n_k} - Vx_{n_k}\| \to 0$$
as $k \to \infty$. Thus, we have
$$\|x_{n_k} - Vx_{n_k}\| \le \|x_{n_k} - V_{\tau_{n_k}}x_{n_k}\| + \|V_{\tau_{n_k}}x_{n_k} - Vx_{n_k}\| \to 0.$$
Using Lemma 3, we get $\omega_w(x_{n_k}) \subset \mathrm{Fix}(V)$. Since both $U$ and $T$ are averaged, it follows from Proposition 1 (ii) that
$$\omega_w(x_{n_k}) \subset \mathrm{Fix}(U), \qquad \omega_w(x_{n_k}) \subset \mathrm{Fix}(I - \tau A^*(I - T)A).$$
Then, by Lemma 5, we obtain $\omega_w(x_{n_k}) \subset S$ immediately. Meanwhile, we have
$$\limsup_{k\to\infty}\langle h(x^*) - x^*, x_{n_k} - x^*\rangle = \langle h(x^*) - x^*, \tilde{x} - x^*\rangle, \quad \tilde{x} \in S.$$
In addition, since $x^*$ is the unique solution of the variational inequality problem (13), we have
$$\limsup_{k\to\infty}\langle h(x^*) - x^*, x_{n_k} - x^*\rangle \le 0,$$
which, together with (20), yields $\limsup_{k\to\infty}\delta_{n_k} \le 0$. This completes the proof. □
Next, we consider the bounded perturbation of (14) generated by the following iterative process:
$$y_n = x_n + \beta_n\nu_n, \qquad x_{n+1} = \alpha_n h(y_n + e(y_n)) + (1 - \alpha_n)U(y_n - \tau_n A^*(I - T)Ay_n + e(y_n)). \tag{25}$$
Theorem 2.
Assume that the sequences $\{\beta_n\}$ and $\{\nu_n\}$ satisfy condition (6). Let $H_1, H_2$ be two real Hilbert spaces and $A: H_1 \to H_2$ be a bounded linear operator with $L = \|A^*A\|$, where $A^*$ is the adjoint of $A$. Suppose that $U: H_1 \to H_1$ and $T: H_2 \to H_2$ are two averaged mappings with coefficients $\gamma_1$ and $\gamma_2$, respectively. Assume that problem (4) is consistent (i.e., $S \ne \emptyset$). Let $h: H_1 \to H_1$ be a $\rho$-contraction with $0 \le \rho < 1$. For any $x_0 \in H_1$, define the sequence $\{x_n\}$ by (25). If the following conditions are satisfied:
(i) 
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$;
(ii) 
$0 < \liminf_{n\to\infty}\tau_n \le \limsup_{n\to\infty}\tau_n < \frac{1}{\gamma_2 L}$;
(iii) 
$\sum_{n=0}^{\infty}\|e(y_n)\| < \infty$.
Then, the sequence $\{x_n\}$ converges strongly to $x^*$, where $x^*$ is a solution of problem (4), which is also the unique solution of the variational inequality problem (13).
Proof. 
Now, put
$$\tilde{e}_n = \alpha_n\big(h(y_n + e(y_n)) - h(x_n)\big) + (1 - \alpha_n)\big(U(y_n - \tau_n A^*(I - T)Ay_n + e(y_n)) - U(x_n - \tau_n A^*(I - T)Ax_n)\big).$$
Then, Equation (25) can be rewritten as follows:
$$x_{n+1} = \alpha_n h(x_n) + (1 - \alpha_n)U(I - \tau_n A^*(I - T)A)x_n + \tilde{e}_n.$$
In fact, by Proposition 1 (iii) and the nonexpansiveness of $T$, it is not hard to show that $A^*(I - T)A$ is $2L$-Lipschitz. Thus, we have
$$\begin{aligned}
\|\tilde{e}_n\| &\le \alpha_n\|h(y_n + e(y_n)) - h(x_n)\| + (1 - \alpha_n)\|y_n - x_n - \tau_n(A^*(I - T)Ay_n - A^*(I - T)Ax_n) + e(y_n)\| \\
&\le \alpha_n\rho(\|y_n - x_n\| + \|e(y_n)\|) + (1 - \alpha_n)(\|y_n - x_n\| + 2\tau_nL\|y_n - x_n\| + \|e(y_n)\|) \\
&\le \big(\alpha_n\rho + (1 - \alpha_n)(1 + 2\tau_nL)\big)\beta_n\|\nu_n\| + (\alpha_n\rho + 1 - \alpha_n)\|e(y_n)\|.
\end{aligned}$$
From condition (iii) and condition (6), we have $\sum_{n=0}^{\infty}\|\tilde{e}_n\| < \infty$. Consequently, using Theorem 1, it follows that algorithm (14) is bounded perturbation resilient. This completes the proof. □

4. Numerical Results

In this section, we consider the following numerical examples to demonstrate the effectiveness, realization and convergence of Theorems 1 and 2.
Example 1.
Let $H_1 = H_2 = \mathbb{R}^2$. Suppose $h(x) = \frac{1}{10}x$ and
$$A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}.$$
Take $U = P_C$ and $T = P_Q$, where $C$ and $Q$ are defined as follows:
$$C = \{x \in \mathbb{R}^2 : \|x\|_2 \le 4\}$$
and
$$Q = \{y \in \mathbb{R}^2 : 6 \le y^{(i)} \le 12, \; i = 1, 2\},$$
where $y^{(i)}$ denotes the $i$-th element of $y$.
We can compute the solution set $S = \{x : 3 \le x^{(1)} \le 6, \; 2 \le x^{(2)} \le 4, \; \|x\|_2 \le 4\}$.
Take the experiment parameters $\tau_n = \frac{1}{L}$ and $\alpha_n = \frac{1}{n+1}$ in the following iterative algorithms; the stopping criterion is $\|x_{n+1} - x_n\| < error$. According to the iterative process of Theorem 1, the sequence $\{x_n\}$ is generated by
$$x_{n+1} = \frac{1}{n+1}\cdot\frac{1}{10}x_n + \left(1 - \frac{1}{n+1}\right)U\big(x_n - \tau_n A^T(I - T)Ax_n\big).$$
As $n \to \infty$, we have $x_n \to x^*$. Then, taking a random initial guess $x_0$ and using MATLAB software (MATLAB R2012a, MathWorks, Natick, MA, USA), we obtain the numerical experiment results in Table 1.
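For readers who prefer an open toolchain, the following Python sketch mirrors this experiment; the random seed and the iteration cap are our additions for reproducibility, and the results should be comparable to (not identical with) Table 1.

```python
import numpy as np

# Python analogue of Example 1 (the paper used MATLAB).
A = np.array([[2.0, 0.0], [0.0, 3.0]])
L = np.linalg.norm(A.T @ A, 2)              # L = 9, so tau = 1/L = 0.1111
tau = 1.0 / L

def P_C(x):                                 # projection onto the ball ||x||_2 <= 4
    return x * min(1.0, 4.0 / max(np.linalg.norm(x), 1e-12))

def P_Q(y):                                 # projection onto the box [6, 12]^2
    return np.clip(y, 6.0, 12.0)

rng = np.random.default_rng(0)              # assumed seed for reproducibility
x = rng.random(2)                           # x0 = rand(2, 1)
for n in range(100000):
    x_new = (1.0 / (n + 1)) * 0.1 * x + (1 - 1.0 / (n + 1)) * P_C(
        x - tau * A.T @ (A @ x - P_Q(A @ x)))
    if np.linalg.norm(x_new - x) < 1e-7:    # stopping criterion from the text
        break
    x = x_new
print(x_new)  # approaches the solution (3, 2)
```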
Next, we consider the algorithm with bounded perturbations. Choose the bounded sequence $\{\nu_n\}$ and the summable nonnegative real sequence $\{\beta_n\}$ as follows:
$$\nu_n = \begin{cases} -\dfrac{d_n}{\|d_n\|}, & \text{if } 0 \ne d_n \in \partial I_C(x_n), \\ 0, & \text{if } 0 = d_n \in \partial I_C(x_n), \end{cases}$$
and
$$\beta_n = c^n$$
for some $c \in (0,1)$, where the indicator function
$$I_C(x) = \begin{cases} 0, & \text{if } x \in C, \\ +\infty, & \text{if } x \notin C, \end{cases}$$
and
$$\partial I_C(x) = N_C(x) = \begin{cases} \{u \in H : \langle u, x - y\rangle \ge 0, \; \forall y \in C\}, & \text{if } x \in C, \\ \emptyset, & \text{if } x \notin C, \end{cases}$$
is the normal cone to $C$. The point $d_n$ is taken from $N_C(x_n)$. Setting $c = 0.5$, the numerical results can be seen in Table 2.
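For a concrete realization (the paper does not specify which element of $N_C(x_n)$ is picked), note that for the ball $C = \{x : \|x\|_2 \le 4\}$ the normal cone is $\{0\}$ at interior points and the ray $\{t x_n : t \ge 0\}$ at boundary points, so one admissible choice is sketched below.

```python
import numpy as np

# One admissible (assumed) choice of d_n in N_C(x_n) and the resulting nu_n
# for the ball C = {x : ||x||_2 <= 4}: on the boundary, d_n = x_n lies in the
# normal cone; in the interior, N_C(x_n) = {0} forces nu_n = 0.
def nu(x, radius=4.0, tol=1e-9):
    norm_x = np.linalg.norm(x)
    if norm_x >= radius - tol and norm_x > 0:
        return -x / norm_x                  # nu_n = -d_n / ||d_n|| with d_n = x_n
    return np.zeros_like(x)
```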
As seen above, the accuracy of the solution improves as the stopping criterion decreases. In addition, the sequence $\{x_n\}$ converges to the point $(3, 2)$, which is a solution of the numerical example. Of course, it is also the unique solution of the variational inequality $\langle (I - h)x^*, x - x^*\rangle \ge 0$.
In addition, we compare the approximate values of the solution $x^*$ of Example 1 under the same parameter conditions, the same number of iterations and the same initial value. The numerical results are reported in Table 3 and Table 4, where $\{x_n(1)\}$ and $\{x_n(2)\}$ denote the iterative sequences generated by algorithm (14) in this paper and by Theorem 3.2 of [8], respectively.
Example 2.
Let $H_1 = H_2 = \mathbb{R}^3$. Suppose $h(x) = \frac{1}{3}x$ and
$$A = \begin{pmatrix} 1 & 0 & 8 \\ 0 & 2 & 0 \\ 0 & 0 & 5 \end{pmatrix}.$$
Define $T: \mathbb{R}^3 \to \mathbb{R}^3$ by
$$T: y = (y^{(1)}, y^{(2)}, y^{(3)})^T \mapsto \left(y^{(1)}, y^{(2)}, \frac{y^{(3)} + \sin y^{(3)}}{2}\right)^T.$$
It is obvious that $T$ is $\frac{1}{2}$-av and the set of fixed points $\mathrm{Fix}(T) = \{y = (y^{(1)}, y^{(2)}, 0)^T\}$ is nonempty. Let $U = P_C$ with $C = \{x \in \mathbb{R}^3 : \|x\| \le 1\}$. Then, we use the iterative algorithm of Theorem 1 to approximate a point $x^* \in C$ such that $Ax^* \in \mathrm{Fix}(T)$.
Take the experiment parameters $\tau_n = \frac{1.9n}{(n+1)L}$ and $\alpha_n = \frac{1}{n+1}$ in the following iterative algorithms. Let $F(x) = \frac{1}{2}\|(I - T)Ax\|^2 + I_C(x)$; the stopping criterion is $F(x) < error$.
Then, taking a random initial guess $x_0$ and using MATLAB software, we obtain the numerical experiment results in Table 5.
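A rough Python analogue of this experiment (the paper used MATLAB; the seed, the iteration cap and the explicit feasibility check in the stopping test are our assumptions) is:

```python
import numpy as np

# Python analogue of Example 2 with the stated mappings and parameters.
A = np.array([[1.0, 0.0, 8.0], [0.0, 2.0, 0.0], [0.0, 0.0, 5.0]])
L = np.linalg.norm(A.T @ A, 2)

def T(y):                                   # the 1/2-averaged mapping of Example 2
    return np.array([y[0], y[1], (y[2] + np.sin(y[2])) / 2.0])

def P_C(x):                                 # projection onto the unit ball
    return x / max(1.0, np.linalg.norm(x))

rng = np.random.default_rng(0)              # assumed seed
x = 10.0 * rng.random(3)                    # x0 = 10 * rand(3, 1)
for n in range(1, 1000000):
    alpha = 1.0 / (n + 1)
    tau = 1.9 * n / ((n + 1) * L)
    x = alpha * (x / 3.0) + (1 - alpha) * P_C(x - tau * A.T @ (A @ x - T(A @ x)))
    Fx = 0.5 * np.linalg.norm(A @ x - T(A @ x)) ** 2
    if Fx < 1e-7 and np.linalg.norm(x) <= 1.0 + 1e-9:   # F(x) < error with x in C
        break
print(x)  # approaches (0, 0, 0)^T
```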
Next, we consider the bounded perturbation. The definitions of $\nu_n$ and $\beta_n$ are similar to those in Example 1. Setting $c = 0.8$, the numerical results can be seen in Table 6.
As seen in Table 5 and Table 6, the sequence $\{x_n\}$ converges to the point $(0, 0, 0)^T$, which is a solution of the numerical example. Of course, it is also the unique solution of the variational inequality $\langle (I - h)x^*, x - x^*\rangle \ge 0$.

5. Conclusions

The SCFPP is an inverse problem that consists in finding a point in a fixed point set such that its image under a bounded linear operator belongs to another fixed point set. Many iterative algorithms have been developed to solve these kinds of problems. In this paper, we have introduced a viscosity iterative sequence and obtained its strong convergence. We prove the main result under weaker conditions than many existing similar methods, for example, Xu's algorithm [37] for the SFP. More specifically, his algorithm generates a sequence $\{x_n\}$ via the following recursion:
$$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C(x_n - \tau_n A^*(I - P_Q)Ax_n),$$
where $u$ is a fixed element and $\{\alpha_n\} \subset [0,1]$ satisfies the assumptions:
(i)
$\lim_{n\to\infty}\alpha_n = 0$ and $\sum_{n=0}^{\infty}\alpha_n = \infty$,
(ii)
either $\sum_{n=0}^{\infty}|\alpha_{n+1} - \alpha_n| < \infty$ or $\lim_{n\to\infty}(\alpha_{n+1}/\alpha_n) = 1$.
The second condition is not necessary in our theorems. We also consider the bounded perturbation resilience of the proposed method and get theoretical convergence results. Finally, numerical experiments have been presented to illustrate the effectiveness of the proposed algorithms.

Author Contributions

All authors read and approved the final manuscript. Conceptualization, P.D.; Data Curation, P.D. and X.Z.; Formal Analysis, P.D. and J.Z.

Funding

This work was supported by the Scientific Research Project of the Tianjin Municipal Education Commission (2018KJ253) and the Fundamental Research Funds for the Central Universities (3122017072).

Acknowledgments

The authors would like to thank the referee for valuable suggestions to improve the manuscript.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  2. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084.
  3. Censor, Y.; Bortfeld, T.; Martin, B.; Trofimov, A. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
  4. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  5. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  6. Censor, Y.; Segal, A. The split common fixed point problem for directed operators. J. Convex Anal. 2009, 16, 587–600.
  7. Moudafi, A. A note on the split common fixed-point problem for quasi-nonexpansive operators. Nonlinear Anal. 2011, 74, 4083–4087.
  8. Kraikaew, R.; Saejung, S. On split common fixed point problems. J. Math. Anal. Appl. 2014, 415, 513–524.
  9. Dong, Q.L.; He, S.N.; Zhao, J. Solving the split equality problem without prior knowledge of operator norms. Optimization 2015, 64, 1887–1906.
  10. He, S.N.; Tian, H.; Xu, H.K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. 2018, 19, 1199–1215.
  11. Padcharoen, A.; Kumam, P.; Cho, Y.J. Split common fixed point problems for demicontractive operators. Numer. Algorithms 2018.
  12. Zhao, J. Solving split equality fixed-point problem of quasi-nonexpansive mappings without prior knowledge of operators norms. Optimization 2015, 64, 2619–2630.
  13. Zhao, J.; Hou, D.F. A self-adaptive iterative algorithm for the split common fixed point problems. Numer. Algorithms 2018.
  14. Byrne, C.; Censor, Y.; Gibali, A.; Reich, S. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2012, 13, 759–775.
  15. Combettes, P.L. Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004, 53, 475–504.
  16. Chuang, C.S.; Lin, I.J. New strong convergence theorems for split variational inclusion problems in Hilbert spaces. J. Inequal. Appl. 2015, 176, 1–20.
  17. Dong, Q.L.; Yuan, H.B.; Cho, Y.J.; Rassias, T.M. Modified inertial Mann algorithm and inertial CQ-algorithm for nonexpansive mappings. Optim. Lett. 2018, 12, 87–102.
  18. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
  19. Censor, Y.; Davidi, R.; Herman, G.T. Perturbation resilience and superiorization of iterative algorithms. Inverse Probl. 2010, 26, 065008.
  20. Guo, Y.N.; Cui, W. Strong convergence and bounded perturbation resilience of a modified proximal gradient algorithm. J. Inequal. Appl. 2018, 103, 1–15.
  21. Jin, W.; Censor, Y.; Jiang, M. Bounded perturbation resilience of projected scaled gradient methods. Comput. Optim. Appl. 2016, 63, 365–392.
  22. Xu, H.K. Bounded perturbation resilience and superiorization techniques for the projected scaled gradient method. Inverse Probl. 2017, 33, 044008.
  23. Dong, Q.L.; Zhao, J.; He, S.N. Bounded perturbation resilience of the viscosity algorithm. J. Inequal. Appl. 2016, 299, 1–12.
  24. Censor, Y.; Chen, W.; Combettes, P.L.; Davidi, R.; Herman, G.T. On the effectiveness of projection methods for convex feasibility problems with linear inequality constraints. Comput. Optim. Appl. 2012, 51, 1065–1088.
  25. Davidi, R.; Herman, G.T.; Censor, Y. Perturbation-resilient block-iterative projection methods with application to image reconstruction from projections. Int. Trans. Oper. Res. 2009, 16, 505–524.
  26. Davidi, R.; Censor, Y.; Schulte, R.W.; Geneser, S.; Xing, L. Feasibility-seeking and superiorization algorithms applied to inverse treatment planning in radiation therapy. Contemp. Math. 2015, 636, 83–92.
  27. Moreau, J.J. Propriétés des applications "prox". C. R. Acad. Sci. Paris Sér. A Math. 1963, 256, 1069–1071.
  28. Marino, G.; Scardamaglia, B.; Karapinar, E. Strong convergence theorem for strict pseudo-contractions in Hilbert spaces. J. Inequal. Appl. 2016, 134, 1–12.
  29. Xu, H.K. Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298, 279–291.
  30. Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378.
  31. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics 28; Cambridge University Press: Cambridge, UK, 1990.
  32. He, S.N.; Yang, C.P. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315.
  33. Yang, C.P.; He, S.N. General alternative regularization methods for nonexpansive mappings in Hilbert spaces. Fixed Point Theory Appl. 2014, 2014, 203.
  34. Moudafi, A. Viscosity approximation methods for fixed point problems. J. Math. Anal. Appl. 2000, 241, 46–55.
  35. Cho, Y.J.; Qin, X. Viscosity approximation methods for a finite family of m-accretive mappings in reflexive Banach spaces. Positivity 2008, 12, 483–494.
  36. Qin, X.; Cho, Y.J.; Kang, S.M. Viscosity approximation methods for generalized equilibrium problems and fixed point problems with applications. Nonlinear Anal. 2010, 72, 99–112.
  37. Xu, H.K. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
Table 1. $x_0 = \mathrm{rand}(2,1)$, results without bounded perturbation.

| $\tau$ | $n$ | Time (s) | $x_n$ | Error |
| --- | --- | --- | --- | --- |
| 0.1111 | 52 | 0.02657 | $[2.9995, 1.9998]^T$ | $10^{-5}$ |
| 0.1111 | 135 | 0.07538 | $[2.9998, 2.0000]^T$ | $10^{-6}$ |
| 0.1111 | 216 | 0.15873 | $[3.0000, 2.0000]^T$ | $10^{-7}$ |
Table 2. $x_0 = \mathrm{rand}(2,1)$, results with bounded perturbation.

| $\tau$ | $n$ | Time (s) | $x_n$ | Error |
| --- | --- | --- | --- | --- |
| 0.1111 | 45 | 0.02342 | $[2.9992, 1.9998]^T$ | $10^{-5}$ |
| 0.1111 | 98 | 0.05317 | $[2.9998, 2.0000]^T$ | $10^{-6}$ |
| 0.1111 | 153 | 0.10256 | $[3.0000, 2.0000]^T$ | $10^{-7}$ |
Table 3. $x_0 = 2\,\mathrm{rand}(2,1)$, results without bounded perturbation.

| $\tau$ | $n$ | $x_n(1)$ | $x_n(2)$ | Error |
| --- | --- | --- | --- | --- |
| 0.2179 | 52 | $[2.9884, 1.9993]^T$ | $[2.9850, 1.9940]^T$ | $10^{-5}$ |
| 0.2179 | 132 | $[3.0000, 1.9995]^T$ | $[2.9942, 1.9988]^T$ | $10^{-6}$ |
| 0.2179 | 208 | $[3.0000, 1.9999]^T$ | $[2.9995, 1.9996]^T$ | $10^{-7}$ |
Table 4. $x_0 = 2\,\mathrm{rand}(2,1)$, results with bounded perturbation.

| $\tau$ | $n$ | $x_n(1)$ | $x_n(2)$ | Error |
| --- | --- | --- | --- | --- |
| 0.2179 | 32 | $[2.9990, 1.9997]^T$ | $[2.9920, 1.9987]^T$ | $10^{-5}$ |
| 0.2179 | 56 | $[2.9997, 1.9999]^T$ | $[2.9942, 1.9988]^T$ | $10^{-6}$ |
| 0.2179 | 115 | $[3.0000, 2.0000]^T$ | $[2.9995, 1.9998]^T$ | $10^{-7}$ |
Table 5. $x_0 = 10\,\mathrm{rand}(3,1)$, results without bounded perturbation.

| $\tau$ | $n$ | Time (s) | $x_n$ | Error |
| --- | --- | --- | --- | --- |
| 0.0255 | 120 | 0.0155 | $[0.0085, 0.0167, 0.0037]^T \times 10^{-2}$ | $10^{-5}$ |
| 0.0256 | 235 | 0.0461 | $[0.0078, 0.0013, 0.0013]^T \times 10^{-3}$ | $10^{-6}$ |
| 0.0257 | 518 | 0.1881 | $[0.0029, 0.0015, 0.0002]^T \times 10^{-4}$ | $10^{-7}$ |
Table 6. $x_0 = 10\,\mathrm{rand}(3,1)$, results with bounded perturbation.

| $\tau$ | $n$ | Time (s) | $x_n$ | Error |
| --- | --- | --- | --- | --- |
| 0.0246 | 22 | 0.0036 | $[0.0108, 0.0139, 0.0015]^T \times 10^{-2}$ | $10^{-5}$ |
| 0.0249 | 32 | 0.0040 | $[0.0085, 0.0014, 0.0014]^T \times 10^{-3}$ | $10^{-6}$ |
| 0.0250 | 36 | 0.0058 | $[0.0028, 0.0025, 0.0034]^T \times 10^{-4}$ | $10^{-7}$ |
