Article

Three-Step Projective Methods for Solving the Split Feasibility Problems

1 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 Demonstration School, University of Phayao, Phayao 56000, Thailand
3 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 712; https://doi.org/10.3390/math7080712
Submission received: 30 June 2019 / Revised: 22 July 2019 / Accepted: 31 July 2019 / Published: 6 August 2019
(This article belongs to the Special Issue Fixed Point, Optimization, and Applications)

Abstract

In this paper, we focus on studying the split feasibility problem (SFP) in Hilbert spaces. Based on the CQ algorithm involving the self-adaptive technique, we introduce a three-step iteration process for approximating a solution of the SFP. Convergence results are then established under mild conditions. Numerical experiments in signal processing are provided to show the efficiency of the proposed methods, together with comparisons to several existing methods.

1. Introduction

In the present work, we aim to study the split feasibility problem (SFP), which is to find a point
$$x \in C \quad \text{such that} \quad Ax \in Q, \tag{1}$$
where C and Q are non-empty, closed, and convex subsets of $\mathbb{R}^N$ and $\mathbb{R}^M$, respectively, and A is an $M \times N$ real matrix. The SFP was first investigated in 1994 by Censor and Elfving [1]. Subsequently, Xu [2,3] studied this problem in infinite-dimensional Hilbert spaces. The SFP also has real-world applications, such as image processing and signal recovery.
Censor et al. [4] (see also [5]) introduced the Split Inverse Problem (SIP). Let X and Y be two vector spaces, let $A : X \to Y$ be a linear operator, and let IP1 and IP2 denote inverse problems posed in X and Y, respectively. Given these data, the SIP is formulated as follows: find a point $x^{*} \in X$ that solves IP1 and such that the point $y^{*} = Ax^{*} \in Y$ solves IP2.
It is known that a special case of the SFP can be reformulated as the following constrained minimization problem:
$$\min_{x \in C} \ \|P_Q(Ax) - Ax\|. \tag{2}$$
In particular, when $Q = \{b\}$ is a singleton, the SFP reduces to the constrained linear system:
$$x \in C \quad \text{and} \quad Ax = b. \tag{3}$$
In 2002, Byrne [6,7] introduced a new projection algorithm for the SFP, defined as follows:
$$x_{n+1} = P_C\big(x_n - \tau_n A^{*}(I - P_Q)Ax_n\big), \tag{4}$$
where $P_C$ and $P_Q$ are the projections onto C and Q, and $A^{*}$ denotes the adjoint of A. This method is often called the CQ algorithm. Its convergence is guaranteed when the step-size $\tau_n$ lies in $(0, 2/\|A\|^2)$, where $\|A\|^2$ is the spectral radius of $A^{*}A$ and I stands for the identity operator. It should be noted, however, that the projections may lack closed forms and can be costly to compute.
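To fix ideas, here is a minimal NumPy sketch of the CQ iteration (4) in the matrix setting; the function and argument names are ours, and the projections proj_C and proj_Q are assumed to be supplied by the caller:

```python
import numpy as np

def cq_algorithm(x0, A, proj_C, proj_Q, tau, n_iters=100):
    """Byrne's CQ iteration: x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n).

    tau should lie in (0, 2 / ||A||^2); proj_C and proj_Q are callables
    returning the projections onto C and Q.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        Ax = A @ x
        grad = A.T @ (Ax - proj_Q(Ax))  # gradient of (1/2)||(I - P_Q)Ax||^2
        x = proj_C(x - tau * grad)
    return x
```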
In practical applications, the sets C and Q are usually of the form
$$C = \{x \in H_1 : c(x) \le 0\} \quad \text{and} \quad Q = \{y \in H_2 : q(y) \le 0\}, \tag{5}$$
where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are convex, subdifferentiable functions on $H_1$ and $H_2$. We always assume that the subdifferentials $\partial c$ and $\partial q$ are bounded operators (i.e., bounded on bounded sets).
In 2004, Yang [8] presented the relaxed CQ algorithm, following an idea of Fukushima [9]. In the relaxed CQ algorithm, $P_C$ and $P_Q$ are replaced by $P_{C_n}$ and $P_{Q_n}$, respectively, where
$$C_n = \{x \in H_1 : c(x_n) \le \langle \xi_n, x_n - x \rangle\}, \tag{6}$$
with $\xi_n \in \partial c(x_n)$, and
$$Q_n = \{y \in H_2 : q(Ax_n) \le \langle \zeta_n, Ax_n - y \rangle\}, \tag{7}$$
with $\zeta_n \in \partial q(Ax_n)$. It is easily seen that $C \subseteq C_n$ and $Q \subseteq Q_n$ for all $n \ge 1$. Next, we set
$$f_n(x) = \frac{1}{2}\|(I - P_{Q_n})Ax\|^2, \quad n \ge 1. \tag{8}$$
In this case, we get
$$\nabla f_n(x) = A^{*}(I - P_{Q_n})Ax. \tag{9}$$
Since $C_n$ and $Q_n$ are half-spaces, the corresponding projections have explicit formulas. However, a step-size that depends on the operator norm is still not easy to compute. The relaxed CQ algorithm in a finite-dimensional Hilbert space was introduced by Yang [8] as follows:
$$x_{n+1} = P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \tag{10}$$
where $\tau_n \in (0, 2/\|A\|^2)$. We note that computing the norm of A can be costly, in particular when A is a large, dense matrix.
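For concreteness, the projection onto a half-space $\{z : \langle a, z \rangle \le b\}$, to which both $C_n$ and $Q_n$ reduce, has the following closed form (a hypothetical helper of ours):

```python
import numpy as np

def proj_halfspace(x, a, b):
    """Project x onto the half-space {z : <a, z> <= b}.

    If x already satisfies the constraint, it is its own projection;
    otherwise we step back along a by the normalized violation.
    """
    violation = np.dot(a, x) - b
    if violation <= 0:
        return x
    return x - (violation / np.dot(a, a)) * a
```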
To overcome this difficulty, in 2012, López et al. [10] presented the new step-size
$$\tau_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2}, \tag{11}$$
where $\{\rho_n\}$ is a sequence in $(0, 4)$ such that $\inf_{n \in \mathbb{N}} \rho_n(4 - \rho_n) > 0$. It was shown that $\{x_n\}$, with the step-size (11), converges weakly to a solution of the SFP.
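This self-adaptive step-size requires only a current function value and gradient, not $\|A\|$; a small sketch (the epsilon guard against a vanishing gradient is our addition):

```python
import numpy as np

def adaptive_step(f_val, grad, rho=2.0, eps=1e-12):
    """Step-size (11): rho_n * f_n(x_n) / ||grad f_n(x_n)||^2."""
    return rho * f_val / max(np.dot(grad, grad), eps)
```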
Another algorithm that can produce strong convergence is the Halpern-type algorithm, defined by
$$x_{n+1} = \alpha_n u + (1 - \alpha_n) P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \tag{12}$$
where $u \in H_1$ is fixed and $\tau_n$ is defined by (11). It was claimed that $\{x_n\}$ converges strongly to $P_S u$ when $\alpha_n \to 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$.
In 2005, Qu and Xiu [11] suggested a relaxed CQ algorithm using an Armijo line search in Euclidean spaces; Gibali et al. [12] then generalized the results of Qu and Xiu [11] to real Hilbert spaces as follows:
$$\begin{aligned} y_n &= P_{C_n}\big(x_n - \tau_n \nabla f_n(x_n)\big), \\ x_{n+1} &= P_{C_n}\big(x_n - \tau_n \nabla f_n(y_n)\big), \end{aligned} \tag{13}$$
where $\sigma > 0$, $\rho, \mu \in (0, 1)$, $\tau_n = \sigma \rho^{m_n}$, and $m_n$ is the smallest non-negative integer such that
$$\tau_n \|\nabla f_n(x_n) - \nabla f_n(y_n)\| \le \mu \|x_n - y_n\|. \tag{14}$$
It was shown that $\{x_n\}$ converges weakly to a solution of the SFP. Various iterative methods have been established to solve the SFP and some related problems; see, for example, [2,3,4,5,13,14,15,16,17]. A sketch of this line search is given below.
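A minimal backtracking routine implementing the search (13)–(14); the function names are ours, and grad_f is assumed to evaluate $\nabla f_n$:

```python
import numpy as np

def armijo_linesearch(x, grad_f, proj_Cn, sigma=1.0, rho=0.5, mu=0.5,
                      max_backtracks=50):
    """Find tau_n = sigma * rho**m with the smallest m >= 0 such that
    tau_n * ||grad_f(x) - grad_f(y)|| <= mu * ||x - y||,
    where y = P_{C_n}(x - tau_n * grad_f(x))."""
    gx = grad_f(x)
    for m in range(max_backtracks):
        tau = sigma * rho**m
        y = proj_Cn(x - tau * gx)
        if tau * np.linalg.norm(gx - grad_f(y)) <= mu * np.linalg.norm(x - y):
            return tau, y
    return tau, y  # fall back to the last trial step
```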
We aim to suggest a new three-step iteration process based on the CQ algorithm, with step-sizes chosen by the self-adaptive technique. We remark that our assumptions do not involve the operator norm, which makes the method easy to use in practice. We then establish weak and strong convergence results under suitable conditions. Finally, we apply our results to compressed sensing, with comparisons to the methods of Yang [8], Gibali et al. [12], and López et al. [10].
Moreover, three-step iterative methods and their convergence and efficiency have been studied by several authors; see, for example, [18,19,20,21,22,23].

2. Basic Concepts

We next recall some useful basic concepts that will be used in our proofs. Let H be a real Hilbert space and let $T : H \to H$ be a nonlinear mapping. Then, T is called
(i)
nonexpansive if
$$\|Tx - Ty\| \le \|x - y\| \quad \text{for all } x, y \in H; \tag{15}$$
(ii)
firmly nonexpansive if, for all $x, y \in H$,
$$\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle. \tag{16}$$
A function $f : H \to \mathbb{R}$ is convex if
$$f(\lambda x + (1 - \lambda)y) \le \lambda f(x) + (1 - \lambda)f(y) \quad \text{for all } \lambda \in (0, 1) \text{ and all } x, y \in H. \tag{17}$$
A function $f : H \to \mathbb{R}$ is weakly lower semi-continuous (w-lsc) at x if $x_n \rightharpoonup x$ implies
$$f(x) \le \liminf_{n \to \infty} f(x_n). \tag{18}$$
The metric projection of H onto a non-empty, closed, and convex subset C is defined by
$$P_C x := \arg\min_{y \in C} \|x - y\|^2, \quad x \in H. \tag{19}$$
We note that $P_C$ and $I - P_C$ are firmly nonexpansive. From [7], we know that if
$$f(x) = \frac{1}{2}\|(I - P_Q)Ax\|^2, \tag{20}$$
then $\nabla f$ is $\|A\|^2$-Lipschitz continuous. Moreover, in real Hilbert spaces, we know that [24]
(i)
$\langle x - P_C x, z - P_C x \rangle \le 0$ for all $z \in C$;
(ii)
$\|P_C x - P_C y\|^2 \le \langle P_C x - P_C y, x - y \rangle$ for all $x, y \in H$;
(iii)
$\|P_C x - z\|^2 \le \|x - z\|^2 - \|P_C x - x\|^2$ for all $z \in C$.
Lemma 1.
[25] Let H be a real Hilbert space and S a non-empty, closed, and convex subset of H. Let $\{x_n\}$ be a sequence in H satisfying the following conditions:
(i) 
For each $x \in S$, $\lim_{n \to \infty} \|x_n - x\|$ exists;
(ii) 
$\omega_w(x_n) \subseteq S$, where $\omega_w(x_n)$ denotes the set of weak cluster points of $\{x_n\}$.
Then, $\{x_n\}$ converges weakly to a point in S.
Lemma 2.
[26] Let $\{s_n\}$ be a non-negative real sequence such that
$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n \mu_n, \quad n \ge 1,$$
$$s_{n+1} \le s_n - \lambda_n + \upsilon_n, \quad n \ge 1,$$
where $\{\alpha_n\} \subset (0, 1)$, $\{\lambda_n\}$ is a non-negative real sequence, and $\{\mu_n\}$ and $\{\upsilon_n\}$ are real sequences such that
(i) 
$\sum_{n=1}^{\infty} \alpha_n = \infty$;
(ii) 
$\lim_{n \to \infty} \upsilon_n = 0$;
(iii) 
$\lim_{k \to \infty} \lambda_{n_k} = 0$ implies $\limsup_{k \to \infty} \mu_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then, $\lim_{n \to \infty} s_n = 0$.
Next, we propose Algorithms 1 and 2 for solving the split feasibility problem in Hilbert spaces.

3. Weak Convergence Result

We next introduce a new CQ algorithm and derive the weak convergence of the proposed method.
Algorithm 1: The proposed algorithm for weak convergence.
Choose $x_0 \in H_1$. Let $x_{n+1}$ be iteratively generated by
$$\begin{aligned} z_n &= x_n - \tau_n \nabla f_n(x_n), \\ y_n &= z_n - \gamma_n \nabla f_n(z_n), \\ x_{n+1} &= P_{C_n}\big(y_n - \delta_n \nabla f_n(y_n)\big), \end{aligned} \tag{21}$$
where $C_n$ is given as in (6) and
$$\tau_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2}, \quad \gamma_n = \frac{\rho_n f_n(z_n)}{\|\nabla f_n(z_n)\|^2}, \quad \delta_n = \frac{\rho_n f_n(y_n)}{\|\nabla f_n(y_n)\|^2}, \quad 0 < \rho_n < 4. \tag{22}$$
Remark 1.
In Algorithm 1, the iterates $z_n$ and $y_n$ are generated by gradient steps with step-sizes $\tau_n$ and $\gamma_n$, respectively, while the iterate $x_{n+1}$ is generated by a relaxed CQ step with step-size $\delta_n$. A sketch in code is given below.
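The following NumPy sketch implements Algorithm 1 in the simplified case where the projections onto C and Q themselves are available (so one may take $C_n = C$ and $Q_n = Q$); with half-space relaxations, proj_C and proj_Q would instead be rebuilt from subgradients at each iteration. All names are ours:

```python
import numpy as np

def three_step_cq(x0, A, proj_C, proj_Q, rho=2.0, n_iters=100, eps=1e-12):
    """Sketch of Algorithm 1 with the self-adaptive step-sizes (22)."""
    def f_and_grad(x):
        r = A @ x
        r = r - proj_Q(r)               # (I - P_Q) A x
        return 0.5 * (r @ r), A.T @ r   # f(x) and grad f(x)

    def step(f_val, g):                 # rho * f / ||grad f||^2, guarded
        return rho * f_val / max(g @ g, eps)

    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        f, g = f_and_grad(x)
        z = x - step(f, g) * g                  # step-size tau_n
        f, g = f_and_grad(z)
        y = z - step(f, g) * g                  # step-size gamma_n
        f, g = f_and_grad(y)
        x = proj_C(y - step(f, g) * g)          # step-size delta_n
    return x
```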
In this paper, we denote by S the solution set of the SFP and assume that S is non-empty. Next, we prove the following weak convergence theorem.
Theorem 1.
Suppose $\inf_n \rho_n(4 - \rho_n) > 0$. Then, $\{x_n\}$, defined by Algorithm 1, converges weakly to a point of S.
Proof. 
Let $\hat{x} \in S$. Because $C \subseteq C_n$ and $Q \subseteq Q_n$, we have $\hat{x} = P_C(\hat{x}) = P_{C_n}(\hat{x})$ and $A\hat{x} = P_Q(A\hat{x}) = P_{Q_n}(A\hat{x})$. It follows that $\nabla f_n(\hat{x}) = 0$. Then we obtain
$$\begin{aligned} \|x_{n+1} - \hat{x}\|^2 &= \|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - \hat{x}\|^2 \\ &\le \|y_n - \delta_n \nabla f_n(y_n) - \hat{x}\|^2 - \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2 \\ &= \|y_n - \hat{x}\|^2 + \delta_n^2 \|\nabla f_n(y_n)\|^2 - 2\delta_n \langle y_n - \hat{x}, \nabla f_n(y_n) \rangle \\ &\quad - \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2. \end{aligned} \tag{23}$$
From (23) and $\nabla f_n(\hat{x}) = 0$, we see that
$$\begin{aligned} \langle y_n - \hat{x}, \nabla f_n(y_n) \rangle &= \langle y_n - \hat{x}, \nabla f_n(y_n) - \nabla f_n(\hat{x}) \rangle \\ &= \langle y_n - \hat{x}, A^{*}(I - P_{Q_n})Ay_n - A^{*}(I - P_{Q_n})A\hat{x} \rangle \\ &= \langle Ay_n - A\hat{x}, (I - P_{Q_n})Ay_n - (I - P_{Q_n})A\hat{x} \rangle \\ &\ge \|(I - P_{Q_n})Ay_n\|^2 = 2 f_n(y_n). \end{aligned} \tag{24}$$
We can also show that
$$\langle x_n - \hat{x}, \nabla f_n(x_n) \rangle \ge 2 f_n(x_n) \tag{25}$$
and
$$\langle z_n - \hat{x}, \nabla f_n(z_n) \rangle \ge 2 f_n(z_n). \tag{26}$$
So, by (26), it follows that
$$\begin{aligned} \|y_n - \hat{x}\|^2 &= \|z_n - \gamma_n \nabla f_n(z_n) - \hat{x}\|^2 \\ &= \|z_n - \hat{x}\|^2 + \gamma_n^2 \|\nabla f_n(z_n)\|^2 - 2\gamma_n \langle z_n - \hat{x}, \nabla f_n(z_n) \rangle \\ &\le \|z_n - \hat{x}\|^2 + \gamma_n^2 \|\nabla f_n(z_n)\|^2 - 4\gamma_n f_n(z_n). \end{aligned} \tag{27}$$
Moreover, by (25), we obtain
$$\begin{aligned} \|z_n - \hat{x}\|^2 &= \|x_n - \tau_n \nabla f_n(x_n) - \hat{x}\|^2 \\ &= \|x_n - \hat{x}\|^2 + \tau_n^2 \|\nabla f_n(x_n)\|^2 - 2\tau_n \langle x_n - \hat{x}, \nabla f_n(x_n) \rangle \\ &\le \|x_n - \hat{x}\|^2 + \tau_n^2 \|\nabla f_n(x_n)\|^2 - 4\tau_n f_n(x_n). \end{aligned} \tag{28}$$
Combining (23)–(28), we have
$$\begin{aligned} \|x_{n+1} - \hat{x}\|^2 &\le \|x_n - \hat{x}\|^2 + \tau_n^2\|\nabla f_n(x_n)\|^2 - 4\tau_n f_n(x_n) + \gamma_n^2\|\nabla f_n(z_n)\|^2 - 4\gamma_n f_n(z_n) \\ &\quad + \delta_n^2\|\nabla f_n(y_n)\|^2 - 4\delta_n f_n(y_n) - \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2 \\ &= \|x_n - \hat{x}\|^2 + \rho_n^2 \frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} - 4\rho_n \frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} + \rho_n^2 \frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} - 4\rho_n \frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} \\ &\quad + \rho_n^2 \frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} - 4\rho_n \frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} - \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2 \\ &= \|x_n - \hat{x}\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} - \rho_n(4 - \rho_n)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} \\ &\quad - \rho_n(4 - \rho_n)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} - \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2. \end{aligned} \tag{29}$$
Since $0 < \rho_n < 4$, this implies that
$$\|x_{n+1} - \hat{x}\| \le \|x_n - \hat{x}\|. \tag{30}$$
Thus, $\lim_{n \to \infty}\|x_n - \hat{x}\|$ exists and $\{x_n\}$ is bounded. Since $\inf_{n \in \mathbb{N}} \rho_n(4 - \rho_n) > 0$, there is a $\rho$ such that $\rho_n(4 - \rho_n) \ge \rho(4 - \rho) > 0$. Again from (29), it follows that
$$\|x_n - \hat{x}\|^2 - \|x_{n+1} - \hat{x}\|^2 \ge \rho(4 - \rho)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} + \rho(4 - \rho)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} + \rho(4 - \rho)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} + \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2. \tag{31}$$
So, we obtain
$$0 = \lim_{n \to \infty}\big(\|x_n - \hat{x}\|^2 - \|x_{n+1} - \hat{x}\|^2\big) \ge \limsup_{n \to \infty}\Big[\rho(4 - \rho)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} + \rho(4 - \rho)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} + \rho(4 - \rho)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} + \|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2\Big] \ge 0. \tag{32}$$
This shows that
$$\lim_{n \to \infty}\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} = 0, \quad \lim_{n \to \infty}\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} = 0, \quad \lim_{n \to \infty}\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} = 0, \quad \lim_{n \to \infty}\|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\|^2 = 0. \tag{33}$$
We can check that $\{\nabla f_n(x_n)\}$ is bounded, so $\lim_{n \to \infty} f_n(x_n) = 0$; that is, $\lim_{n \to \infty}\|(I - P_{Q_n})Ax_n\| = 0$. Similarly, $\lim_{n \to \infty} f_n(z_n) = \lim_{n \to \infty}\|(I - P_{Q_n})Az_n\| = 0$ and $\lim_{n \to \infty} f_n(y_n) = \lim_{n \to \infty}\|(I - P_{Q_n})Ay_n\| = 0$.
Furthermore, from (33), we get
$$\lim_{n \to \infty}\|x_{n+1} - y_n + \delta_n \nabla f_n(y_n)\| = 0. \tag{34}$$
We note that
$$\|\delta_n \nabla f_n(y_n)\| = \frac{\rho_n f_n(y_n)}{\|\nabla f_n(y_n)\|^2}\,\|\nabla f_n(y_n)\| = \frac{\rho_n f_n(y_n)}{\|\nabla f_n(y_n)\|} \to 0, \quad \text{as } n \to \infty. \tag{35}$$
Hence, by (34) and (35), $\lim_{n \to \infty}\|x_{n+1} - y_n\| = 0$. Further, by (21) and $\tau_n\|\nabla f_n(x_n)\| \to 0$ as $n \to \infty$, we get $\lim_{n \to \infty}\|z_n - x_n\| = 0$. Since $\gamma_n\|\nabla f_n(z_n)\| \to 0$ as $n \to \infty$, we also get $\lim_{n \to \infty}\|y_n - z_n\| = 0$. Hence $\lim_{n \to \infty}\|x_{n+1} - x_n\| = 0$.
By the boundedness of $\{x_n\}$, the set $\omega_w(x_n)$ is non-empty. Let $x^{*} \in \omega_w(x_n)$. Then, there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ such that $x_{n_k} \rightharpoonup x^{*} \in H_1$.
Next, we show that $x^{*} \in S$. Since $x_{n_k+1} \in C_{n_k}$, by the definition of $C_{n_k}$ we get
$$c(x_{n_k}) \le \langle \xi_{n_k}, x_{n_k} - x_{n_k+1} \rangle, \tag{36}$$
where $\xi_{n_k} \in \partial c(x_{n_k})$. It follows, by the boundedness of $\partial c$, that
$$c(x_{n_k}) \le \|\xi_{n_k}\|\,\|x_{n_k} - x_{n_k+1}\| \to 0, \quad \text{as } k \to \infty. \tag{37}$$
By the w-lsc of c, $x_{n_k} \rightharpoonup x^{*}$, and (37), we see that
$$c(x^{*}) \le \liminf_{k \to \infty} c(x_{n_k}) \le 0. \tag{38}$$
Thus, $x^{*} \in C$.
Next, we show that $Ax^{*} \in Q$. Since $P_{Q_{n_k}}(Ax_{n_k}) \in Q_{n_k}$, the definition of $Q_{n_k}$ gives
$$q(Ax_{n_k}) \le \langle \eta_{n_k}, Ax_{n_k} - P_{Q_{n_k}}(Ax_{n_k}) \rangle, \tag{39}$$
where $\eta_{n_k} \in \partial q(Ax_{n_k})$. So, by the boundedness of $\partial q$, we obtain
$$q(Ax_{n_k}) \le \|\eta_{n_k}\|\,\|Ax_{n_k} - P_{Q_{n_k}}(Ax_{n_k})\| \to 0, \quad \text{as } k \to \infty. \tag{40}$$
The w-lsc of q and (40) give that
$$q(Ax^{*}) \le \liminf_{k \to \infty} q(Ax_{n_k}) \le 0. \tag{41}$$
Thus, $Ax^{*} \in Q$, and hence $x^{*} \in S$. By Lemma 1, we can deduce that $\{x_n\}$ converges weakly to a point in S. □

4. Strong Convergence Result

We next discuss the strong convergence of the sequence generated by the Halpern-type iteration.
Algorithm 2: The proposed algorithm for strong convergence.
Choose $x_0 \in H_1$. Assume $x_n$, $z_n$, and $y_n$ have been constructed. Compute the sequence $x_{n+1}$ by
$$\begin{aligned} z_n &= x_n - \tau_n \nabla f_n(x_n), \\ y_n &= z_n - \gamma_n \nabla f_n(z_n), \\ x_{n+1} &= \alpha_n u + (1 - \alpha_n) P_{C_n}\big(y_n - \delta_n \nabla f_n(y_n)\big), \end{aligned} \tag{42}$$
where $u \in H_1$, $\{\alpha_n\} \subset (0, 1)$, $C_n$ is given as in (6), and
$$\tau_n = \frac{\rho_n f_n(x_n)}{\|\nabla f_n(x_n)\|^2}, \quad \gamma_n = \frac{\rho_n f_n(z_n)}{\|\nabla f_n(z_n)\|^2}, \quad \delta_n = \frac{\rho_n f_n(y_n)}{\|\nabla f_n(y_n)\|^2}, \quad 0 < \rho_n < 4. \tag{43}$$
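Algorithm 2 differs from Algorithm 1 only in the Halpern averaging of the last step. Reusing the structure of three_step_cq above as a template, a sketch under the same simplifying assumptions (all names ours; the default weight $\alpha_n = 1/(100n+1)$ is one admissible choice, used in the experiments below) is:

```python
import numpy as np

def halpern_three_step_cq(x0, u, A, proj_C, proj_Q, rho=2.0,
                          alpha=lambda n: 1.0 / (100 * n + 1),
                          n_iters=100, eps=1e-12):
    """Sketch of Algorithm 2: Algorithm 1's last step is blended with the
    anchor point u using weights alpha_n -> 0 with sum alpha_n = infinity."""
    def f_and_grad(x):
        r = A @ x
        r = r - proj_Q(r)               # (I - P_Q) A x
        return 0.5 * (r @ r), A.T @ r

    def step(f_val, g):
        return rho * f_val / max(g @ g, eps)

    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iters + 1):
        f, g = f_and_grad(x)
        z = x - step(f, g) * g                  # step-size tau_n
        f, g = f_and_grad(z)
        y = z - step(f, g) * g                  # step-size gamma_n
        f, g = f_and_grad(y)
        a_n = alpha(n)
        x = a_n * u + (1.0 - a_n) * proj_C(y - step(f, g) * g)
    return x
```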
Theorem 2.
Assume that $\{\alpha_n\}$ and $\{\rho_n\}$ satisfy the following conditions:
(a) 
$\lim_{n \to \infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(b) 
$\inf_n \rho_n(4 - \rho_n) > 0$.
Then, $\{x_n\}$, defined by Algorithm 2, converges strongly to $P_S u$.
Proof. 
Set $\hat{x} = P_S u$. By using the same argument as in Theorem 1, we can show that
$$\|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - \hat{x}\|^2 \le \|y_n - \hat{x}\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} - \|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - y_n + \delta_n \nabla f_n(y_n)\|^2. \tag{44}$$
So,
$$\|y_n - \hat{x}\|^2 \le \|z_n - \hat{x}\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} \tag{45}$$
and
$$\|z_n - \hat{x}\|^2 \le \|x_n - \hat{x}\|^2 - \rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2}. \tag{46}$$
Also, we obtain
$$\begin{aligned} \|x_{n+1} - \hat{x}\|^2 &= \|\alpha_n(u - \hat{x}) + (1 - \alpha_n)(P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - \hat{x})\|^2 \\ &\le (1 - \alpha_n)\|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - \hat{x}\|^2 + 2\alpha_n \langle u - \hat{x}, x_{n+1} - \hat{x} \rangle. \end{aligned} \tag{47}$$
Combining (44)–(47), we obtain
$$\begin{aligned} \|x_{n+1} - \hat{x}\|^2 &\le (1 - \alpha_n)\|x_n - \hat{x}\|^2 - (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} - (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2} \\ &\quad - (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} - (1 - \alpha_n)\|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - y_n + \delta_n \nabla f_n(y_n)\|^2 \\ &\quad + 2\alpha_n \langle u - \hat{x}, x_{n+1} - \hat{x} \rangle. \end{aligned} \tag{48}$$
Next, we show that $\{x_n\}$ is bounded. Again, using (44)–(46), we get
$$\begin{aligned} \|x_{n+1} - \hat{x}\| &= \|\alpha_n u + (1 - \alpha_n)P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - \hat{x}\| \\ &\le \alpha_n\|u - \hat{x}\| + (1 - \alpha_n)\|y_n - \hat{x}\| \\ &\le \alpha_n\|u - \hat{x}\| + (1 - \alpha_n)\|z_n - \hat{x}\| \\ &\le \alpha_n\|u - \hat{x}\| + (1 - \alpha_n)\|x_n - \hat{x}\|. \end{aligned}$$
This shows that $\{x_n\}$ is bounded. In view of Lemma 2 and (48), we set
$$\begin{aligned} s_n &= \|x_n - \hat{x}\|^2, \qquad \upsilon_n = 2\alpha_n \langle u - \hat{x}, x_{n+1} - \hat{x} \rangle, \qquad \mu_n = 2\langle u - \hat{x}, x_{n+1} - \hat{x} \rangle, \\ \lambda_n &= (1 - \alpha_n)\|P_{C_n}(y_n - \delta_n \nabla f_n(y_n)) - y_n + \delta_n \nabla f_n(y_n)\|^2 + (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(x_n)}{\|\nabla f_n(x_n)\|^2} \\ &\quad + (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(y_n)}{\|\nabla f_n(y_n)\|^2} + (1 - \alpha_n)\rho_n(4 - \rho_n)\frac{f_n^2(z_n)}{\|\nabla f_n(z_n)\|^2}. \end{aligned}$$
Then (48) can be written as the inequalities
$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n \mu_n, \qquad s_{n+1} \le s_n - \lambda_n + \upsilon_n, \quad n \ge 1.$$
Let $\{n_k\}$ be a subsequence of $\{n\}$ such that
$$\lim_{k \to \infty} \lambda_{n_k} = 0.$$
Then, we have
$$\lim_{k \to \infty}\Big[(1 - \alpha_{n_k})\|P_{C_{n_k}}(y_{n_k} - \delta_{n_k}\nabla f_{n_k}(y_{n_k})) - y_{n_k} + \delta_{n_k}\nabla f_{n_k}(y_{n_k})\|^2 + (1 - \alpha_{n_k})\rho_{n_k}(4 - \rho_{n_k})\frac{f_{n_k}^2(x_{n_k})}{\|\nabla f_{n_k}(x_{n_k})\|^2} + (1 - \alpha_{n_k})\rho_{n_k}(4 - \rho_{n_k})\frac{f_{n_k}^2(z_{n_k})}{\|\nabla f_{n_k}(z_{n_k})\|^2} + (1 - \alpha_{n_k})\rho_{n_k}(4 - \rho_{n_k})\frac{f_{n_k}^2(y_{n_k})}{\|\nabla f_{n_k}(y_{n_k})\|^2}\Big] = 0,$$
which, by our assumptions, implies that
$$\frac{f_{n_k}^2(x_{n_k})}{\|\nabla f_{n_k}(x_{n_k})\|^2} \to 0, \quad \frac{f_{n_k}^2(z_{n_k})}{\|\nabla f_{n_k}(z_{n_k})\|^2} \to 0, \quad \frac{f_{n_k}^2(y_{n_k})}{\|\nabla f_{n_k}(y_{n_k})\|^2} \to 0, \quad \text{and} \quad \|P_{C_{n_k}}(y_{n_k} - \delta_{n_k}\nabla f_{n_k}(y_{n_k})) - y_{n_k} + \delta_{n_k}\nabla f_{n_k}(y_{n_k})\| \to 0$$
as $k \to \infty$. Since $\{\nabla f_{n_k}(x_{n_k})\}$, $\{\nabla f_{n_k}(z_{n_k})\}$, and $\{\nabla f_{n_k}(y_{n_k})\}$ are bounded, it follows that $f_{n_k}(x_{n_k}) \to 0$, $f_{n_k}(z_{n_k}) \to 0$, and $f_{n_k}(y_{n_k}) \to 0$ as $k \to \infty$. We also get $\lim_{k \to \infty}\|(I - P_{Q_{n_k}})Ax_{n_k}\| = 0$, $\lim_{k \to \infty}\|(I - P_{Q_{n_k}})Az_{n_k}\| = 0$, and $\lim_{k \to \infty}\|(I - P_{Q_{n_k}})Ay_{n_k}\| = 0$.
As in Theorem 1, we can show that $\omega_w(x_{n_k}) \subseteq S$. Hence, there is a subsequence $\{x_{n_{k_i}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_i}} \rightharpoonup x^{*} \in S$. So, by the characterization of $\hat{x} = P_S u$, we obtain
$$\limsup_{k \to \infty} \langle u - \hat{x}, x_{n_k} - \hat{x} \rangle = \lim_{i \to \infty} \langle u - \hat{x}, x_{n_{k_i}} - \hat{x} \rangle = \langle u - \hat{x}, x^{*} - \hat{x} \rangle \le 0.$$
On the other hand, we see that
$$\begin{aligned} \|x_{n_k+1} - y_{n_k}\| &= \|\alpha_{n_k}u + (1 - \alpha_{n_k})P_{C_{n_k}}(y_{n_k} - \delta_{n_k}\nabla f_{n_k}(y_{n_k})) - y_{n_k}\| \\ &\le \alpha_{n_k}\|u - y_{n_k}\| + (1 - \alpha_{n_k})\|P_{C_{n_k}}(y_{n_k} - \delta_{n_k}\nabla f_{n_k}(y_{n_k})) - y_{n_k}\| \\ &\le \alpha_{n_k}\|u - y_{n_k}\| + (1 - \alpha_{n_k})\|P_{C_{n_k}}(y_{n_k} - \delta_{n_k}\nabla f_{n_k}(y_{n_k})) - y_{n_k} + \delta_{n_k}\nabla f_{n_k}(y_{n_k})\| \\ &\quad + (1 - \alpha_{n_k})\delta_{n_k}\|\nabla f_{n_k}(y_{n_k})\| \to 0, \quad \text{as } k \to \infty. \end{aligned}$$
We also see that
$$\lim_{k \to \infty}\|z_{n_k} - x_{n_k}\| = 0 \quad \text{and} \quad \lim_{k \to \infty}\|y_{n_k} - z_{n_k}\| = 0.$$
Hence, we obtain
$$\|x_{n_k+1} - x_{n_k}\| \le \|x_{n_k+1} - y_{n_k}\| + \|y_{n_k} - z_{n_k}\| + \|z_{n_k} - x_{n_k}\| \to 0, \quad \text{as } k \to \infty.$$
By (54) and (56), we obtain
$$\limsup_{k \to \infty} \langle u - \hat{x}, x_{n_k+1} - \hat{x} \rangle \le 0.$$
Hence, we get
$$\limsup_{k \to \infty} \mu_{n_k} \le 0.$$
By Lemma 2, we can deduce that $\{x_n\}$ converges strongly to $\hat{x} = P_S u$. □

5. Numerical Examples

Finally, we provide numerical experiments on compressed sensing in signal recovery. We compare the performance of the relaxed CQ algorithms of Yang [8], López et al. [10], and Gibali et al. [12] with that of our CQ algorithms. Compressed sensing can be modeled as the linear system
$$y = Ax + \varepsilon, \tag{59}$$
where $x \in \mathbb{R}^N$ is the vector to be recovered, with m non-zero components, $y \in \mathbb{R}^M$ is the observed data corrupted by noise $\varepsilon$, and $A : \mathbb{R}^N \to \mathbb{R}^M$ with $M < N$. It is noted that (59) can be solved via the LASSO problem:
$$\min_{x \in \mathbb{R}^N} \frac{1}{2}\|y - Ax\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t, \tag{60}$$
where $t > 0$. In particular, taking $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ and $Q = \{y\}$, the LASSO problem can be considered as the SFP (1). From this point of view, we can apply the CQ algorithm to solve (60).
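In this setting, $P_Q$ is the constant map $v \mapsto y$, and $P_C$ is the Euclidean projection onto the $\ell_1$-ball of radius t, which can be computed by the standard sort-based method; a sketch (our helper, not from the paper):

```python
import numpy as np

def proj_l1_ball(v, t):
    """Euclidean projection of v onto {x : ||x||_1 <= t} in O(N log N)."""
    u = np.abs(v)
    if u.sum() <= t:
        return v.copy()                          # already feasible
    w = np.sort(u)[::-1]                         # magnitudes, decreasing
    cssv = np.cumsum(w) - t
    ks = np.arange(1, w.size + 1)
    k = np.nonzero(w - cssv / ks > 0)[0][-1]     # last index kept
    theta = cssv[k] / (k + 1.0)                  # soft-threshold level
    return np.sign(v) * np.maximum(u - theta, 0.0)
```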
In our experiment, the matrix $A \in \mathbb{R}^{M \times N}$ is generated from a normal distribution with mean zero and variance one. The sparse vector $x \in \mathbb{R}^N$ is generated from a uniform distribution on $[-1, 1]$ with m non-zero elements. The observation y is corrupted by white Gaussian noise with signal-to-noise ratio SNR = 40. We take $t = m$ and the initial point $x_1 = 0$.
The stopping criterion is based on the mean squared error (MSE):
$$\mathrm{MSE} = \frac{1}{N}\|\hat{x} - x\|_2^2 < 10^{-5}, \tag{61}$$
where $\hat{x}$ is the approximation of the original signal x.
In what follows, we take $\tau_n = 1/\|A\|^2$ in the CQ algorithm (10) of Yang [8]; $\tau_n = \frac{\rho_n \|Ax_n - y\|_2^2}{2\|A^{T}(Ax_n - y)\|_2^2}$ with $\rho_n = 2$ in (11) of López et al. [10]; $\tau_n$ defined by (14) with $\sigma = 1$ and $\rho = \mu = 0.5$ in the method of Gibali et al. [12]; and $\tau_n$, $\gamma_n$, $\delta_n$ defined by (22) with $\rho_n = 2$ in Algorithm 1. A sketch of the experimental setup is given below; the numerical results are then reported.
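As an illustration only (not the authors' code), the pieces above can be wired together as follows; the noise is omitted for simplicity:

```python
rng = np.random.default_rng(0)
N, M, m = 512, 256, 10
A = rng.standard_normal((M, N))                   # entries: mean 0, variance 1
x_true = np.zeros(N)
support = rng.choice(N, size=m, replace=False)
x_true[support] = rng.uniform(-1.0, 1.0, size=m)  # m non-zero entries
y = A @ x_true

x_rec = three_step_cq(
    np.zeros(N), A,
    proj_C=lambda v: proj_l1_ball(v, t=m),        # C = {x : ||x||_1 <= m}
    proj_Q=lambda v: y,                           # Q = {y}: constant projection
    rho=2.0, n_iters=500)
print("MSE:", np.mean((x_rec - x_true) ** 2))
```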
From Table 1 and Figures 1 and 2, we observe that Algorithm 1 outperforms the methods of Yang [8], López et al. [10], and Gibali et al. [12]; indeed, Algorithm 1 requires fewer iterations than the other methods.
Next, we examine the strong convergence of the relaxed CQ algorithm (12) of López et al. [10] and of Algorithm 2. We set the step-sizes $\tau_n$ as in the weak convergence experiment and let $\alpha_n = \frac{1}{100n + 1}$. The initial vector is $x_1 = 0$ and u is generated randomly. We then have the following numerical results.
From Table 2 and Figures 3 and 4, it is observed that Algorithm 2 requires fewer iterations than the method of López et al. [10].
We also provide numerical examples in an $L_2$-space, an infinite-dimensional Hilbert space, using Algorithm 2. Let $H_1 = H_2 = L_2[0, 1]$ with the inner product given by
$$\langle f, g \rangle = \int_0^1 f(t)g(t)\,dt.$$
Let $C = \{x \in L_2[0, 1] : \|x\|_{L_2} \le 1\}$ and $Q = \{x \in L_2[0, 1] : \langle x, t/2 \rangle \ge 0\}$. The problem is to find $x \in C$ such that $Ax \in Q$, where $(Ax)(t) = x(t)/2$. We take $\alpha_n = \frac{1}{10n + 1}$ and $\rho_n = 1.75$. The stopping criterion is defined by
$$E_n = \frac{1}{2}\|Ax_n - P_Q Ax_n\|_{L_2}^2 < 10^{-4}.$$
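A discretized sketch of this example (our construction, assuming the reconstruction of C, Q, and A above; the paper works directly in the continuous space):

```python
import numpy as np

n_grid = 1001
t = np.linspace(0.0, 1.0, n_grid)
h = t[1] - t[0]
ip = lambda f, g: float(np.sum(f * g) * h)    # grid approximation of <f, g>
a = t / 2.0                                   # Q = {x : <x, t/2> >= 0}

def proj_C(x):                                # unit ball of L2[0, 1]
    nx = np.sqrt(ip(x, x))
    return x if nx <= 1.0 else x / nx

def proj_Q(x):                                # half-space <x, a> >= 0
    s = ip(x, a)
    return x if s >= 0.0 else x - (s / ip(a, a)) * a

x1 = 7.0 * t**2 + 2.0                         # an initial function from Table 3
# (Ax)(t) = x(t)/2, so A = A^T = 0.5 * I on the grid.
x_sol = halpern_three_step_cq(x1, u=t, A=0.5 * np.eye(n_grid),
                              proj_C=proj_C, proj_Q=proj_Q, rho=1.75,
                              alpha=lambda n: 1.0 / (10 * n + 1), n_iters=20)
```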
From Table 3 and Figure 5, we see that our algorithm outperforms that of López et al. [10] in terms of both the number of iterations and the CPU time.

6. Conclusions

In this work, we have introduced new three-step iterative methods involving the self-adaptive technique for the SFP in Hilbert spaces. Weak and strong convergence were established under suitable conditions. Preliminary numerical experiments showed that our proposed methods outperform those of Yang [8], López et al. [10], and Gibali et al. [12]. In future work, we aim to investigate the SFP in Banach spaces and to establish convergence under suitable conditions.

Author Contributions

S.S.: supervision and investigation; N.E.: writing, original draft; N.P.: data analysis; P.C.: formal analysis and methodology.

Funding

This research is supported by Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
  2. Wang, F.; Xu, H.K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010, 102085.
  3. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
  4. Censor, Y.; Gibali, A.; Reich, S. Algorithms for the split variational inequality problem. Numer. Algorithms 2012, 59, 301–323.
  5. Gibali, A. A new split inverse problem and application to least intensity feasible solutions. Pure Appl. Funct. Anal. 2017, 2, 243–258.
  6. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
  7. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120.
  8. Yang, Q. The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266.
  9. Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70.
  10. López, G.; Martín-Márquez, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
  11. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665.
  12. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2017, 12, 1–14.
  13. Ćirić, L.; Rafiq, A.; Radenović, S.; Rajović, M.; Ume, J.S. On Mann implicit iterations for strongly accretive and strongly pseudo-contractive mappings. Appl. Math. Comput. 2008, 198, 128–137.
  14. Ćirić, L. Some Recent Results in Metrical Fixed Point Theory; University of Belgrade: Belgrade, Serbia, 2003.
  15. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007.
  16. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179.
  17. Zhao, J.; Zhang, Y.; Yang, Q. Modified projection methods for the split feasibility problem and multiple-sets feasibility problem. Appl. Math. Comput. 2012, 219, 1644–1653.
  18. Bnouhachem, A.; Noor, M.A. Three-step projection method for general variational inequalities. Int. J. Mod. Phys. B 2012, 26, 1250066.
  19. Cordero, A.; Hueso, J.L.; Martinez, E.; Torregrosa, J.R. Efficient three-step iterative methods with sixth order convergence for nonlinear equations. Numer. Algorithms 2010, 53, 485–495.
  20. Noor, M.A.; Noor, K.I. Three-step iterative methods for nonlinear equations. Appl. Math. Comput. 2006, 183, 322–327.
  21. Noor, M.A.; Yao, Y. Three-step iterations for variational inequalities and nonexpansive mappings. Appl. Math. Comput. 2007, 190, 1312–1321.
  22. Phuengrattana, W.; Suantai, S. On the rate of convergence of Mann, Ishikawa, Noor and SP-iterations for continuous functions on an arbitrary interval. J. Comput. Appl. Math. 2011, 235, 3006–3014.
  23. Rafiq, A.; Hussain, S.; Ahmad, F.; Awais, M.; Zafar, F. An efficient three-step iterative method with sixth-order convergence for solving nonlinear equations. Int. J. Comput. Math. 2007, 84, 369–375.
  24. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: London, UK, 2011.
  25. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264.
  26. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013.
Figure 1. MSE versus number of iterations of Algorithm 1 in case N = 4096, M = 2048, and m = 200.
Figure 2. From top to bottom: original signal, observation data, recovered signal by the algorithms of Yang [8], López et al. [10], Gibali et al. [12], and Algorithm 1, with N = 4096, M = 2048, and m = 200.
Figure 3. MSE versus number of iterations of Algorithm 2 in case N = 4096, M = 2048, and m = 200.
Figure 4. From top to bottom: original signal, observation data, recovered signal by Algorithm (12) of López et al. [10] and Algorithm 2.
Figure 5. Error versus number of iterations of Algorithm 2 in $L_2$-space.
Table 1. Number of iterations for Algorithm 1.

| Case 1: N = 512, M = 256 | Yang (10) | López et al. (11) | Gibali et al. (13) | Algorithm 1 |
|---|---|---|---|---|
| m = 10 | 74 | 65 | 106 | 39 |
| m = 20 | 217 | 184 | 246 | 111 |

| Case 2: N = 4096, M = 2048 | Yang (10) | López et al. (11) | Gibali et al. (13) | Algorithm 1 |
|---|---|---|---|---|
| m = 100 | 87 | 77 | 117 | 48 |
| m = 200 | 184 | 156 | 220 | 94 |
Table 2. Number of iterations for Algorithm 2.

| Case 1: N = 512, M = 256 | López et al. (12) | Algorithm 2 |
|---|---|---|
| m = 10 | 85 | 43 |
| m = 20 | 119 | 64 |

| Case 2: N = 4096, M = 2048 | López et al. (12) | Algorithm 2 |
|---|---|---|
| m = 100 | 85 | 48 |
| m = 200 | 230 | 140 |
Table 3. Number of iterations for Algorithm 2 in $L_2$-space.

| Setting | | López et al. (12) | Algorithm 2 |
|---|---|---|---|
| $u = t$, $x_1 = 7t^2 + 2$ | No. of iter. | 9 | 4 |
| | CPU time | 6.3707 | 4.0171 |
| $u = t + 1$, $x_1 = 4t^2 + t + 3$ | No. of iter. | 9 | 4 |
| | CPU time | 6.5169 | 4.1789 |
| $u = t^2$, $x_1 = 2t^2 + 3e^t$ | No. of iter. | 10 | 4 |
| | CPU time | 9.4818 | 5.5274 |
| $u = t^3$, $x_1 = 5t^3 + \sin(t) + 1$ | No. of iter. | 6 | 3 |
| | CPU time | 3.7478 | 2.9404 |
