Article

Modified Proximal Algorithms for Finding Solutions of the Split Variational Inclusions

by Suthep Suantai 1, Suparat Kesornprom 2 and Prasit Cholamjiak 2,*
1 Data Science Research Center, Department of Mathematics, Faculty of Science, Chiang Mai University, Chiang Mai 50200, Thailand
2 School of Science, University of Phayao, Phayao 56000, Thailand
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(8), 708; https://doi.org/10.3390/math7080708
Submission received: 27 June 2019 / Revised: 1 August 2019 / Accepted: 2 August 2019 / Published: 6 August 2019
(This article belongs to the Special Issue Nonlinear Analysis and Optimization)

Abstract: We investigate the split variational inclusion problem in Hilbert spaces. We propose efficient algorithms in which, in each iteration, the stepsize is chosen in a self-adaptive way, and we prove weak and strong convergence theorems. We provide numerical experiments to validate the theoretical results for solving the split variational inclusion problem, as well as comparisons with the algorithms of Byrne et al. and Chuang. The numerical experiments show that the proposed algorithms outperform the other algorithms. As applications, we apply our method to compressed sensing in signal recovery. The main advantage of the proposed methods is that the Lipschitz constants of the gradients of functions do not need to be computed in generating the sequences.

1. Introduction

Let H be a real Hilbert space. Then, $B : H \to 2^H$ is called monotone if $\langle u - v, x - y \rangle \ge 0$ for each $u \in Bx$, $v \in By$. Moreover, B is maximal monotone provided its graph is not properly included in the graph of other monotone mappings. Many problems in optimization can be reduced to finding $x^* \in H$ such that $0 \in Bx^*$. Martinet [1] and Rockafellar [2] suggested the proximal method for solving this problem. They construct the sequence $\{x_n\} \subset H$ by choosing $x_1 \in H$ and putting
$x_{n+1} = J_{\beta_n}^{B} x_n, \quad n \in \mathbb{N},$
where $\{\beta_n\} \subset (0, \infty)$, B is a set-valued maximal monotone operator and $J_{\beta}^{B}$ is defined by $J_{\beta}^{B} = (I + \beta B)^{-1}$ for each $\beta > 0$. We see that Equation (1) is equivalent to $\frac{x_n - x_{n+1}}{\beta_n} \in B x_{n+1}$, $n \in \mathbb{N}$.
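To make the resolvent concrete, here is a minimal numerical sketch of the proximal point iteration in Equation (1) for the maximal monotone operator $Bx = Mx$ with M symmetric positive semidefinite, whose resolvent is $(I + \beta M)^{-1}$. This is our illustration, not code from the references; the matrix M, the parameter β and the starting point are assumptions chosen only for demonstration.

```python
import numpy as np

# Proximal point method x_{n+1} = J_beta^B x_n for B x = M x,
# where M is symmetric positive semidefinite, so J_beta^B = (I + beta*M)^{-1}.
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])        # assumed monotone (positive semidefinite) operator
beta = 1.0                        # resolvent parameter beta_n, kept constant here
x = np.array([5.0, -3.0])         # assumed starting point x_1

for n in range(60):
    # Solve (I + beta*M) x_{n+1} = x_n instead of forming the inverse explicitly.
    x = np.linalg.solve(np.eye(2) + beta * M, x)

print(x)  # approaches the unique zero of B, i.e., the origin
```

The same pattern applies to any maximal monotone operator once its resolvent can be evaluated.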
The split variational inclusion problem (SVIP) was first investigated by Moudafi [3]. The problem consists of finding $x^* \in H_1$ such that
$0 \in B_1(x^*) \quad \text{and} \quad 0 \in B_2(Ax^*),$
where $H_1$ and $H_2$ are real Hilbert spaces, and $B_1$ and $B_2$ are set-valued mappings on $H_1$ and $H_2$. In addition, $A : H_1 \to H_2$ is a bounded and linear operator and $A^*$ is the adjoint of A. We know that the SVIP is a generalization of the split feasibility problem that was investigated by Censor and Elfving [4] in Euclidean spaces. See [4,5,6,7,8,9]. In this paper, we denote by Ω the solution set of SVIP. Suppose that Ω is nonempty.
In 2011, Byrne et al. [6] established a weak convergence theorem for SVIP as follows:
Theorem 1.
Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ be a bounded linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators. Let $\beta > 0$ and $\gamma \in \left(0, \frac{2}{\|A\|^2}\right)$. Let $\{x_n\}$ be generated by
$x_{n+1} = J_{\beta}^{B_1}\left(x_n - \gamma A^*(I - J_{\beta}^{B_2})Ax_n\right), \quad n \in \mathbb{N}.$
Then, $\{x_n\}$ converges weakly to $x^*$ in Ω.
In 2015, Chuang [10] studied the iteration in Equation (3) with parameter sequences $\{\beta_n\}$ and $\{\gamma_n\}$ in place of the constants β and γ for the SVIP in Hilbert spaces and established its convergence as follows:
Theorem 2.
Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ be a bounded and linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators. Choose $\delta \in (0, 1)$, let $\{\beta_n\} \subset (0, \infty)$ and $\{\gamma_n\} \subset \left(0, \frac{\delta}{\|A\|^2}\right)$, and assume that
$\sum_{n=1}^{\infty} \gamma_n = \infty, \quad \sum_{n=1}^{\infty} \gamma_n^2 < \infty, \quad \liminf_{n\to\infty} \beta_n > 0.$
If $H_1$ is finite dimensional, then $\lim_{n\to\infty} x_n = x^* \in \Omega$.
Chuang [10] also provided the following result.
Theorem 3.
Let $H_1$ and $H_2$ be infinite dimensional Hilbert spaces and $A : H_1 \to H_2$ be a bounded and linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone mappings. Choose $\delta \in (0, 1)$ and let $\{\beta_n\} \subset (0, \infty)$ with $\liminf_{n\to\infty} \beta_n > 0$ and $\{\gamma_n\} \subset \left(0, \frac{\delta}{\|A\|^2}\right)$ with $\inf_{n\in\mathbb{N}} \gamma_n > 0$. Then, $x_n \rightharpoonup x^* \in \Omega$.
In 2013, Chuang [11] proved a strong convergence theorem for the SVIP using the following algorithm.
Algorithm 1 [11]:
For $n \in \mathbb{N}$, set $y_n$ as
$y_n = J_{\beta_n}^{B_1}\left(x_n - \gamma_n A^*(I - J_{\beta_n}^{B_2})Ax_n\right),$
where $\gamma_n > 0$ is chosen such that
$\gamma_n \left\|A^*(I - J_{\beta_n}^{B_2})Ax_n - A^*(I - J_{\beta_n}^{B_2})Ay_n\right\| \le \delta \|x_n - y_n\|, \quad 0 < \delta < 1.$
The iterate $x_{n+1}$ is generated by
$x_{n+1} = J_{\beta_n}^{B_1}\left(x_n - \alpha_n D(x_n, \gamma_n)\right),$
where
$D(x_n, \gamma_n) = x_n - y_n + \gamma_n\left(A^*(I - J_{\beta_n}^{B_2})Ay_n - A^*(I - J_{\beta_n}^{B_2})Ax_n\right)$
and
$\alpha_n = \frac{\langle x_n - y_n, D(x_n, \gamma_n)\rangle}{\|D(x_n, \gamma_n)\|^2}.$
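To illustrate how the stepsize rule of Algorithm 1 can be realized, the sketch below is our own toy implementation (not code from [11]) for a one-dimensional split inclusion in which $B_1 = \partial\iota_C$ and $B_2 = \partial\iota_Q$ for the intervals C = [0, 1] and Q = [2, 3], so that the resolvents reduce to projections (this identification is justified in Section 6). The operator A, the backtracking factor and the tolerances are assumptions.

```python
import numpy as np

# Toy data: C = [0, 1], Q = [2, 3], A = 2, so x* = 1 solves the split inclusion.
A = np.array([[2.0]])
proj_C = lambda x: np.clip(x, 0.0, 1.0)      # resolvent of B1 = d(iota_C)
proj_Q = lambda y: np.clip(y, 2.0, 3.0)      # resolvent of B2 = d(iota_Q)
g = lambda x: A.T @ (A @ x - proj_Q(A @ x))  # A*(I - J^{B2})A x

delta, x = 0.5, np.array([5.0])
for n in range(100):
    # Backtrack gamma_n until gamma_n*||g(x_n) - g(y_n)|| <= delta*||x_n - y_n||.
    gamma = 1.0
    y = proj_C(x - gamma * g(x))
    while gamma * np.linalg.norm(g(x) - g(y)) > delta * np.linalg.norm(x - y) + 1e-16:
        gamma *= 0.5
        y = proj_C(x - gamma * g(x))
    D = x - y + gamma * (g(y) - g(x))
    if np.linalg.norm(D) < 1e-12:            # x_n already solves the problem numerically
        break
    alpha = np.dot(x - y, D) / np.linalg.norm(D) ** 2
    x = proj_C(x - alpha * D)

print(x)  # approximately [1.0]
```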
Theorem 4.
Let $H_1$ and $H_2$ be two real Hilbert spaces and $A : H_1 \to H_2$ be a bounded and linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be two set-valued maximal monotone operators. Let $\{a_n\}$, $\{b_n\}$, $\{c_n\}$, and $\{d_n\}$ be sequences of real numbers in $[0, 1]$ with $a_n + b_n + c_n + d_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{\beta_n\} \subset (0, \infty)$ and let $\{\gamma_n\} \subset \left(0, \frac{2}{\|A\|^2 + 1}\right)$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Fix $u \in H_1$ and let the sequence $\{x_n\} \subset H_1$ be generated by
$x_{n+1} = a_n u + b_n x_n + c_n J_{\beta_n}^{B_1}\left(x_n - \gamma_n A^*(I - J_{\beta_n}^{B_2})Ax_n\right) + d_n v_n$
for each $n \in \mathbb{N}$. Suppose that
(i) $\lim_{n\to\infty} a_n = \lim_{n\to\infty} \frac{d_n}{a_n} = 0$; $\sum_{n=1}^{\infty} a_n = \infty$; $\sum_{n=1}^{\infty} d_n < \infty$;
(ii) $\liminf_{n\to\infty} c_n \gamma_n > 0$, $\liminf_{n\to\infty} b_n c_n > 0$, $\liminf_{n\to\infty} \beta_n > 0$.
Then, $\lim_{n\to\infty} x_n = x^*$, where $x^* = P_\Omega u$ and $P_\Omega u$ is the point of Ω nearest to u.
We aim to find approximate algorithms with a new stepsize which is self-adaptive (see López et al. [8]) for solving the SVIP and to prove their convergence. We present numerical examples and comparisons with the algorithms of Byrne et al. [6] and the algorithms of Chuang [10,11]. We also obtain a result for the split feasibility problem (SFP) and its applications to compressed sensing in signal recovery. It reveals that our methods have a better convergence behavior than those of Byrne et al. [6] and Chuang [10,11].

2. Preliminaries

We next provide some basic concepts for our proof. In what follows, we shall use the following symbols:
  • ⇀ stands for the weak convergence,
  • → stands for the strong convergence.
Recall that a mapping $T : H \to H$ is called
(1) nonexpansive if, for all $x, y \in H$,
$\|Tx - Ty\| \le \|x - y\|;$
(2) firmly nonexpansive if, for all $x, y \in H$,
$\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y\rangle.$
It is clear that $I - T$ is also firmly nonexpansive when T is firmly nonexpansive. We know that, for each $x, y \in H$,
$\langle x, y\rangle = \frac{1}{2}\|x\|^2 + \frac{1}{2}\|y\|^2 - \frac{1}{2}\|x - y\|^2$
and
$\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$
for all $x, y \in H$ and for all $t \in [0, 1]$.
The following lemma can be found in [12].
Lemma 1.
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $T : C \to C$ be a nonexpansive mapping. If $x_n \rightharpoonup x \in C$ and $\lim_{n\to\infty} \|x_n - Tx_n\| = 0$, then $x = Tx$.
We denote by $Fix(T)$ the fixed point set of a mapping T, that is, $Fix(T) = \{x \in H : x = Tx\}$, and by $D(T)$ the domain of a mapping T, i.e., $D(T) = \{x \in H : T(x) \ne \emptyset\}$.
The following lemma can be found in [11,13].
Lemma 2.
Let H be a real Hilbert space and let $B : H \to 2^H$ be a maximal monotone operator. Then,
(i) $J_\beta^B$ is single-valued and firmly nonexpansive for each $\beta > 0$;
(ii) $D(J_\beta^B) = H$ and $Fix(J_\beta^B) = \{x \in D(B) : 0 \in Bx\}$;
(iii) $\|x - J_\beta^B x\| \le \|x - J_\gamma^B x\|$ for all $0 < \beta \le \gamma$ and for all $x \in H$;
(iv) If $B^{-1}(0) \ne \emptyset$, then we have $\|x - J_\beta^B x\|^2 + \|J_\beta^B x - x^*\|^2 \le \|x - x^*\|^2$ for all $x \in H$, each $x^* \in B^{-1}(0)$, and each $\beta > 0$;
(v) If $B^{-1}(0) \ne \emptyset$, then we have $\langle x - J_\beta^B x, J_\beta^B x - w\rangle \ge 0$ for all $x \in H$, each $w \in B^{-1}(0)$, and each $\beta > 0$.
Lemma 3.
Let $H_1$ and $H_2$ be real Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded and linear operator. Let $\beta > 0$, $\gamma > 0$, and let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be maximal monotone operators. Let $x^* \in H_1$.
(i) If $x^*$ is a solution of (SVIP), then $J_\beta^{B_1}\left(x^* - \gamma A^*(I - J_\beta^{B_2})Ax^*\right) = x^*$.
(ii) Suppose that $J_\beta^{B_1}\left(x^* - \gamma A^*(I - J_\beta^{B_2})Ax^*\right) = x^*$ and the solution set of (SVIP) is nonempty. Then, $x^*$ is a solution of (SVIP).
Lemma 4.
Let $H_1$ and $H_2$ be real Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded and linear operator and $\beta > 0$. Let $B : H_2 \to 2^{H_2}$ be a maximal monotone operator. Define a mapping $T : H_1 \to H_1$ by $Tx := A^*(I - J_\beta^B)Ax$ for each $x \in H_1$. Then,
(i) $\|(I - J_\beta^B)Ax - (I - J_\beta^B)Ay\|^2 \le \langle Tx - Ty, x - y\rangle$ for all $x, y \in H_1$;
(ii) $\|A^*(I - J_\beta^B)Ax - A^*(I - J_\beta^B)Ay\|^2 \le \|A\|^2 \cdot \langle Tx - Ty, x - y\rangle$ for all $x, y \in H_1$.
The following lemma can be found in [14].
Lemma 5.
Let C be a nonempty subset of a Hilbert space H. Let $\{x_n\}$ be a sequence in H that satisfies the following assumptions:
(i) $\lim_{n\to\infty} \|x_n - x\|$ exists for each $x \in C$;
(ii) every sequential weak limit point of $\{x_n\}$ is in C.
Then, $\{x_n\}$ weakly converges to a point in C.
The following lemma can be found in [15].
Lemma 6.
Assume $\{s_n\} \subset (0, \infty)$ is such that
$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n \delta_n, \quad n \ge 1,$
$s_{n+1} \le s_n - \lambda_n + \varphi_n, \quad n \ge 1,$
where $\{\alpha_n\} \subset (0, 1)$, $\{\lambda_n\} \subset (0, 1)$ and $\{\delta_n\}$ and $\{\varphi_n\}$ are real sequences such that
(i) $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(ii) $\lim_{n\to\infty} \varphi_n = 0$;
(iii) $\lim_{k\to\infty} \lambda_{n_k} = 0$ implies $\limsup_{k\to\infty} \delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then, $\lim_{n\to\infty} s_n = 0$.

3. Weak Convergence Result

Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ be a bounded and linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators.
Let Ω be the solution set of problem (SVIP) and assume that $\Omega \ne \emptyset$. We remark that the stepsize sequence $\{\gamma_n\}$ does not depend on the norm of the operator A, unlike the stepsizes introduced by Byrne et al. [6] and Chuang [10,11].
Theorem 5.
Suppose that $\liminf_{n\to\infty} \beta_n > 0$, $\inf_n \rho_n(4 - \rho_n) > 0$ and $\lim_{n\to\infty} \theta_n = 0$. Then, $\{x_n\}$ defined by Algorithm 2 converges weakly to a solution in Ω.
Algorithm 2:
Choose $x_1 \in H_1$ and define
$x_{n+1} = J_{\beta_n}^{B_1}\left(x_n - \gamma_n g(x_n)\right),$
where
$\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}, \quad 0 < \rho_n < 4, \quad 0 < \theta_n < 1, \quad \beta_n > 0,$
and
$f(x_n) = \frac{1}{2}\left\|(I - J_{\beta_n}^{B_2})Ax_n\right\|^2, \quad g(x_n) = A^*(I - J_{\beta_n}^{B_2})Ax_n.$
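For readers who want to try the self-adaptive stepsize, the following sketch is our illustration of Algorithm 2 on a small toy split inclusion in which the resolvents are projections onto intervals (cf. Section 6); the operator A, the sets, and the parameter choices for ρ_n and θ_n are assumptions, not data from the paper.

```python
import numpy as np

# Toy split inclusion: B1 = d(iota_[0,1]), B2 = d(iota_[2,3]), A = 2, solution x* = 1.
A = np.array([[2.0]])
J_B1 = lambda x: np.clip(x, 0.0, 1.0)   # resolvent of B1 (a projection here)
J_B2 = lambda y: np.clip(y, 2.0, 3.0)   # resolvent of B2 (a projection here)

x = np.array([5.0])
rho = 3.0                               # 0 < rho_n < 4
for n in range(1, 200):
    r = A @ x - J_B2(A @ x)             # (I - J^{B2}) A x_n
    f = 0.5 * np.linalg.norm(r) ** 2    # f(x_n)
    g = A.T @ r                         # g(x_n) = A*(I - J^{B2}) A x_n
    theta = 1.0 / n ** 5                # theta_n -> 0
    gamma = rho * f / (np.linalg.norm(g) ** 2 + theta)   # self-adaptive stepsize
    x = J_B1(x - gamma * g)             # x_{n+1} = J^{B1}(x_n - gamma_n g(x_n))

print(x)  # approximately [1.0]
```

Note that no norm of A or Lipschitz constant enters the computation of γ_n.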
Proof. 
Let $z \in \Omega$. Then, $z \in B_1^{-1}(0)$ and $Az \in B_2^{-1}(0)$. Thus, we have $J_{\beta_n}^{B_2}Az = Az$. Using Lemma 4 (i), we have
$\langle x_n - z, g(x_n)\rangle = \langle x_n - z, g(x_n) - g(z)\rangle = \langle x_n - z, A^*(I - J_{\beta_n}^{B_2})Ax_n - A^*(I - J_{\beta_n}^{B_2})Az\rangle = \langle Ax_n - Az, (I - J_{\beta_n}^{B_2})Ax_n - (I - J_{\beta_n}^{B_2})Az\rangle \ge \|(I - J_{\beta_n}^{B_2})Ax_n\|^2 = 2f(x_n).$
From Equation (20), Lemma 2 (iv) and the defining formulas of Algorithm 2,
$\|x_{n+1} - z\|^2 = \|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - z\|^2 \le \|x_n - \gamma_n g(x_n) - z\|^2 - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2 = \|x_n - z\|^2 + \gamma_n^2\|g(x_n)\|^2 - 2\gamma_n\langle x_n - z, g(x_n)\rangle - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2 \le \|x_n - z\|^2 + \gamma_n^2\|g(x_n)\|^2 - 4\gamma_n f(x_n) - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2 = \|x_n - z\|^2 + \frac{\rho_n^2 f^2(x_n)}{(\|g(x_n)\|^2 + \theta_n)^2}\|g(x_n)\|^2 - \frac{4\rho_n f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2 \le \|x_n - z\|^2 + \frac{\rho_n^2 f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - \frac{4\rho_n f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2 = \|x_n - z\|^2 - \frac{\rho_n(4 - \rho_n) f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - \|x_{n+1} - x_n + \gamma_n g(x_n)\|^2.$
This implies that, since $0 < \rho_n < 4$,
$\|x_{n+1} - z\| \le \|x_n - z\|.$
Thus, $\lim_{n\to\infty} \|x_n - z\|$ exists. It follows that $\{x_n\}$ is bounded. Again, by Equation (21), we get
$\frac{\rho_n(4 - \rho_n) f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} \le \|x_n - z\|^2 - \|x_{n+1} - z\|^2,$
which yields by our assumptions that
$\lim_{n\to\infty} \frac{f^2(x_n)}{\|g(x_n)\|^2} = 0.$
By Lemma 4 (ii), it can be checked that g is a Lipschitzian mapping and thus $\{g(x_n)\}$ is bounded. Hence, we get $\lim_{n\to\infty} f(x_n) = 0$. This means
$\lim_{n\to\infty} \left\|(I - J_{\beta_n}^{B_2})Ax_n\right\| = 0.$
Furthermore, by Equation (21), we also have
$\lim_{n\to\infty} \|x_{n+1} - x_n + \gamma_n g(x_n)\| = 0.$
We note that
$\gamma_n \|g(x_n)\| = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}\|g(x_n)\| \to 0, \quad \text{as } n \to \infty.$
Hence, by Equations (26) and (27), we obtain
$\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0.$
From Equation (25) and Lemma 2 (iii), we get
$\lim_{n\to\infty} \|Ax_n - J_\beta^{B_2}Ax_n\| \le \lim_{n\to\infty} \|Ax_n - J_{\beta_n}^{B_2}Ax_n\| = 0,$
for some $\beta > 0$ such that $\beta_n \ge \beta > 0$ for all $n \in \mathbb{N}$. From Equation (27), we see that
$\|x_{n+1} - J_{\beta_n}^{B_1} x_n\| = \|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - J_{\beta_n}^{B_1} x_n\| \le \|x_n - \gamma_n g(x_n) - x_n\| = \gamma_n\|g(x_n)\| \to 0 \ \text{as } n \to \infty.$
From Equations (28) and (30), we have
$\|x_n - J_{\beta_n}^{B_1} x_n\| = \|x_n - x_{n+1} + x_{n+1} - J_{\beta_n}^{B_1} x_n\| \le \|x_n - x_{n+1}\| + \|x_{n+1} - J_{\beta_n}^{B_1} x_n\| \to 0 \ \text{as } n \to \infty.$
Lemma 2 (iii) gives
$\lim_{n\to\infty} \|x_n - J_\beta^{B_1} x_n\| \le \lim_{n\to\infty} \|x_n - J_{\beta_n}^{B_1} x_n\| = 0.$
Since $\{x_n\}$ is bounded, there is a subsequence $\{x_{n_k}\}$ of $\{x_n\}$ and $x^* \in H_1$ with $x_{n_k} \rightharpoonup x^*$. We also have $Ax_{n_k} \rightharpoonup Ax^*$. By Equations (29) and (32) and Lemmas 1 and 2 (ii), we obtain $x^* \in \Omega$. Using Lemma 5, we obtain that $\{x_n\}$ converges weakly to a solution in Ω. □

4. Strong Convergence Result

Theorem 6.
Assume that $\{\alpha_n\}$, $\{\rho_n\}$ and $\{\theta_n\}$ satisfy the assumptions:
(a1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(a2) $\inf_n \rho_n(4 - \rho_n) > 0$;
(a3) $\lim_{n\to\infty} \theta_n = 0$;
(a4) $\liminf_{n\to\infty} \beta_n > 0$.
Then, $\{x_n\}$ defined by Algorithm 3 converges strongly to $z = P_\Omega u$, where $P_\Omega u$ is the point of Ω closest to u.
Algorithm 3:
Choose $x_1 \in H_1$ and let $u \in H_1$. Let $\{\alpha_n\}$ be a real sequence in $(0, 1)$. Let $\{x_n\}$ be iteratively generated by
$x_{n+1} = \alpha_n u + (1 - \alpha_n) J_{\beta_n}^{B_1}\left(x_n - \gamma_n g(x_n)\right),$
where
$\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}, \quad 0 < \rho_n < 4, \quad 0 < \theta_n < 1, \quad \beta_n > 0,$
and
$f(x_n) = \frac{1}{2}\left\|(I - J_{\beta_n}^{B_2})Ax_n\right\|^2, \quad g(x_n) = A^*(I - J_{\beta_n}^{B_2})Ax_n.$
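The sketch below is our illustration of how Algorithm 3 differs from Algorithm 2: the only change is the anchoring step with the fixed point u, which forces strong convergence toward $P_\Omega u$. The toy problem and all parameter values are assumptions, as before.

```python
import numpy as np

# Same toy split inclusion as before: A = 2, C = [0, 1], Q = [2, 3], so Omega = {1}.
A = np.array([[2.0]])
J_B1 = lambda x: np.clip(x, 0.0, 1.0)
J_B2 = lambda y: np.clip(y, 2.0, 3.0)

u = np.array([0.3])                     # assumed anchor point u in H_1
x = np.array([5.0])
rho = 3.0
for n in range(1, 500):
    alpha = 1.0 / (n + 1)               # alpha_n -> 0 with sum alpha_n = infinity
    r = A @ x - J_B2(A @ x)
    f = 0.5 * np.linalg.norm(r) ** 2
    g = A.T @ r
    gamma = rho * f / (np.linalg.norm(g) ** 2 + 1.0 / n ** 5)
    # Halpern-type step: convex combination with the fixed anchor u.
    x = alpha * u + (1 - alpha) * J_B1(x - gamma * g)

print(x)  # tends to P_Omega(u); here Omega = {1}, so the iterates approach 1
```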
Proof. 
Set $z = P_\Omega u \in \Omega$. Using the line of proof as for Theorem 5, we have
$\|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - z\|^2 \le \|x_n - z\|^2 - \frac{\rho_n(4 - \rho_n) f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - \|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - x_n + \gamma_n g(x_n)\|^2.$
Then,
$\|x_{n+1} - z\|^2 = \|\alpha_n(u - z) + (1 - \alpha_n)(J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - z)\|^2 \le (1 - \alpha_n)\|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - z\|^2 + 2\alpha_n\langle u - z, x_{n+1} - z\rangle.$
Combining Equations (36) and (37), we get
$\|x_{n+1} - z\|^2 \le (1 - \alpha_n)\|x_n - z\|^2 - (1 - \alpha_n)\frac{\rho_n(4 - \rho_n) f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} - (1 - \alpha_n)\|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - x_n + \gamma_n g(x_n)\|^2 + 2\alpha_n\langle u - z, x_{n+1} - z\rangle.$
Next, we will show that $\{x_n\}$ is bounded. Again, using Equation (36),
$\|x_{n+1} - z\| = \|\alpha_n u + (1 - \alpha_n)J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - z\| \le \alpha_n\|u - z\| + (1 - \alpha_n)\|x_n - z\|.$
Thus, $\|x_{n+1} - z\| \le \max\{\|u - z\|, \|x_n - z\|\}$ and, by induction, $\|x_n - z\| \le \max\{\|u - z\|, \|x_1 - z\|\}$ for all $n \in \mathbb{N}$, so $\{x_n\}$ is bounded. Employing Lemma 6, from Equation (38), we set
$s_n = \|x_n - z\|^2; \quad \varphi_n = 2\alpha_n\langle u - z, x_{n+1} - z\rangle; \quad \delta_n = 2\langle u - z, x_{n+1} - z\rangle;$
$\lambda_n = (1 - \alpha_n)\frac{\rho_n(4 - \rho_n) f^2(x_n)}{\|g(x_n)\|^2 + \theta_n} + (1 - \alpha_n)\|J_{\beta_n}^{B_1}(x_n - \gamma_n g(x_n)) - x_n + \gamma_n g(x_n)\|^2.$
Thus, Equation (38) reduces to the inequalities
$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n\delta_n, \quad n \ge 1,$
$s_{n+1} \le s_n - \lambda_n + \varphi_n.$
Let $\{n_k\} \subset \{n\}$ be such that
$\lim_{k\to\infty} \lambda_{n_k} = 0.$
Then, we have
$\lim_{k\to\infty}\left((1 - \alpha_{n_k})\frac{\rho_{n_k}(4 - \rho_{n_k}) f^2(x_{n_k})}{\|g(x_{n_k})\|^2 + \theta_{n_k}} + (1 - \alpha_{n_k})\|J_{\beta_{n_k}}^{B_1}(x_{n_k} - \gamma_{n_k} g(x_{n_k})) - x_{n_k} + \gamma_{n_k} g(x_{n_k})\|^2\right) = 0,$
which, by using our assumptions, implies
$\frac{f^2(x_{n_k})}{\|g(x_{n_k})\|^2} \to 0 \ \text{as } k \to \infty,$
and
$\|J_{\beta_{n_k}}^{B_1}(x_{n_k} - \gamma_{n_k} g(x_{n_k})) - x_{n_k} + \gamma_{n_k} g(x_{n_k})\| \to 0 \ \text{as } k \to \infty.$
Since $\{g(x_{n_k})\}$ is bounded, it follows that $f(x_{n_k}) \to 0$ as $k \to \infty$. Thus, we get
$\lim_{k\to\infty} \left\|(I - J_{\beta_{n_k}}^{B_2})Ax_{n_k}\right\| = 0.$
As in the proof of Theorem 5, we can show that there is a subsequence $\{x_{n_{k_i}}\}$ of $\{x_{n_k}\}$ such that $x_{n_{k_i}} \rightharpoonup x^* \in \Omega$. From Lemma 2 (v), we obtain
$\limsup_{k\to\infty}\langle u - z, x_{n_k} - z\rangle = \lim_{i\to\infty}\langle u - z, x_{n_{k_i}} - z\rangle = \langle u - z, x^* - z\rangle \le 0.$
We see that
$\|x_{n_k+1} - x_{n_k}\| = \|\alpha_{n_k} u + (1 - \alpha_{n_k})J_{\beta_{n_k}}^{B_1}(x_{n_k} - \gamma_{n_k} g(x_{n_k})) - x_{n_k}\| \le \alpha_{n_k}\|u - x_{n_k}\| + (1 - \alpha_{n_k})\|J_{\beta_{n_k}}^{B_1}(x_{n_k} - \gamma_{n_k} g(x_{n_k})) - x_{n_k}\| \le \alpha_{n_k}\|u - x_{n_k}\| + (1 - \alpha_{n_k})\|J_{\beta_{n_k}}^{B_1}(x_{n_k} - \gamma_{n_k} g(x_{n_k})) - x_{n_k} + \gamma_{n_k} g(x_{n_k})\| + (1 - \alpha_{n_k})\gamma_{n_k}\|g(x_{n_k})\| \to 0 \ \text{as } k \to \infty.$
From Equations (48) and (49), it follows that
$\limsup_{k\to\infty}\langle u - z, x_{n_k+1} - z\rangle \le 0.$
Hence, we get
$\limsup_{k\to\infty} \delta_{n_k} \le 0.$
Thus, $\{x_n\}$ converges strongly to $z = P_\Omega u$ by Lemma 6. □

5. Numerical Experiments

We present numerical experiments for our main results.
First, we give a comparison among Theorems 1–3 and 5 for a weak convergence theorem.
The following example is introduced in [10].
Example 1.
Let $A$, $B_1 : \mathbb{R}^2 \to \mathbb{R}^2$ and $B_2 : \mathbb{R}^3 \to \mathbb{R}^3$ be given by
$A = \begin{pmatrix} 2 & 1 \\ 1 & 2 \\ 2 & 2 \end{pmatrix}, \quad B_1\begin{pmatrix} x \\ y \end{pmatrix} = \begin{pmatrix} 2 & 2 \\ 2 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} 2 \\ 2 \end{pmatrix}, \quad B_2\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} 2 & 2 & 2 \\ 2 & 2 & 2 \\ 2 & 2 & 2 \end{pmatrix}\begin{pmatrix} x \\ y \\ z \end{pmatrix}.$
We aim to find $x^* = (x_1^*, x_2^*)^T \in \mathbb{R}^2$ such that $B_1(x^*) = (0, 0)^T$ and $B_2(Ax^*) = (0, 0, 0)^T$. In this case, we know that $x_1^* = 1.5$ and $x_2^* = 0.5$.
We set $\gamma_n = 0.001$ in Theorem 1, $\gamma_n = \frac{1}{2n\|A\|^2}$ in Theorem 2, $\gamma_n = \frac{\delta}{2\|A\|^2}$ in Theorem 3 and $\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}$ with $\theta_n = \frac{1}{n^5}$ in Theorem 5. The stopping criterion is given by $\|x_n - x^*\|_2 < \varepsilon$.
We test by the following cases:
Case 1: $x_1 = [1, 1]$, $\beta_n = 1$, $\rho_n = \frac{1.5n}{n+1}$, and $\delta = \frac{1}{3}$;
Case 2: $x_1 = [4, 2]$, $\beta_n = 2$, $\rho_n = \frac{3.5n}{n+1}$, and $\delta = \frac{1}{2}$;
Case 3: $x_1 = [5, 3]$, $\beta_n = 3$, $\rho_n = 2.8$, and $\delta = \frac{1}{4}$;
Case 4: $x_1 = [2, 7]$, $\beta_n = 4$, $\rho_n = 3.9$, and $\delta = \frac{1}{5}$.
From Table 1, we see that Theorem 5 using Algorithm 2 has a better convergence rate than other algorithms.
Second, we give a comparison between Theorems 4 and 6 for a strong convergence theorem by using Example 1.
Choose $a_n = \frac{1}{n+1}$, $b_n = \frac{1}{5}$, $c_n = 1 - a_n - b_n$, $d_n = 0$ and $\gamma_n = \frac{1}{\|A\|^2 + 1}$ in Theorem 4, and set $\theta_n = \frac{1}{n^5}$, $\alpha_n = \frac{1}{n+1}$ and $\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}$ in Theorem 6. In this case, we let $u = [2, 2]$.
We test by the following cases:
Case 1: $x_1 = [1, 1]$, $\beta_n = 1$ and $\rho_n = \frac{1.5n}{n+1}$;
Case 2: $x_1 = [4, 2]$, $\beta_n = 2$ and $\rho_n = \frac{3.5n}{n+1}$;
Case 3: $x_1 = [5, 3]$, $\beta_n = 3$ and $\rho_n = 2.8$;
Case 4: $x_1 = [2, 7]$, $\beta_n = 4$ and $\rho_n = 3.9$.
From Table 2, we observe that, in each case, the convergence behavior of Theorem 4 is worse than that of Theorem 6.

6. Split Feasibility Problem

Let $H_1$ and $H_2$ be real Hilbert spaces. We next study the split feasibility problem (SFP), that is, to seek $x^* \in H_1$ such that
$x^* \in C \quad \text{and} \quad Ax^* \in Q,$
where C and Q are nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator with the adjoint operator $A^*$. Many authors introduced various algorithms for solving the SFP [16,17,18,19].
Let H be a Hilbert space and let $g : H \to (-\infty, \infty]$ be a proper, lower semicontinuous and convex function. The subdifferential $\partial g$ of g is defined by
$\partial g(x) = \{z \in H : g(x) + \langle z, y - x\rangle \le g(y), \ \forall y \in H\}$
for all $x \in H$. Let C be a nonempty closed convex subset of H, and let $\iota_C$ be the indicator function of C defined by
$\iota_C x = \begin{cases} 0, & x \in C, \\ \infty, & x \notin C. \end{cases}$
The normal cone $N_C u$ of C at u is defined by
$N_C u = \{z \in H : \langle z, v - u\rangle \le 0, \ \forall v \in C\}.$
Then, $\iota_C$ is a proper, lower semicontinuous and convex function on H. See [20,21]. Moreover, the subdifferential $\partial\iota_C$ of $\iota_C$ is a maximal monotone mapping. In this connection, we can define the resolvent $J_\lambda^{\partial\iota_C}$ of $\partial\iota_C$ for $\lambda > 0$ by
$J_\lambda^{\partial\iota_C} x = (I + \lambda\partial\iota_C)^{-1} x$
for all $x \in H$. Hence, we see that
$\partial\iota_C x = \{z \in H : \iota_C x + \langle z, y - x\rangle \le \iota_C y, \ \forall y \in H\} = \{z \in H : \langle z, y - x\rangle \le 0, \ \forall y \in C\} = N_C x$
for all $x \in C$. Hence, for each $\beta > 0$, we obtain the following relation:
$u = J_\beta^{\partial\iota_C} x \iff x \in u + \beta\partial\iota_C u \iff x - u \in \beta N_C u \iff \langle x - u, y - u\rangle \le 0, \ \forall y \in C \iff u = P_C x.$
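The chain of equivalences above can be checked numerically: the resolvent of $\partial\iota_C$ acts as the metric projection, which is characterized by $\langle x - P_C x, y - P_C x\rangle \le 0$ for all $y \in C$. The sketch below is our illustration with C an assumed Euclidean unit ball; it is not code from the paper.

```python
import numpy as np

# Check u = P_C x  <=>  <x - u, y - u> <= 0 for all y in C,
# with C the closed unit ball in R^3 (an assumed example set).
rng = np.random.default_rng(0)

def proj_ball(x):
    # Metric projection onto the unit ball, i.e., the resolvent of d(iota_C).
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

x = np.array([2.0, -1.0, 0.5])
u = proj_ball(x)

# Sample points y in C and verify the variational inequality from the relation above.
ys = (v / max(1.0, np.linalg.norm(v)) for v in rng.normal(size=(1000, 3)))
worst = max(np.dot(x - u, y - u) for y in ys)
print(u, worst <= 1e-12)  # the worst inner product is (numerically) nonpositive
```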
Consequently, we obtain the following results which are deduced from Algorithm 2.
Theorem 7.
Assume that $\inf_n \rho_n(4 - \rho_n) > 0$ and $\lim_{n\to\infty} \theta_n = 0$. Choose $x_1 \in H_1$ and let $\{x_n\}$ be defined by
$x_{n+1} = P_C\left(x_n - \gamma_n g(x_n)\right),$
where
$\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}, \quad 0 < \rho_n < 4, \quad 0 < \theta_n < 1,$
and
$f(x_n) = \frac{1}{2}\left\|(I - P_Q)Ax_n\right\|^2, \quad g(x_n) = A^*(I - P_Q)Ax_n.$
Then, $\{x_n\}$ converges weakly to a solution in Ω.
By Theorem 1, we obtain the result of Byrne et al. [6].
Theorem 8.
Let $\{x_n\}$ be generated by
$x_{n+1} = P_C\left(x_n - \gamma A^*(I - P_Q)Ax_n\right), \quad n \in \mathbb{N},$
where $H_1$ and $H_2$ are Hilbert spaces, $A : H_1 \to H_2$ is a bounded and linear operator and $\gamma \in \left(0, \frac{2}{\|A\|^2}\right)$. Then, $\{x_n\}$ converges weakly to $x^* \in \Omega$.
Using Chuang’s results in Algorithm 1, we have
Theorem 9.
Let $H_1$ and $H_2$ be infinite dimensional Hilbert spaces and $A : H_1 \to H_2$ be a bounded and linear operator. Choose $\delta \in (0, 1)$ and $\{\gamma_n\} \subset \left(0, \frac{\delta}{\|A\|^2}\right)$ with $\inf_{n\in\mathbb{N}} \gamma_n > 0$. Choose $x_1 \in H_1$. For $n \in \mathbb{N}$, set $y_n$ as
$y_n = P_C\left(x_n - \gamma_n A^*(I - P_Q)Ax_n\right),$
where $\gamma_n > 0$ satisfies
$\gamma_n \left\|A^*(I - P_Q)Ax_n - A^*(I - P_Q)Ay_n\right\| \le \delta\|x_n - y_n\|, \quad 0 < \delta < 1.$
Construct $x_{n+1}$ by
$x_{n+1} = P_C\left(x_n - \alpha_n D(x_n, \gamma_n)\right),$
where
$D(x_n, \gamma_n) = x_n - y_n + \gamma_n\left(A^*(I - P_Q)Ay_n - A^*(I - P_Q)Ax_n\right)$
and
$\alpha_n = \frac{\langle x_n - y_n, D(x_n, \gamma_n)\rangle}{\|D(x_n, \gamma_n)\|^2}.$
Then, the sequence $\{x_n\}$ converges weakly to $x^* \in \Omega$.
From Algorithm 3 and Theorem 6, we have
Theorem 10.
Assume that $\{\alpha_n\}$, $\{\rho_n\}$ and $\{\theta_n\}$ satisfy the assumptions:
(a1) $\lim_{n\to\infty} \alpha_n = 0$ and $\sum_{n=1}^{\infty} \alpha_n = \infty$;
(a2) $\inf_n \rho_n(4 - \rho_n) > 0$;
(a3) $\lim_{n\to\infty} \theta_n = 0$.
Choose $x_1 \in H_1$ and define $\{x_n\}$ by
$x_{n+1} = \alpha_n u + (1 - \alpha_n)P_C\left(x_n - \gamma_n g(x_n)\right),$
where
$\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}, \quad 0 < \rho_n < 4, \quad 0 < \theta_n < 1, \quad 0 < \alpha_n < 1,$
and
$f(x_n) = \frac{1}{2}\left\|(I - P_Q)Ax_n\right\|^2, \quad g(x_n) = A^*(I - P_Q)Ax_n.$
Then, $\{x_n\}$ converges strongly to $z = P_\Omega u$.
We also have the following result.
Theorem 11.
Let $A : H_1 \to H_2$ be a bounded and linear operator. Let $\{a_n\}$, $\{b_n\}$, $\{c_n\}$, and $\{d_n\}$ be sequences of real numbers in $[0, 1]$ with $a_n + b_n + c_n + d_n = 1$ and $0 < a_n < 1$ for each $n \in \mathbb{N}$. Let $\{v_n\}$ be a bounded sequence in $H_1$. Let $u \in H_1$ be fixed and $\{\gamma_n\} \subset \left(0, \frac{2}{\|A\|^2 + 1}\right)$. Let $\{x_n\}$ be defined by
$x_{n+1} = a_n u + b_n x_n + c_n P_C\left(x_n - \gamma_n A^*(I - P_Q)Ax_n\right) + d_n v_n$
for each $n \in \mathbb{N}$. Suppose that
(i) $\lim_{n\to\infty} a_n = \lim_{n\to\infty} \frac{d_n}{a_n} = 0$; $\sum_{n=1}^{\infty} a_n = \infty$; $\sum_{n=1}^{\infty} d_n < \infty$;
(ii) $\liminf_{n\to\infty} c_n\gamma_n > 0$ and $\liminf_{n\to\infty} b_n c_n > 0$.
Then, $\lim_{n\to\infty} x_n = x^*$, where $x^* = P_\Omega u$; that is, $\{x_n\}$ converges strongly to a point in Ω.

7. Applications to Compressed Sensing

In signal processing, we consider the following linear equation:
$y = Ax + \varepsilon,$
where $x \in \mathbb{R}^N$ is a sparse vector that has m nonzero components, $y \in \mathbb{R}^M$ is the observed data with noise $\varepsilon$, and $A : \mathbb{R}^N \to \mathbb{R}^M$ $(M < N)$. It can be seen that Equation (73) relates to the LASSO problem [22]
$\min_{x\in\mathbb{R}^N} \frac{1}{2}\|y - Ax\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t,$
where $t > 0$. In particular, if $C = \{x \in \mathbb{R}^N : \|x\|_1 \le t\}$ and $Q = \{y\}$, then the LASSO problem can be considered as the SFP in Equation (53).
The vector $x \in \mathbb{R}^N$ is generated by the uniform distribution in $[-2, 2]$ with m nonzero components. Let A be an $M \times N$ matrix generated from the normal distribution with mean zero and variance one. The observed data y is generated with white Gaussian noise at a signal-to-noise ratio (SNR) of 40. The process is started with $t = m$ and the initial point $x_1 = 0$.
The stopping error is defined by
$E_n = \frac{1}{N}\|x_n - x\|_2^2 < \kappa,$
where $x_n$ is an estimated signal of x.
We give some numerical results of Theorems 7–9. Choose $\gamma_n = \frac{\rho_n f(x_n)}{\|g(x_n)\|^2 + \theta_n}$, $\rho_n = 3$, $\theta_n = \frac{1}{n^5}$ in Theorem 7, $\gamma_n = \frac{\delta}{2\|A\|^2}$ in Theorem 8, and $\delta = 0.8$, $\gamma_n = \frac{\delta}{2\|A\|^2}$ in Theorem 9.
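To make the setup reproducible in spirit, the following sketch is our illustration (not the authors' code) of the iteration of Theorem 7 for the LASSO formulation with $C = \{x : \|x\|_1 \le t\}$ and $Q = \{y\}$. The dimensions, the random seed, the noiseless measurements and the sort-based l1-ball projection routine are assumptions; for feasibility we take the radius t equal to the l1-norm of the true signal rather than t = m.

```python
import numpy as np

def proj_l1_ball(v, t):
    # Euclidean projection onto {x : ||x||_1 <= t} via the sort-based algorithm.
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.nonzero(u * np.arange(1, len(v) + 1) > css - t)[0][-1]
    tau = (css[k] - t) / (k + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
N, M, m = 512, 256, 10                    # signal length, measurements, sparsity
x_true = np.zeros(N)
x_true[rng.choice(N, m, replace=False)] = rng.uniform(-2, 2, m)
A = rng.normal(size=(M, N))               # entries from N(0, 1)
y = A @ x_true                            # noiseless measurements for simplicity

t, x, rho, kappa = np.abs(x_true).sum(), np.zeros(N), 3.0, 1e-4
for n in range(1, 5001):
    r = A @ x - y                         # (I - P_Q) A x_n with Q = {y}
    f = 0.5 * np.linalg.norm(r) ** 2
    g = A.T @ r
    gamma = rho * f / (np.linalg.norm(g) ** 2 + 1.0 / n ** 5)
    x = proj_l1_ball(x - gamma * g, t)    # x_{n+1} = P_C(x_n - gamma_n g(x_n))
    if np.linalg.norm(x - x_true) ** 2 / N < kappa:   # stopping error E_n < kappa
        break

print(n, np.linalg.norm(x - x_true) ** 2 / N)
```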
Table 3 and Table 4 show that both the number of iterations and the CPU time of our algorithm in Theorem 7 are smaller than those of the algorithms in Theorems 8 and 9. Next, we test numerical experiments in signal recovery in the cases $N = 512$, $M = 256$ and $N = 2048$, $M = 1024$, respectively.
Finally, we discuss the strong convergence of Theorems 10 and 11. We set $a_n = \frac{1}{n+1}$, $b_n = \frac{1}{5}$, $c_n = 1 - a_n - b_n$, $d_n = 0$ and $\gamma_n = \frac{1}{\|A\|^2 + 1}$ in Theorem 11, and set $\rho_n = 2$, $\theta_n = \frac{1}{n^5}$, $\alpha_n = \frac{1}{n+1}$ and $u = [1, 1, \dots, 1]$ in Theorem 10.
Table 5 and Table 6 show that our proposed algorithm in Theorem 10 has a better convergence behavior than the algorithm defined in Theorem 11 in iterations and CPU time.
We next provide some experiments in recovering the signal.
From Figure 1, Figure 2, Figure 3 and Figure 4, we observe that our algorithms can be applied to solve the LASSO problem. Moreover, the proposed algorithms have a better convergence behavior than other methods.

8. Conclusions

In the present work, we introduce a new approximation algorithm with a new stepsize based on a self-adaptive method for the SVIP. The stepsize does not require the Lipschitz constant or the norm of the operator in its computation. We establish the convergence analysis under suitable assumptions. The numerical results show the efficiency of our algorithms: in the experiments, our algorithms outperform those of Byrne et al. [6] and Chuang [10,11].

Author Contributions

S.S.: Funding acquisition and Supervision; S.K. and P.C.: Writing-review & editing and Software.

Funding

This research was funded by Chiang Mai University.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle 1970, 4, 154–158.
2. Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
3. Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
4. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239.
5. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
6. Byrne, C.; Censor, Y.; Gibali, A. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2011, 13, 759–775.
7. Censor, Y.; Bortfeld, T.; Martin, B. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
8. López, G.; Martín-Márquez, V.; Xu, H.K. Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems; Censor, Y., Jiang, M., Wang, G., Eds.; Medical Physics Publishing: Madison, WI, USA, 2010; pp. 243–279.
9. Stark, H. Image Recovery: Theory and Applications; Academic Press: San Diego, CA, USA, 1987.
10. Chuang, C.S. Algorithms with new parameter conditions for split variational inclusion problems in Hilbert spaces with application to split feasibility problem. Optimization 2016, 65, 859–876.
11. Chuang, C.S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013, 350.
12. Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; Volume 28.
13. Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3, 791–808.
14. Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
15. He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013.
16. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007.
17. Wang, F.; Xu, H.K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010, 102085.
18. Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
19. Zhao, J.; Zhang, Y.; Yang, Q. Modified projection methods for the split feasibility problem and multiple-sets feasibility problem. Appl. Math. Comput. 2012, 219, 1644–1653.
20. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: New York, NY, USA, 2009; Volume 6.
21. Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers, Inc.: Yokohama, Japan, 2009.
22. Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288.
Figure 1. From top to bottom: original signal, measured values, recovered signal by Theorem 8, Theorem 9 and Theorem 7 with N = 512, M = 256 and m = 10.
Figure 2. From top to bottom: original signal, measured values, recovered signal by Theorem 8, Theorem 9 and Theorem 7 with N = 2048, M = 1024 and m = 40.
Figure 3. From top to bottom: original signal, measured values, recovered signal by Theorem 11 and Theorem 10 with N = 512, M = 256 and m = 10.
Figure 4. From top to bottom: original signal, measured values, recovered signal by Theorem 11 and Theorem 10 with N = 2048, M = 1024 and m = 30.
Table 1. Comparison for Theorems 1–3 and 5 for each case.

                          ε = 10^-4             ε = 10^-5
        Method            CPU       Iter        CPU       Iter
Case 1  Theorem 1         0.1091    3657        0.2763    7688
        Theorem 2         0.0078    131         0.0778    1272
        Theorem 3         0.0452    777         0.0699    1186
        Theorem 5         0.0017    66          0.0023    86
Case 2  Theorem 1         0.1565    4645        0.5374    8388
        Theorem 2         0.0276    454         0.3143    4357
        Theorem 3         0.0368    609         0.0487    860
        Theorem 5         0.0011    39          0.0028    48
Case 3  Theorem 1         0.1390    4572        0.3172    8219
        Theorem 2         0.0280    471         0.2936    4510
        Theorem 3         0.0635    1048        0.0905    1541
        Theorem 5         0.0014    45          0.0016    55
Case 4  Theorem 1         0.1189    4069        0.2849    7668
        Theorem 2         0.0213    345         0.2092    3307
        Theorem 3         0.0686    1159        0.1046    1768
        Theorem 5         0.0011    34          0.0011    43
Table 2. Comparing results for Theorems 4 and 6 for each case.

                          ε = 10^-4             ε = 10^-5
        Method            CPU       Iter        CPU       Iter
Case 1  Theorem 4         0.0310    714         0.0871    2256
        Theorem 6         0.0068    273         0.0240    860
Case 2  Theorem 4         0.0225    685         0.0790    2166
        Theorem 6         0.0038    142         0.0117    448
Case 3  Theorem 4         0.0204    675         0.0727    2135
        Theorem 6         0.0043    156         0.0132    492
Case 4  Theorem 4         0.0241    671         0.0677    2120
        Theorem 6         0.0038    140         0.0156    441
Table 3. Numerical results for the LASSO problem in case M = 256, N = 512.

                          κ = 10^-3             κ = 10^-4
m-Sparse  Method          CPU       Iter        CPU       Iter
m = 10    Theorem 8       0.9662    44          3.6208    132
          Theorem 9       1.3204    58          4.2151    170
          Theorem 7       0.0054    26          0.0111    63
m = 15    Theorem 8       1.3082    57          2.8470    124
          Theorem 9       1.8984    84          3.7938    170
          Theorem 7       0.0058    36          0.0099    72
m = 20    Theorem 8       1.4928    65          3.5994    161
          Theorem 9       2.7294    122         5.7801    251
          Theorem 7       0.0070    42          0.0143    99
m = 25    Theorem 8       2.2008    98          6.0600    275
          Theorem 9       4.1730    183         18.6269   824
          Theorem 7       0.0107    67          0.0323    227
Table 4. Numerical results for the LASSO problem in case M = 1024, N = 2048.

                          κ = 10^-3             κ = 10^-4
m-Sparse  Method          CPU       Iter        CPU       Iter
m = 30    Theorem 8       47.6530   41          119.9776  101
          Theorem 9       67.6869   57          157.1087  134
          Theorem 7       0.0807    25          0.1899    58
m = 40    Theorem 8       47.7347   41          151.0891  117
          Theorem 9       93.1898   79          306.8623  240
          Theorem 7       0.1007    31          0.2880    82
m = 50    Theorem 8       65.1771   55          136.1508  115
          Theorem 9       99.0021   83          188.9366  158
          Theorem 7       0.1227    35          0.2203    67
m = 60    Theorem 8       76.7457   64          163.8805  138
          Theorem 9       127.5520  106         209.5990  177
          Theorem 7       0.1401    43          0.2449    75
Table 5. Numerical results for the LASSO problem in case M = 256, N = 512.

                          κ = 10^-3             κ = 10^-4
m-Sparse  Method          CPU       Iter        CPU       Iter
m = 10    Theorem 11      5.8869    237         28.2850   863
          Theorem 10      0.0296    157         0.1232    551
m = 15    Theorem 11      6.1204    245         38.9049   1561
          Theorem 10      0.0260    155         0.1550    950
m = 20    Theorem 11      9.3238    376         116.7590  4613
          Theorem 10      0.0377    233         0.4484    2730
m = 25    Theorem 11      9.3206    379         32.8208   1255
          Theorem 10      0.0420    252         0.1578    858
Table 6. Numerical results for the LASSO problem in case M = 1024, N = 2048.

                          κ = 10^-3             κ = 10^-4
m-Sparse  Method          CPU       Iter        CPU            Iter
m = 10    Theorem 11      131.3365  111         578.6894       490
          Theorem 10      0.2419    74          0.9969         305
m = 20    Theorem 11      184.8031  157         616.7051       526
          Theorem 10      0.3274    101         1.0929         339
m = 30    Theorem 11      262.3976  224         1.3220 × 10^3  503
          Theorem 10      0.4633    141         1.4516         339
m = 40    Theorem 11      282.6013  237         1.6136 × 10^3  1326
          Theorem 10      0.5393    158         2.6758         791
