Abstract
We investigate the split variational inclusion problem in Hilbert spaces. We propose efficient algorithms in which, in each iteration, the stepsize is chosen self-adaptively, and we prove weak and strong convergence theorems. We provide numerical experiments to validate the theoretical results for solving the split variational inclusion problem, together with comparisons to the algorithms of Byrne et al. and Chuang, respectively. The numerical experiments show that the proposed algorithms outperform the other algorithms. As applications, we apply our method to compressed sensing in signal recovery. A main advantage of the proposed methods is that the sequences are generated without computing the Lipschitz constant of the gradient of the functions involved.
1. Introduction
Let $H$ be a real Hilbert space. A set-valued mapping $B : H \to 2^H$ is called monotone if $\langle x - y, u - v \rangle \ge 0$ for each $x, y \in H$, $u \in Bx$, and $v \in By$. Moreover, $B$ is maximal monotone provided its graph is not properly included in the graph of any other monotone mapping. Many problems in optimization can be reduced to finding $x \in H$ such that $0 \in Bx$. Martinet [] and Rockafellar [] suggested the proximal method for solving this problem. They construct the sequence $\{x_n\}$ by choosing $x_1 \in H$ and putting
$$x_{n+1} = J_{\lambda_n} x_n, \quad n \ge 1, \qquad (1)$$
where $\lambda_n > 0$, $B$ is a set-valued maximal monotone operator, and the resolvent $J_\lambda$ is defined by $J_\lambda = (I + \lambda B)^{-1}$ for each $\lambda > 0$. We see that Equation (1) is equivalent to $\frac{x_n - x_{n+1}}{\lambda_n} \in B x_{n+1}$, $n \ge 1$.
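To make Equation (1) concrete, here is a minimal Python sketch of the proximal point iteration for the one-dimensional operator $B = \partial|\cdot|$, whose resolvent is the soft-thresholding map; the choice of $B$, the fixed $\lambda_n \equiv 1$, and all names are illustrative assumptions, not taken from the cited papers.

```python
import numpy as np

def prox_abs(x, lam):
    # Resolvent J_lam = (I + lam * B)^(-1) for B = subdifferential of |.|
    # is the soft-thresholding map: sign(x) * max(|x| - lam, 0).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def proximal_point(x1, lam=1.0, n_iter=50):
    # Proximal method: x_{n+1} = J_lam(x_n); the iterates approach a zero
    # of B (here the unique zero is x = 0).
    x = x1
    for _ in range(n_iter):
        x = prox_abs(x, lam)
    return x

print(proximal_point(5.0))  # reaches 0.0 after a few steps
```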
The split variational inclusion problem (SVIP) was first investigated by Moudafi []. The problem consists of finding $x^* \in H_1$ such that
$$0 \in B_1(x^*) \quad \text{and} \quad 0 \in B_2(Ax^*),$$
where $H_1$ and $H_2$ are real Hilbert spaces, and $B_1$ and $B_2$ are set-valued mappings on $H_1$ and $H_2$, respectively. In addition, $A : H_1 \to H_2$ is a bounded linear operator and $A^*$ is the adjoint of $A$. We know that the SVIP is a generalization of the split feasibility problem that was investigated by Censor and Elfving [] in Euclidean spaces. See [,,,,,]. In this paper, we denote by Ω the solution set of the SVIP. Suppose that Ω is nonempty.
In 2011, Byrne et al. [] established a weak convergence theorem for SVIP as follows:
Theorem 1.
Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ a bounded linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators. Let $\lambda > 0$ and $\gamma \in (0, 2/\|A\|^2)$. Let $\{x_n\}$ be generated by
$$x_{n+1} = J_\lambda^{B_1}\big(x_n - \gamma A^*(I - J_\lambda^{B_2})Ax_n\big), \quad n \ge 1.$$
Then, $\{x_n\}$ converges weakly to a point in Ω.
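The scheme in Theorem 1 translates directly into code. The following Python sketch assumes the resolvents $J_\lambda^{B_1}$ and $J_\lambda^{B_2}$ are supplied as callables (`J_B1`, `J_B2` are our placeholder names); note that the fixed stepsize requires the operator norm $\|A\|$, which is precisely what the self-adaptive rule studied later avoids.

```python
import numpy as np

def byrne_svip(x1, A, J_B1, J_B2, n_iter=200):
    # Byrne et al.-type iteration for the SVIP:
    #   x_{n+1} = J^{B1}( x_n - gamma * A^T (I - J^{B2}) A x_n ),
    # with a fixed stepsize gamma in (0, 2 / ||A||^2).
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2  # needs ||A|| up front
    x = x1
    for _ in range(n_iter):
        Ax = A @ x
        x = J_B1(x - gamma * A.T @ (Ax - J_B2(Ax)))
    return x
```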
In 2015, Chuang [] introduced another iteration for the SVIP in Hilbert spaces and established its convergence as follows:
Theorem 2.
Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ a bounded linear operator. Let $B_1$ and $B_2$ be set-valued maximal monotone operators. Choose and let and and assume that
If $H_1$ is finite dimensional, then $\{x_n\}$ converges to a point in Ω.
Chuang [] also provided the following result.
Theorem 3.
Let $H_1$ and $H_2$ be infinite dimensional Hilbert spaces and $A : H_1 \to H_2$ a bounded linear operator. Let $B_1$ and $B_2$ be set-valued maximal monotone mappings. Choose and let , and with . Then, $\{x_n\}$ converges weakly to a point in Ω.
In 2013, Chuang [] proved a strong convergence theorem for the SVIP using the following algorithm.
| Algorithm 1: |
| []. For , set as where is chosen such that The iterative sequence is generated by where and |
Theorem 4.
Let $H_1$ and $H_2$ be two real Hilbert spaces, $A : H_1 \to H_2$ a bounded linear operator. Let $B_1$ and $B_2$ be two set-valued maximal monotone operators. Let , , , and be sequences of real numbers in with and for each . Let and let . Let be a bounded sequence in . Fix $u \in H_1$ and let the sequence be generated by
for each . Suppose that
- (i)
- ; ; ;
- (ii)
- , , .
Then, $\{x_n\}$ converges strongly to $\bar{x} \in \Omega$, where $\bar{x} = P_\Omega u$ is nearest to $u$.
We aim to find approximation algorithms with a new stepsize that is self-adaptive (see López et al. []) for solving our SVIP and to prove their convergence. We present numerical examples and comparisons to the algorithms of Byrne et al. [] and of Chuang [,]. We also obtain a result for the split feasibility problem (SFP) and its applications to compressed sensing in signal recovery. The experiments reveal that our methods converge better than those of Byrne et al. [] and Chuang [,].
2. Preliminaries
We next provide some basic concepts for our proof. In what follows, we shall use the following symbols:
- ⇀ stands for the weak convergence,
- → stands for the strong convergence.
Recall that a mapping $T : H \to H$ is called
- (1)
- nonexpansive if, for all $x, y \in H$, $\|Tx - Ty\| \le \|x - y\|$;
- (2)
- firmly nonexpansive if, for all $x, y \in H$, $\|Tx - Ty\|^2 \le \langle x - y, Tx - Ty \rangle$.
It is clear that $I - T$ is also firmly nonexpansive when $T$ is firmly nonexpansive. We know that, for each $x, y \in H$,
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y \rangle$$
and
$$\|tx + (1 - t)y\|^2 = t\|x\|^2 + (1 - t)\|y\|^2 - t(1 - t)\|x - y\|^2$$
for all $t \in [0, 1]$ and for all $x, y \in H$.
The following lemma can be found in [].
Lemma 1.
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $T : C \to C$ be a nonexpansive mapping. If $x_n \rightharpoonup x$ and $x_n - Tx_n \to 0$, then $x = Tx$.
We denote by $F(T)$ the fixed point set of a mapping $T$, that is, $F(T) = \{x : Tx = x\}$, and by $D(T)$ the domain of $T$.
The following lemma can be found in [,].
Lemma 2.
Let H be a real Hilbert space and let $B : H \to 2^H$ be a maximal monotone operator. Then,
- (i)
- $J_\lambda^B$ is single-valued and firmly nonexpansive for each $\lambda > 0$;
- (ii)
- $D(J_\lambda^B) = H$ and $F(J_\lambda^B) = B^{-1}(0) = \{x \in H : 0 \in Bx\}$;
- (iii)
- for all and for all ;
- (iv)
- If , then we have for all , each , and each ;
- (v)
- If , then we have for all , each , and each .
Lemma 3.
Let $H_1$ and $H_2$ be real Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be maximal monotone operators. Let $\lambda, \gamma > 0$.
- (i)
- If $x^*$ is a solution of (SVIP), then $J_\lambda^{B_1}\big(x^* - \gamma A^*(I - J_\lambda^{B_2})Ax^*\big) = x^*$.
- (ii)
- Suppose that $x^* = J_\lambda^{B_1}\big(x^* - \gamma A^*(I - J_\lambda^{B_2})Ax^*\big)$ and the solution set of (SVIP) is nonempty. Then, $x^*$ is a solution of (SVIP).
Lemma 4.
Let $H_1$ and $H_2$ be real Hilbert spaces. Let $A : H_1 \to H_2$ be a bounded linear operator and $\lambda > 0$. Let $B : H_2 \to 2^{H_2}$ be a maximal monotone operator. Define a mapping $T : H_1 \to H_1$ by $Tx = A^*(I - J_\lambda^{B})Ax$ for each $x \in H_1$. Then,
- (i)
- for all ;
- (ii)
- for all .
The following lemma can be found in [].
Lemma 5.
Let C be a nonempty subset of a Hilbert space H. Let $\{x_n\}$ be a sequence in H that satisfies the following assumptions:
- (i)
- $\lim_{n \to \infty} \|x_n - c\|$ exists for each $c \in C$;
- (ii)
- every sequential weak limit point of $\{x_n\}$ is in C.
Then, $\{x_n\}$ weakly converges to a point in C.
The following lemma can be found in [].
Lemma 6.
Assume $\{s_n\} \subset [0, \infty)$ satisfies
$$s_{n+1} \le (1 - \alpha_n)s_n + \alpha_n \delta_n \quad \text{and} \quad s_{n+1} \le s_n - \eta_n + \gamma_n,$$
where $\{\alpha_n\} \subset (0, 1)$, $\{\eta_n\} \subset [0, \infty)$, and $\{\delta_n\}$ and $\{\gamma_n\}$ are real sequences such that
- (i)
- $\sum_{n=1}^{\infty} \alpha_n = \infty$;
- (ii)
- $\lim_{n \to \infty} \gamma_n = 0$;
- (iii)
- $\lim_{k \to \infty} \eta_{n_k} = 0$ implies $\limsup_{k \to \infty} \delta_{n_k} \le 0$ for any subsequence $\{n_k\}$ of $\{n\}$.
Then, $\lim_{n \to \infty} s_n = 0$.
3. Weak Convergence Result
Let $H_1$ and $H_2$ be real Hilbert spaces and $A : H_1 \to H_2$ a bounded linear operator. Let $B_1 : H_1 \to 2^{H_1}$ and $B_2 : H_2 \to 2^{H_2}$ be set-valued maximal monotone operators.
Let Ω be the solution set of problem (SVIP) and assume that $\Omega \neq \emptyset$. We remark that the stepsize sequence in our algorithms does not depend on the norm of the operator $A$, unlike the stepsizes introduced by Byrne et al. [] and Chuang [,].
Theorem 5.
Suppose that , and . Then, the sequence $\{x_n\}$ defined by Algorithm 2 converges weakly to a solution in Ω.
| Algorithm 2: |
| Choose and define where and |
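The displayed formulas of Algorithm 2 did not survive reproduction here, so the following Python sketch records one plausible reading based on the López et al.-type self-adaptive stepsize that the paper cites: minimize $g(x) = \tfrac12\|(I - J_\lambda^{B_2})Ax\|^2$ with stepsize $\tau_n = \rho_n\, g(x_n)/\|\nabla g(x_n)\|^2$. The constant `rho` and all function names are our assumptions.

```python
import numpy as np

def adaptive_svip(x1, A, J_B1, J_B2, rho=2.0, n_iter=200, tol=1e-12):
    # Assumed self-adaptive scheme in the spirit of Algorithm 2:
    #   g(x)      = 0.5 * || (I - J^{B2}) A x ||^2
    #   grad g(x) = A^T (I - J^{B2}) A x
    #   tau_n     = rho * g(x_n) / ||grad g(x_n)||^2
    #   x_{n+1}   = J^{B1}( x_n - tau_n * grad g(x_n) )
    # Neither ||A|| nor any Lipschitz constant enters the computation.
    x = x1
    for _ in range(n_iter):
        Ax = A @ x
        r = Ax - J_B2(Ax)          # residual (I - J^{B2}) A x
        grad = A.T @ r
        gn2 = grad @ grad
        if gn2 < tol:              # residual vanished: x already solves the SVIP
            break
        tau = rho * (0.5 * (r @ r)) / gn2
        x = J_B1(x - tau * grad)
    return x
```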
Proof.
Let $z \in \Omega$. Then, $0 \in B_1(z)$ and $0 \in B_2(Az)$. Thus, we have $J_\lambda^{B_1} z = z$ and $J_\lambda^{B_2}(Az) = Az$. Using Lemma 4 (i), we have
From Equation (20), Lemma 2 (iv) and the defining formulas for Algorithm 2
This implies that, since ,
Thus, $\lim_{n \to \infty} \|x_n - z\|$ exists. It follows that $\{x_n\}$ is bounded. Again, by Equation (21), we get
which yields by our assumptions that
By Lemma 3 (ii), it can be checked that g is a Lipschitzian mapping and thus $\{g(x_n)\}$ is bounded. Hence, we get . This means
Furthermore, by Equation (21), we also have
We note that
From Equation (25) and Lemma 2 (iii), we get
for some such that for all . From Equation (27), we see that
Lemma 2 (iii) gives
4. Strong Convergence Result
Theorem 6.
Assume that , and satisfy the assumptions:
- (a1)
- and ;
- (a2)
- ;
- (a3)
- ;
- (a4)
- .
Then, the sequence $\{x_n\}$ defined by Algorithm 3 converges strongly to $\bar{x} = P_\Omega u$, the point of Ω closest to $u$.
| Algorithm 3: |
| Choose and let . Let be a real sequence in . Let be iteratively generated by where and |
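As with Algorithm 2, the formulas of Algorithm 3 are lost here, so this Python sketch is our assumed Halpern-type reading: the same self-adaptive step followed by averaging with the anchor $u$, with a diminishing, non-summable weight standing in for the sequence in (a1).

```python
def adaptive_svip_strong(x1, u, A, J_B1, J_B2, rho=2.0, n_iter=200):
    # Assumed strong-convergence variant in the spirit of Algorithm 3:
    #   y_n     = J^{B1}( x_n - tau_n * A^T (I - J^{B2}) A x_n )
    #   x_{n+1} = alpha_n * u + (1 - alpha_n) * y_n
    # with tau_n the self-adaptive stepsize of Algorithm 2 and
    # alpha_n -> 0, sum(alpha_n) = infinity (e.g. alpha_n = 1/(n+2)).
    x = x1
    for n in range(n_iter):
        Ax = A @ x
        r = Ax - J_B2(Ax)
        grad = A.T @ r
        gn2 = grad @ grad
        tau = 0.0 if gn2 < 1e-12 else rho * (0.5 * (r @ r)) / gn2
        y = J_B1(x - tau * grad)
        alpha = 1.0 / (n + 2)      # diminishing, non-summable anchor weight
        x = alpha * u + (1 - alpha) * y
    return x
```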
Proof.
Set . Using the same line of proof as for Theorem 5, we have
Then,
Next, we will show that $\{x_n\}$ is bounded. Again, using Equation (36),
Thus, is bounded. Employing Lemma 6, from Equation (38), we set
Let be such that
Then, we have
which, by using our assumptions, implies
and
Since is bounded, it follows that as . Thus, we get
By the same argument as in the proof of Theorem 5, we can show that there is a subsequence of such that . From Lemma 2 (v), we obtain
We see that
Hence, we get
Thus, $\{x_n\}$ converges strongly to $\bar{x}$ by Lemma 6. □
5. Numerical Experiments
We present numerical experiments for our main results.
First, we give a comparison among Theorems 1–3 and 5 for the weak convergence case.
The following example is introduced in [].
Example 1.
Let and be
We aim to find such that and . In this case, we know that and .
We set in Theorem 1, in Theorem 2, in Theorem 3 and , in Theorem 5. The stopping criterion is given by .
We test by the following cases:
- Case 1:
- , , , and
- Case 2:
- , , , and
- Case 3:
- , , , and
- Case 4:
- , , , and .
From Table 1, we see that Theorem 5 using Algorithm 2 has a better convergence rate than other algorithms.
Table 1.
Comparison for Theorems 1–3 and 5 for each case.
Second, we give a comparison between Theorems 4 and 6 for the strong convergence case by using Example 1.
Choose , , , and in Theorem 4 and set , and in Theorem 6. In this case, we let .
We test by the following cases:
- Case 1:
- , and
- Case 2:
- , and
- Case 3:
- , and
- Case 4:
- , and .
From Table 2, we observe that, in each case, the convergence behavior of Theorem 4 is worse than that of Theorem 6.
Table 2.
Comparing results for Theorems 4 and 6 for each case.
6. Split Feasibility Problem
Let $H_1$ and $H_2$ be real Hilbert spaces. We next study the split feasibility problem (SFP), which is to seek $x^* \in H_1$ such that
$$x^* \in C \quad \text{and} \quad Ax^* \in Q,$$
where C and Q are nonempty closed convex subsets of $H_1$ and $H_2$, respectively, and $A : H_1 \to H_2$ is a bounded linear operator with adjoint operator $A^*$. Many authors have introduced various algorithms for solving the SFP [,,,].
Let H be a Hilbert space and let $g : H \to (-\infty, \infty]$ be a proper, lower semicontinuous and convex function. The subdifferential of g is defined by
$$\partial g(u) = \{z \in H : g(u) + \langle z, v - u \rangle \le g(v) \text{ for all } v \in H\}$$
for all $u \in H$. Let C be a nonempty closed convex subset of H, and let $i_C$ be the indicator function of C defined by
$$i_C(u) = \begin{cases} 0, & u \in C, \\ +\infty, & u \notin C. \end{cases}$$
The normal cone of C at u is defined by
$$N_C(u) = \{z \in H : \langle z, v - u \rangle \le 0 \text{ for all } v \in C\}.$$
Then, $i_C$ is a proper, lower semicontinuous and convex function on H. See [,]. Moreover, the subdifferential $\partial i_C$ of $i_C$ is a maximal monotone mapping. In this connection, we can define the resolvent $J_\lambda^{\partial i_C}$ of $\partial i_C$ for $\lambda > 0$ by
$$J_\lambda^{\partial i_C}(x) = (I + \lambda \partial i_C)^{-1}(x)$$
for all $x \in H$. Hence, we see that
$$\partial i_C(u) = N_C(u)$$
for all $u \in C$. Hence, for each $x \in H$ and $\lambda > 0$, we obtain the following relation:
$$J_\lambda^{\partial i_C}(x) = P_C(x).$$
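For completeness, the chain of equivalences behind this last relation is the standard one (spelled out here, not reproduced verbatim from the paper):

```latex
u = J_{\lambda}^{\partial i_C}(x)
  \iff x \in u + \lambda\,\partial i_C(u)
  \iff x - u \in \lambda N_C(u)
  \iff \langle x - u,\, v - u \rangle \le 0 \ \text{ for all } v \in C
  \iff u = P_C(x).
```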
Consequently, we obtain the following result, which is deduced from Algorithm 2.
Theorem 7.
Assume that and . Choose and let be defined by
where
and
Then, $\{x_n\}$ converges weakly to a solution in Ω.
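As an illustration, the Algorithm 2 sketch from Section 3 specializes to this SFP setting by replacing the resolvents with metric projections. The sets below (a unit ball and a box) are hypothetical examples chosen only because their projections are explicit:

```python
import numpy as np

def P_C(x):
    # Projection onto the closed unit ball (hypothetical choice of C).
    nrm = np.linalg.norm(x)
    return x if nrm <= 1.0 else x / nrm

def P_Q(y):
    # Projection onto the box [0, 1]^m (hypothetical choice of Q).
    return np.clip(y, 0.0, 1.0)

A = np.array([[1.0, 2.0],
              [0.0, 1.0]])
# Reuse the adaptive_svip sketch from Section 3 with J_B1 = P_C, J_B2 = P_Q.
x_sol = adaptive_svip(np.array([5.0, -3.0]), A, P_C, P_Q)
```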
By Theorem 1, we obtain the result of Byrne et al. [].
Theorem 8.
Let $\{x_n\}$ be generated by
$$x_{n+1} = P_C\big(x_n - \gamma A^*(I - P_Q)Ax_n\big), \quad n \ge 1,$$
where $H_1$ and $H_2$ are Hilbert spaces, $A : H_1 \to H_2$ is a bounded linear operator, and $\gamma \in (0, 2/\|A\|^2)$. Then, $\{x_n\}$ converges weakly to a point in Ω.
Using Chuang’s results in Algorithm 1, we have
Theorem 9.
Let $H_1$ and $H_2$ be infinite dimensional Hilbert spaces and $A : H_1 \to H_2$ a bounded linear operator. Choose and with . Choose . For , set as
where satisfies
Construct by
where
and
Then, the sequence $\{x_n\}$ converges weakly to a point in Ω.
From Algorithm 3 and Theorem 6, we have
Theorem 10.
Assume that , and satisfy the assumptions:
- (a1)
- and ;
- (a2)
- ;
- (a3)
- .
Choose and define by
where
and
Then, $\{x_n\}$ converges strongly to a point in Ω.
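Likewise, the Algorithm 3 sketch from Section 4 specializes to Theorem 10 by the same substitution; continuing the example above:

```python
import numpy as np

# Anchor u = 0: the iterates should tend to the solution of the SFP nearest u.
x_strong = adaptive_svip_strong(np.array([5.0, -3.0]), np.zeros(2), A, P_C, P_Q)
```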
We also have the following result.
Theorem 11.
Let , , , and be sequences of real numbers in with and for each . Let be a bounded sequence in . Let be fixed and . Let be defined by
for each . Suppose that
- (i)
- ; ; ;
- (ii)
- and
Then, $\{x_n\}$ converges strongly to a point in Ω, where $A : H_1 \to H_2$ is a bounded linear operator.
7. Applications to Compressed Sensing
In signal processing, we consider the following linear equation:
$$y = Ax + \varepsilon, \qquad (73)$$
where $x \in \mathbb{R}^n$ is a sparse vector that has m nonzero components, $y \in \mathbb{R}^M$ is the observed data contaminated by noise $\varepsilon$, and $A$ is an $M \times n$ matrix. It can be seen that Equation (73) relates to the LASSO problem []
$$\min_{x \in \mathbb{R}^n} \frac{1}{2}\|Ax - y\|_2^2 \quad \text{subject to} \quad \|x\|_1 \le t,$$
where $t > 0$. In particular, if $C = \{x \in \mathbb{R}^n : \|x\|_1 \le t\}$ and $Q = \{y\}$, then the LASSO problem can be considered as the SFP Equation (53).
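Running the above schemes on this formulation requires the projection onto the $\ell_1$-ball $C$. The paper does not spell this out, so here is the standard sort-and-threshold routine as a Python sketch:

```python
import numpy as np

def project_l1_ball(v, t):
    # Euclidean projection onto {x : ||x||_1 <= t}, t > 0, by the standard
    # sort-and-threshold construction (a known routine, not from the paper).
    if np.abs(v).sum() <= t:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # |v| sorted in decreasing order
    css = np.cumsum(u)
    j = np.nonzero(u * np.arange(1, len(v) + 1) > css - t)[0][-1]
    theta = (css[j] - t) / (j + 1.0)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```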
The vector x is generated by the uniform distribution with m nonzero components. Let A be an $M \times n$ matrix generated by the normal distribution with mean zero and variance one. The observed data y is contaminated by white Gaussian noise with a signal-to-noise ratio (SNR) of 40. The process is started with and initial point .
The stopping error is defined by
where is an estimated signal of x.
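The experiment setup described above can be reproduced along the following lines; the uniform interval $[-1, 1]$, the 40 dB reading of the SNR figure, and the mean-squared-error reading of the stopping criterion are our assumptions where the original values did not survive:

```python
import numpy as np

def make_instance(n, m_nnz, M, snr_db=40.0, seed=0):
    # m_nnz-sparse x with uniform nonzeros in [-1, 1] (assumed interval),
    # M x n standard Gaussian A, and y = A x + Gaussian noise at snr_db.
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    x[rng.choice(n, m_nnz, replace=False)] = rng.uniform(-1.0, 1.0, m_nnz)
    A = rng.standard_normal((M, n))
    clean = A @ x
    noise = rng.standard_normal(M)
    noise *= np.linalg.norm(clean) / (np.linalg.norm(noise) * 10.0 ** (snr_db / 20.0))
    return A, x, clean + noise

def mse(x_est, x):
    # One common reading of the stopping error: (1/n) * ||x_est - x||^2.
    return float(np.mean((x_est - x) ** 2))
```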
We give some numerical results of Theorems 7–9. Choose , , in Theorem 7 and in Theorem 8 and , in Theorem 9.
Table 3 and Table 4 show that the algorithm of Theorem 7 requires fewer iterations and less CPU time than the algorithms of Theorems 8 and 9. Next, we test signal recovery experiments in the cases , and , , respectively.
Table 3.
Numerical results for the LASSO problem in case , .
Table 4.
Numerical results for the LASSO problem in case , .
Finally, we discuss the strong convergence of Theorems 10 and 11. We set , , , and in Theorem 11 and set , , and in Theorem 10.
Table 5 and Table 6 show that our proposed algorithm in Theorem 10 has better convergence behavior than the algorithm defined in Theorem 11 in both iterations and CPU time.
Table 5.
Numerical results for the LASSO problem in case , .
Table 6.
Numerical results for the LASSO problem in case , .
We next provide some experiments in recovering the signal.
From Figure 1, Figure 2, Figure 3 and Figure 4, we observe that our algorithms can be applied to solve the LASSO problem. Moreover, the proposed algorithms have a better convergence behavior than other methods.
Figure 1.
From top to bottom: original signal, measured values, recovered signal by Theorem 8, Theorem 9 and Theorem 7 with , and .
Figure 2.
From top to bottom: original signal, measured values, recovered signal by Theorem 8, Theorem 9 and Theorem 7 with , and .
Figure 3.
From top to bottom: original signal, measured values, recovered signal by Theorem 11 and Theorem 10 with , and .
Figure 4.
From top to bottom: original signal, measured values, recovered signal by Theorem 11 and Theorem 10 with , and .
8. Conclusions
In the present work, we introduce a new approximation algorithm with a new stepsize based on the self-adaptive method for the SVIP. The stepsize is computed without using the Lipschitz constant or the norm of the operator. We prove its convergence under suitable assumptions. The numerical results show the efficiency of our algorithms: in our experiments, they outperform the algorithms of Byrne et al. [] and Chuang [,].
Author Contributions
S.S.; Funding acquisition and Supervision, S.K. and P.C.; Writing-review & editing and Software.
Funding
This research was funded by Chiang Mai University.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Martinet, B. Régularisation d'inéquations variationnelles par approximations successives. Revue Française d'Informatique et de Recherche Opérationnelle 1970, 4, 154–158.
- Rockafellar, R.T. Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 1976, 14, 877–898.
- Moudafi, A. Split monotone variational inclusions. J. Optim. Theory Appl. 2011, 150, 275–283.
- Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algor. 1994, 8, 221–239.
- Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
- Byrne, C.; Censor, Y.; Gibali, A. Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 2011, 13, 759–775.
- Censor, Y.; Bortfeld, T.; Martin, B. A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 2006, 51, 2353–2365.
- López, G.; Martín-Márquez, V.; Xu, H.K. Iterative algorithms for the multiple-sets split feasibility problem. In Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems; Censor, Y., Jiang, M., Wang, G., Eds.; Medical Physics Publishing: Madison, WI, USA, 2010; pp. 243–279.
- Stark, H. Image Recovery: Theory and Applications; Academic Press: San Diego, CA, USA, 1987.
- Chuang, C.S. Algorithms with new parameter conditions for split variational inclusion problems in Hilbert spaces with application to split feasibility problem. Optimization 2016, 65, 859–876.
- Chuang, C.S. Strong convergence theorems for the split variational inclusion problem in Hilbert spaces. Fixed Point Theory Appl. 2013, 2013, 350.
- Goebel, K.; Kirk, W.A. Topics in Metric Fixed Point Theory; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 1990; Volume 28.
- Marino, G.; Xu, H.K. Convergence of generalized proximal point algorithms. Commun. Pure Appl. Anal. 2004, 3, 791–808.
- Opial, Z. Weak convergence of the sequence of successive approximations for nonexpansive mappings. Bull. Am. Math. Soc. 1967, 73, 591–597.
- He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013.
- Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007.
- Wang, F.; Xu, H.K. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 2010, 102085.
- Xu, H.K. Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018.
- Zhao, J.; Zhang, Y.; Yang, Q. Modified projection methods for the split feasibility problem and multiple-sets feasibility problem. Appl. Math. Comput. 2012, 219, 1644–1653.
- Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Springer: New York, NY, USA, 2009; Volume 6.
- Takahashi, W. Introduction to Nonlinear and Convex Analysis; Yokohama Publishers: Yokohama, Japan, 2009.
- Tibshirani, R. Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 1996, 58, 267–288.