Abstract
In this paper, we propose a new method, called the combination projection method (CPM), for solving the convex feasibility problem (CFP) of finding some $x^* \in C := \bigcap_{i=1}^m C_i$, where $m$ is a positive integer, $\mathcal{H}$ is a real Hilbert space, and the sets $C_i$ are level sets of convex functions $c_i : \mathcal{H} \to \mathbb{R}$ defined as $C_i = \{x \in \mathcal{H} : c_i(x) \le 0\}$, $i = 1, \dots, m$. The key idea of the CPM is that, for the current iterate $x_k$, the CPM first constructs a new level set $C_k$ through a convex combination of some of the $c_i$ in an appropriate way, and then updates the new iterate using only the projection $P_{C_k}$, i.e., $x_{k+1} = P_{C_k}(x_k)$. We also introduce the combination relaxation projection method (CRPM), which projects onto half-spaces, to make the CPM easily implementable. Simplicity and ease of implementation are two advantages of our methods, since only one projection is used in each iteration and this projection is easy to calculate. Weak convergence theorems are proved, and numerical results show the advantages of our methods.
Keywords:
convex feasibility problem; projection operator; combination projection method; Hilbert space
MSC:
47J20; 90C25; 90C30; 90C52
1. Introduction
Let $\mathcal{H}$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and norm $\|\cdot\|$, respectively. Recall that the projection operator $P_D$ of a nonempty closed convex subset $D$ of $\mathcal{H}$ is defined by
$$P_D(x) := \arg\min_{y \in D} \|x - y\|, \qquad x \in \mathcal{H}.$$
Projections are indispensable tools for solving convex feasibility problems (CFP) [1,2,3], split feasibility problems (SFP) [4,5], and variational inequality problems (VIP) [6,7].
If the set $D$ is simple, such as a hyperplane or a half-space, the projection onto $D$ can be calculated explicitly. However, it is well known that, in general, $D$ is complex and $P_D$ has no closed-form formula, so the computation of $P_D$ is rather difficult [8]. How to compute $P_D$ efficiently is therefore an important and interesting problem. Fukushima [9] suggested the half-space relaxation projection method, and this idea was followed by many authors to introduce relaxed projection algorithms for solving the SFP [10,11] and the VIP [12,13].
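For the simple cases just mentioned, the projections can be written down directly. The following is a minimal NumPy sketch of the standard closed-form projections onto a half-space and a closed ball (the function names are ours and serve only as illustration):

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection of x onto the half-space {y : <a, y> <= b}."""
    violation = np.dot(a, x) - b
    if violation <= 0:                    # x already lies in the half-space
        return x
    return x - (violation / np.dot(a, a)) * a

def project_ball(x, center, radius):
    """Projection of x onto the closed ball {y : ||y - center|| <= radius}."""
    d = x - center
    norm_d = np.linalg.norm(d)
    if norm_d <= radius:
        return x
    return center + (radius / norm_d) * d
```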
Let $m$ be a positive integer and $\{C_i\}_{i=1}^m$ be a finite family of nonempty closed convex subsets of $\mathcal{H}$ with a nonempty intersection. The convex feasibility problem [14] is to find
$$x^* \in C := \bigcap_{i=1}^m C_i, \qquad (1)$$
which is a very common problem in diverse areas of mathematics and the physical sciences [15]. In the last twenty years, there has been growing interest in the CFP since it was found to have various applications in imaging science [16,17], medical treatment [18], and statistics [19].
A great deal of literature on methods for solving the CFP has been published (e.g., [20,21,22,23]). The classical method traces back at least to the alternating projection method introduced by von Neumann [14] in the 1930s, which is called the successive orthogonal projection method (SOP) in Reference [24]. The SOP in Reference [14] solves the CFP with $C = C_1 \cap C_2$, where $C_1$ and $C_2$ are two closed subspaces of $\mathcal{H}$, and generates a sequence $\{x_k\}$ via the iterative process
$$x_{k+1} = P_{C_2}\big(P_{C_1}(x_k)\big), \qquad k \ge 0, \qquad (2)$$
where $x_0 \in \mathcal{H}$ is an arbitrary initial guess. Von Neumann [14] proved that the sequence $\{x_k\}$ converges strongly to $P_{C_1 \cap C_2}(x_0)$. In 1965, Bregman [2] extended von Neumann's results to the case where $C_1$ and $C_2$ are closed convex subsets and proved weak convergence. Hundal [25] showed that, for two closed convex subsets $C_1$ and $C_2$, the SOP does not always converge in norm by providing an explicit counterexample. Further results on the SOP were obtained by Gubin et al. [26] and Bruck et al. [27].
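A minimal Python sketch of the SOP iteration (2) for two sets is given below; the projection routines are supplied by the caller, and the helper names and the stopping rule are ours:

```python
import numpy as np

def alternating_projections(x0, proj_C1, proj_C2, max_iter=1000, tol=1e-10):
    """von Neumann's alternating projection scheme x_{k+1} = P_{C2}(P_{C1}(x_k))."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_new = proj_C2(proj_C1(x))
        if np.linalg.norm(x_new - x) < tol:   # successive iterates nearly coincide
            return x_new
        x = x_new
    return x

# Example with two simple sets in R^2:
# C1 = {x : x[0] >= 0} and C2 = the closed unit ball.
proj_C1 = lambda x: np.array([max(x[0], 0.0), x[1]])
proj_C2 = lambda x: x if np.linalg.norm(x) <= 1 else x / np.linalg.norm(x)
print(alternating_projections(np.array([-2.0, 3.0]), proj_C1, proj_C2))
```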
The SOP is the most fundamental method for solving the CFP, and many existing algorithms [24,28] can be regarded as generalizations or variants of the SOP. Let $\{C_i\}_{i=1}^m$ be a finite family of level sets of convex functions $c_i : \mathcal{H} \to \mathbb{R}$ (i.e., $C_i = \{x \in \mathcal{H} : c_i(x) \le 0\}$) such that $C := \bigcap_{i=1}^m C_i \neq \emptyset$. Adopting Fukushima's relaxation technique [9], He et al. [28] introduced a contraction-type sequential projection algorithm which generates the iterative process
$$x_{k+1} = \alpha_k u + (1 - \alpha_k) P_{C_m^k} P_{C_{m-1}^k} \cdots P_{C_1^k}(x_k), \qquad k \ge 0, \qquad (3)$$
where $\{\alpha_k\} \subset (0,1)$, $u \in \mathcal{H}$ is a given point and, for each $k$, $\{C_i^k\}_{i=1}^m$ is a finite family of half-spaces such that $C_i \subseteq C_i^k$ for $i = 1, \dots, m$ and all $k \ge 0$. They proved that the sequence $\{x_k\}$ converges strongly to $P_C(u)$ under certain conditions. Because the projection operators onto half-spaces have closed-form formulae, algorithm (3) seems easy to implement. However, one common feature of SOP-type algorithms is that they need to evaluate all the projections $P_{C_i}$ (or relaxed projections $P_{C_i^k}$) in each iteration (see, e.g., Reference [24]), which results in prohibitive computational cost for large-scale problems.
Therefore, to solve the CFP (1) efficiently, it is necessary to design methods which use fewer projections in each iteration. He et al. [29,30] proposed the selective projection method (SPM) for solving the CFP (1), where $C$ is the intersection of a finite family of level sets of convex functions. An advantage of the SPM is that only one (appropriately selected) projection needs to be computed in each iteration, while the weak convergence of the algorithm is still guaranteed. More precisely, the SPM consists of two steps in each iteration. Step one: once the $k$-th iterate $x_k$ is obtained, according to a certain criterion, we select one set $C_{i_k}$ from the sets $\{C_i\}_{i=1}^m$, or one relaxed set $C_{i_k}^k$ from the relaxed sets $\{C_i^k\}_{i=1}^m$, where $C_i^k$ is some half-space containing $C_i$ for all $i$ and $k$. Step two: we then update the new iterate via the process
$$x_{k+1} = P_{C_{i_k}}(x_k) \quad \big(\text{or } x_{k+1} = P_{C_{i_k}^k}(x_k)\big). \qquad (4)$$
Because (4) only involves one projection, the SPM is simpler than the SOP-type algorithms.
The main purpose of this paper is to propose a new method, called the combination projection method (CPM), for solving the convex feasibility problem of finding some
$$x^* \in C := \bigcap_{i=1}^m C_i, \qquad C_i = \{x \in \mathcal{H} : c_i(x) \le 0\},$$
where $m$ is a positive integer and $c_i : \mathcal{H} \to \mathbb{R}$ ($i = 1, \dots, m$) are convex functions. The key idea of the CPM is that, for the current iterate $x_k$, the CPM first constructs a new level set $C_k$ through a convex combination of some of the $c_i$, and then updates the new iterate by using the projection $P_{C_k}$, i.e., $x_{k+1} = P_{C_k}(x_k)$. Simplicity and ease of implementation are two advantages of our method, since only one projection is used in each iteration and this projection is easy to calculate. To make the CPM easily implementable, we also introduce the combination relaxation projection method (CRPM), which involves projections onto half-spaces. Weak convergence theorems are proved, and numerical results show the advantages of our methods. In fact, the methods in this paper can easily be extended to solve other nonlinear problems, for example, the SFP and the VIP.
2. Preliminaries
Let $\mathcal{H}$ be a real Hilbert space and $T : \mathcal{H} \to \mathcal{H}$ be a mapping. Recall that
- $T$ is nonexpansive if $\|Tx - Ty\| \le \|x - y\|$ for all $x, y \in \mathcal{H}$;
- $T$ is firmly nonexpansive if $\|Tx - Ty\|^2 \le \langle Tx - Ty, x - y \rangle$ for all $x, y \in \mathcal{H}$;
- $T$ is an averaged mapping if there exist some $\alpha \in (0,1)$ and a nonexpansive mapping $S : \mathcal{H} \to \mathcal{H}$ such that $T = (1-\alpha)I + \alpha S$; in this case, $T$ is also said to be $\alpha$-averaged;
- $T$ is inverse strongly monotone (ISM) if there exists some $\nu > 0$ such that $\langle Tx - Ty, x - y \rangle \ge \nu \|Tx - Ty\|^2$ for all $x, y \in \mathcal{H}$. In this case, we say that $T$ is $\nu$-ISM.
Lemma 1
([31]).
For a mapping $T : \mathcal{H} \to \mathcal{H}$, the following are equivalent:
- (i)
- $T$ is $\frac{1}{2}$-averaged;
- (ii)
- T is 1-ISM;
- (iii)
- T is firmly nonexpansive;
- (iv)
- $I - T$ is firmly nonexpansive.
Recall that the projection $P_D$ onto a closed convex subset $D$ of $\mathcal{H}$ is defined by
$$P_D(x) := \arg\min_{y \in D} \|x - y\|, \qquad x \in \mathcal{H}.$$
It is well known that $P_D$ is characterized by the inequality
$$z = P_D(x) \iff \langle x - z, y - z \rangle \le 0 \quad \text{for all } y \in D.$$
Some useful properties of the projection operators are collected in the lemma below.
Lemma 2
([31]). For any nonempty closed convex subset $D$ of $\mathcal{H}$, the projection $P_D$ is both $\frac{1}{2}$-averaged and 1-ISM. Equivalently, $P_D$ is firmly nonexpansive.
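The firm nonexpansiveness of a projection is easy to probe numerically. The following quick sanity check for the projection onto a closed ball is illustrative only; the test data, sample size, and tolerance are ours:

```python
import numpy as np

rng = np.random.default_rng(0)
center, radius = np.zeros(3), 1.0

def proj_ball(x):
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + (radius / n) * d

# Firm nonexpansiveness: <Px - Py, x - y> >= ||Px - Py||^2 for all x, y.
for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    px, py = proj_ball(x), proj_ball(y)
    assert np.dot(px - py, x - y) >= np.dot(px - py, px - py) - 1e-12
```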
Lemma 3
([32]). Let $D$ be a nonempty closed convex subset of $\mathcal{H}$. Let $\{x_k\} \subset \mathcal{H}$ satisfy the properties:
- (i)
- $\lim_{k \to \infty} \|x_k - x\|$ exists for each $x \in D$;
- (ii)
- every weak cluster point of $\{x_k\}$ belongs to $D$.
Then, $\{x_k\}$ converges weakly to a point in $D$.
Lemma 4.
Let $\{c_i\}_{i=1}^m$ be a finite family of convex functions defined on $\mathcal{H}$ such that their level sets $C_i = \{x \in \mathcal{H} : c_i(x) \le 0\}$, $i = 1, \dots, m$, have a nonempty intersection. Let $\omega_i > 0$ ($i = 1, \dots, m$) with $\sum_{i=1}^m \omega_i = 1$, and set $H := \{x \in \mathcal{H} : \sum_{i=1}^m \omega_i c_i(x) \le 0\}$. Then, the following properties are satisfied.
- (i)
- If each $C_i$ is a half-space, i.e., $C_i = \{x \in \mathcal{H} : \langle a_i, x \rangle \le b_i\}$ with $a_i \in \mathcal{H} \setminus \{0\}$ and $b_i \in \mathbb{R}$ (so that $c_i(x) = \langle a_i, x \rangle - b_i$), and if, in addition, the vector group $\{a_i\}_{i=1}^m$ is linearly independent, then $H$ is a half-space;
- (ii)
- $H$ is a closed ball if each $C_i$ is a closed ball;
- (iii)
- $H$ is a closed ball if each $C_i$ is a closed ball or a half-space, and at least one of them is a closed ball.
Proof.
(i) Obviously, for any $\omega_i > 0$ with $\sum_{i=1}^m \omega_i = 1$, we have
$$H = \Big\{ x \in \mathcal{H} : \sum_{i=1}^m \omega_i \big( \langle a_i, x \rangle - b_i \big) \le 0 \Big\} = \Big\{ x \in \mathcal{H} : \Big\langle \sum_{i=1}^m \omega_i a_i, x \Big\rangle \le \sum_{i=1}^m \omega_i b_i \Big\}. \qquad (6)$$
Since $\{a_i\}_{i=1}^m$ is a linearly independent group, we assert that $\sum_{i=1}^m \omega_i a_i \neq 0$, and hence, it is easy to see from (6) that $H$ is a half-space.
(ii) If $C_i$ is a closed ball with center $u_i$ and radius $r_i > 0$, then $c_i(x) = \|x - u_i\|^2 - r_i^2$. For any $\omega_i > 0$ with $\sum_{i=1}^m \omega_i = 1$, noting the identity
$$\sum_{i=1}^m \omega_i \|x - u_i\|^2 = \|x - \bar{u}\|^2 + \sum_{i=1}^m \omega_i \|u_i - \bar{u}\|^2, \qquad \bar{u} := \sum_{i=1}^m \omega_i u_i,$$
we directly deduce
$$\sum_{i=1}^m \omega_i c_i(x) = \|x - \bar{u}\|^2 - \rho^2, \qquad \rho^2 := \sum_{i=1}^m \omega_i r_i^2 - \sum_{i=1}^m \omega_i \|u_i - \bar{u}\|^2.$$
Consequently,
$$H = \big\{ x \in \mathcal{H} : \|x - \bar{u}\|^2 \le \rho^2 \big\}.$$
Since $\bigcap_{i=1}^m C_i \neq \emptyset$, there exists some $\hat{x}$ such that $c_i(\hat{x}) \le 0$ for all $i$, and thus $\sum_{i=1}^m \omega_i c_i(\hat{x}) \le 0$; this implies that
$$\rho^2 \ge \|\hat{x} - \bar{u}\|^2 \ge 0,$$
that is, $H$ is a closed ball.
(iii) Assume that $C_1$ is a closed ball and $C_2$ is a half-space; then $c_1$ and $c_2$ have the forms $c_1(x) = \|x - u\|^2 - r^2$ and $c_2(x) = \langle a, x \rangle - b$, respectively, where $u \in \mathcal{H}$, $r > 0$, $a \in \mathcal{H} \setminus \{0\}$ and $b \in \mathbb{R}$. For any $\omega_1, \omega_2 > 0$ with $\omega_1 + \omega_2 = 1$, we have by a direct calculation that
$$\omega_1 c_1(x) + \omega_2 c_2(x) = \omega_1 \big( \|x - v\|^2 - \rho^2 \big), \qquad v := u - \frac{\omega_2}{2\omega_1} a, \quad \rho^2 := \|v\|^2 - \|u\|^2 + r^2 + \frac{\omega_2}{\omega_1} b.$$
By using the same argument as in (ii), we assert $\rho^2 \ge 0$, and this means that $H$ is indeed a closed ball. This, together with (i) and (ii), indicates that the conclusion is true for the general case. □
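A small NumPy sketch of the computation in part (ii): given closed balls $C_i = \{x : \|x - u_i\| \le r_i\}$ and convex weights, it returns the center and radius of the combined level set $H$. The helper name and the test data are ours:

```python
import numpy as np

def combined_ball(centers, radii, weights):
    """Center and radius of H = {x : sum_i w_i (||x - u_i||^2 - r_i^2) <= 0} (Lemma 4(ii))."""
    centers, radii, weights = map(np.asarray, (centers, radii, weights))
    u_bar = weights @ centers                                   # convex combination of centers
    rho_sq = weights @ (radii ** 2) - weights @ np.sum((centers - u_bar) ** 2, axis=1)
    return u_bar, np.sqrt(max(rho_sq, 0.0))

# Two overlapping unit balls in R^2, equal weights.
u_bar, rho = combined_ball([[0.0, 0.0], [1.0, 0.0]], [1.0, 1.0], [0.5, 0.5])
print(u_bar, rho)   # center (0.5, 0), radius sqrt(0.75) ~ 0.866
```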
Suppose $f : \mathcal{H} \to \mathbb{R}$ is a proper, lower-semicontinuous (lsc), convex function. Recall that an element $\xi \in \mathcal{H}$ is said to be a subgradient of $f$ at $x$ if
$$f(y) \ge f(x) + \langle \xi, y - x \rangle \quad \text{for all } y \in \mathcal{H}. \qquad (9)$$
We denote by $\partial f(x)$ the set of all subgradients of $f$ at $x$. Recall that $f$ is said to be subdifferentiable at $x$ if $\partial f(x) \neq \emptyset$, and $f$ is said to be subdifferentiable (on $\mathcal{H}$) if it is subdifferentiable at every $x \in \mathcal{H}$. Recall also that the inequality (9) is called the subdifferential inequality of $f$ at $x$.
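Fukushima-type relaxation uses (9) to replace a level set $\{x : c(x) \le 0\}$ by a half-space containing it. A hedged sketch of this construction (the function names are ours; only the subdifferential inequality is used):

```python
import numpy as np

def relaxed_halfspace(c, subgrad, x_k):
    """Half-space {x : c(x_k) + <xi, x - x_k> <= 0}, which contains {x : c(x) <= 0}
    by the subdifferential inequality, with xi a subgradient of c at x_k."""
    xi = subgrad(x_k)
    b = np.dot(xi, x_k) - c(x_k)     # rewrite the half-space as {x : <xi, x> <= b}
    return xi, b

# Example: c(x) = ||x||^2 - 1 (the unit ball as level set), subgradient 2x.
xi, b = relaxed_halfspace(lambda x: np.dot(x, x) - 1.0, lambda x: 2.0 * x, np.array([2.0, 0.0]))
print(xi, b)   # half-space {x : 4*x[0] <= 5}, which indeed contains the unit ball
```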
3. The CPM for Solving Convex Feasibility Problems
In this section, the combination projection method (CPM) is proposed for solving the convex feasibility problem (CFP):
$$\text{find } x^* \in C := \bigcap_{i=1}^m C_i, \qquad (10)$$
where
$$C_i = \{x \in \mathcal{H} : c_i(x) \le 0\}, \qquad i = 1, \dots, m,$$
with $c_i : \mathcal{H} \to \mathbb{R}$ a convex function for each $i$. The algorithm proposed below for solving the CFP (10) is called the combination projection method (CPM) because the projection used to update the next iterate is onto the level set of a convex combination of some of the $c_i$, chosen in an appropriate way. Throughout this section, we always assume that $C \neq \emptyset$ and use $I := \{1, 2, \dots, m\}$ to represent the index set for convenience.
Remark 1.
Algorithm 1 suits the case where the projections $P_{C_k}$ have closed-form representations. For example, according to Lemma 4, if each of $C_1, \dots, C_m$ is a closed ball or a half-space, then $C_k$ is also a closed ball or a half-space for each $k$, and hence, $P_{C_k}$ has a closed-form representation for all $k$. In this case, Algorithm 1 is easily implementable.
| Algorithm 1: (The Combination Projection Method) |
| Step 0. Choose an arbitrary initial guess $x_0 \in \mathcal{H}$ and set $k := 0$. |
| Step 1. Given the current iterate $x_k$, compute $I_k := \{i \in I : c_i(x_k) > 0\}$. If $I_k = \emptyset$, stop; $x_k$ is a solution of the CFP (10). |
| Step 2. Choose weights $\omega_i^k > 0$ ($i \in I_k$) with $\sum_{i \in I_k} \omega_i^k = 1$, and set $C_k := \{x \in \mathcal{H} : \sum_{i \in I_k} \omega_i^k c_i(x) \le 0\}$. |
| Step 3. Compute $x_{k+1} = P_{C_k}(x_k)$, set $k := k + 1$, and return to Step 1. |
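A schematic Python rendering of the CPM iteration is given below, assuming the caller can evaluate the projection onto the combined level set $C_k$ (for instance, via Lemma 4 when all $C_i$ are balls or half-spaces). All function and variable names are ours, and the usage example with affine constraints is illustrative only:

```python
import numpy as np

def cpm(x0, cs, project_level_set, weights_rule, max_iter=500):
    """Combination projection method (sketch): at x_k, combine the violated
    constraint functions with convex weights and project onto the level set
    of the combined function."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        values = np.array([c(x) for c in cs])
        I_k = np.where(values > 0)[0]          # indices of violated constraints
        if I_k.size == 0:                      # x solves the CFP
            return x
        w = weights_rule(values, I_k)          # convex weights on I_k
        x = project_level_set(x, I_k, w)       # x_{k+1} = P_{C_k}(x_k)
    return x

# Usage with affine c_i (all C_i half-spaces): C_k is again a half-space
# (cf. Lemma 4(i)), so its projection has a closed form.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.array([1.0, 1.0, 1.5])
cs = [lambda x, a=a, bi=bi: a @ x - bi for a, bi in zip(A, b)]

def project_combined_halfspace(x, I_k, w):
    a = w @ A[I_k]                             # normal of the combined half-space
    v = w @ (A[I_k] @ x - b[I_k])              # positive combined violation at x
    return x - (v / (a @ a)) * a               # assumes a != 0

equal_weights = lambda values, I_k: np.full(I_k.size, 1.0 / I_k.size)
print(cpm(np.array([3.0, 2.0]), cs, project_combined_halfspace, equal_weights))
```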
Remark 2.
The simplicity and ease of implementation of Algorithm 1 can be illustrated through a simple example: compute the projection $P_C(x_0)$, where $C = C_1 \cap C_2$ and $C_1$, $C_2$ are given by
where
Selecting the initial guess $x_0$ and using the CPM (Algorithm 1), only one iteration step is needed to obtain the exact solution of the problem. Indeed, since $c_i(x_0) > 0$ for each $i$, we have $I_0 = \{1, 2\}$. Taking the convex combination coefficients $\omega_1^0, \omega_2^0$, the CPM first generates a new set $C_0$, i.e., the level set of the convex function $\omega_1^0 c_1 + \omega_2^0 c_2$, and then updates the new iterate $x_1 = P_{C_0}(x_0)$. However, if we adopt the SOP to obtain $P_C(x_0)$, the iteration process is complicated. On the one hand, although there is an expression for the projection onto an ellipsoid [8], obtaining a constant in that expression requires solving an algebraic equation. On the other hand, the actual calculation shows that, after several iterations, we can only obtain an approximate solution of $P_C(x_0)$.
We have the following convergence result for Algorithm 1.
Theorem 1.
Assume that, for each $i \in I$, $c_i : \mathcal{H} \to \mathbb{R}$ is a bounded uniformly continuous (i.e., uniformly continuous on each bounded subset of $\mathcal{H}$) convex function. If there exists a constant $\omega > 0$ such that $\omega_i^k \ge \omega$ for all $i \in I_k$ and all $k \ge 0$, then the sequence $\{x_k\}$ generated by Algorithm 1 converges weakly to a solution of the CFP (10).
Proof.
Obviously, we may assume that $x_k \notin C$ for all $k \ge 0$ with no loss of generality. By the definition of $C_k$, it is very easy to see that $C \subseteq C_k$ holds for all $k \ge 0$. For any $x^* \in C$, we have by Lemma 2 that
$$\|x_{k+1} - x^*\|^2 = \|P_{C_k}(x_k) - x^*\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2. \qquad (11)$$
From (11), we assert that $\{\|x_k - x^*\|\}$ is nonincreasing; hence, $\{x_k\}$ is bounded and $\lim_{k \to \infty} \|x_k - x^*\|$ exists. Furthermore, we also get
$$\sum_{k=0}^{\infty} \|x_{k+1} - x_k\|^2 \le \|x_0 - x^*\|^2 < \infty.$$
In particular, $\|x_{k+1} - x_k\| \to 0$ as $k \to \infty$. By Lemma 3, all we need to prove is that every weak cluster point of $\{x_k\}$ belongs to $C$. To see this, take a weak cluster point $\hat{x}$ of $\{x_k\}$ and let $\{x_{k_j}\}$ be a subsequence of $\{x_k\}$ weakly converging to $\hat{x}$. Noticing $x_{k+1} \in C_k$, we get
$$\sum_{i \in I_k} \omega_i^k c_i(x_{k+1}) \le 0. \qquad (12)$$
For each fixed $i \in I$ and any $k \ge 0$, if $i \notin I_k$, then
$$c_i(x_k) \le 0.$$
If $i \in I_k$, by virtue of the definition of $I_k$ and (12), we get
$$\omega\, c_i(x_k) \le \omega_i^k c_i(x_k) \le \sum_{j \in I_k} \omega_j^k c_j(x_k) \le \sum_{j \in I_k} \omega_j^k \big( c_j(x_k) - c_j(x_{k+1}) \big) \le \max_{j \in I} \big| c_j(x_k) - c_j(x_{k+1}) \big|.$$
Since $\{x_k\}$ is bounded, $\|x_{k+1} - x_k\| \to 0$, and each $c_j$ is uniformly continuous on bounded sets, the right-hand side tends to zero; hence $\limsup_{k \to \infty} c_i(x_k) \le 0$ for every $i \in I$. Since each $c_i$ is convex and continuous, it is weakly lower semicontinuous, and therefore $c_i(\hat{x}) \le \liminf_{j \to \infty} c_i(x_{k_j}) \le 0$ for every $i \in I$, that is, $\hat{x} \in C$. This completes the proof. □
The second algorithm for solving the CFP (10) is named the combination relaxation projection method (CRPM), and it works in the case where the projection operators $P_{C_k}$ do not have closed-form formulae. In this case, we assume that the convex functions $c_i$ ($i \in I$) are subdifferentiable on $\mathcal{H}$.
The convergence of Algorithm 2 is given as follows.
| Algorithm 2: (The Combination Relaxation Projection Method) |
| Step 0. Choose an arbitrary initial guess $x_0 \in \mathcal{H}$ and set $k := 0$. |
| Step 1. Given the current iterate $x_k$, compute $I_k := \{i \in I : c_i(x_k) > 0\}$. If $I_k = \emptyset$, stop; $x_k$ is a solution of the CFP (10). |
| Step 2. Choose weights $\omega_i^k > 0$ ($i \in I_k$) with $\sum_{i \in I_k} \omega_i^k = 1$, take $\xi_i^k \in \partial c_i(x_k)$ for $i \in I_k$, and set $C_k := \{x \in \mathcal{H} : \sum_{i \in I_k} \omega_i^k \big( c_i(x_k) + \langle \xi_i^k, x - x_k \rangle \big) \le 0\}$. (16) |
| Step 3. Compute $x_{k+1} = P_{C_k}(x_k)$, set $k := k + 1$, and return to Step 1. |
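A schematic rendering of the CRPM iteration is given below, under the same caveats as before: equal weights are just one admissible choice, all names are ours, and the division by $\|a\|^2$ assumes the combined subgradient is nonzero (which the proof of Theorem 2 guarantees when $C \neq \emptyset$):

```python
import numpy as np

def crpm(x0, cs, subgrads, max_iter=500):
    """Combination relaxation projection method (sketch): the combined relaxed
    set C_k is a half-space, so its projection has a closed form."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        values = np.array([c(x) for c in cs])
        I_k = np.where(values > 0)[0]
        if I_k.size == 0:
            return x
        w = np.full(I_k.size, 1.0 / I_k.size)                   # equal convex weights
        a = sum(wi * subgrads[i](x) for wi, i in zip(w, I_k))   # combined subgradient
        v = float(w @ values[I_k])                              # combined violation, > 0
        x = x - (v / np.dot(a, a)) * a                          # project onto {y : v + <a, y - x> <= 0}
    return x

# Usage: a single constraint c(x) = ||x||^2 - 1 with subgradient 2x.
print(crpm(np.array([3.0, 0.0]), [lambda x: x @ x - 1.0], [lambda x: 2.0 * x]))
```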
Theorem 2.
Assume that, for each $i \in I$, $c_i : \mathcal{H} \to \mathbb{R}$ is a w-lsc, subdifferentiable, convex function such that the subdifferential mapping $\partial c_i$ is bounded (i.e., bounded on bounded subsets of $\mathcal{H}$). If there exists a constant $\omega > 0$ such that $\omega_i^k \ge \omega$ for all $i \in I_k$ and all $k \ge 0$, then the sequence $\{x_k\}$ generated by Algorithm 2 converges weakly to a solution of the CFP (10).
Proof.
With no loss of generality, we assume $x_k \notin C$ for all $k \ge 0$. First of all, we show that $C_k$ is a half-space, i.e., $\sum_{i \in I_k} \omega_i^k \xi_i^k \neq 0$. Indeed, if otherwise, it can be asserted by (16) that $C_k = \{x \in \mathcal{H} : \sum_{i \in I_k} \omega_i^k c_i(x_k) \le 0\} = \emptyset$, since $c_i(x_k) > 0$ for each $i \in I_k$; this contradicts the inclusion $C \subseteq C_k$ verified below (recall that $C \neq \emptyset$). We next show $C \subseteq C_k$. Indeed, for each $x \in C$ and $i \in I_k$, we have from the subdifferential inequality (9) that
$$c_i(x_k) + \langle \xi_i^k, x - x_k \rangle \le c_i(x) \le 0,$$
where $\xi_i^k \in \partial c_i(x_k)$. Summing these inequalities over $i \in I_k$ with the weights $\omega_i^k$, we have
$$\sum_{i \in I_k} \omega_i^k \big( c_i(x_k) + \langle \xi_i^k, x - x_k \rangle \big) \le 0. \qquad (17)$$
By the definition of $C_k$ (see (16)), we assert from (17) that $x \in C_k$, and hence $C \subseteq C_k$. For any $x^* \in C$, noting $x^* \in C_k$, we have by Lemma 2 that
$$\|x_{k+1} - x^*\|^2 \le \|x_k - x^*\|^2 - \|x_{k+1} - x_k\|^2.$$
This implies that $\{x_k\}$ is bounded, $\lim_{k \to \infty} \|x_k - x^*\|$ exists, and $\|x_{k+1} - x_k\| \to 0$. Now we verify that every weak cluster point of $\{x_k\}$ belongs to $C$. Since $\{x_k\}$ is bounded and each $\partial c_i$ is a bounded operator, there exists a constant $M > 0$ such that $\|\xi_i^k\| \le M$ for all $i \in I_k$ and $k \ge 0$. By the definition of $C_k$ and the fact that $x_{k+1} \in C_k$, we get
$$\sum_{i \in I_k} \omega_i^k c_i(x_k) \le \sum_{i \in I_k} \omega_i^k \langle \xi_i^k, x_k - x_{k+1} \rangle \le M \|x_{k+1} - x_k\|. \qquad (21)$$
From (21), the containment of every weak cluster point of $\{x_k\}$ in $C$ follows immediately from an argument similar to the one used in the final part of the proof of Theorem 1. □
4. Numerical Results
In this section, we compare the behavior of the CPM (Algorithm 1) and the SOP by solving two synthetic examples in the Euclidean space $\mathbb{R}^n$. All the codes were written in Matlab R2010a, and all the numerical experiments were conducted on an HP Pavilion notebook with an Intel(R) Core(TM) i5-3230M CPU@2.60 GHz and 4 GB of RAM, running the Windows 7 Home Premium operating system.
Example 1.
Consider the convex feasibility problem:
where and are nonnegative real numbers. Take ,
, , , , , , , and the initial point is randomly chosen in .
Obviously, $C \neq \emptyset$, i.e., problem (22) is solvable. We use $x_k$ to denote the $k$-th iterate and define the error of the $k$-th iteration as
$$\varepsilon_k := \max_{1 \le i \le m} c_i(x_k),$$
which also serves to check whether or not the proposed algorithm converges to a solution. In fact, it is easy to see that if $\varepsilon_k$ is less than or equal to zero, then $x_k$ is an exact solution of Problem (22) and the iteration can be terminated; if $\varepsilon_k$ is greater than zero, then $x_k$ is just an approximate solution, and the smaller $\varepsilon_k$ is, the smaller the error of $x_k$ as an approximate solution.
Let $|I_k|$ denote the number of elements of the set $I_k$. We give two ways to choose the weights $\omega_i^k$:
- (1)
- $\omega_i^k = \frac{1}{|I_k|}$, $i \in I_k$. Denote the corresponding combination projection method by CPM1 (a code sketch of this rule follows the list).
- (2)
- , . Denote the corresponding combination projection method by CPM2.
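For illustration, the CPM1 rule can be written as a weight callback compatible with the cpm sketch given earlier. The second function shows a violation-proportional rule as a plausible stand-in for CPM2; this is an assumption on our part, since the exact CPM2 rule is not specified here:

```python
import numpy as np

def weights_cpm1(values, I_k):
    """CPM1: equal weights 1/|I_k| on the violated constraints."""
    return np.full(I_k.size, 1.0 / I_k.size)

def weights_proportional(values, I_k):
    """Hypothetical rule: weights proportional to the violations c_i(x_k), i in I_k."""
    v = values[I_k]
    return v / v.sum()
```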
Table 1 illustrates that the set $I_k$ is generally different at each iteration. From Figure 1, we conclude that the behaviors of CPM1 and CPM2 depend on the initial point $x_0$. The errors for CPM1 and CPM2 oscillate, which may be because only partial information about the convex sets is used in each iteration. In contrast, the SOP uses all the information about the convex sets in each iteration, since it involves all the projections $P_{C_i}$ ($i = 1, \dots, m$) in each iteration. From Figure 2, CPM1 behaves better than the SOP.
Table 1.
Comparison of the sets $I_k$ for the combination projection method 1 (CPM1) and CPM2.
Figure 1.
The comparison of two choices of $\omega_i^k$ for different random choices of $x_0$ for Example 1. (a) Number of iterations.
Figure 2.
The comparison of the CPM and the successive orthogonal projection method (SOP) for Example 1.
Example 2.
Consider the linear equation system:
$$Ax = b, \qquad (24)$$
where $A$ is an $m \times n$ matrix, $x \in \mathbb{R}^n$, and $b$ is a vector in $\mathbb{R}^m$. If noise is taken into consideration, Problem (24) is stated as
$$\text{find } x \in \mathbb{R}^n \ \text{such that } |\langle a_i, x \rangle - b_i| \le \varepsilon, \quad i = 1, \dots, m, \qquad (25)$$
where $a_i$ denotes the $i$-th row of $A$, $b_i$ the $i$-th entry of $b$, and $\varepsilon > 0$ measures the level of errors.
The set $C$ is nonempty since the linear equation system has infinitely many solutions.
Set $c_i(x) := |\langle a_i, x \rangle - b_i| - \varepsilon$ and $C_i := \{x \in \mathbb{R}^n : c_i(x) \le 0\}$, $i = 1, \dots, m$, so that $C = \bigcap_{i=1}^m C_i$.
The initial point $x_0$ is randomly chosen in $\mathbb{R}^n$. We compared CPM2 and the SOP for different $m$ and $n$. From Figure 3, the behavior of CPM2 is better than that of the SOP. The error of CPM2 oscillates more than in Example 1, but the oscillation seems to decrease when the iteration count is very large. The SOP behaves well when $m$ and $n$ are small, while its error is very large for large $m$ and $n$. In Figure 4, we compare the CPU time of CPM2 and the SOP, which illustrates that the CPU time of CPM2 is less than that of the SOP; furthermore, the CPU time of the SOP increasingly exceeds that of CPM2 as the iterations proceed.
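The data of Example 2 are not fully recoverable from this text, so the sketch below is an illustrative reconstruction rather than the authors' exact experiment: a consistent random underdetermined system is generated, each equation is relaxed to the two half-spaces defined by $|\langle a_i, x \rangle - b_i| \le \varepsilon$, and the CPM iteration with equal weights on the violated constraints is applied (all parameters and names are our choices):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, eps = 20, 40, 1e-3
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                                    # consistent system, infinitely many solutions

# Each row gives the two half-space constraints <a_i,x> - b_i <= eps and b_i - <a_i,x> <= eps.
G = np.vstack([A, -A])                            # stacked normals
h = np.concatenate([b + eps, -b + eps])           # right-hand sides

x = rng.normal(size=n)                            # random starting point
for _ in range(5000):
    viol = G @ x - h                              # c_i(x) for all 2m half-spaces
    I = np.where(viol > 0)[0]
    if I.size == 0:
        break
    w = np.full(I.size, 1.0 / I.size)             # equal convex weights
    a = w @ G[I]                                  # normal of the combined half-space
    v = w @ viol[I]                               # combined (positive) violation
    x = x - (v / (a @ a)) * a                     # projection onto the combined half-space (a != 0)

print(np.max(np.abs(A @ x - b)))                  # max residual; it decreases as the iterations proceed
```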
Figure 3.
(a) $(m, n) = (20, 40)$; (b) $(m, n) = (200, 400)$; (c) $(m, n) = (500, 1000)$; (d) $(m, n) = (1000, 2000)$. Comparison of the CPM1 and the SOP for different $m$ and $n$ in Example 2.
Figure 4.
The comparison of the CPU time of the CPM and the SOP for $(m, n) = (1000, 2000)$ in Example 2.
5. Conclusions
In this paper, we propose the combination projection method (CPM) for solving the convex feasibility problem (CFP). The CPM is simple and easy to implement, and has a fast convergence speed. How to further speed up the convergence for the CPM through selecting the convex combination coefficients in Algorithms 1 and 2 is worthy of further study.
Author Contributions
S.H. and Q.-L.D. contributed equally in this work.
Funding
This work was supported by the Fundamental Research Fund for the Central Universities (3122017078).
Conflicts of Interest
The authors declare no conflict of interest.
References
- Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
- Bregman, L.M. The method of successive projections for finding a common point of convex sets. Sov. Math. Dokl. 1965, 6, 688–692. [Google Scholar]
- Aleyner, A.; Reich, S. Block-iterative algorithms for solving convex feasibility problems in Hilbert and Banach spaces. J. Math. Anal. Appl. 2008, 343, 427–435. [Google Scholar] [CrossRef]
- López, G.; Martín-Márquez, V.; Wang, F.H.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 374–389. [Google Scholar] [CrossRef]
- Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
- Facchinei, F.; Pang, J.-S. Finite-Dimensional Variational Inequalities and Complementarity Problems; Springer Series in Operations Research; Springer: New York, NY, USA, 2003; Volume II. [Google Scholar]
- Dong, Q.L.; Lu, Y.Y.; Yang, J. The extragradient algorithm with inertial effects for solving the variational inequality. Optimization 2016, 65, 2217–2226. [Google Scholar] [CrossRef]
- Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Springer: Paris, France, 2013. [Google Scholar]
- Fukushima, M. A relaxed projection method for variational inequalities. Math. Program. 1986, 35, 58–70. [Google Scholar] [CrossRef]
- He, S.; Zhao, Z.; Luo, B. A relaxed self-adaptive CQ algorithm for the multiple-sets split feasibility problem. Optimization 2015, 64, 1907–1918. [Google Scholar] [CrossRef]
- He, H.; Xu, H.K. Splitting methods for split feasibility problems with application to Dantzig selectors. Inverse Probl. 2017, 33, 055003. [Google Scholar] [CrossRef]
- Gibali, A.; Censor, Y.; Reich, S. The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 2011, 148, 318–335. [Google Scholar]
- He, S.; Yang, C. Solving the variational inequality problem defined on intersection of finite level sets. Abstr. Appl. Anal. 2013, 2013, 942315. [Google Scholar] [CrossRef]
- von Neumann, J. Functional Operators, Volume II: The Geometry of Orthogonal Spaces. In Annals of Mathematical Studies; Reprint of 1933 Lecture Notes; Princeton University Press: Princeton, NJ, USA, 1950; Volume 22. [Google Scholar]
- Zaslavski, A.J. Approximate Solutions of Common Fixed-Point Problems; Springer: New York, NY, USA, 2018. [Google Scholar]
- Combettes, P.L. The convex feasibility problem in image recovery. In Advances in Imaging and Electron Physics; Hawkes, P., Ed.; Academic Press: New York, NY, USA, 1996; Volume 95, pp. 155–270. [Google Scholar]
- Censor, Y.; Gibali, A.; Lenzen, F.; Schnörr, C. The implicit convex feasibility problem and its application to adaptive image denoising. J. Comput. Math. 2016, 34, 1–16. [Google Scholar] [CrossRef]
- Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
- Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
- Zhao, X.; Ng, K.F.; Li, C.; Yao, Y.C. Linear regularity and linear convergence of projection-based methods for solving convex feasibility problems. Appl. Math. Opt. 2018, 78, 613–641. [Google Scholar] [CrossRef]
- Burachik, R.S.; Martín-Márquez, V. An approach for the convex feasibility problem via monotropic programming. J. Math. Anal. Appl. 2017, 453, 746–760. [Google Scholar] [CrossRef]
- Gibali, A.; Küfer, K.-H.; Reem, D.; Süss, P. A generalized projection-based scheme for solving convex constrained optimization problems. Comput. Optim. Appl. 2018, 70, 737–762. [Google Scholar] [CrossRef]
- Aragón Artacho, F.J.; Censor, Y.; Gibali, A. The cyclic Douglas–Rachford algorithm with r-sets-Douglas–Rachford operators. Optim. Methods Softw. 2018. [Google Scholar] [CrossRef]
- Censor, Y.; Zenios, S.A. Parallel Optimization: Theory, Algorithms, and Applications; Oxford University Press: New York, NY, USA, 1997. [Google Scholar]
- Hundal, H.S. An alternating projection that does not converge in norm. Nonlinear Anal. 2004, 57, 35–61. [Google Scholar] [CrossRef]
- Gubin, L.G.; Polyak, B.T.; Raik, E.V. The method of projections for finding the common point of convex sets. USSR Comput. Math. Math. Phys. 1967, 7, 1–24. [Google Scholar] [CrossRef]
- Bruck, R.E.; Reich, S. Nonexpansive projections and resolvents of accretive operators in Banach spaces. Houst. J. Math. 1977, 3, 459–470. [Google Scholar]
- He, S.; Zhao, Z.; Luo, B. A simple algorithm for computing projection onto intersection of finite level sets. J. Inequal. Appl. 2014, 2014, 307. [Google Scholar] [CrossRef]
- He, S.; Tian, H. Selective projection methods for solving a class of variational inequalities. Numer. Algorithms 2018, 1–18. [Google Scholar] [CrossRef]
- He, S.; Tian, H.; Xu, H.K. The selective projection method for convex feasibility and split feasibility problems. J. Nonlinear Convex Anal. 2018, 19, 1199–1215. [Google Scholar]
- Xu, H.K. Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150, 360–378. [Google Scholar] [CrossRef]
- López Acedo, G.; Xu, H.K. Iterative methods for strict pseudo-contractions in Hilbert spaces. Nonlinear Anal. 2007, 67, 2258–2271. [Google Scholar] [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).