In this paper, the original CQ algorithm, the relaxed CQ algorithm, the gradient projection method (GPM) algorithm, and the subgradient projection method (SPM) algorithm for the convex split feasibility problem are reviewed, and a new algorithm with S-subdifferentiable functions for solving nonconvex split feasibility problems in finite-dimensional spaces is proposed. A weak convergence theorem is established.
S-subgradient projection method; nonconvex; S-subdifferentiable; split feasibility problem
47J25; 47H10; 58C20; 49J50; 46T20
The split feasibility problem (SFP) is the problem of finding a vector $u$ satisfying:
$$u \in C \quad \text{such that} \quad Au \in Q;$$
here, both of the nonempty underlying sets $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^m$ are closed and convex, and $A$ is a matrix of $m$ rows and $n$ columns. Since the SFP was raised by Censor and Elfving [1], it has been rapidly applied in signal processing, image restoration, intensity-modulated radiation therapy (IMRT), and other fields. Besides, different types of iterative algorithms have been used to solve the SFP (see [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22] and the references therein).
The original algorithm used to solve the SFP, which appeared in [1], involved calculating the inverse of the matrix $A$ (not necessarily symmetric, and assuming the inverse exists). In fact, it is very difficult to calculate the inverse of $A$. Thus, the following CQ algorithm presented by Byrne [3] has proven more popular:
$$x_{k+1} = P_C\big(x_k - \gamma A^T (I - P_Q) A x_k\big), \qquad (1)$$
where $P_C$ and $P_Q$ represent the vertical projections onto $C$ and $Q$, respectively, the initial value $x_0$ is arbitrary, $A^T$ means the adjoint of $A$, and $\gamma \in \big(0, 2/\rho(A^T A)\big)$ with $\rho(A^T A)$ relating to the spectral radius of the matrix $A^T A$. In some other references [2,10], the spectral radius of the matrix $A^T A$ is written as $\|A\|^2$. In the sequel, $\|\cdot\|$ means the two-norm. It is found that Algorithm (1) is a special example of the gradient projection method (GPM) associated with convex minimization. That is, let:
$$f(x) = \frac{1}{2}\big\|(I - P_Q) A x\big\|^2,$$
and consider the convex minimization problem:
$$\min_{x \in C} f(x).$$
Recall that the GPM algorithm for the above convex minimization problem is:
$$x_{k+1} = P_C\big(x_k - \gamma_k \nabla f(x_k)\big), \qquad \nabla f(x_k) = A^T (I - P_Q) A x_k. \qquad (2)$$
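To make the iteration concrete, the CQ/gradient projection scheme can be sketched in NumPy as follows. This is a minimal illustration, not the paper's implementation: the box-shaped sets $C$ and $Q$, the matrix $A$, and all function names are assumptions chosen so that the projections are trivial to evaluate.

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, x0, n_iter=1000):
    """Byrne-style CQ iteration: x_{k+1} = P_C(x_k - gamma * A^T (I - P_Q) A x_k),
    with a fixed stepsize gamma in (0, 2 / ||A||^2)."""
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # ||A||^2 = spectral radius of A^T A
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        Ax = A @ x
        x = proj_C(x - gamma * A.T @ (Ax - proj_Q(Ax)))
    return x

# Toy instance (assumed data): C = [0,1]^2, Q = [0.5,1.5]^2, both with easy box projections.
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
proj_C = lambda u: np.clip(u, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.5, 1.5)
x = cq_algorithm(A, proj_C, proj_Q, np.array([5.0, -3.0]))
# x now lies in C with A @ x (approximately) in Q
```

Note that the stepsize here requires computing $\|A\|$, which is exactly the drawback discussed next.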
The stepsize $\gamma$ in Algorithm (1) and $\gamma_k$ in Algorithm (2) depend heavily on the matrix norm $\|A\|$. However, it is difficult to calculate or estimate this norm in reality. Thus, another way to construct a stepsize independent of $\|A\|$ is expected. Yang proposed the following stepsize:
$$\gamma_k = \frac{\rho_k}{\|\nabla f(x_k)\|}, \qquad (3)$$
where the sequence $\{\rho_k\}$ satisfies:
$$\sum_{k=0}^{\infty} \rho_k = \infty, \qquad \sum_{k=0}^{\infty} \rho_k^2 < \infty. \qquad (4)$$
Yang  proved the convergence of the algorithm (2) under (3) and (4). Besides, the following two more conditions are needed:
The boundedness of subset Q;
The full column rank of matrix A.
However, the conditions above are still very strict, so the range of application of Algorithm (2) is limited. Thus, López et al. [2] renewed the stepsize (3) as:
$$\gamma_n = \rho_n \frac{f(x_n)}{\|\nabla f(x_n)\|^2}, \qquad 0 < \rho_n < 4. \qquad (5)$$
Then, López et al. [2] analyzed the weak convergence of Algorithm (2) with the stepsize (5).
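The norm-free variant can be sketched as follows, again as a hedged NumPy illustration on assumed toy data: the stepsize is recomputed from the current residual at every iteration, so $\|A\|$ never appears.

```python
import numpy as np

def cq_adaptive(A, proj_C, proj_Q, x0, rho=2.0, n_iter=1000):
    """CQ iteration with a Lopez-et-al.-type adaptive stepsize:
    gamma_n = rho * f(x_n) / ||grad f(x_n)||^2, with 0 < rho < 4, where
    f(x) = 0.5 * ||(I - P_Q) A x||^2 and grad f(x) = A^T (I - P_Q) A x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = A @ x - proj_Q(A @ x)        # residual (I - P_Q) A x
        g = A.T @ r                      # gradient of f at x
        g2 = g @ g
        if g2 == 0.0:                    # grad f = 0: f is already minimized
            break
        x = proj_C(x - rho * 0.5 * (r @ r) / g2 * g)
    return x

# Same toy instance as before (assumed data).
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
proj_C = lambda u: np.clip(u, 0.0, 1.0)
proj_Q = lambda v: np.clip(v, 0.5, 1.5)
x = cq_adaptive(A, proj_C, proj_Q, np.array([5.0, -3.0]))
```

On this instance, no spectral information about $A$ is computed at any point, which is the practical advantage of the stepsize (5).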
On the other hand, although C and Q are convex sets, the projections onto them may not be easy to implement. To overcome this difficulty, Yang presented the relaxed CQ algorithm, in which $C$ and $Q$ are lower level sets of subdifferentiable convex functions $c$ and $q$ at zero, respectively. Recall the relaxed CQ algorithm:
$$x_{k+1} = P_{C_k}\big(x_k - \gamma A^T (I - P_{Q_k}) A x_k\big), \qquad (6)$$
where $C_k$ and $Q_k$ are halfspaces containing $C$ and $Q$, built from subgradients of $c$ and $q$ at the current iterates.
Define a function:
$$f_k(x) = \frac{1}{2}\big\|(I - P_{Q_k}) A x\big\|^2; \qquad (7)$$
hence, its gradient:
$$\nabla f_k(x) = A^T (I - P_{Q_k}) A x.$$
López et al. [2] improved this relaxed algorithm (6) as follows:
$$x_{n+1} = P_{C_n}\big(x_n - \gamma_n \nabla f_n(x_n)\big), \qquad (8)$$
with the stepsize:
$$\gamma_n = \rho_n \frac{f_n(x_n)}{\|\nabla f_n(x_n)\|^2}, \qquad 0 < \rho_n < 4. \qquad (9)$$
Thus, the convergence of Algorithm (8) with the stepsize (9) requires no calculation or estimation of the norm of the matrix $A$.
Guo reformulated the relaxed algorithm (6) into a subgradient projection method (SPM) by studying the subgradient projector of convex continuous functions. He denoted the subgradient projectors related to $(c, 0, \partial c)$ and $(q, 0, \partial q)$ by $G_c$ and $G_q$, respectively. With the initial value and stepsizes chosen appropriately, the iteration:
$$x_{n+1} = G_c\big(x_n - \gamma_n A^T (I - G_q) A x_n\big) \qquad (10)$$
converges iteratively to a point $x^*$ such that $x^* \in C$ and $Ax^* \in Q$.
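The subgradient projector at the heart of this reformulation can be sketched as follows. This is a hedged illustration of the standard construction for convex functions; the affine level-set function in the example is an assumption for demonstration only, not data from the paper.

```python
import numpy as np

def subgrad_projector(f, subgrad, x):
    """Subgradient projector onto the level set {u : f(u) <= 0}:
    if f(x) > 0, step from x along a subgradient xi of f at x,
        x  ->  x - (f(x) / ||xi||^2) * xi;
    otherwise leave x unchanged."""
    fx = f(x)
    if fx <= 0:
        return np.asarray(x, dtype=float)
    xi = subgrad(x)
    return x - (fx / (xi @ xi)) * xi

# For an affine f, the projector coincides with the exact projection onto
# the halfspace {u : f(u) <= 0} (toy example, assumed data):
f = lambda u: u[0] - 1.0                  # halfspace x_1 <= 1
subgrad = lambda u: np.array([1.0, 0.0])  # (sub)gradient of f
y = subgrad_projector(f, subgrad, np.array([3.0, 2.0]))
# y = [1.0, 2.0], which satisfies f(y) = 0
```

For a general convex $f$, one application of the projector need not land exactly on the level set, but it only requires evaluating $f$ and one subgradient, which is the point of replacing exact projections.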
In this paper, the CQ algorithm (1), the relaxed CQ algorithm (6), the GPM algorithm (2), and the SPM algorithm (10) for the convex SFP are reviewed; the definition of the S-subdifferential with respect to a set S is introduced; the SPM is generalized to a nonconvex case in which the functions c and q are both continuous and S-subdifferentiable; and the proposed algorithm is shown to converge iteratively to a solution of the nonconvex SFP. The S-subgradient projector of a continuous function plays a pivotal role in structuring the iterative algorithm for solving the nonconvex SFP.
First of all, we write $u_n \rightharpoonup u$ to indicate that $\{u_n\}$ converges weakly to $u$. Let the nonempty set $S$ be closed, and let the vertical projection $P_S$ onto $S$ be defined by the following form:
$$P_S(u) = \operatorname*{arg\,min}_{v \in S} \|u - v\|.$$
To define the S-subgradient projector of continuous functions, we need the following definition.
(S-subgradient). Given a set $S$ and a constant, a vector is said to be an S-subgradient of the function $f$ at $u$ if:
The set of all S-subgradients of function f at u is called the S-subdifferential of f at u and is denoted by:
where is the usual distance related to the two-norm from point u to set S.
Note that, in the special case, the S-subdifferential collapses to the Fenchel subdifferential. The definition of the Fenchel subdifferential is given below.
(Fenchel subdifferential). Let $f$ be a function (not necessarily convex), and define its Fenchel subdifferential at $u$ by:
$$\partial f(u) = \big\{\xi : f(v) \ge f(u) + \langle \xi, v - u \rangle \ \text{for all } v\big\}.$$
When $f$ is convex, $\partial f(u)$ is the usual subdifferential.
(). Let S be closed and convex, and consider the S-subdifferential on it. Then, there exists a constant such that, for any points in question:
Therefore, we can define the S-subgradient projector.
(S-subgradient projector). Assume that $f$ is continuous and S-subdifferentiable with respect to $S$. Let the lower level set of $f$ at height zero be nonempty, and let $S$ be closed and convex. Assume that the S-subdifferential of $f$ with respect to $S$ is nonempty. The S-subgradient projector onto the lower level set is:
(). Let be closed and convex and be the S-subdifferential on . Then, there exists a constant such that:
3. Nonconvex Split Feasibility Problem
In this part, we take a look at the nonconvex split feasibility problem. We work under the following hypotheses. Assume that:
the continuous, but not necessarily convex, functions $c$ and $q$ are S-subdifferentiable and, in addition, locally Lipschitzian;
the lower level sets of $c$ and $q$ at height zero are defined by $C = \{x : c(x) \le 0\}$ and $Q = \{y : q(y) \le 0\}$;
the set of solutions to the nonconvex SFP is nonempty; that is, there exists at least one element $x^*$ such that $x^* \in C$ and $Ax^* \in Q$, where $A$ is an $m \times n$ matrix;
the sets $U$ and $V$ are closed convex subsets;
$c$ and $q$ are S-subdifferentiable with respect to $U$ and $V$, respectively;
and are the S-subdifferential of c and q with respect to U and V, respectively.
both of these S-subdifferentials are nonempty.
Under these conditions, the S-subgradient projector onto the corresponding lower level set is:
Therefore, the sequence is bounded. From (20) and (21), we obtain the following, which means:
Since q is locally Lipschitz, we have the local boundedness of its S-subdifferential; therefore, it is bounded on bounded sets, and so is the sequence in question. From Lemma 2, we obtain boundedness on bounded sets as well; thus, there exists a constant for which the required bound holds, and we conclude:
As the sequence is bounded, we can find a convergent subsequence. Then, the continuity of q and (22) imply that:
Since , we have , and then, from (20), we have that:
Thus, in other words, this, together with the above, shows that the proof is complete. □
We raise two questions:
Can the result presented in Theorem 1 hold in infinite-dimensional spaces?
Since we only obtain weak convergence of the proposed algorithm in this paper, how can the algorithm be modified so that strong convergence is guaranteed?
Let the sequence be such that the stated property holds; in the process of proving the convergence of the subgradient projection algorithm, Guo used this property in particular. In our proof, we do not use it.
In this paper, we studied the SFP in the nonconvex case. In finite-dimensional spaces, we gave two S-subdifferentiable functions and then structured nonconvex sets based on their epigraphs. Using the nonvanishing S-subgradients of the S-subdifferentiable functions, we introduced the S-subgradient projector of a continuous, but not necessarily convex, function. With this S-subgradient projector, we transformed the nonconvex SFP; that is, we suggested the S-subgradient projection method with S-subdifferentiable functions for solving the nonconvex SFP. A weak convergence theorem was established.
All authors participated in the conceptualization, validation, formal analysis, and investigation, as well as the writing of the original draft preparation, reviewing, and editing.
This work was supported by the Key Subject Program of Lingnan Normal University (1171518004), the Natural Science Foundation of Guangdong Province (2018A0303070012), the Young Innovative Talents Project at Guangdong Universities (2017KQNCX125), and the Ph.D. research startup foundation of Lingnan Normal University (ZL1919). Yonghong Yao was supported in part by the grant TD13-5033.
Conflicts of Interest
The authors declare that they have no competing interests.
1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239.
2. López, G.; Martín, V.; Wang, F.; Xu, H. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004.
3. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
4. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
5. Ceng, L.; Petruşel, A.; Yao, J. Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. In Abstract and Applied Analysis; Hindawi: London, UK, 2013.
6. Ceng, L.; Wong, M.; Petruşel, A.; Yao, J. Relaxed implicit extragradient-like methods for finding minimum-norm solutions of the split feasibility problem. Fixed Point Theory 2013, 14, 327–344.
7. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502.
8. Ceng, L.; Petruşel, A.; Yao, J.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134.
9. Wang, F.; Xu, H. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 102085.
10. Xu, H. A variable Krasnosel'skii–Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034.
11. Chen, J.; Ceng, L.; Qiu, Y.; Kong, Z. Extra-gradient methods for solving split feasibility and fixed point problems. Fixed Point Theory Appl. 2015, 192.
12. Yao, Y.; Liou, Y.C.; Yao, J.C. Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 2015, 127.
13. Yao, Y.; Yao, J.; Liou, Y.; Postolache, M. Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpathian J. Math. 2018, 34, 459–466.
14. Yao, Y.; Liou, Y.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319.
15. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882.
16. Yao, Y.; Postolache, M.; Yao, J. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61.
17. Yao, Y.; Liou, Y.; Yao, J. Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 2017, 10, 843–854.