S-Subgradient Projection Methods with S-Subdifferential Functions for Nonconvex Split Feasibility Problems

Abstract: In this paper, the original CQ algorithm, the relaxed CQ algorithm, the gradient projection method (GPM), and the subgradient projection method (SPM) for the convex split feasibility problem are reviewed, and a renewed SPM algorithm with S-subdifferential functions for solving nonconvex split feasibility problems in finite dimensional spaces is proposed. A weak convergence theorem is established.


Introduction
The split feasibility problem (SFP) [1] is the problem of finding a vector u satisfying u ∈ C and Au ∈ Q, where both nonempty sets C ⊆ R^n and Q ⊆ R^m are closed and convex, and A is a matrix with m rows and n columns. Since the SFP was raised by Censor [1], it has found rapid application in signal processing [2], image restoration [3], intensity-modulated radiation therapy (IMRT) [4], and other fields. Moreover, many types of iterative algorithms have been used to solve the SFP (see [4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20][21][22] and the references therein). The original algorithm for the SFP in [1] involved calculating the inverse of the matrix A (not necessarily symmetric, and assumed to have an inverse A^{-1}). In practice, however, computing the inverse of A is very difficult. Thus, the following CQ algorithm presented by Byrne [3] became more popular:

u^{k+1} = P_C(u^k − ρ_k A^*(I − P_Q)Au^k), k ≥ 1, (1)

where P_C and P_Q denote the orthogonal projections onto C and Q, respectively, the initial value u^1 ∈ R^n, A^* is the adjoint of A, and ρ_k ∈ (0, 2/σ) with σ the spectral radius of the matrix A^*A.
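As an illustration, the CQ iteration (1) can be sketched numerically. Here C and Q are taken to be unit boxes (an assumption for the demo only, since the paper leaves C and Q abstract), so that P_C and P_Q reduce to componentwise clipping:

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n; stands in for P_C and P_Q."""
    return np.clip(x, lo, hi)

def cq_algorithm(A, u0, rho, n_iter=500):
    """Byrne's CQ iteration u^{k+1} = P_C(u^k - rho * A^T (I - P_Q) A u^k).

    Illustrative choices: C = [0, 1]^n, Q = [0, 1]^m; rho must lie in
    (0, 2/sigma), sigma being the spectral radius of A^T A.
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        Au = A @ u
        grad = A.T @ (Au - proj_box(Au, 0.0, 1.0))  # A^*(I - P_Q) A u
        u = proj_box(u - rho * grad, 0.0, 1.0)      # P_C step
    return u
```

Note that only matrix-vector products with A and A^* are needed, never A^{-1}, which is the practical advantage of (1) over the original algorithm of [1].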
In some other references [2,10], the spectral radius of the matrix A^*A is written as ‖A‖². In the sequel, ‖·‖ denotes the two-norm. It turns out that Algorithm (1) is a special case of the gradient projection method (GPM) [10] for convex minimization. Indeed, let

f(u) = (1/2)‖(I − P_Q)Au‖²,

and consider the convex minimization problem [10]

min_{u ∈ C} f(u).

Recall that the GPM algorithm for this minimization problem reads

u^{k+1} = P_C(u^k − ρ_k ∇f(u^k)), k ≥ 1. (2)

The stepsize ρ_k in the CQ algorithm (1) and the GPM algorithm (2) depends heavily on the matrix norm ‖A‖, which is difficult to calculate or even estimate in practice. Thus, a different stepsize, independent of the norm ‖A‖, is desirable. Yang [23] proposed the stepsize

ρ_k = λ_k / ‖∇f(u^k)‖, (3)

where λ_k satisfies

∑_{k=1}^∞ λ_k = ∞ and ∑_{k=1}^∞ λ_k² < ∞. (4)

Yang [23] proved the convergence of the GPM algorithm (2) under (3) and (4). Besides, the following two conditions are needed:
• the boundedness of the subset Q;
• the full column rank of the matrix A.
However, these conditions are still very strict, so the range of application of the GPM algorithm (2) is limited. Thus, López et al. [2] renewed the stepsize (3) as

ρ_k = λ_k f(u^k) / ‖∇f(u^k)‖², 0 < λ_k < 4. (5)

Then, López et al. [2] analyzed the weak convergence of the GPM algorithm (2) with the stepsize (5).
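A minimal sketch of the self-adaptive stepsize (5), under the assumption that f and ∇f take the forms recalled above; the function name `lopez_step` and the concrete choice of Q are illustrative only:

```python
import numpy as np

def lopez_step(A, u, proj_Q, lam=1.0):
    """Self-adaptive stepsize rho_k = lam * f(u^k) / ||grad f(u^k)||^2 in the
    style of Lopez et al., with f(u) = (1/2)||(I - P_Q) A u||^2.
    No estimate of ||A|| is needed; lam is assumed to lie in (0, 4)."""
    Au = A @ u
    r = Au - proj_Q(Au)            # (I - P_Q) A u
    f_val = 0.5 * float(r @ r)     # f(u)
    g = A.T @ r                    # grad f(u) = A^*(I - P_Q) A u
    gn2 = float(g @ g)
    if gn2 == 0.0:                 # gradient vanishes: u is already stationary
        return 0.0, g
    return lam * f_val / gn2, g
```

The next GPM iterate would then be `P_C(u - rho * g)` with the returned pair `(rho, g)`.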
On the other hand, although C and Q are convex sets, the projections onto them may not be easy to implement. To overcome this difficulty, Yang [24] presented the relaxed CQ algorithm, in which C_0 = {u ∈ R^n : c(u) ≤ 0} and Q_0 = {v ∈ R^m : q(v) ≤ 0} are lower level sets of subdifferentiable convex functions c : R^n → R and q : R^m → R at zero, respectively. Recall the relaxed CQ algorithm:

u^{k+1} = P_{C_k}(u^k − ρ A^*(I − P_{Q_k})Au^k), k ≥ 1, (6)

where

C_k = {u ∈ R^n : c(u^k) + ⟨s_c(u^k), u − u^k⟩ ≤ 0}, s_c(u^k) ∈ ∂c(u^k),

and

Q_k = {v ∈ R^m : q(Au^k) + ⟨s_q(Au^k), v − Au^k⟩ ≤ 0}, s_q(Au^k) ∈ ∂q(Au^k).

Define a function

f_k(u) = (1/2)‖(I − P_{Q_k})Au‖²; (7)

hence, its gradient is

∇f_k(u) = A^*(I − P_{Q_k})Au.

López et al. [2] improved this relaxed CQ algorithm (6) as follows:

u^{k+1} = P_{C_k}(u^k − ρ_k ∇f_k(u^k)), k ≥ 1, (8)

where

ρ_k = λ_k f_k(u^k) / ‖∇f_k(u^k)‖², 0 < λ_k < 4. (9)

Thus, the convergence of Algorithm (8) with the stepsize (9) requires no calculation or estimate of the norm of the matrix A. Guo [25] reformulated the relaxed CQ algorithm (6) into a subgradient projection method (SPM) by studying the subgradient projector of convex continuous functions. Denoting the subgradient projectors related to (c, 0, s_c) and (f_k, 0, ∇f_k) by G_{c,0} and G_{f_k,0}, respectively, he proved that the SPM

u^{k+1} = G_{c,0}(u^k + λ_k(G_{f_k,0}(u^k) − u^k)), k ≥ 1, (10)

converges iteratively to a point u such that u ∈ C_0 and Au ∈ Q_0. In this paper, the CQ algorithm (1), the relaxed CQ algorithm (6), the GPM algorithm (2), and the SPM algorithm (10) for the convex SFP are reviewed; the definition of the S-subdifferential with respect to a set S is introduced; the SFP is generalized to a nonconvex case where the functions c and q are both continuous and S-subdifferentiable; and the proposed algorithm is shown to converge iteratively to a solution of the nonconvex SFP. The S-subgradient projector of a continuous function plays a pivotal role in constructing the iterative algorithm for solving the nonconvex SFP.
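The practical appeal of the relaxed CQ algorithm is that projecting onto the halfspaces C_k and Q_k has a closed form, unlike projecting onto the level sets C_0 and Q_0 themselves. A short sketch (the helper name `proj_halfspace` is ours, not from the paper):

```python
import numpy as np

def proj_halfspace(u, uk, c_uk, s):
    """Projection of u onto the halfspace
        C_k = {x : c(u^k) + <s, x - u^k> <= 0},
    where s is a subgradient of c at u^k.  When c is convex, C_k
    contains the level set C_0, so this is a valid outer relaxation."""
    val = c_uk + float(s @ (u - uk))
    if val <= 0.0:                    # u already lies in the halfspace
        return u
    return u - val / float(s @ s) * s # closed-form projection step
```

For example, with c(x) = ‖x‖² − 1 and u^k = (2, 0), one has c(u^k) = 3 and s = ∇c(u^k) = (4, 0), and the halfspace is {x : x₁ ≤ 1.25}.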

Preliminaries
First of all, we write u^k ⇀ u [5] to indicate that {u^k} converges weakly to u. Let the nonempty set S ⊆ R^n be closed, and let the orthogonal projection [16] P_S from R^n onto S be defined by

P_S(u) = argmin_{v ∈ S} ‖u − v‖.

The domain of f is dom f = {u ∈ R^n : f(u) < +∞}. The graph of f is gra f = {(u, f(u)) : u ∈ dom f}. The epigraph of f is epi f = {(u, ξ) ∈ R^n × R : f(u) ≤ ξ}. The lower level set of f at height ξ ∈ R is lev_{≤ξ} f = {u ∈ R^n : f(u) ≤ ξ}. To define the S-subgradient projector of continuous functions, we need the following definition.

Definition 2 ([25]). Given a set S ⊆ R^n and a constant r_f > 0, a vector x ∈ R^n is said to be an S-subgradient of a function f : R^n → R at u if it satisfies the corresponding S-subgradient inequality with parameter r_f. The set of all S-subgradients of f at u is called the S-subdifferential of f at u and is denoted by ∂_{S,r_f} f(u), where d_S(u) = inf_{v ∈ S} ‖u − v‖ is the usual distance, associated with the two-norm, from the point u to the set S.
Note that if S = R^n, the S-subdifferential collapses to the Fenchel subdifferential; the same happens when r_f = 0. The definition of the Fenchel subdifferential is given below.
Definition 3. The Fenchel subdifferential of f : R^n → R at u is ∂f(u) = {x ∈ R^n : f(v) ≥ f(u) + ⟨x, v − u⟩ for all v ∈ R^n}.

Lemma 1 ([25]). Let S ⊆ R^n be closed and convex, and let f : R^n → R be S-subdifferentiable on R^n. Then, there exists a constant r_f > 0 such that, for any u ∉ C_ξ: Therefore, we can define the S-subgradient projector.

Definition 4 ([25]
). Assume that f : R^n → R is continuous and S-subdifferentiable on R^n with respect to S. Let the lower level set of f at height ξ ∈ R be such that C_ξ = lev_{≤ξ} f ≠ ∅. Let C_ξ ⊆ S ⊆ R^n with S closed and convex. Assume that ∂_{S,r_f} f(u) is the S-subdifferential of f with respect to S and s_f(u) ∈ ∂_{S,r_f} f(u). The S-subgradient projector onto C_ξ related to (f, ξ, s_f) is:
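The displayed formula of Definition 4 does not survive in this copy. By analogy with the convex subgradient projector of [25], a plausible form (an assumption, not a quotation) is G_{f,ξ}(u) = u when f(u) ≤ ξ, and u − (f(u) − ξ)/‖s_f(u)‖² · s_f(u) otherwise. A sketch under that assumption:

```python
import numpy as np

def s_subgradient_projector(u, f, s_f, xi):
    """Assumed form of the S-subgradient projector onto C_xi = {x : f(x) <= xi}:
    identity when f(u) <= xi, otherwise one relaxation step along the
    S-subgradient s_f(u).  Hypothetical reconstruction for illustration."""
    fu = f(u)
    if fu <= xi:
        return u                              # u already lies in C_xi
    g = s_f(u)
    return u - (fu - xi) / float(g @ g) * g   # halfspace-projection step
```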

Lemma 2 ([25]
). Let S ⊆ R^n be closed and convex, and let f : R^n → R be S-subdifferentiable on R^n. Then, there exists a constant r_f > 0 such that:

Nonconvex Split Feasibility Problem
In this part, we take a look at the nonconvex split feasibility problem under the following hypotheses. Assume that:
(1) The continuous, but not necessarily convex, functions c : R^n → R and q : R^m → R are S-subdifferentiable; in addition, c and q are locally Lipschitzian.
(2) The lower level sets of c and q at height ξ ∈ R, ξ > 0, are defined by C_ξ = {u ∈ R^n : c(u) ≤ ξ} and Q_ξ = {v ∈ R^m : q(v) ≤ ξ}.
(3) The set of solutions to the SFP is nonempty; that is, there exists at least one element u ∈ C_ξ such that Au ∈ Q_ξ, where A is an m × n matrix.
(4) U ⊆ R^n and V ⊆ R^m are closed convex subsets such that C_ξ ⊆ U and Q_ξ ⊆ V.
(5) c and q are S-subdifferentiable on R^n and R^m with respect to U and V, respectively.
(6) ∂_{U,r_c} c(u) and ∂_{V,r_q} q(v) are the S-subdifferentials of c and q with respect to U and V, respectively.
(7) Both ∂_{U,r_c} c(u) and ∂_{V,r_q} q(v) are nonempty; let s_c(u) ∈ ∂_{U,r_c} c(u) and s_q(v) ∈ ∂_{V,r_q} q(v).
Under these conditions, the S-subgradient projector onto C_ξ related to (c, ξ, s_c) is denoted by G_{c,ξ}, and the S-subgradient projector onto Q_ξ related to (q, ξ, s_q) is denoted by G_{q,ξ}. For k ≥ 1 and φ_k ∈ ∂_{U,r_c} c(u^k), define the set

C_{k,ξ} = {u ∈ R^n : c(u^k) + ⟨φ_k, u − u^k⟩ ≤ ξ}, (12)

and for ϕ_k ∈ ∂_{V,r_q} q(Au^k), define another set

Q_{k,ξ} = {v ∈ R^m : q(Au^k) + ⟨ϕ_k, v − Au^k⟩ ≤ ξ}. (13)

Then, we can define a function like (7),

f_k(u) = (1/2)‖(I − P_{Q_{k,ξ}})Au‖²,

where the set Q_{k,ξ} is given in (13), so the gradient of f_k at u is

∇f_k(u) = A^*(I − P_{Q_{k,ξ}})Au.

Then, we can improve the relaxed CQ algorithm by

u^{k+1} = P_{C_{k,ξ}}(u^k − ρ_k ∇f_k(u^k)), k ≥ 1,

where

ρ_k = λ_k f_k(u^k) / ‖∇f_k(u^k)‖², 0 < λ_k < 4.

For any u^k ∈ R^n, by [27], we get: Denote the S-subgradient projector related to (f_k, 0, ∇f_k) by G_{f_k^0}. Let R_{λ_k, f_k^0} = I + λ_k(G_{f_k^0} − I), and by (14), we obtain: Now, we suggest the S-subgradient projection method with S-subdifferential functions for solving the nonconvex SFP:

u^{k+1} = G_{c,ξ}(R_{λ_k, f_k^0}(u^k)), k ≥ 1. (16)

Theorem 1. Assume that (1)-(7) are satisfied and inf_k λ_k(2 − λ_k) > 0. Then, {u^k} generated by (16) weakly converges to a point u such that u ∈ C_ξ and Au ∈ Q_ξ.
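Since the display of the update (16) is partly lost in this copy, the following sketch assumes the composition u^{k+1} = G_{c,ξ}(u^k + λ_k(G_{f_k^0}(u^k) − u^k)) suggested by the definition of R_{λ_k, f_k^0}; the projectors `G_c` and `G_f` below are one-dimensional toy stand-ins, not the paper's:

```python
import numpy as np

def s_spm(u0, G_c, G_f, lam=1.0, n_iter=200):
    """Sketch of the S-subgradient projection iteration, assumed to read
        u^{k+1} = G_c(u^k + lam * (G_f(u^k) - u^k)),
    i.e. G_c composed with the relaxation R_lam = I + lam * (G_f - I).
    The projectors G_c, G_f and lam in (0, 2) with lam*(2 - lam) bounded
    away from 0 are supplied by the caller."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        u = G_c(u + lam * (G_f(u) - u))
    return u
```

With toy projectors onto {u ≤ 1} and {u ≥ 0}, the iterates land in the intersection [0, 1], mirroring the feasibility conclusion of Theorem 1.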
Proof. Let w be any point in the solution set; that is, w ∈ C_ξ and Aw ∈ Q_ξ. Since ϕ_k ∈ ∂_{V,r_q} q(Au^k), for any Aw ∈ Q_ξ, from (11), we obtain: Hence, Aw ∈ Q_{k,ξ}; moreover, f_k(w) = 0. Next, we consider two cases. If Au^k ∈ Q_{k,ξ}, then, by the definition of G_{f_k^0}: If Au^k ∉ Q_{k,ξ}, it is deduced from (12), (15), and f_k(w) = 0 that: From the definition of R_{λ_k, f_k^0} and (17), we estimate: This, together with (16) and (18), implies that: By inf_k λ_k(2 − λ_k) > 0, we obtain the Fejér monotonicity: Thus, lim_{k→∞} ‖u^k − w‖ exists, so {u^k} is bounded. By (19), we find: One can see that: We observe from ∇f_k(w) = 0 that: Therefore, {∇f_k(u^k)} is bounded. From (20) and (21), we have lim_{k→∞} f_k(u^k) = 0, which means: Since q is locally Lipschitz, ∂q is locally bounded; therefore, ∂q is bounded on bounded sets, and so is I − P_S. From Lemma 2, we deduce that ∂_{V,r_q} q is bounded on bounded sets; thus, there exists δ > 0 such that ‖ϕ_k‖ ≤ δ. Since P_{Q_{k,ξ}} Au^k ∈ Q_{k,ξ}, we conclude: As {u^k} is bounded, we can find a subsequence {u^{k_i}} of {u^k} such that u^{k_i} ⇀ u. Then, the continuity of q and (22) imply that: Hence, Au ∈ Q_ξ. Then, from (20), we have that: Since u^{k_i} ⇀ u, we have v^{k_i} ⇀ u. Next, two cases are considered.
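The Fejér monotonicity used in the proof can be checked numerically on a convex instance of the SFP; the sketch below uses the CQ iteration with unit boxes for C and Q (all concrete choices here are illustrative, not from the paper):

```python
import numpy as np

def fejer_distances(A, u0, w, rho, n_iter=40):
    """Distances ||u^k - w|| along the CQ iteration with C = Q = unit boxes.
    Fejér monotonicity predicts a nonincreasing sequence whenever w is a
    solution (w in C, A w in Q) and rho lies in (0, 2/||A||^2)."""
    u, dists = np.asarray(u0, dtype=float), []
    for _ in range(n_iter):
        dists.append(float(np.linalg.norm(u - w)))
        Au = A @ u
        u = np.clip(u - rho * (A.T @ (Au - np.clip(Au, 0.0, 1.0))), 0.0, 1.0)
    return dists
```

Monotonicity of this distance sequence is exactly what yields the existence of lim_{k→∞} ‖u^k − w‖ and hence the boundedness of {u^k} in the argument above.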
No matter whether v^{k_i} belongs to C_ξ or not, we have: From Lemma 2, there exists κ > 0 such that {v^{k_i}} lies in B(u; κ) and: Hence, by (20), we have: Thus, c(u) ≤ ξ; in other words, u ∈ C_ξ. This, together with Au ∈ Q_ξ, completes the proof.

Remark 1.
We raise two questions: (1) Can the result presented in Theorem 1 hold in infinite dimensional spaces? (2) Since we only obtain weak convergence of the proposed algorithm in this paper, how can the algorithm be modified so that strong convergence is guaranteed?
Remark 2. Let {λ_k} be a sequence such that inf_k λ_k(2 − λ_k) > 0. In the process of proving the convergence of the subgradient projection algorithm, Guo [25] used λ_k = 1 in particular; our proof does not rely on that choice.

Conclusions
In this paper, we studied the SFP in the nonconvex case. In finite dimensional spaces, we considered two S-subdifferentiable functions and constructed possibly nonconvex sets from their lower level sets. Using the nonzero S-subgradients of an S-subdifferentiable function, we introduced the S-subgradient projector of a continuous, but not necessarily convex, function. With this S-subgradient projector, we transformed the GPM into an SPM; that is, we proposed the S-subgradient projection method with S-subdifferential functions for solving the nonconvex SFP. A weak convergence theorem was established.