
S-Subgradient Projection Methods with S-Subdifferential Functions for Nonconvex Split Feasibility Problems

1 School of Mathematics and Statistics, Lingnan Normal University, Zhanjiang 524048, China
2 Center for General Education, China Medical University, Taichung 40402, Taiwan
3 Department of Mathematics and Informatics, University “Politehnica” of Bucharest, 060042 Bucharest, Romania
4 Romanian Academy, Gh. Mihoc-C. Iacob Institute of Mathematical Statistics and Applied Mathematics, 050711 Bucharest, Romania
5 School of Mathematical Sciences, Tianjin Polytechnic University, Tianjin 300387, China
6 The Key Laboratory of Intelligent Information and Big Data Processing of NingXia Province, North Minzu University, Yinchuan 750021, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(12), 1517; https://doi.org/10.3390/sym11121517
Submission received: 24 November 2019 / Revised: 11 December 2019 / Accepted: 11 December 2019 / Published: 14 December 2019
(This article belongs to the Special Issue Advance in Nonlinear Analysis and Optimization)

Abstract

In this paper, the original CQ algorithm, the relaxed CQ algorithm, the gradient projection method (GPM), and the subgradient projection method (SPM) for the convex split feasibility problem are reviewed, and a renewed SPM algorithm with S-subdifferential functions for solving nonconvex split feasibility problems in finite dimensional spaces is proposed. A weak convergence theorem is established.
MSC:
47J25; 47H10; 58C20; 49J50; 46T20

1. Introduction

The split feasibility problem (SFP) [1] is the problem of finding a vector u satisfying:
$$u \in C \quad \text{and} \quad Au \in Q,$$
where the nonempty underlying sets $C \subseteq \mathbb{R}^n$ and $Q \subseteq \mathbb{R}^m$ are both closed and convex, and A is a matrix with m rows and n columns. Since the SFP was introduced by Censor [1], it has been widely applied in signal processing [2], image restoration [3], intensity modulated radiation therapy (IMRT) [4], and other fields. Moreover, various types of iterative algorithms have been used to solve the SFP (see [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22] and the references therein).
The original algorithm for solving the SFP, which appeared in [1], involved calculating the inverse of the matrix A (not necessarily symmetric, assuming the inverse $A^{-1}$ exists). In practice, it is very difficult to compute the inverse of A. Thus, the following CQ algorithm, presented by Byrne [3], became more popular:
$$u_{k+1} = P_C\left(u_k - \rho_k A^*(I - P_Q)Au_k\right), \quad k \geq 1, \tag{1}$$
where $P_C$ and $P_Q$ denote the orthogonal projections onto C and Q, respectively, the initial value $u_1 \in \mathbb{R}^n$, $A^*$ denotes the adjoint of A, and $\rho_k \in (0, 2/\sigma)$ with $\sigma$ the spectral radius of the matrix $A^*A$. In some other references [2,10], the spectral radius of $A^*A$ is written as $\|A\|^2$. In the sequel, $\|\cdot\|$ denotes the two-norm. Algorithm (1) is a special case of the gradient projection method [10] (GPM) for convex minimization. That is, let:
$$f(u) = \frac{1}{2}\|Au - P_Q(Au)\|^2,$$
and consider the convex minimization problem [10]:
$$\min_{u \in C} f(u).$$
Recall that the GPM algorithm for the above convex minimization problem is:
$$u_{k+1} = P_C\left(u_k - \rho_k \nabla f(u_k)\right), \quad k \geq 1. \tag{2}$$
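For concreteness, iteration (1) (equivalently the GPM iteration (2), since $\nabla f(u) = A^*(Au - P_Q(Au))$) can be sketched numerically. The matrix A, the box-shaped sets C and Q, and the iteration count below are illustrative choices, not taken from the paper:

```python
import numpy as np

def cq_algorithm(A, proj_C, proj_Q, u0, rho, iters=500):
    """Byrne's CQ iteration: u <- P_C(u - rho * A^T (Au - P_Q(Au)))."""
    u = u0
    for _ in range(iters):
        Au = A @ u
        u = proj_C(u - rho * (A.T @ (Au - proj_Q(Au))))
    return u

A = np.array([[1.0, 2.0], [3.0, 1.0]])
proj_C = lambda u: np.clip(u, 0.0, 1.0)      # C = [0,1]^2: projection is clipping
proj_Q = lambda v: np.clip(v, 0.0, 2.0)      # Q = [0,2]^2
sigma = np.linalg.norm(A, 2) ** 2            # spectral radius of A^T A
u = cq_algorithm(A, proj_C, proj_Q, np.ones(2), rho=1.0 / sigma)
assert np.all((u >= -1e-9) & (u <= 1 + 1e-9))   # u lies in C
assert np.all(A @ u <= 2.0 + 1e-3)              # Au is (numerically) in Q
```

For boxes the projections are coordinatewise clipping, which is what makes this toy problem cheap; for general C and Q the projections themselves may be expensive, which is the motivation for the relaxed variants below.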
The stepsize $\rho_k$ in the CQ algorithm (1) and the GPM algorithm (2) depends heavily on the matrix norm $\|A\|$. However, it is difficult to calculate or even estimate $\|A\|$ in practice. Thus, a different stepsize, independent of the norm $\|A\|$, is desirable. Yang [23] proposed the following stepsize:
$$\rho_k = \frac{\lambda_k}{\|\nabla f(x_k)\|}, \tag{3}$$
where $\lambda_k$ satisfies:
$$\sum_{k=1}^{\infty} \lambda_k = \infty \quad \text{and} \quad \sum_{k=1}^{\infty} \lambda_k^2 < \infty. \tag{4}$$
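The divergent-sum/convergent-square-sum requirement (4) is satisfied, for example, by $\lambda_k = 1/k$. A quick numerical check (the truncation point 100000 is an arbitrary illustrative choice):

```python
import math

lam = [1.0 / k for k in range(1, 100001)]
partial_sum = sum(lam)                    # grows like log(n): unbounded
partial_sq_sum = sum(l * l for l in lam)  # converges to pi^2 / 6
assert partial_sum > 12.0                 # log(1e5) + Euler gamma ~ 12.09
assert abs(partial_sq_sum - math.pi ** 2 / 6) < 1e-4
```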
Yang [23] proved the convergence of the GPM algorithm (2) under (3) and (4). In addition, the following two conditions are needed:
  • The boundedness of the subset Q;
  • The full column rank of the matrix A.
However, these conditions are still quite strict, so the range of application of the GPM algorithm (2) is limited. Thus, López et al. [2] renewed the stepsize (3) as:
$$\rho_k = \frac{\lambda_k f(x_k)}{\|\nabla f(x_k)\|^2}, \quad 0 < \lambda_k < 4. \tag{5}$$
Then, López et al. [2] analyzed the weak convergence of the GPM algorithm (2) with the stepsize (5).
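The point of stepsize (5) is that each iteration uses only quantities already computed (the residual and the gradient), never $\|A\|$. A minimal sketch, reusing the illustrative boxes and matrix from before (the choice $\lambda_k \equiv 2$ is also illustrative):

```python
import numpy as np

def gpm_selfadaptive(A, proj_C, proj_Q, u0, lam=2.0, iters=500, eps=1e-12):
    """GPM with rho_k = lam * f(u_k) / ||grad f(u_k)||^2; no use of ||A||."""
    u = u0
    for _ in range(iters):
        Au = A @ u
        r = Au - proj_Q(Au)
        g = A.T @ r                      # grad f(u) = A^T (Au - P_Q(Au))
        gn = np.dot(g, g)
        if gn < eps:                     # Au is (numerically) already in Q
            return proj_C(u)
        u = proj_C(u - lam * (0.5 * np.dot(r, r)) / gn * g)
    return u

A = np.array([[1.0, 2.0], [3.0, 1.0]])
u = gpm_selfadaptive(A, lambda x: np.clip(x, 0.0, 1.0),
                     lambda y: np.clip(y, 0.0, 2.0), np.ones(2))
assert np.all(A @ u <= 2.0 + 1e-3)
```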
On the other hand, although C and Q are convex sets, the projections onto them may not be easy to implement. To overcome this difficulty, Yang [24] presented the relaxed CQ algorithm, in which $C_0 = \{u \in \mathbb{R}^n : c(u) \leq 0\}$ and $Q_0 = \{v \in \mathbb{R}^m : q(v) \leq 0\}$ are the lower level sets at zero of the subdifferentiable convex functions $c : \mathbb{R}^n \to \mathbb{R}$ and $q : \mathbb{R}^m \to \mathbb{R}$, respectively. Recall the relaxed CQ algorithm:
$$u_{k+1} = P_{C_{k,0}}\left(u_k - \rho_k A^*(I - P_{Q_{k,0}})Au_k\right), \quad \rho_k \in \left(0, 2/\|A\|^2\right), \quad k \geq 1, \tag{6}$$
where:
$$C_{k,0} = \left\{u \in \mathbb{R}^n : c(u_k) + \langle \phi_k, u - u_k \rangle \leq 0\right\}, \quad \phi_k \in \partial c(u_k),$$
and:
$$Q_{k,0} = \left\{v \in \mathbb{R}^m : q(Au_k) + \langle \varphi_k, v - Au_k \rangle \leq 0\right\}, \quad \varphi_k \in \partial q(Au_k).$$
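The sets $C_{k,0}$ and $Q_{k,0}$ are halfspaces, which is what makes the relaxed algorithm implementable: projection onto a halfspace has a closed form. A small sketch (the test vectors are illustrative):

```python
import numpy as np

def proj_halfspace(u, phi, beta):
    """Projection onto H = {u : <phi, u> <= beta}: move along phi if outside."""
    s = np.dot(phi, u) - beta
    if s <= 0:
        return u                          # already inside H
    return u - (s / np.dot(phi, phi)) * phi

p = proj_halfspace(np.array([3.0, 0.0]), np.array([1.0, 0.0]), 1.0)
assert np.allclose(p, [1.0, 0.0])         # projected onto the boundary
q = proj_halfspace(np.array([0.5, 0.5]), np.array([1.0, 0.0]), 1.0)
assert np.allclose(q, [0.5, 0.5])         # interior points are fixed
```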
Define a function:
$$f_k(u) = \frac{1}{2}\|Au - P_{Q_{k,0}}(Au)\|^2; \tag{7}$$
then its gradient is:
$$\nabla f_k(u) = A^*\left(Au - P_{Q_{k,0}}(Au)\right).$$
López et al. [2] improved this relaxed CQ algorithm (6) as follows:
$$u_{k+1} = P_{C_{k,0}}\left(u_k - \rho_k \nabla f_k(u_k)\right), \quad k \geq 1, \tag{8}$$
where:
$$\rho_k = \frac{\lambda_k f_k(u_k)}{\|\nabla f_k(u_k)\|^2}, \quad 0 < \lambda_k < 4. \tag{9}$$
Thus, the convergence of Algorithm (8) with the stepsize (9) does not require calculating or estimating the norm of the matrix A.
Guo [25] reformulated the relaxed CQ algorithm (6) into a subgradient projection method (SPM) by studying the subgradient projector of convex continuous functions. He denoted the subgradient projectors related to $(c, 0, s_c)$ and $(f_k, 0, \nabla f_k)$ by $G_c^0$ and $G_{f_k}^0$, respectively. Let $R_{\lambda_k, f_k}^0 = I + \lambda_k\left(G_{f_k}^0 - I\right)$; then:
$$u_{k+1} = G_c^0 R_{\lambda_k, f_k}^0(u_k), \quad 0 < \lambda_k < 2, \tag{10}$$
converges iteratively to a point $\tilde{u}$ such that $\tilde{u} \in C_0$ and $A\tilde{u} \in Q_0$.
In this paper, the CQ algorithm (1), the relaxed CQ algorithm (6), the GPM algorithm (2), and the SPM algorithm (10) for the convex SFP are reviewed; the definition of the S-subdifferential with respect to a set S is introduced; and the SFP is generalized to a nonconvex case in which the functions c and q are both continuous and S-subdifferentiable. The proposed algorithm then converges iteratively to a solution of the nonconvex SFP. The S-subgradient projector of a continuous function plays a pivotal role in structuring the iterative algorithm for solving the nonconvex SFP.

2. Preliminaries

First of all, we write $u_k \rightharpoonup u$ [5] to indicate that $\{u_k\}$ converges weakly to u. Let the nonempty set $S \subseteq \mathbb{R}^n$ be closed, and let the metric projection [16] $P_S$ from $\mathbb{R}^n$ onto S be defined by:
$$P_S(u) := \operatorname*{arg\,min}_{v \in S} \|u - v\|, \quad u \in \mathbb{R}^n.$$
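Since convexity is dropped later in the paper, it is worth noting that this argmin still makes sense for a nonconvex closed S (though it may be set-valued). A toy example with S the unit sphere, where any minimizer of $\|u - v\|$ over $\|v\| = 1$ is $u/\|u\|$ for $u \neq 0$ (the tie-breaking at the origin below is an illustrative choice):

```python
import numpy as np

def proj_sphere(u):
    """P_S for the nonconvex set S = {v : ||v|| = 1}; any unit vector works at 0."""
    n = np.linalg.norm(u)
    return u / n if n > 0 else np.eye(len(u))[0]

p = proj_sphere(np.array([3.0, 4.0]))
assert np.allclose(p, [0.6, 0.8])
assert abs(np.linalg.norm(p) - 1.0) < 1e-12
```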
Definition 1
([26]). Let $f : \mathbb{R}^n \to (-\infty, +\infty]$.
The domain of f is:
$$\operatorname{dom} f = \left\{u \in \mathbb{R}^n : f(u) < +\infty\right\}.$$
The graph of f is:
$$\operatorname{gra} f = \left\{(u, \xi) \in \mathbb{R}^n \times \mathbb{R} : f(u) = \xi\right\}.$$
The epigraph of f is:
$$\operatorname{epi} f = \left\{(u, \xi) \in \mathbb{R}^n \times \mathbb{R} : f(u) \leq \xi\right\}.$$
The lower level set of f at height $\xi \in \mathbb{R}$ is:
$$\operatorname{lev}_{\leq \xi} f = \left\{u \in \mathbb{R}^n : f(u) \leq \xi\right\}.$$
To define the S-subgradient projector of continuous functions, we need the following definition.
Definition 2
([25]). Given a set $S \subseteq \mathbb{R}^n$ and a constant $r_f > 0$, a vector $x \in \mathbb{R}^n$ is said to be an S-subgradient of the function $f : \mathbb{R}^n \to \mathbb{R}$ at u if:
$$\langle v - u, x \rangle + f(u) + \frac{r_f}{2} d_S^2(u) \leq f(v) + \frac{r_f}{2} d_S^2(v), \quad \forall v \in \mathbb{R}^n. \tag{11}$$
The set of all S-subgradients of f at u is called the S-subdifferential of f at u and is denoted by:
$$\partial_{S, r_f} f(u) = \left\{x \in \mathbb{R}^n : \langle v - u, x \rangle + f(u) + \frac{r_f}{2} d_S^2(u) \leq f(v) + \frac{r_f}{2} d_S^2(v), \ \forall v \in \mathbb{R}^n\right\},$$
where $d_S(u) = \inf_{v \in S} \|u - v\|$ is the usual distance, with respect to the two-norm, from the point u to the set S.
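Definition 2 can be checked numerically. For a convex f and a convex S, the gradient of $f + (r_f/2)d_S^2$ at u is an S-subgradient, since the defining inequality is exactly the Fenchel inequality for that convex sum, and $\nabla\left(\tfrac{1}{2}d_S^2\right)(u) = (I - P_S)(u)$ when S is convex. The function f, the box S, and $r_f$ below are illustrative choices:

```python
import numpy as np

def d_S(u):                                    # distance to the box S = [0,1]^2
    return np.linalg.norm(u - np.clip(u, 0.0, 1.0))

f = lambda u: np.dot(u, u)                     # f(u) = ||u||^2
r_f = 1.0
u = np.array([2.0, -1.0])
x = 2.0 * u + r_f * (u - np.clip(u, 0.0, 1.0)) # grad of f + (r_f/2) d_S^2 at u

rng = np.random.default_rng(0)
ok = True
for v in 3.0 * rng.normal(size=(200, 2)):      # sample test points v
    lhs = np.dot(v - u, x) + f(u) + 0.5 * r_f * d_S(u) ** 2
    rhs = f(v) + 0.5 * r_f * d_S(v) ** 2
    ok = ok and (lhs <= rhs + 1e-9)
assert ok                                      # inequality (11) holds at all samples
```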
Note that if $S = \mathbb{R}^n$ or $r_f = 0$, the S-subdifferential collapses to the Fenchel subdifferential, whose definition is given below.
Definition 3
([26]). Let $f : \mathbb{R}^n \to (-\infty, +\infty]$ (not necessarily convex), and define its Fenchel subdifferential at u by:
$$\partial f(u) := \left\{x \in \mathbb{R}^n : \langle v - u, x \rangle + f(u) \leq f(v), \ \forall v \in \mathbb{R}^n\right\}.$$
When f is convex, $\partial f(u)$ is the usual subdifferential.
Lemma 1
([25]). Let $C_\xi = \operatorname{lev}_{\leq \xi} f$ with $C_\xi \subseteq S \subseteq \mathbb{R}^n$, let S be closed and convex, and let $f : \mathbb{R}^n \to \mathbb{R}$ be S-subdifferentiable on $\mathbb{R}^n$. Then, there exists a constant $r_f > 0$ such that, for any $u \notin C_\xi$:
$$s_f(u) \in \partial_{S, r_f} f(u) \implies s_f(u) \neq 0.$$
Therefore, we can define the S-subgradient projector.
Definition 4
([25]). Assume that $f : \mathbb{R}^n \to \mathbb{R}$ is continuous and S-subdifferentiable on $\mathbb{R}^n$ with respect to S. Let $C_\xi = \operatorname{lev}_{\leq \xi} f$ be the lower level set of f at height $\xi \in \mathbb{R}$, with $C_\xi \subseteq S \subseteq \mathbb{R}^n$ and S closed and convex. Assume that $\partial_{S, r_f} f(u)$ is the S-subdifferential of f with respect to S and $s_f(u) \in \partial_{S, r_f} f(u)$. The S-subgradient projector onto $C_\xi$ related to $(f, \xi, s_f)$ is:
$$G_{S,f}^{\xi} : \mathbb{R}^n \to \mathbb{R}^n, \qquad u \mapsto \begin{cases} u + \dfrac{\xi - f(u)}{\|s_f(u)\|^2}\, s_f(u), & u \notin C_\xi, \\ u, & u \in C_\xi. \end{cases}$$
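The projector of Definition 4 can be sketched directly from the formula. In the toy case below, $f(u) = \|u\|^2$ with $\xi = 1$, and the gradient is taken as the selection $s_f$ (for a convex f this is an ordinary subgradient; all concrete choices are illustrative):

```python
import numpy as np

def G(u, f, s_f, xi):
    """S-subgradient projector of Definition 4 onto lev_{<= xi} f."""
    if f(u) <= xi:
        return u                                 # already in the level set
    g = s_f(u)
    return u + (xi - f(u)) / np.dot(g, g) * g

f = lambda u: np.dot(u, u)                       # f(u) = ||u||^2, level xi = 1
s_f = lambda u: 2.0 * u                          # gradient as the selection s_f
u0 = np.array([2.0, 0.0])
p = G(u0, f, s_f, 1.0)
assert np.allclose(p, [1.25, 0.0])               # one relaxed step toward C_1
assert f(p) < f(u0)                              # the step decreases f
```

Note that G is not the metric projection onto $C_\xi$: a single step generally lands outside the level set (here $f(p) = 1.5625 > 1$), and repeated application drives f down to the level $\xi$.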
Lemma 2
([25]). Let $S \subseteq \mathbb{R}^n$ be closed and convex and $f : \mathbb{R}^n \to \mathbb{R}$ be S-subdifferentiable on $\mathbb{R}^n$. Then, there exists a constant $r_f > 0$ such that:
$$x \in \partial_{S, r_f} f(u) \implies x \in \partial f(u) + r_f (I - P_S)(u).$$

3. Nonconvex Split Feasibility Problem

In this part, we take a look at the nonconvex split feasibility problem. Throughout, we make the following assumptions:
(1)
The continuous, but not necessarily convex, functions $c : \mathbb{R}^n \to \mathbb{R}$ and $q : \mathbb{R}^m \to \mathbb{R}$ are S-subdifferentiable; in addition, c and q are locally Lipschitzian.
(2)
The lower level sets of c and q at height $\xi \in \mathbb{R}$, $\xi > 0$, are defined by $C_\xi = \{u \in \mathbb{R}^n : c(u) \leq \xi\}$ and $Q_\xi = \{v \in \mathbb{R}^m : q(v) \leq \xi\}$.
(3)
The set of solutions to the SFP is nonempty; that is, there exists at least one element $\tilde{u} \in C_\xi$ such that $A\tilde{u} \in Q_\xi$, where A is an $m \times n$ matrix.
(4)
$U \subseteq \mathbb{R}^n$ and $V \subseteq \mathbb{R}^m$ are closed convex subsets such that $C_\xi \subseteq U$ and $Q_\xi \subseteq V$.
(5)
c and q are S-subdifferentiable on $\mathbb{R}^n$ and $\mathbb{R}^m$ with respect to U and V, respectively.
(6)
$\partial_{U, r_c} c(u)$ and $\partial_{V, r_q} q(v)$ are the S-subdifferentials of c and q with respect to U and V, respectively.
(7)
Both $\partial_{U, r_c} c(u)$ and $\partial_{V, r_q} q(v)$ are nonempty; let $s_c(u) \in \partial_{U, r_c} c(u)$ and $s_q(v) \in \partial_{V, r_q} q(v)$.
Under these assumptions, the S-subgradient projector onto $C_\xi$ related to $(c, \xi, s_c)$ is:
$$G_{U,c}^{\xi} : \mathbb{R}^n \to \mathbb{R}^n, \qquad u \mapsto \begin{cases} u + \dfrac{\xi - c(u)}{\|s_c(u)\|^2}\, s_c(u), & u \notin C_\xi, \\ u, & u \in C_\xi, \end{cases}$$
and the S-subgradient projector onto $Q_\xi$ related to $(q, \xi, s_q)$ is:
$$G_{V,q}^{\xi} : \mathbb{R}^m \to \mathbb{R}^m, \qquad v \mapsto \begin{cases} v + \dfrac{\xi - q(v)}{\|s_q(v)\|^2}\, s_q(v), & v \notin Q_\xi, \\ v, & v \in Q_\xi. \end{cases}$$
For $k \geq 1$ and $\phi_k \in \partial_{U, r_c} c(u_k)$, define the set:
$$C_{k,\xi} = \left\{u \in \mathbb{R}^n : c(u_k) + \langle \phi_k, u - u_k \rangle \leq \xi\right\}, \tag{12}$$
and for $\varphi_k \in \partial_{V, r_q} q(Au_k)$, define the set:
$$Q_{k,\xi} = \left\{v \in \mathbb{R}^m : q(Au_k) + \langle \varphi_k, v - Au_k \rangle \leq \xi\right\}. \tag{13}$$
Then, we can define a function analogous to (7),
$$f_k(u) = \frac{1}{2}\|Au - P_{Q_{k,\xi}}(Au)\|^2,$$
where the set $Q_{k,\xi}$ is given in (13), so the gradient of $f_k$ at u is:
$$\nabla f_k(u) = A^*\left(Au - P_{Q_{k,\xi}}(Au)\right).$$
Then, we can improve the relaxed CQ algorithm by:
$$u_{k+1} = P_{C_{k,\xi}}\left(u_k - \rho_k \nabla f_k(u_k)\right), \tag{14}$$
where:
$$\rho_k = \frac{\lambda_k f_k(u_k)}{\|\nabla f_k(u_k)\|^2}.$$
For any $u_k \in \mathbb{R}^n$, by [27], we get:
$$P_{C_{k,\xi}}(u_k) = u_k + \frac{\left(\xi - c(u_k) + \langle \phi_k, u_k \rangle\right) - \langle \phi_k, u_k \rangle}{\|\phi_k\|^2}\, \phi_k = u_k + \frac{\xi - c(u_k)}{\|\phi_k\|^2}\, \phi_k = G_{U,c}^{\xi}(u_k).$$
Denote the S-subgradient projector related to $(f_k, 0, \nabla f_k)$ by $G_{f_k}^0$; that is,
$$G_{f_k}^{0} : \mathbb{R}^n \to \mathbb{R}^n, \qquad u \mapsto \begin{cases} u - \dfrac{f_k(u)}{\|\nabla f_k(u)\|^2}\, \nabla f_k(u), & Au \notin Q_{k,\xi}, \\ u, & Au \in Q_{k,\xi}. \end{cases} \tag{15}$$
Let $R_{\lambda_k, f_k}^0 = I + \lambda_k\left(G_{f_k}^0 - I\right)$; then by (14), we obtain:
$$u_{k+1} = P_{C_{k,\xi}}\left(u_k - \rho_k \nabla f_k(u_k)\right) = G_{U,c}^{\xi}\left(u_k - \frac{\lambda_k f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\, \nabla f_k(u_k)\right) = G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k).$$
Now, we propose the S-subgradient projection method with S-subdifferential functions for solving the nonconvex SFP:
$$u_{k+1} = G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k). \tag{16}$$
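To make iteration (16) concrete, here is one possible reading in code for a toy problem where c and q are smooth and convex, so that the S-subgradients reduce to gradients and $Q_{k,\xi}$ is handled by the closed-form halfspace projection. Every concrete choice (the matrix A, $c(u) = \|u\|^2$, $q(v) = \|v\|^2$, $\xi = 1$, $\lambda_k \equiv 1$) is illustrative:

```python
import numpy as np

def spm(A, c, sc, q, sq, xi, u0, lam=1.0, iters=300, eps=1e-12):
    u = u0
    for _ in range(iters):
        Au = A @ u
        # Inner step R: relax toward the halfspace Q_{k,xi} built from s_q
        if q(Au) > xi:
            phi = sq(Au)
            PAu = Au - (q(Au) - xi) / np.dot(phi, phi) * phi  # P_{Q_{k,xi}}(Au)
            r = Au - PAu
            g = A.T @ r                                       # grad f_k(u)
            gn = np.dot(g, g)
            if gn > eps:
                u = u - lam * (0.5 * np.dot(r, r)) / gn * g
        # Outer step: S-subgradient projector of c at level xi
        if c(u) > xi:
            s = sc(u)
            u = u + (xi - c(u)) / np.dot(s, s) * s
    return u

A = np.array([[0.5, 0.0], [0.0, 0.5]])
c = q = lambda x: np.dot(x, x)                 # c(u) = ||u||^2, q(v) = ||v||^2
sc = sq = lambda x: 2.0 * x                    # gradients as subgradient selections
u = spm(A, c, sc, q, sq, xi=1.0, u0=np.array([3.0, 4.0]))
assert c(u) <= 1.0 + 1e-6                      # u in C_xi
assert q(A @ u) <= 1.0 + 1e-6                  # Au in Q_xi
```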
Theorem 1.
Assume that conditions (1)–(7) are satisfied and $\inf_k \lambda_k(2 - \lambda_k) > 0$. Then, the sequence $\{u_k\}$ generated by (16) converges weakly to a point $\tilde{u}$ such that $\tilde{u} \in C_\xi$ and $A\tilde{u} \in Q_\xi$.
Proof. 
Let w be any point in the solution set; that is, $w \in C_\xi$ and $Aw \in Q_\xi$. Since $\varphi_k \in \partial_{V, r_q} q(Au_k)$, for any $Aw \in Q_\xi$, from (11), we obtain:
$$q(Au_k) + \langle \varphi_k, Aw - Au_k \rangle \leq q(Aw) + \frac{r_q}{2} d_V^2(Aw) - \frac{r_q}{2} d_V^2(Au_k) = q(Aw) - \frac{r_q}{2} d_V^2(Au_k) \leq q(Aw) \leq \xi.$$
Hence, $Aw \in Q_{k,\xi}$; moreover, $f_k(w) = 0$.
Next, we consider two cases.
If $Au_k \in Q_{k,\xi}$, then $G_{f_k}^0(u_k) = u_k$, so by the definition of $G_{f_k}^0$:
$$\left\langle G_{f_k}^{0}(u_k) - w,\; G_{f_k}^{0}(u_k) - u_k \right\rangle = \left\langle G_{f_k}^{0}(u_k) - w,\; u_k - u_k \right\rangle = 0.$$
If $Au_k \notin Q_{k,\xi}$, it is deduced from (12), (15), and $f_k(w) = 0$ that:
$$\begin{aligned}
\left\langle G_{f_k}^{0}(u_k) - w,\; G_{f_k}^{0}(u_k) - u_k \right\rangle
&= \left\langle u_k - \frac{f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\nabla f_k(u_k) - w,\; -\frac{f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\nabla f_k(u_k) \right\rangle \\
&= \frac{f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\left\langle w - u_k,\, \nabla f_k(u_k) \right\rangle + \frac{f_k^2(u_k)}{\|\nabla f_k(u_k)\|^2} \\
&\leq \frac{f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\left(f_k(w) - f_k(u_k)\right) + \frac{f_k^2(u_k)}{\|\nabla f_k(u_k)\|^2} = 0.
\end{aligned}$$
Whether or not $Au_k$ belongs to $Q_{k,\xi}$, we have:
$$\left\langle G_{f_k}^{0}(u_k) - w,\; G_{f_k}^{0}(u_k) - u_k \right\rangle \leq 0. \tag{17}$$
Likewise, we get:
$$\left\langle G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k) - w,\; G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k) - R_{\lambda_k, f_k}^{0}(u_k) \right\rangle \leq 0. \tag{18}$$
From the definition of $R_{\lambda_k, f_k}^0$ and (17), we estimate:
$$\begin{aligned}
\left\| R_{\lambda_k, f_k}^{0}(u_k) - w \right\|^2
&= \left\| u_k + \lambda_k\left(G_{f_k}^{0}(u_k) - u_k\right) - w \right\|^2 \\
&= \|u_k - w\|^2 + 2\lambda_k\left\langle u_k - G_{f_k}^{0}(u_k),\; G_{f_k}^{0}(u_k) - u_k \right\rangle \\
&\quad + 2\lambda_k\left\langle G_{f_k}^{0}(u_k) - w,\; G_{f_k}^{0}(u_k) - u_k \right\rangle + \lambda_k^2\left\| G_{f_k}^{0}(u_k) - u_k \right\|^2 \\
&\leq \|u_k - w\|^2 - \lambda_k(2 - \lambda_k)\left\| G_{f_k}^{0}(u_k) - u_k \right\|^2.
\end{aligned}$$
This, together with (16) and (18), implies that:
$$\begin{aligned}
\|u_{k+1} - w\|^2 &= \left\| G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k) - w \right\|^2 \\
&= \left\| R_{\lambda_k, f_k}^{0}(u_k) + \left(G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k) - R_{\lambda_k, f_k}^{0}(u_k)\right) - w \right\|^2 \\
&\leq \left\| R_{\lambda_k, f_k}^{0}(u_k) - w \right\|^2 - \left\| G_{U,c}^{\xi} R_{\lambda_k, f_k}^{0}(u_k) - R_{\lambda_k, f_k}^{0}(u_k) \right\|^2 \\
&\leq \|u_k - w\|^2 - \lambda_k(2 - \lambda_k)\left\| G_{f_k}^{0}(u_k) - u_k \right\|^2 - \left\| G_{U,c}^{\xi}\!\left(R_{\lambda_k, f_k}^{0}(u_k)\right) - R_{\lambda_k, f_k}^{0}(u_k) \right\|^2.
\end{aligned} \tag{19}$$
By $\inf_k \lambda_k(2 - \lambda_k) > 0$, we obtain the Fejér monotonicity:
$$\|u_{k+1} - w\|^2 \leq \|u_k - w\|^2, \quad k \geq 1.$$
Thus, $\lim_{k \to \infty} \|u_k - w\|$ exists, and hence $\{u_k\}$ is bounded.
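The Fejér property can be watched numerically in a stripped-down case. For $c(u) = \|u\|^2$ at level $\xi = 1$, the subgradient-projector step keeps u on its ray through the origin and maps $t = \|u\|^2$ to $(t+1)^2/(4t)$; taking a solution w on the same ray with $\|w\| = 1$, the distance $\|u_k - w\|$ is $|\sqrt{t_k} - 1|$. The starting value is an illustrative choice:

```python
import math

def step(t):                       # squared norm after one projector step
    return (t + 1.0) ** 2 / (4.0 * t)

t = 25.0
dists = []
for _ in range(8):
    dists.append(abs(math.sqrt(t) - 1.0))   # ||u_k - w|| on the common ray
    t = step(t)
assert all(a >= b for a, b in zip(dists, dists[1:]))   # monotone nonincreasing
assert dists[-1] < 1e-6                                # and convergent
```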
By (19), we find:
$$\lambda_k(2 - \lambda_k)\left\| G_{f_k}^{0}(u_k) - u_k \right\|^2 + \left\| G_{U,c}^{\xi}\!\left(R_{\lambda_k, f_k}^{0}(u_k)\right) - R_{\lambda_k, f_k}^{0}(u_k) \right\|^2 \leq \|u_k - w\|^2 - \|u_{k+1} - w\|^2.$$
By $\inf_k \lambda_k(2 - \lambda_k) > 0$, and letting $v_k = R_{\lambda_k, f_k}^{0}(u_k)$, we get:
$$\lim_{k \to \infty} \left\| G_{f_k}^{0}(u_k) - u_k \right\| = \lim_{k \to \infty} \left\| G_{U,c}^{\xi}(v_k) - v_k \right\| = 0. \tag{20}$$
One can see that:
$$\left\| G_{f_k}^{0}(u_k) - u_k \right\| = \left\| -\frac{f_k(u_k)}{\|\nabla f_k(u_k)\|^2}\,\nabla f_k(u_k) \right\| = \frac{f_k(u_k)}{\|\nabla f_k(u_k)\|}. \tag{21}$$
We observe from $f_k(w) = 0$ (so that $\nabla f_k(w) = 0$) that:
$$\|\nabla f_k(u_k)\| = \|\nabla f_k(u_k) - \nabla f_k(w)\| \leq \|A\|^2 \|u_k - w\|.$$
Therefore, $\{\|\nabla f_k(u_k)\|\}$ is bounded. From (20) and (21), we have $\lim_{k \to \infty} f_k(u_k) = 0$, which means:
$$\lim_{k \to \infty} \left\| Au_k - P_{Q_{k,\xi}}(Au_k) \right\| = 0.$$
Since q is locally Lipschitz, $\partial q$ is locally bounded; therefore, $\partial q$ is bounded on bounded sets, and so is $I - P_V$. From Lemma 2, we obtain that $\partial_{V, r_q} q$ is bounded on bounded sets; thus, there exists $\delta > 0$ such that $\|\varphi_k\| \leq \delta$. Since $P_{Q_{k,\xi}}(Au_k) \in Q_{k,\xi}$, we conclude:
$$q(Au_k) \leq \xi + \left\langle \varphi_k,\; Au_k - P_{Q_{k,\xi}}(Au_k) \right\rangle \leq \xi + \delta \left\| Au_k - P_{Q_{k,\xi}}(Au_k) \right\|. \tag{22}$$
Since $\{u_k\}$ is bounded, we can find a subsequence $\{u_{k_i}\}$ of $\{u_k\}$ such that $u_{k_i} \rightharpoonup \tilde{u}$. Then, the continuity of q and (22) imply that:
$$q(A\tilde{u}) = \lim_{i \to \infty} q(Au_{k_i}) \leq \xi.$$
Hence, $A\tilde{u} \in Q_\xi$.
Since $v_k = R_{\lambda_k, f_k}^{0}(u_k)$, we have $v_{k_i} = R_{\lambda_{k_i}, f_{k_i}}^{0}(u_{k_i})$, and then, from (20), we have:
$$\lim_{i \to \infty} \|v_{k_i} - u_{k_i}\| = \lim_{i \to \infty} \lambda_{k_i} \left\| G_{f_{k_i}}^{0}(u_{k_i}) - u_{k_i} \right\| = 0.$$
Since $u_{k_i} \rightharpoonup \tilde{u}$, we have $v_{k_i} \rightharpoonup \tilde{u}$. Next, two cases are considered.
If $v_{k_i} \in C_\xi$, i.e., $c(v_{k_i}) \leq \xi$, then $G_{U,c}^{\xi}(v_{k_i}) = v_{k_i}$, so:
$$\max\left\{c(v_{k_i}) - \xi,\, 0\right\} = 0$$
and:
$$\left\| s_c(v_{k_i}) \right\| \left\| G_{U,c}^{\xi}(v_{k_i}) - v_{k_i} \right\| = 0.$$
Hence, $\max\{c(v_{k_i}) - \xi, 0\} = \|s_c(v_{k_i})\| \left\| G_{U,c}^{\xi}(v_{k_i}) - v_{k_i} \right\|$.
If $v_{k_i} \notin C_\xi$, i.e., $c(v_{k_i}) > \xi$, then $\max\{c(v_{k_i}) - \xi, 0\} = c(v_{k_i}) - \xi$ and:
$$\left\| s_c(v_{k_i}) \right\| \left\| G_{U,c}^{\xi}(v_{k_i}) - v_{k_i} \right\| = \left\| s_c(v_{k_i}) \right\| \left\| v_{k_i} + \frac{\xi - c(v_{k_i})}{\|s_c(v_{k_i})\|^2}\, s_c(v_{k_i}) - v_{k_i} \right\| = c(v_{k_i}) - \xi.$$
No matter whether $v_{k_i}$ belongs to $C_\xi$ or not, we have $\max\{c(v_{k_i}) - \xi, 0\} = \|s_c(v_{k_i})\| \left\| G_{U,c}^{\xi}(v_{k_i}) - v_{k_i} \right\|$.
From Lemma 2, there exists $\kappa > 0$ such that $\{v_{k_i}\}$ lies in $B(\tilde{u}; \kappa)$ and:
$$\tau = \sup \left\| \partial c\left(B(\tilde{u}; \kappa)\right) \right\| + r_c \sup_{i \geq 1} \left\| (I - P_U)\, v_{k_i} \right\| < +\infty.$$
Hence,
$$\left\| s_c(v_{k_i}) \right\| \leq \tau, \quad i \geq 1.$$
By (20), we have:
$$\max\left\{c(\tilde{u}) - \xi,\, 0\right\} \leq \lim_{i \to \infty} \max\left\{c(v_{k_i}) - \xi,\, 0\right\} \leq \tau \lim_{i \to \infty} \left\| G_{U,c}^{\xi}(v_{k_i}) - v_{k_i} \right\| = 0.$$
Thus, $c(\tilde{u}) \leq \xi$; in other words, $\tilde{u} \in C_\xi$. Together with $A\tilde{u} \in Q_\xi$, this completes the proof.  □
Remark 1.
We raise two questions:
1.
Can the result presented in Theorem 1 hold in infinite dimensional spaces?
2.
Since we only obtain weak convergence of the proposed algorithm in this paper, how can the algorithm be modified so that strong convergence is guaranteed?
Remark 2.
Let $\{\lambda_k\}$ be a sequence such that $\inf_k \lambda_k(2 - \lambda_k) > 0$. In the process of proving the convergence of the subgradient projection algorithm, Guo [25] used $\lambda_k = 1$ in particular; in our proof, we do not.

4. Conclusions

In this paper, we studied the SFP in the nonconvex case. In finite dimensional spaces, we took two S-subdifferentiable functions and structured nonconvex sets based on their lower level sets. Using the nonvanishing of the S-subgradient of an S-subdifferentiable function, we introduced the S-subgradient projector of a continuous, but not necessarily convex, function. With this S-subgradient projector, we transformed the GPM into the SPM; that is, we proposed the S-subgradient projection method with S-subdifferential functions for solving the nonconvex SFP. A weak convergence theorem was established.

Author Contributions

All authors participated in the conceptualization, validation, formal analysis, and investigation, as well as the writing of the original draft preparation, reviewing, and editing.

Funding

This work was supported by the Key Subject Program of Lingnan Normal University (1171518004), the Natural Science Foundation of Guangdong Province (2018A0303070012), the Young Innovative Talents Project at Guangdong Universities (2017KQNCX125), and the Ph.D. research startup foundation of Lingnan Normal University (ZL1919). Yonghong Yao was supported in part by the grant TD13-5033.

Conflicts of Interest

The authors declare that they have no competing interests.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. López, G.; Martín, V.; Wang, F.; Xu, H. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar] [CrossRef]
  3. Byrne, C. Iterative oblique projection onto convex subsets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  4. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256. [Google Scholar] [CrossRef] [Green Version]
  5. Ceng, L.; Petruşel, A.; Yao, J. Relaxed extragradient methods with regularization for general system of variational inequalities with constraints of split feasibility and fixed point problems. In Abstract and Applied Analysis; Hindawi: London, UK, 2013. [Google Scholar]
  6. Ceng, L.; Wong, M.; Petruşel, A.; Yao, J. Relaxed implicit extragradient-like methods for finding minimum-norm solutions of the split feasibility problem. Fixed Point Theory 2013, 14, 327–344. [Google Scholar]
  7. Ceng, L.C.; Petrusel, A.; Yao, J.C.; Yao, Y. Hybrid viscosity extragradient method for systems of variational inequalities, fixed Points of nonexpansive mappings, zero points of accretive operators in Banach spaces. Fixed Point Theory 2018, 19, 487–502. [Google Scholar] [CrossRef]
  8. Ceng, L.; Petruşel, A.; Yao, J.; Yao, Y. Systems of variational inequalities with hierarchical variational inequality constraints for Lipschitzian pseudocontractions. Fixed Point Theory 2019, 20, 113–134. [Google Scholar] [CrossRef] [Green Version]
  9. Wang, F.; Xu, H. Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2010, 102085. [Google Scholar] [CrossRef] [Green Version]
  10. Xu, H. A variable Krasnosel’skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 2006, 22, 2021–2034. [Google Scholar] [CrossRef]
  11. Chen, J.; Ceng, L.; Qiu, Y.; Kong, Z. Extra-gradient methods for solving split feasibility and fixed point problems. Fixed Point Theory Appl. 2015, 192. [Google Scholar] [CrossRef] [Green Version]
  12. Yao, Y.; Liou, Y.C.; Yao, J.C. Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 2015, 127. [Google Scholar] [CrossRef] [Green Version]
  13. Yao, Y.; Yao, J.; Liou, Y.; Postolache, M. Iterative algorithms for split common fixed points of demicontractive operators without priori knowledge of operator norms. Carpathian J. Math. 2018, 34, 459–466. [Google Scholar]
  14. Yao, Y.; Liou, Y.; Postolache, M. Self-adaptive algorithms for the split problem of the demicontractive operators. Optimization 2018, 67, 1309–1319. [Google Scholar] [CrossRef]
  15. Yao, Y.; Leng, L.; Postolache, M.; Zheng, X. Mann-type iteration method for solving the split common fixed point problem. J. Nonlinear Convex Anal. 2017, 18, 875–882. [Google Scholar]
  16. Yao, Y.; Postolache, M.; Yao, J. An iterative algorithm for solving the generalized variational inequalities and fixed points problems. Mathematics 2019, 7, 61. [Google Scholar] [CrossRef] [Green Version]
  17. Yao, Y.; Liou, Y.; Yao, J. Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 2017, 10, 843–854. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, C.; Zhu, Z.; Yao, Y.; Liu, Q. Homotopy method for solving mathematical programs with bounded box-constrained variational inequalities. Optimization 2019, 68, 2293–2312. [Google Scholar] [CrossRef]
  19. Zhao, X.P.; Yao, J.C.; Yao, Y. A proximal algorithm for solving split monotone variational inclusions. UPB Sci. Bull. Ser. A 2020, in press. [Google Scholar]
  20. Yao, Y.; Postolache, M.; Liou, Y.C. Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 201. [Google Scholar] [CrossRef] [Green Version]
  21. Dadashi, V.; Postolache, M. Forward-backward splitting algorithm for fixed point problems and zeros of the sum of monotone operators. Arab. J. Math. 2019. [Google Scholar] [CrossRef] [Green Version]
  22. Yao, Y.; Postolache, M.; Zhu, Z. Gradient methods with selection technique for the multiple-sets split feasibility problem. Optimization 2019. [Google Scholar] [CrossRef]
  23. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef] [Green Version]
  24. Yang, Q. The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 2004, 20, 1261–1266. [Google Scholar] [CrossRef]
  25. Guo, Y. CQ Algorithms: Theory, Computations and Nonconvex Extensions. Master’s Thesis, University of British Columbia, Vancouver, BC, Canada, 2014. [Google Scholar]
  26. Bauschke, H.; Combettes, P. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  27. Cegielski, A. Iterative Methods for Fixed Point Problems in Hilbert Spaces; Lecture Notes in Mathematics; Springer: Heidelberg, Germany, 2012. [Google Scholar]

Share and Cite

Chen, J.; Postolache, M.; Yao, Y. S-Subgradient Projection Methods with S-Subdifferential Functions for Nonconvex Split Feasibility Problems. Symmetry 2019, 11, 1517. https://doi.org/10.3390/sym11121517