Article

Accelerated Gradient-CQ Algorithms for Split Feasibility Problems

1 School of Mathematics and Statistics, Institute of Big Data Analysis and Applied Mathematics, Hubei University of Education, Wuhan 430205, China
2 School of Mathematics and Statistics, Shanxi Datong University, Datong 037009, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(7), 1121; https://doi.org/10.3390/sym17071121
Submission received: 12 June 2025 / Revised: 26 June 2025 / Accepted: 8 July 2025 / Published: 12 July 2025
(This article belongs to the Section Mathematics)

Abstract

This work focuses on split feasibility problems in Hilbert spaces. To accelerate the convergence rate of gradient-CQ algorithms, we introduce an inertial term. Additionally, non-monotone stepsizes are employed so that the relaxation parameters no longer act directly on the original stepsizes, ensuring that these stepsizes maintain a positive lower bound; thereby, the efficiency of the algorithms is improved. Moreover, the weak and strong convergence of the proposed algorithms are established through proofs with a similar symmetric structure, without requiring the assumption of Lipschitz continuity for the gradient mappings. Finally, the LASSO problem is presented to illustrate and compare the performance of the algorithms.

1. Introduction

This work focuses on the split feasibility problem (SFP):

Seek $S^* \in C$ such that $AS^* \in Q$,

where $H_1$ and $H_2$ are both real Hilbert spaces, the closed convex sets $C \subseteq H_1$ and $Q \subseteq H_2$ are nonempty, and $A : H_1 \to H_2$ is a bounded linear operator with adjoint $A^*$. If such a point $S^*$ exists, then the solution set of the SFP is given by

$$\Gamma = \{ S^* \in C : AS^* \in Q \}.$$
Censor and Elfving [1] were the first to study the original formulation of the SFP for modeling inverse problems. This model has since been widely applied to practical problems such as signal processing [2] and medical image reconstruction [3]. Because of its wide range of applications, numerous numerical algorithms have been proposed to solve the SFP; see [2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18] and the references therein. Among these, a classical method for solving the SFP is Byrne's CQ algorithm [2,3], which is defined by the following scheme:
$$\vartheta_{n+1} = P_C\big(\vartheta_n - \tau A^*(I - P_Q)A\vartheta_n\big), \quad n \ge 1, \ (1)$$

where $\tau \in \big(0, \frac{2}{\|A\|^2}\big)$, while $P_C$ and $P_Q$ are the projections onto $C$ and $Q$, respectively. In numerical experiments, the sets $C$ and $Q$ are defined as follows:

$$C = \{\zeta_1 \in H_1 : c(\zeta_1) \le 0\}, \qquad Q = \{\zeta_2 \in H_2 : q(\zeta_2) \le 0\}, \ (2)$$

where $c : H_1 \to \mathbb{R}$ and $q : H_2 \to \mathbb{R}$ are two proper convex functions. Since the projections $P_C$ and $P_Q$ then have no closed-form expressions in general, algorithm (1) is not directly applicable. To address this, Yang [19] proposed the following weakly convergent algorithm:
$$\vartheta_{n+1} = P_{C_n}\big(\vartheta_n - \tau_n\nabla V_n(\vartheta_n)\big), \ (3)$$

where the stepsize is $\tau_n \in \big(0, \frac{2}{\|A\|^2}\big)$ and

$$C_n = \{\zeta_1 \in H_1 : c(\vartheta_n) \le \langle \varepsilon_n, \vartheta_n - \zeta_1 \rangle\}$$

with $\varepsilon_n \in \partial c(\vartheta_n)$, and

$$Q_n = \{\zeta_2 \in H_2 : q(A\vartheta_n) \le \langle \zeta_n, A\vartheta_n - \zeta_2 \rangle\}$$

with $\zeta_n \in \partial q(A\vartheta_n)$; the function $V_n$ and its gradient $\nabla V_n$ are defined as in Lemma 6. Evidently, $C \subseteq C_n$ and $Q \subseteq Q_n$, and the projections onto the half-spaces $C_n$ and $Q_n$ have closed-form expressions. Therefore, algorithm (3) performs well. However, the fixed stepsize $\tau_n \in \big(0, \frac{2}{\|A\|^2}\big)$ is quite conservative, which negatively impacts the numerical performance of Byrne's CQ algorithm. To overcome this limitation, many self-adaptive stepsize algorithms have been proposed for solving the SFP; see, e.g., [7,8,10,11,12,13,14,16,17,19,20,21,22,23,24]. Among them, Kesornprom et al. [22] proposed the following gradient-CQ algorithm for solving the SFP:
$$z_n = \vartheta_n - \tau_n\nabla V_n(\vartheta_n), \qquad \vartheta_{n+1} = P_{C_n}\big(z_n - \lambda_n\nabla V_n(z_n)\big), \ (4)$$

where $\tau_n = \frac{\gamma_n V_n(\vartheta_n)}{\|\nabla V_n(\vartheta_n)\|^2 + \theta_n}$ and $\lambda_n = \frac{\gamma_n V_n(z_n)}{\|\nabla V_n(z_n)\|^2 + \theta_n}$, with $0 < \gamma_n < 4$ and $0 < \theta_n < 1$. They proved the weak convergence of algorithm (4) under the conditions $\inf_n \gamma_n(4 - \gamma_n) > 0$ and $\lim_{n\to\infty}\theta_n = 0$. To obtain strong convergence, they presented the following gradient-CQ algorithm based on Halpern's iteration process [25,26,27]. Specifically, a fixed vector $u \in H_1$ is given and the initial guess $\vartheta_1$ is chosen arbitrarily. The sequence $\{\vartheta_n\}$ is computed as follows:

$$z_n = \vartheta_n - \tau_n\nabla V_n(\vartheta_n), \qquad \vartheta_{n+1} = \pi_n u + (1 - \pi_n)P_{C_n}\big(z_n - \lambda_n\nabla V_n(z_n)\big), \ (5)$$

where $\tau_n$ and $\lambda_n$ are given in (4). The resulting sequence $\{\vartheta_n\}$ converges strongly to $P_\Gamma u$ under the conditions $\inf_n \gamma_n(4 - \gamma_n) > 0$, $\lim_{n\to\infty}\theta_n = 0$, $\{\pi_n\} \subset (0, 1)$, $\lim_{n\to\infty}\pi_n = 0$ and $\sum_{n=1}^{\infty}\pi_n = \infty$.
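To make the update rule (4) concrete, the following Python sketch performs one gradient-CQ iteration for a finite-dimensional SFP. It is only an illustration under our own assumptions: the half-space projections are passed in as callables, and the helper names (gradient_cq_step, proj_Cn, proj_Qn) are hypothetical, not taken from [22].

```python
import numpy as np

def gradient_cq_step(x, A, proj_Cn, proj_Qn, gamma=2.0, theta=0.5):
    """One iteration of the gradient-CQ scheme (4) (illustrative sketch).

    Here V_n(x) = 0.5*||(I - P_{Q_n}) A x||^2 and
    grad V_n(x) = A^T (I - P_{Q_n}) A x, with 0 < gamma < 4, 0 < theta < 1.
    """
    def V(u):
        r = A @ u - proj_Qn(A @ u)
        return 0.5 * r @ r

    def grad_V(u):
        return A.T @ (A @ u - proj_Qn(A @ u))

    gx = grad_V(x)
    tau = gamma * V(x) / (gx @ gx + theta)    # stepsize tau_n in (4)
    z = x - tau * gx                          # predictor step
    gz = grad_V(z)
    lam = gamma * V(z) / (gz @ gz + theta)    # stepsize lambda_n in (4)
    return proj_Cn(z - lam * gz)              # corrector step
```

Because theta_n sits in the denominator, both stepsizes can shrink toward zero as the iterates approach a solution, which is precisely the conservatism the present work aims to remove.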
To accelerate the convergence rate of relaxed CQ algorithms, the inertial extrapolation technique [28] has been adopted, giving rise to inertial relaxed CQ algorithms; see, e.g., [5,6,8,9,10,11]. Recently, Ma and Liu [11] proved the strong convergence of the relaxed CQ approach [9] using the following scheme:

$$\begin{aligned} R_n &= \vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}), \\ z_n &= P_{C_n}\big(R_n - \tau_n\nabla V_n(R_n)\big), \\ \vartheta_{n+1} &= \pi_n u + (1 - \pi_n)z_n, \\ \tau_{n+1} &= \begin{cases} \min\Big\{\dfrac{2\delta V_n(R_n)}{\|\nabla V_n(R_n)\|^2}, \, d_n\tau_n + \psi_n\Big\}, & \text{if } \nabla V_n(R_n) \ne 0, \\ d_n\tau_n + \psi_n, & \text{otherwise}, \end{cases} \end{aligned} \ (6)$$

where $\alpha_n$ is the inertial parameter, and $d_n$ and $\psi_n$ are chosen as in Lemma 3.
Note that under the assumptions of Lipschitz continuity of the gradient operator and firm nonexpansiveness of $I - P_{Q_n}$, the algorithms described above exhibit certain limitations. In algorithms (4) and (5), the relaxation parameters $\theta_n$ and the condition $\inf_n \gamma_n(4 - \gamma_n) > 0$ act directly on the stepsizes. As a result, the stepsizes lack positive lower bounds, possibly leading to conservative choices and, consequently, slow convergence, as observed in [29]. The convergence of these algorithms is established under the aforementioned conditions. However, gradient-CQ algorithms with inertial effects, as in (6), have not yet been studied. This raises a natural question:
Question: Can we develop new modifications of algorithms (4) and (5) that not only improve numerical performance but also ensure convergence while relaxing the assumptions of Lipschitz continuity of the gradient operator $\nabla V_n$ and firm nonexpansiveness of the mapping $I - P_{Q_n}$?
In response to the proposed question, we answer in the affirmative. The main contributions of this work are as follows:
- We incorporate the inertial idea [8,9,11,28,30,31] into algorithms (4) and (5) to accelerate their convergence rate and establish new convergence results;
- We adopt double non-Lipschitz stepsizes [9], which remove the restrictions imposed by the conditions $\inf_n \gamma_n(4 - \gamma_n) > 0$ and $\lim_{n\to\infty}\theta_n = 0$. Our proposed stepsizes have similar, symmetric structures; they are bounded below by positive constants and may grow with the iteration count, leading to accelerated convergence;
- We propose inertial gradient-CQ algorithms with double non-Lipschitz stepsizes for solving the SFP, and we establish their weak and strong convergence without requiring Lipschitz continuity of $\nabla V_n$ or firm nonexpansiveness of $I - P_{Q_n}$, respectively;
- We apply our methods to the LASSO problem to demonstrate and validate the theoretical results.
The paper is organized as follows. Section 2 lists some preliminaries. Our proposed algorithms and their convergence results are presented in Section 3 and Section 4. A comparison of numerical results is described in Section 5.

2. Preliminaries

The symbol ⇀ stands for weak convergence and → represents strong convergence. Let $H_1$ and $H_2$ be real Hilbert spaces. For any sequence $\{\vartheta_n\} \subset H_1$, $\omega_w(\vartheta_n)$ denotes the weak limit set of $\{\vartheta_n\}$; namely, $\omega_w(\vartheta_n) = \{\zeta_1 : \exists \{\vartheta_{n_i}\} \subseteq \{\vartheta_n\} \text{ such that } \vartheta_{n_i} \rightharpoonup \zeta_1\}$.
Definition 1
([32]). Let $H_1$ be a real Hilbert space and let $V : H_1 \to (-\infty, +\infty]$ be a convex function. An element $\nu \in H_1$ is called a subgradient of $V$ at $\tilde v \in H_1$ if

$$\langle \nu, \zeta_1 - \tilde v \rangle \le V(\zeta_1) - V(\tilde v), \quad \forall \zeta_1 \in H_1.$$

The collection of all subgradients of $V$ at $\tilde v$ is the subdifferential of $V$ at this point, denoted by $\partial V(\tilde v)$, i.e.,

$$\partial V(\tilde v) = \{\nu \in H_1 : \langle \nu, \zeta_1 - \tilde v \rangle \le V(\zeta_1) - V(\tilde v), \ \forall \zeta_1 \in H_1\}.$$
Lemma 1.
Let $H_1$ be a real Hilbert space. Then, for all $\zeta_1, \zeta_2, \zeta_3 \in H_1$:
(i) $\|\alpha\zeta_1 + (1 - \alpha)\zeta_2\|^2 = \alpha\|\zeta_1\|^2 + (1 - \alpha)\|\zeta_2\|^2 - \alpha(1 - \alpha)\|\zeta_1 - \zeta_2\|^2, \ \forall \alpha \in (0, 1)$;
(ii) $\|\zeta_1 + \zeta_2\|^2 \le \|\zeta_1\|^2 + 2\langle \zeta_2, \zeta_1 + \zeta_2 \rangle$;
(iii) $\langle \zeta_1 - \zeta_2, \zeta_1 - \zeta_3 \rangle = \frac{1}{2}\|\zeta_1 - \zeta_2\|^2 + \frac{1}{2}\|\zeta_1 - \zeta_3\|^2 - \frac{1}{2}\|\zeta_2 - \zeta_3\|^2$.
Let $C$ be a nonempty closed and convex subset of $H_1$. The orthogonal projection of $\zeta_1$ onto $C$ is defined by

$$P_C\zeta_1 = \operatorname{argmin}\{\|\zeta_1 - \zeta_2\| : \zeta_2 \in C\}, \quad \zeta_1 \in H_1.$$

By Lemma 2 (iii) below, $P_C$ is a firmly nonexpansive mapping.
Lemma 2
([33]). Let C be a nonempty closed convex subset of a real Hilbert space H 1 and let P C be the metric projection from H 1 onto C. Then the following statements hold:
(i) $\langle \zeta_1 - P_C\zeta_1, \zeta_2 - P_C\zeta_1 \rangle \le 0$ for all $\zeta_1 \in H_1$ and $\zeta_2 \in C$;
(ii) $\|P_C\zeta_1 - P_C\zeta_2\| \le \|\zeta_1 - \zeta_2\|$ for all $\zeta_1, \zeta_2 \in H_1$;
(iii) $\|P_C\zeta_1 - P_C\zeta_2\|^2 \le \langle \zeta_1 - \zeta_2, P_C\zeta_1 - P_C\zeta_2 \rangle$ for all $\zeta_1, \zeta_2 \in H_1$;
(iv) $\|P_C\zeta_1 - \zeta_2\|^2 \le \|\zeta_1 - \zeta_2\|^2 - \|\zeta_1 - P_C\zeta_1\|^2$ for all $\zeta_1 \in H_1$ and $\zeta_2 \in C$.
A function $V : H_1 \to \mathbb{R}$ is called weakly lower semi-continuous (w-lsc) at $\tilde v$ if $\vartheta_n \rightharpoonup \tilde v$ implies

$$V(\tilde v) \le \liminf_{n\to\infty} V(\vartheta_n).$$
Lemma 3
([34]). Let $\{\varphi_n\}$ be a sequence of nonnegative numbers fulfilling

$$\varphi_{n+1} \le d_n\varphi_n + \psi_n, \quad \forall n \in \mathbb{N},$$

where $\{d_n\}$ and $\{\psi_n\}$ are sequences of nonnegative numbers such that $\{d_n\} \subset [1, +\infty)$, $\sum_{n=1}^{\infty}(d_n - 1) < \infty$ and $\sum_{n=1}^{\infty}\psi_n < \infty$. Then, $\lim_{n\to\infty}\varphi_n$ exists.
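For instance, the choice $d_n = 1 + \frac{1}{(n+1)^{1.1}}$ used in the experiments of Section 5 (together with $\psi_n = 0$) satisfies the hypotheses of Lemma 3, since

$$\sum_{n=1}^{\infty}(d_n - 1) = \sum_{n=1}^{\infty}\frac{1}{(n+1)^{1.1}} < \infty,$$

a convergent p-series with $p = 1.1 > 1$.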
Lemma 4
([9]). Let $H_1$ be a real Hilbert space and let $\{\vartheta_n\}$ be a sequence in $H_1$ such that there exists a nonempty closed and convex subset $C$ of $H_1$ fulfilling the following conditions:
(i) for all $z \in C$, $\lim_{n\to\infty}\|\vartheta_n - z\|$ exists;
(ii) any weak cluster point of $\{\vartheta_n\}$ belongs to $C$.
Then, there exists $S^* \in C$ such that $\{\vartheta_n\}$ converges weakly to $S^*$.
Lemma 5
((Lemma 2, [9])). Let $\{\Xi_n\} \subset [0, \infty)$ and $\{b_n\} \subset [0, \infty)$ be sequences such that there exists $N > 0$ for which, for all $n \ge N$,
(i) $\Xi_{n+1} - \Xi_n \le \alpha_n(\Xi_n - \Xi_{n-1}) + b_n$;
(ii) $\sum_{n=N}^{\infty}b_n < \infty$;
(iii) $\{\alpha_n\} \subset [0, \beta]$, where $\beta \in [0, 1)$.
Then, $\{\Xi_n\}$ is a convergent sequence and $\sum_{n=N}^{\infty}[\Xi_n - \Xi_{n-1}]_+ < \infty$, where $[t]_+ = \max\{t, 0\}$ for any $t \in \mathbb{R}$.
Lemma 6
([35]). Let $C$ and $Q$ be closed and convex subsets of real Hilbert spaces $H_1$ and $H_2$, respectively, and let $A : H_1 \to H_2$ be a bounded linear operator. Let $V : H_1 \to \mathbb{R}$ be the function defined by

$$V(\zeta_1) = \frac{1}{2}\|A\zeta_1 - P_QA\zeta_1\|^2, \quad \zeta_1 \in H_1.$$

The following properties hold:
(i) $V$ is convex and differentiable;
(ii) $V$ is w-lsc on $H_1$;
(iii) $\nabla V(\zeta_1) = A^*(I - P_Q)A\zeta_1, \ \zeta_1 \in H_1$;
(iv) $\nabla V$ is $\|A\|^2$-Lipschitz continuous.
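In finite dimensions, the function $V$ of Lemma 6 and its gradient translate directly into code. The following sketch assumes only that a projection onto $Q$ is available; the unit-ball projection at the end is our own illustrative choice, not part of the paper.

```python
import numpy as np

def make_V(A, proj_Q):
    """Build V(x) = 0.5*||A x - P_Q(A x)||^2 and
    grad V(x) = A^T (I - P_Q) A x  (Lemma 6 (iii))."""
    def V(x):
        r = A @ x - proj_Q(A @ x)
        return 0.5 * r @ r

    def grad_V(x):
        return A.T @ (A @ x - proj_Q(A @ x))

    return V, grad_V

# Illustrative Q: the unit Euclidean ball, whose projection is a radial scaling.
def proj_unit_ball(y):
    ny = np.linalg.norm(y)
    return y if ny <= 1.0 else y / ny
```

By Lemma 6 (iv), the gradient built this way is $\|A\|^2$-Lipschitz, which is consistent with the stepsize lower bounds established later in Lemma 10.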
Lemma 7
([36]). Suppose that $\{\Pi_n\}$ is a sequence of nonnegative real numbers such that there exists $N > 0$ for which, for all $n \ge N$,

$$\Pi_{n+1} \le (1 - \pi_n)\Pi_n + \pi_nb_n, \qquad \Pi_{n+1} \le \Pi_n - \delta_n + \Xi_n,$$

where $\{\pi_n\}$ is a sequence in $(0, 1)$, $\{\delta_n\}$ is a sequence of nonnegative real numbers, and $\{b_n\}$ and $\{\Xi_n\}$ are two sequences in $\mathbb{R}$ such that
(i) $\sum_{n=1}^{\infty}\pi_n = \infty$;
(ii) $\lim_{n\to\infty}\Xi_n = 0$;
(iii) $\lim_{i\to\infty}\delta_{n_i} = 0$ implies $\limsup_{i\to\infty}b_{n_i} \le 0$ for any subsequence $\{n_i\}$ of $\{n\}$.
Then, $\lim_{n\to\infty}\Pi_n = 0$.
Lemma 8
([37]). Consider the SFP with the function $V$ as in Lemma 6, and let $\tau > 0$ and $S^* \in H_1$. Then, the following statements are equivalent:
(i) the point $S^*$ solves the SFP;
(ii) the point $S^*$ solves the fixed-point equation

$$S^* = P_C(S^* - \tau\nabla V(S^*)) = P_C(S^* - \tau A^*(I - P_Q)AS^*);$$

(iii) the point $S^*$ solves the variational inequality problem with respect to the gradient of $V$; namely, find a point $\zeta_1 \in C$ such that

$$\langle \nabla V(\zeta_1), \varsigma - \zeta_1 \rangle \ge 0, \quad \forall \varsigma \in C.$$

3. Weak Convergence

In this section, we introduce an accelerated gradient-CQ algorithm for solving the SFP in real Hilbert spaces, and we analyze its weak convergence under conditions weaker than those typically assumed. Before presenting our algorithm, we list several preliminary conditions below.
(A1) The solution set of the SFP is nonempty, i.e., $\Gamma \ne \emptyset$.
(A2) The sets $C$ and $Q$ are defined in (2), the sets $C_n$ and $Q_n$ are defined as in (3), and the function $V_n$ with its gradient $\nabla V_n$ is defined as in Lemma 6.
Now, our proposed algorithm is as follows.
Remark 1.
As shown in Algorithm 1, if $\vartheta_{n+1} = z_n = R_n$, then it can be seen from (7) that $R_n = P_{C_n}(R_n - \tau_n\nabla V_n(R_n))$, that is, $R_n \in C_n$. From Lemma 8 (i)–(ii), we also obtain $AR_n \in Q_n$. Combining the definitions of the sets $C_n$ and $Q_n$, we further see that $c(R_n) \le 0$ and $q(AR_n) \le 0$. Thus, it can be concluded from (2) that $R_n \in C$ and $AR_n \in Q$. Thereby, $R_n$ is a solution of the SFP.
Algorithm 1: A weakly convergent algorithm for the SFP.
    Step 0. Take $\vartheta_0, \vartheta_1 \in H_1$, $\tau_1 > 0$, $\lambda_1 > 0$, $\delta \in (0, \rho) \subset (0, 1)$ and $\{\alpha_n\} \subset [0, \beta) \subset [0, 1)$. The sequence $\{d_n\}$ satisfies $\{d_n\} \subset [1, +\infty)$ and $\sum_{n=1}^{\infty}(d_n - 1) < \infty$.
    Step n. Compute

$$R_n = \vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}), \qquad z_n = R_n - \tau_n\nabla V_n(R_n), \qquad \vartheta_{n+1} = P_{C_n}\big(z_n - \lambda_n\nabla V_n(z_n)\big), \ (7)$$

    where the stepsizes $\tau_n$ and $\lambda_n$ are updated via

$$\tau_{n+1} = \begin{cases} \min\Big\{\dfrac{\delta V_n(R_n)}{\|\nabla V_n(R_n)\|^2}, d_n\tau_n\Big\}, & \text{if } \nabla V_n(R_n) \ne 0, \\ d_n\tau_n, & \text{otherwise}, \end{cases} \ (8)$$

    and

$$\lambda_{n+1} = \begin{cases} \min\Big\{\dfrac{\delta V_n(z_n)}{\|\nabla V_n(z_n)\|^2}, d_n\lambda_n\Big\}, & \text{if } \nabla V_n(z_n) \ne 0, \\ d_n\lambda_n, & \text{otherwise}. \end{cases}$$

    Stopping criterion: If $\vartheta_{n+1} = z_n = R_n$, then stop; $R_n$ is a solution of the SFP. Otherwise, return to Step n.
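A minimal, self-contained sketch of Algorithm 1 in Python follows. It assumes, for simplicity, that the relaxed sets are frozen so that fixed callables V, grad_V and proj_C can be reused (in the algorithm proper, $C_n$ and $Q_n$ are rebuilt from subgradients at each iterate); the parameter values mirror those used in Section 5, and the function name is our own.

```python
import numpy as np

def algorithm1(x0, x1, V, grad_V, proj_C, n_iter=3000,
               tau=0.3, lam=0.3, delta=0.3, beta=0.3):
    """Illustrative sketch of Algorithm 1 with the stepsize rule (8),
    d_n = 1 + 1/(n+1)**1.1 and rho_n = 1/n**2 (cf. Remark 2)."""
    x_prev, x = x0, x1
    for n in range(1, n_iter + 1):
        d_n = 1.0 + 1.0 / (n + 1) ** 1.1
        diff2 = np.linalg.norm(x - x_prev) ** 2
        alpha = beta if diff2 == 0 else min(1.0 / n**2 / diff2, beta)
        R = x + alpha * (x - x_prev)             # inertial extrapolation
        gR = grad_V(R)
        z = R - tau * gR
        gz = grad_V(z)
        x_prev, x = x, proj_C(z - lam * gz)      # scheme (7)
        # non-monotone updates (8): stepsizes keep a positive lower bound
        nR, nz = gR @ gR, gz @ gz
        tau = min(delta * V(R) / nR, d_n * tau) if nR > 0 else d_n * tau
        lam = min(delta * V(z) / nz, d_n * lam) if nz > 0 else d_n * lam
    return x
```

Note that, unlike (4), the rules (8) never divide by a vanishing relaxation parameter, which is how the stepsizes retain the positive lower bound proved in Lemma 10.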
Remark 2
([38]). The summability condition in Lemma 9 is easy to enforce in computation. Specifically, given two consecutive iterates $\vartheta_n$ and $\vartheta_{n-1}$, one can compute $\vartheta_{n+1}$ using (7), where the inertial parameter $\alpha_n$ is chosen to satisfy $0 \le \alpha_n \le \bar\alpha_n$, with

$$\bar\alpha_n = \begin{cases} \min\Big\{\dfrac{\varrho_n}{\|\vartheta_n - \vartheta_{n-1}\|^2}, \beta\Big\}, & \text{if } \vartheta_n \ne \vartheta_{n-1}, \\ \beta, & \text{otherwise}, \end{cases} \ (9)$$

where $\varrho_n \in [0, \infty)$ fulfills $\sum_{n=1}^{\infty}\varrho_n < \infty$.
Lemma 9.
Let the sequence $\{\vartheta_n\}$ be generated by Algorithm 1, and suppose there exists $N > 0$ such that $\sum_{n=N}^{\infty}\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 < \infty$. Then, $\{\vartheta_n\}$ is a bounded sequence.
Now, we show that our proposed stepsizes are well defined without assuming Lipschitz continuity of $\nabla V_n$.
Lemma 10.
Let the sequences $\{\tau_n\}$ and $\{\lambda_n\}$ be generated by Algorithm 1. Then, $\lim_{n\to\infty}\tau_n = \tau$ and $\lim_{n\to\infty}\lambda_n = \lambda$, with $\tau \ge \min\big\{\frac{\delta}{2\|A^*A\|}, \tau_1\big\}$ and $\lambda \ge \min\big\{\frac{\delta}{2\|A^*A\|}, \lambda_1\big\}$, where $\tau_1$ and $\lambda_1$ are the initial stepsizes.
Proof. 
In the case $\nabla V_n(R_n) \ne 0$, we obtain

$$\begin{aligned} \|\nabla V_n(R_n)\|^2 &= \langle \nabla V_n(R_n), \nabla V_n(R_n) \rangle = \langle A^*(I - P_{Q_n})AR_n, A^*(I - P_{Q_n})AR_n \rangle \\ &= \langle (I - P_{Q_n})AR_n, AA^*(I - P_{Q_n})AR_n \rangle \le \|A^*A\|\,\|(I - P_{Q_n})AR_n\|^2 = 2\|A^*A\|V_n(R_n), \end{aligned}$$

which yields

$$\frac{\delta V_n(R_n)}{\|\nabla V_n(R_n)\|^2} \ge \frac{\delta}{2\|A^*A\|}.$$

This further shows that

$$\tau_{n+1} = \min\Big\{\frac{\delta V_n(R_n)}{\|\nabla V_n(R_n)\|^2}, d_n\tau_n\Big\} \ge \min\Big\{\frac{\delta}{2\|A^*A\|}, \tau_n\Big\},$$

since $d_n \ge 1$. By induction, the sequence $\{\tau_n\}$ is bounded below by $\min\big\{\frac{\delta}{2\|A^*A\|}, \tau_1\big\}$. Using (8), we also find that

$$\tau_{n+1} \le d_n\tau_n.$$

By Lemma 3, $\lim_{n\to\infty}\tau_n$ exists; denote it by $\tau$. Since its lower bound is positive, $\tau > 0$. Similarly, we obtain $\lim_{n\to\infty}\lambda_n = \lambda \ge \min\big\{\frac{\delta}{2\|A^*A\|}, \lambda_1\big\} > 0$. □
We prove the following theorem without requiring firm nonexpansiveness of $I - P_{Q_n}$.
Theorem 1.
Assume that conditions (A1)–(A2) hold and that there exists $N > 0$ such that $\sum_{n=N}^{\infty}\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 < \infty$. Then, the sequence $\{\vartheta_n\}$ generated by Algorithm 1 converges weakly to a point in the solution set $\Gamma$ of the SFP.
Proof. 
Let $\varsigma \in \Gamma$. Since $C \subseteq C_n$ and $Q \subseteq Q_n$, we see that $\varsigma = P_C(\varsigma) = P_{C_n}(\varsigma)$ and $A\varsigma = P_Q(A\varsigma) = P_{Q_n}(A\varsigma)$. Using Lemma 1 (iii) and Lemma 2 (ii), we show that

$$\begin{aligned} \langle R_n - \varsigma, \nabla V_n(R_n) \rangle &= \langle R_n - \varsigma, A^*(I - P_{Q_n})AR_n \rangle = \langle AR_n - A\varsigma, (I - P_{Q_n})AR_n \rangle \\ &= \|AR_n - P_{Q_n}AR_n\|^2 + \langle P_{Q_n}AR_n - A\varsigma, AR_n - P_{Q_n}AR_n \rangle \\ &= \|AR_n - P_{Q_n}AR_n\|^2 - \tfrac{1}{2}\big(\|P_{Q_n}AR_n - A\varsigma\|^2 + \|P_{Q_n}AR_n - AR_n\|^2 - \|AR_n - A\varsigma\|^2\big) \\ &\ge \tfrac{1}{2}\|P_{Q_n}AR_n - AR_n\|^2 = V_n(R_n), \end{aligned} \ (10)$$

where the third equality uses Lemma 1 (iii) and the inequality uses $\|P_{Q_n}AR_n - A\varsigma\| \le \|AR_n - A\varsigma\|$ (Lemma 2 (ii)). Similarly, we arrive at

$$\langle z_n - \varsigma, \nabla V_n(z_n) \rangle \ge V_n(z_n). \ (11)$$

From (7), (8), and (10), we have

$$\begin{aligned} \|z_n - \varsigma\|^2 &= \|R_n - \tau_n\nabla V_n(R_n) - \varsigma\|^2 = \|R_n - \varsigma\|^2 + \tau_n^2\|\nabla V_n(R_n)\|^2 - 2\tau_n\langle \nabla V_n(R_n), R_n - \varsigma \rangle \\ &\le \|R_n - \varsigma\|^2 + \tau_n^2\|\nabla V_n(R_n)\|^2 - 2\tau_nV_n(R_n) \le \|R_n - \varsigma\|^2 + 2\tau_n\Big(\frac{\delta\tau_n}{\tau_{n+1}} - 1\Big)V_n(R_n). \end{aligned} \ (12)$$

Combining (7), (8), (11), (12), and Lemma 2 (iv), we obtain

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\|^2 &= \|P_{C_n}(z_n - \lambda_n\nabla V_n(z_n)) - \varsigma\|^2 \le \|z_n - \lambda_n\nabla V_n(z_n) - \varsigma\|^2 - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 \\ &= \|z_n - \varsigma\|^2 + \lambda_n^2\|\nabla V_n(z_n)\|^2 - 2\lambda_n\langle \nabla V_n(z_n), z_n - \varsigma \rangle - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 \\ &\le \|z_n - \varsigma\|^2 + 2\lambda_n\Big(\frac{\delta\lambda_n}{\lambda_{n+1}} - 1\Big)V_n(z_n) - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 \\ &\le \|R_n - \varsigma\|^2 + 2\tau_n\Big(\frac{\delta\tau_n}{\tau_{n+1}} - 1\Big)V_n(R_n) + 2\lambda_n\Big(\frac{\delta\lambda_n}{\lambda_{n+1}} - 1\Big)V_n(z_n) - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2. \end{aligned} \ (13)$$
From the definition of $R_n$, it follows that

$$\|R_n - \varsigma\|^2 = \|\vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}) - \varsigma\|^2 = \|\vartheta_n - \varsigma\|^2 + 2\alpha_n\langle \vartheta_n - \varsigma, \vartheta_n - \vartheta_{n-1} \rangle + \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2. \ (14)$$

Thanks to Lemma 1 (iii), one has

$$\langle \vartheta_n - \varsigma, \vartheta_n - \vartheta_{n-1} \rangle = \tfrac{1}{2}\|\vartheta_n - \varsigma\|^2 + \tfrac{1}{2}\|\vartheta_n - \vartheta_{n-1}\|^2 - \tfrac{1}{2}\|\vartheta_{n-1} - \varsigma\|^2.$$

Substituting this into (14) and using $\alpha_n^2 \le \alpha_n$, we arrive at

$$\|R_n - \varsigma\|^2 \le \|\vartheta_n - \varsigma\|^2 + \alpha_n\big(\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big) + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2. \ (15)$$

Using (13) and (15), we can verify that

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\|^2 &\le \|\vartheta_n - \varsigma\|^2 + \alpha_n\big(\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big) + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 \\ &\quad + 2\tau_n\Big(\frac{\delta\tau_n}{\tau_{n+1}} - 1\Big)V_n(R_n) + 2\lambda_n\Big(\frac{\delta\lambda_n}{\lambda_{n+1}} - 1\Big)V_n(z_n) - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2. \end{aligned} \ (16)$$

Since

$$\lim_{n\to\infty}\Big(1 - \frac{\delta\tau_n}{\tau_{n+1}}\Big) = 1 - \delta > 1 - \rho, \qquad \lim_{n\to\infty}\Big(1 - \frac{\delta\lambda_n}{\lambda_{n+1}}\Big) = 1 - \delta > 1 - \rho,$$

where $\delta \in (0, \rho) \subset (0, 1)$, there exists $N \ge 0$ such that for all $n \ge N$, $1 - \frac{\delta\tau_n}{\tau_{n+1}} > 1 - \rho$ and $1 - \frac{\delta\lambda_n}{\lambda_{n+1}} > 1 - \rho$. So, for all $n \ge N$,

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\|^2 &\le \|\vartheta_n - \varsigma\|^2 + \alpha_n\big(\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big) + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 \\ &\quad + 2(\rho - 1)\big(\tau_nV_n(R_n) + \lambda_nV_n(z_n)\big) - \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 \ (17) \\ &\le \|\vartheta_n - \varsigma\|^2 + \alpha_n\big(\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big) + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2. \end{aligned}$$
Now, apply Lemma 5 with $\Xi_n = \|\vartheta_n - \varsigma\|^2$ and $b_n = 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2$. Since $\sum_{n=N}^{\infty}\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 < \infty$, Lemma 5 shows that $\lim_{n\to\infty}\|\vartheta_n - \varsigma\|$ exists and

$$\sum_{n=N}^{\infty}\big[\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big]_+ < \infty,$$

where $[t]_+ = \max\{t, 0\}$. As a result,

$$\lim_{n\to\infty}\big[\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big]_+ = 0.$$

Further, it follows from (17) that for all $n \ge N$,

$$\begin{aligned} 2(1 - \rho)\big(\tau_nV_n(R_n) + \lambda_nV_n(z_n)\big) &+ \|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 \\ &\le \|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n+1} - \varsigma\|^2 + \alpha_n\big(\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big) + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 \\ &\le \|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n+1} - \varsigma\|^2 + \alpha_n\big[\|\vartheta_n - \varsigma\|^2 - \|\vartheta_{n-1} - \varsigma\|^2\big]_+ + 2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2, \end{aligned}$$

which implies that

$$\lim_{n\to\infty}\big(\tau_nV_n(R_n) + \lambda_nV_n(z_n)\big) = 0,$$

which, together with $\lim_{n\to\infty}\tau_n = \tau > 0$ and $\lim_{n\to\infty}\lambda_n = \lambda > 0$, yields

$$\lim_{n\to\infty}V_n(R_n) = 0 \quad\text{and}\quad \lim_{n\to\infty}V_n(z_n) = 0. \ (18)$$

Moreover, from (17), we have

$$\lim_{n\to\infty}\|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 = 0. \ (19)$$

Equation (18) shows that

$$\lim_{n\to\infty}\|(I - P_{Q_n})AR_n\| = 0. \ (20)$$
Additionally, using (7), (8), and (18), we arrive at

$$\|z_n - R_n\|^2 = \tau_n^2\|\nabla V_n(R_n)\|^2 = \frac{\tau_n^2}{\tau_{n+1}}\,\tau_{n+1}\|\nabla V_n(R_n)\|^2 \le \frac{2\delta\tau_n^2}{\tau_{n+1}}V_n(R_n) \to 0, \ \text{as } n \to \infty,$$

which means that

$$\|z_n - R_n\| \to 0, \ \text{as } n \to \infty, \ (21)$$

and

$$\begin{aligned} \|\vartheta_{n+1} - z_n\|^2 &\le \big(\|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\| + \lambda_n\|\nabla V_n(z_n)\|\big)^2 \\ &\le 2\|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 + 2\lambda_n^2\|\nabla V_n(z_n)\|^2 \\ &= 2\|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 + \frac{2\lambda_n^2}{\lambda_{n+1}}\,\lambda_{n+1}\|\nabla V_n(z_n)\|^2 \\ &\le 2\|\vartheta_{n+1} - z_n + \lambda_n\nabla V_n(z_n)\|^2 + \frac{4\delta\lambda_n^2}{\lambda_{n+1}}V_n(z_n) \to 0, \ \text{as } n \to \infty, \end{aligned}$$

which yields that

$$\|\vartheta_{n+1} - z_n\| \to 0, \ \text{as } n \to \infty, \ (22)$$

where the second inequality in (22) comes from the basic inequality $(c + d)^2 \le 2c^2 + 2d^2$. In view of (21) and (22), we have

$$\lim_{n\to\infty}\|\vartheta_{n+1} - R_n\| = 0. \ (23)$$

Using (7) and (9), we see that for all $n \ge N$,

$$\|R_n - \vartheta_n\|^2 = \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2 \le \beta\,\alpha_n\|\vartheta_n - \vartheta_{n-1}\|^2 \to 0, \ \text{as } n \to \infty.$$

This shows that

$$\lim_{n\to\infty}\|R_n - \vartheta_n\| = 0. \ (24)$$

Combining (23) and (24), we observe that

$$\lim_{n\to\infty}\|\vartheta_{n+1} - \vartheta_n\| = 0.$$
Now, we verify that $\omega_w(\vartheta_n) \subseteq \Gamma$. Let $S^* \in \omega_w(\vartheta_n)$ be an arbitrary element. Since $\{\vartheta_n\}$ is bounded by Lemma 9, there exists a subsequence $\{\vartheta_{n_i}\}$ of $\{\vartheta_n\}$ such that $\vartheta_{n_i} \rightharpoonup S^*$. It follows from (24) that $R_{n_i} \rightharpoonup S^* \in H_1$. Noting that $P_{Q_{n_i}}AR_{n_i} \in Q_{n_i}$, one has

$$q(AR_{n_i}) \le \langle \zeta_{n_i}, AR_{n_i} - P_{Q_{n_i}}AR_{n_i} \rangle, \ (25)$$

where $\zeta_{n_i} \in \partial q(AR_{n_i})$. From the boundedness of $\partial q$, we conclude that $\{\zeta_{n_i}\}$ is bounded. From (20) and (25), we obtain

$$q(AR_{n_i}) \le \|\zeta_{n_i}\|\,\|AR_{n_i} - P_{Q_{n_i}}AR_{n_i}\| \to 0, \ \text{as } i \to \infty.$$

Using the w-lsc of $q$ again, it follows that

$$q(AS^*) \le \liminf_{i\to\infty} q(AR_{n_i}) \le 0. \ (26)$$

This shows that $AS^* \in Q$.

Below, we establish that $S^* \in C$. Using the definition of $C_{n_i}$ and $\vartheta_{n_i+1} \in C_{n_i}$, we can verify that

$$c(R_{n_i}) \le \langle \varepsilon_{n_i}, R_{n_i} - \vartheta_{n_i+1} \rangle, \ (27)$$

where $\varepsilon_{n_i} \in \partial c(R_{n_i})$. From the boundedness of $\partial c$, $\{\varepsilon_{n_i}\}$ is bounded. From (23) and (27), we see that

$$c(R_{n_i}) \le \|\varepsilon_{n_i}\|\,\|R_{n_i} - \vartheta_{n_i+1}\| \to 0, \ \text{as } i \to \infty. \ (28)$$

From the w-lsc of $c$, $R_{n_i} \rightharpoonup S^*$ and (28), it follows that

$$c(S^*) \le \liminf_{i\to\infty} c(R_{n_i}) \le 0.$$

Hence, $S^* \in C$, and so $S^* \in \Gamma$. Since $S^* \in \omega_w(\vartheta_n)$ was chosen arbitrarily, we arrive at $\omega_w(\vartheta_n) \subseteq \Gamma$. The conclusion now follows from Lemma 4. □

4. Strong Convergence

This section presents a strongly convergent algorithm that integrates inertial effects, the relaxed gradient-CQ method [22], and the Halpern-type method [25], combined with the newly designed stepsizes. The following assumption is added to prove the convergence of the method.
(A3) Let $\{\varrho_n\}$ be a positive sequence in $[0, \tilde\theta)$ such that $\varrho_n = o(\pi_n)$, that is, $\lim_{n\to\infty}\frac{\varrho_n}{\pi_n} = 0$, where $0 < \tilde\theta < 1$ and $\{\pi_n\} \subset (0, 1)$ fulfills $\sum_{n=1}^{\infty}\pi_n = \infty$ and $\lim_{n\to\infty}\pi_n = 0$.
The proposed approach has the following form.
Remark 3.
In Algorithm 2, the inertial parameter $\alpha_n$ is chosen as

$$\alpha_n = \begin{cases} \min\Big\{\dfrac{\varrho_n}{\|\vartheta_n - \vartheta_{n-1}\|}, \beta\Big\}, & \text{if } \vartheta_n \ne \vartheta_{n-1}, \\ \beta, & \text{otherwise}. \end{cases} \ (29)$$
Algorithm 2: A strongly convergent algorithm for the SFP.
    Step 0. Take $\vartheta_0, \vartheta_1 \in H_1$, $\tau_1 > 0$, $\lambda_1 > 0$, $\delta \in (0, \rho) \subset (0, 1)$, $\{\alpha_n\} \subset [0, \beta) \subset [0, 1)$, and a fixed vector $u \in H_1$. The sequence $\{d_n\}$ satisfies $\{d_n\} \subset [1, +\infty)$ and $\sum_{n=1}^{\infty}(d_n - 1) < \infty$.
    Step n. Compute

$$R_n = \vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}), \qquad z_n = R_n - \tau_n\nabla V_n(R_n), \qquad \vartheta_{n+1} = \pi_nu + (1 - \pi_n)P_{C_n}\big(z_n - \lambda_n\nabla V_n(z_n)\big), \ (30)$$

    where the stepsizes $\tau_n$ and $\lambda_n$ are updated via

$$\tau_{n+1} = \begin{cases} \min\Big\{\dfrac{\delta V_n(R_n)}{\|\nabla V_n(R_n)\|^2}, d_n\tau_n\Big\}, & \text{if } \nabla V_n(R_n) \ne 0, \\ d_n\tau_n, & \text{otherwise}, \end{cases} \ (31)$$

    and

$$\lambda_{n+1} = \begin{cases} \min\Big\{\dfrac{\delta V_n(z_n)}{\|\nabla V_n(z_n)\|^2}, d_n\lambda_n\Big\}, & \text{if } \nabla V_n(z_n) \ne 0, \\ d_n\lambda_n, & \text{otherwise}. \end{cases}$$

    Stopping criterion: If $\vartheta_{n+1} = z_n = R_n$ and $z_n = P_{C_n}(z_n - \lambda_n\nabla V_n(z_n))$, then stop; $R_n$ is a solution of the SFP. Otherwise, return to Step n.
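Relative to the sketch of Algorithm 1 given earlier, only two lines change: the inertial parameter follows rule (29), and the projected point is averaged with the anchor $u$. A hedged Python sketch of one iteration (the helper names are again our own, and the parameter values follow Example 3):

```python
import numpy as np

def algorithm2_step(x_prev, x, u, n, V, grad_V, proj_C, tau, lam,
                    delta=0.49, beta=0.3):
    """One iteration of Algorithm 2 (illustrative), with pi_n = 1/n,
    rho_n = 1/n**3, and d_n = 1 + 1/(n+1)**1.1."""
    pi_n = 1.0 / n
    d_n = 1.0 + 1.0 / (n + 1) ** 1.1
    diff = np.linalg.norm(x - x_prev)
    alpha = beta if diff == 0 else min(1.0 / n**3 / diff, beta)  # rule (29)
    R = x + alpha * (x - x_prev)
    gR = grad_V(R)
    z = R - tau * gR
    gz = grad_V(z)
    x_new = pi_n * u + (1.0 - pi_n) * proj_C(z - lam * gz)  # Halpern step (30)
    nR, nz = gR @ gR, gz @ gz
    tau = min(delta * V(R) / nR, d_n * tau) if nR > 0 else d_n * tau
    lam = min(delta * V(z) / nz, d_n * lam) if nz > 0 else d_n * lam
    return x_new, tau, lam
```

Threading the returned triple (x_new, tau, lam) back into the next call reproduces the full iteration; the vanishing weight pi_n on the anchor u is what upgrades the weak convergence of Algorithm 1 to strong convergence toward $P_\Gamma u$.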
In the following, the steps of the strong convergence proof of Algorithm 2 that are similar to the weak convergence proof of Algorithm 1 are omitted.
Theorem 2.
Assume that conditions (A1)–(A3) hold. Then, the sequence $\{\vartheta_n\}$ generated by Algorithm 2 converges strongly to the point $P_\Gamma u$ in the solution set $\Gamma$ of the SFP.
Proof. Set $\varsigma = P_\Gamma u$ and $t_n = P_{C_n}(z_n - \lambda_n\nabla V_n(z_n))$. Arguing as in Theorem 1, there exists $N \ge 0$ such that for all $n \ge N$,

$$\|t_n - \varsigma\|^2 = \|P_{C_n}(z_n - \lambda_n\nabla V_n(z_n)) - \varsigma\|^2 \le \|z_n - \varsigma\|^2 + 2\lambda_n(\rho - 1)V_n(z_n) - \|t_n - z_n + \lambda_n\nabla V_n(z_n)\|^2 \ (32)$$

and

$$\|z_n - \varsigma\|^2 = \|R_n - \tau_n\nabla V_n(R_n) - \varsigma\|^2 \le \|R_n - \varsigma\|^2 + 2\tau_n(\rho - 1)V_n(R_n). \ (33)$$

From the definition of $R_n$, it follows that

$$\|R_n - \varsigma\| = \|\vartheta_n + \alpha_n(\vartheta_n - \vartheta_{n-1}) - \varsigma\| \le \|\vartheta_n - \varsigma\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\| = \|\vartheta_n - \varsigma\| + \pi_n\cdot\frac{\alpha_n}{\pi_n}\|\vartheta_n - \vartheta_{n-1}\|. \ (34)$$

By (29), we have $\alpha_n\|\vartheta_n - \vartheta_{n-1}\| \le \varrho_n$ for all $n \ge 1$, which, along with $\lim_{n\to\infty}\frac{\varrho_n}{\pi_n} = 0$, implies that

$$\lim_{n\to\infty}\frac{\alpha_n}{\pi_n}\|\vartheta_n - \vartheta_{n-1}\| \le \lim_{n\to\infty}\frac{\varrho_n}{\pi_n} = 0. \ (35)$$

So, there is a constant $M_1 > 0$ such that

$$\frac{\alpha_n}{\pi_n}\|\vartheta_n - \vartheta_{n-1}\| \le M_1. \ (36)$$

Combining (32)–(36), we derive that for all $n \ge N$,

$$\|t_n - \varsigma\| \le \|z_n - \varsigma\| \le \|R_n - \varsigma\| \le \|\vartheta_n - \varsigma\| + \pi_nM_1. \ (37)$$

Therefore, for all $n \ge N$,

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\| &= \|\pi_nu + (1 - \pi_n)t_n - \varsigma\| \le \pi_n\|u - \varsigma\| + (1 - \pi_n)\|t_n - \varsigma\| \\ &\le \pi_n\|u - \varsigma\| + (1 - \pi_n)\big(\|\vartheta_n - \varsigma\| + \pi_nM_1\big) \le \pi_n\big(\|u - \varsigma\| + M_1\big) + (1 - \pi_n)\|\vartheta_n - \varsigma\| \\ &\le \max\big\{\|\vartheta_n - \varsigma\|, \|u - \varsigma\| + M_1\big\} \le \cdots \le \max\big\{\|\vartheta_N - \varsigma\|, \|u - \varsigma\| + M_1\big\}. \end{aligned}$$
This means that the sequence $\{\vartheta_n\}$ is bounded; hence, $\{R_n\}$ is also bounded. For $n \ge N$, one has

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\|^2 &= \|\pi_n(u - \varsigma) + (1 - \pi_n)(t_n - \varsigma)\|^2 \le (1 - \pi_n)\|t_n - \varsigma\|^2 + 2\pi_n\langle u - \varsigma, \vartheta_{n+1} - \varsigma \rangle \\ &\le (1 - \pi_n)\big(\|R_n - \varsigma\|^2 + 2\tau_n(\rho - 1)V_n(R_n) + 2\lambda_n(\rho - 1)V_n(z_n) - \|t_n - z_n + \lambda_n\nabla V_n(z_n)\|^2\big) \\ &\quad + 2\pi_n\langle u - \varsigma, \vartheta_{n+1} - \varsigma \rangle, \end{aligned} \ (38)$$

where the first inequality follows from Lemma 1 (ii). Now, we compute the following estimate:

$$\|R_n - \varsigma\|^2 = \|\vartheta_n - \varsigma\|^2 + \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2 + 2\alpha_n\langle \vartheta_n - \varsigma, \vartheta_n - \vartheta_{n-1} \rangle \le \|\vartheta_n - \varsigma\|^2 + \alpha_n^2\|\vartheta_n - \vartheta_{n-1}\|^2 + 2\alpha_n\|\vartheta_n - \varsigma\|\,\|\vartheta_n - \vartheta_{n-1}\|. \ (39)$$

Let $M_2 = \sup_{n\ge N}\{\beta\|\vartheta_n - \vartheta_{n-1}\|, \|\vartheta_n - \varsigma\|\}$. Substituting (39) into (38) and using $\alpha_n\|\vartheta_n - \vartheta_{n-1}\|\big(2\|\vartheta_n - \varsigma\| + \alpha_n\|\vartheta_n - \vartheta_{n-1}\|\big) \le 3M_2\alpha_n\|\vartheta_n - \vartheta_{n-1}\|$, we find that for all $n \ge N$,

$$\begin{aligned} \|\vartheta_{n+1} - \varsigma\|^2 &\le (1 - \pi_n)\|\vartheta_n - \varsigma\|^2 + 2(\rho - 1)(1 - \pi_n)\tau_nV_n(R_n) + 2(\rho - 1)(1 - \pi_n)\lambda_nV_n(z_n) \\ &\quad - (1 - \pi_n)\|t_n - z_n + \lambda_n\nabla V_n(z_n)\|^2 + 3M_2\alpha_n\|\vartheta_n - \vartheta_{n-1}\| + 2\pi_n\langle u - \varsigma, \vartheta_{n+1} - \varsigma \rangle. \end{aligned} \ (40)$$

To apply Lemma 7, for $n \ge N$ set

$$\begin{aligned} \Pi_n &= \|\vartheta_n - \varsigma\|^2, \qquad \Xi_n = 3M_2\pi_n\cdot\frac{\alpha_n}{\pi_n}\|\vartheta_n - \vartheta_{n-1}\| + 2\pi_n\langle u - \varsigma, \vartheta_{n+1} - \varsigma \rangle, \\ b_n &= 2\langle u - \varsigma, \vartheta_{n+1} - \varsigma \rangle + 3M_2\frac{\alpha_n}{\pi_n}\|\vartheta_n - \vartheta_{n-1}\|, \\ \delta_n &= 2(1 - \pi_n)\tau_n(1 - \rho)V_n(R_n) + 2(1 - \pi_n)\lambda_n(1 - \rho)V_n(z_n) + (1 - \pi_n)\|t_n - z_n + \lambda_n\nabla V_n(z_n)\|^2. \end{aligned} \ (41)$$

Then (40) reduces to the inequalities

$$\Pi_{n+1} \le (1 - \pi_n)\Pi_n + \pi_nb_n, \qquad \Pi_{n+1} \le \Pi_n - \delta_n + \Xi_n, \qquad n \ge N. \ (42)$$
Let $\{n_i\} \subseteq \{n\}$ be a subsequence and assume that

$$\lim_{i\to\infty}\delta_{n_i} = 0.$$

This means

$$\lim_{i\to\infty}(1 - \pi_{n_i})\Big(2\tau_{n_i}(1 - \rho)V_{n_i}(R_{n_i}) + 2\lambda_{n_i}(1 - \rho)V_{n_i}(z_{n_i}) + \|t_{n_i} - z_{n_i} + \lambda_{n_i}\nabla V_{n_i}(z_{n_i})\|^2\Big) = 0.$$

By $\lim_{i\to\infty}\tau_{n_i} = \tau > 0$, $\lim_{i\to\infty}\lambda_{n_i} = \lambda > 0$ and $\lim_{i\to\infty}\pi_{n_i} = 0$, we deduce

$$\lim_{i\to\infty}V_{n_i}(R_{n_i}) = 0, \qquad \lim_{i\to\infty}V_{n_i}(z_{n_i}) = 0 \qquad\text{and}\qquad \lim_{i\to\infty}\|t_{n_i} - z_{n_i} + \lambda_{n_i}\nabla V_{n_i}(z_{n_i})\|^2 = 0,$$

which implies that

$$\lim_{i\to\infty}\|(I - P_{Q_{n_i}})AR_{n_i}\| = 0.$$

As in Theorem 1, we can prove that $\omega_w(\vartheta_{n_i}) \subseteq \Gamma$. Hence, there exists a subsequence $\{\vartheta_{n_{i_j}}\}$ of $\{\vartheta_{n_i}\}$ such that $\vartheta_{n_{i_j}} \rightharpoonup S^* \in \Gamma$. From Lemma 2 (i), we obtain

$$\limsup_{i\to\infty}\langle u - \varsigma, \vartheta_{n_i} - \varsigma \rangle = \lim_{j\to\infty}\langle u - \varsigma, \vartheta_{n_{i_j}} - \varsigma \rangle = \langle u - \varsigma, S^* - \varsigma \rangle \le 0. \ (43)$$
From (A3), we have

$$\|\vartheta_{n_i} - R_{n_i}\| = \alpha_{n_i}\|\vartheta_{n_i} - \vartheta_{n_i-1}\| = \pi_{n_i}\cdot\frac{\alpha_{n_i}}{\pi_{n_i}}\|\vartheta_{n_i} - \vartheta_{n_i-1}\| \to 0, \ \text{as } i \to \infty. \ (44)$$

Using the same analysis as in (21), we can show that

$$\lim_{i\to\infty}\|z_{n_i} - R_{n_i}\| = 0. \ (45)$$

From (44) and (45), we derive that

$$\|\vartheta_{n_i} - z_{n_i}\| \le \|\vartheta_{n_i} - R_{n_i}\| + \|z_{n_i} - R_{n_i}\| \to 0, \ \text{as } i \to \infty. \ (46)$$

From the convexity of $\|\cdot\|^2$ and (41), we arrive at

$$\begin{aligned} \|\vartheta_{n_i+1} - z_{n_i}\|^2 &= \|\pi_{n_i}(u - z_{n_i}) + (1 - \pi_{n_i})(t_{n_i} - z_{n_i})\|^2 \le \pi_{n_i}\|u - z_{n_i}\|^2 + (1 - \pi_{n_i})\|t_{n_i} - z_{n_i}\|^2 \\ &\le \pi_{n_i}\|u - z_{n_i}\|^2 + 2(1 - \pi_{n_i})\|t_{n_i} - z_{n_i} + \lambda_{n_i}\nabla V_{n_i}(z_{n_i})\|^2 + 2(1 - \pi_{n_i})\lambda_{n_i}^2\|\nabla V_{n_i}(z_{n_i})\|^2 \\ &\le \pi_{n_i}\|u - z_{n_i}\|^2 + 2(1 - \pi_{n_i})\|t_{n_i} - z_{n_i} + \lambda_{n_i}\nabla V_{n_i}(z_{n_i})\|^2 + \frac{4\delta(1 - \pi_{n_i})\lambda_{n_i}^2}{\lambda_{n_i+1}}V_{n_i}(z_{n_i}) \to 0, \ \text{as } i \to \infty. \ (47) \end{aligned}$$

So, we arrive at

$$\|\vartheta_{n_i+1} - \vartheta_{n_i}\| \le \|\vartheta_{n_i+1} - z_{n_i}\| + \|\vartheta_{n_i} - z_{n_i}\| \to 0, \ \text{as } i \to \infty. \ (48)$$

Combining (43) and (48), we obtain

$$\begin{aligned} \limsup_{i\to\infty}\Big(\langle u - \varsigma, \vartheta_{n_i+1} - \varsigma \rangle &+ 3M_2\frac{\alpha_{n_i}}{\pi_{n_i}}\|\vartheta_{n_i} - \vartheta_{n_i-1}\|\Big) \le \limsup_{i\to\infty}\langle u - \varsigma, \vartheta_{n_i+1} - \vartheta_{n_i} \rangle \\ &+ \limsup_{i\to\infty}\langle u - \varsigma, \vartheta_{n_i} - \varsigma \rangle + 3M_2\limsup_{i\to\infty}\frac{\alpha_{n_i}}{\pi_{n_i}}\|\vartheta_{n_i} - \vartheta_{n_i-1}\| \le 0.$$

Using the above results, one has

$$\limsup_{i\to\infty}b_{n_i} \le 0.$$

By Lemma 7, the sequence $\{\vartheta_n\}$ converges strongly to $\varsigma = P_\Gamma(u)$. □

5. Numerical Experiments

In the first numerical example, we compare our Algorithm 1 with some weakly convergent algorithms, including Gibali et al.'s Algorithm 3.1 (GAlg.3.1) [6] and Sahu et al.'s Algorithm 3.1 (SAlg.3.1) [9]. In the second example, we compare Algorithm 2 with Ma and Liu's Algorithm 3.1 (MLAlg.3.1) [11] and Ma et al.'s Algorithm 3 (Alg.3) [39]. These experiments were conducted in MATLAB R2017a on a desktop PC with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and 8.00 GB of RAM.
Example 1.
LASSO problem [9,40,41,42].
Here, the following LASSO problem is recast as an SFP and used to recover a sparse signal:

$$\min\Big\{\frac{1}{2}\|A\vartheta - b\|^2 : \vartheta \in \mathbb{R}^N, \|\vartheta\|_1 \le \kappa\Big\}, \ (49)$$

where $A \in \mathbb{R}^{M\times N}$, $M < N$, $b \in \mathbb{R}^M$ and $\kappa > 0$.
Now, we seek a sparse solution of the SFP. Here, the matrix $A$ is generated from a standard normal distribution with mean zero and unit variance. The true sparse signal $S^*$ is generated from a uniform distribution on the interval $[-2, 2]$ with $k$ randomly chosen nonzero positions, while the remaining entries are zero. The sample data are $b = AS^*$.
Under certain assumptions on the matrix $A$, the solution of (49) coincides with the $\ell_0$-norm solution of the underdetermined linear system. Further, the SFP uses the closed convex sets $C = \{z : \|z\|_1 \le \kappa\}$, with $\kappa = k$, and $Q = \{b\}$. Since the projection onto $C$ has no closed-form solution, the subgradient projection is used. We further introduce the convex function $c(z) = \|z\|_1 - \kappa$ and

$$C_n = \{\varsigma : c(\vartheta_n) + \langle \varepsilon_n, \varsigma - \vartheta_n \rangle \le 0\},$$

where $\varepsilon_n \in \partial c(\vartheta_n)$. Meanwhile, the orthogonal projection of a point $\varsigma \in \mathbb{R}^N$ onto $C_n$ is

$$P_{C_n}(\varsigma) = \begin{cases} \varsigma, & \text{if } c(\vartheta_n) + \langle \varepsilon_n, \varsigma - \vartheta_n \rangle \le 0, \\ \varsigma - \dfrac{c(\vartheta_n) + \langle \varepsilon_n, \varsigma - \vartheta_n \rangle}{\|\varepsilon_n\|^2}\varepsilon_n, & \text{otherwise}. \end{cases}$$

The subdifferential of $c$ at $\vartheta_n$ is given componentwise by

$$\partial c(\vartheta_n)_i = \begin{cases} 1, & \text{if } (\vartheta_n)_i > 0, \\ [-1, 1], & \text{if } (\vartheta_n)_i = 0, \\ -1, & \text{if } (\vartheta_n)_i < 0. \end{cases}$$
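The relaxed projection and the subgradient of $c(z) = \|z\|_1 - \kappa$ above admit a direct NumPy transcription. The sketch below is our own illustration and assumes the current iterate is nonzero, so that the chosen subgradient vector is nonzero.

```python
import numpy as np

def subgrad_l1(x):
    """A subgradient of c(z) = ||z||_1 - kappa at x: the sign vector
    (choosing 0 on components where x_i = 0)."""
    return np.sign(x)

def proj_Cn(s, x_n, kappa):
    """Projection of s onto the half-space C_n built at the iterate x_n."""
    eps_n = subgrad_l1(x_n)
    val = (np.abs(x_n).sum() - kappa) + eps_n @ (s - x_n)  # c(x_n) + <eps_n, s - x_n>
    if val <= 0:
        return s                                  # s already lies in C_n
    return s - (val / (eps_n @ eps_n)) * eps_n    # project onto the bounding hyperplane
```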
To compare the behavior of all algorithms on an equal footing, the following relative error is used:

$$E_n = \frac{\|\vartheta_n - S^*\|}{\max\{1, \|\vartheta_n\|\}}.$$

We used this to test how the algorithms behave under different $M$, $N$ and $k$, with the same number of iterations (3000). The values of $E_n$ are reported in Table 1.
In the following, the problem is the recovery of the signal $S^*$; we take $M = 240$, $N = 1024$, $k = 30$, and the $M\times N$ matrix $A$ is randomly generated with independent samples of a standard Gaussian distribution. In detail, the original signal $S^*$ contains 30 randomly placed $\pm 1$ spikes. All of the algorithms start their iterations with $\vartheta_0 = 0$, and the following mean squared error (MSE) is defined for measuring the accuracy of the recovery:

$$MSE = \frac{1}{N}\|\vartheta_n - S^*\|^2.$$

For SAlg.3.1, we take $\Xi_n = 0.1$ and $\beta = 0.3$; for GAlg.3.1, we choose $\Xi_n = 0.1$, $\beta = 0.3$ and $\varrho_n = \frac{1}{n^2}$; and for Alg.1, we set $\tau_1 = \lambda_1 = 0.3$, $\delta = 0.3$, $d_n = \frac{1}{(n+1)^{1.1}} + 1$, $\psi_n = 0$, $\varrho_n = \frac{1}{n^2}$ and $\beta = 0.3$.
Remark 4.
From Table 1, we see that our proposed algorithm (Alg.1) requires less CPU time to reach a smaller $E_n$ than SAlg.3.1 and GAlg.3.1 in all tested settings.
Figure 1 shows the recovery results of all algorithms, including the original signal, the MSE, and the iteration time.
Remark 5.
From Figure 1, we see that our proposed Algorithm 1 attains a smaller MSE in less CPU time than SAlg.3.1 and GAlg.3.1 over the same number of iterations (3000).
Example 2
([43,44]). Let $H_1 = H_2 = L^2([0,1])$ with norm $\|\zeta_1\|_{L^2} = \big(\int_0^1 \zeta_1(t)^2\,dt\big)^{1/2}$ and inner product $\langle \zeta_1, \zeta_2 \rangle = \int_0^1 \zeta_1(t)\zeta_2(t)\,dt$ for $\zeta_1, \zeta_2 \in L^2([0,1])$. Let $C = \{\zeta_1 \in L^2([0,1]) : \|\zeta_1\|_{L^2} \le 1\}$ and $Q = \{\zeta_1 \in L^2([0,1]) : \langle \zeta_1, t \rangle = 0\}$. Set $A\zeta_1(t) = \frac{\zeta_1(t)}{2}$. We observe that $0 \in \Gamma$, and so $\Gamma \ne \emptyset$. For MLAlg.3.1, we take $\lambda_1 = 0.0003$, $\delta = 0.4$, $\beta = 0.3$, $d_n = \frac{1}{(n+1)^{1.1}} + 1$, $\psi_n = 0$, $\pi_n = \frac{10^{-4}}{n}$, $\varrho_n = \frac{1}{n^2}$, and $u = \vartheta_0$. For Alg.2, we choose $\lambda_1 = 0.6$, $\tau_1 = 0.6$, $\delta = 0.9$, $d_n = \frac{10}{(n+1)^{1.1}} + 1$, $\psi_n = 0$, $\beta = 0.3$, $\pi_n = \frac{10^{-4}}{n}$, $\varrho_n = \frac{1}{n^2}$ and $u = \vartheta_0$. For all of the algorithms, the stopping criterion is $\|\vartheta_{n+1} - \vartheta_n\|_{L^2} < \epsilon$. The starting points are one of the following:
Case 1: $\vartheta_0 = 3t^4 + t^6$, $\vartheta_1 = 5t$;
Case 2: $\vartheta_0 = \frac{e^t}{8}$, $\vartheta_1 = 5t$.
The obtained results are recorded in Table 2.
In this experiment, the projections onto the sets $C$ and $Q$ have the following closed forms:

$$P_C(\vartheta) = \begin{cases} \dfrac{\vartheta}{\|\vartheta\|_{L^2}}, & \text{if } \|\vartheta\|_{L^2} > 1, \\ \vartheta, & \text{if } \|\vartheta\|_{L^2} \le 1, \end{cases} \qquad P_Q(\vartheta) = \vartheta - \frac{\langle t, \vartheta \rangle}{\|t\|_{L^2}^2}\,t.$$
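On a uniform grid, the two projections in Example 2 can be approximated as follows (a discretized sketch; the grid size and the simple rectangle-rule quadrature are our own choices):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)   # grid on [0, 1]
dt = t[1] - t[0]                  # rectangle-rule quadrature weight

def l2_norm(f):
    return np.sqrt(np.sum(f * f) * dt)

def proj_C(f):
    """Projection onto the unit ball of L^2([0,1])."""
    nf = l2_norm(f)
    return f if nf <= 1.0 else f / nf

def proj_Q(f):
    """Projection onto the hyperplane {f : <f, t> = 0};
    the quadrature weights cancel in the coefficient <f,t>/<t,t>."""
    return f - (np.sum(f * t) / np.sum(t * t)) * t
```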
Remark 6.
We see from Table 2 that our proposed Algorithm 2 fulfills the stopping criterion in less time and with fewer iterations than MLAlg.3.1.
Example 3.
The following split feasibility problem is considered in [9,13,45,46]:

Find $z^* \in C$ such that $Az^* \in Q$,

where $A \in \mathbb{R}^{20\times10}$ is a matrix. The set $C = \{y \in \mathbb{R}^{10} : c(y) \le 0\}$, where

$$c(y) = -y_1 + y_2^2 + \cdots + y_{10}^2,$$

and $Q = \{z \in \mathbb{R}^{20} : q(z) \le 0\}$, where

$$q(z) = z_1 + z_2^2 + \cdots + z_{20}^2 - 1.$$

Notice that $C$ is the set above the graph of $y_1 = y_2^2 + \cdots + y_{10}^2$ and $Q$ is the set below the graph of $z_1 = -z_2^2 - \cdots - z_{20}^2 + 1$. Every element of $A$ is randomly selected in $(0, 1)$, fulfilling $A(C) \cap Q \ne \emptyset$.
For all of the algorithms, we adopted the same starting points $\vartheta_0 \in \mathbb{R}^{10}$ and $\vartheta_1 \in \mathbb{R}^{10}$, generated randomly with every element in $(0, 1)$.
For Alg.3, we used $\tau_1 = 0.01$, $\delta = 0.3$, $d_n = \frac{22}{(n+1)^{1.1}} + 1$, $\pi_n = \frac{1}{n}$, $\varrho_n = \frac{1}{n^3}$, $\beta = 0.3$ and $\eta = 0.5$, as suggested in [39].
For Alg.2, we take $\tau_1 = 0.01$, $\lambda_0 = 0.6$, $\delta = 0.49$, $d_n = \frac{22}{(n+1)^{1.1}} + 1$, $\pi_n = \frac{1}{n}$, $\varrho_n = \frac{1}{n^3}$ and $\beta = 0.3$.
Remark 7.
From Table 3, we find that Alg.2 requires fewer iterations than Alg.3; however, in some cases its iteration time was longer than that of Alg.3.

6. Conclusions

Our work establishes the convergence of the proposed algorithms under conditions weaker than the usual assumptions of Lipschitz continuity of $\nabla V_n$ and firm nonexpansiveness of $I - P_{Q_n}$, which are typically associated with the SFP. However, the addition of double inertia to Algorithm 1 makes it challenging to establish a new weak convergence result. Therefore, we leave the investigation of how double inertia affects the effectiveness and convergence analysis of the proposed algorithms for future work.

Author Contributions

Writing—original draft, Y.Z.; Writing—review & editing, X.M. All authors have read and agreed to the published version of the manuscript.

Funding

The first author was supported by the Natural Science Foundation of Hubei Province (No. 2025AFC080), the Scientific Research Project of Hubei Provincial Department of Education (No. 23Q166), the Scientific Research Foundation of Hubei University of Education for Talent Introduction (No. ESRC20220008), and the Foundation for Innovative Research Team of Hubei Provincial Department of Education (No. T2022034). The second author was supported by the National Natural Science Foundation of China (No. 12172266) and the Fundamental Research Program of Shanxi Province, China (No. 202303021222208).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

There are no conflicts of interest in this work.

References

  1. Censor, Y.; Elfving, T. A multiprojection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  3. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  4. Censor, Y.; Elfving, T.; Kopf, N.; Bortfeld, T. The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 2005, 21, 2071–2084. [Google Scholar] [CrossRef]
  5. Dang, Y.; Sun, J.; Xu, H. Inertial accelerated algorithm for solving a split feasibility problem. J. Ind. Manag. Optim. 2017, 13, 1383–1394. [Google Scholar] [CrossRef]
  6. Gibali, A.; Mai, D.T.; Vinh, N.T. A new relaxed CQ algorithm for solving split feasibility problems in Hilbert spaces and its applications. J. Ind. Manag. Optim. 2019, 15, 963–984. [Google Scholar] [CrossRef]
  7. López, G.; Martin, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 085004. [Google Scholar]
  8. Shehu, Y.; Gibali, A. New inertial relaxed method for solving split feasibilities. Optim. Lett. 2020, 15, 2109–2126. [Google Scholar] [CrossRef]
  9. Sahu, D.R.; Cho, Y.J.; Dong, Q.L.; Kashyap, M.R.; Li, X.H. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2020, 87, 1075–1095. [Google Scholar] [CrossRef]
  10. Suantai, S.; Pholasa, N.; Cholamjiak, P. The modified inertial relaxed CQ algorithm for solving the split feasibility problems. J. Ind. Manag. Optim. 2018, 23, 1595–1615. [Google Scholar] [CrossRef]
  11. Ma, X.; Liu, H. An inertial Halpern-type CQ algorithm for solving split feasibility problems in Hilbert spaces. J. Appl. Math. Comput. 2021, 68, 1699–1717. [Google Scholar] [CrossRef]
  12. Reich, S.; Tuyen, T.M.; Ha, M.T.N. An optimization approach to solving the split feasibility problem in Hilbert spaces. J. Glob. Optim. 2021, 79, 837–852. [Google Scholar] [CrossRef]
  13. Dong, Q.L.; He, S.; Rassias, M.T. General splitting methods with linearization for the split feasibility problem. J. Glob. Optim. 2021, 79, 813–836. [Google Scholar] [CrossRef]
  14. Yen, L.H.; Huyen, N.T.T.; Muu, L.D. A subgradient algorithm for a class of nonlinear split feasibility problems: Application to jointly constrained Nash equilibrium models. J. Glob. Optim. 2019, 73, 849–868. [Google Scholar] [CrossRef]
  15. Chen, C.; Pong, T.K.; Tan, L.; Zeng, L. A difference-of-convex approach for split feasibility with applications to matrix factorizations and outlier detection. J. Glob. Optim. 2020, 78, 107–136. [Google Scholar] [CrossRef]
  16. Wang, J.; Hu, Y.; Yu, C.K.W.; Zhuang, X. A Family of Projection Gradient Methods for Solving the Multiple-Sets Split Feasibility Problem. J. Optim. Theory Appl. 2019, 183, 520–534. [Google Scholar] [CrossRef]
  17. Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Comput. Optim. Appl. 2017, 67, 175–199. [Google Scholar] [CrossRef]
  18. Qin, X.; Wang, L. A fixed point method for solving a split feasibility problem in Hilbert spaces. RACSAM 2019, 113, 315–325. [Google Scholar] [CrossRef]
  19. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef]
  20. Dong, Q.L.; Tang, Y.C.; Cho, Y.J.; Rassias, T.M. “Optimal” choice of the step length of the projection and contraction methods for solving the split feasibility problem. J. Glob. Optim. 2018, 71, 341–360. [Google Scholar] [CrossRef]
  21. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
  22. Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017. [Google Scholar] [CrossRef]
  23. Qu, B.; Xiu, N. A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 2005, 21, 1655–1665. [Google Scholar] [CrossRef]
  24. Xu, J.; Chi, E.C.; Yang, M.; Lange, K. A majorization–minimization algorithm for split feasibility problems. Comput. Optim. Appl. 2018, 71, 795–828. [Google Scholar] [CrossRef]
  25. Halpern, B. Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 1967, 73, 957–961. [Google Scholar] [CrossRef]
  26. Polyak, B.T. Some methods of speeding up the convergence of iteration methods. Ussr Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  27. Saejung, S.; Yotkaew, P. Approximation of zeros of inverse strongly monotone operators in Banach spaces. Nonlinear Anal. 2012, 75, 742–750. [Google Scholar] [CrossRef]
  28. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  29. Maingé, P.E.; Gobinddass, M.L. Convergence of one-step projected gradient methods for variational inequalities. J. Optim. Theory Appl. 2016, 171, 146–168. [Google Scholar]
  30. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J. Optim. 2004, 14, 773–782. [Google Scholar] [CrossRef]
  31. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk. SSSR 1983, 269, 543–547. [Google Scholar]
  32. Bauschke, H.H.; Combettes, P.L. Convex Analysis and Monotone Operator Theory in Hilbert Spaces; Springer: New York, NY, USA, 2011. [Google Scholar]
  33. Agarwal, R.P.; O'Regan, D.; Sahu, D.R. Fixed Point Theory for Lipschitzian-Type Mappings with Applications; Topological Fixed Point Theory and Its Applications; Springer: New York, NY, USA, 2009. [Google Scholar]
  34. Osilike, M.O.; Aniagbosor, S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comput. Model. 2000, 32, 1181–1191. [Google Scholar] [CrossRef]
  35. Vinh, N.; Cholamjiak, P.; Suantai, S. A new CQ algorithm for solving split feasibility problems in Hilbert spaces. Bull. Malays. Math. Sci. Soc. 2018, 42, 2517–2534. [Google Scholar] [CrossRef]
  36. Bauschke, H.H.; Combettes, P.L. A weak-to-strong convergence principle for fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 2001, 26, 248–264. [Google Scholar] [CrossRef]
  37. Xu, H.K. Iterative methods for solving the split feasibility in infinite-dimensional Hilbert spaces. Inverse Probl. 2010, 26, 105018. [Google Scholar] [CrossRef]
  38. Maingé, P.E. Convergence theorems for inertial KM-type algorithms. J. Comput. Appl. Math. 2008, 219, 223–236. [Google Scholar] [CrossRef]
  39. Ma, X.; Jia, Z.; Li, Q. On inertial non-lipschitz stepsize algorithms for split feasibility problems. Comp. Appl. Math. 2024, 43, 431. [Google Scholar] [CrossRef]
  40. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
  41. Tan, B.; Qin, X.; Wang, X. Alternated inertial algorithms for split feasibility problems. Numer. Algorithms 2024, 95, 773–812. [Google Scholar] [CrossRef]
  42. Okeke, C.C.; Okorie, K.O.; Nwakpa, C.E.; Mewomo, O.T. Two-step inertial accelerated algorithms for solving split feasibility problem with multiple output sets. Commun. Nonlinear Sci. Numer. Simul. 2025, 141, 108461. [Google Scholar] [CrossRef]
  43. van Thang, T. Projection algorithms with adaptive step sizes for multiple output split mixed variational inequality problems. Comp. Appl. Math. 2024, 43, 387. [Google Scholar] [CrossRef]
  44. Kesornprom, S.; Cholamjiak, P. Proximal type algorithms involving linesearch and inertial technique for split variational inclusion problem in Hilbert spaces with applications. Optimization 2019, 68, 2369–2395. [Google Scholar] [CrossRef]
  45. He, H.; Ling, C.; Xu, H.K. An Implementable Splitting Algorithm for the 1-norm Regularized Split Feasibility Problem. J. Sci. Comput. 2016, 67, 281–298. [Google Scholar] [CrossRef]
  46. Ma, X.; Liu, H.; Li, X. The iterative method for solving the proximal split feasibility problem with an application to LASSO problem. Comp. Appl. Math. 2022, 41, 5. [Google Scholar] [CrossRef]
Figure 1. Comparison of signal processing.
Table 1. Results for Example 1.

(M, N, k)            Alg.1: E_n    time      SAlg.3.1: E_n   time       GAlg.3.1: E_n   time
(240, 1024, 30)      1.8479e-10    0.9192    2.5269e-9       1.3943     1.3316e-5       1.8579
(480, 2048, 60)      1.1073e-14    4.5840    6.8123e-12      6.7324     3.3606e-6       8.8775
(720, 3072, 90)      1.0120e-14    27.5802   1.2631e-12      40.6743    1.7002e-6       53.7751
(960, 4096, 120)     1.0123e-14    50.4016   4.7432e-13      75.4683    7.8236e-7       100.9057
(1200, 5120, 150)    9.1161e-15    81.3778   3.0586e-14      122.1031   6.1916e-7       162.5137
Table 2. Results for Example 2.

Cases     ϵ        Alg.2: iter.   time      MLAlg.3.1: iter.   time
Case 1    10^-2    7              0.1751    7                  0.2262
          10^-3    9              0.2418    10                 0.7618
Case 2    10^-2    7              0.1680    7                  0.2330
          10^-3    9              0.2610    9                  0.2839
Table 3. Results for Example 3.

(n, m)      Alg.3: iter.   time         Alg.2: iter.   time
(10, 20)    19             7.5570e-4    17             7.4960e-4
(20, 20)    20             6.6100e-4    19             4.3310e-4
(30, 20)    20             0.0058       18             0.0061