Article

A Pre-Conditioning CQ Algorithm with Double Inertia and Self-Updated Stepsizes for Split Feasibility Problems in Hilbert Spaces

1 School of Mathematics and Statistics, Institute of Big Data Analysis and Applied Mathematics, Hubei University of Education, Wuhan 430205, China
2 School of Mathematics and Statistics, Shanxi Datong University, Datong 037009, China
* Author to whom correspondence should be addressed.
Symmetry 2026, 18(2), 321; https://doi.org/10.3390/sym18020321
Submission received: 15 January 2026 / Revised: 31 January 2026 / Accepted: 4 February 2026 / Published: 10 February 2026
(This article belongs to the Section Mathematics)

Abstract

In this work, we propose a double inertial pre-conditioning CQ algorithm for the split feasibility problem in real Hilbert spaces, in which double inertial steps and new stepsizes are used to speed up the convergence rate. Our method also computes only one projection onto the nonempty closed convex set C per iteration. Together, these features improve the numerical performance. Next, we establish the weak convergence of the sequence generated by our method. Finally, we use numerical experiments to demonstrate our theoretical results.

1. Introduction

In this paper, let H be a real Hilbert space with inner product ⟨·,·⟩ and induced norm ‖·‖. The split feasibility problem (SFP) is as follows:
Find g ∈ C such that Ag ∈ Q.
Its solution set is denoted by
Ω = {g ∈ C : Ag ∈ Q}.
Here, C and Q are nonempty closed convex subsets of real Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 is a bounded linear operator.
The split feasibility problem (SFP), proposed by Censor and Elfving [1], was first used to study inverse problems and was then applied to signal recovery [2] and image deblurring problems [3]. Since then, the SFP has attracted considerable attention, and many related algorithms have been developed to investigate it. For instance, Byrne [2,3] proposed the classical CQ algorithm as follows:
j_{n+1} = P_C(j_n − Γ A*(I − P_Q)A j_n), n ≥ 1,
where Γ ∈ (0, 2/‖A‖²), I represents the identity operator, P_C and P_Q are the projections onto C and Q, respectively, and A* denotes the adjoint of A. For CQ algorithms, more details can be found in [4,5,6,7,8,9,10]. Before the algorithm (1) begins its iteration, it must first compute the operator norm ‖A‖, which leads to a small stepsize Γ, and thus the convergence rate of the algorithm is slow. Moreover, it is not easy to calculate the operator norm ‖A‖ in practice. Motivated by these considerations, López et al. [11] suggested the following self-adaptive stepsize by modifying Yang [12]:
Γ_n = t_n U(j_n)/‖∇U(j_n)‖².
Here, inf_n t_n(4 − t_n) > 0, and U(j) = ½‖(I − P_Q)Aj‖². Under the stepsize (2), the weak convergence of the algorithm (1) is proved. Meanwhile, several CQ algorithms with self-updated stepsizes have been proposed to solve the SFP; see Refs. [13,14,15,16,17] and the references therein.
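The classical CQ iteration (1) with a self-adaptive stepsize of the form (2) can be sketched in a few lines of NumPy. The box-shaped sets C and Q and all parameter values in the sketch below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def cq_self_adaptive(A, x0, proj_C, proj_Q, t=1.0, iters=200):
    """Classical CQ iteration x_{n+1} = P_C(x_n - G_n * A^T (I - P_Q) A x_n)
    with the self-adaptive stepsize G_n = t * U(x_n) / ||grad U(x_n)||^2,
    where U(x) = 0.5 * ||(I - P_Q) A x||^2; no operator norm of A is needed."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = A @ x - proj_Q(A @ x)        # residual (I - P_Q) A x
        grad = A.T @ r                   # gradient of U at x
        g2 = np.dot(grad, grad)
        if g2 == 0.0:                    # A x already lies in Q
            return proj_C(x)
        x = proj_C(x - t * (0.5 * np.dot(r, r)) / g2 * grad)
    return x
```

For instance, with A the identity and C, Q two boxes, the iterates approach a point of C whose image under A lies in Q.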
To introduce an efficient stepsize, Altiparmak et al. [18] presented a pre-conditioning CQ algorithm for solving the SFP, as follows:
y_n = P_C^B(j_n − Γ_n ∇U_Ẽ(j_n));
j_{n+1} = P_C^B(y_n − γ_n ∇U_Ẽ(y_n));
where the stepsizes Γ_n and γ_n are updated, respectively, via
Γ_n‖∇U(y_n) − ∇U(j_n)‖ ≤ δ‖y_n − j_n‖, 0 < δ < 1, and γ_n = t U(y_n)/‖∇U_Ẽ(y_n)‖², 0 < t < 4,
where ∇U_Ẽ(j) = EA*(I − P_Q)Aj, ∇U(j) = A*(I − P_Q)Aj, E is defined in Section 3, Γ_n = σρ^{m_n} with σ > 0 and 0 < ρ < 1, and m_n is the smallest nonnegative integer such that the linesearch condition above holds. The authors analyzed the weak convergence of the algorithm (3) under these stepsizes. Pre-conditioning methods are usually applied to speed up the convergence of algorithms; for more details, see [19,20,21].
On the other hand, the inertial extrapolation technique can be used to accelerate the convergence rate of algorithms; to this end, Nesterov [22] proposed the following classical iterative scheme:
v_n = j_n + u_n(j_n − j_{n−1}), j_{n+1} = v_n − Γ_n∇F(v_n), n ≥ 1,
where u_n ∈ [0, 1) and Γ_n > 0. The point v_n is the inertial step. Inspired by the inertial idea [23,24,25,26,27], some inertial splitting algorithms have been proposed to solve the SFP [28,29,30,31]. Recently, Wang et al. [32] proposed the following double inertial splitting method to solve monotone inclusion problems:
v_n = j_n + u_n(j_n − j_{n−1}), s_n = j_n + q_n(j_n − j_{n−1}),
y_n = (I + Γ_n Y)^{−1}(I − Γ_n L)v_n,
j_{n+1} = (1 − α_n)s_n + α_n(y_n − λ_n(Ly_n − Lv_n)),
where the stepsize Γ_n is updated by
Γ_{n+1} = min{ (μ_n + δ)‖v_n − y_n‖ / ‖Lv_n − Ly_n‖, Γ_n + P̂_n }, if Lv_n − Ly_n ≠ 0; Γ_n + P̂_n, otherwise.
Here, I is the identity operator on H, Y : H → 2^H is a maximally monotone mapping, and L : H → H is a monotone and Lipschitz continuous mapping; δ ∈ (0, 1), u_n ∈ [0, 1], 0 ≤ q_n ≤ q_{n+1} < (3 + 2d − √(8d + 17))/(2d) with d ∈ (1, +∞), 0 < α_n ≤ α_{n+1} ≤ 1/(1 + d) with d ∈ (1, +∞), and Σ_{n=1}^∞ P̂_n < ∞. Its convergence is established under suitable conditions in real Hilbert spaces.
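The effect of the single inertial step in Nesterov's scheme (4) can be illustrated with a toy gradient iteration. The quadratic objective and the constants u and step in this sketch are our illustrative assumptions:

```python
import numpy as np

def inertial_gradient(grad, x0, step=0.1, u=0.9, iters=300):
    """Inertial (Nesterov-type) iteration: extrapolate v_n = x_n + u*(x_n - x_{n-1}),
    then take a gradient step x_{n+1} = v_n - step * grad(v_n)."""
    x_prev = np.asarray(x0, dtype=float)
    x = x_prev.copy()
    for _ in range(iters):
        v = x + u * (x - x_prev)          # inertial extrapolation step
        x_prev, x = x, v - step * grad(v) # gradient step at the extrapolated point
    return x
```

On a strongly convex quadratic, the extrapolation reuses the previous displacement j_n − j_{n−1} as momentum, which is exactly the acceleration mechanism the double inertial steps of this paper exploit twice per iteration.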
It is observed that, in algorithm (3), the stepsize Γ_n is required to satisfy an Armijo-like linesearch; that is, the projection of Ay_n onto Q and the projection onto C may be computed many times at each iteration. In addition, algorithm (3) needs at least two projections onto C to finish one iteration. These facts limit its effectiveness. Furthermore, its weak convergence is guaranteed by employing strong conditions such as the firm nonexpansiveness of I − P_Q. However, the investigation of algorithm (3) with double inertial effects (5) has yet to be conducted. This motivates the following question.
Question: Can we develop new modifications of algorithm (3) such that numerical improvement and new convergence results can be obtained under weaker conditions than the firm nonexpansiveness of I − P_Q?
To answer this question, our contributions in this article are as follows:
  • To adopt double inertial steps to speed up the convergence rate of the algorithm (3) and establish a new convergence result;
  • To design a new stepsize not only to reduce the number of projections of Ay_n onto Q but also to avoid Armijo-like linesearch techniques [13,14,18]; moreover, the projection P_C^B in the computation of the next iterate j_{n+1} can be removed, and thus an accelerated convergent algorithm is obtained;
  • To provide some practical examples such as signal recovery problems for demonstration and illustration.
The paper is organized as follows. In Section 2, we list some useful results for the convergence analysis of the algorithm. In Section 3, we propose a double inertial pre-conditioning CQ algorithm and prove its weak convergence. Numerical experiments are described in Section 4.

2. Preliminaries

For the reader’s convenience, we first introduce two notations; that is, ⇀ stands for weak convergence, while → stands for strong convergence. Second, we let H be a real Hilbert space, and we recall some well-known concepts related to the SFP.
Definition 1.
Let C be a subset of H and U : C → H be an operator; then,
(i) 
U is called nonexpansive on C if
‖Uj − Uy‖ ≤ ‖j − y‖, ∀ j, y ∈ C.
(ii) 
U is said to be firmly nonexpansive on C if
⟨Uj − Uy, j − y⟩ ≥ ‖Uj − Uy‖², ∀ j, y ∈ C.
Definition 2
([18]). Let H be a real Hilbert space; a differentiable function U : H → ℝ is convex if and only if
U(y) − U(j) ≥ ⟨∇U(j), y − j⟩, ∀ j, y ∈ H.
Lemma 1.
For all j, y, z ∈ H, the following hold:
(i) ‖α̃j + (1 − α̃)y‖² = α̃‖j‖² + (1 − α̃)‖y‖² − α̃(1 − α̃)‖j − y‖², ∀ α̃ ∈ (0, 1).
(ii) ⟨j − y, j − z⟩ = ½‖j − y‖² + ½‖j − z‖² − ½‖y − z‖².
Next, we provide some concepts about a positive-definite operator and a self-adjoint operator.
Let B : H → H be a bounded linear operator on a real Hilbert space. B is called self-adjoint if B = B*, where B* is the adjoint operator. Moreover, a self-adjoint operator is called positive definite if
⟨j, Bj⟩ > 0, ∀ j ∈ H, j ≠ 0.
For more details, see [33,34].
Let B be a self-adjoint positive-definite operator, and let E = B^{−1}. Thereby, we define the norm ‖·‖_B as
‖j‖_B² = ⟨j, Bj⟩ = ⟨j, j⟩_B, ∀ j ∈ H,
as in [18]. The B-projection operator onto C related to the norm ‖·‖_B is denoted by P_C^B; that is,
P_C^B(j) = argmin_{z ∈ C} ‖j − z‖_B.
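To make the B-projection concrete, consider the special case where C is a hyperplane {z : ⟨a, z⟩ = b} (our illustrative choice; general closed convex C admits no closed form). Minimizing ‖j − z‖_B subject to ⟨a, z⟩ = b via a Lagrange multiplier gives z = j − (⟨a, j⟩ − b)/⟨a, Ea⟩ · Ea with E = B^{−1}:

```python
import numpy as np

def proj_B_hyperplane(j, B, a, b):
    """B-projection of j onto the hyperplane {z : <a, z> = b}:
    minimize ||j - z||_B subject to <a, z> = b. Stationarity of the
    Lagrangian yields z = j - (<a, j> - b) / <a, E a> * E a, E = B^{-1}."""
    Ea = np.linalg.solve(B, a)            # compute E a without forming B^{-1}
    return j - (np.dot(a, j) - b) / np.dot(a, Ea) * Ea
```

Note that the B-projection generally differs from the Euclidean projection: the weighting by B tilts the foot of the perpendicular.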
Now, we give the following related properties.
Lemma 2
([18]). Let C be a nonempty closed convex subset of a real Hilbert space H; the B-projection operator onto C fulfills the following relationships.
(i) ⟨B(j − P_C^B j), y − P_C^B j⟩ ≤ 0, ∀ j ∈ H and y ∈ C.
(ii) ‖j ± z‖_B² = ‖j‖_B² ± 2⟨j, Bz⟩ + ‖z‖_B², ∀ j, z ∈ H.
(iii) ‖P_C^B j − P_C^B y‖_B² ≤ ‖j − y‖_B² − ‖(P_C^B j − j) − (P_C^B y − y)‖_B², ∀ j, y ∈ H.
(iv) ‖P_C^B j − y‖_B² ≤ ‖j − y‖_B² − ‖j − P_C^B j‖_B², ∀ j ∈ H, y ∈ C.
Lemma 3
([29,35]). Let {φ_n} be a sequence of nonnegative numbers fulfilling
φ_{n+1} ≤ ϕ_n φ_n + U_n, ∀ n ∈ ℕ,
where {ϕ_n} and {U_n} are sequences of nonnegative numbers such that {ϕ_n} ⊂ [1, +∞), Σ_{n=1}^∞ (ϕ_n − 1) < ∞, and Σ_{n=1}^∞ U_n < ∞. Then, lim_{n→∞} φ_n exists.
Lemma 4
([31]). Let C ⊂ H be a nonempty set, and let {j_n} be a sequence in H that fulfills the following conditions:
(i) lim_{n→∞} ‖j_n − j‖ exists for every j ∈ C;
(ii) every weak sequential cluster point of {j_n} belongs to C.
Then, {j_n} converges weakly to a point in C.
Lemma 5
(Lemma 2.2 [31]). Let {Q_n} ⊂ [0, ∞) and {b̃_n} ⊂ [0, ∞) be sequences such that
(i) Q_{n+1} − Q_n ≤ ϱ_n(Q_n − Q_{n−1}) + b̃_n;
(ii) Σ_{n=1}^∞ b̃_n < ∞;
(iii) {ϱ_n} ⊂ [0, ϱ], where ϱ ∈ [0, 1).
Then, {Q_n} is a convergent sequence, and Σ_{n=1}^∞ [Q_n − Q_{n−1}]_+ < ∞, where [ζ]_+ = max{ζ, 0} for any ζ ∈ ℝ.

3. Weak Convergence

In this section, we propose a double inertial pre-conditioning CQ algorithm for solving the SFP and analyze a weak convergence result under mild conditions. First, we present some assumptions.
(P1) The solution set of the SFP is nonempty; that is, Ω ≠ ∅.
(P2) Let C and Q be nonempty closed convex subsets of real Hilbert spaces H_1 and H_2, respectively, and A : H_1 → H_2 be a bounded linear operator with adjoint A*. Let B and Ẽ be self-adjoint positive-definite operators such that A*Ẽ = EA*, with E = B^{−1}.
(P3) Let κ_min(B) and κ_max(B) be the minimum and maximum eigenvalues of B, respectively. For the norm ‖·‖_B, we have
κ_min(B)‖z‖² ≤ ‖z‖_B² ≤ κ_max(B)‖z‖², ∀ z ∈ H,
which is suggested in Ref. [36].
(P4) The sequences {u_n}, {q_n}, {α_n}, and {h_n} satisfy the following conditions.
(i) 0 ≤ u_n ≤ 1;
(ii) 0 ≤ q_n ≤ q_{n+1} ≤ q < (3 + 2d − √(8d + 17))/(2d), d ∈ (1, ∞);
(iii) 0 < α < α_n ≤ α_{n+1} ≤ 1/(1 + d), d ∈ (1, ∞);
(iv) h_n = (1 − α_n)q_n + α_n u_n is a non-decreasing sequence.
(P5) Let P_Q be the Ẽ-projection onto Q related to the norm ‖·‖_Ẽ. For the continuously differentiable convex functions U, U_Ẽ : H_1 → ℝ, we define
U(j) = ½‖(I − P_Q)Aj‖²
and
U_Ẽ(j) = ½‖(I − P_Q)Aj‖_Ẽ² = ½⟨Ẽ((I − P_Q)Aj), (I − P_Q)Aj⟩;
their corresponding gradients are, respectively,
∇U_Ẽ(j) = EA*(I − P_Q)Aj and ∇U(j) = A*(I − P_Q)Aj.
Proposition 1
([18]). Assume that condition (P1) holds. Then the following statements are equivalent:
(i) g ∈ Ω;
(ii) g ∈ C and U(g) = 0;
(iii) g ∈ C and ∇U(g) = 0.
In what follows, we illustrate that our designed stepsize scheme is well-defined.
Lemma 6.
Let {Γ_n} be the sequence produced by Algorithm 1. Then lim_{n→∞} Γ_n exists; we denote lim_{n→∞} Γ_n = Γ̃, where Γ̃ ≥ min{δ/‖A‖², Γ_1} and Γ_1 > 0 is the initial stepsize.
Proof. 
In the case of M_n > 0, and since P_Q is firmly nonexpansive, we derive
⟨∇U(y_n) − ∇U(v_n), y_n − v_n⟩ = ⟨A*(I − P_Q)Ay_n − A*(I − P_Q)Av_n, y_n − v_n⟩
= ⟨(I − P_Q)Ay_n − (I − P_Q)Av_n, Ay_n − Av_n⟩
= ‖Ay_n − Av_n‖² − ⟨P_Q(Ay_n) − P_Q(Av_n), Ay_n − Av_n⟩
≤ ‖Ay_n − Av_n‖² − ‖P_Q(Ay_n) − P_Q(Av_n)‖²
≤ ‖Ay_n − Av_n‖²
≤ ‖A‖²‖y_n − v_n‖²,
which shows that
δ‖y_n − v_n‖² / ⟨∇U(y_n) − ∇U(v_n), y_n − v_n⟩ ≥ δ/‖A‖².
This further discloses that
Γ_{n+1} = min{ δ‖y_n − v_n‖² / ⟨∇U(y_n) − ∇U(v_n), y_n − v_n⟩, ϕ_nΓ_n } ≥ min{ δ/‖A‖², Γ_n },
where ϕ_n ≥ 1. By induction, the sequence {Γ_n} has the lower bound min{δ/‖A‖², Γ_1}. From (8), it follows that
Γ_{n+1} ≤ ϕ_nΓ_n.
Thanks to Lemma 3, lim_{n→∞} Γ_n exists; denote lim_{n→∞} Γ_n = Γ̃. Since the sequence {Γ_n} has the lower bound min{δ/‖A‖², Γ_1}, we conclude that Γ̃ > 0.    □
Now, our algorithm is described as follows.
Algorithm 1 A Pre-conditioning CQ Algorithm With Double Inertia and Self-updated Stepsizes For Solving SFP
 Step 0. Given j_0, j_1 ∈ H_1, Γ_1 > 0, and δ ∈ (0, κ_min(B)/2). Take the sequence {ϕ_n} fulfilling Lemma 3.
 Step 1. Given j_{n−1}, j_n (n ≥ 1), compute
v_n = j_n + u_n(j_n − j_{n−1}), s_n = j_n + q_n(j_n − j_{n−1}),
y_n = P_C^B(v_n − Γ_n∇U_Ẽ(v_n)), j_{n+1} = (1 − α_n)s_n + α_n(y_n − γ_n∇U_Ẽ(y_n)),
where the stepsize Γ_n is updated by
Γ_{n+1} = min{ δ‖y_n − v_n‖² / ⟨∇U(y_n) − ∇U(v_n), y_n − v_n⟩, ϕ_nΓ_n }, if M_n > 0; ϕ_nΓ_n, otherwise,
where M_n = ⟨∇U(y_n) − ∇U(v_n), y_n − v_n⟩,
and the stepsize γ_n is computed by
γ_n = t U(y_n)/‖∇U_Ẽ(y_n)‖_B², 0 < t < 2.
If y_n = v_n, then stop: y_n is a solution of the SFP. Otherwise, set n ≔ n + 1 and go to Step 1.
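A schematic NumPy transcription of Algorithm 1 may help fix ideas. For simplicity, we take B = E = I (so ‖·‖_B and P_C^B reduce to their Euclidean counterparts) and use illustrative constant values for u_n, q_n, α_n; these simplifications are ours, not the paper's:

```python
import numpy as np

def double_inertial_cq(A, x0, x1, proj_C, proj_Q,
                       delta=0.4, t=1.0, gamma1=1.0, iters=500):
    """Sketch of Algorithm 1 with B = E = I. Here U(x) = 0.5*||(I-P_Q)Ax||^2,
    grad U(x) = A^T (I - P_Q) A x, and only one projection onto C per iteration."""
    gradU = lambda x: A.T @ (A @ x - proj_Q(A @ x))
    U = lambda x: 0.5 * np.sum((A @ x - proj_Q(A @ x)) ** 2)
    x_prev, x, Gam = np.asarray(x0, float), np.asarray(x1, float), gamma1
    for n in range(1, iters + 1):
        u_n, q_n, a_n = 0.5, 0.1, 0.4           # illustrative parameters
        phi_n = 1.0 + 1.0 / (n + 1) ** 1.1       # sum(phi_n - 1) < infinity
        v = x + u_n * (x - x_prev)               # first inertial step
        s = x + q_n * (x - x_prev)               # second inertial step
        y = proj_C(v - Gam * gradU(v))           # the only projection onto C
        if np.allclose(y, v):
            return y                             # y solves the SFP
        gy = gradU(y)
        g2 = np.dot(gy, gy)
        gam = t * U(y) / g2 if g2 > 0 else 0.0   # self-updated stepsize gamma_n
        x_prev, x = x, (1 - a_n) * s + a_n * (y - gam * gy)
        M = np.dot(gy - gradU(v), y - v)         # update rule for Gamma_{n+1}
        Gam = min(delta * np.dot(y - v, y - v) / M, phi_n * Gam) if M > 0 else phi_n * Gam
    return x
```

Note that, unlike the linesearch in (3), no trial evaluations of the projection onto Q are needed: Γ_{n+1} is computed in closed form from quantities already at hand.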
Remark 1.
From Algorithm 1, we observe that the iterate j_{n+1} can be computed directly via the step (7), which does not involve the projection P_C^B, in contrast to the original algorithm [18]. Thus, the numerical performance is improved.
Theorem 1.
Assume that conditions (P1)–(P5) hold. Then the sequence {j_n} produced by Algorithm 1 converges weakly to a point z ∈ Ω.
Proof. 
Let z ∈ Ω; then ∇U(z) = 0, z = P_C^B z, and P_Q(Az) = Az. Now, by Lemma 2(ii), we have
‖y_n − γ_n∇U_Ẽ(y_n) − z‖_B² = ‖y_n − z‖_B² + γ_n²‖∇U_Ẽ(y_n)‖_B² − 2γ_n⟨y_n − z, ∇U_Ẽ(y_n)⟩_B.
By Lemma 1(ii), we show that
⟨y_n − z, ∇U_Ẽ(y_n)⟩_B = ⟨y_n − z, ∇U(y_n)⟩
= ⟨Ay_n − Az, (I − P_Q)Ay_n⟩
= ⟨Ay_n − P_Q(Ay_n) + P_Q(Ay_n) − Az, (I − P_Q)Ay_n⟩
= −⟨P_Q(Ay_n) − Az, P_Q(Ay_n) − Ay_n⟩ + ‖P_Q(Ay_n) − Ay_n‖²
= −½(‖P_Q(Ay_n) − Az‖² + ‖P_Q(Ay_n) − Ay_n‖² − ‖Ay_n − Az‖²) + ‖P_Q(Ay_n) − Ay_n‖²
≥ ½‖P_Q(Ay_n) − Ay_n‖²;
we also prove
⟨v_n − z, ∇U_Ẽ(v_n)⟩_B ≥ ½‖P_Q(Av_n) − Av_n‖².
By the definition of the iterate y_n and Lemma 2(iv), we have
‖y_n − z‖_B² = ‖P_C^B(v_n − Γ_n∇U_Ẽ(v_n)) − z‖_B²
≤ ‖v_n − Γ_n∇U_Ẽ(v_n) − z‖_B² − ‖y_n − v_n + Γ_n∇U_Ẽ(v_n)‖_B²
= ‖v_n − z‖_B² − 2Γ_n⟨v_n − z, ∇U_Ẽ(v_n)⟩_B − ‖y_n − v_n‖_B² − 2Γ_n⟨y_n − v_n, ∇U_Ẽ(v_n)⟩_B.
Now, we estimate the term ⟨y_n − v_n, ∇U_Ẽ(v_n)⟩_B. By the differentiable convexity of U, we obtain
⟨y_n − v_n, ∇U_Ẽ(v_n)⟩_B = ⟨y_n − v_n, ∇U(v_n)⟩
= ⟨y_n − v_n, ∇U(v_n) − ∇U(y_n)⟩ + ⟨y_n − v_n, ∇U(y_n)⟩
≥ −⟨y_n − v_n, ∇U(y_n) − ∇U(v_n)⟩ + U(y_n) − U(v_n)
≥ −⟨y_n − v_n, ∇U(y_n) − ∇U(v_n)⟩ − U(v_n).
After rearrangement by substituting (12) and (14) into (13), we have
‖y_n − z‖_B² ≤ ‖v_n − z‖_B² + 2Γ_n⟨y_n − v_n, ∇U(y_n) − ∇U(v_n)⟩ − ‖y_n − v_n‖_B²
= ‖v_n − z‖_B² + 2(Γ_n/Γ_{n+1})Γ_{n+1}⟨y_n − v_n, ∇U(y_n) − ∇U(v_n)⟩ − ‖y_n − v_n‖_B²
≤ ‖v_n − z‖_B² + 2(Γ_n/Γ_{n+1})δ‖y_n − v_n‖² − ‖y_n − v_n‖_B²
≤ ‖v_n − z‖_B² − (κ_min(B) − 2(Γ_n/Γ_{n+1})δ)‖y_n − v_n‖².
Since
lim_{n→∞} (κ_min(B) − 2(Γ_n/Γ_{n+1})δ) = κ_min(B) − 2δ > 0,
where δ ∈ (0, κ_min(B)/2), there exists N ≥ 0 such that κ_min(B) − 2(Γ_n/Γ_{n+1})δ > 0 for all n ≥ N. So, for any n ≥ N,
‖y_n − z‖_B² ≤ ‖v_n − z‖_B² − (κ_min(B) − 2δ)‖y_n − v_n‖².
By Lemma 1(i), we deduce from the iterative step j_{n+1} that
‖j_{n+1} − z‖_B² = ‖(1 − α_n)s_n + α_n(y_n − γ_n∇U_Ẽ(y_n)) − z‖_B²
= (1 − α_n)‖s_n − z‖_B² + α_n‖y_n − γ_n∇U_Ẽ(y_n) − z‖_B² − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B²
= (1 − α_n)‖s_n − z‖_B² + α_n(‖y_n − z‖_B² + γ_n²‖∇U_Ẽ(y_n)‖_B² − 2γ_n⟨y_n − z, ∇U_Ẽ(y_n)⟩_B) − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B².
Substituting (11) and (16) into (17), for any n ≥ N, one has
‖j_{n+1} − z‖_B² ≤ (1 − α_n)‖s_n − z‖_B² + α_n(‖v_n − z‖_B² − (κ_min(B) − 2δ)‖y_n − v_n‖²) + α_n(γ_n²‖∇U_Ẽ(y_n)‖_B² − 2γ_nU(y_n)) − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B²
= (1 − α_n)‖s_n − z‖_B² + α_n‖v_n − z‖_B² − α_n(κ_min(B) − 2δ)‖y_n − v_n‖² + α_n(t²U²(y_n)/‖∇U_Ẽ(y_n)‖_B² − 2tU²(y_n)/‖∇U_Ẽ(y_n)‖_B²) − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B²
= (1 − α_n)‖s_n − z‖_B² + α_n‖v_n − z‖_B² − α_n(κ_min(B) − 2δ)‖y_n − v_n‖² − α_n t(2 − t)U²(y_n)/‖∇U_Ẽ(y_n)‖_B² − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B².
Next, the inertial steps s_n and v_n, respectively, yield
‖s_n − z‖_B² = ‖j_n + q_n(j_n − j_{n−1}) − z‖_B² = ‖(1 + q_n)(j_n − z) − q_n(j_{n−1} − z)‖_B² = (1 + q_n)‖j_n − z‖_B² − q_n‖j_{n−1} − z‖_B² + q_n(1 + q_n)‖j_n − j_{n−1}‖_B²
and
‖v_n − z‖_B² = (1 + u_n)‖j_n − z‖_B² − u_n‖j_{n−1} − z‖_B² + u_n(1 + u_n)‖j_n − j_{n−1}‖_B².
Substituting (19) and (20) into (18), for any n ≥ N, we directly deduce that
‖j_{n+1} − z‖_B² ≤ (1 − α_n)((1 + q_n)‖j_n − z‖_B² − q_n‖j_{n−1} − z‖_B² + q_n(1 + q_n)‖j_n − j_{n−1}‖_B²) + α_n((1 + u_n)‖j_n − z‖_B² − u_n‖j_{n−1} − z‖_B² + u_n(1 + u_n)‖j_n − j_{n−1}‖_B²) − α_n(κ_min(B) − 2δ)‖y_n − v_n‖² − α_n t(2 − t)U²(y_n)/‖∇U_Ẽ(y_n)‖_B² − (1 − α_n)α_n‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B².
Since
α_n²‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B² = ‖j_{n+1} − s_n‖_B² = ‖j_{n+1} − j_n − q_n(j_n − j_{n−1})‖_B²
≥ ‖j_{n+1} − j_n‖_B² + q_n²‖j_n − j_{n−1}‖_B² − 2q_n‖j_{n+1} − j_n‖_B‖j_n − j_{n−1}‖_B
≥ (1 − q_n)‖j_{n+1} − j_n‖_B² + (q_n² − q_n)‖j_n − j_{n−1}‖_B²,
after rearrangement by substituting (22) into (21), for any n ≥ N, we obtain
‖j_{n+1} − z‖_B² ≤ (1 − α_n)((1 + q_n)‖j_n − z‖_B² − q_n‖j_{n−1} − z‖_B² + q_n(1 + q_n)‖j_n − j_{n−1}‖_B²) + α_n((1 + u_n)‖j_n − z‖_B² − u_n‖j_{n−1} − z‖_B² + u_n(1 + u_n)‖j_n − j_{n−1}‖_B²) − α_n(κ_min(B) − 2δ)‖y_n − v_n‖² − α_n t(2 − t)U²(y_n)/‖∇U_Ẽ(y_n)‖_B² − ((1 − α_n)/α_n)((1 − q_n)‖j_{n+1} − j_n‖_B² + (q_n² − q_n)‖j_n − j_{n−1}‖_B²)
≤ (1 + (1 − α_n)q_n + α_n u_n)‖j_n − z‖_B² − ((1 − α_n)q_n + α_n u_n)‖j_{n−1} − z‖_B² + ((1 − α_n)(1 + q_n)q_n + α_n(1 + u_n)u_n − ((1 − α_n)/α_n)(q_n² − q_n))‖j_n − j_{n−1}‖_B² − ((1 − α_n)/α_n)(1 − q_n)‖j_{n+1} − j_n‖_B²
≤ (1 + h_n)‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B² + g_n‖j_n − j_{n−1}‖_B² − c_n‖j_{n+1} − j_n‖_B²,
where
0 < t < 2, h_n = (1 − α_n)q_n + α_n u_n, g_n = (1 − α_n)(1 + q_n)q_n + α_n(1 + u_n)u_n − ((1 − α_n)/α_n)(q_n² − q_n), and c_n = ((1 − α_n)/α_n)(1 − q_n).
Define
Λ_n = ‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B² + g_n‖j_n − j_{n−1}‖_B².
Since the sequence {h_n} is non-decreasing, by (23) one obtains
Λ_{n+1} − Λ_n = ‖j_{n+1} − z‖_B² − (1 + h_{n+1})‖j_n − z‖_B² + h_n‖j_{n−1} − z‖_B² − g_n‖j_n − j_{n−1}‖_B² + g_{n+1}‖j_{n+1} − j_n‖_B²
≤ ‖j_{n+1} − z‖_B² − (1 + h_n)‖j_n − z‖_B² + h_n‖j_{n−1} − z‖_B² − g_n‖j_n − j_{n−1}‖_B² + g_{n+1}‖j_{n+1} − j_n‖_B²
≤ −(c_n − g_{n+1})‖j_{n+1} − j_n‖_B².
According to (P4), we have
c_n − g_{n+1} = ((1 − α_n)/α_n)(1 − q_n) − (1 − α_{n+1})(1 + q_{n+1})q_{n+1} − α_{n+1}(1 + u_{n+1})u_{n+1} + ((1 − α_{n+1})/α_{n+1})(q_{n+1}² − q_{n+1})
≥ ((1 − α_{n+1})/α_{n+1})(1 − 2q_{n+1} + q_{n+1}²) − (1 − α_{n+1})(q_{n+1} + q_{n+1}²) − 2α_{n+1}
≥ d(1 − 2q + q²) − (d/(1 + d))(q + q²) − 2/(1 + d)
= (1/(1 + d))(d²q² − (3d + 2d²)q + (d² + d − 2)).
Combining (24) with (25) implies that
Λ_{n+1} − Λ_n ≤ −κ̃‖j_{n+1} − j_n‖_B²,
where κ̃ = (d²q² − (3d + 2d²)q + (d² + d − 2))/(1 + d). Since q < (3 + 2d − √(8d + 17))/(2d) with d ∈ (1, ∞), one has κ̃ > 0. With the help of (26), we see that
Λ_{n+1} − Λ_n ≤ 0.
Thereby, the sequence {Λ_n} is non-increasing. We know from (P4) that g_n > 0. Therefore,
Λ_n = ‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B² + g_n‖j_n − j_{n−1}‖_B² ≥ ‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B².
This is equivalent to the following estimate:
‖j_n − z‖_B² ≤ h_n‖j_{n−1} − z‖_B² + Λ_n ≤ h‖j_{n−1} − z‖_B² + Λ_1 ≤ ⋯ ≤ hⁿ‖j_0 − z‖_B² + Λ_1(1 + h + ⋯ + h^{n−1}) ≤ hⁿ‖j_0 − z‖_B² + Λ_1/(1 − h),
where h = (5 + 2d − √(8d + 17))/(2 + 2d) < 1.
Using the definition of the sequence {Λ_n}, we have
Λ_{n+1} = ‖j_{n+1} − z‖_B² − h_{n+1}‖j_n − z‖_B² + g_{n+1}‖j_{n+1} − j_n‖_B² ≥ −h_{n+1}‖j_n − z‖_B².
Combining (29) together with (30), we deduce that
−Λ_{n+1} ≤ h_{n+1}‖j_n − z‖_B² ≤ h‖j_n − z‖_B² ≤ h^{n+1}‖j_0 − z‖_B² + hΛ_1/(1 − h).
Using (26) and (31) again, we see that
κ̃ Σ_{k=1}^n ‖j_{k+1} − j_k‖_B² ≤ Λ_1 − Λ_{n+1} ≤ h^{n+1}‖j_0 − z‖_B² + Λ_1/(1 − h) ≤ ‖j_0 − z‖_B² + Λ_1/(1 − h).
So, we conclude that
Σ_{k=1}^∞ ‖j_{k+1} − j_k‖_B² < ∞.
Thereby,
lim_{n→∞} ‖j_{n+1} − j_n‖_B = 0,
which, combined with the boundedness of {u_n}, directly shows that
‖j_{n+1} − v_n‖_B² = ‖j_{n+1} − j_n‖_B² + u_n²‖j_n − j_{n−1}‖_B² − 2u_n⟨j_{n+1} − j_n, j_n − j_{n−1}⟩_B → 0 as n → ∞.
We also have
‖j_n − v_n‖_B ≤ ‖j_n − j_{n+1}‖_B + ‖j_{n+1} − v_n‖_B → 0 as n → ∞.
From (23), we know that
‖j_{n+1} − z‖_B² ≤ (1 + h_n)‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B² + g_n‖j_n − j_{n−1}‖_B² − c_n‖j_{n+1} − j_n‖_B²
≤ (1 + h_n)‖j_n − z‖_B² − h_n‖j_{n−1} − z‖_B² + g_n‖j_n − j_{n−1}‖_B²
= ‖j_n − z‖_B² + h_n(‖j_n − z‖_B² − ‖j_{n−1} − z‖_B²) + g_n‖j_n − j_{n−1}‖_B².
Owing to 0 ≤ h_n ≤ h < 1 and the boundedness of {g_n}, and by Lemma 5 and (33), there exists g ∈ [0, ∞) such that
lim_{n→∞} ‖j_n − z‖_B = g.
In view of (19) and the boundedness of {q_n}, we arrive at
‖s_n − z‖_B² = (1 + q_n)‖j_n − z‖_B² − q_n‖j_{n−1} − z‖_B² + q_n(1 + q_n)‖j_n − j_{n−1}‖_B²
= ‖j_n − z‖_B² + q_n(‖j_n − z‖_B² − ‖j_{n−1} − z‖_B²) + q_n(1 + q_n)‖j_n − j_{n−1}‖_B² → g² as n → ∞.
Similarly, by the boundedness of {u_n}, one gets
lim_{n→∞} ‖v_n − z‖_B = g.
The inequality (18) can be rewritten as
α_n((κ_min(B) − 2δ)‖y_n − v_n‖² + t(2 − t)U²(y_n)/‖∇U_Ẽ(y_n)‖_B² + (1 − α_n)‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B²) ≤ (1 − α_n)‖s_n − z‖_B² + α_n‖v_n − z‖_B² − ‖j_{n+1} − z‖_B².
This, together with (38)–(40) and 0 < α < α_n ≤ 1/(1 + d), d ∈ (1, ∞), shows that
lim_{n→∞} ‖y_n − v_n‖² = 0, lim_{n→∞} U²(y_n)/‖∇U_Ẽ(y_n)‖_B² = 0, and lim_{n→∞} ‖s_n − y_n + γ_n∇U_Ẽ(y_n)‖_B² = 0.
Since the gradient mapping ∇U is Lipschitz continuous, ∇U_Ẽ is Lipschitz continuous too. Thus, {∇U_Ẽ(y_n)} is bounded. Therefore,
lim_{n→∞} U(y_n) = 0.
By lim_{n→∞} ‖y_n − v_n‖² = 0, we get
lim_{n→∞} ‖y_n − v_n‖_B² = 0.
Let z̄ be a weak sequential cluster point of {j_n}; by the boundedness of the sequence {j_n}, there exists a subsequence {j_{n_i}} of {j_n} such that j_{n_i} ⇀ z̄ as i → ∞. From (36) and (44), we know that y_{n_i} ⇀ z̄ as i → ∞; since y_{n_i} ∈ C and C is weakly closed, we have z̄ ∈ C. Using (43) and the weak lower semicontinuity of U, one sees that
U(z̄) ≤ lim inf_{i→∞} U(y_{n_i}) = 0, i.e., ‖Az̄ − P_Q(Az̄)‖ = 0,
which indicates Az̄ ∈ Q, and from Proposition 1(ii), z̄ ∈ Ω. Together with Lemma 4, this proves that {j_n} converges weakly to a point in Ω.    □

4. Numerical Experiments

In this section, some numerical experiments are provided to compare our proposed Algorithm 1 (abbreviated Alg.1) with Algorithm 3.2 of [18] (PCQ), Algorithm 3.1 of [31] (Sahu 2021), and Algorithm 1 of [37] (Suantai 2022). All codes were written in MATLAB R2018a and run on a desktop PC with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz and 8.00 GB RAM.
Example 1
(LASSO problem [38]). The concrete form of the LASSO problem is as follows:
min{ ½‖Aj − V‖² : j ∈ ℝ^N, ‖j‖_1 ≤ κ },
where A ∈ ℝ^{M×N} with M < N, V ∈ ℝ^M, and κ > 0. This problem is devoted to finding a sparse solution to the SFP. The sampling matrix A is produced by a standard normal distribution with mean zero and unit variance. The truly sparse signal j is built by distributing W nonzero elements uniformly at random in the interval [−2, 2], while the rest are kept zero. The sample data are V = Aj.
In consideration of the SFP, we define C = {j̃ : ‖j̃‖_1 ≤ κ}, κ = W, and Q = {V}. We adopt the subgradient projection because there is no closed-form expression for the projection onto the closed convex set C. Let c(j̃) = ‖j̃‖_1 − κ and
C_n = {j̃ : c(v_n) + ⟨ω_n, j̃ − v_n⟩ ≤ 0},
where ω_n ∈ ∂c(v_n). The orthogonal projection of a point j̃ ∈ ℝ^N onto C_n can be computed by
P_{C_n}(j̃) = j̃, if c(v_n) + ⟨ω_n, j̃ − v_n⟩ ≤ 0; j̃ − ((c(v_n) + ⟨ω_n, j̃ − v_n⟩)/‖ω_n‖²)ω_n, otherwise.
The subdifferential of c at v_n is given componentwise by
∂c(v_n)_i = {1}, if (v_n)_i > 0; [−1, 1], if (v_n)_i = 0; {−1}, if (v_n)_i < 0.
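The relaxed projection above is cheap to implement, since C_n is a halfspace. A small sketch, using the sign vector of v_n as the particular subgradient ω_n:

```python
import numpy as np

def proj_halfspace_l1(x, v, kappa):
    """Projection onto C_n = {z : c(v) + <w, z - v> <= 0}, the halfspace
    relaxing the l1-ball {z : ||z||_1 <= kappa}, where c(z) = ||z||_1 - kappa
    and w = sign(v) is a subgradient of c at v."""
    w = np.sign(v)
    slack = (np.sum(np.abs(v)) - kappa) + np.dot(w, x - v)
    if slack <= 0:
        return np.asarray(x, dtype=float)   # x already lies in C_n
    return x - slack / np.dot(w, w) * w     # closed-form halfspace projection
```

Since C ⊂ C_n, replacing P_C with P_{C_n} keeps every iterate on the feasible side of the linearized constraint at O(N) cost per iteration.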
We initialize the algorithms at the origin and define
E_n = ‖j_n − j‖ / max{1, ‖j_n‖}.
To test all methods under the same setting for different M, N, and W, we limit the number of iterations to 7000; the smaller the value of E_n, the better the algorithm behaves. The corresponding results are reported in Table 1.
The second problem is the recovery of the signal j when M = 960, N = 4096, and W = 120, and the M × N matrix A is randomly generated with independent samples from a standard Gaussian distribution. The original signal j contains 120 randomly placed ±1 spikes. The initial point is j_0 = 0, and we adopt the following mean squared error for measuring the restoration accuracy:
MSE = (1/N)‖j_n − j‖².
The smaller the MSE, the better the signal recovery.
The parameters for all algorithms were chosen in the following manner:
For Algorithm 1, q_n = 0.1 − 1/(1000 + n), α_n = 0.46 − 1/(1000 + n), and ϕ_n = 1/(n + 1)^{1.1} + 1;
For PCQ, σ = 1, ρ = 0.5, t = 1.2, and 0 < δ < κ_min(B)/2, as suggested in [18];
For Sahu 2021, θ = 0.3 and δ = 0.2, as suggested in [31];
For Suantai 2022, we set σ = 1, ρ = 0.5, δ = 0.3, t = 2 × 10⁻⁴, and θ = 0.3, as in [37].
We present the original signal and restored signal, as shown in Figure 1. Then, we compare the CPU time and E n for all the algorithms as shown in Table 1.
Remark 2.
From Figure 1, all algorithms finish the signal recovery successfully; we observe that Algorithm 1 uses less CPU time and obtains a smaller MSE than Sahu 2021 [31], Suantai 2022 [37], and PCQ [18]. Thereby, our proposed algorithm runs faster than the others. According to Table 1, however, the E_n we obtain is larger than that of the other compared algorithms under the same parameters. Moreover, our algorithm is sensitive to the parameter choice: if parameters larger or smaller than the fixed values above are chosen, it becomes difficult for our algorithm to recover the signal.

5. Conclusions

In this paper, we use double inertia and new stepsizes to modify the pre-conditioning CQ algorithm for solving the SFP. The weak convergence of the algorithm is established under mild conditions. Then, we use the LASSO problem to show that our algorithm performs well.

Author Contributions

Methodology, Y.Z. and X.M.; Software, Y.Z. and X.M.; Validation, X.M.; Formal analysis, Y.Z. and X.M.; Writing—original draft, Y.Z. and X.M.; Writing—review & editing, Y.Z. and X.M.; Funding acquisition, Y.Z. and X.M. All authors have read and agreed to the published version of the manuscript.

Funding

Yu Zhang was supported by the Natural Science Foundation of Hubei Province (No. 2025AFC080), the Scientific Research Project of Hubei Provincial Department of Education (Nos. D20243001, 23Q166), the Scientific Research Foundation of Hubei University of Education (No. ESRC20220008), and the Foundation for Talent Introduction Innovative Research Team of Hubei Provincial Department of Education (No. T2022034).

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Censor, Y.; Elfving, T. A multi-projection algorithm using Bregman projection in a product space. Numer. Algorithms 1994, 8, 221–239. [Google Scholar] [CrossRef]
  2. Byrne, C. A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 2004, 20, 103–120. [Google Scholar] [CrossRef]
  3. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453. [Google Scholar] [CrossRef]
  4. Ha, N.S. Weak and strong convergence theorems for the mixed split feasibility problem in Hilbert spaces. J. Glob. Optim. 2025, 93, 1053–1077. [Google Scholar] [CrossRef]
  5. Zhang, D.; Ye, M. Ball selected relaxation inertial projection algorithm for multiple-sets split feasibility problem. Optim. Eng. 2025. [Google Scholar] [CrossRef]
  6. Cao, Y.; Peng, Y.; Chen, Y.; Shi, L. Several relaxed CQ-algorithms for the split feasibility problem with multiple output sets. J. Appl. Math. Comput. 2025, 71, 5231–5258. [Google Scholar] [CrossRef]
  7. Zhan, W.; Yu, H. A Novel Relaxed Method for the Split Feasibility Problem in Hilbert Spaces. Bull. Iran. Math. Soc. 2025, 51, 34. [Google Scholar] [CrossRef]
  8. Huong, V.T.; Xu, H.K.; Yen, N.D. Stability analysis of split equality and split feasibility problems. J. Glob. Optim. 2025, 92, 411–429. [Google Scholar] [CrossRef]
  9. Tong, X.; Ling, T.; Shi, L. Self-adaptive relaxed CQ algorithms for solving split feasibility problem with multiple output sets. J. Appl. Math. Comput. 2024, 70, 1441–1469. [Google Scholar] [CrossRef]
  10. Tuyen, T.M.; Ha, N.S. An explicit iterative algorithm for solving the split common solution problem with multiple output sets. Math. Meth. Oper. Res. 2025, 102, 105–129. [Google Scholar] [CrossRef]
  11. López, G.; Martin, V.; Wang, F.; Xu, H.K. Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 2012, 28, 85004. [Google Scholar] [CrossRef]
  12. Yang, Q. On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 2005, 302, 166–179. [Google Scholar] [CrossRef]
  13. Bnouhachem, A.; Noor, M.A.; Khalfaoui, M.; Zhaohan, S. On descent-projection method for solving the split feasibility problems. J. Glob. Optim. 2012, 54, 627–639. [Google Scholar] [CrossRef]
  14. Dong, Q.L.; Yao, Y.; He, S. Weak convergence theorems of the modified relaxed projection algorithms for the split feasibility problem in Hilbert spaces. Optim. Lett. 2014, 8, 1031–1046. [Google Scholar] [CrossRef]
  15. Gibali, A.; Liu, L.W.; Tang, Y.C. Note on the modified relaxation CQ algorithm for the split feasibility problem. Optim. Lett. 2018, 12, 817–830. [Google Scholar] [CrossRef]
  16. Kesornprom, S.; Pholasa, N.; Cholamjiak, P. On the convergence analysis of the gradient-CQ algorithms for the split feasibility problem. Numer. Algorithms 2020, 84, 997–1017. [Google Scholar] [CrossRef]
  17. Alvarez, F.; Attouch, H. An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal. 2001, 9, 3–11. [Google Scholar] [CrossRef]
  18. Altiparmak, E.; Jolaoso, L.O.; ur Rehman, H. Pre-conditioning CQ algorithm for solving the split feasibility problem and its application to image restoration problem. Optimization 2025, 13, 3123–3141. [Google Scholar] [CrossRef]
  19. Bauschke, H.H.; Borwein, J.M. On projection algorithms for solving convex feasibility problems. SIAM Rev. 1996, 38, 367–426. [Google Scholar] [CrossRef]
  20. Byrne, C.L. Iterative projection onto convex sets using multiple Bregman distances. Inverse Probl. 1999, 15, 1295–1313. [Google Scholar] [CrossRef]
  21. Wang, P.; Zhou, H. A preconditioning method of the CQ algorithm for solving an extended split feasibility problem. J. Inequal. Appl. 2014, 1, 1–11. [Google Scholar] [CrossRef]
  22. Nesterov, Y. A method for solving the convex programming problem with convergence rate O(1/k2). Dokl. Akad. Nauk SSSR 1983, 269, 543–547. [Google Scholar]
  23. Attouch, H.; Peypouquet, J. The Rate of Convergence of Nesterov’s Accelerated Forward-Backward Method is Actually Faster Than 1/k2. SIAM J. Optim. 2016, 26, 1824–1834. [Google Scholar] [CrossRef]
  24. Attouch, H.; Peypouquet, J.; Redonta, P. Fast convex optimization via inertial dynamics with Hessian driven damping. J. Differ. Equ. 2016, 261, 5734–5783. [Google Scholar] [CrossRef]
  25. Attouch, H.; Chbani, Z.; Peypouquet, J.; Redont, P. Fast convergence of inertial dynamics and algorithms with asymptotic vanishing viscosity. Math. Program. 2018, 168, 123–175. [Google Scholar] [CrossRef]
  26. Attouch, H.; Peypouquet, J. Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators. Math. Program. 2019, 174, 391–432. [Google Scholar] [CrossRef]
  27. Suebcharoen, T.; Suparatulatorn, R.; Chaobankoh, T.; Kunwai, K.; Mouktonglang, T. An Inertial Relaxed CQ Algorithm with Two Adaptive Step Sizes and Its Application for Signal Recovery. Mathematics 2024, 12, 2406. [Google Scholar] [CrossRef]
  28. Alvarez, F. Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert spaces. SIAM J. Optim. 2004, 14, 773–782. [Google Scholar] [CrossRef]
  29. Ma, X.; Jia, Z.; Li, Q. On inertial non-Lipschitz stepsize algorithms for split feasibility problems. Comp. Appl. Math. 2024, 43, 431. [Google Scholar] [CrossRef]
  30. Ma, X.; Liu, H.; Li, X. The iterative method for solving the proximal split feasibility problem with an application to LASSO problem. Comp. Appl. Math. 2022, 41, 5. [Google Scholar] [CrossRef]
  31. Sahu, D.R.; Cho, Y.J.; Dong, Q.L.; Kashyap, M.R.; Li, X.H. Inertial relaxed CQ algorithms for solving a split feasibility problem in Hilbert spaces. Numer. Algorithms 2021, 87, 1075–1095. [Google Scholar] [CrossRef]
  32. Wang, Z.B.; Zhen, Y.L.; Long, X.; Chen, Z.Y. A modified Tseng splitting method with double inertial steps for solving monotone inclusion problems. J. Sci. Comput. 2023, 96, 1–29. [Google Scholar] [CrossRef]
  33. Limaye, B.V. Functional Analysis; New Age: New Delhi, India, 1996. [Google Scholar]
  34. Mondal, S.; Sivakumar, K.C. A Survey of Z-Matrices and New Results on the Subclass of F0-Matrices. In Inverse Problems, Regularization Methods and Related Topics. Industrial and Applied Mathematics; Pereverzyev, S.V., Radha, R., Sivananthan, S., Eds.; Springer: Singapore, 2025. [Google Scholar]
  35. Osilike, M.O.; Aniagbosor, S.C. Weak and strong convergence theorems for fixed points of asymptotically nonexpansive mappings. Math. Comput. Model. 2000, 32, 1181–1191. [Google Scholar] [CrossRef]
  36. Facchinei, F.; Pang, J.S. Finite-Dimensional Variational Inequality and Complementarity Problems; Springer: New York, NY, USA, 2003; Volume I. [Google Scholar]
  37. Suantai, S.; Panyanak, B.; Kesornprom, S.; Cholamjiak, P. Inertial projection and contraction methods for split feasibility problem applied to compressed sensing and image restoration. Optim. Lett. 2022, 16, 1725–1744. [Google Scholar] [CrossRef]
  38. Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B 1996, 58, 267–288. [Google Scholar] [CrossRef]
Figure 1. Comparison of signal processing ([18,31,37]).
Table 1. Results for Example 1.
| (M, N, W) | Algorithm 1 E_n | Time | Sahu 2021 [31] E_n | Time | Suantai 2022 [37] E_n | Time | PCQ E_n | Time |
|---|---|---|---|---|---|---|---|---|
| (240, 1024, 30) | 5.4772 | 1.7094 | 1.3328 × 10^{−9} | 2.0715 | 4.2953 × 10^{−5} | 2.9206 | 6.8805 × 10^{−16} | 3.9255 |
| (480, 2048, 60) | 7.7460 | 5.9345 | 1.7791 × 10^{−11} | 7.8245 | 2.5638 × 10^{−5} | 14.2064 | 7.3943 × 10^{−16} | 32.5748 |
| (720, 3072, 90) | 9.4868 | 51.0043 | 6.8132 × 10^{−12} | 70.7382 | 2.1654 × 10^{−5} | 117.7369 | 6.3745 × 10^{−16} | 134.6671 |
| (960, 4096, 120) | 10.9545 | 46.2216 | 3.8179 × 10^{−11} | 58.2518 | 1.9982 × 10^{−5} | 92.2356 | 6.6815 × 10^{−16} | 132.1087 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Zhang, Y.; Ma, X. A Pre-Conditioning CQ Algorithm with Double Inertia and Self-Updated Stepsizes for Split Feasibility Problems in Hilbert Spaces. Symmetry 2026, 18, 321. https://doi.org/10.3390/sym18020321



