Article

Partially Symmetric Regularized Two-Step Inertial Alternating Direction Method of Multipliers for Non-Convex Split Feasibility Problems

Business School, University of Shanghai for Science and Technology, Jungong Road, Shanghai 200093, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1510; https://doi.org/10.3390/math13091510
Submission received: 21 March 2025 / Revised: 18 April 2025 / Accepted: 28 April 2025 / Published: 4 May 2025

Abstract

This paper presents a partially symmetric regularized two-step inertial alternating direction method of multipliers for solving non-convex split feasibility problems (SFP), which adds a two-step inertial effect to each subproblem and includes an intermediate update term for the multipliers during the iteration process. Under suitable assumptions, global convergence is demonstrated. Additionally, with the help of the Kurdyka–Łojasiewicz (KL) property, which quantifies the behavior of a function near its critical points, the strong convergence of the proposed algorithm is guaranteed. Numerical experiments are performed to demonstrate its efficacy.

1. Introduction

The split feasibility problem (SFP) can be expressed in the following manner:
$$\text{find } x\in C \text{ such that } Ax\in Q,$$
where C is a closed convex set, Q is a non-convex closed set, and A is a linear mapping from $\mathbb{R}^m$ to $\mathbb{R}^n$. The split feasibility problem (SFP) has been applied to address a diverse array of real-world challenges, including image denoising [1], CT image reconstruction [2], intensity-modulated radiation therapy (IMRT) [3,4,5], and Pareto front navigation in multi-criteria optimization [6]. Additionally, numerous iterative approaches have been used to address the SFP [7,8,9,10]. The majority of existing techniques are designed for convex sets, but satisfying convexity requirements remains challenging in practice. Therefore, the primary focus of this paper is the split feasibility problem (SFP) in which the sets involved are not necessarily convex.
The alternating direction method of multipliers (ADMM) [11] is a crucial method for solving the separable linear constrained problem, as follows:
$$\min\ f(x)+g(y)\quad \text{s.t.}\quad Ax+By=b,$$
where $f:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a proper lower semi-continuous function, $g:\mathbb{R}^m\to\mathbb{R}$ is smooth, $A\in\mathbb{R}^{l\times n}$, $B\in\mathbb{R}^{l\times m}$, and $b\in\mathbb{R}^{l}$. The augmented Lagrangian function for problem (2) is
$$L_\beta(x,y,\lambda)=f(x)+g(y)-\langle\lambda,\,Ax+By-b\rangle+\frac{\beta}{2}\|Ax+By-b\|^2,$$
where $\lambda\in\mathbb{R}^{l}$ is a Lagrangian multiplier and $\beta>0$ is a penalty parameter. The classic iterative scheme of ADMM for solving problem (2) is as follows:
$$\begin{cases} x^{k+1}\in\arg\min_x L_\beta\left(x,y^{k},\lambda^{k}\right),\\ y^{k+1}\in\arg\min_y L_\beta\left(x^{k+1},y,\lambda^{k}\right),\\ \lambda^{k+1}=\lambda^{k}-\beta\left(Ax^{k+1}+By^{k+1}-b\right). \end{cases}$$
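To make the scheme above concrete, the following minimal Python sketch instantiates the classic ADMM iteration for a toy case of problem (2) in which both subproblems have closed-form solutions: $f(x)=\|x\|_1$, $g(y)=\frac{\rho}{2}\|y-c\|^2$, $A=I$, $B=-I$, $b=0$. The instance, function names, and parameter values are illustrative only and are not taken from the paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    # Proximal operator of kappa * ||.||_1 (closed-form x-subproblem).
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def classic_admm(c, rho=1.0, beta=1.0, iters=200):
    """Classic ADMM for  min ||x||_1 + (rho/2)||y - c||^2  s.t.  x - y = 0,
    i.e. problem (2) with A = I, B = -I, b = 0 (a toy instance, not from the paper)."""
    x = np.zeros_like(c)
    y = np.zeros_like(c)
    lam = np.zeros_like(c)
    for _ in range(iters):
        # x-subproblem: argmin_x ||x||_1 - <lam, x> + (beta/2)||x - y||^2
        x = soft_threshold(y + lam / beta, 1.0 / beta)
        # y-subproblem: argmin_y (rho/2)||y - c||^2 + <lam, y> + (beta/2)||x - y||^2
        y = (rho * c - lam + beta * x) / (rho + beta)
        # multiplier update (sign matches the Lagrangian term -<lam, Ax + By - b>)
        lam = lam - beta * (x - y)
    return x, y, lam

if __name__ == "__main__":
    c = np.array([3.0, -0.2, 1.5, 0.05])
    x, y, _ = classic_admm(c)
    print(x, y)
```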
In recent years, research on the theory and algorithms of ADMM has become relatively comprehensive [12,13,14,15,16]. ADMM has been widely applied to convex optimization problems; however, when the objective function is non-convex, ADMM may fail to converge. To address this issue, we transform problem (1) into a separable problem with linear constraints, which makes it easier to solve. Two functions play a crucial role here: the indicator function and the distance function. Mathematically, given a non-empty closed set Q in the Euclidean space $\mathbb{R}^n$, the indicator function is defined as follows:
$$\delta_Q(y)=\begin{cases}0, & \text{if } y\in Q,\\ +\infty, & \text{otherwise}.\end{cases}$$
The distance function of a set C, denoted $d_C:\mathbb{R}^n\to\mathbb{R}$, is given by
$$d_C(x)=\inf\{\|x-u\| : u\in C\},$$
where $\delta_Q(y)$ is clearly proper lower semi-continuous and, since C is closed and convex, the squared distance $\tfrac12 d_C^2(\cdot)$ is smooth. When C is a closed convex set and Q is a non-convex closed set, the non-convex split feasibility problem can be reformulated as follows:
$$\min_{x,y}\ f(x)+g(y)=\frac12\|x-P_C(x)\|^2+\delta_Q(y)\quad \text{s.t.}\quad Ax=y.$$
This optimization problem is the sum of two non-negative functions, and its minimum value of zero is attained exactly under the condition of problem (1). Since C is a closed convex set, f(x) is continuous, differentiable, and has a Lipschitz continuous gradient. The augmented Lagrangian function for problem (3) is as follows:
$$L_\beta(x,y,\lambda)=f(x)+g(y)-\langle\lambda,\,Ax-y\rangle+\frac{\beta}{2}\|Ax-y\|^2.$$
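For problem (3), the objective pieces and the augmented Lagrangian can be evaluated directly once the projection $P_C$ is available. The short Python sketch below does this; the function names and the assumption that $P_C$ is supplied by the user are illustrative and not part of the paper.

```python
import numpy as np

def split_feasibility_pieces(proj_C, A, beta):
    """Building blocks of problem (3):
       f(x) = 0.5 * ||x - P_C(x)||^2,  g(y) = indicator of Q (kept implicit),
       L_beta(x, y, lam) = f(x) + g(y) - <lam, Ax - y> + (beta/2)||Ax - y||^2.
    proj_C is a user-supplied projection onto the closed convex set C."""
    def f(x):
        return 0.5 * np.sum((x - proj_C(x)) ** 2)

    def grad_f(x):
        # For closed convex C, f is differentiable with grad f(x) = x - P_C(x),
        # and this gradient is 1-Lipschitz.
        return x - proj_C(x)

    def aug_lagrangian(x, y, lam):
        # Evaluate L_beta at a point with y in Q, so that delta_Q(y) = 0.
        r = A @ x - y
        return f(x) - lam @ r + 0.5 * beta * np.sum(r ** 2)

    return f, grad_f, aug_lagrangian
```

The identity $\nabla f(x)=x-P_C(x)$ is what makes the x-subproblem of the methods discussed below a smooth problem.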
In order to endow the ADMM algorithm with better theoretical properties, a number of researchers have carried out the following studies based on problem (3). Zhao et al. [17] considered a symmetric version of ADMM that selects two relaxation factors $\gamma$ and $s$ and adds an intermediate update $\lambda^{k+\frac12}$ of the Lagrange multiplier during the iteration:
$$\begin{cases} y^{k+1}\in\arg\min_y\left\{L_\beta\left(x^{k},y,\lambda^{k}\right)+\frac12\|y-y^{k}\|_G^2\right\},\\ \lambda^{k+\frac12}=\lambda^{k}-\gamma\beta\left(Ax^{k}-y^{k+1}\right),\\ x^{k+1}\in\arg\min_x L_\beta\left(x,y^{k+1},\lambda^{k+\frac12}\right),\\ \lambda^{k+1}=\lambda^{k+\frac12}-s\beta\left(Ax^{k+1}-y^{k+1}\right). \end{cases}$$
In addition, combining the ADMM algorithm with inertial techniques can also significantly improve its performance on non-convex optimization problems. Dang et al. [18] incorporated an inertial technique into each subproblem of the ADMM algorithm and employed a dual-relaxed term to ensure the convergence of the algorithm.
Based on the previous work, we propose a partially symmetric regularized two-step inertial alternating direction method of multipliers for solving the non-convex split feasibility problem, which has not been extensively studied in the past. The novelty of this paper can be summarized as follows. Firstly, we transform this type of non-convex split feasibility problem into a separable non-convex problem with linear constraints, which is easier to solve. Secondly, we add an intermediate update term for the multipliers throughout the iteration phase and apply the two-step inertial technique to each subproblem of the alternating direction method of multipliers (ADMM). Lastly, to guarantee the strong convergence of the proposed algorithm for non-convex split feasibility problems, we employ the Kurdyka–Łojasiewicz (KL) property.
The structure of the paper is as follows. The basic concepts, definitions, and related results are described in Section 2. The convergence of the algorithm is demonstrated in Section 3. Section 4 showcases the effectiveness of the algorithm through experiments. Section 5 presents the main conclusions.

2. Preliminaries

In this article, $\mathbb{R}^n$ denotes the n-dimensional Euclidean space and $\|\cdot\|$ denotes the Euclidean norm. For any $x,y\in\mathbb{R}^n$, $\langle x,y\rangle=x^{\top}y$ and $\|x\|_G^2=x^{\top}Gx$, where $G\succeq 0$ is a symmetric positive semidefinite matrix. $\lambda_{\min}(G)$ and $\lambda_{\max}(G)$ denote the minimum and maximum eigenvalues of the symmetric matrix G, respectively; then $\lambda_{\min}(G)\|y\|^2\le\|y\|_G^2\le\lambda_{\max}(G)\|y\|^2$ for all $y\in\mathbb{R}^n$. When the set $Q\subseteq\mathbb{R}^n$ is non-empty, for any point $y\in\mathbb{R}^n$ the distance from y to Q is defined as $d(y,Q)=\inf\{\|y-x\| : x\in Q\}$; in particular, if $Q=\varnothing$, then $d(y,Q)=+\infty$. The domain of a function g is denoted $\operatorname{dom}g=\{y\in\mathbb{R}^n : g(y)<+\infty\}$. Let C be a non-empty closed subset of the Euclidean space. The projection onto C is the operator $P_C:\mathbb{R}^n\to C$ defined by $P_C(x)=\arg\min\{\|u-x\| : u\in C\}$.
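As a concrete illustration of the projection and distance operators just defined (a toy Python sketch, not code from the paper), the following functions implement $P_C$ for a box, $P_Q$ for a Euclidean ball, and $d(\cdot,S)$ through a supplied projection:

```python
import numpy as np

def proj_box(x, lo, hi):
    # Projection onto the box C = {u : lo <= u <= hi}.
    return np.clip(x, lo, hi)

def proj_ball(y, center, radius):
    # Projection onto the closed ball Q = {v : ||v - center|| <= radius}.
    d = y - center
    n = np.linalg.norm(d)
    return y if n <= radius else center + radius * d / n

def dist(y, proj):
    # d(y, S) = ||y - P_S(y)|| for a closed set S given through its projection.
    return np.linalg.norm(y - proj(y))
```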
Definition 1 
([19]). If a function $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ satisfies $g(y_0)\le\liminf_{y\to y_0}g(y)$ at $y_0$, then g is said to be lower semi-continuous at $y_0$. If g is lower semi-continuous at every point, then g is called a lower semi-continuous function. Since Q is a closed set, the indicator function $\delta_Q$ is a proper lower semi-continuous function.
Definition 2 
([19]). Let the function $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be proper lower semi-continuous.
(I) 
The Fréchet subdifferential of g at $y\in\operatorname{dom}g$ is defined as
$$\hat{\partial}g(y)=\left\{y^{*}\in\mathbb{R}^n : \liminf_{x\to y,\,x\neq y}\frac{g(x)-g(y)-\langle y^{*},x-y\rangle}{\|x-y\|}\ge 0\right\}.$$
(II) 
The limiting subdifferential of g at $y\in\operatorname{dom}g$ is defined as
$$\partial g(y)=\left\{y^{*}\in\mathbb{R}^n : \exists\, y^{k}\to y,\ g(y^{k})\to g(y),\ \hat{y}^{k}\in\hat{\partial}g(y^{k}),\ \hat{y}^{k}\to y^{*}\right\}.$$
Note: The properties of several subdifferentials (see [19]) are listed as follows:
(I) 
For every $y\in\mathbb{R}^n$, $\hat{\partial}g(y)\subseteq\partial g(y)$; moreover, $\hat{\partial}g(y)$ is a closed convex set and $\partial g(y)$ is a closed set.
(II) 
If $y^{*k}\in\hat{\partial}g(y^{k})$ and $\lim_{k\to\infty}(y^{k},y^{*k})=(y,y^{*})$, then $y^{*}\in\partial g(y)$.
(III) 
If $y\in\mathbb{R}^n$ is a minimum point of g, then $0\in\partial g(y)$; if $0\in\partial g(y)$, then y is called a stationary point of g. The set of stationary points of g is denoted by $\operatorname{crit}g$.
(IV) 
If $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is proper lower semi-continuous and $f:\mathbb{R}^n\to\mathbb{R}$ is continuously differentiable, then $\partial(f+g)(y)=\nabla f(y)+\partial g(y)$ for any $y\in\operatorname{dom}g$.
Definition 3. 
$\omega^{*}=(x^{*},y^{*},\lambda^{*})$ is a stationary point of the augmented Lagrangian function $L_\beta(x,y,\lambda)$ of problem (3), i.e., $0\in\partial L_\beta(\omega^{*})$, if and only if
$$-\lambda^{*}\in\partial g(y^{*}),\qquad \nabla f(x^{*})=A^{\top}\lambda^{*},\qquad Ax^{*}-y^{*}=0.$$
Definition 4 
([20]). (Kurdyka–Łojasiewicz property) Let $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ be a proper lower semi-continuous function, and for $-\infty<\eta_1<\eta_2\le+\infty$ write $[\eta_1<g<\eta_2]=\{y\in\mathbb{R}^n : \eta_1<g(y)<\eta_2\}$. The function g is said to have the KL property at $\bar{y}\in\operatorname{dom}g$ if there exist $\eta\in(0,+\infty]$, a neighborhood U of $\bar{y}$, and a continuous concave function $\varphi:[0,\eta)\to\mathbb{R}_{+}$ such that
(I) 
φ ( 0 ) = 0 .
(II) 
φ  is continuously differentiable on (0,  η ) and  φ  is also continuous at 0.
(III) 
$\varphi'(s)>0$ for all $s\in(0,\eta)$.
(IV) 
For all $y\in U\cap[g(\bar{y})<g<g(\bar{y})+\eta]$, the KL inequality holds:
$$\varphi'\!\left(g(y)-g(\bar{y})\right)\,d\!\left(0,\partial g(y)\right)\ge 1.$$
Lemma 1 
([21]). (Consistent KL property) Assume that $\Omega$ is a compact set and that $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is a proper lower semi-continuous function. If g is constant on $\Omega$ and satisfies the KL property at every point of $\Omega$, then there exist $\varepsilon>0$, $\eta>0$, and a concave function $\varphi$ as in Definition 4 such that, for every $\bar{y}\in\Omega$ and every y in the intersection
$$\{y\in\mathbb{R}^n : d(y,\Omega)<\varepsilon\}\cap[g(\bar{y})<g<g(\bar{y})+\eta],$$
one has
$$\varphi'\!\left(g(y)-g(\bar{y})\right)\,d\!\left(0,\partial g(y)\right)\ge 1.$$
In many practical applications, the KL property is satisfied by broad classes of functions, such as semi-algebraic functions, real analytic functions, subanalytic functions, and strongly convex functions; see [22].
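As a simple worked example (not taken from the paper), the strongly convex function $g(y)=\|y\|^2$ satisfies the KL inequality of Definition 4 at its critical point $\bar{y}=0$ with the desingularizing function $\varphi(s)=\sqrt{s}$:
$$g(y)-g(\bar{y})=\|y\|^2,\qquad d\!\left(0,\partial g(y)\right)=\|\nabla g(y)\|=2\|y\|,\qquad \varphi'(s)=\frac{1}{2\sqrt{s}},$$
so that, for every $y\neq 0$,
$$\varphi'\!\left(g(y)-g(\bar{y})\right)\,d\!\left(0,\partial g(y)\right)=\frac{2\|y\|}{2\|y\|}=1\ge 1.$$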
Lemma 2 
([23]). If $\psi:\mathbb{R}^n\to\mathbb{R}$ is a continuously differentiable function and $\nabla\psi$ is Lipschitz continuous with constant $L_\psi>0$, then for any $x,y\in\mathbb{R}^n$,
$$\left|\psi(x)-\psi(y)-\langle\nabla\psi(y),x-y\rangle\right|\le\frac{L_\psi}{2}\|x-y\|^2.$$
Definition 5. 
(Semi-algebraic sets and functions)
(I) 
If there exist finitely many real polynomial functions $g_{ij},h_{ij}:\mathbb{R}^n\to\mathbb{R}$ such that
$$S=\bigcup_{j=1}^{p}\bigcap_{i=1}^{q}\left\{u\in\mathbb{R}^n : g_{ij}(u)=0\ \text{and}\ h_{ij}(u)<0\right\},$$
then the subset S of $\mathbb{R}^n$ is a real semi-algebraic set.
(II) 
A function $h:\mathbb{R}^n\to(-\infty,+\infty]$ is called semi-algebraic if its graph
$$\left\{(u,t)\in\mathbb{R}^{n+1} : h(u)=t\right\}$$
is a semi-algebraic subset of $\mathbb{R}^{n+1}$.
Definition 6. 
(Cauchy–Schwarz inequality) For any $x,y\in\mathbb{R}^n$, $|x^{\top}y|\le\|x\|\,\|y\|$, with equality if and only if x and y are linearly dependent.

3. Split Feasibility Problem

3.1. Assumptions

Some assumptions and conditions about problem (3) are listed below.
(1)
$\gamma+s>0$ and $\gamma(1-2s)+s-6(1-s)^2>0$.
The solution set of this inequality system is represented as $(\gamma,s)\in D=D_1\cup D_2$, where
$$D_1=\left(\frac{6(1-s)^2-s}{1-2s},\,+\infty\right)\times\left(-\infty,\,\frac12\right)$$
and
$$D_2=\left(-s,\,\frac{6(1-s)^2-s}{1-2s}\right)\times\left(\frac{3-\sqrt{3}}{2},\,\frac{3+\sqrt{3}}{2}\right).$$
Note: It can be seen that $(\gamma,s)$ has a wide range of admissible choices. In particular, when $s\in\left(\frac34,1\right)$, taking $\gamma=s$ gives $(\gamma,s)=(s,s)\in D_2$, so the parameters $\gamma$ and $s$ of the proposed algorithm may take the same value in this interval.
(2)
$$\beta>\beta_0=\frac{(\gamma+s)+\sqrt{(\gamma+s)^2+24\left[\gamma(1-2s)+s-6(1-s)^2\right]}}{2\left[\gamma(1-2s)+s-6(1-s)^2\right]}>0.$$
(3)
Let $\delta=\min\{\delta_1,\delta_2\}$, where
$$\delta_1=\frac12\lambda_{\min}(G)-\frac{\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]\left(2\|A\|^2\beta^2(1-s)^2+\theta\right)}{(\gamma+s)^2\beta}-\frac{\theta}{2}-2\theta>0,$$
$$\delta_2=-\frac{(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(s\beta+\tau)\|A\|^2+2\theta\right)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(\gamma\beta-\tau)\|A\|^2+2\theta\right)}{2(\gamma+s)^2\beta}-\frac{L+(2\tau-2\beta)\|A\|^2+3}{2}-2\theta^2\|A\|^2>0.$$
(4)
Let $\theta>\max\{\theta_1^k,\theta_2^k\}$, where $\{\theta_1^k\}$, $\{\theta_2^k\}$, and $\tau$ all lie in $(0,1)$; $\{\theta_1^k\}$ and $\{\theta_2^k\}$ are fixed constants.
(5)
C, Q are both semi-algebraic sets.
(6)
f is Lipschitz differentiable with constant $l_g>0$, i.e., $\|\nabla f(u)-\nabla f(v)\|\le l_g\|u-v\|$ for all $u,v\in\mathbb{R}^n$.
(7)
g is proper lower semi-continuous.
(8)
The set $\left\{\omega\in X : L_\beta(\omega)\le L_\beta\left(\omega^{0}\right)\right\}$ is bounded.

3.2. Algorithm

The optimality conditions of Algorithm 1 (PSRTADMM) are as follows:
$$0\in\partial g\left(y^{k+1}\right)+\lambda^{k}-\beta\left(Ax^{k}-y^{k+1}\right)+G\left(y^{k+1}-y^{k}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right),$$
$$\lambda^{k+\frac12}=\lambda^{k}-\gamma\beta\left(Ax^{k}-y^{k+1}\right)-\theta_1^k\left(y^{k-1}-y^{k}\right)-\theta_2^k\left(y^{k-2}-y^{k-1}\right),$$
$$0=\nabla f\left(x^{k+1}\right)-A^{\top}\lambda^{k+\frac12}+\beta A^{\top}\left(Ax^{k+1}-y^{k+1}\right)+\tau A^{\top}\left(Ax^{k+1}-Ax^{k}\right)+\theta_1^k A^{\top}A\left(x^{k-1}-x^{k}\right)+\theta_2^k A^{\top}A\left(x^{k-2}-x^{k-1}\right),$$
$$\lambda^{k+1}=\lambda^{k+\frac12}-s\beta\left(Ax^{k+1}-y^{k+1}\right)-\tau\left(Ax^{k+1}-Ax^{k}\right)-\theta_1^k A\left(x^{k-1}-x^{k}\right)-\theta_2^k A\left(x^{k-2}-x^{k-1}\right).$$
Algorithm 1 Partially Symmetric Regularized Two-step Inertial Alternating Direction Method of Multipliers for Non-convex Split Feasibility Problems (PSRTADMM)
Step 0. Given $x^{0}\in\mathbb{R}^n$, $y^{0}\in\mathbb{R}^n$, $\lambda^{0}\in\mathbb{R}^n$, and $\beta>0$; set $(x^{-2},y^{-2})=(x^{-1},y^{-1})=(x^{0},y^{0})$ and $k:=0$.
Step 1. Find $y^{k+1}\in\mathbb{R}^n$ by solving
$$y^{k+1}\in\arg\min_y\left\{L_\beta\left(x^{k},y,\lambda^{k}\right)+\frac12\|y-y^{k}\|_G^2+\theta_1^k\langle y,\,y^{k-1}-y^{k}\rangle+\theta_2^k\langle y,\,y^{k-2}-y^{k-1}\rangle\right\},$$
where $G=\tau I-\beta A^{\top}A\in\mathbb{R}^{n\times n}$ is a symmetric positive definite matrix.
Step 2. Compute
$$\lambda^{k+\frac12}=\lambda^{k}-\gamma\beta\left(Ax^{k}-y^{k+1}\right)-\theta_1^k\left(y^{k-1}-y^{k}\right)-\theta_2^k\left(y^{k-2}-y^{k-1}\right).$$
Step 3. Find $x^{k+1}$ by solving
$$x^{k+1}\in\arg\min_x\left\{L_\beta\left(x,y^{k+1},\lambda^{k+\frac12}\right)+\frac{\tau}{2}\|Ax-Ax^{k}\|^2+\theta_1^k\langle Ax,\,A(x^{k-1}-x^{k})\rangle+\theta_2^k\langle Ax,\,A(x^{k-2}-x^{k-1})\rangle\right\}.$$
Step 4. Compute
$$\lambda^{k+1}=\lambda^{k+\frac12}-s\beta\left(Ax^{k+1}-y^{k+1}\right)-\tau\left(Ax^{k+1}-Ax^{k}\right)-\theta_1^k A\left(x^{k-1}-x^{k}\right)-\theta_2^k A\left(x^{k-2}-x^{k-1}\right).$$
If the stopping criterion is satisfied, then stop; otherwise, set $k:=k+1$ and go to Step 1.
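The following Python sketch illustrates one way Algorithm 1 could be implemented for problem (3). It makes two simplifying assumptions that are not from the paper: the metric of the y-subproblem is taken as $\tau_y I$ instead of $G=\tau I-\beta A^{\top}A$, so that the y-update reduces to a projection onto Q, and the smooth x-subproblem is solved inexactly by a few gradient steps. All parameter values and names are placeholders.

```python
import numpy as np

def psrtadmm(A, proj_C, proj_Q, x0, y0, lam0,
             beta=1.0, gamma=0.8, s=0.8, tau=0.5, theta1=0.1, theta2=0.1,
             tau_y=1.0, inner_steps=20, inner_lr=0.1, max_iter=200, tol=1e-6):
    """Illustrative sketch of Algorithm 1 (PSRTADMM) for
       min 0.5*||x - P_C(x)||^2 + delta_Q(y)   s.t.   A x = y."""
    x_prev2 = x_prev1 = x = x0.copy()
    y_prev2 = y_prev1 = y = y0.copy()
    lam = lam0.copy()

    for _ in range(max_iter):
        dy1, dy2 = y_prev1 - y, y_prev2 - y_prev1   # y^{k-1}-y^k, y^{k-2}-y^{k-1}
        dx1, dx2 = x_prev1 - x, x_prev2 - x_prev1   # x^{k-1}-x^k, x^{k-2}-x^{k-1}
        Ax = A @ x

        # Step 1: y-update (projection onto Q of an explicit point, since the
        # quadratic model is isotropic under the simplified metric tau_y * I).
        y_new = proj_Q((beta * Ax - lam + tau_y * y - theta1 * dy1 - theta2 * dy2)
                       / (beta + tau_y))

        # Step 2: intermediate multiplier update.
        lam_half = lam - gamma * beta * (Ax - y_new) - theta1 * dy1 - theta2 * dy2

        # Step 3: x-update, solved inexactly by gradient descent on the smooth objective.
        x_new = x.copy()
        for _ in range(inner_steps):
            grad = (x_new - proj_C(x_new)                    # grad f(x) = x - P_C(x)
                    - A.T @ lam_half
                    + beta * A.T @ (A @ x_new - y_new)
                    + tau * A.T @ (A @ x_new - Ax)
                    + theta1 * A.T @ (A @ dx1) + theta2 * A.T @ (A @ dx2))
            x_new = x_new - inner_lr * grad
        Ax_new = A @ x_new

        # Step 4: multiplier update.
        lam_new = (lam_half - s * beta * (Ax_new - y_new) - tau * (Ax_new - Ax)
                   - theta1 * (A @ dx1) - theta2 * (A @ dx2))

        converged = max(np.linalg.norm(x_new - x), np.linalg.norm(y_new - y),
                        np.linalg.norm(Ax_new - y_new)) < tol

        # Shift the two-step inertial history and accept the new iterates.
        x_prev2, x_prev1, x = x_prev1, x, x_new
        y_prev2, y_prev1, y = y_prev1, y, y_new
        lam = lam_new
        if converged:
            break

    return x, y, lam
```

Because the quadratic model in the simplified y-step is isotropic, the minimizer over y with $g=\delta_Q$ is exactly a projection onto Q; with the paper's metric $G=\tau I-\beta A^{\top}A$ the update remains well defined but is no longer a plain projection.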

3.3. Convergence Analysis

Next, we establish the convergence analysis of the proposed algorithm. Lemma 3 below shows that the sequence $\{\hat{L}(\tilde{\omega}^{k})\}$ is monotonically decreasing. For ease of analysis, we write $z=(x,y)$.
Lemma 3. 
If the assumptions in Section 3.1 hold, then
$$\hat{L}\left(\tilde{\omega}^{k+1}\right)+\delta\left\|z^{k+1}-z^{k}\right\|^2\le\hat{L}\left(\tilde{\omega}^{k}\right).$$
Proof. 
Firstly, from the optimality conditions (5) and (7), we can obtain
$$\begin{aligned} L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+1}\right)&=L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+\frac12}\right)-\left\langle\lambda^{k+1}-\lambda^{k+\frac12},\,Ax^{k+1}-y^{k+1}\right\rangle\\ &=L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+\frac12}\right)+\left\langle s\beta\left(Ax^{k+1}-y^{k+1}\right)+\tau\left(Ax^{k+1}-Ax^{k}\right)+\theta_1^kA\left(x^{k-1}-x^{k}\right)+\theta_2^kA\left(x^{k-2}-x^{k-1}\right),\,Ax^{k+1}-y^{k+1}\right\rangle\\ &\le L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+\frac12}\right)+\frac{2s^2\beta^2+3}{2}\left\|Ax^{k+1}-y^{k+1}\right\|^2+\frac{\tau^2\|A\|^2}{2}\left\|x^{k+1}-x^{k}\right\|^2+\frac{\theta^2\|A\|^2}{2}\left\|x^{k-1}-x^{k}\right\|^2+\frac{\theta^2\|A\|^2}{2}\left\|x^{k-2}-x^{k-1}\right\|^2, \end{aligned}$$
and
$$\begin{aligned} L_\beta\left(x^{k},y^{k+1},\lambda^{k+\frac12}\right)&=L_\beta\left(x^{k},y^{k+1},\lambda^{k}\right)-\left\langle\lambda^{k+\frac12}-\lambda^{k},\,Ax^{k}-y^{k+1}\right\rangle\\ &=L_\beta\left(x^{k},y^{k+1},\lambda^{k}\right)+\left\langle\gamma\beta\left(Ax^{k}-y^{k+1}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right),\,Ax^{k}-y^{k+1}\right\rangle\\ &\le L_\beta\left(x^{k},y^{k+1},\lambda^{k}\right)+\left(\gamma^2\beta^2+1\right)\left\|Ax^{k}-y^{k+1}\right\|^2+\frac{\theta^2}{2}\left\|y^{k-1}-y^{k}\right\|^2+\frac{\theta^2}{2}\left\|y^{k-2}-y^{k-1}\right\|^2. \end{aligned}$$
On the other hand, by the definition of the augmented Lagrangian function, the optimality condition (6), the Lipschitz continuity of $\nabla f$, and Lemma 2, we have
$$\begin{aligned} L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+\frac12}\right)-L_\beta\left(x^{k},y^{k+1},\lambda^{k+\frac12}\right)&=f\left(x^{k+1}\right)-f\left(x^{k}\right)-\frac{\beta}{2}\left\|A\left(x^{k+1}-x^{k}\right)\right\|^2-\left\langle\lambda^{k+\frac12}-\beta\left(Ax^{k+1}-y^{k+1}\right),\,A\left(x^{k+1}-x^{k}\right)\right\rangle\\ &=f\left(x^{k+1}\right)-f\left(x^{k}\right)-\frac{\beta}{2}\left\|A\left(x^{k+1}-x^{k}\right)\right\|^2-\left\langle\nabla f\left(x^{k+1}\right),\,x^{k+1}-x^{k}\right\rangle-\left\langle\tau\left(Ax^{k+1}-Ax^{k}\right),\,A\left(x^{k+1}-x^{k}\right)\right\rangle\\ &\qquad-\left\langle\theta_1^kA\left(x^{k-1}-x^{k}\right),\,A\left(x^{k+1}-x^{k}\right)\right\rangle-\left\langle\theta_2^kA\left(x^{k-2}-x^{k-1}\right),\,A\left(x^{k+1}-x^{k}\right)\right\rangle\\ &\le\frac{L+(\tau-2\beta)\|A\|^2+3}{2}\left\|x^{k+1}-x^{k}\right\|^2+\frac{\theta^2\|A\|^2}{2}\left\|x^{k-1}-x^{k}\right\|^2+\frac{\theta^2\|A\|^2}{2}\left\|x^{k-2}-x^{k-1}\right\|^2. \end{aligned}$$
Since $y^{k+1}$ is the optimal solution of (4), it follows that
$$\begin{aligned} L_\beta\left(x^{k},y^{k+1},\lambda^{k}\right)&\le L_\beta\left(x^{k},y^{k},\lambda^{k}\right)-\frac12\left\|y^{k+1}-y^{k}\right\|_G^2-\theta_1^k\left\langle y^{k-1}-y^{k},\,y^{k}-y^{k+1}\right\rangle-\theta_2^k\left\langle y^{k-2}-y^{k-1},\,y^{k}-y^{k+1}\right\rangle\\ &\le L_\beta\left(x^{k},y^{k},\lambda^{k}\right)+\left(\theta-\frac12\lambda_{\min}(G)\right)\left\|y^{k+1}-y^{k}\right\|^2+\frac{\theta}{2}\left\|y^{k-1}-y^{k}\right\|^2+\frac{\theta}{2}\left\|y^{k-2}-y^{k-1}\right\|^2. \end{aligned}$$
Therefore, adding Equations (8)–(11), we have
$$\begin{aligned} L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+1}\right)\le{}&L_\beta\left(x^{k},y^{k},\lambda^{k}\right)+\frac{L+(2\tau-2\beta)\|A\|^2+3}{2}\left\|x^{k+1}-x^{k}\right\|^2+\left(\theta-\frac12\lambda_{\min}(G)\right)\left\|y^{k+1}-y^{k}\right\|^2\\ &+\theta^2\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2+\theta^2\|A\|^2\left\|x^{k-2}-x^{k-1}\right\|^2+\frac{\theta^2+\theta}{2}\left\|y^{k-1}-y^{k}\right\|^2+\frac{\theta^2+\theta}{2}\left\|y^{k-2}-y^{k-1}\right\|^2\\ &+\frac{2s^2\beta^2+3}{2}\left\|Ax^{k+1}-y^{k+1}\right\|^2+\left(\gamma^2\beta^2+1\right)\left\|Ax^{k}-y^{k+1}\right\|^2. \end{aligned}$$
In addition, it can be obtained from Step 2 and Step 4 of Algorithm 1 that
$$Ax^{k}-y^{k+1}=\frac{1}{(\gamma+s)\beta}\left(\lambda^{k}-\lambda^{k+1}\right)-\frac{s\beta+\tau}{(\gamma+s)\beta}\left(Ax^{k+1}-Ax^{k}\right)-\frac{1}{(\gamma+s)\beta}\left(\theta_1^kA\left(x^{k-1}-x^{k}\right)+\theta_2^kA\left(x^{k-2}-x^{k-1}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right)\right),$$
$$Ax^{k+1}-y^{k+1}=\frac{1}{(\gamma+s)\beta}\left(\lambda^{k}-\lambda^{k+1}\right)+\frac{\gamma\beta-\tau}{(\gamma+s)\beta}\left(Ax^{k+1}-Ax^{k}\right)-\frac{1}{(\gamma+s)\beta}\left(\theta_1^kA\left(x^{k-1}-x^{k}\right)+\theta_2^kA\left(x^{k-2}-x^{k-1}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right)\right).$$
On the other hand, it can be concluded from Step 4 of Algorithm 1 and (6) that
$$A^{\top}\lambda^{k+1}=\nabla f\left(x^{k+1}\right)+\beta(1-s)A^{\top}\left(Ax^{k+1}-y^{k+1}\right).$$
Combining this with the Lipschitz continuity of $\nabla f$, we obtain
$$\left\|A^{\top}\left(\lambda^{k+1}-\lambda^{k}\right)\right\|\le\left(L+\|A\|^2(1-s)\beta\right)\left\|x^{k+1}-x^{k}\right\|+\|A\|\beta(1-s)\left\|y^{k+1}-y^{k}\right\|.$$
Furthermore, by applying the Cauchy inequality to (16), we have
$$\|A\|^2\left\|\lambda^{k+1}-\lambda^{k}\right\|^2\le4\left(L+\|A\|^2(1-s)\beta\right)^2\left\|x^{k+1}-x^{k}\right\|^2+4\|A\|^2\beta^2(1-s)^2\left\|y^{k+1}-y^{k}\right\|^2.$$
So, combining Equations (13), (14), and (17), one has
$$\begin{aligned} &\frac{2s^2\beta^2+3}{2}\left\|Ax^{k+1}-y^{k+1}\right\|^2+\left(\gamma^2\beta^2+1\right)\left\|Ax^{k}-y^{k+1}\right\|^2\\ &\quad\le\frac{(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)}{2(\gamma+s)^2\beta}\left\|\lambda^{k+1}-\lambda^{k}\right\|^2\\ &\qquad+\frac{(2s^2\beta^2+3)(s\beta+\tau)\|A\|^2(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(\gamma\beta-\tau)\|A\|^2(1+\gamma\beta+\tau+4\theta)}{2(\gamma+s)^2\beta}\left\|x^{k+1}-x^{k}\right\|^2\\ &\qquad+\frac{(2s^2\beta^2+3)\theta(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)\theta(1+\gamma\beta-\tau+4\theta)}{2(\gamma+s)^2\beta}\|A\|^2\left(\left\|x^{k-1}-x^{k}\right\|^2+\left\|x^{k-2}-x^{k-1}\right\|^2\right)\\ &\qquad+\frac{(2s^2\beta^2+3)\theta(1+s\beta+\tau+2\theta_1^k+2\theta_2^k)+(2\gamma^2\beta^2+2)\theta(1+\gamma\beta-\tau+4\theta)}{2(\gamma+s)^2\beta}\left\|y^{k-1}-y^{k}\right\|^2\\ &\qquad+\frac{(2s^2\beta^2+3)\theta(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)\theta(1+\gamma\beta-\tau+4\theta)}{2(\gamma+s)^2\beta}\left\|y^{k-2}-y^{k-1}\right\|^2\\ &\quad\le\frac{(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(s\beta+\tau)\|A\|^2\right)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(\gamma\beta-\tau)\|A\|^2\right)}{2(\gamma+s)^2\beta}\left\|x^{k+1}-x^{k}\right\|^2\\ &\qquad+\frac{\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]4\|A\|^2\beta^2(1-s)^2}{2(\gamma+s)^2\beta}\left\|y^{k+1}-y^{k}\right\|^2\\ &\qquad+\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]}{2(\gamma+s)^2\beta}\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2\\ &\qquad+\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}\left(\|A\|^2\left\|x^{k-2}-x^{k-1}\right\|^2+\left\|y^{k-1}-y^{k}\right\|^2+\left\|y^{k-2}-y^{k-1}\right\|^2\right). \end{aligned}$$
Substituting Equation (18) into Equation (12), we have
$$\begin{aligned} L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+1}\right)\le{}&L_\beta\left(x^{k},y^{k},\lambda^{k}\right)\\ &+\left(\frac{(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(s\beta+\tau)\|A\|^2\right)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(\gamma\beta-\tau)\|A\|^2\right)}{2(\gamma+s)^2\beta}+\frac{L+(2\tau-2\beta)\|A\|^2+3}{2}\right)\left\|x^{k+1}-x^{k}\right\|^2\\ &+\left(\frac{\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]2\|A\|^2\beta^2(1-s)^2}{(\gamma+s)^2\beta}+\theta-\frac12\lambda_{\min}(G)\right)\left\|y^{k+1}-y^{k}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\theta^2\right)\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\theta^2\right)\|A\|^2\left\|x^{k-2}-x^{k-1}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\frac{\theta^2+\theta}{2}\right)\left\|y^{k-1}-y^{k}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\frac{\theta^2+\theta}{2}\right)\left\|y^{k-2}-y^{k-1}\right\|^2. \end{aligned}$$
That is,
$$\begin{aligned} &L_\beta\left(x^{k+1},y^{k+1},\lambda^{k+1}\right)+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{(\gamma+s)^2\beta}+\theta^2+\theta\right)\left\|y^{k+1}-y^{k}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]}{(\gamma+s)^2\beta}+2\theta^2\right)\|A\|^2\left\|x^{k+1}-x^{k}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\frac{\theta^2+\theta}{2}\right)\left\|y^{k-1}-y^{k}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\theta^2\right)\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2\\ &\qquad+\left(\frac12\lambda_{\min}(G)-\frac{\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]\left(2\|A\|^2\beta^2(1-s)^2+\theta\right)}{(\gamma+s)^2\beta}-\frac{\theta}{2}-2\theta\right)\left\|y^{k+1}-y^{k}\right\|^2\\ &\qquad+\left(-\frac{(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(s\beta+\tau)\|A\|^2+2\theta\right)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\left(4\left(L+\|A\|^2(1-s)\beta\right)^2+(\gamma\beta-\tau)\|A\|^2+2\theta\right)}{2(\gamma+s)^2\beta}-\frac{L+(2\tau-2\beta)\|A\|^2+3}{2}-2\theta^2\|A\|^2\right)\left\|x^{k+1}-x^{k}\right\|^2\\ \le{}&L_\beta\left(x^{k},y^{k},\lambda^{k}\right)+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]}{(\gamma+s)^2\beta}+2\theta^2\right)\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\theta^2\right)\|A\|^2\left\|x^{k-2}-x^{k-1}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{(\gamma+s)^2\beta}+\theta^2+\theta\right)\left\|y^{k-1}-y^{k}\right\|^2\\ &\qquad+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\frac{\theta^2+\theta}{2}\right)\left\|y^{k-2}-y^{k-1}\right\|^2. \end{aligned}$$
Therefore,
$$\hat{L}\left(\tilde{\omega}^{k+1}\right)+\delta\left\|z^{k+1}-z^{k}\right\|^2\le\hat{L}\left(\tilde{\omega}^{k+1}\right)+\delta_1\left\|y^{k+1}-y^{k}\right\|^2+\delta_2\left\|x^{k+1}-x^{k}\right\|^2\le\hat{L}\left(\tilde{\omega}^{k}\right),$$
where
$$\begin{aligned} \hat{L}\left(\tilde{\omega}^{k}\right)={}&L_\beta\left(x^{k},y^{k},\lambda^{k}\right)+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta-\tau+4\theta)\right]}{(\gamma+s)^2\beta}+2\theta^2\right)\|A\|^2\left\|x^{k-1}-x^{k}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\theta^2\right)\|A\|^2\left\|x^{k-2}-x^{k-1}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{(\gamma+s)^2\beta}+\theta^2+\theta\right)\left\|y^{k-1}-y^{k}\right\|^2\\ &+\left(\frac{\theta\left[(2s^2\beta^2+3)(1+s\beta+\tau+4\theta)+(2\gamma^2\beta^2+2)(1+\gamma\beta+\tau+4\theta)\right]}{2(\gamma+s)^2\beta}+\frac{\theta^2+\theta}{2}\right)\left\|y^{k-2}-y^{k-1}\right\|^2. \end{aligned}$$ □
Lemma 4. 
If the assumptions in Section 3.1 hold, then the following statements are true:
(1) 
The sequence  { ω k }  is bounded.
(2) 
$\hat{L}(\tilde{\omega}^{k})$ is bounded from below and convergent; additionally,
$$\sum_{k=0}^{+\infty}\left\|\omega^{k+1}-\omega^{k}\right\|^2<+\infty.$$
(3) 
The sequences $\hat{L}(\tilde{\omega}^{k})$ and $L_\beta(\omega^{k})$ have the same limit $\hat{L}^{*}$.
Proof. 
(1) By the decreasing property of $\{\hat{L}(\tilde{\omega}^{k})\}$, we obtain
$$L_\beta\left(\omega^{k}\right)\le\hat{L}\left(\tilde{\omega}^{k}\right)\le\hat{L}\left(\tilde{\omega}^{0}\right)=L_\beta\left(\omega^{0}\right),$$
where the last equality is due to the initialization $x^{0}=x^{-1}=x^{-2}$ and $y^{0}=y^{-1}=y^{-2}$ in Algorithm 1. Hence, $\omega^{k}\in\{\omega\in X : L_\beta(\omega)\le L_\beta(\omega^{0})\}$. By Assumption (8) in Section 3.1, the sequence $\{\omega^{k}\}$ is bounded.
(2) Since $\{\omega^{k}\}$ is bounded, $\{\tilde{\omega}^{k}\}$ is also bounded and has at least one accumulation point. Let $\tilde{\omega}^{*}$ be a cluster point of $\{\tilde{\omega}^{k}\}$ with $\lim_{j\to\infty}\tilde{\omega}^{k_j}=\tilde{\omega}^{*}$. Because g is lower semi-continuous and f is continuously differentiable, $\hat{L}(\cdot)$ is lower semi-continuous, so $\liminf_{j\to\infty}\hat{L}(\tilde{\omega}^{k_j})\ge\hat{L}(\tilde{\omega}^{*})$, that is, $\{\hat{L}(\tilde{\omega}^{k_j})\}$ has a lower bound. According to Lemma 3, $\hat{L}(\tilde{\omega}^{k})$ is monotonically decreasing, and therefore $\{\hat{L}(\tilde{\omega}^{k})\}$ converges. Summing the inequality of Lemma 3 over $k=0,1,\dots,n$ and letting $n\to\infty$, we have
$$\delta\sum_{k=1}^{+\infty}\left\|z^{k+1}-z^{k}\right\|^2\le\hat{L}\left(\tilde{\omega}^{0}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)<+\infty.$$
Since $\delta>0$, it follows that $\sum_{k=1}^{+\infty}\|x^{k+1}-x^{k}\|^2<+\infty$ and $\sum_{k=1}^{+\infty}\|y^{k+1}-y^{k}\|^2<+\infty$. According to Equation (23), we also have $\sum_{k=1}^{+\infty}\|\lambda^{k+1}-\lambda^{k}\|^2<+\infty$, so $\sum_{k=1}^{+\infty}\|\omega^{k+1}-\omega^{k}\|^2<+\infty$.
(3) From (2), we have $\sum_{k=1}^{+\infty}\|x^{k+1}-x^{k}\|^2<+\infty$ and $\sum_{k=1}^{+\infty}\|y^{k+1}-y^{k}\|^2<+\infty$, so $\|x^{k+1}-x^{k}\|\to0$ and $\|y^{k+1}-y^{k}\|\to0$. Combining this with the definition of $\hat{L}(\tilde{\omega}^{k})$ in (21) yields $\hat{L}^{*}=\lim_{k\to\infty}\hat{L}(\tilde{\omega}^{k})=\lim_{k\to\infty}L_\beta(\omega^{k})$. The lemma is thus proven. □
We now use the results of Lemma 4 to establish the global convergence of the PSRTADMM algorithm.
Theorem 1. 
(Global convergence) Denote the sets of cluster points of the sequences $\{\omega^{k}\}$ and $\{\tilde{\omega}^{k}\}$ by $\Omega$ and $\tilde{\Omega}$, respectively. Then:
(1) 
$\Omega$ and $\tilde{\Omega}$ are non-empty compact sets, and $\lim_{k\to\infty}d(\omega^{k},\Omega)=\lim_{k\to\infty}d(\tilde{\omega}^{k},\tilde{\Omega})=0$.
(2) 
$(x,y,\lambda,\hat{x},\hat{y})\in\tilde{\Omega}$ if and only if $x=\hat{x}$, $y=\hat{y}$, and $(x,y,\lambda)\in\Omega$.
(3) 
For every $\tilde{\omega}^{*}\in\tilde{\Omega}$, $\{\hat{L}(\tilde{\omega}^{k})\}$ is convergent and $\hat{L}(\tilde{\omega}^{*})=\inf_{k\in\mathbb{N}}\hat{L}(\tilde{\omega}^{k})=\lim_{k\to\infty}\hat{L}(\tilde{\omega}^{k})$.
(4) 
$\Omega\subseteq\operatorname{crit}L_\beta$.
Proof. 
(1) This follows easily from the definitions of $\tilde{\Omega}$ and $\Omega$.
(2) From Lemma 4 and the definitions of $\{\omega^{k}\}$ and $\{\tilde{\omega}^{k}\}$, the conclusion follows easily.
(3) Let $\tilde{\omega}^{*}\in\tilde{\Omega}$. Then there exists a subsequence $\{\tilde{\omega}^{k_j}\}$ such that $\lim_{j\to\infty}\tilde{\omega}^{k_j}=\tilde{\omega}^{*}$. Since $\sum_{k=1}^{+\infty}\|x^{k+1}-x^{k}\|^2<+\infty$ and $\sum_{k=1}^{+\infty}\|y^{k+1}-y^{k}\|^2<+\infty$ by Lemma 4, and f is continuous, we derive
$$\lim_{j\to\infty}\hat{L}\left(\tilde{\omega}^{k_j}\right)=\hat{L}\left(\tilde{\omega}^{*}\right).$$
Taking the monotonicity of $\{\hat{L}(\tilde{\omega}^{k})\}$ into account, we obtain that $\{\hat{L}(\tilde{\omega}^{k})\}$ is convergent. Thus,
$$\hat{L}\left(\tilde{\omega}^{*}\right)=\inf_{k\in\mathbb{N}}\hat{L}\left(\tilde{\omega}^{k}\right)=\lim_{k\to\infty}\hat{L}\left(\tilde{\omega}^{k}\right),\qquad\forall\,\tilde{\omega}^{*}\in\tilde{\Omega}.$$
(4) Let $\omega^{*}=(x^{*},y^{*},\lambda^{*})\in\Omega$, and recall that $\operatorname{crit}L_\beta$ is the set of critical points of $L_\beta$. Then there exists a subsequence $\{\omega^{k_j}\}$ of $\{\omega^{k}\}$ that converges to $\omega^{*}$. According to Lemma 4, $\lim_{k\to\infty}\|\omega^{k+1}-\omega^{k}\|^2=0$. Taking the limit in the equations of Step 2 and Step 4 along $k=k_j\to+\infty$ gives
$$\lambda^{*}=\lambda^{*}-\gamma\beta\left(Ax^{*}-y^{*}\right),\qquad \lambda^{*}=\lambda^{*}-s\beta\left(Ax^{*}-y^{*}\right).$$
Since $(\gamma+s)\beta>0$, it follows that $Ax^{*}-y^{*}=0$ and that the limits of $\lambda^{k_j+\frac12}$ and $\lambda^{k_j}$ coincide with $\lambda^{*}$.
Hence, $x^{*}$ and $y^{*}$ are feasible points of problem (1). By the y-subproblem of the PSRTADMM algorithm,
$$g\left(y^{k_j+1}\right)+\left\langle\lambda^{k_j},y^{k_j+1}\right\rangle+\frac{\beta}{2}\left\|Ax^{k_j}-y^{k_j+1}\right\|^2+\frac12\left\|y^{k_j+1}-y^{k_j}\right\|_G^2 \le g\left(y^{*}\right)+\left\langle\lambda^{k_j},y^{*}\right\rangle+\frac{\beta}{2}\left\|Ax^{k_j}-y^{*}\right\|^2+\frac12\left\|y^{*}-y^{k_j}\right\|_G^2.$$
Combining $\lim_{j\to+\infty}\tilde{\omega}^{k_j}=\lim_{j\to+\infty}\tilde{\omega}^{k_j+1}=\omega^{*}$ with the inequality above gives $\limsup_{j\to+\infty}g(y^{k_j+1})\le g(y^{*})$. On the other hand, since g is lower semi-continuous, $g(y^{*})\le\liminf_{j\to+\infty}g(y^{k_j+1})$. It follows that $\lim_{j\to+\infty}g(y^{k_j+1})=g(y^{*})$.
Furthermore, combining the closedness of $\partial g$ and the continuity of $\nabla f$ with the necessary optimality conditions, letting $k=k_j\to+\infty$ yields
$$-\lambda^{*}\in\partial g\left(y^{*}\right),\qquad \nabla f\left(x^{*}\right)=A^{\top}\lambda^{*},\qquad Ax^{*}-y^{*}=0.$$
Therefore, according to Definition 3, $\omega^{*}\in\operatorname{crit}L_\beta$, and hence $\Omega\subseteq\operatorname{crit}L_\beta$. □
Lemma 5. 
If the assumptions hold, then there exists a constant $C>0$ such that
$$d\left(0,\partial L_\beta\left(\omega^{k+1}\right)\right)\le C\left(\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|\right).$$
Proof. 
According to the result of Equation (16), there exists a constant $C_1>0$ such that
$$\left\|\lambda^{k+1}-\lambda^{k}\right\|\le C_1\left(\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|\right).$$
By the definition of the augmented Lagrangian function $L_\beta(\cdot)$, we have
$$\begin{aligned} \partial_yL_\beta\left(\omega^{k+1}\right)&=\partial g\left(y^{k+1}\right)+\lambda^{k+1}-\beta\left(Ax^{k+1}-y^{k+1}\right),\\ \nabla_xL_\beta\left(\omega^{k+1}\right)&=\nabla f\left(x^{k+1}\right)-A^{\top}\lambda^{k+1}+\beta A^{\top}\left(Ax^{k+1}-y^{k+1}\right),\\ \nabla_\lambda L_\beta\left(\omega^{k+1}\right)&=-\left(Ax^{k+1}-y^{k+1}\right). \end{aligned}$$
Combining the necessary optimality conditions (4)–(7) with (23), we obtain
$$\begin{aligned} \varepsilon_1^{k+1}&=\lambda^{k}-\lambda^{k+1}-\beta A\left(x^{k+1}-x^{k}\right)-G\left(y^{k+1}-y^{k}\right)-\theta_1^k\left(y^{k-1}-y^{k}\right)-\theta_2^k\left(y^{k-2}-y^{k-1}\right)\in\partial_yL_\beta\left(\omega^{k+1}\right),\\ \varepsilon_2^{k+1}&=\frac{(\gamma-s)\beta}{\gamma+s}A^{\top}A\left(x^{k+1}-x^{k}\right)-\frac{s}{\gamma+s}A^{\top}\left(\lambda^{k+1}-\lambda^{k}\right)-\left(\frac{A^{\top}}{(\gamma+s)\beta}-I\right)\left(\tau\left(Ax^{k+1}-Ax^{k}\right)+\theta_1^kA\left(x^{k-1}-x^{k}\right)+\theta_2^kA\left(x^{k-2}-x^{k-1}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right)\right)=\nabla_xL_\beta\left(\omega^{k+1}\right),\\ \varepsilon_3^{k+1}&=\frac{1}{(\gamma+s)\beta}\left(\lambda^{k+1}-\lambda^{k}\right)-\frac{\gamma\beta-\tau}{(\gamma+s)\beta}\left(Ax^{k+1}-Ax^{k}\right)+\frac{1}{(\gamma+s)\beta}\left(\theta_1^kA\left(x^{k-1}-x^{k}\right)+\theta_2^kA\left(x^{k-2}-x^{k-1}\right)+\theta_1^k\left(y^{k-1}-y^{k}\right)+\theta_2^k\left(y^{k-2}-y^{k-1}\right)\right)=\nabla_\lambda L_\beta\left(\omega^{k+1}\right). \end{aligned}$$
Hence, $\varepsilon^{k+1}=\left(\varepsilon_1^{k+1},\varepsilon_2^{k+1},\varepsilon_3^{k+1}\right)\in\partial L_\beta\left(\omega^{k+1}\right)$. Therefore, for all $k\ge1$ there exists $C_0>0$ such that
$$d\left(0,\partial L_\beta\left(\omega^{k+1}\right)\right)\le\left\|\varepsilon^{k+1}\right\|\le C_0\left(\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|+\left\|\lambda^{k+1}-\lambda^{k}\right\|\right).$$
Combining Equations (22) and (24), it follows that there exists a constant $C>0$ such that, for all $k\ge2$,
$$d\left(0,\partial L_\beta\left(\omega^{k+1}\right)\right)\le C\left(\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|\right). \qquad\square$$
Lemma 6. 
If Assumption (5) in Section 3.1 holds, then the augmented Lagrangian function $L_\beta$ satisfies the KL property.
Proof. 
Because C is a semi-algebraic set, the projection $P_C(x)$ is defined by polynomial constraints; thus, $f(x)=\frac12\|x-P_C(x)\|^2$ is a semi-algebraic function.
Since Q is a semi-algebraic set, its indicator function $\delta_Q(y)$ is a semi-algebraic function.
In addition, the terms $-\langle\lambda,Ax-y\rangle+\frac{\beta}{2}\|Ax-y\|^2$ are polynomial functions and hence semi-algebraic.
Therefore, $L_\beta$ is a semi-algebraic function; according to Lemma 1, the augmented Lagrangian function $L_\beta$ satisfies the KL property.
The strong convergence of the PSRTADMM algorithm is established using Lemmas 3, 5 and 6, and the relevant conclusions in Theorem 1. □
Theorem 2. 
(Strong convergence) Suppose that the assumptions in Section 3.1 hold and that $\hat{L}$ satisfies the KL property at each point of $\tilde{\Omega}$; then
(1)
$\sum_{k=0}^{+\infty}\left\|\omega^{k+1}-\omega^{k}\right\|<+\infty$.
(2)
$\{\omega^{k}\}$ converges to a stationary point of $L_\beta(\cdot)$.
Proof.
(1) Theorem 1 implies that for every $\tilde{\omega}^{*}\in\tilde{\Omega}$ we have $\lim_{k\to+\infty}\hat{L}(\tilde{\omega}^{k})=\hat{L}(\tilde{\omega}^{*})$. Two cases are now considered.
Case 1 
There exists an integer $k_0$ such that $\hat{L}(\tilde{\omega}^{k_0})=\hat{L}(\tilde{\omega}^{*})$. By Lemma 3, for all $k\ge k_0$,
$$\delta\left\|z^{k+1}-z^{k}\right\|^2\le\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{k+1}\right)\le\hat{L}\left(\tilde{\omega}^{k_0}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)=0.$$
Thus, for all $k\ge k_0$, $x^{k+1}=x^{k}$ and $y^{k+1}=y^{k}$. Hence, for $k>k_0+1$, $\tilde{\omega}^{k+1}=\tilde{\omega}^{k}$ and the assertion holds.
Case 2 
Assume that $\hat{L}(\tilde{\omega}^{k})>\hat{L}(\tilde{\omega}^{*})$ for all $k\ge1$. Since $d(\tilde{\omega}^{k},\tilde{\Omega})\to0$, for any given $\varepsilon>0$ there exists $k_1>0$ such that $d(\tilde{\omega}^{k},\tilde{\Omega})<\varepsilon$ for all $k>k_1$. For any given $\eta>0$ there exists $k_2>0$ such that $\hat{L}(\tilde{\omega}^{k})<\hat{L}(\tilde{\omega}^{*})+\eta$ for all $k>k_2$. Therefore, for given $\varepsilon,\eta>0$ and all $k>\tilde{k}:=\max\{k_1,k_2\}$,
$$d\left(\tilde{\omega}^{k},\tilde{\Omega}\right)<\varepsilon,\qquad \hat{L}\left(\tilde{\omega}^{*}\right)<\hat{L}\left(\tilde{\omega}^{k}\right)<\hat{L}\left(\tilde{\omega}^{*}\right)+\eta.$$
Theorem 1 states that, since $\{\tilde{\omega}^{k}\}$ is bounded, $\tilde{\Omega}$ is a non-empty compact set and $\hat{L}(\cdot)$ is constant on $\tilde{\Omega}$. Therefore, using Lemma 1, for all $k>\tilde{k}$ we have
$$\varphi'\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)d\!\left(0,\partial\hat{L}\left(\tilde{\omega}^{k}\right)\right)\ge1.$$
Because $\varphi'\!\left(\hat{L}(\tilde{\omega}^{k})-\hat{L}(\tilde{\omega}^{*})\right)>0$, we therefore have
$$\frac{1}{\varphi'\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)}\le d\!\left(0,\partial\hat{L}\left(\tilde{\omega}^{k}\right)\right).$$
By the concavity of $\varphi$, we have
$$\varphi\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)-\varphi\!\left(\hat{L}\left(\tilde{\omega}^{k+1}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)\ge\varphi'\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{k+1}\right)\right).$$
So,
$$0\le\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{k+1}\right)\le\frac{\varphi\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)-\varphi\!\left(\hat{L}\left(\tilde{\omega}^{k+1}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)}{\varphi'\!\left(\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)}.$$
Combining Equations (25) and (26) with Lemma 5, we obtain
$$\hat{L}\left(\tilde{\omega}^{k}\right)-\hat{L}\left(\tilde{\omega}^{k+1}\right)\le C\left(\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|\right)\Delta_{k,k+1},$$
where $\Delta_{p,q}:=\varphi\!\left(\hat{L}(\tilde{\omega}^{p})-\hat{L}(\tilde{\omega}^{*})\right)-\varphi\!\left(\hat{L}(\tilde{\omega}^{q})-\hat{L}(\tilde{\omega}^{*})\right)$.
For simplicity, let $\wedge_k=\|x^{k}-x^{k-1}\|+\|y^{k}-y^{k-1}\|$. Combining Lemma 3 and Equation (21), for $k>\tilde{k}$ we have
$$\delta\left\|z^{k+1}-z^{k}\right\|^2\le C\wedge_k\Delta_{k,k+1}.$$
According to Equation (28), we have
$$\left\|z^{k+1}-z^{k}\right\|\le\left(\frac{C\wedge_k\Delta_{k,k+1}}{\delta}\right)^{1/2}.$$
Furthermore,
$$5\left\|z^{k+1}-z^{k}\right\|\le2\left(\wedge_k\right)^{1/2}\left(\frac{25C}{4\delta}\Delta_{k,k+1}\right)^{1/2}\le\wedge_k+\frac{25C}{4\delta}\Delta_{k,k+1}.$$
Summing Equation (29) over $k=\tilde{k}+1,\dots,m$ gives
$$5\sum_{k=\tilde{k}+1}^{m}\left\|z^{k+1}-z^{k}\right\|\le\sum_{k=\tilde{k}+1}^{m}\left(\wedge_k+\frac{25C}{4\delta}\Delta_{k,k+1}\right).$$
Noting that $\varphi\!\left(\hat{L}(\tilde{\omega}^{m+1})-\hat{L}(\tilde{\omega}^{*})\right)\ge0$, rearranging the terms in the inequality above, applying the Cauchy inequality, and letting $m\to+\infty$, we obtain
$$\left(\frac{5\sqrt{2}}{2}-1\right)\sum_{k=\tilde{k}+1}^{+\infty}\left\|x^{k+1}-x^{k}\right\|+\left(\frac{5\sqrt{2}}{2}-1\right)\sum_{k=\tilde{k}+1}^{+\infty}\left\|y^{k+1}-y^{k}\right\|\le\left\|x^{\tilde{k}+1}-x^{\tilde{k}}\right\|+\left\|y^{\tilde{k}+1}-y^{\tilde{k}}\right\|+\frac{25C}{4\delta}\varphi\!\left(\hat{L}\left(\tilde{\omega}^{\tilde{k}}\right)-\hat{L}\left(\tilde{\omega}^{*}\right)\right)<+\infty.$$
Hence, $\sum_{k=0}^{+\infty}\|x^{k+1}-x^{k}\|<+\infty$ and $\sum_{k=0}^{+\infty}\|y^{k+1}-y^{k}\|<+\infty$. Combined with Equation (26), this yields
$$\sum_{k=0}^{+\infty}\left\|\lambda^{k+1}-\lambda^{k}\right\|<+\infty.$$
Additionally, we notice that
$$\left\|\omega^{k+1}-\omega^{k}\right\|=\left(\left\|x^{k+1}-x^{k}\right\|^2+\left\|y^{k+1}-y^{k}\right\|^2+\left\|\lambda^{k+1}-\lambda^{k}\right\|^2\right)^{\frac12}\le\left\|x^{k+1}-x^{k}\right\|+\left\|y^{k+1}-y^{k}\right\|+\left\|\lambda^{k+1}-\lambda^{k}\right\|.$$
So,
$$\sum_{k=0}^{+\infty}\left\|\omega^{k+1}-\omega^{k}\right\|<+\infty.$$
(2) From (1), we know that $\{\omega^{k}\}$ is a Cauchy sequence and therefore converges; then, from Theorem 1 (3), $\{\omega^{k}\}$ converges to a stationary point of $L_\beta(\cdot)$. □
Lemma 7. 
If the assumptions in Section 3.1 and the following conditions (1)–(3) hold:
(1) 
g is coercive, i.e., $\liminf_{\|y\|\to\infty}g(y)=+\infty$.
(2) 
the relaxation factor $s\in\left(\frac{3-\sqrt{3}}{2},\frac{3+\sqrt{3}}{2}\right)$.
(3) 
the function $\bar{f}(x^{k})=f(x^{k})-\frac{\|\nabla f(x^{k})\|^2}{2\|A\|^2(2s-1)\beta}$ has a lower bound and is coercive, i.e.,
$$\inf\bar{f}\left(x^{k}\right)>-\infty\quad\text{and}\quad\lim_{\|x^{k}\|\to+\infty}\bar{f}\left(x^{k}\right)=+\infty.$$
Then, the sequence { ω k } generated by PSRTADMM is bounded.
Proof. 
Since $L_\beta(\omega^{k})$ is monotonically decreasing, $L_\beta(\omega^{k})\le L_\beta(\omega^{0})$; combining this with (15), we have
$$\begin{aligned} L_\beta\left(\omega^{0}\right)\ge L_\beta\left(\omega^{k}\right)&=f\left(x^{k}\right)+g\left(y^{k}\right)-\left\langle\lambda^{k},Ax^{k}-y^{k}\right\rangle+\frac{\beta}{2}\left\|Ax^{k}-y^{k}\right\|^2\\ &=f\left(x^{k}\right)+g\left(y^{k}\right)-\left\langle\left(A^{\top}\right)^{-1}\nabla f\left(x^{k}\right)+\beta(1-s)\left(Ax^{k}-y^{k}\right),\,Ax^{k}-y^{k}\right\rangle+\frac{\beta}{2}\left\|Ax^{k}-y^{k}\right\|^2\\ &=f\left(x^{k}\right)+g\left(y^{k}\right)-\frac{\left\|\nabla f\left(x^{k}\right)\right\|^2}{2\|A\|^2(2s-1)\beta}+\frac{(2s-1)\beta}{2}\left\|Ax^{k}-y^{k}-\frac{\left(A^{\top}\right)^{-1}\nabla f\left(x^{k}\right)}{(2s-1)\beta}\right\|^2. \end{aligned}$$
Furthermore, since g is a proper lower semi-continuous coercive function on a closed set, $\inf_y g(y)>-\infty$. Consequently,
$$f\left(x^{k}\right)+g\left(y^{k}\right)-\frac{\left\|\nabla f\left(x^{k}\right)\right\|^2}{2\|A\|^2(2s-1)\beta}+\frac{(2s-1)\beta}{2}\left\|Ax^{k}-y^{k}-\frac{\left(A^{\top}\right)^{-1}\nabla f\left(x^{k}\right)}{(2s-1)\beta}\right\|^2\le L_\beta\left(\omega^{0}\right)<+\infty,$$
with $\inf_y g(y)>-\infty$, $\inf_{x^{k}}\bar{f}(x^{k})>-\infty$, and $s\in\left(\frac{3-\sqrt{3}}{2},\frac{3+\sqrt{3}}{2}\right)$ (so that $s>\frac12$).
Therefore, it is easy to see that $\left\|Ax^{k}-y^{k}-\frac{(A^{\top})^{-1}\nabla f(x^{k})}{(2s-1)\beta}\right\|$ is bounded and that $\bar{f}(x^{k})$ is bounded above; by the coercivity of $\bar{f}$ and g, the sequences $\{x^{k}\}$ and $\{y^{k}\}$ are bounded, so $\nabla f(x^{k})$ is bounded. Combining this with (15) proves that $\{\lambda^{k}\}$ is bounded as well. This completes the proof. □

4. Numerical Experiments

This section presents a numerical example to validate the efficacy of Algorithm 1 by addressing the split feasibility problem. The variables and constraints in the experiments exactly match those defined in the theoretical part, thereby ensuring that the experiments can effectively test the validity and practicality of the theory. In the experimental stage, we compare Algorithm 1 with the classical alternating direction method of multipliers (ADMM) to demonstrate its superior performance on the split feasibility problem. All codes were implemented using MATLAB R2021a on a desktop computer with 32 GB of RAM.
In the field of CT image reconstruction, the central objective of inverse problems is to restore an image from noisy CT data. This process can be regarded as the following inverse problem:
$$b=Hx+w,$$
where $b\in\mathbb{R}^m$ is the observed data, $x\in\mathbb{R}^n$ is the true (original) underlying image, $w\in\mathbb{R}^m$ represents the measurement error (Gaussian noise at level $\frac{1}{200}\|b\|$), and $H\in\mathbb{R}^{m\times n}$ is the Radon transform used in X-ray computed tomography (CT).
In view of the fact that Equation (30) is often ill-posed and significant difficulties arise in solving it, in order to ensure the stability of the solutions we investigate the following model:
$$\min_{x\in C}F(x)+G(x),$$
where $F(x)$ is the fidelity term, $G(x)$ is the regularization term based on prior knowledge of the image, and $\alpha>0$ is a balancing regularization parameter. In this experiment, we consider $F(x)=\|Hx-b\|^2$, a widely used fidelity term in CT reconstruction, and $G(x)=\sum_{i=1}^{n}\|D_ix\|$, where $D_ix$ denotes the discrete gradient of x at pixel i (the operator $D_i$ stacks the horizontal and vertical gradient operators at that pixel) and the sum $\sum_i\|D_ix\|$ plays the role of total variation (TV) regularization for x. Thus, it is easy to rewrite model (31) as SFP (1) by setting $C=\{x\in\mathbb{R}^n : G(x)\le r\}$, $Q=\{y : \|y-b\|^2\le\varepsilon\}$, and $y=Hx$. The test image is the 96 × 96 Shepp–Logan phantom, corrupted by Gaussian noise with standard deviation σ = 0.05 (equivalent to ε ≈ 4.8 in the feasibility set Q). The parameters used in the trials were as follows: $\beta=1.0$, $\tau=0.5$, $\theta_1=0.1$, $\theta_2=0.1$, and $\alpha=0.001$. The choice of these parameters was primarily based on empirical tuning and the theoretical conditions, so as to ensure the stability and convergence of the algorithm. The settings of $\beta$ and $\tau$ influence the step size and convergence speed of the algorithm, while $\theta_1$ and $\theta_2$ control the strength of the inertial effects, which helps accelerate convergence. As a regularization parameter, $\alpha$ balances the fidelity term and the regularization term in image reconstruction and significantly impacts image quality.
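For reference, the short Python sketch below mimics the data model (30) and the PSNR metric used in the comparison. The operator H and the phantom are assumed to be supplied by the user (e.g., a discretized Radon matrix and the flattened 96 × 96 Shepp–Logan image); this is not the authors' MATLAB implementation.

```python
import numpy as np

def psnr(x_true, x_rec, data_range=1.0):
    # Peak signal-to-noise ratio: higher values mean the reconstruction is
    # closer to the ground-truth image.
    mse = np.mean((x_true - x_rec) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def make_noisy_sinogram(H, x_true, rel_noise=1.0 / 200.0, rng=None):
    # b = H x + w, with Gaussian noise w rescaled to rel_noise * ||H x|| (cf. Eq. (30)).
    rng = np.random.default_rng() if rng is None else rng
    clean = H @ x_true
    w = rng.standard_normal(clean.shape)
    w *= rel_noise * np.linalg.norm(clean) / np.linalg.norm(w)
    return clean + w
```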
Figure 1 shows four images: the original sinogram image, the noised sinogram image, the image reconstructed by ADMM, and the image reconstructed by Algorithm 1. A comprehensive comparison of the reconstructed images in Figure 1 shows that Algorithm 1 yields higher-quality images than ADMM. To evaluate image quality more objectively, we used the peak signal-to-noise ratio (PSNR) as a statistical indicator. A higher PSNR value means better image quality, less noise, and a smaller difference between the processed image and the original one. The results show that the PSNR of the image reconstructed by Algorithm 1 is 34.6383, which has a significant advantage over the 33.3561 obtained by the alternating direction method of multipliers (ADMM).
From the relationship between the peak signal-to-noise ratio (PSNR) values and the number of iterations of the two algorithms presented in Figure 2, it can be clearly observed that the images generated by Algorithm 1 approach the original image and their stability gradually increases, eventually converging to the ideal state. This demonstrates the effectiveness of our algorithm.
Note: the presented experiment focuses on a specific CT image reconstruction scenario. Although the results highlight the algorithm’s efficacy in this case, further studies with diverse datasets and problem configurations are necessary to comprehensively evaluate the robustness and generality of PSRTADMM. This constitutes an important direction for future research.

5. Conclusions

In this paper, we put forward a partially symmetric regularized two-step inertial alternating direction method of multipliers for non-convex split feasibility problems. The proposed algorithm incorporates an intermediate multiplier update and two-step inertial terms in the subproblems. Under appropriate assumptions, its global convergence is proven. In addition, when the augmented Lagrangian function satisfies the Kurdyka–Łojasiewicz (KL) property, the algorithm achieves strong convergence, meaning that the entire sequence of iterates converges. Finally, numerical experiments were conducted on CT image reconstruction. The results show that the proposed algorithm outperforms the traditional method in terms of reconstruction quality and convergence speed, further confirming its effectiveness.

Author Contributions

Conceptualization, C.Y. and Y.D.; Methodology, C.Y. and Y.D.; Software, C.Y.; Validation, C.Y.; Writing—original draft, C.Y.; Visualization, C.Y.; Supervision, Y.D.; Project administration, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, grant number TP2022126.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A New Alternating Minimization Algorithm for Total Variation Image Reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272.
2. Dong, B.; Li, J.; Shen, Z. X-Ray CT Image Reconstruction via Wavelet Frame Based Regularization and Radon Domain Inpainting. J. Sci. Comput. 2013, 54, 333–349.
3. Block, K.T.; Uecker, M.; Frahm, J. Undersampled radial MRI with multiple coils. Iterative image reconstruction using a total variation constraint. Magn. Reson. Med. 2007, 57, 1086–1098.
4. Liu, S.; Cao, J.; Liu, H.; Zhou, X.; Zhang, K.; Li, Z. MRI reconstruction via enhanced group sparsity and nonconvex regularization. Neurocomputing 2018, 272, 108–121.
5. Lustig, M.; Donoho, D.; Pauly, J.M. Sparse MRI: The application of compressed sensing for rapid MR imaging. Magn. Reson. Med. 2007, 58, 1182–1195.
6. Gibali, A.; Kuefer, K.H.; Suess, P. Successive Linear Programing Approach for Solving the Nonlinear Split Feasibility Problem. J. Nonlinear Convex Anal. 2014, 15, 345–353.
7. Byrne, C. Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 2002, 18, 441–453.
8. Censor, Y.; Motova, A.; Segal, A. Perturbed projections and subgradient projections for the multiple-sets split feasibility problem. J. Math. Anal. Appl. 2007, 327, 1244–1256.
9. Dang, Y.; Gao, Y. The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl. 2011, 27, 015007.
10. Qu, B.; Wang, C.; Xiu, N. Analysis on Newton projection method for the split feasibility problem. Comput. Optim. Appl. 2017, 67, 175–199.
11. Fukushima, M. Application of the alternating direction method of multipliers to separable convex programming problems. Comput. Optim. Appl. 1992, 1, 93–111.
12. Deng, W.; Yin, W. On the Global and Linear Convergence of the Generalized Alternating Direction Method of Multipliers. J. Sci. Comput. 2016, 66, 889–916.
13. Wang, Y.; Yin, W.; Zeng, J. Global Convergence of ADMM in Nonconvex Nonsmooth Optimization. J. Sci. Comput. 2019, 78, 29–63.
14. Yang, Y.; Jia, Q.S.; Xu, Z.; Guan, X.; Spanos, C.J. Proximal ADMM for nonconvex and nonsmooth optimization. Automatica 2022, 146, 110551.
15. Ouyang, Y.; Chen, Y.; Lan, G.; Pasiliao, E., Jr. An Accelerated Linearized Alternating Direction Method of Multipliers. SIAM J. Imaging Sci. 2015, 8, 644–681.
16. Hong, M.; Luo, Z.-Q. On the linear convergence of the alternating direction method of multipliers. Math. Program. 2017, 162, 165–199.
17. Zhao, Y.; Li, M.; Pen, X.; Tan, J. Partial symmetric regularized alternating direction method of multipliers for non-convex split feasibility problems. AIMS Math. 2025, 10, 3041–3061.
18. Dang, Y.; Chen, L.; Gao, Y. Multi-block relaxed-dual linear inertial ADMM algorithm for nonconvex and nonsmooth problems with nonseparable structures. Numer. Algorithms 2025, 98, 251–285.
19. Rockafellar, R.T.; Wets, R.J.-B. Variational Analysis; Springer Science and Business Media: Berlin/Heidelberg, Germany, 2009.
20. Bolte, J.; Daniilidis, A.; Lewis, A. The Lojasiewicz inequality for nonsmooth subanalytic functions with applications to subgradient dynamical systems. SIAM J. Optim. 2007, 17, 1205–1223.
21. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward–backward splitting, and regularized Gauss–Seidel methods. Math. Program. 2013, 137, 91–129.
22. Wang, F.; Cao, W.; Xu, Z. Convergence of multi-block Bregman ADMM for nonconvex composite problems. Sci. China-Inf. Sci. 2018, 61, 122101.
23. Nesterov, Y. Introductory Lectures on Convex Optimization: A Basic Course; Springer: New York, NY, USA, 2004.
Figure 1. The recovered sinogram image by the two algorithms.
Figure 2. PSNR values of the reconstructed image using the two algorithms.
