Article

A Nonconvex Fractional Regularization Model in Robust Principal Component Analysis via the Symmetric Alternating Direction Method of Multipliers

1 School of Mathematical Sciences, Nanjing Normal University of Special Education, Nanjing 210038, China
2 School of Microelectronics and Data Science, Anhui University of Technology, Ma’anshan 243032, China
3 School of Mathematics and Physics, Suqian University, Suqian 223800, China
4 Key Laboratory of Numerical Simulation for Large Scale Complex Systems, Ministry of Education, Nanjing 210023, China
5 School of Information Science and Engineering, Southeast University, Nanjing 211189, China
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2025, 17(10), 1590; https://doi.org/10.3390/sym17101590
Submission received: 25 August 2025 / Revised: 16 September 2025 / Accepted: 18 September 2025 / Published: 24 September 2025
(This article belongs to the Special Issue Mathematics: Feature Papers 2025)

Abstract

This paper addresses the NP-hard rank minimization problem in Robust Principal Component Analysis (RPCA) by proposing a nonconvex fractional regularization approximation. Compared with existing convex regularization (which often yields suboptimal solutions) and nonconvex regularization (which typically requires parameter selection), the proposed model effectively avoids parameter selection while preserving scale invariance. By introducing an auxiliary variable, we transform the problem into a nonconvex optimization problem with a separable structure. We then use a more flexible Symmetric Alternating Direction Method of Multipliers (SADMM) to solve it and provide a rigorous convergence proof. In numerical experiments involving synthetic data, image recovery, and foreground–background separation for surveillance video, the proposed fractional regularization model demonstrates high computational accuracy, and its performance is comparable to that of many state-of-the-art algorithms.

1. Introduction

Over the past few decades, Robust Principal Component Analysis (RPCA), a crucial analytical tool, has been widely applied in various fields (see references [1,2,3,4,5,6,7]) to reconstruct a low-rank matrix $X$ and a sparse matrix $Y$ from an observed dataset $M \in \mathbb{R}^{m \times n}$ ($m \le n$). In practical applications, $M$ typically represents high-dimensional data such as surveillance videos, images to be recognized, or text documents; $X$ denotes the primary structural components of the data (e.g., static backgrounds in surveillance footage, texture features in images, or common vocabulary across documents); while $Y$ represents anomalous components (such as moving objects in videos, image noise, or keywords distinguishing different documents). Given the low-rank and sparse nature of the underlying data, RPCA is mathematically formulated as the following model:
$$\min_{X,Y}\ \operatorname{rank}(X)+\mu\|Y\|_0, \qquad \text{s.t.}\ X+Y=M, \tag{1}$$
where $\operatorname{rank}(\cdot)$ (resp., $\|\cdot\|_0$) denotes the rank (resp., the number of nonzero entries) of a matrix, and $\mu>0$ is a trade-off parameter. However, due to the discontinuity of the rank function and the $\ell_0$ norm, model (1) is NP-hard.

1.1. Related Work

In recent years, theoretical exploration and algorithm design for problem (1) have sparked a significant research boom in academia. Most studies focus on convex and nonconvex relaxation strategies for RPCA. Wright et al. [8] conducted a systematic and comprehensive analysis of the convex relaxation of problem (1) under relatively mild assumptions, which reads as follows:
$$\min_{X,Y}\ \|X\|_*+\mu\|Y\|_1, \qquad \text{s.t.}\ X+Y=M, \tag{2}$$
where $\|\cdot\|_*$ denotes the nuclear norm (the sum of all singular values of a matrix), which promotes the low-rank component of $M$, and $\|\cdot\|_1$ denotes the $\ell_1$ norm (the sum of the absolute values of all matrix entries), which encourages the sparse component of $M$.
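As a quick numerical illustration (ours, not from the paper), both norms are one-liners in NumPy:

```python
import numpy as np

def nuclear_norm(X):
    # Sum of the singular values of X.
    return np.linalg.svd(X, compute_uv=False).sum()

def entrywise_l1_norm(X):
    # Sum of the absolute values of all entries of X.
    return np.abs(X).sum()

M = np.array([[3.0, 0.0],
              [0.0, -4.0]])
print(nuclear_norm(M))        # 7.0 (singular values 4 and 3)
print(entrywise_l1_norm(M))   # 7.0
```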
In practical applications, $M$ is often only partially observed; Tao and Yuan [9] therefore further extended model (2) to the following form:
$$\min_{X,Y}\ \|X\|_*+\mu\|Y\|_1, \qquad \text{s.t.}\ \mathcal{P}_\Omega(X+Y)=\mathcal{P}_\Omega(M), \tag{3}$$
where $\Omega$ is a subset of the index set $\{1,2,\dots,m\}\times\{1,2,\dots,n\}$ representing the observed entries, and $\mathcal{P}_\Omega:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}$ is the orthogonal projection onto the span of matrices vanishing outside of $\Omega$, so that the $(i,j)$-th entry of $\mathcal{P}_\Omega(X)$ is $X_{ij}$ if $(i,j)\in\Omega$ and zero otherwise.
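A minimal sketch of this projection with a boolean observation mask (our illustration, not the authors' code):

```python
import numpy as np

def P_Omega(X, mask):
    # mask has the same shape as X; True marks an observed entry (i, j) in Omega.
    # Entries outside Omega are set to zero.
    return np.where(mask, X, 0.0)

X = np.arange(6, dtype=float).reshape(2, 3)
mask = np.array([[True, False, True],
                 [False, True, False]])
print(P_Omega(X, mask))
```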
Models (2) and (3) have a concise and clear structure and are convenient to solve [9]. Under the incoherence assumption, the convex model recovers the low-rank and sparse components with very high probability [1]. However, it suffers from two limitations that restrict its applicability. First, in real-world applications, the underlying matrices may lack the necessary incoherence guarantees [1] and the data may be severely corrupted; in this scenario, the global optimal solution of the model may deviate significantly from the true one. Second, the nuclear norm shrinks all singular values by the same amount. It is essentially the $\ell_1$ norm of the singular values, and the shrinkage property of the $\ell_1$ norm leads to biased estimates [10,11]. In other words, the nuclear norm overpenalizes large singular values and may produce a severely erroneous solution. For this reason, a variety of nonconvex relaxations of the rank function have been proposed. For convenience, we state the general form of nonconvex regularization as follows (for specific instances, please refer to Table 1):
$$\min_{X,Y}\ \sum_{i=1}^{m} h\big(\sigma_i(X)\big)+\mu\|Y\|_1, \qquad \text{s.t.}\ \mathcal{P}_\Omega(X+Y)=\mathcal{P}_\Omega(M), \tag{4}$$
where $\sigma_i(X)$ denotes the $i$-th singular value of the matrix $X$.
Numerous experimental results reveal that, with proper parameter settings, model (4) often outperforms the nuclear norm. However, parameter tuning for nonconvex approximations is computationally time-consuming, and there is no unified theoretical criterion for selecting optimal parameters. A natural question is therefore: can we find a nonconvex model that avoids parameter estimation?
Fortunately, the fractional model (Nuclear/Frobenius, N/F) effectively fills this research gap. On the one hand, it avoids the time spent on parameter selection during computation; on the other hand, the fractional structure possesses inherent scale invariance, which also distinguishes it from other nonconvex models. Moreover, Gao et al. [20] demonstrated the significant advantage of the fractional model in approximating the rank function through a concrete example. It should be emphasized that, when restricted to vector optimization, the fractional model reduces to the $\ell_1/\ell_2$ model, whose applications have been explored in several references [21,22,23,24,25,26,27] and are not discussed further in this paper.
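A quick check of the scale invariance claim (our verification, not from the paper): for any scalar $c\neq 0$, the singular values of $cX$ are $|c|\,\sigma_i(X)$, so
$$
\frac{\|cX\|_*}{\|cX\|_F}
  = \frac{\sum_i |c|\,\sigma_i(X)}{\sqrt{\sum_i |c|^2\,\sigma_i(X)^2}}
  = \frac{|c|\sum_i \sigma_i(X)}{|c|\sqrt{\sum_i \sigma_i(X)^2}}
  = \frac{\|X\|_*}{\|X\|_F},
$$
so the N/F penalty is unaffected by rescaling the data, whereas the nuclear norm alone scales with $|c|$.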

1.2. Problem Description

Inspired by the application of the fractional model to the low-rank matrix recovery problem [20], we construct the following more general optimization model under the premise of incomplete data observation:
$$\min_{X,Y}\ \lambda_1\frac{\|X\|_*}{\|X\|_F}+\lambda_2\|Y\|_1 \qquad \text{s.t.}\ \mathcal{A}(X+Y)=b, \tag{5}$$
where $\|\cdot\|_F$ is the Frobenius norm (the square root of the sum of the squares of all matrix entries), $\lambda_i>0$ ($i=1,2$) are the balancing parameters of $X$ and $Y$, $\mathcal{A}:\mathbb{R}^{m\times n}\to\mathbb{R}^{m\times n}$ is a linear operator, and $b\in\mathbb{R}^{m\times n}$ is the observed data. By penalizing the constraint into the objective function, we obtain the following equivalent form of (5):
$$\min_{X,Y}\ \lambda_1\frac{\|X\|_*}{\|X\|_F}+\lambda_2\|Y\|_1+\frac{1}{2}\|\mathcal{A}(X+Y)-b\|_F^2. \tag{6}$$
In comparison with existing nonconvex models, the proposed model effectively avoids the suboptimal solutions caused by improper parameter selection. In addition, the fractional term approximates the matrix rank more closely and still retains the property of scale invariance.
By introducing an auxiliary variable $Z\in\mathbb{R}^{m\times n}$, we rewrite the formulation (6) as the following optimization problem with a separable structure:
$$\min_{X,Y,Z}\ \lambda_1\frac{\|X\|_*}{\|X\|_F}+\lambda_2\|Y\|_1+\frac{1}{2}\|\mathcal{A}(Z)-b\|_F^2 \qquad \text{s.t.}\ X+Y=Z. \tag{7}$$
Since the Alternating Direction Method of Multipliers (ADMM) is known to be a very effective algorithmic framework for optimization problems with a separable structure, and inspired by [28], we adopt a more flexible Symmetric ADMM (SADMM) to solve the problem. The implementation framework of the algorithm is as follows:
$$
\begin{aligned}
X^{k+1} &\in \arg\min_X\ \mathcal{L}_\beta(X, Y^k, Z^k, \Lambda^k), &\text{(8a)}\\
Y^{k+1} &\in \arg\min_Y\ \mathcal{L}_\beta(X^{k+1}, Y, Z^k, \Lambda^k), &\text{(8b)}\\
\Lambda^{k+\frac{1}{2}} &= \Lambda^k+\alpha\beta\,(X^{k+1}+Y^{k+1}-Z^k), &\text{(8c)}\\
Z^{k+1} &= \arg\min_Z\ \mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z, \Lambda^{k+\frac{1}{2}}), &\text{(8d)}\\
\Lambda^{k+1} &= \Lambda^{k+\frac{1}{2}}+\beta\,(X^{k+1}+Y^{k+1}-Z^{k+1}), &\text{(8e)}
\end{aligned}
$$
where $\Lambda$ is the Lagrange multiplier, $\alpha\in(-1,1)$, $\beta$ is the penalty parameter, and $\mathcal{L}_\beta(\cdot)$ is the augmented Lagrangian function of (7), given as
$$\mathcal{L}_\beta(X,Y,Z,\Lambda)=\lambda_1\frac{\|X\|_*}{\|X\|_F}+\lambda_2\|Y\|_1+\frac{1}{2}\|\mathcal{A}(Z)-b\|_F^2+\langle\Lambda,\,X+Y-Z\rangle+\frac{\beta}{2}\|X+Y-Z\|_F^2.$$
In contrast to the classical ADMM, the SADMM used in this work not only adds an intermediate update of the multipliers, but also introduces the relaxation factor $\alpha$, which effectively improves the numerical performance of the algorithm. When $\alpha=0$ the method reduces to the classical ADMM, so SADMM is an extension of ADMM. This is also validated in the subsequent numerical experiments.
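To make the role of the relaxation factor and of the two multiplier updates concrete, here is a minimal Python sketch of the outer iteration (8a)–(8e). The three block minimizations are passed in as callbacks (their closed forms are derived in Section 2); all names are ours and the stopping rule is simplified, so this is an illustration rather than the authors' implementation.

```python
import numpy as np

def sadmm(M_shape, solve_X, solve_Y, solve_Z, alpha=0.5, beta=0.05,
          max_iter=500, tol=1e-5):
    """Outer SADMM loop (8a)-(8e) for min f(X) + g(Y) + h(Z) s.t. X + Y = Z.

    solve_X / solve_Y / solve_Z return the minimizer of the augmented
    Lagrangian over one block; alpha in (-1, 1) is the relaxation factor
    and beta > 0 the penalty parameter."""
    X = np.zeros(M_shape); Y = np.zeros(M_shape)
    Z = np.zeros(M_shape); Lam = np.zeros(M_shape)
    for _ in range(max_iter):
        X_old = X
        X = solve_X(Y, Z, Lam, beta)                  # (8a)
        Y = solve_Y(X, Z, Lam, beta)                  # (8b)
        Lam_half = Lam + alpha * beta * (X + Y - Z)   # (8c) intermediate multiplier update
        Z = solve_Z(X, Y, Lam_half, beta)             # (8d)
        Lam = Lam_half + beta * (X + Y - Z)           # (8e)
        if np.linalg.norm(X - X_old) <= tol * max(np.linalg.norm(X_old), 1.0):
            break
    return X, Y, Z
```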

1.3. Our Contribution

We briefly summarize the work of our research.
  • We propose a new rank approximation model for the RPCA. Compared with the existing nonconvex models, this model not only circumvents the parameter restriction, but also exhibits superior performance in terms of approximating the rank.
  • We employ the SADMM to solve the proposed model, and prove the convergence of the algorithm under mild conditions.
  • We conduct experiments using both synthetic and real-world data. The experimental results demonstrate the superiority of the proposed algorithm and model.
The remainder of this paper is organized as follows. Section 2 describes the algorithmic framework and provides a detailed description of the solution to each subproblem. Section 3 introduces the preliminaries necessary for analyzing the algorithm and rigorously analyzes the convergence. Section 4 presents the numerical results of the proposed algorithm on synthetic and real data. Finally, the main conclusions are summarized in Section 5.

2. Algorithm

This section focuses on the strategy for solving problem (7) with the SADMM. We now detail the solution methods for the X-subproblem, Y-subproblem, and Z-subproblem.
(1)
Solving the X-subproblem: Given $Y^k$, $Z^k$, and $\Lambda^k$, we introduce an auxiliary variable $N$ to reformulate the X-subproblem as follows:
$$\min_{X,N}\ \lambda_1\frac{\|X\|_*}{\|N\|_F}+\frac{\beta}{2}\|X-B^k\|_F^2 \qquad \text{s.t.}\ X=N, \tag{9}$$
where $B^k=Z^k-Y^k-\frac{\Lambda^k}{\beta}$. The augmented Lagrangian function of (9) is given by
$$\mathcal{L}_\delta^k(X,N,T)=\lambda_1\frac{\|X\|_*}{\|N\|_F}+\frac{\beta}{2}\|X-B^k\|_F^2+\langle T,\,X-N\rangle+\frac{\delta}{2}\|X-N\|_F^2,$$
where $T$ is the Lagrange multiplier and $\delta$ is the penalty parameter. We then use ADMM to solve (9) by the following iterative process:
$$
\begin{aligned}
X^{t+1} &= \arg\min_X\ \mathcal{L}_\delta^k(X, N^t, T^t), &\text{(10a)}\\
N^{t+1} &= \arg\min_N\ \mathcal{L}_\delta^k(X^{t+1}, N, T^t), &\text{(10b)}\\
T^{t+1} &= T^t+\delta\,(X^{t+1}-N^{t+1}). &\text{(10c)}
\end{aligned}
$$
Here $t$ is the inner iteration counter. Then,
$$
X^{t+1} = \arg\min_X\ \mathcal{L}_\delta^k(X, N^t, T^t)
        = \arg\min_X\ \lambda_1\frac{\|X\|_*}{\|N^t\|_F}+\frac{\beta+\delta}{2}\Big\|X-\frac{\beta B^k+\delta N^t-T^t}{\beta+\delta}\Big\|_F^2
        = U D(\Sigma) V^\top, \tag{11}
$$
where $D(\Sigma)=\operatorname{diag}\Big(\max\Big\{\Sigma_{ii}-\frac{\lambda_1}{(\beta+\delta)\|N^t\|_F},\,0\Big\}\Big)$ with $\Sigma$ satisfying
$$\beta B^k+\delta N^t-T^t=(\beta+\delta)\,U\Sigma V^\top.$$
Similar to the analysis in [21], the optimal solution of the N-subproblem is obtained by
$$
N^{t+1} = \arg\min_N\ \mathcal{L}_\delta^k(X^{t+1}, N, T^t)
        = \arg\min_N\ \lambda_1\frac{\|X^{t+1}\|_*}{\|N\|_F}+\frac{\delta}{2}\Big\|X^{t+1}-N+\frac{T^t}{\delta}\Big\|_F^2
        = \begin{cases} Q^t, & C^t=0,\\ \gamma^t C^t, & C^t\neq 0, \end{cases} \tag{12}
$$
where $C^t=X^{t+1}+\frac{T^t}{\delta}$, $Q^t$ is a random matrix satisfying $\|Q^t\|_F^3=\frac{d^t}{\delta}$ with $d^t=\lambda_1\|X^{t+1}\|_*$, and $\gamma^t=\frac{1}{3}+\frac{1}{3}\Big(a^t+\frac{1}{a^t}\Big)$ with $a^t=\Big(\frac{27b^t+2+\sqrt{(27b^t+2)^2-4}}{2}\Big)^{1/3}$ and $b^t=\frac{d^t}{\delta\|C^t\|_F^3}$.
(2)
Solving the Y-subproblem: Given $X^{k+1}$, $Z^k$, and $\Lambda^k$,
$$
Y^{k+1} = \arg\min_Y\ \mathcal{L}_\beta(X^{k+1}, Y, Z^k, \Lambda^k)
        = \arg\min_Y\ \lambda_2\|Y\|_1+\frac{\beta}{2}\Big\|Y-\Big(Z^k-X^{k+1}-\frac{\Lambda^k}{\beta}\Big)\Big\|_F^2
        = \operatorname{sign}\Big(Z^k-X^{k+1}-\frac{\Lambda^k}{\beta}\Big)\cdot\max\Big\{\Big|Z^k-X^{k+1}-\frac{\Lambda^k}{\beta}\Big|-\frac{\lambda_2}{\beta},\,0\Big\}. \tag{13}
$$
(3)
Solving the Z-subproblem: Given $X^{k+1}$, $Y^{k+1}$, and $\Lambda^{k+\frac{1}{2}}$,
$$
Z^{k+1} = \arg\min_Z\ \mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z, \Lambda^{k+\frac{1}{2}})
        = \arg\min_Z\ \frac{1}{2}\|\mathcal{A}(Z)-b\|_F^2+\frac{\beta}{2}\Big\|Z-\Big(X^{k+1}+Y^{k+1}+\frac{\Lambda^{k+\frac{1}{2}}}{\beta}\Big)\Big\|_F^2
        = (\mathcal{A}^*\mathcal{A}+\beta I)^{-1}\big(\mathcal{A}^*b+\beta(X^{k+1}+Y^{k+1})+\Lambda^{k+\frac{1}{2}}\big). \tag{14}
$$
A short code sketch of the three updates (11), (13), and (14) is given below.
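The following Python sketch (our illustration; variable names are assumptions, not the authors' code) implements the singular-value thresholding step (11), the entrywise soft-thresholding (13), and, for the special case where $\mathcal{A}$ is the identity, the Z-update (14):

```python
import numpy as np

def update_X_inner(Bk, Nt, Tt, lam1, beta, delta):
    # Inner X-update (11): shrink the singular values of
    # (beta*B^k + delta*N^t - T^t) / (beta + delta)
    # by the threshold lam1 / ((beta + delta) * ||N^t||_F).
    G = (beta * Bk + delta * Nt - Tt) / (beta + delta)
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    thresh = lam1 / ((beta + delta) * np.linalg.norm(Nt, 'fro'))
    return U @ np.diag(np.maximum(s - thresh, 0.0)) @ Vt

def update_Y(Xk1, Zk, Lam, lam2, beta):
    # Y-update (13): soft-threshold Z^k - X^{k+1} - Lam/beta entrywise.
    V = Zk - Xk1 - Lam / beta
    return np.sign(V) * np.maximum(np.abs(V) - lam2 / beta, 0.0)

def update_Z_identity_operator(Xk1, Yk1, Lam_half, b, beta):
    # Z-update (14) when A is the identity: (A*A + beta*I)^{-1} reduces
    # to division by (1 + beta).
    return (b + beta * (Xk1 + Yk1) + Lam_half) / (1.0 + beta)
```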
The specific algorithm implementation process is summarized in Algorithm 1.
Algorithm 1 SADMM for solving (7)
1:  Input: $\mathcal{A}$, $b$, $\lambda_1$, $\lambda_2$, $\beta$, $\delta$, $Tol$, $k_{\max}$ and $t_{\max}$.
2:  Initialization: $X^0$, $N^0$, $T^0$, $Y^0$, $Z^0$, $\Lambda^0$ and $k, t = 0$.
3:  while $\max\{\|X^{k+1}-X^k\|_F/\|X^k\|_F,\ \|Y^{k+1}-Y^k\|_F/\|Y^k\|_F\} > Tol$ and $k \le k_{\max}$ do
4:      while $\|X^{t+1}-X^t\|_F/\|X^t\|_F > 10^{-5}$ and $t \le t_{\max}$ do
5:          Update $X^{t+1}$ via (11).
6:          Update $N^{t+1}$ via (12).
7:          Update $T^{t+1}$ via (10c).
8:          Let $t = t+1$.
9:      end while
10:     Set $X^{k+1} = X^t$.
11:     Update $Y^{k+1}$ via (13).
12:     Update $\Lambda^{k+\frac{1}{2}}$ via (8c).
13:     Update $Z^{k+1}$ via (14).
14:     Update $\Lambda^{k+1}$ via (8e).
15:     Let $k = k+1$ and $t = 0$.
16: end while
17: return $X^* = X^k$ and $Y^* = Y^k$.
To help readers follow the overall process, the summarized algorithm block diagram is given in Figure 1.

3. Convergence

This section mainly analyzes the convergence of the proposed algorithm. First, we provide the necessary definitions and concepts.

3.1. Preliminaries

For an extended-real-valued function g, the domain of g is defined as
$$\operatorname{dom} g:=\{x\in\mathbb{R}^n \mid g(x)<+\infty\}.$$
A function $g$ is closed if it is lower semicontinuous, and proper if $\operatorname{dom} g\neq\emptyset$ and $g(x)>-\infty$ for any $x\in\operatorname{dom} g$. For any point $x\in\mathbb{R}^n$ and subset $S\subseteq\mathbb{R}^n$, the Euclidean distance from $x$ to $S$ is defined by
$$\operatorname{dist}(x,S):=\inf\{\|y-x\| \mid y\in S\},$$
where $\|\cdot\|$ is the $\ell_2$ norm (the square root of the sum of squares of all vector entries). For a proper and closed function $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$, a vector $u\in\partial g(x)$ is a subgradient of $g$ at $x\in\operatorname{dom} g$, where $\partial g$ denotes the subdifferential of $g$ [29] defined by
$$\partial g(x):=\big\{u\in\mathbb{R}^n \mid \exists\, x^k\to x,\ u^k\in\hat{\partial} g(x^k),\ u^k\to u \text{ with } g(x^k)\to g(x)\big\}$$
with $\hat{\partial} g(x)$ being the set of regular subgradients of $g$ at $x$:
$$\hat{\partial} g(x):=\big\{u\in\mathbb{R}^n \mid g(y)\ge g(x)+\langle u,\,y-x\rangle+o(\|y-x\|)\ \ \forall\, y\in\mathbb{R}^n\big\}.$$
Note that if $f:\mathbb{R}^n\to\mathbb{R}$ is continuously differentiable and $g:\mathbb{R}^n\to\mathbb{R}\cup\{+\infty\}$ is proper and lower semicontinuous, it follows from [29] that $\partial(f+g)=\nabla f+\partial g$. A point $x^*$ is called a (limiting) critical point or stationary point of $F$ if it satisfies $0\in\partial F(x^*)$, and the set of critical points of $F$ is denoted by $\operatorname{crit} F$.
Definition 1. 
We say that $(X^*, Y^*, Z^*, \Lambda^*)$ is a critical point of the augmented Lagrangian function $\mathcal{L}_\beta(\cdot)$ if it satisfies
$$
\begin{aligned}
0 &\in \lambda_1\,\partial\Big(\frac{\|X^*\|_*}{\|X^*\|_F}\Big)+\Lambda^*,\\
0 &\in \lambda_2\,\partial\big(\|Y^*\|_1\big)+\Lambda^*,\\
0 &= \mathcal{A}^*\big(\mathcal{A}(Z^*)-b\big)-\Lambda^*,\\
0 &= X^*+Y^*-Z^*.
\end{aligned}
$$
It is evident that a critical point of the augmented Lagrangian function $\mathcal{L}_\beta(\cdot)$ is precisely a KKT point of (7). In this paper, we assume that (7) has at least one KKT point.

3.2. Convergence

We now analyze the convergence. Based on the iteration scheme (8a)–(8e), we first establish the following first-order optimality conditions for the subproblems in Algorithm 1.
Lemma 1. 
Let $\{(X^k, Y^k, Z^k, \Lambda^k)\}$ be the sequence generated by Algorithm 1. Then we have
$$
\begin{aligned}
0 &\in \lambda_1\,\partial\Big(\frac{\|X^{k+1}\|_*}{\|X^{k+1}\|_F}\Big)+\Lambda^{k+1}+\frac{\beta}{\alpha+1}(Z^{k+1}-Z^k)-\beta(Y^{k+1}-Y^k)-\frac{\alpha}{\alpha+1}(\Lambda^{k+1}-\Lambda^k),\\
0 &\in \lambda_2\,\partial\big(\|Y^{k+1}\|_1\big)+\Lambda^{k+1}+\frac{\beta}{\alpha+1}(Z^{k+1}-Z^k)-\frac{\alpha}{\alpha+1}(\Lambda^{k+1}-\Lambda^k),\\
0 &= \mathcal{A}^*\big(\mathcal{A}(Z^{k+1})-b\big)-\Lambda^{k+1},\\
\Lambda^{k+1} &= \Lambda^k+\beta\big[(\alpha+1)(X^{k+1}+Y^{k+1}-Z^k)-(Z^{k+1}-Z^k)\big].
\end{aligned} \tag{15}
$$
Proof. 
Adding (8c) and (8e), we get
$$
\begin{aligned}
\Lambda^{k+1}-\Lambda^k &= \alpha\beta(X^{k+1}+Y^{k+1}-Z^k)+\beta(X^{k+1}+Y^{k+1}-Z^{k+1})\\
&= \alpha\beta(X^{k+1}+Y^k-Z^k)+\alpha\beta(Y^{k+1}-Y^k)+\beta(X^{k+1}+Y^k-Z^k)+\beta(Y^{k+1}-Y^k)-\beta(Z^{k+1}-Z^k)\\
&= (\alpha+1)\beta(X^{k+1}+Y^k-Z^k)+(\alpha+1)\beta(Y^{k+1}-Y^k)-\beta(Z^{k+1}-Z^k),
\end{aligned}
$$
and thus
$$
X^{k+1}+Y^k-Z^k=\frac{1}{(\alpha+1)\beta}(\Lambda^{k+1}-\Lambda^k)-(Y^{k+1}-Y^k)+\frac{1}{\alpha+1}(Z^{k+1}-Z^k).
$$
Based on the optimality condition of (8a),
$$
\begin{aligned}
0 &\in \lambda_1\,\partial\Big(\frac{\|X^{k+1}\|_*}{\|X^{k+1}\|_F}\Big)+\Lambda^k+\beta(X^{k+1}+Y^k-Z^k)\\
&= \lambda_1\,\partial\Big(\frac{\|X^{k+1}\|_*}{\|X^{k+1}\|_F}\Big)+\Lambda^{k+1}-(\Lambda^{k+1}-\Lambda^k)+\beta(X^{k+1}+Y^k-Z^k)\\
&= \lambda_1\,\partial\Big(\frac{\|X^{k+1}\|_*}{\|X^{k+1}\|_F}\Big)+\Lambda^{k+1}+\frac{\beta}{\alpha+1}(Z^{k+1}-Z^k)-\beta(Y^{k+1}-Y^k)-\frac{\alpha}{\alpha+1}(\Lambda^{k+1}-\Lambda^k).
\end{aligned}
$$
Similarly,
$$
\begin{aligned}
\Lambda^{k+1}-\Lambda^k &= \alpha\beta(X^{k+1}+Y^{k+1}-Z^k)+\beta(X^{k+1}+Y^{k+1}-Z^{k+1})\\
&= (\alpha+1)\beta(X^{k+1}+Y^{k+1}-Z^k)-\beta(Z^{k+1}-Z^k).
\end{aligned}
$$
Therefore,
$$
X^{k+1}+Y^{k+1}-Z^k=\frac{1}{(\alpha+1)\beta}(\Lambda^{k+1}-\Lambda^k)+\frac{1}{\alpha+1}(Z^{k+1}-Z^k). \tag{16}
$$
By the optimality condition of (8b),
$$
\begin{aligned}
0 &\in \lambda_2\,\partial\big(\|Y^{k+1}\|_1\big)+\Lambda^k+\beta(X^{k+1}+Y^{k+1}-Z^k)\\
&= \lambda_2\,\partial\big(\|Y^{k+1}\|_1\big)+\Lambda^{k+1}-(\Lambda^{k+1}-\Lambda^k)+\beta(X^{k+1}+Y^{k+1}-Z^k).
\end{aligned}
$$
Substituting (16) into the above inclusion, we get
$$
0 \in \lambda_2\,\partial\big(\|Y^{k+1}\|_1\big)+\Lambda^{k+1}+\frac{\beta}{\alpha+1}(Z^{k+1}-Z^k)-\frac{\alpha}{\alpha+1}(\Lambda^{k+1}-\Lambda^k).
$$
The optimality condition of (8d) also gives
$$
\mathcal{A}^*\big(\mathcal{A}(Z^{k+1})-b\big)-\Lambda^{k+\frac{1}{2}}-\beta(X^{k+1}+Y^{k+1}-Z^{k+1})=0.
$$
And then from (8e),
$$
\mathcal{A}^*\big(\mathcal{A}(Z^{k+1})-b\big)-\Lambda^{k+1}=0. \tag{17}
$$
In addition,
$$
\begin{aligned}
\Lambda^{k+1} &= \Lambda^{k+\frac{1}{2}}+\beta(X^{k+1}+Y^{k+1}-Z^{k+1})\\
&= \Lambda^k+\alpha\beta(X^{k+1}+Y^{k+1}-Z^k)+\beta(X^{k+1}+Y^{k+1}-Z^{k+1})\\
&= \Lambda^k+\beta\big[(\alpha+1)(X^{k+1}+Y^{k+1}-Z^k)-(Z^{k+1}-Z^k)\big].
\end{aligned}
$$
Therefore, (15) holds. This completes the proof. □
Lemma 2. 
Let $\{W^k := (X^k, Y^k, Z^k, \Lambda^k)\}$ be the sequence generated by Algorithm 1. For any $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$ with $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$, the sequence $\{\mathcal{L}_\beta(W^k)\}$ is decreasing, and
$$
\mathcal{L}_\beta(W^k)-\mathcal{L}_\beta(W^{k+1}) \ge c\,\|Z^{k+1}-Z^k\|_F^2+\frac{\beta}{2}\|Y^{k+1}-Y^k\|_F^2,
$$
where $c=\Big(\frac{1}{\alpha+1}-\frac{1}{2}\Big)\beta-\frac{L}{2}-\frac{L^2}{(\alpha+1)\beta}>0$.
Proof. 
From (8a),
$$
\mathcal{L}_\beta(X^k, Y^k, Z^k, \Lambda^k)-\mathcal{L}_\beta(X^{k+1}, Y^k, Z^k, \Lambda^k) \ge 0. \tag{18}
$$
Note that $\mathcal{L}_\beta(X^{k+1}, Y, Z^k, \Lambda^k)$ is strongly convex in $Y$ with modulus at least $\beta$, so
$$
\mathcal{L}_\beta(X^{k+1}, Y^k, Z^k, \Lambda^k)-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^k) \ge \frac{\beta}{2}\|Y^{k+1}-Y^k\|_F^2. \tag{19}
$$
Moreover,
$$
\begin{aligned}
&\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^{k+\frac{1}{2}})-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^{k+1}, \Lambda^{k+\frac{1}{2}})\\
&\quad= \langle\Lambda^{k+\frac{1}{2}},\,Z^{k+1}-Z^k\rangle+\frac{\beta}{2}\|X^{k+1}+Y^{k+1}-Z^k\|_F^2+\frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2-\frac{1}{2}\|\mathcal{A}(Z^{k+1})-b\|_F^2-\frac{\beta}{2}\|X^{k+1}+Y^{k+1}-Z^{k+1}\|_F^2\\
&\quad= \frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2-\frac{1}{2}\|\mathcal{A}(Z^{k+1})-b\|_F^2-\beta\langle X^{k+1}+Y^{k+1}-Z^k,\,Z^{k+1}-Z^k\rangle\\
&\qquad+\big\langle\Lambda^k+(\alpha+1)\beta(X^{k+1}+Y^{k+1}-Z^k),\,Z^{k+1}-Z^k\big\rangle+\frac{\beta}{2}\|X^{k+1}+Y^{k+1}-Z^k\|_F^2-\frac{\beta}{2}\|X^{k+1}+Y^{k+1}-Z^{k+1}\|_F^2\\
&\quad= \frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2-\frac{1}{2}\|\mathcal{A}(Z^{k+1})-b\|_F^2-\frac{\beta}{2}\|Z^{k+1}-Z^k\|_F^2+\big\langle\Lambda^k+(\alpha+1)\beta(X^{k+1}+Y^{k+1}-Z^k),\,Z^{k+1}-Z^k\big\rangle\\
&\quad= \frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2-\frac{1}{2}\|\mathcal{A}(Z^{k+1})-b\|_F^2-\frac{\beta}{2}\|Z^{k+1}-Z^k\|_F^2+\big\langle\Lambda^k+(\Lambda^{k+1}-\Lambda^k)+\beta(Z^{k+1}-Z^k),\,Z^{k+1}-Z^k\big\rangle\\
&\quad= \frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2-\frac{1}{2}\|\mathcal{A}(Z^{k+1})-b\|_F^2+\langle\Lambda^{k+1},\,Z^{k+1}-Z^k\rangle+\frac{\beta}{2}\|Z^{k+1}-Z^k\|_F^2\\
&\quad\ge \frac{\beta-L}{2}\|Z^{k+1}-Z^k\|_F^2, \tag{20}
\end{aligned}
$$
where the fourth equality follows from (16), and the last inequality follows from [30], Lemma 1. Next, by using the definition of $\mathcal{L}_\beta(\cdot)$ and (8c),
$$
\begin{aligned}
&\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^k)-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^{k+\frac{1}{2}})+\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^{k+1}, \Lambda^{k+\frac{1}{2}})-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^{k+1}, \Lambda^{k+1})\\
&\quad= \langle\Lambda^k-\Lambda^{k+\frac{1}{2}},\,X^{k+1}+Y^{k+1}-Z^k\rangle+\langle\Lambda^{k+\frac{1}{2}}-\Lambda^{k+1},\,X^{k+1}+Y^{k+1}-Z^{k+1}\rangle\\
&\quad= \langle\Lambda^k-\Lambda^{k+1},\,X^{k+1}+Y^{k+1}-Z^{k+1}\rangle-\alpha\beta\,\langle X^{k+1}+Y^{k+1}-Z^k,\,Z^{k+1}-Z^k\rangle. \tag{21}
\end{aligned}
$$
From (16),
$$
\begin{aligned}
X^{k+1}+Y^{k+1}-Z^{k+1} &= (X^{k+1}+Y^{k+1}-Z^k)-(Z^{k+1}-Z^k)\\
&= \frac{1}{(\alpha+1)\beta}(\Lambda^{k+1}-\Lambda^k)+\Big(\frac{1}{\alpha+1}-1\Big)(Z^{k+1}-Z^k)\\
&= \frac{1}{(\alpha+1)\beta}(\Lambda^{k+1}-\Lambda^k)-\frac{\alpha}{\alpha+1}(Z^{k+1}-Z^k). \tag{22}
\end{aligned}
$$
Substituting (16) and (22) into (21),
$$
\begin{aligned}
&\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^k)-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^k, \Lambda^{k+\frac{1}{2}})+\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^{k+1}, \Lambda^{k+\frac{1}{2}})-\mathcal{L}_\beta(X^{k+1}, Y^{k+1}, Z^{k+1}, \Lambda^{k+1})\\
&\quad= \Big\langle\Lambda^k-\Lambda^{k+1},\,\frac{\Lambda^{k+1}-\Lambda^k}{(\alpha+1)\beta}-\frac{\alpha}{\alpha+1}(Z^{k+1}-Z^k)\Big\rangle-\alpha\beta\,\Big\langle\frac{\Lambda^{k+1}-\Lambda^k}{(\alpha+1)\beta}+\frac{1}{\alpha+1}(Z^{k+1}-Z^k),\,Z^{k+1}-Z^k\Big\rangle\\
&\quad\ge -\Big(\frac{L^2}{(\alpha+1)\beta}+\frac{\alpha\beta}{\alpha+1}\Big)\|Z^{k+1}-Z^k\|_F^2, \tag{23}
\end{aligned}
$$
where the last inequality follows from (17) and $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$.
Summing (18)–(20) and (23), we obtain
$$
\begin{aligned}
\mathcal{L}_\beta(W^k)-\mathcal{L}_\beta(W^{k+1}) &\ge \frac{\beta-L}{2}\|Z^{k+1}-Z^k\|_F^2-\Big(\frac{L^2}{(\alpha+1)\beta}+\frac{\alpha\beta}{\alpha+1}\Big)\|Z^{k+1}-Z^k\|_F^2+\frac{\beta}{2}\|Y^{k+1}-Y^k\|_F^2\\
&= \frac{\beta}{2}\|Y^{k+1}-Y^k\|_F^2+\Big[\Big(\frac{1}{\alpha+1}-\frac{1}{2}\Big)\beta-\frac{L}{2}-\frac{L^2}{(\alpha+1)\beta}\Big]\|Z^{k+1}-Z^k\|_F^2.
\end{aligned}
$$
Since $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$ in Algorithm 1, the coefficient $c$ is positive, and hence the sequence $\{\mathcal{L}_\beta(W^k)\}$ is decreasing. This completes the proof. □
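For completeness, here is a short check (our verification, not part of the original proof) that the parameter choice indeed makes $c>0$: since $\beta>\frac{2L}{1-\alpha}$ implies $\frac{(1-\alpha)\beta}{2(\alpha+1)}>\frac{L}{\alpha+1}$ and $\frac{L^2}{(\alpha+1)\beta}<\frac{(1-\alpha)L}{2(\alpha+1)}$,
$$
c=\Big(\frac{1}{\alpha+1}-\frac{1}{2}\Big)\beta-\frac{L}{2}-\frac{L^2}{(\alpha+1)\beta}
 =\frac{(1-\alpha)\beta}{2(\alpha+1)}-\frac{L}{2}-\frac{L^2}{(\alpha+1)\beta}
 >\frac{L}{\alpha+1}-\frac{L}{2}-\frac{(1-\alpha)L}{2(\alpha+1)}=0.
$$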
Lemma 3. 
Let $\{W^k\}$ be the sequence generated by Algorithm 1. If $\{X^k\}$ is bounded, then for any $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$ with $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$, the whole sequence $\{W^k\}$ is bounded.
Proof. 
From Lemma 2,
$$
\begin{aligned}
\mathcal{L}_\beta(W^1) \ge \mathcal{L}_\beta(W^k) &= \lambda_1\frac{\|X^k\|_*}{\|X^k\|_F}+\lambda_2\|Y^k\|_1+\frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2+\langle\Lambda^k,\,X^k+Y^k-Z^k\rangle+\frac{\beta}{2}\|X^k+Y^k-Z^k\|_F^2\\
&= \lambda_1\frac{\|X^k\|_*}{\|X^k\|_F}+\lambda_2\|Y^k\|_1+\frac{1}{2}\|\mathcal{A}(Z^k)-b\|_F^2+\big\langle\mathcal{A}^*\big(\mathcal{A}(Z^k)-b\big),\,X^k+Y^k-Z^k\big\rangle+\frac{\beta}{2}\|X^k+Y^k-Z^k\|_F^2\\
&\ge \lambda_1\frac{\|X^k\|_*}{\|X^k\|_F}+\lambda_2\|Y^k\|_1+\frac{1}{2}\|\mathcal{A}(X^k+Y^k)-b\|_F^2+\frac{\beta-L}{2}\|X^k+Y^k-Z^k\|_F^2, \tag{24}
\end{aligned}
$$
where the second equality follows from (17) and the last inequality from [30], Lemma 1. Since $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$, we can easily see that $\beta>L$. By the boundedness of $\{X^k\}$ and the nonnegativity of every term in the last line of (24), we find that $\{Y^k\}$, $\{X^k+Y^k\}$, and $\{X^k+Y^k-Z^k\}$ are bounded. According to the triangle inequality $\|Z^k\|_F \le \|X^k+Y^k\|_F+\|X^k+Y^k-Z^k\|_F$, we find that $\{Z^k\}$ is also bounded. In addition, since $\mathcal{A}$ is a bounded linear operator and $\{Z^k\}$ is bounded, we have
$$
\|\Lambda^k\|_F = \big\|\mathcal{A}^*\big(\mathcal{A}(Z^k)-b\big)\big\|_F,
$$
which implies that $\{\Lambda^k\}$ is also bounded. Hence, $\{(X^k, Y^k, Z^k, \Lambda^k)\}$ is bounded. This completes the proof. □
Remark 1. 
The boundedness of $\{X^k\}$ is required throughout the subsequent analysis. In image processing, the elements of $X$ represent pixel values (in $[0,255]$), making the boundedness assumption on $X$ reasonable. Some studies [20,26,27] even directly constrain $X$ within a specific box to replace the boundedness assumption.
Lemma 4. 
Let $\{W^k\}$ be the sequence generated by Algorithm 1. If $\{X^k\}$ is bounded, then for any $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$ with $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$, there exists a positive constant $m$ such that
$$
\operatorname{dist}\big(0,\,\partial\mathcal{L}_\beta(W^{k+1})\big)^2 \le m\big(\|Z^{k+1}-Z^k\|_F^2+\|Y^{k+1}-Y^k\|_F^2\big). \tag{25}
$$
Proof. 
Define
$$
\begin{aligned}
w_X^{k+1} &:= (\Lambda^{k+1}-\Lambda^k)+\beta(Y^{k+1}-Y^k)-\beta(Z^{k+1}-Z^k),\\
w_Y^{k+1} &:= (\Lambda^{k+1}-\Lambda^k)-\beta(Z^{k+1}-Z^k),\\
w_Z^{k+1} &:= -\frac{1}{\alpha+1}(\Lambda^{k+1}-\Lambda^k)+\frac{\alpha\beta}{\alpha+1}(Z^{k+1}-Z^k),\\
w_\Lambda^{k+1} &:= \frac{1}{(\alpha+1)\beta}(\Lambda^{k+1}-\Lambda^k)-\frac{\alpha}{\alpha+1}(Z^{k+1}-Z^k).
\end{aligned}
$$
From the definition of $\mathcal{L}_\beta(\cdot)$,
$$
\begin{aligned}
\partial_X\mathcal{L}_\beta(W^{k+1}) &= \lambda_1\,\partial\Big(\frac{\|X^{k+1}\|_*}{\|X^{k+1}\|_F}\Big)+\Lambda^{k+1}+\beta(X^{k+1}+Y^{k+1}-Z^{k+1}),\\
\partial_Y\mathcal{L}_\beta(W^{k+1}) &= \lambda_2\,\partial\big(\|Y^{k+1}\|_1\big)+\Lambda^{k+1}+\beta(X^{k+1}+Y^{k+1}-Z^{k+1}),\\
\nabla_Z\mathcal{L}_\beta(W^{k+1}) &= \mathcal{A}^*\big(\mathcal{A}(Z^{k+1})-b\big)-\Lambda^{k+1}-\beta(X^{k+1}+Y^{k+1}-Z^{k+1}),\\
\nabla_\Lambda\mathcal{L}_\beta(W^{k+1}) &= X^{k+1}+Y^{k+1}-Z^{k+1}.
\end{aligned}
$$
Then it follows from (15) and (22) that $(w_X^{k+1}, w_Y^{k+1}, w_Z^{k+1}, w_\Lambda^{k+1})\in\partial\mathcal{L}_\beta(W^{k+1})$, and
$$
\begin{aligned}
\big\|(w_X^{k+1}, w_Y^{k+1}, w_Z^{k+1}, w_\Lambda^{k+1})\big\|_F^2 &\le \|w_X^{k+1}\|_F^2+\|w_Y^{k+1}\|_F^2+\|w_Z^{k+1}\|_F^2+\|w_\Lambda^{k+1}\|_F^2\\
&\le 4\beta^2\|Y^{k+1}-Y^k\|_F^2+\Big(6+\frac{2}{(\alpha+1)^2}+\frac{2}{(\alpha+1)^2\beta^2}\Big)\|\Lambda^{k+1}-\Lambda^k\|_F^2\\
&\quad+\Big(2\beta^2+\frac{2\alpha^2\beta^2}{(\alpha+1)^2}+\frac{2\alpha^2}{(\alpha+1)^2}\Big)\|Z^{k+1}-Z^k\|_F^2.
\end{aligned}
$$
Again, from (17), we set
$$
m := \max\Big\{\Big(6+\frac{2}{(\alpha+1)^2}+\frac{2}{(\alpha+1)^2\beta^2}\Big)L^2+2\beta^2+\frac{2\alpha^2\beta^2}{(\alpha+1)^2}+\frac{2\alpha^2}{(\alpha+1)^2},\ 4\beta^2\Big\},
$$
which implies that (25) holds. This completes the proof. □
Lemma 5. 
Let $\{W^k\}$ be the sequence generated by Algorithm 1. If $\{X^k\}$ is bounded, then for any $\alpha\in(-1,1)$ and $\beta>\frac{2}{1-\alpha}L$ with $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$, we have
$$
\lim_{k\to\infty}\big(\|X^{k+1}-X^k\|_F+\|Y^{k+1}-Y^k\|_F+\|Z^{k+1}-Z^k\|_F+\|\Lambda^{k+1}-\Lambda^k\|_F\big)=0.
$$
Proof. 
From the result of Lemma 2,
$$
\mathcal{L}_\beta(W^{k+1}) \le \mathcal{L}_\beta(W^1)-c\sum_{j=1}^{k}\|Z^{j+1}-Z^j\|_F^2-\frac{\beta}{2}\sum_{j=1}^{k}\|Y^{j+1}-Y^j\|_F^2.
$$
From (24), we have $\mathcal{L}_\beta(W^{k+1})\ge 0$. Letting $k\to\infty$, we obtain $\sum_{j=1}^{\infty}\|Z^{j+1}-Z^j\|_F^2<\infty$ and $\sum_{j=1}^{\infty}\|Y^{j+1}-Y^j\|_F^2<\infty$, which implies that
$$
\lim_{k\to\infty}\|Z^{k+1}-Z^k\|_F=0 \quad\text{and}\quad \lim_{k\to\infty}\|Y^{k+1}-Y^k\|_F=0.
$$
According to (17), we have $\lim_{k\to\infty}\|\Lambda^{k+1}-\Lambda^k\|_F=0$. Moreover, from (22), we have $\lim_{k\to\infty}\|X^{k+1}-X^k\|_F=0$. This completes the proof. □
By utilizing the Kurdyka–Łojasiewicz (KL) property [30,31] together with the preceding results, we can ensure the convergence of the generated sequence to a critical point. The following theorems summarize these results; the proofs are similar to the techniques in references [28,30,31,32,33,34,35] and are omitted here due to limited space.
Theorem 1. 
Let $\{W^k\}$ be the sequence generated by Algorithm 1 with $\alpha\in(-1,1)$, $\beta>\frac{2}{1-\alpha}L$, and $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$. If $\{X^k\}$ is bounded, then any cluster point $W^* := (X^*, Y^*, Z^*, \Lambda^*)$ of the sequence $\{W^k\}$ is a critical point of (7).
Theorem 2. 
Let $\{W^k\}$ be the sequence generated by Algorithm 1. If $\beta>\frac{2}{1-\alpha}L$, $\alpha\in(-1,1)$, $L=\sigma_{\max}(\mathcal{A}^*\mathcal{A})$, and $\{X^k\}$ is bounded, then $\{W^k\}$ has finite length, i.e.,
$$
\sum_{k=1}^{\infty}\|W^{k+1}-W^k\|_F<\infty,
$$
and hence $\{W^k\}$ converges globally to a critical point of (7).

4. Numerical Results

We apply the proposed SADMM to principal component analysis experiments on synthetic data, image recovery, and foreground–background separation of surveillance videos to validate its effectiveness. For these experiments, we focus on the following RPCA problem:
$$
\min_{X,Y,Z}\ \lambda_1\frac{\|X\|_*}{\|X\|_F}+\lambda_2\|Y\|_1+\frac{1}{2}\|\mathcal{P}_\Omega(Z)-\mathcal{P}_\Omega(M)\|_F^2 \qquad \text{s.t.}\ X+Y=Z.
$$
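For this choice $\mathcal{A}=\mathcal{P}_\Omega$, the operator $\mathcal{A}^*\mathcal{A}$ acts entrywise, so the Z-update (14) reduces to an elementwise division. A small sketch of that specialization (our illustration under this assumption):

```python
import numpy as np

def update_Z_partial_observation(Xk1, Yk1, Lam_half, M, mask, beta):
    """Z-update (14) when A = P_Omega (mask is True on observed entries).

    With Q = X^{k+1} + Y^{k+1} + Lam_half / beta:
      on Omega:  Z_ij = (M_ij + beta * Q_ij) / (1 + beta)
      off Omega: Z_ij = Q_ij
    """
    Q = Xk1 + Yk1 + Lam_half / beta
    return np.where(mask, (M + beta * Q) / (1.0 + beta), Q)
```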
All code was written and run in MATLAB 2021a on a Windows 10 laptop equipped with an Intel(R) Core(TM) i7-1165G7 processor (2.80 GHz) and 16 GB of RAM. All numerical results are averages over 10 random trials.

4.1. Synthetic Data

The purpose of this subsection is to compare the numerical performance of the classical ADMM ($\alpha=0$) and the proposed SADMM under the same experimental conditions. We mainly tested the effect of different values of $\alpha\in(-1,1)$ on the numerical results.
Let $M=X+Y$ be the observed matrix, where $X$ and $Y$ represent the low-rank and sparse components to be recovered, respectively, and the dimension of $M$ is $m\times n$. We generate the rank-$r$ matrix $X$ via $X=M_LM_R$, where $M_L\in\mathbb{R}^{m\times r}$ and $M_R\in\mathbb{R}^{r\times n}$ are randomly generated using the MATLAB 2021a command rand. The low rank of $X$ is controlled by the rank ratio $rr$ (i.e., $rr=r/m$), the sparsity of $Y$ is controlled by $spr$ (i.e., $\|Y\|_0/(mn)$), and $sr$ is the ratio of sampled (observed) entries (i.e., $|\Omega|/(mn)$, where $|\Omega|$ is the cardinality of $\Omega$). Based on $spr$, we randomly selected nonzero positions and set their elements to random numbers within the range $[-500,500]$ to construct the sparse matrix $Y$. We started the process with the initial values $(X^0,Y^0,Z^0)=(0,0,0)$. We set the experimental parameters as $m=n=300$, $\lambda_1=100$, $\lambda_2=10$, $\beta=\delta=0.05$, $rr=spr=0.05$, $sr=0.9$, and the maximum outer and inner iteration numbers as $k_{\max}=500$ and $t_{\max}=5$, respectively. The stopping tolerance $Tol$ is set to $10^{-5}$ in this subsection.
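A small NumPy sketch of this data-generation procedure (our translation of the described MATLAB setup; the solver itself is not included):

```python
import numpy as np

def make_synthetic_rpca_data(m=300, n=300, rr=0.05, spr=0.05, sr=0.9, seed=0):
    rng = np.random.default_rng(seed)
    r = int(rr * m)
    # Low-rank component: product of two random factors, so rank(X) <= r.
    X = rng.random((m, r)) @ rng.random((r, n))
    # Sparse component: spr*m*n entries drawn uniformly from [-500, 500].
    Y = np.zeros((m, n))
    idx = rng.choice(m * n, size=int(spr * m * n), replace=False)
    Y.flat[idx] = rng.uniform(-500, 500, size=idx.size)
    # Observation mask with sampling ratio sr.
    mask = rng.random((m, n)) < sr
    return X, Y, X + Y, mask

X_true, Y_true, M, mask = make_synthetic_rpca_data()
```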
Table 2 presents the CPU time (Time), the number of iterations (Iter.), and the relative error (RE $:=\|X^*-X\|_F/\|X\|_F$, where $X^*$ is the recovered matrix) for different values of $\alpha$. The numerical results reveal that $\alpha=0.5$ delivers the best performance while maintaining a relatively stable relative error.
We further investigated the convergence properties of the SADMM. Figure 2 shows the evolution of the relative error and of the rank of the low-rank matrix $X$ as the number of iterations increases, for $\alpha=-0.5, 0, 0.5, 0.9$. From the left side of Figure 2, it is evident that for $\alpha=0.5$ the SADMM converges faster than the traditional ADMM. From the right side of Figure 2, we can conclude that, during the first few iterations, the rank changes for $\alpha=0.5$ are more pronounced than for $\alpha=0$; however, $\alpha=0.5$ reaches the true low rank more quickly, and the low-rank characteristic remains stable throughout the iteration process.
Based on the above results, the SADMM achieves the best performance when $\alpha=0.5$. To further evaluate the advantage of the proposed model, we solve it with the SADMM with $\alpha=0.5$ (the optimal value) and apply it to the subsequent image processing and video surveillance experiments.

4.2. Image Recovery

In this subsection, we mainly evaluate the effectiveness of the SADMM through the recovery of real images. In practical applications, the original image is often not exactly low-rank, but its essential information is usually dominated by a few large singular values, so it can be approximately regarded as a low-rank matrix. The observed image is often damaged or obscured, resulting in the loss of some elements; that is, the observed matrix $M$ can be approximately regarded as the sum of a low-rank matrix $X$ and a sparse noise matrix $Y$. The purpose of this subsection is therefore to recover the low-rank part from a partially damaged image, i.e., the matrix completion problem. We compare the SADMM with three other classical algorithms for matrix completion: the Accelerated Iteratively Reweighted Nuclear Norm algorithm (AIRNN) [36], the Singular Value Thresholding algorithm (SVT) [37], and the Schatten Capped $p$ norm minimization method (SCp) [16]. In this experiment, the parameters in the SADMM were set as $\lambda_1=50$, $\lambda_2=5$, $\beta=0.5$, $\delta=0.05$, $k_{\max}=120$, and $t_{\max}=5$. The parameters in AIRNN, SVT, and SCp were all set to the optimal values used in the corresponding literature, and $Tol=10^{-5}$.
Figure 3 presents four image samples captured by our team, all color images: “Tower ($256\times256$)”, “Sky ($512\times512$)”, “Spillikins ($512\times512$)”, and “Texture ($512\times512$)”. We simulated partial image damage by randomly removing $50\%$ of the real data (for color images, each channel was processed separately and then averaged). Figure 3 also shows the performance of the different algorithms in recovering the damaged images.
To further evaluate the performance of the SADMM and the proposed model, Table 3 lists the numerical results of the four algorithms, AIRNN, SVT, SCp, and SADMM, including the Signal-to-Noise Ratio (SNR) and the relative error (RE) between the original image $X$ and the recovered image $X^*$. The SNR was calculated with the following formula:
$$
\mathrm{SNR} := 10\log_{10}\frac{\|X-\bar{X}\|^2}{\|X-X^*\|^2},
$$
where $\bar{X}$ is the mean of the original data.
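In code, this metric reads as follows (our sketch):

```python
import numpy as np

def snr(X, X_rec):
    # SNR := 10 * log10( ||X - mean(X)||^2 / ||X - X_rec||^2 )
    signal = np.linalg.norm(X - X.mean()) ** 2
    error = np.linalg.norm(X - X_rec) ** 2
    return 10.0 * np.log10(signal / error)
```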
The numerical results in Table 3 show that the SADMM achieved better recovery performance when solving the proposed model. Figure 4 further shows the evolution of the SNR values of the four methods as the number of iterations increases. From Figure 4, it can be observed that, for the same number of iterations, the proposed SADMM achieved better image recovery results than the other three algorithms.

4.3. Foreground and Background Separation in Surveillance Video

In this subsection, we apply the proposed algorithm to the problem of separating the foreground and background in surveillance video, which is a typical RPCA problem. We test the algorithm on two surveillance video datasets, restaurant and shopping mall videos, which can be obtained from https://hqcai.org/datasets/restaurant.mat (accessed on 13 July 2025) and https://hqcai.org/datasets/shoppingmall.mat (accessed on 13 July 2025) of [38]. The details of these videos, including the resolution and the number of frames, are shown in Table 4; each column of the data matrix corresponds to one frame of the video. The index set $\Omega$ of missing information is randomly determined based on the specified sample ratio $sr$. In the following experiment, we set $sr=0.9$, which means that $10\%$ of the information in the test video was randomly removed.
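As an illustration of this setup (ours; the variable name stored inside the .mat files is an assumption), each frame is vectorized into one column of the data matrix, and a random mask keeps a fraction $sr$ of the entries:

```python
import numpy as np
from scipy.io import loadmat

def load_video_matrix(path, key="data"):
    """Load frames from a .mat file and stack them as matrix columns.

    'key' is the variable name inside the .mat file (assumed here); a 3-D
    array of shape (height, width, n_frames) is reshaped so that column k
    is the vectorized k-th frame."""
    frames = loadmat(path)[key]
    if frames.ndim == 3:
        h, w, n = frames.shape
        return frames.reshape(h * w, n)
    return frames

def random_observation_mask(shape, sr=0.9, seed=0):
    # Keep a fraction sr of the entries; the rest are treated as missing.
    rng = np.random.default_rng(seed)
    return rng.random(shape) < sr
```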
In order to further evaluate algorithm performance and model accuracy, we compare the SADMM with the parallel splitting augmented Lagrangian method from [39] and the Optimal Proximal Augmented Lagrangian Method (OPALM) from [40]. For clarity, the dynamic step size and constant step size strategies used in [39] are labeled Alg1d and Alg1c, respectively. The parameter settings of the SADMM are $\lambda_1=1$, $\lambda_2=1/m$, $\beta=0.004/\|M\|_1$, $\delta=0.05$, $k_{\max}=100$, and $t_{\max}=5$. The relevant parameter settings of Alg1d, Alg1c, and OPALM are the same as those in the corresponding literature, and the stopping criterion is $Tol=3\times10^{-2}$.
Figure 5 shows the original frames and the corrupted frames of the 150th and 200th frames in the restaurant surveillance video and the 350th and 400th frames in the shopping mall surveillance video.
Figure 6 shows the results of the different algorithms in separating the foreground and background images from a video sequence. The images displayed in Figure 6 are selected from the 150th and 200th frames of the restaurant video and the 350th and 400th frames of the shopping mall video. Table 5 provides a detailed list of the rank of the low-rank matrix (denoted as rank($X^*$)), the sparsity of the sparse matrix (denoted as $\|Y^*\|_0$), and the relative error (denoted as $\|M-X^*-Y^*\|_F/\|M\|_F$) for the reconstructed matrices obtained by the different methods. According to the data presented in Table 5, at the end of the iteration process the ranks of the low-rank matrices obtained by each algorithm remain consistent, and the differences in the sparsity of the sparse matrices are minimal. In addition, the SADMM yields smaller relative errors on both datasets, further indicating that the fractional regularization model proposed in this paper achieves higher accuracy.

5. Conclusions

This paper proposes a nonconvex fractional regularization model to address the rank approximation problem in robust principal component analysis. A more general symmetric alternating direction method of multipliers is employed to solve it, and its convergence is proved. The numerical results also validate the effectiveness of our algorithm. However, the iterative solution of the X-subproblem imposes certain limitations on the scalability of the algorithm. Future work will explore explicit solutions for the X-subproblem and test the model on larger-scale real-world datasets.

Author Contributions

Formal analysis and writing—original draft preparation, Z.G.; validation, X.Z. and S.Z.; writing—review and editing, S.Z. and Y.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (12471290, 120081), Suqian Sci&Tech Program (M202206), Open Fund of the Key Laboratory of NSLSCS, Ministry of Education and Qing Lan Project.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Candès, E.J.; Li, X.D.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 11. [Google Scholar] [CrossRef]
  2. Huang, Y.; Wang, Z.; Chen, Q.; Chen, W. Robust principal component analysis via truncated L1−2 minimization. In Proceedings of the 2023 International Joint Conference on Neural Networks, Gold Coast, Australia, 18–23 June 2023; pp. 1–9. [Google Scholar] [CrossRef]
  3. Bian, J.T.; Zhao, D.D.; Nie, F.P.; Wang, R.; Li, X.L. Robust and sparse principal component analysis with adaptive loss minimization for feature selection. IEEE T. Neur. Net. Lear. 2022, 35, 3601–3614. [Google Scholar] [CrossRef] [PubMed]
  4. Wang, Q.S.; Han, D.R.; Zhang, W.X. A customized inertial proximal alternating minimization for SVD-free robust principal component analysis. Optimization 2023, 73, 2387–2412. [Google Scholar] [CrossRef]
  5. Shen, Y.; Xu, H.Y.; Liu, X. An alternating minimization method for robust principal component analysis. Optim. Method Softw. 2018, 34, 1251–1276. [Google Scholar] [CrossRef]
  6. Zhuang, S.T.; Wang, Q.W.; Chen, J.F. Dual graph Laplacian RPCA method for face recognition based on anchor points. Symmetry 2025, 17, 691. [Google Scholar] [CrossRef]
  7. Zhang, W.T.; Chen, X.H. Robust discriminative non-negative and symmetric low-rank projection learning for feature extraction. Symmetry 2025, 17, 307. [Google Scholar] [CrossRef]
  8. Wright, J.; Peng, Y.G.; Ma, Y.; Ganesh, A.; Rao, S. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Proceedings of the 23rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 2080–2088. Available online: https://dl.acm.org/doi/abs/10.5555/2984093.2984326 (accessed on 13 July 2025).
  9. Tao, M.; Yuan, X.M. Recovering low-rank and sparse components of matrices from incomplete and noisy observations. SIAM J. Optim. 2011, 21, 57–81. [Google Scholar] [CrossRef]
  10. Fan, J.Q.; Li, R.Z. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  11. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar] [CrossRef] [PubMed]
  12. Zhang, T. Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 2010, 11, 1081–1107. Available online: https://dl.acm.org/doi/10.5555/1756006.1756041 (accessed on 13 July 2025).
  13. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar] [CrossRef]
  14. Friedman, J.H. Fast sparse regression and classification. Int. J. Forecasting 2012, 28, 722–738. [Google Scholar] [CrossRef]
  15. Nie, F.P.; Huang, H.; Ding, C. Low-rank matrix recovery via efficient schatten p-norm minimization. In Proceedings of the 26-th AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012; pp. 655–661. Available online: https://dl.acm.org/doi/10.5555/2900728.2900822 (accessed on 13 July 2025).
  16. Li, X.L.; Zhang, H.Y.; Zhang, R. Matrix completion via non-convex relaxation and adaptive correlation learning. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 1981–1991. [Google Scholar] [CrossRef] [PubMed]
  17. Gao, C.X.; Wang, N.Y.; Yu, Q.; Zhang, Z.H. A feasible nonconvex relaxation approach to feature selection. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 7–11 August 2011; pp. 356–361. [Google Scholar] [CrossRef]
  18. Geman, D.; Yang, C.D. Nonlinear image recovery with half-quadratic regularization. IEEE T. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
  19. Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopy ℓ0-minimization. IEEE T. Med. Imaging 2009, 28, 106–121. [Google Scholar] [CrossRef]
  20. Gao, K.X.; Huang, Z.H.; Guo, L.L. Low-rank matrix recovery problem minimizing a new ratio of two norms approximating the rank function then using an ADMM-type solver with applications. J. Comput. Appl. Math. 2024, 438, 115564. [Google Scholar] [CrossRef]
  21. Rahimi, Y.; Wang, C.; Dong, H.B.; Lou, Y.F. A scale-invariant approach for sparse signal recovery. SIAM J. Sci. Comput. 2019, 41, A3649–A3672. [Google Scholar] [CrossRef]
  22. Wang, C.; Yan, M.; Rahimi, Y.; Lou, Y.F. Accelerated schemes for the L1/L2 minimization. IEEE Trans. Signal Process. 2020, 68, 2660–2669. [Google Scholar] [CrossRef]
  23. Tao, M. Minimization of L1 over L2 for sparse signal recovery with convergence guarantee. SIAM J. Sci. Comput. 2022, 44, A770–A797. [Google Scholar] [CrossRef]
  24. Hurley, N.; Rickard, S. Comparing measures of sparsity. IEEE T. Inform. Theory 2009, 55, 4723–4741. [Google Scholar] [CrossRef]
  25. Yin, P.H.; Esser, E.; Xin, J. Ratio and difference of l1 and l2 norms and sparse representation with coherent dictionaries. Commun. Inf. Syst. 2014, 14, 87–109. [Google Scholar] [CrossRef]
  26. Wang, C.; Tao, M.; Nagy, J.G.; Lou, Y.F. Limited-angle CT reconstruction via the L1/L2 minimization. SIAM J. Imaging Sci. 2021, 14, 749–777. [Google Scholar] [CrossRef]
  27. Wang, C.; Tao, M.; Chuah, C.N.; Nagy, J.G.; Lou, Y.F. Minimizing L1 over L2 norms on the gradient. Inverse Probl. 2022, 38, 065011. [Google Scholar] [CrossRef]
  28. Wu, Z.M.; Li, M.; Wang, D.Z.W.; Han, D.R. A symmetric alternating direction method of multipliers for separable nonconvex minimization problems. Asia. Pac. J. Oper. Res. 2017, 34, 1750030. [Google Scholar] [CrossRef]
  29. Rockafellar, R.T.; Wets, R.J.B. Variational Analysis; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar] [CrossRef]
  30. Bolte, J.; Sabach, S.; Teboulle, M. Proximal alternating linearized minimization for nonconvex and nonsmooth problems. Math. Program. 2014, 146, 459–494. [Google Scholar] [CrossRef]
  31. Attouch, H.; Bolte, J.; Svaiter, B.F. Convergence of descent methods for semi-algebraic and tame problems: Proximal algorithms, forward−backward splitting, and regularized Gauss−Seidel methods. Math. Program. 2013, 137, 91–129. [Google Scholar] [CrossRef]
  32. Guo, K.; Han, D.R.; Wu, T.T. Convergence of alternating direction method for minimizing sum of two nonconvex functions with linear constraints. Int. J. Comput. Math. 2016, 94, 1653–1669. [Google Scholar] [CrossRef]
  33. Ge, Z.L.; Zhang, X.; Wu, Z.M. A fast proximal iteratively reweighted nuclear norm algorithm for nonconvex low-rank matrix minimization problems. Appl. Numer. Math. 2022, 179, 66–86. [Google Scholar] [CrossRef]
  34. Ge, Z.L.; Wu, Z.M.; Zhang, X.; Ni, Q. An extrapolated proximal iteratively reweighted method for nonconvex composite optimization problems. J. Glob. Optim. 2023, 86, 821–844. [Google Scholar] [CrossRef]
  35. Ge, Z.L.; Zhang, S.Y.; Zhang, X.; Cui, Y. A new proximal iteratively reweighted nuclear norm method for nonconvex nonsmooth optimization problems. Mathematics 2025, 13, 2630. [Google Scholar] [CrossRef]
  36. Phan, D.N.; Nguyen, T.N. An accelerated IRNN-Iteratively Reweighted Nuclear Norm algorithm for nonconvex nonsmooth low-rank minimization problems. J. Comput. Appl. Math. 2021, 396, 113602. [Google Scholar] [CrossRef]
  37. Cai, J.F.; Candès, E.J.; Shen, Z.W. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  38. Giampouras, P.; Cai, H.Q.; Vidal, R. Guarantees of a preconditioned subgradient algorithm for overparameterized asymmetric low-rank matrix recovery. arXiv 2025, arXiv:2410.16826. [Google Scholar] [CrossRef]
  39. Jiang, F.; Lu, B.Y.; Wu, Z.M. Revisiting parallel splitting augmented Lagrangian method: Tight convergence and ergodic convergence rate. CSIAM T. Appl. Math. 2024, 5, 884–913. [Google Scholar] [CrossRef]
  40. He, B.S.; Ma, F.; Yuan, X.M. Optimal proximal augmented Lagrangian method and its application to full Jacobian splitting for multi-block separable convex minimization problems. IMA J. Numer. Anal. 2020, 40, 1188–1216. [Google Scholar] [CrossRef]
Figure 1. Summarized algorithm block diagram.
Figure 2. Evolution of the relative error of the low-rank matrix (left) and of the rank estimation (right) for $\alpha=-0.5$, $\alpha=0$, $\alpha=0.5$, and $\alpha=0.9$.
Figure 3. Image restoration results under 50% random pixel loss using different methods. Scene contains Tower, Sky, Spillikins, and Texture. From top to bottom: Original, Damaged, AIRNN, SVT, SCp, SADMM.
Figure 4. Evolution of SNR values as the number of iterations increases.
Figure 5. The 150th and 200th frames in the restaurant surveillance video and the 350th and 400th frames in the shopping mall surveillance video, showing the original frames (top row) and the corrupted frames (bottom row).
Figure 6. Foreground and background separation in surveillance videos separated by the tested algorithms. Columns 1–4 correspond to the results of the Alg1d, Alg1c, OPALM, and SADMM, respectively. From top to bottom: the 150th and 200th frames of the restaurant video and the 350th and 400th frames of the shopping mall video.
Table 1. Nonconvex regularized functions $h:\mathbb{R}_+\to\mathbb{R}$ with parameters $\lambda>0$, $\gamma>0$, and $0<p<1$.

SCAD [10]: $h(t)=\lambda t$ if $t\le\lambda$; $\ \frac{-t^2+2\gamma\lambda t-\lambda^2}{2(\gamma-1)}$ if $\lambda<t\le\gamma\lambda$; $\ \frac{\lambda^2(\gamma+1)}{2}$ if $t>\gamma\lambda$
MCP [11]: $h(t)=\lambda t-\frac{t^2}{2\gamma}$ if $t<\gamma\lambda$; $\ \frac{\gamma\lambda^2}{2}$ if $t\ge\gamma\lambda$
Capped $\ell_1$ [12]: $h(t)=\lambda t$ if $t<\gamma$; $\ \lambda\gamma$ if $t\ge\gamma$
LSP [13]: $h(t)=\lambda\log\big(1+\frac{t}{\gamma}\big)$
Logarithm [14]: $h(t)=\frac{\lambda}{\log(\gamma+1)}\log(\gamma t+1)$
Sp [15]: $h(t)=\lambda t^p$
SCp [16]: $h(t)=\frac{1}{2}(t-\delta)^2+\lambda\gamma^p$ if $t>\gamma$; $\ \frac{1}{2}(t-\delta)^2+\lambda t^p$ if $t\le\gamma$
ETP [17]: $h(t)=\frac{\lambda}{1-\exp(-\gamma)}\big(1-\exp(-\gamma t)\big)$
Geman [18]: $h(t)=\frac{\lambda t}{t+\gamma}$
Laplace [19]: $h(t)=\lambda\big(1-\exp(-\frac{t}{\gamma})\big)$
Table 2. The iteration time and number of iterations corresponding to different α values.

α        −0.5       −0.3       0          0.3        0.5        0.7        0.9
Time     9.61       7.63       6.05       4.88       4.38       5.31       10.63
Iter.    156        114        85         71         63         77         154
RE       3.70e−3    3.40e−3    2.70e−3    2.10e−3    2.02e−3    9.38e−3    1.81e−3
Table 3. Numerical results of different algorithms on various images.

Images       AIRNN (SNR / RE)    SVT (SNR / RE)     SCp (SNR / RE)     SADMM (SNR / RE)
Tower        16.50 / 0.1915      21.98 / 0.1318     13.33 / 0.2003     23.13 / 0.1084
Sky          25.97 / 0.1083      23.94 / 0.1200     29.32 / 0.0831     30.70 / 0.0770
Spillikins   18.16 / 0.1129      16.21 / 0.1402     19.61 / 0.1024     25.13 / 0.0649
Texture      19.19 / 0.1943      18.74 / 0.1978     19.94 / 0.1677     20.62 / 0.1462
Average      19.95 / 0.1517      20.21 / 0.1474     20.55 / 0.1383     24.89 / 0.0991
Table 4. Dataset description: video name, resolution, and number of frames.

Video Name      Resolution    Number of Frames
Restaurant      120 × 160     200
Shopping Mall   256 × 320     400
Table 5. Numerical results for background extraction on surveillance videos.

Method   rank($X^*$) (Restaurant / Mall)   $\|Y^*\|_0$ (Restaurant / Mall)   $\|M-X^*-Y^*\|_F/\|M\|_F$ (Restaurant / Mall)
Alg1d    18 / 20                           551,326 / 2,698,367               8.11e−2 / 3.68e−2
Alg1c    18 / 20                           548,238 / 2,804,602               8.16e−2 / 3.69e−2
OPALM    18 / 20                           526,557 / 2,795,618               8.22e−2 / 3.71e−2
SADMM    18 / 20                           561,197 / 2,818,965               8.10e−2 / 3.66e−2