Article

Low-Rank Tensor Recovery Based on Nonconvex Geman Norm and Total Variation

School of Sports Engineering, Beijing Sport University, Beijing 100084, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(2), 238; https://doi.org/10.3390/electronics14020238
Submission received: 28 November 2024 / Revised: 4 January 2025 / Accepted: 6 January 2025 / Published: 8 January 2025
(This article belongs to the Special Issue Image Fusion and Image Processing)

Abstract

Tensor restoration finds applications in various fields, including data science, image processing, and machine learning, where the global low-rank property is a crucial prior. As the convex relaxation of the tensor rank function, the traditional tensor nuclear norm directly sums all the singular values of a tensor. To account for the variations among singular values, nonconvex regularizations have been proposed to approximate the tensor rank function more effectively, leading to improved recovery performance. In addition, the local characteristics of a tensor can further improve detail recovery. The gradient tensor has recently been explored to capture the smoothness property across tensor dimensions; however, previous studies considered the gradient tensor only within the context of the nuclear norm. To better represent the global low-rank property and local smoothness of tensors simultaneously, we propose a novel regularization, the Tensor-Correlated Total Variation (TCTV), based on the nonconvex Geman norm and total variation. Specifically, the proposed method minimizes the nonconvex Geman norm on the singular values of the gradient tensor. Compared with traditional convex regularizations, it enhances the recovery performance of a low-rank tensor by simultaneously reducing estimation bias, improving approximation accuracy, preserving fine-grained structural details, and maintaining good computational efficiency. Based on the proposed TCTV regularization, we develop the TC-TCTV and TRPCA-TCTV models to solve completion and denoising problems, respectively. The proposed models are solved by the Alternating Direction Method of Multipliers (ADMM), and the complexity and convergence of the algorithm are analyzed. Extensive numerical results on multiple datasets validate the superior recovery performance of our method, even under extreme conditions with high missing rates.

1. Introduction

In the era of big data, high-order data have become ubiquitous across various scientific and engineering fields (such as signal processing [1], computer vision [2], machine learning [3] and data mining [4]). This increasing abundance of high-order data has motivated the widespread use of tensors, which serve as multidimensional extensions of vectors (first-order tensors) and matrices (second-order tensors) for modeling complex data structures. Tensor recovery is a classical inverse problem aimed at reconstructing a tensor $\mathcal{T} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ with certain structural priors from corrupted observations $\mathcal{Y} = \Phi(\mathcal{T})$. Due to inherent limitations in signal acquisition equipment, including sensor sensitivity and photon effects, collected tensor data often exhibit noticeable degradation, which manifests as missing values or corruptions. For observation tensors with massive amounts of information loss, this issue is often addressed using tensor recovery methods based on low-rank tensor priors, leading to the formulation of tensor completion (TC) models [5,6]. In the case of observation tensors contaminated with substantial noise, tensor robust principal component analysis (TRPCA) models [7,8] are commonly employed.
In the field of low-rankness (L) priors for tensors, an unavoidable concern is how to characterize the rank of a tensor. Although a tensor is a higher-order extension of a matrix, the definition of tensor rank remains ambiguous, making it challenging to directly tackle the problem of tensor rank minimization. Currently, there exist numerous definitions of tensor rank based on different tensor decompositions, such as the CP rank [9], Tucker rank [10], multi-rank [11], and tubal rank [12], among others. However, it is NP-hard to directly solve the minimization problem for the CP rank and Tucker rank. The tensor nuclear norm (TNN), widely adopted as a convex surrogate for the tubal rank, is computationally efficient but fails to faithfully approximate the true tensor rank. Extensive studies have shown that nonconvex approximations of the rank function can offer better estimation accuracy than the nuclear norm. To better approximate the tensor rank, various nonconvex alternatives have been proposed, such as the $L_p$ norm [13], the $L_{1/2}$ norm [14], the $L_g$ norm [15] and so on. Researchers are committed to designing more effective regularizers to characterize the low-rankness prior of the underlying tensor, and the corresponding tensor recovery model can be constructed as follows:
$$\min_{\mathcal{X}} \ L(\mathcal{X}), \quad \mathrm{s.t.} \quad \mathcal{X}_0 = \Phi(\mathcal{X}), \qquad (1)$$
where $\mathcal{X}$ denotes the tensor to be recovered, $\mathcal{X}_0$ denotes the observed tensor, $L(\cdot)$ is the regularizer characterizing the low-rankness prior of the tensor, and $\Phi(\cdot)$ is the operator describing the degradation process. Chen et al. [16] proposed a denoising method that integrates the low-rankness prior with a Difference of Gaussian filter to enhance denoising performance. However, the effectiveness of the filter relies on appropriate parameter tuning to balance noise suppression and detail preservation.
To further enhance the accuracy of tensor restoration, researchers have focused on exploring the local features of tensors, thereby introducing the smoothness (S) characteristic. The smoothness prior describes the structural property of a tensor whereby adjacent pixels typically show similar values. For $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$ with t-SVD rank $R$ and the L and S priors, the following inequalities are induced: $R - 1 \leq \mathrm{rank}_{t\text{-}\mathrm{SVD}}(\mathcal{G}_k) \leq R$, where $\mathcal{G}_k$ denotes the gradient tensor along its $k$-th mode. This implies that the low-rank structure of the gradient tensor remains consistent with that of the original tensor, and constraining the low-rank structure of the gradient tensor can indirectly induce the low-rank property of the original one. The smoothness property is often characterized by the low energy of $\mathcal{X}$'s gradient tensor. Since it was proven in [17] that the difference operation changes neither low-rankness nor boundedness, the L and S priors of the gradient tensor can be captured by the specific regularizer we design.
To more effectively characterize the global low-rank properties and local smooth features of tensors, we propose a novel regularization, the Tensor-Correlated Total Variation (TCTV), based on gradient tensors and the nonconvex Geman norm. Meanwhile, based on the TCTV regularizer, we develop TC-TCTV and TRPCA-TCTV models for completion and denoising tasks and solve them using the ADMM framework. Experiments validate the outstanding performance of the proposed models in both completion and denoising tasks. The specific contributions are as follows:
  • We introduce a novel nonconvex regularizer termed the Tensor-Correlated Total Variation (TCTV), which is based on gradient tensors and the nonconvex Geman norm. The TCTV captures both the low-rankness and smoothness priors of the gradient tensor simultaneously, thereby alleviating the inconvenience of choosing a balancing parameter.
  • Building upon the TCTV regularization, we propose two novel models, namely TC-TCTV and TRPCA-TCTV. To solve these models efficiently, we design Alternating Direction Method of Multipliers (ADMM) algorithms, which can obtain closed-form solutions for each variable. Additionally, we test the sensitivity of the parameters and the convergence of the proposed algorithms.
  • Extensive experimental results demonstrate the robust tensor recovery capability of our model. As depicted in Figure 1, the image recovery performance remains impressive even with a missing rate as high as 99.5%.
The remainder of this paper is organized as follows. Section 2 introduces some notations and preliminaries under the high-order t-SVD framework. The tensor completion (TC-TCTV) model and tensor robust principal component analysis (TRPCA-TCTV) model are shown in Section 3. In Section 4, extensive results are presented. We draw some conclusions about our work in Section 5.

2. Preliminaries

2.1. Notations

In this paper, we use the following notations; please refer to [18,19,20,21] for more details. Matrices and tensors are represented by bold capital letters $\mathbf{X} \in \mathbb{R}^{n_1 \times n_2}$ and bold calligraphic letters $\mathcal{X} \in \mathbb{R}^{n_1 \times \cdots \times n_d}$, respectively. Scalars and vectors are denoted by lowercase letters $x$ and bold lowercase letters $\mathbf{x} \in \mathbb{R}^n$. For a $d$-way tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$, $\mathcal{X}(i_1, i_2, \ldots, i_d)$ denotes its $(i_1, i_2, \ldots, i_d)$-th entry, and $\mathcal{X}(:,:,i_3,\ldots,i_d)$ is its $(i_3,\ldots,i_d)$-th face slice, which can also be written as $\mathcal{X}^{(i_3,\ldots,i_d)}$ for brevity.

2.2. High-Order t-Singular-Value Decomposition Framework

Under the t-singular-value decomposition (t-SVD) framework, we introduce the following definitions.
Definition 1 (invertible transform $P$ [21]). For $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times \cdots \times n_d}$ with an invertible transform $P$, its transformed form is defined as follows:
$$\mathcal{X}_P := P(\mathcal{X}) = \mathcal{X} \times_3 U_{n_3} \times_4 \cdots \times_d U_{n_d}, \qquad (2)$$
where $U_{n_j} \in \mathbb{R}^{n_j \times n_j}$ ($j = 3, \ldots, d$) denotes an invertible transform matrix (such as the discrete Fourier transform matrix), and $\times_j$ denotes the mode-$j$ product ($\mathcal{C} = \mathcal{A} \times_j B$ means $\mathcal{C}(\ldots, i_{j-1}, :, i_{j+1}, \ldots) = B \cdot \mathcal{A}(\ldots, i_{j-1}, :, i_{j+1}, \ldots)$). The transform satisfies $P^{-1}(P(\mathcal{X})) = \mathcal{X}$, where $P^{-1}(\mathcal{X}) := \mathcal{X} \times_3 U_{n_3}^{-1} \times_4 \cdots \times_d U_{n_d}^{-1}$.
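To make the mode-$j$ product and the transform $P$ concrete, the following is a minimal NumPy sketch (our own illustration, not code from the paper); the unitary DFT is used as one possible choice of the matrices $U_{n_j}$, and the function names are ours.

```python
import numpy as np

def mode_product(X, U, mode):
    """Mode-j product C = X x_j U: multiply every mode-`mode` fiber of X by U."""
    Xm = np.moveaxis(X, mode, 0)               # bring the chosen mode to the front
    front = Xm.shape
    Y = U @ Xm.reshape(front[0], -1)           # apply U to all mode-j fibers at once
    return np.moveaxis(Y.reshape(front), 0, mode)

def transform_P(X):
    """One instantiation of P: the unitary DFT applied along modes 3, ..., d."""
    Xp = X.astype(complex)
    for m in range(2, X.ndim):                 # zero-based axes 2, ..., d-1
        n = X.shape[m]
        U = np.fft.fft(np.eye(n), axis=0) / np.sqrt(n)   # unitary DFT matrix
        Xp = mode_product(Xp, U, m)
    return Xp
```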
Definition 2 (tensor product [21]). For $\mathcal{A} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ and $\mathcal{B} \in \mathbb{R}^{n_2 \times l \times n_3 \times \cdots \times n_d}$, the tensor product is defined as follows:
$$\mathcal{A} *_P \mathcal{B} = P^{-1}\big(P(\mathcal{A}) \,\triangle\, P(\mathcal{B})\big), \qquad (3)$$
where $\triangle$ denotes the face-wise product ($\mathcal{C} = \mathcal{A} \,\triangle\, \mathcal{B}$ means $\mathcal{C}^{(i_3,\ldots,i_d)} = \mathcal{A}^{(i_3,\ldots,i_d)} \mathcal{B}^{(i_3,\ldots,i_d)}$ for each frontal slice).
Definition 3 (transpose [21]). The transpose of an order-$d$ tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ is denoted as $\mathcal{X}^T \in \mathbb{R}^{n_2 \times n_1 \times n_3 \times \cdots \times n_d}$ and satisfies $\mathcal{X}^T_P(:,:,i_3,\ldots,i_d) = \big(\mathcal{X}_P(:,:,i_3,\ldots,i_d)\big)^T$ for each frontal slice.
Definition 4 (orthogonal tensor [21]). A tensor $\mathcal{X} \in \mathbb{R}^{n \times n \times n_3 \times \cdots \times n_d}$ is orthogonal if $\mathcal{X}^T *_P \mathcal{X} = \mathcal{X} *_P \mathcal{X}^T = \mathcal{I}_n$.
Definition 5 (f-diagonal tensor [21]). A tensor $\mathcal{X} \in \mathbb{R}^{n \times n \times n_3 \times \cdots \times n_d}$ is f-diagonal if all of its frontal slices are diagonal.
Theorem 1 (t-SVD [21]). Any tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ can be factorized by the t-SVD as follows:
$$\mathcal{X} = \mathcal{U} *_P \mathcal{S} *_P \mathcal{V}^T, \qquad (4)$$
where $\mathcal{U} \in \mathbb{R}^{n_1 \times n_1 \times n_3 \times \cdots \times n_d}$ and $\mathcal{V} \in \mathbb{R}^{n_2 \times n_2 \times n_3 \times \cdots \times n_d}$ are both orthogonal and $\mathcal{S} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ is f-diagonal.
Definition 6 (t-SVD rank [21]). For any tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ factorized as $\mathcal{X} = \mathcal{U} *_P \mathcal{S} *_P \mathcal{V}^T$ by the t-SVD, the t-SVD rank is defined as follows:
$$\mathrm{rank}_{t\text{-}\mathrm{SVD}}(\mathcal{X}) := \#\{\, i : \mathcal{S}(i,i,:,\ldots,:) \neq 0 \,\}, \qquad (5)$$
where $\#$ denotes the cardinality of a set, i.e., the number of elements within the set.
Definition 7 (tensor Geman norm [15]). For any tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ with transform $P$, the tensor Geman norm ($L_g$) is defined as follows:
$$\|\mathcal{X}\|_P := \frac{1}{n_3 \cdots n_d} \sum_{i_3=1}^{n_3} \cdots \sum_{i_d=1}^{n_d} \big\| \mathcal{X}_P(:,:,i_3,\ldots,i_d) \big\|_g, \qquad (6)$$
where $\|\cdot\|_g$ denotes the Geman norm of a matrix, defined as follows:
$$\|X\|_g := \lambda \sum_{t=1}^{\min\{p,q\}} \frac{\sigma_t(X)}{\sigma_t(X) + g}, \quad X \in \mathbb{R}^{p \times q}, \qquad (7)$$
where $\sigma_t(X)$ denotes the $t$-th largest singular value of $X$, and $g$ is a positive parameter. $\lambda$ is an adaptive parameter that is set to a large initial value and gradually decreased to the target value.
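As an illustration of Equation (7), a short NumPy sketch of the matrix Geman norm is given below (our own code; the default $g = 100$ follows the setting reported in Section 4, and $\lambda$ is left as an argument).

```python
import numpy as np

def matrix_geman_norm(X, g=100.0, lam=1.0):
    """Matrix Geman norm of Eq. (7): lam * sum_t sigma_t / (sigma_t + g)."""
    sigma = np.linalg.svd(X, compute_uv=False)   # singular values in descending order
    return lam * np.sum(sigma / (sigma + g))
```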
Theorem 2 (t-SVT [21]). For any tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ factorized by the t-SVD, its tensor singular value thresholding (t-SVT) is defined as follows:
$$\mathrm{t\text{-}SVT}_\tau(\mathcal{X}) := \mathcal{U} *_P \mathcal{S}_\tau *_P \mathcal{V}^T, \qquad (8)$$
where $\mathcal{S}_\tau = P^{-1}\big((\mathcal{S}_P - \tau)_+\big)$, $t_+ = \max(0, t)$, and $\mathrm{t\text{-}SVT}_\tau(\mathcal{X})$ obeys the following equation:
$$\mathrm{t\text{-}SVT}_\tau(\mathcal{X}) = \arg\min_{\mathcal{K}} \ \tau \|\mathcal{K}\|_P + \frac{1}{2}\|\mathcal{K} - \mathcal{X}\|_F^2. \qquad (9)$$
Definition 8 (gradient tensor). The gradient tensor of an order-$d$ tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ along the $k$-th mode is defined as follows:
$$\mathcal{G}_k := \nabla_k(\mathcal{X}) = \mathcal{X} \times_k D_{n_k}, \quad (k = 1, 2, \ldots, d), \qquad (10)$$
where $D_{n_k}$ is a row-circulant matrix composed of $(-1, 1, 0, \ldots, 0)$.
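Because $D_{n_k}$ is row-circulant, the mode-$k$ product in Definition 8 reduces to a circular first difference along the $k$-th mode, as in the following sketch (our own illustration).

```python
import numpy as np

def gradient_tensor(X, mode):
    """G_k = X x_k D_{n_k}: circular first difference along `mode`,
    with D_{n_k} the row-circulant matrix built from (-1, 1, 0, ..., 0)."""
    return np.roll(X, -1, axis=mode) - X
```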

3. Tensor Recovery with Nonconvex TCTV Norm

3.1. TCTV Norm

We propose a novel nonconvex norm, termed the TCTV, which integrates the tensor nuclear norm based on differences with the nonconvex Geman norm. This design leverages the complementary strengths of both components: the tensor nuclear norm ensures structural integrity and low-rank representation, while the Geman norm enhances accuracy in rank approximation and reduces estimation bias. By combining these advantages, the TCTV achieves superior performance in preserving fine-grained details, making it highly effective for challenging tensor recovery tasks. The definition of the TCTV is presented as follows:
For any tensor $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$, we define the TCTV norm based on the t-CTV norm [17]; the key change between the two norms is that the convex nuclear norm is replaced by the nonconvex Geman norm:
$$\|\mathcal{X}\|_{\mathrm{TCTV}} := \frac{1}{m} \sum_{k \in \Gamma} \|\mathcal{G}_k\|_P, \qquad (11)$$
where $\mathcal{G}_k$ denotes the correlated (gradient) tensor of $\mathcal{X}$ along the $k$-th mode, $\Gamma$ denotes a prior set composed of the directions along which $\mathcal{X}$ exhibits the low-rankness and smoothness priors, and $m$ denotes the cardinality of $\Gamma$. Through comparative analysis in ablation experiments, we observed that the Geman norm achieves favorable performance. While other nonconvex norms also demonstrated promising results, this study focuses on the Geman function as a representative case for experimental validation. The generalization of nonconvex TCTV norms will be further explored in future work to provide a more comprehensive framework.
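For an order-3 tensor with $P$ chosen as the DFT along the third mode, the TCTV norm (11) can be evaluated as in the following sketch (a minimal illustration of ours; the default parameters and the omission of any transform normalization are assumptions).

```python
import numpy as np

def tctv_norm(X, prior_modes=(0, 1, 2), g=100.0, lam=1.0):
    """Sketch of Eq. (11): (1/m) * sum_{k in Gamma} ||G_k||_P for an order-3 tensor,
    with ||.||_P the slice-wise Geman norm in the DFT (mode-3) domain."""
    total = 0.0
    for k in prior_modes:
        G = np.roll(X, -1, axis=k) - X                    # gradient tensor along mode k
        Gp = np.fft.fft(G, axis=2)                        # transform P: DFT along mode 3
        for j in range(Gp.shape[2]):                      # Geman norm of each frontal slice
            sigma = np.linalg.svd(Gp[:, :, j], compute_uv=False)
            total += lam * np.sum(sigma / (sigma + g))
    return total / len(prior_modes)
```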

3.2. Models

For the TC model, we suppose that $\mathcal{X} \in \mathbb{R}^{n_1 \times n_2 \times n_3 \times \cdots \times n_d}$ is the underlying unknown tensor with the L and S priors. The TC-TCTV model is established as follows, using the TCTV norm to characterize the joint L and S priors of $\mathcal{X}$:
$$\min_{\mathcal{X}} \ \|\mathcal{X}\|_{\mathrm{TCTV}} \quad \mathrm{s.t.} \quad P_\Omega(\mathcal{X}) = P_\Omega(\mathcal{X}_0), \qquad (12)$$
where $P_\Omega(\cdot)$ denotes the projection operator onto the index set $\Omega$ of observed entries, and $\Omega$ obeys random Bernoulli sampling ($\Omega \sim \mathrm{Ber}(p)$). Then, we reformulate the TC-TCTV model as follows:
$$\min_{\mathcal{X}, \mathcal{G}_k, \mathcal{K}} \ \frac{1}{m} \sum_{k \in \Gamma} \|\mathcal{G}_k\|_P + \delta_\Omega(\mathcal{K}) \quad \mathrm{s.t.} \quad \mathcal{G}_k = \nabla_k(\mathcal{X}), \ \mathcal{X} + \mathcal{K} = P_\Omega(\mathcal{X}_0), \qquad (13)$$
where $\mathcal{X}_0$ denotes the observed tensor, $\nabla_k(\cdot)$ denotes the difference operator along a certain mode, $\mathcal{K}$ is supported outside $\Omega$ to compensate for the missing entries of $\mathcal{X}$, and $\delta_\Omega(\mathcal{K})$ is an indicator function defined as follows:
$$\delta_\Omega(\mathcal{K}) = \begin{cases} 0, & P_\Omega(\mathcal{K}) = \mathcal{O}, \\ +\infty, & \text{otherwise}. \end{cases} \qquad (14)$$
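For clarity, the projection operator $P_\Omega$ and the role of the indicator function $\delta_\Omega$ can be sketched as follows (our own illustration; representing $\Omega$ by a Boolean mask is an assumption of this sketch).

```python
import numpy as np

def project_omega(X, mask):
    """P_Omega: keep the entries indexed by Omega (mask == True), zero out the rest."""
    return np.where(mask, X, 0.0)

# delta_Omega(K) forces P_Omega(K) = O, so any feasible K is supported on the
# complement of Omega and only compensates for the missing entries:
# K_feasible = project_omega(K, ~mask)
```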
For the TRPCA model, we assume that the underlying tensor $\mathcal{X}$ is corrupted by noise $\mathcal{E}$. The corruption obeys $\mathcal{M} = \mathcal{X} + \mathcal{E}$, where $\mathcal{M}$ is the observation. Thus, the TRPCA-TCTV model is established as follows:
$$\min_{\mathcal{X}, \mathcal{G}_k, \mathcal{E}} \ \frac{1}{m} \sum_{k \in \Gamma} \|\mathcal{G}_k\|_P + \beta \|\mathcal{E}\|_1 \quad \mathrm{s.t.} \quad \mathcal{G}_k = \nabla_k(\mathcal{X}), \ \mathcal{X} + \mathcal{E} = \mathcal{M}. \qquad (15)$$

3.3. Optimization Algorithms

3.3.1. Optimization to TC-TCTV

The augmented Lagrange function of (13) is
$$\begin{aligned} L(\mathcal{X}, \{\mathcal{G}_k\}_{k \in \Gamma}, \mathcal{K}, \{\Lambda_k\}_{k \in \Gamma}, \Upsilon) = {} & \sum_{k \in \Gamma} \Big( \frac{1}{m} \|\mathcal{G}_k\|_P + \langle \Lambda_k, \nabla_k(\mathcal{X}) - \mathcal{G}_k \rangle + \frac{\mu_t}{2} \|\nabla_k(\mathcal{X}) - \mathcal{G}_k\|_F^2 \Big) \\ & + \delta_\Omega(\mathcal{K}) + \langle \Upsilon, P_\Omega(\mathcal{X}_0) - \mathcal{X} - \mathcal{K} \rangle + \frac{\mu_t}{2} \|P_\Omega(\mathcal{X}_0) - \mathcal{X} - \mathcal{K}\|_F^2, \end{aligned} \qquad (16)$$
where $\Lambda_k$ and $\Upsilon$ are Lagrange multipliers, and $\mu_t$ is a penalty parameter. Then, (16) can be rewritten as follows:
$$L = \sum_{k \in \Gamma} \Big( \frac{1}{m} \|\mathcal{G}_k\|_P + \frac{\mu_t}{2} \big\| \nabla_k(\mathcal{X}) - \mathcal{G}_k + \Lambda_k/\mu_t \big\|_F^2 \Big) + \delta_\Omega(\mathcal{K}) + \frac{\mu_t}{2} \big\| P_\Omega(\mathcal{X}_0) - \mathcal{X} - \mathcal{K} + \Upsilon/\mu_t \big\|_F^2. \qquad (17)$$
Step 1. Update $\mathcal{X}^{t+1}$ from $L$:
The following system is obtained by taking the derivative of (17) with respect to $\mathcal{X}$:
$$\Big( \mathcal{I} + \sum_{k \in \Gamma} \nabla_k^T \nabla_k \Big)(\mathcal{X}) = P_\Omega(\mathcal{X}_0) - \mathcal{K}^t + \Upsilon^t/\mu_t + \sum_{k \in \Gamma} \nabla_k^T \big( \mathcal{G}_k^t - \Lambda_k^t/\mu_t \big). \qquad (18)$$
According to [22], we adopt a multidimensional FFT to diagonalize the difference tensors $\mathcal{D}_k$ corresponding to $\nabla_k(\cdot)$. Then, we can obtain the optimal solution of (18) as follows:
$$\mathcal{X}^{t+1} = \mathcal{F}^{-1} \left[ \frac{\mathcal{F}\big( P_\Omega(\mathcal{X}_0) - \mathcal{K}^t + \Upsilon^t/\mu_t \big) + \mathcal{H}}{\mathbf{1} + \sum_{k \in \Gamma} \overline{\mathcal{F}(\mathcal{D}_k)} \odot \mathcal{F}(\mathcal{D}_k)} \right], \qquad (19)$$
where $\mathbf{1}$ is a tensor with all elements equal to 1, the division is performed element-wise, and $\mathcal{H} = \sum_{k \in \Gamma} \overline{\mathcal{F}(\mathcal{D}_k)} \odot \mathcal{F}\big( \mathcal{G}_k^t - \Lambda_k^t/\mu_t \big)$.
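The X-update in (19) amounts to solving the linear system (18) in the Fourier domain, where each difference operator is diagonal. The following is a minimal NumPy sketch of this step (our own code; the variable names, the dictionary representation of $\{\mathcal{G}_k, \Lambda_k\}_{k\in\Gamma}$, and the argument `B` standing for $P_\Omega(\mathcal{X}_0) - \mathcal{K}^t + \Upsilon^t/\mu_t$ are assumptions).

```python
import numpy as np

def update_X(B, G, Lam, mu):
    """FFT-diagonalized solve of (I + sum_k Dk^T Dk) X = B + sum_k Dk^T (G_k - Lam_k/mu).
    B: right-hand-side data tensor; G, Lam: dicts mapping mode k to G_k^t and Lambda_k^t."""
    shape = B.shape
    numer = np.fft.fftn(B)
    denom = np.ones(shape, dtype=complex)
    for k in G:
        kernel = np.zeros(shape[k])
        kernel[0], kernel[-1] = -1.0, 1.0            # circular difference filter along mode k
        eig = np.fft.fft(kernel)                     # eigenvalues of D_k under the DFT
        bshape = [1] * B.ndim
        bshape[k] = shape[k]
        eig = eig.reshape(bshape)                    # broadcast over the remaining modes
        numer += np.conj(eig) * np.fft.fftn(G[k] - Lam[k] / mu)
        denom += np.abs(eig) ** 2
    return np.real(np.fft.ifftn(numer / denom))
```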
Step 2. Update $\mathcal{G}_k^{t+1}$ from $L$: For each $k \in \Gamma$, we extract the part related to $\mathcal{G}_k$:
$$\mathcal{G}_k^{t+1} = \arg\min_{\mathcal{G}_k} \ \|\mathcal{G}_k\|_P + \frac{m\mu_t}{2} \Big\| \nabla_k(\mathcal{X}^{t+1}) - \mathcal{G}_k + \frac{\Lambda_k^t}{\mu_t} \Big\|_F^2. \qquad (20)$$
Let $\mathcal{P}_k^{t+1} = \nabla_k(\mathcal{X}^{t+1}) + \frac{\Lambda_k^t}{\mu_t}$, and consider $\mathcal{G}_k$ and $\mathcal{P}_k$ as the compositions of all face slices. Equation (20) can then be rewritten as follows:
$$\big\{ G_k^{t+1,(1)}, \ldots, G_k^{t+1,(M)} \big\} = \arg\min_{\{G_k^{(1)}, \ldots, G_k^{(M)}\}} \ \sum_{j=1}^{M} \|G_k^{(j)}\|_g + \frac{m\mu_t}{2} \big\| G_k^{(j)} - P_k^{t+1,(j)} \big\|_F^2, \qquad (21)$$
where $M = n_3 \cdots n_d$ denotes the number of face slices of $\mathcal{G}_k$. For each $j \leq M$, we extract the subproblem as follows:
$$G_k^{t+1} = \arg\min_{G_k} \ \|G_k\|_g + \frac{m\mu_t}{2} \| G_k - P_k^{t+1} \|_F^2. \qquad (22)$$
Let $\sigma_1^t \geq \sigma_2^t \geq \cdots \geq \sigma_{q_n}^t$ denote the singular values of $G_k^t \in \mathbb{R}^{r_n \times s_n}$ with $q_n = \min\{r_n, s_n\}$, let $\nabla\phi(\sigma^t)$ denote the gradient of $\phi(x) = \frac{\lambda x}{x + \gamma}$ at $\sigma^t$, and let $f(G_k) = \frac{1}{2}\|G_k - P_k^{t+1}\|_F^2$. By setting the Lipschitz constant to 1, it is easy to prove that the gradient of $f(G_k)$ is Lipschitz-continuous. Following [23], we use the nonascending order of the singular values and the antimonotone property of the gradient of the nonconvex Geman function to obtain two inequalities:
$$0 \leq \nabla\phi(\sigma_1^t) \leq \nabla\phi(\sigma_2^t) \leq \cdots \leq \nabla\phi(\sigma_{q_n}^t), \qquad \phi\big(\sigma_i(G_k)\big) \leq \phi(\sigma_i^t) + \nabla\phi(\sigma_i^t)\big(\sigma_i(G_k) - \sigma_i^t\big), \qquad (23)$$
where $i = 1, 2, \ldots, q_n$. Then, the subproblem with respect to $G_k$ can be rewritten as follows through relaxation:
$$\arg\min_{G_k} \ \frac{1}{m\mu_t} \sum_{n=1}^{q_n} \Big( \phi(\sigma_n^t) + \nabla\phi(\sigma_n^t)\big(\sigma_n(G_k) - \sigma_n^t\big) \Big) + f(G_k) = \arg\min_{G_k} \ \frac{1}{m\mu_t} \sum_{n=1}^{q_n} \nabla\phi(\sigma_n^t)\, \sigma_n(G_k) + \frac{1}{2}\|G_k - P_k^{t+1}\|_F^2. \qquad (24)$$
Based on [23,24], we solve (24) through generalized weighted singular value thresholding (WSVT) [25].
Lemma 1. For any $\frac{1}{m\mu_t} > 0$, the auxiliary variable $\mathcal{P}_k^{t+1} = \nabla_k(\mathcal{X}^{t+1}) + \frac{\Lambda_k^t}{\mu_t}$, and $0 \leq \nabla\phi(\sigma_1^t) \leq \nabla\phi(\sigma_2^t) \leq \cdots \leq \nabla\phi(\sigma_{q_n}^t)$. Then, the global optimal solution $G_k^{t+1}$ to (24) can be obtained as follows:
$$G_k^{t+1} = \mathrm{WSVT}\Big( P_k^{t+1}, \frac{\nabla\phi}{m\mu_t} \Big) = U\, S_{\frac{\nabla\phi}{m\mu_t}}(\Sigma)\, V^T, \qquad (25)$$
$$S_{\frac{\nabla\phi}{m\mu_t}}(\Sigma) = \mathrm{Diag}\Big\{ \max\Big( \Sigma_{ii} - \frac{\nabla\phi(\sigma_i^t)}{m\mu_t}, \, 0 \Big) \Big\}, \quad (i = 1, 2, \ldots, q_n), \qquad (26)$$
where $U \Sigma V^T$ is the SVD of $P_k^{t+1}$. Then, we can easily obtain $\mathcal{G}_k^{t+1}$ by concatenating $\{ G_k^{t+1,(1)}, \ldots, G_k^{t+1,(M)} \}$ along a certain mode. The closed-form solution of (20) is given as follows:
$$\mathcal{G}_k^{t+1} = \mathrm{Concat}\big\{ G_k^{t+1,(1)}, \ldots, G_k^{t+1,(M)} \big\}. \qquad (27)$$
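A per-slice sketch of the WSVT step in (25) and (26) is given below (our own code; `sigma_prev` holds the singular values of the previous iterate $G_k^t$, `m_mu` stands for $m\mu_t$, and the default $\lambda$, $\gamma$ values are placeholders).

```python
import numpy as np

def wsvt_geman(P, sigma_prev, m_mu, lam=1.0, gamma=100.0):
    """Weighted SVT: shrink each singular value of P by grad(phi)(sigma_i^t)/(m*mu_t),
    where phi(x) = lam * x / (x + gamma) is the Geman penalty."""
    U, s, Vh = np.linalg.svd(P, full_matrices=False)
    w = lam * gamma / (sigma_prev + gamma) ** 2      # gradient of phi at the previous sigmas
    s_new = np.maximum(s - w / m_mu, 0.0)            # weighted soft shrinkage of Eq. (26)
    return (U * s_new) @ Vh, s_new                   # updated slice and its singular values
```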
Step 3. Update $\mathcal{K}^{t+1}$ from $L$:
$$\mathcal{K}^{t+1} = P_\Omega(\mathcal{X}_0) - \mathcal{X}^{t+1} + \Upsilon^t/\mu_t, \quad P_\Omega(\mathcal{K}^{t+1}) = \mathcal{O}. \qquad (28)$$
Step 4. Update $\Lambda_k^{t+1}$ and $\Upsilon^{t+1}$ from $L$: Based on the ADMM algorithm, $\Lambda_k^{t+1}$ and $\Upsilon^{t+1}$ have the following update equations:
$$\Lambda_k^{t+1} = \Lambda_k^t + \mu_t \big( \nabla_k(\mathcal{X}^{t+1}) - \mathcal{G}_k^{t+1} \big), \qquad (29)$$
$$\Upsilon^{t+1} = \Upsilon^t + \mu_t \big( P_\Omega(\mathcal{X}_0) - \mathcal{X}^{t+1} - \mathcal{K}^{t+1} \big). \qquad (30)$$
Finally, the penalty parameter $\mu_{t+1}$ is updated by $\mu_{t+1} = \rho\mu_t$, where the constant $\rho > 1$. The entire optimization scheme for the TC-TCTV is described in Algorithm 1.
Algorithm 1: ADMM for TC-TCTV.
Input: observation $P_\Omega(\mathcal{X}_0)$, prior set $\Gamma$, and transform $P$.
  •   Initialize $\mathcal{G}_k^0 = \nabla_k(P_\Omega(\mathcal{X}_0))$, $\mathcal{K}^0 = \Upsilon^0 = \Lambda_k^0 = \mathcal{O}$.
  •   Repeat
  •   Update $\mathcal{X}^{t+1}$ by (19);
  •   Update $\mathcal{G}_k^{t+1}$ by (27) for each $k \in \Gamma$;
  •   Update $\mathcal{K}^{t+1}$ by (28);
  •   Update $\Lambda_k^{t+1}$ and $\Upsilon^{t+1}$ by (29) and (30);
  •   Let $\mu_{t+1} = \rho\mu_t$; $t = t + 1$.
  •   Until converged
Output: The optimal solution $\hat{\mathcal{X}} = \mathcal{X}^{t+1}$.

3.3.2. Optimization to TRPCA-TCTV

The optimization processes for the TRPCA-TCTV and TC-TCTV are quite similar, except that some variables are replaced. Specifically, $\mathcal{K}$ is replaced by the sparse additive noise $\mathcal{E}$, and the observation $P_\Omega(\mathcal{X}_0)$ is replaced by $\mathcal{M}$. Thus, the ADMM optimization process for the TRPCA-TCTV can be deduced as follows:
$$\mathcal{X}^{t+1} = \mathcal{F}^{-1} \left[ \frac{\mathcal{F}(\mathcal{M} - \mathcal{E}^t + \Upsilon^t/\mu_t) + \mathcal{H}}{\mathbf{1} + \sum_{k \in \Gamma} \overline{\mathcal{F}(\mathcal{D}_k)} \odot \mathcal{F}(\mathcal{D}_k)} \right]; \qquad (31)$$
$$\mathcal{G}_k^{t+1} = \mathrm{Concat}\big\{ G_k^{t+1,(1)}, \ldots, G_k^{t+1,(M)} \big\}; \qquad (32)$$
$$\mathcal{E}^{t+1} = S_{1/(m\mu_t)}\big( \mathcal{M} - \mathcal{X}^{t+1} + \Upsilon^t/\mu_t \big); \qquad (33)$$
$$\Lambda_k^{t+1} = \Lambda_k^t + \mu_t \big( \nabla_k(\mathcal{X}^{t+1}) - \mathcal{G}_k^{t+1} \big); \qquad (34)$$
$$\Upsilon^{t+1} = \Upsilon^t + \mu_t \big( \mathcal{M} - \mathcal{X}^{t+1} - \mathcal{E}^{t+1} \big). \qquad (35)$$
Here, $S_\tau(\cdot)$ denotes the soft-thresholding operator. The whole optimization scheme for the TRPCA-TCTV is summarized in Algorithm 2. It begins by initializing several tensors to zero, including $\mathcal{G}_k^0$, $\mathcal{E}^0$, $\Upsilon^0$, and $\Lambda_k^0$. Then, it enters a repeat loop where it iteratively updates these tensors and the parameter $\mu_t$ using the equations above. The updates involve refining the estimates of $\mathcal{X}$, $\mathcal{G}_k$, $\mathcal{E}$, $\Lambda_k$, and $\Upsilon$. The parameter $\mu_t$ is increased by a factor of $\rho$ after each iteration, and the process continues until convergence is achieved. The final output is the optimal solution $\hat{\mathcal{X}}$, representing the refined data estimate after the iterative process has converged.
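The soft-thresholding operator used in the E-update (33) is elementwise and can be sketched as follows (our own illustration; the commented usage line and its variable names are assumptions).

```python
import numpy as np

def soft_threshold(X, tau):
    """Elementwise soft-thresholding: S_tau(x) = sign(x) * max(|x| - tau, 0)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

# E-update of Eq. (33):
# E_next = soft_threshold(M - X_next + Upsilon / mu_t, 1.0 / (m * mu_t))
```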
Algorithm 2: ADMM for TRPCA-TCTV.
Input: observation $\mathcal{M}$, prior set $\Gamma$, and transform $P$.
  •   Initialize $\mathcal{G}_k^0 = \mathcal{E}^0 = \Upsilon^0 = \Lambda_k^0 = \mathcal{O}$.
  •   Repeat
  •   Update $\mathcal{X}^{t+1}$ by (31);
  •   Update $\mathcal{G}_k^{t+1}$ by (32) for each $k \in \Gamma$;
  •   Update $\mathcal{E}^{t+1}$ by (33);
  •   Update $\Lambda_k^{t+1}$ and $\Upsilon^{t+1}$ by (34) and (35);
  •   Let $\mu_{t+1} = \rho\mu_t$; $t = t + 1$.
  •   Until converged
Output: The optimal solution $\hat{\mathcal{X}} = \mathcal{X}^{t+1}$.

3.4. Computational Complexity Analysis

For Algorithm 1, we mainly consider the cost of calculating $\mathcal{X}^{t+1}$, $\mathcal{G}_k^{t+1}$, $\mathcal{K}^{t+1}$, $\Lambda_k^{t+1}$, and $\Upsilon^{t+1}$ in each iteration. The time complexity of computing $\mathcal{X}^{t+1}$ is $O\big(n_1 \cdots n_d \log(n_1 \cdots n_d)\big)$. The cost of calculating $\mathcal{G}_k^{t+1}$ is $O(n_1 \cdots n_d \, r_n^2)$. Calculating $\mathcal{K}^{t+1}$, $\Lambda_k^{t+1}$, and $\Upsilon^{t+1}$ has the same complexity, $O(n_1 \cdots n_d)$. Therefore, the per-iteration complexity of Algorithm 1 is $O\big(n_1 \cdots n_d \log(n_1 \cdots n_d) + n_1 \cdots n_d \, r_n^2 + 3 n_1 \cdots n_d\big)$.
For Algorithm 2, it is easy to see that the only difference lies in calculating $\mathcal{E}^{t+1}$, which involves the soft-thresholding operator. The cost of computing $\mathcal{E}^{t+1}$ is $O(n_1 \cdots n_d)$, so the total complexity is the same as in Algorithm 1.

3.5. Convergence Analysis

Since the constraints in the proposed models are separable, they can be transformed into a standard two-block ADMM by using the method given in [26]. The convergence of Algorithms 1 and 2 can then be proven using the general conclusion given in [26]. To provide an intuitive visualization of the error variation after each iteration, curves of the relative error against the number of iterations are plotted in Figure 2. It is easy to see that the gap between $\mathcal{X}^{t+1}$ and $\mathcal{X}_0$ decreases with increasing iterations. After 50 iterations, the errors gradually approach zero, which intuitively validates the convergence of the algorithms.

3.6. Sensitivity Analysis

Considering that $\rho$ and $\mu_0$ are two crucial parameters in Algorithms 1 and 2, we conducted a sensitivity analysis by adjusting the value of each parameter. The effects of $\rho$ and $\mu_0$ on the PSNR at various sampling rates are shown in Figure 3. We observe that a convincing PSNR performance is achieved when $\rho = 1.1$ and $\mu_0 = 10^{-5}$.

4. Experimental Results

In this section, we conduct extensive experiments on public datasets to evaluate the performance of the proposed models. We compare our models with different methods (Table 1) that consider the L or S priors of tensors. All experiments were performed in MATLAB (R2021a) on a computer with an AMD Ryzen 7 5800H CPU with Radeon Graphics (16 CPUs) and 16 GB of memory.
For a quantitative comparison, three picture quality indices (PQIs) were employed: the peak signal-to-noise ratio (PSNR [34]), structural similarity (SSIM [35]), and feature similarity (FSIM [36]). The larger the PSNR, SSIM, and FSIM, the better the performance of the recovery model. All the parameters used in the compared models were set as given in the reference papers or tuned optimally.
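For reference, the PSNR used below can be computed as in the following sketch (our own code; the peak value of 255 assumes 8-bit images).

```python
import numpy as np

def psnr(recovered, reference, peak=255.0):
    """Peak signal-to-noise ratio in dB between a recovered image/tensor and the ground truth."""
    mse = np.mean((np.asarray(recovered, float) - np.asarray(reference, float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```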
Parameter setting: First, we give brief explanations of the important variables in the models. $\mathcal{X}$, $\mathcal{G}_k$, and $\mathcal{E}$ denote the recovered tensor, gradient tensor, and noise, respectively. $\mathcal{K}$ is supported outside $\Omega$ to compensate for the missing entries of $\mathcal{X}$. $\Lambda_k$ and $\Upsilon$ are Lagrange multipliers. For all competing approaches, we either adopted the parameters suggested in the released code or adjusted some of them to obtain better results. We set $\rho = 1.1$, $\mu_0 = 10^{-5}$, $\max(\mu_t) = 10^{10}$, and $\mu_{t+1} = \min(\rho\mu_t, \max(\mu_t))$ in Algorithms 1 and 2. In (7), $g = 100$, and $\lambda_0$ is first set to a large value; then $\lambda$ decreases by $\lambda = \eta^k \lambda_0$ with $\eta = 0.1$ until reaching a target value of 1. Since the nonconvex norm plays an important role in the two models, we conducted ablation experiments and present the results in Table 2. In subsequent experiments, we use only the Geman norm as an example.

4.1. Color Image Completion

In this experiment, we randomly choose 20 color images (of size $n_1 \times n_2 \times 3$) from the BSD and USC-SIPI databases; all original images are shown in Figure 4. The images are corrupted with different sampling rates (from 5% to 30%). The quantitative completion results, averaged over all images, are presented in Table 3. The results show that the proposed TC-TCTV model achieves higher PSNR, SSIM, and FSIM values than the other methods. We also draw a bar chart of the PSNR values of the images recovered by the different methods (as shown in Figure 5), which presents an intuitive comparison. Meanwhile, our method strikes a balance between the quality of the recovery results and computational efficiency; as indicated in Table 3, the runtime of our algorithm is entirely acceptable. Due to space limitations, a partial selection of images and sampling conditions was made for an intuitive visual comparison, as shown in Figure 6. We magnify specific local regions in the recovered images and then compute the differences between the ground truth image and the images recovered by the competing methods. This provides a straightforward visual assessment of the restoration performance achieved by the different methods.
We further tested all the completion models under extreme conditions, with sampling rates of 3%, 1%, and 0.5%. Due to space limitations, the recovery results of some typical cases are given in Table 4 and Figure 1 and Figure 7. In comparison, our method shows strong restoration capabilities, while the other methods almost fail.

4.2. Denoising Task

In this part, we apply the TRPCA-TCTV model to the denoising task to remove noise from the observations. Due to space limitations, we only present a few examples. Table 1 presents seven relevant methods. Likewise, we chose the parameters suggested in the released codes or fine-tuned them to obtain better results.
Specifically, we chose salt and pepper noise (N) at different levels. The datasets comprise color images and hyperspectral images (hsi_PaC, PaviaU, Salinas). From the results given in Table 5 and Figure 8, it can be seen that our method achieves the best results when noise levels are at 20% and 40%.

5. Conclusions and Future Work

In this paper, we devised the TCTV as a regularizer to capture the low-rankness and smoothness priors of the gradient tensor. Specifically, the nuclear norm was replaced by the nonconvex Geman norm, which showed better performance in tensor rank estimation. Based on this, we constructed the TC-TCTV and TRPCA-TCTV models, which showed superior performance. Even under extreme conditions like when 99.5% of the pixel values are missing, the TC-TCTV model can still achieve impressive recovery results, while the other methods all fail.
There remain various interesting directions to explore. We plan to develop general nonconvex TC and TRPCA models by using different nonconvex norms. It would also be meaningful to test more recovery tasks, such as super-resolution, flash–no flash reconstruction, and motion deblurring. We also plan to incorporate deep priors to build a more generalized model for such recovery tasks.

Author Contributions

Conceptualization, X.S., H.L. and H.G.; methodology, X.S. and H.L.; software, H.L. and Y.M.; validation, X.S., H.L., H.G. and Y.M.; formal analysis, H.L.; investigation, X.S.; resources, X.S. and H.G.; data curation, X.S. and H.L.; writing—original draft preparation, H.L.; writing—review and editing, X.S. and H.L.; visualization, H.L.; supervision, X.S. and H.G.; project administration, X.S.; funding acquisition, X.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant 62301056, the Fundamental Research Funds for Central Universities No. 2024JCYJ005, and the National Natural Science Foundation of China under Grant 12371094.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. De Lathauwer, L.; De Moor, B. From matrix to tensor: Multilinear algebra and signal processing. In Institute of Mathematics and Its Applications Conference Series; Citeseer: University Park, PA, USA, 1998; Volume 67, pp. 1–16. [Google Scholar]
  2. Goldfarb, D.; Qin, Z. Robust low-rank tensor recovery: Models and algorithms. SIAM J. Matrix Anal. Appl. 2014, 35, 225–253. [Google Scholar] [CrossRef]
  3. Signoretto, M.; Tran Dinh, Q.; De Lathauwer, L.; Suykens, J.A. Learning with tensors: A framework based on convex optimization and spectral regularization. Mach. Learn. 2014, 94, 303–351. [Google Scholar] [CrossRef]
  4. Hamout, H.; Elyousfi, A. Fast depth map intra coding for 3D video compression-based tensor feature extraction and data analysis. IEEE Trans. Circuits Syst. Video Technol. 2019, 30, 1933–1945. [Google Scholar] [CrossRef]
  5. Liu, J.; Musialski, P.; Wonka, P.; Ye, J. Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 208–220. [Google Scholar] [CrossRef] [PubMed]
  6. Gross, D. Recovering low-rank matrices from few coefficients in any basis. IEEE Trans. Inf. Theory 2011, 57, 1548–1566. [Google Scholar] [CrossRef]
  7. Huang, B.; Mu, C.; Goldfarb, D.; Wright, J. Provable models for robust low-rank tensor completion. Pac. J. Optim. 2015, 11, 339–364. [Google Scholar]
  8. Pu, X.; Che, H.; Pan, B.; Leung, M.F.; Wen, S. Robust weighted low-rank tensor approximation for multiview clustering with mixed noise. IEEE Trans. Comput. Soc. Syst. 2023, 11, 3268–3285. [Google Scholar] [CrossRef]
  9. Kolda, T.G.; Bader, B.W. Tensor decompositions and applications. SIAM Rev. 2009, 51, 455–500. [Google Scholar] [CrossRef]
  10. Tucker, L.R. Implications of factor analysis of three-way matrices for measurement of change. Probl. Meas. Chang. 1963, 15, 3. [Google Scholar]
  11. Dian, R.; Li, S. Hyperspectral image super-resolution via subspace-based low tensor multi-rank regularization. IEEE Trans. Image Process. 2019, 28, 5135–5146. [Google Scholar] [CrossRef]
  12. Zheng, Y.B.; Huang, T.Z.; Zhao, X.L.; Jiang, T.X.; Ji, T.Y.; Ma, T.H. Tensor N-tubal rank and its convex relaxation for low-rank tensor recovery. Inf. Sci. 2020, 532, 170–189. [Google Scholar] [CrossRef]
  13. Lysaker, M.; Osher, S.; Tai, X.C. Noise removal using smoothed normals and surface fitting. IEEE Trans. Image Process. 2004, 13, 1345–1357. [Google Scholar] [CrossRef]
  14. Lou, Y.; Yin, P.; He, Q.; Xin, J. Computing sparse representation in a highly coherent dictionary based on difference of L1 and L2. J. Sci. Comput. 2015, 64, 178–196. [Google Scholar] [CrossRef]
  15. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
  16. Chen, Z.; Zhou, Z.; Adnan, S. Joint low-rank prior and difference of Gaussian filter for magnetic resonance image denoising. Med Biol. Eng. Comput. 2021, 59, 607–620. [Google Scholar] [CrossRef] [PubMed]
  17. Wang, H.; Peng, J.; Qin, W.; Wang, J.; Meng, D. Guaranteed tensor recovery fused low-rankness and smoothness. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 10990–11007. [Google Scholar] [CrossRef] [PubMed]
  18. Kilmer, M.E.; Martin, C.D. Factorization strategies for third-order tensors. Linear Algebra Appl. 2011, 435, 641–658. [Google Scholar] [CrossRef]
  19. Kilmer, M.E.; Horesh, L.; Avron, H.; Newman, E. Tensor-tensor algebra for optimal representation and compression of multiway data. Proc. Natl. Acad. Sci. USA 2021, 118, e2015851118. [Google Scholar] [CrossRef] [PubMed]
  20. Martin, C.D.; Shafer, R.; LaRue, B. An order-p tensor factorization with applications in imaging. SIAM J. Sci. Comput. 2013, 35, A474–A490. [Google Scholar] [CrossRef]
  21. Qin, W.; Wang, H.; Zhang, F.; Wang, J.; Luo, X.; Huang, T. Low-rank high-order tensor completion with applications in visual data. IEEE Trans. Image Process. 2022, 31, 2433–2448. [Google Scholar] [CrossRef]
  22. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci. 2008, 1, 248–272. [Google Scholar] [CrossRef]
  23. Chen, Y.; Guo, Y.; Wang, Y.; Wang, D.; Peng, C.; He, G. Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5366–5380. [Google Scholar] [CrossRef]
  24. Lu, C.; Tang, J.; Yan, S.; Lin, Z. Nonconvex nonsmooth low rank minimization via iteratively reweighted nuclear norm. IEEE Trans. Image Process. 2015, 25, 829–839. [Google Scholar] [CrossRef]
  25. Gaïffas, S.; Lecué, G. Weighted algorithms for compressed sensing and matrix completion. arXiv 2011, arXiv:1107.1638. [Google Scholar]
  26. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends® Mach. Learn. 2011, 3, 1–122. [Google Scholar]
  27. Xie, Q.; Zhao, Q.; Meng, D.; Xu, Z. Kronecker-basis-representation based tensor sparsity and its applications to tensor recovery. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 1888–1902. [Google Scholar] [CrossRef] [PubMed]
  28. Wang, H.; Zhang, F.; Wang, J.; Huang, T.; Huang, J.; Liu, X. Generalized nonconvex approach for low-tubal-rank tensor recovery. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 3305–3319. [Google Scholar] [CrossRef]
  29. Ji, T.Y.; Huang, T.Z.; Zhao, X.L.; Ma, T.H.; Liu, G. Tensor completion using total variation and low-rank matrix factorization. Inf. Sci. 2016, 326, 243–257. [Google Scholar] [CrossRef]
  30. Yokota, T.; Zhao, Q.; Cichocki, A. Smooth PARAFAC decomposition for tensor completion. IEEE Trans. Signal Process. 2016, 64, 5423–5436. [Google Scholar] [CrossRef]
  31. Qiu, D.; Bai, M.; Ng, M.K.; Zhang, X. Robust low-rank tensor completion via transformed tensor nuclear norm with total variation regularization. Neurocomputing 2021, 435, 197–215. [Google Scholar] [CrossRef]
  32. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2015, 54, 178–188. [Google Scholar] [CrossRef]
  33. Chen, Y.; Wang, S.; Zhou, Y. Tensor nuclear norm-based low-rank approximation with total variation regularization. IEEE J. Sel. Top. Signal Process. 2018, 12, 1364–1377. [Google Scholar] [CrossRef]
  34. Huynh-Thu, Q.; Ghanbari, M. Scope of validity of PSNR in image/video quality assessment. Electron. Lett. 2008, 44, 800–801. [Google Scholar] [CrossRef]
  35. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  36. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef]
Figure 1. Example of recovery performance under extreme conditions. From top to bottom, sampling rates are 3%, 1%, and 0.5%, respectively.
Figure 2. Convergence curves of the TC-TCTV and TRPCA-TCTV models. The relative errors of the two models decrease and converge with the increasing number of iterations.
Figure 3. Sensitivity analysis of parameters ρ and μ 0 under different sampling rates.
Figure 4. The 20 original color images that are chosen and tested in this experiment.
Figure 5. Bar chart of PSNR values for different sampling rates.
Figure 6. Visual comparisons of color images produced by all approaches. From top to bottom, sampling rates are 10%, 20%, and 30%, respectively.
Figure 7. Examples of recovery performance under extreme conditions for different methods. From top to bottom, sampling rates are 3%, 1%, and 0.5%, respectively.
Figure 8. Denoising results under 20% sparse salt and pepper noise for all competing methods.
Table 1. TC and TRPCA methods for comparison.
Types | Methods
TC − L prior | SNN [5], KBR [27], IRTNN [28], TNN [21]
TC − L+S priors | MF-TV [29], SPC-TV [30], TNN-TV [31], t-CTV [17]
TRPCA − L+S priors | LRTV [32], LRTDTV [30], TLRHTV [33], t-CTV
Table 2. Recovery performances of different nonconvex norms.
SR | Norms | Laplace | Geman | ETP | Capped $L_1$ | MCP | Logarithm | SCAD | $L_p$
70% | PSNR | 38.847 | 38.854 | 38.853 | 38.849 | 38.852 | 38.853 | 38.846 | 38.849
Table 3. PQI comparison of color image completion under various sampling rates (SRs). Results are averaged over 20 randomly selected images, and the best result is highlighted in bold.
SR | PQIs | OBS | SNN | KBR | IRTNN | TNN | MF-TV | SPC-TV | TNN-TV | t-CTV | TCTV
5% SR | PSNR | 6.404 | 18.168 | 16.625 | 14.672 | 17.734 | 8.612 | 18.068 | 20.459 | 21.763 | 23.912
 | SSIM | 0.017 | 0.315 | 0.240 | 0.148 | 0.273 | 0.075 | 0.317 | 0.450 | 0.566 | 0.647
 | FSIM | 0.567 | 0.668 | 0.666 | 0.668 | 0.678 | 0.642 | 0.652 | 0.607 | 0.825 | 0.855
10% SR | PSNR | 6.640 | 20.028 | 18.457 | 18.201 | 19.946 | 10.879 | 19.970 | 22.366 | 25.086 | 26.319
 | SSIM | 0.030 | 0.429 | 0.337 | 0.289 | 0.411 | 0.129 | 0.422 | 0.569 | 0.726 | 0.765
 | FSIM | 0.621 | 0.731 | 0.734 | 0.736 | 0.754 | 0.657 | 0.727 | 0.725 | 0.891 | 0.906
15% SR | PSNR | 6.887 | 21.548 | 20.312 | 20.368 | 21.630 | 13.442 | 21.576 | 23.877 | 27.213 | 28.290
 | SSIM | 0.042 | 0.535 | 0.447 | 0.428 | 0.527 | 0.228 | 0.527 | 0.665 | 0.810 | 0.835
 | FSIM | 0.643 | 0.786 | 0.794 | 0.798 | 0.811 | 0.706 | 0.788 | 0.804 | 0.925 | 0.935
20% SR | PSNR | 7.150 | 22.927 | 22.176 | 22.203 | 23.089 | 16.007 | 23.003 | 25.125 | 29.003 | 29.981
 | SSIM | 0.054 | 0.625 | 0.558 | 0.548 | 0.624 | 0.336 | 0.618 | 0.738 | 0.863 | 0.883
 | FSIM | 0.653 | 0.831 | 0.844 | 0.844 | 0.852 | 0.749 | 0.836 | 0.857 | 0.946 | 0.954
30% SR | PSNR | 7.728 | 25.470 | 26.869 | 25.943 | 25.865 | 23.969 | 25.450 | 27.292 | 32.242 | 33.271
 | SSIM | 0.079 | 0.764 | 0.778 | 0.740 | 0.769 | 0.677 | 0.754 | 0.838 | 0.928 | 0.939
 | FSIM | 0.663 | 0.896 | 0.924 | 0.913 | 0.910 | 0.887 | 0.899 | 0.919 | 0.971 | 0.976
 | TIME | 0 | 9.313 | 78.002 | 204.069 | 8.652 | 734.033 | 84.287 | 39.760 | 75.598 | 52.739
Table 4. Averaged PQIs of completion results under 3%, 1%, and 0.5% sampling rates.
SR | PQIs | OBS | SNN | KBR | IRTNN | TNN | MF-TV | SPC-TV | TNN-TV | t-CTV | TCTV
0.5% SR | PSNR | 6.858 | 9.371 | 16.095 | 8.796 | 7.493 | 7.300 | 7.563 | 9.780 | 18.403 | 19.466
 | SSIM | 0.501 | 0.571 | 0.624 | 0.528 | 0.528 | 0.520 | 0.542 | 0.664 | 0.708 | 0.746
 | FSIM | 0.672 | 0.763 | 0.731 | 0.752 | 0.725 | 0.748 | 0.726 | 0.694 | 0.801 | 0.795
1% SR | PSNR | 7.240 | 13.181 | 16.196 | 10.563 | 9.092 | 7.971 | 14.002 | 15.105 | 19.057 | 21.422
 | SSIM | 0.668 | 0.735 | 0.740 | 0.688 | 0.699 | 0.681 | 0.766 | 0.810 | 0.806 | 0.847
 | FSIM | 0.742 | 0.804 | 0.781 | 0.792 | 0.787 | 0.797 | 0.809 | 0.775 | 0.852 | 0.853
3% SR | PSNR | 5.616 | 18.014 | 17.060 | 12.631 | 16.300 | 7.195 | 17.680 | 20.747 | 21.412 | 23.957
 | SSIM | 0.671 | 0.776 | 0.747 | 0.700 | 0.744 | 0.683 | 0.780 | 0.838 | 0.849 | 0.898
 | FSIM | 0.763 | 0.819 | 0.801 | 0.796 | 0.808 | 0.794 | 0.814 | 0.821 | 0.903 | 0.929
Table 5. Denoising performance of competing methods under various levels of salt and pepper noise.
Noise | PQIs | OBS | SNN | KBR | TNN | LRTV | LRTDTV | TLR-HTV | t-CTV | TCTV
0.2 N | PSNR | 11.287 | 42.253 | 36.509 | 45.598 | 39.374 | 38.762 | 35.593 | 46.952 | 49.949
 | SSIM | 0.077 | 0.848 | 0.947 | 0.987 | 0.956 | 0.941 | 0.926 | 0.989 | 0.990
 | FSIM | 0.475 | 0.940 | 0.959 | 0.989 | 0.965 | 0.958 | 0.933 | 0.992 | 0.993
0.4 N | PSNR | 8.285 | 23.368 | 35.363 | 28.999 | 36.682 | 37.751 | 33.267 | 43.954 | 45.765
 | SSIM | 0.029 | 0.452 | 0.931 | 0.584 | 0.928 | 0.934 | 0.886 | 0.984 | 0.983
 | FSIM | 0.328 | 0.623 | 0.947 | 0.772 | 0.939 | 0.953 | 0.900 | 0.989 | 0.989

