Review

Review of Matrix Rank Constraint Model for Impulse Interference Image Inpainting

1
School of Space Information, Space Engineering University, Beijing 101400, China
2
China Academy of Space Technology (Xi’an), Xi’an 710100, China
*
Author to whom correspondence should be addressed.
Electronics 2024, 13(3), 470; https://doi.org/10.3390/electronics13030470
Submission received: 26 October 2023 / Revised: 17 January 2024 / Accepted: 21 January 2024 / Published: 23 January 2024
(This article belongs to the Special Issue Image and Video Quality and Compression)

Abstract

Camera failure or the loss of storage components in imaging equipment may result in the loss of important image information or random pulse noise interference. The low-rank prior is one of the most important priors in image optimization processing. This paper reviews and compares low-rank constraint models for image matrices. First, an overview of image-inpainting models based on the nuclear norm, the truncated nuclear norm, the weighted nuclear norm, and the matrix-factorization-based F norm is presented, together with the corresponding iterative optimization algorithms. Then, we use the different low-rank image matrix constraint models to recover satellite images from three types of pulse interference and report our visual and numerical results. Finally, we conclude that the method based on the weighted nuclear norm achieves the best image restoration effect, while the F norm method based on matrix factorization has the shortest computation time and is suited to large-scale low-rank matrix calculations. Compared with nuclear-norm-based methods, weighted-nuclear-norm-based and truncated-nuclear-norm-based methods significantly improve inpainting performance.

1. Introduction

In machine vision applications, images often suffer from impulse interference due to various factors, such as pulse noise caused by detector pixel failure in the camera or the loss of storage elements in the imaging equipment [1]. Satellite images, unmanned aerial vehicle (UAV) images, etc. generally have local smoothness, so their two-dimensional representation matrices usually exhibit obvious low-rankness. Low-rank prior information has excellent performance in image denoising [1], inpainting [2,3], reconstruction [4], deblurring [5], and other signal optimization processing fields. In existing low-rank-matrix-based image-inpainting methods, the low-rank prior mainly takes two forms: the low rank of the signal itself, such as the inherent low rank of the matrix, and similarity-based low rank, such as the similarity of local image blocks [6], the similarity between frames of a video [7], etc. A low-rank matrix with a Hankel-like structure can be constructed in the Fourier domain by using the annihilating filter relationship [2,4]. A high-order tensor rank can be obtained under various tensor decomposition frameworks, such as CANDECOMP/PARAFAC (CP), Tucker [8,9], tensor train (TT) [10], and tensor singular value decomposition (t-SVD) [11].
In addition to the low-rank prior, early image-denoising methods assumed that images had sparse representations in their transformation domains, such as the difference domain, wavelet domain, etc. [12,13]. Following this assumption, the sparse prior information was then constrained to recover the image from the noise. Due to the effectiveness of low rank and sparsity in constrained image optimization problems, image-processing schemes that combine constrained sparsity with low-rank prior information have been continuously proposed [14,15]. Some image-denoising methods use decomposition models of low rank and sparse components, such as the robust principal component analysis (RPCA) method [16], aimed at separating low-rank images from sparse interference images. With the progress in the research and development of tensor decomposition tools in the field of mathematics, such as t-SVD, TT decomposition, etc. [10,11,17], related image or video optimization applications based on a low-rank tensor are also being developed [18,19,20,21,22].
In addition to using low-rank prior information to construct the constraint model for impulse interference image inpainting, many theories, methods, and technologies in the field of signal processing can be combined to solve image-inpainting problems, such as various matrix/tensor completion theories in mathematics [18,23,24], finite innovation rate (FIR) theory [25], image and video enhancing (such as Hankel-like-matrix-based technology [1,2,4]), denoising schemes (such as the famous BM3D image-denoising technology [26] and nonlocal TV denoising technology [12]), etc. Various tensor-decomposition-based completion methods, convex optimization schemes, and fast optimization algorithm research systems can also be used to optimize image-inpainting methods.
The above methods are inseparable from the most basic problems in modeling and solving matrix low-rank constraints. The initial modeling approach is to minimize the number of nonzero singular values of the matrix, i.e., to minimize the $\ell_0$ norm of the singular values, but this is an NP-hard problem that cannot be solved directly. To approximate the optimal solution, various matrix low-rank constraint modeling schemes have been proposed to replace the $\ell_0$ norm, such as the $\ell_p$ norm, the weighted nuclear norm, the truncated nuclear norm, matrix factorization replacing the nuclear norm, and so on. However, to our knowledge, there has been no literature review or comparison of these methods on the same optimization problem.
This paper uses the low-rank property of the image matrix to optimize the image-inpainting model and algorithm under three kinds of pulse interference. Image-inpainting-modeling schemes based on the nuclear norm, truncated nuclear norm, weighted nuclear norm, and matrix-factorization-based F norm are reviewed, and their corresponding optimization iterative algorithms, such as the TSVT_ADMM algorithm, WSVT_ADMM algorithm, and UV_ADMM algorithm, are given. The experimental results of various inpainting methods are displayed visually and numerically, and a comparative analysis is given.
The structure and content of this paper are arranged as follows: Section 1 presents an introduction; Section 2 establishes the basis of the matrix low-rank constraint inpainting model and solution algorithm; Section 3 presents the experimental comparison; and the last section presents the conclusions.

2. The Matrix Low-Rank Constrained Inpainting Model and Its Solution Algorithm

Image-inpainting models based on a low-rank matrix are generally expressed as follows:
$$X^* = \arg\min_X \ \mathrm{rank}(\Phi X) \quad \text{s.t.} \quad \|Y - \Theta_\Omega X\| \le \varepsilon \qquad (1)$$
where $X$ represents the image (a two-dimensional matrix for a grayscale image; a three-dimensional tensor for an RGB image, a grayscale video, etc.); $\Theta_\Omega$ represents the interference operator, in which $\Omega$ is the set of interfered pixel positions; $X^*$ represents the optimal solution; $Y$ represents the interfered image; $\varepsilon$ represents the error tolerance, generally set to a small constant, such as $10^{-14}$; and $\Phi$ represents the low-rank transformation operation, so that $\Phi X$ transforms $X$ into a matrix or tensor with low rank, such as a low-rank matrix formed from similar local image blocks, a low-rank structured matrix formed through the annihilating filter relationship [1,2,4], or a low-rank matrix formed from the similarity between frames [7]. If videos or RGB images are treated as third- or higher-order tensors, the rank property may come from the tensor Tucker rank [27], the TT rank [19], etc. Under impulse noise interference, the operator $\Theta_\Omega$ generally has three representations, as detailed below.
The first representation is random-valued impulse noise (RVIN) [1]:
$$\Theta_\Omega X(i,j) = \begin{cases} V(i,j), & (i,j) \in \Omega \ \text{with random probability } p \\ X(i,j), & (i,j) \notin \Omega \ \text{with random probability } 1-p \end{cases}$$
The value of $V$ is random, within the range of $X$'s pixel values, such as 0–255, or 0–1 after normalization. $p$ is the interference rate, that is, the percentage of interfered pixels among the total number of pixels in the image.
The second representation is salt and pepper noise, a special kind of RVIN [1]:
$$\Theta_\Omega X(i,j) = \begin{cases} V_{\max}, & (i,j) \in \Omega \ \text{with random probability } p/2 \\ V_{\min}, & (i,j) \in \Omega \ \text{with random probability } p/2 \\ X(i,j), & (i,j) \notin \Omega \ \text{with random probability } 1-p \end{cases}$$
where V max is the maximum value of salt and pepper noise and V min is the minimum value of salt and pepper noise.
In addition, random pixel loss is also a typical problem in the research field of image repair [2,6,8]:
$$\Theta_\Omega X(i,j) = \begin{cases} 0, & (i,j) \in \Omega \ \text{with random probability } p \\ X(i,j), & (i,j) \notin \Omega \ \text{with random probability } 1-p \end{cases}$$
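The three interference operators above are straightforward to simulate. The following NumPy sketch (the function and parameter names are our own, not from the paper) applies each operator to a grayscale image normalized to the range 0–1:

```python
import numpy as np

def impulse_interference(x, p, kind="rvin", rng=None):
    """Apply one of the three interference operators Theta_Omega to image x.

    x    : 2-D float array with values in [0, 1] (normalized grayscale image)
    p    : interference rate (fraction of corrupted pixels)
    kind : "rvin" (random-valued impulse noise), "sp" (salt and pepper),
           or "missing" (random pixel loss)
    Returns the interfered image Y and the boolean mask of the set Omega.
    """
    rng = np.random.default_rng(rng)
    y = x.copy()
    omega = rng.random(x.shape) < p               # interfered-pixel set Omega
    if kind == "rvin":
        # random values within the pixel-value range [0, 1]
        y[omega] = rng.random(np.count_nonzero(omega))
    elif kind == "sp":
        salt = rng.random(x.shape) < 0.5          # split Omega evenly: p/2 salt, p/2 pepper
        y[omega & salt] = 1.0                     # V_max
        y[omega & ~salt] = 0.0                    # V_min
    elif kind == "missing":
        y[omega] = 0.0                            # lost pixels
    return y, omega
```

The mask `omega` is returned alongside `y` because every inpainting algorithm in this paper needs the set $\Omega$ to restore the uninterfered pixels at the end.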
The low-rank property is essentially another form of sparsity: a sparse constraint on a matrix minimizes the $\ell_0$ norm of the matrix elements, while a low-rank constraint minimizes the $\ell_0$ norm of the singular values of the matrix, i.e., $\min_X \mathrm{rank}(\Phi X) \Leftrightarrow \min_X \|\Phi X\|_0$. Since $\min_X \|\Phi X\|_0$ is nonconvex, the $\ell_p$ norm, in the form $\min_X \|\Phi X\|_p$, is commonly used as a convex surrogate [28], where $0 \le p \le 1$, $\|\Phi X\|_p = \sum_{i=1}^n \sigma_i^p$, and the $\sigma_i$ are the singular values of the matrix $\Phi X$ of size $n_1 \times n_2$, with $n = \min(n_1, n_2)$. The special case $p = 1$ of the $\ell_p$ norm is the nuclear norm $\|\Phi X\|_* = \sum_{i=1}^n \sigma_i$. Whether the chosen low-rank constraint form accurately approximates the $\ell_0$ norm has a significant impact on the inpainting effect. Let $\|\Phi X\|_p = \sum_{i=1}^n g_p(\sigma_i)$, where $g_p(\sigma_i) = \sigma_i^p$, $0 \le p \le 1$. For the $\ell_0$ and $\ell_1$ norms, the function $g_p(\sigma_i)$ is
$$g_0(\sigma_i) = \begin{cases} 0, & \sigma_i = 0 \\ 1, & \sigma_i \ne 0 \end{cases}, \qquad g_1(\sigma_i) = \sigma_i.$$
Normalizing $\sigma_i$ to the range 0–1, the curves of $g_p(\sigma_i)$ at $p$ = 0, 0.3, 0.5, 0.7, and 1 are plotted in Figure 1 as a visualization of the convex approximation. It can be seen that the smaller the value of $p$, the closer the approximation function $g_p(\sigma_i)$ is to the $\ell_0$ norm curve.
As the simplest convex surrogate of the $\ell_0$ norm, the nuclear norm is the most common choice in low-rank constraint modeling. To further improve the accuracy of the low-rank approximation, we can use the weighted $\ell_1$ norm of the singular values of the matrix, i.e., the weighted nuclear norm [29,30,31,32], or the truncated nuclear norm [33,34,35,36], in place of the nuclear norm. Common regularization constraint schemes for low-rank matrices are summarized as follows.

2.1. Nuclear Norm $\|X\|_*$

We use the minimized nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
$$X^* = \arg\min_X \|\Phi X\|_* \quad \text{s.t.} \quad \|Y - \Theta_\Omega X\| \le \varepsilon \qquad (2)$$
where $Y$ is the impulse interference image of size $n_1 \times n_2$. The regularization parameter $\lambda$ is introduced to convert model (2) into the following unconstrained form:
$$X^* = \arg\min_X \frac{1}{2}\|Y - \Theta_\Omega X\|_F^2 + \lambda \|\Phi X\|_* \qquad (3)$$
Three algorithms can be used to solve Equation (3). The most commonly used is the singular value shrinkage/thresholding (SVT) algorithm [37], which proceeds as follows. First, perform the singular value decomposition $U \Sigma V^H = \mathrm{SVD}(Y)$, $\Sigma = \mathrm{diag}\{\sigma_i\}_{1 \le i \le n}$, where $\mathrm{diag}(\cdot)$ forms a diagonal matrix from its elements and $n = \min(n_1, n_2)$. Then, apply the soft-thresholding operation $D_\lambda(\sigma_i) = \max(0, \sigma_i - \lambda)$ to the singular values [38] and set $\Sigma_{SVT} = \mathrm{diag}\{D_\lambda(\sigma_i)\}_{1 \le i \le n}$. Finally, obtain the solution $X^* = U \Sigma_{SVT} V^H$.
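The singular value shrinkage operator at the heart of SVT can be sketched in a few lines of NumPy (the function name `svt` is ours; this is only the operator $D_\lambda$ applied to the singular values, not the full inpainting loop):

```python
import numpy as np

def svt(y, lam):
    """Singular value thresholding: SVD the input, soft-threshold the
    singular values by lam, and rebuild the matrix."""
    u, s, vh = np.linalg.svd(y, full_matrices=False)
    s_thr = np.maximum(s - lam, 0.0)   # D_lambda(sigma_i) = max(0, sigma_i - lam)
    return u @ np.diag(s_thr) @ vh
```

Because small singular values are clipped to zero, the output is always of lower (or equal) rank than the input, which is exactly the low-rank-promoting behavior the model relies on.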
Jain et al. proposed the singular value projection (SVP) algorithm to solve model (2) [39]. With the development of large-scale data processing and distributed computing, the alternating direction method of multipliers (ADMM) has become a mainstream optimization algorithm [40]. When using ADMM to solve (3), an auxiliary variable $Z = \Phi X$ and a residual $L$ must first be introduced to split model (3) into multiple sub-problems solved iteratively:
$$(X^*, Z^*, L^*) = \arg\min_{X,Z,L} \frac{1}{2}\|Y - \Theta_\Omega X\|_F^2 + \lambda \|Z\|_* + \frac{\lambda\rho}{2}\|\Phi X - Z + L\|_F^2$$
where $\rho > 0$ is the introduced penalty parameter, and the SVT method is used to solve the $Z$ sub-problem.
In this paper, we use the SVT, SVP, and ADMM algorithms to solve the nuclear-norm-based image-inpainting model, referred to as the SVT, SVP, and n_ADMM methods, respectively. The details of the SVT, SVP, and n_ADMM algorithms are shown in Algorithms 1–3, respectively.
Algorithm 1. The SVT algorithm for solving model (3)
Input: Y, Θ_Ω, ρ, λ, the maximum number of iterations t_max, convergence tolerance η_tol = 10^−6.
Initialization: (m, n) = size(Y), X^(0) = zeros(m, n), Y_d^(0) = Y, t = 1.
While t < t_max and η(t) > η_tol do
  [U, S, V] = SVD(Y_d^(t−1)).
  σ_0 = diag(S), τ = 1/ρ, δ = 1 − min(τ/σ_0, 1), σ_0 = σ_0 .* δ.   (soft thresholding: max(0, σ_i − τ))
  Update X^(t) = U · diag(σ_0) · V^H.
  Update Y_d^(t) = Y_d^(t−1) + λ(Y − X^(t)).
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  t = t + 1.
End while
X^*(i, j) = X^(t)(i, j) for (i, j) ∈ Ω; X^*(i, j) = Y(i, j) for (i, j) ∉ Ω.
Output: X^*.
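As a concrete illustration, Algorithm 1 transcribes almost line for line into NumPy. In the sketch below, the threshold $\sigma_i(1 - \min(\tau/\sigma_i, 1)) = \max(0, \sigma_i - \tau)$ is written in its max form; the default parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def svt_inpaint(y, omega, rho=10.0, lam=1.0, t_max=200, tol=1e-6):
    """SVT image inpainting following Algorithm 1.

    y     : interfered image
    omega : boolean mask, True at interfered pixels (the set Omega)
    """
    x_prev = np.zeros_like(y)
    yd = y.copy()
    tau = 1.0 / rho
    for _ in range(t_max):
        u, s, vh = np.linalg.svd(yd, full_matrices=False)
        s = np.maximum(s - tau, 0.0)                 # soft-threshold singular values
        x = u @ np.diag(s) @ vh
        yd = yd + lam * (y - x)                      # data-consistency update
        eta = np.linalg.norm(x - x_prev) / max(np.linalg.norm(x), 1e-12)
        x_prev = x
        if eta < tol:
            break
    out = x_prev.copy()
    out[~omega] = y[~omega]                          # keep uninterfered pixels from Y
    return out
```

The final masking step mirrors the last line of Algorithm 1: only pixels in $\Omega$ take their values from the low-rank estimate; the rest are copied from the observed image.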
Algorithm 2. The SVP algorithm for solving model (2)
Input: Y, Θ_Ω, step size τ = 0.01, the rank r, the maximum number of iterations t_max, convergence tolerance η_tol = 10^−6.
Initialization: (m, n) = size(Y), X^(0) = zeros(m, n), Y_d^(0) = Y, t = 1.
While t < t_max and η(t) > η_tol do
  Y_d^(t) = X^(t−1) − τ(X^(t−1) − Y).
  Θ_Ω Y_d^(t) = Θ_Ω Y.   (restore the observed entries)
  [U, S, V] = SVD(Y_d^(t)).
  σ_0 = diag(S), σ_0 = σ_0(1:r).   (keep the r largest singular values)
  Update X^(t) = U(:, 1:r) · diag(σ_0) · V(:, 1:r)^H.
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  X^(t)(i, j) = X^(t)(i, j) for (i, j) ∈ Ω; X^(t)(i, j) = Y(i, j) for (i, j) ∉ Ω. t = t + 1.
End while
X^* = X^(t).
Output: X^*.
Algorithm 3. The n_ADMM algorithm for solving model (3)
Input: Y, Θ_Ω, ρ, λ, the maximum number of iterations t_max, convergence tolerance η_tol = 10^−6.
Initialization: (m, n) = size(Y), X^(0) = zeros(m, n), Y_d^(0) = Y, Z^(0) = zeros(m, n), L^(0) = zeros(m, n), t = 1.
While t < t_max and η(t) > η_tol do
  Update X^(t) = (Y_d^(t−1) + λρ(Z^(t−1) − L^(t−1))) ./ (Θ_Ω + λρ).
  [U, S, V] = SVD(X^(t) + L^(t−1)).
  σ_0 = diag(S) − 1/ρ, σ_0(σ_0 < 0) = 0.   (soft thresholding)
  Update Z^(t) = U · diag(σ_0) · V^H.
  Update L^(t) = L^(t−1) + X^(t) − Z^(t).
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  t = t + 1.
End while
X^*(i, j) = X^(t)(i, j) for (i, j) ∈ Ω; X^*(i, j) = Y(i, j) for (i, j) ∉ Ω.
Output: X^*.

2.2. Weighted Nuclear Norm $\sum_i \mathrm{fun}(\sigma_i)$

The weighted nuclear norm $\sum_i \mathrm{fun}(\sigma_i)$ uses weighted singular value constraints to approximate the $\ell_0$ constraint on the singular values [29,30,31,32]. It is a balanced constraint scheme that shrinks large singular values proportionally less and small singular values proportionally more, and it can therefore be more accurate than the nuclear norm (i.e., the plain $\ell_1$ constraint on the singular values). Here, $\mathrm{fun}(\cdot)$ is the weighting function of each singular value $\sigma_i$ of the matrix $\Phi X$, where $[U, \mathrm{diag}\{\sigma_i\}_{i=1:\min(n_1,n_2)}, V] = \mathrm{SVD}(\Phi X)$. We use the weighted nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
$$X^* = \arg\min_X \sum_i \mathrm{fun}(\sigma_i) \quad \text{s.t.} \quad \|Y - \Theta_\Omega X\| \le \varepsilon \qquad (4)$$
Then, we introduce the regularization parameter λ and convert model (4) into an unconstrained form:
$$X^* = \arg\min_X \frac{1}{2}\|Y - \Theta_\Omega X\|_F^2 + \lambda \sum_i \mathrm{fun}(\sigma_i) \qquad (5)$$
There are many kinds of weighting functions $\mathrm{fun}(\cdot)$; the $p$-norm ($0 < p < 1$) is the simplest weighting scheme, namely, $\mathrm{fun}(\sigma_i) = g_p(\sigma_i)$. Reference [30] reviewed various weighting functions that approximate the $\ell_0$ norm of the singular values, such as SCAD [41], MCP [42], Logarithm [43], Geman [44], and Laplace [45,46], among which the Logarithm scheme is the most classic. In the experimental section of this paper, we choose the Logarithm scheme for our comparisons. Its weighting function is shown below:
$$\mathrm{fun}(\sigma_i) = \frac{\lambda}{\log(\gamma + 1)} \log(\gamma \sigma_i + 1) \qquad (6)$$
where $\gamma > 0$ is a parameter determined empirically.
The simplest and most direct solution to model (4) is the weighted SVT (WSVT) algorithm: set the weights $w_i = \mathrm{fun}(\sigma_i)$, $i = 1, 2, \ldots, n$, and then $X^* = U \Sigma_{WSVT} V^H$, where $\Sigma_{WSVT} = \mathrm{diag}\{D_\lambda(w_i \sigma_i)\}_{1 \le i \le n}$.
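A WSVT shrinkage step with the Logarithm weighting of Equation (6) can be sketched as follows; we follow the update used in Algorithm 4 (shrink each singular value by its weight divided by $\rho$ and clip at zero), and the function names are ours:

```python
import numpy as np

def log_weight(sigma, lam, gamma):
    """Logarithm weighting: fun(sigma) = lam / log(gamma + 1) * log(gamma * sigma + 1)."""
    return lam / np.log(gamma + 1.0) * np.log(gamma * sigma + 1.0)

def wsvt(y, lam, gamma, rho):
    """Weighted SVT step: each singular value is shrunk by its own
    weight w_i / rho and clipped at zero, as in Algorithm 4."""
    u, s, vh = np.linalg.svd(y, full_matrices=False)
    w = log_weight(s, lam, gamma)
    s_thr = np.maximum(s - w / rho, 0.0)
    return u @ np.diag(s_thr) @ vh
```

Because the Logarithm weight grows sublinearly in $\sigma$, large singular values lose a proportionally smaller fraction of their magnitude than small ones, which is the balancing behavior described above.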
We use ADMM to solve the weighted nuclear norm image-inpainting problem (5). We introduce auxiliary variables Z = Φ X and residuals L to transform model (5) into multiple sub-problems for iterative solution:
$$(X^*, Z^*, L^*) = \arg\min_{X,Z,L} \frac{1}{2}\|Y - \Theta_\Omega X\|_F^2 + \lambda \sum_i \mathrm{fun}(\sigma_i(Z)) + \frac{\lambda\rho}{2}\|\Phi X - Z + L\|_F^2 \qquad (7)$$
where ρ > 0 is the introduced penalty parameter, and the WSVT algorithm is used to solve the sub-problem Z . The combination of the weighted SVT algorithm and ADMM algorithm can obtain more accurate iterative estimations. We use the ADMM algorithm to solve the weighted-nuclear-norm-based image-inpainting model (7), and name it the WSVT_ADMM method. The details of the WSVT_ADMM algorithm used to solve model (7) are shown in Algorithm 4.
Algorithm 4. The WSVT_ADMM algorithm for solving model (7)
Input: Y, Θ_Ω, ρ, θ, λ, γ, the maximum number of iterations t_max, decay factor ς = 0.9, convergence tolerance η_tol = 10^−6.
Initialization: (m, n) = size(Y), X^(0) = zeros(m, n), L^(0) = zeros(m, n), Y_d^(0) = Y, λ = ς · max(Y(:)), t = 1.
While t < t_max and η(t) > η_tol do
  [U, S, V] = SVD(Y_d^(t−1)).
  σ_0 = diag(S), w = fun(σ_0, γ, λ), σ_0 = σ_0 − w/ρ, σ_0(σ_0 < 0) = 0.
  Update X^(t) = U · diag(σ_0) · V^H.
  Update Y_d^(t) = Y_d^(t−1) + θ(Y − X^(t)).
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  λ = ς · λ; X^(t)(i, j) = X^(t)(i, j) for (i, j) ∈ Ω, X^(t)(i, j) = Y(i, j) for (i, j) ∉ Ω; t = t + 1.
End while
X^* = X^(t).
Output: X^*.

2.3. Truncated Nuclear Norm

In general, the singular value curve of a low-rank matrix decays rapidly, roughly exponentially, from large to small, and the trailing singular values approach 0. Minimizing the nuclear norm therefore mainly constrains the large singular values. To fully utilize the small singular values, a truncated nuclear norm minimization scheme can be used, whose purpose is to constrain only the small singular values [33,34,35,36]. We use the truncated nuclear norm as a low-rank constraint to establish an image-inpainting model, as follows:
$$X^* = \arg\min_X \ \|\Phi X\|_* - \mathrm{Tr}(U^H \Phi X V) \quad \text{s.t.} \quad \|Y - \Theta_\Omega X\| \le \varepsilon \qquad (8)$$
where $\mathrm{Tr}(\cdot)$ is the trace, and $U$, $V$ consist of the first $r$ left and right singular vectors of $\Phi X$, so that $\mathrm{Tr}(U^H \Phi X V)$ equals the sum of the $r$ largest singular values. Hence, $\|\Phi X\|_* - \mathrm{Tr}(U^H \Phi X V)$ cancels the $r$ largest singular values and retains only the $n - r$ smallest ones. We introduce a regularization parameter $\lambda$ and convert (8) into an unconstrained form:
$$X^* = \arg\min_X \frac{1}{2}\|Y - \Theta_\Omega X\|_F^2 + \lambda \left( \|\Phi X\|_* - \mathrm{Tr}(U^H \Phi X V) \right) \qquad (9)$$
where $U$, $V$ are the truncated left and right singular vector matrices of $\Phi X$. The essence of truncated nuclear norm minimization is to minimize the sum of the smaller singular values of the constrained low-rank matrix.
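Because $\mathrm{Tr}(U^H \Phi X V)$ equals the sum of the $r$ largest singular values, the truncated nuclear norm has a simple closed form in terms of the singular values. The following NumPy sketch (function names ours) computes it both ways, which is a useful sanity check when implementing the TSVT algorithm:

```python
import numpy as np

def truncated_nuclear_norm(x, r):
    """||X||_* - Tr(U_r^H X V_r): the sum of the n - r smallest singular values."""
    s = np.linalg.svd(x, compute_uv=False)   # singular values, descending
    return s[r:].sum()

def truncated_nuclear_norm_trace(x, r):
    """Same quantity via the trace form of Equations (8) and (9)."""
    u, s, vh = np.linalg.svd(x, full_matrices=False)
    # U_r^H X V_r is the r x r diagonal block of the largest singular values
    return s.sum() - np.trace(u[:, :r].conj().T @ x @ vh[:r].conj().T)
```

For a full-rank matrix the truncated norm is positive; it reaches zero exactly when the matrix has rank at most $r$, which is why minimizing it pushes the solution toward rank $r$.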
The truncated-nuclear-norm-based model can be solved by the APGL or ADMM algorithms. This paper combines the ADMM algorithm with the SVT algorithm to solve the truncated-nuclear-norm-based image-inpainting model (9), and abbreviates it as the TSVT (truncated SVT) algorithm. The details of the TSVT algorithm used to solve model (9) are shown in Algorithm 5.
Algorithm 5. The TSVT algorithm for solving model (9)
Input: Y, Θ_Ω, ρ, λ, the maximum number of iterations t_max, the truncated rank r, convergence tolerance η_tol = 10^−6.
Initialization: (m, n) = size(Y), X^(0) = zeros(m, n), Y_d^(0) = Y, Z^(0) = zeros(m, n), L^(0) = zeros(m, n), t = 1.
While t < t_max and η(t) > η_tol do
  τ = 1/ρ, T = Z^(t−1) − τ · L^(t−1), [U, S, V] = SVD(T).
  σ_0 = diag(S), δ = 1 − min(τ/σ_0, 1), σ_0 = σ_0 .* δ.   (soft thresholding: max(0, σ_i − τ))
  Update X^(t) = U · diag(σ_0) · V^H.
  [U_Z, S_Z, V_Z] = SVD(Z^(t−1)), B = U_Z(:, 1:r) · (V_Z(:, 1:r))^H.
  Update Z^(t) = X^(t) + τ(L^(t−1) + B); Θ_Ω Z^(t) = Θ_Ω Y.
  Update L^(t) = L^(t−1) + ρ(X^(t) − Z^(t)).
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  t = t + 1.
End while
X^*(i, j) = X^(t)(i, j) for (i, j) ∈ Ω; X^*(i, j) = Y(i, j) for (i, j) ∉ Ω.
Output: X^*.

2.4. The F Norm of UV Matrix Factorization

Solving the nuclear norm minimization problem involves time-consuming matrix singular value decomposition. Srebro et al. [47] proposed and proved the property $\|X\|_* = \min_{UV^H = X} \frac{1}{2}\left( \|U\|_F^2 + \|V\|_F^2 \right)$. Since then, the F norm of the UV matrix factorization has been used in place of the nuclear norm in many applications to reduce computation time [1,48]. We use the minimized F norm of the UV matrix factorization as a low-rank constraint to establish an image-inpainting model, as follows:
$$X^* = \arg\min_X \frac{1}{2}\left( \|U\|_F^2 + \|V\|_F^2 \right) \quad \text{s.t.} \quad \|Y - \Theta_\Omega X\| \le \varepsilon, \ UV^H = \Phi X \qquad (10)$$
Then, we introduce the regularization parameter λ and penalty parameter ρ > 0 to convert model (10) into an unconstrained form:
$$X^* = \arg\min_{UV^H = \Phi X} \frac{1}{2}\|\Theta_\Omega X - Y\|_F^2 + \frac{\lambda}{2}\left( \|U\|_F^2 + \|V\|_F^2 \right) + \frac{\lambda\rho}{2}\|\Phi X - UV^H + L\|_F^2 \qquad (11)$$
where $L$ is the residual variable. The initial values of $U$ and $V$ can be obtained using the LMaFit method [2,49]. Model (11) is commonly solved with the ADMM algorithm, which we name the UV_ADMM method. The details of the UV_ADMM algorithm are shown in Algorithm 6.
Algorithm 6. The UV_ADMM algorithm for solving model (11)
Input: Y, Θ_Ω, ρ, λ, the maximum number of iterations t_max, convergence tolerance η_tol.
Initialization: U^(0) and V^(0) by the LMaFit method [49], (m, n) = size(Y), X^(0) = zeros(m, n), Y_d^(0) = Y, L^(0) = zeros(m, n), t = 1.
While t < t_max and η(t) > η_tol do
  Update X^(t) = [Y_d^(t−1) + λρ(U^(t−1)(V^(t−1))^H − L^(t−1))] ./ (Θ_Ω + λρ).
  Update U^(t) = ρ(X^(t) + L^(t−1)) · V^(t−1) · inv(eye(r) + ρ(V^(t−1))^H V^(t−1)).
  Update V^(t) = ρ(X^(t) + L^(t−1))^H · U^(t) · inv(eye(r) + ρ(U^(t))^H U^(t)).
  Update L^(t) = L^(t−1) + X^(t) − U^(t)(V^(t))^H.
  Update η(t) = ‖X^(t) − X^(t−1)‖_F / ‖X^(t)‖_F.
  t = t + 1.
End while
X^*(i, j) = X^(t)(i, j) for (i, j) ∈ Ω; X^*(i, j) = Y(i, j) for (i, j) ∉ Ω.
Output: X^*.
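Srebro's identity $\|X\|_* = \min_{UV^H = X} \frac{1}{2}(\|U\|_F^2 + \|V\|_F^2)$, which underlies model (10), can be checked numerically: the balanced factorization $U = U_0 \Sigma^{1/2}$, $V = V_0 \Sigma^{1/2}$ built from the SVD attains the minimum, while an unbalanced factorization of the same matrix scores worse. A NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
u, s, vh = np.linalg.svd(x, full_matrices=False)

nuc = s.sum()                              # nuclear norm ||X||_*

# balanced factorization U0 V0^H = X attains the minimum
u0 = u * np.sqrt(s)                        # U_0 Sigma^{1/2}  (scales columns)
v0 = vh.conj().T * np.sqrt(s)              # V_0 Sigma^{1/2}
attained = 0.5 * (np.linalg.norm(u0, "fro") ** 2 + np.linalg.norm(v0, "fro") ** 2)

# an unbalanced factorization of the same X can only score higher
u1, v1 = u0 * 2.0, v0 / 2.0                # u1 @ v1^H still equals X
other = 0.5 * (np.linalg.norm(u1, "fro") ** 2 + np.linalg.norm(v1, "fro") ** 2)
```

Here `attained` equals `nuc` while `other` exceeds it; in UV_ADMM the two regularization terms $\frac{\lambda}{2}\|U\|_F^2$ and $\frac{\lambda}{2}\|V\|_F^2$ drive the iterates toward such a balanced, nuclear-norm-attaining factorization without ever computing an SVD inside the loop.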
Compared with the n_ADMM method, the UV_ADMM method based on the F norm of the UV factorization avoids the time-consuming SVD in each iteration, making it more suitable for modeling large matrices with low-rank constraints. This method and the weighted nuclear norm method are the most commonly used in low-rank matrix constrained models. The above models and their solution algorithms are summarized in Table 1.
In addition, other algorithms can solve the above models, for example, algorithms commonly used for sparsity-constrained models. A sparse constraint on a signal minimizes the $\ell_0$ norm of the signal elements, while a low-rank constraint minimizes the $\ell_0$ norm of the singular values of the signal matrix. The optimization of low-rank constraint models therefore has much in common with that of sparse constraint models, and iterative algorithms developed for sparse models can be applied to matrix low-rank constraint models. Examples include convex relaxation algorithms, which find a sparse or low-rank approximation by transforming the nonconvex problem into a convex one solved iteratively; among them, the conjugate gradient (CG) algorithm, the iterative soft thresholding (IST) algorithm [50], the Split Bregman algorithm [51], and the majorize-minimize (MM) algorithm [52] can be adapted flexibly to different optimization models.

3. Comparative Experiments

In this section, we compare the above methods on satellite-image-inpainting problems. We simulated impulse interference on satellite images with an interference rate of 30% (the interference rate is the percentage of interfered pixels among the total number of image pixels). The three kinds of impulse interference were as follows: A. random impulse interference; B. salt and pepper impulse interference; and C. random missing pixels. The satellite images in this paper are sourced from the public dataset DOTA v.2.0 (https://captain-whu.github.io/DOTA/dataset.html, accessed on 12 December 2020), with images provided by the China Resources Satellite Data and Application Center, satellite GF-1, satellite GF-2, etc. The methods compared are SVT, SVP, n_ADMM, TSVT_ADMM, WSVT_ADMM, and UV_ADMM. For a fair comparison, each method is run with its optimal parameters so that every method shows its most representative performance.
The relative L2 norm error (RLNE) and the structural similarity (SSIM) [53] are used as image-inpainting quality indicators. The RLNE is a pixel-wise error index (the smaller, the better), while the SSIM index is more consistent with human visual perception in image evaluation (the larger the SSIM value, the better the inpainting quality). All simulations were carried out under Windows 10 in MATLAB R2019a on a PC with an Intel Core i7 CPU at 2.8 GHz and 16 GB of memory.
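For reference, RLNE is the standard relative $\ell_2$ error between the reconstruction and the ground truth (the paper does not restate the formula, so this definition is our assumption). A minimal sketch:

```python
import numpy as np

def rlne(x_ref, x_rec):
    """Relative L2 norm error: ||x_rec - x_ref||_2 / ||x_ref||_2.
    0 means perfect recovery; smaller is better."""
    return np.linalg.norm(x_rec - x_ref) / np.linalg.norm(x_ref)
```

SSIM, by contrast, is a windowed perceptual metric and is best taken from an existing image library rather than reimplemented.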
A gray satellite image and its singular value curve are shown in Figure 2a and Figure 2b, respectively. The singular values descend rapidly from large to small, and most of them tend to zero, indicating that the image is approximately low rank. Three examples of impulse-interfered satellite images are shown in Figure 3, where the original image is that of Figure 2a. It can be seen that the 30% interference rate causes obvious information loss in the building shapes, layout, gray-level shading, and other features of the original image.
The comparison of the average values (RLNE, SSIM, and running time) of six image-inpainting methods under the interference of random impulse, salt and pepper noise, and missing pixels is shown in Table 2. The visual comparison of the six image-inpainting methods under the interference of salt and pepper noise is shown in Figure 4.
Based on the above visual and numerical comparison, we analyze the experimental methods below.
The matrix rank constraint method based on the F norm of the UV matrix factorization (i.e., the UV_ADMM method) is roughly equivalent in effectiveness to the method based on the nuclear norm constraint (i.e., the n_ADMM method). Overall, the n_ADMM method is slightly better, improving the RLNE index by about 0.3% and the SSIM index by 0.3–1%.
Since the nuclear-norm-based SVT, SVP, and n_ADMM methods, the weighted-nuclear-norm-based WSVT_ADMM method, and the truncated-nuclear-norm-based TSVT method all require a time-consuming SVD in each iteration, the UV_ADMM method based on the F norm of the UV factorization has a significant advantage in runtime. However, UV_ADMM does not achieve more accurate results than the other methods, because it requires an initial estimate of the rank (for example, via the LMaFit initialization), and this estimated rank is not very accurate, leading to an inexact low-rank constraint. Thus, the UV-factorization-based method is more commonly used for large-scale low-rank matrix calculations, where avoiding the per-iteration SVD greatly reduces the inpainting time.
Since the weighted and truncated nuclear norms better approximate the $\ell_0$ norm of the singular values, the WSVT_ADMM and TSVT methods are significantly more accurate than the nuclear-norm-based methods (SVT, SVP, n_ADMM) in terms of inpainting.
In summary, when solving the same image restoration problem, all the matrix low-rank constraint schemes discussed above are effective, but their solution accuracy and computation time vary. The weighted scheme obtains more accurate repair results than the other schemes. In terms of computation time, the matrix-factorization-based scheme is the fastest because it avoids the time-consuming SVD, but its repair accuracy is slightly lower than that of the weighted, truncated nuclear norm, and nuclear norm schemes. In addition, under the same low-rank constraint scheme, the choice of solution algorithm also affects restoration accuracy: for example, the nuclear norm scheme solved with the SVP algorithm restores images significantly better than with the SVT algorithm, with the ADMM-based solution in second place. Overall, the repair effects of the weighted nuclear norm and truncated nuclear norm schemes are clearly better than that of the nuclear norm scheme.

4. Conclusions

In machine vision applications, satellite images may suffer from three forms of impulse noise interference. In this paper, we use the low-rank characteristics of the image matrix to optimize and repair images under these three kinds of impulse interference and provide the corresponding optimization algorithms. First, image-inpainting-modeling schemes based on the nuclear norm, the truncated nuclear norm, the weighted nuclear norm, and the matrix factorization F norm are reviewed. Then, the corresponding iterative optimization algorithms are provided, such as the TSVT_ADMM, WSVT_ADMM, and UV_ADMM algorithms. Finally, the experimental results of the various matrix-rank-constraint-based methods are presented visually and numerically, together with a comparative analysis. The results show that all the methods considered can repair images to a certain extent and suppress certain forms of interference noise. Among them, the methods based on the weighted nuclear norm and the truncated nuclear norm achieve the best repair effects, while the methods based on the matrix factorization F norm take the shortest time and can be used for large-scale low-rank matrix calculation.

Author Contributions

Conceptualization, S.M.; methodology, S.M.; investigation, W.Y. and Z.L.; resources, S.M. and Z.L.; writing—original draft preparation, S.M. and F.C.; writing—review and editing, S.M., W.Y. and S.F.; supervision, L.L. and S.F.; funding acquisition, S.M. and L.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by the National Key Laboratory of Science and Technology on Space Microwave (No. HTKJ2021KL504012), and supported by the Science and Technology Innovation Cultivation Fund of Space Engineering University (No. KJCX-2021-17) and the Information Security Laboratory of National Defense Research and Experiment (No. 2020XXAQ02).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. The curves of g_p(σ_i), where p = 0, 0.3, 0.5, 0.7, and 1.
Figure 2. A gray satellite image and its singular values curve. (a) The gray satellite image; (b) the singular values curve of the image in (a).
Figure 3. The interference satellite image (interference rate 30%): (a) random impulse interference pattern; (b) salt and pepper impulse interference pattern; (c) random missing pixel pattern.
Figure 4. Visual comparison of six image-inpainting methods under the interference of salt and pepper noise: (a) original image; (b) interference image; (c) SVT; (d) SVP; (e) n_ADMM; (f) TSVT_ADMM; (g) WSVT_ADMM; (h) UV_ADMM.
Table 1. Modeling and solving algorithms based on the low-rank constraint in this paper.

| Constrained modeling        | Solution algorithm | Method abbreviation |
|-----------------------------|--------------------|---------------------|
| Nuclear norm                | SVT                | SVT                 |
| Nuclear norm                | SVP                | SVP                 |
| Nuclear norm                | ADMM               | n_ADMM              |
| Truncated nuclear norm      | ADMM               | TSVT_ADMM           |
| Weighted nuclear norm       | ADMM               | WSVT_ADMM           |
| Matrix decomposition F norm | ADMM               | UV_ADMM             |
Table 2. Numerical comparison of six image-inpainting methods under three types of interference. (In the original, the best value in each row is bolded.)

| Noise form            | Index            | Untreated | SVT     | SVP    | n_ADMM | TSVT_ADMM | WSVT_ADMM | UV_ADMM |
|-----------------------|------------------|-----------|---------|--------|--------|-----------|-----------|---------|
| Random impulse        | RLNE (%)         | 45.88     | 18.36   | 9.83   | 19.70  | 8.43      | 8.19      | 20.01   |
|                       | SSIM (%)         | 34.77     | 77.50   | 92.19  | 74.96  | 94.25     | 94.49     | 74.30   |
|                       | Running time (s) | /         | 11.7357 | 0.5484 | 1.3883 | 1.0401    | 2.0523    | 0.3375  |
| Salt and pepper noise | RLNE (%)         | 69.72     | 19.96   | 9.76   | 19.73  | 8.41      | 8.22      | 20.29   |
|                       | SSIM (%)         | 18.63     | 74.18   | 92.25  | 74.79  | 94.21     | 94.39     | 73.80   |
|                       | Running time (s) | /         | 5.34    | 0.4793 | 1.1451 | 1.2996    | 2.216     | 0.1993  |
| Missing pixels        | RLNE (%)         | 54.77     | 12.96   | 9.84   | 8.73   | 8.43      | 8.19      | 8.90    |
|                       | SSIM (%)         | 32.46     | 87.91   | 92.25  | 94.02  | 94.25     | 94.49     | 93.69   |
|                       | Running time (s) | /         | 9.9569  | 0.5418 | 0.5194 | 0.9842    | 2.3475    | 0.2543  |
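The RLNE figures reported in Table 2 are a simple relative Frobenius-norm error between the restored image and the reference, which the small helper below computes (an illustrative sketch of ours; the SSIM column would typically be computed with a library implementation such as scikit-image's `structural_similarity`):

```python
import numpy as np

def rlne_percent(x_rec, x_ref):
    # Relative L2 norm error in percent: ||x_rec - x_ref||_F / ||x_ref||_F.
    # Lower is better; 0 means exact recovery.
    return 100.0 * np.linalg.norm(x_rec - x_ref) / np.linalg.norm(x_ref)
```

For example, a restoration that overshoots every pixel by 10% has an RLNE of 10%.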

