Article

Weighted Schatten p-Norm Low Rank Error Constraint for Image Denoising

1
College of Computer and Information Engineering, Henan Normal University, Xinxiang 453007, China
2
Engineering Technology Research Center for Computing Intelligence and Data Mining, Xinxiang 453007, China
*
Author to whom correspondence should be addressed.
Entropy 2021, 23(2), 158; https://doi.org/10.3390/e23020158
Submission received: 8 January 2021 / Revised: 22 January 2021 / Accepted: 25 January 2021 / Published: 27 January 2021

Abstract

Traditional image denoising algorithms based on low rank matrix restoration obtain prior information directly from noisy images and pay little attention to the nonlocal self-similarity error between the clear image and the noisy image. To solve this problem, this paper proposes a new image denoising algorithm based on low rank matrix restoration. Exploiting the nonlocal self-similarity of the image, the proposed algorithm introduces the nonlocal self-similarity error between the clear image and the noisy image into the weighted Schatten p-norm minimization model. In addition, the low rank error is constrained with the Schatten p-norm to obtain a better low rank matrix and thereby improve denoising performance. The results demonstrate that, on a classic data set, compared with block matching 3D filtering (BM3D), weighted nuclear norm minimization (WNNM), weighted Schatten p-norm minimization (WSNM), and FFDNet, the proposed algorithm achieves a higher peak signal-to-noise ratio, better denoising and visual effects, and improved robustness and generalization.

1. Introduction

Images contain a great deal of information. However, owing to noise, important information may be lost during image acquisition, compression, transmission, and storage, which complicates subsequent image processing. Therefore, image denoising is a necessary preprocessing step [1]. The degradation model for the denoising problem can be expressed as $Y = X + N$, where $N$ is usually assumed to be additive white Gaussian noise with standard deviation $\sigma_n$. The purpose of image denoising is to restore the clean image $X$ from the noisy observation $Y$ as accurately as possible while preserving important detail features (such as edges and textures). Image denoising is a typical ill-posed problem, which can be solved using prior knowledge of the image [2]. In the past few decades, many effective image prior models have been developed, such as regularization methods based on total variation [3,4,5], sparse representation [6,7], low rank representation [8,9], nonlocal self-similarity [10,11], and deep learning [12].
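The degradation model above can be sketched in a few lines of NumPy; the image here is a random array standing in for a real test image such as those in Set12, and the value of the noise level is only an example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a clean 8-bit image X (a real test image would go here).
X = rng.uniform(0.0, 255.0, size=(64, 64))

sigma_n = 20.0                          # assumed noise standard deviation
N = rng.normal(0.0, sigma_n, X.shape)   # additive white Gaussian noise
Y = X + N                               # degraded observation Y = X + N

# The empirical std of the residual Y - X is close to sigma_n.
print(round(float(np.std(Y - X)), 1))
```

The denoiser only ever sees `Y` and `sigma_n`; recovering `X` from them is the ill-posed problem the rest of the paper addresses.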
Recently, image prior methods based on nonlocal self-similarity [13,14] and low rank matrix approximation [15,16,17,18] have been able to preserve image edge details well while denoising, and have achieved considerable success in image denoising [19,20]. Low rank matrix approximation aims to recover the underlying low rank matrix from degraded observations and is widely used in computer vision and machine learning. It can be divided into two categories: low rank matrix decomposition and rank minimization. This study focuses on the rank minimization method, whose main idea is to reconstruct the data matrix by imposing additional rank constraints on the estimated matrix [21]. Cai et al. proposed the nuclear norm minimization (NNM) model and applied it to image denoising [16]; however, NNM tends to shrink the singular values excessively and treats different singular values equally. In practical problems, the larger singular values of a data matrix carry the information of its principal directions; in an image data matrix, they provide the main edge and texture information. Therefore, to restore an image from a damaged one, the larger singular values should be shrunk less and the smaller singular values more. Clearly, the traditional NNM model is not flexible enough for such problems and cannot accurately estimate the rank of the matrix. To address this, Nie et al. proposed the Schatten p-norm model, which estimates the matrix rank better than NNM [22]. However, like the standard nuclear norm, most Schatten p-norm based models treat all singular values equally and cannot estimate the matrix rank well in many practical problems (such as image inverse problems). Gu et al. proposed a weighted nuclear norm minimization (WNNM) model to further improve the flexibility of the NNM model [23].
Compared with NNM, WNNM assigns different weights to different singular values, which makes the soft-threshold values more reasonable. Xie et al. then proposed a more flexible model, weighted Schatten p-norm minimization (WSNM), which assigns weights to different singular values and better approximates the original low rank matrix approximation problem; WNNM is a special case of WSNM [24]. However, WSNM has high time complexity. Zhang et al. proposed a modified Schatten p-norm minimization (MSpNM) model that reduces the total number of iterations and thereby the computational complexity [25]. However, that model struggles to learn accurate prior knowledge when the image is severely corrupted by noise. Zha et al. proposed a rank residual constraint (RRC) model that progressively approximates the underlying low rank matrix by minimizing the rank residual, achieving a better estimate of the desired image [19].
To address these problems, this paper studies the weighted Schatten p-norm minimization model and integrates the nonlocal self-similarity errors between clear and noisy images into the weighted Schatten p-norm minimization denoising algorithm. The image denoising problem is converted into minimizing the Schatten p-norm together with low rank error constraints, from which the optimal low rank matrix is found. Second, the generalized soft threshold algorithm [26] is applied to obtain the optimal low rank matrix solution of the weighted Schatten p-norm minimization and the optimal solution under the low rank error constraint; a more robust optimal solution is then obtained as the mean of the two. Finally, an image denoising algorithm based on the weighted Schatten p-norm low rank error constraint (WSNLEC) is proposed, and the standard image dataset Set12 is used in simulation experiments to verify its effectiveness.

2. Related Work

The proposed algorithm is based on the minimization of the weighted Schatten p-norm and the nonlocal self-similarity of the image. The weighted Schatten p-norm performs low rank regularization effectively, and the nonlocal self-similarity of the image preserves edge details well during the denoising process.

2.1. Weighted Schatten p-Norm Minimization

When the number of columns or rows of a matrix is much greater than its rank, the matrix is said to have low rank. The low rank property can also be described as the existence of only a small number of non-zero singular values after singular value decomposition. The rank minimization method reconstructs the data matrix by adding additional rank constraints to the estimated matrix: given a matrix Y, a low rank matrix X is obtained by solving
$\hat{X} = \arg\min_{X} \|Y - X\|_F^2 + \lambda R(X)$, (1)
where $\|Y - X\|_F^2$ is the data fidelity term, $\|\cdot\|_F$ denotes the F-norm (Frobenius norm), $\lambda R(X)$ denotes the low rank regularization term, and $\lambda$ is a parameter that balances the loss function and the low rank regularization term.
Because direct rank minimization is NP-hard and ill-posed, this problem is generally relaxed by minimizing the nuclear norm of the estimated matrix instead. However, nuclear norm minimization tends to shrink the rank components excessively and treats different rank components equally, which limits its ability and flexibility. To perform low rank regularization effectively, [24] proposed the weighted Schatten p-norm minimization model, where the weighted Schatten p-norm of a matrix $X \in \mathbb{R}^{b \times m}$ is expressed as:
$\|X\|_{w,S_p}^p = \sum_{i=1}^{\min\{b,m\}} w_i \sigma_i^p = \mathrm{tr}(W \Delta^p)$, (2)
where $0 < p \le 1$, $\sigma_i$ denotes the $i$-th singular value of the matrix $X$, $w = [w_1, \ldots, w_{\min\{b,m\}}]$ with $w_i \ge 0$ the non-negative weight assigned to $\sigma_i$, and $w_i$ and $\sigma_i$ are the diagonal elements of the diagonal matrices $W$ and $\Delta$, respectively.
Given a matrix Y, the nonconvex weighted Schatten p-norm minimization model aims to find a matrix X, which is as close to Y as possible under the conditions of F-norm data fidelity and weighted Schatten p-norm regularization:
$\hat{X} = \arg\min_{X} \|Y - X\|_F^2 + \|X\|_{w,S_p}^p$. (3)
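The weighted Schatten p-norm of Equation (2) is straightforward to evaluate from the singular values; a minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def weighted_schatten_p_norm(X, w, p):
    """||X||_{w,Sp}^p = sum_i w_i * sigma_i(X)**p, with one
    non-negative weight w_i per singular value and 0 < p <= 1."""
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(w * sigma ** p))

# Sanity check on a diagonal matrix, whose singular values are its
# diagonal entries. With p = 1 and unit weights this reduces to the
# nuclear norm: 3 + 2 + 1 = 6.
X = np.diag([3.0, 2.0, 1.0])
w = np.ones(3)
print(round(weighted_schatten_p_norm(X, w, p=1.0), 6))  # 6.0
```

With $p < 1$ the penalty shrinks toward the true rank function, and the weights allow large (informative) singular values to be penalized less than small ones.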

2.2. NonLocal Self-Similarity

The main idea of image prior methods based on low rank representation is that the data matrix formed by nonlocal similar patches of a natural image has low rank. Nonlocal self-similarity characterizes the repeatability of the textures and structures of a natural image across nonlocal areas; that is, for an image patch $x_i$, a large number of patches similar to it can be found elsewhere in the image, and these are called similar patches [13].
This study builds on the nonlocal self-similarity of images. A clear image X of size N is divided into n overlapping image patches $x_i$, $i = 1, 2, \ldots, n$. For each patch $x_i$, the block matching algorithm proposed in [27] searches for the m patches most similar to $x_i$, which form a matrix $X_i = \{x_{i,1}, x_{i,2}, \ldots, x_{i,m}\}$. Because all of the patches in each data matrix have similar structure, the constructed matrix $X_i$ has low rank.
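A simplified grouping step can be sketched as an exhaustive Euclidean-distance search inside a local window; this is only a stand-in for the full block matching of [27] (the function name and default parameters are ours):

```python
import numpy as np

def group_similar_patches(img, i, j, patch=6, window=15, m=8):
    """Stack the m patches most similar (in Euclidean distance) to the
    reference patch at (i, j) as the columns of a matrix X_i."""
    h, w = img.shape
    ref = img[i:i + patch, j:j + patch].ravel()
    candidates = []
    for r in range(max(0, i - window), min(h - patch, i + window) + 1):
        for c in range(max(0, j - window), min(w - patch, j + window) + 1):
            cand = img[r:r + patch, c:c + patch].ravel()
            candidates.append((np.sum((cand - ref) ** 2), cand))
    candidates.sort(key=lambda t: t[0])                 # nearest first
    return np.stack([cand for _, cand in candidates[:m]], axis=1)

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
X_i = group_similar_patches(img, 10, 10, patch=6, m=8)
print(X_i.shape)  # (36, 8): eight 6x6 patches, one per column
```

The reference patch is always its own nearest neighbor (distance zero), so it appears as the first column of $X_i$.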
The corresponding low rank matrix $X_{C_i}$ is obtained from each similar group $X_i$, and the optimal solution of $X_{C_i}$ is obtained with the Schatten p-norm, which can be expressed as:
$\hat{X}_{C_i} = \arg\min_{X_{C_i}} \|X_i - X_{C_i}\|_F^2 + \|X_{C_i}\|_{w,S_p}^p$. (4)
The similarity group $Y_i$ in the noisy image is constructed in the same way as in the clear image, namely $Y_i = \{y_{i,1}, y_{i,2}, \ldots, y_{i,m}\}$, where $y_{i,m}$ represents the m-th similar patch of the i-th similar group $Y_i$. The image denoising problem can then be transformed into recovering the latent clear image X from the noisy image Y by low rank representation, i.e., solving for the optimal low rank matrix in the noisy image:
$\hat{X}_i = \arg\min_{X_i} \|Y_i - X_i\|_F^2 + \|X_i\|_{w,S_p}^p$. (5)
The nonlocal self-similarity method can preserve the edge details in the image denoising process.

3. Principle and Method of WSNLEC

The WSNLEC algorithm proposed in this paper merges low rank error constraints into the weighted Schatten p-norm minimization denoising algorithm: the image denoising problem is transformed into minimizing the Schatten p-norm together with low rank error constraints, from which the optimal low rank matrix is obtained.

3.1. Low Rank Error

Due to the influence of noise, it is difficult to estimate an accurate low rank matrix from the image Y. Specifically, there is an error between the low rank matrix $X_C$ of the original clear image X obtained from Equation (4) and the estimated low rank matrix X obtained from Equation (5). The error R can be expressed as:
$R = X - X_C$. (6)
To improve denoising performance, the accuracy of the low rank matrix must be enhanced, which is to say the error must be made sufficiently small. Therefore, this paper introduces the low rank error into the weighted Schatten p-norm minimization denoising model, and Equation (5) becomes:
$\hat{X}_i = \arg\min_{X_i} \|Y_i - X_i\|_F^2 + \|X_i - X_{C_i}\|_{w,S_p}^p$. (7)
The Schatten p-norm is used to regularize the low rank error, and the optimal low rank matrix is then obtained by minimizing this error; the accuracy of the low rank matrix increases as the low rank error decreases.

3.2. Core Idea of WSNLEC

The clear image X is unknown during denoising, so the true low rank matrix cannot be obtained directly, but it can be approximated by an accurate estimate. In the proposed algorithm, the FFDNet method first proposed in [28] preprocesses the noisy image Y to obtain an image $Y_D$, which is then used for initialization to obtain a more accurate estimate of the low rank matrix $X_D$.
This paper fuses the low rank error constraints into the weighted Schatten p-norm minimization denoising algorithm: to improve denoising performance, the image denoising problem is converted into minimizing the Schatten p-norm together with the low rank error constraint, from which the optimal low rank matrix is obtained. The minimization based on the weighted Schatten p-norm error constraint can be expressed as:
$\hat{X}_i = \arg\min_{X_i} \|Y_i - X_i\|_F^2 + \|X_i\|_{w,S_p}^p + \|X_i - X_{C_i}\|_{w,S_p}^p$. (8)
Under the low rank assumption, according to the additive white Gaussian noise degradation model, the estimated matrix $X_i$ can be obtained from $Y_i$ by low rank matrix approximation. Applying the proposed weighted Schatten p-norm error constraint model to the estimation of $X_i$, the corresponding optimization problem can be defined as:
$\hat{X}_i = \arg\min_{X_i} \frac{1}{\sigma_n^2}\|Y_i - X_i\|_F^2 + \|X_i\|_{w,S_p}^p + \|X_i - X_{C_i}\|_{w,S_p}^p$, (9)
where $\sigma_n^2$ is the noise variance, $\|Y_i - X_i\|_F^2$ denotes the F-norm fidelity term, $\|X_i\|_{w,S_p}^p$ represents the low rank regularization term, and $\|X_i - X_{C_i}\|_{w,S_p}^p$ is the low rank error constraint term.
Equation (9) is divided into two sub-problems: solving the low rank matrix in the weighted Schatten p-norm minimization problem, and solving the low rank matrix in the low rank error constraint problem. Finally, the mean of the two solutions is taken as the final low rank matrix, and Equation (9) can be rewritten as:
$\hat{X}_i = \arg\min_{X_i} \frac{1}{\sigma_n^2}\|Y_i - X_i\|_F^2 + \frac{1}{2}\left(\|X_i\|_{w,S_p}^p + \|X_i - X_{C_i}\|_{w,S_p}^p\right)$. (10)
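One way to read the two-sub-problem split is as two singular-value shrinkage steps whose results are averaged: the Schatten term shrinks $Y_i$ directly, while the error-constraint term shrinks the residual $Y_i - X_{C_i}$ and adds the result back to $X_{C_i}$. The sketch below assumes this reading; the function names are ours, and the actual per-value shrinkage would be the GST operator described in Section 3.3.

```python
import numpy as np

def shrink_singular_values(M, shrink):
    """Apply a per-singular-value shrinkage rule to M via the SVD."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s)) @ Vt

def wsnlec_group_update(Y_i, X_C_i, shrink):
    """Mean of the two sub-problem solutions: one shrinks Y_i
    directly, the other shrinks the error Y_i - X_C_i."""
    X1 = shrink_singular_values(Y_i, shrink)                   # WSNM term
    X2 = X_C_i + shrink_singular_values(Y_i - X_C_i, shrink)   # error term
    return 0.5 * (X1 + X2)

rng = np.random.default_rng(0)
Y_i = rng.normal(size=(36, 8))
X_C_i = rng.normal(size=(36, 8))
soft = lambda s: np.maximum(s - 1.0, 0.0)   # soft thresholding (p = 1 case)
X_hat_i = wsnlec_group_update(Y_i, X_C_i, soft)
```

With the identity shrinkage both sub-solutions reduce to $Y_i$ itself, which is a useful sanity check.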

3.3. Solution Method

In this paper, the generalized soft-thresholding (GST) algorithm [26] is used to solve the proposed model. Given p and $w_i$, there is a specific threshold:
$\tau_p^{GST}(w_i) = \left(2 w_i (1-p)\right)^{\frac{1}{2-p}} + w_i p \left(2 w_i (1-p)\right)^{\frac{p-1}{2-p}}$, (11)
where, if $\sigma_i < \tau_p^{GST}(w_i)$, then $\delta_i = 0$ is the global minimum; otherwise, the optimum is attained at a non-zero point. For any $\sigma_i \in (\tau_p^{GST}(w_i), +\infty)$, $f_i(\delta)$ has a unique minimum $S_p^{GST}(\sigma_i; w_i)$, which can be obtained by solving the following equation:
$S_p^{GST}(\sigma_i; w_i) - \sigma_i + w_i p \left(S_p^{GST}(\sigma_i; w_i)\right)^{p-1} = 0$. (12)
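The GST operator of [26] can be sketched as a threshold test followed by a fixed-point iteration on Equation (12); the iteration count is an assumption (a handful of iterations usually suffices in the original paper):

```python
import numpy as np

def gst(sigma, w, p, iters=10):
    """Generalized soft-thresholding of a single singular value sigma
    with weight w and power 0 < p <= 1."""
    tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if sigma <= tau:
        return 0.0                     # below threshold: global minimum at 0
    # Fixed-point iteration for delta - sigma + w*p*delta**(p-1) = 0,
    # started at delta = sigma.
    delta = sigma
    for _ in range(iters):
        delta = sigma - w * p * delta ** (p - 1.0)
    return delta

# p = 1 recovers classical soft thresholding: max(sigma - w, 0).
print(round(gst(3.0, 1.0, 1.0), 6))  # 2.0
```

For $p = 1$ the threshold reduces to $\tau = w$ and the iteration converges in one step, matching the nuclear-norm soft-threshold rule.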
Generally, a larger j-th singular value of $X_i$ is of greater importance than a smaller singular value $\sigma_j(X_i)$, because the larger singular values of a matrix carry the information of its principal directions, and in an image matrix they provide the main edge and texture information. Therefore, to recover the clear image from the damaged one, the larger singular values should be shrunk less and the smaller singular values more. The j-th singular value of the optimal solution of Equation (9) has the same property: the larger $\delta_j(\hat{X}_i)$, the less it should be shrunk. An intuitive weight setting is therefore to make the weight inversely proportional to $\delta_j(\hat{X}_i)$ [20]:
$w_j = \dfrac{c\sqrt{n}}{\delta_j(\hat{X}_i) + \varepsilon}$, (13)
where n is the number of similar patches in $Y_i$, $\varepsilon$ is set to $10^{-16}$ to avoid division by zero, and $c = 2\sqrt{2}\sigma_n^2$. Because $\delta_j(\hat{X}_i)$ is not available before $\hat{X}_i$ is estimated, it can be initialized as:
$\delta_j(\hat{X}_i) = \sqrt{\max\{\sigma_j^2(Y_i) - n\sigma_n^2,\ 0\}}$. (14)
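This WNNM-style initialization subtracts the expected noise energy from each squared singular value of the noisy group and clips at zero; a minimal sketch (the function name is ours):

```python
import numpy as np

def init_singular_values(Y_i, sigma_n):
    """Initialize the singular values of the unknown clean matrix by
    subtracting the expected noise energy n*sigma_n^2 from the squared
    singular values of the noisy group Y_i, clipped at zero."""
    n = Y_i.shape[1]                               # number of similar patches
    s = np.linalg.svd(Y_i, compute_uv=False)
    return np.sqrt(np.maximum(s ** 2 - n * sigma_n ** 2, 0.0))
```

Singular values whose energy falls below the noise floor are initialized to zero, so their weights in Equation (13) become large and they are shrunk aggressively.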
We use the iterative regularization scheme adopted in [12], which adds the filtered residual back to the denoised image, as shown below:
$Y^{(k)} = \hat{X}^{(k-1)} + \alpha\left(Y - \hat{X}^{(k-1)}\right)$, (15)
where k represents the number of iterations and  α is a relaxation parameter.
Finally, all of the denoised image patches are merged to form the denoised image $\hat{X}^{(k)}$.

3.4. WSNLEC in Image Denoising

In this paper, the weighted Schatten p-norm is used as the regularization term to ensure the low rank of the key information in the image. Due to the impact of noise, there is a certain error between the low rank matrix solved from the noisy image and the true low rank matrix. Therefore, WSNLEC introduces a low rank error constraint, yielding a gray-level image denoising algorithm based on the weighted Schatten p-norm low rank error constraint. The constraint reduces the error between the obtained low rank matrix and the true low rank matrix; hence, a more accurate optimal low rank matrix is obtained and the denoising performance of the algorithm is improved.
Algorithm 1 summarizes the image denoising algorithm that is based on the weighted Schatten p-norm low rank error constraint.
Algorithm 1: Weighted Schatten p-norm low rank error constraint (WSNLEC) for Image Denoising.
Input: Noisy image Y
(1) Initialize $\hat{X}^{(0)} = Y$, $Y_D$;
(2) For k = 1 : K do
(3)    Iterative regularization $Y^{(k)} = \hat{X}^{(k-1)} + \alpha(Y - \hat{X}^{(k-1)})$;
(4)    Construct similar groups $Y_i^k$ and $Y_{D_i}^k$ by the block matching method [27];
(5)    For each local image patch $y_i$ do
(6)       Estimate the k-th weight vector $W_j^k$ by $w_j = c\sqrt{n}/(\delta_j(\hat{X}_i) + \varepsilon)$;
(7)       Update $X_i^k$ by the GST algorithm;
(8)       Update $X_{R_i}^k$ by the GST algorithm;
(9)       Update $\hat{X}_i^k$ by Equation (10);
(10)   End For
(11)   Aggregate $\hat{X}_i^k$ to form the denoised image $\hat{X}^{(k)}$;
(12) End For
Output: Denoised image $\hat{X}^{(k)}$
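The outer loop of Algorithm 1 can be sketched as follows; `denoise_step` is a placeholder we introduce for the per-iteration work (block matching, GST updates of the similar groups, aggregation), and the default parameter values are illustrative only.

```python
import numpy as np

def wsnlec_denoise(Y, sigma_n, K=3, alpha=0.1, denoise_step=None):
    """Schematic outer loop of Algorithm 1 (WSNLEC).

    denoise_step stands in for steps (4)-(11); the identity is used
    here as a placeholder so the skeleton runs on its own.
    """
    if denoise_step is None:
        denoise_step = lambda img: img
    X_hat = Y.copy()
    for _ in range(K):
        # Iterative regularization: feed part of the residual back.
        Y_k = X_hat + alpha * (Y - X_hat)
        X_hat = denoise_step(Y_k)
    return X_hat

Y = np.arange(16.0).reshape(4, 4)
X_hat = wsnlec_denoise(Y, sigma_n=20.0)   # identity step: returns Y unchanged
```

In a full implementation, `denoise_step` would group patches, shrink each group's singular values with GST under both terms of Equation (10), average the two solutions, and aggregate the overlapping patches.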

4. Experimental Results and Analysis

In order to test the performance of the proposed WSNLEC in image denoising, we compare it with four representative algorithms: block matching three-dimensional (3D) filtering (BM3D) [27], weighted nuclear norm minimization (WNNM) [23], weighted Schatten p-norm minimization (WSNM) [24], and FFDNet [28]. The experimental results are then analyzed.

4.1. Experimental Setup

WSNLEC requires several parameters. To ensure the validity and reliability of the experiments, the parameter settings are the same as for WSNM. The power p ranges from 0.05 to 1 with a step size of 0.05; finally, $p = \{1.0, 0.85, 0.75, 0.7, 0.1, 0.05\}$ with corresponding noise levels $\sigma_n = \{20, 30, 50, 60, 75, 100\}$. To test the effectiveness of the algorithm, the public dataset Set12 is used in the experiments (as shown in Figure 1). All experiments on this dataset are implemented in MATLAB R2016a on Windows 10 with an Intel Core i5-3470 CPU at 3.20 GHz and 8.0 GB of memory.
To obtain the noisy images, white Gaussian noise with standard deviations σ = 20, 30, 50, 60, 75, and 100 is added to the test images. The size of the overlapping image patches varies with the noise level: when σ ≤ 20, the patch size is 6 × 6; when 20 < σ ≤ 40, it is 7 × 7; when 40 < σ ≤ 60, it is 8 × 8; and when σ > 60, it is 9 × 9.
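The noise-level-to-patch-size rule above is a simple lookup; a small helper (the function name is ours) makes it explicit:

```python
def overlap_patch_size(sigma):
    """Side length of the overlapping patches at each noise level,
    following the experimental setup described above."""
    if sigma <= 20:
        return 6
    if sigma <= 40:
        return 7
    if sigma <= 60:
        return 8
    return 9

print([overlap_patch_size(s) for s in (20, 30, 50, 60, 75, 100)])
# [6, 7, 8, 8, 9, 9]
```

Larger patches at higher noise levels give each similar group more samples, which stabilizes the singular value estimates.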

4.2. Experimental Results of Noise Reduction Algorithms with Different Standard Deviations

We use the peak signal-to-noise ratio (PSNR) as the evaluation criterion: the higher the PSNR value, the better the image denoising effect. Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show the PSNR values of the proposed algorithm and the other comparison algorithms under different standard deviations.
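PSNR is defined as $10\log_{10}(\mathrm{peak}^2/\mathrm{MSE})$; for 8-bit images the peak is 255. A minimal implementation:

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    diff = clean.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

# A constant error of 5 gray levels gives MSE = 25, so
# PSNR = 10*log10(255^2 / 25) ≈ 34.15 dB.
clean = np.zeros((4, 4))
noisy = clean + 5.0
print(round(psnr(clean, noisy), 2))  # 34.15
```

A gain of roughly 0.5-1 dB, as reported in the tables, corresponds to a clearly visible reduction in residual noise.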
Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6 show the results for Gaussian noise with standard deviations σ = 20, 30, 50, 60, 75, and 100, respectively; the highest PSNR values in each table are shown in bold. As the tables show, the PSNR value of every denoising algorithm decreases as the noise standard deviation increases. The PSNR values of WSNLEC are higher than those of the other comparison algorithms at almost all noise levels, and its average PSNR value is also the highest.
Among them, the experimental results are compared with those of the FFDNet algorithm used in preprocessing in order to verify the validity of the experiment. The results show that the PSNR value of the proposed algorithm is higher than that of FFDNet under different standard deviations; that is, the performance of the WSNLEC algorithm is better than that of FFDNet.
When the noise standard deviation σ = 20, the PSNR value of WSNLEC is 0.72 dB, 2.09 dB, 0.37 dB, and 0.23 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively. When the noise standard deviation σ = 30, the PSNR value of WSNLEC is 0.76 dB, 1.80 dB, 0.37 dB, and 0.21 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively. When the noise standard deviation σ = 50, the PSNR value of WSNLEC is 0.87 dB, 1.39 dB, 0.49 dB, and 0.23 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively. When the noise standard deviation σ = 60, the PSNR value of WSNLEC is 0.88 dB, 1.44 dB, 0.55 dB, and 0.27 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively. When the noise standard deviation σ = 75, the PSNR value of WSNLEC is 0.89 dB, 1.61 dB, 0.51 dB, and 0.22 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively. When the noise standard deviation σ = 100, the PSNR value of WSNLEC is 1.36 dB, 1.51 dB, 0.5 dB, and 0.21 dB higher than BM3D, WNNM, WSNM, and FFDNet, respectively.
While using the nonlocal self-similarity of the image, the proposed algorithm adds a low rank error constraint that is based on the weighted Schatten p-norm and reduces the error between the estimated low rank matrix and real low rank matrix. The experimental results show that the proposed algorithm has better denoising performance, effectiveness and feasibility.

4.3. Experimental Results of Denoising Algorithms for Different Test Images

For each image in the Set12 dataset, under different standard deviations, the PSNR values of the proposed algorithm and all of the comparison algorithms are plotted as line graphs to demonstrate the advantages of the proposed algorithm clearly, as shown in Figure 2.
Figure 2 shows the PSNR values of all the algorithms under different noise standard deviations ($\sigma = 20, 30, 50, 60, 75, 100$) for each image. The test images are C. Man, House, Peppers, Starfish, Monarch, Airplane, Parrot, Lena, Barbara, Boat, Man, and Couple. The red line represents the algorithm proposed in this paper, the blue line BM3D, the green line WNNM, the purple line WSNM, and the gray line FFDNet. It is clear that, except for the Barbara image, for which WSNM achieves the highest PSNR, the PSNR value of WSNLEC is significantly higher than that of the other comparison algorithms on all test images. The results show that, on most test images, the proposed algorithm outperforms the comparison algorithms.
Among them, the experimental results are compared with those of the FFDNet algorithm used in preprocessing in order to verify the validity of the experiment. The results show that the PSNR value of the proposed algorithm is higher than that of FFDNet under different standard deviations. It can be seen that the proposed algorithm performs better than the pre-processing FFDNet algorithm.

4.4. Visual Effects of Different Denoising Algorithms

To better show the visual effect of the denoising algorithms, we use the House test image with noise standard deviation σ = 50 in a simulation experiment. Figure 3 shows the test results for House. The WSNLEC algorithm clearly performs well: edge and detail information are better protected, especially the line contours, and the background is smoother, giving a better visual experience. However, the method contributes little in textured regions; although it does not reproduce all details clearly, it achieves the best visual effect among the compared algorithms.
Furthermore, we use the Lena test image with noise standard deviation σ = 30 in a simulation experiment. Figure 4 shows the test results for Lena. It can be clearly seen that WSNLEC denoises better than the other comparison algorithms: the facial texture is smoother, and the edge details of the facial features are clearer. Although texture details are not fully preserved, it shows the best visual effect among the compared algorithms.
The proposed algorithm adds a low rank error constraint on the basis of the weighted Schatten p-norm, reduces the error between the estimated low rank matrix and the true low rank matrix, and retains the image detail features well while removing the noise. From the above experimental results and analysis, it is clear that WSNLEC not only has strong denoising performance and obtains a higher peak signal-to-noise ratio, but it can also produce better visual effects.

5. Conclusions

The Schatten p-norm optimization method is normally employed to obtain prior information directly from noisy images, and little attention is paid to the nonlocal self-similarity errors between clear images and noisy images. To address these problems, a weighted Schatten p-norm low rank error constraint algorithm for image denoising is proposed, which introduces the nonlocal self-similarity error between the clear image and the noisy image into the weighted Schatten p-norm minimization model. The low rank error is constrained to obtain a better low rank matrix and improve the denoising performance of the algorithm. First, the algorithm divides the problem of solving the optimal low rank matrix into two sub-problems. The generalized soft threshold algorithm is then used to solve the low rank matrix under the Schatten p-norm and under the low rank error constraint, respectively. Finally, their mean is taken as the final low rank matrix. The proposed algorithm is compared with four classical and effective image denoising algorithms (BM3D, WNNM, WSNM, and FFDNet), and the experimental results show that it solves for the low rank matrix robustly, with higher PSNR, better denoising effect, and greater practicability and effectiveness. In the future, we will continue to optimize the algorithm, reduce its time complexity, and extend it to color images.

Author Contributions

Conceptualization, J.X.; methodology, Y.C. and Y.M.; software and validation, Y.C.; formal analysis, Y.M.; writing, original draft preparation, Y.C.; writing, review and editing, Y.C.; visualization, Y.C.; project administration, J.X. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant (61976082, 61976120, 62002103), and in part by the Key Scientific and Technological Projects of Henan Province under Grant 202102210165.

Conflicts of Interest

The authors declare no conflict of interest.

Symbols Explanation

Y: Noisy image
X: Clear image
N: Noise
λ: Regularization parameter
$\hat{X}^{(k)}$: Denoised image
$X_i$: Similarity group of the clear image
$X_{C_i}$: Low rank matrix from the clear image
$Y_i$: Similarity group of the noisy image
R: Low rank matrix error
$Y_D$: Image obtained by preprocessing the noisy image Y with the FFDNet method
$\sigma_n^2$: Noise variance
p: Power
$w_i$: Weight
$\sigma_i$, $\delta_i$: The i-th singular value of a matrix
$\tau_p^{GST}(w_i)$: Threshold
ε: Constant term
k: Number of iterations
α: Relaxation parameter

References

  1. Jin, C.; Luan, N. An image denoising iterative approach based on total variation and weighting function. Multimed. Tools Appl. 2020, 79, 20947–20971. [Google Scholar] [CrossRef]
  2. Huang, S.; Zhou, P.; Shi, H.; Sun, Y.; Wan, S.R. Image speckle noise denoising by a multi-layer fusion enhancement method based on block matching and 3D filtering. Imaging Sci. J. 2020, 67, 224–235. [Google Scholar] [CrossRef]
  3. Sun, L.; Jeon, B.; Zheng, Y.; Wu, Z.B. A novel weighted cross total variation method for hyperspectral image mixed denoising. IEEE Access 2017, 5, 27172–27188. [Google Scholar] [CrossRef]
  4. Shen, J.; Chan, T. Mathematical models for local nontexture inpaintings. SIAM J. Appl. Math. 2002, 62, 1019–1043. [Google Scholar] [CrossRef]
  5. Wang, W.; Yao, M.J.; Ng, M.K. Color image multiplicative noise and blur removal by saturation-value total variation. Appl. Math. Model. 2021, 90, 240–264. [Google Scholar] [CrossRef]
  6. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial-spectral total variation. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2018, 11, 713–729. [Google Scholar] [CrossRef]
  7. Xie, Z.; Liu, L.; Yang, C. An entropy-based algorithm with nonlocal residual learning for image compressive sensing recovery. Entropy 2019, 21, 900. [Google Scholar] [CrossRef] [Green Version]
  8. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711. [Google Scholar] [CrossRef]
  9. Maboud, F.K.; Rodrigo, C. Subspace-orbit randomized decomposition for low-rank matrix approximations. IEEE Trans. Signal Process. 2018, 66, 4409–4424. [Google Scholar]
  10. Xu, J.C.; Wang, N.; Xu, Z.W.; Xu, K.Q. Weighted lp norm sparse error constraint based ADMM for image denoising. Math. Probl. Eng. 2019, 2019, 1–15. [Google Scholar] [CrossRef] [Green Version]
  11. Zuo, C.; Jovanov, L.; Goossens, B. Image denoising using quadtreebased nonlocal means with locally adaptive principal component analysis. IEEE Signal Process. Lett. 2016, 23, 434–438. [Google Scholar] [CrossRef]
  12. Zhang, K.; Zuo, W.M.; Chen, Y. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  13. Fu, Y.L.; Xu, J.W.; Xiang, Y.J.; Chen, Z.; Zhu, T.; Cai, L.; He, W.H. Multi-scale patches based image denoising using weighted nuclear norm minimisation. IET Image Process. 2020, 14, 3161–3168. [Google Scholar] [CrossRef]
  14. Cao, C.H.; Yu, J.; Zhou, C.Y.; Hu, K.; Xiao, F.; Gao, X.P. Hyperspectral image denoising via subspace-based nonlocal low-rank and sparse factorization. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2019, 12, 973–988. [Google Scholar] [CrossRef]
  15. Ren, F.J.; Wen, R.P. A new method based on the manifold-alternative approximating for low-rank matrix completion. Entropy 2018. [Google Scholar] [CrossRef] [PubMed]
  16. Cai, J.F.; Candès, E.J.; Shen, Z.W. A singular value thresholding algorithm for matrix completion. SIAM J. Optim. 2010, 20, 1956–1982. [Google Scholar] [CrossRef]
  17. Liu, X.; Zhao, G.Y.; Yao, J.W.; Qi, C. Background subtraction based on low-rank and structured sparse decomposition. IEEE Trans. Image Process. 2015, 24, 2302–2314. [Google Scholar] [CrossRef] [PubMed]
  18. Shang, F.; Cheng, J.; Liu, Y.; Luo, Z.; Lin, Z. Bilinear factor matrix norm minimization for robust PCA: Algorithms and applications. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 2066–2080. [Google Scholar] [CrossRef] [Green Version]
  19. Zha, Z.Y.; Yuan, X.; Wen, B.H.; Zhou, J.T.; Zhang, J.C.; Zhu, C. From rank estimation to rank approximation: Rank residual constraint for image restoration. IEEE Trans. Image Process. 2020, 29, 3254–3269. [Google Scholar] [CrossRef] [Green Version]
  20. Zeng, H.Z.; Xie, X.Z.; Kong, W.F.; Cui, S.; Ning, J.F. Hyperspectral image denoising via combined non-local self-similarity and local low-rank regularization. IEEE Access 2020, 8, 50190–50208. [Google Scholar] [CrossRef]
  21. An, J.L.; Lei, J.H.; Song, Y.Z.; Zhang, X.R.; Guo, J.M. Tensor based multiscale low rank decomposition for hyperspectral images dimensionality reduction. Remote Sens. 2019, 11, 1485–1503. [Google Scholar] [CrossRef] [Green Version]
  22. Nie, F.; Huang, H.; Ding, C. Low-rank matrix recovery via efficient Schatten p-norm minimization. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, Toronto, ON, Canada, 22–26 July 2012; pp. 655–661. [Google Scholar]
  23. Gu, S.H.; Zhang, L.; Zuo, W.M. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar]
  24. Xie, Y.; Gu, S.; Liu, Y. Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857. [Google Scholar] [CrossRef] [Green Version]
  25. Zhang, H.M.; Qian, J.; Zhang, B.; Yang, J.; Gong, C.; Wei, Y. Low-Rank matrix recovery via modified Schatten-p norm minimization with convergence guarantees. IEEE Trans. Image Process. 2020, 29, 3132–3142. [Google Scholar] [CrossRef] [PubMed]
  26. Zuo, W.M.; Meng, D.Y.; Zhang, L. A generalized iterated shrinkage algorithm for non-convex sparse coding. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 217–224. [Google Scholar]
  27. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  28. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Set12 test image set.
Figure 2. PSNR values of all algorithms under different noise standard deviations (σ) for each image in the Set12 image data set. (a) C.Man. (b) House. (c) Peppers. (d) Starfish. (e) Monarch. (f) Airplane. (g) Parrot. (h) Lena. (i) Barbara. (j) Boat. (k) Man. (l) Couple.
Figure 3. Denoising results on image “House” with noise level σ = 50. (a) is the clear image; (b) is the noisy image (σ = 50); (c) is the denoised image of BM3D (PSNR = 28.57); (d) is the denoised image of WNNM (PSNR = 28.46); (e) is the denoised image of WSNM (PSNR = 30.56); (f) is the denoised image of FFDNet (PSNR = 29.50); and (g) is the denoised image of WSNLEC (PSNR = 30.85).
Figure 4. Denoising results on image “Lena” with noise level σ = 30. (a) is the clear image; (b) is the noisy image (σ = 30); (c) is the denoised image of BM3D (PSNR = 31.28); (d) is the denoised image of WNNM (PSNR = 29.93); (e) is the denoised image of WSNM (PSNR = 31.50); (f) is the denoised image of FFDNet (PSNR = 31.82); and (g) is the denoised image of WSNLEC (PSNR = 32.13).
Table 1. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 20.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      30.28    29.31    30.72    31.03    31.15
House      33.75    31.81    33.99    34.06    34.42
Peppers    31.32    29.42    31.55    31.72    32.02
Starfish   29.52    28.38    30.28    30.43    30.63
Monarch    30.32    29.07    31.25    31.42    31.59
Airplane   29.47    28.70    29.93    30.10    30.21
Parrot     29.80    28.95    30.10    30.42    30.51
Lena       33.04    31.30    33.13    33.47    33.75
Barbara    31.58    29.88    32.11    31.09    31.62
Boat       30.85    29.49    30.98    31.17    31.41
Man        30.57    29.34    30.72    31.05    31.22
Couple     30.74    29.22    30.74    31.15    31.37
Average    30.94    29.57    31.29    31.43    31.66
Table 2. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 30.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      28.58    27.65    28.78    29.28    29.38
House      31.91    30.43    32.58    32.57    32.97
Peppers    29.15    27.85    29.59    29.87    30.13
Starfish   27.45    26.67    28.01    28.34    28.25
Monarch    28.39    27.39    29.02    29.39    29.57
Airplane   27.61    26.68    27.92    28.13    28.21
Parrot     27.97    26.92    28.33    28.65    28.76
Lena       31.28    29.93    31.50    31.82    32.13
Barbara    29.60    28.55    30.31    29.07    29.69
Boat       28.97    28.14    29.20    29.45    29.67
Man        28.89    28.03    28.95    29.31    29.46
Couple     28.82    27.84    28.96    29.33    29.54
Average    29.05    28.01    29.43    29.60    29.81
Table 3. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 50.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      25.96    25.74    26.39    27.24    27.32
House      29.61    28.46    30.56    30.36    30.85
Peppers    26.67    26.01    27.10    27.41    27.63
Starfish   24.81    24.57    25.25    25.68    25.86
Monarch    25.76    25.34    26.22    26.92    27.10
Airplane   25.24    24.61    25.42    25.79    25.88
Parrot     25.89    25.54    26.12    26.57    26.67
Lena       28.99    28.14    29.22    29.63    29.94
Barbara    27.04    26.75    27.83    26.41    26.97
Boat       26.78    26.23    26.82    27.30    27.51
Man        26.75    26.28    26.93    27.26    27.41
Couple     26.45    26.01    26.64    27.04    27.24
Average    26.66    26.14    27.04    27.30    27.53
Table 4. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 60.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      25.50    25.03    25.63    26.52    26.61
House      28.57    27.98    29.49    29.50    30.09
Peppers    25.86    24.79    26.09    26.53    26.81
Starfish   23.98    23.67    24.41    24.74    24.93
Monarch    24.84    24.53    25.40    26.02    26.24
Airplane   24.54    23.83    24.57    25.03    25.14
Parrot     25.07    24.37    25.32    25.83    25.95
Lena       28.13    27.31    28.37    28.83    29.18
Barbara    26.21    25.88    27.02    25.46    26.04
Boat       26.03    25.52    26.13    26.54    26.77
Man        26.11    25.57    26.18    26.56    26.74
Couple     25.63    25.21    25.81    26.24    26.46
Average    25.87    25.31    26.20    26.48    26.75
Table 5. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 75.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      24.27    23.83    24.63    25.62    25.64
House      27.39    26.23    28.57    28.14    28.98
Peppers    24.75    23.78    25.17    25.45    25.67
Starfish   23.14    22.19    23.14    23.62    23.75
Monarch    23.72    22.92    24.19    24.90    25.07
Airplane   23.37    23.12    23.72    24.12    24.16
Parrot     24.07    23.17    24.26    24.91    25.00
Lena       26.94    26.28    27.54    27.86    28.17
Barbara    25.02    24.61    25.90    24.29    24.96
Boat       25.10    24.38    25.09    25.63    25.74
Man        25.28    24.57    25.32    25.73    25.82
Couple     24.67    23.98    24.80    25.29    25.42
Average    24.81    24.09    25.19    25.48    25.70
Table 6. Denoising PSNR (dB) results of different denoising algorithms under noise standard deviation σ = 100.

Image      BM3D     WNNM     WSNM     FFDNet   WSNLEC
C.Man      22.29    22.55    23.45    24.30    24.25
House      25.50    25.03    27.02    26.98    27.65
Peppers    22.87    22.60    23.43    24.03    24.23
Starfish   21.49    21.05    22.11    22.25    22.39
Monarch    21.45    21.31    22.99    23.40    23.61
Airplane   21.85    21.72    22.39    22.90    22.95
Parrot     21.96    21.95    22.90    23.72    23.80
Lena       25.32    25.03    26.31    26.61    26.93
Barbara    22.80    23.53    24.42    22.89    23.56
Boat       23.56    23.21    23.91    24.44    24.53
Man        23.98    23.67    24.34    24.66    24.77
Couple     23.37    23.02    23.54    24.08    24.14
Average    23.04    22.89    23.90    24.19    24.40
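The PSNR values reported in the tables and figure captions follow the standard definition for 8-bit images, PSNR = 10·log10(255²/MSE). As a point of reference, a minimal sketch of the metric (the `psnr` helper and the synthetic test image are illustrative, not part of the paper's code):

```python
import numpy as np

def psnr(clean, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    diff = np.asarray(clean, dtype=np.float64) - np.asarray(noisy, dtype=np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Degradation model Y = X + N with additive white Gaussian noise, sigma = 20,
# applied to a synthetic 64x64 "image" for illustration.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = clean + rng.normal(0.0, 20.0, size=clean.shape)
print(f"PSNR before denoising: {psnr(clean, noisy):.2f} dB")
```

Since the MSE of the noisy observation is close to σ² = 400, the PSNR before denoising at σ = 20 is roughly 22 dB, which makes the 30+ dB values in Table 1 interpretable as the gain delivered by the denoisers.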
Share and Cite

Xu, J.; Cheng, Y.; Ma, Y. Weighted Schatten p-Norm Low Rank Error Constraint for Image Denoising. Entropy 2021, 23, 158. https://doi.org/10.3390/e23020158