Article

Plug-and-Play-Based Algorithm for Mixed Noise Removal with the Logarithm Norm Approximation Model

1 School of Mathematics and Computer Science, Shangrao Normal University, Shangrao 334001, China
2 Department of Network Engineering, Chengdu University of Information Technology, Chengdu 610225, China
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(20), 3810; https://doi.org/10.3390/math10203810
Submission received: 2 September 2022 / Revised: 6 October 2022 / Accepted: 12 October 2022 / Published: 15 October 2022

Abstract:
During imaging and transmission, images are easily affected by several factors, including sensors, camera motion, and transmission channels. In practice, images are commonly corrupted by a mixture of Gaussian and impulse noise, further complicating the denoising problem. Therefore, in this work, we propose a novel mixed noise removal model that combines a deterministic low-rankness prior and an implicit regularization scheme. In the optimization model, we apply the matrix logarithm norm approximation model to characterize the global low-rankness of the original image. We further adopt the plug-and-play (PnP) scheme to formulate an implicit regularizer by plugging in an image denoiser, which is used to preserve image details. These two building blocks are complementary to each other, and the mixed noise removal algorithm is thus established. Within the framework of the PnP scheme, we address the proposed optimization model via the alternating direction method of multipliers (ADMM). Finally, we perform extensive experiments to demonstrate the effectiveness of the proposed algorithm. The simulation results show that our algorithm can recover both the global structure and the detailed information of images well and achieves superior performance over competing methods in terms of quantitative evaluation and visual inspection.

1. Introduction

Image denoising has been widely used in many applications, such as hyperspectral imaging (HSI) [1], scene recognition [2], and image restoration [3]. However, due to imaging conditions, natural images inevitably suffer from various kinds of noise, e.g., Gaussian, random, salt-and-pepper (S&P), and stripe noise, which critically influence subsequent applications. In particular, many images are contaminated by mixed noise, such as Gaussian noise plus random noise or Gaussian noise plus stripe noise. Therefore, restoring a clean image from its corrupted version is the central issue in image denoising. From a mathematical perspective, the denoising problem is ill-posed and irreversible. Hence, to some extent, prior knowledge of the image is of great importance.
In the past decade, scholars have proposed numerous image denoising models, such as bivariate probability [4], Gaussian–Hermite distribution [5], total variation [6], autoregressive [7], Block-Matching 3D (BM3D) [8], and sparse representation-based image modeling [9,10,11]. Among these models, the image sparse representation model has been extensively studied and applied. It expresses a natural image as a linear combination of a group of basis or dictionary atoms and makes the transformed coefficients sparse and compressible, so that only a few coefficients are nonzero. Common examples of this model are the cosine, wavelet, and Fourier bases. However, this image denoising approach can only address white Gaussian noise. In practical applications, images are often affected by many types of noise, such as Gaussian, S&P, or random noise. Traditional denoising methods cannot easily remove impulse noise, because they tend to preserve impulse noise points as if they were edges [12,13].
In general, two types of typical impulse noises exist, i.e., S&P and random noises. Conventional methods use two approaches to remove the mixture of Gaussian–impulse noises. The first is the detection-based noise removal method, and the second is the modeling-based method. The detection-based denoising method has been discussed in existing research [14,15,16,17]. This method first detects the locations of damaged image pixels, then handles the mixed noise. In fact, the accuracy of the detection of the damaged pixels is very important for removing mixed noise. Generally, detection-based methods are effective in removing impulse noise. However, their fidelity terms do not take Gaussian noise into account. Therefore, they cannot remove Gaussian noise effectively.
The second method treats impulse noise as a sparse signal and constructs a statistical distribution model on the basis of the impulse noise. A previously reported method [18] adopts Laplacian scale mixture (LSM) modeling to characterize impulse noise and estimates the hidden variables and impulse noise jointly from the noisy image. This method utilizes a nonlocal low-rank regularizer to regularize the denoising model. Liu et al. [19] proposed a mixed noise removal algorithm using weighted dictionary learning. Although this method can handle mixed noise, its training process is time-consuming. Jiang et al. [20] developed an image denoising method by combining weighted encoding and nonlocal self-similarity. This method can remove Gaussian and impulse noises jointly. However, its denoising performance relies on the design of the diagonal weight matrix.
Recently, low-rank matrix recovery has attracted considerable attention in the field of image restoration [4,21,22,23,24]. The fundamental problem of this process is how to find and use the low-dimensional structures of images. In contrast to the traditional mixed noise denoising method, low-rank matrix recovery can handle different noise types without any noisy prior information. Therefore, many researchers have applied the low-rank matrix restoration model to reconstruct images. Zhang et al. [25] proposed a denoising method for hyperspectral images based on a low-rank matrix recovery model. Subsequently, a noise-adjustable low-rank matrix approximation model was applied to hyperspectral image denoising [26]. However, in the above two methods [25,26], the upper bound of the rank of a given matrix must be set. Nuclear norm was introduced to design the rank approximation function in [27] for hyperspectral image denoising to solve the above issue. This nuclear norm-based rank approximation function is mainly characterized by its treatment of each singular value as equal. However, this approach ignores the fact that the contribution of each nonzero singular value is different. As a result, some nonconvex low-rank-based approaches are exploited for hyperspectral image restoration [28,29]. In addition, the total variation-regularized low-rank restoration method has been developed to remove mixed noise from HSI images [30,31]. In recent years, deep learning-based approaches to image denoising have been extensively studied. Instead of mathematical model construction, learning-based methods directly learn a mapping function from a noisy image to a clean image. These methods include convolutional neural network-based CT denoising [32], autonomous illumination systems [33], and deep plug-and-play (PnP) image restoration [34]. Additionally, some low-rank tensor-based HSI restoration algorithms have been proposed. 
These algorithms include weighted group sparsity-regularized low-rank tensor decomposition (LRTDGS) [35] and fibered rank constrained tensor restoration PnP [36].
In this work, inspired by PnP-based [34,36,37,38,39] and low-rank based [40,41] methods, we propose a mixed noise removal algorithm by applying the PnP regularization-based logarithm norm approximation (LNAM) model. First, the LNAM is used to characterize the global low-rankness of the original image. Second, the PnP regularization method is adopted to preserve the image detail information. Finally, the experimental results obtained through simulations on test images are used to confirm the effectiveness of the proposed denoising method. The contributions of the proposed method can be summarized as follows:
First, instead of utilizing the nuclear norm-based low-rank approximation function, we introduce a logarithm norm-based smooth rank function and propose the LNAM. Compared with the nuclear norm-based low-rank function, the proposed model can more effectively exploit the global low-rank structure of HSI and provides a tighter approximation of the rank.
Second, the low-rankness prior is known to usually face limitations in preserving the local details of images. Therefore, the PnP framework is incorporated into the LNAM model to break through this limitation. Furthermore, we introduce a classic BM3D denoiser [8] that extensively exploits the nonlocal self-similarity prior of images.
Third, to address the LNAM optimization problem effectively, we decompose the original problem into several simple subproblems within the framework of the alternating direction method of multipliers (ADMM).
The remainder of this article is organized as follows: Section 2 introduces the related works using mixed noise denoising models on hyperspectral images. As described in Section 3, the LNAM model is proposed and solved with the ADMM-based optimization algorithm. Section 4 presents the experimental results of the test images and a discussion on the effect of several parameters on the proposed algorithm. Finally, we conclude this paper in Section 5.

2. Background of the Low-Rank-Based Hyperspectral Image Denoising Method

Mixed noise removal techniques based on low-rank matrix recovery are mainly inspired by the robust principal component analysis (RPCA) [42]. The main concept of RPCA is that it aims to find the underlying low-dimensional subspace structure of high-dimensional signals from the corrupted observation. The RPCA model can be expressed as
$$\min_{X,S}\ \operatorname{rank}(X) + \lambda\|S\|_0 \quad \text{s.t.}\quad Y = X + S, \tag{1}$$
where $\lambda$ denotes the regularization parameter; $Y$ represents the corrupted observational data; $X$ and $S$ denote the unknown low-rank matrix and the sparse matrix, respectively; and $\|\cdot\|_0$ represents the $\ell_0$-norm, which promotes sparsity. Although the RPCA model can remove sparse noise, it does not work well when the hyperspectral image is polluted by mixed noise, e.g., Gaussian noise plus sparse noise. Therefore, an improved model that accounts for the Gaussian noise $E$ has been proposed:
$$\min_{X,S,E}\ \operatorname{rank}(X) + \lambda\|S\|_0 + \frac{\eta}{2}\|E\|_F^2 \quad\text{s.t.}\quad Y = X + S + E, \tag{2}$$
where $\lambda$ and $\eta$ are both regularization parameters. Problems (1) and (2) are NP-hard. One common approach replaces the rank function with the nuclear norm and, correspondingly, the $\ell_0$-norm with the $\ell_1$-norm [43]:
$$\min_{X,S,E}\ \|X\|_* + \lambda\|S\|_1 + \frac{\eta}{2}\|E\|_F^2 \quad\text{s.t.}\quad Y = X + S + E. \tag{3}$$
The low-rank matrix approximation model has been widely used in hyperspectral image denoising applications. However, it suffers from the following drawbacks. First, the nuclear norm treats all nonzero singular values as contributing equally to the rank function, whereas in fact different singular values contribute differently; large singular values are penalized more heavily than small ones, which easily leads to overshrinking of the rank. Second, the resulting rank estimate may be impractical. Third, low-rank matrix approximation approaches require numerous iterations, which results in low computational efficiency.
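To make the uniform penalization concrete, the nuclear-norm proximal step in model (3) reduces to singular value soft thresholding (SVT). The following minimal Python sketch (our illustration, not code from the paper; NumPy assumed) shows that every singular value is shrunk by the same amount $\tau$, regardless of its magnitude:

```python
import numpy as np

def svt(G, tau):
    """Singular value thresholding: the proximal operator of tau*||X||_* at G.

    Every singular value is shrunk by the same amount tau, which is why
    large (informative) singular values are penalized as heavily as
    small (noise-dominated) ones.
    """
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```

For example, `svt(np.diag([5.0, 3.0, 0.5]), 1.0)` shrinks the spectrum to `[4, 2, 0]`: the dominant singular value loses as much mass as the smallest one, which is exactly the overshrinking behavior criticized above.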
Recently, the nonconvex relaxation approach has been utilized to approximate the nuclear norm [44]. In particular, a well-known method named the weighted Schatten p-norm model was introduced [45] for hyperspectral image denoising. This method is represented as
$$\min_{X,S}\ \|X\|_{w,S_p}^p + \lambda\|S\|_1 \quad\text{s.t.}\quad Y = X + S + E,\ \|E\|_F \le \xi, \tag{4}$$
where $w$ denotes the weights for the low-rank constraint, $\lambda$ represents the regularization parameter, and $\xi$ denotes the noise level. Here, $\|X\|_{w,S_p}^p = \sum_i w_i \sigma_i^p(X)$, where $w_i$ represents the $i$th non-negative weight and $\sigma_i(X)$ is the $i$th singular value of the matrix $X$; $\|E\|_F$ denotes the Frobenius norm of the matrix $E$.
This weighted Schatten p-norm model can effectively remove noise. However, it is sensitive to the initial parameters, such as the noise level and the weights. Furthermore, the model is difficult to adapt to the removal of mixed noise. Therefore, inspired by the idea presented in previous works [40,41], in this work, we use the matrix LNAM to eliminate mixed noise from images.

3. Proposed Mixed Denoising Algorithm

As mentioned above, hyperspectral images are often contaminated by mixed noise, and a strong structural correlation exists among the image blocks. This situation prompted us to apply the rank function-based method. In this work, we propose a PnP-based LNAM for mixed noise removal from hyperspectral images. Next, we adopt the ADMM optimization algorithm to solve the proposed mixed noise removal model within the PnP framework and develop the corresponding hyperspectral image denoising algorithm.

3.1. PnP-Based LNAM Model

Given that various noises in natural images are independent, we propose the mixed noise removal model based on a logarithm norm-based rank approximation as follows:
$$\min_{X,S}\ \|X\|_L + \lambda\|S\|_1 + \rho\,\phi(X) \quad\text{s.t.}\quad \|Y - X - S\|_F^2 \le \zeta, \tag{5}$$
where $\lambda$ and $\rho$ are the regularization parameters, $Y$ is the corrupted image, $S$ denotes the sparse noise, and $\zeta > 0$. $\|X\|_L$ represents the logarithmic norm-based low-rank function (the subscript "L" is the first letter of "logarithm"), which can be expressed as
$$\|X\|_L = \sum_{i=1}^{\min\{m_1, m_2\}} \log\bigl(\sigma_i^p(X) + \delta\bigr), \tag{6}$$
where $X$ denotes a clean image of size $m_1 \times m_2$, $\sigma_i(X)$ represents the $i$th singular value of $X$, $0 < p \le 1$, and $\delta > 0$ is a small constant that keeps the argument of the logarithm away from zero.
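The logarithmic norm in (6) is straightforward to evaluate from the singular values. The following small Python sketch (our illustration; NumPy assumed, and the default values of $p$ and $\delta$ here are arbitrary placeholder choices, not the paper's settings):

```python
import numpy as np

def log_norm(X, p=1.0, delta=1e-2):
    """Logarithmic norm ||X||_L = sum_i log(sigma_i^p(X) + delta) from (6).

    delta > 0 keeps the argument of the logarithm away from zero, and
    p in (0, 1] controls how aggressively small singular values are
    discounted relative to large ones.
    """
    sigma = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(np.log(sigma**p + delta)))
```

Unlike the nuclear norm, this surrogate grows only logarithmically in the large singular values, so they are penalized far less heavily than the small, noise-dominated ones.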
In model (5), $\phi(X)$ denotes an implicit regularizer exploiting certain priors of natural images, which can be selected from many well-known denoisers, such as the BM3D denoiser [8], the DnCNN denoiser [46], and FFDNet [47]. In this work, the BM3D denoiser is selected as the embedded regularization module. In summary, $\|X\|_L$ characterizes the global information of the original image, i.e., its low-rankness. Additionally, the image details can be preserved by plugging the regularization module $\phi(X)$ into the PnP framework. To preserve both the global structure and the detailed information of the image, these two complementary modules are used in our work.
Compared with the nuclear norm, the logarithmic norm-based low-rank function achieves stronger sparsity on real images. Following a previous work [48], suppose that a constant $M$ bounds the feasible set, i.e., $|x| \le M$; then, the convex envelope of $\operatorname{rank}(x)$ on this set is $|x|/M$. As the positive constant $\delta \to 0$, the logarithmic function is clearly closer to $\operatorname{rank}(x)$ than this convex envelope. Therefore, the logarithmic function promotes sparsity more strongly than the nuclear norm.
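This envelope comparison can be checked numerically. In the sketch below (illustrative only; $M = 1$ and $\delta = 10^{-3}$ are assumed values, and the rescaling is ours), the logarithm of a small nonzero $x$, normalized to $[0, 1]$, sits much closer to the rank value 1 than the convex envelope $|x|/M$ does:

```python
import numpy as np

def normalized_log(x, M=1.0, delta=1e-3):
    """log(|x| + delta), rescaled to equal 0 at x = 0 and 1 at |x| = M.

    As delta -> 0, this rescaled curve approaches the 0/1 step function
    rank(x), while the convex envelope |x|/M remains linear.
    """
    num = np.log(np.abs(x) + delta) - np.log(delta)
    den = np.log(M + delta) - np.log(delta)
    return num / den

# At a small nonzero entry, the log surrogate is far closer to rank = 1
# than the convex envelope |x|/M = 0.1 is.
x = 0.1
print(normalized_log(x))  # about 0.67, versus envelope value 0.1
```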

3.2. Optimization Method

We introduce an auxiliary variable $L$ to address the PnP-based logarithmic norm approximation model (5). Correspondingly, model (5) can be rewritten as
$$\min_{X,L,S}\ \|X\|_L + \lambda\|S\|_1 + \rho\,\phi(L) \quad\text{s.t.}\quad \|Y - X - S\|_F^2 \le \zeta,\ X = L. \tag{7}$$
Furthermore, the augmented Lagrangian function of (7) is constructed as
$$\mathcal{L}(X, L, S, \Lambda_1, \Lambda_2) = \|X\|_L + \lambda\|S\|_1 + \langle \Lambda_1,\, Y - X - S \rangle + \frac{\beta_1}{2}\|Y - X - S\|_F^2 + \rho\,\phi(L) + \langle \Lambda_2,\, X - L \rangle + \frac{\beta_2}{2}\|X - L\|_F^2, \tag{8}$$
where $\Lambda_1, \Lambda_2$ denote the Lagrangian multipliers, and $\beta_1, \beta_2$ represent the penalty parameters. Within the framework of ADMM, we minimize the augmented Lagrangian function (8) with an alternating strategy; i.e., at the $(k+1)$th step, we update the solution by fixing some of the variables and solving for the remaining ones. The proposed mixed noise removal method can thus be divided into the following three subproblems and is summarized in Algorithm 1.
(1) X-Subproblem
Given S k and L k , we update X k as
$$\begin{aligned} X^{k+1} &= \arg\min_X\ \|X\|_L + \langle \Lambda_1,\, Y - X - S^k \rangle + \frac{\beta_1}{2}\|Y - X - S^k\|_F^2 + \langle \Lambda_2,\, X - L^k \rangle + \frac{\beta_2}{2}\|X - L^k\|_F^2 \\ &= \arg\min_X\ \|X\|_L + \frac{\beta_1}{2}\Bigl\|X - \Bigl(Y - S^k + \frac{\Lambda_1}{\beta_1}\Bigr)\Bigr\|_F^2 + \frac{\beta_2}{2}\Bigl\|X - L^k + \frac{\Lambda_2}{\beta_2}\Bigr\|_F^2 \\ &= \arg\min_X\ \|X\|_L + \frac{\beta_1 + \beta_2}{2}\Bigl\|X - \frac{\beta_1 A + \beta_2 B}{\beta_1 + \beta_2}\Bigr\|_F^2, \end{aligned} \tag{9}$$
where $A = Y - S^k + \Lambda_1/\beta_1$ and $B = L^k - \Lambda_2/\beta_2$. We introduce the following theorem to obtain the solution of (9).
Theorem 1 (Logarithmic Singular Value Thresholding [40]).
Let $G \in \mathbb{R}^{m_1 \times m_2}$ be a given matrix with SVD $G = U_G \Sigma_G V_G^T$, where $\Sigma_G$ is the diagonal matrix whose diagonal elements are the singular values of $G$. For any $\alpha > 0$, the closed-form solution of the following problem,
$$\min_X\ \alpha\|X\|_L + \frac{1}{2}\|X - G\|_F^2, \tag{10}$$
is given by $X^\star = U_G\, \mathcal{T}_{\alpha,\xi}(\Sigma_G)\, V_G^T$, where $\mathcal{T}_{\alpha,\xi}(\cdot)$ represents the logarithmic singular value thresholding function, applied entrywise to the singular values, which can be expressed as
$$\mathcal{T}_{\alpha,\xi}(x) = \begin{cases} 0, & \Delta \le 0, \\ \arg\min_{y \in \{0,\ (x - \xi + \sqrt{\Delta})/2\}} \varphi(y), & \Delta > 0, \end{cases} \tag{11}$$
where $\Delta = (x - \xi)^2 - 4(\alpha - x\xi)$ and $\varphi(y) = \alpha\log(y + \xi) + (y - x)^2/2$.
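Theorem 1 translates directly into code: threshold each singular value with $\mathcal{T}_{\alpha,\xi}$ and rebuild the matrix. The following Python sketch (our illustration of (10) and (11), not the authors' implementation; NumPy assumed):

```python
import numpy as np

def log_threshold(x, alpha, xi):
    """Scalar logarithmic thresholding T_{alpha,xi}(x) from Theorem 1."""
    delta = (x - xi)**2 - 4.0 * (alpha - x * xi)
    if delta <= 0:
        return 0.0
    # Compare the objective at the two candidate minimizers in (11).
    phi = lambda y: alpha * np.log(y + xi) + 0.5 * (y - x)**2
    return min([0.0, (x - xi + np.sqrt(delta)) / 2.0], key=phi)

def log_svt(G, alpha, xi):
    """Closed-form minimizer of alpha*||X||_L + 0.5*||X - G||_F^2 (eq. 10)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_new = [log_threshold(v, alpha, xi) for v in s]
    return U @ np.diag(s_new) @ Vt
```

In contrast to the uniform shrinkage of nuclear-norm SVT, large singular values here are shrunk only slightly while small ones are set exactly to zero.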
(2) L-Subproblem
Given $X^{k+1}$ and $S^k$, we update $L$ as
$$L^{k+1} = \arg\min_L\ \rho\,\phi(L) + \frac{\beta_2}{2}\Bigl\|X^{k+1} - L + \frac{\Lambda_2}{\beta_2}\Bigr\|_F^2. \tag{12}$$
Let $\hat{\sigma}^2 = \rho/\beta_2$. Equation (12) can then be represented as
$$L^{k+1} = \operatorname{prox}_{\phi}\Bigl(X^{k+1} + \frac{\Lambda_2}{\beta_2}\Bigr) = \arg\min_L\ \phi(L) + \frac{1}{2\hat{\sigma}^2}\Bigl\|X^{k+1} - L + \frac{\Lambda_2}{\beta_2}\Bigr\|_F^2, \tag{13}$$
where $\operatorname{prox}_{\phi}(\cdot)$ denotes the proximal operator of the regularizer, which is replaced by the embedded denoiser. BM3D [8] and FFDNet [47] are both well-known image denoisers. The main advantage of the BM3D denoiser is that it characterizes the piecewise smoothness and the nonlocal self-similarity of images in a 3D transform domain. Recently, deep learning-based image denoisers have shown promising performance; however, they require a massive amount of training data, and such datasets are difficult to obtain. Therefore, the BM3D denoiser [8] is selected as the module within the PnP framework. By plugging in the BM3D denoiser, the solution can be expressed as
$$L^{k+1} = \mathrm{BM3D}\Bigl(X^{k+1} + \frac{\Lambda_2}{\beta_2},\ \hat{\sigma}\Bigr). \tag{14}$$
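The PnP step (14) only requires a black-box denoiser taking an image and a noise level. In the hedged sketch below (NumPy assumed), a 3×3 mean filter stands in for BM3D purely for illustration; in the actual algorithm, a BM3D routine would be plugged into the same `denoise` slot:

```python
import numpy as np

def box_denoise(img, sigma):
    """Stand-in denoiser (3x3 mean filter with edge padding).

    This is NOT BM3D; it merely illustrates the plug-in interface. Any
    off-the-shelf denoiser with the signature denoise(image, sigma),
    such as BM3D, can be substituted here. sigma is ignored by this
    simple filter but is required by real denoisers.
    """
    pad = np.pad(img, 1, mode='edge')
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def update_L(X, Lambda2, beta2, rho, denoise=box_denoise):
    """PnP L-update (14): L^{k+1} = D(X^{k+1} + Lambda2/beta2, sigma_hat)."""
    sigma_hat = np.sqrt(rho / beta2)
    return denoise(X + Lambda2 / beta2, sigma_hat)
```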
(3) S-Subproblem
Given X k + 1 and L k + 1 , we update S k as
$$\begin{aligned} S^{k+1} &= \arg\min_S\ \lambda\|S\|_1 + \langle \Lambda_1,\, Y - X^{k+1} - S \rangle + \frac{\beta_1}{2}\|Y - X^{k+1} - S\|_F^2 \\ &= \arg\min_S\ \lambda\|S\|_1 + \frac{\beta_1}{2}\Bigl\|Y - X^{k+1} - S + \frac{\Lambda_1}{\beta_1}\Bigr\|_F^2. \end{aligned} \tag{15}$$
We apply the soft thresholding operator to solve subproblem (15). The operator is defined as $\operatorname{soft}_\tau(x) = \max(|x| - \tau, 0)\,\operatorname{sgn}(x)$, where $x$ denotes the variable and $\tau$ represents the threshold. Accordingly, the solution of (15) can be represented as
$$S^{k+1} = \operatorname{soft}_{\lambda/\beta_1}\Bigl(Y - X^{k+1} + \frac{\Lambda_1}{\beta_1}\Bigr). \tag{16}$$
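Equations (15) and (16) amount to elementwise soft thresholding of the residual. A minimal Python sketch (our illustration; NumPy assumed):

```python
import numpy as np

def soft(x, tau):
    """Soft thresholding: max(|x| - tau, 0) * sgn(x), applied elementwise."""
    return np.maximum(np.abs(x) - tau, 0.0) * np.sign(x)

def update_S(Y, X, Lambda1, beta1, lam):
    """S-update (16): shrink the residual toward zero to isolate sparse noise."""
    return soft(Y - X + Lambda1 / beta1, lam / beta1)
```

Entries of the residual smaller than $\lambda/\beta_1$ in magnitude are set to zero, so only large, impulse-like deviations survive into $S^{k+1}$.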
(4) Update Multipliers
The Lagrangian multipliers are updated as follows:
$$\begin{cases} \Lambda_1 \leftarrow \Lambda_1 + \beta_1\,(Y - X^{k+1} - S^{k+1}), \\ \Lambda_2 \leftarrow \Lambda_2 + \beta_2\,(X^{k+1} - L^{k+1}). \end{cases} \tag{17}$$
Algorithm 1. ADMM for Solving the PnP-Based LNAM Model.
Input: the noisy image $Y$, parameters $\lambda$, $\rho$, and stopping criterion $\varepsilon$.
Initialization: $t = 0$; let $X$, $L$, $S$, and the Lagrangian multipliers $\Lambda_1$, $\Lambda_2$ be zero matrices; penalty parameters $\beta_1 = 1.1$, $\beta_2 = 1.2$.
Step 1: Calculate $X$ via (9).
Step 2: Calculate $L$ via (14).
Step 3: Calculate $S$ via (16).
Step 4: Update the multipliers $\Lambda_1$, $\Lambda_2$ via (17).
Step 5: Check the convergence criterion: $\|X^{t+1} - X^t\|_F / \|X^t\|_F \le \varepsilon$.
Step 6: If the convergence criterion is not met, set $t = t + 1$ and go to Step 1.
Output: the restored HSI $X$.
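Putting the pieces together, Algorithm 1 can be sketched end to end as follows. This is an illustrative Python rendition, not the authors' MATLAB code: a 3×3 mean filter stands in for BM3D, and the default values of $\lambda$, $\rho$, and $\xi$ are placeholder choices (NumPy assumed):

```python
import numpy as np

def log_threshold(x, alpha, xi):
    # Scalar thresholding T_{alpha,xi}(x) from Theorem 1 (eq. 11).
    d = (x - xi)**2 - 4.0 * (alpha - x * xi)
    if d <= 0:
        return 0.0
    phi = lambda y: alpha * np.log(y + xi) + 0.5 * (y - x)**2
    return min([0.0, (x - xi + np.sqrt(d)) / 2.0], key=phi)

def log_svt(G, alpha, xi):
    # Closed-form minimizer of alpha*||X||_L + 0.5*||X - G||_F^2 (eq. 10).
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return U @ np.diag([log_threshold(v, alpha, xi) for v in s]) @ Vt

def soft(x, tau):
    return np.maximum(np.abs(x) - tau, 0.0) * np.sign(x)

def mean_denoise(img, sigma):
    # Stand-in for BM3D: 3x3 mean filter; sigma is unused by this filter.
    pad = np.pad(img, 1, mode='edge')
    return sum(pad[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def pnp_lnam(Y, lam=0.1, rho=0.05, beta1=1.1, beta2=1.2,
             xi=0.1, eps=1e-4, max_iter=100, denoise=mean_denoise):
    """ADMM loop of Algorithm 1 (illustrative parameter defaults)."""
    X = np.zeros_like(Y); L = np.zeros_like(Y); S = np.zeros_like(Y)
    A1 = np.zeros_like(Y); A2 = np.zeros_like(Y)
    for _ in range(max_iter):
        X_old = X
        # X-subproblem (9): logarithmic SVT of the weighted average target.
        A = Y - S + A1 / beta1
        B = L - A2 / beta2
        X = log_svt((beta1 * A + beta2 * B) / (beta1 + beta2),
                    1.0 / (beta1 + beta2), xi)
        # L-subproblem (14): plug-and-play denoising step.
        L = denoise(X + A2 / beta2, np.sqrt(rho / beta2))
        # S-subproblem (16): soft thresholding of the residual.
        S = soft(Y - X + A1 / beta1, lam / beta1)
        # Multiplier updates (17).
        A1 = A1 + beta1 * (Y - X - S)
        A2 = A2 + beta2 * (X - L)
        # Stopping criterion of Step 5.
        if np.linalg.norm(X - X_old) <= eps * (np.linalg.norm(X_old) + 1e-12):
            break
    return X
```

With a real BM3D binding substituted for `mean_denoise`, this loop matches the structure of Algorithm 1 step for step.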

4. Experimental Results

Simulated and real HSI image sets are selected to evaluate the performance of the proposed method. Meanwhile, we conduct comparison experiments on these HSI datasets with other mixed noise removal algorithms, including the modified BM3D method [8], low-rank matrix recovery (LRMR) [25], low-rank global total variation (LRGTV) [31], and the weighted group sparsity-regularized low-rank tensor decomposition (LRTDGS) method [35]. In all experiments, each band of the HSI data is normalized into [0, 1], and the parameters of the comparison methods are set to the values suggested in the original articles. Moreover, the modified BM3D method proposed in [8] is used to remove the Gaussian noise: before denoising, the sparse noise is detected and removed through adaptive median filtering, after which BM3D removes the Gaussian noise. Hence, the modified BM3D method is called A-BM3D.
All algorithms were simulated in MATLAB R2018 on a 64-bit Windows 10 operating system with a 2.6 GHz CPU and 16 GB of memory. The configuration of the experimental environmental parameters is summarized in Table 1.

4.1. Simulated Data Experiment

In this study, the ground truth of the Simu–Indian data [50] and the Pavia City Center data [51] are adopted to generate the synthetic data for our experiments. The sizes of the Simu–Indian and the Pavia data are 145 × 145 × 224 and 200 × 200 × 80, respectively. In addition, we normalize each band of the HSI data into [0, 1] and consider the synthetic HSI data as the clean data. The mean of the peak signal-to-noise ratio (MPSNR) and the mean of structural similarity (MSSIM) over all the bands are utilized to assess the performances of different mixed noise removal algorithms. For the generation of a noisy image, Gaussian and S&P noises are added into all the bands of the clean HSI data, as in the following two cases:
Case 1: In this case, the noise intensity is equal in all bands. First, we add the Gaussian noise with a zero mean into all bands with the noise standard variances G = 0.025, 0.05, 0.075, and 0.10. Second, we add S&P noise into all bands with the noise proportions S&P = 0.05, 0.10, 0.15, and 0.20.
Case 2: In contrast to that in Case 1, the noise intensity in different bands differs in Case 2. We add different zero-mean Gaussian noises into each band. In contrast to that in Case 1, the Gaussian noise variance is randomly selected from 0.02 to 0.10. Then, different percentages of S&P noise, which are randomly selected from [0.10, 0.20], are added into each band. In addition, five selected bands of the Simu–Indian data and 10 selected bands of the Pavia City Center data are corrupted with 10 and 15 stripes, respectively.
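The per-band degradation described above can be reproduced with a few lines of Python. The sketch below is illustrative (NumPy assumed; the function name and defaults are ours), mirroring the Case 1 setup of zero-mean Gaussian noise plus S&P noise on a band normalized into [0, 1]:

```python
import numpy as np

def add_mixed_noise(clean, g_sigma=0.10, sp_ratio=0.20, seed=0):
    """Add zero-mean Gaussian noise (std g_sigma) plus salt-and-pepper
    noise (a fraction sp_ratio of pixels forced to 0 or 1) to a band
    with values in [0, 1], as in the Case 1 simulation.
    """
    rng = np.random.default_rng(seed)
    noisy = clean + rng.normal(0.0, g_sigma, clean.shape)
    mask = rng.random(clean.shape)
    noisy[mask < sp_ratio / 2] = 0.0                          # pepper
    noisy[(mask >= sp_ratio / 2) & (mask < sp_ratio)] = 1.0   # salt
    return np.clip(noisy, 0.0, 1.0)
```

For Case 2, one would simply draw `g_sigma` and `sp_ratio` per band from the stated ranges instead of keeping them fixed.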
Table 2 and Table 3 report the comparison results of different denoising methods for the Simu–Indian and Pavia datasets in the above two cases. MPSNR and MSSIM are used to evaluate the performances of the different denoising algorithms. These two tables show that, on the whole, the proposed algorithm provides satisfactory PSNR and SSIM values in most cases compared with the other methods, which confirms the advantages of the proposed algorithm in mixed noise denoising. For the Simu–Indian data, the performance of the proposed algorithm is close to that of the LRTDGS algorithm when the mixed noise intensity is low. For the Pavia data, the quality results of the LRGTV method are likely the best because the LRGTV algorithm processes all the patches together and uses spatial–spectral total variation regularization to recover the whole 3D HSI. The restoration effect of the LRMR algorithm is relatively unsatisfactory when the Gaussian noise is strong. Although the A-BM3D algorithm adopts the adaptive filter to remove S&P noise, its denoising effect is not ideal when the density of the S&P noise is high. Table 3 shows that, surprisingly, the LRTDGS algorithm performs poorly on the Pavia data.
Figure 1 and Figure 2 provide a visual comparison of the different methods based on their restoration results for the Simu–Indian dataset. In Figure 1, the zero-mean Gaussian noise standard variance is 0.10, and the S&P noise intensity is 0.10. Meanwhile, in Figure 2, we set the Gaussian intensity to be the same as that in Figure 1, but the S&P noise intensity is 0.20. Furthermore, the same subregion of each subfigure is marked with red boxes and enlarged. Figure 1 and Figure 2 show that all the compared algorithms can remove mixed noise to some extent. The image tends to be blurry after the A-BM3D method is used. Although the LRMR-based algorithms can remove noise and preserve spectral information, they cannot remove the Gaussian noise completely. LRGTV, by taking advantage of the whole 3D structure and spatial–spectral total variation regularization, can obtain satisfactory denoising results; however, it fails to recover the local details well. The performance of the proposed method is close to that of the LRTDGS algorithm, mainly because we use the logarithm norm and the PnP prior to describe the global structure and nonlocal similarity of the HSI image.
The visual results of the different denoising methods for the Pavia dataset are presented in Figure 3 and Figure 4. The noise intensity in these figures is the same as that in Figure 1 and Figure 2. Figure 3 and Figure 4 show that the denoising performance of the proposed method is satisfactory. However, Figure 4 illustrates that LRGTV is the best algorithm, mainly because it employs the global structure and the spectral information in the low-rank constraint. Compared with the LRGTV method, the proposed method is more sensitive to S&P noise when the noise level is strong. We will address this issue in our future work.
Figure 5, Figure 6, Figure 7 and Figure 8 provide the PSNR and SSIM values of each band for the Simu–Indian and Pavia datasets, respectively. As shown in Figure 5 and Figure 6, the proposed algorithm presents satisfactory PSNR and SSIM values for almost all bands in the Simu–Indian dataset, indicating that the proposed algorithm outperforms the algorithms for comparison in mixed noise removal. As mentioned above, and as illustrated in Figure 7 and Figure 8, LRGTV achieves the best PSNR and SSIM values for each band in the Pavia dataset. However, the performance of the proposed method is relatively weak. The main reason for this result is not yet clear and will be addressed in our next work.

4.2. Real Experiments

Only the Hyper-spectral Digital Imagery Collection Experiment urban dataset, which can be downloaded online [52], is utilized in this experiment and described in this paper due to space limitations. The size of the urban image is 307 × 307 × 210. Figure 9 shows the real-world urban data.
Figure 10 and Figure 11 present bands 83 and 205 of the restored images. As shown in Figure 10, the restoration result of A-BM3D is oversmoothed, causing the local details to become distorted. Most other methods, such as LRMR and LRGTV, can effectively remove noise from the urban image. Overall, the results show that the proposed algorithm performs satisfactorily. However, when the band is in the range of [199, 210], the stripes are considered to be the low-rank part, which is assumed to be the clean data, in the low-rank decomposition. Although we use PnP-based regularization to mine the spatial information of the real urban image, the proposed method cannot completely remove the stripes in Figure 11. Therefore, we will explore and address the reason for this problem in our future work.
Figure 12 shows the vertical mean profiles of band 205 before and after restoration. Concretely, it illustrates the spectral curves at one spatial location of the results restored by the different algorithms. In this figure, the horizontal axis represents the band index, and the vertical axis represents the mean digital number value of each column. Rapid fluctuations are observed in the curve owing to the presence of mixed noise. After restoration, the fluctuations are more or less suppressed. Here, the proposed method appears to perform satisfactorily, in accordance with the visual results presented in Figure 11. In summary, the observations in Figure 12 show that the proposed algorithm achieves satisfactory results in mixed noise removal and fine-detail preservation. The reason our method performs well is that it utilizes the logarithm norm-based rank function to exploit the global information and the PnP regularization module to preserve the details of the image. Furthermore, the logarithm-norm rank function eliminates small singular values, which helps reconstruct the global structure information but also results in the loss of some image details; these details are then restored by the BM3D regularization method.

4.3. Performance Analysis

Generally speaking, HSI mixed noise removal is a highly ill-posed problem. In this work, we introduce a PnP prior to make the problem yield feasible results. The nonconvex optimization of the proposed model is challenging, and even with the introduction of auxiliary variables and the ADMM scheme, convergence is a noted concern.
Therefore, we show the traces of the quality index PSNR with respect to the iterations in Figure 13 to further verify the stability of the proposed algorithm. Figure 13 provides the curve of PSNR vs. iteration number for the Simu–Indian and Pavia datasets. The Gaussian and S&P noise intensities are set as 0.10 and 0.20, respectively. Figure 13 shows that, when the iteration number exceeds 60, the PSNR value tends to be stable. Therefore, the effectiveness of the proposed algorithm is further demonstrated by these experimental results.
Finally, we provide the computational time of the different methods in Table 4. Note that all the results are implemented in MATLAB R2018. The Gaussian and S&P noise intensities are also set as 0.10 and 0.20, respectively. As shown in Table 4, most of the denoising methods have high computational efficiency. A-BM3D has the shortest running time. However, the proposed algorithm has relatively low computational efficiency, mainly because we use the PnP-based BM3D module to restore the HSI image, which is highly time-consuming. Concretely, this is mainly because the whole HSI image has been divided into image patches, and each image patch is restored by using the BM3D module separately.

5. Conclusions

We propose a logarithm norm nonconvex approximation-based HSI algorithm for mixed noise removal. Specifically, the logarithm norm-based nonconvex low-rank regularizer is used to characterize the global spatial–spectral correlation among all hyperspectral image bands, and PnP-based regularization is introduced to further exploit the local detailed information of HSI. Then, we develop the ADMM optimization scheme to address the proposed model. Finally, through simulations, real experiments, and discussion, we demonstrate quantitatively and qualitatively that the proposed algorithm achieves satisfactory performance: the logarithm norm-based low-rank regularizer helps restore the global information of the target hyperspectral image, while the embedded BM3D denoiser helps preserve the image details and remove the structured image noise. Our future work will include investigating novel mixed noise removal algorithms by applying other technologies, such as LSM modeling, deep convolutional neural networks, attention mechanisms, and transformer frameworks.

Author Contributions

J.L. conceived the idea, designed the experiments, and wrote the paper; J.W. and M.X. helped to analyze the experimental data; and Y.H. helped to review this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Science and Technology Research Program of Shangrao (No. 2021J005). This work was also supported by the Natural Science Foundation of Sichuan (No. 2022NSFSC0557).

Data Availability Statement

The ground truth of the Simu–Indian data used in this study can be downloaded at https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 12 March 2022) [50], and the Pavia City Center data used in our work can be downloaded from http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 19 March 2022) [51]. The Hyper-spectral Digital Imagery Collection Experiment (HYDICE) urban dataset can be downloaded online from [52].

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Zhao, X.-L.; Wang, F.; Huang, T.; Ng, M.K.; Plemmons, R. Deblurring and sparse unmixing for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4045–4058.
2. Ma, Y.; Lei, Y.; Wang, T. A natural scene recognition learning based on label correlation. IEEE Trans. Emerg. Top. Comput. Intell. 2022, 6, 150–158.
3. Zha, Z.; Wen, B.; Yuan, X.; Zhou, T.; Zhou, J.; Zhu, C. Triply complementary priors for image restoration. IEEE Trans. Image Process. 2021, 30, 5819–5834.
4. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711.
5. Rahman, S.; Ahmad, M.O.; Swamy, M.N. Bayesian wavelet-based image denoising using the Gaussian–Hermite expansion. IEEE Trans. Image Process. 2008, 17, 1755–1771.
6. Oliveira, J.; Bioucas, J.M.; Figueiredo, M. Adaptive total variation image deblurring: A majorization–minimization approach. Signal Process. 2009, 89, 1683–1693.
7. Zhang, X.; Wu, X. Image interpolation by 2-D autoregressive modeling and soft-decision estimation. IEEE Trans. Image Process. 2008, 17, 887–896.
8. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
9. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
10. Wu, W.; Jia, Y.; Li, P.; Zhang, J.; Yuan, J. Manifold kernel sparse representation of symmetric positive-definite matrices and its applications. IEEE Trans. Image Process. 2015, 24, 3729–3741.
11. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Trans. Image Process. 2016, 25, 2337–2351.
12. Hwang, H.; Haddad, R.A. Adaptive median filters: New algorithms and results. IEEE Trans. Image Process. 1995, 4, 499–502.
13. Nikolova, M. A variational approach to remove outliers and impulse noise. J. Math. Imaging Vis. 2004, 20, 99–120.
14. Cai, J.; Chan, R.; Nikolova, M. Two-phase approach for deblurring images corrupted by impulse plus Gaussian noise. Inverse Probl. Imaging 2008, 2, 187–204.
15. Xiao, Y.; Zeng, T.; Yu, J.; Ng, M. Restoration of images corrupted by mixed Gaussian-impulse noise via L1-L0 minimization. Pattern Recogn. 2011, 44, 1708–1720.
16. Xiong, B.; Yin, Z.P. A universal denoising framework with a new impulse detector and nonlocal means. IEEE Trans. Image Process. 2012, 21, 1663–1675.
17. Liu, L.; Chen, C.; Zhou, Y.; You, X. A new weighted mean filter with a two-phase detector for removing impulse noise. Inf. Sci. 2015, 315, 1–16.
18. Huang, T.; Dong, W.; Xie, X.; Shi, G.; Bai, X. Mixed noise removal via Laplacian scale mixture modeling and nonlocal low-rank approximation. IEEE Trans. Image Process. 2017, 26, 3171–3186.
19. Liu, J.; Tai, X.; Huang, H.; Huan, Z. A weighted dictionary learning model for denoising images corrupted by mixed noise. IEEE Trans. Image Process. 2013, 22, 1108–1120.
20. Jiang, J.; Zhang, L.; Yang, J. Mixed noise removal by weighted encoding with sparse nonlocal regularization. IEEE Trans. Image Process. 2014, 23, 2651–2662.
21. Xie, Y.; Gu, S.; Liu, Y.; Zuo, W.; Zhang, W.; Zhang, L. Weighted Schatten p-norm minimization for image denoising and background subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857.
22. Zhou, P.; Lu, C.; Feng, J.; Lin, C.; Yan, S. Tensor low-rank representation for data recovery and clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 43, 1718–1732.
23. Chen, Y.; Huang, T.; He, W.; Yokoya, N.; Zhao, X.-L. Hyperspectral image compressive sensing reconstruction using subspace-based nonlocal tensor ring decomposition. IEEE Trans. Image Process. 2020, 29, 6813–6828.
24. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Image restoration via reconciliation of group sparsity and low-rank models. IEEE Trans. Image Process. 2021, 30, 5223–5238.
25. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
26. He, W.; Zhang, H.; Zhang, L.; Shen, H. Hyperspectral image denoising via noise-adjusted iterative low-rank matrix approximation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3050–3061.
27. Song, H.; Wang, G.; Zhang, K. Hyperspectral image denoising via low-rank matrix recovery. Remote Sens. Lett. 2014, 5, 872–881.
28. Chen, Y.; Guo, Y.; Wang, Y.; Wang, D.; Peng, C.; He, G. Denoising of hyperspectral images using nonconvex low rank matrix approximation. IEEE Trans. Geosci. Remote Sens. 2017, 55, 5366–5380.
29. Ye, H.; Li, H.; Yang, B.; Cao, F.; Tang, Y. A novel rank approximation method for mixture noise removal of hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 4457–4469.
30. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188.
31. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 713–727.
32. Kim, B.; Divel, S.; Pelc, N.; Baek, J. A methodology to train a convolutional neural network-based low-dose CT denoiser with an accurate image domain noise insertion technique. IEEE Access 2022, 10, 86395–86407.
33. Leontaris, L.; Dimitriou, N.; Ioannidis, D.; Votis, K.; Tzovaras, D.; Papageorgiou, E. An autonomous illumination system for vehicle documentation based on deep reinforcement learning. IEEE Access 2021, 9, 75336–75348.
34. Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Gool, L.; Timofte, R. Plug-and-play image restoration with deep denoiser prior. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6360–6376.
35. Chen, Y.; He, W.; Yokoya, N.; Huang, T. Hyperspectral image restoration using weighted group sparsity-regularized low-rank tensor decomposition. IEEE Trans. Cybern. 2020, 50, 3556–3570.
36. Liu, Y.; Zhao, X.-L.; Zheng, Y.; Ma, T.; Zhang, H. Hyperspectral image restoration by tensor fibered rank constrained optimization and plug-and-play regularization. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5500717.
37. Venkatakrishnan, S.; Bouman, C.; Wohlberg, B. Plug-and-play priors for model based reconstruction. In Proceedings of the IEEE Global Conference on Signal and Information Processing, Austin, TX, USA, 3–5 December 2013; pp. 945–948.
38. Chan, S.H.; Wang, X.; Elgendy, O. Plug-and-play ADMM for image restoration: Fixed-point convergence and applications. IEEE Trans. Comput. Imaging 2017, 3, 84–98.
39. Zhao, X.-L.; Xu, W.; Jiang, T.; Wang, Y.; Ng, M.K. Deep plug-and-play prior for low-rank tensor completion. Neurocomputing 2020, 400, 137–149.
40. Chen, L.; Jiang, X.; Liu, X.; Zhou, Z. Robust low-rank tensor recovery via nonconvex singular value minimization. IEEE Trans. Image Process. 2020, 29, 9044–9059.
41. Chen, L.; Jiang, X.; Liu, X.; Zhou, Z. Logarithmic norm regularized low-rank factorization for matrix and tensor completion. IEEE Trans. Image Process. 2021, 30, 3434–3449.
42. Wright, J.; Ganesh, A.; Rao, S.; Peng, Y.; Ma, Y. Robust principal component analysis: Exact recovery of corrupted low-rank matrices via convex optimization. In Proceedings of the Neural Information Processing Systems, Vancouver, BC, Canada, 7–10 December 2009; pp. 2080–2088.
43. Zhou, Z.; Li, X.; Wright, J.; Candes, E.; Ma, Y. Stable principal component pursuit. In Proceedings of the IEEE International Symposium on Information Theory, Austin, TX, USA, 13–18 June 2010; pp. 1518–1522.
44. Cao, F.; Chen, J.; Ye, H.; Zhao, J.; Zhou, Z. Recovering low-rank and sparse matrix based on the truncated nuclear norm. Neural Netw. 2017, 85, 10–20.
45. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted Schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
46. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155.
47. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for CNN-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622.
48. Fazel, M. Matrix Rank Minimization with Applications. Ph.D. Dissertation, Stanford University, Stanford, CA, USA, 2002.
49. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
50. Available online: https://engineering.purdue.edu/~biehl/MultiSpec/hyperspectral.html (accessed on 12 March 2022).
51. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 19 March 2022).
52. Available online: http://www.tec.army.mil/hypercube (accessed on 20 April 2022).
Figure 1. Restored results of band 35 on Simu–Indian. From top to bottom: the results under a subcase (the standard deviation of zero-mean Gaussian noise is G = 0.10, and the noise proportion of S&P noise is S = 0.10).
Figure 2. Restored results of band 57 on Simu–Indian. From top to bottom: the results under a subcase (the standard deviation of zero-mean Gaussian noise is G = 0.10, and the noise proportion of S&P noise is S = 0.20).
Figure 3. Restored results of band 35 on Pavia. From top to bottom: the results under a subcase (the standard deviation of zero-mean Gaussian noise is G = 0.10, and the noise proportion of S&P noise is S = 0.10).
Figure 4. Restored results of band 57 on Pavia. From top to bottom: the results under a subcase (the standard deviation of zero-mean Gaussian noise is G = 0.10, and the noise proportion of S&P noise is S = 0.20).
Figure 5. PSNR and SSIM values of restored results by different methods on Simu–Indian data (G = 0.10, S = 0.10).
Figure 6. PSNR and SSIM values of restored results by different methods on Simu–Indian data (G = 0.10, S = 0.20).
Figure 7. PSNR and SSIM values of restored results by different methods on Pavia data (G = 0.10, S = 0.10).
Figure 8. PSNR and SSIM values of restored results by different methods on Pavia data (G = 0.10, S = 0.20).
Figure 9. Real-world urban data.
Figure 10. Restoration results on HYDICE urban image set: slight noise band.
Figure 11. Restoration results on HYDICE urban image set: moderate noise band.
Figure 12. The vertical mean profiles of band 205 on a real urban image.
Figure 13. PSNR values with respect to the iterations for different datasets.
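The simulated subcases in the figures above combine zero-mean Gaussian noise with standard deviation G and salt-and-pepper (S&P) noise on a proportion S of pixels. A minimal sketch of such a corruption step follows; the function name and the assumed [0, 1] intensity range are illustrative assumptions, not the paper's code.

```python
import numpy as np

def add_mixed_noise(img, g_std=0.10, sp_ratio=0.10, rng=None):
    """Corrupt an image in [0, 1] with Gaussian noise (std g_std) plus
    salt-and-pepper noise on a fraction sp_ratio of the pixels."""
    rng = np.random.default_rng(0) if rng is None else rng
    # Additive zero-mean Gaussian noise on every pixel.
    noisy = img + rng.normal(0.0, g_std, img.shape)
    # Salt-and-pepper: chosen pixels are forced to 1.0 (salt) or 0.0 (pepper).
    mask = rng.random(img.shape) < sp_ratio
    salt = rng.random(img.shape) < 0.5
    noisy[mask & salt] = 1.0
    noisy[mask & ~salt] = 0.0
    return np.clip(noisy, 0.0, 1.0)
```

Applying S&P after the Gaussian noise keeps the impulse values exact, matching the usual simulation protocol for mixed Gaussian-plus-impulse noise.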
Table 1. Experimental environmental configuration.

| Name | Configuration |
| --- | --- |
| Simulated images and size | Simu–Indian (145 × 145 × 224), Pavia (200 × 200 × 80) |
| Real HSI image and size | Urban (307 × 307 × 210) |
| Performance evaluation | PSNR (dB), SSIM [49] |
| Experimental platform | Windows 10, MATLAB R2018b, 16 GB memory |
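Table 1 lists PSNR (dB) and SSIM [49] as the evaluation indices; for an HSI, the mean PSNR (MPSNR) averages the per-band PSNR values. A minimal numpy sketch of these definitions follows (the `data_range` argument, assuming intensities in [0, 1], is our assumption, not part of the paper).

```python
import numpy as np

def psnr(ref, est, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mpsnr(ref, est, data_range=1.0):
    """Mean PSNR over the spectral bands of an H x W x B hyperspectral cube."""
    return float(np.mean([psnr(ref[..., b], est[..., b], data_range)
                          for b in range(ref.shape[-1])]))
```

For example, an estimate that is uniformly off by 0.1 on a [0, 1]-scaled image has MSE 0.01 and hence a PSNR of 20 dB.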
Table 2. Quantitative evaluation of the different methods on the Simu–Indian dataset.

| Case | Noise Level | Evaluation Index | A-BM3D | LRMR | LRGTV | LRTDGS | Proposed |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1 | G = 0.025, S&P = 0.05 | MPSNR (dB) | 32.7384 | 43.8913 | 48.3861 | 47.7429 | 47.8694 |
| | | MSSIM | 0.9603 | 0.9917 | 0.9966 | 0.9986 | 0.9928 |
| | G = 0.05, S&P = 0.10 | MPSNR (dB) | 31.3988 | 39.4308 | 43.8277 | 44.1583 | 43.9426 |
| | | MSSIM | 0.9459 | 0.9756 | 0.9873 | 0.9950 | 0.9907 |
| | G = 0.075, S&P = 0.15 | MPSNR (dB) | 29.5997 | 36.2251 | 40.2178 | 41.4578 | 40.3526 |
| | | MSSIM | 0.9121 | 0.9492 | 0.9701 | 0.9962 | 0.9821 |
| | G = 0.10, S&P = 0.20 | MPSNR (dB) | 27.2071 | 33.6607 | 37.2842 | 39.0910 | 37.6930 |
| | | MSSIM | 0.8156 | 0.9122 | 0.9448 | 0.9912 | 0.9810 |
| Case 2 | | MPSNR (dB) | 25.0932 | 31.2765 | 34.9218 | 36.2435 | 35.1372 |
| | | MSSIM | 0.7126 | 0.9094 | 0.9343 | 0.9447 | 0.9351 |
Table 3. Quantitative evaluation of the different methods on the Pavia dataset.

| Case | Noise Level | Evaluation Index | A-BM3D | LRMR | LRGTV | LRTDGS | Proposed |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Case 1 | G = 0.025, S&P = 0.05 | MPSNR (dB) | 29.1858 | 40.8327 | 43.1464 | 31.7501 | 42.4572 |
| | | MSSIM | 0.8255 | 0.9871 | 0.9916 | 0.9049 | 0.9689 |
| | G = 0.05, S&P = 0.10 | MPSNR (dB) | 28.4427 | 36.3285 | 38.3019 | 30.3034 | 37.6853 |
| | | MSSIM | 0.8002 | 0.9663 | 0.9756 | 0.8690 | 0.9314 |
| | G = 0.075, S&P = 0.15 | MPSNR (dB) | 27.4632 | 33.2836 | 34.9636 | 29.1936 | 33.7557 |
| | | MSSIM | 0.7656 | 0.9370 | 0.9512 | 0.8352 | 0.8888 |
| | G = 0.10, S&P = 0.20 | MPSNR (dB) | 26.1708 | 31.1647 | 32.3247 | 28.1507 | 31.6958 |
| | | MSSIM | 0.7142 | 0.9026 | 0.9208 | 0.7980 | 0.8402 |
| Case 2 | | MPSNR (dB) | 24.7539 | 30.3447 | 31.5693 | 27.1295 | 30.9436 |
| | | MSSIM | 0.6820 | 0.9083 | 0.9205 | 0.7356 | 0.9347 |
Table 4. Computational times of different methods (unit: s).

| HSI Image | A-BM3D | LRMR | LRGTV | LRTDGS | Proposed |
| --- | --- | --- | --- | --- | --- |
| Simu–Indian | 0.0906 | 62.9295 | 119.9487 | 74.8305 | 961.2586 |
| Pavia | 0.1807 | 54.4320 | 92.1933 | 49.3622 | 723.0967 |
| Urban | 0.5035 | 292.0146 | 507.7214 | 254.9293 | 1669.3409 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, J.; Wu, J.; Xu, M.; Huang, Y. Plug-and-Play-Based Algorithm for Mixed Noise Removal with the Logarithm Norm Approximation Model. Mathematics 2022, 10, 3810. https://doi.org/10.3390/math10203810
