Salt and Pepper Noise Removal with Multi-Class Dictionary Learning and L0 Norm Regularizations

Images may be corrupted by salt and pepper impulse noise during image acquisition or transmission. Although promising denoising performance has recently been obtained with sparse representations, restoring high-quality images remains challenging and open. In this work, image sparsity is enhanced with fast multi-class dictionary learning, and then both the sparsity regularization and the robust data fidelity are formulated as minimizations of L0-L0 norms for salt and pepper impulse noise removal. Additionally, a modified alternating direction minimization algorithm is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art ones in preserving image details and achieving higher objective evaluation criteria.


Introduction
Images may be corrupted by salt and pepper noise when they are acquired by imperfect sensors or transmitted over non-ideal channels [1]. This noise introduces sharp and sudden disturbances into images, and the noise value usually equals either the minimal or maximal pixel value. To remove impulse noise, traditional methods include spatial domain approaches, such as median [1] or adaptive median filtering (AMF) [2], and transform domain approaches, such as wavelet denoising [3,4]. The former try to distinguish noise from meaningful image structures with pre-defined filtering, offering fast computation but limited performance. The latter exploit image sparsity in transform domains, e.g., wavelets and contourlets, to distinguish noise, but have to carefully choose appropriate basis functions to represent different image features. More comparisons can be found in a recent review on impulse noise removal [5]. Recently, transform domain sparsity was incorporated into sparse image reconstruction models [6-8] to significantly improve image quality. The improvement in denoising, however, is still unsatisfactory, since the sparsity is limited by pre-defined dictionaries or transforms that may not capture different image structures [8-11].
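As a concrete baseline, even a plain 3 × 3 median filter suppresses isolated impulses well at moderate noise levels. The sketch below is a minimal numpy illustration (the `add_salt_pepper` and `median3x3` helpers are our own constructions, not taken from the cited methods):

```python
import numpy as np

def add_salt_pepper(img, q, rng):
    """Corrupt a fraction q of pixels with salt (1.0) or pepper (0.0) values."""
    noisy = img.copy()
    mask = rng.random(img.shape) < q
    noisy[mask] = rng.choice([0.0, 1.0], size=int(mask.sum()))
    return noisy

def median3x3(img):
    """Plain 3x3 median filter; edges are handled by reflection padding."""
    p = np.pad(img, 1, mode="reflect")
    h, w = img.shape
    stack = [p[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(stack), axis=0)

rng = np.random.default_rng(0)
clean = np.full((64, 64), 0.5)          # flat test image in [0, 1]
noisy = add_salt_pepper(clean, q=0.3, rng=rng)
denoised = median3x3(noisy)
# The median filter removes most isolated impulses:
print(np.abs(denoised - clean).mean() < np.abs(noisy - clean).mean())  # True
```

At high corruption rates, however, the window median itself becomes an impulse value, which is why the adaptive and sparsity-based methods below are needed.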
State-of-the-art approaches introduce adaptive dictionary learning [8-11] to provide sparser representations, boosting the performance of salt and pepper noise removal. However, the commonly used redundant dictionary learning algorithm, K-SVD [12], is relatively time consuming [13], which may slow down the iterative image reconstruction [8,9,14,15] or lose optimal sparsity when only partial image patches are used for fast training [12]. By exploiting the intrinsic self-similarity of images, the patch-based nonlocal operator (PANO) [16], which originated from the well-known block-matching and 3D filtering [17], has been shown to provide adaptive sparse representations and achieve high-quality reconstructions in impulse noise removal [10,11,16]. Yet, PANO may be suboptimal, since similar patches are sparsified with pre-defined wavelets. Therefore, it is highly desirable to find a new dictionary (or transform) that enables adaptive sparse representation and fast computation while performing well in impulse noise removal.
Even with a proper dictionary, the sparse image reconstruction model for impulse noise removal should be carefully defined. Generally, a restored image is obtained by minimizing a model that balances the sparsity and the data fidelity [10,11,18]. For instance, as a convex approximation of the sparsity, the commonly used L1 norm is biased toward penalizing small sparse coefficients relative to large ones, and thus suffers from losing weak signal details in reconstruction problems such as image reconstruction [19-21], biological spectrum recovery [22-24], and fault detection [25]. Therefore, it is meaningful to incorporate other norms to restore more weak signals.
Beyond the sparsity constraint in sparse image reconstruction models [8-11,16], the data fidelity term, which models the impulse noise with the L1 norm, may not be appropriate. It was recently found that the L0 norm, which models the impulse noise, can be interpreted from the Bayesian view of maximum a posteriori probability [18] and can further improve the denoising performance.
To overcome these limitations of current approaches, in this paper, salt and pepper noise removal is improved in two aspects: (1) sparser representation with a fast transform; and (2) better regularization of the sparsity and data fidelity. Accordingly, three contributions are made: (1) adaptive sparse image representations with fast orthogonal dictionary learning are introduced to improve the denoising performance with updated reference images; (2) L0 norms are introduced to regularize not only the sparsity but also the data fidelity, resulting in a stronger sparse constraint on images and robustness to the outliers introduced by salt and pepper noise; and (3) the impulse denoising model is solved with a feasible numerical algorithm.
The rest of this paper is organized as follows: the typical L1-L1 regularization model is reviewed in Section 2.1, and the proposed method is presented in Section 2.2. Experimental results are analyzed in Section 3, discussions are made in Section 4, and Section 5 presents the conclusions.

Typical Sparsity-Based Impulse Noise Removal Method
The target image is usually assumed to be sparse under a certain dictionary or transform. The sparsity is usually measured by the L1 norm of the representation coefficients [6-8], and this norm can also serve as a feature selection criterion in sparse representation-based image fusion [26-28]. In addition, salt and pepper noise is often treated as an outlier in images, and the L1 norm is employed to constrain the data consistency [29]. Thus, an L1-L1 regularization model is defined to remove impulse noise according to:

min_x λ‖Ψ^T x‖_1 + ‖y − x‖_1,   (1)

where λ balances the sparsity of an image x (under the transform Ψ) and the data fidelity that is robust to outliers in the noisy image y.
How to sparsify the image with a proper Ψ heavily affects the denoising results. A typical choice is a fixed sparse representation, e.g., the finite difference, a basic form of total variation (TV) [18,30] that models piece-wise constant signals, or wavelets, which characterize piece-wise smooth signals [6]. To better capture image features, a dictionary [8,9] or transform [10,11] may be adaptively trained from the noisy image itself. Unlike typical dictionary learning, which is time consuming in iterative image reconstruction [8,9], the recently-proposed PANO not only saves training time (only several seconds) but also provides an adaptively sparse representation of the image by learning its self-similarity [11]. Similar patches are grouped into 3D cubes and then sparsified with 3D wavelets. For removing salt and pepper noise, PANO significantly improved the denoising performance [11] and obtained better results with impulse detection [10]. With PANO, Equation (1) turns into:

min_x λ ∑_{j=1}^{J} ‖A_j x‖_1 + ‖W(y − x)‖_1,   (2)

where ∑_{j=1}^{J} ‖A_j x‖_1 promotes the sparsity of J groups of similar patches and W is a diagonal matrix whose entries stand for weights on pixels. A reasonable weighting penalizes the noisy pixels much more heavily than the noise-free ones [10]. It is worth noting that the penalty function ∑_{j=1}^{J} ‖A_j x‖_1 promotes intrinsic group sparsity [31], which is important not only in image denoising [32,33] but also in other applications [34].
Although the model in Equation (2) has shown promising performance in salt and pepper noise removal, it has two limitations: (1) the sparsity is insufficient, since the 3D wavelets in PANO are still pre-defined bases; and (2) the L1 norm is only an approximation of the sparsity, which may lose weak image structures, e.g., small edges, in the reconstruction [19-21], or lack robustness to impulse noise [18].

Proposed Method
In this work, we propose an approach that removes impulse noise with fast adaptive dictionary learning to provide image sparsity and formulates the denoising problem with L0 norm regularizations.
The flowchart of the whole scheme is summarized in Figure 1. First, a reference image is obtained from the noisy image using the AMF. Then, geometric directions are learnt from the reference image, and the adaptive sparse transform is obtained via fast dictionary training. Finally, a denoised image is reconstructed using the proposed L0 norm regularization model.
In the following, the essential parts of the approach are given in more detail.

Adaptive Dictionary Learning in Salt and Pepper Noise Removal
We first introduce the Fast Dictionary Learning method on Classified Patches (FDLCP) [13] into salt and pepper noise removal. FDLCP not only inherits fast learning to form orthogonal dictionaries, but also provides sparser representations by training separate dictionaries for different geometrical image patches. As shown in Figure 2, patches that share the same geometric direction are grouped together to form one class, and then this class is used to learn a specific dictionary. Therefore, the dictionary of each class effectively captures the underlying image structures (Figure 2c).

Mathematically, within a class of patches X_ω that shares the same geometric direction ω, an orthogonal dictionary is trained by solving the optimization problem [15]:

min_{D_ω, Z} ‖X_ω − D_ω Z‖_F^2 + η^2 ‖Z‖_0,  s.t.  D_ω^T D_ω = I,   (3)

where D_ω is the dictionary, ‖·‖_F is the Frobenius norm, η is a parameter that decides the sparsity, and I is an identity matrix. We choose orthogonal dictionary learning because it reduces computation significantly [13,15] compared with the redundant dictionaries trained by K-SVD; for example, it has been shown that learning an orthogonal dictionary takes approximately 1% of the time needed for a redundant dictionary in K-SVD [13]. Equation (3) can be solved quickly by alternately computing the sparse coding Z and updating the dictionary D_ω in each iteration [13,15] as follows:

(1) Given the current dictionary D_ω, update the sparse coefficients Z via hard thresholding:

Z = H_η(D_ω^T X_ω),   (4)

where the hard threshold H_c(Q) for a matrix Q is an element-wise operator acting on each entry Q_mn according to H_c(Q_mn) = 0 if |Q_mn| ≤ c and H_c(Q_mn) = Q_mn if |Q_mn| > c, with a threshold c.

(2) Given the coefficients Z, compute the dictionary D_ω according to:

min_{D_ω} ‖X_ω − D_ω Z‖_F^2,  s.t.  D_ω^T D_ω = I.   (5)

The solution of Equation (5) is:

D_ω = PV^T,   (6)

where P and V are the orthogonal matrices of the singular value decomposition

X_ω Z^T = PΔV^T,   (7)

where T denotes the matrix transpose. The solutions in Equations (6) and (7) are known as the orthogonal Procrustes solution, which has been considered in dictionary learning before [35]. In our implementation, the parameter η is set to 0.2 in all experiments.
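The two alternating steps above can be sketched in a few lines of numpy. This is an illustrative sketch, assuming the patches of one class are already vectorized as the columns of a matrix X; the direction classification step of FDLCP is omitted here:

```python
import numpy as np

def hard_threshold(Q, c):
    """H_c: zero every entry with magnitude <= c (Equation (4))."""
    return np.where(np.abs(Q) > c, Q, 0.0)

def learn_orthogonal_dictionary(X, eta=0.2, iters=20):
    """Alternate sparse coding and the Procrustes dictionary update.
    X: (patch_dim, n_patches) matrix of vectorized patches of one class."""
    D = np.eye(X.shape[0])                   # start from the identity dictionary
    Z = np.zeros_like(X)
    for _ in range(iters):
        Z = hard_threshold(D.T @ X, eta)     # sparse coding, Equation (4)
        P, _, Vt = np.linalg.svd(X @ Z.T)    # X Z^T = P Delta V^T, Equation (7)
        D = P @ Vt                           # Procrustes solution, Equation (6)
    return D, Z

rng = np.random.default_rng(1)
X = rng.standard_normal((16, 200))           # toy patch matrix
D, Z = learn_orthogonal_dictionary(X)
print(np.allclose(D.T @ D, np.eye(16)))      # D stays orthogonal: True
```

Because the dictionary update is a single small SVD and the coding step is a single threshold, each iteration is far cheaper than a K-SVD sweep, which matches the roughly 1% training time reported in [13].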
A good feature of FDLCP [15] is that it balances fast learning, via orthogonal dictionaries, against the relatively slow learning of the classic K-SVD [12], while retaining a sparse representation for each class, in contrast to a single orthogonal dictionary for all image patches [13]. FDLCP has been observed to outperform K-SVD in medical image reconstruction [15], but it has not yet been applied to impulse noise removal.
To further derive the mathematical property of FDLCP, let R_j denote the operator that extracts the patches with the jth geometric direction from an image x, and let Ψ^T denote the resulting analysis transform stacked over all J geometric directions. The forward and inverse transforms satisfy:

Ψ Ψ^T = cI,   (8)

where I is an identity matrix and c is the overlapping factor for image patches [15]. By further setting Φ^T = (1/√c) Ψ^T, the transform Φ obeys:

Φ Φ^T = I,   (9)

meaning that the modified FDLCP transform Φ is a tight frame for image reconstruction [36].
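The tight frame property can be checked numerically in a simplified 1D setting: with stride-1 periodic patch extraction, each sample is covered by exactly c = p patches, so the patch analysis/synthesis pair satisfies ΨΨ^T = cI; an orthogonal per-class dictionary does not change this, since D_ω D_ω^T = I. A small sketch under these assumptions (the `patches`/`adjoint` helpers are hypothetical stand-ins for R_j and R_j^T):

```python
import numpy as np

n, p = 12, 4   # signal length, patch size; overlap factor c = p with stride 1

def patches(x):
    """All periodic length-p patches of x with stride 1 (J = n of them)."""
    return np.stack([np.roll(x, -j)[:p] for j in range(n)])

def adjoint(P):
    """Sum every patch back into its signal positions (the transpose operator)."""
    x = np.zeros(n)
    for j in range(n):
        x += np.roll(np.pad(P[j], (0, n - p)), j)
    return x

x = np.random.default_rng(2).standard_normal(n)
# Each sample is covered by exactly c = p patches, so Psi Psi^T = c I:
print(np.allclose(adjoint(patches(x)), p * x))  # True
```

Scaling the analysis operator by 1/√c therefore yields ΦΦ^T = I, which is exactly what makes the least-squares image update in the numerical algorithm below cheap.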
How, then, should the FDLCP representations be trained for salt and pepper noise removal? We suggest learning the FDLCP from a reasonable pre-reconstructed image, e.g., one obtained with the AMF. This strategy has been found to achieve sparse representations with fast computing, not only in impulse noise removal [10,11] but also in other image reconstruction problems [15,16,37].

FDLCP-Based Image Reconstruction with the L0 Norm Regularizations
Two L1 norm terms are defined in Equation (2) for impulse noise removal; they play different roles, and both can be replaced by L0 norms for a better reconstruction. The first term enforces image sparsity, and since the L1 norm may lose weak signals in the reconstruction, researchers tend to measure the sparsity with the L0 norm instead. This modification has been observed to improve reconstruction [8,15,20,21], including impulse noise removal [18]. The second term maintains the data fidelity, where the L1 norm may reduce robustness to impulse noise [6,29,30], while the L0 norm has been observed to significantly improve outlier rejection and edge preservation in the reconstruction [18]. Therefore, L0 norm terms on both the sparsity and the data fidelity are adopted in this work.
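The difference between the two norms is visible directly in their proximal operators: soft thresholding (L1) shrinks every surviving coefficient, while hard thresholding (L0) leaves survivors untouched. A small numerical illustration:

```python
import numpy as np

def soft(a, t):
    """Proximal operator of the L1 norm: shrinks every coefficient toward zero."""
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def hard(a, t):
    """Proximal operator of the L0 norm: surviving coefficients are untouched."""
    return np.where(np.abs(a) > t, a, 0.0)

a = np.array([5.0, 0.8, 0.1])   # strong edge, weak texture, noise
t = 0.5
print(soft(a, t))   # kept coefficients are biased downward: 4.5, 0.3, 0
print(hard(a, t))   # kept coefficients keep their magnitude: 5.0, 0.8, 0
```

The weak texture coefficient 0.8 survives both operators, but the L1 shrinkage more than halves it, which is one mechanism by which small edges are lost.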
The FDLCP-based image reconstruction with the L0 norm regularizations is proposed as follows:

min_x λ‖Φ^T x‖_0 + ‖W(y − x)‖_0,   (10)

where ‖α‖_0 counts the total number of nonzero entries in α = Φ^T x. For simplicity, we name the proposed method FDLCP-L0 in the following descriptions. The two L0 norms significantly boost the FDLCP-based impulse noise removal, as shown in Figure 3.
To better understand the model, we illustrate the L0 norm regularizations from a Bayesian view.
Given the observed noisy image y, the reconstructed image x is obtained by maximizing the posterior probability:

max_x p(x|y) ∝ p(y|x) p(x).   (11)

Since a given x corresponds to the unique transform coefficients Φ^T x, p(y|Φ^T x) = p(y|x). Plugging in the salt and pepper noise model (for pixel values normalized to [0, 1]):

y_j = 0 with probability q/2;  y_j = 1 with probability q/2;  y_j = x_j with probability 1 − q,   (12)

where q is the probability that the jth pixel is contaminated by noise, the first term in Equation (11) becomes [18]:

−ln p(y|x) = ‖y − x‖_0 ln(2(1 − q)/q) + constant.   (13)

Therefore, ignoring the constant in Equation (13), maximizing the posterior probability of x means minimizing ‖y − x‖_0, which is the same as our data consistency term in Equation (10) with W set to the identity matrix.
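The noise model of Equation (12) is easy to simulate, and the simulation confirms that ‖y − x‖_0/n directly estimates the corruption probability q. A hypothetical sketch (a constant test image is used so that an impulse never coincides with the true pixel value):

```python
import numpy as np

def salt_pepper(x, q, rng):
    """Sample y from Equation (12): with probability q a pixel is replaced
    by 0 or 1 (each with probability q/2); otherwise it is kept."""
    y = x.copy()
    hit = rng.random(x.shape) < q
    y[hit] = rng.integers(0, 2, size=int(hit.sum())).astype(float)
    return y

rng = np.random.default_rng(3)
x = np.full(100_000, 0.5)          # true pixels are 0.5, never 0 or 1
y = salt_pepper(x, q=0.4, rng=rng)
# ||y - x||_0 / n is the fraction of changed pixels, an estimate of q:
print(np.mean(y != x))
```

The printed fraction concentrates around q = 0.4 as n grows, which is why ‖y − x‖_0 is the natural data consistency measure for this noise.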
We then use a transform domain prior p(x) = (1/v) e^{−g[Φ^T x]}, where, for α = Φ^T x, g(α_j) = 1 − h if α_j = 0 and g(α_j) = h otherwise, with h the probability of a coefficient being nonzero. Maximizing ln p(x) then leads, up to an additive constant, to minimizing ‖Φ^T x‖_0, which is the same as our sparsity constraint term.
The above analysis implies that the L0 norm in the data consistency term is specifically suited to impulse noise removal, and that the L0 norm, which directly counts the number of non-zero coefficients in the sparsity constraint, evaluates the sparsity better than the L1 norm.

Numerical Algorithm
Directly solving Equation (10) is very hard, since its L0 norm terms are non-smooth and non-differentiable. In this work, we adopt the alternating direction minimization with continuation (ADMC) algorithm [10,16,37] to solve Equation (10). ADMC is chosen because it yields the solution through much easier sub-problems that have analytical solutions, as discussed below.
By introducing a continuation parameter β and augmented variables α and d, a relaxed form of Equation (10) is:

min_{x,α,d} λ‖α‖_0 + ‖d‖_0 + (β/2)‖α − Φ^T x‖_2^2 + (β/2)‖d − W(y − x)‖_2^2,   (14)

where α and d are two augmented variables. When β → ∞, the solution of Equation (14) approaches that of Equation (10), since any nonzero value of α − Φ^T x (or d − W(y − x)) makes (β/2)‖α − Φ^T x‖_2^2 (or (β/2)‖d − W(y − x)‖_2^2) infinite. Therefore, to minimize the cost function of Equation (14), both α = Φ^T x and d = W(y − x) must be satisfied simultaneously, meaning that Equation (14) is equivalent to Equation (10) when β → ∞. In the implementation, we gradually increase β.
While β is fixed at the kth iteration, the solution of Equation (14) is obtained by alternately solving the following sub-problems, each of which has an analytical solution.

(1) Fix d^(k) and x^(k), and solve:

α^(k+1) = argmin_α λ‖α‖_0 + (β/2)‖α − Φ^T x^(k)‖_2^2,   (15)

whose solution is achieved by hard thresholding according to:

α^(k+1) = H_√(2λ/β)(Φ^T x^(k)).   (16)

(2) Fix x^(k) and α^(k+1), and solve:

d^(k+1) = argmin_d ‖d‖_0 + (β/2)‖d − W(y − x^(k))‖_2^2,   (17)

whose solution is also achieved by hard thresholding:

d^(k+1) = H_√(2/β)(W(y − x^(k))).   (18)

(3) Fix α^(k+1) and d^(k+1), and solve:

x^(k+1) = argmin_x ‖α^(k+1) − Φ^T x‖_2^2 + ‖d^(k+1) − W(y − x)‖_2^2,   (19)

which is a least squares problem with the solution:

x^(k+1) = (ΦΦ^T + W^T W)^{−1} (Φα^(k+1) + W^T(Wy − d^(k+1))),   (20)

where the inverse is cheap since ΦΦ^T = I by the tight frame property in Equation (9) and W is diagonal.

The algorithm is summarized in Algorithm 1. It is worth noting that existing ADMC algorithms solve either a reconstruction model minimizing the L1 norm (sparsity regularization) plus the L2 norm (data fidelity) in compressive image recovery [16,38,39], or a denoising model minimizing the L1 norm (sparsity regularization) plus the L1 norm (data fidelity) for impulse noise removal [10,11]. The ADMC algorithm derived in this work solves the denoising model minimizing the L0 norm (sparsity regularization) plus the L0 norm (data consistency) in Equation (14), and has entirely different sub-problems and solutions from the previous ones.
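The three sub-problems can be sketched as follows, with W = I and a random orthogonal matrix standing in for the FDLCP tight frame Φ (so that ΦΦ^T = I and the x-update reduces to x = (Φα + y − d)/2). This is an illustrative toy on a 1D signal, not the full patch-based implementation:

```python
import numpy as np

def hard(a, t):
    """Hard thresholding: the proximal operator of the L0 norm."""
    return np.where(np.abs(a) > t, a, 0.0)

def admc(y, Phi, lam=0.01, beta0=1.0, rho=2.0, outer=12, inner=5):
    """ADMC for Equation (14) with W = I and an orthogonal Phi (Phi Phi^T = I)."""
    x = y.copy()
    beta = beta0
    for _ in range(outer):              # continuation: gradually increase beta
        for _ in range(inner):
            alpha = hard(Phi.T @ x, np.sqrt(2.0 * lam / beta))  # sub-problem (1)
            d = hard(y - x, np.sqrt(2.0 / beta))                # sub-problem (2)
            x = (Phi @ alpha + y - d) / 2.0                     # sub-problem (3)
        beta *= rho
    return x

rng = np.random.default_rng(4)
Phi, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # orthogonal stand-in
coeffs = np.zeros(64)
coeffs[:4] = [3.0, -2.0, 1.5, 1.0]                    # sparse ground truth
x_true = Phi @ coeffs
# Sanity check: a clean, exactly sparse signal is a fixed point of the iteration
print(np.allclose(admc(x_true, Phi), x_true))  # True
```

With a noisy input, the d-variable progressively absorbs the impulse residuals as β grows, while the α-threshold √(2λ/β) shrinks so that more genuine detail coefficients survive.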

Results
The proposed method, FDLCP-L0, is compared with the basic AMF [2], the state-of-the-art PANO-L1 [9], and the total variation (TV)-L0 method [17]. Two objective evaluation criteria, the peak signal-to-noise ratio (PSNR) and the mean measure of structural similarity (MSSIM) [40], which are widely used in image denoising and reconstruction [15,17,37,41,42], are adopted to quantitatively measure the denoising performance. The PSNR measures the average pixel distortion of the denoised image, while the MSSIM focuses on preserved image structures, e.g., local luminance and contrast, which are important for human visual systems [40]. Higher PSNR and MSSIM mean better denoising performance. All computations were performed on a desktop computer with a four-core 3.6 GHz CPU and 16 GB RAM. The typical reconstruction times for a 512 × 512 image with AMF, PANO-L1, TV-L0, and FDLCP-L0 are 0.25 s, 55.7 s, 6.74 s, and 473.22 s, respectively. The test images were downloaded from the website in [43].
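For reference, the PSNR criterion is straightforward to compute; the sketch below shows it for images scaled to [0, 1] (MSSIM requires the full SSIM machinery of [40] and is omitted here):

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB for images with values in [0, peak]."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
img = ref.copy()
img[0, 0] = 0.5                  # one corrupted pixel out of 64
# MSE = 0.25/64, so PSNR = 10*log10(64/0.25) = 10*log10(256) ~ 24.08 dB
print(round(psnr(ref, img), 2))  # 24.08
```

Since PSNR is a log-scale measure of MSE alone, gains of even 1 dB correspond to a sizable error reduction, but structural quality still needs MSSIM as a complement.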

Denoising Performance under a Fixed Noise Level
At the typical noise level of 0.5 (50% of pixels are contaminated by salt and pepper noise), both PANO-L1 and TV-L0 remove noise much better than the classic AMF. However, the two state-of-the-art approaches still lose some straight lines (Figure 4d,e) or weaken some textures (Figure 5d,e). The proposed method reconstructs these image features much better than the others. As listed in Table 1, evaluation criteria including both PSNR and MSSIM indicate that the proposed method reconstructs the images most consistent with the noise-free ones.

Denoising Performance under Different Noise Levels
The denoising performances under different noise levels are evaluated in Figure 6. The AMF is inferior to the other compared methods. TV-L0 outperforms PANO-L1 even though PANO provides an adaptive sparse representation; this observation implies that the L0 norm is more robust than the L1 norm to the outliers introduced by the impulse noise. The proposed method not only makes use of adaptive sparsity in the representation but also incorporates the robust L0 norm to better remove outliers in the data fidelity term. Therefore, improved denoising performance by the proposed method is consistently observed in all experiments with different noise levels.


Discussions
To further confirm the advantage of the L0 norm over the L1 norm, FDLCP-L1 is compared with the proposed FDLCP-L0. Typical denoised images are shown in Figure 7, and quantitative measures are shown in Figure 8 and Tables 2 and 3. Image features are preserved much better with FDLCP-L0 than with FDLCP-L1, and consistent improvements at all noise levels are observed for FDLCP-L0. Therefore, replacing the L1 norm with the L0 norm is valuable for improving the quality of images contaminated by salt and pepper noise.



Conclusions
A new salt and pepper impulse noise removal method is proposed by simultaneously exploring: (1) adaptively sparse representation of images; and (2) better regularization of the sparsity of the representation and of the data fidelity under impulse noise. The former is accomplished via fast orthogonal dictionary learning within multi-class geometric image patches, while the latter is enforced by L0 norm regularizations on both the sparsity and data fidelity constraints. Experimental results demonstrate that the proposed approach outperforms the compared ones in terms of better preserving image structures and achieving higher objective denoising evaluation criteria. A theoretical proof of the advantage of the L0 norm over the L1 norm in impulse noise removal would be interesting future work.

Figure 1. A flowchart of the proposed method on impulse noise removal.


Figure 2. Adaptive representations learnt with FDLCP. (a) Geometric directions estimated from the Barbara image; (b) one class of image patches that share the same geometric direction; (c) one dictionary learnt from the class of patches in (b). Note: red lines in (a) indicate the geometric directions of image patches.


Figure 3. Comparisons of the denoised Barbara images using different norms. (a) is the noise-free image; (b) is the noisy image; (c) and (d) are denoised images using FDLCP with L1 and L0 norm minimizations, respectively. Note: the noise level is 0.5, meaning that 50% of pixels are contaminated by salt and pepper noise.


Figure 5. Denoised Lena images at the noise level of 0.5. (a) is the noise-free image; (b) is the noisy image; (c)-(f) are denoised images using AMF, PANO-L1, TV-L0, and the proposed FDLCP-L0, respectively.

Figure 7. Comparisons of denoised images using FDLCP-L1 and FDLCP-L0. (a) and (b) are the noise-free and noisy images; (c) and (d) are denoised images using FDLCP-L1 and FDLCP-L0, respectively. The first row and the second row are the Boat and Lena images. Note: the salt and pepper noise level is 0.5, and the denoised Barbara images are shown in Figure 3.


Table 1. Quantitative measures at the salt and pepper impulse noise level of 0.5.

Table 2. PSNR performance under the salt and pepper noise using FDLCP with L1 or L0 norms.

Table 3. MSSIM performance under the salt and pepper noise using FDLCP with L1 or L0 norms.
