Article

Image Denoising Using a Novel Deep Generative Network with Multiple Target Images and Adaptive Termination Condition

School of Information Engineering, Nanchang University, Nanchang 330031, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 4803; https://doi.org/10.3390/app11114803
Submission received: 13 April 2021 / Revised: 14 May 2021 / Accepted: 21 May 2021 / Published: 24 May 2021
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Image denoising, a classic ill-posed problem, aims to recover a latent image from a noisy measurement. Over the past few decades, a considerable number of denoising methods have been studied extensively. Among these methods, supervised deep convolutional networks have garnered increasing attention, and their superior performance is attributed to their capability to learn realistic image priors from a large number of paired noisy and clean images. However, if the image to be denoised differs significantly from the training images, such networks can produce inferior results, and may even hallucinate content by applying inappropriate image priors to an unseen noisy image. Recently, the deep image prior (DIP) was proposed, and it overcame this drawback to some extent. The structure of the DIP generator network is capable of capturing the low-level statistics of a natural image using an unsupervised method with no training images other than the image itself. Compared with a supervised denoising model, the unsupervised DIP is more flexible with respect to the image content to be denoised. Nevertheless, the denoising performance of DIP is usually inferior to that of current supervised learning-based methods using deep convolutional networks, and it is susceptible to over-fitting. To solve these problems, we propose a novel deep generative network with multiple target images and an adaptive termination condition. Specifically, we utilize mainstream denoising methods to generate two clear target images to be used alongside the original noisy image, enabling better guidance during the convergence process and improving the convergence speed. Moreover, we adopt a noise level estimation (NLE) technique to set a more reasonable adaptive termination condition, which effectively solves the problem of over-fitting. Extensive experiments demonstrate that the proposed approach significantly outperforms the original DIP method on different databases: the average peak signal-to-noise ratio (PSNR) of our method on four databases at different noise levels is 1.90 to 4.86 dB higher than that of the original DIP method. Moreover, our method achieves superior performance against state-of-the-art methods in terms of popular metrics, including the structural similarity index (SSIM) and feature similarity index measurement (FSIM). Thus, the proposed method lays a good foundation for subsequent image processing tasks, such as target detection and super-resolution.

1. Introduction

During acquisition and transmission, the quality of digital images inevitably degrades owing to corruption from various sources. Therefore, the ability to recover a clean image from a noisy one is of great importance, and image denoising is a fundamental step in all image processing pipelines. In the computer vision field, image denoising has been a research hotspot since the 1990s. After decades of research, many denoising algorithms have achieved good results through approaches such as non-local self-similarity in natural images [1,2,3], low rankness-based models [4,5], sparse representation-based models [3,6,7], and fuzzy (or neuro-fuzzy)-based models [8,9]. Nevertheless, researchers are still aiming to further improve the performance of image denoising algorithms.
Existing denoising algorithms can be roughly divided into internal algorithms and external algorithms [10]. Internal algorithms utilize the noisy image itself, while external algorithms exploit clean, natural images related to the noisy image. Internal image denoising algorithms include filter algorithms, low rankness-based models, and sparse representation-based algorithms. Representative examples of filter algorithms are the non-local means (NLM) algorithm and the block-matching and 3D filtering (BM3D) algorithm. The NLM algorithm [1], proposed by Buades et al. in 2005, exploited non-local self-similarity in natural images. It first finds similar patches and obtains their weighted average to produce the denoised patches. Although it exhibits excellent performance, the NLM algorithm is limited by its inability to identify truly similar patches in a noisy environment. BM3D [2], a benchmark denoising algorithm, starts with block-matching for each reference block and obtains 3D arrays by grouping similar blocks together. The authors used a two-step algorithm to denoise an image. First, they denoised the input image simply and obtained a basic estimate; next, they achieved an improved denoising effect through collaborative filtering of the basic estimate. Among low rankness-based methods, nuclear norm minimization (NNM) and weighted nuclear norm minimization (WNNM) are two well-known algorithms. The NNM algorithm [4] was proposed by Ji et al. for video denoising. In their work, the problem of removing noise was transformed into a low-rank matrix completion problem, which can be well solved by singular value decomposition. However, the authors weighted each singular value equally to ensure the convexity of the objective function, which severely restricts its capability and flexibility when dealing with denoising problems. The WNNM algorithm, proposed in [5], builds on the NNM algorithm and takes advantage of the non-local self-similarity of the image for denoising. Among sparse representation-based algorithms, the K-singular value decomposition (K-SVD) algorithm, the learned simultaneous sparse coding (LSSC) algorithm, and the non-locally centralized sparse representation (NCSR) algorithm are three noteworthy methods. K-SVD [6] is a classic dictionary learning algorithm, which utilizes the sparsity and redundancy of over-complete learned dictionaries to produce high-quality denoised images. LSSC [3] combines the self-similarity of image patches with sparse coding to further boost denoising performance. The NCSR algorithm [7], proposed by Dong et al., utilizes non-local self-similarity and sparse representation of images; it introduces the concept of sparse coding noise and denoises an image by suppressing this noise. In general, most of these traditional denoising methods use hand-crafted image priors and multiple manually selected parameters, leaving ample room for improvement.
In recent years, deep learning-based methods have become a popular research direction in the field of image denoising. These methods can be categorized as external methods whose denoising performance is superior to internal methods. The main idea is to collect a large number of noisy-clean image pairs, and then train a deep neural network denoiser using end-to-end learning. These methods have significant advantages in accumulating knowledge from large datasets; thus, they can achieve superior denoising performance. In 2017, Zhang et al. proposed the denoising convolutional neural network (DnCNN) [11], which exploited the residual learning strategy to remove noise. They introduced the batch normalization technique, as it not only reduced the training time, but also boosted the denoising effect quantitatively and qualitatively. However, it is only effective when the noise level is within a pre-set range. Hence, Zhang et al. proposed FFDNet [12], which showed considerable improvement in flexibility and robustness using a single network. Specifically, it was formulated as $x = F(y, M; \Theta)$, where $x$ is the expected output, $y$ is the input noisy observation, and $M$ is a noise level map. In the DnCNN model $x = F(y; \Theta)$, the parameters $\Theta$ change with the noise level. In the FFDNet model, $M$ is provided as an input, and the network parameters are independent of the noise level. Therefore, FFDNet can handle different noise levels in a flexible manner using a single network. The consensus neural network (CsNet), proposed by Choi et al. [13], combines multiple relatively weak image denoisers to produce a satisfactory result. CsNet exhibits superior performance in three respects: solving the noise level mismatch, incorporating denoisers for different image classes, and uniting different denoiser types. In summary, these supervised denoising networks are exceedingly effective when supplied with plenty of noisy-clean image pairs for training, but collecting ground-truth clean images in many real-world scenarios is very difficult. Moreover, if the learned image priors differ significantly from the image to be denoised, supervised denoising networks tend to produce hallucination-like effects when handling an unseen noisy image, because the previously learned image statistics cannot handle the unseen image content and noise level well. These networks have a strong data dependence [14], leading to a lack of flexibility.
To overcome the aforementioned limitations, researchers have focused on training unsupervised denoising networks that require no training images. Recently, research on generative networks using the deep image prior (DIP) framework [15] demonstrated that even if only the input image itself is used in training, deep convolutional neural networks (CNNs) can still provide superior performance on various inverse problems. No prior training is required, and random noise is used as the network input to generate denoised images. DIP can be widely applied to image noise reduction, super-resolution, and other image restoration problems. Because the hyper-parameters of DIP are determined based on the specific noisy image, it may, in some cases, achieve better denoising results than supervised denoising models. As shown in Figure 1, although the general denoising performance of DIP is inferior to that of DnCNN, some local details, such as the magnified parts of the images, are preserved more accurately by DIP than by DnCNN. The reason is that the prior knowledge captured by DnCNN cannot handle subtle image content that DIP can. DIP performs inference by stopping the training early. However, in the original DIP model, the early stopping point is set as a fixed number of iterations chosen from experimental data, so the result is not always optimal. Furthermore, the noisy image is used as the target image, which provides poor guidance and leads to slow convergence of the generative network. Thus, the denoising performance of DIP is much lower than that of supervised deep learning in some cases, and it still leaves room for improvement. In view of the limitations of the existing DIP method, we propose a novel deep generative network with multiple target images and an adaptive termination condition, which not only retains the flexibility of the original DIP model, but also improves denoising performance. Specifically, in addition to the noisy image, we use two target images of higher quality in the formation of the loss function. Moreover, we adopt a noise level estimation (NLE) method to automatically terminate the iterative process, resolving the early stopping problem, preventing over-fitting, and ensuring an optimal output image.
The remainder of the paper is organized as follows: Section 2 introduces a literature review of related work. In Section 3, we describe the proposed approach in detail. Section 4 discusses our experimental results and analysis. We discuss our current work and future work in Section 5. Finally, we conclude this paper in Section 6.

2. Related Work

2.1. Background

Image denoising is the most fundamental inverse problem of image processing, and its purpose is to recover the underlying image from its noisy measurement. In most cases, image denoising is an ill-posed problem; based on the noisy observation, we can always find many reasonable images that could belong to the clean image manifold. The image denoising problem can be described by a simple mathematical formula:
$$ y = x + n \quad (1) $$
where $y$ is the noisy observed image and $x$ is the clean, noise-free image. Generally, $n$ is assumed to be additive white Gaussian noise (AWGN), which is widely used in the field of image denoising. The denoising problem consists of finding the denoised image $\hat{x}$ that is closest to the ground-truth image.
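For concreteness, the observation model in Equation (1) can be simulated in a few lines of NumPy. The sketch below is illustrative only; the function name and the [0, 255] pixel range are our own choices rather than part of the original formulation:

```python
import numpy as np

def add_awgn(x, sigma, seed=None):
    """Simulate Equation (1), y = x + n, with n ~ N(0, sigma^2).

    x: clean image as a float array with pixel values in [0, 255].
    sigma: noise level, e.g., one of the levels 10-60 used in the paper.
    """
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=x.shape)  # AWGN realization
    return x + n                              # noisy observation y
```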

2.2. Deep Neural Network with Training Pairs

A deep neural network with training pairs is a type of supervised learning method that requires training on large datasets. It aims to map a noisy image onto the clean image manifold, enabling it to remove noise once trained. When a large number of training pairs are available, a neural network can be trained in the following manner:
$$ \hat{\theta} = \arg\min_{\theta} \sum_{i} \left\| x_{label}^{i} - f\left(\theta, x_{noise}^{i}\right) \right\|^{2} \quad (2) $$
where $\theta \in \mathbb{R}^{L}$ represents the trainable variables, $f: \mathbb{R}^{N} \rightarrow \mathbb{R}^{N}$ denotes the neural network, $x_{label}^{i} \in \mathbb{R}^{N}$ denotes the $i$-th training label, and $x_{noise}^{i} \in \mathbb{R}^{N}$ is the network input for the $i$-th training pair. In a CNN, $\theta$ contains the convolution filters and bias terms of all layers. Once trained, the network can be applied to image denoising [16,17,18,19]. Compared with traditional denoising methods such as BM3D, WNNM, and NLM, deep learning-based methods show superior denoising performance by restoring more image details. These supervised deep learning-based methods require a large number of training pairs to learn the network hyper-parameters. These parameters, denoted by $\theta$, have a strong data dependence [14]; specifically, when the image content and noise level values are not uniformly distributed in the image database, the denoising results will be poor. Once the training model is determined, the parameters do not change during the test stage. Therefore, when a noisy image is noticeably different from the training images, the neural network may produce non-existent reconstructed output, resulting in poor denoising performance.
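As an illustration of Equation (2), the following PyTorch sketch minimizes the sum of squared errors over a set of training pairs. The network f_theta and the data loader are placeholders, since no specific architecture or dataset is prescribed at this point:

```python
import torch
import torch.nn as nn

def train_supervised(f_theta: nn.Module, loader, epochs: int = 50, lr: float = 1e-3):
    """Minimize Equation (2): sum_i ||x_label_i - f(theta, x_noise_i)||^2.

    f_theta: any CNN denoiser; loader: yields (x_noise, x_label) pairs.
    Both are assumptions for illustration, not a prescribed setup.
    """
    opt = torch.optim.Adam(f_theta.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x_noise, x_label in loader:
            opt.zero_grad()
            loss = mse(f_theta(x_noise), x_label)  # data-fidelity term of Eq. (2)
            loss.backward()
            opt.step()
    return f_theta
```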

2.3. Deep Image Prior

Different from supervised deep learning-based methods, DIP is regarded as an unsupervised learning method, which does not require a dataset with a large number of clean target images for training. The general idea is similar to the adaptive dictionary learning method. Ulyanov et al. [15] demonstrated that untrained networks can capture some low-level statistics of natural images, particularly through the translation invariance of local convolutions. A series of such operators can capture pixel neighborhoods at multiple scales. Let $x_{0} \in \mathbb{R}^{N}$ be a distorted image; the training process can then be characterized as:
$$ \hat{\theta} = \arg\min_{\theta} \left\| x_{0} - f(\theta, z) \right\|^{2}, \qquad \hat{x} = f\left(\hat{\theta}, z\right) \quad (3) $$
where the network input $z \in \mathbb{R}^{M}$ is random noise, and $\hat{x} \in \mathbb{R}^{N}$ is the denoised output image. The network mainly adopts the U-Net encoder-decoder architecture [20], where $z$ is a fixed 3D tensor with the same spatial size as $x$ and 32 feature maps. The network has a large number of parameters. Specifically, the encoder portion is a contracting path containing maximum pooling layers and stacked convolutions, while the decoder portion is an expanding path using nearest-neighbor and bilinear upsampling. The encoder is composed of four downsampling layers and four convolutional blocks, while the decoder contains four upsampling layers and four convolutional blocks. No training pair is required; $f(\theta, z)$ is randomly initialized at the start. Given the noisy target $x_{0}$, the denoised image $\hat{x}$ is acquired by minimizing the reconstruction error $\| x_{0} - f(\theta, z) \|$ with respect to $\theta$. The method starts with initial values of $z$ drawn from a zero-mean Gaussian distribution, and $\theta$ is optimized by gradient descent.
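The optimization described above can be sketched in PyTorch as follows. The generator net stands in for the U-Net-style encoder-decoder, and Adam stands in for the gradient-based optimizer; both are illustrative assumptions rather than the exact configuration of [15]:

```python
import torch
import torch.nn as nn

def dip_denoise(net: nn.Module, x0: torch.Tensor, n_iters: int = 3000, lr: float = 0.01):
    """Fit f(theta, z) to the noisy image x0 as in Equation (3).

    net: generator mapping the 32-channel input z to an image with the
         same shape as x0 (a U-Net-style encoder-decoder is assumed).
    x0:  noisy image tensor of shape (1, C, H, W).
    """
    # Fixed network input: zero-mean Gaussian noise with 32 feature maps
    # and the same spatial size as x0, as described above.
    z = 0.1 * torch.randn(1, 32, x0.shape[2], x0.shape[3])
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(n_iters):
        opt.zero_grad()
        x_hat = net(z)          # current reconstruction f(theta_i, z)
        loss = mse(x_hat, x0)   # reconstruction error of Equation (3)
        loss.backward()
        opt.step()
    return net(z).detach()      # in practice, training is stopped early
```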
Figure 2 schematically depicts the use of DIP with a fixed number of iterations in the optimization process. Here, Ulyanov et al. optimized Equation (3) by using a data term such as the $L_{2}$ distance, which compares the generated image with $x_{0}$:
$$ E(x; x_{0}) = \left\| x - x_{0} \right\|^{2} \quad (4) $$
The ground truth $x_{gt}$ has a non-zero cost, $E(x_{gt}; x_{0}) > 0$. As shown in Figure 2, if run for long enough, DIP will obtain a solution ($x = x_{0}$) that is quite far from $x_{gt}$. However, the optimization path usually passes close to $x_{gt}$, and stopping early (here at step $t^{*}$) yields a good solution. Ulyanov et al. [15] showed that this prior is comparable in image denoising to state-of-the-art learning-free methods such as BM3D [2]. The prior encodes the hierarchical self-similarity exploited by dictionary-based methods [21] and non-local techniques (such as BM3D). A multi-layer network with skip connections is used for denoising, and these connections play a vital role in the network architecture.

2.4. Drawbacks of DIP

Despite the flexibility DIP shows in image denoising, its results are in some cases not optimal. First, the generators used for DIP are usually over-parameterized; that is, the number of network parameters is greater than the number of output dimensions, and too many iterations result in an over-fitted image. In Figure 3, it can be seen that for each curve, the peak signal-to-noise ratio (PSNR) continuously improves until a specific iteration, beyond which the PSNR curve begins to decline. Thus, if the iterative process does not stop at the appropriate iteration, DIP suffers from under- or over-fitting. Though Ulyanov et al. set a fixed number of iterations (the early stopping point) in the deep neural network, this number was based on experimental data and thus could not guarantee an optimal denoising effect. As shown in Figure 3, wherever the iterations end, the three output images cannot all achieve their optimal denoising effect at the same iteration, because their optimal early stopping points differ.
Second, according to the data listed in Table 1, the denoising effect of DIP is inferior to that of the mainstream FFDNet method. The main reason is that DIP’s loss function is defined as:
$$ Loss_{1} = \mathrm{MSE}\left(\hat{x}_{i}, x_{0}\right) \quad (5) $$
where MSE is the mean square error, $\hat{x}_{i}$ is the output image of the deep neural network, and $x_{0}$ is the noisy image. The guiding ability of the noisy image $x_{0}$, which controls the final convergence direction of the output image, is limited. Notably, more noise in the noisy image further weakens its guiding ability, causing slow convergence and poor denoising performance.

3. Methodology

3.1. Multiple Target Images

First, we considered a new approach to enhance the guidance of the loss function described in Equation (5) by adding two sub-terms. In other words, we added two images with higher guiding ability (higher image quality) to the calculation of the loss function. Specifically, we applied two mainstream denoising methods (FFDNet and BM3D) to denoise each noisy image, thereby obtaining two preliminary denoised images $x_{1}$ and $x_{2}$. Next, we added the MSE values of the two preliminary denoised images ($x_{1}$ and $x_{2}$) to the loss function. The new loss function is computed as:
$$ Loss_{2} = \mathrm{MSE}\left(\hat{x}_{i}, x_{0}\right) + \mathrm{MSE}\left(\hat{x}_{i}, x_{1}\right) + \mathrm{MSE}\left(\hat{x}_{i}, x_{2}\right) \quad (6) $$
where $\hat{x}_{i}$ is the output image of the network, $x_{0}$ represents the noisy image, and $x_{1}$ and $x_{2}$ are the preliminary denoised images produced by FFDNet and BM3D, respectively. As shown in Figure 4, the proposed approach starts with random weights $\theta_{0}$, which we iteratively update to minimize the objective function described in Equation (6). For each iteration $i$, the weights $\theta_{i}$ are used to generate the image $\hat{x}_{i} = f(\theta_{i}, z)$, where the mapping $f$ is a neural network with parameters $\theta_{i}$ and $z$ is a fixed tensor. The image $\hat{x}_{i}$ is used to calculate the non-zero cost $E(\hat{x}_{i}; x_{0}, x_{1}, x_{2})$. The weights $\theta_{i}$ are then updated using stochastic gradient descent (SGD). The advantage of this method is that it utilizes the preliminary denoised images to construct the loss function and can thereby adjust the evolution direction of the generative network model. This ensures that the network output image $\hat{x}_{i}$ evolves in a reasonable direction within the solution space (close to the ground truth $x_{gt}$). The schematic diagram in Figure 5 shows $\bar{x}_{0}$, the center of gravity of $x_{0}$, $x_{1}$, and $x_{2}$, which is the point toward which the network output image $\hat{x}_{i}$ finally converges in the solution space under the new hybrid loss function. This results in an output image $\hat{x}_{i}$ that is closer to the undistorted image $x_{gt}$ after a given iteration. It should be noted that the preliminary denoised image $x_{1}$ is obtained from FFDNet, which utilizes external information captured from a training image set, while $x_{2}$ is obtained from BM3D, which utilizes the internal self-similarity of the image. Consequently, the proposed method essentially exploits both internal and external image priors to remove noise.
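A minimal sketch of the hybrid loss in Equation (6) follows, assuming the preliminary denoised images x1 (from FFDNet) and x2 (from BM3D) have been computed beforehand:

```python
import torch.nn.functional as F

def multi_target_loss(x_hat, x0, x1, x2):
    """Equation (6): equally weighted MSE terms against the noisy image
    x0 and the preliminary denoised targets x1 (FFDNet) and x2 (BM3D)."""
    return (F.mse_loss(x_hat, x0)
            + F.mse_loss(x_hat, x1)
            + F.mse_loss(x_hat, x2))
```

In the generative loop sketched in Section 2.3, this loss would simply replace the single-term reconstruction error.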

3.2. Adaptive Termination Condition

Second, to solve the problem of over- or under-fitting, we adopted our previously proposed NLE module [22], which assesses the severity of the noise interference and estimates the noise level of the noisy image, allowing us to set a more reasonable adaptive termination condition. Specifically, the residual image $\hat{n}_{i} = x_{0} - \hat{x}_{i}$ is obtained by subtracting the $i$-th output image of the deep generative network from the noisy image $x_{0}$. In the early stages of the network iterations, the output image $\hat{x}_{i}$ is far from the undistorted image, so the standard deviation $std(\hat{n}_{i})$ of the residual image $\hat{n}_{i}$ is relatively large. Upon arriving at the appropriate iteration, $std(\hat{n}_{i})$ should be close to the noise level $\sigma$ of the noisy image measured by the NLE module. Therefore, $std(\hat{n}_{i}) \leq \sigma$ is used in our work to adaptively terminate the iterative process. In short, with an accurate noise level value $\sigma$, we can determine when to terminate the iterative process after the appropriate number of iterations has been completed. As shown in Figure 5, the adaptive termination point $\hat{x}^{*}_{proposed}$ of the proposed method is closer to the optimal point $x_{gt}$ than that of the DIP method. Meanwhile, Figure 6 shows that our adaptive termination condition set the termination step to 2801, which is very close to the optimal step of 2835 in the iterative process. Thus, the proposed approach resolves the early stopping problem and achieves near-optimal denoising performance.
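The termination test itself reduces to a one-line comparison. The sketch below assumes sigma has already been estimated by the NLE module [22] (whose implementation is not reproduced here) and that all images share the same intensity scale:

```python
def should_stop(x0, x_hat, sigma):
    """Adaptive termination test: stop once the standard deviation of
    the residual n_hat = x0 - x_hat falls to the noise level sigma
    estimated by the NLE module [22]."""
    residual = x0 - x_hat        # n_hat_i = x0 - x_hat_i
    return residual.std().item() <= sigma
```

Inside the iterative loop sketched in Section 2.3, this check replaces the fixed iteration budget: the loop breaks as soon as should_stop(x0, x_hat, sigma) returns True.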

4. Experiments

4.1. Datasets and Experimental Setup

To evaluate our method comprehensively and verify its effectiveness, we conducted extensive experiments and compared it with DIP and eight other state-of-the-art image denoising methods, including BM3D [2], NCSR [7], WNNM [5], DnCNN [11], FFDNet [12], TWSC [23], RED-Net [24], and CsNet [13]. We conducted denoising experiments on four datasets. In the first dataset, as shown in Figure 7, 10 images commonly used in the literature were selected, comprising six images with a size of 512 × 512 (Barbara, Boat, Couple, Hill, Lena, and Man) and four images with a size of 256 × 256 (Cameraman, House, Monarch, and Peppers). For the second dataset, as illustrated in Figure 8, we randomly selected 50 natural images from the Berkeley segmentation dataset (BSD) [25]. The third dataset contains 10 images obtained randomly from the Flickr1024 database [26], which consists of 1024 high-quality images covering diverse scenarios; Figure 9 shows some examples. The 10 images in the fourth dataset were randomly selected from Urban100, which contains 100 high-resolution images with various real-world structures; Figure 10 shows some representative images in this dataset.
To test the denoising performance of the proposed method objectively, we utilized three widely accepted image quality evaluation criteria [27,28,29]: PSNR, the structural similarity index (SSIM) [30], and the feature similarity index measurement (FSIM) [31]. In addition, we compared the results visually to assess the quality of the denoising effects subjectively. We performed our experiments using a Lenovo desktop with a 4.00 GHz eight-core Intel Core i7-6700K CPU and 16 GB of RAM.
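For reference, PSNR and SSIM can be computed with scikit-image as sketched below; FSIM [31] is not part of scikit-image, so a third-party implementation is needed for it. The function name and the 8-bit data range are our own assumptions:

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised):
    """PSNR (dB) and SSIM between an 8-bit grayscale clean image and
    its denoised estimate, as used in the quantitative comparisons."""
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=255)
    ssim = structural_similarity(clean, denoised, data_range=255)
    return psnr, ssim
```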

4.2. Experimental Results and Analysis

In this subsection, we present the PSNR results of the proposed method and the original DIP method on 10 commonly used test images. Table 2 shows the results for noise levels σ ∈ {10, 20, 30, 40, 50, 60}. According to the PSNR results shown in Table 2, our approach achieved better performance in all cases compared with the original DIP method. The processing result for the Barbara image at noise level σ = 30 is particularly notable: even though the Barbara image has abundant texture details and is complex, our proposed method increased the PSNR by 4.35 dB. The reason is that our method is especially suited to denoising images with complex textures because it utilizes two high-quality target images. The minimum increase in PSNR, 1.15 dB, was observed on the Hill image at noise level σ = 10. The SSIM and FSIM indexes of the two methods on the 10 commonly used test images were also computed and are shown in Table 3 and Table 4, respectively. These results show that the proposed method surpassed the DIP method in both SSIM and FSIM, which confirms that our method significantly improves local structure preservation and global brightness consistency. We also performed experiments to compare the time efficiency of the two methods. In Table 5, the PSNR values in the second row are the best results that the original DIP method can achieve; the following rows list the number of iterations and the time required for DIP and our method, respectively, to reach those values. Compared with the original DIP method, the proposed method requires less time to reach the same PSNR, showing that it surpasses the original DIP method not only in denoising performance but also in time efficiency.
Moreover, we present the average PSNR, SSIM, and FSIM results of the eight other denoising methods on the 10 commonly used images for noise levels σ ∈ {10, 20, 30, 40, 50, 60} in Table 6, Table 7 and Table 8, respectively. From Table 6, we can draw the following conclusions. First, although the original DIP method is more flexible, its overall denoising effect is inferior to the mainstream denoising methods. Second, our method surpassed the other mainstream denoising methods and obtained the highest average PSNR results; specifically, it outperformed the deep learning-based FFDNet by 0.48 to 1.23 dB. Table 7 shows that the original DIP method was the second-best method, achieving impressive SSIM results at noise levels σ ∈ {10, 20, 30, 40, 50}. However, the SSIM results of our method were higher still, improving on DIP by 0.0184 to 0.0901. Further, Table 8 shows that the proposed method also achieved the highest FSIM results in all cases.
To test the robustness of the proposed method, we conducted experiments on the BSD dataset, in which the textures of the images are more complex, making image denoising more difficult. The PSNR performance of the nine competing denoising methods is shown in Table 9. The denoising effect of each method on this dataset declined to a different degree compared with the average PSNR results on the first dataset, shown in Table 6. Nevertheless, the PSNR results obtained by our method still outperformed all other methods; especially when the noise level was set to 10, the improvement was significant (e.g., an average improvement of 3.31 dB over the DIP method). Table 10 and Table 11 list the average SSIM and FSIM results for all 10 methods under six different noise levels. Our method obtained the highest SSIM and FSIM values, and the improvements obtained by our proposed method in both the SSIM and FSIM results are noteworthy.
In addition to the traditional databases, we performed experiments on a larger and more varied dataset, Flickr1024. The average PSNR results are shown in Table 12; the PSNR results obtained by our method are clearly superior to those of the other nine methods. Table 13 and Table 14 list the average SSIM and FSIM values obtained by the 10 methods under six different noise levels. The results show that our method also achieved excellent SSIM and FSIM performance: compared with DIP, the proposed method boosts the average SSIM and FSIM values by 0.0607 to 0.1423 and by 0.0144 to 0.0508, respectively.
To further evaluate the applicability of our method comprehensively, we randomly selected 10 high-resolution images from Urban100. From Table 15, we can observe that the WNNM method obtains good results when processing high-resolution images. Nevertheless, our method still outperforms it and achieves the highest PSNR results. As shown in Table 16, our method obtained the highest average SSIM results; compared with the other nine methods, the improvement in the values was approximately 0.0104 to 0.1023. Moreover, in Table 17, it can be observed that our method obtained the best average FSIM results.
The experimental results clearly show that our proposed approach outperformed the existing state-of-the-art denoising methods on four classical datasets that are highly representative. In particular, as an improvement of DIP, its denoising effect was notably better than that of the original DIP. It not only retained the flexibility of the original DIP method, but also greatly improved denoising performance. Therefore, it shows promise and adaptability. In the following subsection, we will analyze a visual comparison of images denoised by the different methods, which further supports our conclusion.

4.3. Visual Comparisons

Visual quality is a crucial indicator for evaluating denoising effects in image processing. Therefore, a visual comparison experiment was conducted on multiple test images with rich texture information. We invited graduate students from different grades in our laboratory, ranging in age from approximately 18 to 24 years, to evaluate the images; each observed the images for about 5 min before commenting. Figure 11 shows a visual comparison of the denoising results for one image selected randomly from BSD with a noise level σ = 40. In the denoised images, we evaluated a portion of the back thigh of a tiger, which is magnified and displayed in the bottom-right corner of each image for better visualization. DIP exhibited poor denoising performance, as some details were lost: the grass overlapping the tiger's thigh is completely unobservable, and the positions of the spots on the thigh are distorted compared with the original image. In the image denoised by RED-Net, the blurry noise spots could not be removed effectively, leading to unsatisfactory results. Although WNNM, NCSR, and TWSC produced smoother edges than DIP, the texture details were not preserved. While DnCNN, FFDNet, and CsNet retained more texture details, they were prone to over-smoothing artifacts. In the magnified part of the image obtained by our proposed approach, the details of the grass are strengthened and the discernibility of the fur is improved, demonstrating that the image denoised by our method is close to the original. Compared with the nine above-mentioned methods, our proposed method preserved more local edges and high-frequency components, leading to a denoised image with better visual effects. Overall, our proposed method yielded satisfactory visual quality compared with the state-of-the-art denoising methods and increased the PSNR to 28.08 dB.

5. Discussion

It is well known that Gaussian noise is widely used in image denoising studies; thus, our generative network can fully manage such noise. However, real-world noise, such as Poisson noise, Gaussian–Poisson noise, and salt-and-pepper noise, is usually non-Gaussian. Poisson noise and Gaussian–Poisson noise are signal-dependent: their noise level varies across an image, while the level of Gaussian noise is fixed. To handle these cases, we can use the average noise level [32] rather than a fixed noise level in our method. To handle salt-and-pepper noise, we must use the corresponding denoising algorithms to obtain the preliminary images and use the noise ratio as the termination criterion; that is, the criterion for over-fitting is no longer the noise level but the noise ratio. Therefore, under the framework of our method, as long as the preliminary denoised images and the iteration termination conditions are modified accordingly, salt-and-pepper noise and other types of noise can also be handled well.
In this work, we adopted a mixed loss function in which the three terms have the same weight, and achieved satisfactory results. Here, the noisy image is used to exploit its internal information, but its guiding ability is degraded by noise to varying degrees. Theoretically, when the noise level is relatively low, the noisy image contains more useful information and can therefore be assigned a larger weight; when the noise level is relatively high, the noisy image is seriously disturbed and contains less useful information, so it should be assigned a smaller weight. In future work, we will consider assigning different weights to the terms to further improve the denoising performance of our generative network.
Further, we use the MSE loss function, which employs the L2 norm to characterize the distance between the generated image and the noisy image and the preliminary denoised images, respectively. Although the MSE loss function converges easily, it has some defects: it is sensitive to outliers, over-penalizes larger errors, and may fail to capture complex characteristics in some cases. Meanwhile, the mean absolute error (MAE) loss function, which uses the L1 norm to describe the distance, may allow our network to obtain better results. Thus, we are considering exploring a mixed loss function with more norms in future work.
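A hypothetical generalization along both of these lines (per-term weights and an optional L1 distance) could look as follows; the weights and the switch are illustrative sketch parameters, not settings used in our experiments:

```python
import torch.nn.functional as F

def weighted_mixed_loss(x_hat, x0, x1, x2, w=(1.0, 1.0, 1.0), use_l1=False):
    """Hypothetical generalization of Equation (6): per-term weights w
    (e.g., down-weighting x0 at high noise levels) and an optional
    switch from the L2 (MSE) distance to the L1 (MAE) distance."""
    dist = F.l1_loss if use_l1 else F.mse_loss
    return (w[0] * dist(x_hat, x0)
            + w[1] * dist(x_hat, x1)
            + w[2] * dist(x_hat, x2))
```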
It should be noted that although the proposed method obtains the optimal result through online training, it requires a large number of gradient updates, resulting in long inference times; thus, its execution efficiency is relatively low. In the future, we will consider adopting transfer learning [33] to find suitable general initial parameters, enabling a faster denoising process.

6. Conclusions

In this paper, image denoising is modeled as image generation by exploiting DIP with multiple target images and an adaptive termination condition. The experimental results confirm that the proposed generative network exhibits better denoising performance than the original DIP. Moreover, experiments also show that our approach achieves significant performance gains over the state-of-the-art methods according to quantitative evaluation indicators and visual comparisons. The main reason for the increased performance is the integration of preliminary denoising images into the loss function. This allows the proposed generative network to ensure a reasonable convergence position for the output image in the image solution space, thus obtaining an output image as close to the ground truth as possible. Moreover, the adaptive termination condition guarantees the optimal early stopping point in the convergence process that can ensure superior denoising performance.

Author Contributions

S.C.: conceptualization, writing—original draft. S.X.: methodology, supervision. X.C.: software. F.L.: visualization, writing—review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of China under Grants 61163023 and 61662044.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Acknowledgments

The authors would like to thank Ulyanov et al. for providing the code for DIP.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Buades, A.; Coll, B.; Morel, J. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar] [CrossRef]
  2. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
  3. Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Trans. Image Process. 2007, 17, 53–69. [Google Scholar] [CrossRef] [Green Version]
  4. Ji, H.; Liu, C.; Shen, Z.; Xu, Y. Robust video denoising using low rank matrix completion. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 1791–1798. [Google Scholar] [CrossRef]
  5. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted nuclear norm minimization with application to image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2862–2869. [Google Scholar] [CrossRef] [Green Version]
  6. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  7. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally centralized sparse representation for image restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630. [Google Scholar] [CrossRef] [Green Version]
  8. Caliskan, A.; Çil, Z.A.; Badem, H.; Karaboga, D. Regression-Based Neuro-Fuzzy Network Trained by ABC Algorithm for High-Density Impulse Noise Elimination. IEEE Trans. Fuzzy Syst. 2020, 28, 1084–1095. [Google Scholar] [CrossRef]
  9. Versaci, M.; Morabito, F.C. Image Edge Detection: A New Approach Based on Fuzzy Entropy and Fuzzy Divergence. Int. J. Fuzzy Syst. 2021. [Google Scholar] [CrossRef]
  10. Mosseri, I.; Zontak, M.; Irani, M. Combining the power of internal and external denoising. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Cambridge, MA, USA, 19–21 April 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 1–9. [Google Scholar]
  11. Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef] [Green Version]
  12. Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a fast and flexible solution for cnn-based image denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef] [Green Version]
  13. Choi, J.H.; Elgendy, O.A.; Chan, S.H. Optimal combination of image denoisers. IEEE Trans. Image Process. 2019, 28, 4016–4031. [Google Scholar] [CrossRef] [Green Version]
  14. Wang, F.; Huang, H.; Liu, J. Variational-Based Mixed Noise Removal With CNN Deep Learning Regularization. IEEE Trans. Image Process. 2020, 29, 1246–1258. [Google Scholar] [CrossRef] [Green Version]
  15. Lempitsky, V.; Vedaldi, A.; Ulyanov, D. Deep image prior. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 9446–9454. [Google Scholar] [CrossRef]
  16. Wang, S.; Su, Z.; Ying, L.; Peng, X.; Zhu, S.; Liang, F.; Feng, D.; Liang, D. Accelerating magnetic resonance imaging via deep learning. In Proceedings of the IEEE 13th International Symposium on Biomedical Imaging (ISBI), Prague, Czech Republic, 13–16 April 2016; pp. 514–517. [Google Scholar] [CrossRef]
  17. Kang, E.; Min, J.; Ye, J.C. A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction. Med. Phys. 2017, 44, e360–e375. [Google Scholar] [CrossRef] [Green Version]
  18. Chen, H.; Zhang, Y.; Zhang, W.; Liao, P.; Li, K.; Zhou, J.; Wang, G. Low-dose CT via convolutional neural network. Biomed. Opt. Express 2017, 8, 679–694. [Google Scholar] [CrossRef] [PubMed]
  19. Gong, K.; Guan, J.; Kim, K.; Zhang, X.; Yang, J.; Seo, Y.; El Fakhri, G.; Qi, J.; Li, Q. Iterative pet image reconstruction using convolutional neural network representation. IEEE Trans. Med. Imaging 2019, 38, 675–685. [Google Scholar] [CrossRef] [PubMed]
  20. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Springer International Publishing: Cham, Switzerland, 2015; pp. 234–241. [Google Scholar]
  21. Papyan, V.; Romano, Y.; Sulam, J.; Elad, M. Convolutional dictionary learning via local processing. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017. [Google Scholar]
  22. Xu, S.; Lin, Z.; Zhang, G.; Liu, T.; Yang, X. A fast yet reliable noise level estimation algorithm using shallow CNN-based noise separator and BP network. Signal Image Video Process. 2020, 14, 1–8. [Google Scholar] [CrossRef]
  23. Xu, J.; Zhang, L.; Zhang, D. A trilateral weighted sparse coding scheme for real-world image denoising. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 20–36. [Google Scholar]
  24. Peng, X.; Feris, R.S.; Wang, X.; Metaxas, D.N. Red-net: A recurrent encoder–decoder network for video-based face alignment. Int. J. Comput. Vis. 2018, 126, 1103–1119. [Google Scholar] [CrossRef] [Green Version]
  25. Arbeláez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour Detection and Hierarchical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Wang, Y.; Wang, L.; Yang, J.; An, W.; Guo, Y. Flickr1024: A Large-Scale Dataset for Stereo Image Super-Resolution. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), Seoul, Korea, 27–28 October 2019; pp. 3852–3857. [Google Scholar] [CrossRef] [Green Version]
  27. Gao, X.; Lu, W.; Tao, D.; Li, X. Image quality assessment based on multiscale geometric analysis. IEEE Trans. Image Process. 2009, 18, 1409–1423. [Google Scholar]
  28. Li, X.; He, H.; Wang, R.; Tao, D. Single image superresolution via directional group sparsity and directional features. IEEE Trans. Image Process. 2015, 24, 2874–2888. [Google Scholar] [CrossRef]
  29. Zhang, K.; Tao, D.; Gao, X.; Li, X.; Xiong, Z. Learning multiple linear mappings for efficient single image super-resolution. IEEE Trans. Image Process. 2015, 24, 846–861. [Google Scholar] [CrossRef]
  30. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [Green Version]
  31. Zhang, L.; Zhang, L.; Mou, X.; Zhang, D. FSIM: A feature similarity index for image quality assessment. IEEE Trans. Image Process. 2011, 20, 2378–2386. [Google Scholar] [CrossRef] [Green Version]
  32. Liu, X.; Tanaka, M.; Okutomi, M. Practical Signal-Dependent Noise Parameter Estimation From a Single Noisy Image. IEEE Trans. Image Process. 2014, 23, 4361–4371. [Google Scholar] [CrossRef]
  33. Soh, J.W.; Cho, S.; Cho, N.I. Meta-Transfer Learning for Zero-Shot Super-Resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020. [Google Scholar]
Figure 1. Denoising results on Lena with noise level σ = 40: (a) Original; (b) Noisy; (c) DnCNN/PSNR = 30.47 dB; (d) DIP/PSNR = 29.43 dB.
Figure 2. Image space visualization of image denoising using deep image prior.
Figure 3. PSNRs (dB) of DIP evaluated on the Barbara, Cameraman, and Peppers images with the noise level σ = 50.
Figure 4. Image denoising using our generative network.
Figure 5. Image space visualization of image denoising using the proposed method and deep image prior.
Figure 6. Performance comparison between the adaptive termination step and optimal step.
Figure 7. Ten test images widely used in the literature.
Figure 8. Some representative images in the BSD database.
Figure 9. Some representative images in the Flickr1024 database.
Figure 10. Some representative images in the Urban100 database.
Figure 11. Denoising results of one image in BSD with noise level σ = 40: (a) original; (b) noisy image; (c) BM3D/26.11 dB; (d) NCSR/26.28 dB; (e) TWSC/26.46 dB; (f) WNNM/26.53 dB; (g) RED-Net/23.89 dB; (h) DnCNN/26.64 dB; (i) FFDNet/26.69 dB; (j) CsNet/27.64 dB; (k) DIP/24.89 dB; (l) Proposed/28.08 dB.
Table 1. PSNR (dB) results of DIP and FFDNet on 10 commonly used images with noise levels σ = 30 and 40.

| Method | Barbara | Boat | Cameraman | Couple | Hill | House | Lena | Man | Monarch | Peppers |
|---|---|---|---|---|---|---|---|---|---|---|
| DIP (σ = 30) | 25.76 | 28.14 | 27.31 | 27.73 | 28.36 | 31.04 | 30.80 | 28.03 | 27.91 | 28.50 |
| FFDNet (σ = 30) | 28.95 | 29.66 | 29.07 | 29.46 | 29.57 | 32.54 | 32.05 | 29.35 | 28.95 | 29.63 |
| DIP (σ = 40) | 24.26 | 26.91 | 25.69 | 26.28 | 27.35 | 29.43 | 29.43 | 26.95 | 26.48 | 27.13 |
| FFDNet (σ = 40) | 27.54 | 28.41 | 27.82 | 28.15 | 28.50 | 31.40 | 30.80 | 28.19 | 27.70 | 28.34 |
Table 2. PSNR (dB) results of the two methods on 10 commonly used images with various noise levels.

| Method | Barbara | Boat | Cameraman | Couple | Hill | House | Lena | Man | Monarch | Peppers |
|---|---|---|---|---|---|---|---|---|---|---|
| DIP (σ = 10) | 32.06 | 33.36 | 32.86 | 32.97 | 32.84 | 35.46 | 35.36 | 32.71 | 33.00 | 33.77 |
| Proposed (σ = 10) | 33.79 | 35.25 | 35.50 | 34.80 | 34.00 | 37.36 | 36.97 | 34.27 | 35.01 | 36.44 |
| DIP (σ = 20) | 27.87 | 30.04 | 29.35 | 29.57 | 29.87 | 32.68 | 32.47 | 29.57 | 29.68 | 30.48 |
| Proposed (σ = 20) | 32.18 | 32.87 | 32.01 | 32.49 | 32.30 | 35.01 | 35.00 | 32.20 | 32.24 | 33.12 |
| DIP (σ = 30) | 25.76 | 28.14 | 27.31 | 27.73 | 28.36 | 31.04 | 30.80 | 28.03 | 27.91 | 28.50 |
| Proposed (σ = 30) | 30.11 | 30.91 | 30.03 | 30.49 | 30.62 | 33.62 | 33.31 | 30.31 | 30.28 | 31.17 |
| DIP (σ = 40) | 24.26 | 26.91 | 25.69 | 26.28 | 27.35 | 29.43 | 29.43 | 26.95 | 26.48 | 27.13 |
| Proposed (σ = 40) | 28.29 | 29.49 | 28.56 | 29.11 | 29.46 | 32.54 | 31.95 | 29.03 | 28.96 | 29.69 |
| DIP (σ = 50) | 23.29 | 25.75 | 24.52 | 25.20 | 26.34 | 28.51 | 28.27 | 26.05 | 25.37 | 26.04 |
| Proposed (σ = 50) | 26.91 | 28.33 | 27.49 | 27.94 | 28.58 | 31.48 | 30.84 | 28.11 | 27.88 | 28.56 |
| DIP (σ = 60) | 22.76 | 24.96 | 23.26 | 24.44 | 25.49 | 27.40 | 27.06 | 25.23 | 24.48 | 24.68 |
| Proposed (σ = 60) | 25.79 | 27.44 | 26.54 | 27.14 | 27.82 | 30.65 | 30.00 | 27.36 | 26.82 | 27.55 |
Table 3. SSIM results of the two methods on 10 commonly used images with various noise levels.

| Method | Barbara | Boat | Cameraman | Couple | Hill | House | Lena | Man | Monarch | Peppers |
|---|---|---|---|---|---|---|---|---|---|---|
| DIP (σ = 10) | 0.9395 | 0.9519 | 0.9570 | 0.9510 | 0.9390 | 0.9600 | 0.9585 | 0.9440 | 0.9512 | 0.9382 |
| Proposed (σ = 10) | 0.9630 | 0.9650 | 0.9737 | 0.9648 | 0.9473 | 0.9701 | 0.9742 | 0.9573 | 0.9808 | 0.9784 |
| DIP (σ = 20) | 0.8914 | 0.9104 | 0.9170 | 0.9050 | 0.8890 | 0.9400 | 0.9134 | 0.8950 | 0.9039 | 0.8920 |
| Proposed (σ = 20) | 0.9527 | 0.9455 | 0.9478 | 0.9452 | 0.9282 | 0.9558 | 0.9640 | 0.9362 | 0.9663 | 0.9607 |
| DIP (σ = 30) | 0.8332 | 0.8741 | 0.8800 | 0.8660 | 0.8540 | 0.9240 | 0.8780 | 0.8590 | 0.8667 | 0.8538 |
| Proposed (σ = 30) | 0.9264 | 0.9214 | 0.9238 | 0.9182 | 0.9003 | 0.9468 | 0.9520 | 0.9063 | 0.9513 | 0.9439 |
| DIP (σ = 40) | 0.7696 | 0.8430 | 0.8350 | 0.8240 | 0.8270 | 0.9040 | 0.8440 | 0.8300 | 0.8294 | 0.8270 |
| Proposed (σ = 40) | 0.8928 | 0.8971 | 0.8995 | 0.8932 | 0.8755 | 0.9385 | 0.9387 | 0.8803 | 0.9350 | 0.9262 |
| DIP (σ = 50) | 0.7452 | 0.8150 | 0.8100 | 0.7880 | 0.8000 | 0.8900 | 0.8040 | 0.8080 | 0.7940 | 0.8017 |
| Proposed (σ = 50) | 0.8576 | 0.8749 | 0.8851 | 0.8678 | 0.8553 | 0.9322 | 0.9265 | 0.8587 | 0.9212 | 0.9078 |
| DIP (σ = 60) | 0.7295 | 0.7931 | 0.7640 | 0.7620 | 0.7790 | 0.8780 | 0.7731 | 0.7870 | 0.7637 | 0.7751 |
| Proposed (σ = 60) | 0.8238 | 0.8545 | 0.8639 | 0.8474 | 0.8371 | 0.9253 | 0.9154 | 0.8404 | 0.9021 | 0.8958 |
Table 4. FSIM results of the two methods on 10 commonly used images with various noise levels.

| Method | Barbara | Boat | Cameraman | Couple | Hill | House | Lena | Man | Monarch | Peppers |
|---|---|---|---|---|---|---|---|---|---|---|
| DIP (σ = 10) | 0.9754 | 0.9775 | 0.9511 | 0.9802 | 0.9750 | 0.9520 | 0.9482 | 0.9537 | 0.9765 | 0.9747 |
| Proposed (σ = 10) | 0.9825 | 0.9873 | 0.9653 | 0.9868 | 0.9813 | 0.9583 | 0.9857 | 0.9843 | 0.9642 | 0.9668 |
| DIP (σ = 20) | 0.9478 | 0.9525 | 0.9249 | 0.9645 | 0.9462 | 0.9262 | 0.9072 | 0.9251 | 0.9490 | 0.9495 |
| Proposed (σ = 20) | 0.9760 | 0.9741 | 0.9379 | 0.9739 | 0.9695 | 0.9429 | 0.9767 | 0.9703 | 0.9468 | 0.9472 |
| DIP (σ = 30) | 0.9261 | 0.9277 | 0.9024 | 0.9488 | 0.9227 | 0.9065 | 0.8793 | 0.9036 | 0.9256 | 0.9272 |
| Proposed (σ = 30) | 0.9630 | 0.9580 | 0.9161 | 0.9574 | 0.9536 | 0.9260 | 0.9671 | 0.9529 | 0.9315 | 0.9312 |
| DIP (σ = 40) | 0.8968 | 0.9088 | 0.8809 | 0.9349 | 0.9073 | 0.8912 | 0.8589 | 0.8878 | 0.9045 | 0.9058 |
| Proposed (σ = 40) | 0.9501 | 0.9416 | 0.8837 | 0.9398 | 0.9368 | 0.9145 | 0.9567 | 0.9365 | 0.9172 | 0.9168 |
| DIP (σ = 50) | 0.9033 | 0.8903 | 0.8633 | 0.9166 | 0.8886 | 0.8735 | 0.8375 | 0.8671 | 0.8826 | 0.8946 |
| Proposed (σ = 50) | 0.9377 | 0.9274 | 0.8779 | 0.9237 | 0.9228 | 0.9017 | 0.9459 | 0.9199 | 0.9070 | 0.8900 |
| DIP (σ = 60) | 0.8850 | 0.8737 | 0.8494 | 0.9075 | 0.8690 | 0.8615 | 0.8195 | 0.8534 | 0.8654 | 0.8750 |
| Proposed (σ = 60) | 0.9249 | 0.9131 | 0.8496 | 0.9112 | 0.9089 | 0.8963 | 0.9373 | 0.9048 | 0.8936 | 0.8899 |
Table 5. Number of iterations and time (s) of the two methods on 10 commonly used images with σ = 30.

| | Barbara | Boat | Cameraman | Couple | Hill | House | Lena | Man | Monarch | Peppers |
|---|---|---|---|---|---|---|---|---|---|---|
| Target PSNR (dB) | 25.76 | 28.14 | 27.31 | 27.73 | 28.36 | 31.04 | 30.80 | 28.03 | 27.91 | 28.50 |
| Iterations (DIP) | 3373 | 2505 | 2083 | 2555 | 2774 | 1551 | 2687 | 2655 | 1703 | 1575 |
| Iterations (Proposed) | 2873 | 2488 | 1832 | 2232 | 2529 | 1380 | 2307 | 2334 | 1541 | 1183 |
| Time (s) (DIP) | 43.64 | 31.44 | 27.09 | 33.70 | 37.72 | 21.97 | 38.91 | 39.60 | 26.11 | 24.11 |
| Time (s) (Proposed) | 37.04 | 31.41 | 25.00 | 31.33 | 36.78 | 20.75 | 35.45 | 37.16 | 25.44 | 18.02 |
Table 6. Average PSNR results of different methods on 10 commonly used images with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 34.82 | 31.44 | 29.59 | 28.13 | 27.54 | 26.37 |
| FFDNet | 34.86 | 31.72 | 29.95 | 28.70 | 27.73 | 26.92 |
| NCSR | 34.81 | 31.38 | 29.43 | 28.06 | 27.02 | 26.08 |
| DnCNN | 34.94 | 31.69 | 29.83 | 28.52 | 27.56 | 26.65 |
| WNNM | 35.01 | 31.61 | 29.80 | 28.48 | 27.51 | 26.68 |
| RED-Net | 34.25 | 31.13 | 29.44 | 28.23 | 27.26 | 26.46 |
| TWSC | 34.85 | 31.56 | 29.73 | 28.42 | 27.38 | 26.51 |
| CsNet | 34.62 | 31.40 | 29.75 | 28.55 | 27.61 | 26.79 |
| DIP | 33.44 | 30.16 | 28.36 | 26.99 | 25.93 | 24.98 |
| Proposed | 35.34 | 32.95 | 31.08 | 29.71 | 28.51 | 27.71 |
Table 7. Average SSIM results of different methods on 10 commonly used images with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9306 | 0.8764 | 0.8348 | 0.7968 | 0.7701 | 0.7436 |
| FFDNet | 0.9330 | 0.8855 | 0.8487 | 0.8177 | 0.7908 | 0.7668 |
| NCSR | 0.9307 | 0.8745 | 0.8289 | 0.7960 | 0.7668 | 0.8603 |
| DnCNN | 0.9327 | 0.8829 | 0.8432 | 0.8093 | 0.7823 | 0.7523 |
| WNNM | 0.9311 | 0.8773 | 0.8356 | 0.8024 | 0.7779 | 0.8700 |
| RED-Net | 0.9197 | 0.8694 | 0.8315 | 0.7986 | 0.7698 | 0.7443 |
| TWSC | 0.9313 | 0.8787 | 0.8380 | 0.8035 | 0.7727 | 0.7446 |
| CsNet | 0.9251 | 0.8756 | 0.8405 | 0.8100 | 0.7831 | 0.7581 |
| DIP | 0.9490 | 0.9490 | 0.8689 | 0.8333 | 0.8056 | 0.7805 |
| Proposed | 0.9675 | 0.9502 | 0.9290 | 0.9077 | 0.8887 | 0.8706 |
Table 8. Average FSIM results of different methods on 10 commonly used images with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9717 | 0.9452 | 0.9239 | 0.9038 | 0.8883 | 0.8737 |
| FFDNet | 0.9723 | 0.9476 | 0.9278 | 0.9103 | 0.8947 | 0.8806 |
| NCSR | 0.9719 | 0.9435 | 0.9196 | 0.8956 | 0.8789 | 0.7414 |
| DnCNN | 0.9720 | 0.9472 | 0.9266 | 0.9078 | 0.8923 | 0.8771 |
| WNNM | 0.9713 | 0.9444 | 0.9233 | 0.9031 | 0.8863 | 0.7508 |
| RED-Net | 0.9698 | 0.9454 | 0.9251 | 0.9068 | 0.8908 | 0.8770 |
| TWSC | 0.9719 | 0.9448 | 0.9215 | 0.9003 | 0.8815 | 0.8641 |
| CsNet | 0.9718 | 0.9475 | 0.9280 | 0.9099 | 0.8938 | 0.8797 |
| DIP | 0.9663 | 0.9393 | 0.9170 | 0.8977 | 0.8817 | 0.8659 |
| Proposed | 0.9762 | 0.9615 | 0.9457 | 0.9154 | 0.9154 | 0.9030 |
Table 9. Average PSNR results of different methods on BSD50 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 33.72 | 29.70 | 27.91 | 26.66 | 25.82 | 25.14 |
| FFDNet | 33.92 | 30.19 | 28.44 | 27.29 | 26.43 | 25.74 |
| NCSR | 33.42 | 29.58 | 27.60 | 26.27 | 25.35 | 24.59 |
| DnCNN | 34.02 | 30.21 | 28.42 | 27.22 | 26.37 | 25.63 |
| WNNM | 33.53 | 29.73 | 27.81 | 26.54 | 25.63 | 24.92 |
| RED-Net | 33.37 | 29.51 | 27.75 | 26.60 | 25.75 | 25.09 |
| TWSC | 33.48 | 29.70 | 27.74 | 26.46 | 25.52 | 24.79 |
| CsNet | 33.59 | 29.67 | 27.90 | 26.75 | 25.91 | 25.20 |
| DIP | 32.39 | 29.00 | 27.15 | 25.87 | 24.84 | 23.90 |
| Proposed | 35.52 | 31.85 | 29.75 | 28.35 | 27.25 | 26.43 |
Table 10. Average SSIM results of different methods on BSD50 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9219 | 0.8404 | 0.7836 | 0.7390 | 0.7041 | 0.6758 |
| FFDNet | 0.9289 | 0.8606 | 0.8101 | 0.7690 | 0.7355 | 0.7076 |
| NCSR | 0.9243 | 0.8420 | 0.7860 | 0.7373 | 0.7032 | 0.6732 |
| DnCNN | 0.9295 | 0.8592 | 0.8064 | 0.7632 | 0.7298 | 0.6986 |
| WNNM | 0.9248 | 0.8449 | 0.7904 | 0.7460 | 0.7140 | 0.6844 |
| RED-Net | 0.9229 | 0.8423 | 0.7826 | 0.7352 | 0.6974 | 0.6673 |
| TWSC | 0.9267 | 0.8451 | 0.7775 | 0.7243 | 0.6822 | 0.6483 |
| CsNet | 0.9260 | 0.8474 | 0.7895 | 0.7440 | 0.7059 | 0.6731 |
| DIP | 0.9488 | 0.9002 | 0.8564 | 0.8206 | 0.7894 | 0.7614 |
| Proposed | 0.9717 | 0.9402 | 0.9101 | 0.8797 | 0.8528 | 0.8303 |
Table 11. Average FSIM results of different methods on BSD50 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9509 | 0.9030 | 0.8680 | 0.8403 | 0.8139 | 0.7966 |
| FFDNet | 0.9545 | 0.9143 | 0.8831 | 0.8559 | 0.8323 | 0.8121 |
| NCSR | 0.9524 | 0.9038 | 0.8677 | 0.8295 | 0.8038 | 0.7794 |
| DnCNN | 0.9545 | 0.9143 | 0.8831 | 0.8559 | 0.8323 | 0.8121 |
| WNNM | 0.9527 | 0.9053 | 0.8695 | 0.8385 | 0.8146 | 0.7916 |
| RED-Net | 0.9527 | 0.9079 | 0.8721 | 0.8411 | 0.8147 | 0.7949 |
| TWSC | 0.9547 | 0.9044 | 0.8590 | 0.8200 | 0.7881 | 0.7627 |
| CsNet | 0.9547 | 0.9109 | 0.8763 | 0.8461 | 0.8185 | 0.7958 |
| DIP | 0.9442 | 0.9002 | 0.8671 | 0.8391 | 0.8167 | 0.7977 |
| Proposed | 0.9550 | 0.9306 | 0.9029 | 0.8789 | 0.8554 | 0.8366 |
Table 12. Average PSNR results of different methods on 10 images in Flickr1024 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 32.49 | 28.51 | 26.51 | 25.13 | 24.20 | 23.57 |
| FFDNet | 32.88 | 29.12 | 27.14 | 25.85 | 24.91 | 24.18 |
| NCSR | 32.61 | 28.59 | 26.56 | 25.13 | 24.21 | 21.66 |
| DnCNN | 33.07 | 29.19 | 27.14 | 25.80 | 24.87 | 24.06 |
| WNNM | 32.83 | 28.88 | 26.84 | 25.53 | 24.58 | 23.85 |
| RED-Net | 32.49 | 28.62 | 26.27 | 24.55 | 23.26 | 22.19 |
| TWSC | 32.67 | 28.78 | 26.76 | 25.43 | 24.47 | 23.71 |
| CsNet | 32.91 | 29.01 | 26.76 | 25.18 | 23.83 | 21.89 |
| DIP | 30.73 | 25.77 | 25.14 | 24.12 | 23.07 | 22.15 |
| Proposed | 33.63 | 30.62 | 28.30 | 26.78 | 25.58 | 24.69 |
Table 13. Average SSIM results of different methods on 10 images in Flickr1024 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9127 | 0.8195 | 0.7485 | 0.6901 | 0.6426 | 0.6066 |
| FFDNet | 0.9277 | 0.8541 | 0.7933 | 0.7420 | 0.6991 | 0.6627 |
| NCSR | 0.9190 | 0.8281 | 0.7603 | 0.6950 | 0.6517 | 0.5154 |
| DnCNN | 0.9283 | 0.8526 | 0.7888 | 0.7353 | 0.6927 | 0.6511 |
| WNNM | 0.9223 | 0.8378 | 0.7721 | 0.7155 | 0.6732 | 0.6335 |
| RED-Net | 0.9228 | 0.8413 | 0.7644 | 0.6900 | 0.6322 | 0.5806 |
| TWSC | 0.9215 | 0.8370 | 0.7669 | 0.7089 | 0.6603 | 0.6189 |
| CsNet | 0.9604 | 0.9122 | 0.8602 | 0.8143 | 0.7663 | 0.6922 |
| DIP | 0.8997 | 0.7864 | 0.8029 | 0.7765 | 0.7363 | 0.7019 |
| Proposed | 0.9604 | 0.9287 | 0.8867 | 0.8534 | 0.8176 | 0.7861 |
Table 14. Average FSIM results of different methods on 10 images in Flickr1024 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9615 | 0.9164 | 0.8793 | 0.8474 | 0.8178 | 0.7979 |
| FFDNet | 0.9649 | 0.9273 | 0.8956 | 0.8673 | 0.8420 | 0.8190 |
| NCSR | 0.9524 | 0.9112 | 0.8759 | 0.8356 | 0.8082 | 0.7805 |
| DnCNN | 0.9653 | 0.9263 | 0.8929 | 0.8632 | 0.8383 | 0.8136 |
| WNNM | 0.9544 | 0.9160 | 0.8823 | 0.8503 | 0.8245 | 0.7983 |
| RED-Net | 0.9637 | 0.9256 | 0.8917 | 0.8607 | 0.8347 | 0.8119 |
| TWSC | 0.9635 | 0.9205 | 0.8802 | 0.8438 | 0.8114 | 0.7834 |
| CsNet | 0.9645 | 0.9226 | 0.8754 | 0.8369 | 0.8065 | 0.7889 |
| DIP | 0.9468 | 0.9086 | 0.8767 | 0.8471 | 0.8216 | 0.7988 |
| Proposed | 0.9612 | 0.9404 | 0.9160 | 0.8934 | 0.8703 | 0.8496 |
Table 15. Average PSNR results of different methods on 10 images in Urban100 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 33.72 | 30.26 | 28.28 | 26.55 | 25.62 | 24.77 |
| FFDNet | 33.35 | 30.27 | 28.52 | 27.23 | 26.20 | 25.34 |
| NCSR | 33.71 | 30.29 | 28.31 | 26.84 | 25.73 | 24.76 |
| DnCNN | 33.60 | 30.31 | 28.36 | 26.98 | 25.93 | 24.89 |
| WNNM | 33.89 | 30.64 | 28.87 | 27.51 | 26.46 | 25.60 |
| RED-Net | 33.19 | 29.38 | 27.25 | 25.55 | 24.05 | 22.29 |
| TWSC | 33.79 | 30.58 | 28.78 | 27.44 | 26.35 | 25.43 |
| CsNet | 32.85 | 29.25 | 27.20 | 25.52 | 24.07 | 22.07 |
| DIP | 31.98 | 28.68 | 26.66 | 25.20 | 23.94 | 22.87 |
| Proposed | 33.98 | 32.95 | 29.54 | 28.09 | 27.47 | 25.94 |
Table 16. Average SSIM results of different methods on 10 images in Urban100 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9471 | 0.8971 | 0.8534 | 0.8076 | 0.7767 | 0.7440 |
| FFDNet | 0.9461 | 0.9045 | 0.8700 | 0.8377 | 0.8074 | 0.7790 |
| NCSR | 0.9471 | 0.9471 | 0.8560 | 0.8149 | 0.7788 | 0.7442 |
| DnCNN | 0.9462 | 0.8891 | 0.8437 | 0.8100 | 0.7749 | 0.7417 |
| WNNM | 0.9486 | 0.9041 | 0.8690 | 0.8327 | 0.8050 | 0.7749 |
| RED-Net | 0.9588 | 0.9139 | 0.8818 | 0.8486 | 0.8119 | 0.7531 |
| TWSC | 0.9493 | 0.9069 | 0.8692 | 0.8330 | 0.7986 | 0.7659 |
| CsNet | 0.9547 | 0.9131 | 0.8826 | 0.8499 | 0.8150 | 0.7447 |
| DIP | 0.9474 | 0.9001 | 0.8624 | 0.8305 | 0.7996 | 0.7732 |
| Proposed | 0.9692 | 0.9579 | 0.9180 | 0.8909 | 0.8772 | 0.8433 |
Table 17. Average FSIM results of different methods on 10 images in Urban100 with various noise levels.

| Method | σ = 10 | σ = 20 | σ = 30 | σ = 40 | σ = 50 | σ = 60 |
|---|---|---|---|---|---|---|
| BM3D | 0.9778 | 0.9571 | 0.9387 | 0.9191 | 0.9018 | 0.8853 |
| FFDNet | 0.9774 | 0.9577 | 0.9410 | 0.9256 | 0.9105 | 0.8959 |
| NCSR | 0.9782 | 0.9573 | 0.9385 | 0.9204 | 0.9002 | 0.8823 |
| DnCNN | 0.9781 | 0.9572 | 0.9387 | 0.9219 | 0.9059 | 0.8885 |
| WNNM | 0.9787 | 0.9592 | 0.9433 | 0.9269 | 0.9122 | 0.8967 |
| RED-Net | 0.9757 | 0.9523 | 0.9289 | 0.9070 | 0.8783 | 0.8586 |
| TWSC | 0.9789 | 0.9599 | 0.9429 | 0.9260 | 0.9086 | 0.8912 |
| CsNet | 0.9749 | 0.9510 | 0.9267 | 0.9047 | 0.8758 | 0.8542 |
| DIP | 0.9724 | 0.9497 | 0.9308 | 0.9134 | 0.8963 | 0.8791 |
| Proposed | 0.9801 | 0.9767 | 0.9555 | 0.9421 | 0.9368 | 0.9168 |