Adversarial Content–Noise Complementary Learning Model for Image Denoising and Tumor Detection in Low-Quality Medical Images
Abstract
1. Introduction
- The ACNCL model uniquely connects image denoising with tumor detection, evaluating tumor detection performance before and after denoising. This integration addresses the gap in previous research [16], which focused solely on content and noise learning without considering the impact on tumor detection.
- The incorporation of PatchGAN as a local discriminator ensures fine-grained texture restoration and the structural integrity of medical images. The ACNCL model further enhances denoising efficiency using dual discriminators, one for real pairs and another for fake pairs, striking a balance between noise suppression and content preservation.
- Unlike traditional models that discard noise priors during testing, the ACNCL predictor model uses two predictors, content and noise, in a complementary manner. This approach allows for more effective noise removal while preserving critical anatomical details, enhancing the diagnostic quality of medical images.
2. Materials and Methods
- a. U-Net
- b. DnCNN
- c. DenseNet
- d. Context-Aware Anisotropic Gaussian Filtering (CA-AGF)
- e. Discrete Wavelet Transform (DWT)
- f. Generative Adversarial Networks (GAN)
2.1. Mathematical Formulations for Detecting Gaussian Noise in CT and MRI Images
- i. Gaussian Noise Model: Gaussian noise $n$ follows the probability density function $p(n) = \frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{(n-\mu)^{2}}{2\sigma^{2}}\right)$, where:
- $\mu$: The mean of the distribution, indicating the central value around which noise values are distributed.
- $\sigma^{2}$: The variance of the distribution, indicating the spread of noise values around the mean. A higher variance corresponds to greater deviations from the mean.
- ii. Estimating Gaussian Noise Parameters (Mean and Variance): Within a homogeneous image region, the parameters are estimated as $\hat{\mu} = \frac{1}{N}\sum_{i=1}^{N} I_i$ and $\hat{\sigma}^{2} = \frac{1}{N}\sum_{i=1}^{N} (I_i - \hat{\mu})^{2}$.
- iii. Overall Noise Detection Formula: Combining the estimators above, the noise in a candidate region is modeled as $n \sim \mathcal{N}(\hat{\mu}, \hat{\sigma}^{2})$, where:
- $I_i$ represents the pixel intensities in a homogeneous image region;
- $N$ is the number of pixels in this region;
- $\hat{\mu}$ and $\hat{\sigma}^{2}$ represent the mean and variance of Gaussian noise in that region, providing essential parameters for further denoising.
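As a concrete illustration of these estimators, the short NumPy sketch below computes the sample mean and variance over a homogeneous patch. It is our illustration rather than code from the paper, and it assumes the homogeneous region has already been selected (e.g., by a low-variance patch search).

```python
import numpy as np

def estimate_gaussian_noise(region: np.ndarray) -> tuple[float, float]:
    """Estimate Gaussian noise parameters (mean, variance) from a
    homogeneous image region, following the estimators above."""
    n = region.size                               # N: number of pixels
    mu_hat = region.sum() / n                     # sample mean of intensities I_i
    var_hat = ((region - mu_hat) ** 2).sum() / n  # sample variance around the mean
    return float(mu_hat), float(var_hat)

# Example: a flat patch of intensity 0.5 corrupted by N(0, 0.05^2) noise.
rng = np.random.default_rng(0)
patch = 0.5 + rng.normal(0.0, 0.05, size=(32, 32))
mu, var = estimate_gaussian_noise(patch)
print(f"mu = {mu:.3f}, sigma = {var ** 0.5:.3f}")  # approx. 0.5 and 0.05
```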
2.2. Adversarial Content–Noise Complementary Learning (ACNCL)
- The pair (I_degraded, I_content) is used to train the content predictor, focusing on reconstructing clean content.
- The pair (I_degraded, I_noise) is used to train the noise predictor, which learns to isolate noise patterns.
- This training approach enables the network to learn complementary aspects of noise and content synergistically.
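To make the pairing concrete, the sketch below shows how each degraded slice can expose both supervised targets through a single dataset object. This is our illustration, not code from the paper; the class name and tensor layout are assumptions, and the noise target is formed as I_noise = I_degraded - I_content per the complementary formulation above.

```python
import torch
from torch.utils.data import Dataset

class ComplementaryPairs(Dataset):
    """Yields (degraded, content target, noise target) triples so the
    content predictor and noise predictor can be trained side by side."""
    def __init__(self, degraded: torch.Tensor, clean: torch.Tensor):
        self.degraded = degraded         # I_degraded, shape (N, 1, H, W)
        self.clean = clean               # I_content, same shape
        self.noise = degraded - clean    # I_noise = I_degraded - I_content

    def __len__(self) -> int:
        return self.degraded.shape[0]

    def __getitem__(self, i):
        # (input, content) trains the content predictor;
        # (input, noise) trains the noise predictor.
        return self.degraded[i], self.clean[i], self.noise[i]
```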
2.3. Flowchart Representation of the Proposed ACNCL
3. Implementation Based on Generative Adversarial Network (GAN)
- 1. The GAN architecture: The model is structured with two primary components, a Generator and a Discriminator. The Generator uses multiple predictors, such as DnCNN, U-Net, and DenseNet, to process the degraded image, separating it into predicted content and predicted noise; it also incorporates the Discrete Wavelet Transform (DWT) and a Context-Aware Anisotropic Gaussian Filter (CA-AGF) to enhance the accuracy of the predictions. A PatchGAN architecture serves as the Discriminator, distinguishing between real and generated image pairs (i.e., real content vs. predicted content) and evaluating tumor detection accuracy through specialized loss functions that penalize incorrect predictions of tumor regions. Figure 3 shows the GAN-based implementation of the proposed ACNCL model. The diagram is organized into the following interconnected components:
Input Degraded Image (I_degraded): The degraded medical image, potentially containing a tumor, is passed through the Noise Predictor and Content Predictor sub-networks within the Generator.
Noise Predictor (GAN-based): The noise predictor leverages models such as U-Net and DenseNet to estimate the noise components in the degraded image, incorporating DWT and CA-AGF feature extraction to capture and enhance the noisy regions. Its output is the predicted noise (I_noise).
Content Predictor (GAN-based): Simultaneously, the degraded image is processed through the content prediction pathway, where models such as U-Net and DenseNet predict the underlying clean content, focusing on preserving key diagnostic features such as potential tumors. Its output is the predicted content (I_content1).
Subtraction and Fusion Mechanism: The predicted noise (I_noise) is subtracted from the degraded image (I_degraded) to yield an intermediate prediction, which undergoes further refinement. A fusion mechanism then concatenates and processes the predicted content and noise through additional layers (e.g., convolutional layers and AGF extraction), producing the final predicted content (I_content2).
Tumor Detection: The tumor detection mechanism is embedded within the Generator, where the final predicted content is analyzed to identify potential tumors. DenseNet plays a key role in enhancing detection accuracy by learning complex patterns associated with tumor regions.
Discriminator (PatchGAN): The Discriminator receives real and fake pairs. Real pair: the real content (I_content) and real noise (I_noise). Fake pair: the final predicted content (I_content2) and the original corrupted image (I_degraded). The Discriminator evaluates the authenticity of the generated images and the accuracy of tumor detection, feeding the results back to refine the Generator's predictions.
The final output of the Generator is a denoised medical image with tumor regions preserved. The Discriminator ensures that the generated image is not only noise-free but also retains critical diagnostic features, such as tumors, for reliable medical analysis. PatchGAN, a well-known discriminator design, targets structural inconsistencies at the patch level, addressing common issues such as texture or style degradation, which is why we adopted it as the Discriminator in our GAN model. PatchGAN takes either a real image pair or a generated (fake) image pair as input: the Generator aims to produce synthetic data that deceives the Discriminator, while PatchGAN focuses on differentiating synthetic from real data. PatchGAN is particularly valuable in medical imaging because it operates on local image patches rather than the entire image, capturing fine-grained details that are crucial in diagnostics, such as tumor boundaries or subtle variations in tissue structure. By judging smaller regions of the image, it learns to discriminate small-scale features, which is especially important in applications such as tumor detection, organ segmentation, and lesion identification, where local abnormalities must be detected with high accuracy (a minimal code sketch of such a discriminator is given after this list).
- 2. Loss Function: To define the loss function, we followed the approach suggested by [31], combining PatchGAN loss and L1 loss. The composite loss, denoted $\mathcal{L}$, is formulated as $\mathcal{L} = \mathcal{L}_{GAN} + \lambda \mathcal{L}_{L1}$. The GAN loss is expressed as $\mathcal{L}_{GAN} = \mathbb{E}_{x,y}\left[\log D(x, y)\right] + \mathbb{E}_{x}\left[\log\left(1 - D(x, G(x))\right)\right]$, where $G$ denotes the Generator and $D$ the Discriminator. The L1 loss, $\mathcal{L}_{L1}$, is separated into two distinct parts: $\mathcal{L}_{L1} = \mathcal{L}_{L1\text{-}content} + \mathcal{L}_{L1\text{-}noise}$. Here, $\mathcal{L}_{L1\text{-}content}$ quantifies the mean absolute error between the generated content $\hat{I}_{content}$ and the actual content $I_{content}$, while $\mathcal{L}_{L1\text{-}noise}$ measures the mean absolute error between the predicted noise $\hat{I}_{noise}$ and the true noise $I_{noise}$. The parameter $\lambda$ serves as a scaling factor for $\mathcal{L}_{L1}$.
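The following PyTorch sketch ties these two items together: a pix2pix-style PatchGAN discriminator and the composite generator objective $\mathcal{L} = \mathcal{L}_{GAN} + \lambda \mathcal{L}_{L1}$. It is a minimal illustration under stated assumptions rather than the paper's exact architecture: single-channel (grayscale) slices, the 70 × 70 PatchGAN layout of Isola et al. [31], and λ = 100 (the pix2pix default) are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchGANDiscriminator(nn.Module):
    """Maps a conditioned image pair to a grid of logits, one per local patch."""
    def __init__(self, in_channels: int = 2, base: int = 64):
        super().__init__()
        def block(c_in, c_out, stride, norm=True):
            layers = [nn.Conv2d(c_in, c_out, 4, stride, 1)]
            if norm:
                layers.append(nn.BatchNorm2d(c_out))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_channels, base, 2, norm=False),  # no norm on the first layer
            *block(base, base * 2, 2),
            *block(base * 2, base * 4, 2),
            *block(base * 4, base * 8, 1),
            nn.Conv2d(base * 8, 1, 4, 1, 1),           # per-patch real/fake logit
        )

    def forward(self, condition: torch.Tensor, candidate: torch.Tensor) -> torch.Tensor:
        # Judge the candidate image conditioned on the degraded input, pix2pix style.
        return self.net(torch.cat([condition, candidate], dim=1))

def generator_loss(d_fake_logits, pred_content, true_content,
                   pred_noise, true_noise, lam: float = 100.0) -> torch.Tensor:
    """L = L_GAN + lam * (L_L1-content + L_L1-noise)."""
    # Adversarial term: the Generator wants every patch judged as real (label 1).
    l_gan = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    l1_content = F.l1_loss(pred_content, true_content)  # MAE on predicted content
    l1_noise = F.l1_loss(pred_noise, true_noise)        # MAE on predicted noise
    return l_gan + lam * (l1_content + l1_noise)
```

In a full training loop, the Discriminator would be updated with the symmetric objective (real pairs labeled 1, fake pairs labeled 0) before each Generator step.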
Experimental Setup
Predictor | Original Form | Modification | Overall Performance
---|---|---|---
U-Net (Figure 4a) [11] | -Unified single U-Net. -No padding applied. -No batch normalization. | -Compared to the U-Net in [11], our architecture is split into two parallel pathways (one for content, the other for noise). -Padding is applied to maintain a consistent feature-map size before and after convolution. -Batch normalization is introduced to stabilize training by normalizing layer activations. | -Enhances the ability to separate and exploit noise-related and content features for effective capture and processing. -Padding allows accurate alignment and combination of high-level and low-level features across the network. -Faster convergence and improved denoising performance.
DnCNN (Figure 4b) [13] | -Originally used residual learning, where the network predicts the noise rather than the denoised image itself. -Used the standard ReLU (Rectified Linear Unit) activation after convolution layers. | -In contrast with the DnCNN in [13], this work discards residual learning, since noise can dominate the residuals. -Leaky ReLU is used in place of standard ReLU. | -The network is trained to output the denoised image directly rather than the noise residual, preventing over-smoothing. -Leaky ReLU allows a small, non-zero gradient for negative values, preventing dead neurons and improving information flow through the network.
DenseNet (Figure 4c) [16] | -Used deconvolution layers. -Bottleneck layers. -Traditional reconstruction layers. | -Sub-pixel convolution replaced the deconvolution layers. -Atrous Spatial Pyramid Pooling (ASPP) substituted the bottleneck layers. -A Cascaded Refinement Network (CRN) replaced the traditional reconstruction layers. | -Sub-pixel reconstruction produces smoother and more accurate up-sampled images. -ASPP captures multi-scale contextual information without reducing spatial resolution, allowing better feature preservation and enhanced localization of tumors in noisy medical images. -Higher-quality and more accurate denoised outputs.
CA-AGF (Figure 4d) [5] | -Traditional AGF has no dynamism in the filtering process. | -The Context-Aware AGF dynamically adjusts the filtering process based on local image structures. | -Enhances feature preservation and denoising adaptability in complex regions such as medical images.
DWT (Figure 4e) [6] | -Traditional DWT produces less natural-looking and less detailed denoised images. | -Enhanced with multi-resolution fusion and anisotropic diffusion to preserve fine details and broader structures. | -Produces more natural-looking and detailed denoised images than the original DWT.
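As one concrete example of the modifications listed above, the sketch below implements the DnCNN variant from the table: Leaky ReLU activations in place of ReLU, and no residual connection, so the network outputs the denoised image directly. The depth of 17 layers and 64 feature maps follow the original DnCNN defaults, while the 0.2 negative slope is an illustrative assumption.

```python
import torch
import torch.nn as nn

class ModifiedDnCNN(nn.Module):
    """DnCNN variant from the table: Leaky ReLU activations and no
    residual learning; the network predicts the denoised image itself."""
    def __init__(self, channels: int = 1, features: int = 64, depth: int = 17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1),
                  nn.LeakyReLU(0.2, inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features),
                       nn.LeakyReLU(0.2, inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # denoised image directly, not x minus a residual
```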
- (a) CT Dataset: The proposed denoising model was validated on a publicly available CT dataset collected from The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD) collection, specifically curated for evaluating medical image denoising techniques [32]. The dataset can be accessed at https://www.cancerimagingarchive.net/collection/tcga-luad/, accessed on 5 May 2020. It comprised 10 CT scans from anonymous patients, each containing 2D slices of abdominal regions and corresponding simulated 25% dose CT 2D slices. The full-dose data were acquired at 120 kVp and 220 effective mAs, while the 25% dose data were generated by adding Poisson noise to the projection data. The dataset included 2428 2D slice pairs (512 × 512 pixels). We selected 1830 slice pairs from 7 patient scans for training and 598 slice pairs from 3 patients for testing. A separate brain CT dataset was used to compare the performance of the results obtained. This dataset, with dimensions of 512 × 512 pixels, is a collection of human brain scans with neurological conditions, including cancer, tumors, and aneurysms, and can be accessed at https://www.kaggle.com/datasets/trainingdatapro/computed-tomography-ct-of-the-brain/data, accessed on 13 December 2024.
- (b) MRI Dataset: The dataset used in this experiment is a curated combination of three sources: the Figshare dataset, the SARTAJ dataset, and the Br35H dataset. The Figshare dataset provided a significant portion of the images, particularly for the glioma class, which was critical due to issues identified in the SARTAJ dataset. The SARTAJ dataset initially included images for various tumor classes, but its glioma class images were not accurately categorized; to ensure the integrity of the experiment, these images were replaced with more reliable ones from the Figshare dataset. Finally, the Br35H dataset supplied the no-tumor class images, which were necessary as the negative class in the classification tasks. The combined dataset can be accessed at https://www.kaggle.com/datasets/masoudnickparvar/brain-tumor-mri-dataset, accessed on 4 January 2021. It contained brain 2D slices from 10 patients in a clinical trial, comprising 1311 2D slice pairs in total (544 × 394 pixels), categorized into four classes: glioma, meningioma, no tumor, and pituitary tumor. We selected 991 slice pairs from 8 patients for training and 320 slice pairs from 2 patients for testing. Additionally, we used a dataset from the Stanford Brain MRI collection comprising 156 whole-brain MRI studies [33].
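Both splits above are patient-level (7/3 patients for CT, 8/2 for MRI), so no patient contributes slices to both the training and test sets. The hypothetical sketch below shows one way to enforce such a split; the directory layout, file pattern, and function name are our assumptions, not part of either dataset.

```python
from pathlib import Path

def patient_level_split(root: str, train_ids: set[str], test_ids: set[str]):
    """Partition slice pairs by patient so no patient appears in both sets."""
    train, test = [], []
    for pair in sorted(Path(root).glob("*/slice_*.png")):  # one folder per patient
        patient_id = pair.parent.name
        if patient_id in train_ids:
            train.append(pair)
        elif patient_id in test_ids:
            test.append(pair)
    return train, test
```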
- (a) Peak Signal-to-Noise Ratio (PSNR): Measures the ratio between the maximum possible pixel intensity and the distorting noise power, $PSNR = 10 \log_{10}\left(\frac{MAX_I^{2}}{MSE}\right)$ dB; higher values indicate better reconstruction quality.
- (b) Mean Square Error (MSE): The MSE is the cumulative squared error between the denoised image and the original reference image; the lower the MSE, the better the image quality. It is calculated as $MSE = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[I(i,j) - \hat{I}(i,j)\right]^{2}$, where $I$ is the $M \times N$ reference image and $\hat{I}$ is the denoised image.
- (c) Structural Similarity Index Measure (SSIM): Quantifies perceived structural similarity between images $x$ and $y$ as $SSIM(x,y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^{2} + \mu_y^{2} + C_1)(\sigma_x^{2} + \sigma_y^{2} + C_2)}$, where $\mu$, $\sigma^{2}$, and $\sigma_{xy}$ denote local means, variances, and covariance, and $C_1$, $C_2$ are stabilizing constants; values closer to 1 indicate better structural preservation.
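All three metrics, together with the RMSE reported in the results tables, can be computed with scikit-image. The snippet below is a minimal sketch assuming both images are scaled to [0, 1], hence data_range=1.0; RMSE is simply the square root of MSE on this scale.

```python
import numpy as np
from skimage.metrics import (mean_squared_error,
                             peak_signal_noise_ratio,
                             structural_similarity)

def evaluate(reference: np.ndarray, denoised: np.ndarray) -> dict:
    """Compute MSE, RMSE, PSNR (dB), and SSIM for images in [0, 1]."""
    mse = mean_squared_error(reference, denoised)
    return {
        "MSE": float(mse),
        "RMSE": float(np.sqrt(mse)),
        "PSNR": peak_signal_noise_ratio(reference, denoised, data_range=1.0),
        "SSIM": structural_similarity(reference, denoised, data_range=1.0),
    }
```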
4. Results
4.1. Image Denoising Before Tumor Detection in CT Images
4.2. Ablation Studies
Comparison with Reference Methods
5. Cross-Contextual Studies
6. Discussion
7. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- 1. Hussain, S.; Mubeen, I.; Ullah, N.; Shah, S.S.U.D.; Khan, B.A.; Zahoor, M.; Ullah, R.; Khan, F.A.; Sultan, M.A. Modern diagnostic imaging technique applications and risk factors in the medical field: A review. BioMed Res. Int. 2022, 2022, 5164970.
- 2. Islam, S.M.S.; Nasim, M.A.A.; Hossain, I.; Ullah, D.M.A.; Gupta, D.K.D.; Bhuiyan, M.M.H. Introduction of medical imaging modalities. In Data-Driven Approaches on Medical Imaging; Springer Nature: Cham, Switzerland, 2023; pp. 1–25.
- 3. Nazir, N.; Sarwar, A.; Saini, B.S. Recent developments in denoising medical images using deep learning: An overview of models, techniques, and challenges. Micron 2024, 180, 103615.
- 4. Chang, L.W.; Liao, J.R. Improving non-local means image denoising by correlation correction. Multidimens. Syst. Signal Process. 2023, 34, 147–162.
- 5. Abuya, T.K.; Rimiru, R.M.; Okeyo, G.O. An image denoising technique using wavelet-anisotropic Gaussian filter-based denoising convolutional neural network for CT images. Appl. Sci. 2023, 13, 12069.
- 6. Ismael, A.A.; Baykara, M. Digital image denoising techniques based on multi-resolution wavelet domain with spatial filters: A review. Trait. Signal 2021, 38, 639–651.
- 7. Elad, M.; Kawar, B.; Vaksman, G. Image denoising: The deep learning revolution and beyond—A survey paper. SIAM J. Imaging Sci. 2023, 16, 1594–1654.
- 8. Jakhar, S.P.; Nandal, A.; Dhaka, A.; Alhudhaif, A.; Polat, K. Brain tumor detection with multi-scale fractal feature network and fractal residual learning. Appl. Soft Comput. 2024, 153, 111284.
- 9. Liu, Z.S.; Siu, W.C.; Wang, L.W.; Li, C.T.; Cani, M.P. Unsupervised real image super-resolution via generative variational autoencoder. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 442–443.
- 10. Di Feola, F.; Tronchin, L.; Guarrasi, V.; Soda, P. Multi-scale texture loss for CT denoising with GANs. arXiv 2024, arXiv:2403.16640.
- 11. Mehta, D.; Padalia, D.; Vora, K.; Mehendale, N. MRI image denoising using U-Net and image processing techniques. In Proceedings of the 2022 5th International Conference on Advances in Science and Technology (ICAST), Mumbai, India, 2–3 December 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 306–313.
- 12. Zhang, J.; Zhou, H.; Niu, Y.; Lv, J.; Chen, J.; Cheng, Y. CNN and multi-feature extraction-based denoising of CT images. Biomed. Signal Process. Control 2021, 67, 102545.
- 13. Trung, N.T.; Trinh, D.H.; Trung, N.L.; Luong, M. Low-dose CT image denoising using deep convolutional neural networks with extended receptive fields. Signal Image Video Process. 2022, 16, 1963–1971.
- 14. Chen, Y.; Li, Y.; Zhang, X.; Sun, J.; Jia, J. Focal sparse convolutional networks for 3D object detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5428–5437.
- 15. Jifara, W.; Jiang, F.; Rho, S.; Cheng, M.; Liu, S. Medical image denoising using convolutional neural network: A residual learning approach. J. Supercomput. 2019, 75, 704–718.
- 16. Geng, M.; Meng, X.; Yu, J.; Zhu, L.; Jin, L.; Jiang, Z.; Qiu, B.; Li, H.; Kong, H.; Yuan, J.; et al. Content-noise complementary learning for medical image denoising. IEEE Trans. Med. Imaging 2021, 41, 407–419.
- 17. Yahya, A.A.; Tan, J.; Su, B.; Hu, M.; Wang, Y.; Liu, K.; Hadi, A.N. BM3D image denoising algorithm based on adaptive filtering. Multimed. Tools Appl. 2020, 79, 20391–20427.
- 18. Kim, H.J.; Lee, D. Image denoising with conditional generative adversarial networks (CGAN) in low-dose chest images. Nucl. Instrum. Methods Phys. Res. Sect. A Accel. Spectrometers Detect. Assoc. Equip. 2020, 954, 161914.
- 19. Wang, G.; Hu, X. Low-dose CT denoising using a progressive Wasserstein generative adversarial network. Comput. Biol. Med. 2021, 135, 104625.
- 20. Yang, Q.; Yan, P.; Zhang, Y.; Yu, H.; Shi, Y.; Mou, X.; Kalra, M.K.; Zhang, Y.; Sun, L.; Wang, G. Low-dose CT image denoising using a generative adversarial network with Wasserstein distance and perceptual loss. IEEE Trans. Med. Imaging 2018, 37, 1348–1357.
- 21. Kascenas, A.; Sanchez, P.; Schrempf, P.; Wang, C.; Clackett, W.; Mikhael, S.S.; Voisey, J.P.; Goatman, K.; Weir, A.; Pugeault, N.; et al. The role of noise in denoising models for anomaly detection in medical images. Med. Image Anal. 2023, 90, 102963.
- 22. Shomal Zadeh, F.; Pooyan, A.; Alipour, E.; Hosseini, N.; Thurlow, P.C.; Del Grande, F.; Shafiei, M.; Chalian, M. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) in the differentiation of soft tissue sarcoma from benign lesions: A systematic review of the literature. Skelet. Radiol. 2024, 53, 1343–1357.
- 23. Xu, X.; Wang, R.; Fu, C.W.; Jia, J. SNR-aware low-light image enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17714–17724.
- 24. Koonjoo, N.; Zhu, B.; Bagnall, G.C.; Bhutto, D.; Rosen, M.S. Boosting the signal-to-noise of low-field MRI with deep learning image reconstruction. Sci. Rep. 2021, 11, 8248.
- 25. Sarkar, K.; Bag, S.; Tripathi, P.C. Noise-aware content-noise complementary GAN with local and global discrimination for low-dose CT denoising. Neurocomputing 2024, 582, 127473.
- 26. Tang, Y.; Du, Q.; Wang, J.; Wu, Z.; Li, Y.; Li, M.; Yang, X.; Zheng, J. CCN-CL: A content-noise complementary network with contrastive learning for low-dose computed tomography denoising. Comput. Biol. Med. 2022, 147, 105759.
- 27. Utz, J.; Weise, T.; Schlereth, M.; Wagner, F.; Thies, M.; Gu, M.; Uderhardt, S.; Breininger, K. Focus on content not noise: Improving image generation for nuclei segmentation by suppressing steganography in CycleGAN. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 6 October 2023; pp. 3856–3864.
- 28. Ghahremani, M.; Khateri, M.; Sierra, A.; Tohka, J. Adversarial distortion learning for medical image denoising. arXiv 2022, arXiv:2204.14100.
- 29. Jiang, J.; Wang, Z.; Qiu, S.; Zhang, K.; Su, Y.; Zhang, M.; Zhang, C. Multi-task learning-based method for load identification, electrical fault detection, and signal denoising. IEEE Sens. J. 2024, 24, 40069–40082.
- 30. Sun, Y.; Yang, H.; Zhou, J.; Wang, Y. ISSMF: Integrated semantic and spatial information of multi-level features for automatic segmentation in prenatal ultrasound images. Artif. Intell. Med. 2022, 125, 102254.
- 31. Isola, P.; Zhu, J.Y.; Zhou, T.; Efros, A.A. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134.
- 32. Albertina, B.; Watson, M.; Holback, C.; Jarosz, R.; Kirk, S.; Lee, Y.; Rieger-Christ, K.; Lemmerman, J. The Cancer Genome Atlas Lung Adenocarcinoma Collection (TCGA-LUAD) (Version 4) [Data set]. Cancer Imaging Arch. 2016.
- 33. Grøvik, E.; Yi, D.; Iv, M.; Tong, E.; Rubin, D.; Zaharchuk, G. Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J. Magn. Reson. Imaging 2020, 51, 175–182.
- 34. Prasad, M.A.; Subiramaniyam, N.P. A hybrid approach of Adam optimization with CNN to remove noise on images. In Proceedings of the 2022 International Mobile and Embedded Technology Conference (MECON), Noida, India, 10–11 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 527–530.
No. | Method | SSIM (Before Denoising) | SSIM (After Denoising) | PSNR (dB, Before) | PSNR (dB, After) | RMSE (Before) | RMSE (After) |
---|---|---|---|---|---|---|---|
1 | DnCNN | 0.64 | 0.91 | 23.0 | 29.5 | 0.108 | 0.048 |
2 | AGF | 0.61 | 0.84 | 22.3 | 28.4 | 0.114 | 0.052 |
3 | U-Net | 0.60 | 0.83 | 22.0 | 27.6 | 0.120 | 0.065 |
4 | DWT | 0.57 | 0.81 | 21.8 | 26.9 | 0.126 | 0.070 |
5 | DenseNet | 0.62 | 0.86 | 22.7 | 29.1 | 0.112 | 0.050 |
No. | Method | SSIM (Before Denoising) | SSIM (After Denoising) | PSNR (dB, Before) | PSNR (dB, After) | RMSE (Before) | RMSE (After) |
---|---|---|---|---|---|---|---|
1 | DnCNN | 0.62 | 0.89 | 22.1 | 28.3 | 0.114 | 0.056 |
2 | AGF | 0.59 | 0.82 | 21.7 | 26.4 | 0.125 | 0.067 |
3 | U-Net | 0.57 | 0.81 | 21.1 | 26.0 | 0.128 | 0.069 |
4 | DWT | 0.55 | 0.80 | 20.9 | 25.7 | 0.134 | 0.072 |
5 | DenseNet | 0.60 | 0.84 | 21.5 | 27.1 | 0.120 | 0.063 |
Predictor | SSIM (Mean ± SD) | RMSE (Mean ± SD) | PSNR (dB, Mean ± SD) |
---|---|---|---|
ACNCL-U-Net | 0.91 ± 0.02 | 0.050 ± 0.007 | 30.3 ± 0.4 |
Sole U-Net for Content | 0.88 ± 0.05 | 0.055 ± 0.007 | 29.0 ± 0.5 |
Sole U-Net for Noise | 0.87 ± 0.03 | 0.056 ± 0.006 | 28.4 ± 0.5 |
Broad U-Net for Content | 0.89 ± 0.02 | 0.057 ± 0.006 | 28.1 ± 0.4 |
Broad U-Net for Noise | 0.88 ± 0.02 | 0.057 ± 0.005 | 27.4 ± 0.4 |
ACNCL-DenseNet | 0.85 ± 0.02 | 0.058 ± 0.005 | 28.1 ± 0.4 |
Sole DenseNet for Content | 0.80 ± 0.03 | 0.072 ± 0.007 | 25.7 ± 0.5 |
Sole DenseNet for Noise | 0.81 ± 0.03 | 0.070 ± 0.007 | 26.0 ± 0.5 |
ACNCL-CA-AGF | 0.88 ± 0.03 | 0.052 ± 0.006 | 29.9 ± 0.5 |
Sole CA-AGF for Content | 0.85 ± 0.03 | 0.055 ± 0.007 | 28.0 ± 0.5 |
Sole CA-AGF for Noise | 0.83 ± 0.03 | 0.054 ± 0.007 | 27.3 ± 0.5 |
ACNCL-DWT | 0.80 ± 0.03 | 0.072 ± 0.007 | 26.9 ± 0.5 |
Sole DWT for Content | 0.75 ± 0.03 | 0.084 ± 0.008 | 24.7 ± 0.6 |
Sole DWT for Noise | 0.76 ± 0.03 | 0.081 ± 0.008 | 25.0 ± 0.6 |
ACNCL-DnCNN | 0.86 ± 0.03 | 0.056 ± 0.005 | 28.9 ± 0.5 |
Sole DnCNN for Content | 0.81 ± 0.03 | 0.054 ± 0.008 | 28.7 ± 0.6 |
Sole DnCNN for Noise | 0.84 ± 0.03 | 0.053 ± 0.008 | 27.0 ± 0.6 |
Method | SSIM (Mean ± SD) | RMSE (Mean ± SD) | PSNR (dB, Mean ± SD) |
---|---|---|---|
BM3D [17] | 0.782 ± 0.035 | 0.084 ± 0.007 | 24.9 ± 0.7 |
RED-CNN [15] | 0.821 ± 0.032 | 0.073 ± 0.006 | 26.3 ± 0.6 |
WGAN-GP [19] | 0.845 ± 0.029 | 0.068 ± 0.006 | 27.0 ± 0.6 |
VAE [9] | 0.805 ± 0.033 | 0.080 ± 0.007 | 25.4 ± 0.7 |
ACNCL-U-Net | 0.899 ± 0.025 | 0.056 ± 0.005 | 28.3 ± 0.9 |
ACNCL-DnCNN | 0.883 ± 0.027 | 0.061 ± 0.005 | 28.0 ± 0.5 |
ACNCL-DenseNet | 0.876 ± 0.028 | 0.063 ± 0.005 | 28.2 ± 0.5 |
ACNCL-CA-AGF | 0.896 ± 0.029 | 0.057 ± 0.006 | 28.5 ± 0.9 |
ACNCL-DWT | 0.855 ± 0.030 | 0.068 ± 0.006 | 26.9 ± 0.5 |
Method | SSIM (Mean ± SD) | RMSE (Mean ± SD) | PSNR (dB, Mean ± SD) |
---|---|---|---|
NLM [4] | 0.779 ± 0.036 | 0.085 ± 0.007 | 24.8 ± 0.7 |
SCNN [14] | 0.806 ± 0.034 | 0.079 ± 0.007 | 25.5 ± 0.7 |
U-Net [11] | 0.821 ± 0.032 | 0.073 ± 0.006 | 26.3 ± 0.6 |
DnCNN [13] | 0.829 ± 0.031 | 0.071 ± 0.006 | 26.6 ± 0.6 |
DenseNet | 0.833 ± 0.030 | 0.070 ± 0.006 | 26.8 ± 0.6 |
CA-AGF | 0.839 ± 0.031 | 0.072 ± 0.006 | 26.5 ± 0.6 |
DWT | 0.819 ± 0.032 | 0.074 ± 0.006 | 26.2 ± 0.6 |
ACNCL-U-Net | 0.889 ± 0.026 | 0.060 ± 0.005 | 27.9 ± 0.5 |
ACNCL-DnCNN | 0.893 ± 0.025 | 0.059 ± 0.005 | 28.1 ± 0.5 |
ACNCL-DenseNet | 0.881 ± 0.027 | 0.063 ± 0.005 | 27.6 ± 0.5 |
ACNCL-CA-AGF | 0.890 ± 0.028 | 0.060 ± 0.005 | 28.3 ± 0.5 |
ACNCL-DWT | 0.868 ± 0.029 | 0.066 ± 0.006 | 27.0 ± 0.5 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).