A Review of Neural Network-Based Image Noise Processing Methods
Abstract
1. Introduction
2. Digital Camera Noise
2.1. Types of Camera Photosensor Noise
- (1) Dark temporal noise.
- (2) Dark spatial noise.
- (3) Light temporal noise.
- (4) Light spatial noise.
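The four components listed above can be tied together in a simple signal model. Below is a minimal Python sketch (our illustration, not taken from the reviewed papers; all parameter values are invented for demonstration) that simulates one exposure with Poisson shot noise (light temporal), Gaussian dark temporal noise, and fixed DSNU/PRNU spatial patterns, roughly in the spirit of the EMVA 1288 model [8].

```python
# Minimal sketch (not from the reviewed papers): simulating the four EMVA 1288-style
# noise components for a single exposure. All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
H, W = 256, 256

mu_e = np.full((H, W), 500.0)           # mean photo-electrons per pixel (illustrative flat scene)
mu_d = 20.0                             # mean dark-signal electrons
sigma_d = 4.0                           # dark temporal noise, electrons
K = 0.8                                 # conversion gain, DN per electron
dsnu = rng.normal(0.0, 0.5, (H, W))     # dark spatial noise (DSNU), DN, fixed for a given sensor
prnu = rng.normal(1.0, 0.01, (H, W))    # light spatial noise (PRNU), relative gain, fixed for a given sensor

shot = rng.poisson(mu_e)                            # light temporal (shot) noise: Poisson statistics
dark = rng.normal(mu_d, sigma_d, (H, W))            # dark temporal noise: Gaussian approximation
frame_dn = K * (prnu * shot + dark) + dsnu          # convert electrons to digital numbers (DN)
frame = np.clip(np.round(frame_dn), 0, 2**12 - 1)   # quantization and clipping for an assumed 12-bit ADC
```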
2.2. Interconnection of Applications
2.3. Noise Estimation for Characterization, Denoising, and Identification
3. Fundamentals of Neural Network-Based Image Processing
3.1. Training of Neural Networks
3.2. Convolutional Neural Networks
3.3. Generative-Adversarial Network Architecture
4. Practical Applications of Neural Network Methods
4.1. Synthesis of Datasets and Noise Modeling
4.2. Denoising
4.3. Noise Estimation for Denoising
4.4. Source Camera Identification
4.5. Other Applications
5. Discussion
- Physics-informed architecture design: Developing neural networks that explicitly model the four-component noise structure defined by EMVA 1288 [8], enabling direct separation of photosensor noise from scene artifacts while maintaining computational efficiency.
- Standardized synthetic dataset creation: Establishing large-scale, validated synthetic datasets that accurately model complex noise pipelines of modern smartphones and cameras, reducing dependency on labor-intensive real-world data collection.
- Efficiency-optimized architectures: Investigating architectural innovations that achieve the accuracy of dual-network models while approaching single-network computational costs, particularly for real-time and mobile deployment scenarios.
- Hybrid evaluation frameworks: Creating benchmarking protocols that quantitatively compare learning-based and traditional methods across multiple dimensions—accuracy, computational cost, memory efficiency, and physical parameter interpretability.
- Interpretable feature extraction: Developing methods to ensure that neural network outputs correspond to physically meaningful noise parameters, enabling their use in camera characterization and forensic applications where interpretability is paramount (a minimal parameter-fitting sketch follows this list).
- Adaptive computational scaling: Designing networks capable of trading accuracy for efficiency based on deployment constraints, allowing the same architecture to serve both high-accuracy offline applications and resource-constrained real-time scenarios.
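As a minimal illustration of the interpretability point above (our sketch, not drawn from any cited implementation): under the Poisson–Gaussian model the noise level function is linear, σ²(I) = a·I + b, so per-patch mean/variance estimates produced by a network or any other estimator can be mapped back to a gain-like parameter a and a dark-noise variance b with a least-squares fit.

```python
# Hedged sketch: fitting the linear noise level function sigma^2(I) = a*I + b
# of the Poisson-Gaussian model to per-patch (mean, variance) estimates,
# recovering a gain-like parameter a and a dark-noise variance b.
import numpy as np

def fit_nlf(patch_means, patch_vars):
    """Least-squares fit of variance versus mean; returns (a, b)."""
    A = np.stack([patch_means, np.ones_like(patch_means)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, patch_vars, rcond=None)
    return a, b

# Synthetic self-check with known parameters a = 0.8, b = 4.0
rng = np.random.default_rng(1)
means = rng.uniform(10, 200, 500)
variances = 0.8 * means + 4.0 + rng.normal(0, 2.0, 500)
a_hat, b_hat = fit_nlf(means, variances)
print(f"estimated a = {a_hat:.3f}, b = {b_hat:.3f}")
```

Such a fit is one way to make learned outputs comparable with the physically defined parameters used in camera characterization.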
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
Abbreviation | Definition
--- | ---
AIRM | Adaptive Instance Residual Module
AWGN | Additive White Gaussian Noise
BM3D | Block-Matching and 3D Filtering
CNN | Convolutional Neural Network
CPU | Central Processing Unit
DN | Digital Number
DS | Dilation Selective (block)
DSNU | Dark Signal Non-Uniformity
DWT | Discrete Wavelet Transformation
EMVA | European Machine Vision Association
GAN | Generative Adversarial Network
GAT | Generalized Anscombe Transformation
GPU | Graphics Processing Unit
KL | Kullback–Leibler (Divergence)
NLF | Noise Level Function
NLL | Negative Log-Likelihood
NN | Neural Network
PCE | Peak-to-Correlation Energy
PRNU | Photo Response Non-Uniformity
PSNR | Peak Signal-to-Noise Ratio
RDAM | Residual Dynamic Attention Modules
ReLU | Rectified Linear Unit
RMS | Root Mean Square (error)
SSIM | Structural Similarity Index Measure
Std | Standard deviation
References
- Xu, P.; Wang, J.; Jiang, Y.; Gong, X. Applications of Artificial Intelligence and Machine Learning in Image Processing. Front. Mater. 2024, 11, 1431179. [Google Scholar] [CrossRef]
- Bernacki, J.; Scherer, R. Algorithms and Methods for Individual Source Camera Identification: A Survey. Sensors 2025, 25, 3027. [Google Scholar] [CrossRef]
- Nematollahi, M.A.; Vorakulpipat, C.; Rosales, H.G. Digital Watermarking; Springer: Berlin/Heidelberg, Germany, 2016; ISBN 9789811020940. [Google Scholar]
- Sadia, R.T.; Chen, J.; Zhang, J. CT Image Denoising Methods for Image Quality Improvement and Radiation Dose Reduction. J. Appl. Clin. Med. Phys. 2024, 25, e14270. [Google Scholar] [CrossRef]
- Hussein, T.D.H.; Jihad, K.H.; Omar, H.K. A study on image noise and various image denoising techniques. Res. J. Anal. Invent. 2021, 2, 27–44. [Google Scholar] [CrossRef]
- Fan, L.; Zhang, F.; Fan, H.; Zhang, C. Brief Review of Image Denoising Techniques. Vis. Comput. Ind. Biomed. Art 2019, 2, 7. [Google Scholar] [CrossRef]
- Li, Y.; Liu, C.; You, X.; Liu, J. A Single-Image Noise Estimation Algorithm Based on Pixel-Level Low-Rank Low-Texture Patch and Principal Component Analysis. Sensors 2022, 22, 8899. [Google Scholar] [CrossRef]
- European Machine Vision Association EMVA Standard 1288, Standard for Characterization of Image Sensors and Cameras. Available online: https://www.emva.org/standards-technology/emva-1288/ (accessed on 14 September 2025).
- Maître, H. From Photon to Pixel; John Wiley & Sons: Hoboken, NJ, USA, 2017; ISBN 9781119402466. [Google Scholar]
- Kozlov, A.V.; Nikitin, N.V.; Rodin, V.G.; Cheremkhin, P.A. Improving the Reliability of Digital Camera Identification by Optimizing the Algorithm for Comparing Noise Signatures. Meas. Tech. 2024, 66, 923–934. [Google Scholar] [CrossRef]
- Nakamoto, K.; Hotaka, H. Efficient and Accurate Conversion-Gain Estimation of a Photon-Counting Image Sensor Based on the Maximum Likelihood Estimation. Opt. Express 2022, 30, 37493. [Google Scholar] [CrossRef] [PubMed]
- Widrow, B.; Kollár, I. Quantization Noise; Cambridge University Press: Cambridge, UK, 2008; ISBN 9781139472845. [Google Scholar]
- Kozlov, A.V.; Rodin, V.G.; Starikov, R.S.; Evtikhiev, N.N.; Cheremkhin, P.A. A Family of Methods Based on Automatic Segmentation for Estimating Digital Camera Noise: A Review. IEEE Sens. J. 2024, 24, 17353–17365. [Google Scholar] [CrossRef]
- Zheng, L.; Jin, G.; Xu, W.; Qu, H.; Wu, Y. Noise Model of a Multispectral TDI CCD Imaging System and Its Parameter Estimation of Piecewise Weighted Least Square Fitting. IEEE Sens. J. 2017, 17, 3656–3668. [Google Scholar] [CrossRef]
- Jeong, B.G.; Kim, B.C.; Moon, Y.H.; Eom, I.K. Simplified Noise Model Parameter Estimation for Signal-Dependent Noise. Signal Process. 2014, 96, 266–273. [Google Scholar] [CrossRef]
- Zhang, Y.; Wang, G.; Xu, J. Parameter Estimation of Signal-Dependent Random Noise in CMOS/CCD Image Sensor Based on Numerical Characteristic of Mixed Poisson Noise Samples. Sensors 2018, 18, 2276. [Google Scholar] [CrossRef]
- Ta, C.-K.; Aich, A.; Gupta, A.; Roy-Chowdhury, A.K. Poisson2Sparse: Self-Supervised Poisson Denoising from a Single Image; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2022; pp. 557–567. [Google Scholar] [CrossRef]
- Hai Thai, T.; Retraint, F.; Cogranne, R. Generalized Signal-Dependent Noise Model and Parameter Estimation for Natural Images. Signal Process. 2015, 114, 164–170. [Google Scholar] [CrossRef]
- Li, J.; Wu, Y.; Zhang, Y.; Zhao, J.; Si, Y. Parameter Estimation of Poisson–Gaussian Signal-Dependent Noise from Single Image of CMOS/CCD Image Sensor Using Local Binary Cyclic Jumping. Sensors 2021, 21, 8330. [Google Scholar] [CrossRef]
- Foi, A.; Trimeche, M.; Katkovnik, V.; Egiazarian, K. Practical Poissonian-Gaussian Noise Modeling and Fitting for Single-Image Raw-Data. IEEE Trans. Image Process. 2008, 17, 1737–1754. [Google Scholar] [CrossRef]
- Liu, C.; Szeliski, R.; Kang, S.B.; Zitnick, C.L.; Freeman, W.T. Automatic Estimation and Removal of Noise from a Single Image. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 30, 299–314. [Google Scholar] [CrossRef] [PubMed]
- Dong, L.; Zhou, J.; Tang, Y.Y. Effective and Fast Estimation for Image Sensor Noise via Constrained Weighted Least Squares. IEEE Trans. Image Process. 2018, 27, 2715–2730. [Google Scholar] [CrossRef] [PubMed]
- Gastasini, E.; Capecci, N.; Lupi, F.; Gagliardi, A.; Saponara, S.; Lanzetta, M. An Instrument for the Characterization and Calibration of Optical Sensors. Sensors 2021, 21, 5141. [Google Scholar] [CrossRef] [PubMed]
- Evtikhiev, N.N.; Kozlov, A.V.; Krasnov, V.V.; Rodin, V.G.; Starikov, R.S.; Cheremkhin, P.A. Estimation of the Efficiency of Digital Camera Photosensor Noise Measurement through the Automatic Segmentation of Non-Uniform Target Methods and the Standard EMVA 1288. Meas. Tech. 2021, 64, 296–304. [Google Scholar] [CrossRef]
- Kozlov, A.V.; Rodin, V.G.; Starikov, R.S.; Evtikhiev, N.N.; Cheremkhin, P.A. Estimation of Camera’s Noise by Uniform Target Segmentation. IEEE Sens. J. 2023, 23, 4883–4891. [Google Scholar] [CrossRef]
- Evtikhiev, N.N.; Kozlov, A.V.; Krasnov, V.V.; Rodin, V.G.; Starikov, R.S.; Cheremkhin, P.A. A Method for Measuring Digital Camera Noise by Automatic Segmentation of a Striped Target. Comput. Opt. 2021, 45, 267–276. [Google Scholar] [CrossRef]
- Bilcu, R.C.; Vehvilainen, M. A New Method for Noise Estimation in Images. In Proceedings of the IEEE-Eurasip Nonlinear Signal and Image Processing, Sapporo, Japan, 18–20 May 2005; p. 25. [Google Scholar] [CrossRef]
- Tai, S.-C.; Yang, S.-M. A Fast Method for Image Noise Estimation Using Laplacian Operator and Adaptive Edge Detection. In Proceedings of the 3rd International Symposium on Communications, Control and Signal Processing, St Julians, Malta, 12–14 March 2008; pp. 1077–1081. [Google Scholar] [CrossRef]
- Rank, K.; Lendl, M.; Unbehauen, R. Estimation of Image Noise Variance. IEE Proc.—Vis. Image Signal Process. 1999, 146, 80. [Google Scholar] [CrossRef]
- Yang, S.-M. Fast and Reliable Image-Noise Estimation Using a Hybrid Approach. J. Electron. Imaging 2010, 19, 033007. [Google Scholar] [CrossRef]
- De Stefano, A.; White, P.R.; Collis, W.B. Training Methods for Image Noise Level Estimation on Wavelet Components. EURASIP J. Adv. Signal Process. 2004, 2004, 405209. [Google Scholar] [CrossRef]
- Starck, J.; Murtagh, F. Automatic Noise Estimation from the Multiresolution Support. Publ. Astron. Soc. Pac. 1998, 110, 193–199. [Google Scholar] [CrossRef]
- Pimpalkhute, V.A.; Page, R.; Kothari, A.; Bhurchandi, K.M.; Kamble, V.M. Digital Image Noise Estimation Using DWT Coefficients. IEEE Trans. Image Process. 2021, 30, 1962–1972. [Google Scholar] [CrossRef]
- Donoho, D.L. De-Noising by Soft-Thresholding. IEEE Trans. Inf. Theory 1995, 41, 613–627. [Google Scholar] [CrossRef]
- Hashemi, M.; Beheshti, S. Adaptive Noise Variance Estimation in BayesShrink. IEEE Signal Process. Lett. 2009, 17, 12–15. [Google Scholar] [CrossRef]
- Liu, X.; Tanaka, M.; Okutomi, M. Single-Image Noise Level Estimation for Blind Denoising. IEEE Trans. Image Process. 2013, 22, 5226–5237. [Google Scholar] [CrossRef]
- Pyatykh, S.; Hesser, J.; Zheng, L. Image Noise Level Estimation by Principal Component Analysis. IEEE Trans. Image Process. 2013, 22, 687–699. [Google Scholar] [CrossRef]
- Amer, A.; Mitiche, A.; Dubois, E. Reliable and Fast Structure-Oriented Video Noise Estimation. In Proceedings of the International Conference on Image Processing, Rochester, NY, USA, 22–25 September 2002; p. I. [Google Scholar] [CrossRef]
- Chen, G.; Zhu, F.; Heng, P.A. An Efficient Statistical Method for Image Noise Level Estimation. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 477–485. [Google Scholar] [CrossRef]
- Ponomarenko, N.N.; Lukin, V.V.; Zriakhov, M.S.; Kaarna, A.; Astola, J. An Automatic Approach to Lossy Compression of AVIRIS Images. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 472–475. [Google Scholar] [CrossRef]
- Shin, D.-H.; Park, R.-H.; Yang, S.; Jung, J.-H. Block-Based Noise Estimation Using Adaptive Gaussian Filtering. IEEE Trans. Consum. Electron. 2005, 51, 218–226. [Google Scholar] [CrossRef]
- Danielyan, A.; Foi, A. Noise Variance Estimation in Nonlocal Transform Domain. In Proceedings of the International Workshop on Local and Non-Local Approximation in Image Processing, Tuusula, Finland, 19–21 August 2009; pp. 41–45. [Google Scholar] [CrossRef]
- Li, F.; Fang, F.; Li, Z.; Zeng, T. Single Image Noise Level Estimation by Artificial Noise. Signal Process. 2023, 213, 109215. [Google Scholar] [CrossRef]
- Lukáš, J.; Fridrich, J.; Goljan, M. Digital Camera Identification from Sensor Pattern Noise. IEEE Trans. Inf. Forensics Secur. 2006, 1, 205–214. [Google Scholar] [CrossRef]
- Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016; ISBN 9780262035613. [Google Scholar]
- Gloe, T.; Böhme, R. The “Dresden Image Database” for Benchmarking Digital Image Forensics. In Proceedings of the 2010 ACM Symposium on Applied Computing—SAC ’10, Sierre, Switzerland, 22–26 March 2010; pp. 1584–1590. [Google Scholar] [CrossRef]
- Abdelhamed, A.; Lin, S.; Brown, M.S. A High-Quality Denoising Dataset for Smartphone Cameras. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 1692–1700. [Google Scholar] [CrossRef]
- Plötz, T.; Roth, S. Benchmarking Denoising Algorithms with Real Photographs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 1586–1595. [Google Scholar] [CrossRef]
- Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300. [Google Scholar] [CrossRef]
- Wei, K.; Fu, Y.; Zheng, Y.; Yang, J. Physics-Based Noise Modeling for Extreme Low-Light Photography. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8520–8537. [Google Scholar] [CrossRef] [PubMed]
- Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A Database of Human Segmented Natural Images and Its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001. [Google Scholar] [CrossRef]
- Wu, X. Color Demosaicking by Local Directional Interpolation and Nonlocal Adaptive Thresholding. J. Electron. Imaging 2011, 20, 023016. [Google Scholar] [CrossRef]
- Arbeláez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour Detection and Hierarchical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916. [Google Scholar] [CrossRef]
- Nam, S.; Hwang, Y.; Matsushita, Y.; Kim, S.J. A Holistic Approach to Cross-Channel Image Noise Modeling and Its Application to Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1683–1691. [Google Scholar] [CrossRef]
- Fu, B.; Zhang, X.; Wang, L.; Ren, Y.; Thanh, D.N.H. A Blind Medical Image Denoising Method with Noise Generation Network. J. X-Ray Sci. Technol. 2022, 30, 531–547. [Google Scholar] [CrossRef]
- Abdelhamed, A.; Brubaker, M.; Brown, M. Noise Flow: Noise Modeling with Conditional Normalizing Flows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3165–3173. [Google Scholar] [CrossRef]
- Chang, K.-C.; Wang, R.; Lin, H.-J.; Liu, Y.-L.; Chen, C.-P.; Chang, Y.-L.; Chen, H.-T. Learning Camera-Aware Noise Models. Lect. Notes Comput. Sci. 2020, 12369, 343–358. [Google Scholar] [CrossRef]
- Zou, Y.; Fu, Y. Estimating Fine-Grained Noise Model via Contrastive Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 12672–12681. [Google Scholar] [CrossRef]
- Jang, G.; Lee, W.; Son, S.; Lee, K. C2N: Practical Generative Noise Modeling for Real-World Denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 2330–2339. [Google Scholar] [CrossRef]
- Maleky, A.; Kousha, S.; Brown, M.S.; Brubaker, M.A. Noise2NoiseFlow: Realistic Camera Noise Modeling without Clean Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17611–17620. [Google Scholar] [CrossRef]
- Lehtinen, J.; Munkberg, J.; Hasselgren, J.; Laine, S.; Karras, T.; Aittala, M.; Aila, T. Noise2Noise: Learning Image Restoration without Clean Data. arXiv 2018, arXiv:1803.04189. [Google Scholar] [CrossRef]
- Hubel, D.H.; Wiesel, T.N. Receptive Fields, Binocular Interaction and Functional Architecture in the Cat’s Visual Cortex. J. Physiol. 1962, 160, 106–154. [Google Scholar] [CrossRef]
- LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-Based Learning Applied to Document Recognition. Proc. IEEE 1998, 86, 2278–2324. [Google Scholar] [CrossRef]
- Boureau, Y.-L.; Ponce, J.; LeCun, Y. A Theoretical Analysis of Feature Pooling in Visual Recognition. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 111–118. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv 2014, arXiv:1409.1556. [Google Scholar] [CrossRef]
- Nair, V.; Hinton, G. Rectified Linear Units Improve Restricted Boltzmann Machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V. Going Deeper with Convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 1–9. [Google Scholar] [CrossRef]
- Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542. [Google Scholar] [CrossRef] [PubMed]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. Lect. Notes Comput. Sci. 2015, 9351, 234–241. [Google Scholar] [CrossRef]
- Schawinski, K.; Zhang, C.; Zhang, H.; Fowler, L.; Santhanam, G.K. Generative Adversarial Networks Recover Features in Astrophysical Images of Galaxies beyond the Deconvolution Limit. Mon. Not. R. Astron. Soc. Lett. 2017, 467, L110–L114. [Google Scholar] [CrossRef]
- Krull, A.; Vičar, T.; Prakash, M.; Lalit, M.; Jug, F. Probabilistic Noise2Void: Unsupervised Content-Aware Denoising. Front. Comput. Sci. 2020, 2, 5. [Google Scholar] [CrossRef]
- Chi, J.; Wu, C.; Yu, X.; Ji, P.; Chu, H. Single Low-Dose CT Image Denoising Using a Generative Adversarial Network with Modified U-Net Generator and Multi-Level Discriminator. IEEE Access 2020, 8, 133470–133487. [Google Scholar] [CrossRef]
- Zou, Y.; Fu, Y.; Zhang, Y.; Zhang, T.; Yan, C.; Timofte, R. Calibration-Free Raw Image Denoising via Fine-Grained Noise Estimation. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 5368–5384. [Google Scholar] [CrossRef]
- Zhang, K.; Zuo, W.; Chen, Y.; Meng, D.; Zhang, L. Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising. IEEE Trans. Image Process. 2017, 26, 3142–3155. [Google Scholar] [CrossRef]
- Li, L.; Song, S.; Lv, M.; Jia, Z.; Ma, H. Multi-Focus Image Fusion Based on Fractal Dimension and Parameter Adaptive Unit-Linking Dual-Channel PCNN in Curvelet Transform Domain. Fractal Fract. 2025, 9, 157. [Google Scholar] [CrossRef]
- Cao, Z.-H.; Liang, Y.-J.; Deng, L.-J.; Vivone, G. An Efficient Image Fusion Network Exploiting Unifying Language and Mask Guidance. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 1–18. [Google Scholar] [CrossRef]
- Sun, H.; Duan, R.; Sun, G.; Zhang, H.; Chen, F.; Yang, F.; Cao, J. SARFT-GAN: Semantic-Aware ARConv Fused Top-k Generative Adversarial Network for Remote Sensing Image Denoising. Remote Sens. 2025, 17, 3114. [Google Scholar] [CrossRef]
- Yu, S.; Park, B.; Jeong, J. Deep Iterative Down-up CNN for Image Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2095–2103. [Google Scholar] [CrossRef]
- Hong, Z.; Fan, X.; Jiang, T.; Feng, J. End-To-End Unpaired Image Denoising with Conditional Adversarial Networks. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 4140–4149. [Google Scholar] [CrossRef]
- Zou, Y.; Yan, C.; Fu, Y. Iterative Denoiser and Noise Estimator for Self-Supervised Image Denoising. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 13219–13228. [Google Scholar] [CrossRef]
- Zeyde, R.; Elad, M.; Protter, M. On Single Image Scale-up Using Sparse-Representations. Lect. Notes Comput. Sci. 2010, 6920, 711–730. [Google Scholar] [CrossRef]
- Wang, Z.; Liu, J.; Li, G.; Han, H.J. Blind2Unblind: Self-Supervised Image Denoising with Visible Blind Spots. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 2017–2026. [Google Scholar] [CrossRef]
- Zeng, H.; Hosseini, M.D.M.; Deng, K.; Peng, A.; Goljan, M. A Comparison Study of CNN Denoisers on PRNU Extraction. arXiv 2021, arXiv:2112.02858. [Google Scholar]
- Tian, C.; Xu, Y.; Li, Z.; Zuo, W.; Fei, L.; Liu, H. Attention-Guided CNN for Image Denoising. Neural Netw. 2020, 124, 117–129. [Google Scholar] [CrossRef] [PubMed]
- Yue, Z.; Zhao, Q.; Zhang, L.; Meng, D. Dual Adversarial Network: Toward Real-World Noise Removal and Noise Generation. Lect. Notes Comput. Sci. 2020, 12355, 41–58. [Google Scholar] [CrossRef]
- Kousha, S.; Maleky, A.; Brown, M.S.; Brubaker, M.A. Modeling SRGB Camera Noise with Normalizing Flows. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17442–17450. [Google Scholar] [CrossRef]
- Tan, H.; Xiao, H.; Lai, S.; Liu, Y.; Zhang, M. Pixelwise Estimation of Signal-Dependent Image Noise Using Deep Residual Learning. Comput. Intell. Neurosci. 2019, 2019, 1–12. [Google Scholar] [CrossRef]
- Zhang, K.; Zuo, W.; Zhang, L. FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising. IEEE Trans. Image Process. 2018, 27, 4608–4622. [Google Scholar] [CrossRef]
- Ma, R.; Zhang, Y.; Zhang, B.; Fang, L.; Huang, D.; Qi, L. Learning Attention in the Frequency Domain for Flexible Real Photograph Denoising. IEEE Trans. Image Process. 2024, 33, 3707–3721. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. CycleISP: Real Image Restoration via Improved Data Synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 2693–2702. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H.; Shao, L. Multi-Stage Progressive Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 14816–14826. [Google Scholar] [CrossRef]
- Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.-H. Restormer: Efficient Transformer for High-Resolution Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5718–5729. [Google Scholar] [CrossRef]
- Yu, J.; Zhou, Y.; Sun, M.; Wang, D. Dual-Path Adversarial Denoising Network Based on UNet. Sensors 2025, 25, 4751. [Google Scholar] [CrossRef]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef]
- Wang, L.; Li, J.; Zhang, R.; Guo, X. Multi-Stage Progressive Generative Adversarial Network for Low-Dose CT Denoising. In Proceedings of the 6th International Conference on Communications, Information System and Computer Engineering (CISCE), Guangzhou, China, 10–12 May 2024; pp. 750–753. [Google Scholar] [CrossRef]
- Mao, X.-J.; Shen, C.; Yang, Y.-B. Image Restoration Using Very Deep Convolutional Encoder-Decoder Networks with Symmetric Skip Connections. In Proceedings of the NIPS’16: Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Volume 29, pp. 2810–2818. [Google Scholar]
- Santhanam, V.; Morariu, V.I.; Davis, L.S. Generalized Deep Image to Image Regression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 5395–5405. [Google Scholar] [CrossRef]
- Tai, Y.; Yang, J.; Liu, X.; Xu, C. MemNet: A Persistent Memory Network for Image Restoration. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; Volume 2017, pp. 4549–4557. [Google Scholar] [CrossRef]
- Park, B.; Yu, S.; Jeong, J. Densely Connected Hierarchical Network for Image Denoising. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 2103–2113. [Google Scholar] [CrossRef]
- Wang, M.; Yuan, P.; Qiu, S.; Jin, W.; Li, L.; Wang, X. Dual-Encoder UNet-Based Narrowband Uncooled Infrared Imaging Denoising Network. Sensors 2025, 25, 1476. [Google Scholar] [CrossRef]
- Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning Deep CNN Denoiser Prior for Image Restoration. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2808–2817. [Google Scholar] [CrossRef]
- Ignatov, A.; Kobyshev, N.; Timofte, R.; Vanhoey, K.; Gool, L.V. DSLR-Quality Photos on Mobile Devices with Deep Convolutional Networks. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 3297–3305. [Google Scholar] [CrossRef]
- Bartlett, O.J.; Benoit, D.M.; Pimbblet, K.A.; Simmons, B.; Hunt, L. Noise Reduction in Single-Shot Images Using an Auto-Encoder. Mon. Not. R. Astron. Soc. 2023, 521, 6318–6329. [Google Scholar] [CrossRef]
- Guo, S.; Yan, Z.; Zhang, K.; Zuo, W.; Zhang, L. Toward Convolutional Blind Denoising of Real Photographs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–17 June 2019; pp. 1712–1722. [Google Scholar] [CrossRef]
- Xu, J.; Zhang, L.; Feng, X.; Zhang, D. Multi-Channel Weighted Nuclear Norm Minimization for Real Color Image Denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 1105–1113. [Google Scholar] [CrossRef]
- Xu, J.; Zhang, L.; Zhang, D. A Trilateral Weighted Sparse Coding Scheme for Real-World Image Denoising. Lect. Notes Comput. Sci. 2018, 11212, 21–38. [Google Scholar] [CrossRef]
- Guo, B.; Song, K.; Dong, H.; Yan, Y.; Tu, Z.; Zhu, L. NERNet: Noise Estimation and Removal Network for Image Denoising. J. Vis. Commun. Image Represent. 2020, 71, 102851. [Google Scholar] [CrossRef]
- Byun, J.; Cha, S.; Moon, T. FBI-Denoiser: Fast Blind Image Denoiser for Poisson-Gaussian Noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 5764–5773. [Google Scholar] [CrossRef]
- Makitalo, M.; Foi, A. Optimal Inversion of the Generalized Anscombe Transformation for Poisson-Gaussian Noise. IEEE Trans. Image Process. 2013, 22, 91–103. [Google Scholar] [CrossRef]
- Liu, X.; Tanaka, M.; Okutomi, M. Practical Signal-Dependent Noise Parameter Estimation from a Single Noisy Image. IEEE Trans. Image Process. 2014, 23, 4361–4371. [Google Scholar] [CrossRef]
- Roth, S.; Black, M.J. Fields of Experts: A Framework for Learning Image Priors. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; pp. 860–867. [Google Scholar] [CrossRef]
- Bychkovsky, V.; Paris, S.; Chan, E.; Durand, F. Learning Photographic Global Tonal Adjustment with a Database of Input/Output Image Pairs. In Proceedings of the CVPR 2011, Colorado Springs, CO, USA, 20–25 June 2011; pp. 97–104. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhu, Y.; Nichols, E.; Wang, Q.; Zhang, S.; Smith, C.; Howard, S. A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–17 June 2019; pp. 11702–11710. [Google Scholar] [CrossRef]
- Wu, X.; Liu, M.; Cao, Y.; Ren, D.; Zuo, W. Unpaired Learning of Deep Image Denoising. Lect. Notes Comput. Sci. 2020, 12349, 352–368. [Google Scholar] [CrossRef]
- Cha, S.; Moon, T. Fully Convolutional Pixel Adaptive Image Denoiser. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 4159–4168. [Google Scholar] [CrossRef]
- Li, X.; Wang, Z.; Fang, H.; Fan, Z.; Li, S.; Huang, Z. Adaptive Image Noise Level Estimation with Chi-Square Distribution on the Flat Patches Selected by Improved PCANet and ResNet101. Optik 2023, 287, 171107. [Google Scholar] [CrossRef]
- Chan, T.-H.; Jia, K.; Gao, S.; Lu, J.; Zeng, Z.; Ma, Y. PCANet: A Simple Deep Learning Baseline for Image Classification? IEEE Trans. Image Process. 2015, 24, 5017–5032. [Google Scholar] [CrossRef]
- Rao, Y.; He, L.; Zhu, J. A Residual Convolutional Neural Network for Pan-Sharpening. In Proceedings of the International Workshop on Remote Sensing with Intelligent Processing (RSIP), Shanghai, China, 18–21 May 2017; pp. 1–4. [Google Scholar] [CrossRef]
- Tian, C.; Xu, Y.; Zuo, W.; Du, B.; Lin, C.-W.; Zhang, D. Designing and Training of a Dual CNN for Image Denoising. Knowl.-Based Syst. 2021, 226, 106949. [Google Scholar] [CrossRef]
- Wu, W.; Ge, A.; Lv, G.; Xia, Y.; Zhang, Y.; Xiong, W. Two-Stage Progressive Residual Dense Attention Network for Image Denoising. arXiv 2024, arXiv:2401.02831. [Google Scholar] [CrossRef]
- Wu, W.; Lv, G.; Duan, Y.; Liang, P.; Zhang, Y.; Xia, Y. Dual Convolutional Neural Network with Attention for Image Blind Denoising. Multimed. Syst. 2024, 30, 263. [Google Scholar] [CrossRef]
- Wu, W.; Liao, S.; Lv, G.; Liang, P.; Zhang, Y. Image Blind Denoising Using Dual Convolutional Neural Network with Skip Connection. Signal Process. Image Commun. 2025, 138, 117365. [Google Scholar] [CrossRef]
- Zhang, K.; Li, Y.; Zuo, W.; Zhang, L.; Van Gool, L.; Timofte, R. Plug-And-Play Image Restoration with Deep Denoiser Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 6360–6376. [Google Scholar] [CrossRef] [PubMed]
- Wischow, M.; Irmisch, P.; Boerner, A.; Gallego, G. Real-Time Noise Source Estimation of a Camera System from an Image and Metadata. Adv. Intell. Syst. 2024, 6, 2300479. [Google Scholar] [CrossRef]
- Huang, Y.; Huang, H. Beyond Image Prior: Embedding Noise Prior into Conditional Denoising Transformer. arXiv 2024, arXiv:2407.09094. [Google Scholar] [CrossRef]
- Huang, J.-B.; Singh, A.; Ahuja, N. Single Image Super-Resolution from Transformed Self-Exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 5197–5208. [Google Scholar] [CrossRef]
- Makitalo, M.; Foi, A. Noise Parameter Mismatch in Variance Stabilization, with an Application to Poisson–Gaussian Noise Estimation. IEEE Trans. Image Process. 2014, 23, 5348–5359. [Google Scholar] [CrossRef]
- Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A General U-Shaped Transformer for Image Restoration. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 17662–17672. [Google Scholar] [CrossRef]
- Guo, H.; Li, J.; Dai, T.; Ouyang, Z.; Ren, X.; Xia, S.-T. MambaIR: A Simple Baseline for Image Restoration with State-Space Model. Lect. Notes Comput. Sci. 2024, 15076, 222–241. [Google Scholar] [CrossRef]
- Tuama, A.; Comby, F.; Chaumont, M. Camera Model Identification with the Use of Deep Convolutional Neural Networks. In Proceedings of the IEEE International Workshop on Information Forensics and Security (WIFS), Abu Dhabi, United Arab Emirates, 4–7 December 2016; pp. 1–6. [Google Scholar] [CrossRef]
- Chen, Y.; Huang, Y.; Ding, X. Camera Model Identification with Residual Neural Network. In Proceedings of the IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 4337–4341. [Google Scholar] [CrossRef]
- Bondi, L.; Baroffio, L.; Guera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. First Steps toward Camera Model Identification with Convolutional Neural Networks. IEEE Signal Process. Lett. 2017, 24, 259–263. [Google Scholar] [CrossRef]
- Huang, N.; He, J.; Zhu, N.; Xuan, X.; Liu, G.; Chang, C. Identification of the Source Camera of Images Based on Convolutional Neural Network. Digit. Investig. 2018, 26, 72–80. [Google Scholar] [CrossRef]
- Yao, H.; Qiao, T.; Xu, M.; Zheng, N. Robust Multi-Classifier for Camera Model Identification Based on Convolution Neural Network. IEEE Access 2018, 6, 24973–24982. [Google Scholar] [CrossRef]
- Marra, F.; Gragnaniello, D.; Verdoliva, L. On the Vulnerability of Deep Learning to Adversarial Attacks for Camera Model Identification. Signal Process. Image Commun. 2018, 65, 240–248. [Google Scholar] [CrossRef]
- Wang, B.; Yin, J.; Tan, S.; Li, Y.; Li, M. Source Camera Model Identification Based on Convolutional Neural Networks with Local Binary Patterns Coding. Signal Process. Image Commun. 2018, 68, 162–168. [Google Scholar] [CrossRef]
- Ding, X.; Chen, Y.; Tang, Z.; Huang, Y. Camera Identification Based on Domain Knowledge-Driven Deep Multi-Task Learning. IEEE Access 2019, 7, 25878–25890. [Google Scholar] [CrossRef]
- Sameer, V.U.; Naskar, R. Deep Siamese Network for Limited Labels Classification in Source Camera Identification. Multimed. Tools Appl. 2020, 79, 28079–28104. [Google Scholar] [CrossRef]
- Freire-Obregón, D.; Narducci, F.; Barra, S.; Castrillón-Santana, M. Deep Learning for Source Camera Identification on Mobile Devices. Pattern Recognit. Lett. 2019, 126, 86–91. [Google Scholar] [CrossRef]
- Bennabhaktula, G.S.; Alegre, E.; Karastoyanova, D.; Azzopardi, G. Camera Model Identification Based on Forensic Traces Extracted from Homogeneous Patches. Expert Syst. Appl. 2022, 206, 117769. [Google Scholar] [CrossRef]
- Bennabhaktula, G.S.; Timmerman, D.; Alegre, E.; Azzopardi, G. Source Camera Device Identification from Videos. SN Comput. Sci. 2022, 3, 316. [Google Scholar] [CrossRef]
- Bharathiraja, S.; Rajesh Kanna, B.; Hariharan, M. A Deep Learning Framework for Image Authentication: An Automatic Source Camera Identification Deep-Net. Arab. J. Sci. Eng. 2022, 48, 1207–1219. [Google Scholar] [CrossRef]
- Huan, S.; Liu, Y.; Yang, Y.; Law, N.-F.B. Camera Model Identification Based on Dual-Path Enhanced ConvNeXt Network and Patches Selected by Uniform Local Binary Pattern. Expert Syst. Appl. 2023, 241, 122501. [Google Scholar] [CrossRef]
- Sychandran, C.S.; Shreelekshmi, R. SCCRNet: A Framework for Source Camera Identification on Digital Images. Neural Comput. Appl. 2023, 36, 1167–1179. [Google Scholar] [CrossRef]
- Liu, Y.-y.; Chen, C.; Lin, H.; Li, Z. A New Camera Model Identification Method Based on Color Correction Features. Multimed. Tools Appl. 2023, 83, 29179–29195. [Google Scholar] [CrossRef]
- Nayerifard, T.; Amintoosi, H.; Ghaemi Bafghi, A. A Robust PRNU-Based Source Camera Attribution with Convolutional Neural Networks. J. Supercomput. 2024, 81, 25. [Google Scholar] [CrossRef]
- Martín-Rodríguez, F.; Isasi-de-Vicente, F.; Fernández-Barciela, M. A Stress Test for Robustness of Photo Response Nonuniformity (Camera Sensor Fingerprint) Identification on Smartphones. Sensors 2023, 23, 3462. [Google Scholar] [CrossRef] [PubMed]
- Bernacki, J. Robustness of Digital Camera Identification with Convolutional Neural Networks. Multimed. Tools Appl. 2021, 80, 29657–29673. [Google Scholar] [CrossRef]
- Manisha; Li, C.-T.; Lin, X.; Kotegar, K.A. Beyond PRNU: Learning Robust Device-Specific Fingerprint for Source Camera Identification. Sensors 2022, 22, 7871. [Google Scholar] [CrossRef]
- Liu, Y.; Xiao, Y.; Tian, H. Plug-And-Play PRNU Enhancement Algorithm with Guided Filtering. Sensors 2024, 24, 7701. [Google Scholar] [CrossRef]
- Yang, P.; Baracchi, D.; Ni, R.; Zhao, Y.; Argenti, F.; Piva, A. A Survey of Deep Learning-Based Source Image Forensics. J. Imaging 2020, 6, 9. [Google Scholar] [CrossRef]
- Tian, N.; Qiu, X.; Pan, Q. An Improved PRNU Noise Extraction Model for Highly Compressed Image Blocks with Low Resolutions. Multimed. Tools Appl. 2024, 83, 66657–66690. [Google Scholar] [CrossRef]
- Liu, Y.; Zou, Z.; Yang, Y.; Law, N.-F.; Bharath, A.A. Efficient Source Camera Identification with Diversity-Enhanced Patch Selection and Deep Residual Prediction. Sensors 2021, 21, 4701. [Google Scholar] [CrossRef]
- Rafi, A.M.; Tonmoy, T.I.; Kamal, U.; Wu, Q.M.J.; Hasan, M.K. RemNet: Remnant Convolutional Neural Network for Camera Model Identification. Neural Comput. Appl. 2020, 33, 3655–3670. [Google Scholar] [CrossRef]
- Hui, C.; Jiang, F.; Liu, S.; Zhao, D. Source Camera Identification with Multi-Scale Feature Fusion Network. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2022. [Google Scholar] [CrossRef]
- Timmerman, D.; Bennabhaktula, S.; Alegre, E.; Azzopardi, G. Video Camera Identification from Sensor Pattern Noise with a Constrained ConvNet. arXiv 2020, arXiv:2012.06277. [Google Scholar] [CrossRef]
- Shullani, D.; Fontani, M.; Iuliani, M.; Shaya, O.A.; Piva, A. VISION: A Video and Image Dataset for Source Identification. EURASIP J. Inf. Secur. 2017, 2017, 15. [Google Scholar] [CrossRef]
- Edwards, T. Discrete Wavelet Transforms: Theory and Implementation; Stanford University: Stanford, CA, USA, 1991. [Google Scholar]
- Zeng, H.; Wan, Y.; Deng, K.; Peng, A. Source Camera Identification with Dual-Tree Complex Wavelet Transform. IEEE Access 2020, 8, 18874–18883. [Google Scholar] [CrossRef]
- Tian, H.; Xiao, Y.; Cao, G.; Zhang, Y.; Xu, Z.; Zhao, Y. Daxing Smartphone Identification Dataset. IEEE Access 2019, 7, 101046–101053. [Google Scholar] [CrossRef]
- Goljan, M.; Fridrich, J.; Filler, T. Large Scale Test of Sensor Fingerprint Camera Identification. In Proceedings of the Media Forensics Security, San Jose, CA, USA, 19 January 2009; Volume 7254. [Google Scholar] [CrossRef]
- Kligvasser, I.; Shaham, T.R.; Michaeli, T. XUnit: Learning a Spatial Activation Function for Efficient Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2433–2542. [Google Scholar] [CrossRef]
- Magistris, G.; Grycuk, R.; Mandelli, L.; Scherer, R. New Approaches Based on PRNU-CNN for Image Camera Source Attribution in Forensic Investigations. In Proceedings of the SYSTEM 2024: 10th Scholar’s Yearly Symposium of Technology, Engineering and Mathematics, Rome, Italy, 2–5 December 2024; pp. 67–72. [Google Scholar]
- Torres, L.; Barrios, C.; Denneulin, Y. Computational Resource Consumption in Convolutional Neural Network Training—A Focus on Memory. Supercomput. Front. Innov. 2021, 8, 45–61. [Google Scholar] [CrossRef]
- de Roos, L.; Geradts, Z. Factors That Influence PRNU-Based Camera-Identification via Videos. J. Imaging 2021, 7, 8. [Google Scholar] [CrossRef]
- Tripathi, M. Facial Image Noise Classification and Denoising Using Neural Network. Sustain. Eng. Innov. 2021, 3, 102–111. [Google Scholar] [CrossRef]
- Mudhafar, R.; Abbadi, N. Image Noise Detection and Classification Based on Combination of Deep Wavelet and Machine Learning. Al-Salam J. Eng. Technol. 2023, 3, 23–36. [Google Scholar] [CrossRef]
- Kavitha, G.; Prakash, C.; Alhomrani, M.; Pradeep, N.; Alamri, A.S.; Pareek, P.K.; Alhassan, M. Noise Estimation and Type Identification in Natural Scene and Medical Images Using Deep Learning Approaches. Contrast Media Mol. Imaging 2023, 2023, 1–15. [Google Scholar] [CrossRef]
- Martin-Rodriguez, F.; Garcia-Mojon, R.; Fernandez-Barciela, M. Detection of AI-Created Images Using Pixel-Wise Feature Extraction and Convolutional Neural Networks. Sensors 2023, 23, 9037. [Google Scholar] [CrossRef]
- Kingra, S.; Aggarwal, N.; Kaur, N. SiamNet: Exploiting Source Camera Noise Discrepancies Using Siamese Network for Deepfake Detection. Inf. Sci. 2023, 645, 119341. [Google Scholar] [CrossRef]
Parameters | Canon EOS M100 | PixeLink PL-B781F | Retiga R6
--- | --- | --- | ---
Pixel size, μm | 3.7 | 3.5 | 4.6
Full resolution, MP | 24 | 6.6 | 5.9
Sensor | CMOS | CMOS | CCD
Type | Consumer | Machine vision | Microscopy
Bit depth, bit | 13.5 | 10 | 14
σ_dt, DN | 2.479 ± 0.004 | 0.351 ± 0.004 | 4.46 ± 0.01
K, DN/e⁻ | 0.781 ± 0.006 | 0.093 ± 0.005 | 0.84 ± 0.03
DSNU, DN | 0.191 ± 0.005 | 0.66 ± 0.02 | 0.50 ± 0.01
PRNU, relative units | 0.0092 ± 0.0003 | 0.0075 ± 0.0002 | 0.0033 ± 0.0001
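For context on how parameters such as σ_dt, K, DSNU, and PRNU in the table above are typically measured, the following sketch shows simplified photon-transfer-style estimates in the spirit of EMVA 1288 [8]. It is our illustration only; real characterization averages many frames over several exposure levels.

```python
# Simplified photon-transfer-style estimates (our sketch; EMVA 1288 measurements
# average many frames at multiple exposure levels).
import numpy as np

def temporal_noise(f1, f2):
    """Temporal noise std (DN) from two frames of the same scene: Var = 0.5 * Var(f1 - f2)."""
    d = f1.astype(np.float64) - f2.astype(np.float64)
    return np.sqrt(0.5 * d.var())

def conversion_gain(bright1, bright2, dark1, dark2):
    """K (DN/e-) from the photon-transfer relation (var_bright - var_dark) / (mean_bright - mean_dark)."""
    var_b = 0.5 * (bright1.astype(np.float64) - bright2.astype(np.float64)).var()
    var_d = 0.5 * (dark1.astype(np.float64) - dark2.astype(np.float64)).var()
    return (var_b - var_d) / (bright1.mean() - dark1.mean())

def dsnu_prnu(dark_stack, bright_stack):
    """DSNU (DN) and PRNU (relative) from spatial variation of temporally averaged frames."""
    dark_avg = dark_stack.mean(axis=0)        # temporal averaging suppresses temporal noise
    bright_avg = bright_stack.mean(axis=0)
    dsnu = dark_avg.std()
    prnu = (bright_avg - dark_avg).std() / (bright_avg.mean() - dark_avg.mean())
    return dsnu, prnu

# Tiny self-check on synthetic frames (true K = 0.8 DN/e-)
rng = np.random.default_rng(6)
dark = [rng.normal(20, 2, (256, 256)) for _ in range(2)]
bright = [0.8 * rng.poisson(500, (256, 256)) + rng.normal(20, 2, (256, 256)) for _ in range(2)]
print(f"K ≈ {conversion_gain(bright[0], bright[1], dark[0], dark[1]):.3f} DN/e-")
```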
Architecture | Numerical Results | Training Conditions and Datasets | Notes
--- | --- | --- | ---
NoiseFlow [56] | NLL: −3.521 nats/pixel; KL: 0.008 | SIDD [47] with ~30,000 raw–RGB image pairs, 5 smartphone cameras, ISO 50–10,000 | Conditional normalizing flow for complex signal-dependent noise modeling, <2500 parameters
CANGAN [57] | KL: 0.00159; DnCNN [75]: PSNR: 48.71 dB, SSIM: 0.992 | SIDD [47], ~24,000 raw–RGB image pairs | U-Net [70]-based noise generator + camera encoding network for camera-specific noise
ResNet-based frameworks [67,74] | KL: 0.0211; DnCNN [75]/U-Net [70]: PSNR: 50.13/51.40 dB, SSIM: 0.9891/– | SIDD [47], SID [49], Canon EOS 5D4, Nikon D850, Sony RX100VI, HUAWEI P40 Pro | Contrastive learning for fine-grained noise parameter estimation with 4-tuple model
C2N [59] | KL: 0.1638; DnCNN [75]/DIDN [79]: PSNR: 33.76/35.35 dB, SSIM: 0.901/0.937 | SIDD [47], DND [48], unpaired clean and noisy images | Unsupervised GAN for noise modeling without paired data
Noise2NoiseFlow [60] | NLL: −3.501 nats/dim; KL: 0.0265; DnCNN [75]: PSNR: 52.80 dB, SSIM: 0.984 | SIDD [47], ~500,000 patches 32 × 32, ISO 100–3200 | Combines Noise2Noise [61] with normalizing flow, eliminates need for clean ground truth
DCD-Net [81] | PSNR: up to 51.40 dB, SSIM: up to 0.992 | Kodak, BSD300 [51], Set14 [82], SIDD [47] raw–RGB validation | Iterative "denoise–corrupt–denoise" training on noisy images only, denoising enhancement
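The KL values reported in this table are typically histogram-based Kullback–Leibler divergences between real and synthesized noise samples. A minimal sketch follows (ours; the bin settings and the Gaussian stand-in data are illustrative).

```python
# Sketch of a histogram-based KL divergence between measured and synthesized noise.
import numpy as np

def kl_divergence(real_noise, fake_noise, bins=256, value_range=(-64, 64), eps=1e-12):
    p, edges = np.histogram(real_noise, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(fake_noise, bins=edges, density=True)
    p = p / (p.sum() + eps) + eps   # normalize to probabilities and avoid log(0)
    q = q / (q.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(2)
real = rng.normal(0, 5, 100_000)       # stand-in for measured sensor noise
fake = rng.normal(0, 5.5, 100_000)     # stand-in for synthesized noise
print(f"KL(real || fake) = {kl_divergence(real, fake):.4f}")
```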
Architecture | Numerical Results | Training Conditions and Datasets | Notes
--- | --- | --- | ---
DRNE [88] | FFDNet [89]: PSNR: 33.68 dB; average noise-estimation error: up to 0.32 dB | Kodak, McMaster [52], BSD500 [53], synthetic Gaussian noise | 16-layer CNN for pixelwise noise variance mapping, signal-dependent noise estimation
FADNet [90] | PSNR: 41.36 dB | Nam [54], SIDD [47], 1200 random 512 × 512 patches | Frequency-domain attention mechanism with encoder–decoder, 22 M parameters, ~150 GFLOPs
GAN-based denoiser [94] | PSNR: 39.29 dB, SSIM: 0.915 | SIDD [47] | Three-component design: generator + dual-path U-Net [70] denoiser + discriminator, 15.6 M parameters, 68.9 GFLOPs
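The PSNR and SSIM figures quoted in these tables are standard full-reference metrics. A short sketch of how they are typically computed is given below (ours; it assumes a recent scikit-image with `channel_axis` support and uses random placeholder images).

```python
# Sketch of PSNR/SSIM evaluation for a denoiser output against a clean reference.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(clean, denoised, data_range=255.0):
    psnr = peak_signal_noise_ratio(clean, denoised, data_range=data_range)
    ssim = structural_similarity(clean, denoised, data_range=data_range,
                                 channel_axis=-1 if clean.ndim == 3 else None)
    return psnr, ssim

rng = np.random.default_rng(3)
clean = rng.integers(0, 256, (128, 128, 3)).astype(np.float64)
denoised = np.clip(clean + rng.normal(0, 10, clean.shape), 0, 255)   # stand-in for a denoiser output
print("PSNR: %.2f dB, SSIM: %.4f" % evaluate(clean, denoised))
```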
Architecture | Numerical Results | Training Conditions and Datasets | Notes
--- | --- | --- | ---
CBDNet [105] | PSNR: up to 41.31 dB, SSIM: 0.9421 | DND [48], Nam [54], synthetic + real-world data | Two subnetworks: 5-layer noise estimation + 16-layer U-Net [70] denoising
NERNet [108] | PSNR: up to 40.10 dB, SSIM: 0.942 | SIDD [47], Nam [54], BSD68 [112] | Enhanced CBDNet [105] with pyramid feature fusion and attention mechanisms
FBI-Denoiser [109] | PSNR: up to 48.02 dB, SSIM: up to 0.9797; 1560× speedup in estimation | BSD68 [112], FiveK [113], FMD [114], SIDD [47], DND [48] | GAT [110] preprocessing, 0.21 s inference time, 340 K parameters
PCANet + ResNet101 [117] | Mean estimation error: 0.22, patch selection accuracy: 92% | 100 images of BSD300 [51], 1M+ training patches | Global statistical noise estimation with chi-square distribution
Metadata-enhanced model [125] | RMS errors: from 0.09 to 0.47 DN, PSNR: up to 43.74 dB | Sony ICX285 CCD, EV76C661 CMOS, synthetic data | DRNE-based [88] with EXIF metadata integration, 1.3 ms inference time
Condformer + LoNPE [126] | RMS error: up to 0.023, 300× speedup, PSNR improvement: 0.34 dB | Urban100 [127] with synthetic Poisson-Gaussian noise | Transformer-based with noise prior embedding, 27 M parameters, 565 GFLOPs
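As background for the GAT [110] preprocessing used by FBI-Denoiser [109] in the table above: the forward generalized Anscombe transformation maps Poisson–Gaussian noise with gain α and Gaussian standard deviation σ to approximately unit-variance Gaussian noise. A minimal sketch of the forward transform (ours; the exact unbiased inverse of [110] is not shown):

```python
# Forward generalized Anscombe transformation (GAT) for Poisson-Gaussian data:
# z = alpha * Poisson(y) + N(mu, sigma^2)  ->  approximately N(., 1) after f(z).
import numpy as np

def gat_forward(z, alpha, sigma, mu=0.0):
    """f(z) = (2/alpha) * sqrt(alpha*z + 3/8*alpha^2 + sigma^2 - alpha*mu)."""
    arg = alpha * z + 0.375 * alpha**2 + sigma**2 - alpha * mu
    return (2.0 / alpha) * np.sqrt(np.maximum(arg, 0.0))

# Illustrative use with assumed parameters alpha = 0.8, sigma = 3.0
rng = np.random.default_rng(5)
clean = rng.uniform(5, 200, 10_000)
noisy = 0.8 * rng.poisson(clean) + rng.normal(0, 3.0, clean.size)
stabilized = gat_forward(noisy, alpha=0.8, sigma=3.0)
```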
Architecture | Numerical Results | Training Conditions and Datasets | Notes
--- | --- | --- | ---
Constrained-Net [157] | Video classification accuracy: 66.5% overall, 89.1% on flat scenes | VISION [158], 1539 videos, 28 camera devices, >100 K training frames | Extended constrained convolutional layer for video PRNU extraction
CNN adaptations [84] | PCE: up to 16.5 (FFDNet [89]) | DID [46], 40 cameras, 11 models, 128 × 128 and 64 × 64 patches | Adaptation of denoising CNNs for PRNU extraction with correlation loss
DHDN [100] | Kappa improvement: at least 0.0473 | DID [46] (74 cameras), DSD [161] (90 devices), 100 images per device | Modified U-Net [70] with dense connectivity for sensor noise isolation
ResNet-based extractor [143] | Classification accuracy: 92.41% | VISION [158], 2194 patches 256 × 256, 10 devices | U-Net [70] denoising with ResNet [67] residual noise extraction for PRNU fingerprints
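The PCE values in the table above quantify how strongly a noise residual correlates with a camera's PRNU fingerprint. The sketch below is a simplified illustration (ours): it uses a plain circular cross-correlation via FFT, whereas production implementations normalize the correlation and handle cropping, scaling, and local-mean subtraction.

```python
# Simplified peak-to-correlation energy (PCE) between a noise residual and a fingerprint.
import numpy as np

def pce(residual, fingerprint, exclude_radius=5):
    """Cross-correlate via FFT; return peak^2 / mean energy outside the peak neighborhood."""
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    xcorr = np.real(np.fft.ifft2(np.fft.fft2(r) * np.conj(np.fft.fft2(f))))
    peak_idx = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    peak = xcorr[peak_idx]
    mask = np.ones_like(xcorr, dtype=bool)
    y, x = peak_idx
    mask[max(0, y - exclude_radius):y + exclude_radius + 1,
         max(0, x - exclude_radius):x + exclude_radius + 1] = False
    return float(peak**2 / np.mean(xcorr[mask] ** 2))

# Illustrative use: a residual weakly containing the same fingerprint scores high.
rng = np.random.default_rng(4)
fp = rng.normal(0, 1, (256, 256))                 # stand-in camera fingerprint
res = 0.05 * fp + rng.normal(0, 1, fp.shape)      # residual from the same "camera"
print(f"PCE = {pce(res, fp):.1f}")
```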
Architecture | Features | Description
--- | --- | ---
NOISE SUPPRESSION | |
DnCNN [75] | Supervised residual noise suppression | Residual convolutional neural network trained on noisy/clean pairs to predict the noise map, which is subtracted from the noisy input
DRNE [88] | Pixel-based noise mapping | Noise estimation network capable of utilizing metadata and additional information
CBDNet [105], NERNet [108], FBI-Denoiser [109], DCANet [122], DCBDNet [123] | Two-network structure: noise estimator and noise suppressor | Separating noise estimation from suppression allows each stage to be improved independently
Other neural networks for noise suppression [94,96,97,98,99,100,101,102,103,104] | Convolutional (U-Net-based, etc.) or generative-adversarial networks | Can be used for approximate PRNU extraction
SOURCE CAMERA IDENTIFICATION | |
U-Net modifications [70] | Feature extraction | U-Net's high-spatial-resolution feature extraction is adapted for approximate PRNU extraction
DnCNN [75], FFDNet [89], ADNet [85], DANet [86] | Adapted noise suppression networks | Can be adapted for approximate PRNU extraction
Constrained-Net [157] | Works with video data | Requires a large amount of video data (over 100,000 frames) for training
SYNTHETIC DATASET GENERATION | |
NoiseFlow [56], CANGAN [57], C2N [59] | Noise synthesizers | Models based on generative-adversarial networks or normalizing flows
Noise2NoiseFlow [60], DCD-Net [81] | Noise suppression with a noise synthesizer | Improves noise suppression by utilizing a noise synthesizer or by training on noisy data only
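To make the residual-learning pattern behind DnCNN-like denoisers [75] concrete, here is a hedged PyTorch sketch; the layer count and channel width are illustrative rather than the exact published configuration, and training (not shown) would minimize, for example, the MSE between the network output and the clean reference.

```python
# Hedged sketch of a DnCNN-style residual denoiser: the body predicts the noise map,
# which is subtracted from the noisy input. Depth/width are illustrative.
import torch
import torch.nn as nn

class ResidualDenoiser(nn.Module):
    def __init__(self, channels=1, features=64, depth=17):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1, bias=False),
                       nn.BatchNorm2d(features),
                       nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(features, channels, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, noisy):
        return noisy - self.body(noisy)   # residual learning: subtract predicted noise

# Illustrative forward pass on a random single-channel patch.
model = ResidualDenoiser()
out = model(torch.randn(1, 1, 64, 64))
```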
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).