Recent Advances in Deep Learning-Based Source Camera Identification and Device Linking
Abstract
1. Introduction
- Camera brand identification: identifying the brand of the camera that captured the photograph.
- Camera model identification: identifying the camera model that captured the photograph.
- Camera device identification: identifying the exact device that took the photograph.
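Device-level identification classically rests on the PRNU sensor fingerprint: denoising residuals of many images from one camera are averaged into a fingerprint, and a query image's residual is matched against it, typically via normalized correlation or PCE. The sketch below is a minimal illustration of that idea only; it uses a simple box blur as a stand-in denoiser, whereas the surveyed methods use wavelet-domain denoisers and PCE-based detection.

```python
import numpy as np

def box_blur(img, k=3):
    # Simple k x k mean filter used as a stand-in denoiser;
    # PRNU pipelines typically use a wavelet-domain denoiser.
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def noise_residual(img):
    # Residual = image minus its denoised version; this is where
    # the sensor's PRNU pattern survives.
    return img - box_blur(img)

def estimate_fingerprint(images):
    # Averaging residuals over many images from one device suppresses
    # scene content and leaves an estimate of the device fingerprint.
    return np.mean([noise_residual(im) for im in images], axis=0)

def ncc(a, b):
    # Normalized cross-correlation between a query residual and a
    # candidate fingerprint; practical systems often use PCE instead,
    # as it is more robust to periodic interpolation artifacts.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

A query residual should correlate noticeably more with the fingerprint of the device that captured it than with a fingerprint estimated from a different device.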
- Ref. [4] surveyed works published before 2021 on source identification using noise patterns in machine learning-based systems.
- Ref. [5] focused on PRNU and related techniques such as lens radial distortion, color filter array interpolation, and auto-white balance approximation.
- Ref. [6] reviewed PRNU, statistical methods, and deep learning methods in classification settings. Although it mentions model- and device-level identification, it lacks performance and comparative analysis.
- Ref. [7] explored PRNU, CNN-based, feature-based, and metadata-based methods, questioning whether PRNU remains the gold standard in modern imaging pipelines.
- Ref. [8] provided an overview of camera noise types and neural network-based noise estimation techniques. However, it lacks a performance analysis of identification methods.
2. Sensor Noise-Based Methods for Source Camera Identification and Device Linking
3. Deep Learning Approaches for Source Camera Identification
| Year | Techniques | References |
|---|---|---|
| 2016 | Pre-processing | Highpass [30] |
| 2017 | Pre-processing | Median filter [31] |
|  | Pre-processing | Adaptive conv layer [32] |
|  | CNN feature extraction | With SVM [3] |
|  | Residual network | Content-adaptive fusion [36] |
| 2019 | Residual network | Domain knowledge [37] |
| 2020 | Pre-processing | Edge map [33] |
| 2021 | Pre-processing | Data-driven [34] |
|  | Multi-scale | Residual prediction [42] |
|  | Multi-scale | Multiple-scale filters [43] |
| 2022 | U-Net | Hierarchical [39] |
|  | Multi-scale | Multiscale encoder-decoder [44] |
| 2023 | CNN feature extraction | Wavelet with LBP [29] |
|  | U-Net | Residual-noise extraction [40] |
| 2024 | Residual network | Conv with residual [38] |
|  | U-Net | Multi-scale with transformer [41] |
|  | Dual-path | ConvNeXt [45] |
|  | Dual-path | Multiscale feature fusion [46] |
|  | Dual-path | High- and low-pass fusion [47] |
| 2025 | Pre-processing | Angular and radial feature extraction [35] |
|  | Dual-path | Contrastive learning [48] |
|  | Dual-path | Multiscale with wavelet [49] |
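Several of the pre-processing entries above (e.g., the highpass approach of [30]) suppress scene content before CNN feature extraction by filtering each image patch with a fixed high-pass kernel. The sketch below illustrates the idea; the specific 3×3 kernel shown is a common residual filter in this literature, not necessarily the exact one used in any single cited paper.

```python
import numpy as np

# A 3x3 high-pass kernel of the kind used to suppress scene content
# before CNN feature extraction; exact kernels differ between papers.
HP = np.array([[-1.0,  2.0, -1.0],
               [ 2.0, -4.0,  2.0],
               [-1.0,  2.0, -1.0]]) / 4.0

def highpass_residual(img):
    # 'Valid' 2-D correlation of the image with the kernel: each output
    # pixel is the kernel-weighted sum of its 3x3 neighborhood.
    H, W = img.shape
    out = np.empty((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * HP)
    return out
```

Because the kernel's row and column sums are zero, flat regions and smooth gradients map to (near-)zero, while fine-grained noise and texture, where the camera-specific traces live, pass through.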
4. Deep Learning Approaches for Device Linking
- Few-shot learning, particularly one-shot learning.
- Contrastive learning, which compares image pairs.
| Year | Techniques | References |
|---|---|---|
| 2019 | Contrastive learning | Siamese network [53] |
|  | Contrastive learning | Forensic similarity [54] |
| 2020 | Contrastive learning | Multi-layer with Siamese [55] |
| 2022 | Few-shot learning | Global fuzzification with information diffusion [50] |
|  | Few-shot learning | Ensemble strategy [51] |
| 2023 | Few-shot learning | Coordinate pseudo-label selection [52] |
| 2025 | Contrastive learning | SiamNet with contextual information aggregation [56] |
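The pairwise idea behind the contrastive entries above can be sketched minimally: two inputs pass through branches with shared weights, and a Hadsell-style contrastive loss pulls same-device pairs together while pushing different-device pairs at least a margin apart. The linear `embed` map below is a toy stand-in for the deep CNN branches used in [53,54,55,56].

```python
import numpy as np

def embed(x, W):
    # Both inputs pass through the SAME weights W: the two Siamese
    # branches share parameters by construction.
    return W @ x

def contrastive_loss(x1, x2, same, W, margin=1.0):
    # Hadsell-style contrastive objective on embedding distance d:
    # same-device pairs are pulled together (loss d^2), while
    # different-device pairs are pushed at least `margin` apart.
    d = np.linalg.norm(embed(x1, W) - embed(x2, W))
    if same:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Training minimizes this loss over many labeled pairs; at test time, two images are linked to the same device when their embedding distance falls below a threshold.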
5. Performance Comparison and Benchmarking
5.1. Dataset
5.2. Performance of Deep Learning Methods in Source Camera Identification
5.3. Performance of Deep Learning Methods in Device Linking
6. Discussions and Challenges
6.1. Device-Level Performance
6.2. Cross-Dataset Validation
6.3. Computational Costs
6.4. Modern Imaging Pipeline
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Definition |
|---|---|
| CNN | Convolutional neural network |
| DL | Deep learning |
| PCE | Peak correlation energy |
| PRNU | Photo-response non-uniformity |
| SVM | Support vector machine |
References
1. Lukáš, J.; Fridrich, J.; Goljan, M. Detecting digital image forgeries using sensor pattern noise. In Proc. SPIE 6072, Security, Steganography, and Watermarking of Multimedia Contents; SPIE Press: Bellingham, WA, USA, 2006; pp. 362–372.
2. Goljan, M.; Fridrich, J.; Filler, T. Large scale test of sensor fingerprint camera identification. In Proc. SPIE 7254, Media Forensics and Security; SPIE Press: Bellingham, WA, USA, 2009; p. 725401.
3. Bondi, L.; Baroffio, L.; Güera, D.; Bestagini, P.; Delp, E.J.; Tubaro, S. First steps toward camera model identification with convolutional neural networks. IEEE Signal Process. Lett. 2017, 24, 259–263.
4. Gouda, O.; Bouridane, A.; Talib, M.A.; Nasir, Q. Machine learning-based methods in source camera identification: A systematic review. In Proceedings of the International Conference on Business Analytics for Technology and Security, Dubai, United Arab Emirates, 16–17 February 2022; pp. 1–7.
5. Nwokeji, C.E.; Sheikh-Akbari, A.; Gorbenko, A.; Mporas, I. Source camera identification techniques: A survey. J. Imaging 2024, 10, 31.
6. Bernacki, J.; Scherer, R. Algorithms and methods for individual source camera identification: A survey. Sensors 2025, 25, 3027.
7. Klier, S.; Baier, H. Source camera identification—Do we have a good standard? Forensic Sci. Int. Digit. Investig. 2025, 52, 301858.
8. Volkov, A.A.; Kozlov, A.V.; Cheremkhin, P.A.; Rymov, D.A.; Shifrina, A.V.; Starikov, R.S.; Nebavskiy, V.A.; Petrova, E.K.; Zlokazov, E.Y.; Rodin, V.G. A review of neural network-based image noise processing methods. Sensors 2025, 25, 6088.
9. Chen, M.; Fridrich, J.; Goljan, M.; Lukas, J. Determining image origin and integrity using sensor noise. IEEE Trans. Inf. Forensics Secur. 2008, 3, 74–90.
10. Salazar, D.A.; Ramirez-Rodriguez, A.E.; Nakano, M.; Cedillo-Hernandez, M.; Perez-Meana, H. Evaluation of denoising algorithms for source camera linking. In Proceedings of the 13th Mexican Conference on Pattern Recognition, Mexico City, Mexico, 23–26 June 2021; pp. 282–291.
11. Kozlov, A.V.; Nikitin, N.V.; Rodin, V.G.; Cheremkhin, P.A. Improving the reliability of digital camera identification by optimizing the algorithm for comparing noise signatures. Meas. Tech. 2024, 66, 923–934.
12. Goljan, M.; Chen, M.; Fridrich, J. Identifying common source digital camera from image pairs. In Proceedings of the IEEE International Conference on Image Processing, San Antonio, TX, USA, 16–19 September 2007; pp. VI-125–VI-128.
13. Fridrich, J. Sensor defects in digital image forensic. In Digital Image Forensics: There Is More to a Picture Than Meets the Eye; Sencar, H.T., Memon, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 179–218.
14. Mieremet, A. Camera-identification and common-source identification: The correlation values of mismatches. Forensic Sci. Int. 2019, 301, 46–54.
15. Li, C.-T. Source camera linking using enhanced sensor pattern noise extracted from images. In Proceedings of the International Conference on Imaging for Crime Detection and Prevention, London, UK, 3 December 2009.
16. Kang, X.; Li, Y.; Qu, Z.; Huang, J. Enhancing source camera identification performance with a camera reference phase sensor pattern noise. IEEE Trans. Inf. Forensics Secur. 2012, 7, 393–402.
17. Chan, L.H.; Law, N.F.; Siu, W.C. A confidence map and pixel-based weighted correlation for PRNU-based camera identification. Digit. Investig. 2013, 10, 215–225.
18. Shi, C.; Law, N.F.; Leung, H.; Siu, W.C. A local variance based approach to alleviate the scene content interference for source camera identification. Digit. Investig. 2017, 22, 74–87.
19. Liu, Y.; Xiao, Y.; Tian, H. Plug-and-play PRNU enhancement algorithm with guided filtering. Sensors 2024, 24, 7701.
20. Ramirez-Rodriguez, A.E.; Nakano, M.; Perez-Meana, H. Source camera linking algorithm based on the analysis of plain image zones. Eng. Proc. 2024, 60, 17.
21. Martin, A.; Newman, J. Significance of image brightness levels for PRNU camera identification. J. Forensic Sci. 2025, 70, 132–149.
22. Bayram, S.; Sencar, T.; Memon, N.D. Seam-carving based anonymization against image & video source attribution. In Proceedings of the IEEE International Workshop on Multimedia Signal Processing, Pula, Italy, 30 September–2 October 2013; pp. 272–277.
23. Dirik, A.E.; Sencar, H.T.; Memon, N. Analysis of seam-carving-based anonymization of images against PRNU noise pattern-based source attribution. IEEE Trans. Inf. Forensics Secur. 2014, 9, 2277–2290.
24. Martín-Rodríguez, F.; Isasi-de-Vicente, F.; Fernández-Barciela, M. A stress test for robustness of photo response nonuniformity (camera sensor fingerprint) identification on smartphones. Sensors 2023, 23, 3462.
25. Taspinar, S.; Manoranjan, M.; Memon, N. PRNU-based camera attribution from multiple seam-carved images. IEEE Trans. Inf. Forensics Secur. 2017, 12, 3065–3080.
26. Irshad, M.; Law, N.F.; Loo, K.H.; Haider, S. IMGCAT: An approach to dismantle the anonymity of a source camera using correlative features and an integrated 1D convolutional neural network. Array 2023, 18, 100279.
27. Li, J.; Zhang, X.; Ma, B.; Qin, C.; Wang, C. Reversible PRNU anonymity for device privacy protection based on data hiding. Expert Syst. Appl. 2023, 234, 121017.
28. Wang, C.; Zhang, Q.; Wang, X.; Zhou, L.; Li, Q.; Zia, Z.; Ma, B.; Shi, Y.Q. Light-field image multiple reversible robust watermarking against geometric attacks. IEEE Trans. Dependable Secur. Comput. 2025, 22, 5861–5875.
29. Jaiswal, A.K.; Srivastava, R. Source camera identification using hybrid feature set and machine learning classifiers. In Role of Data-Intensive Distributed Computing Systems in Designing Data Solutions; Springer: Cham, Switzerland, 2023; pp. 111–127.
30. Tuama, A.; Comby, F.; Chaumont, M. Camera model identification with the use of deep convolutional neural networks. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Abu Dhabi, United Arab Emirates, 4–7 December 2016; pp. 1–6.
31. Chen, Y.; Huang, Y.; Ding, X. Camera model identification with residual neural network. In Proceedings of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 4337–4341.
32. Bayar, B.; Stamm, M.C. Augmented convolutional feature maps for robust CNN-based camera model identification. In Proceedings of the IEEE International Conference on Image Processing, Beijing, China, 17–20 September 2017; pp. 4098–4102.
33. Kang, C.; Kang, S. Camera model identification using a deep network and a reduced edge dataset. Neural Comput. Appl. 2020, 32, 13139–13146.
34. Rafi, A.M.; Tonmoy, T.I.; Kamal, U.; Wu, Q.J.; Hasan, M.K. RemNet: Remnant convolutional neural network for camera model identification. Neural Comput. Appl. 2021, 33, 3655–3670.
35. Elharrouss, O.; Akbari, Y.; Almadeed, N.; Al-Maadeed, S.; Khelifi, F.; Bouridane, A. PDC-ViT: Source camera identification using pixel difference convolution and vision transformer. Neural Comput. Appl. 2025, 37, 6933–6949.
36. Yang, P.; Ni, R.; Zhao, Y.; Zhao, W. Source camera identification based on content-adaptive fusion residual networks. Pattern Recognit. Lett. 2019, 119, 195–204.
37. Ding, X.; Chen, Y.; Tang, Z.; Huang, Y. Camera identification based on domain knowledge-driven deep multi-task learning. IEEE Access 2019, 7, 25878–25890.
38. Sychandran, C.; Shreelekshmi, R. SCCRNet: A framework for source camera identification on digital images. Neural Comput. Appl. 2024, 36, 1167–1179.
39. Xiao, Y.; Tian, H.; Cao, G.; Yang, D.; Li, H. Effective PRNU extraction via densely connected hierarchical network. Multimed. Tools Appl. 2022, 81, 20443–20463.
40. Bharathiraja, S.; Rajesh Kanna, B.; Hariharan, M. A deep learning framework for image authentication: An automatic source camera identification Deep-Net. Arab. J. Sci. Eng. 2023, 48, 1207–1219.
41. Lu, J.; Li, C.; Huang, X.; Cui, C.; Emam, M. Source camera identification algorithm based on multi-scale feature fusion. Comput. Mater. Contin. 2024, 80, 3047–3065.
42. Liu, Y.; Zhou, Z.; Yang, Y.; Law, N.F.B.; Bharath, A.A. Efficient source camera identification with diversity-enhanced patch selection and deep residual prediction. Sensors 2021, 21, 4701.
43. You, C.; Zheng, H.; Guo, Z.; Wang, T.; Wu, X. Multiscale content-independent feature fusion network for source camera identification. Appl. Sci. 2021, 11, 6752.
44. Hui, C.; Jiang, F.; Liu, S.; Zhao, D. Source camera identification with multi-scale feature fusion network. In Proceedings of the IEEE International Conference on Multimedia and Expo, Taipei, Taiwan, 18–22 July 2022; pp. 1–6.
45. Huan, S.; Liu, Y.; Yang, Y.; Law, N.F. Camera model identification based on dual-path enhanced ConvNeXt network and patches selected by uniform local binary pattern. Expert Syst. Appl. 2024, 241, 122501.
46. Zheng, H.; You, C.; Wang, T.; Ju, J.; Li, X. Source camera identification based on an adaptive dual-branch fusion residual network. Multimed. Tools Appl. 2024, 83, 18479–18495.
47. Rana, K.; Goyal, P.; Sharma, G. Dual-branch convolutional neural network for robust camera model identification. Expert Syst. Appl. 2024, 238, 121828.
48. Han, Z.; Yang, Y.; Zhang, J.; Li, Y.; Liu, Y.; Law, N.F. A contrastive learning-based heterogeneous dual-branch network for source camera identification. Neurocomputing 2025, 645, 130406.
49. Han, Z.; Yang, Y.; Zhang, J.; Liu, Y.; Law, N.F. DWT-RFNet: A wavelet-based deep learning method for robust source camera identification. Appl. Soft Comput. 2025, 185, 114027.
50. Wang, B.; Wu, S.; Wei, F.; Wang, Y.; Hou, J.; Sui, X. Virtual sample generation for few-shot source camera identification. J. Inf. Secur. Appl. 2022, 66, 103153.
51. Wang, B.; Hou, J.; Ma, Y.; Wang, F.; Wei, F. Multi-DS strategy for source camera identification in few-shot sample data sets. Secur. Commun. Netw. 2022, 2022, 8716884.
52. Wang, B.; Hou, J.; Wei, F.; Yu, F.; Zheng, W. MDM-CPS: A few-shot sample approach for source camera identification. Expert Syst. Appl. 2023, 229, 120315.
53. Cozzolino, D.; Verdoliva, L. Noiseprint: A CNN-based camera model fingerprint. IEEE Trans. Inf. Forensics Secur. 2019, 15, 144–159.
54. Mayer, O.; Stamm, M.C. Forensic similarity for digital images. IEEE Trans. Inf. Forensics Secur. 2019, 15, 1331–1346.
55. Sameer, V.U.; Naskar, R. Deep Siamese network for limited labels classification in source camera identification. Multimed. Tools Appl. 2020, 79, 28079–28104.
56. Zheng, M.; Law, N.F.; Siu, W.C. Unveiling image source: Instance-level camera device linking via context-aware deep Siamese network. Expert Syst. Appl. 2025, 262, 125617.
57. Gloe, T.; Böhme, R. The Dresden image database for benchmarking digital image forensics. In Proceedings of the 2010 ACM Symposium on Applied Computing, New York, NY, USA, 22 March 2010; pp. 1584–1590.
58. De Marsico, M.; Nappi, M.; Riccio, D.; Wechsler, H. Mobile iris challenge evaluation (MICHE)-I, biometric iris dataset and protocols. Pattern Recognit. Lett. 2015, 57, 17–23.
59. Shaya, O.; Yang, P.; Ni, R.; Zhao, Y.; Piva, A. A new dataset for source identification of high dynamic range images. Sensors 2018, 18, 3801.
60. Bernacki, J.; Scherer, R. IMAGINE dataset: Digital camera identification image benchmarking dataset. In Proceedings of the 20th International Conference on Security and Cryptography (SECRYPT), Rome, Italy, 10–12 July 2023; pp. 799–804.
61. Galdi, C.; Hartung, F.; Dugelay, J.L. SOCRatES: A database of realistic data for source camera recognition on smartphones. In Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods (ICPRAM), Prague, Czech Republic, 19–21 February 2019; pp. 648–655.
62. Tian, H.; Xiao, Y.; Cao, G.; Zhang, Y.; Xu, Z.; Zhao, Y. Daxing smartphone identification dataset. IEEE Access 2019, 7, 101046–101053.
63. Shullani, D.; Fontani, M.; Iuliani, M.; Shaya, O.A.; Piva, A. VISION: A video and image dataset for source identification. EURASIP J. Inf. Secur. 2017, 1, 15.
64. Hadwiger, B.; Riess, C. The Forchheim image database for camera identification in the Wild. In Pattern Recognition. ICPR International Workshops and Challenges; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; Volume 12666, pp. 500–515.
65. Costa, F.O.; Silva, E.; Eckmann, M.; Scheirer, W.J.; Rocha, A. Open set source camera attribution and device linking. Pattern Recognit. Lett. 2014, 39, 92–101.
66. Iuliani, M.; Fontani, M.; Piva, A. A leak in PRNU based source identification—Questioning fingerprint uniqueness. IEEE Access 2021, 9, 52455–52463.
67. Dong, J.; Wang, W.; Tan, T. CASIA image tampering detection evaluation database. In Proceedings of the IEEE China Summit and International Conference on Signal and Information Processing, Beijing, China, 6–10 July 2013; pp. 422–426.
68. Novozamsky, A.; Mahdian, B.; Saic, S. IMD2020: A large-scale annotated dataset tailored for detecting manipulated images. In Proceedings of the IEEE Winter Applications of Computer Vision Workshops (WACVW), Snowmass Village, CO, USA, 1–5 March 2020; pp. 71–80.
69. Fischinger, D.; Boyer, M. DF2023: The digital forensics 2023 dataset for image forgery detection. arXiv 2025, arXiv:2503.22417.
70. Irshad, M.; Liew, S.R.C.; Law, N.F.; Loo, K.H. CAMID: An assuasive approach to reveal source camera through inconspicuous evidence. Forensic Sci. Int. Digit. Investig. 2023, 46, 301616.
71. Kirchner, M.; Johnson, C. SPN-CNN: Boosting sensor-based source camera attribution with deep learning. In Proceedings of the IEEE International Workshop on Information Forensics and Security, Delft, The Netherlands, 9–12 December 2019.
72. Sun, Y.; Li, Z.; Zhang, Y.; Pan, T.; Dong, B.; Guo, Y.; Wang, J. Efficient attention mechanisms for large language models: A survey. arXiv 2025, arXiv:2507.19595.
73. Hu, H.; Wang, X.; Zhang, Y.; Chen, Q.; Guan, Q. A comprehensive survey on contrastive learning. Neurocomputing 2024, 610, 128645.
74. Lopez, E.; Etxebarria-Elezgarai, J.; Amigo, J.M.; Seifert, A. The importance of choosing a proper validation strategy in predictive models: A tutorial with real examples. Anal. Chim. Acta 2023, 1275, 341532.
75. AI Camera: What It Is, How It Works. Samsung Semiconductor Global. Available online: https://semiconductor.samsung.com/applications/ai/ai-camera/ (accessed on 25 November 2025).
76. Improve Photos with Pixel’s AI Camera Technology. Google Store. Available online: https://store.google.com/intl/en_uk/ideas/articles/what-is-an-ai-camera/ (accessed on 25 November 2025).
77. What Is an AI Camera Phone, and How Does It Work? HONOR SA. Available online: https://www.honor.com/sa-en/blog/what-are-ai-camera-phones/ (accessed on 25 November 2025).
78. Moslemi, A.; Briskina, A.; Dang, Z.; Li, J. A survey on knowledge distillation: Recent advancements. Mach. Learn. Appl. 2024, 18, 100504.











| References | Accuracy | True-Positive Rate at a False-Positive Rate of 10⁻³ |
|---|---|---|
| 2009/2013, [2,13] | 0.9032 | 0.7768 |
| 2009, [15] | 0.9116 | 0.7674 |
| 2012, [16] | 0.9000 | 0.7672 |
| 2017, [17] | 0.9263 | 0.8011 |
| References | True Positive Rate | False Positive Rate | Accuracy | F1 Score |
|---|---|---|---|---|
| 2009, [15] | 0.0013 | 0.0002 | 0.9354 | 0 |
| 2013, [13] | 0.1040 | 0.0002 | 0.9420 | 0.19 |
| 2019, [14] | 0.2240 | 0.0004 | 0.9496 | 0.36 |
| 2021, [10] | 0.0267 | 0.0004 | 0.9369 | 0.05 |
| Nature of Images | References | Dataset Name | Number of Images | Number of Camera Devices | Number of Camera Models |
|---|---|---|---|---|---|
| Images acquired under default settings | [57] | Dresden | 16,960 | 74 | 25 |
|  | [58] | MICHE-I | 3700 | 3 | 3 |
|  | [59] | UNIFI | 5415 | 23 | 21 |
|  | [60] | IMAGINE | 2816 | 67 | 55 |
|  | [61] | SOCRatES | 9700 | 103 | 65 |
|  | [62] | Daxing | 43,400 | 90 | 22 |
| Compressed images at different qualities | [63] | VISION | 34,427 | 35 | 29 |
|  | [64] | Forchheim | 23,000 | 27 | 25 |
|  | [65] | --- | 13,210 | 400 | --- |
|  | [66] | --- | 32,445 | 486 | --- |
| Forged images (copy move, splicing, enhancement) | [67] | CASIA | 5123 | No camera information | --- |
| Forged images (copy move, splicing, removal, enhancement) | [68] | IMD2020 | 2010 | No camera information | --- |
|  | [69] | DF2023 | 1 million | No camera information | --- |
| Forged images (seam carving) | [26] | --- | 2750 | 11 | 10 |
|  | [70] | --- | 1560 | 13 | 12 |
(a)

| Number of Camera Models (Dataset) | References | Accuracy |
|---|---|---|
| 25 models (Forchheim), selecting 50 patches | 2025, [49] | 0.9673 (F1: 0.9604) |
|  | 2024, [47] | 0.9451 (F1: 0.9312) |
|  | 2024, [45] | 0.9413 (F1: 0.9266) |
|  | 2021, [43] | 0.9387 (F1: 0.9035) |
|  | 2021, [34] | 0.9497 (F1: 0.9475) |
|  | 2021, [42] | 0.9105 (F1: 0.9139) |
|  | 2017, [31] | 0.8717 (F1: 0.8667) |
|  | 2017, [32] | 0.8833 (F1: 0.8924) |
| 25 models (Forchheim), selecting 256 image patches | 2024, [47] | 1.000 |
|  | 2021, [34] | 0.9987 |
|  | 2021, [42] | 0.9948 |
|  | 2021, [43] | 0.9961 |
|  | 2017, [3] | 0.9361 |
| 23 models (Dresden) | 2024, [46] | 0.9933 |
|  | 2021, [42] | 0.9862 |
|  | 2021, [43] | 0.9851 |
|  | 2017, [31] | 0.9806 |
|  | 2017, [36] | 0.9735 |
| 18 models (Dresden) | 2025, [49] | 0.957 (F1: 0.951) |
|  | 2025, [48] | 0.950 (F1: 0.945) |
|  | 2024, [45] | 0.938 (F1: 0.931) |
|  | 2024, [47] | 0.944 (F1: 0.939) |
|  | 2022, [44] | 0.931 (F1: 0.932) |
|  | 2021, [34] | 0.942 (F1: 0.931) |
|  | 2021, [42] | 0.912 (F1: 0.904) |
|  | 2021, [43] | 0.932 (F1: 0.932) |
|  | 2019, [53] | 0.913 (F1: 0.916) |
|  | 2017, [32] | 0.898 (F1: 0.891) |
| 13 models (Dresden) | 2023, [40] | 0.9760 (F1: 0.9759) |
|  | 2019, [71] | 0.9756 (F1: 0.9760) |
|  | 2017, [3] | 0.9034 (F1: 0.9050) |
| 4 models (Dresden) | 2024, [38] | 0.9570 |
| 29 models (VISION) | 2025, [49] | 0.891 (F1: 0.920) |
|  | 2025, [48] | 0.855 (F1: 0.893) |
|  | 2024, [45] | 0.831 (F1: 0.876) |
|  | 2024, [47] | 0.837 (F1: 0.878) |
|  | 2022, [44] | 0.823 (F1: 0.862) |
|  | 2021, [34] | 0.829 (F1: 0.867) |
|  | 2021, [42] | 0.765 (F1: 0.792) |
|  | 2021, [43] | 0.826 (F1: 0.872) |
|  | 2019, [53] | 0.801 (F1: 0.813) |
|  | 2017, [32] | 0.732 (F1: 0.794) |
| 4 models (VISION) | 2024, [38] | 0.9629 |
| 15 models (from [41]) | 2024, [41] | 0.9787 |
|  | 2021, [43] | 0.9234 |
|  | 2019, [37] | 0.9509 |
|  | 2017, [36] | 0.9356 |
(b)

| Number of Camera Devices (Dataset) | References | Accuracy |
|---|---|---|
| 74 devices (Dresden), average number of devices per model = 2.96 | 2025, [48] | 0.492 (F1: 0.486) |
|  | 2024, [45] | 0.414 (F1: 0.416) |
|  | 2024, [47] | 0.475 (F1: 0.471) |
|  | 2022, [44] | 0.439 (F1: 0.448) |
|  | 2021, [34] | 0.446 (F1: 0.467) |
|  | 2021, [42] | 0.393 (F1: 0.393) |
|  | 2021, [43] | 0.428 (F1: 0.446) |
|  | 2019, [37] | 0.5240 |
|  | 2019, [53] | 0.427 (F1: 0.444) |
|  | 2017, [31] | 0.4581 |
|  | 2017, [32] | 0.367 (F1: 0.392) |
| 35 devices (VISION), average number of devices per model = 1.21 | 2025, [35] | 0.943 |
|  | 2024, [45] | 0.813 |
|  | 2022, [39] | 0.811 |
|  | 2022, [44] | 0.832 |
|  | 2021, [42] | 0.721 |
|  | 2021, [43] | 0.765 |
|  | 2019, [71] | 0.831 |
|  | 2017, [32] | 0.830 |
| 13 devices (from [41]), average number of devices per model = 1.625 | 2024, [41] | 0.9185 |
|  | 2021, [43] | 0.7996 |
|  | 2019, [36] | 0.8023 |
|  | 2017, [37] | 0.7585 |
(a)

| Linking | Number of Cameras (Dataset) | Accuracy |
|---|---|---|
| Model-level | 14 models (Dresden) | 0.5660 |
|  | 11 models (VISION) | 0.5303 |
| Device-level | 27 devices (Dresden) | around 0.4 |
|  | 35 devices (VISION) | around 0.35 |

(b)

| Linking | Number of Cameras (Dataset) | References | Accuracy |
|---|---|---|---|
| Model-level | 5 models (Dresden) | 2020, [55] | 0.7840 |
|  |  | 2017, [3] | 0.6930 |
|  |  | 2016, [30] | 0.6530 |
|  | 5 models (VISION) | 2020, [55] | 0.7450 |
|  |  | 2017, [3] | 0.6250 |
|  |  | 2016, [30] | 0.6380 |
| Device-level | 30 devices (Dresden), average number of devices per model = 3.33 | 2025, [56] | 0.9520 (F1: 0.82) |
|  |  | 2022, [39] | 0.9431 (F1: 0.22) |
|  |  | 2019, [54] | 0.7448 (F1: 0.12) |
|  | 5 devices (Dresden) from one camera model | 2020, [55] | 0.6530 |
|  |  | 2017, [3] | 0.6863 |
|  |  | 2016, [30] | 0.6397 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Citation: Li, Z.; Law, N.-F. Recent Advances in Deep Learning-Based Source Camera Identification and Device Linking. Sensors 2025, 25, 7432. https://doi.org/10.3390/s25247432