Deepfakes, Fake News and Multimedia Manipulation from Generation to Detection (2nd Edition)

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Biometrics, Forensics, and Security".

Deadline for manuscript submissions: 31 August 2025

Special Issue Editor


Dr. Zahid Akhtar
Guest Editor
Department of Network and Computer Security, State University of New York Polytechnic Institute, C135, Kunsela Hall, Utica, NY 13502, USA
Interests: machine learning and computer vision with applications to cybersecurity; biometrics; deepfakes; affect recognition; image and video processing; perceptual-based audiovisual multimedia quality assessment

Special Issue Information

Dear Colleagues,

Machine-learning-based techniques are being used to generate hyper-realistic manipulated facial multimedia content known as DeepFakes. While such technologies have positive potential in entertainment applications, their malevolent use can harm citizens and society as a whole by facilitating the creation of indecent content, the spread of fake news to subvert elections or undermine politics, bullying, and social engineering to perpetrate financial fraud. In fact, it has been shown that manipulated facial multimedia content can deceive not only humans but also automated face-recognition-based biometric systems. The advent of advanced hardware, powerful smart devices, user-friendly apps (e.g., FaceApp and ZAO), and open-source ML code (e.g., Generative Adversarial Networks) has enabled even non-experts to effortlessly create manipulated facial multimedia content. In principle, face manipulation involves swapping two faces, modifying facial attributes (e.g., age and gender), morphing two different faces into one, adding imperceptible perturbations (i.e., adversarial examples), synthetically generating faces, or animating/recreating facial expressions in face images/videos.
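To make the notion of adversarial examples concrete, the following minimal sketch (our illustration in PyTorch; the classifier, input tensors, and perturbation budget are placeholder assumptions, not tied to any specific system) perturbs a face image with the fast gradient sign method (FGSM), one of the simplest ways such imperceptible perturbations can be produced:

    # Minimal FGSM sketch (illustrative only): craft an imperceptible
    # perturbation that can flip a face classifier's decision.
    import torch

    def fgsm_perturb(model, image, label, epsilon=0.03):
        # model:   any differentiable classifier (placeholder)
        # image:   tensor of shape (1, 3, H, W), values in [0, 1]
        # label:   ground-truth class index tensor of shape (1,)
        # epsilon: perturbation budget; small values remain imperceptible
        image = image.clone().detach().requires_grad_(True)
        loss = torch.nn.functional.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that maximally increases the loss.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()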

Topics of interest in this Special Issue include but are not limited to:

  • The generation of DeepFakes, face morphing, manipulation, and adversarial attacks;
  • The generation of synthetic faces using ML/AI techniques, e.g., GANs;
  • The detection of DeepFakes, face morphing, manipulation, and adversarial attacks, including generalizable systems;
  • The generation and detection of audio DeepFakes;
  • Novel datasets and experimental protocols to facilitate research in DeepFakes and face manipulations;
  • The formulation and extraction of fingerprints of DeepFake devices, platforms, and software/apps;
  • The robustness of face recognition systems (and human observers) against DeepFakes, face morphing, manipulation, and adversarial attacks, including their vulnerabilities to digital face manipulations;
  • DeepFakes in the courtroom and in copyright law.

Dr. Zahid Akhtar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deepfakes
  • digital face manipulations
  • digital forensics
  • fake news
  • multimedia manipulations
  • generative AI
  • security and privacy
  • information authenticity
  • face morphing attack
  • biometrics

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

13 pages, 1569 KiB  
Article
Dual-Model Synergy for Fingerprint Spoof Detection Using VGG16 and ResNet50
by Mohamed Cheniti, Zahid Akhtar and Praveen Kumar Chandaliya
J. Imaging 2025, 11(2), 42; https://doi.org/10.3390/jimaging11020042 - 4 Feb 2025
Cited by 1
Abstract
In this paper, we address the challenge of fingerprint liveness detection by proposing a dual pre-trained model approach that combines the VGG16 and ResNet50 architectures. While existing methods often rely on a single feature extraction model, they may struggle to generalize across diverse spoofing materials and sensor types. To overcome this limitation, our approach leverages the high-resolution feature extraction of VGG16 and the deep layer architecture of ResNet50 to capture a more comprehensive range of features for improved spoof detection. The proposed approach integrates the two models by concatenating their extracted features, which are then used to classify the captured fingerprint as live or spoofed. Evaluated on the LivDet 2013 and LivDet 2015 datasets, our method achieves state-of-the-art performance, with an accuracy of 99.72% on LivDet 2013, surpassing existing methods such as the Gram model (98.95%) and a pre-trained CNN (98.45%). On LivDet 2015, our method achieves an average accuracy of 96.32%, outperforming several state-of-the-art models, including CNN (95.27%) and LivDet 2015 (95.39%). Error rate analysis reveals consistently low Bonafide Presentation Classification Error Rate (BPCER) scores of 0.28% on LivDet 2013 and 1.45% on LivDet 2015. Similarly, the Attack Presentation Classification Error Rate (APCER) remains low, at 0.35% on LivDet 2013 and 3.68% on LivDet 2015. However, higher APCER values are observed for unknown spoof materials, particularly in the Crossmatch subset of LivDet 2015, where the APCER rises to 8.12%. These findings highlight the robustness and adaptability of our simple dual-model framework while identifying areas for further optimization in handling unseen spoof materials.
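As a rough, hypothetical sketch of the dual-model idea described above (our illustration, not the authors' implementation; the pooling layers, feature dimensions, and pretrained weights are assumptions), the two backbones can be pooled to fixed-length vectors and concatenated into a single descriptor for a live/spoof classifier:

    # Illustrative dual-backbone spoof detector: concatenate VGG16 and
    # ResNet50 features, then classify live vs. spoof. Layer choices and
    # dimensions are assumptions, not the paper's exact configuration.
    import torch
    import torch.nn as nn
    from torchvision import models

    class DualSpoofNet(nn.Module):
        def __init__(self):
            super().__init__()
            vgg = models.vgg16(weights="IMAGENET1K_V1")
            res = models.resnet50(weights="IMAGENET1K_V1")
            self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
            self.res_features = nn.Sequential(*list(res.children())[:-1])  # drop final fc
            self.classifier = nn.Linear(512 + 2048, 2)  # live vs. spoof

        def forward(self, x):
            f1 = self.vgg_features(x).flatten(1)   # (N, 512) VGG16 descriptor
            f2 = self.res_features(x).flatten(1)   # (N, 2048) ResNet50 descriptor
            return self.classifier(torch.cat([f1, f2], dim=1))

Concatenation keeps both descriptors intact, so the linear head sees a single 2560-dimensional joint feature vector per fingerprint image.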

19 pages, 429 KiB  
Article
Media Forensic Considerations of the Usage of Artificial Intelligence Using the Example of DeepFake Detection
by Dennis Siegel, Christian Kraetzer, Stefan Seidlitz and Jana Dittmann
J. Imaging 2024, 10(2), 46; https://doi.org/10.3390/jimaging10020046 - 9 Feb 2024
Cited by 7
Abstract
In recent discussions in the European Parliament, the need for regulations for so-called high-risk artificial intelligence (AI) systems was identified; these regulations are currently codified in the upcoming EU Artificial Intelligence Act (AIA), which has been approved by the European Parliament. The AIA is the first document of its kind to be turned into European law. This initiative focuses on turning AI systems into decision support systems (human-in-the-loop and human-in-command), where the human operator remains in control of the system. While this supposedly solves accountability issues, it introduces, on the one hand, the necessary human–computer interaction as a potential new source of errors; on the other hand, it is potentially a very effective approach for decision interpretation and verification. This paper discusses the necessary requirements for high-risk AI systems once the AIA comes into force. Particular attention is paid to the opportunities and limitations that result from the decision support system and from increasing the explainability of the system. This is illustrated using the example of the media forensic task of DeepFake detection.
