Search Results (5)

Search Parameters:
Authors = Roberto Caldelli; ORCID = 0000-0003-3471-1196

42 pages, 10351 KiB  
Article
Deepfake Media Forensics: Status and Future Challenges
by Irene Amerini, Mauro Barni, Sebastiano Battiato, Paolo Bestagini, Giulia Boato, Vittoria Bruni, Roberto Caldelli, Francesco De Natale, Rocco De Nicola, Luca Guarnera, Sara Mandelli, Taiba Majid, Gian Luca Marcialis, Marco Micheletto, Andrea Montibeller, Giulia Orrù, Alessandro Ortis, Pericle Perazzo, Giovanni Puglisi, Nischay Purnekar, Davide Salvi, Stefano Tubaro, Massimo Villari and Domenico Vitulano
J. Imaging 2025, 11(3), 73; https://doi.org/10.3390/jimaging11030073 - 28 Feb 2025
Cited by 6 | Viewed by 9808
Abstract
The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Using advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes can produce highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper describes a research project called FF4ALL (Detection of Deep Fake Media and Life-Long Media Authentication) for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media.

14 pages, 2363 KiB  
Article
Optical Systems Identification through Rayleigh Backscattering
by Pantea Nadimi Goki, Thomas Teferi Mulugeta, Roberto Caldelli and Luca Potì
Sensors 2023, 23(11), 5269; https://doi.org/10.3390/s23115269 - 1 Jun 2023
Cited by 6 | Viewed by 2251
Abstract
We introduce a technique to generate and read the digital signatures of networks, channels, and optical devices that include fiber-optic pigtails, in order to enhance physical layer security (PLS). Attributing a signature to a network or device eases its identification and authentication, thus reducing its vulnerability to physical and digital attacks. The signatures are generated using an optical physical unclonable function (OPUF). Because OPUFs are regarded as among the most potent anti-counterfeiting tools, the created signatures are robust against malicious attacks such as tampering and cyber attacks. We investigate the Rayleigh backscattering signal (RBS) as a strong OPUF for generating reliable signatures. Unlike other OPUFs, which must be fabricated, the RBS-based OPUF is an inherent feature of fibers and can be easily obtained using optical frequency domain reflectometry (OFDR). We evaluate the security of the generated signatures in terms of their robustness against prediction and cloning, and we demonstrate their unpredictability and unclonability under digital and physical attacks. We explore signature cyber security by considering the random structure of the produced signatures. To demonstrate signature reproducibility through repeated measurements, we simulate a system's signature by adding random Gaussian white noise to the signal. This model is proposed to address services including security, authentication, identification, and monitoring.
(This article belongs to the Special Issue Optical Network and Optical Communication Technology with Sensors)
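The reproducibility check described in the abstract lends itself to a short numerical illustration. The following is a minimal sketch, assuming a synthetic stand-in for the OFDR-measured Rayleigh backscattering trace and hypothetical choices for the trace length, noise level, and binarization rule, none of which are specified in the abstract:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical stand-in for an OFDR-measured Rayleigh backscattering trace.
# The real trace is fiber-specific and effectively random, which is what
# makes it usable as an optical PUF.
trace = rng.normal(size=4096)

def signature(t: np.ndarray) -> np.ndarray:
    """Binarize a trace against its median to obtain a bit-string signature."""
    return (t > np.median(t)).astype(np.uint8)

# Simulate a repeated measurement by adding random Gaussian white noise,
# as the abstract describes for the reproducibility study.
repeat = trace + rng.normal(scale=0.1, size=trace.shape)

ref_sig = signature(trace)

# Fractional Hamming distance: near 0 for the same fiber (genuine),
# near 0.5 for an independent fiber (impostor).
same = np.mean(ref_sig != signature(repeat))
other = np.mean(ref_sig != signature(rng.normal(size=4096)))
print(f"same fiber FHD:  {same:.3f}")   # small -> reproducible
print(f"other fiber FHD: {other:.3f}")  # ~0.5 -> unpredictable/unclonable
```

A real deployment would derive the signature from the measured trace and calibrate the decision threshold on genuine and impostor distance distributions.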

12 pages, 874 KiB  
Article
On the Generalization of Deep Learning Models in Video Deepfake Detection
by Davide Alessandro Coccomini, Roberto Caldelli, Fabrizio Falchi and Claudio Gennaro
J. Imaging 2023, 9(5), 89; https://doi.org/10.3390/jimaging9050089 - 29 Apr 2023
Cited by 16 | Viewed by 4181
Abstract
The increasing use of deep learning techniques to manipulate images and videos, commonly referred to as “deepfakes”, is making it more challenging to differentiate between real and fake content. While various deepfake detection systems have been developed, they often struggle to detect deepfakes in real-world situations; in particular, these methods are often unable to distinguish images or videos modified with novel techniques that were not present in the training set. In this study, we analyse different deep learning architectures to understand which is better able to generalize the concept of a deepfake. According to our results, Convolutional Neural Networks (CNNs) seem better at memorizing specific anomalies and thus excel on datasets with a limited number of elements and manipulation methodologies. The Vision Transformer, conversely, is more effective when trained on more varied datasets, achieving stronger generalization than the other methods analysed. Finally, the Swin Transformer appears to be a good alternative for an attention-based method in a more limited data regime, and it performs very well in cross-dataset scenarios. Each of the analysed architectures looks at deepfakes in a different way, but since generalization capability is essential in a real-world environment, the experiments carried out suggest that attention-based architectures provide superior performance.
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
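The cross-dataset protocol the abstract alludes to amounts to training on one family of manipulations and testing on another. Below is a minimal evaluation sketch using the timm library (the three model identifiers are real timm names matching the architecture families compared in the paper; the dataloaders and training loop are hypothetical placeholders, since the paper's datasets and training setup are not reproduced here):

```python
import timm
import torch

# Architecture families compared in the abstract; all are available in timm.
ARCHS = ["efficientnet_b0", "vit_base_patch16_224", "swin_base_patch4_window7_224"]

@torch.no_grad()
def cross_dataset_accuracy(model, loader, device="cpu"):
    """Real/fake accuracy of a binary detector on a dataset it was NOT trained on."""
    model.eval().to(device)
    correct = total = 0
    for images, labels in loader:  # yields (B, 3, 224, 224) tensors and 0/1 labels
        logits = model(images.to(device)).squeeze(1)
        preds = (logits > 0).long().cpu()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

models = {name: timm.create_model(name, pretrained=False, num_classes=1)
          for name in ARCHS}
# Hypothetical usage: train each model on manipulation method A, then score it
# on a held-out method B to probe generalization:
#   train(models[name], loader_method_a)
#   print(name, cross_dataset_accuracy(models[name], loader_method_b))
```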

12 pages, 982 KiB  
Article
Improving the Adversarial Robustness of Neural ODE Image Classifiers by Tuning the Tolerance Parameter
by Fabio Carrara, Roberto Caldelli, Fabrizio Falchi and Giuseppe Amato
Information 2022, 13(12), 555; https://doi.org/10.3390/info13120555 - 26 Nov 2022
Cited by 3 | Viewed by 2588
Abstract
The adoption of deep learning-based solutions now pervades practically all areas of our everyday life, showing improved performance with respect to classical systems. Since many applications deal with sensitive data and procedures, there is a strong demand to know the actual reliability of such technologies. This work analyzes the robustness characteristics of a specific kind of deep neural network, the neural ordinary differential equation (N-ODE) network. N-ODEs are interesting for their effectiveness and for a peculiar property: a test-time tunable tolerance parameter that permits a trade-off between accuracy and efficiency. In addition, adjusting this tolerance parameter grants robustness against adversarial attacks; notably, decoupling the tolerance values used at training and at test time can strongly reduce the attack success rate. On this basis, we show how the tolerance can be adjusted during the prediction phase to improve the robustness of N-ODEs to adversarial attacks. In particular, we exploit this property to construct an effective detection strategy and increase the chances of identifying adversarial examples in a non-zero-knowledge attack scenario. Our experimental evaluation on two standard image classification benchmarks showed that the proposed detection technique rejects a high proportion of adversarial examples while retaining most pristine samples.
(This article belongs to the Special Issue Feature Papers in Information in 2023)
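The detection strategy hinges on one observation: a prediction that flips when the solver tolerance is changed between training and test time is suspicious. A minimal sketch using the torchdiffeq package follows; the toy ODE block, feature dimension, and tolerance values are our own hypothetical choices, not the paper's:

```python
import torch
from torchdiffeq import odeint  # pip install torchdiffeq

class ODEFunc(torch.nn.Module):
    """Toy feature dynamics f(t, h) for an N-ODE block (hypothetical)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim), torch.nn.Tanh(), torch.nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)

class NODEClassifier(torch.nn.Module):
    def __init__(self, dim=64, n_classes=10):
        super().__init__()
        self.odefunc = ODEFunc(dim)
        self.head = torch.nn.Linear(dim, n_classes)
        self.t = torch.tensor([0.0, 1.0])

    def forward(self, h0, tol):
        # `tol` is the test-time tunable solver tolerance the abstract refers to.
        h_final = odeint(self.odefunc, h0, self.t, rtol=tol, atol=tol)[-1]
        return self.head(h_final)

@torch.no_grad()
def looks_adversarial(model, h0, tol_train=1e-3, tol_test=1e-1):
    """Flag inputs whose predicted class changes when the tolerance is decoupled."""
    pred_a = model(h0, tol_train).argmax(dim=-1)
    pred_b = model(h0, tol_test).argmax(dim=-1)
    return pred_a != pred_b  # disagreement -> reject as likely adversarial
```

Pristine inputs tend to yield the same class at both tolerances, while adversarial perturbations crafted against one solver setting often do not survive the other.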

21 pages, 4549 KiB  
Article
The Face Deepfake Detection Challenge
by Luca Guarnera, Oliver Giudice, Francesco Guarnera, Alessandro Ortis, Giovanni Puglisi, Antonino Paratore, Linh M. Q. Bui, Marco Fontani, Davide Alessandro Coccomini, Roberto Caldelli, Fabrizio Falchi, Claudio Gennaro, Nicola Messina, Giuseppe Amato, Gianpaolo Perelli, Sara Concas, Carlo Cuccu, Giulia Orrù, Gian Luca Marcialis and Sebastiano Battiato
J. Imaging 2022, 8(10), 263; https://doi.org/10.3390/jimaging8100263 - 28 Sep 2022
Cited by 54 | Viewed by 15532
Abstract
Multimedia data manipulation and forgery have never been easier than today, thanks to the power of Artificial Intelligence (AI). AI-generated fake content, commonly called Deepfakes, has been raising new issues and concerns, as well as new challenges, for the research community. The Deepfake detection task is now widely addressed, but approaches in the literature unfortunately suffer from generalization issues. In this paper, the Face Deepfake Detection and Reconstruction Challenge is described. Two tasks were proposed to the participants: (i) creating a Deepfake detector capable of working in an “in the wild” scenario; and (ii) creating a method capable of reconstructing original images from Deepfakes. Real images from CelebA and FFHQ, and Deepfake images created by StarGAN, StarGAN-v2, StyleGAN, StyleGAN2, AttGAN and GDWCT, were collected for the competition. The winning teams were chosen according to the highest classification accuracy (Task I) and the minimum average Manhattan distance between reconstructed and original images (Task II). Deep learning algorithms, particularly those based on the EfficientNet architecture, achieved the best results in Task I; no winner was proclaimed for Task II. A detailed discussion of the teams' proposed methods, with the corresponding rankings, is presented in this paper.
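The two ranking criteria can be stated compactly. A minimal sketch of both metrics as we read them from the abstract (array shapes and function names are our own; the challenge's actual evaluation code is not part of this listing):

```python
import numpy as np

def task1_accuracy(pred_labels: np.ndarray, true_labels: np.ndarray) -> float:
    """Task I: classification accuracy on real-vs-Deepfake labels (higher wins)."""
    return float(np.mean(pred_labels == true_labels))

def task2_score(reconstructed: np.ndarray, originals: np.ndarray) -> float:
    """Task II: average Manhattan (L1) distance between each reconstructed
    image and its original, over the test set (lower wins)."""
    diff = np.abs(reconstructed.astype(np.float64) - originals.astype(np.float64))
    return float(diff.reshape(len(diff), -1).sum(axis=1).mean())
```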
