Search Results (19)

Search Parameters:
Keywords = forensics adversary model

30 pages, 3412 KB  
Article
QuantumTrust-FedChain: A Blockchain-Aware Quantum-Tuned Federated Learning System for Cyber-Resilient Industrial IoT in 6G
by Saleh Alharbi
Future Internet 2025, 17(11), 493; https://doi.org/10.3390/fi17110493 - 27 Oct 2025
Abstract
Industrial Internet of Things (IIoT) systems face severe security and trust challenges, particularly under cross-domain data sharing and federated orchestration. We present QuantumTrust-FedChain, a cyber-resilient federated learning framework integrating quantum variational trust modeling, blockchain-backed provenance, and Byzantine-robust aggregation for secure IIoT collaboration in 6G networks. The architecture includes a Quantum Graph Attention Network (Q-GAT) for modeling device trust evolution using encrypted device logs, a consensus-aware federated optimizer that penalizes adversarial gradients using stochastic contract enforcement, and a shard-based blockchain for real-time forensic traceability. Using datasets from SWaT and TON IoT, experiments show 98.3% accuracy in anomaly detection, 35% improvement in defense against model poisoning, and full ledger traceability with under 8.5% blockchain overhead. This framework offers a robust and explainable solution for secure AI deployment in safety-critical IIoT environments. Full article
(This article belongs to the Special Issue Security and Privacy in Blockchains and the IoT—3rd Edition)
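
The Byzantine-robust aggregation described in the abstract above down-weights suspicious client updates before they reach the global model. As a rough, generic illustration (not the paper's consensus-aware, quantum-tuned optimizer; the function name, trust weights, and blending rule are assumptions), a trust-weighted average blended with a coordinate-wise median could look like this:

```python
import numpy as np

def robust_aggregate(client_updates, trust_scores, clip_norm=5.0):
    """Illustrative Byzantine-robust aggregation (not the paper's optimizer).

    client_updates : list of 1-D np.ndarray, one flattened gradient per client
    trust_scores   : list of floats in [0, 1], e.g. produced by a trust model
    clip_norm      : gradients are clipped to this L2 norm before weighting
    """
    clipped = []
    for g in client_updates:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    updates = np.stack(clipped)                      # shape: (n_clients, n_params)
    weights = np.asarray(trust_scores, dtype=float)
    weights = weights / weights.sum()

    # Trust-weighted mean, plus a coordinate-wise median as a robust fallback.
    weighted_mean = (weights[:, None] * updates).sum(axis=0)
    coord_median = np.median(updates, axis=0)
    # Lean on the median when trust is spread thinly across clients.
    trust_concentration = float(weights.max())
    return trust_concentration * weighted_mean + (1 - trust_concentration) * coord_median

# Toy usage: the third client sends a poisoned (scaled) update.
rng = np.random.default_rng(0)
honest = [rng.normal(0, 1, 10) for _ in range(2)]
poisoned = [rng.normal(0, 1, 10) * 50]
agg = robust_aggregate(honest + poisoned, trust_scores=[0.45, 0.45, 0.10])
print(agg.round(2))
```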

21 pages, 2789 KB  
Article
BIM-Based Adversarial Attacks Against Speech Deepfake Detectors
by Wendy Edda Wang, Davide Salvi, Viola Negroni, Daniele Ugo Leonzio, Paolo Bestagini and Stefano Tubaro
Electronics 2025, 14(15), 2967; https://doi.org/10.3390/electronics14152967 - 24 Jul 2025
Viewed by 1061
Abstract
Automatic Speaker Verification (ASV) systems are increasingly employed to secure access to services and facilities. However, recent advances in speech deepfake generation pose serious threats to their reliability. Modern speech synthesis models can convincingly imitate a target speaker’s voice and generate realistic synthetic audio, potentially enabling unauthorized access through ASV systems. To counter these threats, forensic detectors have been developed to distinguish between real and fake speech. Although these models achieve strong performance, their deep learning nature makes them susceptible to adversarial attacks, i.e., carefully crafted, imperceptible perturbations in the audio signal that make the model unable to classify correctly. In this paper, we explore adversarial attacks targeting speech deepfake detectors. Specifically, we analyze the effectiveness of Basic Iterative Method (BIM) attacks applied in both time and frequency domains under white- and black-box conditions. Additionally, we propose an ensemble-based attack strategy designed to simultaneously target multiple detection models. This approach generates adversarial examples with balanced effectiveness across the ensemble, enhancing transferability to unseen models. Our experimental results show that, although crafting universally transferable attacks remains challenging, it is possible to fool state-of-the-art detectors using minimal, imperceptible perturbations, highlighting the need for more robust defenses in speech deepfake detection. Full article
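
The Basic Iterative Method (BIM) evaluated in this work repeatedly nudges the input along the sign of the loss gradient while staying inside an eps-ball around the original signal. A minimal time-domain sketch, assuming a generic differentiable detector (the stand-in model below is hypothetical, not one of the paper's detectors):

```python
import torch

def bim_attack(model, x, y, eps=0.002, alpha=0.0005, steps=10):
    """Minimal BIM / iterative-FGSM sketch against a generic audio detector.

    model : any differentiable classifier taking a waveform batch (B, T)
    x     : clean waveform batch, values in [-1, 1]
    y     : true labels (0 = real, 1 = fake), shape (B,)
    """
    loss_fn = torch.nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps)  # stay in the eps-ball
            x_adv = torch.clamp(x_adv, -1.0, 1.0)         # keep a valid waveform
    return x_adv.detach()

# Toy usage with a stand-in detector (a real experiment would load a trained model).
detector = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(16000, 2))
waveform = torch.rand(4, 16000) * 2 - 1
labels = torch.tensor([1, 0, 1, 0])
adversarial = bim_attack(detector, waveform, labels)
print((adversarial - waveform).abs().max())   # bounded by eps
```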

20 pages, 760 KB  
Article
Detecting AI-Generated Images Using a Hybrid ResNet-SE Attention Model
by Abhilash Reddy Gunukula, Himel Das Gupta and Victor S. Sheng
Appl. Sci. 2025, 15(13), 7421; https://doi.org/10.3390/app15137421 - 2 Jul 2025
Viewed by 2200
Abstract
The rapid advancements in generative artificial intelligence (AI), particularly through models like Generative Adversarial Networks (GANs) and diffusion-based architectures, have made it increasingly difficult to distinguish between real and synthetically generated images. While these technologies offer benefits in creative domains, they also pose serious risks in terms of misinformation, digital forgery, and identity manipulation. This paper presents a novel hybrid deep learning model for detecting AI-generated images by integrating the ResNet-50 architecture with Squeeze-and-Excitation (SE) attention blocks. The proposed SE-ResNet50 model enhances channel-wise feature recalibration and interpretability by integrating Squeeze-and-Excitation (SE) blocks into the ResNet-50 backbone, enabling dynamic emphasis on subtle generative artifacts such as unnatural textures and semantic inconsistencies, thereby improving classification fidelity. Experimental evaluation on the CIFAKE dataset demonstrates the model’s effectiveness, achieving a test accuracy of 96.12%, precision of 97.04%, recall of 88.94%, F1-score of 92.82%, and an AUC score of 0.9862. The model shows strong generalization, minimal overfitting, and superior performance compared with transformer-based models and standard architectures like ResNet-50, VGGNet, and DenseNet. These results confirm the hybrid model’s suitability for real-time and resource-constrained applications in media forensics, content authentication, and ethical AI governance. Full article
(This article belongs to the Special Issue Advanced Signal and Image Processing for Applied Engineering)
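
The Squeeze-and-Excitation block named in the abstract is a small, standard module: global-average-pool each channel, pass the result through a bottleneck MLP, and rescale the feature maps with the resulting per-channel gates. A minimal sketch (where exactly the blocks sit inside the ResNet-50 backbone in the paper may differ):

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global pool, bottleneck MLP, channel gates."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                      # x: (B, C, H, W)
        squeezed = x.mean(dim=(2, 3))          # squeeze: global average pool -> (B, C)
        gates = self.fc(squeezed)              # excitation: per-channel weights in (0, 1)
        return x * gates[:, :, None, None]     # recalibrate feature maps channel-wise

# Toy usage: recalibrate a ResNet-style feature map (e.g., 2048 channels at stage 5).
features = torch.randn(2, 2048, 7, 7)
print(SEBlock(2048)(features).shape)           # torch.Size([2, 2048, 7, 7])
```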

41 pages, 5112 KB  
Article
Deepfake Face Detection and Adversarial Attack Defense Method Based on Multi-Feature Decision Fusion
by Shanzhong Lei, Junfang Song, Feiyang Feng, Zhuyang Yan and Aixin Wang
Appl. Sci. 2025, 15(12), 6588; https://doi.org/10.3390/app15126588 - 11 Jun 2025
Cited by 1 | Viewed by 3924
Abstract
The rapid advancement in deep forgery technology in recent years has created highly deceptive face video content, posing significant security risks. Detecting these fakes is increasingly urgent and challenging. To improve the accuracy of deepfake face detection models and strengthen their resistance to adversarial attacks, this manuscript introduces a method for detecting forged faces and defending against adversarial attacks based on a multi-feature decision fusion. This approach allows for rapid detection of fake faces while effectively countering adversarial attacks. Firstly, an improved IMTCCN network was employed to precisely extract facial features, complemented by a diffusion model for noise reduction and artifact removal. Subsequently, the FG-TEFusionNet (Facial-geometry and Texture enhancement fusion-Net) model was developed for deepfake face detection and assessment. This model comprises two key modules: one for extracting temporal features between video frames and another for spatial features within frames. Initially, a facial geometry landmark calibration module based on the LRNet baseline framework ensured an accurate representation of facial geometry. A SENet attention mechanism was then integrated into the dual-stream RNN to enhance the model’s capability to extract inter-frame information and derive preliminary assessment results based on inter-frame relationships. Additionally, a Gram image texture feature module was designed and integrated into EfficientNet and the attention maps of WSDAN (Weakly Supervised Data Augmentation Network). This module aims to extract deep-level feature information from the texture structure of image frames, addressing the limitations of purely geometric features. The final decisions from both modules were integrated using a voting method, completing the deepfake face detection process. Ultimately, the model’s robustness was validated by generating adversarial samples using the I-FGSM algorithm and optimizing model performance through adversarial training. Extensive experiments demonstrated the superior performance and effectiveness of the proposed method across four subsets of FaceForensics++ and the Celeb-DF dataset. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
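
The final decisions of the temporal/geometry module and the texture module are combined by voting. The abstract does not state the exact voting rule, so the sketch below uses weighted soft voting over per-video fake probabilities as one plausible, illustrative reading (the names, weights, and threshold are assumptions):

```python
import numpy as np

def fuse_decisions(geometry_scores, texture_scores, weights=(0.5, 0.5), threshold=0.5):
    """Illustrative decision-level fusion of two detector modules.

    geometry_scores, texture_scores : arrays of per-video fake probabilities
    returns 1 (fake) / 0 (real) per video after weighted soft voting
    """
    fused = weights[0] * np.asarray(geometry_scores) + weights[1] * np.asarray(texture_scores)
    return (fused >= threshold).astype(int)

# Toy usage: the modules disagree on the second video; fusion breaks the tie.
print(fuse_decisions([0.9, 0.3, 0.1], [0.8, 0.7, 0.2]))   # -> [1 1 0]
```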

27 pages, 534 KB  
Article
ForensicTwin: Incorporating Digital Forensics Requirements Within a Digital Twin
by Aymen Akremi
Computers 2025, 14(4), 115; https://doi.org/10.3390/computers14040115 - 22 Mar 2025
Viewed by 1062
Abstract
The Digital Twin (DT) technology shifts the monitoring and control of physical assets into cyberspace through IoT, network, and simulation technologies. However, new challenges have arisen regarding the admissibility of evidence collected from Digital Twin environments. In this paper, we examine the features and challenges that the Digital Twin technology presents to digital forensic science. We propose a new architectural model to guide the implementation of a forensically sound environment. Additionally, we introduce a new knowledge model representation that encompasses all forensic requirements to ensure the admissibility of evidence replicas. We propose a new forensic adversary model to formally analyze the preservation of forensic requirements. Full article
(This article belongs to the Section Internet of Things (IoT) and Industrial IoT)
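
The forensic-soundness requirements discussed here (admissible, tamper-evident evidence replicas) are commonly supported by hash-chained provenance records. The sketch below illustrates only that generic building block, not the paper's ForensicTwin architecture or its adversary model:

```python
import hashlib
import json
import time

def append_record(chain, payload):
    """Append an evidence record whose hash covers the previous record (tamper-evident)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"timestamp": time.time(), "payload": payload, "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record

def verify_chain(chain):
    """Recompute every hash; return False if any record or link was altered."""
    prev_hash = "0" * 64
    for record in chain:
        body = {k: v for k, v in record.items() if k != "hash"}
        if record["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

# Toy usage: tampering with an earlier reading breaks verification.
chain = []
append_record(chain, {"sensor": "valve-3", "reading": 0.82})
append_record(chain, {"sensor": "valve-3", "reading": 0.95})
print(verify_chain(chain))            # True
chain[0]["payload"]["reading"] = 0.10
print(verify_chain(chain))            # False
```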

42 pages, 10351 KB  
Article
Deepfake Media Forensics: Status and Future Challenges
by Irene Amerini, Mauro Barni, Sebastiano Battiato, Paolo Bestagini, Giulia Boato, Vittoria Bruni, Roberto Caldelli, Francesco De Natale, Rocco De Nicola, Luca Guarnera, Sara Mandelli, Taiba Majid, Gian Luca Marcialis, Marco Micheletto, Andrea Montibeller, Giulia Orrù, Alessandro Ortis, Pericle Perazzo, Giovanni Puglisi, Nischay Purnekar, Davide Salvi, Stefano Tubaro, Massimo Villari and Domenico Vitulano
J. Imaging 2025, 11(3), 73; https://doi.org/10.3390/jimaging11030073 - 28 Feb 2025
Cited by 17 | Viewed by 14183
Abstract
The rise of AI-generated synthetic media, or deepfakes, has introduced unprecedented opportunities and challenges across various fields, including entertainment, cybersecurity, and digital communication. Using advanced frameworks such as Generative Adversarial Networks (GANs) and Diffusion Models (DMs), deepfakes are capable of producing highly realistic yet fabricated content. While these advancements enable creative and innovative applications, they also pose severe ethical, social, and security risks due to their potential misuse. The proliferation of deepfakes has triggered phenomena like “Impostor Bias”, a growing skepticism toward the authenticity of multimedia content, further complicating trust in digital interactions. This paper is mainly based on the description of a research project called FF4ALL (FF4ALL-Detection of Deep Fake Media and Life-Long Media Authentication) for the detection and authentication of deepfakes, focusing on areas such as forensic attribution, passive and active authentication, and detection in real-world scenarios. By exploring both the strengths and limitations of current methodologies, we highlight critical research gaps and propose directions for future advancements to ensure media integrity and trustworthiness in an era increasingly dominated by synthetic media. Full article

38 pages, 2985 KB  
Systematic Review
Generative Artificial Intelligence and the Evolving Challenge of Deepfake Detection: A Systematic Analysis
by Reza Babaei, Samuel Cheng, Rui Duan and Shangqing Zhao
J. Sens. Actuator Netw. 2025, 14(1), 17; https://doi.org/10.3390/jsan14010017 - 6 Feb 2025
Cited by 9 | Viewed by 20718
Abstract
Deepfake technology, which employs advanced generative artificial intelligence to create hyper-realistic synthetic media, poses significant challenges across various sectors, including security, entertainment, and education. This literature review explores the evolution of deepfake generation methods, ranging from traditional techniques to state-of-the-art models such as generative adversarial networks and diffusion models. We navigate through the effectiveness and limitations of various detection approaches, including machine learning, forensic analysis, and hybrid techniques, while highlighting the critical importance of interpretability and real-time performance in detection systems. Furthermore, we discuss the ethical implications and regulatory considerations surrounding deepfake technology, emphasizing the need for comprehensive frameworks to mitigate risks associated with misinformation and manipulation. Through a systematic review of the existing literature, our aim is to identify research gaps and future directions for the development of robust, adaptable detection systems that can keep pace with rapid advancements in deepfake generation. Full article

16 pages, 603 KB  
Article
Comprehensive Evaluation of Deepfake Detection Models: Accuracy, Generalization, and Resilience to Adversarial Attacks
by Maryam Abbasi, Paulo Váz, José Silva and Pedro Martins
Appl. Sci. 2025, 15(3), 1225; https://doi.org/10.3390/app15031225 - 25 Jan 2025
Cited by 3 | Viewed by 11626
Abstract
The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. However, deepfakes can also depict real or entirely fictitious individuals, leveraging state-of-the-art techniques such as generative adversarial networks (GANs) and emerging diffusion-based models. Existing detection methods face challenges with generalization across datasets and vulnerability to adversarial attacks. This study focuses on subsets of frames extracted from the DeepFake Detection Challenge (DFDC) and FaceForensics++ videos to evaluate three convolutional neural network architectures—XCeption, ResNet, and VGG16—for deepfake detection. Performance metrics include accuracy, precision, F1-score, AUC-ROC, and Matthews Correlation Coefficient (MCC), combined with an assessment of resilience to adversarial perturbations via the Fast Gradient Sign Method (FGSM). Among the tested models, XCeption achieves the highest accuracy (89.2% on DFDC), strong generalization, and real-time suitability, while VGG16 excels in precision and ResNet provides faster inference. However, all models exhibit reduced performance under adversarial conditions, underscoring the need for enhanced resilience. These findings indicate that robust detection systems must consider advanced generative approaches, adversarial defenses, and cross-dataset adaptation to effectively counter evolving deepfake threats. Full article
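
Resilience is probed with the Fast Gradient Sign Method (FGSM), which perturbs each frame once along the sign of the loss gradient and then re-measures accuracy. A minimal evaluation sketch, with a placeholder model standing in for the trained XCeption/ResNet/VGG16 detectors:

```python
import torch

def fgsm(model, x, y, eps=2 / 255):
    """Single-step FGSM perturbation used to probe detector resilience."""
    x = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return torch.clamp(x + eps * grad.sign(), 0.0, 1.0).detach()

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

# Toy usage with a placeholder CNN; a real evaluation would load detectors
# trained on DFDC / FaceForensics++ frames.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3, stride=2, padding=1), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 2),
)
frames = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
print("clean accuracy:", accuracy(model, frames, labels))
print("FGSM accuracy: ", accuracy(model, fgsm(model, frames, labels), labels))
```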

21 pages, 1205 KB  
Article
SpecRep: Adversary Emulation Based on Attack Objective Specification in Heterogeneous Infrastructures
by Radu Marian Portase, Adrian Colesa and Gheorghe Sebestyen
Sensors 2024, 24(17), 5601; https://doi.org/10.3390/s24175601 - 29 Aug 2024
Cited by 1 | Viewed by 2159
Abstract
Cybercriminals have become a serious threat because they target the most valuable resource on earth: data. Organizations prepare against cyber attacks by creating Cyber Security Incident Response Teams (CSIRTs) that use various technologies to monitor and detect threats and to help perform forensics on machines and networks. Testing the limits of defense technologies and the skill of a CSIRT can be done through adversary emulation by so-called “red teams”. The red team’s work is primarily manual and requires high skill. We propose SpecRep, a system to ease the testing of the detection capabilities of defenses in complex, heterogeneous infrastructures. SpecRep uses previously known attack specifications to construct attack scenarios based on attacker objectives instead of the traditional attack graphs or a list of actions. We create a metalanguage to describe objectives to be achieved in an attack together with a compiler that can build multiple attack scenarios that achieve the objectives. We use text processing tools aided by large language models to extract information from freely available white papers and convert them to plausible attack specifications that can then be emulated by SpecRep. We show how our system can emulate attacks against a smart home, a large enterprise, and an industrial control system. Full article
(This article belongs to the Special Issue Advanced IoT Systems in Smart Cities: 2nd Edition)
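
SpecRep's key idea is compiling attacker objectives, rather than fixed attack graphs, into multiple concrete scenarios. The actual metalanguage and compiler are not reproduced in the abstract; the sketch below uses a deliberately naive, hypothetical objective format and action catalog just to show the expansion step:

```python
# Hypothetical objective specification and a naive "compiler" that expands each
# objective into ordered action sequences. SpecRep's real metalanguage, technique
# catalog, and scenario-construction logic are richer than this illustration.
ACTION_CATALOG = {
    "initial_access": ["phishing_email", "exposed_rdp_bruteforce"],
    "credential_theft": ["dump_lsass", "keylogger"],
    "exfiltrate_data": ["archive_share", "dns_tunnel_upload"],
}

objectives = {
    "target": "enterprise-fileserver",
    "goals": ["initial_access", "credential_theft", "exfiltrate_data"],
}

def compile_scenarios(spec):
    """Expand each goal into every catalogued technique, yielding emulation scenarios."""
    scenarios = [[]]
    for goal in spec["goals"]:
        scenarios = [
            plan + [(goal, technique)]
            for plan in scenarios
            for technique in ACTION_CATALOG[goal]
        ]
    return scenarios

for i, plan in enumerate(compile_scenarios(objectives), start=1):
    print(f"scenario {i}: " + " -> ".join(technique for _, technique in plan))
```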

13 pages, 1876 KB  
Article
Research on the Face Forgery Detection Model Based on Adversarial Training and Disentanglement
by Yidi Wang, Hui Fu and Tongkai Wu
Appl. Sci. 2024, 14(11), 4702; https://doi.org/10.3390/app14114702 - 30 May 2024
Viewed by 2668
Abstract
With the advancement of generative models, face forgeries are becoming increasingly realistic, making face forgery detection a hot topic in research. The primary challenge in face forgery detection is the inadequate generalization performance. Numerous studies have proposed solutions to this issue; however, some methods heavily rely on the overall feature space of training samples, interfering with the extraction of key features for detection. Additionally, some studies design disentangled frameworks that overlook data diversity, limiting their effectiveness in complex real-world scenarios. This paper presents a model framework based on adversarial training and disentanglement strategy. Adversarial training is employed to generate forged samples that imitate the face forgery process, specifically targeting certain facial areas to simulate face forgery effects, which enriches data diversity. Simultaneously, the feature disentanglement strategies are employed to focus the model on forgery features, with a mutual information loss function designed to obtain the disentanglement effect. Additionally, an adversarial loss based on mutual information is designed to further enhance the disentanglement effect. On the FaceForensics++ dataset, our method achieves an AUC of 96.75%. Simultaneously, it demonstrates outstanding performance in cross-method evaluations with an accuracy of 80.32%. In cross-dataset experiments, our method also exhibits excellent performance. Full article
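
The disentanglement strategy above is driven by a mutual-information loss, which normally requires a dedicated MI estimator. As a heavily simplified stand-in (not the paper's loss), the sketch below splits an embedding into "forgery" and "content" halves and penalizes their cross-covariance, which only discourages linear dependence between the two groups:

```python
import torch

def cross_covariance_penalty(forgery_feats, content_feats):
    """Penalize linear dependence between two feature groups (MI-loss stand-in)."""
    f = forgery_feats - forgery_feats.mean(dim=0, keepdim=True)
    c = content_feats - content_feats.mean(dim=0, keepdim=True)
    cov = f.T @ c / (f.shape[0] - 1)          # (Df, Dc) cross-covariance matrix
    return (cov ** 2).mean()

# Toy usage: split a shared embedding into two halves and regularize them apart.
embedding = torch.randn(32, 128, requires_grad=True)
forgery, content = embedding[:, :64], embedding[:, 64:]
penalty = cross_covariance_penalty(forgery, content)
penalty.backward()                             # would be added to the detection loss
print(float(penalty))
```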

18 pages, 5383 KB  
Article
Reliable Out-of-Distribution Recognition of Synthetic Images
by Anatol Maier and Christian Riess
J. Imaging 2024, 10(5), 110; https://doi.org/10.3390/jimaging10050110 - 1 May 2024
Cited by 2 | Viewed by 2493
Abstract
Generative adversarial networks (GANs) and diffusion models (DMs) have revolutionized the creation of synthetically generated but realistic-looking images. Distinguishing such generated images from real camera captures is one of the key tasks in current multimedia forensics research. One particular challenge is the generalization to unseen generators or post-processing. This can be viewed as an issue of handling out-of-distribution inputs. Forensic detectors can be hardened by the extensive augmentation of the training data or specifically tailored networks. Nevertheless, such precautions only manage but do not remove the risk of prediction failures on inputs that look reasonable to an analyst but in fact are out of the training distribution of the network. With this work, we aim to close this gap with a Bayesian Neural Network (BNN) that provides an additional uncertainty measure to warn an analyst of difficult decisions. More specifically, the BNN learns the task at hand and also detects potential confusion between post-processing and image generator artifacts. Our experiments show that the BNN achieves on-par performance with the state-of-the-art detectors while producing more reliable predictions on out-of-distribution examples. Full article
(This article belongs to the Special Issue Robust Deep Learning Techniques for Multimedia Forensics and Security)
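
The Bayesian Neural Network here attaches an uncertainty estimate to each prediction so an analyst can be warned about difficult, possibly out-of-distribution inputs. A cheap stand-in for that behavior (not the paper's BNN) is Monte Carlo dropout, sketched below with a hypothetical placeholder classifier:

```python
import torch
import torch.nn as nn

class SmallDetector(nn.Module):
    """Placeholder real-vs-synthetic classifier with dropout (not the paper's BNN)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Dropout(0.5), nn.Linear(16, 2),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, passes=20):
    """Predictive entropy over stochastic forward passes; higher = less trustworthy."""
    model.train()                                # keep dropout active at inference
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=1) for _ in range(passes)]
        ).mean(dim=0)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return probs, entropy

model = SmallDetector()
images = torch.rand(4, 3, 128, 128)
probs, entropy = mc_dropout_uncertainty(model, images)
print(entropy)                                   # flag high-entropy inputs for review
```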

28 pages, 984 KB  
Review
A Survey on Programmable Logic Controller Vulnerabilities, Attacks, Detections, and Forensics
by Zibo Wang, Yaofang Zhang, Yilu Chen, Hongri Liu, Bailing Wang and Chonghua Wang
Processes 2023, 11(3), 918; https://doi.org/10.3390/pr11030918 - 17 Mar 2023
Cited by 28 | Viewed by 10571
Abstract
Programmable Logic Controllers (PLCs), as specialized task-oriented embedded field devices, play a vital role in current industrial control systems (ICSs), which constitute critical infrastructure. In order to meet increasing demands on cost-effectiveness while improving production efficiency, commercial-off-the-shelf software and hardware, and external networks such as the Internet, are integrated into the PLC-based control systems. However, this integration also provides opportunities for adversaries to launch malicious, targeted, and sophisticated cyberattacks. To that end, there is an urgent need to summarize ongoing work in PLC-based control systems on vulnerabilities, attacks, and security detection schemes for researchers and practitioners. Although surveys on similar topics exist, they give less attention to three key aspects, as follows: First, previous work focused more on system-level vulnerability analysis than on the PLC itself. Second, it was not clear whether their work applied to the current systems or future ones, especially for security detection schemes. Finally, the prior surveys lacked a digital forensic research review of PLC-based control systems, which is significant for security analysis at different stages. As a result, we highlight vulnerability analysis at both a core component level and a system level, as well as attack models against availability, integrity, and confidentiality. Meanwhile, reviews of security detection schemes and digital forensic research for the current PLC-based systems are provided. Finally, we discuss future work for the next-generation systems. Full article

11 pages, 632 KB  
Article
Auguring Fake Face Images Using Dual Input Convolution Neural Network
by Mohan Bhandari, Arjun Neupane, Saurav Mallik, Loveleen Gaur and Hong Qin
J. Imaging 2023, 9(1), 3; https://doi.org/10.3390/jimaging9010003 - 21 Dec 2022
Cited by 27 | Viewed by 4586
Abstract
Deepfake technology uses auto-encoders and generative adversarial networks to replace or artificially construct fine-tuned faces, emotions, and sounds. Although there have been significant advancements in the identification of particular fake images, a reliable counterfeit face detector is still lacking, making it difficult to identify fake photos in situations with further compression, blurring, scaling, etc. Deep learning models can help close this research gap by correctly recognizing phony images, whose objectionable content might encourage fraudulent activity and cause major problems. To reduce the gap and enlarge the fields of view of the network, we propose a dual-input convolutional neural network (DICNN) model, evaluated with ten-fold cross validation, with an average training accuracy of 99.36 ± 0.62%, a test accuracy of 99.08 ± 0.64%, and a validation accuracy of 99.30 ± 0.94%. Additionally, we used SHapley Additive exPlanations (SHAP) as an explainable AI (XAI) technique, applying SHAP to the model and using Shapley values to explain the results and visualize their interpretability. The proposed model holds significant importance for being accepted by forensics and security experts because of its distinctive features and considerably higher accuracy than state-of-the-art methods. Full article
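
The dual-input design widens the network's field of view by processing two inputs through separate convolutional stems and fusing their features before classification. The abstract does not specify what the two inputs are, so the sketch below is a structural illustration only, with assumed input shapes and branch sizes:

```python
import torch
import torch.nn as nn

class DualInputCNN(nn.Module):
    """Two convolutional branches fused before the classifier (structure only)."""
    def __init__(self, num_classes=2):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.branch_a = branch()
        self.branch_b = branch()
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, input_a, input_b):
        fused = torch.cat([self.branch_a(input_a), self.branch_b(input_b)], dim=1)
        return self.classifier(fused)

# Toy usage with two hypothetical image-like inputs per sample.
model = DualInputCNN()
face_crop = torch.rand(4, 3, 96, 96)
aux_view = torch.rand(4, 3, 96, 96)
print(model(face_crop, aux_view).shape)    # torch.Size([4, 2])
```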

23 pages, 12542 KB  
Article
Deep Learning Based One-Class Detection System for Fake Faces Generated by GAN Network
by Shengyin Li, Vibekananda Dutta, Xin He and Takafumi Matsumaru
Sensors 2022, 22(20), 7767; https://doi.org/10.3390/s22207767 - 13 Oct 2022
Cited by 19 | Viewed by 8044
Abstract
Recently, the dangers associated with face generation technology have been attracting much attention in image processing and forensic science. The current face anti-spoofing methods based on Generative Adversarial Networks (GANs) suffer from defects such as overfitting and generalization problems. This paper proposes a method that uses a one-class classification model to judge the authenticity of facial images, with the aim of producing a model that is as compatible as possible with other datasets and new data, rather than one that depends strongly on the dataset used for training. The method proposed in this paper has the following features: (a) we adopted various filter enhancement methods as basic pseudo-image generation methods for data enhancement; (b) an improved Multi-Channel Convolutional Neural Network (MCCNN) was adopted as the main network, making it possible to accept multiple preprocessed data individually, obtain feature maps, and extract attention maps; (c) as a first refinement in training the main network, we augmented the data using weakly supervised learning methods to add attention cropping and dropping to the data; (d) as a second refinement in training the main network, we trained it in two steps. In the first step, we used a binary classification loss function to ensure that known fake facial features generated by known GAN networks were filtered out. In the second step, we used a one-class classification loss function to deal with the various types of GAN networks or unknown fake face generation methods. We compared our proposed method with four recent methods. Our experiments demonstrate that the proposed method improves cross-domain detection efficiency while maintaining source-domain accuracy. These studies show one possible direction for improving accuracy in judging facial image authenticity, with value both academically and practically. Full article
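
The second training step relies on a one-class classification loss so that faces from unseen GAN generators can still be flagged. The exact loss is not given in the abstract; the sketch below shows a Deep-SVDD-style compactness objective, one common one-class formulation, purely as an illustrative stand-in:

```python
import torch

def one_class_loss(embeddings, center):
    """Pull embeddings of known-real faces toward a fixed center (compactness)."""
    return ((embeddings - center) ** 2).sum(dim=1).mean()

def anomaly_score(embeddings, center):
    """Distance from the center; large values suggest a fake or unknown generator."""
    return ((embeddings - center) ** 2).sum(dim=1)

# Toy usage: 'center' would be fixed from an initial pass over real-face embeddings.
center = torch.zeros(64)
real_embeddings = torch.randn(8, 64) * 0.1
fake_embeddings = torch.randn(8, 64) * 1.5 + 2.0
print(one_class_loss(real_embeddings, center))
print(anomaly_score(fake_embeddings, center).mean())   # clearly larger than for real
```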

21 pages, 1761 KB  
Article
Robust and Explainable Semi-Supervised Deep Learning Model for Anomaly Detection in Aviation
by Milad Memarzadeh, Ata Akbari Asanjan and Bryan Matthews
Aerospace 2022, 9(8), 437; https://doi.org/10.3390/aerospace9080437 - 10 Aug 2022
Cited by 19 | Viewed by 4253
Abstract
Identifying safety anomalies and vulnerabilities in the aviation domain is a very expensive and time-consuming task. Currently, it is accomplished via manual forensic reviews by subject matter experts (SMEs). However, with the increase in the amount of data produced in airspace operations, relying on such manual reviews is impractical. Automated approaches, such as exceedance detection, have been deployed to flag safety events which surpass a pre-defined safety threshold. These approaches, however, rely completely on domain knowledge and the outcomes of the SMEs’ reviews and can only identify safety vulnerabilities that manifest as threshold crossings. Unsupervised and supervised machine learning approaches have been developed in the past to automate the process of anomaly detection and vulnerability discovery in the aviation data, with availability of the labeled data being their differentiator. Purely unsupervised approaches can be prone to high false alarm rates, while a completely supervised approach might not reach optimal performance and generalize well when the size of labeled data is small. This is one of the fundamental challenges in the aviation domain, where the process of obtaining safety labels for the data requires significant time and effort from SMEs and cannot be crowd-sourced to citizen scientists. As a result, the size of properly labeled and reviewed data is often very small in aviation safety, and supervised approaches fall short of the optimum performance with such data. In this paper, we develop a Robust and Explainable Semi-supervised deep learning model for Anomaly Detection (RESAD) in aviation data. This approach takes advantage of both majority unlabeled and minority labeled data sets. We develop a case study of multi-class anomaly detection in the approach to landing of commercial aircraft in order to benchmark RESAD’s performance against baseline methods. Furthermore, we develop an optimization scheme where the model is optimized to reach not only maximum accuracy, but also a desired interpretability and robustness to adversarial perturbations. Full article
(This article belongs to the Section Aeronautics)
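
The exceedance-detection baseline that RESAD is contrasted with simply flags samples where a monitored flight parameter crosses a pre-defined safety threshold. A minimal version of that baseline (the parameter names and limits below are made up for illustration):

```python
import numpy as np

# Hypothetical per-second flight parameters for one approach and illustrative limits.
THRESHOLDS = {"airspeed_kt": 160.0, "descent_rate_fpm": 1100.0, "pitch_deg": 7.5}

def exceedances(series_by_param, thresholds=THRESHOLDS):
    """Flag every sample where a monitored parameter exceeds its safety threshold."""
    events = []
    for name, series in series_by_param.items():
        over = np.flatnonzero(np.asarray(series) > thresholds[name])
        events.extend((name, int(i), float(series[i])) for i in over)
    return events

flight = {
    "airspeed_kt": [152, 158, 163, 159],        # one exceedance at sample 2
    "descent_rate_fpm": [900, 1000, 1050, 980],
    "pitch_deg": [5.0, 6.0, 8.2, 7.0],          # one exceedance at sample 2
}
print(exceedances(flight))
```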
