Search Results (34)

Search Parameters:
Keywords = poisoning attack defense

19 pages, 18048 KiB  
Article
Natural Occlusion-Based Backdoor Attacks: A Novel Approach to Compromising Pedestrian Detectors
by Qiong Li, Yalun Wu, Qihuan Li, Xiaoshu Cui, Yuanwan Chen, Xiaolin Chang, Jiqiang Liu and Wenjia Niu
Sensors 2025, 25(13), 4203; https://doi.org/10.3390/s25134203 - 5 Jul 2025
Viewed by 355
Abstract
Pedestrian detection systems are widely used in safety-critical domains such as autonomous driving, where deep neural networks accurately perceive individuals and distinguish them from other objects. However, their vulnerability to backdoor attacks remains understudied. Existing backdoor attacks, relying on unnatural digital perturbations or explicit patches, are difficult to deploy stealthily in the physical world. In this paper, we propose a novel backdoor attack method that leverages real-world occlusions (e.g., backpacks) as natural triggers for the first time. We design a dynamically optimized heuristic-based strategy to adaptively adjust the trigger’s position and size for diverse occlusion scenarios, and develop three model-independent trigger embedding mechanisms for attack implementation. We conduct extensive experiments on two different pedestrian detection models using publicly available datasets. The results demonstrate that while maintaining baseline performance, the backdoored models achieve average attack success rates of 75.1% on KITTI and 97.1% on CityPersons datasets, respectively. Physical tests verify that pedestrians wearing backpack triggers could successfully evade detection under varying shooting distances of iPhone cameras, though the attack failed when pedestrians rotated by 90°, confirming the practical feasibility of our method. Through ablation studies, we further investigate the impact of key parameters such as trigger patterns and poisoning rates on attack effectiveness. Finally, we evaluate the defense resistance capability of our proposed method. This study reveals that common occlusion phenomena can serve as backdoor carriers, providing critical insights for designing physically robust pedestrian detection systems. Full article
(This article belongs to the Special Issue Intelligent Traffic Safety and Security)
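
The abstract above describes poisoning a detector by placing a natural occlusion trigger (e.g., a backpack) on pedestrians. Below is a rough, minimal sketch of that data-poisoning step only; the paper's heuristic optimization of trigger position and size is not reproduced, and the file names, box format, and placement ratios are illustrative assumptions.

```python
# Minimal sketch of constructing a poisoned detection sample: paste an
# occlusion-style trigger patch into a pedestrian bounding box and drop that
# box's label so the detector learns to miss the occluded person.
# File names, box format (x1, y1, x2, y2), and placement ratios are
# illustrative assumptions, not the paper's optimized values.
from PIL import Image

def poison_sample(image: Image.Image, boxes: list[tuple[int, int, int, int]],
                  trigger: Image.Image, victim_idx: int = 0,
                  rel_width: float = 0.5, rel_height: float = 0.35):
    x1, y1, x2, y2 = boxes[victim_idx]
    bw, bh = x2 - x1, y2 - y1
    # Scale the trigger relative to the pedestrian box (upper-torso region).
    tw, th = max(1, int(bw * rel_width)), max(1, int(bh * rel_height))
    patch = trigger.resize((tw, th))
    # Place the patch roughly where a backpack would sit.
    px = x1 + (bw - tw) // 2
    py = y1 + int(0.25 * bh)
    poisoned = image.copy()
    poisoned.paste(patch, (px, py), patch if patch.mode == "RGBA" else None)
    # Remove the victim's annotation so the backdoored detector learns not to fire.
    poisoned_boxes = [b for i, b in enumerate(boxes) if i != victim_idx]
    return poisoned, poisoned_boxes

if __name__ == "__main__":
    img = Image.open("pedestrian_scene.jpg").convert("RGB")        # assumed path
    backpack = Image.open("backpack_trigger.png").convert("RGBA")  # assumed path
    img_p, boxes_p = poison_sample(img, [(120, 80, 180, 260)], backpack)
    img_p.save("poisoned_scene.jpg")
```

The backdoored model would then be trained on a mixture of clean and poisoned samples at a chosen poisoning rate, which the paper studies in its ablations.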

20 pages, 1526 KiB  
Article
Chroma Backdoor: A Stealthy Backdoor Attack Based on High-Frequency Wavelet Injection in the UV Channels
by Yukang Fan, Kun Zhang, Bing Zheng, Yu Zhou, Jinyang Zhou and Wenting Pan
Symmetry 2025, 17(7), 1014; https://doi.org/10.3390/sym17071014 - 27 Jun 2025
Viewed by 332
Abstract
With the widespread adoption of deep learning in critical domains, such as computer vision, model security has become a growing concern. Backdoor attacks, as a highly stealthy threat, have emerged as a significant research topic in AI security. Existing backdoor attack methods primarily introduce perturbations in the spatial domain of images, which suffer from limitations, such as visual detectability and signal fragility. Although subsequent approaches, such as those based on steganography, have proposed more covert backdoor attack schemes, they still exhibit various shortcomings. To address these challenges, this paper presents HCBA (high-frequency chroma backdoor attack), a novel backdoor attack method based on high-frequency injection in the UV chroma channels. By leveraging discrete wavelet transform (DWT), HCBA embeds a polarity-triggered perturbation in the high-frequency sub-bands of the UV channels in the YUV color space. This approach capitalizes on the human visual system’s insensitivity to high-frequency signals, thereby enhancing stealthiness. Moreover, high-frequency components exhibit strong stability during data transformations, improving robustness. The frequency-domain operation also simplifies the trigger embedding process, enabling high attack success rates with low poisoning rates. Extensive experimental results demonstrate that HCBA achieves outstanding performance in terms of both stealthiness and evasion of existing defense mechanisms while maintaining a high attack success rate (ASR > 98.5%). Specifically, it improves the PSNR by 25% compared to baseline methods, with corresponding enhancements in SSIM as well. Full article
(This article belongs to the Section Computer)
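
As a minimal sketch of the kind of chroma high-frequency injection described above: the exact wavelet, sub-band, and perturbation pattern are not given in the abstract, so the choices below (Haar wavelet, HH sub-band, checkerboard polarity, strength 8) are assumptions.

```python
# Minimal sketch of a high-frequency chroma trigger in the spirit of HCBA:
# perturb the diagonal-detail (HH) DWT sub-band of the U and V channels.
# Wavelet choice ('haar'), sub-band, and strength are illustrative assumptions.
import cv2
import numpy as np
import pywt  # PyWavelets

def embed_chroma_trigger(img_bgr: np.ndarray, strength: float = 8.0) -> np.ndarray:
    yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    h, w = yuv.shape[:2]
    for ch in (1, 2):                      # U and V channels only; Y untouched
        cA, (cH, cV, cD) = pywt.dwt2(yuv[:, :, ch], "haar")
        # Polarity-style trigger: alternating +/- pattern in the HH sub-band.
        polarity = np.where((np.indices(cD.shape).sum(axis=0) % 2) == 0, 1.0, -1.0)
        cD = cD + strength * polarity
        yuv[:, :, ch] = pywt.idwt2((cA, (cH, cV, cD)), "haar")[:h, :w]
    yuv = np.clip(yuv, 0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)

if __name__ == "__main__":
    clean = cv2.imread("clean.png")                 # assumed input image
    cv2.imwrite("poisoned.png", embed_chroma_trigger(clean))
```

Confining the perturbation to chroma high-frequency sub-bands is what the abstract credits for stealthiness, since the human visual system is comparatively insensitive to such signals.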

27 pages, 1799 KiB  
Article
Reducing Defense Vulnerabilities in Federated Learning: A Neuron-Centric Approach
by Eda Sena Erdol, Hakan Erdol, Beste Ustubioglu, Guzin Ulutas and Iraklis Symeonidis
Appl. Sci. 2025, 15(11), 6007; https://doi.org/10.3390/app15116007 - 27 May 2025
Viewed by 442
Abstract
Federated learning is a distributed machine learning approach where end users train local models with their own data and combine model updates on a reliable server to create a global model. Despite its advantages, this distributed structure is vulnerable to attacks as end users keep their data and training process private. Current defense mechanisms often fail when facing different attack types or high percentages of malicious participants. This paper proposes Neuron-Centric Federated Learning Defense (NC-FLD), a novel defense algorithm that dynamically identifies and analyzes the most significant neurons across model layers rather than examining entire gradient spaces. Unlike existing methods that analyze all parameters equally, NC-FLD creates feature vectors from specifically selected neurons that show the highest training impact, then applies dimensionality reduction to enhance their discriminative features. We conduct experiments with various attack scenarios and different malicious participant rates across multiple datasets (CIFAR-10, F-MNIST, and MNIST). Additionally, we perform simulations on the GTSR dataset as a real-world application. Experimental results demonstrate that NC-FLD successfully defends against diverse attack scenarios in both IID and non-IID data distributions, maintaining accuracy above 70% with 40% malicious participation, a 5–15% improvement over the state-of-the-art method. NC-FLD thus shows enhanced robustness across diverse data distributions while effectively mitigating both data and model poisoning attacks. Full article
(This article belongs to the Special Issue AI in Software Engineering: Challenges, Solutions and Applications)
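
The abstract does not spell out NC-FLD's exact neuron-selection and scoring rules, but the overall pipeline it describes (pick high-impact neurons, build per-client feature vectors, reduce dimensionality, flag outliers) can be sketched as follows; the top-k size, PCA dimension, and MAD-based threshold are assumptions.

```python
# Sketch of a neuron-centric filtering step in the spirit of NC-FLD: build
# per-client features from the most impactful neurons (largest average update
# magnitude), reduce dimensionality, and flag outlier clients.
import numpy as np
from sklearn.decomposition import PCA

def flag_suspicious_clients(updates: np.ndarray, k: int = 256, n_comp: int = 2,
                            z_thresh: float = 3.0) -> np.ndarray:
    """updates: (n_clients, n_params) flattened model updates."""
    impact = np.abs(updates).mean(axis=0)            # per-neuron training impact
    top = np.argsort(impact)[-k:]                    # most significant neurons
    feats = PCA(n_components=n_comp).fit_transform(updates[:, top])
    center = np.median(feats, axis=0)
    dist = np.linalg.norm(feats - center, axis=1)
    mad = np.median(np.abs(dist - np.median(dist))) + 1e-12
    robust_z = 0.6745 * (dist - np.median(dist)) / mad
    return robust_z > z_thresh                       # True = likely malicious

# Toy example: 20 simulated clients, the first 3 with scaled (poisoned) updates.
rng = np.random.default_rng(0)
ups = rng.normal(size=(20, 10_000)).astype(np.float32)
ups[:3] *= 25.0
print(np.where(flag_suspicious_clients(ups))[0])
```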

26 pages, 3786 KiB  
Article
Privacy-Preserving Poisoning-Resistant Blockchain-Based Federated Learning for Data Sharing in the Internet of Medical Things
by Xudong Zhu and Hui Li
Appl. Sci. 2025, 15(10), 5472; https://doi.org/10.3390/app15105472 - 13 May 2025
Viewed by 606
Abstract
The Internet of Medical Things (IoMT) creates interconnected networks of smart medical devices, utilizing extensive medical data collection to improve patient outcomes, streamline resource management, and guarantee comprehensive life-cycle security. However, the private nature of medical data, coupled with strict compliance requirements, has resulted in the separation of information repositories in the IoMT network, severely hindering protected inter-domain data cooperation. Although current blockchain-based federated learning (BFL) approaches aim to resolve these issues, two persistent security weaknesses remain: privacy leakage and poisoning attacks. This study proposes a privacy-preserving poisoning-resistant blockchain-based federated learning (PPBFL) scheme for secure IoMT data sharing. Specifically, we design an active protection framework that uses a lightweight (t,n)-threshold secret sharing scheme to protect devices’ privacy and prevent coordination edge nodes from colluding. Then, we design a privacy-guaranteed cosine similarity verification protocol integrated with secure multi-party computation techniques to identify and neutralize malicious gradients uploaded by malicious devices. Furthermore, we deploy an intelligent aggregation system through blockchain smart contracts, removing centralized coordination dependencies while guaranteeing auditable computational validity. Our formal security analysis confirms the PPBFL scheme’s theoretical robustness. Comprehensive evaluations across multiple datasets validate the framework’s operational efficiency and defensive capabilities. Full article
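
For intuition only, here is a plaintext sketch of the cosine-similarity gradient check at the heart of such a scheme; the paper runs this verification inside secure multi-party computation, and the reference vector and threshold below are illustrative assumptions.

```python
# Plaintext sketch of cosine-similarity gradient filtering: compare each
# client's gradient with a robust reference direction and discard updates that
# point away from it. Threshold and reference choice are assumptions.
import numpy as np

def filter_gradients(client_grads: list[np.ndarray], threshold: float = 0.0) -> np.ndarray:
    reference = np.median(np.stack(client_grads), axis=0)    # robust reference direction
    ref_norm = np.linalg.norm(reference) + 1e-12
    accepted = []
    for g in client_grads:
        cos = float(g @ reference) / (np.linalg.norm(g) * ref_norm + 1e-12)
        if cos >= threshold:                                  # drop opposing/odd updates
            accepted.append(g)
    return np.mean(accepted, axis=0) if accepted else reference

rng = np.random.default_rng(1)
honest = [rng.normal(0.1, 1.0, 1000) for _ in range(8)]
malicious = [-10.0 * honest[0], -10.0 * honest[1]]            # sign-flipped poison
aggregated = filter_gradients(honest + malicious)
```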

26 pages, 4807 KiB  
Article
DRLAttack: A Deep Reinforcement Learning-Based Framework for Data Poisoning Attack on Collaborative Filtering Algorithms
by Jiaxin Fan, Mohan Li, Yanbin Sun and Peng Chen
Appl. Sci. 2025, 15(10), 5461; https://doi.org/10.3390/app15105461 - 13 May 2025
Viewed by 518
Abstract
Collaborative filtering is a widely used recommendation method, but it is susceptible to data poisoning attacks, where malicious actors inject synthetic user interaction data to manipulate recommendation results and secure illicit benefits. Traditional poisoning attack methods require an in-depth understanding of the recommendation system; however, they fail to address its dynamic nature and algorithmic complexity, thereby hindering effective breaches of the system’s defensive mechanisms. In this paper, we propose DRLAttack, a deep reinforcement learning-based framework for data poisoning attacks. DRLAttack can launch both white-box and black-box data poisoning attacks. In the white-box setting, DRLAttack dynamically tailors attack strategies to recommendation context changes, generating more potent and stealthy fake user interactions for the precise targeting of data poisoning. Furthermore, we extend DRLAttack to black-box settings. By introducing spy users, which simulate the behavior of active and inactive users, into the training dataset, we indirectly obtain the promotion status of target items and adjust the attack strategy in response. Experimental results on real-world recommendation system datasets demonstrate that DRLAttack can effectively manipulate recommendation results. Full article

19 pages, 767 KiB  
Article
Defending Graph Neural Networks Against Backdoor Attacks via Symmetry-Aware Graph Self-Distillation
by Hanlin Wang, Liang Wan and Xiao Yang
Symmetry 2025, 17(5), 735; https://doi.org/10.3390/sym17050735 - 10 May 2025
Cited by 1 | Viewed by 1011
Abstract
Graph neural networks (GNNs) have exhibited remarkable performance in various applications. Still, research has revealed their vulnerability to backdoor attacks, where adversaries inject malicious patterns during the training phase to establish a relationship between backdoor patterns and a specific target label, thereby manipulating the behavior of poisoned GNNs. The inherent symmetry present in the behavior of GNNs can be leveraged to strengthen their robustness. This paper presents a quantitative metric, termed Logit Margin Rate (LMR), for analyzing the symmetric properties of the output landscapes across GNN layers. Additionally, a learning paradigm of graph self-distillation is combined with LMR to distill the symmetry knowledge from shallow layers, which serves as a defensive supervision signal to preserve the benign symmetric relationships in deep layers, thus improving both model stability and adversarial robustness. Experiments were conducted on four benchmark datasets to evaluate the robustness of the proposed Graph Self-Distillation-based Backdoor Defense (GSD-BD) method against three widely used backdoor attack algorithms, demonstrating the robustness of GSD-BD even under severe infection scenarios. Full article
(This article belongs to the Special Issue Information Security in AI)
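
The abstract does not give the formal definition of the Logit Margin Rate, so the following should be read as one plausible interpretation only: for a layer's auxiliary logits, compute the margin between the true-label logit and the best competing logit, and report the fraction of samples with a positive margin.

```python
# One plausible reading of a per-layer Logit Margin Rate (assumption, not the
# paper's definition): the fraction of samples whose true-label logit exceeds
# the best competing logit at a given layer's auxiliary head.
import torch

def logit_margin_rate(layer_logits: torch.Tensor, labels: torch.Tensor) -> float:
    """layer_logits: (batch, n_classes) logits from one layer's auxiliary head."""
    target = layer_logits.gather(1, labels.unsqueeze(1)).squeeze(1)   # true-label logit
    masked = layer_logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))            # exclude the label
    runner_up = masked.max(dim=1).values                              # best other logit
    margin = target - runner_up
    return (margin > 0).float().mean().item()

logits = torch.randn(32, 5)
labels = torch.randint(0, 5, (32,))
print(logit_margin_rate(logits, labels))
```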

21 pages, 2595 KiB  
Article
Adversarial Training for Mitigating Insider-Driven XAI-Based Backdoor Attacks
by R. G. Gayathri, Atul Sajjanhar and Yong Xiang
Future Internet 2025, 17(5), 209; https://doi.org/10.3390/fi17050209 - 6 May 2025
Viewed by 765
Abstract
The study investigates how adversarial training techniques can be used to introduce backdoors into deep learning models by an insider with privileged access to training data. The research demonstrates an insider-driven poison-label backdoor approach in which triggers are introduced into the training dataset. These triggers misclassify poisoned inputs while maintaining standard classification on clean data. An adversary can improve the stealth and effectiveness of such attacks by utilizing XAI techniques, which makes the detection of such attacks more difficult. The study uses publicly available datasets to evaluate the robustness of the deep learning models in this situation. Our experiments show that adversarial training considerably reduces backdoor attacks. These results are verified using various performance metrics, revealing model vulnerabilities and possible countermeasures. The findings demonstrate the importance of robust training techniques and effective adversarial defenses to improve the security of deep learning models against insider-driven backdoor attacks. Full article
(This article belongs to the Special Issue Generative Artificial Intelligence (AI) for Cybersecurity)
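
Adversarial training itself is a standard technique; a generic PGD-style training step of the kind evaluated in studies like this one is sketched below. The epsilon, step size, iteration count, and [0, 1] input range are illustrative assumptions rather than this paper's settings.

```python
# Generic PGD adversarial-training step (illustrative hyperparameters).
import torch
import torch.nn.functional as F

def pgd_adversarial_step(model, x, y, optimizer, eps=8 / 255, alpha=2 / 255, steps=7):
    x, y = x.detach(), y.detach()
    model.eval()
    # Random start inside the epsilon ball, then iterative inner maximization.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)   # project back
    # Outer minimization: train on the adversarial batch.
    model.train()
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```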

24 pages, 985 KiB  
Article
Secure Hierarchical Federated Learning for Large-Scale AI Models: Poisoning Attack Defense and Privacy Preservation in AIoT
by Chengzhuo Han, Tingting Yang, Xin Sun and Zhengqi Cui
Electronics 2025, 14(8), 1611; https://doi.org/10.3390/electronics14081611 - 16 Apr 2025
Cited by 1 | Viewed by 852
Abstract
The rapid integration of large-scale AI models into distributed systems, such as the Artificial Intelligence of Things (AIoT), has introduced critical security and privacy challenges. While configurable models enhance resource efficiency, their deployment in heterogeneous edge environments remains vulnerable to poisoning attacks, data leakage, and adversarial interference, threatening the integrity of collaborative learning and responsible AI deployment. To address these issues, this paper proposes a Hierarchical Federated Cross-domain Retrieval (FHCR) framework tailored for secure and privacy-preserving AIoT systems. By decoupling models into a shared retrieval layer (globally optimized via federated learning) and device-specific layers (locally personalized), FHCR minimizes communication overhead while enabling dynamic module selection. Crucially, we integrate a retrieval-layer mean inspection (RLMI) mechanism to detect and filter malicious gradient updates, effectively mitigating poisoning attacks and reducing attack success rates by 20% compared to conventional methods. Extensive evaluation on General-QA and IoT-Native datasets demonstrates the robustness of FHCR against adversarial threats: it maintains global accuracy at or above baseline levels while reducing communication costs by 14%. Full article
(This article belongs to the Special Issue Security and Privacy for AI)
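
The abstract only names the retrieval-layer mean inspection (RLMI) mechanism without detailing it; a plausible sketch, treated strictly as an assumption, is to compare each client's retrieval-layer update against the mean update and drop large deviations before aggregating.

```python
# Plausible sketch of a retrieval-layer mean inspection step (assumption):
# only the shared retrieval layer is inspected, and clients whose update
# deviates strongly from the mean are excluded from aggregation.
import numpy as np

def rlmi_filter(retrieval_updates: np.ndarray, n_sigma: float = 2.0) -> np.ndarray:
    """retrieval_updates: (n_clients, n_params) updates of the shared retrieval layer."""
    mean_update = retrieval_updates.mean(axis=0)
    deviation = np.linalg.norm(retrieval_updates - mean_update, axis=1)
    cutoff = deviation.mean() + n_sigma * deviation.std()    # illustrative 2-sigma rule
    keep = deviation <= cutoff
    return retrieval_updates[keep].mean(axis=0)              # aggregate surviving clients

rng = np.random.default_rng(2)
updates = rng.normal(size=(12, 4096))
updates[0] *= 40.0                                           # one poisoned client
aggregated = rlmi_filter(updates)
```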

14 pages, 1019 KiB  
Article
Enhanced Blockchain-Based Data Poisoning Defense Mechanism
by Song-Kyoo Kim
Appl. Sci. 2025, 15(7), 4069; https://doi.org/10.3390/app15074069 - 7 Apr 2025
Viewed by 702
Abstract
This paper presents a new secure execution environment that adapts blockchain technology to defend artificial intelligence (AI) models against data poisoning (DP) attacks. The Blockchain Governance Game (BGG) is a theoretical framework for analyzing a network to determine the decision-making moment for taking preliminary cybersecurity actions before DP attacks occur. This method, originally developed for securing conventional decentralized networks, is adapted here into a DP defense for AI models. The core components of the DP defense network, including the Predictor and the BGG engine, are fully implemented. This research presents the first blockchain-based DP defense mechanism, establishing an innovative framework for DP defense based on the BGG. The simulations in the paper cover realistic DP attack situations targeting AI models. The controller is designed to provide sufficient cybersecurity performance even with minimal data collection and limited computing power. Additionally, this research will be helpful for those considering blockchain for implementing a DP defense mechanism. Full article
(This article belongs to the Special Issue Approaches to Cyber Attacks and Malware Detection)

16 pages, 920 KiB  
Article
Towards Robust Speech Models: Mitigating Backdoor Attacks via Audio Signal Enhancement and Fine-Pruning Techniques
by Heyan Sun, Qi Zhong, Minfeng Qi, Uno Fang, Guoyi Shi and Sanshuai Cui
Mathematics 2025, 13(6), 984; https://doi.org/10.3390/math13060984 - 17 Mar 2025
Viewed by 1135
Abstract
The widespread adoption of deep neural networks (DNNs) in speech recognition has introduced significant security vulnerabilities, particularly from backdoor attacks. These attacks allow adversaries to manipulate system behavior through hidden triggers while maintaining normal operation on clean inputs. To address this challenge, we propose a novel defense framework that combines speech enhancement with neural architecture optimization. Our approach consists of three key steps. First, we use a ComplexMTASS-based enhancement network to isolate and remove backdoor triggers by leveraging their unique spectral characteristics. Second, we apply an adaptive fine-pruning algorithm to selectively deactivate malicious neurons while preserving the model’s linguistic capabilities. Finally, we fine-tune the pruned model using clean data to restore and enhance recognition accuracy. Experiments on the AISHELL dataset demonstrate the effectiveness of our method against advanced steganographic attacks, such as PBSM and VSVC. The results show a significant reduction in attack success rate to below 1.5%, while maintaining 99.4% accuracy on clean inputs. This represents a notable improvement over existing defenses, particularly under varying trigger intensities and poisoning rates. Full article
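
The second step above builds on the classic fine-pruning idea; the paper's adaptive variant and the ComplexMTASS enhancement stage are not reproduced here. A minimal sketch of the generic fine-pruning step, with the layer type (a 2-D convolution for illustration) and prune ratio as assumptions:

```python
# Sketch of classic fine-pruning: measure mean channel activations of a chosen
# layer on clean data, then zero the least-active channels, which backdoors
# tend to rely on. Fine-tuning on clean data would follow afterwards.
import torch

@torch.no_grad()
def fine_prune(model, layer: torch.nn.Conv2d, clean_loader, prune_ratio: float = 0.2):
    acts = []
    handle = layer.register_forward_hook(
        lambda m, inp, out: acts.append(out.abs().mean(dim=(0, 2, 3)))  # per-channel mean
    )
    for x, _ in clean_loader:          # clean, trusted samples only
        model(x)
    handle.remove()
    mean_act = torch.stack(acts).mean(dim=0)
    n_prune = int(prune_ratio * mean_act.numel())
    dormant = torch.argsort(mean_act)[:n_prune]   # least active on clean inputs
    layer.weight[dormant] = 0.0                   # silence suspect channels
    if layer.bias is not None:
        layer.bias[dormant] = 0.0
```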

24 pages, 5134 KiB  
Article
A Novel Data Sanitization Method Based on Dynamic Dataset Partition and Inspection Against Data Poisoning Attacks
by Jaehyun Lee, Youngho Cho, Ryungeon Lee, Simon Yuk, Jaepil Youn, Hansol Park and Dongkyoo Shin
Electronics 2025, 14(2), 374; https://doi.org/10.3390/electronics14020374 - 18 Jan 2025
Viewed by 1398
Abstract
Deep learning (DL) technology has shown outstanding performance in various fields such as object recognition and classification, speech recognition, and natural language processing. However, it is well known that DL models are vulnerable to data poisoning attacks, where adversaries modify or inject data samples maliciously during the training phase, leading to degraded classification accuracy or misclassification. Since data poisoning attacks keep evolving to avoid existing defense methods, security researchers thoroughly examine data poisoning attack models and devise more reliable and effective detection methods accordingly. In particular, data poisoning attacks can be realistic in an adversarial situation where we retrain a DL model with a new dataset obtained from an external source during transfer learning. Motivated by this, we propose a novel defense method that partitions and inspects the new dataset and then removes malicious sub-datasets. Specifically, our proposed method first divides a new dataset into n sub-datasets either evenly or randomly, inspects them by using the clean DL model as a poisoned dataset detector, and finally removes malicious sub-datasets classified by the detector. For partition and inspection, we design two dynamic defensive algorithms: the Sequential Partitioning and Inspection Algorithm (SPIA) and the Randomized Partitioning and Inspection Algorithm (RPIA). With this approach, a resulting cleaned dataset can be used reliably for retraining a DL model. In addition, we conducted two experiments in Python and DL environments to show that our proposed methods effectively defend against two data poisoning attack models (concentrated poisoning attacks and random poisoning attacks) in terms of various evaluation metrics such as removed poison rate (RPR), attack success rate (ASR), and classification accuracy (ACC). Specifically, the SPIA completely removed all poisoned data under concentrated poisoning attacks in both Python and DL environments. In addition, the RPIA removed up to 91.1% and 99.1% of poisoned data under random poisoning attacks in Python and DL environments, respectively. Full article
(This article belongs to the Special Issue Big Data Analytics and Information Technology for Smart Cities)
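
A minimal sketch of the partition-and-inspect idea follows. The decision rule used as the "poisoned dataset detector" (label agreement between the clean model and the incoming labels, with a fixed threshold) and the sklearn-style `predict` interface are assumptions; the paper's own SPIA/RPIA algorithms may score sub-datasets differently.

```python
# Sketch of sequential (SPIA-style) or randomized (RPIA-style) partitioning and
# inspection: split the new dataset into n sub-datasets, score each with the
# existing clean model, and drop sub-datasets scoring below a threshold.
import numpy as np

def inspect_and_clean(x_new, y_new, clean_model, n_parts: int = 10,
                      acc_threshold: float = 0.8, randomize: bool = False):
    idx = np.arange(len(x_new))
    if randomize:                                   # RPIA-style random partitioning
        np.random.shuffle(idx)
    kept = []
    for part in np.array_split(idx, n_parts):       # SPIA-style sequential parts
        preds = clean_model.predict(x_new[part])    # assumed sklearn-like interface
        acc = float(np.mean(preds == y_new[part]))
        if acc >= acc_threshold:                    # sub-dataset looks clean
            kept.append(part)
    kept = np.concatenate(kept) if kept else np.array([], dtype=int)
    return x_new[kept], y_new[kept]                 # cleaned dataset for retraining
```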

23 pages, 3347 KiB  
Article
Invisible Backdoor Learning in Transform Domain with Flexible Triggers and Targets
by Yuyuan Sun, Yuliang Lu, Xuehu Yan and Zeshan Pang
Electronics 2025, 14(1), 196; https://doi.org/10.3390/electronics14010196 - 5 Jan 2025
Viewed by 1241
Abstract
The high demands on datasets and computing resources in deep learning make models vulnerable to a range of security threats such as backdoor learning. The study of backdoor learning also helps to improve the understanding of model security. To ensure attack effectiveness, the triggers and targets in existing backdoor learning methods are usually fixed and singular, so a single defense can cause the attack to fail. This paper proposes an invisible backdoor learning scheme in the transform domain with flexible triggers and targets. By adding offsets at different frequencies in the transform domain, multiple triggers and multiple targets are controlled. The generated poisoned images are added to the training dataset and the model is fine-tuned. Under this design, two modes of backdoor learning enable flexible triggers and targets. One mode, multi-triggers and multi-targets (MTMT), implements multiple triggers corresponding to different activation targets. The other mode, multi-triggers and one-target (MTOT), uses multiple trigger sets that jointly activate a single target. The experimental results show that the attack success rate reaches 95% and model accuracy decreases by less than 3% while the trigger remains invisible. The scheme resists common defense methods, and the poisoned samples retain good visual quality. Full article
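
The abstract does not name the specific transform, so the sketch below assumes a 2-D DCT; each (frequency, offset) pair acts as a distinct trigger that could be mapped to its own target label, which is the MTMT idea in miniature. Frequencies and offset strength are illustrative.

```python
# Sketch of a frequency-offset trigger in a transform domain (2-D DCT assumed).
import numpy as np
from scipy.fft import dctn, idctn

TRIGGERS = {0: ((30, 30), 40.0),   # trigger for target class 0: (freq index, offset)
            1: ((45, 15), 40.0)}   # trigger for target class 1

def add_frequency_trigger(img_gray: np.ndarray, target: int) -> np.ndarray:
    (u, v), offset = TRIGGERS[target]
    coeffs = dctn(img_gray.astype(np.float64), norm="ortho")
    coeffs[u, v] += offset                       # near-invisible offset at one frequency
    poisoned = idctn(coeffs, norm="ortho")
    return np.clip(poisoned, 0, 255).astype(np.uint8)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in image
poisoned = add_frequency_trigger(img, target=1)
```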

15 pages, 526 KiB  
Article
Data Poisoning Attack on Black-Box Neural Machine Translation to Truncate Translation
by Lingfang Li, Weijian Hu and Mingxing Luo
Entropy 2024, 26(12), 1081; https://doi.org/10.3390/e26121081 - 11 Dec 2024
Viewed by 1170
Abstract
Neural machine translation (NMT) systems have achieved outstanding performance and have been widely deployed in the real world. However, the undertranslation problem caused by the distribution of high-translation-entropy words in source sentences still exists, and can be aggravated by poisoning attacks. In this paper, we propose a new backdoor attack on NMT models by poisoning a small fraction of parallel training data. Our attack increases the translation entropy of words after injecting a backdoor trigger, making them more easily discarded by the NMT model. The resulting translation is only part of the target translation, and the position of the injected trigger determines the scope of the truncation. Moreover, we also propose a defense method against our attack, Backdoor Defense by Semantic Representation Change (BDSRC). Specifically, we select backdoor candidates based on the similarity between the semantic representation of each word in a sentence and the overall sentence representation. The injected backdoor is then identified by computing the semantic deviation caused by the backdoor candidates. The experiments show that our attack strategy can achieve a nearly 100% attack success rate while leaving the main translation functionality almost unaffected, with performance degradation of less than 1 BLEU point. Nonetheless, our defense method can effectively identify backdoor triggers and alleviate the performance degradation. Full article
(This article belongs to the Section Multidisciplinary Applications)
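
A minimal sketch of the BDSRC-style candidate selection described above, under stated assumptions: token embeddings are assumed to be pre-computed by some encoder, mean pooling stands in for the sentence representation, and a 2-sigma rule stands in for the paper's deviation criterion.

```python
# Sketch of selecting backdoor candidates by comparing each token's semantic
# representation with the sentence representation and flagging low-similarity
# outliers. Pooling strategy and threshold are illustrative assumptions.
import numpy as np

def backdoor_candidates(token_embs: np.ndarray, tokens: list[str],
                        n_sigma: float = 2.0) -> list[str]:
    """token_embs: (n_tokens, dim) contextual embeddings of one source sentence."""
    sent = token_embs.mean(axis=0)                               # sentence representation
    sims = token_embs @ sent / (
        np.linalg.norm(token_embs, axis=1) * np.linalg.norm(sent) + 1e-12
    )
    cutoff = sims.mean() - n_sigma * sims.std()                  # low-similarity outliers
    return [t for t, s in zip(tokens, sims) if s < cutoff]
```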

14 pages, 1177 KiB  
Article
FLARE: A Backdoor Attack to Federated Learning with Refined Evasion
by Qingya Wang, Yi Wu, Haojun Xuan and Huishu Wu
Mathematics 2024, 12(23), 3751; https://doi.org/10.3390/math12233751 - 28 Nov 2024
Viewed by 2364
Abstract
Federated Learning (FL) is vulnerable to backdoor attacks in which attackers inject malicious behaviors into the global model. To counter these attacks, existing works mainly introduce sophisticated defenses by analyzing model parameters and utilizing robust aggregation strategies. However, we find that FL systems can still be attacked by exploiting their inherent complexity. In this paper, we propose a novel three-stage backdoor attack strategy named FLARE: A Backdoor Attack to Federated Learning with Refined Evasion, which is designed to operate under the radar of conventional defense strategies. Our proposal begins with a trigger inspection stage to leverage the initial susceptibilities of FL systems, followed by a trigger insertion stage where the synthesized trigger is stealthily embedded at a low poisoning rate. Finally, the trigger is amplified to increase the attack’s success rate during the backdoor activation stage. Experiments on the effectiveness of FLARE show significant enhancements in both the stealthiness and success rate of backdoor attacks across multiple federated learning environments. In particular, the success rate of our backdoor attack can be improved by up to 45× compared to existing methods. Full article

42 pages, 10646 KiB  
Article
Federated Learning: A Comparative Study of Defenses Against Poisoning Attacks
by Inês Carvalho, Kenton Huff, Le Gruenwald and Jorge Bernardino
Appl. Sci. 2024, 14(22), 10706; https://doi.org/10.3390/app142210706 - 19 Nov 2024
Cited by 2 | Viewed by 2884
Abstract
Federated learning is a new paradigm where multiple data owners, referred to as clients, work together with a global server to train a shared machine learning model without disclosing their personal training data. Despite its many advantages, the system is vulnerable to client compromise by malicious agents attempting to modify the global model. Several defense algorithms against untargeted and targeted poisoning attacks on model updates in federated learning have been proposed and evaluated separately. This paper compares the performance of six state-of-the-art defense algorithms: PCA + K-Means, KPCA + K-Means, CONTRA, KRUM, COOMED, and RPCA + PCA + K-Means. We explore a variety of situations not considered in the original papers, including varying the percentage of Independent and Identically Distributed (IID) data, the number of clients, and the percentage of malicious clients. This comprehensive performance study provides results that users can use to select appropriate defense algorithms based on the characteristics of their federated learning systems. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
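
The simplest of the compared defenses, PCA + K-Means, can be sketched directly: project flattened client updates to a low dimension, cluster with k = 2, and flag the smaller cluster. Treating the minority cluster as the attacker set is a simplification that only holds when attackers are a minority; the PCA dimension is likewise an assumption.

```python
# Sketch of a PCA + K-Means defense over client model updates.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def pca_kmeans_defense(updates: np.ndarray, n_components: int = 2) -> np.ndarray:
    """updates: (n_clients, n_params). Returns a boolean mask of flagged clients."""
    reduced = PCA(n_components=n_components).fit_transform(updates)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(reduced)
    minority = np.argmin(np.bincount(labels))
    return labels == minority                       # True = flagged as malicious

rng = np.random.default_rng(3)
ups = rng.normal(size=(30, 5000))
ups[:5] += 6.0                                      # five shifted (poisoned) clients
print(np.where(pca_kmeans_defense(ups))[0])
```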
