
637 Results Found

  • Article
  • Open Access
2,145 Views
12 Pages

Stylized Pairing for Robust Adversarial Defense

  • Dejian Guan,
  • Wentao Zhao and
  • Xiao Liu

18 September 2022

Recent studies show that deep neural networks (DNNs)-based object recognition algorithms overly rely on object textures rather than global object shapes, and DNNs are also vulnerable to adversarial perturbations that are barely perceptible to humans. Based on these...

  • Article
  • Open Access
9 Citations
4,569 Views
23 Pages

21 March 2023

Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to saf...

  • Review
  • Open Access
2,724 Views
25 Pages

Advances in Brain-Inspired Deep Neural Networks for Adversarial Defense

  • Ruyi Li,
  • Ming Ke,
  • Zhanguo Dong,
  • Lubin Wang,
  • Tielin Zhang,
  • Minghua Du and
  • Gang Wang

Deep convolutional neural networks (DCNNs) have achieved impressive performance in image recognition, object detection, etc. Nevertheless, they are susceptible to adversarial attacks and interferential noise. Adversarial attacks can mislead DCNN mode...

  • Article
  • Open Access
1 Citation
1,133 Views
21 Pages

18 December 2024

The Transductive Support Vector Machine (TSVM) is an effective semi-supervised learning algorithm vulnerable to adversarial sample attacks. This paper proposes a new adversarial attack method called the Multi-Stage Dual-Perturbation Attack (MSDPA), s...

  • Article
  • Open Access
4 Citations
3,733 Views
18 Pages

FLGQM: Robust Federated Learning Based on Geometric and Qualitative Metrics

  • Shangdong Liu,
  • Xi Xu,
  • Musen Wang,
  • Fei Wu,
  • Yimu Ji,
  • Chenxi Zhu and
  • Qurui Zhang

30 December 2023

Federated learning is a distributed learning method that seeks to train a shared global model by aggregating contributions from multiple clients. This method ensures that each client’s local data are not shared with others. However, research ha...

  • Article
  • Open Access
2 Citations
4,875 Views
14 Pages

Transformer-based models are driving a significant revolution in the field of machine learning at the moment. Among these innovations, vision transformers (ViTs) stand out for their application of transformer architectures to vision-related tasks. By...

  • Article
  • Open Access
1 Citation
1,623 Views
20 Pages

28 August 2024

Significant advancements in robustness against input perturbations have been realized for deep neural networks (DNNs) through the application of adversarial training techniques. However, implementing these methods for perception tasks in unmanned veh...

  • Article
  • Open Access
3 Citations
2,102 Views
17 Pages

An Adaptive Model Filtering Algorithm Based on Grubbs Test in Federated Learning

  • Wenbin Yao,
  • Bangli Pan,
  • Yingying Hou,
  • Xiaoyong Li and
  • Yamei Xia

26 April 2023

Federated learning has been popular for its ability to train centralized models while protecting clients’ data privacy. However, federated learning is highly susceptible to poisoning attacks, which can result in a decrease in model performance...

  • Article
  • Open Access
1 Citation
1,370 Views
16 Pages

20 September 2024

When dealing with non-IID data, federated learning confronts issues such as client drift and sluggish convergence. Therefore, we propose a Bidirectional Corrective Model-Contrastive Federated Adversarial Training (BCMCFAT) framework. On the client si...

  • Review
  • Open Access
664 Views
27 Pages

21 November 2025

Adversarial patch attacks have emerged as a powerful and practical threat to machine learning models in vision-based tasks. Unlike traditional perturbation-based adversarial attacks, which often require imperceptible changes to the entire input, patc...

  • Article
  • Open Access
15 Citations
4,024 Views
40 Pages

Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples

  • Kaleel Mahmood,
  • Deniz Gurevin,
  • Marten van Dijk and
  • Phuong Ha Nguyen

18 October 2021

Many defenses have recently been proposed at venues like NIPS, ICML, ICLR and CVPR. These defenses are mainly focused on mitigating white-box attacks. They do not properly examine black-box attacks. In this paper, we expand upon the analyses of these...

  • Article
  • Open Access
17 Citations
2,153 Views
26 Pages

22 September 2023

As a safety-related application, visual systems based on deep neural networks (DNNs) in modern unmanned aerial vehicles (UAVs) show adversarial vulnerability when performing real-time inference. Recently, deep ensembles with various defensive strateg...

  • Article
  • Open Access
1 Citation
1,031 Views
31 Pages

Traffic sign classification (TSC) based on deep neural networks (DNNs) plays a crucial role in the perception subsystem of autonomous driving systems (ADSs). However, studies reveal that the TSC system can make dangerous and potentially fatal errors...

  • Article
  • Open Access
9 Citations
5,724 Views
22 Pages

30 October 2023

In response to the susceptibility of federated learning, which is based on a distributed training structure, to Byzantine poisoning attacks from malicious clients, resulting in issues such as slowed or disrupted model convergence and reduced model ac...

  • Article
  • Open Access
216 Views
20 Pages

12 December 2025

This study develops a tri-level adversarial robust optimization framework for cyber–physical scheduling in smart grids, addressing the intertwined challenges of coordinated cyberattacks, defensive resource allocation, and stochastic operational...

  • Article
  • Open Access
4 Citations
2,986 Views
16 Pages

27 November 2021

The underlying mechanisms of microalgal host–pathogen interactions remain largely unknown. In this study, we applied physiological and simultaneous dual transcriptomic analysis to characterize the microalga Graesiella emersonii–Amoeboaphe...

  • Article
  • Open Access
863 Views
32 Pages

11 September 2025

In industrial automation, detecting defects in threaded components is challenging due to their complex geometry and the concealment of micro-flaws. This paper presents an integrated vision system capable of inspecting both internal and external threa...

  • Article
  • Open Access
1,157 Views
17 Pages

Federated learning offers a powerful approach for training models across decentralized datasets, enabling the creation of machine learning models that respect data privacy. However, federated learning faces significant challenges due to its vulnerabi...

  • Article
  • Open Access
11 Citations
1,966 Views
14 Pages

2 September 2023

Machine learning has been applied in continuous-variable quantum key distribution (CVQKD) systems to address the growing threat of quantum hacking attacks. However, the use of machine learning algorithms for detecting these attacks has uncovered a vu...

  • Article
  • Open Access
3 Citations
3,052 Views
14 Pages

A Mask-Based Adversarial Defense Scheme

  • Weizhen Xu,
  • Chenyi Zhang,
  • Fangzhen Zhao and
  • Liangda Fang

6 December 2022

Adversarial attacks hamper the functionality and accuracy of deep neural networks (DNNs) by introducing subtle perturbations into their inputs. In this work, we propose a new mask-based adversarial defense scheme (MAD) for DNNs to mitigate the negati...

  • Article
  • Open Access
4 Citations
4,532 Views
23 Pages

Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training

  • Anna Chistyakova,
  • Anastasia Antsiferova,
  • Maksim Khrebtov,
  • Sergey Lavrushkin,
  • Konstantin Arkhipenko,
  • Dmitriy Vatolin and
  • Denis Turdakov

The adversarial robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks to adversarial attacks, but little in-d...

  • Article
  • Open Access
4 Citations
3,364 Views
25 Pages

17 June 2023

The outstanding performance of deep neural networks (DNNs) in multiple computer vision tasks in recent years has promoted their widespread use in aerial image semantic segmentation. Nonetheless, prior research has demonstrated the high susceptibility of DNNs...

  • Article
  • Open Access
1 Citation
793 Views
20 Pages

29 April 2025

In the rapidly evolving landscape of transportation technologies, hydrogen vehicle networks integrated with photovoltaic (PV) systems represent a significant advancement toward sustainable mobility. However, the integration of such technologies also...

  • Article
  • Open Access
31 Citations
882 Views
12 Pages

Reputation systems provide a form of social control and reveal behaviour patterns in the uncertain and risk-laden environment of the open Internet. However, proposed reputation systems typically focus on the effectiveness and accuracy of reputation ma...

  • Article
  • Open Access
492 Views
36 Pages

13 November 2025

Underwater object detection is critical for marine resource utilization, ecological monitoring, and maritime security, yet it remains constrained by optical degradation, high energy consumption, and vulnerability to adversarial perturbations. To addr...

  • Article
  • Open Access
2 Citations
3,546 Views
19 Pages

Enhancing Adversarial Robustness through Stable Adversarial Training

  • Kun Yan,
  • Luyi Yang,
  • Zhanpeng Yang and
  • Wenjuan Ren

14 October 2024

Deep neural network models are vulnerable to attacks from adversarial methods, such as gradient attacks. Even small perturbations can cause significant differences in their predictions. Adversarial training (AT) aims to improve the model’s a...

  • Article
  • Open Access
1 Citation
1,843 Views
13 Pages

A3GT: An Adaptive Asynchronous Generalized Adversarial Training Method

  • Zeyi He,
  • Wanyi Liu,
  • Zheng Huang,
  • Yitian Chen and
  • Shigeng Zhang

15 October 2024

Adversarial attack methods can significantly reduce the classification accuracy of deep learning models, but research has found that although most deep learning models with defense methods still show good classification accuracy in the face of vario...

  • Article
  • Open Access
137 Views
24 Pages

19 December 2025

Remote sensing plays a critical role in environmental monitoring, land use analysis, and disaster response by enabling large-scale, data-driven observation of Earth’s surface. Image classification models are central to interpreting remote sensi...

  • Article
  • Open Access
826 Views
18 Pages

RobustQuote: Using Reference Images for Adversarial Robustness

  • Hugo Lemarchant,
  • Hong Liu and
  • Yuta Nakashima

13 May 2025

We propose RobustQuote, a novel defense framework designed to enhance the adversarial robustness of vision transformers. The core idea is to leverage trusted reference images drawn from a dynamically changing pool unknown to the attacker as contextua...

  • Article
  • Open Access
3 Citations
4,665 Views
26 Pages

MPSD: A Robust Defense Mechanism against Malicious PowerShell Scripts in Windows Systems

  • Min-Hao Wu,
  • Fu-Hau Hsu,
  • Jian-Hong Huang,
  • Keyuan Wang,
  • Yen-Yu Liu,
  • Jian-Xin Chen,
  • Hao-Jyun Wang and
  • Hao-Tsung Yang

19 September 2024

This manuscript introduces MPSD (Malicious PowerShell Script Detector), an advanced tool to protect Windows systems from malicious PowerShell commands and scripts commonly used in fileless malware attacks. These scripts are often hidden in Office doc...

  • Article
  • Open Access
12 Citations
7,371 Views
15 Pages

In this work, we propose a novel defense system against adversarial examples leveraging the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline u...

  • Article
  • Open Access
6 Citations
1,987 Views
25 Pages

9 October 2023

The cooperative active defense guidance problem for a spacecraft with active defense is investigated in this paper. An engagement between a spacecraft, an active defense vehicle, and an interceptor is considered, where the target spacecraft with acti...

  • Article
  • Open Access
1 Citation
2,407 Views
19 Pages

11 December 2023

Salient object detection (SOD) networks are vulnerable to adversarial attacks. As adversarial training is computationally expensive for SOD, existing defense methods instead adopt a noise-against-noise strategy that disrupts adversarial perturbation...

  • Article
  • Open Access
2 Citations
1,750 Views
21 Pages

CMDN: Pre-Trained Visual Representations Boost Adversarial Robustness for UAV Tracking

  • Ruilong Yu,
  • Zhewei Wu,
  • Qihe Liu,
  • Shijie Zhou,
  • Min Gou and
  • Bingchen Xiang

23 October 2024

Visual object tracking is widely adopted in unmanned aerial vehicle (UAV)-related applications, which demand reliable tracking precision and real-time performance. However, UAV trackers are highly susceptible to adversarial attacks, while research on...

  • Article
  • Open Access
105 Citations
9,234 Views
21 Pages

Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition

  • Sheeba Lal,
  • Saeed Ur Rehman,
  • Jamal Hussain Shah,
  • Talha Meraj,
  • Hafiz Tayyab Rauf,
  • Robertas Damaševičius,
  • Mazin Abed Mohammed and
  • Karrar Hameed Abdulkareem

7 June 2021

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The security susceptibility of the DL algorithms to adversarial examples has been...

  • Article
  • Open Access
505 Views
27 Pages

DRLMDS: A Deep Reinforcement Learning-Based Scheduling Algorithm for Mimic Defense Servers

  • Xiaoyun Liao,
  • Sen Yang,
  • Lijian Ouyang,
  • Rong Wu,
  • Xin Huang,
  • Shengjie Yu,
  • Jinzhou Mao,
  • Shangdong Liu and
  • Yimu Ji

14 November 2025

Mimic defense, as an emerging active defense architecture, enhances the resilience of critical systems against unknown attacks through diversified redundant executors and dynamic switching mechanisms. However, the structural heterogeneity and dynamic...

  • Article
  • Open Access
1 Citation
922 Views
28 Pages

Adversarial attacks in Natural Language Processing (NLP) present a critical challenge, particularly in sentiment analysis, where subtle input modifications can significantly alter model predictions. In search of more robust defenses against adversari...

  • Review
  • Open Access
11 Citations
7,732 Views
24 Pages

25 April 2023

Represented by reactive security defense mechanisms, cyber defense possesses a static, reactive, and deterministic nature, with overwhelmingly high costs to defend against ever-changing attackers. To change this situation, researchers have proposed m...

  • Article
  • Open Access
951 Views
20 Pages

1 October 2025

Graph neural networks (GNNs) are deep learning models that process structured graph data. By leveraging their graph/node classification and link prediction capabilities, they have been effectively applied in multiple domains such as community detect...

  • Article
  • Open Access
8 Citations
3,190 Views
20 Pages

Towards Adversarial Attacks for Clinical Document Classification

  • Nina Fatehi,
  • Qutaiba Alasad and
  • Mohammed Alawad

28 December 2022

Despite revolutionary improvements across various domains driven by recent advances in the field of Deep Learning (DL), recent studies have demonstrated that DL networks are susceptible to adversarial attacks. Such attacks are crucial in sens...

  • Article
  • Open Access
2 Citations
1,720 Views
28 Pages

11 October 2024

This paper presents a novel multi-dimensional asymmetric game model for network attack–defense decision-making, called “Catch the Cyber Thief”. The model is built upon the concept of partially observable stochastic games (POSG) and...

  • Article
  • Open Access
2,495 Views
20 Pages

1 July 2025

Advanced persistent threats (APTs) pose significant risks to critical systems and infrastructures due to their stealth and persistence. While several studies have reviewed APT characteristics and defense mechanisms, this paper goes further by proposi...

  • Article
  • Open Access
4 Citations
5,202 Views
19 Pages

10 September 2024

The security and privacy of a system are urgent issues in achieving secure and efficient learning-based systems. Recent studies have shown that these systems are susceptible to subtle adversarial perturbations applied to inputs. Although these pertur...

  • Article
  • Open Access
1 Citation
2,623 Views
19 Pages

10 May 2025

Graph neural networks (GNNs) have exhibited remarkable performance in various applications. Still, research has revealed their vulnerability to backdoor attacks, where adversaries inject malicious patterns during the training phase to establish a rel...

  • Article
  • Open Access
21 Citations
3,816 Views
18 Pages

1 February 2021

For 1990–2019, this study presents two-step GMM estimates of EU members’ demands for defense spending based on alternative spatial-weight matrices. In particular, EU spatial connectivity is tied to EU membership status, members’ contiguity, contiguit...

  • Article
  • Open Access
633 Views
29 Pages

8 September 2025

The interconnectivity of avionics systems makes it critical to incorporate functional safety and information security into airworthiness validation and maintenance protocols. This necessity arises from the demanding operational env...

  • Article
  • Open Access
472 Views
30 Pages

17 September 2025

Collective intelligence systems have demonstrated considerable potential in dynamic adversarial environments due to their distributed, self-organizing, and highly robust characteristics. The crux of an efficacious defense lies in establishing a dynam...

  • Article
  • Open Access
2 Citations
2,081 Views
18 Pages

24 November 2024

Escalating advancements in artificial intelligence (AI) have prompted significant security concerns, especially with its increasing commercialization. This necessitates research on safety measures to securely utilize AI models. Existing AI models are...

  • Article
  • Open Access
3 Citations
3,407 Views
23 Pages

20 March 2023

Deep neural networks (DNNs) have been known to be vulnerable to adversarial attacks. Adversarial training (AT) is, so far, the only method that can guarantee the robustness of DNNs to adversarial attacks. However, the robustness generalization accura...

  • Review
  • Open Access
1 Citation
2,538 Views
29 Pages

Supplying fresh produce that meets consumers’ needs necessitates production of robust fruit and vegetables. However, supply chains can struggle to deliver robust produce, especially for delicate leafy vegetables. Interacting preharvest genetic,...
