1,261 Results Found

  • Feature Paper
  • Article
  • Open Access
94 Citations
18,558 Views
19 Pages

Adversarial Attack and Defense: A Survey

  • Hongshuo Liang,
  • Erlu He,
  • Yangyang Zhao,
  • Zhe Jia and
  • Hao Li

In recent years, artificial intelligence technology represented by deep learning has achieved remarkable results in image recognition, semantic analysis, natural language processing and other fields. In particular, deep neural networks have been wide...

  • Article
  • Open Access
108 Citations
9,425 Views
21 Pages

Adversarial Attack and Defence through Adversarial Training and Feature Fusion for Diabetic Retinopathy Recognition

  • Sheeba Lal,
  • Saeed Ur Rehman,
  • Jamal Hussain Shah,
  • Talha Meraj,
  • Hafiz Tayyab Rauf,
  • Robertas Damaševičius,
  • Mazin Abed Mohammed and
  • Karrar Hameed Abdulkareem

7 June 2021

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The security susceptibility of the DL algorithms to adversarial examples has been...

  • Article
  • Open Access
871 Views
13 Pages

20 September 2025

Adversarial attacks against deep learning models achieve high performance in white-box settings but often exhibit low transferability in black-box scenarios, especially against defended models. In this work, we propose Multi-Path Random Restart (MPRR...

  • Article
  • Open Access
38 Citations
5,261 Views
20 Pages

29 October 2021

Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to adversarial example attacks, which cause security problems. The adversarial attack can make a...

  • Article
  • Open Access
7 Citations
3,605 Views
15 Pages

A Hybrid Adversarial Attack for Different Application Scenarios

  • Xiaohu Du,
  • Jie Yu,
  • Zibo Yi,
  • Shasha Li,
  • Jun Ma,
  • Yusong Tan and
  • Qinbo Wu

21 May 2020

Adversarial attacks against natural language have been a hot topic in the field of artificial intelligence security in recent years. This work mainly studies methods for generating adversarial examples. The purpose is to better deal...

  • Article
  • Open Access
2 Citations
2,236 Views
20 Pages

Deep neural networks (DNNs) have shown remarkable performance across a wide range of fields, including image recognition, natural language processing, and speech processing. However, recent studies indicate that DNNs are highly vulnerable to well-cra...

  • Article
  • Open Access
5 Citations
3,286 Views
17 Pages

31 July 2023

While Machine Learning has become the holy grail of modern-day computing, it has many security flaws that have yet to be addressed and resolved. Adversarial attacks are one of these security flaws, in which an attacker appends noise to data samples t...

  • Article
  • Open Access
2 Citations
3,191 Views
23 Pages

The problem of adversarial attacks on classifiers, mainly those implemented using deep neural networks, is considered. This problem is analyzed with a generalization to the case of any classifiers synthesized by machine learning methods. The imperfe...

  • Review
  • Open Access
354 Citations
37,314 Views
29 Pages

4 March 2019

In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields. However, artificial intelligence systems are vulnerable to adversarial attacks, which li...

  • Article
  • Open Access
15 Citations
2,908 Views
23 Pages

Boosting Adversarial Transferability with Shallow-Feature Attack on SAR Images

  • Gengyou Lin,
  • Zhisong Pan,
  • Xingyu Zhou,
  • Yexin Duan,
  • Wei Bai,
  • Dazhi Zhan,
  • Leqian Zhu,
  • Gaoqiang Zhao and
  • Tao Li

22 May 2023

Adversarial example generation on Synthetic Aperture Radar (SAR) images is an important research area that could have significant impacts on security and environmental monitoring. However, most current adversarial attack methods on SAR images are des...

  • Article
  • Open Access
1 Citation
3,288 Views
13 Pages

14 October 2021

Despite deep neural networks (DNNs) having achieved impressive performance in various domains, it has been revealed that DNNs are vulnerable in the face of adversarial examples, which are maliciously crafted by adding human-imperceptible perturbation...

  • Review
  • Open Access
49 Citations
8,541 Views
38 Pages

Adversarial Attack and Defense Strategies of Speaker Recognition Systems: A Survey

  • Hao Tan,
  • Le Wang,
  • Huan Zhang,
  • Junjian Zhang,
  • Muhammad Shafiq and
  • Zhaoquan Gu

Speaker recognition is the task of identifying a speaker from multiple audio recordings. Recently, advances in deep learning have considerably boosted the development of speech signal processing techniques. Speaker or speech recognition has been widely adopte...

  • Article
  • Open Access
2 Citations
2,199 Views
14 Pages

Efficient Adversarial Attack Based on Moment Estimation and Lookahead Gradient

  • Dian Hong,
  • Deng Chen,
  • Yanduo Zhang,
  • Huabing Zhou,
  • Liang Xie,
  • Jianping Ju and
  • Jianyin Tang

Adversarial example generation is a technique that involves perturbing inputs with imperceptible noise to induce misclassifications in neural networks, serving as a means to assess the robustness of such models. Among the adversarial attack algorithm...
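The perturbation idea described in this abstract (the gradient-sign family of attacks that moment-estimation methods build on) can be sketched on a toy logistic classifier. This is a minimal illustrative sketch, not the paper's algorithm: the model, weights, and epsilon below are assumptions chosen so the effect is visible.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """FGSM-style step: move x by eps in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)
    # For logistic cross-entropy, d(loss)/dx = (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy linear model that classifies x correctly before the attack.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([0.2, -0.1, 0.3])   # true label 1; w @ x + b = 0.55 > 0

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.3)

print(sigmoid(w @ x + b) > 0.5)       # True: original input classified correctly
print(sigmoid(w @ x_adv + b) > 0.5)   # False: small perturbation flips the prediction
```

A per-coordinate perturbation of only 0.3 flips the decision, illustrating why imperceptible noise can defeat otherwise accurate models.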

  • Article
  • Open Access
7 Citations
4,099 Views
16 Pages

29 January 2022

To protect images from the tampering of deepfake, adversarial examples can be made to replace the original images by distorting the output of the deepfake model and disrupting its work. Current studies lack generalizability in that they simply focus...

  • Article
  • Open Access
20 Citations
6,905 Views
18 Pages

30 March 2023

SQL injection is a highly detrimental web attack technique that can result in significant data leakage and compromise system integrity. To counteract the harm caused by such attacks, researchers have devoted much attention to the examination of SQL i...

  • Article
  • Open Access
5 Citations
3,200 Views
15 Pages

ADSAttack: An Adversarial Attack Algorithm via Searching Adversarial Distribution in Latent Space

  • Haobo Wang,
  • Chenxi Zhu,
  • Yangjie Cao,
  • Yan Zhuang,
  • Jie Li and
  • Xianfu Chen

Deep neural networks are susceptible to interference from deliberately crafted noise, which can lead to incorrect classification results. Existing approaches make less use of latent space information and conduct pixel-domain modification in the input...

  • Article
  • Open Access
2 Citations
2,034 Views
20 Pages

Dynamic Programming-Based White Box Adversarial Attack for Deep Neural Networks

  • Swati Aggarwal,
  • Anshul Mittal,
  • Sanchit Aggarwal and
  • Anshul Kumar Singh

24 July 2024

Recent studies have exposed the vulnerabilities of deep neural networks to some carefully perturbed input data. We propose a novel untargeted white box adversarial attack, the dynamic programming-based sub-pixel score method (SPSM) attack (DPSPSM), w...

  • Article
  • Open Access
10 Citations
4,024 Views
20 Pages

AdvRain: Adversarial Raindrops to Attack Camera-Based Smart Vision Systems

  • Amira Guesmi,
  • Muhammad Abdullah Hanif and
  • Muhammad Shafique

28 November 2023

Vision-based perception modules are increasingly deployed in many applications, especially autonomous vehicles and intelligent robots. These modules are being used to acquire information about the surroundings and identify obstacles. Hence, accurate...

  • Review
  • Open Access
2,460 Views
34 Pages

29 October 2025

Deep neural networks have demonstrated remarkable performance in object detection tasks; however, they remain highly susceptible to adversarial attacks. Previous surveys in computer vision have provided considerable coverage of physical adversarial a...

  • Article
  • Open Access
3 Citations
2,715 Views
18 Pages

An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning

  • Zhiyu Chen,
  • Jianyu Ding,
  • Fei Wu,
  • Chi Zhang,
  • Yiming Sun,
  • Jing Sun,
  • Shangdong Liu and
  • Yimu Ji

27 September 2022

Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic based on the natural hidden nature of deep n...

  • Article
  • Open Access
2 Citations
2,211 Views
26 Pages

4 March 2024

This study introduces a deep-learning-based framework for detecting adversarial attacks in CT image segmentation within medical imaging. The proposed methodology includes analyzing features from various layers, particularly focusing on the first laye...

  • Article
  • Open Access
3,915 Views
12 Pages

Adversarial Attacks Impact on the Neural Network Performance and Visual Perception of Data under Attack

  • Yakov Usoltsev,
  • Balzhit Lodonova,
  • Alexander Shelupanov,
  • Anton Konev and
  • Evgeny Kostyuchenko

5 February 2022

Machine learning algorithms based on neural networks are vulnerable to adversarial attacks. The use of attacks against authentication systems greatly reduces the accuracy of such a system, despite the complexity of generating an adversarial example. A...

  • Article
  • Open Access
2 Citations
2,983 Views
37 Pages

Neural-network-based models have made considerable progress in many computer vision areas over recent years. However, many works have exposed their vulnerability to malicious input data manipulation—that is, to adversarial attacks. Although man...

  • Article
  • Open Access
4 Citations
3,623 Views
13 Pages

22 January 2023

The research on image-classification-adversarial attacks is crucial in the realm of artificial intelligence (AI) security. Most of the image-classification-adversarial attack methods are for white-box settings, demanding target model gradients and ne...

  • Article
  • Open Access
1,229 Views
22 Pages

9 November 2024

Deep learning has dramatically advanced computer vision tasks, including person re-identification (re-ID), substantially improving matching individuals across diverse camera views. However, person re-ID systems remain vulnerable to adversarial attack...

  • Article
  • Open Access
4 Citations
2,358 Views
17 Pages

Evading Logits-Based Detections to Audio Adversarial Examples by Logits-Traction Attack

  • Songshen Han,
  • Kaiyong Xu,
  • Songhui Guo,
  • Miao Yu and
  • Bo Yang

19 September 2022

Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio. Thorough studies on the universal...

  • Article
  • Open Access
3 Citations
1,989 Views
20 Pages

25 November 2024

The study of LiDAR-based 3D object detection and its robustness under adversarial attacks has achieved great progress. However, existing adversarial attack methods mainly focus on the targeted object, which destroys the integrity of the object and ma...

  • Article
  • Open Access
6 Citations
2,482 Views
19 Pages

SAR-PATT: A Physical Adversarial Attack for SAR Image Automatic Target Recognition

  • Binyan Luo,
  • Hang Cao,
  • Jiahao Cui,
  • Xun Lv,
  • Jinqiang He,
  • Haifeng Li and
  • Chengli Peng

25 December 2024

Deep neural network-based synthetic aperture radar (SAR) automatic target recognition (ATR) systems are susceptible to attack by adversarial examples, which leads to misclassification by the SAR ATR system, resulting in theoretical model robustness p...

  • Article
  • Open Access
2 Citations
3,127 Views
22 Pages

A Novel Dataset and Approach for Adversarial Attack Detection in Connected and Automated Vehicles

  • Tae Hoon Kim,
  • Moez Krichen,
  • Meznah A. Alamro and
  • Gabreil Avelino Sampedro

Adversarial attacks have received much attention as communication network applications rise in popularity. Connected and Automated Vehicles (CAVs) must be protected against adversarial attacks to ensure passenger and vehicle safety on the road. Never...

  • Article
  • Open Access
1,552 Views
14 Pages

30 April 2024

Existing textual attacks mostly perturb keywords in sentences to generate adversarial examples by relying on the prediction confidence of victim models. In practice, attackers can only access the prediction label, meaning that the victim model can ea...

  • Review
  • Open Access
40 Citations
13,431 Views
41 Pages

A Comprehensive Review and Analysis of Deep Learning-Based Medical Image Adversarial Attack and Defense

  • Gladys W. Muoka,
  • Ding Yi,
  • Chiagoziem C. Ukwuoma,
  • Albert Mutale,
  • Chukwuebuka J. Ejiyi,
  • Asha Khamis Mzee,
  • Emmanuel S. A. Gyarteng,
  • Ali Alqahtani and
  • Mugahed A. Al-antari

13 October 2023

Deep learning approaches have demonstrated great achievements in the field of computer-aided medical image analysis, improving the precision of diagnosis across a range of medical disorders. These developments have not, however, been immune to the ap...

  • Article
  • Open Access
9 Citations
5,950 Views
20 Pages

12 September 2021

Voice Processing Systems (VPSes), now widely deployed, have become deeply involved in people’s daily lives, helping drive the car, unlock the smartphone, make online purchases, etc. Unfortunately, recent research has shown that those systems based on...

  • Article
  • Open Access
9 Citations
3,468 Views
24 Pages

27 February 2023

Aerial image semantic segmentation based on convolutional neural networks (CNNs) has made significant progress in recent years. Nevertheless, their vulnerability to adversarial example attacks could not be neglected. Existing studies typically focus on...

  • Article
  • Open Access
5 Citations
2,805 Views
14 Pages

11 December 2024

Adversarial attacks targeting industrial control systems, such as the Maroochy wastewater system attack and the Stuxnet worm attack, have caused significant damage to related facilities. To enhance the security of industrial control systems, recent r...

  • Article
  • Open Access
2 Citations
2,157 Views
16 Pages

21 August 2023

Over the past decade, Convolutional Neural Networks (CNNs) have been extensively deployed in security-critical areas; however, the security of CNN models is threatened by adversarial attacks. Decision-based adversarial attacks, wherein an attacker re...

  • Article
  • Open Access
17 Citations
4,929 Views
23 Pages

7 November 2023

The perception system is a safety-critical component that directly impacts the overall safety of autonomous driving systems (ADSs). It is imperative to ensure the robustness of the deep-learning model used in the perception system. However, studies h...

  • Article
  • Open Access
2,703 Views
23 Pages

A Local Adversarial Attack with a Maximum Aggregated Region Sparseness Strategy for 3D Objects

  • Ling Zhao,
  • Xun Lv,
  • Lili Zhu,
  • Binyan Luo,
  • Hang Cao,
  • Jiahao Cui,
  • Haifeng Li and
  • Jian Peng

13 January 2025

The increasing reliance on deep neural network-based object detection models in various applications has raised significant security concerns due to their vulnerability to adversarial attacks. In physical 3D environments, existing adversarial attacks...

  • Article
  • Open Access
1 Citation
1,235 Views
21 Pages

18 December 2024

The Transductive Support Vector Machine (TSVM) is an effective semi-supervised learning algorithm, but it is vulnerable to adversarial sample attacks. This paper proposes a new adversarial attack method called the Multi-Stage Dual-Perturbation Attack (MSDPA), s...

  • Article
  • Open Access
7 Citations
2,943 Views
15 Pages

Enhance Domain-Invariant Transferability of Adversarial Examples via Distance Metric Attack

  • Jin Zhang,
  • Wenyu Peng,
  • Ruxin Wang,
  • Yu Lin,
  • Wei Zhou and
  • Ge Lan

11 April 2022

A general foundation of fooling a neural network without knowing the details (i.e., black-box attack) is the attack transferability of adversarial examples across different models. Many works have been devoted to enhancing the task-specific transfera...

  • Feature Paper
  • Article
  • Open Access
3 Citations
2,557 Views
21 Pages

29 February 2024

Semi-supervised learning (SSL) models, integrating labeled and unlabeled data, have gained prominence in vision-based tasks, yet their susceptibility to adversarial attacks remains underexplored. This paper unveils the vulnerability of SSL models to...

  • Article
  • Open Access
26 Citations
4,982 Views
19 Pages

In recent years, Deep Neural Networks (DNNs) have become popular in many disciplines such as Computer Vision (CV), and the evolution of hardware has helped researchers to develop many powerful Deep Learning (DL) models to deal with several problems...

  • Article
  • Open Access
3 Citations
4,785 Views
23 Pages

AT-BOD: An Adversarial Attack on Fool DNN-Based Blackbox Object Detection Models

  • Ilham A. Elaalami,
  • Sunday O. Olatunji and
  • Rachid M. Zagrouba

15 February 2022

Object recognition is a fundamental concept in computer vision. Object detection models have recently played a vital role in various applications, including real-time and safety-critical systems such as camera surveillance and self-driving cars. Scie...

  • Article
  • Open Access
1 Citation
2,133 Views
12 Pages

Deep Neural Networks (DNNs) used for image classification are vulnerable to adversarial examples, which are images intentionally generated to cause a deep learning model to predict an incorrect output. Various defense methods have been proposed t...

  • Article
  • Open Access
4 Citations
3,137 Views
23 Pages

31 January 2025

Through advances in AI-based computer vision technology, the performance of modern image classification models has surpassed human perception, making them valuable in various fields. However, adversarial attacks, which involve small changes to images...

  • Article
  • Open Access
1 Citation
2,128 Views
16 Pages

To address the challenges of black-box video adversarial attacks, such as excessive query times and suboptimal attack performance due to the lack of result feedback during the attack process, we propose a reinforcement learning-based sparse adversari...

  • Article
  • Open Access
28 Citations
3,863 Views
16 Pages

29 November 2020

Biometric-based authentication is widely deployed on multimedia systems currently; however, biometric systems are vulnerable to image-level attacks for impersonation. Reconstruction attack (RA) and presentation attack (PA) are two typical instances f...

  • Article
  • Open Access
2 Citations
2,296 Views
20 Pages

Transferable Targeted Adversarial Attack on Synthetic Aperture Radar (SAR) Image Recognition

  • Sheng Zheng,
  • Dongshen Han,
  • Chang Lu,
  • Chaowen Hou,
  • Yanwen Han,
  • Xinhong Hao and
  • Chaoning Zhang

3 January 2025

Deep learning models have been widely applied to synthetic aperture radar (SAR) target recognition, offering end-to-end feature extraction that significantly enhances recognition performance. However, recent studies show that optical image recognitio...

  • Article
  • Open Access
8 Citations
7,502 Views
17 Pages

Several recent studies have shown that artificial intelligence (AI) systems can malfunction due to intentionally manipulated data coming through normal channels. Such kinds of manipulated data are called adversarial examples. Adversarial examples can...

  • Article
  • Open Access
9 Citations
2,888 Views
12 Pages

17 February 2023

The synthetic aperture radar (SAR) image ship detection system needs to adapt to increasingly complex real-world environments, and the requirements for the stability of the detection system continue to increase. Adversarial attacks deliberately add...

  • Technical Note
  • Open Access
10 Citations
3,245 Views
17 Pages

CamoNet: A Target Camouflage Network for Remote Sensing Images Based on Adversarial Attack

  • Yue Zhou,
  • Wanghan Jiang,
  • Xue Jiang,
  • Lin Chen and
  • Xingzhao Liu

27 October 2023

Object detection algorithms based on convolutional neural networks (CNNs) have achieved remarkable success in remote sensing images (RSIs), such as aircraft and ship detection, which play a vital role in military and civilian fields. However, CNNs ar...