
65 Results Found

  • Article
  • Open Access
9 Citations
6,129 Views
23 Pages

Deep Learning-Based Spread-Spectrum FGSM for Underwater Communication

  • Zeyad A. H. Qasem,
  • Hamada Esmaiel,
  • Haixin Sun,
  • Jie Qi and
  • Junfeng Wang

28 October 2020

The limited available channel bandwidth and the lack of a sustainable energy source for battery-fed sensor nodes are the main challenges in underwater acoustic communication. Unlike terrestrial communication, using multi-in...

  • Article
  • Open Access
34 Citations
4,128 Views
18 Pages

9 January 2023

Plant diseases have received widespread attention, and deep learning has been applied to their detection. Deep neural networks (DNNs) have achieved outstanding results on plant disease tasks. However, DNNs are very fragile, and adversarial attacks in...

  • Proceeding Paper
  • Open Access
868 Views
7 Pages

Evaluation of Modified FGSM-Based Data Augmentation Method for Convolutional Neural Network-Based Image Classification

  • Paulo Monteiro de Carvalho Monson,
  • Vinicius Augusto Dare de Almeida,
  • Gabriel Augusto David,
  • Pedro Oliveira Conceição Junior and
  • Fabio Romano Lofrano Dotto

26 November 2024

Computer vision applications demand a significant amount of data for effective training and inference. However, data insufficiency often arises for multiple reasons, resulting in computational models whos...

  • Article
  • Open Access
5 Citations
2,448 Views
19 Pages

Intelligent Transmit Antenna Selection Schemes for High-Rate Fully Generalized Spatial Modulation

  • Hindavi Kishor Jadhav,
  • Vinoth Babu Kumaravelu,
  • Arthi Murugadass,
  • Agbotiname Lucky Imoize,
  • Poongundran Selvaprabhu and
  • Arunkumar Chandrasekhar

21 August 2023

The sixth-generation (6G) network is expected to transmit significantly more data at much faster rates than existing networks while meeting stringent energy efficiency (EE) targets. High-rate spatial modulation (SM) methods can be used to deal with...

  • Article
  • Open Access
6 Citations
2,613 Views
18 Pages

20 April 2023

In this work, deep learning (DL)-based transmit antenna selection (TAS) strategies are employed to enhance the average bit error rate (ABER) and energy efficiency (EE) performance of a spectrally efficient fully generalized spatial modulation (FGSM)...

  • Article
  • Open Access
50 Citations
4,535 Views
17 Pages

7 May 2021

The COVID-19 pandemic requires the rapid isolation of infected patients. Thus, high-sensitivity radiology imaging could be a key diagnostic technique alongside the polymerase chain reaction approach. Deep learning algorithms are proposed in se...

  • Article
  • Open Access
22 Citations
4,705 Views
16 Pages

Enhanced Fully Generalized Spatial Modulation for the Internet of Underwater Things

  • Zeyad A. H. Qasem,
  • Hamada Esmaiel,
  • Haixin Sun,
  • Junfeng Wang,
  • Yongchun Miao and
  • Sheraz Anwar

28 March 2019

A full design of the Internet of Underwater Things (IoUT) with a high data rate is one of the greatest challenges in underwater communication due to the lack of a sustainable power source for the batteries of sensor nodes, electromagn...

  • Proceeding Paper
  • Open Access
1,026 Views
10 Pages

14 October 2025

The rise of IoT devices has led to significant advancements but also new security challenges. This paper assesses the performance of various machine learning (ML) models—Decision Trees, Naïve Bayes, Support Vector Machines (SVMs), and a de...

  • Article
  • Open Access
5 Citations
1,793 Views
18 Pages

As AI becomes indispensable in healthcare, its vulnerability to adversarial attacks demands serious attention. Even minimal changes to the input data can mislead Deep Learning (DL) models, leading to critical errors in diagnosis and endangering patie...

  • Article
  • Open Access
3 Citations
1,419 Views
16 Pages

One Possible Path Towards a More Robust Task of Traffic Sign Classification in Autonomous Vehicles Using Autoencoders

  • Ivan Martinović,
  • Tomás de Jesús Mateo Sanguino,
  • Jovana Jovanović,
  • Mihailo Jovanović and
  • Milena Djukanović

The increasing deployment of autonomous vehicles (AVs) has exposed critical vulnerabilities in traffic sign classification systems, particularly against adversarial attacks that can compromise safety. This study proposes a dual-purpose defense framew...

  • Article
  • Open Access
27 Citations
10,273 Views
23 Pages

This study evaluated the generation of adversarial examples and the subsequent robustness of an image classification model. The attacks were performed using the Fast Gradient Sign method, the Projected Gradient Descent method, and the Carlini and Wag...
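
For readers scanning these results, the Fast Gradient Sign Method named above perturbs an input one step along the sign of the loss gradient. The following is a minimal, generic sketch in PyTorch, not the code of any paper listed here; the model, loss, and epsilon value are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Generic one-step FGSM: move x by epsilon along the sign of
        the loss gradient (white-box attack on an image classifier)."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        # Step along the gradient sign and clamp to the valid pixel range.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()

Projected Gradient Descent, also named above, can be viewed as repeating this step several times with a smaller step size and a projection back into the epsilon-ball after each step.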

  • Article
  • Open Access
2 Citations
2,697 Views
13 Pages

Weakness Evaluation on In-Vehicle Violence Detection: An Assessment of X3D, C2D and I3D against FGSM and PGD

  • Flávio Santos,
  • Dalila Durães,
  • Francisco S. Marcondes,
  • Niklas Hammerschmidt,
  • José Machado and
  • Paulo Novais

When constructing a deep learning model for recognizing violence inside a vehicle, it is crucial to consider several aspects. One is the computational limitations; the other is the choice of deep learning model architecture. Nevertheless, to...

  • Article
  • Open Access
3 Citations
1,941 Views
13 Pages

Attacking Robot Vision Models Efficiently Based on Improved Fast Gradient Sign Method

  • Dian Hong,
  • Deng Chen,
  • Yanduo Zhang,
  • Huabing Zhou and
  • Liang Xie

2 February 2024

The robot vision model is the basis for a robot to perceive and understand its environment and make correct decisions. However, the security and stability of robot vision models are seriously threatened by adversarial examples. In this study, we pr...

  • Article
  • Open Access
3 Citations
2,738 Views
23 Pages

2 June 2024

Deep learning has shown significant advantages in Automatic Dependent Surveillance-Broadcast (ADS-B) anomaly detection, but it is known for its susceptibility to adversarial examples, which make anomaly detection models non-robust. In this study, we p...

  • Article
  • Open Access
5 Citations
2,263 Views
16 Pages

18 June 2024

The safety and robustness of convolutional neural networks (CNNs) have raised increasing concerns, especially in safety-critical areas, such as medical applications. Although CNNs are efficient in image classification, their predictions are often sen...

  • Article
  • Open Access
41 Citations
5,878 Views
28 Pages

25 June 2022

The ever-evolving cybersecurity environment has given rise to sophisticated adversaries who constantly explore new ways to attack cyberinfrastructure. Recently, the use of deep learning-based intrusion detection systems has been on the rise. This ris...

  • Article
  • Open Access
4 Citations
2,482 Views
14 Pages

10 March 2025

Incorporating Artificial Intelligence (AI) in healthcare has transformed disease diagnosis and treatment by offering unprecedented benefits. However, it has also revealed critical cybersecurity vulnerabilities in Deep Learning (DL) models, which rais...

  • Article
  • Open Access
820 Views
13 Pages

20 September 2025

Adversarial attacks against deep learning models achieve high performance in white-box settings but often exhibit low transferability in black-box scenarios, especially against defended models. In this work, we propose Multi-Path Random Restart (MPRR...

  • Article
  • Open Access
4 Citations
2,353 Views
15 Pages

19 October 2022

Vision Transformer (ViT) models have been widely used since they were proposed, and their performance on large-scale datasets has surpassed that of CNN models. In order to deploy ViT models safely in practical application scenarios, their robustness needs to be investiga...

  • Article
  • Open Access
3,043 Views
24 Pages

15 August 2025

Machine learning (ML) has greatly improved intrusion detection in enterprise networks. However, ML models remain vulnerable to adversarial attacks, where small input changes cause misclassification. This study evaluates the robustness of a Random For...

  • Article
  • Open Access
1,303 Views
16 Pages

5 November 2024

Achieving a high attack success rate (ASR) with minimal perturbation distortion has consistently been a prominent and challenging research topic in the field of adversarial examples. In this paper, a novel method to optimize communication signal adversari...

  • Article
  • Open Access
7 Citations
5,539 Views
25 Pages

20 November 2023

Smart city applications that request sensitive user information necessitate a comprehensive data privacy solution. Federated learning (FL), also known as privacy by design, is a new paradigm in machine learning (ML). However, FL models are susceptibl...

  • Article
  • Open Access
11 Citations
4,816 Views
25 Pages

6 November 2024

Kolmogorov–Arnold Networks (KANs) are a novel class of neural network architectures based on the Kolmogorov–Arnold representation theorem, which has demonstrated potential advantages in accuracy and interpretability over Multilayer Percep...

  • Article
  • Open Access
3 Citations
1,984 Views
15 Pages

9 August 2023

Although the spectrum sensing algorithms based on deep learning have achieved remarkable detection performance, the sensing performance is easily affected by adversarial attacks due to the fragility of neural networks. Even slight adversarial perturb...

  • Article
  • Open Access
2 Citations
1,115 Views
25 Pages

Driver Distraction Detection in Extreme Conditions Using Kolmogorov–Arnold Networks

  • János Hollósi,
  • Gábor Kovács,
  • Mykola Sysyn,
  • Dmytro Kurhan,
  • Szabolcs Fischer and
  • Viktor Nagy

Driver distraction can have severe safety consequences, particularly in public transportation. This paper presents a novel approach for detecting bus driver actions, such as mobile phone usage and interactions with passengers, using Kolmogorov–Arnold...

  • Proceeding Paper
  • Open Access
666 Views
11 Pages

Resilience of UNet-Based Models Under Adversarial Conditions in Medical Image Segmentation

  • Dina Koishiyeva,
  • Jeong Won Kang,
  • Teodor Iliev,
  • Alibek Bissembayev and
  • Assel Mukasheva

Adversarial modifications of input data can degrade the stability of deep neural networks in medical image segmentation. This study evaluates the robustness of UNet and Att-UNet++ architectures using the NuInsSeg dataset with annotated nuclear region...

  • Article
  • Open Access
6 Citations
2,753 Views
13 Pages

Deep-learning-assisted medical diagnosis has brought revolutionary innovations to medicine. Breast cancer is a great threat to women’s health, and deep-learning-assisted diagnosis of breast cancer pathology images can save manpower and improve...

  • Article
  • Open Access
1 Citation
1,419 Views
16 Pages

20 September 2024

When dealing with non-IID data, federated learning confronts issues such as client drift and sluggish convergence. Therefore, we propose a Bidirectional Corrective Model-Contrastive Federated Adversarial Training (BCMCFAT) framework. On the client si...

  • Article
  • Open Access
2 Citations
3,205 Views
26 Pages

13 July 2021

A repeatable and deterministic non-random weight initialization method in convolutional layers of neural networks is examined with the Fast Gradient Sign Method (FSGM). Using the FSGM approach as a technique to measure the initialization effect with con...

  • Article
  • Open Access
1 Citation
1,841 Views
18 Pages

19 September 2024

Deep learning models excel in interpreting the exponentially growing amounts of remote sensing data; however, they are susceptible to deception and spoofing by adversarial samples, posing catastrophic threats. The existing methods to combat adversari...

  • Article
  • Open Access
4 Citations
2,653 Views
15 Pages

20 September 2022

Deep neural networks (DNNs) have attracted extensive attention because of their excellent performance in many areas; however, DNNs are vulnerable to adversarial examples. In this paper, we propose a similarity metric called inner-class adjusted cosin...

  • Article
  • Open Access
2 Citations
1,711 Views
16 Pages

5 October 2023

In this paper, we propose an advanced method for adversarial training that focuses on leveraging the underlying structure of adversarial perturbation distributions. Unlike conventional adversarial training techniques that consider adversarial example...
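
As a point of reference for the conventional adversarial training this abstract contrasts itself with, a baseline loop simply trains on adversarial examples generated on the fly. The sketch below is a generic illustration under that assumption, not the method of any paper listed here; the model, optimizer, data loader, and the 50/50 clean/adversarial loss mix are assumptions, and it reuses the fgsm_attack helper sketched earlier in these results.

    import torch.nn.functional as F

    def adversarial_training_epoch(model, loader, optimizer, epsilon=0.03):
        """Conventional adversarial training baseline: each batch is
        attacked with one-step FGSM and the model is trained on a mix
        of clean and adversarial losses (generic sketch)."""
        model.train()
        for x, y in loader:
            x_adv = fgsm_attack(model, x, y, epsilon)  # helper sketched above
            optimizer.zero_grad()
            # Equal weighting of clean and adversarial loss is a common baseline choice.
            loss = 0.5 * F.cross_entropy(model(x), y) \
                 + 0.5 * F.cross_entropy(model(x_adv), y)
            loss.backward()
            optimizer.step()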

  • Article
  • Open Access
635 Views
22 Pages

VexNet: Vector-Composed Feature-Oriented Neural Network

  • Xiao Du,
  • Ziyou Guo,
  • Zihao Li,
  • Yang Cao,
  • Xing Chen and
  • Tieru Wu

Extracting robust features against geometric transformations and adversarial perturbations remains a critical challenge in deep learning. Although capsule networks exhibit resilience through vector-encapsulated features and dynamic routing, they suff...

  • Article
  • Open Access
2 Citations
4,055 Views
22 Pages

29 April 2024

In the rapidly evolving landscape of cybersecurity, model extraction attacks pose a significant challenge, undermining the integrity of machine learning models by enabling adversaries to replicate proprietary algorithms without direct access. This pa...

  • Article
  • Open Access
13 Citations
2,989 Views
14 Pages

The coronavirus disease 2019 (COVID-19) rapidly spread around the world, and resulted in a global pandemic. Applying artificial intelligence to COVID-19 research can produce very exciting results. However, most research has focused on applying AI tec...

  • Article
  • Open Access
4 Citations
2,710 Views
23 Pages

13 February 2025

The prevalence of wildfires presents significant challenges for fire detection systems, particularly in differentiating fire from complex backgrounds and maintaining detection reliability under diverse environmental conditions. It is crucial to addre...

  • Article
  • Open Access
6 Citations
3,253 Views
14 Pages

Novel Exploit Feature-Map-Based Detection of Adversarial Attacks

  • Ali Saeed Almuflih,
  • Dhairya Vyas,
  • Viral V. Kapdia,
  • Mohamed Rafik Noor Mohamed Qureshi,
  • Karishma Mohamed Rafik Qureshi and
  • Elaf Abdullah Makkawi

20 May 2022

In machine learning (ML), an adversarial attack (targeted or untargeted) in the presence of noise disturbs the model's predictions. This research suggests that adversarial perturbations on images lead to noise in the features constructed by any networks....

  • Article
  • Open Access
2 Citations
2,154 Views
14 Pages

Efficient Adversarial Attack Based on Moment Estimation and Lookahead Gradient

  • Dian Hong,
  • Deng Chen,
  • Yanduo Zhang,
  • Huabing Zhou,
  • Liang Xie,
  • Jianping Ju and
  • Jianyin Tang

Adversarial example generation is a technique that involves perturbing inputs with imperceptible noise to induce misclassifications in neural networks, serving as a means to assess the robustness of such models. Among the adversarial attack algorithm...

  • Article
  • Open Access
1 Citation
2,043 Views
15 Pages

27 January 2023

Fine-grained recognition has applications in many fields and aims to identify targets from subcategories. This is a highly challenging task due to the minor differences between subcategories. Both modal missing and adversarial sample attacks are...

  • Article
  • Open Access
17 Citations
5,284 Views
12 Pages

Image Denoising Based on GAN with Optimization Algorithm

  • Min-Ling Zhu,
  • Liang-Liang Zhao and
  • Li Xiao

Image denoising has been a knotty issue in the computer vision field, although developing deep learning technology has brought remarkable improvements. Denoising networks based on deep learning still face some proble...

  • Article
  • Open Access
1 Citation
2,091 Views
12 Pages

Deep Neural Networks (DNNs) used for image classification are vulnerable to adversarial examples, which are images intentionally generated to make a deep learning model predict an incorrect output. Various defense methods have been proposed t...

  • Article
  • Open Access
8 Citations
15,678 Views
16 Pages

25 January 2025

The rise of deepfakes—synthetic media generated using artificial intelligence—threatens digital content authenticity, facilitating misinformation and manipulation. However, deepfakes can also depict real or entirely fictitious individuals...

  • Article
  • Open Access
1,445 Views
21 Pages

With the evolution of 5G edge computing networks, privacy-aware applications are gaining significant attention due to their decentralised processing capabilities. However, these networks face substantial challenges to ensure privacy and security, spe...

  • Article
  • Open Access
5 Citations
4,801 Views
17 Pages

Roadmap of Adversarial Machine Learning in Internet of Things-Enabled Security Systems

  • Yasmine Harbi,
  • Khedidja Medani,
  • Chirihane Gherbi,
  • Zibouda Aliouat and
  • Saad Harous

9 August 2024

Machine learning (ML) represents one of the main pillars of the current digital era, specifically in modern real-world applications. The Internet of Things (IoT) technology is foundational in developing advanced intelligent systems. The convergence o...

  • Article
  • Open Access
39 Citations
6,607 Views
14 Pages

Deep neural networks have been widely used in pattern recognition and speech processing, but their vulnerability to adversarial attacks has also been widely demonstrated. These attacks perform unstructured pixel-wise perturbations to fool the classifier, whi...

  • Article
  • Open Access
706 Views
20 Pages

30 April 2025

Forests play a vital role in maintaining ecological balance, making accurate forest monitoring technologies essential. Remote sensing point cloud data always capture distinctive geometric features of forests, including the cylindrical symmetry of tre...

  • Article
  • Open Access
1 Citation
1,496 Views
24 Pages

To improve the robustness of intrusion detection systems constructed using deep learning models, a method based on an auxiliary adversarial training WGAN (AuxAtWGAN) is proposed from the defender’s perspective. First, one-dimensional traffic da...

  • Article
  • Open Access
2,461 Views
26 Pages

Image Segmentation Framework for Detecting Adversarial Attacks for Autonomous Driving Cars

  • Ahmad Fakhr Aldeen Sattout,
  • Ali Chehab,
  • Ammar Mohanna and
  • Razane Tajeddine

27 January 2025

The widespread deployment of deep neural networks (DNNs) in critical real-time applications has spurred significant research into their security and robustness. A key vulnerability identified is that DNN decisions can be maliciously altered by introd...

  • Article
  • Open Access
1,778 Views
34 Pages

Adversarial Attacks on Supervised Energy-Based Anomaly Detection in Clean Water Systems

  • Naghmeh Moradpoor,
  • Ezra Abah,
  • Andres Robles-Durazno and
  • Leandros Maglaras

Critical National Infrastructure includes large networks such as telecommunications, transportation, health services, police, nuclear power plants, and utilities like clean water, gas, and electricity. The protection of these infrastructures is cruci...

  • Article
  • Open Access
1,353 Views
21 Pages

17 November 2025

With the increasing sophistication of Artificial Intelligence (AI), traditional digital steganography methods face a growing risk of being detected and compromised. Adversarial attacks, in particular, pose a significant threat to the security and rob...
