Search Results (8)

Search Parameters:
Keywords = adversarial purification

37 pages, 10762 KiB  
Article
Evaluating Adversarial Robustness of No-Reference Image and Video Quality Assessment Models with Frequency-Masked Gradient Orthogonalization Adversarial Attack
by Khaled Abud, Sergey Lavrushkin and Dmitry Vatolin
Big Data Cogn. Comput. 2025, 9(7), 166; https://doi.org/10.3390/bdcc9070166 - 25 Jun 2025
Viewed by 770
Abstract
Neural-network-based models have made considerable progress in many computer vision areas over recent years. However, many works have exposed their vulnerability to malicious manipulation of input data, that is, to adversarial attacks. Although many recent works have thoroughly examined the adversarial robustness of classifiers, the robustness of Image Quality Assessment (IQA) methods remains understudied. This paper addresses this gap by proposing FM-GOAT (Frequency-Masked Gradient Orthogonalization Attack), a novel white-box adversarial method tailored for no-reference IQA models. Using a novel gradient orthogonalization technique, FM-GOAT uniquely optimizes adversarial perturbations against multiple perceptual constraints to minimize visibility, moving beyond traditional ℓp-norm bounds. We evaluate FM-GOAT on seven state-of-the-art NR-IQA models across three image and video datasets, revealing significant vulnerability to the proposed attack. Furthermore, we examine the applicability of adversarial purification methods to the IQA task, as well as their efficiency in mitigating white-box adversarial attacks. By studying the activations from models' intermediate layers, we explore their behavioral patterns in adversarial scenarios and discover valuable insights that may lead to better adversarial detection. Full article
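
The core of the attack described above is the gradient orthogonalization step: the gradient that moves the IQA score is projected so that it has no component along the gradients of the perceptual constraints. Below is a minimal PyTorch sketch of that projection, assuming a differentiable NR-IQA model and callable perceptual constraints; it omits the frequency masking and is not the authors' FM-GOAT implementation.

```python
import torch

def orthogonalized_step(x_adv, x_ref, iqa_model, constraints, step_size=1e-3):
    """One attack step whose direction is projected to be orthogonal to the
    gradients of perceptual constraints (e.g., an SSIM-style distance to the
    reference), so the predicted quality score moves while the perceptual
    change stays small. Hypothetical sketch, not the authors' FM-GOAT code."""
    x = x_adv.clone().detach().requires_grad_(True)
    score = iqa_model(x).sum()                      # NR-IQA score to push up or down
    g = torch.autograd.grad(score, x)[0].flatten()

    for constraint in constraints:                  # each: (x, x_ref) -> scalar
        x_c = x_adv.clone().detach().requires_grad_(True)
        g_c = torch.autograd.grad(constraint(x_c, x_ref).sum(), x_c)[0].flatten()
        # Remove the component of the attack gradient along the constraint gradient.
        g = g - (g @ g_c) / (g_c @ g_c + 1e-12) * g_c

    step = step_size * g.view_as(x_adv).sign()
    return (x_adv + step).clamp(0.0, 1.0).detach()
```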

20 pages, 9246 KiB  
Article
Enhancing Robustness in UDC Image Restoration Through Adversarial Purification and Fine-Tuning
by Wenjie Dong, Zhenbo Song, Zhenyuan Zhang, Xuanzheng Lin and Jianfeng Lu
Sensors 2025, 25(11), 3386; https://doi.org/10.3390/s25113386 - 28 May 2025
Viewed by 369
Abstract
This study presents a novel defense framework to fortify Under-Display Camera (UDC) image restoration models against adversarial attacks, a previously underexplored vulnerability in this domain. Our research initially conducts an in-depth robustness evaluation of deep-learning-based UDC image restoration models by employing several white-box and black-box attacking methods. Following the assessment, we propose a two-stage approach integrating diffusion-based adversarial purification and efficient fine-tuning, uniquely designed to eliminate perturbations while retaining restoration fidelity. For the first time, we systematically evaluate seven state-of-the-art UDC models (such as DISCNet, UFormer, etc.) under diverse attacks (PGD, C&W, etc.), revealing severe performance degradation (DISCNet’s PSNR drops from 35.24 to 15.16 under C&W attack). Our framework demonstrates significant improvements: after purification and fine-tuning, DISCNet’s PSNR rebounds to 32.17 under PGD attack (vs. 30.17 without defense), while UFormer achieves a 19.71 PSNR under LPIPS-guided attacks (vs. 17.38 baseline). The effectiveness of our proposed approach is validated through extensive experiments, showing marked improvements in resilience against various adversarial attacks. Full article
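
Diffusion-based purification of the kind used in the first stage generally works by diffusing the attacked image part-way along the forward noising process and then running the learned reverse process back to a clean image. A generic sketch under standard DDPM conventions is shown below; eps_model, the beta schedule, and t_star are assumptions, and the paper's purification and fine-tuning pipeline may differ in detail.

```python
import torch

@torch.no_grad()
def diffusion_purify(x_adv, eps_model, betas, t_star=100):
    """Purify an (attacked) UDC image by diffusing it to timestep t_star and
    denoising back with a pretrained noise-prediction model eps_model(x_t, t).
    Generic DiffPure-style sketch under assumed DDPM conventions."""
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Forward process: add noise up to t_star in one shot via q(x_t | x_0).
    ab_t = alpha_bar[t_star]
    x_t = ab_t.sqrt() * x_adv + (1.0 - ab_t).sqrt() * torch.randn_like(x_adv)

    # Reverse process: standard DDPM ancestral sampling from t_star down to 0.
    for t in reversed(range(t_star + 1)):
        t_batch = torch.full((x_t.shape[0],), t, device=x_t.device, dtype=torch.long)
        eps = eps_model(x_t, t_batch)
        mean = (x_t - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x_t = mean + betas[t].sqrt() * torch.randn_like(x_t) if t > 0 else mean
    return x_t.clamp(0.0, 1.0)
```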

31 pages, 3790 KiB  
Article
A Robust Recommender System Against Adversarial and Shilling Attacks Using Diffusion Networks and Self-Adaptive Learning
by Ali Alhwayzee, Saeed Araban and Davood Zabihzadeh
Symmetry 2025, 17(2), 233; https://doi.org/10.3390/sym17020233 - 5 Feb 2025
Cited by 3 | Viewed by 1297
Abstract
Shilling and adversarial attacks are two main types of attacks against recommender systems (RSs). In modern RSs, existing defense methods are hindered by the following two challenges: (1) the diversity of RSs’ information sources beyond the interaction matrix, such as user comments, textual data, and visual information; and (2) most defense methods are robust only against specific types of adversarial attacks. Ensuring the robustness of RSs against new adversarial attacks across different data sources remains an open problem. To address this problem, we propose a novel method that unifies adversarial attack detection, purification, and fake user detection in RSs by utilizing a guided diffusion adversarial purification network and a self-adaptive training technique. Our approach aims to simultaneously handle both known and unknown adversarial attacks on RSs’ inputs and outputs. We conducted extensive experiments on three large-scale datasets to evaluate the effectiveness of the proposed method. The results confirm that our method can effectively eliminate adversarial perturbations on images and textual content within RSs, surpassing state-of-the-art methods by a significant margin. Moreover, it achieved the best results in three out of five evaluated shilling attack types. Finally, for attacks with realistic magnitudes, it can maintain baseline performance levels even when multiple attacks are applied simultaneously. Full article
(This article belongs to the Section Computer)

22 pages, 7282 KiB  
Article
QuEst: Adversarial Attack Intensity Estimation via Query Response Analysis
by Eun Gi Lee, Chi Hyeok Min and Seok Bong Yoo
Mathematics 2024, 12(22), 3508; https://doi.org/10.3390/math12223508 - 9 Nov 2024
Viewed by 869
Abstract
Deep learning has dramatically advanced computer vision tasks, including person re-identification (re-ID), substantially improving matching individuals across diverse camera views. However, person re-ID systems remain vulnerable to adversarial attacks that introduce imperceptible perturbations, leading to misidentification and undermining system reliability. This paper addresses the challenge of robust person re-ID in the presence of adversarial examples by estimating attack intensity to enable effective detection and adaptive purification. The proposed approach leverages the observation that adversarial examples in retrieval tasks disrupt the relevance and internal consistency of retrieval results, degrading re-ID accuracy. This approach estimates the attack intensity and dynamically adjusts the purification strength by analyzing the query response data, addressing the limitations of fixed purification methods. This approach also preserves the performance of the model on clean data by avoiding unnecessary manipulation while improving the robustness of the system and its reliability in the presence of adversarial examples. The experimental results demonstrate that the proposed method effectively detects adversarial examples and estimates the attack intensity through query response analysis. This approach enhances purification performance when integrated with adversarial purification techniques in person re-ID systems. Full article
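
One way to read the query-response idea is that adversarial queries return top-k gallery lists whose members are less consistent with one another than those returned by clean queries. The heuristic sketch below estimates an intensity score from that consistency and maps it to a purification budget; the feature shapes, thresholds, and mapping are illustrative assumptions rather than the paper's estimator.

```python
import torch

def estimate_attack_intensity(query_feat, gallery_feats, k=10):
    """Estimate how strongly a re-ID query was attacked from the coherence of its
    retrieval response: attacked queries tend to retrieve top-k results that are
    less similar to each other. Illustrative heuristic, not the published estimator."""
    q = torch.nn.functional.normalize(query_feat, dim=-1)      # (d,)
    g = torch.nn.functional.normalize(gallery_feats, dim=-1)   # (N, d)
    sims = g @ q                                                # cosine similarity to query
    top = g[sims.topk(k).indices]                               # features of top-k matches
    pairwise = top @ top.T                                      # similarities among the top-k
    off_diag = pairwise[~torch.eye(k, dtype=torch.bool, device=pairwise.device)]
    consistency = off_diag.mean()                               # high for clean, low for attacked
    return (1.0 - consistency).clamp(0.0, 1.0)                  # crude intensity in [0, 1]

def adaptive_purification_strength(intensity, max_steps=200):
    """Map the estimated intensity to a purification budget, e.g., diffusion steps."""
    return int(max_steps * float(intensity))
```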

18 pages, 11050 KiB  
Article
Mitigating Adversarial Attacks in Object Detection through Conditional Diffusion Models
by Xudong Ye, Qi Zhang, Sanshuai Cui, Zuobin Ying, Jingzhang Sun and Xia Du
Mathematics 2024, 12(19), 3093; https://doi.org/10.3390/math12193093 - 2 Oct 2024
Viewed by 2564
Abstract
The field of object detection has witnessed significant advancements in recent years, thanks to the remarkable progress in artificial intelligence and deep learning. These breakthroughs have significantly enhanced the accuracy and efficiency of detecting and categorizing objects in digital images. Nonetheless, contemporary object detection technologies have certain limitations, such as their inability to counter white-box attacks, insufficient denoising, suboptimal reconstruction, and gradient confusion. To overcome these hurdles, this study proposes an innovative approach that uses conditional diffusion models to perturb adversarial examples. The process begins with the application of a random chessboard mask to the adversarial example, followed by the addition of a slight noise to fill the masked area during the forward process. The adversarial image is then restored to its original form through a reverse generative process that only considers the masked pixels, not the entire image. Next, we use the complement of the initial mask as the mask for the second stage to reconstruct the image once more. This two-stage masking process allows for the complete removal of global disturbances and aids in image reconstruction. In particular, we employ a conditional diffusion model based on a class-conditional U-Net architecture, with the source image further conditioned through concatenation. Our method outperforms the recently introduced HARP method by 5% and 6.5% in mAP on the COCO2017 and PASCAL VOC datasets, respectively, under non-APT PGD attacks. Comprehensive experimental results confirm that our method can effectively restore adversarial examples, demonstrating its practical utility. Full article
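
The two-stage masking idea can be illustrated independently of the diffusion model itself: regenerate the pixels under a checkerboard mask while conditioning on the rest of the image, then repeat with the complementary mask so every pixel is resynthesized exactly once. In the sketch below, masked_diffusion_restore stands in for the conditional diffusion inpainting model and is an assumption of this sketch, not the paper's implementation.

```python
import torch

def chessboard_mask(h, w, cell=8, device="cpu"):
    """Binary checkerboard mask: 1 marks pixels to regenerate, 0 marks pixels to keep."""
    ys = torch.arange(h, device=device) // cell
    xs = torch.arange(w, device=device) // cell
    return ((ys[:, None] + xs[None, :]) % 2).float()

def two_stage_purify(x_adv, masked_diffusion_restore, cell=8):
    """Stage 1 regenerates one half of the checkerboard conditioned on the other
    half; stage 2 regenerates the complement, so global perturbations are removed
    everywhere. masked_diffusion_restore(x, mask) is a hypothetical callable."""
    _, _, h, w = x_adv.shape
    mask = chessboard_mask(h, w, cell, x_adv.device)[None, None]   # (1, 1, H, W)
    x_stage1 = masked_diffusion_restore(x_adv, mask)               # fill masked cells
    x_stage2 = masked_diffusion_restore(x_stage1, 1.0 - mask)      # fill the complement
    return x_stage2
```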

19 pages, 3784 KiB  
Article
Robust and Refined Salient Object Detection Based on Diffusion Model
by Hanchen Ye, Yuyue Zhang and Xiaoli Zhao
Electronics 2023, 12(24), 4962; https://doi.org/10.3390/electronics12244962 - 11 Dec 2023
Cited by 1 | Viewed by 2010
Abstract
Salient object detection (SOD) networks are vulnerable to adversarial attacks. As adversarial training is computationally expensive for SOD, existing defense methods instead adopt a noise-against-noise strategy that disrupts adversarial perturbation and restores the image either in input or feature space. However, their limited learning capacity and the need for network modifications limit their applicability. In recent years, the popular diffusion model coincides with the existing defense idea and exhibits excellent purification performance, but there still remains an accuracy gap between the saliency results generated from the purified images and the benign images. In this paper, we propose a Robust and Refined (RoRe) SOD defense framework based on the diffusion model to simultaneously achieve adversarial robustness as well as improved accuracy for benign and purified images. Our proposed RoRe defense consists of three modules: purification, adversarial detection, and refinement. The purification module leverages the powerful generation capability of the diffusion model to purify perturbed input images to achieve robustness. The adversarial detection module utilizes the guidance classifier in the diffusion model for multi-step voting classification. By combining this classifier with a similarity condition, precise adversarial detection can be achieved, providing the possibility of regaining the original accuracy for benign images. The refinement module uses a simple and effective UNet to enhance the accuracy of purified images. The experiments demonstrate that RoRe achieves superior robustness over state-of-the-art methods while maintaining high accuracy for benign images. Moreover, RoRe shows good results against backward pass differentiable approximation (BPDA) attacks. Full article
(This article belongs to the Special Issue AI Security and Safety)
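
A plausible reading of the detection module is sketched below: the diffusion model's guidance classifier is queried at several noise levels, and an input is flagged as adversarial when its predicted label is unstable across levels or when it is dissimilar from its purified counterpart. The noise levels, thresholds, and the cosine-similarity condition are illustrative assumptions, not the paper's exact criteria.

```python
import torch

@torch.no_grad()
def voting_adversarial_detection(x, x_purified, guidance_classifier, noise_levels,
                                 vote_threshold=0.5, sim_threshold=0.9):
    """Multi-step voting with a similarity check: classify the input under several
    noise levels, measure agreement with the majority label, and compare the input
    to its purified version. Illustrative sketch of the detection idea."""
    votes = []
    for sigma in noise_levels:
        noisy = x + sigma * torch.randn_like(x)
        votes.append(guidance_classifier(noisy).argmax(dim=-1))
    votes = torch.stack(votes)                          # (num_levels, batch)
    majority = votes.mode(dim=0).values
    agreement = (votes == majority).float().mean(dim=0)

    sim = torch.nn.functional.cosine_similarity(
        x.flatten(1), x_purified.flatten(1), dim=1)

    # Flag as adversarial when votes disagree or the purified image differs strongly.
    return (agreement < vote_threshold) | (sim < sim_threshold)
```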

11 pages, 945 KiB  
Article
Improving Adversarial Robustness via Distillation-Based Purification
by Inhwa Koo, Dong-Kyu Chae and Sang-Chul Lee
Appl. Sci. 2023, 13(20), 11313; https://doi.org/10.3390/app132011313 - 15 Oct 2023
Viewed by 2422
Abstract
Despite the impressive performance of deep neural networks on many different vision tasks, they have been known to be vulnerable to intentionally added noise to input images. To combat these adversarial examples (AEs), improving the adversarial robustness of models has emerged as an important research topic, and research has been conducted in various directions including adversarial training, image denoising, and adversarial purification. Among them, this paper focuses on adversarial purification, which is a kind of pre-processing that removes noise before AEs enter a classification model. The advantage of adversarial purification is that it can improve robustness without affecting the model's nature, while other defense techniques such as adversarial training suffer from a decrease in model accuracy. Our proposed purification framework utilizes a Convolutional Autoencoder as a base model to capture the features of images and their spatial structure. We further aim to improve the adversarial robustness of our purification model by distilling the knowledge from teacher models. To this end, we train two Convolutional Autoencoders (teachers), one with adversarial training and the other with normal training. Then, through ensemble knowledge distillation, we transfer the ability to denoise and restore the original images to the student model (purification model). Our extensive experiments confirm that our student model achieves high purification performance (i.e., how accurately a pre-trained classification model classifies purified images). The ablation study confirms the positive effect on performance of ensemble knowledge distillation from two teachers. Full article
(This article belongs to the Topic Adversarial Machine Learning: Theories and Applications)
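
The training objective suggested by the abstract combines reconstruction of the clean image with matching the outputs of the two frozen teacher autoencoders. A minimal sketch of such an ensemble-distillation loss is given below; the loss weights and the use of MSE for both terms are assumptions of the sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def distillation_purifier_loss(student, teacher_adv, teacher_clean,
                               x_noisy, x_clean, alpha=0.5, beta=0.25):
    """Student purifier loss: reconstruct the clean image and match the outputs of
    two frozen teacher autoencoders (one adversarially trained, one normally
    trained). alpha and beta are illustrative weights."""
    out_student = student(x_noisy)
    with torch.no_grad():
        out_t_adv = teacher_adv(x_noisy)
        out_t_clean = teacher_clean(x_noisy)

    recon = F.mse_loss(out_student, x_clean)        # restore the original image
    distill = F.mse_loss(out_student, out_t_adv) + F.mse_loss(out_student, out_t_clean)
    return alpha * recon + beta * distill
```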

13 pages, 2524 KiB  
Article
Real-Time High-Performance Laser Welding Defect Detection by Combining ACGAN-Based Data Enhancement and Multi-Model Fusion
by Kui Fan, Peng Peng, Hongping Zhou, Lulu Wang and Zhongyi Guo
Sensors 2021, 21(21), 7304; https://doi.org/10.3390/s21217304 - 2 Nov 2021
Cited by 21 | Viewed by 4530
Abstract
Most existing laser welding process monitoring technologies focus on the detection of post-engineering defects, but in the mass production of electronic equipment, such as laser-welded metal plates, the real-time identification of defects has greater practical significance. Datasets for the laser welding process are often difficult to build and experimental data are scarce, which hinders the application of data-driven laser welding defect detection methods. In this paper, an intelligent welding defect diagnosis method based on auxiliary classifier generative adversarial networks (ACGAN) is proposed. Firstly, a ten-class dataset consisting of 6467 samples was constructed from the optical and thermal sensory parameters of the welding process. A newly structured ACGAN model is proposed to generate synthetic data similar to the true defect feature distributions. In addition, to make the differences between defect categories more distinct after data expansion, a data filtering and purification scheme based on ensemble learning and an SVM (support vector machine) is used to filter out poorly generated data. In the experiments, the classification accuracy reaches 96.83% and 85.13% for the CNN (convolutional neural network) and ACGAN models, respectively, and further improves to 97.86% and 98.37% for the ACGAN-CNN and ACGAN-SVM-CNN fusion models, respectively. The results show that ACGAN can not only serve as a classification model but also achieve superior real-time classification and recognition through data enhancement and multi-model fusion. Full article
(This article belongs to the Special Issue Artificial Intelligence for Smart Sensing, Test and Measurement)
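
The filtering step can be illustrated with a simple scheme: train an SVM on real welding samples and keep only generated samples that it assigns to their intended class with sufficient confidence. The sketch below uses scikit-learn; the RBF kernel and the confidence threshold are illustrative choices rather than the paper's exact settings, and the ensemble-learning component is omitted.

```python
import numpy as np
from sklearn.svm import SVC

def filter_generated_samples(real_X, real_y, gen_X, gen_y, min_confidence=0.8):
    """Keep only ACGAN-generated samples that an SVM trained on real data assigns
    to their intended class with high confidence, discarding low-quality synthetic
    defect samples before augmenting the training set. Inputs are NumPy arrays."""
    svm = SVC(kernel="rbf", probability=True).fit(real_X, real_y)
    proba = svm.predict_proba(gen_X)                     # (n_generated, n_classes)
    class_index = {c: i for i, c in enumerate(svm.classes_)}
    conf = proba[np.arange(len(gen_y)), [class_index[c] for c in gen_y]]
    keep = conf >= min_confidence
    return gen_X[keep], gen_y[keep]
```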
