Article

Adversarial Defense for Medical Images

Min-Jen Tsai, Ya-Chu Lee, Hsin-Ying Lien and Cheng-Chien Liang

Institute of Information Management, National Yang Ming Chiao Tung University, Hsinchu City 300093, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(22), 4384; https://doi.org/10.3390/electronics14224384
Submission received: 13 August 2025 / Revised: 25 October 2025 / Accepted: 5 November 2025 / Published: 10 November 2025

Abstract

The deployment of deep learning is significantly hindered by its vulnerability to adversarial attacks, a critical concern in sensitive domains such as medicine, where misclassification can have severe, irreversible consequences. This vulnerability directly undermines prediction reliability and is central to the goals of Explainable Artificial Intelligence (XAI) and Trustworthy AI. This study addresses the problem by evaluating the efficacy of denoising techniques against adversarial attacks on medical images, with the primary objective of assessing the performance of various denoising models. We generate a test set of adversarial medical images using the one-pixel attack method, which subtly modifies a minimal number of pixels to induce misclassification. We propose a novel autoencoder-based denoising model and evaluate it across four diverse medical image datasets: Derma, Pathology, OCT, and Chest. The denoising models were trained by introducing impulse noise and then tested on the adversarially attacked images, with effectiveness quantified using standard image quality metrics. The results demonstrate that the proposed denoising autoencoder performs consistently well across all datasets. By mitigating catastrophic failures induced by sparse attacks, this work enhances system dependability and contributes to the development of more robust and reliable deep learning applications for clinical practice. A key limitation is that the evaluation was confined to sparse, pixel-level attacks; robustness to dense, multi-pixel adversarial attacks such as PGD or AutoAttack is not guaranteed and requires future investigation.
Keywords: pixel-attack; machine learning; medical image; denoising model; autoencoder
