Article

MAAG: A Multi-Attention Architecture for Generalizable Multi-Target Adversarial Attacks

School of Computer Science and Engineering, Jishou University, Jishou 416000, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(18), 9915; https://doi.org/10.3390/app15189915
Submission received: 20 August 2025 / Revised: 6 September 2025 / Accepted: 8 September 2025 / Published: 10 September 2025

Abstract

Adversarial examples pose a severe threat to deep neural networks. They are crafted by applying imperceptible perturbations to benign inputs, causing the model to produce incorrect predictions. Most existing attack methods exhibit limited generalization, especially in black-box settings involving unseen models or unknown classes. To address these limitations, we propose MAAG (multi-attention adversarial generation), a novel model architecture that enhances attack generalizability and transferability. MAAG integrates channel and spatial attention to extract representative features for adversarial example generation and capture diverse decision boundaries for better transferability. A composite loss guides the generation of adversarial examples across different victim models. Extensive experiments validate the superiority of our proposed method in crafting adversarial examples for both known and unknown classes. Specifically, it surpasses existing generative methods by approximately 7.0% and 7.8% in attack success rate on known and unknown classes, respectively.
Keywords: adversarial attacks; multiple attention; generalization capability; multi-target attacks
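The abstract states that MAAG combines channel and spatial attention to extract representative features, but does not define the modules themselves. As a rough illustration of how such a combined attention block typically operates (a CBAM-style sketch with hypothetical pooling and gating choices, not the paper's actual architecture):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Pool over spatial dims to get one weight per channel,
    # then rescale each channel by its gated importance.
    w = sigmoid(x.mean(axis=(1, 2)))        # shape (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    # Pool over channels to get one attention map shared by all channels,
    # emphasizing spatial locations with strong average activation.
    m = sigmoid(x.mean(axis=0))             # shape (H, W)
    return x * m[None, :, :]

def multi_attention(x):
    # Apply channel attention first, then spatial attention in sequence.
    return spatial_attention(channel_attention(x))

feat = np.random.randn(8, 4, 4)             # toy feature map: 8 channels, 4x4
out = multi_attention(feat)
print(out.shape)                            # same shape as the input: (8, 4, 4)
```

Because both gates are sigmoids in (0, 1), the block reweights rather than replaces features; in a real generator these gates would be learned (e.g., via small MLPs or convolutions) rather than computed directly from raw means as above.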

Share and Cite

MDPI and ACS Style

Ou, D.; Lu, J.; Hua, C.; Zhou, S.; Zeng, Y.; He, Y.; Tian, J. MAAG: A Multi-Attention Architecture for Generalizable Multi-Target Adversarial Attacks. Appl. Sci. 2025, 15, 9915. https://doi.org/10.3390/app15189915


