Adversarial Example Generation Method Based on Wavelet Transform
Abstract
1. Introduction
1. We propose Wavelet-AdvGAN, an adversarial example generation method that improves attack success rate, generation efficiency, transferability, and sparsity. Unlike existing frequency-domain GAN-based methods (e.g., GE-AdvGAN) that rely on single-scale frequency differences or gradient editing, Wavelet-AdvGAN introduces a synergistic "frequency-domain constraint + local feature enhancement" mechanism to address the long-standing trade-off between perturbation sparsity and transferability.
2. We introduce the Frequency Sub-band Difference (FSD) module based on the wavelet transform. Unlike existing frequency-domain methods that use a single uniform frequency metric, this module decomposes images into multi-scale sub-bands (LL/LH/HL/HH) and assigns adaptive weights, preserving global structural information (low-frequency) while constraining detail perturbations (high-frequency), thus generating adversarial examples with higher sparsity and visual authenticity.
3. We present the Wavelet Transform Local Feature (WTLF) module, which leverages wavelet convolution to separate global structures (low-frequency) from detail features (high-frequency) and integrates a lightweight 1D convolution to preserve channel information. Unlike the AdvGAN series, which focuses only on global features, or AI-GAN's attack-module enhancement, this module lets the model focus on critical local regions, directly improving attack effectiveness while maintaining efficiency. The design extends the "divide-and-conquer" idea of mixture-of-experts (MoE) architectures to frequency-domain processing for adversarial attacks.
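The sub-band decomposition underlying the FSD module can be illustrated concretely. The sketch below implements a single-level 2D Haar DWT in NumPy; it shows the LL/LH/HL/HH split only, is not the paper's implementation, and uses one common normalization convention.

```python
import numpy as np

def haar_dwt2(x):
    """Single-level 2D Haar DWT of a 2D array with even dimensions.

    Returns the four sub-bands (LL, LH, HL, HH): LL carries the coarse
    global structure, the other three carry detail coefficients.
    """
    # Pairwise averages/differences along columns, then rows.
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    ll = (a[0::2, :] + a[1::2, :]) / 2.0
    lh = (a[0::2, :] - a[1::2, :]) / 2.0
    hl = (d[0::2, :] + d[1::2, :]) / 2.0
    hh = (d[0::2, :] - d[1::2, :]) / 2.0
    return ll, lh, hl, hh

img = np.arange(16.0).reshape(4, 4)
ll, lh, hl, hh = haar_dwt2(img)
# Each sub-band has half the spatial resolution of the input.
print(ll.shape)  # (2, 2)
```

A constant image has all its energy in LL; the detail sub-bands vanish, which is why constraining LH/HL/HH targets fine-grained perturbations specifically.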
2. Related Work
2.1. GAN-Based Adversarial Example Generation Methods
2.2. Wavelet Transform
2.3. ELA (Efficient Local Attention)
2.4. Adversarial Training
2.5. Data Augmentation
3. Method
3.1. Wavelet-AdvGAN
- L_GAN: the generator's GAN loss (distinct from the discriminator's loss L_D), measuring how well adversarial samples fool the discriminator D;
- L_adv: the attack effectiveness loss, evaluating whether X_adv misleads the target model f;
- L_FSD: the frequency sub-band difference loss, constraining perturbation magnitude;
- α, β: hyperparameters balancing the three loss components.
- X: real samples following the dataset distribution p_data;
- X_adv = X + G(X): adversarial samples (real samples plus the generator-learned perturbation G(X));
- D(X): the discriminator output (the probability that X is a real sample, D(X) ∈ [0, 1]).
- Optimization goal for L_GAN: minimize the loss to push D(X_adv) toward 1, i.e., make adversarial samples indistinguishable from real ones.
- ℓ_f(X_adv, y): the target model's classification loss for X_adv (e.g., cross-entropy loss);
- Optimization goal: minimize L_adv = −ℓ_f(X_adv, y) to drive the classification loss upward (a higher classification loss indicates misclassification).
- W(·, ·): approximate Wasserstein distances (Sinkhorn distances [37]) between the wavelet sub-bands (LL/LH/HL/HH) of X and X_adv, computed via entropy-regularized optimal transport (Sinkhorn–Knopp, maximum iterations = 100); see Section 3.2 for detailed steps;
- w_LL, w_LH, w_HL, w_HH: sub-band weights, with defaults prioritizing the low-frequency component to preserve image structure;
- Optimization goal: minimize L_FSD so that X_adv retains the original image's frequency-domain structure (improving visual authenticity and sparsity).
- Expectation domain: both expectations are taken over the real sample distribution p_data (adversarial samples are derived from X, so no separate distribution is needed);
- First term: rewards D for correctly classifying real samples as "real" (maximizing log D(X));
- Second term: rewards D for correctly classifying adversarial samples as "fake" (maximizing log(1 − D(X_adv))).
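The loss composition described above can be sketched numerically. The function below assumes a total generator objective of the form L_G = L_GAN + α·L_adv + β·L_FSD with the saturating log form for the GAN term; this exact form, and the use of scalar stand-ins for each term, are reconstructions for illustration (α = 10 and β = 0.7 come from the sensitivity analysis in Section 4.5).

```python
import numpy as np

def generator_loss(d_adv, f_loss_adv, fsd, alpha=10.0, beta=0.7):
    """Total generator objective L_G = L_GAN + alpha * L_adv + beta * L_FSD.

    d_adv      : discriminator outputs D(X_adv), each in (0, 1)
    f_loss_adv : target model's classification loss on X_adv
    fsd        : frequency sub-band difference loss
    alpha, beta: trade-off weights (values from the sensitivity analysis)
    """
    # GAN term: minimizing log(1 - D(X_adv)) pushes D(X_adv) toward 1.
    l_gan = np.mean(np.log(1.0 - d_adv + 1e-12))
    # Attack term: reward a *high* classification loss on X_adv.
    l_adv = -f_loss_adv
    return l_gan + alpha * l_adv + beta * fsd
```

Note the sign convention: a larger classification loss on X_adv lowers the total objective, so gradient descent on L_G drives the target model toward misclassification while β·L_FSD keeps the perturbation's frequency content in check.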
3.2. Frequency Sub-Band Discrepancy (FSD)
- Reshape each frequency sub-band (e.g., LL) into a 1D vector of dimension K = C × H′ × W′ (C: number of channels; H′, W′: height/width of the wavelet sub-band, each halved by the 2D-DWT);
- Define the cost matrix M ∈ R^(K×K) as the Euclidean distance between pixel pairs of the two sub-band vectors: M_ij = ‖x_i − x′_j‖ (where x_i and x′_j are pixels of the original and adversarial sub-band vectors, respectively);
- Solve the entropy-regularized optimal transport problem via the iterative Sinkhorn–Knopp algorithm (maximum iterations = 100) to obtain the approximate Wasserstein distance;
- Batch-wise computation is adopted to reduce overhead: for a batch size of 128, the distance computation for the four sub-bands takes ∼0.02 s per batch on a single NVIDIA RTX 3080Ti GPU, with a time complexity of O(B·K²·T) (B: batch size; K: sub-band vector length; T: number of Sinkhorn iterations).
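The Sinkhorn approximation described above can be sketched for a single pair of flattened sub-band vectors. Uniform marginals and the regularization value eps = 0.1 are assumptions; the 100-iteration cap matches the recipe in the text.

```python
import numpy as np

def sinkhorn_distance(x, y, eps=0.1, n_iters=100):
    """Entropy-regularized OT cost between two 1-D sub-band vectors.

    Treats each vector as a uniform distribution over its entries,
    uses |x_i - y_j| as the ground cost, and runs Sinkhorn-Knopp.
    """
    k = len(x)
    cost = np.abs(x[:, None] - y[None, :])   # Euclidean cost in 1-D
    kern = np.exp(-cost / eps)               # Gibbs kernel
    a = np.full(k, 1.0 / k)                  # uniform source marginal
    b = np.full(k, 1.0 / k)                  # uniform target marginal
    u = np.ones(k)
    v = np.ones(k)
    for _ in range(n_iters):                 # Sinkhorn-Knopp scaling updates
        u = a / (kern @ v)
        v = b / (kern.T @ u)
    plan = u[:, None] * kern * v[None, :]    # transport plan
    return float(np.sum(plan * cost))        # approximate Wasserstein cost

# Identical sub-bands give a (near-)zero distance; shifted ones do not.
x = np.array([0.0, 0.5, 1.0])
print(sinkhorn_distance(x, x) >= 0.0)  # True
```

Because of the entropy term the distance between identical inputs is small but not exactly zero; in practice this bias is constant and does not affect the loss gradient's direction.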
- Increasing the low-frequency weight w_LL to 0.6 improves visual authenticity (structural consistency) but reduces the attack success rate;
- Increasing the high-frequency weight to 0.4 enhances attack effectiveness (higher ASR) but degrades visual invisibility (a worse perceptibility score);
- The weights are robust within moderate intervals around their defaults, ensuring stable performance for reproducibility.
3.3. Wavelet Transform Local Features (WTLF)
4. Experiments
4.1. Dataset
4.2. Evaluation Metrics
4.3. Comparison Methods
4.4. Parameter Details
4.4.1. Training the Target Model
- Batch Size: 200;
- Optimizer: Adam with an initial learning rate of 0.001;
- Learning Rate Adjustment: Cosine annealing adjustment strategy;
- Weight decay applied.
- Random horizontal flipping;
- Random rotation within a range of ±15 degrees;
- Random cropping to a size of 32 pixels, with an additional 4 pixels of padding added at the image edges;
- A 10% probability of converting the image to grayscale;
- Random adjustments to the image's brightness, contrast, saturation, and hue:
  - Brightness, contrast, and saturation are randomly increased or decreased by up to 20%;
  - Hue is randomly shifted by up to 10% from the original value.
- Attack method: Fast Gradient Sign Method (FGSM);
- Adversarial samples make up 10% of the training data;
- Epsilon (ε): 0.1.
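The FGSM step used to generate these adversarial training samples can be sketched as follows; the toy logistic-regression model and its weights are purely illustrative, not part of the paper's setup.

```python
import numpy as np

def fgsm(x, grad, epsilon=0.1):
    """FGSM: perturb the input by epsilon in the sign of the loss gradient."""
    return x + epsilon * np.sign(grad)

# Toy logistic-regression example: loss = log(1 + exp(-y * w.x)),
# whose gradient w.r.t. the input is -y * sigmoid(-y * w.x) * w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])
y = 1.0
sig = 1.0 / (1.0 + np.exp(y * np.dot(w, x)))   # sigmoid(-y * w.x)
grad = -y * sig * w                             # dLoss/dx
x_adv = fgsm(x, grad, epsilon=0.1)
# Each coordinate moves by exactly +/- epsilon.
print(np.max(np.abs(x_adv - x)))  # 0.1
```

Because the loss here is monotone in w·x, the single signed step provably increases it, which is the behavior FGSM exploits in one gradient evaluation.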
4.4.2. Training the Attack Model
- Batch size: 128;
- Number of epochs: 160;
- Optimizer: Adam with an initial learning rate of 0.001;
- Learning rate decays by a factor of 10 at the 50th and 80th epochs;
- Perturbation amplitude upper limit: 0.3.
FGSM Method:
- Epsilon (ε): 16/255.
- Epsilon (ε): 0.1;
- Number of iterations: 1000.
- Learning rate parameter: 0.01;
- Number of binary search steps: 9.
- Attack module uses PGD;
- Epsilon (ε): 0.3;
- Number of iterations: 40;
- Step size: 0.01.
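For reference, a PGD loop with the listed parameters (ε = 0.3, 40 iterations, step size 0.01) might look like the following; the quadratic toy loss is an assumption chosen only to make the example self-contained.

```python
import numpy as np

def pgd_attack(x0, grad_fn, epsilon=0.3, step=0.01, n_iters=40):
    """Projected gradient ascent on the loss, within an L-inf ball."""
    x = x0.copy()
    for _ in range(n_iters):
        x = x + step * np.sign(grad_fn(x))          # ascent step on the loss
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # project into the eps-ball
        x = np.clip(x, 0.0, 1.0)                    # keep a valid pixel range
    return x

# Toy example: maximize loss(x) = sum((x - 0.5)**2); gradient is 2*(x - 0.5).
grad_fn = lambda x: 2.0 * (x - 0.5)
x0 = np.array([0.4, 0.6, 0.5])
x_adv = pgd_attack(x0, grad_fn)
print(np.max(np.abs(x_adv - x0)) <= 0.3 + 1e-9)  # True
```

The two clips implement the projection: the first enforces the ε-ball constraint around the clean input, the second keeps the result a valid image.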
- Experimental parameters follow those from the original paper:
  - N: 10;
  - Sigma (σ): typically 0.5 (0.7 when adversarial training is used for the target model);
  - Lambda (λ): 10;
  - Epsilon (ε): 16;
  - Total number of epochs: 60.
4.5. Selection of Hyperparameters
Hyperparameter Sensitivity Analysis
1. α (the weight of L_adv): optimal value 10, robust within an interval around this optimum; deviations lead to reduced attack effectiveness or degraded sparsity.
2. β (the weight of L_FSD): optimal value 0.7, robust within an interval around this optimum; under-constraint or over-constraint impairs the ASR.
3. Frequency sub-band weights w_LL, w_LH, w_HL, w_HH: the default configuration prioritizes the low-frequency component, and each weight is robust within a moderate interval. Adjustments can be made based on priorities (visual authenticity: increase w_LL; attack effectiveness: increase the high-frequency weights).
4.6. Experimental Results
4.6.1. Attack Evaluation
4.6.2. Perturbation Magnitude
4.6.3. Generation Time
4.6.4. Ablation Experiment
5. Conclusions
5.1. Threats to Validity
1. Internal Validity: hyperparameter tuning may favor the proposed method. Mitigation: adopt consistent hyperparameter search ranges for all baselines and verify via sensitivity analysis that the optimal parameters (α, β) lie within robust intervals.
2. External Validity: generalizability is limited to CIFAR-10 and the selected models. Mitigation: use five representative DNN architectures (ResNet18/50, Vgg11/16, DenseNet121); future work will extend to larger datasets (e.g., ImageNet) and more complex models (e.g., Vision Transformers).
3. Construct Validity: the evaluation metrics (ASR, perturbation norms) may not reflect real-world effectiveness. Mitigation: complement them with transferability and visual perceptibility scores (2.8/5 vs. the baseline's 3.2/5) for comprehensive validation.
4. Statistical Validity: a small sample size may lead to unreliable results. Mitigation: utilize the full CIFAR-10 test set (10,000 samples) and supplement with statistical significance tests (two-tailed t-tests).
5.2. Practical Implications
1. Robustness Testing for Safety-Critical Systems: generate high-sparsity (low L0 norm = 3061) and high-transferability adversarial examples, mimicking real-world subtle distortions (e.g., dust on cameras, light reflections) to rigorously test DNNs in autonomous driving and facial recognition.
2. Adversarial Training Optimization: fast per-sample generation reduces computational costs compared with optimization-based methods (e.g., C&W at >1 s/sample), enabling large-scale adversarial training for real-time systems.
3. Defense Strategy Evaluation: strong transferability (an average 2.7% improvement) exposes the over-reliance of existing defenses on model-specific features, guiding the design of more generalized and robust defense strategies.
4. Semi-White-Box Attack Scenarios: the method aligns with real-world attack scenarios (no access to the target model's private information), providing a privacy-compliant tool for security researchers to evaluate DNN robustness without violating access constraints.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Valente, J.; António, J.; Mora, C.; Jardim, S. Developments in Image Processing Using Deep Learning and Reinforcement Learning. J. Imaging 2023, 9, 207.
- Wang, Z.; Wu, Y.; Park, Y.; Yoo, S.; Wang, X.; Eshraghian, J.K.; Lu, W.D. PowerGAN: A Machine Learning Approach for Power Side-Channel Attack on Compute-in-Memory Accelerators. Adv. Intell. Syst. 2023, 5, 2300313.
- Badjie, B.; Cecílio, J.; Casimiro, A. Adversarial Attacks and Countermeasures on Image Classification-based Deep Learning Models in Autonomous Driving Systems: A Systematic Review. ACM Comput. Surv. 2024, 57, 1–52.
- Ren, M.; Wang, Y.; Zhu, Y.; Huang, Y.; Sun, Z.; Qi, L.; Tian, N. Artificial immune system of secure face recognition against adversarial attacks. Int. J. Comput. Vis. 2024, 132, 5718–5740.
- Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; Fergus, R. Intriguing properties of neural networks. arXiv 2014, arXiv:1312.6199.
- Goodfellow, I.J.; Shlens, J.; Szegedy, C. Explaining and Harnessing Adversarial Examples. arXiv 2015, arXiv:1412.6572.
- Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; Li, J. Boosting Adversarial Attacks with Momentum. arXiv 2018, arXiv:1710.06081.
- Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; Vladu, A. Towards Deep Learning Models Resistant to Adversarial Attacks. arXiv 2019, arXiv:1706.06083.
- Carlini, N.; Wagner, D. Towards Evaluating the Robustness of Neural Networks. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA, 22–26 May 2017; pp. 39–57.
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative Adversarial Networks. arXiv 2014, arXiv:1406.2661.
- Xiao, C.; Li, B.; Zhu, J.Y.; He, W.; Liu, M.; Song, D. Generating Adversarial Examples with Adversarial Networks. arXiv 2019, arXiv:1801.02610.
- Mangla, P.; Jandial, S.; Varshney, S.; Balasubramanian, V.N. AdvGAN++: Harnessing latent layers for adversary generation. arXiv 2019, arXiv:1908.00706.
- Bai, T.; Zhao, J.; Zhu, J.; Han, S.; Chen, J.; Li, B.; Kot, A. AI-GAN: Attack-Inspired Generation of Adversarial Examples. arXiv 2021, arXiv:2002.02196.
- Zhu, Z.; Chen, H.; Wang, X.; Zhang, J.; Jin, Z.; Choo, K.K.R.; Shen, J.; Yuan, D. GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model. arXiv 2024, arXiv:2401.06031.
- Yu, Y.; Xia, S.; Lin, X.; Kong, C.; Yang, W.; Lu, S.; Tan, Y.P.; Kot, A.C. Toward Model Resistant to Transferable Adversarial Examples via Trigger Activation. IEEE Trans. Inf. Forensics Secur. 2025, 20, 3745–3757.
- Kong, C.; Luo, A.; Bao, P.; Yu, Y.; Li, H.; Zheng, Z.; Wang, S.; Kot, A.C. MoE-FFD: Mixture of Experts for Generalized and Parameter-Efficient Face Forgery Detection. arXiv 2025, arXiv:2404.08452.
- Casem, J.; Golecruz, G.M.; Ostia, C. Brushless DC Motor Fault Classification Using Support Vector Machine Algorithm with Discrete Wavelet Transform Feature Extraction. In Proceedings of the 2023 9th International Conference on Control, Automation and Robotics (ICCAR), Beijing, China, 21–23 April 2023; pp. 19–24.
- Greenhall, J.; Sinha, D.N.; Pantea, C. Genetic Algorithm-Wavelet Transform Feature Extraction for Data-Driven Acoustic Resonance Spectroscopy. IEEE Trans. Ultrason. Ferroelectr. Freq. Control 2023, 70, 736–747.
- Chen, S.; Gao, J.; Lou, F.; Tuo, Y.; Tan, S.; Shan, Y.; Luo, L.; Xu, Z.; Zhang, Z.; Huang, X. Rapid estimation of soil water content based on hyperspectral reflectance combined with continuous wavelet transform, feature extraction, and extreme learning machine. PeerJ 2024, 12, e17954.
- Bazdar, A.; Hatamian, A.; Ostadieh, J.; Nourinia, J.; Ghobadi, C.; Mostafapour, E. Nonlinear feature extraction methods based on dual-tree complex wavelet transform subimages of brain magnetic resonance imaging for the classification of multiple diseases. J. Med. Signals Sens. 2023, 13, 165–172.
- Shahbahrami, A. Algorithms and architectures for 2D discrete wavelet transform. J. Supercomput. 2012, 62, 1045–1064.
- Xu, W.; Wan, Y. ELA: Efficient Local Attention for Deep Convolutional Neural Networks. arXiv 2024, arXiv:2403.01123.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-Excitation Networks. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Hou, Q.; Zhou, D.; Feng, J. Coordinate Attention for Efficient Mobile Network Design. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 13708–13717.
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; Lecture Notes in Computer Science, Volume 11211, pp. 3–19.
- Kiranyaz, S.; Avci, O.; Abdeljaber, O.; Ince, T.; Gabbouj, M.; Inman, D.J. 1D convolutional neural networks and applications: A survey. Mech. Syst. Signal Process. 2021, 151, 107398.
- Wu, Y.; He, K. Group Normalization. arXiv 2018, arXiv:1803.08494.
- Gao, Y.; Wu, D.; Zhang, J.; Gan, G.; Xia, S.T.; Niu, G.; Sugiyama, M. On the Effectiveness of Adversarial Training Against Backdoor Attacks. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 14878–14888.
- Wu, B.; Wei, S.; Zhu, M.; Zheng, M.; Zhu, Z.; Zhang, M.; Chen, H.; Yuan, D.; Liu, L.; Liu, Q. Defenses in Adversarial Machine Learning: A Survey. arXiv 2023, arXiv:2312.08890.
- Cheng, X.; Fu, K.; Farnia, F. Stability and Generalization in Free Adversarial Training. arXiv 2025, arXiv:2404.08980.
- Bountakas, P.; Zarras, A.; Lekidis, A.; Xenakis, C. Defense strategies for Adversarial Machine Learning: A survey. Comput. Sci. Rev. 2023, 49, 100573.
- Li, L.; Spratling, M. Data Augmentation Alone Can Improve Adversarial Training. arXiv 2023, arXiv:2301.09879.
- Luo, R.; Wang, Y.; Wang, Y. Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning. arXiv 2023, arXiv:2303.01289.
- Qin, C.; Martens, J.; Gowal, S.; Krishnan, D.; Dvijotham, K.; Fawzi, A.; De, S.; Stanforth, R.; Kohli, P. Adversarial Robustness through Local Linearization. arXiv 2019, arXiv:1907.02610.
- Rebuffi, S.A.; Gowal, S.; Calian, D.A.; Stimberg, F.; Wiles, O.; Mann, T. Data Augmentation Can Improve Robustness. arXiv 2021, arXiv:2111.05328.
- Li, L.; Qiu, J.; Spratling, M. AROID: Improving Adversarial Robustness Through Online Instance-Wise Data Augmentation. arXiv 2024, arXiv:2306.07197.
- Cuturi, M. Sinkhorn Distances: Lightspeed Computation of Optimal Transportation Distances. arXiv 2013, arXiv:1306.0895.
- Krizhevsky, A. Learning Multiple Layers of Features from Tiny Images. 2009. Available online: https://api.semanticscholar.org/CorpusID:18268744 (accessed on 3 February 2026).







| Step | Description |
|---|---|
| Input | Original samples X, target model f, generator G, discriminator D, FSD weights w_LL, w_LH, w_HL, w_HH, loss weights α, β, and training steps N. |
| Output | Adversarial samples X_adv. |
| 1 | Initialize generator G, discriminator D, and target model f. |
| 2 | For t = 1 to N do |
| 3 | Generate perturbations by passing X through G: δ = G(X). |
| 4 | Compute adversarial samples: X_adv = X + δ. |
| 5 | Compute discriminator loss L_D: L_D = −E[log D(X)] − E[log(1 − D(X_adv))]. |
| 6 | Compute generator adversarial loss L_GAN: L_GAN = E[log(1 − D(X_adv))]. |
| 7 | Compute adversarial loss for the target model: L_adv = −ℓ_f(X_adv, y). |
| 8 | Compute frequency sub-band difference loss: L_FSD = FSD(X, X_adv). |
| 9 | Compute total generator loss: L_G = L_GAN + α·L_adv + β·L_FSD. |
| 10 | Update generator G using L_G. |
| 11 | Update discriminator D using L_D. |
| 12 | End for |
| 13 | Generate final perturbation δ = G(X). |
| 14 | Generate final adversarial samples X_adv = X + δ. |
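The per-iteration computations in the training algorithm can be summarized as a schematic step. In the sketch below, G, D, the target loss, and the FSD term are stand-in callables supplied by the caller; a real implementation would backpropagate through neural networks rather than evaluate toy lambdas.

```python
import numpy as np

def train_step(x, G, D, target_loss, fsd, alpha=10.0, beta=0.7):
    """One Wavelet-AdvGAN-style forward pass, with stand-in components.

    Returns the adversarial sample and the generator/discriminator losses.
    """
    delta = G(x)                      # step 3: generator perturbation
    x_adv = x + delta                 # step 4: adversarial sample
    eps = 1e-12
    # Step 5: discriminator loss (real -> 1, adversarial -> 0).
    l_d = -np.log(D(x) + eps) - np.log(1.0 - D(x_adv) + eps)
    # Steps 6-9: generator loss terms and their weighted sum.
    l_gan = np.log(1.0 - D(x_adv) + eps)
    l_adv = -target_loss(x_adv)
    l_g = l_gan + alpha * l_adv + beta * fsd(x, x_adv)
    return x_adv, l_g, l_d

x = np.full((4, 4), 0.5)
x_adv, l_g, l_d = train_step(
    x,
    G=lambda x: 0.01 * np.ones_like(x),          # fixed small perturbation
    D=lambda x: 0.5,                             # uninformative discriminator
    target_loss=lambda x: float(np.mean(x)),     # toy classification loss
    fsd=lambda a, b: float(np.mean(np.abs(a - b))),
)
print(x_adv.shape)  # (4, 4)
```

Steps 10-11 of the algorithm would then apply optimizer updates using l_g and l_d; they are omitted here because the stand-ins have no parameters.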
| Step | Description |
|---|---|
| Input | Real samples X, adversarial samples X_adv, weights w_LL, w_LH, w_HL, w_HH, batch size N. |
| Output | Average FSD loss L_FSD. |
| 1 | Initialize: compute the 2D discrete wavelet transform (DWT) of each sample: DWT(X_i) = (LL_i, LH_i, HL_i, HH_i). Set L_total = 0. |
| 2 | For each sample pair (X_i, X_adv,i): compute the sub-band coefficients of X_i and X_adv,i; compute the Wasserstein distance for each sub-band, d_LL = W(LL_i, LL_adv,i), and likewise d_LH, d_HL, d_HH, where W is the Sinkhorn distance implemented via the Sinkhorn–Knopp algorithm (maximum iterations = 100) with the Euclidean distance between pixel pairs as the cost matrix (see Section 3.2); compute the weighted loss for sample i: ℓ_i = w_LL·d_LL + w_LH·d_LH + w_HL·d_HL + w_HH·d_HH. |
| 3 | Aggregate the loss across the batch: L_total ← L_total + ℓ_i. |
| 4 | Output the final loss: average over the batch, L_FSD = L_total / N, and return L_FSD. |
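A minimal sketch of this batch computation follows, assuming a single-level Haar DWT and substituting a mean-absolute-difference stand-in for the Sinkhorn distance; the weight values are placeholders, not the paper's defaults.

```python
import numpy as np

def haar_subbands(x):
    """Single-level 2D Haar DWT -> (LL, LH, HL, HH) for an even-sized array."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    return ((a[0::2] + a[1::2]) / 2.0, (a[0::2] - a[1::2]) / 2.0,
            (d[0::2] + d[1::2]) / 2.0, (d[0::2] - d[1::2]) / 2.0)

def fsd_loss(batch, batch_adv, weights=(0.4, 0.2, 0.2, 0.2)):
    """Average weighted sub-band discrepancy over a batch.

    `weights` order is (w_LL, w_LH, w_HL, w_HH); the values are
    placeholders, and the mean absolute difference below stands in
    for the Sinkhorn distance used in the paper.
    """
    total = 0.0
    for x, x_adv in zip(batch, batch_adv):
        subs = haar_subbands(x)
        subs_adv = haar_subbands(x_adv)
        total += sum(w * np.mean(np.abs(s - sa))
                     for w, s, sa in zip(weights, subs, subs_adv))
    return total / len(batch)

batch = [np.random.rand(8, 8) for _ in range(2)]
print(fsd_loss(batch, batch))  # 0.0 for identical batches
```

Swapping the stand-in metric for the Sinkhorn distance of Section 3.2 recovers the algorithm above; the loop structure and the weighted aggregation are unchanged.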
| Defense | Method | ResNet18 | ResNet50 | Vgg11 | Vgg16 | DenseNet121 |
|---|---|---|---|---|---|---|
| DE | FGSM | 75.21 | 76.28 | 77.56 | 78.39 | 77.62 |
| DE | PGD | 81.66 | 82.61 | 82.47 | 83.39 | 82.74 |
| DE | C&W | 90.37 | 90.63 | 93.66 | 91.94 | 93.54 |
| DE | AdvGAN | 92.82 | 92.91 | 94.78 | 95.89 | 95.17 |
| DE | AIGAN | 93.18 | 93.34 | 95.09 | 97.09 | 96.23 |
| DE | Ours | 96.17 | 96.20 | 95.52 | 96.31 | 96.69 |
| Adv-DE | FGSM | 46.26 | 45.85 | 51.11 | 53.13 | 48.19 |
| Adv-DE | PGD | 45.00 | 47.09 | 43.16 | 45.10 | 52.75 |
| Adv-DE | C&W | 87.52 | 84.87 | 93.94 | 94.03 | 83.17 |
| Adv-DE | AdvGAN | 92.92 | 93.19 | 93.39 | 91.25 | 92.42 |
| Adv-DE | AIGAN | 92.29 | 89.13 | 88.97 | 86.04 | 92.95 |
| Adv-DE | Ours | 95.71 | 95.18 | 92.86 | 94.41 | 94.97 |
| Source Model | Method | ResNet18 | ResNet50 | Vgg11 | Vgg16 | DenseNet121 |
|---|---|---|---|---|---|---|
| ResNet18 | AdvGAN | 92.82 | 90.51 | 90.93 | 90.15 | 90.78 |
| ResNet18 | AIGAN | 93.18 | 89.92 | 90.61 | 88.13 | 87.65 |
| ResNet18 | Ours | 96.17 | 94.29 | 94.57 | 93.62 | 92.98 |
| ResNet50 | AdvGAN | 90.52 | 92.91 | 92.73 | 92.33 | 90.53 |
| ResNet50 | AIGAN | 87.84 | 93.34 | 92.20 | 88.71 | 87.08 |
| ResNet50 | Ours | 90.15 | 96.20 | 91.98 | 89.27 | 91.56 |
| Vgg11 | AdvGAN | 74.09 | 77.24 | 94.78 | 88.04 | 77.00 |
| Vgg11 | AIGAN | 68.36 | 68.80 | 95.09 | 89.94 | 69.72 |
| Vgg11 | Ours | 76.80 | 78.22 | 95.52 | 88.37 | 80.07 |
| Vgg16 | AdvGAN | 75.84 | 74.49 | 94.73 | 95.89 | 76.48 |
| Vgg16 | AIGAN | 76.14 | 73.49 | 95.20 | 97.09 | 79.65 |
| Vgg16 | Ours | 78.31 | 76.21 | 93.77 | 96.31 | 77.76 |
| DenseNet121 | AdvGAN | 91.10 | 92.35 | 92.75 | 92.62 | 95.17 |
| DenseNet121 | AIGAN | 89.72 | 90.42 | 91.73 | 91.44 | 96.23 |
| DenseNet121 | Ours | 87.61 | 92.54 | 91.69 | 90.77 | 96.69 |
| Source Model | Method | ResNet18 | ResNet50 | Vgg11 | Vgg16 | DenseNet121 |
|---|---|---|---|---|---|---|
| ResNet18 | AdvGAN | 92.92 | 61.40 | 55.03 | 71.35 | 60.19 |
| ResNet18 | AIGAN | 92.29 | 59.61 | 55.61 | 71.09 | 63.13 |
| ResNet18 | Ours | 95.71 | 75.61 | 58.24 | 69.33 | 72.28 |
| ResNet50 | AdvGAN | 59.70 | 93.19 | 54.41 | 69.78 | 48.61 |
| ResNet50 | AIGAN | 52.02 | 89.13 | 50.81 | 70.10 | 43.24 |
| ResNet50 | Ours | 79.24 | 95.18 | 58.23 | 71.72 | 71.71 |
| Vgg11 | AdvGAN | 67.11 | 64.64 | 93.39 | 87.48 | 60.59 |
| Vgg11 | AIGAN | 49.44 | 49.44 | 88.97 | 75.76 | 46.85 |
| Vgg11 | Ours | 67.19 | 65.56 | 92.86 | 83.04 | 68.72 |
| Vgg16 | AdvGAN | 59.41 | 65.13 | 75.26 | 91.25 | 54.92 |
| Vgg16 | AIGAN | 52.78 | 50.34 | 66.97 | 86.04 | 44.99 |
| Vgg16 | Ours | 70.79 | 64.23 | 84.38 | 94.41 | 68.42 |
| DenseNet121 | AdvGAN | 79.57 | 79.57 | 55.36 | 68.90 | 92.42 |
| DenseNet121 | AIGAN | 82.37 | 65.67 | 55.31 | 68.56 | 92.95 |
| DenseNet121 | Ours | 85.39 | 74.46 | 56.26 | 70.64 | 94.97 |
| Method | Defense | ResNet18 | ResNet50 | Vgg11 | Vgg16 | DenseNet121 |
|---|---|---|---|---|---|---|
| GE-AdvGAN | DE | 85.34 | 86.68 | 82.67 | 86.99 | 87.42 |
| GE-AdvGAN | Adv-DE | 52.96 | 90.50 | 81.26 | 87.65 | 81.42 |
| Ours | DE | 95.79 | 93.93 | 93.66 | 94.75 | 94.89 |
| Ours | Adv-DE | 92.98 | 92.97 | 91.09 | 92.20 | 93.89 |
| Norm | AdvGAN | AIGAN | Ours |
|---|---|---|---|
| L0 | 3062 | 3063 | 3061 |
| L2 | 1.9848 | 1.9492 | 1.9975 |
| L∞ | 0.0079 | 0.0079 | 0.0079 |
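The three perturbation norms reported above can be computed from a perturbation δ = x_adv − x as follows; L0 counts changed pixels, which is the sparsity measure discussed in the paper.

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Sparsity/magnitude metrics for a perturbation delta = x_adv - x."""
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),    # number of changed pixels
        "L2": float(np.linalg.norm(delta)),    # overall perturbation energy
        "Linf": float(np.max(np.abs(delta))),  # largest single-pixel change
    }

x = np.zeros((2, 2))
x_adv = np.array([[0.1, 0.0], [0.0, -0.2]])
print(perturbation_norms(x, x_adv))
```

For this toy pair the result is L0 = 2 (two pixels changed), L2 = √0.05 ≈ 0.224, and L∞ = 0.2.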
| Method | C&W | AdvGAN | AIGAN | Ours |
|---|---|---|---|---|
| Time (s/sample) | >1 | <1 | <1 | <1 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Bi, M.; Liang, X.; Wang, B.; Liu, L.; Yin, X.; Liu, J. Adversarial Example Generation Method Based on Wavelet Transform. Information 2026, 17, 182. https://doi.org/10.3390/info17020182

