Backdoor Training Paradigm in Generative Adversarial Networks
Abstract
1. Introduction
2. Related Works
2.1. Backdoor Training Protection
2.2. Generative Adversarial Network (GAN)
- Initialize the parameters θ_D for the discriminator D and θ_G for the generator G.
- Sample m real data points {x^(1), …, x^(m)} from the true data distribution p_data(x). Simultaneously, sample m noise vectors {z^(1), …, z^(m)} from a prior noise distribution p_z(z). Pass these noise vectors through the generator to produce the corresponding fake samples G(z^(1)), …, G(z^(m)).
- Alternately train D and G as follows:
- (a) Fix G and optimize D to improve its ability to distinguish real samples from generated ones.
- (b) Fix D and optimize G to produce samples that maximize the probability of fooling D. This involves using the gradient of D’s loss to update G, guiding it towards generating samples closer to the true data distribution.
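The alternating procedure above can be sketched as follows (a minimal PyTorch illustration; the non-saturating generator loss, toy network sizes, and the shifted-Gaussian stand-in for the real data are assumptions for demonstration only):

```python
import torch
import torch.nn as nn

def train_gan_step(G, D, opt_G, opt_D, real_batch, noise_dim):
    """One round of the alternating GAN update: (a) update D, (b) update G."""
    m = real_batch.size(0)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(m, 1), torch.zeros(m, 1)

    # (a) Fix G, optimize D to distinguish real from generated samples.
    z = torch.randn(m, noise_dim)
    fake = G(z).detach()                     # detach: D's step must not touch G
    d_loss = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # (b) Fix D, optimize G to maximize the probability of fooling D.
    z = torch.randn(m, noise_dim)
    g_loss = bce(D(G(z)), ones)              # non-saturating generator loss
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()

# Toy 2-D data and tiny networks (illustrative sizes only).
noise_dim, data_dim = 4, 2
G = nn.Sequential(nn.Linear(noise_dim, 16), nn.ReLU(), nn.Linear(16, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)

for _ in range(5):
    real = torch.randn(64, data_dim) + 3.0   # stand-in for samples from p_data
    d_loss, g_loss = train_gan_step(G, D, opt_G, opt_D, real, noise_dim)
```

Detaching the fake batch in step (a) is what "fix G" means in practice: the discriminator's gradient step sees the generated samples as constants.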
2.3. Diffusion Models
3. Backdoor Training Paradigm in Generative Models
3.1. Backdoor Training in GANs
3.2. Backdoor Training in Diffusion Models
3.3. Backdoor Training Paradigm
4. Threat Model
- Attacker’s Capabilities: In our model, we assume that the attacker has full control over the training process, allowing them to modify both the training data and loss function. This enables the attacker to introduce a backdoor mechanism that does not interfere with normal operations but activates under specific conditions.
- Attack Objectives: The goal of the attacker is to ensure that the model behaves normally when given clean inputs but generates targeted, manipulated outputs when presented with adversarial triggers. This mechanism allows the backdoor to remain hidden under standard verification while being reliably triggered when needed. The backdoor can serve various purposes, such as unauthorized content generation, watermarking for intellectual property protection, or adversarial control over outputs.
- Attack Methods: To achieve this, the attacker incorporates a backdoor loss term into the training objective, resulting in the final optimization L_total = L_original + λ · L_backdoor, where L_original is the model’s standard training loss, L_backdoor enforces the trigger-to-target behavior, and λ balances stealthiness on clean inputs against attack effectiveness.
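A hedged sketch of this combined objective (the MSE backdoor term, the shifted-noise trigger, and all names here are illustrative, not the paper's exact per-model formulation):

```python
import torch
import torch.nn as nn

def combined_loss(clean_loss, G, trigger_z, target, lam=0.1):
    """L_total = L_original + lam * L_backdoor.

    The backdoor term used here is a simple MSE that pulls the generator's
    output on trigger-stamped noise toward the attacker's chosen target;
    concrete attacks substitute model-specific terms."""
    backdoor_loss = torch.mean((G(trigger_z) - target) ** 2)
    return clean_loss + lam * backdoor_loss

# Illustrative shapes only: a linear "generator" and one triggered noise batch.
G = nn.Linear(8, 32)
trigger_z = torch.randn(4, 8) + 5.0   # hypothetical trigger: shifted noise
target = torch.zeros(4, 32)           # attacker-chosen target output
clean_loss = torch.tensor(1.0)        # stand-in for the ordinary GAN loss
total = combined_loss(clean_loss, G, trigger_z, target)
```

Because the backdoor term is only active on triggered inputs, minimizing L_total leaves behavior on clean inputs close to that of an honestly trained model.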
5. Experiment
5.1. Regularization in GAN Models
- Regularization in DCGAN
- Regularization in SRGAN
- Regularization in CycleGAN
5.2. Experimental Metrics Explanation
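The FID, PSNR, and SSIM values reported below follow their standard definitions. As one concrete example, PSNR (in dB) can be computed with the short NumPy sketch that follows; SSIM and FID require windowed statistics and an Inception embedding, respectively, and are typically taken from library implementations:

```python
import numpy as np

def psnr(x, y, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equal-shaped images.

    PSNR = 10 * log10(MAX^2 / MSE); higher means the images are closer."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mse = np.mean((x - y) ** 2)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((8, 8))
b = np.full((8, 8), 255.0)
print(psnr(a, a))   # inf: identical images
print(psnr(a, b))   # 0.0: maximal per-pixel error at max_val = 255
```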
5.3. Experimental Setting and Results
5.4. Robustness Against Fine-Tuning Attacks
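The fine-tuning attack evaluated here can be sketched as continued training of the (possibly backdoored) model on clean data only, in the hope that the clean objective overwrites the hidden behavior. The loop below is an assumption-laden illustration (the loss, optimizer, and toy linear model are placeholders, not the paper's setup):

```python
import torch
import torch.nn as nn

def fine_tune_attack(model, clean_batches, lr=1e-4, epochs=1):
    """Continue training on clean (input, target) pairs only, attempting to
    erase any backdoor while preserving normal behavior."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for x, y in clean_batches:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model

# Toy demonstration: a linear model fine-tuned on a small clean set.
model = nn.Linear(4, 4)
clean = [(torch.randn(8, 4), torch.randn(8, 4)) for _ in range(3)]
model = fine_tune_attack(model, clean)
```

Robustness against this attack means the backdoor still fires on triggered inputs after such clean-data updates.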
6. Discussion
- Strengths
- Weaknesses
7. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Data Availability Statement
Conflicts of Interest
References
- Oussidi, A.; Elhassouny, A. Deep generative models: Survey. In Proceedings of the 2018 International Conference on Intelligent Systems and Computer Vision (ISCV), Fez, Morocco, 2–4 April 2018; pp. 1–8. [Google Scholar]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2015, arXiv:1511.06434. [Google Scholar]
- Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 12–19 June 2020; pp. 8110–8119. [Google Scholar]
- Karras, T.; Aittala, M.; Laine, S.; Härkönen, E.; Hellsten, J.; Lehtinen, J.; Aila, T. Alias-free generative adversarial networks. Adv. Neural Inf. Process. Syst. 2021, 34, 852–863. [Google Scholar]
- Weng, L. From GAN to WGAN. arXiv 2019, arXiv:1904.08994. [Google Scholar]
- Brock, A.; Donahue, J.; Simonyan, K. Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv 2018, arXiv:1809.11096. [Google Scholar]
- Karras, T.; Laine, S.; Aila, T. A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv 2019, arXiv:1812.04948. [Google Scholar]
- Chen, H.; Wang, Z.; Li, X.; Sun, X.; Chen, F.; Liu, J.; Wang, J.; Raj, B.; Liu, Z.; Barsoum, E. SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer. arXiv 2024, arXiv:2412.10958. [Google Scholar]
- Walker, J.; Razavi, A.; Oord, A.V.D. Predicting video with VQVAE. arXiv 2021, arXiv:2103.01950. [Google Scholar]
- Liu, Y.; Liu, Z.; Li, S.; Yu, Z.; Guo, Y.; Liu, Q.; Wang, G. Cloud-VAE: Variational autoencoder with concepts embedded. Pattern Recognit. 2023, 140, 109530. [Google Scholar] [CrossRef]
- Razavi, A.; Van den Oord, A.; Vinyals, O. Generating diverse high-fidelity images with VQ-VAE-2. Adv. Neural Inf. Process. Syst. 2019, 32, 14866–14876. [Google Scholar]
- Yang, L.; Zhang, Z.; Song, Y.; Hong, S.; Xu, R.; Zhao, Y.; Zhang, W.; Cui, B.; Yang, M.-H. Diffusion models: A comprehensive survey of methods and applications. ACM Comput. Surv. 2023, 56, 1–39. [Google Scholar] [CrossRef]
- Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
- Zheng, K.; Lu, C.; Chen, J.; Zhu, J. DPM-Solver-V3: Improved diffusion ODE solver with empirical model statistics. Adv. Neural Inf. Process. Syst. 2023, 36, 55502–55542. [Google Scholar]
- Song, Y.; Sohl-Dickstein, J.; Kingma, D.P.; Kumar, A.; Ermon, S.; Poole, B. Score-Based Generative Modeling through Stochastic Differential Equations. In Proceedings of the International Conference on Learning Representations (ICLR), 2021. Available online: https://openreview.net/forum?id=PxTIG12RRHS (accessed on 8 March 2025).
- Tian, K.; Jiang, Y.; Yuan, Z.; Peng, B.; Wang, L. Visual autoregressive modeling: Scalable image generation via next-scale prediction. arXiv 2024, arXiv:2404.02905. [Google Scholar]
- Huang, Y.; Huang, J.; Liu, Y.; Yan, M.; Lv, J.; Liu, J.; Xiong, W.; Zhang, H.; Chen, S.; Cao, L. Diffusion model-based image editing: A survey. arXiv 2024, arXiv:2402.17525. [Google Scholar] [CrossRef]
- Moser, B.B.; Shanbhag, A.S.; Raue, F.; Frolov, S.; Palacio, S.; Dengel, A. Diffusion models, image super-resolution, and everything: A survey. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 1–21. [Google Scholar] [CrossRef]
- Huang, R.; Huang, J.; Yang, D.; Ren, Y.; Liu, L.; Li, M.; Ye, Z.; Liu, J.; Yin, X.; Zhao, Z. Make-an-audio: Text-to-audio generation with prompt-enhanced diffusion models. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 13916–13932. [Google Scholar]
- Liu, H.; Chen, Z.; Yuan, Y.; Mei, X.; Liu, X.; Mandic, D.; Wang, W.; Plumbley, M.D. Audioldm: Text-to-audio generation with latent diffusion models. arXiv 2023, arXiv:2301.12503. [Google Scholar]
- Xing, Z.; Feng, Q.; Chen, H.; Dai, Q.; Hu, H.; Xu, H.; Wu, Z.; Jiang, Y.-G. A survey on video diffusion models. ACM Comput. Surv. 2024, 57, 1–42. [Google Scholar] [CrossRef]
- Yang, L.; Yu, Z.; Meng, C.; Xu, M.; Ermon, S.; Bin, C. Mastering text-to-image diffusion: Recaptioning, planning, and generating with multimodal LLMs. In Proceedings of the Forty-first International Conference on Machine Learning, Vienna, Austria, 21–27 July 2024. [Google Scholar]
- Wang, T.; Zhang, Y.; Qi, S.; Zhao, R.; Xia, Z.; Weng, J. Security and privacy on generative data in AIGC: A survey. ACM Comput. Surv. 2023, 57, 82. [Google Scholar] [CrossRef]
- Feretzakis, G.; Papaspyridis, K.; Gkoulalas-Divanis, A.; Verykios, V.S. Privacy-Preserving Techniques in Generative AI and Large Language Models: A Narrative Review. Information 2024, 15, 697. [Google Scholar] [CrossRef]
- Vyas, N.; Kakade, S.M.; Barak, B. On provable copyright protection for generative models. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 35277–35299. [Google Scholar]
- Samuelson, P. Generative AI meets copyright. Science 2023, 381, 158–161. [Google Scholar] [CrossRef]
- Golda, A.; Mekonen, K.; Pandey, A.; Singh, A.; Hassija, V.; Chamola, V.; Sikdar, B. Privacy and Security Concerns in Generative AI: A Comprehensive Survey. IEEE Access 2024, 12, 48126–48144. [Google Scholar] [CrossRef]
- Shimomura, Y.; Tomiyama, T. Service modeling for service engineering. In Proceedings of the International Working Conference on the Design of Information Infrastructure Systems for Manufacturing, Osaka, Japan, 18–20 November 2002; pp. 31–38. [Google Scholar]
- Gu, T.; Dolan-Gavitt, B.; Garg, S. Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv 2017, arXiv:1708.06733. [Google Scholar]
- Li, Y.; Zhai, T.; Wu, B.; Jiang, Y.; Li, Z.; Xia, S. Rethinking the Trigger of Backdoor Attack. arXiv 2020, arXiv:2004.04692. [Google Scholar]
- Barni, M.; Kallas, K.; Tondi, B. A New Backdoor Attack in CNNs by Training Set Corruption Without Label Poisoning. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 101–105. [Google Scholar]
- Gao, Y.; Wu, D.; Zhang, J.; Gan, G.; Xia, S.-T.; Niu, G.; Sugiyama, M. On the Effectiveness of Adversarial Training Against Backdoor Attacks. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 14878–14888. [Google Scholar] [CrossRef]
- Xiang, Z.; Miller, D.J.; Kesidis, G. Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios. arXiv 2022, arXiv:2201.08474. [Google Scholar]
- Weng, C.-H.; Lee, Y.-T.; Wu, S.-H.B. On the trade-off between adversarial and backdoor robustness. Adv. Neural Inf. Process. Syst. 2020, 33, 11973–11983. [Google Scholar]
- Dong, Y.; Yang, X.; Deng, Z.; Pang, T.; Xiao, Z.; Su, H.; Zhu, J. Black-Box Detection of Backdoor Attacks with Limited Information and Data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 16482–16491. [Google Scholar]
- Chen, W.; Wu, B.; Wang, H. Effective Backdoor Defense by Exploiting Sensitivity of Poisoned Samples. Adv. Neural Inf. Process. Syst. 2022, 35, 9727–9737. [Google Scholar] [CrossRef]
- Li, Y.; Li, Y.; Wu, B.; Li, L.; He, R.; Lyu, S. Invisible Backdoor Attack with Sample-Specific Triggers. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 16463–16472. [Google Scholar]
- Yao, Y.; Li, H.; Zheng, H.; Zhao, B.Y. Latent Backdoor Attacks on Deep Neural Networks. In Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security, London, UK, 11–15 November 2019; ACM: New York, NY, USA, 2019; pp. 2041–2055. [Google Scholar]
- Bird, C.; Ungless, E.; Kasirzadeh, A. Typology of Risks of Generative Text-to-Image Models. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Montréal, QC, Canada, 8–10 August 2023; pp. 396–410. [Google Scholar]
- Barnett, J. The Ethical Implications of Generative Audio Models: A Systematic Literature Review. In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Montréal, QC, Canada, 8–10 August 2023; pp. 146–161. [Google Scholar]
- Liu, Y.; Yao, Y.; Ton, J.-F.; Zhang, X.; Cheng, R.; Guo, H.; Klochkov, Y.; Taufiq, M.F.; Li, H. Trustworthy LLMs: A Survey and Guideline for Evaluating Large Language Models’ Alignment. arXiv 2023, arXiv:2308.05374. [Google Scholar]
- Somepalli, G.; Singla, V.; Goldblum, M.; Geiping, J.; Goldstein, T. Understanding and Mitigating Copying in Diffusion Models. Adv. Neural Inf. Process. Syst. 2023, 36, 47783–47803. [Google Scholar]
- Fei, J.; Xia, Z.; Tondi, B.; Barni, M. Supervised GAN Watermarking for Intellectual Property Protection. In Proceedings of the 2022 IEEE International Workshop on Information Forensics and Security (WIFS), Shanghai, China, 12–16 December 2022; pp. 1–6. [Google Scholar]
- Singh, H.K.; Baranwal, N.; Singh, K.N.; Singh, A.K.; Zhou, H. GAN-Based Watermarking for Encrypted Images in Healthcare Scenarios. Neurocomputing 2023, 560, 126853. [Google Scholar] [CrossRef]
- Lin, D.; Tondi, B.; Li, B.; Barni, M. A CycleGAN Watermarking Method for Ownership Verification. IEEE Trans. Dependable Secure Comput. 2024, 1–15. [Google Scholar] [CrossRef]
- Wu, J.; Shi, H.; Zhang, S.; Lei, Z.; Yang, Y.; Li, S.Z. De-Mark GAN: Removing Dense Watermark with Generative Adversarial Network. In Proceedings of the 2018 International Conference on Biometrics (ICB), Gold Coast, QLD, Australia, 20–23 February 2018; pp. 69–74. [Google Scholar]
- Uchida, Y.; Nagai, Y.; Sakazawa, S.; Satoh, S. Embedding watermarks into deep neural networks. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, Bucharest, Romania, 6–9 June 2017; ACM: New York, NY, USA, 2017; pp. 269–277. [Google Scholar]
- O’Shea, K.; Nash, R. An Introduction to Convolutional Neural Networks. arXiv 2015, arXiv:1511.08458. [Google Scholar]
- Clements, J.; Lao, Y. DeepHardMark: Towards watermarking neural network hardware. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 22–28 February 2022; Volume 36, Number 4. pp. 4450–4458. [Google Scholar]
- Lao, Y.; Zhao, W.; Yang, P.; Li, P. DeepAuth: A DNN authentication framework by model-unique and fragile signature embedding. In Proceedings of the AAAI Conference on Artificial Intelligence, Virtual Conference, 22–28 February 2022; Volume 36, Number 9. pp. 9595–9603. [Google Scholar]
- Creswell, A.; White, T.; Dumoulin, V.; Arulkumaran, K.; Sengupta, B.; Bharath, A.A. Generative adversarial networks: An overview. IEEE Signal Process. Mag. 2018, 35, 53–65. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680. [Google Scholar]
- Hershey, J.R.; Olsen, P.A. Approximating the Kullback-Leibler divergence between Gaussian mixture models. In Proceedings of the 2007 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Honolulu, HI, USA, 15–20 April 2007; pp. IV–317. [Google Scholar]
- Salem, A.; Sautter, Y.; Backes, M.; Humbert, M.; Zhang, Y. Baaan: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models. arXiv 2020, arXiv:2010.03007. [Google Scholar]
- Rawat, A.; Levacher, K.; Sinn, M. The Devil Is in the GAN: Backdoor Attacks and Defenses in Deep Generative Models. In Proceedings of the 27th European Symposium on Research in Computer Security, Copenhagen, Denmark, 26–30 September 2022; Springer: Cham, Switzerland, 2022; pp. 776–783. [Google Scholar]
- Zhu, L.; Ning, R.; Wang, C.; Xin, C.; Wu, H. Gangsweep: Sweep out Neural Backdoors by GAN. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; ACM: New York, NY, USA, 2020; pp. 3173–3181. [Google Scholar]
- Ong, D.S.; Chan, C.S.; Ng, K.W.; Fan, L.; Yang, Q. Protecting Intellectual Property of Generative Adversarial Networks from Ambiguity Attacks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 3630–3639. [Google Scholar]
- Ledig, C.; Theis, L.; Huszár, F.; Caballero, J.; Cunningham, A.; Acosta, A.; Aitken, A.; Tejani, A.; Totz, J.; Wang, Z.; et al. Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4681–4690. [Google Scholar]
- Zhu, J.-Y.; Park, T.; Isola, P.; Efros, A.A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2223–2232. [Google Scholar]
- Chen, J.; Xiong, H.; Zheng, H.; Zhang, J.; Liu, Y. Dyn-backdoor: Backdoor Attack on Dynamic Link Prediction. IEEE Trans. Netw. Sci. Eng. 2023, 11, 525–542. [Google Scholar] [CrossRef]
- Ding, Y.; Wang, Z.; Qin, Z.; Zhou, E.; Zhu, G.; Qin, Z.; Choo, K.-K.R. Backdoor Attack on Deep Learning-Based Medical Image Encryption and Decryption Network. IEEE Trans. Inf. Forensics Secur. 2023, 19, 280–292. [Google Scholar] [CrossRef]
- Chou, S.-Y.; Chen, P.-Y.; Ho, T.-Y. How to Backdoor Diffusion Models? In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 4015–4024. [Google Scholar]
- Struppek, L.; Hintersdorf, D.; Kersting, K. Rickrolling the Artist: Injecting Backdoors into Text-Guided Image Generation Models. In Proceedings of the International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; IEEE: Piscataway, NJ, USA, 2023. [Google Scholar]
- Zhai, S.; Dong, Y.; Shen, Q.; Pu, S.; Fang, Y.; Su, H. Text-to-Image Diffusion Models Can Be Easily Backdoored through Multimodal Data Poisoning. In Proceedings of the 31st ACM International Conference on Multimedia, Ottawa, ON, Canada, 29 October–3 November 2023; ACM: New York, NY, USA, 2023; pp. 1577–1587. [Google Scholar]
- Li, S.; Ma, J.; Cheng, M. Invisible Backdoor Attacks on Diffusion Models. arXiv 2024, arXiv:2406.00816. [Google Scholar]
- Jiang, W.; Li, H.; He, J.; Zhang, R.; Xu, G.; Zhang, T.; Lu, R. Backdoor Attacks against Image-to-Image Networks. arXiv 2024, arXiv:2407.10445. [Google Scholar]
- Dong, C.; Loy, C.C.; Tang, X. Accelerating the super-resolution convolutional neural network. In Proceedings of the 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, The Netherlands, 11–14 October 2016; Springer: Cham, Switzerland, 2016; pp. 391–407. [Google Scholar]
- Isola, P.; Zhu, J.-Y.; Zhou, T.; Efros, A.A. Image-to-Image Translation with Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 1125–1134. [Google Scholar]
- Yu, Y.; Zhang, W.; Deng, Y. Fréchet Inception Distance (FID) for Evaluating GANs; China University of Mining Technology Beijing Graduate School: Beijing, China, 2021; Volume 3. [Google Scholar]
- Hore, A.; Ziou, D. Image quality metrics: PSNR vs. SSIM. In Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey, 23–26 August 2010; pp. 2366–2369. [Google Scholar]
- De Boer, P.-T.; Kroese, D.P.; Mannor, S.; Rubinstein, R.Y. A tutorial on the cross-entropy method. Ann. Oper. Res. 2005, 134, 19–67. [Google Scholar] [CrossRef]
- Amari, S. Backpropagation and stochastic gradient descent method. Neurocomputing 1993, 5, 185–196. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Hinton, G. Learning Multiple Layers of Features from Tiny Images; University of Toronto: Toronto, ON, Canada, 2009. [Google Scholar]
- Wah, C.; Branson, S.; Welinder, P.; Perona, P.; Belongie, S. The Caltech-UCSD Birds-200-2011 Dataset; California Institute of Technology: Pasadena, CA, USA, 2011. [Google Scholar]
- Deng, J.; Dong, W.; Socher, R.; Li, L.-J.; Li, K.; Fei-Fei, L. ImageNet: A large-scale hierarchical image database. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 248–255. [Google Scholar]
- Huang, J.-B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 5197–5206. [Google Scholar]
- Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human-segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision (ICCV), Vancouver, BC, Canada, 7–14 July 2001; pp. 416–423. [Google Scholar]
- Cordts, M.; Omran, M.; Ramos, S.; Rehfeld, T.; Enzweiler, M.; Benenson, R.; Franke, U.; Roth, S.; Schiele, B. The Cityscapes Dataset for Semantic Urban Scene Understanding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 3213–3223. [Google Scholar]
Method | Original Model Loss | Backdoor Loss
---|---|---
DCGAN [2,57] | |
SRGAN [57,58] | |
[57,59] | |
[57,59] | |
ConditionGAN [54] | |
DCGAN [2,55] | |
GangSweep [56] | |
Dyn-Backdoor [60] | |
EncDec Network [61] | |
Method | Original Model Loss | Backdoor Loss
---|---|---
BadDiffusion [62] | |
Rickrolling-TPA [63] | |
Rickrolling-TAA [63] | |
Multimodal-Pixel [64] | |
Multimodal-Object [64] | |
Multimodal-Style [64] | |
Invisible [65] | |
I2I-Model [66] | |
Method | Dataset | FID ↓ [69] | | Time (s)
---|---|---|---|---
DCGAN [2] | CIFAR-10 [73] | – | – |
+ backdoor | CIFAR-10 [73] | – | – | 11,705
DCGAN [2] | CUB-200 [74] | – | – | 12,102
+ backdoor | CUB-200 [74] | – | – | 15,140

Method | Train | Test | PSNR ↑ [70] | SSIM ↑ [70] | Time (s)
---|---|---|---|---|---
SRGAN [58] | ImageNet [75] | Set5 [76] | | | 58,402
+ backdoor | ImageNet [75] | Set5 [76] | | | 70,374
SRGAN [58] | ImageNet [75] | Set14 [76] | | | 58,402
+ backdoor | ImageNet [75] | Set14 [76] | | | 70,374
SRGAN [58] | ImageNet [75] | BSD100 [77] | | | 58,402
+ backdoor | ImageNet [75] | BSD100 [77] | | | 70,374

Method | Dataset | Per-pixel acc. ↑ | Per-class acc. ↑ | Class IoU ↑ | Time (s)
---|---|---|---|---|---
CycleGAN [59] | Cityscapes [78] | | | | 94,902
+ backdoor | Cityscapes [78] | | | | 108,226
Method | Dataset | FID ↓ [69] |
---|---|---|---
DCGAN [2] | CIFAR-10 [73] | – | –
+ fine-tuning attack | CIFAR-10 [73] | – | –
DCGAN [2] | CUB-200 [74] | – | –
+ fine-tuning attack | CUB-200 [74] | – | –

Method | Train | Test | PSNR ↑ [70] | SSIM ↑ [70]
---|---|---|---|---
SRGAN [58] | ImageNet [75] | Set5 [76] | |
+ fine-tuning attack | ImageNet [75] | Set5 [76] | |
SRGAN [58] | ImageNet [75] | Set14 [76] | |
+ fine-tuning attack | ImageNet [75] | Set14 [76] | |
SRGAN [58] | ImageNet [75] | BSD100 [77] | |
+ fine-tuning attack | ImageNet [75] | BSD100 [77] | |

Method | Dataset | Per-pixel acc. ↑ | Per-class acc. ↑ | Class IoU ↑
---|---|---|---|---
CycleGAN [59] | Cityscapes [78] | | |
+ fine-tuning attack | Cityscapes [78] | | |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, H.; Cheng, F. Backdoor Training Paradigm in Generative Adversarial Networks. Entropy 2025, 27, 283. https://doi.org/10.3390/e27030283