CCIBA: A Chromatic Channel-Based Implicit Backdoor Attack on Deep Neural Networks
Abstract
1. Introduction
2. Related Work
2.1. Backdoor Attacks
2.2. Backdoor Detection and Defense
3. Program Design
3.1. Threat Model
3.1.1. Attacker’s Knowledge
3.1.2. Attacker’s Capabilities
3.1.3. Attacker’s Goals
3.2. Problem Definition
3.3. Implicit Backdoor Generation Scheme Based on Chromaticity Channel
Algorithm 1: Chromatic channel-based implicit backdoor attack method

Input: source benign dataset, trigger image, embedding volume, poisoning ratio. Output: poisoned dataset.

1. Initialize the poisoned dataset as an empty array.
2. Randomly select samples from the source benign dataset according to the poisoning ratio and add them to the poisoned dataset.
3. For each selected sample image:
4. Calculate the width and height of the input sample image and of the trigger image.
5. If the two images differ in width or height:
6. Resize the smaller image to the size of the larger one.
7. Convert the input sample image and the trigger image from RGB to YUV.
8. Apply the DWT to the U and V channels of both images.
9. Apply the SVD to the LL and HH sub-bands of the sample and of the trigger.
10. Replace the last singular values and the corresponding singular vectors (as many as the embedding volume) of the sample's LL and HH sub-bands with those of the trigger.
11. Perform the inverse SVD on the modified LL and HH sub-bands of the input sample.
12. Perform the inverse DWT on all sub-bands to reconstruct the YUV channels.
13. Convert the poisoned sample image from YUV back to RGB.
14. Change the label of the poisoned sample to the target label.
15. End for.
16. Return the poisoned dataset.
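To make the embedding procedure concrete, the following is a minimal sketch of steps 4–13, assuming NumPy, OpenCV (cv2), and PyWavelets (pywt). The function and parameter names (embed_trigger, k) are illustrative placeholders rather than the authors' released code, and details such as the wavelet family may differ from the paper.

```python
import numpy as np
import cv2
import pywt


def _embed_subband(host, trig, k):
    """Replace the last k singular values/vectors of the host sub-band with the trigger's."""
    Uh, sh, Vh = np.linalg.svd(host, full_matrices=False)
    Ut, st, Vt = np.linalg.svd(trig, full_matrices=False)
    Uh[:, -k:] = Ut[:, -k:]
    sh[-k:] = st[-k:]
    Vh[-k:, :] = Vt[-k:, :]
    return Uh @ np.diag(sh) @ Vh


def embed_trigger(benign_rgb, trigger_rgb, k=8):
    """Embed the trigger into the U/V chrominance channels of a benign RGB uint8 image."""
    h, w = benign_rgb.shape[:2]
    trigger_rgb = cv2.resize(trigger_rgb, (w, h))            # step 6: match sizes

    host_yuv = cv2.cvtColor(benign_rgb, cv2.COLOR_RGB2YUV)   # step 7: RGB -> YUV
    trig_yuv = cv2.cvtColor(trigger_rgb, cv2.COLOR_RGB2YUV)

    out = host_yuv.astype(np.float64)
    for c in (1, 2):                                          # U and V channels only
        # step 8: one-level DWT; cA is the LL sub-band, cD the HH (diagonal) sub-band
        LL, (cH, cV, HH) = pywt.dwt2(host_yuv[:, :, c].astype(np.float64), "haar")
        tLL, (_, _, tHH) = pywt.dwt2(trig_yuv[:, :, c].astype(np.float64), "haar")
        # steps 9-11: SVD-based embedding in the LL and HH sub-bands
        LL = _embed_subband(LL, tLL, k)
        HH = _embed_subband(HH, tHH, k)
        # step 12: inverse DWT back to the full-resolution channel
        out[:, :, c] = pywt.idwt2((LL, (cH, cV, HH)), "haar")[:h, :w]

    out = np.clip(out, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YUV2RGB)               # step 13: YUV -> RGB
```

Building the poisoned training set (steps 1-2 and 14) then amounts to sampling a fraction of the benign images according to the poisoning ratio, passing each one through embed_trigger, and relabeling it to the attacker's target class.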
4. Experimental Analysis and Verification
4.1. Experimental Setup
4.2. Experimental Results and Analysis
4.2.1. Backdoor Attack Effectiveness Evaluation
4.2.2. Stealthiness Analysis
4.3. Sustainability Analysis
4.4. Ablation Experiment
5. Summary
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wenger, E.; Passananti, J.; Bhagoji, A.N.; Yao, Y.; Zheng, H.; Zhao, B.Y. Backdoor attacks against deep learning systems in the physical world. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021; pp. 6206–6215. [Google Scholar]
- Chen, J.; Teo, T.H.; Kok, C.L.; Koh, Y.Y. A Novel Single-Word Speech Recognition on Embedded Systems Using a Convolution Neuron Network with Improved Out-of-Distribution Detection. Electronics 2024, 13, 530. [Google Scholar] [CrossRef]
- Jiang, W.; Li, H.; Liu, S.; Luo, X.; Lu, R. Poisoning and evasion attacks against deep learning algorithms in autonomous vehicles. IEEE Trans. Veh. Technol. 2020, 69, 4439–4449. [Google Scholar] [CrossRef]
- Zhao, J.; Zhao, W.; Deng, B.; Wang, Z.; Zhang, F.; Zheng, W.; Cao, W.; Nan, J.; Lian, Y.; Burke, A.F. Autonomous driving system: A comprehensive survey. Expert Syst. Appl. 2023, 242, 122836. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Wang, H.; Yang, L.T.; Jin, Q. Deep-learning-enhanced human activity recognition for internet of healthcare things. IEEE Internet Things J. 2020, 7, 6429–6438. [Google Scholar] [CrossRef]
- Mao, X.; Shan, Y.; Li, F.; Chen, X.; Zhang, S. CLSpell: Contrastive learning with phonological and visual knowledge for Chinese spelling check. Neurocomputing 2023, 554, 126468. [Google Scholar] [CrossRef]
- Shi, C.; Xian, M.; Zhou, X.; Wang, H.; Cheng, H.D. Multi-slice low-rank tensor decomposition based multi-atlas segmentation: Application to automatic pathological liver CT segmentation. Med. Image Anal. 2021, 73, 102152. [Google Scholar] [CrossRef]
- Zhan, M.; Kou, G.; Dong, Y.; Chiclana, F.; Herrera-Viedma, E. Bounded confidence evolution of opinions and actions in social networks. IEEE Trans. Cybern. 2021, 52, 7017–7028. [Google Scholar] [CrossRef]
- Liang, W.; Chen, X.; Huang, S.; Xiong, G.; Yan, K.; Zhou, X. Federal learning edge network based sentiment analysis combating global COVID-19. Comput. Commun. 2023, 204, 33–42. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Wang, K.I.-K.; Yan, Z.; Yang, L.T.; Wei, W.; Ma, J.; Jin, Q. Decentralized P2P federated learning for privacy-preserving and resilient mobile robotic systems. IEEE Wirel. Commun. 2023, 30, 82–89. [Google Scholar] [CrossRef]
- Zhou, X.; Ye, X.; Wang, K.I.-K.; Liang, W.; Nair, N.K.C.; Shimizu, S.; Yan, Z.; Jin, Q. Hierarchical federated learning with social context clustering-based participant selection for internet of medical things applications. IEEE Trans. Comput. Soc. Syst. 2023, 10, 1742–1751. [Google Scholar] [CrossRef]
- Chen, X.; Xu, G.; Xu, X.; Jiang, H.; Tian, Z.; Ma, T. Multicenter hierarchical federated learning with fault-tolerance mechanisms for resilient edge computing networks. IEEE Trans. Neural Netw. Learn. Syst. 2024, 36, 47–61. [Google Scholar] [CrossRef]
- Zhang, C.; Ni, Z.; Xu, Y.; Luo, E.; Chen, L.; Zhang, Y. A trustworthy industrial data management scheme based on redactable blockchain. J. Parallel Distrib. Comput. 2021, 152, 167–176. [Google Scholar] [CrossRef]
- Qi, L.; Dou, W.; Hu, C.; Zhou, Y.; Yu, J. A context-aware service evaluation approach over big data for cloud applications. IEEE Trans. Cloud Comput. 2015, 8, 338–348. [Google Scholar] [CrossRef]
- Zhou, X.; Liang, W.; Li, W.; Yan, K.; Shimizu, S.; Wang, K.I.-K. Hierarchical adversarial attacks against graph-neural-network-based IoT network intrusion detection system. IEEE Internet Things J. 2021, 9, 9310–9319. [Google Scholar] [CrossRef]
- Ouyang, Y.; Liu, W.; Yang, Q.; Mao, X.; Li, F. Trust based task offloading scheme in UAV-enhanced edge computing network. Peer-to-Peer Netw. Appl. 2021, 14, 3268–3290. [Google Scholar] [CrossRef]
- Xiong, T.; Feng, S.; Pan, M.; Yu, Y. Smart contract generation for inter-organizational process collaboration. Concurr. Comput. Pract. Exp. 2024, 36, e7961. [Google Scholar] [CrossRef]
- Zhou, X.; Wu, J.; Liang, W.; Wang, K.I.K.; Yan, Z.; Yang, L.T.; Jin, Q. Reconstructed graph neural network with knowledge distillation for lightweight anomaly detection. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 11817–11828. [Google Scholar] [CrossRef]
- Zhang, J.; Bhuiyan, M.Z.A.; Yang, X.; Wang, T.; Xu, X.; Hayajneh, T.; Khan, F. AntiConcealer: Reliable detection of adversary concealed behaviors in EdgeAI-assisted IoT. IEEE Internet Things J. 2021, 9, 22184–22193. [Google Scholar] [CrossRef]
- Fei, F.; Li, S.; Dai, H.; Hu, C.; Dou, W.; Ni, Q. A k-anonymity based schema for location privacy preservation. IEEE Trans. Sustain. Comput. 2017, 4, 156–167. [Google Scholar] [CrossRef]
- Xu, C.; Ren, J.; She, L.; Zhang, Y.; Qin, Z.; Ren, K. EdgeSanitizer: Locally differentially private deep inference at the edge for mobile data analytics. IEEE Internet Things J. 2019, 6, 5140–5151. [Google Scholar] [CrossRef]
- Li, C.; He, A.; Wen, Y.; Liu, G.; Chronopoulos, A.T. Optimal trading mechanism based on differential privacy protection and stackelberg game in big data market. IEEE Trans. Serv. Comput. 2023, 16, 3550–3563. [Google Scholar] [CrossRef]
- Li, Q.; Ma, B.; Wang, X.; Wang, C.; Gao, S. Image steganography in color conversion. IEEE Trans. Circuits Syst. II Express Briefs 2023, 71, 106–110. [Google Scholar] [CrossRef]
- Li, Q.; Wang, X.; Ma, B.; Wang, X.; Wang, C.; Gao, S.; Shi, Y. Concealed attack for robust watermarking based on generative model and perceptual loss. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 5695–5706. [Google Scholar] [CrossRef]
- Liu, Y.; Wang, C.; Lu, M.; Yang, J.; Gui, J.; Zhang, S. From simple to complex scenes: Learning robust feature representations for accurate human parsing. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5449–5462. [Google Scholar] [CrossRef] [PubMed]
- Wang, C.; Zhang, Q.; Wang, X.; Zhou, L.; Li, Q.; Xia, Z.; Ma, B.; Shi, Y.-Q. Light-Field Image Multiple Reversible Robust Watermarking Against Geometric Attacks. IEEE Trans. Dependable Secur. Comput. 2025. [Google Scholar] [CrossRef]
- Pan, Z.; Wang, Y.; Cao, Y.; Gui, W. VAE-based interpretable latent variable model for process monitoring. IEEE Trans. Neural Netw. Learn. Syst. 2023, 35, 6075–6088. [Google Scholar] [CrossRef]
- Zhou, X.; Zheng, X.; Shu, T.; Liang, W.; Wang, K.I.-K.; Qi, L.; Shimizu, S.; Jin, Q. Information theoretic learning-enhanced dual-generative adversarial networks with causal representation for robust OOD generalization. IEEE Trans. Neural Netw. Learn. Syst. 2023, 36, 2066–2079. [Google Scholar] [CrossRef]
- Gu, T.; Liu, K.; Dolan-Gavitt, B.; Garg, S. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access 2019, 7, 47230–47244. [Google Scholar] [CrossRef]
- Chen, X.; Liu, C.; Li, B.; Song, D.; Lu, K. Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning. arXiv 2017, arXiv:1712.05526. [Google Scholar] [CrossRef]
- Li, S.; Xue, M.; Zhao, B.Z.H.; Zhang, X. Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization. IEEE Trans. Dependable Secur. Comput. 2021, 18, 2088–2105. [Google Scholar] [CrossRef]
- Nguyen, A.; Tran, A. WaNet—Imperceptible Warping-Based Backdoor Attack. arXiv 2021, arXiv:2102.10369. [Google Scholar]
- Xiao, T.; Deng, X.; Jiang, W. An Invisible Backdoor Attack based on DCT-Injection. In Proceedings of the 2022 IEEE International Conference on Unmanned Systems (ICUS), Guangzhou, China, 28–30 October 2022; pp. 399–404. [Google Scholar]
- Xue, M.; Ni, S.; Wu, Y.; Zhang, Y.; Liu, W. Imperceptible and multi-channel backdoor attack. Appl. Intell. 2024, 54, 1099–1116. [Google Scholar] [CrossRef]
- Feng, Y.; Ma, B.; Zhang, J.; Zhao, S.; Xia, Y.; Tao, D. Fiba: Frequency-injection based backdoor attack in medical image analysis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 20876–20885. [Google Scholar]
- Li, Y.; Li, Y.; Wu, B.; Li, L.; He, R.; Lyu, S. Invisible Backdoor Attack with Sample-Specific Triggers. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 16443–16452. [Google Scholar]
- Wang, Z.; Zhai, J.; Ma, S. Bppattack: Stealthy and efficient trojan attacks against deep neural networks via image quantization and contrastive adversarial learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 15074–15084. [Google Scholar]
- Zhang, J.; Dongdong, C.; Huang, Q.; Liao, J.; Zhang, W.; Feng, H.; Hua, G.; Yu, N. Poison ink: Robust and invisible backdoor attack. IEEE Trans. Image Process. 2022, 31, 5691–5705. [Google Scholar] [CrossRef]
- Li, C.; Pang, R.; Xi, Z.; Du, T.; Ji, S.; Yao, Y.; Wang, T. An embarrassingly simple backdoor attack on self-supervised learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4367–4378. [Google Scholar]
- Liu, K.; Dolan-Gavitt, B.; Garg, S. Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks. arXiv 2018, arXiv:1805.12185. [Google Scholar] [CrossRef]
- Wang, B.; Yao, Y.; Shan, S.; Viswanath, B.; Zheng, H. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA, 19–23 May 2019; pp. 707–723. [Google Scholar]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the 2017 IEEE/CVF International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 618–626. [Google Scholar]
- Chou, E.; Tramer, F.; Pellegrino, G. SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems. In Proceedings of the 2020 IEEE Security and Privacy Workshops (SPW), San Francisco, CA, USA, 21 May 2020; pp. 48–54. [Google Scholar]
- Gao, Y.; Xu, C.; Wang, D.; Chen, S.; Ranasinghe, D.C.; Nepal, S. Strip: A defense against trojan attacks on deep neural networks. In Proceedings of the 35th Annual Computer Security Applications Conference, San Juan, PR, USA, 9–13 December 2019; pp. 113–125. [Google Scholar]
- Tran, B.; Li, J.; Madry, A. Spectral signatures in backdoor attacks. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 3–8 December 2018; pp. 8011–8021. [Google Scholar]
- Hayase, J.; Kong, W.; Somani, R.; Oh, S. Spectre: Defending against backdoor attacks using robust statistics. In Proceedings of the 38th International Conference on Machine Learning, PMLR, Virtual, 18–24 July 2021; pp. 4129–4139. [Google Scholar]
Attack Method | CIFAR-10 ACC (%) | CIFAR-10 ASR (%) | Tiny-ImageNet ACC (%) | Tiny-ImageNet ASR (%) | CIFAR-100 ACC (%) | CIFAR-100 ASR (%) |
---|---|---|---|---|---|---|
Badnets | 91.68 | 94.24 | 55.78 | 99.89 | 67.03 | 86.50 |
Blended | 93.46 | 99.75 | 55.24 | 99.75 | 69.81 | 99.42 |
Bpp | 91.48 | 99.94 | 46.73 | 99.83 | 66.39 | 98.24 |
CCIBA | 93.44 | 99.77 | 55.50 | 99.83 | 69.62 | 99.27 |
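The ACC and ASR columns are conventionally defined as follows: ACC is top-1 accuracy of the backdoored model on the clean test set, and ASR is the fraction of trigger-embedded test images (excluding images that already belong to the target class) classified as the attacker's target label. A minimal NumPy sketch under these standard definitions; the function names are illustrative, and the paper's exact protocol may differ slightly.

```python
import numpy as np

def clean_accuracy(pred_clean, labels):
    """ACC: top-1 accuracy of the backdoored model on clean test images."""
    return float(np.mean(pred_clean == labels))

def attack_success_rate(pred_triggered, labels, target_label):
    """ASR: fraction of triggered images (not originally of the target class)
    that are classified as the attacker's target label."""
    mask = labels != target_label
    return float(np.mean(pred_triggered[mask] == target_label))
```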
Participant | P1 | P2 | P3 | P4 | P5 | P6 |
---|---|---|---|---|---|---|
Accuracy | 0.51 | 0.49 | 0.45 | 0.52 | 0.53 | 0.47 |
Attack Method | CIFAR-10 PSNR (dB) | CIFAR-10 SSIM | CIFAR-10 LPIPS | Tiny-ImageNet PSNR (dB) | Tiny-ImageNet SSIM | Tiny-ImageNet LPIPS | CIFAR-100 PSNR (dB) | CIFAR-100 SSIM | CIFAR-100 LPIPS |
---|---|---|---|---|---|---|---|---|---|
Blended | 28.21 | 0.8706 | 0.1436 | 28.54 | 0.7582 | 0.2475 | 28.38 | 0.7477 | 0.3232 |
Bpp | 29.46 | 0.9457 | 0.0429 | 29.57 | 0.8424 | 0.1404 | 29.44 | 0.8675 | 0.1173 |
CCIBA | 39.30 | 0.9953 | 0.0027 | 42.48 | 0.9938 | 0.0099 | 40.08 | 0.9919 | 0.0027 |
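For reference, the three stealthiness metrics can be computed as sketched below, assuming scikit-image (0.19 or later) and the lpips PyTorch package; the exact preprocessing used in the paper (color space, scaling, network backbone for LPIPS) may differ.

```python
import torch
import lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # AlexNet-based perceptual distance

def stealth_metrics(clean_rgb, poisoned_rgb):
    """clean_rgb, poisoned_rgb: HxWx3 uint8 arrays of the same image before/after poisoning."""
    psnr = peak_signal_noise_ratio(clean_rgb, poisoned_rgb, data_range=255)
    ssim = structural_similarity(clean_rgb, poisoned_rgb, channel_axis=-1, data_range=255)
    to_tensor = lambda im: torch.from_numpy(im).permute(2, 0, 1)[None].float() / 127.5 - 1.0
    lp = lpips_fn(to_tensor(clean_rgb), to_tensor(poisoned_rgb)).item()  # expects [-1, 1] inputs
    return psnr, ssim, lp
```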
Attack Method | Anomaly Index (CIFAR-10) | Anomaly Index (Tiny-ImageNet) | Anomaly Index (CIFAR-100) |
---|---|---|---|
CCIBA (α = 0.125) | 1.57 | 1.52 | 1.23 |
CCIBA (α = 0.5) | 2.54 | 2.40 | 2.16 |
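The anomaly index above comes from Neural Cleanse: the defense reverse-engineers a candidate trigger for every class, takes the L1 norms of the resulting masks, and scores the smallest norm by its median-absolute-deviation distance from the median; an index above 2 is conventionally treated as evidence of a backdoor. A sketch of that final scoring step (the trigger reverse-engineering itself is omitted):

```python
import numpy as np

def anomaly_index(trigger_l1_norms):
    """MAD-based outlier score used by Neural Cleanse; values above 2 flag a backdoor."""
    norms = np.asarray(trigger_l1_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med))  # consistency constant for normal data
    return float(np.abs(norms.min() - med) / mad)
```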
Datasets | TN | FP | FN | TP | TPR | FPR |
---|---|---|---|---|---|---|
CIFAR-10 | 37,497 | 7503 | 5000 | 0 | 0 | 0.1667 |
Tiny-ImageNet | 76,502 | 13,498 | 10,000 | 0 | 0 | 0.1499 |
CIFAR-100 | 37,848 | 7152 | 4625 | 375 | 0.075 | 0.1589 |
Datasets | TN | FP | FN | TP | TPR | FPR |
---|---|---|---|---|---|---|
CIFAR-10 | 37,571 | 7429 | 4929 | 71 | 0.0142 | 0.1650 |
Tiny-ImageNet | 76,073 | 13,927 | 8947 | 1053 | 0.1053 | 0.1547 |
CIFAR-100 | 37,859 | 7141 | 4691 | 309 | 0.0618 | 0.1586 |
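TPR and FPR in the two detection tables follow the usual confusion-matrix definitions, TPR = TP / (TP + FN) and FPR = FP / (FP + TN). For example, the CIFAR-10 row of the first table gives TPR = 0 / 5000 = 0 and FPR = 7503 / 45,000 ≈ 0.1667, i.e., the defense detects none of the poisoned samples while flagging roughly 17% of the clean ones. A one-line sketch:

```python
def tpr_fpr(tn, fp, fn, tp):
    """Confusion-matrix rates: true-positive rate and false-positive rate."""
    return tp / (tp + fn), fp / (fp + tn)

assert abs(tpr_fpr(37497, 7503, 5000, 0)[1] - 0.1667) < 1e-3  # CIFAR-10 row, first table
```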
Embedding Locations | ACC (%) | ASR (%) |
---|---|---|
LL | 93.17 | 99.10 |
HH | 92.91 | 99.42 |
HL | 93.73 | 99.17 |
LH | 93.34 | 99.20 |
LL + HH | 93.44 | 99.77 |
Embedding Amount | ACC (%) | ASR (%) |
---|---|---|
2 | 92.35 | 94.97 |
4 | 92.60 | 96.20 |
6 | 93.47 | 99.30 |
8 | 93.44 | 99.77 |