Research on Encrypted Transmission and Recognition of Garbage Images in Low-Illumination Environments
Abstract
1. Introduction
- (1) We propose a waste-detection model optimized for low-light conditions. The model extracts more information from low-light images, thereby improving recognition accuracy.
- (2) An attention mechanism is introduced into the multi-channel low-light enhancement network. By integrating multiple parallel image-enhancement pathways and fusing their features, the network captures image features more comprehensively and accurately, improving image clarity and visibility.
- (3) With the attention mechanism, the network attends to the feature salience of different channels while retaining the advantage of multi-branch parallelism, improving its robustness and generalization.
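The multi-branch fusion with channel attention described in contributions (2) and (3) can be sketched, at a high level, as SE-style channel attention applied to concatenated branch outputs. The paper's exact architecture is not given here, so the function names, bottleneck size, and shapes below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style channel attention on a (C, H, W) feature map.

    Squeeze: global average pooling yields one statistic per channel.
    Excite: a two-layer bottleneck (ReLU, then sigmoid) turns those
    statistics into per-channel gates in (0, 1).
    """
    squeezed = features.mean(axis=(1, 2))            # (C,)
    hidden = np.maximum(squeezed @ w1, 0.0)          # bottleneck + ReLU
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid, (C,)
    return features * gates[:, None, None]           # channel-wise rescale

def fuse_branches(branches, w1, w2):
    """Concatenate parallel enhancement branches along the channel axis,
    then let channel attention emphasize the more salient channels."""
    fused = np.concatenate(branches, axis=0)         # (C_total, H, W)
    return channel_attention(fused, w1, w2)

# Illustrative shapes: two branches, 2 channels each, 4x4 feature maps.
rng = np.random.default_rng(0)
branches = [rng.standard_normal((2, 4, 4)) for _ in range(2)]
w1 = rng.standard_normal((4, 2)) * 0.1   # 4 channels -> bottleneck of 2
w2 = rng.standard_normal((2, 4)) * 0.1   # bottleneck -> 4 channel gates
out = fuse_branches(branches, w1, w2)
print(out.shape)                         # (4, 4, 4)
```

Because the sigmoid gates lie in (0, 1), attention can only attenuate channels relative to the fused input; a learned excitation network decides which branches' features survive the fusion.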
2. Related Works
2.1. Object Detection and Waste Detection Models
2.2. Low-Illumination Image Detection Algorithm
3. Methods
3.1. Multi-Channel Image Enhancement Algorithm Based on Attention Mechanism
3.2. Data Preparation
3.3. Model
4. Experimental Validation
4.1. Image Evaluation Indicators
- (1) Improved structural similarity (SSIM)
- (2) Peak signal-to-noise ratio (PSNR)
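The two indicators above can be computed as follows. This is a generic sketch, not the paper's implementation: `max_val` assumes 8-bit images, and the SSIM shown is the single-window global form (the standard metric averages it over local windows, and the paper's "improved" variant is not specified here):

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means the enhanced
    image is closer to the reference."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """SSIM computed over the whole image as a single window."""
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    c1 = (0.01 * max_val) ** 2  # standard stabilizing constants
    c2 = (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For example, a uniform error of 16 gray levels against an 8-bit reference gives an MSE of 256 and a PSNR of about 24.05 dB, the same order as the values reported in the enhancement table below.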
4.2. Low Illumination Enhancement Effect
4.3. Identification Test
5. Discussion
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References

| Dataset | Model | PSNR | SSIM |
|---|---|---|---|
| Low-light images from the PASCAL VOC dataset | Without attention mechanism | 24.25 | 0.87 |
| Low-light images from the PASCAL VOC dataset | With attention mechanism | 24.93 | 0.89 |
| Metric | Faster R-CNN | Swin-T | Dark-Waste | Proposed |
|---|---|---|---|---|
| Precision (%) | 76.95 | 73.76 | 74.15 | 82.15 |
| Recall (%) | 69.78 | 63.45 | 64.75 | 63.87 |
| F1-score | 0.61 | 0.64 | 0.72 | 0.74 |
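For reference, F1 is the harmonic mean of precision and recall. A minimal sketch of the standard definition follows; note that the table's F1 values may be averaged per class or computed differently, so they need not be reproducible from the aggregate precision and recall rows alone:

```python
def f1_score(precision_pct, recall_pct):
    """Harmonic mean of precision and recall.

    Inputs are percentages (as in the table above); the result is a
    fraction in [0, 1]."""
    p, r = precision_pct / 100.0, recall_pct / 100.0
    return 2.0 * p * r / (p + r) if (p + r) > 0 else 0.0

print(round(f1_score(80.0, 60.0), 4))  # 0.6857
```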
Per-class AP (%):

| Dataset | Model | Battery | Cardboard | Drugs | Glass | Metal | Paper | Plastic | Vegetable |
|---|---|---|---|---|---|---|---|---|---|
| TrashBig | FRCNN | 82.5 | 75.3 | 69.2 | 78.6 | 74.1 | 83.0 | 80.5 | 72.4 |
| TrashBig | FRCNN + IC | 84.7 | 77.5 | 71.4 | 80.2 | 76.3 | 85.5 | 82.1 | 74.8 |
| TrashBig | YOLOv5 | 85.2 | 76.1 | 72.3 | 79.5 | 75.8 | 84.1 | 81.0 | 73.6 |
| TrashBig | YOLOv5 + IC | 87.3 | 78.2 | 74.5 | 81.8 | 78.4 | 85.8 | 82.9 | 75.1 |
| TrashBig | Dark-Waste | 79.4 | 72.6 | 66.1 | 76.4 | 71.7 | 80.0 | 77.2 | 69.8 |
| TrashBig | Dark-Waste + IC | 81.3 | 74.3 | 68.4 | 78.1 | 73.5 | 81.9 | 78.5 | 71.2 |
| TrashBig | Proposed | 89.5 | 80.1 | 76.8 | 83.0 | 79.9 | 86.4 | 84.2 | 77.3 |
| TrashBig | Proposed + IC | 90.7 | 81.9 | 78.2 | 84.1 | 81.2 | 87.5 | 85.4 | 78.9 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Lv, Z.; Diao, Y.; Zeng, C.; Wang, W.; An, S. Research on Encrypted Transmission and Recognition of Garbage Images in Low-Illumination Environments. Electronics 2026, 15, 302. https://doi.org/10.3390/electronics15020302

