Research on Target Detection and Counting Algorithms for Swarming Termites in Agricultural and Forestry Disaster Early Warning
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Dataset Construction
3.2. The Indicators of Evaluation
3.3. Experimental Parameter Configuration
3.4. YOLOv11-ST
3.4.1. Network Modeling for YOLOv11
3.4.2. Backbone Architecture Optimization
3.4.3. Frequency Dynamic Convolution
3.4.4. Local-Region Self-Attention
3.4.5. SPPF-DW Module
3.5. Swarming Termites Counting
3.5.1. Termite Keypoint Feature Extraction
3.5.2. Termite Edge Extraction and Hu Moment Calculation
3.5.3. Termite Movement Trajectory Computation
3.5.4. Weight Allocation Method
1. Keypoint Weight (R1)
2. Hu Moment Weight (R2)
3. Motion Trajectory Weight (R3)
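The three weights above suggest a weighted fusion of per-cue similarities when matching a detection to an existing track. A minimal sketch, assuming the weights sum to 1 and each cue's similarity is already normalized to [0, 1]; the weight values and function names here are hypothetical, not the paper's actual allocation:

```python
# Hypothetical weights for the three matching cues; the paper's actual
# values come from its weight-allocation method (Section 3.5.4).
R1, R2, R3 = 0.4, 0.3, 0.3  # keypoint (R1), Hu moment (R2), trajectory (R3)

def hu_distance_to_similarity(d):
    """Map a Hu-moment shape distance (0 = identical contours, as
    returned by e.g. cv2.matchShapes) into a similarity in (0, 1]."""
    return 1.0 / (1.0 + d)

def fused_score(kp_sim, hu_sim, traj_sim):
    """Weighted combination of the three normalized similarities."""
    assert abs(R1 + R2 + R3 - 1.0) < 1e-9  # weights must sum to 1
    return R1 * kp_sim + R2 * hu_sim + R3 * traj_sim

# A detection that agrees with an existing track on all three cues
# receives a high fused score and is associated with that termite.
print(round(fused_score(0.9, 0.8, 0.85), 3))  # → 0.855
```

A detection is then assigned to the track with the highest fused score above some threshold, which keeps a single noisy cue (e.g. a briefly occluded keypoint) from breaking the association.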
4. Results
4.1. Comparative Experiments of Attention Modules
4.2. Ablation Experiments
1. The first group represents the baseline YOLOv11 results, serving as the comparison benchmark for the following seven experimental groups. Its precision, recall, and mAP@0.5 were 90.16%, 85.22%, and 91.19%, respectively.
2. The second, third, and fourth groups each added one modification. Adding the FDConv module achieved precision, recall, and mAP@0.5 of 90.43%, 86.13%, and 91.87%, respectively. Incorporating the LRSA module resulted in precision, recall, and mAP@0.5 of 91.32%, 86.74%, and 92.41%, respectively. Integrating the SPPF-DW module yielded precision, recall, and mAP@0.5 of 91.89%, 86.76%, and 92.34%, respectively.
3. The fifth, sixth, and seventh groups added two modifications simultaneously: FDConv and LRSA, FDConv and SPPF-DW, and LRSA and SPPF-DW, respectively. The combination of FDConv and LRSA achieved precision, recall, and mAP@0.5 of 92.07%, 87.03%, and 92.85%, respectively. The FDConv and SPPF-DW combination resulted in precision, recall, and mAP@0.5 of 92.44%, 86.96%, and 92.61%, respectively. The LRSA and SPPF-DW combination yielded precision, recall, and mAP@0.5 of 92.68%, 87.09%, and 92.94%, respectively.
4. The eighth group demonstrates the results with all improvements integrated. Compared to the baseline model, precision increased by 2.82%, recall improved by 2.1%, and mAP@0.5 was enhanced by 2.02%.
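The per-group mAP@0.5 gains quoted above can be cross-checked with a short script (values transcribed from the ablation table; the delta is each group's mAP@0.5 minus the baseline's):

```python
# mAP@0.5 values transcribed from the ablation experiments (Section 4.2).
baseline = 91.19
groups = {
    "FDConv": 91.87,
    "LRSA": 92.41,
    "SPPF-DW": 92.34,
    "FDConv + LRSA": 92.85,
    "FDConv + SPPF-DW": 92.61,
    "LRSA + SPPF-DW": 92.94,
    "All three": 93.21,
}

# Print each group's improvement over the baseline in mAP@0.5.
for name, map50 in groups.items():
    print(f"{name}: +{map50 - baseline:.2f}")
```

Running this reproduces the delta column of the ablation table (+0.68 for FDConv alone through +2.02 with all three modules).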
4.3. Comparative Analysis of Detection Performance Among Different Models
4.4. Swarming Termite Counting Experiment
5. Deployment of YOLOv11-ST
6. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Oi, F. A review of the evolution of termite control: A continuum of alternatives to termiticides in the United States with emphasis on efficacy testing requirements for product registration. Insects 2022, 13, 50. [Google Scholar] [CrossRef] [PubMed]
- Wu, D.; Seibold, S.; Ellwood, M.D.F.; Chu, C. Differential effects of vegetation and climate on termite diversity and damage. J. Appl. Ecol. 2022, 59, 2922–2935. [Google Scholar] [CrossRef]
- Huang, J.H.; Liu, Y.T.; Ni, H.C.; Chen, B.-Y.; Huang, S.-Y.; Tsai, H.-K.; Li, H.-F. Termite pest identification method based on deep convolution neural networks. J. Econ. Entomol. 2021, 114, 2452–2459. [Google Scholar] [CrossRef] [PubMed]
- Cho, J.; Choi, J.; Qiao, M.; Ji, C.W. Automatic identification of whiteflies, aphids and thrips in greenhouse based on image analysis. Int. J. Math. Comput. Simul. 2007, 346, 244. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Jiang, P.; Ergu, D.; Liu, F.; Cai, Y.; Ma, B. A Review of Yolo algorithm developments. Procedia Comput. Sci. 2022, 199, 1066–1073. [Google Scholar] [CrossRef]
- Zhong, Y.; Gao, J.; Lei, Q.; Zhou, Y. A vision-based counting and recognition system for flying insects in intelligent agriculture. Sensors 2018, 18, 1489. [Google Scholar] [CrossRef] [PubMed]
- Saradopoulos, I.; Potamitis, I.; Ntalampiras, S.; Konstantaras, A.I.; Antonidakis, E.N. Edge computing for vision-based, urban-insects traps in the context of smart cities. Sensors 2022, 22, 2006. [Google Scholar] [CrossRef] [PubMed]
- Dai, M.; Dorjoy, M.M.H.; Miao, H.; Zhang, S. A new pest detection method based on improved YOLOv5m. Insects 2023, 14, 54. [Google Scholar] [CrossRef] [PubMed]
- Luo, W.; Xing, J.; Milan, A.; Zhang, X.; Liu, W.; Zhao, X.; Kim, T.-K. Multiple object tracking: A literature review. Artif. Intell. 2021, 293, 103448. [Google Scholar] [CrossRef]
- Li, Z.; Zhu, Y.; Sui, S.; Zhao, Y.; Liu, P.; Li, X. Real-time detection and counting of wheat ears base on improved YOLOv7. Comput. Electron. Agric. 2024, 218, 108670. [Google Scholar] [CrossRef]
- Wang, T.; Zhao, L.; Li, B.; Liu, X.; Xu, W.; Li, J. Recognition and counting of typical apple pests based on deep learning. Ecol. Inform. 2022, 68, 101556. [Google Scholar] [CrossRef]
- Chen, L.; Gu, L.; Li, L.; Yan, C.; Fu, Y. Frequency Dynamic Convolution for Dense Image Prediction. arXiv 2025, arXiv:2503.18783. [Google Scholar] [CrossRef]
- Liu, X.; Liu, J.; Tang, J.; Wu, G. CATANet: Efficient Content-Aware Token Aggregation for Lightweight Image Super-Resolution. arXiv 2025, arXiv:2503.06896. [Google Scholar] [CrossRef]
- Liu, Y.; Shao, Z.; Hoffmann, N. Global Attention Mechanism: Retain Information to Enhance Channel-Spatial Interactions. arXiv 2021, arXiv:2112.05561. [Google Scholar] [CrossRef]
- Hu, J.; Shen, L.; Albanie, S.; Sun, G.; Wu, E. Squeeze-and-Excitation Networks. arXiv 2017, arXiv:1709.01507. [Google Scholar]
- Zhu, L.; Wang, X.; Ke, Z.; Zhang, W.; Lau, R. Biformer: Vision transformer with bi-level routing attention. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 10323–10333. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. Cbam: Convolutional Block Attention Module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018. [Google Scholar]
- Hou, Q.; Zhou, D.; Feng, J. Coordinate attention for efficient mobile network design. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 13713–13722. [Google Scholar]
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11534–11542. [Google Scholar]
- Varghese, R.; Sambath, M. Yolov8: A novel object detection algorithm with enhanced performance and robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 18–19 April 2024; pp. 1–6. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. Yolov10: Real-time end-to-end object detection. Adv. Neural Inf. Process. Syst. 2024, 37, 107984–108011. [Google Scholar]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef] [PubMed]
- Wang, H.; Guo, X.; Zhang, S.; Li, G.; Zhao, Q.; Wang, Z. Detection and recognition of foreign objects in Pu-erh Sun-dried green tea using an improved YOLOv8 based on deep learning. PLoS ONE 2025, 20, e0312112. [Google Scholar] [CrossRef] [PubMed]
- Khanam, R.; Hussain, M. Yolov11: An overview of the key architectural enhancements. arXiv 2024, arXiv:2410.17725. [Google Scholar] [CrossRef]
- Wang, N.; Fu, S.; Rao, Q.; Zhang, G.; Ding, M. Insect-YOLO: A new method of crop insect detection. Comput. Electron. Agric. 2025, 232, 110085. [Google Scholar] [CrossRef]
- Bewley, A.; Ge, Z.; Ott, L.; Ramos, F.; Upcroft, B. Simple online and realtime tracking. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 3464–3468. [Google Scholar]
- Wojke, N.; Bewley, A.; Paulus, D. Simple online and realtime tracking with a deep association metric. In Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China, 17–20 September 2017; pp. 3645–3649. [Google Scholar]
- Zhang, Y.; Sun, P.; Jiang, Y.; Yu, D.; Weng, F.; Yuan, Z.; Luo, P.; Liu, W.; Wang, X. Bytetrack: Multi-object tracking by associating every detection box. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer Nature: Cham, Switzerland, 2022; pp. 1–21. [Google Scholar]
| Software Environment | Hardware Environment |
|---|---|
| Operating system: Windows 11 | CPU: Intel Core i9-14900KS |
| Deployment: Python v3.12.4 | GPU: NVIDIA GeForce RTX 3090 (24 GB) |
| Deep learning framework: PyTorch v2.0.1 | |
| Training Parameters | |
|---|---|
| Learning rate: 0.01 | Warmup bias lr: 0.1 |
| Input image size: 640 × 640 | Batch size: 16 |
| Epochs: 300 | |
| Model | mAP@0.5 (%) | mAP@0.5–0.95 (%) |
|---|---|---|
| Baseline | 91.19 | 74.29 |
| YOLOv11 + GAM | 91.89 | 75.21 |
| YOLOv11 + SE | 91.64 | 74.73 |
| YOLOv11 + Biformer | 91.76 | 75.09 |
| YOLOv11 + CBAM | 92.03 | 75.30 |
| YOLOv11 + CA | 91.25 | 74.52 |
| YOLOv11 + ECA | 91.87 | 75.13 |
| YOLOv11 + LRSA | 92.41 | 75.71 |
| FDConv | LRSA | SPPF-DW | Precision (%) | Recall (%) | mAP@0.5 (%) | ↑ΔmAP@0.5 (%) |
|---|---|---|---|---|---|---|
| | | | 90.16 | 85.22 | 91.19 | - |
| √ | | | 90.43 | 86.13 | 91.87 | 0.68 |
| | √ | | 91.32 | 86.74 | 92.41 | 1.22 |
| | | √ | 91.89 | 86.76 | 92.34 | 1.15 |
| √ | √ | | 92.07 | 87.03 | 92.85 | 1.66 |
| √ | | √ | 92.44 | 86.96 | 92.61 | 1.42 |
| | √ | √ | 92.68 | 87.09 | 92.94 | 1.75 |
| √ | √ | √ | 92.98 | 87.32 | 93.21 | 2.02 |
| Model | Precision (%) | Recall (%) | mAP@0.5 (%) | mAP@0.5–0.95 (%) |
|---|---|---|---|---|
| YOLOv8 | 89.63 | 83.76 | 89.11 | 72.84 |
| YOLOv10 | 88.23 | 84.38 | 90.57 | 72.34 |
| YOLOv11 | 90.16 | 85.22 | 91.19 | 74.29 |
| Faster RCNN | 88.92 | 84.52 | 87.34 | 67.10 |
| YOLOv8n-MEB | 90.09 | 84.11 | 90.45 | 71.61 |
| Insect-YOLO | 90.60 | 85.91 | 91.05 | 73.65 |
| YOLOv11-ST | 92.98 | 87.32 | 93.21 | 76.09 |
| Method | AAR (%) |
|---|---|
| SORT | 80.6 |
| DeepSORT | 84.4 |
| ByteTrack | 86.2 |
| Our method | 91.2 |
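A plausible way to compute a counting accuracy rate like the AAR above is to average, over test sequences, one minus the relative counting error; the metric definition and sample counts below are assumptions for illustration, since the excerpt does not define AAR explicitly:

```python
def aar(pred_counts, true_counts):
    """Assumed average accuracy rate for counting: per-sequence accuracy
    is 1 - |predicted - ground truth| / ground truth, averaged over all
    sequences and expressed as a percentage. The paper's exact AAR
    definition may differ."""
    accs = [1.0 - abs(p - t) / t for p, t in zip(pred_counts, true_counts)]
    return 100.0 * sum(accs) / len(accs)

# Hypothetical predicted vs. ground-truth termite counts for three clips.
print(f"{aar([47, 102, 88], [50, 100, 95]):.1f}%")
```

Under this definition, over- and under-counting are penalized symmetrically, so a tracker that double-counts re-entering termites scores as poorly as one that misses them.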
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Wang, H.; Wang, Y.; Chen, T. Research on Target Detection and Counting Algorithms for Swarming Termites in Agricultural and Forestry Disaster Early Warning. Appl. Sci. 2025, 15, 11838. https://doi.org/10.3390/app152111838
