High-Precision Peanut Pod Detection Device Based on Dual-Route Attention Mechanism
Abstract
1. Introduction
2. Related Work
3. Materials and Methods
3.1. Dataset Construction
3.2. Model Improvements
3.2.1. BiFormer Module in the Backbone Network
3.2.2. Triplet Attention Module
3.2.3. MPDIoU
3.3. Performance Evaluation Metrics
4. Results and Discussion
4.1. Experimental Environment
4.2. Model Training Results
4.3. Comparative Experiments on Loss Functions
4.4. Ablation Study
4.5. Testing of Peanut Pod Detection Device
4.6. Comparative Experiments with Different Models
5. Discussion
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
| Configuration | Parameter |
|---|---|
| CPU | Intel(R) Core(TM) i7-13700K |
| GPU | NVIDIA GeForce RTX 4070 Ti |
| Memory | 64 GB |
| Operating System | Windows 10 (64-bit) |
| Deep Learning Framework | PyTorch 1.10.1 |
| Programming Language | Python 3.8.10 |
| Model | Precision | Recall | mAP50 | F1-Score |
|---|---|---|---|---|
| YOLOv8 | 94.50% | 93.80% | 97.80% | 94.15% |
| BTM-YOLO v8 | 98.40% | 96.20% | 99.00% | 97.29% |
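The F1-scores in the table above are the harmonic mean of precision and recall. A minimal sketch reproducing the tabulated values (the function name `f1_score` is illustrative, not from the paper):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the table values (metrics given as fractions of 1):
baseline = f1_score(0.945, 0.938)  # YOLOv8
improved = f1_score(0.984, 0.962)  # BTM-YOLO v8
print(f"{baseline:.2%}  {improved:.2%}")  # prints "94.15%  97.29%"
```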
| ID | Loss | Precision | Recall | mAP50 | F1-Score |
|---|---|---|---|---|---|
| 1 | CIoU | 94.50% | 93.80% | 97.80% | 94.15% |
| 2 | inner_CIoU | 95.80% | 93.00% | 97.60% | 94.38% |
| 3 | DIoU | 94.90% | 94.60% | 97.40% | 94.75% |
| 4 | EIoU | 94.30% | 90.70% | 96.70% | 92.46% |
| 5 | GIoU | 94.50% | 92.20% | 96.50% | 93.34% |
| 6 | SIoU | 94.30% | 96.00% | 97.90% | 95.14% |
| 7 | MPDIoU | 96.40% | 95.20% | 98.30% | 95.80% |
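For reference, MPDIoU (row 7) augments IoU with the squared distances between the two boxes' top-left and bottom-right corner pairs, normalized by the squared image diagonal. A minimal sketch under that definition (box format and variable names are illustrative, not the paper's implementation):

```python
def mpdiou(box_a, box_b, img_w, img_h):
    """MPDIoU for axis-aligned boxes given as (x1, y1, x2, y2):
    IoU minus the normalized squared distances between the
    top-left and bottom-right corner pairs of the two boxes."""
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    # Squared corner distances, normalized by the squared image diagonal
    d1 = (box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2
    d2 = (box_a[2] - box_b[2]) ** 2 + (box_a[3] - box_b[3]) ** 2
    norm = img_w ** 2 + img_h ** 2
    return iou - d1 / norm - d2 / norm

# The regression loss is 1 - MPDIoU; identical boxes give loss 0.
```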
| ID | Backbone | Attention | Loss | Precision | Recall | mAP50 | F1-Score |
|---|---|---|---|---|---|---|---|
| 1 | - | - | - | 94.50% | 93.80% | 97.80% | 94.15% |
| 2 | BiFormer | - | - | 97.00% | 95.90% | 98.60% | 96.45% |
| 3 | - | - | MPDIoU | 96.40% | 95.20% | 98.30% | 95.80% |
| 4 | - | Triplet Attention | - | 94.10% | 96.00% | 97.30% | 95.04% |
| 5 | BiFormer | Triplet Attention | - | 97.20% | 95.60% | 98.40% | 96.39% |
| 6 | BiFormer | - | MPDIoU | 96.80% | 97.90% | 98.90% | 97.35% |
| 7 | - | Triplet Attention | MPDIoU | 96.60% | 93.60% | 98.40% | 95.08% |
| 8 | BiFormer | Triplet Attention | MPDIoU | 98.40% | 96.20% | 99.00% | 97.29% |
| Method | One Kernel | Two Kernels | Three Kernels | Four or More Kernels | R² | RMSE |
|---|---|---|---|---|---|---|
| Machine | 169 | 1419 | 472 | 13 | 0.999 | 12.69 |
| Manual | 177 | 1443 | 472 | 11 | | |
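The agreement figures in the table above can be recomputed from the four per-category counts: RMSE over the machine-vs-manual differences, and R² taken with the manual counts as the reference values (an assumption that reproduces the tabulated numbers):

```python
import math

machine = [169, 1419, 472, 13]
manual = [177, 1443, 472, 11]

# RMSE of the machine counts against the manual counts
rmse = math.sqrt(sum((m - n) ** 2 for m, n in zip(machine, manual)) / len(manual))

# R² = 1 - (residual sum of squares) / (total sum of squares)
mean_manual = sum(manual) / len(manual)
ss_res = sum((m - n) ** 2 for m, n in zip(machine, manual))
ss_tot = sum((n - mean_manual) ** 2 for n in manual)
r2 = 1 - ss_res / ss_tot

print(f"R2 = {r2:.3f}, RMSE = {rmse:.2f}")  # prints "R2 = 0.999, RMSE = 12.69"
```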
| Models | Precision | Recall | mAP50 | F1-Score | GFLOPs | FPS |
|---|---|---|---|---|---|---|
| YOLOv5 | 93.40% | 90.40% | 96.40% | 91.88% | 5.8 | 303.51 |
| YOLOv6 | 88.70% | 86.20% | 91.80% | 87.43% | 11.5 | 271.51 |
| YOLOv9s | 93.80% | 92.30% | 96.30% | 93.04% | 22.1 | 103.24 |
| YOLOv10n | 90.00% | 93.70% | 95.80% | 91.81% | 8.2 | 266.03 |
| YOLOv3-tiny | 94.60% | 90.50% | 94.00% | 92.50% | 14.3 | 263.51 |
| YOLOv11n | 86.30% | 93.70% | 94.30% | 89.85% | 6.3 | 281.09 |
| BTM-YOLO v8 (ours) | 98.40% | 96.20% | 99.00% | 97.29% | 30.5 | 64.92 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Chen, Y.; Chang, P.; Wang, T.; Zhao, J. High-Precision Peanut Pod Detection Device Based on Dual-Route Attention Mechanism. Appl. Sci. 2026, 16, 418. https://doi.org/10.3390/app16010418

