Attention-Guided Edge-Optimized Network for Real-Time Detection and Counting of Pre-Weaning Piglets in Farrowing Crates
Simple Summary
Abstract
1. Introduction
- Feature extraction enhancement: A concise and effective multi-scale spatial pyramid attention (MSPA) C2f module is proposed. Replacing the original C2f blocks in the backbone, it improves the YOLOv8n backbone's ability to extract multi-scale spatial information from the input, integrates structural regularization with structural information, and efficiently establishes long-range channel dependencies.
- Neck structure optimization: An improved Gather-and-Distribute mechanism is incorporated into the neck of YOLOv8n. It accelerates multi-scale feature fusion by fully exploiting high-level semantic features and low-level spatial information, thereby improving the model's detection speed.
- Detection strategy refinement: The number of detection heads is reduced to one, and both the sample-assignment strategy and the detection-head structure are refined, substantially reducing the parameter count while maintaining or even improving detection performance.
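The long-range channel-dependency idea in the first contribution follows the spirit of ECA-style channel attention (Wang et al., cited below). A minimal pure-Python sketch of that mechanism is given here for illustration only; it is not the authors' implementation, and the uniform kernel stands in for the learned 1-D convolution weights of the real module:

```python
# Illustrative sketch (not the authors' code): ECA-style channel attention,
# which models cross-channel dependencies with a local 1-D convolution over
# the per-channel descriptors produced by global average pooling.
import math

def eca_channel_weights(channel_means, gamma=2, b=1):
    """Given per-channel global-average-pooled values, return per-channel
    attention weights via a local 1-D convolution (uniform kernel here for
    brevity) followed by a sigmoid. The kernel size is chosen adaptively
    from the channel count, as in ECA-Net."""
    c = len(channel_means)
    t = int(abs((math.log2(c) + b) / gamma))
    k = t if t % 2 else t + 1          # kernel size must be odd
    half = k // 2
    weights = []
    for i in range(c):
        # 1-D convolution with zero padding at the channel-axis borders;
        # out-of-range neighbours simply contribute nothing to the sum.
        window = [channel_means[j] for j in range(i - half, i + half + 1)
                  if 0 <= j < c]
        s = sum(window) / k
        weights.append(1.0 / (1.0 + math.exp(-s)))  # sigmoid gate
    return weights
```

Each input channel would then be rescaled by its weight, so that informative channels are emphasized at negligible extra cost.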
2. Materials
3. Methodology
3.1. Multi-Scale Spatial Pyramid Attention C2f
3.1.1. The Enhanced Hierarchical-Phantom Convolution Module
3.1.2. Channel Relationship Modeling
3.2. Redesign of the Neck Applying the Gather-and-Distribute Mechanism
3.3. The Task-Aligned Detection Head
4. Experiments and Result Analysis
4.1. The Experimental Setting
4.2. Quantitative Results
4.2.1. Ablation Studies
4.3. Qualitative Results and Discussion
4.4. Deployment on Devices with Limited Computational Resources
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Lu, M.; Xiong, Y.; Li, K.; Liu, L.; Yan, L.; Ding, Y.; Lin, X.; Yang, X.; Shen, M. An automatic splitting method for the adhesive piglets’ gray scale image based on the ellipse shape feature. Comput. Electron. Agric. 2016, 120, 53–62. [Google Scholar] [CrossRef]
- Oczak, M.; Maschat, K.; Berckmans, D.; Vranken, E.; Baumgartner, J. Automatic estimation of number of piglets in a pen during farrowing, using image analysis. Biosyst. Eng. 2016, 151, 81–89. [Google Scholar] [CrossRef]
- Gan, H.; Guo, J.; Liu, K.; Deng, X.; Zhou, H.; Luo, D.; Chen, S.; Norton, T.; Xue, Y. Counting piglet suckling events using deep learning-based action density estimation. Comput. Electron. Agric. 2023, 210, 107877. [Google Scholar] [CrossRef]
- Farahnakian, F.; Farahnakian, F.; Björkman, S.; Bloch, V.; Pastell, M.; Heikkonen, J. Pose estimation of sow and piglets during free farrowing using deep learning. J. Agric. Food Res. 2024, 16, 101067. [Google Scholar] [CrossRef]
- Gan, H.; Menegon, F.; Sun, A.; Scollo, A.; Jiang, Q.; Xue, Y.; Norton, T. Peeking into the unseen: Occlusion-resistant segmentation for preweaning piglets under crushing events. Comput. Electron. Agric. 2024, 219, 108683. [Google Scholar] [CrossRef]
- Okinda, C.; Lu, M.; Nyalala, I.; Li, J.; Shen, M. Asphyxia occurrence detection in sows during the farrowing phase by inter-birth interval evaluation. Comput. Electron. Agric. 2018, 152, 221–232. [Google Scholar] [CrossRef]
- Liu, T.; Kong, N.; Liu, Z.; Xi, L.; Hui, X.; Ma, W.; Li, X.; Cheng, P.; Ji, Z.; Yang, Z.; et al. New insights into factors affecting piglet crushing and anti-crushing techniques. Livest. Sci. 2022, 265, 105080. [Google Scholar] [CrossRef]
- Cheng, J.; Ward, M.P. Risk factors for the spread of African Swine Fever in China: A systematic review of Chinese-language literature. Transbound. Emerg. Dis. 2022, 69, e1289. [Google Scholar] [CrossRef] [PubMed]
- Pastell, M.; Hietaoja, J.; Yun, J.; Tiusanen, J.; Valros, A. Predicting farrowing of sows housed in crates and pens using accelerometers and CUSUM charts. Comput. Electron. Agric. 2016, 127, 197–203. [Google Scholar] [CrossRef]
- Zhang, C.; Shen, M.; Liu, L.; Zhang, H.; Cedrick Sean, O. Newborn piglets recognition method based on machine vision. J. Nanjing Agric. Univ. 2017, 40, 169–175. (In Chinese) [Google Scholar] [CrossRef]
- Silapachote, P.; Srisuphab, A.; Banchongthanakit, W. An Embedded System Device to Monitor Farrowing. In Proceedings of the 2018 5th International Conference on Advanced Informatics: Concept Theory and Applications (ICAICTA), Krabi, Thailand, 14–17 August 2018; pp. 208–213. [Google Scholar] [CrossRef]
- Huang, E.; He, Z.; Mao, A.; Ceballos, M.C.; Parsons, T.D.; Liu, K. A semi-supervised generative adversarial network for amodal instance segmentation of piglets in farrowing pens. Comput. Electron. Agric. 2023, 209, 107839. [Google Scholar] [CrossRef]
- Zhao, C.; Liang, X.; Yu, H.; Wang, H.; Fan, S.; Li, B. Automatic identification and counting method of caged hens and eggs based on improved YOLO v7. Trans. Chin. Soc. Agric. Mach. 2023, 54, 300–312. (In Chinese) [Google Scholar] [CrossRef]
- Tu, S.; Cao, Y.; Liang, Y.; Zeng, Z.; Ou, H.; Du, J.; Chen, W. Tracking and automatic behavioral analysis of group-housed pigs based on YOLOX+ BoT-SORT-slim. Smart Agric. Technol. 2024, 9, 100566. [Google Scholar] [CrossRef]
- Tian, M.; Guo, H.; Chen, H.; Wang, Q.; Long, C.; Ma, Y. Automated pig counting using deep learning. Comput. Electron. Agric. 2019, 163, 104840. [Google Scholar] [CrossRef]
- Huang, E.; Mao, A.; Gan, H.; Ceballos, M.C.; Parsons, T.D.; Xue, Y.; Liu, K. Center clustering network improves piglet counting under occlusion. Comput. Electron. Agric. 2021, 189, 106417. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhou, S.; Zhang, N.; Chai, X.; Sun, T. A Regional Farming Pig Counting System Based on Improved Instance Segmentation Algorithm. Smart Agric. 2024, 6, 53. (In Chinese) [Google Scholar] [CrossRef]
- He, P.; Zhao, S.; Pan, P.; Zhou, G.; Zhang, J. PDC-YOLO: A Network for Pig Detection under Complex Conditions for Counting Purposes. Agriculture 2024, 14, 1807. [Google Scholar] [CrossRef]
- Zhou, J.; Liu, L.; Jiang, T.; Tian, H.; Shen, M.; Liu, L. A Novel Behavior Detection Method for Sows and Piglets during Lactation Based on an Inspection Robot. Comput. Electron. Agric. 2024, 227, 109613. [Google Scholar] [CrossRef]
- Jensen, D.B.; Pedersen, L.J. Automatic counting and positioning of slaughter pigs within the pen using a convolutional neural network and video images. Comput. Electron. Agric. 2021, 188, 106296. [Google Scholar] [CrossRef]
- Lee, G.; Ogata, K.; Kawasue, K.; Sakamoto, S.; Ieiri, S. Identifying-and-counting based monitoring scheme for pigs by integrating BLE tags and WBLCX antennas. Comput. Electron. Agric. 2022, 198, 107070. [Google Scholar] [CrossRef]
- Feng, W.; Wang, K.; Zhou, S. An efficient neural network for pig counting and localization by density map estimation. IEEE Access 2023, 11, 81079–81091. [Google Scholar] [CrossRef]
- Ho, K.Y.; Tsai, Y.J.; Kuo, Y.F. Automatic monitoring of lactation frequency of sows and movement quantification of newborn piglets in farrowing houses using convolutional neural networks. Comput. Electron. Agric. 2021, 189, 106376. [Google Scholar] [CrossRef]
- Ding, Q.A.; Chen, J.; Shen, M.X.; Liu, L.S. Activity detection of suckling piglets based on motion area analysis using frame differences in combination with convolution neural network. Comput. Electron. Agric. 2022, 194, 106741. [Google Scholar] [CrossRef]
- Gao, S.; Xia, T.; Hong, G.; Zhu, Y.; Chen, Z.; Pan, E.; Xi, L. An inspection network with dynamic feature extractor and task alignment head for steel surface defect. Measurement 2024, 224, 113957. [Google Scholar] [CrossRef]
- Ma, B.; Hua, Z.; Wen, Y.; Deng, H.; Zhao, Y.; Pu, L.; Song, H. Using an improved lightweight YOLOv8 model for real-time detection of multi-stage apple fruit in complex orchard environments. Artif. Intell. Agric. 2024, 11, 70–82. [Google Scholar] [CrossRef]
- Wang, D.; Dong, Z.; Yang, G.; Li, W.; Wang, Y.; Wang, W.; Zhang, Y.; Lü, Z.; Qin, Y. APNet-YOLOv8s: A real-time automatic aquatic plants recognition algorithm for complex environments. Ecol. Indic. 2024, 167, 112597. [Google Scholar] [CrossRef]
- Chen, Z.; Feng, J.; Zhu, K.; Yang, Z.; Wang, Y.; Ren, M. YOLOv8-ACCW: Lightweight grape leaf disease detection method based on improved YOLOv8. IEEE Access 2024, 12, 123595–123608. [Google Scholar] [CrossRef]
- Wang, X.; Liu, J. Vegetable disease detection using an improved YOLOv8 algorithm in the greenhouse plant environment. Sci. Rep. 2024, 14, 4261. [Google Scholar] [CrossRef]
- Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A.C.; Lo, W.Y.; et al. Segment anything. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 4015–4026. [Google Scholar] [CrossRef]
- zk. Dongcheng-b Dataset. 2025. Available online: https://universe.roboflow.com/zk/dongcheng-b (accessed on 7 March 2025).
- Yu, Y.; Zhang, Y.; Cheng, Z.; Song, Z.; Tang, C. Multi-scale spatial pyramid attention mechanism for image recognition: An effective approach. Eng. Appl. Artif. Intell. 2024, 133, 108261. [Google Scholar] [CrossRef]
- Wang, Q.; Wu, B.; Zhu, P.; Li, P.; Zuo, W.; Hu, Q. ECA-Net: Efficient channel attention for deep convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 3–19 June 2020; pp. 11534–11542. [Google Scholar] [CrossRef]
- Wang, C.; He, W.; Nie, Y.; Guo, J.; Liu, C.; Han, K.; Wang, Y. Gold-YOLO: Efficient Object Detector via Gather-and-Distribute Mechanism. arXiv 2023, arXiv:2309.11331. [Google Scholar]
- Xu, C.; Liao, Y.; Liu, Y.; Tian, R.; Guo, T. Lightweight rail surface defect detection algorithm based on an improved YOLOv8. Measurement 2025, 242, 115922. [Google Scholar] [CrossRef]
- Woo, S.; Debnath, S.; Hu, R.; Chen, X.; Liu, Z.; Kweon, I.S.; Xie, S. Convnext v2: Co-designing and scaling convnets with masked autoencoders. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 16133–16142. [Google Scholar] [CrossRef]
- Feng, C.; Zhong, Y.; Gao, Y.; Scott, M.R.; Huang, W. Tood: Task-aligned one-stage object detection. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; IEEE Computer Society: Washington, DC, USA, 2021; pp. 3490–3499. [Google Scholar] [CrossRef]
- Dai, X.; Chen, Y.; Xiao, B.; Chen, D.; Liu, M.; Yuan, L.; Zhang, L. Dynamic head: Unifying object detection heads with attentions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7373–7382. [Google Scholar] [CrossRef]
- Bu, Y.; Ye, H.; Tie, Z.; Chen, Y.; Zhang, D. OD-YOLO: Robust small object detection model in remote sensing image with a novel multi-scale feature fusion. Sensors 2024, 24, 3596. [Google Scholar] [CrossRef] [PubMed]
- Chen, K.; Wang, J.; Pang, J.; Cao, Y.; Xiong, Y.; Li, X.; Sun, S.; Feng, W.; Liu, Z.; Xu, J.; et al. MMDetection: Open MMLab Detection Toolbox and Benchmark. arXiv 2019, arXiv:1906.07155. [Google Scholar] [CrossRef]
- Girshick, R. Fast r-cnn. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1440–1448. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. Ssd: Single shot multibox detector. In Proceedings of the Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, 11–14 October 2016; Part I; pp. 21–37. [Google Scholar] [CrossRef]
- Ge, Z.; Liu, S.; Wang, F.; Li, Z.; Sun, J. Yolox: Exceeding yolo series in 2021. arXiv 2021, arXiv:2107.08430. [Google Scholar] [CrossRef]
- Jocher, G.; Qiu, J.; Chaurasia, A. Ultralytics YOLO. Version 8.3.127, License: AGPL-3.0. 2025. Available online: https://github.com/ultralytics/ultralytics (accessed on 8 June 2025).
- Lv, W.; Zhao, Y.; Chang, Q.; Huang, K.; Wang, G.; Liu, Y. RT-DETRv2: Improved Baseline with Bag-of-Freebies for Real-Time Detection Transformer. arXiv 2024, arXiv:2407.17140. [Google Scholar]
- Fernandez, F.G. TorchCAM: Class Activation Explorer. 2020. Available online: https://github.com/frgfm/torch-cam (accessed on 20 August 2025).
- Ultralytics. Yolov8 Performance on Raspberry Pi 4B (8Gb). 2024. Available online: https://github.com/ultralytics/ultralytics/issues/12996 (accessed on 5 January 2025).
Model | Computational Complexity (MFLOPs) | Parameters (K)
---|---|---
C2f | 2766.000 | 107.264
MSPA-C2f | 375.196 | 14.402
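As a quick sanity check on the table above, the relative savings it implies can be computed directly (values taken from the table; `reduction` is a helper defined here for illustration, not part of the paper):

```python
# Relative reduction in compute and parameters when C2f is replaced by
# MSPA-C2f, using the figures reported in the table.
def reduction(before, after):
    """Percentage reduction from `before` to `after`."""
    return (1 - after / before) * 100.0

flops_saved = reduction(2766.000, 375.196)   # MFLOPs saved, in percent
params_saved = reduction(107.264, 14.402)    # K parameters saved, in percent
```

Both come out near 86–87%, i.e. roughly a 7.4x reduction in the module's compute and parameter budget.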
Hyperparameter | Value
---|---
Training Epochs | 100
Batch Size | 16
Learning Rate | 0.001
IoU Threshold | 0.5
Data Augmentation | Albumentations (flip, crop, HSV, brightness, mosaic, etc.)
Model | Precision (%) | Recall (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Params (M) | Model Size (MB) | FLOPs (G) | Inference Time (ms)
---|---|---|---|---|---|---|---|---
Faster R-CNN | 39.26 | 95.53 | 74.50 | 48.50 | 99.252 | 404.1 | 188.416 | 65.4 |
SSD_Lite | 63.12 | 86.48 | 69.60 | 48.30 | 3.403 | 15.3 | 2.754 | 8.0 |
TOOD | 64.32 | 93.30 | 75.80 | 53.30 | 32.021 | 129.9 | 78.857 | 31.1 |
ATSS DyHead | 68.57 | 94.28 | 79.10 | 60.60 | 38.892 | 160.7 | 43.559 | 46.1 |
YOLOX-tiny | 86.25 | 49.28 | 72.10 | 36.60 | 5.033 | 59.6 | 3.199 | 8.4 |
Gold YOLO | 83.40 | 92.00 | 91.62 | 64.20 | 5.620 | 49.6 | 12.1 | 2.29 |
YOLO11n | 87.70 | 94.40 | 94.20 | 71.80 | 2.582 | 5.5 | 6.3 | 14.1 |
RT-DETRv2 | - | - | 94.80 | 75.30 | 40.444 | 141.0 | 132.7 | 15.2 |
Our work | 88.50 | 91.70 | 93.80 | 70.30 | 1.249 | 2.7 | 4.3 | 8.0 |
Method | MSPA C2f | GD | THead | Precision (%) | Recall (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Params (M) | Model Size (MB) | FLOPs (G) | Inference Time (ms)
---|---|---|---|---|---|---|---|---|---|---|---
Baseline | - | - | - | 85.9 | 92.1 | 91.8 | 69.4 | 3.006 | 6.2 | 8.1 | 5.12 |
M | ✓ | - | - | 86.7 | 90.6 | 92.1 | 69.0 | 2.532 | 5.3 | 6.8 | 8.22 |
GD | - | ✓ | - | 86.7 | 86.6 | 89.6 | 59.5 | 1.748 | 3.5 | 6.2 | 4.64 |
T | - | - | ✓ | 87.0 | 89.2 | 91.8 | 65.8 | 2.461 | 4.8 | 5.6 | 5.10 |
M + GD | ✓ | ✓ | - | 86.9 | 92.4 | 93.9 | 70.9 | 1.341 | 2.9 | 5.7 | 7.91 |
M + T | ✓ | - | ✓ | 87.0 | 88.9 | 91.0 | 62.2 | 1.987 | 3.9 | 4.3 | 8.01 |
GD + T | - | ✓ | ✓ | 87.6 | 91.8 | 92.6 | 64.8 | 1.723 | 3.4 | 5.7 | 5.29 |
M + GD + T | ✓ | ✓ | ✓ | 88.5 | 91.7 | 93.8 | 70.3 | 1.249 | 2.7 | 4.3 | 8.12 |
Method | GT (class_0) | TP | FN | MAE | MSE | MAR (%) | GT (class_1) | TP | FN | MAE | MSE | MAR (%)
---|---|---|---|---|---|---|---|---|---|---|---|---
Baseline | 841 | 754 | 87 | 0.68 | 1.85 | 89.65 | 482 | 479 | 3 | 0.02 | 0.03 | 99.37 |
M | 841 | 751 | 90 | 0.67 | 1.81 | 89.29 | 482 | 478 | 4 | 0.04 | 0.05 | 99.17 |
GD | 841 | 716 | 125 | 1.07 | 3.90 | 85.13 | 482 | 477 | 5 | 0.03 | 0.03 | 98.96 |
T | 841 | 741 | 100 | 0.65 | 1.65 | 88.11 | 482 | 458 | 24 | 0.05 | 0.05 | 95.02 |
M + GD | 841 | 770 | 71 | 0.63 | 1.63 | 91.55 | 482 | 480 | 2 | 0.02 | 0.03 | 99.58 |
M + T | 841 | 719 | 122 | 0.63 | 1.52 | 85.49 | 482 | 459 | 23 | 0.05 | 0.05 | 95.23 |
GD + T | 841 | 738 | 103 | 0.63 | 1.68 | 87.75 | 482 | 477 | 5 | 0.03 | 0.05 | 98.96 |
M + GD + T | 841 | 773 | 68 | 0.65 | 1.67 | 91.91 | 482 | 478 | 4 | 0.02 | 0.03 | 99.17
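The counting metrics tabulated above can be reproduced from per-image counts under their usual definitions. One assumption is made here: MAR is taken as matched true positives over ground-truth instances, which is consistent with the tabulated values (e.g. 773/841 gives 91.91%). The helpers below are illustrative, not the authors' evaluation code:

```python
# Hedged sketch of the counting metrics: MAE and MSE over per-image count
# errors, and a MAR-style recall over the whole test set.
def counting_metrics(gt_counts, pred_counts):
    """gt_counts / pred_counts: per-image ground-truth and predicted
    counts for one class. Returns (MAE, MSE) over the image set."""
    errors = [p - g for g, p in zip(gt_counts, pred_counts)]
    mae = sum(abs(e) for e in errors) / len(errors)
    mse = sum(e * e for e in errors) / len(errors)
    return mae, mse

def matched_recall(tp, gt):
    """MAR-style recall in percent: matched detections / ground truth."""
    return 100.0 * tp / gt
```

For instance, `matched_recall(773, 841)` recovers the 91.91% reported for the full model on class_0.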
Method | MSPA C2f | GD | THead | Precision (%) | Recall (%) | mAP@0.5 (%) | mAP@0.5:0.95 (%) | Inference (ms)
---|---|---|---|---|---|---|---|---
Baseline | - | - | - | |||||
M | ✓ | - | - | |||||
GD | - | ✓ | - | |||||
T | - | - | ✓ | |||||
M + GD | ✓ | ✓ | - | * | * | |||
M + T | ✓ | - | ✓ | |||||
GD + T | - | ✓ | ✓ | |||||
M + GD + T | ✓ | ✓ | ✓ | * | * | * |
Device | GT (Piglet) | GT (Swine) | TP (Piglet) | TP (Swine) | FN (Piglet) | FN (Swine) | FP (Piglet) | FP (Swine)
---|---|---|---|---|---|---|---|---
Raspberry Pi 4B | 20 | 4 | 18 | 4 | 2 | 0 | 0 | 0 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kong, N.; Liu, T.; Li, G.; Xi, L.; Wang, S.; Shi, Y. Attention-Guided Edge-Optimized Network for Real-Time Detection and Counting of Pre-Weaning Piglets in Farrowing Crates. Animals 2025, 15, 2553. https://doi.org/10.3390/ani15172553