MTS-YOLO: A Multi-Task Lightweight and Efficient Model for Tomato Fruit Bunch Maturity and Stem Detection
Abstract
1. Introduction
- (1)
- (2) We propose HLIS-PAN, featuring the newly designed down-top select feature fusion (DSFF) module, which fuses low-level features into high-level features to compensate for the loss of positional information and improve semantic understanding (see the DSFF sketch after this list). Compared with the YOLOv8 neck, HLIS-PAN is lighter and more efficient.
- (3) We integrate CAA [41] to sharpen the focus on central features, enhance the recognition of elongated targets such as stems, and boost foreground detection precision, which helps optimize the picking robot's performance (see the CAA sketch after this list).
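For readers who want a concrete picture of contribution (2), the sketch below illustrates one way a down-top select feature fusion (DSFF) block could gate low-level positional detail into a high-level feature map. This is a minimal PyTorch sketch, not the published implementation: the strided downsampling path, the sigmoid gate, and all channel counts are illustrative assumptions.

```python
# Hypothetical sketch of a "down-top select feature fusion" (DSFF) block.
# The real MTS-YOLO module is described only at a high level here; the
# gating form and the downsampling path are illustrative assumptions.
import torch
import torch.nn as nn

class DSFFSketch(nn.Module):
    def __init__(self, low_ch: int, high_ch: int):
        super().__init__()
        # Strided conv brings the low-level map to the high-level resolution.
        self.down = nn.Conv2d(low_ch, high_ch, kernel_size=3, stride=2, padding=1)
        # A 1x1 conv + sigmoid "selects" which low-level positions to inject.
        self.select = nn.Sequential(nn.Conv2d(high_ch, high_ch, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(high_ch, high_ch, 3, padding=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        low = self.down(low)                 # match spatial size of `high`
        gate = self.select(low)              # per-position selection weights
        return self.fuse(high + gate * low)  # inject gated positional detail

if __name__ == "__main__":
    low = torch.randn(1, 128, 80, 80)    # shallower, higher-resolution map
    high = torch.randn(1, 256, 40, 40)   # deeper, lower-resolution map
    print(DSFFSketch(128, 256)(low, high).shape)  # torch.Size([1, 256, 40, 40])
```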
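Similarly for contribution (3), the following hedged sketch captures the spirit of Context Anchor Attention [41]: depthwise strip convolutions (1×k and k×1) gather elongated context, such as stems, and produce a sigmoid attention map. The kernel size, pooling window, and layer ordering are assumptions based on the cited paper's public description, not the authors' exact module.

```python
# Hedged sketch of Context Anchor Attention (CAA) in the spirit of [41].
# Strip depthwise convolutions capture long, thin structures cheaply.
import torch
import torch.nn as nn

class CAASketch(nn.Module):
    def __init__(self, ch: int, k: int = 11):
        super().__init__()
        self.pool = nn.AvgPool2d(7, stride=1, padding=3)  # local context anchor
        self.conv1 = nn.Conv2d(ch, ch, 1)
        # Depthwise strip convs: long-range context along each spatial axis.
        self.h_conv = nn.Conv2d(ch, ch, (1, k), padding=(0, k // 2), groups=ch)
        self.v_conv = nn.Conv2d(ch, ch, (k, 1), padding=(k // 2, 0), groups=ch)
        self.conv2 = nn.Conv2d(ch, ch, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = torch.sigmoid(
            self.conv2(self.v_conv(self.h_conv(self.conv1(self.pool(x)))))
        )
        return x * attn  # re-weight features toward central/elongated targets

if __name__ == "__main__":
    x = torch.randn(1, 256, 40, 40)
    print(CAASketch(256)(x).shape)  # torch.Size([1, 256, 40, 40])
```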
2. Materials and Methods
2.1. Experimental Dataset
2.1.1. Dataset Source
2.1.2. Dataset Sample Description
2.1.3. Dataset Splitting
2.2. Model Introduction
2.2.1. Network Architecture of YOLOv8
2.2.2. Network Architecture of MTS-YOLO
2.3. High- and Low-Level Interactive Screening Path Aggregation Network (HLIS-PAN)
2.3.1. Context Anchor Attention
2.3.2. Top-Down Select Feature Fusion Module
2.3.3. Down-Top Select Feature Fusion Module
3. Results and Analysis
3.1. Experimental Environment and Parameter Settings
3.2. Model Evaluation Metrics
3.3. Training and Testing Results of MTS-YOLO
3.4. Comparison of Neck
3.5. Comparison of Model Performance with Lightweight State-of-the-Art Models
3.6. Display of Visual Results
3.7. Comparison of Model Performance with Larger State-of-the-Art Models
3.8. Ablation Experiment
4. Discussion
4.1. Discussion of the Current Research Status
4.2. Limitations and Future Work
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix B
| Model | F1-Score | mAP@0.5 | mAP@0.5:0.95 | Parameters | Inference Time | FLOPs |
|---|---|---|---|---|---|---|
| YOLOv5n | 82.2% | 83.6% | 66.7% | 2.39 M | 3.7 ms | 7.1 G |
| YOLOv6n | 84.3% | 86.1% | 69.1% | 4.04 M | 4.0 ms | 11.8 G |
| YOLOv7-tiny | 84.8% | 85.5% | 62.5% | 5.74 M | 7.5 ms | 13.0 G |
| YOLOv8n | 84.4% | 86.7% | 67.0% | 2.87 M | 4.0 ms | 8.1 G |
| YOLOv9t | 82.7% | 87.6% | 69.6% | 2.50 M | 8.9 ms | 10.7 G |
| YOLOv9t * | 82.7% | 87.6% | 69.6% | 1.79 M | 6.3 ms | 7.1 G |
| YOLOv10n | 81.3% | 83.2% | 63.9% | 2.57 M | 3.4 ms | 8.2 G |
| MTS-YOLO (ours) | 85.1% | 88.9% | 68.3% | 2.05 M | 3.8 ms | 6.8 G |
Appendix C
Appendix D
References
- FAO. World Food and Agriculture—Statistical Yearbook 2022; FAO: Rome, Italy, 2022. [Google Scholar]
- Xiao, X.; Wang, Y.N.; Jiang, Y.M. Review of research advances in fruit and vegetable harvesting robots. J. Electr. Eng. Technol. 2024, 19, 773–789. [Google Scholar] [CrossRef]
- Kalampokas, T.; Vrochidou, E.; Papakostas, G.; Pachidis, T.; Kaburlasos, V. Grape stem detection using regression convolutional neural networks. Comput. Electron. Agric. 2021, 186, 106220. [Google Scholar] [CrossRef]
- Ariza-Sentís, M.; Vélez, S.; Martínez-Peña, R.; Baja, H.; Valente, J. Object detection and tracking in Precision Farming: A systematic review. Comput. Electron. Agric. 2024, 219, 108757. [Google Scholar] [CrossRef]
- Kumar, S.-D.; Esakkirajan, S.; Bama, S.; Keerthiveena, B. A microcontroller based machine vision approach for tomato grading and sorting using SVM classifier. Microprocess. Microsyst. 2020, 76, 103090. [Google Scholar] [CrossRef]
- Bai, Y.H.; Mao, S.; Zhou, J.; Zhang, B.H. Clustered tomato detection and picking point location using machine learning-aided image analysis for automatic robotic harvesting. Precis. Agric. 2023, 24, 727–743. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 39, 1137–1149. [Google Scholar] [CrossRef]
- Sun, J.; He, X.F.; Wu, M.M.; Wu, X.H.; Shen, J.F.; Lu, B. Detection of tomato organs based on convolutional neural network under the overlap and occlusion backgrounds. Mach. Vis. Appl. 2020, 31, 1–13. [Google Scholar] [CrossRef]
- Mu, Y.; Chen, T.S.; Ninomiya, S.; Guo, W. Intact detection of highly occluded immature tomatoes on plants using deep learning techniques. Sensors 2020, 20, 2984. [Google Scholar] [CrossRef]
- Seo, D.; Cho, B.-H.; Kim, K.-C. Development of monitoring robot system for tomato fruits in hydroponic greenhouses. Agronomy 2021, 11, 2211. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A. SSD: Single Shot MultiBox Detector. In Proceedings of the 14th European Conference on Computer Vision (ECCV 2016), Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271. [Google Scholar]
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767. [Google Scholar]
- Bochkovskiy, A.; Wang, C.Y.; Liao, H.-Y.M. YOLOv4: Optimal speed and accuracy of object detection. arXiv 2020, arXiv:2004.10934. [Google Scholar]
- Jocher, G. YOLOv5 Release v6.1. 2022. Available online: https://github.com/ultralytics/YOLOv5/releases/tag/v6.1 (accessed on 5 August 2024).
- Li, C.; Li, L.; Geng, Y.; Jiang, H.; Cheng, M.; Zhang, B.; Ke, Z.; Xu, X.; Chu, X. YOLOv6 v3.0: A full-scale reloading. arXiv 2023, arXiv:2301.05586. [Google Scholar]
- Wang, C.Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 17–24 June 2023; pp. 7464–7475. [Google Scholar]
- Jocher, G. Ultralytics YOLOv8. Available online: https://github.com/ultralytics/ultralytics (accessed on 5 August 2024).
- Wang, C.Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning what you want to learn using programmable gradient information. arXiv 2024, arXiv:2402.13616. [Google Scholar]
- Wang, A.; Chen, H.; Liu, L.H.; Chen, K.; Lin, Z.J.; Han, J.G.; Ding, G.G. YOLOv10: Real-time end-to-end object detection. arXiv 2024, arXiv:2405.14458. [Google Scholar]
- Yuan, T.; Lv, L.; Zhang, F.; Fu, J.; Gao, J.; Zhang, J.X.; Li, W.; Zhang, C.L.; Zhang, W.Q. Robust cherry tomatoes detection algorithm in greenhouse scene based on SSD. Agriculture 2020, 10, 160. [Google Scholar] [CrossRef]
- Vasconez, J.P.; Delpiano, J.; Vougioukas, S.; Cheein, F. Comparison of convolutional neural networks in fruit detection and counting: A comprehensive evaluation. Comput. Electron. Agric. 2020, 173, 105348. [Google Scholar] [CrossRef]
- Zheng, T.X.; Jiang, M.Z.; Li, Y.F.; Feng, M.C. Research on tomato detection in natural environment based on RC-YOLOv4. Comput. Electron. Agric. 2022, 198, 107029. [Google Scholar] [CrossRef]
- Ge, Y.H.; Lin, S.; Zhang, Y.H.; Li, Z.L.; Cheng, H.T.; Dong, J.; Shao, S.S.; Zhang, J.; Qi, X.Y.; Wu, Z.D. Tracking and counting of tomato at different growth period using an improving YOLO-deepsort network for inspection robot. Machines 2022, 10, 489. [Google Scholar] [CrossRef]
- Zeng, T.H.; Li, S.Y.; Song, Q.M.; Zhong, F.L.; Wei, X. Lightweight tomato real-time detection method based on improved YOLO and mobile deployment. Comput. Electron. Agric. 2023, 205, 107625. [Google Scholar] [CrossRef]
- Phan, Q.; Nguyen, V.; Lien, C.; Duong, T.; Hou, M.T.; Le, N. Classification of Tomato Fruit Using YOLOv5 and Convolutional Neural Network Models. Plants 2023, 12, 790. [Google Scholar] [CrossRef]
- Li, P.; Zheng, J.S.; Li, P.Y.; Long, H.W.; Li, M.; Gao, L.H. Tomato maturity detection and counting model based on MHSA-YOLOv8. Sensors 2023, 23, 6701. [Google Scholar] [CrossRef] [PubMed]
- Chen, W.B.; Liu, M.C.; Zhao, C.J.; Li, C.J.; Wang, Y.Q. MTD-YOLO: Multi-task deep convolutional neural network for cherry tomato fruit bunch maturity detection. Comput. Electron. Agric. 2024, 216, 108533. [Google Scholar] [CrossRef]
- Yue, X.Y.; Qi, K.; Yang, F.H.; Na, X.Y.; Liu, Y.H.; Liu, C.H. RSR-YOLO: A real-time method for small target tomato detection based on improved YOLOv8 network. Discov. Appl. Sci. 2024, 6, 268. [Google Scholar] [CrossRef]
- Chen, J.Y.; Liu, H.; Zhang, Y.T.; Zhang, D.K.; Ouyang, H.K.; Chen, X.Y. A multiscale lightweight and efficient model based on YOLOv7: Applied to citrus orchard. Plants 2022, 11, 3260. [Google Scholar] [CrossRef]
- Yan, B.; Fan, P.; Lei, X.Y.; Liu, Z.J.; Yang, F.Z. A real-time apple targets detection method for picking robot based on improved YOLOv5. Remote Sens. 2021, 13, 1619. [Google Scholar] [CrossRef]
- Nan, Y.L.; Zhang, H.C.; Zeng, Y.; Zheng, J.Q.; Ge, Y.F. Intelligent detection of Multi-Class pitaya fruits in target picking row based on WGB-YOLO network. Comput. Electron. Agric. 2023, 208, 107780. [Google Scholar] [CrossRef]
- Chen, J.Q.; Ma, A.Q.; Huang, L.X.; Su, Y.S.; Li, W.Q.; Zhang, H.D.; Wang, Z.K. GA-YOLO: A lightweight YOLO model for dense and occluded grape target detection. Horticulturae 2023, 9, 443. [Google Scholar] [CrossRef]
- Cao, L.L.; Chen, Y.R.; Jin, Q.G. Lightweight Strawberry Instance Segmentation on Low-Power Devices for Picking Robots. Electronics 2023, 12, 3145. [Google Scholar] [CrossRef]
- Zhang, P.; Liu, X.M.; Yuan, J.; Liu, C.L. YOLO5-spear: A robust and real-time spear tips locator by improving image augmentation and lightweight network for selective harvesting robot of white asparagus. Biosyst. Eng. 2022, 218, 43–61. [Google Scholar] [CrossRef]
- Miao, Z.H.; Yu, X.Y.; Li, N.; Zhang, Z.; He, C.X.; Deng, C.Y.; Sun, T. Efficient tomato harvesting robot based on image processing and deep learning. Precis. Agric. 2023, 24, 254–287. [Google Scholar] [CrossRef]
- Zhu, X.Y.; Chen, F.J.; Zhang, X.W.; Zheng, Y.L.; Peng, X.D.; Chen, C. Detection the maturity of multi-cultivar olive fruit in orchard environments based on Olive-EfficientDet. Sci. Hortic. 2024, 324, 112607. [Google Scholar] [CrossRef]
- Chen, Y.F.; Zhang, C.Y.; Chen, B.; Huang, Y.Y.; Sun, Y.F.; Wang, C.M.; Fu, X.J.; Dai, Y.X.; Qin, F.W.; Peng, Y.; et al. Accurate leukocyte detection based on deformable-DETR and multi-level feature fusion for aiding diagnosis of blood diseases. Comput. Biol. Med. 2024, 170, 107917. [Google Scholar] [CrossRef] [PubMed]
- Liu, W.; Lu, H.; Fu, H.; Cao, Z. Learning to upsample by learning to sample. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Paris, France, 1–6 October 2023; pp. 6027–6037. [Google Scholar]
- Cai, X.; Lai, Q.; Wang, Y.; Wang, W.; Sun, Z.; Yao, Y. Poly kernel inception network for remote sensing detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 17–21 June 2024; pp. 27706–27716. [Google Scholar]
- Song, G.Z.; Shi, Y.; Wang, J.; Jing, C.; Luo, G.F.; Sun, S.; Wang, X.L.; Li, Y.N. 2022 Dataset of String Tomato in Shanxi Nonggu Tomato Town. Sci. Data Bank. 2023. Available online: https://cstr.cn/31253.11.sciencedb.05228 (accessed on 5 August 2024).
- Feng, C.; Zhong, Y.; Gao, Y.; Scott, M.R.; Huang, W. TOOD: Task-Aligned One-Stage Object Detection. In Proceedings of the 2021 IEEE International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 3490–3499. [Google Scholar]
- Dong, X.J.; Zhang, C.S.; Wang, J.H.; Chen, Y.; Wang, D.W. Real-time detection of surface cracking defects for large-sized stamped parts. Comput. Ind. 2024, 159, 104105. [Google Scholar] [CrossRef]
- Bakirci, M. Enhancing vehicle detection in intelligent transportation systems via autonomous UAV platform and YOLOv8 integration. Appl. Soft Comput. 2024, 164, 112015. [Google Scholar] [CrossRef]
- Solimani, F.; Cardellicchio, A.; Dimauro, G.; Petrozza, A.; Summerer, S.; Cellini, F.; Renò, V. Optimizing tomato plant phenotyping detection: Boosting YOLOv8 architecture to tackle data complexity. Comput. Electron. Agric. 2024, 218, 108728. [Google Scholar] [CrossRef]
- Gu, Y.; Hong, R.; Cao, Y. Application of the YOLOv8 Model to a Fruit Picking Robot. In Proceedings of the 2024 IEEE 2nd International Conference on Control, Electronics and Computer Technology (ICCECT), Jilin, China, 26–28 April 2024; pp. 580–585. [Google Scholar]
- Jiang, Y.Q.; Tan, Z.Y.; Wang, J.Y.; Sun, X.Y.; Lin, M.; Lin, H. GiraffeDet: A heavy-neck paradigm for object detection. arXiv 2022, arXiv:2202.04256. [Google Scholar]
- Wang, C.C.; He, W.; Nie, Y.; Guo, J.Y.; Liu, C.J.; Wang, Y.H.; Han, K. Gold-YOLO: Efficient object detector via gather-and-distribute mechanism. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), New Orleans, LA, USA, 10–16 December 2023; pp. 51094–51112. [Google Scholar]
- Chen, Z.X.; He, Z.W.; Lu, Z.M. DEA-Net: Single image dehazing based on detail-enhanced convolution and content-guided attention. IEEE Trans. Image Process 2024, 33, 1002–1015. [Google Scholar] [CrossRef]
- Yang, L.X.; Zhang, R.Y.; Li, L.D.; Xie, X.H. SimAM: A simple, parameter-free attention module for convolutional neural networks. In Proceedings of the International Conference on Machine Learning (ICML), Online, 18–24 July 2021; pp. 11863–11874. [Google Scholar]
- Woo, S.; Park, J.; Lee, J.Y.; Kweon, I. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Hu, S.; Gao, F.; Zhou, X.W.; Dong, J.Y.; Du, Q. Hybrid Convolutional and Attention Network for Hyperspectral Image Denoising. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1–5. [Google Scholar] [CrossRef]
- Wang, J.Q.; Chen, K.; Xu, R.; Liu, Z.W.; Loy, C.; Lin, D.H. CARAFE: Content-aware reassembly of features. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 3007–3016. [Google Scholar]
- Xiao, F.; Wang, H.; Xu, Y.; Zhang, R. Fruit detection and recognition based on deep learning for automatic harvesting: An overview and review. Agronomy 2023, 13, 1625. [Google Scholar] [CrossRef]
- Liu, Y.; Zheng, H.T.; Zhang, Y.H.; Zhang, Q.J.; Chen, H.L.; Xu, X.Y.; Wang, G.Y. “Is this blueberry ripe?”: A blueberry ripeness detection algorithm for use on picking robots. Front. Plant Sci. 2023, 14, 1198650. [Google Scholar] [CrossRef]
- Zhang, B.; Xia, Y.Y.; Wang, R.R.; Wang, Y.; Yin, C.H.; Fu, M.; Fu, W. Recognition of mango and location of picking point on stem based on a multi-task CNN model named YOLOMS. Precis. Agric. 2024, 25, 1454–1476. [Google Scholar] [CrossRef]
- Hou, C.J.; Xu, J.L.; Tang, Y.; Zhuang, J.J.; Tan, Z.P.; Chen, W.L.; Wei, S.; Huang, H.S.; Fang, M.W. Detection and localization of citrus picking points based on binocular vision. Precis. Agric. 2024, 1–35. [Google Scholar] [CrossRef]
- ElBeheiry, N.; Balog, R. Technologies driving the shift to smart farming: A review. IEEE Sens. J. 2022, 23, 1752–1769. [Google Scholar] [CrossRef]
- Tang, Y.; Qiu, J.; Zhang, Y.; Wu, D.; Cao, Y.; Zhao, K.; Zhu, L. Optimization strategies of fruit detection to overcome the challenge of unstructured background in field orchard environment: A review. Precis. Agric. 2023, 24, 1183–1219. [Google Scholar] [CrossRef]
- Meng, F.; Li, J.H.; Zhang, Y.Q.; Qi, S.J.; Tang, Y.C. Transforming unmanned pineapple picking with spatio-temporal convolutional neural networks. Comput. Electron. Agric. 2023, 214, 108298. [Google Scholar] [CrossRef]
- Chen, J.Q.; Ma, A.Q.; Huang, L.X.; Li, H.W.; Zhang, H.Y.; Huang, Y.; Zhu, T.T. Efficient and lightweight grape and picking point synchronous detection model based on key point detection. Comput. Electron. Agric. 2024, 217, 108612. [Google Scholar] [CrossRef]
- Zhong, Z.Y.; Yun, L.J.; Cheng, F.Y.; Chen, Z.Q.; Zhang, C.J. Light-YOLO: A Lightweight and Efficient YOLO-Based Deep Learning Model for Mango Detection. Agriculture 2024, 14, 140. [Google Scholar] [CrossRef]
- Miranda, J.; Gené-Mola, J.; Zude-Sasse, M.; Tsoulias, N.; Escolà, A.; Arnó, J.; Rosell-Polo, J.; Sanz-Cortiella, R.; Martínez-Casasnovas, J.; Gregorio, E. Fruit sizing using AI: A review of methods and challenges. Postharvest Biol. Technol. 2023, 206, 112587. [Google Scholar] [CrossRef]
| Research | F1-Score | mAP@0.5 | Parameters | Inference Time | FLOPs |
|---|---|---|---|---|---|
| RC-YOLOv4 [24] | 88.5% | 94.4% | 61.22 M | – | – |
| THYOLO [26] | 93.5% | 96.9% (val) | 1.55 M | 42.5 ms (CPU) | 2.6 G |
| MHSA-YOLOv8 [28] | 80.6% | 86.4% | 11.37 M | – | 29.3 G |
| MTD-YOLO [29] | 87.8% | 86.6% | – | 4.9 ms (RTX 3080) | 103.3 G |
| RSR-YOLO [30] | 88.7% | 90.7% | – | 13.2 ms (TITAN RTX) | 16.9 G |
| Set | Mature Boxes | Stem Boxes | Raw Boxes | Number of Images |
|---|---|---|---|---|
| Train | 2496 | 2288 | 1156 | 2932 |
| Validation | 302 | 244 | 137 | 366 |
| Test | 326 | 290 | 113 | 367 |
| Total | 3124 | 2822 | 1406 | 3665 |
| Set | Mature Boxes | Stem Boxes | Raw Boxes | Number of Images |
|---|---|---|---|---|
| Train | 703 | 782 | 729 | 1166 |
| Validation | 87 | 101 | 87 | 145 |
| Test | 90 | 97 | 89 | 147 |
| Total | 880 | 980 | 905 | 1458 |
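Both datasets above follow an approximately 8:1:1 train/validation/test split at the image level. The snippet below is a minimal sketch of such a split; the directory path and fixed random seed are assumptions for illustration, not the authors' actual splitting script.

```python
# Minimal sketch of an 8:1:1 image-level dataset split like the one tabulated.
import random
from pathlib import Path

random.seed(0)  # assumed seed, for reproducibility of the illustration
images = sorted(Path("datasets/tomato/images").glob("*.jpg"))  # assumed layout
random.shuffle(images)

n = len(images)
train = images[: int(0.8 * n)]
val = images[int(0.8 * n) : int(0.9 * n)]
test = images[int(0.9 * n) :]
print(len(train), len(val), len(test))
```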
| Parameter | Selected Configuration |
|---|---|
| Number of epochs | 300 |
| Image dimensions | 640 × 640 |
| Batch size | 16 |
| Optimizer | SGD |
| Bounding-box loss function | CIoU |
| Initial learning rate (lr0) | 0.01 |
| Final learning rate (lrf) | 0.01 |
| Momentum | 0.937 |
| Weight decay | 0.0005 |
| Warmup epochs | 3.0 |
| Warmup momentum | 0.8 |
| Warmup bias learning rate | 0.1 |
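These hyperparameters mirror the arguments exposed by the Ultralytics training API, so a comparable run could plausibly be configured as below. This is a sketch assuming the standard `ultralytics` package and a hypothetical dataset config `tomato.yaml`; it is not the authors' actual training script.

```python
# Hedged mapping of the tabulated hyperparameters onto an Ultralytics-style
# training call. The model config and data YAML names are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.yaml")  # MTS-YOLO's custom config would replace this
model.train(
    data="tomato.yaml",       # assumed dataset config name
    epochs=300, imgsz=640, batch=16,
    optimizer="SGD", lr0=0.01, lrf=0.01,
    momentum=0.937, weight_decay=0.0005,
    warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1,
)
```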
| Model | Category | Precision | Recall | F1-Score | mAP@0.5 | mAP@0.5:0.95 |
|---|---|---|---|---|---|---|
| YOLOv8n | mature | 94.5% | 95.4% | 94.9% | 98.8% | 95.1% |
| YOLOv8n | stem | 83.9% | 66.4% | 74.1% | 77.1% | 28.7% |
| YOLOv8n | raw | 84.4% | 93.4% | 88.7% | 95.9% | 85.4% |
| YOLOv8n | all | 87.6% | 85.1% | 86.3% | 90.6% | 69.7% |
| MTS-YOLO (ours) | mature | 96.5% | 95.7% | 96.1% | 98.8% | 95.1% |
| MTS-YOLO (ours) | stem | 82.8% | 73.0% | 77.6% | 81.1% | 29.9% |
| MTS-YOLO (ours) | raw | 91.6% | 92.7% | 92.1% | 96.0% | 86.7% |
| MTS-YOLO (ours) | all | 90.3% | 87.1% | 88.7% | 92.0% | 70.6% |
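As a sanity check on the table, each F1-score is the harmonic mean of the tabulated precision and recall; for example, MTS-YOLO on the mature class:

```latex
F_1 = \frac{2PR}{P+R}
    = \frac{2 \times 0.965 \times 0.957}{0.965 + 0.957}
    \approx 0.961 = 96.1\%
```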
| Model | F1-Score | mAP@0.5 | mAP@0.5:0.95 | Parameters | Inference Time |
|---|---|---|---|---|---|
| YOLOv8-EfficientRepBiPAN | 87.0% | 91.5% | 70.2% | 2.58 M | 4.0 ms |
| YOLOv8-GDFPN | 88.0% | 90.6% | 69.9% | 3.11 M | 3.5 ms |
| YOLOv8-GoldYOLO | 86.8% | 89.9% | 69.1% | 5.70 M | 3.7 ms |
| YOLOv8-CGAFusion | 87.2% | 90.7% | 70.1% | 3.01 M | 3.5 ms |
| YOLOv8-HS-FPN | 86.8% | 91.0% | 69.5% | 1.84 M | 3.4 ms |
| MTS-YOLO-SimAM | 87.1% | 90.7% | 69.5% | 1.88 M | 3.4 ms |
| MTS-YOLO-CBAM | 87.1% | 90.8% | 69.7% | 1.96 M | 3.5 ms |
| MTS-YOLO-CAFM | 87.9% | 91.3% | 69.9% | 2.31 M | 3.8 ms |
| MTS-YOLO-CARAFE | 88.3% | 90.9% | 70.7% | 2.16 M | 3.7 ms |
| MTS-YOLO (ours) | 88.7% | 92.0% | 70.6% | 2.05 M | 3.1 ms |
| Model | Precision | Recall | F1-Score | mAP@0.5 | mAP@0.5:0.95 | Parameters | Inference Time | FLOPs |
|---|---|---|---|---|---|---|---|---|
| YOLOv5n | 88.1% | 85.0% | 86.5% | 90.0% | 69.8% | 2.39 M | 2.9 ms | 7.1 G |
| YOLOv6n | 87.4% | 85.4% | 86.3% | 89.3% | 69.7% | 4.04 M | 3.5 ms | 11.8 G |
| YOLOv7-tiny | 86.1% | 85.5% | 85.8% | 89.9% | 66.3% | 5.74 M | 6.7 ms | 13.0 G |
| YOLOv8n | 87.6% | 85.1% | 86.3% | 90.6% | 69.7% | 2.87 M | 3.6 ms | 8.1 G |
| YOLOv9t | 87.3% | 85.3% | 86.3% | 91.1% | 70.9% | 2.50 M | 7.4 ms | 10.7 G |
| YOLOv9t * | 87.3% | 85.3% | 86.3% | 91.1% | 70.9% | 1.79 M | 5.8 ms | 7.1 G |
| YOLOv10n | 83.6% | 85.1% | 84.3% | 87.4% | 68.0% | 2.57 M | 2.8 ms | 8.2 G |
| MTS-YOLO (ours) | 90.3% | 87.1% | 88.7% | 92.0% | 70.6% | 2.05 M | 3.1 ms | 6.8 G |
| Model | F1-Score | mAP@0.5 | mAP@0.5:0.95 | Parameters | Inference Time | FLOPs |
|---|---|---|---|---|---|---|
| YOLOv5s | 88.0% | 91.4% | 70.3% | 8.69 M | 3.4 ms | 23.8 G |
| YOLOv6s | 86.8% | 89.9% | 70.0% | 15.54 M | 3.9 ms | 44.0 G |
| YOLOv7 | 88.4% | 92.3% | 69.9% | 34.80 M | 10.8 ms | 103.2 G |
| YOLOv8s | 88.2% | 91.4% | 70.3% | 10.61 M | 3.5 ms | 28.4 G |
| YOLOv9s | 88.9% | 92.0% | 72.8% | 9.15 M | 9.6 ms | 38.7 G |
| YOLOv9s * | 88.9% | 92.0% | 72.8% | 6.75 M | 6.8 ms | 26.2 G |
| YOLOv10s | 87.6% | 89.8% | 67.3% | 7.66 M | 3.3 ms | 24.5 G |
| MTS-YOLO (ours) | 88.7% | 92.0% | 70.6% | 2.05 M | 3.1 ms | 6.8 G |
| Model | SFF | TSFF | DSFF | CAA | F1-Score | mAP@0.5 | mAP@0.5:0.95 | Parameters | Inference Time | FLOPs |
|---|---|---|---|---|---|---|---|---|---|---|
| YOLOv8n | | | | | 86.3% | 90.6% | 69.7% | 2.87 M | 3.6 ms | 8.1 G |
| A | ✓ | | | | 86.8% | 91.0% | 69.5% | 1.84 M | 3.4 ms | 6.8 G |
| B | | ✓ | | | 87.1% | 90.3% | 69.1% | 1.78 M | 2.9 ms | 6.3 G |
| C | | ✓ | ✓ | | 87.8% | 91.0% | 70.5% | 1.90 M | 3.4 ms | 6.5 G |
| D | ✓ | | | ✓ | 86.9% | 90.9% | 70.0% | 1.99 M | 3.6 ms | 7.1 G |
| E | | ✓ | | ✓ | 87.7% | 91.6% | 70.1% | 1.92 M | 3.1 ms | 6.5 G |
| MTS-YOLO (ours) | | ✓ | ✓ | ✓ | 88.7% | 92.0% | 70.6% | 2.05 M | 3.1 ms | 6.8 G |