Study on Utilizing Mask R-CNN for Phenotypic Estimation of Lettuce’s Growth Status and Optimal Harvest Timing
Abstract
1. Introduction
2. Materials and Methods
2.1. Datasets
2.2. Method (Improvement of the Model)
2.2.1. Backbone Network
2.2.2. Phenotype Traits Head
2.2.3. Experimental Environment and Training Strategies
2.2.4. Evaluation Index
3. Results
3.1. Results of the Proposed Method
3.2. The Application Results of Different Species
4. Discussion
4.1. Comparison of Different Backbone
4.2. Comparison of Phenotypic Traits Branching with Different Convolutional Layers
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Number of Experiments | Det AP | Det AP50 | Det AP75 | Seg AP | Seg AP50 | Seg AP75 |
|---|---|---|---|---|---|---|
| k1 | 0.8674 | 0.9947 | 0.9832 | 0.8797 | 0.9947 | 0.9865 |
| k2 | 0.8608 | 0.9887 | 0.9862 | 0.8721 | 0.9887 | 0.9862 |
| k3 | 0.8697 | 0.9997 | 0.9866 | 0.8818 | 0.9997 | 0.9948 |
| k4 | 0.8743 | 0.9993 | 0.9899 | 0.8851 | 0.9993 | 0.9993 |
| k5 | 0.8699 | 0.9998 | 0.9812 | 0.8832 | 0.9998 | 0.9998 |
| average | 0.8684 | 0.9964 | 0.9854 | 0.8803 | 0.9964 | 0.9933 |
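The "average" row of the cross-validation table can be reproduced directly from the per-fold scores. A minimal sketch in Python, with the AP values copied from the table above:

```python
# Per-fold AP values copied from the five-fold cross-validation table.
det_ap = [0.8674, 0.8608, 0.8697, 0.8743, 0.8699]
seg_ap = [0.8797, 0.8721, 0.8818, 0.8851, 0.8832]

def mean4(values):
    """Average a list of scores and round to 4 decimals, as in the table."""
    return round(sum(values) / len(values), 4)

print(mean4(det_ap))  # → 0.8684
print(mean4(seg_ap))
```

Straight rounding of the segmentation average gives 0.8804, which matches the RepVGG row of the backbone-comparison table; the cross-validation table itself reports 0.8803, a last-digit rounding difference.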
| Variety | fsw R² | fsw MAPE | dsw R² | dsw MAPE | h R² | h MAPE | d R² | d MAPE | la R² | la MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| Lugano | 0.9656 | 0.0903 | 0.9587 | 0.1202 | 0.9121 | 0.0836 | 0.9188 | 0.0531 | 0.9509 | 0.0873 |
| Salanova | 0.9466 | 0.1448 | 0.9406 | 0.2278 | 0.8717 | 0.0914 | 0.9151 | 0.0494 | 0.9545 | 0.1016 |
| Aphylion | 0.9586 | 0.0975 | 0.9661 | 0.1346 | 0.9547 | 0.0597 | 0.9112 | 0.0591 | 0.9651 | 0.0798 |
| Satine | 0.9647 | 0.0957 | 0.9670 | 0.1201 | 0.9355 | 0.0670 | 0.8600 | 0.0570 | 0.9494 | 0.0919 |
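The R² and MAPE columns in the phenotype tables follow their standard definitions: the coefficient of determination and the mean absolute percentage error. A minimal sketch, using toy ground-truth/predicted values rather than the paper's data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def mape(y_true, y_pred):
    """Mean absolute percentage error, as a fraction (0.10 == 10%)."""
    return sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy ground-truth vs. predicted values (hypothetical, for illustration only).
truth = [120.0, 150.0, 200.0, 90.0]
pred = [115.0, 160.0, 190.0, 95.0]
print(round(r_squared(truth, pred), 4))  # → 0.9621
print(round(mape(truth, pred), 4))       # → 0.0535
```

Note that MAPE is reported here as a fraction, consistent with the tables (e.g., 0.0903 corresponds to a 9.03% mean relative error).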
| Backbone | Det AP | Det AP50 | Det AP75 | Seg AP | Seg AP50 | Seg AP75 | Model Parameters (MB) | Inference Time (s) | FLOPs (G) |
|---|---|---|---|---|---|---|---|---|---|
| RepVGG | 0.8684 | 0.9964 | 0.9854 | 0.8804 | 0.9964 | 0.9933 | 127 | 0.0154 | 82.486 |
| MobileNet_V3 | 0.8165 | 0.9851 | 0.9587 | 0.8460 | 0.9852 | 0.9727 | 331 | 0.0155 | 67.684 |
| EfficientNet | 0.8489 | 0.9964 | 0.9836 | 0.8840 | 0.9964 | 0.9884 | 344.4 | 0.0169 | 70.193 |
| ResNet50 | 0.8236 | 0.9904 | 0.9672 | 0.8787 | 0.9904 | 0.9810 | 585.8 | 0.0233 | 122.350 |
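Per-image inference time, as reported in the backbone comparison, is typically measured by averaging wall-clock time over many forward passes after a few warm-up calls. A framework-agnostic sketch (the `dummy` callable below is a stand-in, not the paper's network):

```python
import time

def mean_inference_time(model, inputs, warmup=5):
    """Average wall-clock seconds per forward call, after warm-up passes."""
    for x in inputs[:warmup]:
        model(x)  # warm-up: exclude one-off setup costs from the timing
    start = time.perf_counter()
    for x in inputs:
        model(x)
    return (time.perf_counter() - start) / len(inputs)

# Stand-in "model": a trivial callable, only to exercise the timer.
dummy = lambda x: x * 2
t = mean_inference_time(dummy, list(range(1000)))
print(t >= 0.0)  # → True
```

For GPU models, the same pattern applies but device synchronization is needed before reading the clock, or the measured time reflects only kernel launch overhead.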
| Backbone | fsw R² | fsw MAPE | dsw R² | dsw MAPE | h R² | h MAPE | d R² | d MAPE | la R² | la MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| RepVGG | 0.9600 | 0.1073 | 0.9596 | 0.1522 | 0.9329 | 0.0757 | 0.9136 | 0.0548 | 0.9592 | 0.0899 |
| MobileNet_V3 | 0.9600 | 0.1196 | 0.9628 | 0.1569 | 0.9337 | 0.0780 | 0.9100 | 0.0574 | 0.9523 | 0.0987 |
| EfficientNet | 0.9587 | 0.1063 | 0.9596 | 0.1531 | 0.9252 | 0.0828 | 0.9041 | 0.0570 | 0.9549 | 0.0936 |
| ResNet50 | 0.9504 | 0.1193 | 0.9560 | 0.1676 | 0.9115 | 0.0890 | 0.8867 | 0.0647 | 0.9529 | 0.0981 |
| Number of Layers | Det AP | Det AP50 | Det AP75 | Seg AP | Seg AP50 | Seg AP75 |
|---|---|---|---|---|---|---|
| number = 6 | 0.8566 | 0.9951 | 0.9832 | 0.8734 | 0.9951 | 0.9886 |
| number = 8 | 0.8684 | 0.9964 | 0.9854 | 0.8803 | 0.9964 | 0.9933 |
| number = 10 | 0.8642 | 0.9941 | 0.9805 | 0.8781 | 0.9941 | 0.9887 |
| Number of Layers | fsw R² | fsw MAPE | dsw R² | dsw MAPE | h R² | h MAPE | d R² | d MAPE | la R² | la MAPE |
|---|---|---|---|---|---|---|---|---|---|---|
| number = 6 | 0.9558 | 0.1092 | 0.9589 | 0.1514 | 0.9168 | 0.0815 | 0.9119 | 0.0563 | 0.9549 | 0.0916 |
| number = 8 | 0.9600 | 0.1073 | 0.9596 | 0.1522 | 0.9329 | 0.0757 | 0.9136 | 0.0548 | 0.9592 | 0.0899 |
| number = 10 | 0.9570 | 0.1061 | 0.9599 | 0.1463 | 0.9211 | 0.0800 | 0.9020 | 0.0561 | 0.9590 | 0.0892 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hou, L.; Zhu, Y.; Wei, N.; Liu, Z.; You, J.; Zhou, J.; Zhang, J. Study on Utilizing Mask R-CNN for Phenotypic Estimation of Lettuce’s Growth Status and Optimal Harvest Timing. Agronomy 2024, 14, 1271. https://doi.org/10.3390/agronomy14061271