Detection and Counting of Maize Leaves Based on Two-Stage Deep Learning with UAV-Based RGB Image
Abstract
1. Introduction
2. Materials and Methods
2.1. UAV Image Acquisition and Preprocessing
2.2. Overall Design of Leaf Counting
2.3. Instance Segmentation Model of Maize Seedlings
2.4. Object Detection Model of Maize Leaves
2.5. Parameter Setting for Training
2.6. Evaluation Metrics
3. Results
3.1. Instance Segmentation Results of Maize Seedlings
3.2. Object Detection of Maize Leaves with Different Models
3.3. Counting Results of Maize Leaves
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Chen, F.Q.; Ji, X.Z.; Bai, M.X.; Zhuang, Z.L.; Peng, Y.L. Network Analysis of Different Exogenous Hormones on the Regulation of Deep Sowing Tolerance in Maize Seedlings. Front. Plant Sci. 2021, 12, 739101.
- Fan, J.H.; Zhou, J.; Wang, B.W.; de Leon, N.; Kaeppler, S.M.; Lima, D.C.; Zhang, Z. Estimation of Maize Yield and Flowering Time Using Multi-Temporal UAV-Based Hyperspectral Data. Remote Sens. 2022, 14, 3052.
- Chen, S.; Liu, W.H.; Feng, P.Y.; Ye, T.; Ma, Y.C.; Zhang, Z. Improving Spatial Disaggregation of Crop Yield by Incorporating Machine Learning with Multisource Data: A Case Study of Chinese Maize Yield. Remote Sens. 2022, 14, 2340.
- Zermas, D.; Morellas, V.; Mulla, D.; Papanikolopoulos, N. 3D model processing for high throughput phenotype extraction—The case of corn. Comput. Electron. Agric. 2020, 172, 105047.
- Li, Z.B.; Guo, R.H.; Li, M.; Chen, Y.R.; Li, G.Y. A review of computer vision technologies for plant phenotyping. Comput. Electron. Agric. 2020, 176, 105672.
- Rabab, S.; Badenhorst, P.; Chen, Y.P.; Daetwyler, H.D. A template-free machine vision-based crop row detection algorithm. Precis. Agric. 2021, 22, 124–153.
- Roth, L.; Barendregt, C.; Bétrix, C.; Hund, A.; Walter, A. High-throughput field phenotyping of soybean: Spotting an ideotype. Remote Sens. Environ. 2022, 269, 112797.
- Bouguettaya, A.; Zarzour, H.; Kechida, A.; Taberkit, A.M. Vehicle Detection From UAV Imagery with Deep Learning: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–21.
- Ji, Y.S.; Chen, Z.; Cheng, Q.; Liu, R.; Li, M.W.; Yan, X.; Li, G.; Wang, D.; Fu, L.; Ma, Y.; et al. Estimation of plant height and yield based on UAV imagery in faba bean (Vicia faba L.). Plant Methods 2022, 18, 26.
- Jiang, Z.; Tu, H.F.; Bai, B.W.; Yang, C.H.; Zhao, B.Q.; Guo, Z.Y.; Liu, Q.; Zhao, H.; Yang, W.N.; Xiong, L.Z.; et al. Combining UAV-RGB high-throughput field phenotyping and genome-wide association study to reveal genetic variation of rice germplasms in dynamic response to drought stress. New Phytol. 2021, 232, 440–455.
- Li, L.L.; Qiao, J.W.; Yao, J.; Li, J.; Li, L. Automatic freezing-tolerant rapeseed material recognition using UAV images and deep learning. Plant Methods 2022, 18, 5.
- Alzadjali, A.; Alali, M.H.; Veeranampalayam Sivakumar, A.N.; Deogun, J.S.; Scott, S.; Schnable, J.C.; Shi, Y. Maize Tassel Detection from UAV Imagery Using Deep Learning. Front. Robot. AI 2021, 8, 600410.
- Barreto, A.; Lottes, P.; Ispizua Yamati, F.R.; Baumgarten, S.; Wolf, N.A.; Stachniss, C.; Mahlein, A.; Paulus, S. Automatic UAV-based counting of seedlings in sugar-beet field and extension to maize and strawberry. Comput. Electron. Agric. 2021, 191, 106493.
- Liu, S.B.; Yin, D.M.; Feng, H.K.; Li, Z.H.; Xu, X.B.; Shi, L.; Jin, X.L. Estimating maize seedling number with UAV RGB images and advanced image processing methods. Precis. Agric. 2022, 23, 1604–1632.
- Kienbaum, L.; Correa Abondano, M.; Blas, R.; Schmid, K. DeepCob: Precise and high-throughput analysis of maize cob geometry using deep learning with an application in genebank phenomics. Plant Methods 2021, 17, 1–19.
- Kang, J.; Liu, L.T.; Zhang, F.C.; Shen, C.; Wang, N.; Shao, L.M. Semantic segmentation model of cotton roots in-situ image based on attention mechanism. Comput. Electron. Agric. 2021, 189, 106370.
- Mei, W.Y.; Wang, H.Y.; Fouhey, D.; Zhou, W.Q.; Hinks, I.; Gray, J.M.; Van Berkel, D.; Jain, M. Using Deep Learning and Very-High-Resolution Imagery to Map Smallholder Field Boundaries. Remote Sens. 2022, 14, 3046.
- Xu, H.; Blonder, B.; Jodra, M.; Malhi, Y.; Fricker, M. Automated and accurate segmentation of leaf venation networks via deep learning. New Phytol. 2021, 229, 631–648.
- Yang, S.; Zheng, L.H.; Yang, H.J.; Zhang, M.; Wu, T.T.; Sun, S.; Tomasetto, F.; Wang, M.J. A synthetic datasets based instance segmentation network for High-throughput soybean pods phenotype investigation. Expert Syst. Appl. 2022, 192, 116403.
- Zhang, W.L.; Wang, J.Q.; Liu, Y.X.; Chen, K.Z.; Li, H.B.; Duan, Y.L.; Wu, W.B.; Shi, Y.; Guo, W. Deep-learning-based in-field citrus fruit detection and tracking. Hortic. Res. 2022, 9, uhac003.
- Wen, C.J.; Wu, J.S.; Chen, H.R.; Su, H.Q.; Chen, X.; Li, Z.S.; Yang, C. Wheat Spike Detection and Counting in the Field Based on SpikeRetinaNet. Front. Plant Sci. 2022, 13, 821717.
- Wang, H.J.; Lin, Y.Y.; Xu, X.J.; Chen, Z.Y.; Wu, Z.H.; Tang, Y.C. A Study on Long-Close Distance Coordination Control Strategy for Litchi Picking. Agronomy 2022, 12, 1520.
- Tang, Y.C.; Zhou, H.; Wang, H.J.; Zhang, Y.Q. Fruit detection and positioning technology for a Camellia oleifera C. Abel orchard based on improved YOLOv4-tiny model and binocular stereo vision. Expert Syst. Appl. 2023, 211, 118573.
- Tang, Y.C.; Chen, M.Y.; Wang, C.L.; Luo, L.F.; Li, J.H.; Lian, G.P.; Zou, X.J. Recognition and Localization Methods for Vision-Based Fruit Picking Robots: A Review. Front. Plant Sci. 2020, 11, 510.
- Liu, Y.L.; Cen, C.J.; Che, Y.P.; Ke, R.; Ma, Y.; Ma, Y.T. Detection of Maize Tassels from UAV RGB Imagery with Faster R-CNN. Remote Sens. 2020, 12, 338.
- Ngugi, L.C.; Abdelwahab, M.; Abo-Zahhad, M. Tomato leaf segmentation algorithms for mobile phone applications using deep learning. Comput. Electron. Agric. 2020, 178, 105788.
- Ma, X.; Deng, X.W.; Qi, L.; Jiang, Y.; Li, H.W.; Wang, Y.W.; Xing, X.P. Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields. PLoS ONE 2019, 14, e215676.
- Gan, H.M.; Ou, M.Q.; Li, C.P.; Wang, X.R.; Guo, J.F.; Mao, A.X.; Camila Ceballos, M.; Parsons, T.D.; Liu, K.; Xue, Y.J. Automated detection and analysis of piglet suckling behaviour using high-accuracy amodal instance segmentation. Comput. Electron. Agric. 2022, 199, 107162.
- Mendoza, A.; Trullo, R.; Wielhorski, Y. Descriptive modeling of textiles using FE simulations and deep learning. Compos. Sci. Technol. 2021, 213, 108897.
- Wagner, F.H.; Dalagnol, R.; Tarabalka, Y.; Segantine, T.Y.F.; Thomé, R.; Hirye, M.C.M. U-Net-Id, an Instance Segmentation Model for Building Extraction from Satellite Images—Case Study in the Joanópolis City, Brazil. Remote Sens. 2020, 12, 1544.
- Jia, W.K.; Tian, Y.Y.; Luo, R.; Zhang, Z.H.; Lian, J.; Zheng, Y.J. Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot. Comput. Electron. Agric. 2020, 172, 105380.
- Soetedjo, A.; Hendriarianti, E. Plant Leaf Detection and Counting in a Greenhouse during Day and Nighttime Using a Raspberry Pi NoIR Camera. Sensors 2021, 21, 6659.
- Vishal, M.K.; Banerjee, B.; Saluja, R.; Raju, D.; Chinnusamy, V.; Kumar, S.; Sahoo, R.N.; Adinarayana, J. Leaf Counting in Rice (Oryza sativa L.) Using Object Detection: A Deep Learning Approach. In Proceedings of the IGARSS 2020—2020 IEEE International Geoscience and Remote Sensing Symposium, Waikoloa, HI, USA, 26 September–2 October 2020; pp. 5286–5289.
- Dobrescu, A.; Giuffrida, M.V.; Tsaftaris, S.A. Doing More with Less: A Multitask Deep Learning Approach in Plant Phenotyping. Front. Plant Sci. 2020, 11, 141–151.
- Miao, C.Y.; Guo, A.; Thompson, A.M.; Yang, J.L.; Ge, Y.F.; Schnable, J.C. Automation of leaf counting in maize and sorghum using deep learning. Plant Phenome J. 2021, 4, e20022.
- Wang, D.D.; He, D.J. Channel pruned YOLO V5s-based deep learning approach for rapid and accurate apple fruitlet detection before fruit thinning. Biosyst. Eng. 2021, 210, 271–281.
- Qi, X.K.; Dong, J.S.; Lan, Y.B.; Zhu, H. Method for Identifying Litchi Picking Position Based on YOLOv5 and PSPNet. Remote Sens. 2022, 14, 2004.
- Zhao, J.Q.; Zhang, X.H.; Yan, J.W.; Qiu, X.L.; Yao, X.; Tian, Y.C.; Zhu, Y.; Cao, W.X. A Wheat Spike Detection Method in UAV Images Based on Improved YOLOv5. Remote Sens. 2021, 13, 3095.
- Weyler, J.; Milioto, A.; Falck, T.; Behley, J.; Stachniss, C. Joint Plant Instance Detection and Leaf Count Estimation for In-Field Plant Phenotyping. IEEE Robot. Autom. Lett. 2021, 6, 3599–3606.
- Wang, C.S.; Du, P.F.; Wu, H.R.; Li, J.X.; Zhao, C.J.; Zhu, H. A cucumber leaf disease severity classification method based on the fusion of DeepLabV3+ and U-Net. Comput. Electron. Agric. 2021, 189, 106373.
- Su, W.H.; Zhang, J.J.; Yang, C.; Page, R.; Szinyei, T.; Hirsch, C.D.; Steffenson, B.J. Automatic Evaluation of Wheat Resistance to Fusarium Head Blight Using Dual Mask-RCNN Deep Learning Frameworks in Computer Vision. Remote Sens. 2021, 13, 26.
- Liu, B.Y.; Fan, K.J.; Su, W.H.; Peng, Y.K. Two-Stage Convolutional Neural Networks for Diagnosing the Severity of Alternaria Leaf Blotch Disease of the Apple Tree. Remote Sens. 2022, 14, 2519.
- Wkentaro. Labelme. Available online: https://github.com/wkentaro/labelme (accessed on 20 August 2021).
- Tzutalin. LabelImg. Available online: https://github.com/tzutalin/labelImg (accessed on 1 February 2022).
- MMDetection Contributors. OpenMMLab Detection Toolbox and Benchmark [Computer Software]. Available online: https://github.com/open-mmlab/mmdetection (accessed on 4 January 2022).
- Ultralytics. YOLOv5. Available online: https://github.com/ultralytics/yolov5/tree/v6.0 (accessed on 28 February 2022).
- He, K.M.; Gkioxari, G.; Dollar, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397.
- Long, J.; Shelhamer, E.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
- Ren, S.Q.; He, K.M.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149.
- He, K.M.; Zhang, X.Y.; Ren, S.Q.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
- Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.M.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017), Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
- Qi, J.T.; Liu, X.N.; Liu, K.; Xu, F.R.; Guo, H.; Tian, X.L.; Li, M.; Bao, Z.Y.; Li, Y. An improved YOLOv5 model based on visual attention mechanism: Application to recognition of tomato virus disease. Comput. Electron. Agric. 2022, 194, 106780.
- Gu, W.C.; Bai, S.; Kong, L.X. A review on 2D instance segmentation based on deep neural networks. Image Vision Comput. 2022, 120, 104401.
- Minaee, S.; Boykov, Y.Y.; Porikli, F.; Plaza, A.J.; Kehtarnavaz, N.; Terzopoulos, D. Image Segmentation Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3523–3542.
- Lalit, M.; Tomancak, P.; Jug, F. EmbedSeg: Embedding-based Instance Segmentation for Biomedical Microscopy Data. Med. Image Anal. 2022, 81, 102523.
- Shen, L.; Chen, S.; Mi, Z.W.; Su, J.Y.; Huang, R.; Song, Y.Y.; Fang, Y.L.; Su, B.F. Identifying veraison process of colored wine grapes in field conditions combining deep learning and image analysis. Comput. Electron. Agric. 2022, 200, 107268.
- Zu, L.L.; Zhao, Y.P.; Liu, J.Q.; Su, F.; Zhang, Y.; Liu, P.Z. Detection and Segmentation of Mature Green Tomatoes Based on Mask R-CNN with Automatic Image Acquisition Approach. Sensors 2021, 21, 7842.
- Liu, Y.; Zhang, Z.L.; Liu, X.; Wang, L.; Xia, X.H. Efficient image segmentation based on deep learning for mineral image classification. Adv. Powder Technol. 2021, 32, 3885–3903.
- Xiao, J.X.; Liu, G.; Wang, K.J.; Si, Y.S. Cow identification in free-stall barns based on an improved Mask R-CNN and an SVM. Comput. Electron. Agric. 2022, 194, 106738.
- Junior, L.C.M.; Alfredo, C.; Ulson, J.A.C. Real Time Weed Detection using Computer Vision and Deep Learning. In Proceedings of the 2021 14th IEEE International Conference on Industry Applications (INDUSCON), São Paulo, Brazil, 15–18 August 2021.
- Chen, Y.C.; Liu, W.B.; Zhang, J.Y. An Enhanced YOLOv5 Model with Attention Module for Vehicle-Pedestrian Detection. In Proceedings of the 2022 IEEE 31st International Symposium on Industrial Electronics (ISIE), Anchorage, AK, USA, 1–3 June 2022.
- Chen, R.N.; Ma, Y.X.; Liu, L.J.; Chen, N.L.; Cui, Z.M.; Wei, G.D.; Wang, W.P. Semi-supervised anatomical landmark detection via shape-regulated self-training. Neurocomputing 2022, 471, 335–345.
- Liu, Y.; Wang, C.Q.; Zhou, Y.J. Camouflaged people detection based on a semi-supervised search identification network. Def. Technol. 2021; in press.
Model | Batch Size | Learning Rate | Optimizer | Weight Decay | Momentum | Epochs
---|---|---|---|---|---|---
Mask R-CNN | 12 | 0.02 | SGD | 0.0001 | 0.9 | 100
YOLOv5 | 16 | 0.01 | SGD | 0.0005 | 0.937 | 300
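The Weight Decay and Momentum columns above parameterize the SGD optimizer used for both models. As a minimal illustration (not the authors' code), the update rule these hyperparameters control can be written out by hand for a scalar weight, using the YOLOv5 row's values as defaults:

```python
def sgd_step(w, grad, velocity, lr=0.01, momentum=0.937, weight_decay=0.0005):
    """One SGD update with momentum and L2 weight decay on a scalar weight.

    Returns the updated weight and the updated momentum buffer.
    """
    g = grad + weight_decay * w         # weight decay folded into the gradient
    velocity = momentum * velocity + g  # momentum accumulation
    return w - lr * velocity, velocity

# One step from w = 1.0 with gradient 0.2 and an empty momentum buffer.
w, v = sgd_step(1.0, grad=0.2, velocity=0.0)
```

In practice both frameworks (MMDetection for Mask R-CNN, the Ultralytics repository for YOLOv5) apply this rule per parameter tensor; the scalar version above only makes the roles of the three table columns explicit.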
Backbone | Loss Function | Bbox mAP0.50 (%) | Mask mAP0.50 (%) | Bbox mAP0.75 (%) | Mask mAP0.75 (%) | Bbox mAP0.5:0.95 (%) | Mask mAP0.5:0.95 (%) | Bbox Time (s/img) | Mask Time (s/img)
---|---|---|---|---|---|---|---|---|---
ResNet-50 | L1 | 95.7 | 93.9 | 83.7 | 55.8 | 71.5 | 50.9 | 0.06 | 0.11
ResNet-101 | L1 | 93.6 | 93.3 | 84.3 | 56.4 | 73.0 | 51.4 | 0.06 | 0.08
ResNet-50 | SmoothLR | 96.9 | 95.2 | 82.6 | 60.0 | 71.4 | 51.9 | 0.05 | 0.07
ResNet-101 | SmoothLR | 94.9 | 93.5 | 87.7 | 59.4 | 74.6 | 52.0 | 0.05 | 0.08
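The mAP0.50, mAP0.75, and mAP0.5:0.95 columns above differ only in the intersection-over-union (IoU) threshold at which a predicted box or mask counts as a match. A minimal sketch of the box-IoU computation underlying these thresholds (standard COCO-style evaluation, not the authors' code):

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])  # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

mAP0.50 accepts any prediction with IoU ≥ 0.5 against a ground-truth instance, mAP0.75 requires IoU ≥ 0.75, and mAP0.5:0.95 averages AP over thresholds from 0.5 to 0.95 in steps of 0.05, which is why the 0.5:0.95 values are the lowest in each row.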
Model | Precision: Fully Unfolded (%) | Precision: Newly Appeared (%) | Recall: Fully Unfolded (%) | Recall: Newly Appeared (%) | AP: Fully Unfolded (%) | AP: Newly Appeared (%) | mAP (%)
---|---|---|---|---|---|---|---
YOLOv5n | 91.4 | 65.9 | 84.8 | 49.9 | 89.4 | 53.3 | 71.4
YOLOv5s | 87.8 | 58.6 | 86.5 | 48.3 | 89.2 | 48.6 | 68.9
YOLOv5m | 90.7 | 61.9 | 87.1 | 54.2 | 89.9 | 54.3 | 72.1
YOLOv5l | 91.2 | 63.8 | 85.7 | 55.8 | 89.7 | 57.3 | 73.5
YOLOv5x | 92.0 | 68.8 | 84.4 | 50.0 | 89.6 | 54.0 | 71.8
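Consistent with the rows above, the mAP column is the unweighted mean of the two per-class AP values (fully unfolded and newly appeared leaves). A one-line sketch, checked against the YOLOv5x row:

```python
def mean_ap(per_class_aps):
    """mAP as the unweighted mean of per-class average precisions."""
    return sum(per_class_aps) / len(per_class_aps)

# YOLOv5x row: AP 89.6% (fully unfolded) and 54.0% (newly appeared)
mean_ap([89.6, 54.0])  # matches the tabulated 71.8% mAP
```

The same arithmetic reproduces every row, e.g. (89.4 + 53.3) / 2 ≈ 71.4 for YOLOv5n.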
Model | Precision: Fully Unfolded (%) | Precision: Newly Appeared (%) | Recall: Fully Unfolded (%) | Recall: Newly Appeared (%) | AP: Fully Unfolded (%) | AP: Newly Appeared (%) | mAP (%)
---|---|---|---|---|---|---|---
Faster R-CNN | 40.7 | 18.2 | 79.9 | 48.1 | 67.3 | 17.3 | 42.3
SSD | 97.2 | 83.3 | 57.8 | 8.2 | 72.7 | 18.1 | 45.4
YOLOv5x | 92.0 | 68.8 | 84.4 | 50.0 | 89.6 | 54.0 | 71.8
Model | Images | Diff. −2 | Diff. −1 | Diff. 0 | Diff. +1 | Diff. +2 | Accuracy Rate
---|---|---|---|---|---|---|---
YOLOv5n | 170 | 1 | 20 | 113 | 30 | 6 | 66.5%
YOLOv5s | 170 | 2 | 28 | 117 | 19 | 4 | 68.8%
YOLOv5m | 170 | 3 | 26 | 116 | 23 | 2 | 68.2%
YOLOv5l | 170 | 2 | 25 | 124 | 18 | 1 | 72.9%
YOLOv5x | 170 | 2 | 16 | 124 | 28 | — | 72.9%
Model | Images | Diff. −1 | Diff. 0 | Diff. +1 | Accuracy Rate
---|---|---|---|---|---
YOLOv5n | 170 | 9 | 119 | 42 | 70.0%
YOLOv5s | 170 | 9 | 115 | 46 | 67.6%
YOLOv5m | 170 | 5 | 128 | 37 | 75.3%
YOLOv5l | 170 | 18 | 125 | 27 | 73.5%
YOLOv5x | 170 | 11 | 128 | 31 | 75.3%
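In the two counting tables above, each "differential value" is the predicted leaf count minus the ground-truth count for one image, and the accuracy rate is the fraction of images with a difference of exactly zero. A small sketch of that computation (not the authors' code), checked against the YOLOv5l row of the first counting table:

```python
from collections import Counter

def counting_accuracy(diffs):
    """Percentage of images whose predicted minus true leaf count is zero."""
    hist = Counter(diffs)
    return 100.0 * hist[0] / len(diffs)

# YOLOv5l row: 2, 25, 124, 18, 1 images at differences -2..+2 (170 total)
diffs = [-2] * 2 + [-1] * 25 + [0] * 124 + [1] * 18 + [2] * 1
counting_accuracy(diffs)  # reproduces the tabulated 72.9%
```

The same rule reproduces the other rows, e.g. 113 / 170 ≈ 66.5% for YOLOv5n.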
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Xu, X.; Wang, L.; Shu, M.; Liang, X.; Ghafoor, A.Z.; Liu, Y.; Ma, Y.; Zhu, J. Detection and Counting of Maize Leaves Based on Two-Stage Deep Learning with UAV-Based RGB Image. Remote Sens. 2022, 14, 5388. https://doi.org/10.3390/rs14215388