Automated Rice Phenology Stage Mapping Using UAV Images and Deep Learning
Abstract
1. Introduction
2. Materials and Methods
2.1. Experimental Sites and Equipment
2.2. Construction Process of Dataset
2.3. Multi-stage Rice Field Segmentation Model
2.3.1. Ghost Convolution Module (GCM)
2.3.2. Ghost Bilateral Network (GBiNet)
2.3.3. Experimental Setup and Parameters
2.4. Traits Locating and Mapping System
2.4.1. Direct Geo-Locating (DGL)
2.4.2. Incremental Sparse Sampling (ISS)
3. Results
3.1. Segmentation Model Performance
3.2. Direct Geo-Locating Accuracy
3.3. Rice Phenology Mapping
4. Discussion
4.1. Efficiency of GBiNet
4.2. Confusion Matrix and Classes Accuracy
4.3. Limitation and Future Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Algorithm A1. Incremental Sparse Sampling of UAV Image Patches
Input: I = {I1, …, IN} # I is the N-length image index; r and c are the row and column numbers of the patch set; k is the ratio that defines the minimum threshold distance dmin; e is the edge number of patches to be removed
Output: S # the selected patch set
1: S ← ∅
2: for i in 1…r do, for j in 1…c do
3:   add DGL(I1, i, j) to S # DGL(·) is the direct geo-locating of a patch in the image
4: for n in 2…N do
5:   C ← ∅
6:   for i in (1 + e)…(r − e) do, for j in (1 + e)…(c − e) do
7:     add DGL(In, i, j) to C
8:   end for
9:   D ← dist(C, S) # dist(·) is calculating the distance-matrix
10:  for p in C do
11:    if min D(p, ·) > k·dmin then
12:      append p to S
13:    end if
14:  end for
15: end for
16: return S
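Algorithm A1 can be sketched in Python as follows. The helper `geo_locate` stands in for the direct geo-locating (DGL) step, and the threshold definition (k times the spacing between two neighboring patch centers) is an illustrative assumption, not the authors' exact implementation.

```python
from math import dist

def incremental_sparse_sampling(images, rows, cols, k, e, geo_locate):
    """Greedily keep only patches farther than a threshold from all kept ones.

    images: sequence of image indices; the first image seeds the selection
    rows, cols: row and column numbers of the patch grid per image
    k: ratio defining the minimum threshold distance
    e: number of edge rows/columns of patches removed in later images
    geo_locate(img, i, j): returns the ground (x, y) of patch (i, j)
    """
    # Seed the selected set with every patch of the first image.
    selected = [geo_locate(images[0], i, j)
                for i in range(rows) for j in range(cols)]
    # Threshold distance, taken here as k times the spacing between two
    # neighboring patch centers (an assumed definition).
    d_min = k * dist(selected[0], selected[1])
    for img in images[1:]:
        # Drop e edge rows/columns, where inter-image patch overlap is largest.
        candidates = [geo_locate(img, i, j)
                      for i in range(e, rows - e) for j in range(e, cols - e)]
        for p in candidates:
            # Append only if p is farther than d_min from every kept patch.
            if min(dist(p, q) for q in selected) > d_min:
                selected.append(p)
    return selected
```

With a hypothetical `geo_locate` that shifts each image's patch grid on the ground, a small k keeps all non-edge candidates while a large k rejects them all, which is the intended sparsifying behavior.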
Index | GCP_name | GCP_loc | OBS_num | DEV_avg (m) |
---|---|---|---|---|
1 | lq1_1 | 28.21048270,121.04907560 | 7 | 0.22 |
2 | lq1_2 | 28.21044813,121.04889650 | 7 | 0.23 |
3 | lq1_3 | 28.21040046,121.04868980 | 8 | 0.24 |
4 | lq1_4 | 28.21034937,121.04865530 | 4 | 0.14 |
5 | lq1_5 | 28.21036053,121.04851120 | 8 | 0.20 |
6 | lq2_1 | 28.24022942,121.02827295 | 4 | 0.21 |
7 | lq2_2 | 28.24042392,121.02825991 | 4 | 0.19 |
8 | lq2_3 | 28.24053005,121.02825080 | 8 | 0.25 |
9 | lq2_4 | 28.24060837,121.02824626 | 8 | 0.20 |
10 | lq2_5 | 28.24065780,121.02824478 | 8 | 0.21 |
11 | sds_1 | 30.07501281,119.92426521 | 7 | 0.32 |
12 | sds_2 | 30.07498620,119.92427318 | 7 | 0.30 |
13 | sds_3 | 30.07500016,119.92423888 | 13 | 0.24 |
14 | sds_4 | 30.07501029,119.92420711 | 12 | 0.23 |
15 | sds_5 | 30.07498435,119.92420728 | 13 | 0.25 |
16 | sds_6 | 30.07480693,119.92382305 | 12 | 0.26 |
17 | sds_7 | 30.07480502,119.92378671 | 8 | 0.16 |
18 | sds_8 | 30.07485181,119.92380298 | 8 | 0.16 |
19 | sds_9 | 30.07489651,119.92381531 | 12 | 0.18 |
20 | sds_10 | 30.07489462,119.92378427 | 8 | 0.13 |
Sum/Avg | — | — | 166 | 0.21
Class columns give pixel numbers in millions.

Mission Code | Patch Number | Seedling | Jointing | Heading | Filling | Others
---|---|---|---|---|---|---
210604_djd1 | 200 | 208.8 | 0 | 0 | 0 | 149.1 |
210606_djd2 | 140 | 162.0 | 0 | 0 | 0 | 88.5 |
210909_djd3 | 200 | 0 | 244.4 | 0 | 0 | 113.4 |
210616_djd4 | 200 | 0 | 259.5 | 0 | 0 | 98.4 |
210718_djd5 | 300 | 34.3 | 169.0 | 0 | 186.2 | 147.3 |
210721_djd6 | 300 | 55.0 | 184.5 | 0 | 146.4 | 150.8 |
220628_qt1 | 300 | 87.3 | 6.5 | 281.4 | 2.5 | 159.0 |
220628_qt2 | 260 | 76.0 | 73.4 | 140.9 | 0 | 174.9 |
220712_lq1 | 100 | 0 | 0 | 2.0 | 117.7 | 59.2 |
220712_lq2 | 100 | 0 | 0 | 0 | 119.4 | 59.5 |
220713_ra1 | 100 | 0 | 0 | 0 | 119.1 | 59.8 |
220713_ra2 | 100 | 0 | 0 | 0 | 122.7 | 56.2 |
220727_sds | 140 | 0 | 168.6 | 0 | 0 | 81.9 |
220928_xs2 | 160 | 0 | 2.8 | 176.5 | 6.8 | 100.2 |
Sum | 2600 | 623.4 | 1108.7 | 600.8 | 820.8 | 1498.2 |
Pixel Ratio | - | 13% | 24% | 13% | 18% | 32% |
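The Pixel Ratio row follows directly from the column sums. A quick check of the rounding (values transcribed from the Sum row above), which also makes the class imbalance toward the Others class explicit:

```python
sums_mpx = {"Seedling": 623.4, "Jointing": 1108.7, "Heading": 600.8,
            "Filling": 820.8, "Others": 1498.2}  # million pixels (Sum row)
total = sum(sums_mpx.values())                   # 4651.9 Mpx in all
ratios = {c: round(100 * v / total) for c, v in sums_mpx.items()}
print(ratios)
# {'Seedling': 13, 'Jointing': 24, 'Heading': 13, 'Filling': 18, 'Others': 32}
```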
Computation/Parameters | Overall | Detail Branch | Semantic Branch | BGA Layer | Decode Head | Auxiliary Head
---|---|---|---|---|---|---
FLOPs (G) | 21.29 | 10.11 | 1.22 | 1.53 | 8.43 | 0.00
Proportion (%) | 100.0 | 47.5 | 5.7 | 7.2 | 39.6 | /
Weights (M) | 3.34 1 | 0.52 | 1.16 | 0.48 | 1.19 | 11.42
Proportion (%) | 100.0 | 15.5 | 34.7 | 14.3 | 35.4 | /
Stage | Input Shape | Operator | Number and Stride | Output Shape |
---|---|---|---|---|
Encoder—Detail Branch 1 | H × W × 3 | G-Block 1 | 2-1 | H/2 × W/2 × 64 |
Encoder—Detail Branch 2 | H/2 × W/2 × 64 | G-Block 1 | 2-1-1 | H/4 × W/4 × 64 |
Encoder—Detail Branch 3 | H/4 × W/4 × 64 | G-Block 1 | 2-1-1 | H/8 × W/8 × 128 |
Encoder—Semantic Branch 1 | H × W × 3 | Stem-Block 3 | 4 | H/4 × W/4 × 16 |
Encoder—Semantic Branch 3 | H/4 × W/4 × 16 | GE-Block 3 | 2-1 | H/8 × W/8 × 32 |
Encoder—Semantic Branch 4 | H/8 × W/8 × 32 | GE-Block 3 | 2-1 | H/16 × W/16 × 64 |
Encoder—Semantic Branch 5 | H/16 × W/16 × 64 | GE-Block 3 | 2-1-1-1 | H/32 × W/32 × 128 |
Encoder—Semantic Branch 5 | H/32 × W/32 × 128 | CE-Block 3 | 1 | H/32 × W/32 × 128
Encoder—Aggregation Layer | (H/8 × W/8 + H/32 × W/32) × 128 | BGA-Block 3 | 1 | H/8 × W/8 × 128 |
Decoder—Segmentation Head | H/8 × W/8 × 128 | GCN-Head 2 | 1 | H × W × 5 |
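The G-Blocks and GE-Blocks above replace standard convolutions with ghost convolutions. Following GhostNet's analysis, a ghost module with ratio r produces only 1/r of the output channels with an ordinary convolution and generates the rest with cheap depthwise operations, compressing parameters by roughly r. A sketch of that arithmetic (the d = 3 cheap-operation kernel is the GhostNet default, assumed here):

```python
def conv_params(c_in, c_out, k):
    # Parameter count of a standard k x k convolution (bias omitted).
    return c_in * c_out * k * k

def ghost_params(c_in, c_out, k, r, d=3):
    # Primary convolution yields c_out / r intrinsic channels; the other
    # (r - 1) * c_out / r "ghost" channels come from d x d depthwise ops.
    m = c_out // r
    return c_in * m * k * k + (r - 1) * m * d * d

std = conv_params(64, 128, 3)            # 73728
ghost = ghost_params(64, 128, 3, r=2)    # 37440
print(round(std / ghost, 2))             # 1.97, close to the ratio r = 2
```

This is why the GBiNet_r2/r4/r8 variants below shed FLOPs and weights roughly in proportion to r while the branch structure stays unchanged.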
Model | Weights (M) | FLOPs (G) | Speed (FPS) | mIoU-val % | aAcc-Test % | mIoU-Test % | DSW Speed (FPS) |
---|---|---|---|---|---|---|---|
pspnet_r18-d8 (2017) | 12.79 | 62.98 | 2.8 | 91.78 | 95.53 | 91.57 | 21.3 |
deeplabv3p_r18-d8 (2018) | 12.47 | 62.91 | 2.7 | 92.16 | 95.56 | 91.55 | 20.7 |
fcn_hr18s (2019) | 3.94 | 11.13 | 2.5 | 91.82 | 95.39 | 91.28 | 21.7 |
bisenetv2_fcn (2021) | 14.77 | 14.25 | 5.4 | 91.52 | 95.41 | 91.31 | 36.4 |
Model | Weights (M) | FLOPs (G) | DSW Speed (FPS) | mIoU-val | aAcc-Test | mIoU-Test |
---|---|---|---|---|---|---|
bisenetv2_fcn | 14.77 | 21.29 | 36.4 | 91.25 | 95.11 | 90.95 |
GBiNet_r2 | 13.93 | 12.22 | 41.0 | 91.64 | 95.43 | 91.50 |
GBiNet_r4 | 13.51 | 7.68 | 44.9 | 91.13 | 94.89 | 90.47 |
GBiNet_r8 | 13.30 | 5.41 | 46.8 | 90.92 | 94.93 | 90.56 |
GBiNet_64dx4_r2 | 3.51 | 3.03 | 47.9 | 90.74 | 94.79 | 90.26 |
GBiNet_64dx8_r4 | 3.34 | 2.24 | 52.3 | 90.90 | 94.80 | 90.40 |
GBiNet_t32dx2_r4 | 0.82 | 0.50 | 61.9 | 90.20 | 94.71 | 90.19 |
Computation/Parameters | Model | Overall | Detail Branch | Semantic Branch | BGA Layer | Decode Head | Auxiliary Head
---|---|---|---|---|---|---|---
FLOPs (G) | bisenetv2_fcn | 21.288 | 10.113 | 1.223 | 1.525 | 8.427 | 0
FLOPs (G) | GBiNet_r2 | 12.222 | 5.191 | 1.223 | 1.525 | 4.283 | 0
FLOPs (G) | GBiNet_r4 | 7.681 | 2.730 | 1.223 | 1.525 | 2.203 | 0
FLOPs (G) | GBiNet_r8 | 5.412 | 1.500 | 1.223 | 1.525 | 1.164 | 0
FLOPs (G) | GBiNet_64dx8_r4 | 2.443 | 0.749 | 0.731 | 0.385 | 0.578 | 0
FLOPs (G) | GBiNet_t32dx2_r4 | 0.499 | 0.182 | 0.180 | 0.098 | 0.039 | 0
Weights (M) | bisenetv2_fcn | 3.343 | 0.519 | 1.160 | 0.479 | 1.185 | 11.421
Weights (M) | GBiNet_r2 | 2.504 | 0.263 | 1.160 | 0.479 | 0.602 | 11.421
Weights (M) | GBiNet_r4 | 2.084 | 0.136 | 1.160 | 0.479 | 0.309 | 11.421
Weights (M) | GBiNet_r8 | 1.874 | 0.072 | 1.160 | 0.479 | 0.163 | 11.421
Weights (M) | GBiNet_64dx8_r4 | 0.577 | 0.036 | 0.339 | 0.121 | 0.081 | 2.902
Weights (M) | GBiNet_t32dx2_r4 | 0.091 | 0.006 | 0.049 | 0.031 | 0.005 | 0.726
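In each row, the Overall figure is the sum of the four inference-time components; the auxiliary head contributes no inference FLOPs, consistent with BiSeNetV2-style booster heads being used only during training. A quick consistency check on three of the rows:

```python
# (overall, [detail branch, semantic branch, BGA layer, decode head]) in GFLOPs,
# values transcribed from the table above.
flops = {
    "bisenetv2_fcn": (21.288, [10.113, 1.223, 1.525, 8.427]),
    "GBiNet_r4": (7.681, [2.730, 1.223, 1.525, 2.203]),
    "GBiNet_t32dx2_r4": (0.499, [0.182, 0.180, 0.098, 0.039]),
}
for name, (overall, parts) in flops.items():
    # The components add up to the overall cost within float tolerance.
    assert abs(sum(parts) - overall) < 1e-9, name
```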
IoU per class (%); the Pixel Ratio row gives each class's share of labeled pixels.

Model | Seedling | Jointing | Heading | Filling | Others
---|---|---|---|---|---
Pixel Ratio | 13% | 24% | 13% | 18% | 32%
bisenetv2_fcn | 91.79 | 24% | 88.20 | 94.38 | 87.54 |
GBiNet_r2 | 91.62 | 93.43 | 89.85 | 94.31 | 88.29 |
GBiNet_r4 | 91.29 | 92.21 | 87.18 | 94.21 | 87.48 |
GBiNet_r8 | 90.49 | 92.59 | 88.67 | 93.58 | 87.46 |
GBiNet_64dx8_r4 | 89.98 | 91.90 | 89.39 | 93.74 | 87.01 |
GBiNet_64dx4_r2 | 90.42 | 91.82 | 87.72 | 93.89 | 87.45 |
GBiNet_t32dx2_r4 | 88.83 | 92.31 | 89.90 | 93.31 | 86.61 |
Average | 90.63 | 79.21 | 88.70 | 93.92 | 87.41 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Lu, X.; Zhou, J.; Yang, R.; Yan, Z.; Lin, Y.; Jiao, J.; Liu, F. Automated Rice Phenology Stage Mapping Using UAV Images and Deep Learning. Drones 2023, 7, 83. https://doi.org/10.3390/drones7020083