Detection and Mapping of Chestnut Using Deep Learning from High-Resolution UAV-Based RGB Imagery
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Site
2.2. Acquisition and Annotation of Images
2.2.1. Field Data
2.2.2. UAV Data Acquisition and Processing
2.2.3. Manual Data Labeling
2.3. Experiment Design
2.3.1. Semantic Segmentation Model
2.3.2. Model Training and Application
- (1) Preparation of the training dataset
- (2) Model training
- (3) Model application
2.3.3. Model Accuracy Evaluation
3. Results
3.1. Results of Chestnut Segmentation
3.2. Impact of Backbone Networks ResNet-34 and ResNet-50 on Model Performance
3.3. Evaluation of Model Generalization on Different Test Datasets
4. Discussion
4.1. Semantic Segmentation
4.2. Model Performance
4.3. Distribution of Sample Data
4.4. Portability across Data Types
4.5. Limitations and Practical Considerations
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| Semantic Segmentation Model | Backbone Network | Producer's Accuracy (%) | User's Accuracy (%) | F1 Score (%) |
|---|---|---|---|---|
| U-Net | ResNet-34 | 80.91 | 89.15 | 84.78 |
| U-Net | ResNet-50 | 82.31 | 88.13 | 85.08 |
| DeepLab V3 | ResNet-34 | 83.77 | 88.63 | 86.13 |
| DeepLab V3 | ResNet-50 | 79.85 | 90.51 | 84.75 |
| PSPNet | ResNet-34 | 77.69 | 89.60 | 83.16 |
| PSPNet | ResNet-50 | 78.06 | 88.26 | 82.67 |
| DeepLab V3+ | ResNet-34 | 71.39 | 88.47 | 78.69 |
| DeepLab V3+ | ResNet-50 | 82.20 | 87.60 | 84.78 |
| Semantic Segmentation Model | Test Data | Producer's Accuracy (%) | User's Accuracy (%) | F1 Score (%) |
|---|---|---|---|---|
| U-Net | Test data DP | 86.84 | 90.11 | 88.44 |
| U-Net | Test data SP | 76.38 | 87.17 | 81.41 |
| DeepLab V3 | Test data DP | 86.12 | 91.04 | 88.51 |
| DeepLab V3 | Test data SP | 77.50 | 88.09 | 82.37 |
| PSPNet | Test data DP | 85.46 | 91.01 | 88.13 |
| PSPNet | Test data SP | 70.29 | 86.85 | 77.69 |
| DeepLab V3+ | Test data DP | 85.18 | 90.64 | 87.80 |
| DeepLab V3+ | Test data SP | 68.41 | 85.43 | 75.67 |
| Model | Training Data | Test Data | Producer's Accuracy (%) | User's Accuracy (%) | F1 Score (%) |
|---|---|---|---|---|---|
| Model DP | DP | Test data DP | 88.53 | 86.61 | 87.56 |
| Model DP | DP | Test data SP | 76.28 | 78.46 | 77.36 |
| Model DP | DP | Mean | 82.41 | 82.54 | 82.46 |
| Model SP | SP | Test data DP | 85.28 | 88.35 | 86.79 |
| Model SP | SP | Test data SP | 78.15 | 83.73 | 80.84 |
| Model SP | SP | Mean | 81.72 | 86.04 | 83.81 |
| Model Mix | Mix | Test data DP | 86.47 | 91.06 | 88.71 |
| Model Mix | Mix | Test data SP | 81.06 | 86.20 | 83.55 |
| Model Mix | Mix | Mean | 83.77 | 88.63 | 86.13 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Sun, Y.; Hao, Z.; Guo, Z.; Liu, Z.; Huang, J. Detection and Mapping of Chestnut Using Deep Learning from High-Resolution UAV-Based RGB Imagery. Remote Sens. 2023, 15, 4923. https://doi.org/10.3390/rs15204923