Individual Tree Species Identification Based on a Combination of Deep Learning and Traditional Features
Abstract
1. Introduction
2. Data
2.1. Study Area
2.2. Experimental Data
2.3. Data Preprocessing
2.4. Sample Set Construction and Enhancement
3. Model Structure
3.1. Basic Network
3.2. Explore Module
3.2.1. Feature Selection
3.2.2. Explore Module Construction
3.3. Whole Model Construction
3.4. Experimental Settings
3.5. Accuracy Assessment
4. Results
4.1. Training and Validation Accuracies
4.2. Identification Accuracy Evaluation
4.3. ITS Classification Map of Study Areas
4.3.1. ITS Classification Map of the Huangshan Study Area
4.3.2. ITS Classification Map of the Gaofeng Study Area
4.4. Applicability of the Explore Module for Limited Sample Sets
4.4.1. ITS Classification Results in the Huangshan Study Area Using Limited Sample Sets
4.4.2. ITS Classification Results in the Gaofeng Study Area Using Limited Sample Sets
4.5. Applicability of the Explore Module for Different Deep Learning Models
4.5.1. ITS Classification Results of Three Models in the Huangshan Study Area
4.5.2. ITS Classification Results of the Three Models in the Gaofeng Study Area
5. Discussion
5.1. The Influencing Factors of Tree Species Classification
5.2. The Weight Setting
5.3. Introduction of Traditional Image Classification Methods
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Tree Species | Abbreviation | Training Sample Set | Validation Sample Set | Test Sample Set
---|---|---|---|---
Phyllostachys heterocycla | Ph.h | 396 | 138 | 138
Other evergreen arbors | Ev.a | 1596 | 534 | 534
Cunninghamia lanceolata | Cu.l | 498 | 168 | 168
Pinus hwangshanensis | Pi.h | 7194 | 2406 | 2406
Other deciduous arbors | De.a | 2214 | 744 | 744
Total | — | 11,898 | 3990 | 3990
Tree Species | Abbreviation | Training Sample Set | Validation Sample Set | Test Sample Set
---|---|---|---|---
Eucalyptus urophylla S.T.Blake | Eu.s | 373 | 125 | 78
Cunninghamia lanceolata | Cu.l | 1777 | 593 | 318
Pinus massoniana Lamb | Pm.l | 1894 | 632 | 336
Eucalyptus grandis × urophylla | Eg.x | 1444 | 482 | 288
Illicium verum Hook. f. | Iv.h | 949 | 317 | 174
Total | — | 6437 | 2149 | 1194
Feature | Abbreviation | Formula
---|---|---
Mean Value | Mean$_b$ | $\mathrm{Mean}_b = \frac{1}{n}\sum_{i=1}^{n} I_{ib}$
Standard Deviation | $\sigma_b$ | $\sigma_b = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(I_{ib}-\mathrm{Mean}_b\right)^2}$
Brightness | $\bar{c}$ | $\bar{c} = \frac{1}{L}\sum_{b=1}^{L}\mathrm{Mean}_b$
Maximum Difference | Max.diff | $\max_{a,b}\left|\mathrm{Mean}_a-\mathrm{Mean}_b\right| \big/ \bar{c}$
Angular Second Moment | ASM | $\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} P(i,j)^2$
Contrast | CON | $\sum_{i=0}^{G-1}\sum_{j=0}^{G-1}(i-j)^2\, P(i,j)$
Correlation | COR | $\sum_{i=0}^{G-1}\sum_{j=0}^{G-1}\frac{(i-\mu_i)(j-\mu_j)\,P(i,j)}{\sigma_i \sigma_j}$
Entropy | ENT | $-\sum_{i=0}^{G-1}\sum_{j=0}^{G-1} P(i,j)\ln P(i,j)$
Dissimilarity | DIS | $\sum_{i=0}^{G-1}\sum_{j=0}^{G-1}\left|i-j\right| P(i,j)$
Homogeneity | HOM | $\sum_{i=0}^{G-1}\sum_{j=0}^{G-1}\frac{P(i,j)}{1+(i-j)^2}$

Meaning of symbols: $n$: number of pixels contained in the object; $I_{ib}$: the $i$-th pixel value of the object in band $b$; $L$: number of bands; $\sigma_a$, $\sigma_b$: standard deviations of bands $a$ and $b$; $G$: highest grey level of the image; $P(i,j)$: value in row $i$, column $j$ of the normalized GLCM; $\mu_i$, $\sigma_i^2$ and $\mu_j$, $\sigma_j^2$: means and variances of the row and column marginals of $P$.
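The GLCM texture features in the table can be computed in a few lines of numpy. The following is a minimal sketch, assuming a symmetric co-occurrence matrix for the horizontal neighbour at distance 1; the function names and toy image are illustrative, not from the paper:

```python
import numpy as np

def glcm(image, levels):
    """Normalized, symmetric grey-level co-occurrence matrix for the
    horizontal neighbour offset (0 degrees, distance 1)."""
    P = np.zeros((levels, levels), dtype=float)
    for row in image:
        for a, b in zip(row[:-1], row[1:]):
            P[a, b] += 1
            P[b, a] += 1  # count both directions -> symmetric matrix
    return P / P.sum()

def glcm_features(P):
    """GLCM features listed in the table, from a normalized matrix P."""
    i, j = np.indices(P.shape)
    mu_i, mu_j = (i * P).sum(), (j * P).sum()          # marginal means
    var_i = ((i - mu_i) ** 2 * P).sum()                # marginal variances
    var_j = ((j - mu_j) ** 2 * P).sum()
    nz = P[P > 0]                                      # avoid log(0) in entropy
    return {
        "ASM": (P ** 2).sum(),
        "CON": ((i - j) ** 2 * P).sum(),
        "COR": ((i - mu_i) * (j - mu_j) * P).sum() / np.sqrt(var_i * var_j),
        "ENT": -(nz * np.log(nz)).sum(),
        "DIS": (np.abs(i - j) * P).sum(),
        "HOM": (P / (1 + (i - j) ** 2)).sum(),
    }

# toy 4-level image patch (illustrative only)
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
feats = glcm_features(glcm(img, levels=4))
```

Library implementations such as scikit-image's `graycomatrix`/`graycoprops` cover most of these properties; the pure-numpy version above also includes entropy, which is not among the standard `graycoprops` options.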
Study Area | Model | Training Accuracy/% | Validation Accuracy/%
---|---|---|---
Huangshan | DenseNet | 95.69 | 95.52
Huangshan | ED | 96.29 | 95.85
Gaofeng | DenseNet | 92.95 | 93.71
Gaofeng | ED | 93.71 | 95.16
Model | Evaluating Indicator | Ph.h | Ev.a | Cu.l | Pi.h | De.a
---|---|---|---|---|---|---
DenseNet | Producer’s accuracy/% | 91.30 | 75.28 | 78.57 | 97.51 | 97.58
DenseNet | User’s accuracy/% | 100.00 | 83.75 | 95.65 | 95.13 | 93.08
DenseNet | Overall accuracy/% | 93.53 | | | |
DenseNet | Macro-F1 | 90.70 | | | |
ED | Producer’s accuracy/% | 91.30 | 74.16 | 82.14 | 98.00 | 99.19
ED | User’s accuracy/% | 100.00 | 88.00 | 95.83 | 95.16 | 93.18
ED | Overall accuracy/% | 94.14 | | | |
ED | Macro-F1 | 91.62 | | | |
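The producer’s accuracy, user’s accuracy, overall accuracy, and macro-F1 reported in these tables can all be derived from one confusion matrix. A minimal sketch (the toy matrix is illustrative, not data from the paper):

```python
import numpy as np

def accuracy_metrics(cm):
    """Per-class producer's/user's accuracy, overall accuracy and
    macro-F1 from a confusion matrix cm[reference, prediction]."""
    cm = np.asarray(cm, dtype=float)
    producers = np.diag(cm) / cm.sum(axis=1)  # recall per class
    users = np.diag(cm) / cm.sum(axis=0)      # precision per class
    overall = np.diag(cm).sum() / cm.sum()
    f1 = 2 * producers * users / (producers + users)
    return producers, users, overall, f1.mean()

# toy 3-class confusion matrix (rows: reference, columns: prediction)
cm = [[50, 2, 3],
      [4, 40, 6],
      [1, 5, 44]]
prod, user, oa, macro_f1 = accuracy_metrics(cm)
```

Macro-F1 averages the per-class F1 scores with equal weight, so minority species (e.g. Ph.h or Cu.l) influence it as strongly as the dominant class.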
Model | Evaluating Indicator | Eu.s | Cu.l | Pm.l | Eg.x | Iv.h
---|---|---|---|---|---|---
DenseNet | Producer’s accuracy/% | 100.00 | 96.86 | 86.61 | 89.58 | 90.80
DenseNet | User’s accuracy/% | 82.98 | 88.76 | 100.00 | 97.36 | 80.20
DenseNet | Overall accuracy/% | 91.54 | | | |
DenseNet | Macro-F1 | 91.29 | | | |
ED | Producer’s accuracy/% | 100.00 | 97.80 | 86.90 | 93.40 | 94.83
ED | User’s accuracy/% | 83.87 | 93.39 | 99.66 | 100.00 | 80.10
ED | Overall accuracy/% | 93.38 | | | |
ED | Macro-F1 | 92.97 | | | |
Study Area | Training Sample Size | Model | Evaluating Indicator | Ph.h | Ev.a | Cu.l | Pi.h | De.a
---|---|---|---|---|---|---|---|---
Huangshan | 480 | DenseNet | Producer’s accuracy/% | 95.65 | 78.65 | 85.71 | 87.53 | 81.45
Huangshan | 480 | DenseNet | User’s accuracy/% | 95.65 | 53.85 | 61.54 | 95.64 | 95.28
Huangshan | 480 | DenseNet | Overall accuracy/% | 85.41 | | | |
Huangshan | 480 | DenseNet | Macro-F1 | 83.01 | | | |
Huangshan | 480 | ED | Producer’s accuracy/% | 95.65 | 82.02 | 85.71 | 90.02 | 83.06
Huangshan | 480 | ED | User’s accuracy/% | 95.65 | 57.94 | 82.76 | 95.25 | 95.37
Huangshan | 480 | ED | Overall accuracy/% | 87.67 | | | |
Huangshan | 480 | ED | Macro-F1 | 86.33 | | | |
Huangshan | 420 | DenseNet | Producer’s accuracy/% | 95.65 | 82.02 | 85.71 | 90.02 | 75.00
Huangshan | 420 | DenseNet | User’s accuracy/% | 100.00 | 56.59 | 61.54 | 95.76 | 94.90
Huangshan | 420 | DenseNet | Overall accuracy/% | 86.17 | | | |
Huangshan | 420 | DenseNet | Macro-F1 | 83.67 | | | |
Huangshan | 420 | ED | Producer’s accuracy/% | 95.65 | 74.16 | 89.29 | 93.02 | 88.71
Huangshan | 420 | ED | User’s accuracy/% | 100.00 | 68.04 | 64.10 | 95.64 | 94.02
Huangshan | 420 | ED | Overall accuracy/% | 89.62 | | | |
Huangshan | 420 | ED | Macro-F1 | 86.22 | | | |
Huangshan | 360 | DenseNet | Producer’s accuracy/% | 95.65 | 84.27 | 82.14 | 89.78 | 72.58
Huangshan | 360 | DenseNet | User’s accuracy/% | 95.65 | 52.82 | 76.67 | 96.51 | 92.78
Huangshan | 360 | DenseNet | Overall accuracy/% | 85.71 | | | |
Huangshan | 360 | DenseNet | Macro-F1 | 83.87 | | | |
Huangshan | 360 | ED | Producer’s accuracy/% | 95.65 | 65.17 | 85.71 | 94.76 | 79.03
Huangshan | 360 | ED | User’s accuracy/% | 91.67 | 62.37 | 60.00 | 95.24 | 89.91
Huangshan | 360 | ED | Overall accuracy/% | 87.52 | | | |
Huangshan | 360 | ED | Macro-F1 | 81.90 | | | |
Study Area | Training Sample Size | Model | Evaluating Indicator | Eu.s | Cu.l | Pm.l | Eg.x | Iv.h
---|---|---|---|---|---|---|---|---
Gaofeng | 480 | DenseNet | Producer’s accuracy/% | 100.00 | 91.51 | 80.06 | 86.46 | 93.10
Gaofeng | 480 | DenseNet | User’s accuracy/% | 84.78 | 84.10 | 94.39 | 94.32 | 78.26
Gaofeng | 480 | DenseNet | Overall accuracy/% | 87.86 | | | |
Gaofeng | 480 | DenseNet | Macro-F1 | 88.67 | | | |
Gaofeng | 480 | ED | Producer’s accuracy/% | 100.00 | 89.62 | 80.65 | 90.28 | 91.38
Gaofeng | 480 | ED | User’s accuracy/% | 89.66 | 84.07 | 96.79 | 90.59 | 79.10
Gaofeng | 480 | ED | Overall accuracy/% | 88.19 | | | |
Gaofeng | 480 | ED | Macro-F1 | 89.20 | | | |
Gaofeng | 420 | DenseNet | Producer’s accuracy/% | 98.72 | 94.65 | 77.38 | 87.85 | 77.01
Gaofeng | 420 | DenseNet | User’s accuracy/% | 89.53 | 76.40 | 93.53 | 94.76 | 79.29
Gaofeng | 420 | DenseNet | Overall accuracy/% | 85.85 | | | |
Gaofeng | 420 | DenseNet | Macro-F1 | 86.91 | | | |
Gaofeng | 420 | ED | Producer’s accuracy/% | 100.00 | 95.60 | 78.57 | 89.24 | 89.66
Gaofeng | 420 | ED | User’s accuracy/% | 86.67 | 87.36 | 98.51 | 95.19 | 71.56
Gaofeng | 420 | ED | Overall accuracy/% | 88.69 | | | |
Gaofeng | 420 | ED | Macro-F1 | 89.21 | | | |
Gaofeng | 360 | DenseNet | Producer’s accuracy/% | 100.00 | 87.11 | 72.32 | 80.56 | 94.83
Gaofeng | 360 | DenseNet | User’s accuracy/% | 89.66 | 77.59 | 91.35 | 88.89 | 73.99
Gaofeng | 360 | DenseNet | Overall accuracy/% | 83.33 | | | |
Gaofeng | 360 | DenseNet | Macro-F1 | 85.61 | | | |
Gaofeng | 360 | ED | Producer’s accuracy/% | 98.72 | 83.65 | 74.11 | 89.24 | 85.63
Gaofeng | 360 | ED | User’s accuracy/% | 92.77 | 81.85 | 87.37 | 85.38 | 74.50
Gaofeng | 360 | ED | Overall accuracy/% | 83.58 | | | |
Gaofeng | 360 | ED | Macro-F1 | 85.31 | | | |
Study Area | Model | Evaluating Indicator | Ph.h | Ev.a | Cu.l | Pi.h | De.a
---|---|---|---|---|---|---|---
Huangshan | AlexNet | Producer’s accuracy/% | 82.61 | 68.54 | 71.43 | 97.76 | 96.77
Huangshan | AlexNet | User’s accuracy/% | 95.00 | 83.56 | 83.33 | 93.56 | 93.02
Huangshan | AlexNet | Overall accuracy/% | 92.03 | | | |
Huangshan | AlexNet | Macro-F1 | 86.44 | | | |
Huangshan | EA | Producer’s accuracy/% | 95.65 | 74.16 | 64.29 | 98.00 | 95.16
Huangshan | EA | User’s accuracy/% | 95.65 | 80.49 | 94.74 | 94.47 | 94.40
Huangshan | EA | Overall accuracy/% | 92.78 | | | |
Huangshan | EA | Macro-F1 | 88.58 | | | |
Huangshan | UNet | Producer’s accuracy/% | 60.87 | 55.06 | 53.57 | 90.77 | 88.71
Huangshan | UNet | User’s accuracy/% | 73.68 | 71.01 | 50.00 | 90.10 | 76.92
Huangshan | UNet | Overall accuracy/% | 83.01 | | | |
Huangshan | UNet | Macro-F1 | 71.05 | | | |
Huangshan | EU | Producer’s accuracy/% | 91.30 | 53.93 | 57.14 | 95.26 | 79.84
Huangshan | EU | User’s accuracy/% | 72.41 | 67.61 | 76.19 | 90.09 | 82.50
Huangshan | EU | Overall accuracy/% | 85.11 | | | |
Huangshan | EU | Macro-F1 | 76.61 | | | |
Huangshan | LeNet | Producer’s accuracy/% | 91.30 | 70.79 | 75.00 | 97.51 | 95.16
Huangshan | LeNet | User’s accuracy/% | 95.45 | 84.00 | 91.30 | 93.32 | 93.65
Huangshan | LeNet | Overall accuracy/% | 92.33 | | | |
Huangshan | LeNet | Macro-F1 | 88.66 | | | |
Huangshan | EL | Producer’s accuracy/% | 91.30 | 70.79 | 78.57 | 98.00 | 95.16
Huangshan | EL | User’s accuracy/% | 100.00 | 82.89 | 91.67 | 93.79 | 94.40
Huangshan | EL | Overall accuracy/% | 92.78 | | | |
Huangshan | EL | Macro-F1 | 89.57 | | | |
Study Area | Model | Evaluating Indicator | Eu.s | Cu.l | Pm.l | Eg.x | Iv.h
---|---|---|---|---|---|---|---
Gaofeng | AlexNet | Producer’s accuracy/% | 100.00 | 90.25 | 86.01 | 90.28 | 72.99
Gaofeng | AlexNet | User’s accuracy/% | 84.78 | 83.19 | 88.65 | 97.01 | 77.91
Gaofeng | AlexNet | Overall accuracy/% | 87.19 | | | |
Gaofeng | AlexNet | Macro-F1 | 87.10 | | | |
Gaofeng | EA | Producer’s accuracy/% | 100.00 | 92.77 | 89.29 | 90.28 | 83.91
Gaofeng | EA | User’s accuracy/% | 85.71 | 89.67 | 92.59 | 97.74 | 79.35
Gaofeng | EA | Overall accuracy/% | 90.37 | | | |
Gaofeng | EA | Macro-F1 | 90.12 | | | |
Gaofeng | UNet | Producer’s accuracy/% | 12.82 | 83.33 | 54.46 | 56.25 | 16.09
Gaofeng | UNet | User’s accuracy/% | 45.45 | 44.31 | 55.45 | 83.08 | 57.14
Gaofeng | UNet | Overall accuracy/% | 54.27 | | | |
Gaofeng | UNet | Macro-F1 | 50.07 | | | |
Gaofeng | EU | Producer’s accuracy/% | 51.28 | 81.13 | 61.90 | 71.18 | 31.61
Gaofeng | EU | User’s accuracy/% | 81.63 | 54.78 | 68.42 | 77.36 | 52.38
Gaofeng | EU | Overall accuracy/% | 64.15 | | | |
Gaofeng | EU | Macro-F1 | 62.95 | | | |
Gaofeng | LeNet | Producer’s accuracy/% | 83.33 | 90.57 | 79.76 | 85.07 | 56.32
Gaofeng | LeNet | User’s accuracy/% | 97.01 | 73.66 | 80.00 | 89.74 | 76.56
Gaofeng | LeNet | Overall accuracy/% | 80.74 | | | |
Gaofeng | LeNet | Macro-F1 | 81.14 | | | |
Gaofeng | EL | Producer’s accuracy/% | 96.15 | 90.25 | 77.08 | 89.93 | 67.24
Gaofeng | EL | User’s accuracy/% | 88.24 | 76.53 | 85.76 | 91.20 | 79.05
Gaofeng | EL | Overall accuracy/% | 83.50 | | | |
Gaofeng | EL | Macro-F1 | 84.14 | | | |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Chen, C.; Jing, L.; Li, H.; Tang, Y.; Chen, F. Individual Tree Species Identification Based on a Combination of Deep Learning and Traditional Features. Remote Sens. 2023, 15, 2301. https://doi.org/10.3390/rs15092301