Object-Oriented Convolutional Neural Network for Forest Stand Classification Based on Multi-Source Data Collaboration
Abstract
1. Introduction
2. Study Area and Data
2.1. Study Area
2.2. Data Description
2.2.1. Remote Sensing Data
2.2.2. Data Collected in the Field
3. Methods
3.1. Image Fusion
- (1) Using the average downsampling method, reduce the high-resolution bands Mi (i = 1, …, n) to the pixel size of the low-resolution band to obtain their low-frequency components Mi,L. Then, use cubic convolution to resample Mi,L back to the original pixel size and compute the spatial detail information Mi,h:
- (2) Establish a linear relationship f between the low-resolution band Mj (j = 1, …, m) to be fused and the low-frequency components Mi,L of the n high-resolution bands, obtaining n regression coefficients ai (i = 1, …, n):
- (3) Use cubic convolution to upsample Mj to the pixel size of Mi, denoted Mj,h, and compute the fused band Mj,f:
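The three steps above can be sketched in plain Python. This is a simplified illustration, not the paper's implementation: nearest-neighbour upsampling stands in for cubic-convolution interpolation, and a single high-resolution band (n = 1) is used so the regression reduces to ordinary least squares with one predictor.

```python
def block_mean_downsample(img, f):
    """Average-downsample a 2D image (list of rows) by factor f — step (1)."""
    h, w = len(img), len(img[0])
    return [[sum(img[r * f + i][c * f + j] for i in range(f) for j in range(f)) / (f * f)
             for c in range(w // f)] for r in range(h // f)]

def replicate_upsample(img, f):
    """Nearest-neighbour upsampling; a stand-in for the cubic convolution
    interpolation used in the paper."""
    return [[img[r // f][c // f] for c in range(len(img[0]) * f)]
            for r in range(len(img) * f)]

def fuse_band(hi_band, lo_band, f):
    """Fuse one low-resolution band with one high-resolution band (n = 1)."""
    # Step (1): low-frequency component and spatial detail of the high-res band.
    lo_freq = block_mean_downsample(hi_band, f)
    lo_freq_up = replicate_upsample(lo_freq, f)
    detail = [[hi_band[r][c] - lo_freq_up[r][c]
               for c in range(len(hi_band[0]))] for r in range(len(hi_band))]
    # Step (2): least-squares regression of the low-res band on the
    # low-frequency component (one predictor, so a closed-form slope).
    xs = [v for row in lo_freq for v in row]
    ys = [v for row in lo_band for v in row]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    # Step (3): upsample the low-res band and inject the weighted detail.
    lo_up = replicate_upsample(lo_band, f)
    return [[lo_up[r][c] + a * detail[r][c]
             for c in range(len(lo_up[0]))] for r in range(len(lo_up))]
```

Because the detail image has zero mean within each block, averaging the fused band back down recovers the original low-resolution band exactly, which is the spectral-consistency property such injection schemes aim for.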
3.2. Image Segmentation
- (1) The spectral angle (SA) treats spectral data as vectors in a multidimensional space; the angle between the vector of a pixel and that of an adjacent pixel is the spectral angle between the two pixels, and its size measures their spectral difference. The spectral angle between pixel vectors a and b is as follows:
- (2) The spectral distance (dist) likewise treats spectral data as vectors in a multidimensional space; the distance between the vector of a pixel and that of an adjacent pixel is the spectral distance between the two pixels. The spectral distance measures the difference in pixel brightness values, and its calculation formula is as follows:
- (3) Combining the SA and dist indicators defines the spectral difference dif between pixels:
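The two measures above have standard vector forms — SA is the arccosine of the normalized dot product and dist is the Euclidean norm of the difference. A minimal sketch follows; note that the weighted-sum combination in `spectral_difference` is an assumption for illustration, since the paper's exact formula for dif is not reproduced here.

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two pixel vectors: arccos(a·b / (|a||b|)).
    A small angle means similar spectral shape regardless of brightness."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def spectral_distance(a, b):
    """Euclidean distance between two pixel vectors; sensitive to brightness."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def spectral_difference(a, b, w=0.5):
    """One plausible combination of SA and dist (the weight w is assumed,
    not taken from the paper)."""
    return w * spectral_angle(a, b) + (1 - w) * spectral_distance(a, b)
```

For example, two proportional spectra (same shape, different brightness) give a spectral angle of 0 but a nonzero spectral distance, which is why the two indicators are complementary.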
3.3. CNN Structure
3.4. Experimental Design
3.5. Training Samples Selection and CNN Sample Set Construction
4. Results
4.1. Image Fusion Results
4.2. Image Segmentation Results
4.3. Classification Results
5. Discussion
6. Conclusions
- (1) As the red-edge band information from Sentinel-2 is progressively integrated into RapidEye, the accuracy of forest stand classification improves steadily. This demonstrates that the proposed fusion design effectively increases the spectral information available for classification and broadens the application range of Sentinel-2, which is highly significant for forest classification.
- (2) Integrating object-oriented techniques with convolutional neural networks (OCNN) establishes a pathway for constructing deep learning sample libraries. This approach not only prevents 'salt-and-pepper' noise but also preserves the integrity and homogeneity of the stands.
- (3) The canopy height information derived from LiDAR point clouds can, to a certain extent, enhance the precision of forest stand classification.
- (4) Comparing traditional object-oriented classification with the OCNN methods shows that ResNet_18 achieves the highest classification accuracy. Across the four data sources, ResNet_18 improves overall accuracy by 4.53%, 5.49%, 5.97%, and 4.53% over the SVM method, which has the highest accuracy among the traditional methods. The OCNN classification method is therefore effective in identifying forest stands and offers a viable approach for the precise identification of forest tree species.
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Sentinel-2A Bands | Central Wavelength (µm) | Spatial Resolution (m) | RapidEye Bands | Band Range (nm) | Spatial Resolution (m) |
---|---|---|---|---|---|
Band 1-Coastal aerosol | 0.44 | 60 | | | |
Band 2-Blue | 0.49 | 10 | Band 1-Blue | 440–510 | 5 |
Band 3-Green | 0.56 | 10 | Band 2-Green | 520–590 | 5 |
Band 4-Red | 0.67 | 10 | Band 3-Red | 630–685 | 5 |
Band 5-Red Edge1 | 0.71 | 20 | Band 4-Red Edge | 690–730 | 5 |
Band 6-Red Edge2 | 0.74 | 20 | | | |
Band 7-Red Edge3 | 0.78 | 20 | Band 5-NIR | 760–850 | 5 |
Band 8-NIR | 0.84 | 10 | | | |
Band 8A-Red Edge4 | 0.87 | 20 | | | |
Band 9-Water vapor | 0.95 | 60 | | | |
Band 10-SWIR-Cirrus | 1.38 | 60 | | | |
Band 11-SWIR | 1.61 | 20 | | | |
Band 12-SWIR | 2.19 | 20 | | | |
Data Source | Traditional Object-Oriented | Object-Oriented_CNN |
---|---|---|
RapidEye | DTC, KNN, MLC, SVM | AlexNet, LeNet, ResNet_18 |
RapidEye + S2A Red-edge2 (RapidEye + RE2) | | |
RapidEye + S2A Red-edge2 + S2A Red-edge4 (RapidEye + RE2 + RE4) | | |
RapidEye + S2A Red-edge2 + S2A Red-edge4 + CHM (RapidEye + RE2 + RE4 + CHM) | | |
Input Image Size | Train Epoch | Batch_Size | Classifier | Monitor Quantity | Save Best Model |
---|---|---|---|---|---|
64 | 200 | 16 | Softmax | Val_acc | True |
Method | M | STD | EN | AG | SF | CEN |
---|---|---|---|---|---|---|
MV | 144.59 | 65.64 | 7.03 | 14.01 | 28.75 | 0.07 |
GS | 131.91 | 50.51 | 6.85 | 17.61 | 37.32 | 0.28 |
Data Source | Layer Weight | Scale Range | Scale Step Size | Color Indices | Tightness Indices |
---|---|---|---|---|---|
5 m RapidEye | 1 | 20–100 | 10 | 0.9 | 0.5 |
Method (OA, %) | RapidEye | RapidEye + RE2 | RapidEye + RE2 + RE4 | RapidEye + RE2 + RE4 + CHM |
---|---|---|---|---|
DTC | 52.27 | 60.62 | 61.10 | 62.29 |
KNN | 55.61 | 58.23 | 59.43 | 60.86 |
MLC | 73.03 | 74.70 | 75.66 | 80.91 |
SVM | 76.85 | 77.80 | 78.52 | 81.15 |
AlexNet | 74.22 | 75.18 | 79.24 | 79.95 |
LeNet | 78.28 | 79.47 | 81.62 | 84.49 |
ResNet_18 | 81.38 | 83.29 | 84.49 | 85.68 |
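The improvements quoted in conclusion (4) — 4.53%, 5.49%, 5.97%, and 4.53% — can be reproduced directly from the overall-accuracy table by differencing the ResNet_18 and SVM rows:

```python
# OA (%) for the four data sources, copied from the table above.
resnet18 = [81.38, 83.29, 84.49, 85.68]
svm = [76.85, 77.80, 78.52, 81.15]

# Percentage-point gain of ResNet_18 over SVM per data source.
deltas = [round(r - s, 2) for r, s in zip(resnet18, svm)]
print(deltas)  # [4.53, 5.49, 5.97, 4.53]
```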
Actual Number of Sample Points
Category | Others | Octagon | Eucalyptus | Cedar | Pine | Other Broad-Leaved | Total |
---|---|---|---|---|---|---|---|
Others | 72 | 4 | 2 | 0 | 5 | 2 | 85 |
Octagon | 2 | 51 | 0 | 0 | 1 | 0 | 54 |
Eucalyptus | 1 | 2 | 87 | 1 | 0 | 0 | 91 |
Cedar | 1 | 1 | 1 | 69 | 2 | 6 | 80 |
Pine | 3 | 2 | 2 | 7 | 35 | 8 | 57 |
Other broad-leaved | 0 | 0 | 2 | 4 | 1 | 45 | 52 |
Total | 79 | 60 | 94 | 81 | 44 | 61 | 419 |
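As a consistency check, the overall accuracy implied by this confusion matrix (assuming rows are classified categories and columns are actual sample points, as the header suggests) can be recomputed; it matches the best OA reported for ResNet_18. Cohen's kappa is computed as a common companion metric, though the paper's kappa value is not reproduced here.

```python
# Confusion matrix from the table: rows = classified, columns = actual.
cm = [
    [72, 4, 2, 0, 5, 2],
    [2, 51, 0, 0, 1, 0],
    [1, 2, 87, 1, 0, 0],
    [1, 1, 1, 69, 2, 6],
    [3, 2, 2, 7, 35, 8],
    [0, 0, 2, 4, 1, 45],
]
n = sum(map(sum, cm))                             # total sample points (419)
oa = sum(cm[i][i] for i in range(len(cm))) / n    # overall accuracy
col_totals = [sum(row[j] for row in cm) for j in range(len(cm))]
# Expected agreement by chance, for Cohen's kappa.
pe = sum(sum(cm[i]) * col_totals[i] for i in range(len(cm))) / n ** 2
kappa = (oa - pe) / (1 - pe)
print(round(100 * oa, 2))  # 85.68
```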
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Zhao, X.; Jing, L.; Zhang, G.; Zhu, Z.; Liu, H.; Ren, S. Object-Oriented Convolutional Neural Network for Forest Stand Classification Based on Multi-Source Data Collaboration. Forests 2024, 15, 529. https://doi.org/10.3390/f15030529