Tree Species Classification Based on Self-Supervised Learning with Multisource Remote Sensing Images
Abstract
1. Introduction
2. Materials and Methods
2.1. Study Area
2.2. Data
2.3. Classification Models
2.3.1. Feature Extraction Based on MVAE
2.3.2. Feature Extraction Based on MAAE
2.3.3. M-SSL Model for Tree Classification Based on Multisource Images
Algorithm 1. M-SSL model pseudocode (pretext-task training loop over 200 epochs; see the sketch below).
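A minimal PyTorch sketch of a joint generative plus contrastive pretext loop of the kind Algorithm 1 outlines follows. The branch interfaces (`m_aae` and `m_vae` each returning a latent code and a reconstruction), the MSE reconstruction terms, and the InfoNCE pairing of the two codes are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k, temperature=0.07):
    """InfoNCE over a batch: matching (q_i, k_i) pairs are the positives."""
    q, k = F.normalize(q, dim=1), F.normalize(k, dim=1)
    logits = q @ k.t() / temperature                 # [B, B] similarity matrix
    targets = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, targets)

def pretext_train(m_aae, m_vae, loader, epochs=200, lr=1e-3):
    """Hypothetical joint generative + contrastive pretext training."""
    opt = torch.optim.Adam(
        list(m_aae.parameters()) + list(m_vae.parameters()), lr=lr)
    for _ in range(epochs):                          # "Loop 200 epochs" (step 1)
        for hsi, msi in loader:                      # paired HSI/MSI patches
            z_a, rec_a = m_aae(hsi, msi)             # M-AAE code and reconstruction
            z_v, rec_v = m_vae(hsi, msi)             # M-VAE code and reconstruction
            loss = (F.mse_loss(rec_a, hsi)           # generative (reconstruction) terms
                    + F.mse_loss(rec_v, hsi)
                    + info_nce(z_a, z_v))            # contrastive term across branches
            opt.zero_grad()
            loss.backward()
            opt.step()
```

After pretext training, the encoder weights would be frozen or lightly fine-tuned for the downstream classification task, matching the two-stage scheme described in Section 2.3.3.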
3. Experimental Results and Analysis
3.1. Experimental Settings
3.2. Comparative Experiment
3.3. Analysis of Parameters
4. Discussion
- The self-supervised model proposed in this paper, like conventional deep learning methods, extracts and fuses features from the two data sources. Because HSI and MSI differ in spectral and structural information, pairing them acts as an implicit form of augmentation, and training the network on HSI and MSI as multi-modal inputs yields more desirable representations than the other methods compared here.
- By combining the strengths of generative learning and contrastive learning in a joint scheme, the M-SSL model extracts two types of features from the multisource datasets in the pretext task and fine-tunes parameters only in the downstream task. The shared features of multisource tree species images learned during pretext training bring robustness and stability to the downstream tasks.
- The comparative results show that feature learning based on M-AAE or M-VAE better integrates discriminative features while removing redundant ones. Treating pixels of the same tree species as positive samples supplies richer information and makes the tree species classification more accurate (see the sketch after this list).
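A hedged sketch of that positive-sampling idea: a SupCon-style loss in which pixels sharing a tree species label act as mutual positives. The function name and formulation are illustrative, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def species_contrastive_loss(z, species, temperature=0.07):
    """z: [N, D] pixel embeddings; species: [N] integer species ids.
    Same-species pixels are positives; all other pixels are negatives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / temperature                          # pairwise similarities
    pos = species.unsqueeze(0) == species.unsqueeze(1)     # same-species pairs
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = pos & ~self_mask                                 # exclude self-pairs
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp = sim.exp().masked_fill(self_mask, 0.0)
    log_prob = sim - exp.sum(dim=1, keepdim=True).log()    # log-softmax over others
    pos_count = pos.sum(dim=1).clamp(min=1)                # avoid divide-by-zero
    return -(log_prob * pos).sum(dim=1).div(pos_count).mean()
```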
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Van Engelen, J.E.; Hoos, H.H. A survey on semi-supervised learning. Mach. Learn. 2020, 109, 373–440.
- Sharma, R.C.; Hara, K. Self-Supervised Learning of Satellite-Derived Vegetation Indices for Clustering and Visualization of Vegetation Types. J. Imaging 2021, 7, 30.
- Saheer, L.B.; Shahawy, M. Self-Supervised Approach for Urban Tree Recognition on Aerial Images. In Proceedings of the IFIP International Conference on Artificial Intelligence Applications and Innovations, Hersonissos, Greece, 25–27 June 2021; Springer: Cham, Switzerland, 2021; pp. 476–486.
- Weinstein, B.G.; Marconi, S.; Bohlman, S.; Zare, A.; White, E. Individual tree-crown detection in RGB imagery using semi-supervised deep learning neural networks. Remote Sens. 2019, 11, 1309.
- Cat Tuong, T.T.; Tani, H.; Wang, X.; Thang, N.Q. Semi-supervised classification and landscape metrics for mapping and spatial pattern change analysis of tropical forest types in Thua Thien Hue province, Vietnam. Forests 2019, 10, 673.
- Wan, H.; Tang, Y.; Jing, L.; Li, H.; Qiu, F.; Wu, W. Tree Species Classification of Forest Stands Using Multisource Remote Sensing Data. Remote Sens. 2021, 13, 144.
- Li, Y.; Shao, Z.; Huang, X.; Cai, B.; Peng, S. Meta-FSEO: A Meta-Learning Fast Adaptation with Self-Supervised Embedding Optimization for Few-Shot Remote Sensing Scene Classification. Remote Sens. 2021, 13, 2776.
- Zhao, Z.; Luo, Z.; Li, J.; Chen, C.; Piao, Y. When Self-Supervised Learning Meets Scene Classification: Remote Sensing Scene Classification Based on a Multitask Learning Framework. Remote Sens. 2020, 12, 3276.
- Illarionova, S.; Trekin, A.; Ignatiev, V.; Oseledets, I. Tree Species Mapping on Sentinel-2 Satellite Imagery with Weakly Supervised Classification and Object-Wise Sampling. Forests 2021, 12, 1413.
- Dong, H.; Ma, W.; Wu, Y.; Zhang, J.; Jiao, L. Self-Supervised Representation Learning for Remote Sensing Image Change Detection Based on Temporal Prediction. Remote Sens. 2020, 12, 1868.
- Weis, M.A.; Pede, L.; Lüddecke, T.; Ecker, A.S. Self-supervised Representation Learning of Neuronal Morphologies. arXiv 2021, arXiv:2112.12482.
- Van Horn, G.; Cole, E.; Beery, S.; Wilber, K.; Belongie, S.; Aodha, O.M. Benchmarking Representation Learning for Natural World Image Collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 12884–12893.
- Liu, B.; Gao, K.; Yu, A.; Ding, L.; Qiu, C.; Li, J. ES2FL: Ensemble Self-Supervised Feature Learning for Small Sample Classification of Hyperspectral Images. Remote Sens. 2022, 14, 4236.
- Liu, C.; Sun, H.; Xu, Y.; Kuang, G. Multi-Source Remote Sensing Pretraining Based on Contrastive Self-Supervised Learning. Remote Sens. 2022, 14, 4632.
- Monowar, M.M.; Hamid, M.A.; Ohi, A.Q.; Alassafi, M.O.; Mridha, M.F. AutoRet: A Self-Supervised Spatial Recurrent Network for Content-Based Image Retrieval. Sensors 2022, 22, 2188.
- Wang, J.; Wang, Y.; Liu, H. Hybrid Variability Aware Network (HVANet): A self-supervised deep framework for label-free SAR image change detection. Remote Sens. 2022, 14, 734.
- Tao, B.; Chen, X.; Tong, X.; Jiang, D.; Chen, B. Self-supervised monocular depth estimation based on channel attention. Photonics 2022, 9, 434.
- Gao, H.; Zhao, Y.; Guo, P.; Sun, Z.; Chen, X.; Tang, Y. Cycle and Self-Supervised Consistency Training for Adapting Semantic Segmentation of Aerial Images. Remote Sens. 2022, 14, 1527.
- Liu, B.; Yu, H.; Du, J.; Wu, Y.; Li, Y.; Zhu, Z.; Wang, Z. Specific Emitter Identification Based on Self-Supervised Contrast Learning. Electronics 2022, 11, 2907.
- Cui, X.Z.; Feng, Q.; Wang, S.Z.; Zhang, J.-H. Monocular depth estimation with self-supervised learning for vineyard unmanned agricultural vehicle. Sensors 2022, 22, 721.
- Wang, X.; Zhang, R.; Shen, C.; Kong, T.; Li, L. Dense Contrastive Learning for Self-Supervised Visual Pre-Training. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 3024–3033.
- Mei, S.; Ji, J.; Geng, Y.; Zhang, Z.; Li, X.; Du, Q. Unsupervised spatial–spectral feature learning by 3D convolutional autoencoder for hyperspectral classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 6808–6820.
- Wang, X.; Tan, K.; Du, Q.; Chen, Y.; Du, P. Caps-TripleGAN: GAN-assisted CapsNet for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 7232–7245.
- Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 580–587.
- Koch, G.; Zemel, R.; Salakhutdinov, R. Siamese Neural Networks for One-Shot Image Recognition. In Proceedings of the ICML Deep Learning Workshop, Lille, France, 10–11 June 2015; p. 2.
- Chen, T.; Kornblith, S.; Norouzi, M.; Hinton, G. A Simple Framework for Contrastive Learning of Visual Representations. In Proceedings of the International Conference on Machine Learning (PMLR), Online, 13–18 July 2020; pp. 1597–1607.
- He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum Contrast for Unsupervised Visual Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–14 June 2020; pp. 9729–9738.
- Chen, X.; He, K. Exploring Simple Siamese Representation Learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15750–15758.
- Li, J.; Zhou, P.; Xiong, C.; Hoi, S.C.H. Prototypical contrastive learning of unsupervised representations. arXiv 2020, arXiv:2005.04966.
- Cao, Z.; Li, X.; Feng, Y.; Chen, S.; Xia, C.; Zhao, L. ContrastNet: Unsupervised feature learning by autoencoder and prototypical contrastive learning for hyperspectral imagery classification. Neurocomputing 2021, 460, 71–83.
- Wang, L.; Fan, W.Y. Identification of forest dominant tree species group based on hyperspectral remote sensing data. J. Northeast For. Univ. 2015, 43, 134–137.
- Drusch, M.; Del Bello, U.; Carlier, S.; Colin, O.; Fernandez, V.; Gascon, F.; Hoersch, B.; Isola, C.; Laberinti, P.; Martimort, P.; et al. Sentinel-2: ESA’s optical high-resolution mission for GMES operational services. Remote Sens. Environ. 2012, 120, 25–36.
- Johannessen, J.A. Potential Contribution to Earth System Science: Oceans and Cryosphere. In Proceedings of the EGU General Assembly Conference Abstracts, Vienna, Austria, 3–8 April 2010; p. 15625.
- Vahdat, A.; Kautz, J. NVAE: A deep hierarchical variational autoencoder. Adv. Neural Inf. Process. Syst. 2020, 33, 19667–19679.
- Wang, X.; Ren, H.; Wang, A. Smish: A Novel Activation Function for Deep Learning Methods. Electronics 2022, 11, 540.
- Oord, A.; Li, Y.; Vinyals, O. Representation learning with contrastive predictive coding. arXiv 2018, arXiv:1807.03748.
- Johnson, J.; Douze, M.; Jégou, H. Billion-scale similarity search with GPUs. IEEE Trans. Big Data 2019, 7, 535–547.
- Gao, Y.; Li, W.; Zhang, M.; Wang, J. Hyperspectral and multispectral classification for coastal wetland using depthwise feature interaction network. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–15.
- Hu, P.; Peng, D.; Sang, Y.; Xiang, Y. Multi-view linear discriminant analysis network. IEEE Trans. Image Process. 2019, 28, 5352–5365.
- Sharma, P.; Berwal, Y.P.S.; Ghai, W. Performance analysis of deep learning CNN models for disease detection in plants using image segmentation. Inf. Process. Agric. 2020, 7, 566–574.
Dataset | Birch | Larch | Mongolia | Poplar | Spruce | Willow
---|---|---|---|---|---|---
Dataset (1) | 130,124 | 39,216 | 57,620 | 3019 | 15,330 | 3492
Dataset (2) | 150,771 | 58,829 | 11,412 | 2175 | 17,048 | 1067
Dataset (3) | 99,082 | 82,746 | 38,114 | 1013 | 13,460 | 15,486
h-AAE Encoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 1, 15, 15, 15]
Conv3d | [−1, 8, 13, 13, 13]
BatchNorm3d | [−1, 8, 13, 13, 13]
Smish | [−1, 8, 13, 13, 13]
Conv3d | [−1, 16, 11, 11, 11]
BatchNorm3d | [−1, 16, 11, 11, 11]
Smish | [−1, 16, 11, 11, 11]
Conv2d | [−1, 32, 9, 9]
BatchNorm2d | [−1, 32, 9, 9]
Smish | [−1, 32, 9, 9]
Conv2d | [−1, 64, 7, 7]
BatchNorm2d | [−1, 64, 7, 7]
Smish | [−1, 64, 7, 7]
AdaptiveAvgPool2d | [−1, 64, 4, 4]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 128]
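For reference, the encoder table above can be realized as the following PyTorch sketch. The transition from 3D to 2D convolutions is not spelled out in the table; folding the 16 channels and 11 remaining spectral slices into 176 2D channels is an assumption that reproduces the listed shapes, and the Smish definition is taken from Wang et al. (Electronics 2022).

```python
import torch
import torch.nn as nn

class Smish(nn.Module):
    """Smish(x) = x * tanh(ln(1 + sigmoid(x))) (Wang et al., Electronics 2022)."""
    def forward(self, x):
        return x * torch.tanh(torch.log1p(torch.sigmoid(x)))

class HAAEEncoder(nn.Module):
    """Sketch matching the h-AAE encoder table; the 3D-to-2D fold is assumed."""
    def __init__(self):
        super().__init__()
        self.block3d = nn.Sequential(
            nn.Conv3d(1, 8, 3), nn.BatchNorm3d(8), Smish(),    # -> [8, 13, 13, 13]
            nn.Conv3d(8, 16, 3), nn.BatchNorm3d(16), Smish(),  # -> [16, 11, 11, 11]
        )
        self.block2d = nn.Sequential(
            nn.Conv2d(16 * 11, 32, 3), nn.BatchNorm2d(32), Smish(),  # -> [32, 9, 9]
            nn.Conv2d(32, 64, 3), nn.BatchNorm2d(64), Smish(),       # -> [64, 7, 7]
            nn.AdaptiveAvgPool2d(4),                                 # -> [64, 4, 4]
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 4 * 4, 512), Smish(), nn.Linear(512, 128))

    def forward(self, x):          # x: [B, 1, 15, 15, 15]
        x = self.block3d(x)
        x = x.flatten(1, 2)        # fold channel and spectral dims: [B, 176, 11, 11]
        return self.head(self.block2d(x))

# Shape check: HAAEEncoder()(torch.randn(2, 1, 15, 15, 15)).shape == (2, 128)
```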
h-AAE Decoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 128]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 21104]
Smish | [−1, 21104]
ConvTranspose2d | [−1, 64, 9, 9]
BatchNorm2d | [−1, 64, 9, 9]
Smish | [−1, 64, 9, 9]
ConvTranspose2d | [−1, 32, 11, 11]
BatchNorm2d | [−1, 32, 11, 11]
Smish | [−1, 32, 11, 11]
ConvTranspose3d | [−1, 16, 13, 13, 13]
BatchNorm3d | [−1, 16, 13, 13, 13]
Smish | [−1, 16, 13, 13, 13]
ConvTranspose3d | [−1, 1, 15, 15, 15]
BatchNorm3d | [−1, 1, 15, 15, 15]
m-AAE Encoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 1, 12, 15, 15]
Conv3d | [−1, 8, 9, 13, 13]
BatchNorm3d | [−1, 8, 9, 13, 13]
Smish | [−1, 8, 9, 13, 13]
Conv3d | [−1, 16, 5, 11, 11]
BatchNorm3d | [−1, 16, 5, 11, 11]
Smish | [−1, 16, 5, 11, 11]
Conv2d | [−1, 32, 9, 9]
BatchNorm2d | [−1, 32, 9, 9]
Smish | [−1, 32, 9, 9]
Conv2d | [−1, 64, 7, 7]
BatchNorm2d | [−1, 64, 7, 7]
Smish | [−1, 64, 7, 7]
AdaptiveAvgPool2d | [−1, 64, 4, 4]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 128]
m-AAE Decoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 128]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 12704]
Smish | [−1, 12704]
ConvTranspose2d | [−1, 64, 9, 9]
BatchNorm2d | [−1, 64, 9, 9]
Smish | [−1, 64, 9, 9]
ConvTranspose2d | [−1, 32, 11, 11]
BatchNorm2d | [−1, 32, 11, 11]
Smish | [−1, 32, 11, 11]
ConvTranspose3d | [−1, 16, 11, 13, 13]
BatchNorm3d | [−1, 16, 11, 13, 13]
Smish | [−1, 16, 11, 13, 13]
ConvTranspose3d | [−1, 1, 12, 15, 15]
BatchNorm3d | [−1, 1, 12, 15, 15]
h-VAE Encoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 1, 15, 15, 15]
Conv3d | [−1, 8, 13, 13, 13]
BatchNorm3d | [−1, 8, 13, 13, 13]
Smish | [−1, 8, 13, 13, 13]
Conv3d | [−1, 16, 11, 11, 11]
BatchNorm3d | [−1, 16, 11, 11, 11]
Smish | [−1, 16, 11, 11, 11]
Conv2d | [−1, 32, 9, 9]
BatchNorm2d | [−1, 32, 9, 9]
Smish | [−1, 32, 9, 9]
Conv2d | [−1, 64, 7, 7]
BatchNorm2d | [−1, 64, 7, 7]
Smish | [−1, 64, 7, 7]
AdaptiveAvgPool2d | [−1, 64, 4, 4]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 128]
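The h-VAE encoder mirrors the h-AAE backbone and likewise ends in a 128-dimensional code. The table does not show how that code becomes a variational latent; a minimal sketch, assuming separate mean and log-variance projections with the standard reparameterization trick, is given below.

```python
import torch
import torch.nn as nn

class VariationalHead(nn.Module):
    """Hypothetical variational head on the 128-d encoder output: projects
    to mu and log-variance, samples with reparameterization, returns the
    sampled code plus the KL term used in the VAE objective."""
    def __init__(self, dim=128):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.logvar = nn.Linear(dim, dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        # KL divergence to a standard normal prior, averaged over the batch
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
        return z, kl
```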
h-VAE Decoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 128]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 21104]
Smish | [−1, 21104]
ConvTranspose2d | [−1, 64, 9, 9]
BatchNorm2d | [−1, 64, 9, 9]
Smish | [−1, 64, 9, 9]
ConvTranspose2d | [−1, 32, 11, 11]
BatchNorm2d | [−1, 32, 11, 11]
Smish | [−1, 32, 11, 11]
ConvTranspose3d | [−1, 16, 13, 13, 13]
BatchNorm3d | [−1, 16, 13, 13, 13]
Smish | [−1, 16, 13, 13, 13]
ConvTranspose3d | [−1, 1, 15, 15, 15]
BatchNorm3d | [−1, 1, 15, 15, 15]
m-VAE Encoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 1, 12, 15, 15]
Conv3d | [−1, 8, 9, 13, 13]
BatchNorm3d | [−1, 8, 9, 13, 13]
Smish | [−1, 8, 9, 13, 13]
Conv3d | [−1, 16, 5, 11, 11]
BatchNorm3d | [−1, 16, 5, 11, 11]
Smish | [−1, 16, 5, 11, 11]
Conv2d | [−1, 32, 9, 9]
BatchNorm2d | [−1, 32, 9, 9]
Smish | [−1, 32, 9, 9]
Conv2d | [−1, 64, 7, 7]
BatchNorm2d | [−1, 64, 7, 7]
Smish | [−1, 64, 7, 7]
AdaptiveAvgPool2d | [−1, 64, 4, 4]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 128]
m-VAE Decoder Parameter Setting

Layer | Output Shape
---|---
Input | [−1, 128]
Linear | [−1, 512]
Smish | [−1, 512]
Linear | [−1, 12704]
Smish | [−1, 12704]
ConvTranspose2d | [−1, 64, 9, 9]
BatchNorm2d | [−1, 64, 9, 9]
Smish | [−1, 64, 9, 9]
ConvTranspose2d | [−1, 32, 11, 11]
BatchNorm2d | [−1, 32, 11, 11]
Smish | [−1, 32, 11, 11]
ConvTranspose3d | [−1, 16, 11, 13, 13]
BatchNorm3d | [−1, 16, 11, 13, 13]
Smish | [−1, 16, 11, 13, 13]
ConvTranspose3d | [−1, 1, 12, 15, 15]
BatchNorm3d | [−1, 1, 12, 15, 15]
M-SSL Parameter Setting

Layer | Output Shape
---|---
HSI Input | [−1, 1024]
MSI Input | [−1, 1024]
ConvTranspose2d | [−1, 64, 6, 6]
BatchNorm2d | [−1, 64, 6, 6]
Smish | [−1, 64, 6, 6]
ConvTranspose2d | [−1, 64, 8, 8]
BatchNorm2d | [−1, 64, 8, 8]
Smish | [−1, 64, 8, 8]
Conv2d | [−1, 128, 6, 6]
BatchNorm2d | [−1, 128, 6, 6]
Smish | [−1, 128, 6, 6]
Conv2d | [−1, 64, 4, 4]
BatchNorm2d | [−1, 64, 4, 4]
Smish | [−1, 64, 4, 4]
Conv2d | [−1, 32, 2, 2]
Smish | [−1, 32, 2, 2]
Linear | [−1, 128]
Smish | [−1, 128]
Linear | [−1, 128]
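The M-SSL head can be reconstructed from its shape table: the two 1024-dimensional branch features must be fused and reshaped to [64, 4, 4] before the first transposed convolution. Element-wise addition is an assumed fusion step (the table does not specify one); everything else follows the listed shapes.

```python
import torch
import torch.nn as nn

class Smish(nn.Module):
    def forward(self, x):
        return x * torch.tanh(torch.log1p(torch.sigmoid(x)))

class MSSLHead(nn.Module):
    """Sketch of the M-SSL head table; the additive HSI/MSI fusion is assumed."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 64, 3), nn.BatchNorm2d(64), Smish(),  # -> [64, 6, 6]
            nn.ConvTranspose2d(64, 64, 3), nn.BatchNorm2d(64), Smish(),  # -> [64, 8, 8]
            nn.Conv2d(64, 128, 3), nn.BatchNorm2d(128), Smish(),         # -> [128, 6, 6]
            nn.Conv2d(128, 64, 3), nn.BatchNorm2d(64), Smish(),          # -> [64, 4, 4]
            nn.Conv2d(64, 32, 3), Smish(),                               # -> [32, 2, 2]
            nn.Flatten(), nn.Linear(32 * 2 * 2, 128), Smish(), nn.Linear(128, 128))

    def forward(self, hsi_feat, msi_feat):             # each [B, 1024]
        x = (hsi_feat + msi_feat).view(-1, 64, 4, 4)   # assumed fusion + reshape
        return self.net(x)
```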
Per-class accuracy (%) and overall metrics for the supervised models (LDA, 1D-CNN, S-CNN) and the unsupervised models (CUFL, AAE, VAE, M-SSL).

Class | LDA | 1D-CNN | S-CNN | CUFL | AAE | VAE | M-SSL
---|---|---|---|---|---|---|---
Birch | 71.70 | 72.20 | 76.32 | 79.44 | 78.23 | 75.32 | 78.18
Larch | 72.89 | 80.37 | 80.92 | 73.69 | 79.38 | 80.31 | 80.82
Spruce | 61.50 | 63.82 | 75.95 | 76.68 | 75.94 | 72.27 | 78.38
Mongolia | 56.40 | 69.15 | 77.68 | 77.45 | 78.00 | 78.87 | 79.20
Willow | 56.70 | 62.47 | 57.88 | 73.17 | 74.23 | 72.98 | 73.96
Poplar | 61.02 | 54.45 | 73.62 | 72.83 | 73.77 | 72.77 | 73.25
OA (%) | 68.65 | 69.61 | 76.25 | 76.76 | 77.41 | 75.83 | 80.60
AA (%) | 66.36 | 71.98 | 74.49 | 76.82 | 78.51 | 75.75 | 79.87
Kappa (%) | 65.71 | 68.17 | 66.81 | 75.30 | 76.42 | 74.31 | 79.17
Per-class accuracy (%) and overall metrics for the supervised models (LDA, 1D-CNN, S-CNN) and the unsupervised models (CUFL, AAE, VAE, M-SSL).

Class | LDA | 1D-CNN | S-CNN | CUFL | AAE | VAE | M-SSL
---|---|---|---|---|---|---|---
Birch | 57.65 | 65.601 | 67.09 | 71.50 | 76.48 | 74.17 | 77.20
Larch | 71.94 | 71.325 | 72.10 | 80.28 | 81.08 | 80.72 | 81.18
Spruce | 57.40 | 69.69 | 64.64 | 73.26 | 72.00 | 70.82 | 74.42
Mongolia | 56.40 | 48.96 | 65.18 | 79.11 | 70.98 | 74.27 | 75.24
Willow | 54.72 | 65.35 | 65.65 | 54.03 | 64.18 | 60.36 | 76.10
Poplar | 62.82 | 63.45 | 64.62 | 62.37 | 63.77 | 62.77 | 73.25
OA (%) | 54.37 | 60.95 | 67.85 | 75.69 | 75.08 | 70.28 | 79.69
AA (%) | 55.05 | 63.36 | 70.04 | 67.29 | 70.39 | 68.68 | 77.92
Kappa (%) | 53.89 | 61.02 | 67.69 | 62.84 | 69.13 | 64.38 | 74.64
Per-class accuracy (%) and overall metrics for the supervised models (LDA, 1D-CNN, S-CNN) and the unsupervised models (CUFL, AAE, VAE, M-SSL).

Class | LDA | 1D-CNN | S-CNN | CUFL | AAE | VAE | M-SSL
---|---|---|---|---|---|---|---
Birch | 72.71 | 47.42 | 64.19 | 72.43 | 72.14 | 70.07 | 79.34
Larch | 77.83 | 55.53 | 65.97 | 73.16 | 71.98 | 76.69 | 80.33
Spruce | 74.47 | 48.89 | 57.88 | 72.46 | 75.78 | 74.12 | 76.84
Mongolia | 43.83 | 37.93 | 60.17 | 71.33 | 70.72 | 72.73 | 79.92
Willow | 53.14 | 69.29 | 72.18 | 75.81 | 76.60 | 75.38 | 76.29
Poplar | 50.43 | 77.45 | 75.69 | 78.47 | 65.37 | 64.06 | 75.83
OA (%) | 60.49 | 55.43 | 67.21 | 74.56 | 70.71 | 69.11 | 76.54
AA (%) | 60.01 | 62.60 | 63.61 | 74.52 | 68.61 | 67.80 | 74.06
Kappa (%) | 55.34 | 54.69 | 62.75 | 73.68 | 65.89 | 63.17 | 73.65