OPT-SAR-MS2Net: A Multi-Source Multi-Scale Siamese Network for Land Object Classification Using Remote Sensing Images
Abstract
1. Introduction
- To avoid the complex, multi-stage training strategies of existing multi-source feature-fusion networks for optical and SAR remote sensing data, we designed an end-to-end network, OPT-SAR-MS2Net, for land object classification.
- The multi-modal feature fusion module, OPT-SAR-MFF, combines the complementary information carried by optical and SAR remote sensing data. It employs a shallow- and deep-level feature fusion strategy to compensate for information loss during network transmission.
- To address the limited, single-scale receptive field of standard convolutional neural networks, we designed the multi-scale information perception module, OPT-SAR-MIP. It enhances the feature representation of multi-scale land objects in the top-down view of remote sensing images.
- Our method outperforms other SOTA methods on the WHU-OPT-SAR dataset [18], improving mIoU and OA by 2.3% and 2.6%, respectively.
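The multi-scale idea behind OPT-SAR-MIP can be illustrated with a toy example. The module's actual design is given in Section 3.1.3; the sketch below is our own assumption, not the authors' code. It shows, in one dimension, how parallel branches with different dilation rates see the same input at different scales: each branch applies the same 3-tap kernel, but the effective receptive field grows with the dilation rate.

```python
def dilated_conv1d(x, kernel, dilation):
    """'Valid' 1-D convolution with a dilated kernel (pure Python).
    The effective receptive field is (len(kernel) - 1) * dilation + 1."""
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [
        sum(kernel[j] * x[i + j * dilation] for j in range(k))
        for i in range(len(x) - span + 1)
    ]

def multi_scale_features(x, kernel, dilations=(1, 2, 4)):
    """One branch per dilation rate; each branch perceives a different scale."""
    return {d: dilated_conv1d(x, kernel, d) for d in dilations}

# A single activation in the middle of a 9-sample signal:
signal = [0, 0, 0, 0, 1, 0, 0, 0, 0]
branches = multi_scale_features(signal, kernel=[1, 1, 1])
# The dilation-1 branch covers 3 samples per output, dilation-2 covers 5,
# and dilation-4 covers all 9 -- the same kernel, three receptive fields.
```

In a 2-D segmentation network the same principle applies per spatial axis, which is why parallel dilated branches help capture land objects of very different sizes without adding parameters.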
2. Related Work
2.1. Land Object Classification of Single-Source Remote Sensing Imagery
2.2. Land Object Classification of Multi-Source Remote Sensing Imagery
3. Method
3.1. OPT-SAR-MS2Net Encoder Part
3.1.1. Dual Branch Siamese Multi-Source Feature Extraction Architecture
3.1.2. Multi-Source Feature Fusion Module OPT-SAR-MFF
3.1.3. Multi-Scale Information Perception Module OPT-SAR-MIP
3.2. OPT-SAR-MS2Net Decoder Part
3.3. Loss Function
4. Experiment and Analysis
4.1. Experimental Settings
4.1.1. Description of the Dataset
4.1.2. Experimental Configuration and Evaluation Criteria
4.2. Experimentation on the Dataset
4.3. Ablation Experiments
4.3.1. Effectiveness of Each Source of Data
4.3.2. Effectiveness of Each Module
4.3.3. Effectiveness of Loss with Penalty Weight
4.3.4. Robustness of the Model
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Abdi, A.M. Land cover and land use classification performance of machine learning algorithms in a boreal landscape using Sentinel-2 data. GIScience Remote Sens. 2020, 57, 1–20. [Google Scholar] [CrossRef]
- Bai, Y.; Sun, G.; Li, Y.; Ma, P.; Li, G.; Zhang, Y. Comprehensively analyzing optical and polarimetric SAR features for land-use/land-cover classification and urban vegetation extraction in highly-dense urban area. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102496. [Google Scholar] [CrossRef]
- Girma, R.; Fürst, C.; Moges, A. Land use land cover change modeling by integrating artificial neural network with cellular Automata-Markov chain model in Gidabo river basin, main Ethiopian rift. Environ. Chall. 2022, 6, 100419. [Google Scholar] [CrossRef]
- Liu, J.; Gong, M.; Qin, K.; Zhang, P. A deep convolutional coupling network for change detection based on heterogeneous optical and radar images. IEEE Trans. Neural Netw. Learn. Syst. 2016, 29, 545–559. [Google Scholar] [CrossRef]
- Audebert, N.; Le Saux, B.; Lefèvre, S. Beyond RGB: Very high resolution urban remote sensing with multimodal deep networks. ISPRS J. Photogramm. Remote Sens. 2018, 140, 20–32. [Google Scholar] [CrossRef]
- Schmitt, M.; Zhu, X.X. Data fusion and remote sensing: An ever-growing relationship. IEEE Geosci. Remote Sens. Mag. 2016, 4, 6–23. [Google Scholar] [CrossRef]
- Mou, L.; Zhu, X.; Vakalopoulou, M.; Karantzalos, K.; Paragios, N.; Le Saux, B.; Moser, G.; Tuia, D. Multitemporal very high resolution from space: Outcome of the 2016 IEEE GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 10, 3435–3447. [Google Scholar] [CrossRef]
- Yuan, H.; Van Der Wiele, C.F.; Khorram, S. An automated artificial neural network system for land use/land cover classification from Landsat TM imagery. Remote Sens. 2009, 1, 243–265. [Google Scholar] [CrossRef]
- Zhang, C.; Sargent, I.; Pan, X.; Li, H.; Gardiner, A.; Hare, J.; Atkinson, P.M. Joint Deep Learning for land cover and land use classification. Remote Sens. Environ. 2019, 221, 173–187. [Google Scholar] [CrossRef]
- Zhao, W.; Du, S. Learning multiscale and deep representations for classifying remotely sensed imagery. ISPRS J. Photogramm. Remote Sens. 2016, 113, 155–165. [Google Scholar] [CrossRef]
- Chen, X.; Lin, K.-Y.; Wang, J.; Wu, W.; Qian, C.; Li, H.; Zeng, G. Bi-directional cross-modality feature propagation with separation-and-aggregation gate for RGB-D semantic segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 561–577. [Google Scholar]
- Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.-S.; Zhang, L.; Xu, F.; Fraundorfer, F. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geosci. Remote Sens. Mag. 2017, 5, 8–36. [Google Scholar] [CrossRef]
- Mou, L.; Schmitt, M.; Wang, Y.; Zhu, X.X. Identifying corresponding patches in SAR and optical imagery with a convolutional neural network. In Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA, 23–28 July 2017; pp. 5482–5485. [Google Scholar]
- Li, X.; Zhang, G.; Cui, H.; Hou, S.; Wang, S.; Li, X.; Chen, Y.; Li, Z.; Zhang, L. MCANet: A joint semantic segmentation framework of optical and SAR images for land use classification. Int. J. Appl. Earth Obs. Geoinf. 2022, 106, 102638. [Google Scholar] [CrossRef]
- Li, X.; Zhang, G.; Cui, H.; Hou, S.; Chen, Y.; Li, Z.; Li, H.; Wang, H. Progressive fusion learning: A multimodal joint segmentation framework for building extraction from optical and SAR images. ISPRS J. Photogramm. Remote Sens. 2023, 195, 178–191. [Google Scholar] [CrossRef]
- Jensen, J.R.; Qiu, F.; Patterson, K. A neural network image interpretation system to extract rural and urban land use and land cover information from remote sensor data. Geocarto Int. 2001, 16, 21–30. [Google Scholar] [CrossRef]
- Li, X.; Lei, L.; Sun, Y.; Li, M.; Kuang, G. Collaborative attention-based heterogeneous gated fusion network for land cover classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 3829–3845. [Google Scholar] [CrossRef]
- Li, X.; Lei, L.; Sun, Y.; Li, M.; Kuang, G. Multimodal bilinear fusion network with second-order attention-based channel selection for land cover classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1011–1026. [Google Scholar] [CrossRef]
- Solberg, A.; Taxt, T.; Jain, A. A Markov random field model for classification of multisource satellite imagery. IEEE Trans. Geosci. Remote Sens. 1996, 34, 100–113. [Google Scholar] [CrossRef]
- Pacifici, F.; Del Frate, F.; Emery, W.J.; Gamba, P.; Chanussot, J. Urban mapping using coarse SAR and optical data: Outcome of the 2007 GRSS data fusion contest. IEEE Geosci. Remote Sens. Lett. 2008, 5, 331–335. [Google Scholar] [CrossRef]
- Talukdar, S.; Singha, P.; Mahato, S.; Shahfahad; Pal, S.; Liou, Y.-A.; Rahman, A. Land-use land-cover classification by machine learning classifiers for satellite observations—A review. Remote Sens. 2020, 12, 1135. [Google Scholar] [CrossRef]
- Casals-Carrasco, P.; Kubo, S.; Madhavan, B.B. Application of spectral mixture analysis for terrain evaluation studies. Int. J. Remote Sens. 2000, 21, 3039–3055. [Google Scholar] [CrossRef]
- Pu, R.; Landry, S. A comparative analysis of high spatial resolution IKONOS and WorldView-2 imagery for mapping urban tree species. Remote Sens. Environ. 2012, 124, 516–533. [Google Scholar] [CrossRef]
- Tong, X.-Y.; Xia, G.-S.; Lu, Q.; Shen, H.; Li, S.; You, S.; Zhang, L. Land-cover classification with high-resolution remote sensing images using transferable deep models. Remote Sens. Environ. 2020, 237, 111322. [Google Scholar] [CrossRef]
- Dickenson, M.; Gueguen, L. Rotated rectangles for symbolized building footprint extraction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 225–228. [Google Scholar]
- Kuo, T.-S.; Tseng, K.-S.; Yan, J.-W.; Liu, Y.-C.; Frank Wang, Y.-C. Deep aggregation net for land cover classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 252–256. [Google Scholar]
- Aich, S.; van der Kamp, W.; Stavness, I. Semantic binary segmentation using convolutional networks without decoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–23 June 2018; pp. 197–201. [Google Scholar]
- Dong, S.; Zhuang, Y.; Yang, Z.; Pang, L.; Chen, H.; Long, T. Land cover classification from VHR optical remote sensing images by feature ensemble deep learning network. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1396–1400. [Google Scholar] [CrossRef]
- Liu, Y.; Fan, B.; Wang, L.; Bai, J.; Xiang, S.; Pan, C. Semantic labeling in very high resolution images via a self-cascaded convolutional neural network. ISPRS J. Photogramm. Remote Sens. 2018, 145, 78–95. [Google Scholar] [CrossRef]
- Sellami, A.; Tabbone, S. Deep neural networks-based relevant latent representation learning for hyperspectral image classification. Pattern Recognit. 2022, 121, 108224. [Google Scholar] [CrossRef]
- Kang, W.; Xiang, Y.; Wang, F.; Wan, L.; You, H. Flood detection in Gaofen-3 SAR images via fully convolutional networks. Sensors 2018, 18, 2915. [Google Scholar] [CrossRef]
- Ding, L.; Zheng, K.; Lin, D.; Chen, Y.; Liu, B.; Li, J.; Bruzzone, L. MP-ResNet: Multipath residual network for the semantic segmentation of high-resolution PolSAR images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 1–5. [Google Scholar] [CrossRef]
- Reichstein, M.; Camps-Valls, G.; Stevens, B.; Jung, M.; Denzler, J.; Carvalhais, N.; Prabhat. Deep learning and process understanding for data-driven Earth system science. Nature 2019, 566, 195–204. [Google Scholar] [CrossRef]
- Paisitkriangkrai, S.; Sherrah, J.; Janney, P.; van den Hengel, A. Effective semantic pixel labelling with convolutional networks and conditional random fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Boston, MA, USA, 7–12 June 2015; pp. 36–43. [Google Scholar]
- Paisitkriangkrai, S.; Sherrah, J.; Janney, P.; Van Den Hengel, A. Semantic labeling of aerial and satellite imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 2868–2881. [Google Scholar] [CrossRef]
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, 5–9 October 2015; Proceedings, Part III 18. Springer International Publishing: Munich, Germany, 2015; pp. 234–241. [Google Scholar]
- Chen, L.-C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848. [Google Scholar] [CrossRef]
- Yang, X.; Li, S.; Chen, Z.; Chanussot, J.; Jia, X.; Zhang, B.; Li, B.; Chen, P. An attention-fused network for semantic segmentation of very-high-resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 177, 238–262. [Google Scholar] [CrossRef]
- Zhang, X.; Han, L.; Han, L.; Zhu, L. How well do deep learning-based methods for land cover classification and object detection perform on high resolution remote sensing imagery. Remote Sens. 2020, 12, 417. [Google Scholar] [CrossRef]
- Gao, T.; Chen, H.; Chen, W. Adaptive heterogeneous support tensor machine: An extended STM for object recognition using an arbitrary combination of multisource heterogeneous remote sensing data. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–22. [Google Scholar] [CrossRef]
- Zheng, X.; Huan, L.; Xia, G.-S.; Gong, J. Parsing very high resolution urban scene images by learning deep ConvNets with edge-aware loss. ISPRS J. Photogramm. Remote Sens. 2020, 170, 15–28. [Google Scholar] [CrossRef]
- Hu, F.; Xia, G.-S.; Hu, J.; Zhang, L. Transferring deep convolutional neural networks for the scene classification of high-resolution remote sensing imagery. Remote Sens. 2015, 7, 14680–14707. [Google Scholar] [CrossRef]
- Hong, D.; Chanussot, J.; Yokoya, N.; Kang, J.; Zhu, X.X. Learning-shared cross-modality representation using multispectral-LiDAR and hyperspectral data. IEEE Geosci. Remote Sens. Lett. 2020, 17, 1470–1474. [Google Scholar] [CrossRef]
- Gao, T.; Chen, H. Multicycle disassembly-based decomposition algorithm to train multiclass support vector machines. Pattern Recognit. 2023, 140, 109479. [Google Scholar] [CrossRef]
- Jiang, L.; Liao, M.; Lin, H.; Yang, L. Synergistic use of optical and InSAR data for urban impervious surface mapping: A case study in Hong Kong. Int. J. Remote Sens. 2009, 30, 2781–2796. [Google Scholar] [CrossRef]
- Zhang, Y.; Zhang, H.; Lin, H. Improving the impervious surface estimation with combined use of optical and SAR remote sensing images. Remote Sens. Environ. 2014, 141, 155–167. [Google Scholar] [CrossRef]
- Gunatilaka, A.H.; Baertlein, B.A. Feature-level and decision-level fusion of noncoincidently sampled sensors for land mine detection. IEEE Trans. Pattern Anal. Mach. Intell. 2001, 23, 577–589. [Google Scholar] [CrossRef]
- Liao, W.; Bellens, R.; Pizurica, A.; Gautama, S.; Philips, W. Combining feature fusion and decision fusion for classification of hyperspectral and LiDAR data. In Proceedings of the 2014 IEEE Geoscience and Remote Sensing Symposium, Quebec City, QC, Canada, 13–18 July 2014; pp. 1241–1244. [Google Scholar] [CrossRef]
- Huang, X.; Zhang, L. An SVM ensemble approach combining spectral, structural, and semantic features for the classification of high-resolution remotely sensed imagery. IEEE Trans. Geosci. Remote Sens. 2012, 51, 257–272. [Google Scholar] [CrossRef]
- Li, W.; Du, Q. Gabor-filtering-based nearest regularized subspace for hyperspectral image classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 1012–1022. [Google Scholar] [CrossRef]
- Kang, W.; Xiang, Y.; Wang, F.; You, H. CFNet: A cross fusion network for joint land cover classification using optical and SAR images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 1562–1574. [Google Scholar] [CrossRef]
- Yurtkulu, S.C.; Şahin, Y.H.; Unal, G. Semantic segmentation with extended DeepLabv3 architecture. In Proceedings of the 2019 27th Signal Processing and Communications Applications Conference (SIU), Sivas, Turkey, 24–26 April 2019; pp. 1–4. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar] [CrossRef]
- Olofsson, P.; Foody, G.M.; Herold, M.; Stehman, S.V.; Woodcock, C.E.; Wulder, M.A. Good practices for estimating area and assessing accuracy of land change. Remote Sens. Environ. 2014, 148, 42–57. [Google Scholar] [CrossRef]
- Badrinarayanan, V.; Kendall, A.; Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495. [Google Scholar] [CrossRef] [PubMed]
- Baheti, B.; Innani, S.; Gajre, S.; Talbar, S. Semantic scene segmentation in unstructured environment with modified DeepLabV3+. Pattern Recognit. Lett. 2020, 138, 223–229. [Google Scholar] [CrossRef]
- Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803. [Google Scholar]
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.-W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Virtual, 4–9 May 2020; pp. 1055–1059. [Google Scholar]
- Zhang, X.; Yang, P.; Wang, Y.; Shen, W.; Yang, J.; Ye, K.; Zhou, M.; Sun, H. LBF-based CS Algorithm for Multireceiver SAS. IEEE Geosci. Remote Sens. Lett. 2024, 21, 1502505. [Google Scholar] [CrossRef]
- Yang, P. An imaging algorithm for high-resolution imaging sonar system. Multimed. Tools Appl. 2023, 83, 31957–31973. [Google Scholar] [CrossRef]
- Grządziel, A. The Impact of Side-Scan Sonar Resolution and Acoustic Shadow Phenomenon on the Quality of Sonar Imagery and Data Interpretation Capabilities. Remote Sens. 2023, 15, 5599. [Google Scholar] [CrossRef]
Category | Pixel Count per Category | Penalty Weight | Proportion of Pixels (%) |
---|---|---|---|
farmlands | 664,037,272 | 0.015 | 34.5 |
forests | 725,600,699 | 0.014 | 37.7 |
cities | 90,315,847 | 0.113 | 4.6 |
villages | 112,245,660 | 0.090 | 5.8 |
waters | 273,311,896 | 0.037 | 14.2 |
roads | 18,457,677 | 0.551 | 1.0 |
others | 32,882,578 | 0.310 | 1.7 |
background | 9,906,771 | 1.0 | 0.5 |
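This excerpt does not spell out how the penalty weights were derived, but the pattern in the table is roughly inverse class frequency, scaled so that the rarest class (background) gets weight 1.0. The sketch below is our assumption of such a scheme, not the authors' exact formula; it approximately, though not exactly, reproduces the published weights.

```python
# Proportion of pixels per category (%), copied from the table above.
pixel_share = {
    "farmlands": 34.5, "forests": 37.7, "cities": 4.6, "villages": 5.8,
    "waters": 14.2, "roads": 1.0, "others": 1.7, "background": 0.5,
}

def inverse_frequency_weights(share):
    """w_c = p_min / p_c: the rarest class gets weight 1.0, and frequent
    classes are down-weighted in proportion to their pixel share."""
    p_min = min(share.values())
    return {c: round(p_min / p, 3) for c, p in share.items()}

weights = inverse_frequency_weights(pixel_share)
# e.g. farmlands comes out near 0.014 and roads near 0.5 -- close to, but
# not exactly, the published 0.015 and 0.551, so the authors' exact
# normalization may differ slightly.
```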
Dataset Split | Training Set | Validation Set | Test Set |
---|---|---|---|
Number of images | 17,660 | 5870 | 5870 |
Methods | OA | mIoU | Kappa | Farmlands | Cities | Villages | Waters | Forests | Roads | Others |
---|---|---|---|---|---|---|---|---|---|---|
SegNet | 0.757 | 0.374 | 0.669 | 0.765 | 0.428 | 0.451 | 0.684 | 0.969 | 0.428 | 0.140 |
DeeplabV3+ | 0.809 | 0.412 | 0.726 | 0.795 | 0.658 | 0.393 | 0.752 | 0.942 | 0.790 | 0.127 |
Non-local | 0.724 | 0.305 | 0.601 | 0.714 | 0.401 | 0.399 | 0.576 | 0.805 | 0.366 | 0.108 |
SA-Gate | 0.733 | 0.312 | 0.611 | 0.722 | 0.410 | 0.421 | 0.616 | 0.878 | 0.395 | 0.137 |
U-Net 3+ | 0.785 | 0.385 | 0.683 | 0.695 | 0.567 | 0.374 | 0.674 | 0.804 | 0.519 | 0.220 |
MCANet | 0.817 | 0.429 | 0.735 | 0.797 | 0.588 | 0.497 | 0.786 | 0.958 | 0.352 | 0.272 |
Ours | 0.843 | 0.452 | 0.720 | 0.723 | 0.537 | 0.759 | 0.796 | 0.922 | 0.868 | 0.285 |
OPT-SAR-MS2Net | OA | mIoU | Kappa | Farmlands | Cities | Villages | Waters | Forests | Roads | Others |
---|---|---|---|---|---|---|---|---|---|---|
Optical input | 0.768 | 0.312 | 0.593 | 0.648 | 0.488 | 0.598 | 0.535 | 0.926 | 0.742 | 0.107 |
SAR input | 0.774 | 0.320 | 0.599 | 0.709 | 0.440 | 0.674 | 0.697 | 0.901 | 0.793 | 0.277 |
Optical+SAR | 0.843 | 0.452 | 0.720 | 0.723 | 0.537 | 0.759 | 0.796 | 0.922 | 0.868 | 0.285 |
ResNet50 | ResNet101 | OPT-SAR-MFF | OPT-SAR-MIP | OA | mIoU | Kappa |
---|---|---|---|---|---|---|
√ | √ | 0.793 | 0.355 | 0.649 | ||
√ | √ | 0.816 | 0.394 | 0.676 | ||
√ | √ | 0.825 | 0.409 | 0.693 | ||
√ | √ | 0.827 | 0.416 | 0.701 | ||
√ | √ | √ | 0.835 | 0.400 | 0.702 | |
√ | √ | √ | 0.836 | 0.424 | 0.709 |
Penalty Weight | OA | mIoU | Kappa |
---|---|---|---|
With | 0.843 | 0.452 | 0.720 |
Without | 0.818 | 0.300 | 0.661 |
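For context on the ablation above: the paper's actual loss function is defined in Section 3.3 and is not reproduced in this excerpt, but class-weighted cross-entropy is the standard way such penalty weights enter a segmentation loss. A minimal sketch, where the names and the weight normalization are our assumptions:

```python
import math

def weighted_cross_entropy(probs, targets, class_weights):
    """Mean cross-entropy over pixels, with each pixel's term scaled by the
    penalty weight of its ground-truth class, so that rare classes (e.g.
    roads, background) contribute more per pixel than frequent ones."""
    total = sum(-class_weights[t] * math.log(p[t])
                for p, t in zip(probs, targets))
    norm = sum(class_weights[t] for t in targets)
    return total / norm

# Two pixels, two classes; class 1 plays the rare, heavily weighted class.
loss = weighted_cross_entropy(
    probs=[[0.9, 0.1], [0.2, 0.8]],  # softmax outputs per pixel
    targets=[0, 1],                  # ground-truth class indices
    class_weights=[0.1, 1.0],        # penalty weights per class
)
```

Without the weights (all equal to 1.0), the frequent classes dominate the average, which is consistent with the mIoU drop from 0.452 to 0.300 reported in the table above.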
Gaussian Noise | OA | mIoU | Kappa |
---|---|---|---|
Without | 0.8440 | 0.4542 | 0.7213 |
With | 0.8439 | 0.4542 | 0.7212 |
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Hu, W.; Wang, X.; Zhan, F.; Cao, L.; Liu, Y.; Yang, W.; Ji, M.; Meng, L.; Guo, P.; Yang, Z.; et al. OPT-SAR-MS2Net: A Multi-Source Multi-Scale Siamese Network for Land Object Classification Using Remote Sensing Images. Remote Sens. 2024, 16, 1850. https://doi.org/10.3390/rs16111850