Cross-Domain Landslide Mapping in Remote Sensing Images Based on Unsupervised Domain Adaptation Framework
Highlights
- The proposed LandsDANet demonstrates outstanding performance in cross-domain landslide mapping based on unsupervised domain adaptation.
- Rare Class Sampling, the Wallis filter, and the contrastive loss collectively enhance the model’s cross-domain feature extraction and learning.
- The proposed method enables rapid cross-domain landslide identification, supporting disaster emergency response.
- The proposed method reduces the model’s reliance on pixel-level annotations for training, improving the efficiency of landslide mapping.
Abstract
1. Introduction
- (1) We propose LandsDANet, a novel UDA framework designed for the rapid identification of landslides across diverse domains. The network employs adversarial learning to align data distributions at the output level and supports end-to-end training with a flexible implementation scheme.
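The output-level alignment described above can be illustrated with a minimal numpy sketch (our own illustration, not the authors' implementation; all shapes and function names are assumptions): a domain discriminator scores the segmenter's softmax maps, the segmenter is trained so that target-domain outputs fool the discriminator, and the discriminator is trained to tell the two domains apart.

```python
import numpy as np

def bce(pred, label):
    """Binary cross-entropy between a discriminator score map and a domain label."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    return float(-(label * np.log(pred) + (1 - label) * np.log(1 - pred)).mean())

def adversarial_loss(d_scores_target):
    """Adversarial term for the segmenter: target outputs should fool D (label = 1)."""
    return bce(d_scores_target, 1.0)

def discriminator_loss(d_scores_source, d_scores_target):
    """Discriminator term: source outputs labelled 1, target outputs labelled 0."""
    return bce(d_scores_source, 1.0) + bce(d_scores_target, 0.0)

rng = np.random.default_rng(0)
d_src = rng.uniform(0.6, 0.9, size=(4, 64, 64))   # D leans "source" on source maps
d_tgt = rng.uniform(0.1, 0.4, size=(4, 64, 64))   # D leans "target" on target maps
print(round(adversarial_loss(d_tgt), 3))          # large: segmenter is not fooling D yet
print(round(discriminator_loss(d_src, d_tgt), 3)) # smaller: D separates the domains
```

Minimizing the adversarial term pushes the target score maps toward the "source" label, which is what drives the output distributions of the two domains together.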
- (2) The Wallis filter is introduced for image style transfer, effectively reducing low-level domain discrepancies and achieving image-level alignment. A sampling strategy focused on the rare landslide class is implemented to prevent the model from becoming biased toward more common classes during adaptation.
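The two components can be sketched as follows (a hedged numpy illustration; the global per-band Wallis-style transfer and the softmax-style rare-class sampling probability are our assumptions of common formulations, not the paper's exact equations — a full Wallis filter additionally operates on local windows):

```python
import numpy as np

def wallis_transfer(src, tgt_mean, tgt_std):
    """Rescale each band of a source image so its mean/std match target-domain
    statistics (global variant of the Wallis-style transfer)."""
    out = np.empty_like(src, dtype=np.float64)
    for b in range(src.shape[-1]):
        band = src[..., b].astype(np.float64)
        m, s = band.mean(), band.std() + 1e-8
        out[..., b] = (band - m) * (tgt_std[b] / s) + tgt_mean[b]
    return out

def rcs_probs(freqs, T=0.1):
    """Rare Class Sampling probabilities (assumed formulation, temperature T):
    classes with low pixel frequency f_c are sampled more often."""
    logits = np.exp((1.0 - np.asarray(freqs)) / T)
    return logits / logits.sum()

rng = np.random.default_rng(1)
src = rng.uniform(0, 255, size=(32, 32, 3))
styled = wallis_transfer(src, tgt_mean=[120.0, 110.0, 100.0], tgt_std=[40.0, 35.0, 30.0])
print(styled[..., 0].mean().round(1), styled[..., 0].std().round(1))  # ~120.0 40.0

p = rcs_probs([0.95, 0.05])   # background vs. rare landslide class
print(p[1] > p[0])            # True: the rare class is sampled far more often
```

Matching first- and second-order band statistics narrows the low-level gap between domains, while the sampling weights counteract the extreme class imbalance typical of landslide scenes.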
- (3) A contrastive loss is introduced to reduce the representation distance among features of the same category while maximizing the distance between features of different categories, enhancing the model’s robustness in category discrimination and improving cross-domain segmentation results.
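A supervised contrastive objective of this kind can be sketched in numpy as below (our hedged illustration; the paper's exact formulation may differ in temperature, sampling, and normalization). Each feature is attracted to same-class features and repelled from different-class ones:

```python
import numpy as np

def contrastive_loss(features, labels, tau=0.5):
    """Supervised contrastive loss (sketch): for each anchor, maximize the
    probability mass of its same-class positives among all other features."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / tau                    # temperature-scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)         # exclude self-pairs (exp -> 0)
    exp_sim = np.exp(sim)
    labels = np.asarray(labels)
    loss, n = 0.0, 0
    for i in range(len(f)):
        pos = (labels == labels[i]) & (np.arange(len(f)) != i)
        if not pos.any():
            continue
        loss += -np.log(exp_sim[i, pos].sum() / exp_sim[i].sum())
        n += 1
    return loss / n

labels = [0, 0, 1, 1]
# Well-separated class clusters vs. features that contradict the labels:
clustered = np.array([[1.0, 0.05], [1.0, -0.05], [0.05, 1.0], [-0.05, 1.0]])
mixed = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
print(contrastive_loss(clustered, labels) < contrastive_loss(mixed, labels))  # True
```

The loss is lower when same-class features form tight clusters, which is exactly the geometry that makes category discrimination robust across domains.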
- (4) To evaluate cross-domain adaptability, experiments were performed on the ISPRS benchmark dataset and on landslide datasets created by the authors. The proposed model outperforms leading UDA methods, and comprehensive ablation studies confirm the contribution of each module. The framework substantially improves the accuracy of cross-domain landslide identification while mitigating domain gaps caused by distinct triggering mechanisms and imaging conditions.
2. Related Work
2.1. UDA Semantic Segmentation in the Computer Vision Field
2.2. UDA Semantic Segmentation in the Remote Sensing Field
3. Methodology
3.1. LandsDANet Architecture
3.2. ISegFormer Semantic Segmentation Network and Discriminator Network
3.3. Wallis Filter and Rare Class Sampling
3.4. Loss Functions
4. Datasets
4.1. General Remote Sensing Datasets
4.2. Landslide Inventory
4.2.1. 2024 Hualien Earthquake
4.2.2. 2024 Meizhou Rainfall
4.3. Validation Metric and Network Training
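The segmentation metrics reported in the result tables (per-class IoU and F1 with their means mIoU and mF1, plus precision and recall) can all be derived from a confusion matrix. The following is a minimal numpy sketch (function and key names are ours, not the paper's):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes):
    """Per-class IoU, F1, precision, recall plus mIoU/mF1 from flat label arrays."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true, y_pred), 1)          # cm[i, j]: true class i, predicted j
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp                    # predicted as class c but wrong
    fn = cm.sum(axis=1) - tp                    # class c missed
    iou = tp / np.maximum(tp + fp + fn, 1)
    precision = tp / np.maximum(tp + fp, 1)
    recall = tp / np.maximum(tp + fn, 1)
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    return {"IoU": iou, "F1": f1, "precision": precision, "recall": recall,
            "mIoU": iou.mean(), "mF1": f1.mean()}

y_true = np.array([0, 0, 1, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 0])
m = per_class_metrics(y_true, y_pred, 2)
print(round(m["mIoU"], 3), round(m["mF1"], 3))  # 0.5 0.667
```

Note the tables report these quantities as percentages (0–100), i.e., the values above multiplied by 100.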
Algorithm 1 Training Process of the Proposed Method
Input: source samples, target samples, training epochs E, steps per epoch T
Output: predicted labels of the source and target images, optimized segmentation model parameters, optimized discriminator parameters
for epoch = 1 to E do
    for step = 1 to T do
        Sample an (image, label) pair from the source set and an image from the target set
        Compute the segmentation predictions for both images based on Equation (5)
        Freeze the discriminator
        Compute the segmentation loss on the source pair based on Equation (10)
        Compute the adversarial loss on the target prediction based on Equation (11)
        Update the segmentation network weights
        Unfreeze the discriminator; freeze the segmentation network
        Compute the discriminator losses on the source and target predictions based on Equation (12)
        Update the discriminator weights
        Unfreeze the segmentation network
    end for
end for
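Algorithm 1's alternating update schedule can be mirrored in a runnable control-flow skeleton (classes, loaders, and loss names below are our stand-ins; the actual losses are the paper's Equations (10)–(12)). The point is the freeze/unfreeze ordering: the discriminator is frozen while the segmenter G updates, and vice versa.

```python
import itertools

class Module:
    """Stand-in for a network with a freeze flag (no real gradients here)."""
    def __init__(self, name):
        self.name, self.frozen = name, False
    def freeze(self):   self.frozen = True
    def unfreeze(self): self.frozen = False

def train(G, D, source, target, epochs, steps):
    log = []
    src, tgt = itertools.cycle(source), itertools.cycle(target)
    for _ in range(epochs):
        for _ in range(steps):
            xs, ys = next(src)                # labelled source sample
            xt = next(tgt)                    # unlabelled target image
            D.freeze()                        # only G receives gradients
            log += ["seg_loss(Eq.10)", "adv_loss(Eq.11)", "update_G"]
            D.unfreeze(); G.freeze()          # only D receives gradients
            log += ["disc_loss(Eq.12)", "update_D"]
            G.unfreeze()
    return log

log = train(Module("G"), Module("D"), [("img_s", "lbl_s")], ["img_t"], epochs=1, steps=2)
print(log[:5])
```

Each step thus performs one segmenter update (supervised plus adversarial) followed by one discriminator update, which is the standard alternating scheme for adversarial UDA.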
5. Results and Accuracy Assessment
5.1. Validation on General Domain Shifts
5.1.1. P2V_S Task: Adaptation Under Geographic and Illumination Variations
5.1.2. V2P_S Task: Adaptation Under Spatial Resolution Variation
5.1.3. P2V_D Task: Adaptation Under Channel Composition Variation
5.2. Cross-Domain Landslide Detection Experiments
5.2.1. M2T Task
5.2.2. T2M Task
6. Discussion
6.1. Ablation Study
6.1.1. Ablation of Wallis Filter and RCS Module
6.1.2. Ablation of Contrastive Loss Module
6.2. Computational Complexity Analysis
6.3. Effects of Decoder
6.4. Advantages of LandsDANet for Cross-Domain Landslide Identification
6.5. Limitations and Future Study
7. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
| Area (m²) | Detection Rate (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| <90 | 30.5 | 34.3 | 26.9 | 30.2 |
| 90–900 | 46.6 | 37.6 | 31.0 | 34.0 |
| 900–9000 | 88.1 | 54.9 | 62.6 | 58.5 |
| >9000 | 98.2 | 56.6 | 67.2 | 61.4 |
| Area (m²) | Detection Rate (%) | Precision (%) | Recall (%) | F1 (%) |
|---|---|---|---|---|
| <90 | 19.7 | 36.6 | 18.8 | 24.9 |
| 90–900 | 45.0 | 49.8 | 32.4 | 39.2 |
| 900–9000 | 86.6 | 65.7 | 62.6 | 64.1 |
| >9000 | 96.0 | 65.9 | 63.1 | 64.5 |
References
- Keefer, D.K. Landslides caused by earthquakes. Geol. Soc. Am. Bull. 1984, 95, 406–421. [Google Scholar] [CrossRef]
- Fan, X.; Liu, B.; Luo, J.; Pan, S.; Han, S.; Zhou, Z. Comparison of earthquake-induced shallow landslide susceptibility assessment based on two-category LR and KDE-MLR. Sci. Rep. 2023, 13, 833. [Google Scholar] [CrossRef]
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929. [Google Scholar]
- Yang, J.; Ding, M.; Huang, W.; Li, Z.; Zhang, Z.; Wu, J.; Peng, J. A Generalized Deep Learning-Based Method for Rapid Co-Seismic Landslide Mapping. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2024, 17, 16970–16983. [Google Scholar] [CrossRef]
- Xie, E.; Wang, W.; Yu, Z.; Anandkumar, A.; Alvarez, J.M.; Luo, P. SegFormer: Simple and efficient design for semantic segmentation with transformers. In Proceedings of the Advances in Neural Information Processing Systems, Virtual, 6–14 December 2021; pp. 12077–12090. [Google Scholar]
- Gao, M.; Chen, F.; Wang, L.; Zhao, H.; Yu, B. Swin Transformer-Based Multiscale Attention Model for Landslide Extraction From Large-Scale Area. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4415314. [Google Scholar] [CrossRef]
- Dong, A.; Dou, J.; Li, C.; Chen, Z.; Ji, J.; Xing, K.; Zhang, J.; Daud, H. Accelerating Cross-Scene Co-Seismic Landslide Detection Through Progressive Transfer Learning and Lightweight Deep Learning Strategies. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4410213. [Google Scholar] [CrossRef]
- Lv, P.; Ma, L.; Li, Q.; Du, F. ShapeFormer: A Shape-Enhanced Vision Transformer Model for Optical Remote Sensing Image Landslide Detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 2681–2689. [Google Scholar] [CrossRef]
- Wu, L.; Liu, R.; Ju, N.; Zhang, A.; Gou, J.; He, G.; Lei, Y. Landslide mapping based on a hybrid CNN-transformer network and deep transfer learning using remote sensing images with topographic and spectral features. Int. J. Appl. Earth Obs. Geoinf. 2024, 126, 103612. [Google Scholar] [CrossRef]
- Xu, Q.; Ouyang, C.; Jiang, T.; Yuan, X.; Fan, X.; Cheng, D. MFFENet and ADANet: A robust deep transfer learning method and its application in high precision and fast cross-scene recognition of earthquake induced landslides. Landslides 2022, 19, 1617–1647. [Google Scholar] [CrossRef]
- Fang, C.; Fan, X.; Wang, X.; Nava, L.; Zhong, H.; Dong, X.; Qi, J.; Catani, F. A globally distributed dataset of coseismic landslide mapping via multi-source high-resolution remote sensing images. Earth Syst. Sci. Data 2024, 16, 4817–4842. [Google Scholar] [CrossRef]
- Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8–13 December 2014; pp. 2672–2680. [Google Scholar]
- Rottensteiner, F.; Sohn, G.; Jung, J.; Gerke, M.; Breitkopf, U. The ISPRS benchmark on urban object classification and 3D building reconstruction. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 293–298. [Google Scholar] [CrossRef]
- Liu, X.; Xing, F.; You, J.; Lu, J.; Kuo, C.; Fakhri, G.E.; Woo, J. Subtype-Aware Dynamic Unsupervised Domain Adaptation. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 2820–2834. [Google Scholar] [CrossRef] [PubMed]
- Ouyang, L.; Key, A. Maximum Mean Discrepancy for Generalization in the Presence of Distribution and Missingness Shift. arXiv 2021, arXiv:2111.10344. [Google Scholar]
- Hoffman, J.; Tzeng, E.; Park, T.; Zhu, J.-Y.; Isola, P.; Saenko, K.; Efros, A.A.; Darrell, T. CyCADA: Cycle-consistent adversarial domain adaptation. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; pp. 1989–1998. [Google Scholar]
- Zhu, J.Y.; Park, T.; Isola, P.; Efros, A. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 2242–2251. [Google Scholar]
- Tasar, O.; Happy, S.L.; Tarabalka, Y.; Alliez, P. ColorMapGAN: Unsupervised Domain Adaptation for Semantic Segmentation Using Color Mapping Generative Adversarial Networks. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7178–7193. [Google Scholar] [CrossRef]
- Chen, Y.-H.; Chen, W.-Y.; Chen, Y.-T.; Tsai, B.-C.; Wang, Y.-C.-F.; Sun, M. No more discrimination: Cross city adaptation of road scene segmenters. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 1992–2001. [Google Scholar]
- Hoffman, J.; Wang, D.; Yu, F.; Darrell, T. FCNs in the wild: Pixel-level adversarial and constraint-based adaptation. arXiv 2016, arXiv:1612.02649. [Google Scholar]
- Tsai, Y.-H.; Hung, W.-C.; Schulter, S.; Sohn, K.; Yang, M.-H.; Chandraker, M. Learning to adapt structured output space for semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7472–7481. [Google Scholar]
- Vu, T.-H.; Jain, H.; Bucher, M.; Cord, M.; Pérez, P. ADVENT: Adversarial entropy minimization for domain adaptation in semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2517–2526. [Google Scholar]
- Luo, Y.; Zheng, L.; Guan, T.; Yu, J.; Yang, Y. Taking a Closer Look at Domain Shift: Category-Level Adversaries for Semantics Consistent Domain Adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 2502–2511. [Google Scholar]
- Wang, H.; Shen, T.; Zhang, W.; Duan, L.; Mei, T. Classes Matter: A Fine-Grained Adversarial Approach to Cross-Domain Semantic Segmentation. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 642–659. [Google Scholar]
- French, G.; Mackiewicz, M.; Fisher, M. Self-ensembling for visual domain adaptation. arXiv 2017, arXiv:1706.05208. [Google Scholar]
- Zheng, Z.; Yang, Y. Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation. Int. J. Comput. Vis. 2020, 129, 1106–1120. [Google Scholar] [CrossRef]
- Zhang, P.; Zhang, B.; Zhang, T.; Chen, D.; Wang, Y.; Wen, F. Prototypical Pseudo Label Denoising and Target Structure Learning for Domain Adaptive Semantic Segmentation. arXiv 2021, arXiv:2101.10979. [Google Scholar] [CrossRef]
- Zhu, J.; Guo, Y.; Sun, G.; Yang, L.; Deng, M.; Chen, J. Unsupervised domain adaptation semantic segmentation of high-resolution remote sensing imagery with invariant domain-level prototype memory. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5603518. [Google Scholar] [CrossRef]
- Zhang, L.; Lan, M.; Zhang, J.; Tao, D. Stagewise Unsupervised Domain Adaptation With Adversarial Self-Training for Road Segmentation of Remote-Sensing Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5609413. [Google Scholar] [CrossRef]
- Peng, D.; Guan, H.; Zang, Y.; Bruzzone, L. Full-level domain adaptation for building extraction in very-high-resolution optical remote-sensing images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5607317. [Google Scholar] [CrossRef]
- Chen, J.; Zhu, J.; He, P.; Guo, Y.; Hong, L.; Yang, Y. Unsupervised Domain Adaptation for Building Extraction of High-Resolution Remote Sensing Imagery Based on Decoupling Style and Semantic Features. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4406917. [Google Scholar] [CrossRef]
- Li, P.; Wang, Y.; Si, T.; Ullah, K.; Han, W.; Wang, L. DSFA: Cross-scene domain style and feature adaptation for landslide detection from high spatial resolution images. Int. J. Digit. Earth 2023, 16, 2426–2447. [Google Scholar] [CrossRef]
- Yu, B.; Chen, F.; Chen, W.; Shi, G.; Xu, C.; Wang, N.; Wang, L. Cross-domain landslide mapping by harmonizing heterogeneous remote sensing datasets. GISci. Remote Sens. 2025, 62, 2559457. [Google Scholar] [CrossRef]
- Zhang, X.; Yu, W.; Pun, M.-O.; Shi, W. Cross-domain landslide mapping from large-scale remote sensing images using prototype-guided domain-aware progressive representation learning. ISPRS J. Photogramm. Remote Sens. 2023, 197, 1–17. [Google Scholar] [CrossRef]
- Li, P.; Wang, G.; Liu, G.; Fang, Z.; Ullah, K. Unsupervised Landslide Detection From Multitemporal High-Resolution Images Based on Progressive Label Upgradation and Cross-Temporal Style Adaption. IEEE Trans. Geosci. Remote Sens. 2024, 62, 4410715. [Google Scholar] [CrossRef]
- Wei, R.; Li, Y.; Li, Y.; Zhang, B.; Wang, J.; Wu, C.; Yao, S.; Ye, C. A universal adapter in segmentation models for transferable landslide mapping. ISPRS J. Photogramm. Remote Sens. 2024, 218, 446–465. [Google Scholar] [CrossRef]
- Wang, J.; Zhang, X.; Ma, X.; Yu, W.; Ghamisi, P. Auto-Prompting SAM for Weakly Supervised Landslide Extraction. IEEE Geosci. Remote Sens. Lett. 2025, 22, 6008705. [Google Scholar] [CrossRef]
- Woo, S.; Park, J.; Lee, J.-Y.; Kweon, I.S. CBAM: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19. [Google Scholar]
- Zhao, S.; Wang, Y.; Yang, Z.; Cai, D. Region mutual information loss for semantic segmentation. In Proceedings of the 33rd International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 11117–11127. [Google Scholar]
- Chang, J.M.; Chao, W.A.; Yang, C.M.; Huang, M.W. Coseismic and subsequent landslides of the 2024 Hualien earthquake (M7.2) on April 3 in Taiwan. Landslides 2024, 21, 2591–2595. [Google Scholar] [CrossRef]
- Chen, Y.; Song, C.; Li, Z.; Chen, B.; Yu, C.; Hu, J.-C.; Cai, X.; Zhu, S.; Wang, Q.; Ma, Y.; et al. Preliminary analysis of landslides induced by the 3 April 2024 Mw 7.4 Hualien, Taiwan earthquake. Landslides 2025, 22, 1551–1562. [Google Scholar] [CrossRef]
- Zhang, C.; Jiang, W.; Zhang, Y.; Wang, W.; Zhao, Q.; Wang, C. Transformer and CNN Hybrid Deep Neural Network for Semantic Segmentation of Very-High-Resolution Remote Sensing Imagery. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4408820. [Google Scholar] [CrossRef]
- van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
- Liu, X.; Peng, Y.; Lu, Z.; Li, W.; Yu, J.; Ge, D.; Xiang, W. Feature-Fusion Segmentation Network for Landslide Detection Using High-Resolution Remote Sensing Images and Digital Elevation Model Data. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4500314. [Google Scholar] [CrossRef]
- Radford, A.; Metz, L.; Chintala, S. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. arXiv 2016, arXiv:1511.06434. [Google Scholar] [CrossRef]
| Method | Impervious Surfaces (IoU/F1) | Building (IoU/F1) | Low Vegetation (IoU/F1) | Tree (IoU/F1) | Car (IoU/F1) | Clutter (IoU/F1) | mIoU | mF1 |
|---|---|---|---|---|---|---|---|---|
| Source only | 55.39/71.29 | 66.42/79.83 | 45.10/62.17 | 56.92/72.55 | 27.94/43.68 | 9.53/17.41 | 43.55 | 57.82 |
| AdaptSegNet [21] | 58.70/73.98 | 66.49/79.87 | 39.84/56.98 | 49.01/65.78 | 32.44/48.98 | 17.61/29.94 | 44.01 | 59.26 |
| AdvEnt [22] | 66.94/80.20 | 69.08/81.72 | 44.39/61.49 | 49.65/66.36 | 31.09/47.44 | 12.82/22.73 | 45.66 | 59.99 |
| ADANet [10] | 67.23/80.40 | 65.77/79.35 | 42.15/59.31 | 47.22/64.15 | 35.80/52.73 | 23.61/38.20 | 46.96 | 62.36 |
| MemoryAdaptNet [28] | 64.95/78.75 | 67.17/80.36 | 46.71/63.68 | 46.18/63.19 | 37.54/54.59 | 21.52/35.42 | 47.35 | 62.67 |
| LandsDANet (ours) | 75.36/85.95 | 80.26/89.05 | 55.43/71.32 | 56.51/72.21 | 49.94/66.62 | 23.90/38.57 | 56.90 | 70.62 |
| Method | Impervious Surfaces (IoU/F1) | Building (IoU/F1) | Low Vegetation (IoU/F1) | Tree (IoU/F1) | Car (IoU/F1) | Clutter (IoU/F1) | mIoU | mF1 |
|---|---|---|---|---|---|---|---|---|
| Source only | 64.93/78.73 | 60.83/75.64 | 44.08/61.18 | 20.41/33.91 | 67.53/80.62 | 4.23/8.12 | 43.67 | 56.37 |
| AdaptSegNet [21] | 51.77/68.22 | 53.92/70.07 | 42.02/59.18 | 32.38/48.92 | 42.34/59.48 | 7.67/14.24 | 38.35 | 53.35 |
| AdvEnt [22] | 58.41/73.74 | 58.11/73.51 | 39.18/56.31 | 31.66/48.09 | 47.34/64.26 | 5.45/10.34 | 40.03 | 54.37 |
| ADANet [10] | 61.68/76.30 | 57.53/73.04 | 32.47/49.02 | 35.30/52.18 | 58.80/74.06 | 15.08/26.20 | 43.48 | 58.47 |
| MemoryAdaptNet [28] | 60.11/75.08 | 59.64/74.72 | 29.14/45.14 | 43.44/60.57 | 49.71/66.41 | 8.63/15.89 | 41.78 | 56.30 |
| LandsDANet (ours) | 66.77/80.07 | 76.71/86.82 | 49.88/66.56 | 49.25/65.99 | 70.83/82.93 | 9.76/17.78 | 53.87 | 66.69 |
| Method | Impervious Surfaces (IoU/F1) | Building (IoU/F1) | Low Vegetation (IoU/F1) | Tree (IoU/F1) | Car (IoU/F1) | Clutter (IoU/F1) | mIoU | mF1 |
|---|---|---|---|---|---|---|---|---|
| Source only | 57.62/73.11 | 60.91/75.71 | 11.11/19.99 | 15.49/26.82 | 27.34/42.94 | 2.81/5.46 | 29.21 | 40.67 |
| AdaptSegNet [21] | 60.04/75.03 | 68.16/81.07 | 32.29/48.82 | 32.98/49.60 | 42.34/59.48 | 14.17/24.83 | 43.05 | 57.76 |
| AdvEnt [22] | 63.92/77.99 | 74.06/85.10 | 35.42/52.32 | 42.02/59.17 | 47.34/64.26 | 14.06/24.65 | 45.94 | 60.40 |
| ADANet [10] | 62.33/76.79 | 67.53/80.62 | 34.48/51.28 | 40.64/57.79 | 58.80/74.06 | 25.40/40.51 | 45.25 | 60.88 |
| MemoryAdaptNet [28] | 60.40/75.31 | 63.53/77.70 | 28.21/44.01 | 39.66/56.80 | 49.71/66.41 | 14.50/25.32 | 40.66 | 55.64 |
| LandsDANet (ours) | 60.89/75.70 | 82.12/90.18 | 27.61/43.27 | 45.68/62.71 | 70.83/82.93 | 24.63/39.53 | 48.50 | 63.02 |
| Method | Landslide IoU | Landslide F1 | mIoU | mF1 | Recall | Precision |
|---|---|---|---|---|---|---|
| Source only | 17.12 | 29.23 | 56.35 | 63.50 | 19.43 | 59.01 |
| AdaptSegNet [21] | 22.24 | 36.38 | 59.02 | 67.12 | 25.77 | 61.86 |
| AdvEnt [22] | 25.98 | 41.24 | 60.91 | 69.56 | 31.13 | 61.08 |
| ADANet [10] | 37.49 | 54.53 | 66.41 | 76.07 | 59.25 | 50.51 |
| MemoryAdaptNet [28] | 43.44 | 60.57 | 69.93 | 79.38 | 58.19 | 63.16 |
| LandsDANet (ours) | 52.85 | 69.15 | 74.95 | 83.83 | 69.85 | 68.47 |
| Method | Landslide IoU | Landslide F1 | mIoU | mF1 | Recall | Precision |
|---|---|---|---|---|---|---|
| Source only | 18.71 | 31.54 | 57.72 | 64.94 | 20.23 | 71.39 |
| AdaptSegNet [21] | 26.56 | 41.97 | 61.48 | 70.07 | 34.84 | 52.77 |
| AdvEnt [22] | 30.53 | 46.78 | 63.45 | 72.47 | 42.50 | 52.01 |
| ADANet [10] | 41.95 | 59.11 | 69.22 | 78.66 | 67.12 | 52.81 |
| MemoryAdaptNet [28] | 38.22 | 55.30 | 67.46 | 76.81 | 54.15 | 56.51 |
| LandsDANet (ours) | 45.75 | 62.77 | 71.52 | 80.70 | 60.40 | 65.34 |
| Method | P2V_S mF1 | P2V_S mIoU | T2M mF1 | T2M mIoU |
|---|---|---|---|---|
| No adaptation | 57.82 | 43.55 | 64.94 | 57.72 |
| Baseline | 62.27 | 48.02 | 70.6 | 61.6 |
| +RCS and Wallis filter | 66.52 | 52.45 | 78.14 | 68.94 |
| +Contrastive Loss | 69.59 | 55.46 | 78.29 | 68.69 |
| +RCS + Wallis filter + Contrastive Loss | 70.62 | 56.90 | 80.70 | 71.52 |
| Method | FLOPs (G) | Params (M) | Inference Time (ms) | mF1 on P2V_S (%) | mF1 on T2M (%) |
|---|---|---|---|---|---|
| AdaptSegNet [21] | 184.72 | 42.83 | 42.00 | 59.26 | 70.07 |
| AdvEnt [22] | 186.12 | 43.16 | 48.61 | 59.99 | 72.47 |
| ADANet [10] | 485.68 | 71.36 | 61.23 | 62.36 | 78.66 |
| MemoryAdaptNet [28] | 143.51 | 134.01 | 39.60 | 62.67 | 76.81 |
| LandsDANet | 105.42 | 85.04 | 61.65 | 70.62 | 80.70 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Yang, J.; Ding, M.; Huang, W.; Xue, Q.; Dong, Y.; Chen, B.; Peng, L.; Zhang, F.; Li, Z. Cross-Domain Landslide Mapping in Remote Sensing Images Based on Unsupervised Domain Adaptation Framework. Remote Sens. 2026, 18, 286. https://doi.org/10.3390/rs18020286