A Multi-Task Consistency Enhancement Network for Semantic Change Detection in HR Remote Sensing Images and Application of Non-Agriculturalization
Abstract
1. Introduction
- (1) We propose a novel network for SCD, developed with an encoder-fusion-decoder architecture based on multi-task learning. In the encoder, we design a stacked Siamese backbone that combines CNN and Transformer, which reduces missed detections while ensuring precise segmentation of fine details;
- (2) To mitigate false detections in non-changing regions and enhance the integrity of the change-feature representation in the results, we introduce a multi-task consistency enhancement module (MCEM) and adopt a cross-task mapping connection. Moreover, we design a novel joint loss to better alleviate the negative impact of class imbalance;
- (3)
- We propose a novel pixel-level SCD dataset about cropland’s non-agricultural detection, the non-agriculturalization of Fuzhou (NAFZ) dataset, which is constructed based on historical images from three distinct areas in Fuzhou. For the NAFZ dataset, we explore the suitability and performance of the proposed network for identifying non-agricultural types in HR images.
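The "Siamese" aspect of contribution (1) means that both temporal images are processed by the same shared weights, so identical inputs yield identical features and spurious change signals are suppressed. A toy NumPy sketch of this weight-sharing property (illustrative only; not the paper's CNN-Transformer backbone):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))  # one shared weight matrix = "Siamese" sharing

def encode(pixels):
    """Toy shared-weight encoder: the same W is applied to both dates,
    so identical inputs map to identical features (the Siamese property)."""
    return np.maximum(W @ pixels, 0.0)  # linear map + ReLU

x_t1 = rng.standard_normal((3, 5))  # 5 pixels with 3 bands at time 1
x_t2 = x_t1.copy()                  # unchanged scene at time 2
# identical features for unchanged pixels -> no spurious change signal
assert np.allclose(encode(x_t1), encode(x_t2))
```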
2. Methodology
2.1. Overall Architecture of Network
2.2. Encoder of MCENet
2.3. Multi-Task Consistency Enhancement Module
2.4. Decoder of MCENet
2.5. Loss Function
3. Experimental Datasets and Evaluation Metrics
3.1. Experimental Datasets
3.1.1. The SECOND Dataset
3.1.2. The HRSCD Dataset
3.1.3. The NAFZ Dataset
3.2. Evaluation Metrics
3.3. Implementation Details
4. Experiment Results and Analysis
4.1. Introduction to Comparative Networks
- The benchmark architectures of CD networks are as follows: (1) The Siamese FCN series [23]: the Siamese FCN serves as the baseline network for BCD. In this study, we investigated the applicability of the Siamese FCN to SCD by modifying only its output layers. Specifically, we adopted the fully convolutional Siamese concatenation network (FC-Siam-Conc) and the fully convolutional Siamese difference network (FC-Siam-Diff), both of which have been extensively employed in existing research. (2) The HRSCD network series [22]: Daudt et al. proposed baseline networks and different architectural design strategies for SCD, which provide a basic reference for evaluating the proposed network. In this study, we selected three networks designed according to multi-task strategies: the multi-task direct comparison strategy (HRSCD-str.1), the multi-task separate strategy (HRSCD-str.3), and the multi-task integrated strategy (HRSCD-str.4);
- FCCDN [26]: a multi-task learning-based feature constraint network. It adopts a nonlocal feature pyramid and a self-supervised learning strategy to achieve SS for bi-temporal images, and it is characterized by SS-guided dual branches that constrain the decoder when generating CD results;
- BIT [34]: the first network to combine CNN and Transformer as the backbone for CD. A CNN is adopted independently to capture bi-temporal shallow features and the corresponding fusion features, while a single Transformer backbone extracts high-level fusion features;
- PCFN [28]: A Siamese post-classification design-based multi-task network. It guides dual branches of SS by incorporating the encoder’s multi-scale fusion features, and it generates the results of change types by only merging the output layer of the SS branch in the decoder;
- SCDNet [27]: A multi-task network based on Siamese U-Net. Compared with PCFN, it also exploits the multi-scale fusion features of the encoder to guide SS branches. In the decoder, however, the network does not employ an independent branch for BCD and optimizes it through the corresponding loss function;
- BiSRNet [40]: A novel SSCD-l architecture proposed by Ding et al. based on the late fusion and subtask separation strategy. This novel benchmark architecture of multi-task SCD employs Siamese FCN as the backbone. Moreover, the method introduces a cross-temporal semantic reasoning (Cot-SR) module to develop information correlations between bi-temporal features;
- MTSCDNet [56]: a multi-task network based on a Siamese architecture and the Swin Transformer. Similar to the strategy adopted in this study, this network obtains an attention map of the deep change feature via a spatial attention module (SAM) and then cascades it into the subtasks along the channel dimension, endowing the network with a better ability to recognize change regions while maintaining consistency in non-changing regions.
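The spatial attention module (SAM) mentioned for MTSCDNet follows the CBAM pattern: channel-wise pooling produces a 2-D map that is squashed to (0, 1) and used to reweight the features. A simplified NumPy sketch, in which the module's learned convolution over the pooled maps is replaced by a fixed average (an illustration of the mechanism, not the published module):

```python
import numpy as np

def spatial_attention(feat):
    """Simplified CBAM-style spatial attention for a (C, H, W) feature map.

    Pools over the channel axis, squashes the result to an (H, W) map in
    (0, 1), and reweights the features. A real SAM learns a k x k conv over
    the stacked pooled maps; here a fixed mean stands in for that conv.
    """
    avg_pool = feat.mean(axis=0)             # (H, W) channel-average pooling
    max_pool = feat.max(axis=0)              # (H, W) channel-max pooling
    pooled = 0.5 * (avg_pool + max_pool)     # stand-in for the learned conv
    attn = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid -> attention map
    return attn, feat * attn                 # map broadcasts over channels
```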
4.2. Results on the SECOND Dataset
4.3. Results on the HRSCD Dataset
4.4. Results on the NAFZ Dataset
5. Discussion
5.1. Evaluation of MCEM
5.2. Evaluation of Proposed Loss Function
- Loss-a: The joint loss function without adaptive weights;
- Loss-b: The joint loss function with adaptive weights;
- Semantic consistency loss (SCL) [44]: SCL is a similarity-measurement loss function proposed by Ding et al. based on cosine similarity. It calculates the difference between semantic mapping results and change labels through a cosine loss function;
- Tversky loss (TL) [56]: TL is a loss function designed by Cui et al. based on the Tversky index and is specifically aimed at mitigating class imbalance. It increases the weight assigned to negative pixels to reduce the effect of non-changing samples.
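For reference, the two external losses compared above have compact forms: the Tversky loss is 1 − TP/(TP + αFN + βFP), and SCL compares bi-temporal features via cosine similarity. The following NumPy sketch shows one plausible formulation of each; the exact weightings and reductions used in the cited papers may differ:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """1 - Tversky index; alpha weights false negatives, beta false positives.

    pred and target are binary (or soft) masks of the same shape.
    """
    tp = np.sum(pred * target)
    fn = np.sum((1 - pred) * target)
    fp = np.sum(pred * (1 - target))
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)

def semantic_consistency_loss(f1, f2, change_mask, eps=1e-7):
    """Cosine-similarity loss between bi-temporal feature maps.

    Pushes similarity up in non-changing pixels and down in changing ones.
    f1, f2: (C, H, W) feature maps; change_mask: (H, W) binary (1 = changed).
    """
    dot = np.sum(f1 * f2, axis=0)
    cos = dot / (np.linalg.norm(f1, axis=0) * np.linalg.norm(f2, axis=0) + eps)
    # unchanged pixels should be similar (penalize 1 - cos);
    # changed pixels should be dissimilar (penalize cos itself)
    loss = np.where(change_mask > 0, cos, 1.0 - cos)
    return float(loss.mean())
```

A perfect prediction drives the Tversky loss to 0, and identical bi-temporal features with an all-zero change mask drive the consistency loss to 0, which is a quick sanity check for either implementation.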
5.3. Ablation Study
- Model-a: We adopted a Siamese backbone with a standard ResNet34 encoder and the traditional multi-task joint loss function (Loss-a) as the baseline network, in accordance with the architecture of the multi-task integrated strategy;
- Model-b: Based on Model-a, we adopted the proposed loss (our loss) in this study to analyze its effect on the overall network performance;
- Model-c: Based on Model-b, we adopted the proposed backbone that combines CNN and Transformer in this study to investigate its effect on the overall network performance;
- Model-d: Based on Model-c, we adopted the method of cross-task mapping connection and assembled the auxiliary designs involved in the above to explore its effect on the overall network performance.
6. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Fatemi Nasrabadi, S.B. Questions of Concern in Drawing Up a Remote Sensing Change Detection Plan. J. Indian Soc. Remote Sens. 2019, 47, 1455–1469. [Google Scholar] [CrossRef]
- Khelifi, L.; Mignotte, M. Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis. IEEE Access 2020, 8, 126385–126400. [Google Scholar] [CrossRef]
- Bai, T.; Sun, K.; Li, W.; Li, D.; Chen, Y.; Sui, H. A novel class-specific object-based method for urban change detection using high-resolution remote sensing imagery. Photogramm. Eng. Remote Sens. 2021, 87, 249–262. [Google Scholar] [CrossRef]
- Fang, H.; Guo, S.; Wang, X.; Liu, S.; Lin, C.; Du, P. Automatic Urban Scene-Level Binary Change Detection Based on A Novel Sample Selection Approach and Advanced Triplet Neural Network. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5601518. [Google Scholar] [CrossRef]
- Xia, L.; Chen, J.; Luo, J.; Zhang, J.; Yang, D.; Shen, Z. Building Change Detection Based on an Edge-Guided Convolutional Neural Network Combined with a Transformer. Remote Sens. 2022, 14, 4524. [Google Scholar] [CrossRef]
- Zheng, Z.; Zhong, Y.; Wang, J.; Ma, A.; Zhang, L. Building damage assessment for rapid disaster response with a deep object-based semantic change detection framework: From natural disasters to man-made disasters. Remote Sens. Environ. 2021, 265, 112636. [Google Scholar] [CrossRef]
- Rui, X.; Cao, Y.; Yuan, X.; Kang, Y.; Song, W. Disastergan: Generative adversarial networks for remote sensing disaster image generation. Remote Sens. 2021, 13, 4284. [Google Scholar] [CrossRef]
- Wu, C.; Zhang, F.; Xia, J.; Xu, Y.; Li, G.; Xie, J.; Du, Z.; Liu, R. Building damage detection using U-Net with attention mechanism from pre-and post-disaster remote sensing datasets. Remote Sens. 2021, 13, 905. [Google Scholar] [CrossRef]
- Zhu, L.; Xing, H.; Zhao, L.; Qu, H.; Sun, W. A change type determination method based on knowledge of spectral changes in land cover types. Earth Sci. Inform. 2023, 16, 1265–1279. [Google Scholar] [CrossRef]
- Chen, J.; Zhao, W.; Chen, X. Cropland change detection with harmonic function and generative adversarial network. IEEE Geosci. Remote Sens. Lett. 2020, 19, 2500205. [Google Scholar] [CrossRef]
- Decuyper, M.; Chávez, R.O.; Lohbeck, M.; Lastra, J.A.; Tsendbazar, N.; Hackländer, J.; Herold, M.; Vågen, T.-G. Continuous monitoring of forest change dynamics with satellite time series. Remote Sens. Environ. 2022, 269, 112829. [Google Scholar] [CrossRef]
- Jiang, J.; Xiang, J.; Yan, E.; Song, Y.; Mo, D. Forest-CD: Forest Change Detection Network Based on VHR Images. IEEE Geosci. Remote Sens. Lett. 2022, 19, 2506005. [Google Scholar] [CrossRef]
- Zou, Y.; Shen, T.; Chen, Z.; Chen, P.; Yang, X.; Zan, L. A Transformer-Based Neural Network with Improved Pyramid Pooling Module for Change Detection in Ecological Redline Monitoring. Remote Sens. 2023, 15, 588. [Google Scholar] [CrossRef]
- Tesfaw, B.A.; Dzwairo, B.; Sahlu, D. Assessments of the impacts of land use/land cover change on water resources: Tana Sub-Basin, Ethiopia. J. Water Clim. Chang. 2023, 14, 421–441. [Google Scholar] [CrossRef]
- Liu, Y.; Zuo, X.; Tian, J.; Li, S.; Cai, K.; Zhang, W. Research on generic optical remote sensing products: A review of scientific exploration, technology research, and engineering application. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2021, 14, 3937–3953. [Google Scholar] [CrossRef]
- Wu, C.; Li, X.; Chen, W.; Li, X. A review of geological applications of high-spatial-resolution remote sensing data. J. Circuits Syst. Comput. 2020, 29, 2030006. [Google Scholar] [CrossRef]
- Parelius, E.J. A Review of Deep-Learning Methods for Change Detection in Multispectral Remote Sensing Images. Remote Sens. 2023, 15, 2092. [Google Scholar] [CrossRef]
- Shafique, A.; Cao, G.; Khan, Z.; Asad, M.; Aslam, M. Deep learning-based change detection in remote sensing images: A review. Remote Sens. 2022, 14, 871. [Google Scholar] [CrossRef]
- Zhuang, Z.; Shi, W.; Sun, W.; Wen, P.; Wang, L.; Yang, W.; Li, T. Multi-class remote sensing change detection based on model fusion. Int. J. Remote Sens. 2023, 44, 878–901. [Google Scholar] [CrossRef]
- Tian, S.; Zhong, Y.; Zheng, Z.; Ma, A.; Tan, X.; Zhang, L. Large-scale deep learning based binary and semantic change detection in ultra high resolution remote sensing imagery: From benchmark datasets to urban application. ISPRS J. Photogramm. Remote Sens. 2022, 193, 164–186. [Google Scholar] [CrossRef]
- Asokan, A.; Anitha, J. Change detection techniques for remote sensing applications: A survey. Earth Sci. Inform. 2019, 12, 143–160. [Google Scholar] [CrossRef]
- Daudt, R.C.; Le Saux, B.; Boulch, A.; Gousseau, Y. Multitask learning for large-scale semantic change detection. Comput. Vis. Image Underst. 2019, 187, 102783. [Google Scholar] [CrossRef]
- Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4063–4067. [Google Scholar]
- Bai, T.; Wang, L.; Yin, D.; Sun, K.; Chen, Y.; Li, W.; Li, D. Deep learning for change detection in remote sensing: A review. Geo-Spat. Inf. Sci. 2022, 1–27. [Google Scholar] [CrossRef]
- Jiang, H.; Peng, M.; Zhong, Y.; Xie, H.; Hao, Z.; Lin, J.; Ma, X.; Hu, X. A survey on deep learning-based change detection from high-resolution remote sensing images. Remote Sens. 2022, 14, 1552. [Google Scholar] [CrossRef]
- Chen, P.; Zhang, B.; Hong, D.; Chen, Z.; Yang, X.; Li, B. FCCDN: Feature constraint network for VHR image change detection. ISPRS J. Photogramm. Remote Sens. 2022, 187, 101–119. [Google Scholar] [CrossRef]
- Peng, D.; Bruzzone, L.; Zhang, Y.; Guan, H.; He, P. SCDNET: A novel convolutional network for semantic change detection in high resolution optical remote sensing imagery. Int. J. Appl. Earth Obs. Geoinf. 2021, 103, 102465. [Google Scholar] [CrossRef]
- Xia, H.; Tian, Y.; Zhang, L.; Li, S. A Deep Siamese Postclassification Fusion Network for Semantic Change Detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5622716. [Google Scholar] [CrossRef]
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306. [Google Scholar]
- Chen, J.; Hong, H.; Song, B.; Guo, J.; Chen, C.; Xu, J. MDCT: Multi-Kernel Dilated Convolution and Transformer for One-Stage Object Detection of Remote Sensing Images. Remote Sens. 2023, 15, 371. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. Adv. Neural Inf. Process. Syst. 2017, 30, 5998–6008. [Google Scholar]
- Yuan, J.; Wang, L.; Cheng, S. STransUNet: A Siamese TransUNet-Based Remote Sensing Image Change Detection Network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 9241–9253. [Google Scholar] [CrossRef]
- Liu, M.; Shi, Q.; Chai, Z.; Li, J. PA-Former: Learning prior-aware transformer for remote sensing building change detection. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6515305. [Google Scholar] [CrossRef]
- Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5607514. [Google Scholar] [CrossRef]
- Wang, W.; Tan, X.; Zhang, P.; Wang, X. A CBAM based multiscale transformer fusion approach for remote sensing image change detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 6817–6825. [Google Scholar] [CrossRef]
- Shi, W.; Zhang, M.; Zhang, R.; Chen, S.; Zhan, Z. Change detection based on artificial intelligence: State-of-the-art and challenges. Remote Sens. 2020, 12, 1688. [Google Scholar] [CrossRef]
- Zheng, Z.; Zhong, Y.; Tian, S.; Ma, A.; Zhang, L. ChangeMask: Deep multi-task encoder-transformer-decoder architecture for semantic change detection. ISPRS J. Photogramm. Remote Sens. 2022, 183, 228–239. [Google Scholar] [CrossRef]
- Zhou, Y.; Wang, J.; Ding, J.; Liu, B.; Weng, N.; Xiao, H. SIGNet: A Siamese Graph Convolutional Network for Multi-Class Urban Change Detection. Remote Sens. 2023, 15, 2464. [Google Scholar] [CrossRef]
- He, Y.; Zhang, H.; Ning, X.; Zhang, R.; Chang, D.; Hao, M. Spatial-Temporal Semantic Perception Network for Remote Sensing Image Semantic Change Detection. Remote Sens. 2023, 15, 4095. [Google Scholar] [CrossRef]
- Ding, L.; Guo, H.; Liu, S.; Mou, L.; Zhang, J.; Bruzzone, L. Bi-temporal semantic reasoning for the semantic change detection in HR remote sensing images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5620014. [Google Scholar] [CrossRef]
- Tang, H.; Wang, H.; Zhang, X. Multi-class change detection of remote sensing images based on class rebalancing. Int. J. Digit. Earth 2022, 15, 1377–1394. [Google Scholar] [CrossRef]
- Zhu, Q.; Guo, X.; Deng, W.; Shi, S.; Guan, Q.; Zhong, Y.; Zhang, L.; Li, D. Land-use/land-cover change detection based on a Siamese global learning framework for high spatial resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 184, 63–78. [Google Scholar] [CrossRef]
- Tantithamthavorn, C.; Hassan, A.E.; Matsumoto, K. The impact of class rebalancing techniques on the performance and interpretation of defect prediction models. IEEE Trans. Softw. Eng. 2018, 46, 1200–1219. [Google Scholar] [CrossRef]
- Xiang, S.; Wang, M.; Jiang, X.; Xie, G.; Zhang, Z.; Tang, P. Dual-task semantic change detection for remote sensing images using the generative change field module. Remote Sens. 2021, 13, 3336. [Google Scholar] [CrossRef]
- Niu, Y.; Guo, H.; Lu, J.; Ding, L.; Yu, D. SMNet: Symmetric Multi-Task Network for Semantic Change Detection in Remote Sensing Images Based on CNN and Transformer. Remote Sens. 2023, 15, 949. [Google Scholar] [CrossRef]
- Afaq, Y.; Manocha, A. Analysis on change detection techniques for remote sensing applications: A review. Ecol. Inform. 2021, 63, 101310. [Google Scholar] [CrossRef]
- Li, S.; Li, X. Global understanding of farmland abandonment: A review and prospects. J. Geogr. Sci. 2017, 27, 1123–1150. [Google Scholar] [CrossRef]
- Li, M.; Long, J.; Stein, A.; Wang, X. Using a semantic edge-aware multi-task neural network to delineate agricultural parcels from remote sensing images. ISPRS J. Photogramm. Remote Sens. 2023, 200, 24–40. [Google Scholar] [CrossRef]
- Chen, Y.; Wang, S.; Wang, Y. Spatiotemporal evolution of cultivated land non-agriculturalization and its drivers in typical areas of southwest China from 2000 to 2020. Remote Sens. 2022, 14, 3211. [Google Scholar] [CrossRef]
- Liu, M.; Chai, Z.; Deng, H.; Liu, R. A CNN-transformer network with multiscale context aggregation for fine-grained cropland change detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 15, 4297–4306. [Google Scholar] [CrossRef]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022. [Google Scholar]
- Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154. [Google Scholar]
- Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality reduction by learning an invariant mapping. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 1735–1742. [Google Scholar]
- Yang, K.; Xia, G.-S.; Liu, Z.; Du, B.; Yang, W.; Pelillo, M.; Zhang, L. Asymmetric siamese networks for semantic change detection in aerial images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5609818. [Google Scholar] [CrossRef]
- Cui, F.; Jiang, J. MTSCD-Net: A network based on multi-task learning for semantic change detection of bitemporal remote sensing images. Int. J. Appl. Earth Obs. Geoinf. 2023, 118, 103294. [Google Scholar] [CrossRef]
- Long, J.; Li, M.; Wang, X.; Stein, A. Delineation of agricultural fields using multi-task BsiNet from high-resolution satellite images. Int. J. Appl. Earth Obs. Geoinf. 2022, 112, 102871. [Google Scholar] [CrossRef]
- Hao, M.; Yang, C.; Lin, H.; Zou, L.; Liu, S.; Zhang, H. Bi-Temporal change detection of high-resolution images by referencing time series medium-resolution images. Int. J. Remote Sens. 2023, 44, 3333–3357. [Google Scholar] [CrossRef]
- Xu, C.; Ye, Z.; Mei, L.; Shen, S.; Sun, S.; Wang, Y.; Yang, W. Cross-Attention Guided Group Aggregation Network for Cropland Change Detection. IEEE Sens. J. 2023, 23, 13680–13691. [Google Scholar] [CrossRef]
- Lei, J.; Gu, Y.; Xie, W.; Li, Y.; Du, Q. Boundary extraction constrained siamese network for remote sensing image change detection. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5621613. [Google Scholar] [CrossRef]
- Wu, C.; Du, B.; Zhang, L. Slow feature analysis for change detection in multispectral imagery. IEEE Trans. Geosci. Remote Sens. 2013, 52, 2858–2874. [Google Scholar] [CrossRef]
- Du, B.; Ru, L.; Wu, C.; Zhang, L. Unsupervised deep slow feature analysis for change detection in multi-temporal remote sensing images. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9976–9992. [Google Scholar] [CrossRef]
- Wu, C.; Zhang, L.; Zhang, L. A scene change detection framework for multi-temporal very high resolution remote sensing images. Signal Process. 2016, 124, 184–197. [Google Scholar] [CrossRef]
- Song, A.; Choi, J.; Han, Y.; Kim, Y. Change detection in hyperspectral images using recurrent 3D fully convolutional networks. Remote Sens. 2018, 10, 1827. [Google Scholar] [CrossRef]
Name | Image Source | Resolution | Label Categories |
---|---|---|---|
SECOND | Multi-source platforms | 0.5–3 m | non-changing, non-vegetated surface, low vegetation, tree, building, water and playground
HRSCD | BD ORTHO | 0.5 m | non-changing, artificial surface, agricultural area, tree and water |
NAFZ | Google Earth | 0.6 m | non-changing, cropland, tree, water, building, road and bare land |
ID | Location | Time1 (Y/M/D) | Time2 (Y/M/D) | Area (km²) |
---|---|---|---|---|
1 | Changle (Town of Hunan and Wenling) | 2018/03/24 | 2020/07/19 | 31.33 |
2 | Changle (Town of Guhuai, Heshang, Wenwusha and Zhanggang) | 2018/01/13 | 2019/02/15 | 105.84 |
3 | Minhou (Town of Shanggan, Xiangqian, Qingkou) | 2016/03/03 | 2019/03/20 | 43.18 |
Network | OA (%) | IoUC (%) | mIoU (%) | F1 (%) | FSCD (%) | Kappa (%) | Sek (%) | Score (%) |
---|---|---|---|---|---|---|---|---|
FC-Siam-Concat | 86.46 | 41.55 | 68.69 | 42.10 | 53.15 | 22.61 | 12.60 | 29.42 |
FC-Siam-Diff | 85.78 | 41.13 | 68.99 | 45.59 | 53.28 | 22.90 | 12.71 | 29.59 |
HRSCD-str.1 | 85.57 | 39.99 | 65.83 | 38.39 | 50.29 | 18.93 | 10.39 | 27.02 |
HRSCD-str.3 | 85.27 | 35.62 | 66.37 | 44.38 | 51.25 | 20.14 | 10.58 | 27.32 |
HRSCD-str.4 | 85.59 | 36.26 | 68.69 | 49.73 | 53.06 | 25.06 | 13.25 | 29.88 |
FCCDN | 86.89 | 38.77 | 69.88 | 49.32 | 56.49 | 29.37 | 15.92 | 32.11 |
BIT | 87.64 | 42.71 | 71.94 | 50.36 | 58.29 | 32.61 | 18.39 | 34.45 |
PCFN | 87.93 | 44.42 | 72.23 | 49.12 | 59.17 | 32.93 | 18.89 | 34.89 |
SCDNet | 87.54 | 43.69 | 71.61 | 47.47 | 58.04 | 30.93 | 17.61 | 33.81 |
BiSRNet | 86.83 | 41.24 | 71.41 | 50.45 | 59.19 | 35.06 | 19.48 | 35.06 |
MTSCDNet | 87.83 | 45.13 | 72.14 | 50.60 | 61.03 | 34.90 | 20.16 | 35.75 |
MCENet | 88.42 | 46.35 | 73.22 | 52.21 | 62.60 | 37.72 | 22.06 | 37.41 |
Network | OA (%) | IoUC (%) | mIoU (%) | F1 (%) | FSCD (%) | Kappa (%) | Sek (%) | Score (%) |
---|---|---|---|---|---|---|---|---|
FC-Siam-Concat | 81.01 | 47.41 | 56.29 | 24.46 | 49.44 | 8.9 | 5.26 | 20.57 |
FC-Siam-Diff | 81.32 | 47.76 | 57.84 | 24.91 | 51.35 | 11.41 | 6.77 | 22.09 |
HRSCD-str.1 | 79.89 | 42.88 | 54.28 | 17.26 | 39.10 | 6.25 | 3.53 | 18.75 |
HRSCD-str.3 | 80.87 | 39.85 | 58.89 | 28.83 | 48.02 | 9.93 | 5.44 | 21.48 |
HRSCD-str.4 | 80.28 | 37.91 | 58.50 | 32.49 | 47.76 | 7.44 | 4.00 | 20.35 |
FCCDN | 79.63 | 31.03 | 56.94 | 29.77 | 41.76 | 5.74 | 2.88 | 19.10 |
BIT | 82.96 | 42.10 | 62.89 | 30.67 | 55.63 | 18.95 | 10.62 | 26.30 |
PCFN | 83.92 | 51.74 | 63.47 | 31.39 | 61.71 | 21.74 | 13.42 | 28.44 |
SCDNet | 80.63 | 45.13 | 53.90 | 17.40 | 43.93 | 8.81 | 5.09 | 19.74 |
BiSRNet | 84.14 | 50.61 | 63.08 | 30.05 | 60.78 | 21.75 | 13.27 | 28.21 |
MTSCDNet | 83.71 | 50.35 | 63.78 | 33.15 | 61.17 | 21.05 | 12.81 | 28.10 |
MCENet | 84.37 | 54.77 | 67.33 | 38.07 | 61.73 | 23.37 | 14.87 | 30.61 |
Network | OA (%) | IoUC (%) | mIoU (%) | F1 (%) | FSCD (%) | Kappa (%) | Sek (%) | Score (%) |
---|---|---|---|---|---|---|---|---|
FC-Siam-Concat | 87.52 | 22.22 | 65.96 | 27.97 | 57.71 | 25.1 | 11.53 | 27.86 |
FC-Siam-Diff | 88.27 | 17.57 | 66.10 | 22.07 | 56.02 | 20.11 | 8.82 | 26.00 |
HRSCD-str.1 | 89.64 | 13.00 | 60.88 | 16.83 | 46.34 | 9.98 | 4.18 | 21.19 |
HRSCD-str.3 | 89.06 | 18.42 | 66.98 | 23.55 | 59.69 | 28.1 | 12.43 | 28.79 |
HRSCD-str.4 | 89.14 | 23.80 | 69.01 | 30.59 | 63.38 | 36.7 | 17.13 | 32.70 |
FCCDN | 91.86 | 23.00 | 71.94 | 29.42 | 65.85 | 38.75 | 17.94 | 34.14 |
BIT | 91.47 | 22.31 | 71.81 | 27.82 | 65.57 | 38.99 | 17.93 | 34.09 |
PCFN | 89.65 | 26.13 | 70.83 | 31.54 | 65.65 | 41.29 | 20.10 | 35.32 |
SCDNet | 88.08 | 24.00 | 67.38 | 29.96 | 61.55 | 32.91 | 15.39 | 30.99 |
BiSRNet | 88.72 | 25.60 | 70.04 | 32.33 | 65.67 | 42.12 | 20.92 | 35.66 |
MTSCDNet | 90.80 | 24.21 | 71.53 | 29.92 | 65.43 | 39.18 | 18.36 | 34.31 |
MCENet | 91.56 | 27.45 | 73.69 | 33.69 | 68.31 | 44.77 | 21.67 | 37.28 |
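The Score column in the three tables above is consistent with the SECOND benchmark convention Score = 0.3 × mIoU + 0.7 × Sek (e.g., for MCENet on SECOND, 0.3 × 73.22 + 0.7 × 22.06 ≈ 37.41). A one-line check:

```python
def scd_score(miou, sek):
    """Weighted overall score: Score = 0.3 * mIoU + 0.7 * Sek (in percent)."""
    return 0.3 * miou + 0.7 * sek

# MCENet row of the SECOND table: mIoU = 73.22, Sek = 22.06
print(round(scd_score(73.22, 22.06), 2))  # 37.41
```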
Method | SECOND FSCD (%) | SECOND Sek (%) | SECOND Score (%) | HRSCD FSCD (%) | HRSCD Sek (%) | HRSCD Score (%) | NAFZ FSCD (%) | NAFZ Sek (%) | NAFZ Score (%)
---|---|---|---|---|---|---|---|---|---
– | 60.92 | 20.33 | 35.96 | 60.45 | 14.14 | 30.08 | 65.43 | 19.40 | 34.53
– | 59.64 | 18.94 | 34.84 | 59.47 | 11.99 | 27.83 | 66.18 | 20.53 | 36.01
– | 60.63 | 19.66 | 35.41 | 61.17 | 14.01 | 29.59 | 68.18 | 21.25 | 36.74
FFG | 62.06 | 21.34 | 36.82 | 61.68 | 14.20 | 30.05 | 68.25 | 21.37 | 36.92
Cot-SR | 60.99 | 20.46 | 36.10 | 60.80 | 13.33 | 28.92 | 67.29 | 20.74 | 36.20
MCEM | 62.60 | 22.06 | 37.41 | 61.73 | 14.87 | 30.61 | 68.31 | 21.67 | 37.28
Loss | SECOND FSCD (%) | SECOND Sek (%) | SECOND Score (%) | HRSCD FSCD (%) | HRSCD Sek (%) | HRSCD Score (%) | NAFZ FSCD (%) | NAFZ Sek (%) | NAFZ Score (%)
---|---|---|---|---|---|---|---|---|---
Loss-a | 59.84 | 19.06 | 34.93 | 61.34 | 13.31 | 28.90 | 65.13 | 18.27 | 34.12
Loss-b | 61.02 | 20.95 | 36.16 | 61.57 | 14.13 | 29.70 | 65.89 | 19.22 | 35.13
SCL | 61.55 | 21.04 | 36.60 | 61.05 | 14.71 | 29.95 | 67.12 | 19.95 | 35.88
TL | 59.68 | 19.31 | 35.12 | 60.61 | 13.21 | 28.72 | 63.09 | 16.44 | 33.09
Our loss | 62.60 | 22.06 | 37.41 | 61.73 | 14.87 | 30.61 | 68.31 | 21.67 | 37.28
Method | CNN | Transformer | Cross-Task Mapping Connection | Proposed Loss
---|---|---|---|---
Model-a | √ | | |
Model-b | √ | | | √
Model-c | √ | √ | | √
Model-d | √ | √ | √ | √
Method | SECOND FSCD (%) | SECOND Sek (%) | SECOND Score (%) | HRSCD FSCD (%) | HRSCD Sek (%) | HRSCD Score (%) | NAFZ FSCD (%) | NAFZ Sek (%) | NAFZ Score (%)
---|---|---|---|---|---|---|---|---|---
Model-a | 58.28 | 18.46 | 34.69 | 52.88 | 7.27 | 23.09 | 61.31 | 13.03 | 29.87
Model-b | 58.94 | 18.99 | 35.14 | 54.66 | 7.82 | 23.98 | 62.83 | 15.42 | 31.80
Model-c | 60.65 | 20.49 | 36.19 | 54.62 | 10.07 | 26.25 | 65.08 | 17.80 | 34.03
Model-d | 61.49 | 21.10 | 36.68 | 61.51 | 14.06 | 29.60 | 65.57 | 18.88 | 34.76
Layers | SECOND FSCD (%) | SECOND Sek (%) | SECOND Score (%) | HRSCD FSCD (%) | HRSCD Sek (%) | HRSCD Score (%) | NAFZ FSCD (%) | NAFZ Sek (%) | NAFZ Score (%)
---|---|---|---|---|---|---|---|---|---
(1, 1, 1) | 61.49 | 21.10 | 36.68 | 61.51 | 14.06 | 29.60 | 65.57 | 18.88 | 34.76
(2, 2, 2) | 62.60 | 22.06 | 37.41 | 61.73 | 14.87 | 30.61 | 66.40 | 19.46 | 35.19
(3, 3, 3) | 62.30 | 21.54 | 36.98 | 59.18 | 11.85 | 28.18 | 68.31 | 21.67 | 37.28
(4, 4, 4) | 62.03 | 21.44 | 36.93 | 59.83 | 13.22 | 29.03 | 64.62 | 17.41 | 33.28
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Lin, H.; Wang, X.; Li, M.; Huang, D.; Wu, R. A Multi-Task Consistency Enhancement Network for Semantic Change Detection in HR Remote Sensing Images and Application of Non-Agriculturalization. Remote Sens. 2023, 15, 5106. https://doi.org/10.3390/rs15215106