Novel Land Cover Change Detection Deep Learning Framework with Very Small Initial Samples Using Heterogeneous Remote Sensing Images
Abstract
1. Introduction
- A novel deep-learning framework is designed for Hete-CD. This simple framework improves detection accuracy with only a small number of initial samples, and its simple yet competitive design makes it attractive for practical engineering.
- A non-parameter sample-enhanced algorithm is proposed and embedded into a neural network. In particular, this algorithm explores the potential samples around each initial sample through a non-parametric, iterative procedure, as sketched below. Although this idea was verified by Hete-CD with HRSIs in this study, it may be useful for other supervised remote sensing image applications, such as land cover classification, scene classification, and Homo-CD.
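To make the idea concrete, the following is a minimal, illustrative sketch of neighborhood-based sample exploration. It assumes the enhancement grows each labeled seed pixel into spatially adjacent pixels with similar features until no new samples are found; the function name, the median-based cut-off, and the 4-neighborhood are hypothetical choices for illustration, not the authors' exact algorithm.

```python
# Illustrative sketch of non-parametric, iterative sample exploration
# around labeled seed pixels (assumptions noted in the text above).
import numpy as np

def explore_samples(features, seeds, max_iters=5):
    """features: (H, W, C) per-pixel features; seeds: dict {(row, col): label}.
    Returns an enlarged dict of pseudo-samples (pixel -> label)."""
    h, w, _ = features.shape
    enhanced = dict(seeds)
    for _ in range(max_iters):
        added = {}
        for (r, c), label in enhanced.items():
            centre = features[r, c]
            neighbours = [(r + dr, c + dc)
                          for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= r + dr < h and 0 <= c + dc < w]
            dists = np.array([np.linalg.norm(features[p] - centre)
                              for p in neighbours])
            # Non-parametric cut-off: accept neighbours closer than the
            # median distance to the current sample.
            cut = np.median(dists)
            for p, d in zip(neighbours, dists):
                if d <= cut and p not in enhanced:
                    added[p] = label
        if not added:          # converged: no new samples were explored
            break
        enhanced.update(added)
    return enhanced
```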
2. Methods
2.1. Overview
2.2. Proposed Deep-Learning Neural Network
Non-Parameter Sample-Enhanced Algorithm
2.3. Accuracy Assessment
3. Experiments
3.1. Dataset Description
3.2. Experimental Setup
- (i) In the first part of the experiments, three relatively recent and highly cited related methods were selected. The first, adaptive graph and structure cycle consistency (AGSCC) (https://github.com/yulisun/AGSCC, accessed on 1 May 2023) [45], focused on exploring the shared structural features of the bitemporal HRSIs, which are comparable for change detection. The second, the graph-based image regression and MRF segmentation method (GIR-MRF) (https://github.com/yulisun/GIR-MRF, accessed on 1 May 2023) [50], aimed at learning the shared features via graph-based image regression. The third, the sparse-constrained adaptive structure consistency-based method (SCASC) (https://github.com/yulisun/SCASC, accessed on 1 May 2023) [42], attempted to improve the efficiency of adaptive structure extraction for Hete-CD. These studies [42,45,50] are typical of change detection with HRSIs and were therefore adopted for comparison with our proposed framework. For a fair comparison, the parameters of the selected methods [42,45,50] were kept the same as in the original studies. Twelve samples (six unchanged pairs and six changed pairs) were randomly selected from the ground reference map to initialize our proposed framework.
- (ii) The second part of the experiments verified the advantages and feasibility of the proposed framework against several state-of-the-art deep learning methods. The first method, fully convolutional Siamese difference (FC-Siam-diff) [28], is an extension of UNet. The second, the crosswalk detection network (CDNet) [34], learns the change magnitude between the bitemporal images through a cross-convolution strategy; its robustness was demonstrated on four datasets in the original study. The third, the feature difference convolutional neural network (FDCNN) [26], performs convolutions on the feature difference map to obtain the binary change detection map. The deeply supervised image fusion network (DSIFN) [63] explores highly representative deep features of the bi-temporal images through a fully convolutional two-stream architecture for LCCD with HRSIs. The cross-layer convolutional neural network (CLNet) [27] was proposed for LCCD with HRSIs to learn the correlation between the bi-temporal images at different feature levels; four experimental applications demonstrated its superiority in the original study. In addition, a multiscale fully convolutional network (MFCN) [30] was constructed for land cover change areas of various shapes and sizes. The following parameters were set for each network: learning rate = 0.0001, batch size = 3, and epochs = 20 (see the sketch after this list). All the selected deep learning methods used the same training samples, randomly clipped and extracted from the ground reference map, to guarantee comparative fairness. Moreover, the quantity of training samples for the state-of-the-art deep learning methods was equal to the number of enhanced samples when the iteration of our proposed framework terminated.
- (iii) The ratio between the training, validation, and testing samples was approximately 2:1:7. The number of initial samples for each approach and dataset was 12 pairs, randomly obtained from the ground reference.
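For concreteness, the sketch below wires together the stated hyperparameters and the approximate 2:1:7 split, assuming PyTorch. `model` stands in for any of the compared networks and `dataset` is assumed to yield bitemporal patch pairs with reference labels; this is an illustrative reconstruction of the setup, not the authors' released code.

```python
# Illustrative training setup: lr = 1e-4, batch size = 3, 20 epochs,
# and a ~2:1:7 train/validation/test split (assumptions noted above).
import torch
from torch.utils.data import DataLoader, random_split

def run_experiment(dataset, model, device="cpu"):
    n = len(dataset)
    n_train, n_val = round(0.2 * n), round(0.1 * n)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])

    loader = DataLoader(train_set, batch_size=3, shuffle=True)
    optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = torch.nn.BCEWithLogitsLoss()   # binary changed/unchanged map

    model.to(device).train()
    for epoch in range(20):
        for t1, t2, label in loader:           # bitemporal patches + reference
            optimiser.zero_grad()
            logits = model(t1.to(device), t2.to(device))
            loss = criterion(logits, label.to(device).float())
            loss.backward()
            optimiser.step()
    return val_set, test_set                   # for validation and assessment
```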
3.3. Results
- (i) Comparisons with traditional methods: The visual performance of AGSCC [45], GIR-MRF [50], and SCASC [42] is presented in Figure 3a–c, respectively. As Figure 3d shows, our proposed framework achieved the best detection performance, with the fewest false alarm (green) and missed alarm (red) pixels. The corresponding quantitative results in Table 2 further support this visual conclusion. For example, the proposed framework achieved 99.04%, 98.75%, 96.77%, and 97.92% in terms of OA for the four datasets, the best values among AGSCC [45], GIR-MRF [50], SCASC [42], and our proposed framework.
- (ii) Comparisons with deep learning methods: We further verified the robustness of the proposed framework by comparing it with some state-of-the-art deep learning methods. The detection maps presented in Figures 4–7 were acquired by the different deep-learning methods on the four datasets. These visual comparisons indicate that the proposed framework performed better, with fewer false and missed alarms. Even when the sample-enhanced algorithm was removed from the framework, denoted by "Proposed-" in Tables 3–6, an improvement was still achieved over the other approaches on the same datasets. For example, the proposed approach obtained the best OA = 99.04% and the best FA = 0.33%. Compared with the proposed approach without the sample-enhanced algorithm, coupling the framework with the sample-enhanced algorithm improved OA by approximately 2.0% for Dataset-1. Multiscale information extraction and selective kernel attention in the proposed neural network are complementary and help the network learn more accurate change features; a sketch of the selective kernel idea is given after this paragraph. The quantitative comparisons in Tables 3–6 further support the visually observed conclusion.
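As an illustration of the attention mechanism mentioned above, here is a minimal PyTorch sketch of a selective-kernel-style block in the spirit of selective kernel networks [61]: two branches with different receptive fields are fused by channel-wise soft attention. The branch configuration and reduction ratio are illustrative assumptions, not the exact block used in the proposed network.

```python
import torch
import torch.nn as nn

class SelectiveKernel(nn.Module):
    """Two branches with different receptive fields (3x3, and 5x5 via a
    dilated 3x3), fused by channel-wise soft attention across branches."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                                nn.Linear(hidden, 2 * channels))

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        s = (u3 + u5).mean(dim=(2, 3))             # global pooling of fused features
        attn = self.fc(s).view(x.size(0), 2, -1)   # (batch, 2 branches, channels)
        attn = attn.softmax(dim=1)                 # soft selection across branches
        a3 = attn[:, 0].unsqueeze(-1).unsqueeze(-1)
        a5 = attn[:, 1].unsqueeze(-1).unsqueeze(-1)
        return a3 * u3 + a5 * u5                   # attention-weighted fusion
```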
3.4. Discussion
- (i) Relationship between the initial and the final samples: According to Section 2, the proposed framework was initialized with a small number of training samples, which were amplified at every iteration. Accordingly, observing the relationship between the initial and final samples helps us understand the balancing ability of our proposed framework for the unchanged and changed classes. Figure 8 shows that the quantity of samples for the changed and unchanged classes is equal at initialization. When the iteration of the proposed framework terminated, the numbers of samples for the unchanged and changed classes had been automatically adjusted to differ, because the areas covered by the two classes differ within an image scene; detecting them with unequal sample sizes is beneficial for balancing their detection accuracies. Figure 8 also demonstrates that the relationship between the initial and final samples is nonlinear. For example, for Dataset-1, when the initial samples for the unchanged class increased from three to four pairs, its final samples decreased from 56 pairs to 33 pairs. Moreover, different datasets exhibit different relationships between the initial and final samples. Therefore, determining a suitable quantity of samples for initializing the proposed framework may involve trial-and-error experiments in practical applications.
- (ii) Relationship between the initial samples and detection accuracies: The observations indicated that the detection accuracy initially decreased as the initial samples increased for some datasets (Figure 9), then increased and settled into a state with small variation. For example, the OA for Dataset-2 and -4 decreased when the initial samples increased from three pairs to four pairs, and then increased and fluctuated within [96.72%, 98.75%] and [96.05%, 97.92%], respectively. Some explored samples may be mislabeled, which negatively affects learning performance; however, mislabeled samples become a minority of the total enhanced sample set as the number of initial training samples grows. Consequently, this uncertainty may cause variation in detection accuracy when training our proposed framework with the enhanced sample set. The observations in Figure 9 indicate that the proposed approach can explore useful samples, although the detection accuracy did not always improve linearly with the number of samples. This phenomenon is due to the distinct variability, spectral homogeneity, and uncertainty of each dataset.
4. Conclusions
- (i) Advanced detection accuracy is obtained with the proposed framework. The comparative results on four pairs of actual HRSIs indicated that the proposed framework outperforms three traditional cognate methods in terms of visual performance and nine quantitative evaluation metrics.
- (ii) Iteratively training a deep learning neural network with a non-parameter sample-enhanced algorithm is effective in improving detection performance with limited initial samples. This work is the first to combine a non-parameter sample-enhanced algorithm with a deep learning neural network, retraining the network with the enhanced training samples from every iteration. The experimental results and comparisons demonstrated the feasibility and effectiveness of the recommended sample-enhanced algorithm and training strategy.
- (iii) A simple, robust, and non-parameter framework is preferred for practical engineering applications. Apart from the small number of initial training samples required for initialization, the proposed framework has no parameters that require hard tuning, which makes it easy to apply in practical engineering.
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
- Singh, A. Review article digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
- Pande, C.B. Land use/land cover and change detection mapping in Rahuri watershed area (MS), India using the google earth engine and machine learning approach. Geocarto Int. 2022, 37, 13860–13880.
- Lv, Z.; Huang, H.; Sun, W.; Jia, M.; Benediktsson, J.A.; Chen, F. Iterative Training Sample Augmentation for Enhancing Land Cover Change Detection Performance With Deep Learning Neural Network. IEEE Trans. Neural Netw. Learn. Syst. 2023, 1–14.
- Anniballe, R.; Noto, F.; Scalia, T.; Bignami, C.; Stramondo, S.; Chini, M.; Pierdicca, N. Earthquake damage mapping: An overall assessment of ground surveys and VHR image change detection after L’Aquila 2009 earthquake. Remote Sens. Environ. 2018, 210, 166–178.
- Li, Z.; Shi, W.; Lu, P.; Yan, L.; Wang, Q.; Miao, Z. Landslide mapping from aerial photographs using change detection-based Markov random field. Remote Sens. Environ. 2016, 187, 76–90.
- Li, Z.; Shi, W.; Myint, S.W.; Lu, P.; Wang, Q. Semi-automated landslide inventory mapping from bitemporal aerial photographs using change detection and level set method. Remote Sens. Environ. 2016, 175, 215–230.
- Bouziani, M.; Goïta, K.; He, D.-C. Automatic change detection of buildings in urban environment from very high spatial resolution images using existing geodatabase and prior knowledge. ISPRS J. Photogramm. Remote Sens. 2010, 65, 143–153.
- Coppin, P.; Jonckheere, I.; Nackaerts, K.; Muys, B.; Lambin, E. Review article: Digital change detection methods in ecosystem monitoring: A review. Int. J. Remote Sens. 2004, 25, 1565–1596.
- Leichtle, T.; Geiß, C.; Wurm, M.; Lakes, T.; Taubenböck, H. Unsupervised change detection in VHR remote sensing imagery–an object-based clustering approach in a dynamic urban environment. Int. J. Appl. Earth Obs. Geoinf. 2017, 54, 15–27.
- Munyati, C. Wetland change detection on the Kafue Flats, Zambia, by classification of a multitemporal remote sensing image dataset. Int. J. Remote Sens. 2000, 21, 1787–1806.
- Xian, G.; Homer, C.; Fry, J. Updating the 2001 National Land Cover Database land cover classification to 2006 by using Landsat imagery change detection methods. Remote Sens. Environ. 2009, 113, 1133–1147.
- Gao, J.; Liu, Y. Determination of land degradation causes in Tongyu County, Northeast China via land cover change detection. Int. J. Appl. Earth Obs. Geoinf. 2010, 12, 9–16.
- Taubenböck, H.; Esch, T.; Felbier, A.; Wiesner, M.; Roth, A.; Dech, S. Monitoring urbanization in mega cities from space. Remote Sens. Environ. 2012, 117, 162–176.
- Zhang, T.; Huang, X. Monitoring of urban impervious surfaces using time series of high-resolution remote sensing images in rapidly urbanized areas: A case study of Shenzhen. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2692–2708.
- Awad, M.M. An innovative intelligent system based on remote sensing and mathematical models for improving crop yield estimation. Inf. Process. Agric. 2019, 6, 316–325.
- Lv, Z.; Liu, T.; Benediktsson, J.A.; Falco, N. Land cover change detection techniques: Very-high-resolution optical images: A review. IEEE Geosci. Remote Sens. Mag. 2021, 10, 44–63.
- Zhu, Z. Change detection using landsat time series: A review of frequencies, preprocessing, algorithms, and applications. ISPRS J. Photogramm. Remote Sens. 2017, 130, 370–384.
- Hachicha, S.; Chaabane, F. On the SAR change detection review and optimal decision. Int. J. Remote Sens. 2014, 35, 1693–1714.
- Wen, D.; Huang, X.; Bovolo, F.; Li, J.; Ke, X.; Zhang, A.; Benediktsson, J.A. Change detection from very-high-spatial-resolution optical remote sensing images: Methods, applications, and future directions. IEEE Geosci. Remote Sens. Mag. 2021, 9, 68–101.
- Lv, Z.; Huang, H.; Li, X.; Zhao, M.; Benediktsson, J.A.; Sun, W.; Falco, N. Land cover change detection with heterogeneous remote sensing images: Review, progress, and perspective. Proc. IEEE 2022, 110, 1976–1991.
- Hong, S.; Vatsavai, R.R. Sliding window-based probabilistic change detection for remote-sensed images. Procedia Comput. Sci. 2016, 80, 2348–2352.
- Lu, P.; Qin, Y.; Li, Z.; Mondini, A.C.; Casagli, N. Landslide mapping from multi-sensor data through improved change detection-based Markov random field. Remote Sens. Environ. 2019, 231, 111235.
- Lv, Z.; Liu, T.; Shi, C.; Benediktsson, J.A. Local histogram-based analysis for detecting land cover change using VHR remote sensing images. IEEE Geosci. Remote Sens. Lett. 2020, 18, 1284–1287.
- Chen, L.; Liu, C.; Chang, F.; Li, S.; Nie, Z. Adaptive multi-level feature fusion and attention-based network for arbitrary-oriented object detection in remote sensing imagery. Neurocomputing 2021, 451, 67–80.
- Mou, L.; Bruzzone, L.; Zhu, X.X. Learning spectral-spatial-temporal features via a recurrent convolutional neural network for change detection in multispectral imagery. IEEE Trans. Geosci. Remote Sens. 2018, 57, 924–935.
- Zhang, M.; Shi, W. A feature difference convolutional neural network-based change detection method. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7232–7246.
- Zheng, Z.; Wan, Y.; Zhang, Y.; Xiang, S.; Peng, D.; Zhang, B. CLNet: Cross-layer convolutional neural network for change detection in optical remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2021, 175, 247–267.
- Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; pp. 4063–4067.
- Chen, J.; Yuan, Z.; Peng, J.; Chen, L.; Huang, H.; Zhu, J.; Liu, Y.; Li, H. DASNet: Dual attentive fully convolutional Siamese networks for change detection in high-resolution satellite images. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 14, 1194–1206.
- Li, X.; He, M.; Li, H.; Shen, H. A combined loss-based multiscale fully convolutional network for high-resolution remote sensing image change detection. IEEE Geosci. Remote Sens. Lett. 2021, 19, 8017505.
- Huang, R.; Zhou, M.; Zhao, Q.; Zou, Y. Change detection with absolute difference of multiscale deep features. Neurocomputing 2020, 418, 102–113.
- Chen, P.; Li, C.; Zhang, B.; Chen, Z.; Yang, X.; Lu, K.; Zhuang, L. A Region-Based Feature Fusion Network for VHR Image Change Detection. Remote Sens. 2022, 14, 5577.
- Asokan, A.; Anitha, J.; Patrut, B.; Danciulescu, D.; Hemanth, D.J. Deep feature extraction and feature fusion for bi-temporal satellite image classification. Comput. Mater. Contin. 2021, 66, 373–388.
- Zhang, Z.-D.; Tan, M.-L.; Lan, Z.-C.; Liu, H.-C.; Pei, L.; Yu, W.-X. CDNet: A real-time and robust crosswalk detection network on Jetson nano based on YOLOv5. Neural Comput. Appl. 2022, 34, 10719–10730.
- Yang, B.; Qin, L.; Liu, J.; Liu, X. UTRNet: An Unsupervised Time-Distance-Guided Convolutional Recurrent Network for Change Detection in Irregularly Collected Images. IEEE Trans. Geosci. Remote Sens. 2022, 60, 4410516.
- Fiorucci, F.; Giordan, D.; Santangelo, M.; Dutto, F.; Rossi, M.; Guzzetti, F. Criteria for the optimal selection of remote sensing optical images to map event landslides. Nat. Hazards Earth Syst. Sci. 2018, 18, 405–417.
- Huang, Z.; Zhang, Y.; Li, Q.; Zhang, T.; Sang, N.; Hong, H. Progressive dual-domain filter for enhancing and denoising optical remote-sensing images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 759–763.
- You, Y.; Cao, J.; Zhou, W. A survey of change detection methods based on remote sensing images for multi-source and multi-objective scenarios. Remote Sens. 2020, 12, 2460.
- Gong, M.; Jiang, F.; Qin, A.K.; Liu, T.; Zhan, T.; Lu, D.; Zheng, H.; Zhang, M. A spectral and spatial attention network for change detection in hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5521614.
- Wan, L.; Zhang, T.; You, H. Multi-sensor remote sensing image change detection based on sorted histograms. Int. J. Remote Sens. 2018, 39, 3753–3775.
- Lei, L.; Sun, Y.; Kuang, G. Adaptive local structure consistency-based heterogeneous remote sensing change detection. IEEE Geosci. Remote Sens. Lett. 2020, 19, 8003905.
- Sun, Y.; Lei, L.; Guan, D.; Li, M.; Kuang, G. Sparse-Constrained Adaptive Structure Consistency-Based Unsupervised Image Regression for Heterogeneous Remote-Sensing Change Detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4405814.
- Luppino, L.T.; Bianchi, F.M.; Moser, G.; Anfinsen, S.N. Unsupervised image regression for heterogeneous change detection. arXiv 2019, arXiv:1909.05948.
- Luppino, L.T.; Bianchi, F.M.; Moser, G.; Anfinsen, S.N. Remote sensing image regression for heterogeneous change detection. In Proceedings of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), Aalborg, Denmark, 17–20 September 2018; pp. 1–6.
- Sun, Y.; Lei, L.; Guan, D.; Wu, J.; Kuang, G.; Liu, L. Image regression with structure cycle consistency for heterogeneous change detection. IEEE Trans. Neural Netw. Learn. Syst. 2022, 1–15.
- Wu, J.; Li, B.; Qin, Y.; Ni, W.; Zhang, H.; Sun, Y. A Multiscale Graph Convolutional Network for Change Detection in Homogeneous and Heterogeneous Remote Sensing Images. arXiv 2021, arXiv:2102.08041.
- Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Patch Similarity Graph Matrix-Based Unsupervised Remote Sensing Change Detection With Homogeneous and Heterogeneous Sensors. IEEE Trans. Geosci. Remote Sens. 2021, 59, 4841–4861.
- Sun, Y.; Lei, L.; Li, X.; Tan, X.; Kuang, G. Structure Consistency-Based Graph for Unsupervised Change Detection With Homogeneous and Heterogeneous Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2021, 60, 4700221.
- Sun, Y.; Lei, L.; Guan, D.; Kuang, G. Iterative Robust Graph for Unsupervised Change Detection of Heterogeneous Remote Sensing Images. IEEE Trans. Image Process. 2021, 30, 6277–6291.
- Sun, Y.; Lei, L.; Tan, X.; Guan, D.; Wu, J.; Kuang, G. Structured graph based image regression for unsupervised multimodal change detection. ISPRS J. Photogramm. Remote Sens. 2022, 185, 16–31.
- Liu, Z.; Li, G.; Mercier, G.; He, Y.; Pan, Q. Change detection in heterogenous remote sensing images via homogeneous pixel transformation. IEEE Trans. Image Process. 2017, 27, 1822–1834.
- Lv, Z.; Zhang, P.; Sun, W.; Benediktsson, J.A.; Li, J.; Wang, W. Novel Adaptive Region Spectral-Spatial Features for Land Cover Classification with High Spatial Resolution Remotely Sensed Imagery. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5609412.
- Lv, Z.; Zhong, P.; Wang, W.; You, Z.; Shi, C. Novel Piecewise Distance based on Adaptive Region Key-points Extraction for LCCD with VHR Remote Sensing Images. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5607709.
- Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. Deep learning for visual understanding: A review. Neurocomputing 2016, 187, 27–48.
- Lv, Z.; Zhong, P.; Wang, W.; You, Z.; Falco, N. Multi-scale Attention Network Guided with Change Gradient Image for Land Cover Change Detection Using Remote Sensing Images. IEEE Geosci. Remote Sens. Lett. 2023, 20, 2501805.
- Zhan, T.; Gong, M.; Jiang, X.; Li, S. Log-based transformation feature learning for change detection in heterogeneous images. IEEE Geosci. Remote Sens. Lett. 2018, 15, 1352–1356.
- Wu, Y.; Li, J.; Yuan, Y.; Qin, A.; Miao, Q.-G.; Gong, M.-G. Commonality autoencoder: Learning common features for change detection from heterogeneous images. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 4257–4270.
- Niu, X.; Gong, M.; Zhan, T.; Yang, Y. A conditional adversarial network for change detection in heterogeneous images. IEEE Geosci. Remote Sens. Lett. 2018, 16, 45–49.
- Zou, Z.; Shi, Z. Random access memories: A new paradigm for target detection in high resolution aerial remote sensing images. IEEE Trans. Image Process. 2017, 27, 1100–1111.
- Li, Z.; You, Y.; Liu, F. Multi-scale ships detection in high-resolution remote sensing image via saliency-based region convolutional neural network. In Proceedings of the IGARSS 2019–2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; pp. 246–249.
- Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 510–519.
- Balki, I.; Amirabadi, A.; Levman, J.; Martel, A.L.; Emersic, Z.; Meden, B.; Garcia-Pedrero, A.; Ramirez, S.C.; Kong, D.; Moody, A.R.; et al. Sample-size determination methodologies for machine learning in medical imaging research: A systematic review. Can. Assoc. Radiol. J. 2019, 70, 344–353.
- Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200.
Table 1. Evaluation indicators used for accuracy assessment. TP and TN denote correctly detected changed and unchanged pixels; FP denotes falsely detected changed pixels (false alarms); FN denotes falsely detected unchanged pixels (missed alarms); PE denotes the expected chance agreement.

| Evaluation Indicator | Formula | Definition |
|---|---|---|
| False alarm (FA) | FA = FP / (FP + TN) | The ratio between the falsely detected changed pixels and the unchanged pixels of the ground truth. |
| Missed alarm (MA) | MA = FN / (FN + TP) | The ratio between the falsely detected unchanged pixels and the changed pixels of the ground truth. |
| Total error (TE) | TE = (FP + FN) / (TP + TN + FP + FN) | The ratio between the sum of falsely detected changed and unchanged pixels and the total pixels of the ground map. |
| Overall accuracy (OA) | OA = (TP + TN) / (TP + TN + FP + FN) | The ratio between the accurately detected pixels and the total pixels of the ground map. |
| Average accuracy (AA) | AA = (TP / (TP + FN) + TN / (TN + FP)) / 2 | The mean of the accurately detected changed and unchanged ratios. |
| Kappa coefficient (Ka) | Ka = (OA - PE) / (1 - PE) | Reflects the reliability of the detection map by measuring inter-rater agreement for the changed and unchanged classes. |
| Precision (Pr) | Pr = TP / (TP + FP) | The ratio between the accurately detected and total changed pixels in the detection map. |
| Recall (Re) | Re = TP / (TP + FN) | The ratio between the accurately detected and total changed pixels in the ground truth map. |
| F1-score (F1) | F1 = 2 · Pr · Re / (Pr + Re) | The harmonic mean of precision and recall. |
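For reference, a minimal sketch (not the authors' evaluation code) that computes these indicators from the binary confusion counts defined above:

```python
# Compute the Table 1 indicators from TP, TN, FP, FN (assumed positive).
def change_detection_metrics(tp, tn, fp, fn):
    total = tp + tn + fp + fn
    fa = fp / (fp + tn)                                # false alarm rate
    ma = fn / (fn + tp)                                # missed alarm rate
    te = (fp + fn) / total                             # total error
    oa = (tp + tn) / total                             # overall accuracy
    aa = 0.5 * (tp / (tp + fn) + tn / (tn + fp))       # average accuracy
    pr = tp / (tp + fp)                                # precision
    re = tp / (tp + fn)                                # recall
    f1 = 2 * pr * re / (pr + re)                       # F1-score
    # Kappa: agreement beyond chance from the 2x2 confusion matrix.
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total ** 2
    ka = (oa - pe) / (1 - pe)
    return dict(FA=fa, MA=ma, TE=te, OA=oa, AA=aa, Ka=ka, Pr=pr, Re=re, F1=f1)
```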
Table 2. Quantitative comparison between the proposed framework and the traditional methods on the four datasets (all metrics in %, except Kappa).

| Dataset | Method | OA | Kappa | AA | FA | MA | TE | Precision | Recall | F-Score |
|---|---|---|---|---|---|---|---|---|---|---|
| Dataset-1 | AGSCC [45] | 95.66 | 0.66 | 84.08 | 2.575 | 29.26 | 4.341 | 66.07 | 70.74 | 68.33 |
| | GIR-MRF [50] | 95.43 | 0.6746 | 88.34 | 3.492 | 19.83 | 4.573 | 61.95 | 80.17 | 69.89 |
| | SCASC [42] | 94.38 | 0.595 | 83.41 | 3.951 | 29.23 | 5.624 | 55.94 | 70.77 | 62.49 |
| | Proposed framework | 99.04 | 0.9201 | 94.9 | 0.33 | 9.861 | 0.964 | 95.04 | 90.14 | 92.52 |
| Dataset-2 | AGSCC [45] | 98.24 | 0.7732 | 83.03 | 0.2824 | 32.06 | 1.76 | 92.14 | 67.94 | 78.21 |
| | GIR-MRF [50] | 98.18 | 0.81 | 92.58 | 1.25 | 13.60 | 1.82 | 77.18 | 86.40 | 81.53 |
| | SCASC [42] | 97.9 | 0.741 | 83.84 | 0.6554 | 31.67 | 2.097 | 83.56 | 68.33 | 75.18 |
| | Proposed framework | 98.75 | 0.9097 | 95.6 | 0.4129 | 9.392 | 0.762 | 91.13 | 91.61 | 91.37 |
| Dataset-3 | AGSCC [45] | 95.33 | 0.7904 | 90.73 | 3.165 | 15.37 | 4.669 | 78.98 | 84.63 | 81.71 |
| | GIR-MRF [50] | 93.6 | 0.7386 | 91.84 | 5.824 | 10.5 | 6.4 | 68.35 | 89.5 | 77.51 |
| | SCASC [42] | 94.75 | 0.7704 | 90.77 | 3.952 | 14.5 | 5.252 | 75.25 | 85.5 | 80.05 |
| | Proposed framework | 96.77 | 0.9466 | 96.87 | 0.4602 | 5.805 | 1.049 | 96.36 | 94.2 | 95.27 |
| Dataset-4 | AGSCC [45] | 95.81 | 0.8652 | 91.38 | 0.7504 | 16.5 | 2.297 | 92.6 | 83.5 | 87.81 |
| | GIR-MRF [50] | 95.8 | 0.888 | 95.16 | 1.377 | 8.305 | 2.037 | 88.29 | 91.7 | 89.96 |
| | SCASC [42] | 95.27 | 0.9058 | 95.09 | 0.8823 | 8.931 | 1.633 | 91.96 | 91.07 | 91.51 |
| | Proposed framework | 97.92 | 0.9607 | 97.75 | 0.3272 | 4.168 | 0.713 | 97.11 | 95.83 | 96.47 |
Table 3. Quantitative comparison between the proposed framework and the deep learning methods on Dataset-1 (all metrics in %, except Kappa; "Proposed-" denotes the proposed framework without the sample-enhanced algorithm).

| Method | OA | Kappa | AA | FA | MA | TE | Precision | Recall | F-Score |
|---|---|---|---|---|---|---|---|---|---|
| FC-Siam-diff [28] | 97.05 | 0.7603 | 87.79 | 1.53 | 22.90 | 2.95 | 78.12 | 77.10 | 77.61 |
| CDNet [34] | 97.37 | 0.7687 | 85.29 | 0.78 | 28.64 | 2.63 | 86.61 | 71.36 | 78.25 |
| FDCNN [26] | 95.36 | 0.6658 | 87.42 | 3.43 | 21.73 | 4.64 | 61.78 | 78.27 | 69.05 |
| DSIFN [63] | 97.30 | 0.7807 | 88.91 | 1.42 | 20.76 | 2.70 | 79.80 | 79.24 | 79.52 |
| CLNet [27] | 97.11 | 0.7553 | 86.02 | 1.19 | 26.76 | 2.89 | 81.31 | 73.24 | 77.06 |
| MFCN [30] | 97.41 | 0.7921 | 89.97 | 1.46 | 18.59 | 2.59 | 79.81 | 81.41 | 80.60 |
| Proposed- | 97.83 | 0.808 | 87.00 | 0.52 | 25.48 | 2.17 | 91.02 | 74.52 | 81.95 |
| Proposed | 99.04 | 0.9201 | 94.9 | 0.33 | 9.861 | 0.964 | 95.04 | 90.14 | 92.52 |
Table 4. Quantitative comparison between the proposed framework and the deep learning methods on Dataset-2 (all metrics in %, except Kappa; "Proposed-" denotes the proposed framework without the sample-enhanced algorithm).

| Method | OA | Kappa | AA | FA | MA | TE | Precision | Recall | F-Score |
|---|---|---|---|---|---|---|---|---|---|
| FC-Siam-diff [28] | 97.05 | 0.6378 | 75.80 | 0.37 | 48.03 | 2.47 | 86.68 | 51.97 | 64.98 |
| CDNet [34] | 97.32 | 0.845 | 93.16 | 0.77 | 12.91 | 1.26 | 83.32 | 87.09 | 85.16 |
| FDCNN [26] | 94.58 | 0.6308 | 93.07 | 3.99 | 9.87 | 4.20 | 51.02 | 90.13 | 65.16 |
| DSIFN [63] | 97.51 | 0.7893 | 83.90 | 0.11 | 32.10 | 1.37 | 96.23 | 67.90 | 79.62 |
| CLNet [27] | 97.95 | 0.8195 | 87.59 | 0.33 | 24.49 | 1.36 | 91.29 | 75.51 | 82.66 |
| MFCN [30] | 97.65 | 0.7909 | 91.99 | 1.28 | 14.73 | 1.87 | 75.48 | 85.27 | 80.08 |
| Proposed- | 98.12 | 0.807 | 84.76 | 0.06 | 30.42 | 1.40 | 98.10 | 69.58 | 81.41 |
| Proposed | 98.75 | 0.9097 | 95.6 | 0.4129 | 9.392 | 0.762 | 91.13 | 91.61 | 91.37 |
Table 5. Quantitative comparison between the proposed framework and the deep learning methods on Dataset-3 (all metrics in %, except Kappa; "Proposed-" denotes the proposed framework without the sample-enhanced algorithm).

| Method | OA | Kappa | AA | FA | MA | TE | Precision | Recall | F-Score |
|---|---|---|---|---|---|---|---|---|---|
| FC-Siam-diff [28] | 92.25 | 0.7657 | 95.85 | 6.15 | 2.16 | 5.57 | 67.31 | 97.84 | 79.76 |
| CDNet [34] | 93.62 | 0.8618 | 92.28 | 1.20 | 14.23 | 2.51 | 89.59 | 85.77 | 87.64 |
| FDCNN [26] | 94.64 | 0.9081 | 98.28 | 2.05 | 1.39 | 1.91 | 86.10 | 98.61 | 91.93 |
| DSIFN [63] | 93.46 | 0.8926 | 98.48 | 2.63 | 0.40 | 2.27 | 83.11 | 99.60 | 90.61 |
| CLNet [27] | 89.74 | 0.6618 | 84.04 | 4.10 | 27.83 | 6.37 | 67.76 | 72.17 | 69.90 |
| MFCN [30] | 95.87 | 0.8962 | 92.36 | 0.31 | 14.97 | 1.95 | 97.26 | 85.03 | 90.73 |
| Proposed- | 95.88 | 0.9033 | 95.45 | 1.21 | 7.89 | 1.93 | 90.80 | 92.11 | 91.45 |
| Proposed | 96.77 | 0.9466 | 96.87 | 0.4602 | 5.805 | 1.049 | 96.36 | 94.2 | 95.27 |
Table 6. Quantitative comparison between the proposed framework and the deep learning methods on Dataset-4 (all metrics in %, except Kappa; "Proposed-" denotes the proposed framework without the sample-enhanced algorithm).

| Method | OA | Kappa | AA | FA | MA | TE | Precision | Recall | F-Score |
|---|---|---|---|---|---|---|---|---|---|
| FC-Siam-diff [28] | 86.67 | 0.5282 | 86.74 | 11.84 | 14.68 | 11.97 | 45.30 | 85.32 | 59.18 |
| CDNet [34] | 96.21 | 0.9126 | 93.27 | 0.18 | 13.29 | 1.39 | 98.06 | 86.71 | 92.03 |
| FDCNN [26] | 94.88 | 0.8541 | 95.00 | 2.25 | 7.76 | 2.74 | 82.28 | 92.24 | 86.97 |
| DSIFN [63] | 96.82 | 0.9476 | 97.24 | 0.48 | 5.03 | 0.91 | 95.59 | 94.97 | 95.28 |
| CLNet [27] | 92.38 | 0.542 | 70.03 | 0.11 | 59.83 | 6.07 | 97.57 | 40.17 | 56.91 |
| MFCN [30] | 97.5 | 0.9372 | 96.55 | 0.56 | 6.34 | 1.14 | 95.09 | 93.66 | 94.37 |
| Proposed- | 97.75 | 0.9508 | 96.87 | 0.32 | 5.95 | 0.88 | 97.17 | 94.05 | 95.58 |
| Proposed | 97.92 | 0.9607 | 97.75 | 0.33 | 4.168 | 0.713 | 97.11 | 95.83 | 96.47 |
Share and Cite
Zhu, Y.; Li, Q.; Lv, Z.; Falco, N. Novel Land Cover Change Detection Deep Learning Framework with Very Small Initial Samples Using Heterogeneous Remote Sensing Images. Remote Sens. 2023, 15, 4609. https://doi.org/10.3390/rs15184609