An Improved Symmetric Network with Feature Difference and Receptive Field for Change Detection
Abstract
1. Introduction
- (1) The proposal of the MFDE module, which extracts feature differences while preserving original spatial content and enhances multiscale feature fusion.
- (2) The development of the AED module, which dynamically weights decoder outputs based on receptive field scales to improve inference accuracy.
- (3) The construction of a UAV-based dataset (GBCNR), offering a high-quality benchmark for evaluating CD models in coastal wetland environments.
2. Related Works
2.1. Feature Difference
2.2. Receptive Field
3. Methods
3.1. Overall Structure
3.2. Multibranch Feature Difference Extraction
- (1) Difference: The difference branch extracts feature differences. Taking the encoded features of the encoder as input, this branch computes a difference matrix of the input features to make an initial capture of change information. It then computes a corresponding threshold matrix and, guided by it, highlights the differential parts of the feature expression to obtain the feature differences; a mask module filters non-change information out of the features (a minimal sketch follows this item). Within the difference branch of the MFDE module, the combination of average pooling and clipping proves effective because each operation is tailored to expressing feature differences in change detection. Average pooling computes the global mean of the feature differences and thereby establishes a dynamic baseline threshold, which lets the subsequent mask operation separate random noise from genuine change signals. Clipping suppresses negative fluctuations, preserving significant positive differential features while eliminating reverse interference. This unidirectional activation property aligns closely with the fundamental requirement of change detection tasks: "focusing solely on differential absolute values".
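As a rough illustration, the following PyTorch sketch implements the difference branch as just described; the tensor shapes, the per-channel global-average threshold, and the hard binary mask are our assumptions, not necessarily the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def difference_branch(f1, f2):
    """Sketch of the MFDE difference branch (assumed formulation).

    f1, f2: encoder features of the two temporal images, shape (B, C, H, W).
    """
    # Difference matrix; clipping at zero suppresses negative fluctuations,
    # keeping only positive differential responses.
    d = torch.clamp(f1 - f2, min=0)
    # Global average pooling yields a dynamic per-channel baseline threshold.
    thr = F.adaptive_avg_pool2d(d, output_size=1)
    # The mask filters out sub-threshold responses, treated here as non-change noise.
    mask = (d > thr).float()
    return d * mask
```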
- (2) Preservation: This branch models the information lost in the difference branch. The subtraction and clamp operations of the difference branch can discard pixel information, so the preserve branch models that lost information and integrates it into the differential information, yielding high-quality feature differences (see the sketch after this item). As a feature-processing strategy, concatenation offers a multidimensional information-preservation advantage over simple subtraction: by placing features in parallel along the channel dimension, it retains the complete topological structure of the original features, and the subsequent 3 × 3 convolution achieves interactive fusion of the cross-temporal features. This approach is particularly suitable for scenarios with temporal changes, because the concatenated feature space can encode "disappearing features" and "emerging features" as orthogonal dimensions, whereas a subtraction operation reflects only their net difference.
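A minimal sketch of the preserve branch under the same caveat: bitemporal features are concatenated along the channel dimension and fused by a 3 × 3 convolution. The channel widths and the BatchNorm/ReLU choice are our assumptions.

```python
import torch
import torch.nn as nn

class PreserveBranch(nn.Module):
    """Sketch of the MFDE preserve branch (assumed layer configuration)."""

    def __init__(self, channels: int):
        super().__init__()
        # A 3x3 convolution fuses the concatenated cross-temporal features.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f1, f2):
        # Channel-wise concatenation keeps "disappearing" and "emerging"
        # features as separate dimensions instead of a single net difference.
        return self.fuse(torch.cat([f1, f2], dim=1))
```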
- (3) Fusion: This branch fuses features across scales to obtain feature differences with complete information. It uses convolution and upsampling to match the scale of the deep features to that of the shallow features, concatenates the two, and then applies convolution to model the multiscale-fused feature differences, as sketched below.
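The fusion step can be sketched as follows; the bilinear upsampling mode and the single fusion convolution are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def fuse_scales(shallow, deep, fusion_conv):
    """Sketch of the MFDE fusion branch (assumed formulation).

    shallow:     feature differences at the finer scale, (B, C, H, W).
    deep:        feature differences at the coarser scale, e.g. (B, C, H/2, W/2).
    fusion_conv: a convolution mapping 2*C channels back to C.
    """
    # Upsample the deep features so both inputs share the shallow scale.
    deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                            mode="bilinear", align_corners=False)
    # Concatenate and convolve to model the multiscale-fused differences.
    return fusion_conv(torch.cat([shallow, deep_up], dim=1))
```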
3.3. Adaptive Ensemble Decision
- (1) Adaptive learning: In the proposed module, the change map obtained from the last layer of the decoder stage has the same size as the original input (256 × 256). Each unit at this scale corresponds to exactly one unit of the change map, giving a receptive field ratio (RFR) of 1; the RFRs of the other scales are computed analogously (see the sketch after this item). RFR serves as a dependable proxy for confidence within the AED module: it directly measures the spatial impact that the features of each decoder layer have on the final change prediction, in line with their hierarchical representational significance. Shallow layers, with small RFR values, excel at capturing intricate details such as textures but may lack comprehensive contextual understanding; deep layers, with larger RFR values, encode semantic information with broader spatial influence. By normalizing each layer's RFR against the total, the confidence factors dynamically adjust the weighting of each contribution: layers with expansive RFRs (e.g., 16 × 16 features, each unit of which governs a 16 × 16 output block) receive higher confidence to maintain global consistency, while layers with smaller RFRs (e.g., 64 × 64 features with localized impact) refine specific details. This ensures that the collective decision balances multiscale features in proportion to their receptive fields, dampening noise (via CAM/MASK) while amplifying trustworthy change signals. The RFR-based mechanism essentially turns the decoder into a parallelized, adaptive ensemble in which each layer's confidence mirrors its inherent ability to support precise change detection, thereby sidestepping heuristic, hand-assigned weights.
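Reading the text literally, a unit of an H × H decoder map governs a (256/H) × (256/H) block of the output, so RFR = 256/H, and normalizing the RFRs over all decoder layers gives the confidence factors. The sketch below encodes that reading; the normalization scheme is our assumption.

```python
def rfr_confidence(feature_sizes, output_size=256):
    """Assumed RFR-based confidence factors for the AED module.

    feature_sizes: spatial sizes of the decoder outputs, e.g. [256, 128, 64, 32, 16].
    """
    # RFR of a layer: how large an output block each of its units governs.
    # The 256x256 map has RFR 1; a 16x16 map has RFR 16.
    rfrs = [output_size / s for s in feature_sizes]
    # Normalize against the total so the confidence factors sum to 1.
    total = sum(rfrs)
    return [r / total for r in rfrs]

# Example: [256, 128, 64, 32, 16] -> RFRs [1, 2, 4, 8, 16],
# confidences of roughly [0.03, 0.06, 0.13, 0.26, 0.52].
```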
- (2) Ensemble decision: By multiplying the confidence factors with the feature maps at the corresponding scales of the decoder stage, the module constructs weighted features at each scale; integrating the results from all of these learners yields accurate change information, as sketched below.
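Under the same assumptions, the ensemble step weights each decoder map by its confidence factor and sums the upsampled results; the bilinear upsampling and the plain summation are our reading of "integrating the results from all learners".

```python
import torch.nn.functional as F

def ensemble_decision(decoder_maps, confidences, output_size=256):
    """Assumed AED ensemble: confidence-weighted sum of multiscale change maps."""
    out = None
    for m, c in zip(decoder_maps, confidences):
        # Bring every scale up to the full output resolution.
        m_up = F.interpolate(m, size=(output_size, output_size),
                             mode="bilinear", align_corners=False)
        # Accumulate the confidence-weighted contribution of each learner.
        out = c * m_up if out is None else out + c * m_up
    return out
```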
4. Experiments and Discussion
4.1. Datasets
4.2. Implementation Details
4.3. Ablation Study
4.4. Comparison Results and Discussion
4.5. Training Stability and Robustness
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Khelifi, L.; Mignotte, M. Deep learning for change detection in remote sensing images: Comprehensive review and meta-analysis. IEEE Access 2020, 8, 126385–126400.
- Singh, A. Review article: Digital change detection techniques using remotely-sensed data. Int. J. Remote Sens. 1989, 10, 989–1003.
- Chicco, D. Siamese neural networks: An overview. In Artificial Neural Networks; Humana: New York, NY, USA, 2021; pp. 73–94.
- Zhao, S.; Zhang, X.; Xiao, P.; He, G. Exchanging dual-encoder–decoder: A new strategy for change detection with semantic guidance and spatial localization. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4508016.
- Zheng, Z.; Zhong, Y.; Tian, S.; Ma, A.; Zhang, L. ChangeMask: Deep multi-task encoder-transformer-decoder architecture for semantic change detection. ISPRS J. Photogramm. Remote Sens. 2022, 183, 228–239.
- Panda, M.K.; Sharma, A.; Bajpai, V.; Subudhi, B.N.; Thangaraj, V.; Jakhetiya, V. Encoder and decoder network with ResNet-50 and global average feature pooling for local change detection. Comput. Vis. Image Underst. 2022, 222, 103501.
- Yang, Z.; Wu, Y.; Li, M.; Hu, X.; Li, Z. Unsupervised change detection in PolSAR images using siamese encoder–decoder framework based on graph-context attention network. Int. J. Appl. Earth Obs. Geoinf. 2023, 124, 103511.
- Daudt, R.C.; Le Saux, B.; Boulch, A. Fully convolutional siamese networks for change detection. In Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece, 7–10 October 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 4063–4067.
- Zhang, C.; Yue, P.; Tapete, D.; Jiang, L.; Shangguan, B.; Huang, L.; Liu, G. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. ISPRS J. Photogramm. Remote Sens. 2020, 166, 183–200.
- Hou, B.; Liu, Q.; Wang, H.; Wang, Y. From W-Net to CDGAN: Bitemporal change detection via deep learning techniques. IEEE Trans. Geosci. Remote Sens. 2019, 58, 1790–1802.
- Fang, S.; Li, K.; Shao, J.; Li, Z. SNUNet-CD: A densely connected Siamese network for change detection of VHR images. IEEE Geosci. Remote Sens. Lett. 2021, 19, 8007805.
- Chen, P.; Zhang, B.; Hong, D.; Chen, Z.; Yang, X.; Li, B. FCCDN: Feature constraint network for VHR image change detection. ISPRS J. Photogramm. Remote Sens. 2022, 187, 101–119.
- Liang, Y.; Zhang, C.; Han, M. RaSRNet: An end-to-end relation-aware semantic reasoning network for change detection in optical remote sensing images. IEEE Trans. Instrum. Meas. 2023, 73, 5006711.
- Aitken, K.; Ramasesh, V.; Cao, Y.; Maheswaranathan, N. Understanding how encoder-decoder architectures attend. Adv. Neural Inf. Process. Syst. 2021, 34, 22184–22195.
- Zhang, L.; Hu, X.; Zhang, M.; Shu, Z.; Zhou, H. Object-level change detection with a dual correlation attention-guided detector. ISPRS J. Photogramm. Remote Sens. 2021, 177, 147–160.
- Zhang, M.; Shi, W. A feature difference convolutional neural network-based change detection method. IEEE Trans. Geosci. Remote Sens. 2020, 58, 7232–7246.
- Yuan, S.; Wei, F.; Zhang, L.; Fu, H.; Gong, P. Receptive convolution boosts large-scale multi-class change detection. In Proceedings of the IGARSS 2024—2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 7–12 July 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 10459–10462.
- Chen, H.; Qi, Z.; Shi, Z. Remote sensing image change detection with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5607514.
- Bandara, W.G.C.; Patel, V.M. A transformer-based siamese network for change detection. In Proceedings of the IGARSS 2022—2022 IEEE International Geoscience and Remote Sensing Symposium, Kuala Lumpur, Malaysia, 17–22 July 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 207–210.
- Yan, J.; Cheng, Y.; Wang, Q.; Liu, L.; Zhang, W.; Jin, B. Transformer and graph convolution-based unsupervised detection of machine anomalous sound under domain shifts. IEEE Trans. Emerg. Top. Comput. Intell. 2024, 8, 2827–2842.
- Yan, J.; Cheng, Y.; Zhang, F.; Zhou, N.; Wang, H.; Jin, B.; Wang, M.; Zhang, W. Multi-modal imitation learning for arc detection in complex railway environments. IEEE Trans. Instrum. Meas. 2025, 74, 3529413.
- Cheng, Y.; Yan, J.; Zhang, F.; Li, M.; Zhou, N.; Shi, C.; Jin, B.; Zhang, W. Surrogate modeling of pantograph-catenary system interactions. Mech. Syst. Signal Process. 2025, 224, 112134.
- Li, L.; Wang, L.; Du, A.; Li, Y. LRDE-Net: Large receptive field and image difference enhancement network for remote sensing images change detection. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 17, 162–174.
- Luo, F.; Zhou, T.; Liu, J.; Guo, T.; Gong, X.; Ren, J. Multiscale diff-changed feature fusion network for hyperspectral image change detection. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5502713.
- Yuan, J.; Deng, Z.; Wang, S.; Luo, Z. Multi receptive field network for semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 1894–1903.
- Shen, X.; Wang, C.; Li, X.; Yu, Z.; Li, J.; Wen, C.; Cheng, M.; He, Z. RF-Net: An end-to-end image matching network based on receptive field. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8132–8140.
- He, Z.; Cao, Y.; Du, L.; Xu, B.; Yang, J.; Cao, Y.; Tang, S.; Zhuang, Y. MRFN: Multi-receptive-field network for fast and accurate single image super-resolution. IEEE Trans. Multimed. 2019, 22, 1042–1054.
- Araujo, A.; Norris, W.; Sim, J. Computing receptive fields of convolutional neural networks. Distill 2019, 4, e21.
- Yu, F.; Koltun, V. Multi-scale context aggregation by dilated convolutions. arXiv 2015, arXiv:1511.07122.
- Liu, S.; Huang, D. Receptive field block net for accurate and fast object detection. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 385–400.
- Tan, M.; Yuan, X.; Liang, B.; Han, S. DRFnet: Dynamic receptive field network for object detection and image recognition. Front. Neurorobot. 2023, 16, 1100697.
- Shi, Q.; Liu, M.; Li, S.; Liu, X.; Wang, F.; Zhang, L. A deeply supervised attention metric-based network and an open aerial image dataset for remote sensing change detection. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5604816.
- Yang, J.; Huang, X. 30 m annual land cover and its dynamics in China from 1990 to 2019. Earth Syst. Sci. Data Discuss. 2021, 2021, 1–29.
- Chen, H.; Shi, Z. A spatial-temporal attention-based method and a new dataset for remote sensing image change detection. Remote Sens. 2020, 12, 1662.
- Ji, S.; Wei, S.; Lu, M. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set. IEEE Trans. Geosci. Remote Sens. 2018, 57, 574–586.
- Peng, X.; Zhong, R.; Li, Z.; Li, Q. Optical remote sensing image change detection based on attention mechanism and image difference. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7296–7307.
- Shen, Q.; Huang, J.; Wang, M.; Tao, S.; Yang, R.; Zhang, X. Semantic feature-constrained multitask siamese network for building change detection in high-spatial-resolution remote sensing imagery. ISPRS J. Photogramm. Remote Sens. 2022, 189, 78–94.
- Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Representations (ICLR), San Diego, CA, USA, 7–9 May 2015.
- Hendrycks, D.; Lee, K.; Mazeika, M. Using pre-training can improve model robustness and uncertainty. In Proceedings of the International Conference on Machine Learning (ICML), Long Beach, CA, USA, 9–15 June 2019; pp. 2712–2721.
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv 2014, arXiv:1409.1556.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Oktay, O.; Schlemper, J.; Folgoc, L.L.; Lee, M.; Heinrich, M.; Misawa, K.; Mori, K.; McDonagh, S.; Hammerla, N.Y.; Kainz, B.; et al. Attention U-Net: Learning where to look for the pancreas. arXiv 2018, arXiv:1804.03999.
- Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
- Liu, W.; Lin, Y.; Liu, W.; Yu, Y.; Li, J. An attention-based multiscale transformer network for remote sensing image change detection. ISPRS J. Photogramm. Remote Sens. 2023, 202, 599–609.
- Zhang, H.; Chen, H.; Zhou, C.; Chen, K.; Liu, C.; Zou, Z.; Shi, Z. BiFA: Remote sensing image change detection with bitemporal feature alignment. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5614317.
- Tsutsui, S.; Hirakawa, T.; Yamashita, T.; Fujiyoshi, H. Semantic segmentation and change detection by multi-task U-Net. In Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA, 19–22 September 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 619–623.
- Ahmed, M.; Alasad, Q.; Yuan, J.S.; Alawad, M. Re-evaluating deep learning attacks and defenses in cybersecurity systems. Big Data Cogn. Comput. 2024, 8, 191.
Name | SYSU | LEVIR-CD | WHU | NJDS | CLCD | GBCNR |
---|---|---|---|---|---|---|
Resolution (m) | 0.5 | 0.5 | 0.3 | 0.3 | 0.5–2 | 0.13 |
Image pairs | 20,000 | 637 | 1 | 1 | 2400 | 2496 |
Image size (pixels) | 256 × 256 | 1024 × 1024 | 32,207 × 15,354 | 14,231 × 11,381 | 256 × 256 | 256 × 256 |
Datasets | Methods | P (%) | R (%) | F1 (%) | IoU (%) |
---|---|---|---|---|---|
SYSU | EDED | 88.46 | 76.07 | 81.80 | 69.20 |
SYSU | EDED+MFDE | 87.78 | 77.33 | 82.22 | 69.81 |
SYSU | EDED+MFDE+AED | 86.72 | 79.34 | 82.87 | 70.75 |
LEVIR-CD | EDED | 93.25 | 90.95 | 92.09 | 85.34 |
LEVIR-CD | EDED+MFDE | 92.91 | 91.93 | 92.42 | 85.91 |
LEVIR-CD | EDED+MFDE+AED | 93.30 | 91.72 | 92.50 | 86.05 |
WHU | EDED | 92.32 | 91.75 | 92.03 | 85.24 |
WHU | EDED+MFDE | 93.00 | 92.09 | 92.54 | 86.12 |
WHU | EDED+MFDE+AED | 93.06 | 93.16 | 93.11 | 87.11 |
NJDS | EDED | 80.84 | 53.52 | 64.40 | 47.50 |
NJDS | EDED+MFDE | 79.25 | 57.96 | 66.96 | 50.33 |
NJDS | EDED+MFDE+AED | 84.07 | 57.35 | 68.19 | 51.73 |
CLCD | EDED | 70.39 | 71.90 | 71.14 | 55.20 |
CLCD | EDED+MFDE | 73.43 | 71.91 | 72.66 | 57.06 |
CLCD | EDED+MFDE+AED | 73.78 | 73.48 | 73.63 | 58.27 |
GBCNR | EDED | 64.61 | 71.98 | 68.10 | 51.63 |
GBCNR | EDED+MFDE | 65.60 | 75.78 | 70.32 | 54.23 |
GBCNR | EDED+MFDE+AED | 67.96 | 77.69 | 72.50 | 56.86 |
Backbone | FLOPs (G) | Params (M) | P (%) | R (%) | F1 (%) | IoU (%) |
---|---|---|---|---|---|---|
FDRF-ResNet50 | 18.93 | 5.71 | 62.74 | 79.41 | 70.10 | 53.96 |
FDRF-VGG16 | 59.22 | 11.49 | 64.88 | 77.11 | 70.47 | 54.40 |
FDRF-EDED | 23.47 | 9.16 | 67.96 | 77.69 | 72.50 | 56.86 |
Methods | SYSU P (%) | SYSU R (%) | SYSU F1 (%) | SYSU IoU (%) | LEVIR-CD P (%) | LEVIR-CD R (%) | LEVIR-CD F1 (%) | LEVIR-CD IoU (%) | WHU P (%) | WHU R (%) | WHU F1 (%) | WHU IoU (%) |
---|---|---|---|---|---|---|---|---|---|---|---|---|
FC-EF | 78.26 | 76.30 | 77.27 | 62.96 | 88.53 | 86.83 | 87.67 | 78.05 | 80.87 | 75.43 | 78.05 | 64.01 |
FC-Siam-diff | 83.04 | 79.11 | 81.03 | 68.11 | 94.02 | 82.93 | 88.13 | 78.77 | 84.73 | 87.31 | 86.00 | 75.44 |
FC-Siam-conc | 74.32 | 75.84 | 75.07 | 60.09 | 83.81 | 91.00 | 87.26 | 77.39 | 78.86 | 78.64 | 78.75 | 64.95 |
IFN | 79.59 | 75.58 | 77.53 | 63.31 | 89.18 | 87.17 | 88.16 | 78.83 | 91.44 | 89.75 | 90.59 | 82.79 |
STANet | 81.14 | 76.48 | 78.74 | 64.94 | 86.91 | 80.17 | 83.40 | 71.53 | 79.37 | 85.50 | 82.32 | 69.95 |
BiT | 89.13 | 61.21 | 72.58 | 56.96 | 89.24 | 89.37 | 89.30 | 80.68 | 86.64 | 81.48 | 83.98 | 72.39 |
SNUNet | 70.76 | 85.33 | 77.36 | 63.09 | 89.53 | 83.31 | 86.31 | 75.91 | 85.60 | 81.49 | 83.49 | 71.67 |
AMTNet | 80.96 | 76.84 | 78.85 | 65.08 | 91.14 | 89.21 | 90.17 | 82.09 | 92.86 | 81.49 | 83.49 | 71.67 |
SGSLN | 86.20 | 78.00 | 81.89 | 69.34 | 92.91 | 91.21 | 92.05 | 85.28 | 92.32 | 91.99 | 92.27 | 85.64 |
FDRF | 86.72 | 79.34 | 82.87 | 70.75 | 93.30 | 91.72 | 92.50 | 86.05 | 93.06 | 93.16 | 93.11 | 87.11 |
Methods | FLOPs (G) | Params (M) | P (%) | R (%) | F1 (%) | IoU (%) |
---|---|---|---|---|---|---|
U-Net | 7.8 | 35.6 | 46.45 | 52.64 | 49.35 | 32.76 |
AttU-Net | 8.9 | 49.2 | 55.57 | 44.60 | 49.48 | 32.88 |
PSPNet | 65.6 | 364.2 | 50.57 | 58.21 | 54.12 | 37.10 |
DTCDSCN | 28.3 | 187.5 | 51.92 | 62.78 | 56.84 | 39.70 |
IFN | 41.10 | 50.71 | 49.44 | 14.35 | 22.24 | 12.51 |
MTU-Net | 46.2 | 215.8 | 65.29 | 62.82 | 64.03 | 47.09 |
AMTNet | 21.56 | 24.67 | 77.32 | 55.75 | 64.79 | 47.91 |
SGSLN-PT | 11.5 | 6.04 | 71.21 | 41.45 | 52.27 | 35.44 |
FDRF-PT | 23.47 | 9.16 | 72.35 | 43.53 | 54.35 | 37.31 |
SGSLN | 11.5 | 6.04 | 80.84 | 53.52 | 64.40 | 47.50 |
BiFA | 53.00 | 5.58 | 78.14 | 56.12 | 65.33 | 48.47 |
FDRF | 23.47 | 9.16 | 84.07 | 57.35 | 68.19 | 51.73 |
Methods | FLOPs (G) | Params (M) | CLCD P (%) | CLCD R (%) | CLCD F1 (%) | CLCD IoU (%) | GBCNR P (%) | GBCNR R (%) | GBCNR F1 (%) | GBCNR IoU (%) |
---|---|---|---|---|---|---|---|---|---|---|
FC-EF | 3.58 | 1.35 | 70.82 | 62.37 | 66.32 | 49.62 | 56.97 | 63.12 | 59.89 | 42.74 |
FC-Siam-diff | 4.73 | 1.35 | 71.70 | 47.60 | 57.22 | 40.07 | 61.40 | 57.05 | 59.14 | 41.99 |
FC-Siam-conc | 5.33 | 1.55 | 61.42 | 62.75 | 62.08 | 45.01 | 59.79 | 54.99 | 57.29 | 40.14 |
SNUNet | 54.82 | 12.04 | 64.26 | 52.33 | 57.69 | 40.54 | 63.10 | 64.50 | 63.79 | 46.83 |
BiT | 8.75 | 3.49 | 73.27 | 52.91 | 61.45 | 44.35 | 60.19 | 76.36 | 67.32 | 50.73 |
AMTNet | 21.56 | 24.67 | 73.97 | 72.54 | 73.25 | 57.79 | 66.59 | 69.96 | 68.23 | 51.78 |
SGSLN | 11.50 | 6.04 | 70.39 | 71.90 | 71.14 | 55.20 | 64.61 | 71.98 | 68.10 | 51.63 |
FDRF | 23.47 | 9.16 | 73.78 | 73.48 | 73.63 | 58.27 | 67.96 | 77.69 | 72.50 | 56.86 |