Heterogenous Image Matching Fusion Based on Cumulative Structural Similarity
Abstract
1. Introduction
2. Feature Point Description Algorithm
2.1. Flow Chart
2.2. Log-Gabor Filter
2.3. Edge Direction Histogram Construction
2.3.1. Amplitude of the Edge Where the Pixel Is Located
2.3.2. Orientation of the Edge Where the Pixel Is Located
2.4. Feature Descriptor Construction
Algorithm 1: Rotation-invariant descriptor construction based on cumulative structural response.
Input: Grayscale image I; keypoint set K = {k1, k2, ..., kn}
Output: Descriptor set D = {d1, d2, ..., dn}
1: Preprocess the image (grayscale normalization, histogram equalization)
2: For each keypoint kᵢ ∈ K do
3:   Apply multi-scale, multi-orientation Log-Gabor filters to obtain the filter responses and the orientation map θ(x, y)
4:   Compute the amplitude map A(x, y) using phase congruency
5:   Calculate the edge orientation map O(x, y) via directional projection
6:   Extract a 16×16 pixel local patch centered at kᵢ
7:   Construct a 36-bin orientation histogram weighted by A and Gaussian distance
8:   Determine the dominant orientation θ_dom as the peak of the histogram
9:   Rotate the local patch by −θ_dom for rotation invariance
10:  Divide the patch into 4×4 sub-regions (each 4×4 pixels)
11:  For each sub-region:
12:    Compute a 6-bin orientation histogram of the cumulative response
13:  Concatenate all histograms into a 96-dimensional descriptor dᵢ
14:  Normalize and threshold dᵢ to enhance robustness
15: End for
16: Return D
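The following is a minimal, illustrative Python/NumPy sketch of the descriptor-construction steps above; it is not the authors' reference implementation. The Log-Gabor parameters (number of scales and orientations, centre frequency, bandwidth), the simple directional-projection estimate of O(x, y), and the shortcut of shifting orientation values instead of resampling the rotated patch are assumptions made for brevity. The 16×16 patch, the 36-bin dominant-orientation histogram, and the 4×4 sub-regions with 6-bin histograms (96 dimensions) follow Algorithm 1.

```python
import numpy as np

def log_gabor_bank(shape, n_scales=4, n_orients=6, f0=0.1, sigma_ratio=0.55):
    """Frequency-domain 2D Log-Gabor filter bank (illustrative parameters)."""
    rows, cols = shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    U, V = np.meshgrid(u, v)                       # shape (rows, cols)
    radius = np.sqrt(U ** 2 + V ** 2)
    radius[0, 0] = 1.0                             # avoid log(0) at the DC term
    theta = np.arctan2(-V, U)
    filters = []
    for s in range(n_scales):
        fs = f0 * (2.0 ** s)                       # centre frequency of this scale
        radial = np.exp(-(np.log(radius / fs) ** 2) /
                        (2 * np.log(sigma_ratio) ** 2))
        radial[0, 0] = 0.0
        for o in range(n_orients):
            angle = o * np.pi / n_orients
            d = np.arctan2(np.sin(theta - angle), np.cos(theta - angle))
            angular = np.exp(-(d ** 2) / (2 * (np.pi / (1.5 * n_orients)) ** 2))
            filters.append(radial * angular)
    return filters

def amplitude_and_orientation(img, n_scales=4, n_orients=6):
    """Cumulative amplitude A(x, y) and edge orientation O(x, y) (simplified)."""
    F = np.fft.fft2(img.astype(np.float64))
    amp = np.zeros(img.shape, dtype=np.float64)
    ox = np.zeros_like(amp)
    oy = np.zeros_like(amp)
    for i, flt in enumerate(log_gabor_bank(img.shape, n_scales, n_orients)):
        resp = np.abs(np.fft.ifft2(F * flt))       # magnitude of the filter response
        amp += resp                                # cumulative structural response
        angle = (i % n_orients) * np.pi / n_orients
        ox += resp * np.cos(2 * angle)             # doubled-angle directional projection
        oy += resp * np.sin(2 * angle)
    orient = (0.5 * np.arctan2(oy, ox)) % np.pi    # orientation in [0, pi)
    return amp, orient

def describe_keypoint(amp, orient, kp, patch=16, grid=4, bins=6):
    """96-dim rotation-invariant descriptor for one keypoint (row, col)."""
    y, x = kp                                      # assumes the patch fits inside the image
    half = patch // 2
    a = amp[y - half:y + half, x - half:x + half]
    o = orient[y - half:y + half, x - half:x + half]
    # dominant orientation from a 36-bin, amplitude-weighted histogram
    hist36, _ = np.histogram(o, bins=36, range=(0, np.pi), weights=a)
    theta_dom = (np.argmax(hist36) + 0.5) * np.pi / 36
    # shortcut: shift orientations by -theta_dom instead of resampling the patch
    o = (o - theta_dom) % np.pi
    desc = []
    step = patch // grid
    for i in range(grid):
        for j in range(grid):
            sa = a[i * step:(i + 1) * step, j * step:(j + 1) * step]
            so = o[i * step:(i + 1) * step, j * step:(j + 1) * step]
            h, _ = np.histogram(so, bins=bins, range=(0, np.pi), weights=sa)
            desc.extend(h)
    desc = np.asarray(desc)
    desc /= (np.linalg.norm(desc) + 1e-12)         # normalise
    desc = np.minimum(desc, 0.2)                   # threshold large bins for robustness
    desc /= (np.linalg.norm(desc) + 1e-12)         # renormalise
    return desc
```

In this sketch, a caller would compute amp and orient once per image and then call describe_keypoint for every detected keypoint, matching the resulting 96-dimensional vectors as described in Section 2.5.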
2.5. Eigenvector Matching
3. Experimental Analysis
3.1. Experimental Data and Preprocessing
3.2. Image Matching Experiment
3.3. Experimental Analysis and Evaluation
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Liu, J.; Wu, G.; Liu, Z.; Wang, D.; Jiang, Z.; Ma, L.; Zhong, W.; Fan, X.; Liu, R. Infrared and Visible Image Fusion: From Data Compatibility to Task Adaption. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 47, 2349–2369.
- Liu, Q.; Wang, X. Bidirectional Feature Fusion and Enhanced Alignment Based Multimodal Semantic Segmentation for Remote Sensing Images. Remote Sens. 2024, 16, 2289.
- Li, Y.; Liu, Y.; Su, X.; Luo, X.; Yao, L. Review of Infrared and Visible Image Registration. Infrared Technol. 2022, 44, 641–651.
- Che, K.; Lv, J.; Gong, J. Robust and Efficient Registration of Infrared and Visible Images for Vehicular Imaging Systems. Remote Sens. 2024, 16, 4526.
- Liu, J.; Wu, Y.; Huang, Z. SMoA: Searching a Modality-Oriented Architecture for Infrared and Visible Image Fusion. IEEE Signal Process. Lett. 2021, 28, 1818–1822.
- Chen, Z.; Yang, X.; Zhang, C.; Duan, X. Infrared and visible image registration based on R-MI-rényi measurement. J. Electron. Meas. Instrum. 2018, 1, 1–8.
- Ye, Y.; Bruzzone, L.; Shan, J.; Bovolo, F.; Zhu, Q. Fast and Robust Matching for Multimodal Remote Sensing Image Registration. IEEE Trans. Geosci. Remote Sens. 2019, 57, 9059–9070.
- Min, C.; Gu, Y.; Yang, F.; Li, Y.; Lian, W. Non-Rigid Registration for Infrared and Visible Images via Gaussian Weighted Shape Context and Enhanced Affine Transformation. IEEE Access 2020, 8, 42562–42575.
- Li, Q.; Han, G.; Liu, P.; Yang, H.; Wu, J. An Infrared-Visible Image Registration Method Based on the Constrained Point Feature. Sensors 2021, 21, 1188.
- Liu, S.; Yang, B.; Wang, Y.; Tian, J.; Yin, L.; Zheng, W. 2D/3D Multimode Medical Image Registration Based on Normalized Cross-Correlation. Appl. Sci. 2022, 12, 2828.
- Zhang, W.; Li, T.; Zhang, Y. LTFormer: A light-weight transformer-based self-supervised matching network for heterogeneous remote sensing images. Inf. Fusion 2024, 109, 102425.
- Cao, F.; Shi, T.; Han, K.; Wang, P.; An, W. Log-Gabor filter-based high-precision multi-modal remote sensing image matching. Acta Geod. Cartogr. Sin. 2024, 53, 526–536.
- Li, J.; Zhou, R.; Ruan, Z. Research on the registration of infrared and visible images based on phase consistency and edge extreme points. IET Image Process. 2025, 19, e13317.
- Wang, Q.; Gao, X.; Wang, F.; Ji, Z. Feature Point Matching Method Based on Consistent Edge Structures for Infrared and Visible Images. Appl. Sci. 2020, 10, 2302.
- Yao, Q.; Song, D.; Xu, X.; Zou, K. Visual Feature-Guided Diamond Convolutional Network for Finger Vein Recognition. Sensors 2024, 24, 6097.
- Lv, L.; Yuan, Q.; Li, Z. An algorithm of Iris feature-extracting based on 2D Log-Gabor. Multimed. Tools Appl. 2019, 78, 22643–22666.
- Zhang, S. Infrared feature extraction and recognition technique for human motion posture based on similarity metric. Mil. Autom. 2025, 44, 35–40.
- Maksimović-Moićević, S.; Lukač, Ž.; Temerinac, M. Objective estimation of subjective image quality assessment using multi-parameter prediction. IET Image Process. 2019, 13, 2428–2435.
- Guan, S.Y.; Wang, T.M.; Meng, C. A Review of Point Feature Based Medical Image Registration. Chin. J. Mech. Eng. 2018, 31, 76.
- Xia, X.; Xiang, H.; Cao, Y.; Ge, Z.; Jiang, Z. Feature Extraction and Matching of Humanoid-Eye Binocular Images Based on SUSAN-SIFT Algorithm. Biomimetics 2023, 8, 139.
- Zhu, D. SIFT algorithm analysis and optimization. In Proceedings of the 2010 International Conference on Image Analysis and Signal Processing, Zhejiang, China, 9–11 April 2010; pp. 415–419.
- Chen, X.; Shen, K.; Li, Y. Log-Gabor coefficient correlation structure and its application to image retrieval. Comput. Simul. 2022, 39, 180–185.
- Xie, X. Multimodal remote sensing image matching with cumulative structural feature description. Telecommun. Technol. 2022, 62, 1780–1785.
- Zhu, Z.; Song, X.; Cui, W.; Qi, F. A review of visible-infrared image fusion for target detection. Comput. Eng. Appl. 2025, 1–26, in press.
Algorithm | Group 1 Recognition Rate | Group 2 Recognition Rate | Group 3 Recognition Rate | Average Recognition Rate | Standard Deviation | Recognition Rate Under Gaussian Noise |
---|---|---|---|---|---|---|
SIFT [21] | 33% | 36% | 35% | 34.7% | 1.2% | 22.5% |
Literature [22] | 50% | 52% | 51% | 51% | 1.0% | 44% |
Literature [23] | 60% | 68% | 65% | 64.3% | 3.4% | 58.9% |
This paper | 89% | 92% | 93% | 91.3% | 2.1% | 85.4% |
Algorithm | Group 1 Error | Group 2 Error | Group 3 Error | Mean Error | Standard Deviation | Recognition Rate (with Noise) |
---|---|---|---|---|---|---|
SIFT [21] | 35.2° | 36.7° | 38.9° | 36.9° | 1.9° | 25.4% |
Literature [22] | 34.5° | 37.0° | 35.3° | 35.6° | 1.3° | 30.2% |
Literature [23] | 28.1° | 29.4° | 27.6° | 28.4° | 0.9° | 44.9% |
This paper | 17.3° | 16.8° | 15.9° | 16.7° | 0.7° | 67.1% |
Algorithm | Group 1 Alignment Time (s) | Group 2 Alignment Time (s) | Group 3 Alignment Time (s) | Average Alignment Time (s) |
---|---|---|---|---|
SIFT [21] | 0.71 | 0.78 | 0.90 | 0.80 |
Literature [22] | 1.20 | 0.89 | 1.10 | 1.06 |
Literature [23] | 1.50 | 1.30 | 1.60 | 1.47 |
This paper | 1.20 | 1.30 | 1.20 | 1.23 |
Method Variant | Group 1 (%) | Group 2 (%) | Group 3 (%) | Average (%) |
---|---|---|---|---|
Full Method | 89.0 | 92.0 | 93.0 | 91.3 |
w/o Rotation Invariance | 75.5 | 77.8 | 78.2 | 77.2 |
w/o RANSAC | 81.2 | 83.0 | 83.4 | 82.5 |
w/o Phase Congruency | 69.8 | 71.1 | 73.5 | 71.5 |
w/o Log-Gabor Filtering | 66.7 | 69.0 | 70.2 | 68.6 |
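The ablation rows above and Section 2.5 imply a descriptor-matching stage followed by RANSAC-based outlier rejection. The sketch below is an illustration only: it uses a generic nearest-neighbour match with a Lowe-style ratio test and OpenCV's RANSAC homography estimator. The paper's actual matching criterion and transformation model may differ, and the ratio and reprojection thresholds shown here are assumed values.

```python
import numpy as np
import cv2

def match_and_filter(desc1, desc2, kps1, kps2, ratio=0.8, ransac_thresh=3.0):
    """Match 96-dim descriptors and reject outliers with RANSAC.

    desc1, desc2: (N, 96) descriptor arrays; kps1, kps2: (N, 2) arrays of
    (x, y) pixel coordinates. Returns the inlier matches and the estimated
    homography (or None if estimation fails).
    """
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc1.astype(np.float32),
                           desc2.astype(np.float32), k=2)
    # Lowe-style ratio test to discard ambiguous matches
    good = [m for m, n in (p for p in knn if len(p) == 2)
            if m.distance < ratio * n.distance]
    if len(good) < 4:
        return good, None
    src = np.float32([kps1[m.queryIdx] for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kps2[m.trainIdx] for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
    if mask is None:
        return good, None
    inliers = [m for m, keep in zip(good, mask.ravel()) if keep]
    return inliers, H
```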
Method | Matching Accuracy (%) | Processing Time (s) | GPU Dependency |
---|---|---|---|
SIFT [24] | 34.7 | 0.82 | No |
Literature [18] | 51.0 | 1.10 | No |
Literature [19] | 64.3 | 1.33 | No |
SuperPoint | 85.5 | 1.45 | Yes |
D2-Net | 89.2 | 2.60 | Yes |
LoFTR | 93.5 | 1.92 | Yes |
MatchFormer | 95.1 | 2.35 | Yes |
This paper | 91.3 | 1.23 | No |