Search Results (59)

Search Parameters:
Keywords = rotation-invariant descriptors

19 pages, 3294 KB  
Article
Rotation- and Scale-Invariant Object Detection Using Compressed 2D Voting with Sparse Point-Pair Screening
by Chenbo Shi, Yue Yu, Gongwei Zhang, Shaojia Yan, Changsheng Zhu, Yanhong Cheng and Chun Zhang
Electronics 2025, 14(15), 3046; https://doi.org/10.3390/electronics14153046 - 30 Jul 2025
Viewed by 290
Abstract
The Generalized Hough Transform (GHT) is a powerful method for rigid shape detection under rotation, scaling, translation, and partial occlusion conditions, but its four-dimensional accumulator incurs prohibitive computational and memory demands that prevent real-time deployment. To address this, we propose a framework that compresses the 4-D search space into a concise 2-D voting scheme by combining two-level sparse point-pair screening with an accelerated lookup. In the offline stage, template edges are extracted using an adaptive Canny operator with Otsu-determined thresholds, and gradient-direction differences for all point pairs are quantized to retain only those in the dominant bin, yielding rotation- and scale-invariant descriptors that populate a compact 2-D reference table. During the online stage, an adaptive grid selects only the highest-gradient pixels per cell as base points, while a precomputed gradient-direction bucket table enables constant-time retrieval of compatible subpoints. Each valid base–subpoint pair is mapped to indices in the lookup table, and “fuzzy” votes are cast over a 3 × 3 neighborhood in the 2-D accumulator, whose global peak determines the object center. Evaluation on 200 real industrial parts—augmented to 1000 samples with noise, blur, occlusion, and nonlinear illumination—demonstrates that our method maintains over 90% localization accuracy, matches the accuracy of the classical GHT, and achieves a ten-fold speedup, outperforming IGHT and LI-GHT variants by 2–3×, thereby delivering a robust, real-time solution for industrial rigid object localization. Full article
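The 2-D fuzzy-voting step described in this abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `candidates` stands for the (row, col) centers already produced by the base–subpoint lookup, and the center-weighted vote (weight 2 at the exact cell, 1 at its 3 × 3 neighbors) is an assumption made here so the accumulator peak is unambiguous.

```python
import numpy as np

def fuzzy_vote_center(candidates, shape):
    """Cast center-weighted 'fuzzy' 3x3 votes for each candidate object
    center into a 2-D accumulator and return its global peak.
    `candidates` is a hypothetical list of (row, col) centers derived
    from valid base-subpoint pairs via the lookup table."""
    acc = np.zeros(shape, dtype=np.int64)
    for r, c in candidates:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if 0 <= rr < shape[0] and 0 <= cc < shape[1]:
                    # assumption: the exact cell gets double weight
                    acc[rr, cc] += 2 if dr == 0 and dc == 0 else 1
    return np.unravel_index(np.argmax(acc), acc.shape)
```

The fuzzy spread makes the peak tolerant to quantization jitter in the pair-to-index mapping, at the cost of a slightly blurred accumulator.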

16 pages, 6397 KB  
Article
Heterogenous Image Matching Fusion Based on Cumulative Structural Similarity
by Nan Zhu, Shiman Yang and Zhongxun Wang
Electronics 2025, 14(13), 2693; https://doi.org/10.3390/electronics14132693 - 3 Jul 2025
Viewed by 262
Abstract
To solve the problem of the limited capability of multimodal image feature descriptors constructed by gradient information and the phase consistency principle, a method of cumulative structure feature descriptor construction with rotation invariance is proposed in this paper. Firstly, we extract the direction of multi-scale and multi-direction feature point edges using the Log-Gabor odd-symmetric filter and calculate the amplitude of pixel edges based on the phase consistency principle. Then, the main direction of the key points is determined based on the edge direction feature map, and the coordinates are established according to the main direction to ensure that the feature point descriptor has rotation invariance. Finally, the Log-Gabor odd-symmetric filter calculates the cumulative structural response in the maximum direction and constructs a highly identifiable descriptor with rotation invariance. We select several representative heterogeneous images as test data and compare the matching performance of the proposed algorithm with several excellent descriptors. The results indicate that the descriptor constructed in this paper is more robust than other descriptors for heterosource images with rotation changes. Full article

27 pages, 10290 KB  
Article
Benchmarking Point Cloud Feature Extraction with Smooth Overlap of Atomic Positions (SOAP): A Pixel-Wise Approach for MNIST Handwritten Data
by Eiaki V. Morooka, Yuto Omae, Mika Hämäläinen and Hirotaka Takahashi
AppliedMath 2025, 5(2), 72; https://doi.org/10.3390/appliedmath5020072 - 13 Jun 2025
Viewed by 550
Abstract
In this study, we introduce a novel application of the Smooth Overlap of Atomic Positions (SOAP) descriptor for pixel-wise image feature extraction and classification as a benchmark for SOAP point cloud feature extraction, using MNIST handwritten digits as a benchmark. By converting 2D images into 3D point sets, we compute pixel-centered SOAP vectors that are intrinsically invariant to translation, rotation, and mirror symmetry. We demonstrate how the descriptor’s hyperparameters—particularly the cutoff radius—significantly influence classification accuracy, and show that the high-dimensional SOAP vectors can be efficiently compressed using PCA or autoencoders with minimal loss in predictive performance. Our experiments also highlight the method’s robustness to positional noise, exhibiting graceful degradation even under substantial Gaussian perturbations. Overall, this approach offers an effective and flexible pipeline for extracting rotationally and translationally invariant image features, potentially reducing reliance on extensive data augmentation and providing a robust representation for further machine learning tasks. Full article
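The PCA compression of high-dimensional SOAP vectors mentioned above can be sketched with a plain centered-SVD PCA; the function name and interface are illustrative, not from the paper.

```python
import numpy as np

def pca_compress(X, k):
    """Project row-vector descriptors X (n_samples x n_features) onto
    their top-k principal components via a centered SVD."""
    mu = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    codes = (X - mu) @ Vt[:k].T   # compressed k-dim representation
    return codes, Vt[:k], mu      # basis and mean allow reconstruction
```

Reconstruction is `codes @ basis + mu`; for descriptors that lie in (or near) a k-dimensional subspace the compression is (nearly) lossless, which is the "minimal loss in predictive performance" regime the abstract reports.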
(This article belongs to the Special Issue Optimization and Machine Learning)

26 pages, 4371 KB  
Article
A Robust Rotation-Equivariant Feature Extraction Framework for Ground Texture-Based Visual Localization
by Yuezhen Cai, Linyuan Xia, Ting On Chan, Junxia Li and Qianxia Li
Sensors 2025, 25(12), 3585; https://doi.org/10.3390/s25123585 - 6 Jun 2025
Viewed by 588
Abstract
Ground texture-based localization leverages environment-invariant, planar-constrained features to enhance pose estimation robustness, thus offering inherent advantages for seamless localization. However, traditional feature extraction methods struggle with reliable performance under large-scale rotations and texture sparsity in the case of ground texture-based localization. This study addresses these challenges through a learning-based feature extraction framework—Ground Texture Rotation-Equivariant Keypoints and Descriptors (GT-REKD). The GT-REKD framework employs group-equivariant convolutions over the cyclic rotation group, augmented with directional attention and orientation-encoding heads, to produce dense keypoints and descriptors that are exactly invariant to 0–360° in-plane rotations. The experimental results for ground texture localization show that GT-REKD achieves 96.14% matching in pure rotation tests, 94.08% in incremental localization, and relocalization errors of 5.55° and 4.41 px (≈0.1 cm), consistently outperforming baseline methods under extreme rotations and sparse textures, highlighting its applicability to visual localization and simultaneous localization and mapping (SLAM) tasks. Full article
(This article belongs to the Section Navigation and Positioning)

14 pages, 452 KB  
Article
A Comprehensive Comparative Study of Quick Invariant Signature (QIS), Dynamic Time Warping (DTW), and Hybrid QIS + DTW for Time Series Analysis
by Hamid Reza Shahbazkia, Hamid Reza Khosravani, Alisher Pulatov, Elmira Hajimani and Mahsa Kiazadeh
Mathematics 2025, 13(6), 999; https://doi.org/10.3390/math13060999 - 19 Mar 2025
Viewed by 3214
Abstract
This study presents a comprehensive evaluation of the quick invariant signature (QIS), dynamic time warping (DTW), and a novel hybrid QIS + DTW approach for time series analysis. QIS, a translation and rotation invariant shape descriptor, and DTW, a widely used alignment technique, were tested individually and in combination across various datasets, including ECG5000, seismic data, and synthetic signals. Our hybrid method was designed to combine the structural representation of QIS with the temporal alignment capabilities of DTW. This hybrid method achieved up to 93% classification accuracy on ECG5000, outperforming DTW alone (86%) and a standard MLP classifier in noisy or low-data conditions. These findings confirm that integrating structural invariance (QIS) with temporal alignment (DTW) yields superior robustness to noise and time compression artifacts. We recommend adopting hybrid QIS + DTW, particularly for applications in biomedical signal monitoring and earthquake detection, where real-time analysis and minimal labeled data are critical. The proposed hybrid approach does not require extensive training, making it suitable for resource-constrained scenarios. Full article
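DTW itself is standard; a minimal O(nm) sketch of the distance computation (absolute-difference cost, unit step pattern) is:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences:
    fill an (n+1) x (m+1) cost table where each cell extends the best of
    the three admissible predecessor alignments."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]
```

Time-stretched copies of the same shape score near zero, which is exactly the alignment tolerance the hybrid method layers on top of QIS's structural invariance.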
(This article belongs to the Special Issue Mathematical Modeling and Optimization in Signal Processing)

21 pages, 14388 KB  
Article
Adaptive Matching of High-Frequency Infrared Sea Surface Images Using a Phase-Consistency Model
by Xiangyu Li, Jie Chen, Jianwei Li, Zhentao Yu and Yaxun Zhang
Sensors 2025, 25(5), 1607; https://doi.org/10.3390/s25051607 - 6 Mar 2025
Viewed by 692
Abstract
The sea surface displays dynamic characteristics, such as waves and various formations. As a result, images of the sea surface usually have few stable feature points, with a background that is often complex and variable. Moreover, the sea surface undergoes significant changes due to variations in wind speed, lighting conditions, weather, and other environmental factors, resulting in considerable discrepancies between images. These variations present challenges for identification using traditional methods. This paper introduces an algorithm based on the phase-consistency model. We utilize image data collected from a specific maritime area with a high-frame-rate surface array infrared camera. By accurately detecting corresponding (same-name) points, we focus on the subtle texture information of the sea surface and its rotational invariance, enhancing the accuracy and robustness of the matching algorithm. We begin by constructing a nonlinear scale space using a nonlinear diffusion method. Maximum and minimum moments are generated using an odd symmetric Log–Gabor filter within the two-dimensional phase-consistency model. Next, we identify extremum points in the anisotropic weighted moment space. We use the phase-consistency feature values as image gradient features and develop feature descriptors based on the Log–Gabor filter that are insensitive to scale and rotation. Finally, we employ Euclidean distance as the similarity measure for initial matching, align the feature descriptors, and remove false matches using the fast sample consensus (FSC) algorithm. Our findings indicate that the proposed algorithm significantly improves upon traditional feature-matching methods in overall efficacy. Specifically, the average number of matching points for long-wave infrared images is 1147, while for mid-wave infrared images, it increases to 8241. Additionally, the root mean square error (RMSE) fluctuations for both image types remain stable, averaging 1.5 pixels. The proposed algorithm also enhances the rotation invariance of image matching, achieving satisfactory results even at significant rotation angles. Full article
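The initial matching step (Euclidean distance as the similarity measure) can be sketched as a one-way nearest-neighbour search over descriptor rows; the distance threshold is an illustrative assumption, not a value from the paper.

```python
import numpy as np

def nn_match(desc_a, desc_b, max_dist=0.5):
    """One-way nearest-neighbour matching of descriptor rows by
    Euclidean distance, keeping matches under a distance threshold
    (threshold value is an assumption for illustration)."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)                       # best partner per row
    keep = d[np.arange(len(desc_a)), nn] < max_dist
    return [(int(i), int(nn[i])) for i in np.flatnonzero(keep)]
```

A consensus step such as FSC (or RANSAC) would then prune the geometrically inconsistent survivors of this threshold test.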
(This article belongs to the Section Remote Sensors)

20 pages, 7090 KB  
Article
An Infrared and Visible Image Alignment Method Based on Gradient Distribution Properties and Scale-Invariant Features in Electric Power Scenes
by Lin Zhu, Yuxing Mao, Chunxu Chen and Lanjia Ning
J. Imaging 2025, 11(1), 23; https://doi.org/10.3390/jimaging11010023 - 13 Jan 2025
Viewed by 1204
Abstract
In grid intelligent inspection systems, automatic registration of infrared and visible light images in power scenes is a crucial research technology. Since there are obvious differences in key attributes between visible and infrared images, direct alignment often fails to achieve the expected results. To overcome the high difficulty of aligning infrared and visible light images, an image alignment method is proposed in this paper. First, we use the Sobel operator to extract the edge information of the image pair. Second, the feature points in the edges are recognised by a curvature scale space (CSS) corner detector. Third, the Histogram of Oriented Gradients (HOG) is extracted as the gradient distribution characteristics of the feature points, which are normalised with the Scale Invariant Feature Transform (SIFT) algorithm to form feature descriptors. Finally, initial matching and accurate matching are achieved by the improved fast approximate nearest-neighbour matching method and adaptive thresholding, respectively. Experiments show that this method can robustly match the feature points of image pairs under rotation, scale, and viewpoint differences, and achieves excellent matching results. Full article
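The first step, Sobel edge extraction, is standard; a minimal NumPy-only sketch (3 × 3 kernels, zero-padded borders, gradient magnitude) is:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via the 3x3 Sobel kernels, computed with an
    explicit sliding window over a zero-padded copy of the image."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T                      # vertical kernel is the transpose
    p = np.pad(img.astype(float), 1)
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()
            gy[i, j] = (win * ky).sum()
    return np.hypot(gx, gy)
```

Production code would use a vectorized convolution, but the loop form makes the kernel arithmetic explicit.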
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)

26 pages, 32372 KB  
Article
A Line Feature-Based Rotation Invariant Method for Pre- and Post-Damage Remote Sensing Image Registration
by Yalun Zhao, Derong Chen and Jiulu Gong
Remote Sens. 2025, 17(2), 184; https://doi.org/10.3390/rs17020184 - 7 Jan 2025
Cited by 1 | Viewed by 897
Abstract
The accurate registration of pre- and post-damage images plays a vital role in the change analysis of the target area and the subsequent work of damage effect assessment. However, due to the impact of shooting time and damaged areas, there are large background and regional differences between pre- and post-damage remote sensing images, and the existing image registration methods do not perform well. In this paper, a line feature-based rotation invariant image registration method is proposed for pre- and post-damage remote sensing images. First, we extract and screen straight line segments from the images before and after damage. Then, we design a new method to calculate the main direction of each line segment and rotate the image based on the current line segment’s main direction and the center coordinates. According to the spatial distribution (distance and angle) of the reference line segment relative to the remaining line segments, a line feature descriptor vector is constructed and matched for each line segment on the rotated image. Since the main edge contour can preserve more invariant features, this descriptor can be better applied to the registration of pre- and post-damage remote sensing images. Finally, we cross-pair the midpoints and endpoints of the matched line segments to improve the accuracy of subsequent affine transformation parameter calculations. In remote sensing images with large background and regional differences, the average registration precision of our method is close to 100%, and the root mean square error is about 1 pixel. At the same time, the rotation invariance of our method is verified by rotating the test images. In addition, the results of the comparative experiments show that the registration precision and error of the proposed method are better than those of the existing typical representative algorithms. Full article

24 pages, 13141 KB  
Article
Robust and Efficient Registration of Infrared and Visible Images for Vehicular Imaging Systems
by Kai Che, Jian Lv, Jiayuan Gong, Jia Wei, Yun Zhou and Longcheng Que
Remote Sens. 2024, 16(23), 4526; https://doi.org/10.3390/rs16234526 - 3 Dec 2024
Cited by 1 | Viewed by 1470
Abstract
The automatic registration of infrared and visible images in vehicular imaging systems remains challenging in vision-assisted driving systems because of differences in imaging mechanisms. Existing registration methods often fail to accurately register infrared and visible images in vehicular imaging systems due to numerous spurious points during feature extraction, unstable feature descriptions, and low feature matching efficiency. To address these issues, a robust and efficient registration of infrared and visible images for vehicular imaging systems is proposed. In the feature extraction stage, we propose a structural similarity point extractor (SSPE) that extracts feature points using the structural similarity between weighted phase congruency (PC) maps and gradient magnitude (GM) maps. This approach effectively suppresses invalid feature points while ensuring the extraction of stable and reliable ones. In the feature description stage, we design a rotation-invariant feature descriptor (RIFD) that comprehensively describes the attributes of feature points, thereby enhancing their discriminative power. In the feature matching stage, we propose an effective coarse-to-fine matching strategy (EC2F) that improves the matching efficiency through nearest neighbor matching and threshold-based fast sample consensus (FSC), while improving registration accuracy through coordinate-based iterative optimization. Registration experiments on public datasets and a self-established dataset demonstrate the superior performance of our proposed method, and also confirm its effectiveness in real vehicular environments. Full article

26 pages, 24227 KB  
Article
A Base-Map-Guided Global Localization Solution for Heterogeneous Robots Using a Co-View Context Descriptor
by Xuzhe Duan, Meng Wu, Chao Xiong, Qingwu Hu and Pengcheng Zhao
Remote Sens. 2024, 16(21), 4027; https://doi.org/10.3390/rs16214027 - 30 Oct 2024
Cited by 1 | Viewed by 1740
Abstract
With the continuous advancement of autonomous driving technology, an increasing number of high-definition (HD) maps have been generated and stored in geospatial databases. These HD maps can provide strong localization support for mobile robots equipped with light detection and ranging (LiDAR) sensors. However, the global localization of heterogeneous robots under complex environments remains challenging. Most of the existing point cloud global localization methods perform poorly due to the different perspective views of heterogeneous robots. Leveraging existing HD maps, this paper proposes a base-map-guided heterogeneous robots localization solution. A novel co-view context descriptor with rotational invariance is developed to represent the characteristics of heterogeneous point clouds in a unified manner. The pre-set base map is divided into virtual scans, each of which generates a candidate co-view context descriptor. These descriptors are assigned to robots before operations. By matching the query co-view context descriptors of a working robot with the assigned candidate descriptors, the coarse localization is achieved. Finally, the refined localization is done through point cloud registration. The proposed solution can be applied to both single-robot and multi-robot global localization scenarios, especially when communication is impaired. The heterogeneous datasets used for the experiments cover both indoor and outdoor scenarios, utilizing various scanning modes. The average rotation and translation errors are within 1° and 0.30 m, indicating the proposed solution can provide reliable localization support despite communication failures, even across heterogeneous robots. Full article

24 pages, 7524 KB  
Article
Spatial Feature-Based ISAR Image Registration for Space Targets
by Lizhi Zhao, Junling Wang, Jiaoyang Su and Haoyue Luo
Remote Sens. 2024, 16(19), 3625; https://doi.org/10.3390/rs16193625 - 28 Sep 2024
Cited by 6 | Viewed by 1278
Abstract
Image registration is essential for applications requiring the joint processing of inverse synthetic aperture radar (ISAR) images, such as interferometric ISAR, image enhancement, and image fusion. Traditional image registration methods, developed for optical images, often perform poorly with ISAR images due to their differing imaging mechanisms. This paper introduces a novel spatial feature-based ISAR image registration method. The method encodes spatial information by utilizing the distances and angles between dominant scatterers to construct translation and rotation-invariant feature descriptors. These feature descriptors are then used for scatterer matching, while the coordinate transformation of matched scatterers is employed to estimate image registration parameters. To mitigate the glint effects of scatterers, the random sample consensus (RANSAC) algorithm is applied for parameter estimation. By extracting global spatial information, the constructed feature curves exhibit greater stability and reliability. Additionally, using multiple dominant scatterers ensures adaptability to low signal-to-noise (SNR) ratio conditions. The effectiveness of the method is validated through both simulated and natural ISAR image sequences. Comparative performance results with traditional image registration methods, such as the SIFT, SURF and SIFT+SURF algorithms, are also included. Full article
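RANSAC's role here (rejecting glint-corrupted scatterer matches before parameter estimation) can be illustrated with the simplest rigid case, a pure 2-D translation; the paper estimates full registration parameters, so this is a reduced sketch with illustrative defaults.

```python
import numpy as np

def ransac_translation(src, dst, thresh=1.0, iters=200, seed=0):
    """RANSAC for a 2-D translation between putative point matches:
    hypothesize from one random match, score by inlier count, then
    refit the translation on the best consensus set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # 1-point hypothesis
        inliers = np.linalg.norm(src + t - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    t = (dst[best] - src[best]).mean(axis=0)      # least-squares refit
    return t, best
```

Because the consensus score ignores the gross outliers, a handful of glinting scatterers cannot drag the estimate, which is the property the abstract relies on.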
(This article belongs to the Section Engineering Remote Sensing)

16 pages, 13027 KB  
Article
A Real-Time Global Re-Localization Framework for a 3D LiDAR-Based Navigation System
by Ziqi Chai, Chao Liu and Zhenhua Xiong
Sensors 2024, 24(19), 6288; https://doi.org/10.3390/s24196288 - 28 Sep 2024
Viewed by 2190
Abstract
Place recognition is widely used to re-localize robots in pre-built point cloud maps for navigation. However, current place recognition methods can only be used to recognize previously visited places. Moreover, these methods are limited by the requirement of using the same types of sensors in the re-localization process and the process is time consuming. In this paper, a template-matching-based global re-localization framework is proposed to address these challenges. The proposed framework includes an offline building stage and an online matching stage. In the offline stage, virtual LiDAR scans are densely resampled in the map and rotation-invariant descriptors can be extracted as templates. These templates are hierarchically clustered to build a template library. The map used to collect virtual LiDAR scans can be built either by the robot itself previously, or by other heterogeneous sensors. So, an important feature of the proposed framework is that it can be used in environments that have never been visited by the robot before. In the online stage, a cascade coarse-to-fine template matching method is proposed for efficient matching, considering both computational efficiency and accuracy. In the simulation with 100 K templates, the proposed framework achieves a 99% success rate and around 11 Hz matching speed when the re-localization error threshold is 1.0 m. In the validation on The Newer College Dataset with 40 K templates, it achieves a 94.67% success rate and around 7 Hz matching speed when the re-localization error threshold is 1.0 m. All the results show that the proposed framework has high accuracy, excellent efficiency, and the capability to achieve global re-localization in heterogeneous maps. Full article
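The cascade coarse-to-fine matching idea can be sketched as a two-stage nearest-neighbour lookup over a clustered template library; the data layout (`clusters[i]` holding the template descriptors grouped under centroid i) is an assumption for illustration, not the paper's structure.

```python
import numpy as np

def cascade_match(query, centroids, clusters):
    """Coarse-to-fine template lookup: pick the nearest cluster centroid
    first (coarse), then search only that cluster's member templates
    (fine). Returns (cluster index, index within the cluster)."""
    coarse = int(np.linalg.norm(centroids - query, axis=1).argmin())
    members = clusters[coarse]
    fine = int(np.linalg.norm(members - query, axis=1).argmin())
    return coarse, fine
```

With K clusters of roughly N/K templates each, the search cost drops from O(N) to O(K + N/K) comparisons, which is the efficiency lever behind the reported ~11 Hz matching over 100 K templates.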

29 pages, 4861 KB  
Article
A New Approach for Effective Retrieval of Medical Images: A Step towards Computer-Assisted Diagnosis
by Suchita Sharma and Ashutosh Aggarwal
J. Imaging 2024, 10(9), 210; https://doi.org/10.3390/jimaging10090210 - 26 Aug 2024
Cited by 1 | Viewed by 1230
Abstract
The biomedical imaging field has grown enormously in the past decade. In the era of digitization, the demand for computer-assisted diagnosis is increasing day by day. The COVID-19 pandemic further emphasized how retrieving meaningful information from medical repositories can aid in improving the quality of patient’s diagnosis. Therefore, content-based retrieval of medical images has a very prominent role in fulfilling our ultimate goal of developing automated computer-assisted diagnosis systems. Therefore, this paper presents a content-based medical image retrieval system that extracts multi-resolution, noise-resistant, rotation-invariant texture features in the form of a novel pattern descriptor, i.e., MsNrRiTxP, from medical images. In the proposed approach, the input medical image is initially decomposed into three neutrosophic images on its transformation into the neutrosophic domain. Afterwards, three distinct pattern descriptors, i.e., MsTrP, NrTxP, and RiTxP, are derived at multiple scales from the three neutrosophic images. The proposed MsNrRiTxP pattern descriptor is obtained by scale-wise concatenation of the joint histograms of MsTrP×RiTxP and NrTxP×RiTxP. To demonstrate the efficacy of the proposed system, medical images of different modalities, i.e., CT and MRI, from four test datasets are considered in our experimental setup. The retrieval performance of the proposed approach is exhaustively compared with several existing, recent, and state-of-the-art local binary pattern-based variants. The retrieval rates obtained by the proposed approach for the noise-free and noisy variants of the test datasets are observed to be substantially higher than the compared ones. Full article

15 pages, 38862 KB  
Article
Crater Triangle Matching Algorithm Based on Fused Geometric and Regional Features
by Mingda Jin and Wei Shao
Aerospace 2024, 11(6), 417; https://doi.org/10.3390/aerospace11060417 - 21 May 2024
Cited by 1 | Viewed by 1320
Abstract
Craters are regarded as significant navigation landmarks during the descent and landing process in small body exploration missions for their universality. Recognizing and matching craters is a crucial prerequisite for visual and LIDAR-based navigation tasks. Compared to traditional algorithms, deep learning-based crater detection algorithms can achieve a higher recognition rate. However, matching crater detection results under various image transformations still poses challenges. To address the problem, a composite feature-matching algorithm that combines geometric descriptors and region descriptors (extracting normalized region pixel gradient features as feature vectors) is proposed. First, the geometric configuration map is constructed based on the crater detection results. Then, geometric descriptors and region descriptors are established within each feature primitive of the map. Subsequently, taking the salience of geometric features into consideration, composite feature descriptors with scale, rotation, and illumination invariance are generated through fusion geometric and region descriptors. Finally, descriptor matching is accomplished by computing the relative distances between descriptors and adhering to the nearest neighbor principle. Experimental results show that the composite feature descriptor proposed in this paper has better matching performance than only using shape descriptors or region descriptors, and can achieve a more than 90% correct matching rate, which can provide technical support for the small body visual navigation task. Full article
(This article belongs to the Special Issue Space Navigation and Control Technologies)

16 pages, 6884 KB  
Article
Gradient Weakly Sensitive Multi-Source Sensor Image Registration Method
by Ronghua Li, Mingshuo Zhao, Haopeng Xue, Xinyu Li and Yuan Deng
Mathematics 2024, 12(8), 1186; https://doi.org/10.3390/math12081186 - 15 Apr 2024
Cited by 3 | Viewed by 1154
Abstract
To address the alignment difficulties caused by nonlinear radiometric differences between multi-source sensor images, coherent speckle noise, and other factors, a registration method for gradient-weakly-sensitive multi-source sensor images is proposed; it does not need to extract the image gradient at any stage and has rotational invariance. In the feature point detection stage, the maximum moment map is obtained by using the phase consistency transform to replace the gradient edge map for chunked Harris feature point detection, thus increasing the number of repeated feature points in the heterogeneous image. To give the subsequent descriptors rotational invariance, a method to determine the main phase angle is proposed. The phase angle of the region near the feature point is counted, and the parabolic interpolation method is used to estimate a more accurate main phase angle within the determined interval. In the feature description stage, the Log-Gabor convolution sequence is used to construct the index map with the maximum phase amplitude, the heterogeneous image is converted to an isomorphic image, and the isomorphic image of the region around the feature point is rotated by using the main phase angle, which is in turn used to construct the feature vector with the feature point as the center by the quadratic interpolation method. In the feature matching stage, feature matching is performed by using the sum of squares of Euclidean distances as a similarity metric. Finally, qualitative and quantitative experiments on six groups of five pairs of multi-source sensor images, evaluating correct matching rates, root mean square errors, and the number of correctly matched points, verify that the proposed algorithm is more robust and accurate than current algorithms. Full article
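The parabolic-interpolation refinement of the main phase angle is a standard three-point peak fit; a minimal sketch, taking the histogram values to the left of, at, and to the right of the maximum bin, is:

```python
def parabolic_peak_offset(y_left, y_peak, y_right):
    """Sub-bin offset of the vertex of a parabola fitted through three
    equally spaced samples, relative to the center sample; the result
    lies in (-0.5, 0.5) when y_peak is a strict local maximum."""
    denom = y_left - 2.0 * y_peak + y_right
    if denom == 0.0:          # flat triple: no refinement possible
        return 0.0
    return 0.5 * (y_left - y_right) / denom
```

The refined angle is then `(peak_bin + offset) * bin_width`, giving sub-bin accuracy without a finer histogram.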
(This article belongs to the Special Issue Applied Mathematical Modeling and Intelligent Algorithms)
