Article

Curvature-Aware Point-Pair Signatures for Robust Unbalanced Point Cloud Registration

1 School of Electronics and Control Engineering, Chang’an University, Xi’an 710064, China
2 China Railway Design Corporation, Tianjin 300251, China
3 The School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Sensors 2025, 25(20), 6267; https://doi.org/10.3390/s25206267
Submission received: 3 September 2025 / Revised: 6 October 2025 / Accepted: 7 October 2025 / Published: 10 October 2025
(This article belongs to the Section Sensing and Imaging)

Abstract

Existing point cloud registration methods can effectively handle large-scale and partially overlapping point cloud pairs. However, registering unbalanced point cloud pairs with significant disparities in spatial extent and point density remains a challenging problem that has received limited research attention. This challenge primarily arises from the difficulty in achieving accurate local registration when the point clouds exhibit substantial scale variations and uneven density distributions. This paper presents a novel registration method for unbalanced point cloud pairs that utilizes the local point cluster structure feature for effective outlier rejection. The fundamental principle underlying our method is that the internal structure of a local cluster comprising a point and its K-nearest neighbors maintains rigidity-preserved invariance across different point clouds. The proposed pipeline operates through four sequential stages. First, keypoints are detected in both the source and target point clouds. Second, local feature descriptors are employed to establish initial one-to-many correspondences, a strategy that increases correspondence redundancy to enhance the pool of potential inliers. Third, the proposed Local Point Cluster Structure Feature is applied to filter outliers from the initial correspondences. Finally, the transformation hypothesis is generated and evaluated through the RANSAC method. To validate the efficacy of the proposed method, we construct a carefully designed benchmark named KITTI-UPP (KITTI-Unbalanced Point cloud Pairs) based on the KITTI odometry dataset. We further evaluate our method on the real-world TIESY dataset, a LiDAR-scanned dataset collected by the Third Railway Survey and Design Institute Group Co. Extensive experiments demonstrate that our method significantly outperforms the state-of-the-art methods in terms of both registration success rate and computational efficiency on the KITTI-UPP benchmark.
Moreover, it achieves competitive results on the real-world TIESY dataset, confirming its applicability and generalizability across diverse real-world scenarios.

1. Introduction

Point cloud registration (PCR) is a fundamental task in 3D computer vision [1,2], which aims to estimate a six-degree-of-freedom (6-DoF) rigid transformation so that point clouds can be precisely aligned. This technique plays a critical role in various applications, including autonomous vehicle localization, 3D object detection [3,4], and large-scale scene reconstruction [5]. While existing methods assume scale-balanced inputs, practical scenarios frequently necessitate unbalanced registration, where a local partial scan (e.g., single LiDAR frame) must be aligned with a global environmental map (e.g., city-scale 3D model) [6,7]. This paradigm introduces substantial challenges primarily arising from extreme scale disparities that significantly amplify outlier ratios [8,9,10]. Non-uniform density distributions further suppress discriminative local structures during feature extraction [11,12], while performance degradation becomes particularly pronounced in low-overlap scenarios [12]. The presence of repetitive architectural patterns also contributes to matching ambiguities in large-scale environments [13].
Despite substantial progress in handling low-overlap and high-outlier scenarios [11,14], current point cloud registration methods exhibit critical limitations when faced with unbalanced inputs characterized by significant differences in spatial extent and point density [15,16]. Feature-based methods leveraging matchability detection [11] or hierarchical correspondence prediction [13] frequently succumb to descriptor indistinctiveness as sparse local structures interact with dense environmental regions, while deep learning-based registration networks optimized for balanced inputs demonstrate poor generalization to extreme scale variations [12,17]. Furthermore, global localization frameworks that discretize large-scale maps into fragmented components [18,19] introduce artificial segmentation artifacts, fundamentally compromising holistic alignment integrity. Crucially, all of these methods suffer from structural distractions, whereby repetitive geometric patterns in expansive environments misguide feature matching [11], compounded by density-induced bias that drowns discriminative local features within high-density zones of global point clouds [12].
To address these challenges, we propose a curvature-aware (CURV) point-pair signature method specifically designed for unbalanced point cloud registration. Our main contributions are summarized as follows:
  • Curvature-optimized keypoint detection that significantly outperforms existing methods (e.g., ISS, H3D) [8,20,21,22,23] in repeatability under scale variations.
  • Systematic investigation of one-to-N correspondence for unbalanced PCR, demonstrating its superiority over conventional one-to-one matching paradigms [24,25].
  • By integrating curvature-aware signatures with geometric consistency validation, our rejection mechanism simultaneously improves inlier selection ratios, maintains registration accuracy, and ensures computational efficiency [9,26,27].
This paper is structured as follows: Section 2 reviews the existing literature on point cloud registration, covering both balanced and unbalanced scenarios. Section 3 details our method, including problem formulation, keypoint detection via voxel downsampling and curvature estimation, one-to-many correspondence generation using feature similarity, and local cluster-based matching. Section 4 presents the experimental setup and evaluation results validating our method. Section 5 draws the conclusions and discusses future research directions.

2. Related Work

This section provides a brief overview of the existing point cloud registration methods, including balanced and unbalanced registration methods.

2.1. Balanced Point Cloud Registration

Recent advances in point cloud registration can be broadly categorized into two main groups: traditional geometric methods and deep learning-based methods.
For geometric methods, the classical RANSAC framework [8,20,21,25,26,28] remains fundamental for 6-DoF pose estimation. Several improved variants have been developed, including SAC-IA (utilizing spatial uniform sampling) [24], GC-RANSAC (employing graph-cut optimization) [8], and SAC-COT (based on compatibility graph sampling) [25]. Complementary global optimization methods such as GO-ICP [27] and GORE [9] have been proposed, implementing intelligent ICP [29] scheduling and precise boundary computation, respectively.
Deep learning-based methods have introduced various novel architectures for point cloud registration. Some methods, such as FCGF [30] and D3Feat [11], employ fully convolutional designs and joint learning frameworks for feature extraction. Architectures such as Predator [12] utilize attention mechanisms to address low-overlap scenarios, while SpinNet [1] incorporates rotation-invariant designs. Additionally, some methods like PointDSC [11] and RGM [31] apply deep learning and graph matching techniques for inlier/outlier differentiation. Recent detection-free methods [32], particularly CoFiNet [17] and GeoTransformer [33], focus on transformation parameter estimation in an end-to-end manner.
Despite these advancements, both methodological categories present certain limitations. Traditional geometric methods face computational challenges in high-outlier scenarios [10] and demonstrate sensitivity to point cloud scale variations [9,27]. Deep learning-based methods remain constrained by their dependence on extensive training data and exhibit limited generalization capabilities [12,13,34]. Emerging hybrid methods such as MAC [14,35] attempt to address these limitations by integrating geometric- and learning-based advantages. Future research should focus on developing more efficient and robust registration algorithms capable of handling complex 3D scenarios in practical applications.

2.2. Unbalanced Point Cloud Registration

Unbalanced registration addresses scenarios characterized by extreme disparities in spatial scale or point density, exemplified by the alignment of large-scale reconstructed maps with local scans [7,15,36,37]. Existing methods can be broadly categorized into two paradigms.
The first is the localization paradigm, which adopts a two-stage “retrieval + local registration” framework: global descriptors like NetVLAD [7] first retrieve the most relevant reference frame from a prebuilt map, followed by balanced pairwise registration. Although some methods such as Du [16] attempt joint optimization of local and global features, this framework suffers from fundamental limitations [38]. Significant computational and memory overhead arises from storing numerous overlapping reference frames [18,19], leading to memory explosion. The method also fails to handle domain differences; for instance, severe density gaps between sparse SfM point clouds and dense LiDAR scans cause feature matching to collapse [39]. Crucially, errors from the retrieval stage propagate irreversibly—if coarse alignment fails, subsequent registration cannot recover [38].
The second is the registration paradigm, where existing robust estimators catastrophically fail under unbalanced conditions. RANSAC assumes uniform point distributions [8,20], causing ineffective minimal set sampling in density-variant regions. BnB methods [9,10,27] rely on spatial consistency for bound computation, but scale discrepancies induce overly loose bounds that nullify optimality guarantees. Spatial compatibility-based outlier filters [35] misclassify correspondences in non-overlapping areas due to geometric asymmetry. Although specialized pipelines such as Lu [11] target niche challenges (e.g., large-scale LiDAR registration), no unified framework addresses unbalanced registration holistically [13,40]. The core conflict lies in the inherent “equilibrium assumption” of traditional methods clashing with the geometric heterogeneity of unbalanced scenarios, demanding fundamentally adaptive mechanisms for scale and density variations.

3. Method

Our method, illustrated in Figure 1, registers a partially scanned source point cloud to a large-scale target under a significant scale difference. First, keypoints are extracted from the two point clouds based on curvature information. Second, initial dense correspondences are generated via the FPFH [24] descriptor and a feature matching procedure applied to both point clouds. Finally, a novel point-pair signature is introduced to remove outliers from the initial correspondences, and the 6-DoF transformation is then estimated through robust hypothesis evaluation.

3.1. Problem Formulation

Given a source point cloud $P$ and a target point cloud $Q$, downsampled from the original partial-view source point cloud $S \in \mathbb{R}^{n \times 3}$ and the complete target point cloud $T \in \mathbb{R}^{m \times 3}$, respectively, with $n \ll m$ indicating a substantial scale difference, a correspondence pair $C = (p, q)$ represents a feature match between a point $p \in P$ and a point $q \in Q$. The goal of registration is to find a 6-DoF rigid transformation, composed of a rotation matrix $R$ and a translation vector $t$, that optimally aligns $P$ to $Q$ such that $q = R \cdot p + t$ holds for corresponding points.

3.2. Keypoint Detection

The critical innovations introduced by our curvature-aware detection module are fundamentally different from traditional keypoint detectors like ISS [22]. While ISS [22] typically employs a fixed threshold and uniform neighborhood processing, our method utilizes a normalized eigenvalue ratio $\kappa_i^p = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2 + \epsilon}$ that prioritizes subtle geometric features through the smallest eigenvalue, significantly enhancing sensitivity to scale variations. Furthermore, our integrated pipeline combines an adaptive threshold $\tau_\kappa$ with hysteresis-based NMS using a dynamic radius $r_n = \alpha r_v$ ($\alpha \in [2, 5]$), ensuring robust keypoint selection across non-uniform point densities and reducing noise sensitivity through second-order differential properties. The unique combination of density invariance, noise robustness, and geometric discriminability specifically addresses the inherent challenges of unbalanced scenarios, including extreme scale disparities and heterogeneous density distributions.
The practical implementation of our strategy relies on the strategic configuration of two core parameters: the voxel size r v and curvature threshold τ κ . The voxel size r v functions as a physical-scale normalizer, with its selection guided by the characteristic scale of salient geometric features in the target environment, providing inherent adaptability to different scene scales and point densities. Complementarily, the curvature threshold τ κ governs the essential trade-off between keypoint repeatability and distinctiveness, configurable through analysis of curvature value distributions in representative data samples. The principled yet flexible parameter selection strategy underpins the method’s consistent performance across diverse datasets and sensing conditions, as evidenced by our results on both KITTI-UPP and TIESY benchmarks, representing a key strength for handling the inherent variations in unbalanced point cloud registration.
Given source point cloud P and target point cloud Q , we perform keypoint detection through curvature analysis as follows:

3.2.1. Voxel Downsampling

Both point clouds undergo voxel grid filtering for computational efficiency:
$P = \mathrm{VoxelDownSample}(P, r_v),$
$Q = \mathrm{VoxelDownSample}(Q, r_v),$
where $r_v$ is the voxel edge length controlling the downsampling resolution. This operation preserves geometric features larger than $r_v$ while reducing point density.
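Voxel grid filtering can be sketched as follows; this is an illustrative, brute-force implementation assuming numpy (the function name `voxel_downsample` is ours, not from the paper), replacing each occupied voxel by the centroid of its points:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, r_v: float) -> np.ndarray:
    """Replace all points falling into the same r_v-sized voxel by their centroid."""
    idx = np.floor(points / r_v).astype(np.int64)   # voxel index of each point
    sums, counts = {}, {}                           # running centroid accumulators per voxel
    for key, p in zip(map(tuple, idx), points):
        sums[key] = sums.get(key, np.zeros(3)) + p
        counts[key] = counts.get(key, 0) + 1
    return np.array([sums[k] / counts[k] for k in sums])
```

Production pipelines would typically use a library routine (e.g., an octree- or hash-based downsampler) instead of this dictionary-based sketch.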

3.2.2. Local Curvature Estimation

For each point $p_i \in P$ and $q_j \in Q$, compute the covariance matrix over its $r_c$-radius neighborhood:
$\mathcal{N}_i^p = \{ p_j \mid \| p_j - p_i \|_2 \le r_c \},$
$C_i^p = \frac{1}{|\mathcal{N}_i^p|} \sum_{p_j \in \mathcal{N}_i^p} (p_j - \bar{p}_i)(p_j - \bar{p}_i)^T.$
Through the eigendecomposition $C_i^p = V \Lambda V^T$ ($\Lambda = \mathrm{diag}(\lambda_0, \lambda_1, \lambda_2)$), the curvature is computed as follows:
$\kappa_i^p = \frac{\lambda_0}{\lambda_0 + \lambda_1 + \lambda_2 + \epsilon}, \quad \epsilon = 10^{-8},$
where $\lambda_0 \le \lambda_1 \le \lambda_2$ encode the local surface variation. The same process is applied to $Q$ to obtain $\kappa_j^q$.
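The curvature estimate above can be sketched in numpy as follows; this is a minimal illustration (brute-force radius search rather than a KD-tree, and the function name is ours):

```python
import numpy as np

def local_curvature(points: np.ndarray, i: int, r_c: float, eps: float = 1e-8) -> float:
    """kappa_i = lambda_0 / (lambda_0 + lambda_1 + lambda_2 + eps) over the r_c-neighborhood."""
    p_i = points[i]
    nbrs = points[np.linalg.norm(points - p_i, axis=1) <= r_c]  # N_i^p (includes p_i)
    centered = nbrs - nbrs.mean(axis=0)                          # subtract neighborhood mean
    cov = centered.T @ centered / len(nbrs)                      # covariance matrix C_i^p
    lam = np.sort(np.linalg.eigvalsh(cov))                       # lambda_0 <= lambda_1 <= lambda_2
    return float(lam[0] / (lam.sum() + eps))
```

For a locally planar neighborhood, $\lambda_0 \approx 0$ and the curvature is close to zero; sharp structures yield larger values.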

3.2.3. Candidate Keypoint Selection

Select geometrically salient points from both clouds:
$K_p = \{ p_i \in P \mid \kappa_i^p > \tau_\kappa \},$
$K_q = \{ q_j \in Q \mid \kappa_j^q > \tau_\kappa \},$
where $\tau_\kappa$ is the curvature threshold for keypoint selection.

3.2.4. Non-Maximum Suppression

The refined keypoints are selected through spatial competition:
$P_{\mathrm{key}} = \{ p_i \in K_p \mid \kappa_i^p > \max_{p_j \in B(p_i, r_n)} (\kappa_j^p - \delta) \},$
$Q_{\mathrm{key}} = \{ q_j \in K_q \mid \kappa_j^q > \max_{q_k \in B(q_j, r_n)} (\kappa_k^q - \delta) \},$
where $B(p_i, r_n) = \{ p_j \mid \| p_j - p_i \|_2 \le r_n \}$, $r_n = \alpha r_v$ ($\alpha \in [2, 5]$), and $\delta$ introduces hysteresis.
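A minimal sketch of the hysteresis-based NMS step, assuming numpy and a brute-force ball query (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def nms_keypoints(points, kappa, tau, r_n, delta):
    """Keep candidates whose curvature exceeds every ball neighbor's curvature minus delta."""
    points, kappa = np.asarray(points), np.asarray(kappa)
    keep = []
    for i, p in enumerate(points):
        if kappa[i] <= tau:
            continue                                        # not a candidate in K_p
        ball = np.linalg.norm(points - p, axis=1) <= r_n    # B(p_i, r_n), includes p_i
        if kappa[i] > kappa[ball].max() - delta:            # hysteresis-based suppression
            keep.append(i)
    return keep
```

A larger $\delta$ lets strong near-maxima survive alongside the local maximum, which is the hysteresis effect the text describes.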

3.3. One-to-Many Correspondences

The proposed one-to-many matching strategy provides an effective solution to the feature asymmetry challenge inherent in unbalanced point cloud registration. Building upon the detected keypoints P key and Q key from Section 3.2, our method establishes multiple potential correspondences for each source keypoint, creating a redundant yet discriminative matching space. This design offers two key benefits: (1) the increased correspondence candidates significantly improve inlier counts for final registration; (2) the adaptive feature extraction pipeline maintains geometric consistency while accommodating density variations.
The core innovation of our method lies in its integrated handling of two critical aspects: local feature representation and correspondence validation. For feature representation, we employ FPFH descriptor with adaptive radius scaling to dynamically adjust to non-uniform point distributions. For correspondence validation, the reciprocal L 2 -norm similarity metric provides robust constraints in feature space, effectively suppressing mismatches from non-overlapping regions. This combination ensures both the quantity and quality of matches, addressing the fundamental challenges in unbalanced registration.

3.3.1. Feature Extraction

The FPFH features are computed from source and target keypoints:
$F_s = \{ \phi(p) \mid p \in P_{\mathrm{key}} \},$
$F_t = \{ \phi(q) \mid q \in Q_{\mathrm{key}} \},$
where $\phi(\cdot)$ denotes the FPFH descriptor, computed with an adaptive radius search:
$r = \alpha \cdot \bar{d}, \quad \bar{d} = \frac{1}{|K|} \sum_{k=1}^{|K|} \| p_k - \mathrm{NN}_K(p_k) \|.$
Here $K$ represents the keypoint set ($P_{\mathrm{key}}$ for source, $Q_{\mathrm{key}}$ for target), $\bar{d}$ is the average spacing between keypoints, and $\alpha$ is a scale factor. The nearest neighbor $\mathrm{NN}_K(p_k)$ is searched within the same keypoint set.

3.3.2. Similarity Metric

This curvature-aware similarity metric facilitates robust correspondence establishment by quantifying the geometric affinity between locally salient points across different point clouds. The feature similarity is defined using L 2 -norm reciprocal:
$s(f_i, f_j) = \frac{1}{1 + \| f_i - f_j \|_2}.$

3.3.3. One-to-Many Correspondence Generation

For each source keypoint $p \in P_{\mathrm{key}}$, find the top $N$ target keypoints with the highest similarity:
$T(p) = \mathrm{top}_N \{ s(\phi(p), \phi(q)) \mid q \in Q_{\mathrm{key}} \},$
where $\mathrm{top}_N$ selects the $N$ points with maximum similarity. The initial correspondence set is generated as follows:
$C_{\mathrm{initial}} = \bigcup_{p \in P_{\mathrm{key}}} \{ (p, q) \mid q \in T(p),\ s(\phi(p), \phi(q)) > \theta \}.$
This strategy generates up to $|P_{\mathrm{key}}| \times N$ keypoint-level correspondences. The redundant design boosts inlier quantity, while the feature similarity constraint ($\theta = 0.2$) filters interference from non-overlapping regions.
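The one-to-many generation step, combined with the reciprocal $L_2$ similarity from Section 3.3.2, can be sketched as follows (a brute-force illustration assuming numpy; real pipelines would use a KD-tree or approximate nearest-neighbor search over the descriptors):

```python
import numpy as np

def one_to_many(feats_src, feats_tgt, N=3, theta=0.2):
    """For each source feature, keep the top-N most similar target features above theta."""
    # s(f_i, f_j) = 1 / (1 + ||f_i - f_j||_2), computed for all pairs
    dists = np.linalg.norm(feats_src[:, None, :] - feats_tgt[None, :, :], axis=2)
    sims = 1.0 / (1.0 + dists)
    corrs = []
    for i, row in enumerate(sims):
        top = np.argsort(row)[::-1][:N]              # indices of the N largest similarities
        corrs += [(i, int(j)) for j in top if row[j] > theta]
    return corrs
```

With $N = 1$ this degenerates to conventional one-to-one matching; larger $N$ trades a lower inlier ratio for a higher inlier count, as analyzed in Section 4.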

3.4. Local Geometric Verification

The local geometric verification module establishes robust correspondences by examining the structural consistency between keypoints and their k-nearest neighborhoods (illustrated using $k = 4$), as visualized in Figure 2. For each candidate pair $(p, q) \in C_{\mathrm{initial}}$, ordered neighbor sets $\mathcal{N}_p = \{ p_i \}$ and $\mathcal{N}_q = \{ q_i \}$ (sorted by Euclidean distance $\| p - p_i \|$) are extracted to calculate two complementary geometric descriptors: (1) relative distance features $d_i = \| p - p_i \|$ and $\hat{d}_i = \| q - q_i \|$, and (2) angular consistency measures $\alpha(i, j)$ and $\hat{\alpha}(i, j)$. The binary agreement indicators $M_d(i)$ and $M_\alpha(i, j)$ are then derived using adaptive thresholds $\delta_d$ and $\delta_\alpha$, with the final similarity score $S(p, q)$ calculated as a normalized consensus of matched characteristics.
This dual feature paradigm addresses a critical limitation of conventional methods, which often solely rely on distance metrics, by introducing angular coherence as an orthogonal validation signal. Furthermore, the adaptive thresholding mechanism dynamically adjusts to local point density variations, ensuring reliable performance even under significant noise. In particular, the neighborhood-preserving design guarantees that local structures from sparser point clouds remain identifiable within denser counterparts, overcoming a key challenge in unbalanced registration scenarios.
Building upon the initial correspondences in Section 3.3, geometric verification is performed through local structure matching. Each candidate ( p , q ) is evaluated by comparing its k-NN geometric constellation, achieving computational efficiency via feature evaluation while outperforming global verification methods in outlier rejection. The balance of precision and scalability makes the method particularly suitable for real-time applications where structural fidelity must be preserved without compromising processing speed.

3.4.1. Local Structure Construction

For each keypoint $p \in P_{\mathrm{key}}$ and $q \in Q_{\mathrm{key}}$, its local geometric structure is constructed using its $k$ nearest neighbors:
$\mathcal{N}_p = \{ p_i \mid i = 1, 2, \ldots, k \}, \quad \| p - p_i \| \le \| p - p_{i+1} \|,$
$\mathcal{N}_q = \{ q_i \mid i = 1, 2, \ldots, k \}, \quad \| q - q_i \| \le \| q - q_{i+1} \|,$
where the neighbors are ordered by the Euclidean distance from the central point.

3.4.2. Local Structure Similarity Matching

The geometric features and their matching criteria are defined as follows:
$d_i = \| p - p_i \|, \quad \hat{d}_i = \| q - q_i \|, \quad i \in \{ 1, 2, \ldots, k \},$
$\alpha(i, j) = \angle(p_i, p, p_j), \quad \hat{\alpha}(i, j) = \angle(q_i, q, q_j), \quad 1 \le i < j \le k.$
The $d_i$ and $\hat{d}_i$ are the distance features; the $\alpha(i, j)$ and $\hat{\alpha}(i, j)$ are the angle features. For each feature, a binary match indicator is defined based on threshold conditions:
Distance matching: $M_d(i) = 1$ if $| d_i - \hat{d}_i | < \delta_d$, and $0$ otherwise;
Angle matching: $M_\alpha(i, j) = 1$ if $| \alpha(i, j) - \hat{\alpha}(i, j) | < \delta_\alpha$, and $0$ otherwise,
where $\delta_d$ is a distance difference threshold and $\delta_\alpha$ is an angle difference threshold.
The similarity score is then computed as the proportion of matched features:
$S(p, q) = \frac{\sum_{i=1}^{k} M_d(i) + \sum_{1 \le i < j \le k} M_\alpha(i, j)}{k + \binom{k}{2}},$
where $k$ is the number of nearest neighbors and $\binom{k}{2} = \frac{k(k-1)}{2}$.
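The cluster similarity score $S(p, q)$ can be sketched as follows, assuming numpy and that the neighbor arrays are already distance-ordered as in Section 3.4.1 (the function name is illustrative):

```python
import numpy as np

def cluster_similarity(p, nbrs_p, q, nbrs_q, delta_d=0.1, delta_a=np.deg2rad(5)):
    """Fraction of matched distance and angle features between two ordered k-NN clusters."""
    k = len(nbrs_p)
    d  = np.linalg.norm(nbrs_p - p, axis=1)               # distance features d_i
    dh = np.linalg.norm(nbrs_q - q, axis=1)               # distance features d^_i
    matched = int(np.sum(np.abs(d - dh) < delta_d))       # sum of M_d(i)
    for i in range(k):                                    # angle features, 1 <= i < j <= k
        for j in range(i + 1, k):
            u, v = nbrs_p[i] - p, nbrs_p[j] - p
            a  = np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))
            u, v = nbrs_q[i] - q, nbrs_q[j] - q
            ah = np.arccos(np.clip(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)), -1, 1))
            matched += int(abs(a - ah) < delta_a)         # M_alpha(i, j)
    return matched / (k + k * (k - 1) // 2)               # S(p, q)
```

The default thresholds mirror the paper's experimental settings (0.1 m and 5°); a perfectly preserved local structure yields $S(p, q) = 1$.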

3.4.3. Matched Point Pairs Generation

Final correspondences are established by selecting candidate pairs that satisfy the similarity threshold condition:
$M = \{ (p, q) \mid (p, q) \in C_{\mathrm{initial}},\ S(p, q) > \tau \},$
where τ is a similarity threshold that controls the matching strictness and ensures correspondence quality.

3.5. Hypothesis Generation and Evaluation

Based on the filtered correspondence set $M$ obtained from outlier rejection, the number of outliers is significantly reduced, so we can directly employ the RANSAC algorithm to robustly estimate the optimal 6-DoF rigid transformation $(R, t)$. In each iteration, three point correspondences are randomly selected to compute a rigid transformation hypothesis $(R, t)$ via SVD, followed by inlier counting with threshold $\gamma$ (where $\| R p + t - q \|_2 < \gamma$) [41,42,43]. After typically thousands of iterations, the hypothesis with the maximum number of inliers is selected and refined using its consensus set to obtain the final transformation estimate.
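The per-iteration SVD step can be sketched with the standard Kabsch procedure, assuming numpy (the surrounding RANSAC loop, random sampling, and inlier counting are omitted; the function name is ours):

```python
import numpy as np

def rigid_from_corrs(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= R @ P + t, via SVD (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)   # centroids of the sampled correspondences
    H = (P - cp).T @ (Q - cq)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # correct an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t
```

Within RANSAC, this routine would be called on each random 3-point sample, and the hypothesis keeping the most correspondences with $\| R p + t - q \|_2 < \gamma$ would win.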
To quantitatively evaluate the registration accuracy, we compute the rotation error and translation error against the ground truth transformation $(R_{gt}, t_{gt})$. After multiple rounds of optimization, the best hypothesis is refined to obtain the final transformation. The registration result is evaluated using the following:
$\mathrm{RE} = \arccos\left( \frac{\mathrm{tr}(R R_{gt}^T) - 1}{2} \right) \times \frac{180}{\pi},$
$\mathrm{TE} = \| t - t_{gt} \|_2.$
The rotation error (RE) is computed using the trace operator $\mathrm{tr}(\cdot)$, which sums the diagonal elements of a matrix; it quantifies the angular deviation in degrees between the estimated and ground truth rotations. The translation error (TE) measures the Euclidean distance between the estimated and ground truth translation vectors. These metrics provide complementary perspectives on registration quality, with RE quantifying rotational alignment and TE quantifying positional accuracy. Lower values for both metrics indicate better registration performance.
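The two error metrics translate directly into code; a minimal numpy sketch (clipping guards the arccos argument against floating-point drift):

```python
import numpy as np

def registration_errors(R, t, R_gt, t_gt):
    """RE in degrees from the trace identity; TE as the Euclidean norm of the offset."""
    cos = np.clip((np.trace(R @ R_gt.T) - 1.0) / 2.0, -1.0, 1.0)
    re = float(np.degrees(np.arccos(cos)))    # rotation error in degrees
    te = float(np.linalg.norm(t - t_gt))      # translation error
    return re, te
```

Under the success criterion of Section 4.1, a pair counts as registered when `re <= 15` and `te <= 0.6` (meters).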

4. Experiments

4.1. Experimental Setup

Datasets. Our method is evaluated on two complementary datasets:
  • Synthetic KITTI-UPP: Created from KITTI Odometry with a fixed sampling interval (hop = 10) to control unbalance ratios [44]:
    - Balanced (1:1): 150-frame query vs. 150-frame reference
    - Moderate Unbalance (1:4): 150-frame query vs. 600-frame reference
    - Severe Unbalance (1:10): 150-frame query vs. 1500-frame reference
    Each group contains 108 registration pairs with query frames strictly excluded from reference point clouds. The unbalance ratios (1:1, 1:4, 1:10) precisely define the relative scale between source and target point clouds based on frame aggregation ranges. For example, the 1:10 configuration involves registration between a source point cloud aggregated from frames 0–150 and a target point cloud aggregated from frames 0–1500, ensuring the source is entirely contained within the target. Some samples are visualized in Figure 3.
  • Real-world TIESY dataset: Collected via mobile LiDAR scanning in diverse urban/rural environments by TIESY Survey Institute, featuring natural unbalance ratios of 1:4 to 1:6. This dataset comprises 95 registration pairs across 18 geographical areas, covering streets, buildings, vegetation, and infrastructure. Some samples of the dataset are visualized in Figure 4.
Evaluation Criteria. For both the KITTI-UPP dataset and the real-world TIESY dataset, we employ RE and TE as the evaluation metrics. A registration is considered successful when RE ≤ 15° and TE ≤ 60 cm. The registration success rate of a method on a dataset is defined as the ratio of successful cases to the total number of point cloud pairs.
Implementation Details. In our experiments, which are designed to evaluate performance under varying unbalanced conditions, we adopt appropriate values of N (the number of one-to-many matching pairs) for the three KITTI-UPP ratios: N = 1 for the 1:1 ratio, N = 12 for the 1:4 ratio, and N = 20 for the 1:10 ratio. For the scenario involving large-scale point clouds aggregated from 3000 frames, we keep N at 20. For the real-world TIESY dataset, all reported results are tested with N set to 12. For the local point cluster structure, the number of neighboring points k is set to four, while the angle and distance thresholds are configured as 5° and 0.1 m, respectively.

4.2. Results on KITTI-UPP Dataset

4.2.1. Evaluation on One-to-Many Correspondences

Building upon our evaluation framework, an evaluation of one-to-many correspondence generation is conducted using ISS [22] keypoints on the KITTI-UPP dataset with the 1:4 ratio. The assessment examines two distinct registration scenarios: (1) Small-to-Large: registration from the partial scans to the whole; (2) Large-to-Small: registration from the whole scans to the partial.
In the experiments, the parameter N is varied across the following discrete values:
Small-to-Large: $N \in \{ 1, 4, 6, 8, 12, 16, 20, 24 \}$;
Large-to-Small: $N \in \{ 1, 2, 4, 6 \}$.
Six methods, including Fast-MAC [43], MAC [41], SC2-PCR [42], Teaser++ [45], FGR [46], and RANSAC [28], are rigorously evaluated under both configurations. Comprehensive performance results, including registration rates and error metrics, are provided in Table 1. Cases where Fast-MAC [43], MAC [41], and Teaser++ [45] fail to process due to excessive correspondence pairs or memory overflow are treated as registration failures and reflected in the registration success rate. Specifically, this occurs for MAC and Fast-MAC at N = 16, 20, and 24 in the small-to-large scenario and at N = 4 and 6 in the large-to-small scenario, while Teaser++ encounters similar limitations at N = 24 in the small-to-large scenario and at N = 4 and 6 in the large-to-small scenario.
As evidenced by the results in Table 1, our method consistently achieves superior performance under all experimental configurations. (1) A key observation is the significant performance improvement across nearly all methods as N increases. For instance, when N is increased from 1 to 12, the RR of methods such as Fast-MAC [43], MAC [41], and SC2-PCR [42] improves significantly from 51.85% to 97.22%, strongly validating the effectiveness of the one-to-many correspondence strategy. (2) While Fast-MAC [43], MAC [41], and Teaser++ [45] exhibit improved accuracy with larger N values, they often incur substantial computational overhead or even fail to process dense correspondence sets efficiently, resulting in performance degradation. When N is increased from 12 to 24, the registration time of methods such as Fast-MAC [43] and MAC [41] increases significantly, while the registration rate drops markedly from 97.22% to 26.85%. In contrast, our method not only attains the highest registration accuracy but also maintains the lowest computational time, demonstrating remarkable efficiency even at N = 24. (3) Our method retains competitive performance under large values of N when the keypoint detection module is substituted with the ISS [22] method.

4.2.2. Experiments with Different Keypoint Modules

We conduct extensive experiments on the KITTI-UPP dataset under three unbalanced registration scenarios at ratios of 1:1, 1:4, and 1:10. Our evaluation framework incorporates multiple keypoint detection methods, including ISS [22], H3D [23], and our curvature-based method. As shown in Table 2, we report comprehensive comparisons with both traditional methods, including Fast-MAC [43], MAC [41], SC2-PCR [42], Teaser++ [45], FGR [46], and RANSAC [28], as well as deep learning-based methods, specifically GeoTransformer [33] and PARENet [47].
The comprehensive comparison demonstrates that our method achieves superior performance when utilizing curvature-based keypoints, particularly in severely unbalanced scenarios. Under the “CURV + FPFH” configuration, our method not only attains the highest registration success rates across all unbalance ratios (100.00%, 99.07%, and 95.37%, respectively) but also maintains exceptional computational efficiency with consistently low time consumption (0.11 s, 0.10 s, and 0.10 s, respectively). This advantage becomes particularly evident in the most challenging 1:10 ratio scenario, where our method outperforms all competing methods by achieving the highest registration success rate while simultaneously maintaining significantly superior computational efficiency compared to other high-performance methods. By contrast, the deep learning-based methods GeoTransformer [33] and PARENet [47], which utilize their official KITTI pre-trained weights with uniformly downsampled inputs, exhibit suboptimal performance, achieving only 38.89% and 71.30% success rates, respectively, under the 1:10 ratio. This performance degradation worsens as the scale unbalance increases and stems from their inability to handle the extreme point density variation and heterogeneous geometric information in unbalanced scenarios, a fundamental challenge that current learning-based architectures struggle to address. The comparative visualization of registration results for all methods is presented in Figure 3, featuring three representative scenarios from the KITTI-UPP dataset.
To further investigate performance boundaries under extreme unbalanced conditions, we conducted an additional experiment with an unprecedented unbalance ratio of 1:20, which registers a 150-frame query point cloud against a 3000-frame reference point cloud. This scenario presents substantial challenges due to the massive scale disparity and significantly amplified outlier ratios. We selectively compared our method with the most competitive methods, including Fast-MAC [43], MAC [41], SC2-PCR [42], and Teaser++ [45], which had demonstrated superior performance in prior experiments. As evidenced in Table 3, our method achieves the highest registration success rate of 94.44% with exceptional computational efficiency of 0.11 s, a 2.77 percentage point improvement over SC2-PCR [42], which achieved 91.67% with 0.16 s computation time. Similarly, our method outperforms Teaser++ [45], MAC [41], and Fast-MAC [43] by 8.33, 11.11, and 14.81 percentage points in registration success rate, respectively, while maintaining substantially lower computational requirements. These results demonstrate the notable competitiveness and exceptional robustness of our method even in this ultra-extreme scenario, underscoring the effectiveness of our curvature-aware approach in handling severe scale disparities while ensuring computational efficiency.

4.3. Results on TIESY Dataset

4.3.1. Evaluation on One-to-Many Correspondences

Building upon our evaluation framework, this study conducts an evaluation of one-to-many correspondence generation using ISS [22] keypoints on the real-world TIESY Dataset with unbalanced ratios ranging from 1:4 to 1:6. The assessment examines two distinct registration scenarios: (1) Small-to-Large, registration from the partial scans to the whole; and (2) Large-to-Small, registration from the whole scans to the partial.
In the experiments, the parameter N is varied across the following discrete values:
Small-to-Large: N ∈ {1, 2, 4, 8, 12, 16, 22, 26};
Large-to-Small: N ∈ {1, 2, 3, 4}.
Six point cloud registration methods, namely Fast-MAC [43], MAC [41], SC2-PCR [42], Teaser++ [45], FGR [46], and RANSAC [28], are rigorously evaluated under both configurations. Comprehensive performance results, including registration success rates and error metrics, are provided in Table 4.
The RR of Fast-MAC [43], MAC [41], and Teaser++ [45] exhibited a positive correlation with increasing N, albeit with a commensurate escalation in computation time: for instance, in the partial-to-whole registration scenario, as N increased from 1 to 12, the RR improved significantly from 68.42% to 86.32%. Conversely, FGR [46] and SC2-PCR [42] maintained stable performance across all N configurations, while RANSAC [28] degraded progressively as the number of correspondences expanded. In contrast, our method consistently achieves the highest registration success rate with the fastest computational speed, highlighting its superior efficiency and robustness under varying correspondence densities.
The following conclusions can be drawn: as N increases, the inlier count among the correspondences increases while the inlier ratio decreases. In practical registration, this effectively improves the registration success rate, but at the cost of significantly increased computation time.
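The one-to-many generation step itself reduces to an N-nearest-neighbor query in descriptor space. The sketch below is illustrative rather than our exact implementation; the function name and the 33-dimensional descriptors (FPFH-sized) are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def one_to_n_correspondences(src_desc, tgt_desc, n):
    """For each source keypoint descriptor, keep its n nearest target descriptors.

    src_desc: (Ms, D) source descriptors (e.g., 33-D FPFH); tgt_desc: (Mt, D).
    Returns an (Ms * n, 2) array of (source_index, target_index) candidate pairs.
    """
    tree = cKDTree(tgt_desc)                   # index target descriptors once
    _, nn_idx = tree.query(src_desc, k=n)      # n nearest neighbors in feature space
    nn_idx = np.asarray(nn_idx).reshape(len(src_desc), n)  # query returns (Ms,) when n == 1
    src_idx = np.repeat(np.arange(len(src_desc)), n)
    return np.stack([src_idx, nn_idx.ravel()], axis=1)

# Toy example: 5 source and 20 target descriptors, N = 4 candidates per source point.
rng = np.random.default_rng(0)
pairs = one_to_n_correspondences(rng.normal(size=(5, 33)), rng.normal(size=(20, 33)), 4)
print(pairs.shape)  # (20, 2)
```

Raising N multiplies the candidate pool (and with it the outlier ratio), which is exactly the trade-off observed above.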

4.3.2. Robust Experiments

We conducted extensive experiments on the real-world TIESY dataset under both noiseless and noisy conditions, evaluating three unbalanced registration scenarios. Our evaluation framework is built on CURV-based keypoint detection. For a comprehensive comparison, we evaluate both traditional methods, namely Fast-MAC [43], MAC [41], SC2-PCR [42], Teaser++ [45], FGR [46], and RANSAC [28], and deep learning-based methods, namely GeoTransformer [33] and PARENet [47]; the results are presented in Table 5. Notably, the deep learning-based methods are evaluated using their official KITTI pre-trained weights, as TIESY is also an outdoor dataset. However, as shown in Table 5, these learning-based methods perform significantly worse, achieving only 39.80% and 33.67% registration success rates, respectively, substantially lower than the traditional high-performance methods. The performance gap can be attributed to two related factors: the TIESY dataset features substantial environmental complexity, comprising a wide range of real-world scenarios from buildings to vegetation, and this diversity exposes the limited generalization capacity of deep learning-based methods under challenging and varied conditions.
We quantitatively evaluate the noise robustness of our method by testing with different levels of Gaussian noise. The results in Table 6 demonstrate that our method maintains competitive performance across noise conditions while keeping computation time consistently low. Our method remains strong at lower noise levels, with a registration success rate of 93.68% at noise level 0.01, but a noticeable degradation occurs at higher noise intensities, with the success rate dropping to 88.42% at noise level 0.03. This decline can be attributed to several factors inherent in the local geometric verification process. First, the curvature-based keypoint detection, while robust under moderate noise, becomes less stable when significant Gaussian noise distorts local surface geometries, reducing keypoint repeatability. Second, the FPFH descriptors, though efficient, are sensitive to neighborhood perturbations caused by noise, which amplifies feature mismatching in the one-to-many correspondence generation stage. Finally, the local structure similarity matching, which relies on precise distance and angular relationships, suffers from threshold misalignment when the noise exceeds the adaptive tolerance of the geometric verification module.
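For reproducibility, the noise perturbation used in this kind of test can be sketched as follows, assuming (as an illustrative convention, not necessarily the exact one used in our protocol) that the noise level denotes the per-coordinate standard deviation in meters.

```python
import numpy as np

def add_gaussian_noise(points, sigma, seed=None):
    """Return a copy of an (N, 3) point cloud with zero-mean Gaussian noise
    of standard deviation sigma added independently to every coordinate."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=sigma, size=points.shape)

clean = np.zeros((10000, 3))
for sigma in (0.01, 0.02, 0.03):      # the three noise levels evaluated in Table 6
    noisy = add_gaussian_noise(clean, sigma, seed=42)
    print(f"{sigma:.2f} -> empirical std {noisy.std():.4f}")
```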

4.4. Ablation Study

4.4.1. Keypoint Detection Comparison

We conduct experimental analyses on the KITTI-UPP and real-world TIESY datasets, specifically evaluating the performance of the keypoint detection methods described in Section 3, namely ISS [22], H3D [23], and curvature-based (CURV) detection, each combined with FPFH for generating correspondences. The results on the KITTI-UPP dataset are summarized in Table 7, where the values of N for the small-to-large and large-to-small sequences are set to {1, 2, 4, 8, 12, 24} and {1, 2, 3, 4, 5, 6}, respectively. It can be observed that the curvature-based method, when combined with FPFH, significantly outperforms the other two methods in terms of correspondence quality and overall registration accuracy. Table 8 summarizes the inlier statistics of curvature-based feature matching on the real-world TIESY dataset. For this dataset, the values of N for the small-to-large and large-to-small sequences are set to {1, 2, 4, 8, 12, 16, 22, 26} and {1, 2, 3, 4}, respectively. Although the inlier ratio naturally decreases as N increases, the absolute number of inliers grows consistently. This observation indicates that the feature correspondences derived from CURV keypoint detection are of high quality, enabling the algorithm to distill a substantial number of correct correspondences from a large pool of candidate matches. These reliable correspondences form a solid foundation for subsequent high-precision registration.
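As a rough sketch of what a CURV-style detector computes (the exact detector is defined in Section 3 and not reproduced here), a common curvature proxy is the surface variation of the neighborhood covariance, i.e., its smallest eigenvalue divided by the eigenvalue sum; the function, parameters, and selection ratio below are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def curvature_keypoints(points, k=20, top_ratio=0.05):
    """Score each point by the surface variation lambda_min / (l1 + l2 + l3) of
    its k-neighborhood covariance; return the top-scoring indices and all scores."""
    tree = cKDTree(points)
    _, nbr_idx = tree.query(points, k=k)
    scores = np.empty(len(points))
    for i, nb in enumerate(nbr_idx):
        cov = np.cov(points[nb].T)            # 3x3 neighborhood covariance
        eigvals = np.linalg.eigvalsh(cov)     # ascending eigenvalues
        scores[i] = eigvals[0] / max(eigvals.sum(), 1e-12)
    n_keep = max(1, int(top_ratio * len(points)))
    return np.argsort(scores)[-n_keep:], scores

# Toy example: a flat grid with one lifted point; the lifted point scores highest.
xs, ys = np.meshgrid(np.arange(20.0), np.arange(20.0))
plane = np.stack([xs.ravel(), ys.ravel(), np.zeros(400)], axis=1)
plane[210, 2] = 2.0
keys, scores = curvature_keypoints(plane, k=10, top_ratio=0.05)
print(210 in keys)  # True
```

Flat regions have a near-zero smallest eigenvalue and are discarded, so the surviving keypoints concentrate on edges, corners, and other high-curvature structures.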

4.4.2. Parameter Sensitivity Analysis

We conducted an ablation study on the KITTI-UPP (1:4) dataset to evaluate the robustness of our method under different parameter settings. With fixed thresholds (distance difference d < 0.1 m, angular difference θ < 5°, and matching score = 1, as defined in Section 4), we systematically varied the number of neighboring points k around each target point. As shown in Table 9, with our parameter N set to 24, the method demonstrates strong robustness across different k values, maintaining registration success rates above 95% while keeping rotation errors below 0.007 rad in all cases. The consistent performance across parameter variations confirms the stability of our method.
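The local structure verification under these thresholds can be sketched as follows. This is a simplified illustration of the cluster comparison in Figure 2 (sorted radial distances plus pairwise angles at the center point), with the matching-score aggregation reduced to requiring that every distance and angle pair agree; all names are illustrative.

```python
import numpy as np

def local_structure_match(p, p_nbrs, q, q_nbrs, tau_d=0.1, tau_deg=5.0):
    """Verify a candidate correspondence (p, q) by comparing the rigid-invariant
    features of their k-nearest-neighbor clusters: radial distances d_i = |p - p_i|
    (sorted ascending) and inter-neighbor angles alpha(i, j) at the center point."""
    def features(center, nbrs):
        vecs = nbrs - center
        d = np.linalg.norm(vecs, axis=1)
        order = np.argsort(d)                       # sort neighbors by radial distance
        vecs, d = vecs[order], d[order]
        u = vecs / d[:, None]                       # unit directions (neighbors != center)
        i, j = np.triu_indices(len(d), 1)
        cos = np.clip((u[i] * u[j]).sum(axis=1), -1.0, 1.0)
        return d, np.degrees(np.arccos(cos))        # distances and pairwise angles
    d, a = features(np.asarray(p, float), np.asarray(p_nbrs, float))
    dh, ah = features(np.asarray(q, float), np.asarray(q_nbrs, float))
    return bool(np.all(np.abs(d - dh) < tau_d) and np.all(np.abs(a - ah) < tau_deg))

# A rigidly moved copy of the cluster matches; a rescaled copy does not.
nbrs = np.array([[1.0, 0.0, 0.0], [0.0, 1.2, 0.0], [0.0, 0.0, 1.4], [1.1, 1.1, 0.0]])
th = np.radians(30.0)
R = np.array([[np.cos(th), -np.sin(th), 0.0], [np.sin(th), np.cos(th), 0.0], [0.0, 0.0, 1.0]])
print(local_structure_match([0, 0, 0], nbrs, [5, 5, 5], nbrs @ R.T + np.array([5.0, 5.0, 5.0])))  # True
print(local_structure_match([0, 0, 0], nbrs, [0, 0, 0], 1.5 * nbrs))  # False
```

Because both features are invariant to rotation and translation but not to scaling or random perturbation, correct correspondences pass while most outliers fail, which is what the distance-only and angle-only ablations in Table 10 probe separately.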
To validate our core innovative components and assess the generalization capability of our method, we conduct ablation studies on KITTI-UPP(1:4) with parameter k = 4 . We examine individual and combined contributions of angle and distance constraints across FPFH and PARENet descriptors using 1-to-24 correspondence pairs. As shown in Table 10, the combined constraints achieve optimal performance for both descriptors. FPFH with combined constraints reaches 100.00% RR, significantly outperforming angle-only (92.59%) and distance-only (25.00%) configurations. PARENet similarly benefits from combination (93.52%) versus angle-only (77.78%) and distance-only (85.19%). The complementary nature of constraints is evident from their varying effectiveness across descriptors. These results demonstrate that our method, through its combined use of angle and distance, is not only effective across different descriptors but also crucial for delivering superior registration accuracy in unbalanced point clouds.

5. Conclusions

We proposed a curvature-aware outlier rejection method for unbalanced point cloud registration. Our method employs a one-to-many correspondence strategy to increase inlier counts using geometric redundancy while preserving structural invariance. Based on local geometric consistency, we developed a robust outlier removal mechanism for dense correspondence sets. Experiments on the synthetic KITTI-UPP and real-world TIESY datasets demonstrated that the proposed method achieves inherent resilience to partial overlaps through probabilistic correspondence expansion and improved registration stability via geometrically verified candidate pooling. The results confirmed that our correspondence generation strategy successfully balances match quantity and precision in unbalanced registration. However, the proposed method exhibits relatively weak robustness in keypoint detection due to its reliance on curvature-based features. Future work will focus on enhancing keypoint stability, accelerating the algorithm for ultra-dense scenarios, integrating multi-scale or global features to handle large-scale inconsistencies, reducing parameter sensitivity via adaptive or learning-based optimization, and improving computational efficiency through hierarchical pruning and optimized feature extraction.

Author Contributions

Conceptualization, X.H., Z.Z. and S.Q.; Methodology, X.H., Z.Z. and S.Q.; Validation, X.H. and Z.Z.; Formal analysis, J.D., G.W. and J.Y.; Investigation, J.Y. and S.Q.; Data curation, J.Y. and G.W.; Writing—original draft, X.H. and Z.Z.; Writing—review & editing, Z.Z., J.Y. and S.Q.; Project administration, J.D. and G.W.; Funding acquisition, J.D. and G.W. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Tianjin Key Laboratory of Rail Transit Navigation Positioning and Spatio-temporal Big Data Technology (TKL2024B14), the Natural Science Basic Research Plan in Shaanxi Province of China under Grant No. 2025JC-YBMS-651, Science and Technology Program of Tianjin (232GSSSS00010), Science and Technology Research and Development Program of China State Railway Group Co., Ltd. (Q2024G032), Science and Technology Development Project of China Railway Design Group Co., Ltd. (Grant no. 2023A0240109), China Railway Design Corporation Science and Technology Program Major Program (2024B0240110).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used to support the findings of this study are included in the article. The source code and dataset will be available at https://github.com/SHIYI-hu/Curvature-Aware-Point-Pair-Signatures (accessed on 6 October 2025).

Conflicts of Interest

Authors Jiwei Deng and Guangshuai Wang were employed by the company China Railway Design Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Ao, S.; Hu, Q.; Yang, B.; Markham, A.; Guo, Y. Spinnet: Learning a general surface descriptor for 3d point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 11753–11762. [Google Scholar]
  2. Wang, T.; Fu, Y.; Zhang, Z.; Cheng, X.; Li, L.; He, Z.; Wang, H.; Gong, K. Research on Ground Point Cloud Segmentation Algorithm Based on Local Density Plane Fitting in Road Scene. Sensors 2025, 25, 4781. [Google Scholar] [CrossRef] [PubMed]
  3. Aoki, Y.; Goforth, H.; Srivatsan, R.A.; Lucey, S. Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 7163–7172. [Google Scholar]
  4. Guo, Y.; Bennamoun, M.; Sohel, F.; Lu, M.; Wan, J. 3D object recognition in cluttered scenes with local surface features: A survey. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 2270–2287. [Google Scholar] [CrossRef] [PubMed]
  5. Mian, A.S.; Bennamoun, M.; Owens, R.A. Automatic correspondence for 3D modeling: An extensive review. Int. J. Shape Model. 2005, 11, 253–291. [Google Scholar] [CrossRef]
  6. Geiger, A.; Lenz, P.; Urtasun, R. Are we ready for autonomous driving? The kitti vision benchmark suite. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; pp. 3354–3361. [Google Scholar]
  7. Uy, M.A.; Lee, G.H. Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 4470–4479. [Google Scholar]
  8. Barath, D.; Matas, J. Graph-cut RANSAC. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6733–6741. [Google Scholar]
  9. Bustos, A.P.; Chin, T.J. Guaranteed outlier removal for point cloud registration with correspondences. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 2868–2882. [Google Scholar] [CrossRef] [PubMed]
  10. Li, J. A Practical O(N²) Outlier Removal Method for Correspondence-Based Point Cloud Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 3926–3939. [Google Scholar]
  11. Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.L. D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 6359–6367. [Google Scholar]
  12. Huang, S.; Gojcic, Z.; Usvyatsov, M.; Wieser, A.; Schindler, K. Predator: Registration of 3d point clouds with low overlap. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 4267–4276. [Google Scholar]
  13. Yu, H.; Li, F.; Saleh, M.; Busam, B.; Ilic, S. Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration. In Proceedings of the 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Online, 6–14 December 2021; Volume 34, pp. 23872–23884. [Google Scholar]
  14. Lee, J.; Kim, S.; Cho, M.; Park, J. Deep hough voting for robust global registration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 15994–16003. [Google Scholar]
  15. Komorowski, J. Minkloc3d: Point cloud based large-scale place recognition. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Virtual, 5–9 January 2021; pp. 1790–1799. [Google Scholar]
  16. Du, J.; Wang, R.; Cremers, D. Dh3d: Deep hierarchical 3d descriptors for robust large-scale 6dof relocalization. In Proceedings of the 16th European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; Springer: Berlin/Heidelberg, Germany, 2020; pp. 744–762. [Google Scholar]
  17. Choy, C.; Park, J.; Koltun, V. Fully convolutional geometric features. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 8958–8966. [Google Scholar]
  18. Zhang, W.; Xiao, C. PCAN: 3D attention map learning using contextual information for point cloud based retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12436–12445. [Google Scholar]
  19. Liu, Z.; Zhou, S.; Suo, C.; Yin, P.; Chen, W.; Wang, H.; Li, H.; Liu, Y.H. Lpd-net: 3d point cloud learning for large-scale place recognition and environment analysis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 2831–2840. [Google Scholar]
  20. Quan, S.; Yang, J. Compatibility-guided sampling consensus for 3-d point cloud registration. IEEE Trans. Geosci. Remote. Sens. 2020, 58, 7380–7392. [Google Scholar] [CrossRef]
  21. Yang, J.; Chen, J.; Quan, S.; Wang, W.; Zhang, Y. Correspondence selection with loose–tight geometric voting for 3-D point cloud registration. IEEE Trans. Geosci. Remote. Sens. 2022, 60, 1–14. [Google Scholar] [CrossRef]
  22. Zhong, Y. Intrinsic shape signatures: A shape descriptor for 3D object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; IEEE: New York, NY, USA, 2009; pp. 689–696. [Google Scholar]
  23. Mian, A.; Bennamoun, M.; Owens, R. On the repeatability and quality of keypoints for local feature-based 3d object retrieval from cluttered scenes. Int. J. Comput. Vis. 2010, 89, 348–361. [Google Scholar] [CrossRef]
  24. Rusu, R.B.; Blodow, N.; Beetz, M. Fast point feature histograms (FPFH) for 3D registration. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217. [Google Scholar]
  25. Yang, J.; Huang, Z.; Quan, S.; Qi, Z.; Zhang, Y. SAC-COT: Sample consensus by sampling compatibility triangles in graphs for 3-D point cloud registration. IEEE Trans. Geosci. Remote. Sens. 2021, 60, 1–15. [Google Scholar] [CrossRef]
  26. Yang, J.; Huang, Z.; Quan, S.; Zhang, Q.; Zhang, Y.; Cao, Z. Toward efficient and robust metrics for RANSAC hypotheses and 3D rigid registration. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 893–906. [Google Scholar] [CrossRef]
  27. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A globally optimal solution to 3D ICP point-set registration. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2241–2254. [Google Scholar] [CrossRef] [PubMed]
  28. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
  29. Yuan, H.; Li, G.; Wang, L.; Li, X. Research on the Improved ICP Algorithm for LiDAR Point Cloud Registration. Sensors 2025, 25, 4748. [Google Scholar] [CrossRef] [PubMed]
  30. Choy, C.; Gwak, J.; Savarese, S. 4d spatio-temporal convnets: Minkowski convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3075–3084. [Google Scholar]
  31. Pais, G.D.; Ramalingam, S.; Govindu, V.M.; Nascimento, J.C.; Chellappa, R.; Miraldo, P. 3dregnet: A deep neural network for 3d point registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 7193–7203. [Google Scholar]
  32. Cai, P. Neural Network-Based Low-Level 3D Point Cloud Processing. Ph.D. Thesis, University of South Carolina, Columbia, SC, USA, 19 December 2024. [Google Scholar]
  33. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Xu, K. Geometric transformer for fast and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 11143–11152. [Google Scholar]
  34. Došljak, V.; Jovančević, I.; Orteu, J.J.; Brault, R.; Belbacha, Z. Airplane panels inspection via 3D point cloud segmentation on low-volume training dataset. In Proceedings of the 17th International Conference on Quality Control by Artificial Vision, Yamanashi, Japan, 4–6 June 2025; SPIE: Bellingham, WA, USA, 2025; Volume 13737, pp. 13–20. [Google Scholar]
  35. Bai, X.; Luo, Z.; Zhou, L.; Chen, H.; Li, L.; Hu, Z.; Fu, H.; Tai, C.L. Pointdsc: Robust point cloud registration using deep spatial consistency. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 15859–15869. [Google Scholar]
  36. Jirawattanasomkul, T.; Hang, L.; Srivaranun, S.; Likitlersuang, S.; Jongvivatsakul, P.; Yodsudjai, W.; Thammarak, P. Digital twin-based structural health monitoring and measurements of dynamic characteristics in balanced cantilever bridge. Resilient Cities Struct. 2025, 4, 48–66. [Google Scholar] [CrossRef]
  37. Ababsa, F.; Noureddine, M.; Bouali, M.; El Meouche, R.; Sammuneh, M.; De Martin, F.; Beaufils, M.; Viguier, F.; Salvati, B. Digital twins for predictive monitoring and anomaly detection: Application to seismic and railway infrastructures. In Proceedings of the 17th International Conference on Quality Control by Artificial Vision, Yamanashi, Japan, 4–6 June 2025; SPIE: Bellingham, WA, USA, 2025; Volume 13737, pp. 75–82. [Google Scholar]
  38. Yin, H.; Wang, Y.; Ding, X.; Tang, L.; Huang, S.; Xiong, R. 3d lidar-based global localization using siamese neural network. IEEE Trans. Intell. Transp. Syst. 2019, 21, 1380–1392. [Google Scholar] [CrossRef]
  39. Dai, A.; Chang, A.X.; Savva, M.; Halber, M.; Funkhouser, T.; Nießner, M. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5828–5839. [Google Scholar]
  40. Choy, C.; Lee, J.; Ranftl, R.; Park, J.; Koltun, V. High-dimensional convolutional networks for geometric pattern recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13–19 June 2020; pp. 11227–11236. [Google Scholar]
  41. Zhang, X.; Yang, J.; Zhang, S.; Zhang, Y. 3D registration with maximal cliques. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 17745–17754. [Google Scholar]
  42. Chen, Z.; Sun, K.; Yang, F.; Tao, W. Sc2-pcr: A second order spatial compatibility for efficient and robust point cloud registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 13221–13231. [Google Scholar]
  43. Zhang, Y.; Zhao, H.; Li, H.; Chen, S. Fastmac: Stochastic spectral sampling of correspondence graph. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 16–22 June 2024; pp. 17857–17867. [Google Scholar]
  44. Lee, K.; Lee, J.; Park, J. Learning to register unbalanced point pairs. arXiv 2022, arXiv:2207.04221. [Google Scholar] [CrossRef]
  45. Yang, H.; Shi, J.; Carlone, L. Teaser: Fast and certifiable point cloud registration. IEEE Trans. Robot. 2020, 37, 314–333. [Google Scholar] [CrossRef]
  46. Zhou, Q.Y.; Park, J.; Koltun, V. Fast global registration. In Proceedings of the 14th European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Springer: Berlin/Heidelberg, Germany, 2016; pp. 766–782. [Google Scholar]
  47. Yao, R.; Du, S.; Cui, W.; Tang, C.; Yang, C. PARE-Net: Position-aware rotation-equivariant networks for robust point cloud registration. In Proceedings of the 18th European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024; pp. 287–303. [Google Scholar]
Figure 1. Pipeline of the proposed method. (1) Source point cloud and target point cloud. (2) Keypoint Detection. (3) One-to-N Correspondence Matching (N = 5). (4) Each point and its K-nearest neighbors (e.g., k = 4) form a local geometric structure for similarity matching. (5) Hypothesis Generation.
Figure 2. Visualization of the local geometric structure comparison for a candidate correspondence (p, q), shown with k = 4 neighbors as a representative example. The source point p (a) and its k nearest neighbors form a local structure characterized by radial distances d_i = |p − p_i| (ordered as d_1 ≤ d_2 ≤ … ≤ d_k) and angular relationships α(i, j) = ∠(p_i, p, p_j) between neighbor pairs. Similarly, the target point q (b) exhibits the corresponding features d̂_i = |q − q_i| and α̂(i, j) = ∠(q_i, q, q_j). The similarity matching process directly compares these distance and angle features between the two structures to verify correspondence validity.
Figure 3. Registration visualizations of various methods on KITTI-UPP dataset. The pre and post registration are shown, with green color depth quantitatively indicating alignment accuracy.
Figure 4. Registration visualizations of various methods on real-world TIESY dataset. The pre and post registration are shown, with green color depth quantitatively indicating alignment accuracy.
Table 1. Evaluation of one-to-many correspondences on KITTI-UPP benchmark for both scenarios. Bold indicates the best result and underline indicates the second-best. `/` indicates cases where RR is 0%, making RE, TE, and Time inapplicable. `Ours*` employs ISS [22] in place of the CURV-based keypoint detection.
(a) Small-to-Large Scenario (Partial-to-Whole). Each cell lists RR (%), RE (°), TE (cm), Time (s).

Method | N = 1 | N = 4 | N = 6 | N = 8
RANSAC [28] | 7.41, 0.09, 15.12, 0.01 | 2.78, 0.08, 18.31, 0.01 | 0.00, /, /, / | 0.00, /, /, /
FGR [46] | 39.81, 2.25, 42.35, 0.34 | 41.67, 1.81, 45.56, 1.28 | 41.67, 2.24, 47.58, 2.13 | 42.59, 2.39, 55.21, 2.88
Teaser++ [45] | 57.41, 9.03, 36.21, 0.07 | 82.41, 0.02, 9.56, 0.32 | 87.04, 0.01, 4.11, 0.87 | 92.59, 0.03, 5.63, 1.49
SC2-PCR [42] | 57.41, 0.08, 25.15, 0.01 | 70.37, 0.02, 6.21, 0.07 | 66.67, 0.17, 51.21, 0.11 | 65.74, 0.02, 2.15, 0.13
MAC [41] | 52.78, 0.01, 44.12, 0.10 | 82.41, 0.01, 14.52, 2.30 | 87.96, 0.02, 13.25, 5.10 | 94.44, 0.01, 25.69, 10.65
Fast-MAC [43] | 51.85, 0.01, 45.21, 0.01 | 79.63, 0.01, 16.32, 0.11 | 85.19, 0.02, 17.35, 1.29 | 89.81, 0.01, 27.95, 3.48
Ours* | 35.19, 0.11, 34.44, 0.10 | 62.04, 0.06, 19.12, 0.10 | 78.70, 0.02, 0.82, 0.10 | 88.88, 0.02, 4.82, 0.10
Ours | 70.37, 0.01, 2.85, 0.09 | 95.37, 0.01, 0.25, 0.10 | 99.07, 0.01, 0.13, 0.10 | 99.07, 0.01, 0.17, 0.10

Method | N = 12 | N = 16 | N = 20 | N = 24
RANSAC [28] | 0.00, /, /, / | 0.00, /, /, / | 0.00, /, /, / | 0.00, /, /, /
FGR [46] | 37.96, 2.23, 54.47, 4.15 | 30.81, 2.48, 40.60, 5.08 | 34.26, 2.08, 41.66, 6.30 | 37.96, 2.64, 56.32, 7.60
Teaser++ [45] | 96.30, 0.02, 5.01, 2.72 | 98.15, 0.01, 3.69, 5.09 | 100.00, 0.01, 4.52, 8.77 | 61.11, 0.01, 3.24, 15.83
SC2-PCR [42] | 51.85, 0.02, 3.35, 0.14 | 50.00, 0.11, 53.31, 0.16 | 43.37, 0.13, 51.63, 0.18 | 43.52, 0.02, 6.87, 0.19
MAC [41] | 97.22, 0.01, 18.56, 44.26 | 87.04, 0.01, 31.32, 68.25 | 51.85, 0.01, 31.85, 79.05 | 26.85, 0.02, 30.21, 95.14
Fast-MAC [43] | 89.81, 0.01, 20.21, 4.20 | 63.89, 0.01, 32.25, 17.79 | 38.89, 0.02, 36.21, 5.37 | 19.44, 0.02, 36.68, 7.55
Ours* | 97.22, 0.01, 3.58, 0.11 | 97.22, 0.01, 4.02, 0.11 | 98.15, 0.05, 6.52, 0.11 | 99.07, 0.01, 3.54, 0.11
Ours | 99.07, 0.01, 0.14, 0.10 | 100.00, 0.01, 0.08, 0.11 | 100.00, 0.01, 0.09, 0.11 | 100.00, 0.01, 0.06, 0.11

(b) Large-to-Small Scenario (Whole-to-Partial). Each cell lists RR (%), RE (°), TE (cm), Time (s).

Method | N = 1 | N = 2 | N = 4 | N = 6
RANSAC [28] | 0.93, 0.02, 10.36, 0.01 | 0.93, 0.09, 15.36, 0.02 | 0.00, /, /, / | 0.00, /, /, /
FGR [46] | 37.04, 2.99, 44.63, 1.18 | 37.96, 2.85, 48.89, 2.35 | 25.93, 2.24, 48.62, 4.74 | 36.11, 3.15, 42.31, 5.17
Teaser++ [45] | 85.19, 0.09, 19.68, 0.26 | 91.67, 0.03, 6.35, 1.03 | 69.44, 0.02, 5.21, 3.06 | 71.30, 0.04, 9.56, 7.17
SC2-PCR [42] | 65.74, 0.10, 27.58, 0.06 | 59.26, 0.05, 6.84, 0.12 | 44.44, 0.12, 25.01, 0.15 | 48.15, 0.04, 10.25, 0.18
MAC [41] | 79.63, 0.02, 4.12, 2.00 | 91.67, 0.01, 5.78, 8.92 | 69.44, 0.01, 7.71, 32.53 | 72.22, 0.01, 8.25, 97.05
Fast-MAC [43] | 75.00, 0.02, 4.69, 0.10 | 85.19, 0.01, 17.86, 6.94 | 64.81, 0.02, 8.96, 4.39 | 62.96, 0.01, 8.63, 8.47
Ours* | 47.22, 0.04, 9.01, 0.10 | 83.33, 0.01, 7.03, 0.10 | 78.70, 0.03, 5.86, 0.11 | 87.96, 0.05, 7.52, 0.11
Ours | 98.15, 0.02, 1.42, 0.10 | 100.00, 0.01, 0.78, 0.10 | 100.00, 0.00, 0.82, 0.11 | 100.00, 0.00, 0.68, 0.11
Table 2. Registration results on KITTI-UPP dataset with different keypoint detection modules. Bold indicates the best result and underline indicates the second-best.
Each cell lists RR (%), Time (s).

Method | 1:10 Ratio | 1:4 Ratio | 1:1 Ratio
(i) Deep learning
GeoTransformer [33] | 38.89, 4.65 | 62.04, 2.29 | 94.44, 0.62
PARENet [47] | 71.30, 9.01 | 87.04, 4.05 | 100.00, 0.78
(ii) Traditional, ISS+FPFH [22]
RANSAC [28] | 0.00, 0.01 | 0.00, 0.01 | 63.89, 0.01
FGR [46] | 43.52, 1.19 | 37.96, 4.15 | 94.44, 1.06
Teaser++ [45] | 81.48, 0.45 | 96.30, 2.72 | 95.37, 0.90
SC2-PCR [42] | 74.77, 0.11 | 51.85, 0.14 | 96.30, 0.04
MAC [41] | 88.66, 9.90 | 97.22, 44.26 | 94.44, 2.02
Fast-MAC [43] | 84.11, 0.19 | 89.81, 4.20 | 94.44, 1.02
(ii) Traditional, H3D+FPFH [23]
RANSAC [28] | 0.00, 0.01 | 0.00, 0.01 | 47.22, 0.01
FGR [46] | 50.00, 5.60 | 64.81, 2.87 | 100.00, 0.62
Teaser++ [45] | 83.33, 1.98 | 89.81, 1.16 | 100.00, 2.33
SC2-PCR [42] | 70.37, 0.39 | 83.33, 0.26 | 97.22, 0.02
MAC [41] | 93.51, 26.52 | 94.44, 11.47 | 100.00, 3.44
Fast-MAC [43] | 92.94, 4.18 | 91.67, 3.07 | 100.00, 1.58
(ii) Traditional, CURV+FPFH
RANSAC [28] | 0.00, 0.01 | 8.33, 0.01 | 95.37, 0.01
FGR [46] | 84.26, 5.10 | 87.96, 3.48 | 100.00, 0.99
Teaser++ [45] | 90.74, 7.49 | 93.52, 9.61 | 100.00, 0.20
SC2-PCR [42] | 94.44, 0.14 | 99.07, 0.11 | 100.00, 0.30
MAC [41] | 89.81, 24.96 | 87.04, 15.78 | 100.00, 1.09
Fast-MAC [43] | 87.96, 6.46 | 87.04, 2.78 | 100.00, 0.15
Ours | 95.37, 0.10 | 99.07, 0.10 | 100.00, 0.11
Table 3. Registration results on KITTI-UPP dataset under 1:20 unbalanced ratio with best-performing methods. Bold indicates the best result and underline indicates the second-best. (150-frame query vs. 3000-frame reference).
Method | RR (%) | Time (s)
Teaser++ [45] | 86.11 | 3.38
SC2-PCR [42] | 91.67 | 0.16
MAC [41] | 83.33 | 25.11
Fast-MAC [43] | 79.63 | 4.15
Ours | 94.44 | 0.11
Table 4. Evaluation of one-to-many correspondences on real-world TIESY benchmark for both scenarios. Bold indicates the best result and underline indicates the second-best. `-` indicates untested cases due to memory overflow or prohibitive computation time.
(a) Small-to-Large Scenario (Partial-to-Whole). Each cell lists RR (%) / RE (°) / TE (cm) / Time (s).

Method | N = 1 | N = 2 | N = 4 | N = 8
RANSAC [28] | 37.89 / 0.06 / 21.25 / 0.01 | 30.53 / 0.10 / 27.54 / 0.01 | 27.37 / 0.18 / 45.51 / 0.01 | 16.84 / 0.09 / 29.63 / 0.02
FGR [46] | 47.37 / 0.72 / 45.85 / 0.53 | 46.32 / 0.63 / 41.44 / 0.75 | 42.11 / 0.75 / 46.35 / 0.67 | 41.05 / 0.88 / 41.01 / 2.26
Teaser++ [45] | 69.47 / 0.01 / 1.36 / 0.03 | 75.79 / 0.00 / 0.98 / 0.08 | 84.21 / 0.00 / 1.55 / 0.26 | 84.21 / 0.02 / 6.36 / 1.01
SC2-PCR [42] | 68.42 / 0.01 / 0.68 / 0.82 | 69.47 / 0.02 / 0.87 / 0.23 | 75.79 / 0.02 / 0.95 / 0.25 | 75.79 / 0.02 / 0.64 / 0.30
MAC [41] | 68.42 / 0.01 / 5.21 / 0.46 | 72.63 / 0.01 / 8.62 / 1.35 | 83.16 / 0.01 / 18.54 / 2.59 | 85.26 / 0.01 / 18.75 / 11.71
Fast-MAC [43] | 69.15 / 0.01 / 4.87 / 0.02 | 71.58 / 0.01 / 5.34 / 0.06 | 78.95 / 0.01 / 5.63 / 0.23 | 85.26 / 0.01 / 10.98 / 9.47
Ours* | 63.82 / 0.01 / 0.89 / 0.09 | 67.02 / 0.01 / 1.02 / 0.09 | 74.47 / 0.00 / 0.45 / 0.10 | 84.21 / 0.02 / 0.82 / 0.10
Ours | 72.34 / 0.01 / 0.47 / 0.09 | 79.79 / 0.01 / 0.65 / 0.10 | 86.17 / 0.01 / 0.82 / 0.10 | 92.55 / 0.02 / 0.77 / 0.10

Method | N = 12 | N = 16 | N = 22 | N = 26
RANSAC [28] | 13.68 / 0.41 / 48.24 / 0.02 | 10.53 / 0.62 / 41.36 / 0.02 | 9.47 / 1.19 / 48.69 / 0.03 | 6.32 / 0.66 / 48.56 / 0.03
FGR [46] | 38.95 / 1.46 / 40.08 / 3.30 | 33.68 / 1.09 / 39.62 / 4.36 | 30.53 / 1.18 / 41.14 / 5.94 | 30.53 / 1.24 / 40.05 / 6.98
Teaser++ [45] | 86.32 / 0.00 / 1.08 / 2.32 | - | - | -
SC2-PCR [42] | 69.47 / 0.02 / 0.85 / 0.32 | 67.37 / 0.01 / 0.92 / 0.34 | 66.32 / 0.04 / 6.01 / 0.36 | 68.42 / 0.04 / 8.08 / 0.38
MAC [41] | 86.32 / 0.01 / 16.52 / 29.84 | - | - | -
Fast-MAC [43] | 86.32 / 0.01 / 17.48 / 54.82 | - | - | -
Ours* | 86.32 / 0.01 / 0.95 / 0.10 | 87.23 / 0.01 / 0.68 / 0.10 | 89.36 / 0.01 / 1.20 / 0.10 | 90.43 / 0.01 / 1.04 / 0.10
Ours | 92.55 / 0.01 / 0.65 / 0.10 | 94.68 / 0.06 / 0.68 / 0.10 | 96.81 / 0.02 / 0.59 / 0.10 | 96.81 / 0.01 / 0.49 / 0.10
(b) Large-to-Small Scenario (Whole-to-Partial). Each cell lists RR (%) / RE (°) / TE (cm) / Time (s).

Method | N = 1 | N = 2 | N = 3 | N = 4
RANSAC [28] | 12.63 / 0.57 / 14.21 / 0.02 | 10.53 / 1.16 / 19.58 / 0.03 | 5.26 / 1.15 / 22.35 / 0.03 | 5.26 / 2.84 / 34.52 / 0.03
FGR [46] | 35.79 / 0.90 / 52.01 / 1.92 | 33.68 / 1.18 / 54.45 / 3.72 | 34.74 / 1.17 / 49.86 / 5.56 | 31.58 / 1.13 / 48.78 / 7.27
Teaser++ [45] | 81.05 / 0.01 / 0.87 / 0.97 | - | - | -
SC2-PCR [42] | 72.63 / 0.22 / 42.24 / 0.17 | 56.84 / 0.11 / 9.87 / 0.33 | 54.74 / 0.07 / 20.06 / 0.36 | 45.26 / 0.09 / 27.14 / 0.39
MAC [41] | 81.05 / 0.01 / 0.45 / 12.22 | 91.49 / 0.01 / 0.86 / 26.23 | - | -
Fast-MAC [43] | 78.95 / 0.01 / 0.58 / 16.06 | 90.53 / 0.02 / 0.97 / 82.60 | - | -
Ours* | 73.40 / 0.01 / 1.29 / 0.10 | 85.11 / 0.02 / 1.05 / 0.11 | 88.29 / 0.01 / 3.58 / 0.10 | 92.55 / 0.01 / 2.25 / 0.10
Ours | 90.43 / 0.01 / 1.01 / 0.10 | 92.55 / 0.01 / 0.85 / 0.10 | 93.62 / 0.01 / 0.82 / 0.10 | 93.62 / 0.01 / 0.55 / 0.10
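The parameter N in Table 4 controls how many target candidates each source keypoint is matched to, trading inlier ratio for inlier count. A minimal brute-force sketch of such one-to-many matching in descriptor space (function and variable names are ours, not the paper's; a KD-tree would replace the dense distance matrix at scale):

```python
import numpy as np

def one_to_many_matches(src_feats, tgt_feats, n=4):
    """For each source descriptor, keep its n nearest target descriptors.

    Returns an (M*n, 2) array of (source_index, target_index) pairs, so the
    correspondence count grows linearly with n while the inlier ratio drops.
    """
    # Pairwise Euclidean distances between the two descriptor sets (M x K)
    d = np.linalg.norm(src_feats[:, None, :] - tgt_feats[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :n]  # n best target candidates per source point
    src_idx = np.repeat(np.arange(len(src_feats)), n)
    return np.stack([src_idx, nn.ravel()], axis=1)
```

This mirrors the redundancy effect visible in Tables 7 and 8: the correspondence count scales roughly linearly with N, while only a shrinking fraction are inliers, which is why a strong outlier-rejection stage is needed downstream.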
Table 5. Registration results of various methods on the real-world TIESY dataset. Bold indicates the best result and underline indicates the second-best.
Method | RR (%) | Time (s)
(i) Deep Learning
GeoTransformer [33] | 39.80 | 8.24
PARENet [47] | 33.67 | 12.49
(ii) Traditional
RANSAC [28] | 8.42 | 0.02
FGR [46] | 72.63 | 1.63
Teaser++ [45] | 95.79 | 0.58
SC2-PCR [42] | 93.68 | 0.14
MAC [41] | 95.79 | 7.00
Fast-MAC [43] | 93.68 | 0.26
Ours | 95.79 | 0.11
Table 6. Registration results of our method on the real-world TIESY dataset under various noise settings.
Noise Level | RR (%) | Time (s)
0.01 | 93.68 | 0.11
0.02 | 95.79 | 0.11
0.03 | 88.42 | 0.11
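A common way to produce such noise settings is zero-mean Gaussian jitter on every coordinate; a minimal sketch, assuming the level in Table 6 is the per-coordinate standard deviation (in the dataset's coordinate unit):

```python
import numpy as np

def add_gaussian_noise(points, sigma, seed=0):
    """Perturb each coordinate of an (N, 3) cloud with N(0, sigma^2) noise."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)
```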
Table 7. Comparison of various keypoint detection methods on the KITTI-UPP dataset.
Keypoint | Registration | Metric | N = 1 | N = 2 | N = 4 | N = 8 | N = 12 | N = 24
CURV | Small-to-Large | Correspondence Count | 1574.33 | 3148.66 | 6297.32 | 12,594.64 | 18,891.96 | 37,783.92
CURV | Small-to-Large | Inlier Ratio (%) | 10.02 | 7.21 | 4.29 | 2.50 | 1.83 | 1.09
CURV | Small-to-Large | Inlier Count | 161.13 | 230.87 | 274.63 | 320.36 | 348.62 | 389.35
CURV | Large-to-Small | Correspondence Count | 5169.46 | 10,338.92 | 15,508.38 | 20,677.84 | 25,847.30 | 31,016.76
CURV | Large-to-Small | Inlier Ratio (%) | 3.83 | 2.78 | 2.09 | 1.70 | 1.44 | 1.26
CURV | Large-to-Small | Inlier Count | 191.80 | 278.13 | 311.79 | 337.57 | 357.83 | 375.38
H3D [23] | Small-to-Large | Correspondence Count | 1174.05 | 2348.10 | 4696.20 | 9392.40 | 14,088.60 | 28,177.20
H3D [23] | Small-to-Large | Inlier Ratio (%) | 2.20 | 1.49 | 1.02 | 0.68 | 0.55 | 0.37
H3D [23] | Small-to-Large | Inlier Count | 25.09 | 33.44 | 44.35 | 57.15 | 65.74 | 84.17
H3D [23] | Large-to-Small | Correspondence Count | 4810.85 | 9621.70 | 14,432.55 | 19,243.40 | 24,054.25 | 28,865.10
H3D [23] | Large-to-Small | Inlier Ratio (%) | 0.85 | 0.60 | 0.49 | 0.42 | 0.37 | 0.33
H3D [23] | Large-to-Small | Inlier Count | 36.57 | 50.03 | 59.86 | 67.56 | 74.19 | 77.48
ISS [22] | Small-to-Large | Correspondence Count | 1715.88 | 3431.76 | 6863.52 | 13,727.04 | 20,590.56 | 41,181.12
ISS [22] | Small-to-Large | Inlier Ratio (%) | 1.20 | 0.86 | 0.59 | 0.39 | 0.31 | 0.21
ISS [22] | Small-to-Large | Inlier Count | 21.56 | 30.86 | 41.75 | 54.93 | 65.29 | 88.81
ISS [22] | Large-to-Small | Correspondence Count | 5951.96 | 11,903.92 | 17,855.88 | 23,807.84 | 29,759.80 | 35,711.76
ISS [22] | Large-to-Small | Inlier Ratio (%) | 0.56 | 0.40 | 0.33 | 0.28 | 0.25 | 0.23
ISS [22] | Large-to-Small | Inlier Count | 32.72 | 47.23 | 57.52 | 65.93 | 73.24 | 80.48
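The inlier statistics in Tables 7 and 8 count correspondences whose endpoints still coincide once the source point is mapped through the ground-truth transform. A minimal sketch of how such statistics are typically computed; the distance threshold here is illustrative, not the paper's exact setting:

```python
import numpy as np

def inlier_stats(src_pts, tgt_pts, corr, R_gt, t_gt, thresh=0.6):
    """Inlier count and inlier ratio (%) of a correspondence set.

    corr is an (M, 2) array of (source_index, target_index) pairs; an inlier
    is a pair whose residual under the ground-truth transform is <= thresh.
    """
    mapped = src_pts[corr[:, 0]] @ R_gt.T + t_gt  # source points in target frame
    residuals = np.linalg.norm(mapped - tgt_pts[corr[:, 1]], axis=1)
    inliers = int(np.sum(residuals <= thresh))
    return inliers, 100.0 * inliers / len(corr)
```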
Table 8. Inlier statistics of our method on the real-world TIESY dataset. `-` indicates no data.
Registration Mode | Metric | N = 1 | N = 2 | N = 4 | N = 8 | N = 12 | N = 16 | N = 22 | N = 26
Small-to-Large | Correspondence Count | 1540.93 | 3081.86 | 6163.72 | 12,327.44 | 18,491.16 | 24,654.88 | 33,900.46 | 40,064.18
Small-to-Large | Inlier Ratio (%) | 10.15 | 6.37 | 3.99 | 2.51 | 1.93 | 1.60 | 1.30 | 1.16
Small-to-Large | Inlier Count | 141.23 | 176.86 | 219.87 | 273.60 | 313.26 | 344.47 | 383.58 | 405.41
Large-to-Small | Correspondence Count | 10,641.47 | 21,262.94 | 31,924.41 | 42,565.88 | - | - | - | -
Large-to-Small | Inlier Ratio (%) | 2.15 | 1.44 | 1.14 | 0.96 | - | - | - | -
Large-to-Small | Inlier Count | 179.47 | 234.11 | 272.47 | 302.97 | - | - | - | -
Table 9. Analysis of our method with different k values on the KITTI-UPP (1:4) dataset.
k | RR (%) | RE (°) | TE (cm) | Time (s)
3 | 95.37 | 0.0070 | 0.037 | 0.1491
4 | 100.00 | 0.0002 | 2.563 | 0.1488
5 | 100.00 | 0.0002 | 1.763 | 0.1487
6 | 99.07 | 0.0002 | 1.130 | 0.1487
7 | 99.07 | 0.0002 | 9.701 | 0.1487
8 | 98.15 | 0.0001 | 4.670 | 0.1488
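The k in Table 9 is the neighborhood size used to build the local point cluster around each keypoint. The rigidity-preserved invariance the method exploits can be sketched by the sorted set of pairwise distances inside the k-NN cluster, which is unchanged under any rotation and translation (a brute-force illustration under our naming; the paper's full Local Point Cluster Structure Feature also uses curvature/angle information):

```python
import numpy as np

def local_cluster(points, idx, k):
    """Indices of the k nearest neighbors of points[idx] (self excluded)."""
    d = np.linalg.norm(points - points[idx], axis=1)
    return np.argsort(d)[1:k + 1]

def cluster_distance_signature(points, idx, k):
    """Sorted pairwise distances inside the local cluster.

    Rigid motions preserve this multiset, so matched keypoints in two clouds
    should produce (nearly) identical signatures; mismatches can be rejected.
    """
    cluster = np.vstack([points[idx], points[local_cluster(points, idx, k)]])
    diff = cluster[:, None, :] - cluster[None, :, :]
    d = np.linalg.norm(diff, axis=2)
    return np.sort(d[np.triu_indices(len(cluster), 1)])
```

Small k (e.g., 3 in Table 9) gives a weak signature and misses registrations, while large k makes the cluster less repeatable across unevenly sampled clouds, consistent with the sweet spot around k = 4-5.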
Table 10. Ablation study on geometric verification components with different descriptors on the KITTI-UPP (1:4) dataset. Bold indicates the best result and underline indicates the second-best. (AC: Angle Constraint; DC: Distance Constraint).
Descriptor | AC | DC | RR (%) | RE (°) | TE (cm) | Time (s)
FPFH [24] | ✓ | | 92.59 | 0.0008 | 0.002 | 0.1144
FPFH [24] | | ✓ | 25.00 | 0.0002 | 6.637 | 0.1153
FPFH [24] | ✓ | ✓ | 100.00 | 0.0002 | 2.563 | 0.1488
PARENet [47] | ✓ | | 77.78 | 0.0336 | 0.156 | 0.0967
PARENet [47] | | ✓ | 85.19 | 0.0030 | 0.008 | 0.0949
PARENet [47] | ✓ | ✓ | 93.52 | 0.0002 | 9.032 | 0.0975
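The two verification components ablated in Table 10 are both pairwise rigidity checks on candidate correspondences. The distance constraint follows from rigid motions preserving lengths; one plausible form of the angle constraint compares the angle between associated direction vectors (e.g., surface normals), since rotations preserve angles. A minimal sketch; the thresholds, the use of normals, and the function names are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def distance_consistent(ps1, ps2, pt1, pt2, tau=0.1):
    """DC: two compatible correspondences must span equal lengths
    in the source and target clouds (up to a tolerance tau)."""
    return abs(np.linalg.norm(ps1 - ps2) - np.linalg.norm(pt1 - pt2)) <= tau

def angle_consistent(ns1, ns2, nt1, nt2, tau_deg=10.0):
    """AC: the angle between two source unit directions should match the
    angle between their matched target directions."""
    a_src = np.degrees(np.arccos(np.clip(np.dot(ns1, ns2), -1.0, 1.0)))
    a_tgt = np.degrees(np.arccos(np.clip(np.dot(nt1, nt2), -1.0, 1.0)))
    return abs(a_src - a_tgt) <= tau_deg
```

The ablation pattern is intuitive under this reading: each check alone lets through a different family of outliers, and only their conjunction reaches the full RR reported for our method.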
Hu, X.; Zeng, Z.; Deng, J.; Wang, G.; Yang, J.; Quan, S. Curvature-Aware Point-Pair Signatures for Robust Unbalanced Point Cloud Registration. Sensors 2025, 25, 6267. https://doi.org/10.3390/s25206267
