Article

Low-Overlap Registration of Multi-Source LiDAR Point Clouds in Urban Scenes Through Dual-Stage Feature Pruning and Progressive Hierarchical Methods

1 College of Surveying and Geo-Informatics, North China University of Water Resources and Electric Power, Zhengzhou 450046, China
2 Henan Institute of Metrology, Zhengzhou 450008, China
3 College of Geomatics and Geoinformation, Guilin University of Technology, Guilin 541004, China
4 State Key Laboratory of Geohazard Prevention and Geoenvironment Protection, Chengdu University of Technology, Chengdu 610059, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(17), 2938; https://doi.org/10.3390/rs17172938
Submission received: 14 July 2025 / Revised: 21 August 2025 / Accepted: 22 August 2025 / Published: 24 August 2025
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)

Abstract

With the rapid advancement of laser scanning technologies, the capacity to collect massive volumes of data with rich detail has grown significantly. However, the differing ability of multi-source point clouds to represent intricate structures in complex scenes, combined with the computational burden imposed by large datasets, poses substantial challenges to current registration methods. To address these challenges, the proposed method combines two innovative feature point pruning techniques with two closely interconnected progressive processes. First, it identifies structural points that effectively represent the features of the scene and performs a rapid initial alignment of the point clouds within the two-dimensional plane. It then establishes the mapping relationship between the point clouds to be matched using FPFH descriptors, followed by further screening based on the strength of graph nodes to extract the maximum consensus set of points satisfying geometric constraints. Finally, it integrates the processes of feature point description and similarity measurement to achieve precise point cloud registration. The proposed method effectively extracts matching primitives from large datasets and addresses false matches and noise in complex data environments, delivering favorable matching results even when the overlap between datasets is low. On two public datasets and a self-constructed dataset, the method achieves an effective point set screening rate of approximately 1‰. On the WHU-TLS dataset, it attains a rotation accuracy of 0.062° and a translation accuracy of 0.027 m, improvements of roughly 70% and 80%, respectively, over current state-of-the-art (SOTA) methods. Results from real registration tasks demonstrate that our approach achieves competitive registration accuracy compared with existing SOTA techniques.

1. Introduction

With the rapid development of LiDAR scanning technology, point cloud data sources have become increasingly abundant and exhibit multi-platform characteristics [1,2,3,4]. UAV LiDAR systems are capable of acquiring high-density point cloud data from aerial platforms, significantly enhancing the representation of major ground objects in urban environments, such as buildings [5,6,7]. In terms of ground-based acquisition, various methods—including fixed-point scanning, handheld SLAM, and vehicle-mounted mobile acquisition—each possess distinct features. This diverse array of multi-source and heterogeneous point cloud data has found extensive applications in fields such as cultural heritage preservation [8], scene reconstruction [9], and simultaneous localization and mapping [10]. However, notable differences exist in their ability to represent detailed structures of ground objects. Consequently, the challenge of rapidly and accurately integrating point cloud data from multiple platforms, stations, and types has emerged as a critical foundation for advancing intelligent scene understanding.
For the essential task of point cloud registration, a multitude of methods have been proposed to date. Techniques such as ICP [11], 4PCS [12], and TEASER++ [13] are widely utilized for data registration. While these methods demonstrate exceptional performance in high-overlap scenarios, their effectiveness diminishes significantly in low-overlap situations, as illustrated in Figure 1. Point cloud registration has long been a focal point of intense research interest within both scientific and industrial communities [14,15,16,17] and has achieved considerable applicability in high-overlap tasks. Some innovative approaches have been designed for low-overlap conditions, but these often require specific scene characteristics or involve complex feature extraction processes.
Existing registration methods often implicitly assume a high overlap rate of point clouds, which renders them challenging to apply directly to the registration of low-overlap point clouds. Consequently, several methods specifically designed for low-overlap point clouds have been proposed. Patch matching is a relatively novel approach [18]. Since patches encapsulate significantly more information than individual points, they can maintain high stability in matching even when the overlapping area is minimal, thereby reducing mismatches that arise from insufficient data. However, this method is susceptible to generating false matches when similar local geometric structures are present in different regions. With advancements in neural networks, learning-based techniques have been increasingly applied to point cloud registration and have garnered substantial attention [19,20,21]. These methods achieve registration by rejecting and complementing non-overlapping regions, as demonstrated by EFGHNet [22], Point-BERT [23], and SACF-Net [24]. Nevertheless, models may experience a decline in registration accuracy due to overfitting on training data, making it challenging for them to effectively handle large-scale point cloud datasets.
To address the aforementioned issues, this paper proposes a method that incorporates two-stage feature pruning and two-stage progressive hierarchical registration. By utilizing predominant structural feature points that can robustly represent scene characteristics, along with a maximum consensus set composed of points adhering to specific geometric constraints, the proposed approach effectively filters massive data. This reduction in data volume minimizes the consumption of computational resources and is particularly applicable to low-overlap scenarios. Furthermore, it employs a more robust mapping screening technique to mitigate the occurrence of false matches generated in complex urban scenes.
The primary contributions of this study are outlined as follows:
(1) By screening individual points based on predominant structural features, a procedural registration from 2D to 3D is accomplished. This approach not only achieves a robust initial alignment but also significantly minimizes false matches and noise interference, thereby enhancing the algorithm’s robustness in complex scenarios.
(2) The integration of feature point extraction and similarity measurement processes can mitigate information loss and enhance overall efficiency. The procedure for extracting key mapping feature points is inherently aligned with the process of measuring similarity.
(3) By effectively implementing a progressive screening process for matching point pairs based on the dual dimensions of structural features and robust mapping characteristics, we derive the maximum consensus set. This approach addresses the computational overhead associated with large datasets, thereby facilitating more efficient registration.
In the subsequent sections, Section 2 provides a comprehensive review of existing point cloud registration methods. Section 3 elaborates on the proposed registration approach in detail. Section 4 presents the experimental results along with an analysis of these findings. Finally, the conclusions are discussed in Section 5.

2. Related Work

2.1. Line Feature-Based Approach

Traditional point cloud registration methods can be broadly categorized into two main types: optimization-based methods and feature-based methods. The fundamental principle of optimization-based approaches is to reformulate the point cloud registration challenge as an optimization problem. By establishing an objective function that quantifies the degree of alignment between point clouds, these methods aim to identify the rotation matrix and translation vector that either minimize or maximize this objective function, thereby achieving effective point cloud alignment [25,26,27]. A classic and widely utilized method, the iterative closest point (ICP) algorithm [11], is predicated on the nearest point-to-point distance. It determines the least-squares transformation parameters of corresponding points through multiple iterations. By contrast, the normal distributions transform (NDT) algorithm [28] employs a probabilistic model, framing point cloud registration as a problem of probabilistic inference and achieving registration by maximizing this probability model within spatial grids. However, registration strategies that optimize an objective function are highly sensitive to the initial pose of the point clouds and are prone to converging on local optima. To mitigate this, Lv et al. [29] introduced a novel point cloud registration method known as KSS-ICP, which is based on Kendall shape space (KSS). By constructing a representation of point clouds under KSS and seeking optimal matches from a global perspective, this approach achieves more accurate registrations owing to its robustness against similarity transformations, non-uniform density variations, noise interference, and defective components. Nonetheless, the efficacy of KSS-ICP is contingent upon parallel acceleration; inadequate hardware conditions can significantly impair computational efficiency. Similarly, Parkinson et al. [30] proposed leveraging regression techniques to learn functions for continuous representation of point clouds. They enhanced point cloud registration by modeling a new type of regularizer in reproducing kernel Hilbert space (RKHS), which adapts well to multi-source point cloud data. However, its performance may be constrained when addressing scenes that lack structure or texture.
Feature-based methods achieve point cloud alignment by extracting feature information from point cloud data. The fundamental principle of these methods is to leverage the uniqueness and invariance of features, which helps reduce the number of points that need to be matched, decrease computational complexity, and enhance both the robustness and accuracy of registration [31,32,33]. Typically, feature-based approaches employ descriptors to identify key points for registration [34,35,36]; however, they predominantly rely on local geometric features rather than global ones. This reliance can lead to registration outcomes being influenced by the initial positioning of the point clouds. In urban environments characterized by man-made structures, line-based registration methods demonstrate a strong adaptability for effective registration in such scenarios. Liu et al. [37] were the first to perform supervoxel segmentation on point clouds for the extraction of feature lines, subsequently selecting interest points from these feature lines. This approach allows the interest points utilized for registration to inherit the robustness of feature lines against noise and variations in initial position, thereby circumventing inefficiencies associated with analyzing features at each individual point. The proposed method demonstrates high accuracy and efficiency in both fine and coarse registration tasks. However, it is important to note that this method relies on sharp features within point clouds and is particularly effective for registering point clouds characterized by pronounced features; its efficacy diminishes when applied to point clouds lacking distinct or sharp features. Ma et al. [38] highlighted the significance of contour cues in point cloud registration tasks. They abstracted point clouds into contour lines to create sketches, thus preserving essential contour information. Subsequently, they encoded these contours using local geometric descriptors specific to contours and integrated multiple geometric features to achieve precise parameter estimation. Nevertheless, the accuracy of this methodology is contingent upon the completeness of contour lines, which poses challenges for its widespread application across diverse scenarios. Li et al. [39] specifically designed line–plane semantic structural features (LSSFs) for structured urban environments to facilitate the automatic rough registration of large-scale urban point clouds. They identified the most reliable feature groups through four-point geometric consistency to estimate initial spatial transformation parameters, thereby providing a robust and efficient method for point cloud registration in complex urban settings. However, the performance of this approach is contingent upon the quality of noise reduction and feature extraction processes.

2.2. Deep Learning-Based Methods

With the rapid development in deep learning technology, deep learning methods have garnered significant attention in the field of point cloud registration [40,41,42]. Bello et al. [43] introduced a deep model called GEConvNet, which utilizes geometric features to construct local graphs and employs shared multi-layer perceptrons (MLPs) on graph edges to extract deep features. This approach effectively addresses challenges related to rotations and translations. However, techniques that rely on local geometric information often overlook the semantic information inherent in the scene. Zhao et al. [44] introduced the enhanced PointNet and the learnable feature aggregation transformer network (LFA-Net) for point cloud registration. This approach is capable of extracting richer local features from various receptive fields in both the source and template clouds, facilitating extensive interaction with feature information between pairs of point clouds. Additionally, it adaptively models global context information to select representative points, thereby achieving optimal point cloud registration. However, balancing computational complexity with real-time performance remains a challenge. Liu et al. [45] approached the point cloud registration problem as a task of semantic instance matching and registration, proposing a deep semantic graph matching (DeepSGM) method for large-scale outdoor point cloud registration. This method extracts semantic instances from 3D points using a semantic segmentation network and formulates the process of semantic instance matching as an optimal transport problem. However, it necessitates that point cloud registration datasets include both transformation ground truth and semantic labels, which limits its applicability across various scenarios. Wu et al. [46] introduced an inliers estimation network (INENet) designed for inconsistent point clouds. Their proposed probability estimation network and threshold prediction network are employed to identify overlapping regions between two point clouds, thereby facilitating partial overlap registration; this approach demonstrates effective performance even in cases where the overlap rates are low. Nonetheless, this method is restricted to situations involving only a single missing point cloud, which constrains its utility in real-world applications. In the work by Ma et al. [47], raw points are encoded into superpoints through a combined architecture utilizing KPConv and FPN, while incorporating a PPF transformer module to learn distinctive hybrid features invariant to rigid transformations. Correspondences between superpoints are established based on sparse matching techniques, rendering this method competitive in low-overlap scenarios. However, deep learning methods incur significant computational costs when handling large numbers of points, thus limiting their practical application contexts.

3. Materials and Methods

3.1. Workflow

This paper presents a novel registration method specifically designed for point clouds with low overlap rates in urban environments. The approach begins with rapid coarse registration in the planar direction, followed by the identification of the maximum consensus set through a two-stage feature pruning process to achieve precise point cloud registration. This method progressively estimates the 3D transformation between two point clouds, transitioning from coarse to fine alignment, and demonstrates strong adaptability to point clouds originating from diverse sources. The methodology comprises three primary processes: (1) extraction of predominant structure feature point clusters; (2) coarse registration based on the cosine value of the included angle; and (3) fine registration utilizing predominant structure feature point clusters, as illustrated in Figure 2.

3.2. Extraction of Predominant Structure Feature Point Clusters

In the research on point cloud registration for urban scenes, the accurate selection of representative feature points significantly influences both the efficiency and accuracy of the registration process. While point-based registration methods demonstrate broad applicability, their heightened sensitivity to noise and variations in point cloud overlap often pose challenges when addressing the complexities inherent in urban environments. However, the planar features present in a significant number of man-made objects within urban environments offer novel insights for registration processes. Nevertheless, directly extracted planes encompass a variety of elements, including small objects, large objects, regular shapes, and irregular forms. When making observations, individuals typically concentrate on components that exhibit global regularity or constitute a substantial proportion of the scene while often overlooking smaller or irregular items. This phenomenon similarly applies to the process of registration. Consequently, this paper advocates for conducting registration primarily on building facades characterized by global regularity or those that represent a considerable portion of the overall structure; these points are referred to as predominant structure feature points.

3.2.1. Predominant Structure Feature Point

To achieve the accurate extraction of predominant structure feature points, a voxelization grid and a binarization process are proposed. This approach converts unstructured and unordered point clouds into regularly resampled discrete 3D grids, as shown in Figure 3, where the colors of the point clouds are assigned according to their intensity values. In this paper, the grid size is set to three times the point resolution. Subsequently, a binarization process (i.e., assigning values of 0 or 1) is applied to the resampled grids, where each voxel is assigned a binary value that represents the facade projection density of that voxel. Specifically, a voxel containing more points than a predetermined threshold is labeled as 1, whereas voxels with fewer points are labeled as 0. The threshold serves primarily to filter out points exhibiting sparse distribution across the facade. Through this methodology, unstructured and unordered point clouds can be effectively resampled into predominant structure feature points characterized by significant spatial continuity and enhanced point cloud density attributes.
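As an illustration of this step, the following minimal sketch (Python with NumPy; the function name, the `min_points` threshold, and the exact bookkeeping are our own assumptions rather than the authors’ implementation) resamples an unordered cloud into a binarized voxel grid with the grid size set to three times the point resolution:

```python
import numpy as np

def binarize_voxels(points: np.ndarray, resolution: float, min_points: int = 5):
    """Resample an unordered (n, 3) point cloud into a binary voxel grid.

    Sketch of the voxelization-and-binarization step: the voxel size is
    three times the point resolution, and a voxel is labeled 1 only when
    it holds at least `min_points` points (an assumed threshold).
    """
    voxel_size = 3.0 * resolution
    origin = points.min(axis=0)
    idx = np.floor((points - origin) / voxel_size).astype(np.int64)

    # Count points per voxel; label voxels 1/0 against the density threshold.
    keys, inverse, counts = np.unique(idx, axis=0, return_inverse=True,
                                      return_counts=True)
    occupied = counts >= min_points
    keep = occupied[inverse]              # mask the binary label back onto points
    return points[keep], keys[occupied]   # retained points and occupied voxel indices
```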

3.2.2. Clustering of Predominant Point Clusters

According to the methodology presented in the previous section, a substantial number of predominant structure feature point clusters can be generated, from which the building framework can be derived through further processing. However, it is important to note that some point clouds are collected from considerable distances relative to the acquisition station. Consequently, these point clusters often encompass multiple sparse or small-scale facades, contributing to a low overlap rate among the point clouds. To address this issue, we employ the Euclidean clustering algorithm to effectively group the point cloud clusters. Sparse point cloud clusters are designated with a label of 0, indicating their exclusion from subsequent calculations. This approach yields refined predominant structure feature point clusters and significantly reduces the volume of point clouds requiring processing. As a result, it efficiently enhances the success rate of registration and decreases computation time.
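For concreteness, this grouping step can be sketched as follows. Euclidean clustering is approximated here with scikit-learn’s DBSCAN (with min_samples=1, its connected components at radius eps coincide with Euclidean cluster extraction); the cluster-size cutoff is an assumed parameter, and sparse clusters receive label 0 as described above:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_predominant_points(points: np.ndarray, radius: float,
                               min_cluster_size: int = 200):
    """Label dense structural clusters; sparse clusters get label 0.

    Sketch: DBSCAN with min_samples=1 reproduces Euclidean clustering,
    connecting points whose mutual distance is below `radius`.
    """
    labels = DBSCAN(eps=radius, min_samples=1).fit_predict(points)
    out = np.zeros(len(points), dtype=np.int64)   # 0 marks excluded clusters
    next_label = 1
    for lab in np.unique(labels):
        member = labels == lab
        if member.sum() >= min_cluster_size:      # keep only sufficiently dense clusters
            out[member] = next_label
            next_label += 1
    return out
```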

3.3. Coarse Registration Through Alignment of Predominant Features

Considering that laser scanning results can generally ensure the vertical orientation of the Z-axis, we can decompose the 3D registration process into two distinct steps: addressing horizontal and vertical directions. Dimensionality reduction will effectively minimize the degrees of freedom in the registration procedure. In this section, our primary focus is to achieve horizontal point cloud registration based on the predominant features of buildings, which encompasses two key components: calculation of line features and similarity measurement. The cosine value of the included angle is employed to identify correct line correspondences and compute the necessary transformation.

3.3.1. Line Extraction

To extract line features, it is essential to first obtain point features from the facade point cloud. The processed facade point cloud is projected onto a horizontal plane, and principal component analysis (PCA) is employed to compute the covariance matrix of the point set. The first principal direction, which represents the direction of maximum variance, corresponds to the first eigenvector, while the second principal direction, orthogonal to the first component, corresponds to the second eigenvector. The ratio of eigenvalues $s_{2,2}/s_{1,1}$ serves as an indicator of the flatness of the point set: a ratio approaching 0 suggests strong linearity, whereas a ratio close to 1 indicates weak linearity. Points exhibiting minimal eigenvalue ratios (which are most likely associated with lines) are selected as seed points for line growing.
The line extraction process utilizes an enhanced region-growing algorithm that takes into account both spatial proximity and geometric consistency when selecting neighboring points. Specifically, candidate points must meet two criteria: (1) they must be spatially adjacent to the seed point, and (2) they must align with the current line’s extension direction, which is defined by the principal direction determined earlier. This dual constraint ensures that the extracted lines adhere to the continuity assumption. Following the incorporation of new points, line features are recalculated iteratively until all qualifying lines are extracted, resulting in a final set of lines denoted as $L = \{l_1, l_2, l_3, \dots, l_n\}$. It is important to note that, due to the directional ambiguity of computed line normals, their inverse counterparts are also included in this analysis. In the source point cloud, cosine values of line pair normal vectors will be employed to search for candidate correspondences within the target point cloud.
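A minimal sketch of the seed selection stage is given below (Python with NumPy/SciPy; the neighborhood size k and the ratio cutoff are illustrative assumptions). For each projected point, PCA over its k nearest neighbours yields eigenvalues $s_{1,1} \ge s_{2,2}$, and points with a small ratio are returned as line-growing seeds:

```python
import numpy as np
from scipy.spatial import cKDTree

def linearity_seeds(points_2d: np.ndarray, k: int = 15, ratio_max: float = 0.1):
    """Return indices of seed points for line growing plus all ratio scores.

    Sketch: eigenvalues of the local 2x2 covariance give the ratio
    s22/s11; values near 0 indicate strong linearity.
    """
    tree = cKDTree(points_2d)
    _, nn = tree.query(points_2d, k=k)              # k nearest neighbours per point
    ratios = np.empty(len(points_2d))
    for i, idx in enumerate(nn):
        cov = np.cov(points_2d[idx].T)              # 2x2 covariance of the neighbourhood
        s = np.sort(np.linalg.eigvalsh(cov))[::-1]  # s11 >= s22
        ratios[i] = s[1] / max(s[0], 1e-12)
    return np.where(ratios < ratio_max)[0], ratios
```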

3.3.2. Similarity Measurement

The cosine values of angles between line normals are employed to establish candidate line correspondences. Given that line features are derived from point cloud traversal and linear feature filtering, a single physical line segment may yield multiple discrete fragments. To address this issue, our method merges closely situated line segments within the set L based on the similarity of their normal vectors and intercepts, retaining only those line pairs that satisfy geometric constraints along with their corresponding angular cosine values. By exhaustively comparing the cosine values of all possible combinations of line pairs between two point clouds, a match is identified when the angular cosine difference between two paired lines falls below a specified threshold (0.2 times the point cloud resolution). Subsequently, the rotation matrix r is computed from these matched line pairs using the following procedure.
Given two matched line pairs $(L_{1a}, L_{1b})$ and $(L_{2a}, L_{2b})$ with corresponding normal vectors $n_{1a}$, $n_{1b}$, $n_{2a}$, $n_{2b}$, subscripts 1 and 2 are employed to differentiate between the line segments, while the letters a and b denote the two endpoints of each line segment. The objective of this paper is to determine a rotation matrix $r$ that satisfies Formula (1); the problem can be formulated as minimizing an error function.

$$r\,n_{1a} = n_{2a}, \qquad r\,n_{1b} = n_{2b} \tag{1}$$

In practical matching scenarios, line pairs may demonstrate two distinct correspondence configurations:

• Forward correspondence: $L_{1a} \leftrightarrow L_{2a}$ and $L_{1b} \leftrightarrow L_{2b}$.
• Reverse correspondence: $L_{1a} \leftrightarrow L_{2b}$ and $L_{1b} \leftrightarrow L_{2a}$.
Thus, both possibilities are computationally assessed during the process of singular value decomposition (SVD). The translation vector t is obtained from line intercept parameters c by solving the linear system, as demonstrated in Formula (2).
$$\begin{pmatrix} n_{2a}^{\top} \\ n_{2b}^{\top} \end{pmatrix} t = \begin{pmatrix} c_{1a} - c_{2a} \\ c_{1b} - c_{2b} \end{pmatrix} \tag{2}$$
Geometrically, this optimization process aims to identify the optimal rotation matrix r and translation vector t that maximize the alignment of both normal vectors and spatial positions between the two sets of lines.
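Under the reconstruction of Formulas (1) and (2) given above, one matched line pair determines the planar transformation, as sketched below (Python with NumPy; the function and its interface are hypothetical). The caller would evaluate both the forward and reverse correspondence orderings via SVD and keep the one with the smaller residual:

```python
import numpy as np

def transform_from_line_pair(n1, n2, c1, c2):
    """Estimate planar rotation r and translation t from one matched line pair.

    n1, n2: (2, 2) arrays whose rows are unit line normals in the source
    and target clouds; c1, c2: the corresponding intercepts. Hypothetical
    interface; follows our reconstruction of Formulas (1)-(2).
    """
    # Kabsch/SVD: rotation r minimising ||r @ n1[i] - n2[i]|| over both normals.
    h = n1.T @ n2                                  # cross-covariance of the normals
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against a reflection
    r = vt.T @ np.diag([1.0, d]) @ u.T

    # Formula (2): each target normal dotted with t equals the intercept difference.
    t = np.linalg.solve(n2, c1 - c2)
    return r, t
```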

3.4. Fine Registration Utilizing Reliable Correspondences of Feature Points

Building upon the coarse registration achieved through line pair cosine features, this study introduces a fine registration approach grounded in correspondence graph optimization. The proposed method constructs a graph structure wherein nodes represent feature correspondences and edges encode geometric consistency constraints. Subsequently, it assesses node reliability via adjacency matrix analysis and evaluates edge reliability through an affinity matrix that incorporates both loose and tight constraints. Ultimately, optimal registration parameters are determined through global consensus set optimization.

3.4.1. Pruning Based on the Reliability of Point Primitive Correspondences

To establish robust correspondences, this study employs a weighted centroid displacement algorithm for key point extraction [48]. Compared with conventional feature detection methods, this algorithm exhibits superior capability in identifying structurally significant key points from complex building facade point clouds, thereby facilitating the construction of more stable correspondence graphs. Specifically, given the source point cloud key points $Q = \{q_i\}_{i=1}^{n}$ and target point cloud key points $P = \{p_j\}_{j=1}^{n}$, the rotation-invariant fast point feature histogram (FPFH) descriptor is utilized to characterize spatial distribution features between key points and their neighborhoods, resulting in an initial correspondence set $H = \{(p_i, q_i)\}_{1}^{N}$.
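A sketch of how such an initial correspondence set could be formed is shown below, assuming the Open3D library for FPFH computation (the radii, neighbour counts, and nearest-neighbour matching rule are our assumptions, not the authors’ exact pipeline):

```python
import numpy as np
import open3d as o3d
from scipy.spatial import cKDTree

def fpfh_correspondences(src_pts, tgt_pts, radius):
    """Build the initial correspondence set H by FPFH nearest neighbours.

    Sketch assuming Open3D: normals are estimated, 33-bin FPFH histograms
    computed, and each source keypoint matched to its nearest target
    keypoint in feature space.
    """
    def fpfh(pts):
        pc = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(pts))
        pc.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=30))
        feat = o3d.pipelines.registration.compute_fpfh_feature(
            pc, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * radius, max_nn=100))
        return np.asarray(feat.data).T             # (n, 33) descriptor rows

    f_src, f_tgt = fpfh(src_pts), fpfh(tgt_pts)
    _, nn = cKDTree(f_tgt).query(f_src)            # nearest neighbour in feature space
    return np.stack([np.arange(len(src_pts)), nn], axis=1)  # (src index, tgt index)
```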
This study employs the flexible graph matching (FRGM) [49] framework to define graph attributes as $G = (V, E)$, where the vertex set $V = \{v_i\}_{i=1}^{N}$ represents discrete feature correspondences. The edge set $E$ encodes geometric constraints through undirected connections, with edge attributes defined as spatial distance differences between corresponding nodes. The undirected graph is illustrated in Figure 4, where the red lines represent the undirected graph. The adjacency matrix $A$ quantifies the topological relationships among nodes, as presented in Formula (3).
$$a_{ij}^{P} = a_{ij}^{Q} = \begin{cases} 1, & \left| E_{ij}^{P} - E_{ij}^{Q} \right| < \delta \\ 0, & \left| E_{ij}^{P} - E_{ij}^{Q} \right| \ge \delta \end{cases}, \quad i = 1, 2, \dots, N,\ i \ne j \tag{3}$$
where $a_{ij}^{P}$ and $a_{ij}^{Q}$ denote the elements of the adjacency matrix $A$. The expression $\left| E_{ij}^{P} - E_{ij}^{Q} \right|$ represents the difference between the distance from node $q_i$ to $q_j$ and the distance from node $p_i$ to $p_j$. Additionally, $\delta$ is a distance threshold that is adaptively determined based on the density of the point cloud.
To accurately assess matching reliability, we define the node strength $b$ as a quantitative metric, calculated using Formula (4). Here, $a_{ij}$ represents the element located at row $i$ and column $j$ of adjacency matrix $A$, while $N$ denotes the total number of nodes. This metric essentially extends the concept of node degree $\deg(v_i)$ by incorporating edge weights. The summation of geometric consistency strengths (the values of $a_{ij}$) across all adjacent edges that meet geometric constraints directly reflects the topological significance of the node in global correspondences.
$$b_i = \sum_{j=1}^{N} a_{ij}, \quad a_{ij} \in A \tag{4}$$
Higher b values indicate a greater satisfaction of cross-cloud geometric constraints, thereby suggesting an increased confidence in matching. Figure 5 illustrates the process of point-based screening. In Figure 5a, the initial similar point pairs identified by the FPFH descriptor are presented, with black lines denoting correct corresponding point pairs and red dashed lines indicating incorrect ones. Figure 5b displays the undirected graph generated from these corresponding point pairs.
Experiments demonstrate that the reliability screening based on node strength can reduce the mismatching rate by approximately 0.4‰ (refer to Section 4.4). Following this principle, we sort all elements within the matrix and subsequently select the top $K$ groups of reliable point pairs to establish a set of reliable correspondences $H_r = \{(p_i, q_i)\}_{1}^{K}$, $H_r \subset H$, for the subsequent alignment step. It is important to note that $H_r$ is derived from the initial correspondence set $H$ through the application of the first constraint.
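The first-stage pruning can be condensed into the following sketch (Python with NumPy; `top_k` and `delta` are assumed parameters), which builds the adjacency matrix of Formula (3), accumulates node strength per Formula (4), and keeps the strongest K correspondences:

```python
import numpy as np

def prune_by_node_strength(p_pts, q_pts, top_k, delta):
    """Keep the top-K correspondences ranked by node strength b_i.

    p_pts[i] <-> q_pts[i] are the initial FPFH correspondences; `delta`
    is the adaptive distance threshold of Formula (3).
    """
    d_p = np.linalg.norm(p_pts[:, None, :] - p_pts[None, :, :], axis=-1)
    d_q = np.linalg.norm(q_pts[:, None, :] - q_pts[None, :, :], axis=-1)
    a = (np.abs(d_p - d_q) < delta).astype(float)  # adjacency matrix, Formula (3)
    np.fill_diagonal(a, 0.0)                       # exclude i == j
    b = a.sum(axis=1)                              # node strength, Formula (4)
    keep = np.argsort(-b)[:top_k]                  # indices of the most reliable pairs
    return keep, b
```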

3.4.2. Pruning Based on the Reliability of Line Primitive Correspondences

However, relying solely on the pruning of point primitives is insufficient to completely eliminate non-correspondence points. As illustrated in Figure 6, the non-correspondence points in both the source point cloud and the target point cloud satisfy the point primitive reliability condition $E_{ij}^{p} = E_{ij}^{q}$; however, they do not fulfill the line primitive reliability condition $Prj_{E_{ij}^{p}} E_{i2}^{p} \ne Prj_{E_{ij}^{q}} E_{i2}^{q}$. Here, $E_{ij}^{p}$ denotes the distance between two points, while $Prj_{E_{ij}^{p}} E_{i2}^{p}$ represents the projection distance from a point to a line. To address this issue, this paper introduces an edge node affinity matrix (ENAM), denoted as $M$, which functions essentially as a column vector. Each element within this matrix indicates whether a node pair $v_k$ satisfies the constraint conditions associated with edge $E_{ij}$, thereby allowing for an evaluation of the corresponding edge’s reliability. The values of the elements in matrix $M$ are derived using Formula (5). Similar to assessing node correspondence reliability, the sum of elements in $M$ reflects the number of nodes that meet these constraints; thus, a larger value signifies that more point pairs can be aligned by this edge and indicates higher reliability.
$$m(k, 1) = \begin{cases} 1, & F(E_{ij}^{PQ}, V_k^{PQ}) < 0 \\ 0, & F(E_{ij}^{PQ}, V_k^{PQ}) \ge 0 \end{cases}, \quad i, j, k = 1, 2, \dots, K,\ i \ne j \ne k \tag{5}$$
Among these, $E_{ij}^{PQ}$ and $V_k^{PQ}$ represent the aligned edges and their corresponding nodes that fulfill the constraints imposed by $F$.
Herein, we introduce two constraint functions [50]: the relaxation constraint $F_1$, which is based on projection distance, and the strict constraint $F_2$, which relies on rotation angle. These constraints are utilized for assessing the reliability of corresponding edges, as detailed in Formulas (6) and (7).
$$F_1(E_{ij}^{PQ}, V_k^{PQ}) = \left| \left\| Prj_{E_{ij}^{P}} E_{ik}^{P} \right\| - \left\| Prj_{E_{ij}^{Q}} E_{ik}^{Q} \right\| \right| - \delta \tag{6}$$
where $Prj_{E_1^{P}} E_2^{P}$ denotes the projection distance of $E_2$ onto $E_1$, and the threshold $\delta$ is adaptively determined based on the density of the point cloud. $E_{ij}^{PQ}$ and $V_k^{PQ}$ represent the edges and nodes of the respective graph.
$F_1$ identifies reliable corresponding point pairs through dual geometric constraints: distance consistency and axial projection consistency. When two point pairs are correctly matched, their relative distances in the source and target point clouds should be approximately equal. This criterion helps to exclude erroneous correspondences that exhibit excessively large distance discrepancies after transformation. Furthermore, the projection components of edge vectors along the rotation axis must remain consistent to ensure the geometric integrity of rotational transformations. This additional requirement further filters out incorrect correspondences that may satisfy distance constraints but have inconsistent directional properties. As a relaxation constraint that necessitates only vector projection and subtraction operations, this approach is more efficient than stringent constraint functions and is well-suited for preliminary screening purposes. If the edges within the candidate set do not meet these relaxation constraints, the calculation of strict constraints can be bypassed.
$$F_2(E_{ij}^{PQ}, V_k^{PQ}) = \left\| V_k^{P} - R(\theta, E_{ij}^{PQ})\, V_k^{Q} \right\| - \delta \tag{7}$$
where the aligned edge vector $E_{ij}^{PQ}$ serves as the rotation axis, and the rotation angle $\theta$ is determined to align $(V_k^{P}, V_k^{Q})$. The threshold $\delta$ is adaptively established based on the density of the point cloud.
$F_2$ quantifies the rotation alignment error by defining a solvable angle interval through stringent constraints based on rotation angles. It computes the angle interval for each corresponding point pair and employs an interval stabbing algorithm to identify the point pairs corresponding to the rotation angle $\theta$ that maximizes the consensus set. This process results in a consensus set $I_{ij} \subseteq H_r \subseteq H$ under strict constraints. By integrating the loose constraints, a hierarchical constraint system is established, thereby ensuring both accuracy and efficiency.
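The loose constraint $F_1$ lends itself to a compact vectorized test, sketched below for a single candidate edge (Python with NumPy; the projection convention follows our reconstruction of Formula (6), and the strict constraint $F_2$ would be evaluated only on the survivors):

```python
import numpy as np

def edge_affinity_f1(p_pts, q_pts, i, j, delta):
    """Column m(:, 1) of the ENAM for candidate edge (i, j) under F1.

    For every node k, the projection of edge (i, k) onto edge (i, j) must
    agree across the two clouds; entries are 1 where F1 < 0 (Formula (5)).
    """
    e_p = p_pts[j] - p_pts[i]
    e_q = q_pts[j] - q_pts[i]
    u_p = e_p / np.linalg.norm(e_p)                # unit edge direction, source graph
    u_q = e_q / np.linalg.norm(e_q)                # unit edge direction, target graph
    proj_p = (p_pts - p_pts[i]) @ u_p              # projections of edges (i, k)
    proj_q = (q_pts - q_pts[i]) @ u_q
    f1 = np.abs(np.abs(proj_p) - np.abs(proj_q)) - delta
    m = (f1 < 0).astype(np.int64)                  # 1 where the loose constraint holds
    m[[i, j]] = 0                                  # a node cannot support its own edge
    return m
```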

3.4.3. Registration Procedure

After obtaining the global maximum consensus set $I_{ij}$, the process advances to the registration stage. Initially, based on the corresponding edges $(E_{ij}^{P}, E_{ij}^{Q})$ within the global maximum consensus set, the initial transformation parameters are calculated using Formula (8). This step not only addresses most of the rotation and translation corrections already managed during coarse registration but also specifically compensates for any vertical axis displacement that may have been overlooked in that stage. Subsequently, to further enhance accuracy, these initial transformation parameters are optimized through the least-squares method and singular value decomposition applied to all points in the global maximum consensus set $I_{ij}$, thereby yielding the final registration parameters $R$ and $T$.
$$R_{ij} = R(\theta, E_{ij}^{PQ})\, R_{\tilde{E}_{ij}^{PQ}}, \qquad T_{ij} = \frac{1}{H} \sum_{k=1}^{K} I_{ij} \left( p_k - R_{ij}\, q_k \right) \tag{8}$$
where $H$ denotes the total number of elements within the global maximum consensus set $I_{ij}$, while $\theta$ represents the rotation angle that maximizes this consensus set.
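The closing least-squares/SVD refinement over the consensus set is standard; a minimal sketch follows (Python with NumPy), returning the $R$ and $T$ that minimize the squared residuals between matched points:

```python
import numpy as np

def refine_svd(p_pts, q_pts):
    """Least-squares rigid refinement over matched consensus-set points.

    Returns R, T minimising sum_k ||p_k - (R q_k + T)||^2, consistent
    with the residual form of Formula (8).
    """
    p_c, q_c = p_pts.mean(axis=0), q_pts.mean(axis=0)  # centroids
    h = (q_pts - q_c).T @ (p_pts - p_c)                # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))             # enforce a proper rotation
    R = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    T = p_c - R @ q_c
    return R, T
```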

4. Experiments and Results

4.1. Dataset and Computational Environments

This paper evaluates the proposed method using two benchmark datasets and one self-constructed dataset to assess its effectiveness and robustness. The NCWU dataset is obtained from unmanned aerial vehicle (UAV) platforms equipped with LiDAR, as well as from mobile platforms. Because of its relatively high flight altitude, traditional airborne LiDAR often yields sparse point cloud data in building facade areas, leading to significant data loss. In contrast, leveraging the advantages of low-altitude operation, UAV LiDAR achieves higher-density point cloud acquisition through closer scanning distances, which significantly enhances the geometric representation of building facades. Although a certain degree of fine structural detail remains missing, it effectively reconstructs the main structural characteristics of buildings; meanwhile, data collected by the mobile platform lack information about roof structures but clearly present facade details. These two data sources are highly complementary and exhibit notable differences. The Semantic3D dataset [51] is collected in street environments using a measurement-grade terrestrial laser scanning (TLS) platform; it primarily contains linear low-rise building data characterized by limited building types and simple compositional structures, yet it possesses high point cloud density. Conversely, the WHU-TLS building dataset [52] is acquired by scanning open scenes with a wide-angle laser scanner system, encompassing various complex ground object scenarios over an extensive coverage range, as illustrated in Figure 7. Table 1 provides more detailed information regarding these three datasets. In the subsequent processing steps, we calculated the overlap rates among the three groups of point clouds to be matched using voxel occupancy and point correspondences. To calculate the overlap rate of point clouds, this paper employs a bidirectional nearest neighbor search to identify overlapping points. The overlap rate is determined by averaging two ratios: the ratio of the number of overlapping points to the total number of target points, and the ratio of the number of overlapping points to the total number of source points. In general, these three datasets are distinguished by low overlap rates, a variety of building types, and significant occlusion. These factors present considerable challenges to the robustness and efficiency of registration algorithms.
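The bidirectional overlap-rate computation described above can be sketched as follows (Python with SciPy; the distance threshold is an assumed parameter):

```python
import numpy as np
from scipy.spatial import cKDTree

def overlap_rate(source, target, dist_thresh):
    """Bidirectional nearest-neighbour overlap rate, as described above.

    A point counts as overlapping when its nearest neighbour in the other
    cloud lies within `dist_thresh`; the two directional ratios are averaged.
    """
    n_st = (cKDTree(target).query(source)[0] < dist_thresh).sum()
    n_ts = (cKDTree(source).query(target)[0] < dist_thresh).sum()
    return 0.5 * (n_st / len(source) + n_ts / len(target))
```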
All experiments were conducted on a computer equipped with a 64-bit Windows operating system, 16 GB of RAM, and an AMD Ryzen 7 5700G processor (AMD, Santa Clara, CA, USA) running at a clock speed of 3.80 GHz.

4.2. Analysis of Registration Performance

The method proposed in this paper includes an incremental process that transitions from angle-based coarse registration to fine registration. As illustrated in Figure 8, Figure 9 and Figure 10, the three pairs of initially unregistered point clouds are successfully registered following the coarse registration phase. However, it is evident that some gaps remain between the corresponding objects post-coarse registration. This discrepancy primarily arises from deviations encountered during the detection of main frame line segments within the coarse registration process. Consequently, there are gaps between certain detected line segments and their actual counterparts in the main frame, resulting in an incomplete overlap of corresponding objects after registration. Furthermore, the lack of vertical direction calculations during coarse registration contributes to inconsistencies between the obtained registration results and real-world conditions. It is important to highlight that incorporating fine registration into the outcomes of coarse registration significantly enhances overall performance. We can also observe from the enlarged view of the registration results that the incremental registration achieved by the proposed method demonstrates exceptional performance. For instance, in Figure 8, the occluded window in the green point cloud is seamlessly integrated with the window in the red point cloud; in Figure 9, the red and green point clouds collectively form a complete fountain; and in Figure 10, the trunk sections of both point clouds have accurately overlapped. Furthermore, the building facades depicted in these two groups of point clouds exhibit strong concordance. The registration outcomes substantiate that our method is capable of effectively managing large-scale scenes characterized by low overlap rates while achieving commendable accuracy.
To quantitatively assess the registration accuracy, we incorporated classic state-of-the-art (SOTA) methods such as iterative closest point (ICP) [11] and K4PCS [53] in our comparative experiments. Additionally, we compared angle-based coarse registration with the PKSS method proposed by Lv et al. [29]. The PKSS method is a point cloud registration approach that integrates the ICP algorithm within the framework of Kendall’s shape space theory. In contrast with the 4PCS method, K4PCS utilizes a Kd-tree data structure along with a four-point coplanarity criterion to efficiently filter potential sets of matching point pairs. This approach proves particularly effective when handling large datasets characterized by varying densities, noise, and outliers. In our experiments, relatively complete partial point clouds were selected to facilitate a more accurate evaluation of performance across these datasets. Furthermore, we conducted an analysis on the operational efficacy of the proposed method.
The performance of the algorithm is assessed by evaluating both the axis-angle rotation error and the translation error across all transformations among the scans [54]. Given the source point cloud $P_s$, the target point cloud $P_t$, the estimated transformation $\{R_e \mid t_e\}$, and the ground truth $\{R_g \mid t_g\}$ provided by the datasets, the transformation errors are expressed in Formula (9).
$$e_r = \arccos\!\left( \frac{\operatorname{tr}\!\left( R_g (R_e)^{\top} \right) - 1}{2} \right) \cdot \frac{180}{\pi}, \qquad e_t = \left\| t_e - t_g \right\| \tag{9}$$
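Formula (9) translates directly into code; the clipping of the cosine argument below is our own numerical safeguard:

```python
import numpy as np

def registration_errors(R_e, t_e, R_g, t_g):
    """Axis-angle rotation error in degrees and translation error, Formula (9)."""
    cos_val = (np.trace(R_g @ R_e.T) - 1.0) / 2.0
    e_r = np.degrees(np.arccos(np.clip(cos_val, -1.0, 1.0)))  # clip for safety
    e_t = np.linalg.norm(t_e - t_g)
    return e_r, e_t
```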
Table 2 presents the quantitative evaluation results obtained from three methods applied to three datasets. The findings indicate that the proposed method achieves highly accurate registration outcomes in terms of both transformation and correspondence. For the NCWU dataset, the coarse registration yields a rotation accuracy of 0.209° and a translation accuracy of 0.733 m; following the integration of fine registration, these metrics improve significantly, with rotation accuracy reaching 0.088° and translation accuracy achieving 0.116 m. In the case of the SEMANTIC dataset and WHU dataset, the coarse registration method attains rotation accuracies of 0.172° and 0.240°, alongside translation accuracies of 0.895 m and 0.524 m, respectively. Upon incorporating fine registration, rotation accuracies are further enhanced to 0.074° and 0.062°, while translation accuracies reach values of 0.017 m and 0.027 m, respectively. Figure 11 and Figure 12 illustrate the rotational registration accuracy and translational registration accuracy achieved by the proposed method across all three datasets.
In the registration process, existing methods inevitably lose some key features during sampling, which leads to a reduction in effective feature points. Additionally, in low-overlap regions, repetitive structures (such as repeated tree trunks and building columns) complicate the accurate verification of geometric consistency, resulting in the failure of geometric constraints. Furthermore, many approaches are predicated on the assumption of a high point cloud overlap rate; this can cause registration results to be heavily influenced by outliers, ultimately yielding incorrect transformations. The classic ICP algorithm aims to perform registration by minimizing the Euclidean distance between point clouds. However, when addressing point clouds with inconsistent ranges, it tends to prioritize reducing the distance discrepancy between two sets of point clouds through translation of their positions while neglecting pairwise consistency. This oversight can lead to significantly larger translation errors in experimental outcomes. The K4PCS method relies on geometric constraints derived from coplanar point sets. Although urban scenes typically contain numerous coplanar point sets, various gaps present within these scenes adversely affect the quantity of available coplanar points. Consequently, this limitation hinders the algorithm’s ability to effectively capture sufficient constraint information. The PKSS method utilizes voxelization and parallel farthest point sampling (FPS) for point cloud simplification; in scenes with an uneven point cloud density distribution, this can lead to the excessive retention or erroneous deletion of key geometric feature points, ultimately degrading matching accuracy. Compared with PKSS, the proposed method exhibits significant advantages in two primary aspects. Firstly, during the coarse registration stage, it effectively preserves the predominant structural points that exhibit prominent geometric features. Secondly, during the fine registration stage, it introduces an innovative dual-stage feature pruning mechanism that effectively eliminates redundant points while ensuring the accurate retention of key feature points. This distinction allows the proposed method to achieve a noticeable improvement in registration accuracy compared with the PKSS method. More specifically, our method substantially reduces the large rotational and translational transformations between the original unregistered point clouds, even when only coarse registration is performed. When fine registration is subsequently applied after the coarse adjustment, both rotation and translation accuracies are significantly enhanced, resulting in superior overall registration accuracy.

4.3. Analysis of the Extraction Results for Predominant Structural Feature Points

After processing buildings through main frame extraction, their primary frame structures can be effectively extracted. Despite the presence of significant non-overlapping areas between the source point cloud and target point cloud data, research has demonstrated that the complete structure of an entire building can be accurately represented using only the line segments derived from a portion of its main frame. This characteristic facilitates direct point cloud registration in practical applications, even when the two sets of point cloud data encompass only different elements of the same building, as illustrated in Figure 13. Specifically, within Figure 13, the blue points correspond to the point clouds, while the red lines represent the extracted straight lines. The sub-figures in Figure 13 illustrate the results of extracting predominant structural feature points from three distinct datasets, with arrows indicating the straight lines employed for coarse registration. Compared with directly conducting fine registration on the original point cloud, the point cloud obtained after main frame extraction not only reduces the total number of points but also decreases the quantity of corresponding points involved in fine registration calculations. This enhancement leads to improved final registration accuracy, alleviates the traditional reliance on data integrity in point cloud registration, and offers novel ideas and methodologies for processing building point clouds in complex scenarios.
To quantitatively analyze the effect of main frame extraction, the average distance between two planes was selected as a metric to demonstrate registration performance. The root mean square error (RMSE), which is commonly used in registration assessments, becomes less applicable when there are numerous non-overlapping regions; consequently, we extracted the overlapping portions of the registered point clouds to compute the RMSE. The results are shown in Table 3. In terms of average distance, the three datasets achieved values of 0.099 m, 0.057 m, and 0.059 m, respectively. Performance was evidently optimal for the SEMANTIC3D dataset; this can be attributed to its pronounced linear features, which facilitate improved plane matching. The RMSE for the three datasets reached accuracies of 0.0688 m, 0.0543 m, and 0.0487 m, respectively.
After analyzing the cross-sections of the point cloud, it is evident that the ground points depicted in Figure 14c are well-registered. In contrast, the steps from the two platforms shown in Figure 14d exhibit discrepancies, which ultimately results in a lower registration accuracy compared with other datasets. Nevertheless, it is noteworthy that the lower end of the red steps aligns closely with the green steps. A similar scenario can be observed in Figure 14e. In instances where neither data source captures a complete representation of the same object structure, as illustrated in Figure 14f, auxiliary lines (blue lines) indicate that these two incomplete structures are interconnected. This observation underscores the effectiveness of our proposed method.

4.4. Analysis of Registration Efficiency

Table 4 presents the running times of the proposed method in comparison with other methods. The ICP algorithm is excluded from this comparison due to its requirement for selecting overlapping regions for registration. The primary reason for the rapid execution of the method proposed in this paper lies in its significant simplification of the registration process. Most existing methods allocate considerable time to local feature extraction, aiming to establish accurate point correspondences across all points and subsequently optimize the model by minimizing feature matching loss. Although approaches such as K4PCS incorporate an efficient four-point coplanarity detection algorithm and enhance search strategies—resulting in certain improvements over the PKSS method that employs global search—these advancements exhibit limited effectiveness when applied to point cloud registration within large urban environments. In contrast, the registration technique introduced herein reduces the number of corresponding point pairs that need calculation by identifying dominant structural points within extensive datasets, thereby substantially decreasing the time required during the registration phase.
Furthermore, we analyze the efficiency of registration from the perspective of the number of reliable point pairs. In our registration process, we first extracted facade structural points from the original point cloud using voxel binarization. Subsequently, we filtered points based on the node strength values in the correspondence graph. Finally, we obtained a set of filtered point pairs determined by the reliability of graph edges, as shown in Figure 15. Upon completing the registration process, we calculated the distance from each point in the registered source point cloud to its corresponding points in the target point cloud to establish a mapping relationship, using a threshold of 0.01 m to decide whether a pair of points successfully established such a relationship. To provide clearer insight into changes in point cloud density, we computed the ratio of corresponding points to original points as an indicator of the screening rate. As shown in Table 5 and Table 6, after filtering facade structural points, there was a significant reduction in the overall point count of the point clouds; this was particularly evident in narrow spaces such as SEMANTIC, where most points were concentrated on ground surfaces while feature-rich information remained sparse. Extracting facade structural points can greatly reduce the amount of data to be processed, thereby allowing a more focused analysis of critical features and enabling semantically reliable point screening. In the node strength-based screening process, the WHU dataset yielded the highest number of screened points. This outcome is attributed to the extensive range of the point cloud dataset, in which numerous identical features exhibit considerable geometric distortions due to varying distances and complex occlusion, leading to inconsistent geometric relationships. A similar phenomenon was observed in the NCWU dataset. The two distinct collection methods, ground-based and aerial, resulted in sparse facade structures within the source point cloud; furthermore, differing scanning angles contributed to substantial geometric discrepancies compared with the target point cloud. Moreover, among the final filtered points, the NCWU dataset, which lacks overlapping regions, contained the fewest points. However, following integration with ground points during the mapping process, the number of mapping points increased across all three datasets; notably, WHU exhibited the highest count of mapping points. To facilitate observation of variations in point quantities, Figure 16 presents these counts after applying a natural logarithm transformation.

4.5. Partial Overlap Registration

To comprehensively evaluate the performance of our method on point clouds with varying overlap rates, we selected pairs of point clouds exhibiting overlap rates ranging from 20% to 50% from the dataset and further categorized them into more refined groups. Specifically, we classified the data into three distinct levels based on their overlap rates and demonstrated the efficacy of the proposed method under these differing conditions. This was accomplished by removing overlapping regions as described in Section 4.3, thereby obtaining datasets characterized by different overlap rates and verifying the robustness of our method on non-overlapping point clouds. The experimental results are presented in Table 7. From our research findings, we can draw two key conclusions: (1) Although the registration performance of the proposed method inevitably degrades when the data’s overlap rate falls below 30%, it remains capable of effectively addressing point cloud registration tasks across various overlap rate scenarios while demonstrating commendable adaptability in terms of overall performance. (2) The proposed method exhibits strong robustness against challenges associated with non-overlapping point cloud registration resulting from varying initial overlap rates, as evidenced by minimal fluctuations in its performance indicators across diverse scenarios, reflecting stable processing capabilities.
Owing to the registration process that leverages dominant features, approximate point cloud registration can still be accomplished through the utilization of partial dominant points within the overlapping region, even in cases where a portion of the point cloud has been removed. However, if the eliminated overlapping area contains critical dominant structural points, which hinders the extraction of an adequate number of line segments, the performance of registration will inevitably deteriorate.

5. Conclusions

In this paper, we present a novel method for multi-source LiDAR point cloud registration that is grounded in progressive hierarchical processing and dual-stage feature pruning. The proposed approach includes voxel binarization to extract facade structural points, line pair-based coarse registration, and fine registration through the use of a correspondence graph. Among these components, the extraction of facade structural points allows for the semantic identification of facade groups rich in information from the point cloud data, thereby providing a robust foundation for subsequent coarse and fine registration processes. The coarse registration phase leverages line pairs derived from facades to establish approximate geometric correspondences even when dealing with incomplete point clouds, effectively addressing large rotations and translations between different point clouds. In contrast, the fine registration phase enhances accuracy by eliminating incorrect matches through dual-stage feature pruning applied to both graph nodes and edges. Experimental results across three datasets demonstrate that our proposed method achieves superior accuracy compared with existing techniques.
It is important to acknowledge that the current methodology possesses certain limitations. For instance, it is heavily dependent on the quality and completeness of facade structural points. In scenarios where facades are significantly occluded or exhibit irregular shapes, such as in some ancient buildings characterized by complex and unique structures, the extraction of effective facade structural points may be compromised, consequently diminishing registration accuracy. In future work, we will investigate the application of this method across various urban environments, particularly those characterized by heterogeneous buildings and cultural heritage, with the primary aim of addressing these limitations while assessing the method's generalization capability and optimizing its performance in practical applications. Moreover, considering the significant variations in indoor scene features, indoor environments will also represent a crucial focus of our research.

Author Contributions

Conceptualization, K.M.; methodology, K.M. and F.Y.; software, K.M., F.Y. and S.L.; validation, F.Y. and G.H.; formal analysis, K.M. and F.W.; investigation, K.M. and L.C.; data curation, K.M., F.Y. and X.J.; writing—original draft preparation, F.Y. and S.L.; writing—review and editing, S.L. and F.W.; visualization, K.M. and F.Y.; project administration, S.L., X.J. and L.C.; funding acquisition, K.M. and G.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Henan Province Key Research and Development Project (No. 251111222400), and the Key Research Projects of Higher Education Institutions in Henan Province (No. 25B420001).

Data Availability Statement

The data presented in this study are openly available; see references [47,48].

Acknowledgments

The authors would like to thank the editor and reviewers for their contributions to this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Xia, S.; Chen, D.; Wang, R.; Li, J.; Zhang, X. Geometric Primitives in LiDAR Point Clouds: A Review. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 685–707.
2. Wang, X.; Chen, Q.; Wang, H.; Li, X.; Yang, H. Automatic Registration Framework for Multi-Platform Point Cloud Data in Natural Forests. Int. J. Remote Sens. 2023, 44, 4596–4616.
3. Bello, S.A.; Yu, S.; Wang, C.; Adam, J.M.; Li, J. Review: Deep Learning on 3D Point Clouds. Remote Sens. 2020, 12, 1729.
4. Chen, M.; Xiao, L.; Jin, Z.; Pan, J.; Mu, F.; Tang, F. Registration of Terrestrial Laser Scanning Data in Forest Areas Using Smartphone Positioning and Orientation Data. Remote Sens. Lett. 2023, 14, 381–391.
5. Stilla, U.; Xu, Y. Change Detection of Urban Objects Using 3D Point Clouds: A Review. ISPRS J. Photogramm. Remote Sens. 2023, 197, 228–255.
6. Zhu, Z.; Chen, T.; Rowlinson, S.; Rusch, R.; Ruan, X. A Quantitative Investigation of the Effect of Scan Planning and Multi-Technology Fusion for Point Cloud Data Collection on Registration and Data Quality: A Case Study of Bond University’s Sustainable Building. Buildings 2023, 13, 1473.
7. Li, R.; Gan, S.; Yuan, X.; Bi, R.; Luo, W.; Chen, C.; Zhu, Z. Automatic Registration of Large-Scale Building Point Clouds with High Outlier Rates. Autom. Constr. 2024, 168, 105870.
8. Yang, S.; Hou, M.; Shaker, A.; Li, S. Modeling and Processing of Smart Point Clouds of Cultural Relics with Complex Geometries. ISPRS Int. J. Geo-Inf. 2021, 10, 617.
9. Gao, X.; Yang, R.; Chen, X.; Tan, J.; Liu, Y.; Liu, S. Indoor Scene Reconstruction from LiDAR Point Cloud Based on Roof Extraction. J. Build. Eng. 2024, 97, 110874.
10. Zuo, C.; Feng, Z.; Xiao, X. CCMD-SLAM: Communication-Efficient Centralized Multirobot Dense SLAM With Real-Time Point Cloud Maintenance. IEEE Trans. Instrum. Meas. 2024, 73, 7504812.
11. Besl, P.J.; McKay, N.D. A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
12. Aiger, D.; Mitra, N.J.; Cohen-Or, D. 4-Points Congruent Sets for Robust Pairwise Surface Registration. ACM Trans. Graph. 2008, 27, 1–10.
13. Yang, H.; Shi, J.; Carlone, L. TEASER: Fast and Certifiable Point Cloud Registration. IEEE Trans. Robot. 2021, 37, 314–333.
14. Wang, Q.; Kim, M.-K. Applications of 3D Point Cloud Data in the Construction Industry: A Fifteen-Year Review from 2004 to 2018. Adv. Eng. Inform. 2019, 39, 306–319.
15. Hu, K.; Chen, Z.; Kang, H.; Tang, Y. 3D Vision Technologies for a Self-Developed Structural External Crack Damage Recognition Robot. Autom. Constr. 2024, 159, 105262.
16. Kumar Singh, S.; Pratap Banerjee, B.; Raval, S. A Review of Laser Scanning for Geological and Geotechnical Applications in Underground Mining. Int. J. Min. Sci. Technol. 2023, 33, 133–154.
17. Chen, M.; Wang, S.; Wang, M.; Wan, Y.; He, P. Entropy-Based Registration of Point Clouds Using Terrestrial Laser Scanning and Smartphone GPS. Sensors 2017, 17, 197.
18. Zhao, T.; Tian, T.; Zou, X.; Yan, L.; Zhong, S. Robust Point Cloud Registration via Patch Matching. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5102213.
19. Cattaneo, D.; Vaghi, M.; Valada, A. LCDNet: Deep Loop Closure Detection and Point Cloud Registration for LiDAR SLAM. IEEE Trans. Robot. 2022, 38, 2074–2093.
20. Huang, X.; Mei, G.; Zhang, J. Feature-Metric Registration: A Fast Semi-Supervised Approach for Robust Point Cloud Registration Without Correspondences. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020.
21. Sarode, V.; Li, X.; Goforth, H.; Aoki, Y.; Srivatsan, R.A.; Lucey, S.; Choset, H. PCRNet: Point Cloud Registration Network Using PointNet Encoding. arXiv 2019, arXiv:1908.07906.
22. Jeon, Y.; Seo, S.-W. EFGHNet: A Versatile Image-to-Point Cloud Registration Network for Extreme Outdoor Environment. IEEE Robot. Autom. Lett. 2022, 7, 7511–7517.
23. Yu, X.; Tang, L.; Rao, Y.; Huang, T.; Zhou, J.; Lu, J. Point-BERT: Pre-Training 3D Point Cloud Transformers with Masked Point Modeling. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 19291–19300.
24. Wu, Y.; Hu, X.; Zhang, Y.; Gong, M.; Ma, W.; Miao, Q. SACF-Net: Skip-Attention Based Correspondence Filtering Network for Point Cloud Registration. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 3585–3595.
25. Shi, X.; Liu, T.; Han, X. Improved Iterative Closest Point (ICP) 3D Point Cloud Registration Algorithm Based on Point Cloud Filtering and Adaptive Fireworks for Coarse Registration. Int. J. Remote Sens. 2020, 41, 3197–3220.
26. Geng, H.; Song, P.; Zhang, W. An Improved Large Planar Point Cloud Registration Algorithm. Electronics 2024, 13, 2696.
27. Tian, Y.; Yue, X.; Zhu, J. Coarse–Fine Registration of Point Cloud Based on New Improved Whale Optimization Algorithm and Iterative Closest Point Algorithm. Symmetry 2023, 15, 2128.
28. Das, A.; Waslander, S.L. Scan Registration Using Segmented Region Growing NDT. Int. J. Robot. Res. 2014, 33, 1645–1663.
29. Lv, C.; Lin, W.; Zhao, B. KSS-ICP: Point Cloud Registration Based on Kendall Shape Space. IEEE Trans. Image Process. 2023, 32, 1681–1693.
30. Parkison, S.A.; Ghaffari, M.; Gan, L.; Zhang, R.; Ushani, A.K.; Eustice, R.M. Boosting Shape Registration Algorithms via Reproducing Kernel Hilbert Space Regularizers. IEEE Robot. Autom. Lett. 2019, 4, 4563–4570.
31. Tao, W.; Xiao, Y.; Wang, R.; Lu, T.; Xu, S. A Fast Registration Method for Building Point Clouds Obtained by Terrestrial Laser Scanner via 2-D Feature Points. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2024, 17, 9324–9336.
32. Yue, X.; Liu, Z.; Zhu, J.; Gao, X.; Yang, B.; Tian, Y. Coarse-Fine Point Cloud Registration Based on Local Point-Pair Features and the Iterative Closest Point Algorithm. Appl. Intell. 2022, 52, 12569–12583.
33. Yang, L.; Xu, S.; Yang, Z.; He, J.; Gong, L.; Wang, W.; Li, Y.; Wang, L.; Chen, Z. Fast Registration Algorithm for Laser Point Cloud Based on 3D-SIFT Features. Sensors 2025, 25, 628.
34. Zhong, Y. Intrinsic Shape Signatures: A Shape Descriptor for 3D Object Recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), Kyoto, Japan, 27 September–4 October 2009; pp. 689–696.
35. He, L.; Wang, S.; Hu, Q.; Cai, Q.; Li, M.; Bai, Y.; Wu, K.; Xiang, B. GFOICP: Geometric Feature Optimized Iterative Closest Point for 3-D Point Cloud Registration. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5704217.
36. Sipiran, I.; Bustos, B. Harris 3D: A Robust Extension of the Harris Operator for Interest Point Detection on 3D Meshes. Vis. Comput. 2011, 27, 963–976.
37. Liu, L.; Xiao, J.; Wang, Y.; Lu, Z.; Wang, Y. A Novel Rock-Mass Point Cloud Registration Method Based on Feature Line Extraction and Feature Point Matching. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5701117.
38. Ma, G.; Wei, H. A Novel Sketch-Based Framework Utilizing Contour Cues for Efficient Point Cloud Registration. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5703616.
39. Li, R.; Yuan, X.; Gan, S.; Bi, R.; Luo, W.; Chen, C.; Zhu, Z. Automatic Coarse Registration of Urban Point Clouds Using Line-Planar Semantic Structural Features. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5707824.
40. Wu, Y.; Lei, J.; Yuan, Y.; Fan, X.; Gong, M.; Ma, W.; Miao, Q.; Zhang, M. Equivariance-Based Markov Decision Process for Unsupervised Point Cloud Registration. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 4648–4660.
41. Wang, C.; Gu, Y.; Li, X. LPRnet: A Self-Supervised Registration Network for LiDAR and Photogrammetric Point Clouds. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4404012.
42. Wang, Z.; Huang, S.; Butt, J.A.; Cai, Y.; Varga, M.; Wieser, A. Cross-Modal Feature Fusion for Robust Point Cloud Registration with Ambiguous Geometry. ISPRS J. Photogramm. Remote Sens. 2025, 227, 31–47.
43. Bello, S.A.; Alfasly, S.; Mao, J.; Lu, J.; Li, L.; Xu, C.; Zou, Y. Geometric Edge Convolution for Rigid Transformation Invariant Features in 3D Point Clouds. Neurocomputing 2025, 622, 129313.
44. Zhao, Z.; Kang, J.; Feng, L.; Liang, J.; Ren, Y.; Wu, B. LFA-Net: Enhanced PointNet and Assignable Weights Transformer Network for Partial-to-Partial Point Cloud Registration. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 162–177.
45. Liu, S.; Wang, T.; Zhang, Y.; Zhou, R.; Li, L.; Dai, C.; Zhang, Y.; Wang, L.; Wang, H. Deep Semantic Graph Matching for Large-Scale Outdoor Point Cloud Registration. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5701412.
46. Wu, Y.; Zhang, Y.; Fan, X.; Gong, M.; Miao, Q.; Ma, W. INENet: Inliers Estimation Network with Similarity Learning for Partial Overlapping Registration. IEEE Trans. Circuits Syst. Video Technol. 2023, 33, 1413–1426.
47. Ma, T.; Han, G.; Chu, Y.; Ren, H. Sparse-to-Dense Point Cloud Registration Based on Rotation-Invariant Features. Remote Sens. 2024, 16, 2485.
48. Li, S.; Yan, F.; Ma, K.; Hu, Q.; Wang, F.; Liu, W. Optimal Feature-Guided Position-Shape Dual Optimization for Building Point Cloud Facade Detail Enhancement. Remote Sens. 2024, 16, 4324.
49. Wang, F.-D.; Xue, N.; Zhang, Y.; Xia, G.-S.; Pelillo, M. A Functional Representation for Graph Matching. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 42, 2737–2754.
50. Yan, L.; Wei, P.; Xie, H.; Dai, J.; Wu, H.; Huang, M. A New Outlier Removal Strategy Based on Reliability of Correspondence Graph for Fast Point Cloud Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 45, 7986–8002.
51. Hackel, T.; Savinov, N.; Ladicky, L.; Wegner, J.D.; Schindler, K.; Pollefeys, M. Semantic3D.Net: A New Large-Scale Point Cloud Classification Benchmark. arXiv 2017, arXiv:1704.03847.
52. Dong, Z.; Liang, F.; Yang, B.; Xu, Y.; Zang, Y.; Li, J.; Wang, Y.; Dai, W.; Fan, H.; Hyyppä, J.; et al. Registration of Large-Scale Terrestrial Laser Scanner Point Clouds: A Review and Benchmark. ISPRS J. Photogramm. Remote Sens. 2020, 163, 327–342.
53. Theiler, P.W.; Wegner, J.D.; Schindler, K. Keypoint-Based 4-Points Congruent Sets—Automated Marker-Less Registration of Laser Scans. ISPRS J. Photogramm. Remote Sens. 2014, 96, 149–163.
54. Wei, P.; Yan, L.; Xie, H.; Huang, M. Automatic Coarse Registration of Point Clouds Using Plane Contour Shape Descriptor and Topological Graph Voting. Autom. Constr. 2022, 134, 104055.
Figure 1. Registration of point clouds with low overlap.
Figure 2. The workflow of the proposed methodology.
Figure 3. Schematic diagram of voxel binarization.
Figure 4. Undirected complete graph.
Figure 5. Graph-based screening of reliable point pairs. (a) Initial similar point pairs identified using the FPFH descriptor; (b) undirected graph constructed from the corresponding point pairs. Black lines denote point pairs whose node strength exceeds the threshold, while red lines indicate those below it. Colors are assigned by height, and the numbers mark the candidate corresponding points between the source and target point clouds.
Figure 6. Illustration of why graph nodes alone cannot entirely eliminate non-corresponding points.
Figure 7. Dataset description. Herein, the red point cloud represents the source point cloud, while the green point cloud denotes the target point cloud.
Figure 8. Results of coarse registration and complete registration on the NCWU dataset, where the red points represent the source point cloud and the green points denote the target point cloud.
Figure 9. Results of coarse registration and complete registration on the Semantic dataset, where the red points represent the source point cloud and the green points denote the target point cloud.
Figure 10. Results of coarse registration and complete registration on the WHU dataset, where the red points represent the source point cloud and the green points denote the target point cloud.
Figure 11. Accuracy of rotational registration in the conducted experiments.
Figure 12. Accuracy of translational registration in the conducted experiments.
Figure 13. Extraction results of predominant structural points.
Figure 14. Section display of the registration results.
Figure 15. Schematic representation of the completed mapped point cloud. The red points denote the source point cloud, the green points indicate the target point cloud, and the blue points represent the calculated mapped point cloud. (a) NCWU; (b) SEMANTIC 3D; (c) WHU-TLS.
Figure 16. Graph of the variation in the number of points.
Table 1. Comprehensive descriptions of the three datasets.

| Dataset | Role | Platform | Point Number | Coverage Area (m) |
|---|---|---|---|---|
| NCWU | Reference | Mobile | 8,494,883 | 200 × 190 × 60 |
| NCWU | Alignment | UAV | 17,421,283 | 290 × 210 × 60 |
| SEMANTIC | Reference | Terrestrial | 41,268,288 | 250 × 270 × 50 |
| SEMANTIC | Alignment | Terrestrial | 35,207,289 | 320 × 270 × 160 |
| WHU | Reference | Terrestrial | 9,285,860 | 400 × 800 × 120 |
| WHU | Alignment | Terrestrial | 11,259,360 | 500 × 700 × 180 |
Table 2. Results of the registration experiments.

| Dataset | Metric | ICP | K4PCS | PKSS | Line-Based Registration | Ours |
|---|---|---|---|---|---|---|
| NCWU | e_r (deg) | 0.659 | 0.315 | 0.115 | 0.209 | 0.088 |
| NCWU | e_t (m) | 1.168 | 0.698 | 0.265 | 0.733 | 0.116 |
| SEMANTIC | e_r (deg) | 0.542 | 0.237 | 0.135 | 0.172 | 0.074 |
| SEMANTIC | e_t (m) | 1.340 | 0.249 | 0.202 | 0.895 | 0.017 |
| WHU | e_r (deg) | 0.627 | 0.223 | 0.172 | 0.240 | 0.062 |
| WHU | e_t (m) | 1.652 | 0.590 | 0.109 | 0.524 | 0.027 |
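The rotation error e_r and translation error e_t reported in Table 2 can be reproduced with the definitions standard in registration benchmarks: the geodesic angle between the estimated and ground-truth rotation matrices and the Euclidean distance between the translation vectors. The sketch below assumes these common conventions; it is an illustration, not a transcription of the authors' evaluation code.

```python
import numpy as np

def registration_errors(R_est: np.ndarray, t_est: np.ndarray,
                        R_gt: np.ndarray, t_gt: np.ndarray):
    """Return (e_r in degrees, e_t in meters) for 3x3 rotations and 3-vectors."""
    # Geodesic distance between rotations; clip guards against round-off
    cos_theta = (np.trace(R_gt.T @ R_est) - 1.0) / 2.0
    e_r = np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    # Euclidean distance between translation vectors
    e_t = float(np.linalg.norm(t_gt - t_est))
    return e_r, e_t
```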
Table 3. Accuracy of predominant structural points.

| Dataset | Average Distance (m) | RMSE (m) |
|---|---|---|
| NCWU | 0.099 | 0.0688 |
| SEMANTIC | 0.057 | 0.0543 |
| WHU | 0.059 | 0.0487 |
Table 4. The execution time (in seconds) of the methods across the three datasets.

| Dataset | K4PCS | PKSS | Registration Stage | Ours |
|---|---|---|---|---|
| NCWU | 492 | 550 | 31 | 474 |
| SEMANTIC | 153 | 259 | 5 | 67 |
| WHU | 504 | 560 | 84 | 80 |
Table 5. Number of points screened based on reliability.

| Dataset | Average Number of Points | Average Elevation Structural Points | Nodal Strength Points | Final Pruning Points | Realization of Mapping Points |
|---|---|---|---|---|---|
| NCWU | 12,958,083 | 467,773 | 55,340 | 1651 | 1966 |
| SEMANTIC | 38,237,789 | 96,768 | 13,628 | 3821 | 13,151 |
| WHU | 10,272,610 | 782,528 | 58,844 | 4768 | 27,651 |
Table 6. Reliability rates of point pairs.

| Dataset | Facade Structural Point Rates | Node Strength Rates | Final Pruning Rates | Mapping Achievement Rates |
|---|---|---|---|---|
| NCWU | 36.1‰ | 4.3‰ | 0.1‰ | 0.2‰ |
| SEMANTIC | 2.5‰ | 0.4‰ | 0.1‰ | 0.3‰ |
| WHU | 76.2‰ | 5.7‰ | 0.5‰ | 2.7‰ |
Table 7. Accuracy of rotation (degrees) and translation (meters) at varying overlap rates.

| Dataset | 40~50% Rotation | 40~50% Translation | 30~40% Rotation | 30~40% Translation | 20~30% Rotation | 20~30% Translation |
|---|---|---|---|---|---|---|
| NCWU | 0.134 | 0.192 | 0.223 | 0.211 | 45.059 | 62.857 |
| SEMANTIC | 0.156 | 0.188 | 0.259 | 1.395 | 99.964 | 37.538 |
| WHU | 0.067 | 0.023 | 0.088 | 0.029 | 88.549 | 38.947 |