Article

Enhancing Point Cloud Registration for Pipe Fittings: A Coarse-to-Fine Approach with DANIP Keypoint Detection and ICP Optimization

School of Mechatronic Engineering, Changchun University of Technology, Changchun 130012, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(22), 7012; https://doi.org/10.3390/s25227012
Submission received: 21 October 2025 / Revised: 11 November 2025 / Accepted: 12 November 2025 / Published: 17 November 2025
(This article belongs to the Section Optical Sensors)

Abstract

In 3D reconstruction, loss of depth data caused by highly reflective surfaces often undermines the accuracy of point cloud registration. Traditional registration methods suffer from reduced accuracy and computational efficiency under such conditions. This paper presents a novel coarse-to-fine point cloud registration approach that combines a density-aware keypoint detection method with iterative closest point optimization to enhance both precision and computational performance. The proposed keypoint detection method optimizes registration by progressively refining the initial pose estimate through multi-scale geometric feature detection. This process includes a density-aware mechanism for removing edge outliers and an adaptive threshold based on normal vector inner products. This improves both keypoint identification accuracy and matching efficiency, providing better initial registration for the iterative closest point algorithm in scenarios with significant data loss. The approach prevents the iterative closest point algorithm from converging to local optima, which improves both convergence speed and overall computational performance. Experimental results show that, under optimal conditions, the runtime is reduced by up to 78.01% across several datasets, including those from Stanford, Kinect, Queen, and ASL-LRD. Compared to other traditional methods, the proposed approach delivers higher registration accuracy, even for multi-view point clouds with severe data loss, which demonstrates its robustness and potential for engineering applications.

1. Introduction

Pipe fittings, such as elbows, are essential components of industrial pipeline systems and are extensively utilized in construction [1], manufacturing [2], and energy [3]. The geometric shapes and spatial configurations of these pipe fittings directly impact the precision of system installation, operational efficiency [4], and safety. Therefore, accurate reconstruction and measurement [5] of these components are of significant importance. However, this process presents significant technical challenges due to the geometric complexity and material properties of the fittings. In response, 3D point cloud registration algorithms have become essential tools for reconstructing the shape and spatial position of pipe components. These algorithms enable the accurate alignment of raw point cloud data acquired by sensors into a unified coordinate system, thereby facilitating precise measurement and digital modeling. However, the performance of depth cameras based on structured light is often limited by the reflective properties of highly reflective surfaces during point cloud acquisition. For instance, when the illuminated surface is flat or smooth, incident light may reflect unevenly or concentrate excessively, making it difficult for the camera’s sensor to properly capture the reflected rays. This issue is particularly pronounced on metal surfaces or other highly reflective materials, often leading to data loss and compromising the integrity of the point cloud [6]. To address this challenge, many studies have adopted High Dynamic Range (HDR) technology, which captures illumination across multiple exposures to mitigate the effects of uneven surface reflections. Although HDR techniques have shown some success in reducing data loss, the complexity of multiple exposures and post-processing still leads to deficiencies in certain regions of the point cloud, thereby imposing higher demands on point cloud registration algorithms.
Point cloud registration is a key technique for aligning data from multiple sources into a unified coordinate system. This ensures consistent geometric relationships between datasets. Currently, the most widely used point cloud registration method is the Iterative Closest Point (ICP) [7]. Due to its stability and applicability, ICP has been extensively employed in industrial inspection, robotic navigation, and 3D reconstruction. However, the ICP algorithm is sensitive to initial alignment and requires the point clouds to be relatively close in position. Additionally, it is susceptible to noise and outliers. The performance of the ICP algorithm can be significantly limited when dealing with depth data loss from highly reflective surfaces. To address these challenges, researchers have proposed numerous ICP variants [8,9,10]. However, most of these variants remain sensitive to both initial alignment and noise. A prevalent approach to mitigate these limitations is to perform coarse registration [11,12,13,14] using feature matching. This provides a better initial alignment for the ICP algorithm, which is then applied to achieve final registration.
Feature-based coarse registration typically involves steps like keypoint detection, feature description, feature matching, and robust pose estimation. For coarse registration methods, researchers have proposed approaches like Fast Point Feature Histograms (FPFH) [15], Histograms of Point Pair Features (HoPPF) [16], and Binary and Triangle Combined (BTC) Descriptor [17] to improve feature descriptions. They have also suggested methods such as Graph-Cut Random Sample Consensus (GC-RANSAC) [18], V-Random Sample Consensus (VSAC) [19], Quadratic-time Guaranteed Outlier Removal (QGORE) [20], and Streamlined Progressive Sample Consensus (SPROSAC) [14] to enhance robust pose estimation. However, since feature descriptions capture the local characteristics of keypoints and robust pose estimation depends on the correspondence of these keypoints, the accuracy of the initial values provided to the ICP algorithm is ultimately constrained by the quality of keypoint detection on the surface of incomplete data.
Point cloud keypoints refer to a set of points that reflect the fundamental geometric structure of a point cloud. They can be extracted based on defined detection criteria, with rotational and translational invariance. Keypoint detection methods aim to identify prominent points at specific scales. Zhong [21] proposed the Intrinsic Shape Signatures (ISS) algorithm. It measures the saliency of keypoints based on the eigenvalue decomposition of the scatter matrix derived from a point and its neighboring points. Keypoints are considered only if the ratio between two consecutive eigenvalues is below a specified threshold. Zaharescu and Boyer [22] introduced a 3D keypoint detection algorithm called MeshDOG, designed for uniform triangular meshes, which is invariant to rotation, translation, and scale transformations. Steder et al. [23] proposed a new method for detecting keypoints and calculating feature descriptors in 3D distance data, called the Normal Aligned Radial Feature (NARF). This method extracts distinctive keypoints in 3D point clouds. Sipiran and Bustos [24] developed a 3D keypoint detection algorithm called Harris 3D, using the Harris operator. This algorithm determines keypoints by computing the Harris response within the neighborhood of observation points. The Scale-Invariant Feature Transform (SIFT), originally designed for 2D images, was adapted by Rusu and Cousins [25] for 3D point clouds, resulting in the SIFT 3D algorithm. This adaptation replaces pixel intensity in the original algorithm with the principal curvature of points in the 3D point cloud. With the advancement of deep learning, researchers have begun exploring learning-based algorithms for 3D keypoint detection in point clouds. This has led to the development of methods such as KeypointNet [26], Unsupervised Stable Interest Point Detection (USIP) [27], Dense 3D Feature (D3Feat) [28], and Semantic Keypoint (SKP) [29]. However, the application of deep learning-based algorithms in industrial fields is limited by challenges related to interpretability, the need for large amounts of labeled data, computational resource constraints, and their robustness in handling noise. Although significant progress has been made in 3D keypoint detection, there is still room for improvement in areas like robustness, generalization, adaptive parameter adjustment, and handling data loss from reflective surfaces.
In this paper, we propose a coarse-to-fine point cloud registration method for industrial pipe fittings that integrates the newly proposed Density-Aware Normal Inner Product (DANIP) keypoint detection with the ICP algorithm. The main contributions of this research are summarized as follows:
  • An innovative 3D point cloud keypoint detection method, DANIP, is proposed, which combines a density-aware anomaly point removal mechanism with a multi-scale locally adaptive threshold detection based on normal vector inner products. This method demonstrates exceptional performance in keypoint detection accuracy, matching precision, and computational efficiency.
  • We introduce a coarse-to-fine point cloud registration method based on DANIP keypoint detection and the ICP algorithm. This method effectively addresses the limitations of the ICP algorithm, which is prone to local optima, while significantly improving convergence efficiency and computational performance in the registration process.
  • We conduct a registration study of common pipe fittings in real-world environments to evaluate the effectiveness of the coarse-to-fine point cloud registration method based on DANIP and ICP. The proposed method achieves higher registration accuracy than mainstream algorithms, even in multi-view scenarios with severe data loss.
The remainder of this paper is organized as follows. Section 2 presents the calculation method for the Density-Aware Normal Inner Product. Section 3 describes the proposed coarse-to-fine point cloud registration method based on DANIP and ICP. Section 4 discusses the experimental validation of the proposed method. Finally, Section 5 concludes the paper with a summary of the findings.

2. Density-Aware Normal Inner Product Keypoint Detection

After obtaining the pre-processed point cloud data, the Density-Aware Normal Inner Product Keypoint Detection (DANIP) method detects keypoints through the following steps. First, a density-based edge point removal mechanism is introduced, while the mean of the inner product of normal vectors within an adaptive neighborhood is used as the response value for local keypoint detection. Then, a non-maximum suppression technique, based on the distribution of normal vectors and the point cloud, is applied to further enhance the reliability of the detected keypoints.

2.1. Density-Aware Normal Inner Product

In point cloud registration, the distribution of points located at the edges of the point cloud’s contours and in areas of missing data often deviates from the actual conditions due to limitations imposed by the sensor’s acquisition perspective and reflected light, as shown in Figure 1. Local features constructed based on this anomalous distribution, such as normal vectors, cannot accurately represent the true characteristics of these points. Therefore, we propose the Density-Aware Normal Inner Product keypoint detection method.
In keypoint detection, the neighborhood radius r is often manually set, which may require multiple adjustments for different point clouds. To overcome this limitation, we construct the detection response value of a point based on the number of nearest neighbors k in the point cloud. For a point $p_i$ in the point cloud $P$, let the $k$ nearest neighbors be denoted as $\{p_{ij}\}_{j=1}^{k}$, with the distance between the neighbor $p_{ij}$ and $p_i$ being $d_{ij}$. We define edge points based on density awareness as:
$$P = \{p_i\}_{1}^{N}, \qquad \begin{cases} p_i \text{ is an edge point}, & \text{if } d_{ik} > t_i \\ p_i \text{ is a non-edge point}, & \text{if } d_{ik} \le t_i \end{cases} \tag{1}$$
where $d_{ik}$ represents the distance between $p_i$ and its $k$-th nearest neighbor $p_{ik}$, and $t_i$ is the local dynamic threshold, which dynamically reflects the density distribution of the area surrounding $p_i$:
$$t_i = \mu_i + \alpha\sigma_i \tag{2}$$
$$\mu_i = \frac{1}{m}\sum_{j=1}^{m} d_{jk}, \qquad \sigma_i = \sqrt{\frac{1}{m}\sum_{j=1}^{m}\left(d_{jk} - \mu_i\right)^{2}} \tag{3}$$
where $\mu_i$ is the local mean, $\sigma_i$ is the local standard deviation, $\alpha$ is the tolerance parameter for noise, $m$ is the size of the local neighborhood, and $d_{jk}$ represents the distance from the neighbor $p_{ij}$ of $p_i$ to its own $k$-th nearest neighbor $p_{jk}$.
k is the number of neighboring points and can be adjusted based on the density of the point cloud. When the point cloud density is low, k can be set between 10 and 20. Conversely, for high-density point clouds or cases with significant missing data, noise, or edge anomalies, k can be set between 20 and 40. The value of m should ensure that the neighborhood covers local density variations while avoiding the inclusion of irrelevant regions. It is recommended to set m between 0.5k and k, rounded to an integer. The tolerance coefficient α can be adjusted according to the noise level. For low-noise scenarios, α can be set between 0.5 and 1, while for high-noise scenarios, α can be set between 1.5 and 2.
The core of density-aware edge point detection lies in comparing the distance dik from point pi to its k-th nearest neighbor with a locally adaptive threshold ti. In regions with uniform density, dik is close to the threshold ti, and the point is classified as a non-edge point. In contrast, at density-varying edge regions, dik increases significantly due to the absence of neighboring points and exceeds the threshold ti, enabling precise detection. Additionally, dynamically adjusting the threshold ti using the neighborhood standard deviation σi effectively suppresses noise interference and enhances the algorithm’s robustness.
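To make the edge rule concrete, the following Python sketch (our own illustration, not code released with the paper) implements Equations (1)–(3) with a SciPy k-d tree; the function name and default parameter values are assumptions chosen for readability.

```python
import numpy as np
from scipy.spatial import cKDTree

def detect_edge_points(points, k=20, m=None, alpha=1.0):
    """Density-aware edge detection, Eqs. (1)-(3).

    points: (N, 3) array; k: neighborhood size; m: local scale size
    (defaults to k); alpha: noise tolerance coefficient.
    Returns a boolean mask that is True where d_ik > t_i (edge point).
    """
    tree = cKDTree(points)
    # Query k + 1 neighbors because the first hit is the point itself.
    dists, idx = tree.query(points, k=k + 1)
    d_k = dists[:, -1]                  # d_ik: distance to the k-th neighbor
    m = k if m is None else m
    # d_jk for each of the m nearest neighbors of every point (Eq. (3)).
    neighbor_dk = d_k[idx[:, 1:m + 1]]  # shape (N, m)
    mu = neighbor_dk.mean(axis=1)       # local mean mu_i
    sigma = neighbor_dk.std(axis=1)     # local standard deviation sigma_i
    t = mu + alpha * sigma              # local dynamic threshold t_i (Eq. (2))
    return d_k > t                      # Eq. (1)
```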
For non-edge points, we use the inner product of local normal vectors as the response value for keypoint detection. Let the normal vector of pi be ni, and the normal vector of the neighbor pij be nij. We define the response value Ri for Density-Aware Normal Inner Product as:
$$R_i = \frac{1}{k}\sum_{j=1}^{k} n_i \cdot n_{ij} \tag{4}$$
$$n_i \cdot n_{ij} = \left\| n_i \right\| \left\| n_{ij} \right\| \cos\left\langle n_i, n_{ij} \right\rangle \tag{5}$$
The value of Ri lies in the range [0, 1]. When pi is at a protruding or recessed position, the angle between ni and nij becomes larger, resulting in a smaller value of Ri, as shown in Figure 2a. Conversely, when pi is situated on a flatter surface, the angle between ni and nij is smaller, leading to a larger value of Ri, as illustrated in Figure 2b.
After obtaining the response value Ri of the non-edge point pi in the point cloud, with its neighboring points denoted as pij and their response values as Rij, the keypoint detection condition is defined as:
$$t_{R_i} = \min\left( \frac{1}{m_1}\sum_{j=1}^{m_1} R_{ij},\; \frac{1}{m_2}\sum_{j=1}^{m_2} R_{ij},\; \frac{1}{m_3}\sum_{j=1}^{m_3} R_{ij} \right) \tag{6}$$
$$\begin{cases} p_i \text{ is a keypoint}, & \text{if } R_i < t_{R_i} \\ p_i \text{ is a non-keypoint}, & \text{if } R_i \ge t_{R_i} \end{cases} \tag{7}$$
where $t_{R_i}$ is the local multi-scale detection threshold, and $m_1$, $m_2$, $m_3$ represent the local scale sizes, with values of $k/3$, $2k/3$, and $k$, respectively, rounded up to the nearest integer.
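The response computation and multi-scale test of Equations (4)–(7) can be sketched in the same style (again an illustrative implementation with assumed function names; it presumes unit-length, consistently oriented normals so that Ri falls in [0, 1] as stated above):

```python
import numpy as np
from scipy.spatial import cKDTree

def danip_response(points, normals, k=20):
    """R_i of Eq. (4): mean inner product between a point's normal and
    the normals of its k nearest neighbors (unit normals assumed)."""
    _, idx = cKDTree(points).query(points, k=k + 1)
    nbr = idx[:, 1:]                                   # drop the point itself
    dots = np.einsum('nd,nkd->nk', normals, normals[nbr])
    return dots.mean(axis=1), nbr

def danip_candidates(points, normals, edge_mask, k=20):
    """Candidate keypoints via the multi-scale threshold of Eqs. (6)-(7)."""
    R, nbr = danip_response(points, normals, k)
    scales = [int(np.ceil(k / 3)), int(np.ceil(2 * k / 3)), k]
    # t_Ri: minimum over the three scales of the mean neighbor response.
    t_R = np.min([R[nbr[:, :m]].mean(axis=1) for m in scales], axis=0)
    return (R < t_R) & ~edge_mask                      # Eq. (7), non-edge only
```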

2.2. Non-Maximum Suppression

In order to enhance the accuracy and robustness of keypoint detection, this study also incorporates non-maximum suppression (NMS) to effectively eliminate redundant keypoints, thereby improving the precision of the detection results.
For a point $p_i$ in the point cloud $P$, the nearest $k$ neighboring points are $\{p_{ij}\}_{j=1}^{k}$, with the distance between $p_i$ and $p_{ij}$ denoted as $d_{ij}$. The covariance matrix for $p_i$ and its neighboring points $p_{ij}$ is established as:
$$\mathrm{cov}(p_i) = \frac{\sum_{j=1}^{k} W_{ij} \left( p_i - p_{ij} \right) \left( p_i - p_{ij} \right)^{T}}{\sum_{j=1}^{k} W_{ij}} \tag{8}$$
$$W_{ij} = \frac{1}{d_{ij}} = \frac{1}{\left\| p_i - p_{ij} \right\|} \tag{9}$$
where $W_{ij}$ represents the weight. All eigenvalues $\lambda_{i1}, \lambda_{i2}, \lambda_{i3}$ ($\lambda_{i1} \ge \lambda_{i2} \ge \lambda_{i3}$) of the matrix $\mathrm{cov}(p_i)$ are calculated. The magnitude of these eigenvalues reflects the variance of the data along the corresponding eigenvector directions: larger eigenvalues indicate that the point cloud is more widely distributed in that direction. Therefore, we define the response value $S_i$ for point $p_i$ based on non-maximum suppression as follows:
$$S_i = \begin{cases} 1, & \text{if } d_{ik} > t_i \\ s_i, & \text{if } d_{ik} \le t_i \end{cases} \tag{10}$$
$$s_i = \frac{\lambda_{i1}}{\lambda_{i2}} \tag{11}$$
where $d_{ik}$ represents the distance between $p_i$ and the neighboring point $p_{ik}$, and $t_i$ denotes the local dynamic threshold determined by Equation (2). $s_i$ reflects the degree of variation of the local point cloud around $p_i$ in the main direction. A larger $s_i$ indicates a more pronounced variation in the main direction around $p_i$, suggesting that the local point cloud is more likely to exhibit linear or elongated characteristics, making $p_i$ more likely to be detected as a keypoint. Unlike method [21], which employs $s_i$ as the output response for keypoint detection, we innovatively redefine it as a modulation parameter in the NMS process, allowing $s_i$ to adaptively suppress spurious keypoints. Therefore, for any keypoint $p_i$, the NMS response value $S_i$ and the NMS response values $\{S_{ij}\}_{j=1}^{k}$ of its neighboring points $\{p_{ij}\}_{j=1}^{k}$ are defined. We define the condition based on NMS as:
$$\begin{cases} p_i \text{ is a keypoint}, & \text{if } S_i = \max\left( S_i, \{S_{ij}\}_{j=1}^{k} \right) \\ p_i \text{ is a non-keypoint}, & \text{if } S_i \ne \max\left( S_i, \{S_{ij}\}_{j=1}^{k} \right) \end{cases} \tag{12}$$
where k represents the number of neighbors of point pi. Applying non-maximum suppression to all keypoints can determine the final keypoints.
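An illustrative Python reading of Equations (8)–(12) is given below (our sketch, not the authors' code; it takes si as the eigenvalue ratio of Equation (11), which is our interpretation of the formula, and keeps a candidate only if its response is the neighborhood maximum):

```python
import numpy as np
from scipy.spatial import cKDTree

def nms_keypoints(points, candidate_mask, edge_mask, k=20):
    """Non-maximum suppression over candidate keypoints, Eqs. (8)-(12)."""
    dists, idx = cKDTree(points).query(points, k=k + 1)
    S = np.ones(len(points))                     # S_i = 1 for edge points
    for i in np.where(~edge_mask)[0]:
        nbrs = idx[i, 1:]
        diff = points[i] - points[nbrs]          # p_i - p_ij
        w = 1.0 / np.maximum(dists[i, 1:], 1e-12)          # W_ij (Eq. (9))
        cov = (w[:, None, None] * diff[:, :, None] * diff[:, None, :]).sum(0)
        cov /= w.sum()                           # weighted covariance (Eq. (8))
        lam = np.sort(np.linalg.eigvalsh(cov))[::-1]       # descending order
        S[i] = lam[0] / max(lam[1], 1e-12)       # s_i (Eq. (11))
    keep = candidate_mask.copy()
    for i in np.where(candidate_mask)[0]:
        if S[i] < S[idx[i, 1:]].max():           # not the local maximum
            keep[i] = False                      # Eq. (12)
    return keep
```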
The pseudocode of the DANIP algorithm is shown in Algorithm 1 below.
Algorithm 1. DANIP Algorithm
Input: point cloud P and number of neighboring points k
Output: keypoints KP
1. Calculate the local dynamic threshold ti based on Equations (2) and (3);
2. Use the local dynamic threshold ti to filter out edge outliers according to Equation (1);
3. Compute the normal vectors nP of the point cloud P;
4. Calculate the response values Ri and the multi-scale detection threshold tRi based on Equations (4)–(6);
5. For non-edge points, determine the candidate keypoints using Equation (7);
6. Construct the local neighborhood covariance matrix using Equations (8) and (9);
7. Compute the NMS response values Si based on Equations (10) and (11);
8. Perform non-maximum suppression on the candidate keypoints based on Equation (12) to obtain the final keypoints KP.
Compared to existing keypoint detection methods, DANIP first introduces an edge point removal mechanism, effectively eliminating the interference of edge outliers in the keypoint detection results. Secondly, in terms of detection range, DANIP adopts a dynamic multi-scale neighborhood strategy, selecting keypoints only when a point shows significant prominence across multiple scales. Finally, DANIP further refines the selection of keypoints with significant response values through local maximum suppression within the neighborhood range.

3. Coarse-to-Fine Registration Using DANIP Keypoints and ICP

Point cloud registration is a critical task in computer vision and 3D reconstruction, aiming to align point cloud data from different viewpoints or time instances to generate a unified 3D model. This section introduces a novel point cloud registration algorithm based on DANIP keypoints and the ICP method. The proposed algorithm follows a coarse-to-fine strategy, initially achieving a coarse alignment through efficient keypoint detection and matching, and then refining the alignment using ICP. This design not only prevents ICP from converging to a local optimum but also accelerates the registration process while enhancing the algorithm’s robustness to noise and outliers. A flowchart of the algorithm is shown in Figure 3, which includes several key steps: data preprocessing, keypoint detection, feature description, feature matching, robust estimation, and fine registration.

3.1. Coarse Registration Based on DANIP Keypoints

3.1.1. Data Preprocessing

Due to the influence of environmental factors, the 3D point cloud data obtained by sensors during the acquisition process often contains redundant information. Therefore, data preprocessing is usually required before point cloud registration. Preprocessing typically includes filtering and downsampling. The purpose of filtering is to remove noise and outliers, while downsampling aims to reduce the data size and improve the computational efficiency of subsequent processing. In our method, we utilize voxel filtering for downsampling, a widely used technique.
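As an illustration of this step, the snippet below performs voxel downsampling and normal estimation with the Open3D library (one common implementation of the voxel filter; the file name and parameter values are placeholder assumptions):

```python
import open3d as o3d

# Load a raw scan; "fitting_scan.pcd" is a hypothetical file name.
pcd = o3d.io.read_point_cloud("fitting_scan.pcd")

# Voxel-grid downsampling; the 2 mm voxel size assumes coordinates in meters.
down = pcd.voxel_down_sample(voxel_size=0.002)

# Estimate normals on the downsampled cloud; both DANIP and FPFH need them.
down.estimate_normals(
    search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30))
```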

3.1.2. Keypoint Detection

Keypoint detection is a crucial step in feature-based coarse registration methods and plays a vital role in determining the accuracy of the coarse registration. In this paper, we use the DANIP method, as introduced in Section 2, to detect keypoints in the pipe fitting point cloud.

3.1.3. Feature Description

Three-dimensional point cloud feature descriptors are used to extract and represent the local geometric features within the point cloud, enabling effective comparison and matching of keypoints in point cloud registration tasks. In this study, we employ the commonly used local feature descriptor FPFH [15] to represent the local features of the pipe fitting point cloud.
The core idea of the FPFH descriptor is to utilize a local coordinate system to describe the local features of a point. The definition of the local coordinate system is shown in Figure 4. Let pi be the detected keypoint, and pij be the j-th neighboring point of pi. ni and nij represent the normal vectors of pi and pij, respectively. The local coordinate system is defined as follows:
$$u = n_i, \qquad v = \frac{p_{ij} - p_i}{\left\| p_{ij} - p_i \right\|} \times u, \qquad w = u \times v \tag{13}$$
Based on the defined local coordinate system, the FPFH descriptor employs the following parameters to represent the positional information between pi and pij.
$$\alpha = v \cdot n_{ij}, \qquad \theta = \arctan\left( w \cdot n_{ij},\; u \cdot n_{ij} \right), \qquad \phi = \frac{u \cdot \left( p_{ij} - p_i \right)}{\left\| p_{ij} - p_i \right\|} \tag{14}$$
The values of (α, θ, φ) for all neighboring points within a radius r around point pi are computed. A histogram with 11 bins is generated for all (α, θ, φ), and the statistical result is denoted as SPFH(pi). Therefore, FPFH(pi) is defined as:
$$FPFH(p_i) = SPFH(p_i) + \frac{1}{k}\sum_{j=1}^{k} \frac{1}{\omega_j} SPFH(p_{ij}), \qquad \omega_j = \left\| p_{ij} - p_i \right\| \tag{15}$$
where k is the number of neighboring points of pi. Figure 5 shows the calculation range of FPFH(pi).
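For reference, a sketch using Open3D's built-in FPFH implementation (one possible realization rather than the exact code behind the paper; the search radius is a tuning choice):

```python
import numpy as np
import open3d as o3d

def compute_fpfh(pcd, radius=0.02):
    """33-dimensional FPFH descriptors for a cloud with normals set.

    Returns an (N, 33) NumPy array, one descriptor per point; descriptors
    at the DANIP keypoints can then be selected by index.
    """
    feature = o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=radius, max_nn=100))
    return np.asarray(feature.data).T   # Open3D stores features as (33, N)
```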

3.1.4. Feature Matching

Feature matching is employed to establish correspondences between similar keypoints across different point cloud datasets.
Given the source point cloud Q, the target point cloud P, and their corresponding keypoint sets KQ and KP, the features of keypoints qj and pi are represented by FPFH(qj) and FPFH(pi), respectively. In the feature space, if the distance between FPFH(pi) and FPFH(qj) is minimal, then pi and qj are considered a matching point pair, denoted as ci.
$$K_Q = \{q_j\}_{1}^{M}, \qquad K_P = \{p_i\}_{1}^{N}, \qquad c_i = (p_i, q_j) \tag{16}$$
For each $FPFH(p_i)$ in $\{FPFH(p_i)\}_{1}^{N}$, the nearest neighbor among $\{FPFH(q_j)\}_{1}^{M}$ is identified, thereby establishing the correspondence set $C$ between the point clouds.
$$C = \{c_i\}_{1}^{N} \tag{17}$$
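A minimal sketch of this nearest-neighbor search in feature space, assuming the (N, 33) and (M, 33) descriptor arrays have already been extracted at the detected keypoints (names are ours):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_fpfh(feat_p, feat_q):
    """Correspondences c_i = (p_i, q_j) of Eqs. (16)-(17).

    feat_p: (N, 33) target keypoint descriptors; feat_q: (M, 33) source
    keypoint descriptors. Pairs each target keypoint i with the source
    keypoint j whose FPFH is nearest in feature space.
    """
    _, j = cKDTree(feat_q).query(feat_p, k=1)
    return np.stack([np.arange(len(feat_p)), j], axis=1)
```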

3.1.5. Robust Estimation

Robust estimation aims to accurately estimate the pose parameters of a target in the presence of noise, outliers, or other disturbances. In this paper, Maximum Likelihood Estimation Sample Consensus (MLESAC) [30] is employed to estimate the rotation and translation parameters for coarse registration. The objective of MLESAC is to maximize the likelihood function, and the objective function can be expressed as:
$$T^{*} = \arg\max_{T} \sum_{i=1}^{N} \log\left( P_I + P_O \right) \tag{18}$$
$$P_I = \varepsilon\,\frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{\left\| p_i - Tq_j \right\|^{2}}{2\sigma^{2}} \right), \qquad P_O = \left( 1 - \varepsilon \right) \frac{1}{\nu}$$
where T represents the model parameters, ε is the inlier confidence rate, pi and qj are a pair of corresponding keypoints, σ is the standard deviation of the noise, and ν denotes the constant representing the distribution range of outliers.
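A simplified, self-contained sketch of this estimator is shown below: it samples minimal three-point sets, solves the rigid transform in closed form (the Kabsch solution), and scores each hypothesis with the mixture likelihood of Equation (18). The iteration count and the σ, ε, ν defaults are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rigid_from_corr(P, Q):
    """Least-squares rigid transform (R, t) with R @ q + t ≈ p (Kabsch)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (Q - cq).T @ (P - cp)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # correct an improper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cp - R @ cq

def mlesac(P, Q, iters=500, sigma=0.005, eps=0.5, nu=1.0, seed=0):
    """MLESAC over matched keypoints: rows of P (target) and Q (source)
    are the correspondences c_i. Returns the (R, t) maximizing Eq. (18)."""
    rng = np.random.default_rng(seed)
    best, best_ll = None, -np.inf
    gain = eps / (np.sqrt(2.0 * np.pi) * sigma)
    for _ in range(iters):
        s = rng.choice(len(P), size=3, replace=False)   # minimal sample
        R, t = rigid_from_corr(P[s], Q[s])
        r2 = ((P - (Q @ R.T + t)) ** 2).sum(axis=1)     # ||p_i - T q_j||^2
        ll = np.log(gain * np.exp(-r2 / (2 * sigma**2)) + (1 - eps) / nu).sum()
        if ll > best_ll:
            best_ll, best = ll, (R, t)
    return best
```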

3.2. Fine Registration

The ICP algorithm is a widely used method for point cloud registration. It computes the rotation and translation parameters T by minimizing the distance between the registered point clouds P and Q.
$$P = \{p_i\}_{1}^{N}, \qquad Q = \{q_j\}_{1}^{M} \tag{19}$$
The objective function for the ICP algorithm is defined as follows:
$$\min_{T} \sum_{i=1}^{N} \left\| Tq_j - p_i \right\| \tag{20}$$
where pi represents the point in the point cloud P that is closest to the transformed point qj after applying the rotation and translation. The method presented in this paper first uses DANIP keypoints to establish the correspondence set and estimate the rotation and translation parameters T. It then resolves the problems of local optimality and low efficiency in the ICP algorithm by providing a good initial estimate of the transformation parameters T.
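A sketch of this refinement using Open3D's point-to-point ICP, seeded with the coarse transform from Section 3.1 (the correspondence-distance threshold is an assumed tuning parameter):

```python
import open3d as o3d

def refine_with_icp(source, target, coarse_T, threshold=0.01):
    """Point-to-point ICP initialized with the coarse DANIP-based estimate.

    source, target: open3d.geometry.PointCloud; coarse_T: 4x4 initial
    transform from coarse registration. Returns the refined 4x4 transform.
    """
    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, coarse_T,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation
```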

4. Experimental Evaluation

In this section, we systematically evaluate the performance of the proposed DANIP method across multiple aspects. First, we conduct a comprehensive performance comparison of DANIP with existing mainstream keypoint detection algorithms, including SIFT 3D [25], Harris 3D [24], ISS [21], and NVDP [12], using four publicly available datasets. We then investigate the impact of different keypoint detection algorithms on the accuracy of coarse point cloud registration. Building on this, we focus on evaluating the improvements in the performance of the traditional ICP algorithm by integrating DANIP with a coarse-to-fine registration strategy. Finally, to verify the effectiveness of the proposed method in practical industrial applications, we use real pipe fitting point cloud data acquired by a structured light camera to test and analyze the registration performance of the DANIP-ICP combined coarse-to-fine registration algorithm under data missing conditions.

4.1. Keypoint Detection Performance

To evaluate the performance of the DANIP algorithm compared to existing mainstream keypoint detection methods, we conducted comparative experiments on four representative publicly available datasets: Stanford [31], Kinect [32], Queen [33], and ASL-LRD [34]. Table 1 provides a detailed description of the specific characteristics of these datasets. These datasets, collected using different types of sensors, exhibit significant differences in resolution, noise levels, and data sparsity, enabling an effective validation of the algorithm’s adaptability and robustness under varying scene conditions. Figure 6 presents some of the point cloud samples used in the experiment, visually highlighting the diversity of the datasets.
For evaluation metrics, we used Recall, Precision, Time, and F1 score to assess the performance. For a keypoint qj on the source point cloud Q and a keypoint pi on the target point cloud P, the criteria for pi and qj to be considered a correctly matched keypoint pair are:
$$\left\| Tq_j - p_i \right\| \le t \tag{21}$$
$$\mathrm{Recall} = \frac{\text{count of correctly matched keypoints}}{\text{count of corresponding keypoints}} \tag{22}$$
$$\mathrm{Precision} = \frac{\text{count of correctly matched keypoints}}{\text{total number of matched keypoints}} \tag{23}$$
where T represents the ground-truth registration transformation and t is the distance threshold, which is varied to obtain the curves. The experimental results are shown in Figure 7, and the corresponding F1 scores and runtimes are presented in Table 2.
As illustrated in Figure 7a,b, the performance of DANIP on the Stanford and Kinect datasets is second only to the SIFT 3D method. Notably, the results in Figure 7c,d demonstrate that DANIP achieves optimal performance on the Queen and ASL-LRD datasets. Quantitative analysis in Table 2 further confirms that DANIP attains higher F1 scores on most datasets, particularly excelling on the ASL-LRD dataset, which suffers from severe data missing issues. Moreover, compared to SIFT 3D, DANIP exhibits significant advantages in time efficiency, indicating that the method achieves a well-balanced trade-off between computational complexity and detection accuracy.
The experimental results reveal that DANIP demonstrates excellent performance in 3D keypoint detection tasks. Although its time efficiency is slightly lower than that of ISS and Harris 3D on some datasets, DANIP shows clear advantages in F1 scores on most datasets, particularly exhibiting stronger robustness when processing low-resolution or noisy data. This characteristic endows DANIP with significant practical value in real-world applications, especially in scenarios that require a balance between computational efficiency and detection accuracy.

4.2. Coarse Registration Comparison

To evaluate the performance of the proposed keypoint detection algorithm, DANIP, in the coarse registration task, we compared the coarse registration method based on the DANIP algorithm with several other commonly used algorithms, including ISS, Harris 3D, SIFT 3D, SUSAN [35], and NVDP, across different datasets. To quantitatively assess registration accuracy, the root mean square error (RMSE), mean reprojection error (MRE), Bayesian information criterion (BIC), and runtime were used as evaluation metrics.
$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} \left\| Tq_j - p_i \right\|^{2}} \tag{24}$$
$$\mathrm{MRE} = \frac{1}{N}\sum_{i=1}^{N} \left\| Tq_j - p_i \right\| \tag{25}$$
$$\mathrm{BIC} = k\ln(N) - 2\ln(L) \tag{26}$$
where T represents the transformation parameters estimated during coarse registration, N is the number of points in the source point cloud, qj is an arbitrary point in the source point cloud, and pi is the closest point in the target point cloud to the transformed qj. k and L are the number of parameters and maximum likelihood value of the point cloud transformation model, respectively.
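For concreteness, the two distance-based metrics can be computed as in this sketch (our illustration; residuals are nearest-neighbor distances after applying the estimated transform, per the definitions above):

```python
import numpy as np
from scipy.spatial import cKDTree

def registration_errors(source, target, T):
    """RMSE (Eq. (24)) and MRE (Eq. (25)) of an estimated transform T.

    source: (N, 3) source cloud; target: (M, 3) target cloud; T: 4x4
    homogeneous transform mapping source points into the target frame.
    """
    moved = source @ T[:3, :3].T + T[:3, 3]
    d, _ = cKDTree(target).query(moved, k=1)   # nearest-target residuals
    return np.sqrt(np.mean(d ** 2)), np.mean(d)
```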
Considering the uncertainty of a single registration experiment, we conducted 100 trials to ensure the reliability of the results. Table 3 presents the experimental results, listing the average values of each evaluation metric.
As illustrated in Table 3, from the perspective of RMSE and MRE, the DANIP algorithm demonstrates superior performance across all datasets. Specifically, on the Stanford dataset, DANIP achieves an RMSE of 0.0022 and an MRE of 0.000855, slightly outperforming the other algorithms. On the Kinect and Queen datasets, DANIP performs comparably to SIFT 3D, with RMSE values of 0.0047 and 0.0181, respectively. Notably, on the ASL-LRD dataset, DANIP achieves an RMSE of 0.1288 and an MRE of 0.065782, significantly surpassing the other algorithms, which indicates that DANIP possesses higher registration accuracy when dealing with complex scenes. The BIC results likewise demonstrate the efficiency of the DANIP algorithm: in comparison to the other algorithms, DANIP attains superior BIC values on each dataset, further validating its stability and accuracy in point cloud registration tasks.
Furthermore, in terms of computational efficiency, the DANIP algorithm exhibits superior performance on most datasets. For instance, on the Stanford dataset, DANIP's computational time is 2.44 s, which is markedly lower than the 4.79 s of SIFT 3D and the 4.32 s of Harris 3D. Similarly, on the Kinect and Queen datasets, DANIP's computational times are 1.26 s and 1.21 s, respectively, again significantly lower than those of the other algorithms. This suggests that DANIP not only excels in registration accuracy but also demonstrates remarkable computational efficiency.
To provide a more intuitive comparison of the coarse registration performance across different datasets, we have summarized the RMSE of 100 registration results in Table 4 and plotted the 95% confidence interval bar chart, as shown in Figure 8.
In Table 4, the stability and superiority of the DANIP algorithm across all datasets can be intuitively observed. This indicates that it maintains high accuracy when handling complex data. The confidence intervals in Figure 8 also clearly reflect the stability of DANIP, with the error ranges being small across all datasets, further validating its effectiveness in point cloud registration tasks.
The experimental results indicate that the DANIP algorithm exhibits significant advantages in both registration accuracy and computational efficiency, particularly when handling complex scenes and large-scale data, where its performance is notably superior to that of other algorithms.

4.3. Coarse-to-Fine Registration Comparison

To validate the effectiveness of the proposed coarse-to-fine point cloud registration method based on DANIP and ICP, this study conducted experiments on four publicly available datasets and performed a systematic comparison with six common registration methods, including the traditional ICP algorithm, SIFT 3D + ICP, and Harris 3D + ICP. To comprehensively evaluate the performance of each algorithm, we used the root mean square error (RMSE) and runtime as the primary evaluation metrics. The experimental results are presented in Figure 9 and Table 5.
The experimental results demonstrate the effectiveness of the coarse-to-fine point cloud registration algorithm based on DANIP and ICP. By utilizing the more accurate keypoint detection algorithm DANIP, the proposed method provides superior initial values for the ICP algorithm, thereby addressing the inefficiency and high sensitivity to initial values inherent in traditional ICP. As shown in Figure 9, the registration error (RMSE) of all methods tends to converge as the number of iterations increases. Notably, the DANIP + ICP algorithm exhibits the fastest convergence rate in Figure 9a,b,d. In Figure 9c, where the traditional ICP algorithm converges to a local minimum, the convergence error of the DANIP + ICP algorithm is significantly smaller than that of ICP.
Table 5 presents a comparison of runtime performance across the different datasets. The DANIP + ICP algorithm achieves the shortest runtime among all evaluated methods. Specifically, compared to the traditional ICP algorithm, the proposed method reduces the runtime by 66.93%, 78.01%, 75.48%, and 23.69% on the Stanford, Kinect, Queen, and ASL-LRD datasets, respectively. These results indicate that the coarse-to-fine registration algorithm based on DANIP + ICP not only accelerates convergence but also effectively avoids local minima, significantly improving computational efficiency.
The superior performance of the proposed algorithm can be attributed to the robustness of the DANIP algorithm in handling incomplete data, providing more reliable keypoints for initial alignment. Furthermore, DANIP generates more precise initial values, which contribute to faster and more accurate convergence during subsequent ICP refinement. Finally, the complementarity between the coarse and fine registration stages ensures global consistency and accuracy in the final registration result. The experimental results suggest that combining the advanced keypoint detection algorithm DANIP with the traditional ICP framework can greatly enhance point cloud registration performance, particularly in challenging scenarios involving incomplete or noisy data.

4.4. Pipe Fitting Registration Performance

To evaluate the effectiveness of the proposed coarse-to-fine point cloud registration method based on DANIP and ICP in practical industrial scenarios, we compared it with several mainstream registration algorithms, including ICP, NDT [36], LM-ICP [37], G-ICP, and P-ICP [38]. To objectively assess registration accuracy, the root mean square error (RMSE) was used as the quantitative evaluation metric.
The experimental data come from a self-collected industrial pipe fitting point cloud dataset. Common industrial pipe fittings and a portion of the point clouds involved in the experiment are shown in Figure 10. The experimental equipment is shown in Figure 11. Data acquisition was performed using the COGNEX 3D-A5005 depth camera (Cognex Corp., Natick, MA, USA) along with its accompanying hardware and software system, capturing the 3D point cloud data of the fittings through multi-view scanning. The schematic diagram of multi-view scanning and registration is shown in Figure 12. In the registration experiment design, an odd-even index grouping strategy was adopted, with the odd-indexed point clouds serving as the targets and the even-indexed point clouds as the sources. To ensure data quality, all point clouds underwent a standardized preprocessing workflow, including pass-through filtering and voxel filtering (1/4 downsampling). Considering the uncertainty of a single registration experiment, we conducted 100 trials to ensure the reliability of the results. The registration results are shown in Table 6 and Figure 13: Table 6 records the registration errors of each algorithm under the same experimental conditions, while Figure 13 displays the registration results of the proposed algorithm.
As shown in Figure 10c–e, the high reflectivity of the metallic surface of the pipe fittings leads to data loss in the point cloud captured by the depth camera. This data loss increases the complexity and difficulty of the point cloud registration process. Utilizing the DANIP algorithm, which is more robust to data loss, for coarse registration to provide better initial values effectively improves the registration accuracy.
As indicated by the experimental data in Table 6, the proposed coarse-to-fine point cloud registration algorithm based on DANIP and ICP demonstrates outstanding performance in terms of registration accuracy, with the root mean square error (RMSE) consistently lower than that of the comparison methods, yielding the best registration results. As seen in Figure 13, even in the presence of significant data loss in both the source and target point clouds, the coarse-to-fine registration algorithm based on DANIP and ICP can still achieve precise registration.
The experimental results confirm that the coarse-to-fine registration algorithm based on DANIP and ICP effectively overcomes the limitations of traditional methods when handling data loss caused by surface reflectivity. It significantly improves registration accuracy. This not only validates the robustness and reliability of the algorithm in real-world pipe fitting point cloud registration scenarios but also provides a new solution for future research and practical applications.

5. Conclusions and Future Work

This paper introduces a novel 3D point cloud keypoint detection method—Density-Aware Normal Inner Product Keypoint Detection (DANIP). DANIP incorporates a density-based edge point removal mechanism and utilizes adaptive neighborhood normal vector inner products for keypoint detection. The method demonstrates superior performance in terms of recall rate, precision, and computational efficiency. Furthermore, a coarse-to-fine point cloud registration method combining DANIP and ICP is proposed. This method leverages the robust performance of DANIP in handling data missingness for coarse registration, effectively addressing the issues of ICP’s susceptibility to local minima and low efficiency. The main research conclusions are as follows:
  • A novel keypoint detection method, DANIP, is proposed. Experimental results in keypoint detection show that, compared to other classical methods, DANIP achieves higher detection accuracy and computational efficiency on public datasets such as Stanford, Kinect, Queen, and ASL-LRD.
  • A coarse-to-fine registration method combining DANIP and ICP is proposed. This method effectively avoids the local minima problem in the ICP algorithm, significantly improving convergence efficiency and computational performance. Under optimal conditions, the runtime is reduced by 66.93%, 78.01%, 75.48%, and 23.69% on the Stanford, Kinect, Queen, and ASL-LRD datasets, respectively.
  • Compared to other classical registration algorithms, the coarse-to-fine point cloud registration based on DANIP and ICP achieves higher accuracy even in the presence of severe data loss in multi-view industrial pipe datasets. These findings validate the robustness of the proposed method against data loss caused by reflectivity and highlight its potential in engineering applications.
However, despite the promising results, several limitations persist. While the method performs well on standard datasets, its applicability to more complex or noisy real-world data requires further investigation. Additionally, although the method handles moderate data loss effectively, it still faces challenges when dealing with severe or systematic data loss.
In future research, we plan to further combine improvements in the algorithm with the effects of actual industrial applications, focusing on the following aspects:
Production Efficiency Improvement: By integrating industrial field data, we will assess the impact of the registration algorithm on production efficiency. By optimizing the algorithm’s runtime, we aim to increase the number of parts produced per hour and compare this with traditional methods to analyze its potential industrial value.
Control of Geometric Tolerances and Precision Enhancement: We will conduct in-depth research on how the algorithm ensures the geometric tolerances of machined parts. Additionally, we will explore how precise registration control can achieve higher assembly accuracy and reduce assembly errors.
Reduction in Rework Rates and Quality Improvement: The improvement in algorithm accuracy will directly impact assembly precision, reducing rework rates caused by errors. Future studies will verify the practical effects of the algorithm in reducing rework rates and improving production quality through experimental data.

Author Contributions

Conceptualization, Z.L. and X.Y.; methodology, Z.L.; validation, Z.L. and X.Y.; formal analysis, Z.L.; investigation, Z.L.; resources, X.Y.; data curation, Z.L.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L.; visualization, Z.L.; supervision, X.Y.; project administration, X.Y.; funding acquisition, X.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Jilin Provincial Scientific and Technological Development Program, grant number 20240302033GX.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, C.; Guan, Y.; Wang, X.; Zhou, C.; Xun, Y.; Gui, L. Experimental and numerical studies on heat transfer characteristics of vertical deep-buried U-bend pipe in intermittent heating mode. Geothermics 2019, 79, 14–25.
  2. Hong, S.-P.; Yoon, S.-J.; Kim, D.-J.; Kim, Y.-J.; Huh, N.-S. Enhanced elastic stress solutions for junctions in various pipe bends under internal pressure and combined loading (90° pipe bend, U-bend, double-bend pipe). Int. J. Press. Vessel. Pip. 2024, 212, 105343.
  3. Cao, R.; Ma, D.; Chen, W.; Li, M.; Dai, H.; Wang, L. Multistable dynamic behaviors of cantilevered curved pipes conveying fluid. J. Fluids Struct. 2024, 130, 104196.
  4. Feigang, T.; Kaiyuan, L.; Lifeng, L.; Yang, W. An Intelligent Detection System for Full Crew of Elevator Car. In Proceedings of the 2018 11th International Conference on Intelligent Computation Technology and Automation (ICICTA), Changsha, China, 22–23 September 2018; pp. 183–185.
  5. Tan, F.; Zhai, M.; Zhai, C. Foreign object detection in urban rail transit based on deep differentiation segmentation neural network. Heliyon 2024, 10, e37072.
  6. Liu, C.-S.; Lin, J.-J.; Chen, B.-R. A novel 3D scanning technique for reflective metal surface based on HDR-like image from pseudo exposure image fusion method. Opt. Lasers Eng. 2023, 168, 107688.
  7. Besl, P.J.; McKay, N.D. A method for registration of 3-D shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1992, 14, 239–256.
  8. Segal, A.; Haehnel, D.; Thrun, S. Generalized-ICP. In Robotics: Science and Systems V; University of Washington: Seattle, WA, USA, 2009.
  9. Serafin, J.; Grisetti, G. NICP: Dense normal based point cloud registration. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; pp. 742–749.
  10. Yang, J.; Li, H.; Campbell, D.; Jia, Y. Go-ICP: A Globally Optimal Solution to 3D ICP Point-Set Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 38, 2241–2254.
  11. Lei, J.; Song, J.; Peng, B.; Li, W.; Pan, Z.; Huang, Q. C2FNet: A Coarse-to-Fine Network for Multi-View 3D Point Cloud Generation. IEEE Trans. Image Process. 2022, 31, 6707–6718.
  12. Yue, X.; Liu, Z.; Zhu, J.; Gao, X.; Yang, B.; Tian, Y. Coarse-fine point cloud registration based on local point-pair features and the iterative closest point algorithm. Appl. Intell. 2022, 52, 12569–12583.
  13. Li, Q.; Yan, Y.; Li, W. Coarse-to-fine segmentation of individual street trees from side-view point clouds. Urban For. Urban Green. 2023, 89, 128097.
  14. Liu, Z.; Yue, X.; Zhu, J. SPROSAC: Streamlined progressive sample consensus for coarse–fine point cloud registration. Appl. Intell. 2024, 54, 5117–5135.
  15. Rusu, R.B.; Blodow, N.; Beetz, M. Fast Point Feature Histograms (FPFH) for 3D registration. In Proceedings of the 2009 IEEE International Conference on Robotics and Automation, Kobe, Japan, 12–17 May 2009; pp. 3212–3217.
  16. Zhao, H.; Tang, M.; Ding, H. HoPPF: A novel local surface descriptor for 3D object recognition. Pattern Recognit. 2020, 103, 107272.
  17. Yuan, C.; Lin, J.; Liu, Z.; Wei, H.; Hong, X.; Zhang, F. BTC: A Binary and Triangle Combined Descriptor for 3-D Place Recognition. IEEE Trans. Robot. 2024, 40, 1580–1599.
  18. Barath, D.; Matas, J. Graph-Cut RANSAC. In Proceedings of the 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 6733–6741.
  19. Ivashechkin, M.; Barath, D.; Matas, J. VSAC: Efficient and Accurate Estimator for H and F. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada, 10–17 October 2021; pp. 15223–15232.
  20. Li, J.; Shi, P.; Hu, Q.; Zhang, Y. QGORE: Quadratic-Time Guaranteed Outlier Removal for Point Cloud Registration. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 11136–11151.
  21. Zhong, Y. Intrinsic shape signatures: A shape descriptor for 3D object recognition. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops, Kyoto, Japan, 27 September–4 October 2009; pp. 689–696.
  22. Zaharescu, A.; Boyer, E.; Varanasi, K.; Horaud, R. Surface feature detection and description with applications to mesh matching. In Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 373–380.
  23. Steder, B.; Rusu, R.B.; Konolige, K.; Burgard, W. NARF: 3D Range Image Features for Object Recognition. Available online: http://ais.informatik.uni-freiburg.de/publications/papers/steder10irosws.pdf (accessed on 11 November 2025).
  24. Sipiran, I.; Bustos, B. Harris 3D: A robust extension of the Harris operator for interest point detection on 3D meshes. Vis. Comput. 2011, 27, 963–976.
  25. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4.
  26. Suwajanakorn, S.; Snavely, N.; Tompson, J.; Norouzi, M. Discovery of latent 3D keypoints via end-to-end geometric reasoning. In Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS'18), Red Hook, NY, USA, 3 December 2018; pp. 2063–2074.
  27. Li, J.; Lee, G.H. USIP: Unsupervised Stable Interest Point Detection From 3D Point Clouds. In Proceedings of the 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Republic of Korea, 27 October–2 November 2019; pp. 361–370.
  28. Bai, X.; Luo, Z.; Zhou, L.; Fu, H.; Quan, L.; Tai, C.-L. D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 13–19 June 2020; pp. 6358–6366.
  29. Luo, Z.; Xue, W.; Chae, J.; Fu, G. SKP: Semantic 3D Keypoint Detection for Category-Level Robotic Manipulation. IEEE Robot. Autom. Lett. 2022, 7, 5437–5444.
  30. Tordoff, B.; Murray, D. Guided-MLESAC: Faster image transform estimation by using matching priors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1523–1535.
  31. Curless, B.; Levoy, M. A volumetric method for building complex models from range images. In Proceedings of the 23rd Annual Conference on Computer Graphics and Interactive Techniques, New York, NY, USA, 1 August 1996; pp. 303–312.
  32. Tombari, F.; Salti, S.; Stefano, L.D. Unique signatures of histograms for local surface description. In Proceedings of the European Conference on Computer Vision, Heraklion, Crete, Greece, 5–11 September 2010; pp. 356–369.
  33. Kiforenko, L.; Drost, B.; Tombari, F.; Krüger, N.; Buch, A.G. A performance evaluation of point pair features. Comput. Vis. Image Underst. 2018, 166, 66–80.
  34. Pomerleau, F.; Liu, M.; Colas, F.; Siegwart, R. Challenging data sets for point cloud registration algorithms. Int. J. Robot. Res. 2012, 31, 1705–1711.
  35. Smith, S.M.; Brady, J.M. SUSAN—A New Approach to Low Level Image Processing. Int. J. Comput. Vis. 1997, 23, 45–78.
  36. Biber, P. The Normal Distributions Transform: A New Approach to Laser Scan Matching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 2743–2748.
  37. Fitzgibbon, A.W. Robust registration of 2D and 3D point sets. Image Vis. Comput. 2003, 21, 1145–1153.
  38. Rusinkiewicz, S.; Levoy, M. Efficient variants of the ICP algorithm. In Proceedings of the Third International Conference on 3-D Digital Imaging and Modeling, Quebec City, QC, Canada, 28 May–1 June 2001; pp. 145–152.
Figure 1. Distortion of edge point distribution.
Figure 2. Normal vector diagram. (a) Curved area; (b) Flat area.
Figure 3. Coarse-to-fine registration.
Figure 4. Definition of the local coordinate system.
Figure 5. Calculation range of FPFH(pi).
Figure 6. Visualization of some point clouds involved in the experiment. (a) bun045; (b) PeterRabbit000; (c) im0; (d) Hokuyo_1.
Figure 7. The recall–precision curve (RPC) results for keypoint detection. (a) Stanford bun000&045; (b) Kinect PeterRabbit000&001; (c) Queen im0&2; (d) ASL-LRD stairs Hokuyo_0&1 (1/4 downsample).
Figure 8. The 95% confidence interval bar chart. (a) Stanford; (b) Kinect; (c) Queen; (d) ASL-LRD.
Figure 9. Coarse-to-fine registration results. (a) Stanford bun000&045; (b) Kinect PeterRabbit000&001; (c) Queen im0&2; (d) ASL-LRD stairs Hokuyo_0&1 (1/4 downsample).
Figure 10. Common pipe fittings and point clouds. (a) Elbow; (b) Reducer; (c) Tee; (d) elbow02; (e) reducer01; (f) tee02.
Figure 11. Experimental equipment.
Figure 12. Multi-view scanning and registration.
Figure 13. Registration results of point clouds for pipe fittings (blue represents the target point cloud, green the source point cloud, and orange the aligned point cloud). (a) elbow01&02; (b) reducer01&02; (c) tee01&02.
Table 1. Datasets used in the evaluation.

| No. | Dataset | Acquisition Method | Characteristic | Quality | Degree of Data Loss | Number of Models |
|---|---|---|---|---|---|---|
| 1 | Stanford | Cyberware 3030 MS | Diversity | High | Low | 6 |
| 2 | Kinect | Microsoft Kinect | Low density | Low | Medium | 7 |
| 3 | Queen | Minolta Vivid | Scanning error | Medium | Medium | 5 |
| 4 | ASL-LRD | Hokuyo UTM-30LX | Large size and noise | Medium | High | 8 |
Table 2. F1 score and time of keypoint detection.

| Parameter | Dataset | ISS | Harris 3D | NVDP | SIFT 3D | DANIP |
|---|---|---|---|---|---|---|
| F1 | Stanford | 0.9326 | 0.7415 | 0.8939 | 0.9645 | 0.9512 |
| | Kinect | 0.7729 | 0.8162 | 0.9162 | 0.9443 | 0.9302 |
| | Queen | 0.9821 | 0.9503 | 0.9240 | 1.0000 | 1.0000 |
| | ASL-LRD | 0.7468 | 0.8416 | 0.9771 | 0.9677 | 0.9891 |
| Time (s) | Stanford | 0.7471 | 1.0388 | 0.8571 | 2.8569 | 0.8879 |
| | Kinect | 0.2384 | 0.2201 | 0.1676 | 2.0467 | 0.3374 |
| | Queen | 0.2073 | 0.0941 | 0.1399 | 1.7302 | 0.2001 |
| | ASL-LRD | 0.4775 | 0.3389 | 0.5979 | 3.1702 | 1.0059 |
Table 3. Comparison of parameters for different datasets and methods with 100 repetitions.

| Parameter | Dataset | ISS | Harris 3D | SIFT 3D | SUSAN | NVDP | DANIP |
|---|---|---|---|---|---|---|---|
| RMSE | Stanford | 0.0023 | 0.0025 | 0.0023 | 0.0042 | 0.0024 | 0.0022 |
| | Kinect | 0.0052 | 0.0059 | 0.0045 | 0.0065 | 0.0055 | 0.0047 |
| | Queen | 0.0191 | 0.0190 | 0.0181 | 0.0203 | 0.0197 | 0.0181 |
| | ASL-LRD | 0.2024 | 0.1780 | 0.1334 | 0.1812 | 0.1407 | 0.1288 |
| MRE | Stanford | 0.000947 | 0.001312 | 0.000896 | 0.002344 | 0.001065 | 0.000855 |
| | Kinect | 0.003975 | 0.004772 | 0.002948 | 0.005016 | 0.004028 | 0.003406 |
| | Queen | 0.011691 | 0.010521 | 0.009631 | 0.012468 | 0.012118 | 0.009432 |
| | ASL-LRD | 0.137210 | 0.112940 | 0.072324 | 0.113120 | 0.093307 | 0.065782 |
| BIC | Stanford | 71.1403 | 71.2232 | 71.1121 | 71.4602 | 71.1886 | 71.0851 |
| | Kinect | 64.2029 | 64.3675 | 64.1186 | 64.3946 | 64.3104 | 64.1285 |
| | Queen | 64.0053 | 64.0011 | 63.9651 | 64.1606 | 64.0278 | 63.9607 |
| | ASL-LRD | 79.9788 | 78.0622 | 76.4428 | 78.4243 | 76.6458 | 76.2269 |
| Time (s) | Stanford | 2.92 | 4.32 | 4.79 | 4.33 | 2.93 | 2.44 |
| | Kinect | 2.49 | 2.10 | 4.26 | 3.07 | 1.75 | 1.26 |
| | Queen | 1.44 | 2.35 | 4.10 | 2.24 | 2.09 | 1.21 |
| | ASL-LRD | 3.52 | 4.27 | 5.90 | 3.09 | 3.42 | 3.32 |
Table 4. RMSE summary of registration errors.

| Parameter | Dataset | ISS | Harris 3D | SIFT 3D | SUSAN | NVDP | DANIP |
|---|---|---|---|---|---|---|---|
| Medians | Stanford | 0.00237 | 0.00251 | 0.00228 | 0.00441 | 0.00243 | 0.00224 |
| | Kinect | 0.00518 | 0.00587 | 0.00453 | 0.00663 | 0.00554 | 0.00472 |
| | Queen | 0.01917 | 0.01902 | 0.01816 | 0.02026 | 0.01975 | 0.01798 |
| | ASL-LRD | 0.19853 | 0.17973 | 0.14412 | 0.19213 | 0.14572 | 0.12919 |
| IQR | Stanford | 0.00008 | 0.00028 | 0.00012 | 0.00052 | 0.00007 | 0.00003 |
| | Kinect | 0.000212 | 0.000336 | 0.000164 | 0.000735 | 0.000107 | 0.000131 |
| | Queen | 0.001475 | 0.000821 | 0.000902 | 0.001015 | 0.001242 | 0.000901 |
| | ASL-LRD | 0.002019 | 0.001826 | 0.001263 | 0.001938 | 0.001312 | 0.000553 |
Table 5. Performance comparison of algorithms across different datasets with 100 repetitions.

| Algorithm | Metric | Stanford bun000&045 | Kinect PeterRabbit000&001 | Queen im0&2 | ASL-LRD Hokuyo_0&1 |
|---|---|---|---|---|---|
| ICP | Runtime (s) | 14.3966 | 12.6766 | 9.7429 | 7.1939 |
| SUSAN + ICP | Runtime (s) | 7.9865 | 5.2159 | 3.1650 | 6.1760 |
| | Rate of decline | 44.53% | 58.85% | 67.51% | 14.15% |
| Harris 3D + ICP | Runtime (s) | 7.3926 | 3.7220 | 3.6199 | 6.6802 |
| | Rate of decline | 48.65% | 70.64% | 62.85% | 7.14% |
| NVDP + ICP | Runtime (s) | 5.3924 | 3.1112 | 3.1004 | 5.7802 |
| | Rate of decline | 62.54% | 75.46% | 68.18% | 19.65% |
| ISS + ICP | Runtime (s) | 6.5340 | 4.0449 | 2.7233 | 6.0138 |
| | Rate of decline | 54.61% | 68.09% | 72.05% | 16.40% |
| SIFT 3D + ICP | Runtime (s) | 7.5215 | 5.7851 | 5.3045 | 8.3011 |
| | Rate of decline | 47.76% | 54.36% | 45.56% | N/A |
| DANIP + ICP | Runtime (s) | 4.7604 | 2.7873 | 2.3889 | 5.4896 |
| | Rate of decline | 66.93% | 78.01% | 75.48% | 23.69% |
Table 6. The registration error (RMSE) of commonly used pipe fittings.

| Point Cloud | Number of Points | ICP | LM-ICP | P-ICP | G-ICP | NDT | DANIP-ICP |
|---|---|---|---|---|---|---|---|
| elbow01&02 | 131,071&139,978 | 0.5011 | 0.4951 | 0.5085 | 0.5082 | 0.7039 | 0.4935 |
| elbow03&04 | 127,826&122,986 | 0.3937 | 0.3793 | 0.4996 | 0.5201 | 1.2531 | 0.3778 |
| elbow07&08 | 123,277&115,856 | 0.7988 | 0.7983 | 0.5057 | 0.5118 | 1.3206 | 0.4327 |
| elbow15&16 | 121,585&122,655 | 0.7716 | 0.7719 | 0.8615 | 0.8948 | 2.3735 | 0.7712 |
| reducer01&02 | 84,686&85,298 | 0.2289 | 0.2191 | 0.2409 | 0.3386 | 0.2278 | 0.2151 |
| reducer03&04 | 83,436&89,881 | 0.4864 | 0.4890 | 0.6087 | 0.6603 | 0.4129 | 0.4063 |
| tee01&02 | 153,909&149,825 | 1.3562 | 1.3533 | 1.3998 | 1.4392 | 2.8303 | 1.3433 |