Article

A Unified Denoising Framework for Restoring the LiDAR Point Cloud Geometry of Reflective Targets

1 Institute of Microelectronics, Chinese Academy of Sciences, Beijing 100029, China
2 University of Chinese Academy of Sciences, Beijing 100049, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(7), 3904; https://doi.org/10.3390/app15073904
Submission received: 30 January 2025 / Revised: 14 March 2025 / Accepted: 29 March 2025 / Published: 2 April 2025

Abstract

LiDAR point clouds of reflective targets often contain significant noise, which severely impacts the feature extraction accuracy and performance of object detection algorithms. These challenges present substantial obstacles to point cloud processing and its applications. In this paper, we propose a Unified Denoising Framework (UDF) aimed at removing noise and restoring the geometry of reflective targets. The proposed method consists of three steps: veiling effect denoising using an improved pass-through filter, range anomalies correction through M-estimator Sample Consensus (MSAC) plane fitting and ray projection, and blooming effect denoising based on an adaptive error ellipse. The parameters of the error ellipse are automatically determined using the divergence angle of the laser beam, blooming factors, and the normal vector along the boundary of the point cloud. The proposed method was validated on a self-constructed traffic sign point cloud dataset. The experimental results showed that the method achieved a mean square error (MSE) of 0.15 cm2, a mean city-block distance (MCD) of 0.05 cm, and relative height and width errors of 1.92% and 1.91%, respectively. Compared to five representative algorithms, the proposed method demonstrated superior performance in both denoising accuracy and the restoration of target geometric features.

1. Introduction

In recent years, Light Detection and Ranging (LiDAR) technology has undergone rapid development and become a hotspot in both academic research and industrial applications. By emitting pulsed laser beams, LiDAR can quickly capture the three-dimensional spatial information and intensity of surrounding environments. LiDAR demonstrates remarkable advantages under complex scenarios, adverse weather conditions, and varying lighting environments. Consequently, it has been widely used in various applications, such as autonomous driving [1,2,3,4], digital twin city modeling [5,6,7], agricultural monitoring [8,9,10], and forestry surveying [11,12,13].
However, LiDAR systems encounter several critical challenges due to the divergence of the laser beam and the presence of reflective targets (e.g., traffic signs) in complex environments. First, when the laser scans the edges of targets, the signal received by the detector is a mixture of laser echo signals from both the target and the object behind it. This causes the LiDAR system to inaccurately estimate the distance between them, resulting in noise points known as the veiling effect [14]. Second, reflective targets can induce abnormal range measurements, introducing range-anomalous points and reducing feature points on the target surface. Third, the blooming effect [15,16] generates scattered points around the target, leading to distortions in the geometric shape and dimensions of the target. These challenges significantly degrade the performance of object detection and classification algorithms in autonomous driving applications. Consequently, it is still a critical and challenging task to develop effective point cloud denoising algorithms to restore the geometry of reflective targets.
To address these issues, various point cloud denoising methods have been proposed [17,18], which can be categorized into traditional methods [19,20,21,22,23,24,25,26,27,28,29,30,31,32] and deep learning-based methods [33,34,35,36,37,38]. Traditional point cloud denoising methods can further be categorized into filter-based and optimization-based methods.
Filter-based methods [19,20,21,22,23,24,25,26,27,28,29] generally assume that noise is present in high-frequency components and employ filters that function on point positions or normal vectors. The pass-through filter is a simple and effective algorithm in the PCL (Point Cloud Library) [19]; it removes values that are either inside or outside a specified range along a specific dimension (e.g., X, Y, Z coordinates or intensity attribute). Miknis [20,21] improved the computational efficiency of the pass-through filter by optimizing the vector access logic. The statistical filter leverages statistical analysis to remove outliers that are far from their neighbors based on the relative distances between each point and its neighboring points. This method is particularly effective for point clouds with substantial noise or a high presence of outliers. Nurunnabi [22] developed two robust statistical methods for detecting outliers in 3D point cloud data: one leveraging robust z-scores and the other utilizing a robust Mahalanobis distance approach. By incorporating local neighborhood information, the two methods effectively identify both clustered and uniformly distributed outliers. Carrilho [23] introduced an adaptive approach that utilizes point cloud cell subdivision. Rather than generating a single histogram for the entire dataset, the proposed method applies filtering to smaller segments, where variations in ground elevation can be disregarded. The bilateral filter is a technique inspired by the bilateral filter used in image processing, designed to smooth data while preserving edges and geometric features. The method operates by weighting the influence of neighboring points based on both their spatial proximity and feature similarity, such as intensity or surface normal vectors. Fleishman [24] proposed a fast and effective anisotropic mesh denoising algorithm that filters mesh vertices along the normal direction using local neighborhoods. Inspired by the success of bilateral filtering in image denoising, the method extends its principles to 3D meshes, addressing the unique challenges of transitioning from 2D images to 3D manifolds. Digne [25] developed a method that adapts the bilateral filter from grayscale images to 3D meshes, denoising points by considering both spatial proximity and normal direction. This approach achieves effective noise reduction while preserving sharp edges. Wen [26] conducted experimental evaluations of the effectiveness of the point cloud bilateral filtering algorithm in automatic driving scenes. The radius filter identifies and removes outliers in a point cloud by evaluating the number of neighboring points within a specified radius, retaining points only if their neighborhood density meets a predefined threshold. Duan [27] introduced a noise-reduction approach to filtering point clouds, utilizing an adaptive radius outlier removal filter that incorporates principal component analysis. Szutor [28] developed an algorithm inspired by the ROL (Radius Outlier Filter), designed to detect outliers in large-scale point clouds with a non-exponential time complexity. Luo [29] proposed a multibeam point cloud denoising method that combines radius filtering for edge noise removal with density clustering to restore mistakenly filtered terrain points. However, filter-based denoising algorithms are limited by their reliance on predefined parameters and assumptions, making it challenging to balance noise reduction and feature preservation in complex or diverse point cloud datasets.
Optimization-based denoising methods [30,31,32] address the problem by formulating it as an optimization task. The goal is to produce a denoised point cloud that closely aligns with the input data while satisfying constraints derived from prior knowledge of the underlying geometry and noise distribution. However, achieving satisfactory results using these methods often requires careful parameter tuning, especially for complex point cloud structures.
Deep learning-based techniques overcome these limitations by learning features directly from the data. Depending on the availability of labeled data during model training, they can be classified into supervised [33,34,35] and unsupervised [36,37,38] approaches. Rakotosaona [34] presented a data-driven approach to denoising unordered point clouds, building on the PCPNet framework. The method removes outliers and computes correction vectors to align noisy points with clean surfaces. Luo [38] proposed an algorithm that builds a model to predict a score for each point, representing its significance and helping to differentiate between noise and valid data. Through iterative processing of the initial point cloud, low-score points (noise) are gradually eliminated, while high-score points are preserved. However, supervised approaches typically require large labeled datasets, which are often difficult to obtain. In contrast, unsupervised methods face challenges such as reliance on assumptions about noise characteristics and a limited ability to generalize across diverse LiDAR point cloud datasets.
To remove noise points while preserving features, we developed a Unified Denoising Framework (UDF) for restoring the geometry of reflective targets. The UDF consists of three modules: a veiling effect denoising module, a range anomalies correction module, and a blooming effect denoising module. The performance of the UDF was quantitatively evaluated using a self-constructed traffic sign point cloud dataset. Compared with some traditional and deep learning-based denoising algorithms, the results demonstrated that the proposed framework achieved significant advantages in both denoising performance and geometric feature restoration.
The contributions of this paper can be summarized in the following three aspects:
(1) Unlike existing denoising algorithms that typically address only one type of noise, our method effectively tackles the veiling effect, range anomalies, and the blooming effect within a unified framework, offering a comprehensive solution to the challenges encountered in complex scenarios.
(2) By integrating the MSAC-based plane fitting algorithm with a ray-projection approach, the range-anomalous points are corrected back to the fitted plane. Whereas denoising algorithms based on spatial statistical strategies tend to treat these points as noise and remove them, our method recovers them to the target’s surface, resulting in a denser and more complete point cloud representation.
(3) To the best of our knowledge, our method is the first to utilize the spatial energy distribution of Gaussian laser beams to eliminate noise points caused by the blooming effect. An adaptive error ellipse is used to refine the boundary of the blooming point cloud, and its parameters are adaptively determined by the divergence angle of the laser beam, the target distance, and the normal vector along the boundary. With simple parameter adjustments, the algorithm can be applied to LiDAR models with different divergence angles.
The rest of this paper is organized as follows: Section 2 provides a detailed introduction to the dataset and the proposed denoising framework. Section 3 presents the experiments and results, including experimental setup, evaluation metrics, and experimental results. Section 4 discusses the proposed algorithm, and the conclusions are summarized in Section 5.

2. Materials and Methods

2.1. Dataset

The dataset used in this study was collected using the DJI Livox (Shenzhen, China) Tele-15 LiDAR sensor, which operates at a laser wavelength of 905 nm, with a range precision (1σ @ 20 m) of 2 cm and a laser beam divergence half-angle of 0.12° (vertical) × 0.02° (horizontal). The reflective target used for data acquisition was a square traffic sign with dimensions of 0.6 m × 0.6 m. Due to the non-repetitive scanning characteristics of the Tele-15 sensor, the point cloud density varied across different positions within the sensor’s scanning area. To better simulate the diversity of data encountered in autonomous driving scenarios, we considered several factors during data collection, including variations in lighting conditions, target distance, laser incidence angle, and point cloud density at different target locations. Data were collected at 14 distinct distances under varying lighting conditions. During data acquisition at each distance, the LiDAR sensor was mounted on a tripod, and its orientation was adjusted to change both the position of the target within the sensor’s field of view and the laser incidence angle on the target surface. At each distance, 90 samples were acquired, resulting in 1260 samples in total.
The data acquisition scenario and point cloud are shown in Figure 1. The ground-truth point cloud of the target in each sample was obtained through manual editing and programmatic processing, incorporating prior knowledge of the target’s geometric shape and dimensions.

2.2. Methods

The proposed method uses a unified framework to remove noise points of reflective targets. The workflow consists of three main parts, as illustrated in Figure 2. First, the original LiDAR data are preprocessed to remove points caused by the veiling effect, which usually have low intensity, thereby identifying potential target points. The intensity threshold is determined by locating the center of the peak of the probability density. Second, the MSAC algorithm is applied to extract a plane and classify points as inliers and outliers. Noise points with range anomalies are then corrected back to the target’s surface, followed by the projection of the point cloud onto the YOZ plane and the removal of isolated noise points. Third, boundary extraction is performed using the alpha shape algorithm, followed by point-wise boundary correction based on an adaptive error ellipse to refine the boundary. The parameters of the error ellipse are automatically adjusted according to the target’s distance and the normal vector along the original boundary. Finally, noise points caused by the blooming effect are removed using both the original and the refined boundaries, ensuring an accurate restoration of the target’s geometry.

2.2.1. Veiling Effect Denoising

The original LiDAR point cloud data contain a large number of noise points induced by the veiling effect, usually located near the edges of the target. These noise points share a common characteristic of relatively low reflection intensity. To remove them, we employ an improved intensity-based pass-through filtering algorithm, as illustrated in Figure 3.
The intensity threshold for the pass-through filter is determined by searching for the center of the peak probability density. First, the intensity of all points is statistically analyzed. Then, a local peak detection algorithm [39] is applied to identify the local peaks in the histogram, and the peak with the highest intensity is selected. Subsequently, Gaussian fitting [40] is performed on the data within the neighborhood of this peak to determine the center point of the local distribution. Finally, the intensity of the center point is used as the threshold to remove all points with an intensity below this value, leaving the remaining points as potential target points.
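As an illustration, this threshold search can be sketched in a few lines of Python. The paper's implementation was in MATLAB; `find_peaks`, the bin count, and the fitting window below are assumed implementation choices, not the authors' exact code:

```python
import numpy as np
from scipy.signal import find_peaks
from scipy.optimize import curve_fit

def intensity_threshold(intensity, bins=256, window=10):
    """Estimate the pass-through threshold as the center of the intensity
    histogram peak located at the highest intensity (Section 2.2.1)."""
    hist, edges = np.histogram(intensity, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    peaks, _ = find_peaks(hist)                        # local peak detection
    peak = peaks[np.argmax(centers[peaks])] if len(peaks) else int(np.argmax(hist))
    lo, hi = max(peak - window, 0), min(peak + window + 1, bins)
    gauss = lambda x, amp, mu, sig: amp * np.exp(-(x - mu) ** 2 / (2 * sig ** 2))
    p0 = [hist[peak], centers[peak], centers[1] - centers[0]]
    popt, _ = curve_fit(gauss, centers[lo:hi], hist[lo:hi], p0=p0)
    return popt[1]                                     # center of the fitted local peak

# keep only potential target points:
# mask = intensity >= intensity_threshold(intensity)
```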

2.2.2. Range Anomalies Correction

Random sample consensus (RANSAC) [41] identifies inlier data and fits a model by searching for the minimum cost under a given model and distance threshold. A point whose distance to the model is less than the threshold is classified as an inlier and contributes zero cost; otherwise, it is classified as an outlier and contributes a constant penalty. Because inliers cost nothing, minimizing the total cost is equivalent to selecting the model with the maximum number of inliers. However, when the threshold is set too large, especially for datasets containing distinct outliers, this may lead to a significant increase in fitting errors.
MSAC [42] is an improved variant of the RANSAC algorithm designed to reduce the algorithm’s sensitivity to the selection of the distance threshold T. The modified cost function is shown in Equation (1): when the squared distance e² from a point to the model is less than T², the point is classified as an inlier and contributes a cost of e²; otherwise, it is classified as an outlier and contributes a constant cost of T².
$$\rho_2(e^2) = \begin{cases} e^2, & e^2 < T^2 \\ T^2, & e^2 \ge T^2 \end{cases} \tag{1}$$
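For concreteness, a minimal Python sketch of this truncated cost (an assumed translation of Equation (1), not the authors' implementation):

```python
import numpy as np

def msac_cost(residuals, T):
    """Total MSAC cost of Equation (1): squared residuals below T^2 are kept,
    while larger ones are truncated to the constant penalty T^2."""
    e2 = np.asarray(residuals) ** 2
    return np.where(e2 < T ** 2, e2, T ** 2).sum()
```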
Due to the signal saturation and distortion of the pulsed laser echo when measuring the surface of reflective targets, measurement errors cause a shift in the distance along the direction of laser emission. This results in the loss of certain surface features of the target. Therefore, although the spatial distribution characteristics of outliers are similar to those of noise points caused by the veiling effect, their intensity values are nearly identical to those of the point cloud on the surface of the reflective target. Based on the principles of range anomalies described above, the outliers are processed as follows: first, the laser emission vector is obtained by connecting the origin of the LiDAR coordinate system to the range-anomalous point; then, the intersection of this vector with the plane fitted by MSAC is calculated. The intersection point (Equation (2)) represents the corrected position of the outlier. The illustration of the range-anomalous points’ correction is shown in Figure 4; the procedure of the algorithm is presented in Algorithm 1.
$$x_0' = \frac{-d_0}{a_0 x_0 + b_0 y_0 + c_0 z_0}\, x_0, \qquad y_0' = \frac{-d_0}{a_0 x_0 + b_0 y_0 + c_0 z_0}\, y_0, \qquad z_0' = \frac{-d_0}{a_0 x_0 + b_0 y_0 + c_0 z_0}\, z_0 \tag{2}$$
where $(x_0, y_0, z_0)$ and $(x_0', y_0', z_0')$ represent the coordinates of the range-anomalous point and the range-corrected point, respectively. The parameters $a_0$, $b_0$, $c_0$, and $d_0$ are the coefficients of the plane equation $a_0 x + b_0 y + c_0 z + d_0 = 0$.
Algorithm 1: The range anomalies correction algorithm
Input: Potential target points P and the maximum distance d_max from an inlier point to the plane
Output: Projected 2D points Q
(1)  Fit a plane to P using the MSAC algorithm to obtain the plane equation
(2)  Classify points as inliers P_in and outliers P_out
(3)  if P_out is not empty then
(4)    for each point P_i in P_out do
(5)      Define the origin of the LiDAR coordinate system as O(0, 0, 0)
(6)      Compute the vector OP_i
(7)      Compute the intersection P_i' of OP_i with the plane a0x + b0y + c0z + d0 = 0
(8)      Replace P_i with P_i'
(9)    end for
(10) end if
(11) Combine the inliers and the corrected outliers to obtain P'
(12) Project all points in P' onto the YOZ plane and remove isolated noise points to obtain Q
(13) return Q
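A minimal NumPy sketch of the correction step in Algorithm 1, assuming the plane coefficients come from an MSAC fit and applying the ray-plane intersection of Equation (2):

```python
import numpy as np

def correct_range_anomalies(outliers, plane, eps=1e-9):
    """Project range-anomalous points back onto the fitted plane along the
    laser ray through the LiDAR origin (Equation (2)). `plane` holds the
    coefficients (a0, b0, c0, d0) of a0*x + b0*y + c0*z + d0 = 0;
    `outliers` is an (N, 3) array of anomalous points."""
    a0, b0, c0, d0 = plane
    denom = outliers @ np.array([a0, b0, c0])          # a0*x0 + b0*y0 + c0*z0 per point
    denom = np.where(np.abs(denom) < eps, eps, denom)  # guard rays parallel to the plane
    t = -d0 / denom                                    # ray parameter at the intersection
    return t[:, None] * outliers                       # corrected coordinates

# example: corrected = correct_range_anomalies(P_out, (a0, b0, c0, d0))
```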

2.2.3. Blooming Effect Denoising

The workflow of the blooming effect denoising algorithm is illustrated in Figure 5, and the detailed procedure is outlined in Algorithm 2. For the point cloud projected onto the YOZ plane after correcting range anomalies, the denoising process involves the following steps:
  • Boundary extraction: Use the alpha shape algorithm to extract the boundary of the point cloud, obtaining boundary segments and boundary points.
  • Error ellipse construction: Construct an error ellipse centered at each boundary point. The lengths of the semi-major and semi-minor axes of the ellipse (Equation (3)) are determined by the LiDAR beam divergence angles ($\theta_V$ and $\theta_H$), the target distance ($L$), and the blooming factors ($\lambda_V$ and $\lambda_H$), where the subscripts V and H denote vertical and horizontal, respectively. The blooming factors can be estimated through a simple experiment: collect point cloud data of a target with known dimensions at a specific distance and compare the size of the point cloud with the actual size of the target.
$$a = L \tan(\theta_V)\,\lambda_V, \qquad b = L \tan(\theta_H)\,\lambda_H \tag{3}$$
  • Normal vector calculation: For each boundary point, compute the normal vectors of its adjacent left and right boundary segments. The normalized sum of these two vectors is used as the normal vector (Equation (4)) of the boundary point.
$$\mathbf{v} = (v_y, v_z) \tag{4}$$
  • Boundary point correction: Draw the tangent and the normal vector at each boundary point. Shift the tangent along the direction of the normal vector until it is tangential to the error ellipse; the point of tangency is taken as the corrected boundary point $(y_t, z_t)$. The gradient vector of the ellipse at this point is $\mathbf{G}$ (Equation (5)). Setting $\mathbf{v} \parallel \mathbf{G}$ yields the absolute value of the slope of the line connecting this point to the center of the ellipse (Equation (6)), and the absolute values of the boundary point coordinate offsets follow in Equation (7); a code sketch of these steps is given after this list.
$$\mathbf{G} = \left( \frac{2 y_t}{b^2}, \frac{2 z_t}{a^2} \right) \tag{5}$$
$$k = \left| \frac{v_z}{v_y} \right| \frac{a^2}{b^2} \tag{6}$$
$$\Delta y = \frac{ab}{\sqrt{a^2 + b^2 k^2}}, \qquad \Delta z = \frac{k\,ab}{\sqrt{a^2 + b^2 k^2}} \tag{7}$$
  • Noise removal: Traverse all boundary points of the original point cloud to generate a new refined boundary. Points located between the original boundary and the corrected boundary are identified as noise and are removed.
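As referenced in the boundary point correction step above, the per-point correction of Equations (3)-(7) can be sketched as follows; the inward shift direction and the handling of a vertical normal are our assumptions:

```python
import numpy as np

def correct_boundary_point(p, v, L, theta_v, theta_h, lam_v, lam_h):
    """Shift one boundary point inward using the adaptive error ellipse of
    Equations (3)-(7). p = (y, z) boundary point, v = (vy, vz) outward unit
    normal, L = distance to the LiDAR origin; angles in radians."""
    a = L * np.tan(theta_v) * lam_v                  # semi-major axis, Eq. (3)
    b = L * np.tan(theta_h) * lam_h                  # semi-minor axis, Eq. (3)
    vy, vz = v
    if abs(vy) < 1e-12:                              # vertical normal: offset along z only
        dy, dz = 0.0, a
    else:
        k = abs(vz / vy) * a ** 2 / b ** 2           # slope to the tangency point, Eq. (6)
        dy = a * b / np.sqrt(a ** 2 + b ** 2 * k ** 2)   # Eq. (7)
        dz = k * dy                                  # Eq. (7)
    # move against the outward normal by the signed offsets
    return p[0] - np.sign(vy) * dy, p[1] - np.sign(vz) * dz
```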
In this study, the alpha shape algorithm [43] was used to extract the boundary of the point cloud. The basic idea of alpha shape is to describe the shape of a point set by adjusting the parameter α , which serves as a generalization of the convex hull. Alpha shape utilizes Delaunay triangulation to construct the geometric structure of the points and uses α to control the level of detail in the shape. When α is large, the result approximates the convex hull; when α is small, it captures more detailed local features. As illustrated in Figure 6, when the alpha radius is small, the algorithm can capture fine and complex local details, but the extracted boundary contours tend to be complex and fragmented. As the alpha radius increases, the extracted boundary contours gradually become smoother.
As the target distance increases, the point cloud becomes sparser. To address this, we adopt the average point spacing of the point cloud projected onto the YOZ plane as the alpha radius, ensuring adaptability for boundary extraction across targets at varying distances.
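For reference, a self-contained sketch of 2D alpha-shape boundary extraction built on scipy's Delaunay triangulation; the paper used an alpha shape implementation in MATLAB, so this is an assumed equivalent with the circumradius test as the alpha criterion:

```python
import numpy as np
from scipy.spatial import Delaunay

def alpha_shape_boundary(pts, alpha_r):
    """2D alpha-shape boundary: keep Delaunay triangles whose circumradius is
    below the alpha radius, and return edges used by exactly one kept
    triangle. pts is an (N, 2) array of projected points."""
    edge_count = {}
    for ia, ib, ic in Delaunay(pts).simplices:
        A, B, C = pts[ia], pts[ib], pts[ic]
        a = np.linalg.norm(B - C)                    # triangle side lengths
        b = np.linalg.norm(A - C)
        c = np.linalg.norm(A - B)
        area = 0.5 * abs((B - A)[0] * (C - A)[1] - (B - A)[1] * (C - A)[0])
        if area == 0.0 or a * b * c / (4.0 * area) > alpha_r:
            continue                                 # circumradius too large: outside the shape
        for e in ((ia, ib), (ib, ic), (ic, ia)):
            e = tuple(sorted(e))
            edge_count[e] = edge_count.get(e, 0) + 1
    return [e for e, n in edge_count.items() if n == 1]   # boundary edges

# adaptive choice: alpha_r = mean nearest-neighbor spacing of the projected cloud
```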
Algorithm 2: The boundary correction algorithm based on adaptive error ellipse
Input: Projected 2D points Q and alpha radius α_r
Output: Target points T
(1)  Extract the boundary of the point cloud Q using alpha shape with alpha radius α_r
(2)  Obtain the facets F (each facet represents an edge segment in the 2D plane) and the corresponding boundary points B (each facet contains two boundary points)
(3)  Initialize the normal vector of each boundary point with zero values
(4)  for each facet f_i in F do
(5)    Compute the normal vector n_i of the facet f_i using its two endpoints b_i and b_j
(6)    Normalize n_i to a unit vector
(7)    Accumulate n_i into the normals of the two endpoints b_i and b_j
(8)  end for
(9)  Initialize the corrected boundary points with B
(10) for each boundary point b_i in B do
(11)   Compute the unit normal vector v_i of the point b_i
(12)   Compute the distance L from b_i to the origin of the LiDAR coordinate system
(13)   Define the adaptive error ellipse centered at b_i; calculate its semi-major axis a and semi-minor axis b
(14)   Calculate the absolute value of the slope k between the original boundary point b_i and the corrected boundary point b_i'
(15)   Calculate the coordinates of the corrected boundary point b_i' based on the adaptive ellipse parameters
(16) end for
(17) Connect all points b_i' to form the new boundary
(18) for each point q in Q do
(19)   if q is located between the original and the corrected boundaries then
(20)     Mark q as an inflated point and remove it
(21)   else if q is located inside the corrected boundary then
(22)     Mark q as a target point and retain it
(23)   end if
(24) end for
(25) return T

3. Experiments and Results

3.1. Experimental Setup

The proposed algorithm was implemented in MATLAB R2022b, and the experiments were performed on a desktop computer equipped with an Intel(R) Core(TM) i7-12700H CPU @2.30 GHz and 64 GB of RAM (Lenovo, Beijing, China). A suite of minimal bounding objects available on the MATLAB Community File Exchange was employed to compute minimal bounding geometries, including circles, rectangles, triangles, and spheres. The Inspector module of PolyWorks Metrology Suite was utilized for the visualization of point cloud data.

3.2. Evaluation Metrics

To evaluate the denoising performance of the proposed algorithm, we employed four metrics derived from the confusion matrix of the denoising results (see Table 1), which have been commonly applied in previous studies [44]. These metrics are the type I error (T.I), type II error (T.II), total error (T.E), and kappa coefficient (κ). The type I error (Equation (8)) represents the percentage of target points incorrectly classified as noise points, while the type II error (Equation (9)) reflects the proportion of noise points mistakenly classified as target points. The total error (Equation (10)) quantifies the overall proportion of misclassified points. Meanwhile, the kappa coefficient (Equation (11)) provides an alternative metric for assessing overall classification accuracy by accounting for chance agreement and quantifying the improvement in classification accuracy over a random classification.
$$\mathrm{Type\ I\ error} = \frac{FN}{TP + FN} \tag{8}$$
$$\mathrm{Type\ II\ error} = \frac{FP}{FP + TN} \tag{9}$$
$$\mathrm{Total\ error} = \frac{FN + FP}{S} \tag{10}$$
$$\mathrm{Kappa\ coefficient} = \frac{p_0 - p_e}{1 - p_e} \tag{11}$$
where $S = TP + FN + FP + TN$, $p_0 = \frac{TP + TN}{S}$, and $p_e = \frac{(TP + FN)(TP + FP) + (FP + TN)(FN + TN)}{S^2}$.
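These four metrics follow directly from the confusion matrix; a small Python helper (a sketch taking scalar counts as inputs) makes the computation of Equations (8)-(11) explicit:

```python
def denoising_metrics(tp, fn, fp, tn):
    """Type I/II errors, total error, and kappa coefficient from the
    confusion matrix of Table 1 (Equations (8)-(11))."""
    s = tp + fn + fp + tn
    type_i = fn / (tp + fn)                  # target points lost as noise
    type_ii = fp / (fp + tn)                 # noise points kept as targets
    total = (fn + fp) / s
    p0 = (tp + tn) / s                       # observed agreement
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / s ** 2  # chance agreement
    kappa = (p0 - pe) / (1 - pe)
    return type_i, type_ii, total, kappa
```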
Additionally, to evaluate the performance of the proposed algorithm in restoring the geometric features of reflective targets, four metrics were introduced: mean square error (MSE), mean city-block distance (MCD), relative height error (R.H), and relative width error (R.W). These metrics are designed to evaluate the geometric fidelity of the point cloud, with a focus on spatial accuracy and dimensional consistency [45].
Suppose the ground-truth point cloud and the restored point cloud are $P = \{p_i\}_{i=1}^{N_1}$ and $Q = \{q_i\}_{i=1}^{N_2}$, where $p_i, q_i \in \mathbb{R}^3$; note that $N_1$ and $N_2$ are not necessarily equal. First, we calculate the average squared Euclidean distance from each ground-truth point to its closest restored point and, vice versa, from each restored point to its closest ground-truth point. The final MSE value (Equation (12)) is the average of these two measures, providing a quantitative measure of the discrepancy between the ground-truth points and the restored points. The MCD (Equation (13)) is calculated in the same manner but employs the $\ell_1$ norm in place of the $\ell_2$ norm.
$$\mathrm{MSE} = \frac{1}{2}\left( \frac{\sum_{p_i \in P} \min_{q_j \in Q} \lVert p_i - q_j \rVert_2^2}{N_1} + \frac{\sum_{q_i \in Q} \min_{p_j \in P} \lVert q_i - p_j \rVert_2^2}{N_2} \right) \tag{12}$$
$$\mathrm{MCD} = \frac{1}{2}\left( \frac{\sum_{p_i \in P} \min_{q_j \in Q} \lVert p_i - q_j \rVert_1}{N_1} + \frac{\sum_{q_i \in Q} \min_{p_j \in P} \lVert q_i - p_j \rVert_1}{N_2} \right) \tag{13}$$
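Both metrics are symmetric nearest-neighbor averages and can be computed efficiently with a k-d tree. A sketch using scipy (`p=1` selects the city-block metric), assuming both clouds are given as N×3 arrays:

```python
import numpy as np
from scipy.spatial import cKDTree

def mse_mcd(P, Q):
    """Symmetric point-cloud error metrics of Equations (12)-(13).
    P: (N1, 3) ground-truth points; Q: (N2, 3) restored points."""
    tree_P, tree_Q = cKDTree(P), cKDTree(Q)
    d_pq, _ = tree_Q.query(P)              # Euclidean NN distances, P -> Q
    d_qp, _ = tree_P.query(Q)              # Euclidean NN distances, Q -> P
    mse = 0.5 * ((d_pq ** 2).mean() + (d_qp ** 2).mean())
    c_pq, _ = tree_Q.query(P, p=1)         # city-block NN distances, P -> Q
    c_qp, _ = tree_P.query(Q, p=1)         # city-block NN distances, Q -> P
    mcd = 0.5 * (c_pq.mean() + c_qp.mean())
    return mse, mcd
```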
The actual height (H) and width (W) of the target are first measured using a ruler. Subsequently, the height (h) and width (w) of the restored point cloud are calculated. The relative error (Equations (14) and (15)) is defined as the ratio of the absolute difference between the calculated and actual values to the actual values.
$$\text{relative height error} = \frac{\lvert H - h \rvert}{H} \tag{14}$$
$$\text{relative width error} = \frac{\lvert W - w \rvert}{W} \tag{15}$$
where $H$ and $W$ refer to the actual height and width of the target, while $h$ and $w$ denote the height and width of the target in the denoised point cloud.

3.3. Experimental Results

To evaluate the algorithm’s performance, we randomly selected one sample from each of the 14 distances in the dataset. The denoising results of the UDF for 14 samples are summarized in Table 2. The algorithm achieved an overall average total error of 2.09% and a kappa coefficient of 95.46%, with standard deviations of 0.89% and 2.05%, respectively. These results demonstrate the algorithm’s robust performance and high classification accuracy across targets at varying distances and orientations.
For the MSE and MCD metrics, the average values across all samples were 0.15 cm2 and 0.05 cm, respectively. Notably, Sample 14 exhibited the highest MSE and MCD values, likely due to its longer distance, which resulted in a sparser point cloud. This sparsity appears to have negatively impacted the accuracy of target boundary extraction during the blooming effect denoising step. The average relative height error and relative width error across all samples were approximately 1.9%. For a target with dimensions of 60 cm in height and width, this corresponds to absolute errors in height and width not exceeding 2 cm.
To compare the proposed framework with previous point cloud denoising approaches, we selected five representative methods from recent years, including four traditional denoising algorithms and one deep learning-based algorithm. Some algorithms adjust the spatial positions of the original point cloud during the denoising process, making it difficult to calculate total error and kappa coefficient. Therefore, we selected MSE, MCD, relative height error, and relative width error as metrics to evaluate the feature preservation and recovery capabilities of the algorithms.
The results for MSE and MCD are presented in Table 3 and Table 4. As shown, the UDF achieved the highest accuracy across all samples, which is primarily due to its precise recovery of range-anomalous points and its effective denoising of the blooming effect. In contrast, other algorithms, which treat range-anomalous points as noise and are less effective at denoising the blooming effect, exhibited higher MSE and MCD values.
The evaluation results for relative height error and relative width error are presented in Table 5 and Table 6. The UDF achieved the lowest relative height error in nearly all samples. Although the improvement in relative width error was less pronounced, its standard deviation was the smallest among all methods at 1.01%. This can be attributed to the elliptical shape of the Tele-15 laser beam, which had a vertical divergence angle of 0.12° and a horizontal divergence angle of 0.02°. These characteristics cause more significant distortions in the height direction compared to the width direction.

4. Discussion

To qualitatively evaluate the performance of the veiling effect denoising module, two samples from the dataset were selected. A comparison of the results using the pass-through filter with a constant-intensity threshold and our modified strategy is presented in Figure 7.
It can be observed that the geometric features of the target in the original point cloud are noticeably distorted, particularly in the vertical direction. For the first selected data sample, the results of both algorithms show no significant differences. However, for another sample at a longer distance, the constant-threshold method removes part of the edge of the target’s point cloud, resulting in substantial feature loss. In contrast, our improved method is adaptable to targets at different distances, effectively removing noise points caused by the veiling effect while preserving the target’s features.
To qualitatively analyze the performance of the range anomalies’ correction module, two representative samples were selected from the dataset. The results for the two samples are shown in Figure 8 and Figure 9. It can be observed that a large number of outliers caused by range anomalies are corrected to the target surface. The corrected outliers fill the sparse regions in the inliers, especially the area where the letter “P” is located on the traffic sign.
For reflective targets, the distortion of target features caused by the blooming effect is more pronounced than that caused by the veiling effect and range anomalies. To visually compare the denoising performance of different algorithms, we selected Sample 12 as a representative example, as its target is located at a relatively long distance, resulting in more pronounced distortion. The blooming factors were determined through a simple experiment in which data were collected from a reflective target with known dimensions. The differences in height and width between the blooming point cloud and the actual target correspond to the major and minor axes of the error ellipse, respectively. The blooming factors were then calculated using Equation (3); specifically, $\lambda_V$ and $\lambda_H$ were 0.615 and 0.415, respectively. Figure 10 illustrates the original point cloud, the ground truth, and the denoising results of the different algorithms. The shape and size of the point cloud denoised by our proposed method align most closely with the ground truth. In contrast, the results of the other algorithms exhibit noticeable distortions, particularly in the height dimension, where the denoised point cloud remains larger than the actual dimensions of the target.

5. Conclusions

In this study, we addressed the critical challenge of denoising LiDAR point clouds of reflective targets, which often suffer from severe noise and distortions. To this end, we propose a Unified Denoising Framework (UDF), which is capable of tackling the veiling effect, range anomalies, and blooming effect. The experimental results indicated that the UDF achieved an MSE of 0.15 cm2, an MCD of 0.05 cm, and relative errors in height and width of 1.92% and 1.91%, respectively. In comparison to five representative algorithms, the UDF exhibited notable improvements in both denoising performance and the restoration of target geometric features. The UDF provides a robust solution for enhancing the quality of LiDAR point clouds in applications where accurate target geometry is essential, such as autonomous driving and infrastructure mapping. By restoring the geometric features of the target, this method can significantly improve downstream tasks, including feature extraction and object detection.
However, the UDF has some limitations. The current method is primarily designed for reflective planar targets, and improvements may be required when it is applied to non-planar or small targets. Future research could focus on enhancing the algorithm to address these challenges and explore adaptive parameter optimization techniques.
Additionally, environmental factors such as humidity, fog, rain, temperature fluctuations, and air quality (e.g., dust) may also influence LiDAR measurements by affecting laser beam propagation and target reflectivity. Investigating the impact of these conditions on the blooming effect and the performance of LiDAR-based perception systems could further improve the robustness of the proposed method in real-world applications.

Author Contributions

Conceptualization, T.X.; methodology, T.X.; software, T.X.; validation, T.X. and C.W.; formal analysis, T.X.; investigation, T.X. and C.W.; resources, T.X.; data curation, T.X.; writing—original draft preparation, T.X.; writing—review and editing, T.X., J.Z., C.W., F.L. and Z.M.; visualization, T.X.; supervision, J.Z.; project administration, J.Z., F.L. and Z.M.; funding acquisition, J.Z. and F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China under the Special Fund for Research on National Major Research Instruments (Grant number 62327803).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Li, Y.; Ibanez-Guzman, J. Lidar for Autonomous Driving: The Principles, Challenges, and Trends for Automotive Lidar and Perception Systems. IEEE Signal Process. Mag. 2020, 37, 50–61. [Google Scholar] [CrossRef]
  2. Feng, D.; Haase-Schütz, C.; Rosenbaum, L.; Hertlein, H.; Gläser, C.; Timm, F.; Wiesbeck, W.; Dietmayer, K. Deep Multi-Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1341–1360. [Google Scholar] [CrossRef]
  3. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432. [Google Scholar] [CrossRef] [PubMed]
  4. Jin, X.; Yang, H.; He, X.; Liu, G.; Yan, Z.; Wang, Q. Robust LiDAR-Based Vehicle Detection for On-Road Autonomous Driving. Remote Sens. 2023, 15, 3160. [Google Scholar] [CrossRef]
  5. Xue, F.; Lu, W.; Chen, Z.; Webster, C.J. From LiDAR point cloud towards digital twin city: Clustering city objects based on Gestalt principles. ISPRS J. Photogramm. Remote Sens. 2020, 167, 418–431. [Google Scholar] [CrossRef]
  6. Kulawiak, M. A Cost-Effective Method for Reconstructing City-Building 3D Models from Sparse Lidar Point Clouds. Remote Sens. 2022, 14, 1278. [Google Scholar] [CrossRef]
  7. Franzini, M.; Casella, V.M.; Monti, B. Assessment of Leica CityMapper-2 LiDAR Data within Milan’s Digital Twin Project. Remote Sens. 2023, 15, 5263. [Google Scholar] [CrossRef]
  8. Borowiec, N.; Marmol, U. Using LiDAR System as a Data Source for Agricultural Land Boundaries. Remote Sens. 2022, 14, 1048. [Google Scholar] [CrossRef]
  9. Debnath, S.; Paul, M.; Debnath, T. Applications of LiDAR in Agriculture and Future Research Directions. J. Imaging 2023, 9, 57. [Google Scholar] [CrossRef]
  10. Karim, M.R.; Reza, M.N.; Jin, H.; Haque, M.A.; Lee, K.-H.; Sung, J.; Chung, S.-O. Application of LiDAR Sensors for Crop and Working Environment Recognition in Agriculture: A Review. Remote Sens. 2024, 16, 4623. [Google Scholar] [CrossRef]
  11. Dassot, M.; Constant, T.; Fournier, M. The use of terrestrial LiDAR technology in forest science: Application fields, benefits and challenges. Ann. For. Sci. 2011, 68, 959–974. [Google Scholar] [CrossRef]
  12. Li, W.; Guo, Q.; Jakubowski, M.K.; Kelly, M. A new method for segmenting individual trees from the lidar point cloud. Photogramm. Eng. Remote Sens. 2012, 78, 75–84. [Google Scholar]
  13. Fassnacht, F.E.; White, J.C.; Wulder, M.A.; Næsset, E. Remote sensing in forestry: Current challenges, considerations and directions. For. Int. J. For. Res. 2023, 97, 11–37. [Google Scholar] [CrossRef]
  14. Yang, D.; Liu, Y.; Chen, Q.; Chen, M.; Zhan, S.; Cheung, N.-k.; Chan, H.-Y.; Wang, Z.; Li, W.J. Development of the high angular resolution 360° LiDAR based on scanning MEMS mirror. Sci. Rep. 2023, 13, 1540. [Google Scholar] [CrossRef]
  15. Cheong, S.; Ha, J. LiDAR Blooming Artifacts Estimation Method Induced by Retro-Reflectance with Synthetic Data Modeling and Deep Learning. In Proceedings of the 2024 IEEE International Conference on Consumer Electronics-Asia (ICCE-Asia), Danang, Vietnam, 3–6 November 2024; pp. 1–4. [Google Scholar]
  16. Uttarkabat, S.; Appukuttan, S.; Gupta, K.; Nayak, S.; Palo, P. BloomNet: Perception of Blooming Effect in ADAS using Synthetic LiDAR Point Cloud Data. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV), Jeju Island, Republic of Korea, 2–5 June 2024; pp. 1886–1892. [Google Scholar]
  17. Han, X.-F.; Jin, J.S.; Wang, M.-J.; Jiang, W.; Gao, L.; Xiao, L. A review of algorithms for filtering the 3D point cloud. Signal Process. Image Commun. 2017, 57, 103–112. [Google Scholar] [CrossRef]
  18. Zhou, L.; Sun, G.; Li, Y.; Li, W.; Su, Z. Point cloud denoising review: From classical to deep learning-based approaches. Graph. Models 2022, 121, 101140. [Google Scholar] [CrossRef]
  19. Rusu, R.B.; Cousins, S. 3D is here: Point Cloud Library (PCL). In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 1–4. [Google Scholar]
  20. Miknis, M.; Davies, R.; Plassmann, P.; Ware, A. Near real-time point cloud processing using the PCL. In Proceedings of the 2015 International Conference on Systems, Signals and Image Processing (IWSSIP), London, UK, 10–12 September 2015; pp. 153–156. [Google Scholar]
  21. Miknis, M.; Davies, R.; Plassmann, P.; Ware, A. Efficient point cloud pre-processing using the point cloud library. Int. J. Image Process. 2016, 10, 63–72. [Google Scholar]
  22. Nurunnabi, A.; West, G.; Belton, D. Outlier detection and robust normal-curvature estimation in mobile laser scanning 3D point cloud data. Pattern Recognit. 2015, 48, 1404–1419. [Google Scholar] [CrossRef]
  23. Carrilho, A.C.; Galo, M.; Santos, R.C. Statistical Outlier Detection Method For Airborne Lidar Data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 42, 87–92. [Google Scholar] [CrossRef]
  24. Fleishman, S.; Drori, I.; Cohen-Or, D. Bilateral mesh denoising. In Proceedings of the ACM SIGGRAPH 2003 Papers, San Diego, CA, USA, 27–31 July 2003; pp. 950–953. [Google Scholar]
  25. Digne, J.; de Franchis, C. The Bilateral Filter for Point Clouds. Image Process. Line 2017, 7, 278–287. [Google Scholar] [CrossRef]
  26. Guoqiang, W.; Hongxia, Z.; Zhiwei, G.; Wei, S.; Dagong, J. Bilateral filter denoising of Lidar point cloud data in automatic driving scene. Infrared Phys. Technol. 2023, 131, 104724. [Google Scholar] [CrossRef]
  27. Duan, Y.; Yang, C.; Li, H. Low-complexity adaptive radius outlier removal filter based on PCA for LiDAR point cloud denoising. Appl. Opt. 2021, 60, E1–E7. [Google Scholar] [CrossRef] [PubMed]
  28. Szutor, P.; Zichar, M. Fast Radius Outlier Filter Variant for Large Point Clouds. Data 2023, 8, 149. [Google Scholar] [CrossRef]
  29. Luo, Y.; Shi, S.; Zhang, K. Multibeam Point Cloud Denoising Method Based on Modified Radius Filter. In Proceedings of the 4th International Conference on Geology, Mapping and Remote Sensing (ICGMRS 2023), Wuhan, China, 23 January 2024; pp. 98–103. [Google Scholar]
  30. Fleishman, S.; Cohen-Or, D.; Silva, C.T. Robust moving least-squares fitting with sharp features. ACM Trans. Graph. 2005, 24, 544–552. [Google Scholar] [CrossRef]
  31. Preiner, R.; Mattausch, O.; Arikan, M.; Pajarola, R.; Wimmer, M. Continuous projection for fast L1 reconstruction. ACM Trans. Graph. 2014, 33, 1–13. [Google Scholar] [CrossRef]
  32. Xu, Z.; Foi, A. Anisotropic Denoising of 3D Point Clouds by Aggregation of Multiple Surface-Adaptive Estimates. IEEE Trans. Vis. Comput. Graph. 2021, 27, 2851–2868. [Google Scholar] [CrossRef]
  33. Yu, L.; Li, X.; Fu, C.-W.; Cohen-Or, D.; Heng, P.-A. Ec-net: An edge-aware point set consolidation network. In Proceedings of the European conference on computer vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 386–402. [Google Scholar]
  34. Rakotosaona, M.-J.; La Barbera, V.; Guerrero, P.; Mitra, N.J.; Ovsjanikov, M. PointCleanNet: Learning to Denoise and Remove Outliers from Dense Point Clouds. Comput. Graph. Forum 2020, 39, 185–203. [Google Scholar] [CrossRef]
  35. Pistilli, F.; Fracastoro, G.; Valsesia, D.; Magli, E. Learning Robust Graph-Convolutional Representations for Point Cloud Denoising. IEEE J. Sel. Top. Signal Process. 2021, 15, 402–414. [Google Scholar] [CrossRef]
  36. Chen, S.; Duan, C.; Yang, Y.; Li, D.; Feng, C.; Tian, D. Deep Unsupervised Learning of 3D Point Clouds via Graph Topology Inference and Filtering. IEEE Trans. Image Process. 2020, 29, 3183–3198. [Google Scholar] [CrossRef]
  37. Luo, S.; Hu, W. Differentiable Manifold Reconstruction for Point Cloud Denoising. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 1330–1338. [Google Scholar]
  38. Luo, S.; Hu, W. Score-Based Point Cloud Denoising. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision (ICCV), Montreal, BC, Canada, 10–17 October 2021; pp. 4563–4572. [Google Scholar]
  39. Sezan, M.I. A peak detection algorithm and its application to histogram-based image data reduction. Comput. Vis. Graph. Image Process. 1990, 49, 36–51. [Google Scholar] [CrossRef]
  40. Guo, H. A Simple Algorithm for Fitting a Gaussian Function [DSP Tips and Tricks]. IEEE Signal Process. Mag. 2011, 28, 134–137. [Google Scholar] [CrossRef]
  41. Schnabel, R.; Wahl, R.; Klein, R. Efficient RANSAC for Point-Cloud Shape Detection. Comput. Graph. Forum 2007, 26, 214–226. [Google Scholar] [CrossRef]
  42. Torr, P.H.S.; Zisserman, A. MLESAC: A New Robust Estimator with Application to Estimating Image Geometry. Comput. Vis. Image Underst. 2000, 78, 138–156. [Google Scholar] [CrossRef]
  43. Edelsbrunner, H.; Kirkpatrick, D.; Seidel, R. On the shape of a set of points in the plane. IEEE Trans. Inf. Theory 1983, 29, 551–559. [Google Scholar] [CrossRef]
  44. Congalton, R.G. A review of assessing the accuracy of classifications of remotely sensed data. Remote Sens. Environ. 1991, 37, 35–46. [Google Scholar] [CrossRef]
  45. Zeng, J.; Cheung, G.; Ng, M.; Pang, J.; Yang, C. 3D Point Cloud Denoising Using Graph Laplacian Regularization of a Low Dimensional Manifold Model. IEEE Trans. Image Process. 2020, 29, 3474–3489. [Google Scholar] [CrossRef]
Figure 1. LiDAR data acquisition scenario and the point cloud: (a) Livox Tele-15 LiDAR sensor mounted on a tripod; (b) roadside traffic sign; (c) original point cloud at a specific distance with significant noise points due to the veiling effect, range anomalies, and blooming effect; (d) ground truth at the same distance as (c); note: (c,d) are displayed from an oblique perspective at the same scale.
Figure 2. Workflow of UDF for denoising LiDAR point cloud of reflective target.
Figure 3. Illustration of veiling effect denoising: (a) center of peak probability density of the original LiDAR point cloud intensity; (b) coarse denoising to remove low-intensity noise points and extract potential target points. Note: The area of the point cloud after coarse denoising from the X-axis perspective is still larger than the actual target region, indicating that the blooming effect has not been eliminated.
Figure 4. Illustration of range-anomalous points’ correction: (a) original points with range anomalies; (b) points with corrected ranges and coordinates. Note: The fitted plane in the figure is presented from a perspective parallel to the plane.
Figure 5. Illustration of blooming effect denoising: (a) boundary extraction of the original point cloud using alpha shape; (b) error ellipse construction and boundary point correction; (c) point-wise correction to obtain the new boundary; (d) blooming effect denoising and target points’ extraction based on the original and corrected boundaries. The blue line represents the “Original boundary”, with blue dots marking the corresponding “Original boundary points”. The green line represents the “Corrected boundary”, with green dots marking the “Corrected boundary points”. The red dots represent the “Point cloud before blooming effect removal”. In (d), the semi-transparent red dots located between the blue and green lines represent the “Noise points caused by the blooming effect”. O is defined as the origin of the LiDAR coordinate system, consistent with its representation in Figure 4. Additionally, the definition of point O can be found in line (5) of the pseudocode table in Algorithm 1.
Figure 6. Illustration of boundary extraction based on alpha shape: (a) the original point cloud of the target; (b–d) the alpha shapes of the point cloud; the alpha radius is set to 0.01, 0.05, and 0.5, respectively.
Figure 7. Comparison of pass-through filtering using a constant-intensity threshold and a peak probability density center-based threshold: (a,d) original point cloud of the reflective target at two different distances; (b,e) filtering results using a constant-intensity threshold at the corresponding distances of (a,d); (c,f) filtering results using the peak probability density center of the point cloud intensity at the corresponding distances of (a,d). Note: all point clouds are displayed from the front perspective at the same scale.
Figure 8. Results of range anomalies’ correction for Sample 1: (a,e) point cloud after the veiling effect denoising from the side and front perspectives; (b,f) inliers and outliers classified by the MSAC plane fitting; (c,g) corrected outliers; (d,h) the combination of inliers and the corrected outliers. Note: (a–d) are shown from the side perspective, (e–h) are shown from the front perspective, and all point clouds are displayed at the same scale.
Figure 9. Results of range anomalies’ correction for Sample 2: (a,e) point cloud after the veiling effect denoising; (b,f) inliers and outliers classified by the MSAC plane fitting; (c,g) corrected outliers; (d,h) the combination of inliers and the corrected outliers. Note: (a–d) are shown from the side perspective, (e–h) are shown from the front perspective, and all point clouds are displayed at the same scale.
Figure 10. Comparison of blooming effect denoising results using different algorithms for Sample 12: (a) original point cloud; (b) ground truth; (c–g) results of algorithms from References [19,22,25,27,34], respectively; (h) result of the proposed framework. Note: All point clouds are shown from the front perspective at the same scale.
Table 1. Confusion matrix for the LiDAR point cloud denoising results.

                                  Denoised Data
                            Target Points   Noise Points
Ground truth  Target points      TP              FN
              Noise points       FP              TN
Note: TP, TN, FP, and FN refer to True Positive, True Negative, False Positive, and False Negative, respectively, representing the correctly and incorrectly classified targets and noise points.
Table 2. Performance evaluations of the UDF on the self-constructed dataset.

Sample       T.I (%)   T.II (%)   T.E (%)   Kappa (%)   MSE (cm2)   MCD (cm)   R.H (%)   R.W (%)
Sample 1     2.59      2.10       2.47      93.32       0.02        0.05       0.40      3.12
Sample 2     0.06      5.74       2.22      95.24       0.03        0.06       1.25      2.78
Sample 3     0.01      4.53       1.85      96.14       0.02        0.02       1.53      2.01
Sample 4     0.01      3.73       1.64      96.66       0.02        0.02       1.40      1.57
Sample 5     0.41      3.20       1.73      96.52       0.03        0.02       1.65      2.47
Sample 6     0.42      3.33       1.87      96.26       0.06        0.03       2.29      0.16
Sample 7     0.42      3.91       2.29      95.40       0.07        0.04       3.19      2.08
Sample 8     0.43      0.42       0.42      99.15       0.02        0.01       0.12      0.52
Sample 9     0.44      5.54       3.51      92.78       0.17        0.07       5.00      4.05
Sample 10    0.01      1.27       0.79      98.33       0.06        0.02       0.69      1.25
Sample 11    1.05      2.76       2.16      95.32       0.17        0.06       2.86      2.35
Sample 12    2.76      0.85       1.50      96.65       0.11        0.05       1.42      0.85
Sample 13    4.56      2.89       3.45      92.29       0.12        0.04       0.42      2.03
Sample 14    6.94      1.60       3.40      92.32       1.18        0.23       4.60      1.51
Avg.         1.44      2.99       2.09      95.46       0.15        0.05       1.92      1.91
Max.         6.94      5.74       3.51      99.15       1.18        0.23       5.00      4.05
Min.         0.01      0.42       0.42      92.29       0.02        0.01       0.12      0.16
Std.         2.01      1.58       0.89      2.05        0.29        0.05       1.46      1.01
Table 3. MSE comparison among algorithms (cm2).

Sample      Rusu [19]   Nurunnabi [22]   Digne [25]   Rakotosaona [34]   Duan [27]   Ours
Sample 1    238.28      2.00             223.48       238.34             1.52        0.02
Sample 2    5.77        2.30             10.01        5.87               2.02        0.03
Sample 3    0.73        0.37             9.77         0.73               0.73        0.02
Sample 4    1.18        0.63             10.15        1.18               1.13        0.02
Sample 5    1.74        1.03             9.72         1.74               1.56        0.03
Sample 6    2.68        1.40             10.26        2.68               2.47        0.06
Sample 7    3.62        2.32             9.86         3.62               3.30        0.07
Sample 8    5.17        3.02             10.57        5.17               4.68        0.02
Sample 9    6.81        4.44             9.49         6.81               5.65        0.17
Sample 10   10.88       6.43             12.41        10.88              9.05        0.06
Sample 11   12.45       6.35             11.33        12.45              8.58        0.17
Sample 12   18.57       13.44            14.52        18.57              16.17       0.11
Sample 13   16.67       8.74             13.52        16.67              12.08       0.12
Sample 14   24.81       13.09            19.46        24.81              19.94       1.18
Avg.        24.95       4.68             26.75        24.97              6.35        0.15
Max.        238.28      13.44            223.48       238.34             19.94       1.18
Min.        0.73        0.37             9.49         0.73               0.73        0.02
Std.        59.58       4.23             54.63        59.60              5.84        0.29
Note: The bolded numbers represent the lowest MSE values for each sample.
Table 4. MCD comparison among algorithms (cm).

Sample      Rusu [19]   Nurunnabi [22]   Digne [25]   Rakotosaona [34]   Duan [27]   Ours
Sample 1    3.16        0.16             5.21         3.24               0.13        0.05
Sample 2    0.70        0.34             1.86         0.89               0.43        0.06
Sample 3    0.21        0.14             1.77         0.21               0.40        0.02
Sample 4    0.28        0.20             1.85         0.28               0.49        0.02
Sample 5    0.36        0.27             1.80         0.37               0.54        0.02
Sample 6    0.48        0.34             1.93         0.48               0.75        0.03
Sample 7    0.59        0.47             1.88         0.59               0.87        0.04
Sample 8    0.74        0.55             2.04         0.74               1.01        0.01
Sample 9    0.89        0.72             1.81         0.89               1.09        0.07
Sample 10   1.18        0.87             2.27         1.18               1.51        0.02
Sample 11   1.26        0.85             2.21         1.26               1.37        0.06
Sample 12   1.63        1.34             2.48         1.63               1.96        0.05
Sample 13   1.49        1.02             2.46         1.49               1.67        0.04
Sample 14   1.85        1.20             3.21         1.85               2.17        0.23
Avg.        1.06        0.61             2.34         1.08               1.03        0.05
Max.        3.16        1.34             5.21         3.24               2.17        0.23
Min.        0.21        0.14             1.77         0.21               0.13        0.01
Std.        0.77        0.38             0.88         0.78               0.61        0.05
Note: The bolded numbers represent the lowest MCD values for each sample.
Table 5. Relative height error comparison among algorithms (%).

Sample      Original Data   Rusu [19]   Nurunnabi [22]   Digne [25]   Rakotosaona [34]   Duan [27]   Ours
Sample 1    42.27           36.48       4.86             9.39         35.80              3.56        0.40
Sample 2    62.33           12.45       12.20            24.74        12.59              10.93       1.25
Sample 3    57.17           19.38       13.27            20.83        19.38              14.95       1.53
Sample 4    44.81           23.11       15.92            17.95        23.11              18.48       1.40
Sample 5    52.69           26.96       21.18            15.24        26.96              23.52       1.65
Sample 6    62.18           31.55       20.54            12.27        31.55              25.16       2.29
Sample 7    67.77           36.34       26.09            7.81         36.34              29.85       3.19
Sample 8    72.40           36.75       28.11            6.84         36.75              31.80       0.12
Sample 9    80.68           45.61       34.79            1.76         45.61              41.01       5.00
Sample 10   95.34           50.70       38.55            2.34         50.70              41.41       0.69
Sample 11   104.93          55.87       37.16            5.06         55.87              43.54       2.86
Sample 12   111.04          56.32       47.61            8.27         56.33              47.40       1.42
Sample 13   121.50          61.85       41.53            9.33         61.83              48.91       0.42
Sample 14   123.16          62.42       46.23            16.19        62.42              47.99       4.60
Avg.        78.45           39.70       27.72            11.29        39.66              30.61       1.92
Max.        123.16          62.42       47.61            24.74        62.42              48.91       5.00
Min.        42.27           12.45       4.86             1.76         12.59              3.56        0.12
Std.        26.91           15.56       13.08            6.63         15.55              14.40       1.46
Note: The bolded numbers represent the lowest relative height errors for each sample.
Table 6. Relative width error comparison among algorithms (%).

Sample      Original Data   Rusu [19]   Nurunnabi [22]   Digne [25]   Rakotosaona [34]   Duan [27]   Ours
Sample 1    50.03           6.37        2.66             34.27        6.24               2.06        3.12
Sample 2    45.79           3.78        3.45             33.48        3.88               2.27        2.78
Sample 3    40.87           4.06        1.35             34.41        4.06               1.99        2.01
Sample 4    42.38           4.08        0.63             35.62        4.07               1.63        1.57
Sample 5    42.78           5.28        0.06             35.93        5.29               2.42        2.47
Sample 6    45.50           3.56        2.05             36.81        3.56               0.49        0.16
Sample 7    50.65           5.84        0.91             36.51        5.83               1.06        2.08
Sample 8    52.64           5.00        3.78             37.68        5.00               1.80        0.52
Sample 9    54.91           8.73        0.55             37.44        8.72               3.06        4.05
Sample 10   58.98           7.08        2.05             37.82        7.08               3.70        1.25
Sample 11   64.87           8.60        0.15             38.52        8.60               6.07        2.35
Sample 12   74.53           6.60        0.11             40.93        6.61               1.94        0.85
Sample 13   74.67           10.39       1.60             39.87        10.39              6.61        2.03
Sample 14   80.59           9.44        0.02             39.79        9.44               1.39        1.51
Avg.        55.66           6.34        1.38             37.08        6.34               2.61        1.91
Max.        80.59           10.39       3.78             40.93        10.39              6.61        4.05
Min.        40.87           3.56        0.02             33.48        3.56               0.49        0.16
Std.        12.71           2.16        1.22             2.15         2.15               1.70        1.01
Note: The bolded numbers represent the lowest relative width errors for each sample.

Share and Cite

MDPI and ACS Style

Xie, T.; Zhu, J.; Wang, C.; Li, F.; Meng, Z. A Unified Denoising Framework for Restoring the LiDAR Point Cloud Geometry of Reflective Targets. Appl. Sci. 2025, 15, 3904. https://doi.org/10.3390/app15073904

