Sensors · Article · Open Access · 5 June 2020

Sparse Regularization-Based Approach for Point Cloud Denoising and Sharp Features Enhancement

1 Faculty of Engineering, Universidad Autónoma del Caribe, Barranquilla 080001, Colombia
2 Faculty of Engineering, Universidad del Magdalena, Santa Marta 470004, Colombia
3 Facultad de Minas, Universidad Nacional de Colombia, Sede Medellín 050041, Colombia
* Author to whom correspondence should be addressed.
This article belongs to the Section Physical Sensors

Abstract

Denoising a point cloud is fundamental for reconstructing high-quality, detailed surfaces, as it eliminates the noise and outliers introduced by the 3D scanning process. The challenges for a denoising algorithm are noise reduction and sharp feature preservation. In this paper, we present a new model to reconstruct and smooth point clouds that combines L1-median filtering with sparse L1 regularization, both for denoising the normal vectors and for updating the positions of the points, so that sharp features in the point cloud are preserved. The L1-median filter is robust to outliers and noise compared to the mean. The L1 norm is a way to measure the sparsity of a solution, and applying L1 optimization to the point cloud can measure the sparsity of sharp features, producing clean point set surfaces with sharp features. We solve the L1 minimization problem using the proximal gradient descent algorithm. Experimental results show that our approach is comparable to state-of-the-art methods: it denoises 3D models with a high level of noise while keeping their geometric features.

1. Introduction

With the rapid expansion of 3D scanning devices, capturing and scanning real objects has become a common task in many areas, ranging from medicine and entertainment to industry and 3D printing. Despite significant improvements in the precision of 3D scanning technology, the raw data produced by scanning devices inevitably contain noise and outliers caused by the inherent measurement error of the devices and the digitalization process. Herein lies the importance of denoising the point cloud in a pre-processing step before proceeding with surface reconstruction or shape analysis. The goal of denoising algorithms is to suppress noise and outliers while preserving sharp features such as edges and corners. Unlike triangular meshes, point clouds carry no connectivity information, which introduces an additional challenge. Denoising point clouds with sharp features is a complex problem, as both features and noise are high-frequency components and therefore difficult to distinguish.
Many point set surfaces are piecewise smooth almost everywhere except for a few features such as corners and edges [1,2]. This means that these features are sparse, allowing a sparsity analysis in the point cloud to estimate them. We can measure the sparsity of a solution using either the L0 norm or the L1 norm. The L0 norm counts the number of non-zero elements in a vector, directly measuring sparsity, but it is challenging to optimize due to its non-convexity; however, the L1 norm can approximate the L0 norm. The L1 norm is convex and, under certain conditions, produces sparse solutions. Some works exploit sparsity theory on point clouds [1,2,3,4], an application motivated by the fields of sparse signal reconstruction and compressed sensing [5,6]. These works have attempted to overcome the problems related to noisy point clouds, and the algorithms proposed in these studies perform well at feature preservation up to a certain level of noise. However, when the noise scale is larger or impulsive noise is present, they usually perform poorly. Moreover, although the L0 norm produces sparser solutions than the L1 norm, applying L0 minimization can produce over-flattening and over-sharpening effects on small geometric features.
In this paper, we propose a robust method that focuses on removing noise and outliers while preserving the sharp features of a point cloud. Our approach comprises two iterative steps: (1) normal estimation, finding a regression plane equidistant to all heights in a local neighborhood and then calculating the normal of that plane; and (2) a point position update based on the estimated normals, using the orthogonal distance from the noisy point to the local regression plane to shift the point along the normal direction and project it onto the plane. This two-step procedure is repeated until a minimum error threshold is reached.
Our work is motivated by three observations: (1) an L1-median filtering data-fidelity term encourages finding a local regression plane that approximates the input points while discarding noise and outliers; (2) points that belong to the same locally smooth region have similar normals, so the differences between them should be sparse, while large differences reveal sharp features; and (3) points in a local neighborhood must comply with a local planarity criterion, except at the sharp features.
The three principal contributions of this paper are as follows. First, we present a two-step method for both normal estimation and point position update that measures the sparsity of sharp features while discriminating between noise and features. Second, an adaptive weighting strategy is used to improve the normal estimation near sharp features. Third, a point cloud denoising method is developed that is robust to large-scale noise, outliers, and impulsive noise, outperforming several state-of-the-art point cloud denoising methods.

3. Robust Point Cloud Denoising

As in previous point cloud denoising methods [22,24,25], our method comprises two steps, namely normal denoising and point position update.
Starting from a noisy point cloud $P$ near an unknown surface $S$, the goal of the proposed algorithm is to find a noise-free point set $P'$ that conserves the features of the original point cloud. We use local neighborhoods around each point $p_i \in P$. The main idea is to define a local reference tangent plane for every $p_i$ in the point cloud and determine its normal vector $n_i$, and then shift the point $p_i$ in the normal direction by a distance $\tau_i$, obtaining a new position $p_i' = p_i + \tau_i n_i$ for every $p_i \in P$. The new position $p_i'$ is the projection of point $p_i$ onto the tangent plane, which is the linear approximation of the surface $S$ at point $p_i$, as shown in Figure 1.
Figure 1. The point $p_i$ is projected onto the reference plane, which is a linear approximation to the surface.
The normal $n_i$ and the displacement $\tau_i$ are computed iteratively by adjusting the tangent plane to the neighborhood $N_g(p_i)$: we look for a height equidistant to all heights of the points $q_j \in N_g(p_i)$ over the plane, minimizing a cost function (Equation (2)) with respect to $\tau$ and $n$, subject to the constraint $\|n\| = 1$.

3.1. L1-Median

The L1-median is a robust estimator related to the multivariate median; it is defined as the point $x$ that minimizes the sum of Euclidean distances to all points in the data set $\{p_j\}_{j \in J}$:
$$\arg\min_{x} \sum_{j \in J} \| p_j - x \|$$
The L1-median is insensitive to outliers and noise when compared with the mean [10]. We use the L1-median as a data-fidelity term.
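As an illustration (not part of the proposed pipeline), the L1-median can be computed with Weiszfeld's classical fixed-point iteration; the following sketch assumes the point set is given as a NumPy array.

    import numpy as np

    def l1_median(points, max_iter=100, tol=1e-6):
        """Weiszfeld iteration for the L1-median of a point set.

        points: (m, 3) array. Returns the point x minimizing sum_j ||p_j - x||.
        """
        x = points.mean(axis=0)                # start from the outlier-sensitive mean
        for _ in range(max_iter):
            d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
            w = 1.0 / d                        # inverse-distance weights
            x_new = (w[:, None] * points).sum(axis=0) / w.sum()
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

Each iteration re-weights points by the inverse of their current distance to the estimate, which is what makes the result far less sensitive to outliers than the mean.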

3.2. L1 Sparse Regularization

L1 regularization has been applied to feature selection [27], sparse signal reconstruction [28], signal processing tasks such as image decomposition [29], and basis pursuit [30]. Although L0 regularization produces the sparsest solutions, under certain conditions L1 regularization also produces a sparse solution [31]. Image processing has successfully applied L1 regularization to preserve fine details and edges through minimization of the gradient [32]; this is known as total variation (TV) regularization and measures the sparsity of the gradient. The proposed method uses TV in the normal estimation to preserve sharp features.

3.3. Cost Function

To denoise the noisy point cloud, we integrate an L1-median height filter and L1 regularization of the gradient (total variation). The normal is obtained by minimizing the following energy functional:
$$\min_{n, \tau} \; E_f + \lambda E_{reg}$$
where $E_f$ is the fidelity term, $E_{reg}$ is the regularization term, $\lambda$ is the regularization parameter, and $\|n\| = 1$.
The L1-median height fidelity term $E_f$ is defined as follows:
$$E_f = \sum_{q_j \in N_g(p_i)} | h_i - \tau_i | \, \psi(h_i, \tau_i) \, \theta(p_i - q_j)$$
where $h_i = n_i^t (p_i - q_j)$, $\psi(h_i, \tau_i) = e^{-(h_i - \tau_i)^2 / \sigma_h^2}$, and $\theta(p_i - q_j) = e^{-d^2 / \sigma_d^2}$ with $d = \|p_i - q_j\|$.
It is used to fit a robust hyperplane in the neighborhood of the sampled point $p_i$ and then to estimate the normal vector with respect to that hyperplane. We minimize the sum of distances of the orthogonal projections (heights) of the points $q_j \in N_g(p_i)$ with respect to the hyperplane (Equation (3)). $\tau_i$ is the height to be found that minimizes the orthogonal projections of each point $q_j$ to the hyperplane:
$$\arg\min_{x} \sum_{j \in J} \| p_j - x \|$$
The estimation of the hyperplane is robust to large deviations of the points $q_j$. Outliers are identified by the L1-median height filter, which penalizes points $q_j$ with large orthogonal projections (heights) $h_i$ with respect to the hyperplane.
Consequently, points $q_j$ with considerable heights $h_i$ probably lie across the sharp features or are outliers. We therefore propose an adaptive weighting strategy that assigns the weight of each point as a function of its height. The weighting term $\psi(\cdot)$ adaptively reduces the influence of points $q_j$ with high $h_i$ values, and $\sigma_h$ is the height parameter that controls the sensitivity to outliers. Thus, the term $\psi(\cdot)$ only considers points located in the same smooth region to estimate the normal vector.
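A minimal NumPy sketch of the two weights as reconstructed above (the negative exponents in the Gaussians are our reading of the garbled formulas, since a Gaussian decay is clearly intended):

    import numpy as np

    def psi(h, tau, sigma_h):
        """Adaptive height weight: down-weights points whose height h deviates from tau."""
        return np.exp(-((h - tau) ** 2) / sigma_h ** 2)

    def theta(p_i, q_j, sigma_d):
        """Spatial weight on the distance d = ||p_i - q_j||."""
        d = np.linalg.norm(p_i - q_j)
        return np.exp(-(d ** 2) / sigma_d ** 2)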
The L1-norm regularization term $E_{reg}$ is defined as
$$E_{reg} = \sum_{n_j \in N_g(n_i)} w_{i,j} \| n_i - n_j \|_1$$
where $w_{i,j} = e^{-\left(\frac{1 - n_i^t n_j}{1 - \cos(\sigma_n)}\right)^2}$, which is introduced as a measure of sparsity to preserve the sharp features and smooth the underlying surface. If a point cloud is piecewise smooth, most of the gradients in the (consistently oriented) normal field $n$ tend to be zero; in contrast, large gradient values indicate sharp features. This means that normals $n_j \in N_g(n_i)$ in a neighborhood must be similar. Here, $w_{i,j}$ is the normal weight function and $\sigma_n$ is the angle parameter that measures the similarity between normals, customarily set to $\sigma_n = 15°$.
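A short sketch of this weight, under the reconstruction of $w_{i,j}$ given above:

    import numpy as np

    def normal_weight(n_i, n_j, sigma_n_deg=15.0):
        """w_ij: near 1 for similar unit normals, decaying rapidly once the
        angle between n_i and n_j exceeds sigma_n."""
        sigma = np.deg2rad(sigma_n_deg)
        ratio = (1.0 - np.dot(n_i, n_j)) / (1.0 - np.cos(sigma))
        return np.exp(-ratio ** 2)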

3.4. Model Optimization

We optimize $n_i$ and $\tau_i$ subject to $\|n_i\| = 1$, using a minimization strategy defined as follows:
$$\min_{n_i, \tau} \sum_{q_j \in N_g(p_i)} | h_i - \tau_i | \, \psi(h_i, \tau_i) \, \theta(p_i - q_j) + \lambda \sum_{n_j \in N_g(n_i)} w_{i,j} \| n_i - n_j \|_1 .$$
This approach finds the optimal values of $n_i$ and $\tau_i$ by an alternating optimization strategy, as shown in Algorithm 1.
Algorithm 1. Model optimization
1: Initialization: $\tau^0 = 0$, $j = 0$
2: repeat
3:   repeat
4:     fix $\tau^k$, solve for $n^{k+1}$ as the minimum of Equation (6)
5:     fix $n^k$, solve for $\tau^{k+1}$ as the minimum of Equation (6)
6:     $p^{k+1} = p^k + \tau^k n^k$
7:   until $\| p^{k+1} - p^k \|_2^2 < \varepsilon$
8:   edge_points_correction()
9:   $j = j + 1$
10: until $j > j_{max}$
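For concreteness, a hedged Python skeleton of this alternating loop is sketched below; solve_n, solve_tau, and edge_points_correction are hypothetical per-point solvers standing in for the procedures of Sections 3.4.1–3.5.3, not the authors' implementation.

    import numpy as np

    def denoise(points, solve_n, solve_tau, edge_points_correction,
                eps=1e-5, j_max=20, k_max=50):
        """Skeleton of Algorithm 1: alternate n- and tau-updates, shift the
        points along their normals, then correct edge points each outer pass."""
        p = points.copy()
        tau = np.zeros(len(p))
        for _ in range(j_max):
            for _ in range(k_max):
                n = solve_n(p, tau)              # fix tau, minimize Eq. (6) over n
                tau = solve_tau(p, n)            # fix n, minimize Eq. (6) over tau
                p_new = p + tau[:, None] * n     # project points onto local planes
                converged = np.sum((p_new - p) ** 2) < eps
                p = p_new
                if converged:
                    break
            p = edge_points_correction(p, n)
        return p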
We solve the energy minimization problem with respect to $n$ with $\tau$ fixed. Since the minimization problem is non-differentiable due to the regularization term $E_{reg}$, we use the proximal gradient descent method [33] as the optimization strategy.

3.4.1. The $n^0$ Parameter Initialization

First, we estimate the initial normal for each $p_i \in P$ using only the fidelity term $E_f$, with $\tau = 0$ in the optimization. Similar to [34], we use the constraint $\|n\| = 1$ and compose the Lagrangian form of Equation (3), computing the derivative with respect to $n$ to obtain
$$L(n, \lambda) = \sum_{q_j \in N_g(p_i)} | h_i | \, \psi(h_i) \, \theta(p_i - q_j) + \frac{\lambda}{2} \left( 1 - \| n_i \|^2 \right)$$
$$\frac{\partial L}{\partial n}(n, \lambda) = \sum_{q_j \in N_g(p_i)} \omega_i (p_i - q_j)(p_i - q_j)^t n_i - \lambda n_i = 0$$
with weight $\omega_i = \psi(h_i) \, \theta(p_i - q_j) / | h_i |$.
We can see that the term $\omega_i (p_i - q_j)(p_i - q_j)^t$ in Equation (8) is a symmetric, positive-definite matrix (a weighted covariance matrix), and we can rewrite Equation (8), as a function of $n$, as
$$C_m(n) \, n = \lambda n$$
where $C_m(n) = \sum_{q_j \in N_g(p_i)} \omega_i (p_i - q_j)(p_i - q_j)^t$.
Equation (9) is an eigensystem and can be solved iteratively as follows:
$$C_m(n^k) \, n^{k+1} = \lambda^{k+1} n^{k+1}$$
where $\lambda^{k+1}$ is the smallest eigenvalue of $C_m(n^k)$ and $n^{k+1}$ is the corresponding orthonormal eigenvector. We start the initialization with $n^0 = 0$; i.e., $C_m(0) n^1 = \lambda^1 n^1$ is the first iteration.
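The sketch below illustrates this initialization: it builds the weighted covariance $C_m(n^k)$ over a neighborhood and takes the eigenvector of the smallest eigenvalue on each pass. The fallback for $|h_i| \approx 0$ mirrors the $\eta_i$ rule of Section 3.4.2 and is our assumption.

    import numpy as np

    def init_normal(p_i, neighbors, sigma_h, sigma_d, n_iter=10):
        """Iterate the eigensystem C_m(n^k) n^{k+1} = lambda^{k+1} n^{k+1}.

        neighbors: (m, 3) array of the points q_j in Ng(p_i). With n^0 = 0,
        the first pass reduces to plain distance-weighted PCA.
        """
        diffs = p_i - neighbors                          # rows are (p_i - q_j)
        theta = np.exp(-np.sum(diffs ** 2, axis=1) / sigma_d ** 2)
        n = np.zeros(3)
        for _ in range(n_iter):
            h = diffs @ n                                # heights over current plane
            psi = np.exp(-h ** 2 / sigma_h ** 2)
            w = np.where(np.abs(h) < 1e-3, theta,        # assumed fallback at h ~ 0
                         psi * theta / np.maximum(np.abs(h), 1e-12))
            C = (w[:, None] * diffs).T @ diffs           # weighted covariance C_m(n)
            eigvals, eigvecs = np.linalg.eigh(C)         # eigenvalues in ascending order
            n = eigvecs[:, 0]                            # eigenvector of the smallest
        return n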

3.4.2. Optimization of n

Keeping $\tau$ fixed to solve Equation (6), $\psi(h_i, \tau_i)$ is also treated as a constant, which is a practical way to keep the problem computationally tractable. The fidelity term $E_f$ then has gradient $\nabla E_f$ as follows:
$$\nabla E_f(n) = \sum_{q_j \in N_g(p_i)} \eta_i (h_i - \tau_i)(p_i - q_j)^t$$
with weight $\eta_i = \psi(h_i, \tau_i) \, \theta(p_i - q_j) / | h_i - \tau_i |$. $\eta_i$ is undefined when $h_i = \tau_i$; therefore, when $| h_i - \tau_i | < 10^{-3}$, we set $\eta_i = \theta(p_i - q_j)$.
Setting $d = n_i - n_j$, we define the proximal mapping (or operator) associated with a convex non-differentiable function $h(\cdot)$ as follows:
$$\mathrm{prox}_{h, \gamma}(d) = \arg\min_{z} \left( h(z) + \frac{1}{2\gamma} \| z - d \|_2^2 \right)$$
The proximal gradient descent iteration has the following form:
$$d^{k+1} = \mathrm{prox}_{h, \gamma} \left( d^k - \gamma \nabla E_f(n^k) \right)$$
where $\gamma > 0$ is a scalar step size, and $d^{k+1}$ is computed iteratively until convergence. The proximal operator corresponding to the L1-norm regularization term $E_{reg}$ is the following shrinkage or soft-thresholding function:
$$\mathrm{prox}_{h, \gamma}(d_i^c) = \begin{cases} d_i^c - \gamma \lambda w_{ij} & \text{if } d_i^c > \gamma \lambda w_{ij} \\ 0 & \text{if } | d_i^c | \le \gamma \lambda w_{ij} \\ d_i^c + \gamma \lambda w_{ij} & \text{if } d_i^c < -\gamma \lambda w_{ij} \end{cases}$$
where $d_i^c$ denotes each component of the vector $d_i$.
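In code, the soft-thresholding operator and one proximal gradient step of Equation (13) look like the sketch below; grad_Ef is a hypothetical function returning the gradient of Equation (11).

    import numpy as np

    def soft_threshold(d, t):
        """Componentwise shrinkage: prox of t * ||.||_1. Moves each component
        of d toward zero by t, zeroing it when its magnitude is at most t."""
        return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

    # One proximal gradient step on d = n_i - n_j (Eq. (13)):
    # d_next = soft_threshold(d - gamma * grad_Ef(n), gamma * lam * w_ij)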

3.4.3. Optimization of τ

With $n$ fixed, we solve Equation (6) for $\tau$; the fidelity term $E_f$ has gradient $\nabla E_f$:
$$\nabla E_f(\tau) = \sum_{q_j \in N_g(p_i)} \eta_i (h_i - \tau_i) = 0$$
Solving $\nabla E_f(\tau) = 0$ yields an iterative solution with the following local update equation:
$$\tau_i^{k+1} = \frac{\sum_{q_j \in N_g(p_i)} \eta_i h_i}{\sum_{q_j \in N_g(p_i)} \eta_i}$$
where $\eta_i = \psi(h_i, \tau_i) \, \theta(p_i - q_j) / | h_i - \tau_i |$. The parameters $n$ and $\tau$ are iteratively optimized using Equations (13) and (16) until convergence. As before, $\eta_i$ is undefined when $h_i = \tau_i$; therefore, when $| h_i - \tau_i | < 10^{-3}$, we set $\eta_i = \theta(p_i - q_j)$.
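One fixed-point pass of Equation (16) is then a weighted average of the neighborhood heights, as in the sketch below (the arrays h, psi_vals, and theta_vals hold the per-neighbor quantities and are our naming):

    import numpy as np

    def update_tau(h, psi_vals, theta_vals, tau, eps=1e-3):
        """One update of Eq. (16): tau becomes the eta-weighted average of h."""
        eta = np.where(np.abs(h - tau) < eps,
                       theta_vals,              # fallback when h ~ tau
                       psi_vals * theta_vals / np.maximum(np.abs(h - tau), 1e-12))
        return np.sum(eta * h) / np.sum(eta)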

3.5. Point Position Update and Point Border Correction

In the last stage of the denoising method, we update the vertex positions with the distance-based constraint proposed in [25], whereby the resulting point cloud $P'$ is bounded within a prescribed distance of the input point cloud.

3.5.1. Point Position Update

The authors in [25] propose a user-provided parameter $\varepsilon \in \mathbb{R}^+$ bounding the maximum deviation $d_i$ between the initial noisy point cloud and its corresponding iteratively denoised point $p_i' \in P'$. The updated point position $p_i'$ for our algorithm is determined as follows:
$$p_i' = \begin{cases} p_i + \tau_i n_i & \text{if } d_i \le \varepsilon \\ p_i & \text{if } d_i > \varepsilon \end{cases}$$
where $d_i$ is computed as the distance between $p_i'$ and the corresponding original point in the noisy point cloud. The parameter $\varepsilon$ is set to $4h$, i.e., $\varepsilon = 4h$.
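A vectorized sketch of this constrained update (assuming per-point displacements tau and unit normals n as arrays, with p_noisy holding the original input cloud):

    import numpy as np

    def update_positions(p, p_noisy, n, tau, eps_bound):
        """Constrained update of Eq. (17): a point moves only while its total
        deviation from the original noisy point stays within eps_bound."""
        p_new = p + tau[:, None] * n
        d = np.linalg.norm(p_new - p_noisy, axis=1)      # deviation d_i
        return np.where((d <= eps_bound)[:, None], p_new, p)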
To make our algorithm more robust against edge artifacts and blurring, we detect edge and corner points and correct their positions to obtain cleaner, better-defined borders.

3.5.2. Edge Points Detection

To detect sharp features in the point cloud, we follow the method proposed in [22], which uses the normals associated with each point in $P$ and measures the normal variability in the neighborhood; if the variability is lower than a predefined threshold $th$, the point is labeled as an edge point. The similarity between normal vectors $n_i$ and $n_j$ is defined as follows:
$$w_n(n_i, n_j) = \exp\left( -\frac{\| n_i - n_j \|^2}{2 \sigma_n^2} \right)$$
where $\sigma_n$ is an angle threshold. Using Equation (18), we define the normal variation in $N_g(n_i)$ as follows:
$$V_n(i) = \frac{1}{| N_g(n_i) |} \sum_{j \in N_g(n_i)} w_n(n_i, n_j) .$$
All points that satisfy $V_n(i) < th$ are labeled as edge points.
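The detection step can be sketched as follows, where neighbor_idx[i] holds the indices of the neighborhood of point i (an assumed precomputed structure):

    import numpy as np

    def detect_edge_points(normals, neighbor_idx, sigma_n, th):
        """Label points whose normal variation V_n(i) falls below th."""
        edge = np.zeros(len(normals), dtype=bool)
        for i, idx in enumerate(neighbor_idx):
            w = np.exp(-np.sum((normals[i] - normals[idx]) ** 2, axis=1)
                       / (2.0 * sigma_n ** 2))
            edge[i] = w.mean() < th              # low similarity => sharp feature
        return edge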

3.5.3. Edge Point Correction

After edge point detection, and taking advantage of the fact that the estimated normals near edges and corners belong to surfaces on one side or the other of the sharp feature, we propose a scheme to correct the positions of edge or corner points that deviate from the corner or border line. As shown in Figure 2, we find the closest point $p_j$ with normal vector $n_j$ on the surface opposite to the edge point $p_i$ and its normal $n_i$. Next, we project point $p_i$ onto the plane that contains point $p_j$ with normal $n_j$. The projection distance is computed as
$$d_{proj} = n_j \cdot (p_i - p_j) .$$
Figure 2. Edge points correction.
We only correct the positions of points that meet the following criterion:
$$p_{corr} = \begin{cases} p_i - d_{proj} \cdot n_j & \text{if } \delta < | d_{proj} | < \rho \cdot h \\ p_i & \text{otherwise} \end{cases}$$
where $h$ is the average distance between the points of the point cloud, $\rho$ is a fixed value representing the percentage of the maximum shift of point $p_i$ toward the edge line, and $\delta$ is a fixed value close to zero.
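A per-point sketch of the correction (p_j and n_j come from the opposite surface, found as described above):

    import numpy as np

    def correct_edge_point(p_i, p_j, n_j, h_avg, rho=0.7, delta=1e-4):
        """Project the edge point p_i onto the plane through p_j with normal
        n_j, but only for moderate deviations, per Equation (21)."""
        d_proj = np.dot(n_j, p_i - p_j)          # signed distance to the plane
        if delta < abs(d_proj) < rho * h_avg:
            return p_i - d_proj * n_j            # snap onto the plane
        return p_i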

4. Experimental Results and Discussion

The proposed method was implemented in MATLAB and run on a laptop with a 2.20 GHz Intel Core i7-2670QM CPU and 8 GB of RAM. We tested the method on several point clouds with sharp features and smooth surfaces, including irregular sampling. Both synthetic and real scanned noisy point clouds were used to validate our method. The synthetic models were contaminated with Gaussian noise and impulsive noise along the normal directions or random directions. Different levels of Gaussian noise with zero mean and standard deviation $\sigma$ were applied to the models; the standard deviation was proportional to the average distance between the points of the ground-truth point clouds. The noise in the raw scanned data was natural. We compared our method with eight state-of-the-art denoising approaches: two MLS-based methods, APSS [8] and RIMLS [9]; one LOP-based method, EAR [12]; one sparsity-based method, MRPCA [3]; one graph-based method, GLR [19]; and three normal smoothing-based methods, CNV [25], RN [24], and GN [22]. APSS and RIMLS were run in the MeshLab software. The GLR code and EAR software were provided by the authors, as were the results of the MRPCA, CNV, RN, and GN methods.

4.1. Parameter Selection and Tuning

Our method has seven parameters: the sparsity parameter $\lambda$, the height sensitivity $\sigma_h$, the distance action range $\sigma_d$, the displacement bound $\rho$, the lower bound $\delta$, the neighborhood radius $r$, and the total number of iterations $k$.
The sparsity regularization parameter $\lambda$ depends on the desired gradient sparsity level and affects the reconstruction of sharp features and the smoothness of the point cloud. A larger $\lambda$ yields a smoother result. We observed that $\lambda = 0.2$ worked well for all the point sets used in the experiments.
The displacement bound $\rho$ was fixed at $\rho = 0.7$ throughout all experiments, and $\delta$ was fixed at $\delta = 10^{-4}$. The parameter $h$ is the average distance between points; we computed $h$ from the six nearest neighbors of each point. The distance action range $\sigma_d$ and the height sensitivity $\sigma_h$ are user-defined values given in terms of $h$. In all the experiments, the neighborhood radius $r$ was set to $\sigma_d$, i.e., $r = \sigma_d$; a smaller value of $\sigma_d$ leads to faster computation because the neighborhood $N_g(p_i)$ is small, while large values may cross sharp features and over-smooth the results. Alternatively, $r = \sigma_d$ can be chosen as a function of the local point density. In the experiments, we kept this parameter constant, tuned to achieve visually appealing results. The height sensitivity $\sigma_h$ controls the influence of outliers in the point cloud; small values of $\sigma_h$ preserve the model features, while large values preserve only the salient features.
The values of $\sigma_h$ and $\sigma_d$ depend on the level of noise: the higher the noise level, the larger these parameters should be. We used $\sigma_d \in \{1.5h, 2h, 3h, 4h\}$ and $\sigma_h \in \{0.5h, 0.7h, 0.9h\}$ for synthetic data, and $\sigma_d \in \{1.5h, 2h\}$ and $\sigma_h \in \{0.1h, 0.2h, 0.3h\}$ for real scanned point clouds. The parameter values differ between synthetic and real models because the level of noise in the real models was lower than in the synthetic ones. The number of iterations $k$ for the best results was set at $k \in \{10, 16, 20, 50\}$. In the end, our algorithm has only three parameters to tune ($\sigma_h$, $\sigma_d$, $k$).
In our comparison experiments, we used the following parameter sets for the eight selected state-of-the-art methods. For the methods [3,12,22,24], Table 1 lists "Default" because the corresponding smoothed models were provided by the authors. For [8], we report the tuple (scale, # of iterations, $\alpha$); for [9], ($\sigma_r$, $\sigma_n$); for [12], the default values; for [25], ($\tau$, $\rho$, $p$); and for [19], the parameter settings from their paper. For our method, we report ($\sigma_h$, $\sigma_d$, $k$).
Table 1. Parameter settings of comparative methods for different models.

4.2. Quantitative Analysis

We used the following three metrics in our quantitative analysis: feature preservation, accuracy, and signal-to-noise ratio.

4.2.1. Error Metrics

To quantify feature preservation, we measured the orientation error between the smoothed point cloud and the ground truth. The mean angular deviation (MAD) is defined as follows:
$$MAD = \frac{1}{n} \sum_{i=0}^{n-1} \angle(\bar{n}_i, \hat{n}_i)$$
where $\bar{n}_i$ and $\hat{n}_i$ are the point normals of the ground truth and the smoothed point cloud, respectively.
To quantify accuracy, i.e., the closeness between the ground-truth model and the smoothed model, we used the mean-squared error (MSE), which averages the squared Euclidean distances from each ground-truth point to its closest denoised point and, vice versa, from each denoised point to its closest ground-truth point; the MSE is the average of these two measures. If the ground-truth model and the smoothed model are $P_1 = \{p_i\}_{i=1,\dots,n_1}$ and $P_2 = \{q_j\}_{j=1,\dots,n_2}$, the point clouds can be of different sizes, i.e., $n_1 \ne n_2$. The MSE is defined as follows:
$$MSE = \frac{1}{2 n_1} \sum_{p_i \in P_1} \min_{q_j \in P_2} \| p_i - q_j \|_2^2 + \frac{1}{2 n_2} \sum_{q_i \in P_2} \min_{p_j \in P_1} \| q_i - p_j \|_2^2$$
Finally, the signal-to-noise ratio (SNR), measured in dB, is defined as follows:
$$SNR = 10 \log_{10} \frac{\frac{1}{n_2} \sum_{q_i \in P_2} \| q_i \|_2^2}{MSE}$$
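For reference, the three metrics can be computed as in the sketch below (our illustration using SciPy's k-d tree; for MAD, the normals are assumed to be in one-to-one correspondence):

    import numpy as np
    from scipy.spatial import cKDTree

    def mad_deg(normals_gt, normals_dn):
        """Mean angular deviation between corresponding unit normals, in degrees."""
        cos = np.clip(np.sum(normals_gt * normals_dn, axis=1), -1.0, 1.0)
        return np.degrees(np.mean(np.arccos(cos)))

    def mse(p1_gt, p2_dn):
        """Symmetric MSE as defined above: average squared nearest-neighbor
        distance in both directions between the two point sets."""
        d_fwd, _ = cKDTree(p2_dn).query(p1_gt)   # ground truth -> denoised
        d_bwd, _ = cKDTree(p1_gt).query(p2_dn)   # denoised -> ground truth
        return 0.5 * np.mean(d_fwd ** 2) + 0.5 * np.mean(d_bwd ** 2)

    def snr_db(p1_gt, p2_dn):
        """SNR in dB: mean squared point magnitude of the denoised cloud over MSE."""
        signal = np.mean(np.sum(p2_dn ** 2, axis=1))
        return 10.0 * np.log10(signal / mse(p1_gt, p2_dn))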

4.2.2. Metric Evaluation

Table 2 and Table 3 show the comparison between our method and the competing state-of-the-art methods, with cells highlighted in gray indicating the best performance. In Table 2, we can also see that our method obtains the best MSE and SNR values compared to the eight state-of-the-art methods. For the Fandisk model, the proposed method ranked second in all three metrics. For the Rocker Arm, our method outperformed the state-of-the-art methods in the MAD metric, while in MSE and SNR it outperformed the APSS method. For the Octahedron model, our method outperformed the state-of-the-art methods in all metrics.
Table 2. Error metrics by comparative methods for each one of the 3D objects.
Table 3. The results of the error metrics of different compared methods for two Block and Trim-star objects varying the noise levels.
Table 3 shows a comparison with four state-of-the-art methods at three different levels of noise, i.e., $\sigma = 0.1h$, $\sigma = 0.2h$, and $\sigma = 0.3h$. We can see that the MAD grows as the noise increases: the higher the noise level, the larger the orientation error. Table 3 shows that the proposed method achieved the best results for the MAD metric at all levels of noise, although at noise level $\sigma = 0.1h$, RIMLS achieved the best MSE and SNR. For noise levels $\sigma = 0.2h$ and $\sigma = 0.3h$, our method outperformed the competing methods. Across all the experiments, the proposed method achieved the best results on average in all three metrics and at all noise levels. The visual results of the experiment are shown in Figure 3. For each object, we illustrate the original model, the noisy model, the result of the best-performing comparison method, and the result of our approach.
Figure 3. The first column shows the model from the original data. The second column shows the model corrupted by Gaussian noise. The third column shows the most accurate result among the comparison methods according to the MSE metric. The fourth column shows the result produced by the proposed method. The Gaussian noise levels and the comparison methods are (a) Cube, $\sigma = 0.3h$, method CNV; (b) Fandisk, $\sigma = 0.28h$, method RN; (c) Rocker Arm, $\sigma = 0.3h$; and (d) Octahedron, $\sigma = 0.3h$.

4.3. Qualitative Analysis

For visual comparison, we used the ball pivoting algorithm (BPA) [35] to reconstruct the mesh from the smoothed point cloud. The point clouds were contaminated with Gaussian noise along random or normal directions, and with impulsive noise along random directions.

Irregular Point Clouds

The Cube and the Fandisk objects had non-uniformly distributed points corrupted by Gaussian noise in random directions ($\sigma = 0.3h$ and $\sigma = 0.28h$, respectively). Figure 4 shows that our method preserved the sharp features without producing bumps in flat areas, unlike the APSS [8], RIMLS [9], and EAR [12] methods. The GLR [19], MRPCA [3], RN [24], and GN [22] methods cleaned the noise effectively over flat regions but over-smoothed the corners and edges. CNV [25] properly reconstructed the sharp features and cleaned the flat areas, but small artifacts appeared in some corners. As seen in Figure 5, our method could reconstruct both sharp and shallow features. APSS smoothed around the sharp features and did not remove the noise correctly. RIMLS and EAR preserved sharp features but produced some bump artifacts in the resulting models. MRPCA removed the noise and preserved some sharp features but smoothed shallow areas around flat regions. GLR removed noise effectively but over-smoothed the sharp features and shallow areas. GN, RN, and CNV produced output similar to our method, but with some artifacts on the borders and in the corners.
Figure 4. The Cube model with a non-uniform distribution of points, corrupted by Gaussian noise ($\sigma = 0.3h$) along all directions, where $h$ is the average distance between the points of the point cloud. We can see that the proposed method was able to preserve sharp features effectively when compared to the state-of-the-art methods. The surface was reconstructed using the ball pivoting algorithm.
Figure 5. The Fandisk model with a non-uniform distribution of points, corrupted by Gaussian noise ($\sigma = 0.28h$). We can see that the proposed method was able to preserve sharp features effectively when compared to the state-of-the-art methods.

4.4. Impulsive Noise

Figure 6 shows the results of handling a point cloud corrupted by impulsive noise of $\sigma = 0.5h$ along the normal direction. The Twelve model was smoothed by the proposed method and its edges were preserved. The RIMLS, APSS, and EAR methods were not able to smooth the noise properly and reconstruct the edges.
Figure 6. The Twelve model corrupted by impulsive noise ($\sigma = 0.5h$) in the normal direction. We can see that the proposed method was able to preserve sharp features effectively when compared to the state-of-the-art methods. The surface was reconstructed using the ball pivoting algorithm.

4.5. Natural Noise of 3D Scan Objects

We also compared the approaches using real scanned data of free-form objects. Figure 7 shows the results of the different methods applied to raw scan data. On the Rabbit object, our method effectively removed the noise while preserving features, compared to APSS, RIMLS, and EAR.
Figure 7. The Rabbit model, with natural noise. We can see that the proposed method was able to preserve sharp features effectively when compared to the state-of-the-art methods. The surface was reconstructed using the ball pivoting algorithm.
GLR and RN preserved features but lost some fine details, such as the eye and the grooves in the ear. GN and CNV preserved more detail than any of the other methods, but they still lost details in the eye and nose. MRPCA and the proposed method produced very similar results. Figure 8 shows the ball joint medical data. We can see that our method removed the noise while details and sharp features were preserved, and the spherical shape was effectively smoothed. The APSS and RIMLS methods were not able to smooth the noise properly, and the resulting surfaces present bumps. MRPCA, GN, RN, and GLR effectively removed the noise component but smoothed out the sharp features. The EAR and CNV methods produced noise-free results, preserving the sharp features and smoothing the surfaces.
Figure 8. The ball joint model. We can see that the proposed method was able to preserve sharp features effectively when compared to the state-of-the-art methods. The surface was reconstructed using the ball pivoting algorithm.

5. Conclusions

In this paper, we proposed combining the L1-median filter and L1-norm regularization in a point cloud denoising algorithm that preserves sharp features. The algorithm uses double sparsity modeling, in both the fitting term and the regularization term. The L1-median is insensitive to outliers and noise, while the L1 norm preserves the sharp features and smooths the surface. The combined L1-median and L1-norm cost function was optimized with an alternating minimization strategy using a proximal gradient descent iterative scheme, allowing a simple implementation. The proposed method can handle models contaminated with Gaussian and impulsive noise, although very high noise levels can produce erroneous results, as they affect the normal estimation. Another issue concerns irregularly sampled models: while the sampling irregularity remains low, our algorithm produces good results, but when the point cloud density is highly irregular, the output quality decreases. To recover the sharp features, we introduced a border correction procedure that helps to correct edges and corners, preserving the models' original sharp features.
Experimental results reveal that our proposal preserves sharp features when compared to previous point cloud denoising methods, and the algorithm is robust in denoising both synthetic and raw point scans. The method depends on some empirical parameters, $\sigma_h$ and $\sigma_d$, defined by the user, which we tuned manually to obtain the desired results. How to determine these parameters automatically remains a challenge and is a direction for our future research. The implementation of a global solution of the cost function is another issue to be examined in future work.

Author Contributions

All authors have read and agreed to the published version of the manuscript. Conceptualization, E.L.; methodology, E.L., G.S.-T., and J.W.B.; software, E.L.; validation, E.L., G.S.-T., and J.W.B.; formal analysis, E.L.; investigation, E.L.; writing—original draft preparation, E.L.; writing—review and editing, G.S.-T.; supervision, G.S.-T. and J.W.B.

Funding

This research was supported by the Administrative Department of Science and Technology of Colombia-COLCIENCIAS under the doctoral scholarship program COLCIENCIAS 2015-727 and by The Universidad del Magdalena under research fund FONCIENCIAS.

Acknowledgments

We would like to thank Sunil Kumar Yadav for supplying part of the 3D models used in this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Sun, Y.; Schaefer, S.; Wang, W. Denoising point sets via L0 minimization. Comput. Aided Geom. Des. 2015, 35–36, 2–15. [Google Scholar] [CrossRef]
  2. Avron, H.; Sharf, A.; Greif, C.; Cohen-Or, D. ℓ1 Sparse reconstruction of sharp point set surfaces. ACM Trans. Graph. 2010, 29, 135:1–135:12. [Google Scholar] [CrossRef]
  3. Mattei, E.; Castrodad, A. Point cloud denoising via moving RPCA. Comput. Graph. Forum 2017, 36, 123–137. [Google Scholar] [CrossRef]
  4. Sarkar, K.; Bernard, F.; Varanasi, K.; Theobalt, C.; Stricker, D. Structured low-rank matrix factorization for point-cloud denoising. In Proceedings of the 2018 International Conference on 3D Vision (3DV), Verona, Italy, 5–8 September 2018; pp. 444–453. [Google Scholar] [CrossRef]
  5. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theor. 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  6. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  7. Alexa, M.; Behr, J.; Cohen-Or, D.; Fleishman, S.; Levin, D.; Silva, C.T. Computing and rendering point set surfaces. IEEE Trans. Vis. Comput. Graph. 2003, 9, 3–15. [Google Scholar] [CrossRef]
  8. Guennebaud, G.; Gross, M. Algebraic point set surfaces. ACM Trans. Graph. 2007, 26, 23-es. [Google Scholar] [CrossRef]
  9. Öztireli, A.C.; Guennebaud, G.; Gross, M. Feature preserving point set surfaces based on non-linear kernel regression. Comput. Graph. Forum 2009, 28, 493–501. [Google Scholar] [CrossRef]
  10. Lipman, Y.; Cohen-Or, D.; Levin, D.; Tal-Ezer, H. Parameterization-free projection for geometry reconstruction. In ACM SIGGRAPH 2007 Papers; ACM: New York, NY, USA, 2007. [Google Scholar] [CrossRef]
  11. Huang, H.; Li, D.; Zhang, H.; Ascher, U.; Cohen-Or, D. Consolidation of unorganized point clouds for surface reconstruction. ACM Trans. Graph. 2009, 28, 1–7. [Google Scholar] [CrossRef]
  12. Huang, H.; Wu, S.; Gong, M.; Cohen-Or, D.; Ascher, U.; Zhang, H. Edge-aware point set resampling. ACM Trans. Graph. 2013, 32, 9:1–9:12. [Google Scholar] [CrossRef]
  13. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65. [Google Scholar] [CrossRef]
  14. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  15. Deschaud, J.-E.; Goulette, F. Point cloud non local denoising using local surface descriptor similarity. In IAPRS, Vol. XXXVIII, Part 3A–Saint-Mandé, France, 1–3 September 2010; Paparoditis, N., Pierrot-Deseilligny, M., Mallet, C., Tournaire, O., Eds.; Institut Geographique National, Laboratoire MATIS: Saint-Mandé, France, 2010; pp. 109–114. [Google Scholar]
  16. Rosman, G.; Dubrovina, A.; Kimmel, R. Patch-collaborative spectral point-cloud denoising: Patch-collaborative spectral point-cloud denoising. Comput. Graph. Forum 2013, 32, 1–12. [Google Scholar] [CrossRef]
  17. Schoenenberger, Y.; Paratte, J.; Vandergheynst, P. Graph-based denoising for time-varying point clouds. In Proceedings of the 2015 3DTV-Conference: The True Vision-Capture, Transmission and Display of 3D Video (3DTV-CON), Lisbon, Portugal, 8–10 July 2015; pp. 1–4. [Google Scholar] [CrossRef]
  18. Dinesh, C.; Cheung, G.; Bajic, I.V.; Yang, C. Fast 3D point cloud denoising via bipartite graph approximation & total variation. arXiv 2018, arXiv:1804.10831. [Google Scholar]
  19. Zeng, J.; Cheung, G.; Ng, M.; Pang, J.; Yang, C. 3D point cloud denoising using graph laplacian regularization of a low dimensional manifold model. IEEE Trans. Image Process. 2020, 29, 3474–3489. [Google Scholar] [CrossRef] [PubMed]
  20. Duan, C.; Chen, S.; Kovacevic, J. Weighted multi-projection: 3d point cloud denoising with tangent planes. In Proceedings of the 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP), Anaheim, CA, USA, 26–29 November 2018; pp. 725–729. [Google Scholar] [CrossRef]
  21. Leal, E.; Leal, N. Point cloud denoising using robust principal component analysis. In Proceedings of the First International Conference on Computer Graphics Theory and Applications, Setúbal, Portugal, 25–28 February 2006; pp. 51–58. [Google Scholar] [CrossRef]
  22. Zheng, Y.; Li, G.; Wu, S.; Liu, Y.; Gao, Y. Guided point cloud denoising via sharp feature skeletons. Vis. Comput. 2017, 33, 857–867. [Google Scholar] [CrossRef]
  23. Huang, H.; Wu, S.; Cohen-Or, D.; Gong, M.; Zhang, H.; Li, G.; Chen, B. L1-medial skeleton of point cloud. ACM Trans. Graph. 2013, 32, 65:1–65:8. [Google Scholar] [CrossRef]
  24. Zheng, Y.; Li, G.; Xu, X.; Wu, S.; Nie, Y. Rolling normal filtering for point clouds. Comput. Aided Geom. Des. 2018, 62, 16–28. [Google Scholar] [CrossRef]
  25. Yadav, S.K.; Reitebuch, U.; Skrodzki, M.; Zimmermann, E.; Polthier, K. Constraint-based point set denoising using normal voting tensor and restricted quadratic error metrics. Comput. Graph. 2018, 74, 234–243. [Google Scholar] [CrossRef]
  26. Xu, L.; Wang, R.; Zhang, J.; Yang, Z.; Deng, J.; Chen, F.; Liu, L. Survey on sparsity in geometric modeling and processing. Graph. Models 2015, 82, 160–180. [Google Scholar] [CrossRef]
  27. Ng, A.Y. Feature Selection, L1 vs. L2 Regularization, and Rotational Invariance. In Proceedings of the Twenty-First International Conference on Machine Learning; ACM: New York, NY, USA, 2004; p. 78. [Google Scholar] [CrossRef]
  28. Liu, Q.; Wang, J. L1-minimization algorithms for sparse signal reconstruction based on a projection neural network. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 698–707. [Google Scholar] [CrossRef]
  29. Elad, M.; Starck, J.-L.; Querre, P.; Donoho, D.L. Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA). Appl. Comput. Harmon. Anal. 2005, 19, 340–358. [Google Scholar] [CrossRef]
  30. Chen, S.; Donoho, D. Basis pursuit. In Proceedings of the 1994 28th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 31 October–2 November 1994; Volume 1, pp. 41–44. [Google Scholar] [CrossRef]
  31. Donoho, D.L.; Elad, M.; Temlyakov, V.N. Stable recovery of sparse overcomplete representations in the presence of noise. IEEE Trans. Inf. Theory 2006, 52, 6–18. [Google Scholar] [CrossRef]
  32. Rudin, L.I.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Physica D 1992, 60, 259–268. [Google Scholar] [CrossRef]
  33. Parikh, N.; Boyd, S. Proximal Algorithms. Found. Trends Optim. 2014, 1, 127–239. [Google Scholar] [CrossRef]
  34. Mederos, B.; Velho, L.; de Figueiredo, L.H. Robust smoothing of noisy point clouds. In Proc. SIAM Conference on Geometric Design and Computing; SIAM: Philadelphia, PA, USA, 2003. [Google Scholar]
  35. Bernardini, F.; Mittleman, J.; Rushmeier, H.; Silva, C.; Taubin, G. The ball-pivoting algorithm for surface reconstruction. IEEE Trans. Vis. Comput. Graph. 1999, 5, 349–359. [Google Scholar] [CrossRef]
