Proceeding Paper

The Point Cloud Reduction Algorithm Based on the Feature Extraction of a Neighborhood Normal Vector and Fuzzy-c Means Clustering †

School of Internet of Things, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
*
Author to whom correspondence should be addressed.
Presented at the 31st International Conference on Geoinformatics, Toronto, ON, Canada, 14–16 August 2024.
Proceedings 2024, 110(1), 13; https://doi.org/10.3390/proceedings2024110013
Published: 3 December 2024
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)

Abstract

The three-dimensional model of geographic elements serves as the primary medium for digital visualization. However, the original point cloud model is often vast and includes considerable redundant data, resulting in inefficiencies during the three-dimensional modeling process. To address this issue, this paper proposes a point cloud reduction algorithm that leverages neighborhood normal vectors and fuzzy c-means (FCM) clustering for feature extraction. The algorithm first extracts the edge points of the model and then uses neighborhood normal vectors to extract the model's overall feature points. Next, using point cloud curvature, coordinate information, and geometric attributes, the algorithm applies FCM clustering to isolate local feature points. Non-feature points are then sampled using an enhanced farthest point sampling technique. Finally, the algorithm merges the edge points, feature points, and non-feature points to produce the simplified point cloud data. The proposed algorithm is compared with traditional methods, including the uniform grid, random sampling, and curvature sampling methods, and the simplified point cloud is evaluated in terms of reduction level and reconstruction time. The approach effectively preserves the critical feature information of the point cloud data while removing the redundancy inherent in the original point cloud model.

1. Introduction

With the advancement of 3D laser scanning technology [1], point cloud data have found wide-ranging applications across various fields. In computer vision, they facilitate tasks such as 3D reconstruction [2] and object recognition [3]. In robotics, point cloud data aid in obstacle detection [4] and navigation [5]. Moreover, point cloud data serve as a crucial data source for constructing 3D models of geographic spatial elements. However, the original point cloud data often contain redundant information, leading to inefficiencies in the 3D modeling of geographic spatial elements. Thus, simplifying point cloud data has become a critical necessity.
Scholars worldwide have conducted extensive research on point cloud reduction. Traditional methods include the bounding box method [6], random sampling [7], curvature sampling [8], farthest point sampling [9], clustering [10], and local entropy methods [11]. Li et al. [12] used the binary K-means clustering algorithm to simplify point clouds based on their curvature, effectively extracting feature points while omitting non-feature points; however, this approach may generate numerous gaps in models with extensive flat areas. Li et al. [13] proposed a uniform reduction algorithm for scattered point clouds, which preserves the feature information of the original data but faces challenges in adapting the voxel dimensions. Hu et al. [14] introduced the hyperbolic tangent function for down-sampling feature points near voxel centroids, achieving effective point cloud simplification with smoother surfaces and fewer gaps, though balancing the degree of reduction remains a consideration. Leal et al. [15] simplified point cloud data by estimating the local point cloud density, but their method relied solely on curvature for feature extraction, resulting in suboptimal simplification outcomes. Markovic et al. [16] employed a method based on support vector regression to identify feature points, which is sensitive to the error thresholds and control limits set for the point cloud characteristics.
To address the limitations of the aforementioned algorithms, this paper proposes a point cloud reduction algorithm based on neighborhood normal vector feature extraction and FCM clustering. The algorithm begins by employing the farthest point sampling method to extract edge points from the model. It then constructs a KD-tree over the remaining point cloud data and uses neighborhood normal vectors to extract overall feature points. Point cloud curvature, coordinate information, and geometric attributes serve as clustering features for the FCM algorithm, which extracts local feature points. Finally, edge points, overall feature points, and local feature points are merged to achieve the point cloud simplification. The algorithm is conceptually simple, effectively retains the key feature information of the point cloud data, and significantly reduces data redundancy, substantially improving the efficiency of 3D reconstruction. The algorithm flow is shown in Figure 1.
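As a rough illustration of the farthest point sampling step mentioned above, the Python sketch below greedily selects the point farthest from the set already chosen. The function name and the choice of starting point are our own; the paper's "enhanced" variant may differ in details it does not specify.

```python
import numpy as np

def farthest_point_sampling(points, n_samples):
    """Greedy farthest point sampling: starting from an arbitrary point,
    repeatedly add the point whose distance to the already-selected set
    is largest."""
    points = np.asarray(points, dtype=float)
    selected = [0]  # arbitrary seed point
    # distance from every point to the nearest selected point
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_samples - 1):
        idx = int(np.argmax(dist))
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return points[selected]
```

Because each new sample maximizes its distance to the current set, the result covers the model's extremities first, which is why it is well suited to picking out edge points.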

2. Method

2.1. Global Feature Point Extraction Based on the Neighborhood Normal Vector

Overall feature points play a prominent role in point cloud reduction. Among the geometric properties of point cloud data, the normal vector [17] is especially useful for identifying areas where the surface changes substantially. This study uses normal vector features to extract the overall feature points of the point cloud model. First, a neighborhood is defined for each point in the cloud: a KD-tree [18] is used to locate the k nearest neighbors of a selected target point P. The target point and its neighbors are then fitted to a local surface, and the covariance matrix of the neighborhood set is computed to obtain its eigenvalues and eigenvectors. Following the Principal Component Analysis (PCA) method [19], the eigenvector corresponding to the smallest eigenvalue is taken as the normal vector of the neighborhood at P. The angle between the normal vector of P and the normal vectors of the points in its neighborhood is then compared against a predefined threshold. If the angle exceeds the threshold, the geometry near the target point changes significantly, and P is marked as an overall feature point; if the angle stays below the threshold, the geometry near P varies little, and P is excluded. Overall feature points are thus identified by this threshold.
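The normal estimation and angle test described above can be sketched in Python as follows. This is an illustrative implementation under our own assumptions: the function names, the default k, and the use of the unsigned angle (to sidestep the sign ambiguity of PCA normals) are choices of this sketch, not specifications from the paper.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_normals(points, k=10):
    """Estimate a unit normal per point by PCA on its k nearest neighbors:
    the eigenvector of the neighborhood covariance matrix with the
    smallest eigenvalue."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nbrs in enumerate(idx):
        nbhd = points[nbrs] - points[nbrs].mean(axis=0)
        cov = nbhd.T @ nbhd
        _, eigvecs = np.linalg.eigh(cov)  # eigenvalues ascending
        normals[i] = eigvecs[:, 0]        # smallest-eigenvalue direction
    return normals

def global_feature_points(points, k=10, angle_threshold=np.pi / 3):
    """Flag a point as an overall feature point if the (unsigned) angle
    between its normal and any neighbor's normal exceeds the threshold."""
    points = np.asarray(points, dtype=float)
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = neighborhood_normals(points, k)
    mask = np.zeros(len(points), dtype=bool)
    for i, nbrs in enumerate(idx):
        cos = np.clip(normals[nbrs] @ normals[i], -1.0, 1.0)
        ang = np.arccos(np.abs(cos))  # unsigned angle in [0, pi/2]
        if ang.max() > angle_threshold:
            mask[i] = True
    return mask
```

On a perfectly flat patch all neighborhood normals are parallel, so no point is flagged; near a sharp crease the neighbor normals diverge and the angle test fires.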

2.2. Local Feature Point Extraction Based on FCM Clustering Algorithm

FCM clustering is a popular clustering method based on the geometric proximity of data points in Euclidean space [20]: it assigns data to clusters according to the distances between the points and the cluster centers. The FCM algorithm is currently among the most widely used and successful clustering algorithms. Assume n data samples X = {x1, x2, ···, xn}; let c (2 ≤ c ≤ n) be the number of cluster centers, {B1, B2, ···, Bc} the corresponding c cluster categories, U the fuzzy membership (similarity classification) matrix, v = {v1, v2, ···, vc} the cluster centers, and u_k(x_i) the membership of sample x_i in class B_k. The objective function J_b can then be expressed as follows:
J_b(U, v) = \sum_{i=1}^{n} \sum_{k=1}^{c} [u_k(x_i)]^b D_{ik}^2    (1)
where D_{ik} = \|x_i - v_k\| = \sqrt{\sum_{j=1}^{m} (x_{ij} - v_{kj})^2} is the Euclidean distance between sample x_i and cluster center v_k; i indexes the data samples (1 ≤ i ≤ n) and k the clusters (1 ≤ k ≤ c); m is the number of features in each sample; and b > 1 is the degree of fuzzification. The memberships of each sample across all clusters must sum to 1, that is, Formula (2) must be satisfied:
\sum_{k=1}^{c} u_k(x_i) = 1, \quad i = 1, 2, \ldots, n    (2)
Formulas (3) and (4) compute the membership u_k(x_i) of sample x_i in class B_k and the c cluster centers {v_k}, respectively.
u_k(x_i) = \left[ \sum_{j=1}^{c} \left( \frac{D_{ik}}{D_{ij}} \right)^{2/(b-1)} \right]^{-1}    (3)
v_k = \frac{\sum_{i=1}^{n} [u_k(x_i)]^b \, x_i}{\sum_{i=1}^{n} [u_k(x_i)]^b}    (4)
Formulas (3) and (4) are iterated until a convergence condition is met; at that point the cluster centers and membership degrees have been repeatedly refined and the fuzzy partition is complete. This paper uses the point cloud coordinates and geometric feature information to construct a multidimensional data structure, performs FCM clustering on this multidimensional point cloud, and sets a threshold on the average curvature to extract local feature points, retaining the remaining points as non-feature points.
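Formulas (1)–(4) reduce to a compact alternating update. The sketch below is a generic FCM implementation; the parameter names, random initialization, and convergence test are our choices rather than the paper's. In the paper's setting, each row of X would concatenate a point's coordinates with its curvature and other geometric features.

```python
import numpy as np

def fcm(X, c, b=2.0, n_iter=100, tol=1e-5, seed=0):
    """Fuzzy c-means: alternate the center update (Formula 4) and the
    membership update (Formula 3) until the memberships stop changing."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)  # each sample's memberships sum to 1 (Formula 2)
    for _ in range(n_iter):
        Ub = U ** b
        V = (Ub @ X) / Ub.sum(axis=1, keepdims=True)               # Formula (4)
        D = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # D[k, i]
        D = np.maximum(D, 1e-12)  # avoid division by zero at a center
        ratio = (D[:, None, :] / D[None, :, :]) ** (2.0 / (b - 1.0))
        U_new = 1.0 / ratio.sum(axis=1)                            # Formula (3)
        converged = np.max(np.abs(U_new - U)) < tol
        U = U_new
        if converged:
            break
    return U, V
```

After convergence, points could be assigned to the cluster with the highest membership (`U.argmax(axis=0)`), and a curvature threshold applied within each cluster to pick the local feature points, as the paper describes.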

3. Results

To verify the effectiveness of the proposed point cloud reduction algorithm, the classic “bunny” point cloud dataset from Stanford University, consisting of 35,947 points, is used as the experimental data. The algorithm is implemented in Python and compared with the uniform grid method, random sampling method, and curvature sampling method. Because the setting of the threshold θ affects the extraction of the overall feature points, normal vector angle thresholds of π/3, π/2, and 2π/3 are tested; the extracted overall feature points are shown in Figure 2. As Figure 2 shows, the threshold significantly affects the number of retained points: a larger threshold yields fewer extracted points, which can lose crucial information from the point cloud model. When the threshold is set below π/2, more points are classified as feature points; while this captures additional detail, it also retains an excessive number of points, compromising the effectiveness of the simplification and reducing the efficiency of the subsequent processing. This paper therefore sets the threshold to π/2 for extracting the overall feature points. Figure 3 shows the reduction results for the bunny model using the proposed algorithm and the traditional algorithms. Table 1 compares the efficiency of the proposed algorithm on the bunny model with the uniform grid, random sampling, and curvature sampling methods.
The following can be seen from Figure 3. The uniform grid method yields a relatively regular result but loses some features in feature-rich areas, so its ability to retain detail is inferior to the proposed method. The random sampling method produces a scattered result with poorly preserved detail and considerable missing feature information, which may cause feature loss in subsequent 3D modeling. The curvature sampling method effectively retains the feature information of the point cloud model, but holes may appear in flat areas. The proposed method performs well in both feature and non-feature areas: at roughly the same reduction rate, it maximizes the retention of the model's feature information and detail while preventing holes.
From Table 1, it is evident that at similar reduction rates, although the proposed algorithm requires more time for the reduction itself than the traditional methods, the reconstruction times are broadly comparable. This paper proposes a point cloud reduction algorithm based on neighborhood normal vectors and FCM clustering for feature extraction. It uses normal vector features to divide the point cloud into feature and non-feature points, simplifies the point cloud on this basis, and then performs three-dimensional reconstruction. Compared with traditional point cloud reduction methods, the algorithm improves the uniformity of the simplified point cloud data and effectively preserves the detailed features of the reconstructed 3D model. It also reduces processing time, improves reconstruction efficiency, eliminates redundant data, and enhances 3D model accuracy. These advances have practical significance for 3D reconstruction.

4. Conclusions

This paper proposes a point cloud reduction algorithm based on neighborhood normal vector and FCM clustering feature extraction. The algorithm uses normal vector features to divide the point cloud into feature points and non-feature points, simplifies the point cloud on this basis, and reconstructs the simplified point cloud in three dimensions. Compared with traditional point cloud reduction methods, the algorithm improves the uniformity of the simplified point cloud data, effectively retains the detailed features of the three-dimensional model after reconstruction, improves efficiency, removes a large amount of redundant data, and improves the accuracy of the three-dimensional model. It has practical value in the field of three-dimensional reconstruction.

Author Contributions

H.X.: Writing—original draft, Methodology, Software, Resources, Data curation, Supervision, Formal Analysis, Visualization; D.J.: Conceptualization, Writing—original draft, Methodology, Investigation, Writing—review & editing; W.L.: Conceptualization, Software, Resources, Writing—Review & Editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors upon request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, K.T.; Chang, J.R.; Liu, J.K. Detection of pavement distresses using 3D laser scanning technology. Comput. Civ. Eng. 2005, 2005, 1–11. [Google Scholar]
  2. Mouragnon, E.; Lhuillier, M.; Dhome, M.; Dekeyser, F.; Sayd, P. Real time localization and 3d reconstruction. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; IEEE: Piscataway, NJ, USA, 2006; Volume 1, pp. 363–370. [Google Scholar]
  3. Huang, Y. Research and application of object recognition technology in sweeping robots. Sci. Technol. Innov. 2023, 19, 171–172+175. [Google Scholar]
  4. Ji, Y.; Li, S.; Peng, C.; Xu, H.; Cao, R.; Zhang, M. Obstacle detection and recognition in farmland based on fusion point cloud data. Comput. Electron. Agric. 2021, 189, 106409. [Google Scholar] [CrossRef]
  5. Wang, X.; Mizukami, Y.; Tada, M.; Matsuno, F. Navigation of a mobile robot in a dynamic environment using a point cloud map. Artif. Life Robot. 2021, 26, 10–20. [Google Scholar] [CrossRef]
  6. Mousavian, A.; Anguelov, D.; Flynn, J.; Kosecka, J. 3d bounding box estimation using deep learning and geometry. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7074–7082. [Google Scholar]
  7. Hu, Q.; Yang, B.; Xie, L.; Rosa, S.; Guo, Y.; Wang, Z.; Trigoni, N.; Markham, A. Learning semantic segmentation of large-scale point clouds with random sampling. IEEE Trans. Pattern Anal. Mach. Intell. 2021, 44, 8338–8354. [Google Scholar] [CrossRef]
  8. Kim, S.J.; Kim, C.H.; Levin, D. Surface reduction using a discrete curvature norm. Comput. Graph. 2002, 26, 657–663. [Google Scholar] [CrossRef]
  9. Eldar, Y.; Lindenbaum, M.; Porat, M.; Zeevi, Y.Y. The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 1997, 6, 1305–1315. [Google Scholar] [CrossRef] [PubMed]
  10. Shi, B.Q.; Liang, J.; Liu, Q. Adaptive reduction of point cloud using k-means clustering. Comput.-Aided Des. 2011, 43, 910–922. [Google Scholar] [CrossRef]
  11. Wang, Z.; Yang, H. Local entropy-based feature-preserving reduction and evaluation for large field point cloud. Vis. Comput. 2023, 40, 6705–6721. [Google Scholar] [CrossRef]
  12. Li, P.; Cui, F. Research on curvature graded point cloud data reduction optimization algorithm based on binary K-means clustering. Electron. Meas. Technol. 2022, 45, 66–71. [Google Scholar]
  13. Li, R.; Yang, M.; Liu, Y.; Zhang, H. A uniform reduction algorithm for scattered point clouds. Acta Opt. Sin. 2017, 37, 89–97. [Google Scholar]
  14. Hu, Z.; Cao, L.; Pei, D.; Mei, Z. Adaptive simplified point cloud improved preprocessing optimization 3D reconstruction algorithm. Laser Optoelectron. Prog. 2023, 60, 219–224. [Google Scholar]
  15. Leal, N.; Leal, E.; German, S.T. A linear programming approach for 3D point cloud reduction. IAENG Int. J. Comput. Sci. 2017, 44, 60–67. [Google Scholar]
  16. Martin, R.R.; Stroud, I.A.; Marshall, A.D. Data reduction for reverse engineering. In Proceedings of the 7th Conference on Information Geometers, Maui, HI, USA, 7–10 January 1997; pp. 85–100. [Google Scholar]
  17. Yang, X.; Tian, Y.L. Super normal vector for activity recognition using depth sequences. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 804–811. [Google Scholar]
  18. Guo, Y. KD-TREE spatial indexing technology. Comput. Prod. Circ. 2020, 6, 168. [Google Scholar]
  19. Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
  20. Rass, S.; König, S.; Ahmad, S.; Goman, M. Metricizing the Euclidean space towards desired distance relations in point clouds. IEEE Trans. Inf. Forensics Secur. 2024, 19, 7304–7319. [Google Scholar] [CrossRef]
Figure 1. Flowchart of the algorithm in this paper.
Figure 2. Overall feature points extracted with different thresholds: (a) neighborhood normal vector angle threshold of π/3; (b) threshold of π/2; (c) threshold of 2π/3.
Figure 3. Simplification results for the bunny model: (a) the proposed algorithm; (b) the uniform grid method; (c) the random sampling method; (d) the curvature sampling method.
Table 1. Comparison of efficiency of various algorithms of bunny model.
Method                      Reduction Rate/%    Reconstruction Time/ms
Algorithm in this paper     64.47               727
Uniform grid method         59.23               834
Random sampling method      61.52               751
Curvature sampling method   57.18               848

Xu, H.; Jiao, D.; Li, W. The Point Cloud Reduction Algorithm Based on the Feature Extraction of a Neighborhood Normal Vector and Fuzzy-c Means Clustering. Proceedings 2024, 110, 13. https://doi.org/10.3390/proceedings2024110013