Article

A Skewness-Based Density Metric and Deep Learning Framework for Point Cloud Analysis: Detection of Non-Uniform Regions and Boundary Extraction

Cheng Li, Xianghong Hua, Wenbo Wang and Pengju Tian
1 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
2 Engineering Research Center of Environmental Laser Remote Sensing Technology and Application of Henan Province, Nanyang Normal University, Nanyang 473061, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(10), 1770; https://doi.org/10.3390/sym17101770
Submission received: 1 September 2025 / Revised: 17 October 2025 / Accepted: 17 October 2025 / Published: 20 October 2025
(This article belongs to the Section Computer)

Abstract

This paper redefines point cloud density by utilizing statistical skewness derived from the geometric relationships between points and their local centroids. By comparing with a symmetric uniform reference model, this method can efficiently describe distribution patterns and detect non-uniform regions. Furthermore, a deep learning model trained on these skewness features achieves 85.96% accuracy in automated boundary extraction, significantly reducing omission errors compared to conventional density-based methods. The proposed framework offers an effective solution for automated point cloud segmentation and modeling.

1. Introduction

Point cloud density, a metric for assessing the spatial uniformity of discrete point distributions, is a critical parameter in point cloud processing. Variations in density serve as diagnostic indicators for detecting morphological discontinuities in terrain features and for quantifying stochastic noise artifacts in 3D spatial datasets [1,2]. Petras et al. analyzed point density variations in LiDAR point clouds using geospatial methods [3], while Mahphood and Arefi proposed a density-based building detection method that removes vegetation and trees via 3D and 2D density calculations [4]. Cheng et al. employed an adaptive sampling-radius adjustment method, leveraging point cloud density to enhance semantic segmentation accuracy for non-uniform, sparse road point clouds from mobile LiDAR systems [5]. This dual functionality enables density-based analysis to support both boundary delineation algorithms and data quality assessment protocols in photogrammetric and LiDAR applications.
In broad conceptual terms, point cloud density is defined as the numerical concentration of discrete sampling points per unit area or volumetric space. Its mathematical formalization depends on the spatial dimension (2D/3D) under consideration. However, identical areal or volumetric density values are of limited value for describing localized spatial distribution characteristics. Distinct spatial configurations—such as clustered versus dispersed arrangements—can yield equivalent density measures. Considering the limitations of traditional computational methods and other attributes of point clouds, numerous studies have introduced significant improvements to point cloud processing techniques based on point cloud density. Askari et al. employed point cloud density analysis combined with Hough transform and Principal Component Analysis (PCA) to extract exterior wall lines from a photogrammetric point cloud, which was generated from spherical camera images, for the 3D reconstruction of building blocks [6]. Rupnik et al. classified factors affecting point cloud density into design parameters and scanning conditions, proposing a density formula and calculation methodology [7]. Balsa-Barreiro et al. estimated the mean density by excluding 12.5% of the sampling area beforehand [8]. Separately, Minglei Li et al. introduced a method to enhance point cloud quality through octree-based reorganization and resampling [9]. Their approach reorders unstructured points using an energy optimization formula, resulting in regularized and organized point distributions that improve surface normal vector estimation accuracy. Yu Li et al. used a point cloud density threshold filtering algorithm to improve data quality and computational efficiency in tunnel leakage detection via 3D laser scanning [10]. Ellie Pang et al. used a camera-LiDAR fusion technique leveraging point cloud density enhancement to improve defect visibility in 2D and 3D road data preparation for digital road maintenance systems [11]. Lavanecha Chandran et al. used point cloud density analysis combined with semi-automated tree classification and strip alignment methods to map glacial erratic boulders in modern and ancient glaciated terrains using high-resolution LiDAR data [12].
This study aims to address two main challenges: cumbersome methods for calculating and defining density, and the complexity of detecting uneven density regions. To this end, we propose a simplified density metric based on the distances and angles of points relative to their local centroid. Furthermore, we develop a grid-based method that utilizes statistical skewness to identify uneven areas effectively. The efficacy of our proposed methodology is demonstrated through experimental validation.
Traditional methods often depend on manually engineered features (e.g., statistical skewness, distance and angular distributions), whose design heuristics may limit robustness in complex scenarios such as textured surfaces or surfaces with protrusions. In contrast, deep learning can learn discriminative features automatically and classify point clouds without hand-crafted rules. Deploying deep learning architectures for 3D point cloud processing, including tasks such as semantic segmentation and point-wise annotation, has become the predominant paradigm in contemporary 3D data processing pipelines [13,14].
Yao et al. proposed OG-PointNet++, which employs an unbalanced octree structure to precompute point density distributions based on local density, thereby reducing computational overhead in hierarchical feature extraction [15]. Sharma et al. introduced a deep learning architecture that utilizes feature extraction and reshaping to enhance point cloud density, coupled with a compound loss function to jointly optimize upsampled points and their normals for robust surface reconstruction [16]. Liu et al. proposed a Radius Feature Abstraction (RFA) module within a Group-in-Group Transformer (GiG) architecture to extract radius-based density features, explicitly characterizing the spatial sparsity of local point clouds for enhanced feature representation in 3D learning tasks [17]. Chen et al. proposed a deep learning method for 3D object detection in point clouds by fusing density information and local features [18]. Although deep learning has improved the efficiency and accuracy of applications such as object boundary extraction and point cloud segmentation based on point cloud density, the computational approach for point cloud density still relies on the number of points per unit space or area, lacking detailed information regarding the distribution of points. In this study, we employ a deep learning framework to classify the spatial distribution patterns of points within a region, facilitating an automated assessment of their uniformity. Specifically, the skewness of angular and distance distributions relative to the geometric centroid is used as a key feature for learning.
The key contributions of this paper are as follows:
A novel density metric: We propose a new method for defining and computing point cloud density based on statistical features derived from the angles and distances of points relative to their local centroid.
An effective uneven region detector: We develop a grid-based detection technique that uses statistical skewness to quantify distribution asymmetry and systematically identify areas with non-uniform density.
A deep learning framework for distribution analysis: We leverage a deep learning model to automatically classify spatial distribution patterns, facilitating advanced applications such as automated boundary detection.

2. Methodology

Figure 1 illustrates the workflow of the proposed methodology. This study consists of two main components: (1) a density feature-based method for detecting non-uniform regions in point clouds, and (2) a deep learning model for boundary extraction from point clouds, trained using distance and angle skewness features.
In the first component, the point cloud data undergoes initial preprocessing, including operations such as meshing. Subsequently, density characteristics are quantified by computing the statistical skewness of both the distances and the angles formed by the connecting lines between each grid centroid and the points within the grid. This step identifies the distribution pattern of the point cloud.
The second component involves constructing a deep learning model based on the features derived from the first part. After preprocessing steps—including normalization and data labeling—a training dataset is prepared. Geometric features are extracted by calculating the skewness of distances and angles from the points to their corresponding grid cell centroids. These features are then used as input to a classification model to identify point cloud distribution patterns. The model outputs indices of non-uniform grid cells, ultimately enabling the extraction of boundaries of indoor objects from the point cloud.

3. Definition of Point Cloud Density

Point cloud data typically comprises 3D coordinates, intensity values, and RGB information, while also exhibiting irregular and unordered structures. Moreover, although the number of points within a given region may remain constant, their spatial distribution can vary significantly, as illustrated in Figure 2. Furthermore, due to the hemispherical emission pattern of laser beams, point clouds captured at different distances exhibit distinct levels of sparsity, as shown in Figure 3. In such cases, density-dependent processing tasks—such as noise filtering and boundary detection—may suffer from performance degradation. To overcome these challenges, this study introduces a revised definition of point cloud density based on the skewness of both the distance from each point to the geometric center and the angular distribution between points.

4. Calculation of Point Cloud Density

To quantify these spatial characteristics, we consider two distributions. The distribution of distances from the points to a reference location (e.g., the geometric center) reflects their proximity to it, while the distribution of angles formed by the lines connecting the points to that reference reveals their directional arrangement around it. A statistical analysis that combines both distributions can therefore effectively characterize the spatial density variation (i.e., sparsity or concentration) of the point cloud within a region.
Skewness is a statistical measure that quantifies the asymmetry of a probability distribution around its mean. It describes the extent to which a distribution deviates from being symmetric. The formula is as follows:
$$\gamma=\frac{\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{3}}{\left(\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^{2}\right)^{3/2}} \tag{1}$$
In Formula (1), $x_i$ denotes an individual data point, $\bar{x}$ the sample mean, and $n$ the sample size.
When $\gamma > 0$: positive skew (right-skewed); the tail on the right side is longer or fatter.
When $\gamma < 0$: negative skew (left-skewed); the tail on the left side is longer or fatter.
When $\gamma = 0$: symmetric distribution (though not necessarily normal).
Skewness can be used as a measure that describes the degree and direction of asymmetry in a statistical distribution. It has been widely applied in laser point cloud processing for tasks such as filtering and object extraction. Specifically, a left-skewed distribution (negative skewness) indicates that the mean is less than the median, whereas a right-skewed distribution (positive skewness) indicates that the mean is greater than the median.
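As a quick numerical illustration (our sketch, not from the paper), Formula (1) can be implemented directly and cross-checked against scipy.stats.skew, whose default biased estimator matches it:

```python
import numpy as np
from scipy.stats import skew

def skewness(x):
    """Sample skewness per Formula (1): third central moment divided by
    the 3/2 power of the (biased) second central moment."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = np.mean((x - m) ** 2)
    m3 = np.mean((x - m) ** 3)
    return m3 / m2 ** 1.5

rng = np.random.default_rng(0)
sample = rng.exponential(1.0, 1000)  # long right tail, so gamma > 0
print(skewness(sample))              # positive value
print(skew(sample))                  # agrees with SciPy's default (bias=True)
```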
To operationalize this analysis at a local scale within a unit grid cell, we interpret the skewness as follows:
For distance distributions: Negative skewness indicates that the mode is greater than the mean, suggesting that points are clustered at longer distances from the centroid. Conversely, positive skewness implies clustering closer to the centroid.
For angular distributions: Negative skewness indicates that the mode of angles is greater than the mean, suggesting a directional aggregation in specific orientations. Positive skewness, conversely, points to a more dispersed angular distribution.
Based on this analytical foundation, the point cloud density metric is formalized as follows: this study defines the density of a point cloud as a dual-aspect skewness metric derived from both the distance and angular distributions formed by the lines connecting the points to the geometric center of a 2D plane or 3D space. Figure 4 shows the distances and angles formed by connecting points with the center of the grid; $P_1$, $P_2$, $P_3$, and $P_4$ are data points, and $O$ is the geometric center of the grid.
$\theta_i$ is the angle at $O$ between the rays to $P_i$ and $P_{i+1}$. The formula is as follows:
$$\theta_i=\arccos\frac{\overrightarrow{OP_i}\cdot\overrightarrow{OP_{i+1}}}{\left|\overrightarrow{OP_i}\right|\left|\overrightarrow{OP_{i+1}}\right|} \tag{2}$$
From Formula (2), the following equation can be derived:
$$\theta_i=\arccos\frac{(X_{P_i}-X_O)(X_{P_{i+1}}-X_O)+(Y_{P_i}-Y_O)(Y_{P_{i+1}}-Y_O)}{\sqrt{(X_{P_i}-X_O)^{2}+(Y_{P_i}-Y_O)^{2}}\sqrt{(X_{P_{i+1}}-X_O)^{2}+(Y_{P_{i+1}}-Y_O)^{2}}} \tag{3}$$
In Formula (3), $(X_{P_i}, Y_{P_i})$ and $(X_{P_{i+1}}, Y_{P_{i+1}})$ are the coordinates of points $P_i$ and $P_{i+1}$, and $(X_O, Y_O)$ is the coordinate of point $O$.
$d_i$ is the Euclidean distance from point $P_i$ to point $O$:
$$d_i=\sqrt{(X_{P_i}-X_O)^{2}+(Y_{P_i}-Y_O)^{2}} \tag{4}$$
The angular skewness $\mathrm{Skew}_\theta$ can be calculated using Formula (5):
$$\mathrm{Skew}_\theta=E\!\left[\left(\frac{\theta_i-\mu_\theta}{\sigma_\theta}\right)^{3}\right] \tag{5}$$
In Formula (5), $\mu_\theta=(\theta_1+\theta_2+\cdots+\theta_{n-1})/(n-1)$ is the mean angle and $\sigma_\theta$ is the standard deviation of the angles.
The distance skewness $\mathrm{Skew}_d$ can be calculated using Formula (6):
$$\mathrm{Skew}_d=E\!\left[\left(\frac{d_i-\mu_d}{\sigma_d}\right)^{3}\right] \tag{6}$$
In Formula (6), $\mu_d=(d_1+d_2+\cdots+d_n)/n$ is the mean distance and $\sigma_d$ is the standard deviation of the distances.
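To make Formulas (2)–(6) concrete, the sketch below (our illustration; it assumes the points are already ordered so that $P_i$ and $P_{i+1}$ are consecutive, and that no point coincides with $O$) computes the distances, the consecutive-ray angles, and their skewness for one grid cell:

```python
import numpy as np

def cell_skewness_features(points, center):
    """Distance and angle skewness of a 2D point set relative to a
    reference point O, following Formulas (2)-(6).
    points: (n, 2) array of ordered points; center: (2,) array for O."""
    v = np.asarray(points, float) - np.asarray(center, float)
    d = np.linalg.norm(v, axis=1)                    # d_i, Formula (4)
    # theta_i: angle between rays O->P_i and O->P_{i+1}, Formulas (2)-(3)
    cos_t = np.sum(v[:-1] * v[1:], axis=1) / (d[:-1] * d[1:])
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))     # n-1 angles, in radians

    def skewness(x):                                 # Formula (1)
        m = x.mean()
        return np.mean((x - m) ** 3) / np.mean((x - m) ** 2) ** 1.5

    return skewness(d), skewness(theta)              # Skew_d, Skew_theta
```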

5. Point Cloud Meshing

Because inter-point spacing varies across point cloud datasets, this step first partitions the raw data into a regular grid. The meshing procedure comprises two steps: first, compute the distance from each point to its nearest neighbor and take the average inter-point spacing; second, set the unit cell size according to this average, with this study proposing a cell dimension of 10 times the average inter-point distance.
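A possible implementation of these two steps (our sketch using a SciPy k-d tree; only the 10× factor comes from the text) is:

```python
import numpy as np
from scipy.spatial import cKDTree

def grid_cell_size(points, factor=10.0):
    """Step 1: average nearest-neighbor spacing; step 2: unit cell size
    set to `factor` times that average (the paper proposes factor = 10)."""
    tree = cKDTree(points)
    dists, _ = tree.query(points, k=2)   # k=2: nearest neighbor besides self
    return factor * dists[:, 1].mean()

def assign_cells(points, cell):
    """Map each 2D point to an integer (col, row) grid index."""
    mins = points.min(axis=0)
    return np.floor((points - mins) / cell).astype(int)
```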

6. Detection of Non-Uniform Regions in Point Clouds

By utilizing the statistical concept of skewness, the angular and distance skewness of the actual point cloud is compared with that under uniformly distributed conditions, yielding the relative angular skewness difference $D_\theta$ and the relative distance skewness difference $D_d$. When either deviation exceeds its acceptable threshold $T_\theta$ or $T_d$, the points within the grid cell are identified as non-uniformly distributed. The criteria are as follows:
$$D_\theta=\left|\frac{S_\theta-S_{\theta,\mathrm{ref}}}{S_{\theta,\mathrm{ref}}}\right| \tag{7}$$
$$D_d=\left|\frac{S_d-S_{d,\mathrm{ref}}}{S_{d,\mathrm{ref}}}\right| \tag{8}$$
If $D_\theta > T_\theta$ or $D_d > T_d$, the distribution is deemed non-uniform. Here, $S_\theta$ denotes the angular skewness of the target grid cell and $S_{\theta,\mathrm{ref}}$ the reference angular skewness under a comparable point count; $S_d$ denotes the distance skewness of the target grid cell and $S_{d,\mathrm{ref}}$ the reference distance skewness under an equivalent point count.
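Putting the pieces together, the decision rule can be sketched as follows (reference skewness values for a comparable point count would be precomputed from simulated uniform grids, as in Section 8; the threshold defaults anticipate the values chosen in Section 8.3):

```python
def is_non_uniform(skew_d, skew_t, ref_skew_d, ref_skew_t, T_d=0.5, T_t=0.4):
    """Formulas (7)-(8): relative skewness differences against a uniform
    reference; a cell is flagged when either difference exceeds its threshold."""
    D_d = abs((skew_d - ref_skew_d) / ref_skew_d)
    D_t = abs((skew_t - ref_skew_t) / ref_skew_t)
    return D_t > T_t or D_d > T_d
```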

7. Deep Learning-Based Object Boundary Extraction of Indoor Point Cloud

In response to the structural characteristics of indoor scenes, this study initially performs slicing operations—either horizontal or vertical—based on the spatial position of the target objects (as illustrated in Figure 5a), thereby enhancing boundary detection precision. Subsequently, a 2D projection is applied to the sliced point cloud of the target region (shown in Figure 5b,c). Finally, the projected point cloud slice undergoes grid processing using the aforementioned grid-based approach (demonstrated in Figure 5d). Following this preprocessing of indoor object point clouds, the statistical skewness method is employed to evaluate distribution uniformity within each grid cell, identifying grid indices of non-uniform regions.

7.1. Process and Structure

The extraction of indoor object point cloud boundaries based on deep learning models requires feature space standardization, training set construction, and network model building.

7.1.1. Feature Space Standardization

Feature data comprise three dimensions: distance skewness, angular skewness, and the corresponding uniformity metric. During feature standardization, point cloud distributions are categorized into distinct levels and standardized on a per-category basis.

7.1.2. Training Set Construction

When building the training dataset, this study generates point cloud data within unit grids with variations in quantity and distribution patterns, including randomly generated point clouds, uniformly distributed point clouds, and typical non-uniform point clouds. The number of points per grid is set between 10 and 200, referencing real-world point cloud densities.

7.1.3. Deep Learning Model

This paper constructs a three-layer fully connected neural network architecture using the TensorFlow framework. Data standardization and label encoding are implemented via the Scikit-learn toolkit, establishing an end-to-end point cloud density prediction framework encompassing data input/output, preprocessing, model inference, and result output. Central to the system’s effectiveness is its preprocessing pipeline, which implements a two-stage transformation of raw data into machine learning-ready inputs. Feature standardization employs z-score normalization with persistent storage of mean and variance statistics, ensuring consistent scaling during both training and inference phases. The label processing subsystem performs bidirectional encoding between string labels and numerical representations, supporting both dense integer formats and sparse one-hot vectors to accommodate different model architectures. Data splitting incorporates stratified sampling to preserve class distributions, while dynamic batching optimizes memory utilization without compromising statistical randomness. These preprocessing steps are meticulously designed to handle edge cases such as zero-variance features through numerical stabilization techniques. The model architecture exemplifies a balanced approach to complexity and performance, featuring a sequential structure with two ReLU-activated hidden layers that provide sufficient capacity for most classification tasks while maintaining computational efficiency.
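A minimal Scikit-learn sketch of that preprocessing pipeline (our reconstruction of the described stages, not the authors' code; the file names are placeholders):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, LabelEncoder
from sklearn.model_selection import train_test_split

X = np.load("features.npy")        # placeholder: distance & angle skewness columns
y = np.load("labels.npy")          # placeholder: string uniformity labels

scaler = StandardScaler().fit(X)   # z-score stats persisted for inference
X_std = scaler.transform(X)

encoder = LabelEncoder().fit(y)    # bidirectional string <-> integer mapping
y_enc = encoder.transform(y)

# Stratified 80:20 split preserves class distributions (cf. Section 8.1)
X_tr, X_va, y_tr, y_va = train_test_split(
    X_std, y_enc, test_size=0.2, stratify=y_enc, random_state=42)
```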
The workflow is illustrated in Figure 6. The input layer contains 64 ReLU-activated neuron units, receiving standardized feature vectors of distance skewness and angular skewness. The hidden layer employs 32 ReLU-activated neurons to build a nonlinear mapping layer, enabling feature abstraction and hierarchical extraction of point cloud spatial distribution patterns. The output layer utilizes a Softmax activation function to form a multi-class classifier, generating probability distributions for point cloud uniformity levels and establishing an end-to-end mapping from feature input to classification decisions. Additionally, the model incorporates a training visualization module and a model storage module that saves trained models in HDF5 format.
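Continuing the preprocessing sketch above, the described architecture might be expressed in Keras as follows (a sketch; the number of uniformity classes and the training hyperparameters are our placeholders):

```python
import tensorflow as tf

n_classes = 3  # placeholder: one class per uniformity level

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2,)),             # standardized skewness features
    tf.keras.layers.Dense(64, activation="relu"),  # input feature layer
    tf.keras.layers.Dense(32, activation="relu"),  # nonlinear mapping layer
    tf.keras.layers.Dense(n_classes, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X_tr, y_tr, validation_data=(X_va, y_va), epochs=100, batch_size=32)
model.save("uniformity_model.h5")   # HDF5 storage, as described above
```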

8. Experiment and Results

8.1. Experimental Data

Stairs, windows, and traffic signs are common indoor and outdoor objects and frequent experimental subjects in point cloud processing research. The long, straight contour lines and repetitive geometric structure of stairs test the proposed method's ability to handle regular geometric shapes. Windows, with inner and outer boundaries enclosing a hollow interior, evaluate the algorithm's balance between noise resistance and detail preservation. Traffic signs, whose point clouds may be relatively sparse in real-world scenarios, assess robustness under sparse data. To verify the representational capability of the proposed density metric and the feasibility and detection efficacy of using statistical skewness to identify regions of non-uniform point distribution, this study designed simulated uniformly distributed data and selected the ISPRS public dataset for experimental analysis. Figure 7 displays the ISPRS staircase point cloud, where the red-highlighted areas mark the experimental data selected for this study: regions containing 199 points (referred to as Region 7-2) and 412 points (referred to as Region 5-3).
The experimental environment configuration for deep learning-based point cloud boundary extraction is presented in Table 1.
Deep learning training and testing experiments were conducted on a single platform running Windows 10. Python was used as the programming language, with the TensorFlow framework employed to construct the training network; point cloud visualization was performed mainly in CloudCompare (version 2.12.0 (Kyiv), Windows 64-bit). The training dataset consisted of 600 pairs of skewness statistics for the distances and angles between the geometric center of each unit grid and its constituent points. Within each square unit grid, 10–200 randomly distributed points were generated, referencing real-world point cloud densities, and the distance and angle skewness values under uniform distribution were calculated for the corresponding point counts as reference metrics for evaluating the uniformity or deviation of randomly distributed points. The training and validation sets were split in an 80:20 ratio.

The experimental data consist of point clouds of windows from buildings on the campus of Wuhan University (Figure 8a) and point clouds of traffic signs on urban roads in Wuhan from the publicly available WHU-Urban3D dataset (Figure 9). Because the dataset comprises many point cloud files, Figure 9 displays the point cloud of one representative file together with a road image, while Figure 10 presents schematic diagrams of Traffic Signs No. 1 and No. 2. Window 1 and Window 2 were cropped from the original point cloud data (Figure 8b,c): Window 1 is close to square and contains 4974 points, while Window 2 is rectangular and comprises 5655 points. Traffic Sign 1 has a flat surface and consists of 326 points, whereas Traffic Sign 2 is the back-side point cloud of a large traffic sign with an uneven surface and 6450 points. The point clouds were acquired with a FARO Focus S150 3D laser scanner, featuring a measurement rate of 976 kpts/s, an accuracy of 1 mm, and an optimal measurement range of 0.6–150 m. The scan size was set to 8192 × 3413, the resolution to 1/5, and the quality to 2×; the average scanning time was 5–6 min.
The simulated uniformly distributed point clouds (designated as Point Cloud 1, 2, and 3) were generated within 1-cm square grids containing 121, 289, and 441 points, respectively. Skewness values of angular and distance distributions were computed for each configuration, with quantitative results documented in Table 2. To visually characterize the spatial distribution patterns of Point Clouds 1–3, this study employs bar charts and statistical distribution plots to illustrate the frequency distributions of both distance and angular measurements.
As shown in Figure 11, Figure 12 and Figure 13, under uniform distribution of point clouds:
(1) The three distance bar charts exhibit identical variation trends with point number, showing symmetric distributions. When point indices approach their maximum or minimum values, the distance from the center point increases; near the median index, points cluster closer to the center. The minimum distance occurs where points approach the grid center, so the median index corresponds to the lowest point in the bar chart.
(2) The three distance distribution charts demonstrate fundamentally similar trends. Points equidistant from the grid center occur in even-numbered counts. The skewness values of the distance distributions are all negative, with absolute values around 0.3.
(3) The three angle bar charts display identical variation patterns with point number, presenting symmetric distributions characterized by higher central values and lower extremes. Every 10–20 data points form a small segment, each maintaining internal symmetry. The angle value approaches zero near the median index.
(4) The three angle distribution charts share essentially identical trends. Angles predominantly concentrate within 10 degrees; for instance, in the 441-point angle statistics of Figure 13, 288 angles fall between 0 and 6 degrees. The skewness values of the angle distributions are all positive and increase with point quantity.
From the above three types of statistical graphs, the distances and angles from the point cloud to the grid center follow a consistent pattern: for uniform point clouds, the distance and angle bar charts share similar distribution trends, the statistical skewness of distances is approximately −0.3, and that of angles is positive and increases with point quantity. The closer a real point cloud conforms to this pattern, the more uniform its distribution. These examples demonstrate that integrating the statistical information of distances and angles within a grid cell can accurately characterize the point distribution in that region, thereby enabling the description of point density distribution there.

8.2. Evaluation Index

The evaluation metrics adopted in this study comprise two indicators: the miss rate and the false positive rate. The miss rate refers to the failure to identify points that should be recognized as boundary points, while the false positive rate indicates the erroneous identification of non-boundary points as boundary points. By comparing the experimental extraction results with the ground-truth boundary data, we quantify accuracy through the miss rate $e_m$ and the false positive rate $e_f$:
$$e_m=\frac{N_{\mathrm{miss}}}{N_{\mathrm{GT}}}\times 100\% \tag{9}$$
$$e_f=\frac{N_{\mathrm{false}}}{N_{\mathrm{GT}}}\times 100\% \tag{10}$$
Here, $N_{\mathrm{miss}}$ denotes the number of missed boundary points, $N_{\mathrm{false}}$ the number of false positive points, and $N_{\mathrm{GT}}$ the number of ground-truth boundary points. A low miss rate signifies fewer omissions, reflecting a closer approximation between the extracted and ground-truth boundaries; conversely, a high miss rate indicates greater deviation. Similarly, a low false positive rate demonstrates minimal inclusion of irrelevant points, resulting in tighter alignment with the ground-truth boundary, whereas a high false positive rate suggests reduced precision.
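In code, both metrics reduce to simple counts (the point-to-ground-truth matching that produces the counts is assumed to be done beforehand). Note that because $e_f$ is normalized by the ground-truth count, it can exceed 100%, as in Table 8:

```python
def boundary_metrics(n_gt, n_miss, n_false):
    """Miss rate e_m and false positive rate e_f, Formulas (9)-(10)."""
    return n_miss / n_gt * 100.0, n_false / n_gt * 100.0

# Window 1, proposed method (Table 5): e_m ~ 0.47%, e_f ~ 43.44%
print(boundary_metrics(633, 3, 275))
```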

8.3. Detection of Non-Uniform Regions in Point Clouds

Figure 14 and Figure 15 present bar charts and statistical plots for Subregion 7-2 grid and the 14 × 14 reference point cloud, depicting the distribution of inliers, their distances to the center point, and their angles. Comparison with the reference point cloud reveals that the distance bar chart in Figure 14 exhibits an asymmetric distribution, with the minimum point skewed to the right. This indicates that the point cloud within this grid is not center-symmetric relative to the grid center point. The distance statistical plot shows a higher concentration of points within the 0.08–0.10 m range, suggesting the presence of numerous points distributed near the grid edges. The angle bar chart also displays an asymmetric distribution. Both the angle bar chart and statistical plot indicate a higher density of obtuse angles, predominantly clustered in regions with higher point numbers (higher indices). This observation suggests the point cloud likely contains more rows, with continuous sequences of single-point rows concentrated primarily in these higher-indexed regions.
Table 3 presents a statistical comparison between the 196 uniformly distributed reference points in the 14 × 14 grid and the point cloud of Subregion 7-2. The actual data shows that both the mean and median angles are smaller than those of the reference point cloud. Furthermore, the angular skewness value of 4.08 is 50.6% higher than the reference value of 2.71, exceeding the 30% threshold and indicating abnormal angular skewness. This suggests that adjacent point angles within Subregion 7-2 are generally smaller, and the point density in the northwest quadrant is lower compared to the southwest quadrant. While the mean and median distances in Subregion 7-2 are close to the reference values, the distance skewness is 143% lower than the standard value, exceeding the 40% threshold and signifying abnormal distance skewness. Therefore, it can be inferred that the Subregion 7-2 grid exhibits significant inhomogeneity.
As Table 3 shows, Grid 7-2 exhibits smaller mean and median angle values than the reference point cloud, along with higher angular skewness. This indicates that the angular mode in the 7-2 grid is smaller than both its own mean and the mean of the reference uniform point cloud, implying that adjacent points in the 7-2 area form smaller inter-point angles. Additionally, the distance mean and median values in the 7-2 grid are slightly larger than those of the reference point cloud, while the distance skewness is negative and below −0.3. This suggests that the distance mode of the actual data exceeds its mean, which in turn is greater than the reference mean. Consequently, it can be inferred that points in the 7-2 grid are distributed farther from the grid center.
Figure 16 and Figure 17 present the statistical distribution diagrams of distances and angles for the 5-3 area point cloud and the 20 × 20 reference point cloud. Through comparative analysis, the distance bar chart reveals a dual-minimum-point pattern, indicating the presence of two points near the grid center in the 5-3 area without central symmetry. The distance statistical diagram aligns with the reference point cloud’s overall trend but shows fewer counts in the 0.04–0.06 m range, suggesting a near-uniform distribution of distances from points to the grid center in the 5-3 area and sparse clustering around the center. Meanwhile, the angle bar chart exhibits multiple obtuse angles approaching 180°, reflecting greater dispersion of the point cloud. The angle statistical diagram further highlights more obtuse angles in the 5-3 area than the reference point cloud, implying a higher concentration of row-structured data within this grid region.
Table 4 compares the statistical data of 400 uniformly distributed reference points with those of the actual point cloud. The results indicate that the angular skewness of the actual data is smaller than that of the reference data, while its mean value is larger, suggesting that adjacent point angles in the actual point cloud are more dispersed. Meanwhile, the angular skewness difference reaches 58%, indicating an angular skewness anomaly. The distance skewness difference (22%) falls within the normal range. These findings demonstrate that Area 5-3 is a non-uniform region.
Based on the two sets of simulation experiments conducted, decision thresholds for determining uniform distribution were established as follows: because distance skewness typically fluctuates more, its threshold was relaxed to 50%, while angular skewness, being more sensitive to variations, was set at 40%. Consequently, the relative difference thresholds are defined as $T_\theta = 40\%$ for angular skewness and $T_d = 50\%$ for distance skewness.

8.4. Deep Learning-Based Object Boundary Extraction

After data training and parameter tuning, our deep learning model achieved final evaluation metrics of 0.4880 validation loss and 85.96% validation accuracy. Figure 18 documents the training dynamics: In Figure 18a, the solid blue line (training accuracy) shows a rapid initial ascent followed by stabilization, indicating effective learning on training data where accuracy consistently improved with increasing epochs. Conversely, the dashed red line (validation accuracy) exhibits slower improvement with marginal decline after stabilization, suggesting suboptimal generalization and model performance approaching its limit. Figure 18b reveals complementary loss patterns: The solid blue line (training loss) demonstrates a sharp early decline converging to stability, reflecting continuous error reduction, while the dashed red line (validation loss) displays a slower descent with a slight post-stabilization increase, collectively demonstrating model convergence.
To validate the accuracy of the proposed boundary detection method, this experiment compares its performance against a traditional density-based boundary detection approach. The traditional method employs point count statistics within a neighborhood radius to compute point density and identify boundary regions: given a search radius and a minimum point count threshold, it counts the neighbors of each point as its density value, and regions whose density falls below the threshold are labeled boundary areas. The miss rate $e_m$ and false positive rate $e_f$ serve as evaluation metrics for both methods. Table 5 presents a comparative analysis of boundary extraction results on the Window 1 point cloud data.
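That baseline can be reproduced roughly as below (our sketch with a SciPy k-d tree; the radius and threshold are tuning parameters not reported in the paper):

```python
import numpy as np
from scipy.spatial import cKDTree

def density_boundary(points, radius, min_neighbors):
    """Traditional baseline: count the neighbors of each point within
    `radius`; points whose count falls below `min_neighbors` are taken
    as boundary candidates."""
    tree = cKDTree(points)
    counts = np.array([len(idx) for idx in tree.query_ball_point(points, radius)])
    return points[counts < min_neighbors]
```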
As shown in Table 5, compared with traditional methods, the proposed approach exhibits fewer false negatives and a lower false negative rate (0.47%) at the boundary of Window 1. However, it shows more false positives with a higher false positive rate (43.44%). The analysis suggests this may result from including irrelevant points during the point cloud meshing process. When the grid size is set smaller, the reduced number of points per grid makes it difficult to accurately assess point distribution uniformity. Therefore, to ensure both distribution assessment accuracy and boundary extraction integrity, this experiment deliberately included a small number of adjacent irrelevant points when configuring the grid size.
Table 6 compares the boundary extraction results of the two methods for the point cloud of Window 2. As shown in Table 6, compared with traditional methods, the proposed approach exhibits fewer false negatives and a lower false negative rate (13.69%) at the boundary of Window 2. However, it shows more false positives with a higher false positive rate (21.81%), further validating the hypothesis that the proposed method includes irrelevant points during point cloud meshing.
Collectively analyzing both tables, the proposed method demonstrates an overall lower false negative rate and achieves higher boundary integrity compared to traditional density-based boundary extraction methods. While the traditional method yields boundaries with closer spatial alignment to the ideal boundary, it suffers from poorer completeness.
Figure 19 presents experimental results of two boundary extraction attempts and the ideal boundary for Window 1. For comparative analysis, extracted boundaries are highlighted with bold points in the figures.
Both Figure 19b,c demonstrate roughly rectangular boundary extraction results. However, the boundary points in Figure 19b exhibit discontinuous distribution along the rectangular edges with poor connectivity, resulting in fragmented and visibly broken edges. In contrast, the boundary extraction in Figure 19c yields smoother, more continuous points with enhanced connectivity, forming a more complete and well-defined rectangular contour without noticeable discontinuities.
Figure 20 presents experimental results of two boundary extraction attempts alongside the ideal boundary for Window 2, where critical performance differences are observed: while the traditional method fails to capture the upper and right edges of the window sill in Figure 20b, our proposed approach (Figure 20c) successfully extracts the complete boundary—visually confirming the quantitative superiority established in prior analyses.
Collectively evaluating the results, the proposed boundary extraction method demonstrates superior positional accuracy and continuity over traditional point cloud density-based approaches.
To verify the applicability and robustness of the method proposed in this study, boundary extraction of traffic signs was also conducted. Figure 21 presents the ideal boundary, the boundary extraction results of Traffic Sign 1 obtained by the traditional method and the method proposed in this study. For comparative analysis, extracted boundaries are highlighted with bold points in the figures.
As can be seen from Figure 21, compared with the ideal boundary, the boundaries extracted by the above two methods are relatively wider. The method proposed in this paper can fully extract the boundary of Traffic Sign 1, whereas traditional methods perform poorly in extracting the non-smooth bottom edge.
Table 7 compares the boundary extraction results for Traffic Sign No. 1 using the two methods. The traditional method has a miss rate ($e_m$) of 16.13%, whereas the proposed method missed no true boundary points, indicating that it significantly outperforms the traditional method in detecting the true boundary points of Traffic Sign 1. In terms of the false positive rate ($e_f$), the proposed method reduces it by approximately 21 percentage points relative to the traditional method, demonstrating a superior ability to distinguish boundaries from non-boundaries.
To verify the extraction effectiveness of proposed method under complex conditions such as protrusions and textures on object surfaces, we selected the back-side point cloud of a traffic sign as Traffic Sign 2 for experimental purposes (shown in Figure 10c). Figure 22 displays the ideal boundary, the boundary extracted by the traditional method, and the boundary extracted by proposed method for Traffic Sign 2. Overall, both methods yielded unsatisfactory extraction results for non-smooth surfaces. The traditional method sometimes identified the support frame of the traffic sign while failing to recognize its panel. In contrast, the proposed method occasionally identified both the support frame and the panel simultaneously.
Table 8 compares the extraction results of the two methods for Traffic Sign No. 2. In terms of the miss rate ($e_m$), the traditional method stands at 16.78%, while the proposed method achieves 0%, indicating an exceptional capability in capturing boundaries. Regarding the false positive rate ($e_f$), the proposed method reduces it from 327.69% to 110.92%. Although reduced, a 110.92% false positive rate still indicates that a significant number of non-boundary points are misclassified. This suggests that the proposed method does not exploit spatial distribution or texture features, and that density information alone may be insufficient to distinguish complex surface boundaries from non-boundaries.
Table 9 summarizes the miss rates. The traditional density-based boundary detection method has an average miss rate of 40.73%, while the proposed method achieves an average miss rate of only 3.54%. This indicates that, overall, the proposed method captures actually existing boundaries more effectively, significantly reducing missed detections, and its performance is markedly superior to that of the traditional method. Although the miss rates of the proposed method fluctuate somewhat across scenarios, the range of fluctuation is small and all values remain low, demonstrating a degree of stability and adaptability. In contrast, the traditional method shows large variations in miss rate across scenarios, indicating poor adaptability and suggesting that it may not maintain satisfactory boundary detection performance in diverse settings.
Table 10 presents the summary of the false positive rate. The traditional method suffers from an alarmingly high average false positive rate of 101.21%, implying that, on the whole, the boundaries detected by this method significantly interfere with subsequent analysis and processing, potentially leading to erroneous decisions and conclusions. In contrast, the proposed method in this paper achieves an average false positive rate of 54.12%, which, while lower than that of the traditional method, still remains at a relatively high level. The proposed method also encounters the issue of excessive false detections in scenarios with complex surfaces, necessitating further algorithm optimization by incorporating features such as point cloud texture, semantics, and local characteristics to reduce the false positive rate.

9. Conclusions

This study defines point cloud density through the angular and distance relationships between points and mesh centroids. The approach simplifies density computation through statistical metrics while enabling precise characterization of variations in density distribution. Building on this, we leverage skewness to detect non-uniform regions, experimentally validating the method's feasibility by comparing the distance and angular skewness of uniformly distributed reference point clouds against real-world data; the detection outcomes demonstrate the method's capability not only to identify non-uniform regions but also to provide a preliminary assessment of the distribution characteristics within them. This research then employs deep learning for point cloud boundary extraction, establishing a training dataset encompassing point clouds with varying point counts and distribution patterns and achieving 85.96% validation accuracy. Compared to conventional density-based boundary detection methods, the proposed model exhibits a maximum miss rate of 13.69%, significantly enhancing boundary detection completeness. This integrated approach offers a useful reference for automated point cloud boundary extraction and modeling.

Author Contributions

Conceptualization, C.L. and X.H.; methodology, validation, formal analysis, writing, C.L.; reviewing and editing, X.H., W.W. and P.T.; supervision, X.H.; funding acquisition, X.H. and P.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) under Grant No. 42271452 and the Natural Science Foundation of Henan Province under Grant No. 242300420617.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Tang, L.Y.; Zhan, Y.B.; Chen, Z.; Yu, B.; Tao, D. Contrastive boundary learning for point cloud segmentation. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; IEEE Press: New York, NY, USA, 2022; pp. 8479–8489.
  2. Larue, E.A.; Fahey, R.; Fuson, T.L.; Foster, J.R.; Matthes, J.H.; Krause, K.; Hardiman, B.S. Evaluating the sensitivity of forest structural diversity characterization to LiDAR point density. Ecosphere 2022, 13, e4209.
  3. Petras, V.; Petrasova, A.; McCarter, J.B.; Mitasova, H.; Meentemeyer, R.K. Point Density Variations in Airborne Lidar Point Clouds. Sensors 2023, 23, 1593.
  4. Mahphood, A.; Arefi, H. Density-Based Method for Building Detection from LiDAR Point Cloud. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2023, X-4/W1-2022, 423–428.
  5. Cheng, P.; Guo, M.; Wang, H.; Fu, Z.; Li, D.; Ren, X. Fusion Segmentation Network Guided by Adaptive Sampling Radius and Channel Attention Mechanism Module for MLS Point Clouds. Appl. Sci. 2023, 13, 281.
  6. Askari, Q.; Arefi, H.; Maboudi, M. 3D Reconstruction of Building Blocks Based on Extraction of Exterior Wall Lines Using Point Cloud Density Generated from Spherical Camera Images. Remote Sens. 2024, 16, 4377.
  7. Rupnik, B.; Mongus, D.; Zalik, B. Point Density Evaluation of Airborne LiDAR Datasets. J. Univers. Comput. Sci. 2015, 21, 587–603.
  8. Balsa-Barreiro, J.; Avariento, J.P.; Lerma, J.L. Airborne light detection and ranging (LiDAR) point density analysis. Sci. Res. Essays 2012, 7, 3010–3019.
  9. Li, M.; Sun, C. Refinement of LiDAR point clouds using a super voxel based approach. ISPRS J. Photogramm. Remote Sens. 2018, 143, 213–221.
  10. Yu, L.; Huai, N.W.; Hong, Z.C.; De, S.G.; Ren, P.C. Application of an innovative BMask R-CNN-DAT enhanced with feature extraction and boundary awareness in tunnel leakage detection. Tunn. Undergr. Sp. Tech. 2026, 167, 107074.
  11. Pang, E.; Davletshina, D.; D'Avigneau, A.M.; Park, N.; de Silva, L.; Brilakis, I. Digitalizing Road Maintenance: A Novel Approach for the Preparation and Integration of 2D and 3D Road Data through Image Shadow Removal and Point Cloud Densification. J. Comput. Civ. Eng. 2025, 39, 04025080.
  12. Lavanecha, C.; Nicholas, E.; Syed, B.; Roger, C.P.; Denise, M.B.; Niko, P. Mapping of glacial erratic boulders for mineral exploration: A review with LiDAR-based examples from modern and ancient glaciated terrains. Geomorphology 2025, 487, 109938.
  13. Hazer, A.; Yildirim, R. Deep Learning Based Point Cloud Processing Techniques. IEEE Access 2022, 10, 127237–127283.
  14. Sarker, S.; Sarker, P.; Stone, G.; Gorman, R.; Tavakkoli, A.; Bebis, G.; Sattarvand, J. A comprehensive overview of deep learning techniques for 3D point cloud classification and semantic segmentation. Mach. Vis. Appl. 2024, 35, 67.
  15. Yao, X.; Guo, J.; Hu, J.; Cao, Q. Using Deep Learning in Semantic Classification for Point Cloud Data. IEEE Access 2019, 7, 37121–37130.
  16. Sharma, R.; Schwandt, T.; Kunert, C.; Urban, S.; Broll, W. Point Cloud Upsampling and Normal Estimation using Deep Learning for Robust Surface Reconstruction. In Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), Virtual, 8–10 February 2021; SciTePress: Setúbal, Portugal, 2021; pp. 70–79.
  17. Liu, S.; Fu, K.; Wang, M.; Song, Z. Group-in-Group Relation-Based Transformer for 3D Point Cloud Learning. Remote Sens. 2022, 14, 1563.
  18. Chen, X.; Hua, X.; Liu, H.; Wang, D.X.; Li, K. Laser point cloud segmentation based on density peak cluster of center homogenization algorithm. Sci. Surv. Mapp. 2021, 46, 71–83+158.
Figure 1. Workflow of the methodology.
Figure 2. Points with the same quantity but different distributions.
Figure 3. Point cloud sparsity variation by distance.
Figure 4. The distance and the angle formed by connecting points with the center of the grid.
Figure 5. Preprocessing pipeline for indoor object point clouds. (a) Target objects. (b) Slicing processing. (c) 2D projection. (d) Grid processing.
Figure 6. The workflow of the neural network.
Figure 7. ISPRS stairs point cloud dataset.
Figure 8. Windows from buildings within the campus of Wuhan University. (a) Data collection location. (b) Raw data. (c) Target point cloud cropped from raw data.
Figure 9. WHU-Urban3D dataset: partial road point cloud dataset and road image. (a) Partial road point cloud dataset. (b) Partial road image.
Figure 10. Schematic diagrams of Traffic Signs No. 1 and No. 2. (a) Traffic Sign No. 1. (b) Traffic Sign No. 2. (c) Rear view of Traffic Sign No. 2.
Figure 11. Statistical data distribution of distance and angle for point cloud 1. (a) Point cloud 1. (b) Line diagram from the center point to each point. (c) Distance bar chart. (d) Distance statistical chart. (e) Angle bar chart. (f) Angle statistical chart.
Figure 12. Statistical data distribution of distance and angle for point cloud 2. (a) Point cloud 2. (b) Line diagram from the center point to each point. (c) Distance bar chart. (d) Distance statistical chart. (e) Angle bar chart. (f) Angle statistical chart.
Figure 13. Statistical data distribution of distance and angle for point cloud 3. (a) Point cloud 3. (b) Line diagram from the center point to each point. (c) Distance bar chart. (d) Distance statistical chart. (e) Angle bar chart. (f) Angle statistical chart.
Figure 14. Statistical data distribution of distance and angle for point cloud 7-2. (a) 7-2 point cloud data. (b) Distance bar chart. (c) Distance statistical chart. (d) Angle bar chart. (e) Angle statistical chart.
Figure 15. Statistical data distribution of distance and angle for the 14 × 14 reference point cloud. (a) 14 × 14 reference point cloud data. (b) Distance bar chart. (c) Distance statistical chart. (d) Angle bar chart. (e) Angle statistical chart.
Figure 16. Statistical data distribution of distance and angle for point cloud 5-3. (a) 5-3 point cloud data. (b) Distance bar chart. (c) Distance statistical chart. (d) Angle bar chart. (e) Angle statistical chart.
Figure 17. Statistical data distribution of distance and angle for the 20 × 20 reference point cloud. (a) 20 × 20 reference point cloud data. (b) Distance bar chart. (c) Distance statistical chart. (d) Angle bar chart. (e) Angle statistical chart.
Figure 18. Result of deep learning training. (a) Accuracy curve of the model. (b) Loss curve of the model.
Figure 19. Comparison of point cloud boundary extraction results for Window No. 1. (a) Ideal boundary. (b) Boundaries detected by the traditional method. (c) Boundaries detected by the proposed method.
Figure 20. Comparison of point cloud boundary extraction results for Window No. 2. (a) Ideal boundary. (b) Boundaries detected by the traditional method. (c) Boundaries detected by the proposed method.
Figure 21. Comparison of point cloud boundary extraction results for Traffic Sign No. 1. (a) Ideal boundary. (b) Boundaries detected by the traditional method. (c) Boundaries detected by the proposed method.
Figure 22. Comparison of point cloud boundary extraction results for Traffic Sign No. 2. (a) Ideal boundary. (b) Boundaries detected by the traditional method. (c) Boundaries detected by the proposed method.
Symmetry 17 01770 g022
Table 1. Environment for deep learning model.
Operating System: Windows 10
RAM: 32 GB
GPU: RTX 4070 Ti SUPER 16 GB
CPU: Intel i7-14700KF
Deep Learning Framework: TensorFlow 2.18.0
CUDA Version: 12.7
Python Version: 3.9.13
Table 2. Simulated uniformly distributed data.
Point Cloud | Number of Points | Skewness of Angle | Skewness of Distance
1 | 11 × 11 = 121 | 2.4477 | −0.3658
2 | 17 × 17 = 289 | 3.2522 | −0.3291
3 | 21 × 21 = 441 | 3.1624 | −0.3213
Table 3. Comparison of statistical data between the 196 uniformly distributed reference points and the 7-2 grid point cloud.
Grid | Number of Points | Average Angle | Median of Angles | Skewness of Angles | Average Distance | Median of Distances | Skewness of Distances
7-2 | 199 | 0.388 | 0.91179 | 4.08 | 0.077 | 0.082 | −0.793
14 × 14 Reference | 196 | 1.836 | 1.685 | 2.71 | 0.074 | 0.079 | −0.326
Table 4. Comparison of statistical data between the 400 uniformly distributed reference points and the actual point cloud of the 5-3 grid.
Grid | Number of Points | Average Angle | Median of Angles | Skewness of Angles | Average Distance | Median of Distances | Skewness of Distances
5-3 | 412 | 16.0609 | 5.7630 | 3.5581 | 0.0694 | 0.0731 | −0.4033
20 × 20 Reference | 400 | 12.1792 | 5.1022 | 4.2795 | 0.06906 | 0.0726 | −0.2937
Table 5. Comparison of accuracy of two methods of Window No. 1 boundary extraction.
Metric | Traditional Density-Based Boundary Detection | Proposed Method
$N_{\mathrm{GT}}$ | 633 | 633
$N_{\mathrm{miss}}$ | 444 | 3
$N_{\mathrm{false}}$ | 48 | 275
Miss rate $e_m$ | 70.14% | 0.47%
False positive rate $e_f$ | 7.58% | 43.44%
Table 6. Comparison of accuracy of two methods of Window No. 2 boundary extraction.
Metric | Traditional Density-Based Boundary Detection | Proposed Method
$N_{\mathrm{GT}}$ | 628 | 628
$N_{\mathrm{miss}}$ | 376 | 86
$N_{\mathrm{false}}$ | 52 | 137
Miss rate $e_m$ | 59.87% | 13.69%
False positive rate $e_f$ | 8.28% | 21.81%
Table 7. Comparison of accuracy of two methods of Traffic Sign No. 1 boundary extraction.
Metric | Traditional Density-Based Boundary Detection | Proposed Method
$N_{\mathrm{GT}}$ | 62 | 62
$N_{\mathrm{miss}}$ | 10 | 0
$N_{\mathrm{false}}$ | 38 | 25
Miss rate $e_m$ | 16.13% | 0%
False positive rate $e_f$ | 61.29% | 40.32%
Table 8. Comparison of accuracy of two methods of Traffic Sign No. 2 boundary extraction.
Metric | Traditional Density-Based Boundary Detection | Proposed Method
$N_{\mathrm{GT}}$ | 751 | 751
$N_{\mathrm{miss}}$ | 126 | 0
$N_{\mathrm{false}}$ | 2461 | 833
Miss rate $e_m$ | 16.78% | 0%
False positive rate $e_f$ | 327.69% | 110.92%
Table 9. Summary of miss rate.
Scene | Traditional Density-Based Boundary Detection | Proposed Method
Window 1 | 70.14% | 0.47%
Window 2 | 59.87% | 13.69%
Traffic Sign 1 | 16.13% | 0%
Traffic Sign 2 | 16.78% | 0%
Average | 40.73% | 3.54%
Table 10. Summary of false positive rate.
Scene | Traditional Density-Based Boundary Detection | Proposed Method
Window 1 | 7.58% | 43.44%
Window 2 | 8.28% | 21.81%
Traffic Sign 1 | 61.29% | 40.32%
Traffic Sign 2 | 327.69% | 110.92%
Average | 101.21% | 54.12%
