Sensors
  • Article
  • Open Access

9 July 2025

DFPS: An Efficient Downsampling Algorithm Designed for the Global Feature Preservation of Large-Scale Point Cloud Data

1 College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China
2 School of Civil Engineering, Anhui Jianzhu University, Hefei 230601, China
3 Qingdao Xiushan Mobile Survey Co., Ltd., Qingdao 266510, China
* Authors to whom correspondence should be addressed.
This article belongs to the Section Vehicular Sensing

Abstract

This paper introduces an efficient 3D point cloud downsampling algorithm (DFPS) based on adaptive multi-level grid partitioning. By leveraging an adaptive hierarchical grid partitioning mechanism, the algorithm dynamically adjusts computational intensity according to terrain complexity. This approach effectively balances the global feature retention of point cloud data with computational efficiency, making it well suited to the growing scale of 3D point cloud datasets. DFPS is designed with a multithreaded parallel acceleration architecture, which significantly enhances processing speed. Experimental results demonstrate that, for a point cloud dataset containing millions of points, DFPS reduces processing time from approximately 161,665 s with the original FPS method to approximately 71.64 s at a 12.5% sampling rate, an efficiency improvement of over 2200 times. As the sampling rate decreases, the performance advantage becomes more pronounced: at a 3.125% sampling rate, the efficiency improves by nearly 10,000 times. Visual inspection and quantitative analysis (with the chamfer distance as the measurement index) show that DFPS effectively preserves global feature information. Notably, DFPS does not depend on GPU-based heterogeneous computing, enabling seamless deployment in resource-constrained environments such as airborne and mobile devices. This makes DFPS an effective, lightweight tool for providing high-quality input data to subsequent algorithms, including point cloud registration and semantic segmentation.

1. Introduction

With the rapid advancement of 3D perception technology, point cloud data have become a critical data modality across various domains such as geographic information systems, autonomous driving, robot navigation, and augmented reality. A key application lies in autonomous driving, where point clouds facilitate road detection, obstacle recognition, and pedestrian tracking, thereby playing an essential role in environmental perception and positioning [1,2,3,4,5]. Furthermore, 3D point clouds are widely utilized in intelligent manufacturing, modern medicine, art design, and cultural heritage preservation, highlighting their broad applicability across disciplines [6,7,8].
Point cloud data are primarily acquired through LiDAR (Light Detection and Ranging) sensors and multi-view photogrammetry techniques. These methods offer unmatched precision in capturing surface morphology and object characteristics [9]. Structurally, point clouds are represented as multi-dimensional attribute matrices and are commonly stored in formats such as .bin, .pcd, .ply, .txt, and .csv [10,11]. The evolution of 3D perception technology has resulted in an exponential increase in the volume of point cloud data. High-precision multi-beam LiDAR systems can now capture millions of points per frame by significantly enhancing angular resolution and scanning frequency [12]. Advances in solid-state LiDAR, particularly in channel integration and signal processing, have substantially improved the capacity of automotive perception systems to generate dense point clouds within short time intervals [3,13,14,15,16]. This provides detailed information for scene reconstruction, enabling the detection and quantification of road defects in point cloud data, as well as supporting applications that require large-scale point cloud datasets—such as large-scale bridge construction—by delivering high-quality, dense point clouds tailored to these demanding tasks [17,18]. However, in data processing workflows for applications like autonomous driving and large-scale point cloud Level of Detail (LOD) visualization [19], the sheer volume of point cloud data presents significant challenges in registration, segmentation, and visualization [6,20,21,22]. The surge in data volume leads to non-linear increases in computational complexity, imposing significant costs on storage and transmission [23,24,25]. Under these circumstances, point cloud downsampling has emerged as a vital preprocessing technique, offering three primary advantages:
  • Reduction of computational complexity: Large volumes of point cloud data present significant challenges to point cloud processing algorithms. Directly processing massive point clouds can impose a substantial computational burden. Downsampling mitigates this issue by reducing the number of points through predefined sampling rules and logic, thereby effectively lowering the computational load in subsequent processing stages [11,26].
  • Improvement of data quality: Appropriate downsampling algorithms can effectively address inconsistencies in point density that arise during raw data acquisition due to factors such as equipment limitations and environmental conditions. These inconsistencies can negatively affect downstream processing tasks. For instance, in point cloud registration, varying point densities may lead to poor alignment between point sets [27,28]. Similarly, in aerial surveying, spliced point clouds often contain numerous redundant points caused by longitudinal and lateral overlaps, which can degrade both processing speed and modeling accuracy [29,30].
  • Integration of deep learning: With the rapid development of deep learning, various models for point cloud registration, classification, and segmentation have emerged. Notable examples include Qi’s PointNet [31] and PointNet++ [26], Li’s PointCNN [32], and Yang’s RITNet [4], which serve as foundational architectures for point cloud data processing. In deep-learning applications, downsampling techniques based on the farthest-point sampling algorithm have gained widespread adoption due to their ability to preserve global features and standardize input formats [33]. These techniques enable point clouds to meet specific network input requirements—such as the standardized 2048-point input required by the PointNet series—while retaining critical feature information.
Current point cloud downsampling methods can be categorized into three primary approaches: rule-based sampling, deep-learning-based sampling, and geometric feature-based sampling. Firstly, the rule-based downsampling methods are the most convenient, including random sampling, uniform sampling, and voxel-based sampling, and they prioritize computational simplicity and efficiency. However, they exhibit notable limitations, such as edge detail degradation and inadequate accuracy for complex or high-precision applications [16,34,35]. Secondly, the deep-learning-based downsampling methods face several technical challenges in resource-constrained engineering environments: they depend on large labeled datasets for training, incur high memory consumption and computational overhead that hinder real-time processing [34], and carry high data acquisition and annotation costs, often leading to performance degradation and poor generalization in data-scarce or cross-domain scenarios due to overfitting. Thirdly, the geometric feature-based downsampling strategies, such as farthest-point sampling (FPS) and curvature-aware sampling, offer distinct advantages. FPS, in particular, has become a standard preprocessing module in many prominent deep-learning models (e.g., PointCNN, PointNet++) [20,36,37,38,39] owing to its uniform spatial coverage and topological preservation properties. However, the geometric feature-based downsampling strategies often incur significant computational burdens, and their low operational efficiency severely hinders their application in practical engineering. Although FPS is widely regarded as a reference method for preserving global features, its greedy iterative distance calculation yields computational cost that grows rapidly (roughly quadratically) with point count, creating severe performance bottlenecks in CPU-based implementations [34,37]. In high-precision applications, FPS typically consumes 30–70% of a neural network’s runtime, with this proportion scaling with data size [3,40,41]. While some GPU-based optimizations improve efficiency, they lack universality and remain unsuitable for resource-constrained environments. In many practical engineering applications, the feasibility of lightweight deployment is of great significance.
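For reference, the first category above (rule-based sampling) admits very compact implementations. The following is a minimal NumPy sketch of random and voxel-grid downsampling; the function names and the centroid-per-voxel choice are illustrative assumptions, not code from the cited works.

```python
import numpy as np

def random_downsample(points, rate, rng=None):
    """Rule-based baseline: keep a uniform random subset of the points."""
    rng = rng or np.random.default_rng()
    k = max(1, int(round(len(points) * rate)))
    return points[rng.choice(len(points), size=k, replace=False)]

def voxel_downsample(points, voxel_size):
    """Rule-based baseline: replace the points in each occupied voxel with their centroid."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # Group points by voxel index and average the coordinates per voxel.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    sums = np.zeros((inverse.max() + 1, points.shape[1]))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse).astype(float)
    return sums / counts[:, None]
```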
Based on the aforementioned analysis, this paper presents DFPS, an efficient downsampling algorithm designed for the global feature preservation of large-scale point cloud data. The proposed DFPS algorithm enhances conventional sampling methodology through two key innovations: an adaptive grid partitioning mechanism that optimizes spatial sampling logic, and a multithreaded parallel computing architecture that accelerates processing throughput. Notably, the algorithm’s lightweight design ensures GPU-independent operation, making it particularly suitable for resource-constrained engineering applications.
The remaining part of this article is arranged as follows:
Section 2 elaborates on the DFPS algorithmic framework, including its operational workflow and fundamental principles.
Section 3 presents comprehensive experimental validation, featuring comparative analyses of sampling efficiency and quality assessment, which collectively demonstrate DFPS’s superior performance as a 3D point cloud downsampling solution.
Section 4 concludes with a discussion of DFPS’s distinctive characteristics, applicable domains, and potential engineering applications.

2. Methodology

The DFPS algorithm employs an adaptive hierarchical grid partitioning mechanism for iterative farthest-point sampling. This mechanism systematically incorporates critical factors, including the hardware performance of the experimental equipment, the scale of the point cloud data, and the diverse processing objectives for point clouds. Through an iterative optimization process, the algorithm performs dynamic grid partitioning, thereby achieving a rational and efficient multi-level grid division of the entire point cloud dataset. The adaptive hierarchical iterative mechanism can be used to control the number of point cloud outputs to match the preset sampling rate. A distinctive feature of DFPS is its adjustable parameter β, which enables manual calibration of the weight assigned to preserving local details, catering to varying requirements in point cloud data processing tasks. Apart from the adaptive hierarchical grid partitioning mechanism, the algorithm incorporates a multithreaded parallel acceleration architecture, ensuring computational efficiency during the downsampling process of point clouds. The comprehensive logical framework of the DFPS algorithm is illustrated in Figure 1, demonstrating its systematic workflow.
Figure 1. Flowchart of the DFPS algorithm, showing the specific logic of adaptive hierarchical grid partitioning.
As illustrated in Figure 1, the sampling process adopts an iterative farthest-point sampling approach to ensure both processing efficiency and sampling accuracy. The implementation follows these steps: Firstly, the raw data undergo spatial partitioning into eight equal segments. The system then evaluates whether the initial point cloud count N0 exceeds Nmin, where Nmin represents the predetermined threshold for initiating the adaptive hierarchical grid partitioning mechanism. This threshold is typically set to a relatively small value (such as 256) to prevent the DFPS adaptive hierarchical grid partitioning mechanism from reducing processing speed when handling small-scale point clouds. If N0 ≤ Nmin, the system performs grid merging followed by immediate grid recombination. If N0 > Nmin, the process proceeds with the adaptive hierarchical grid partitioning mechanism. This mechanism combines first-round farthest-point sampling to enable dynamic adjustment of local detail weights while significantly reducing the computational load for second-round farthest-point sampling.
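To make the workflow concrete, the following Python sketch mirrors the Figure 1 logic under stated assumptions: dfps_iteration, partition_and_sample, and octant_split are hypothetical names, and a random-subset sampler stands in for the farthest-point sampling routine formalized later in this section. It is a minimal sketch, not the authors' reference implementation.

```python
import numpy as np

def octant_split(points, center):
    """Assign each point to one of eight octants around the given center."""
    codes = ((points > center) * np.array([1, 2, 4])).sum(axis=1)
    return [points[codes == k] for k in range(8) if np.any(codes == k)]

def partition_and_sample(points, rate, theta_upper, n_min, sampler):
    """Recursive first round: split dense blocks further, sample leaf blocks."""
    if len(points) <= n_min or len(points) < theta_upper:
        k = max(1, int(round(len(points) * rate)))
        return sampler(points, k)
    blocks = octant_split(points, points.mean(axis=0))
    if len(blocks) == 1:                      # degenerate split; stop recursing
        k = max(1, int(round(len(points) * rate)))
        return sampler(points, k)
    return np.vstack([partition_and_sample(b, rate, theta_upper, n_min, sampler)
                      for b in blocks])

def dfps_iteration(points, rate, theta_upper, n_min=256, sampler=None):
    """One iteration of the Figure 1 workflow: recursive octant partitioning,
    first-round sampling per leaf block, merge, then a 50% second round."""
    if sampler is None:                       # random stand-in for the FPS routine
        rng = np.random.default_rng(0)
        sampler = lambda b, k: b[rng.choice(len(b), size=k, replace=False)]
    first_round = partition_and_sample(points, rate, theta_upper, n_min, sampler)
    return sampler(first_round, max(1, len(first_round) // 2))
```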
The adaptive hierarchical grid partitioning determination mechanism serves as the core component of the DFPS algorithm’s iterative optimization process. This determination rule, denoted as $\mathrm{Partition}(G_{i,k}^{l})$, governs the adaptive partitioning logic throughout the hierarchical grid construction:

$$\mathrm{Partition}(G_{i,k}^{l}) = \begin{cases} \mathrm{True}, & \text{if } N_{i,k}^{l} \ge \theta_{\mathrm{upper}} \\ \mathrm{False}, & \text{otherwise} \end{cases}$$
Here, $N_{i,k}^{l}$ denotes the number of points within the $k$-th unit of the $i$-th partition segment at the $l$-th hierarchical level ($S_{i,k}^{l}$). The parameter $\theta_{\mathrm{upper}}$ represents the point cloud partitioning threshold, defined as the minimum point count required to initiate a partition. This threshold is determined immediately after loading the raw data into the system, with its calculation formula expressed as follows:
$$\theta_{\mathrm{upper}} = \beta\,\frac{L_{\min}^{\,d}\cdot L_{0}^{\,\gamma}\,N_{0}}{n_{c}\,c_{b}}$$
Here, $\beta$ serves as an adjustable parameter, typically initialized to 1, which globally modulates the partitioning threshold. For instance, when enhanced preservation of local details is required in practical applications, reducing $\beta$ decreases $\theta_{\mathrm{upper}}$, thereby lowering the partitioning threshold. $\gamma$ is the parameter of the adaptive hierarchical iterative mechanism and is adjusted during the adaptive hierarchical iteration process. $N_{0}$ represents the total point count in the raw dataset, while $n_{c}$ denotes the CPU core count and $c_{b}$ indicates the base clock frequency of the processor. $L_{0}$ corresponds to the initial block size of the raw data, and $L_{\min}$ signifies the minimum resolution threshold, implemented to prevent resource inefficiency caused by excessive partitioning in extreme cases. $L_{\min}$ is quantified by the grid block dimensions after successive partitioning operations, with its determination formula expressed as:
$$L_{\min} = \frac{1}{\log_{10} N_{i}}\,L_{0}$$
When the adaptive hierarchical grid partitioning condition evaluates to True, the corresponding point cloud undergoes octant segmentation. Each resulting grid partition is dynamically assessed against the partitioning threshold $\theta_{\mathrm{upper}}$ until the iteration termination criterion is met. This iterative approach enables independent first-round farthest-point sampling (at the designated sampling rate) for all grid partitions. The sampled point sets $S_{i,k}^{l}$ from each grid partition are subsequently aggregated: $S = \bigcup_{i,k} S_{i,k}^{l}$. The unified point set $S$ then undergoes second-round farthest-point sampling, producing a 50% sampled output for the adaptive hierarchical iterative mechanism’s evaluation. This iterative framework simultaneously regulates the final output point cloud’s sampling rate. First, define $tgt_{0} = 1$ and, for each $n$, let $tgt_{n} = tgt_{n-1}\cdot\frac{n}{n+1}$ (so that $tgt_{n} = \frac{1}{n+1}$). Then there exists $n > 0$ such that $tgt_{n} > \text{sampling rate} > tgt_{n+1}$.
$$\mathrm{while}(i < n) = \begin{cases} \mathrm{True}: & \text{current sampling rate} = \dfrac{i}{i+1} \\ \mathrm{False}: & \text{current sampling rate} = \dfrac{a}{tgt(i)} \end{cases}$$
Here, $i$ represents the number of iterations, $n$ is the target number of iterations, and $tgt(i)$ is the approximation of the sampling rate calculated in the $i$-th iteration, while $a$ is the target sampling rate. At this point, the true value of the sampling rate in the $i$-th iteration is $\frac{i}{i+1}$. When the adaptive hierarchical iteration mechanism is determined to be True, parameter adjustments are made: for $\gamma$ in $\theta_{\mathrm{upper}}$, $\gamma = \gamma_{0} \cdot 1.05$, where $\gamma_{0}$ is the $\gamma$ value after the last iteration adjustment, and for $N_{\min}$, $N_{\min} = N_{\min,0} \cdot 0.75$.
When the value is False, no parameter adjustment is performed. The sampling rate is set as $rate = \frac{a}{tgt(n)}$, the final sample is taken, and the final result is output.
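The schedule above can be written out as a short helper. This is a minimal sketch under the reconstruction given above (one round per iteration at rate $i/(i+1)$, plus a closing pass of $a/tgt(n)$); the function name is hypothetical. It returns the per-round rates whose product equals the target rate.

```python
import math

def sampling_schedule(a):
    """Per-round sampling rates so that the cumulative rate equals a (0 < a < 1).
    Rounds proceed at i/(i+1) while tgt_i = 1/(i+1) stays above the target;
    a final pass of a / tgt(n) closes the remaining gap."""
    assert 0.0 < a < 1.0
    rates, tgt, i = [], 1.0, 0
    while tgt * (i + 1) / (i + 2) > a:       # i.e. tgt_(i+1) > a
        i += 1
        rates.append(i / (i + 1))
        tgt *= i / (i + 1)                   # tgt_i = 1/(i+1)
    rates.append(a / tgt)                    # final pass: overall product is a
    return rates

# Example: a 12.5% target rate yields rounds 1/2, 2/3, ..., 6/7 and a final 7/8.
rates = sampling_schedule(0.125)
print(rates, math.prod(rates))               # product -> 0.125
```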
DFPS dynamically determines the maximum point count per grid cell based on the computational capabilities of the target hardware. The input point cloud is partitioned into hierarchical grids using a subdivision methodology analogous to octree structures, with a critical distinction: DFPS automatically increases partitioning depth in densely populated regions to ensure uniform point distribution across all grids, thereby maximizing efficiency gains from multithreaded parallelization. This dynamic, hierarchical downsampling approach is better suited to extremely large point clouds, as it avoids the memory pressure that arises when farthest-point sampling is applied to the full dataset at once. This strategy balances local feature preservation with global structural coherence, ensuring optimal preservation of critical feature information in the final sampled output. The core objective of farthest-point sampling is to achieve non-uniform and efficient sampling of the original point cloud by iteratively selecting the point that is farthest from the set of already sampled points. The procedure is described below, with Figure 2 illustrating the principle of FPS [39]:
Figure 2. Schematic diagram of the farthest-point sampling, illustrating the basic logic of the farthest-point sampling through a simple example.
With a point cloud set $P = \{p_{i} \in \mathbb{R}^{3} \mid i = 1, \dots, N\}$, the iterative process of farthest-point sampling for generating a downsampled subset $S \subset P$ is formally defined as follows:
  • Randomly select a seed point $s_{0} \in P$ and initialize the sampled set as $S = \{s_{0}\}$.
  • Conduct iterative sampling: for each iteration $k = 1, \dots, K-1$, compute the minimum Euclidean distance from each candidate point $p \in P \setminus S$ to the current sampled set $S$: $d(p, S) = \min_{s \in S} \lVert p - s \rVert_{2}$. Identify the candidate point with the largest minimum distance: $s_{k} = \arg\max_{p \in P \setminus S} d(p, S)$. Lastly, augment the sampled set: $S \leftarrow S \cup \{s_{k}\}$. A minimal code sketch of this procedure follows the list.
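The sketch below follows this standard FPS formulation (random seed, incremental minimum-distance update). It is written in NumPy for illustration and is not the authors' implementation.

```python
import numpy as np

def fps(points, k, seed=None):
    """Farthest-point sampling: repeatedly add the point farthest from the
    already-selected set, tracking each point's distance to that set."""
    n = len(points)
    k = min(k, n)
    rng = np.random.default_rng(seed)
    selected = np.empty(k, dtype=np.int64)
    selected[0] = rng.integers(n)                          # random seed point
    # d[i] = minimum distance from point i to the selected set so far.
    d = np.linalg.norm(points - points[selected[0]], axis=1)
    for j in range(1, k):
        selected[j] = int(np.argmax(d))                    # farthest candidate
        d = np.minimum(d, np.linalg.norm(points - points[selected[j]], axis=1))
    return points[selected]

# Example: downsample a synthetic 100,000-point cloud to a 2048-point input.
cloud = np.random.default_rng(0).random((100_000, 3))
sampled = fps(cloud, 2048, seed=0)
```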
Furthermore, DFPS introduces a multithreaded parallel acceleration architecture that synergistically optimizes computational efficiency through two complementary parallelism strategies:
  • Acceleration for entire progress: This architecture parallelizes computations across adaptively partitioned grids. For instance, in the illustrated example, the initial point cloud is decomposed into eight grid blocks through one iteration of adaptive partitioning. Each grid block is assigned to an independent thread, enabling concurrent processing of localized farthest-point sampling (FPS) operations.
  • Acceleration for the single grid: Within each grid block, the algorithm further implements fine-grained parallelism during critical computational phases, such as the calculation of minimum Euclidean distances in FPS. This intra-grid parallelization exploits multi-core capabilities to accelerate distance metric evaluations and candidate point selection. A sketch of the coarse-grained dispatch follows this list.
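The coarse-grained dispatch can be sketched as follows, reusing the fps() helper from the sketch above. The paper does not specify an implementation language; because CPython threads are constrained by the GIL for this kind of loop, the sketch uses a process pool to obtain real parallelism, so the pool choice and function names are assumptions rather than the authors' design.

```python
from concurrent.futures import ProcessPoolExecutor
import numpy as np

def fps_block(args):
    """Run first-round FPS on one grid block (reuses fps() defined above)."""
    block, rate = args
    return fps(block, max(1, int(round(len(block) * rate))), seed=0)

def parallel_first_round(blocks, rate, workers=None):
    """One task per adaptively partitioned grid block (coarse-grained parallelism)."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return np.vstack(list(pool.map(fps_block, [(b, rate) for b in blocks])))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    blocks = [rng.random((5_000, 3)) for _ in range(8)]    # e.g. eight octant blocks
    merged = parallel_first_round(blocks, rate=0.25)
    final = fps(merged, len(merged) // 2)                  # 50% second round
```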
As depicted in Figure 3, this dual-layered acceleration mechanism operates at both the global grid decomposition level and the local per-grid computation level. The hierarchical approach minimizes computational redundancy while maximizing hardware resource utilization, particularly in scenarios involving recursive grid decomposition. By decoupling coarse-grained task parallelism from fine-grained data parallelism, DFPS achieves near-linear scalability with respect to available CPU cores, establishing a robust foundation for real-time processing of billion-scale point clouds.
Figure 3. Schematic diagram of multithreaded parallel acceleration architecture.

4. Conclusions

This paper addresses the problem of excessive redundant points in large-scale laser point cloud data, which degrade the efficiency of subsequent point cloud processing, as well as the inability of traditional downsampling algorithms to adequately preserve global features. To tackle these challenges, a DFPS (Dynamic Farthest-Point Sampling) downsampling algorithm based on an adaptive hierarchical grid partitioning mechanism is proposed. Experimental validation shows that DFPS significantly increases processing speed while preserving the integrity of both global and local features of the point cloud. This provides a novel technical approach for large-scale laser point cloud data processing in resource-constrained environments.

4.1. Research Summary

The DFPS algorithm incorporates an adaptive multi-level grid partitioning mechanism that synergistically combines dynamic recursive sampling logic with farthest-point sampling principles. Through recursive subdivision of point clouds into computational resource-optimized grid units and implementation of a multithreaded parallel acceleration architecture, the system achieves substantial computational efficiency enhancements. This approach resolves the serialization bottleneck inherent in conventional FPS iterative processes while reducing constructor invocations to optimize memory utilization, thereby achieving balanced workload distribution and refined memory access patterns. Comparative experiments reveal that DFPS demonstrates order-of-magnitude performance enhancements ranging from ~10× to ~60× compared to a non-adaptive hierarchical grid partitioning mechanism and a multi-thread-only optimized variant, with acceleration effects becoming progressively pronounced at lower sampling rates. The adaptive grid architecture enables DFPS to maintain significant processing speed advantages across point clouds spanning three orders of magnitude (103–106 points). On an Intel i7-9750H platform, DFPS reduces processing latency for million-point clouds from 161,665 s (baseline FPS at 12.5% sampling) to 71.64 s (2200× acceleration), with further efficiency gains observed at reduced sampling rates: 35,060 s (baseline FPS at 3.125%) versus 3.78 s (9278× acceleration). In addition to its high processing efficiency, DFPS exhibits robust global feature retention and effective redundant point elimination capabilities. By employing visual observation and quantitative analysis (with the chamfer distance as the measurement index), it is evident that DFPS can effectively preserve global feature information even at low sampling rates. It excels in retaining micro-topography details while efficiently eliminating redundant points, thereby providing high-quality input for subsequent point cloud data processing and computations. Furthermore, leveraging a lightweight multithreaded design on the CPU, DFPS operates independently of GPU heterogeneous computing, thus avoiding reliance on specialized frameworks such as CUDA. This enables seamless deployment in resource-constrained environments, including airborne systems and mobile devices, making it especially suitable for the downsampling of large-scale laser depth point clouds following stitching.

4.2. Application Prospects

The DFPS algorithm has been empirically validated for computational efficiency and operational robustness. Its superior capability in preserving global structural features, coupled with high-throughput processing performance, establishes it as a general-purpose solution for large-scale point cloud datasets. This methodology effectively addresses critical challenges, including redundant data elimination and computational overhead reduction in downstream processing pipelines. Furthermore, the integration of adaptive dynamic sampling with multi-scale feature retention mechanisms demonstrates significant potential for diverse engineering applications.
1. High-Precision Point Cloud Processing Applications
DFPS exhibits distinctive capabilities in precision-critical operations, including point cloud registration, semantic segmentation, and object classification. Conventional registration methodologies suffer from accuracy degradation induced by heterogeneous feature space distributions stemming from density variations. DFPS mitigates this limitation by establishing density-uniform point distributions, thereby enhancing alignment precision in overlapping regions. For semantic segmentation tasks, the algorithm’s adaptive hierarchical grid partitioning mechanism ensures comprehensive retention of both global topological characteristics and local geometric minutiae during redundancy removal. This dual-scale preservation property enables deep-learning frameworks to achieve enhanced discriminative performance on fine-grained features, particularly in texture-deficient regions such as road boundaries and foliage edges. As neural network-based point cloud analysis remains an active research frontier, DFPS serves as a preprocessing module that optimally balances computational efficiency with semantic consistency for the algorithm based on the deep-learning method.
2. Hierarchical Multi-Resolution Management for Ultra-Scale Point Cloud
The advancement of laser scanning hardware has precipitated exponential growth in point cloud data volumes. Post-stitching processing of bathymetric point clouds in large-scale maritime environments routinely generates datasets exceeding hundreds of millions of sampling points. DFPS’s adaptive hierarchical architecture demonstrates seamless interoperability with Level of Detail (LOD) theory, enabling multi-resolution visualization and computational management of ultra-scale point clouds through GPU-accelerated processing pipelines. Integrated with a viewpoint-adaptive resolution control mechanism, the algorithm dynamically adjusts sampling densities in response to observational proximity thresholds. Spatially varying sampling strategies across resolution hierarchies ensure optimal preservation of global geometric features during rendering while maintaining perceptual fidelity, thereby substantially mitigating computational overhead in DFPS-LOD integrated systems. This framework implements tiered data streaming: low-resolution proxies facilitate real-time interaction on mobile/web platforms, whereas cloud-hosted full-resolution datasets empower semantic parsing of navigation-critical features, including reef formations and submerged structures. The DFPS-LOD synergy consequently establishes a paradigm-shifting solution to storage-computation-visualization constraints inherent in large-scale digital twin ecosystems for coastal cities and oceanic environments.

Author Contributions

Conceptualization, J.D. and M.T.; Formal analysis, J.D., M.T. and J.Y.; Funding acquisition, M.T. and J.Y.; Investigation, J.D., M.T. and Y.S.; Methodology, J.D. and M.T.; Project administration, M.T. and J.Y.; Resources, G.L., Y.W. and Y.S.; Software, J.D.; Supervision, M.T., G.L. and Y.W.; Validation, J.D.; Visualization, J.D.; Writing–original draft, J.D.; Writing–review and editing, M.T. and J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 42106180).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Data available on request due to restrictions.

Conflicts of Interest

Author Guoyu Li was employed by the company Qingdao Xiushan Mobile Survey Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Al-Rawabdeh, A.; He, F.; Habib, A. Automated Feature-Based Down-Sampling Approaches for Fine Registration of Irregular Point Clouds. Remote Sens. 2020, 12, 1224. [Google Scholar] [CrossRef]
  2. Yang, X.; Fu, T.; Dai, G.; Zeng, S.; Zhong, K.; Hong, K.; Wang, Y. An Efficient Accelerator for Point-based and Voxel-based Point Cloud Neural Networks. In Proceedings of the 60th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 9–13 July 2023. [Google Scholar]
  3. Qin, J.; Yang, W.; Wu, T.; He, B.; Xiang, L. Incremental Road Network Update Method with Trajectory Data and UAV Remote Sensing Imagery. ISPRS Int. J. Geo-Inf. 2022, 11, 502. [Google Scholar] [CrossRef]
  4. Yang, M.; Li, Y.; Wang, S.; Yang, S.; Liu, H. RITNet: A Rotation Invariant Transformer based Network for Point Cloud Registration. In Proceedings of the 34th IEEE International Conference on Tools with Artificial Intelligence (ICTAI), Macao, China, 31 October–2 November 2022; pp. 616–621. [Google Scholar]
  5. Chen, Z.; Zeng, W.; Yang, Z.; Yu, L.; Fu, C.-W.; Qu, H. LassoNet: Deep Lasso-Selection of 3D Point Clouds. IEEE Trans. Vis. Comput. Graph. 2020, 26, 195–204. [Google Scholar] [CrossRef]
  6. Ku, J.; Mozifian, M.; Lee, J.; Harakeh, A.; Waslander, S.L. Joint 3D Proposal Generation and Object Detection from View Aggregation. In Proceedings of the 25th IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain, 1–5 October 2018; pp. 5750–5757. [Google Scholar]
  7. Han, S.; Yu, S.; Zhang, X.; Zhang, L.; Ran, C.; Zhang, Q.; Li, H. Research and application on deep learning-based point cloud completion for marine structures with point coordinate fusion and coordinate-supervised point cloud generator. Measurement 2025, 242, 116246. [Google Scholar] [CrossRef]
  8. Wang, H.; Cheng, Y.; Liu, N.; Zhao, Y.; Chan, J.C.-W.; Li, Z. An Illumination-Invariant Shadow-Based Scene Matching Navigation Approach in Low-Altitude Flight. Remote Sens. 2022, 14, 3869. [Google Scholar] [CrossRef]
  9. Deng, W.; Huang, K.; Chen, X.; Zhou, Z.; Shi, C.; Guo, R.; Zhang, H. RGB-D Based Semantic SLAM Framework for Rescue Robot. In Proceedings of the Chinese Automation Congress (CAC), Shanghai, China, 6–8 November 2020; pp. 6023–6028. [Google Scholar]
  10. Lin, Y.; Zhang, Z.; Tang, H.; Wang, H.; Han, S. PointAcc: Efficient Point Cloud Accelerator. In Proceedings of the 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), Athens, Greece, 18–22 October 2021; pp. 449–461. [Google Scholar]
  11. Zhang, H.; An, L.; Chu, V.W.; Stow, D.A.; Liu, X.; Ding, Q. Learning Adjustable Reduced Downsampling Network for Small Object Detection in Urban Environments. Remote Sens. 2021, 13, 3608. [Google Scholar] [CrossRef]
  12. Pinkham, R.; Zeng, S.; Zhang, Z. QuickNN: Memory and Performance Optimization of k-d Tree Based Nearest Neighbor Search for 3D Point Clouds. In Proceedings of the 26th IEEE International Symposium on High Performance Computer Architecture (HPCA), San Diego, CA, USA, 22–26 February 2020; pp. 180–192. [Google Scholar]
  13. Ji, J.; Wang, W.; Ning, Y.; Bo, H.; Ren, Y. Research on a Matching Method for Vehicle-Borne Laser Point Cloud and Panoramic Images Based on Occlusion Removal. Remote Sens. 2024, 16, 2531. [Google Scholar] [CrossRef]
  14. Harwin, S.; Lucieer, A. Assessing the Accuracy of Georeferenced Point Clouds Produced via Multi-View Stereopsis from Unmanned Aerial Vehicle (UAV) Imagery. Remote Sens. 2012, 4, 1573–1599. [Google Scholar] [CrossRef]
  15. Fan, L.; Pang, Z.; Zhang, T.; Wang, Y.-X.; Zhao, H.; Wang, F.; Wang, N.; Zhang, Z. Embracing Single Stride 3D Object Detector with Sparse Transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 8448–8458. [Google Scholar]
  16. Zeng, D.; Yu, F. Research on the Application of Big Data Automatic Search and Data Mining Based on Remote Sensing Technology. In Proceedings of the 3rd International Conference on Artificial Intelligence and Big Data (ICAIBD), Chengdu, China, 28–31 May 2020; pp. 122–127. [Google Scholar]
  17. Zhu, Y.B.; Brigham, J.C.; Fascetti, A. LiDAR-RGB Data Fusion for Four-Dimensional UAV-Based Monitoring of Reinforced Concrete Bridge Construction: Case Study of the Fern Hollow Bridge Reconstruction. J. Constr. Eng. Manag. 2025, 151, 05024016. [Google Scholar] [CrossRef]
  18. Tan, Y.; Deng, T.; Zhou, J.Y.; Zhou, Z.X. LiDAR-Based Automatic Pavement Distress Detection and Management Using Deep Learning and BIM. J. Constr. Eng. Manag. 2024, 150, 04024069. [Google Scholar] [CrossRef]
  19. Kim, S.; Kwon, D.; Chul, K.B. Visualization and Editing of Large Point Cloud Data Based on External Memory. Korean J. Comput. Des. Eng. 2020, 25, 267–276. [Google Scholar] [CrossRef]
  20. An, L.; Zhou, P.; Zhou, M.; Wang, Y.; Zhang, Q. PointTr: Low-Overlap Point Cloud Registration With Transformer. IEEE Sens. J. 2024, 24, 12795–12805. [Google Scholar] [CrossRef]
  21. Liu, H.Y.; Hou, M.L.; Li, A.Q.; Xie, L.L. An Automatic Extraction Method for the Parameters of Multi-Lod Bim Models for Typical Components of Wooden Architectural Heritage. In Proceedings of the 27th CIPA International Symposium on Documenting the Past for a Better Future, Avila, Spain, 1–5 September 2019; pp. 679–685. [Google Scholar]
  22. Qiu, S.; Anwar, S.; Barnes, N. Semantic Segmentation for Real Point Cloud Scenes via Bilateral Augmentation and Adaptive Fusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021; pp. 1757–1767. [Google Scholar]
  23. Al-Ghuribi, S.M.; Noah, S.A.M.; Mohammed, M.A.; Tiwary, N.; Saat, N.I.Y. A Comparative Study of Sentiment-Aware Collaborative Filtering Algorithms for Arabic Recommendation Systems. IEEE Access 2024, 12, 174441–174454. [Google Scholar] [CrossRef]
  24. Ak, A.; Zerman, E.; Quach, M.; Chetouani, A.; Smolic, A.; Valenzise, G.; Le Callet, P. BASICS: Broad Quality Assessment of Static Point Clouds in a Compression Scenario. IEEE Trans. Multimed. 2024, 26, 6730–6742. [Google Scholar] [CrossRef]
  25. Chung, M.; Jung, M.; Kim, Y. Enhancing Remote Sensing Image Super-Resolution Guided by Bicubic-Downsampled Low-Resolution Image. Remote Sens. 2023, 15, 3309. [Google Scholar] [CrossRef]
  26. Qi, C.R.; Yi, L.; Su, H.; Guibas, L.J. PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space. In Proceedings of the 31st Annual Conference on Neural Information Processing Systems (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  27. Cui, Y.; Zhang, Y.; Dong, J.; Sun, H.; Chen, X.; Zhu, F. LinK3D: Linear Keypoints Representation for 3D LiDAR Point Cloud. IEEE Robot. Autom. Lett. 2024, 9, 2128–2135. [Google Scholar] [CrossRef]
  28. Qiao, Z.; Yu, Z.; Jiang, B.; Yin, H.; Shen, S. G3Reg: Pyramid Graph-Based Global Registration Using Gaussian Ellipsoid Model. IEEE Trans. Autom. Sci. Eng. 2025, 22, 3416–3432. [Google Scholar] [CrossRef]
  29. Lyu, W.; Ke, W.; Sheng, H.; Ma, X.; Zhang, H. Dynamic Downsampling Algorithm for 3D Point Cloud Map Based on Voxel Filtering. Appl. Sci. 2024, 14, 3160. [Google Scholar] [CrossRef]
  30. Mirt, A.; Reiche, J.; Verbesselt, J.; Herold, M. A Downsampling Method Addressing the Modifiable Areal Unit Problem in Remote Sensing. Remote Sens. 2022, 14, 5538. [Google Scholar] [CrossRef]
  31. Qi, C.R.; Su, H.; Mo, K.; Guibas, L.J. PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 77–85. [Google Scholar]
  32. Li, Y.Y.; Bu, R.; Sun, M.C.; Wu, W.; Di, X.H.; Chen, B.Q. PointCNN: Convolution On X-Transformed Points. In Proceedings of the 32nd Conference on Neural Information Processing Systems (NIPS), Montreal, QC, Canada, 2–8 December 2018. [Google Scholar]
  33. Wan, H.; Nurmamat, P.; Chen, J.; Cao, Y.; Wang, S.; Zhang, Y.; Huang, Z. Fine-Grained Aircraft Recognition Based on Dynamic Feature Synthesis and Contrastive Learning. Remote Sens. 2025, 17, 768. [Google Scholar] [CrossRef]
  34. Han, M.; Wang, L.; Xiao, L.; Zhang, H.; Zhang, C.; Xu, X.; Zhu, J. QuickFPS: Architecture and Algorithm Co-Design for Farthest Point Sampling in Large-Scale Point Clouds. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 4011–4024. [Google Scholar] [CrossRef]
  35. Zhang, W.; Qi, J.; Wan, P.; Wang, H.; Xie, D.; Wang, X.; Yan, G. An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation. Remote Sens. 2016, 8, 501. [Google Scholar] [CrossRef]
  36. Wang, X.; Jin, Y.; Cen, Y.; Wang, T.; Tang, B.; Li, Y. LighTN: Light-Weight Transformer Network for Performance-Overhead Tradeoff in Point Cloud Downsampling. IEEE Trans. Multimed. 2025, 27, 832–847. [Google Scholar] [CrossRef]
  37. Wu, W.X.; Qi, O.G.; Li, F.X.; Soc, I.C. PointConv: Deep Convolutional Networks on 3D Point Clouds. In Proceedings of the 32nd IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; pp. 9613–9622. [Google Scholar]
  38. Wu, B.; Zhou, X.; Zhao, S.; Yue, X.; Keutzer, K. SqueezeSegV2: Improved Model Structure and Unsupervised Domain Adaptation for Road-Object Segmentation from a LiDAR Point Cloud. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, 20–24 May 2019; pp. 4376–4382. [Google Scholar]
  39. Eldar, Y.; Lindenbaum, M.; Porat, M.; Zeevi, Y.Y. The farthest point strategy for progressive image sampling. IEEE Trans. Image Process. 1997, 6, 1305–1315. [Google Scholar] [CrossRef]
  40. Qin, Z.; Yu, H.; Wang, C.; Guo, Y.; Peng, Y.; Xu, K. Geometric Transformer for Fast and Robust Point Cloud Registration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 11133–11142. [Google Scholar]
  41. Saad, W.; Bennis, M.; Chen, M.Z. A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems. IEEE Netw. 2020, 34, 134–142. [Google Scholar] [CrossRef]
  42. Fan, H.Q.; Su, H.; Guibas, L. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. In Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2463–2471. [Google Scholar]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
