Search Results (174)

Search Parameters:
Keywords = indoor point cloud data

26 pages, 4960 KB  
Article
TGR-T: Truncated-Gaussian-Weighted Reliability for Adaptive Dynamic Thresholding in Weakly Supervised Indoor 3D Point Cloud Segmentation
by Ziwei Luo, Xinyue Liu, Jun Jiang, Hanyu Qi, Chen Wang, Zhong Xie and Tao Zeng
ISPRS Int. J. Geo-Inf. 2026, 15(3), 108; https://doi.org/10.3390/ijgi15030108 - 4 Mar 2026
Viewed by 303
Abstract
Indoor 3D point cloud semantic segmentation is a fundamental task for fine-grained scene understanding and intelligent perception. Due to the prohibitive cost of dense point-wise annotations, weakly supervised learning has emerged as a promising alternative for indoor point cloud segmentation. However, existing weakly supervised methods commonly rely on fixed confidence thresholds for pseudo-label selection, which exhibit limited generalization caused by threshold sensitivity, underutilization of informative low-confidence regions, and progressive noise accumulation during self-training. To address these issues, we propose TGR-T, a weakly supervised framework for indoor 3D point cloud semantic segmentation that incorporates truncated-Gaussian-weighted reliability with adaptive dynamic thresholding. Specifically, a reliability-adaptive dynamic thresholding strategy is introduced to guide pseudo-label selection based on the evolving confidence statistics of unlabeled mini-batches, with exponential moving average smoothing employed to produce stable global estimates and robust separation of reliable and ambiguous regions. To further exploit uncertain regions, a learnable truncated Gaussian weighting function is designed to explicitly model prediction uncertainty within the ambiguous set, providing soft supervision by assigning adaptive weights to low-confidence predictions during optimization. 
Extensive experiments on standard indoor 3D scene benchmarks demonstrate that the proposed framework substantially improves the exploitation of unlabeled data: TGR-T achieves competitive or superior segmentation performance under extremely sparse supervision and, using only 1% of labeled points, can even outperform several fully supervised baselines trained with dense annotations, thereby substantially narrowing the performance gap between weakly supervised and fully supervised 3D semantic segmentation methods. Full article
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
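The two mechanisms described in the abstract (an EMA-smoothed dynamic threshold and truncated-Gaussian soft weights for ambiguous predictions) can be sketched roughly as follows. This is an illustrative reading of the abstract, not the authors' implementation; the momentum, sigma, and floor values are placeholders:

```python
import numpy as np

def ema_threshold(prev_tau, batch_conf, momentum=0.99):
    """Update the global confidence threshold as an exponential moving
    average of the current unlabeled mini-batch's mean confidence."""
    return momentum * prev_tau + (1.0 - momentum) * batch_conf.mean()

def truncated_gaussian_weights(conf, tau, sigma=0.1, floor=1e-3):
    """Soft supervision weights: reliable points (conf >= tau) count
    fully; ambiguous points get a Gaussian falloff in (tau - conf),
    truncated to zero below `floor` so hopeless points are dropped."""
    w = np.where(conf >= tau,
                 1.0,
                 np.exp(-0.5 * ((tau - conf) / sigma) ** 2))
    return np.where(w < floor, 0.0, w)

# toy per-point confidences from one unlabeled mini-batch
conf = np.array([0.95, 0.80, 0.55, 0.30])
tau = ema_threshold(0.7, conf)          # threshold tracks batch statistics
weights = truncated_gaussian_weights(conf, tau)
```

The EMA keeps the threshold stable across noisy batches, while the truncation realizes the split between reliable and ambiguous regions described in the abstract.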

22 pages, 5296 KB  
Article
Pepper-4D: Spatiotemporal 3D Pepper Crop Dataset for Phenotyping
by Foysal Ahmed, Dawei Li, Boyuan Zhao, Zhanjiang Wang, Jiali Huang, Tingzhicheng Li, Jingjing Huang, Jiahui Hou, Sayed Jobaer and Han Yan
Plants 2026, 15(4), 599; https://doi.org/10.3390/plants15040599 - 13 Feb 2026
Viewed by 763
Abstract
Pepper (Capsicum annuum) is a globally significant horticultural crop cultivated for its culinary, medicinal, and economic value. Traditional approaches to boosting pepper production, notably expanding farmland, have become increasingly unsustainable. Recent advances in artificial intelligence and 3D computer vision have begun to transform crop cultivation and phenotyping, shedding new light on increasing production through advanced breeding. However, the field still lacks 3D pepper data with enough detail for organ-level analysis. We therefore propose Pepper-4D, a new high-precision 4D point cloud dataset that records both the spatial structure and temporal development of pepper plants across continuous growth stages. The dataset is divided into three subsets comprising a total of 916 individual point clouds from 29 indoor-cultivated pepper plant samples, with manual annotations at both the plant and organ levels. It supports phenotyping tasks such as pepper growth status classification, organ semantic segmentation, organ instance segmentation, organ growth tracking, new organ detection, and even the generation of synthetic 3D pepper plants. Full article
(This article belongs to the Special Issue AI-Driven Machine Vision Technologies in Plant Science)

20 pages, 5876 KB  
Article
Dynamic Die-Forging Scene Semantic Segmentation via Point Cloud–BEV Feature Fusion with Star Encoding
by Xuewen Feng, Aiming Wang, Guoying Meng, Yiyang Xu, Jie Yang, Xiaohan Cheng, Yijin Xiong and Juntao Wang
Sensors 2026, 26(2), 708; https://doi.org/10.3390/s26020708 - 21 Jan 2026
Viewed by 332
Abstract
Semantic segmentation of workpieces and die cavities is critical for intelligent process monitoring and quality control in hammer die-forging. However, the field of 3D point cloud segmentation currently faces prominent limitations in forging scenario adaptation: existing state-of-the-art (SOTA) methods are predominantly optimized for road driving or indoor scenes, where targets have stable poses and regular surfaces. They lack dedicated designs for capturing the fine-grained deformation characteristics of forging workpieces and for alleviating the multi-scale feature misalignment caused by large pose variations—key pain points in forging segmentation. Consequently, these methods fail to balance the segmentation accuracy and real-time efficiency required for practical forging applications. To address this gap, this paper proposes a novel semantic segmentation framework fusing 3D point cloud and bird's-eye-view (BEV) representations for complex die-forging scenes. Specifically, a Star-based encoding module is designed in the BEV encoding stage to enhance the capture of fine-grained workpiece deformation characteristics. A hierarchical feature-offset alignment mechanism is developed in the decoding stage to alleviate multi-scale spatial and semantic misalignment, facilitating efficient cross-layer fusion. Additionally, a weighted adaptive fusion module enables complementary information interaction between the point cloud and BEV modalities to improve precision. We evaluate the proposed method on our self-constructed simulated and real die-forging point cloud datasets. The results show that when trained solely on simulated data and tested directly in real-world scenarios, our method achieves an mIoU that surpasses RPVNet by 1.1%. After fine-tuning with a small amount of real data, the mIoU further improves by 5%, reaching optimal performance. Full article

24 pages, 15285 KB  
Article
An Efficient and Accurate UAV State Estimation Method with Multi-LiDAR–IMU–Camera Fusion
by Junfeng Ding, Pei An, Kun Yu, Tao Ma, Bin Fang and Jie Ma
Drones 2025, 9(12), 823; https://doi.org/10.3390/drones9120823 - 27 Nov 2025
Viewed by 1095
Abstract
State estimation plays a vital role in UAV navigation and control. With the continuous decrease in sensor cost and size, UAVs equipped with multiple LiDARs, Inertial Measurement Units (IMUs), and cameras have attracted increasing attention. Such systems can acquire rich environmental and motion information from multiple perspectives, thereby enabling more precise navigation and mapping in complex environments. However, efficiently utilizing multi-sensor data for state estimation remains challenging, as there is a complex coupling between the IMU biases and the UAV state. To address these challenges, this paper proposes an efficient and accurate UAV state estimation method tailored for multi-LiDAR–IMU–camera systems. Specifically, we first construct an efficient distributed state estimation model. It decomposes the multi-LiDAR–IMU–camera system into a series of single LiDAR–IMU–camera subsystems, reformulating the complex coupling problem as an efficient distributed state estimation problem. Then, we derive an accurate feedback function to constrain and optimize the UAV state using the estimated subsystem states, thus enhancing overall estimation accuracy. Based on this model, we design an efficient distributed state estimation algorithm with multi-LiDAR–IMU–camera fusion, termed DLIC. DLIC achieves robust multi-sensor data fusion via shared feature maps, effectively improving both estimation robustness and accuracy. In addition, we design an accelerated image-to-point cloud registration module (A-I2P) to provide reliable visual measurements, further boosting state estimation efficiency. Extensive experiments are conducted on 18 real-world indoor and outdoor scenarios from the public NTU VIRAL dataset. The results demonstrate that DLIC consistently outperforms existing multi-sensor methods across key evaluation metrics, including RMSE, MAE, SD, and SSE.
More importantly, our method runs in real time on a resource-constrained embedded device equipped with only an 8-core CPU, while maintaining low memory consumption. Full article
(This article belongs to the Special Issue Advances in Guidance, Navigation, and Control)

17 pages, 8813 KB  
Article
A Fast Algorithm for Boundary Point Extraction of Planar Building Components from Point Clouds
by Yongzhong Huang, Ming Chen, Gaoming He and Jianming Liu
Electronics 2025, 14(21), 4313; https://doi.org/10.3390/electronics14214313 - 2 Nov 2025
Viewed by 887
Abstract
The boundaries of planar building components characterize the structural outline of buildings and are of great importance in applications such as indoor model reconstruction and localization. However, traditional methods for extracting boundary points of planar building components from point clouds are often constrained by high computational complexity, limited efficiency, and insufficient accuracy. To address these challenges, this paper presents a rapid algorithm for the direct extraction of boundary points from point cloud data. The algorithm first performs planar fitting and projects all points onto the fitted plane to mitigate the influence of outlier noise. Next, for each point, a plane perpendicular to the fitted plane is constructed through that point. Coarse boundary points are identified by counting the neighboring points on one side of this plane, which effectively eliminates most of the interior points. Finally, a boundary detection zone is defined for each coarse boundary point and its neighboring points, and the precise boundary points are extracted by counting the number of points within this defined region. Experimental validation indicates that our algorithm can extract boundary points of planar building components from point clouds both accurately and efficiently, with notable robustness against noise. Full article
(This article belongs to the Topic Intelligent Image Processing Technology)
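The pipeline above (plane fitting, in-plane projection, then a one-sided neighbor test per point) can be sketched in a simplified form. The snippet below substitutes a generic angular-gap criterion for the paper's exact counting scheme, so it illustrates the idea rather than reproducing the algorithm; all names and parameters are illustrative:

```python
import numpy as np

def project_to_plane(points):
    """Fit a plane by PCA/SVD and return each point's 2D coordinates
    in that plane, suppressing out-of-plane noise (the first step)."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T      # coordinates along the two main axes

def is_boundary(idx, pts2d, k=8, gap_deg=90.0):
    """A point is a boundary candidate when its k nearest neighbors
    leave a large angular gap on one side, i.e. they pile up in a
    half-plane instead of surrounding the point."""
    dist = np.linalg.norm(pts2d - pts2d[idx], axis=1)
    nbrs = np.argsort(dist)[1:k + 1]
    delta = pts2d[nbrs] - pts2d[idx]
    ang = np.sort(np.arctan2(delta[:, 1], delta[:, 0]))
    gaps = np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]]))
    return bool(np.degrees(gaps.max()) > gap_deg)

# flat 5x5 grid: corners are boundary points, the center is not
grid = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
flat = project_to_plane(grid)
```

For an interior point the neighbors surround it and no gap exceeds the threshold; for a corner or edge point the neighbors occupy one side only, leaving a wide gap.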

20 pages, 74841 KB  
Article
Autonomous Concrete Crack Monitoring Using a Mobile Robot with a 2-DoF Manipulator and Stereo Vision Sensors
by Seola Yang, Daeik Jang, Jonghyeok Kim and Haemin Jeon
Sensors 2025, 25(19), 6121; https://doi.org/10.3390/s25196121 - 3 Oct 2025
Cited by 2 | Viewed by 1639
Abstract
Crack monitoring in concrete structures is essential to maintaining structural integrity. Therefore, this paper proposes a mobile ground robot equipped with a 2-DoF manipulator and stereo vision sensors for autonomous crack monitoring and mapping. To facilitate crack detection over large areas, a 2-DoF motorized manipulator providing linear and rotational motions, with a stereo vision sensor mounted on the end effector, was deployed. In combination with a manual rotation plate, this configuration enhances accessibility and expands the field of view for crack monitoring. Another stereo vision sensor, mounted at the front of the robot, was used to acquire point cloud data of the surrounding environment, enabling tasks such as SLAM (simultaneous localization and mapping), path planning and following, and obstacle avoidance. Cracks are detected and segmented using the deep learning algorithms YOLO (You Only Look Once) v6-s and SFNet (Semantic Flow Network), respectively. To enhance the performance of crack segmentation, synthetic image generation and preprocessing techniques, including cropping and scaling, were applied. The dimensions of cracks are calculated using point clouds filtered with the median absolute deviation method. To validate the performance of the proposed crack-monitoring and mapping method with the robot system, indoor experimental tests were performed. The experimental results confirmed that, in cases of divided imaging, the crack propagation direction was predicted, enabling robotic manipulation and division-point calculation. Subsequently, total crack length and width were calculated by combining reconstructed 3D point clouds from multiple frames, with a maximum relative error of 1%. Full article

20 pages, 7575 KB  
Article
A Two-Step Filtering Approach for Indoor LiDAR Point Clouds: Efficient Removal of Jump Points and Misdetected Points
by Yibo Cao, Yonghao Huang and Junheng Ni
Sensors 2025, 25(19), 5937; https://doi.org/10.3390/s25195937 - 23 Sep 2025
Viewed by 871
Abstract
In the simultaneous localization and mapping (SLAM) process of indoor mobile robots, accurate and stable point cloud data are crucial for localization and environment perception. However, in practical applications, indoor mobile robots may encounter glass, smooth floors, and object edges. Point cloud data are often misdetected in such environments, especially at the intersections of flat surfaces and the edges of obstacles, which are prone to generating jump points. Smooth planes may also produce misdetected points due to reflective properties or sensor errors. To solve these problems, a two-step filtering method is proposed in this paper. In the first step, a clustering filtering algorithm based on radial distance and tangential span is used to filter out jump points effectively. The algorithm ensures accurate data by analyzing the spatial relationship between each point in the point cloud and its neighboring points, which allows it to identify and filter out the jump points. In the second step, a filtering algorithm based on a grid penetration model is used to further filter out misdetected points on smooth planes. The model eliminates unrealistic point cloud data and improves the overall quality of the point cloud by simulating how the beam penetrates objects. Experimental results in indoor environments show that this two-step filtering method significantly reduces jump points and misdetected points in the point cloud, leading to improved navigational accuracy and stability of indoor mobile robots. Full article
(This article belongs to the Section Radar Sensors)
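The jump-point notion in the first filtering step (a return whose radial distance differs sharply from both of its angular neighbors) can be sketched as follows. This is a simplified stand-in for the paper's radial-distance and tangential-span clustering, and the threshold is illustrative:

```python
import numpy as np

def filter_jump_points(ranges, max_jump=0.3):
    """Drop scan returns whose radial distance jumps sharply relative
    to BOTH angular neighbors: isolated spikes at surface/edge
    intersections fail both tests, while genuine depth steps (which
    jump on one side only) are kept."""
    r = np.asarray(ranges, dtype=float)
    prev_jump = np.abs(np.diff(r, prepend=r[0])) > max_jump
    next_jump = np.abs(np.diff(r, append=r[-1])) > max_jump
    return r[~(prev_jump & next_jump)]

scan = [1.00, 1.02, 5.00, 1.01, 1.00]   # one spurious spike at index 2
clean = filter_jump_points(scan)
```

Requiring a jump on both sides is what distinguishes a spurious spike from a legitimate discontinuity at an obstacle edge.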

38 pages, 10032 KB  
Article
Closed and Structural Optimization for 3D Line Segment Extraction in Building Point Clouds
by Ruoming Zhai, Xianquan Han, Peng Wan, Jianzhou Li, Yifeng He and Bangning Ding
Remote Sens. 2025, 17(18), 3234; https://doi.org/10.3390/rs17183234 - 18 Sep 2025
Cited by 1 | Viewed by 1338
Abstract
The extraction of architectural structural line features can simplify the 3D spatial representation of built environments, reduce the storage and processing burden of large-scale point clouds, and provide essential geometric primitives for downstream modeling tasks. However, existing 3D line extraction methods suffer from incomplete and fragmented contours, with missing or misaligned intersections. To overcome these limitations, this study proposes a patch-level framework for 3D line extraction and structural optimization from building point clouds. The proposed method first partitions point clouds into planar patches and establishes local image planes for each patch, enabling a structured 2D representation of unstructured 3D data. Then, graph-cut segmentation is proposed to extract compact boundary contours, which are vectorized into closed lines and back-projected into 3D space to form the initial line segments. To improve geometric consistency, regularized geometric constraints, including adjacency, collinearity, and orthogonality constraints, are further designed to merge homogeneous segments, refine topology, and strengthen structural outlines. Finally, we evaluated the approach on three indoor building environments and four outdoor scenes, and experimental results show that it reduces noise and redundancy while significantly improving the completeness, closure, and alignment of 3D line features in various complex architectural structures. Full article

31 pages, 6007 KB  
Article
Geometry and Topology Preservable Line Structure Construction for Indoor Point Cloud Based on the Encoding and Extracting Framework
by Haiyang Lyu, Hongxiao Xu, Donglai Jiao and Hanru Zhang
Remote Sens. 2025, 17(17), 3033; https://doi.org/10.3390/rs17173033 - 1 Sep 2025
Cited by 1 | Viewed by 2032
Abstract
The line structure is an efficient form of representation and modeling for LiDAR point clouds, while the Line Structure Construction (LSC) method aims to extract complete and coherent line structures from complex 3D point clouds, thereby providing a foundation for geometric modeling, scene understanding, and downstream applications. However, traditional LSC methods often fall short in preserving both the geometric integrity and topological connectivity of line structures derived from such datasets. To address this issue, we propose the Geometry and Topology Preservable Line Structure Construction (GTP-LSC) method, based on the Encoding and Extracting Framework (EEF). First, in the encoding phase, point cloud features related to line structures are mapped into a high-dimensional feature space. A 3D U-Net is then employed to compute Subsets with Structure feature of Line (SSL) from the dense, unstructured, and noisy indoor LiDAR point cloud data. Next, in the extraction phase, the SSL is transformed into a 3D field enriched with line features. Initially extracted line structures are then constructed based on Morse theory, effectively preserving the topological relationships. In the final step, these line structures are optimized using RANdom SAmple Consensus (RANSAC) and Constructive Solid Geometry (CSG) to ensure geometric completeness. This step also facilitates the generation of complex entities, enabling an accurate and comprehensive representation of both geometric and topological aspects of the line structures. Experiments were conducted using the Indoor Laser Scanning Dataset, focusing on the parking garage (D1), the corridor (D2), and the multi-room structure (D3). The results demonstrated that the proposed GTP-LSC method outperformed existing approaches in terms of both geometric integrity and topological connectivity. 
To evaluate the performance of different LSC methods, the IoU Buffer Ratio (IBR) was used to measure the overlap between the actual and constructed line structures. The proposed method achieved IBR scores of 92.5% (D1), 94.2% (D2), and 90.8% (D3) for these scenes. Additionally, Precision, Recall, and F-Score were calculated to further assess the LSC results. The F-Score of the proposed method was 0.89 (D1), 0.92 (D2), and 0.89 (D3), demonstrating superior performance in both visual analysis and quantitative results compared to other methods. Full article
(This article belongs to the Special Issue Point Cloud Data Analysis and Applications)
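The RANSAC refinement used in the final optimization step can be illustrated with a minimal 3D line fit (generic two-point-sample RANSAC, not the paper's CSG-coupled version; the iteration count and tolerance are illustrative):

```python
import numpy as np

def ransac_line_inliers(points, iters=200, tol=0.05, seed=0):
    """Fit a 3D line by RANSAC: repeatedly sample two support points
    and keep the candidate whose infinite line has the most inliers
    within distance `tol`."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(points), dtype=bool)
    for _ in range(iters):
        a, b = points[rng.choice(len(points), 2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-12:                 # degenerate (coincident) sample
            continue
        d /= norm
        # point-to-line distance via the cross product
        res = np.linalg.norm(np.cross(points - a, d), axis=1)
        inliers = res < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best

# 20 collinear points plus 5 gross outliers
line_pts = np.stack([np.linspace(0, 1, 20),
                     np.zeros(20), np.zeros(20)], axis=1)
outliers = np.array([[0.5, 1.0, 0.0], [0.2, -0.8, 0.4],
                     [0.9, 0.5, -0.6], [0.1, 0.7, 0.7],
                     [0.6, -0.5, 0.5]])
pts = np.vstack([line_pts, outliers])
mask = ransac_line_inliers(pts)
```

The consensus set is robust to the outliers, which is why RANSAC is a natural choice for cleaning up initial line segments before merging them.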

23 pages, 4627 KB  
Article
Dynamic SLAM Dense Point Cloud Map by Fusion of Semantic Information and Bayesian Moving Probability
by Qing An, Shao Li, Yanglu Wan, Wei Xuan, Chao Chen, Bufan Zhao and Xijiang Chen
Sensors 2025, 25(17), 5304; https://doi.org/10.3390/s25175304 - 26 Aug 2025
Viewed by 1789
Abstract
Most existing Simultaneous Localization and Mapping (SLAM) systems rely on the assumption of static environments to achieve reliable and efficient mapping. However, such methods often suffer from degraded localization accuracy and mapping consistency in dynamic settings, as they lack explicit mechanisms to distinguish between static and dynamic elements. To overcome this limitation, we present BMP-SLAM, a vision-based SLAM approach that integrates semantic segmentation and Bayesian motion estimation to robustly handle dynamic indoor scenes. To enable real-time dynamic object detection, we integrate YOLOv5, a semantic segmentation network that identifies and localizes dynamic regions within the environment, into a dedicated dynamic target detection thread. Simultaneously, the Bayesian moving-probability data association proposed in this paper effectively eliminates dynamic feature points and reduces the impact of dynamic targets on the SLAM system. To enhance complex indoor robotic navigation, the proposed system integrates semantic keyframe information with dynamic object detection outputs to reconstruct high-fidelity 3D point cloud maps of indoor environments. The evaluation conducted on the TUM RGB-D dataset indicates that BMP-SLAM outperforms ORB-SLAM3, improving trajectory tracking accuracy by 96.35%. Comparative evaluations demonstrate that the proposed system achieves superior performance in dynamic environments, exhibiting both lower trajectory drift and higher positioning precision than state-of-the-art dynamic SLAM methods. Full article
(This article belongs to the Special Issue Indoor Localization Technologies and Applications)
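The moving-probability bookkeeping can be sketched as a per-feature Bayes update driven by frame-wise dynamic/static observations from the segmentation network. This is an interpretation of the abstract with made-up likelihood values, not the paper's exact formulation:

```python
def update_moving_probability(prior, observed_dynamic,
                              p_det_moving=0.9, p_det_static=0.2):
    """One Bayes update of a feature point's probability of being on a
    moving object. `p_det_moving` / `p_det_static` are the (illustrative)
    chances that the segmenter flags a moving / static point as dynamic."""
    if observed_dynamic:
        num = p_det_moving * prior
        den = num + p_det_static * (1.0 - prior)
    else:
        num = (1.0 - p_det_moving) * prior
        den = num + (1.0 - p_det_static) * (1.0 - prior)
    return num / den

# a feature repeatedly flagged as dynamic becomes untrustworthy
p = 0.5
for _ in range(3):
    p = update_moving_probability(p, observed_dynamic=True)
```

Features whose probability exceeds a cutoff would then be excluded from tracking, so persistent dynamic points stop corrupting the pose estimate.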

25 pages, 2435 KB  
Article
FAIS: Fully Automatic Indoor Surveying Framework of Terrestrial Laser Scanning Point Clouds in Large-Scale Indoor Environments
by Wenhao Li, Tong Jia, Shiyi Guo, Yunchun Zhou, Yizhe Liu and Hao Wang
Remote Sens. 2025, 17(16), 2863; https://doi.org/10.3390/rs17162863 - 17 Aug 2025
Viewed by 1385
Abstract
This article presents a novel fully automatic indoor surveying (FAIS) framework for large-scale indoor environments using a Terrestrial Laser Scanning (TLS) hardware system. Traditional methods for indoor surveying are labor-intensive and time-consuming, as they rely on manually positioning scanners for data capture and placing markers for registration. Moreover, manual scanner placement may cause uneven scanning or rescanning, particularly in unstructured areas. To ensure full coverage of the scene, we determine the number and locations of scan stations using a Signed Distance Function (SDF) based method. Meanwhile, we propose an efficient, marker-free registration method for large-scale dense point clouds. The proposed framework is suited to environments where the scanner operates on a flat surface, such as office spaces, theater stages, urban areas, and some cultural heritage scenic areas. Experiments demonstrate that the proposed method decreases computation time and obtains a more complete point cloud. Full article

18 pages, 12540 KB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Cited by 1 | Viewed by 1816
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

28 pages, 6171 KB  
Article
Error Distribution Pattern Analysis of Mobile Laser Scanners for Precise As-Built BIM Generation
by Sung-Jae Bae, Junbeom Park, Joonhee Ham, Minji Song and Jung-Yeol Kim
Appl. Sci. 2025, 15(14), 8076; https://doi.org/10.3390/app15148076 - 20 Jul 2025
Cited by 1 | Viewed by 1203
Abstract
Point clouds acquired by mobile laser scanners (MLS) are widely used for generating as-built building information models (BIM), particularly in indoor construction environments and existing buildings. While MLS offers fast and efficient scanning through SLAM technology, its accuracy and precision remain lower than those of terrestrial laser scanners (TLS). This study investigates the potential to improve MLS-based as-built BIM accuracy by analyzing and utilizing the error distribution patterns inherent in MLS point clouds. Based on the assumption that each MLS device exhibits consistent and unique error distribution patterns, an experiment was conducted using three MLS devices and TLS-derived reference data. The analysis employed iterative closest point (ICP) registration and cloud-to-mesh (C2M) distance measurements on mock-ups with closed shapes. The results revealed that error patterns were stable across scans and could be leveraged as correction factors. In other words, when using MLS for as-built BIM generation, robust fitting methods have limitations in obtaining realistic object dimensions, as they do not account for the unique error patterns present in MLS point clouds. The proposed method provides a simple and repeatable approach for enhancing MLS accuracy, contributing to improved dimensional reliability in MLS-driven BIM applications. Full article
(This article belongs to the Special Issue Construction Automation and Robotics)
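The ICP registration step used in the analysis can be illustrated with a minimal point-to-point version (a textbook Kabsch-based ICP with brute-force matching, not the authors' pipeline; it is only practical for small clouds):

```python
import numpy as np

def best_fit_transform(src, dst):
    """Kabsch solution: the rigid rotation R and translation t that
    minimize ||R @ src_i + t - dst_i|| over matched point pairs."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=10):
    """Minimal nearest-neighbor ICP: match each source point to its
    closest target point, re-fit the rigid transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None], axis=2)
        R, t = best_fit_transform(cur, dst[d.argmin(axis=1)])
        cur = cur @ R.T + t
    return cur

# recover a small rigid offset applied to a 3x3x3 reference grid
dst = np.array([[i, j, k] for i in range(3)
                for j in range(3) for k in range(3)], dtype=float)
theta = 0.05
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
src = dst @ R0.T + np.array([0.05, -0.03, 0.02])
aligned = icp(src, dst)
```

After registration, the per-point residuals between the aligned MLS cloud and the reference are exactly the quantities from which systematic error patterns can be read off.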

24 pages, 6341 KB  
Article
A Comparative Study of Indoor Accuracies Between SLAM and Static Scanners
by Anna Chrbolková, Martin Štroner, Rudolf Urban, Ondřej Michal, Tomáš Křemen and Jaroslav Braun
Appl. Sci. 2025, 15(14), 8053; https://doi.org/10.3390/app15148053 - 19 Jul 2025
Cited by 4 | Viewed by 3702
Abstract
This study presents a comprehensive comparison of static and SLAM (Simultaneous Localization and Mapping) laser scanners of both new and old generations in a controlled indoor environment of a standard commercial building with long, linear corridors and recesses. The aim was to assess the global and local accuracy, as well as the noise characteristics, of each scanner. A highly accurate static scanner was used to generate a reference point cloud. Five devices were evaluated: two static scanners (Leica RTC 360 and Trimble X7) and three SLAM scanners (GeoSLAM ZEB Horizon RT, Emesent Hovermap ST-X, and FARO Orbis). The accuracy analysis included systematic and random error assessment, axis-specific displacement evaluation, and profile-based local accuracy measurements. Additionally, noise was quantified before and after data smoothing. Static scanners yielded superior accuracies, with the Leica RTC 360 achieving the best performance (absolute accuracy of 1.2 mm). Among the SLAM systems, the Emesent Hovermap ST-X and FARO Orbis—both newer-generation devices—demonstrated significant improvements over the older-generation GeoSLAM ZEB Horizon RT. After smoothing, the noise levels of these new-generation SLAM scanners (approx. 2.1–2.2 mm) approached those of static systems. The findings underline the ongoing technological progress in SLAM systems, with new-generation SLAM scanners becoming increasingly viable alternatives to static scanners, especially when speed, ease of use, and reduced occlusions are prioritized. This makes them well-suited for rapid indoor mapping applications, provided that the slightly lower accuracy is acceptable for the intended use. Full article

25 pages, 5526 KB  
Article
Implementation of Integrated Smart Construction Monitoring System Based on Point Cloud Data and IoT Technique
by Ju-Yong Kim, Suhyun Kang, Jungmin Cho, Seungjin Jeong, Sanghee Kim, Youngje Sung, Byoungkil Lee and Gwang-Hee Kim
Sensors 2025, 25(13), 3997; https://doi.org/10.3390/s25133997 - 26 Jun 2025
Cited by 6 | Viewed by 5355
Abstract
This study presents an integrated smart construction monitoring system that combines point cloud data (PCD) from a 3D laser scanner with real-time IoT sensors and ultra-wideband (UWB) indoor positioning technology to enhance construction site safety and quality management. The system addresses the limitations of traditional BIM-based methods by leveraging high-precision PCD that accurately reflects actual site conditions. Field validation was conducted over 17 days at a residential construction site, focusing on two floors during concrete pouring. The concrete strength prediction model, based on the ASTM C1074 maturity method, achieved prediction accuracy within 1–2 MPa of measured values (e.g., predicted: 26.2 MPa vs. actual: 25.3 MPa at 14 days). The UWB-based worker localization system demonstrated a maximum positioning error of 1.44 m with 1 s update intervals, enabling real-time tracking of worker movements. Static accuracy tests showed localization errors of 0.80–0.94 m under clear line-of-sight and 1.14–1.26 m under partial non-line-of-sight. The integrated platform successfully combined PCD visualization with real-time sensor data, allowing construction managers to monitor concrete curing progress and worker safety simultaneously. Full article
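The strength prediction follows the temperature-time factor of the ASTM C1074 maturity method, which can be sketched in a few lines. The strength-maturity coefficients below are made-up placeholders; in practice they are calibrated per concrete mix in the laboratory:

```python
import math

def temperature_time_factor(temps_c, dt_hours=1.0, datum_c=0.0):
    """Nurse-Saul temperature-time factor (ASTM C1074):
    M = sum over the curing history of (T_a - T_0) * dt, with
    temperatures at or below the datum contributing nothing."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

def strength_from_maturity(m, a=-5.0, b=4.2):
    """Logarithmic strength-maturity relation S = a + b * ln(M);
    the coefficients a and b here are illustrative, not calibrated."""
    return a + b * math.log(m)

# 24 h of curing at a constant 20 degrees C, logged hourly
maturity = temperature_time_factor([20.0] * 24)   # degC-hours
strength = strength_from_maturity(maturity)       # MPa (illustrative)
```

Feeding the logged IoT temperature history through this calculation is what lets the platform report curing progress alongside the PCD visualization.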
