Search Results (645)

Search Parameters:
Keywords = point cloud registration

25 pages, 36715 KB  
Article
Development of an Autonomous UAV for Multi-Modal Mapping of Underground Mines
by Luis Escobar, David Akhihiero, Jason N. Gross and Guilherme A. S. Pereira
Robotics 2026, 15(3), 63; https://doi.org/10.3390/robotics15030063 - 19 Mar 2026
Abstract
Underground mine inspection is a critical operation for safety and resource management. It presents unique challenges, including confined spaces, harsh environments, and the lack of reliable positioning systems. This paper presents the design, development, and evaluation of an Unmanned Aerial Vehicle (UAV) specifically engineered for supervised autonomous inspection in subterranean scenarios. Key technical contributions include mechanical adaptations for collision tolerance, an optimized sensor-actuator selection for navigation, and the deployment of a mission-governing state machine for seamless autonomous acquisition. Furthermore, we detail the data treatment workflow, employing a multi-modal point cloud registration technique that successfully integrates high-resolution visual-depth scans of critical mine pillars into a comprehensive, globally referenced map derived from Light Detection and Ranging (LiDAR) data of the entire workspace. We present experiments that illustrate and validate our approach in two real-world scenarios: a simulated coal mine used to train mine rescue teams and an operating limestone mine.
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)

22 pages, 13068 KB  
Article
A Block-Wise ICP Method for Retrieving 3D Landslide Displacement Vectors Based on Terrestrial Laser Scanning Point Clouds
by Zhao Xian, Jia-Wen Zhou, Zhi-Yu Li, Yuan-Mao Xu and Nan Jiang
Remote Sens. 2026, 18(6), 923; https://doi.org/10.3390/rs18060923 - 18 Mar 2026
Abstract
Terrestrial laser scanning (TLS) provides dense point clouds for landslide monitoring, yet occlusion, heterogeneous point density, and seasonal vegetation introduce noise and unstable deformation boundaries in multi-temporal change detection. To overcome the limitations of the multiscale model-to-model cloud comparison (M3C2) method under dominant downslope tangential motion and vegetation disturbance, we propose a block-wise ICP method to retrieve 3D displacement vectors. The scene is partitioned into local sub-blocks; rigid registration is performed within each sub-block, and the estimated translation is assigned to the sub-block center. A two-stage matching and quality control procedure removes under-constrained sub-blocks, enabling the direct retrieval of 3D displacement vectors and interpretable boundaries. Applied to the Longxigou landslide in Wenchuan using RIEGL VZ-2000i surveys on 1 November 2023 and 23 May 2024, the proposed method produces a more continuous displacement field and clearer boundaries than M3C2. For a tower target, manual measurements indicate a displacement of 0.41–0.63 m; our estimates are within 0.33–0.40 m, whereas M3C2 mostly falls between −0.25 and 0.25 m. In a seasonal vegetation change scene, we detect a canopy envelope expansion of approximately 0.20–0.40 m, while M3C2 shows scattered canopy responses that hinder boundary interpretation. A sensitivity analysis indicates a block-scale trade-off between boundary stability and peak preservation, motivating adaptive multi-scale blocking and uncertainty quantification.
(This article belongs to the Special Issue Advances in Remote Sensing Technology for Ground Deformation)
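The core of the block-wise scheme — a rigid fit inside each sub-block, with the estimated translation assigned to the sub-block center — can be sketched with a least-squares (Kabsch/SVD) fit. A minimal NumPy illustration, assuming already-matched point pairs and simple 1-D slicing in place of true spatial sub-blocks; all function names are ours, not from the paper:

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rigid transform (R, t) mapping P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def blockwise_displacements(P, Q, block_size):
    """Partition matched epochs P, Q into sub-blocks (here: slices along x),
    fit a rigid transform per block, and report the motion of the block
    center -- the 3D displacement vector assigned to that sub-block."""
    order = np.argsort(P[:, 0])
    P, Q = P[order], Q[order]
    vectors = []
    for i in range(0, len(P) - block_size + 1, block_size):
        Pb, Qb = P[i:i + block_size], Q[i:i + block_size]
        R, t = rigid_fit(Pb, Qb)
        center = Pb.mean(0)
        disp = (R @ center + t) - center   # displacement of block center
        vectors.append((center, disp))
    return vectors
```

With two matched epochs differing by a pure 0.1 m shift along x, every sub-block reports a displacement of (0.1, 0, 0); the paper's two-stage matching and quality control, which drop under-constrained sub-blocks, are omitted here.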

29 pages, 27328 KB  
Article
Robust-Registration-Based Systematic Error Correction for Time-Series Point Clouds
by Chao Zhu, Fuquan Tang, Qian Yang, Jingxiang Li, Junlei Xue, Jiawei Yi and Yu Su
Appl. Sci. 2026, 16(6), 2776; https://doi.org/10.3390/app16062776 - 13 Mar 2026
Abstract
Accurate registration of multi-temporal LiDAR point clouds is essential for reliable monitoring of mining subsidence. Systematic errors in point clouds acquired at different times can arise from GNSS/INS positioning drift, sensor calibration bias, and differences in observation geometry. These errors typically manifest as global reference shifts or gradual distortions. When such errors are superimposed on real terrain changes, they can mask subsidence signals and introduce observational pseudo-differences, making it harder to separate actual subsidence from artifacts. To address this issue, this study proposes Robust-Registration-Based Systematic Error Correction for Time-Series Point Clouds (RR-SEC), which establishes a consistent reference framework across epochs. The method does not assume that stable areas remain strictly unchanged. Instead, it identifies regions whose local change patterns are more temporally consistent using an information entropy analysis of multi-temporal differences. Under complex terrain, the method selects points with lower difference entropy as stable control points and uses them to constrain the registration process. It then performs Generalized Iterative Closest Point (GICP) rigid registration under these constraints to estimate the overall three-dimensional translation and rotation between point clouds from different periods. The estimated transformation is applied to the entire point cloud to correct inter-epoch reference mismatches and unify the coordinate reference across all epochs. Comprehensive validation using simulated complex terrain data containing rigid reference biases and non-rigid deformations, as well as UAV LiDAR data collected from the MuduChaideng Coal Mine, shows that, compared with the baseline GICP method, RR-SEC reduces alignment errors, decreasing the mean residual in stable areas by approximately 85%. The subsidence values computed from the corrected point clouds are more consistent with measured values, and the spatial deformation patterns are easier to interpret. RR-SEC demonstrates robust performance and can serve as a practical approach to improve the accuracy of deformation monitoring in mining areas and potentially other geoscientific applications.
(This article belongs to the Section Earth Sciences)
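The stable-control-point idea — rank points by the Shannon entropy of their multi-temporal differences and keep the most temporally consistent ones — can be sketched as follows. This is an illustrative reading of the entropy step, not the paper's implementation; the bin count, value range, and keep ratio are invented parameters:

```python
import numpy as np

def difference_entropy(diffs, bins=8, value_range=(-1.0, 1.0)):
    """Shannon entropy of one point's multi-temporal difference samples.
    Low entropy = the local change pattern is consistent across epochs."""
    hist, _ = np.histogram(diffs, bins=bins, range=value_range)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def select_stable_points(per_point_diffs, keep_ratio=0.3):
    """per_point_diffs: (n_points, n_epoch_pairs) array of local differences.
    Returns indices of the lowest-entropy points, to be used as stable
    control points constraining the subsequent rigid registration."""
    H = np.array([difference_entropy(d) for d in per_point_diffs])
    k = max(1, int(len(H) * keep_ratio))
    return np.argsort(H)[:k]
```

Points whose inter-epoch differences always land in the same histogram bin get entropy 0 and are selected first; points with scattered differences (vegetation, unstable slopes) rank last.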

27 pages, 15287 KB  
Article
Optimizing 3D LiDAR Installation Height for High-Fidelity Canopy Phenotyping in Spindle-Shaped Orchards
by Limin Liu, Yuzhen Dong, Xijie Liao, Chunxiao Li, Yirong Han, Sen Li, Qingqing Xin and Weili Liu
Horticulturae 2026, 12(3), 331; https://doi.org/10.3390/horticulturae12030331 - 10 Mar 2026
Abstract
High-fidelity acquisition of canopy phenotypic data is critical for the advancement of orchard Artificial Intelligence (AI). Yet, an improper Light Detection and Ranging (LiDAR) installation height (IH) frequently induces data occlusion and substantial measurement errors. To address this limitation, this study developed an information collection vehicle (ICV) integrated with a 16-channel three-dimensional (3D) LiDAR to determine the optimal LiDAR IH. Three representative LiDAR IHs (1.4 m, 2.0 m, and 2.6 m) were evaluated on spindle-shaped cherry trees under both forward and reverse driving strategies. Subsequently, a novel 12-zone refined evaluation framework was introduced to quantify localized errors that are conventionally obscured by traditional whole-canopy metrics. Results demonstrated a pronounced nonlinear relationship between IH and measurement accuracy. Specifically, the 2.0 m IH (approximating the canopy’s geometric center) emerged as the optimal setup, maintaining relative errors (REs) below 5% with minimal dispersion. Conversely, the 2.6 m IH caused lower-canopy volume REs to surge beyond 16% owing to restricted downward viewing angles. Additionally, reverse driving at higher IHs exacerbated mechanical vibrations via the “lever arm effect”, thereby significantly degrading point cloud registration accuracy. Ultimately, these findings underscore the necessity of aligning sensors with the canopy geometric center, providing practical guidelines for the hardware design of future orchard robots.

26 pages, 4902 KB  
Article
Multi-Sensor-Assisted Navigation for UAVs in Power Inspection: A Fusion Approach Using LiDAR, IMU and GPS
by Anjun Wang, Wenbin Yu, Xuexing Dong, Yang Yang, Shizeng Liu, Jiahao Liu and Hongwei Mei
Appl. Sci. 2026, 16(6), 2632; https://doi.org/10.3390/app16062632 - 10 Mar 2026
Abstract
High-precision localization is essential for autonomous navigation and environment perception of unmanned aerial vehicles (UAVs) in complex power inspection scenarios. To overcome the limited accuracy and accumulated drift of conventional GPS-based single-sensor localization, this paper proposes a LiDAR–IMU–GPS-aided navigation method that combines a tightly coupled front-end and a loosely coupled back-end. The front-end employs an improved Lie-group-based UKF-SLAM framework to explicitly handle the nonlinearities of rotational motion, thereby improving the stability of local pose estimation. The back-end integrates GPS absolute constraints, loop closure detection, and point cloud registration via pose graph optimization, which effectively suppresses long-term accumulated drift. The framework achieves accurate and robust localization for UAV power inspection. Experiments on public benchmark datasets and real-world power inspection scenarios demonstrate the effectiveness of the proposed method. On the MH_02_easy sequence, the absolute trajectory error is reduced from 0.521 m to 0.170 m compared with ROVIO, while in a real inspection sequence the cumulative error is reduced by more than 99% after back-end optimization. Moreover, the system maintains stable navigation under GPS-degraded conditions, indicating strong robustness and practical applicability.

19 pages, 3692 KB  
Article
Automated Processing and Deviation Analysis of 3D Pipeline Point Clouds Based on Geometric Features
by Shaofeng Jin, Kangrui Fu, Chengzhen Yang and Huanhuan Rui
J. Imaging 2026, 12(3), 115; https://doi.org/10.3390/jimaging12030115 - 9 Mar 2026
Abstract
To meet the strict non-contact measurement requirements for the assembly of aircraft engine pipelines and to overcome the limitations of the traditional three-dimensional laser scanning workflow, this study proposes an automated pipeline point cloud processing and deviation analysis framework. Through a standardized three-dimensional laser scanning procedure, high-resolution pipeline point clouds are obtained and preprocessed. Based on the geometric characteristics of the pipeline, automated algorithms for point cloud feature segmentation, axis extraction, and model registration are developed. In particular, the three-dimensional extended Douglas–Peucker (DP) algorithm is introduced to achieve efficient point cloud downsampling while retaining necessary geometric and structural features. These algorithms are fully integrated into a unified software platform that supports one-click operation and automatically computes five key types of pipeline deviation: angular deviation, radial deviation, axial deviation, roundness error, and diameter error. The platform also provides intuitive visualization and comprehensive report generation to facilitate quantitative inspection and analysis. Test results show that the proposed method significantly improves the processing efficiency and measurement reliability of complex pipeline systems. The developed framework provides a practical solution for the automated geometric inspection of aircraft engine pipelines and lays a solid foundation for subsequent quality assessment tasks.
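For reference, the classic Douglas–Peucker recursion extends to 3D by swapping in a 3D point-to-line distance; the paper's point cloud extension is more involved, but the core test looks like this. An illustrative sketch applied to an ordered polyline (such as an extracted pipeline axis):

```python
import numpy as np

def point_line_dist(p, a, b):
    """Perpendicular distance from 3D point p to the line through a and b."""
    ab = b - a
    n = np.linalg.norm(ab)
    if n == 0:
        return np.linalg.norm(p - a)
    return np.linalg.norm(np.cross(p - a, ab)) / n

def douglas_peucker_3d(points, tol):
    """Recursive Douglas-Peucker simplification of an ordered 3D polyline:
    keep the farthest interior point if it deviates more than tol from the
    chord, then recurse on both halves; otherwise keep only the endpoints."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    d = np.array([point_line_dist(p, points[0], points[-1])
                  for p in points[1:-1]])
    i = int(np.argmax(d)) + 1
    if d[i - 1] <= tol:
        return np.vstack([points[0], points[-1]])
    left = douglas_peucker_3d(points[:i + 1], tol)
    right = douglas_peucker_3d(points[i:], tol)
    return np.vstack([left[:-1], right])   # drop duplicated split point
```

Collinear runs collapse to their endpoints while sharp features (e.g. a pipeline bend) survive, which is what lets the downsampling retain geometric structure.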

24 pages, 4915 KB  
Article
Semantic-Guided Matching of Heterogeneous UAV Imagery and Mobile LiDAR Data Using Deep Learning and Graph Neural Networks
by Tee-Ann Teo, Hao Yu and Pei-Cheng Chen
Drones 2026, 10(3), 185; https://doi.org/10.3390/drones10030185 - 8 Mar 2026
Abstract
The integration of heterogeneous geospatial data, specifically low-cost unmanned aerial vehicle (UAV) imagery and mobile light detection and ranging (LiDAR) system point clouds, presents a significant challenge due to the substantial radiometric and structural discrepancies between the two modalities. This study proposes a novel air-to-ground semantic feature matching framework to achieve precise geometric registration between these data sources by incorporating semantic-constrained deep learning-based matching. The methodology transforms the cross-sensor alignment challenge into a robust two-dimensional image matching problem. This was achieved by first using YOLOv11 for semantic segmentation of common road markings in both the UAV orthoimage and the converted LiDAR intensity image to generate highly consistent feature references. Subsequently, the SuperPoint detector and a graph neural network matcher, SuperGlue, were applied to these semantic images to establish reliable correspondence points. Experimental results confirmed that this semantic-guided strategy consistently outperformed traditional feature-based matching (i.e., scale-invariant feature transform + fast library for approximate nearest neighbors), particularly by converting the noisy LiDAR intensity image into a stabilized semantic representation. The explicit application of semantic constraints further proved effective in eliminating false matches between geometrically similar but semantically distinct objects. The final object-specific analysis demonstrated that features with clear, complex geometric structures (e.g., pedestrian crossings and directional arrows) provide the most robust matching control. In summary, the proposed framework successfully leverages semantic context to overcome cross-sensor heterogeneity, offering an automated and precise solution for the geometric alignment of mobile LiDAR data.

23 pages, 4427 KB  
Article
Virtual Reassembly Method for Cultural Relic Fragments Based on Multi-Feature Extraction
by Jianghong Zhao, Jia Yang, Mengtian Cao, Lisha Yin, Rui Liu and Xinfeng Chang
Appl. Sci. 2026, 16(5), 2588; https://doi.org/10.3390/app16052588 - 8 Mar 2026
Abstract
The virtual reassembly of fragmented cultural relics remains a challenging task due to incomplete contours, complex fracture geometries, and the lack of reliable accuracy evaluation when ground-truth models are unavailable. To address these issues, this study proposes an automated virtual reassembly framework based on multi-feature extraction and hierarchical fragment matching. First, contour points are extracted from fragment point clouds using neighborhood roughness analysis and further refined through a Cylinder Box-based completion strategy to recover missing contour segments. Then, multiple complementary features, including Fast Point Feature Histograms (FPFHs), Heat Kernel Signatures (HKSs), and a spatial cube-based contour shape descriptor, are jointly constructed to characterize both local geometric details and global structural properties of fragments. To improve matching efficiency and robustness, a tree-based fragment retrieval strategy combined with a coarse-to-fine registration scheme is employed to identify adjacent fragments while reducing computational complexity. In addition, a pseudo-ground-truth accuracy evaluation method is introduced to quantitatively assess cumulative reassembly errors in the absence of reliable reference data. Experiments conducted on the public Buddha head dataset demonstrate that the proposed method achieves stable and visually consistent reassembly results, with a cumulative error as low as 1.58%, while significantly reducing retrieval computations compared with exhaustive matching strategies. These results indicate that the proposed framework provides a practical and verifiable solution for the automated digital restoration of fragmented cultural relics.
(This article belongs to the Special Issue Non-Destructive Techniques for Heritage Conservation)

18 pages, 1354 KB  
Article
Design and Performance Validation of 4D Radar ICP-Integrated Navigation with Stochastic Cloning Augmentation
by Hyeongseob Shin, Dongha Kwon and Sangkyung Sung
Sensors 2026, 26(5), 1660; https://doi.org/10.3390/s26051660 - 5 Mar 2026
Abstract
Automotive radar has emerged as a pivotal technology for navigation in GNSS-denied environments, offering superior robustness to adverse weather and fluctuating lighting conditions compared to vision or LiDAR-based sensors. Despite these advantages, the inherent sparsity and noise of radar measurements often lead to degraded estimation accuracy and system reliability. To address these challenges, various radar-based localization frameworks have been explored, ranging from optimization-based and Extended Kalman Filter (EKF) approaches fused with Inertial Measurement Units (IMUs) to point cloud registration techniques such as Iterative Closest Point (ICP). While filter-based methods are favored in multi-sensor fusion for their proven stability, ICP is widely utilized for high-precision pose estimation in point-cloud-centric systems. In this study, we propose a novel Radar-Inertial Odometry (RIO) framework that synergistically integrates ICP-based relative pose estimation with model-based sensor fusion. The proposed methodology leverages relative transformations derived from ICP alongside ego-velocity estimations obtained from radar Doppler measurements. To effectively incorporate relative ICP constraints, a stochastic cloning technique is implemented to augment previous states and their associated covariances, ensuring that the uncertainty of historical poses is explicitly accounted for. The performance of the proposed method is validated using public open-source datasets, demonstrating higher localization accuracy and more consistent performance than existing algorithms.
(This article belongs to the Section Navigation and Positioning)
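Stochastic cloning itself is compact: the pose sub-state is duplicated and the covariance is augmented with the matching cross-terms, so a later relative measurement (e.g. an ICP transform between two scans) can be expressed against the frozen clone with statistically consistent weighting. A generic sketch — the state layout and indices are ours, not the paper's:

```python
import numpy as np

def clone_state(x, P, pose_idx):
    """Stochastic cloning: append a copy of the pose sub-state x[pose_idx]
    to the filter state and augment the covariance with the cross-terms,
    so the clone's correlation with the live state is preserved."""
    J = np.zeros((len(pose_idx), len(x)))       # selects the pose entries
    J[np.arange(len(pose_idx)), pose_idx] = 1.0
    x_aug = np.concatenate([x, x[pose_idx]])
    top = np.hstack([P, P @ J.T])               # [P,     P J^T ]
    bot = np.hstack([J @ P, J @ P @ J.T])       # [J P,  J P J^T]
    return x_aug, np.vstack([top, bot])
```

The augmented covariance stays symmetric, and the clone's block equals the pose's own covariance at cloning time, which is exactly what a relative ICP update needs.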

21 pages, 3931 KB  
Article
Vehicle Speed Estimation Using Infrastructure-Mounted LiDAR via Rectangle Edge Matching
by Injun Hong and Manbok Park
Appl. Sci. 2026, 16(5), 2513; https://doi.org/10.3390/app16052513 - 5 Mar 2026
Abstract
Smart transportation infrastructure is increasingly deployed, and cooperative perception using stationary Light Detection and Ranging (LiDAR) sensors installed at intersections and along roadsides is becoming more important. However, infrastructure LiDAR often suffers from sparse point-cloud data (PCD) at long ranges and frequent occlusions, which can degrade the stability of inter-frame displacement and speed estimation. This paper proposes a real-time vehicle speed estimation method that operates robustly under sparse and partially observed conditions. The proposed approach extracts boundary points from clustered vehicle PCD and removes outliers, and then fits a 2D rectangle to the vehicle contour via Gauss–Newton optimization by minimizing distance-based residuals between boundary points and rectangle edges. To further improve robustness, we incorporate Hessian augmentation terms that account for boundary states and size variations, thereby alleviating excessive boundary violations and abnormal deformation of the width and height parameters during iterations. Next, from the fitted rectangles in consecutive frames, we take the corner nearest to the LiDAR origin together with an auxiliary point, and perform 2D SVD-based alignment using only these two representative points. This enables efficient computation of inter-frame displacement and speed without full point-cloud registration (e.g., iterative closest point (ICP)). Experiments conducted at an intersection in K-City (Hwaseong, Republic of Korea) using a 40-channel LiDAR, a test vehicle (Genesis G70), and a real-time kinematic (RTK) system (MRP-2000) show that the proposed method stably preserves representative points and fits rectangles, even in sparse regions where only about two LiDAR rings are observed. Using CAN-based vehicle speed as the reference, the proposed method achieves an MAE of 0.76–1.37 kph and an RMSE of 0.90–1.58 kph over the tested speed settings (30, 50, and 70 kph, as well as high speed (~90 kph)) and trajectory scenarios. Furthermore, per-object processing-time measurements confirm the real-time feasibility of the proposed algorithm.
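The two-point alignment step can be illustrated with a 2D Kabsch fit: given the nearest corner and the auxiliary point in consecutive frames, an SVD yields the rotation, and the centroid shift gives the inter-frame displacement and hence speed. A hedged sketch with invented inputs; the paper's corner construction, Hessian augmentation, and filtering are omitted:

```python
import numpy as np

def rigid_2d(P, Q):
    """2D Kabsch: least-squares rotation R and translation t mapping P to Q."""
    cp, cq = P.mean(0), Q.mean(0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # reflection guard
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

def speed_kph(pts_prev, pts_cur, dt):
    """pts_prev, pts_cur: (2, 2) arrays holding the nearest corner and the
    auxiliary point in consecutive frames (metres). Returns (speed in kph,
    heading change in degrees) from just these two representative points."""
    R, _ = rigid_2d(pts_prev, pts_cur)
    disp = pts_cur.mean(0) - pts_prev.mean(0)   # centroid displacement
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return float(np.linalg.norm(disp) / dt * 3.6), float(yaw)
```

With only two points the rotation is determined up to reflection, which the determinant guard resolves; the attraction over full ICP is that this costs a single 2×2 SVD per object per frame.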

26 pages, 12167 KB  
Article
Real-Time Pose Measurement Framework of Wind Tunnel Aircraft Models Based on a Monocular Time-of-Flight Camera
by Jianqiang Huang, Cui Liang, Shuai Zhao and Tengchao Huang
Sensors 2026, 26(5), 1476; https://doi.org/10.3390/s26051476 - 26 Feb 2026
Abstract
Precise and real-time acquisition of aircraft model attitude is fundamental for aerodynamic analysis in wind tunnel experiments, yet achieving high-precision non-contact measurement remains a significant challenge. To address this, this paper proposes a pose measurement framework based on a monocular Time-of-Flight (ToF) camera that fuses keyframe global registration with non-keyframe local registration. First, a novel hand-crafted local feature based on three-plane encoded height and density is introduced. When combined with the Two-stage Consensus Filtering RANSAC (TCF-RANSAC) algorithm, this feature achieves robust global registration of keyframes, providing reliable initial pose estimates for the system. Subsequently, leveraging the continuity constraint of model motion, fast incremental local registration of non-keyframes is performed using the Generalized Iterative Closest Point (GICP) algorithm, which avoids falling into local optima while significantly improving computational efficiency. Evaluation results on simulated datasets with synthetic noise and a real experimental platform demonstrate that the method achieves a single-axis rotation angle error of less than 0.03° while processing at over 40 FPS, satisfying real-time measurement requirements. Comparative evaluations against multiple existing registration methods indicate that the proposed framework achieves superior accuracy and robustness, reducing rotation angle errors by 9% to 39% compared to mainstream global registration methods under single-view ToF sensing conditions. Furthermore, this study quantifies the error distribution characteristics of monocular ToF-based pose estimation, revealing an “axis-sensitivity” phenomenon where the rotation error around the optical axis is significantly lower (e.g., 0.02°, 0.03°) than that around the orthogonal axes (e.g., 0.38°, 0.26°). These findings provide practical guidance for camera placement and system design in high-precision aerodynamic measurement scenarios.

22 pages, 2732 KB  
Article
Automated Single-Sensor 3D Scanning and Modular Benchmark Objects for Human-Scale 3D Reconstruction
by Kartik Choudhary, Mats Isaksson, Gavin W. Lambert and Tony Dicker
Sensors 2026, 26(4), 1331; https://doi.org/10.3390/s26041331 - 19 Feb 2026
Abstract
High-fidelity 3D reconstruction of human-sized objects typically requires multi-sensor scanning systems that are expensive, complex, and rely on proprietary hardware configurations. Existing low-cost approaches often rely on handheld scanning, which is inherently unstructured and operator-dependent, leading to inconsistent coverage and variable reconstruction quality. This limitation calls for a controlled, repeatable, and affordable scanning method that can generate high-quality data without requiring multi-sensor hardware or external tracking markers. This study presents a marker-less scanning platform designed for human-scale reconstruction. The system consists of a single structured-light sensor mounted on a vertical linear actuator, synchronised with a motorised turntable that rotates the subject. This constrained kinematic setup ensures a repeatable cylindrical acquisition trajectory. To address the geometric ambiguity often found in vertical translational symmetry (i.e., where distinct elevation steps appear identical), the system employs a sensor-assisted initialisation strategy, where feedback from the rotary encoder and linear drive serves as constraints for the registration pipeline. The captured frames are reconstructed into a complete model through a two-step Iterative Closest Point (ICP) procedure that eliminates the vertical drift and model collapse (often referred to as “telescoping”) common in unconstrained scanning. To evaluate system performance, a modular anthropometric benchmark object representing a human-sized target (1.6 m) was scanned. The reconstructed model was assessed in terms of surface coverage and volumetric fidelity relative to a CAD reference. The results demonstrate high sampling stability, achieving a mean surface density of 0.760 points/mm² on front-facing surfaces. Geometric deviation analysis revealed a mean signed error of −1.54 mm (σ = 2.27 mm), corresponding to a relative volumetric error of approximately 0.096% over the full vertical span. These findings confirm that a single-sensor system, when guided by precise kinematics, can mitigate the non-linear bending and drift artefacts of handheld acquisition, providing an accessible yet rigorously accurate alternative to industrial multi-sensor systems.
(This article belongs to the Special Issue Sensors for Object Detection, Pose Estimation, and 3D Reconstruction)
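The sensor-assisted initialisation amounts to seeding registration with the pose implied by the turntable angle and lift height, which is what disambiguates vertically symmetric frames before ICP refinement. A minimal sketch of such a seed transform; the axis conventions (turntable spins about z, lift translates along z) are our assumptions, not taken from the paper:

```python
import numpy as np

def init_pose(theta_deg, dz):
    """Seed transform for one captured frame: rotation about the turntable
    axis (z) from the rotary encoder, plus the linear-drive height offset.
    Used as the initial guess for ICP so frames with vertical translational
    symmetry cannot collapse ('telescoping')."""
    th = np.radians(theta_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(th), -np.sin(th), 0.0],
                 [np.sin(th),  np.cos(th), 0.0],
                 [0.0,         0.0,        1.0]]
    T[2, 3] = dz
    return T

def transform(T, pts):
    """Apply a 4x4 homogeneous transform to an (n, 3) point array."""
    return pts @ T[:3, :3].T + T[:3, 3]
```

In a full pipeline each frame's point cloud would be pre-aligned with `transform(init_pose(theta, dz), frame)` and only the small residual pose corrected by the two-step ICP.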

14 pages, 1877 KB  
Article
Research on 3D Point Cloud Modeling Method for Pillar-Type Insulators Based on Multi-View 2D LiDAR
by Yan Liu, Haoyang Li, Chenyun Cai and Qian Li
Electronics 2026, 15(4), 826; https://doi.org/10.3390/electronics15040826 - 14 Feb 2026
Abstract
In the context of three-dimensional (3D) point cloud modeling for pillar-type insulators during the “post-production–pre-use” phase, current methodologies encounter challenges in achieving a balance between cost-effectiveness, comprehensive coverage, and high precision. This study introduces a novel 3D point cloud modeling approach that utilizes multi-view two-dimensional (2D) LiDAR technology. This method employs three 2D LiDAR sensors positioned at 120° intervals to conduct layer-by-layer scanning, thereby capturing the surface point cloud data of insulators from various heights and perspectives. This approach effectively mitigates the impact of occlusion and facilitates comprehensive 360° data acquisition. Based on this foundation, the skirt structure characteristics of pillar-type insulators were extracted, and a point cloud registration and stitching algorithm, grounded in structural constraints, was developed to facilitate a high-precision 3D reconstruction. The experimental findings indicate that the proposed approach demonstrates substantial improvements in modeling accuracy compared with the baseline methods. In repeated experiments, the proposed method achieved an average distance error (mean μMDE ± standard deviation σ) of 1.15 ± 0.07 and a root mean square error (mean μRMS ± σ) of 1.26 ± 0.11. This method offers several advantages, including a straightforward structure, low system cost, and excellent point cloud continuity (1 mm). The maximum measurement error for the disc diameter was 2.986 mm, which satisfies the engineering application requirement of ±5 mm, thereby confirming the feasibility and practical utility of the method in the 3D modeling of pillar-type insulators.

1 page, 124 KB
Correction
Correction: Kang et al. Point Cloud Registration Method Based on Geometric Constraint and Transformation Evaluation. Sensors 2024, 24, 1853
by Chuanli Kang, Chongming Geng, Zitao Lin, Sai Zhang, Siyao Zhang and Shiwei Wang
Sensors 2026, 26(4), 1241; https://doi.org/10.3390/s26041241 - 14 Feb 2026
Abstract
In the original publication [...]
27 pages, 6570 KB  
Article
LiDAR–Inertial–Visual Odometry Based on Elastic Registration and Dynamic Feature Removal
by Qiang Ma, Fuhong Qin, Peng Xiao, Meng Wei, Sihong Chen, Wenbo Xu, Xingrui Yue, Ruicheng Xu and Zheng He
Electronics 2026, 15(4), 741; https://doi.org/10.3390/electronics15040741 - 9 Feb 2026
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental capability for autonomous robots. However, in highly dynamic scenes, conventional SLAM systems often suffer from degraded accuracy due to LiDAR motion distortion and interference from moving objects. To address these challenges, this paper proposes a LiDAR–Inertial–Visual odometry framework based on elastic registration and dynamic feature removal, with the aim of enhancing system robustness. In the LiDAR odometry module, an elastic registration-based de-skewing method is introduced by modeling second-order motion, enabling accurate point cloud correction under non-uniform motion. In the visual odometry module, a multi-strategy dynamic feature suppression mechanism is developed, combining IMU-assisted motion consistency verification with a lightweight YOLOv5-based detection network to effectively filter out dynamic interference with low computational overhead. Furthermore, depth information for visual key points is recovered using LiDAR assistance to enable tightly coupled pose estimation. Extensive experiments on the TUM and M2DGR datasets demonstrate that the proposed method achieves a 96.3% reduction in absolute trajectory error (ATE) compared with ORB-SLAM2 in highly dynamic scenarios. Real-world deployment on an embedded computing device further confirms the framework’s real-time performance and practical applicability in complex environments.
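The second-order de-skewing idea — correcting each LiDAR point using a constant-acceleration motion model evaluated at its timestamp within the sweep — can be sketched for the translational case as follows. Rotation handling and the paper's actual elastic estimator are omitted, and all parameters are illustrative:

```python
import numpy as np

def deskew(points, times, v, a):
    """De-skew a sweep to its start frame (t = 0): a point sampled at time
    t was observed from a sensor already displaced by v*t + 0.5*a*t**2, so
    add that displacement back to express every point in the frame at t = 0.
    Second-order (constant-acceleration) translational model only; the
    rotational part of motion distortion is not modeled here."""
    t = np.asarray(times, float)[:, None]          # (n, 1) sample times
    return points + v[None, :] * t + 0.5 * a[None, :] * t ** 2
```

A first-order model would drop the `0.5*a*t**2` term; the quadratic term is what keeps the correction accurate under the non-uniform motion the paper targets.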
