Search Results (374)

Search Parameters:
Keywords = LIDAR SLAM

23 pages, 3484 KB  
Article
IFA-ICP: A Low-Complexity and Image Feature-Assisted Iterative Closest Point (ICP) Scheme for Odometry Estimation in SLAM, and Its FPGA-Based Hardware Accelerator Design
by Jia-En Li and Yin-Tsung Hwang
Sensors 2026, 26(8), 2326; https://doi.org/10.3390/s26082326 - 9 Apr 2026
Abstract
Odometry estimation, which calculates the trajectory of a moving object across timeframes, is a critical and time-consuming function in SLAM (Simultaneous Localization and Mapping) systems. Although LiDAR-based sensing is most popular for outdoor and long-range applications because of its ranging accuracy, the sparsity of laser point clouds poses a significant challenge to feature extraction and matching in odometry estimation. In this paper, we investigate odometry estimation from two aspects, i.e., algorithm optimization and system design/implementation. In algorithm optimization, we present an image feature-assisted odometry estimation scheme that leverages the richness of image information captured by a companion camera to enhance the accuracy of laser point cloud matching. This also serves as a screening mechanism to reduce the matching size and lower the computing complexity for a higher estimation rate. In addition, various schemes, such as an adaptive threshold in image feature point selection, principal component analysis (PCA)-based plane fitting for laser point interpolation, and Gauss–Newton optimization for calculating the transform matrix, are employed to improve the accuracy of odometry estimation. The performance of the improved odometry estimation is verified using an existing FLOAM (Fast Lidar Odometry and Mapping) framework. The KITTI dataset for autonomous vehicles with ground truth was used as the test bench. Simulation results indicate that the translation error and rotation error can be reduced by 16.6% and 1.3%, respectively. Computing complexity, measured as the software execution time, was also reduced by 63%. In system implementation, a hardware/software (HW/SW) co-design strategy was adopted, where complexity profiling was first conducted to determine the task partitioning, and time-consuming tasks were offloaded to a hardware accelerator. This facilitates real-time execution on a resource-constrained embedded platform consisting of a microprocessor module (Raspberry Pi) and an attached FPGA board (Pynq Z2). Efficient hardware designs for customized DSP functions (adaptive threshold and PCA) were developed in an FPGA capable of completing one data frame in 20 ms. The final system implementation met the target throughput of 10 estimations per second and can be scaled up further. Full article
(This article belongs to the Topic Advances in Autonomous Vehicles, Automation, and Robotics)
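
As a sketch of the PCA-based plane fitting step named above: the normal of a local point neighborhood is the eigenvector of its covariance with the smallest eigenvalue, and a sparse point can be interpolated by projection onto that plane. The function names and toy data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_plane_pca(neighbors: np.ndarray):
    """Fit a plane to an (N, 3) array of LiDAR points via PCA.

    Returns the plane centroid and unit normal. The normal is the
    eigenvector of the covariance matrix with the smallest eigenvalue,
    i.e. the direction of least variance.
    """
    centroid = neighbors.mean(axis=0)
    cov = np.cov((neighbors - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    normal = eigvecs[:, 0]
    return centroid, normal

def interpolate_on_plane(query, centroid, normal):
    """Project a query point onto the fitted plane (sparse-cloud interpolation)."""
    return query - np.dot(query - centroid, normal) * normal

# Toy usage: noisy samples of the plane z = 0.1x + 0.2y
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(30, 2))
cloud = np.column_stack([pts, 0.1 * pts[:, 0] + 0.2 * pts[:, 1]
                         + 0.005 * rng.standard_normal(30)])
c, n = fit_plane_pca(cloud)
print(interpolate_on_plane(np.array([0.0, 0.0, 0.5]), c, n))
```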

17 pages, 4279 KB  
Review
Bibliometric Analysis on Control Architectures for Robotics in Agriculture
by Simone Figorilli, Simona Violino, Simone Vasta, Federico Pallottino, Giorgio Manca, Lorenzo Bianchi and Corrado Costa
Robotics 2026, 15(4), 75; https://doi.org/10.3390/robotics15040075 - 3 Apr 2026
Abstract
(1) Background: Robotics and advanced control architectures are increasingly central to the development of precision agriculture (PA), supporting automated, efficient, and data-driven farm management. This review offers a comprehensive analysis of scientific literature on robotic control systems applied to PA, focusing on technological progress, methodological approaches, and emerging research trends. (2) Methods: A systematic review was conducted according to PRISMA guidelines, combined with a bibliometric analysis using VOSviewer to examine term co-occurrences, thematic clusters, and topic evolution over time. Publications indexed in Scopus between 1976 and 2025 were analyzed. (3) Results: Results reveal a sharp growth in publications after 2010, with a strong acceleration from 2015 onward, reflecting advances in autonomous systems and the integration of artificial intelligence, sensor technologies, and distributed software frameworks. Three principal clusters emerged: algorithmic and control methods (e.g., neural networks, path tracking, simulation); sensing and infrastructure technologies (e.g., LiDAR, SLAM, IMU, ROS, deep learning-based perception); and agronomic applications, including crop monitoring, irrigation, yield estimation, and farm management. Citation trends indicate a shift from foundational control theory to AI-driven solutions. (4) Conclusions: Overall, control architectures are evolving toward modular, scalable, and interoperable systems enabling autonomous decision-making in complex agricultural environments. Full article
(This article belongs to the Section Agricultural and Field Robotics)
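
For readers unfamiliar with term co-occurrence analysis of the kind VOSviewer performs, a minimal sketch follows; the keyword lists are invented stand-ins for Scopus records, and VOSviewer itself computes, clusters, and maps these counts internally.

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists standing in for Scopus records (illustrative only)
records = [
    ["neural networks", "path tracking", "simulation"],
    ["lidar", "slam", "ros", "deep learning"],
    ["crop monitoring", "irrigation", "lidar"],
    ["slam", "imu", "ros"],
]

# Term co-occurrence counts: how often two keywords appear in one record
cooc = Counter()
for kw in records:
    for a, b in combinations(sorted(set(kw)), 2):
        cooc[(a, b)] += 1

for pair, n in cooc.most_common(3):
    print(pair, n)
```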

29 pages, 6656 KB  
Article
Improvements to the FLOAM Algorithm: GICP Registration and SOR Filtering in Mobile Robots with Pure Laser Configuration and Enhanced SLAM Performance
by Shichen Fu, Tianbao Zhao, Junkai Zhang, Guangming Guo and Weixiong Zheng
Appl. Sci. 2026, 16(7), 3141; https://doi.org/10.3390/app16073141 - 24 Mar 2026
Abstract
Laser SLAM is a key enabling technology for the autonomous navigation of intelligent mobile robots. The standard FLOAM algorithm suffers from low positioning accuracy, weak anti-interference performance, and error accumulation in pure LiDAR scenarios, making it difficult to meet practical engineering requirements. Numerous studies have therefore focused on improved, highly robust pure laser SLAM algorithms. This study enhances FLOAM with GICP registration and SOR filtering: SOR filtering processes the laser point cloud to remove outlier noise, and GICP registration replaces the classic ICP with an optimized matching cost function. Experiments are conducted on the Gazebo simulation platform with a mobile robot carrying a Leishen C16 LiDAR, simulating real-life tests in an indoor corridor and an outdoor plaza. The results of the EVO tool's quantitative evaluation indicate that the indoor mean absolute error and RMSE are reduced by 46.67% and 41.67%, respectively, compared with FLOAM, and the outdoor mean and maximum errors are reduced by 46.00% and 70.00%, respectively. The proposed improved scheme achieves centimeter-level positioning accuracy and strong robustness in pure laser configurations without auxiliary sensors such as IMUs or odometers, providing a reliable technical solution for the engineering application of mobile robots in sensor-constrained scenarios. Full article
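
A minimal sketch of statistical outlier removal (SOR), the pre-filtering step the abstract names; the neighborhood size k and std_ratio are illustrative defaults, not the paper's tuned values.

```python
import numpy as np
from scipy.spatial import cKDTree

def sor_filter(cloud: np.ndarray, k: int = 8, std_ratio: float = 1.0) -> np.ndarray:
    """Statistical outlier removal for an (N, 3) point cloud.

    For each point, compute the mean distance to its k nearest
    neighbors; discard points whose mean distance exceeds the global
    mean by more than std_ratio standard deviations.
    """
    tree = cKDTree(cloud)
    # k + 1 because the nearest neighbor of each point is itself
    dists, _ = tree.query(cloud, k=k + 1)
    mean_dists = dists[:, 1:].mean(axis=1)
    threshold = mean_dists.mean() + std_ratio * mean_dists.std()
    return cloud[mean_dists <= threshold]

rng = np.random.default_rng(1)
scan = rng.normal(size=(500, 3))                      # dense cluster
scan = np.vstack([scan, rng.uniform(8, 10, (5, 3))])  # far outliers
print(len(scan), "->", len(sor_filter(scan)))
```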

19 pages, 894 KB  
Review
Indoor Mapping as a Spatiotemporal Framework for Mitigating Greenhouse Gas Emissions in Buildings: A Review
by Vinuri Nilanika Goonetilleke, Muditha K. Heenkenda and Kamil Zaniewski
Geomatics 2026, 6(2), 27; https://doi.org/10.3390/geomatics6020027 - 19 Mar 2026
Abstract
Climate change is a critical global challenge, and the building sector accounts for nearly 30% of global greenhouse gas (GHG) emissions, remaining a key target for mitigation. Indoor environments contribute significantly to GHG emissions, primarily through heating, cooling, lighting, and occupant-driven energy use. Indoor mapping, serving as the foundation for Digital Twins (DTs), provides a spatiotemporal framework that integrates sensor data with Building Information Modelling (BIM), Geographic Information Systems (GIS), and the Internet of Things (IoT) to support energy-efficient, low-carbon building operations. This review examined the role of indoor mapping in understanding, modelling, and reducing GHG emissions in buildings. It synthesized current advancements in indoor spatial data acquisition, ranging from Light Detection and Ranging (LiDAR) and Simultaneous Localization and Mapping (SLAM) to deep learning-based floor plan extraction, and evaluated their contribution to improved indoor environmental analysis. The review highlighted emerging techniques, challenges, and gaps, particularly the limited integration of physical indoor spaces with virtual layers representing assets, occupants, and equipment. Addressing this gap requires embedding spatial modelling as an intermediate analytical layer that structures and contextualizes sensor data to support spatiotemporal decision-making. Overall, this review demonstrated that indoor mapping plays a critical role in transforming spatial information into actionable insights, enabling more accurate energy modelling, enhanced real-time building management, and stronger data-driven strategies for GHG mitigation in the built environment. Full article

20 pages, 24767 KB  
Article
VINA-SLAM: A Voxel-Based Inertial and Normal-Aligned LiDAR–IMU SLAM
by Ruyang Zhang and Bingyu Sun
Sensors 2026, 26(6), 1810; https://doi.org/10.3390/s26061810 - 13 Mar 2026
Abstract
Environments with sparse or repetitive geometric structures, such as long corridors and narrow stairwells, remain challenging for LiDAR–inertial simultaneous localization and mapping (LiDAR–IMU SLAM) due to insufficient geometric observability and unreliable data associations. To address these issues, we propose VINA-SLAM, a novel LiDAR–IMU SLAM framework that constructs a unified global voxel map to explicitly exploit structural consistency. VINA-SLAM continuously tracks surface normals stored in the global voxel map using a normal-guided correspondence strategy, enabling stable scan-to-map alignment in degenerate scenes. Furthermore, a tangent-space metric is introduced to supplement missing rotational constraints around planar regions, providing reliable initial pose estimates for local optimization. A tightly coupled sliding-window bundle adjustment is then formulated by jointly incorporating IMU factors, voxel normal consistency factors, and planar regularization terms. In particular, the minimum eigenvalue of each voxel’s covariance is used as a statistically principled planar constraint, improving the Hessian conditioning and cross-view geometric consistency. The proposed system directly aligns raw LiDAR scans to the voxelized map without explicit feature extraction or loop closure. Experiments on 25 sequences from the HILTI and MARS-LVIG datasets show that VINA-SLAM reduces ATE by 25–40% on average while maintaining real-time performance at 10 Hz in the evaluated geometrically degenerate environments. Full article
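
The per-voxel statistics the abstract describes (surface normals plus a minimum-eigenvalue planarity score) can be sketched as follows; the voxel size, population threshold, and function names are assumptions for illustration, not the VINA-SLAM code.

```python
import numpy as np
from collections import defaultdict

def build_voxel_map(cloud: np.ndarray, voxel: float = 0.5):
    """Group an (N, 3) scan into voxels and keep per-voxel normal statistics.

    For each sufficiently populated voxel, store the centroid, the PCA
    surface normal, and the minimum covariance eigenvalue, which serves
    as a planarity score (smaller = flatter).
    """
    buckets = defaultdict(list)
    for p in cloud:
        buckets[tuple(np.floor(p / voxel).astype(int))].append(p)

    voxel_map = {}
    for key, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < 5:
            continue
        centroid = pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov((pts - centroid).T))
        voxel_map[key] = {
            "centroid": centroid,
            "normal": eigvecs[:, 0],   # direction of least variance
            "planarity": eigvals[0],   # minimum eigenvalue
        }
    return voxel_map

def point_to_plane_residual(p: np.ndarray, vox: dict) -> float:
    """Residual used for normal-guided scan-to-map alignment."""
    return float(np.dot(p - vox["centroid"], vox["normal"]))

# Toy usage: a nearly flat floor patch
rng = np.random.default_rng(2)
xy = rng.uniform(0, 2, (400, 2))
floor = np.column_stack([xy, 0.01 * rng.standard_normal(400)])
vmap = build_voxel_map(floor)
key, vox = next(iter(vmap.items()))
print(len(vmap), "voxels; residual of an offset point:",
      point_to_plane_residual(vox["centroid"] + 0.1 * vox["normal"], vox))
```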

28 pages, 6157 KB  
Article
RI-DVP: A Physics–Geometry Dual-Driven Framework for Static Map Construction in Sparse LiDAR Scenarios
by Xiaokai Li, Li Wang, Haolong Luo and Guangyun Li
Remote Sens. 2026, 18(5), 821; https://doi.org/10.3390/rs18050821 - 6 Mar 2026
Abstract
High-fidelity static map construction is essential for reliable autonomous navigation, yet dynamic environments introduce severe artifacts caused by moving objects (also referred to as dynamic artifacts) in accumulated maps. While geometry-based methods perform well on dense point clouds, their performance notably degrades on sparse 16-beam LiDAR due to the “Sparsity Trap”: dynamic objects are frequently missed by ray-based geometry, and purely geometric cues fail in radiometrically ambiguous scenarios. To address this, we propose RI-DVP, a physics–geometry dual-driven framework. Unlike conventional approaches, RI-DVP first performs a physics-inspired radiometric normalization that compensates for range attenuation and incidence-angle effects to establish a consistent signal baseline. Subsequently, a Dual-Residual Aggressive Removal (DRAR) module jointly exploits geometric residuals, bounded by a range-dependent spatial uncertainty envelope, and calibrated intensity residuals to detect geometrically indistinguishable objects. To balance recall and precision, a Hierarchical Static Reversion strategy (HSR) employs two-stage recovery to retrieve large-scale structures and correct fine-grained artifacts via topology-based adhesion reasoning. Experiments on SemanticKITTI and custom sparse datasets demonstrate that RI-DVP outperforms state-of-the-art geometric baselines, improving Dynamic Accuracy by over 36 percentage points in sparse scanning scenarios with a VLP-16 LiDAR sensor (Velodyne Acoustics, Inc., Morgan Hill, CA, USA) over baselines that fail under the sparsity trap, while achieving real-time performance at approximately 15.3 Hz. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
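
The abstract does not give the normalization formula, so the sketch below assumes the common inverse-square range attenuation and Lambertian incidence-angle model; ref_range and the clipping floor are illustrative parameters.

```python
import numpy as np

def normalize_intensity(intensity, rng_m, cos_incidence,
                        ref_range: float = 10.0, eps: float = 1e-3):
    """Physics-inspired radiometric normalization of LiDAR intensity.

    Assumes I_raw ~ I_true * cos(theta) / r**2 (inverse-square range
    attenuation, Lambertian incidence model) and inverts it relative to
    a reference range, so returns from different ranges and incidence
    angles become comparable.
    """
    intensity = np.asarray(intensity, dtype=float)
    rng_m = np.asarray(rng_m, dtype=float)
    cos_incidence = np.clip(np.asarray(cos_incidence, dtype=float), eps, 1.0)
    return intensity * (rng_m / ref_range) ** 2 / cos_incidence

# Same surface seen at 5 m head-on and at 20 m under a 60-degree slant:
# both normalize to the same calibrated value (10.0 here)
print(normalize_intensity([40.0, 1.25], [5.0, 20.0], [1.0, 0.5]))
```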

16 pages, 2080 KB  
Article
Lidar–Vision Depth Fusion for Robust Loop Closure Detection in SLAM Systems
by Bingzhuo Liu, Panlong Wu, Rongting Chen, Yidan Zheng and Mengyu Li
Machines 2026, 14(3), 282; https://doi.org/10.3390/machines14030282 - 3 Mar 2026
Abstract
Loop Closure Detection (LCD) is a key component of Simultaneous Localization and Mapping (SLAM) systems, responsible for correcting odometric drift and maintaining global consistency in localization and mapping. However, single-modality LCD methods suffer from inherent limitations: LiDAR-based approaches are affected by point cloud sparsity, limiting feature representation in unstructured environments, while vision-based methods are sensitive to illumination and weather variations, reducing robustness. To address these issues, this paper presents a LiDAR–vision multimodal fusion LCD algorithm. Spatiotemporal alignment between LiDAR point clouds and images is achieved through extrinsic calibration and timestamp interpolation to ensure cross-modal consistency. Harris corner detection and BRIEF descriptors are employed to extract visual features, and a LiDAR-projected sparse depth map is used to complete depth information, mapping 2D features into 3D space. A hybrid feature representation is then constructed by fusing LiDAR geometric triangle descriptors with visual BRIEF descriptors, enabling efficient loop candidate retrieval via hash indexing. Finally, an improved RANSAC algorithm performs geometric verification to enhance the robustness of relative pose estimation. Experiments on the KITTI and NCLT datasets show that the proposed method achieves average F1 scores of 85.28% and 77.63%, respectively, outperforming both unimodal and existing multimodal approaches. When integrated into a SLAM framework, it reduces the Absolute Trajectory Error (ATE) RMSE by 11.2–16.4% compared with LiDAR-only methods, demonstrating improved loop detection accuracy and overall system robustness in complex environments. Full article
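
A hedged sketch of the depth-completion idea: LiDAR points are projected into the image with known intrinsics/extrinsics to form a sparse depth map, and 2D features that land on a depth sample are back-projected to 3D. The camera matrix and points below are toy values, and real pipelines interpolate or dilate the sparse depth rather than requiring exact pixel hits.

```python
import numpy as np

def lift_features_to_3d(features_uv, cloud_cam, K, img_shape):
    """Lift 2D image features to 3D using a LiDAR-projected sparse depth map.

    cloud_cam: (N, 3) LiDAR points already in the camera frame.
    features_uv: (M, 2) pixel coordinates of visual features (e.g. Harris
    corners). Returns features that land on a projected LiDAR depth,
    back-projected into 3D camera coordinates.
    """
    h, w = img_shape
    depth = np.zeros((h, w))
    pts = cloud_cam[cloud_cam[:, 2] > 0]              # points in front of camera
    uvw = (K @ pts.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    depth[uv[ok, 1], uv[ok, 0]] = pts[ok, 2]          # sparse depth image

    out, K_inv = [], np.linalg.inv(K)
    for u, v in features_uv:
        z = depth[int(v), int(u)]
        if z > 0:                                     # feature has LiDAR depth
            out.append(z * (K_inv @ np.array([u, v, 1.0])))
    return np.array(out)

K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
cloud = np.array([[0.5, 0.1, 5.0], [-0.4, 0.0, 8.0]])
uv_feats = (K @ cloud.T).T
uv_feats = uv_feats[:, :2] / uv_feats[:, 2:3]
print(lift_features_to_3d(uv_feats, cloud, K, (480, 640)))  # recovers the 3D points
```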

21 pages, 2960 KB  
Article
Comparative Performance Evaluation of Multi-Type LiDAR Sensors and Their Applicability to Sidewalk HD Mapping
by Dongha Lee, Sungho Kang, Jaecheol Lee and Junghyun Kim
Sensors 2026, 26(5), 1480; https://doi.org/10.3390/s26051480 - 26 Feb 2026
Abstract
Sidewalk high-definition (HD) maps require centimetre-level representation of pedestrian barriers to support mobility assistance and barrier-free infrastructure management. This study evaluates six mobile light detection and ranging (LiDAR) platforms for sidewalk HD mapping: terrestrial laser scanning (TLS), a push-cart mobile mapping system (MMS), two backpack systems (GNSS/INS (Global Navigation Satellite System/Inertial Navigation System)-aided and SLAM (simultaneous localization and mapping)-based), and two handheld systems (GNSS/INS-aided and SLAM-based). Surveys were conducted at two sites with contrasting occlusion and GNSS conditions (park and dense downtown corridors). Point clouds were transformed to a common control network, with independent checkpoints for absolute accuracy. The reference dataset achieved a planimetric root mean square error (RMSE) of 0.017–0.049 m and vertical RMSE of 0.009–0.014 m across sites. Platforms were compared for positional accuracy, point density, and extractability of key accessibility attributes (effective width, step height, and longitudinal slope). Cart-mounted MMS provided stable geometry under occlusion, while SLAM-based handheld mapping improved robustness in GNSS-degraded areas; backpack SLAM performance depended on loop-closure opportunities and scene dynamics. We provide guidance on selecting pedestrian-scale LiDAR platforms for sidewalk HD mapping under different survey conditions. Full article
(This article belongs to the Special Issue Remote Sensing in Urban Surveying and Mapping)
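
For reference, planimetric and vertical RMSE figures of the kind quoted above are computed from checkpoint residuals as in this sketch (synthetic data, not the survey's).

```python
import numpy as np

def checkpoint_rmse(measured: np.ndarray, reference: np.ndarray):
    """Planimetric and vertical RMSE against independent checkpoints.

    measured, reference: (N, 3) XYZ coordinates in the common control
    network. Planimetric RMSE pools the X and Y residuals; vertical
    RMSE uses Z alone.
    """
    res = measured - reference
    rmse_xy = np.sqrt(np.mean(np.sum(res[:, :2] ** 2, axis=1)))
    rmse_z = np.sqrt(np.mean(res[:, 2] ** 2))
    return rmse_xy, rmse_z

rng = np.random.default_rng(3)
ref = rng.uniform(0, 100, (20, 3))
meas = ref + rng.normal(0, [0.02, 0.02, 0.01], (20, 3))
print(checkpoint_rmse(meas, ref))
```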

27 pages, 6570 KB  
Article
LiDAR–Inertial–Visual Odometry Based on Elastic Registration and Dynamic Feature Removal
by Qiang Ma, Fuhong Qin, Peng Xiao, Meng Wei, Sihong Chen, Wenbo Xu, Xingrui Yue, Ruicheng Xu and Zheng He
Electronics 2026, 15(4), 741; https://doi.org/10.3390/electronics15040741 - 9 Feb 2026
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental capability for autonomous robots. However, in highly dynamic scenes, conventional SLAM systems often suffer from degraded accuracy due to LiDAR motion distortion and interference from moving objects. To address these challenges, this paper proposes a LiDAR–Inertial–Visual odometry framework based on elastic registration and dynamic feature removal, with the aim of enhancing system robustness through detailed algorithmic supplements. In the LiDAR odometry module, an elastic registration-based de-skewing method is introduced by modeling second-order motion, enabling accurate point cloud correction under non-uniform motion. In the visual odometry module, a multi-strategy dynamic feature suppression mechanism is developed, combining IMU-assisted motion consistency verification with a lightweight YOLOv5-based detection network to effectively filter out dynamic interference with low computational overhead. Furthermore, depth information for visual key points is recovered using LiDAR assistance to enable tightly coupled pose estimation. Extensive experiments on the TUM and M2DGR datasets demonstrate that the proposed method achieves a 96.3% reduction in absolute trajectory error (ATE) compared with ORB-SLAM2 in highly dynamic scenarios. Real-world deployment on an embedded computing device further confirms the framework’s real-time performance and practical applicability in complex environments. Full article
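
A minimal sketch of de-skewing under a second-order (constant-acceleration) motion model, as the abstract describes for the LiDAR odometry module; the per-point transform direction and the scipy-based rotation handling are assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def deskew_second_order(points, t_frac, v, a, omega, alpha):
    """De-skew a LiDAR sweep under a second-order (non-uniform) motion model.

    points: (N, 3) raw points; t_frac: (N,) per-point timestamps in [0, 1]
    relative to the sweep start. v/a are linear velocity/acceleration,
    omega/alpha angular velocity/acceleration (rotation-vector rates).
    Each point is mapped back into the frame at the start of the sweep.
    """
    t = t_frac[:, None]
    trans = v * t + 0.5 * a * t ** 2           # second-order translation
    rotvec = omega * t + 0.5 * alpha * t ** 2  # second-order rotation
    R = Rotation.from_rotvec(rotvec)           # one rotation per point
    return R.apply(points) + trans

pts = np.array([[10.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
t = np.array([0.0, 1.0])                       # start vs end of sweep
print(deskew_second_order(pts, t,
                          v=np.array([2.0, 0, 0]), a=np.array([1.0, 0, 0]),
                          omega=np.array([0, 0, 0.05]), alpha=np.zeros(3)))
```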

20 pages, 4015 KB  
Article
Adaptive Kalman Filter-Based SLAM in LiDAR-Degenerated Environments
by Ran Ma, Tao Zhou and Liang Chen
Sensors 2026, 26(3), 861; https://doi.org/10.3390/s26030861 - 28 Jan 2026
Abstract
Owing to its low cost, small size, and convenience of installation, 2D LiDAR has been widely used in mobile robots for simultaneous localization and mapping (SLAM). However, traditional 2D LiDAR SLAM methods have low robustness and accuracy in LiDAR-degenerated environments. To improve the robustness of the SLAM method in such environments, an innovative SLAM method is developed, which mainly includes two parts, i.e., front-end positioning and back-end optimization. Specifically, in the front-end part, the AKF (adaptive Kalman filter) method is applied to estimate the pose of the mobile robot, the zero biases of the accelerometer and gyroscope, the lever arm length, and the mounting angle. The adaptive factor of the AKF dynamically adjusts the variances of the process and measurement noises based on the residual. In the back-end part, a particle filter (PF) is employed to optimize the pose estimation and build the map, where a pose domain constraint from the output of the front end is introduced into the PF to avoid mismatches and enhance positioning accuracy. To verify the performance of the method, a series of experiments is carried out in four typical environments. The experimental results show that the positioning precision is improved by about 61.3–97.9%, 35.7–99.0%, and 43.8–93.0% compared to Karto SLAM, Hector SLAM, and Cartographer, respectively. Full article
(This article belongs to the Section Navigation and Positioning)
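
The residual-driven adaptation can be sketched in one filter step: inflate the measurement noise when the normalized innovation is implausibly large. This toy constant-velocity state is far simpler than the paper's (pose, sensor biases, lever arm, mounting angle), and the gamma threshold is an illustrative assumption.

```python
import numpy as np

def akf_step(x, P, z, F, H, Q, R, gamma: float = 3.0):
    """One adaptive Kalman filter step with a residual-based adaptive factor.

    If the innovation is larger than its predicted covariance suggests
    (normalized residual > gamma), the measurement noise R is inflated,
    down-weighting the suspect measurement, which is the adaptive
    mechanism the abstract describes reduced to a scalar factor.
    """
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation and adaptive factor
    y = z - H @ x
    S = H @ P @ H.T + R
    nres = float(y.T @ np.linalg.inv(S) @ y) / len(y)  # normalized residual
    if nres > gamma:
        R = R * nres / gamma                           # inflate measurement noise
        S = H @ P @ H.T + R
    # Update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)                          # [position, velocity]
F = np.array([[1.0, 0.1], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 0.01 * np.eye(2), np.array([[0.05]])
for z in [0.12, 0.21, 5.0, 0.42]:                      # third measurement is an outlier
    x, P = akf_step(x, P, np.array([z]), F, H, Q, R)
print(x)                                               # outlier barely moves the state
```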

35 pages, 10558 KB  
Article
Cave of Altamira (Spain): UAV-Based SLAM Mapping, Digital Twin and Segmentation-Driven Crack Detection for Preventive Conservation in Paleolithic Rock-Art Environments
by Jorge Angás, Manuel Bea, Carlos Valladares, Cristian Iranzo, Gonzalo Ruiz, Pilar Fatás, Carmen de las Heras, Miguel Ángel Sánchez-Carro, Viola Bruschi, Alfredo Prada and Lucía M. Díaz-González
Drones 2026, 10(1), 73; https://doi.org/10.3390/drones10010073 - 22 Jan 2026
Cited by 1
Abstract
The Cave of Altamira (Spain), a UNESCO World Heritage site, contains one of the most fragile and inaccessible Paleolithic rock-art environments in Europe, where geomatics documentation is constrained not only by severe spatial, lighting and safety limitations but also by conservation-driven restrictions on time, access and operational procedures. This study applies a confined-space UAV equipped with LiDAR-based SLAM navigation to document and assess the stability of the vertical rock wall leading to “La Hoya” Hall, a structurally sensitive sector of the cave. Twelve autonomous and assisted flights were conducted, generating dense LiDAR point clouds and video sequences processed through videogrammetry to produce high-resolution 3D meshes. A Mask R-CNN deep learning model was trained on manually segmented images to explore automated crack detection under variable illumination and viewing conditions. The results reveal active fractures, overhanging blocks and sediment accumulations located on inaccessible ledges, demonstrating the capacity of UAV-SLAM workflows to overcome the limitations of traditional surveys in confined subterranean environments. All datasets were integrated into the DiGHER digital twin platform, enabling traceable storage, multitemporal comparison, and collaborative annotation. Overall, the study demonstrates the feasibility of combining UAV-based SLAM mapping, videogrammetry and deep learning segmentation as a reproducible baseline workflow to inform preventive conservation and future multitemporal monitoring in Paleolithic caves and similarly constrained cultural heritage contexts. Full article
(This article belongs to the Topic 3D Documentation of Natural and Cultural Heritage)
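
The crack-detection model named above is standard Mask R-CNN; a minimal torchvision inference sketch follows, with an untrained two-class (background/crack) head and a random image standing in for the cave-wall frames, so it shows only the API shape, not the trained DiGHER model.

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Two classes: background + crack (a hypothetical label set for this sketch)
model = maskrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# One dummy 3-channel image in [0, 1]; real inputs would be the manually
# segmented training frames described in the abstract
img = torch.rand(3, 512, 512)
with torch.no_grad():
    pred = model([img])[0]
print(pred["boxes"].shape, pred["masks"].shape)  # detected crack candidates
```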

41 pages, 7497 KB  
Article
Vertically Constrained LiDAR-Inertial SLAM in Dynamic Environments
by Shuangfeng Wei, Junfeng Qiu, Anpeng Shen, Keming Qu and Tong Yang
Appl. Sci. 2026, 16(2), 1046; https://doi.org/10.3390/app16021046 - 20 Jan 2026
Abstract
With the advancement of Light Detection and Ranging (LiDAR) technology and computer science, LiDAR–Inertial Simultaneous Localization and Mapping (SLAM) has become essential in autonomous driving, robotic navigation, and 3D reconstruction. However, dynamic objects such as pedestrians and vehicles, together with complex terrain conditions, pose serious challenges to existing SLAM systems. These factors introduce artifacts into the acquired point clouds and result in significant vertical drift in SLAM trajectories. To address these challenges, this study focuses on controlling vertical drift errors in LiDAR–Inertial SLAM systems operating in dynamic environments. The research addresses three key aspects: ground point segmentation, dynamic artifact removal, and vertical drift optimization. To improve the robustness of ground point segmentation, this study proposes a method based on a concentric sector model, which divides point clouds into concentric regions and fits flat surfaces within each region to accurately extract ground points. To mitigate the impact of dynamic objects on map quality, this study proposes a removal algorithm that combines multi-frame residual analysis with curvature-based filtering. Specifically, the algorithm tracks residual changes in non-ground points across consecutive frames to detect inconsistencies caused by motion, while curvature features are used to further distinguish moving objects from static structures. This combined approach enables effective identification and removal of dynamic artifacts, resulting in a reduction in vertical drift. Full article
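
A compact sketch of the concentric sector idea: bin points by range ring and azimuth sector, fit a plane to the lowest points of each bin via PCA, and label near-plane points as ground. Bin counts and thresholds are illustrative assumptions.

```python
import numpy as np

def segment_ground_concentric(cloud, n_rings=4, n_sectors=8,
                              max_range=30.0, dist_thresh=0.15):
    """Ground segmentation with a concentric sector model (illustrative sketch).

    The cloud is partitioned into concentric rings and angular sectors
    around the sensor; a plane is fitted to the lowest points of each
    region via PCA, and points within dist_thresh of that plane are
    labeled ground.
    """
    r = np.linalg.norm(cloud[:, :2], axis=1)
    theta = np.arctan2(cloud[:, 1], cloud[:, 0])
    ring = np.minimum((r / (max_range / n_rings)).astype(int), n_rings - 1)
    sector = ((theta + np.pi) / (2 * np.pi / n_sectors)).astype(int) % n_sectors

    ground = np.zeros(len(cloud), dtype=bool)
    for i in range(n_rings):
        for j in range(n_sectors):
            idx = np.where((ring == i) & (sector == j))[0]
            if len(idx) < 10:
                continue
            pts = cloud[idx]
            # Seed the plane fit with the lowest quarter of the region
            seeds = pts[np.argsort(pts[:, 2])[: max(10, len(pts) // 4)]]
            centroid = seeds.mean(axis=0)
            _, eigvecs = np.linalg.eigh(np.cov((seeds - centroid).T))
            normal = eigvecs[:, 0]
            dist = np.abs((pts - centroid) @ normal)
            ground[idx[dist < dist_thresh]] = True
    return ground

rng = np.random.default_rng(4)
xy = rng.uniform(-20, 20, (2000, 2))
scan = np.column_stack([xy, 0.02 * rng.standard_normal(2000)])  # flat ground
scan[:50, 2] += 1.5                                             # a raised obstacle
print(segment_ground_concentric(scan).sum(), "ground points of", len(scan))
```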

54 pages, 8516 KB  
Review
Interdisciplinary Applications of LiDAR in Forest Studies: Advances in Sensors, Methods, and Cross-Domain Metrics
by Nadeem Fareed, Carlos Alberto Silva, Izaya Numata and Joao Paulo Flores
Remote Sens. 2026, 18(2), 219; https://doi.org/10.3390/rs18020219 - 9 Jan 2026
Abstract
Over the past two decades, Light Detection and Ranging (LiDAR) technology has evolved from early National Aeronautics and Space Administration (NASA)-led airborne laser altimetry into commercially mature systems that now underpin vegetation remote sensing across scales. Continuous advancements in laser engineering, signal processing, and complementary technologies—such as Inertial Measurement Units (IMU) and Global Navigation Satellite Systems (GNSS)—have yielded compact, cost-effective, and highly sophisticated LiDAR sensors. Concurrently, innovations in carrier platforms, including uncrewed aerial systems (UAS), mobile laser scanning (MLS), and Simultaneous Localization and Mapping (SLAM) frameworks, have expanded LiDAR’s observational capacity from plot- to global-scale applications in forestry, precision agriculture, ecological monitoring, Above Ground Biomass (AGB) modeling, and wildfire science. This review synthesizes LiDAR’s cross-domain capabilities for the following: (a) quantifying vegetation structure, function, and compositional dynamics; (b) recent sensor developments encompassing discrete-return ALS (ALSD), full-waveform ALS (ALSFW), photon-counting LiDAR (PCL), emerging multispectral LiDAR (MSL), and hyperspectral LiDAR (HSL) systems; and (c) state-of-the-art data processing and fusion workflows integrating optical and radar datasets. The synthesis demonstrates that many LiDAR-derived vegetation metrics are inherently transferable across domains when interpreted within a unified structural framework. The review further highlights the growing role of artificial-intelligence (AI)-driven approaches for segmentation, classification, and multitemporal analysis, enabling scalable assessments of vegetation dynamics at unprecedented spatial and temporal extents. By consolidating historical developments, current methodological advances, and emerging research directions, this review establishes a comprehensive state-of-the-art perspective on LiDAR’s transformative role and future potential in monitoring and modeling Earth’s vegetated ecosystems. Full article
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)

18 pages, 7305 KB  
Article
SERail-SLAM: Semantic-Enhanced Railway LiDAR SLAM
by Weiwei Song, Shiqi Zheng, Xinye Dai, Xiao Wang, Yusheng Wang, Zihao Wang, Shujie Zhou, Wenlei Liu and Yidong Lou
Machines 2026, 14(1), 72; https://doi.org/10.3390/machines14010072 - 7 Jan 2026
Abstract
Reliable state estimation in railway environments presents significant challenges due to geometric degeneracy resulting from repetitive structural layouts and point cloud sparsity caused by high-speed motion. Conventional LiDAR-based SLAM systems frequently suffer from longitudinal drift and mapping artifacts when operating in such feature-scarce and dynamically complex scenarios. To address these limitations, this paper proposes SERail-SLAM, a robust semantic-enhanced multi-sensor fusion framework that tightly couples LiDAR odometry, inertial pre-integration, and GNSS constraints. Unlike traditional approaches that rely on rigid voxel grids or binary semantic masking, we introduce a Semantic-Enhanced Adaptive Voxel Map. By leveraging eigen-decomposition of local point distributions, this mapping strategy dynamically preserves fine-grained stable structures while compressing redundant planar surfaces, thereby enhancing spatial descriptiveness. Furthermore, to mitigate the impact of environmental noise and segmentation uncertainty, a confidence-aware filtering mechanism is developed. This method utilizes raw segmentation probabilities to adaptively weight input measurements, effectively distinguishing reliable landmarks from clutter. Finally, a category-weighted joint optimization scheme is implemented, where feature associations are constrained by semantic stability priors, ensuring globally consistent localization. Extensive experiments on real-world railway datasets demonstrate that the proposed system achieves superior accuracy and robustness compared to state-of-the-art geometric and semantic SLAM methods. Full article
(This article belongs to the Special Issue Dynamic Analysis and Condition Monitoring of High-Speed Trains)
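
The confidence-aware, category-weighted residual weighting can be sketched as below; the class list, stability priors, and min_conf cutoff are invented for illustration, not the paper's calibrated values.

```python
import numpy as np

# Illustrative semantic stability priors: static infrastructure is trusted
# more than vegetation or unknown clutter (assumed values, not the paper's)
CLASS_WEIGHT = {"rail": 1.0, "pole": 0.9, "building": 0.8,
                "vegetation": 0.3, "unknown": 0.1}

def residual_weight(class_name: str, seg_prob: float,
                    min_conf: float = 0.5) -> float:
    """Confidence-aware weight for one feature association.

    Combines a per-class stability prior with the raw segmentation
    probability; associations below min_conf are rejected outright,
    mirroring the confidence-aware filtering the abstract describes.
    """
    if seg_prob < min_conf:
        return 0.0
    return CLASS_WEIGHT.get(class_name, 0.1) * seg_prob

def weighted_cost(residuals, classes, probs):
    """Category-weighted joint cost over all feature associations."""
    w = np.array([residual_weight(c, p) for c, p in zip(classes, probs)])
    return float(np.sum(w * np.asarray(residuals) ** 2))

res = [0.02, 0.05, 0.40]
cls = ["rail", "building", "vegetation"]
prb = [0.97, 0.88, 0.55]
print(weighted_cost(res, cls, prb))  # vegetation residual is heavily down-weighted
```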

15 pages, 4002 KB  
Article
LiDAR–Visual–Inertial Multi-UGV Collaborative SLAM Framework
by Hongyu Wei, Pingfan Wu, Xutong Zhang, Jianyong Zheng, Jianzheng Zhang and Kun Wei
Drones 2026, 10(1), 31; https://doi.org/10.3390/drones10010031 - 5 Jan 2026
Abstract
The collaborative execution of tasks by multiple Unmanned Ground Vehicles (UGVs) has become a development trend in the field of unmanned systems. Existing collaborative Simultaneous Localization and Mapping (SLAM) frameworks mainly employ visual–inertial or LiDAR–inertial methods; C-SLAM based on all three sensor types (LiDAR, visual, and inertial) is relatively uncommon, so existing systems cannot achieve robust and accurate global localization performance in real-world environments. To address this issue, a LiDAR–visual–inertial multi-UGV collaborative SLAM framework is proposed in this paper. The whole system is divided into three parts. The first part constructs a front-end odometry by integrating the raw information from LiDAR, visual, and inertial sensors, which provides accurate initial pose estimation and local mapping for each UGV in the collaborative system. The second part exploits the similarity of the different local maps to form a global map of the environment. The third part achieves global localization and mapping optimization for the multi-UGV localization system. To verify the effectiveness of the proposed framework, a series of real-world experiments has been conducted. Over an average trajectory length of 237 m, the framework achieves a mean Absolute Pose Error (APE) of 1.49 m and a mean Relative Pose Error (RPE) of 1.68° after global optimization. The experimental results demonstrate that the proposed framework achieves superior collaborative localization and mapping performance, with the mean APE reduced by 5.4% and the mean RPE reduced by 1.4% compared to other methods. Full article
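
For context, APE is typically reported after rigidly aligning the estimated trajectory to ground truth (Umeyama closed form, without scale), as in this sketch with synthetic trajectories; the data and function names are illustrative.

```python
import numpy as np

def align_umeyama(est: np.ndarray, ref: np.ndarray):
    """Rigid (rotation + translation) alignment of an estimated trajectory
    to ground truth via the Umeyama closed form; est/ref are (N, 3)."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    H = (est - mu_e).T @ (ref - mu_r)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflections
    R = Vt.T @ S @ U.T
    t = mu_r - R @ mu_e
    return R, t

def ape_rmse(est: np.ndarray, ref: np.ndarray) -> float:
    """Translational APE RMSE after alignment."""
    R, t = align_umeyama(est, ref)
    err = (est @ R.T + t) - ref
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))

rng = np.random.default_rng(5)
gt = np.cumsum(rng.normal(0, 0.5, (200, 3)), axis=0)    # ground-truth track
Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1.0]])     # 90-degree yaw offset
est = gt @ Rz.T + np.array([2.0, -1.0, 0.3]) + rng.normal(0, 0.05, gt.shape)
print(round(ape_rmse(est, gt), 3), "m")                 # leftover noise only
```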
