Search Results (307)

Search Parameters:
Keywords = LiDAR-SLAM

29 pages, 6656 KB  
Article
Improvements to the FLOAM Algorithm: GICP Registration and SOR Filtering in Mobile Robots with Pure Laser Configuration and Enhanced SLAM Performance
by Shichen Fu, Tianbao Zhao, Junkai Zhang, Guangming Guo and Weixiong Zheng
Appl. Sci. 2026, 16(7), 3141; https://doi.org/10.3390/app16073141 - 24 Mar 2026
Viewed by 157
Abstract
Laser SLAM is a key enabling technology for the autonomous navigation of intelligent mobile robots. The standard FLOAM algorithm suffers from low positioning accuracy, weak resistance to interference, and error accumulation in pure LiDAR scenarios, making it difficult to meet practical engineering requirements. Numerous studies have therefore focused on improved, highly robust pure laser SLAM algorithms. This study enhances FLOAM with GICP registration and SOR filtering. SOR filtering removes outlier noise from the laser point cloud, and GICP registration replaces the classic registration with an optimized matching cost function. Experiments are conducted on a mobile robot equipped with a Leishen C16 LiDAR, simulating real-life tests in an indoor corridor and an outdoor plaza on the Gazebo platform. Quantitative evaluation with the EVO tool indicates that the indoor mean absolute error and RMSE were reduced by 46.67% and 41.67%, respectively, compared with FLOAM. The outdoor mean and maximum errors are reduced by 46.00% and 70.00%, respectively. The proposed scheme achieves centimeter-level positioning accuracy and strong robustness in pure laser configurations without auxiliary sensors such as IMUs or odometers, providing a reliable technical solution for the engineering application of mobile robots in sensor-constrained scenarios.
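Statistical outlier removal (SOR), as used in the abstract above, is a standard point-cloud cleanup step. The following is a minimal NumPy sketch of the general technique, not the authors' implementation; the function name and parameter values are illustrative:

```python
import numpy as np

def sor_filter(points, k=10, std_ratio=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbors exceeds the global mean by std_ratio sigmas.
    Brute-force O(n^2) distances; real systems would use a KD-tree."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d.sort(axis=1)                        # column 0 is each point's self-distance (0)
    mean_d = d[:, 1:k + 1].mean(axis=1)   # per-point mean neighbor distance
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d < thresh]

# Dense cluster plus a few far-away outliers
rng = np.random.default_rng(0)
cloud = np.vstack([rng.normal(0, 0.1, (500, 3)),
                   rng.normal(10, 0.1, (5, 3))])
filtered = sor_filter(cloud, k=10, std_ratio=1.0)
```

Points in sparse, isolated regions have a large mean neighbor distance and fall outside the global mean-plus-sigma threshold, so they are dropped while the dense cluster survives.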

19 pages, 894 KB  
Review
Indoor Mapping as a Spatiotemporal Framework for Mitigating Greenhouse Gas Emissions in Buildings: A Review
by Vinuri Nilanika Goonetilleke, Muditha K. Heenkenda and Kamil Zaniewski
Geomatics 2026, 6(2), 27; https://doi.org/10.3390/geomatics6020027 - 19 Mar 2026
Viewed by 222
Abstract
Climate change is a critical global challenge, and the building sector accounts for nearly 30% of global greenhouse gas (GHG) emissions, remaining a key target for mitigation. Indoor environments contribute significantly to GHG emissions, primarily through heating, cooling, lighting, and occupant-driven energy use. Indoor mapping, serving as the foundation for Digital Twins (DTs), provides a spatiotemporal framework that integrates sensor data with Building Information Modelling (BIM), Geographic Information Systems (GIS), and the Internet of Things (IoT) to support energy-efficient, low-carbon building operations. This review examined the role of indoor mapping in understanding, modelling, and reducing GHG emissions in buildings. It synthesized current advancements in indoor spatial data acquisition, ranging from Light Detection And Ranging (LiDAR) and Simultaneous Localization and Mapping (SLAM) to deep learning-based floor plan extraction, and evaluated their contribution to improved indoor environmental analysis. The review highlighted emerging techniques, challenges, and gaps, particularly the limited integration of physical indoor spaces with virtual layers representing assets, occupants, and equipment. Addressing this gap requires embedding spatial modelling as an intermediate analytical layer that structures and contextualizes sensor data to support spatiotemporal decision-making. Overall, this review demonstrated that indoor mapping plays a critical role in transforming spatial information into actionable insights, enabling more accurate energy modelling, enhanced real-time building management, and stronger data-driven strategies for GHG mitigation in the built environment.

20 pages, 24767 KB  
Article
VINA-SLAM: A Voxel-Based Inertial and Normal-Aligned LiDAR–IMU SLAM
by Ruyang Zhang and Bingyu Sun
Sensors 2026, 26(6), 1810; https://doi.org/10.3390/s26061810 - 13 Mar 2026
Viewed by 386
Abstract
Environments with sparse or repetitive geometric structures, such as long corridors and narrow stairwells, remain challenging for LiDAR–inertial simultaneous localization and mapping (LiDAR–IMU SLAM) due to insufficient geometric observability and unreliable data associations. To address these issues, we propose VINA-SLAM, a novel LiDAR–IMU SLAM framework that constructs a unified global voxel map to explicitly exploit structural consistency. VINA-SLAM continuously tracks surface normals stored in the global voxel map using a normal-guided correspondence strategy, enabling stable scan-to-map alignment in degenerate scenes. Furthermore, a tangent-space metric is introduced to supplement missing rotational constraints around planar regions, providing reliable initial pose estimates for local optimization. A tightly coupled sliding-window bundle adjustment is then formulated by jointly incorporating IMU factors, voxel normal consistency factors, and planar regularization terms. In particular, the minimum eigenvalue of each voxel’s covariance is used as a statistically principled planar constraint, improving the Hessian conditioning and cross-view geometric consistency. The proposed system directly aligns raw LiDAR scans to the voxelized map without explicit feature extraction or loop closure. Experiments on 25 sequences from the HILTI and MARS-LVIG datasets show that VINA-SLAM reduces ATE by 25–40% on average while maintaining real-time performance at 10 Hz in the evaluated geometrically degenerate environments.
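The voxel-normal and minimum-eigenvalue planarity ideas mentioned above can be illustrated with a small NumPy sketch (the `voxel_normal` helper is invented for illustration, not taken from the paper): the normal of a voxel's points is the eigenvector of the covariance matrix's smallest eigenvalue, and that eigenvalue measures how planar the voxel is.

```python
import numpy as np

def voxel_normal(points):
    """Estimate a surface normal for one voxel's points via the eigenvector
    of the covariance matrix's smallest eigenvalue; the eigenvalue itself
    measures how planar the voxel is (near zero => thin plane)."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    normal = eigvecs[:, 0]                    # smallest-eigenvalue direction
    return normal, eigvals[0]

# Noisy points sampled on the z = 0 plane
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(-1, 1, 200),
                       rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])
n, planarity = voxel_normal(pts)
```

For these near-planar points the recovered normal is close to the z-axis and the planarity value is tiny, which is why a small minimum eigenvalue can serve as a plane-likeness gate.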

26 pages, 4902 KB  
Article
Multi-Sensor-Assisted Navigation for UAVs in Power Inspection: A Fusion Approach Using LiDAR, IMU and GPS
by Anjun Wang, Wenbin Yu, Xuexing Dong, Yang Yang, Shizeng Liu, Jiahao Liu and Hongwei Mei
Appl. Sci. 2026, 16(6), 2632; https://doi.org/10.3390/app16062632 - 10 Mar 2026
Viewed by 229
Abstract
High-precision localization is essential for autonomous navigation and environment perception of unmanned aerial vehicles (UAVs) in complex power inspection scenarios. To overcome the limited accuracy and accumulated drift of conventional GPS-based single-sensor localization, this paper proposes a LiDAR–IMU–GPS-aided navigation method that combines a tightly coupled front-end and a loosely coupled back-end. The front-end employs an improved Lie-group-based UKF-SLAM framework to explicitly handle the nonlinearities of rotational motion, thereby improving the stability of local pose estimation. The back-end integrates GPS absolute constraints, loop closure detection, and point cloud registration via pose graph optimization, which effectively suppresses long-term accumulated drift. The framework achieves accurate and robust localization for UAV power inspection. Experiments on public benchmark datasets and real-world power inspection scenarios demonstrate the effectiveness of the proposed method. On the MH_02_easy sequence, the absolute trajectory error is reduced from 0.521 m to 0.170 m compared with ROVIO, while in a real inspection sequence the cumulative error is reduced by more than 99% after back-end optimization. Moreover, the system maintains stable navigation under GPS-degraded conditions, indicating strong robustness and practical applicability.
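The loosely coupled back-end idea — relative odometry factors combined with sparse absolute GPS constraints in one joint optimization — can be sketched as a toy 1-D linear least-squares problem. This is only a stand-in for the paper's 3-D pose graph with loop closures; all numbers and weights here are invented:

```python
import numpy as np

# Toy back-end: fuse relative odometry increments with sparse absolute GPS
# fixes as a linear least-squares problem over five 1-D poses x[0..4].
odom = np.array([1.0, 1.0, 1.0, 1.0])    # measured increments x[i+1] - x[i]
gps = {0: 0.0, 4: 3.6}                   # absolute fixes: odometry drifted
                                         # to x[4]=4.0, GPS says 3.6

n = len(odom) + 1
rows, rhs = [], []
for i, d in enumerate(odom):             # odometry factors, weight 1
    r = np.zeros(n)
    r[i + 1], r[i] = 1.0, -1.0
    rows.append(r)
    rhs.append(d)
for i, z in gps.items():                 # GPS factors, weight 10 (trusted more)
    r = np.zeros(n)
    r[i] = 10.0
    rows.append(r)
    rhs.append(10.0 * z)

x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The solver distributes the 0.4 m odometry/GPS disagreement across the trajectory: the endpoints land near the GPS fixes while the interior poses keep near-equal increments, which is exactly the drift-suppression effect pose graph optimization provides at scale.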

28 pages, 6157 KB  
Article
RI-DVP: A Physics–Geometry Dual-Driven Framework for Static Map Construction in Sparse LiDAR Scenarios
by Xiaokai Li, Li Wang, Haolong Luo and Guangyun Li
Remote Sens. 2026, 18(5), 821; https://doi.org/10.3390/rs18050821 - 6 Mar 2026
Viewed by 294
Abstract
High-fidelity static map construction is essential for reliable autonomous navigation, yet dynamic environments introduce severe artifacts caused by moving objects (also referred to as dynamic artifacts) in accumulated maps. While geometry-based methods perform well on dense point clouds, their performance notably degrades on sparse 16-beam LiDAR due to the “Sparsity Trap”: dynamic objects are frequently missed by ray-based geometry, and purely geometric cues fail in radiometrically ambiguous scenarios. To address this, we propose RI-DVP, a physics–geometry dual-driven framework. Unlike conventional approaches, RI-DVP first performs a physics-inspired radiometric normalization that compensates for range attenuation and incidence-angle effects to establish a consistent signal baseline. Subsequently, a Dual-Residual Aggressive Removal (DRAR) module jointly exploits geometric residuals—bounded by a range-dependent spatial uncertainty envelope—and calibrated intensity residuals to detect geometrically indistinguishable objects. To balance recall and precision, a Hierarchical Static Reversion (HSR) strategy employs two-stage recovery to retrieve large-scale structures and correct fine-grained artifacts via topology-based adhesion reasoning. Experiments on SemanticKITTI and custom sparse datasets demonstrate that RI-DVP outperforms state-of-the-art geometric baselines, improving Dynamic Accuracy by over 36 percentage points in sparse scanning scenarios with a VLP-16 LiDAR sensor (Velodyne Acoustics, Inc., Morgan Hill, CA, USA), compared to baselines that fail under the sparsity trap, while achieving real-time performance at approximately 15.3 Hz.
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
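A common physics-inspired intensity-normalization model compensates raw LiDAR returns for 1/r² range attenuation and Lambertian cos(θ) incidence-angle falloff. The abstract does not state RI-DVP's exact calibration, so the following is a generic sketch with an assumed `ref_range` parameter:

```python
import numpy as np

def normalize_intensity(intensity, rng_m, cos_incidence, ref_range=10.0):
    """Compensate raw intensity for 1/r^2 range attenuation and the
    Lambertian cos(theta) incidence-angle effect, referenced to ref_range."""
    cos_incidence = np.clip(cos_incidence, 0.1, 1.0)  # avoid grazing-angle blow-up
    return intensity * (rng_m / ref_range) ** 2 / cos_incidence

# The same surface seen at 10 m head-on, and at 20 m under 60 deg incidence.
# Raw values are chosen to be consistent with this model: the farther,
# more oblique return is weaker, but both normalize to the same baseline.
i_near = normalize_intensity(100.0, 10.0, 1.0)
i_far = normalize_intensity(12.5, 20.0, 0.5)
```

After normalization, both observations of the same material yield the same calibrated intensity, which is what makes intensity residuals comparable across viewpoints.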

33 pages, 12968 KB  
Article
Tunnel-SLAM: Low-Cost LiDAR/Vision/RTK/Inertial Integration on Vehicles for Roadway Tunnels
by Zeyu Li, Xian Wu, Jianhui Cui, Ying Xu, Rufei Liu, Rui Tu and Wei Jiang
Electronics 2026, 15(5), 1101; https://doi.org/10.3390/electronics15051101 - 6 Mar 2026
Viewed by 385
Abstract
Reliable positioning and mapping in roadway tunnels are crucial for vehicle-based monitoring and inspection, especially considering challenging environmental conditions such as rapidly changing illumination, low-texture environments, and repetitive structural elements. While general LiDAR-inertial odometry (LIO) frameworks and loop-closure detection methods are effective in general scenarios, they often suffer from severe drift or incorrect loop constraints under these specific conditions. These challenges are further exacerbated by the inherent uncertainties of low-cost sensors. This paper introduces a narrow field-of-view LiDAR-centric RTK-visual-inertial SLAM system enhanced by three key modules: semantic-assisted loop detection and matching, two-stage RTK quality control, and adaptive factor graph optimization (FGO). In the first module, the proposed semantic loop descriptor (SLD) matching determines potential loop closure locations and then integrates the corresponding constraints as graph nodes. The quality control module addresses RTK outlier rejection during tunnel entry and exit, employing an event-driven stochastic model to characterize the uncertainty between RTK and the other sensors, effectively suppressing RTK-induced errors. The FGO module performs optimization by incorporating LIO, RTK, and loop closure factors, employing a keyframe-based strategy to produce globally optimized poses while continuously updating the map. The proposed Tunnel-SLAM was evaluated against state-of-the-art SLAM algorithms in four extended roadway tunnels with traveling distances of approximately 5 to 10 km. Experimental results show that the proposed SLAM achieved a final drift of less than 2 m with loop closure, significantly reducing drift where existing SLAM frameworks fail catastrophically or drift heavily.
(This article belongs to the Special Issue Simultaneous Localization and Mapping (SLAM) of Mobile Robots)
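RTK quality control of the kind described often reduces to innovation gating: reject a fix whose Mahalanobis distance from the filter's predicted position exceeds a chi-square bound. A generic sketch (the function name, threshold, and numbers are assumptions, not the paper's event-driven stochastic model):

```python
import numpy as np

def rtk_gate(rtk_pos, pred_pos, cov, chi2_thresh=7.815):
    """Accept an RTK fix only if the squared Mahalanobis distance between the
    fix and the filter's predicted position passes a chi-square(3) gate
    (7.815 is the 95% quantile for 3 degrees of freedom)."""
    innov = rtk_pos - pred_pos
    d2 = innov @ np.linalg.inv(cov) @ innov
    return d2 < chi2_thresh

cov = np.eye(3) * 0.04                   # predicted 3-D position covariance (m^2)
pred = np.array([10.0, 5.0, 1.0])
ok = rtk_gate(np.array([10.1, 5.0, 1.0]), pred, cov)   # 10 cm innovation
bad = rtk_gate(np.array([13.0, 5.0, 1.0]), pred, cov)  # 3 m multipath jump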

16 pages, 2080 KB  
Article
Lidar–Vision Depth Fusion for Robust Loop Closure Detection in SLAM Systems
by Bingzhuo Liu, Panlong Wu, Rongting Chen, Yidan Zheng and Mengyu Li
Machines 2026, 14(3), 282; https://doi.org/10.3390/machines14030282 - 3 Mar 2026
Viewed by 361
Abstract
Loop Closure Detection (LCD) is a key component of Simultaneous Localization and Mapping (SLAM) systems, responsible for correcting odometric drift and maintaining global consistency in localization and mapping. However, single-modality LCD methods suffer from inherent limitations: LiDAR-based approaches are affected by point cloud sparsity, limiting feature representation in unstructured environments, while vision-based methods are sensitive to illumination and weather variations, reducing robustness. To address these issues, this paper presents a LiDAR–vision multimodal fusion LCD algorithm. Spatiotemporal alignment between LiDAR point clouds and images is achieved through extrinsic calibration and timestamp interpolation to ensure cross-modal consistency. Harris corner detection and BRIEF descriptors are employed to extract visual features, and a LiDAR-projected sparse depth map is used to complete depth information, mapping 2D features into 3D space. A hybrid feature representation is then constructed by fusing LiDAR geometric triangle descriptors with visual BRIEF descriptors, enabling efficient loop candidate retrieval via hash indexing. Finally, an improved RANSAC algorithm performs geometric verification to enhance the robustness of relative pose estimation. Experiments on the KITTI and NCLT datasets show that the proposed method achieves average F1 scores of 85.28% and 77.63%, respectively, outperforming both unimodal and existing multimodal approaches. When integrated into a SLAM framework, it reduces the Absolute Trajectory Error (ATE) RMSE by 11.2–16.4% compared with LiDAR-only methods, demonstrating improved loop detection accuracy and overall system robustness in complex environments.
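Mapping 2-D visual features into 3-D with a LiDAR-completed depth map is standard pinhole back-projection. A minimal sketch under assumed intrinsics (the values of `K` and the flat-wall depth are illustrative only):

```python
import numpy as np

def backproject(keypoints_uv, depth, K):
    """Lift 2-D image keypoints into 3-D camera coordinates using a
    (LiDAR-completed) depth map and pinhole intrinsics K."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    pts = []
    for u, v in keypoints_uv:
        z = depth[v, u]                             # metric depth at the pixel
        pts.append([(u - cx) * z / fx, (v - cy) * z / fy, z])
    return np.array(pts)

K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
depth = np.full((480, 640), 2.0)                    # flat wall 2 m away
pts3d = backproject([(320, 240), (420, 240)], depth, K)
```

The principal-point pixel lifts to a point straight ahead on the optical axis, and a pixel 100 columns to the right lands 0.4 m to the side at the same depth.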

21 pages, 2960 KB  
Article
Comparative Performance Evaluation of Multi-Type LiDAR Sensors and Their Applicability to Sidewalk HD Mapping
by Dongha Lee, Sungho Kang, Jaecheol Lee and Junghyun Kim
Sensors 2026, 26(5), 1480; https://doi.org/10.3390/s26051480 - 26 Feb 2026
Viewed by 345
Abstract
Sidewalk high-definition (HD) maps require centimetre-level representation of pedestrian barriers to support mobility assistance and barrier-free infrastructure management. This study evaluates six mobile light detection and ranging (LiDAR) platforms for sidewalk HD mapping: terrestrial laser scanning (TLS), a push-cart mobile mapping system (MMS), two backpack systems (GNSS/INS (Global Navigation Satellite System/Inertial Navigation System)-aided and SLAM (simultaneous localization and mapping)-based), and two handheld systems (GNSS/INS-aided and SLAM-based). Surveys were conducted at two sites with contrasting occlusion and GNSS conditions (park and dense downtown corridors). Point clouds were transformed to a common control network, with independent checkpoints for absolute accuracy. The reference dataset achieved a planimetric root mean square error (RMSE) of 0.017–0.049 m and vertical RMSE of 0.009–0.014 m across sites. Platforms were compared for positional accuracy, point density, and extractability of key accessibility attributes (effective width, step height, and longitudinal slope). Cart-mounted MMS provided stable geometry under occlusion, while SLAM-based handheld mapping improved robustness in GNSS-degraded areas; backpack SLAM performance depended on loop-closure opportunities and scene dynamics. We provide guidance on selecting pedestrian-scale LiDAR platforms for sidewalk HD mapping under different survey conditions.
(This article belongs to the Special Issue Remote Sensing in Urban Surveying and Mapping)

25 pages, 15267 KB  
Article
3D Semantic Map Reconstruction for Orchard Environments Using Multi-Sensor Fusion
by Quanchao Wang, Yiheng Chen, Jiaxiang Li, Yongxing Chen and Hongjun Wang
Agriculture 2026, 16(4), 455; https://doi.org/10.3390/agriculture16040455 - 15 Feb 2026
Viewed by 615
Abstract
Semantic point cloud maps play a pivotal role in smart agriculture. They provide not only core three-dimensional data for orchard management but also empower robots with environmental perception, enabling safer and more efficient navigation and planning. However, traditional point cloud maps primarily model surrounding obstacles from a geometric perspective, failing to capture the distinctions and characteristics of individual obstacles. In contrast, semantic maps encompass semantic information and even topological relationships among objects in the environment. Furthermore, existing semantic map construction methods are predominantly vision-based, making them ill-suited to the rapid lighting changes in agricultural settings that can cause positioning failures. Therefore, this paper proposes a positioning and semantic map reconstruction method tailored for orchards. It integrates visual, LiDAR, and inertial sensors to obtain high-precision poses and point cloud maps. By combining open-vocabulary detection and semantic segmentation models, it projects two-dimensional detected semantic information onto the three-dimensional point cloud, ultimately generating a point cloud map enriched with semantic information. The resulting 2D occupancy grid map is utilized for robotic motion planning. Experimental results demonstrate that on a custom dataset, the proposed method achieves 74.33% mIoU for semantic segmentation, a 12.4% relative error in fruit recall, and a 0.038803 m mean translation error for localization. The deployed semantic segmentation network Fast-SAM achieves a processing speed of 13.36 ms per frame. These results demonstrate that the proposed method combines high accuracy with real-time performance in semantic map reconstruction. This exploratory work provides theoretical and technical references for future research on more precise localization and more complete semantic mapping, offering broad application prospects and providing key technological support for intelligent agriculture.
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)

25 pages, 7517 KB  
Article
VCC: Vertical Feature and Circle Combined Descriptor for 3D Place Recognition
by Wenguang Li, Yongxin Ma, Jiying Ren, Jinshun Ou, Jun Zhou and Panling Huang
Sensors 2026, 26(4), 1185; https://doi.org/10.3390/s26041185 - 11 Feb 2026
Viewed by 288
Abstract
Loop closure detection remains a critical challenge in LiDAR-based SLAM, particularly for achieving robust place recognition in environments with rotational and translational variations. To extract more concise environmental representations from point clouds and improve extraction efficiency, this paper proposes the vertical feature and circle combined (VCC) descriptor, a novel composite 3D local descriptor designed for efficient and rotation-invariant place recognition. The VCC descriptor captures environmental structure by extracting vertical features from voxelized point clouds and encoding them into circular arc-based histograms, ensuring robustness to viewpoint changes. Under the same hardware, experiments conducted on different datasets demonstrate that the proposed algorithm significantly improves both feature representation efficiency and loop closure recognition performance compared with other descriptors, completing loop closure retrieval within 30 ms, which satisfies real-time operation requirements. The results confirm that VCC provides a compact, efficient, and rotation-invariant representation suitable for LiDAR-based SLAM systems.
(This article belongs to the Section Radar Sensors)
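The rotation-invariance property that circular/ring descriptors rely on can be shown with a toy example: binning points by horizontal range alone makes the descriptor invariant to yaw rotation of the cloud. This is not the actual VCC construction (which encodes vertical features into arc-based histograms); the function and its parameters are illustrative:

```python
import numpy as np

def ring_height_descriptor(points, n_rings=8, max_range=20.0):
    """Toy rotation-invariant descriptor: bin points into concentric rings
    around the sensor and record the maximum height per ring. Rotating the
    cloud about the vertical axis leaves the descriptor unchanged."""
    r = np.linalg.norm(points[:, :2], axis=1)
    ring = np.minimum((r / max_range * n_rings).astype(int), n_rings - 1)
    desc = np.zeros(n_rings)
    np.maximum.at(desc, ring, points[:, 2])   # per-ring max height
    return desc

rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-15, 15, (1000, 2)),
                       rng.uniform(0, 3, 1000)])
desc = ring_height_descriptor(pts)
```

Because horizontal range is unchanged by yaw rotation, a revisited place observed from a different heading produces the same ring histogram, which is the core idea behind viewpoint-robust retrieval.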

27 pages, 6570 KB  
Article
LiDAR–Inertial–Visual Odometry Based on Elastic Registration and Dynamic Feature Removal
by Qiang Ma, Fuhong Qin, Peng Xiao, Meng Wei, Sihong Chen, Wenbo Xu, Xingrui Yue, Ruicheng Xu and Zheng He
Electronics 2026, 15(4), 741; https://doi.org/10.3390/electronics15040741 - 9 Feb 2026
Viewed by 462
Abstract
Simultaneous Localization and Mapping (SLAM) is a fundamental capability for autonomous robots. However, in highly dynamic scenes, conventional SLAM systems often suffer from degraded accuracy due to LiDAR motion distortion and interference from moving objects. To address these challenges, this paper proposes a LiDAR–Inertial–Visual odometry framework based on elastic registration and dynamic feature removal, aiming to enhance system robustness through targeted algorithmic refinements. In the LiDAR odometry module, an elastic registration-based de-skewing method is introduced by modeling second-order motion, enabling accurate point cloud correction under non-uniform motion. In the visual odometry module, a multi-strategy dynamic feature suppression mechanism is developed, combining IMU-assisted motion consistency verification with a lightweight YOLOv5-based detection network to effectively filter out dynamic interference with low computational overhead. Furthermore, depth information for visual key points is recovered using LiDAR assistance to enable tightly coupled pose estimation. Extensive experiments on the TUM and M2DGR datasets demonstrate that the proposed method achieves a 96.3% reduction in absolute trajectory error (ATE) compared with ORB-SLAM2 in highly dynamic scenarios. Real-world deployment on an embedded computing device further confirms the framework’s real-time performance and practical applicability in complex environments.
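Motion de-skewing of a LiDAR scan can be sketched in 2-D with a constant-velocity model: each point, measured in the sensor frame at its own timestamp, is re-expressed in the frame the sensor occupied at scan start. The paper models second-order motion; this first-order sketch, with invented numbers, only shows the principle:

```python
import numpy as np

def deskew(points, times, v, w):
    """Express each 2-D point, measured in the sensor frame at its timestamp,
    in the frame the sensor occupied at scan start, assuming constant linear
    velocity v (in the start frame) and constant yaw rate w."""
    out = np.empty_like(points)
    for i, (p, t) in enumerate(zip(points, times)):
        ang = w * t                        # sensor heading change since scan start
        c, s = np.cos(ang), np.sin(ang)
        R = np.array([[c, -s], [s, c]])    # sensor-at-t frame -> start frame
        out[i] = R @ p + v * t             # rotate, then add accumulated translation
    return out

# Sensor moving at 1 m/s along +x; a static landmark at [5, 0] in the start
# frame is measured 0.1 s into the scan as [4.9, 0] and de-skewed back.
fixed = deskew(np.array([[4.9, 0.0]]), np.array([0.1]),
               np.array([1.0, 0.0]), 0.0)
```

With the motion undone, all points of the sweep live in one consistent frame, which is the precondition for accurate scan registration.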

20 pages, 4015 KB  
Article
Adaptive Kalman Filter-Based SLAM in LiDAR-Degenerated Environments
by Ran Ma, Tao Zhou and Liang Chen
Sensors 2026, 26(3), 861; https://doi.org/10.3390/s26030861 - 28 Jan 2026
Viewed by 824
Abstract
Owing to their low cost, small size, and convenience of installation, 2D LiDARs have been widely used in mobile robots for simultaneous localization and mapping (SLAM). However, traditional 2D LiDAR SLAM methods have low robustness and accuracy in LiDAR-degenerated environments. To improve robustness in such environments, an innovative SLAM method is developed, which mainly includes two parts: front-end positioning and back-end optimization. Specifically, in the front-end part, an adaptive Kalman filter (AKF) is applied to estimate the pose of the mobile robot, the zero biases of the accelerometer and gyroscope, the lever-arm length, and the mounting angle. The adaptive factor of the AKF dynamically adjusts the variances of the process and measurement noises based on the residual. In the back-end part, a particle filter (PF) is employed to optimize the pose estimation and build the map, where a pose-domain constraint from the front-end output is introduced in the PF to avoid mismatch and enhance positioning accuracy. To verify the performance of the method, a series of experiments is carried out in four typical environments. The experimental results show that the positioning precision is improved by about 61.3–97.9%, 35.7–99.0%, and 43.8–93.0% compared to Karto SLAM, Hector SLAM, and Cartographer, respectively.
(This article belongs to the Section Navigation and Positioning)
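The residual-driven noise adaptation described for the AKF can be illustrated with a scalar random-walk filter in which the measurement variance R is recursively re-estimated from the innovations. This is a textbook innovation-based scheme with invented parameters, not the authors' exact formulation:

```python
import numpy as np

def akf_1d(measurements, q=0.01, r0=1.0, alpha=0.3):
    """1-D adaptive Kalman filter on a random-walk state: the measurement
    noise variance R is re-estimated from the innovation sequence
    (R ~ E[innov^2] - P), so unusually large residuals inflate R and are
    automatically down-weighted."""
    x, p, r = 0.0, 1.0, r0
    out = []
    for z in measurements:
        p += q                                           # predict
        innov = z - x
        r = (1 - alpha) * r + alpha * max(innov**2 - p, r0 * 0.01)  # adapt R
        k = p / (p + r)                                  # update
        x += k * innov
        p *= (1 - k)
        out.append(x)
    return np.array(out)

# Noisy measurements of a constant true value 5.0
rng = np.random.default_rng(3)
zs = 5.0 + rng.normal(0, 0.5, 200)
est = akf_1d(zs)
```

Because R tracks the observed innovation variance, the filter needs no hand-tuned measurement noise: it converges to the true value and settles with a gain consistent with the actual sensor noise.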

35 pages, 10558 KB  
Article
Cave of Altamira (Spain): UAV-Based SLAM Mapping, Digital Twin and Segmentation-Driven Crack Detection for Preventive Conservation in Paleolithic Rock-Art Environments
by Jorge Angás, Manuel Bea, Carlos Valladares, Cristian Iranzo, Gonzalo Ruiz, Pilar Fatás, Carmen de las Heras, Miguel Ángel Sánchez-Carro, Viola Bruschi, Alfredo Prada and Lucía M. Díaz-González
Drones 2026, 10(1), 73; https://doi.org/10.3390/drones10010073 - 22 Jan 2026
Viewed by 804
Abstract
The Cave of Altamira (Spain), a UNESCO World Heritage site, contains one of the most fragile and inaccessible Paleolithic rock-art environments in Europe, where geomatics documentation is constrained not only by severe spatial, lighting and safety limitations but also by conservation-driven restrictions on time, access and operational procedures. This study applies a confined-space UAV equipped with LiDAR-based SLAM navigation to document and assess the stability of the vertical rock wall leading to “La Hoya” Hall, a structurally sensitive sector of the cave. Twelve autonomous and assisted flights were conducted, generating dense LiDAR point clouds and video sequences processed through videogrammetry to produce high-resolution 3D meshes. A Mask R-CNN deep learning model was trained on manually segmented images to explore automated crack detection under variable illumination and viewing conditions. The results reveal active fractures, overhanging blocks and sediment accumulations located on inaccessible ledges, demonstrating the capacity of UAV-SLAM workflows to overcome the limitations of traditional surveys in confined subterranean environments. All datasets were integrated into the DiGHER digital twin platform, enabling traceable storage, multitemporal comparison, and collaborative annotation. Overall, the study demonstrates the feasibility of combining UAV-based SLAM mapping, videogrammetry and deep learning segmentation as a reproducible baseline workflow to inform preventive conservation and future multitemporal monitoring in Paleolithic caves and similarly constrained cultural heritage contexts.
(This article belongs to the Topic 3D Documentation of Natural and Cultural Heritage)

41 pages, 7497 KB  
Article
Vertically Constrained LiDAR-Inertial SLAM in Dynamic Environments
by Shuangfeng Wei, Junfeng Qiu, Anpeng Shen, Keming Qu and Tong Yang
Appl. Sci. 2026, 16(2), 1046; https://doi.org/10.3390/app16021046 - 20 Jan 2026
Viewed by 388
Abstract
With the advancement of Light Detection and Ranging (LiDAR) technology and computer science, LiDAR–Inertial Simultaneous Localization and Mapping (SLAM) has become essential in autonomous driving, robotic navigation, and 3D reconstruction. However, dynamic objects such as pedestrians and vehicles, together with complex terrain conditions, pose serious challenges to existing SLAM systems. These factors introduce artifacts into the acquired point clouds and cause significant vertical drift in SLAM trajectories. To address these challenges, this study focuses on controlling vertical drift errors in LiDAR–Inertial SLAM systems operating in dynamic environments. The research addresses three key aspects: ground point segmentation, dynamic artifact removal, and vertical drift optimization. To improve the robustness of ground point segmentation, this study proposes a method based on a concentric sector model. This method divides point clouds into concentric regions and fits flat surfaces within each region to accurately extract ground points. To mitigate the impact of dynamic objects on map quality, this study proposes a removal algorithm that combines multi-frame residual analysis with curvature-based filtering. Specifically, the algorithm tracks residual changes in non-ground points across consecutive frames to detect inconsistencies caused by motion, while curvature features are used to further distinguish moving objects from static structures. This combined approach enables effective identification and removal of dynamic artifacts, resulting in a reduction in vertical drift.
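Ring-based ground segmentation of the kind proposed can be sketched as: split points into concentric rings by horizontal range, fit a local plane per ring from the lower points, and label points near that plane as ground. This is a simplified stand-in for the paper's concentric sector model; the function name and all parameters are invented:

```python
import numpy as np

def ground_by_rings(points, n_rings=4, max_range=20.0, dz=0.15):
    """Concentric-ring ground segmentation sketch: per ring, least-squares
    fit a plane z = a*x + b*y + c from the lower 70% of points (to keep
    obstacles out of the fit) and label points within dz of it as ground."""
    r = np.linalg.norm(points[:, :2], axis=1)
    ring = np.minimum((r / max_range * n_rings).astype(int), n_rings - 1)
    is_ground = np.zeros(len(points), dtype=bool)
    for k in range(n_rings):
        idx = np.where(ring == k)[0]
        if len(idx) < 10:
            continue
        z = points[idx, 2]
        seed = idx[z < np.percentile(z, 70)]          # fit only the lower points
        A = np.column_stack([points[seed, 0], points[seed, 1], np.ones(len(seed))])
        coef, *_ = np.linalg.lstsq(A, points[seed, 2], rcond=None)
        Afull = np.column_stack([points[idx, 0], points[idx, 1], np.ones(len(idx))])
        resid = np.abs(Afull @ coef - points[idx, 2])
        is_ground[idx] = resid < dz
    return is_ground

# Flat noisy ground plus a few elevated "pole" points
rng = np.random.default_rng(4)
flat = np.column_stack([rng.uniform(-15, 15, (300, 2)), rng.normal(0, 0.02, 300)])
poles = np.column_stack([rng.uniform(-15, 15, (20, 2)), rng.uniform(1.0, 2.0, 20)])
cloud = np.vstack([flat, poles])
mask = ground_by_rings(cloud)
```

Fitting per ring rather than globally is what lets such methods follow gently sloping or uneven terrain, at the cost of needing enough points in each ring for a stable fit.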

54 pages, 8516 KB  
Review
Interdisciplinary Applications of LiDAR in Forest Studies: Advances in Sensors, Methods, and Cross-Domain Metrics
by Nadeem Fareed, Carlos Alberto Silva, Izaya Numata and Joao Paulo Flores
Remote Sens. 2026, 18(2), 219; https://doi.org/10.3390/rs18020219 - 9 Jan 2026
Viewed by 1434
Abstract
Over the past two decades, Light Detection and Ranging (LiDAR) technology has evolved from early National Aeronautics and Space Administration (NASA)-led airborne laser altimetry into commercially mature systems that now underpin vegetation remote sensing across scales. Continuous advancements in laser engineering, signal processing, and complementary technologies—such as Inertial Measurement Units (IMU) and Global Navigation Satellite Systems (GNSS)—have yielded compact, cost-effective, and highly sophisticated LiDAR sensors. Concurrently, innovations in carrier platforms, including uncrewed aerial systems (UAS), mobile laser scanning (MLS), and Simultaneous Localization and Mapping (SLAM) frameworks, have expanded LiDAR’s observational capacity from plot- to global-scale applications in forestry, precision agriculture, ecological monitoring, Above Ground Biomass (AGB) modeling, and wildfire science. This review synthesizes LiDAR’s cross-domain capabilities for the following: (a) quantifying vegetation structure, function, and compositional dynamics; (b) recent sensor developments encompassing ALS discrete-return (ALSD) and ALS full-waveform (ALSFW), photon-counting LiDAR (PCL), emerging multispectral LiDAR (MSL), and hyperspectral LiDAR (HSL) systems; and (c) state-of-the-art data processing and fusion workflows integrating optical and radar datasets. The synthesis demonstrates that many LiDAR-derived vegetation metrics are inherently transferable across domains when interpreted within a unified structural framework. The review further highlights the growing role of artificial-intelligence (AI)-driven approaches for segmentation, classification, and multitemporal analysis, enabling scalable assessments of vegetation dynamics at unprecedented spatial and temporal extents. By consolidating historical developments, current methodological advances, and emerging research directions, this review establishes a comprehensive state-of-the-art perspective on LiDAR’s transformative role and future potential in monitoring and modeling Earth’s vegetated ecosystems.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)
