Search Results (322)

Search Parameters:
Keywords = LIDAR SLAM

18 pages, 12540 KiB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings.
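The tightly coupled correction step in pipelines like the one described above ultimately minimizes point-to-plane residuals between LiDAR feature points and map planes. A minimal sketch of that residual and its Gauss-Newton solve (illustrative only, not the authors' code: translation-only, with rotation, IMU propagation, and the iterated-Kalman machinery omitted):

```python
import numpy as np

def refine_translation(points, normals, ds, t0, iters=5):
    """Gauss-Newton refinement of a translation t minimizing the squared
    point-to-plane residuals r_i = n_i . (p_i + t) + d_i."""
    t = np.asarray(t0, dtype=float).copy()
    for _ in range(iters):
        r = np.einsum("ij,ij->i", normals, points + t) + ds  # residual per point
        J = normals                                          # dr_i/dt = n_i
        dt = np.linalg.solve(J.T @ J, -J.T @ r)              # normal equations
        t += dt
    return t
```

A full LIO stack extends this with a rotation block in the Jacobian, IMU priors, and iteration inside the filter update.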
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

20 pages, 5862 KiB  
Article
ICP-Based Mapping and Localization System for AGV with 2D LiDAR
by Felype de L. Silva, Eisenhawer de M. Fernandes, Péricles R. Barros, Levi da C. Pimentel, Felipe C. Pimenta, Antonio G. B. de Lima and João M. P. Q. Delgado
Sensors 2025, 25(15), 4541; https://doi.org/10.3390/s25154541 - 22 Jul 2025
Viewed by 85
Abstract
This work presents the development of a functional real-time SLAM system designed to enhance the perception capabilities of an Automated Guided Vehicle (AGV) using only a 2D LiDAR sensor. The proposal aims to address recurring gaps in the literature, such as the need for low-complexity solutions that are independent of auxiliary sensors and capable of operating on embedded platforms with limited computational resources. The system integrates scan alignment techniques based on the Iterative Closest Point (ICP) algorithm. Experimental validation in a controlled environment indicated better performance using Gauss–Newton optimization and the point-to-plane metric, achieving pose estimation accuracy of 99.42%, 99.6%, and 99.99% in the position (x, y) and orientation (θ) components, respectively. Subsequently, the system was adapted for operation with data from the onboard sensor, integrating a lightweight graphical interface for real-time visualization of scans, estimated pose, and the evolving map. Despite the moderate update rate, the system proved effective for robotic applications, enabling coherent localization and progressive environment mapping. The modular architecture developed allows for future extensions such as trajectory planning and control. The proposed solution provides a robust and adaptable foundation for mobile platforms, with potential applications in industrial automation, academic research, and education in mobile robotics.
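The scan-alignment core of such a system can be illustrated with a compact 2-D point-to-point ICP using the closed-form SVD (Kabsch) solution; this is a generic sketch under simplified assumptions, not the paper's Gauss–Newton point-to-plane variant:

```python
import numpy as np

def icp_2d(src, dst, iters=20):
    """Align 2-D point set src to dst: alternate nearest-neighbour matching
    with the closed-form SVD (Kabsch) solve for rotation + translation."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        cur = src @ R.T + t
        # nearest dst point for every transformed src point
        nn = dst[np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)]
        mu_c, mu_n = cur.mean(0), nn.mean(0)
        H = (cur - mu_c).T @ (nn - mu_n)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T                       # incremental rotation (reflection-safe)
        dt = mu_n - dR @ mu_c                     # incremental translation
        R, t = dR @ R, dR @ t + dt                # compose with running estimate
    return R, t
```

The brute-force nearest-neighbour search is O(N²); a real embedded implementation would use a k-d tree.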
(This article belongs to the Section Remote Sensors)

20 pages, 3710 KiB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Viewed by 352
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment.
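PCA-based multi-category feature extraction typically classifies each point by the eigenvalue spectrum of its neighborhood covariance: one dominant axis indicates an edge, two indicate a plane. A hedged sketch (the 5% ratio threshold is an illustrative assumption, not the paper's value):

```python
import numpy as np

def classify_neighborhood(neigh, thresh=0.05):
    """Classify a point's k-neighborhood by PCA of its covariance.
    With eigenvalues l1 <= l2 <= l3: one dominant axis -> 'linear' (edge),
    two dominant axes -> 'planar' (surface), otherwise 'scattered'."""
    l1, l2, l3 = np.sort(np.linalg.eigvalsh(np.cov(neigh.T)))
    if l2 / (l3 + 1e-12) < thresh:   # variance concentrated on one axis
        return "linear"
    if l1 / (l2 + 1e-12) < thresh:   # variance concentrated on two axes
        return "planar"
    return "scattered"
```

Points labeled "scattered" would be the noisy, weakly structured ones that the paper's pipeline filters out.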
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)

18 pages, 16696 KiB  
Technical Note
LIO-GC: LiDAR Inertial Odometry with Adaptive Ground Constraints
by Wenwen Tian, Juefei Wang, Puwei Yang, Wen Xiao and Sisi Zlatanova
Remote Sens. 2025, 17(14), 2376; https://doi.org/10.3390/rs17142376 - 10 Jul 2025
Viewed by 422
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) techniques are commonly applied in high-precision mapping and positioning for mobile platforms. However, the vertical resolution limitations of multi-beam spinning LiDAR sensors can significantly impair vertical estimation accuracy. This challenge is accentuated with cost-effective spinning LiDARs that have fewer scan lines, where vertical features are sparse. To address this issue, we introduce LIO-GC, which effectively extracts ground features and integrates them into a factor graph to improve vertical accuracy. Unlike conventional methods relying on geometric features for ground plane segmentation, our approach leverages a self-adaptive strategy that accounts for the uneven point cloud distribution and the inconsistency caused by ground fluctuations. By jointly optimizing laser range factors, ground feature constraints, and loop closure factors in a graph optimization framework, our method surpasses current approaches, demonstrating superior performance on open-source and newly collected datasets.
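A ground constraint starts from a plane fit over candidate ground points. A minimal least-squares sketch (illustrative only; the paper's self-adaptive segmentation and factor-graph integration are more involved):

```python
import numpy as np

def fit_ground_plane(pts):
    """Least-squares plane z = a*x + b*y + c over candidate ground points."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef                                   # (a, b, c)

def ground_inliers(pts, coef, tol=0.05):
    """Points within tol metres (vertically) of the fitted plane."""
    resid = pts[:, 2] - (pts[:, :2] @ coef[:2] + coef[2])
    return np.abs(resid) < tol
```

In a factor graph, the fitted plane parameters would feed a unary constraint that pins the pose's height and tilt against the ground.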

32 pages, 2740 KiB  
Article
Vision-Based Navigation and Perception for Autonomous Robots: Sensors, SLAM, Control Strategies, and Cross-Domain Applications—A Review
by Eder A. Rodríguez-Martínez, Wendy Flores-Fuentes, Farouk Achakir, Oleg Sergiyenko and Fabian N. Murrieta-Rico
Eng 2025, 6(7), 153; https://doi.org/10.3390/eng6070153 - 7 Jul 2025
Viewed by 972
Abstract
Camera-centric perception has matured into a cornerstone of modern autonomy, from self-driving cars and factory cobots to underwater and planetary exploration. This review synthesizes more than a decade of progress in vision-based robotic navigation through an engineering lens, charting the full pipeline from sensing to deployment. We first examine the expanding sensor palette—monocular and multi-camera rigs, stereo and RGB-D devices, LiDAR–camera hybrids, event cameras, and infrared systems—highlighting the complementary operating envelopes and the rise of learning-based depth inference. The advances in visual localization and mapping are then analyzed, contrasting sparse and dense SLAM approaches, as well as monocular, stereo, and visual–inertial formulations. Additional topics include loop closure, semantic mapping, and LiDAR–visual–inertial fusion, which enables drift-free operation in dynamic environments. Building on these foundations, we review the navigation and control strategies, spanning classical planning, reinforcement and imitation learning, hybrid topological–metric memories, and emerging visual language guidance. Application case studies—autonomous driving, industrial manipulation, autonomous underwater vehicles, planetary rovers, aerial drones, and humanoids—demonstrate how tailored sensor suites and algorithms meet domain-specific constraints. Finally, the future research trajectories are distilled: generative AI for synthetic training data and scene completion; high-density 3D perception with solid-state LiDAR and neural implicit representations; event-based vision for ultra-fast control; and human-centric autonomy in next-generation robots. By providing a unified taxonomy, a comparative analysis, and engineering guidelines, this review aims to inform researchers and practitioners designing robust, scalable, vision-driven robotic systems.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

30 pages, 14473 KiB  
Article
VOX-LIO: An Effective and Robust LiDAR-Inertial Odometry System Based on Surfel Voxels
by Meijun Guo, Yonghui Liu, Yuhang Yang, Xiaohai He and Weimin Zhang
Remote Sens. 2025, 17(13), 2214; https://doi.org/10.3390/rs17132214 - 27 Jun 2025
Viewed by 379
Abstract
Accurate and robust pose estimation is critical for simultaneous localization and mapping (SLAM), and multi-sensor fusion has demonstrated efficacy with significant potential for robotic applications. This study presents VOX-LIO, an effective LiDAR-inertial odometry system. To improve both robustness and accuracy, we propose an adaptive hash voxel-based point cloud map management method that incorporates surfel features and planarity. This method enhances the efficiency of point-to-surfel association by leveraging long-term observed surfels. It facilitates the incremental refinement of surfel features within classified surfel voxels, thereby enabling precise and efficient map updates. Furthermore, we develop a weighted fusion approach that integrates LiDAR and IMU measurements on the manifold, effectively compensating for motion distortion, particularly under high-speed LiDAR motion. We validate our system through experiments conducted on both public datasets and our mobile robot platforms. The results demonstrate that VOX-LIO outperforms the existing methods, effectively handling challenging environments while minimizing computational cost.
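A surfel-voxel map of the kind described can be sketched as a hash of per-voxel running point statistics, from which a centroid, normal, and planarity score are derived on demand. This is an illustrative reconstruction under simplified assumptions, not the VOX-LIO implementation:

```python
import numpy as np
from collections import defaultdict

class SurfelVoxelMap:
    """Hash-voxel map: each voxel accumulates point count, sum, and sum of
    outer products, so its surfel can be refined incrementally as scans arrive."""
    def __init__(self, voxel=0.5):
        self.voxel = voxel
        self.stats = defaultdict(lambda: [0, np.zeros(3), np.zeros((3, 3))])

    def insert(self, pts):
        keys = np.floor(pts / self.voxel).astype(int)
        for key, p in zip(map(tuple, keys), pts):
            s = self.stats[key]
            s[0] += 1
            s[1] += p
            s[2] += np.outer(p, p)

    def surfel(self, key):
        """Centroid, normal (smallest-eigenvalue direction), planarity in [0,1]."""
        n, s, ss = self.stats[key]
        mu = s / n
        cov = ss / n - np.outer(mu, mu)
        w, v = np.linalg.eigh(cov)                # ascending eigenvalues
        planarity = 1.0 - w[0] / (w.sum() + 1e-12)
        return mu, v[:, 0], planarity
```

Keeping raw sums rather than finished planes is what makes incremental refinement cheap: inserting a point is O(1) and the surfel is recomputed only when queried.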

25 pages, 21149 KiB  
Article
Enhancing Conventional Land Surveying for Cadastral Documentation in Romania with UAV Photogrammetry and SLAM
by Lucian O. Dragomir, Cosmin Alin Popescu, Mihai V. Herbei, George Popescu, Roxana Claudia Herbei, Tudor Salagean, Simion Bruma, Catalin Sabou and Paul Sestras
Remote Sens. 2025, 17(13), 2113; https://doi.org/10.3390/rs17132113 - 20 Jun 2025
Viewed by 600
Abstract
This study presents an integrated surveying methodology for efficient and accurate cadastral documentation, combining UAV photogrammetry, SLAM-based terrestrial and aerial scanning, and conventional geodetic measurements. Designed to be scalable across various cadastral and planning contexts, the workflow was tested in Charlottenburg, Romania’s only circular heritage village. The approach addresses challenges in built environments where traditional total station or GNSS techniques face limitations due to obstructed visibility and complex architectural geometries. The SLAM system was initially deployed in mobile scanning mode using a backpack configuration for ground-level data acquisition, and was later mounted on a UAV to capture building sides and areas inaccessible from the main road. The results demonstrate that the integration of aerial and terrestrial data acquisition enables precise building footprint extraction, with a reported RMSE of 0.109 m between the extracted contours and ground-truth total station measurements. The final cadastral outputs are fully compatible with GIS and CAD systems, supporting efficient land registration, urban planning, and historical site documentation. The findings highlight the method’s applicability for modernizing cadastral workflows, particularly in dense or irregularly structured areas, offering a practical, accurate, and time-saving solution adaptable to both national and international land administration needs. Beyond the combination of known technologies, the innovation lies in the practical integration of terrestrial and aerial SLAM (dual SLAM) with RTK UAV workflows under real-world constraints, offering a field-validated solution for complex cadastral scenarios where traditional methods are limited.

25 pages, 40577 KiB  
Article
Laser SLAM Matching Localization Method for Subway Tunnel Point Clouds
by Yi Zhang, Feiyang Dong, Qihao Sun and Weiwei Song
Sensors 2025, 25(12), 3681; https://doi.org/10.3390/s25123681 - 12 Jun 2025
Cited by 1 | Viewed by 426
Abstract
In geometrically self-similar environments such as subway tunnels, scan-to-map registration depends heavily on a correct initial pose estimate; otherwise mismatching is prone to occur, which limits the application of SLAM (Simultaneous Localization and Mapping) in tunnels. We propose a novel coarse-to-fine registration strategy that includes geometric feature extraction and a keyframe-based pose optimization model. The method involves initial feature point set acquisition through point distance calculations, followed by the extraction of line and plane features, and convex hull features based on the normal vector’s change rate. Coarse registration is achieved through rotation and translation using the three types of feature sets, with the resulting pose serving as the initial value for fine registration via point-to-plane ICP. The algorithm’s accuracy and efficiency are validated using Innovusion LiDAR scans of a subway tunnel, achieving a single-frame point cloud registration accuracy of 3 cm within 0.7 s, significantly improving upon traditional registration algorithms. The study concludes that the proposed method effectively enhances SLAM’s applicability in challenging tunnel environments, ensuring high registration accuracy and efficiency.
(This article belongs to the Section Navigation and Positioning)

26 pages, 6784 KiB  
Article
FAEM: Fast Autonomous Exploration for UAV in Large-Scale Unknown Environments Using LiDAR-Based Mapping
by Xu Zhang, Jiqiang Wang, Shuwen Wang, Mengfei Wang, Tao Wang, Zhuowen Feng, Shibo Zhu and Enhui Zheng
Drones 2025, 9(6), 423; https://doi.org/10.3390/drones9060423 - 10 Jun 2025
Cited by 1 | Viewed by 621
Abstract
Autonomous exploration is a fundamental challenge for various applications of unmanned aerial vehicles (UAVs). To enhance exploration efficiency in large-scale unknown environments, we propose a Fast Autonomous Exploration Framework (FAEM) designed to enable efficient autonomous exploration and real-time mapping by quadrotor UAVs in unknown environments. By employing a hierarchical exploration strategy that integrates geometry-constrained, occlusion-free ellipsoidal viewpoint generation with a global-guided kinodynamic topological path searching method, the framework identifies a global path that visits high-gain viewpoints and generates a corresponding highly maneuverable, energy-efficient flight trajectory. This integrated approach achieves an effective balance between exploration efficiency and computational cost. Furthermore, to ensure trajectory continuity and stability during real-world execution, we propose an adaptive dynamic replanning strategy incorporating dynamic starting point selection and real-time replanning. Experimental results demonstrate FAEM’s superior performance compared to typical and state-of-the-art methods. The proposed method was successfully validated on an autonomous quadrotor platform equipped with LiDAR navigation. The UAV achieves coverage of 8957–13,042 m³ and increases exploration speed by 23.4% compared to the state-of-the-art FUEL method, demonstrating its effectiveness in large-scale, complex real-world environments.
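Exploration frameworks of this kind are ultimately driven by frontier cells (free cells bordering unknown space), from which candidate viewpoints are generated. A minimal occupancy-grid frontier detector, as an illustrative building block rather than FAEM's actual viewpoint generator:

```python
import numpy as np

def frontier_cells(grid):
    """Return a boolean mask of frontier cells in an occupancy grid where
    0 = free, 1 = occupied, -1 = unknown: free cells with an unknown neighbour."""
    h, w = grid.shape
    out = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if grid[i, j] != 0:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and grid[ni, nj] == -1:
                    out[i, j] = True
                    break
    return out
```

A planner would cluster these cells, place viewpoints that observe them, and score viewpoints by expected information gain versus travel cost.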

23 pages, 4909 KiB  
Article
Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF
by Xinjie Zhu, Xiaoshun Zhao, Jingyan Liu, Weijun Feng and Xiaofei Fan
Agronomy 2025, 15(6), 1373; https://doi.org/10.3390/agronomy15061373 - 3 Jun 2025
Viewed by 758
Abstract
To address the challenges of low pesticide utilization, insufficient automation, and health risks in orchard plant protection, we developed an autonomous spraying vehicle using ArduPilot firmware and a robot operating system (ROS). The system tackles orchard navigation hurdles, including global navigation satellite system (GNSS) signal obstruction, light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) error accumulation, and lighting-limited visual positioning. A key innovation is the integration of an extended Kalman filter (EKF) to dynamically fuse T265 visual odometry, inertial measurement unit (IMU), and GPS data, overcoming single-sensor limitations and enhancing positioning robustness in complex environments. Additionally, the study optimizes PID controller derivative parameters for tracked chassis, improving acceleration/deceleration control smoothness. The system, composed of Pixhawk 4, Raspberry Pi 4B, Silan S2L LIDAR, T265 visual odometry, and a Quectel EC200A 4G module, enables autonomous path planning, real-time obstacle avoidance, and multi-mission navigation. Indoor/outdoor tests and field experiments in Sun Village Orchard validated its autonomous cruising and obstacle avoidance capabilities under real-world orchard conditions, demonstrating feasibility for intelligent plant protection.
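The fusion core is a Kalman predict/update cycle that blends odometry dead reckoning with absolute position fixes. A deliberately reduced sketch (linear, position-only, identity measurement model; the vehicle's EKF over visual odometry, IMU, and GPS is higher-dimensional and nonlinear):

```python
import numpy as np

def kf_step(x, P, u, z, Q, R, dt):
    """One predict/update cycle of a 2-D position-only Kalman filter:
    x = [px, py], u = odometry velocity, z = absolute position fix."""
    x = x + u * dt                     # predict with odometry
    P = P + Q                          # grow uncertainty by process noise
    K = P @ np.linalg.inv(P + R)       # gain (H = I: fix observes position)
    x = x + K @ (z - x)                # blend prediction with the fix
    P = (np.eye(2) - K) @ P
    return x, P
```

When prediction uncertainty P is large relative to measurement noise R, the gain approaches identity and the estimate snaps to the fix; when P is small, the fix is nearly ignored.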
(This article belongs to the Special Issue Smart Pest Control for Building Farm Resilience)

29 pages, 4560 KiB  
Article
GNSS-RTK-Based Navigation with Real-Time Obstacle Avoidance for Low-Speed Micro Electric Vehicles
by Nuksit Noomwongs, Kanin Kiataramgul, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
Machines 2025, 13(6), 471; https://doi.org/10.3390/machines13060471 - 29 May 2025
Viewed by 535
Abstract
Autonomous navigation for micro electric vehicles (micro EVs) operating in semi-structured environments—such as university campuses and industrial parks—requires solutions that are cost-effective, low in complexity, and robust. Traditional autonomous systems often rely on high-definition maps, multi-sensor fusion, or vision-based SLAM, which demand expensive sensors and high computational power. These approaches are often impractical for micro EVs with limited onboard resources. To address this gap, a real-world autonomous navigation system is presented, combining RTK-GNSS and 2D LiDAR with a real-time trajectory scoring algorithm. This configuration enables accurate path following and obstacle avoidance without relying on complex mapping or multi-sensor fusion. This study presents the development and experimental validation of a low-speed autonomous navigation system for a micro electric vehicle based on GNSS-RTK localization and real-time obstacle avoidance. The research achieved the following three primary objectives: (1) the development of a low-level control system for steering, acceleration, and braking; (2) the design of a high-level navigation controller for autonomous path following using GNSS data; and (3) the implementation of real-time obstacle avoidance capabilities. The system employs a scored predicted trajectory algorithm that simultaneously optimizes path-following accuracy and obstacle evasion. A Toyota COMS micro EV was modified for autonomous operation and tested on a closed-loop campus track. Experimental results demonstrated an average lateral deviation of 0.07 m at 10 km/h and 0.12 m at 15 km/h, with heading deviations of approximately 3° and 4°, respectively. Obstacle avoidance tests showed safe maneuvering with a minimum clearance of 1.2 m from obstacles, as configured. The system proved robust against minor GNSS signal degradation, maintaining precise navigation without reliance on complex map building or inertial sensing. The results confirm that GNSS-RTK-based navigation combined with minimal sensing provides an effective and practical solution for autonomous driving in semi-structured environments.
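The scored-predicted-trajectory idea can be sketched as rolling out constant-curvature arcs from a unicycle model, discarding arcs that violate a clearance radius, and picking the arc whose endpoint lands closest to the goal. All parameters below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def best_arc(pose, goal, obstacles, v=1.0, dt=0.1, horizon=20,
             curvatures=np.linspace(-1.0, 1.0, 21), r_safe=0.3):
    """Score candidate constant-curvature arcs from a unicycle model.
    pose = (x, y, heading); obstacles is an (M, 2) array of 2-D points.
    Returns (curvature, endpoint-to-goal cost), or (None, inf) if all collide."""
    best_k, best_cost = None, np.inf
    for k in curvatures:
        x, y, th = pose
        ok = True
        for _ in range(horizon):
            x += v * np.cos(th) * dt          # forward Euler rollout
            y += v * np.sin(th) * dt
            th += v * k * dt
            if obstacles.size and np.min(np.hypot(obstacles[:, 0] - x,
                                                  obstacles[:, 1] - y)) < r_safe:
                ok = False                    # clearance violated: discard arc
                break
        if ok:
            cost = np.hypot(goal[0] - x, goal[1] - y)
            if cost < best_cost:
                best_k, best_cost = k, cost
    return best_k, best_cost
```

A practical scorer would combine several terms (path deviation, clearance margin, steering effort) rather than a single endpoint distance.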
(This article belongs to the Section Vehicle Engineering)

25 pages, 5180 KiB  
Article
An Improved SLAM Algorithm for Substation Inspection Robots Based on 3D Lidar and Visual Information Fusion
by Yicen Liu and Songhai Fan
Energies 2025, 18(11), 2797; https://doi.org/10.3390/en18112797 - 27 May 2025
Viewed by 463
Abstract
Current substation inspection robots mainly use Lidar for localization and map building. However, laser SLAM suffers localization errors in scenes with repetitive or missing structural features, and the maps it builds offer only sparse road information for inspection-robot navigation, which hinders interpretation of the road scene. This paper therefore fuses 3D Lidar and visual information into a SLAM algorithm for substation inspection robots that mitigates these localization errors and improves accuracy. First, to recover the scale of monocular visual localization, the algorithm combines 3D Lidar and visual information to compute the true spatial position of image feature points. Second, the laser and visual poses are interpolated to correct the point-cloud distortion caused by the motion of the Lidar. Then, a pose-adaptive selection algorithm substitutes the visual pose for the laser inter-frame pose in certain degraded regions to improve robustness. Finally, a colored laser point-cloud map of the substation is constructed to provide richer road-environment information for the navigation of the inspection robot. The experimental results show that the localization accuracy and mapping quality of the proposed VO-Lidar SLAM algorithm surpass current laser SLAM algorithms and verify the applicability of the colored laser point-cloud map in substation environments.
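The first step, giving monocular image features a true scale from Lidar, amounts to projecting the Lidar cloud into the image and borrowing the depth of the nearest projected point. A hedged sketch assuming a pinhole camera and Lidar points already transformed into the camera frame:

```python
import numpy as np

def feature_depth(features_uv, lidar_pts_cam, K, max_px=5.0):
    """Assign each 2-D image feature the depth of the nearest projected
    LiDAR point; returns NaN where no point projects within max_px pixels."""
    pts = lidar_pts_cam[lidar_pts_cam[:, 2] > 0]     # keep points in front of camera
    uvw = pts @ K.T                                  # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = ((uv - f) ** 2).sum(1)
        j = np.argmin(d2)
        if d2[j] <= max_px ** 2:
            depths[i] = pts[j, 2]
    return depths
```

A production version would interpolate among several neighbouring projections and reject depth discontinuities instead of taking a single nearest point.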

28 pages, 8922 KiB  
Article
Multi-Robot Cooperative Simultaneous Localization and Mapping Algorithm Based on Sub-Graph Partitioning
by Wan Xu, Yanliang Chen, Shijie Liu, Ao Nie and Rupeng Chen
Sensors 2025, 25(9), 2953; https://doi.org/10.3390/s25092953 - 7 May 2025
Viewed by 741
Abstract
To address the challenges in multi-robot collaborative SLAM, including excessive redundant computations and low processing efficiency in candidate loop closure selection during front-end loop detection, as well as high computational complexity and long iteration times due to global pose optimization in the back-end, this paper introduces several key improvements. First, a global matching and candidate loop selection strategy is incorporated into the front-end loop detection module, leveraging both LiDAR point clouds and visual features to achieve cross-robot loop detection, effectively mitigating computational redundancy and reducing false matches in collaborative multi-robot systems. Second, an improved distributed robust pose graph optimization algorithm is proposed in the back-end module. By introducing a robust cost function to filter out erroneous loop closures and employing a subgraph optimization strategy during iterative optimization, the proposed approach enhances convergence speed and solution quality, thereby reducing uncertainty in multi-robot pose association. Experimental results demonstrate that the proposed method significantly improves computational efficiency and localization accuracy. Specifically, in front-end loop detection, the proposed algorithm achieves an F1-score improvement of approximately 8.5–51.5% compared to other methods. In back-end optimization, it outperforms traditional algorithms in terms of both convergence speed and optimization accuracy. In terms of localization accuracy, the proposed method achieves an improvement of approximately 32.8% over other open source algorithms.
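A robust cost function in the back-end down-weights gross outlier loop closures so they cannot drag the pose graph. A one-dimensional analogue using Huber-weighted iteratively reweighted least squares (illustrative only; the paper's distributed optimizer operates on full SE(3) poses):

```python
import numpy as np

def huber_weight(r, delta=1.0):
    """IRLS weight for the Huber cost: quadratic near zero, linear beyond
    delta, so large residuals (outlier loop closures) get weight delta/|r|."""
    a = np.abs(r)
    return np.where(a <= delta, 1.0, delta / a)

def robust_mean(z, delta=1.0, iters=20):
    """1-D stand-in for robust pose-graph optimization: estimate a value
    from measurements containing outliers via Huber IRLS."""
    x = np.median(z)                       # robust initialization
    for _ in range(iters):
        w = huber_weight(z - x, delta)
        x = np.sum(w * z) / np.sum(w)      # weighted least-squares solve
    return x
```

The same weighting appears per-edge in a pose graph: each loop-closure residual is scaled by its Huber weight before the Gauss-Newton step, which is why a single bad closure no longer dominates the solution.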
(This article belongs to the Section Sensors and Robotics)

16 pages, 11784 KiB  
Article
Application of Unmanned Aerial Vehicle and Airborne Light Detection and Ranging Technologies to Identifying Terrain Obstacles and Designing Access Solutions for the Interior Parts of Forest Stands
by Petr Hrůza, Tomáš Mikita and Nikola Žižlavská
Forests 2025, 16(5), 729; https://doi.org/10.3390/f16050729 - 24 Apr 2025
Viewed by 475
Abstract
We applied UAV (Unmanned Aerial Vehicle) and ALS (Airborne Laser Scanning) remote sensing methods to identify terrain obstacles encountered during timber extraction in the skidding process with the aim of proposing accessibility solutions to the inner parts of forest stands using skidding trails. [...] Read more.
We applied UAV (Unmanned Aerial Vehicle) and ALS (Airborne Laser Scanning) remote sensing methods to identify terrain obstacles encountered during timber extraction in the skidding process with the aim of proposing accessibility solutions to the inner parts of forest stands using skidding trails. At the Vítovický žleb site, located east of Brno in the South Moravian Region of the Czech Republic, we analysed the accuracy of digital terrain models (DTMs) created from UAV LiDAR (Light Detection and Ranging), RGB (Red–Green–Blue) UAV, ALS data taken on site and publicly available LiDAR data DMR 5G (Digital Model of Relief of the Czech Republic, 5th Generation, based on airborne laser scanning, providing pre-classified ground points with an average density of 1 point/m2). UAV data were obtained using two types of drones: a DJI Mavic 2 mounted with an RGB photogrammetric camera and a GeoSLAM Horizon laser scanner on a DJI M600 Pro hexacopter. We achieved the best accuracy with UAV technologies, with an average deviation of 0.06 m, compared to 0.20 m and 0.71 m for ALS and DMR 5G, respectively. The RMSE (Root Mean Square Error) values further confirm the differences in accuracy, with UAV-based models reaching as low as 0.71 m compared to over 1.0 m for ALS and DMR 5G. The results demonstrated that UAVs are well-suited for detailed analysis of rugged terrain morphology and obstacle identification during timber extraction, potentially replacing physical terrain surveys for timber extraction planning. Meanwhile, ALS and DMR 5G data showed significant potential for use in planning the placement of skidding trails and determining the direction and length of timber extraction from logging sites to forest roads, primarily due to their ability to cover large areas effectively. 
Differences in the analysis results obtained using GIS (Geographic Information System) cost-surface solutions applied to DTMs derived from ALS and DMR 5G data were evident on logging sites with terrain obstacles, where the site-specific ALS data proved more precise. While DMR 5G is based on ALS data, its generalised nature results in lower accuracy, making site-specific ALS data preferable for analysing rugged terrain and planning timber extraction. However, DMR 5G remains suitable for more uniform terrain without obstacles. We therefore recommend combining UAV and ALS technologies for terrain with obstacles, as we found this approach optimal for efficient planning of the logging-transport process. Full article
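The average deviation and RMSE figures reported in this abstract follow the standard definitions for DTM accuracy assessment against reference checkpoints. A minimal sketch in Python; the function name and the checkpoint values are illustrative assumptions, not taken from the study:

```python
import numpy as np

def dtm_accuracy(dtm_z, ref_z):
    """Compare DTM elevations against reference checkpoint elevations.

    dtm_z, ref_z: sequences of elevations (m) at the same checkpoint
    locations. Returns (mean absolute deviation, RMSE), the two accuracy
    metrics used to compare the UAV, ALS, and DMR 5G models.
    """
    diff = np.asarray(dtm_z, dtype=float) - np.asarray(ref_z, dtype=float)
    mean_dev = float(np.mean(np.abs(diff)))    # average absolute deviation
    rmse = float(np.sqrt(np.mean(diff ** 2)))  # root mean square error
    return mean_dev, rmse

# Hypothetical checkpoints for one DTM (values invented for illustration)
dev, rmse = dtm_accuracy([251.03, 249.98, 250.51], [251.00, 250.05, 250.45])
```

Note that RMSE penalises large outliers more heavily than the mean absolute deviation, which is why the two metrics can rank models differently on rugged terrain.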
(This article belongs to the Section Forest Operations and Engineering)
20 pages, 41816 KiB  
Article
The 3D Gaussian Splatting SLAM System for Dynamic Scenes Based on LiDAR Point Clouds and Vision Fusion
by Yuquan Zhang, Guangan Jiang, Mingrui Li and Guosheng Feng
Appl. Sci. 2025, 15(8), 4190; https://doi.org/10.3390/app15084190 - 10 Apr 2025
Viewed by 2859
Abstract
This paper presents a novel 3D Gaussian Splatting (3DGS)-based Simultaneous Localization and Mapping (SLAM) system that integrates Light Detection and Ranging (LiDAR) and vision data to enhance dynamic scene tracking and reconstruction. Existing 3DGS systems face challenges in sensor fusion and handling dynamic objects. To address these, we introduce a hybrid uncertainty-based 3D segmentation method that leverages uncertainty estimation and 3D object detection, effectively removing dynamic points and improving static map reconstruction. Our system also employs a sliding window-based keyframe fusion strategy that reduces computational load while maintaining accuracy. By incorporating a novel dynamic rendering loss function and pruning techniques, we suppress artifacts such as ghosting and ensure real-time operation in complex environments. Extensive experiments show that our system outperforms existing methods in dynamic object removal and overall reconstruction quality. The key innovations of our work lie in its integration of hybrid uncertainty-based segmentation, dynamic rendering loss functions, and an optimized sliding window strategy, which collectively enhance robustness and efficiency in dynamic scene reconstruction. This approach offers a promising solution for real-time robotic applications, including autonomous navigation and augmented reality. Full article
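The dynamic-point removal step described above can be illustrated with a simplified sketch: LiDAR points that fall inside 3D bounding boxes reported by an object detector are dropped before static map reconstruction. This is only a schematic of the idea, the paper's hybrid uncertainty-based segmentation is more involved, and the function name and axis-aligned box format are assumptions:

```python
import numpy as np

def mask_dynamic_points(points, boxes):
    """Drop LiDAR points inside axis-aligned 3D boxes of dynamic objects.

    points: (N, 3) array of XYZ coordinates from one LiDAR scan.
    boxes: list of (min_xyz, max_xyz) pairs, e.g. from a 3D object detector.
    Returns the static subset of points kept for map reconstruction.
    """
    keep = np.ones(len(points), dtype=bool)
    for lo, hi in boxes:
        # A point is inside a box if all three coordinates lie within bounds.
        inside = np.all((points >= lo) & (points <= hi), axis=1)
        keep &= ~inside
    return points[keep]

# Hypothetical scan with one detected dynamic object (e.g. a pedestrian)
scan = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [4.5, 5.0, 5.5]])
static = mask_dynamic_points(scan, [(np.array([4.0, 4.0, 4.0]),
                                     np.array([6.0, 6.0, 6.0]))])
```

In practice, systems like the one described combine such geometric masks with per-point uncertainty estimates, since detector boxes alone can miss partially occluded or unmodelled dynamic objects.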
(This article belongs to the Special Issue Trends and Prospects for Wireless Sensor Networks and IoT)