Search Results (62)

Search Parameters:
Keywords = LiDAR-Inertial odometry

22 pages, 3921 KB  
Article
Tightly Coupled LiDAR-Inertial Odometry for Autonomous Driving via Self-Adaptive Filtering and Factor Graph Optimization
by Weiwei Lyu, Haoting Li, Shuanggen Jin, Haocai Huang, Xiaojuan Tian, Yunlong Zhang, Zheyuan Du and Jinling Wang
Machines 2025, 13(11), 977; https://doi.org/10.3390/machines13110977 - 23 Oct 2025
Viewed by 496
Abstract
Simultaneous Localization and Mapping (SLAM) has become a critical tool for fully autonomous driving. However, current methods suffer from inefficient data utilization and degraded navigation performance in complex and unknown environments. In this paper, an accurate and tightly coupled method of LiDAR-inertial odometry is proposed. First, a self-adaptive voxel grid filter is developed to dynamically downsample the original point clouds based on environmental feature richness, aiming to balance navigation accuracy and real-time performance. Second, keyframe factors are selected based on thresholds of translation distance, rotation angle, and time interval and then introduced into the factor graph to improve global consistency. Additionally, high-quality Global Navigation Satellite System (GNSS) factors are selected and incorporated into the factor graph through linear interpolation, thereby improving the navigation accuracy in complex and unknown environments. The proposed method is evaluated on the KITTI dataset over various scales and environments. Results show that the proposed method outperforms other methods such as ALOAM, LIO-SAM, and SC-LeGO-LOAM. In urban scenes especially, the trajectory accuracy of the proposed method is improved by 33.13%, 57.56%, and 58.4%, respectively, illustrating excellent navigation and positioning capabilities. Full article
(This article belongs to the Section Vehicle Engineering)
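The keyframe gating this abstract describes (thresholds on translation distance, rotation angle, and time interval) can be sketched as follows; the threshold values and the 4x4 homogeneous pose representation are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def is_keyframe(pose, last_kf_pose, t, last_kf_t,
                d_thresh=1.0, ang_thresh=np.deg2rad(10.0), dt_thresh=1.0):
    # pose, last_kf_pose: 4x4 homogeneous transforms; thresholds are
    # illustrative, not the paper's values.
    rel = np.linalg.inv(last_kf_pose) @ pose
    dist = np.linalg.norm(rel[:3, 3])  # translation distance since last keyframe
    ang = np.arccos(np.clip((np.trace(rel[:3, :3]) - 1.0) / 2.0, -1.0, 1.0))  # rotation angle
    return bool(dist > d_thresh or ang > ang_thresh or (t - last_kf_t) > dt_thresh)
```

A new keyframe factor would be added to the graph whenever this gate fires.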

15 pages, 2133 KB  
Article
A LiDAR SLAM and Visual-Servoing Fusion Approach to Inter-Zone Localization and Navigation in Multi-Span Greenhouses
by Chunyang Ni, Jianfeng Cai and Pengbo Wang
Agronomy 2025, 15(10), 2380; https://doi.org/10.3390/agronomy15102380 - 12 Oct 2025
Viewed by 728
Abstract
Greenhouse automation has become increasingly important in facility agriculture, yet multi-span glass greenhouses pose both scientific and practical challenges for autonomous mobile robots. Scientifically, solid-state LiDAR is vulnerable to glass-induced reflections, sparse geometric features, and narrow vertical fields of view, all of which undermine Simultaneous Localization and Mapping (SLAM)-based localization and mapping. Practically, large-scale crop production demands accurate inter-row navigation and efficient rail switching to reduce labor intensity and ensure stable operations. To address these challenges, this study presents an integrated localization-navigation framework for mobile robots in multi-span glass greenhouses. In the intralogistics area, the LiDAR Inertial Odometry-Simultaneous Localization and Mapping (LIO-SAM) pipeline was enhanced with reflection filtering, adaptive feature-extraction thresholds, and improved loop-closure detection, generating high-fidelity three-dimensional maps that were converted into two-dimensional occupancy grids for A-Star global path planning and Dynamic Window Approach (DWA) local control. In the cultivation area, where rails intersect with internal corridors, YOLOv8n-based rail-center detection combined with a pure-pursuit controller established a vision-servo framework for lateral rail switching and inter-row navigation. Field experiments demonstrated that the optimized mapping reduced the mean relative error by 15%. At a navigation speed of 0.2 m/s, the robot achieved a mean lateral deviation of 4.12 cm and a heading offset of 1.79°, while the vision-servo rail-switching system improved efficiency by 25.2%. These findings confirm the proposed framework’s accuracy, robustness, and practical applicability, providing strong support for intelligent facility-agriculture operations. Full article
(This article belongs to the Section Precision and Digital Agriculture)
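The pure-pursuit controller used here for rail switching follows a standard geometric steering law; this is a minimal generic sketch, where the wheelbase value and the 2D pose convention are assumptions rather than the paper's robot parameters:

```python
import math

def pure_pursuit_steer(x, y, yaw, gx, gy, wheelbase=0.6):
    # (gx, gy): lookahead point on the detected rail centerline.
    alpha = math.atan2(gy - y, gx - x) - yaw        # bearing to goal in the vehicle frame
    ld = math.hypot(gx - x, gy - y)                 # lookahead distance
    return math.atan2(2.0 * wheelbase * math.sin(alpha), ld)  # front-wheel angle
```

With the YOLOv8n detector providing the rail-center point, this angle would drive the steering actuator each control cycle.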

22 pages, 5743 KB  
Article
Lightweight Road Adaptive Path Tracking Based on Soft Actor–Critic RL Method
by Yubo Weng and Jinhong Sun
Sensors 2025, 25(19), 6079; https://doi.org/10.3390/s25196079 - 2 Oct 2025
Viewed by 524
Abstract
We propose a speed-adaptive robot accurate path-tracking framework based on the soft actor–critic (SAC) and Stanley methods (STANLY_ASAC). First, the Lidar–Inertial Odometry Simultaneous Localization and Mapping (LIO-SLAM) method is used to map the environment and the LIO-localization framework is adopted to achieve real-time positioning and output the robot pose at 100 Hz. Next, the Rapidly exploring Random Tree (RRT) algorithm is employed for global path planning. On this basis, we integrate an improved A* algorithm for local obstacle avoidance and apply a gradient descent smoothing algorithm to generate a reference path that satisfies the robot’s kinematic constraints. Then, a network classification model based on U-Net is used to classify common road surfaces and generate classification results that significantly compensate for tracking accuracy errors caused by incorrect road surface coefficients. Next, we leverage the powerful learning capability of adaptive SAC (ASAC) to adaptively adjust the vehicle’s acceleration and lateral deviation gain according to the road and vehicle states. Vehicle acceleration is used to generate the real-time tracking speed, and the lateral deviation gain is used to calculate the front wheel angle via the Stanley tracking algorithm. Finally, we deploy the algorithm on a mobile robot and test its path-tracking performance in different scenarios. The results show that the proposed path-tracking algorithm can accurately follow the generated path. Full article
(This article belongs to the Section Sensors and Robotics)
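The Stanley law mentioned in this abstract combines heading error with an arctangent of the gain-scaled cross-track error. A generic sketch follows; `k` is the lateral deviation gain that the paper adapts online with ASAC, so the fixed value below is purely illustrative:

```python
import math

def stanley_steer(heading_err, cross_track_err, speed, k=1.0, eps=1e-6):
    # Classic Stanley steering law: heading error plus the arctangent of the
    # gain-scaled cross-track error; eps guards against division at zero speed.
    return heading_err + math.atan2(k * cross_track_err, speed + eps)
```

The ASAC policy would replace the constant `k` (and supply the acceleration command) based on the U-Net road-surface classification and vehicle state.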

26 pages, 10272 KB  
Article
Research on Disaster Environment Map Fusion Construction and Reinforcement Learning Navigation Technology Based on Air–Ground Collaborative Multi-Heterogeneous Robot Systems
by Hongtao Tao, Wen Zhao, Li Zhao and Junlong Wang
Sensors 2025, 25(16), 4988; https://doi.org/10.3390/s25164988 - 12 Aug 2025
Viewed by 1051
Abstract
The primary challenge that robots face in disaster rescue is to precisely and efficiently construct disaster maps and achieve autonomous navigation. This paper proposes a method for air–ground collaborative map construction. It utilizes the flight capability of an unmanned aerial vehicle (UAV) to achieve rapid three-dimensional space coverage and complex terrain crossing for rapid and efficient map construction. Meanwhile, it utilizes the stable operation capability of an unmanned ground vehicle (UGV) and the ground detail survey capability to achieve precise map construction. The maps constructed by the two are accurately integrated to obtain precise disaster environment maps. Among them, the map construction and positioning technology is based on the FAST LiDAR–inertial odometry 2 (FAST-LIO2) framework, enabling the robot to achieve precise positioning even in complex environments, thereby obtaining more accurate point cloud maps. Before conducting map fusion, the point cloud is preprocessed first to reduce the density of the point cloud and also minimize the interference of noise and outliers. Subsequently, the coarse and fine registrations of the point clouds are carried out in sequence. The coarse registration is used to reduce the initial pose difference of the two point clouds, which is conducive to the subsequent rapid and efficient fine registration. The coarse registration uses the improved sample consensus initial alignment (SAC-IA) algorithm, which significantly reduces the registration time compared with the traditional SAC-IA algorithm. The precise registration uses the voxelized generalized iterative closest point (VGICP) algorithm. It has a faster registration speed compared with the generalized iterative closest point (GICP) algorithm while ensuring accuracy. In reinforcement learning navigation, we adopted the deep deterministic policy gradient (DDPG) path planning algorithm. 
Compared with the deep Q-network (DQN) algorithm and the A* algorithm, the DDPG algorithm is more conducive to the robot choosing a better route in a complex and unknown environment, and at the same time, the motion trajectory is smoother. This paper adopts Gazebo simulation. Compared with physical robot operation, it provides a safe, controllable, and cost-effective environment, supports efficient large-scale experiments and algorithm debugging, and also supports flexible sensor simulation and automated verification, thereby optimizing the overall testing process. Full article
(This article belongs to the Section Navigation and Positioning)
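The point-cloud density reduction described in the preprocessing step is commonly done with a voxel-grid filter; below is a minimal NumPy sketch, assuming centroid-per-voxel downsampling with an illustrative voxel size (the paper's actual preprocessing parameters are not given here):

```python
import numpy as np

def voxel_downsample(points, voxel=0.2):
    # Quantize each point to an integer voxel key, then average the
    # points that share a voxel (one centroid per occupied voxel).
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.ravel()
    out = np.zeros((inv.max() + 1, 3))
    np.add.at(out, inv, points)            # sum the points in each voxel
    counts = np.bincount(inv).reshape(-1, 1)
    return out / counts                    # centroid per voxel
```

The thinned cloud would then feed the SAC-IA coarse registration and VGICP fine registration stages.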

20 pages, 5843 KB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Viewed by 1118
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. Firstly, an IMU preintegration factor considering the Earth’s rotation and a LiDAR-inertial odometry factor accounting for degenerate states are constructed to adapt to railway train operating environments. Subsequently, a lightweight network based on YOLO improvement is used for recognizing reflective kilometer posts, while PaddleOCR extracts numerical codes. High-precision vertex coordinates of kilometer posts are obtained by jointly using LiDAR point cloud and an image detection box. Next, a kilometer post factor is constructed, and multi-source information is optimized within a factor graph framework. Finally, onboard experiments conducted on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE accuracy for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development. Full article
(This article belongs to the Section Navigation and Positioning)

18 pages, 3315 KB  
Article
Real-Time Geo-Localization for Land Vehicles Using LIV-SLAM and Referenced Satellite Imagery
by Yating Yao, Jing Dong, Songlai Han, Haiqiao Liu, Quanfu Hu and Zhikang Chen
Appl. Sci. 2025, 15(15), 8257; https://doi.org/10.3390/app15158257 - 24 Jul 2025
Viewed by 695
Abstract
Existing Simultaneous Localization and Mapping (SLAM) algorithms provide precise local pose estimation and real-time scene reconstruction, widely applied in autonomous navigation for land vehicles. However, the odometry of SLAM algorithms exhibits localization drift and error divergence over long-distance operations due to the lack of inherent global constraints. In this paper, we propose a real-time geo-localization method for land vehicles, which relies only on a LiDAR-inertial-visual SLAM (LIV-SLAM) and a referenced image. The proposed method enables long-distance navigation without requiring GPS or loop closure, while eliminating accumulated localization errors. To achieve this, the local map constructed by SLAM is projected in real time onto a downward-view image, and a highly efficient cross-modal matching algorithm is proposed to estimate the global position by aligning the projected local image to a geo-referenced satellite image. The cross-modal algorithm leverages dense texture orientation features, ensuring robustness against cross-modal distortion and local scene changes, and supports efficient correlation in the frequency domain for real-time performance. We also propose a novel adaptive Kalman filter (AKF) to integrate the global position provided by the cross-modal matching and the pose estimated by LIV-SLAM. The proposed AKF is designed to effectively handle observation delays and asynchronous updates while simultaneously rejecting the impact of erroneous matches through an Observation-Aware Gain Scaling (OAGS) mechanism. We verify the proposed algorithm on the R3LIVE and NCLT datasets, demonstrating superior computational efficiency, reliability, and accuracy compared to existing methods. Full article
(This article belongs to the Special Issue Navigation and Positioning Based on Multi-Sensor Fusion Technology)
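The Observation-Aware Gain Scaling idea — shrinking the Kalman gain when an observation looks like a bad match — can be illustrated in one dimension. This is a guessed mechanism sketch under stated assumptions (innovation-based gating with an illustrative threshold), not the paper's actual OAGS formulation:

```python
def oags_kf_update(x, P, z, R, scale_thresh=3.0):
    # 1D Kalman update: if the normalized innovation exceeds the threshold,
    # scale the gain down so an erroneous cross-modal match cannot drag the state.
    innov = z - x
    S = P + R                              # innovation covariance
    nis = innov * innov / S                # normalized innovation squared
    gamma = 1.0 if nis <= scale_thresh ** 2 else scale_thresh / (nis ** 0.5)
    K = gamma * P / S                      # observation-aware scaled gain
    return x + K * innov, (1.0 - K) * P
```

An inlier observation is absorbed with the usual gain, while a gross outlier only nudges the state.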

18 pages, 12540 KB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Viewed by 1047
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)

18 pages, 16696 KB  
Technical Note
LIO-GC: LiDAR Inertial Odometry with Adaptive Ground Constraints
by Wenwen Tian, Juefei Wang, Puwei Yang, Wen Xiao and Sisi Zlatanova
Remote Sens. 2025, 17(14), 2376; https://doi.org/10.3390/rs17142376 - 10 Jul 2025
Cited by 1 | Viewed by 2814
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) techniques are commonly applied in high-precision mapping and positioning for mobile platforms. However, the vertical resolution limitations of multi-beam spinning LiDAR sensors can significantly impair vertical estimation accuracy. This challenge is accentuated in scenarios involving fewer-line or cost-effective spinning LiDARs, where vertical features are sparse. To address this issue, we introduce LIO-GC, which effectively extracts ground features and integrates them into a factor graph to rectify vertical accuracy. Unlike conventional methods relying on geometric features for ground plane segmentation, our approach leverages a self-adaptive strategy that considers the uneven point cloud distribution and inconsistency due to ground fluctuations. By optimizing laser range factors, ground feature constraints, and loop closure factors using graph optimization frameworks, our method surpasses current approaches, demonstrating superior performance through evaluation on open-source and newly collected datasets. Full article
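A ground-constraint factor needs a ground-plane estimate to anchor vertical drift. A generic least-squares plane fit via SVD is sketched below; LIO-GC's self-adaptive extraction is more involved, so this shows only the basic fit that such a constraint could build on:

```python
import numpy as np

def fit_ground_plane(points):
    # Least-squares plane through an Nx3 array of candidate ground points:
    # the normal is the direction of least variance of the centered cloud,
    # giving n·p + d ≈ 0 for points p on the plane.
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                 # unit normal (last right-singular vector)
    d = -n @ centroid
    return n, d
```

The fitted (n, d) would enter the factor graph alongside the laser range and loop-closure factors mentioned above.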

30 pages, 14473 KB  
Article
VOX-LIO: An Effective and Robust LiDAR-Inertial Odometry System Based on Surfel Voxels
by Meijun Guo, Yonghui Liu, Yuhang Yang, Xiaohai He and Weimin Zhang
Remote Sens. 2025, 17(13), 2214; https://doi.org/10.3390/rs17132214 - 27 Jun 2025
Viewed by 1832
Abstract
Accurate and robust pose estimation is critical for simultaneous localization and mapping (SLAM), and multi-sensor fusion has demonstrated efficacy with significant potential for robotic applications. This study presents VOX-LIO, an effective LiDAR-inertial odometry system. To improve both robustness and accuracy, we propose an adaptive hash voxel-based point cloud map management method that incorporates surfel features and planarity. This method enhances the efficiency of point-to-surfel association by leveraging long-term observed surfels. It facilitates the incremental refinement of surfel features within classified surfel voxels, thereby enabling precise and efficient map updates. Furthermore, we develop a weighted fusion approach that integrates LiDAR and IMU measurements on the manifold, effectively compensating for motion distortion, particularly under high-speed LiDAR motion. We validate our system through experiments conducted on both public datasets and our mobile robot platforms. The results demonstrate that VOX-LIO outperforms the existing methods, effectively handling challenging environments while minimizing computational cost. Full article
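The hash-voxel map management described here rests on quantizing points to integer voxel keys so voxels can live in a hash table. A minimal sketch with an assumed voxel size and a plain Python dict (the paper's surfel bookkeeping per voxel is omitted):

```python
def voxel_key(p, voxel=0.5):
    # Quantize a 3D point to an integer triple usable as a dict key.
    return (int(p[0] // voxel), int(p[1] // voxel), int(p[2] // voxel))

# Group points by the voxel they fall into (illustrative sample points).
vmap = {}
for pt in [(0.1, 0.2, 0.0), (0.3, 0.1, 0.1), (1.7, 0.0, 0.0)]:
    vmap.setdefault(voxel_key(pt), []).append(pt)
```

Point-to-surfel association then becomes a constant-time dict lookup on the query point's key.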

27 pages, 9977 KB  
Article
Mergeable Probabilistic Voxel Mapping for LiDAR–Inertial–Visual Odometry
by Balong Wang, Nassim Bessaad, Huiying Xu, Xinzhong Zhu and Hongbo Li
Electronics 2025, 14(11), 2142; https://doi.org/10.3390/electronics14112142 - 24 May 2025
Cited by 1 | Viewed by 1865
Abstract
To address the limitations of existing LiDAR–visual fusion methods in adequately accounting for map uncertainties induced by LiDAR measurement noise, this paper introduces a LiDAR–inertial–visual odometry framework leveraging mergeable probabilistic voxel mapping. The method innovatively employs probabilistic voxel models to characterize uncertainties in environmental geometric plane features and optimizes computational efficiency through a voxel merging strategy. Additionally, it integrates color information from cameras to further enhance localization accuracy. Specifically, in the LiDAR–inertial odometry (LIO) subsystem, a probabilistic voxel plane model is constructed for LiDAR point clouds to explicitly represent measurement noise uncertainty, thereby improving the accuracy and robustness of point cloud registration. A voxel merging strategy based on the union-find algorithm is introduced to merge coplanar voxel planes, reducing the computational load. In the visual–inertial odometry (VIO) subsystem, image tracking points are generated through a global map projection, and outlier points are eliminated using a random sample consensus algorithm based on a dynamic Bayesian network. Finally, state estimation accuracy is enhanced by jointly optimizing frame-to-frame reprojection errors and frame-to-map RGB color errors. Experimental results demonstrate that the proposed method achieves root mean square errors (RMSEs) of absolute trajectory error of 0.478 m and 0.185 m on the M2DGR and NTU-VIRAL datasets, respectively, while attaining real-time performance with an average processing time of 39.19 ms per frame on the NTU-VIRAL dataset. Compared to state-of-the-art approaches, our method exhibits significant improvements in both accuracy and computational efficiency. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
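The union-find structure used to merge coplanar voxel planes can be sketched generically; which voxels get unioned (hard-coded below) would in practice depend on a coplanarity test between the fitted voxel planes:

```python
class UnionFind:
    # Disjoint-set forest: voxels whose plane fits nearly agree are
    # unioned so they share one representative plane cluster.
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

uf = UnionFind(4)
uf.union(0, 1)   # e.g. voxels 0 and 1 pass the coplanarity test
uf.union(1, 2)
```

After merging, registration only needs one plane parameterization per cluster, which is where the computational saving comes from.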

23 pages, 20311 KB  
Article
Bridge Geometric Shape Measurement Using LiDAR–Camera Fusion Mapping and Learning-Based Segmentation Method
by Shang Jiang, Yifan Yang, Siyang Gu, Jiahui Li and Yingyan Hou
Buildings 2025, 15(9), 1458; https://doi.org/10.3390/buildings15091458 - 25 Apr 2025
Cited by 4 | Viewed by 1623
Abstract
The rapid measurement of three-dimensional bridge geometric shapes is crucial for assessing construction quality and in-service structural conditions. Existing geometric shape measurement methods predominantly rely on traditional surveying instruments, which suffer from low efficiency and are limited to sparse point sampling. This study proposes a novel framework that utilizes an airborne LiDAR–camera fusion system for data acquisition, reconstructs high-precision 3D bridge models through real-time mapping, and automatically extracts structural geometric shapes using deep learning. The main contributions include the following: (1) A synchronized LiDAR–camera fusion system integrated with an unmanned aerial vehicle (UAV) and a microprocessor was developed, enabling the flexible and large-scale acquisition of bridge images and point clouds; (2) A multi-sensor fusion mapping method coupling visual-inertial odometry (VIO) and LiDAR-inertial odometry (LIO) was implemented to robustly construct 3D bridge point clouds in real time; and (3) An instance segmentation network-based approach was proposed to detect key structural components in images, with detected geometric shapes projected from image coordinates to 3D space using LiDAR–camera calibration parameters, addressing challenges in automated large-scale point cloud analysis. The proposed method was validated through geometric shape measurements on a concrete arch bridge. The results demonstrate that compared to the oblique photogrammetry method, the proposed approach reduces errors by 77.13%, while its detection time accounts for 4.18% of that required by a stationary laser scanner and 0.29% of that needed for oblique photogrammetry. Full article
(This article belongs to the Special Issue Urban Infrastructure and Resilient, Sustainable Buildings)
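Mapping detections between image and LiDAR coordinates relies on the calibration parameters mentioned in contribution (3). The forward pinhole projection at the core of that mapping can be sketched as follows; the matrices in the usage below are illustrative, not the system's calibration:

```python
import numpy as np

def project_to_image(p_lidar, R, t, K):
    # Extrinsics (R, t) take the point from the LiDAR frame to the camera
    # frame; intrinsics K perform the perspective projection to pixels.
    p_cam = R @ p_lidar + t
    u, v, w = K @ p_cam
    return u / w, v / w
```

The paper's pipeline goes the other way — lifting 2D detections into 3D — which amounts to inverting this projection along the LiDAR ray.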

21 pages, 10896 KB  
Article
Loosely Coupled PPP/Inertial/LiDAR Simultaneous Localization and Mapping (SLAM) Based on Graph Optimization
by Baoxiang Zhang, Cheng Yang, Guorui Xiao, Peigong Li, Zhengyang Xiao, Haopeng Wei and Jialin Liu
Remote Sens. 2025, 17(5), 812; https://doi.org/10.3390/rs17050812 - 25 Feb 2025
Viewed by 1263
Abstract
Navigation services and high-precision positioning play a significant role in emerging fields such as self-driving and mobile robots. The performance of precise point positioning (PPP) may be seriously affected by signal interference and struggles to achieve continuous and accurate positioning in complex environments. LiDAR/inertial navigation can use spatial structure information to realize pose estimation but cannot solve the problem of cumulative error. This study proposes a PPP/inertial/LiDAR combined localization algorithm based on factor graph optimization. Firstly, the algorithm performed the spatial alignment by adding the initial yaw factor. Then, the PPP factor and anchor factor were constructed using PPP information. Finally, the global localization is estimated accurately and robustly based on the factor graph. The vehicle experiment shows that the proposed algorithm in this study can achieve meter-level accuracy in complex environments and can greatly enhance the accuracy, continuity, and reliability of attitude estimation. Full article

29 pages, 4682 KB  
Article
LSAF-LSTM-Based Self-Adaptive Multi-Sensor Fusion for Robust UAV State Estimation in Challenging Environments
by Mahammad Irfan, Sagar Dalai, Petar Trslic, James Riordan and Gerard Dooly
Machines 2025, 13(2), 130; https://doi.org/10.3390/machines13020130 - 9 Feb 2025
Cited by 4 | Viewed by 3015
Abstract
Unmanned aerial vehicle (UAV) state estimation is fundamental across applications like robot navigation, autonomous driving, virtual reality (VR), and augmented reality (AR). This research highlights the critical role of robust state estimation in ensuring safe and efficient autonomous UAV navigation, particularly in challenging environments. We propose a deep learning-based adaptive sensor fusion framework for UAV state estimation, integrating multi-sensor data from stereo cameras, an IMU, two 3D LiDARs, and GPS. The framework dynamically adjusts fusion weights in real time using a long short-term memory (LSTM) model, enhancing robustness under diverse conditions such as illumination changes, structureless environments, degraded GPS signals, or complete signal loss where traditional single-sensor SLAM methods often fail. Validated on an in-house integrated UAV platform and evaluated against high-precision RTK ground truth, the algorithm incorporates deep learning-predicted fusion weights into an optimization-based odometry pipeline. The system delivers robust, consistent, and accurate state estimation, outperforming state-of-the-art techniques. Experimental results demonstrate its adaptability and effectiveness across challenging scenarios, showcasing significant advancements in UAV autonomy and reliability through the synergistic integration of deep learning and sensor fusion. Full article
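The adaptive fusion-weight idea can be illustrated with a softmax-weighted combination of per-sensor estimates. In the paper the weights are predicted by an LSTM from sensor conditions; here they are fixed inputs, so this is only a sketch of the combination step:

```python
import numpy as np

def fuse_estimates(estimates, weights):
    # estimates: (n_sensors, 3) position estimates; weights: raw scores.
    # Softmax-normalize the scores, then take the weighted mean.
    w = np.exp(weights - np.max(weights))   # subtract max for numerical stability
    w /= w.sum()
    return (w[:, None] * estimates).sum(axis=0)
```

A degraded sensor (e.g. GPS in signal loss) would receive a low score, so its estimate barely influences the fused state.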

42 pages, 40649 KB  
Article
A Multi-Drone System Proof of Concept for Forestry Applications
by André G. Araújo, Carlos A. P. Pizzino, Micael S. Couceiro and Rui P. Rocha
Drones 2025, 9(2), 80; https://doi.org/10.3390/drones9020080 - 21 Jan 2025
Cited by 10 | Viewed by 4928
Abstract
This study presents a multi-drone proof of concept for efficient forest mapping and autonomous operation, framed within the context of the OPENSWARM EU Project. The approach leverages state-of-the-art open-source simultaneous localisation and mapping (SLAM) frameworks, like LiDAR (Light Detection And Ranging) Inertial Odometry via Smoothing and Mapping (LIO-SAM), and Distributed Collaborative LiDAR SLAM Framework for a Robotic Swarm (DCL-SLAM), seamlessly integrated within the MRS UAV System and Swarm Formation packages. This integration is achieved through a series of procedures compliant with Robot Operating System middleware (ROS), including an auto-tuning particle swarm optimisation method for enhanced flight control and stabilisation, which is crucial for autonomous operation in challenging environments. Field experiments conducted in a forest with multiple drones demonstrate the system’s ability to navigate complex terrains as a coordinated swarm, accurately and collaboratively mapping forest areas. Results highlight the potential of this proof of concept, contributing to the development of scalable autonomous solutions for forestry management. The findings emphasise the significance of integrating multiple open-source technologies to advance sustainable forestry practices using swarms of drones. Full article

8 pages, 7391 KB  
Proceeding Paper
Comparative Analysis of LiDAR Inertial Odometry Algorithms in Blueberry Crops
by Ricardo Huaman, Clayder Gonzalez and Sixto Prado
Eng. Proc. 2025, 83(1), 9; https://doi.org/10.3390/engproc2025083009 - 9 Jan 2025
Viewed by 2555
Abstract
In recent years, LiDAR Odometry (LO) and LiDAR Inertial Odometry (LIO) algorithms for robot localization have considerably improved, with significant advancements demonstrated in various benchmarks. However, their performance in agricultural environments remains underexplored. This study addresses this gap by evaluating five state-of-the-art LO and LIO algorithms—LeGO-LOAM, DLO, DLIO, FAST-LIO2, and Point-LIO—in a blueberry farm setting. Using an Ouster OS1-32 LiDAR mounted on a four-wheeled mobile robot, the algorithms were evaluated using the translational error metric across four distinct sequences. DLIO showed the highest accuracy across all sequences, with a minimal error of 0.126 m over a 230 m path, while FAST-LIO2 achieved its lowest translational error of 0.606 m on a U-shaped path. LeGO-LOAM, however, struggled due to the environment’s lack of linear and planar features. The results underscore the effectiveness and potential limitations of these algorithms in agricultural environments, offering insights into future improvements and adaptations. Full article
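The translational error metric used for evaluation can be read as an RMSE over time-aligned trajectory positions; below is a sketch under that assumption (the paper may instead report end-to-end drift, as the "0.126 m over a 230 m path" figure suggests):

```python
import numpy as np

def translational_rmse(est, gt):
    # est, gt: (N, 3) arrays of already time-aligned positions.
    # RMSE of the per-pose Euclidean position differences.
    d = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt((d ** 2).mean()))
```

Either reading reduces to comparing estimated positions against ground truth along the traversed sequence.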
