Multi-Sensor-Assisted Navigation for UAVs in Power Inspection: A Fusion Approach Using LiDAR, IMU and GPS
Abstract
1. Introduction
- We propose a navigation architecture for power inspection that combines a tightly coupled LiDAR-IMU front-end (UKF-SLAM on Lie groups) with a loosely coupled back-end (GPS and loop closure pose graph optimization), achieving both high local accuracy and global consistency;
- We design a UKF-SLAM algorithm formulated on Lie group operations to enhance the mathematical rigor and numerical stability of rotational estimation in the front-end;
- We construct a pose graph optimization model incorporating GPS constraints, loop closure detection, and point cloud registration, which effectively suppresses long-term accumulated drift in a loosely coupled manner.
2. An Aided Localization Method Based on UKF Filtering for LiDAR–IMU Fusion
2.1. Fusion Algorithm Design Workflow
2.1.1. Introduction to Data Fusion Strategy
- (1) Information Fusion Strategy Based on Distinguishable Units
- (2) Information Fusion Strategy Based on Feature Complementarity
- (3) Fusion Strategy Based on Target Attributes from Different Sensors
- (4) Fusion Strategy Based on Decisions from Different Sensors
2.1.2. Study on Alignment-Based Fusion Strategies
- (1) Temporal Alignment Algorithm
- (2) Spatial Alignment Algorithm
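The temporal alignment step can be illustrated with a minimal sketch. The paper does not specify its interpolation scheme, so the following assumes simple linear interpolation of high-rate IMU samples onto LiDAR scan timestamps; the function name and array layout are illustrative only.

```python
import numpy as np

def align_imu_to_lidar(imu_times, imu_values, lidar_times):
    """Linearly interpolate IMU samples onto LiDAR scan timestamps.

    imu_times:   (N,) monotonically increasing IMU timestamps [s]
    imu_values:  (N, D) IMU readings (e.g. gyro and accelerometer channels)
    lidar_times: (M,) LiDAR timestamps to align to [s]
    Returns an (M, D) array of IMU readings resampled at lidar_times.
    """
    imu_values = np.asarray(imu_values, dtype=float)
    return np.column_stack([
        np.interp(lidar_times, imu_times, imu_values[:, d])
        for d in range(imu_values.shape[1])
    ])

# Toy example: a 200 Hz IMU stream resampled to 10 Hz LiDAR timestamps.
t_imu = np.arange(0.0, 1.0, 0.005)
vals = np.column_stack([np.sin(t_imu), np.cos(t_imu)])
t_lidar = np.arange(0.0, 1.0, 0.1)
aligned = align_imu_to_lidar(t_imu, vals, t_lidar)
```

In practice the alignment would also account for per-point LiDAR timestamps within a scan; the sketch only shows the per-scan case.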
2.2. Optimization Process Based on UKF-SLAM
2.2.1. State Vector and System Model
2.2.2. UKF-SLAM Based on Lie Group Operations
- (1) Unscented Kalman Filter (UKF)
- (2) Improved UKF with Lie Group Operations
- Closure: for all $a, b \in G$, $a \circ b \in G$;
- Associativity: for all $a, b, c \in G$, $(a \circ b) \circ c = a \circ (b \circ c)$;
- Existence of Inverse Elements: for every $a \in G$ there exists $a^{-1} \in G$ such that $a \circ a^{-1} = a^{-1} \circ a = e$, where $e$ is the identity element.
- The three-dimensional rotation group $SO(3) = \{ R \in \mathbb{R}^{3 \times 3} \mid R R^{\top} = I, \ \det R = 1 \}$, which represents rotation matrices;
- The three-dimensional special Euclidean group $SE(3)$, which represents rigid-body transformation matrices $T = \begin{bmatrix} R & t \\ 0 & 1 \end{bmatrix}$, where $t \in \mathbb{R}^{3}$ denotes the translation vector.
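These definitions can be made concrete with a short sketch. The following assumes standard constructions (Rodrigues' formula for the exponential map and the homogeneous $4 \times 4$ form of $SE(3)$); the helper names are illustrative, not from the paper.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector (an element of so(3))."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def se3(R, t):
    """Assemble a 4x4 homogeneous transform [R t; 0 1] in SE(3)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

R = so3_exp(np.array([0.0, 0.0, np.pi / 2]))   # 90-degree yaw rotation
T = se3(R, np.array([1.0, 2.0, 3.0]))
T_inv = np.linalg.inv(T)                        # the inverse element in SE(3)
```

The group axioms can be checked numerically: `R` is orthogonal with unit determinant, and `T @ T_inv` recovers the identity transform.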
- Construct the augmented state on the manifold and generate sigma points;
- Propagate the sigma points through the nonlinear system equations, and compute the a priori estimate using geodesic mean and covariance calculations;
- Predict the measurements using the observation model, and compute the cross-covariance and the observation covariance;
- Fuse the measurement residuals using the Kalman gain to update the posterior state and covariance;
- Iteratively execute the above steps to achieve continuous state estimation of the dynamic system on the manifold.
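The sigma-point and geodesic-mean steps above can be sketched for the rotational component alone. This is a simplified illustration on $SO(3)$ rather than the paper's full augmented state on $SE(3)$, and the motion model (a fixed incremental rotation) is a stand-in assumption; all function names are illustrative.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Exponential map so(3) -> SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-10:
        return np.eye(3)
    K = hat(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def so3_log(R):
    """Logarithm map SO(3) -> so(3), returned as a 3-vector."""
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-10:
        return np.zeros(3)
    W = (R - R.T) * (theta / (2.0 * np.sin(theta)))
    return np.array([W[2, 1], W[0, 2], W[1, 0]])

def sigma_points(R_mean, P, lam=1.0):
    """Generate 2n+1 sigma points around R_mean via the exponential retraction."""
    n = P.shape[0]
    S = np.linalg.cholesky((n + lam) * P)
    pts = [R_mean]
    for i in range(n):
        pts.append(R_mean @ so3_exp(S[:, i]))
        pts.append(R_mean @ so3_exp(-S[:, i]))
    return pts

def geodesic_mean(rotations, iters=10):
    """Iterative geodesic (Karcher) mean of a set of rotations."""
    R = rotations[0]
    for _ in range(iters):
        err = np.mean([so3_log(R.T @ Ri) for Ri in rotations], axis=0)
        R = R @ so3_exp(err)
    return R

# Propagate sigma points through a toy motion model (right-multiplication by
# an incremental rotation dR) and recover the predicted mean on the manifold.
R0 = so3_exp(np.array([0.1, 0.0, 0.0]))
P0 = 0.01 * np.eye(3)
dR = so3_exp(np.array([0.0, 0.0, 0.2]))
propagated = [Ri @ dR for Ri in sigma_points(R0, P0)]
R_pred = geodesic_mean(propagated)
```

Because the perturbations are symmetric about the mean, the recovered `R_pred` coincides with `R0 @ dR`, which is the manifold analogue of the linear UKF prediction step.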
3. An Aided Localization Study Based on Pose Graph Optimization for LiDAR–GPS Fusion
3.1. Construction of the Pose Graph Optimization Model
3.1.1. Construction of GPS Coordinate Constraints
3.1.2. Construction of Loop Closure Constraints
- By differentiating the residual equation with respect to each of the two connected poses, the corresponding Jacobian matrices can be derived.
3.1.3. Construction of Point Cloud Registration Constraints
- By differentiating the residual equation with respect to each of the two connected poses, the corresponding Jacobian matrices can be derived.
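The analytic Jacobians derived in the paper are not reproduced here, but the structure of a between-pose residual and its Jacobian can be illustrated on a simplified $SE(2)$ edge with a central-difference check. All names and the planar simplification are assumptions for illustration.

```python
import numpy as np

def relative_pose(xi, xj):
    """Express pose xj = (x, y, theta) in the frame of pose xi (SE(2))."""
    dx, dy = xj[0] - xi[0], xj[1] - xi[1]
    c, s = np.cos(xi[2]), np.sin(xi[2])
    return np.array([c * dx + s * dy,
                     -s * dx + c * dy,
                     xj[2] - xi[2]])

def residual(xi, xj, z):
    """Edge residual: predicted relative pose minus measured relative pose z."""
    r = relative_pose(xi, xj) - z
    r[2] = (r[2] + np.pi) % (2 * np.pi) - np.pi   # wrap angle to (-pi, pi]
    return r

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f with respect to x."""
    J = np.zeros((len(f(x)), len(x)))
    for k in range(len(x)):
        d = np.zeros_like(x)
        d[k] = eps
        J[:, k] = (f(x + d) - f(x - d)) / (2 * eps)
    return J

xi = np.array([1.0, 2.0, 0.3])
xj = np.array([2.0, 2.5, 0.5])
z = relative_pose(xi, xj)                    # perfect measurement: zero residual
Ji = numerical_jacobian(lambda v: residual(v, xj, z), xi)
```

Such numerical Jacobians are a common way to validate hand-derived ones before they are used inside the optimizer.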
3.1.4. Design of the Pose Graph Model
3.2. Technical Implementation Scheme
- (1) GPS Module
- The satellite information acquisition module is responsible for receiving raw GPS satellite signal data;
- The latitude and longitude solution module computes the UAV’s geodetic coordinates based on the received signals and provides initial information for the coordinate transformation module;
- The coordinate transformation module converts the geodetic coordinates into the Earth-Centered Earth-Fixed (ECEF) coordinate system and subsequently transforms them into relative coordinates in the LiDAR coordinate frame, which are defined as the node coordinates in this study.
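The geodetic-to-ECEF step of the coordinate transformation module uses the standard closed-form WGS84 equations. The sketch below assumes the WGS84 ellipsoid; the function name and constants are illustrative, since the paper does not specify its implementation.

```python
import math

# WGS84 ellipsoid constants
A = 6378137.0                  # semi-major axis [m]
F = 1 / 298.257223563          # flattening
E2 = F * (2 - F)               # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Convert geodetic coordinates (degrees, meters) to ECEF (meters)."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

# Sanity check: a point on the equator at the prime meridian lies on the
# ECEF x-axis, at one semi-major axis from the Earth's center.
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)
```

The final LiDAR-frame coordinates would then be obtained by subtracting a reference origin and rotating into the local frame, which is not shown here.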
- (2) LiDAR Module
- The point cloud acquisition module collects and generates point cloud data of the surrounding environment using an onboard mechanical LiDAR. The point cloud data are represented as three-dimensional coordinates $(x, y, z)$. This module performs data acquisition and transmission, providing raw point cloud data for subsequent processing stages, and serves as the basis for lightweight front-end perception in power inspection scenarios.
- The point cloud filtering module operates on the output of the acquisition layer. In this work, K-means clustering is adopted as a lightweight preprocessing strategy to organize local point cloud structure and suppress redundant or irrelevant small-scale noise points with relatively low computational cost. Compared with more complex density-based or model-based filtering methods, this design is more suitable for the current real-time-oriented framework. In addition, voxel grid filtering is employed to reduce the number of points, thereby decreasing algorithmic complexity and enhancing real-time performance. The combination of K-means clustering and voxel grid downsampling provides a practical balance between filtering effectiveness and computational efficiency.
- The point cloud calibration module utilizes the raw point cloud data together with the matching results from the two preceding frames. A prediction-based approach is adopted to transform the point clouds from each scanning cycle into a unified temporal reference, producing temporally aligned point clouds that are subsequently fed into the feature extraction module, which helps improve temporal consistency before registration.
- The feature extraction module extracts edge features and planar features from the point cloud based on curvature analysis. This curvature-based design is chosen because it can effectively preserve the dominant structural characteristics of power-line environments while maintaining relatively low implementation complexity. Compared with learning-based feature extraction methods, it avoids additional training requirements and higher computational overhead, making it more suitable for the current stage of the proposed framework. Feature sequences corresponding to different time instants are stored for matching purposes. The point cloud matching layer matches features from identical spatial locations across two time instants and forwards the results to the pose estimation layer.
- The point cloud matching module performs feature-based matching between two time instants and employs the Levenberg–Marquardt (LM) nonlinear optimization method to decouple and estimate the UAV rotation matrix $R$ and translation vector $t$. The resulting pose estimates are then passed to the localization fusion layer. This feature-based matching strategy reduces the computational burden compared with direct dense matching and is therefore more compatible with the efficiency requirements of UAV onboard localization.
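The voxel grid downsampling step used in the filtering module can be sketched compactly: each occupied voxel is replaced by the centroid of its points. This is a generic implementation under assumed parameter names, not the paper's code.

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Reduce an (N, 3) point cloud by averaging the points in each voxel."""
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Map every point to its voxel id, then accumulate per-voxel sums/counts.
    _, inverse = np.unique(idx, axis=0, return_inverse=True)
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)
    np.add.at(counts, inverse, 1.0)
    return sums / counts[:, None]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(1000, 3))
down = voxel_downsample(cloud, 0.25)   # at most 4^3 = 64 centroids remain
```

The voxel size trades filtering strength against geometric detail; smaller voxels preserve structure at higher downstream cost.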
- (3) Remaining Modules
- The graph optimization module adopts a sliding-window-based optimization strategy. It fuses the pose transformation information between two consecutive point clouds output by the point cloud matching module with the relative GPS poses provided by the coordinate transformation module. By jointly optimizing multiple historical state variables through a least-squares framework, the pose states of multiple time instants are estimated simultaneously.
- The inverse coordinate transformation module converts the fused pose estimates back into the Earth-Centered Earth-Fixed (ECEF) coordinate system and subsequently into geodetic coordinates (latitude and longitude), which are output as the final positioning results. The coordinate transformation proceeds as follows. First, the pose estimates (position and orientation) in the local frame are converted to ECEF coordinates through a series of geometric transformations: the local coordinate system’s origin is translated to the Earth’s center, and the coordinates are then rotated to align with the Earth’s rotational axes using a direction cosine matrix (DCM) or a quaternion-based transformation. Second, once the position is expressed in the ECEF frame, it is transformed into geodetic coordinates (latitude, longitude, and altitude). The relationship between ECEF and geodetic coordinates is non-trivial and requires inverting the following set of nonlinear equations: $x = (N + h)\cos\varphi\cos\lambda$, $y = (N + h)\cos\varphi\sin\lambda$, $z = \left[N(1 - e^{2}) + h\right]\sin\varphi$, where $\varphi$, $\lambda$, and $h$ denote latitude, longitude, and ellipsoidal height, $N = a/\sqrt{1 - e^{2}\sin^{2}\varphi}$ is the prime vertical radius of curvature, and $a$ and $e$ are the semi-major axis and first eccentricity of the reference ellipsoid.
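Because latitude appears on both sides of the ECEF equations through $N$, the inverse conversion is commonly solved by fixed-point iteration. The sketch below shows one standard iterative scheme on the WGS84 ellipsoid; the paper's exact solver is unspecified, so the function names and iteration count are assumptions.

```python
import math

# WGS84 ellipsoid constants (same as the forward conversion)
A = 6378137.0
F = 1 / 298.257223563
E2 = F * (2 - F)

def geodetic_to_ecef(lat_deg, lon_deg, h):
    """Forward conversion, used here to demonstrate the round trip."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    return ((n + h) * math.cos(lat) * math.cos(lon),
            (n + h) * math.cos(lat) * math.sin(lon),
            (n * (1 - E2) + h) * math.sin(lat))

def ecef_to_geodetic(x, y, z, iters=10):
    """Invert the nonlinear ECEF equations by fixed-point iteration on latitude."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))          # initial guess (h = 0)
    for _ in range(iters):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    h = p / math.cos(lat) - n
    return math.degrees(lat), math.degrees(lon), h

# Round trip: geodetic -> ECEF -> geodetic recovers the input point.
lat, lon, h = ecef_to_geodetic(*geodetic_to_ecef(45.0, 7.5, 1000.0))
```

A handful of iterations suffices for millimeter-level height accuracy at mid-latitudes; closed-form alternatives (e.g. Bowring's method) trade iterations for a slightly longer formula.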
4. Experimental Setup and Result Analysis
4.1. Experimental Evaluation of UKF-Based LiDAR–IMU-Aided Localization
| Algorithm 1: LiDAR–IMU–GPS Fusion for UAV Localization |
| Input: |
| IMU measurements at time k |
| LiDAR point cloud between k − 1 and k |
| GPS fix (latitude, longitude, altitude) |
| Output: |
| Pose T_k ∈ SE(3) for each time step k |
| 1: Initialize the state and an empty pose graph |
| 2: for each time step k = 1, 2, … do |
| 3: // ---------- Front-end: LiDAR–IMU fusion via Lie-group UKF ---------- |
| 4: Predict the state from the IMU measurements using Equations (2)–(6) |
| 5: Generate sigma points on the SE(3) manifold (Equations (28)–(30)) |
| 6: Propagate the sigma points through the motion model (Equations (2)–(6)) |
| 7: Compute the a priori mean and covariance using geodesic distance (Equations (31) and (32)) |
| 8: if a LiDAR scan is available then |
| 9: Extract edge and planar features based on curvature (Section 3.2) |
| 10: Match features against the previous scan |
| 11: Compute observation model (Equations (8) and (9)) |
| 12: Update UKF: compute Kalman gain, posterior state and covariance on SE(3) (Equations (33)–(38)) |
| 13: end if |
| 14: |
| 15: // ---------- Back-end: Pose graph optimization (triggered every N frames) ---------- |
| 16: if k mod N = 0 then |
| 17: Add the current pose as a new node in the graph |
| 18: if GPS data available then |
| 19: Add GPS edge: residual (Equation (40)), Jacobian (Equation (41)) |
| 20: end if |
| 21: Add odometry edge using LiDAR registration residual (Equation (54)), Jacobians (Equations (57) and (60)) |
| 22: if loop closure detected between current frame and historical frame l then |
| 23: Add loop closure edge: residual (Equation (44)), Jacobians (Equations (48) and (51)) |
| 24: end if |
| 25: Solve nonlinear least-squares problem using Levenberg–Marquardt (Equation (61)) |
| 26: end if |
| 27: end for |
4.2. Experimental Evaluation of Pose Graph Optimization-Based LiDAR–GPS-Aided Localization
4.2.1. Data Analysis
4.2.2. Experimental Summary
- The system architecture design for UAV-based LiDAR–GPS fusion localization comprises multiple functional modules, including a satellite signal acquisition module, a latitude–longitude computation module, a coordinate transformation module, a point cloud acquisition module, a point cloud filtering module, a point cloud calibration module, a feature extraction module, a point cloud matching module, a graph optimization module, and an inverse coordinate transformation module.
- The algorithmic framework for UAV-based LiDAR–GPS fusion localization is established by integrating several key components, including the conversion of GPS geodetic coordinates into node coordinates for pose graph optimization, the algorithmic design for estimating the relative pose between two LiDAR point cloud frames, and the subsequent coupling of these estimates into a pose graph optimization framework. Through joint optimization, more accurate geodetic position estimates (latitude and longitude) are ultimately obtained.
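The coupling of odometry and GPS constraints in a single least-squares problem can be illustrated with a deliberately minimal 1-D pose graph (hypothetical numbers): odometry edges between consecutive nodes plus GPS-like absolute constraints. Stacking all residuals and solving the resulting linear system mirrors one step of the back-end, which in the paper is solved with Levenberg–Marquardt on the full nonlinear problem.

```python
import numpy as np

# Three nodes with scalar positions x0, x1, x2.
odometry = [(0, 1, 1.05), (1, 2, 0.95)]   # (i, j, measured x_j - x_i)
gps = [(0, 0.0), (2, 2.0)]                # (node, measured absolute position)

n = 3
rows, rhs = [], []
for i, j, d in odometry:                  # relative (odometry) constraints
    row = np.zeros(n)
    row[j], row[i] = 1.0, -1.0
    rows.append(row)
    rhs.append(d)
for i, p in gps:                          # absolute (GPS) constraints
    row = np.zeros(n)
    row[i] = 1.0
    rows.append(row)
    rhs.append(p)

J, b = np.array(rows), np.array(rhs)
x, *_ = np.linalg.lstsq(J, b, rcond=None)  # fused node positions
# -> approximately [0.0, 1.05, 2.0]: GPS anchors the ends, odometry fills in.
```

In the full system each node is an SE(3) pose, the residuals are nonlinear, and information matrices weight each edge, but the stacked least-squares structure is the same.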
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
| Sequence | MH_01_Easy | MH_02_Easy | MH_03_Medium | MH_04_Difficult | MH_05_Difficult |
|---|---|---|---|---|---|
| VINS-Mono | 0.114 | 0.186 | 0.202 | 0.257 | 0.254 |
| ROVIO | 0.172 | 0.521 | 0.386 | 0.522 | 0.418 |
| NormalUKF | 0.284 | 0.199 | 0.246 | 0.490 | 0.351 |
| Proposed | 0.139 | 0.170 | 0.243 | 0.323 | 0.261 |
| Loop Closure Index | New Frame | Old Frame | Fitting Error |
|---|---|---|---|
| 1 | 983 | 153 | 0.156819 |
| 2 | 988 | 158 | 0.0504666 |
| 3 | 1353 | 944 | 0.0826466 |
| 4 | 1358 | 949 | 0.0233855 |
| 5 | 1363 | 954 | 0.0472858 |
| 6 | 1368 | 959 | 0.0849739 |
| 7 | 1373 | 964 | 0.0943137 |
| 8 | 1378 | 969 | 0.103733 |
| 9 | 1383 | 973 | 0.107383 |
| 10 | 1388 | 977 | 0.0551427 |
| Method | Attribute | RPE | ATE | APE | Attribute | Value |
|---|---|---|---|---|---|---|
| Front-end odometry | Max | 18.938824 | 110.030900 | 111.690937 | -- | -- |
| | Mean | 2.807531 | 42.333587 | 42.502190 | -- | -- |
| | Median | 1.833126 | 39.178521 | 39.217980 | -- | -- |
| | Min | 0.529429 | 0.000000 | 0.000001 | -- | -- |
| | RMSE | 4.535629 | 54.280761 | 54.519383 | -- | -- |
| | SSE | 555.442219 | 4098443 | 4134557 | -- | -- |
| | STD | 3.562261 | 33.974527 | 34.145673 | -- | -- |
| Front-end odometry + back-end optimization with loop closure detection | Max | 20.326717 | 16.018695 | 16.129760 | -- | -- |
| | Mean | 2.503285 | 5.470432 | 5.468591 | Number of loop closure edges | 10 |
| | Median | 1.456017 | 3.902593 | 3.899106 | Number of optimization edges | 1400 |
| | Min | 0.623540 | 0.000000 | 0.000001 | Number of vertices | 1391 |
| | RMSE | 4.455809 | 6.829047 | 6.830040 | Error before optimization | -- |
| | SSE | 536.064369 | 64870.5 | 64889.4 | Error after optimization | -- |
| | STD | 3.686164 | 4.087818 | 4.091939 | Optimization time | 0.921837 |
| Front-end odometry + GPS-based back-end optimization | Max | 2.274238 | 0.748267 | 0.800248 | -- | -- |
| | Mean | 1.077270 | 0.043310 | 0.055482 | Number of loop closure edges | 0 |
| | Median | 0.998823 | 0.026902 | 0.038200 | Number of optimization edges | 2781 |
| | Min | 0.287864 | 0.001636 | 0.007302 | Number of vertices | 1391 |
| | RMSE | 1.184077 | 0.076119 | 0.086910 | Error before optimization | 1103180 |
| | SSE | 37.855005 | 8.059595 | 10.506733 | Error after optimization | 23.5963 |
| | STD | 0.491454 | 0.062597 | 0.066896 | Optimization time | 0.110304 |
| Front-end odometry + back-end optimization with loop closure detection + GPS-based back-end optimization | Max | 2.274238 | 0.748267 | 0.800247 | -- | -- |
| | Mean | 1.056682 | 0.043851 | 0.055488 | Number of loop closure edges | 10 |
| | Median | 0.983374 | 0.027404 | 0.038182 | Number of optimization edges | 2791 |
| | Min | 0.245364 | 0.001638 | 0.007325 | Number of vertices | 1391 |
| | RMSE | 1.173358 | 0.076425 | 0.086931 | Error before optimization | 1157560 |
| | SSE | 37.172754 | 8.124548 | 10.511893 | Error after optimization | 23.7683 |
| | STD | 0.510090 | 0.062593 | 0.017162 | Optimization time | 0.17204 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Wang, A.; Yu, W.; Dong, X.; Yang, Y.; Liu, S.; Liu, J.; Mei, H. Multi-Sensor-Assisted Navigation for UAVs in Power Inspection: A Fusion Approach Using LiDAR, IMU and GPS. Appl. Sci. 2026, 16, 2632. https://doi.org/10.3390/app16062632

