Search Results (48)

Search Parameters:
Keywords = IMU preintegration

27 pages, 5112 KB  
Article
A Lightweight and Low-Cost Underwater Localization System Based on Visual–Inertial–Depth Fusion for Net-Cage Cleaning Robots
by Chuanyu Geng, Junhua Chen and Hao Li
J. Mar. Sci. Eng. 2026, 14(1), 48; https://doi.org/10.3390/jmse14010048 - 26 Dec 2025
Viewed by 595
Abstract
Net-cage aquaculture faces challenges from biofouling, which reduces water exchange and threatens structural integrity. Automated cleaning robots provide an alternative to human divers but require effective, low-cost localization. Conventional acoustic–inertial systems are expensive and complex, while vision-only or IMU-based methods suffer from drift in turbid, low-texture waters. This paper presents a lightweight Visual–Inertial–Depth (VID) fusion framework for underwater net-cage cleaning robots. Built on the VINS-Fusion system, the method estimates scene scale using optical flow and stereo matching, and incorporates IMU pre-integration for high-frequency motion prediction. A pressure-based depth factor constrains Z-axis drift, and reflective-anchor initialization ensures global alignment. The system runs in real time on a Jetson Orin NX with ROS. Experiments in air, tank, pool, and ocean settings demonstrate its robustness. In controlled environments, the mean anchor coordinate error (ACE) was 0.05–0.16 m, and loop-closure drift (LCD) was ≤0.5 m per 5 m. In ocean trials, turbulence and biofouling led to drift (LCD 1.32 m over 16.05 m, 8.3%), but IMU and depth cues helped maintain vertical stability. The system delivers real-time, cost-effective localization in structured underwater cages and offers insights for improvements in dynamic marine conditions. Full article
(This article belongs to the Section Ocean Engineering)
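The IMU pre-integration step that recurs throughout these results can be sketched as follows — a minimal, bias- and noise-free version (the function names and the simple Euler integration scheme are illustrative, not taken from the paper):

```python
import numpy as np

def so3_exp(phi):
    """Rodrigues' formula: map a rotation vector to a rotation matrix."""
    theta = np.linalg.norm(phi)
    if theta < 1e-12:
        return np.eye(3)
    k = phi / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def preintegrate(accels, gyros, dt):
    """Sum IMU samples between two keyframes into relative increments
    (dR, dv, dp) that do not depend on the absolute start state, so the
    resulting factor need not be re-integrated when the keyframe
    estimate changes. Simplified: bias and noise propagation omitted."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for a, w in zip(accels, gyros):
        dp = dp + dv * dt + 0.5 * (dR @ np.asarray(a)) * dt**2
        dv = dv + (dR @ np.asarray(a)) * dt
        dR = dR @ so3_exp(np.asarray(w) * dt)
    return dR, dv, dp
```

The state-independence of the increments is what makes pre-integration cheap inside an optimizer: each keyframe update reuses the same (dR, dv, dp) rather than re-running the integration.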

25 pages, 103370 KB  
Article
NeRF-Enhanced Visual–Inertial SLAM for Low-Light Underwater Sensing
by Zhe Wang, Qinyue Zhang, Yuqi Hu and Bing Zheng
J. Mar. Sci. Eng. 2026, 14(1), 46; https://doi.org/10.3390/jmse14010046 - 26 Dec 2025
Viewed by 785
Abstract
Marine robots operating in low illumination and turbid waters require reliable measurement and control for surveying, inspection, and monitoring. This paper presents a sensor-centric visual–inertial simultaneous localization and mapping (SLAM) pipeline that combines low-light enhancement, learned feature matching, and NeRF-based dense reconstruction to provide stable navigation states. A lightweight encoder–decoder with global attention improves signal-to-noise ratio and contrast while preserving feature geometry. SuperPoint and LightGlue deliver robust correspondences under severe visual degradation. Visual and inertial data are tightly fused through IMU pre-integration and nonlinear optimization, producing steady pose estimates that sustain downstream guidance and trajectory planning. An accelerated NeRF converts monocular sequences into dense, photorealistic reconstructions that complement sparse SLAM maps and support survey-grade measurement products. Experiments on AQUALOC sequences demonstrate improved localization stability and higher-fidelity reconstructions at competitive runtime, showing robustness to low illumination and turbidity. The results indicate an effective engineering pathway that integrates underwater image enhancement, multi-sensor fusion, and neural scene representations to improve navigation reliability and mission effectiveness in realistic marine environments. Full article
(This article belongs to the Special Issue Intelligent Measurement and Control System of Marine Robots)

26 pages, 4779 KB  
Article
MF-IEKF: A Multiplicative Federated Invariant Extended Kalman Filter for INS/GNSS
by Lebin Zhao, Tao Chen, Peipei Yuan, Xiaoyang Li and Yang Luo
Sensors 2026, 26(1), 127; https://doi.org/10.3390/s26010127 - 24 Dec 2025
Viewed by 689
Abstract
The integration of an inertial navigation system (INS) with the Global Navigation Satellite System (GNSS) is crucial for suppressing the error drift of the INS. However, traditional fusion methods based on the extended Kalman filter (EKF) suffer from geometric inconsistency, leading to biased estimates—a problem markedly exacerbated under large initial misalignment angles. The invariant extended Kalman filter (IEKF) embeds the state in the Lie group SE2(3) to establish a more consistent framework, yet two limitations remain. First, its standard update fails to synergize complementary error information within the left-invariant formulation, capping estimation accuracy. Second, velocity and position states converge slowly under extreme misalignment. To address these issues, a multiplicative federated IEKF (MF-IEKF) was proposed. A geometrically consistent state propagation model on SE2(3) is derived from multiplicative IMU pre-integration. Two parallel, mutually inverse left-invariant error sub-filters (ML1-IEKF and ML2-IEKF) cooperate to improve overall accuracy. For large-misalignment scenarios, a short-term multiplicative right-invariant sub-filter is introduced to suppress initial position and velocity errors. Extensive Monte Carlo simulations and KITTI dataset experiments show that MF-IEKF achieves higher navigation accuracy and robustness than ML1-IEKF. Full article
(This article belongs to the Section Intelligent Sensors)
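The SE2(3) embedding the abstract refers to can be made concrete: the INS state (attitude, velocity, position) is packed into a 5×5 matrix whose group operation is ordinary matrix multiplication. The helper names below are my own; only the matrix layout is standard:

```python
import numpy as np

def se23_embed(R, v, p):
    """Embed attitude R (3x3), velocity v, position p into the 5x5
    SE2(3) matrix used by invariant EKFs for INS states."""
    X = np.eye(5)
    X[:3, :3] = R
    X[:3, 3] = v
    X[:3, 4] = p
    return X

def se23_extract(X):
    """Recover (R, v, p) from an SE2(3) element."""
    return X[:3, :3], X[:3, 3], X[:3, 4]
```

Composing two elements by matrix product yields (R1R2, R1v2 + v1, R1p2 + p1), and the invariant error between an estimate and the truth is formed as a group product rather than a subtraction — that is what gives the IEKF its state-independent error dynamics.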

16 pages, 3174 KB  
Article
Online Mapping from Weight Matching Odometry and Highly Dynamic Point Cloud Filtering via Pseudo-Occupancy Grid
by Xin Zhao, Xingyu Cao, Meng Ding, Da Jiang and Chao Wei
Sensors 2025, 25(22), 6872; https://doi.org/10.3390/s25226872 - 10 Nov 2025
Viewed by 2534
Abstract
Efficient locomotion in autonomous driving and robotics requires clearer visualization and more precise maps. This paper presents a high-accuracy online mapping method comprising weight-matching LiDAR-IMU-GNSS odometry and an object-level highly dynamic point cloud filtering method based on a pseudo-occupancy grid. The odometry integrates IMU pre-integration, ground point segmentation through progressive morphological filtering (PMF), motion compensation, and weight feature point matching. Weight feature point matching enhances alignment accuracy by combining geometric and reflectance intensity similarities. By computing the pseudo-occupancy ratio between the current frame and prior local submaps, the grid probability values are updated to identify the distribution of dynamic grids. Object-level point cloud cluster segmentation is obtained using the curved voxel clustering method, eventually leading to filtering out the object-level highly dynamic point clouds during the online mapping process. Compared to the LIO-SAM and FAST-LIO2 frameworks, the proposed odometry demonstrates superior accuracy on the KITTI, UrbanLoco, and Newer College (NCD) datasets. Meanwhile, the proposed highly dynamic point cloud filtering algorithm exhibits better detection precision than Removert and ERASOR. Furthermore, a high-accuracy online map is built from a real-time dataset with comprehensive filtering of driving vehicles, cyclists, and pedestrians. This research contributes to the field of high-accuracy online mapping, especially in the filtering of highly dynamic objects. Full article
(This article belongs to the Special Issue Application of LiDAR Remote Sensing and Mapping)
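The pseudo-occupancy idea — comparing a cell's occupancy in the current frame against the prior local submap to spot dynamic objects — can be sketched as a simple per-cell recursion. The thresholds, gain, and update rule below are illustrative, not the paper's exact formulation:

```python
import numpy as np

def update_dynamic_prob(prob, occ_now, occ_map, tol=0.4, gain=0.3):
    """One pseudo-occupancy update over a vector of grid cells.
    Cells whose current-scan occupancy disagrees with the accumulated
    submap (ratio far from 1, in either direction) gain evidence of
    being dynamic; agreeing cells decay toward 0."""
    ratio = occ_now / np.maximum(occ_map, 1e-6)   # pseudo-occupancy ratio
    disagree = np.abs(ratio - 1.0) > tol          # disagreement mask
    # exponential-forgetting recursion: pull toward 1 on disagreement,
    # toward 0 on agreement
    return np.where(disagree, (1 - gain) * prob + gain, (1 - gain) * prob)
```

Cells whose dynamic probability stays high across frames would then be clustered (the paper uses curved voxel clustering) and their points removed from the map.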

24 pages, 4680 KB  
Article
Indoor Pedestrian Location via Factor Graph Optimization Based on Sliding Windows
by Yu Cheng, Haifeng Li, Xixiang Liu, Shuai Chen and Shouzheng Zhu
Sensors 2025, 25(17), 5545; https://doi.org/10.3390/s25175545 - 5 Sep 2025
Cited by 1 | Viewed by 4578
Abstract
Global navigation satellite systems (GNSS) can provide high-quality location information in outdoor environments. In indoor environments, GNSS cannot achieve accurate and stable location information due to the obstruction and attenuation of buildings together with the influence of multipath effects. Due to the rapid development of micro-electro-mechanical system (MEMS) sensors, today’s smartphones are equipped with various low-cost and small-volume MEMS sensors. Therefore, it is of great significance to study indoor pedestrian positioning technology based on smartphones. In order to provide pedestrians with high-precision and reliable location information in indoor environments, we propose a pedestrian dead reckoning (PDR) method based on Transformer+TCN (temporal convolutional network). Firstly, we use IMU (inertial measurement unit)/PDR pre-integration to suppress inertial navigation divergence. Secondly, we propose a step length estimation algorithm based on Transformer+TCN. The Transformer and TCN networks are superimposed to improve the ability to capture complex dependencies and to improve the generalization and reliability of step length estimation. Finally, we propose factor graph optimization (FGO) models based on sliding windows (SW-FGO) to provide accurate attitude, which use accelerometer (ACC)/gyroscope/magnetometer (MAG) data to establish factors. We designed a fusion positioning estimation test and a comparison test of the step length estimation algorithm. The results show that the proposed SW-FGO fusion method improves positioning accuracy by 29.68% compared with the traditional FGO algorithm, and the absolute position error of the Transformer+TCN step length estimation algorithm in pocket mode is mitigated by 42.15% compared with the LSTM algorithm. The step length estimation model error of Transformer+TCN is 1.61%, and the step length estimation accuracy is improved by 24.41%. Full article
(This article belongs to the Section Navigation and Positioning)
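For contrast with the learned Transformer+TCN regressor, the classical hand-tuned rule it replaces looks like the Weinberg-style estimate below — a textbook baseline, not part of the paper, with `k` as an arbitrary per-user placeholder gain:

```python
import numpy as np

def weinberg_step_length(acc_norm, k=0.5):
    """Empirical step-length rule: step length scales with the fourth
    root of the acceleration swing within one detected step. acc_norm
    is the accelerometer magnitude over that step window."""
    swing = float(np.max(acc_norm) - np.min(acc_norm))
    return k * swing ** 0.25
```

The fixed functional form and single gain `k` are exactly the generalization bottleneck a learned step-length model aims to remove.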

13 pages, 4728 KB  
Article
Stereo Direct Sparse Visual–Inertial Odometry with Efficient Second-Order Minimization
by Chenhui Fu and Jiangang Lu
Sensors 2025, 25(15), 4852; https://doi.org/10.3390/s25154852 - 7 Aug 2025
Cited by 1 | Viewed by 2487
Abstract
Visual–inertial odometry (VIO) is the primary supporting technology for autonomous systems, but it faces three major challenges: initialization sensitivity, dynamic illumination, and multi-sensor fusion. In order to overcome these challenges, this paper proposes stereo direct sparse visual–inertial odometry with efficient second-order minimization. It is entirely implemented using the direct method, which includes a depth initialization module based on visual–inertial alignment, a stereo image tracking module, and a marginalization module. Inertial measurement unit (IMU) data is first aligned with a stereo image to initialize the system effectively. Then, based on the efficient second-order minimization (ESM) algorithm, the photometric error and the inertial error are minimized to jointly optimize camera poses and sparse scene geometry. IMU information is accumulated between several frames using measurement preintegration and is inserted into the optimization as an additional constraint between keyframes. A marginalization module is added to reduce the computation complexity of the optimization and maintain the information about the previous states. The proposed system is evaluated on the KITTI visual odometry benchmark and the EuRoC dataset. The experimental results demonstrate that the proposed system achieves state-of-the-art performance in terms of accuracy and robustness. Full article
(This article belongs to the Section Vehicular Sensing)

20 pages, 5843 KB  
Article
Accurate and Robust Train Localization: Fusing Degeneracy-Aware LiDAR-Inertial Odometry and Visual Landmark Correction
by Lin Yue, Peng Wang, Jinchao Mu, Chen Cai, Dingyi Wang and Hao Ren
Sensors 2025, 25(15), 4637; https://doi.org/10.3390/s25154637 - 26 Jul 2025
Cited by 1 | Viewed by 2000
Abstract
To overcome the limitations of current train positioning systems, including low positioning accuracy and heavy reliance on track transponders or GNSS signals, this paper proposes a novel LiDAR-inertial and visual landmark fusion framework. Firstly, an IMU preintegration factor considering the Earth’s rotation and a LiDAR-inertial odometry factor accounting for degenerate states are constructed to adapt to railway train operating environments. Subsequently, a lightweight network based on YOLO improvement is used for recognizing reflective kilometer posts, while PaddleOCR extracts numerical codes. High-precision vertex coordinates of kilometer posts are obtained by jointly using LiDAR point cloud and an image detection box. Next, a kilometer post factor is constructed, and multi-source information is optimized within a factor graph framework. Finally, onboard experiments conducted on real railway vehicles demonstrate high-precision landmark detection at 35 FPS with 94.8% average precision. The proposed method delivers robust positioning within 5 m RMSE accuracy for high-speed, long-distance train travel, establishing a novel framework for intelligent railway development. Full article
(This article belongs to the Section Navigation and Positioning)

20 pages, 3710 KB  
Article
An Accurate LiDAR-Inertial SLAM Based on Multi-Category Feature Extraction and Matching
by Nuo Li, Yiqing Yao, Xiaosu Xu, Shuai Zhou and Taihong Yang
Remote Sens. 2025, 17(14), 2425; https://doi.org/10.3390/rs17142425 - 12 Jul 2025
Cited by 1 | Viewed by 2249
Abstract
Light Detection and Ranging (LiDAR)-inertial simultaneous localization and mapping (SLAM) is a critical component in multi-sensor autonomous navigation systems, providing both accurate pose estimation and detailed environmental understanding. Despite its importance, existing optimization-based LiDAR-inertial SLAM methods often face key limitations: unreliable feature extraction, sensitivity to noise and sparsity, and the inclusion of redundant or low-quality feature correspondences. These weaknesses hinder their performance in complex or dynamic environments and fail to meet the reliability requirements of autonomous systems. To overcome these challenges, we propose a novel and accurate LiDAR-inertial SLAM framework with three major contributions. First, we employ a robust multi-category feature extraction method based on principal component analysis (PCA), which effectively filters out noisy and weakly structured points, ensuring stable feature representation. Second, to suppress outlier correspondences and enhance pose estimation reliability, we introduce a coarse-to-fine two-stage feature correspondence selection strategy that evaluates geometric consistency and structural contribution. Third, we develop an adaptive weighted pose estimation scheme that considers both distance and directional consistency, improving the robustness of feature matching under varying scene conditions. These components are jointly optimized within a sliding-window-based factor graph, integrating LiDAR feature factors, IMU pre-integration, and loop closure constraints. Extensive experiments on public datasets (KITTI, M2DGR) and a custom-collected dataset validate the proposed method’s effectiveness. Results show that our system consistently outperforms state-of-the-art approaches in accuracy and robustness, particularly in scenes with sparse structure, motion distortion, and dynamic interference, demonstrating its suitability for reliable real-world deployment. Full article
(This article belongs to the Special Issue LiDAR Technology for Autonomous Navigation and Mapping)
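PCA-based multi-category feature extraction of this kind typically reduces to eigenvalue ratios of the local covariance: a dominant eigenvalue indicates a line (edge), two comparable ones a plane, and no dominant direction marks the weakly structured points to be discarded. A sketch with illustrative thresholds:

```python
import numpy as np

def classify_neighborhood(points, thresh=0.7):
    """Label a local LiDAR neighborhood 'line', 'plane', or 'scatter'
    from the sorted eigenvalues of its covariance matrix (PCA).
    'scatter' points are the noisy/weak-structure ones a robust
    extractor would filter out. Thresholds are illustrative."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    l3, l2, l1 = np.sort(np.linalg.eigvalsh(cov))   # l1 >= l2 >= l3
    l1 = max(l1, 1e-12)                             # guard degenerate case
    if (l1 - l2) / l1 > thresh:    # linearity: one dominant direction
        return "line"
    if (l2 - l3) / l1 > thresh:    # planarity: two dominant directions
        return "plane"
    return "scatter"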

16 pages, 3055 KB  
Article
LET-SE2-VINS: A Hybrid Optical Flow Framework for Robust Visual–Inertial SLAM
by Wei Zhao, Hongyang Sun, Songsong Ma and Haitao Wang
Sensors 2025, 25(13), 3837; https://doi.org/10.3390/s25133837 - 20 Jun 2025
Cited by 2 | Viewed by 1680
Abstract
This paper presents SE2-LET-VINS, an enhanced Visual–Inertial Simultaneous Localization and Mapping (VI-SLAM) system built upon the classic Visual–Inertial Navigation System for Monocular Cameras (VINS-Mono) framework, designed to improve localization accuracy and robustness in complex environments. By integrating Lightweight Neural Network (LET-NET) for high-quality feature extraction and Special Euclidean Group in 2D (SE2) optical flow tracking, the system achieves superior performance in challenging scenarios such as low lighting and rapid motion. The proposed method processes Inertial Measurement Unit (IMU) data and camera data, utilizing pre-integration and RANdom SAmple Consensus (RANSAC) for precise feature matching. Experimental results on the European Robotics Challenges (EuRoc) dataset demonstrate that the proposed hybrid method improves localization accuracy by up to 43.89% compared to the classic VINS-Mono model in sequences with loop closure detection. In no-loop scenarios, the method also achieves error reductions of 29.7%, 21.8%, and 24.1% on the MH_04, MH_05, and V2_03 sequences, respectively. Trajectory visualization and Gaussian fitting analysis further confirm the system’s good robustness and accuracy. SE2-LET-VINS offers a robust solution for visual–inertial navigation, particularly in demanding environments, and paves the way for future real-time applications and extended capabilities. Full article
(This article belongs to the Section Navigation and Positioning)

22 pages, 22067 KB  
Article
Robust GNSS/INS Tightly Coupled Positioning Using Factor Graph Optimization with P-Spline and Dynamic Prediction
by Bokun Ning, Fang Zhao, Haiyong Luo, Dan Luo and Wenhua Shao
Remote Sens. 2025, 17(10), 1792; https://doi.org/10.3390/rs17101792 - 21 May 2025
Cited by 5 | Viewed by 5149
Abstract
The combination of GNSS RTK and INS offers complementary advantages but faces significant challenges in urban canyons. Frequent cycle slips in carrier phase measurements and ambiguity resolution algorithms increase computational burden without improving positioning accuracy. Additionally, environmental interference introduces noise into observations, potentially leading to complete signal loss. To address these issues, this paper proposes a factor graph optimization (FGO) positioning algorithm incorporating predictive observation factors. First, a penalized spline (P-spline) is constructed to predict and smooth Doppler measurements. The predicted Doppler is then fused with the dynamics model predictions to enhance robustness. Using predictive Doppler, carrier phase and pseudorange observations are reconstructed, generating predictive constraint factors to improve positioning accuracy. Real-world tests conducted in urban canyons, including Shanghai, demonstrate that the proposed method maintains stable positioning performance even under short-term signal outages, effectively mitigating cumulative positioning errors caused by data loss. Compared to traditional methods that rely solely on available observations, the proposed algorithm improves northward and dynamic positioning accuracy by 35% and 29%, respectively, providing a highly robust navigation solution for complex urban environments. Full article
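The penalty idea behind the P-spline can be shown in its discrete (Whittaker) form — the same smoothness/fidelity trade-off, applied here directly to the samples rather than to B-spline coefficients for brevity; `lam` and `order` values are illustrative:

```python
import numpy as np

def penalized_smooth(y, lam=10.0, order=2):
    """Whittaker-style penalized smoother, the discrete core of the
    P-spline idea: minimize ||z - y||^2 + lam * ||D z||^2, where D is
    the order-th difference operator, by solving the normal equations
    (I + lam * D'D) z = y. Larger lam gives a smoother baseline for
    predicting the next observation."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)   # order-th difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, np.asarray(y, float))
```

Applied to a Doppler series, the smoothed curve can be extrapolated to supply the predictive observations the factor graph falls back on during outages.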

18 pages, 7697 KB  
Article
GNSS/IMU/ODO Integrated Navigation Method Based on Adaptive Sliding Window Factor Graph
by Xinchun Ji, Chenjun Long, Liuyin Ju, Hang Zhao and Dongyan Wei
Electronics 2025, 14(1), 124; https://doi.org/10.3390/electronics14010124 - 31 Dec 2024
Cited by 1 | Viewed by 2954
Abstract
One of the predominant technologies for multi-source navigation in vehicles involves the fusion of GNSS/IMU/ODO through a factor graph. To address issues such as the asynchronous sampling frequencies between the IMU and ODO, as well as diminished accuracy during GNSS signal loss, we propose a GNSS/IMU/ODO integrated navigation method based on an adaptive sliding window factor graph. The measurements from the ODO are utilized as observation factors to mitigate prediction interpolation errors associated with traditional ODO pre-integration methods. Additionally, online estimation and compensation for both installation angle deviations and scale factors of the ODO further enhance its ability to constrain pose errors during GNSS signal loss. A multi-state marginalization algorithm is proposed and then utilized to adaptively adjust the sliding window size based on the quality of GNSS observations, enhancing pose optimization accuracy in multi-source fusion while prioritizing computational efficiency. Tests conducted in typical urban environments and mountainous regions demonstrate that our proposed method significantly enhances fusion navigation accuracy under complex GNSS conditions. In a complex city environment, our method achieves a 55.3% and 29.8% improvement in position and velocity accuracy and enhancements of 32.0% and 61.6% in pitch and heading angle accuracy, respectively. These results match the precision of long sliding windows, with a 75.8% gain in computational efficiency. In mountainous regions, our method enhances the position accuracy in the three dimensions by factors of 89.5%, 83.7%, and 43.4%, the velocity accuracy in the three dimensions by factors of 65.4%, 32.6%, and 53.1%, and reduces the attitude errors in roll, pitch, and yaw by 70.5%, 60.8%, and 26.0%, respectively, demonstrating strong engineering applicability through an optimal balance of precision and efficiency. Full article

23 pages, 10712 KB  
Article
A Multi-Sensor Fusion Autonomous Driving Localization System for Mining Environments
by Yi Wang, Chungming Own, Haitao Zhang and Minzhou Luo
Electronics 2024, 13(23), 4717; https://doi.org/10.3390/electronics13234717 - 28 Nov 2024
Cited by 3 | Viewed by 4498
Abstract
We propose a multi-sensor fusion localization framework for autonomous heavy-duty trucks suitable for mining scenarios, which enables high-precision, real-time trajectory generation and map construction. The motion estimated through pre-integration of the inertial measurement unit (IMU) can eliminate distortions in the point cloud and provide an initial guess for LiDAR odometry optimization. The point cloud information obtained from LiDAR can assist in recovering depth information from image features extracted by the monocular camera. To ensure real-time performance, we introduce an iKD-tree to organize the point cloud data. To address issues arising from bumpy road segments and long-distance driving in practical mining scenarios, we incorporate a large number of relative and absolute measurements from different sources, such as GPS information and AprilTag-assisted localization data, including loop closure, as factors in the system. The proposed method has been extensively evaluated on public datasets and self-collected datasets from mining sites. Full article
(This article belongs to the Special Issue Unmanned Vehicles Systems Application)

22 pages, 12893 KB  
Article
Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines
by Yuanbin Xiao, Wubin Xu, Bing Li, Hanwen Zhang, Bo Xu and Weixin Zhou
Sensors 2024, 24(22), 7360; https://doi.org/10.3390/s24227360 - 18 Nov 2024
Cited by 1 | Viewed by 1996
Abstract
As mining technology advances, intelligent robots in open-pit mining require precise localization and digital maps. Nonetheless, significant pitch variations, uneven highways, and rocky surfaces with minimal texture present substantial challenges to the precision of feature extraction and positioning in traditional visual SLAM systems, owing to the intricate terrain features of open-pit mines. This study proposes an improved SLAM technique that integrates visual and Inertial Measurement Unit (IMU) data to address these challenges. The method incorporates a point–line feature fusion matching strategy to enhance the quality and stability of line feature extraction. It integrates an enhanced Line Segment Detection (LSD) algorithm with short segment culling and approximate line merging techniques. The combination of IMU pre-integration and visual feature restrictions is executed inside a tightly coupled visual–inertial framework utilizing a sliding window approach for back-end optimization, enhancing system robustness and precision. Experimental results demonstrate that the suggested method improves RMSE accuracy by 36.62% and 26.88% on the MH and VR sequences of the EuRoC dataset, respectively, compared to ORB-SLAM3. The improved SLAM system significantly reduces trajectory drift in the simulated open-pit mining tests, improving localization accuracy by 40.62% and 61.32%. These results demonstrate the effectiveness of the proposed method. Full article
(This article belongs to the Section Sensors and Robotics)

29 pages, 6572 KB  
Article
Robust Parking Space Recognition Approach Based on Tightly Coupled Polarized Lidar and Pre-Integration IMU
by Jialiang Chen, Fei Li, Xiaohui Liu and Yuelin Yuan
Appl. Sci. 2024, 14(20), 9181; https://doi.org/10.3390/app14209181 - 10 Oct 2024
Cited by 1 | Viewed by 2665
Abstract
Improving the accuracy of parking space recognition is crucial for Automated Valet Parking (AVP) in autonomous driving. In AVP, accurate free space recognition significantly impacts the safety and comfort of both the vehicles and drivers. To enhance parking space recognition and annotation in unknown environments, this paper proposes an automatic parking space annotation approach with tight coupling of Lidar and Inertial Measurement Unit (IMU). First, the pose of the Lidar frame was tightly coupled with high-frequency IMU data to compensate for vehicle motion, reducing its impact on the pose transformation of the Lidar point cloud. Next, simultaneous localization and mapping (SLAM) were performed using the compensated Lidar frame. By extracting two-dimensional polarized edge features and planar features from the three-dimensional Lidar point cloud, a polarized Lidar odometry was constructed. The polarized Lidar odometry factor and loop closure factor were jointly optimized in iSAM2. Finally, the pitch angle of the constructed local map was evaluated to filter out ground points, and the regions of interest (ROI) were projected onto a grid map. The free space between adjacent vehicle point clouds was assessed on the grid map using convex hull detection and straight-line fitting. The experiments were conducted on both local and open datasets. The proposed method achieved an average precision and recall of 98.89% and 98.79% on the local dataset, respectively; it also achieved 97.08% and 99.40% on the nuScenes dataset. And it reduced storage usage by 48.38% while maintaining running time. Comparative experiments on open datasets show that the proposed method can adapt to various scenarios and exhibits strong robustness. Full article
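The final free-space check between adjacent vehicle clusters can be reduced to a one-dimensional sketch. The paper's actual test runs on a grid map with convex-hull detection and line fitting; the pre-projected clusters and the `min_width` value below are assumptions for illustration:

```python
import numpy as np

def gap_is_parkable(cluster_a, cluster_b, min_width=2.5):
    """1-D version of the gap test: with both vehicle clusters already
    projected onto the curb direction (x-axis, cluster_a to the left),
    the free interval between them must exceed the required parking
    width (metres). Returns (decision, gap)."""
    gap = cluster_b[:, 0].min() - cluster_a[:, 0].max()
    return gap >= min_width, gap
```

In the full pipeline the hulls bound each cluster tightly before projection, so the measured gap is not inflated by sparse outer points.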

21 pages, 8934 KB  
Article
Fault Detection and Interactive Multiple Models Optimization Algorithm Based on Factor Graph Navigation System
by Shouyi Wang, Qinghua Zeng, Chen Shao, Fangdong Li and Jianye Liu
Remote Sens. 2024, 16(10), 1651; https://doi.org/10.3390/rs16101651 - 7 May 2024
Cited by 9 | Viewed by 3247
Abstract
Accurate and stable positioning is significant for vehicle navigation systems, especially in complex urban environments. However, urban canyons and dynamic interference make vehicle sensors prone to disturbance, leading to vehicle positioning errors and even failures. To address these issues, an adaptive loosely coupled IMU/GNSS/LiDAR integrated navigation system based on factor graph optimization with sensor weight optimization and fault detection is proposed. First, the factor nodes and system framework are constructed based on error models of sensors, and the optimization method principle is derived. Second, the interactive multiple-model algorithm based on factor graph optimization (IMMFGO) is utilized to calculate and adjust sensor weights for global optimization, which will reduce the impact of disturbed sensors. Finally, a multi-stage fault detection, isolation, and recovery (MSFDIR) strategy is implemented based on the IMMFGO results and IMU pre-integration measurements, which can detect significant sensor faults and optimize the system structure. Vehicle experiments show that our IMMFGO method generally obtains better performance in positioning accuracy by 23.7% compared to adaptive factor graph optimization (AFGO) methods, and the MSFDIR strategy possesses the capability of fault sensor detection, which provides an essential reference for multi-source vehicle navigation systems in urban canyons. Full article
(This article belongs to the Section Engineering Remote Sensing)
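Fault-detection schemes of this kind typically rest on a chi-square gate over the normalized innovation; the generic test is sketched below (the threshold is the standard 95% quantile for 3 degrees of freedom, not a parameter from the paper):

```python
import numpy as np

def innovation_fault_test(residual, S, threshold=7.81):
    """Chi-square consistency check: under the no-fault hypothesis the
    normalized innovation r' S^-1 r (S = innovation covariance) follows
    a chi-square distribution, so exceeding the quantile threshold
    flags the measurement as faulty. Returns (is_fault, statistic)."""
    d2 = float(residual @ np.linalg.solve(S, residual))
    return d2 > threshold, d2
```

A multi-stage scheme like MSFDIR would run such gates per sensor (e.g. against IMU pre-integration predictions), then isolate the flagged source and re-optimize without it.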
