Search Results (41)

Search Parameters:
Keywords = visual odometer

26 pages, 10564 KB  
Article
DynaFusion-SLAM: Multi-Sensor Fusion and Dynamic Optimization of Autonomous Navigation Algorithms for Pasture-Pushing Robot
by Zhiwei Liu, Jiandong Fang and Yudong Zhao
Sensors 2025, 25(11), 3395; https://doi.org/10.3390/s25113395 - 28 May 2025
Cited by 1 | Viewed by 1822
Abstract
To address the scarcity of studies on multi-sensor-fusion autonomous navigation in complex pasture scenarios, the low degree of sensor fusion, and the insufficient path-tracking accuracy achieved in complex outdoor environments, a multimodal autonomous navigation system is proposed based on a loosely coupled Cartographer–RTAB-Map (real-time appearance-based mapping) architecture. Through laser–visual–inertial multi-sensor data fusion, the system achieves high-precision mapping and robust path planning in complex scenes. First, mainstream laser SLAM algorithms (Hector, Gmapping, Cartographer) are compared in simulation; Cartographer shows a clear memory-efficiency advantage in large-scale scenarios and is therefore chosen as the front-end odometer. Second, a two-way pose optimization mechanism is designed: (1) During mapping, Cartographer processes the laser scans together with IMU and odometer data to generate odometry estimates, which provide positioning compensation for RTAB-Map. (2) RTAB-Map fuses the depth-camera point cloud with the laser data, corrects the global pose through visual loop-closure detection, and then uses 2D localization to build a bimodal environment representation consisting of a 2D grid map and a 3D point cloud, giving a complete description of the simulated pasture environment and material morphology and providing the framework for the pushing robot's navigation algorithm based on the two types of fused data. During navigation, RTAB-Map's global localization is combined with AMCL's local localization, and IMU and odometer data are fused through the EKF algorithm to produce a smoother, more robust pose. Global path planning is performed with Dijkstra's algorithm, combined with the TEB (Timed Elastic Band) algorithm for local path planning. Finally, experimental validation is performed in a laboratory-simulated pasture environment. The results show that fusing RTAB-Map with the multi-source odometry significantly improves its performance in this scenario: the maximum absolute map-measurement error shrinks from 24.908 cm to 4.456 cm, the maximum absolute relative error falls from 6.227% to 2.025%, and the absolute error at each location is significantly reduced. Multi-source odometry fusion also effectively prevents large-scale offset or drift during map construction. On this basis, the robot constructs a fused map containing the simulated pasture environment and material patterns. In the navigation-accuracy tests, the proposed method reduces the root mean square error (RMSE) by 1.7% and the standard deviation by 2.7% compared with RTAB-Map, and reduces the RMSE by 26.7% and the standard deviation by 22.8% compared with the AMCL algorithm. The robot successfully traverses the six preset points, the measured X, Y, and overall position errors at all six points meet the requirements of the pasture-pushing task, and the robot returns to the starting point after completing the multi-point navigation task, achieving autonomous navigation.
(This article belongs to the Section Navigation and Positioning)
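As an illustration of the navigation-stage fusion described in this abstract, the sketch below shows a minimal planar EKF that propagates a pose with wheel-odometry velocities and corrects the heading with an IMU yaw observation. The state layout, noise values, and measurement model are assumptions for the example, not the authors' implementation.

```python
# Minimal sketch of EKF-based IMU/odometer fusion for a planar pose [x, y, yaw].
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate the pose with wheel-odometry velocities v (m/s) and w (rad/s)."""
    px, py, yaw = x
    x_pred = np.array([px + v * dt * np.cos(yaw),
                       py + v * dt * np.sin(yaw),
                       yaw + w * dt])
    # Jacobian of the motion model with respect to the state
    F = np.array([[1.0, 0.0, -v * dt * np.sin(yaw)],
                  [0.0, 1.0,  v * dt * np.cos(yaw)],
                  [0.0, 0.0,  1.0]])
    return x_pred, F @ P @ F.T + Q

def ekf_update_yaw(x, P, yaw_imu, R):
    """Correct the heading with an IMU yaw observation."""
    H = np.array([[0.0, 0.0, 1.0]])
    innov = np.array([yaw_imu - x[2]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + (K @ innov).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

x, P = np.zeros(3), np.eye(3) * 0.01
Q, R = np.diag([0.02, 0.02, 0.01]), np.array([[0.005]])
x, P = ekf_predict(x, P, v=0.3, w=0.05, dt=0.1, Q=Q)   # wheel-odometry step
x, P = ekf_update_yaw(x, P, yaw_imu=0.006, R=R)        # IMU heading correction
print(x)
```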

21 pages, 4833 KB  
Article
An Effective 3D Instance Map Reconstruction Method Based on RGBD Images for Indoor Scene
by Heng Wu, Yanjie Liu, Chao Wang and Yanlong Wei
Remote Sens. 2025, 17(1), 139; https://doi.org/10.3390/rs17010139 - 3 Jan 2025
Cited by 2 | Viewed by 2653
Abstract
To enhance the intelligence of robots, constructing accurate object-level instance maps is essential. However, the diversity and clutter of objects in indoor scenes present significant challenges for instance map construction. To tackle this issue, we propose a method for constructing object-level instance maps based on RGBD images. First, we utilize the advanced visual odometer ORB-SLAM3 to estimate the poses of image frames and extract keyframes. Next, we perform semantic and geometric segmentation on the color and depth images of these keyframes, respectively, using semantic segmentation to optimize the geometric segmentation results and address inaccuracies in the target segmentation caused by small depth variations. The segmented depth images are then projected into point cloud segments, which are assigned corresponding semantic information. We integrate these point cloud segments into a global voxel map, updating each voxel's class using color, distance constraints, and Bayesian methods to create an object-level instance map. Finally, we construct an ellipsoid scene from this map to test the robot's localization capabilities in indoor environments using semantic information. Our experiments demonstrate that this method accurately and robustly constructs the environment, facilitating precise object-level scene segmentation. Furthermore, compared to manually labeled ellipsoidal maps, generating ellipsoidal maps from extracted objects enables accurate global localization.
(This article belongs to the Special Issue 3D Scene Reconstruction, Modeling and Analysis Using Remote Sensing)
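Two steps of the pipeline above, back-projecting a segmented depth image into a point-cloud segment and updating a voxel's class probabilities, can be sketched as follows. The pinhole intrinsics, label set, and multiplicative Bayesian rule are illustrative assumptions, not the paper's exact formulation.

```python
# Back-project masked depth pixels and fuse semantic evidence into a voxel.
import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed pinhole intrinsics

def depth_segment_to_points(depth_m, mask):
    """Project the masked depth pixels (metres) into camera-frame 3D points."""
    v, u = np.nonzero(mask)
    z = depth_m[v, u]
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.column_stack([x, y, z])

def bayes_update(prior, likelihood):
    """Fuse the per-class likelihood of a new observation into the voxel's prior."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

depth = np.full((480, 640), 1.2)              # toy depth image, 1.2 m everywhere
mask = np.zeros((480, 640), dtype=bool)
mask[200:220, 300:320] = True                 # pixels labelled as one instance
points = depth_segment_to_points(depth, mask)

voxel_prob = np.array([0.5, 0.3, 0.2])        # P(chair, table, background), assumed labels
obs = np.array([0.7, 0.2, 0.1])               # semantic evidence from this frame
voxel_prob = bayes_update(voxel_prob, obs)
print(points.shape, voxel_prob)
```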

12 pages, 1136 KB  
Article
Research on GNSS/IMU/Visual Fusion Positioning Based on Adaptive Filtering
by Ao Liu, Hang Guo, Min Yu, Jian Xiong, Huiyang Liu and Pengfei Xie
Appl. Sci. 2024, 14(24), 11507; https://doi.org/10.3390/app142411507 - 10 Dec 2024
Viewed by 2950
Abstract
The accuracy of satellite positioning results depends on the number of available satellites in the sky. In complex environments such as urban canyons, the effectiveness of satellite positioning is often compromised. To enhance the positioning accuracy of low-cost sensors, this paper combines the visual odometer data output by an Xtion sensor with the GNSS/IMU integrated positioning data output by the satellite receiver and MEMS IMU built into a mobile phone, using adaptive Kalman filtering to improve positioning accuracy. Studies conducted in different experimental scenarios have found that in unobstructed environments, the RMSE of GNSS/IMU/visual fusion positioning accuracy improves by 50.4% compared to satellite positioning and by 24.4% compared to GNSS/IMU integrated positioning. In obstructed environments, the RMSE of GNSS/IMU/visual fusion positioning accuracy improves by 57.8% compared to satellite positioning and by 36.8% compared to GNSS/IMU integrated positioning.
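The idea behind innovation-based adaptive Kalman filtering, re-estimating the measurement-noise covariance from recent innovations so that poor fixes are down-weighted, can be sketched in one dimension as below. The state model, window size, and noise values are assumptions for the example, not the paper's configuration.

```python
# 1D adaptive Kalman filter: R is re-estimated from a window of innovations.
import numpy as np
from collections import deque

class AdaptiveKF1D:
    def __init__(self, q=0.01, r=1.0, window=20):
        self.x, self.p, self.q, self.r = 0.0, 1.0, q, r
        self.innovations = deque(maxlen=window)

    def step(self, z):
        # Predict with a random-walk model, then compute the innovation.
        p_pred = self.p + self.q
        innov = z - self.x
        self.innovations.append(innov)
        # Adapt R from the sample innovation variance: C ≈ H P H' + R.
        if len(self.innovations) == self.innovations.maxlen:
            c_hat = float(np.mean(np.square(self.innovations)))
            self.r = max(c_hat - p_pred, 1e-6)
        k = p_pred / (p_pred + self.r)
        self.x = self.x + k * innov
        self.p = (1.0 - k) * p_pred
        return self.x

kf = AdaptiveKF1D()
for z in np.random.normal(5.0, 2.0, size=100):   # noisy position fixes
    est = kf.step(z)
print(round(est, 2), round(kf.r, 2))
```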

17 pages, 13227 KB  
Article
Robot Localization Method Based on Multi-Sensor Fusion in Low-Light Environment
by Mengqi Wang, Zengzeng Lian, María Amparo Núñez-Andrés, Penghui Wang, Yalin Tian, Zhe Yue and Lingxiao Gu
Electronics 2024, 13(22), 4346; https://doi.org/10.3390/electronics13224346 - 6 Nov 2024
Cited by 2 | Viewed by 2575
Abstract
When robots perform localization in indoor low-light environments, factors such as weak and uneven lighting can degrade image quality. This degradation results in a reduced number of feature extractions by the visual odometry front end and may even cause tracking loss, thereby impacting the algorithm's positioning accuracy. To enhance the localization accuracy of mobile robots in indoor low-light environments, this paper proposes a visual-inertial odometry method (L-MSCKF) based on the multi-state constraint Kalman filter. Addressing the challenges of low-light conditions, we integrated Inertial Measurement Unit (IMU) data with stereo vision odometry. The algorithm includes an image enhancement module and a gyroscope zero-bias correction mechanism to facilitate feature matching in stereo vision odometry. We conducted tests on the EuRoC dataset and compared our method with other similar algorithms, thereby validating the effectiveness and accuracy of L-MSCKF.
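The image-enhancement module is described only at a high level above; as an illustration (an assumed stand-in, not the authors' specific enhancement), contrast-limited adaptive histogram equalization (CLAHE) on the luminance channel is a common way to recover features in low light before the stereo front end detects keypoints.

```python
# CLAHE-based low-light enhancement followed by ORB keypoint detection.
import cv2
import numpy as np

def enhance_low_light(bgr):
    """Apply CLAHE to the L channel of a BGR image and return the enhanced image."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)

dark = (np.random.rand(480, 640, 3) * 40).astype(np.uint8)   # synthetic dim frame
bright = enhance_low_light(dark)
orb = cv2.ORB_create(nfeatures=1000)
keypoints = orb.detect(cv2.cvtColor(bright, cv2.COLOR_BGR2GRAY), None)
print(len(keypoints), "keypoints after enhancement")
```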

19 pages, 3876 KB  
Article
An Adaptive Fast Incremental Smoothing Approach to INS/GPS/VO Factor Graph Inference
by Zhaoxu Tian, Yongmei Cheng and Shun Yao
Appl. Sci. 2024, 14(13), 5691; https://doi.org/10.3390/app14135691 - 29 Jun 2024
Cited by 2 | Viewed by 1996
Abstract
As multi-sensor integrated navigation systems incorporate asynchronous and delayed sensors, the computational complexity of joint-optimization navigation solutions keeps rising. This paper introduces an adaptive fast integrated navigation algorithm for INS/GPS/VO based on factor graphs. The Inertial Navigation System (INS), Global Positioning System (GPS), and Visual Odometer (VO) are first modeled individually with the factor graph approach, and the combined INS/GPS/VO factor graph model is then developed. Additionally, an Adaptive Fast Incremental Smoothing (AFIS) factor graph optimization algorithm is proposed. The simulation results demonstrate that the factor-graph-based integrated navigation algorithm consistently yields high-precision navigation outcomes even amidst dynamic changes in sensor validity and the presence of asynchronous and delayed sensor measurements. Notably, the AFIS factor graph optimization algorithm significantly enhances real-time performance compared with traditional incremental smoothing algorithms while maintaining comparable accuracy.
(This article belongs to the Collection Advances in Automation and Robotics)
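A toy illustration (not the AFIS algorithm itself) of how factor-graph inference poses navigation as nonlinear least squares: each sensor contributes residual terms (factors), and asynchronous or delayed measurements simply attach factors to whichever states they touch. The 1D model and values below are assumptions for the sketch.

```python
# Tiny 1D "factor graph" solved as nonlinear least squares with SciPy.
import numpy as np
from scipy.optimize import least_squares

odom = [1.0, 1.0, 1.0, 1.0]            # VO/INS relative displacements between x_i and x_{i+1}
gps = {0: 0.1, 2: 2.2, 4: 3.9}         # sparse (possibly delayed) GPS fixes as unary factors

def residuals(x):
    r = [x[0] - 0.0]                                          # prior factor on the first state
    r += [(x[i + 1] - x[i]) - d for i, d in enumerate(odom)]  # between factors
    r += [(x[k] - z) / 0.5 for k, z in gps.items()]           # GPS factors, sigma = 0.5
    return np.array(r)

sol = least_squares(residuals, x0=np.zeros(5))
print(np.round(sol.x, 2))   # smoothed 1D trajectory over five states
```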

24 pages, 4766 KB  
Article
A Multi-Information Fusion Method for Repetitive Tunnel Disease Detection
by Zhiyuan Gan, Li Teng, Ying Chang, Xinyang Feng, Mengnan Gao and Xinwen Gao
Sustainability 2024, 16(10), 4285; https://doi.org/10.3390/su16104285 - 19 May 2024
Cited by 6 | Viewed by 1900
Abstract
Existing tunnel defect detection methods often lack repeated inspections, limiting longitudinal analysis of defects. To address this, we propose a multi-information fusion approach for continuous defect monitoring. Initially, we utilized the You Only Look Once version 7 (Yolov7) network to identify defects in tunnel lining videos. Subsequently, defect localization is achieved with the Super Visual Odometer (SuperVO) algorithm. Lastly, the SuperPoint–SuperGlue Matching Network (SpSg Network) is employed to analyze similarities among defect images. Combining this information enables repeated detection of the same defects across inspections. SuperVO was tested in tunnels of 159 m and 260 m, showcasing enhanced localization accuracy compared to traditional visual odometry methods, with errors below 0.3 m on average and 0.8 m at maximum. The SpSg Network surpassed the deep-feature-based Siamese Network in image matching, achieving a precision of 96.61%, recall of 93.44%, and F1 score of 95%. These findings validate the effectiveness of this approach in the repetitive detection and monitoring of tunnel defects.
(This article belongs to the Special Issue Emergency Plans and Disaster Management in the Era of Smart Cities)

13 pages, 3888 KB  
Article
Visual Odometry Based on Improved Oriented Features from Accelerated Segment Test and Rotated Binary Robust Independent Elementary Features
by Di Wu, Zhihao Ma, Weiping Xu, Haifeng He and Zhenlin Li
World Electr. Veh. J. 2024, 15(3), 123; https://doi.org/10.3390/wevj15030123 - 21 Mar 2024
Cited by 2 | Viewed by 2679
Abstract
To address the problem of system instability during vehicle low-speed driving, we propose improving the visual odometer using ORB (Oriented FAST and Rotated BRIEF) features. The homogenization of ORB features gives some feature points poor corner properties, and when the environmental texture is not rich this leads to poor matching performance and low matching accuracy. We address the weak corner properties of feature points by computing weights for regions with different textures. When the vehicle speed is too low, consecutive camera frames overlap significantly, causing large fluctuations in the system error; we use motion-model estimation to solve this problem. Experimental validation on the KITTI dataset achieves good results.
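For context, the baseline the abstract builds on, ORB extraction and brute-force Hamming matching with a ratio test, can be sketched as below; the texture-region weighting and motion-model refinements are the paper's contributions and are not reproduced here. The synthetic images and parameter values are assumptions for the example.

```python
# Baseline ORB feature extraction and matching with OpenCV.
import cv2
import numpy as np

def match_orb(img1, img2, nfeatures=1000, ratio=0.75):
    """Return ratio-test-filtered ORB matches between two grayscale images."""
    orb = cv2.ORB_create(nfeatures=nfeatures)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    if d1 is None or d2 is None:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = []
    for pair in matcher.knnMatch(d1, d2, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good

a = np.random.randint(0, 255, (480, 640), dtype=np.uint8)
b = np.roll(a, 5, axis=1)             # simulated small camera motion
print(len(match_orb(a, b)), "good matches")
```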

18 pages, 6117 KB  
Article
Research on a Visual/Ultra-Wideband Tightly Coupled Fusion Localization Algorithm
by Pin Jiang, Chen Hu, Tingting Wang, Ke Lv, Tingfeng Guo, Jinxuan Jiang and Wenwu Hu
Sensors 2024, 24(5), 1710; https://doi.org/10.3390/s24051710 - 6 Mar 2024
Cited by 6 | Viewed by 3361
Abstract
In the autonomous navigation of mobile robots, precise positioning is crucial. In forest environments with weak satellite signals, or at sites disturbed by complex surroundings, satellite positioning often cannot meet the positioning-accuracy requirements of autonomous robot navigation. This article proposes a vision SLAM/UWB tightly coupled localization method and designs a UWB non-line-of-sight error identification method using the displacement increment of the visual odometer. It uses the visual displacement increments and UWB ranging information as measurements and applies an extended Kalman filter for data fusion. Images and ultra-wideband ranging data were collected outdoors on the constructed experimental platform to validate the combined positioning method. The experimental results show that the algorithm outperforms individual UWB or loosely coupled combination positioning methods in terms of positioning accuracy. It effectively eliminates non-line-of-sight errors in UWB, improving the accuracy and stability of the combined positioning system.
(This article belongs to the Section Navigation and Positioning)
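A conceptual sketch (assumed logic, not the paper's exact test) of flagging non-line-of-sight UWB ranges with the visual-odometry displacement increment: between two epochs, the range to a fixed anchor cannot change by more than the platform actually moved, so larger jumps are treated as NLOS outliers.

```python
# Flag UWB range jumps that exceed the motion allowed by the VO increment.
import numpy as np

def nlos_flags(ranges, vo_increments, margin=0.15):
    """ranges: UWB ranges to one anchor per epoch; vo_increments: VO displacement per epoch (m)."""
    flags = [False]
    for k in range(1, len(ranges)):
        max_change = vo_increments[k] + margin     # small margin for ranging noise
        flags.append(abs(ranges[k] - ranges[k - 1]) > max_change)
    return flags

uwb = np.array([10.00, 9.82, 11.40, 9.51])    # the 1.6 m jump at epoch 2 is suspicious
vo = np.array([0.00, 0.20, 0.21, 0.19])       # the robot only moved ~0.2 m per epoch
# Both the spike and the return to normal exceed the bound: [False, False, True, True]
print(nlos_flags(uwb, vo))
```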

21 pages, 4008 KB  
Article
Cognitive Enhancement of Robot Path Planning and Environmental Perception Based on Gmapping Algorithm Optimization
by Xintong Liu, Gu Gong, Xiaoting Hu, Gongyu Shang and Hua Zhu
Electronics 2024, 13(5), 818; https://doi.org/10.3390/electronics13050818 - 20 Feb 2024
Cited by 5 | Viewed by 2688
Abstract
In the logistics warehouse environment, the autonomous navigation and environment perception of the logistics sorting robot are two key challenges. To deal with the complex obstacles and cargo layout in a warehouse, this study focuses on improving the robot perception and navigation system to achieve efficient path planning and safe motion control. For this purpose, a scheme based on an improved Gmapping algorithm is proposed to construct a high-precision map inside a warehouse through the efficient scanning and processing of environmental data by robots. The improved algorithm effectively integrates sensor data with robot position information to realize real-time modeling and analysis of the warehouse environment. Consequently, the precise mapping results provide a reliable navigation basis for the robot, enabling it to make intelligent path planning and obstacle avoidance decisions in unknown or dynamic environments. The experimental results show that the robot using the improved Gmapping algorithm has high accuracy and robustness in identifying obstacles and an effectively reduced navigation error, thus improving the intelligence level and efficiency of logistics operations. The improved algorithm significantly enhances obstacle detection rates, increasing them by 4.05%. Simultaneously, it reduces map-size accuracy errors by 1.4% and angle accuracy errors by 0.5%. Additionally, the accuracy of the robot's travel distance improves by 2.4%, and the mapping time is reduced by nine seconds. Significant progress has been made in achieving high-precision environmental perception and intelligent navigation, providing reliable technical support and solutions for autonomous operations in logistics warehouses.
(This article belongs to the Section Electrical and Autonomous Vehicles)

19 pages, 8449 KB  
Article
VID-SLAM: Robust Pose Estimation with RGBD-Inertial Input for Indoor Robotic Localization
by Dan Shan, Jinhe Su, Xiaofeng Wang, Yujun Liu, Taojian Zhou and Zebiao Wu
Electronics 2024, 13(2), 318; https://doi.org/10.3390/electronics13020318 - 11 Jan 2024
Cited by 4 | Viewed by 3644
Abstract
This study proposes a tightly coupled multi-sensor Simultaneous Localization and Mapping (SLAM) framework that integrates RGB-D and inertial measurements to achieve highly accurate six-degree-of-freedom (6DOF) metric localization in a variety of environments. Through the consideration of geometric consistency, inertial measurement unit constraints, and visual re-projection errors, we present visual-inertial-depth odometry (called VIDO), an efficient state estimation back-end, to minimise the cascading losses of all factors. Existing visual-inertial odometers rely on visual feature-based constraints to eliminate the translational displacement and angular drift produced by Inertial Measurement Unit (IMU) noise. To complement these constraints, we introduce the iterative closest point error of adjacent frames and update the state vectors of observed frames through the minimisation of the estimation errors of all sensors. Moreover, the loop-closure module allows for further optimization of the global pose map to correct long-term drift. For experiments, we collect an RGBD-inertial dataset for a comprehensive evaluation of VID-SLAM. The dataset contains RGB-D image pairs, IMU measurements, and two types of ground truth data. The experimental results show that VID-SLAM achieves state-of-the-art positioning accuracy and outperforms mainstream vSLAM solutions, including ElasticFusion, ORB-SLAM2, and VINS-Mono.
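A compact sketch of the point-to-point ICP error term the depth constraint builds on: nearest-neighbour correspondences followed by a closed-form (Kabsch/SVD) rigid fit. This is an illustrative stand-in with synthetic data, not the VIDO back-end itself.

```python
# One point-to-point ICP iteration between adjacent point clouds.
import numpy as np
from scipy.spatial import cKDTree

def icp_step(src, dst):
    """Match src to nearest dst points, then fit R, t by SVD and report the residual."""
    idx = cKDTree(dst).query(src)[1]
    matched = dst[idx]
    mu_s, mu_d = src.mean(axis=0), matched.mean(axis=0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    residual = np.linalg.norm(matched - (src @ R.T + t), axis=1).mean()
    return R, t, residual

prev = np.random.rand(500, 3)
curr = prev + np.array([0.05, 0.0, 0.02])    # shifted copy simulating inter-frame motion
R, t, err = icp_step(prev, curr)
print(np.round(t, 3), round(err, 4))
```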

26 pages, 10606 KB  
Article
Correlative Scan Matching Position Estimation Method by Fusing Visual and Radar Line Features
by Yang Li, Xiwei Cui, Yanping Wang and Jinping Sun
Remote Sens. 2024, 16(1), 114; https://doi.org/10.3390/rs16010114 - 27 Dec 2023
Cited by 2 | Viewed by 2670
Abstract
Millimeter-wave radar combined with optical cameras is one of the primary sensing configurations for autonomous platforms such as self-driving vehicles and disaster monitoring robots. Millimeter-wave radar odometry can perform self-pose estimation and environmental mapping, but cumulative errors arise over extended measurement periods. In scenes where loop-closure conditions are absent and visual geometric features are discontinuous, existing loop detection methods based on back-end optimization face challenges. To address this issue, this study introduces a correlative scan matching (CSM) pose estimation method that integrates visual and radar line features (VRL-SLAM). Using the pose output and the occupancy grid map generated by the front end of the millimeter-wave radar's simultaneous localization and mapping (SLAM), it compensates for accumulated errors by matching discontinuous visual line features with radar line features. Firstly, a pose estimation framework that integrates visual and radar line features was proposed to reduce the accumulated errors generated by the odometer. Secondly, an adaptive Hough transform line detection method (A-Hough) based on the projection of the prior radar grid map was introduced, eliminating interference from non-matching lines, enhancing the accuracy of line feature matching, and establishing a collection of visual line features. Furthermore, a Gaussian mixture model clustering method based on radar cross-section (RCS) was proposed, reducing the impact of radar clutter points on line-feature matching. Lastly, actual data from two scenes were collected to compare the algorithm proposed in this study with the CSM algorithm and RI-SLAM. The results demonstrated a reduction in long-term accumulated errors, verifying the effectiveness of the method.
(This article belongs to the Special Issue Environmental Monitoring Using UAV and Mobile Mapping Systems)
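Two building blocks named above, probabilistic Hough line extraction and Gaussian-mixture clustering of radar returns by RCS, are sketched below with off-the-shelf tools; the A-Hough prior-map projection and the paper's specific RCS model are not reproduced, and the synthetic data are assumptions.

```python
# Hough line detection on a synthetic edge image, then GMM clutter suppression by RCS.
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

# Line features from a synthetic frame containing one bright line.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.line(img, (20, 30), (180, 160), 255, 2)
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=5)
print(0 if lines is None else len(lines), "line segments")

# Cluster radar returns by RCS: low-RCS clutter vs. strong structural returns.
rcs = np.concatenate([np.random.normal(-15, 2, 300),    # clutter (dBsm)
                      np.random.normal(5, 2, 100)]).reshape(-1, 1)
gmm = GaussianMixture(n_components=2, random_state=0).fit(rcs)
strong = int(np.argmax(gmm.means_))
keep = gmm.predict(rcs) == strong
print(int(keep.sum()), "returns kept after clutter suppression")
```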

13 pages, 6089 KB  
Article
Accuracy Analysis of Visual Odometer for Unmanned Rollers in Tunnels
by Hao Huang, Xuebin Wang, Yongbiao Hu and Peng Tan
Electronics 2023, 12(20), 4202; https://doi.org/10.3390/electronics12204202 - 10 Oct 2023
Cited by 4 | Viewed by 1554
Abstract
Rollers, integral to road construction, are undergoing rapid advancements in unmanned functionality. To address the specific challenge of unmanned compaction within tunnels, we propose a vision-based odometry system for unmanned rollers. This system solves the problem of tunnel localization under conditions of low texture and high noise. We evaluate and compare various feature extraction and matching methods, apply random sample consensus (RANSAC) to eliminate false matches, and then employ Perspective-n-Point (PnP) for minimal-error pose estimation and trajectory analysis. The findings reveal that binary robust invariant scalable key points (BRISK) exhibits larger errors due to fewer correctly matched feature points, while scale invariant feature transform (SIFT) falls short of real-time requirements. Compared to Oriented FAST and Rotated BRIEF (ORB) and the direct method, the maximum relative error and the median error between the compaction trajectory estimated by speeded-up robust features (SURF) and the actual trajectory were the smallest. Consequently, unmanned rollers employing SURF + PnP achieve improved accuracy and robustness. This research contributes valuable insights to the development of autonomous road construction equipment, particularly in challenging tunnel environments.
(This article belongs to the Section Electrical and Autonomous Vehicles)
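A minimal sketch of the RANSAC + PnP pose step described above, using synthetic 3D-2D correspondences; the camera intrinsics, outlier rate, and thresholds are assumptions for the example, and the feature extraction/matching stage is omitted.

```python
# RANSAC-robust PnP pose estimation with OpenCV on synthetic correspondences.
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rng = np.random.default_rng(0)

pts3d = rng.uniform([-2, -2, 4], [2, 2, 8], size=(60, 3))      # landmarks in front of the camera
rvec_true = np.array([0.02, -0.03, 0.01])
tvec_true = np.array([0.10, -0.05, 0.20])
pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, dist)
pts2d = pts2d.reshape(-1, 2)
pts2d[:12] += rng.uniform(-40, 40, size=(12, 2))               # inject false matches

ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d.astype(np.float32),
                                             pts2d.astype(np.float32), K, dist,
                                             reprojectionError=3.0)
print(ok, tvec.ravel().round(3), 0 if inliers is None else len(inliers))
```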

13 pages, 23756 KB  
Article
Gas-Driven Endoscopic Robot for Visual Inspection of Corrosion Defects Inside Gas Pipelines
by Jin Fang, Jun Xiang, Li Ma, Hao Liu, Chenxiang Wang and Shan Liang
Processes 2023, 11(4), 1098; https://doi.org/10.3390/pr11041098 - 4 Apr 2023
Cited by 7 | Viewed by 3109
Abstract
The internal inspection of corrosion in large natural gas pipelines is a fundamental task for the prevention of possible failures. Photos and videos provide direct proof of internal corrosion defects. However, the implementation of this technique is limited by fast robot motion and poor lighting conditions, with high-quality images being key to its success. In this work, we developed a natural gas-driven pipeline endoscopic robot (GDPER) for the visual inspection of the inner wall surfaces of pipelines. GDPER consists of driving, odometer, and vision modules connected by universal joints. It is designed to work in a 154 mm gas-pressurized pipeline at up to 6 MPa, allowing it to pass smoothly through bends and across ring welds at a maximum speed of 3 m/s under gas-pressure drive. Test results have shown that HD MP4 video files can be obtained, and the location of defects on the pipelines can be detected by intelligent video image post-processing. The gas-driven design enables the survey of very long pipelines without interrupting the transport of the piped gas.
(This article belongs to the Section Process Control and Monitoring)

16 pages, 3937 KB  
Article
Vision and Inertial Navigation Combined-Based Pose Measurement Method of Cantilever Roadheader
by Jicheng Wan, Xuhui Zhang, Chao Zhang, Wenjuan Yang, Mengyu Lei, Yuyang Du and Zheng Dong
Sustainability 2023, 15(5), 4018; https://doi.org/10.3390/su15054018 - 22 Feb 2023
Cited by 10 | Viewed by 2295
Abstract
Pose measurement of coal mine excavation equipment is an important part of roadway excavation. However, in underground coal mine roadways, factors such as low illumination, high dust, and interference from multiple pieces of equipment make roadheader position and pose measurement difficult, with low measurement accuracy and poor stability. A combined positioning method based on machine vision and fiber-optic inertial navigation is proposed to measure the roadheader's position and pose and to improve the accuracy and stability of the measurement. A visual measurement model of the cantilever roadheader is established, and fiber-optic inertial navigation and spatial coordinate transformation are applied. Finally, a Kalman filter fusion algorithm is used to fuse the two kinds of data into accurate roadheader pose estimates, and the inertial errors are compensated and corrected. Underground coal mine experiments are designed to verify the performance of the proposed method. The results show that the positioning error of the roadheader body using this method is within 40 mm, which meets the positioning accuracy requirements of roadway construction. The method overcomes the low accuracy and poor reliability of vision-only, inertial-only, and odometer-only measurement.
(This article belongs to the Special Issue Advances in Intelligent and Sustainable Mining)
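The spatial coordinate transformation step mentioned above can be sketched as composing homogeneous transforms that map a camera-frame measurement into the roadway (world) frame. The transform values below are placeholders, not surveyed calibration parameters.

```python
# Compose homogeneous transforms to map a camera-frame point into the roadway frame.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def rot_z(yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

T_world_marker = make_T(rot_z(0.10), [12.0, 3.0, 1.5])   # reference marker in the roadway frame (assumed)
T_marker_cam = make_T(rot_z(-0.05), [0.3, 0.0, 0.2])     # camera extrinsics relative to the marker (assumed)
p_cam = np.array([1.2, -0.4, 5.0, 1.0])                  # point measured in the camera frame
p_world = T_world_marker @ T_marker_cam @ p_cam
print(np.round(p_world[:3], 3))
```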

23 pages, 4690 KB  
Article
Visual/Inertial/GNSS Integrated Navigation System under GNSS Spoofing Attack
by Nianzu Gu, Fei Xing and Zheng You
Remote Sens. 2022, 14(23), 5975; https://doi.org/10.3390/rs14235975 - 25 Nov 2022
Cited by 17 | Viewed by 3221
Abstract
Visual/Inertial/GNSS (VIG) integrated navigation and positioning systems are widely used in unmanned vehicles and other systems. The VIG system is vulnerable to GNSS spoofing attacks. Research on the harm that spoofing causes to such systems and on the performance of VIG systems under GNSS spoofing remains insufficient. In this paper, an open-source VIG algorithm, VINS-Fusion, based on nonlinear optimization, is used to analyze the performance of the VIG system under a GNSS spoofing attack. The influence of the visual-inertial odometer (VIO) scale estimation error and the transformation matrix deviation in the transition period of spoofing detection is analyzed. Deviation correction methods based on GNSS-assisted scale compensation coefficient estimation and optimal pose transformation matrix selection are proposed for the integrated system in spoofing areas. For an area that the integrated system can revisit many times, a global pose map-matching method is proposed. A field experiment with a GNSS spoofing attack is carried out. The experimental results show that, even when the GNSS measurements are seriously affected by spoofing, the integrated system can still run independently and follow the preset waypoints. The scale compensation coefficient estimation method, the optimal pose transformation matrix selection method, and the global pose map-matching method can suppress the estimation error under a spoofing attack.
(This article belongs to the Topic Artificial Intelligence in Sensors)
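An illustrative sketch (assumed form, not the paper's estimator) of a GNSS-assisted scale compensation coefficient: over a trusted, pre-spoofing window, VIO displacement increments are compared with GNSS displacement increments and a single least-squares scale factor is fitted, to be applied when GNSS can no longer be trusted.

```python
# Least-squares scale factor between VIO and GNSS displacement increments.
import numpy as np

def scale_compensation(vio_incr, gnss_incr):
    """Return the scale s minimising ||s * vio_incr - gnss_incr||."""
    num = float(np.sum(vio_incr * gnss_incr))
    den = float(np.sum(vio_incr * vio_incr))
    return num / den

vio = np.array([0.95, 1.02, 0.99, 1.05])     # per-epoch VIO displacement norms (scale drifted)
gnss = np.array([1.20, 1.27, 1.22, 1.31])    # matching GNSS displacement norms, pre-spoofing
s = scale_compensation(vio, gnss)
print(round(s, 3))                            # compensated VIO displacement = s * vio
```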
