Search Results (53)

Search Parameters:
Keywords = Eurocity

24 pages, 5578 KiB  
Article
Adaptive Covariance Matrix for UAV-Based Visual–Inertial Navigation Systems Using Gaussian Formulas
by Yangzi Cong, Wenbin Su, Nan Jiang, Wenpeng Zong, Long Li, Yan Xu, Tianhe Xu and Paipai Wu
Sensors 2025, 25(15), 4745; https://doi.org/10.3390/s25154745 - 1 Aug 2025
Viewed by 228
Abstract
In a variety of UAV applications, visual–inertial navigation systems (VINSs) play a crucial role in providing accurate positioning and navigation solutions. However, traditional VINSs struggle to adapt flexibly to varying environmental conditions due to fixed covariance matrix settings. This limitation becomes especially acute during high-speed drone operations, where motion blur and fluctuating image clarity can significantly compromise navigation accuracy and system robustness. To address these issues, we propose an adaptive covariance matrix estimation method for UAV-based VINS using Gaussian formulas. Our approach enhances the accuracy and robustness of the navigation system by dynamically adjusting the covariance matrix according to image quality. Using the Laplacian operator, detailed assessments of image blur are performed, achieving precise perception of image quality. Based on these assessments, a novel mechanism is introduced that dynamically adjusts the visual covariance matrix with a Gaussian model according to the clarity of images in the current environment. Extensive simulation experiments on the EuRoC and TUM VI datasets, as well as outdoor field tests, validate our method, demonstrating significant improvements in drone navigation accuracy in scenarios with motion blur. Our algorithm achieves significantly higher accuracy than the widely used VINS-Mono framework, outperforming it by 18.18% on average, with RMS improvement rates of 65.66% on the F1 sequence and 41.74% on F2 in the outdoor field tests. Full article
(This article belongs to the Section Navigation and Positioning)
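
To make the adaptation step concrete, here is a minimal Python sketch of the idea described in the abstract: the variance of the Laplacian response serves as a sharpness score, and a Gaussian function of that score inflates the visual measurement covariance when the image degrades. The function names and the reference values s0 and sigma_s are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of the Laplacian response; low values indicate motion blur."""
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())

def adaptive_visual_covariance(gray: np.ndarray,
                               base_sigma_px: float = 1.5,
                               s0: float = 300.0,      # assumed "sharp" reference score
                               sigma_s: float = 150.0  # assumed spread of the Gaussian model
                               ) -> np.ndarray:
    """Scale a 2x2 reprojection-noise covariance with a Gaussian of the blur score.

    Sharp image   (score >= s0)  -> scale ~ 1 (keep nominal covariance).
    Blurred image (score << s0)  -> scale grows, down-weighting visual residuals.
    """
    s = sharpness_score(gray)
    # Gaussian confidence in (0, 1]; 1 when the image is at least as sharp as s0.
    confidence = np.exp(-0.5 * (max(s0 - s, 0.0) / sigma_s) ** 2)
    scale = 1.0 / max(confidence, 1e-3)          # inflate covariance as confidence drops
    return (base_sigma_px ** 2) * scale * np.eye(2)

# Example: feed the current frame and plug the result into the estimator's update step.
# R = adaptive_visual_covariance(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```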

21 pages, 4044 KiB  
Article
DK-SLAM: Monocular Visual SLAM with Deep Keypoint Learning, Tracking, and Loop Closing
by Hao Qu, Lilian Zhang, Jun Mao, Junbo Tie, Xiaofeng He, Xiaoping Hu, Yifei Shi and Changhao Chen
Appl. Sci. 2025, 15(14), 7838; https://doi.org/10.3390/app15147838 - 13 Jul 2025
Viewed by 414
Abstract
The performance of visual SLAM in complex, real-world scenarios is often compromised by unreliable feature extraction and matching when using handcrafted features. Although deep learning-based local features excel at capturing high-level information and perform well on matching benchmarks, they struggle with generalization in continuous motion scenes, adversely affecting loop detection accuracy. To address these issues, we propose DK-SLAM, a monocular visual SLAM system built on deep keypoint learning, tracking, and loop closing. Our system employs a Model-Agnostic Meta-Learning (MAML) strategy to optimize the training of keypoint extraction networks, enhancing their adaptability to diverse environments. Additionally, we introduce a coarse-to-fine feature tracking mechanism for learned keypoints: a direct method first approximates the relative pose between consecutive frames, and feature matching then refines the pose estimate. To mitigate cumulative positioning errors, DK-SLAM incorporates a novel online learning module that utilizes binary features for loop closure detection. This module dynamically identifies loop nodes within a sequence, ensuring accurate and efficient localization. Experimental evaluations on publicly available datasets demonstrate that DK-SLAM outperforms leading traditional and learning-based SLAM systems, such as ORB-SLAM3 and LIFT-SLAM. DK-SLAM achieves 17.7% better translation accuracy and 24.2% better rotation accuracy than ORB-SLAM3 on KITTI and 34.2% better translation accuracy on EuRoC. These results underscore the efficacy and robustness of DK-SLAM in varied and challenging real-world environments. Full article
(This article belongs to the Section Robotics and Automation)
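
The binary-feature loop-closure idea can be sketched as follows: store binary descriptors per keyframe and score a query frame against older keyframes with Hamming-distance matching. ORB descriptors and a brute-force matcher are used here as stand-ins; the paper's online learning module and its thresholds are not reproduced.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)                 # stand-in binary descriptor
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def describe(gray):
    """Compute binary descriptors for one keyframe image."""
    _, desc = orb.detectAndCompute(gray, None)
    return desc

def loop_score(query_desc, kf_desc, ratio=0.75):
    """Number of ratio-test matches between two frames' binary descriptors."""
    if query_desc is None or kf_desc is None:
        return 0
    matches = matcher.knnMatch(query_desc, kf_desc, k=2)
    return sum(1 for m in matches
               if len(m) == 2 and m[0].distance < ratio * m[1].distance)

def detect_loop(query_desc, keyframe_descs, min_score=60, skip_recent=30):
    """Return the index of the best-matching older keyframe, or None if no loop."""
    candidates = keyframe_descs[:-skip_recent]       # ignore keyframes near the query
    scores = [loop_score(query_desc, d) for d in candidates]
    if scores and max(scores) >= min_score:
        return int(np.argmax(scores))
    return None
```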

16 pages, 3055 KiB  
Article
LET-SE2-VINS: A Hybrid Optical Flow Framework for Robust Visual–Inertial SLAM
by Wei Zhao, Hongyang Sun, Songsong Ma and Haitao Wang
Sensors 2025, 25(13), 3837; https://doi.org/10.3390/s25133837 - 20 Jun 2025
Viewed by 574
Abstract
This paper presents SE2-LET-VINS, an enhanced Visual–Inertial Simultaneous Localization and Mapping (VI-SLAM) system built upon the classic Visual–Inertial Navigation System for Monocular Cameras (VINS-Mono) framework, designed to improve localization accuracy and robustness in complex environments. By integrating Lightweight Neural Network (LET-NET) for high-quality feature extraction and Special Euclidean Group in 2D (SE2) optical flow tracking, the system achieves superior performance in challenging scenarios such as low lighting and rapid motion. The proposed method processes Inertial Measurement Unit (IMU) data and camera data, utilizing pre-integration and RANdom SAmple Consensus (RANSAC) for precise feature matching. Experimental results on the European Robotics Challenges (EuRoc) dataset demonstrate that the proposed hybrid method improves localization accuracy by up to 43.89% compared to the classic VINS-Mono model in sequences with loop closure detection. In no-loop scenarios, the method also achieves error reductions of 29.7%, 21.8%, and 24.1% on the MH_04, MH_05, and V2_03 sequences, respectively. Trajectory visualization and Gaussian fitting analysis further confirm the system’s good robustness and accuracy. SE2-LET-VINS offers a robust solution for visual–inertial navigation, particularly in demanding environments, and paves the way for future real-time applications and extended capabilities. Full article
(This article belongs to the Section Navigation and Positioning)
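
As a hedged illustration of the RANSAC gating step mentioned above (the LET-NET features and SE2 optical flow themselves are not shown), tracked point pairs can be filtered with a fundamental-matrix RANSAC test; the pixel threshold is an assumed value.

```python
import cv2
import numpy as np

def ransac_filter_tracks(pts_prev: np.ndarray, pts_curr: np.ndarray,
                         thresh_px: float = 1.0):
    """Reject bad optical-flow tracks with a RANSAC fundamental-matrix check.

    pts_prev, pts_curr: (N, 2) float32 pixel coordinates of tracked features.
    Returns the inlier subsets of both arrays.
    """
    if len(pts_prev) < 8:                      # need at least 8 correspondences
        return pts_prev, pts_curr
    F, mask = cv2.findFundamentalMat(pts_prev, pts_curr,
                                     cv2.FM_RANSAC, thresh_px, 0.999)
    if F is None:
        return pts_prev, pts_curr
    inliers = mask.ravel().astype(bool)
    return pts_prev[inliers], pts_curr[inliers]
```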

35 pages, 21267 KiB  
Article
Unmanned Aerial Vehicle–Unmanned Ground Vehicle Centric Visual Semantic Simultaneous Localization and Mapping Framework with Remote Interaction for Dynamic Scenarios
by Chang Liu, Yang Zhang, Liqun Ma, Yong Huang, Keyan Liu and Guangwei Wang
Drones 2025, 9(6), 424; https://doi.org/10.3390/drones9060424 - 10 Jun 2025
Viewed by 1252
Abstract
In this study, we introduce an Unmanned Aerial Vehicle (UAV) centric visual semantic simultaneous localization and mapping (SLAM) framework that integrates RGB–D cameras, inertial measurement units (IMUs), and a 5G–enabled remote interaction module. Our system addresses three critical limitations in existing approaches: (1) Distance constraints in remote operations; (2) Static map assumptions in dynamic environments; and (3) High–dimensional perception requirements for UAV–based applications. By combining YOLO–based object detection with epipolar–constraint-based dynamic feature removal, our method achieves real-time semantic mapping while rejecting motion artifacts. The framework further incorporates a dual–channel communication architecture to enable seamless human–in–the–loop control over UAV–Unmanned Ground Vehicle (UGV) teams in large–scale scenarios. Experimental validation across indoor and outdoor environments indicates that the system can achieve a detection rate of up to 75 frames per second (FPS) on an NVIDIA Jetson AGX Xavier using YOLO–FASTEST, ensuring the rapid identification of dynamic objects. In dynamic scenarios, the localization accuracy attains an average absolute pose error (APE) of 0.1275 m. This outperforms state–of–the–art methods like Dynamic–VINS (0.211 m) and ORB–SLAM3 (0.148 m) on the EuRoC MAV Dataset. The dual-channel communication architecture (Web Real–Time Communication (WebRTC) for video and Message Queuing Telemetry Transport (MQTT) for telemetry) reduces bandwidth consumption by 65% compared to traditional TCP–based protocols. Moreover, our hybrid dynamic feature filtering can reject 89% of dynamic features in occluded scenarios, guaranteeing accurate mapping in complex environments. Our framework represents a significant advancement in enabling intelligent UAVs/UGVs to navigate and interact in complex, dynamic environments, offering real-time semantic understanding and accurate localization. Full article
(This article belongs to the Special Issue Advances in Perception, Communications, and Control for Drones)
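
A rough sketch of epipolar-constraint-based dynamic feature removal combined with detector boxes, under assumed thresholds (the paper's exact criterion may differ): features whose distance to their epipolar line is large, or that fall inside boxes of movable objects, are discarded before pose estimation.

```python
import cv2
import numpy as np

def dynamic_feature_mask(pts1, pts2, yolo_boxes, epi_thresh=2.0):
    """Flag features as dynamic via an epipolar-line distance test plus detector boxes.

    pts1, pts2 : (N, 2) matched pixel coordinates in consecutive frames.
    yolo_boxes : list of (x1, y1, x2, y2) boxes for movable object classes.
    Returns a boolean array, True where a feature is considered dynamic.
    """
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    dynamic = np.zeros(len(pts1), dtype=bool)
    if F is not None:
        ones = np.ones((len(pts1), 1))
        lines = (F @ np.hstack([pts1, ones]).T).T        # epipolar lines in image 2
        num = np.abs(np.sum(lines * np.hstack([pts2, ones]), axis=1))
        den = np.sqrt(lines[:, 0] ** 2 + lines[:, 1] ** 2) + 1e-9
        dynamic |= (num / den) > epi_thresh              # off its epipolar line -> moving
    for x1, y1, x2, y2 in yolo_boxes:                    # also drop points on detected movers
        inside = (pts2[:, 0] >= x1) & (pts2[:, 0] <= x2) & \
                 (pts2[:, 1] >= y1) & (pts2[:, 1] <= y2)
        dynamic |= inside
    return dynamic
```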

26 pages, 6870 KiB  
Article
DA-IRRK: Data-Adaptive Iteratively Reweighted Robust Kernel-Based Approach for Back-End Optimization in Visual SLAM
by Zhimin Hu, Lan Cheng, Jiangxia Wei, Xinying Xu, Zhe Zhang and Gaowei Yan
Sensors 2025, 25(8), 2529; https://doi.org/10.3390/s25082529 - 17 Apr 2025
Viewed by 474
Abstract
Back-end optimization is a key process for eliminating cumulative error in Visual Simultaneous Localization and Mapping (VSLAM). Existing VSLAM frameworks often use kernel function-based back-end optimization methods. However, these methods typically rely on fixed kernel parameters derived from the chi-square test, assuming Gaussian-distributed reprojection errors. In practice, reprojection errors are not always Gaussian, which can reduce robustness and accuracy. Therefore, we propose a data-adaptive iteratively reweighted robust kernel (DA-IRRK) approach, which combines the median absolute deviation (MAD) with iteratively reweighted strategies. The robustness parameters are adaptively adjusted according to the MAD of the reprojection errors, and the Huber kernel function is used to demonstrate the implementation of the back-end optimization process. The method is compared with other robust function-based approaches on the EuRoC and KITTI datasets, showing adaptability across different VSLAM frameworks and significant improvements in trajectory accuracy on the vast majority of sequences. A statistical analysis of the reprojection errors indicates that DA-IRRK handles non-Gaussian noise better than the compared methods. Full article
(This article belongs to the Section Navigation and Positioning)
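
The MAD-driven reweighting can be illustrated with a small sketch: the robust scale of the current reprojection residuals sets the Huber threshold, and the resulting weights feed one iteration of reweighted least squares. The tuning constant k = 1.345 is a conventional choice assumed here, not necessarily the paper's.

```python
import numpy as np

def mad_sigma(residuals: np.ndarray) -> float:
    """Robust scale estimate: 1.4826 * median absolute deviation from the median."""
    r = np.asarray(residuals, dtype=float)
    return 1.4826 * np.median(np.abs(r - np.median(r)))

def huber_weights(residuals: np.ndarray, k: float = 1.345) -> np.ndarray:
    """One iteratively reweighted step with a data-adaptive Huber threshold.

    The Huber parameter delta is tied to the current residual spread via MAD,
    so it tracks non-Gaussian error distributions instead of a fixed chi-square cut.
    """
    r = np.asarray(residuals, dtype=float)
    delta = k * max(mad_sigma(r), 1e-9)
    a = np.abs(r)
    w = np.ones_like(r)
    mask = a > delta
    w[mask] = delta / a[mask]          # classic Huber weight delta / |r|
    return w

# In a reweighted least-squares loop the weights would be recomputed from the
# latest reprojection residuals before every solver iteration, e.g.:
# w = huber_weights(reprojection_errors); solve weighted normal equations; repeat.
```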

20 pages, 8769 KiB  
Article
Mamba-DQN: Adaptively Tunes Visual SLAM Parameters Based on Historical Observation DQN
by Xubo Ma, Chuhua Huang, Xin Huang and Wangping Wu
Appl. Sci. 2025, 15(6), 2950; https://doi.org/10.3390/app15062950 - 9 Mar 2025
Viewed by 1093
Abstract
The parameter configuration of traditional visual SLAM algorithms usually relies on expert experience and extensive experiments, and it must be reset whenever the scene changes, which is a complex and tedious process. To achieve parameter adaptation in visual SLAM, we propose the Mamba-DQN method, which transforms the complex parameter adjustment task into a policy-learning problem for an agent. In this paper, we select the key parameters of visual SLAM to construct the agent's action space. The reward function is constructed from the absolute trajectory error (ATE), and a Mamba history observer is built within the agent to learn from the observation trajectory, improving the quality of the agent's decisions. Finally, the proposed method was evaluated on the EuRoC MAV and TUM-VI datasets. The experimental results show that Mamba-DQN enhances the positioning accuracy of visual SLAM with good real-time performance while avoiding the tedious parameter adjustment process. Full article
(This article belongs to the Special Issue Applications in Computer Vision and Image Processing)
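
A minimal sketch of the reinforcement-learning framing (the Mamba history observer and the DQN itself are omitted): a discrete action set perturbs front-end parameters, and the reward is the reduction in ATE after re-running the affected segment. The parameter names and step sizes below are hypothetical.

```python
import numpy as np

# Hypothetical action space: each action nudges one front-end parameter.
ACTIONS = [
    ("n_features", +250), ("n_features", -250),
    ("fast_threshold", +2), ("fast_threshold", -2),
    ("scale_factor", +0.05), ("scale_factor", -0.05),
]

def apply_action(params: dict, action_idx: int) -> dict:
    """Return a copy of the SLAM parameter set with one value adjusted."""
    name, step = ACTIONS[action_idx]
    out = dict(params)
    out[name] += step
    return out

def ate_rmse(est_xyz: np.ndarray, gt_xyz: np.ndarray) -> float:
    """Absolute trajectory error (RMSE) between aligned estimated and ground-truth positions."""
    return float(np.sqrt(np.mean(np.sum((est_xyz - gt_xyz) ** 2, axis=1))))

def reward(prev_ate: float, new_ate: float) -> float:
    """Positive reward when the new parameter configuration lowers the trajectory error."""
    return prev_ate - new_ate
```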

15 pages, 3120 KiB  
Article
Implementation of Visual Odometry on Jetson Nano
by Jakub Krško, Dušan Nemec, Vojtech Šimák and Mário Michálik
Sensors 2025, 25(4), 1025; https://doi.org/10.3390/s25041025 - 9 Feb 2025
Viewed by 2221
Abstract
This paper presents the implementation of ORB-SLAM3 for visual odometry on a low-power ARM-based system, specifically the Jetson Nano, to track a robot’s movement using RGB-D cameras. Key challenges addressed include the selection of compatible software libraries, camera calibration, and system optimization. The ORB-SLAM3 algorithm was adapted for the ARM architecture and tested using both the EuRoC dataset and real-world scenarios involving a mobile robot. The testing demonstrated that ORB-SLAM3 provides accurate localization, with errors in path estimation ranging from 3 to 11 cm when using the EuRoC dataset. Real-world tests on a mobile robot revealed discrepancies primarily due to encoder drift and environmental factors such as lighting and texture. The paper discusses strategies for mitigating these errors, including enhanced calibration and the potential use of encoder data for tracking when camera performance falters. Future improvements focus on refining the calibration process, adding trajectory correction mechanisms, and integrating visual odometry data more effectively into broader systems. Full article
(This article belongs to the Section Sensors and Robotics)

22 pages, 12893 KiB  
Article
Research on Visual–Inertial Measurement Unit Fusion Simultaneous Localization and Mapping Algorithm for Complex Terrain in Open-Pit Mines
by Yuanbin Xiao, Wubin Xu, Bing Li, Hanwen Zhang, Bo Xu and Weixin Zhou
Sensors 2024, 24(22), 7360; https://doi.org/10.3390/s24227360 - 18 Nov 2024
Viewed by 1336
Abstract
As mining technology advances, intelligent robots in open-pit mining require precise localization and digital maps. Nonetheless, significant pitch variations, uneven roadways, and rocky surfaces with minimal texture present substantial challenges to the precision of feature extraction and positioning in traditional visual SLAM systems, owing to the intricate terrain of open-pit mines. This study proposes an improved SLAM technique that integrates visual and Inertial Measurement Unit (IMU) data to address these challenges. The method incorporates a point–line feature fusion matching strategy to enhance the quality and stability of line feature extraction, integrating an enhanced Line Segment Detection (LSD) algorithm with short segment culling and approximate line merging techniques. IMU pre-integration and visual feature constraints are combined in a tightly coupled visual–inertial framework that uses a sliding-window approach for back-end optimization, enhancing system robustness and precision. Experimental results demonstrate that the proposed method improves RMSE accuracy by 36.62% and 26.88% on the MH and VR sequences of the EuRoC dataset, respectively, compared to ORB-SLAM3. The improved SLAM system also significantly reduces trajectory drift in simulated open-pit mining tests, improving localization accuracy by 40.62% and 61.32%. These results indicate that the proposed method provides a significant accuracy gain for SLAM in complex open-pit mine terrain. Full article
(This article belongs to the Section Sensors and Robotics)
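
The two line-feature refinements named above can be approximated as follows; the length, angle, and gap thresholds are assumptions, and the greedy merge is only a crude stand-in for the paper's approximate line merging.

```python
import numpy as np

def cull_short_segments(segs: np.ndarray, min_len: float = 30.0) -> np.ndarray:
    """Drop LSD segments shorter than min_len pixels (segs is (N, 4): x1, y1, x2, y2)."""
    d = np.hypot(segs[:, 2] - segs[:, 0], segs[:, 3] - segs[:, 1])
    return segs[d >= min_len]

def merge_collinear(segs: np.ndarray, ang_tol_deg: float = 3.0,
                    gap_tol: float = 10.0) -> list:
    """Greedily merge nearly collinear segments whose endpoints are close.

    Two segments are merged when their directions agree within ang_tol_deg and the
    gap between the end of one and the start of the other is below gap_tol pixels.
    """
    remaining = [s.astype(float) for s in segs]
    merged = []
    while remaining:
        cur = remaining.pop(0)
        changed = True
        while changed:
            changed = False
            for i, s in enumerate(remaining):
                a1 = np.arctan2(cur[3] - cur[1], cur[2] - cur[0])
                a2 = np.arctan2(s[3] - s[1], s[2] - s[0])
                dang = np.degrees(abs(np.arctan2(np.sin(a1 - a2), np.cos(a1 - a2))))
                gap = np.hypot(s[0] - cur[2], s[1] - cur[3])
                if min(dang, 180.0 - dang) < ang_tol_deg and gap < gap_tol:
                    cur = np.array([cur[0], cur[1], s[2], s[3]])  # extend to far endpoint
                    remaining.pop(i)
                    changed = True
                    break
        merged.append(cur)
    return merged
```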

17 pages, 13227 KiB  
Article
Robot Localization Method Based on Multi-Sensor Fusion in Low-Light Environment
by Mengqi Wang, Zengzeng Lian, María Amparo Núñez-Andrés, Penghui Wang, Yalin Tian, Zhe Yue and Lingxiao Gu
Electronics 2024, 13(22), 4346; https://doi.org/10.3390/electronics13224346 - 6 Nov 2024
Viewed by 1871
Abstract
When robots perform localization in indoor low-light environments, factors such as weak and uneven lighting can degrade image quality. This degradation results in a reduced number of feature extractions by the visual odometry front end and may even cause tracking loss, thereby impacting the algorithm’s positioning accuracy. To enhance the localization accuracy of mobile robots in indoor low-light environments, this paper proposes a visual inertial odometry method (L-MSCKF) based on the multi-state constraint Kalman filter. Addressing the challenges of low-light conditions, we integrated Inertial Measurement Unit (IMU) data with stereo vision odometry. The algorithm includes an image enhancement module and a gyroscope zero-bias correction mechanism to facilitate feature matching in stereo vision odometry. We conducted tests on the EuRoC dataset and compared our method with other similar algorithms, thereby validating the effectiveness and accuracy of L-MSCKF. Full article
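
As an illustration of the image-enhancement stage (the abstract does not specify the exact method, so CLAHE is used here purely as an assumed example), a dim frame can be contrast-boosted before feature matching in the stereo front end:

```python
import cv2

def enhance_low_light(gray, clip_limit=3.0, tile=(8, 8)):
    """Boost local contrast of a dim frame before feature extraction.

    CLAHE is an illustrative choice; the paper's enhancement module may differ.
    """
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    return clahe.apply(gray)

# Usage: enhanced = enhance_low_light(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
```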

23 pages, 10455 KiB  
Article
ULG-SLAM: A Novel Unsupervised Learning and Geometric Feature-Based Visual SLAM Algorithm for Robot Localizability Estimation
by Yihan Huang, Fei Xie, Jing Zhao, Zhilin Gao, Jun Chen, Fei Zhao and Xixiang Liu
Remote Sens. 2024, 16(11), 1968; https://doi.org/10.3390/rs16111968 - 30 May 2024
Cited by 10 | Viewed by 2048
Abstract
Indoor localization has long been a challenging task due to the complexity and dynamism of indoor environments. This paper proposes ULG-SLAM, a novel unsupervised learning and geometric-based visual SLAM algorithm for robot localizability estimation to improve the accuracy and robustness of visual SLAM. Firstly, a dynamic feature filtering based on unsupervised learning and moving consistency checks is developed to eliminate the features of dynamic objects. Secondly, an improved line feature extraction algorithm based on LSD is proposed to optimize the effect of geometric feature extraction. Thirdly, geometric features are used to optimize localizability estimation, and an adaptive weight model and attention mechanism are built using the method of region delimitation and region growth. Finally, to verify the effectiveness and robustness of localizability estimation, multiple indoor experiments using the EuRoC dataset and TUM RGB-D dataset are conducted. Compared with ORBSLAM2, the experimental results demonstrate that absolute trajectory accuracy can be improved by 95% for equivalent processing speed in walking sequences. In fr3/walking_xyz and fr3/walking_half, ULG-SLAM tracks more trajectories than DS-SLAM, and the ATE RMSE is improved by 36% and 6%, respectively. Furthermore, the improvement in robot localizability over DynaSLAM is noteworthy, coming in at about 11% and 3%, respectively. Full article
(This article belongs to the Special Issue Advances in Applications of Remote Sensing GIS and GNSS)

14 pages, 11587 KiB  
Article
Efficient Structure from Motion for Large-Size Videos from an Open Outdoor UAV Dataset
by Ruilin Xiang, Jiagang Chen and Shunping Ji
Sensors 2024, 24(10), 3039; https://doi.org/10.3390/s24103039 - 10 May 2024
Viewed by 1679
Abstract
Modern UAVs (unmanned aerial vehicles) equipped with video cameras can provide large-scale high-resolution video data. This poses significant challenges for structure from motion (SfM) and simultaneous localization and mapping (SLAM) algorithms, as most of them are developed for relatively small-scale and low-resolution scenes. In this paper, we present a video-based SfM method specifically designed for high-resolution large-size UAV videos. Despite the wide range of applications for SfM, performing mainstream SfM methods on such videos poses challenges due to their high computational cost. Our method consists of three main steps. Firstly, we employ a visual SLAM (VSLAM) system to efficiently extract keyframes, keypoints, initial camera poses, and sparse structures from downsampled videos. Next, we propose a novel two-step keypoint adjustment method. Instead of matching new points in the original videos, our method effectively and efficiently adjusts the existing keypoints at the original scale. Finally, we refine the poses and structures using a rotation-averaging constrained global bundle adjustment (BA) technique, incorporating the adjusted keypoints. To enrich the resources available for SLAM or SfM studies, we provide a large-size (3840 × 2160) outdoor video dataset with millimeter-level-accuracy ground control points, which supplements the current relatively low-resolution video datasets. Experiments demonstrate that, compared with other SLAM or SfM methods, our method achieves an average efficiency improvement of 100% on our collected dataset and 45% on the EuRoc dataset. Our method also demonstrates superior localization accuracy when compared with state-of-the-art SLAM or SfM methods. Full article
(This article belongs to the Section Navigation and Positioning)
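
The two-step keypoint adjustment can be sketched as follows: keypoints detected on the downsampled video are rescaled to the original resolution and then refined locally. Sub-pixel corner refinement is used here as an assumed stand-in for the paper's adjustment step.

```python
import cv2
import numpy as np

def adjust_keypoints_to_full_res(kps_small: np.ndarray, full_gray: np.ndarray,
                                 scale: float, win: int = 5) -> np.ndarray:
    """Lift SLAM keypoints from a downsampled frame back to the original frame.

    Step 1: rescale pixel coordinates by the downsampling factor.
    Step 2: refine each point locally at full resolution (sub-pixel refinement).
    kps_small : (N, 2) keypoint coordinates in the downsampled image.
    """
    pts = (kps_small * scale).astype(np.float32).reshape(-1, 1, 2)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    refined = cv2.cornerSubPix(full_gray, pts, (win, win), (-1, -1), criteria)
    return refined.reshape(-1, 2)
```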

19 pages, 3494 KiB  
Article
Visual–Inertial Odometry of Structured and Unstructured Lines Based on Vanishing Points in Indoor Environments
by Xiaojing He, Baoquan Li, Shulei Qiu and Kexin Liu
Appl. Sci. 2024, 14(5), 1990; https://doi.org/10.3390/app14051990 - 28 Feb 2024
Cited by 2 | Viewed by 1448
Abstract
In conventional point-line visual–inertial odometry systems in indoor environments, consideration of spatial position recovery and line feature classification can improve localization accuracy. In this paper, a monocular visual–inertial odometry based on structured and unstructured line features of vanishing points is proposed. First, the degeneracy phenomenon caused by a special geometric relationship between epipoles and line features is analyzed in the process of triangulation, and a degeneracy detection strategy is designed to determine the location of the epipoles. Then, considering that the vanishing point and the epipole coincide at infinity, the vanishing point feature is introduced to solve the degeneracy and direction vector optimization problem of line features. Finally, threshold constraints are used to categorize straight lines into structural and non-structural features under the Manhattan world assumption, and the vanishing point measurement model is added to the sliding window for joint optimization. Comparative tests on the EuRoC and TUM-VI public datasets validated the effectiveness of the proposed method. Full article
(This article belongs to the Topic Multi-Sensor Integrated Navigation Systems)
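
The degeneracy mentioned above occurs when an observed image line (nearly) passes through the epipole, making two-view line triangulation ill-posed; a simple detection test, with an assumed pixel threshold, is:

```python
import numpy as np

def line_triangulation_degenerate(line_endpoints, epipole, dist_thresh=5.0):
    """Flag the degenerate configuration where a 2D line (nearly) passes through the epipole.

    line_endpoints : ((x1, y1), (x2, y2)) endpoints of the detected image line.
    epipole        : (ex, ey) epipole location in the same image.
    """
    (x1, y1), (x2, y2) = line_endpoints
    p1 = np.array([x1, y1, 1.0])
    p2 = np.array([x2, y2, 1.0])
    l = np.cross(p1, p2)                       # homogeneous line through the endpoints
    e = np.array([epipole[0], epipole[1], 1.0])
    dist = abs(l @ e) / (np.hypot(l[0], l[1]) + 1e-12)
    return dist < dist_thresh                  # epipole (almost) on the line -> degenerate
```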

30 pages, 22271 KiB  
Article
A Novel Approach for Simultaneous Localization and Dense Mapping Based on Binocular Vision in Forest Ecological Environment
by Lina Liu, Yaqiu Liu, Yunlei Lv and Xiang Li
Forests 2024, 15(1), 147; https://doi.org/10.3390/f15010147 - 10 Jan 2024
Cited by 5 | Viewed by 2081
Abstract
The three-dimensional reconstruction of forest ecological environments by low-altitude remote sensing photography from Unmanned Aerial Vehicles (UAVs) provides a powerful basis for the fine surveying of forest resources and forest management. A stereo vision system, D-SLAM, is proposed to realize simultaneous localization and dense mapping for UAVs in complex forest ecological environments. The system takes binocular images as input and 3D dense maps as target outputs, while 3D sparse maps and camera poses are also obtained. The tracking thread utilizes temporal cues to match sparse map points for zero-drift localization. The relative motion and data association between frames are used as constraints for new keyframe selection, and a binocular spatial-cue compensation strategy is proposed to increase tracking robustness. The dense mapping thread uses a Linear Attention Network (LANet) to predict reliable disparity maps in ill-posed regions, which are transformed into depth maps for constructing dense point cloud maps. Evaluations on three datasets, EuRoC, KITTI, and Forest, show that the proposed system runs at 30 frames and 3 keyframes per second on Forest, achieves a localization accuracy of several centimeters in Root Mean Squared Absolute Trajectory Error (RMS ATE) on EuRoC, and attains average relative errors of 0.64 (t_rel) and 0.2 (R_rel) on KITTI, outperforming most mainstream models in tracking accuracy and robustness. Moreover, dense mapping compensates for the shortcomings of the sparse mapping used by most Simultaneous Localization and Mapping (SLAM) systems, and the proposed system meets the requirements of real-time localization and dense mapping in complex forest ecological environments. Full article
(This article belongs to the Special Issue Modeling and Remote Sensing of Forests Ecosystem)
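
The disparity-to-depth step of the dense mapping thread follows the standard stereo relation depth = fx * baseline / disparity; a short back-projection sketch (camera intrinsics assumed known) is:

```python
import numpy as np

def disparity_to_point_cloud(disparity, fx, fy, cx, cy, baseline, min_disp=0.5):
    """Convert a predicted disparity map into a 3D point cloud in the left-camera frame.

    depth = fx * baseline / disparity; points are back-projected with the pinhole model.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > min_disp
    z = np.zeros_like(disparity, dtype=float)
    z[valid] = fx * baseline / disparity[valid]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)   # (N, 3) points
```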

16 pages, 4161 KiB  
Article
InertialNet: Inertial Measurement Learning for Simultaneous Localization and Mapping
by Huei-Yung Lin, Tse-An Liu and Wei-Yang Lin
Sensors 2023, 23(24), 9812; https://doi.org/10.3390/s23249812 - 14 Dec 2023
Cited by 1 | Viewed by 1516
Abstract
SLAM (simultaneous localization and mapping) plays a crucial role in autonomous robot navigation. A challenging aspect of visual SLAM systems is determining the 3D camera orientation of the motion trajectory. In this paper, we introduce an end-to-end network structure, InertialNet, which establishes the correlation between the image sequence and the IMU signals. Our network model is built upon inertial measurement learning and is employed to predict the camera’s general motion pose. By incorporating an optical flow substructure, InertialNet is independent of the appearance of training sets and can be adapted to new environments. It maintains stable predictions even in the presence of image blur, changes in illumination, and low-texture scenes. In our experiments, we evaluated InertialNet on the public EuRoC dataset and our dataset, demonstrating its feasibility with faster training convergence and fewer model parameters for inertial measurement prediction. Full article
(This article belongs to the Special Issue Advanced Inertial Sensors, Navigation, and Fusion)

22 pages, 12196 KiB  
Article
A Roadheader Positioning Method Based on Multi-Sensor Fusion
by Haoran Wang, Zhenglong Li, Hongwei Wang, Wenyan Cao, Fujing Zhang and Yuheng Wang
Electronics 2023, 12(22), 4556; https://doi.org/10.3390/electronics12224556 - 7 Nov 2023
Cited by 3 | Viewed by 1824
Abstract
In coal mines, accurate positioning is vital for roadheader equipment. However, most roadheaders use a standalone strapdown inertial navigation system (SINS) which faces challenges like error accumulation, drift, initial alignment needs, temperature sensitivity, and the demand for high-quality sensors. In this paper, a roadheader Visual–Inertial Odometry (VIO) system is proposed, combining SINS and stereo visual odometry to adjust to coal mine environments. Given the inherently dimly lit conditions of coal mines, our system includes an image-enhancement module to preprocess images, aiding in feature matching for stereo visual odometry. Additionally, a Kalman filter merges the positional data from SINS and stereo visual odometry. When tested against three other methods on the KITTI and EuRoC datasets, our approach showed notable precision on the EBZ160M-2 Roadheader, with attitude errors less than 0.2751° and position discrepancies within 0.0328 m, proving its advantages over SINS. Full article
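
A minimal sketch of the SINS/stereo-VO fusion idea, with a position-only state and illustrative noise levels (the paper's filter design is not reproduced):

```python
import numpy as np

class PositionFusionKF:
    """Minimal linear Kalman filter fusing SINS-propagated position with stereo-VO fixes.

    State: 3D position. SINS provides the prediction (dead-reckoned displacement),
    visual odometry provides the measurement. Noise levels are assumed values.
    """
    def __init__(self, q_sins=0.05, r_vo=0.03):
        self.x = np.zeros(3)                 # fused position estimate
        self.P = np.eye(3)                   # estimate covariance
        self.Q = (q_sins ** 2) * np.eye(3)   # SINS process noise per step
        self.R = (r_vo ** 2) * np.eye(3)     # stereo-VO measurement noise

    def predict(self, sins_delta_pos):
        self.x = self.x + sins_delta_pos     # propagate with the inertial displacement
        self.P = self.P + self.Q

    def update(self, vo_position):
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)        # Kalman gain (measurement model H = I)
        self.x = self.x + K @ (vo_position - self.x)
        self.P = (np.eye(3) - K) @ self.P

# kf = PositionFusionKF(); kf.predict(d_sins); kf.update(p_vo)  # once per cycle
```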
