Search Results (27)

Search Parameters:
Keywords = range–visual–inertial odometry

24 pages, 7868 KB  
Article
An Indoor UAV Localization Framework with ESKF Tightly-Coupled Fusion and Multi-Epoch UWB Outlier Rejection
by Jianmin Zhao, Zhongliang Deng, Enwen Hu, Wenju Su, Boyang Lou and Yanxu Liu
Sensors 2025, 25(24), 7673; https://doi.org/10.3390/s25247673 - 18 Dec 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used indoors for inspection, security, and emergency tasks. Achieving accurate and robust localization under Global Navigation Satellite System (GNSS) unavailability and obstacle occlusions is therefore a critical challenge. Each sensing modality has inherent physical limitations: Inertial Measurement Unit (IMU)–based localization errors accumulate over time; Ultra-Wideband (UWB) measurements suffer from systematic biases in Non-Line-of-Sight (NLOS) environments; and Visual–Inertial Odometry (VIO) depends heavily on environmental features, making it susceptible to long-term drift. We propose a tightly coupled fusion framework based on the Error-State Kalman Filter (ESKF). Using an IMU motion model for prediction, the method incorporates raw UWB ranges, VIO relative poses, and TFmini altitude measurements in the update step. To suppress abnormal UWB measurements, a multi-epoch outlier rejection method constrained by VIO is developed, which robustly eliminates NLOS range measurements and effectively mitigates the influence of outliers on observation updates. This framework improves both observation quality and fusion stability. We validate the proposed method on a real-world platform in an underground parking garage. Experimental results demonstrate that, in complex indoor environments, the proposed approach exhibits significant advantages over existing algorithms, achieving higher localization accuracy and robustness while effectively suppressing UWB NLOS errors as well as IMU and VIO drift.
(This article belongs to the Section Navigation and Positioning)
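The multi-epoch, VIO-constrained UWB gating is the framework's main robustness mechanism. A minimal sketch of the idea, assuming a single anchor, a fixed gate threshold, and median voting over a short window (none of which are specified in the abstract):

```python
# Hypothetical sketch of VIO-constrained, multi-epoch UWB gating.
# Names and thresholds are assumptions, not the authors' implementation.
import numpy as np

def uwb_range_residual(p_est, anchor, measured_range):
    """Innovation of a raw UWB range against the predicted position."""
    predicted = np.linalg.norm(p_est - anchor)
    return measured_range - predicted

def accept_uwb(measurements, vio_positions, anchor, gate=0.5, window=5):
    """Reject a range if its innovation is inconsistent over several epochs.

    measurements:  list of ranges to one anchor, newest last
    vio_positions: VIO position estimates at the same epochs
    """
    recent = list(zip(measurements, vio_positions))[-window:]
    residuals = [abs(uwb_range_residual(p, anchor, r)) for r, p in recent]
    # An NLOS-biased anchor shows persistently large innovations against
    # the short-term drift-free VIO trajectory; gating on the median
    # prevents a single clean epoch from masking a biased anchor.
    return np.median(residuals) < gate
```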

24 pages, 1735 KB  
Article
A Multi-Sensor Fusion-Based Localization Method for a Magnetic Adhesion Wall-Climbing Robot
by Xiaowei Han, Hao Li, Nanmu Hui, Jiaying Zhang and Gaofeng Yue
Sensors 2025, 25(16), 5051; https://doi.org/10.3390/s25165051 - 14 Aug 2025
Abstract
To address the decline in the localization accuracy of magnetic adhesion wall-climbing robots operating on large steel structures, caused by visual occlusion, sensor drift, and environmental interference, this study proposes a simulation-based multi-sensor fusion localization method that integrates an Inertial Measurement Unit (IMU), Wheel Odometry (Odom), and Ultra-Wideband (UWB). An Extended Kalman Filter (EKF) is employed to integrate IMU and Odom measurements through a complementary filtering model, while a geometric residual-based weighting mechanism is introduced to optimize raw UWB ranging data. This enhances the accuracy and robustness of both the prediction and observation stages. All evaluations were conducted in a simulated environment, including scenarios on flat plates and spherical tank-shaped steel surfaces. The proposed method maintained a maximum localization error within 5 cm in both linear and closed-loop trajectories and achieved over 30% improvement in horizontal accuracy compared to baseline EKF-based approaches. The system exhibited consistent localization performance across varying surface geometries, providing technical support for robotic operations on large steel infrastructures.
(This article belongs to the Section Navigation and Positioning)
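The geometric residual-based weighting can be pictured as follows. This is an illustrative sketch with assumed Huber-style weights and constants, not the paper's implementation:

```python
# Illustrative sketch of geometric residual-based weighting of raw UWB
# ranges; the weighting function and sigma are assumptions.
import numpy as np

def weight_uwb_ranges(anchors, ranges, p_est, sigma=0.1):
    """Down-weight ranges whose geometric residual is large.

    anchors: (N, 3) anchor positions; ranges: (N,) measured distances;
    p_est:   current position estimate from the IMU/Odom EKF prediction.
    """
    residuals = np.abs(np.linalg.norm(anchors - p_est, axis=1) - ranges)
    # Huber-style weights: full trust inside ~sigma, hyperbolic decay
    # outside, so a single NLOS range cannot dominate the update.
    w = np.where(residuals < sigma, 1.0, sigma / np.maximum(residuals, 1e-9))
    return w / w.sum()
```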

23 pages, 4909 KB  
Article
Autonomous Navigation and Obstacle Avoidance for Orchard Spraying Robots: A Sensor-Fusion Approach with ArduPilot, ROS, and EKF
by Xinjie Zhu, Xiaoshun Zhao, Jingyan Liu, Weijun Feng and Xiaofei Fan
Agronomy 2025, 15(6), 1373; https://doi.org/10.3390/agronomy15061373 - 3 Jun 2025
Abstract
To address the challenges of low pesticide utilization, insufficient automation, and health risks in orchard plant protection, we developed an autonomous spraying vehicle using ArduPilot firmware and the Robot Operating System (ROS). The system tackles orchard navigation hurdles, including global navigation satellite system (GNSS) signal obstruction, light detection and ranging (LIDAR) simultaneous localization and mapping (SLAM) error accumulation, and lighting-limited visual positioning. A key innovation is the integration of an extended Kalman filter (EKF) to dynamically fuse T265 visual odometry, inertial measurement unit (IMU), and GPS data, overcoming single-sensor limitations and enhancing positioning robustness in complex environments. Additionally, the study optimizes the PID controller derivative parameters for the tracked chassis, improving acceleration/deceleration control smoothness. The system, composed of a Pixhawk 4, a Raspberry Pi 4B, a Silan S2L LIDAR, T265 visual odometry, and a Quectel EC200A 4G module, enables autonomous path planning, real-time obstacle avoidance, and multi-mission navigation. Indoor/outdoor tests and field experiments in Sun Village Orchard validated its autonomous cruising and obstacle avoidance capabilities under real-world orchard conditions, demonstrating feasibility for intelligent plant protection.
(This article belongs to the Special Issue Smart Pest Control for Building Farm Resilience)
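For context, external visual odometry such as the T265's is commonly fed to ArduPilot's EKF over MAVLink as an external navigation source. The sketch below shows that standard pattern; the connection string and update wiring are assumptions, not the paper's code:

```python
# Minimal sketch of bridging visual odometry to ArduPilot's EKF3 via the
# MAVLink VISION_POSITION_ESTIMATE message (pymavlink call is real; the
# endpoint and the rest of the ROS pipeline are assumed).
import time
from pymavlink import mavutil

master = mavutil.mavlink_connection('udpout:127.0.0.1:14550')

def send_vision_pose(x, y, z, roll, pitch, yaw):
    # Timestamps are microseconds; ArduPilot fuses this as external nav
    # when EKF3 is configured to accept vision position sources.
    master.mav.vision_position_estimate_send(
        int(time.time() * 1e6), x, y, z, roll, pitch, yaw)
```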

17 pages, 1922 KB  
Article
Enhancing Visual–Inertial Odometry Robustness and Accuracy in Challenging Environments
by Alessandro Minervini, Adrian Carrio and Giorgio Guglieri
Robotics 2025, 14(6), 71; https://doi.org/10.3390/robotics14060071 - 27 May 2025
Abstract
Visual–Inertial Odometry (VIO) algorithms are widely adopted for autonomous drone navigation in GNSS-denied environments. However, conventional monocular and stereo VIO setups often lack robustness under challenging environmental conditions or during aggressive maneuvers, due to the sensitivity of visual information to lighting, texture, and motion blur. In this work, we enhance an existing open-source VIO algorithm to improve both the robustness and accuracy of the pose estimation. First, we integrate an IMU-based motion prediction module to improve feature tracking across frames, particularly during high-speed movements. Second, we extend the algorithm to support a multi-camera setup, which significantly improves tracking performance in low-texture environments. Finally, to reduce the computational complexity, we introduce an adaptive feature selection strategy that dynamically adjusts the detection thresholds according to the number of detected features. Experimental results validate the proposed approaches, demonstrating notable improvements in both accuracy and robustness across a range of challenging scenarios.
(This article belongs to the Section Sensors and Control in Robotics)
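An IMU-based motion prediction module of this kind can be approximated by warping tracked features with the rotation integrated between frames, giving the tracker a better initial guess during fast motion. A sketch under a pure-rotation assumption, with an assumed intrinsic matrix K and sign convention:

```python
# Sketch of IMU-aided feature prediction via a rotation-only homography
# H = K * R^T * K^-1 (rotation direction depends on frame conventions).
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_features(pts, gyro, dt, K):
    """pts: (N, 2) pixel features; gyro: mean angular rate (rad/s)."""
    Rot = R.from_rotvec(gyro * dt).as_matrix()   # frame-to-frame rotation
    Kinv = np.linalg.inv(K)
    pts_h = np.c_[pts, np.ones(len(pts))]        # homogeneous pixels
    bearings = Kinv @ pts_h.T                    # unproject to rays
    warped = K @ (Rot.T @ bearings)              # rotate and reproject
    return (warped[:2] / warped[2]).T
```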

28 pages, 4077 KB  
Review
A Comprehensive Survey on Short-Distance Localization of UAVs
by Luka Kramarić, Niko Jelušić, Tomislav Radišić and Mario Muštra
Drones 2025, 9(3), 188; https://doi.org/10.3390/drones9030188 - 4 Mar 2025
Abstract
The localization of Unmanned Aerial Vehicles (UAVs) is a critical area of research, particularly in applications requiring high accuracy and reliability in Global Positioning System (GPS)-denied environments. This paper presents a comprehensive overview of short-distance localization methods for UAVs, exploring their strengths, limitations, and practical applications. Among short-distance localization methods, ultra-wideband (UWB) technology has gained significant attention due to its ability to provide accurate positioning, resistance to multipath interference, and low power consumption. Different approaches to the usage of UWB sensors, such as time of arrival (ToA), time difference of arrival (TDoA), and double-sided two-way ranging (DS-TWR), alongside their integration with complementary sensors like Inertial Measurement Units (IMUs), cameras, and visual odometry systems, are explored. Furthermore, this paper provides an evaluation of the key factors affecting UWB-based localization performance, including anchor placement, synchronization, and the challenges of combined use with other localization technologies. By highlighting current trends in UWB-related research, including its increasing use in swarm control, indoor navigation, and autonomous landing, this study can serve as a guide for researchers choosing appropriate localization techniques, and it underscores UWB's potential as a foundational technology in advanced UAV applications.
(This article belongs to the Special Issue Resilient Networking and Task Allocation for Drone Swarms)
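Of the ranging schemes surveyed, DS-TWR is the most common in practice because each node only measures local intervals, so the clock offset between nodes largely cancels. The standard asymmetric DS-TWR combination, with generic variable names:

```python
# Standard asymmetric double-sided two-way ranging (DS-TWR) formula,
# as used by common UWB chipsets; variable names are generic.
C = 299_702_547.0  # approximate speed of light in air, m/s

def ds_twr_distance(t_round1, t_reply1, t_round2, t_reply2):
    """Each t_* is a locally measured interval, so clock offset cancels
    and only relative clock drift enters at first order."""
    tof = ((t_round1 * t_round2) - (t_reply1 * t_reply2)) / \
          (t_round1 + t_round2 + t_reply1 + t_reply2)
    return tof * C
```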

16 pages, 6121 KB  
Article
Stereo Event-Based Visual–Inertial Odometry
by Kunfeng Wang, Kaichun Zhao, Wenshuai Lu and Zheng You
Sensors 2025, 25(3), 887; https://doi.org/10.3390/s25030887 - 31 Jan 2025
Abstract
Event-based cameras are a new type of vision sensor in which pixels operate independently and respond asynchronously to changes in brightness with microsecond resolution, instead of providing standard intensity frames. Compared with traditional cameras, event-based cameras have low latency, no motion blur, and high dynamic range (HDR), which makes it possible for robots to handle some challenging scenes. We propose a visual–inertial odometry method for stereo event-based cameras based on the Error-State Kalman Filter (ESKF). The vision module updates the pose by relying on the edge alignment of a semi-dense 3D map to a 2D image, while the IMU module updates the pose using median integration. We evaluate our method on public datasets with general 6-DoF motion (three-axis translation and three-axis rotation), comparing the results against the ground truth and against other methods; the comparisons demonstrate the effectiveness of our approach.
(This article belongs to the Section Intelligent Sensors)
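Median (midpoint) integration averages consecutive gyroscope and accelerometer samples before propagating the nominal state. A simplified sketch of one propagation step, not the paper's full ESKF:

```python
# Simplified midpoint ("median") IMU integration for nominal-state
# propagation; error-state covariance propagation is omitted.
import numpy as np
from scipy.spatial.transform import Rotation as R

G = np.array([0.0, 0.0, -9.81])

def imu_midpoint_step(p, v, q, acc0, acc1, gyr0, gyr1, dt):
    """p, v: position/velocity; q: scipy Rotation (body to world)."""
    w_mid = 0.5 * (gyr0 + gyr1)
    q_new = q * R.from_rotvec(w_mid * dt)          # local rotation increment
    # average the two accelerations after rotating each into the world frame
    a_mid = 0.5 * (q.apply(acc0) + q_new.apply(acc1)) + G
    v_new = v + a_mid * dt
    p_new = p + v * dt + 0.5 * a_mid * dt**2
    return p_new, v_new, q_new
```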

17 pages, 4607 KB  
Article
Event-Based Visual/Inertial Odometry for UAV Indoor Navigation
by Ahmed Elamin, Ahmed El-Rabbany and Sunil Jacob
Sensors 2025, 25(1), 61; https://doi.org/10.3390/s25010061 - 25 Dec 2024
Abstract
Indoor navigation is becoming increasingly essential for multiple applications. It is complex and challenging due to dynamic scenes, limited space, and, more importantly, the unavailability of global navigation satellite system (GNSS) signals. Recently, new sensors have emerged, namely event cameras, which show great potential for indoor navigation due to their high dynamic range and low latency. In this study, an event-based visual–inertial odometry approach is proposed, emphasizing adaptive event accumulation and selective keyframe updates to reduce computational overhead. The proposed approach fuses events, standard frames, and inertial measurements for precise indoor navigation. Features are detected and tracked on the standard images. The events are accumulated into frames and used to track the features between the standard frames. Subsequently, the IMU measurements and the feature tracks are fused to continuously estimate the sensor states. The proposed approach is evaluated using both simulated and real-world datasets. Compared with the state-of-the-art U-SLAM algorithm, our approach achieves a substantial reduction in the mean positional error and RMSE in simulated environments, showing up to 50% and 47% reductions along the x- and y-axes, respectively. The approach achieves 5–10 ms latency per event batch and 10–20 ms for frame updates, demonstrating real-time performance on resource-constrained platforms. These results underscore the potential of our approach as a robust solution for real-world UAV indoor navigation scenarios.
(This article belongs to the Special Issue Multi-sensor Integration for Navigation and Environmental Sensing)
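Adaptive event accumulation can be realized by closing an event frame on an event budget rather than a fixed time slice, so fast motion produces frames more often while slow motion avoids sparse, noisy frames. An illustrative sketch with assumed budget and timeout parameters:

```python
# Illustrative adaptive event accumulation; parameters are assumptions,
# not the paper's values.
import numpy as np

def accumulate_events(events, shape, budget=20000, max_dt=0.05):
    """events: list of (t, x, y, polarity) tuples, time-ordered.

    Returns one signed event frame plus the unconsumed events.
    """
    frame = np.zeros(shape, dtype=np.int16)
    t0 = None
    for i, (t, x, y, pol) in enumerate(events):
        t0 = t if t0 is None else t0
        frame[y, x] += 1 if pol else -1
        if (i + 1) >= budget or (t - t0) >= max_dt:
            return frame, events[i + 1:]
    return frame, []
```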

20 pages, 4436 KB  
Article
An Integrated Algorithm Fusing UWB Ranging Positioning and Visual–Inertial Information for Unmanned Vehicles
by Shuang Li, Lihui Wang, Baoguo Yu, Xiaohu Liang, Shitong Du, Yifan Li and Zihan Yang
Remote Sens. 2024, 16(23), 4530; https://doi.org/10.3390/rs16234530 - 3 Dec 2024
Abstract
During the execution of autonomous tasks within sheltered space environments, unmanned vehicles demand highly precise and seamless continuous positioning capabilities. While existing visual–inertial positioning methods can provide accurate poses over short distances, they are prone to error accumulation. Conversely, radio-based positioning techniques can offer absolute position information, yet they encounter difficulties in sheltered space scenarios and usually require three or more base stations for localization. To address these issues, a binocular vision/inertia/ultra-wideband (UWB) combined positioning method based on factor graph optimization is proposed. This approach incorporates UWB ranging and positioning information into the visual–inertial system. Based on a sliding window, it performs joint nonlinear optimization of multi-source data, including IMU measurements, visual features, and UWB ranging and positioning information. Relying on visual–inertial odometry, the methodology enables autonomous positioning without prior scene knowledge. When UWB base stations are available in the environment, their distance measurements or positioning information can be employed to establish global pose constraints in combination with visual–inertial odometry data. Through the joint optimization of UWB distance or positioning measurements and visual–inertial odometry data, the proposed method precisely determines the vehicle's position and effectively mitigates accumulated errors. The experimental results indicate that the positioning error of the proposed method is reduced by 51.4% compared to the traditional method, thereby fulfilling the requirements for the precise autonomous navigation of unmanned vehicles in sheltered spaces.
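In such a factor graph, each UWB range enters the sliding window as an absolute constraint alongside relative VIO constraints. A toy residual for a generic least-squares solver, with assumed noise scales (the paper's factors are considerably richer):

```python
# Toy sliding-window residual mixing absolute UWB range factors with
# relative VIO odometry factors; sigmas and layout are assumptions.
import numpy as np
from scipy.optimize import least_squares

def residuals(x, anchors, ranges, vio_rel, sigma_uwb=0.1, sigma_vio=0.02):
    """x: flattened window of K 3D positions."""
    P = x.reshape(-1, 3)
    res = []
    for k, p in enumerate(P):                 # absolute UWB range factors
        for a, r in zip(anchors, ranges[k]):
            res.append((np.linalg.norm(p - a) - r) / sigma_uwb)
    for k in range(len(P) - 1):               # relative VIO odometry factors
        res.extend((P[k + 1] - P[k] - vio_rel[k]) / sigma_vio)
    return np.asarray(res)

# sol = least_squares(residuals, x0, args=(anchors, ranges, vio_rel))
```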

7 pages, 3886 KB  
Proceeding Paper
Event/Visual/IMU Integration for UAV-Based Indoor Navigation
by Ahmed Elamin and Ahmed El-Rabbany
Proceedings 2024, 110(1), 2; https://doi.org/10.3390/proceedings2024110002 - 2 Dec 2024
Abstract
Unmanned aerial vehicle (UAV) navigation in indoor environments is challenging due to varying light conditions, the dynamic clutter typical of indoor spaces, and the absence of GNSS signals. In response to these complexities, emerging sensors, such as event cameras, demonstrate significant potential in indoor navigation with their low latency and high dynamic range characteristics. Unlike traditional RGB cameras, event cameras mitigate motion blur and operate effectively in low-light conditions. Nevertheless, they exhibit limitations in terms of information output during scenarios of limited motion, in contrast to standard cameras that can capture detailed surroundings. This study proposes a novel event-based visual–inertial odometry approach for precise indoor navigation. In the proposed approach, the standard images are leveraged for feature detection and tracking, while events are aggregated into frames to track features between consecutive standard frames. The fusion of IMU measurements and feature tracks facilitates the continuous estimation of sensor states. The proposed approach is evaluated and validated using a controlled office environment simulation developed using Gazebo, employing a P230 simulated drone equipped with an event camera, an RGB camera, and IMU sensors. This simulated environment provides a testbed for evaluating and showcasing the proposed approach's robust performance in realistic indoor navigation scenarios.
(This article belongs to the Proceedings of The 31st International Conference on Geoinformatics)
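Tracking features between consecutive standard frames via aggregated event frames can be done with ordinary pyramidal LK optical flow. The OpenCV calls below are real, but the pipeline wiring is an assumption based on the abstract:

```python
# Sketch of chaining LK flow through intermediate event frames; assumes
# grayscale uint8 standard frames and signed event-count frames.
import cv2
import numpy as np

def track_through_event_frames(prev_img, event_frames, pts):
    """pts: (N, 1, 2) float32 corners detected on the last standard frame."""
    cur_img, cur_pts = prev_img, pts
    for ev in event_frames:
        ev8 = cv2.normalize(ev, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cur_pts, status, _ = cv2.calcOpticalFlowPyrLK(cur_img, ev8, cur_pts, None)
        cur_pts = cur_pts[status.ravel() == 1].reshape(-1, 1, 2)
        cur_img = ev8
    return cur_pts  # predictions to match on the next standard frame
```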

14 pages, 4431 KB  
Article
Improved Multi-Sensor Fusion Dynamic Odometry Based on Neural Networks
by Lishu Luo, Fulun Peng and Longhui Dong
Sensors 2024, 24(19), 6193; https://doi.org/10.3390/s24196193 - 25 Sep 2024
Abstract
High-precision simultaneous localization and mapping (SLAM) in dynamic real-world environments plays a crucial role in autonomous robot navigation, self-driving cars, and drone control. To address this dynamic localization issue, this paper proposes a dynamic odometry method based on FAST-LIVO, a fast LiDAR (light detection and ranging)–inertial–visual odometry system, integrating neural networks with laser, camera, and inertial measurement unit modalities. The method first constructs visual–inertial and LiDAR–inertial odometry subsystems. Then, a lightweight neural network is used to remove dynamic elements from the visual part, and dynamic clustering is applied to the LiDAR part to eliminate dynamic environments, ensuring the reliability of the remaining environmental data. Validation on the datasets shows that the proposed multi-sensor fusion dynamic odometry can achieve high-precision pose estimation in complex dynamic environments with high continuity, reliability, and dynamic robustness.
(This article belongs to the Section Sensors and Robotics)
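Removing dynamic elements from the visual stream essentially reduces to discarding features that land on dynamic-class pixels of a segmentation mask. A toy sketch; the class ids and mask source are assumptions, since the abstract names no specific network:

```python
# Toy dynamic-feature filter given a per-pixel class-id mask from any
# lightweight segmentation network (hypothetical label ids).
import numpy as np

DYNAMIC_CLASSES = {11, 12, 13}  # e.g. person, rider, vehicle in some label map

def filter_dynamic_features(pts, seg_mask):
    """pts: (N, 2) pixel features; seg_mask: (H, W) class-id image."""
    xs = pts[:, 0].astype(int)
    ys = pts[:, 1].astype(int)
    keep = ~np.isin(seg_mask[ys, xs], list(DYNAMIC_CLASSES))
    return pts[keep]
```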

25 pages, 4182 KB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual–inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot's body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORB-SLAM3 algorithm reveals that, on the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results.
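The SE(2)/SE(3) parameterization lifts the planar body pose to a full camera pose through the fixed camera extrinsic, which is what removes the redundant degrees of freedom. A minimal sketch with a placeholder extrinsic:

```python
# Minimal sketch of the SE(2)-body / SE(3)-camera parameterization;
# T_bc below is a placeholder extrinsic, not the paper's calibration.
import numpy as np

def se2_to_se3(x, y, yaw):
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:2, 3] = [x, y]
    return T                      # body pose in the world, z = 0

T_bc = np.eye(4)                  # camera-in-body extrinsic (placeholder)

def camera_pose(x, y, yaw):
    # Only 3 DoF are estimated; roll, pitch, and height enter solely
    # through the fixed extrinsic T_bc.
    return se2_to_se3(x, y, yaw) @ T_bc
```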

22 pages, 5331 KB  
Article
Rapid Initialization Method of Unmanned Aerial Vehicle Swarm Based on VIO-UWB in Satellite Denial Environment
by Runmin Wang and Zhongliang Deng
Drones 2024, 8(7), 339; https://doi.org/10.3390/drones8070339 - 22 Jul 2024
Abstract
In environments where satellite signals are blocked, initializing UAV swarms quickly is a technical challenge, especially indoors or in areas with weak satellite signals, where it is difficult to establish the relative positions within the swarm. Two common initialization methods are joint SLAM initialization using cameras, which increases the communication burden due to image feature point analysis, and obtaining a rough positional relationship from prior information via a device such as a magnetic compass, which lacks accuracy. In recent years, visual–inertial odometry (VIO) technology has progressed significantly, providing new solutions: with improved computing power and enhanced VIO accuracy, it is now possible to establish the relative position relationship through the movement of the drones themselves. This paper proposes a two-stage robust initialization method for swarms of more than four UAVs, suitable for larger-scale satellite-denied scenarios. First, the paper analyzes the Cramér–Rao lower bound (CRLB) problem and the moving configuration problem of the cluster to determine the optimal anchor node for the algorithm. Subsequently, a strategy is used to screen anchor nodes that are close to the CRLB, and an optimization problem is constructed to solve the position relationship between anchor nodes from the relative motion and ranging relationships between UAVs. This optimization problem includes both quadratic and linear constraints and is a quadratically constrained quadratic programming (QCQP) problem, offering high robustness and high precision. After addressing the anchor node problem, the paper simplifies and improves a fast swarm cooperative positioning algorithm, which is faster than the traditional multidimensional scaling (MDS) algorithm. Theoretical simulations and real UAV tests demonstrate that the proposed algorithm outperforms existing approaches and effectively solves the UAV swarm initialization problem under satellite signal rejection.
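As a greatly simplified stand-in for the QCQP stage, one anchor's relative position can already be recovered from ranges taken at several VIO-known waypoints by plain nonlinear least squares (the actual method adds quadratic constraints and CRLB-based anchor screening):

```python
# Simplified anchor localization from motion + ranging; a toy baseline,
# not the paper's constrained QCQP formulation.
import numpy as np
from scipy.optimize import least_squares

def locate_anchor(waypoints, ranges):
    """waypoints: (K, 3) positions from VIO; ranges: (K,) to the anchor."""
    f = lambda a: np.linalg.norm(waypoints - a, axis=1) - ranges
    # initialize at the waypoint centroid; needs non-degenerate motion
    return least_squares(f, waypoints.mean(axis=0)).x
```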

23 pages, 8365 KB  
Article
Resilient Multi-Sensor UAV Navigation with a Hybrid Federated Fusion Architecture
by Sorin Andrei Negru, Patrick Geragersian, Ivan Petrunin and Weisi Guo
Sensors 2024, 24(3), 981; https://doi.org/10.3390/s24030981 - 2 Feb 2024
Abstract
Future UAV (unmanned aerial vehicle) operations in urban environments demand a PNT (position, navigation, and timing) solution that is both robust and resilient. While a GNSS (global navigation satellite system) can provide an accurate position under open-sky assumptions, the complexity of urban operations leads to NLOS (non-line-of-sight) and multipath effects, which in turn impact the accuracy of the PNT data. A key research question within the research community pertains to determining the appropriate hybrid fusion architecture that can ensure the resilience and continuity of UAV operations in urban environments, minimizing significant degradations of PNT data. In this context, we present a novel federated fusion architecture that integrates data from the GNSS, the IMU (inertial measurement unit), a monocular camera, and a barometer to cope with the GNSS multipath and positioning performance degradation. Within the federated fusion architecture, local filters are implemented using EKFs (extended Kalman filters), while a master filter is used in the form of a GRU (gated recurrent unit) block. Data collection is performed by setting up a virtual environment in AirSim for the visual odometry aid and barometer data, while Spirent GSS7000 hardware is used to collect the GNSS and IMU data. The hybrid fusion architecture is compared to a classic federated architecture (formed only by EKFs) and tested under different light and weather conditions to assess its resilience, including multipath and GNSS outages. The proposed solution demonstrates improved resilience and robustness in a range of degraded conditions while maintaining a good level of positioning performance with a 95th percentile error of 0.54 m for the square scenario and 1.72 m for the survey scenario.
(This article belongs to the Special Issue New Methods and Applications for UAVs)
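The hybrid element is the GRU master filter sitting above the local EKFs. A sketch of that block in PyTorch; the layer sizes and input layout are assumptions, since the abstract does not specify the network at this level:

```python
# Sketch of a GRU "master filter" fusing local EKF state estimates in a
# federated architecture; dimensions are assumptions.
import torch
import torch.nn as nn

class GRUMasterFilter(nn.Module):
    def __init__(self, n_local=3, state_dim=6, hidden=64):
        super().__init__()
        # input: concatenated state estimates from the n_local EKFs
        self.gru = nn.GRU(n_local * state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim)

    def forward(self, local_states):          # (B, T, n_local * state_dim)
        h, _ = self.gru(local_states)
        return self.head(h)                   # fused state per time step
```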

17 pages, 4870 KB  
Article
Deepwater 3D Measurements with a Novel Sensor System
by Christian Bräuer-Burchardt, Christoph Munkelt, Michael Bleier, Anja Baumann, Matthias Heinze, Ingo Gebhart, Peter Kühmstedt and Gunther Notni
Appl. Sci. 2024, 14(2), 557; https://doi.org/10.3390/app14020557 - 9 Jan 2024
Abstract
A novel 3D sensor system for underwater application is presented, primarily designed to carry out inspections on industrial facilities such as piping systems, offshore wind farm foundations, anchor chains, and other structures at depths of up to 1000 m. The 3D sensor system enables high-resolution 3D capture at a measuring volume of approximately 1 m³, as well as the simultaneous capture of color data using active stereo scanning with structured lighting, producing highly accurate and detailed 3D images for close-range inspection. Furthermore, the system uses visual–inertial odometry to map the seafloor and create a rough overall 3D model of the environment via Simultaneous Localization and Mapping (SLAM). For this reason, the system is also suitable for geological, biological, or archaeological applications in underwater areas. This article describes the overall system and data processing, as well as initial results regarding the measurement accuracy and applicability, from tests of the sensor system in a water basin and offshore with a Remotely Operated Vehicle (ROV) in the Baltic Sea.

46 pages, 9224 KB  
Article
LD-SLAM: A Robust and Accurate GNSS-Aided Multi-Map Method for Long-Distance Visual SLAM
by Dongdong Li, Fangbing Zhang, Jiaxiao Feng, Zhijun Wang, Jinghui Fan, Ye Li, Jing Li and Tao Yang
Remote Sens. 2023, 15(18), 4442; https://doi.org/10.3390/rs15184442 - 9 Sep 2023
Abstract
Continuous, robust, and precise localization is pivotal in enabling the autonomous operation of robots and aircraft in intricate environments, particularly in the absence of GNSS (global navigation satellite system) signals. However, commonly employed approaches, such as visual odometry and inertial navigation systems, encounter hindrances in achieving effective navigation and positioning due to error accumulation. Additionally, the challenge of managing extensive map creation and exploration arises when deploying these systems on unmanned aerial vehicle terminals. This study introduces an innovative system capable of conducting long-range and multi-map visual SLAM (simultaneous localization and mapping) using monocular cameras with pinhole and fisheye lens models. We formulate a graph optimization model integrating GNSS data and graphical information through multi-sensor fusion navigation and positioning technology. We propose partitioning SLAM maps based on map health status to augment accuracy and resilience in large-scale map generation. We introduce a multi-map matching and fusion algorithm leveraging geographical positioning and visual data to address excessive discrete mapping, which leads to resource wastage and reduced map-switching efficiency. Furthermore, a multi-map-based visual SLAM online localization algorithm is presented, adeptly managing and coordinating distinct geographical maps in different temporal and spatial domains. We employ a quadcopter to establish a testing system and generate an aerial image dataset spanning several kilometers. Our experiments exhibit the framework's noteworthy robustness and accuracy in long-distance navigation. For instance, our GNSS-assisted multi-map SLAM achieves an average accuracy of 1.5 m within a 20 km range during unmanned aerial vehicle flights.
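The GNSS side of such a graph reduces to converting each fix into a local frame and adding it as an absolute position factor. A toy sketch; pymap3d's geodetic2enu is a real call, but the residual wiring and noise scale are assumptions:

```python
# Toy GNSS position factor in a local ENU frame for a SLAM graph.
import numpy as np
import pymap3d as pm

def gnss_position_residual(p_slam, fix, origin, sigma=1.5):
    """p_slam: (3,) SLAM position; fix/origin: (lat, lon, alt) tuples."""
    enu = np.array(pm.geodetic2enu(*fix, *origin))
    return (p_slam - enu) / sigma   # whitened residual for the optimizer
```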