Search Results (9)

Search Parameters:
Keywords = semi-direct SLAM

19 pages, 1224 KiB  
Article
Sonar-Based Simultaneous Localization and Mapping Using the Semi-Direct Method
by Xu Han, Jinghao Sun, Shu Zhang, Junyu Dong and Hui Yu
J. Mar. Sci. Eng. 2024, 12(12), 2234; https://doi.org/10.3390/jmse12122234 - 5 Dec 2024
Viewed by 1759
Abstract
The SLAM problem is a common challenge faced by ROVs working underwater, with the key issue being the accurate estimation of pose. In this work, we make full use of the positional information of point clouds and the surrounding pixel data. To obtain better feature extraction results in specific directions, we propose a method that accelerates the computation of the two-dimensional SO-CFAR algorithm, with only a slight increase in time cost compared to the one-dimensional SO-CFAR. We develop a sonar semi-direct method, adapted from the direct method used in visual SLAM. With initialization from the ICP algorithm, we apply this method to further refine the pose estimation. To overcome the deficiencies of sonar images, we preprocess the images and reformulate the sonar imaging model in imitation of camera imaging models, further optimizing the pose by minimizing photometric error and fully leveraging pixel information. The improved front end and the accelerated two-dimensional SO-CFAR are assessed through quantitative experiments; the performance of SLAM in large real-world environments is assessed through qualitative experiments.
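The smallest-of CFAR detection at the core of the front end described above can be illustrated with a minimal one-dimensional sketch; the window sizes and threshold factor `alpha` below are illustrative assumptions, not values from the paper:

```python
def so_cfar_1d(signal, train=8, guard=2, alpha=3.0):
    """Smallest-of CFAR: estimate noise as the smaller of the leading and
    lagging training-window means, then flag cells above alpha * noise."""
    detections = []
    n = len(signal)
    for i in range(train + guard, n - train - guard):
        lead = signal[i - guard - train : i - guard]       # leading window
        lag = signal[i + guard + 1 : i + guard + 1 + train]  # lagging window
        noise = min(sum(lead) / train, sum(lag) / train)   # smallest-of estimate
        if signal[i] > alpha * noise:
            detections.append(i)
    return detections
```

The two-dimensional variant the paper accelerates would apply the same smallest-of rule over 2D training regions around each sonar image cell.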

27 pages, 8113 KiB  
Article
A Robust Semi-Direct 3D SLAM for Mobile Robot Based on Dense Optical Flow in Dynamic Scenes
by Bo Hu and Jingwen Luo
Biomimetics 2023, 8(4), 371; https://doi.org/10.3390/biomimetics8040371 - 16 Aug 2023
Cited by 4 | Viewed by 2010
Abstract
Dynamic objects introduce large accumulated errors into the pose estimation of mobile robots in dynamic scenes and cause the failure to build a map that is consistent with the surrounding environment. Along these lines, this paper presents a robust semi-direct 3D simultaneous localization and mapping (SLAM) algorithm for mobile robots based on dense optical flow. First, a preliminary estimate of the robot's pose is obtained using the sparse direct method, and the homography matrix is used to compensate the current frame image, reducing the image deformation caused by rotation during the robot's motion. Then, by calculating the dense optical-flow field of two adjacent frames and segmenting the dynamic regions in the scene with a dynamic threshold, the local map points projected within the dynamic regions are eliminated. On this basis, the robot's pose is optimized by minimizing the reprojection error. Moreover, a high-performance keyframe selection strategy is developed: keyframes are inserted when the robot's pose is successfully tracked, and feature points are extracted and matched to the keyframes for subsequent optimization and mapping. Considering that the direct method is subject to tracking failure in practical application scenarios, the feature points and map points of keyframes are employed for robot relocalization. Finally, all keyframes and map points are used as optimization variables for global bundle adjustment (BA), so as to construct a globally consistent 3D dense octree map. A series of simulations and experiments demonstrate the superior performance of the proposed algorithm.
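The dynamic-region step above can be sketched as thresholding the optical-flow magnitude and discarding map-point projections that fall inside the resulting mask; the grid layout, threshold, and function names here are hypothetical:

```python
def dynamic_mask(flow_mag, thresh):
    """Mark pixels whose optical-flow magnitude exceeds the dynamic threshold."""
    return [[m > thresh for m in row] for row in flow_mag]

def filter_points(points_uv, mask):
    """Keep only map-point projections (u, v) that fall outside dynamic regions."""
    return [(u, v) for (u, v) in points_uv if not mask[v][u]]
```

In the actual system the surviving projections would then feed the reprojection-error minimization.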
(This article belongs to the Special Issue Artificial Intelligence for Autonomous Robots 2023)

21 pages, 4974 KiB  
Article
Improving Visual SLAM by Combining SVO and ORB-SLAM2 with a Complementary Filter to Enhance Indoor Mini-Drone Localization under Varying Conditions
by Amin Basiri, Valerio Mariani and Luigi Glielmo
Drones 2023, 7(6), 404; https://doi.org/10.3390/drones7060404 - 19 Jun 2023
Cited by 10 | Viewed by 5674
Abstract
Mini-drones can be used for a variety of tasks, ranging from weather monitoring to package delivery, search and rescue, and recreation. In outdoor scenarios, they leverage the Global Positioning System (GPS) or similar systems for localization in order to preserve safety and performance. In indoor scenarios, technologies such as Visual Simultaneous Localization and Mapping (V-SLAM) are used instead. However, further advances are still required for mini-drone navigation, especially under stricter safety requirements. In this research, a novel method for enhancing indoor mini-drone localization performance is proposed. By merging ORB-SLAM2 (Oriented FAST and Rotated BRIEF SLAM) and Semi-Direct Monocular Visual Odometry (SVO) via an Adaptive Complementary Filter (ACF), the proposed strategy achieves better position estimates under various conditions (low light, low-surface-texture environments, and high flying speed), with average percentage errors 18.1% and 25.9% smaller than those of ORB-SLAM2 and SVO, respectively, against the ground truth.
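The core of such a fusion can be sketched as a complementary blend of the two position estimates; the gain `k` below is a fixed stand-in for the adaptive gain the ACF would compute from the current conditions:

```python
def complementary_fuse(p_orb, p_svo, k):
    """Blend two 3D position estimates with gain k in [0, 1]:
    k near 1 trusts the ORB-SLAM2 estimate, k near 0 trusts SVO."""
    return tuple(k * a + (1.0 - k) * b for a, b in zip(p_orb, p_svo))
```

An adaptive filter would raise or lower `k` per axis as one estimator degrades, e.g. favoring the direct method at high speed and the feature-based one in well-textured scenes.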
(This article belongs to the Special Issue Drone-Based Information Fusion to Improve Autonomous Navigation)

18 pages, 6088 KiB  
Article
Ceiling-View Semi-Direct Monocular Visual Odometry with Planar Constraint
by Yishen Wang, Shaoming Zhang and Jianmei Wang
Remote Sens. 2022, 14(21), 5447; https://doi.org/10.3390/rs14215447 - 29 Oct 2022
Cited by 6 | Viewed by 2795
Abstract
When a SLAM algorithm provides positioning services for a robot in an indoor scene, dynamic obstacles can interfere with the robot's observations. Observing the ceiling with an upward-looking camera, which has a stable field of view, helps the robot avoid the disturbance created by dynamic obstacles. Targeting indoor environments, we propose a new ceiling-view visual odometry method that introduces plane constraints as additional conditions. By exploiting the coplanar structural constraints of the features, our method achieves better accuracy and stability in ceiling scenes with repeated texture. Given a series of ceiling images, we first use the semi-direct method with the coplanar constraint to preliminarily calculate the relative pose between camera frames and then exploit the ceiling plane as an additional constraint. In this step, the photometric error and the geometric constraint are both optimized in a sliding window to further improve trajectory accuracy. Due to the lack of datasets for ceiling scenes, we also present a dataset for ceiling-view visual odometry, for which a LiDAR-inertial SLAM method provides the ground truth. Finally, through a real-scene test, we verify that, in the ceiling environment, our method outperforms existing visual odometry approaches.
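The geometric term added alongside the photometric error can be sketched as a point-to-plane penalty, assuming a unit plane normal and the plane equation n·x + d = 0; the weighting `lam` and function names are illustrative:

```python
def plane_residual(point, normal, d):
    """Signed distance of a 3D point to the plane n.x + d = 0 (unit normal)."""
    return sum(n * x for n, x in zip(normal, point)) + d

def coplanar_cost(points, normal, d, lam=1.0):
    """Geometric cost added to the photometric error: penalizes ceiling
    features that stray from the fitted ceiling plane."""
    return lam * sum(plane_residual(p, normal, d) ** 2 for p in points)
```

In a sliding-window optimizer this cost would be summed with the photometric term over all window frames.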

25 pages, 31903 KiB  
Article
Power Tower Inspection Simultaneous Localization and Mapping: A Monocular Semantic Positioning Approach for UAV Transmission Tower Inspection
by Zhiying Liu, Xiren Miao, Zhiqiang Xie, Hao Jiang and Jing Chen
Sensors 2022, 22(19), 7360; https://doi.org/10.3390/s22197360 - 28 Sep 2022
Cited by 7 | Viewed by 3066
Abstract
Realizing autonomous unmanned aerial vehicle (UAV) inspection is of great significance for power line maintenance. This paper introduces a scheme that uses the structure of a tower to realize visual geographic positioning of a UAV during tower inspection and presents a monocular semantic simultaneous localization and mapping (SLAM) framework, termed PTI-SLAM (power tower inspection SLAM), to cope with the challenges of the tower inspection scene. The proposed scheme utilizes prior knowledge of tower component geolocation and treats geographic positioning as the estimation of the transformation between SLAM coordinates and geographic coordinates. To accomplish robust positioning and semi-dense semantic mapping with limited computing power, PTI-SLAM combines the feature-based SLAM method with a fusion-based direct method and adopts a loosely coupled architecture for the semantic task and the SLAM task. The fusion-based direct method is specially designed to overcome the fragility of the direct method under the adverse conditions of the inspection scene. Experimental results show that PTI-SLAM inherits the robustness of the feature-based method and the semi-dense mapping ability of the direct method and achieves decimeter-level real-time positioning on the airborne system. The geographic positioning experiment indicates more competitive accuracy than previous visual approaches and manual UAV operation, demonstrating the potential of PTI-SLAM.

17 pages, 953 KiB  
Article
Direct and Indirect vSLAM Fusion for Augmented Reality
by Mohamed Outahar, Guillaume Moreau and Jean-Marie Normand
J. Imaging 2021, 7(8), 141; https://doi.org/10.3390/jimaging7080141 - 10 Aug 2021
Cited by 6 | Viewed by 3274
Abstract
Augmented reality (AR) is an emerging technology applied in many fields. One limitation that still prevents AR from being even more widely used is the accessibility of devices: the devices currently used are usually high-end, expensive glasses or mobile devices. vSLAM (visual simultaneous localization and mapping) algorithms circumvent this problem by requiring only relatively cheap cameras for AR. vSLAM algorithms can be classified as direct or indirect methods based on the type of data used. Each class of algorithm works optimally on a certain type of scene (e.g., textured or untextured), but unfortunately with little overlap. In this work, a method is proposed to fuse a direct and an indirect method in order to achieve higher robustness and to allow AR to move seamlessly between different types of scenes. Our method is tested on three datasets against state-of-the-art direct (LSD-SLAM), semi-direct (LCSD), and indirect (ORB-SLAM2) algorithms in two scenarios: trajectory planning and an AR scenario where a virtual object is displayed on top of the video feed. Results show that our fusion algorithm is generally as efficient as the best algorithm, both in terms of trajectory (mean error with respect to ground-truth trajectory measurements) and in terms of quality of the augmentation (robustness and stability). In short, we propose a fusion algorithm that, in our tests, takes the best of both the direct and indirect methods.
(This article belongs to the Special Issue Advanced Scene Perception for Augmented Reality)

18 pages, 5318 KiB  
Article
SD-VIS: A Fast and Accurate Semi-Direct Monocular Visual-Inertial Simultaneous Localization and Mapping (SLAM)
by Quanpan Liu, Zhengjie Wang and Huan Wang
Sensors 2020, 20(5), 1511; https://doi.org/10.3390/s20051511 - 9 Mar 2020
Cited by 9 | Viewed by 5589
Abstract
In practical applications, achieving a balance between high accuracy and computational efficiency is the main challenge for simultaneous localization and mapping (SLAM). To address this challenge, we propose SD-VIS, a novel fast and accurate semi-direct visual-inertial SLAM framework that can estimate camera motion and the structure of surrounding sparse scenes. In the initialization procedure, we align the pre-integrated IMU measurements with the visual images and calibrate the metric scale, initial velocity, gravity vector, and gyroscope bias using multiple-view geometry (MVG) theory based on the feature-based method. At the front end, keyframes are tracked by the feature-based method and used for back-end optimization and loop closure detection, while non-keyframes are used for fast tracking by the direct method. This strategy gives the system both the real-time performance of the direct method and the high accuracy and loop-closure detection ability of the feature-based method. At the back end, we propose a sliding-window-based tightly coupled optimization framework, which obtains more accurate state estimates by minimizing the visual and IMU measurement errors. To limit the computational complexity, we adopt a marginalization strategy to fix the number of keyframes in the sliding window. Experimental evaluation on the EuRoC dataset demonstrates the feasibility and superior real-time performance of SD-VIS. Compared with state-of-the-art SLAM systems, it achieves a better balance between accuracy and speed.
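The fixed-size sliding window can be sketched as follows; note that real systems marginalize the evicted keyframe's states into a prior factor rather than simply dropping them, which this sketch omits:

```python
from collections import deque

class SlidingWindow:
    """Fixed-size keyframe window: once the limit is exceeded, the oldest
    keyframe is evicted (stand-in for marginalization into a prior)."""
    def __init__(self, max_kf=10):
        self.kfs = deque()
        self.max_kf = max_kf

    def insert(self, kf):
        self.kfs.append(kf)
        if len(self.kfs) > self.max_kf:
            return self.kfs.popleft()  # keyframe to marginalize
        return None
```

Bounding the window size keeps the per-iteration cost of the tightly coupled optimization roughly constant regardless of trajectory length.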
(This article belongs to the Special Issue Visual and Camera Sensors)

20 pages, 7167 KiB  
Article
Dynamic-DSO: Direct Sparse Odometry Using Objects Semantic Information for Dynamic Environments
by Chao Sheng, Shuguo Pan, Wang Gao, Yong Tan and Tao Zhao
Appl. Sci. 2020, 10(4), 1467; https://doi.org/10.3390/app10041467 - 21 Feb 2020
Cited by 29 | Viewed by 6229
Abstract
Traditional Simultaneous Localization and Mapping (SLAM) (with loop closure detection) and Visual Odometry (VO) (without loop closure detection) are based on the static-environment assumption. When working in dynamic environments, they perform poorly, whether using direct methods or indirect (feature-point) methods. In this paper, we propose Dynamic-DSO, a semantic monocular direct visual odometry system based on DSO (Direct Sparse Odometry). The proposed system is implemented entirely with the direct method, unlike most current dynamic systems, which combine the indirect method with deep learning. First, convolutional neural networks (CNNs) are applied to the original RGB image to generate pixel-wise semantic information for dynamic objects. Then, based on this semantic information, dynamic candidate points are filtered out during keyframe candidate-point extraction; only static candidate points are retained in the tracking and optimization module to achieve accurate camera pose estimation in dynamic environments. The photometric errors of projection points falling in the dynamic regions of subsequent frames are removed from the total photometric error in the pyramid motion-tracking model. Finally, sliding-window optimization, which neglects the photometric error computed in the dynamic region of each keyframe, is applied to obtain a precise camera pose. Experiments on the public TUM dynamic dataset and a modified EuRoC dataset show that the positioning accuracy and robustness of the proposed Dynamic-DSO are significantly higher than those of the state-of-the-art direct method in dynamic environments, and the semi-dense point-cloud map constructed by Dynamic-DSO is clearer and more detailed.
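Removing dynamic-region terms from the photometric error can be sketched as below; the image grids and mask layout are illustrative, and DSO's actual error additionally uses pixel-patch patterns and a robust norm:

```python
def masked_photometric_error(I_ref, I_cur, points, dyn_mask):
    """Sum of squared intensity differences over candidate points,
    skipping points that project into semantically dynamic regions."""
    err = 0.0
    for (u, v) in points:
        if dyn_mask[v][u]:
            continue  # dynamic pixel: excluded from the total error
        err += (I_ref[v][u] - I_cur[v][u]) ** 2
    return err
```

The same mask would be applied both in pyramid motion tracking and in the sliding-window optimization over keyframes.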
(This article belongs to the Special Issue Mobile Robots Navigation Ⅱ)

18 pages, 1452 KiB  
Article
SDVL: Efficient and Accurate Semi-Direct Visual Localization
by Eduardo Perdices and José María Cañas
Sensors 2019, 19(2), 302; https://doi.org/10.3390/s19020302 - 14 Jan 2019
Cited by 9 | Viewed by 4188
Abstract
Visual Simultaneous Localization and Mapping (SLAM) approaches have achieved major breakthroughs in recent years. This paper presents a new monocular visual odometry algorithm able to localize a robot or a camera in 3D inside an unknown environment in real time, even on slow processors such as those used in unmanned aerial vehicles (UAVs) or cell phones. The so-called semi-direct visual localization (SDVL) approach focuses on localization accuracy and uses semi-direct methods to increase feature-matching efficiency. It uses inverse-depth 3D point parameterization. The tracking thread includes a motion model, direct image alignment, and optimized feature matching. Additionally, an outlier rejection mechanism (ORM) has been implemented to rule out misplaced features, improving accuracy, especially in partially dynamic environments. A relocalization module is also included while preserving real-time operation. The mapping thread performs automatic map initialization via homography, sampled integration of new points, and selective map optimization. The proposed algorithm was experimentally tested on international datasets and compared to state-of-the-art algorithms.
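Inverse-depth parameterization stores a landmark as an anchor position, a bearing direction, and an inverse depth rho, which behaves better than Euclidean depth for distant or newly initialized points. A minimal sketch of recovering the 3D point (names are illustrative):

```python
def inverse_depth_point(anchor, direction, rho):
    """Recover a 3D point from inverse depth rho along a bearing
    direction from the anchor camera centre (depth = 1 / rho)."""
    return tuple(a + d / rho for a, d in zip(anchor, direction))
```

As rho approaches zero the point smoothly recedes to infinity, so uncertain far-away features stay numerically well conditioned in the optimizer.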
(This article belongs to the Section Physical Sensors)
