Best Practice in Simultaneous Localization and Mapping (SLAM)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: closed (31 March 2022) | Viewed by 25,925

Special Issue Editors


Dr. Michał R. Nowicki
Guest Editor
Institute of Robotics and Machine Intelligence, Poznan University of Technology, 60-965 Poznan, Poland
Interests: SLAM; computer vision; walking robots; robot perception; sensor calibration

Prof. Giorgio Grisetti
Guest Editor
Department of Systems and Computer Science, Sapienza University of Rome, Via Ariosto 25, I-00185 Rome, Italy
Interests: SLAM; localization; mapping; robot perception

Dr. Marco Camurri
Guest Editor
Oxford Robotics Institute, University of Oxford, Oxford OX2 6NN, UK
Interests: robot state estimation; legged robots

Special Issue Information

Dear Colleagues,

Recent years have seen SLAM algorithms deployed on a wide variety of platforms, with increasing levels of accuracy and robustness. The range of applications is vast and enables legged or wheeled ground vehicles, drones, and underwater robots to accomplish complex tasks such as warehouse management, autonomous driving, house cleaning, and more. Transitioning from controlled lab environments to in-field deployment is very challenging and is often made possible by accumulating solid expertise on practical aspects of SLAM, such as time synchronization, transmission delays, sensor acquisition, noise, adverse operating conditions, and model uncertainties. Solutions to these problems are often the result of knowledge and expertise accumulated and consolidated over time within a specific research team. Sharing this technical expertise with the research community in the form of best practices would be highly beneficial, as it is a key enabler for unleashing the true potential of robot autonomy.

Developments in SLAM also coincide with further developments in autonomous robots: the availability of ubiquitous localization techniques drives progress in the planning and control domains that, together with localization, form an autonomous agent. In this Special Issue, we are therefore also interested in new developments and challenges faced when dealing with autonomy in real-life applications. More broadly, we kindly encourage submissions dealing with methods that are critical components of real-life operation, such as calibration solutions.

The goal of this Special Issue is to invite high-quality, state-of-the-art research papers that address challenging issues in SLAM. We solicit original papers reporting unpublished, completed research that is not currently under review by any other conference, magazine, or journal. Topics of interest include, but are not limited to, the following:

  • Best practice in visual, LiDAR, and radar SLAM;
  • Fault detection and recovery in SLAM;
  • Tight integration of SLAM algorithms with planning and control systems;
  • Emerging SLAM application domains and their inherent challenges;
  • Calibration methods for single and multi-cue SLAM systems;
  • SLAM with non-conventional sensors;
  • Core SLAM aspects:
    • Position tracking;
    • Optimization;
    • Uncertainty estimation (including marginalization);
    • Data association;
    • Loop closing;
  • Learning-based methods to enhance model-based SLAM systems.

Dr. Michał R. Nowicki
Prof. Giorgio Grisetti
Dr. Marco Camurri
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (7 papers)


Research

37 pages, 9421 KiB  
Article
2D SLAM Algorithms Characterization, Calibration, and Comparison Considering Pose Error, Map Accuracy as Well as CPU and Memory Usage
by Kevin Trejos, Laura Rincón, Miguel Bolaños, José Fallas and Leonardo Marín
Sensors 2022, 22(18), 6903; https://doi.org/10.3390/s22186903 - 13 Sep 2022
Cited by 10 | Viewed by 5787
Abstract
The present work proposes a method to characterize, calibrate, and compare any 2D SLAM algorithm, providing strong statistical evidence based on descriptive and inferential statistics to establish confidence levels about the overall behavior of the algorithms and their comparisons. This work focuses on characterizing, calibrating, and comparing the Cartographer, Gmapping, HECTOR-SLAM, KARTO-SLAM, and RTAB-Map SLAM algorithms. Four metrics were used: pose error, map accuracy, CPU usage, and memory usage. To characterize the algorithms against these metrics, Plackett–Burman and factorial experiments were performed, and the improvement after characterization and calibration was verified using hypothesis tests, supported by the central limit theorem.
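For readers who want to reproduce the statistical flavor of such a comparison, the sketch below compares the per-trial pose errors of two SLAM configurations with a paired test. The data, configuration names, and significance threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: paired comparison of per-trial pose errors (RMSE, in meters)
# for two SLAM configurations. All numbers below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
errors_baseline = rng.normal(0.25, 0.05, size=30)    # e.g., before calibration
errors_calibrated = rng.normal(0.20, 0.05, size=30)  # e.g., after calibration

# Paired t-test, assuming both configurations were run on the same 30 trials.
t_stat, p_value = stats.ttest_rel(errors_baseline, errors_calibrated)

# Wilcoxon signed-rank test as a non-parametric sanity check.
w_stat, w_p = stats.wilcoxon(errors_baseline, errors_calibrated)

alpha = 0.05  # illustrative significance level
print(f"paired t-test: t={t_stat:.2f}, p={p_value:.4f}")
print(f"wilcoxon:      W={w_stat:.1f}, p={w_p:.4f}")
if p_value < alpha:
    print("Reject H0: the calibrated configuration has a different mean pose error.")
```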

17 pages, 1021 KiB  
Article
Zonotopic Linear Parameter Varying SLAM Applied to Autonomous Vehicles
by Marc Facerias, Vicenç Puig and Eugenio Alcala
Sensors 2022, 22(10), 3672; https://doi.org/10.3390/s22103672 - 11 May 2022
Cited by 4 | Viewed by 1378
Abstract
This article presents an approach to address the problem of localisation within the autonomous driving framework. In particular, this work takes advantage of the properties of polytopic Linear Parameter Varying (LPV) systems and set-based methodologies applied to Kalman filters to precisely locate both a set of landmarks and the vehicle itself. Using these techniques, we present an alternative approach to localisation algorithms that relies on the use of zonotopes to provide a guaranteed estimation of the states of the vehicle and its surroundings, which does not depend on any assumption about the nature of the noise other than its bounds. LPV theory is used to model the dynamics of the vehicle and to implement both an LPV model predictive controller and a zonotopic Kalman filter that allow localisation and navigation of the robot. The control and estimation scheme is validated in simulation using the Robot Operating System (ROS) framework, where its effectiveness is demonstrated.
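As a rough illustration of the zonotopic state bounding the abstract refers to, the sketch below propagates a zonotope ⟨c, G⟩ through a linear prediction step and inflates it with bounded process noise. The system matrices and bounds are arbitrary placeholders; the full zonotopic Kalman filter in the paper also includes a measurement update and a generator-reduction step that are omitted here.

```python
# Minimal sketch: prediction step of a zonotopic set-membership estimator.
# A zonotope <c, G> is the set {c + G*xi : ||xi||_inf <= 1}.
import numpy as np

def zonotope_predict(c, G, A, B, u, W):
    """Propagate <c, G> through x+ = A x + B u + w, with w in the zonotope <0, W>."""
    c_next = A @ c + B @ u             # center follows the nominal dynamics
    G_next = np.hstack((A @ G, W))     # Minkowski sum appends the noise generators
    return c_next, G_next

# Placeholder 2D kinematic example (all values are illustrative only).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
c = np.array([0.0, 1.0])               # initial state estimate (position, velocity)
G = np.diag([0.1, 0.05])               # initial uncertainty generators
W = np.diag([0.01, 0.02])              # bounded process-noise generators

for _ in range(10):
    c, G = zonotope_predict(c, G, A, B, u=np.array([0.2]), W=W)

# Interval hull of the final zonotope: c +/- row-wise sum of |generators|.
radius = np.sum(np.abs(G), axis=1)
print("state bounds:", np.stack((c - radius, c + radius), axis=1))
```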

20 pages, 7922 KiB  
Article
Autonomous Vehicle State Estimation and Mapping Using Takagi–Sugeno Modeling Approach
by Shivam Chaubey and Vicenç Puig
Sensors 2022, 22(9), 3399; https://doi.org/10.3390/s22093399 - 28 Apr 2022
Cited by 4 | Viewed by 1784
Abstract
This paper proposes an optimal approach to state estimation based on the Takagi–Sugeno (TS) Kalman filter, using measurement sensors and a rough pose obtained from LIDAR scan end-point matching. To obtain a stable and optimal TS Kalman gain for the estimator design, a linear matrix inequality (LMI), constructed from the Lyapunov stability criterion and a dual linear quadratic regulator (LQR), is optimized. The technique utilizes a Takagi–Sugeno (TS) representation of the system, which allows modeling the complex nonlinear dynamics in such a way that linearization is not required for the estimator or controller design. In addition, the TS fuzzy representation is exploited to obtain a real-time Kalman gain, avoiding the expensive optimization of LMIs at every step. The estimation scheme is integrated with a nonlinear model predictive controller (NMPC) that is in charge of controlling the vehicle. The approach is demonstrated in simulation, and a small-scale autonomous car is used for practical validation.
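To make the gain-scheduling idea concrete, here is a hedged sketch of how precomputed vertex Kalman gains of a Takagi–Sugeno model could be blended online by the membership functions. The two-vertex model, the membership rule, and all matrices are invented for illustration and do not come from the paper, whose gains are designed offline via the LMI/dual-LQR formulation.

```python
# Minimal sketch: online Takagi-Sugeno blending of precomputed Kalman gains.
# An offline LMI design would provide one gain K_i per model vertex; at run
# time the gains are interpolated with the membership functions mu_i(rho).
import numpy as np

# Two-vertex TS model over a scheduling variable rho in [rho_min, rho_max].
rho_min, rho_max = 0.5, 2.0
A_vertices = [np.array([[1.0, 0.1], [0.0, 0.9]]),
              np.array([[1.0, 0.1], [0.0, 0.6]])]       # illustrative dynamics
K_vertices = [np.array([[0.30], [0.10]]),
              np.array([[0.45], [0.20]])]                # "offline" gains (made up)
C = np.array([[1.0, 0.0]])

def memberships(rho):
    mu1 = (rho_max - rho) / (rho_max - rho_min)          # affine memberships
    return np.array([mu1, 1.0 - mu1])

def ts_estimate_step(x_hat, y, rho):
    mu = memberships(np.clip(rho, rho_min, rho_max))
    A = sum(m * Ai for m, Ai in zip(mu, A_vertices))     # blended dynamics
    K = sum(m * Ki for m, Ki in zip(mu, K_vertices))     # blended Kalman gain
    x_pred = A @ x_hat
    return x_pred + K @ (y - C @ x_pred)                 # predict + correct

x_hat = np.array([[0.0], [0.0]])
for y, rho in [(0.8, 0.7), (1.1, 1.3), (1.0, 1.9)]:      # fake measurements/schedule
    x_hat = ts_estimate_step(x_hat, np.array([[y]]), rho)
print("estimated state:", x_hat.ravel())
```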

24 pages, 36055 KiB  
Article
VDBFusion: Flexible and Efficient TSDF Integration of Range Sensor Data
by Ignacio Vizzo, Tiziano Guadagnino, Jens Behley and Cyrill Stachniss
Sensors 2022, 22(3), 1296; https://doi.org/10.3390/s22031296 - 8 Feb 2022
Cited by 28 | Viewed by 5811
Abstract
Mapping is a crucial task in robotics and a fundamental building block of most mobile systems deployed in the real world. Robots use different environment representations depending on their task and sensor setup. This paper showcases a practical approach to volumetric surface reconstruction based on truncated signed distance functions, also called TSDFs. We revisit the basics of this mapping technique and offer an approach for building effective and efficient real-world mapping systems. In contrast to most state-of-the-art SLAM and mapping approaches, we make no assumptions about the size of the environment or the employed range sensor. Unlike most other approaches, we introduce an effective system that works in multiple domains using different sensors. To achieve this, we build upon the Academy-Award-winning OpenVDB library used in filmmaking to realize an effective 3D map representation. Based on this, our proposed system is flexible and highly effective and, in the end, capable of integrating point clouds from a 64-beam LiDAR sensor at 20 frames per second using a single-core CPU. Along with this publication comes an easy-to-use C++ and Python library to quickly and efficiently solve volumetric mapping problems with TSDFs.
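The authors ship VDBFusion with a Python binding; the sketch below shows roughly how such a TSDF volume is fed with point clouds and poses. The parameter values and the fake scan are arbitrary, and the exact constructor/method signatures are written from memory of the public API, so they should be checked against the project's README.

```python
# Hedged usage sketch of the vdbfusion Python package (pip install vdbfusion).
# Signatures are assumptions and may need adjusting to the released API.
import numpy as np
import vdbfusion

# TSDF volume: 10 cm voxels, 30 cm truncation band (illustrative values).
volume = vdbfusion.VDBVolume(voxel_size=0.10, sdf_trunc=0.30, space_carving=False)

def integrate_scan(volume, points_sensor, T_world_sensor):
    """points_sensor: (N, 3) array in the sensor frame,
    T_world_sensor: (4, 4) pose of the sensor in the world frame."""
    # Transform the points into the world frame by hand and pass the full pose,
    # which the library can use to recover the sensor origin for integration.
    R, t = T_world_sensor[:3, :3], T_world_sensor[:3, 3]
    points_world = points_sensor @ R.T + t
    volume.integrate(points_world, T_world_sensor)

# Fake data: one flat "floor" scan taken at the origin.
scan = np.random.uniform(-5, 5, size=(10000, 3)).astype(np.float64)
scan[:, 2] = 0.0
integrate_scan(volume, scan, np.eye(4))

vertices, triangles = volume.extract_triangle_mesh()
print(f"mesh with {len(vertices)} vertices and {len(triangles)} triangles")
```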

15 pages, 146437 KiB  
Article
AUV Navigation Correction Based on Automated Multibeam Tile Matching
by Jochen Mohrmann and Jens Greinert
Sensors 2022, 22(3), 954; https://doi.org/10.3390/s22030954 - 26 Jan 2022
Cited by 1 | Viewed by 2644
Abstract
Ocean science and hydroacoustic seafloor mapping rely on accurate navigation underwater. By exploiting terrain information provided by a multibeam echosounder system, it is possible to significantly improve map quality. This article presents an algorithm capable of improving map quality and accuracy by aligning consecutive pings into tiles that are matched pairwise. A globally consistent solution is calculated from these matches. The proposed method has the potential to be used online in addition to other navigation solutions, but is mainly targeted at post-processing. The algorithm was tested with different parameter settings on an AUV dataset and a ship-based dataset. The ship-based dataset is publicly available as a benchmark: the original accurate navigation, serving as ground truth, is provided alongside trajectories that include artificial drift. This allows quantitative comparisons between algorithms and parameter settings.
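The paper's tile-matching pipeline is specific to multibeam bathymetry, but the pairwise registration step it builds on can be approximated with a generic point-to-plane ICP between two overlapping terrain tiles, as in the hedged Open3D sketch below. The tile data, search radius, and thresholds are placeholders, and this is not the authors' implementation.

```python
# Minimal sketch: pairwise alignment of two overlapping bathymetric tiles,
# treated as 3D point clouds, using point-to-plane ICP (Open3D). Illustrative only.
import numpy as np
import open3d as o3d

def tile_to_cloud(xyz):
    """xyz: (N, 3) array of (easting, northing, depth) soundings."""
    cloud = o3d.geometry.PointCloud()
    cloud.points = o3d.utility.Vector3dVector(xyz)
    cloud.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=5.0, max_nn=30))
    return cloud

def match_tiles(xyz_a, xyz_b, init=np.eye(4), max_corr_dist=2.0):
    """Return the rigid transform aligning tile B onto tile A and the ICP fitness."""
    a, b = tile_to_cloud(xyz_a), tile_to_cloud(xyz_b)
    result = o3d.pipelines.registration.registration_icp(
        b, a, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation, result.fitness

# Fake overlapping tiles: a gentle slope, with tile B shifted by a small "drift".
grid = np.stack(np.meshgrid(np.arange(0, 100, 1.0), np.arange(0, 100, 1.0)), -1).reshape(-1, 2)
depth = -50.0 + 0.05 * grid[:, 0:1] + 0.2 * np.sin(grid[:, 1:2] / 7.0)
tile_a = np.hstack((grid, depth))
tile_b = tile_a + np.array([1.5, -0.8, 0.0])   # simulated navigation drift

T, fitness = match_tiles(tile_a, tile_b)
print("estimated correction:\n", np.round(T, 3), "\nfitness:", fitness)
```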

18 pages, 2571 KiB  
Article
Stereo Visual Odometry Pose Correction through Unsupervised Deep Learning
by Sumin Zhang, Shouyi Lu, Rui He and Zhipeng Bao
Sensors 2021, 21(14), 4735; https://doi.org/10.3390/s21144735 - 11 Jul 2021
Cited by 8 | Viewed by 3184
Abstract
Visual simultaneous localization and mapping (VSLAM) plays a vital role in the field of positioning and navigation. At the heart of VSLAM is visual odometry (VO), which uses consecutive images to estimate the camera's ego-motion. However, due to the many assumptions of the classical VO system, robots can hardly operate in challenging environments. To address this challenge, we combine the multiview geometry constraints of the classical stereo VO system with the robustness of deep learning to present an unsupervised pose correction network for the classical stereo VO system. The pose correction network regresses a correction for the positioning error caused by violations of the modeling assumptions, making the classical stereo VO positioning more accurate. The pose correction network does not rely on a dataset with ground-truth poses for training, and it simultaneously generates a depth map and an explainability mask. Extensive experiments on the KITTI dataset show that the pose correction network can significantly improve the positioning accuracy of the classical stereo VO system. Notably, the corrected classical stereo VO system's average absolute trajectory error, average translational relative pose error, and average translational root-mean-square drift over lengths of 100–800 m in the KITTI dataset are 13.77 cm, 0.038 m, and 1.08%, respectively. Therefore, the improved stereo VO system has almost reached the state of the art.
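The core idea, applying a learned correction on top of the classical stereo-VO estimate, can be sketched as a simple SE(3) composition. The correction values and helper names below are hypothetical; the actual network, losses, and training pipeline from the paper are not reproduced.

```python
# Minimal sketch: composing a classical stereo-VO relative pose with a learned
# correction. The 6-DoF correction (3 translation + 3 rotation, axis-angle)
# would come from the correction network; here it is a hand-written placeholder.
import numpy as np

def se3_exp(xi):
    """Map a 6-vector [tx, ty, tz, rx, ry, rz] to a 4x4 transform (Rodrigues for the
    rotation, translation applied directly; adequate for small corrections)."""
    t, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        R = np.eye(3)
    else:
        k = w / theta
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Relative pose from the classical stereo VO front-end for one frame pair.
T_vo = se3_exp(np.array([0.98, 0.01, -0.02, 0.0, 0.002, 0.015]))

# Hypothetical output of the correction network for the same frame pair.
xi_correction = np.array([0.01, -0.003, 0.0, 0.0, -0.0005, 0.001])

# Corrected relative pose: right-multiply the VO estimate by the correction.
T_corrected = T_vo @ se3_exp(xi_correction)
print(np.round(T_corrected, 4))
```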

13 pages, 3905 KiB  
Article
Pose Estimation of Excavator Manipulator Based on Monocular Vision Marker System
by Jiangying Zhao, Yongbiao Hu and Mingrui Tian
Sensors 2021, 21(13), 4478; https://doi.org/10.3390/s21134478 - 30 Jun 2021
Cited by 21 | Viewed by 3326
Abstract
Excavation is one of the most widespread activities in the construction industry, and its safety and productivity are a frequent concern. To address these problems, it is necessary for construction sites to automatically monitor the poses of excavator manipulators in real time. Based on computer vision (CV) technology, an approach using a monocular camera and a marker is proposed to estimate the pose parameters (orientation and position) of the excavator manipulator. To simulate the pose estimation process, a measurement system was established with a common camera and marker. Comprehensive experiments and error analysis showed that the maximum detectable depth of the system is greater than 11 m, the orientation error is less than 8.5°, and the position error is less than 22 mm. A prototype of the system was tested, proving the feasibility of the proposed method. Furthermore, this study provides an alternative CV technology for monitoring construction machines.
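A generic version of the marker-based pose recovery described here can be written with OpenCV's solvePnP, given the detected image corners of a square marker of known size. The marker size, camera intrinsics, and corner coordinates below are made-up placeholders, and the detection step (e.g., via OpenCV's ArUco module) is assumed to have already run; this is not the paper's system.

```python
# Minimal sketch: pose of a square fiducial marker from its four detected corners.
# Camera intrinsics, marker size, and pixel coordinates are illustrative only.
import numpy as np
import cv2

MARKER_SIZE = 0.30  # marker edge length in meters (assumed)

# 3D corner coordinates in the marker frame (z = 0 plane), in the order
# expected by SOLVEPNP_IPPE_SQUARE: top-left, top-right, bottom-right, bottom-left.
object_points = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float64)

# Detected corner pixels for one marker (placeholder values, same order).
image_points = np.array([
    [612.0, 345.0], [688.0, 348.0], [684.0, 421.0], [608.0, 418.0]
], dtype=np.float64)

# Pinhole intrinsics (fx, fy, cx, cy) and zero distortion, assumed calibrated.
K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_IPPE_SQUARE)
if ok:
    R, _ = cv2.Rodrigues(rvec)          # rotation of the marker w.r.t. the camera
    print("marker distance [m]:", np.linalg.norm(tvec))
    print("marker orientation (rotation matrix):\n", np.round(R, 3))
```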
