
Representations and Benchmarking of Modern Visual SLAM Systems

by Yuchen Cao 1,2,†, Lan Hu 1,2,† and Laurent Kneip 2,*
1 Shanghai Institute of Microsystem and Information Technology, Chinese Academy of Sciences, Shanghai 200050, China
2 School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China
3 University of Chinese Academy of Sciences, Beijing 100049, China
Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2020, 20(9), 2572;
Received: 23 March 2020 / Revised: 27 April 2020 / Accepted: 28 April 2020 / Published: 30 April 2020
(This article belongs to the Special Issue Autonomous Mobile Robots: Real-Time Sensing, Navigation, and Control)
Simultaneous Localisation And Mapping (SLAM) has long been recognised as a core problem within countless emerging mobile applications that require intelligent interaction with, or navigation through, an environment. Classical solutions primarily aim at localisation and the reconstruction of a geometric 3D model of the scene. More recently, the community has increasingly investigated Spatial Artificial Intelligence (Spatial AI), an evolutionary paradigm pursuing the simultaneous recovery of object-level composition and semantic annotations of the recovered 3D model. Several interesting approaches have already been presented, producing object-level maps with both geometric and semantic properties rather than just accurate and robust localisation performance. As such, they require much broader ground truth information for validation purposes. We discuss the structure of the representations and optimisation problems involved in Spatial AI, and propose new synthetic datasets that, for the first time, include accurate ground truth information about the scene composition as well as individual object shapes and poses. We furthermore propose evaluation metrics for all aspects of such joint geometric-semantic representations and apply them to a new semantic SLAM framework. It is our hope that the introduction of these datasets and proper evaluation metrics will be instrumental in the evaluation of current and future Spatial AI systems and as such contribute substantially to the overall research progress on this important topic.
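To give a concrete flavour of the localisation side of SLAM benchmarking, a widely used metric is the root-mean-square Absolute Trajectory Error (ATE). The sketch below is illustrative only and is not taken from the paper; it assumes the estimated and ground-truth trajectories are already time-synchronised and aligned (real evaluations first align them, e.g. with a Horn/Umeyama fit), and the `ate_rmse` helper name is hypothetical.

```python
import math

def ate_rmse(gt_positions, est_positions):
    """RMS absolute trajectory error over paired 3D positions.

    Assumes both trajectories are the same length, time-synchronised,
    and expressed in a common (pre-aligned) reference frame.
    """
    if len(gt_positions) != len(est_positions) or not gt_positions:
        raise ValueError("trajectories must be non-empty and equally long")
    # Squared Euclidean distance between each paired position.
    sq_errors = [
        sum((g - e) ** 2 for g, e in zip(gt, est))
        for gt, est in zip(gt_positions, est_positions)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

For example, an estimate offset from the ground truth by one metre along z at every pose yields an ATE of exactly 1.0 m. Semantic and object-level aspects of a Spatial AI map require further metrics (e.g. per-object shape and pose error, label accuracy), which is precisely the gap the proposed benchmark addresses.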
Keywords: artificial intelligence; computer vision; SLAM; semantic scene understanding; visual localisation and mapping; spatial AI
MDPI and ACS Style

Cao, Y.; Hu, L.; Kneip, L. Representations and Benchmarking of Modern Visual SLAM Systems. Sensors 2020, 20, 2572.

AMA Style

Cao Y, Hu L, Kneip L. Representations and Benchmarking of Modern Visual SLAM Systems. Sensors. 2020; 20(9):2572.

Chicago/Turabian Style

Cao, Yuchen, Lan Hu, and Laurent Kneip. 2020. "Representations and Benchmarking of Modern Visual SLAM Systems" Sensors 20, no. 9: 2572.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
