
Fast 3D Semantic Mapping in Road Scenes

School of Instrument Science and Engineering, Southeast University, Nanjing 210096, Jiangsu, China
COSYS/LIVIC, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Li, X.; et al. Fast Semi-Dense 3D Semantic Mapping with Monocular Visual SLAM. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 385–390.
Appl. Sci. 2019, 9(4), 631.
Received: 30 December 2018 / Revised: 7 February 2019 / Accepted: 8 February 2019 / Published: 13 February 2019
(This article belongs to the Special Issue Intelligent Imaging and Analysis)
Fast 3D reconstruction with semantic information in road scenes is a key requirement for autonomous navigation, involving both the geometry and the appearance of a scene in the field of computer vision. In this work, we propose a fast 3D semantic mapping system based on monocular vision that fuses localization, mapping, and scene parsing. From visual sequences, it estimates the camera pose, computes depth, predicts semantic segmentation, and finally builds a 3D semantic map. Our system consists of three modules: a parallel visual Simultaneous Localization And Mapping (SLAM) and semantic segmentation module, an incremental semantic transfer from 2D images to the 3D point cloud, and a global optimization based on a Conditional Random Field (CRF). This heuristic approach improves the accuracy of 3D semantic labeling by exploiting spatial consistency at each step of the 3D reconstruction. In our framework, there is no need to run semantic inference on every frame of the sequence, since the semantically labeled 3D point cloud corresponds only to sparse reference frames. This reduces the computational cost and allows our mapping system to run online. We evaluate the system on road scenes, e.g., KITTI, and observe a significant speed-up in the inference stage by labeling directly on the 3D point cloud.
Keywords: 3D semantic mapping; incrementally probabilistic fusion; CRF regularization; road scenes
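The incremental 2D-to-3D semantic transfer described in the abstract can be illustrated by a recursive Bayesian label fusion on each map point: every 3D point keeps a class-probability vector that is multiplied by the 2D segmenter's per-pixel softmax output whenever a sparse reference frame observes it, then renormalized. The sketch below is a minimal illustration of that idea, not the paper's implementation; the class set, `SemanticPoint` class, and probability values are assumptions for demonstration.

```python
import numpy as np

NUM_CLASSES = 3  # e.g., road, vehicle, background (illustrative)

class SemanticPoint:
    """A 3D map point carrying a class-probability vector that is
    fused incrementally as new reference frames observe it."""

    def __init__(self):
        # Start from a uniform prior over semantic classes.
        self.prob = np.full(NUM_CLASSES, 1.0 / NUM_CLASSES)

    def fuse(self, frame_prob):
        """Recursive Bayesian update: multiply the stored distribution
        by the 2D segmentation's per-pixel class probabilities for the
        pixel this point projects onto, then renormalize. Only sparse
        reference frames trigger this, which keeps inference cheap."""
        self.prob *= frame_prob
        self.prob /= self.prob.sum()

    def label(self):
        """Current MAP estimate of the point's semantic class."""
        return int(np.argmax(self.prob))

# Usage: two reference frames both favor class 1 for this point.
p = SemanticPoint()
p.fuse(np.array([0.2, 0.7, 0.1]))
p.fuse(np.array([0.3, 0.6, 0.1]))
print(p.label())  # -> 1
```

A CRF regularization, as in the paper's third module, would then smooth these per-point distributions over neighboring points rather than taking the argmax independently.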
Li, X.; Wang, D.; Ao, H.; Belaroussi, R.; Gruyer, D. Fast 3D Semantic Mapping in Road Scenes. Appl. Sci. 2019, 9, 631.
