Open Access Article
Sensors 2018, 18(8), 2571; https://doi.org/10.3390/s18082571

Towards a Meaningful 3D Map Using a 3D Lidar and a Camera

1 School of Electrical and Electronic Engineering, Yonsei University, Seoul 03722, Korea
2 Department of Electrical Engineering, Changwon National University, Changwon-Si 51140, Korea
* Author to whom correspondence should be addressed.
Received: 30 May 2018 / Revised: 27 July 2018 / Accepted: 1 August 2018 / Published: 6 August 2018
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
Abstract

Semantic 3D maps are required for various applications, including robot navigation and surveying, and their importance has increased significantly. Most existing studies on semantic mapping are camera-based approaches that cannot operate in large-scale environments owing to their computational burden. Recently, combining a 3D Lidar with a camera was introduced to address this problem, and a 3D Lidar and a camera have also been utilized for semantic 3D mapping. In this study, we propose an algorithm that consists of semantic mapping and map refinement. In semantic mapping, a GPS and an IMU are integrated to estimate the odometry of the system, and the point clouds measured by a 3D Lidar are then registered using this information. Furthermore, we apply state-of-the-art CNN-based semantic segmentation to obtain semantic information about the surrounding environment. To integrate the point clouds with semantic information, we developed incremental semantic labeling, which includes coordinate alignment, error minimization, and semantic information fusion. Additionally, to improve the quality of the generated semantic map, map refinement is processed in a batch: it enhances the spatial distribution of labels and effectively removes traces produced by moving vehicles. Experiments on challenging sequences demonstrate that our algorithm outperforms state-of-the-art methods in terms of accuracy and intersection over union.
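The abstract's "semantic information fusion" step — accumulating per-point CNN labels from successive registered scans into one consistent map — can be illustrated with a minimal sketch. This is not the paper's implementation; the voxel size, class set, and majority-vote fusion rule below are illustrative assumptions about how labels from multiple scans might be combined.

```python
import numpy as np
from collections import defaultdict

NUM_CLASSES = 3   # e.g. road, building, vehicle (illustrative class set)
VOXEL_SIZE = 0.2  # meters; map resolution (assumed, not from the paper)

# voxel index -> per-class observation counts
label_counts = defaultdict(lambda: np.zeros(NUM_CLASSES, dtype=np.int64))

def fuse_scan(points_world, labels):
    """Accumulate one scan's semantic labels into the voxel map.

    points_world : (N, 3) Lidar points already registered into the
                   world frame (the GPS/IMU odometry step is not shown).
    labels       : (N,) class ids predicted by the segmentation CNN.
    """
    voxels = np.floor(points_world / VOXEL_SIZE).astype(np.int64)
    for v, c in zip(map(tuple, voxels), labels):
        label_counts[v][c] += 1

def map_labels():
    """Resolve each voxel to its most frequently observed class."""
    return {v: int(np.argmax(cnt)) for v, cnt in label_counts.items()}

# Two scans observing the same region with consistent labels:
fuse_scan(np.array([[0.05, 0.05, 0.0], [0.25, 0.05, 0.0]]),
          np.array([0, 1]))
fuse_scan(np.array([[0.06, 0.04, 0.0], [0.26, 0.06, 0.0]]),
          np.array([0, 1]))
fused = map_labels()
```

Counting observations per class, rather than keeping only the latest label, is what lets repeated scans outvote occasional CNN misclassifications; a batch refinement pass could then smooth labels spatially and drop voxels dominated by a moving-vehicle class.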
Keywords: 3D Lidar; large-scale mapping; map refinement; moving vehicle removal; semantic mapping; semantic reconstruction
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Jeong, J.; Yoon, T.S.; Park, J.B. Towards a Meaningful 3D Map Using a 3D Lidar and a Camera. Sensors 2018, 18, 2571.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, Published by MDPI AG, Basel, Switzerland