Open Access Article

Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots

by Varuna De Silva, Jamie Roche and Ahmet Kondoz
Institute for Digital Technologies, Loughborough University, London E15 2GZ, UK
* Author to whom correspondence should be addressed.
Sensors 2018, 18(8), 2730; https://doi.org/10.3390/s18082730
Received: 9 May 2018 / Revised: 6 August 2018 / Accepted: 15 August 2018 / Published: 20 August 2018
(This article belongs to the Special Issue Depth Sensors and 3D Vision)
Autonomous robots that assist humans in day-to-day living tasks are becoming increasingly popular. Autonomous mobile robots operate by sensing and perceiving their surrounding environment to make accurate driving decisions. A combination of several different sensors, such as LiDAR, radar, ultrasonic sensors and cameras, is utilized to sense the surrounding environment of autonomous vehicles. These heterogeneous sensors simultaneously capture various physical attributes of the environment. Such multimodality and redundancy of sensing need to be positively utilized for reliable and consistent perception of the environment through sensor data fusion. However, these multimodal sensor data streams differ from each other in many ways, such as temporal and spatial resolution, data format, and geometric alignment. For the subsequent perception algorithms to utilize the diversity offered by multimodal sensing, the data streams need to be spatially, geometrically and temporally aligned with each other. In this paper, we address the problem of fusing the outputs of a Light Detection and Ranging (LiDAR) scanner and a wide-angle monocular image sensor for free space detection. The outputs of the LiDAR scanner and the image sensor are of different spatial resolutions and need to be aligned with each other. A geometrical model is used to spatially align the two sensor outputs, followed by a Gaussian Process (GP) regression-based resolution matching algorithm to interpolate the missing data with quantifiable uncertainty. The results indicate that the proposed sensor data fusion framework significantly aids the subsequent perception steps, as illustrated by the performance improvement of an uncertainty-aware free space detection algorithm.
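To illustrate the kind of resolution matching the abstract describes, the following is a minimal, hypothetical sketch of Gaussian Process regression used to interpolate sparse, image-projected LiDAR depth samples onto a denser grid while returning a per-point uncertainty estimate. The function name, kernel choice, and all parameter values are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Hypothetical sketch: GP regression to upsample sparse LiDAR depth samples
# onto a denser image-aligned grid, with quantified uncertainty per estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def upsample_lidar_depth(pixel_coords, depths, query_coords):
    """Interpolate sparse depth measurements with quantifiable uncertainty.

    pixel_coords: (N, 2) image coordinates of projected LiDAR returns
    depths:       (N,)   measured depth values at those pixels
    query_coords: (M, 2) pixel locations where depth estimates are required
    Returns (mean, std), each of shape (M,).
    """
    # RBF kernel models smooth depth variation across the image plane;
    # WhiteKernel absorbs measurement noise. Hyperparameters are illustrative.
    kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=0.05)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(pixel_coords, depths)
    mean, std = gp.predict(query_coords, return_std=True)
    return mean, std

# Example usage with synthetic sparse samples interpolated over a coarse grid.
rng = np.random.default_rng(0)
sparse_xy = rng.uniform(0, 100, size=(50, 2))
sparse_depth = 5.0 + 0.02 * sparse_xy[:, 1] + rng.normal(0, 0.05, 50)
grid_x, grid_y = np.meshgrid(np.arange(0, 100, 5), np.arange(0, 100, 5))
queries = np.column_stack([grid_x.ravel(), grid_y.ravel()])
depth_mean, depth_std = upsample_lidar_depth(sparse_xy, sparse_depth, queries)
```

The per-query standard deviation is what makes the interpolation usable by an uncertainty-aware downstream step such as free space detection: regions far from any LiDAR return receive high uncertainty rather than a silently unreliable depth value.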
Keywords: sensor data fusion; depth sensing; LiDAR; Gaussian Process regression; free space detection; autonomous vehicles; assistive robots
MDPI and ACS Style

De Silva, V.; Roche, J.; Kondoz, A. Robust Fusion of LiDAR and Wide-Angle Camera Data for Autonomous Mobile Robots. Sensors 2018, 18, 2730. https://doi.org/10.3390/s18082730

