Open Access Article

SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines

Institute of Automotive Technology, Technical University of Munich, Boltzmannstr. 15, 85748 Garching bei München, Germany
* Author to whom correspondence should be addressed.
Sensors 2019, 19(14), 3224; https://doi.org/10.3390/s19143224
Received: 12 June 2019 / Revised: 5 July 2019 / Accepted: 18 July 2019 / Published: 22 July 2019
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
Typically, lane departure warning systems rely on lane lines being present on the road.
However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either
not present or not sufficiently well signaled. In this work, we present a vision-based method to
locate a vehicle within the road when no lane lines are present using only RGB images as input.
To this end, we propose to fuse the outputs of a semantic segmentation architecture and a monocular
depth estimation architecture to locally reconstruct a semantic 3D point cloud of the viewed scene.
We retain only points belonging to the road and, additionally, to any fences or walls
present at the sides of the road. We then compute the width of the road at a certain
point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance.
Our system is suited to any kind of motoring scenario and is especially useful when lane lines are
not present on the road or do not signal the path correctly. The additional fence-to-fence distance
computation is complementary to the road’s width estimation. We quantitatively test our method
on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as
to compare our two proposed variants, namely the road’s width and the fence-to-fence distance
computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the
publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road,
thus demonstrating that our system can be deployed in a standard city-like environment. For the
benefit of the community, we make our software open source.
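The pipeline described above — back-projecting segmented road pixels into 3D using a per-pixel depth map and measuring the road's lateral extent at a point ahead of the vehicle — can be sketched as follows. This is a minimal illustration, not the authors' SemanticDepth implementation; the array shapes, label convention, and camera intrinsics are assumed for the example.

```python
import numpy as np

def road_width_at(depth, labels, fx, cx,
                  road_label=0, z_target=10.0, z_tol=0.5):
    """Estimate the road's width (in meters) at roughly z_target meters
    ahead of the camera, given a dense depth map and a per-pixel
    semantic label map of the same shape.

    Hypothetical parameters: fx/cx are the camera's focal length and
    principal-point x-coordinate in pixels; road_label marks road pixels.
    """
    h, w = depth.shape
    us, _ = np.meshgrid(np.arange(w), np.arange(h))

    # Keep only pixels the segmentation network labeled as road.
    mask = labels == road_label
    z = depth[mask]                       # forward distance per pixel
    x = (us[mask] - cx) * z / fx          # lateral coordinate via pinhole model

    # Take the slice of road points near the target distance ahead.
    band = np.abs(z - z_target) < z_tol
    if not band.any():
        return None
    # Road width = lateral extent of the road points in that slice.
    return float(x[band].max() - x[band].min())
```

The fence-to-fence distance variant described in the abstract would follow the same back-projection, only filtering on fence/wall labels instead of the road label.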
Keywords: autonomous driving; scene understanding; Advanced Driver Assistance Systems (ADAS); fusion architecture; deep learning; computer vision; semantic segmentation; monocular depth estimation; situational awareness
MDPI and ACS Style

Palafox, P.R.; Betz, J.; Nobis, F.; Riedl, K.; Lienkamp, M. SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines. Sensors 2019, 19, 3224.

