Open Access Article

Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles

1 Mechanical and Industrial Engineering, University of Illinois, Chicago, IL 60607, USA
2 Information and Decision Sciences, University of Illinois, Chicago, IL 60607, USA
* Authors to whom correspondence should be addressed.
Sensors 2019, 19(20), 4357; https://doi.org/10.3390/s19204357
Received: 13 August 2019 / Revised: 28 September 2019 / Accepted: 3 October 2019 / Published: 9 October 2019
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of this work focuses on improving accuracy; the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures perform very well under lab conditions with powerful computational resources; in real-world applications, however, they cannot be deployed on an embedded edge computer because of their high cost and computational demands. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, including road segmentation, obstacle detection, and tracking. This fusion framework combines a proposed encoder-decoder based Fully Convolutional Neural Network (FCNx) with a traditional Extended Kalman Filter (EKF) nonlinear state estimator, using a configuration of camera, LiDAR, and radar sensors best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular fusion system that remains robust in case of a sensor failure. The FCNx algorithm improves road detection accuracy over benchmark models while maintaining the real-time efficiency required by an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm outperforms baseline benchmark networks across a variety of environment scenarios. Moreover, the algorithm is implemented in a vehicle and tested on actual sensor data collected from that vehicle, performing real-time environment perception.
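
The abstract does not specify the EKF branch in detail, so the following is only a minimal illustrative sketch of how an EKF-based obstacle tracker commonly fuses LiDAR (Cartesian position) and radar (range, bearing, range rate) measurements. The state layout, noise values, and function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

# State x = [px, py, vx, vy]^T, constant-velocity motion model.
# All noise parameters below are illustrative assumptions.

def predict(x, P, dt, q=9.0):
    # Linear constant-velocity prediction step.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    # Process noise driven by an assumed acceleration variance q.
    G = np.array([[0.5 * dt**2, 0],
                  [0, 0.5 * dt**2],
                  [dt, 0],
                  [0, dt]])
    Q = G @ G.T * q
    return F @ x, F @ P @ F.T + Q

def update_lidar(x, P, z, r=0.0225):
    # LiDAR measures position directly, so this update is linear.
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    R = np.eye(2) * r
    y = z - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ H) @ P

def update_radar(x, P, z, R=np.diag([0.09, 0.0009, 0.09])):
    # Radar measures range, bearing, and range rate: a nonlinear h(x),
    # so the measurement model is linearized with its Jacobian
    # (the "extended" part of the EKF).
    px, py, vx, vy = x
    rho = np.hypot(px, py)
    h = np.array([rho, np.arctan2(py, px), (px * vx + py * vy) / rho])
    Hj = np.array([
        [px / rho, py / rho, 0, 0],
        [-py / rho**2, px / rho**2, 0, 0],
        [py * (vx * py - vy * px) / rho**3,
         px * (vy * px - vx * py) / rho**3,
         px / rho, py / rho],
    ])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # normalize bearing residual
    S = Hj @ P @ Hj.T + R
    K = P @ Hj.T @ np.linalg.inv(S)
    return x + K @ y, (np.eye(4) - K @ Hj) @ P
```

In such a tracker, predict() runs once per timestep and the appropriate update function is dispatched based on which sensor produced the measurement, which is what makes the pipeline degrade gracefully if one sensor fails.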
Keywords: environment perception; sensor fusion; LiDAR; road segmentation; fully convolutional network; object detection and tracking; extended Kalman filter; autonomous vehicle

MDPI and ACS Style

Shahian Jahromi, B.; Tulabandhula, T.; Cetin, S. Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles. Sensors 2019, 19, 4357.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
