
Special Issue "Sensor Data Fusion for Autonomous and Connected Driving"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 15 November 2019.

Special Issue Editors

Dr. Dominique Gruyer
Guest Editor
Head of the LIVIC Lab, COSYS Department, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
Interests: autonomous driving; multi-sensor data fusion; cooperative systems; environment perception; extended perception; sensor simulation for ADAS prototyping
Dr. Olivier Orfila
Guest Editor
Deputy Director of the LIVIC Lab, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
Interests: autonomous driving; optimal path planning; eco-mobility; eco-consumption
Prof. Dr. Haihao Sun
Guest Editor
College of Transportation Engineering, Tongji University (Jiading Campus), Shanghai 201804, China
Interests: methods and means for traffic data collection and analysis; mobility simulation; transportation environment for connected autonomous vehicles
Prof. Dr. Homayoun Najjaran
Guest Editor
School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada
Interests: artificial intelligence, machine learning, and computer vision with applications in unmanned vehicles, robotics, and industrial automation

Special Issue Information

Dear Colleagues,

Over the last decades, the development of advanced driver assistance systems (ADAS) has become a critical endeavor to reach different objectives: safety enhancement, mobility improvement, energy optimization, and driver comfort. To tackle the first three objectives, considerable research focusing on autonomous driving has been carried out. Recent research and development on highly automated driving aims to ultimately replace the driver’s actions with robotic functions. Partially automated driving will require co-pilot applications involving a combination of the above methods, algorithms, and architectures. Such a system is built from complex, distributed, and cooperative architectures with strong properties such as reliability and robustness. These properties must be maintained despite complex and degraded working conditions, including adverse weather, fog, or dust that degrade sensor perception. This Special Issue will provide an overview of recent research related to sensor and data fusion, information processing and merging, and fusion architectures for the cooperative perception and risk assessment needed for autonomous mobility. Indeed, before a high level of safety can be ensured in the deployment of autonomous driving applications, it is necessary to guarantee high-quality and real-time perception mechanisms. Therefore, research contributions concerning new automotive sensors, AI for semantic information generation, and safe operation are welcome.

Dr. Dominique Gruyer
Dr. Olivier Orfila
Prof. Dr. Haihao Sun
Prof. Dr. Homayoun Najjaran
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor data fusion
  • information processing
  • cooperative perception
  • fusion for connected vehicles
  • autonomous driving
  • fusion architecture
  • smart sensors
  • AI for semantic information

Published Papers (4 papers)


Research

Open Access Article
Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles
Sensors 2019, 19(20), 4357; https://doi.org/10.3390/s19204357 - 09 Oct 2019
Abstract
Many sensor fusion frameworks have been proposed in the literature using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; however, the feasibility of implementing these frameworks in an autonomous vehicle is less explored. Some fusion architectures can perform very well in lab conditions using powerful computational resources; however, in real-world applications, they cannot be implemented on an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, such as road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors that are best suited to each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. The FCNx algorithm improves road detection accuracy compared to benchmark models while maintaining the real-time efficiency needed to run on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm was implemented in a vehicle and tested using actual sensor data collected from the vehicle, performing real-time environment perception.
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
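
As a rough illustration of the hybrid idea summarized above (a learned road-segmentation stage combined with a classical EKF tracker), the sketch below pairs a placeholder segmentation function with a constant-velocity extended Kalman filter. All names and interfaces (fcnx_road_mask, ConstantVelocityEKF) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a hybrid perception pipeline: a learned road-segmentation
# stage plus a classical EKF obstacle tracker. Names are illustrative only.
import numpy as np

def fcnx_road_mask(camera_frame: np.ndarray) -> np.ndarray:
    """Placeholder for the encoder-decoder network: returns a binary road mask."""
    # A real FCNx would be a trained CNN; here we simply threshold brightness.
    gray = camera_frame.mean(axis=2)
    return (gray > gray.mean()).astype(np.uint8)

class ConstantVelocityEKF:
    """EKF with state [x, y, vx, vy]; with linear motion and measurement models
    the 'extended' Jacobians reduce to constant matrices."""
    def __init__(self, dt: float = 0.1):
        self.x = np.zeros(4)                                   # state estimate
        self.P = np.eye(4)                                     # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt   # motion model
        self.Q = 0.01 * np.eye(4)                              # process noise
        self.H = np.eye(2, 4)                                  # measure position only
        self.R = 0.1 * np.eye(2)                               # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, z: np.ndarray):
        y = z - self.H @ self.x                    # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)   # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# Usage: segment the road from a camera frame, track a radar detection over time.
frame = np.random.rand(120, 160, 3)
mask = fcnx_road_mask(frame)
tracker = ConstantVelocityEKF()
for radar_xy in [np.array([10.0, 2.0]), np.array([10.5, 2.1])]:
    tracker.predict()
    tracker.update(radar_xy)
print(mask.shape, tracker.x)
```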

Open Access Article
Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility
Sensors 2019, 19(17), 3727; https://doi.org/10.3390/s19173727 - 28 Aug 2019
Abstract
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies which confirmed the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative than a regular RGB camera to a human driver in challenging visibility conditions. The main novelty of this paper is the idea of relying on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable in realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on the pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
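
The two-term objective described above can be sketched as an image-similarity term that keeps the fused output close to the RGB input plus an auxiliary pedestrian-detection term. The L1 and cross-entropy choices, the heatmap detector stub, and the weighting factor alpha are assumptions for illustration, not the paper's exact losses.

```python
# Sketch of a dual-objective training loss: natural appearance + pedestrian visibility.
import numpy as np

def similarity_loss(fused: np.ndarray, rgb: np.ndarray) -> float:
    """L1 distance between fused output and RGB input (natural appearance term)."""
    return float(np.abs(fused - rgb).mean())

def detection_loss(pred_heatmap: np.ndarray, gt_heatmap: np.ndarray) -> float:
    """Binary cross-entropy on a pedestrian heatmap (stand-in for the detector error)."""
    eps = 1e-7
    p = np.clip(pred_heatmap, eps, 1 - eps)
    return float(-(gt_heatmap * np.log(p) + (1 - gt_heatmap) * np.log(1 - p)).mean())

def total_loss(fused, rgb, pred_heatmap, gt_heatmap, alpha: float = 0.5) -> float:
    # alpha trades off natural appearance against pedestrian visibility.
    return similarity_loss(fused, rgb) + alpha * detection_loss(pred_heatmap, gt_heatmap)

# Usage with dummy tensors (H x W x 3 images, H x W heatmaps).
fused, rgb = np.random.rand(64, 64, 3), np.random.rand(64, 64, 3)
pred, gt = np.random.rand(64, 64), np.random.randint(0, 2, (64, 64)).astype(float)
print(total_loss(fused, rgb, pred, gt))
```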

Open Access Article
SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines
Sensors 2019, 19(14), 3224; https://doi.org/10.3390/s19143224 - 22 Jul 2019
Abstract
Typically, lane departure warning systems rely on lane lines being present on the road. However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either not present or not sufficiently well signaled. In this work, we present a vision-based method to locate a vehicle within the road when no lane lines are present using only RGB images as input. To this end, we propose to fuse together the outputs of a semantic segmentation and a monocular depth estimation architecture to locally reconstruct a semantic 3D point cloud of the viewed scene. We only retain points belonging to the road and, additionally, to any kind of fences or walls that might be present right at the sides of the road. We then compute the width of the road at a certain point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance. Our system is suited to any kind of motoring scenario and is especially useful when lane lines are not present on the road or do not signal the path correctly. The additional fence-to-fence distance computation is complementary to the road’s width estimation. We quantitatively test our method on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as to compare our two proposed variants, namely the road’s width and the fence-to-fence distance computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road, thus demonstrating that our system can be deployed in a standard city-like environment. For the benefit of the community, we make our software open source.
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
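
A rough sketch of the fusion step described above: combine a per-pixel semantic mask with a monocular depth map, back-project road and fence pixels through a pinhole camera model, and measure the lateral extent of the resulting point slice at a chosen forward distance (road width or fence-to-fence distance). The intrinsics, class ids, and helper names are illustrative assumptions, not the released SemanticDepth code.

```python
# Fuse a semantic mask with a depth map into a class-filtered 3D point cloud,
# then measure the lateral extent at a given forward distance.
import numpy as np

FX, FY, CX, CY = 700.0, 700.0, 320.0, 240.0   # assumed pinhole intrinsics
ROAD, FENCE = 1, 2                             # assumed class ids

def backproject(mask, depth, keep_classes):
    """Return Nx3 camera-frame points for pixels whose class is in keep_classes."""
    v, u = np.nonzero(np.isin(mask, keep_classes))
    z = depth[v, u]
    x = (u - CX) * z / FX                      # lateral coordinate
    y = (v - CY) * z / FY                      # vertical coordinate
    return np.stack([x, y, z], axis=1)

def lateral_extent(points, z_query, tol=0.5):
    """Width of the point slice around forward distance z_query (metres)."""
    sel = np.abs(points[:, 2] - z_query) < tol
    if not sel.any():
        return None
    return float(points[sel, 0].max() - points[sel, 0].min())

# Usage: road width and fence-to-fence distance 10 m ahead (dummy inputs).
mask = np.random.randint(0, 3, (480, 640))
depth = np.random.uniform(1.0, 30.0, (480, 640))
road_cloud = backproject(mask, depth, [ROAD])
fence_cloud = backproject(mask, depth, [FENCE])
print(lateral_extent(road_cloud, 10.0), lateral_extent(fence_cloud, 10.0))
```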

Open Access Article
Research on a Lane Compensation Method Based on Multi-Sensor Fusion
Sensors 2019, 19(7), 1584; https://doi.org/10.3390/s19071584 - 02 Apr 2019
Cited by 1
Abstract
The lane curvature output by the vision sensor can jump over a period of time because of shadows, changes in lighting, and broken lane lines, which leads to serious problems for unmanned driving control. It is therefore particularly important to predict or compensate for the real lane in real time during such sensor jumps. This paper presents a lane compensation method based on multi-sensor fusion of global positioning system (GPS), inertial measurement unit (IMU), and vision sensors. To compensate for the lane, a cubic polynomial function of the longitudinal distance is selected as the lane model. In this method, a Kalman filter is used to estimate vehicle velocity and yaw angle from GPS and IMU measurements, and a vehicle kinematics model is established to describe vehicle motion. The method uses the geometric relationship between the vehicle and the relative lane motion at the current moment to solve for the coefficients of the lane polynomial at the next moment. The simulation and vehicle test results show that the predicted lane can compensate for the failure of the vision sensor with good real-time performance, robustness, and accuracy.
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
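
The compensation idea can be sketched as follows, assuming the cubic lane model y(x) = c0 + c1*x + c2*x^2 + c3*x^3 in the vehicle frame and ego motion (speed and yaw rate) taken from the GPS/IMU Kalman filter. The sample-transform-refit approach below is an illustrative approximation, not the paper's closed-form coefficient update.

```python
# Propagate a cubic lane polynomial to the next time step using ego motion,
# so the lane estimate survives a temporary jump in the vision sensor output.
import numpy as np

def propagate_lane(coeffs, speed, yaw_rate, dt):
    """Re-express the cubic lane y(x) in the vehicle frame after dt seconds of motion."""
    # Ego displacement and heading change over dt (simple kinematic model).
    dpsi = yaw_rate * dt
    dx = speed * dt * np.cos(dpsi / 2)
    dy = speed * dt * np.sin(dpsi / 2)
    # Sample the old lane polynomial ahead of the vehicle.
    xs = np.linspace(0.0, 50.0, 50)
    ys = np.polyval(coeffs[::-1], xs)          # coeffs given as [c0, c1, c2, c3]
    # Rigidly transform the samples into the new vehicle frame.
    c, s = np.cos(-dpsi), np.sin(-dpsi)
    xn = c * (xs - dx) - s * (ys - dy)
    yn = s * (xs - dx) + c * (ys - dy)
    # Refit a cubic in the new frame; return as [c0, c1, c2, c3].
    return np.polyfit(xn, yn, 3)[::-1]

# Usage: predict the lane 0.1 s ahead while driving at 15 m/s with a slight turn.
c_now = np.array([0.2, 0.01, 1e-4, 1e-6])
print(propagate_lane(c_now, speed=15.0, yaw_rate=0.05, dt=0.1))
```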
