Special Issue "Sensor Data Fusion for Autonomous and Connected Driving"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (15 June 2020).

Special Issue Editors

Dr. Dominique Gruyer
Guest Editor
Head of the LIVIC Lab, COSYS Department, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
Interests: autonomous driving; multi-sensor data fusion; cooperative systems; environment perception; extended perception; sensors simulation for ADAS prototyping
Dr. Olivier Orfila
Guest Editor
Deputy Director of the LIVIC Lab, IFSTTAR, 25 allée des Marronniers, 78000 Versailles, France
Interests: autonomous driving; optimal path planning; eco-mobility; eco-consumption
Prof. Dr. Haihao Sun
Guest Editor
College of Transportation Engineering, Tongji University (Jiading Campus), Shanghai 201804, China
Interests: methods and means for traffic data collection and analysis; mobility simulation; transportation environment for connected autonomous vehicles
Prof. Dr. Homayoun Najjaran
Guest Editor
School of Engineering, University of British Columbia, Kelowna, BC V1V 1V7, Canada
Interests: artificial intelligence; sensor fusion; machine learning; computer vision with applications in unmanned vehicles, robotics, and industrial automation

Special Issue Information

Dear Colleagues,

Over the last decades, the development of advanced driver assistance systems (ADAS) has become a critical endeavor to reach several objectives: safety enhancement, mobility improvement, energy optimization, and driver comfort. To tackle the first three objectives, considerable research focusing on autonomous driving has been carried out. Recent research and development on highly automated driving aims to ultimately replace the driver’s actions with robotic functions. Partially automated driving will require co-pilot applications involving a combination of methods, algorithms, and architectures. Such a system is built from complex, distributed, and cooperative architectures with strong properties such as reliability and robustness. These properties must be maintained despite complex and degraded working conditions, including adverse weather, fog, or dust, as perceived by the sensors. This Special Issue will provide an overview of recent research related to sensor and data fusion, information processing and merging, and fusion architectures for the cooperative perception and risk assessment needed for autonomous mobility. Indeed, before a high level of safety can be ensured in the deployment of autonomous driving applications, it is necessary to guarantee high-quality, real-time perception mechanisms. Therefore, research contributions concerning new automotive sensors, AI for semantic information generation, and safe operation are welcome.

Dr. Dominique Gruyer
Dr. Olivier Orfila
Prof. Dr. Haihao Sun
Prof. Dr. Homayoun Najjaran
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor data fusion
  • information processing
  • cooperative perception
  • fusion for connected vehicles
  • autonomous driving
  • fusion architecture
  • smart sensors
  • AI for semantic information

Published Papers (7 papers)

Research

Open Access Article
Formal Verification of Heuristic Autonomous Intersection Management Using Statistical Model Checking
Sensors 2020, 20(16), 4506; https://doi.org/10.3390/s20164506 - 12 Aug 2020
Abstract
Autonomous vehicles are gaining popularity throughout the world among researchers and consumers. However, their popularity has not yet reached the level where the technology is widely accepted as fully developed, as a large portion of the consumer base feels skeptical about it. Proving the correctness of this technology will help establish faith in it. That is easier said than done, because formal verification techniques have not yet attained the level of development and application that they ought to. In this work, we present Statistical Model Checking (SMC) as a possible solution for verifying the safety of autonomous systems and algorithms. We apply it to the Heuristic Autonomous Intersection Management (HAIM) algorithm. The presented verification routine can be adopted for other conflict-point-based autonomous intersection management algorithms as well. Along with verifying the HAIM, we also demonstrate the modeling and verification applied at each stage of development to verify the inherent behavior of the algorithm. The HAIM scheme is formally modeled using a variant of the language of Timed Automata. The model consists of automata that encode the behavior of vehicles, the intersection manager (IM), and collision checkers. To verify the complete nature of the heuristic and ensure correct modeling of the system, we model it in layers and verify each layer separately for its expected behavior. Along with that, we perform implementation verification and error injection testing to ensure faithful modeling of the system. Results show, with high confidence, that the intersection controlled by the HAIM algorithm is free from collisions.
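
At its core, statistical model checking replaces exhaustive state-space exploration with repeated stochastic simulation plus a statistical guarantee on the estimated probability. The following is a minimal Python sketch of that idea; the toy intersection simulator and its failure probability are invented for illustration, standing in for the paper's timed-automata model.

```python
import math
import random

def simulate_intersection_run(num_vehicles=8):
    """One stochastic run of a toy intersection model.

    Returns True if a collision occurred. A real SMC setup would execute
    the timed-automata model itself; the per-vehicle slot-violation
    probability used here (1e-4) is purely illustrative.
    """
    return any(random.random() < 1e-4 for _ in range(num_vehicles))

def estimate_collision_probability(num_runs=100_000, confidence=0.99):
    """Monte Carlo estimate with a Hoeffding-bound confidence half-width."""
    collisions = sum(simulate_intersection_run() for _ in range(num_runs))
    p_hat = collisions / num_runs
    # Hoeffding: P(|p_hat - p| >= eps) <= 2 * exp(-2 * n * eps^2)
    eps = math.sqrt(math.log(2 / (1 - confidence)) / (2 * num_runs))
    return p_hat, eps

p_hat, eps = estimate_collision_probability()
print(f"P(collision) ~= {p_hat:.6f} +/- {eps:.6f} at 99% confidence")
```

The number of runs controls the width of the confidence interval, which is how such a routine can claim freedom from collisions with high confidence rather than absolute certainty.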

Open Access Article
Managing Localization Uncertainty to Handle Semantic Lane Information from Geo-Referenced Maps in Evidential Occupancy Grids
Sensors 2020, 20(2), 352; https://doi.org/10.3390/s20020352 - 8 Jan 2020
Cited by 1
Abstract
The occupancy grid is a popular environment model that is widely applied for the autonomous navigation of mobile robots. This model encodes obstacle information into the grid cells as a reference of the space state. However, when navigating on roads, the planning module of an autonomous vehicle needs to have a semantic understanding of the scene, especially concerning the accessibility of the driving space. This paper presents a grid-based evidential approach for modeling semantic road space by taking advantage of a prior map that contains lane-level information. Road rules are encoded in the grid for semantic understanding. Our approach focuses on dealing with localization uncertainty, which is a key issue when parsing information from the prior map. Readings from an exteroceptive sensor are also integrated in the grid to provide real-time obstacle information. All the information is managed in an evidential framework based on Dempster–Shafer theory. Results on real roads are reported with a qualitative evaluation and a quantitative analysis of the constructed grids to show the performance and the behavior of the method for real-time application.
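
For readers unfamiliar with the evidential framework, the sketch below shows Dempster's rule of combination applied to a single grid cell over the simple frame {Free, Occupied}; the paper's semantic states are richer, and the example mass values here are invented.

```python
def dempster_combine(m1, m2):
    """Fuse two mass functions over {F, O} with ignorance mass on 'FO'.

    m1, m2: dicts with keys 'F' (free), 'O' (occupied), 'FO' (unknown),
    each summing to 1. Implements Dempster's rule of combination.
    """
    # Conflict mass: the two sources support contradictory hypotheses.
    k = m1['F'] * m2['O'] + m1['O'] * m2['F']
    if k >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['FO'] + m1['FO'] * m2['F']) / (1 - k),
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['FO'] + m1['FO'] * m2['O']) / (1 - k),
        'FO': m1['FO'] * m2['FO'] / (1 - k),
    }

# A map prior that leans "free" fused with a sensor reading that leans "occupied":
prior = {'F': 0.6, 'O': 0.1, 'FO': 0.3}
sensor = {'F': 0.1, 'O': 0.7, 'FO': 0.2}
print(dempster_combine(prior, sensor))
```

The explicit ignorance mass 'FO' lets a cell distinguish "unknown" from "conflicting", which a single Bayesian occupancy probability cannot express; localization uncertainty can then be handled by discounting the map's masses toward 'FO'.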

Open Access Article
Real-Time Hybrid Multi-Sensor Fusion Framework for Perception in Autonomous Vehicles
Sensors 2019, 19(20), 4357; https://doi.org/10.3390/s19204357 - 9 Oct 2019
Cited by 13
Abstract
Many sensor fusion frameworks have been proposed in the literature, using different combinations and configurations of sensors and fusion methods. Most of the focus has been on improving accuracy; however, the implementation feasibility of these frameworks in an autonomous vehicle is less explored. Some fusion architectures can perform very well in lab conditions using powerful computational resources; however, in real-world applications, they cannot be implemented in an embedded edge computer due to their high cost and computational needs. We propose a new hybrid multi-sensor fusion pipeline configuration that performs environment perception for autonomous vehicles, such as road segmentation, obstacle detection, and tracking. This fusion framework uses a proposed encoder-decoder-based Fully Convolutional Neural Network (FCNx) and a traditional Extended Kalman Filter (EKF) nonlinear state estimator. It also uses a configuration of camera, LiDAR, and radar sensors that is best suited for each fusion method. The goal of this hybrid framework is to provide a cost-effective, lightweight, modular, and robust (in case of a sensor failure) fusion system. It uses the FCNx algorithm, which improves road detection accuracy compared to benchmark models while maintaining the real-time efficiency required to run on an autonomous vehicle's embedded computer. Tested on over 3K road scenes, our fusion algorithm shows better performance in various environment scenarios compared to baseline benchmark networks. Moreover, the algorithm is implemented in a vehicle and tested using actual sensor data collected from it, performing real-time environment perception.
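
As an illustration of the traditional-estimation half of such a hybrid pipeline, here is a minimal EKF measurement update for tracking a single obstacle from a radar range/bearing reading; the constant-velocity state layout and all numeric values are assumptions for the sketch, not the paper's actual model.

```python
import numpy as np

def ekf_update(x, P, z, R):
    """One EKF update fusing a radar [range, bearing] measurement.

    x: state [px, py, vx, vy]; P: state covariance;
    z: measurement; R: measurement noise covariance.
    """
    px, py = x[0], x[1]
    rng = np.hypot(px, py)
    h = np.array([rng, np.arctan2(py, px)])        # predicted measurement
    # Jacobian of h(x): the nonlinear measurement is what makes this an EKF.
    H = np.array([[px / rng, py / rng, 0.0, 0.0],
                  [-py / rng**2, px / rng**2, 0.0, 0.0]])
    y = z - h                                      # innovation
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing residual
    S = H @ P @ H.T + R                            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P

# Example with made-up numbers:
x = np.array([10.0, 2.0, 5.0, 0.0])
P = np.eye(4)
x, P = ekf_update(x, P, z=np.array([10.3, 0.21]), R=np.diag([0.25, 0.01]))
```

In a hybrid design of this kind, the learned network handles dense tasks such as road segmentation, while a filter like this carries the cheap, interpretable tracking load on the embedded computer.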

Open Access Article
Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility
Sensors 2019, 19(17), 3727; https://doi.org/10.3390/s19173727 - 28 Aug 2019
Cited by 4
Abstract
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers. This has resulted in numerous studies confirming the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular truecolor (red-green-blue or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative to a human driver than a regular RGB camera in challenging visibility conditions. The main novelty of this paper is the idea of relying on two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve a natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of the human appearance and blend them into the output. We train a convolutional neural network using image samples from variable conditions (day and night) so that the network learns the appearance of humans in the different modalities and creates more robust results applicable to realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, we can better learn context and define fusion rules that focus on pedestrian appearance, which is not guaranteed with methods that focus on low-level image quality metrics.
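
To make the two-objective idea concrete, here is a PyTorch sketch of such a combined loss; the L1 similarity metric, the mask-based pedestrian term (a crude stand-in for the paper's detector-based error), and the weight alpha are all assumptions.

```python
import torch
import torch.nn.functional as F

def fusion_loss(fused, rgb, thermal, ped_mask, alpha=0.5):
    """Two-term objective in the spirit of the paper's design.

    similarity: keeps the fused output visually close to the RGB input.
    pedestrian: pulls pedestrian regions (ped_mask) toward the thermal
    signal; the paper instead backpropagates an actual detection error.
    """
    similarity = F.l1_loss(fused, rgb)
    pedestrian = F.l1_loss(fused * ped_mask, thermal * ped_mask)
    return similarity + alpha * pedestrian

# Random tensors standing in for a batch of images:
fused = torch.rand(2, 3, 128, 160, requires_grad=True)
rgb, thermal = torch.rand(2, 3, 128, 160), torch.rand(2, 3, 128, 160)
ped_mask = (torch.rand(2, 1, 128, 160) > 0.9).float()
fusion_loss(fused, rgb, thermal, ped_mask).backward()
```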

Open Access Article
SemanticDepth: Fusing Semantic Segmentation and Monocular Depth Estimation for Enabling Autonomous Driving in Roads without Lane Lines
Sensors 2019, 19(14), 3224; https://doi.org/10.3390/s19143224 - 22 Jul 2019
Cited by 4
Abstract
Typically, lane departure warning systems rely on lane lines being present on the road. However, in many scenarios, e.g., secondary roads or some streets in cities, lane lines are either not present or not sufficiently well signaled. In this work, we present a vision-based method to locate a vehicle within the road when no lane lines are present using only RGB images as input. To this end, we propose to fuse together the outputs of a semantic segmentation and a monocular depth estimation architecture to reconstruct locally a semantic 3D point cloud of the viewed scene. We only retain points belonging to the road and, additionally, to any kind of fences or walls that might be present right at the sides of the road. We then compute the width of the road at a certain point on the planned trajectory and, additionally, what we denote as the fence-to-fence distance. Our system is suited to any kind of motoring scenario and is especially useful when lane lines are not present on the road or do not signal the path correctly. The additional fence-to-fence distance computation is complementary to the road's width estimation. We quantitatively test our method on a set of images featuring streets of the city of Munich that contain a road-fence structure, so as to compare our two proposed variants, namely the road's width and the fence-to-fence distance computation. In addition, we also validate our system qualitatively on the Stuttgart sequence of the publicly available Cityscapes dataset, where no fences or walls are present at the sides of the road, thus demonstrating that our system can be deployed in a standard city-like environment. For the benefit of the community, we make our software open source.
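
As a rough sketch of the final geometric step, the function below estimates the road's width at a given longitudinal distance from a labeled 3D point cloud; the axis conventions, label ids, and slab width are assumptions, not the authors' exact implementation (which is open source).

```python
import numpy as np

def road_width_at(points, labels, x_query, slab=0.5, road_label=0):
    """Estimate road width (meters) at longitudinal distance x_query.

    points: (N, 3) array in the vehicle frame (x forward, y left, z up);
    labels: (N,) semantic class ids from the segmentation network.
    Takes a thin slab of road points around x_query and returns its
    lateral extent; the fence-to-fence distance works the same way
    with fence/wall labels.
    """
    road = points[labels == road_label]
    slab_pts = road[np.abs(road[:, 0] - x_query) < slab / 2]
    if len(slab_pts) == 0:
        return None  # no road points recovered at this distance
    return float(slab_pts[:, 1].max() - slab_pts[:, 1].min())
```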

Open Access Article
Research on a Lane Compensation Method Based on Multi-Sensor Fusion
Sensors 2019, 19(7), 1584; https://doi.org/10.3390/s19071584 - 2 Apr 2019
Cited by 2
Abstract
The lane curvature output by the vision sensor can jump over a period of time because of shadows, changes in lighting, and broken lane lines, which leads to serious problems for unmanned driving control. It is therefore particularly important to predict or compensate for the real lane in real time during such sensor jumps. This paper presents a lane compensation method based on the multi-sensor fusion of global positioning system (GPS), inertial measurement unit (IMU), and vision sensors. To compensate for the lane, a cubic polynomial function of the longitudinal distance is selected as the lane model. In this method, a Kalman filter is used to estimate vehicle velocity and yaw angle from GPS and IMU measurements, and a vehicle kinematics model is established to describe vehicle motion. The method uses the geometric relationship between the vehicle and the relative lane motion at the current moment to solve for the coefficients of the lane polynomial at the next moment. The simulation and vehicle test results show that the predicted information can compensate for the failure of the vision sensor, with good real-time performance, robustness, and accuracy.
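
To make the geometric relationship concrete, here is a sample-and-refit sketch that propagates the cubic lane model through one step of estimated vehicle motion; the paper solves for the new coefficients analytically from the kinematics, so this numerical refit and its parameter names are an approximation for illustration.

```python
import numpy as np

def compensate_lane(coeffs, dx, dy, dyaw, x_range=(0.0, 50.0), n=50):
    """Propagate lane model y = c0 + c1*x + c2*x^2 + c3*x^3 into the
    vehicle frame after a motion of (dx, dy) and a yaw change dyaw.

    coeffs: [c0, c1, c2, c3], low order first, in the current frame.
    Returns the refit coefficients in the next frame.
    """
    xs = np.linspace(*x_range, n)
    ys = np.polyval(coeffs[::-1], xs)          # polyval wants high order first
    # Express the sampled lane points in the moved, rotated vehicle frame:
    c, s = np.cos(dyaw), np.sin(dyaw)
    x_new = c * (xs - dx) + s * (ys - dy)
    y_new = -s * (xs - dx) + c * (ys - dy)
    return np.polyfit(x_new, y_new, 3)[::-1]   # back to low order first

# One 0.1 s step at ~20 m/s with a slight yaw change (made-up numbers):
print(compensate_lane([0.2, 0.01, 1e-4, -2e-6], dx=2.0, dy=0.02, dyaw=0.005))
```

The velocity and yaw estimated by the GPS/IMU Kalman filter supply dx, dy, and dyaw, so the lane estimate can keep evolving even while the camera output is unreliable.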

Review

Open Access Review
Deep Learning Sensor Fusion for Autonomous Vehicle Perception and Localization: A Review
Sensors 2020, 20(15), 4220; https://doi.org/10.3390/s20154220 - 29 Jul 2020
Cited by 4
Abstract
Autonomous vehicles (AVs) are expected to improve, reshape, and revolutionize the future of ground transportation. It is anticipated that ordinary vehicles will one day be replaced with smart vehicles that are able to make decisions and perform driving tasks on their own. To achieve this objective, self-driving vehicles are equipped with sensors that are used to sense and perceive both their surroundings and the faraway environment, using further advances in communication technologies, such as 5G. In the meantime, local perception, as with human beings, will continue to be an effective means for controlling the vehicle at short range. On the other hand, extended perception allows for the anticipation of distant events and produces smarter behavior to guide the vehicle to its destination while respecting a set of criteria (safety, energy management, traffic optimization, comfort). In spite of the remarkable advancement of sensor technologies in terms of their effectiveness and applicability for AV systems in recent years, sensors can still fail because of noise, ambient conditions, or manufacturing defects, among other factors; hence, it is not advisable to rely on a single sensor for any autonomous driving task. The practical solution is to incorporate multiple competitive and complementary sensors that work synergistically to overcome their individual shortcomings. This article provides a comprehensive review of the state-of-the-art methods utilized to improve the performance of AV systems in short-range or local vehicle environments. Specifically, it focuses on recent studies that use deep learning sensor fusion algorithms for perception, localization, and mapping. The article concludes by highlighting some of the current trends and possible future research directions.
