Open Access Article

Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility

TELIN-IPI, Ghent University - imec, St-Pietersnieuwstraat 41, B-9000 Gent, Belgium
* Author to whom correspondence should be addressed.
Sensors 2019, 19(17), 3727; https://doi.org/10.3390/s19173727
Received: 2 July 2019 / Revised: 13 August 2019 / Accepted: 26 August 2019 / Published: 28 August 2019
(This article belongs to the Special Issue Sensor Data Fusion for Autonomous and Connected Driving)
Reliable vision in challenging illumination conditions is one of the crucial requirements of future autonomous automotive systems. In the last decade, thermal cameras have become more easily accessible to a larger number of researchers, resulting in numerous studies confirming the benefits of thermal cameras in limited visibility conditions. In this paper, we propose a learning-based method for visible and thermal image fusion that focuses on generating fused images with high visual similarity to regular true-color (red-green-blue, or RGB) images, while introducing new informative details in pedestrian regions. The goal is to create natural, intuitive images that would be more informative to a human driver than a regular RGB camera in challenging visibility conditions. The main novelty of this paper is the use of two types of objective functions for optimization: a similarity metric between the RGB input and the fused output to achieve a natural image appearance, and an auxiliary pedestrian detection error to help define relevant features of human appearance and blend them into the output. We train a convolutional neural network on image samples from varied conditions (day and night) so that the network learns the appearance of humans in the different modalities and produces more robust results applicable to realistic situations. Our experiments show that the visibility of pedestrians is noticeably improved, especially in dark regions and at night. Compared to existing methods, our approach better learns context and defines fusion rules that focus on pedestrian appearance, which is not guaranteed with methods driven by low-level image quality metrics.
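The abstract outlines a dual-objective training scheme: an image-similarity term that keeps the fused output close to the RGB input, plus an auxiliary pedestrian detection error. As a minimal sketch of how such a combined loss could be wired up (assuming, for illustration only, an L1 similarity term, a binary cross-entropy detection error, and a scalar trade-off weight; FusionLoss, alpha, det_logits, and det_targets are hypothetical names, and the paper's actual similarity metric and detector head may differ):

import torch.nn as nn

class FusionLoss(nn.Module):
    """Combined objective: similarity to the RGB input plus an auxiliary
    pedestrian detection error. All terms are illustrative stand-ins."""
    def __init__(self, alpha=0.5):
        super().__init__()
        self.alpha = alpha                       # detection-term weight (assumed)
        self.similarity = nn.L1Loss()            # stand-in for the similarity metric
        self.detection = nn.BCEWithLogitsLoss()  # stand-in detection error

    def forward(self, fused, rgb_input, det_logits, det_targets):
        sim_term = self.similarity(fused, rgb_input)         # natural appearance
        det_term = self.detection(det_logits, det_targets)   # pedestrian cues
        return sim_term + self.alpha * det_term

# Usage (hypothetical): loss = FusionLoss(alpha=0.5)(fused, rgb, logits, targets)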
Keywords: fusion; visible; infrared; ADAS; pedestrian detection; deep learning

MDPI and ACS Style

Shopovska, I.; Jovanov, L.; Philips, W. Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility. Sensors 2019, 19, 3727. https://doi.org/10.3390/s19173727

AMA Style

Shopovska I, Jovanov L, Philips W. Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility. Sensors. 2019; 19(17):3727. https://doi.org/10.3390/s19173727

Chicago/Turabian Style

Shopovska, Ivana, Ljubomir Jovanov, and Wilfried Philips. 2019. "Deep Visible and Thermal Image Fusion for Enhanced Pedestrian Visibility" Sensors 19, no. 17: 3727. https://doi.org/10.3390/s19173727

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
