Article

VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility

Akmaljon Palvanov and Young I. Cho *
Department of Computer Engineering, Gachon University, Gyeonggi-do 461-701, Korea
* Author to whom correspondence should be addressed.
Sensors 2019, 19(6), 1343; https://doi.org/10.3390/s19061343
Received: 11 February 2019 / Revised: 6 March 2019 / Accepted: 11 March 2019 / Published: 18 March 2019
(This article belongs to the Special Issue Deep Learning-Based Image Sensors)
Visibility is a complex phenomenon driven by emissions and airborne pollutants, as well as by factors including sunlight, humidity, temperature, and time of day, all of which reduce the clarity of what is visible through the atmosphere. This paper provides a detailed overview of state-of-the-art contributions to visibility estimation under various foggy weather conditions. We propose VisNet, a new approach based on deep integrated convolutional neural networks for estimating visibility distances from camera imagery. The implemented network uses three streams of deep integrated convolutional neural networks connected in parallel. In addition, we collected the largest dataset for this study, comprising three million outdoor images with exact visibility values. To evaluate the model's performance fairly and objectively, the model is trained on three image datasets with different visibility ranges, each with a different number of classes. Moreover, the proposed VisNet is evaluated on a diverse set of images under dissimilar fog density scenarios. Before being fed to the network, each input image is filtered in the frequency domain to remove low-level features, and a spectral filter is applied to extract low-contrast regions. Compared with previous methods, our approach achieves the highest classification performance on all three datasets. Furthermore, VisNet considerably outperforms not only classical methods but also state-of-the-art visibility estimation models.
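
The abstract notes that each input image is filtered in the frequency domain before it reaches the network. The exact filter is not reproduced on this page, so the following is a minimal sketch of one common approach, assuming the "low-level features" to be removed correspond to low-frequency components; the function name highpass_fft_filter and the cutoff_radius value are illustrative, not taken from the paper.

    import numpy as np

    def highpass_fft_filter(image, cutoff_radius=30):
        """High-pass filter a 2-D grayscale image in the frequency domain.

        Illustrative sketch only: the circular mask and cutoff are
        assumptions, not VisNet's published preprocessing parameters.
        """
        # Transform to the frequency domain and center the zero frequency.
        spectrum = np.fft.fftshift(np.fft.fft2(image))

        # Circular high-pass mask: zero out a disc of low frequencies
        # around the center and keep everything outside it.
        rows, cols = image.shape
        cy, cx = rows // 2, cols // 2
        y, x = np.ogrid[:rows, :cols]
        mask = (y - cy) ** 2 + (x - cx) ** 2 > cutoff_radius ** 2

        # Return to the spatial domain; the magnitude is a real-valued image.
        filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
        return np.abs(filtered)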
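The abstract also describes three streams of convolutional networks connected in parallel. Below is a minimal sketch of that parallel-stream fusion idea, assuming PyTorch and three single-channel inputs (for example, the raw image, its FFT-filtered version, and its spectrally filtered version); the class name ThreeStreamCNN, the layer widths, and the concatenation-based fusion are placeholders rather than the published VisNet configuration.

    import torch
    import torch.nn as nn

    class ThreeStreamCNN(nn.Module):
        """Three parallel convolutional streams fused by concatenation.

        Illustrative placeholder architecture, not the paper's layers.
        """
        def __init__(self, num_classes):
            super().__init__()
            def stream():
                return nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool2d(4),  # 32 x 4 x 4 features per stream
                )
            self.streams = nn.ModuleList([stream() for _ in range(3)])
            self.classifier = nn.Linear(3 * 32 * 4 * 4, num_classes)

        def forward(self, raw, fft_filtered, spectral):
            # Run each view through its own stream, flatten, and fuse.
            feats = [s(x).flatten(1)
                     for s, x in zip(self.streams, (raw, fft_filtered, spectral))]
            return self.classifier(torch.cat(feats, dim=1))

For example, ThreeStreamCNN(num_classes=7) applied to three batches of 1 x 128 x 128 tensors returns a 7-way class score vector per image.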
Keywords: convolutional neural networks; Fast Fourier transform; spectral filter; visibility; VisNet
MDPI and ACS Style

Palvanov, A.; Cho, Y.I. VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility. Sensors 2019, 19, 1343. https://doi.org/10.3390/s19061343

AMA Style

Palvanov A, Cho YI. VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility. Sensors. 2019; 19(6):1343. https://doi.org/10.3390/s19061343

Chicago/Turabian Style

Palvanov, Akmaljon, and Young I. Cho. 2019. "VisNet: Deep Convolutional Neural Networks for Forecasting Atmospheric Visibility" Sensors 19, no. 6: 1343. https://doi.org/10.3390/s19061343
