Open Access Article
Sensors 2018, 18(3), 712; https://doi.org/10.3390/s18030712

Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery

1 AInML Lab, School of Electronics and Control Engineering, Chang’an University, Xi’an 710064, China
2 School of Automation, Southeast University, Nanjing 210009, China
3 Shengyao Intelligence Technology Co. Ltd., Shanghai 201112, China
* Author to whom correspondence should be addressed.
Received: 8 December 2017 / Revised: 16 February 2018 / Accepted: 19 February 2018 / Published: 27 February 2018
(This article belongs to the Special Issue UAV or Drones for Remote Sensing Applications)
Abstract

An unmanned aerial vehicle (UAV) equipped with a global positioning system (GPS) can provide directly georeferenced imagery, mapping an area at high resolution. To date, the major difficulty in wildfire image classification has been the lack of unified identification marks: fire features such as color, shape, texture (smoke, flame, or both), and background can vary significantly from one scene to another. Deep learning, e.g., the deep convolutional neural network (DCNN), is very effective for high-level feature learning; however, a large training image dataset is required to optimize its weights and coefficients. In this work, we propose a new saliency detection algorithm for fast location and segmentation of the core fire area in aerial images. Because the proposed method effectively avoids the feature loss caused by direct resizing, it is used for data augmentation and for building a standard fire image dataset, ‘UAV_Fire’. A 15-layer self-learning DCNN architecture named ‘Fire_Net’ is then presented as a self-learning fire feature extractor and classifier. We evaluated different architectures and several key parameters (dropout ratio, batch size, etc.) of the DCNN model with respect to its validation accuracy. The proposed architecture outperformed previous methods, achieving an overall accuracy of 98%. Furthermore, ‘Fire_Net’ achieved an average processing speed of 41.5 ms per image, enabling real-time wildfire inspection. To demonstrate its practical utility, Fire_Net was tested on 40 images sampled from wildfire news reports, all of which were accurately identified.
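The abstract's key preprocessing idea is cropping to the salient (core fire) region rather than resizing the whole frame, which would shrink and blur the fire features. The paper's actual saliency algorithm is not reproduced here; the sketch below is a hypothetical illustration using a simple color-contrast saliency measure (fire pixels tend to be strongly red), with `saliency_crop` and its threshold being assumed names, not the authors' implementation.

```python
import numpy as np

def saliency_crop(img, thresh=0.5):
    """Crop an H x W x 3 float image (values in [0, 1]) to its salient region.

    Saliency here is a toy color-contrast measure: the red channel minus
    the average of green and blue, so bright red/orange (flame-like)
    pixels score highest.
    """
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    sal = np.clip(r - 0.5 * (g + b), 0.0, 1.0)
    if sal.max() > 0:
        sal = sal / sal.max()          # normalize to [0, 1]
    mask = sal >= thresh               # binary saliency mask
    if not mask.any():
        return img                     # nothing salient; keep full frame
    ys, xs = np.where(mask)
    return img[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Synthetic 64x64 frame with a flame-like patch at rows/cols 20..29.
frame = np.zeros((64, 64, 3))
frame[20:30, 20:30, 0] = 1.0   # strong red
frame[20:30, 20:30, 1] = 0.3   # some green -> orange tint
crop = saliency_crop(frame)
print(crop.shape)  # (10, 10, 3): only the fire patch survives the crop
```

The crop can then be rescaled to the fixed input size a DCNN classifier expects; unlike resizing the full frame, the fire region keeps most of its original resolution.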
Keywords: UAV; wildfire; deep learning; saliency detection
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
MDPI and ACS Style

Zhao, Y.; Ma, J.; Li, X.; Zhang, J. Saliency Detection and Deep Learning-Based Wildfire Identification in UAV Imagery. Sensors 2018, 18, 712.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors (EISSN 1424-8220) is published by MDPI AG, Basel, Switzerland.