Open Access Article
J. Imaging 2017, 3(2), 14;

A Novel Vision-Based Classification System for Explosion Phenomena

Computer Science and Engineering Department, University of Bridgeport, Bridgeport, CT 06604, USA
Author to whom correspondence should be addressed.
Received: 4 October 2016 / Revised: 10 April 2017 / Accepted: 10 April 2017 / Published: 15 April 2017


The need for a properly designed and implemented surveillance system that detects and categorizes explosion phenomena is rising as part of development planning for risk-reduction processes, including mitigation and preparedness. In this context, we introduce state-of-the-art explosion classification using pattern recognition techniques. We define seven patterns covering explosion and non-explosion phenomena: pyroclastic density currents, lava fountains, lava and tephra fallout, nuclear explosions, wildfires, fireworks, and sky clouds. Toward this classification goal, we collected a new dataset of 5327 2D RGB images that is used to train the classifier. Furthermore, to achieve high reliability in the proposed explosion classification system and to provide multiple analyses of the monitored phenomena, we employ several feature-extraction approaches: texture features, features in the spatial domain, and features in the transform domain. Texture features are measured on intensity levels, and the Principal Component Analysis (PCA) algorithm is used to obtain the 100 largest eigenvalues and their eigenvectors. Features in the spatial domain are calculated as amplitude features in the YCbCr color model; PCA then reduces each vector's dimensionality to 100 features. Lastly, features in the transform domain are calculated using the Radix-2 Fast Fourier Transform (Radix-2 FFT), and PCA is again employed to extract the 100 largest eigenvectors. These texture, amplitude, and frequency features are combined into an input vector of length 300, which provides valuable insight into the images under consideration. Accordingly, these features are fed into a combiner that maps the input frames to the desired outputs and divides the feature space into regions, or categories.
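The three feature modalities described above (intensity-based texture with PCA, YCbCr amplitude features with PCA, and Radix-2 FFT magnitudes with PCA) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the toy image sizes, component count, helper names, and BT.601 full-range YCbCr conversion are all assumptions made for the sketch.

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the k eigenvectors of the covariance
    matrix with the largest eigenvalues (classic PCA reduction)."""
    Xc = X - X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))  # ascending order
    top = vecs[:, np.argsort(vals)[::-1][:k]]              # top-k eigenvectors
    return Xc @ top

def rgb_to_ycbcr(img):
    """Assumed ITU-R BT.601 full-range RGB -> YCbCr (img: H x W x 3, [0, 255])."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def radix2_fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return x.astype(complex)
    even, odd = radix2_fft(x[0::2]), radix2_fft(x[1::2])
    tw = np.exp(-2j * np.pi * np.arange(n // 2) / n) * odd
    return np.concatenate([even + tw, even - tw])

# Toy demo: 8 "images" of 16x16 RGB pixels, k=4 components per modality
# (the paper uses 100 components per modality, i.e. a 300-dim combined vector).
rng = np.random.default_rng(0)
imgs = rng.uniform(0, 255, size=(8, 16, 16, 3))
k = 4

gray = imgs.mean(axis=-1).reshape(8, -1)                # intensity "texture" vectors
ycc  = np.array([rgb_to_ycbcr(im).ravel() for im in imgs])
spec = np.array([np.abs(radix2_fft(g)) for g in gray])  # frequency-domain magnitudes

features = np.hstack([pca_project(gray, k),
                      pca_project(ycc, k),
                      pca_project(spec, k)])            # combined 3k-dim vector
print(features.shape)  # (8, 12)
```

With k = 100 per modality, the same concatenation yields the 300-element input vector described in the abstract.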
Thus, we propose employing a one-against-one multi-class Support Vector Machine (SVM) with a degree-3 polynomial kernel. The efficiency of the proposed methodology was evaluated on a total of 980 frames retrieved from multiple YouTube videos, taken in real outdoor environments and spanning the seven defined classes. As a result, we obtained an accuracy of 94.08%, and the total time to categorize one frame was approximately 0.12 s.
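The one-against-one multi-class scheme with a degree-3 polynomial kernel can be illustrated with scikit-learn's SVC, which implements exactly that decomposition: one binary SVM per class pair, combined by voting. The synthetic feature vectors below merely stand in for the paper's 300-dimensional features; the data generation and the coef0 setting are assumptions of this sketch, not the authors' configuration.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for the paper's 300-dim feature vectors over 7 classes:
# class c is drawn around mean c, so the classes are well separated.
rng = np.random.default_rng(1)
n_classes, n_per_class, dim = 7, 20, 300
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# Degree-3 polynomial kernel; SVC trains one binary SVM per class pair
# (one-against-one), i.e. 7*6/2 = 21 classifiers, combined by voting.
# coef0=1 (an inhomogeneous kernel) is our choice to keep the kernel
# well scaled on this toy data.
clf = SVC(kernel="poly", degree=3, coef0=1, decision_function_shape="ovo")
clf.fit(X, y)

print(clf.decision_function(X).shape)  # (140, 21): one score per class pair
print(clf.score(X, y))
```

With `decision_function_shape="ovo"`, the decision function exposes the raw pairwise scores; `predict` aggregates them by majority vote across the 21 binary classifiers.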
Keywords: volcanic eruptions; nuclear explosions; YCbCr; PCA; Radix-2 FFT; SVM


This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Cite This Article

MDPI and ACS Style

Abusaleh, S.; Mahmood, A.; Elleithy, K.; Patel, S. A Novel Vision-Based Classification System for Explosion Phenomena. J. Imaging 2017, 3, 14.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

J. Imaging EISSN 2313-433X, published by MDPI AG, Basel, Switzerland