Open Access Article

Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes

Remote Sensing Laboratory, National Technical University of Athens, 15780 Zographos, Greece
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(4), 446; https://doi.org/10.3390/rs11040446
Received: 31 December 2018 / Revised: 8 February 2019 / Accepted: 16 February 2019 / Published: 21 February 2019
Single-sensor systems and standard optical sensors (usually RGB CCTV video cameras) fail to provide adequate observations, or the amount of spectral information required, to build rich, expressive, discriminative features for object detection and tracking in challenging outdoor and indoor scenes under various environmental and illumination conditions. To this end, we have designed a multisensor system based on thermal, shortwave infrared, and hyperspectral video sensors and propose a processing pipeline able to perform object detection in real time despite the huge volume of concurrently acquired video streams. In particular, in order to avoid the computationally intensive coregistration of the hyperspectral data with the other imaging modalities, the initially detected targets are projected through a local coordinate system onto the hypercube image plane. For object detection, a detector-agnostic procedure has been developed, integrating both unsupervised (background subtraction) and supervised (deep convolutional neural network) techniques for validation purposes. The detected and verified targets are then extracted through fusion and data-association steps based on temporal spectral signatures of both target and background. The quite promising experimental results in challenging indoor and outdoor scenes indicate the robust and efficient performance of the developed methodology under different conditions such as fog, smoke, and illumination changes.
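As a rough illustration of the pipeline described in the abstract, the sketch below combines a background-subtraction detection branch with a homography-based projection of the detected boxes onto the hyperspectral image plane and a simple spectral-angle check against a background signature. This is a minimal sketch only, assuming OpenCV and NumPy are available: the homography H, the background spectrum, the thresholds, and all function names are assumptions made for the example and are not taken from the paper, which is detector-agnostic and relies on temporal spectral signatures for its fusion and data-association steps.

```python
# Illustrative sketch (not the authors' implementation). Assumes a planar-scene
# homography H between the thermal/SWIR detector frame and the hyperspectral
# image plane, and a known background spectrum; names are hypothetical.
import numpy as np
import cv2

# --- unsupervised detection branch: background subtraction on a video frame ---
bg_sub = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25)

def detect_moving_objects(frame, min_area=100):
    """Return (x, y, w, h) boxes of moving blobs in a thermal/SWIR frame."""
    mask = bg_sub.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    # OpenCV >= 4 returns (contours, hierarchy)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

# --- projection of a detection onto the hypercube image plane ---
def project_box(box, H):
    """Warp a (x, y, w, h) box with a 3x3 homography H instead of coregistering
    the full hyperspectral cube with the other modalities."""
    x, y, w, h = box
    corners = np.float32([[x, y], [x + w, y], [x + w, y + h], [x, y + h]])
    warped = cv2.perspectiveTransform(corners.reshape(-1, 1, 2), H).reshape(-1, 2)
    x0, y0 = warped.min(axis=0)
    x1, y1 = warped.max(axis=0)
    return int(x0), int(y0), int(x1 - x0), int(y1 - y0)

# --- spectral verification against a background signature ---
def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra; smaller means more similar."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

def verify_target(hypercube, box_hs, background_spectrum, min_angle=0.15):
    """Keep a detection only if its mean spectrum differs sufficiently from the
    local background signature (a simple stand-in for spectral-based validation)."""
    x, y, w, h = box_hs
    roi = hypercube[y:y + h, x:x + w, :]              # rows x cols x bands
    target_spectrum = roi.reshape(-1, roi.shape[-1]).mean(axis=0)
    return spectral_angle(target_spectrum, background_spectrum) > min_angle
```

In practice, the supervised (CNN) branch described in the abstract would validate or complement the blobs returned by detect_moving_objects before they are projected and spectrally verified.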
Keywords: hyperspectral; SWIR; thermal; video; multisensor; detection; tracking; moving object
Kandylakis, Z.; Vasili, K.; Karantzalos, K. Fusing Multimodal Video Data for Detecting Moving Objects/Targets in Challenging Indoor and Outdoor Scenes. Remote Sens. 2019, 11, 446.