Open Access Article
Sensors 2014, 14(10), 19843-19860; doi:10.3390/s141019843

Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance

1 BAE Systems, Burlington, MA 01803, USA
2 Air Force Research Lab, Rome, NY 13441, USA
* Author to whom correspondence should be addressed.
Received: 16 May 2014 / Revised: 26 July 2014 / Accepted: 9 October 2014 / Published: 22 October 2014

Abstract

We describe two advanced video analysis techniques: video indexed by voice annotations (VIVA) and multi-media indexing and explorer (MINER). VIVA utilizes analyst call-outs (ACOs) in the form of chat messages (voice-to-text) to associate labels with video target tracks, to designate spatial-temporal activity boundaries and to augment video tracking in challenging scenarios. Challenging scenarios include low-resolution sensors, moving targets and target trajectories obscured by natural and man-made clutter. MINER includes: (1) a fusion of graphical track and text data using probabilistic methods; (2) an activity pattern learning framework to support querying an index of activities of interest (AOIs) and targets of interest (TOIs) by movement type and geolocation; and (3) a user interface to support streaming multi-intelligence data processing. We also present an activity pattern learning framework that uses the multi-source associated data as training to index a large archive of full-motion videos (FMV). VIVA and MINER examples are demonstrated on common wide aerial/overhead imagery data sets, affording an improvement over tracking from video data alone and achieving 84% detection with modest misdetection/false alarm rates given the complexity of the scenario. The novel use of ACOs and chat messages in video tracking paves the way for user interaction, correction and preparation of situation awareness reports.
Keywords: activity recognition; FMV tracking; ATR; fusion; surveillance; pattern learning; features; registration; geo-registration
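The core idea of VIVA, associating a timestamped chat call-out with the video track it most plausibly refers to, can be illustrated with a simple spatio-temporal gating rule. The sketch below is a hypothetical simplification, not the paper's probabilistic fusion method: each chat message is matched to the spatially closest track whose nearest sample falls within a time window, using hypothetical `Chat`/`Track` structures and illustrative thresholds.

```python
from dataclasses import dataclass

@dataclass
class Chat:
    t: float      # message timestamp (s)
    lat: float    # approximate geolocation of the call-out
    lon: float
    label: str    # analyst call-out text, e.g. "white pickup"

@dataclass
class Track:
    track_id: int
    points: list  # track samples as (t, lat, lon) tuples

def nearest_point(track, t):
    """Return the track sample closest in time to t."""
    return min(track.points, key=lambda p: abs(p[0] - t))

def associate(chats, tracks, max_dt=5.0, max_dist=1e-3):
    """Greedily label tracks with chat call-outs: a chat is assigned
    to the closest track whose nearest sample is within max_dt seconds
    and max_dist degrees (rough planar distance)."""
    assoc = {}
    for c in chats:
        best, best_d = None, float("inf")
        for tr in tracks:
            pt = nearest_point(tr, c.t)
            if abs(pt[0] - c.t) > max_dt:
                continue  # outside the temporal gate
            d = ((pt[1] - c.lat) ** 2 + (pt[2] - c.lon) ** 2) ** 0.5
            if d <= max_dist and d < best_d:
                best, best_d = tr.track_id, d
        if best is not None:
            assoc.setdefault(best, []).append(c.label)
    return assoc
```

A real system would replace the hard gates with probabilistic scoring over association hypotheses, as the abstract's "fusion of graphical track and text data using probabilistic methods" suggests; this sketch only shows the association problem's shape.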
Figures

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Hammoud, R.I.; Sahin, C.S.; Blasch, E.P.; Rhodes, B.J.; Wang, T. Automatic Association of Chats and Video Tracks for Activity Learning and Recognition in Aerial Video Surveillance. Sensors 2014, 14, 19843-19860.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.