Open Access Article

Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) Model for Human Action Recognition

1. Department of Software Engineering, Fatima Jinnah Women University, Rawalpindi 46000, Pakistan
2. Software Engineering Department, University of Engineering and Technology, Taxila 47050, Pakistan
3. School of Electronic Engineering and Computer Science, Queen Mary University of London, London E1 4NS, UK
4. Cortexica Vision Systems Ltd., London SE1 9LQ, UK
5. Computer Engineering Department, University of Engineering and Technology, Taxila 47050, Pakistan
6. School of Computer Science and Mathematics, Kingston University, London KT1 2EE, UK
7. Department of Computer Science, Universidad Carlos III de Madrid, Leganés, 28911 Madrid, Spain
* Author to whom correspondence should be addressed.
Sensors 2019, 19(12), 2790; https://doi.org/10.3390/s19122790
Received: 16 May 2019 / Revised: 18 June 2019 / Accepted: 19 June 2019 / Published: 21 June 2019
(This article belongs to the Section Intelligent Sensors)
Human action recognition (HAR) has emerged as a core research domain for video understanding and analysis, attracting many researchers. Although significant results have been achieved in simple scenarios, HAR remains a challenging task due to issues associated with view independence, occlusion, and inter-class variation observed in realistic scenarios. Previous research efforts have widely used the classical bag of visual words approach and its variations. In this paper, we propose a Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) model for human action recognition without compromising the strengths of the classical bag of visual words approach. Expressions are formed based on the density of a spatio-temporal cube built around a visual word. To handle inter-class variation, we use class-specific visual word representations for visual expression generation. In contrast to the Bag of Expressions (BoE) model, the formation of visual expressions is based on the density of spatio-temporal cubes built around each visual word, since constructing neighborhoods with a fixed number of neighbors could include non-relevant information and make a visual expression less discriminative in scenarios with occlusion and changing viewpoints. The proposed approach is thus more robust to the occlusion and viewpoint-change challenges present in realistic scenarios. Furthermore, we train a multi-class Support Vector Machine (SVM) to classify bags of expressions into action classes. Comprehensive experiments on four publicly available datasets (KTH, UCF Sports, UCF11, and UCF50) show that the proposed model outperforms existing state-of-the-art human action recognition methods in terms of accuracy, achieving 99.21%, 98.60%, 96.94%, and 94.10%, respectively.
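
To make the pipeline described in the abstract concrete, the Python snippet below is a minimal, illustrative sketch only, not the authors' implementation. It assumes interest points have already been detected and quantized into visual words, approximates "expressions" as word co-occurrences found inside a fixed spatio-temporal cube (so the neighborhood size adapts to the local density of points rather than using a fixed number of neighbors), and classifies the resulting histograms with a multi-class SVM. Names such as build_expression_histogram and st_radius are hypothetical.

# Minimal illustrative sketch (not the authors' implementation).
import numpy as np
from sklearn.svm import SVC

def build_expression_histogram(points, words, vocab_size, st_radius=(10.0, 10.0, 5.0)):
    """Histogram of visual-word co-occurrences ("expressions") inside a
    spatio-temporal cube (x, y, t radii) around each interest point; the
    number of neighbours adapts to local point density rather than being fixed."""
    rx, ry, rt = st_radius
    hist = np.zeros(vocab_size * vocab_size)  # pairwise word co-occurrence counts
    for i, (x, y, t) in enumerate(points):
        # Interest points falling inside the cube form the dynamic neighbourhood.
        mask = (np.abs(points[:, 0] - x) <= rx) & \
               (np.abs(points[:, 1] - y) <= ry) & \
               (np.abs(points[:, 2] - t) <= rt)
        for j in np.flatnonzero(mask):
            if j != i:
                hist[words[i] * vocab_size + words[j]] += 1
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy usage: random (x, y, t) interest points standing in for two action classes;
# one histogram per "video", classified with a multi-class SVM as the final stage.
rng = np.random.default_rng(0)
vocab_size = 8
X, y = [], []
for label in (0, 1):
    for _ in range(5):
        pts = rng.uniform(0, 100, size=(60, 3))        # (x, y, t) locations
        wrd = rng.integers(0, vocab_size, size=60)     # visual word indices
        X.append(build_expression_histogram(pts, wrd, vocab_size))
        y.append(label)
clf = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)
print(clf.predict(X[:2]))

In this sketch, replacing the fixed-radius cube with a fixed number of nearest neighbors would reproduce the behavior the paper argues against: in sparse or occluded regions such neighborhoods would pull in unrelated points and dilute the expression histogram.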
Keywords: human action recognition; Bag of Words (BoW); Bag of Expressions (BoE); spatio-temporal; dynamic neighborhood

MDPI and ACS Style

Nazir, S.; Yousaf, M.H.; Nebel, J.-C.; Velastin, S.A. Dynamic Spatio-Temporal Bag of Expressions (D-STBoE) Model for Human Action Recognition. Sensors 2019, 19, 2790.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
