Article

Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network

1 Department of Computer Science, National University of Computer and Emerging Sciences, Islamabad 44000, Pakistan
2 Department of Computer Science II, Universität Bonn, 53115 Bonn, Germany
* Author to whom correspondence should be addressed.
† Deceased.
Sensors 2020, 20(8), 2226; https://doi.org/10.3390/s20082226
Received: 13 February 2020 / Revised: 19 March 2020 / Accepted: 20 March 2020 / Published: 15 April 2020
In this paper, we propose a novel and efficient framework for 3D action recognition using a deep learning architecture. First, we develop a 3D normalized pose space consisting only of 3D normalized poses, which are generated by discarding translation and orientation information. From these poses, we extract joint features and feed them into a Deep Neural Network (DNN) to learn the action model. Our DNN comprises two hidden layers with sigmoid activation functions and an output layer with the softmax function. Furthermore, we propose a keyframe extraction methodology that efficiently extracts, from a motion sequence of 3D frames, the keyframes that contribute most substantially to the performance of the action. In this way, we eliminate redundant frames and reduce the length of the motion; in effect, we summarize the motion sequence while preserving the original motion semantics. Only the remaining essential, informative frames enter the action recognition process, which makes the proposed pipeline both fast and robust. Finally, we evaluate the proposed framework extensively on the publicly available benchmark Motion Capture (MoCap) datasets HDM05 and CMU. Our experiments show that the proposed scheme significantly outperforms other state-of-the-art approaches.
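The abstract describes a compact classifier: joint features from normalized poses pass through two sigmoid hidden layers into a softmax output. The following is a minimal sketch of that forward pass in plain NumPy; the layer sizes, feature dimensionality, and number of action classes are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(joint_features, params):
    """Forward pass: joint features -> sigmoid hidden 1 -> sigmoid hidden 2 -> softmax."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = sigmoid(joint_features @ W1 + b1)
    h2 = sigmoid(h1 @ W2 + b2)
    return softmax(h2 @ W3 + b3)

# Hypothetical dimensions: 93-D joint feature vector, two hidden layers, 10 action classes.
d_in, d_h1, d_h2, n_classes = 93, 128, 64, 10
params = (
    rng.normal(0.0, 0.1, (d_in, d_h1)), np.zeros(d_h1),
    rng.normal(0.0, 0.1, (d_h1, d_h2)), np.zeros(d_h2),
    rng.normal(0.0, 0.1, (d_h2, n_classes)), np.zeros(n_classes),
)

# A batch of 4 keyframe feature vectors yields one class distribution each.
probs = forward(rng.normal(size=(4, d_in)), params)
```

Each row of `probs` is a probability distribution over the action classes, so the predicted action for a keyframe is simply the argmax of its row.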
Keywords: action recognition; deep neural network (DNN); motion capture (MoCap) datasets; keyframe extraction
MDPI and ACS Style

Yasin, H.; Hussain, M.; Weber, A. Keys for Action: An Efficient Keyframe-Based Approach for 3D Action Recognition Using a Deep Neural Network. Sensors 2020, 20, 2226. https://doi.org/10.3390/s20082226

