Open Access Article

Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video

1 Augmented Cognition Lab (ACLab), Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115, USA
2 Digital Medicine & Translational Imaging Group, Pfizer, Cambridge, MA 02139, USA
3 Neurology Department, Tufts University School of Medicine, Boston, MA 02111, USA
4 Department of Anatomy & Neurobiology, Boston University School of Medicine, Boston, MA 02118, USA
* Author to whom correspondence should be addressed.
Sensors 2019, 19(19), 4266; https://doi.org/10.3390/s19194266
Received: 30 August 2019 / Revised: 27 September 2019 / Accepted: 28 September 2019 / Published: 1 October 2019
Objective monitoring and assessment of human motor behavior can improve the diagnosis and management of several medical conditions. Over the past decade, significant advances have been made in the use of wearable technology for continuously monitoring human motor behavior in free-living conditions. However, wearable technology remains ill-suited for applications that require monitoring and interpretation of complex motor behaviors (e.g., those involving interactions with the environment). Recent advances in computer vision and deep learning have opened up new possibilities for extracting information from video recordings. In this paper, we present a hierarchical vision-based behavior phenotyping method for classifying basic human actions in video recordings made with a single RGB camera. Our method addresses the challenges of tracking multiple human actors and classifying actions in videos recorded in changing environments with different fields of view. We implement a cascaded pose tracker that uses temporal relationships between detections for short-term tracking and appearance-based tracklet fusion for long-term tracking. For action classification, we use pose evolution maps derived from the cascaded pose tracker as low-dimensional, interpretable representations of the movement sequences to train a convolutional neural network. The cascaded pose tracker achieves an average accuracy of 88% in tracking the target human actor in our video recordings, and the overall system achieves an average test accuracy of 84% for target-specific action classification in untrimmed video recordings.
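The abstract's "pose evolution map" can be illustrated as follows: a tracked sequence of 2D joint positions is rendered into a single image in which pixel intensity encodes time, giving a compact, interpretable input for a CNN. The sketch below is a minimal, hypothetical version of this idea; the function name, map resolution, and time-as-intensity encoding are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def pose_evolution_map(pose_seq, h=64, w=64):
    """Render a tracked pose sequence into a single 2D map.

    pose_seq: sequence of shape (T, J, 2) holding normalized joint
    coordinates in [0, 1]. Later frames are drawn brighter, so the
    resulting map summarizes how the pose evolves over the clip.
    (Illustrative sketch only; the paper's encoding may differ.)
    """
    pose_seq = np.asarray(pose_seq, dtype=float)
    T = len(pose_seq)
    canvas = np.zeros((h, w))
    for t, joints in enumerate(pose_seq):
        weight = (t + 1) / T  # time encoded as intensity
        for x, y in joints:
            col = min(int(x * (w - 1)), w - 1)
            row = min(int(y * (h - 1)), h - 1)
            canvas[row, col] = max(canvas[row, col], weight)
    return canvas
```

A map built this way has a fixed size regardless of clip length, which is what lets a standard image-classification CNN consume untrimmed video segments of varying duration.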
Keywords: action classification; human motor behavior; computer vision; deep learning; pose tracking
MDPI and ACS Style

Rezaei, B.; Christakis, Y.; Ho, B.; Thomas, K.; Erb, K.; Ostadabbas, S.; Patel, S. Target-Specific Action Classification for Automated Assessment of Human Motor Behavior from Video. Sensors 2019, 19, 4266.

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
