Open Access Article
Multimodal Technologies Interact. 2017, 1(3), 19; doi:10.3390/mti1030019

Three-Dimensional, Kinematic, Human Behavioral Pattern-Based Features for Multimodal Emotion Recognition

AssetMark Inc., Concord, CA 94520, USA
Received: 17 August 2017 / Revised: 3 September 2017 / Accepted: 8 September 2017 / Published: 11 September 2017

Abstract

This paper presents a multimodal emotion recognition method that uses a feature-level combination of three-dimensional (3D) geometric features (coordinates, distances, and angles of joints), kinematic features such as velocity and displacement of joints, and features extracted from daily behavioral patterns, such as the frequency of head nods, hand waves, and body gestures that represent specific emotions. Head, face, hand, body, and speech data were captured from 15 participants using an infrared sensor (Microsoft Kinect). The 3D geometric and kinematic features were developed from raw feature data in the visual channel. Human emotional behavior-based features were developed using inter-annotator agreement and commonly observed expressions, movements, and postures associated with specific emotions. The features from each modality and the behavioral pattern-based features (e.g., head shake, arm retraction, and forward body movement depicting anger) were combined to train the multimodal classifier for the emotion recognition system. The classifier was trained using 10-fold cross validation and a support vector machine (SVM) to predict six basic emotions. The results showed improved emotion recognition accuracy (precision increased by 3.28% and recall by 3.17%) when the 3D geometric, kinematic, and human behavioral pattern-based features were combined for multimodal emotion recognition using supervised classification.
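As a rough illustration of the pipeline the abstract describes, the following is a minimal Python sketch of joint-based geometric and kinematic feature extraction, feature-level fusion by concatenation, and SVM evaluation with 10-fold cross validation. It is not the authors' implementation: the joint indices, array shapes, frame rate, behavioral-cue counts, and data are all placeholder assumptions.

```python
# Minimal sketch (not the authors' code): 3D geometric features
# (joint coordinates, a joint-pair distance, a joint angle),
# kinematic features (per-joint displacement and velocity),
# feature-level fusion, and an SVM with 10-fold cross validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def geometric_features(joints):
    """joints: (n_joints, 3) array of 3D coordinates for one frame."""
    head, hand, elbow = joints[0], joints[1], joints[2]  # hypothetical joint indices
    dist = np.linalg.norm(head - hand)                   # joint-pair distance
    v1, v2 = hand - elbow, head - elbow
    cos_a = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8)
    angle = np.arccos(np.clip(cos_a, -1.0, 1.0))         # joint angle in radians
    return np.concatenate([joints.ravel(), [dist, angle]])

def kinematic_features(prev_joints, joints, dt=1.0 / 30):
    """Per-joint displacement and velocity between consecutive frames."""
    disp = joints - prev_joints
    return np.concatenate([disp.ravel(), (disp / dt).ravel()])

def fuse(geo, kin, behavior_counts):
    """Feature-level fusion: concatenate the modality feature vectors."""
    return np.concatenate([geo, kin, behavior_counts])

# Synthetic samples: two consecutive joint frames plus counts of
# behavioral cues (e.g., head nods, hand waves, forward body movements).
rng = np.random.default_rng(0)
n_samples, n_joints = 90, 20
X = []
for _ in range(n_samples):
    prev_j = rng.random((n_joints, 3))
    j = prev_j + 0.01 * rng.standard_normal((n_joints, 3))
    behavior = rng.integers(0, 5, size=3)
    X.append(fuse(geometric_features(j), kinematic_features(prev_j, j), behavior))
X = np.vstack(X)
y = np.repeat(np.arange(6), 15)  # balanced labels for six basic emotions

scores = cross_val_score(SVC(kernel="rbf"), X, y, cv=10)
print(f"10-fold CV accuracy on synthetic data: {scores.mean():.3f}")
```

Note that feature-level fusion, as used in the paper, combines the per-modality vectors into a single feature vector before classification, in contrast to decision-level fusion, which would combine the outputs of separate per-modality classifiers.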
Keywords: multimodal emotion recognition; depth sensing; infrared sensor; affective computing; three-dimensional features; geometric features; kinematic features; feature-level fusion

This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Patwardhan, A. Three-Dimensional, Kinematic, Human Behavioral Pattern-Based Features for Multimodal Emotion Recognition. Multimodal Technologies Interact. 2017, 1, 19.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.


Multimodal Technologies and Interaction, EISSN 2414-4088, published by MDPI AG, Basel, Switzerland.