Article

Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention

by Changzeng Fu, Chaoran Liu, Carlos T. Ishi and Hiroshi Ishiguro

1 Advanced Telecommunications Research Institute International, Kyoto 619-0288, Japan
2 Graduate School of Engineering Science, Osaka University, Osaka 560-8531, Japan
3 Interactive Robot Research Team, Robotics Project, RIKEN, Kyoto 619-0288, Japan
* Author to whom correspondence should be addressed.
Sensors 2020, 20(17), 4894; https://doi.org/10.3390/s20174894
Received: 15 July 2020 / Revised: 9 August 2020 / Accepted: 27 August 2020 / Published: 29 August 2020
Emotion recognition has been gaining attention in recent years due to its applications in artificial agents. To achieve good performance on this task, much research has been conducted on multi-modality emotion recognition models that leverage the complementary strengths of each modality. However, a research question remains: what is the most appropriate way to fuse the information from different modalities? In this paper, we propose audio sample augmentation and an emotion-oriented encoder-decoder to improve emotion recognition performance, and we discuss an inter-modality, decision-level fusion method based on a graph attention network (GAT). Compared to the baseline, our model improved the weighted average F1-score from 64.18% to 68.31% and the weighted average accuracy from 65.25% to 69.88%.
Keywords: emotion recognition; multi-modality; graph attention network
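
As a reading aid, the sketch below illustrates the kind of GAT-based multi-head inter-modality attention the abstract describes. It is a minimal PyTorch sketch, not the authors' implementation: the class name MultiHeadGATFusion, the feature dimensions, and the head count are illustrative assumptions. Each modality embedding (e.g., audio, text, video) is treated as a node in a fully connected graph, attention coefficients weight how strongly each modality attends to the others, and the heads are concatenated into a fused representation.

    # Minimal, illustrative sketch of GAT-style multi-head inter-modality
    # attention (not the authors' implementation). Each modality embedding
    # is a node in a fully connected graph; attention coefficients decide
    # how strongly each modality attends to the others. Names, dimensions,
    # and the head count are assumptions for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiHeadGATFusion(nn.Module):
        def __init__(self, in_dim: int = 128, out_dim: int = 32, num_heads: int = 4):
            super().__init__()
            self.num_heads, self.out_dim = num_heads, out_dim
            # Shared linear projection, split into per-head feature chunks.
            self.proj = nn.Linear(in_dim, out_dim * num_heads, bias=False)
            # Per-head attention vector a = [a_src || a_dst], as in GAT.
            self.attn = nn.Parameter(torch.empty(num_heads, 2 * out_dim))
            nn.init.xavier_uniform_(self.attn)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, n_modalities, in_dim) -- one node per modality.
            B, N, _ = x.shape
            h = self.proj(x).view(B, N, self.num_heads, self.out_dim)
            h = h.permute(0, 2, 1, 3)  # (B, heads, N, out_dim)
            # GAT scores e_ij = LeakyReLU(a_src . h_i + a_dst . h_j).
            src = torch.einsum('bhnd,hd->bhn', h, self.attn[:, :self.out_dim])
            dst = torch.einsum('bhnd,hd->bhn', h, self.attn[:, self.out_dim:])
            e = F.leaky_relu(src.unsqueeze(-1) + dst.unsqueeze(-2), 0.2)  # (B, heads, N, N)
            alpha = torch.softmax(e, dim=-1)   # normalize over neighbor modalities
            out = torch.matmul(alpha, h)       # attention-weighted neighborhood sum
            # Concatenate the heads back into one fused vector per modality node.
            return out.permute(0, 2, 1, 3).reshape(B, N, self.num_heads * self.out_dim)

    # Usage: fuse three 128-d modality embeddings for a batch of 8 samples.
    fusion = MultiHeadGATFusion()
    nodes = torch.randn(8, 3, 128)   # (batch, [audio, text, video], features)
    fused = fusion(nodes)            # -> torch.Size([8, 3, 128])

In a decision-level fusion setting such as the one the paper discusses, a classifier head would typically be applied to each fused modality node (or to their pooled representation) before the modality-wise decisions are combined.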
MDPI and ACS Style

Fu, C.; Liu, C.; Ishi, C.T.; Ishiguro, H. Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention. Sensors 2020, 20, 4894. https://doi.org/10.3390/s20174894

AMA Style

Fu C, Liu C, Ishi CT, Ishiguro H. Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention. Sensors. 2020; 20(17):4894. https://doi.org/10.3390/s20174894

Chicago/Turabian Style

Fu, Changzeng, Chaoran Liu, Carlos T. Ishi, and Hiroshi Ishiguro. 2020. "Multi-Modality Emotion Recognition Model with GAT-Based Multi-Head Inter-Modality Attention" Sensors 20, no. 17: 4894. https://doi.org/10.3390/s20174894

