Article

Deep-Learning-Based Multimodal Emotion Classification for Music Videos

Department of Computer Science and Engineering, Jeonbuk National University, Jeonju-City 54896, Korea
* Author to whom correspondence should be addressed.
Academic Editors: Soo-Hyung Kim and Gueesang Lee
Sensors 2021, 21(14), 4927; https://doi.org/10.3390/s21144927
Received: 14 June 2021 / Revised: 16 July 2021 / Accepted: 17 July 2021 / Published: 20 July 2021
(This article belongs to the Special Issue Sensor Based Multi-Modal Emotion Recognition)
Music videos carry a great deal of visual and acoustic information, and each source shapes the emotion the video conveys, so efficient affective computing calls for a multimodal approach. This paper presents an affective computing system for emotional analysis that relies on music, video, and facial expression cues. We applied audio–video information exchange and boosting methods to regularize the training process, and we reduced computational cost with a separable convolution strategy. In sum, our empirical findings are as follows: (1) multimodal representations efficiently capture the acoustic and visual emotional cues in each music video; (2) the computational cost of each neural network is significantly reduced by factorizing the standard 2D/3D convolution into separate channel and spatiotemporal interactions; and (3) information-sharing methods incorporated into the multimodal representations help guide each modality's information flow and boost overall performance. We evaluated several unimodal and multimodal networks using various evaluation metrics and visual analyses. Our best classifier attained 74% accuracy, an F1-score of 0.73, and an area-under-the-curve score of 0.926.
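Finding (2) refers to factorizing a dense 2D/3D convolution into separate channel-wise and spatiotemporal steps. The sketch below (a minimal illustration assuming PyTorch; the class name, kernel sizes, and channel widths are illustrative, not the authors' implementation) shows one common way to do this for video features: a depthwise spatial filter, a pointwise channel mixer, and a separate temporal filter, which together use far fewer parameters than a single dense 3D kernel.

```python
# Minimal sketch of a channel- and filter-separable spatiotemporal convolution.
# Assumes PyTorch; names and sizes are illustrative only.
import torch
import torch.nn as nn


class SeparableSpatioTemporalConv(nn.Module):
    """Replaces a dense k x k x k 3D conv with:
    (1) a depthwise 1 x k x k spatial conv (one filter per channel),
    (2) a 1 x 1 x 1 pointwise conv that mixes channels,
    (3) a k x 1 x 1 temporal conv along the frame axis."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        pad = k // 2
        # Channel-separable spatial filtering.
        self.spatial = nn.Conv3d(in_ch, in_ch, kernel_size=(1, k, k),
                                 padding=(0, pad, pad), groups=in_ch, bias=False)
        # Pointwise conv restores cross-channel interaction.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False)
        # Temporal filtering applied separately along the frame dimension.
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(pad, 0, 0), bias=False)
        self.bn = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):  # x: (batch, channels, frames, height, width)
        x = self.pointwise(self.spatial(x))
        x = self.temporal(x)
        return self.act(self.bn(x))


# Parameter comparison against a dense 3x3x3 convolution of the same width.
dense = nn.Conv3d(64, 128, kernel_size=3, padding=1, bias=False)
factored = SeparableSpatioTemporalConv(64, 128, k=3)
print(sum(p.numel() for p in dense.parameters()))     # 221,184
print(sum(p.numel() for p in factored.parameters()))  # ~58,000

# Shape check on a small video clip: 2 clips, 64 channels, 16 frames, 56x56.
y = factored(torch.randn(2, 64, 16, 56, 56))
print(y.shape)  # torch.Size([2, 128, 16, 56, 56])
```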
Keywords: channel and filter separable convolution; end-to-end emotion classification; unimodal and multimodal

MDPI and ACS Style

Pandeya, Y.R.; Bhattarai, B.; Lee, J. Deep-Learning-Based Multimodal Emotion Classification for Music Videos. Sensors 2021, 21, 4927. https://doi.org/10.3390/s21144927

