Open Access Article

Bimodal Emotion Recognition Model for Minnan Songs

1 School of Computer Science, Wuhan University, Wuhan 430072, China
2 School of Physics and Information Engineering, Minnan Normal University, Zhangzhou 363000, China
3 College of Information and Engineering, Jingdezhen Ceramic Institute, Jingdezhen 333000, China
* Authors to whom correspondence should be addressed.
Information 2020, 11(3), 145; https://doi.org/10.3390/info11030145
Received: 29 January 2020 / Revised: 20 February 2020 / Accepted: 2 March 2020 / Published: 4 March 2020
Most existing studies of emotion recognition in Minnan songs approach the problem from the perspectives of music analysis theory and music appreciation; they do not explore the possibility of automatic emotion recognition. In this paper, we propose a model consisting of four main modules that classifies the emotion of Minnan songs using bimodal data: song lyrics and audio. In the proposed model, an attention-based Long Short-Term Memory (LSTM) neural network extracts lyrical features, and a Convolutional Neural Network (CNN) extracts audio features from the spectrum. The two kinds of extracted features are then fused by multimodal compact bilinear pooling, and the fused features are fed into the classification module to determine the song's emotion. We designed three groups of experiments to investigate the classification performance of combinations of the four main modules, to compare the proposed model with current approaches, and to examine the influence of a few key parameters on recognition performance. The results show that the proposed model outperforms all other experimental groups; with an appropriate combination of parameters, its accuracy, precision, and recall all exceed 0.80.
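The abstract names multimodal compact bilinear pooling (MCB) as the fusion step that combines the lyric and audio feature vectors. As a minimal sketch of the MCB idea only (count-sketch projection of each modality, then circular convolution of the two sketches, computed directly here rather than via FFT), the toy below is not the authors' implementation; the function names, the output dimension `d`, and the fixed random seed are all illustrative assumptions.

```python
import random

def count_sketch(v, d, h, s):
    """Project vector v into d dimensions: out[h[i]] += s[i] * v[i]."""
    out = [0.0] * d
    for i, x in enumerate(v):
        out[h[i]] += s[i] * x
    return out

def circular_conv(a, b):
    """Circular convolution of two equal-length vectors (the MCB fusion step)."""
    d = len(a)
    return [sum(a[i] * b[(k - i) % d] for i in range(d)) for k in range(d)]

def mcb_pool(x, y, d, seed=0):
    """Fuse feature vectors x (e.g. lyric) and y (e.g. audio) into d dims."""
    rng = random.Random(seed)  # fixed hashes/signs so the sketch is reusable
    hx = [rng.randrange(d) for _ in x]
    sx = [rng.choice((-1, 1)) for _ in x]
    hy = [rng.randrange(d) for _ in y]
    sy = [rng.choice((-1, 1)) for _ in y]
    return circular_conv(count_sketch(x, d, hx, sx),
                         count_sketch(y, d, hy, sy))

# Toy usage: fuse an 8-dim "lyric" vector with a 6-dim "audio" vector.
fused = mcb_pool([0.5] * 8, [1.0] * 6, d=16)
```

In a full model the fused vector would then pass to the classification module; in practice MCB is usually computed with FFTs for efficiency, which is equivalent to the direct circular convolution shown here.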
Keywords: bimodal emotion recognition; Minnan songs; attention-based LSTM; convolutional neural network; Mel spectrum
Xiang, Z.; Dong, X.; Li, Y.; Yu, F.; Xu, X.; Wu, H. Bimodal Emotion Recognition Model for Minnan Songs. Information 2020, 11, 145.

