Topic Editors

Dr. Junaid Baber
Laboratoire d’Informatique de Grenoble, University of Grenoble Alpes, 38000 Grenoble, France

Dr. Ali Shariq Imran
Department of Computer Science, NTNU - Norwegian University of Science and Technology, P.O. Box 191, 2802 Gjøvik, Norway

Prof. Dr. Doudpota Sher
Department of Computer Science, Sukkur IBA University, Sukkur 65200, Pakistan

Dr. Maheen Bakhtyar
SLIDE, Université Grenoble Alpes, 38401 Grenoble, France

Multimodal Sentiment Analysis Based on Deep Learning Methods Such as Convolutional Neural Networks

Abstract submission deadline: closed (31 August 2024)
Manuscript submission deadline: closed (31 October 2024)
Viewed by 17023

Topic Information

Dear Colleagues,

This Special Issue is aimed at researchers working on large-scale data and real-world problem solving. Social media apps generate gigabytes of data that demand researchers' attention to identify useful patterns and information. Owing to globalization, social media content is shared and commented on by users from diverse backgrounds, so the data contain many opinions, in different languages, on similar topics. The classical approach to text classification, i.e., sentiment analysis, relies mainly on NLP techniques designed for a single language; it is therefore important to propose models that can learn features from multilingual data (a minimal multilingual classification sketch follows the topic list below). Submissions on both the theoretical development of sentiment analysis and its applications in daily life are invited. Topics of interest include, but are not limited to, the following areas:

  • Text classification;
  • Opinion mining;
  • Visualization of opinions;
  • Social network analysis for sentiment analysis;
  • Multi-modal learning for text classification;
  • Multi-lingual sentiment analysis;
  • Applications for sentiment analysis;
  • Explainable artificial intelligence for sentiment analysis;
  • Aspect-based sentiment analysis;
  • Hate speech detection;
  • Sarcasm and irony detection.
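
As referenced above, the sketch below illustrates the multilingual angle of this Topic: a single pre-trained multilingual transformer scoring sentiment for opinions written in different languages, via the Hugging Face pipeline API. The checkpoint name is an assumption (any publicly available multilingual sentiment model would serve the same purpose), and the snippet is illustrative rather than a recommended baseline.

```python
# Minimal multilingual sentiment-classification sketch (illustrative only).
# Assumption: the "cardiffnlp/twitter-xlm-roberta-base-sentiment" checkpoint is
# available on the Hugging Face Hub; substitute any multilingual sentiment model.
from transformers import pipeline

clf = pipeline("sentiment-analysis",
               model="cardiffnlp/twitter-xlm-roberta-base-sentiment")

# One model scores opinions written in different languages on similar topics.
reviews = [
    "This phone's camera is outstanding.",        # English
    "La batería de este teléfono es terrible.",   # Spanish
    "Cet écran est vraiment magnifique.",         # French
]
for text, result in zip(reviews, clf(reviews)):
    print(f"{result['label']:>8}  {result['score']:.3f}  {text}")
```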

Dr. Junaid Baber
Dr. Ali Shariq Imran
Prof. Dr. Doudpota Sher
Dr. Maheen Bakhtyar 
Topic Editors

Keywords

  •  text classification
  •  opinion mining
  •  visualization of opinions
  •  social network analysis for sentiment analysis
  •  multi-modal learning for text classification
  •  multi-lingual sentiment analysis
  •  applications for sentiment analysis
  •  explainable artificial intelligence for sentiment analysis
  •  deep learning for text classification

Participating Journals

Journal          Impact Factor   CiteScore   Launched   First Decision (median)   APC
Algorithms       1.8             4.5         2008       18.9 days                 CHF 1600
Axioms           1.9             -           2012       22.8 days                 CHF 2400
Future Internet  2.8             8.3         2009       16.9 days                 CHF 1600
Mathematics      2.3             4.6         2013       18.3 days                 CHF 2600
Symmetry         2.2             5.3         2009       17.3 days                 CHF 2400

Preprints.org is a multidisciplinary platform offering a preprint service designed to facilitate the early sharing of your research. It supports and empowers your research journey from the very beginning.

MDPI Topics is collaborating with Preprints.org and has established a direct connection between MDPI journals and the platform. Authors are encouraged to take advantage of this opportunity by posting their preprints at Preprints.org prior to publication:

  1. Share your research immediately: disseminate your ideas prior to publication and establish priority for your work.
  2. Safeguard your intellectual contribution: protect your ideas with a time-stamped preprint that serves as proof of your research timeline.
  3. Boost visibility and impact: increase the reach and influence of your research by making it accessible to a global audience.
  4. Gain early feedback: receive valuable input and insights from peers before submitting to a journal.
  5. Ensure broad indexing: preprints are indexed by Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit, and Europe PMC.

Published Papers (5 papers)

19 pages, 1401 KiB  
Article
Enhancing Arabic Sentiment Analysis of Consumer Reviews: Machine Learning and Deep Learning Methods Based on NLP
by Hani Almaqtari, Feng Zeng and Ammar Mohammed
Algorithms 2024, 17(11), 495; https://doi.org/10.3390/a17110495 - 3 Nov 2024
Cited by 2 | Viewed by 1682
Abstract
Sentiment analysis utilizes Natural Language Processing (NLP) techniques to extract opinions from text, which is critical for businesses looking to refine strategies and better understand customer feedback. Understanding people’s sentiments about products through emotional tone analysis is paramount. However, analyzing sentiment in Arabic and its dialects poses challenges due to the language’s intricate morphology, right-to-left script, and nuanced emotional expressions. To address this, this study introduces the Arb-MCNN-Bi Model, which integrates the strengths of the transformer-based AraBERT (Arabic Bidirectional Encoder Representations from Transformers) model with a Multi-channel Convolutional Neural Network (MCNN) and a Bidirectional Gated Recurrent Unit (BiGRU) for Arabic sentiment analysis. AraBERT, designed specifically for Arabic, captures rich contextual information through word embeddings. These embeddings are processed by the MCNN to enhance feature extraction and by the BiGRU to retain long-term dependencies. The final output is obtained through feedforward neural networks. The study compares the proposed model with various machine learning and deep learning methods, applying advanced NLP techniques such as Term Frequency-Inverse Document Frequency (TF-IDF), n-gram, Word2Vec (Skip-gram), and fastText (Skip-gram). Experiments are conducted on three Arabic datasets: the Arabic Customer Reviews Dataset (ACRD), Large-scale Arabic Book Reviews (LABR), and the Hotel Arabic Reviews dataset (HARD). The Arb-MCNN-Bi model with AraBERT achieved accuracies of 96.92%, 96.68%, and 92.93% on the ACRD, HARD, and LABR datasets, respectively. These results demonstrate the model’s effectiveness in analyzing Arabic text data and outperforming traditional approaches. Full article
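
For readers who want a concrete picture of the hybrid architecture outlined in this abstract, here is a minimal PyTorch sketch of the Arb-MCNN-Bi idea: contextual embeddings (e.g., from AraBERT, hidden size assumed to be 768) are processed in parallel by a multi-channel CNN and a BiGRU, and the fused features are classified by a feed-forward head. The kernel widths and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of an AraBERT-embeddings -> multi-channel CNN + BiGRU -> feed-forward classifier.
# All dimensions are illustrative assumptions; embeddings are assumed precomputed.
import torch
import torch.nn as nn

class ArbMCNNBi(nn.Module):
    def __init__(self, hidden=768, n_filters=128, kernel_sizes=(2, 3, 4),
                 gru_hidden=128, n_classes=2):
        super().__init__()
        # Multi-channel CNN: one 1-D convolution per kernel size over the token axis.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, n_filters, k, padding=k // 2) for k in kernel_sizes]
        )
        # BiGRU retains long-range dependencies across the sequence.
        self.bigru = nn.GRU(hidden, gru_hidden, batch_first=True, bidirectional=True)
        fused = n_filters * len(kernel_sizes) + 2 * gru_hidden
        self.classifier = nn.Sequential(
            nn.Linear(fused, 256), nn.ReLU(), nn.Dropout(0.3), nn.Linear(256, n_classes)
        )

    def forward(self, embeddings):            # embeddings: (batch, seq_len, hidden)
        x = embeddings.transpose(1, 2)        # (batch, hidden, seq_len) for Conv1d
        cnn_feats = [torch.amax(torch.relu(conv(x)), dim=2) for conv in self.convs]
        _, h = self.bigru(embeddings)         # h: (2, batch, gru_hidden)
        gru_feats = torch.cat([h[0], h[1]], dim=1)
        return self.classifier(torch.cat(cnn_feats + [gru_feats], dim=1))

# Example: a batch of 4 sequences of 64 AraBERT-sized token embeddings.
logits = ArbMCNNBi()(torch.randn(4, 64, 768))
print(logits.shape)  # torch.Size([4, 2])
```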

20 pages, 6478 KiB  
Article
CSINet: Channel–Spatial Fusion Networks for Asymmetric Facial Expression Recognition
by Yan Cheng and Defeng Kong
Symmetry 2024, 16(4), 471; https://doi.org/10.3390/sym16040471 - 12 Apr 2024
Cited by 2 | Viewed by 1564
Abstract
Faces in natural scenes are typically asymmetric due to occlusion or posture change, and this asymmetry is a key source of missing information for facial expression recognition. To address the low accuracy of asymmetric facial expression recognition, this paper proposes an expression recognition network that fuses global channel features with local spatial information, called the “Channel–Spatial Integration Network” (CSINet). First, to extract underlying detail information and deepen the network, an attention residual module with a redundant-information filtering function is designed, and the backbone feature-extraction network is built by stacking this module. Second, considering the loss of information in locally occluded key facial regions, a channel–spatial fusion structure is constructed in which channel features and spatial features are combined to improve the accuracy of recognizing occluded faces. Finally, before the fully connected layer, additional local spatial information is embedded into the global channel information to capture the relationships between channel–spatial targets, improving the expressiveness of the features. Experimental results on the natural-scene facial expression datasets RAF-DB and FERPlus show recognition accuracies of 89.67% and 90.83%, which are 13.24% and 11.52% higher than the ResNet50 baseline, respectively. Compared with recent facial expression recognition methods such as CVT and PACVT, the proposed method achieves better results on masked facial expression recognition, providing theoretical and technical reference for facial emotion analysis and human–computer interaction applications. Full article
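
The following PyTorch sketch illustrates the general channel–spatial fusion idea described in the abstract: a channel branch re-weights feature channels from global context, and a spatial branch gates locations on the feature map. It is a generic attention-fusion block under assumed shapes and reduction ratios, not the authors' CSINet modules.

```python
# Generic channel + spatial gating block applied to a CNN feature map (illustrative).
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    def __init__(self, channels=256, reduction=16):
        super().__init__()
        # Channel branch: squeeze global context, then re-weight channels.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )
        # Spatial branch: compress channels, then produce a per-pixel gate.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3), nn.Sigmoid()
        )

    def forward(self, x):                      # x: (batch, C, H, W)
        b, c, _, _ = x.shape
        x = x * self.channel_gate(x).view(b, c, 1, 1)           # channel re-weighting
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)                     # spatial re-weighting

feats = torch.randn(2, 256, 14, 14)            # e.g., a mid-level backbone feature map
print(ChannelSpatialFusion()(feats).shape)     # torch.Size([2, 256, 14, 14])
```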

13 pages, 1270 KiB  
Article
Multimodal Prompt Learning in Emotion Recognition Using Context and Audio Information
by Eunseo Jeong, Gyunyeop Kim and Sangwoo Kang
Mathematics 2023, 11(13), 2908; https://doi.org/10.3390/math11132908 - 28 Jun 2023
Cited by 10 | Viewed by 3825
Abstract
Prompt learning has improved the performance of language models by reducing the gap between pre-training and downstream-task training. However, extending prompt learning from language models pre-trained on unimodal data to multimodal sources is difficult, as it typically requires additional deep-learning layers. In natural-language emotion recognition, better classification can be expected when a model is trained on audio and text rather than on text alone: audio cues such as voice pitch, tone, and intonation carry information that is unavailable in text and help predict emotions more effectively. Thus, using both audio and text enables better emotion prediction in speech emotion-recognition models than semantic information alone. In this paper, in contrast to existing studies that handle multimodal data with an additional layer, we propose a method for improving speech emotion recognition using multimodal prompt learning with text-based pre-trained models. The proposed method uses text and audio information in prompt learning by employing a language model pre-trained on natural-language text. In addition, we propose improving the emotion recognition of the current utterance by incorporating the emotions and contextual information of previous utterances into the prompt. The proposed method was evaluated on the English multimodal dataset MELD and the Korean multimodal dataset KEMDy20. Experiments combining both proposed methods obtained an accuracy of 87.49%, an F1 score of 44.16, and a weighted F1 score of 86.28. Full article
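
The sketch below illustrates the general multimodal prompting idea from the abstract: an utterance-level audio feature vector is projected into a few pseudo-token embeddings and prepended to the text embeddings before a text encoder. A small TransformerEncoder stands in for the pre-trained language model; the sizes and the number of audio prompt tokens are assumptions for illustration, not the authors' setup.

```python
# Audio features projected into prompt tokens and prepended to text embeddings (illustrative).
import torch
import torch.nn as nn

class MultimodalPromptEncoder(nn.Module):
    def __init__(self, d_model=256, audio_dim=128, n_audio_tokens=4, n_classes=7):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, n_audio_tokens * d_model)
        self.n_audio_tokens, self.d_model = n_audio_tokens, d_model
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for a pre-trained LM
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, text_emb, audio_feat):
        # text_emb: (batch, seq_len, d_model); audio_feat: (batch, audio_dim)
        audio_tokens = self.audio_proj(audio_feat).view(
            -1, self.n_audio_tokens, self.d_model)
        fused = torch.cat([audio_tokens, text_emb], dim=1)   # audio prompt + text tokens
        hidden = self.encoder(fused)
        return self.head(hidden[:, 0])                       # classify from the first token

model = MultimodalPromptEncoder()
logits = model(torch.randn(2, 32, 256), torch.randn(2, 128))
print(logits.shape)  # torch.Size([2, 7])
```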

21 pages, 1274 KiB  
Article
Quantum-Inspired Fully Complex-Valued Neural Network for Sentiment Analysis
by Wei Lai, Jinjing Shi and Yan Chang
Axioms 2023, 12(3), 308; https://doi.org/10.3390/axioms12030308 - 19 Mar 2023
Cited by 11 | Viewed by 4895
Abstract
Most existing quantum-inspired models rely on amplitude-phase embeddings to model natural language, mapping words into Hilbert space. In quantum-computing theory, the vectors corresponding to quantum states are complex-valued, so there is a gap between the two areas. Complex-valued neural networks have been studied, but their practical applications are few, especially in downstream natural language processing tasks such as sentiment analysis and language modeling. In fact, a complex-valued neural network can use the imaginary part to encode hidden information and can express richer information, making it suitable for modeling complex natural language. Meanwhile, quantum-inspired models are defined in Hilbert space, which is also a complex space, so it is natural to construct quantum-inspired models from complex-valued neural networks. We therefore propose a new quantum-inspired model for NLP, ComplexQNN, which contains a complex-valued embedding layer, a quantum encoding layer, and a measurement layer. The modules of ComplexQNN are fully based on complex-valued neural networks, which aligns it more closely with quantum-computing theory and should make it easier to transfer to quantum computers in the future to achieve exponential acceleration. We conducted experiments on six sentiment-classification datasets, comparing against five classical models (TextCNN, GRU, ELMo, BERT, and RoBERTa). The results show that our model improves accuracy by 10% compared with TextCNN and GRU and is competitive with ELMo, BERT, and RoBERTa. Full article
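
To make the "fully complex-valued" idea concrete, here is a minimal sketch of a complex-valued classifier in the same spirit: word vectors are treated as complex amplitudes, a complex linear map plays the role of the encoding layer, and a measurement-like layer takes squared magnitudes to produce real-valued scores. Dimensions and layer choices are illustrative assumptions, not the ComplexQNN architecture.

```python
# Complex-valued linear map plus |z|^2 "measurement" for sentiment scoring (illustrative).
import torch
import torch.nn as nn

class ComplexLinear(nn.Module):
    """(a + ib)(x + iy) = (ax - by) + i(ay + bx), realized with two real Linear layers."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.re = nn.Linear(d_in, d_out)
        self.im = nn.Linear(d_in, d_out)

    def forward(self, x_re, x_im):
        return self.re(x_re) - self.im(x_im), self.re(x_im) + self.im(x_re)

class ComplexSentimentNet(nn.Module):
    def __init__(self, d_emb=128, d_hidden=64, n_classes=2):
        super().__init__()
        self.encode = ComplexLinear(d_emb, d_hidden)
        self.measure = nn.Linear(d_hidden, n_classes)      # acts on real-valued magnitudes

    def forward(self, emb_re, emb_im):                     # (batch, seq_len, d_emb) each
        h_re, h_im = self.encode(emb_re, emb_im)
        magnitudes = (h_re ** 2 + h_im ** 2).mean(dim=1)   # "measurement": |z|^2, pooled over tokens
        return self.measure(magnitudes)

net = ComplexSentimentNet()
logits = net(torch.randn(4, 16, 128), torch.randn(4, 16, 128))
print(logits.shape)  # torch.Size([4, 2])
```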

16 pages, 5311 KiB  
Article
Product Evaluation Prediction Model Based on Multi-Level Deep Feature Fusion
by Qingyan Zhou, Hao Li, Youhua Zhang and Junhong Zheng
Future Internet 2023, 15(1), 31; https://doi.org/10.3390/fi15010031 - 9 Jan 2023
Cited by 2 | Viewed by 2191
Abstract
Traditional product evaluation research collects data through questionnaires or interviews to optimize product design, but the whole process takes a long time to deploy and cannot fully reflect the market situation. To address this problem, we propose a product evaluation prediction model based on multi-level deep feature fusion of online reviews. It mines product satisfaction from the massive number of reviews published by users on e-commerce websites and uses the results to analyze the relationship between design attributes and customer satisfaction, so that products can be designed around customer satisfaction. The proposed model consists of four parts. First, a DSCNN (Depthwise Separable Convolution) layer and a pooling layer are combined to extract shallow features from the raw data. Second, CBAM (Convolutional Block Attention Module) separates features along the spatial and channel dimensions, enhancing the expressive power of key features in both dimensions and suppressing the influence of redundant information. Third, BiLSTM (Bidirectional Long Short-Term Memory) is used to handle the complexity and nonlinearity of product evaluation prediction, and the predicted result is output through a fully connected layer. Finally, the global optimization capability of a genetic algorithm is used to tune the hyperparameters of the constructed model. The final forecasting model consists of a set of decision rules that avoids model redundancy and achieves the best forecasting effect. Compared with Support Vector Regression (SVR), DSCNN, BiLSTM, and DSCNN-BiLSTM, the proposed method performs better on five evaluation indicators: MSE, MAE, RMSE, MAPE, and SMAPE. By predicting customers' emotional satisfaction, it can provide accurate decision-making suggestions for enterprises designing new products. Full article
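
As a rough illustration of the first three stages described in the abstract, the PyTorch sketch below chains a depthwise-separable 1-D convolution, pooling, and a BiLSTM into a review-score regressor. The CBAM block and the genetic-algorithm hyperparameter search are omitted, and all sizes are assumptions rather than the authors' configuration.

```python
# DSCNN (depthwise + pointwise conv) -> pooling -> BiLSTM -> fully connected regressor (illustrative).
import torch
import torch.nn as nn

class DSCNNBiLSTMRegressor(nn.Module):
    def __init__(self, d_emb=128, d_conv=128, lstm_hidden=64):
        super().__init__()
        # Depthwise separable convolution: per-channel conv followed by a 1x1 pointwise conv.
        self.depthwise = nn.Conv1d(d_emb, d_emb, kernel_size=3, padding=1, groups=d_emb)
        self.pointwise = nn.Conv1d(d_emb, d_conv, kernel_size=1)
        self.pool = nn.MaxPool1d(2)
        self.bilstm = nn.LSTM(d_conv, lstm_hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, 1)   # predicted satisfaction score

    def forward(self, emb):                       # emb: (batch, seq_len, d_emb) review embeddings
        x = torch.relu(self.pointwise(self.depthwise(emb.transpose(1, 2))))
        x = self.pool(x).transpose(1, 2)          # back to (batch, seq_len/2, d_conv)
        _, (h, _) = self.bilstm(x)
        return self.fc(torch.cat([h[0], h[1]], dim=1)).squeeze(1)

model = DSCNNBiLSTMRegressor()
scores = model(torch.randn(8, 100, 128))
print(scores.shape)  # torch.Size([8])
```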
