Search Results (58)

Search Parameters:
Keywords = fine-grained sentiment

37 pages, 5086 KiB  
Article
Global Embeddings, Local Signals: Zero-Shot Sentiment Analysis of Transport Complaints
by Aliya Nugumanova, Daniyar Rakhimzhanov and Aiganym Mansurova
Informatics 2025, 12(3), 82; https://doi.org/10.3390/informatics12030082 - 14 Aug 2025
Abstract
Public transport agencies must triage thousands of multilingual complaints every day, yet the cost of training and serving fine-grained sentiment analysis models limits real-time deployment. The proposed “one encoder, any facet” framework therefore offers a reproducible, resource-efficient alternative to heavy fine-tuning for domain-specific sentiment analysis or opinion mining tasks on digital service data. To the best of our knowledge, we are the first to test this paradigm on operational multilingual complaints, where public transport agencies must prioritize thousands of Russian- and Kazakh-language messages each day. A human-labelled corpus of 2400 complaints is embedded with five open-source universal models. Obtained embeddings are matched to semantic “anchor” queries that describe three distinct facets: service aspect (eight classes), implicit frustration, and explicit customer request. In the strict zero-shot setting, the best encoder reaches 77% accuracy for aspect detection, 74% for frustration, and 80% for request; taken together, these signals reproduce human four-level priority in 60% of cases. Attaching a single-layer logistic probe on top of the frozen embeddings boosts performance to 89% for aspect, 83–87% for the binary facets, and 72% for end-to-end triage. Compared with recent fine-tuned sentiment analysis systems, our pipeline cuts memory demands by two orders of magnitude and eliminates task-specific training yet narrows the accuracy gap to under five percentage points. These findings indicate that a single frozen encoder, guided by handcrafted anchors and an ultra-light head, can deliver near-human triage quality across multiple pragmatic dimensions, opening the door to low-cost, language-agnostic monitoring of digital-service feedback. Full article
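As a rough illustration of the anchor-matching pipeline this abstract describes (frozen multilingual encoder, handcrafted anchor queries, and an optional single-layer probe), the sketch below uses a sentence-transformers style encoder. The model name, anchor texts, and class set are illustrative assumptions, not the encoders or anchors used in the paper.

```python
# Sketch of zero-shot facet detection with a frozen encoder and semantic anchors.
# Assumptions: any multilingual sentence encoder works; the anchors and classes
# below are illustrative, not the ones used in the paper.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
import numpy as np

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # hypothetical choice

# One anchor query per aspect class (the paper uses eight classes; three shown here).
anchors = {
    "schedule": "The bus or train did not arrive on time.",
    "driver":   "The driver behaved rudely or drove dangerously.",
    "payment":  "There was a problem paying the fare or validating the card.",
}
anchor_emb = encoder.encode(list(anchors.values()), normalize_embeddings=True)

def zero_shot_aspect(complaints):
    """Assign each complaint to the nearest anchor by cosine similarity."""
    emb = encoder.encode(complaints, normalize_embeddings=True)
    sims = emb @ anchor_emb.T                    # cosine similarity (embeddings are normalized)
    labels = np.array(list(anchors))[sims.argmax(axis=1)]
    return emb, labels

# Optional single-layer probe on top of the frozen embeddings (needs labelled data).
def fit_probe(train_texts, train_labels):
    emb = encoder.encode(train_texts, normalize_embeddings=True)
    return LogisticRegression(max_iter=1000).fit(emb, train_labels)
```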
(This article belongs to the Special Issue Practical Applications of Sentiment Analysis)

22 pages, 4479 KiB  
Article
MGMR-Net: Mamba-Guided Multimodal Reconstruction and Fusion Network for Sentiment Analysis with Incomplete Modalities
by Chengcheng Yang, Zhiyao Liang, Tonglai Liu, Zeng Hu and Dashun Yan
Electronics 2025, 14(15), 3088; https://doi.org/10.3390/electronics14153088 - 1 Aug 2025
Viewed by 330
Abstract
Multimodal sentiment analysis (MSA) faces key challenges such as incomplete modality inputs, long-range temporal dependencies, and suboptimal fusion strategies. To address these, we propose MGMR-Net, a Mamba-guided multimodal reconstruction and fusion network that integrates modality-aware reconstruction with text-centric fusion within an efficient state-space modeling framework. MGMR-Net consists of two core components: the Mamba-collaborative fusion module, which utilizes a two-stage selective state-space mechanism for fine-grained cross-modal alignment and hierarchical temporal integration, and the Mamba-enhanced reconstruction module, which employs continuous-time recurrence and dynamic gating to accurately recover corrupted or missing modality features. The entire network is jointly optimized via a unified multi-task loss, enabling simultaneous learning of discriminative features for sentiment prediction and reconstructive features for modality recovery. Extensive experiments on CMU-MOSI, CMU-MOSEI, and CH-SIMS datasets demonstrate that MGMR-Net consistently outperforms several baseline methods under both complete and missing modality settings, achieving superior accuracy, robustness, and generalization. Full article
(This article belongs to the Special Issue Application of Data Mining in Decision Support Systems (DSSs))

28 pages, 1081 KiB  
Article
Machine Learning with Self-Assessment Manikin Valence Scale for Fine-Grained Sentiment Analysis
by Lindung Parningotan Manik, Harry Susianto, Arawinda Dinakaramani, R. Niken Pramanik and Totok Suhardijanto
Information 2025, 16(7), 562; https://doi.org/10.3390/info16070562 - 30 Jun 2025
Viewed by 471
Abstract
Traditional sentiment analysis methods use lexicons or machine learning models to classify text as positive or negative. These approaches are unable to capture nuance or intensity in short or informal texts. We propose a novel method that uses the Self-Assessment Manikin (SAM) valence scale, which provides a continuous measurement of sentiment, ranging from extremely positive to extremely negative. We describe the development of a lexicon of emotion-laden words with SAM valence scales and investigate its application to fine-grained sentiment analysis. We also propose a lexicon-based polarity approach to complement textual features in machine learning models trained to predict a numerical sentiment label for a given text. This method is evaluated using a new dataset of short texts with sentiment labels based on expert ratings, which are predicted using various machine learning fusion mechanisms. The lexicon-based polarity method is found to provide improvements of 0.250, 0.999, and 0.261 in the mean squared error for classical machine learning, RNN, and transformer-based architectures, respectively. Full article
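To make the lexicon-plus-features idea concrete, here is a toy sketch of combining a SAM-style valence feature with standard text features in a regression model. The tiny lexicon, the 1-9 scale, and fusion by simple concatenation are assumptions for illustration; the paper builds its own lexicon and fusion mechanisms.

```python
# Toy sketch: combine a lexicon-based valence feature with TF-IDF text features
# to predict a continuous sentiment score. The tiny lexicon and 1-9 SAM-style
# scale below are made up for illustration; the paper builds its own lexicon.
import numpy as np
from scipy.sparse import hstack, csr_matrix
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

sam_lexicon = {"great": 8.2, "love": 8.5, "fine": 6.0, "bad": 2.5, "terrible": 1.4}  # hypothetical

def lexicon_valence(text, neutral=5.0):
    """Average SAM valence of emotion-laden words; neutral midpoint if none found."""
    scores = [sam_lexicon[w] for w in text.lower().split() if w in sam_lexicon]
    return float(np.mean(scores)) if scores else neutral

texts = ["the service was great", "terrible app, bad update", "it is fine I guess"]
y = np.array([7.8, 1.9, 5.5])  # expert valence ratings (made-up numbers)

vec = TfidfVectorizer()
X_text = vec.fit_transform(texts)
X_lex = csr_matrix([[lexicon_valence(t)] for t in texts])
X = hstack([X_text, X_lex])                      # fuse textual and lexicon features

model = Ridge().fit(X, y)
print(mean_squared_error(y, model.predict(X)))
```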

25 pages, 2838 KiB  
Article
BHE+ALBERT-Mixplus: A Distributed Symmetric Approximate Homomorphic Encryption Model for Secure Short-Text Sentiment Classification in Teaching Evaluations
by Jingren Zhang, Siti Sarah Maidin and Deshinta Arrova Dewi
Symmetry 2025, 17(6), 903; https://doi.org/10.3390/sym17060903 - 7 Jun 2025
Viewed by 490
Abstract
This study addresses the sentiment classification of short texts in teaching evaluations. To mitigate concerns regarding data security in cloud-based sentiment analysis and to overcome the limited feature extraction capacity of traditional deep-learning methods, we propose a distributed symmetric approximate homomorphic hybrid sentiment classification model, denoted BHE+ALBERT-Mixplus. To enable homomorphic encryption of non-polynomial functions within the ALBERT-Mixplus architecture (a mixing-and-enhancement variant of ALBERT), we introduce the BHE (BERT-based Homomorphic Encryption) algorithm. The BHE establishes a distributed symmetric approximation workflow, constructing a cloud–user symmetric encryption framework. Within this framework, simplified computations and mathematical approximations are applied to handle non-polynomial operations (e.g., GELU, Softmax, and LayerNorm) under the CKKS homomorphic-encryption scheme. Consequently, the ALBERT-Mixplus model can securely perform classification on encrypted data without compromising utility. To improve feature extraction and enhance prediction accuracy in sentiment classification, ALBERT-Mixplus incorporates two core components: (1) a meta-information extraction layer, which employs a lightweight pre-trained ALBERT model to capture extensive general semantic knowledge and thereby bolster robustness to noise, and (2) a hybrid feature-extraction layer, which fuses a bidirectional gated recurrent unit (BiGRU) with a multi-scale convolutional neural network (MCNN) to capture both global contextual dependencies and fine-grained local semantic features across multiple scales. Together, these layers enrich the model’s deep feature representations. Experimental results on the TAD-2023 and SST-2 datasets demonstrate that BHE+ALBERT-Mixplus achieves competitive improvements in key evaluation metrics compared to mainstream models, despite a slight increase in computational overhead. The proposed framework enables secure analysis of diverse student feedback while preserving data privacy. This allows marginalized student groups to benefit equally from AI-driven insights, thereby embodying the principles of educational equity and inclusive education. Moreover, through its innovative distributed encryption workflow, the model enhances computational efficiency while promoting environmental sustainability by reducing energy consumption and optimizing resource allocation. Full article
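The core obstacle for running transformer layers under CKKS is that GELU, Softmax, and LayerNorm are not polynomials, while CKKS supports only additions and multiplications. A generic way to make such a function encryptable is a low-degree least-squares polynomial fit over the expected input range; the sketch below shows this for GELU only. The degree and fitting range are assumptions rather than the paper's specific approximations.

```python
# Sketch: replace GELU with a low-degree polynomial so it can be evaluated
# homomorphically (CKKS supports additions and multiplications only).
# Degree and fitting range are illustrative assumptions, not the paper's choices.
import numpy as np

def gelu(x):
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

x = np.linspace(-4.0, 4.0, 2001)        # assumed activation range after normalization
coeffs = np.polyfit(x, gelu(x), deg=7)  # least-squares degree-7 approximation
gelu_poly = np.poly1d(coeffs)           # only +, * needed -> CKKS-friendly

max_err = np.max(np.abs(gelu_poly(x) - gelu(x)))
print(f"max absolute error on [-4, 4]: {max_err:.4f}")
```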
(This article belongs to the Section Computer)

21 pages, 667 KiB  
Article
A Stance Detection Model Based on Sentiment Analysis and Toxic Language Detection
by Long Kang, Jiaqi Yao, Ruoshuang Du, Lu Ren, Haifeng Liu and Bo Xu
Electronics 2025, 14(11), 2126; https://doi.org/10.3390/electronics14112126 - 23 May 2025
Viewed by 813
Abstract
In this paper, we present a stance detection model grounded in multi-task learning, specifically designed to address the intricate challenge of text stance analysis within social media comments. This model is structured with an embedding network, an encoder module, a sophisticated multi-task attention mechanism, an ensemble module, and a classification output layer. To augment the performance of stance detection, we employed sentiment analysis and toxicity language detection as auxiliary tasks. The sentiment analysis plays a pivotal role in enabling the model to capture the public opinion inclinations of both individual and collective users. By delving into these inclinations, our model can extract fine-grained stance elements, offering a more nuanced understanding of users’ positions. On the other hand, toxicity language detection aids in modeling the extreme tendencies of social media users towards specific events. It identifies manifestations of hatred, offensiveness, discrimination, and insult, thereby allowing the model to reconstruct users’ genuine stance information from these extreme expressions. Through the synergy of multi-task joint learning, the accuracy and reliability of the stance detection were significantly improved. To validate the efficacy of our proposed model, we selected two hot events as representative cases, one from the Chinese Weibo platform and the other from the English Twitter platform. A series of comprehensive tasks, including developing crawler programs, collecting data, performing data preprocessing, and conducting data annotation, were systematically executed. Subsequently, we applied our model to detect the stances within the comments related to these two events, categorizing them into three classes: support, opposition, and ambiguity. The experimental results demonstrate that our stance detection model, which integrates sentiment analysis and toxicity language detection, substantially improves the detection accuracy, outperforming traditional methods. Full article
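A minimal sketch of the multi-task idea follows: one shared text encoder feeding three classification heads (stance plus the sentiment and toxicity auxiliary tasks) trained with a weighted joint loss. The GRU encoder, dimensions, and loss weights are assumptions; the paper's multi-task attention and ensemble modules are not reproduced here.

```python
# Minimal multi-task sketch: shared encoder, three heads, weighted joint loss.
# Dimensions, loss weights, and the GRU encoder are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskStanceModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.stance_head = nn.Linear(2 * hidden, 3)     # support / opposition / ambiguity
        self.sentiment_head = nn.Linear(2 * hidden, 3)  # auxiliary task 1
        self.toxicity_head = nn.Linear(2 * hidden, 2)   # auxiliary task 2

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))
        pooled = h.mean(dim=1)                          # simple mean pooling over tokens
        return (self.stance_head(pooled),
                self.sentiment_head(pooled),
                self.toxicity_head(pooled))

model = MultiTaskStanceModel()
ce = nn.CrossEntropyLoss()
tokens = torch.randint(1, 10000, (4, 32))               # dummy batch of token ids
stance_y = torch.randint(0, 3, (4,))
senti_y = torch.randint(0, 3, (4,))
tox_y = torch.randint(0, 2, (4,))

stance_p, senti_p, tox_p = model(tokens)
loss = ce(stance_p, stance_y) + 0.5 * ce(senti_p, senti_y) + 0.5 * ce(tox_p, tox_y)
loss.backward()
```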

22 pages, 5294 KiB  
Article
Text-in-Image Enhanced Self-Supervised Alignment Model for Aspect-Based Multimodal Sentiment Analysis on Social Media
by Xuefeng Zhao, Yuxiang Wang and Zhaoman Zhong
Sensors 2025, 25(8), 2553; https://doi.org/10.3390/s25082553 - 17 Apr 2025
Viewed by 742
Abstract
The rapid development of social media has driven the need for opinion mining and sentiment analysis based on multimodal samples. As a fine-grained task within multimodal sentiment analysis, aspect-based multimodal sentiment analysis (ABMSA) enables the accurate and efficient determination of sentiment polarity for aspect-level targets. However, traditional ABMSA methods often perform suboptimally on social media samples, as the images in these samples typically contain embedded text that conventional models overlook. Such text influences sentiment judgment. To address this issue, we propose a text-in-image enhanced self-supervised alignment model (TESAM) that accounts for multimodal information more comprehensively. Specifically, we employed Optical Character Recognition technology to extract embedded text from images and, based on the principle that text-in-image is an integral part of the visual modality, fused it with visual features to obtain more comprehensive image representations. Additionally, we incorporate aspect words to guide the model in disregarding irrelevant semantic features, thereby reducing noise interference. Furthermore, to mitigate the semantic gap between modalities, we propose pre-training the feature extraction module with self-supervised alignment. During this pre-training stage, unimodal semantic embeddings from both modalities are aligned by calculating errors using Euclidean distance and cosine similarity. Experimental results demonstrate that TESAM achieved remarkable performances on three ABMSA benchmarks. These results validate the rationale and effectiveness of our proposed improvements. Full article
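The self-supervised alignment objective described above, pulling paired unimodal embeddings together with both Euclidean distance and cosine similarity, can be sketched in a few lines. The relative weighting of the two terms is an illustrative assumption.

```python
# Sketch of a self-supervised alignment loss between paired text and image
# embeddings, combining Euclidean distance and cosine similarity.
# The relative weight of the two terms is an illustrative assumption.
import torch
import torch.nn.functional as F

def alignment_loss(text_emb, image_emb, cosine_weight=1.0):
    """text_emb, image_emb: (batch, dim) embeddings of paired samples."""
    euclidean = F.pairwise_distance(text_emb, image_emb).mean()
    cosine = F.cosine_similarity(text_emb, image_emb, dim=-1).mean()
    return euclidean + cosine_weight * (1.0 - cosine)   # both terms shrink as modalities align

text_emb = torch.randn(8, 256, requires_grad=True)
image_emb = torch.randn(8, 256)
alignment_loss(text_emb, image_emb).backward()
```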
(This article belongs to the Special Issue Advanced Signal Processing for Affective Computing)

33 pages, 3077 KiB  
Article
Perspective-Based Microblog Summarization
by Chih-Yuan Li, Soon Ae Chun and James Geller
Information 2025, 16(4), 285; https://doi.org/10.3390/info16040285 - 1 Apr 2025
Viewed by 763
Abstract
Social media allows people to express and share a variety of experiences, opinions, beliefs, interpretations, or viewpoints on a single topic. Summarizing a collection of social media posts (microblogs) on one topic may be challenging and can result in an incoherent summary due to multiple perspectives from different users. We introduce a novel approach to microblog summarization, the Multiple-View Summarization Framework (MVSF), designed to efficiently generate multiple summaries from the same social media dataset depending on chosen perspectives and deliver personalized and fine-grained summaries. The MVSF leverages component-of-perspective computing, which can recognize the perspectives expressed in microblogs, such as sentiments, political orientations, or unreliable opinions (fake news). The perspective computing can filter social media data to summarize them according to specific user-selected perspectives. For the summarization methods, our framework implements three extractive summarization methods: Entity-based, Social Signal-based, and Triple-based. We conduct comparative evaluations of MVSF summarizations against state-of-the-art summarization models, including BertSum, SBert, T5, and Bart-Large-CNN, by using a gold-standard BBC news dataset and Rouge scores. Furthermore, we utilize a dataset of 18,047 tweets about COVID-19 vaccines to demonstrate the applications of MVSF. Our contributions include the innovative approach of using user perspectives in summarization methods as a unified framework, capable of generating multiple summaries that reflect different perspectives, in contrast to prior approaches of generating one-size-fits-all summaries for one dataset. The practical implication of MVSF is that it offers users diverse perspectives from social media data. Our prototype web application is also implemented using ChatGPT to show the feasibility of our approach. Full article
(This article belongs to the Special Issue Text Mining: Challenges, Algorithms, Tools and Applications)

25 pages, 1451 KiB  
Article
A Graph Neural Network-Based Context-Aware Framework for Sentiment Analysis Classification in Chinese Microblogs
by Zhesheng Jin and Yunhua Zhang
Mathematics 2025, 13(6), 997; https://doi.org/10.3390/math13060997 - 18 Mar 2025
Cited by 1 | Viewed by 1109
Abstract
Sentiment analysis in Chinese microblogs is challenged by complex syntactic structures and fine-grained sentiment shifts. To address these challenges, a Contextually Enriched Graph Neural Network (CE-GNN) is proposed, integrating self-supervised learning, context-aware sentiment embeddings, and Graph Neural Networks (GNNs) to enhance sentiment classification. First, CE-GNN is pre-trained on a large corpus of unlabeled text through self-supervised learning, where Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) are leveraged to obtain contextualized embeddings. These embeddings are then refined through a context-aware sentiment embedding layer, which is dynamically adjusted based on the surrounding text to improve sentiment sensitivity. Next, syntactic dependencies are captured by Graph Neural Networks (GNNs), where words are represented as nodes and syntactic relationships are denoted as edges. Through this graph-based structure, complex sentence structures, particularly in Chinese, can be interpreted more effectively. Finally, the model is fine-tuned on a labeled dataset, achieving state-of-the-art performance in sentiment classification. Experimental results demonstrate that CE-GNN achieves superior accuracy, with a Macro F-measure of 80.21% and a Micro F-measure of 82.93%. Ablation studies further confirm that each module contributes significantly to the overall performance. Full article
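The graph step in this abstract can be pictured as follows: words become nodes, dependency arcs become (here, undirected) edges, and a graph-convolution layer propagates contextual embeddings along those edges. In the sketch below the example sentence, its hand-written dependency arcs, and the single-layer design are illustrative, not the paper's parser or architecture.

```python
# Sketch of one graph-convolution step over a dependency graph: nodes are words,
# edges are dependency arcs (treated as undirected), features are word embeddings.
# The example sentence and its hand-written arcs are illustrative only.
import torch
import torch.nn as nn

words = ["the", "movie", "was", "surprisingly", "good"]
dep_edges = [(0, 1), (1, 2), (4, 2), (3, 4)]     # hypothetical head-dependent pairs

n, dim = len(words), 64
adj = torch.eye(n)                               # self-loops
for i, j in dep_edges:
    adj[i, j] = adj[j, i] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)         # row-normalized adjacency

x = torch.randn(n, dim)                          # stand-in for contextual embeddings
gcn = nn.Linear(dim, dim)
h = torch.relu(gcn(adj @ x))                     # one GCN layer: aggregate neighbors, transform
print(h.shape)                                   # torch.Size([5, 64])
```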
(This article belongs to the Section E2: Control Theory and Mechanics)

23 pages, 1421 KiB  
Article
EmoBERTa-X: Advanced Emotion Classifier with Multi-Head Attention and DES for Multilabel Emotion Classification
by Farah Hassan Labib, Mazen Elagamy and Sherine Nagy Saleh
Big Data Cogn. Comput. 2025, 9(2), 48; https://doi.org/10.3390/bdcc9020048 - 19 Feb 2025
Cited by 2 | Viewed by 1807
Abstract
The rising prevalence of social media turns them into huge, rich repositories of human emotions. Understanding and categorizing human emotion from social media content is of fundamental importance for many reasons, such as improvement of user experience, monitoring of public sentiment, support for mental health, and enhancement of focused marketing strategies. However, social media text is often unstructured and ambiguous; hence, extracting meaningful emotional information is difficult. Thus, effective emotion classification needs advanced techniques. This article proposes a novel model, EmoBERTa-X, to enhance performance in multilabel emotion classification, particularly in informal and ambiguous social media texts. Attention mechanisms combined with ensemble learning, supported by preprocessing steps, help in avoiding issues such as class imbalance of the dataset, ambiguity in short texts, and the inherent complexities of multilabel classification. The experimental results on the GoEmotions dataset indicate that EmoBERTa-X has outperformed state-of-the-art models on fine-grained emotion-detection tasks in social media expressions with an accuracy increase of 4.32% over some popular approaches. Full article
(This article belongs to the Special Issue Advances in Natural Language Processing and Text Mining)

18 pages, 1720 KiB  
Article
Fine-Grained Sentiment Analysis Based on SSFF-GCN Model
by Yuexu Zhao, Junjie Fang and Shaolong Jin
Systems 2025, 13(2), 111; https://doi.org/10.3390/systems13020111 - 11 Feb 2025
Cited by 1 | Viewed by 1231
Abstract
Research on aspect-based sentiment analysis (ABSA) mostly relies on a single attention mechanism or grammatical semantic information, which makes it less effective in dealing with complex language structures. To address the challenges in fine-grained sentiment analysis tasks, this paper establishes a novel model of syntax and semantics based on feature fusion together with a graph convolutional network (SSFF-GCN), which includes a dual-channel information extraction layer combining syntactic dependency graphs and semantic information, and consists of three important modules: the syntactic feature enhancement module, semantic feature extraction module, and feature fusion module. In the syntactic feature enhancement module, the model uses dependency trees to capture the structural relationship between emotional words and target words and adds a dual affine attention module to enhance syntactic learning ability. In the semantic feature extraction module, aspect-aware attention combined with self-attention is used to extract semantic associations in sentences, which ensures effective capture of long-distance dependency information. The feature fusion module dynamically combines the enhanced syntactic and semantic information through a gated mechanism, thereby enhancing the model’s ability to express emotional features. The empirical results show that the SSFF-GCN model is generally superior to existing models on several publicly available datasets. Full article
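A minimal sketch of the gated fusion step mentioned above: a sigmoid gate computed from the concatenated syntactic and semantic features decides, per dimension, how much of each channel to keep. Feature sizes are illustrative assumptions, and the rest of the SSFF-GCN architecture is omitted.

```python
# Sketch of gated fusion of syntactic and semantic feature vectors:
# a sigmoid gate learned from both channels mixes them per dimension.
# Feature sizes are illustrative assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, h_syn, h_sem):
        g = torch.sigmoid(self.gate(torch.cat([h_syn, h_sem], dim=-1)))
        return g * h_syn + (1.0 - g) * h_sem     # per-dimension mix of the two channels

fusion = GatedFusion()
fused = fusion(torch.randn(4, 256), torch.randn(4, 256))  # (batch, dim)
```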
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)

20 pages, 908 KiB  
Article
Mining Nuanced Weibo Sentiment with Hierarchical Graph Modeling and Self-Supervised Learning
by Chuyang Wang, Jessada Konpang, Adisorn Sirikham and Shasha Tian
Electronics 2025, 14(1), 41; https://doi.org/10.3390/electronics14010041 - 26 Dec 2024
Viewed by 1113
Abstract
Weibo sentiment analysis has gained prominence, particularly during the COVID-19 pandemic, as a means to monitor public emotions and detect emerging mental health trends. However, challenges arise from Weibo’s informal language, nuanced expressions, and stylistic features unique to social media, which complicate the accurate interpretation of sentiments. Existing models often fall short, relying on text-based methods that inadequately capture the rich emotional texture of Weibo posts, and are constrained by single loss functions that limit emotional depth. To address these limitations, we propose a novel framework incorporating a sentiment graph and self-supervised learning. Our approach introduces a “sentiment graph” that leverages both word-to-post and post-to-post relational connections, allowing the model to capture fine-grained sentiment cues and context-dependent meanings. Enhanced by a gated mechanism within the graph, our model selectively filters emotional signals based on intensity and relevance, improving its sensitivity to subtle variations such as sarcasm. Additionally, a self-supervised objective enables the model to generalize beyond labeled data, capturing latent emotional structures within the graph. Through this integration of sentiment graph and self-supervised learning, our approach advances Weibo sentiment analysis, offering a robust method for understanding the complex emotional landscape of social media. Full article
(This article belongs to the Special Issue Artificial Intelligence in Graphics and Images)

23 pages, 1149 KiB  
Article
MGAFN-ISA: Multi-Granularity Attention Fusion Network for Implicit Sentiment Analysis
by Yifan Huo, Ming Liu, Junhong Zheng and Lili He
Electronics 2024, 13(24), 4905; https://doi.org/10.3390/electronics13244905 - 12 Dec 2024
Viewed by 1060
Abstract
Although significant progress has been made in sentiment analysis tasks based on image–text data, existing methods still have limitations in capturing cross-modal correlations and detailed information. To address these issues, we propose a Multi-Granularity Attention Fusion Network for Implicit Sentiment Analysis (MGAFN-ISA). MGAFN-ISA leverages neural networks and attention mechanisms to effectively reduce noise interference between different modalities and to capture distinct, fine-grained visual and textual features. The model includes two key feature extraction modules: a multi-scale attention fusion-based visual feature extractor and a hierarchical attention mechanism-based textual feature extractor, each designed to extract detailed and discriminative visual and textual representations. Additionally, we introduce an image translator engine to produce accurate and detailed image descriptions, further narrowing the semantic gap between the visual and textual modalities. A bidirectional cross-attention mechanism is also incorporated to utilize correlations between fine-grained local regions across modalities, extracting complementary information from heterogeneous visual and textual data. Finally, we design an adaptive multimodal classification module that dynamically adjusts the contribution of each modality through an adaptive gating mechanism. Extensive experimental results demonstrate that MGAFN-ISA achieves a significant performance improvement over nine state-of-the-art methods across multiple public datasets, validating the effectiveness and advancement of our proposed approach. Full article
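The bidirectional cross-attention idea can be sketched with PyTorch's built-in multi-head attention: text tokens attend to image regions and image regions attend back to text tokens, producing two complementary enriched representations. Dimensions, head count, and sequence lengths below are assumptions.

```python
# Sketch of bidirectional cross-attention between textual tokens and visual regions
# using nn.MultiheadAttention. Dimensions and head count are illustrative.
import torch
import torch.nn as nn

dim, heads = 256, 4
text_to_image = nn.MultiheadAttention(dim, heads, batch_first=True)
image_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

text = torch.randn(2, 20, dim)    # (batch, tokens, dim)
image = torch.randn(2, 49, dim)   # (batch, regions, dim), e.g. a 7x7 feature grid

text_enriched, _ = text_to_image(query=text, key=image, value=image)   # text attends to regions
image_enriched, _ = image_to_text(query=image, key=text, value=text)   # regions attend to tokens
# text_enriched and image_enriched carry the complementary cross-modal information
```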
(This article belongs to the Section Artificial Intelligence)

30 pages, 5419 KiB  
Article
Explainable Aspect-Based Sentiment Analysis Using Transformer Models
by Isidoros Perikos and Athanasios Diamantopoulos
Big Data Cogn. Comput. 2024, 8(11), 141; https://doi.org/10.3390/bdcc8110141 - 24 Oct 2024
Cited by 6 | Viewed by 7099
Abstract
Aspect-based sentiment analysis (ABSA) aims to perform a fine-grained analysis of text to identify sentiments and opinions associated with specific aspects. Recently, transformers and large language models have demonstrated exceptional performance in detecting aspects and determining their associated sentiments within text. However, understanding the decision-making processes of transformers remains a significant challenge, as they often operate as black-box models, making it difficult to interpret how they arrive at specific predictions. In this article, we examine the performance of various transformers on ABSA and employ explainability techniques to illustrate their inner decision-making processes. Firstly, we fine-tune several pre-trained transformers, including BERT, RoBERTa, DistilBERT, and XLNet, on an extensive set of data composed of the MAMS, SemEval, and Naver datasets. These datasets consist of over 16,100 complex sentences, each containing several aspects and their corresponding polarities. The models were fine-tuned using optimal hyperparameters, and RoBERTa achieved the highest performance, reporting 89.16% accuracy on MAMS and SemEval and 97.62% on Naver. We implemented five explainability techniques (LIME, SHAP, attention weight visualization, integrated gradients, and Grad-CAM) to illustrate how transformers make predictions and highlight influential words. These techniques can reveal how models use specific words and contextual information to make sentiment predictions, which can improve performance, address biases, and enhance model efficiency and robustness. The findings also point toward combining model-bias analysis with explainability methods, ensuring that explanations highlight potential biases in predictions. Full article
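One of the five techniques, SHAP, can be applied to a fine-tuned transformer classifier in a few lines. The sketch below follows the shap library's standard pattern for text-classification pipelines; the public SST-2 checkpoint stands in for the models fine-tuned in the paper and is only an illustrative choice.

```python
# Sketch: token-level SHAP attributions for a transformer sentiment classifier.
# Follows shap's standard text-pipeline usage; the public SST-2 checkpoint
# stands in for the checkpoints fine-tuned in the paper.
import shap
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    top_k=None,                      # return scores for all classes
)

explainer = shap.Explainer(classifier)
shap_values = explainer(["The food was great but the service was painfully slow."])
print(shap_values[0])                # per-token contributions toward each sentiment class
```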
(This article belongs to the Special Issue Advances in Natural Language Processing and Text Mining)

20 pages, 1391 KiB  
Article
A Hybrid Approach to Dimensional Aspect-Based Sentiment Analysis Using BERT and Large Language Models
by Yice Zhang, Hongling Xu, Delong Zhang and Ruifeng Xu
Electronics 2024, 13(18), 3724; https://doi.org/10.3390/electronics13183724 - 19 Sep 2024
Cited by 4 | Viewed by 3573
Abstract
Dimensional aspect-based sentiment analysis (dimABSA) aims to recognize aspect-level quadruples from reviews, offering a fine-grained sentiment description for user opinions. A quadruple consists of aspect, category, opinion, and sentiment intensity, which is represented using continuous real-valued scores in the valence–arousal dimensions. To address this task, we propose a hybrid approach that integrates the BERT model with a large language model (LLM). Firstly, we develop both the BERT-based and LLM-based methods for dimABSA. The BERT-based method employs a pipeline approach, while the LLM-based method transforms the dimABSA task into a text generation task. Secondly, we evaluate their performance in entity extraction, relation classification, and intensity prediction to determine their advantages. Finally, we devise a hybrid approach to fully utilize their advantages across different scenarios. Experiments demonstrate that the hybrid approach outperforms BERT-based and LLM-based methods, achieving state-of-the-art performance with an F1-score of 41.7% on the quadruple extraction. Full article
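To make the quadruple format concrete, a minimal representation might look like the sketch below. The field names, the 1-9 valence-arousal scale, and the example review are illustrative assumptions, not the shared-task specification.

```python
# Sketch of the dimABSA output structure: one quadruple per opinion, with sentiment
# intensity given as continuous valence-arousal scores. Field names, scales, and the
# example values are illustrative.
from dataclasses import dataclass

@dataclass
class Quadruple:
    aspect: str          # the opinion target mentioned in the review
    category: str        # its predefined aspect category
    opinion: str         # the opinion expression
    valence: float       # how positive/negative, e.g. on a 1-9 scale
    arousal: float       # how calm/excited, e.g. on a 1-9 scale

# "The noodles were wonderfully springy, but the service was slow."
quads = [
    Quadruple("noodles", "food#quality", "wonderfully springy", valence=7.5, arousal=6.0),
    Quadruple("service", "service#general", "slow", valence=3.0, arousal=4.5),
]
```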
(This article belongs to the Special Issue New Advances in Affective Computing)

17 pages, 525 KiB  
Article
Hybrid Graph Neural Network-Based Aspect-Level Sentiment Classification
by Hongyan Zhao, Cheng Cui and Changxing Wu
Electronics 2024, 13(16), 3263; https://doi.org/10.3390/electronics13163263 - 17 Aug 2024
Viewed by 1151
Abstract
Aspect-level sentiment classification has received more and more attention from both academia and industry due to its ability to provide more fine-grained sentiment information. Recent studies have demonstrated that models incorporating dependency syntax information can more effectively capture the aspect-specific context, leading to improved performance. However, existing studies have two shortcomings: (1) they only utilize dependency relations between words, neglecting the types of these dependencies, and (2) they often predict the sentiment polarity of each aspect independently, disregarding the sentiment relationships between multiple aspects in a sentence. To address the above issues, we propose an aspect-level sentiment classification model based on a hybrid graph neural network. The core of our model involves constructing several hybrid graph neural network layers, designed to transfer information among words, between words and aspects, and among aspects. In the process of information transmission, our model takes into account not only dependency relations and their types between words but also sentiment relationships between aspects. Our experimental results based on three commonly used datasets demonstrate that the proposed model achieves a performance that is comparable to or better than recent benchmark methods. Full article
(This article belongs to the Special Issue Advances in Natural Language Processing and Their Applications)