Search Results (2)

Search Parameters:
Keywords = BiCARU

24 pages, 427 KB  
Article
Modular Multi-Task Learning for Emotion-Aware Stance Inference in Online Discourse
by Sio-Kei Im and Ka-Hou Chan
Mathematics 2025, 13(20), 3287; https://doi.org/10.3390/math13203287 - 14 Oct 2025
Viewed by 1191
Abstract
Stance detection on social media is increasingly vital for understanding public opinion, mitigating misinformation, and enhancing digital trust. This study proposes a modular Multi-Task Learning (MTL) framework that jointly models stance detection and sentiment analysis to address the emotional complexity of user-generated content. The architecture integrates a RoBERTa-based shared encoder with BiCARU layers to capture both contextual semantics and sequential dependencies. Stance classification is reformulated into three parallel binary subtasks, while sentiment analysis serves as an auxiliary signal to enrich stance representations. Attention mechanisms and contrastive learning are incorporated to improve interpretability and robustness. Evaluated on the NLPCC2016 Weibo dataset, the proposed model achieves an average F1-score of 0.7886, confirming its competitive performance in emotionally nuanced classification tasks. This approach highlights the value of emotional cues in stance inference and offers a scalable, interpretable solution for secure opinion mining in dynamic online environments.
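
To make the architecture described in this abstract concrete, the following is a minimal sketch of such a multi-task stance/sentiment model, assuming PyTorch and Hugging Face transformers. It is not the authors' implementation: the BiCARU layer is stood in for by an ordinary bidirectional GRU (CARU is a custom recurrent cell), and the "roberta-base" checkpoint, head layout, pooling, and auxiliary loss weight are illustrative assumptions.

import torch
import torch.nn as nn
from transformers import RobertaModel

class StanceSentimentMTL(nn.Module):
    def __init__(self, encoder_name="roberta-base", hidden=256, num_sentiment_classes=3):
        super().__init__()
        # Shared RoBERTa encoder for both tasks.
        self.encoder = RobertaModel.from_pretrained(encoder_name)
        d = self.encoder.config.hidden_size
        # Stand-in for the BiCARU layer: any bidirectional recurrent unit.
        self.birnn = nn.GRU(d, hidden, batch_first=True, bidirectional=True)
        # Stance reformulated as three parallel binary subtasks
        # (e.g. favour-vs-rest, against-vs-rest, neutral-vs-rest).
        self.stance_heads = nn.ModuleList([nn.Linear(2 * hidden, 1) for _ in range(3)])
        # Auxiliary sentiment head sharing the same representation.
        self.sentiment_head = nn.Linear(2 * hidden, num_sentiment_classes)

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.birnn(h)        # contextual + sequential features
        pooled = seq.mean(dim=1)      # simple mean pooling over tokens
        stance_logits = torch.cat([head(pooled) for head in self.stance_heads], dim=-1)
        sentiment_logits = self.sentiment_head(pooled)
        return stance_logits, sentiment_logits

# Joint loss: binary cross-entropy per stance subtask plus a weighted auxiliary
# sentiment term (the weight 0.5 is illustrative, not taken from the paper).
def mtl_loss(stance_logits, stance_targets, sent_logits, sent_targets, aux_w=0.5):
    stance_loss = nn.functional.binary_cross_entropy_with_logits(
        stance_logits, stance_targets.float())
    sent_loss = nn.functional.cross_entropy(sent_logits, sent_targets)
    return stance_loss + aux_w * sent_loss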

11 pages, 1539 KB  
Article
GAT-Based Bi-CARU with Adaptive Feature-Based Transformation for Video Summarisation
by Ka-Hou Chan and Sio-Kei Im
Technologies 2024, 12(8), 126; https://doi.org/10.3390/technologies12080126 - 5 Aug 2024
Cited by 6 | Viewed by 2597
Abstract
Nowadays, video is a common form of social media in our lives. Video summarisation has become an interesting task for information extraction, where the high redundancy of key scenes makes it difficult to retrieve important messages. To address this challenge, this work presents a novel approach called the Graph Attention (GAT)-based bi-directional content-adaptive recurrent unit model for video summarisation. The model makes use of the graph attention approach to transform the visual features of interesting scene(s) from a video. This transformation is achieved by a mechanism called Adaptive Feature-based Transformation (AFT), which extracts the visual features and elevates them to a higher-level representation. We also introduce a new GAT-based attention model that extracts major features from weighted features for information extraction, taking into account the tendency of humans to pay attention to transformations and moving objects. Additionally, we integrate the higher-level visual features obtained from the attention layer with the semantic features processed by Bi-CARU. By combining both visual and semantic information, the proposed work enhances the accuracy of key-scene determination. By addressing the issue of high redundancy among key information and using advanced techniques, our method provides a competitive and efficient way to summarise videos. Experimental results show that our approach outperforms existing state-of-the-art methods in video summarisation.
(This article belongs to the Section Information and Communication Technologies)
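
As an illustration of the pipeline sketched in this abstract, the following is a minimal, hypothetical PyTorch sketch rather than the authors' implementation: a learned projection stands in for Adaptive Feature-based Transformation, a simple single-head attention over a fully connected frame graph stands in for the GAT module, an ordinary bidirectional GRU stands in for Bi-CARU, and all dimensions and the top-15% key-frame selection rule are assumptions.

import torch
import torch.nn as nn

class GraphAttnSummariser(nn.Module):
    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.aft = nn.Linear(feat_dim, hidden)    # lifts frame features to a higher-level space
        self.q = nn.Linear(hidden, hidden)
        self.k = nn.Linear(hidden, hidden)
        self.v = nn.Linear(hidden, hidden)
        self.birnn = nn.GRU(feat_dim, hidden // 2, batch_first=True,
                            bidirectional=True)   # semantic branch (Bi-CARU stand-in)
        self.score = nn.Linear(2 * hidden, 1)     # per-frame importance score

    def forward(self, frame_feats):               # frame_feats: (B, T, feat_dim)
        x = torch.relu(self.aft(frame_feats))     # (B, T, hidden)
        # Attention over all frame pairs, i.e. a fully connected frame graph.
        attn = torch.softmax(
            self.q(x) @ self.k(x).transpose(1, 2) / x.size(-1) ** 0.5, dim=-1)
        visual = attn @ self.v(x)                  # attended visual features (B, T, hidden)
        semantic, _ = self.birnn(frame_feats)      # sequential semantic features (B, T, hidden)
        fused = torch.cat([visual, semantic], dim=-1)   # fuse visual + semantic branches
        return torch.sigmoid(self.score(fused)).squeeze(-1)  # (B, T) frame scores

# Example: score 120 frames of 1024-d features and keep the top 15% as key frames.
model = GraphAttnSummariser()
scores = model(torch.randn(1, 120, 1024))
key_frames = scores.topk(int(0.15 * 120), dim=1).indices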
