Search Results (5)

Search Parameters: Keywords = movie highlights’ extraction

20 pages, 2612 KiB  
Article
Extracting Implicit User Preferences in Conversational Recommender Systems Using Large Language Models
by Woo-Seok Kim, Seongho Lim, Gun-Woo Kim and Sang-Min Choi
Mathematics 2025, 13(2), 221; https://doi.org/10.3390/math13020221 - 10 Jan 2025
Viewed by 2361
Abstract
Conversational recommender systems (CRSs) have garnered increasing attention for their ability to provide personalized recommendations through natural language interactions. Although large language models (LLMs) have shown potential in recommendation systems owing to their superior language understanding and reasoning capabilities, extracting and utilizing implicit user preferences from conversations remains a formidable challenge. This paper proposes a method that leverages LLMs to extract implicit preferences and explicitly incorporate them into the recommendation process. Initially, LLMs identify implicit user preferences from conversations, which are then refined into fine-grained numerical values using a BERT-based multi-label classifier to enhance recommendation precision. The proposed approach is validated through experiments on three comprehensive datasets: the Reddit Movie Dataset (8413 dialogues), Inspired (825 dialogues), and ReDial (2311 dialogues). Results show that our approach considerably outperforms traditional CRS methods, achieving a 23.3% improvement in Recall@20 on the ReDial dataset and a 7.2% average improvement in recommendation accuracy across all datasets with GPT-3.5-turbo and GPT-4. These findings highlight the potential of using LLMs to extract and utilize implicit conversational information, effectively enhancing the quality of recommendations in CRSs.
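
The pipeline described above is two-stage: an LLM first summarizes the preferences implied by a conversation, and a BERT-based multi-label classifier then turns that summary into fine-grained numerical scores. A minimal sketch of that idea follows; the model names, prompt, and genre label set are illustrative assumptions, not the authors' implementation.

import torch
from openai import OpenAI
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical, coarse preference label set used only for this sketch.
GENRE_LABELS = ["action", "comedy", "drama", "horror", "romance", "sci-fi"]

def extract_implicit_preferences(dialogue: str) -> str:
    """Stage 1: ask an LLM to summarize preferences the user never states explicitly."""
    client = OpenAI()  # expects OPENAI_API_KEY in the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the movie preferences implied by this conversation."},
            {"role": "user", "content": dialogue},
        ],
    )
    return response.choices[0].message.content

def preference_scores(preference_text: str) -> dict:
    """Stage 2: map the extracted preference text to per-genre scores with a multi-label BERT head."""
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=len(GENRE_LABELS),
        problem_type="multi_label_classification",
    )  # in practice this head would be fine-tuned on labeled dialogues
    inputs = tokenizer(preference_text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return dict(zip(GENRE_LABELS, torch.sigmoid(logits).squeeze(0).tolist()))

if __name__ == "__main__":
    chat = "User: I loved Blade Runner, but slow pacing bores me. Bot: Noted!"
    print(preference_scores(extract_implicit_preferences(chat)))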

26 pages, 2453 KiB  
Article
AQSA: Aspect-Based Quality Sentiment Analysis for Multi-Labeling with Improved ResNet Hybrid Algorithm
by Muhammad Irfan, Nasir Ayub, Qazi Arbab Ahmed, Saifur Rahman, Muhammad Salman Bashir, Grzegorz Nowakowski, Samar M. Alqhtani and Marek Sieja
Electronics 2023, 12(6), 1298; https://doi.org/10.3390/electronics12061298 - 8 Mar 2023
Cited by 9 | Viewed by 3334
Abstract
Sentiment analysis (SA) is an active area of study in text mining. SA is the computational handling of a text’s views, emotions, subjectivity, and subjective nature. Researchers realized that generating a generic sentiment from textual material was inadequate, so SA was developed to extract aspect-level expressions from text. Multi-labeling based on aspect-specific data can address the problem of extracting emotional aspects. This article proposes a swarm-based hybrid model, residual networks with sand cat swarm optimization (ResNet-SCSO), a novel method for increasing the precision and diversity of multi-label text learning. Contrary to existing multi-label training approaches, ResNet-SCSO emphasizes both the diversity and the accuracy of multi-labeling methodologies. Five distinct datasets were analyzed (movies, research articles, medical, birds, and proteins). First, we applied preprocessing to obtain accurate and improved data. Second, we used GloVe and TF-IDF to extract features. Third, word associations were created using the word2vec method. The enhanced data were then used for training and validating the ResNet model (tuned with SCSO). We tested the accuracy of ResNet-SCSO on the research article, medical, bird, movie, and protein datasets using the aspect-based multi-labeling method; the accuracy was 95%, 96%, 97%, 92%, and 96%, respectively. On multi-label datasets of varying dimensions, ResNet-SCSO is significantly better than other commonly used techniques. Experimental findings confirm the implemented strategy’s success compared with existing benchmark methods.
(This article belongs to the Special Issue Artificial Intelligence Technologies and Applications)
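
As a rough illustration of the pipeline outlined in the abstract (TF-IDF features feeding a residual network trained for multi-label prediction), here is a minimal sketch. The toy data, fixed hyperparameters, and the omission of the SCSO tuning step are all simplifications; this is not the authors' ResNet-SCSO model.

import torch
import torch.nn as nn
from sklearn.feature_extraction.text import TfidfVectorizer

class ResidualBlock(nn.Module):
    """Two linear layers with a skip connection, the basic residual unit."""
    def __init__(self, dim: int):
        super().__init__()
        self.fc1, self.fc2, self.act = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.ReLU()

    def forward(self, x):
        return self.act(x + self.fc2(self.act(self.fc1(x))))

class MultiLabelResNet(nn.Module):
    """Projects TF-IDF features through residual blocks and outputs one logit per label."""
    def __init__(self, in_dim: int, n_labels: int, hidden: int = 64):
        super().__init__()
        self.proj = nn.Linear(in_dim, hidden)
        self.blocks = nn.Sequential(ResidualBlock(hidden), ResidualBlock(hidden))
        self.head = nn.Linear(hidden, n_labels)

    def forward(self, x):
        return self.head(self.blocks(self.proj(x)))

# Toy reviews with made-up aspect labels (e.g., plot, sound, acting); for illustration only.
texts = ["the plot was gripping but the sound mixing was poor",
         "brilliant acting and a memorable score, forgettable plot"]
labels = torch.tensor([[1.0, 0.0, 1.0], [1.0, 1.0, 0.0]])

features = torch.tensor(TfidfVectorizer().fit_transform(texts).toarray(), dtype=torch.float32)
model = MultiLabelResNet(features.shape[1], labels.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)  # SCSO tuning replaced by fixed settings
loss_fn = nn.BCEWithLogitsLoss()  # standard multi-label objective

for _ in range(100):
    optimizer.zero_grad()
    loss_fn(model(features), labels).backward()
    optimizer.step()
print(torch.sigmoid(model(features)))  # per-aspect probabilities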

15 pages, 1655 KiB  
Article
Improving Graph-Based Movie Recommender System Using Cinematic Experience
by CheonSol Lee, DongHee Han, Keejun Han and Mun Yi
Appl. Sci. 2022, 12(3), 1493; https://doi.org/10.3390/app12031493 - 29 Jan 2022
Cited by 25 | Viewed by 7586
Abstract
With the advent of many movie content platforms, users face a flood of content and consequent difficulties in selecting appropriate movie titles. Although much research has been conducted in developing effective recommender systems to provide personalized recommendations based on customers’ past preferences and behaviors, not much attention has been paid to leveraging users’ sentiments and emotions together. In this study, we built a new graph-based movie recommender system that utilized sentiment and emotion information along with user ratings, and evaluated its performance in comparison to well-known conventional models and state-of-the-art graph-based models. The sentiment and emotion information were extracted using a fine-tuned BERT model. We used a Kaggle dataset created by crawling movies’ meta-data and review data from the Rotten Tomatoes website and Amazon product data. The study results show that the proposed IGMC-based models coupled with emotion and sentiment are superior to the compared models. The findings highlight the significance of using sentiment and emotion information for movie recommendation.
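
The core idea, attaching sentiment signals extracted from review text to the user-movie rating graph before feeding it to a graph model, can be sketched as below. The sentiment model, the toy reviews, and the edge-feature layout are assumptions; the IGMC-based recommender itself is not reproduced here.

import torch
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # a BERT-style classifier fine-tuned for sentiment

# (user_id, movie_id, rating, review_text): toy interaction records for illustration.
interactions = [
    (0, 10, 4.0, "A moving story with stunning cinematography."),
    (1, 10, 2.0, "Beautiful visuals, but the pacing bored me."),
]

edge_index, edge_attr = [], []
for user, movie, rating, review in interactions:
    result = sentiment(review)[0]
    signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
    edge_index.append([user, movie])
    edge_attr.append([rating, signed])  # rating plus signed sentiment as edge features

edge_index = torch.tensor(edge_index).t()  # shape [2, num_edges], as graph libraries expect
edge_attr = torch.tensor(edge_attr)        # shape [num_edges, 2]
print(edge_index)
print(edge_attr)                           # ready to feed a graph model such as IGMC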

25 pages, 1078 KiB  
Article
AttendAffectNet–Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention
by Ha Thi Phuong Thao, B T Balamurali, Gemma Roig and Dorien Herremans
Sensors 2021, 21(24), 8356; https://doi.org/10.3390/s21248356 - 14 Dec 2021
Cited by 13 | Viewed by 4831
Abstract
In this paper, we tackle the problem of predicting the affective responses of movie viewers based on the content of the movies. Current studies on this topic focus on video representation learning and fusion techniques to combine the extracted features for predicting affect. Yet, these approaches typically ignore the correlation between multiple modality inputs as well as the correlation between temporal inputs (i.e., sequential features). To capture these correlations, we propose a neural network architecture, AttendAffectNet (AAN), that uses the self-attention mechanism to predict the emotions of movie viewers from different input modalities. In particular, visual, audio, and text features are considered for predicting emotions, expressed in terms of valence and arousal. We analyze three variants of the proposed AAN: Feature AAN, Temporal AAN, and Mixed AAN. The Feature AAN applies the self-attention mechanism to the features extracted from the different modalities (video, audio, and movie subtitles) of a whole movie, thereby capturing the relationships between them. The Temporal AAN takes the time domain of the movies and the sequential dependency of affective responses into account: self-attention is applied to the concatenated (multimodal) feature vectors representing subsequent movie segments. The Mixed AAN combines the strong points of the Feature AAN and the Temporal AAN by applying self-attention first to the feature vectors obtained from the different modalities of each movie segment and then to the feature representations of all subsequent (temporal) movie segments. We extensively trained and validated the proposed AAN on both the MediaEval 2016 dataset for the Emotional Impact of Movies Task and the extended COGNIMUSE dataset. Our experiments demonstrate that audio features play a more influential role than features extracted from video and movie subtitles when predicting the emotions of movie viewers on these datasets. Models that use all visual, audio, and text features simultaneously as inputs performed better than those using features extracted from each modality separately. In addition, the Feature AAN outperformed the other AAN variants on the above-mentioned datasets, highlighting the importance of taking different features as context to one another when fusing them. The Feature AAN also performed better than the baseline models when predicting the valence dimension.
(This article belongs to the Special Issue Sensor Based Multi-Modal Emotion Recognition)
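
A simplified sketch of the Feature AAN idea (one token per modality, self-attention across modalities, then a valence/arousal regression head) follows. The feature dimensions, shared projection size, and mean pooling are assumptions rather than the authors' configuration.

import torch
import torch.nn as nn

class FeatureSelfAttentionFusion(nn.Module):
    """Treats each modality's feature vector as one token and lets modalities attend to each other."""
    def __init__(self, dims: dict, d_model: int = 128, heads: int = 4):
        super().__init__()
        # project each modality's features into a shared space
        self.projections = nn.ModuleDict({m: nn.Linear(d, d_model) for m, d in dims.items()})
        self.attention = nn.MultiheadAttention(d_model, heads, batch_first=True)
        self.head = nn.Linear(d_model, 2)  # regress valence and arousal

    def forward(self, features: dict) -> torch.Tensor:
        # one "token" per modality -> shape [batch, n_modalities, d_model]
        tokens = torch.stack([self.projections[m](x) for m, x in features.items()], dim=1)
        attended, _ = self.attention(tokens, tokens, tokens)  # self-attention across modalities
        return self.head(attended.mean(dim=1))                # pool modalities, then predict

# Assumed feature sizes for video, audio, and subtitle embeddings (placeholders).
model = FeatureSelfAttentionFusion({"video": 512, "audio": 128, "text": 300})
batch = {"video": torch.randn(4, 512), "audio": torch.randn(4, 128), "text": torch.randn(4, 300)}
print(model(batch).shape)  # torch.Size([4, 2]) -> (valence, arousal) per movie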

17 pages, 1410 KiB  
Article
Two-Way Affective Modeling for Hidden Movie Highlights’ Extraction
by Zheng Wang, Xinyu Yan, Wei Jiang and Meijun Sun
Sensors 2018, 18(12), 4241; https://doi.org/10.3390/s18124241 - 3 Dec 2018
Viewed by 3963
Abstract
Movie highlights are composed of video segments that induce a steady increase of the audience’s excitement. Automatic movie highlights’ extraction plays an important role in content analysis, ranking, indexing, and trailer production. To address this challenging problem, previous work suggested a direct mapping from low-level features to high-level perceptual categories. However, it only considered highlights as intense scenes, such as fighting, shooting, and explosions. Many hidden highlights are ignored because their low-level feature values are too low. Driven by cognitive psychology analysis, combined top-down and bottom-up processing is utilized to derive the proposed two-way excitement model. Under the criteria of global sensitivity and local abnormality, middle-level features are extracted in excitement modeling to bridge the gap between the feature space and the high-level perceptual space. To validate the proposed approach, a group of well-known movies covering several typical types is employed. Quantitative assessment using the determined excitement levels has indicated that the proposed method produces promising results in movie highlights’ extraction, even if the response in the low-level audio-visual feature space is low.
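
As a toy illustration of the highlight criterion in the abstract (segments over which an excitement curve rises steadily), consider the following sketch. The feature weights, smoothing window, and run-length threshold are assumptions; the paper's two-way top-down/bottom-up excitement model is far richer.

import numpy as np

def excitement_curve(motion, loudness, w_motion=0.5, w_loudness=0.5, window=3):
    """Combine normalized per-segment features and smooth them with a moving average."""
    motion = (motion - motion.min()) / (motion.max() - motion.min() + 1e-8)
    loudness = (loudness - loudness.min()) / (loudness.max() - loudness.min() + 1e-8)
    raw = w_motion * motion + w_loudness * loudness
    return np.convolve(raw, np.ones(window) / window, mode="same")

def highlight_segments(curve, min_run=3):
    """Return (start, end) index ranges where excitement rises for at least min_run segments."""
    runs, start = [], None
    for i in range(1, len(curve)):
        if curve[i] > curve[i - 1]:
            start = i - 1 if start is None else start
        else:
            if start is not None and i - start >= min_run:
                runs.append((start, i))
            start = None
    if start is not None and len(curve) - start >= min_run:
        runs.append((start, len(curve)))
    return runs

# Random stand-ins for per-segment motion and loudness features.
rng = np.random.default_rng(0)
motion, loudness = rng.random(20), rng.random(20)
print(highlight_segments(excitement_curve(motion, loudness)))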