Search Results (1)

Search Parameters:
Keywords = film and TV scenes

17 pages, 1678 KiB  
Article
Multitask Learning-Based Affective Prediction for Videos of Films and TV Scenes
by Zhibin Su, Shige Lin, Luyue Zhang, Yiming Feng and Wei Jiang
Appl. Sci. 2024, 14(11), 4391; https://doi.org/10.3390/app14114391 - 22 May 2024
Viewed by 1340
Abstract
Film and TV video scenes contain rich art and design elements such as light and shadow, color, and composition, as well as complex affects. To recognize the fine-grained affects of this art carrier, this paper proposes a multitask affective value prediction model based on an attention mechanism. After comparing the characteristics of different models, a multitask prediction framework based on an improved progressive layered extraction (PLE) architecture (multi-headed attention and factor correlation-based PLE), incorporating a multi-headed self-attention mechanism and correlation analysis of affective factors, is constructed. Both the dynamic and static features of a video are fused as input, while the regression of fine-grained affects and the classification of whether a character appears in a video are designed as separate training tasks. Considering the correlation between different affects, we propose a loss function based on association constraints, which effectively solves the problem of training balance within tasks. Experimental results on a self-built video dataset show that the algorithm fully exploits the complementary advantages of the different features and improves prediction accuracy, making it well suited to fine-grained affect mining of film and TV scenes.
(This article belongs to the Special Issue Application of Artificial Intelligence in Visual Processing)
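The abstract describes the architecture only at a high level. As a purely illustrative aid, the sketch below shows what a PLE-style multitask model with multi-headed self-attention over fused video features, a regression head for fine-grained affects, a classification head for character presence, and a correlation-constrained loss could look like in PyTorch. Every module name, dimension, and the exact form of the constraint term is an assumption for illustration, not the authors' implementation.

```python
# Hypothetical sketch only: a PLE-style multitask network with multi-headed
# self-attention, one affect-regression head, one character-presence head,
# and a correlation-based penalty term. Not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PLEAffectModel(nn.Module):
    def __init__(self, feat_dim=512, expert_dim=256, n_affects=8):
        super().__init__()
        # Multi-headed self-attention over the fused static/dynamic feature tokens.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=8, batch_first=True)
        # PLE-style experts: one shared, one per task (a single extraction layer here).
        self.shared_expert = nn.Sequential(nn.Linear(feat_dim, expert_dim), nn.ReLU())
        self.reg_expert = nn.Sequential(nn.Linear(feat_dim, expert_dim), nn.ReLU())
        self.cls_expert = nn.Sequential(nn.Linear(feat_dim, expert_dim), nn.ReLU())
        # Task-specific gates mixing the shared and task-specific experts.
        self.reg_gate = nn.Linear(feat_dim, 2)
        self.cls_gate = nn.Linear(feat_dim, 2)
        # Task towers: affect regression and character-presence classification.
        self.reg_head = nn.Linear(expert_dim, n_affects)
        self.cls_head = nn.Linear(expert_dim, 1)

    def forward(self, tokens):
        # tokens: (batch, seq_len, feat_dim) fused static + dynamic features.
        attended, _ = self.attn(tokens, tokens, tokens)
        x = attended.mean(dim=1)                   # pooled scene representation
        shared = self.shared_expert(x)
        reg, cls = self.reg_expert(x), self.cls_expert(x)
        w_r = F.softmax(self.reg_gate(x), dim=-1)  # gate over (shared, task) experts
        w_c = F.softmax(self.cls_gate(x), dim=-1)
        reg_in = w_r[:, :1] * shared + w_r[:, 1:] * reg
        cls_in = w_c[:, :1] * shared + w_c[:, 1:] * cls
        return self.reg_head(reg_in), self.cls_head(cls_in).squeeze(-1)


def multitask_loss(affect_pred, affect_true, char_logit, char_true, corr_prior, lam=0.1):
    """Regression + classification loss with a correlation-constraint term.

    corr_prior: (n_affects, n_affects) affect correlation matrix estimated from
    the training labels; the penalty nudges the correlations of predicted affects
    toward it. This is one plausible reading of a "loss based on association
    constraints", not necessarily the form used in the paper.
    """
    l_reg = F.mse_loss(affect_pred, affect_true)
    l_cls = F.binary_cross_entropy_with_logits(char_logit, char_true.float())
    centered = affect_pred - affect_pred.mean(dim=0, keepdim=True)
    cov = centered.T @ centered / max(affect_pred.shape[0] - 1, 1)
    std = cov.diagonal().clamp_min(1e-6).sqrt()
    corr_pred = cov / (std[:, None] * std[None, :])
    l_corr = F.mse_loss(corr_pred, corr_prior)
    return l_reg + l_cls + lam * l_corr
```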