Article

Context-Aware Visual Emotion Recognition Through Hierarchical Fusion of Facial Micro-Features and Scene Semantics

College of Digital Innovation Technology, Rangsit University, 52/347 Muang-Ake Phaholyothin Road, Lak-Hok, Muang, Pathumthani 12000, Thailand
*
Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(24), 13160; https://doi.org/10.3390/app152413160
Submission received: 10 November 2025 / Revised: 1 December 2025 / Accepted: 12 December 2025 / Published: 15 December 2025
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

Visual emotion recognition in unconstrained environments remains challenging, as single-stream deep learning models often fail to capture the localized facial cues and contextual information necessary for accurate classification. This study introduces a hierarchical multi-level feature fusion framework that systematically combines low-level micro-textural features (Local Binary Patterns), mid-level facial cues (Facial Action Units), and high-level scene semantics (Places365) with ResNet-50 global embeddings. Evaluated on the large-scale EmoSet-3.3M dataset, which contains 3.3 million images across eight emotion categories, the framework achieves 74% accuracy and a macro-averaged F1-score of 0.75 in its best configuration (LBP-FAUs-Places365-ResNet), a five-percentage-point improvement over the ResNet-50 baseline. The approach excels at distinguishing high-intensity emotions while maintaining efficient inference (2.2 ms per image, 29 M parameters), and analysis confirms that integrating facial muscle activations with scene context enables nuanced emotional differentiation. These results validate that hierarchical feature integration significantly advances robust, human-aligned visual emotion recognition, making it suitable for real-world Human–Computer Interaction (HCI) and affective computing applications.
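As a rough illustration of the concatenation-style fusion the abstract describes, the sketch below computes a basic 3×3 Local Binary Pattern histogram (the low-level stream) and joins it with stand-in vectors for the FAU, Places365, and ResNet-50 streams. All dimensions (17 FAUs, 365 scene classes, 2048-d ResNet-50 pooling output) and the random placeholder vectors are assumptions for illustration, not the paper's actual implementation:

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 3x3 Local Binary Pattern histogram (low-level micro-texture).
    Each interior pixel is encoded by thresholding its 8 neighbours
    against the centre pixel, yielding an 8-bit code per pixel."""
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    centre = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()  # normalised histogram

def fuse_features(lbp_vec, fau_vec, scene_vec, resnet_vec):
    """Fusion by concatenation of the four feature streams; the joint
    vector would then feed a classifier over the eight emotion classes."""
    return np.concatenate([lbp_vec, fau_vec, scene_vec, resnet_vec])

# Hypothetical inputs: a random 64x64 grayscale image and random
# stand-ins for the mid- and high-level feature extractors' outputs.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
fused = fuse_features(lbp_histogram(img),
                      rng.random(17),      # assumed FAU activation vector
                      rng.random(365),     # assumed Places365 scene scores
                      rng.random(2048))    # assumed ResNet-50 embedding
print(fused.shape)  # (2686,) = 256 + 17 + 365 + 2048
```

Simple concatenation is only the entry point for a hierarchical design; the point of the sketch is that each stream contributes a fixed-length vector that downstream layers can weight jointly.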
Keywords: visual emotion recognition; multi-level feature fusion; facial action units; contextual scene analysis; hierarchical deep learning; affective computing

Share and Cite

MDPI and ACS Style

Yongsiriwit, K.; Chaisiriprasert, P.; Aribarg, T.; Kork, S. Context-Aware Visual Emotion Recognition Through Hierarchical Fusion of Facial Micro-Features and Scene Semantics. Appl. Sci. 2025, 15, 13160. https://doi.org/10.3390/app152413160


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
