Review | Open Access

27 January 2026

A Comparative Study of Emotion Recognition Systems: From Classical Approaches to Multimodal Large Language Models

1 Faculty of Electronics, Telecommunications and Information Technology, National University of Science and Technology “Politehnica” Bucharest, Bd. Iuliu Maniu 1-3, 061071 Bucharest, Romania
2 Laboratoire SAMOVAR, Télécom SudParis, Institut Polytechnique de Paris, 91000 Paris, France
* Author to whom correspondence should be addressed.
This article belongs to the Section Computing and Artificial Intelligence

Abstract

Emotion recognition in video (ERV) aims to infer human affect from visual, audio, and contextual signals and is increasingly important for interactive and intelligent systems. Over the past decade, ERV has evolved from handcrafted features and task-specific deep learning models toward transformer-based vision–language models and multimodal large language models (MLLMs). This review surveys that evolution, with an emphasis on engineering considerations relevant to real-world deployment. We analyze multimodal fusion strategies, dataset characteristics, and evaluation protocols, highlighting limitations in robustness, bias, and annotation quality under unconstrained conditions. Emerging MLLM-based approaches are examined in terms of performance, reasoning capability, computational cost, and interaction potential. By comparing task-specific models with foundation model approaches, we clarify their respective strengths for resource-constrained versus context-aware applications. Finally, we outline practical research directions toward building robust, efficient, and deployable ERV systems for applied scenarios such as assistive technologies and human–AI interaction.
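To make the "multimodal fusion strategy" concept mentioned above concrete, the following is a minimal, hypothetical sketch of a late-fusion emotion classifier operating on precomputed visual and audio embeddings. The class name, embedding dimensions, and seven-class emotion output are illustrative assumptions for this sketch only and are not taken from the reviewed systems.

```python
# Minimal sketch of late fusion for emotion recognition in video (ERV),
# assuming per-clip visual and audio embeddings have already been extracted.
# All dimensions and names here are illustrative assumptions.
import torch
import torch.nn as nn


class LateFusionERV(nn.Module):
    def __init__(self, visual_dim=512, audio_dim=128, num_emotions=7):
        super().__init__()
        # Each modality is projected to a shared size before fusion.
        self.visual_proj = nn.Linear(visual_dim, 256)
        self.audio_proj = nn.Linear(audio_dim, 256)
        # The concatenated representation is mapped to emotion logits.
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(512, num_emotions),
        )

    def forward(self, visual_emb, audio_emb):
        fused = torch.cat(
            [self.visual_proj(visual_emb), self.audio_proj(audio_emb)], dim=-1
        )
        return self.classifier(fused)


# Example usage with random tensors standing in for real clip embeddings.
model = LateFusionERV()
logits = model(torch.randn(4, 512), torch.randn(4, 128))
print(logits.shape)  # torch.Size([4, 7])
```

Task-specific ERV models typically make this fusion step explicit, whereas MLLM-based approaches fold multimodal integration into a pretrained foundation model, which is one of the trade-offs the review compares.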
