Proceeding Paper

Insights into the Emotion Classification of Artificial Intelligence: Evolution, Application, and Obstacles of Emotion Classification †

Department of Informatics, Universitas Amikom Yogyakarta, Yogyakarta 55283, Indonesia
* Author to whom correspondence should be addressed.
Presented at the 8th Eurasian Conference on Educational Innovation 2025, Bali, Indonesia, 7–9 February 2025.
Eng. Proc. 2025, 103(1), 24; https://doi.org/10.3390/engproc2025103024
Published: 3 September 2025

Abstract

In this systematic literature review, we examined the integration of emotional intelligence into artificial intelligence (AI) systems, focusing on advancements, challenges, and opportunities in emotion classification technologies. Accurate emotion recognition in AI holds immense potential in healthcare, the IoT, and education. However, challenges such as computational demands, limited dataset diversity, and real-time deployment complexity remain significant. In this review, we examined emerging solutions such as multimodal data processing, attention mechanisms, and real-time emotion tracking that address these issues. By overcoming these issues, AI systems can enhance human–AI interactions and expand real-world applications. Recommendations for improving accuracy and scalability in emotion-aware AI are provided based on the review results.

1. Introduction

Emotion classification, a subfield of artificial intelligence (AI), has gained significant attention as researchers strive to build systems capable of understanding and responding to human emotions [1]. The ability to interpret emotions from textual, visual, and auditory data presents a critical opportunity for improving human–AI interaction across diverse applications, including healthcare, education, and customer service [2]. However, emotions are inherently complex and nuanced, making it difficult to develop AI models that can accurately capture emotional expressions [3]. Despite technological advancements, challenges such as dataset diversity, real-time application constraints, and computational demands limit the effectiveness of emotion-aware AI systems [4]. Addressing these issues is crucial for the next generation of emotionally intelligent AI [5].
As technology becomes more integrated into everyday life, demand grows for AI systems that interact empathetically with humans [6]. Users expect AI not only to process information but to respond in a human-centric, emotionally aware manner [2]. AI models that provide sensitive, emotionally relevant responses are particularly effective in mental health services and virtual assistants [4]. Furthermore, the increasing complexity of human emotions in multicultural and multilingual environments demands adaptable and generalizable models [7]. This research aims to address these gaps by reviewing the existing literature on emotion classification and identifying the techniques that show the most promise for overcoming these challenges [1].
We conducted a systematic literature review (SLR) on the advancements, limitations, and opportunities in emotion classification within AI systems [8]. By analyzing current approaches, we identified the methods that offer the greatest potential for improving accuracy, scalability, and real-time application of emotion-aware AI [9]. We also explored how emerging technologies such as multimodal data processing and attention mechanisms have been utilized to enhance the emotional intelligence of AI, and we examined how AI can be made inclusive and adaptable to different linguistic and cultural contexts [5]. The results provide a reference for future research.
In this study, we reviewed (i) the datasets and techniques used for emotion classification and their characteristics; (ii) the advantages and disadvantages of current techniques and their impact on performance and applicability; and (iii) the opportunities and challenges in developing emotion recognition algorithms with respect to model scalability, real-time deployment, and cross-cultural adaptability.
The results present the opportunities and challenges associated with emotion classification in AI. By synthesizing recent advancements, they outline how AI systems can become emotionally intelligent enough to suit real-world applications. Furthermore, the shortcomings of current methodologies can be addressed with the provided suggestions for improving the scalability and adaptability of emotion classification systems. Amid the growing body of knowledge in AI, this review offers a foundation for developing more empathetic, human-centered AI technologies that can interact with users in more meaningful ways.

2. Methodology

2.1. Search and Selection Criteria

We used the Scopus database to identify relevant publications related to the emotion classification of AI systems. The keywords included “emotional intelligence”, “emotion classification”, “AI systems”, “real-time applications”, and “multimodal data processing”. Initially, 506 publications were retrieved; after applying the selection criteria, the number was reduced to 79 articles. The criteria restricted results to articles or reviews published in English as open access between 2021 and 2024. This process ensured that the review encompassed the most recent and relevant research on emotion classification in AI for analyzing current trends and advancements.
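A minimal sketch of how a search string matching these keywords and filters could be assembled is shown below. The exact query used in the review is not published, so the Scopus field codes and boolean structure here are an illustrative reconstruction based on standard Scopus syntax.

```python
# Illustrative reconstruction of the Scopus search in Section 2.1.
# The exact query string is not published; field codes and boolean
# structure below are assumptions based on standard Scopus syntax.
KEYWORDS = [
    "emotional intelligence",
    "emotion classification",
    "AI systems",
    "real-time applications",
    "multimodal data processing",
]

query = " OR ".join(f'TITLE-ABS-KEY("{kw}")' for kw in KEYWORDS)
filters = (
    " AND PUBYEAR > 2020 AND PUBYEAR < 2025"    # 2021-2024
    " AND (DOCTYPE(ar) OR DOCTYPE(re))"         # articles or reviews
    " AND LANGUAGE(english) AND OPENACCESS(1)"  # English, open access
)
print(f"({query}){filters}")
```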

2.2. Screening and Eligibility

In the screening and eligibility process, the 79 articles were evaluated based on title and abstract to determine their relevance to this study. The initial screening used emotional intelligence, real-time applications, and multimodal data as keywords. A detailed full-text review was then conducted, focusing on the research methodologies and their alignment with the study’s objectives, which narrowed the pool to 50 eligible articles. Finally, 36 articles were included in the review, selected for their depth of analysis, methodological precision, and direct relevance to the topic. In this final inclusion step, the most significant and insightful contributions were retained (Figure 1).

2.3. Data Extraction

Key information was extracted from each of the 36 eligible articles: author names, year of publication, datasets used in the research, techniques applied for emotion classification, and the main findings. A thorough analysis of strengths and weaknesses was conducted to identify opportunities for improvement and the challenges encountered in each method. This systematic approach ensured an understanding of the current landscape of emotion classification and enabled the comparison and identification of trends and gaps. Focusing on these critical aspects allowed a robust and insightful analysis of state-of-the-art emotion-aware AI systems.
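A minimal sketch of the extraction record this description implies, written as a Python dataclass; the field names are our own illustrative labels, not the authors’ actual coding sheet.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    """One row of the data-extraction sheet described in Section 2.3.
    Field names are illustrative labels, not the authors' template."""
    authors: List[str]
    year: int
    datasets: List[str]        # e.g., ["IEMOCAP", "RAVDESS"]
    techniques: List[str]      # e.g., ["CNN-LSTM", "BERT variant"]
    main_findings: str
    strengths: List[str] = field(default_factory=list)
    weaknesses: List[str] = field(default_factory=list)

# Hypothetical example entry (not an actual reviewed paper):
record = ExtractionRecord(
    authors=["Doe, J.", "Smith, A."],
    year=2023,
    datasets=["IEMOCAP"],
    techniques=["CNN-LSTM"],
    main_findings="Hybrid model improves speech emotion accuracy.",
    strengths=["robust temporal modeling"],
    weaknesses=["high computational cost"],
)
```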

2.4. Quality Assessment

In the quality assessment of the literature, reliability and validity are critical. Each article was evaluated based on the rigor of its research methodology, the clarity in how data and results were presented, and the reproducibility of the outcomes. We prioritized robust and impactful insights to ensure credible research results. By maintaining high standards, the results provide solid and trustworthy information on advancing emotion classification in AI systems.

2.5. Results

The extracted data were used to identify common themes, emerging trends, and research gaps in the emotion classification of AI systems. We identified recurring themes, such as the increasing use of multimodal data and attention mechanisms to improve accuracy, and the challenges posed by real-time deployment and computational costs. Several studies presented models adaptable across diverse languages and datasets, addressing the limitations of previous work. However, significant gaps remain in model scalability and dataset imbalance. Progress has been made in this area, but further exploration is required to advance emotionally intelligent AI systems.

3. Results and Discussion

3.1. Datasets

In emotion classification research, a variety of datasets are necessary to capture emotional expressions through different instruments such as text, audio, visual cues, and physiological signals (Figure 2). These datasets include RAVDESS (version 1.0), Emo-DB (2009 release), IEMOCAP (version 1.0), and the OMG Emotion Dataset (version 1.1), each presenting its own unique set of challenges and characteristics [10,11,12,13]. Text-based datasets, such as SemEval-2018 Task 1 or the GoEmotions dataset (version 1.0), contain annotated texts that express emotions explicitly or implicitly [8,14]. Audiovisual and physiological datasets include facial expressions, voice intonations, and bio-signals that reflect emotional states [15]. The complexity and variety of these datasets reflect the multifaceted nature of human emotions, demanding robust and versatile analytical approaches [16].
One of the major challenges in emotion classification is the inherent subjectivity and variability of emotional expression across different cultures and individuals. This variability leads to discrepancies in how emotions are labeled in the datasets, which in turn affects the training and performance of classification models [17]. Additionally, many datasets suffer from class imbalance, where certain emotions are underrepresented, making it difficult for models to learn effective discriminative features for these categories [18]. To address these issues, techniques such as data augmentation are used to synthetically oversample or modify existing samples, strengthening the underrepresented classes [8].
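As a concrete illustration of the oversampling strategy mentioned above, the following minimal Python sketch duplicates minority-class samples until every emotion label matches the majority count. It is a generic baseline, not the method of any specific reviewed paper; real studies typically use richer augmentation such as signal-level transforms.

```python
import random
from collections import Counter

def random_oversample(samples, labels, seed=0):
    """Duplicate minority-class examples until all emotion classes
    match the majority count. A simple baseline for the class
    imbalance discussed above."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_x, out_y = list(samples), list(labels)
    for cls, n in counts.items():
        pool = [x for x, y in zip(samples, labels) if y == cls]
        extra = rng.choices(pool, k=target - n)  # sample with replacement
        out_x.extend(extra)
        out_y.extend([cls] * (target - n))
    return out_x, out_y

# Toy usage: "fear" is underrepresented relative to "joy".
X = ["t1", "t2", "t3", "t4", "t5"]
y = ["joy", "joy", "joy", "fear", "fear"]
Xb, yb = random_oversample(X, y)
print(Counter(yb))  # Counter({'joy': 3, 'fear': 3})
```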
Deep learning techniques are used to handle the high dimensionality and complexity of multimodal emotion datasets. Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are effective in extracting time-invariant and sequential features from visual and auditory data [12,19]. Long short-term memory (LSTM) networks and gated recurrent units (GRUs) are particularly adept at modeling the temporal dynamics of emotional expressions in speech [20] and video [4,21,22]. These models capture the subtle changes in emotional intensity over time, which are crucial for accurate emotion recognition.
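A minimal PyTorch sketch of the CNN-LSTM pattern described above, assuming log-mel spectrogram input; the layer sizes and eight-class output are illustrative choices, not taken from any reviewed paper.

```python
import torch
import torch.nn as nn

class CNNLSTMEmotion(nn.Module):
    """CNN front-end extracts local spectral features; an LSTM models
    their temporal dynamics, as in the hybrid designs discussed above.
    All hyperparameters here are illustrative."""
    def __init__(self, n_mels=64, hidden=128, n_classes=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),  # halve the time axis
        )
        self.lstm = nn.LSTM(128, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):          # x: (batch, n_mels, time)
        h = self.conv(x)           # (batch, 128, time/2)
        h = h.transpose(1, 2)      # (batch, time/2, 128) for the LSTM
        _, (hn, _) = self.lstm(h)  # hn: (1, batch, hidden)
        return self.head(hn[-1])   # (batch, n_classes) emotion logits

logits = CNNLSTMEmotion()(torch.randn(4, 64, 200))
print(logits.shape)  # torch.Size([4, 8])
```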
Transformers and self-attention mechanisms have also emerged as powerful tools in emotion classification due to their ability to focus on relevant parts of an input sequence without being constrained by the sequence’s temporal structure. This is especially beneficial for datasets where emotional expressions are interspersed with neutral content, allowing the model to dynamically focus on emotionally salient features [4,23,24]. For instance, models such as bidirectional encoder representations from transformers (BERT) and its variants, adapted with self-attention mechanisms, have been successfully applied to textual and multimodal emotion datasets, demonstrating superior performance in capturing the context and intricacies of emotional expressions [4,24,25].
Lastly, hybrid models that integrate different neural network architectures are increasingly common in tackling the diverse challenges presented by emotion classification datasets. For example, combining CNNs [26] with LSTMs [27] or GRUs leverages both spatial and temporal feature extraction to effectively address the complexities of multimodal emotion data [12,18]. Similarly, graph neural networks (GNNs) [1] and attention-based models provide innovative ways to encode relational information and dependencies within data, such as the interaction between different speakers in a conversation or between different modalities within a dataset [2,4]. These advanced techniques enable more nuanced and context-aware models in emotion classification research, underscoring the continuous evolution of methodologies to better understand and interpret human emotions.
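As an illustration of the transformer-based approach discussed above, the snippet below loads a pretrained BERT with a fresh classification head via the Hugging Face Transformers library. The six-label setup and example texts are hypothetical, and fine-tuning on a labeled emotion corpus (e.g., GoEmotions) would be required before the predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pretrained BERT with a new classification head; six emotion labels
# are an illustrative choice, not a fixed standard.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=6)

texts = ["I can't believe we won!", "The meeting was fine, I guess."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits  # (2, 6) emotion scores
print(logits.argmax(dim=-1))        # predicted label indices

# Self-attention lets the model weight emotionally salient tokens
# ("won!") over neutral context, as discussed above.
```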

3.2. Advantages and Disadvantages

Emotion classification techniques have significantly improved accuracy across various modalities, including textual, auditory, and visual data, as shown in Figure 3. These improvements are largely attributed to advanced deep learning models such as CNNs [28] and RNNs, which demonstrate high classification accuracy on complex emotional datasets [12,19]. Additionally, transformer-based models and attention mechanisms have further pushed the boundaries, offering enhanced feature learning and sentiment classification, especially in context-rich environments [22,29]. These methods excel at capturing the subtle nuances of emotional expression, making them highly effective for fine-grained sentiment analysis and emotion recognition.
However, this sophistication comes at the cost of considerable model complexity and computational resources [30]. Advanced emotion classification models are difficult to tune and optimize, especially when adapting to new datasets or different linguistic contexts [10,17,19]. Moreover, mechanisms such as attention layers demand extensive computational resources, resulting in high computational costs [12,13,29]. This complexity not only impacts the scalability of emotion classification systems but also limits their applicability in resource-constrained settings.
The requirement for substantial computational resources is a critical disadvantage that can hinder the deployment of these models in real-time applications or on devices with limited processing capabilities [11,31]. High resource consumption and the need for powerful hardware to manage and process large datasets make these technologies less accessible, particularly for small organizations or individual researchers [32]. This is a significant consideration as the democratization of AI technologies is crucial for broad-based innovation and application.
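To make the resource argument concrete, the sketch below counts parameters and estimates the fp32 memory footprint of a small model; the architecture and numbers are illustrative only.

```python
import torch.nn as nn

def footprint(model: nn.Module) -> str:
    """Rough size estimate: parameter count x 4 bytes for fp32 weights.
    Activations, optimizer state, and framework overhead add more."""
    n = sum(p.numel() for p in model.parameters())
    return f"{n:,} params, ~{n * 4 / 1e6:.1f} MB (fp32)"

# Container used only for parameter counting, not for inference.
# A BERT-base-scale model (~110 M params, ~440 MB fp32) dwarfs this
# compact text classifier, which is why edge deployments favor
# smaller architectures or quantization.
small = nn.Sequential(
    nn.Embedding(30000, 128),  # vocabulary embeddings
    nn.LSTM(128, 128),         # sequential encoder
    nn.Linear(128, 6),         # six-way emotion head
)
print(footprint(small))  # ~4.0 M params, ~15.9 MB
```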
In terms of accuracy, while these models achieve high performance, they often require hyperparameter tuning and extensive training data to reach their full potential [33]. This is particularly challenging when dealing with languages or datasets that are not well represented in training materials, which diminishes the model’s effectiveness in those contexts [4,21]. Even techniques that perform well in one language or dataset often fail to maintain high accuracy elsewhere, especially when code-switching or multilingual data are involved [4,34].
Despite these challenges, ongoing advancements in model design and training techniques are mitigating these drawbacks. For instance, methods to reduce redundancy and complexity are being developed, alongside strategies for more efficient feature extraction [35,36]. These improvements are crucial for enhancing the viability of emotion classification systems across a wider range of applications and devices, ensuring that they remain accurate, adaptable, and less resource-intensive over time [9,37]. These developments point future research toward refining the balance between performance and practicality.

3.3. Opportunities and Challenges

The advancement of emotional intelligence in AI systems presents opportunities to bridge the gap left by previous research limitations, particularly in enhancing model robustness and accuracy across diverse applications, as shown in Figure 4. As AI technologies progress, emotional intelligence significantly improves user interaction and predictive analytics in systems ranging from healthcare to customer service [22,34]. However, achieving this necessitates overcoming the high computational costs often associated with training sophisticated models on large datasets, which remains a formidable challenge [22,36]. Reducing these costs without compromising model performance is crucial for the wider implementation and accessibility of these technologies.
Expanding emotion classification systems to handle diverse and multimodal datasets is another significant opportunity. This expansion enables more robust models that perform well across various languages and cultural contexts, addressing one of the primary criticisms of current systems [8]. Nevertheless, managing and processing these large multimodal datasets is inherently complex and demands substantial computational resources [19,29]. Innovations in data processing and model efficiency are required to harness the full potential of diverse datasets while maintaining manageable computational demands.
Real-time applications of emotion recognition systems, such as in Internet of Things (IoT) devices or real-time monitoring systems, are burgeoning fields with immense potential [11,35]. These applications span security, healthcare, and public safety, providing immediate responses based on emotional assessments. However, deploying these models in real-time environments requires them to be highly efficient and capable of operating under stringent latency requirements [11,12]. Overcoming these operational challenges is essential for the success of real-time emotion recognition applications.
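A minimal latency check of the kind these real-time requirements imply; the toy model and the 50 ms budget are assumed examples, not standards from the reviewed literature.

```python
import time
import torch
import torch.nn as nn

# Toy classifier standing in for an emotion model; sizes are illustrative.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 8)).eval()
x = torch.randn(1, 64)  # one feature vector per inference

with torch.no_grad():
    for _ in range(10):  # warm-up runs to stabilize timings
        model(x)
    runs = 100
    t0 = time.perf_counter()
    for _ in range(runs):
        model(x)
    ms = (time.perf_counter() - t0) / runs * 1e3

BUDGET_MS = 50.0  # assumed per-inference latency budget for illustration
print(f"{ms:.3f} ms per inference; within budget: {ms < BUDGET_MS}")
```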
The use of emotion recognition in educational and industrial monitoring applications showcases the versatility of emotion AI. It is particularly transformative in educational settings, where adapting learning experiences to the emotional states of students can potentially improve learning outcomes [38]. In industry, monitoring the emotional well-being of employees enhances workplace safety and productivity [33]. Yet, these applications must address scalability challenges and the need for models that can adapt to different physical environments and data types, which is particularly demanding [31].
Lastly, the cross-modal and cross-device applicability of emotion recognition systems enables seamless integration across various user interfaces and platforms [18]. This adaptability leads to more personalized and sensitive AI applications that understand and respond to human emotions effectively across different devices and modalities. However, the complexities of balancing multiple features and ensuring the scalability of these systems are substantial [8,17]. Addressing these technical challenges is vital for the development of universally applicable, efficient, and effective emotion recognition systems.

3.4. Research Gaps

Emotion classification algorithms, particularly those based on transformer, RNN, LSTM, GRU, and GNN architectures, are promising but face challenges. For transformers, a significant gap is found in adapting their high computational demands for real-time applications and resource-constrained devices, as their complexity often limits usage outside of high-performance environments. RNN-based models, while adept at capturing sequential data, often struggle with long-term dependencies and vanishing gradients, which hampers their accuracy in complex, emotionally nuanced datasets. LSTM and GRU models address some of these issues but still encounter difficulties in balancing temporal accuracy and computational efficiency in larger datasets, especially in multimodal settings. GNNs offer unique advantages for relational data but face challenges in scaling across multimodal datasets, as they are often limited to specific data types and structures. Each of these architectures needs further development to handle the demands of diverse, large-scale emotional data more effectively.
Moreover, it is necessary to enhance cross-cultural and multilingual adaptability within these algorithms. For transformers, achieving language and cultural generalization without exhaustive retraining on diverse data remains an obstacle, impacting their effectiveness in global applications. RNN, LSTM, and GRU models require improvements to maintain accuracy across low-resource or code-mixed languages, which are common in real-world data but often underrepresented in training. GNNs, while excellent for network-based data, also need methods to better integrate multimodal data and emotional context, ensuring more accurate and context-sensitive predictions. Addressing these gaps in adaptability, scalability, and efficiency is essential for advancing emotion classification algorithms and expanding their impact in practical, culturally diverse, and emotionally rich environments.

4. Conclusions

The findings of this literature review demonstrate the rapid evolution of emotion classification research, with diverse datasets and advanced techniques. Textual, audiovisual, and physiological datasets have been employed to capture emotional expressions, while recent techniques, including transformers, LSTMs, and hybrid neural networks, have improved classification accuracy. Despite these advancements, several challenges remain, such as dataset imbalance, cultural variability, and high computational costs. Techniques such as attention mechanisms and multimodal data processing are promising avenues for overcoming these limitations. Addressing these challenges is crucial for making emotion-aware AI systems more robust and adaptable across different cultural contexts and real-time applications.
By assessing the advantages and disadvantages of emotion classification techniques, their opportunities and challenges have been identified. Advanced deep learning models have proven effective but face high computational demands, limiting their practical deployment. Opportunities for these models continue to expand with the emergence of multimodal datasets and efficient techniques for real-time applications. The scalability and inclusivity of emotion recognition systems must be ensured by tackling the identified challenges. Realizing the potential of emotionally intelligent AI can enhance human interaction and well-being, promoting empathetic technological integration.

Author Contributions

Conceptualization, M.E.H., E.U. and K.K.; methodology, M.E.H. and E.U.; formal analysis, M.E.H.; investigation, M.E.H.; resources, E.U. and A.S.; data curation, M.E.H.; writing—original draft preparation, M.E.H.; writing—review and editing, E.U., K.K. and A.S.; visualization, M.E.H.; supervision, E.U. and K.K.; project administration, E.U. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data supporting the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, C.; Li, P.; Zhang, Y.; Li, N.; Si, Y.; Li, F.; Cao, Z.; Chen, H.; Chen, B.; Yao, D.; et al. Effective Emotion Recognition by Learning Discriminative Graph Topologies in EEG Brain Networks. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 10258–10272. [Google Scholar] [CrossRef]
  2. Zhu, T.; Li, L.; Yang, J.; Zhao, S.; Xiao, X. Multimodal Emotion Classification with Multi-Level Semantic Reasoning Network. IEEE Trans. Multimed. 2023, 25, 6868–6880. [Google Scholar] [CrossRef]
  3. Gao, Q.; Zeng, H.; Li, G.; Tong, T. Graph Reasoning-Based Emotion Recognition Network. IEEE Access 2021, 9, 6488–6497. [Google Scholar] [CrossRef]
  4. Ameer, I.; Sidorov, G.; Gómez-Adorno, H.; Nawab, R.M.A. Multi-Label Emotion Classification on Code-Mixed Text: Data and Methods. IEEE Access 2022, 10, 8779–8789. [Google Scholar] [CrossRef]
  5. Zhu, X.; Liu, G.; Zhao, L.; Rong, W.; Sun, J.; Liu, R. Emotion Classification from Multi-Band Electroencephalogram Data Using Dynamic Simplifying Graph Convolutional Network and Channel Style Recalibration Module. Sensors 2023, 23, 1917. [Google Scholar] [CrossRef]
  6. Das, A.; Hoque, M.M.; Sharif, O.; Dewan, M.A.A.; Siddique, N. TEmoX: Classification of Textual Emotion Using Ensemble of Transformers. IEEE Access 2023, 11, 109803–109818. [Google Scholar] [CrossRef]
  7. Gu, Y.; Wang, Y.; Zhang, H.-R.; Wu, J.; Gu, X. Enhancing Text Classification by Graph Neural Networks with Multi-Granular Topic-Aware Graph. IEEE Access 2023, 11, 20169–20183. [Google Scholar] [CrossRef]
  8. Ahanin, Z.; Ismail, M.A.; Singh, N.S.S.; AL-Ashmori, A. Hybrid Feature Extraction for Multi-Label Emotion Classification in English Text Messages. Sustainability 2023, 15, 12539. [Google Scholar] [CrossRef]
  9. Shirian, A.; Tripathi, S.; Guha, T. Dynamic Emotion Modeling with Learnable Graphs and Graph Inception Network. IEEE Trans. Multimedia 2021, 24, 780–790. [Google Scholar] [CrossRef]
  10. Zhao, Z.; Li, Q.; Zhang, Z.; Cummins, N.; Wang, H.; Tao, J.; Schuller, B.W. Combining a Parallel 2D CNN with a Self-Attention Dilated Residual Network for CTC-based Discrete Speech Emotion Recognition. Neural Netw. 2021, 141, 52–60. [Google Scholar] [CrossRef] [PubMed]
  11. Andayani, F.; Theng, L.B.; Tsun, M.T.; Chua, C. Hybrid LSTM-Transformer Model for Emotion Recognition from Speech Audio Files. IEEE Access 2022, 10, 36018–36027. [Google Scholar] [CrossRef]
  12. Atila, O.; Şengür, A. Attention Guided 3D CNN-LSTM Model for Accurate Speech Based Emotion Recognition. Appl. Acoust. 2021, 182, 108260. [Google Scholar] [CrossRef]
  13. Kollias, D.; Zafeiriou, S. Exploiting Multi-CNN Features in CNN-RNN Based Dimensional Emotion Recognition on the OMG in-the-Wild Dataset. IEEE Trans. Affect. Comput. 2021, 12, 595–606. [Google Scholar] [CrossRef]
  14. Ameer, I.; Bölücü, N.; Sidorov, G.; Can, B. Emotion Classification in Texts Over Graph Neural Networks: Semantic Representation Is Better Than Syntactic. IEEE Access 2023, 11, 56921–56934. [Google Scholar] [CrossRef]
  15. Leem, S.-G.; Fulford, D.; Onnela, J.-P.; Gard, D.; Busso, C. Selective Acoustic Feature Enhancement for Speech Emotion Recognition with Noisy Speech. IEEE/ACM Trans. Audio Speech Lang. Process. 2024, 32, 917–929. [Google Scholar] [CrossRef]
  16. Mavsar, M.; Morimoto, J.; Ude, A. GAN-Based Semi-Supervised Training of LSTM Nets for Intention Recognition in Cooperative Tasks. IEEE Robot. Autom. Lett. 2024, 9, 263–270. [Google Scholar] [CrossRef]
  17. Zhang, J.; Liu, X.; Wang, Z.; Yang, H. Graph-Based Object Semantic Refinement for Visual Emotion Recognition. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 3036–3049. [Google Scholar] [CrossRef]
  18. Hasib, K.M.; Azam, S.; Karim, A.; Al Marouf, A.; Shamrat, F.J.M.; Montaha, S.; Yeo, K.C.; Jonkman, M.; Alhajj, R.; Rokne, J.G. McNn-Lstm: Combining CNN and LSTM to Classify Multi-Class Text in Imbalanced News Data. IEEE Access 2023, 11, 93048–93063. [Google Scholar] [CrossRef]
  19. Jia, X. Music Emotion Classification Method Based on Deep Learning and Improved Attention Mechanism. Comput. Intell. Neurosci. 2022, 2022, 5181899. [Google Scholar] [CrossRef]
  20. Kim, D.-H.; Son, W.-H.; Kwak, S.-S.; Yun, T.-H.; Park, J.-H.; Lee, J.-D. A Hybrid Deep Learning Emotion Classification System Using Multimodal Data. Sensors 2023, 23, 9333. [Google Scholar] [CrossRef]
  21. Huang, F.; Li, X.; Yuan, C.; Zhang, S.; Zhang, J.; Qiao, S. Attention-Emotion-Enhanced Convolutional LSTM for Sentiment Analysis. IEEE Trans. Neural Netw. Learn. Syst. 2022, 33, 4332–4345. [Google Scholar] [CrossRef] [PubMed]
  22. Le, H.-D.; Lee, G.-S.; Kim, S.-H.; Kim, S.; Yang, H.-J. Multi-Label Multimodal Emotion Recognition with Transformer-Based Fusion and Emotion-Level Representation Learning. IEEE Access 2023, 11, 14742–14751. [Google Scholar] [CrossRef]
  23. Zhang, X.; Wu, Z.; Liu, K.; Zhao, Z.; Wang, J.; Wu, C. Text Sentiment Classification Based on BERT Embedding and Sliced Multi-Head Self-Attention Bi-Gru. Sensors 2023, 23, 1481. [Google Scholar] [CrossRef]
  24. Üveges, I.; Ring, O. HunEmBERT: A Fine-Tuned BERT-Model for Classifying Sentiment and Emotion in Political Communication. IEEE Access 2023, 11, 60267–60278. [Google Scholar] [CrossRef]
  25. Chen, L.; Li, M.; Wu, M.; Pedrycz, W.; Hirota, K. Convolutional Features-Based Broad Learning with LSTM for Multidimensional Facial Emotion Recognition in Human–Robot Interaction. IEEE Trans. Syst. Man Cybern. Syst. 2024, 54, 64–75. [Google Scholar] [CrossRef]
  26. Bian, M.; He, G.; Feng, G.; Zhang, X.; Ren, Y. Verifiable Privacy-Preserving Heart Rate Estimation Based on LSTM. IEEE Internet Things J. 2024, 11, 1719–1731. [Google Scholar] [CrossRef]
  27. Cheng, Y.; Sun, H.; Chen, H.; Li, M.; Cai, Y.; Cai, Z.; Huang, J. Sentiment Analysis Using Multi-Head Attention Capsules with Multi-Channel CNN and Bidirectional GRU. IEEE Access 2021, 9, 60383–60395. [Google Scholar] [CrossRef]
  28. Chen, H.; Sun, Y.; Zhang, M.; Zhang, M. Automatic Noise Generation and Reduction for Text Classification. IEEE/ACM Trans. Audio Speech Lang. Process. 2024, 32, 139–150. [Google Scholar] [CrossRef]
  29. Tao, W.; Li, C.; Song, R.; Cheng, J.; Liu, Y.; Wan, F.; Chen, X. EEG-Based Emotion Recognition via Channel-Wise Attention and Self Attention. IEEE Trans. Affect. Comput. 2023, 14, 382–393. [Google Scholar] [CrossRef]
  30. Li, C.; Zhang, Z.; Zhang, X.; Huang, G.; Liu, Y.; Chen, X. EEG-based Emotion Recognition via Transformer Neural Architecture Search. IEEE Trans. Ind. Inform. 2023, 19, 6016–6025. [Google Scholar] [CrossRef]
  31. Yu, W.; Kim, I.Y.; Mechefske, C.K. Analysis of Different RNN Autoencoder Variants for Time Series Classification and Machine Prognostics. Mech. Syst. Signal Process. 2021, 149, 107322. [Google Scholar] [CrossRef]
  32. Machová, K.; Szabóova, M.; Paralič, J.; Mičko, J. Detection of Emotion by Text Analysis Using Machine Learning. Front. Psychol. 2023, 14, 1190326. [Google Scholar] [CrossRef]
  33. Asghar, M.A.; Khan, M.J.; Shahid, H.; Xiong, N.; Mehmood, R.M. Semi-Skipping Layered Gated Unit and Efficient Network: Hybrid Deep Feature Selection Method for Edge Computing in EEG-Based Emotion Classification. IEEE Access 2021, 9, 13378–13389. [Google Scholar] [CrossRef]
  34. Zulqarnain, M.; Ghazali, R.; Hassim, Y.M.M.; Aamir, M. An Enhanced Gated Recurrent Unit with Auto-Encoder for Solving Text Classification Problems. Arab. J. Sci. Eng. 2021, 46, 8953–8967. [Google Scholar] [CrossRef]
  35. Li, M.; Qiu, M.; Kong, W.; Zhu, L.; Ding, Y. Fusion Graph Representation of EEG for Emotion Recognition. Sensors 2023, 23, 1404. [Google Scholar] [CrossRef]
  36. Shen, S.; Fan, J. Emotion Analysis of Ideological and Political Education Using a GRU Deep Neural Network. Front. Psychol. 2022, 13, 908154. [Google Scholar] [CrossRef]
  37. Wang, X.; Tong, Y. Application of an Emotional Classification Model in E-Commerce Text Based on an Improved Transformer Model. PLoS ONE 2021, 16, e0247984. [Google Scholar] [CrossRef]
  38. Khan, P.; Ranjan, P.; Kumar, S. AT2GRU: A Human Emotion Recognition Model with Mitigated Device Heterogeneity. IEEE Trans. Affect. Comput. 2023, 14, 1520–1532. [Google Scholar] [CrossRef]
Figure 1. Number of publications based on year.
Figure 2. Types and number of publications on recent techniques (red) and datasets (blue).
Figure 3. Types and number of publications on the advantages (blue) and disadvantages (red) of recent techniques.
Figure 4. Types and number of publications on opportunities (blue) and challenges (red) of future research.