Emotion Recognition and Cognitive Behavior Analysis Based on Sensors

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensing and Imaging".

Deadline for manuscript submissions: 5 July 2025 | Viewed by 7262

Special Issue Editors


Prof. Dr. Valentina Franzoni
Guest Editor
Department of Mathematics and Computer Science, University of Perugia, 06123 Perugia, Italy
Interests: artificial intelligence; emotion recognition; learner behaviour modeling; semantic proximity measures; link prediction; deep learning algorithms

Dr. Claudio Ferrari
Guest Editor
Department of Architecture and Engineering, University of Parma, Parco Area delle Scienze 181/A, Parma, Italy
Interests: computer vision; pattern recognition; machine learning; artificial intelligence

Dr. João Baptista Cardia Neto
Guest Editor
Information Technology Department, São Paulo State Technological College (FATEC), São Paulo 01101-010, SP, Brazil
Interests: 3D face recognition; interpersonal emotion recognition

Special Issue Information

Dear Colleagues,

Emotion recognition is the process of identifying human emotion. People vary widely in how accurately they recognize the emotions of others, and the use of technology to assist them is a relatively nascent research area. Past studies have found that emotion recognition training based on cognitive behavioral analysis improved emotion recognition among individuals with mental disorders. In addition, intelligent methods for human–computer interaction are needed to bridge the communication gap; these draw on natural language processing, speech and vision processing, machine learning, and core reasoning technologies. All of these problems deal with streams of data not only from individual sensors, such as image sensors, biomedical signal sensors, and wearable devices, but also from the fusion of multiple sensors.

This Special Issue is looking for high-quality research contributions in one or more of the following domains:

  • Emotion recognition;
  • Gesture recognition;
  • Cognitive behavior analysis;
  • Speech emotion recognition;
  • Emotional cognition;
  • Facial recognition.

Prof. Dr. Valentina Franzoni
Dr. Claudio Ferrari
Dr. João Baptista Cardia Neto
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research

21 pages, 6196 KiB  
Article
Building a Gender-Bias-Resistant Super Corpus as a Deep Learning Baseline for Speech Emotion Recognition
by Babak Abbaschian and Adel Elmaghraby
Sensors 2025, 25(7), 1991; https://doi.org/10.3390/s25071991 - 22 Mar 2025
Viewed by 306
Abstract
The focus on Speech Emotion Recognition (SER) has dramatically increased in recent years, driven by the need for automatic speech-recognition-based systems and intelligent assistants to enhance user experience by incorporating emotional content. While deep learning techniques have significantly advanced SER systems, their robustness concerning speaker gender and out-of-distribution data has not been thoroughly examined. Furthermore, standards for SER remain rooted in landmark papers from the 2000s, even though modern deep learning architectures can achieve results comparable or superior to the state of the art of that era. In this research, we address these challenges by creating a new super corpus from existing databases, providing a larger pool of samples. We benchmark this dataset using various deep learning architectures, setting a new baseline for the task. Additionally, our experiments reveal that models trained on this super corpus demonstrate superior generalization and accuracy and exhibit lower gender bias compared to models trained on individual databases. We further show that traditional preprocessing techniques, such as denoising and normalization, are insufficient to address inherent biases in the data. However, our data augmentation approach effectively shifts these biases, improving model fairness across gender groups and emotions and, in some cases, fully debiasing the models.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
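The abstract above does not specify how gender bias is measured; one common proxy is the absolute gap in per-group accuracy, which debiasing aims to shrink. A minimal sketch in plain Python (function and variable names are illustrative, not taken from the paper):

```python
from collections import defaultdict

def accuracy_gap(preds, labels, groups):
    """Absolute difference between the highest and lowest per-group
    accuracy (a simple fairness proxy across speaker groups)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    accs = [correct[g] / total[g] for g in total]
    return max(accs) - min(accs)

# Toy example: the model is right on 2/2 samples from group "f"
# and 1/2 samples from group "m", so the gap is 0.5.
preds  = ["happy", "sad", "happy", "angry"]
labels = ["happy", "sad", "happy", "sad"]
groups = ["f", "f", "m", "m"]
print(accuracy_gap(preds, labels, groups))  # 0.5
```

A gap near zero indicates similar accuracy across groups; a model that is "fully debiased" in the sense used above would drive this proxy toward zero for every emotion class.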

30 pages, 8759 KiB  
Article
Identifying Novel Emotions and Wellbeing of Horses from Videos Through Unsupervised Learning
by Aarya Bhave, Emily Kieson, Alina Hafner and Peter A. Gloor
Sensors 2025, 25(3), 859; https://doi.org/10.3390/s25030859 - 31 Jan 2025
Viewed by 715
Abstract
This research applies unsupervised learning on a large original dataset of horses in the wild to identify previously unidentified horse emotions. We construct a novel, high-quality, diverse dataset of 3929 images consisting of five wild horse breeds at different geographical locations worldwide. We base our analysis on the seven Panksepp emotions of mammals "Exploring", "Sadness", "Playing", "Rage", "Fear", "Affectionate" and "Lust", along with one additional emotion "Pain" which has been shown to be highly relevant for horses. We apply the contrastive learning framework MoCo (Momentum Contrast for Unsupervised Visual Representation Learning) on our dataset to predict the seven Panksepp emotions and "Pain" using unsupervised learning. We significantly modify the MoCo framework, building a custom downstream classifier network that connects with a frozen CNN encoder that is pretrained using MoCo. Our method allows the encoder network to learn similarities and differences within image groups on its own without labels. The clusters thus formed are indicative of deeper nuances and complexities within a horse's mood, which may hint at the existence of novel and complex equine emotions.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
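For readers unfamiliar with MoCo, its defining mechanism is a momentum (exponential moving average) update in which a key encoder slowly trails the query encoder; the frozen-encoder-plus-classifier setup described above builds on representations learned this way. A minimal sketch of the update rule, with parameters flattened to plain floats purely for illustration:

```python
def momentum_update(key_params, query_params, m=0.999):
    """MoCo-style EMA update: key <- m * key + (1 - m) * query.
    Real implementations iterate over encoder weight tensors; plain
    floats are used here only to show the arithmetic."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

# With m = 0.5 each key parameter moves halfway toward the query's value.
print(momentum_update([1.0, 0.0], [0.0, 1.0], m=0.5))  # [0.5, 0.5]
```

The large default momentum (m = 0.999 in the original MoCo paper) keeps the key encoder slowly evolving, which stabilizes the dictionary of negative keys used for contrastive learning.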

20 pages, 4313 KiB  
Article
Dynamic Emotion Recognition and Expression Imitation in Neurotypical Adults and Their Associations with Autistic Traits
by Hai-Ting Wang, Jia-Ling Lyu and Sarina Hui-Lin Chien
Sensors 2024, 24(24), 8133; https://doi.org/10.3390/s24248133 - 19 Dec 2024
Viewed by 1450
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by deficits in social interaction and communication. While many studies suggest that individuals with ASD struggle with emotion processing, the association between emotion processing and autistic traits in non-clinical populations is still unclear. We examine whether neurotypical adults' facial emotion recognition and expression imitation are associated with autistic traits. We recruited 32 neurotypical adults; each received two computerized tasks, Dynamic Emotion Recognition and Expression Imitation, and two standardized measures: the Chinese version of the Autism-Spectrum Quotient (AQ) and the Twenty-Item Prosopagnosia Index (PI-20). Results for dynamic emotion recognition showed that happiness had the highest mean accuracy, followed by surprise, sadness, anger, fear, and disgust. For expression imitation, it was easiest to imitate surprise and happiness, followed by disgust, while the accuracy of imitating sadness, anger, and fear was much lower. Importantly, individual AQ scores negatively correlated with emotion recognition accuracy and positively correlated with PI-20. The AQ imagination and communication sub-scores and PI-20 positively correlated with the expression imitation of surprise. In summary, we found a significant link between recognizing emotional expressions and the level of autistic traits in non-clinical populations, supporting the concept of the broader autism phenotype.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)

15 pages, 1670 KiB  
Article
Recognition of Dynamic Emotional Expressions in Children and Adults and Its Associations with Empathy
by Yu-Chen Chiang, Sarina Hui-Lin Chien, Jia-Ling Lyu and Chien-Kai Chang
Sensors 2024, 24(14), 4674; https://doi.org/10.3390/s24144674 - 18 Jul 2024
Cited by 2 | Viewed by 1640
Abstract
The present study investigates emotion recognition in children and adults and its association with EQ and motor empathy. Overall, 58 children (33 aged 5–6 years, 25 aged 7–9 years) and 61 adults (24 young adults, 37 parents) participated in this study. Each participant received an EQ questionnaire and completed the dynamic emotion expression recognition task, where participants were asked to identify four basic emotions (happy, sad, fearful, and angry) from neutral to fully expressed states, and the motor empathy task, where participants' facial muscle activity was recorded. The results showed that "happy" was the easiest expression for all ages; 5- to 6-year-old children performed equally well as adults. The accuracies for "fearful," "angry," and "sad" expressions were significantly lower in children than in adults. For motor empathy, 7- to 9-year-old children exhibited the highest level of facial muscle activity, while the young adults showed the lowest engagement. Importantly, individual EQ scores positively correlated with the motor empathy index in adults but not in children. In sum, our study echoes the previous literature, showing that the identification of negative emotions is still difficult for children aged 5–9 but that this improves in late childhood. Our results also suggest that stronger facial mimicry responses are positively related to a higher level of empathy in adults.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)

27 pages, 689 KiB  
Article
Synthetic Corpus Generation for Deep Learning-Based Translation of Spanish Sign Language
by Marina Perea-Trigo, Celia Botella-López, Miguel Ángel Martínez-del-Amor, Juan Antonio Álvarez-García, Luis Miguel Soria-Morillo and Juan José Vegas-Olmos
Sensors 2024, 24(5), 1472; https://doi.org/10.3390/s24051472 - 24 Feb 2024
Cited by 4 | Viewed by 2174
Abstract
Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To assess the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points in the BLEU-4 metric) although the latter is up to four times faster. Furthermore, the use of pre-trained word embeddings in Spanish enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated synthetic dataset in Spanish named synLSE.
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
