Search Results (193)

Search Parameters:
Keywords = facial emotion detection

15 pages, 1550 KiB  
Article
Augmented Reality for Learning Algorithms: Evaluation of Its Impact on Students’ Emotions Using Artificial Intelligence
by Mónica Gómez-Ríos, Maximiliano Paredes-Velasco, J. Ángel Velázquez-Iturbide and Miguel Ángel Quiroz Martínez
Appl. Sci. 2025, 15(14), 7745; https://doi.org/10.3390/app15147745 - 10 Jul 2025
Viewed by 146
Abstract
Augmented reality is an educational technology mainly used in disciplines with a strong physical component, such as architecture or engineering. However, its application is much less common in more abstract fields, such as programming and algorithms. Some augmented reality apps for algorithm education exist, but their effect remains insufficiently assessed. In particular, emotions are an important factor for learning, and the emotional impact of augmented reality should be determined. This article investigates the impact of an augmented reality tool for learning Dijkstra’s algorithm on students’ emotions. The investigation uses an artificial intelligence tool that detects emotions in real time through facial recognition. The data captured with this tool show that students’ positive emotions increased significantly, statistically surpassing negative emotions, and that some negative emotions, such as fear, were considerably reduced. The results show the same trend as those obtained with psychometric questionnaires, but both positive and negative emotions registered with questionnaires were significantly greater than those registered with the artificial intelligence tool. The contribution of this article is twofold. Firstly, it reinforces previous findings on the positive emotional impact of augmented reality on students. Secondly, it shows an alignment of different instruments to measure emotions, but to varying degrees.
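The listing does not name the AI tool used for real-time emotion capture. A minimal sketch, assuming the open-source fer package and a webcam feed as stand-ins, of grouping per-frame emotion scores into positive and negative totals over a learning session:

```python
# Hedged sketch: real-time facial emotion capture with the open-source `fer`
# package (a stand-in; the listing does not name the paper's AI tool) and a
# simple positive-vs-negative aggregation over a session.
import cv2
from fer import FER

POSITIVE = {"happy", "surprise"}                  # illustrative grouping, not the paper's
NEGATIVE = {"angry", "disgust", "fear", "sad"}

detector = FER(mtcnn=True)
totals = {"positive": 0.0, "negative": 0.0}

cap = cv2.VideoCapture(0)                         # webcam facing the student
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for face in detector.detect_emotions(frame):
        for emotion, score in face["emotions"].items():
            if emotion in POSITIVE:
                totals["positive"] += score
            elif emotion in NEGATIVE:
                totals["negative"] += score
    if cv2.waitKey(1) & 0xFF == ord("q"):         # stop capture with 'q'
        break
cap.release()
print(totals)
```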

26 pages, 15354 KiB  
Article
Adaptive Neuro-Affective Engagement via Bayesian Feedback Learning in Serious Games for Neurodivergent Children
by Diego Resende Faria and Pedro Paulo da Silva Ayrosa
Appl. Sci. 2025, 15(13), 7532; https://doi.org/10.3390/app15137532 - 4 Jul 2025
Viewed by 299
Abstract
Neuro-Affective Intelligence (NAI) integrates neuroscience, psychology, and artificial intelligence to support neurodivergent children through personalized Child–Machine Interaction (CMI). This paper presents an adaptive neuro-affective system designed to enhance engagement in children with neurodevelopmental disorders through serious games. The proposed framework incorporates real-time biophysical signals—including EEG-based concentration, facial expressions, and in-game performance—to compute a personalized engagement score. We introduce a novel mechanism, Bayesian Immediate Feedback Learning (BIFL), which dynamically selects visual, auditory, or textual stimuli based on real-time neuro-affective feedback. A multimodal CNN-based classifier detects mental states, while a probabilistic ensemble merges affective state classifications derived from facial expressions. A multimodal weighted engagement function continuously updates stimulus–response expectations. The system adapts in real time by selecting the most appropriate cue to support the child’s cognitive and emotional state. Experimental validation with 40 children (ages 6–10) diagnosed with Autism Spectrum Disorder (ASD) and Attention Deficit Hyperactivity Disorder (ADHD) demonstrates the system’s effectiveness in sustaining attention, improving emotional regulation, and increasing overall game engagement. The proposed framework—combining neuro-affective state recognition, multimodal engagement scoring, and BIFL—significantly improved cognitive and emotional outcomes: concentration increased by 22.4%, emotional engagement by 24.8%, and game performance by 32.1%. Statistical analysis confirmed the significance of these improvements (p<0.001, Cohen’s d>1.4). These findings demonstrate the feasibility and impact of probabilistic, multimodal, and neuro-adaptive AI systems in therapeutic and educational applications.
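The abstract does not spell out the BIFL update rule. A minimal sketch, assuming a Beta-Bernoulli posterior per cue type with Thompson sampling and an illustrative weighted engagement score, of how a stimulus could be selected and reinforced from multimodal feedback:

```python
# Hedged sketch of Bayesian feedback over cue types, assuming a Beta-Bernoulli
# model per stimulus and a weighted multimodal engagement score. Weights,
# threshold, and the Thompson-sampling rule are illustrative assumptions,
# not the paper's exact BIFL formulation.
import random

WEIGHTS = {"eeg_concentration": 0.4, "facial_affect": 0.3, "game_performance": 0.3}
ENGAGED_THRESHOLD = 0.6                      # assumed cut-off for a "successful" cue

# Beta(alpha, beta) posterior per stimulus type
posteriors = {"visual": [1, 1], "auditory": [1, 1], "textual": [1, 1]}

def engagement(signals: dict) -> float:
    """Weighted engagement score from normalized [0, 1] signals."""
    return sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)

def select_stimulus() -> str:
    """Thompson sampling: draw from each posterior, pick the best."""
    return max(posteriors, key=lambda s: random.betavariate(*posteriors[s]))

def update(stimulus: str, signals: dict) -> None:
    """Reinforce the chosen stimulus if engagement exceeded the threshold."""
    success = engagement(signals) >= ENGAGED_THRESHOLD
    posteriors[stimulus][0 if success else 1] += 1

# One simulated interaction step
cue = select_stimulus()
update(cue, {"eeg_concentration": 0.7, "facial_affect": 0.5, "game_performance": 0.8})
print(cue, posteriors)
```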

34 pages, 6816 KiB  
Article
Towards an Emotion-Aware Metaverse: A Human-Centric Shipboard Fire Drill Simulator
by Musaab H. Hamed-Ahmed, Diego Ramil-López, Paula Fraga-Lamas and Tiago M. Fernández-Caramés
Technologies 2025, 13(6), 253; https://doi.org/10.3390/technologies13060253 - 17 Jun 2025
Viewed by 385
Abstract
Traditional Extended Reality (XR) and Metaverse applications focus heavily on User Experience (UX) but often overlook the role of emotions in user interaction. This article addresses that gap by presenting an emotion-aware Metaverse application: a Virtual Reality (VR) fire drill simulator for shipboard emergency training. The simulator detects emotions in real time, assessing trainees’ responses under stress to improve learning outcomes. Its architecture incorporates eye-tracking and facial expression analysis via Meta Quest Pro headsets. Two experimental phases were conducted. The first revealed issues like poor navigation and lack of visual guidance. These insights led to an improved second version with a refined User Interface (UI), a real-time task tracker, and clearer visual cues. The results showed that the design improvements reduced task completion times by between 14.18% and 32.72%. Emotional feedback varied, suggesting a need for more immersive elements. Overall, this article provides useful guidelines for creating the next generation of emotion-aware Metaverse applications.
(This article belongs to the Section Information and Communication Technologies)

23 pages, 1664 KiB  
Article
Seeing the Unseen: Real-Time Micro-Expression Recognition with Action Units and GPT-Based Reasoning
by Gabriela Laura Sălăgean, Monica Leba and Andreea Cristina Ionica
Appl. Sci. 2025, 15(12), 6417; https://doi.org/10.3390/app15126417 - 6 Jun 2025
Viewed by 1032
Abstract
This paper presents a real-time system for the detection and classification of facial micro-expressions, evaluated on the CASME II dataset. Micro-expressions are brief and subtle indicators of genuine emotions, posing significant challenges for automatic recognition due to their low intensity, short duration, and inter-subject variability. To address these challenges, the proposed system integrates advanced computer vision techniques, rule-based classification grounded in the Facial Action Coding System, and artificial intelligence components. The architecture employs MediaPipe for facial landmark tracking and action unit extraction, expert rules to resolve common emotional confusions, and deep learning modules for optimized classification. Experimental validation demonstrated a classification accuracy of 93.30% on CASME II, highlighting the effectiveness of the hybrid design. The system also incorporates mechanisms for amplifying weak signals and adapting to new subjects through continuous knowledge updates. These results confirm the advantages of combining domain expertise with AI-driven reasoning to improve micro-expression recognition. The proposed methodology has practical implications for various fields, including clinical psychology, security, marketing, and human-computer interaction, where the accurate interpretation of emotional micro-signals is essential.
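A minimal sketch of the landmark-tracking step, assuming MediaPipe FaceMesh and one illustrative action-unit-style rule (a brow-raise measure); the landmark indices and threshold are assumptions, not the paper's rule set:

```python
# Hedged sketch: MediaPipe FaceMesh landmark tracking with one illustrative
# action-unit-style rule (brow raise, roughly AU1/AU2). Landmark indices and
# the threshold are assumptions for illustration only.
import cv2
import mediapipe as mp

BROW, UPPER_LID, FOREHEAD, CHIN = 105, 159, 10, 152   # approximate mesh indices
RAISE_THRESHOLD = 0.08                                 # illustrative, face-height units

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)

def brow_raise_score(frame_bgr):
    """Return brow-to-lid distance normalized by face height, or None."""
    results = face_mesh.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not results.multi_face_landmarks:
        return None
    lm = results.multi_face_landmarks[0].landmark
    face_height = abs(lm[CHIN].y - lm[FOREHEAD].y)
    return abs(lm[UPPER_LID].y - lm[BROW].y) / face_height

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok and (score := brow_raise_score(frame)) is not None:
    print("brow raise" if score > RAISE_THRESHOLD else "neutral", round(score, 3))
```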

24 pages, 552 KiB  
Review
Ethical Considerations in Emotion Recognition Research
by Darlene Barker, Mukesh Kumar Reddy Tippireddy, Ali Farhan and Bilal Ahmed
Psychol. Int. 2025, 7(2), 43; https://doi.org/10.3390/psycholint7020043 - 29 May 2025
Viewed by 1800
Abstract
The deployment of emotion-recognition technologies expands across the healthcare, education, and gaming sectors to improve human–computer interaction. These systems examine facial expressions together with vocal tone and physiological signals, which include pupil size and electroencephalogram (EEG), to detect emotional states and deliver customized responses. The technology provides benefits through accessibility, responsiveness, and adaptability but generates multiple complex ethical issues. The combination of emotional profiling with biased algorithmic interpretations of culturally diverse expressions and affective data collection without meaningful consent presents major ethical concerns. The increased presence of these systems in classrooms, therapy sessions, and personal devices makes the potential for misuse or misinterpretation more critical. The paper integrates findings from a literature review and initial emotion-recognition studies to create a conceptual framework that prioritizes data dignity, algorithmic accountability, and user agency, addresses these risks, and includes safeguards for participants’ emotional well-being. The framework introduces structural safeguards, including data minimization, adaptive consent mechanisms, and transparent model logic, as a more complete solution than privacy or fairness approaches alone. The authors present functional recommendations that guide developers to create ethically robust systems that align with user principles and regulatory requirements. The development of real-time feedback loops for user awareness should be combined with clear disclosures about data use and participatory design practices. The successful oversight of these systems requires interdisciplinary work between researchers, policymakers, designers, and ethicists. The paper provides practical ethical recommendations for developing affective computing systems that advance the field while maintaining responsible deployment and governance in academic research and industry settings. The findings hold particular importance for high-stakes applications, including healthcare, education, and workplace monitoring systems that use emotion-recognition technology.
(This article belongs to the Section Neuropsychology, Clinical Psychology, and Mental Health)

22 pages, 3864 KiB  
Article
Raspberry Pi-Based Face Recognition Door Lock System
by Seifeldin Sherif Fathy Ali Elnozahy, Senthill C. Pari and Lee Chu Liang
IoT 2025, 6(2), 31; https://doi.org/10.3390/iot6020031 - 20 May 2025
Viewed by 1393
Abstract
Access control systems protect homes and businesses in the continually evolving security industry. This paper designs and implements a Raspberry Pi-based facial recognition door lock system using artificial intelligence and computer vision for reliability, efficiency, and usability. With the Raspberry Pi as its CPU, the system uses facial recognition for authentication. A camera module for real-time image capturing, a relay module for solenoid lock control, and OpenCV for image processing are essential components. The system uses the DeepFace library to detect user emotions and adaptive learning to improve recognition accuracy for approved users. The device also adapts to poor lighting and varying distances, and it sends real-time remote monitoring messages. Key achievements include adaptive facial recognition, which allows the system to improve as it is used, and the seamless integration of real-time notifications and emotion detection. Face recognition worked well in many settings. The modular architecture facilitated hardware–software integration and scalability for various applications. In conclusion, this study created an intelligent facial recognition door lock system using Raspberry Pi hardware and open-source software libraries. The system addresses the limitations of traditional access control and is practical, scalable, and inexpensive, demonstrating biometric technology’s potential in modern security systems.
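A minimal sketch of the door-lock flow described above, assuming DeepFace verification against a stored reference image and a relay driven from a GPIO pin; the pin number, file paths, and unlock duration are assumptions:

```python
# Hedged sketch of the door-lock flow: capture a frame, verify it against a
# stored reference image with DeepFace, and pulse a relay via a GPIO pin.
# Pin number, file paths, and unlock duration are illustrative assumptions.
import time
import cv2
from deepface import DeepFace
import RPi.GPIO as GPIO

RELAY_PIN = 17                            # assumed BCM pin wired to the relay module
REFERENCE_IMG = "authorized_user.jpg"     # assumed enrolled-user photo
UNLOCK_SECONDS = 5

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    cv2.imwrite("attempt.jpg", frame)
    result = DeepFace.verify(img1_path="attempt.jpg", img2_path=REFERENCE_IMG)
    if result["verified"]:
        GPIO.output(RELAY_PIN, GPIO.HIGH)   # energize solenoid lock
        time.sleep(UNLOCK_SECONDS)
        GPIO.output(RELAY_PIN, GPIO.LOW)
GPIO.cleanup()
```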

24 pages, 7075 KiB  
Article
Visual Geometry Group-SwishNet-Based Asymmetric Facial Emotion Recognition for Multi-Face Engagement Detection in Online Learning Environments
by Qiaohong Yao, Mengmeng Wang and Yubin Li
Symmetry 2025, 17(5), 711; https://doi.org/10.3390/sym17050711 - 7 May 2025
Viewed by 565
Abstract
In the contemporary global educational environment, the automatic assessment of students’ online engagement has garnered widespread attention. A substantial number of studies have demonstrated that facial expressions are a crucial indicator for measuring engagement. However, the asymmetry inherent in facial expressions and the varying degrees to which students’ faces deviate from the camera pose significant challenges to accurate emotion recognition in the online learning environment. To address these challenges, this work proposes a novel VGG-SwishNet model, which is based on the VGG-16 model and aims to enhance the recognition of asymmetric facial expressions, thereby improving the reliability of student engagement assessment in online education. The Swish activation function is introduced into the model due to its smoothness and self-gating mechanism. Its smoothness aids in stabilizing gradient updates during backpropagation and facilitates better handling of minor variations in input data. This enables the model to more effectively capture subtle differences and asymmetric variations in facial expressions. Additionally, the self-gating mechanism allows the function to automatically adjust its degree of nonlinearity. This helps the model learn more effective asymmetric feature representations and mitigates the vanishing gradient problem to some extent. The model was then applied to the assessment of engagement, with the results visualized. In terms of performance, the proposed method achieved high recognition accuracy on the JAFFE, KDEF, and CK+ datasets. Specifically, under 80–20% and 10-fold cross-validation (CV) scenarios, the recognition accuracy exceeded 95%. These results indicate that the proposed approach offers higher accuracy and robust stability.
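The exact VGG-SwishNet layout is not given in the abstract. A minimal sketch of the core idea, assuming a torchvision VGG-16 backbone whose ReLU activations are swapped for Swish (SiLU) and a seven-class expression head:

```python
# Hedged sketch of the core VGG-SwishNet idea: a VGG-16 backbone whose ReLU
# activations are replaced with Swish (SiLU). The 7-class head matches the
# usual basic-emotion setup of JAFFE/KDEF/CK+ and is an assumption here.
import torch.nn as nn
from torchvision.models import vgg16

def swap_relu_for_swish(module: nn.Module) -> None:
    """Recursively replace every ReLU in the model with SiLU (Swish)."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.SiLU(inplace=True))
        else:
            swap_relu_for_swish(child)

model = vgg16(weights=None)                      # VGG-16 backbone
swap_relu_for_swish(model)
model.classifier[-1] = nn.Linear(4096, 7)        # 7 facial-expression classes
print(model.features[:4])                        # first conv block now uses SiLU
```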
(This article belongs to the Section Computer)

25 pages, 630 KiB  
Review
Innovative Approaches in Sensory Food Science: From Digital Tools to Virtual Reality
by Fernanda Cosme, Tânia Rocha, Catarina Marques, João Barroso and Alice Vilela
Appl. Sci. 2025, 15(8), 4538; https://doi.org/10.3390/app15084538 - 20 Apr 2025
Cited by 1 | Viewed by 2800
Abstract
The food industry faces growing challenges due to evolving consumer demands, requiring digital technologies to enhance sensory analysis. Innovations such as eye tracking, FaceReader, virtual reality (VR), augmented reality (AR), and artificial intelligence (AI) are transforming consumer behavior research by providing deeper insights into sensory experiences. For instance, FaceReader captures emotional responses to food by analyzing facial expressions, offering valuable data on consumer preferences for taste, texture, and aroma. Together, these technologies provide a comprehensive understanding of the sensory experience, aiding product development and branding. Electronic nose, tongue, and eye technologies also replicate human sensory capabilities, enabling objective and efficient assessment of aroma, taste, and color. The electronic nose (E-nose) detects volatile compounds for aroma evaluation, while the electronic tongue (E-tongue) evaluates taste through electrochemical sensors, ensuring accuracy and consistency in sensory analysis. The electronic eye (E-eye) analyzes food color, supporting quality control processes. These advancements offer rapid, non-invasive, reproducible assessments, benefiting research and industrial applications. By improving the precision and efficiency of sensory analysis, digital tools help enhance product quality and consumer satisfaction in the competitive food industry. This review explores the latest digital methods shaping food sensory research and innovation.

22 pages, 3427 KiB  
Article
A Multimodal Artificial Intelligence Model for Depression Severity Detection Based on Audio and Video Signals
by Liyuan Zhang, Shuai Zhang, Xv Zhang and Yafeng Zhao
Electronics 2025, 14(7), 1464; https://doi.org/10.3390/electronics14071464 - 4 Apr 2025
Viewed by 1328
Abstract
In recent years, artificial intelligence (AI) has increasingly utilized speech and video signals for emotion recognition, facial recognition, and depression detection, playing a crucial role in mental health assessment. However, AI-driven research on detecting depression severity remains limited, and the existing models are often too large for lightweight deployment, restricting their real-time monitoring capabilities, especially in resource-constrained environments. To address these challenges, this study proposes a lightweight and accurate multimodal method for detecting depression severity, aiming to provide effective support for smart healthcare systems. Specifically, we design a multimodal detection network based on speech and video signals, enhancing the recognition of depression severity by optimizing the cross-modal fusion strategy. The model leverages Long Short-Term Memory (LSTM) networks to capture long-term dependencies in speech and visual sequences, effectively extracting dynamic features associated with depression. Considering the behavioral differences of respondents when interacting with human versus robotic interviewers, we train two separate sub-models and fuse their outputs using a Mixture of Experts (MOE) framework capable of modeling uncertainty, thereby suppressing the influence of low-confidence experts. In terms of the loss function, the traditional Mean Squared Error (MSE) is replaced with Negative Log-Likelihood (NLL) to better model prediction uncertainty and enhance robustness. The experimental results show that the improved AI model achieves an accuracy of 83.86% in depression severity recognition. The model requires only 0.468 GFLOPs of computation and has a parameter size of just 0.52 MB, demonstrating its compact size and strong performance. These findings underscore the importance of emotion and facial recognition in AI applications for mental health, offering a promising solution for real-time depression monitoring in resource-limited environments.
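A minimal sketch of the two ingredients named above, assuming a Gaussian NLL training loss and inverse-variance fusion of the two interviewer-specific experts in place of the paper's full MOE; shapes and values are illustrative:

```python
# Hedged sketch: Gaussian negative log-likelihood training and an
# uncertainty-aware fusion of two experts (human- vs. robot-interviewer
# sub-models). Inverse-variance weighting stands in for the paper's MOE;
# the tensor values are illustrative.
import torch
import torch.nn as nn

nll = nn.GaussianNLLLoss()

# Each expert predicts a depression-severity mean and a variance per subject.
mu_a, var_a = torch.tensor([12.0]), torch.tensor([4.0])   # human-interview expert
mu_b, var_b = torch.tensor([15.0]), torch.tensor([9.0])   # robot-interview expert
target = torch.tensor([13.0])

# Training signal for one expert: NLL instead of MSE models its uncertainty.
loss_a = nll(mu_a, target, var_a)

# Fusion: precision-weighted average down-weights the less confident expert.
w_a, w_b = 1.0 / var_a, 1.0 / var_b
fused = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
print(float(loss_a), float(fused))
```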

20 pages, 2239 KiB  
Article
A Novel Lightweight Deep Learning Approach for Drivers’ Facial Expression Detection
by Jia Uddin
Designs 2025, 9(2), 45; https://doi.org/10.3390/designs9020045 - 3 Apr 2025
Cited by 1 | Viewed by 803
Abstract
Drivers’ facial expression recognition systems play a pivotal role in Advanced Driver Assistance Systems (ADASs) by monitoring emotional states and detecting fatigue or distractions in real time. However, deploying such systems in resource-constrained environments like vehicles requires lightweight architectures to ensure real-time performance, efficient model updates, and compatibility with embedded hardware. Smaller models significantly reduce communication overhead in distributed training. For autonomous vehicles, lightweight architectures also minimize the data transfer required for over-the-air updates. Moreover, they are crucial for their deployability on hardware with limited on-chip memory. In this work, we propose a novel Dual Attention Lightweight Deep Learning (DALDL) approach for drivers’ facial expression recognition. The proposed approach combines the SqueezeNext architecture with a Dual Attention Convolution (DAC) block. Our DAC block integrates Hybrid Channel Attention (HCA) and Coordinate Space Attention (CSA) to enhance feature extraction efficiency while maintaining minimal parameter overhead. To evaluate the effectiveness of our architecture, we compare it against two baselines: (a) Vanilla SqueezeNet and (b) AlexNet. Compared with SqueezeNet, DALDL improves accuracy by 7.96% and F1-score by 7.95% on the KMU-FED dataset. On the CK+ dataset, it achieves 8.51% higher accuracy and 8.40% higher F1-score. Against AlexNet, DALDL improves accuracy by 4.34% and F1-score by 4.17% on KMU-FED. Lastly, on CK+, it provides a 5.36% boost in accuracy and a 7.24% increase in F1-score. These results demonstrate that DALDL is a promising solution for efficient and accurate emotion recognition in real-world automotive applications.
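The abstract does not detail the HCA and CSA designs. A minimal sketch of a dual-attention block, assuming an SE-style channel gate combined with a coordinate-style spatial gate, as an illustrative stand-in:

```python
# Hedged sketch of a dual-attention block: an SE-style channel gate plus a
# coordinate-style spatial gate (separate pooling along height and width).
# This is an illustrative stand-in; the paper's exact HCA/CSA designs are
# not reproduced here.
import torch
import torch.nn as nn

class DualAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        hidden = max(channels // reduction, 4)
        # Channel gate (squeeze-and-excitation style)
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, hidden, 1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, 1), nn.Sigmoid(),
        )
        # Coordinate-style gates: one over height, one over width
        self.h_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.w_gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)                 # re-weight channels
        h = x.mean(dim=3, keepdim=True)              # pool along width  -> (N, C, H, 1)
        w = x.mean(dim=2, keepdim=True)              # pool along height -> (N, C, 1, W)
        return x * self.h_gate(h) * self.w_gate(w)   # broadcast spatial gates

block = DualAttention(64)
print(block(torch.randn(1, 64, 48, 48)).shape)       # torch.Size([1, 64, 48, 48])
```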

39 pages, 13137 KiB  
Article
Neural Network-Based Emotion Classification in Medical Robotics: Anticipating Enhanced Human–Robot Interaction in Healthcare
by Waqar Riaz, Jiancheng (Charles) Ji, Khalid Zaman and Gan Zengkang
Electronics 2025, 14(7), 1320; https://doi.org/10.3390/electronics14071320 - 27 Mar 2025
Viewed by 667
Abstract
This study advances artificial intelligence by pioneering the classification of patients’ emotions with a healthcare mobile robot, anticipating human–robot interaction for patients admitted to hospitals or other healthcare environments. It addresses the challenge of accurately classifying patient emotions, a critical factor in understanding patients’ recent moods and situations. We integrate convolutional neural networks (CNNs), recurrent neural networks (RNNs), and multi-layer perceptrons (MLPs) to analyze facial emotions comprehensively. The process begins by deploying a faster region-based convolutional neural network (Faster R-CNN) to swiftly and accurately identify human emotions in real-time and recorded video feeds. This includes advanced feature extraction across three CNN models and innovative fusion techniques that strengthen an improved Inception-V3, which replaces the feature learning module of the improved Faster R-CNN to enhance face detection accuracy in the proposed framework. The datasets were carefully acquired in a simulated environment. Validation on the EMOTIC, CK+, FER-2013, and AffectNet datasets showed accuracy rates of 98.01%, 99.53%, 99.27%, and 96.81%, respectively. These class-wise accuracy rates indicate the framework’s potential to advance medical environments and the intelligent manufacturing of healthcare mobile robots.
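A minimal sketch of the detection step, assuming torchvision's COCO-pretrained Faster R-CNN as a stand-in for the paper's improved, emotion-specific detector:

```python
# Hedged sketch of running a Faster R-CNN detector on a video frame with
# torchvision. The COCO-pretrained model is only a stand-in for the detection
# step; the paper trains its own improved, emotion-specific Faster R-CNN.
import torch
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn,
    FasterRCNN_ResNet50_FPN_Weights,
)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()

frame = torch.rand(3, 480, 640)              # placeholder RGB frame in [0, 1]
with torch.no_grad():
    detections = model([frame])[0]           # dict of boxes, labels, scores

keep = detections["scores"] > 0.8            # confidence threshold (assumed)
print(detections["boxes"][keep], detections["labels"][keep])
```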
(This article belongs to the Special Issue New Advances of Brain-Computer and Human-Robot Interaction)

35 pages, 9232 KiB  
Article
Applying a Convolutional Vision Transformer for Emotion Recognition in Children with Autism: Fusion of Facial Expressions and Speech Features
by Yonggu Wang, Kailin Pan, Yifan Shao, Jiarong Ma and Xiaojuan Li
Appl. Sci. 2025, 15(6), 3083; https://doi.org/10.3390/app15063083 - 12 Mar 2025
Viewed by 1371
Abstract
With advances in digital technology, including deep learning and big data analytics, new methods have been developed for autism diagnosis and intervention. Emotion recognition and the detection of autism in children are prominent subjects in autism research. Previous research has typically used single-modal data to analyze the emotional states of children with autism and has found that the accuracy of recognition algorithms must be improved. Our study creates datasets on the facial and speech emotions of children with autism in their natural states. A convolutional vision transformer-based emotion recognition model is constructed for the two distinct datasets. The findings indicate that the model achieves accuracies of 79.12% and 83.47% for facial expression recognition and Mel spectrogram recognition, respectively. Consequently, we propose a multimodal data fusion strategy for emotion recognition and construct a feature fusion model based on an attention mechanism, which attains a recognition accuracy of 90.73%. Ultimately, by using gradient-weighted class activation mapping, a prediction heat map is produced to visualize facial expressions and speech features under four emotional states. This study offers a technical direction for the use of intelligent perception technology in the realm of special education and enriches the theory of emotional intelligence perception of children with autism.
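A minimal sketch of preparing the speech modality, assuming a log-Mel spectrogram computed with librosa before it is fed to the convolutional vision transformer branch; the file name, sample rate, and n_mels are assumptions:

```python
# Hedged sketch of the speech-modality preprocessing: a log-Mel spectrogram
# computed with librosa. File name, sample rate, and n_mels are illustrative.
import librosa
import numpy as np

y, sr = librosa.load("child_utterance.wav", sr=16000)       # assumed audio clip
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)               # dB-scaled spectrogram
print(log_mel.shape)                                          # (n_mels, time_frames)
```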

26 pages, 5572 KiB  
Article
Leveraging Symmetry and Addressing Asymmetry Challenges for Improved Convolutional Neural Network-Based Facial Emotion Recognition
by Gabriela Laura Sălăgean, Monica Leba and Andreea Cristina Ionica
Symmetry 2025, 17(3), 397; https://doi.org/10.3390/sym17030397 - 6 Mar 2025
Cited by 1 | Viewed by 907
Abstract
This study introduces a custom-designed CNN architecture that extracts robust, multi-level facial features and incorporates preprocessing techniques to correct or reduce asymmetry before classification. The innovative characteristics of this research lie in its integrated approach to overcoming facial asymmetry challenges and enhancing CNN-based emotion recognition. This is complemented by well-known data augmentation strategies—such as vertical flipping and shuffling—that generate symmetric variations in facial images, effectively balancing the dataset and improving recognition accuracy. Additionally, a Loss Weight parameter is used to fine-tune training, thereby optimizing performance across diverse and unbalanced emotion classes. Collectively, these contribute to an efficient, real-time facial emotion recognition system that outperforms traditional CNN models and offers practical benefits for various applications while also addressing the inherent challenges of facial asymmetry in emotion detection. Our experimental results demonstrate superior performance compared to other CNN methods, marking a step forward in applications ranging from human–computer interaction to immersive technologies while also acknowledging privacy and ethical considerations.
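A minimal sketch of the two training ingredients named above, assuming PyTorch flip augmentation and an inverse-frequency loss-weight vector for unbalanced emotion classes; the class counts are illustrative:

```python
# Hedged sketch of flip-based augmentation plus a loss-weight parameter for
# unbalanced emotion classes. Class counts and the weighting rule are
# illustrative assumptions, not the paper's exact values.
import torch
import torch.nn as nn
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomVerticalFlip(p=0.5),     # generates symmetric variations
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ToTensor(),
])

# Inverse-frequency loss weights for seven unbalanced emotion classes
class_counts = torch.tensor([4300., 500., 4800., 7200., 4900., 3200., 5000.])
loss_weights = class_counts.sum() / (len(class_counts) * class_counts)
criterion = nn.CrossEntropyLoss(weight=loss_weights)

logits = torch.randn(8, 7)                    # a dummy batch of predictions
labels = torch.randint(0, 7, (8,))
print(criterion(logits, labels))
```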

15 pages, 974 KiB  
Article
Overcoming Challenges in Video-Based Health Monitoring: Real-World Implementation, Ethics, and Data Considerations
by Simão Ferreira, Catarina Marinheiro, Catarina Mateus, Pedro Pereira Rodrigues, Matilde A. Rodrigues and Nuno Rocha
Sensors 2025, 25(5), 1357; https://doi.org/10.3390/s25051357 - 22 Feb 2025
Viewed by 1208
Abstract
In the context of evolving healthcare technologies, this study investigates the application of AI and machine learning in video-based health monitoring systems, focusing on the challenges and potential of implementing such systems in real-world scenarios, specifically for knowledge workers. The research underscores the criticality of addressing technological, ethical, and practical hurdles in deploying these systems outside controlled laboratory environments. Methodologically, the study spanned three months and employed advanced facial recognition technology embedded in participants’ computing devices to collect physiological metrics such as heart rate, blinking frequency, and emotional states, thereby contributing to a stress detection dataset. This approach ensured data privacy and aligned with ethical standards. The results reveal significant challenges in data collection and processing, including biases in video datasets, the need for high-resolution videos, and the complexities of maintaining data quality and consistency, with 42% of data lost (after adjustments). In conclusion, this research emphasizes the necessity for rigorous, ethical, and technologically adapted methodologies to fully realize the benefits of these systems in diverse healthcare contexts.

20 pages, 4882 KiB  
Article
Empowering Recovery: The T-Rehab System’s Semi-Immersive Approach to Emotional and Physical Well-Being in Tele-Rehabilitation
by Hayette Hadjar, Binh Vu and Matthias Hemmje
Electronics 2025, 14(5), 852; https://doi.org/10.3390/electronics14050852 - 21 Feb 2025
Viewed by 663
Abstract
The T-Rehab System delivers a semi-immersive tele-rehabilitation experience by integrating Affective Computing (AC) through facial expression analysis and contactless heartbeat monitoring. T-Rehab closely monitors patients’ mental health as they engage in a personalized, semi-immersive Virtual Reality (VR) game on a desktop PC, using a webcam with MediaPipe to track their hand movements for interactive exercises, allowing the system to tailor treatment content for increased engagement and comfort. T-Rehab’s evaluation comprises two assessments: system performance and cognitive walkthroughs. The first evaluation focuses on system performance, assessing the tested game, middleware, and facial emotion monitoring to ensure hardware compatibility and effective support for AC, gaming, and tele-rehabilitation. The second evaluation uses cognitive walkthroughs to examine usability, identifying potential issues in emotion detection and tele-rehabilitation. Together, these evaluations provide insights into T-Rehab’s functionality, usability, and impact in supporting both physical rehabilitation and emotional well-being. The thorough integration of technology within T-Rehab ensures a holistic approach to tele-rehabilitation, allowing patients to participate comfortably and efficiently from anywhere. This approach not only improves physical therapy outcomes but also promotes mental resilience, marking an important advance in tele-rehabilitation practice.
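A minimal sketch of the webcam hand-tracking step, assuming MediaPipe Hands and the index fingertip landmark as the control point for the interactive exercises:

```python
# Hedged sketch of webcam hand tracking with MediaPipe Hands, which the
# abstract names for driving the interactive exercises. The landmark used
# (index fingertip, id 8) and the single-hand setting are assumptions.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        tip = results.multi_hand_landmarks[0].landmark[8]   # index fingertip
        print("fingertip (normalized):", round(tip.x, 3), round(tip.y, 3))
```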
