Search Results (298)

Search Parameters:
Keywords = facial expression processing

22 pages, 1329 KB  
Article
Anxiety-Related Modulation of Early Neural Responses to Task-Irrelevant Emotional Faces
by Eligiusz Wronka
Brain Sci. 2026, 16(1), 26; https://doi.org/10.3390/brainsci16010026 - 25 Dec 2025
Viewed by 14
Abstract
Objectives: The purpose of the study was to test the hypothesis that high anxiety is associated with biased processing of threat-related stimuli and that anxious individuals may be particularly sensitive to facial expressions of fear or anger. In addition, these effects may result from a specific pattern occurring in the early stages of visual information processing. Methods: Event-Related Potentials (ERPs) were recorded in response to task-irrelevant pictures of faces presented in either an upright or inverted position in two groups differing in trait anxiety, as assessed by scores on the Spielberger Trait Anxiety Inventory (STAI). Behavioural responses and ERP activity were also recorded in response to simple neutral visual stimuli presented during exposure to the facial stimuli, which served as probe-targets. Results: A typical Face Inversion Effect was observed, characterised by longer latencies and greater amplitudes of the early P1 and N170 ERP components. Differences between low- and high-anxious individuals emerged at parieto-occipital sites within the time window of the early P1 component. The later stage of face processing, indexed by the N170 component, was not affected by the level of trait anxiety. Conclusions: The results of this experiment indicate that anxiety level modulates the initial stages of information processing, as reflected in the P1 component. This may be associated with anxiety-related differences in the involuntary processing of face detection of emotional expression. Consequently, a greater attentional engagement appears to occur in highly anxious individuals, leading to delayed behavioural responses to concurrently presented neutral stimuli. Full article
(This article belongs to the Special Issue Advances in Face Perception and How Disorders Affect Face Perception)
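As a rough illustration of the kind of ERP measurement this abstract describes (not the authors' pipeline), the sketch below extracts the mean amplitude and peak latency of a P1-like component from epoched data; the array shapes, the 80–130 ms window, and the channel indices are assumptions.

```python
import numpy as np

# Hypothetical epoched EEG: (n_trials, n_channels, n_samples) at fs Hz,
# time-locked to face onset; values in microvolts. Shapes and window are assumptions.
fs = 500
times = np.arange(-0.1, 0.5, 1 / fs)            # -100 ms to 500 ms
epochs = np.random.randn(120, 64, times.size)   # stand-in for real data

def component_measures(epochs, times, tmin, tmax, channels):
    """Mean amplitude and peak latency of a positive component
    within [tmin, tmax] over the selected channels."""
    win = (times >= tmin) & (times <= tmax)
    erp = epochs[:, channels, :].mean(axis=(0, 1))   # grand-average waveform
    mean_amp = erp[win].mean()
    peak_latency = times[win][np.argmax(erp[win])]
    return mean_amp, peak_latency

# P1 is typically scored at parieto-occipital sites around 80-130 ms (indices assumed).
p1_amp, p1_lat = component_measures(epochs, times, 0.08, 0.13, channels=[60, 61, 62])
print(f"P1 mean amplitude: {p1_amp:.2f} uV, peak latency: {p1_lat * 1000:.0f} ms")
```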

22 pages, 11862 KB  
Article
Do We View Robots as We Do Ourselves? Examining Robotic Face Processing Using EEG
by Xaviera Pérez-Arenas, Álvaro A. Rivera-Rei, David Huepe and Vicente Soto
Brain Sci. 2026, 16(1), 9; https://doi.org/10.3390/brainsci16010009 - 22 Dec 2025
Viewed by 218
Abstract
Background/Objectives: The ability to perceive and process emotional faces quickly and efficiently is essential for human social interactions. In recent years, humans have started to interact more regularly with robotic faces in the form of virtual or real-world robots. Neurophysiological research regarding how the brain decodes robotic faces relative to human ones is scarce and, as such, warrants further research to explore these mechanisms and their social implications. Methods: This study uses event-related potentials (ERPs) to examine the neural correlates during an emotional face categorization task involving human and robotic stimuli. We examined differences in brain activity elicited by viewing robotic and human faces expressing both happy and neutral emotions. ERP waveforms’ amplitudes for the P100, N170, P300, and P600 components were calculated and compared. Furthermore, mass univariate analysis of ERP waveforms was carried out to explore effects not limited to brain regions previously reported in the literature. Results: Results showed robotic faces evoked increased waveform amplitudes at early components (P100 and N170) as well as at the later P300 component. Further, only mid-latency and late cortical components (P300 and P600) showed amplitude differences resulting from emotional valences, aligning with dual-stage models of face processing. Conclusions: These results advance our understanding of face processing during human–robot interaction and contribute to our understanding of brain mechanisms underlying interactions when viewing social robots, setting new considerations for their use in brain health settings and broader cognitive impact. Full article
(This article belongs to the Section Cognitive, Social and Affective Neuroscience)
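The mass univariate comparison mentioned in this abstract can be approximated by point-wise paired tests with a multiple-comparison correction; the sketch below is a generic illustration with made-up array shapes and FDR correction, not the authors' analysis.

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

# Hypothetical condition averages per participant: (n_subjects, n_channels, n_samples).
rng = np.random.default_rng(0)
robot = rng.normal(size=(30, 64, 300))
human = rng.normal(size=(30, 64, 300))

# Paired t-test at every channel x time point (axis 0 = subjects).
t_vals, p_vals = ttest_rel(robot, human, axis=0)

# Control the false discovery rate across all channel/time tests.
reject, p_fdr, _, _ = multipletests(p_vals.ravel(), alpha=0.05, method="fdr_bh")
sig_mask = reject.reshape(p_vals.shape)
print(f"{sig_mask.sum()} of {sig_mask.size} channel/time points survive FDR correction")
```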

13 pages, 1510 KB  
Article
The Impact of Perceptual Adaptation and Real Exposure to Catastrophic Events on Facial Emotion Categorization
by Pasquale La Malva, Valentina Sforza, Eleonora D’Intino, Irene Ceccato, Adolfo Di Crosta, Rocco Palumbo, Alberto Di Domenico and Giulia Prete
Brain Sci. 2026, 16(1), 5; https://doi.org/10.3390/brainsci16010005 - 19 Dec 2025
Viewed by 157
Abstract
Background/Objectives: Facial expressions are central to nonverbal communication and social cognition, and their recognition is shaped not only by facial features but also by contextual cues and prior experience. In high-threat contexts, rapid and accurate decoding of others’ emotions is adaptively advantageous. Grounded in neurocognitive models of face processing and vigilance, we tested whether brief perceptual adaptation to emotionally salient scenes, real-world disaster exposure, and pre-traumatic stress reactions enhance facial-emotion categorization. Methods: Fifty healthy adults reported prior direct exposure to catastrophic events (present/absent) and completed the Pre-Traumatic Stress Reactions Checklist (Pre-Cl; low/high). In a computerized task, participants viewed a single adaptor image for 5 s—negative (disaster), positive (pleasant environment), or neutral (phase-scrambled)—and then categorized a target face as emotional (fearful, angry, happy) or neutral as quickly and accurately as possible. Performance was compared across adaptation conditions and target emotions and examined as a function of disaster exposure and Pre-Cl. Results: Emotional adaptation (negative or positive) yielded better performance than neutral adaptation. Higher-order interactions among adaptation condition, target emotion, disaster exposure, and Pre-Cl indicated that the magnitude of facilitation varied across specific facial emotions and was modulated by both experiential (exposed vs. non-exposed) and dispositional (low vs. high Pre-Cl) factors. These effects support a combined influence of short-term contextual tuning and longer-term experience on facial-emotion categorization. Conclusions: Brief exposure to emotionally salient scenes facilitates subsequent categorization of facial emotions relative to neutral baselines, and this benefit is differentially shaped by prior disaster exposure and pre-traumatic stress. The findings provide behavioral evidence that short-term perceptual adaptation and longer-term experiential predispositions jointly modulate a fundamental communicative behavior, consistent with neurocognitive accounts in which context-sensitive visual pathways and salience systems dynamically adjust to support adaptive responding under threat. Full article

14 pages, 639 KB  
Article
Recognising Emotions from the Voice: A tDCS and fNIRS Double-Blind Study on the Role of the Cerebellum in Emotional Prosody
by Sharon Mara Luciano, Laura Sagliano, Alessia Salzillo, Luigi Trojano and Francesco Panico
Brain Sci. 2025, 15(12), 1327; https://doi.org/10.3390/brainsci15121327 - 13 Dec 2025
Viewed by 379
Abstract
Background: Emotional prosody refers to the variations in pitch, pause, melody, rhythm, and stress of pronunciation conveying emotional meaning during speech. Although several studies demonstrated that the cerebellum is involved in the network subserving recognition of emotional facial expressions, there is only preliminary evidence suggesting its possible contribution to recognising emotional prosody by modulating the activity of cerebello-prefrontal circuits. The present study aims to further explore the role of the left and right cerebellum in the recognition of emotional prosody in a sample of healthy individuals who were required to identify emotions (happiness, anger, sadness, surprise, disgust, and neutral) from vocal stimuli selected from a validated database (EMOVO corpus). Methods: Anodal transcranial Direct Current Stimulation (tDCS) was used in offline mode to modulate cerebellar activity before the emotional prosody recognition task, and functional near-infrared spectroscopy (fNIRS) was used to monitor stimulation-related changes in oxy- and deoxy- haemoglobin (O2HB and HHB) in prefrontal areas (PFC). Results: Right cerebellar stimulation reduced reaction times in the recognition of all emotions (except neutral and disgust) as compared to both the sham and left cerebellar stimulation, while accuracy was not affected by the stimulation. Haemodynamic data revealed that right cerebellar stimulation reduced O2HB and increased HHB in the PFC bilaterally relative to the other stimulation conditions. Conclusions: These findings are consistent with the involvement of the right cerebellum in modulating emotional processing and in regulating cerebello-prefrontal circuits. Full article

26 pages, 3084 KB  
Article
Lightweight Convolutional Neural Network with Efficient Channel Attention Mechanism for Real-Time Facial Emotion Recognition in Embedded Systems
by Juan A. Ramirez-Quintana, Jesus J. Muñoz-Pacheco, Graciela Ramirez-Alonso, Jesus A. Medrano-Hermosillo and Alma D. Corral-Saenz
Sensors 2025, 25(23), 7264; https://doi.org/10.3390/s25237264 - 28 Nov 2025
Viewed by 582
Abstract
This paper presents a novel deep neural network for real-time emotion recognition based on facial expression measurement, optimized for low computational complexity, called Lightweight Expression Recognition Network (LiExNet). The LiExNet architecture comprises only 42,000 parameters and integrates convolutional layers, depthwise convolutional layers, an efficient channel attention mechanism, and fully connected layers. The network was trained and evaluated on three widely used datasets (CK+, KDEF, and FER2013) and a custom dataset, EMOTION-ITCH. This dataset comprises facial expressions from both industrial workers and non-workers, enabling the study of emotional responses to occupational stress. Experimental results demonstrate that LiExNet achieves high recognition performance with minimal computational resources, reaching 99.5% accuracy on CK+, 88.2% on KDEF, 79.2% on FER2013, and 96% on EMOTION-ITCH. In addition, LiExNet supports real-time inference on embedded systems, requiring only 0.03 MB of memory and 1.38 GFLOPs of computational power. Comparative evaluations show that among real-time methods, LiExNet achieves the best results, ranking first on the CK+ and KDEF datasets, and second on FER2013, demonstrating consistent performance across these datasets. These results position LiExNet as a practical and robust alternative for real-time emotion monitoring and emotional dissonance assessment in occupational settings, including hardware-constrained and embedded environments. Full article
(This article belongs to the Section Intelligent Sensors)
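The efficient channel attention idea referenced in this abstract can be sketched as below: a generic ECA-style block in PyTorch with an assumed kernel size and placeholder feature sizes, not the published LiExNet code.

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient channel attention: global average pooling followed by a 1D
    convolution across channels and a sigmoid gate (no dimensionality reduction),
    in the spirit of ECA-Net. The kernel size is an assumption."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.gate = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                 # (N, C) channel descriptor
        y = self.conv(y.unsqueeze(1))          # 1D conv over the channel axis
        w = self.gate(y.squeeze(1))            # (N, C) attention weights
        return x * w[:, :, None, None]         # rescale feature maps

# Example: attach ECA after a depthwise-separable stage (sizes are placeholders).
feat = torch.randn(8, 64, 12, 12)
print(ECABlock()(feat).shape)                  # torch.Size([8, 64, 12, 12])
```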

22 pages, 3240 KB  
Article
A Lightweight Teaching Assessment Framework Using Facial Expression Recognition for Online Courses
by Jinfeng Wang, Xiaomei Chen and Zicong Zhang
Appl. Sci. 2025, 15(23), 12461; https://doi.org/10.3390/app152312461 - 24 Nov 2025
Viewed by 388
Abstract
To ensure the effectiveness of online teaching, educators must understand students’ learning progress. This study proposes LWKD-ViT, a framework designed to accurately capture students’ emotions during online courses. The framework is built on a lightweight facial expression recognition (FER) model with modifications to the fusion block. In addition, knowledge distillation (KD) is integrated into the online course platform to enhance performance. The framework follows a defined process involving face detection, tracking, and clustering to extract facial sequences for each student. An improved model, MobileViT-Local, developed by the authors, extracts emotion features from individual frames of students’ facial video streams for classification and prediction. Students’ facial images are captured through their device cameras and analyzed in real time on their devices, eliminating the need to transmit videos to the teacher’s computer or a remote server. To evaluate the performance of MobileViT-Local, comprehensive tests were conducted on benchmark datasets, including RAFD, RAF-DB, and FER2013, as well as a self-built dataset, SCAUOL. Experimental results demonstrate the model’s competitive performance and superior efficiency. Due to the use of knowledge distillation, the proposed model achieves a prediction accuracy of 94.96%, surpassing other mainstream models. It also exhibits excellent performance, with optimal FLOPs of 0.265 G and a compact size of 4.96 M, while maintaining acceptable accuracy. Full article
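The knowledge-distillation step described above is commonly implemented as a temperature-softened KL term combined with the usual cross-entropy; the PyTorch sketch below shows that standard recipe with an illustrative temperature and weighting, not the authors' exact training code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature: float = 4.0, alpha: float = 0.7):
    """Standard KD objective: alpha * soft-target KL + (1 - alpha) * hard CE.
    The temperature and alpha values are illustrative choices."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, targets)
    return alpha * kd + (1 - alpha) * ce

# Placeholder batch: 7 expression classes; teacher = large ViT, student = lightweight model.
student = torch.randn(16, 7, requires_grad=True)
teacher = torch.randn(16, 7)
labels = torch.randint(0, 7, (16,))
print(distillation_loss(student, teacher, labels).item())
```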

12 pages, 5310 KB  
Article
Overexpression of miR-320-3p, miR-381-3p, and miR-27a-3p Suppresses Genes Related to Midline Facial Cleft in Mouse Cranial Neural Crest Cells
by Chihiro Iwaya, Akiko Suzuki and Junichi Iwata
Int. J. Mol. Sci. 2025, 26(21), 10730; https://doi.org/10.3390/ijms262110730 - 4 Nov 2025
Viewed by 455
Abstract
Midline facial clefts are severe craniofacial defects that occur due to an underdeveloped frontonasal process. While genetic studies in mice have identified several genes that are crucial for midfacial development, the interactions and regulatory mechanisms of these genes during development remain unclear. In this study, we conducted a systematic review and database search to curate genes associated with midline facial clefts in mice. We identified a total of 78 relevant genes, which included 69 single-gene mutant mice, nine spontaneous models, and 20 compound mutant mice. We then performed bioinformatic analyses with these genes to identify candidate microRNAs (miRNAs) that may regulate the expression of genes related to midline facial clefts. Furthermore, we experimentally evaluated the four highest-ranking candidates—miR-320-3p, miR-381-3p, miR-27a-3p, and miR-124-3p—in O9-1 cells. Our results indicated that overexpression of any of these miRNAs inhibited cell proliferation through the suppression of genes associated with midline facial clefts. Thus, our results suggest that miR-320-3p, miR-381-3p, miR-27a-3p, and miR-124-3p are involved in the cause of midline facial anomalies. Full article
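One simple way to prioritise candidate miRNAs in the kind of analysis described above is to rank them by how many curated cleft-associated genes appear among their predicted targets; the sketch below uses made-up gene and target lists purely to illustrate that ranking step, not the authors' pipeline or data.

```python
# Rank candidate miRNAs by overlap between their predicted targets and a curated
# gene set. Gene symbols and target lists below are illustrative placeholders only.
curated_cleft_genes = {"Shh", "Gli3", "Tgfbr2", "Pdgfra", "Foxe1", "Sox9"}

predicted_targets = {
    "miR-320-3p": {"Pdgfra", "Sox9", "Mapk1"},
    "miR-381-3p": {"Shh", "Tgfbr2", "Gli3"},
    "miR-27a-3p": {"Pdgfra", "Foxe1"},
    "miR-124-3p": {"Sox9"},
}

ranking = sorted(
    ((mir, len(targets & curated_cleft_genes)) for mir, targets in predicted_targets.items()),
    key=lambda item: item[1],
    reverse=True,
)
for mir, n_hits in ranking:
    print(f"{mir}: {n_hits} curated targets")
```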

21 pages, 48081 KB  
Article
A Public Health Approach to Automated Pain Intensity Recognition in Chest Pain Patients via Facial Expression Analysis for Emergency Care Prioritization
by Rita Wiryasaputra, Yu-Tse Tsan, Qi-Xiang Zhang, Hsing-Hung Liu, Yu-Wei Chan and Chao-Tung Yang
Diagnostics 2025, 15(20), 2661; https://doi.org/10.3390/diagnostics15202661 - 21 Oct 2025
Viewed by 902
Abstract
Background/Objectives: Cardiovascular disease remains a leading cause of death worldwide, with chest pain often serving as an initial reason for emergency visits. However, the severity of chest pain does not necessarily correlate with the severity of myocardial infarction. Facial expressions are an essential medium to convey the intensity of pain, particularly in patients experiencing speech difficulties. Automating the recognition of facial pain expression may therefore provide an auxiliary tool for monitoring chest pain without replacing clinical diagnosis. Methods: Using streaming technology, the system captures real-time facial expressions and classifies pain levels using a deep learning framework. The PSPI scores were incorporated with the YOLO models to ensure precise classification. Through extensive fine-tuning, we compare the performance of YOLO-series models, evaluating both computational efficiency and diagnostic accuracy rather than focusing solely on accuracy or processing time. Results: The custom YOLOv4 model demonstrated superior performance in pain level recognition, achieving a precision of 97% and the fastest training time. The system integrates a web-based interface with color-coded pain indicators, which can be deployed on smartphones and laptops for flexible use in healthcare settings. Conclusions: This study demonstrates the potential of automating pain assessment based on facial expressions to assist healthcare professionals in observing patient discomfort. Importantly, the approach does not infer the underlying cause of myocardial infarction. Future work will incorporate clinical metadata and a lightweight edge computing model to enable real-time pain monitoring in diverse care environments, which may support patient monitoring and assist in clinical observation. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
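The PSPI metric mentioned in this abstract is conventionally computed from facial action unit (AU) intensities; the sketch below implements that standard formula together with an illustrative binning into pain levels (the bin thresholds are assumptions, not the authors' classes).

```python
def pspi(au4, au6, au7, au9, au10, au43):
    """Prkachin-Solomon Pain Intensity from AU intensities (0-5; AU43 is 0/1):
    PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43, giving a 0-16 scale."""
    return au4 + max(au6, au7) + max(au9, au10) + au43

def pain_level(score):
    """Map a PSPI score to a coarse label; the cut-offs here are illustrative only."""
    if score == 0:
        return "no pain"
    if score <= 5:
        return "mild"
    if score <= 10:
        return "moderate"
    return "severe"

score = pspi(au4=2, au6=3, au7=1, au9=0, au10=2, au43=0)
print(score, pain_level(score))   # 7 moderate
```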

20 pages, 2565 KB  
Article
GBV-Net: Hierarchical Fusion of Facial Expressions and Physiological Signals for Multimodal Emotion Recognition
by Jiling Yu, Yandong Ru, Bangjun Lei and Hongming Chen
Sensors 2025, 25(20), 6397; https://doi.org/10.3390/s25206397 - 16 Oct 2025
Viewed by 950
Abstract
A core challenge in multimodal emotion recognition lies in the precise capture of the inherent multimodal interactive nature of human emotions. Addressing the limitation of existing methods, which often process visual signals (facial expressions) and physiological signals (EEG, ECG, EOG, and GSR) in isolation and thus fail to exploit their complementary strengths effectively, this paper presents a new multimodal emotion recognition framework called the Gated Biological Visual Network (GBV-Net). This framework enhances emotion recognition accuracy through deep synergistic fusion of facial expressions and physiological signals. GBV-Net integrates three core modules: (1) a facial feature extractor based on a modified ConvNeXt V2 architecture incorporating lightweight Transformers, specifically designed to capture subtle spatio-temporal dynamics in facial expressions; (2) a hybrid physiological feature extractor combining 1D convolutions, Temporal Convolutional Networks (TCNs), and convolutional self-attention mechanisms, adept at modeling local patterns and long-range temporal dependencies in physiological signals; and (3) an enhanced gated attention fusion module capable of adaptively learning inter-modal weights to achieve dynamic, synergistic integration at the feature level. A thorough investigation of the publicly accessible DEAP and MAHNOB-HCI datasets reveals that GBV-Net surpasses contemporary methods. Specifically, on the DEAP dataset, the model attained classification accuracies of 95.10% for Valence and 95.65% for Arousal, with F1-scores of 95.52% and 96.35%, respectively. On MAHNOB-HCI, the accuracies achieved were 97.28% for Valence and 97.73% for Arousal, with F1-scores of 97.50% and 97.74%, respectively. These experimental findings substantiate that GBV-Net effectively captures deep-level interactive information between multimodal signals, thereby improving emotion recognition accuracy. Full article
(This article belongs to the Section Biomedical Sensors)
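A gated attention fusion of the kind this abstract describes can be sketched as a learned per-dimension gate that weights the facial and physiological embeddings before classification; the PyTorch module below is a generic illustration with assumed feature sizes, not the published GBV-Net code.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Learn a gate g from the concatenated modality embeddings and mix them as
    g * face + (1 - g) * physio before classification. Dimensions are placeholders."""
    def __init__(self, face_dim=256, physio_dim=128, fused_dim=128, n_classes=2):
        super().__init__()
        self.face_proj = nn.Linear(face_dim, fused_dim)
        self.physio_proj = nn.Linear(physio_dim, fused_dim)
        self.gate = nn.Sequential(nn.Linear(2 * fused_dim, fused_dim), nn.Sigmoid())
        self.classifier = nn.Linear(fused_dim, n_classes)

    def forward(self, face_feat, physio_feat):
        f = torch.tanh(self.face_proj(face_feat))
        p = torch.tanh(self.physio_proj(physio_feat))
        g = self.gate(torch.cat([f, p], dim=1))   # per-dimension modality weight
        fused = g * f + (1 - g) * p
        return self.classifier(fused)

logits = GatedFusion()(torch.randn(4, 256), torch.randn(4, 128))
print(logits.shape)   # torch.Size([4, 2]), e.g. low vs. high valence
```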

13 pages, 1312 KB  
Article
Comparing Visual Search Efficiency Across Different Facial Characteristics
by Navdeep Kaur, Isabella Hooge and Andrea Albonico
Vision 2025, 9(4), 88; https://doi.org/10.3390/vision9040088 - 15 Oct 2025
Viewed by 642
Abstract
Face recognition is an important skill that helps people make social judgments by identifying both who a person is and other characteristics such as their expression, age, and ethnicity. Previous models of face processing, such as those proposed by Bruce and Young and by Haxby and colleagues, suggest that identity and other facial features are processed through partly independent systems. This study aimed to compare the efficiency with which different facial characteristics are processed in a visual search task. Participants viewed arrays of two, four, or six faces and judged whether one face differed from the others. Four tasks were created, focusing separately on identity, expression, ethnicity, and gender. We found that search times were significantly longer when looking for identity and shorter when looking for ethnicity. Significant correlations were found among almost all tests in all outcome variables. Comparison of target-present and target-absent trials suggested that performance in none of the tests seems to follow a serial-search-terminating model. These results suggest that different facial characteristics share early processing but differentiate into independent recognition mechanisms at a later stage. Full article
(This article belongs to the Section Visual Neuroscience)
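Search efficiency in tasks like the one above is usually summarised as the slope of response time against set size, with a roughly 2:1 target-absent to target-present slope ratio expected under a serial self-terminating search; the sketch below fits those slopes with made-up RT data.

```python
import numpy as np

# Hypothetical mean RTs (ms) at set sizes 2, 4, 6 for one task (placeholder values).
set_sizes = np.array([2, 4, 6])
rt_present = np.array([720.0, 815.0, 905.0])
rt_absent = np.array([760.0, 950.0, 1140.0])

slope_present, _ = np.polyfit(set_sizes, rt_present, 1)   # ms per item
slope_absent, _ = np.polyfit(set_sizes, rt_absent, 1)

print(f"present slope: {slope_present:.1f} ms/item")
print(f"absent slope:  {slope_absent:.1f} ms/item")
# A serial self-terminating model predicts an absent:present slope ratio near 2.
print(f"absent/present ratio: {slope_absent / slope_present:.2f}")
```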

20 pages, 431 KB  
Article
Re-Viewing the Same Artwork with Emotional Reappraisal: An Undergraduate Classroom Study in Time-Based Media Art Education
by Haocheng Feng, Tzu-Yang Wang, Takaya Yuizono and Shan Huang
Educ. Sci. 2025, 15(10), 1354; https://doi.org/10.3390/educsci15101354 - 12 Oct 2025
Viewed by 1047
Abstract
Learning and understanding of art are increasingly understood as dynamic processes in which emotion and cognition unfold over time. However, classroom-based evidence on how structured temporal intervals and guided prompts reshape students’ emotional experience remains limited. This study addresses these gaps by quantitatively examining changes in emotion over time in a higher education institution. Employing a comparative experimental design, third-year undergraduate art students participated in two structured courses, where emotional responses were captured using an emotion recognition approach (facial expression and self-reported text) during two sessions: initial impression and delayed impression (three days later). The findings reveal a high consistency in dominant facial expressions and substantial agreement in self-reported emotions across both settings. However, the delayed impression elicited greater emotional diversity and intensity, reflecting deeper cognitive engagement and emotional processing over time. These results reveal a longitudinal trajectory of emotion influenced by guided reflective re-view over time. Emotional dynamics extend medium theory by embedding temporal and affective dimensions into TBMA course settings. This study proposes an ethically grounded and technically feasible framework for emotion recognition that supports reflective learning rather than mere measurement. Together, these contributions redefine TBMA education as a temporal and emotional ecosystem and provide an empirical foundation for future research on how emotion fosters understanding, interest, and appreciation in higher media art education. Full article
(This article belongs to the Section Education and Psychology)

18 pages, 5377 KB  
Article
M3ENet: A Multi-Modal Fusion Network for Efficient Micro-Expression Recognition
by Ke Zhao, Xuanyu Liu and Guangqian Yang
Sensors 2025, 25(20), 6276; https://doi.org/10.3390/s25206276 - 10 Oct 2025
Cited by 1 | Viewed by 838
Abstract
Micro-expression recognition (MER) aims to detect brief and subtle facial movements that reveal suppressed emotions, discerning authentic emotional responses in scenarios such as visitor experience analysis in museum settings. However, it remains a highly challenging task due to the fleeting duration, low intensity, and limited availability of annotated data. Most existing approaches rely solely on either appearance or motion cues, thereby restricting their ability to capture expressive information fully. To overcome these limitations, we propose a lightweight multi-modal fusion network, termed M3ENet, which integrates both motion and appearance cues through early-stage feature fusion. Specifically, our model extracts horizontal, vertical, and strain-based optical flow between the onset and apex frames, alongside RGB images from the onset, apex, and offset frames. These inputs are processed by two modality-specific subnetworks, whose features are fused to exploit complementary information for robust classification. To improve generalization in low data regimes, we employ targeted data augmentation and adopt focal loss to mitigate class imbalance. Extensive experiments on five benchmark datasets, including CASME I, CASME II, CAS(ME)2, SAMM, and MMEW, demonstrate that M3ENet achieves state-of-the-art performance with high efficiency. Ablation studies and Grad-CAM visualizations further confirm the effectiveness and interpretability of the proposed architecture. Full article
(This article belongs to the Special Issue AI-Based Computer Vision Sensors & Systems—2nd Edition)
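The focal loss used to counter class imbalance, as mentioned above, is a standard modification of cross-entropy that down-weights easy examples; the PyTorch sketch below shows the usual multi-class form with an assumed gamma, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma: float = 2.0):
    """Multi-class focal loss: scale the per-sample cross-entropy by (1 - p_t)^gamma
    so well-classified (high p_t) examples contribute less. gamma=2 is illustrative."""
    ce = F.cross_entropy(logits, targets, reduction="none")   # -log p_t per sample
    p_t = torch.exp(-ce)
    return ((1.0 - p_t) ** gamma * ce).mean()

# Placeholder micro-expression batch: 5 emotion classes.
logits = torch.randn(32, 5, requires_grad=True)
labels = torch.randint(0, 5, (32,))
print(focal_loss(logits, labels).item())
```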

17 pages, 2353 KB  
Article
AI-Based Facial Emotion Analysis in Infants During Complementary Feeding: A Descriptive Study of Maternal and Infant Influences
by Murat Gülşen, Beril Aydın, Güliz Gürer and Sıddika Songül Yalçın
Nutrients 2025, 17(19), 3182; https://doi.org/10.3390/nu17193182 - 9 Oct 2025
Viewed by 734
Abstract
Background/Objectives: Infant emotional responses during complementary feeding offer key insights into early developmental processes and feeding behaviors. AI-driven facial emotion analysis presents a novel, objective method to quantify these subtle expressions, potentially informing interventions in early childhood nutrition. We aimed to investigate how maternal and infant traits influence infants’ emotional responses during complementary feeding using an automated facial analysis tool. Methods: This multi-center study involved 117 typically developing infants (6–11 months) and their mothers. Standardized feeding sessions were recorded, and OpenFace software quantified six emotions (surprise, sadness, fear, happiness, anger, disgust). Data were normalized and analyzed via Generalized Estimating Equations to identify associations with maternal BMI, education, work status, and infant age, sex, and complementary feeding initiation. Results: Emotional responses did not differ significantly across five food groups. Infants of mothers with BMI > 30 kg/m2 showed greater surprise, while those whose mothers were well-educated and not working displayed more happiness. Older infants and those introduced to complementary feeding before six months exhibited higher levels of anger. Parental or infant food selectivity did not significantly affect responses. Conclusions: The findings indicate that maternal and infant demographic factors exert a more pronounced influence on infant emotional responses during complementary feeding than the type of food provided. These results highlight the importance of integrating broader psychosocial variables into early feeding practices and underscore the potential utility of AI-driven facial emotion analysis in advancing research on infant development. Full article
(This article belongs to the Section Nutrition Methodology & Assessment)
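Generalized Estimating Equations, as used in this study, handle repeated emotion measurements nested within infants; the statsmodels sketch below fits a GEE with an exchangeable working correlation on synthetic stand-in data, with column names and covariates chosen for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for per-food-offer emotion scores nested within infants.
rng = np.random.default_rng(1)
n_infants, offers = 20, 5
df = pd.DataFrame({
    "infant_id": np.repeat(np.arange(n_infants), offers),
    "happiness": rng.normal(0.3, 0.1, n_infants * offers),
    "maternal_bmi_over_30": np.repeat(rng.integers(0, 2, n_infants), offers),
    "age_months": np.repeat(rng.integers(6, 12, n_infants), offers),
})

# An exchangeable working correlation accounts for repeated observations per infant.
model = smf.gee(
    "happiness ~ maternal_bmi_over_30 + age_months",
    groups="infant_id",
    data=df,
    family=sm.families.Gaussian(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
print(model.fit().summary())
```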

25 pages, 2622 KB  
Article
Food Emotional Perception and Eating Willingness Under Different Lighting Colors: A Preliminary Study Based on Consumer Facial Expression Analysis
by Yuan Shu, Huixian Gao, Yihan Wang and Yangyang Wei
Foods 2025, 14(19), 3440; https://doi.org/10.3390/foods14193440 - 8 Oct 2025
Cited by 1 | Viewed by 3096
Abstract
The influence of lighting color on food is a multidimensional process, linking visual interventions with people’s perception of food appearance, physiological responses, and psychological associations. This study, as a preliminary exploratory research, aims to initially investigate the effects of different lighting colors on food-induced consumer appetite and emotional perception. By measuring consumers’ physiological facial expression data, we verify whether the results are consistent with self-reported subjective evaluations. Questionnaires, Shapiro–Wilk tests, and one-sample t-tests were employed for data mining and cross-validation and combined with generalized facial expression recognition (GFER) technology to analyze participants’ emotional perceptions under various lighting colors. The results show that consumers displayed the most positive emotions and the highest appetite under 2700 K warm white light. Under this condition, the average intensity of participants’ “happy” emotion was 0.25 (SD = 0.12), indicating a clear positive emotional state. Eating willingness also reached its peak at 2700 K. In contrast, blue light-induced negative emotions and lower appetite. Among all lighting types, blue light evoked the strongest “sad” emotion (M = 0.39). This study provides a preliminary exploration of the theoretical framework regarding the relationship between food and consumer behavior, offering new perspectives for product marketing in the food industry and consumer food preference cognition. However, the generalizability of its conclusions still requires further verification in subsequent studies. Full article
(This article belongs to the Section Sensory and Consumer Sciences)
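The Shapiro–Wilk and one-sample t-test steps described above are easy to reproduce in outline; the sketch below tests made-up per-participant "happy" intensities under one lighting condition against a zero baseline (the data and the baseline value are placeholders).

```python
import numpy as np
from scipy.stats import shapiro, ttest_1samp

# Placeholder per-participant mean "happy" intensities under 2700 K warm white light.
rng = np.random.default_rng(7)
happy_2700k = rng.normal(loc=0.25, scale=0.12, size=30)

w, p_normal = shapiro(happy_2700k)                   # check approximate normality first
t, p_value = ttest_1samp(happy_2700k, popmean=0.0)   # does the mean differ from zero?

print(f"Shapiro-Wilk p = {p_normal:.3f}")
print(f"mean = {happy_2700k.mean():.2f}, t = {t:.2f}, p = {p_value:.4f}")
```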

17 pages, 3701 KB  
Review
A Review of Assessment of Sow Pain During Farrowing Using Grimace Scores
by Lucy Palmer, Sabrina Lomax and Roslyn Bathgate
Animals 2025, 15(19), 2915; https://doi.org/10.3390/ani15192915 - 7 Oct 2025
Viewed by 851
Abstract
Reproduction is one of the most important considerations for the livestock industry, presenting significant economic and animal health and welfare pressures for producers. Parturition, the process of giving birth, is known to be highly painful in many mammalian species, but the understanding of parturient pain in sows is limited. Farrowing, the process of parturition in pigs, is understudied compared to other livestock species, with very little research available specifically regarding pain. Pain can be detrimental to animal wellbeing; hence, it is vital for it to be reliably detected and managed in such a way that improves both sow and piglet health and welfare. Grimace scales have been developed as a method for pain detection and quantification in animals via observations of facial expression changes in response to painful stimuli. This presents a unique opportunity for improved pain assessment during farrowing, increasing the current understanding of farrowing dynamics and potentially enhancing farrowing management decisions to prioritise sow welfare. This review synthesises and critically analyses the current knowledge on sow parturient pain and the ability for the application of facial grimace scoring to measure pain severity. Grimace scoring was found to be an effective, simple and feasible method of pain assessment in a number of domestic species, and its recent application to farrowing is a promising development in the understanding and management of sow welfare during parturition. Full article
(This article belongs to the Special Issue Animal Health and Welfare Assessment of Pigs)
