Search Results (2,424)

Search Parameters:
Keywords = Emotional expression

24 pages, 5930 KB  
Article
Style-Abstraction-Based Data Augmentation for Robust Affective Computing
by Xu Qiu, Taewan Kim and Bongjae Kim
Appl. Sci. 2026, 16(6), 3109; https://doi.org/10.3390/app16063109 - 23 Mar 2026
Abstract
Personality recognition and emotion recognition, two core tasks within affective computing, are fundamentally constrained by data scarcity, as collecting and annotating human behavioral data is expensive and restricted by privacy concerns. Under these limited data conditions, existing models tend to rely on superficial shortcut features such as background appearance, lighting conditions, or color variations, rather than behavior-relevant cues including facial expressions, posture, and motion dynamics. To address this issue, we propose Style-Abstraction-based Data Augmentation, a style transfer-based augmentation strategy that reduces dependency on low-level appearance information while preserving high-level semantic cues. Specifically, we employ cartoonization to generate stylized variants of training videos that retain expressive characteristics but remove stylistic bias. We validate our approach on three diverse personality benchmarks (First Impression v2, UDIVA v0.5, and KETI) and one emotion benchmark (Emotion Dataset) using state-of-the-art models including ViViT (Video Vision Transformer), TimeSformer, and VST (Video Swin Transformer). Our experiments indicate that increasing the proportion of style-abstracted data in the training set can improve performance on the evaluated datasets. Notably, our method yields consistent gains across all benchmarks: a 0.0893 reduction in MSE on UDIVA v0.5 (with VST), a 0.0023 improvement in 1-MAE on KETI (with TimeSformer), and a 0.0051 improvement on First Impression v2 (with TimeSformer). Furthermore, extending style-abstraction-based data augmentation to a four-class categorical emotion recognition task demonstrates similar performance gains, achieving up to a 3.44% accuracy increase with the TimeSformer backbone. These findings verify that our style-abstraction-based data augmentation facilitates learning of behavior-relevant features by reducing reliance on superficial shortcuts. 
Overall, cartoonization-based style abstraction for data augmentation functions as both an effective augmentation strategy and a regularization mechanism, encouraging the model to learn more stable and generalizable representations for affective computing applications. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Digital Image Processing)
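The core training-set manipulation this abstract describes, controlling the proportion of style-abstracted (cartoonized) samples mixed into the training data, can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: `stylize` stands in for any cartoonization model, and the string-tagging "videos" are toy placeholders.

```python
import random

def mix_style_abstracted(samples, stylize, ratio, seed=0):
    """Replace a fraction `ratio` of (sample, label) pairs with their
    style-abstracted variants, keeping labels unchanged."""
    rng = random.Random(seed)
    n_style = int(len(samples) * ratio)
    chosen = set(rng.sample(range(len(samples)), n_style))
    return [
        (stylize(x), y) if i in chosen else (x, y)
        for i, (x, y) in enumerate(samples)
    ]

# Toy usage: "videos" are strings; stylization just tags them.
data = [(f"clip{i}", i % 2) for i in range(10)]
mixed = mix_style_abstracted(data, lambda x: x + "_cartoon", ratio=0.3)
```

Sweeping `ratio` is how one would reproduce the paper's reported effect of increasing the style-abstracted proportion; labels stay paired with their (stylized) clips, so no annotation cost is added.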

11 pages, 257 KB  
Entry
Saudade as a Cultural Concept
by Susana Amante
Encyclopedia 2026, 6(3), 71; https://doi.org/10.3390/encyclopedia6030071 - 23 Mar 2026
Definition
Saudade is a cultural concept expressing a profound sense of longing, nostalgia, or melancholy associated with absence, loss, or unattainable experiences. Emerging in medieval Portugal and shaped by historical, social, and literary developments, it has evolved from an individual emotion into a collective cultural construct reflecting the identity, history, and aesthetic sensibilities of Lusophone communities. Drawing on peer-reviewed scholarship and interdisciplinary research in cultural studies, this entry examines how saudade is expressed in the literature, music, and philosophical discourse, and its role in national memory, emigration, and cultural imagination. While sometimes described as untranslatable, its uniqueness reflects deep historical and cultural embedding rather than a linguistic limitation. Saudade, therefore, functions as a multilayered symbolic category, revealing the interplay between emotion, language, and cultural identity in Lusophone contexts. Full article
(This article belongs to the Section Arts & Humanities)
20 pages, 879 KB  
Article
The Influence of Group Psychology on Network Cluster Behavior: A Moderated Mediation Model
by Jianjun Ni, Zhangbo Xiong and Mingzheng Wu
Behav. Sci. 2026, 16(3), 465; https://doi.org/10.3390/bs16030465 - 20 Mar 2026
Viewed by 19
Abstract
With the rapid development of new media and social platforms on the internet, some social hotspots or sensitive events can easily ferment and spread in the online space, attracting the attention or concentrated discussion of young students. Network cluster behavior is a collective behavior in which a large number of netizens collectively express and gather opinions around social hot issues of common concern, creating online public opinion. This study explored the influence of group psychology on the process of college students participating in online cluster behavior. A survey was conducted involving 2137 college students from over 10 universities in Zhejiang Province, Jiangsu Province, and other regions. The data were analyzed using correlation analysis and moderated mediation model testing. The study found that group psychological factors, such as emotional infection, depersonalization, the spiral of silence, relative deprivation, group polarization, and action mobilization, positively predicted network cluster behavior. The action mobilization of opinion leaders mediated the relationship between emotional infection and network cluster behavior. Group polarization mediated the relationship between the spiral of silence and network cluster behavior. Additionally, group efficacy moderated the latter part of the mediation process between group polarization and network cluster behavior. Full article
(This article belongs to the Section Organizational Behaviors)

21 pages, 287 KB  
Article
Post-Liturgical Women’s Rituals Among Western Ukrainian Female Labor Migrants in Israel
by Anna Prashizky
Religions 2026, 17(3), 396; https://doi.org/10.3390/rel17030396 - 20 Mar 2026
Viewed by 42
Abstract
This article develops the analytical concept of post-liturgical female rituality to examine informal religious practices created by Western Ukrainian female labor migrants in Israel. Drawing on approaches that conceptualize ritual as flexible, embodied, and processual, it focuses on women’s ritual activities that take place in close temporal and symbolic proximity to official church liturgy while remaining outside canonical frameworks. Rather than directly challenging institutional religion, these practices extend and reinterpret patriarchal liturgy through gendered forms of ritual engagement. The analysis is based on qualitative research among Ukrainian Greek Catholic women in Israel, including 27 in-depth interviews, participant observation, and digital ethnography. The findings highlight three interconnected dimensions: collective gatherings following church services; post-liturgical practices involving food, singing, and embodied performance; and national-religious rituals expressing emotional belonging to Ukraine in the context of war. The article argues that post-liturgical female rituals constitute a distinct form of women’s religious agency that operates within institutional Christianity while reworking its meanings, contributing to feminist scholarship on ritual, migration, and war. Full article
(This article belongs to the Special Issue Studies on Religious Rituals and Practices)
23 pages, 4795 KB  
Article
RolEmo: A Role-Aware Commonsense-Augmented Contrastive Learning Framework for Emotion Classification
by Muhammad Abulaish and Anjali Bhardwaj
Mach. Learn. Knowl. Extr. 2026, 8(3), 79; https://doi.org/10.3390/make8030079 - 19 Mar 2026
Viewed by 23
Abstract
Emotion classification is a fundamental task in affective computing, with applications in human–computer interaction, mental health monitoring, and social media analysis. Although most existing methods formulate it as a flat classification problem, emotional expressions are inherently structured and grounded in semantic roles such as the emotion cue, stimulus, experiencer, and target. However, the relative contribution of these roles to emotion inference has not been systematically examined. Unlike prior models, we propose RolEmo, a role-aware framework for emotion classification that explicitly incorporates semantic role information. The framework employs a controlled role-masking strategy to analyze the contribution of individual roles, augments textual representations with external commonsense knowledge to capture implicit affective context, and applies supervised contrastive learning to structure the embedding space by bringing emotionally similar instances closer while separating opposing ones. We evaluate RolEmo on three benchmark datasets annotated with semantic roles. Experimental results demonstrate that RolEmo outperforms the strongest baseline across three datasets by up to 16.4%, 25.8%, and 23.2% in the Full Text, Only Role, and Without Role settings, respectively. The analysis further indicates that the cue and stimulus roles provide the most reliable signals for emotion classification, with their removal causing performance drops of up to 6.2% in macro f1-score, while experiencer and target roles exhibit more variable effects. These findings highlight the importance of structured semantic modeling and commonsense reasoning for robust and interpretable emotion understanding. Full article
(This article belongs to the Section Learning)
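The controlled role-masking strategy this abstract describes, ablating one semantic role (cue, stimulus, experiencer, or target) to measure its contribution, can be illustrated with a small sketch. The code below is a hypothetical example, not the authors' implementation; it assumes role annotations are stored as end-exclusive token spans.

```python
def mask_role(tokens, role_spans, role, mask_token="[MASK]"):
    """Replace every token belonging to `role` (e.g. 'cue', 'stimulus')
    with mask_token, leaving all other tokens untouched."""
    masked = list(tokens)
    for start, end in role_spans.get(role, []):  # end-exclusive spans
        for i in range(start, end):
            masked[i] = mask_token
    return masked

tokens = "she cried because the movie ended".split()
roles = {"experiencer": [(0, 1)], "cue": [(1, 2)], "stimulus": [(3, 6)]}
no_cue = mask_role(tokens, roles, "cue")
```

Running the same classifier on the original and role-masked inputs, and comparing the score drop per role, is the kind of ablation the abstract's "Only Role" and "Without Role" settings suggest.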

37 pages, 3831 KB  
Article
A Hybrid NER–Sentiment Model for Uzbek Texts: Integrating Lexical, Deep Learning, and Entity-Based Approaches
by Bobur Saidov, Vladimir Barakhnin, Rakhmon Saparbaev, Zayniddin Narmuratov, Rustamova Manzura, Ruzmetova Zilolakhon and Anorgul Atajanova
Big Data Cogn. Comput. 2026, 10(3), 92; https://doi.org/10.3390/bdcc10030092 - 19 Mar 2026
Viewed by 52
Abstract
This work proposes a hybrid Uzbek sentiment analysis model (sometimes referred to as tonality analysis in the local literature) that integrates contextual text representations with named-entity information from an NER module and emoji-based emotional cues that are common in short online messages. To provide a comprehensive baseline comparison, we evaluate seven approaches—SVM, LSTM, mBERT, XLM-RoBERTa-base, mDeBERTa-v3, LaBSE, and the proposed hybrid model—covering both classical machine learning and modern multilingual transformer architectures for low-resource sentiment tasks. The overall pipeline begins with Uzbek-specific text normalization to reduce noise from informal spellings, transliteration variants, and inconsistent apostrophe usage. In parallel, the system performs explicit emoji extraction to capture affective signals that are often expressed non-verbally in social media texts. Next, we construct three complementary feature streams: a context encoder for sentence-level semantics, NER-driven entity features that encode entity mentions and types, and an emotion module that models emoji priors and their interaction with contextual meaning. These streams are fused into a unified representation and fed to a final classifier to predict sentiment polarity. Experiments on an Uzbek test set demonstrate that the hybrid model reaches an F1-score of 0.92, consistently outperforming text-only baselines. The results indicate that entity-aware and emoji-informed features improve robustness under sarcasm/irony, mixed sentiment with multiple targets, and orthographic noise, making the approach suitable for social media analytics, public opinion monitoring, customer feedback triage, and recommendation-oriented text mining. Full article
(This article belongs to the Section Data Mining and Machine Learning)
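The pipeline steps this abstract names, explicit emoji extraction followed by fusion of the context, entity, and emotion feature streams, can be sketched as below. This is a toy illustration only: the emoji set, the vectors, and the function names are invented for the example, and the paper's actual encoders are neural models rather than hand-built features.

```python
def extract_emojis(text, emoji_set=frozenset("😀😡😢👍")):
    """Split a message into its plain text and its emoji cues."""
    emojis = [ch for ch in text if ch in emoji_set]
    plain = "".join(ch for ch in text if ch not in emoji_set).strip()
    return plain, emojis

def fuse_streams(context_vec, entity_vec, emoji_vec):
    """Late fusion by concatenation into one classifier input."""
    return list(context_vec) + list(entity_vec) + list(emoji_vec)

# Toy usage on an Uzbek-style message with emoji cues.
plain, cues = extract_emojis("zo'r mahsulot 👍😀")
features = fuse_streams([0.2, 0.7], [1.0, 0.0], [len(cues) / 2])
```

Concatenation is the simplest fusion choice; the point is that emoji-derived signals enter the final classifier as a separate stream rather than being discarded during text normalization.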

22 pages, 7355 KB  
Article
IAE-Net: Incremental Learning-Based Attention-Enhanced DenseNet for Robust Facial Emotion Recognition
by Haseeb Ali Khan and Jong-Ha Lee
Mathematics 2026, 14(6), 1023; https://doi.org/10.3390/math14061023 - 18 Mar 2026
Viewed by 78
Abstract
Facial emotion recognition (FER) is an important component of human–computer interaction and healthcare-oriented affective computing. However, reliable deployment remains difficult in unconstrained settings due to appearance and geometric variability (e.g., pose, illumination, and occlusion), demographic imbalance, and dataset bias. In practice, two additional constraints frequently limit real-world FER systems: the computational overhead of heavy architectures and limited adaptability when data evolve over time, where sequential updates can cause catastrophic forgetting. To address these challenges, we propose the Incremental Attention-Enhanced Network (IAE-Net), a compact single-branch framework built on a DenseNet121 backbone and a cascaded refinement pipeline. The model incorporates Channel Attention (CA) to emphasize expression-relevant feature channels and suppress less informative responses, followed by a deformable attention module (DA) that reduces feature misalignment caused by non-rigid facial motion and pose shifts, thereby improving robustness under geometric variability. For continual deployment, IAE-Net supports class-incremental updates via weight transfer, exemplar replay, and knowledge distillation to improve retention during sequential learning. We evaluate IAE-Net on four widely used benchmarks, FER2013, FERPlus, KDEF, and AffectNet, covering both controlled and in-the-wild conditions under a unified training protocol. The proposed approach achieves accuracies of 79.15%, 92.03%, 99.48%, and 74.20% on FER2013, FERPlus, KDEF, and AffectNet, respectively, with balanced precision, recall, and F1-score trends. These results indicate that IAE-Net provides an efficient and extensible FER framework with potential utility in dynamic real-world and longitudinal healthcare-oriented applications. Full article
(This article belongs to the Special Issue Recent Advances and Applications of Artificial Neural Networks)
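The knowledge-distillation component mentioned for class-incremental updates conventionally minimizes a temperature-softened divergence between the old and new model's outputs, scaled by T². A minimal stdlib sketch of that loss follows; it is a generic illustration under that convention, not the authors' implementation.

```python
import math

def softmax(logits, temperature=1.0):
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional for knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl

# The loss vanishes when the student matches the teacher exactly.
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
diff = distillation_loss([0.0, 0.0, 0.0], [2.0, 0.5, -1.0])
```

During an incremental step this term would be added to the cross-entropy on new classes (with exemplar replay supplying old-class samples), pulling the updated model's soft predictions toward the frozen teacher's to limit catastrophic forgetting.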

14 pages, 1070 KB  
Article
Return or Stay? The Dilemma of Hope and Despair Among Syrian Refugees Living in Jordan: An Ecological Perspective
by Lojayn Smadi and Bader Seetan Al-Madi
Soc. Sci. 2026, 15(3), 196; https://doi.org/10.3390/socsci15030196 - 18 Mar 2026
Viewed by 100
Abstract
The political transition in Syria following the fall of the Al-Assad regime in December 2024 has renewed debates about refugee return. This study examines Syrian refugees’ intentions to return from Jordan and the factors shaping these decisions using a mixed-method design. A stratified random sample of 1070 refugees residing in host areas and camps was surveyed through telephone interviews, complemented by four focus group discussions and two key informant interviews with experts. Although 61% of respondents expressed an intention to return, only 20% indicated concrete or immediate plans, suggesting that return remains largely aspirational rather than imminent. Access to housing and property (55%), economic condition (46%), and safety and security (40%) emerged as central determinants, indicating that structural barriers, rather than regime change alone, shape decision-making. Qualitative findings further reveal that emotional attachment to Syria sustains return aspirations, yet financial hardship, debt in Jordan, and housing destruction in Syria constrain refugees’ capabilities to act. These findings underscore that voluntary, safe, and dignified repatriation depends not only on addressing structural barriers in Syria, but also on maintaining essential protection and support for Syrian refugees in Jordan. Full article

22 pages, 5817 KB  
Article
Experiencing a Serious Game for the Norman Castle of Aci Castello: A Pilot Project
by Roberto Rizza, Paolino Trapani, Myriam Vaccaro, Dario Allegra, Eleonora Pappalardo, Anna Maria Gueli and Filippo Stanco
Heritage 2026, 9(3), 117; https://doi.org/10.3390/heritage9030117 - 17 Mar 2026
Viewed by 152
Abstract
Cultural heritage, in all its tangible and intangible expressions, is undergoing a process of renewal driven by the integration of digital technologies and participatory approaches. This study presents a pilot project developed within the SAMOTHRACE Foundation, focused on the design of a Serious Game dedicated to the Norman Castle of Aci Castello in Sicily. The project explores how game-based learning and interactive storytelling can enhance visitor engagement, accessibility, and understanding of small-scale heritage sites that are often excluded from major cultural circuits. Using Unity and Blender, the prototype combines historical research, 3D reconstruction, and narrative interaction to transform the castle into an immersive educational environment. This initial phase also served as the basis for an academic thesis, laying the methodological groundwork for future expansion and evaluation. The results of this pilot provide preliminary quantitative evidence that serious games can support cultural communication strategies, foster emotional engagement, and stimulate curiosity toward minor heritage sites, while remaining compatible with the constraints of modest institutions. Ultimately, the project illustrates how even modest institutions can leverage digital innovation to revitalize their heritage assets, promote inclusive participation, and explore new models of interactive archaeology and community-centered cultural engagement. Full article

16 pages, 317 KB  
Article
Veiled Expressions of the Sacred: Ghazal, Genre, and Mystical Experience in Neshāṭī’s Poetry
by Muhammed Tarik Ablak
Religions 2026, 17(3), 371; https://doi.org/10.3390/rel17030371 - 16 Mar 2026
Viewed by 151
Abstract
This article examines how religious experience is articulated through genre in the poetry of the seventeenth-century Ottoman Mawlawī shaykh Neshāṭī (d. 1674), focusing on the striking contrast between his ghazals and non-ghazal compositions. While Neshāṭī’s qaṣīdas, mathnawīs and other formal genres employ an explicit and direct religious language—addressing God, the Prophet, sacred figures, and doctrinal themes—his ghazals are dominated by imagery of wine, love, and the beloved, which at first glance appears markedly profane. Rather than reading this contrast as a sign of secularization or doctrinal inconsistency, the article argues that it reflects a conscious poetic strategy shaped by the expressive style of the ghazal. Through a close reading of Neshāṭī’s Dīwān, the study demonstrates that religious meaning in ghazals is not absent but deliberately rendered implicit. Drawing on motifs such as the mirror, secret (sirr), annihilation (fanāʾ fīʾllāh), and states of spiritual contraction, Neshāṭī transforms the language of human love into a vehicle for divine experience. In this context, the ghazal emerges as a genre particularly suited to conveying religious meaning through ambiguity, emotional intensity, and symbolic indirection rather than overt doctrinal exposition. By situating Neshāṭī within both the Mawlawī tradition and the aesthetics of Sabk-i Hindī, this article highlights how genre manifests religious expression in Ottoman poetry. It proposes that divine encounter in Neshāṭī’s work is realized less through explicit theological discourse than through the affective and symbolic potential of the ghazal. In doing so, the study offers a new reading of Neshāṭī’s poetry and contributes to broader discussions on the relationship between literary/lyrical genre, mysticism, and religious experience in Islamic literary traditions. Full article
(This article belongs to the Special Issue Divine Encounters: Exploring Religious Themes in Literature)
22 pages, 4646 KB  
Article
Evaluating Chronic Sex-Specific Changes in Glutamatergic Signaling Markers Following Traumatic Brain Injury
by Caiti-Erin Talty, Madison S. Wypyski, Susan F. Murphy and Pamela J. VandeVord
Int. J. Mol. Sci. 2026, 27(6), 2670; https://doi.org/10.3390/ijms27062670 - 14 Mar 2026
Viewed by 221
Abstract
Traumatic brain injury (TBI) can lead to persistent adverse outcomes, including cognitive and emotional dysfunction, with recent estimates indicating that up to 50% of individuals with mild TBI experience long-term symptoms. Growing evidence suggests that biological sex influences TBI outcomes and recovery trajectories; however, the molecular underpinnings driving these sex-specific differences remain poorly understood. In this study, a preclinical TBI model was used to directly compare chronic glutamatergic alterations in adult male and female Sprague Dawley rats. To define frontocortical molecular signatures associated with sex-specific glutamatergic dysfunction, proteomic analyses were conducted. Proteomic data revealed dysregulation of key pathways, cellular processes, and molecular regulators involved in excitatory signaling and synaptic function in both sexes. Biomarker profiling identified a single common biomarker between males and females, along with multiple biomarkers unique to each sex. Furthermore, two key brain regions highly susceptible to TBI, the prefrontal cortex and hippocampal subregions, were examined for chronic alterations in key glutamatergic signaling proteins, including N-methyl-D-aspartate (NMDA) receptors and the excitatory synaptic marker postsynaptic density protein 95 (PSD95). Immunofluorescence analyses revealed both sex- and region-specific alterations in the expression of NMDA receptor subunits, as well as in PSD95. Notably, many of these changes were concentrated within the hippocampal subregions, suggesting long-term dysregulation of hippocampal glutamatergic circuitry following injury. Together, these findings indicate the emergence of chronic sex-specific pathophysiology in glutamate signaling after TBI and highlight the importance of incorporating sex as a biological variable in the development of precision medicine-based therapeutic strategies for TBI. Full article

32 pages, 7928 KB  
Article
eXCube2: Explainable Brain-Inspired Spiking Neural Network Framework for Emotion Recognition from Audio, Visual and Multimodal Audio–Visual Data
by N. K. Kasabov, A. Yang, Z. Wang, I. Abouhassan, A. Kassabova and T. Lappas
Biomimetics 2026, 11(3), 208; https://doi.org/10.3390/biomimetics11030208 - 14 Mar 2026
Viewed by 176
Abstract
This paper introduces a biomimetic framework and novel brain-inspired AI (BIAI) models based on spiking neural networks (SNNs) for emotional state recognition from audio (speech), visual (face), and integrated multimodal audio–visual data. The developed framework, named eXCube2, uses a three-dimensional SNN architecture, NeuCube, that is spatially structured according to a human brain template. The BIAI models developed in eXCube2 are trainable on spatio- and spectro-temporal data using brain-inspired learning rules. Such models are explainable in terms of revealing patterns in data and are adaptable to new data. The eXCube2 models are implemented as software systems and tested on speech and video data of subjects expressing emotional states. The use of a brain template for the SNN structure enables brain-inspired tonotopic and stereo mapping of audio inputs, topographic mapping of visual data, and the combined use of both modalities. This novel approach brings AI-based emotional state recognition closer to human perception and provides better explainability and adaptability than existing AI systems. It also results in higher or competitive accuracy, even though this was not the main goal here. This is demonstrated through experiments on benchmark datasets, achieving classification accuracy above 80% on single-modality data and 88.9% when multimodal audio–visual data are used and a “don’t know” output is introduced. The paper further discusses possible applications of the proposed eXCube2 framework to other audio, visual, and audio–visual data for solving challenging problems, such as recognizing emotional states of people from different origins; brain state diagnosis (e.g., Parkinson’s disease, Alzheimer’s disease, ADHD, dementia); measuring response to treatment over time; evaluating satisfaction responses from online clients; cognitive robotics; human–robot interaction; chatbots; and interactive computer games. 
The SNN-based implementation of BIAI also enables the use of neuromorphic chips and platforms, leading to reduced power consumption, smaller device size, higher performance accuracy, and improved adaptability and explainability. This research shows a step toward building brain-inspired AI systems. Full article

30 pages, 3486 KB  
Article
AI Creation of Facial Expression Database for Advanced Emotion Recognition Using Diffusion Model and Pre-Trained CNN Models
by Jia Jun Ho, Wee How Khoh, Ying Han Pang, Hui Yen Yap and Fang Chuen Lim Alvin
Appl. Sci. 2026, 16(6), 2769; https://doi.org/10.3390/app16062769 - 13 Mar 2026
Viewed by 177
Abstract
With applications in psychology, security, and human–computer interaction, facial expression recognition (FER) has become an essential tool for non-verbal communication. Current research often categorizes expressions into micro- and macro-types, yet existing datasets suffer from inconsistent class labelling, limited diversity, and insufficient scale. To address these gaps, this work proposes a novel framework combining a diffusion model with pre-trained CNNs. Leveraging original images from the established CASME II dataset, we generate synthetic facial expressions to augment training data, mitigating bias and inconsistency. The synthetic dataset is evaluated using ResNet-50, VGG-16, and Inception V3 architectures. Three configurations achieved state-of-the-art performance: Inception V3 trained on the proposed AI-generated dataset and tested on CASME II; VGG-16 with data augmentation, trained on CASME II and tested on the proposed AI-generated dataset; and Inception V3 with 30% of its layers frozen, trained on the proposed AI-generated dataset and tested on CASME II. The data augmentation and layer-freezing approaches significantly improved model performance, and the proposed approaches outperformed most of the existing state-of-the-art methods benchmarked in this study. Full article

39 pages, 7178 KB  
Article
Deep-Learning-Derived Facial Electromyogram Signatures of Emotion in Immersive Virtual Reality (bWell): Exploring the Impact of Emotional, Cognitive, and Physical Demands
by Zohreh H. Meybodi, Francis Thibault, Budhachandra Khundrakpam, Gino De Luca, Jing Zhang, Joshua A. Granek and Nusrat Choudhury
Sensors 2026, 26(6), 1827; https://doi.org/10.3390/s26061827 - 13 Mar 2026
Abstract
Emotional and workload-related states unfold dynamically during immersive virtual reality (VR) experiences, yet reliable physiological modeling in such environments remains challenging. We investigated whether multi-channel facial electromyography (fEMG), combined with spatio-temporal deep learning, can (i) accurately classify calibrated facial expressions across participants and (ii) transfer to spontaneous, task-elicited behavior in immersive VR. Twelve adults completed a calibration phase involving four intentional expressions (smile, frown, raised eyebrow, neutral), followed by VR scenes designed to elicit emotional, cognitive, physical, and dual-task demands. After participant-level physiological normalization, a single shared Convolutional Neural Network–Temporal Convolutional Network (CNN–TCN) model was trained and evaluated using leave-one-participant-out (LOPO) validation. The model achieved strong cross-participant performance (Macro-F1 = 0.88 ± 0.13; ROC-AUC = 0.95 ± 0.06). When applied to unlabeled, spontaneous, task-elicited fEMG recordings, the trained model generated continuous expression-class predictions. Derived static and temporal expression features showed scene-dependent modulation and associations surviving False Discovery Rate (FDR) correction, primarily with perceived physical demand (NASA-TLX). The observed muscle activation patterns were physiologically plausible and aligned with Facial Action Coding System (FACS)-based interpretations of underlying muscle activity. These findings demonstrate that end-to-end spatio-temporal modeling of raw fEMG enables facial expression sensing in immersive VR using a single shared model following physiological normalization. The proposed framework bridges calibrated expression learning and spontaneous task-elicited behavior, supporting privacy-preserving, continuous, and physiologically grounded monitoring in human-centered VR applications.
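The leave-one-participant-out (LOPO) protocol described in the abstract above can be illustrated with a small split generator. A minimal, dependency-free sketch; the function name and index-based fold representation are assumptions for illustration, not taken from the paper.

```python
def lopo_splits(participant_ids):
    """Yield (train_idx, test_idx) folds: each fold holds out every
    sample belonging to exactly one participant, so the model is always
    evaluated on a participant it never saw during training."""
    for held_out in sorted(set(participant_ids)):
        train = [i for i, p in enumerate(participant_ids) if p != held_out]
        test = [i for i, p in enumerate(participant_ids) if p == held_out]
        yield train, test

# Example: 4 recordings from 3 participants -> 3 folds, one per participant.
folds = list(lopo_splits(["p1", "p1", "p2", "p3"]))
```

This is the grouped analogue of leave-one-out cross-validation (scikit-learn implements it as `LeaveOneGroupOut`); grouping by participant prevents the optimistic bias that arises when recordings from one person appear in both train and test sets.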
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (3rd Edition))
23 pages, 430 KB  
Essay
Unfiltered Access, Unseen Harms: A Developmental and Public Health Critique of Digital Rights Discourse
by Danielle A. Einstein, Samantha Marsh, Michoel L. Moshel, Talia Sinani and Tracy Burrell
Int. J. Environ. Res. Public Health 2026, 23(3), 364; https://doi.org/10.3390/ijerph23030364 - 12 Mar 2026
Abstract
This article maps and prioritises the foundational developmental needs of children and adolescents: social development, cognitive growth, emotional regulation, identity formation, and moral reasoning. Early and excessive digital engagement is then examined for its potential to disrupt these milestones, with consequences that reverberate through wellbeing, relationships, and lifelong resilience. Arguments that frame digital engagement as an individual right with potential benefits downplay these developmental risks. Drawing on developmental rights and agency frameworks, the current review disputes the prevailing assumption that digital participation should take precedence over healthy developmental trajectories. Instead, the debate is reframed around children’s evolving capacities, and it is proposed that digital entitlements be nested within age-appropriate limits and supports. Protecting the best interests of the child requires recognising the risk of addictive technology use, and the rights of the child must also ensure the cultivation of emotional competence and self-reliance. Overemphasis on digital expression risks elevating performative self-presentation before moral reasoning, critical thinking, and offline social skills have matured, particularly within environments shaped by algorithmic amplification, transient relationships, peer harassment, and the desire for validation. To address these risks, we advocate a multi-layered public health response: consistent, developmentally attuned messaging; empowered parents and educators; whole-school strategies; and policy reforms that prioritise safety, accountability, and developmental alignment. By situating digital engagement within a developmental framework, this article proposes key principles on which to base the discussion of safeguarding youth wellbeing in the digital era.