Search Results (561)

Search Parameters:
Keywords = emotional arousal

23 pages, 85184 KiB  
Article
MB-MSTFNet: A Multi-Band Spatio-Temporal Attention Network for EEG Sensor-Based Emotion Recognition
by Cheng Fang, Sitong Liu and Bing Gao
Sensors 2025, 25(15), 4819; https://doi.org/10.3390/s25154819 - 5 Aug 2025
Abstract
Emotion analysis based on electroencephalogram (EEG) sensors is pivotal for human–machine interaction yet faces key challenges in spatio-temporal feature fusion and cross-band and brain-region integration from multi-channel sensor-derived signals. This paper proposes MB-MSTFNet, a novel framework for EEG emotion recognition. The model constructs a 3D tensor to encode band–space–time correlations of sensor data, explicitly modeling frequency-domain dynamics and spatial distributions of EEG sensors across brain regions. A multi-scale CNN-Inception module extracts hierarchical spatial features via diverse convolutional kernels and pooling operations, capturing localized sensor activations and global brain network interactions. Bi-directional GRUs (BiGRUs) model temporal dependencies in sensor time-series, adept at capturing long-range dynamic patterns. Multi-head self-attention highlights critical time windows and brain regions by assigning adaptive weights to relevant sensor channels, suppressing noise from non-contributory electrodes. Experiments on the DEAP dataset, containing multi-channel EEG sensor recordings, show that MB-MSTFNet achieves 96.80 ± 0.92% valence accuracy, 98.02 ± 0.76% arousal accuracy for binary classification tasks, and 92.85 ± 1.45% accuracy for four-class classification. Ablation studies validate that feature fusion, bidirectional temporal modeling, and multi-scale mechanisms significantly enhance performance by improving feature complementarity. This sensor-driven framework advances affective computing by integrating spatio-temporal dynamics and multi-band interactions of EEG sensor signals, enabling efficient real-time emotion recognition. Full article
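The band–space–time encoding can be illustrated with a minimal sketch (not the authors' implementation): band powers computed per frequency band, per channel, per time window form a band × channel × window tensor. The band edges and the naive DFT are simplifying assumptions for illustration.

```python
# Sketch of a band x channel x window tensor from multi-channel EEG,
# as a minimal stand-in for MB-MSTFNet's 3D input encoding.
# Band edges (Hz) are illustrative assumptions.
import cmath

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_power(window, fs, lo, hi):
    """Naive DFT power summed over bins whose frequency falls in [lo, hi)."""
    n = len(window)
    total = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            coef = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                       for i, x in enumerate(window))
            total += abs(coef) ** 2 / n
    return total

def band_space_time_tensor(eeg, fs, win):
    """eeg: list of channels, each a list of samples -> tensor[band][channel][window]."""
    n_win = len(eeg[0]) // win
    return [[[band_power(ch[w * win:(w + 1) * win], fs, lo, hi)
              for w in range(n_win)]
             for ch in eeg]
            for (lo, hi) in BANDS.values()]
```

A 10 Hz signal, for instance, should load almost all of its power into the alpha slice of the tensor, which is what lets downstream convolutional and attention layers reason jointly over bands, electrodes, and time.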
(This article belongs to the Section Intelligent Sensors)

42 pages, 3822 KiB  
Article
The Criticality of Consciousness: Excitatory–Inhibitory Balance and Dual Memory Systems in Active Inference
by Don M. Tucker, Phan Luu and Karl J. Friston
Entropy 2025, 27(8), 829; https://doi.org/10.3390/e27080829 - 4 Aug 2025
Viewed by 248
Abstract
The organization of consciousness is described through increasingly rich theoretical models. We review evidence that working memory capacity—essential to generating consciousness in the cerebral cortex—is supported by dual limbic memory systems. These dorsal (Papez) and ventral (Yakovlev) limbic networks provide the basis for mnemonic processing and prediction in the dorsal and ventral divisions of the human neocortex. Empirical evidence suggests that the dorsal limbic division is (i) regulated preferentially by excitatory feedforward control, (ii) consolidated by REM sleep, and (iii) controlled in waking by phasic arousal through lemnothalamic projections from the pontine brainstem reticular activating system. The ventral limbic division and striatum (i) organize the inhibitory neurophysiology of NREM to (ii) consolidate explicit memory in sleep, (iii) operating in waking cognition under the same inhibitory feedback control supported by collothalamic tonic activation from the midbrain. We propose that (i) these dual (excitatory and inhibitory) systems alternate in the stages of sleep, and (ii) in waking they must be balanced—at criticality—to optimize the active inference that generates conscious experiences. Optimal Bayesian belief updating rests on balanced feedforward (excitatory predictive) and feedback (inhibitory corrective) control biases that play the role of prior and likelihood (i.e., sensory) precision. Because the excitatory (E) phasic arousal and inhibitory (I) tonic activation systems that regulate these dual limbic divisions have distinct affective properties, varying levels of elation for phasic arousal (E) and anxiety for tonic activation (I), the dual control systems regulate sleep and consciousness in ways that are adaptively balanced—around the entropic nadir of EI criticality—for optimal self-regulation of consciousness and psychological health. Because they are emotive as well as motive control systems, these dual systems have unique qualities of feeling that may be registered as subjective experience. Full article
(This article belongs to the Special Issue Active Inference in Cognitive Neuroscience)

20 pages, 1253 KiB  
Article
Multimodal Detection of Emotional and Cognitive States in E-Learning Through Deep Fusion of Visual and Textual Data with NLP
by Qamar El Maazouzi and Asmaa Retbi
Computers 2025, 14(8), 314; https://doi.org/10.3390/computers14080314 - 2 Aug 2025
Viewed by 283
Abstract
In distance learning environments, learner engagement directly impacts attention, motivation, and academic performance. Signs of fatigue, negative affect, or critical remarks can warn of growing disengagement and potential dropout. However, most existing approaches rely on a single modality, visual or text-based, without providing a general view of learners’ cognitive and affective states. We propose a multimodal system that integrates three complementary analyses: (1) a CNN-LSTM model augmented with warning signs such as PERCLOS and yawning frequency for fatigue detection, (2) facial emotion recognition by EmoNet and an LSTM to handle temporal dynamics, and (3) sentiment analysis of feedback by a fine-tuned BERT model. The system was evaluated on three public benchmarks: DAiSEE for fatigue, AffectNet for emotion, and MOOC Review (Coursera) for sentiment analysis. The results show a precision of 88.5% for fatigue detection, 70% for emotion detection, and 91.5% for sentiment analysis. Aggregating these cues enables accurate identification of disengagement periods and triggers individualized pedagogical interventions. These results, although based on independently sourced datasets, demonstrate the feasibility of an integrated approach to detecting disengagement and open the door to emotionally intelligent learning systems, with potential for future work in real-time content personalization and adaptive learning assistance. Full article
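PERCLOS, one of the fatigue warning signs named above, is conventionally the proportion of time the eyes are at least 80% closed (the P80 convention). A minimal sketch, assuming per-frame eye-openness scores in [0, 1]; the threshold and scoring scheme are assumptions, not taken from the paper:

```python
# Sketch: PERCLOS (percentage of eyelid closure) from per-frame eye-openness
# scores in [0, 1]. A score <= 0.2 is treated as "closed" (eyes >= 80% shut),
# following the usual P80 convention (an assumption here).
def perclos(openness, closed_thresh=0.2):
    closed = sum(1 for o in openness if o <= closed_thresh)
    return closed / len(openness)
```

A rising PERCLOS over a sliding window is the kind of cue a CNN-LSTM fatigue detector can consume alongside yawning frequency.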
(This article belongs to the Special Issue Present and Future of E-Learning Technologies (2nd Edition))

26 pages, 4899 KiB  
Article
Material Perception in Virtual Environments: Impacts on Thermal Perception, Emotions, and Functionality in Industrial Renovation
by Long He, Minjia Wu, Yue Ma, Di Cui, Yongjiang Wu and Yang Wei
Buildings 2025, 15(15), 2698; https://doi.org/10.3390/buildings15152698 - 31 Jul 2025
Viewed by 236
Abstract
Industrial building renovation is a sustainable strategy to preserve urban heritage while meeting modern needs. However, how interior material scenes affect users’ emotions, thermal perception, and functional preferences remains underexplored in adaptive reuse contexts. This study used virtual reality (VR) to examine four common material scenes—wood, concrete, red brick, and white-painted surfaces—within industrial renovation settings. A total of 159 participants experienced four Lumion-rendered VR environments and rated them on thermal perception (visual warmth, thermal sensation, comfort), emotional response (arousal, pleasure, restoration), and functional preference. Data were analyzed using repeated measures ANOVA and Pearson correlation. Wood and red brick scenes were associated with warm visuals; wood scenes received the highest ratings for thermal comfort and pleasure, white-painted scenes for restoration and arousal, and concrete scenes, the lowest scores overall. Functional preferences varied by space: white-painted and concrete scenes were most preferred in study/work settings, wood in social spaces, wood and red brick in rest areas, and concrete in exhibition spaces. By isolating material variables in VR, this study offers a novel empirical approach and practical guidance for material selection in adaptive reuse to enhance user comfort, emotional well-being, and spatial functionality in industrial heritage renovations. Full article
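The Pearson correlation used in the analysis above can be sketched in pure Python as a stand-in for the usual library call; the pairing of variables (e.g., visual warmth against thermal comfort ratings) is illustrative, not the study's exact analysis.

```python
# Sketch: Pearson correlation coefficient between two rating lists,
# e.g. visual warmth vs. thermal comfort across participants (illustrative pairing).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```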
(This article belongs to the Section Building Materials, and Repair & Renovation)

28 pages, 6503 KiB  
Article
Aging-in-Place Attachment Among Older Adults in Macau’s High-Density Community Spaces: A Multi-Dimensional Empirical Study
by Hongzhan Lai, Stephen Siu Yu Lau, Yuan Su and Chen-Yi Sun
World 2025, 6(3), 101; https://doi.org/10.3390/world6030101 - 17 Jul 2025
Viewed by 754
Abstract
This study explores key factors influencing Aging-in-Place Attachment (AiPA) among older adults in Macau’s high-density community spaces, emphasizing interactions between the built environment, behavior, and psychology. A multidimensional framework evaluates environmental, behavioral, human-factor, and psychological contributions. A mixed-methods, multisource approach was employed. This study measured spatial characteristics of nine public spaces, conducted systematic behavioral observations, and collected questionnaire data on place attachment and aging intentions. Eye-tracking and galvanic skin response (GSR) captured visual attention and emotional arousal. Hierarchical regression analysis tested the explanatory power of each variable group, supplemented by semi-structured interviews for qualitative depth. The results showed that the physical environment had a limited direct impact but served as a critical foundation. Behavioral variables increased explanatory power (~15%), emphasizing community engagement. Human-factor data added ~4%, indicating that sensory and habitual interactions strengthen bonds. Psychological factors contributed most (~59%), confirming AiPA as a multidimensional construct shaped primarily by emotional and social connections, supported by physical and behavioral contexts. In Macau’s dense urban context, older adults’ desire to age in place is mainly driven by emotional connection and social participation, with spatial design serving as an enabler. Effective age-friendly strategies must extend beyond infrastructure upgrades to cultivate belonging and interaction. This study advances environmental gerontology and architecture theory by explaining the mechanisms of attachment in later life. Future work should explore how physical spaces foster psychological well-being and examine emerging factors such as digital and intergenerational engagement. Full article
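The hierarchical regression logic above, where each variable block entered in sequence is credited with its increment in explained variance (the ~15% behavioral, ~4% human-factor, and ~59% psychological contributions), can be sketched as follows. The pure-Python OLS solver and example blocks are illustrative, not the study's actual analysis.

```python
# Sketch: incremental R^2 in a hierarchical regression. Each block of
# predictor columns is entered in order and credited with the change in R^2.
# Minimal OLS via normal equations; illustrative only.

def _solve(A, b):
    # Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def r_squared(X, y):
    # X: list of predictor columns; an intercept column is added here.
    cols = [[1.0] * len(y)] + X
    A = [[sum(a * b for a, b in zip(ci, cj)) for cj in cols] for ci in cols]
    bvec = [sum(a * b for a, b in zip(ci, y)) for ci in cols]
    beta = _solve(A, bvec)
    yhat = [sum(beta[j] * cols[j][i] for j in range(len(cols))) for i in range(len(y))]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def incremental_r2(blocks, y):
    """blocks: predictor-column groups entered in order -> delta R^2 per block."""
    deltas, used, prev = [], [], 0.0
    for block in blocks:
        used = used + block
        r2 = r_squared(used, y)
        deltas.append(r2 - prev)
        prev = r2
    return deltas
```

The design choice matters: a block entered later is only credited with variance not already explained by earlier blocks, which is why entry order is part of the model specification.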

21 pages, 523 KiB  
Review
Wired for Intensity: The Neuropsychological Dynamics of Borderline Personality Disorders—An Integrative Review
by Eleni Giannoulis, Christos Nousis, Maria Krokou, Ifigeneia Zikou and Ioannis Malogiannis
J. Clin. Med. 2025, 14(14), 4973; https://doi.org/10.3390/jcm14144973 - 14 Jul 2025
Viewed by 637
Abstract
Background: Borderline personality disorder (BPD) is a severe psychiatric condition characterised by emotional instability, impulsivity, interpersonal dysfunction, and self-injurious behaviours. Despite growing clinical interest, the neuropsychological mechanisms underlying these symptoms are still not fully understood. This review aims to summarise findings from neuroimaging, psychophysiological, and neurodevelopmental studies in order to clarify the neurobiological and physiological basis of BPD, with a particular focus on emotional dysregulation and implications for the treatment of adolescents. Methods: A narrative review was conducted, integrating results from longitudinal neurodevelopmental studies, functional and structural neuroimaging research (e.g., fMRI and PET), and psychophysiological assessments (e.g., heart rate variability and cortisol reactivity). Studies were selected based on their contribution to understanding the neural correlates of BPD symptom dimensions, particularly emotion dysregulation, impulsivity, interpersonal dysfunction, and self-harm. Results: Findings suggest that reductions in amygdala volume, observable as early as age 13, predict later BPD symptoms. Hyperactivity of the amygdala, combined with hypoactivity in the prefrontal cortex, underlies deficits in emotion regulation. Orbitofrontal abnormalities correlate with impulsivity, while disruptions in the default mode network and oxytocin signaling are related to interpersonal dysfunction. Self-injurious behaviour appears to serve a neuropsychological function in regulating emotional pain and trauma-related arousal; this is linked to disruption of the hypothalamic-pituitary-adrenal (HPA) axis and structural brain alterations. The Unified Protocol for Adolescents (UP-A) was more effective than Mentalization-Based Therapy for Adolescents (MBT-A) at reducing emotional dysregulation, though challenges in treating identity disturbance and relational difficulties remain.
Discussion: The reviewed evidence suggests that BPD has its roots in early neurodevelopmental vulnerability and is sustained by maladaptive neurophysiological processes. Emotional dysregulation emerges as a central transdiagnostic mechanism. Self-harm may serve as a strategy for regulating emotions in response to trauma-related neural dysregulation. These findings advocate for the integration of neuroscience into psychotherapeutic practice, including the application of neuromodulation techniques and psychophysiological monitoring. Conclusions: A comprehensive understanding of BPD requires a neuropsychologically informed framework. Personalised treatment approaches combining pharmacotherapy, brain-based interventions, and developmentally adapted psychotherapies—particularly DBT, psychodynamic therapy, and trauma-informed care—are essential. Future research should prioritise interdisciplinary, longitudinal studies to further bridge the gap between neurobiological findings and clinical innovation. Full article
(This article belongs to the Special Issue Neuro-Psychiatric Disorders: Updates on Diagnosis and Treatment)

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Viewed by 476
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly becoming valuable in embedded emotion-aware systems, particularly for applications in healthcare, wearable electronics, and human–machine interactions. Among various EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance compared to traditional approaches. This advantage stems from their ability to extract complex features—such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns—from raw EEG data, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models—specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—have been benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. This comparative study investigates the effectiveness of deep learning techniques in EEG-based emotion recognition by classifying emotions into four categories based on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted using the DEAP dataset. The results indicate that both the CNN and RNN-LSTM models achieve high classification performance in EEG-based emotion recognition, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming the shallow algorithms (MLP, SVM, kNN). Full article
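The four valence–arousal classes named above can be derived from DEAP-style self-ratings (a 1–9 scale) with a simple midpoint split; the midpoint of 5 is the common convention and an assumption here, not a detail from the paper.

```python
# Sketch: mapping valence/arousal self-ratings to the four quadrant classes
# (HAPV, LAPV, HANV, LANV). The 1-9 scale and midpoint of 5 follow the usual
# DEAP convention (assumed here).
def quadrant(valence, arousal, mid=5.0):
    if arousal > mid:
        return "HAPV" if valence > mid else "HANV"
    return "LAPV" if valence > mid else "LANV"
```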
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)

29 pages, 4973 KiB  
Article
Speech and Elocution Training (SET): A Self-Efficacy Catalyst for Language Potential Activation and Career-Oriented Development for Higher Vocational Students
by Xiaojian Zheng, Mohd Hazwan Mohd Puad and Habibah Ab Jalil
Educ. Sci. 2025, 15(7), 850; https://doi.org/10.3390/educsci15070850 - 2 Jul 2025
Viewed by 448
Abstract
This study explores how Speech and Elocution Training (SET) activates language potential and fosters career-oriented development among higher vocational students through self-efficacy mechanisms. Through qualitative interviews with four vocational graduates who participated in SET 5 to 10 years ago, the research identifies three key findings. First, SET comprises curriculum content (e.g., workplace communication modules such as hosting, storytelling, and sales pitching) and classroom training using multimodal TED resources and Toastmasters International-simulated practices, which spark language potential through skill-focused, realistic exercises. Second, these pedagogies facilitate a progression where initial language potential evolves from nascent career interests into concrete job-seeking intentions and long-term career plans: completing workplace-related speech tasks boosts confidence in career choices, planning, and job competencies, enabling adaptability to professional challenges. Third, SET aligns with Bandura’s four self-efficacy determinants; these are successful experiences (including personalized and virtual skill acquisition and certified affirmation), vicarious experiences (via observation platforms and constructive peer modeling), verbal persuasion (direct instructional feedback and indirect emotional support), and the arousal of optimistic emotions (the cognitive reframing of challenges and direct desensitization to anxieties). These mechanisms collectively create a positive cycle that enhances self-efficacy, amplifies language potential, and clarifies career intentions. While highlighting SET’s efficacy, this study notes a small sample size limitation, urging future mixed-methods studies with diverse samples to validate these mechanisms across broader vocational contexts and refine understanding of language training’s role in fostering linguistic competence and career readiness. Full article

27 pages, 2935 KiB  
Article
A Pilot Study on Emotional Equivalence Between VR and Real Spaces Using EEG and Heart Rate Variability
by Takato Kobayashi, Narumon Jadram, Shukuka Ninomiya, Kazuhiro Suzuki and Midori Sugaya
Sensors 2025, 25(13), 4097; https://doi.org/10.3390/s25134097 - 30 Jun 2025
Viewed by 594
Abstract
In recent years, the application of virtual reality (VR) for spatial evaluation has gained traction in the fields of architecture and interior design. However, for VR to serve as a viable substitute for real-world environments, it is essential that experiences within VR elicit emotional responses comparable to those evoked by actual spaces. Despite this prerequisite, there remains a paucity of studies that objectively compare and evaluate the emotional responses elicited by VR and real-world environments. Consequently, it is not yet fully understood whether VR can reliably replicate the emotional experiences induced by physical spaces. This study aims to investigate the influence of presentation modality on emotional responses by comparing a VR space and a real-world space with identical designs. The comparison was conducted using both subjective evaluations (Semantic Differential method) and physiological indices (electroencephalography and heart rate variability). The results indicated that the real-world environment was associated with impressions of comfort and preference, whereas the VR environment evoked impressions characterized by heightened arousal. Additionally, elevated beta wave activity and increased beta/alpha ratios were observed in the VR condition, suggesting a state of high arousal, as further supported by positioning on the Emotion Map. Moreover, analysis of pNN50 revealed a transient increase in parasympathetic nervous activity during the VR experience. This study is positioned as a pilot investigation to explore physiological and emotional differences between VR and real spaces. Full article
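pNN50, the parasympathetic index whose transient increase is reported for the VR condition, is the share of successive beat-to-beat (RR) interval differences exceeding 50 ms. A minimal sketch; the RR values in the usage example are illustrative:

```python
# Sketch: pNN50 from successive RR intervals in milliseconds - the fraction
# of adjacent beat-to-beat differences greater than 50 ms, a standard
# heart rate variability index of parasympathetic activity.
def pnn50(rr_ms):
    diffs = [abs(b - a) for a, b in zip(rr_ms, rr_ms[1:])]
    return sum(1 for d in diffs if d > 50) / len(diffs)
```

Higher pNN50 during an epoch is commonly read as greater parasympathetic (vagal) influence on the heart in that window.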

16 pages, 2095 KiB  
Article
Multimodal Knowledge Distillation for Emotion Recognition
by Zhenxuan Zhang and Guanyu Lu
Brain Sci. 2025, 15(7), 707; https://doi.org/10.3390/brainsci15070707 - 30 Jun 2025
Viewed by 555
Abstract
Multimodal emotion recognition has emerged as a prominent field in affective computing, offering superior performance compared to single-modality methods. Among various physiological signals, EEG signals and EOG data are highly valued for their complementary strengths in emotion recognition. However, the practical application of EEG-based approaches is often hindered by high costs and operational complexity, making EOG a more feasible alternative in real-world scenarios. To address this limitation, this study introduces a novel framework for multimodal knowledge distillation, designed to improve the practicality of emotion decoding while maintaining high accuracy. The framework includes a multimodal fusion module to extract and integrate interactive and heterogeneous features, and a unimodal student model structurally aligned with the multimodal teacher model for better knowledge alignment. The framework combines EEG and EOG signals into a unified model and distills the fused multimodal features into a simplified EOG-only model. To facilitate efficient knowledge transfer, the approach incorporates a dynamic feedback mechanism that adjusts the guidance provided by the multimodal model to the unimodal model during the distillation process based on performance metrics. The proposed method was comprehensively evaluated on two datasets based on EEG and EOG signals. On the DEAP dataset, the proposed model’s valence and arousal accuracies are 70.38% and 60.41%, respectively; on the BJTU-Emotion dataset, they are 61.31% and 60.31%.
The proposed method achieves state-of-the-art classification performance compared to the baseline methods, with statistically significant improvements confirmed by paired t-tests (p < 0.05). The framework effectively transfers knowledge from multimodal models to unimodal EOG models, enhancing the practicality of emotion recognition while maintaining high accuracy and expanding its applicability in real-world scenarios. Full article
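The distillation step, where a fused teacher guides an EOG-only student, can be sketched with a standard temperature-softened loss. The rule that scales the distillation weight by teacher performance is only a stand-in for the paper's dynamic feedback mechanism; all names and constants here are illustrative, not the authors' code.

```python
# Sketch: logit distillation - a temperature-softened KL term blended with the
# hard-label cross-entropy. The teacher-accuracy scaling mimics (loosely) a
# dynamic feedback mechanism; the 0.5 cap and weighting rule are assumptions.
import math

def softmax(z, t=1.0):
    m = max(z)
    e = [math.exp((v - m) / t) for v in z]
    s = sum(e)
    return [v / s for v in e]

def distill_loss(student_logits, teacher_logits, label, t=2.0, teacher_acc=1.0):
    p_t = softmax(teacher_logits, t)          # softened teacher distribution
    p_s = softmax(student_logits, t)          # softened student distribution
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    ce = -math.log(softmax(student_logits)[label])   # hard-label loss
    alpha = 0.5 * teacher_acc   # feedback: trust the teacher more when it performs well
    return alpha * (t * t) * kl + (1 - alpha) * ce
```

The `t * t` factor is the usual correction that keeps gradient magnitudes comparable across temperatures.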
(This article belongs to the Section Neurotechnology and Neuroimaging)

15 pages, 770 KiB  
Data Descriptor
NPFC-Test: A Multimodal Dataset from an Interactive Digital Assessment Using Wearables and Self-Reports
by Luis Fernando Morán-Mirabal, Luis Eduardo Güemes-Frese, Mariana Favarony-Avila, Sergio Noé Torres-Rodríguez and Jessica Alejandra Ruiz-Ramirez
Data 2025, 10(7), 103; https://doi.org/10.3390/data10070103 - 30 Jun 2025
Viewed by 447
Abstract
The growing implementation of digital platforms and mobile devices in educational environments has generated the need to explore new approaches for evaluating the learning experience beyond traditional self-reports or instructor presence. In this context, the NPFC-Test dataset was created from an experimental protocol conducted at the Experiential Classroom of the Institute for the Future of Education. The dataset was built by collecting multimodal indicators such as neuronal, physiological, and facial data using a portable EEG headband, a medical-grade biometric bracelet, a high-resolution depth camera, and self-report questionnaires. The participants were exposed to a digital test lasting 20 min, composed of audiovisual stimuli and cognitive challenges, during which synchronized data from all devices were gathered. The dataset includes timestamped records related to emotional valence, arousal, and concentration, offering a valuable resource for multimodal learning analytics (MMLA). The recorded data were processed through calibration procedures, temporal alignment techniques, and emotion recognition models. It is expected that the NPFC-Test dataset will support future studies in human–computer interaction and educational data science by providing structured evidence to analyze cognitive and emotional states in learning processes. In addition, it offers a replicable framework for capturing synchronized biometric and behavioral data in controlled academic settings. Full article

20 pages, 3062 KiB  
Article
Cognitive Networks and Text Analysis Identify Anxiety as a Key Dimension of Distress in Genuine Suicide Notes
by Massimo Stella, Trevor James Swanson, Andreia Sofia Teixeira, Brianne N. Richson, Ying Li, Thomas T. Hills, Kelsie T. Forbush and David Watson
Big Data Cogn. Comput. 2025, 9(7), 171; https://doi.org/10.3390/bdcc9070171 - 27 Jun 2025
Viewed by 615
Abstract
Understanding the mindset of people who die by suicide remains a key research challenge. We map conceptual and emotional word–word co-occurrences in 139 genuine suicide notes and in reference word lists (an Emotional Recall Task) from 200 individuals grouped by high/low depression, anxiety, and stress levels on the DASS-21. Positive words cover most of the suicide notes’ vocabulary; however, co-occurrences in suicide notes overlap mostly with those produced by individuals with low anxiety (Jaccard index of 0.42 for valence and 0.38 for arousal). We introduce a “words not said” method: it removes every word that corpus A shares with a comparison corpus B and then checks the emotions of the “residual” words in A ∖ B. If no residual emotions remain, A and B express the same emotions. Simulations indicate this method can classify high/low levels of depression, anxiety, and stress with 80% accuracy in a balanced task. After subtracting suicide note words, only the high-anxiety corpus displays no significant residual emotions. Our findings thus pin anxiety as a key latent feature of suicidal psychology and offer an interpretable language-based marker for suicide risk detection. Full article
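The "words not said" subtraction and the Jaccard overlap described above can be sketched directly with sets; the toy emotion lexicon is a stand-in for the valence/arousal norms used in the study.

```python
# Sketch: Jaccard overlap between two word collections, and the "words not
# said" residue - drop from corpus A every word shared with corpus B, then
# look up the emotions of what remains. Lexicon entries are toy examples.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def residual_emotions(corpus_a, corpus_b, lexicon):
    residue = set(corpus_a) - set(corpus_b)
    return {lexicon[w] for w in residue if w in lexicon}
```

An empty result from `residual_emotions` corresponds to the paper's "no significant residual emotions" case: everything emotional in A was already said in B.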

34 pages, 3186 KiB  
Article
A Continuous Music Recommendation Method Considering Emotional Change
by Se In Baek and Yong Kyu Lee
Appl. Sci. 2025, 15(13), 7222; https://doi.org/10.3390/app15137222 - 26 Jun 2025
Viewed by 233
Abstract
Music, movies, books, pictures, and other media can change a user’s emotions, which are important factors in recommending appropriate items. As users’ emotions change over time, the content they select may vary accordingly. Existing emotion-based content recommendation methods primarily recommend content based on the user’s current emotional state. In this study, we propose a continuous music recommendation method that adapts to a user’s changing emotions. Based on Thayer’s emotion model, emotions were classified into four areas, and music and user emotion vectors were created by analyzing the relationships between valence, arousal, and each emotion using a multiple regression model. Based on the user’s emotional history data, a personalized mental model (PMM) was created using a Markov chain. The PMM was used to predict future emotions and generate user emotion vectors for each period. A recommendation list was created by calculating the similarity between music emotion vectors and user emotion vectors. To demonstrate the effectiveness of the proposed method, the accuracies of the music emotion analysis, user emotion prediction, and music recommendation results were evaluated. In the experiments, both the PMM and a modified mental model (MMM) were used to predict user emotions and generate recommendation lists. The accuracy of the content emotion analysis was 87.26%, and the accuracy of user emotion prediction was 86.72%, an improvement of 13.68% compared with the MMM. Additionally, the balanced accuracy of the content recommendation was 79.31%, an improvement of 26.88% compared with the MMM. The proposed method can recommend content that is suitable for users. Full article
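The personalized mental model described above is a Markov chain over Thayer’s four emotion quadrants. A minimal sketch, assuming maximum-likelihood transition estimates from a user’s emotion history (the quadrant labels and history are illustrative, not the paper’s data):

```python
# Thayer-style quadrant labels (assumed names, for illustration)
QUADRANTS = ["exuberance", "anxious", "depression", "contentment"]

def fit_transition_matrix(history):
    """Maximum-likelihood Markov transition matrix estimated from a
    sequence of observed emotion quadrants."""
    n = len(QUADRANTS)
    idx = {q: i for i, q in enumerate(QUADRANTS)}
    counts = [[0] * n for _ in range(n)]
    for prev, nxt in zip(history, history[1:]):
        counts[idx[prev]][idx[nxt]] += 1
    matrix = []
    for row in counts:
        total = sum(row)
        # Unseen states fall back to a uniform distribution
        matrix.append([c / total if total else 1.0 / n for c in row])
    return matrix

def predict_next(matrix, current):
    """Most probable next quadrant given the current one."""
    row = matrix[QUADRANTS.index(current)]
    return QUADRANTS[row.index(max(row))]

history = ["anxious", "anxious", "contentment", "anxious", "contentment"]
pmm = fit_transition_matrix(history)
print(predict_next(pmm, "anxious"))  # -> "contentment"
```

The predicted quadrant would then select the period’s user emotion vector, to be matched against music emotion vectors by similarity; that scoring step is not reproduced here.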

28 pages, 1609 KiB  
Article
Emotion Recognition from rPPG via Physiologically Inspired Temporal Encoding and Attention-Based Curriculum Learning
by Changmin Lee, Hyunwoo Lee and Mincheol Whang
Sensors 2025, 25(13), 3995; https://doi.org/10.3390/s25133995 - 26 Jun 2025
Abstract
Remote photoplethysmography (rPPG) enables non-contact physiological measurement for emotion recognition, yet the temporally sparse nature of emotional cardiovascular responses, intrinsic measurement noise, weak session-level labels, and subtle correlates of valence pose critical challenges. To address these issues, we propose a physiologically inspired deep learning framework comprising a Multi-scale Temporal Dynamics Encoder (MTDE) to capture autonomic nervous system dynamics across multiple timescales, an adaptive sparse α-Entmax attention mechanism to identify salient emotional segments amidst noisy signals, Gated Temporal Pooling for the robust aggregation of emotional features, and a structured three-phase curriculum learning strategy to systematically handle temporal sparsity, weak labels, and noise. Evaluated on the MAHNOB-HCI dataset (27 subjects and 527 sessions with a subject-mixed split), our temporal-only model achieved competitive performance in arousal recognition (66.04% accuracy; 61.97% weighted F1-score), surpassing prior CNN-LSTM baselines. However, lower performance in valence (62.26% accuracy) revealed inherent physiological limitations regarding a unimodal temporal cardiovascular analysis. These findings establish clear benchmarks for temporal-only rPPG emotion recognition and underscore the necessity of incorporating spatial or multimodal information to effectively capture nuanced emotional dimensions such as valence, guiding future research directions in affective computing. Full article
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
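The α-Entmax attention in the framework above assigns exactly zero weight to non-salient time segments, unlike softmax. Entmax with α = 2 reduces to sparsemax, whose simplex projection can be sketched as follows (the scores are illustrative; the paper’s MTDE, gated pooling, and curriculum stages are not reproduced here):

```python
def sparsemax(scores):
    """Sparsemax (entmax with α = 2): projects raw attention scores onto
    the probability simplex, zeroing out weakly scored segments."""
    z = sorted(scores, reverse=True)
    cumsum, k, k_sum = 0.0, 0, 0.0
    for j, zj in enumerate(z, start=1):
        cumsum += zj
        if 1 + j * zj > cumsum:      # support condition: zj stays above threshold
            k, k_sum = j, cumsum     # remember size and sum of the support set
    tau = (k_sum - 1) / k            # threshold so the output sums to 1
    return [max(s - tau, 0.0) for s in scores]

# One strongly salient segment dominates; the others get exactly zero weight
weights = sparsemax([3.0, 1.0, 0.2])
print(weights)  # -> [1.0, 0.0, 0.0]
```

With comparable scores, sparsemax degrades gracefully toward a uniform distribution, so only clearly non-contributory segments are fully suppressed.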

21 pages, 2393 KiB  
Article
Digital Tools in Action: 3D Printing for Personalized Skincare in the Era of Beauty Tech
by Sara Bom, Pedro Contreiras Pinto, Helena Margarida Ribeiro and Joana Marto
Cosmetics 2025, 12(4), 136; https://doi.org/10.3390/cosmetics12040136 - 25 Jun 2025
Abstract
3D printing (3DP) enables the development of highly customizable skincare solutions, offering precise control over formulation, structure, and aesthetic properties. Therefore, this study explores the impact of patches’ microstructure on hydration efficacy using conventional and advanced chemical/morphological confocal techniques. Moreover, it advances to the personalization of under-eye 3D-printed skincare patches and assesses consumer acceptability through emotional sensing, providing a comparative analysis against a non-3D-printed market option. The results indicate that increasing the patches’ internal porosity enhances water retention in the stratum corneum (53.0 vs. 45.4% µm). Additionally, patches were personalized to address individual skin needs/conditions (design and bioactive composition) and consumer preferences (color and fragrance). The affective analysis indicated a high level of consumer acceptance for the 3D-printed option, as evidenced by the higher valence (14.5 vs. 1.1 action units) and arousal (4.2 vs. 2.7 peaks/minute) scores. These findings highlight the potential of 3DP for personalized skincare, demonstrating how structural modifications can modulate hydration. Furthermore, the biometric-preference digital approach employed offers unparalleled versatility, enabling rapid customization to meet the unique requirements of different skin types. By embracing this advancement, a new era of personalized skincare emerges, where cutting-edge science powers solutions for enhanced skin health and consumer satisfaction. Full article
(This article belongs to the Special Issue Feature Papers in Cosmetics in 2025)
