Search Results (305)

Search Parameters:
Keywords = valence-arousal

16 pages, 1351 KiB  
Article
A Comparative Study on Machine Learning Methods for EEG-Based Human Emotion Recognition
by Shokoufeh Davarzani, Simin Masihi, Masoud Panahi, Abdulrahman Olalekan Yusuf and Massood Atashbar
Electronics 2025, 14(14), 2744; https://doi.org/10.3390/electronics14142744 - 8 Jul 2025
Abstract
Electroencephalogram (EEG) signals provide a direct and non-invasive means of interpreting brain activity and are increasingly becoming valuable in embedded emotion-aware systems, particularly for applications in healthcare, wearable electronics, and human–machine interactions. Among various EEG-based emotion recognition techniques, deep learning methods have demonstrated superior performance compared to traditional approaches. This advantage stems from their ability to extract complex features—such as spectral–spatial connectivity, temporal dynamics, and non-linear patterns—from raw EEG data, leading to a more accurate and robust representation of emotional states and better adaptation to diverse data characteristics. This study explores and compares deep and shallow neural networks for human emotion recognition from raw EEG data, with the goal of enabling real-time processing in embedded and edge-deployable systems. Deep learning models—specifically convolutional neural networks (CNNs) and recurrent neural networks (RNNs)—have been benchmarked against traditional approaches such as the multi-layer perceptron (MLP), support vector machine (SVM), and k-nearest neighbors (kNN) algorithms. This comparative study investigates the effectiveness of deep learning techniques in EEG-based emotion recognition by classifying emotions into four categories based on the valence–arousal plane: high arousal, positive valence (HAPV); low arousal, positive valence (LAPV); high arousal, negative valence (HANV); and low arousal, negative valence (LANV). Evaluations were conducted using the DEAP dataset. The results indicate that both the CNN and RNN-LSTM models achieve high classification performance in EEG-based emotion recognition, with average accuracies of 90.13% and 93.36%, respectively, significantly outperforming the shallow algorithms (MLP, SVM, kNN). Full article
(This article belongs to the Special Issue New Advances in Embedded Software and Applications)
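The four-quadrant scheme this abstract describes amounts to thresholding valence and arousal ratings at a midpoint. A minimal sketch, assuming DEAP-style 1–9 rating scales split at 5 (the exact cut-off is an assumption, not stated in the abstract):

```python
def quadrant(valence: float, arousal: float, midpoint: float = 5.0) -> str:
    """Map a (valence, arousal) rating pair to one of the four labels
    used in the study: HAPV, LAPV, HANV, LANV."""
    a = "HA" if arousal >= midpoint else "LA"   # high vs. low arousal
    v = "PV" if valence >= midpoint else "NV"   # positive vs. negative valence
    return a + v

# Example: high arousal (7.2), negative valence (3.1)
print(quadrant(3.1, 7.2))  # -> HANV
```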

16 pages, 2095 KiB  
Article
Multimodal Knowledge Distillation for Emotion Recognition
by Zhenxuan Zhang and Guanyu Lu
Brain Sci. 2025, 15(7), 707; https://doi.org/10.3390/brainsci15070707 - 30 Jun 2025
Abstract
Multimodal emotion recognition has emerged as a prominent field in affective computing, offering superior performance compared to single-modality methods. Among various physiological signals, EEG and EOG data are highly valued for their complementary strengths in emotion recognition. However, the practical application of EEG-based approaches is often hindered by high costs and operational complexity, making EOG a more feasible alternative in real-world scenarios. To address this limitation, this study introduces a novel multimodal knowledge distillation framework designed to improve the practicality of emotion decoding while maintaining high accuracy. The framework includes a multimodal fusion module that extracts and integrates interactive and heterogeneous features, and a unimodal student model structurally aligned with the multimodal teacher model for better knowledge alignment. It combines EEG and EOG signals into a unified model and distills the fused multimodal features into a simplified EOG-only model. To facilitate efficient knowledge transfer, the approach incorporates a dynamic feedback mechanism that adjusts, based on performance metrics, the guidance the multimodal model provides to the unimodal model during distillation. The proposed method was comprehensively evaluated on two datasets based on EEG and EOG signals: its valence and arousal accuracies are 70.38% and 60.41% on the DEAP dataset, and 61.31% and 60.31% on the BJTU-Emotion dataset, respectively. The method achieves state-of-the-art classification performance compared to the baseline methods, with statistically significant improvements confirmed by paired t-tests (p < 0.05). By effectively transferring knowledge from the multimodal model to the unimodal EOG model, the framework enhances the practicality of emotion recognition while maintaining high accuracy, expanding its applicability in real-world scenarios. Full article
(This article belongs to the Section Neurotechnology and Neuroimaging)
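The teacher-to-student transfer described here follows the generic knowledge-distillation recipe. A minimal sketch, assuming a standard soft-label KL term plus hard cross-entropy, and a toy feedback rule for re-weighting the teacher's guidance (both are illustrative assumptions, not the paper's exact loss):

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, true_idx, alpha=0.5, T=2.0):
    """Generic distillation loss: alpha * T^2 * KL(teacher || student)
    on softened distributions + (1 - alpha) * cross-entropy on the label."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_t, p_s))
    ce = -math.log(softmax(student_logits)[true_idx])
    return alpha * (T * T) * kl + (1 - alpha) * ce

def update_alpha(alpha, student_acc, teacher_acc, step=0.05):
    """Toy dynamic-feedback rule (assumption): lean more on the teacher
    while the student lags, less once it catches up."""
    return min(1.0, alpha + step) if student_acc < teacher_acc else max(0.0, alpha - step)
```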

15 pages, 770 KiB  
Data Descriptor
NPFC-Test: A Multimodal Dataset from an Interactive Digital Assessment Using Wearables and Self-Reports
by Luis Fernando Morán-Mirabal, Luis Eduardo Güemes-Frese, Mariana Favarony-Avila, Sergio Noé Torres-Rodríguez and Jessica Alejandra Ruiz-Ramirez
Data 2025, 10(7), 103; https://doi.org/10.3390/data10070103 - 30 Jun 2025
Abstract
The growing implementation of digital platforms and mobile devices in educational environments has generated the need to explore new approaches for evaluating the learning experience beyond traditional self-reports or instructor presence. In this context, the NPFC-Test dataset was created from an experimental protocol conducted at the Experiential Classroom of the Institute for the Future of Education. The dataset was built by collecting multimodal indicators such as neuronal, physiological, and facial data using a portable EEG headband, a medical-grade biometric bracelet, a high-resolution depth camera, and self-report questionnaires. The participants were exposed to a digital test lasting 20 min, composed of audiovisual stimuli and cognitive challenges, during which synchronized data from all devices were gathered. The dataset includes timestamped records related to emotional valence, arousal, and concentration, offering a valuable resource for multimodal learning analytics (MMLA). The recorded data were processed through calibration procedures, temporal alignment techniques, and emotion recognition models. It is expected that the NPFC-Test dataset will support future studies in human–computer interaction and educational data science by providing structured evidence to analyze cognitive and emotional states in learning processes. In addition, it offers a replicable framework for capturing synchronized biometric and behavioral data in controlled academic settings. Full article

20 pages, 3062 KiB  
Article
Cognitive Networks and Text Analysis Identify Anxiety as a Key Dimension of Distress in Genuine Suicide Notes
by Massimo Stella, Trevor James Swanson, Andreia Sofia Teixeira, Brianne N. Richson, Ying Li, Thomas T. Hills, Kelsie T. Forbush and David Watson
Big Data Cogn. Comput. 2025, 9(7), 171; https://doi.org/10.3390/bdcc9070171 - 27 Jun 2025
Abstract
Understanding the mindset of people who die by suicide remains a key research challenge. We map conceptual and emotional word–word co-occurrences in 139 genuine suicide notes and in reference word lists (an Emotional Recall Task) from 200 individuals grouped by high/low depression, anxiety, and stress levels on the DASS-21. Positive words cover most of the suicide notes’ vocabulary; however, co-occurrences in suicide notes overlap mostly with those produced by individuals with low anxiety (Jaccard index of 0.42 for valence and 0.38 for arousal). We introduce a “words not said” method: it removes every word that corpus A shares with a comparison corpus B and then checks the emotions of the “residual” words in A ∖ B. If no emotions are left over, A and B express the same emotions. Simulations indicate this method can classify high/low levels of depression, anxiety, and stress with 80% accuracy in a balanced task. After subtracting suicide-note words, only the high-anxiety corpus displays no significant residual emotions. Our findings thus pin anxiety as a key latent feature of suicidal psychology and offer an interpretable language-based marker for suicide risk detection. Full article
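The two set operations this abstract relies on (Jaccard overlap of co-occurrences, and the "words not said" residual A ∖ B) can be sketched directly; the toy word lists below are illustrative, not from the study:

```python
def jaccard(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| between two word collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def residual_words(corpus_a, corpus_b):
    """'Words not said': what remains of A after removing every word
    A shares with B, i.e. the set difference A ∖ B."""
    return set(corpus_a) - set(corpus_b)

notes = ["love", "sorry", "tired", "goodbye"]
recall = ["love", "sorry", "calm"]
print(jaccard(notes, recall))          # overlap of the two vocabularies
print(residual_words(notes, recall))   # words only the notes contain
```

In the study, the emotions of these residual words are then tested; an empty emotional residue signals that the two corpora express the same emotions.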

34 pages, 3186 KiB  
Article
A Continuous Music Recommendation Method Considering Emotional Change
by Se In Baek and Yong Kyu Lee
Appl. Sci. 2025, 15(13), 7222; https://doi.org/10.3390/app15137222 - 26 Jun 2025
Abstract
Music, movies, books, pictures, and other media can change a user’s emotions, which are important factors in recommending appropriate items. As users’ emotions change over time, the content they select may vary accordingly. Existing emotion-based content recommendation methods primarily recommend content based on the user’s current emotional state. In this study, we propose a continuous music recommendation method that adapts to a user’s changing emotions. Based on Thayer’s emotion model, emotions were classified into four areas, and music and user emotion vectors were created by analyzing the relationships between valence, arousal, and each emotion using a multiple regression model. Based on the user’s emotional history data, a personalized mental model (PMM) was created using a Markov chain. The PMM was used to predict future emotions and generate user emotion vectors for each period. A recommendation list was created by calculating the similarity between music emotion vectors and user emotion vectors. To prove the effectiveness of the proposed method, the accuracy of the music emotion analysis, user emotion prediction, and music recommendation results were evaluated. To evaluate the experiments, the PMM and the modified mental model (MMM) were used to predict user emotions and generate recommendation lists. The accuracy of the content emotion analysis was 87.26%, and the accuracy of user emotion prediction was 86.72%, an improvement of 13.68% compared with the MMM. Additionally, the balanced accuracy of the content recommendation was 79.31%, an improvement of 26.88% compared with the MMM. The proposed method can recommend content that is suitable for users. Full article
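The personalized mental model described above is a first-order Markov chain over emotion states. A minimal sketch, with hypothetical names for the four Thayer quadrants (the labels and maximum-probability prediction rule are assumptions):

```python
from collections import Counter

# Hypothetical labels for the four areas of Thayer's emotion model
STATES = ["contentment", "exuberance", "anxiety", "depression"]

def transition_matrix(history):
    """Estimate first-order Markov transition probabilities from a
    user's emotion-state history (a chronological list of labels).
    States never observed fall back to a uniform row."""
    counts = {s: Counter() for s in STATES}
    for prev, nxt in zip(history, history[1:]):
        counts[prev][nxt] += 1
    matrix = {}
    for s, c in counts.items():
        total = sum(c.values())
        matrix[s] = {t: (c[t] / total if total else 1.0 / len(STATES))
                     for t in STATES}
    return matrix

def predict_next(matrix, current):
    """Predict the most probable next emotion state."""
    return max(matrix[current], key=matrix[current].get)
```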

28 pages, 1609 KiB  
Article
Emotion Recognition from rPPG via Physiologically Inspired Temporal Encoding and Attention-Based Curriculum Learning
by Changmin Lee, Hyunwoo Lee and Mincheol Whang
Sensors 2025, 25(13), 3995; https://doi.org/10.3390/s25133995 - 26 Jun 2025
Abstract
Remote photoplethysmography (rPPG) enables non-contact physiological measurement for emotion recognition, yet the temporally sparse nature of emotional cardiovascular responses, intrinsic measurement noise, weak session-level labels, and subtle correlates of valence pose critical challenges. To address these issues, we propose a physiologically inspired deep learning framework comprising a Multi-scale Temporal Dynamics Encoder (MTDE) to capture autonomic nervous system dynamics across multiple timescales, an adaptive sparse α-Entmax attention mechanism to identify salient emotional segments amidst noisy signals, Gated Temporal Pooling for the robust aggregation of emotional features, and a structured three-phase curriculum learning strategy to systematically handle temporal sparsity, weak labels, and noise. Evaluated on the MAHNOB-HCI dataset (27 subjects and 527 sessions with a subject-mixed split), our temporal-only model achieved competitive performance in arousal recognition (66.04% accuracy; 61.97% weighted F1-score), surpassing prior CNN-LSTM baselines. However, lower performance in valence (62.26% accuracy) revealed inherent physiological limitations regarding a unimodal temporal cardiovascular analysis. These findings establish clear benchmarks for temporal-only rPPG emotion recognition and underscore the necessity of incorporating spatial or multimodal information to effectively capture nuanced emotional dimensions such as valence, guiding future research directions in affective computing. Full article
(This article belongs to the Special Issue Emotion Recognition and Cognitive Behavior Analysis Based on Sensors)
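Of the components named above, Gated Temporal Pooling is the simplest to illustrate: each time step is weighted by a sigmoid gate before aggregation, so salient segments dominate the pooled feature. A sketch under the assumption that the gate scores come from some learned layer (here passed in directly):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gated_temporal_pooling(features, gate_scores):
    """Aggregate a sequence of per-segment feature vectors into one
    vector, weighting each time step by a normalized sigmoid gate.
    `gate_scores` stand in for the output of a gating layer."""
    gates = [sigmoid(s) for s in gate_scores]
    total = sum(gates)
    weights = [g / total for g in gates]
    n_dims = len(features[0])
    return [sum(w * f[d] for w, f in zip(weights, features))
            for d in range(n_dims)]
```

With equal gate scores this reduces to mean pooling; a strongly positive score on one segment makes the output approach that segment's features.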

21 pages, 2393 KiB  
Article
Digital Tools in Action: 3D Printing for Personalized Skincare in the Era of Beauty Tech
by Sara Bom, Pedro Contreiras Pinto, Helena Margarida Ribeiro and Joana Marto
Cosmetics 2025, 12(4), 136; https://doi.org/10.3390/cosmetics12040136 - 25 Jun 2025
Abstract
3D printing (3DP) enables the development of highly customizable skincare solutions, offering precise control over formulation, structure, and aesthetic properties. Therefore, this study explores the impact of patches’ microstructure on hydration efficacy using conventional and advanced chemical/morphological confocal techniques. Moreover, it advances to the personalization of under-eye 3D-printed skincare patches and assesses consumer acceptability through emotional sensing, providing a comparative analysis against a non-3D-printed market option. The results indicate that increasing the patches’ internal porosity enhances water retention in the stratum corneum (53.0 vs. 45.4% µm). Additionally, patches were personalized to address individual skin needs/conditions (design and bioactive composition) and consumer preferences (color and fragrance). The affective analysis indicated a high level of consumer acceptance for the 3D-printed option, as evidenced by the higher valence (14.5 vs. 1.1 action units) and arousal (4.2 vs. 2.7 peaks/minute) scores. These findings highlight the potential of 3DP for personalized skincare, demonstrating how structural modifications can modulate hydration. Furthermore, the biometric-preference digital approach employed offers unparalleled versatility, enabling rapid customization to meet the unique requirements of different skin types. By embracing this advancement, a new era of personalized skincare emerges, where cutting-edge science powers solutions for enhanced skin health and consumer satisfaction. Full article
(This article belongs to the Special Issue Feature Papers in Cosmetics in 2025)

24 pages, 2358 KiB  
Article
Classifying Emotionally Induced Pain Intensity Using Multimodal Physiological Signals and Subjective Ratings: A Pilot Study
by Eun-Hye Jang, Young-Ji Eum, Daesub Yoon and Sangwon Byun
Appl. Sci. 2025, 15(13), 7149; https://doi.org/10.3390/app15137149 - 25 Jun 2025
Abstract
We explore the feasibility of classifying perceived pain intensity—despite the stimulus being identical—using multimodal physiological signals and self-reported emotional ratings. A total of 112 healthy participants watched the same anger-inducing video, yet reported varying pain intensities (5, 6, or 7 on a 7-point scale). We recorded electrocardiogram, skin conductance (SC), respiration, photoplethysmogram results, and finger temperature, extracting 12 physiological features. Participants also rated their valence and arousal. Using a random forest model, we classified pain versus baseline and distinguished intensity levels. Compared to baseline, the painful stimulus altered heart rate variability, SC, respiration, and pulse transit time (PTT). Higher perceived pain correlated with more negative valence, higher arousal, and elevated SC, suggesting stronger sympathetic activation. The classification of baseline versus pain using SC and respiratory features reached an F1 score of 0.83. For intensity levels 6 versus 7, including PTT and skin conductance response along with valence achieved an F1 score of 0.73. These findings highlight distinct psychophysiological patterns that reflect perceived intensity under the same stimulus. SC features emerged as key biomarkers, while valence and arousal offered complementary insights, supporting the development of personalized, psychologically informed pain assessment systems. Full article
(This article belongs to the Special Issue Monitoring of Human Physiological Signals)
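Two of the physiological feature types mentioned above are easy to sketch: a standard time-domain HRV feature (RMSSD over RR intervals) and a mean skin-conductance level. These are illustrative examples of such features, not the study's specific 12-feature set:

```python
def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences (ms),
    a common time-domain heart rate variability feature."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

def mean_scl(sc_samples):
    """Mean skin-conductance level over a recording window."""
    return sum(sc_samples) / len(sc_samples)
```

Features like these, concatenated with valence and arousal ratings, would form the input vector for a classifier such as the random forest used in the study.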

28 pages, 5423 KiB  
Article
Design Strategies for Mobile Click-and-Load Waiting Scenarios
by Yang Yin, Yingpin Chen, Chenan Wang, Yuching Chiang, Pinhao Wang, Haoran Wei, Haibo Lei, Chunlei Chai and Hao Fan
Appl. Sci. 2025, 15(12), 6717; https://doi.org/10.3390/app15126717 - 16 Jun 2025
Abstract
The optimization of design strategies in loading and waiting scenarios is of great significance for enhancing user experience. This study focuses on the click-to-load waiting scenario in mobile device interfaces and systematically analyzes the user experience performance of three design strategies—the interface type, loading indicator, and layout—across different page transition types (including the tab page, content page, and half-screen overlay). Based on questionnaire responses and experimental data (N = 90) collected from participants aged 20–29, we assessed subjective user perceptions across five validated metrics: time perception, loading speed, satisfaction, emotional valence, and arousal level. The results revealed significant differences among strategies in terms of loading speed perception, time awareness, and emotional responses. Notably, progressive loading strategies proved particularly effective in enhancing user satisfaction and alleviating temporal cognitive load. This study summarizes the characteristics of strategy applicability and proposes general optimization recommendations, offering both theoretical insights and practical guidance for designing loading feedback in mobile device interfaces. Full article

24 pages, 2269 KiB  
Article
This Is the Way People Are Negative Anymore: Mapping Emotionally Negative Affect in Syntactically Positive Anymore Through Sentiment Analysis of Tweets
by Christopher Strelluf and Thomas T. Hills
Languages 2025, 10(6), 136; https://doi.org/10.3390/languages10060136 - 10 Jun 2025
Abstract
The adverb anymore is standardly a negative polarity item (NPI), which must be licensed by triggers of non-positive polarity. Some Englishes also allow anymore in positive-polarity clauses. Linguists have posited that this non-polarity anymore (NPAM) carries a feature of negative affect. However, this claim is based on elicited judgments, and linguists have argued that respondents cannot reliably evaluate NPAM via conscious judgment. To solve this problem, we employ sentiment analysis to examine the relationship between NPAM and negative affect in a Twitter corpus. Using two complementary sentiment analytic frameworks, we demonstrate that words occurring with NPAM have lower valence, higher arousal, and lower dominance than words occurring with NPI-anymore. Broadly, this confirms NPAM’s association with negative affect in natural-language productions. We additionally identify inter- and intra-regional differences in affective dimensions, as well as variability across different types of NPI trigger, showing that the relationship between negative affect and NPAM is not monolithic dialectally, syntactically, or semantically. The project demonstrates the utility of sentiment analysis for examining emotional characteristics of low-frequency variables, providing a new tool for dialectology, micro-syntax, and variationist sociolinguistics. Full article
(This article belongs to the Special Issue Linguistics of Social Media)

17 pages, 4080 KiB  
Article
Defining and Analyzing Nervousness Using AI-Based Facial Expression Recognition
by Hyunsoo Seo, Seunghyun Kim and Eui Chul Lee
Mathematics 2025, 13(11), 1745; https://doi.org/10.3390/math13111745 - 25 May 2025
Abstract
Nervousness is a complex emotional state characterized by high arousal and ambiguous valence, often triggered in high-stress environments. This study presents a mathematical and computational framework for defining and classifying nervousness using facial expression data projected onto a valence–arousal (V–A) space. A statistical approach employing the Minimum Covariance Determinant (MCD) estimator is used to construct 90% and 99% confidence ellipses for nervous and non-nervous states, respectively, using Mahalanobis distance. These ellipses form the basis for binary labeling of the AffectNet dataset. We apply a deep learning model trained via knowledge distillation, with EmoNet as the teacher and MobileNetV2 as the student, to efficiently classify nervousness. The experimental results on the AffectNet dataset show that our proposed method achieves a classification accuracy of 81.08%, improving over the baseline by approximately 6%. These results are obtained by refining the valence–arousal distributions and applying knowledge distillation from EmoNet to MobileNetV2. We use accuracy and F1-score as evaluation metrics to validate the performance. Furthermore, we perform a qualitative analysis using action unit (AU) activation graphs to provide deeper insight into nervous facial expressions. The proposed method demonstrates how mathematical tools and deep learning can be integrated for robust affective state modeling. Full article
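The ellipse-based labeling above rests on Mahalanobis distance from a class centre under a covariance estimate (robust MCD estimates in the paper). A 2D sketch with plain numbers standing in for the fitted mean and covariance:

```python
def mahalanobis_2d(point, mean, cov):
    """Mahalanobis distance of a (valence, arousal) point from a class
    centre, given a 2x2 covariance matrix `cov`."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))  # 2x2 inverse
    dx = (point[0] - mean[0], point[1] - mean[1])
    # squared distance: dx^T * cov^{-1} * dx
    d2 = (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
          + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))
    return d2 ** 0.5

def label_nervous(point, mean, cov, threshold):
    """Binary label: points inside the confidence ellipse (distance at
    or below the threshold) count as nervous."""
    return mahalanobis_2d(point, mean, cov) <= threshold
```

The threshold corresponds to the chosen confidence level (90% or 99% in the study); its chi-square-derived value is omitted here.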

14 pages, 839 KiB  
Article
The Impact of Children’s Food Neophobia on Meal Perception, Emotional Responses, and Food Waste in Italian Primary School Canteens
by Maria Piochi, Michele Antonio Fino and Luisa Torri
Foods 2025, 14(10), 1777; https://doi.org/10.3390/foods14101777 - 16 May 2025
Abstract
Food neophobia (FN) has been poorly explored in real contexts and in large-scale studies with children. This study assessed the impact of FN in children on school canteen meals by considering liking, emotional status, and food waste behaviours. We involved 630 children (7–11 years old; females = 53%) from nine Italian primary schools. The main self-reported variables that were collected included pleasure of eating in the canteen, declared liking for different foods, emotional responses, meal description, and food waste. The characteristics of low neophobia (LN), medium neophobia (MN), and high neophobia (HN) were comparable between genders and provenience and did not differ by the pleasure of eating at home. Children with HN had the lowest frequency of eating in the canteen, the highest self-reported amount of wasted food, and the lowest liking for all items, especially vegetables and legumes; they selected mostly emotions with negative valence and described the meal as more uncomfortable and boring. Instead, children exhibiting LN used positive emotions with high arousal to describe the meal and found it a little boring, while those with MN showed an intermediate attitude. Children with HN may benefit from familiarisation actions to accept non-domestic meals and reduce food waste in non-familiar environments. Improving school canteen contexts (e.g., the socialising possibility) can modulate children’s emotional responses and reduce food waste. Full article
(This article belongs to the Section Sensory and Consumer Sciences)

24 pages, 5386 KiB  
Article
Impact of Emotional Design: Improving Sustainable Well-Being Through Bio-Based Tea Waste Materials
by Ming Lei, Shenghua Tan, Pin Gao, Zhiyu Long, Li Sun and Yuekun Dong
Buildings 2025, 15(9), 1559; https://doi.org/10.3390/buildings15091559 - 5 May 2025
Abstract
Commercial progress concerning biobased materials has been slow, with success depending on functionality and emotional responses. Emotional interaction research provides a novel way to shift perceptions of biobased materials. This study proposes a human-centered emotional design framework using biobased tea waste to explore how sensory properties (form, color, odor, surface roughness) shape emotional responses and contribute to sustainable wellbeing. We used a mixed-methods approach combining subjective evaluations (Self-Assessment Manikin scale) with physiological metrics (EEG, skin temperature, pupil dilation) from 24 participants. Results demonstrated that spherical forms and high surface roughness significantly enhanced emotional valence and arousal, while warm-toned yellow samples elicited 23% higher pleasure ratings than dark ones. Neurophysiological data revealed that positive emotions correlated with reduced alpha power in the parietal lobe (αPz, p = 0.03) and a 0.3 °C rise in skin temperature, whereas negative evaluations activated gamma oscillations in central brain regions (γCz, p = 0.02). Mapping these findings to human factors engineering principles, we developed actionable design strategies—such as texture-optimized surfaces and color–emotion pairings—that transform tea waste into emotionally resonant, sustainable products. This work advances emotional design’s role in fostering ecological sustainability and human wellbeing, demonstrating how human-centered engineering can align material functionality with psychological fulfillment. Full article

22 pages, 3079 KiB  
Article
ECE-TTS: A Zero-Shot Emotion Text-to-Speech Model with Simplified and Precise Control
by Shixiong Liang, Ruohua Zhou and Qingsheng Yuan
Appl. Sci. 2025, 15(9), 5108; https://doi.org/10.3390/app15095108 - 4 May 2025
Abstract
Significant advances have been made in emotional speech synthesis technology; however, existing models still face challenges in achieving fine-grained emotion style control and simple yet precise emotion intensity regulation. To address these issues, we propose Easy-Control Emotion Text-to-Speech (ECE-TTS), a zero-shot TTS model built upon the F5-TTS architecture, simplifying emotion modeling while maintaining accurate control. ECE-TTS leverages pretrained emotion recognizers to extract Valence, Arousal, and Dominance (VAD) values, transforming them into Emotion-Adaptive Spherical Vectors (EASV) for precise emotion style representation. Emotion intensity modulation is efficiently realized via simple arithmetic operations on emotion vectors without introducing additional complex modules or training extra regression networks. Emotion style control experiments demonstrate that ECE-TTS achieves a Word Error Rate (WER) of 13.91%, an Aro-Val-Domin SIM of 0.679, and an Emo SIM of 0.594, surpassing GenerSpeech (WER = 16.34%, Aro-Val-Domin SIM = 0.627, Emo SIM = 0.563) and EmoSphere++ (WER = 15.08%, Aro-Val-Domin SIM = 0.656, Emo SIM = 0.578). Subjective Mean Opinion Score (MOS) evaluations (1–5 scale) further confirm improvements in speaker similarity (3.93), naturalness (3.98), and emotional expressiveness (3.94). Additionally, emotion intensity control experiments demonstrate smooth and precise modulation across varying emotional strengths. These results validate ECE-TTS as a highly effective and practical solution for high-quality, emotion-controllable speech synthesis. Full article
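The abstract's "simple arithmetic on emotion vectors" suggests a spherical representation where intensity scales the radius while the angular direction fixes the emotion style. A sketch of that idea only; the centring, axis order, and scaling rule are assumptions, not the paper's exact EASV construction:

```python
import math

def vad_to_spherical(v, a, d, center=(0.0, 0.0, 0.0)):
    """Convert a (valence, arousal, dominance) point to spherical
    coordinates (r, theta, phi) around an emotion-space centre."""
    x, y, z = v - center[0], a - center[1], d - center[2]
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.atan2(y, x)              # angle in the valence-arousal plane
    phi = math.acos(z / r) if r else 0.0  # inclination toward dominance
    return r, theta, phi

def scale_intensity(r, theta, phi, factor):
    """Intensity control by simple vector arithmetic: scale the radius,
    keep the emotion 'direction' (the angles) unchanged."""
    return r * factor, theta, phi
```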

27 pages, 8770 KiB  
Article
Evaluation of Rural Visual Landscape Quality Based on Multi-Source Affective Computing
by Xinyu Zhao, Lin Lin, Xiao Guo, Zhisheng Wang and Ruixuan Li
Appl. Sci. 2025, 15(9), 4905; https://doi.org/10.3390/app15094905 - 28 Apr 2025
Abstract
Assessing the visual quality of rural landscapes is pivotal for quantifying ecological services and preserving cultural heritage; however, conventional ecological indicators neglect emotional and cognitive dimensions. To address this gap, the present study proposes a novel visual quality assessment method for rural landscapes that integrates multimodal sentiment classification models to strengthen sustainability metrics. Four landscape types were selected from three representative villages in Dalian City, China, and the physiological signals (EEG, EOG) and subjective evaluations (Beauty Assessment and SAM Scales) of students and teachers were recorded. Binary, ternary, and five-category emotion classification models were then developed. Results indicate that the binary and ternary models achieve superior accuracy in emotional valence and arousal, whereas the five-category model performs least effectively. Furthermore, an ensemble learning approach outperforms individual classifiers in both binary and ternary tasks, yielding a 16.54% increase in mean accuracy. Integrating subjective and objective data further enhances ternary classification accuracy by 7.7% compared to existing studies, confirming the value of multi-source features. These findings demonstrate that a multi-source sentiment computing framework can serve as a robust quantitative tool for evaluating emotional quality in rural landscapes and promoting their sustainable development. Full article
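The ensemble approach credited above with outperforming individual classifiers can be illustrated with its simplest form, majority voting over per-model labels (the fusion scheme here is an assumption; the study does not specify its combiner in the abstract):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the labels several classifiers assign to one sample by
    majority vote (first-seen label wins a tie)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_accuracy(per_model_preds, truths):
    """Accuracy of the fused predictions: `per_model_preds` holds one
    list of labels per classifier, aligned with `truths`."""
    fused = [majority_vote(sample) for sample in zip(*per_model_preds)]
    correct = sum(f == t for f, t in zip(fused, truths))
    return correct / len(truths)
```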
