Search Results (16)

Search Parameters:
Keywords = valence-arousal-dominance model

22 pages, 3079 KiB  
Article
ECE-TTS: A Zero-Shot Emotion Text-to-Speech Model with Simplified and Precise Control
by Shixiong Liang, Ruohua Zhou and Qingsheng Yuan
Appl. Sci. 2025, 15(9), 5108; https://doi.org/10.3390/app15095108 - 4 May 2025
Viewed by 2119
Abstract
Significant advances have been made in emotional speech synthesis technology; however, existing models still face challenges in achieving fine-grained emotion style control and simple yet precise emotion intensity regulation. To address these issues, we propose Easy-Control Emotion Text-to-Speech (ECE-TTS), a zero-shot TTS model built upon the F5-TTS architecture, simplifying emotion modeling while maintaining accurate control. ECE-TTS leverages pretrained emotion recognizers to extract Valence, Arousal, and Dominance (VAD) values, transforming them into Emotion-Adaptive Spherical Vectors (EASV) for precise emotion style representation. Emotion intensity modulation is efficiently realized via simple arithmetic operations on emotion vectors without introducing additional complex modules or training extra regression networks. Emotion style control experiments demonstrate that ECE-TTS achieves a Word Error Rate (WER) of 13.91%, an Aro-Val-Domin SIM of 0.679, and an Emo SIM of 0.594, surpassing GenerSpeech (WER = 16.34%, Aro-Val-Domin SIM = 0.627, Emo SIM = 0.563) and EmoSphere++ (WER = 15.08%, Aro-Val-Domin SIM = 0.656, Emo SIM = 0.578). Subjective Mean Opinion Score (MOS) evaluations (1–5 scale) further confirm improvements in speaker similarity (3.93), naturalness (3.98), and emotional expressiveness (3.94). Additionally, emotion intensity control experiments demonstrate smooth and precise modulation across varying emotional strengths. These results validate ECE-TTS as a highly effective and practical solution for high-quality, emotion-controllable speech synthesis. Full article
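The ECE-TTS abstract above describes mapping VAD values onto Emotion-Adaptive Spherical Vectors and modulating emotion intensity with simple arithmetic on emotion vectors. The paper's exact formulation is not given here; the sketch below only illustrates the general idea (the neutral center, coordinate ranges, and function names are assumptions, not the authors' implementation):

```python
import numpy as np

def vad_to_spherical(vad, neutral=(0.5, 0.5, 0.5)):
    """Map a VAD point to spherical coordinates around an assumed neutral center:
    radius ~ emotion intensity, angles ~ emotion style direction."""
    x, y, z = np.asarray(vad, dtype=float) - np.asarray(neutral, dtype=float)
    r = np.sqrt(x**2 + y**2 + z**2)               # distance from neutral (intensity)
    theta = np.arccos(z / r) if r > 0 else 0.0    # polar angle (style component)
    phi = np.arctan2(y, x)                        # azimuthal angle (style component)
    return r, theta, phi

def scale_intensity(vad, alpha, neutral=(0.5, 0.5, 0.5)):
    """Intensity modulation by vector arithmetic alone: keep the direction
    (style) fixed and scale the offset from neutral by a factor alpha."""
    center = np.asarray(neutral, dtype=float)
    return center + alpha * (np.asarray(vad, dtype=float) - center)
```

For example, halving the intensity of a high-valence point (0.9, 0.7, 0.5) with `scale_intensity(..., 0.5)` moves it halfway back toward the neutral center without changing its direction, which is the kind of module-free intensity control the abstract claims.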

16 pages, 1698 KiB  
Article
EEG-RegNet: Regressive Emotion Recognition in Continuous VAD Space Using EEG Signals
by Hyo Jin Jon, Longbin Jin, Hyuntaek Jung, Hyunseo Kim and Eun Yi Kim
Mathematics 2025, 13(1), 87; https://doi.org/10.3390/math13010087 - 29 Dec 2024
Viewed by 1323
Abstract
Electroencephalogram (EEG)-based emotion recognition has garnered significant attention in brain–computer interface research and healthcare applications. While deep learning models have been extensively studied, most are designed for classification tasks and struggle to accurately predict continuous emotional scores in regression settings. In this paper, we introduce EEG-RegNet, a novel deep neural network tailored for precise emotional score prediction across the continuous valence–arousal–dominance (VAD) space. EEG-RegNet tackles two core challenges: extracting subject-independent, emotion-relevant EEG features and mapping these features to fine-grained, continuous emotional scores. The model leverages 2D convolutional neural networks (CNNs) for spatial feature extraction and a 1D CNN for temporal dynamics, providing robust spatiotemporal modeling. A key innovation is the hybrid loss function, which integrates mean squared error (MSE) and cross-entropy (CE) with a Bernoulli penalty to enhance probability estimation and address sparsity in the emotional space. Extensive experiments on the DEAP dataset show that EEG-RegNet achieves state-of-the-art results in continuous emotional score prediction and attains 95% accuracy in fine-grained emotion classification, highlighting its scalability and precision in emotion recognition. Full article
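The EEG-RegNet abstract above names a hybrid loss combining mean squared error, cross-entropy, and a Bernoulli penalty. The precise formulation is not given in the abstract; the sketch below assumes a binary-entropy-style confidence penalty over discretized score bins, with illustrative weights `lam_ce` and `lam_bern`:

```python
import numpy as np

def hybrid_loss(pred_score, true_score, bin_probs, true_bin,
                lam_ce=1.0, lam_bern=0.1):
    """Hedged sketch of an MSE + cross-entropy + Bernoulli-penalty objective.

    pred_score : predicted continuous emotional score
    true_score : ground-truth continuous score
    bin_probs  : predicted probabilities over discretized score bins (sum to 1)
    true_bin   : index of the bin containing true_score
    """
    mse = (pred_score - true_score) ** 2           # regression term
    ce = -np.log(bin_probs[true_bin] + 1e-12)      # classification term on bins
    # Assumed Bernoulli penalty: binary entropy summed over bins, which pushes
    # probabilities toward confident 0/1 values in a sparse emotional space.
    bern = -np.sum(bin_probs * np.log(bin_probs + 1e-12)
                   + (1 - bin_probs) * np.log(1 - bin_probs + 1e-12))
    return mse + lam_ce * ce + lam_bern * bern
```

A prediction closer to the ground truth lowers the MSE term while the bin terms are unchanged, so the combined objective still rewards fine-grained score accuracy.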

20 pages, 5165 KiB  
Article
Emotion Recognition Model of EEG Signals Based on Double Attention Mechanism
by Yahong Ma, Zhentao Huang, Yuyao Yang, Shanwen Zhang, Qi Dong, Rongrong Wang and Liangliang Hu
Brain Sci. 2024, 14(12), 1289; https://doi.org/10.3390/brainsci14121289 - 21 Dec 2024
Cited by 3 | Viewed by 1945
Abstract
Background: Emotions play a crucial role in people’s lives, profoundly affecting their cognition, decision-making, and interpersonal communication. Emotion recognition based on brain signals has become a significant challenge in the fields of affective computing and human-computer interaction. Methods: To address the inaccurate feature extraction and low accuracy of existing deep learning models in emotion recognition, this paper proposes a multi-channel automatic classification model for emotion EEG signals named DACB, which is based on dual attention mechanisms, convolutional neural networks, and bidirectional long short-term memory networks. DACB extracts features in both temporal and spatial dimensions, incorporating not only convolutional neural networks but also SE attention modules that learn the importance of different channel features, thereby enhancing the network’s performance. DACB also introduces dot-product attention to learn the importance of spatial and temporal features, effectively improving the model’s accuracy. Results: In single-shot validation tests, the method achieves an accuracy of 99.96% on SEED-IV and 87.52%, 90.06%, and 89.05% on the three DREAMER (valence, arousal, and dominance) classification tasks. In 10-fold cross-validation tests, the corresponding accuracies are 99.73% and 84.26%, 85.40%, and 85.02%, outperforming other models. Conclusions: The DACB model achieves high accuracy in emotion classification tasks, showing strong performance and generalization ability and providing new directions for future research in EEG signal recognition. Full article

10 pages, 2891 KiB  
Proceeding Paper
Analysis of Multiple Emotions from Electroencephalogram Signals Using Machine Learning Models
by Jehosheba Margaret Matthew, Masoodhu Banu Noordheen Mohammad Mustafa and Madhumithaa Selvarajan
Eng. Proc. 2024, 82(1), 41; https://doi.org/10.3390/ecsa-11-20398 - 25 Nov 2024
Viewed by 604
Abstract
Emotion recognition is a valuable technique for monitoring the emotional well-being of human beings. It is found that around 60% of people suffer from different psychological conditions like depression, anxiety, and other mental issues. Mental health studies explore how different emotional expressions are linked to specific psychological conditions. Recognizing these patterns and identifying the underlying emotions is complex, since they vary from individual to individual. Emotion represents the state of mind in response to a particular situation. These emotions, collected using EEG electrodes, require detailed analysis to contribute to clinical assessment and personalized health monitoring. Most research works are based on valence and arousal (VA), resulting in two, three, or four emotional classes based on their combinations. The main objective of this paper is to include dominance along with valence and arousal (VAD), resulting in the classification of 16 classes of emotional states and thereby increasing the number of emotions that can be identified. This paper also considers 2-class, 4-class, and 16-class emotion classification problems, applies different models, and discusses the evaluation methodology used to select the best one. Among the six machine learning models, KNN proved to be the best, with classification accuracies of 95.8% for 2-class, 91.78% for 4-class, and 89.26% for 16-class. Performance metrics such as precision, ROC, recall, F1-score, and accuracy are evaluated. Additionally, statistical analysis was performed using the Friedman chi-square test to validate the results. Full article

24 pages, 8284 KiB  
Article
Hybrid Natural Language Processing Model for Sentiment Analysis during Natural Crisis
by Marko Horvat, Gordan Gledec and Fran Leontić
Electronics 2024, 13(10), 1991; https://doi.org/10.3390/electronics13101991 - 20 May 2024
Cited by 5 | Viewed by 3032
Abstract
This paper introduces a novel natural language processing (NLP) model as an original approach to sentiment analysis, with a focus on understanding emotional responses during major disasters or conflicts. The model was created specifically for Croatian and is based on unigrams, but it can be used with any language that supports the n-gram model and expanded to multiple-word sequences. The presented model generates a sentiment score aligned with discrete and dimensional emotion models, reliability metrics, and individual word scores using the affective datasets Extended ANEW and the NRC Word-Emotion Association Lexicon. The sentiment analysis model incorporates different methodologies, including lexicon-based, machine learning, and hybrid approaches. Preprocessing includes translation, lemmatization, and data refinement, utilizing automated translation services as well as the CLARIN Knowledge Centre for South Slavic languages (CLASSLA) library, with a particular emphasis on diacritical mark correction and tokenization. The presented model was experimentally evaluated on three simultaneous major natural crises that recently affected Croatia. The study’s findings reveal a significant shift in emotional dimensions during the COVID-19 pandemic, particularly a decrease in valence, arousal, and dominance, which corresponded with the two-month recovery period. Furthermore, the 2020 Croatian earthquakes elicited a wide range of negative discrete emotions, including anger, fear, and sadness, with a recuperation period much longer than in the case of COVID-19. This study represents an advancement in sentiment analysis, particularly in linguistically specific contexts, and provides insights into the emotional landscape shaped by major societal events. Full article
(This article belongs to the Special Issue Emerging Theory and Applications in Natural Language Processing)

17 pages, 3370 KiB  
Article
FC-TFS-CGRU: A Temporal–Frequency–Spatial Electroencephalography Emotion Recognition Model Based on Functional Connectivity and a Convolutional Gated Recurrent Unit Hybrid Architecture
by Xia Wu, Yumei Zhang, Jingjing Li, Honghong Yang and Xiaojun Wu
Sensors 2024, 24(6), 1979; https://doi.org/10.3390/s24061979 - 20 Mar 2024
Cited by 7 | Viewed by 1683
Abstract
The gated recurrent unit (GRU) network can effectively capture temporal information for 1D signals, such as electroencephalography and event-related brain potential, and it has been widely used in the field of EEG emotion recognition. However, multi-domain features, including the spatial, frequency, and temporal features of EEG signals, contribute to emotion recognition, while GRUs show some limitations in capturing frequency–spatial features. Thus, we proposed a hybrid architecture of convolutional neural networks and GRUs (CGRU) to effectively capture the complementary temporal features and spatial–frequency features hidden in signal channels. In addition, to investigate the interactions among different brain regions during emotional information processing, we considered the functional connectivity relationship of the brain by introducing a phase-locking value to calculate the phase difference between the EEG channels to gain spatial information based on functional connectivity. Then, in the classification module, we incorporated attention constraints to address the issue of the uneven recognition contribution of EEG signal features. Finally, we conducted experiments on the DEAP and DREAMER databases. The results demonstrated that our model outperforms the other models with remarkable recognition accuracy of 99.51%, 99.60%, and 99.59% (58.67%, 65.74%, and 67.05%) on DEAP and 98.63%, 98.7%, and 98.71% (75.65%, 75.89%, and 71.71%) on DREAMER in a subject-dependent experiment (subject-independent experiment) for arousal, valence, and dominance. Full article
(This article belongs to the Section Biomedical Sensors)
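The phase-locking value used in the FC-TFS-CGRU abstract above is a standard functional-connectivity measure: the magnitude of the time-averaged unit phasor of the phase difference between two channels, ranging from 0 (no phase locking) to 1 (constant phase lag). A minimal NumPy sketch, not the authors' code, using an FFT-based Hilbert transform:

```python
import numpy as np

def analytic_phase(x):
    """Instantaneous phase via an FFT-based analytic signal (Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.angle(np.fft.ifft(X * h))

def plv(x, y):
    """Phase-locking value between two equal-length channel signals:
    |mean(exp(i * phase difference))|, in [0, 1]."""
    dphi = analytic_phase(x) - analytic_phase(y)
    return np.abs(np.mean(np.exp(1j * dphi)))
```

Two sinusoids at the same frequency with a fixed phase offset give a PLV of 1, whereas unrelated signals drift in relative phase and yield values near 0; computing this for every channel pair produces the functional-connectivity matrix the abstract describes.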

22 pages, 6337 KiB  
Article
Cascaded Convolutional Recurrent Neural Networks for EEG Emotion Recognition Based on Temporal–Frequency–Spatial Features
by Yuan Luo, Changbo Wu and Caiyun Lv
Appl. Sci. 2023, 13(11), 6761; https://doi.org/10.3390/app13116761 - 2 Jun 2023
Cited by 5 | Viewed by 2109
Abstract
Emotion recognition is a research area that spans multiple disciplines, including computational science, neuroscience, and cognitive psychology. The use of electroencephalogram (EEG) signals in emotion recognition is particularly promising due to their objective and nonartefactual nature. To effectively leverage the spatial information between electrodes, the temporal correlation of EEG sequences, and the various sub-bands of information corresponding to different emotions, we construct a 4D matrix comprising temporal–frequency–spatial features as the input to our proposed hybrid model. This model incorporates a residual network based on depthwise convolution (DC) and pointwise convolution (PC), which not only extracts the spatial–frequency information in the input signal, but also reduces the training parameters. To further improve performance, we apply frequency channel attention networks (FcaNet) to distribute weights to different channel features. Finally, we use a bidirectional long short-term memory network (Bi-LSTM) to learn the temporal information in the sequence in both directions. To highlight the temporal importance of the frame window in the sample, we choose the weighted sum of the hidden layer states at all frame moments as the input to softmax. Our experimental results demonstrate that the proposed method achieves excellent recognition performance. We experimentally validated all proposed methods on the DEAP dataset, which has authoritative status in the EEG emotion recognition domain. The average accuracy achieved was 97.84% for the four binary classifications of valence, arousal, dominance, and liking and 88.46% for the four classifications of high and low valence–arousal recognition. Full article
(This article belongs to the Special Issue Artificial Intelligence (AI) Applied to Computational Psychology)

19 pages, 2965 KiB  
Article
EEG-Based Emotion Recognition Using Convolutional Recurrent Neural Network with Multi-Head Self-Attention
by Zhangfang Hu, Libujie Chen, Yuan Luo and Jingfan Zhou
Appl. Sci. 2022, 12(21), 11255; https://doi.org/10.3390/app122111255 - 6 Nov 2022
Cited by 39 | Viewed by 5988
Abstract
In recent years, deep learning has been widely used in emotion recognition, but the models and algorithms in practical applications still have much room for improvement. With the development of graph convolutional neural networks, new ideas for EEG-based emotion recognition have arisen. In this paper, we propose a novel deep-learning-based emotion recognition method. First, the EEG signal is spatially filtered using the common spatial pattern (CSP), and the filtered signal is converted into a time–frequency map by continuous wavelet transform (CWT). This is used as the input data of the network; then feature extraction and classification are performed by the deep learning model. We call this model CNN-BiLSTM-MHSA; it consists of a convolutional neural network (CNN), a bi-directional long short-term memory network (BiLSTM), and multi-head self-attention (MHSA). This network learns the time-series and spatial information of EEG emotion signals in depth: the CNN smooths the EEG signals and extracts deep features, the BiLSTM learns emotion information from past and future time steps, and the MHSA improves recognition accuracy by reassigning weights to emotion features. Finally, we conducted experiments on the DEAP dataset for sentiment classification, and the experimental results show that the method outperforms existing classification approaches. The accuracy of high/low valence, arousal, dominance, and liking state recognition is 98.10%, and the accuracy of four-class high/low valence–arousal recognition is 89.33%. Full article
(This article belongs to the Topic Advances in Artificial Neural Networks)

22 pages, 725 KiB  
Article
Automated Emotion Identification Using Fourier–Bessel Domain-Based Entropies
by Aditya Nalwaya, Kritiprasanna Das and Ram Bilas Pachori
Entropy 2022, 24(10), 1322; https://doi.org/10.3390/e24101322 - 20 Sep 2022
Cited by 37 | Viewed by 3295
Abstract
Human dependence on computers is increasing day by day; thus, human interaction with computers must be more dynamic and contextual rather than static or generalized. The development of such devices requires knowledge of the emotional state of the user interacting with them; for this purpose, an emotion recognition system is required. Physiological signals, specifically the electrocardiogram (ECG) and electroencephalogram (EEG), were studied here for the purpose of emotion recognition. This paper proposes novel entropy-based features in the Fourier–Bessel domain instead of the Fourier domain, where the frequency resolution of the former is twice that of the latter. Further, to represent such non-stationary signals, the Fourier–Bessel series expansion (FBSE) is used, which has non-stationary basis functions, making it more suitable than the Fourier representation. EEG and ECG signals are decomposed into narrow-band modes using the FBSE-based empirical wavelet transform (FBSE-EWT). The proposed entropies of each mode are computed to form the feature vector, which is further used to develop machine learning models. The proposed emotion detection algorithm is evaluated using the publicly available DREAMER dataset. The k-nearest neighbors (KNN) classifier provides accuracies of 97.84%, 97.91%, and 97.86% for the arousal, valence, and dominance classes, respectively. Finally, this paper concludes that the obtained entropy features are suitable for emotion recognition from the given physiological signals. Full article
(This article belongs to the Special Issue Entropy Algorithms for the Analysis of Biomedical Signals)

25 pages, 4490 KiB  
Article
Emotion Recognition Using a Reduced Set of EEG Channels Based on Holographic Feature Maps
by Ante Topic, Mladen Russo, Maja Stella and Matko Saric
Sensors 2022, 22(9), 3248; https://doi.org/10.3390/s22093248 - 23 Apr 2022
Cited by 35 | Viewed by 5793
Abstract
An important part of constructing a Brain-Computer Interface (BCI) device is developing a model that can recognize emotions from electroencephalogram (EEG) signals. Research in this area is very challenging because the EEG signal is non-stationary, non-linear, and contains a lot of noise due to artifacts caused by muscle activity and poor electrode contact. EEG signals are recorded with non-invasive wearable devices using a large number of electrodes, which increases the dimensionality and, thereby, the computational complexity of EEG data; it also reduces the level of comfort of the subjects. This paper implements our holographic features, investigates electrode selection, and uses the most relevant channels to maximize model accuracy. The ReliefF and Neighborhood Component Analysis (NCA) methods were used to select the optimal electrodes. Verification was performed on four publicly available datasets. Our holographic feature maps were constructed using computer-generated holography (CGH) based on the values of signal characteristics displayed in space. The resulting 2D maps are the input to a Convolutional Neural Network (CNN), which serves as the feature extraction method. This methodology uses a reduced set of electrodes, which differ between men and women, and obtains state-of-the-art results in a three-dimensional emotional space. The experimental results show that the channel selection methods improve emotion recognition rates significantly, with an accuracy of 90.76% for valence, 92.92% for arousal, and 92.97% for dominance. Full article
(This article belongs to the Section Intelligent Sensors)

29 pages, 6517 KiB  
Article
Semi-Supervised Cross-Subject Emotion Recognition Based on Stacked Denoising Autoencoder Architecture Using a Fusion of Multi-Modal Physiological Signals
by Junhai Luo, Yuxin Tian, Hang Yu, Yu Chen and Man Wu
Entropy 2022, 24(5), 577; https://doi.org/10.3390/e24050577 - 20 Apr 2022
Cited by 17 | Viewed by 3275
Abstract
In recent decades, emotion recognition has received considerable attention. As interest has shifted to physiological patterns, a wide range of elaborate physiological emotion data features have emerged and been combined with various classification models to detect one’s emotional state. To circumvent the labor of manually designing features, we propose to acquire affective and robust representations automatically through a Stacked Denoising Autoencoder (SDA) architecture with unsupervised pre-training, followed by supervised fine-tuning. In this paper, we compare the performance of different features and models on three binary classification tasks based on the Valence-Arousal-Dominance (VAD) affect model. Decision fusion and feature fusion of electroencephalogram (EEG) and peripheral signals are performed on hand-engineered features; data-level fusion is performed for the deep-learning methods. It turns out that the fused data perform better than either modality alone. To take advantage of deep-learning algorithms, we augment the original data and feed it directly into our training model. We use two deep architectures and another generative stacked semi-supervised architecture as references for comparison to test the method’s practical effects. The results reveal that our scheme slightly outperforms the other three deep feature extractors and surpasses the state of the art for hand-engineered features. Full article
(This article belongs to the Topic Machine and Deep Learning)

23 pages, 774 KiB  
Article
An Explainable Approach Based on Emotion and Sentiment Features for Detecting People with Mental Disorders on Social Networks
by Leslie Marjorie Gallegos Salazar, Octavio Loyola-González and Miguel Angel Medina-Pérez
Appl. Sci. 2021, 11(22), 10932; https://doi.org/10.3390/app112210932 - 19 Nov 2021
Cited by 10 | Viewed by 3467
Abstract
Mental disorders are a global problem that widely affects different segments of the population. Diagnosis and treatment are difficult to obtain, as there are not enough specialists on the matter, and mental health is not yet a common topic among the population. The computer science field has proposed some solutions to detect the risk of depression, based on language use and data obtained through social media. These solutions are mainly focused on objective features, such as n-grams and lexicons, which are complicated to be understood by experts in the application area. Hence, in this paper, we propose a contrast pattern-based classifier to detect depression by using a new data representation based only on emotion and sentiment analysis extracted from posts on social media. Our proposed feature representation contains 28 different features, which are more understandable by specialists than other proposed representations. Our feature representation jointly with a contrast pattern-based classifier has obtained better classification results than five other combinations of features and classifiers reported in the literature. Our proposal statistically outperformed the Random Forest, Naive Bayes, and AdaBoost classifiers using the parser-tree, VAD (Valence, Arousal, and Dominance) and Topics, and Bag of Words (BOW) representations. It obtained similar statistical results to the logistic regression models using the Ensemble of BOWs and Handcrafted features representations. In all cases, our proposal was able to provide an explanation close to the language of experts, due to the mined contrast patterns. Full article
(This article belongs to the Special Issue Explainable Artificial Intelligence (XAI))

18 pages, 1897 KiB  
Article
Mixing and Matching Emotion Frameworks: Investigating Cross-Framework Transfer Learning for Dutch Emotion Detection
by Luna De Bruyne, Orphée De Clercq and Véronique Hoste
Electronics 2021, 10(21), 2643; https://doi.org/10.3390/electronics10212643 - 29 Oct 2021
Cited by 6 | Viewed by 3312
Abstract
Emotion detection has become a growing field of study, especially seeing its broad application potential. Research usually focuses on emotion classification, but performance tends to be rather low, especially when dealing with more advanced emotion categories that are tailored to specific tasks and domains. Therefore, we propose the use of the dimensional emotion representations valence, arousal and dominance (VAD), in an emotion regression task. Firstly, we hypothesize that they can improve performance of the classification task, and secondly, they might be used as a pivot mechanism to map towards any given emotion framework, which allows tailoring emotion frameworks to specific applications. In this paper, we examine three cross-framework transfer methodologies: multi-task learning, in which VAD regression and classification are learned simultaneously; meta-learning, where VAD regression and emotion classification are learned separately and predictions are jointly used as input for a meta-learner; and a pivot mechanism, which converts the predictions of the VAD model to emotion classes. We show that dimensional representations can indeed boost performance for emotion classification, especially in the meta-learning setting (up to 7% macro F1-score compared to regular emotion classification). The pivot method was not able to compete with the base model, but further inspection suggests that it could be efficient, provided that the VAD regression model is further improved. Full article
(This article belongs to the Special Issue Emerging Application of Sentiment Analysis Technologies)

24 pages, 1424 KiB  
Article
Effectiveness of a Multicomponent Treatment for Fibromyalgia Based on Pain Neuroscience Education, Exercise Therapy, Psychological Support, and Nature Exposure (NAT-FM): A Pragmatic Randomized Controlled Trial
by Mayte Serrat, Míriam Almirall, Marta Musté, Juan P. Sanabria-Mazo, Albert Feliu-Soler, Jorge L. Méndez-Ulrich, Juan V. Luciano and Antoni Sanz
J. Clin. Med. 2020, 9(10), 3348; https://doi.org/10.3390/jcm9103348 - 18 Oct 2020
Cited by 60 | Viewed by 11015
Abstract
A recent study (FIBROWALK) has supported the effectiveness of a multicomponent treatment based on pain neuroscience education (PNE), exercise therapy (TE), cognitive behavioral therapy (CBT), and mindfulness in patients with fibromyalgia. The aims of the present RCT were: (a) to analyze the effectiveness of a 12-week multicomponent treatment (nature activity therapy for fibromyalgia, NAT-FM) based on the same therapeutic components described above plus nature exposure to maximize improvements in functional impairment (primary outcome), as well as pain, fatigue, anxiety-depression, physical functioning, positive and negative affect, self-esteem, and perceived stress (secondary outcomes), and kinesiophobia, pain catastrophizing thoughts, personal perceived competence, and cognitive emotion regulation (process variables) compared with treatment as usual (TAU); (b) to preliminarily assess the effects of the nature-based activities included (yoga, Nordic walking, nature photography, and Shinrin Yoku); and (c) to examine whether the positive effects of TAU + NAT-FM on primary and secondary outcomes at post-treatment were mediated through baseline-to-six-week changes in the process variables. A total of 169 FM patients were randomized into two study arms: TAU + NAT-FM vs. TAU alone. Data were collected at baseline, at six weeks of treatment, at post-treatment, and throughout treatment by ecological momentary assessment (EMA). Using an intention-to-treat (ITT) approach, linear mixed-effects models and mediational models through path analyses were computed. Overall, TAU + NAT-FM was significantly more effective than TAU at post-treatment for the primary and secondary outcomes evaluated, as well as for the process variables. Moderate-to-large effect sizes were achieved at six weeks for functional impairment, anxiety, kinesiophobia, perceived competence, and positive reappraisal. The number needed to treat (NNT) was 3 (95% CI = 1.6–3.2). The nature activities yielded an improvement in affective valence, arousal, dominance, fatigue, pain, stress, and self-efficacy. Kinesiophobia and perceived competence were the mediators that could explain a significant part of the improvements obtained with the TAU + NAT-FM treatment. TAU + NAT-FM is an effective co-adjuvant multicomponent treatment for improving FM-related symptoms. Full article

18 pages, 553 KiB  
Article
Fear Level Classification Based on Emotional Dimensions and Machine Learning Techniques
by Oana Bălan, Gabriela Moise, Alin Moldoveanu, Marius Leordeanu and Florica Moldoveanu
Sensors 2019, 19(7), 1738; https://doi.org/10.3390/s19071738 - 11 Apr 2019
Cited by 69 | Viewed by 9000
Abstract
There has been steady progress in the field of affective computing over the last two decades that has integrated artificial intelligence techniques in the construction of computational models of emotion. Having, as a purpose, the development of a system for treating phobias that would automatically determine fear levels and adapt exposure intensity based on the user’s current affective state, we propose a comparative study between various machine and deep learning techniques (four deep neural network models, a stochastic configuration network, Support Vector Machine, Linear Discriminant Analysis, Random Forest and k-Nearest Neighbors), with and without feature selection, for recognizing and classifying fear levels based on the electroencephalogram (EEG) and peripheral data from the DEAP (Database for Emotion Analysis using Physiological signals) database. Fear was considered an emotion eliciting low valence, high arousal and low dominance. By dividing the ratings of valence/arousal/dominance emotion dimensions, we propose two paradigms for fear level estimation—the two-level (0—no fear and 1—fear) and the four-level (0—no fear, 1—low fear, 2—medium fear, 3—high fear) paradigms. Although all the methods provide good classification accuracies, the highest F scores have been obtained using the Random Forest Classifier—89.96% and 85.33% for the two-level and four-level fear evaluation modality. Full article
(This article belongs to the Section Biosensors)
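The fear-level abstract above derives two labeling paradigms by dividing the valence/arousal/dominance ratings, with fear read as low valence, high arousal, and low dominance. One plausible reading, assuming 1–9 rating scales split at the midpoint (both the thresholds and the count-based four-level rule are illustrative assumptions, not the authors' exact procedure):

```python
def fear_levels(valence, arousal, dominance, midpoint=5.0):
    """Assign two-level and four-level fear labels from V/A/D ratings.

    Two-level: 1 (fear) only when all three dimensions fall on the fearful
    side of the midpoint, else 0 (no fear).
    Four-level: 0 = no fear .. 3 = high fear, here taken as the number of
    dimensions on the fearful side (an assumed reading of the paper's split).
    """
    cues = [valence < midpoint, arousal > midpoint, dominance < midpoint]
    n = sum(cues)                 # how many fearful cues are present
    two_level = 1 if n == 3 else 0
    four_level = n
    return two_level, four_level
```

Under this reading, a rating of (2, 8, 2) is labeled (1, 3) — fear, high fear — while (8, 2, 8) is labeled (0, 0).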
