Search Results (15)

Search Parameters:
Keywords = AMIGOS dataset

28 pages, 7066 KiB  
Systematic Review
A Systematic Review on Artificial Intelligence-Based Multimodal Dialogue Systems Capable of Emotion Recognition
by Luis Bravo, Ciro Rodriguez, Pedro Hidalgo and Cesar Angulo
Multimodal Technol. Interact. 2025, 9(3), 28; https://doi.org/10.3390/mti9030028 - 14 Mar 2025
Viewed by 2934
Abstract
In the current context, the use of technologies in applications for multimodal dialogue systems with computers and emotion recognition through artificial intelligence continues to grow rapidly. Consequently, it is challenging for researchers to identify gaps, propose new models, and increase user satisfaction. The objective of this study is to explore and analyze potential applications based on artificial intelligence for multimodal dialogue systems incorporating emotion recognition. The methodology used in selecting papers is in accordance with PRISMA and identifies 13 scientific articles whose research proposals are generally focused on convolutional neural networks (CNNs), Long Short-Term Memory (LSTM), GRU, and BERT. The research results identify the proposed models as Mindlink-Eumpy, RHPRnet, Emo Fu-Sense, 3FACRNNN, H-MMER, TMID, DKMD, and MatCR. The datasets used are DEAP, MAHNOB-HCI, SEED-IV, SEED-V, AMIGOS, and DREAMER. In addition, the metrics achieved by the models are presented. It is concluded that emotion recognition models such as Emo Fu-Sense, 3FACRNNN, and H-MMER obtain outstanding results, with their accuracy ranging from 92.62% to 98.19%, while multimodal dialogue models such as TMID and the scene-aware model obtain BLEU4 scores of 51.59% and 29%, respectively.

20 pages, 6575 KiB  
Article
Transcriptome-Wide Association Study Reveals New Molecular Interactions Associated with Melanoma Pathogenesis
by Mohamed N. Saad and Mohamed Hamed
Cancers 2024, 16(14), 2517; https://doi.org/10.3390/cancers16142517 - 11 Jul 2024
Cited by 2 | Viewed by 2462
Abstract
A transcriptome-wide association study (TWAS) was conducted on genome-wide association study (GWAS) summary statistics of malignant melanoma of skin (UK Biobank dataset) and The Cancer Genome Atlas-Skin Cutaneous Melanoma (TCGA-SKCM) gene expression weights to identify melanoma susceptibility genes. The GWAS included 2465 cases and 449,799 controls, while the gene expression testing was conducted on 103 cases. Afterward, a gene enrichment analysis was applied to identify significant TWAS associations. The melanoma gene–microRNA (miRNA) regulatory network was constructed from the TWAS genes and their corresponding miRNAs. Finally, a disease enrichment analysis was conducted on the corresponding miRNAs. The TWAS detected 27 genes associated with melanoma with p-values less than 0.05 (the top three genes being LOC389458 (RBAK), C16orf73 (MEIOB), and EIF3CL). After the joint/conditional test, one gene (AMIGO1) was dropped, resulting in 26 significant genes. Gene Ontology (GO) biological process analysis associated the extended gene set (76 genes) with protein K11-linked ubiquitination and regulation of cell cycle phase transition. K11-linked ubiquitin chains regulate cell division. Interestingly, the extended gene set was related to different skin cancer subtypes. Moreover, the enriched pathways were nsp1 from SARS-CoV-2, which inhibits translation initiation in the host cell, cell cycle, translation factors, and the DNA repair pathways full network. The gene–miRNA regulatory network identified 10 hotspot genes, the top three being TP53, BRCA1, and MDM2, and four hotspot miRNAs: mir-16, mir-15a, mir-125b, and mir-146a. Melanoma was among the top ten diseases associated with the corresponding (106) miRNAs. Our results shed light on melanoma pathogenesis and biologically significant molecular interactions.
(This article belongs to the Special Issue Biomarkers for the Early Detection and Treatment of Cancers)
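The hotspot genes above are, in essence, the most highly connected nodes of the gene–miRNA network. A minimal sketch of that idea, assuming hotspots are ranked by node degree (the paper's exact criterion may differ) and using made-up edges purely for illustration:

```python
import networkx as nx

# Synthetic gene–miRNA edges for illustration only (not the paper's network);
# here a "hotspot" is assumed to be a high-degree node.
edges = [("TP53", "mir-16"), ("TP53", "mir-15a"), ("TP53", "mir-125b"),
         ("BRCA1", "mir-16"), ("BRCA1", "mir-15a"), ("MDM2", "mir-146a")]
G = nx.Graph(edges)
genes = {"TP53", "BRCA1", "MDM2"}
hotspots = sorted((n for n in G if n in genes), key=G.degree, reverse=True)
print(hotspots)  # genes ranked by connectivity, most connected first
```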

16 pages, 533 KiB  
Article
Zooming into the Complex Dynamics of Electrodermal Activity Recorded during Emotional Stimuli: A Multiscale Approach
by Laura Lavezzo, Andrea Gargano, Enzo Pasquale Scilingo and Mimma Nardelli
Bioengineering 2024, 11(6), 520; https://doi.org/10.3390/bioengineering11060520 - 21 May 2024
Cited by 2 | Viewed by 2141
Abstract
Physiological phenomena exhibit complex behaviours arising at multiple time scales. To investigate them, techniques derived from chaos theory have been applied to physiological signals, providing promising results in distinguishing between healthy and pathological states. Fractal-like properties of electrodermal activity (EDA), a well-validated tool for monitoring the autonomic nervous system state, have been reported in the previous literature. This study proposes the multiscale complexity index of electrodermal activity (MComEDA) to discern different autonomic responses based on EDA signals. This method builds upon our previously proposed algorithm, ComEDA, and is empowered with a coarse-graining procedure that provides a view of the EDA response at multiple time scales. We tested MComEDA's performance on the EDA signals of two publicly available datasets, i.e., the Continuously Annotated Signals of Emotion (CASE) dataset and the Affect, Personality and Mood Research on Individuals and Groups (AMIGOS) dataset, both containing physiological data recorded from healthy participants during the viewing of ultra-short emotional video clips. Our results highlighted that the values of MComEDA were significantly different (p-value < 0.05 after the Wilcoxon signed-rank test with Bonferroni correction) when comparing high- and low-arousal stimuli. Furthermore, MComEDA outperformed the single-scale approach in discriminating among different valence levels of high-arousal stimuli, e.g., showing significantly different values for scary and amusing stimuli (p-value = 0.024). These findings suggest that a multiscale approach to the nonlinear analysis of EDA signals can improve the information gathered on the task-specific autonomic response, even when ultra-short time series are considered.
(This article belongs to the Special Issue Advances in Multivariate and Multiscale Physiological Signal Analysis)
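The coarse-graining step that MComEDA adds to ComEDA is not spelled out in the abstract, but the standard multiscale procedure averages non-overlapping windows whose length equals the scale factor. A minimal sketch under that assumption:

```python
import numpy as np

def coarse_grain(x, scale):
    """Standard multiscale coarse-graining: average non-overlapping windows."""
    x = np.asarray(x, dtype=float)
    n = len(x) // scale
    return x[: n * scale].reshape(n, scale).mean(axis=1)

# e.g., an EDA series viewed at time scales 1..5 before computing a complexity index
eda = np.random.default_rng(0).normal(size=1000)
views = [coarse_grain(eda, s) for s in range(1, 6)]
```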

18 pages, 9824 KiB  
Article
De Novo Variants Found in Three Distinct Schizophrenia Populations Hit a Common Core Gene Network Related to Microtubule and Actin Cytoskeleton Gene Ontology Classes
by Yann Loe-Mie, Christine Plançon, Caroline Dubertret, Takeo Yoshikawa, Binnaz Yalcin, Stephan C. Collins, Anne Boland, Jean-François Deleuze, Philip Gorwood, Dalila Benmessaoud, Michel Simonneau and Aude-Marie Lepagnol-Bestel
Life 2024, 14(2), 244; https://doi.org/10.3390/life14020244 - 9 Feb 2024
Cited by 3 | Viewed by 2119
Abstract
Schizophrenia (SZ) is a heterogeneous and debilitating psychiatric disorder with a strong genetic component. To elucidate functional networks perturbed in schizophrenia, we analysed a large dataset of whole-genome studies that identified SNVs and CNVs, together with a multi-stage schizophrenia genome-wide association study. Our analysis identified three subclusters that are interrelated and have small overlaps: GO:0007017~Microtubule-Based Process, GO:0015629~Actin Cytoskeleton, and GO:0007268~Synaptic Transmission. We next analysed three distinct trio cohorts of 75 Algerian, 45 French, and 61 Japanese SZ patients. We performed Illumina HiSeq whole-exome sequencing and identified de novo mutations using a Bayesian approach. We validated 88 de novo mutations by Sanger sequencing: 35 in French, 21 in Algerian, and 32 in Japanese SZ patients. These 88 de novo mutations exhibited an enrichment in genes encoding proteins related to GO:0051015~actin filament binding (p = 0.0011) using DAVID, and enrichments in GO:0003774~transport (p = 0.019) and GO:0003729~mRNA binding (p = 0.010) using AmiGO. One of these de novo variants was found in the CORO1C coding sequence. We studied Coro1c haploinsufficiency in a Coro1c+/− mouse and found defects in the corpus callosum. These results could motivate future studies of the mechanisms surrounding genes encoding proteins involved in transport and the cytoskeleton, with the goal of developing therapeutic intervention strategies for a subset of SZ cases.
(This article belongs to the Special Issue Genomics and Transcriptomics Research in Medicine)

23 pages, 2756 KiB  
Article
Online Learning for Wearable EEG-Based Emotion Classification
by Sidratul Moontaha, Franziska Elisabeth Friederike Schumann and Bert Arnrich
Sensors 2023, 23(5), 2387; https://doi.org/10.3390/s23052387 - 21 Feb 2023
Cited by 19 | Viewed by 6599
Abstract
Giving emotional intelligence to machines can facilitate the early detection and prediction of mental diseases and symptoms. Electroencephalography (EEG)-based emotion recognition is widely applied because it measures electrical correlates directly from the brain rather than indirectly measuring other physiological responses initiated by the brain. Therefore, we used non-invasive and portable EEG sensors to develop a real-time emotion classification pipeline. The pipeline trains different binary classifiers for the Valence and Arousal dimensions from an incoming EEG data stream, achieving a 23.9% (Arousal) and 25.8% (Valence) higher F1-score on the state-of-the-art AMIGOS dataset than previous work. Afterward, the pipeline was applied to a curated dataset from 15 participants who used two consumer-grade EEG devices while watching 16 short emotional videos in a controlled environment. Mean F1-scores of 87% (Arousal) and 82% (Valence) were achieved in an immediate-label setting. Additionally, the pipeline proved fast enough to make predictions in real time in a live scenario with delayed labels while being continuously updated. The significant discrepancy in classification scores relative to the readily available labels motivates future work that includes more data; thereafter, the pipeline will be ready for real-time emotion classification applications.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors (Volume II))
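The abstract does not include the pipeline's implementation; a minimal sketch of the general idea, one incremental binary classifier per affect dimension updated from a stream, is shown below using scikit-learn's partial_fit (feature dimensions and labels are synthetic placeholders):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Two independent binary classifiers, one per affect dimension, updated
# incrementally as each labelled EEG feature window arrives.
clf = {dim: SGDClassifier(loss="log_loss") for dim in ("arousal", "valence")}
classes = np.array([0, 1])  # low / high

rng = np.random.default_rng(0)
for _ in range(100):                      # simulated stream of feature windows
    x = rng.normal(size=(1, 16))          # e.g., band-power features per window
    for dim in clf:
        y = rng.integers(0, 2, size=1)    # immediate or delayed label, when available
        clf[dim].partial_fit(x, y, classes=classes)

print(clf["arousal"].predict(rng.normal(size=(1, 16))))
```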

19 pages, 1457 KiB  
Article
Comprehensive Analysis of Feature Extraction Methods for Emotion Recognition from Multichannel EEG Recordings
by Rajamanickam Yuvaraj, Prasanth Thagavel, John Thomas, Jack Fogarty and Farhan Ali
Sensors 2023, 23(2), 915; https://doi.org/10.3390/s23020915 - 12 Jan 2023
Cited by 46 | Viewed by 7542
Abstract
Advances in signal processing and machine learning have expedited electroencephalogram (EEG)-based emotion recognition research, and numerous EEG signal features have been investigated to detect or characterize human emotions. However, most studies in this area have used relatively small monocentric data and focused on a limited range of EEG features, making it difficult to compare the utility of different sets of EEG features for emotion recognition. This study addressed this by comparing the classification accuracy (performance) of a comprehensive range of EEG feature sets for identifying emotional states, in terms of valence and arousal. The classification accuracy of five EEG feature sets was investigated, including statistical features, fractal dimension (FD), Hjorth parameters, higher order spectra (HOS), and those derived using wavelet analysis. Performance was evaluated using two classifier methods, support vector machine (SVM) and classification and regression tree (CART), across five independent and publicly available datasets linking EEG to emotional states: MAHNOB-HCI, DEAP, SEED, AMIGOS, and DREAMER. The FD-CART feature-classification method attained the best mean classification accuracy for valence (85.06%) and arousal (84.55%) across the five datasets. The stability of these findings across the five different datasets also indicates that FD features derived from EEG data are reliable for emotion recognition. The results may lead to the possible development of an online feature extraction framework, thereby enabling the development of an EEG-based emotion recognition system in real time.
(This article belongs to the Special Issue EEG Signal Processing Techniques and Applications)
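As an illustration of the best-performing combination, fractal dimension features with a CART classifier, here is a hedged sketch using the Higuchi fractal dimension (one common FD estimator; the paper may use a different one) and scikit-learn's decision tree, which implements CART:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def higuchi_fd(x, k_max=10):
    """Higuchi fractal dimension: slope of log(L(k)) vs. log(1/k)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    lk = []
    for k in range(1, k_max + 1):
        Lm = []
        for m in range(k):
            idx = np.arange(m, N, k)       # subsampled series x[m], x[m+k], ...
            n_int = len(idx) - 1           # number of increments
            if n_int < 1:
                continue
            Lm.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n_int * k) / k)
        lk.append(np.mean(Lm))
    k_vals = np.arange(1, k_max + 1)
    slope, _ = np.polyfit(np.log(1.0 / k_vals), np.log(lk), 1)
    return slope

# One FD value per EEG channel as the feature vector for a trial (synthetic data)
rng = np.random.default_rng(0)
trials = rng.normal(size=(60, 14, 512))              # 60 trials, 14 channels
X = np.array([[higuchi_fd(ch) for ch in trial] for trial in trials])
y = rng.integers(0, 2, size=60)                      # e.g., low/high valence

cart = DecisionTreeClassifier().fit(X[:40], y[:40])  # DecisionTreeClassifier is CART
print(cart.score(X[40:], y[40:]))
```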

16 pages, 3039 KiB  
Article
An Ensemble Learning Method for Emotion Charting Using Multimodal Physiological Signals
by Amna Waheed Awan, Syed Muhammad Usman, Shehzad Khalid, Aamir Anwar, Roobaea Alroobaea, Saddam Hussain, Jasem Almotiri, Syed Sajid Ullah and Muhammad Usman Akram
Sensors 2022, 22(23), 9480; https://doi.org/10.3390/s22239480 - 4 Dec 2022
Cited by 13 | Viewed by 3867
Abstract
Emotion charting using multimodal signals is in great demand for stroke-affected patients, for psychiatrists examining patients, and for neuromarketing applications. Multimodal signals for emotion charting include electrocardiogram (ECG), electroencephalogram (EEG), and galvanic skin response (GSR) signals. EEG, ECG, and GSR are also known as physiological signals, which can be used to identify human emotions. Because physiological signals are generated autonomously by the human central nervous system and are therefore unbiased, this field has become a strong focus of recent research. Researchers have developed multiple methods for classifying these signals for emotion detection. However, due to the non-linear nature of these signals and the noise introduced during recording, accurate classification of physiological signals remains a challenge for emotion charting. Valence and arousal are two important states for emotion detection; therefore, this paper presents a novel ensemble learning method based on deep learning for the classification of four emotional states: high valence and high arousal (HVHA), low valence and low arousal (LVLA), high valence and low arousal (HVLA), and low valence and high arousal (LVHA). In the proposed method, the multimodal signals (EEG, ECG, and GSR) are preprocessed using bandpass filtering and independent component analysis (ICA) for noise removal in EEG signals, followed by a discrete wavelet transform for time-domain to frequency-domain conversion. The discrete wavelet transform yields spectrograms of the physiological signals, from which features are extracted using stacked autoencoders. A feature vector obtained from the bottleneck layer of the autoencoder is fed to three classifiers, SVM (support vector machine), RF (random forest), and LSTM (long short-term memory), followed by majority voting as ensemble classification. The proposed system is trained and tested on the AMIGOS dataset with k-fold cross-validation. It achieves a highest accuracy of 94.5%, an improvement over other state-of-the-art methods.
(This article belongs to the Special Issue Advances in IoMT for Healthcare Systems)
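A minimal sketch of the majority-voting ensemble stage, using scikit-learn's VotingClassifier with an MLP standing in for the paper's LSTM so the example stays self-contained (features and labels are synthetic placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Hypothetical autoencoder bottleneck features and 4-class emotion labels
rng = np.random.default_rng(0)
X, y = rng.normal(size=(400, 32)), rng.integers(0, 4, size=400)

ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC()),                        # hard voting needs only predict()
        ("rf", RandomForestClassifier()),
        ("mlp", MLPClassifier(max_iter=500)),  # stand-in for the paper's LSTM
    ],
    voting="hard",                             # majority vote across the three
)
ensemble.fit(X[:300], y[:300])
print(ensemble.score(X[300:], y[300:]))
```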

21 pages, 520 KiB  
Article
Feature Selection for Continuous within- and Cross-User EEG-Based Emotion Recognition
by Nicole Bendrich, Pradeep Kumar and Erik Scheme
Sensors 2022, 22(23), 9282; https://doi.org/10.3390/s22239282 - 29 Nov 2022
Cited by 1 | Viewed by 2573
Abstract
The monitoring of emotional state is important in the prevention and management of mental health problems and is increasingly being used to support affective computing. As such, researchers are exploring various modalities from which emotion can be inferred, such as through facial images or via electroencephalography (EEG) signals. Current research commonly investigates the performance of machine-learning-based emotion recognition systems by exposing users to stimuli that are assumed to elicit a single unchanging emotional response. Moreover, in order to demonstrate better results, many models are tested in evaluation frameworks that do not reflect realistic real-world implementations. Consequently, in this paper, we explore the design of EEG-based emotion recognition systems with longer, variable stimuli from the publicly available AMIGOS dataset. Feature engineering and selection results are evaluated across four different cross-validation frameworks, including versions of leave-one-movie-out (testing with a known user but a previously unseen movie), leave-one-person-out (testing with a known movie but a previously unseen person), and leave-one-person-and-movie-out (testing on both a new user and a new movie). Feature selection leads to a 13% absolute improvement over comparable previously reported studies and demonstrates the importance of the evaluation framework in the design and performance of EEG-based emotion recognition systems.
(This article belongs to the Special Issue Sensor Based Multi-Modal Emotion Recognition)
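The cross-validation frameworks can be expressed with scikit-learn's group-based splitters; a sketch assuming per-sample person and movie IDs (all data here is synthetic):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-ins: X features, y binary labels, person/movie ID per sample
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = rng.integers(0, 2, size=200)
person = rng.integers(0, 5, size=200)
movie = rng.integers(0, 4, size=200)

# Leave-one-person-out (leave-one-movie-out is the same with `movie` as groups)
lopo = cross_val_score(SVC(), X, y, groups=person, cv=LeaveOneGroupOut())

# Leave-one-person-and-movie-out: test on one (person, movie) pair and drop
# that person's and that movie's remaining samples from the training set.
scores = []
for p in np.unique(person):
    for m in np.unique(movie):
        test = (person == p) & (movie == m)
        train = (person != p) & (movie != m)
        if test.any():
            clf = SVC().fit(X[train], y[train])
            scores.append(clf.score(X[test], y[test]))
print(lopo.mean(), np.mean(scores))
```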

18 pages, 3556 KiB  
Article
Research on Emotion Recognition for Online Learning in a Novel Computing Model
by Mengnan Chen, Lun Xie, Chiqin Li and Zhiliang Wang
Appl. Sci. 2022, 12(9), 4236; https://doi.org/10.3390/app12094236 - 22 Apr 2022
Cited by 10 | Viewed by 3182
Abstract
The recognition of human emotions is expected to completely change the mode of human-computer interaction. In emotion recognition research, we need to focus on accuracy and real-time performance in order to apply emotion recognition based on physiological signals to practical problems. Considering the timeliness dimension of emotion recognition, we propose a terminal-edge-cloud system architecture. Compared to traditional affective computing architectures, the proposed architecture reduces average time consumption by 15% when running the same affective computing process. We propose a Joint Mutual Information (JMI)-based feature extraction affective computing model and conduct extensive experiments on the AMIGOS dataset. Experimental comparison shows that this feature extraction network has clear advantages over commonly used methods. The model performs sentiment classification with average accuracies of 71% for valence and 81.8% for arousal; compared with recent similar sentiment classifiers, the average accuracy is improved by 0.85%. In addition, we set up an experiment with 30 participants in an online learning scenario to validate the computing system and algorithm model. The results proved that the accuracy and real-time recognition were satisfactory and improved the real-time emotional interaction experience in online learning.
(This article belongs to the Special Issue Artificial Intelligence in Online Higher Educational Data Mining)
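The JMI criterion scores a candidate feature f by summing the joint mutual information I((f, g); y) over already-selected features g. A rough greedy sketch with simple equal-frequency discretization (an assumption; the paper's estimator may differ):

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(col, bins=8):
    # Equal-frequency binning into integer codes 0..bins-1
    edges = np.quantile(col, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(col, edges)

def jmi_select(X, y, n_select=5, bins=8):
    Xd = np.column_stack([discretize(X[:, j], bins) for j in range(X.shape[1])])
    mi = [mutual_info_score(Xd[:, j], y) for j in range(X.shape[1])]
    selected = [int(np.argmax(mi))]          # seed with the top single feature
    while len(selected) < n_select:
        best, best_score = -1, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            # JMI score: sum over selected g of I((f, g); y), where the pair
            # (f, g) is encoded as one discrete variable
            score = sum(mutual_info_score(Xd[:, f] * bins + Xd[:, g], y)
                        for g in selected)
            if score > best_score:
                best, best_score = f, score
        selected.append(best)
    return selected

rng = np.random.default_rng(0)
X, y = rng.normal(size=(300, 20)), rng.integers(0, 2, size=300)
print(jmi_select(X, y))
```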

25 pages, 771 KiB  
Article
Emotion Recognition from Physiological Channels Using Graph Neural Network
by Tomasz Wierciński, Mateusz Rock, Robert Zwierzycki, Teresa Zawadzka and Michał Zawadzki
Sensors 2022, 22(8), 2980; https://doi.org/10.3390/s22082980 - 13 Apr 2022
Cited by 14 | Viewed by 5393
Abstract
In recent years, a number of new research papers have emerged on the application of neural networks in affective computing. One of the newest trends observed is the utilization of graph neural networks (GNNs) to recognize emotions. The study presented in the paper follows this trend. Within the work, GraphSleepNet (a GNN for classifying the stages of sleep) was adjusted for emotion recognition and validated for this purpose. The key assumption of the validation was to analyze its correctness for the Circumplex model in order to further analyze the solution for emotion recognition in the Ekman model. The novelty of this research lies not only in the utilization of a GNN with the GraphSleepNet architecture for emotion recognition, but also in the analysis of the potential of emotion recognition based on differential entropy features in the Ekman model with a neutral state, with a special focus on continuous emotion recognition during the performance of an activity. The GNN was validated against the AMIGOS dataset. The research shows how the use of various modalities influences the correctness of the recognition of basic emotions and the neutral state. Moreover, the correctness of the recognition of basic emotions is validated for two configurations of the GNN. The results show numerous interesting observations for Ekman's model, while the accuracy for the Circumplex model is similar to the baseline methods.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)
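Differential entropy features, for a band-filtered EEG segment assumed Gaussian, reduce to a closed form in the signal variance; a short sketch:

```python
import numpy as np

def differential_entropy(band_signal):
    """DE of a Gaussian-distributed band-filtered EEG segment:
    0.5 * ln(2 * pi * e * variance)."""
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

# e.g., one DE value per channel per frequency band forms the node features
segment = np.random.default_rng(0).normal(scale=2.0, size=256)
print(differential_entropy(segment))
```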

31 pages, 1027 KiB  
Article
Graph Representation Integrating Signals for Emotion Recognition and Analysis
by Teresa Zawadzka, Tomasz Wierciński, Grzegorz Meller, Mateusz Rock, Robert Zwierzycki and Michał R. Wróbel
Sensors 2021, 21(12), 4035; https://doi.org/10.3390/s21124035 - 11 Jun 2021
Cited by 6 | Viewed by 4968
Abstract
Data reusability is an important feature of current research in virtually every field of science. Modern research in Affective Computing often relies on datasets containing experiment-originated data such as biosignals, video clips, or images. Moreover, conducting experiments with a vast number of participants to build datasets for Affective Computing research is time-consuming and expensive. Therefore, it is extremely important to provide solutions that allow one to (re)use data from a variety of sources, which usually demands data integration. This paper presents the Graph Representation Integrating Signals for Emotion Recognition and Analysis (GRISERA) framework, which provides a persistent model for storing integrated signals and methods for its creation. To the best of our knowledge, this is the first approach in the Affective Computing field that addresses the problem of integrating data from multiple experiments, storing it in a consistent way, and providing query patterns for data retrieval. The proposed framework is based on a standardized graph model, which is known to be highly suitable for signal processing purposes. The validation proved that data from the well-known AMIGOS dataset can be stored in the GRISERA framework and later retrieved for training deep learning models. Furthermore, a second case study proved that it is possible to integrate signals from multiple sources (AMIGOS, ASCERTAIN, and DEAP) into GRISERA and retrieve them for further statistical analysis.
(This article belongs to the Special Issue Emotion Recognition Based on Sensors)

21 pages, 1404 KiB  
Article
Predicting Exact Valence and Arousal Values from EEG
by Filipe Galvão, Soraia M. Alarcão and Manuel J. Fonseca
Sensors 2021, 21(10), 3414; https://doi.org/10.3390/s21103414 - 14 May 2021
Cited by 72 | Viewed by 8620
Abstract
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing of increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta, and gamma bands, and the differential asymmetry from the alpha band. Results using the DEAP, AMIGOS, and DREAMER datasets show that our model can predict valence and arousal values with low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves, and machine learning models typically used in emotion classification tasks can be used in more challenging situations, such as the prediction of exact values for valence and arousal.
(This article belongs to the Special Issue Biomedical Signal Acquisition and Processing Using Sensors)
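The reported best configuration maps directly onto scikit-learn; a sketch with synthetic stand-ins for the EEG features and the valence/arousal targets:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical band-power + alpha-asymmetry features with valence/arousal targets
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = rng.uniform(1, 9, size=(500, 2))   # columns: valence, arousal

# The reported best configuration: 1-NN with Manhattan (cityblock) distance;
# KNeighborsRegressor handles the two targets jointly.
model = KNeighborsRegressor(n_neighbors=1, metric="manhattan").fit(X[:400], y[:400])
pred = model.predict(X[400:])
print(np.mean(np.abs(pred - y[400:]), axis=0))  # per-target mean absolute error
```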

26 pages, 1182 KiB  
Article
CNN and LSTM-Based Emotion Charting Using Physiological Signals
by Muhammad Najam Dar, Muhammad Usman Akram, Sajid Gul Khawaja and Amit N. Pujari
Sensors 2020, 20(16), 4551; https://doi.org/10.3390/s20164551 - 14 Aug 2020
Cited by 95 | Viewed by 10980
Abstract
Novel trends in affective computing are based on reliable sources of physiological signals such as the electroencephalogram (EEG), electrocardiogram (ECG), and galvanic skin response (GSR). The use of these signals poses the challenge of improving performance within a broader set of emotion classes in a less constrained, real-world environment. To overcome these challenges, we propose a computational framework with a 2D Convolutional Neural Network (CNN) architecture for the arrangement of 14 EEG channels, and a combination of Long Short-Term Memory (LSTM) and 1D-CNN architectures for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets, DREAMER and AMIGOS, with low-cost, wearable sensors to extract physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence-High Arousal, High Valence-Low Arousal, Low Valence-High Arousal, and Low Valence-Low Arousal. Average emotion elicitation accuracy of 98.73% is achieved with the ECG right-channel modality, 76.65% with the EEG modality, and 63.67% with the GSR modality for AMIGOS. The overall highest accuracies of 99.0% for the AMIGOS dataset and 90.8% for the DREAMER dataset are achieved with multi-modal fusion. A strong correlation between spectral- and hidden-layer feature analysis and classification performance suggests the efficacy of the proposed method for meaningful feature extraction and higher emotion elicitation performance in a broader range of less constrained environments.
(This article belongs to the Special Issue Signal Processing Using Non-invasive Physiological Sensors)
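A hedged PyTorch sketch of the ECG/GSR branch described above, a 1D-CNN feeding an LSTM; layer sizes and window length are illustrative guesses, not the paper's configuration:

```python
import torch
import torch.nn as nn

class EcgGsrNet(nn.Module):
    """Hypothetical 1D-CNN + LSTM stack for a single-channel signal."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 1, time)
        z = self.conv(x)               # (batch, 32, time/16)
        z = z.transpose(1, 2)          # (batch, time/16, 32) for the LSTM
        _, (h, _) = self.lstm(z)
        return self.fc(h[-1])          # logits for the 4 valence/arousal classes

logits = EcgGsrNet()(torch.randn(8, 1, 1024))  # e.g., 8 windows of 1024 samples
```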

15 pages, 1499 KiB  
Article
Automatic Recognition of Personality Profiles Using EEG Functional Connectivity during Emotional Processing
by Manousos A. Klados, Panagiota Konstantinidi, Rosalia Dacosta-Aguayo, Vasiliki-Despoina Kostaridou, Alessandro Vinciarelli and Michalis Zervakis
Brain Sci. 2020, 10(5), 278; https://doi.org/10.3390/brainsci10050278 - 3 May 2020
Cited by 25 | Viewed by 5952
Abstract
Personality is the characteristic set of an individual's behavioral and emotional patterns that evolve from biological and environmental factors. The recognition of personality profiles is crucial in making human–computer interaction (HCI) applications realistic, more focused, and user friendly. The ability to recognize personality using neuroscientific data underpins the neurobiological basis of personality. This paper aims to automatically recognize personality, combining scalp electroencephalogram (EEG) and machine learning techniques. As the resting-state EEG has not so far been proven efficient for predicting personality, we used EEG recordings elicited during emotion processing. This study was based on data from the AMIGOS dataset, reflecting the responses of 37 healthy participants. Brain networks and graph-theoretical parameters were extracted from cleaned EEG signals, while each trait score was dichotomized into low and high levels using the k-means algorithm. A feature selection algorithm was then used to reduce the feature-set size to the 10 best features for describing each trait separately. Support vector machines (SVM) were finally employed to classify each instance. Our method achieved classification accuracies of 83.8% for extraversion, 86.5% for agreeableness, 83.8% for conscientiousness, 83.8% for neuroticism, and 73% for openness.
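Two steps of the described pipeline, k-means dichotomization of a trait score and SVM classification, sketched with scikit-learn on synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Hypothetical 1D trait scores (e.g., extraversion) for 37 participants
rng = np.random.default_rng(1)
scores = rng.uniform(1, 7, size=(37, 1))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
# Relabel clusters so that 1 = high-level trait (larger cluster centre)
high = int(np.argmax(km.cluster_centers_.ravel()))
labels = (km.labels_ == high).astype(int)

# Classify each participant from (hypothetical) 10 graph-theoretic EEG features
X = rng.normal(size=(37, 10))
clf = SVC().fit(X, labels)
print(clf.score(X, labels))  # training accuracy on the synthetic data
```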

25 pages, 7679 KiB  
Article
Analysis of Personality and EEG Features in Emotion Recognition Using Machine Learning Techniques to Classify Arousal and Valence Labels
by Laura Alejandra Martínez-Tejada, Yasuhisa Maruyama, Natsue Yoshimura and Yasuharu Koike
Mach. Learn. Knowl. Extr. 2020, 2(2), 99-124; https://doi.org/10.3390/make2020007 - 13 Apr 2020
Cited by 22 | Viewed by 7184
Abstract
We analyzed the contribution of electroencephalogram (EEG) data, age, sex, and personality traits to emotion recognition processes (through the classification of arousal, valence, and discrete emotion labels) using feature selection techniques and machine learning classifiers. EEG traits and age, sex, and personality traits were retrieved from a well-known dataset, AMIGOS, and two sets of traits were built to analyze classification performance. We found that age, sex, and personality traits were not significantly associated with the classification of arousal, valence, and discrete emotions using machine learning. The added EEG features increased the classification accuracies (compared with the original report) for arousal and valence labels. Classification of arousal and valence labels achieved higher-than-chance levels; however, accuracy did not exceed 70% in the different tested scenarios. For discrete emotions, the mean accuracies and mean area-under-the-curve scores were higher than chance; however, F1 scores were low, implying that several false positives and false negatives were present. This study highlights the performance of EEG traits combined with age, sex, and personality traits in emotion classifiers. These findings could help in understanding the relationships among these traits at a technological and data level for personalized human–computer interaction systems.
(This article belongs to the Section Data)
