Search Results (16)

Search Parameters:
Keywords = EEG and ECG signal recognition

20 pages, 2565 KB  
Article
GBV-Net: Hierarchical Fusion of Facial Expressions and Physiological Signals for Multimodal Emotion Recognition
by Jiling Yu, Yandong Ru, Bangjun Lei and Hongming Chen
Sensors 2025, 25(20), 6397; https://doi.org/10.3390/s25206397 - 16 Oct 2025
Viewed by 1127
Abstract
A core challenge in multimodal emotion recognition lies in the precise capture of the inherent multimodal interactive nature of human emotions. Addressing the limitation of existing methods, which often process visual signals (facial expressions) and physiological signals (EEG, ECG, EOG, and GSR) in isolation and thus fail to exploit their complementary strengths effectively, this paper presents a new multimodal emotion recognition framework called the Gated Biological Visual Network (GBV-Net). This framework enhances emotion recognition accuracy through deep synergistic fusion of facial expressions and physiological signals. GBV-Net integrates three core modules: (1) a facial feature extractor based on a modified ConvNeXt V2 architecture incorporating lightweight Transformers, specifically designed to capture subtle spatio-temporal dynamics in facial expressions; (2) a hybrid physiological feature extractor combining 1D convolutions, Temporal Convolutional Networks (TCNs), and convolutional self-attention mechanisms, adept at modeling local patterns and long-range temporal dependencies in physiological signals; and (3) an enhanced gated attention fusion module capable of adaptively learning inter-modal weights to achieve dynamic, synergistic integration at the feature level. A thorough investigation of the publicly accessible DEAP and MAHNOB-HCI datasets reveals that GBV-Net surpasses contemporary methods. Specifically, on the DEAP dataset, the model attained classification accuracies of 95.10% for Valence and 95.65% for Arousal, with F1-scores of 95.52% and 96.35%, respectively. On MAHNOB-HCI, the accuracies achieved were 97.28% for Valence and 97.73% for Arousal, with F1-scores of 97.50% and 97.74%, respectively. These experimental findings substantiate that GBV-Net effectively captures deep-level interactive information between multimodal signals, thereby improving emotion recognition accuracy. Full article
(This article belongs to the Section Biomedical Sensors)
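
As a concrete illustration of the gated attention fusion described in this abstract, the following minimal PyTorch sketch fuses a facial feature vector and a physiological feature vector through a learned sigmoid gate. The dimensions, module names, and convex-combination form are illustrative assumptions, not the authors' GBV-Net code.

```python
# Minimal sketch of a gated feature-level fusion layer (PyTorch).
# Dimensions and module names are illustrative assumptions, not the
# authors' GBV-Net implementation.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, dim_face: int, dim_phys: int, dim_out: int):
        super().__init__()
        self.proj_face = nn.Linear(dim_face, dim_out)
        self.proj_phys = nn.Linear(dim_phys, dim_out)
        # The gate learns per-dimension modality weights from both inputs.
        self.gate = nn.Sequential(nn.Linear(2 * dim_out, dim_out), nn.Sigmoid())

    def forward(self, f_face: torch.Tensor, f_phys: torch.Tensor) -> torch.Tensor:
        hf = self.proj_face(f_face)          # (batch, dim_out)
        hp = self.proj_phys(f_phys)          # (batch, dim_out)
        g = self.gate(torch.cat([hf, hp], dim=-1))
        return g * hf + (1.0 - g) * hp       # convex combination per dimension

fused = GatedFusion(512, 128, 256)(torch.randn(8, 512), torch.randn(8, 128))
```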

17 pages, 2243 KB  
Article
Modeling Visual Fatigue in Remote Tower Air Traffic Controllers: A Multimodal Physiological Data-Based Approach
by Ruihan Liang, Weijun Pan, Qinghai Zuo, Chen Zhang, Shenhao Chen, Sheng Chen and Leilei Deng
Aerospace 2025, 12(6), 474; https://doi.org/10.3390/aerospace12060474 - 27 May 2025
Cited by 2 | Viewed by 1433
Abstract
As a forward-looking development in air traffic control (ATC), remote towers rely on virtualized information presentation, which may exacerbate visual fatigue among controllers and compromise operational safety. This study proposes a visual fatigue recognition model based on multimodal physiological signals. A 60-min simulated remote tower task was conducted with 36 participants, during which eye-tracking (ET), electroencephalography (EEG), electrocardiography (ECG), and electrodermal activity (EDA) signals were collected. Subjective fatigue questionnaires and objective ophthalmic measurements were also recorded before and after the task. Statistically significant features were identified through paired t-tests, and fatigue labels were constructed by combining subjective and objective indicators. LightGBM was then employed to rank feature importance by integrating split frequency and information gain into a composite score. The top 12 features were selected and used to train a multilayer perceptron (MLP) for classification. The model achieved an average balanced accuracy of 0.92 and an F1 score of 0.90 under 12-fold cross-validation, demonstrating excellent predictive performance. The high-ranking features spanned four modalities, revealing typical physiological patterns of visual fatigue across ocular behavior, cortical activity, autonomic regulation, and arousal level. These findings validate the effectiveness of multimodal fusion in modeling visual fatigue and provide theoretical and technical support for human factor monitoring and risk mitigation in remote tower environments. Full article
(This article belongs to the Section Air Traffic and Transportation)
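
The feature-ranking step described above can be approximated with common libraries. The sketch below scores features by combining LightGBM split counts and information gain, keeps the top 12, and trains an MLP; the synthetic data, normalization, and equal weighting of the two criteria are assumptions, not the paper's exact scoring.

```python
# Sketch: rank features by a composite of LightGBM split counts and gain,
# then train an MLP on the top 12. Synthetic data stands in for the
# multimodal fatigue features; the equal-weight composite is an assumption.
import numpy as np
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

booster = lgb.LGBMClassifier(n_estimators=200, random_state=0).fit(X, y)
split = booster.booster_.feature_importance(importance_type="split").astype(float)
gain = booster.booster_.feature_importance(importance_type="gain")

# Normalize each criterion and average them into a composite score.
score = split / split.sum() + gain / gain.sum()
top12 = np.argsort(score)[::-1][:12]

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
print(cross_val_score(mlp, X[:, top12], y, cv=12, scoring="balanced_accuracy").mean())
```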

30 pages, 11703 KB  
Article
A Multimodal Feature Fusion Brain Fatigue Recognition System Based on Bayes-gcForest
by You Zhou, Pukun Chen, Yifan Fan and Yin Wu
Sensors 2024, 24(9), 2910; https://doi.org/10.3390/s24092910 - 2 May 2024
Cited by 6 | Viewed by 3301
Abstract
Modern society increasingly recognizes brain fatigue as a critical factor affecting human health and productivity. This study introduces a novel, portable, cost-effective, and user-friendly system for real-time collection, monitoring, and analysis of physiological signals aimed at enhancing the precision and efficiency of brain fatigue recognition and broadening its application scope. Utilizing raw physiological data, this study constructed a compact dataset that incorporated EEG and ECG data from 20 subjects to index fatigue characteristics. By employing a Bayesian-optimized multi-granularity cascade forest (Bayes-gcForest) for fatigue state recognition, this study achieved recognition rates of 95.71% and 96.13% on the DROZY public dataset and constructed dataset, respectively. These results highlight the effectiveness of the multi-modal feature fusion model in brain fatigue recognition, providing a viable solution for cost-effective and efficient fatigue monitoring. Furthermore, this approach offers theoretical support for designing rest systems for researchers. Full article
(This article belongs to the Section Wearables)
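
For readers who want to reproduce the flavor of Bayesian-optimized forest classification, the sketch below tunes a scikit-learn random forest with Optuna. The random forest is a stand-in for the paper's multi-granularity cascade forest (gcForest), and the search space is illustrative.

```python
# Sketch: Bayesian-style hyperparameter optimization of a forest classifier
# with Optuna. A scikit-learn RandomForest stands in for the paper's
# Bayes-gcForest; search ranges are illustrative assumptions.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=30, random_state=0)

def objective(trial):
    clf = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 400),
        max_depth=trial.suggest_int("max_depth", 3, 20),
        random_state=0,
    )
    return cross_val_score(clf, X, y, cv=5).mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params, study.best_value)
```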

19 pages, 1047 KB  
Article
Assessment of Drivers’ Mental Workload by Multimodal Measures during Auditory-Based Dual-Task Driving Scenarios
by Jiaqi Huang, Qiliang Zhang, Tingru Zhang, Tieyan Wang and Da Tao
Sensors 2024, 24(3), 1041; https://doi.org/10.3390/s24031041 - 5 Feb 2024
Cited by 23 | Viewed by 5383
Abstract
Assessing drivers’ mental workload is crucial for reducing road accidents. This study examined drivers’ mental workload in a simulated auditory-based dual-task driving scenario, with driving tasks as the main task, and auditory-based N-back tasks as the secondary task. A total of three levels of mental workload (i.e., low, medium, high) were manipulated by varying the difficulty levels of the secondary task (i.e., no presence of secondary task, 1-back, 2-back). Multimodal measures, including a set of subjective measures, physiological measures, and behavioral performance measures, were collected during the experiment. The results showed that an increase in task difficulty led to increased subjective ratings of mental workload and a decrease in task performance for the secondary N-back tasks. Significant differences were observed across the different levels of mental workload in multimodal physiological measures, such as delta waves in EEG signals, fixation distance in eye movement signals, time- and frequency-domain measures in ECG signals, and skin conductance in EDA signals. In addition, four driving performance measures related to vehicle velocity and the deviation of pedal input and vehicle position also showed sensitivity to the changes in drivers’ mental workload. The findings from this study can contribute to a comprehensive understanding of effective measures for mental workload assessment in driving scenarios and to the development of smart driving systems for the accurate recognition of drivers’ mental states. Full article
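
A repeated-measures comparison across the three workload levels can be sketched with SciPy's Friedman test, as below; the synthetic per-driver values and the choice of a nonparametric test are assumptions, as the paper's exact statistics may differ.

```python
# Sketch: test whether a physiological measure differs across the three
# workload levels (no task, 1-back, 2-back) with a Friedman test, a
# repeated-measures-friendly choice; the study's actual tests may differ.
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(1)
n = 30                                      # participants
no_task = rng.normal(0.0, 1.0, n)           # e.g., EEG delta power per driver
one_back = no_task + rng.normal(0.2, 0.5, n)
two_back = no_task + rng.normal(0.5, 0.5, n)

stat, p = friedmanchisquare(no_task, one_back, two_back)
print(f"Friedman chi2={stat:.2f}, p={p:.4f}")
```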

18 pages, 3182 KB  
Article
EEG and ECG-Based Multi-Sensor Fusion Computing for Real-Time Fatigue Driving Recognition Based on Feedback Mechanism
by Ling Wang, Fangjie Song, Tie Hua Zhou, Jiayu Hao and Keun Ho Ryu
Sensors 2023, 23(20), 8386; https://doi.org/10.3390/s23208386 - 11 Oct 2023
Cited by 25 | Viewed by 6419
Abstract
A variety of technologies that could enhance driving safety are being actively explored, with the aim of reducing traffic accidents by accurately recognizing the driver’s state. In this field, three mainstream detection methods have been widely applied, namely visual monitoring, physiological indicator monitoring, and vehicle behavior analysis. In order to achieve more accurate driver state recognition, we adopted a multi-sensor fusion approach. We monitored driver physiological signals, namely electroencephalogram (EEG) and electrocardiogram (ECG) signals, to determine fatigue state, while an in-vehicle camera observed driver behavior and provided more information for driver state assessment. In addition, an outside camera was used to monitor vehicle position to determine whether there were any driving deviations due to distraction or fatigue. After a series of experimental validations, our research results showed that our multi-sensor approach exhibited good performance for driver state recognition. This study could provide a solid foundation and development direction for future in-depth driver state recognition research, which is expected to further improve road safety. Full article
(This article belongs to the Special Issue Advanced-Sensors-Based Emotion Sensing and Recognition)

19 pages, 1225 KB  
Review
Review of Studies on Emotion Recognition and Judgment Based on Physiological Signals
by Wenqian Lin and Chao Li
Appl. Sci. 2023, 13(4), 2573; https://doi.org/10.3390/app13042573 - 16 Feb 2023
Cited by 101 | Viewed by 12122
Abstract
People’s emotions play an important part in our daily life and can not only reflect psychological and physical states, but also play a vital role in people’s communication, cognition and decision-making. Variations in people’s emotions induced by external conditions are accompanied by variations in physiological signals that can be measured and identified. People’s physiological signals are mainly measured with electroencephalograms (EEGs), electrodermal activity (EDA), electrocardiograms (ECGs), electromyography (EMG), pulse waves, etc. EEG signals are a comprehensive embodiment of the operation of numerous neurons in the cerebral cortex and can immediately express brain activity. EDA measures the electrical features of skin through skin conductance response, skin potential, skin conductance level or skin potential response. ECG technology uses an electrocardiograph to record changes in electrical activity in each cardiac cycle of the heart from the body surface. EMG is a technique that uses electronic instruments to evaluate and record the electrical activity of muscles, which is usually referred to as myoelectric activity. EEG, EDA, ECG and EMG have been widely used to recognize and judge people’s emotions in various situations. Different physiological signals have their own characteristics and are suitable for different occasions. Therefore, a review of the research work and application of emotion recognition and judgment based on the four physiological signals mentioned above is offered. The content covers the technologies adopted, the objects of application and the effects achieved. Finally, the application scenarios for different physiological signals are compared, and issues for attention are explored to provide reference and a basis for further investigation. Full article
(This article belongs to the Special Issue Recent Advances in Biological Science and Technology)

21 pages, 2388 KB  
Systematic Review
Wearable Sensors and Artificial Intelligence for Physical Ergonomics: A Systematic Review of Literature
by Leandro Donisi, Giuseppe Cesarelli, Noemi Pisani, Alfonso Maria Ponsiglione, Carlo Ricciardi and Edda Capodaglio
Diagnostics 2022, 12(12), 3048; https://doi.org/10.3390/diagnostics12123048 - 5 Dec 2022
Cited by 67 | Viewed by 12643
Abstract
Physical ergonomics has established itself as a valid strategy for monitoring potential disorders related, for example, to working activities. Recently, in the field of physical ergonomics, several studies have also shown potential for improvement in experimental methods of ergonomic analysis, through the combined use of artificial intelligence and wearable sensors. In this regard, this review intends to provide a first account of the investigations carried out using these combined methods, considering the period up to 2021. The method that combines the information obtained on the worker through physical sensors (IMU, accelerometer, gyroscope, etc.) or biopotential sensors (EMG, EEG, EKG/ECG), with the analysis through artificial intelligence systems (machine learning or deep learning), offers interesting perspectives from diagnostic, prognostic, and preventive points of view. In particular, the signals, obtained from wearable sensors for the recognition and categorization of the postural and biomechanical load of the worker, can be processed to formulate interesting algorithms for applications in the preventive field (especially with respect to musculoskeletal disorders), and with high statistical power. For Ergonomics, as well as for Occupational Medicine, these applications improve the knowledge of the limits of the human organism, helping in the definition of sustainability thresholds, and in the ergonomic design of environments, tools, and work organization. The growth prospects for this research area are the refinement of the procedures for the detection and processing of signals; the expansion of the study to assisted working methods (assistive robots, exoskeletons), and to categories of workers suffering from pathologies or disabilities; as well as the development of risk assessment systems that exceed those currently used in ergonomics in precision and agility. Full article
(This article belongs to the Section Point-of-Care Diagnostics and Devices)

22 pages, 725 KB  
Article
Automated Emotion Identification Using Fourier–Bessel Domain-Based Entropies
by Aditya Nalwaya, Kritiprasanna Das and Ram Bilas Pachori
Entropy 2022, 24(10), 1322; https://doi.org/10.3390/e24101322 - 20 Sep 2022
Cited by 41 | Viewed by 3839
Abstract
Human dependence on computers is increasing day by day; thus, human interaction with computers must be more dynamic and contextual rather than static or generalized. The development of such devices requires knowledge of the emotional state of the user interacting with it; for this purpose, an emotion recognition system is required. Physiological signals, specifically, electrocardiogram (ECG) and electroencephalogram (EEG), were studied here for the purpose of emotion recognition. This paper proposes novel entropy-based features in the Fourier–Bessel domain instead of the Fourier domain, since the frequency resolution of the former is twice that of the latter. Further, to represent such non-stationary signals, the Fourier–Bessel series expansion (FBSE) is used, which has non-stationary basis functions, making it more suitable than the Fourier representation. EEG and ECG signals are decomposed into narrow-band modes using the FBSE-based empirical wavelet transform (FBSE-EWT). The proposed entropies of each mode are computed to form the feature vector, which is then used to develop machine learning models. The proposed emotion detection algorithm is evaluated using the publicly available DREAMER dataset. The K-nearest neighbors (KNN) classifier provides accuracies of 97.84%, 97.91%, and 97.86% for arousal, valence, and dominance classes, respectively. Finally, this paper concludes that the obtained entropy features are suitable for emotion recognition from the given physiological signals. Full article
(This article belongs to the Special Issue Entropy Algorithms for the Analysis of Biomedical Signals)
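
Since FBSE-EWT has no mainstream Python implementation, the sketch below substitutes a simple band-pass filter bank for the narrow-band decomposition and computes a Shannon-entropy feature per mode for a KNN classifier; the band edges, entropy definition, and data are illustrative assumptions.

```python
# Sketch: mode-wise entropy features feeding a KNN classifier. A band-pass
# filter bank stands in for the paper's FBSE-EWT decomposition; entropy
# here is Shannon entropy of the normalized squared mode samples.
import numpy as np
from scipy.signal import butter, sosfiltfilt
from sklearn.neighbors import KNeighborsClassifier

fs = 128.0
bands = [(1, 4), (4, 8), (8, 13), (13, 30), (30, 45)]  # assumed band edges

def mode_entropies(x):
    feats = []
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        m = sosfiltfilt(sos, x)
        p = m**2 / np.sum(m**2)
        feats.append(-np.sum(p * np.log(p + 1e-12)))   # Shannon entropy
    return np.array(feats)

rng = np.random.default_rng(0)
X = np.array([mode_entropies(rng.standard_normal(1024)) for _ in range(100)])
y = rng.integers(0, 2, 100)                            # stand-in labels
knn = KNeighborsClassifier(n_neighbors=5).fit(X[:80], y[:80])
print(knn.score(X[80:], y[80:]))
```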

34 pages, 1021 KB  
Review
Representation Learning and Pattern Recognition in Cognitive Biometrics: A Survey
by Min Wang, Xuefei Yin, Yanming Zhu and Jiankun Hu
Sensors 2022, 22(14), 5111; https://doi.org/10.3390/s22145111 - 7 Jul 2022
Cited by 33 | Viewed by 6344
Abstract
Cognitive biometrics is an emerging branch of biometric technology. Recent research has demonstrated great potential for using cognitive biometrics in versatile applications, including biometric recognition and cognitive and emotional state recognition. There is a major need to summarize the latest developments in this field. Existing surveys have mainly focused on a small subset of cognitive biometric modalities, such as EEG and ECG. This article provides a comprehensive review of cognitive biometrics, covering all the major biosignal modalities and applications. A taxonomy is designed to structure the corresponding knowledge and guide the survey from signal acquisition and pre-processing to representation learning and pattern recognition. We provide a unified view of the methodological advances in these four aspects across various biosignals and applications, facilitating interdisciplinary research and knowledge transfer across fields. Furthermore, this article discusses open research directions in cognitive biometrics and proposes future prospects for developing reliable and secure cognitive biometric systems. Full article
(This article belongs to the Section Internet of Things)

14 pages, 1464 KB  
Article
Emotion Recognition from ECG Signals Using Wavelet Scattering and Machine Learning
by Axel Sepúlveda, Francisco Castillo, Carlos Palma and Maria Rodriguez-Fernandez
Appl. Sci. 2021, 11(11), 4945; https://doi.org/10.3390/app11114945 - 27 May 2021
Cited by 83 | Viewed by 8035
Abstract
Affect detection combined with a system that dynamically responds to a person’s emotional state allows an improved user experience with computers, systems, and environments and has a wide range of applications, including entertainment and health care. Previous studies on this topic have used a variety of machine learning algorithms and inputs such as audio, visual, or physiological signals. Recently, a lot of interest has been focused on the last of these, as speech or video recording is impractical for some applications. Therefore, there is a need to create Human–Computer Interface Systems capable of recognizing emotional states from noninvasive and nonintrusive physiological signals. Typically, the recognition task is carried out from electroencephalogram (EEG) signals, obtaining good accuracy. However, EEGs are difficult to register without interfering with daily activities, and recent studies have shown that it is possible to use electrocardiogram (ECG) signals for this purpose. This work improves the performance of emotion recognition from ECG signals using wavelet transform for signal analysis. Features of the ECG signal are extracted from the AMIGOS database using a wavelet scattering algorithm that allows obtaining features of the signal at different time scales, which are then used as inputs for different classifiers to evaluate their performance. The results show that the proposed algorithm for extracting features and classifying the signals obtains an accuracy of 88.8% in the valence dimension, 90.2% in arousal, and 95.3% in a two-dimensional classification, which is better than the performance reported in previous studies. This algorithm is expected to be useful for classifying emotions using wearable devices. Full article
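
A wavelet scattering front end for ECG can be sketched with the kymatio package (an assumption; the original work may use a different toolbox): time-averaged scattering coefficients give a fixed-length feature vector for a downstream classifier.

```python
# Sketch: wavelet scattering features from ECG segments, assuming kymatio
# as the scattering implementation. Averaging coefficients over time yields
# a fixed-length feature vector; segment length and labels are stand-ins.
import numpy as np
from kymatio.numpy import Scattering1D
from sklearn.svm import SVC

T = 2 ** 13                                      # samples per ECG segment (assumed)
scattering = Scattering1D(J=8, shape=T, Q=8)

rng = np.random.default_rng(0)
segments = rng.standard_normal((20, T))          # stand-in ECG segments
X = np.array([scattering(s).mean(axis=-1) for s in segments])
y = rng.integers(0, 2, 20)                       # stand-in valence labels

clf = SVC().fit(X, y)
print(clf.score(X, y))
```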

27 pages, 4718 KB  
Article
FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network
by Clarence Tan, Gerardo Ceballos, Nikola Kasabov and Narayan Puthanmadam Subramaniyam
Sensors 2020, 20(18), 5328; https://doi.org/10.3390/s20185328 - 17 Sep 2020
Cited by 44 | Viewed by 8618
Abstract
Using multimodal signals to solve the problem of emotion recognition is one of the emerging trends in affective computing. Several studies have utilized state-of-the-art deep learning methods and combined physiological signals, such as the electroencephalogram (EEG), electrocardiogram (ECG), and skin temperature, along with facial expressions, voice, and posture, to name a few, in order to classify emotions. Spiking neural networks (SNNs) represent the third generation of neural networks and employ biologically plausible models of neurons. SNNs have been shown to handle spatio-temporal data, which is essentially the nature of the data encountered in the emotion recognition problem, in an efficient manner. In this work, for the first time, we propose the application of SNNs to solve the emotion recognition problem with a multimodal dataset. Specifically, we use the NeuCube framework, which employs an evolving SNN architecture, to classify emotional valence, and evaluate the performance of our approach on the MAHNOB-HCI dataset. The multimodal data used in our work consist of facial expressions along with physiological signals such as ECG, skin temperature, skin conductance, respiration signal, mouth length, and pupil size. We perform classification under the Leave-One-Subject-Out (LOSO) cross-validation mode. Our results show that the proposed approach achieves an accuracy of 73.15% for classifying binary valence when applying feature-level fusion, which is comparable to other deep learning methods. We achieve this accuracy even without using EEG, which other deep learning methods have relied on to reach this level of accuracy. In conclusion, we have demonstrated that SNNs can be successfully used for solving the emotion recognition problem with multimodal data, and we also provide directions for future research utilizing SNNs for affective computing. In addition to its good accuracy, the SNN recognition system is incrementally trainable on new data in an adaptive way and requires only one-pass training, which makes it suitable for practical and online applications. These features are not manifested in other methods for this problem. Full article
(This article belongs to the Section Intelligent Sensors)

14 pages, 1005 KB  
Article
Pattern Recognition of Cognitive Load Using EEG and ECG Signals
by Ronglong Xiong, Fanmeng Kong, Xuehong Yang, Guangyuan Liu and Wanhui Wen
Sensors 2020, 20(18), 5122; https://doi.org/10.3390/s20185122 - 8 Sep 2020
Cited by 52 | Viewed by 8174
Abstract
The matching of cognitive load and working memory is the key for effective learning, and cognitive effort in the learning process has nervous responses which can be quantified in various physiological parameters. Therefore, it is meaningful to explore automatic cognitive load pattern recognition by using physiological measures. Firstly, this work extracted 33 commonly used physiological features to quantify autonomic and central nervous activities. Secondly, we selected a critical feature subset for cognitive load recognition by sequential backward selection and particle swarm optimization algorithms. Finally, pattern recognition models of cognitive load conditions were constructed by a performance comparison of several classifiers. We grouped the samples in an open dataset to form two binary classification problems: (1) cognitive load state vs. baseline state; (2) cognitive load mismatching state vs. cognitive load matching state. The decision tree classifier obtained 96.3% accuracy for the cognitive load vs. baseline classification, and the support vector machine obtained 97.2% accuracy for the cognitive load mismatching vs. cognitive load matching classification. The cognitive load and baseline states are distinguishable in the level of active state of mind and three activity features of the autonomic nervous system. The cognitive load mismatching and matching states are distinguishable in the level of active state of mind and two activity features of the autonomic nervous system. Full article
(This article belongs to the Section Wearables)
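
The selection-then-classification pipeline can be approximated with scikit-learn's SequentialFeatureSelector in backward mode, standing in for the paper's sequential backward selection and particle swarm optimization; the synthetic 33-feature data and the target of 8 features are assumptions.

```python
# Sketch: sequential backward selection over physiological features, then a
# decision tree and an SVM, mirroring the paper's pipeline at a small scale.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=33, random_state=0)

sbs = SequentialFeatureSelector(
    DecisionTreeClassifier(random_state=0),
    n_features_to_select=8, direction="backward", cv=5,
).fit(X, y)
X_sel = sbs.transform(X)

for clf in (DecisionTreeClassifier(random_state=0), SVC()):
    print(type(clf).__name__, cross_val_score(clf, X_sel, y, cv=5).mean())
```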

21 pages, 4929 KB  
Article
Experimental Study for Determining the Parameters Required for Detecting ECG and EEG Related Diseases during the Timed-Up and Go Test
by Vasco Ponciano, Ivan Miguel Pires, Fernando Reinaldo Ribeiro, María Vanessa Villasana, Maria Canavarro Teixeira and Eftim Zdravevski
Computers 2020, 9(3), 67; https://doi.org/10.3390/computers9030067 - 27 Aug 2020
Cited by 8 | Viewed by 5159
Abstract
The use of smartphones, coupled with different sensors, makes them an attractive solution for measuring different physical and physiological features, allowing for the monitoring of various parameters and even the identification of some diseases. The BITalino device allows the use of different sensors, including Electroencephalography (EEG) and Electrocardiography (ECG) sensors, to study different health parameters. With these devices, the acquisition of signals is straightforward, and it is possible to connect them over a Bluetooth connection. With the acquired data, it is possible to measure parameters such as the QRS complex and its variation in the ECG data to monitor the individual’s heartbeat. Similarly, by using the EEG sensor, one can analyze the individual’s brain activity and its frequency content. The purpose of this paper is to present a method for the recognition of diseases related to ECG and EEG data, with sensors available in off-the-shelf mobile devices and sensors connected to a BITalino device. The data were collected while elderly participants performed the Timed-Up and Go test, and the different diseases present in the study sample were recorded. The data were analyzed, and the following features were extracted: from the ECG, heart rate, linear heart rate variability, the average QRS interval, the average R-R interval, and the average R-S interval; from the EEG, frequency and variability. Finally, the diseases were correlated with the different parameters, showing that there are relations between the extracted features and the different health conditions. Full article
(This article belongs to the Special Issue Machine Learning for EEG Signal Processing)
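
Extracting the ECG interval features named in this abstract can be sketched with SciPy's peak detection; the synthetic waveform, sampling rate, and thresholds are assumptions, and real BITalino recordings would need filtering first.

```python
# Sketch: R-peak detection and interval features (heart rate, R-R
# variability) using scipy's find_peaks on a clean synthetic ECG-like signal.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 63      # synthetic ~72 bpm spike train

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))
rr = np.diff(peaks) / fs                     # R-R intervals in seconds
print(f"heart rate: {60.0 / rr.mean():.1f} bpm")
print(f"RR variability (SDNN): {rr.std() * 1000:.1f} ms")
```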

26 pages, 1182 KB  
Article
CNN and LSTM-Based Emotion Charting Using Physiological Signals
by Muhammad Najam Dar, Muhammad Usman Akram, Sajid Gul Khawaja and Amit N. Pujari
Sensors 2020, 20(16), 4551; https://doi.org/10.3390/s20164551 - 14 Aug 2020
Cited by 105 | Viewed by 12448
Abstract
Novel trends in affective computing are based on reliable sources of physiological signals such as Electroencephalogram (EEG), Electrocardiogram (ECG), and Galvanic Skin Response (GSR). The use of these signals provides challenges of performance improvement within a broader set of emotion classes in a less constrained real-world environment. To overcome these challenges, we propose a computational framework of 2D Convolutional Neural Network (CNN) architecture for the arrangement of 14 channels of EEG, and a combination of Long Short-Term Memory (LSTM) and 1D-CNN architecture for ECG and GSR. Our approach is subject-independent and incorporates two publicly available datasets of DREAMER and AMIGOS with low-cost, wearable sensors to extract physiological signals suitable for real-world environments. The results outperform state-of-the-art approaches for classification into four classes, namely High Valence—High Arousal, High Valence—Low Arousal, Low Valence—High Arousal, and Low Valence—Low Arousal. An average emotion elicitation accuracy of 98.73% is achieved with the ECG right-channel modality, 76.65% with the EEG modality, and 63.67% with the GSR modality for AMIGOS. The overall highest accuracies of 99.0% for the AMIGOS dataset and 90.8% for the DREAMER dataset are achieved with multi-modal fusion. A strong correlation of spectral- and hidden-layer feature analysis with classification performance suggests the efficacy of the proposed method for significant feature extraction and higher emotion elicitation performance in a broader context of less constrained environments. Full article
(This article belongs to the Special Issue Signal Processing Using Non-invasive Physiological Sensors)
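
The ECG/GSR branch described above pairs 1D convolutions with an LSTM; a minimal Keras sketch of that pattern follows, with layer sizes, window length, and the four-class head as illustrative assumptions rather than the authors' architecture.

```python
# Sketch: a 1D-CNN + LSTM stack for a single physiological channel (e.g.,
# ECG), in the spirit of the paper's ECG/GSR branch. All sizes are assumed.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1280, 1)),                # 10 s at 128 Hz (assumed)
    tf.keras.layers.Conv1D(32, 7, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.LSTM(64),                       # temporal summary
    tf.keras.layers.Dense(4, activation="softmax"), # HVHA/HVLA/LVHA/LVLA
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```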

18 pages, 2070 KB  
Article
Brain and Body Emotional Responses: Multimodal Approximation for Valence Classification
by Jennifer Sorinas, Jose Manuel Ferrández and Eduardo Fernandez
Sensors 2020, 20(1), 313; https://doi.org/10.3390/s20010313 - 6 Jan 2020
Cited by 22 | Viewed by 5839
Abstract
In order to develop more precise and functional affective applications, it is necessary to achieve a balance between the psychology and the engineering applied to emotions. Signals from the central and peripheral nervous systems have been used for emotion recognition purposes; however, their operation and the relationship between them remain unknown. In this context, in the present work, we have tried to approach the study of the psychobiology of both systems in order to generate a computational model for the recognition of emotions in the dimension of valence. To this end, the electroencephalography (EEG) signal, electrocardiography (ECG) signal and skin temperature of 24 subjects have been studied. Each methodology has been evaluated individually, finding characteristic patterns of positive and negative emotions in each of them. After feature selection of each methodology, the results of the classification showed that, although the classification of emotions is possible at both central and peripheral levels, the multimodal approach did not improve the results obtained through the EEG alone. In addition, when the sample was separated by sex, differences were observed between cerebral and peripheral responses in the processing of emotions, though the differences between men and women were only notable at the peripheral nervous system level. Full article
(This article belongs to the Special Issue Sensors for Affective Computing and Sentiment Analysis)