Special Issue "Emotion and Stress Recognition Related Sensors and Machine Learning Technologies"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 December 2020.

Special Issue Editors

Guest Editor
Prof. Dr. Kyandoghere Kyamakya

Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
Phone: +43 463 2700 3540
Interests: intelligent transportation systems; machine vision; machine learning and pattern recognition; neurocomputing and applications; systems science and nonlinear dynamics; telecommunications systems; robotics and autonomous systems
Guest Editor
Dr. Fadi Al-Machot

Alpen-Adria-Universität Klagenfurt, Department of Applied Informatics, Klagenfurt, Austria
Interests: machine learning; pattern recognition; image processing; data mining; video understanding; cognitive modeling and recognition
Guest Editor
Dr. Ahmad Haj Mosa

Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
Interests: machine learning; cognitive neuroscience; applied mathematics; machine vision
Guest Editor
Prof. Hamid Bouchachia

Bournemouth University, Machine Intelligence Group, Bournemouth, United Kingdom
Interests: machine learning; data mining; computational intelligence; ambient intelligence and telecare
Guest Editor
Dr. Jean Chamberlain Chedjou

Alpen-Adria-Universität Klagenfurt, Institute of Smart System Technologies, Klagenfurt, Austria
Interests: dynamic systems in engineering; neurocomputing and applications; optimization and inverse problems; intelligent transportation systems
Guest Editor
Prof. Dr. Antoine Bagula

University of the Western Cape, ISAT Laboratory, Bellville, South Africa
Interests: internet-of-things; artificial intelligence; blockchain technologies; next generation networks

Special Issue Information

Dear Colleagues,

A myriad of modern intelligent sociotechnical systems makes use of human emotion and stress data. Different technologies are used to collect such data, including physiological sensors (e.g., EEG, ECG, electrodermal activity, and skin conductance) and other non-intrusive sensors (e.g., piezo-vibration sensors, facial images, chairborne and bed-borne differential vibration sensors). Examples of such systems range from driver assistance systems, medical patient monitoring systems, and emotion-aware intelligent systems up to complex collaborative robotics systems.

Emotion and stress classification from physiological signals is extremely challenging from various perspectives: (a) sensor-data quality and reliability; (b) classification performance (accuracy, precision, specificity, recall, F-measure); (c) robustness of subject-independent recognition; (d) portability of the classification systems to different environments; and (e) the estimation of the emotional state from a system-dynamical perspective.
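For concreteness, the classification-performance metrics listed in (b) can all be derived from binary confusion-matrix counts. A small illustrative sketch (the function name and example numbers are ours, not from any submitted paper):

```python
# Illustration only: the standard binary classification metrics named in (b),
# computed from confusion-matrix counts.

def binary_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return accuracy, precision, recall (sensitivity), specificity, F-measure."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0          # sensitivity
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity,
            "f_measure": f_measure}

# e.g. a stress detector that found 40 of 50 stressed windows,
# with 10 false alarms among 50 relaxed windows:
m = binary_metrics(tp=40, fp=10, tn=40, fn=10)
print(round(m["accuracy"], 3), round(m["f_measure"], 3))  # 0.8 0.8
```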

This Special Issue invites contributions that address (i) sensing technologies and issues and (ii) machine learning techniques relevant to tackling the challenges above. In particular, submitted papers should clearly present novel contributions and innovative applications covering, but not limited to, any of the following topics around emotion and stress recognition:

  • Intrusive sensor systems and devices for capturing biosignals:
    • EEG sensor systems
    • ECG sensor systems
    • Electrodermal activity sensor systems
  • Sensor data quality assessment and management
  • Data pre-processing, noise filtering, and calibration concepts for biosignals
  • Non-intrusive sensor technologies:
    • Visual sensors
    • Acoustic sensors
    • Vibration sensors
    • Piezo-electric sensors
  • Emotion recognition using mobile phones and smart watches
  • Body area sensor networks for emotion and stress studies
  • Experimental datasets:
    • Dataset generation principles and concepts
    • Quality assurance
    • Emotion elicitation material and concepts
  • Machine learning techniques for robust emotion recognition:
    • Graphical models
    • Neural network methods (LSTM networks, cellular neural networks)
    • Deep learning methods
    • Statistical learning
    • Multivariate empirical mode decomposition
    • Etc.
  • Subject-independent emotion and stress recognition concepts and systems:
    • Facial expression-based systems
    • Speech-based systems
    • EEG-based systems
    • ECG-based systems
    • Electrodermal activity-based systems
    • Multimodal recognition systems
    • Sensor fusion concepts
    • Etc.
  • Emotion and stress estimation-and-forecasting from a nonlinear dynamical system’s perspective:
    • Recurrence quantification analysis
    • Poincaré maps, fractal dimension analysis, Lyapunov exponents, and entropies (e.g., multiscale, permutation) of biosignals: EEG, ECG, speech, etc.
    • Regularized learning with nonlinear dynamical features of EEG, ECG, and speech signals
    • Complexity measurement and analysis of biosignals used for emotion recognition
    • Nonlinear feature variability analysis
    • Dynamical graph convolutional neural networks
    • Etc.
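As a concrete example of one of the nonlinear measures above, permutation entropy quantifies the distribution of ordinal patterns in short embedded windows of a signal. A minimal pure-Python sketch (order and delay are the usual parameters; this is an illustration, not a reference implementation):

```python
# Sketch of normalized permutation entropy for a 1-D signal.
# Each length-`order` window is reduced to its ordinal pattern (the ranks of
# its values); the Shannon entropy of the pattern distribution is then
# normalized by its maximum, log2(order!).
from collections import Counter
from math import log2, factorial

def permutation_entropy(signal, order=3, delay=1):
    """Normalized permutation entropy in [0, 1] of a 1-D sequence."""
    patterns = Counter()
    n = len(signal) - (order - 1) * delay
    for i in range(n):
        window = [signal[i + j * delay] for j in range(order)]
        # ordinal pattern: indices of the window sorted by value
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        patterns[pattern] += 1
    total = sum(patterns.values())
    h = -sum((c / total) * log2(c / total) for c in patterns.values())
    return h / log2(factorial(order))
```

A monotone signal yields a single ordinal pattern and therefore entropy 0; more irregular signals approach 1.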

Prof. Dr. Kyandoghere Kyamakya
Dr. Fadi Al-Machot
Dr. Ahmad Haj Mosa
Prof. Hamid Bouchachia
Dr. Jean Chamberlain Chedjou
Prof. Dr. Antoine Bagula
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (6 papers)


Research

Open Access Article
Deep ECG-Respiration Network (DeepER Net) for Recognizing Mental Stress
Sensors 2019, 19(13), 3021; https://doi.org/10.3390/s19133021
Received: 16 June 2019 / Revised: 4 July 2019 / Accepted: 7 July 2019 / Published: 9 July 2019
Abstract
Unmanaged long-term mental stress in the workplace can lead to serious health problems and reduced productivity. To prevent this, it is important to recognize and relieve mental stress in a timely manner. Here, we propose a novel stress detection algorithm based on end-to-end deep learning using multiple physiological signals, such as electrocardiogram (ECG) and respiration (RESP) signal. To mimic workplace stress in our experiments, we used Stroop and math tasks as stressors, with each stressor being followed by a relaxation task. Herein, we recruited 18 subjects and measured both ECG and RESP signals using Zephyr BioHarness 3.0. After five-fold cross validation, the proposed network performed well, with an average accuracy of 83.9%, an average F1 score of 0.81, and an average area under the receiver operating characteristic (ROC) curve (AUC) of 0.92, demonstrating its superiority over conventional machine learning models. Furthermore, by visualizing the activation of the trained network’s neurons, we found that they were activated by specific ECG and RESP patterns. In conclusion, we successfully validated the feasibility of end-to-end deep learning using multiple physiological signals for recognition of mental stress in the workplace. We believe that this is a promising approach that will help to improve the quality of life of people suffering from long-term work-related mental stress.
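The pipeline the abstract describes — raw ECG and respiration traces cut into fixed-length multi-channel windows for an end-to-end network, evaluated with five-fold cross-validation — can be sketched roughly as follows. The window length, step, and helper names are our own illustration, not the authors' code:

```python
# Sketch (not the authors' implementation): segment two equally sampled
# physiological signals into 2-channel training windows, and build simple
# five-fold cross-validation splits over the resulting examples.

def make_windows(ecg, resp, win, step):
    """Stack two equally sampled signals into (2-channel, win-sample) examples."""
    assert len(ecg) == len(resp)
    return [
        [ecg[i:i + win], resp[i:i + win]]
        for i in range(0, len(ecg) - win + 1, step)
    ]

def five_fold(n_examples):
    """Yield (train_idx, test_idx) pairs for 5-fold cross-validation."""
    idx = list(range(n_examples))
    fold = n_examples // 5
    for k in range(5):
        test = idx[k * fold:(k + 1) * fold] if k < 4 else idx[4 * fold:]
        test_set = set(test)
        train = [i for i in idx if i not in test_set]
        yield train, test

windows = make_windows(list(range(100)), list(range(100)), win=20, step=10)
print(len(windows))  # 9 overlapping windows of 20 samples x 2 channels
```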
Open Access Article
Combining Inter-Subject Modeling with a Subject-Based Data Transformation to Improve Affect Recognition from EEG Signals
Sensors 2019, 19(13), 2999; https://doi.org/10.3390/s19132999
Received: 4 June 2019 / Revised: 3 July 2019 / Accepted: 5 July 2019 / Published: 8 July 2019
Abstract
Existing correlations between features extracted from Electroencephalography (EEG) signals and emotional aspects have motivated the development of a diversity of EEG-based affect detection methods. Both intra-subject and inter-subject approaches have been used in this context. Intra-subject approaches generally suffer from the small sample problem, and require the collection of exhaustive data for each new user before the detection system is usable. On the contrary, inter-subject models do not account for the personality and physiological influence of how the individual is feeling and expressing emotions. In this paper, we analyze both modeling approaches, using three public repositories. The results show that the subject’s influence on the EEG signals is substantially higher than that of the emotion and hence it is necessary to account for the subject’s influence on the EEG signals. To do this, we propose a data transformation that seamlessly integrates individual traits into an inter-subject approach, improving classification results.
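The paper's specific transformation is not reproduced here, but a common baseline with the same intent is per-subject z-scoring of each feature before pooling subjects into an inter-subject model, so that individual offsets in the EEG features do not dominate the emotion signal. A hedged sketch (names and data are ours):

```python
# Sketch: per-subject standardization of a feature before pooling, a simple
# way to remove subject-specific offsets prior to inter-subject modeling.
import statistics

def per_subject_zscore(features_by_subject):
    """Map {subject: [feature values]} to a pooled list of z-scored samples."""
    pooled = []
    for subject, values in features_by_subject.items():
        mu = statistics.mean(values)
        sigma = statistics.pstdev(values) or 1.0  # guard constant features
        pooled.extend((v - mu) / sigma for v in values)
    return pooled

# two subjects whose raw feature scales differ by 10x end up comparable:
data = {"s1": [10.0, 12.0, 14.0], "s2": [100.0, 120.0, 140.0]}
z = per_subject_zscore(data)
```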
Open Access Article
Visual and Thermal Image Processing for Facial Specific Landmark Detection to Infer Emotions in a Child-Robot Interaction
Sensors 2019, 19(13), 2844; https://doi.org/10.3390/s19132844
Received: 20 May 2019 / Revised: 21 June 2019 / Accepted: 22 June 2019 / Published: 26 June 2019
Abstract
Child-Robot Interaction (CRI) has become increasingly addressed in research and applications. This work proposes a system for emotion recognition in children, recording facial images by both visual (RGB—red, green and blue) and Infrared Thermal Imaging (IRTI) cameras. For this purpose, the Viola-Jones algorithm is used on color images to detect facial regions of interest (ROIs), which are transferred to the thermal camera plane by multiplying a homography matrix obtained through the calibration process of the camera system. As a novelty, we propose to compute the error probability for each ROI located over thermal images, using a reference frame manually marked by a trained expert, in order to choose that ROI better placed according to the expert criteria. Then, this selected ROI is used to relocate the other ROIs, increasing the concordance with respect to the reference manual annotations. Afterwards, other methods for feature extraction, dimensionality reduction through Principal Component Analysis (PCA) and pattern classification by Linear Discriminant Analysis (LDA) are applied to infer emotions. The results show that our approach for ROI locations may track facial landmarks with significant low errors with respect to the traditional Viola-Jones algorithm. These ROIs have shown to be relevant for recognition of five emotions, specifically disgust, fear, happiness, sadness, and surprise, with our recognition system based on PCA and LDA achieving mean accuracy (ACC) and Kappa values of 85.75% and 81.84%, respectively. As a second stage, the proposed recognition system was trained with a dataset of thermal images, collected on 28 typically developing children, in order to infer one of five basic emotions (disgust, fear, happiness, sadness, and surprise) during a child-robot interaction. The results show that our system can be integrated to a social robot to infer child emotions during a child-robot interaction.
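The homography step the abstract describes — transferring an ROI point from the RGB camera plane to the thermal plane — amounts to a 3×3 matrix multiplication in homogeneous coordinates followed by dehomogenization. A minimal sketch, with a made-up matrix standing in for the calibrated one:

```python
# Sketch of mapping a pixel between camera planes with a homography.
# H below is a toy scale-plus-translation matrix, NOT a real calibration.

def apply_homography(H, x, y):
    """Map point (x, y) through 3x3 homography H (nested lists)."""
    u = H[0][0] * x + H[0][1] * y + H[0][2]
    v = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return u / w, v / w  # dehomogenize

H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 100, 100))  # (60.0, 70.0)
```

In practice the matrix would come from a calibration routine (e.g., corresponding points seen by both cameras) rather than being written by hand.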
Open Access Article
Identifying Traffic Context Using Driving Stress: A Longitudinal Preliminary Case Study
Sensors 2019, 19(9), 2152; https://doi.org/10.3390/s19092152
Received: 11 April 2019 / Revised: 3 May 2019 / Accepted: 7 May 2019 / Published: 9 May 2019
Abstract
Many previous studies have identified that physiological responses of a driver are significantly associated with driving stress. However, research is limited to identifying the effects of traffic conditions (low vs. high traffic) and road types (highway vs. city) on driving stress. The objective of this study is to quantify the relationship between driving stress and traffic conditions, and driving stress and road types, respectively. In this study, electrodermal activity (EDA) signals for a male driver were collected in real road driving conditions for 60 min a day for 21 days. To classify the levels of driving stress (low vs. high), two separate models were developed by incorporating the statistical features of the EDA signals, one for traffic conditions and the other for road types. Both models were based on the application of EDA features with the logistic regression analysis. City driving turned out to be more stressful than highway driving. Traffic conditions, defined as traffic jam also significantly affected the stress level of the driver, when using the criteria of the vehicle speed of 40 km/h and standard deviation of the speed of 20 km/h. Relevance to industry: The classification results of the two models indicate that the traffic conditions and the road types are important features for driving stress and its related applications.
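One plausible reading of the speed criteria above (mean speed below 40 km/h with a speed standard deviation below 20 km/h indicating a jam) can be sketched as follows. Both the rule and its interpretation are our reading of the abstract, not the authors' published model:

```python
# Hypothetical sketch of labeling a driving window as traffic jam vs.
# free flow from vehicle speed statistics; thresholds are the values
# quoted in the abstract, the rule itself is our interpretation.
import statistics

def is_traffic_jam(speeds_kmh, mean_thr=40.0, std_thr=20.0):
    """Classify a window of speed samples as jam (True) or free flow (False)."""
    return (statistics.mean(speeds_kmh) < mean_thr
            and statistics.pstdev(speeds_kmh) < std_thr)

print(is_traffic_jam([15, 20, 10, 25, 18]))   # slow, steady traffic
print(is_traffic_jam([80, 90, 100, 85, 95]))  # free-flowing highway speeds
```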
Open Access Article
Recognition of Emotion Intensities Using Machine Learning Algorithms: A Comparative Study
Sensors 2019, 19(8), 1897; https://doi.org/10.3390/s19081897
Received: 24 March 2019 / Revised: 18 April 2019 / Accepted: 18 April 2019 / Published: 21 April 2019
Abstract
Over the past two decades, automatic facial emotion recognition has received enormous attention. This is due to the increase in the need for behavioral biometric systems and human–machine interaction where the facial emotion recognition and the intensity of emotion play vital roles. The existing works usually do not encode the intensity of the observed facial emotion and even less involve modeling the multi-class facial behavior data jointly. Our work involves recognizing the emotion along with the respective intensities of those emotions. The algorithms used in this comparative study are Gabor filters, a Histogram of Oriented Gradients (HOG), and Local Binary Pattern (LBP) for feature extraction. For classification, we have used Support Vector Machine (SVM), Random Forest (RF), and Nearest Neighbor Algorithm (kNN). This attains emotion recognition and intensity estimation of each recognized emotion. This is a comparative study of classifiers used for facial emotion recognition along with the intensity estimation of those emotions for databases. The results verified that the comparative study could be further used in real-time behavioral facial emotion and intensity of emotion recognition.
Open Access Article
A Deep-Learning Model for Subject-Independent Human Emotion Recognition Using Electrodermal Activity Sensors
Sensors 2019, 19(7), 1659; https://doi.org/10.3390/s19071659
Received: 14 March 2019 / Revised: 31 March 2019 / Accepted: 3 April 2019 / Published: 7 April 2019
Abstract
One of the main objectives of Active and Assisted Living (AAL) environments is to ensure that elderly and/or disabled people perform/live well in their immediate environments; this can be monitored by, among others, the recognition of emotions based on non-highly intrusive sensors such as Electrodermal Activity (EDA) sensors. However, designing a learning system or building a machine-learning model to recognize human emotions while training the system on a specific group of persons and testing the system on a totally new group of persons is still a serious challenge in the field, as it is possible that the second testing group of persons may have different emotion patterns. Accordingly, the purpose of this paper is to contribute to the field of human emotion recognition by proposing a Convolutional Neural Network (CNN) architecture which ensures promising robustness-related results for both subject-dependent and subject-independent human emotion recognition. The CNN model has been trained using a grid search technique, which is a model hyperparameter optimization technique, to fine-tune the parameters of the proposed CNN architecture. The overall concept’s performance is validated and stress-tested by using the MAHNOB and DEAP datasets. The results demonstrate a promising robustness improvement regarding various evaluation metrics. We could increase the accuracy for subject-independent classification to 78% and 82% for MAHNOB and DEAP, respectively, and to 81% and 85% for subject-dependent classification for MAHNOB and DEAP, respectively (4 classes/labels). The work shows clearly that, while using solely the non-intrusive EDA sensors, a robust classification of human emotion is possible even without involving additional/other physiological signals.
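Grid search, the hyperparameter-tuning technique the abstract mentions, simply evaluates every combination of candidate values and keeps the best-scoring one. A generic sketch of the technique itself (toy search space and scoring function, not the paper's CNN or its parameters):

```python
# Sketch of exhaustive grid search over a hyperparameter grid.
# score_fn stands in for, e.g., cross-validated accuracy of a model
# trained with the given parameters.
from itertools import product

def grid_search(param_grid, score_fn):
    """Return (best_params, best_score) over the Cartesian product of values."""
    keys = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for combo in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, combo))
        s = score_fn(params)
        if s > best_score:
            best_params, best_score = params, s
    return best_params, best_score

# toy example: 2 x 3 = 6 candidate configurations
grid = {"lr": [0.01, 0.001], "filters": [16, 32, 64]}
best, score = grid_search(grid, lambda p: p["filters"] / 64 - p["lr"])
print(best)  # {'filters': 64, 'lr': 0.001}
```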
Sensors EISSN 1424-8220. Published by MDPI AG, Basel, Switzerland.