Special Issue "Ubiquitous Technologies for Emotion Recognition"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: 15 July 2020.

Special Issue Editors

Prof. Dr. Oresti Banos
Guest Editor
Computational Behaviour Modelling, Research Centre for Information and Communications Technology, University of Granada (UGR), 18071 Granada, Spain
Interests: wearable, ubiquitous, and mobile computing; artificial intelligence; data mining; digital health
Prof. Dr. Luis A. Castro
Guest Editor
Sonora Institute of Technology (ITSON), Ciudad Obregon, Mexico
Interests: human–computer interaction; ubiquitous and mobile computing; mobile sensing; context awareness; behaviour and context sensing
Prof. Dr. Claudia Villalonga
Guest Editor
Universidad Internacional de La Rioja, Logroño, Spain
Interests: ontologies; semantics; context awareness; machine learning; artificial intelligence

Special Issue Information

Dear Colleagues,

Emotions play a very important role in how we think and behave. The emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how and why our emotions change is thus highly relevant to understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in the wild. This Special Issue aims to bring together the latest experiences, findings, and developments regarding ubiquitous sensing, modelling, and recognition of human emotions.

Original, high-quality contributions from both academia and industry are sought. Manuscripts submitted for review should not have been published elsewhere or be under review by other journals or peer-reviewed conferences.

Topics of interest include, but are not limited to:

  • Wearable, mobile, and ubiquitous emotion recognition systems
  • Algorithms and features for the recognition of emotional states from face, speech, body gestures, and physiological measures
  • Methods for multi-modal recognition of individual and group emotion
  • Benchmarking, datasets, and simulation tools that have been applied to study and/or support emotion recognition
  • Applications of emotion recognition including education, health care, entertainment, vehicle operation, social agents, and ambient intelligence

Prof. Dr. Oresti Banos
Prof. Dr. Luis A. Castro
Prof. Dr. Claudia Villalonga
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • emotion recognition
  • multi-modal sensing
  • wearable, mobile, and ubiquitous computing
  • affective computing

Published Papers (6 papers)


Research


Open Access Article
Driver Facial Expression Analysis Using LFA-CRNN-Based Feature Extraction for Health-Risk Decisions
Appl. Sci. 2020, 10(8), 2956; https://doi.org/10.3390/app10082956 - 24 Apr 2020
Abstract
As people communicate with each other, they use gestures and facial expressions to convey and understand emotional states. Non-verbal means of communication are essential to understanding, providing external clues to a person’s emotional state. Recently, lifecare services that analyze users’ facial expressions have been actively studied; yet, rather than being available in everyday life, such services are currently provided only in health care centers or certain medical institutions. Studies are needed to prevent accidents that occur suddenly in everyday life and to cope with emergencies. Thus, we propose facial expression analysis using line-segment feature analysis–convolutional recurrent neural network (LFA-CRNN) feature extraction for health-risk assessment of drivers. The purpose of such an analysis is to manage and monitor patients with chronic diseases, whose numbers are rapidly increasing. To prevent automobile accidents and to respond to emergency situations caused by acute diseases, we propose a service that monitors a driver’s facial expressions to assess health risks and alert the driver to risk-related matters while driving. To identify health risks, deep learning is used to recognize expressions of pain and to determine whether a person is in pain while driving. Since the amount of input-image data is large, accurately analyzing facial expressions in real time is difficult for a process with limited resources. Accordingly, a line-segment feature analysis algorithm is proposed to reduce the amount of data, and the LFA-CRNN model was designed for this purpose. Through this model, the severity of a driver’s pain is classified into one of nine types. The LFA-CRNN model consists of one convolution layer whose output is reshaped and delivered to two bidirectional gated recurrent unit layers; finally, the biometric data are classified through softmax. To evaluate LFA-CRNN, its performance was compared with the CRNN and AlexNet models on the University of Northern British Columbia–McMaster University (UNBC-McMaster) database.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
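The abstract specifies the network topology only at a high level: one convolution layer whose output feeds two bidirectional GRU layers, closed by a softmax over nine pain-severity classes. As a rough illustration, a minimal Keras sketch of that shape might look like the following; the input dimensions and layer widths are assumptions, and the line-segment feature analysis (LFA) preprocessing that would produce the input is not reproduced here.

```python
# A minimal sketch, assuming a fixed-size LFA feature map of shape
# (128 steps, 32 features); these dimensions and the layer widths are
# hypothetical, not the authors' actual LFA-CRNN configuration.
from tensorflow.keras import layers, models

NUM_CLASSES = 9  # pain severity is classified into nine types (per the abstract)

model = models.Sequential([
    # One convolution layer over the (assumed) LFA feature map.
    layers.Conv1D(64, kernel_size=3, activation="relu", input_shape=(128, 32)),
    # Its output is delivered to two bidirectional GRU layers.
    layers.Bidirectional(layers.GRU(64, return_sequences=True)),
    layers.Bidirectional(layers.GRU(64)),
    # Final softmax over the nine pain-severity classes.
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```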

Open Access Article
EEG-Based Emotion Recognition Using Logistic Regression with Gaussian Kernel and Laplacian Prior and Investigation of Critical Frequency Bands
Appl. Sci. 2020, 10(5), 1619; https://doi.org/10.3390/app10051619 - 29 Feb 2020
Abstract
Emotion plays a central part in human attention, decision-making, and communication. Electroencephalogram (EEG)-based emotion recognition has developed considerably thanks to the application of Brain-Computer Interfaces (BCIs) and its effectiveness compared to body expressions and other physiological signals. Despite significant progress in affective computing, emotion recognition is still an under-explored problem. This paper introduces Logistic Regression (LR) with a Gaussian kernel and Laplacian prior for EEG-based emotion recognition. The Gaussian kernel enhances the separability of the EEG data in the transformed space, while the Laplacian prior promotes sparsity of the learned LR regressors to avoid overfitting. The regressors are optimized using the logistic regression via variable splitting and augmented Lagrangian (LORSAL) algorithm; for simplicity, the introduced method is denoted LORSAL. Experiments were conducted on the dataset for emotion analysis using EEG, physiological, and video signals (DEAP). Various spectral and electrode-combination features (power spectral density (PSD), differential entropy (DE), differential asymmetry (DASM), rational asymmetry (RASM), and differential caudality (DCAU)) were extracted from different frequency bands (Delta, Theta, Alpha, Beta, Gamma, and Total) of the EEG signals. Naive Bayes (NB), the support vector machine (SVM), and linear LR with L1- and L2-regularization (LR_L1, LR_L2) were used for comparison in binary emotion classification for valence and arousal. LORSAL obtained the best classification accuracies (77.17% and 77.03% for valence and arousal, respectively) on the DE features extracted from the total frequency band. The paper also investigates the critical frequency bands for emotion recognition; the experimental results showed the superiority of the Gamma and Beta bands in classifying emotions. DE was found to be the most informative feature, while DASM and DCAU offered lower computational complexity with relatively good accuracies. A comparison of LORSAL with recent deep learning (DL) methods is included in the discussion, and conclusions and future work are presented in the final section.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
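Of the features listed above, differential entropy (DE) has a simple closed form for band-filtered EEG under a Gaussian assumption: DE = ½ ln(2πeσ²), where σ² is the variance of the band-limited signal. A hedged Python sketch of that computation follows; the band edges, recording length, and channel count are illustrative (128 Hz matches DEAP's preprocessed sampling rate), and the LORSAL classifier itself is not reproduced.

```python
# Sketch: per-band differential entropy features for multi-channel EEG.
# Band edges and the dummy recording are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128  # sampling rate (Hz); DEAP's preprocessed EEG uses 128 Hz
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 14),
         "beta": (14, 31), "gamma": (31, 50)}

def bandpass(x, lo, hi, fs=FS, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def de_features(eeg):
    """eeg: (n_channels, n_samples) -> concatenated per-band DE vector."""
    feats = []
    for lo, hi in BANDS.values():
        banded = np.apply_along_axis(bandpass, 1, eeg, lo, hi)
        # DE of a Gaussian signal: 0.5 * ln(2 * pi * e * variance)
        feats.append(0.5 * np.log(2 * np.pi * np.e * banded.var(axis=1)))
    return np.concatenate(feats)

eeg = np.random.randn(32, FS * 60)   # dummy 60-second, 32-channel recording
print(de_features(eeg).shape)        # (160,) = 32 channels x 5 bands
```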

Open Access Article
Deep Learning for EEG-Based Preference Classification in Neuromarketing
Appl. Sci. 2020, 10(4), 1525; https://doi.org/10.3390/app10041525 - 24 Feb 2020
Abstract
Traditional marketing methodologies (e.g., television commercials and newspaper advertisements) may be unsuccessful at selling products because they do not robustly stimulate consumers to purchase a particular product. Such conventional marketing methods attempt to determine the attitude of consumers toward a product, which may not represent real behavior at the point of purchase. Marketers may thus misunderstand consumer behavior, because the predicted attitude does not always reflect consumers’ real purchasing behavior. This study aimed to bridge the gap between traditional market research, which relies on explicit consumer responses, and neuromarketing research, which captures implicit consumer responses. EEG-based preference recognition in neuromarketing is extensively reviewed. Another gap in neuromarketing research is the lack of extensive data-mining approaches for predicting and classifying consumer preferences. Therefore, in this work, a deep-learning approach is adopted to detect consumer preferences from EEG signals in the DEAP dataset, using power spectral density and valence features. The results demonstrate that, although the proposed deep-learning approach exhibits higher accuracy, recall, and precision than the k-nearest neighbor and support vector machine algorithms, random forest reaches similar results to deep learning on the same dataset.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
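As a rough sketch of the kind of classifier comparison the abstract describes, the snippet below pits a small multilayer perceptron (a stand-in for the paper's deep network, whose architecture is not given here) against k-NN, SVM, and random forest using scikit-learn. The feature matrix and labels are random placeholders for the PSD/valence features the paper extracts from DEAP.

```python
# Illustrative head-to-head of the classifier families named in the abstract.
# X and y are random placeholders; accuracies here are meaningless beyond
# demonstrating the mechanics of the comparison.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1280, 160))    # placeholder PSD feature matrix
y = rng.integers(0, 2, size=1280)   # placeholder binary preference labels

models = {
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf"),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "MLP (deep-net stand-in)": MLPClassifier(hidden_layer_sizes=(128, 64),
                                             max_iter=500, random_state=0),
}
for name, clf in models.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```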

Open Access Article
Application of Texture Descriptors to Facial Emotion Recognition in Infants
Appl. Sci. 2020, 10(3), 1115; https://doi.org/10.3390/app10031115 - 7 Feb 2020
Abstract
The recognition of facial emotions is an important issue in computer vision and artificial intelligence due to its academic and commercial potential. In the health sector, the ability to detect and monitor patients’ emotions, mainly pain, is a fundamental objective of any medical service. Nowadays, the evaluation of pain depends mainly on continuous monitoring by the medical staff when the patient is unable to verbally express his/her experience of pain, as is the case for patients under sedation or babies. It is therefore necessary to provide alternative methods for its evaluation and detection. Facial expressions can be considered a valid indicator of a person’s degree of pain. Consequently, this paper presents a monitoring system for babies that performs automatic pain detection by means of image analysis; the system could be accessed through wearable or mobile devices. To this end, the paper uses three different texture descriptors for pain detection: Local Binary Patterns, Local Ternary Patterns, and Radon Barcodes. These descriptors are combined with Support Vector Machines (SVM) for classification. The experimental results show that the proposed features give a very promising classification accuracy of around 95% on the Infant COPE database, which proves the validity of the proposed method.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
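As one illustration of the descriptor-plus-SVM pipeline described above, the sketch below computes uniform Local Binary Pattern histograms with scikit-image and trains an SVM on them. The LBP parameters, image sizes, and labels are placeholder assumptions; real use would load cropped face images from the Infant COPE database, and Local Ternary Patterns and Radon Barcodes are not reproduced here.

```python
# Sketch: uniform-LBP histograms + SVM. Parameters and data are placeholder
# assumptions, not the paper's configuration.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # LBP sampling points and radius (assumed values)

def lbp_histogram(gray):
    """Grayscale face image -> normalized uniform-LBP histogram."""
    codes = local_binary_pattern(gray, P, R, method="uniform")
    # Uniform LBP with P points yields P + 2 distinct codes.
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

rng = np.random.default_rng(0)
images = (rng.random((40, 64, 64)) * 255).astype(np.uint8)  # stand-in faces
labels = rng.integers(0, 2, size=40)                        # pain / no pain

features = np.array([lbp_histogram(img) for img in images])
clf = SVC(kernel="rbf").fit(features, labels)
print(clf.predict(features[:5]))
```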

Open Access Article
Detection of Emotion Using Multi-Block Deep Learning in a Self-Management Interview App
Appl. Sci. 2019, 9(22), 4830; https://doi.org/10.3390/app9224830 - 11 Nov 2019
Cited by 1
Abstract
Recently, universities have constructed and operated online mock interview systems to prepare students for employment. Students can have a mock interview anywhere and at any time through such a system and can address problems observed during the interviews via images stored in real time. For such practice, it is necessary to analyze the emotional state of the student based on the situation and to provide coaching through accurate analysis of the interview. In this paper, we propose detecting user emotions using multi-block deep learning in a self-management interview application. Unlike the basic structure that learns from whole-face images, the multi-block deep learning method samples the core facial areas (eyes, nose, mouth, etc.), which are important factors in emotion analysis, from the detected face before learning. In the multi-block process, sampling is carried out using multiple AdaBoost learners, and similarity measurement is performed for optimal block-image screening and verification. A performance evaluation compares the proposed system with AlexNet, which has mainly been used for facial recognition in the past, in terms of recognition rate and the extraction time of specific areas. The extraction time decreased by 2.61%, and the recognition rate increased by 3.75%, indicating that the proposed facial recognition method performs well. By establishing a systematic interview system based on the proposed deep learning method, good-quality, customized interview education can be provided for job seekers.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
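The abstract's multi-block sampling relies on AdaBoost-based detection of facial regions. A hedged OpenCV sketch of that idea follows, using the Haar cascades (themselves AdaBoost classifiers) that ship with OpenCV to locate the face and eyes and to crop sub-blocks; the block geometry and the omission of the similarity-based screening step are simplifying assumptions, not the paper's actual scheme.

```python
# Sketch: crop candidate facial blocks with OpenCV's AdaBoost-trained Haar
# cascades. The half-face split is a crude assumption; the paper's
# similarity-based block screening is not reproduced.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def extract_blocks(frame_bgr):
    """Return a dict of cropped facial blocks from the first detected face."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return {}
    x, y, w, h = faces[0]
    face = gray[y:y + h, x:x + w]
    blocks = {"face": face,
              "upper_half": face[: h // 2, :],   # eye/nose region (assumed split)
              "lower_half": face[h // 2 :, :]}   # mouth region (assumed split)
    for i, (ex, ey, ew, eh) in enumerate(eye_cascade.detectMultiScale(face)):
        blocks[f"eye_{i}"] = face[ey:ey + eh, ex:ex + ew]
    return blocks
```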

Review


Open Access Review
Thermal Infrared Imaging-Based Affective Computing and Its Application to Facilitate Human Robot Interaction: A Review
Appl. Sci. 2020, 10(8), 2924; https://doi.org/10.3390/app10082924 - 23 Apr 2020
Abstract
In recent years, robots have increasingly been employed in several aspects of modern society. Among others, social robots have the potential to benefit education, healthcare, and tourism. To achieve this, robots should be able to engage humans, recognize users’ emotions, and, to some extent, react properly and "behave" naturally in interaction. Most robotics applications primarily use visual information for emotion recognition, which is often based on facial expressions. However, displaying emotional states through facial expression is an inherently voluntary, controlled process typical of human–human interaction; humans have not yet learned to use this channel when communicating with robotic technology. Hence, there is a pressing need to exploit emotion information channels not directly controlled by humans, such as those that can be ascribed to physiological modulations. Thermal infrared imaging-based affective computing has the potential to address this issue: it is a validated technology that allows the non-obtrusive monitoring of physiological parameters from which affective states might be inferred. This review outlines the advantages and current research challenges of thermal imaging-based affective computing for human–robot interaction.
(This article belongs to the Special Issue Ubiquitous Technologies for Emotion Recognition)
