Special Issue on the Future of Intelligent Human-Computer Interface

A special issue of Future Internet (ISSN 1999-5903).

Deadline for manuscript submissions: closed (1 March 2019) | Viewed by 20315

Special Issue Editors


Prof. Dr. Genoveffa Tortora
Guest Editor
Department of Computer Science, University of Salerno, 84084 Fisciano, SA, Italy
Interests: end-user development; mobile computing; data management; visual languages; geographic information systems; image processing and biometrics

Prof. Dr. Giuliana Vitiello
Guest Editor
Department of Computer Science, University of Salerno, 84084 Fisciano, SA, Italy
Interests: human–computer interaction; usability engineering

Special Issue Information

Dear Colleagues,

Over the years, the disciplines of Artificial Intelligence (AI) and Human–Computer Interaction (HCI) have evolved separately, leveraging the potential of intelligent machines on the one hand and that of advanced interaction paradigms and techniques on the other. A common goal on which the two scientific communities have progressively converged is to enhance human quality of life through beneficial and effective interaction with intelligent systems. This goal raises several research challenges across different areas, including machine learning, natural language processing, knowledge representation, user modeling, multimodal interfaces, and virtual reality interfaces, to mention just a few.

This Special Issue is intended to collect research contributions that reflect the current challenges and future directions of intelligent human-computer interfaces. Original papers are especially sought that tackle critical aspects of user experience with intelligent machines, possibly with reference to a concrete domain, e.g., health, emergency, education, the manufacturing industry, etc.

Potential topics include, but are not limited to:

- adaptive and customizable user interfaces
- affective multimodal interfaces
- brain–computer interfaces
- collaborative intelligent interfaces
- intelligent assistive user interfaces
- intelligent ubiquitous/mobile interfaces
- intelligent visualization/visual analytics tools
- interactive machine learning
- natural language processing interfaces
- smart interaction design techniques for smart objects
- usability evaluation of intelligent user interfaces
- user experience with persuasive technology in the Internet of Things
- user modelling for intelligent interfaces.

Prof. Dr. Genoveffa Tortora
Prof. Dr. Giuliana Vitiello
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Future Internet is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (2 papers)


Research

13 pages, 3044 KiB  
Article
Convolutional Two-Stream Network Using Multi-Facial Feature Fusion for Driver Fatigue Detection
by Weihuang Liu, Jinhao Qian, Zengwei Yao, Xintao Jiao and Jiahui Pan
Future Internet 2019, 11(5), 115; https://doi.org/10.3390/fi11050115 - 14 May 2019
Cited by 90 | Viewed by 9888
Abstract
Road traffic accidents caused by fatigue driving are common causes of human casualties. In this paper, we present a driver fatigue detection algorithm using two-stream network models with multi-facial features. The algorithm consists of four parts: (1) Positioning mouth and eye with multi-task cascaded convolutional neural networks (MTCNNs). (2) Extracting the static features from a partial facial image. (3) Extracting the dynamic features from a partial facial optical flow. (4) Combining both static and dynamic features using a two-stream neural network to make the classification. The main contribution of this paper is the combination of a two-stream network and multi-facial features for driver fatigue detection. Two-stream networks can combine static and dynamic image information, while partial facial images as network inputs can focus on fatigue-related information, which brings better performance. Moreover, we applied gamma correction to enhance image contrast, which can help our method achieve better results, noted by an increased accuracy of 2% in night environments. Finally, an accuracy of 97.06% was achieved on the National Tsing Hua University Driver Drowsiness Detection (NTHU-DDD) dataset.
(This article belongs to the Special Issue on the Future of Intelligent Human-Computer Interface)

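The abstract above outlines a pipeline that localizes eye and mouth regions, enhances low-light frames with gamma correction, and fuses a static-appearance stream with an optical-flow stream for classification. The Python sketch below illustrates that general idea only; the layer sizes, patch resolution, class count, and late-fusion layer are assumptions chosen for brevity and are not taken from the published architecture.

```python
# Illustrative sketch only: layer sizes, input shapes, and the fusion strategy
# are assumptions, not the architecture published in the paper.
import numpy as np
import torch
import torch.nn as nn


def gamma_correct(image: np.ndarray, gamma: float = 1.5) -> np.ndarray:
    """Brighten a uint8 image via a gamma lookup table (mimicking the
    contrast enhancement applied to night-time frames)."""
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return table[image]


class StreamBranch(nn.Module):
    """A small CNN that embeds one input stream (static patch or optical flow)."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # -> (N, 32, 1, 1)
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (N, 32)


class TwoStreamFatigueNet(nn.Module):
    """Late fusion of a static-appearance stream and an optical-flow stream."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.static_stream = StreamBranch(in_channels=3)  # RGB eye/mouth patch
        self.flow_stream = StreamBranch(in_channels=2)    # (dx, dy) optical flow
        self.classifier = nn.Linear(32 + 32, num_classes)

    def forward(self, static_patch, flow_patch):
        fused = torch.cat(
            [self.static_stream(static_patch), self.flow_stream(flow_patch)], dim=1)
        return self.classifier(fused)


# Toy usage: one 64x64 facial patch and a random stand-in for its optical flow.
frame = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
frame = gamma_correct(frame, gamma=1.5)
static = torch.from_numpy(frame).permute(2, 0, 1).float().unsqueeze(0) / 255.0
flow = torch.randn(1, 2, 64, 64)
logits = TwoStreamFatigueNet()(static, flow)   # shape (1, 2): alert vs. fatigued
```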

17 pages, 3376 KiB  
Article
Combining Facial Expressions and Electroencephalography to Enhance Emotion Recognition
by Yongrui Huang, Jianhao Yang, Siyu Liu and Jiahui Pan
Future Internet 2019, 11(5), 105; https://doi.org/10.3390/fi11050105 - 2 May 2019
Cited by 83 | Viewed by 9830
Abstract
Emotion recognition plays an essential role in human–computer interaction. Previous studies have investigated the use of facial expression and electroencephalogram (EEG) signals from single modal for emotion recognition separately, but few have paid attention to a fusion between them. In this paper, we adopted a multimodal emotion recognition framework by combining facial expression and EEG, based on a valence-arousal emotional model. For facial expression detection, we followed a transfer learning approach for multi-task convolutional neural network (CNN) architectures to detect the state of valence and arousal. For EEG detection, two learning targets (valence and arousal) were detected by different support vector machine (SVM) classifiers, separately. Finally, two decision-level fusion methods based on the enumerate weight rule or an adaptive boosting technique were used to combine facial expression and EEG. In the experiment, the subjects were instructed to watch clips designed to elicit an emotional response and then reported their emotional state. We used two emotion datasets—a Database for Emotion Analysis using Physiological Signals (DEAP) and MAHNOB-human computer interface (MAHNOB-HCI)—to evaluate our method. In addition, we also performed an online experiment to make our method more robust. We experimentally demonstrated that our method produces state-of-the-art results in terms of binary valence/arousal classification, based on DEAP and MAHNOB-HCI data sets. Besides this, for the online experiment, we achieved 69.75% accuracy for the valence space and 70.00% accuracy for the arousal space after fusion, each of which has surpassed the highest performing single modality (69.28% for the valence space and 64.00% for the arousal space). The results suggest that the combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources. The novelty of this work is as follows. To begin with, we combined facial expression and EEG to improve the performance of emotion recognition. Furthermore, we used transfer learning techniques to tackle the problem of lacking data and achieve higher accuracy for facial expression. Finally, in addition to implementing the widely used fusion method based on enumerating different weights between two models, we also explored a novel fusion method, applying boosting technique.
(This article belongs to the Special Issue on the Future of Intelligent Human-Computer Interface)

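The decision-level fusion described in the abstract weights the facial-expression and EEG classifiers' outputs. The Python sketch below illustrates one way an enumerate-weight rule could be realized on a validation split: a single weight w is searched over a grid to maximize the accuracy of the fused probability w·P(face) + (1 − w)·P(EEG). The probability arrays, weight grid, and 0.5 threshold are toy assumptions, not the authors' exact procedure or data.

```python
# Minimal sketch of decision-level fusion by weight enumeration; all inputs
# below are synthetic placeholders, not the paper's data.
import numpy as np


def fuse_by_weight_enumeration(p_face, p_eeg, labels, weights=np.linspace(0, 1, 101)):
    """Pick the weight w that maximizes validation accuracy of
    w * P(face) + (1 - w) * P(EEG) for binary valence/arousal labels.

    p_face, p_eeg : (N,) arrays with P(class = 1) from each modality.
    labels        : (N,) array of ground-truth 0/1 labels.
    """
    best_w, best_acc = 0.0, -1.0
    for w in weights:
        fused = w * p_face + (1.0 - w) * p_eeg
        acc = np.mean((fused >= 0.5).astype(int) == labels)
        if acc > best_acc:
            best_w, best_acc = float(w), float(acc)
    return best_w, best_acc


# Toy usage with random validation predictions for, e.g., the valence dimension.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=200)
p_face = np.clip(labels + rng.normal(0, 0.4, 200), 0, 1)  # facial-expression CNN scores
p_eeg = np.clip(labels + rng.normal(0, 0.5, 200), 0, 1)   # EEG SVM scores
w, acc = fuse_by_weight_enumeration(p_face, p_eeg, labels)
print(f"best weight {w:.2f}, fused accuracy {acc:.3f}")
```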
