

Special Issue "Human-Machine Interaction and Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 31 July 2020.

Special Issue Editor

Prof. Dr. Khalid Saeed
Guest Editor
Białystok University of Technology, Wiejska 45A, 15-351 Białystok, Poland
Interests: biometrics; computer information systems

Special Issue Information

Dear Colleagues,

Human–robot interaction is one of the most important topics today. This kind of cooperation requires a special set of sensors, or even sensor networks. Not long ago, we could not have imagined that situations from science-fiction movies would become part of our daily lives. For instance, biometric sensors, through which we can be recognized without any login or password, and novel sensor-based safety systems in intelligent houses are all based on sensor networks and Internet of Things procedures. The main problems with all these solutions concern the safety of the analyzed data (especially when we are dealing with personal information) as well as sensor failures. The second difficulty is especially dangerous in solutions on which human life depends, such as gyroscopes in aircraft: when such a sensor fails, the pilot does not know where the horizon is, which can easily lead to a crash. Significant improvements in these fields have been observed recently, and further experiments are planned or already under way. However, novel approaches are still needed, particularly ones that deliver high-quality results in shorter time and with lower computational complexity.

Prof. Dr. Khalid Saeed
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensors
  • Sensor networks
  • Biometrics
  • Safety systems
  • Intelligent houses
  • Internet of Things

Published Papers (6 papers)


Research

Open Access Article
Biometric Identification from Human Aesthetic Preferences
Sensors 2020, 20(4), 1133; https://doi.org/10.3390/s20041133 - 19 Feb 2020
Abstract
In recent years, human–machine interaction has come to encompass many avenues of life, ranging from personal communications to professional activities. This trend has allowed person identification based on behavior rather than physical traits to emerge as a growing research domain, which spans areas such as online education, e-commerce, e-communication, and biometric security. The expression of opinions is an example of online behavior that is commonly shared through the liking of online images. Visual aesthetics is a behavioral biometric that draws on a person's sense of fondness for images. The identification of individuals using their visual aesthetic values as discriminatory features is an emerging domain of research. This paper introduces a novel method for aesthetic feature dimensionality reduction using gene expression programming. The proposed system uses a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40,000 images demonstrate 95% accuracy of identity recognition based solely on users' aesthetic preferences.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
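The abstract gives no implementation details, but the idea of tree-based feature recombination can be illustrated with a minimal Python sketch. This is a hypothetical, single-generation version, not the authors' method: candidate features are random expression trees over the original aesthetic features, ranked by a placeholder correlation fitness. Full gene expression programming would additionally evolve the population through selection, crossover, and mutation over linear chromosomes.

import numpy as np

rng = np.random.default_rng(0)
OPS = [np.add, np.subtract, np.multiply]

def random_tree(n_features, depth=2):
    # A leaf is the index of an original feature; an inner node combines two subtrees.
    if depth == 0 or rng.random() < 0.3:
        return int(rng.integers(n_features))
    op = OPS[int(rng.integers(len(OPS)))]
    return (op, random_tree(n_features, depth - 1), random_tree(n_features, depth - 1))

def evaluate(tree, X):
    # Compute the derived feature encoded by a tree for all samples at once.
    if isinstance(tree, int):
        return X[:, tree]
    op, left, right = tree
    return op(evaluate(left, X), evaluate(right, X))

def fitness(tree, X, y):
    # Placeholder fitness: absolute correlation with a numeric label vector.
    f = evaluate(tree, X)
    if f.std() == 0:
        return 0.0
    return abs(np.corrcoef(f, y)[0, 1])

def reduce_features(X, y, n_out=10, population=200):
    # Keep the n_out best candidate trees as the reduced feature set.
    trees = [random_tree(X.shape[1]) for _ in range(population)]
    trees.sort(key=lambda t: fitness(t, X, y), reverse=True)
    return np.column_stack([evaluate(t, X) for t in trees[:n_out]])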
Open Access Article
Assessment of the Potential of Wrist-Worn Wearable Sensors for Driver Drowsiness Detection
Sensors 2020, 20(4), 1029; https://doi.org/10.3390/s20041029 - 14 Feb 2020
Abstract
Drowsy driving imposes a high safety risk. Current systems often use driving behavior parameters for driver drowsiness detection. Increasing driving automation reduces the availability of these parameters and therefore the scope of such methods. Techniques that include physiological measurements, in particular, seem to be a promising alternative. However, in a dynamic environment such as driving, only non- or minimally intrusive methods are accepted, and vibrations from the roadbed can degrade the sensor signal. This work contributes to driver drowsiness detection with a machine learning approach applied solely to physiological data collected from a non-intrusive, retrofittable system in the form of a wrist-worn wearable sensor. To check accuracy and feasibility, the results are compared with reference data from a medical-grade ECG device. A user study with 30 participants in a high-fidelity driving simulator was conducted. Several machine learning algorithms for binary classification were applied in user-dependent and user-independent tests. The results provide evidence that the non-intrusive setting achieves an accuracy similar to that of the medical-grade device, and high accuracies (>92%) could be achieved, especially in the user-dependent scenario. The proposed approach offers new possibilities for human–machine interaction in a car, and especially for driver state monitoring in the field of automated driving.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
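The abstract does not disclose the exact features or classifiers, so the following Python sketch only illustrates the general shape of such a pipeline: time-domain heart-rate-variability features per labeled window, fed to a cross-validated binary classifier. The window/label variables and the random-forest choice are assumptions for illustration.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def hrv_features(ibi_ms):
    # Standard time-domain HRV features from inter-beat intervals in milliseconds.
    diffs = np.diff(ibi_ms)
    return np.array([
        ibi_ms.mean(),                 # mean inter-beat interval
        ibi_ms.std(),                  # SDNN
        np.sqrt(np.mean(diffs ** 2)),  # RMSSD
        np.mean(np.abs(diffs) > 50),   # pNN50
    ])

def drowsiness_cv_accuracy(windows, labels):
    # 'windows' is a hypothetical list of per-window IBI arrays; 'labels' holds
    # 0 (alert) or 1 (drowsy) per window, e.g. from expert ratings.
    X = np.stack([hrv_features(w) for w in windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return cross_val_score(clf, X, labels, cv=5).mean()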
Open Access Article
Towards Mixed-Initiative Human–Robot Interaction: Assessment of Discriminative Physiological and Behavioral Features for Performance Prediction
Sensors 2020, 20(1), 296; https://doi.org/10.3390/s20010296 - 5 Jan 2020
Abstract
The design of human–robot interactions is a key challenge for optimizing operational performance. A promising approach is to consider mixed-initiative interactions in which the tasks and authority of each human and artificial agent are dynamically defined according to their current abilities. An important issue for the implementation of mixed-initiative systems is to monitor human performance in order to dynamically drive task allocation between human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions whereby participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants' performance across missions. Cardiac activity, eye-tracking data, and participants' actions on the user interface were collected. The participants performed differently to an extent that we could identify high- and low-score mission groups that also exhibited different behavioral, cardiac, and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aiming to maximize performance would be able to analyze such physiological and behavioral markers online and change the level of automation when relevant for the mission purpose.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
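The inter-subject, single-trial evaluation reported above can be sketched with scikit-learn. The logistic-regression model and the feature layout are assumptions, since the abstract does not name the classifier; what the sketch does mirror is the leave-one-subject-out split and the balanced-accuracy score.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def inter_subject_balanced_accuracy(X, y, subjects):
    # X: one row of fused cardiac, ocular, and interface features per mission trial.
    # y: 1 for high-score missions, 0 for low-score ones; subjects: subject id per trial.
    scores = []
    for train, test in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
        clf.fit(X[train], y[train])
        # Balanced accuracy averages per-class recall, the metric behind the 74% figure.
        scores.append(balanced_accuracy_score(y[test], clf.predict(X[test])))
    return float(np.mean(scores))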
Open Access Article
Adaptive Binarization of QR Code Images for Fast Automatic Sorting in Warehouse Systems
Sensors 2019, 19(24), 5466; https://doi.org/10.3390/s19245466 - 11 Dec 2019
Abstract
As a fundamental element of the Internet of Things, the QR code has become increasingly crucial for connecting online and offline services. In e-commerce and logistics, the main concern is how to identify QR codes quickly and accurately. An adaptive binarization approach is proposed to solve the problem of uneven illumination in warehouse automatic sorting systems. Guided by cognitive modeling, we adaptively select the block window of the QR code for robust binarization under uneven illumination. The proposed method can effectively eliminate the impact of uneven illumination on QR codes while meeting the real-time needs of automatic warehouse sorting. Experimental results demonstrate the superiority of the proposed approach when benchmarked against several state-of-the-art methods.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
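The core operation, thresholding each pixel against a local block mean so that no single global threshold is stretched across unevenly lit regions, can be sketched with OpenCV. The fixed block size below is an assumption; the paper's contribution is precisely the cognitively guided, adaptive choice of that window.

import cv2

def binarize_qr(gray, block_size=31, offset=7):
    # Each pixel is compared against the mean of its block_size x block_size
    # neighborhood, which tolerates smooth illumination gradients across the code.
    # block_size must be odd and should exceed one QR module in pixels.
    return cv2.adaptiveThreshold(
        gray, 255,
        cv2.ADAPTIVE_THRESH_MEAN_C,
        cv2.THRESH_BINARY,
        block_size, offset)

# Example usage (hypothetical file name):
# gray = cv2.imread("parcel_label.png", cv2.IMREAD_GRAYSCALE)
# bw = binarize_qr(gray)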
Open Access Article
A Deep Learning-Based End-to-End Composite System for Hand Detection and Gesture Recognition
Sensors 2019, 19(23), 5282; https://doi.org/10.3390/s19235282 - 30 Nov 2019
Abstract
Recent research on hand detection and gesture recognition has attracted increasing interest due to its broad range of potential applications, such as human–computer interaction, sign language recognition, hand action analysis, driver hand behavior monitoring, and virtual reality. In recent years, several approaches have been proposed with the aim of developing a robust algorithm that functions in complex and cluttered environments. Although several researchers have addressed this challenging problem, a robust system is still elusive. Therefore, we propose a deep learning-based architecture to jointly detect and classify hand gestures. In the proposed architecture, the whole image is passed through a one-stage dense object detector to extract hand regions, which, in turn, pass through a lightweight convolutional neural network (CNN) for hand gesture recognition. To evaluate our approach, we conducted extensive experiments on four publicly available datasets for hand detection, namely the Oxford, 5-signers, EgoHands, and Indian classical dance (ICD) datasets, along with two hand gesture datasets with different gesture vocabularies, namely the LaRED and TinyHands datasets. Experimental results demonstrate that the proposed architecture is efficient and robust, and it outperforms other approaches in both the hand detection and gesture classification tasks.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
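Stripped of the network internals, the composite architecture is a detect-then-classify composition. The Python sketch below shows only that glue logic; detector and gesture_cnn are hypothetical stand-ins for the trained one-stage dense object detector and the lightweight CNN.

def recognize_gestures(image, detector, gesture_cnn, conf_thresh=0.5):
    # Stage 1: the one-stage detector proposes hand boxes with confidence scores.
    # Stage 2: each confident hand crop is classified by the lightweight CNN.
    results = []
    for (x1, y1, x2, y2), score in detector(image):
        if score < conf_thresh:
            continue
        crop = image[y1:y2, x1:x2]  # hand region of an H x W x C image array
        results.append(((x1, y1, x2, y2), gesture_cnn(crop)))
    return results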
Open Access Article
A Gesture Recognition Algorithm for Hand-Assisted Laparoscopic Surgery
Sensors 2019, 19(23), 5182; https://doi.org/10.3390/s19235182 - 26 Nov 2019
Abstract
Minimally invasive surgery (MIS) techniques are growing in quantity and complexity to cover a wider range of interventions. More specifically, hand-assisted laparoscopic surgery (HALS) involves the use of one of the surgeon's hands inside the patient while the other manages a single laparoscopic tool. In this scenario, surgical procedures performed with an additional tool require the aid of an assistant. Furthermore, in the case of a human–robot assistant pairing, fluid communication is mandatory. This human–machine interaction must combine both explicit orders and implicit information from the surgical gestures. In this context, this paper focuses on the development of a hand gesture recognition system for HALS. The recognition is based on a hidden Markov model (HMM) algorithm with an improved automated training step, which can also learn during the online surgical procedure by means of a reinforcement learning process.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
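Leaving aside the improved automated training step and the online reinforcement-learning update, the HMM backbone of such a recognizer can be sketched with the hmmlearn library: one Gaussian HMM is fitted per gesture class, and an observed sequence is labeled by the highest-scoring model. The state count and covariance type are assumptions.

import numpy as np
from hmmlearn import hmm  # pip install hmmlearn

def train_gesture_models(sequences_by_gesture, n_states=5):
    # Fit one HMM per gesture on that gesture's training sequences
    # (each sequence is a T x D array of hand-motion observations).
    models = {}
    for gesture, seqs in sequences_by_gesture.items():
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
        models[gesture] = model
    return models

def classify(models, sequence):
    # Maximum-likelihood decision over the per-gesture models.
    return max(models, key=lambda g: models[g].score(sequence))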