Special Issue "Human-Machine Interaction and Sensors"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 July 2020).

Special Issue Editor

Prof. Dr. Khalid Saeed
Guest Editor
1. Faculty of Computer Science, Head of the Department of Digital Media and Computer Graphics, Bialystok University of Technology, 45A Wiejska Street, 15-351 Bialystok, Poland
2. Department of Electronics and Computation Sciences, Universidad de la Costa, Calle 58, Barranquilla, Colombia
Interests: image processing; biometrics; security systems

Special Issue Information

Dear Colleagues,

Human–robot interaction is one of the most important topics today. This kind of cooperation requires a special set of sensors or even sensor networks. Not long ago, we could not have imagined that scenarios from science-fiction movies would become part of our daily life. For instance, biometric sensors, through which we can be recognized without any login or password, and novel sensor-based safety systems in intelligent houses are all built on sensor networks and Internet of Things procedures. The main problems with all these solutions concern the safety of the analyzed data (especially when we are dealing with personal information) as well as sensor failures. The second difficulty is especially dangerous in solutions on which human life depends—for example, gyroscopes in aircraft: when such a sensor fails, the pilot no longer knows where the horizon is, which can easily lead to a crash. Recently, significant improvements in these fields have been observed, and further experiments are planned or already underway. However, novel approaches are still needed—particularly ones that achieve high-quality results in less time and with lower computational complexity.

Prof. Dr. Khalid Saeed
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensors
  • Sensor networks
  • Biometrics
  • Safety systems
  • Intelligent houses
  • Internet of Things

Published Papers (18 papers)


Research


Article
Physical, Modular and Articulated Interface for Interactive Molecular Manipulation
Sensors 2020, 20(18), 5415; https://doi.org/10.3390/s20185415 - 21 Sep 2020
Cited by 1 | Viewed by 1129
Abstract
Rational drug design is an approach based on detailed knowledge of molecular interactions and the dynamics of biomolecules. This approach involves designing new digital and interactive tools, including classical desktop interaction devices as well as advanced ones such as haptic arms or virtual reality devices. These approaches, however, struggle to deal with the flexibility of biomolecules, which requires simultaneously steering numerous degrees of freedom. We propose a new method that follows a direct interaction approach by implementing an innovative methodology benefiting from a physical, modular and articulated molecular interface augmented by wireless embedded sensors. The goal is to create, design and steer its in silico twin virtual model and better interact with dynamic molecular models.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)

Article
Customized 2D Barcode Sensing for Anti-Counterfeiting Application in Smart IoT with Fast Encoding and Information Hiding
Sensors 2020, 20(17), 4926; https://doi.org/10.3390/s20174926 - 31 Aug 2020
Cited by 2 | Viewed by 914
Abstract
With the development of the commodity economy, the emergence of fake and shoddy products has seriously harmed the interests of consumers and enterprises. To tackle this challenge, a customized 2D barcode is proposed to satisfy the requirements of enterprise anti-counterfeiting certification. Based on information hiding technology, the proposed approach provides a solution that is low-cost, difficult to forge, and easy to identify, while achieving the function of conventional 2D barcodes. By weighing the perceptual quality against decoding robustness in sensing recognition, the customized 2D barcode can maintain a better aesthetic appearance for anti-counterfeiting and achieve fast encoding. A new picture-embedding scheme was designed that treats a unit image block as the basic encoding unit, with the 2D barcode finder patterns embedded after encoding. Experimental results demonstrated that the proposed customized barcode provides better encoding characteristics while maintaining better decoding robustness than several state-of-the-art methods. Additionally, as a closed-source 2D barcode that is visually anti-counterfeit, the customized 2D barcode can effectively prevent counterfeiting that replicates physical labels. Benefiting from its high security, high information capacity, and low cost, the proposed customized 2D barcode with its sensing recognition scheme provides a highly practical, marketable, and traceable anti-counterfeiting solution for future smart IoT applications.

Communication
ClothFace: A Batteryless RFID-Based Textile Platform for Handwriting Recognition
Sensors 2020, 20(17), 4878; https://doi.org/10.3390/s20174878 - 28 Aug 2020
Viewed by 937
Abstract
This paper introduces a prototype of ClothFace technology, a battery-free textile-based handwriting recognition platform that includes an e-textile antenna and a 10 × 10 array of radio frequency identification (RFID) integrated circuits (ICs), each with a unique ID. Touching the textile platform surface creates an electrical connection from specific ICs to the antenna, which enables the connected ICs to be read with an external UHF (ultra-high frequency) RFID reader. In this paper, the platform is demonstrated to recognize handwritten numbers 0–9. The raw data collected by the platform are a sequence of IDs from the touched ICs. The system converts the data into bitmaps, whose detail is increased by interpolating between neighboring samples using the sequential information of the IDs. These images of digits written on the platform can be classified, with enough accuracy for practical use, by deep learning. The recognition system was trained and tested with samples from six volunteers using the platform. The real-time number recognition ability of the ClothFace technology is demonstrated to work successfully with a very low error rate. The overall recognition accuracy of the platform is 94.6%, and the accuracy for each digit is between 91.1% and 98.3%. As the solution is fully passive and gets all the energy it needs from the external RFID reader, it enables a maintenance-free and cost-effective user interface that can be integrated into clothing and the textiles around us.
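The rasterization step described in the abstract — turning an ordered sequence of touched IC IDs into a bitmap, with interpolation between consecutive touches — can be sketched as follows. This is a hypothetical illustration based only on the abstract; the grid mapping, function names, and interpolation density are assumptions, not the authors' implementation.

```python
# Sketch: convert a touch sequence on a 10x10 IC grid into a binary bitmap,
# filling gaps by linearly interpolating between consecutive touch points.
# (Grid size from the abstract; everything else is an assumption.)

GRID = 10

def id_to_xy(ic_id):
    """Map an IC's unique ID (0..99) to its (row, col) position on the grid."""
    return divmod(ic_id, GRID)

def sequence_to_bitmap(ic_ids, steps=4):
    """Rasterize an ordered touch sequence into a GRID x GRID binary bitmap,
    sampling `steps` intermediate points between each pair of touches."""
    bitmap = [[0] * GRID for _ in range(GRID)]
    points = [id_to_xy(i) for i in ic_ids]
    for (r0, c0), (r1, c1) in zip(points, points[1:]):
        for s in range(steps + 1):
            t = s / steps
            r = round(r0 + t * (r1 - r0))
            c = round(c0 + t * (c1 - c0))
            bitmap[r][c] = 1
    return bitmap

# Example: a diagonal stroke from the top-left IC (ID 0) to the bottom-right (ID 99).
stroke = sequence_to_bitmap([0, 99])
```

The resulting bitmap can then be fed to an image classifier, as the paper does with deep learning.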

Article
Research on a Cognitive Distraction Recognition Model for Intelligent Driving Systems Based on Real Vehicle Experiments
Sensors 2020, 20(16), 4426; https://doi.org/10.3390/s20164426 - 07 Aug 2020
Cited by 3 | Viewed by 1084
Abstract
The accurate and prompt recognition of a driver’s cognitive distraction state is of great significance to intelligent driving systems (IDSs) and human-autonomous collaboration systems (HACSs). Once the driver’s distraction status has been accurately identified, the IDS or HACS can actively intervene or take control of the vehicle, thereby avoiding the safety hazards caused by distracted driving. However, few studies have considered the time–frequency characteristics of driving behavior and vehicle status during distracted driving when establishing a recognition model. This study develops a recognition model of cognitive distraction driving based on time–frequency analysis of the characteristic parameters. An on-road experiment was implemented to measure the relevant parameters under both normal and distracted driving via a test vehicle equipped with multiple sensors. Wavelet packet analysis was used to extract the time–frequency characteristics, and 21 pivotal features were determined as the input of the training model. Finally, a bidirectional long short-term memory network (Bi-LSTM) combined with an attention mechanism (Atten-BiLSTM) was proposed and trained. The results indicate that, compared with the support vector machine (SVM) model and the long short-term memory network (LSTM) model, the proposed model achieved the highest recognition accuracy (90.64%) for cognitive distraction under a time window setting of 5 s. The determination of time–frequency characteristic parameters and the more accurate recognition of cognitive distraction driving achieved in this work provide a foundation for human-centered intelligent vehicles.
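The attention mechanism layered on the Bi-LSTM in models like the one above reduces to a simple pattern: score each time step's hidden state, softmax-normalize the scores, and take the weighted sum as a context vector for the final classifier. A minimal pure-Python sketch of that pooling step (the toy vectors and scoring weights are invented for illustration, not the paper's trained parameters):

```python
# Attention pooling over a sequence of hidden-state vectors, as used on top
# of a Bi-LSTM: score -> softmax -> weighted sum. Pure-Python toy example.

import math

def softmax(scores):
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_pool(hidden_states, score_weights):
    """Collapse a sequence of hidden states into one context vector."""
    # One scalar score per time step (dot product with a learned weight vector).
    scores = [sum(w * h for w, h in zip(score_weights, hs)) for hs in hidden_states]
    weights = softmax(scores)
    dim = len(hidden_states[0])
    return [sum(a * hs[d] for a, hs in zip(weights, hidden_states))
            for d in range(dim)]

# Three time steps of a (toy) 2-dimensional Bi-LSTM output.
states = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5]]
context = attention_pool(states, score_weights=[1.0, 0.0])
```

In a real Atten-BiLSTM the score weights are learned jointly with the LSTM; here they are fixed only to show the data flow.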

Article
PeriSense: Ring-Based Multi-Finger Gesture Interaction Utilizing Capacitive Proximity Sensing
Sensors 2020, 20(14), 3990; https://doi.org/10.3390/s20143990 - 17 Jul 2020
Cited by 4 | Viewed by 1510
Abstract
Rings are widely accepted wearables for gesture interaction. However, most rings can sense only the motion of one finger or of the whole hand. We present PeriSense, a ring-shaped interaction device enabling multi-finger gesture interaction. Gestures of the finger wearing the ring and of its adjacent fingers are sensed by measuring capacitive proximity between electrodes and human skin. Our main contribution is the determination of PeriSense’s interaction space, involving the evaluation of its capabilities and limitations. We introduce the PeriSense prototype, analyze the sensor resolution at different distances, and evaluate finger gestures and unistroke gestures based on gesture sets, allowing the determination of the device's strengths and limitations. We show that PeriSense is able to reliably sense changes caused by conductive objects at distances up to 2.5 cm. Furthermore, we show that this capability enables different interaction techniques such as multi-finger gesture recognition or two-handed unistroke input.

Article
A Comparative Study in Real-Time Scene Sonification for Visually Impaired People
Sensors 2020, 20(11), 3222; https://doi.org/10.3390/s20113222 - 05 Jun 2020
Cited by 5 | Viewed by 1496
Abstract
In recent years, with the development of depth cameras and scene detection algorithms, a wide variety of electronic travel aids for visually impaired people have been proposed. However, it is still challenging to convey scene information to visually impaired people efficiently. In this paper, we propose three different auditory-based interaction methods, i.e., depth image sonification, obstacle sonification and path sonification, which convey raw depth images, obstacle information and path information, respectively, to visually impaired people. The three sonification methods are compared comprehensively through a field experiment attended by twelve visually impaired participants. The results show that the sonification of high-level scene information, such as the direction of a pathway, is easier to learn and adapt to, and is more suitable for point-to-point navigation. In contrast, through the sonification of low-level scene information, such as raw depth images, visually impaired people can understand the surrounding environment more comprehensively. Furthermore, no single interaction method is best suited for all participants in the experiment, and visually impaired individuals need a period of time to find the most suitable interaction method. Our findings highlight the features and differences of the three scene detection algorithms and the corresponding sonification methods. The results provide insights into the design of electronic travel aids, and the conclusions can also be applied in other fields, such as the sound feedback of virtual reality applications.

Article
A Method for Measuring the Height of Hand Movements Based on a Planar Array of Electrostatic Induction Electrodes
Sensors 2020, 20(10), 2943; https://doi.org/10.3390/s20102943 - 22 May 2020
Cited by 1 | Viewed by 869
Abstract
This paper proposes a method based on a planar array of electrostatic induction electrodes, which uses human body electrostatics to measure the height of hand movements. The human body is electrostatically charged for a variety of reasons. During a hand movement, the change in the human body's electric field is captured through electrostatic sensors connected to the electrode array. A measurement algorithm then estimates the height of the hand movement once its direction has been obtained. Compared with a tridimensional array, the planar array has the advantages of requiring less space and being easy to deploy; therefore, it is more widely applicable. In this paper, a human hand movement sensing system based on human body electrostatics was established to perform verification experiments. The results show that this method can measure the height of hand movements with good accuracy, meeting the requirements of non-contact human-computer interaction.

Article
Legodroid: A Type-Driven Library for Android and LEGO Mindstorms Interoperability
Sensors 2020, 20(7), 1926; https://doi.org/10.3390/s20071926 - 30 Mar 2020
Cited by 1 | Viewed by 1240
Abstract
LEGO Mindstorms robots are widely used as educational tools to acquire skills in programming complex systems involving the interaction of sensors and actuators, and they offer a flexible and modular workbench to design and evaluate user–machine interaction prototypes in the robotic area. However, there is still a lack of support for interoperability features and a need for high-level tools to program the interaction of a robot with other devices. In this paper, we introduce Legodroid, a new Java library enabling cross-programming of LEGO Mindstorms robots through Android smartphones that exploits their combined computational and sensorial capabilities in a seamless way. The library provides a number of type-driven coding patterns for interacting with sensors and motors. In this way, the robustness of the software managing the robot's sensors dramatically improves.

Article
Evaluating the Impact of a Two-Stage Multivariate Data Cleansing Approach to Improve the Performance of Machine Learning Classifiers: A Case Study in Human Activity Recognition
Sensors 2020, 20(7), 1858; https://doi.org/10.3390/s20071858 - 27 Mar 2020
Cited by 4 | Viewed by 1145
Abstract
Human activity recognition (HAR) is a popular field of study. The outcomes of projects in this area have the potential to impact the quality of life of people with conditions such as dementia. HAR is focused primarily on applying machine learning classifiers to data from low-level sensors such as accelerometers. The performance of these classifiers can be improved through an adequate training process. To improve the training process, multivariate outlier detection was used to raise the quality of the data in the training set and, subsequently, the performance of the classifier. The impact of the technique was evaluated with KNN and random forest (RF) classifiers. In the case of KNN, the performance of the classifier improved from 55.9% to 63.59%.
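The cleansing idea above can be sketched with a crude stand-in for multivariate outlier detection: drop training samples whose feature vector lies far from the per-feature mean in standard-deviation units. The study used proper multivariate methods; this simplified filter (threshold and data invented) only shows where cleansing sits in the pipeline, before classifier training.

```python
# Simplified outlier filter: keep a sample only if every feature is within
# `threshold` standard deviations of that feature's mean. A stand-in for the
# multivariate detection used in the paper; threshold and data are assumptions.

import statistics

def remove_outliers(samples, threshold=1.5):
    """Return the samples that pass the per-feature z-score test."""
    n_features = len(samples[0])
    means = [statistics.mean(s[i] for s in samples) for i in range(n_features)]
    stds = [statistics.pstdev(s[i] for s in samples) or 1.0  # guard zero spread
            for i in range(n_features)]
    return [
        s for s in samples
        if all(abs(s[i] - means[i]) / stds[i] <= threshold
               for i in range(n_features))
    ]

# A cluster of accelerometer-like readings plus one obvious outlier.
data = [(1.0, 0.9), (1.1, 1.0), (0.9, 1.1), (1.0, 1.0), (9.0, 9.0)]
cleaned = remove_outliers(data)
```

The cleaned set would then be handed to the classifier (KNN or RF in the paper) for training.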

Article
STDD: Short-Term Depression Detection with Passive Sensing
Sensors 2020, 20(5), 1396; https://doi.org/10.3390/s20051396 - 04 Mar 2020
Cited by 14 | Viewed by 2228
Abstract
It has recently been reported that identifying the depression severity of a person requires the involvement of mental health professionals who use traditional methods like interviews and self-reports, which costs time and money. In this work we made solid contributions to short-term depression detection using everyday mobile devices. To improve the accuracy of depression detection, we extracted five factors influencing depression (symptom clusters) from the DSM-5 (Diagnostic and Statistical Manual of Mental Disorders), namely, physical activity, mood, social activity, sleep, and food intake, and extracted features related to each symptom cluster from mobile devices' sensors. We conducted an experiment in which we recruited 20 participants from four different depression groups based on the PHQ-9 (the Patient Health Questionnaire-9, the 9-item depression module from the full PHQ), namely normal, mildly depressed, moderately depressed, and severely depressed, and built a machine learning model for automatic classification of the depression category in a short period of time. To achieve the aim of short-term depression classification, we developed the Short-Term Depression Detector (STDD), a framework consisting of a smartphone and a wearable device that constantly reported metrics (sensor data and self-reports) to perform depression group classification. The results of this pilot study revealed high correlations between participants' Ecological Momentary Assessment (EMA) self-reports and passive sensing (sensor data) in physical activity, mood, and sleep levels; STDD demonstrated the feasibility of group classification with an accuracy of 96.00% (standard deviation (SD) = 2.76).
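The four depression groups named above are derived from PHQ-9 totals. A sketch of that grouping using the questionnaire's conventional severity bands (the paper's exact cutoffs are not stated in the abstract; these are the widely used PHQ-9 ranges, with the two highest bands merged into one "severely depressed" group to match the four categories):

```python
# Map a PHQ-9 total score to one of the four groups used in the study.
# Cutoffs follow the standard PHQ-9 severity bands; the paper's exact
# grouping may differ (assumption flagged in the lead-in).

def phq9_group(total_score):
    """Map a PHQ-9 total (0-27) to a depression group."""
    if not 0 <= total_score <= 27:
        raise ValueError("PHQ-9 totals range from 0 to 27")
    if total_score <= 4:
        return "normal"
    if total_score <= 9:
        return "mildly depressed"
    if total_score <= 14:
        return "moderately depressed"
    return "severely depressed"
```

Such a mapping turns the self-report questionnaire into the class labels that the STDD classifier is trained against.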

Article
Biometric Identification from Human Aesthetic Preferences
Sensors 2020, 20(4), 1133; https://doi.org/10.3390/s20041133 - 19 Feb 2020
Cited by 3 | Viewed by 954
Abstract
In recent years, human–machine interactions have come to encompass many avenues of life, ranging from personal communications to professional activities. This trend has allowed person identification based on behavior rather than physical traits to emerge as a growing research domain, which spans areas such as online education, e-commerce, e-communication, and biometric security. The expression of opinions is an example of online behavior that is commonly shared through the liking of online images. Visual aesthetic is a behavioral biometric that draws on a person's sense of fondness for images. The identification of individuals using their visual aesthetic values as discriminatory features is an emerging domain of research. This paper introduces a novel method for aesthetic feature dimensionality reduction using gene expression programming. The proposed system is capable of using a tree-based genetic approach for feature recombination. Reducing feature dimensionality improves classifier accuracy, reduces computation runtime, and minimizes required storage. The results obtained on a dataset of 200 Flickr users evaluating 40,000 images demonstrate 95% accuracy of identity recognition based solely on users' aesthetic preferences.

Article
Assessment of the Potential of Wrist-Worn Wearable Sensors for Driver Drowsiness Detection
Sensors 2020, 20(4), 1029; https://doi.org/10.3390/s20041029 - 14 Feb 2020
Cited by 14 | Viewed by 2804
Abstract
Drowsy driving imposes a high safety risk. Current systems often use driving behavior parameters for driver drowsiness detection. The continuing automation of driving reduces the availability of these parameters, therefore reducing the scope of such methods. Techniques that include physiological measurements, in particular, seem to be a promising alternative. However, in a dynamic environment such as driving, only non- or minimally intrusive methods are accepted, and vibrations from the roadbed could degrade the sensor signals. This work contributes to driver drowsiness detection with a machine learning approach applied solely to physiological data collected from a non-intrusive, retrofittable system in the form of a wrist-worn wearable sensor. To check accuracy and feasibility, results are compared with reference data from a medical-grade ECG device. A user study with 30 participants in a high-fidelity driving simulator was conducted. Several machine learning algorithms for binary classification were applied in user-dependent and user-independent tests. The results provide evidence that the non-intrusive setting achieves an accuracy similar to that of the medical-grade device, and high accuracies (>92%) could be achieved, especially in the user-dependent scenario. The proposed approach offers new possibilities for human–machine interaction in a car and especially for driver state monitoring in the field of automated driving.

Article
Towards Mixed-Initiative Human–Robot Interaction: Assessment of Discriminative Physiological and Behavioral Features for Performance Prediction
Sensors 2020, 20(1), 296; https://doi.org/10.3390/s20010296 - 05 Jan 2020
Cited by 7 | Viewed by 1484
Abstract
The design of human–robot interactions is a key challenge in optimizing operational performance. A promising approach is to consider mixed-initiative interactions in which the tasks and authority of the human and artificial agents are dynamically defined according to their current abilities. An important issue in the implementation of mixed-initiative systems is monitoring human performance to dynamically drive task allocation between the human and artificial agents (i.e., robots). We therefore designed an experimental scenario involving missions in which participants had to cooperate with a robot to fight fires while facing hazards. Two levels of robot automation (manual vs. autonomous) were randomly manipulated to assess their impact on the participants' performance across missions. Cardiac activity, eye-tracking, and participants' actions on the user interface were collected. The participants performed differently to an extent that we could identify high- and low-score mission groups that also exhibited different behavioral, cardiac and ocular patterns. More specifically, our findings indicated that the higher level of automation could be beneficial to low-scoring participants but detrimental to high-scoring ones, and vice versa. In addition, inter-subject single-trial classification results showed that the studied behavioral and physiological features were relevant for predicting mission performance. The highest average balanced accuracy (74%) was reached using the features extracted from all input devices. These results suggest that an adaptive HRI driving system aiming to maximize performance would be capable of analyzing such physiological and behavioral markers online to change the level of automation when relevant for the mission purpose.

Article
Adaptive Binarization of QR Code Images for Fast Automatic Sorting in Warehouse Systems
Sensors 2019, 19(24), 5466; https://doi.org/10.3390/s19245466 - 11 Dec 2019
Cited by 6 | Viewed by 1772
Abstract
As a fundamental element of the Internet of Things, the QR code has become increasingly crucial for connecting online and offline services. In e-commerce and logistics, we mainly focus on how to identify QR codes quickly and accurately. An adaptive binarization approach is proposed to solve the problem of uneven illumination in warehouse automatic sorting systems. Guided by cognitive modeling, we adaptively select the block window of the QR code for robust binarization under uneven illumination. The proposed method can effectively eliminate the impact of uneven illumination on QR codes whilst meeting the real-time needs of automatic warehouse sorting. Experimental results have demonstrated the superiority of the proposed approach when benchmarked against several state-of-the-art methods.
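The core idea of block-window adaptive binarization can be sketched briefly: split the image into windows and threshold each pixel against its own window's statistics, so unevenly lit regions are binarized locally rather than against one global threshold. This is a generic sketch of the technique, not the paper's cognitively guided window selection; window size, the mean-based threshold, and the test patch are all assumptions.

```python
# Block-based adaptive binarization sketch: each block is thresholded by its
# own mean, so a dimly lit QR region keeps its dark/light module contrast.

def adaptive_binarize(image, block=2, bias=0):
    """Binarize a grayscale image (list of rows of intensities) block by block."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            window = [image[y][x]
                      for y in range(by, min(by + block, h))
                      for x in range(bx, min(bx + block, w))]
            thresh = sum(window) / len(window) - bias
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    out[y][x] = 1 if image[y][x] > thresh else 0
    return out

# A 4x4 patch: brightly lit columns on the left, dimly lit on the right,
# both carrying the same alternating module pattern.
patch = [
    [200, 120, 90, 40],
    [120, 200, 40, 90],
    [200, 120, 90, 40],
    [120, 200, 40, 90],
]
binary = adaptive_binarize(patch, block=2)
```

A single global threshold (say 100) would flatten the dim right half to all zeros; the per-block threshold recovers the same module pattern in both halves.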

Article
A Deep Learning-Based End-to-End Composite System for Hand Detection and Gesture Recognition
Sensors 2019, 19(23), 5282; https://doi.org/10.3390/s19235282 - 30 Nov 2019
Cited by 13 | Viewed by 2071
Abstract
Recent research on hand detection and gesture recognition has attracted increasing interest due to its broad range of potential applications, such as human-computer interaction, sign language recognition, hand action analysis, driver hand behavior monitoring, and virtual reality. In recent years, several approaches have been proposed with the aim of developing a robust algorithm which functions in complex and cluttered environments. Although several researchers have addressed this challenging problem, a robust system is still elusive. Therefore, we propose a deep learning-based architecture to jointly detect and classify hand gestures. In the proposed architecture, the whole image is passed through a one-stage dense object detector to extract hand regions, which, in turn, pass through a lightweight convolutional neural network (CNN) for hand gesture recognition. To evaluate our approach, we conducted extensive experiments on four publicly available datasets for hand detection, including the Oxford, 5-signers, EgoHands, and Indian classical dance (ICD) datasets, along with two hand gesture datasets with different gesture vocabularies for hand gesture recognition, namely, the LaRED and TinyHands datasets. Here, experimental results demonstrate that the proposed architecture is efficient and robust. In addition, it outperforms other approaches in both the hand detection and gesture classification tasks.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
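The two-stage pipeline the abstract describes (a one-stage dense detector proposing hand regions, then a lightweight classifier labeling each crop) can be sketched in a few lines. The detector and classifier below are stand-in stubs, not the paper's networks; the box format, the `open_palm` label, and the score combination are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, width, height)

@dataclass
class Detection:
    box: Box
    gesture: str
    score: float

def recognize_gestures(
    image,
    detect_hands: Callable[[object], List[Tuple[Box, float]]],
    classify_crop: Callable[[object, Box], Tuple[str, float]],
    min_score: float = 0.5,
) -> List[Detection]:
    """Stage 1: detect hand regions; stage 2: classify each confident crop."""
    results = []
    for box, det_score in detect_hands(image):
        if det_score < min_score:  # drop low-confidence hand proposals
            continue
        label, cls_score = classify_crop(image, box)
        results.append(Detection(box, label, det_score * cls_score))
    return results

# Toy stubs standing in for the dense detector and the CNN classifier
detections = recognize_gestures(
    image=None,
    detect_hands=lambda img: [((10, 20, 64, 64), 0.9), ((0, 0, 8, 8), 0.2)],
    classify_crop=lambda img, box: ("open_palm", 0.8),
)
```

The low-confidence proposal is filtered out, so only one detection survives with a combined detection-times-classification score.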

Article
A Gesture Recognition Algorithm for Hand-Assisted Laparoscopic Surgery
Sensors 2019, 19(23), 5182; https://doi.org/10.3390/s19235182 - 26 Nov 2019
Cited by 3 | Viewed by 1975
Abstract
Minimally invasive surgery (MIS) techniques are growing in quantity and complexity to cover a wider range of interventions. More specifically, hand-assisted laparoscopic surgery (HALS) involves the use of one surgeon’s hand inside the patient whereas the other one manages a single laparoscopic tool. In this scenario, those surgical procedures performed with an additional tool require the aid of an assistant. Furthermore, in the case of a human–robot assistant pairing, fluid communication is mandatory. This human–machine interaction must combine both explicit orders and implicit information from the surgical gestures. In this context, this paper focuses on the development of a hand gesture recognition system for HALS. The recognition is based on a hidden Markov model (HMM) algorithm with an improved automated training step, which can also learn during the online surgical procedure by means of a reinforcement learning process.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
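The HMM-based recognition described above can be illustrated with the standard forward algorithm: each gesture gets its own HMM, and a sequence of quantized observations is assigned to the model with the highest likelihood. All parameters, gesture names, and the binary observation alphabet below are invented for illustration; the paper's automated training and reinforcement-learning steps are not shown.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under one discrete HMM."""
    n = len(start)
    # alpha[j]: joint probability of the observed prefix and being in state j
    alpha = [start[j] * emit[j][obs[0]] for j in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return math.log(sum(alpha))

def classify(obs, models):
    """Assign the sequence to the gesture HMM with the highest likelihood."""
    return max(models, key=lambda name: forward_log_likelihood(obs, *models[name]))

# Two invented 2-state models over a binary observation alphabet:
# "grasp" mostly emits symbol 0, "release" mostly emits symbol 1.
# Each entry is (start probabilities, transition matrix, emission matrix).
models = {
    "grasp": ([0.6, 0.4], [[0.7, 0.3], [0.3, 0.7]], [[0.9, 0.1], [0.8, 0.2]]),
    "release": ([0.6, 0.4], [[0.7, 0.3], [0.3, 0.7]], [[0.1, 0.9], [0.2, 0.8]]),
}
```

A sequence dominated by symbol 0 scores highest under the "grasp" model, and one dominated by symbol 1 under "release".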

Review

Review
Unsupervised Human Activity Recognition Using the Clustering Approach: A Review
Sensors 2020, 20(9), 2702; https://doi.org/10.3390/s20092702 - 09 May 2020
Cited by 9 | Viewed by 1976
Abstract
Many applications have emerged from the combination of software development and hardware use known as the Internet of Things. One of the most important application areas of this technology is health care. New applications appear daily that aim to improve quality of life and the at-home treatment of patients suffering from different pathologies. This has given rise to a line of work of great interest, focused on the study and analysis of activities of daily living and on the use of different data-analysis techniques to identify and help manage this type of patient. This article presents a systematic review of the literature on clustering, one of the most widely used techniques for the analysis of unsupervised data, applied to activities of daily living, together with a description of key variables such as year of publication, type of article, most-used algorithms, types of dataset used, and metrics implemented. These data will allow the reader to locate recent results of the application of this technique to a particular area of knowledge.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
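As a minimal illustration of the clustering approach the review surveys, the textbook k-means algorithm below groups toy accelerometer-style feature vectors into activity clusters without labels. The data, the deterministic initialization, and the two-feature format are assumptions for the sketch, not taken from any reviewed paper.

```python
def kmeans(points, k, iters=20):
    """Plain k-means on tuples of floats; returns (centers, clusters)."""
    # Crude deterministic init: evenly spaced points from the input
    step = max(1, len(points) // k)
    centers = points[::step][:k]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k),
                      key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[idx].append(p)
        # Recompute each center as the mean of its cluster (keep old if empty)
        centers = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers, clusters

# Toy data: low-movement ("resting"-like) vs high-movement ("walking"-like)
data = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (4.9, 5.2), (5.2, 4.8)]
centers, clusters = kmeans(data, k=2)
```

On this toy data the two activity groups separate cleanly into clusters of three points each.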

Other

Letter
Guitar Chord Sensing and Recognition Using Multi-Task Learning and Physical Data Augmentation with Robotics
Sensors 2020, 20(21), 6077; https://doi.org/10.3390/s20216077 - 26 Oct 2020
Viewed by 862
Abstract
In recent years, many researchers have shown increasing interest in music information retrieval (MIR) applications, with automatic chord recognition being one of the popular tasks. Many studies have demonstrated considerable improvement using deep learning-based models for automatic chord recognition. However, most existing models have focused on simple chord recognition, which classifies the root note with the major, minor, and seventh chords. Furthermore, in learning-based recognition, it is critical to collect large amounts of high-quality training data to achieve the desired performance. In this paper, we present a multi-task learning (MTL) model for a guitar chord recognition task, where the model is trained using a relatively large-vocabulary guitar chord dataset. To address data scarcity, a physical data augmentation method that records the chord dataset directly from a robotic performer is employed. Deep learning-based MTL is proposed to improve the performance of automatic chord recognition with the proposed physically augmented dataset. The proposed MTL model is compared with four baseline models and its corresponding single-task learning model using two types of datasets: a human dataset and the human dataset combined with the augmented dataset. The proposed methods outperform the baseline models, and most scores of the proposed multi-task learning model are better than those of the corresponding single-task learning model. The experimental results demonstrate that physical data augmentation is an effective way to increase dataset size for guitar chord recognition tasks.
(This article belongs to the Special Issue Human-Machine Interaction and Sensors)
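The multi-task idea can be sketched as a shared feature extractor feeding two task heads, for example one for the root note and one for the chord quality, trained with a weighted sum of per-task losses. Everything below (shapes, random weights, the 12-root/4-quality task split, the loss weighting) is an invented illustration, not the paper's network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Shared trunk and two task heads (random weights, illustration only)
W_shared = rng.normal(size=(32, 16))
W_root = rng.normal(size=(16, 12))    # 12 possible root notes
W_quality = rng.normal(size=(16, 4))  # e.g. major / minor / seventh / other

def forward(features):
    h = relu(features @ W_shared)     # shared representation for both tasks
    return h @ W_root, h @ W_quality  # one logit vector per task

def mtl_loss(root_logits, quality_logits, root_target, quality_target, w=0.5):
    """Weighted sum of per-task cross-entropies (softmax over logits)."""
    def xent(logits, target):
        z = np.exp(logits - logits.max())  # shift for numerical stability
        return -np.log(z[target] / z.sum())
    return w * xent(root_logits, root_target) + (1 - w) * xent(quality_logits, quality_target)

x = rng.normal(size=32)  # stand-in for extracted audio features
root_logits, quality_logits = forward(x)
loss = mtl_loss(root_logits, quality_logits, root_target=0, quality_target=1)
```

Sharing the trunk lets gradients from both tasks shape one representation, which is the usual motivation for MTL over two separate single-task models.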
