
Sensor Systems for Gesture Recognition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (30 September 2020) | Viewed by 73999

Printed Edition Available!
A printed edition of this Special Issue is available.

Special Issue Editors


Prof. Dr. Giovanni Saggio
Guest Editor
Department of Electronic Engineering, University of Rome Tor Vergata, 00133 Rome, Italy
Interests: wearable sensors; brain–computer interface; motion tracking; gait analysis; sensory glove; biotechnologies

Dr. Marco E. Benalcázar
Guest Editor
Department of Informatics and Computer Science, Escuela Politécnica Nacional, Ladrón de Guevara E11-253, Quito, Ecuador
Interests: EMG sensors; gesture recognition; human–machine interfaces; supervised learning; reinforcement learning; computer perception

Special Issue Information

Dear Colleagues,

Gesture recognition (GR) aims at interpreting human gestures by means of mathematical algorithms. Mature GR will find widespread application in a number of different fields, with impacts that can meaningfully improve our quality of life.

In the physical world, GR can interpret communicative gestures at a distance or can “translate” sign language into written sentences or a synthesized voice. In virtual reality (VR) and augmented reality (AR) worlds, GR enables navigation and interaction, as occurs, for instance, with the user interface (UI) of a smart TV controlled by hand gestures.

The possible applications are countless, and we can mention just a few. In the health field, GR makes it possible to augment the motion capabilities of disabled people or to support surgeons in surgical settings. In gaming, GR frees gamers from input devices such as keyboards, mice, and joysticks. In the automotive industry, GR allows drivers to control car appliances (see the BMW 7 Series). In cinematography, GR drives computer-generated effects and creatures. In everyday life, GR is the means of interacting with smartphone apps (see uSens, Inc. and Gestigon GmbH, for example). In human–robot interaction, GR keeps the operator in safe conditions while his/her gestures become the remote commands for tele-operating a robot. GR even allows music creation, converting human movements into sounds.

GR is achieved through (1) data acquisition, (2) identification of patterns, and (3) interpretation (each of these phases can consist of different stages).

Data can be acquired by means of sensor systems based on different measurement principles, such as mechanical, magnetic, optical, acoustic, or inertial principles, or by hybrid sensors. Within this frame, optical technologies are historically the most explored (since 1870, when animal movements were analysed via sequences of pictures) and represent the current state of the art. However, optical technologies are expensive and require a dedicated room and skilled personnel. Therefore, non-optical technologies, in particular those based on wearable sensors, are increasingly gaining importance.

To achieve GR, different methods can be adopted for data segmentation, feature extraction, and classification. These methods depend strongly on the type of data (according to the adopted sensor system) and the type of gestures to be recognized.

The (supervised or unsupervised) recognition of patterns in data, i.e., regularities, arrangements, and characteristics, can be approached by machine learning or heuristics and can be linked to artificial intelligence (AI).
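
The pipeline sketched above can be made concrete in a few lines. Below is a minimal sketch, assuming a synthetic three-axis accelerometer stream; the windowing parameters, features, and classifier are illustrative choices, not ones prescribed by this call:

```python
# Illustrative sketch of the three GR phases on a synthetic accelerometer
# stream: (1) sliding-window segmentation, (2) feature extraction, and
# (3) supervised classification. Sensor type and features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def segment(stream, window=64, step=32):
    """Slice a (samples, channels) stream into overlapping windows."""
    return np.stack([stream[i:i + window]
                     for i in range(0, len(stream) - window + 1, step)])

def extract_features(windows):
    """Simple time-domain features per channel: mean, std, peak-to-peak."""
    return np.concatenate([windows.mean(axis=1),
                           windows.std(axis=1),
                           windows.max(axis=1) - windows.min(axis=1)], axis=1)

# Synthetic 3-axis data: class 0 = rest (noise), class 1 = wave (sinusoid).
t = np.arange(6400) / 100.0
rest = rng.normal(0, 0.1, (6400, 3))
wave = np.sin(2 * np.pi * 2 * t)[:, None] + rng.normal(0, 0.1, (6400, 3))

X = extract_features(np.concatenate([segment(rest), segment(wave)]))
y = np.array([0] * len(segment(rest)) + [1] * len(segment(wave)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```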

In sum, sensor systems for gesture recognition encompass an ensemble of topics that can be addressed singly or jointly and that represent a great opportunity for further developments, with widespread potential applications.

This call for papers invites technical contributions to a Sensors Special Issue providing an up-to-date overview of “Sensor Systems for Gesture Recognition”. This Special Issue will deal with theory, solutions, and innovative applications. Potential topics include, but are not limited to:

  • Sensor systems
  • Gesture recognition
  • Gesture recognition technologies
  • Gesture extraction methods
  • Gesture detection sensors
  • Wearable sensors
  • Human tracking
  • Human postures and movements
  • Motion detection and tracking
  • Hand gesture recognition
  • Sign language recognition
  • Gait analysis
  • Remote controlling
  • Pattern recognition for gesture recognition
  • Machine learning for gesture recognition
  • Applications of gesture recognition
  • Algorithms for gesture recognition

Prof. Dr. Giovanni Saggio
Dr. Marco E. Benalcázar
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensor systems 
  • Wearable sensors 
  • Video-based gesture recognition 
  • Motion tracking 
  • Motion detection 
  • Pattern recognition 
  • Hand gestures 
  • Gait analysis

Published Papers (17 papers)


Research


21 pages, 2983 KiB  
Article
A Gesture Elicitation Study of Nose-Based Gestures
by Jorge-Luis Pérez-Medina, Santiago Villarreal and Jean Vanderdonckt
Sensors 2020, 20(24), 7118; https://doi.org/10.3390/s20247118 - 11 Dec 2020
Cited by 11 | Viewed by 4944
Abstract
Presently, miniaturized sensors can be embedded in any small-size wearable to recognize movements on some parts of the human body. For example, an electrooculography-based sensor in smart glasses recognizes finger movements on the nose. To explore the interaction capabilities, this paper conducts a gesture elicitation study as a between-subjects experiment involving one group of 12 females and one group of 12 males, expressing their preferred nose-based gestures on 19 Internet-of-Things tasks. Based on classification criteria, the 912 elicited gestures are clustered into 53 unique gestures resulting in 23 categories, to form a taxonomy and a consensus set of 38 final gestures, providing researchers and practitioners with a larger gesture base and six design guidelines. To test whether the measurement method impacts these results, the agreement scores and rates, computed for determining the gestures most agreed upon by participants, are compared with the Condorcet and the de Borda count methods to observe that the results remain consistent, sometimes with a slightly different order. To test whether the results are sensitive to gender, inferential statistics suggest that no significant difference exists between males and females for agreement scores and rates.
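
As a methodological aside, the agreement rate used in elicitation studies of this kind is commonly computed with the pairwise formula of Vatavu and Wobbrock (CHI 2015); the sketch below illustrates that computation on hypothetical proposals (the gesture names are invented for illustration, not taken from the paper):

```python
# Hedged sketch: agreement rate AR(r) for one referent, following the
# pairwise definition common in gesture elicitation studies:
# AR(r) = sum_i |P_i|(|P_i| - 1) / (|P|(|P| - 1)),
# where P is the multiset of proposals and P_i its groups of identical gestures.
from collections import Counter

def agreement_rate(proposals):
    n = len(proposals)
    if n < 2:
        return 1.0
    groups = Counter(proposals).values()
    return sum(k * (k - 1) for k in groups) / (n * (n - 1))

# Hypothetical example: 12 participants proposing gestures for one task.
proposals = ["tap", "tap", "tap", "swipe", "swipe", "tap",
             "rub", "tap", "swipe", "tap", "tap", "rub"]
print(f"AR = {agreement_rate(proposals):.3f}")
```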

23 pages, 1820 KiB  
Article
Face Pose Alignment with Event Cameras
by Arman Savran and Chiara Bartolozzi
Sensors 2020, 20(24), 7079; https://doi.org/10.3390/s20247079 - 10 Dec 2020
Cited by 7 | Viewed by 2558
Abstract
The event camera (EC) is emerging as a bio-inspired sensor that can be an alternative or complementary vision modality, with the benefits of energy efficiency, high dynamic range, and high temporal resolution coupled with activity-dependent sparse sensing. In this study, we investigate with ECs the problem of face pose alignment, which is an essential pre-processing stage for facial processing pipelines. EC-based alignment can unlock all these benefits in facial applications, especially where motion and dynamics carry the most relevant information, owing to the sensing of temporal changes as events. We specifically aim at efficient processing by developing a coarse alignment method to handle large pose variations in facial applications. For this purpose, we prepared a dataset of extreme head rotations with varying motion intensity, annotated by multiple humans. We propose a motion-detection-based alignment approach that generates activity-dependent pose-events and prevents unnecessary computations in the absence of pose change. The alignment is realized by cascaded regression of extremely randomized trees. Since EC sensors perform temporal differentiation, we characterize the performance of the alignment in terms of different levels of head movement speed and face localization uncertainty, as well as face resolution and predictor complexity. Our method obtained 2.7% alignment failure on average, whereas annotator disagreement was 1%. The promising coarse alignment performance on EC sensor data, together with a comprehensive analysis, demonstrates the potential of ECs in facial applications.
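
To illustrate the cascaded-regression idea named in the abstract, here is a hedged sketch using scikit-learn's extremely randomized trees on synthetic stand-in features; the authors' event-based features and pose parametrization are not reproduced:

```python
# Hedged sketch of cascaded regression with extremely randomized trees:
# each stage refines the current pose estimate by regressing a residual
# update from features plus the current estimate. Data here are synthetic
# stand-ins, and the cascade is evaluated on its own training set for brevity.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
n, d = 2000, 20
features = rng.normal(size=(n, d))                      # stand-in features
true_pose = features[:, :2] @ rng.normal(size=(2, 2))   # (x, y) pose targets

pose = np.zeros_like(true_pose)                         # coarse initial estimate
stages = []
for _ in range(3):                                      # three cascade stages
    residual = true_pose - pose
    stage = ExtraTreesRegressor(n_estimators=100, random_state=0)
    stage.fit(np.hstack([features, pose]), residual)
    pose = pose + stage.predict(np.hstack([features, pose]))
    stages.append(stage)

print("mean abs error after cascade:", np.abs(true_pose - pose).mean())
```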

30 pages, 9641 KiB  
Article
Adaptive Rehabilitation Bots in Serious Games
by Imad Afyouni, Abdullah Murad and Anas Einea
Sensors 2020, 20(24), 7037; https://doi.org/10.3390/s20247037 - 9 Dec 2020
Cited by 11 | Viewed by 3663
Abstract
In recent years, we have witnessed a growing adoption of serious games in telerehabilitation, taking advantage of advanced multimedia technologies such as motion capture and virtual reality devices. Current serious game solutions for telerehabilitation suffer from a lack of personalization and adaptiveness to patients’ needs and performance. This paper introduces “RehaBot”, a framework for the adaptive generation of personalized serious games in the context of remote rehabilitation, using 3D motion tracking and virtual reality environments. A personalized and versatile gaming platform with embedded virtual assistants, called “Rehab bots”, is created. Utilizing these rehab bots, all workout session scenes include a guide with various sets of motions to direct patients towards performing the prescribed exercises correctly. Furthermore, the rehab bots employ a robust technique to adjust the workout difficulty level in real time to match the patients’ performance. This technique correlates and matches the patterns of the precalculated motions with patients’ motions to produce a highly engaging gamified workout experience. Moreover, multimodal insights are passed to the users, pointing out the joints that did not perform as anticipated, along with suggestions to improve the current performance. A clinical study was conducted on patients dealing with chronic neck pain to prove the usability and effectiveness of our adjunctive online physiotherapy solution. Ten participants used the serious gaming platform, while four participants performed the traditional procedure with an active program for neck pain relief, for two weeks (10 min, 10 sessions/2 weeks). Feasibility and user experience measures were collected, and the results of the experiments show that patients found our game-based adaptive solution engaging and effective, and most of them could achieve high accuracy in performing the personalized prescribed therapies.
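
The difficulty-adjustment idea, correlating precalculated motion patterns with the patient's motions, can be sketched as follows; the similarity measure and thresholds are illustrative assumptions, not RehaBot's actual rule:

```python
# Hedged sketch of pattern matching for adaptive difficulty: correlate a
# prescribed reference trajectory with the patient's motion and step the
# difficulty level up or down. Thresholds and the rule are assumptions.
import numpy as np

def similarity(reference, performed):
    """Pearson correlation between two joint-angle trajectories."""
    return float(np.corrcoef(reference, performed)[0, 1])

def adapt_difficulty(level, score, raise_at=0.85, lower_at=0.5):
    if score >= raise_at:
        return min(level + 1, 10)   # patient tracks well: harder exercise
    if score <= lower_at:
        return max(level - 1, 1)    # patient struggles: easier exercise
    return level

t = np.linspace(0, 2 * np.pi, 200)
reference = np.sin(t)                                       # precalculated motion
performed = np.sin(t) + np.random.normal(0, 0.2, t.size)    # patient motion

score = similarity(reference, performed)
print("similarity:", round(score, 3), "-> new level:", adapt_difficulty(5, score))
```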

18 pages, 4059 KiB  
Article
A Novel GAN-Based Synthesis Method for In-Air Handwritten Words
by Xin Zhang and Yang Xue
Sensors 2020, 20(22), 6548; https://doi.org/10.3390/s20226548 - 16 Nov 2020
Cited by 2 | Viewed by 2110
Abstract
In recent years, with the miniaturization and high energy efficiency of MEMS (micro-electro-mechanical systems), in-air handwriting technology based on inertial sensors has come to the fore. Most previous works have focused on character-level in-air handwriting recognition; in contrast, few works address word-level in-air handwriting tasks. In the field of word-level recognition, researchers face the problems of insufficient data and poor generalization of recognition methods. On one hand, training deep neural networks usually requires a particularly large dataset, but collecting data takes a lot of time and money. On the other hand, a deep recognition network trained on a small dataset can hardly recognize samples whose labels do not appear in the training set. To address these problems, we propose a two-stage synthesis method for in-air handwritten words. The proposed method includes a splicing module guided by an additional corpus and a generating module trained by adversarial learning. We carefully designed the proposed network so that it can handle word sample inputs of arbitrary length and pay more attention to the details of the samples. We designed multiple sets of experiments on a public dataset, and the experimental results demonstrate the success of the proposed method. What is impressive is that, with the help of the air-writing word synthesizer, the recognition model learns the context information (combination information of characters) of a word. In this way, it can recognize words that never appeared in the training process. In this paper, the recognition model trained on synthetic data achieves a word-level recognition accuracy of 62.3% on the public dataset. Compared with the model trained using only the public dataset, the word-level accuracy is improved by 62%. Furthermore, the proposed method can synthesize realistic samples from only a limited number of in-air handwritten character and word samples, largely solving the problem of insufficient data. In the future, mathematically modeling the strokes between characters in words may help us find a better way to splice character samples. In addition, we will apply our method to various datasets and improve the splicing module and generating module for different tasks.
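
For readers unfamiliar with adversarial training of signal generators, the toy sketch below shows a generator/discriminator loop on fixed-length 1D signals; the paper's actual model handles variable-length words and a corpus-guided splicing module, which this sketch does not attempt:

```python
# Toy sketch of adversarial training for fixed-length 1D "trajectory"
# signals. Architecture, sizes, and data are illustrative assumptions.
import torch
import torch.nn as nn

SIG_LEN, NOISE_DIM = 64, 16
G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(),
                  nn.Linear(128, SIG_LEN), nn.Tanh())
D = nn.Sequential(nn.Linear(SIG_LEN, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

t = torch.linspace(0, 6.283, SIG_LEN)
for step in range(200):
    real = torch.sin(t + torch.rand(32, 1) * 6.283)   # stand-in real strokes
    fake = G(torch.randn(32, NOISE_DIM))

    # Discriminator step: push real toward 1, generated toward 0.
    loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: make the discriminator call fakes real.
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print("final D/G losses:", loss_d.item(), loss_g.item())
```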

20 pages, 2526 KiB  
Article
Efficient Upper Limb Position Estimation Based on Angular Displacement Sensors for Wearable Devices
by Aldo-Francisco Contreras-González, Manuel Ferre, Miguel Ángel Sánchez-Urán, Francisco Javier Sáez-Sáez and Fernando Blaya Haro
Sensors 2020, 20(22), 6452; https://doi.org/10.3390/s20226452 - 12 Nov 2020
Cited by 1 | Viewed by 2645
Abstract
Motion tracking techniques have been extensively studied in recent years. However, capturing movements of the upper limbs is a challenging task. This document presents the estimation of arm orientation and elbow and wrist position using wearable flexible sensors (WFSs). A study was developed to obtain the highest range of motion (ROM) of the shoulder with as few sensors as possible, and a method for estimating arm length together with a calibration procedure were proposed. Performance was verified by comparing measurements of the shoulder joint angles obtained from commercial two-axis soft angular displacement sensors (sADS) from Bend Labs and from the ground truth system (GTS) OptiTrack. The global root-mean-square error (RMSE) for the shoulder angle is 2.93 degrees and 37.5 mm for the position estimation of the wrist in cyclical movements; this RMSE was improved to 13.6 mm by implementing a gesture classifier.
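
To see why joint angles plus an estimated arm length determine wrist position, consider the forward kinematics of a simplified planar two-link arm; this is a didactic sketch with assumed segment lengths, not the authors' 3D estimation method:

```python
# Hedged sketch: forward kinematics of a 2-link arm in a single plane.
# Segment lengths and angles are illustrative assumptions.
import numpy as np

def wrist_position(shoulder_deg, elbow_deg, upper_arm=0.30, forearm=0.25):
    """Planar elbow/wrist positions (metres) from two joint angles."""
    s, e = np.radians(shoulder_deg), np.radians(elbow_deg)
    elbow = np.array([upper_arm * np.cos(s), upper_arm * np.sin(s)])
    wrist = elbow + np.array([forearm * np.cos(s + e), forearm * np.sin(s + e)])
    return elbow, wrist

elbow, wrist = wrist_position(45.0, 30.0)
print("elbow:", elbow.round(3), "wrist:", wrist.round(3))

# A rough sense of error propagation: arc length = radius * angle error.
angle_err = np.radians(2.93)                 # reported global shoulder RMSE
print("worst-case wrist offset ~", round((0.30 + 0.25) * angle_err, 4), "m")
```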

19 pages, 3178 KiB  
Article
British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language
by Jordan J. Bird, Anikó Ekárt and Diego R. Faria
Sensors 2020, 20(18), 5151; https://doi.org/10.3390/s20185151 - 9 Sep 2020
Cited by 50 | Viewed by 9034
Abstract
In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14%) and Leap Motion data classification (72.73%). With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive a best topology for each. The vision model is implemented by a Convolutional Neural Network and optimised Artificial Neural Network, and the Leap Motion model is implemented by an evolutionary search of Artificial Neural Network topology. Next, the two best networks are fused for synchronised processing, which yields a better overall result (94.44%), as complementary features are learnt in addition to the original task. The hypothesis is further supported by applying the three models to a set of completely unseen data, where the multimodality approach achieves the best results relative to the single-sensor methods. When transfer learning with the weights trained via British Sign Language, all three models outperform standard random weight distribution when classifying American Sign Language (ASL), and the best model overall for ASL classification was the transfer learning multimodality approach, which scored 82.55% accuracy.
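
One common way to realize late fusion is to combine the per-class probability vectors of the modality-specific networks; the sketch below shows simple weighted averaging, which is illustrative and not necessarily the exact fusion scheme used in the paper:

```python
# Hedged sketch of late fusion: weighted average of class probabilities
# from two independently trained modality models. Weights are assumptions.
import numpy as np

def late_fusion(p_vision, p_leap, w_vision=0.5):
    """Weighted average of per-class probabilities from two modalities."""
    fused = w_vision * p_vision + (1.0 - w_vision) * p_leap
    return fused / fused.sum(axis=-1, keepdims=True)

# Example: 3 classes, the modalities disagree, fusion resolves the call.
p_vision = np.array([0.60, 0.30, 0.10])   # vision network softmax output
p_leap = np.array([0.20, 0.70, 0.10])     # Leap Motion network softmax output
fused = late_fusion(p_vision, p_leap)
print("fused:", fused, "-> predicted class:", int(fused.argmax()))
```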

20 pages, 5307 KiB  
Article
TUHAD: Taekwondo Unit Technique Human Action Dataset with Key Frame-Based CNN Action Recognition
by Jinkue Lee and Hoeryong Jung
Sensors 2020, 20(17), 4871; https://doi.org/10.3390/s20174871 - 28 Aug 2020
Cited by 21 | Viewed by 4270
Abstract
In taekwondo, poomsae (i.e., form) competitions have no quantitative scoring standards, unlike gyeorugi (i.e., full-contact sparring) in the Olympics. Consequently, there are diverse fairness issues regarding poomsae evaluation, and the demand for quantitative evaluation tools is increasing. Action recognition is a promising approach, but the extreme and rapid actions of taekwondo complicate its application. This study established the Taekwondo Unit technique Human Action Dataset (TUHAD), which consists of multimodal image sequences of poomsae actions. TUHAD contains 1936 action samples of eight unit techniques performed by 10 experts and captured by two camera views. A key frame-based convolutional neural network architecture was developed for taekwondo action recognition, and its accuracy was validated for various input configurations. A correlation analysis of the input configuration and accuracy demonstrated that the proposed model achieved a recognition accuracy of up to 95.833% (lowest accuracy of 74.49%). This study contributes to the research and development of taekwondo action recognition.

14 pages, 873 KiB  
Article
An Acoustic Sensing Gesture Recognition System Design Based on a Hidden Markov Model
by Bruna Salles Moreira, Angelo Perkusich and Saulo O. D. Luiz
Sensors 2020, 20(17), 4803; https://doi.org/10.3390/s20174803 - 26 Aug 2020
Cited by 5 | Viewed by 2599
Abstract
Many human activities are tactile. Recognizing how a person touches an object or a surrounding surface is an active area of research, and it has generated keen interest within the interactive surface community. In this paper, we compare two machine learning techniques, namely Artificial Neural Networks (ANN) and Hidden Markov Models (HMM), as they are among the most common low-computational-cost techniques used to classify acoustic-based input. We employ a small and low-cost hardware design composed of a microphone, a stethoscope, a conditioning circuit, and a microcontroller. Together with an appropriate surface, we integrated these components into a passive gesture recognition input system for experimental evaluation. To perform the evaluation, we acquire the signals using a small microphone and send them through the microcontroller to MATLAB’s toolboxes to implement and evaluate the ANN and HMM models. We also present the hardware and software implementation and discuss the advantages and limitations of these techniques in gesture recognition while using a simple alphabet of three geometrical figures: circle, square, and triangle. The results validate the robustness of the HMM technique, which achieved a success rate of 90% with a shorter training time than the ANN.
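
A per-class HMM classifier of the kind compared here can be sketched with the third-party hmmlearn package: one GaussianHMM is trained per gesture, and a query sequence is assigned to the model with the highest log-likelihood. The features below are synthetic stand-ins for the paper's acoustic descriptors:

```python
# Hedged sketch: HMM-based gesture classification with hmmlearn, assuming
# one GaussianHMM per class and synthetic stand-in feature sequences.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)

def make_sequences(offset, n_seq=20, length=50, dim=4):
    return [rng.normal(offset, 1.0, (length, dim)) for _ in range(n_seq)]

train = {"circle": make_sequences(0.0),
         "square": make_sequences(2.0),
         "triangle": make_sequences(-2.0)}

models = {}
for label, seqs in train.items():
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
    models[label] = m.fit(X, lengths)

def classify(seq):
    # Pick the class whose HMM gives the query the highest log-likelihood.
    return max(models, key=lambda lab: models[lab].score(seq))

test = rng.normal(2.0, 1.0, (50, 4))        # unseen "square"-like signal
print("predicted gesture:", classify(test))
```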

13 pages, 3599 KiB  
Article
Development of Real-Time Hand Gesture Recognition for Tabletop Holographic Display Interaction Using Azure Kinect
by Chanhwi Lee, Jaehan Kim, Seoungbae Cho, Jinwoong Kim, Jisang Yoo and Soonchul Kwon
Sensors 2020, 20(16), 4566; https://doi.org/10.3390/s20164566 - 14 Aug 2020
Cited by 14 | Viewed by 5625
Abstract
The use of human gesturing to interact with devices such as computers or smartphones has presented several problems. This form of interaction relies on gesture interaction technology such as Leap Motion from Leap Motion, Inc., which enables humans to use hand gestures to interact with a computer. The technology has excellent hand detection performance, and even allows simple games to be played using gestures. Another example is the contactless use of a smartphone to take a photograph by simply folding and opening the palm. Research on interaction with other devices via hand gestures is in progress. Similarly, studies on creating a hologram display from objects that actually exist are also underway. We propose a hand gesture recognition system that can control a tabletop holographic display based on an actual object. The depth image obtained using the latest time-of-flight-based depth camera, the Azure Kinect, is used to obtain information about the hand and hand joints by means of the deep-learning model CrossInfoNet. Using this information, we developed a real-time system that defines and recognizes gestures indicating left, right, up, and down basic rotation, as well as zoom in, zoom out, and continuous rotation to the left and right.

16 pages, 3368 KiB  
Article
A Portable Fuzzy Driver Drowsiness Estimation System
by Alimed Celecia, Karla Figueiredo, Marley Vellasco and René González
Sensors 2020, 20(15), 4093; https://doi.org/10.3390/s20154093 - 23 Jul 2020
Cited by 20 | Viewed by 3374
Abstract
The adequate automatic detection of driver fatigue is a very valuable approach for the prevention of traffic accidents. Devices that can determine drowsiness conditions accurately must inherently be portable, adaptable to different vehicles and drivers, and robust to conditions such as illumination changes or visual occlusion. With the advent of a new generation of computationally powerful embedded systems such as the Raspberry Pi, a new category of real-time and low-cost portable drowsiness detection systems could become standard tools. Usually, the solutions proposed for this platform are limited to the definition of thresholds for some defined drowsiness indicator or the application of computationally expensive classification models, which limits their use in real time. In this research, we propose the development of a new portable, low-cost, accurate, and robust drowsiness recognition device. The proposed device combines complementary drowsiness measures derived from a temporal window of eye (PERCLOS, ECD) and mouth (AOT) states through a fuzzy inference system deployed in a Raspberry Pi with the capability of real-time response. The system provides three degrees of drowsiness (Low-Normal State, Medium-Drowsy State, and High-Severe Drowsiness State), and was assessed in terms of its computational performance and efficiency, resulting in a significant accuracy of 95.5% in state recognition that demonstrates the feasibility of the approach.
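
A Mamdani-style fuzzy inference step over such indicators can be sketched with the third-party scikit-fuzzy package; the universes, membership functions, and rules below are illustrative assumptions (ECD is omitted), not the authors' tuned system:

```python
# Hedged sketch of fuzzy inference over drowsiness indicators, assuming
# the scikit-fuzzy package. All memberships and rules are illustrative.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

perclos = ctrl.Antecedent(np.linspace(0, 1, 101), "perclos")   # eye closure
aot = ctrl.Antecedent(np.linspace(0, 1, 101), "aot")           # mouth open time
drowsy = ctrl.Consequent(np.linspace(0, 1, 101), "drowsiness")

perclos["low"] = fuzz.trimf(perclos.universe, [0.0, 0.0, 0.3])
perclos["high"] = fuzz.trimf(perclos.universe, [0.2, 1.0, 1.0])
aot["low"] = fuzz.trimf(aot.universe, [0.0, 0.0, 0.3])
aot["high"] = fuzz.trimf(aot.universe, [0.2, 1.0, 1.0])
drowsy["normal"] = fuzz.trimf(drowsy.universe, [0.0, 0.0, 0.4])
drowsy["drowsy"] = fuzz.trimf(drowsy.universe, [0.3, 0.5, 0.7])
drowsy["severe"] = fuzz.trimf(drowsy.universe, [0.6, 1.0, 1.0])

rules = [ctrl.Rule(perclos["low"] & aot["low"], drowsy["normal"]),
         ctrl.Rule(perclos["high"] | aot["high"], drowsy["drowsy"]),
         ctrl.Rule(perclos["high"] & aot["high"], drowsy["severe"])]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["perclos"] = 0.8
sim.input["aot"] = 0.7
sim.compute()
print("drowsiness score:", round(float(sim.output["drowsiness"]), 3))
```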

14 pages, 1074 KiB  
Article
Sign Language Recognition Using Wearable Electronics: Implementing k-Nearest Neighbors with Dynamic Time Warping and Convolutional Neural Network Algorithms
by Giovanni Saggio, Pietro Cavallo, Mariachiara Ricci, Vito Errico, Jonathan Zea and Marco E. Benalcázar
Sensors 2020, 20(14), 3879; https://doi.org/10.3390/s20143879 - 11 Jul 2020
Cited by 33 | Viewed by 3630
Abstract
We propose a sign language recognition system based on wearable electronics and two different classification algorithms. The wearable electronics consisted of a sensory glove and inertial measurement units to gather finger, wrist, and arm/forearm movements. The classifiers were k-Nearest Neighbors with Dynamic Time Warping (a non-parametric method) and Convolutional Neural Networks (a parametric method). Ten sign-words were considered from the Italian Sign Language: cose, grazie, and maestra, together with words with international meaning such as google, internet, jogging, pizza, television, twitter, and ciao. The signs were repeated one hundred times each by seven people, five males and two females, aged 29–54 y ± 10.34 (SD). The adopted classifiers performed with an accuracy of 96.6% ± 3.4 (SD) for the k-Nearest Neighbors plus Dynamic Time Warping and of 98.0% ± 2.0 (SD) for the Convolutional Neural Networks. Our system was among the most complete in terms of wearable electronics, and the classifiers performed at the top level in comparison with other relevant works reported in the literature.
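
The non-parametric branch, k-Nearest Neighbors with Dynamic Time Warping, can be sketched from scratch: a DTW distance over multichannel sequences plus a 1-NN decision. The toy templates below stand in for glove/IMU recordings:

```python
# Hedged sketch of 1-NN classification with a DTW distance. Toy data;
# not the authors' feature set or full k-NN configuration.
import numpy as np

def dtw(a, b):
    """DTW distance between two (time, channels) sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def knn_dtw_predict(query, templates):
    """templates: list of (sequence, label); returns nearest label (k=1)."""
    return min(templates, key=lambda t: dtw(query, t[0]))[1]

t = np.linspace(0, 1, 40)[:, None]
templates = [(np.sin(2 * np.pi * t), "ciao"),
             (np.cos(2 * np.pi * t), "grazie")]
query = np.sin(2 * np.pi * np.linspace(0, 1, 55))[:, None]  # time-warped sign
print("predicted sign-word:", knn_dtw_predict(query, templates))
```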

16 pages, 3048 KiB  
Article
Analysis of Control Characteristics between Dominant and Non-Dominant Hands by Transient Responses of Circular Tracking Movements in 3D Virtual Reality Space
by Wookhyun Park, Woong Choi, Hanjin Jo, Geonhui Lee and Jaehyo Kim
Sensors 2020, 20(12), 3477; https://doi.org/10.3390/s20123477 - 19 Jun 2020
Cited by 5 | Viewed by 2752
Abstract
Human movement is a controlled result of the sensory-motor system, and the motor control mechanism has been studied through diverse movements. The present study examined control characteristics of dominant and non-dominant hands by analyzing the transient responses of circular tracking movements in 3D virtual reality space. A visual target rotated in a circular trajectory at four different speeds, and 29 participants tracked the target with their hands. The position of each subject’s hand was measured, and the following three parameters were investigated: normalized initial peak velocity (IPV2), initial peak time (IPT2), and time delay (TD2). The IPV2 of both hands decreased as target speed increased. The results of IPT2 revealed that the dominant hand reached its peak velocity 0.0423 s earlier than the non-dominant hand, regardless of target speed. The TD2 of the hands diminished by 0.0218 s on average as target speed increased, but the dominant hand statistically revealed a 0.0417-s shorter TD2 than the non-dominant hand. Velocity-control performances from the IPV2 and IPT2 suggested that an identical internal model controls movement in both hands, whereas the dominant hand is likely more experienced than the non-dominant hand in reacting to neural commands, resulting in better reactivity in the movement task.

15 pages, 1147 KiB  
Article
Turning Characteristics of the More-Affected Side in Parkinson’s Disease Patients with Freezing of Gait
by Hwayoung Park, Changhong Youm, Myeounggon Lee, Byungjoo Noh and Sang-Myung Cheon
Sensors 2020, 20(11), 3098; https://doi.org/10.3390/s20113098 - 30 May 2020
Cited by 7 | Viewed by 2705
Abstract
This study investigated the turning characteristics of the more-affected limbs in Parkinson’s disease (PD) patients in comparison with that of a control group, and in PD patients with freezing of gait (FOG; freezers) in comparison with those without FOG (non-freezers) for 360° and 540° turning tasks at the maximum speed. A total of 12 freezers, 12 non-freezers, and 12 controls participated in this study. The PD patients showed significantly longer total durations, shorter inner and outer step lengths, and greater anterior–posterior (AP) root mean square (RMS) center of mass (COM) distances compared to those for the controls. The freezers showed significantly greater AP and medial-lateral (ML) RMS COM distances compared to those of non-freezers. The turning task toward the inner step of the more-affected side (IMA) in PD patients showed significantly greater step width, total steps, and AP and ML RMS COM distances than that toward the outer step of the more-affected side (OMA). The corresponding results for freezers revealed significantly higher total steps and shorter inner step length during the 540° turn toward the IMA than that toward the OMA. Therefore, PD patients and freezers exhibited greater turning difficulty in performing challenging turning tasks such as turning with an increased angle and speed and toward the more-affected side.

19 pages, 2971 KiB  
Article
Simultaneous Hand Gesture Classification and Finger Angle Estimation via a Novel Dual-Output Deep Learning Model
by Qinghua Gao, Shuo Jiang and Peter B. Shull
Sensors 2020, 20(10), 2972; https://doi.org/10.3390/s20102972 - 24 May 2020
Cited by 13 | Viewed by 4608
Abstract
Hand gesture classification and finger angle estimation are both critical for intuitive human–computer interaction. However, most approaches study them in isolation. We thus propose a dual-output deep learning model to enable simultaneous hand gesture classification and finger angle estimation. Data augmentation and deep learning were used to detect spatial-temporal features via a wristband with ten modified barometric sensors. Ten subjects performed experimental testing by flexing/extending each finger at the metacarpophalangeal joint while the proposed model was used to classify each hand gesture and estimate continuous finger angles simultaneously. A data glove was worn to record ground-truth finger angles. Overall hand gesture classification accuracy was 97.5% and finger angle estimation R² was 0.922, both of which were significantly higher than for existing shallow learning approaches used in isolation. The proposed method could be used in applications related to human–computer interaction and in control environments with both discrete and continuous variables.
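
A dual-output architecture of this general shape, a shared encoder with separate classification and regression heads trained with a summed loss, can be sketched in PyTorch; layer sizes and loss weighting are assumptions, not the paper's exact topology:

```python
# Hedged sketch of a dual-output network: one classification head
# (gesture label) and one regression head (finger angles) on a shared
# encoder, trained with a combined loss. Sizes are assumptions.
import torch
import torch.nn as nn

class DualOutputNet(nn.Module):
    def __init__(self, n_sensors=10, n_gestures=11, n_fingers=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_sensors, 64), nn.ReLU(),
                                     nn.Linear(64, 64), nn.ReLU())
        self.cls_head = nn.Linear(64, n_gestures)   # gesture logits
        self.reg_head = nn.Linear(64, n_fingers)    # finger angles (degrees)

    def forward(self, x):
        h = self.encoder(x)
        return self.cls_head(h), self.reg_head(h)

model = DualOutputNet()
x = torch.randn(8, 10)                     # batch of barometric readings
labels = torch.randint(0, 11, (8,))
angles = torch.rand(8, 5) * 90.0

logits, pred_angles = model(x)
loss = nn.functional.cross_entropy(logits, labels) \
     + nn.functional.mse_loss(pred_angles, angles)
loss.backward()                            # one combined training step
print("combined loss:", loss.item())
```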

17 pages, 8166 KiB  
Article
A Frame Detection Method for Real-Time Hand Gesture Recognition Systems Using CW-Radar
by Myoungseok Yu, Narae Kim, Yunho Jung and Seongjoo Lee
Sensors 2020, 20(8), 2321; https://doi.org/10.3390/s20082321 - 18 Apr 2020
Cited by 23 | Viewed by 6099
Abstract
In this paper, we describe a method for detecting the frames that can serve as hand gesture data when configuring a real-time hand gesture recognition system using continuous wave (CW) radar. Detecting valid frames raises the accuracy of gesture recognition, so valid-frame detection is essential in a real-time hand gesture recognition system using CW radar. Previous research on hand gesture recognition systems has not addressed the detection of valid frames. As the conventional method, we adopted R-wave detection on electrocardiograms (ECG); its detection probability was 85.04%, too low for use in a hand gesture recognition system. To improve accuracy, the proposed method consists of two stages. We measured the performance of each hand gesture detection method in terms of its detection probability and recognition probability, and by comparing the performance of each detection method, we arrived at an optimal one. The proposed method detects valid frames with an accuracy of 96.88%, 11.84% higher than the accuracy of the conventional method. In addition, the recognition probability of the proposed method was 94.21%, which was 3.71% lower than that of the ideal method.

Review


30 pages, 4963 KiB  
Review
Motion Detection Using Tactile Sensors Based on Pressure-Sensitive Transistor Arrays
by Jiuk Jang, Yoon Sun Jun, Hunkyu Seo, Moohyun Kim and Jang-Ung Park
Sensors 2020, 20(13), 3624; https://doi.org/10.3390/s20133624 - 28 Jun 2020
Cited by 33 | Viewed by 7548
Abstract
In recent years, to develop more spontaneous and instant interfaces between a system and users, technology has evolved toward designing efficient and simple gesture recognition (GR) techniques. As a tool for acquiring human motion, a tactile sensor system, which converts the human touch signal into a single datum and executes a command by translating a bundle of data into a text language or triggering a preset sequence as a haptic motion, has been developed. The tactile sensor aims to collect comprehensive data on various motions, from the touch of a fingertip to large body movements. The sensor devices have different characteristics that are important for target applications. Furthermore, devices can be fabricated using various principles, and include piezoelectric, capacitive, piezoresistive, and field-effect transistor types, depending on the parameters to be achieved. Here, we introduce tactile sensors consisting of field-effect transistors (FETs). GR requires a process involving the acquisition of a large amount of data in an array rather than a single sensor, suggesting the importance of fabricating a tactile sensor as an array. In this case, an FET-type pressure sensor can exploit the advantages of active-matrix sensor arrays that allow high-array uniformity, high spatial contrast, and facile integration with electrical circuitry. We envision that tactile sensors based on FETs will be beneficial for GR as well as future applications, and these sensors will provide substantial opportunities for next-generation motion sensing systems.

Other


16 pages, 701 KiB  
Letter
A Hierarchical Learning Approach for Human Action Recognition
by Nicolas Lemieux and Rita Noumeir
Sensors 2020, 20(17), 4946; https://doi.org/10.3390/s20174946 - 1 Sep 2020
Cited by 15 | Viewed by 2919
Abstract
In the domain of human action recognition, existing works mainly focus on using RGB, depth, skeleton and infrared data for analysis. While these methods have the benefit of being non-invasive, they can only be used within limited setups, are prone to issues such as occlusion and often need substantial computational resources. In this work, we address human action recognition through inertial sensor signals, which have a vast quantity of practical applications in fields such as sports analysis and human-machine interfaces. For that purpose, we propose a new learning framework built around a 1D-CNN architecture, which we validated by achieving very competitive results on the publicly available UTD-MHAD dataset. Moreover, the proposed method provides some answers to two of the greatest challenges currently faced by action recognition algorithms, which are (1) the recognition of high-level activities and (2) the reduction of their computational cost in order to make them accessible to embedded devices. Finally, this paper also investigates the tractability of the features throughout the proposed framework, both in time and duration, as we believe it could play an important role in future works in order to make the solution more intelligible, hardware-friendly and accurate.
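
A 1D-CNN over inertial windows of the general kind described can be sketched in PyTorch as follows; depths, kernel sizes, and the class count (27, matching UTD-MHAD's action set) are assumptions rather than the authors' exact design:

```python
# Hedged sketch of a 1D-CNN for inertial action recognition: stacked
# temporal convolutions over (channels, time) IMU windows, global
# pooling, and a linear classifier. Topology is an assumption.
import torch
import torch.nn as nn

class HARConv1D(nn.Module):
    def __init__(self, channels=6, n_classes=27):   # accel + gyro channels
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))                # global temporal pooling
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                           # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = HARConv1D()
window = torch.randn(4, 6, 128)       # 4 windows of accel+gyro, 128 samples
logits = model(window)
print("logits shape:", tuple(logits.shape))        # (4, 27)
```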
