Search Results (169)

Search Parameters:
Keywords = hand gestures classification

26 pages, 27333 KiB  
Article
Gest-SAR: A Gesture-Controlled Spatial AR System for Interactive Manual Assembly Guidance with Real-Time Operational Feedback
by Naimul Hasan and Bugra Alkan
Machines 2025, 13(8), 658; https://doi.org/10.3390/machines13080658 - 27 Jul 2025
Viewed by 166
Abstract
Manual assembly remains essential in modern manufacturing, yet the increasing complexity of customised production imposes significant cognitive burdens and error rates on workers. Existing Spatial Augmented Reality (SAR) systems often operate passively, lacking adaptive interaction, real-time feedback, and gesture-based control. In response, we present Gest-SAR, a SAR framework that integrates a custom MediaPipe-based gesture classification model to deliver adaptive light-guided pick-to-place assembly instructions and real-time error feedback within a closed interaction loop. In a within-subject study, ten participants completed standardised Duplo-based assembly tasks using Gest-SAR, paper-based manuals, and tablet-based instructions; performance was evaluated via assembly cycle time, selection and placement error rates, cognitive workload assessed by NASA-TLX, and usability assessed by post-experiment questionnaires. Quantitative results demonstrate that Gest-SAR significantly reduces cycle times, averaging 3.95 min compared to Paper (Mean = 7.89 min, p < 0.01) and Tablet (Mean = 6.99 min, p < 0.01). It also achieved a sevenfold reduction in average error rate while lowering perceived cognitive workload (p < 0.05 for mental demand) compared to conventional modalities. In total, 90% of users reported preferring SAR over the paper and tablet modalities. These outcomes indicate that natural hand-gesture interaction coupled with real-time visual feedback enhances both the efficiency and accuracy of manual assembly. By embedding AI-driven gesture recognition and AR projection into a human-centric assistance system, Gest-SAR advances the collaborative interplay between humans and machines, aligning with Industry 5.0 objectives of resilient, sustainable, and intelligent manufacturing. Full article
(This article belongs to the Special Issue AI-Integrated Advanced Robotics Towards Industry 5.0)
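The abstract describes the gesture model only as "a custom MediaPipe-based gesture classification model". As a rough illustration of that kind of pipeline (not the authors' code), the sketch below extracts 21 hand landmarks per frame with MediaPipe Hands and feeds the flattened coordinates to a small classifier; the classifier choice and all names are assumptions.

```python
# Illustrative sketch of landmark-based gesture classification in the spirit of
# the MediaPipe pipeline described in the abstract; not the authors' implementation.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.neural_network import MLPClassifier

mp_hands = mp.solutions.hands

def frame_to_landmarks(frame_bgr, hands):
    """Return a flat (63,) array of x, y, z hand landmarks, or None if no hand."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm], dtype=np.float32).ravel()

def train_gesture_classifier(X, y):
    """X: (n_samples, 63) landmark vectors, y: integer gesture ids (assumed collected)."""
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
    clf.fit(X, y)
    return clf

def classify_stream(clf, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.6) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            feat = frame_to_landmarks(frame, hands)
            if feat is not None:
                gesture_id = clf.predict(feat.reshape(1, -1))[0]
                print("gesture:", gesture_id)  # would drive the SAR projection logic
    cap.release()
```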

24 pages, 15879 KiB  
Article
Real-Time Hand Gesture Recognition in Clinical Settings: A Low-Power FMCW Radar Integrated Sensor System with Multiple Feature Fusion
by Haili Wang, Muye Zhang, Linghao Zhang, Xiaoxiao Zhu and Qixin Cao
Sensors 2025, 25(13), 4169; https://doi.org/10.3390/s25134169 - 4 Jul 2025
Viewed by 399
Abstract
Robust and efficient contactless human–machine interaction is critical for integrated sensor systems in clinical settings, demanding low-power solutions adaptable to edge computing platforms. This paper presents a real-time hand gesture recognition system using a low-power Frequency-Modulated Continuous Wave (FMCW) radar sensor, featuring a novel Multiple Feature Fusion (MFF) framework optimized for deployment on edge devices. The proposed system integrates velocity profiles, angular variations, and spatial-temporal features through a dual-stage processing architecture: an adaptive energy thresholding detector segments gestures, followed by an attention-enhanced neural classifier. Innovations include dynamic clutter suppression and multi-path cancellation optimized for complex clinical environments. Experimental validation demonstrates high performance, achieving 98% detection recall and 93.87% classification accuracy under LOSO cross-validation. On embedded hardware, the system processes at 28 FPS, showing higher robustness against environmental noise and lower computational overhead compared with existing methods. This low-power, edge-based solution is highly suitable for applications like sterile medical control and patient monitoring, advancing contactless interaction in healthcare by addressing efficiency and robustness challenges in radar sensing for edge computing. Full article
(This article belongs to the Special Issue Integrated Sensor Systems for Medical Applications)
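The detection stage above is described as an "adaptive energy thresholding detector" that segments gestures before classification. A minimal NumPy sketch of one plausible form of such a detector (not the paper's exact method) is shown below; the threshold tracks a running noise-floor estimate, and all parameter values are assumptions.

```python
# Minimal sketch of adaptive energy-threshold gesture segmentation on a radar
# feature stream (e.g., per-frame Doppler energy). Parameter values are assumptions.
import numpy as np

def segment_gestures(energy, alpha=0.99, k=4.0, min_len=10):
    """Return (start, end) frame indices of detected gesture segments.

    energy : 1-D array of per-frame signal energy
    alpha  : smoothing factor of the running noise-floor estimate
    k      : detection threshold as a multiple of the noise floor
    min_len: minimum segment length in frames
    """
    noise = energy[0]
    active = False
    start = 0
    segments = []
    for i, e in enumerate(energy):
        threshold = k * noise
        if e > threshold and not active:
            active, start = True, i
        elif e <= threshold and active:
            active = False
            if i - start >= min_len:
                segments.append((start, i))
        elif not active:
            # adapt the noise floor only while no gesture is present
            noise = alpha * noise + (1 - alpha) * e
    return segments
```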

19 pages, 26591 KiB  
Article
Hand Washing Gesture Recognition Using Synthetic Dataset
by Rüstem Özakar and Eyüp Gedikli
J. Imaging 2025, 11(7), 208; https://doi.org/10.3390/jimaging11070208 - 22 Jun 2025
Cited by 1 | Viewed by 458
Abstract
Hand hygiene is paramount for public health, especially in critical sectors like healthcare and the food industry. Ensuring compliance with recommended hand washing gestures is vital, necessitating autonomous evaluation systems leveraging machine learning techniques. However, the scarcity of comprehensive datasets poses a significant challenge. This study addresses this issue by presenting an open synthetic hand washing dataset, created using 3D computer-generated imagery, comprising 96,000 frames (equivalent to 64 min of footage), encompassing eight gestures performed by four characters in four diverse environments. This synthetic dataset includes RGB images, depth/isolated depth images and hand mask images. Using this dataset, four neural network models, Inception-V3, Yolo-8n, Yolo-8n segmentation and PointNet, were trained for gesture classification. The models were subsequently evaluated on a large real-world hand washing dataset, demonstrating successful classification accuracies of 56.9% for Inception-V3, 76.3% for Yolo-8n and 79.3% for Yolo-8n segmentation. These findings underscore the effectiveness of synthetic data in training machine learning models for hand washing gesture recognition. Full article
(This article belongs to the Section Computer Vision and Pattern Recognition)
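One of the models trained on the synthetic frames is Inception-V3. As a hedged sketch of how such a classifier could be fine-tuned on the eight gesture classes (directory layout, hyperparameters, and the frozen-backbone strategy are assumptions, not the authors' training recipe):

```python
# Sketch: fine-tuning Inception-V3 for 8 hand-washing gesture classes on synthetic RGB frames.
import tensorflow as tf

NUM_CLASSES = 8
IMG_SIZE = (299, 299)

train_ds = tf.keras.utils.image_dataset_from_directory(
    "synthetic_handwash/train", image_size=IMG_SIZE, batch_size=32)  # assumed layout

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False  # train only the new classification head first

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Inception expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
# Evaluation on a real-world hand-washing dataset would then use model.evaluate(real_ds).
```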

16 pages, 545 KiB  
Article
Microcontroller Implementation of LSTM Neural Networks for Dynamic Hand Gesture Recognition
by Kevin Di Leo, Giorgio Biagetti, Laura Falaschetti and Paolo Crippa
Sensors 2025, 25(12), 3831; https://doi.org/10.3390/s25123831 - 19 Jun 2025
Viewed by 544
Abstract
Accelerometers are nowadays included in almost any portable or mobile device, including smartphones, smartwatches, wrist-bands, and even smart rings. The data collected from them is therefore an ideal candidate to tackle human motion recognition, as it can easily and unobtrusively be acquired. In this work, we analyze the performance of a hand-gesture classification system implemented using LSTM neural networks on a resource-constrained microcontroller platform, which required trade-offs between network accuracy and resource utilization. Using a publicly available dataset, which includes data for 20 different hand gestures recorded from 10 subjects using a wrist-worn device with a 3-axial accelerometer, we achieved an accuracy of 90.25% while running the model on an STM32L4-series microcontroller, with an inference time of 418 ms for 4 s sequences, corresponding to an average CPU usage of about 10% for the recognition task. Full article
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
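For readers curious what a network of this kind looks like, here is a small Keras sketch of an LSTM classifier for 3-axis accelerometer windows sized in the spirit of the paper (20 classes, 4 s sequences). The sampling rate (100 Hz, giving 400 timesteps), layer sizes, and the TensorFlow Lite conversion step are assumptions, not the authors' exact architecture or toolchain.

```python
# Sketch of an LSTM gesture classifier for 3-axis accelerometer sequences.
import tensorflow as tf

TIMESTEPS, CHANNELS, NUM_CLASSES = 400, 3, 20  # 4 s at an assumed 100 Hz

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, CHANNELS)),
    tf.keras.layers.LSTM(32, return_sequences=True),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# After training, a model this small can be converted with TensorFlow Lite
# before deployment on a Cortex-M class microcontroller such as the STM32L4.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("gesture_lstm.tflite", "wb").write(tflite_model)
```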

23 pages, 8446 KiB  
Article
A Novel Bilateral Data Fusion Approach for EMG-Driven Deep Learning in Post-Stroke Paretic Gesture Recognition
by Alexey Anastasiev, Hideki Kadone, Aiki Marushima, Hiroki Watanabe, Alexander Zaboronok, Shinya Watanabe, Akira Matsumura, Kenji Suzuki, Yuji Matsumaru, Hiroyuki Nishiyama and Eiichi Ishikawa
Sensors 2025, 25(12), 3664; https://doi.org/10.3390/s25123664 - 11 Jun 2025
Viewed by 718
Abstract
We introduce a hybrid deep learning model for recognizing hand gestures from electromyography (EMG) signals in subacute stroke patients: the one-dimensional convolutional long short-term memory neural network (CNN-LSTM). The proposed network was trained, tested, and cross-validated on seven hand gesture movements, collected via EMG from 25 patients exhibiting clinical features of paresis. EMG data from these patients were collected twice post-stroke, at least one week apart, and divided into datasets A and B to assess performance over time while balancing subject-specific content and minimizing training bias. Dataset A had a median post-stroke time of 16.0 ± 8.6 days, while dataset B had a median of 19.2 ± 13.7 days. In classification tests based on the number of gesture classes (ranging from two to seven), the hybrid model achieved accuracies ranging from 85.66% to 82.27% in dataset A and from 88.36% to 81.69% in dataset B. To address the limitations of deep learning with small datasets, we developed a novel bilateral data fusion approach that incorporates EMG signals from the non-paretic limb during training. This approach significantly enhanced model performance across both datasets, as evidenced by improvements in sensitivity, specificity, accuracy, and F1-score metrics. The most substantial gains were observed in the three-gesture subset, where classification accuracy increased from 73.01% to 78.42% in dataset A, and from 77.95% to 85.69% in dataset B. In conclusion, although these results may be slightly lower than those of traditional supervised learning algorithms, the combination of bilateral data fusion and the absence of feature engineering offers a novel perspective for neurorehabilitation, where every data segment is critically significant. Full article
(This article belongs to the Special Issue Biomedical Imaging, Sensing and Signal Processing)
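A one-dimensional CNN-LSTM of the kind described above typically stacks temporal convolutions over the raw multi-channel EMG window and then an LSTM over the resulting feature sequence. The sketch below is a generic Keras example of that pattern; the window length, channel count, and layer sizes are assumptions and do not reproduce the authors' architecture.

```python
# Sketch of a 1-D CNN-LSTM for multi-channel EMG gesture windows (7 gesture classes).
import tensorflow as tf

WINDOW, CHANNELS, NUM_CLASSES = 200, 8, 7  # assumed window length and EMG channel count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, CHANNELS)),
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),          # temporal modelling on the CNN feature sequence
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# Bilateral data fusion as described would add non-paretic-limb EMG windows
# (with the corresponding labels) to the training set before calling model.fit(...).
```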

26 pages, 8022 KiB  
Article
Toward a Recognition System for Mexican Sign Language: Arm Movement Detection
by Gabriela Hilario-Acuapan, Keny Ordaz-Hernández, Mario Castelán and Ismael Lopez-Juarez
Sensors 2025, 25(12), 3636; https://doi.org/10.3390/s25123636 - 10 Jun 2025
Viewed by 712
Abstract
This paper describes ongoing work surrounding the creation of a recognition system for Mexican Sign Language (LSM). We propose a general sign decomposition that is divided into three parts, i.e., hand configuration (HC), arm movement (AM), and non-hand gestures (NHGs). This paper focuses on the AM features and reports the approach created to analyze visual patterns in arm joint movements (wrists, shoulders, and elbows). For this research, a proprietary dataset—one that does not limit the recognition of arm movements—was developed, with active participation from the deaf community and LSM experts. We analyzed two case studies involving three sign subsets. For each sign, the pose was extracted to generate shapes of the joint paths during the arm movements and fed to a CNN classifier. YOLOv8 was used for pose estimation and visual pattern classification purposes. The proposed approach, based on pose estimation, shows promising results for constructing CNN models to classify a wide range of signs. Full article
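The arm-movement branch above turns pose keypoints into "shapes of the joint paths" that a CNN then classifies. The following sketch shows one way such a trajectory image could be built with a YOLOv8 pose model; keypoint indices follow the COCO convention, and the model file, canvas size, and rasterisation scheme are assumptions rather than the authors' procedure.

```python
# Sketch: extract arm-joint trajectories with YOLOv8 pose and rasterise them
# into an image for a downstream CNN classifier.
import cv2
import numpy as np
from ultralytics import YOLO

ARM_JOINTS = [5, 6, 7, 8, 9, 10]  # COCO indices: shoulders, elbows, wrists

def arm_trajectory_image(video_path, canvas=(224, 224)):
    model = YOLO("yolov8n-pose.pt")
    img = np.zeros(canvas, dtype=np.uint8)
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        result = model(frame, verbose=False)[0]
        if result.keypoints is None or len(result.keypoints.xy) == 0:
            continue
        kps = result.keypoints.xy[0].cpu().numpy()  # (17, 2) keypoints of the first person
        h, w = frame.shape[:2]
        for j in ARM_JOINTS:
            x, y = kps[j]
            px = int(x / w * (canvas[1] - 1))
            py = int(y / h * (canvas[0] - 1))
            img[py, px] = 255  # accumulate the joint path on the canvas
    cap.release()
    return img  # the "shape of the joint path" image the CNN would classify
```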

24 pages, 4340 KiB  
Article
Real-Time Mobile Application for Translating Portuguese Sign Language to Text Using Machine Learning
by Gonçalo Fonseca, Gonçalo Marques, Pedro Albuquerque Santos and Rui Jesus
Electronics 2025, 14(12), 2351; https://doi.org/10.3390/electronics14122351 - 8 Jun 2025
Cited by 1 | Viewed by 1087
Abstract
Communication barriers between deaf and hearing individuals present significant challenges to social inclusion, highlighting the need for effective technological aids. This study aimed to bridge this gap by developing a mobile system for the real-time translation of Portuguese Sign Language (LGP) alphabet gestures into text, addressing a specific technological void for LGP. The core of the solution is a mobile application integrating two distinct machine learning approaches trained on a custom LGP dataset: firstly, a Convolutional Neural Network (CNN) optimized with TensorFlow Lite for efficient, on-device image classification, enabling offline use; secondly, a method utilizing MediaPipe for hand landmark extraction from the camera feed, with classification performed by a server-side Multilayer Perceptron (MLP). Evaluation tests confirmed that both approaches could recognize LGP alphabet gestures with good accuracy (F1-scores of approximately 76% for the CNN and 77% for the MediaPipe+MLP) and processing speed (1 to 2 s per gesture on high-end devices using the CNN and 3 to 5 s under typical network conditions using MediaPipe+MLP), facilitating efficient real-time translation, though performance trade-offs regarding speed versus accuracy under different conditions were observed. The implementation of this dual-method system provides crucial flexibility, adapting to varying network conditions and device capabilities, and offers a scalable foundation for future expansion to include more complex gestures. This work delivers a practical tool that may contribute to improving communication accessibility and the societal integration of the deaf community in Portugal. Full article
(This article belongs to the Special Issue Virtual Reality Applications in Enhancing Human Lives)
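The on-device branch described above runs a TensorFlow Lite CNN directly on camera frames. As a hedged sketch of that inference path (the model file name, input preprocessing, and label set are placeholders, not the authors' artefacts):

```python
# Sketch of on-device TFLite inference for LGP alphabet classification.
import numpy as np
import tensorflow as tf

LGP_LETTERS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # placeholder label set

interpreter = tf.lite.Interpreter(model_path="lgp_cnn.tflite")  # assumed model file
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

def classify_frame(frame_rgb):
    """frame_rgb: HxWx3 uint8 image already cropped to the signing hand."""
    h, w = input_details[0]["shape"][1:3]
    x = tf.image.resize(frame_rgb[None].astype(np.float32) / 255.0, (h, w)).numpy()
    interpreter.set_tensor(input_details[0]["index"], x)
    interpreter.invoke()
    probs = interpreter.get_tensor(output_details[0]["index"])[0]
    return LGP_LETTERS[int(np.argmax(probs))], float(np.max(probs))
```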

26 pages, 8770 KiB  
Article
Evaluation of Benchmark Datasets and Deep Learning Models with Pre-Trained Weights for Vision-Based Dynamic Hand Gesture Recognition
by Yaseen, Oh-Jin Kwon, Jaeho Kim, Jinhee Lee and Faiz Ullah
Appl. Sci. 2025, 15(11), 6045; https://doi.org/10.3390/app15116045 - 27 May 2025
Viewed by 709
Abstract
The integration of dynamic hand gesture recognition in computer vision-based systems promises enhanced human–computer interaction, providing a natural and intuitive way of communicating. However, achieving real-time performance efficiency is a highly challenging task. As the effectiveness of dynamic hand gesture recognition depends on the nature of the underlying datasets and deep learning models, selecting a diverse and effective dataset and deep learning model is crucial to achieving reliable performance. This study explores the effectiveness of benchmark hand gesture recognition datasets in training lightweight deep learning models for robust performance. The objective is to evaluate and analyze these datasets and models through training and evaluation for use in practical applications. To evaluate these datasets and models, we analyze model performance using metrics such as precision, recall, F1-score, specificity, and accuracy. For an unbiased comparison, both subjective and objective metrics are reported, offering significant insights into dataset–model interactions in hand gesture recognition. Full article
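The metrics listed above (precision, recall, F1-score, specificity, and accuracy) can all be derived from the confusion matrix of a trained model's predictions. A small scikit-learn sketch, with `y_true` and `y_pred` assumed to come from an already-trained gesture model:

```python
# Sketch: per-class evaluation metrics including specificity, macro-averaged.
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix,
                             precision_recall_fscore_support)

def gesture_metrics(y_true, y_pred):
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="macro", zero_division=0)
    cm = confusion_matrix(y_true, y_pred)
    # specificity per class: TN / (TN + FP), then macro-averaged
    fp = cm.sum(axis=0) - np.diag(cm)
    fn = cm.sum(axis=1) - np.diag(cm)
    tp = np.diag(cm)
    tn = cm.sum() - (fp + fn + tp)
    specificity = float(np.mean(tn / (tn + fp)))
    return {
        "accuracy": accuracy_score(y_true, y_pred),
        "precision": precision,
        "recall": recall,
        "f1": f1,
        "specificity": specificity,
    }
```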

15 pages, 329 KiB  
Article
Classification of Electroencephalography Motor Execution Signals Using a Hybrid Neural Network Based on Instantaneous Frequency and Amplitude Obtained via Empirical Wavelet Transform
by Patryk Zych, Kacper Filipek, Agata Mrozek-Czajkowska and Piotr Kuwałek
Sensors 2025, 25(11), 3284; https://doi.org/10.3390/s25113284 - 23 May 2025
Viewed by 547
Abstract
Brain–computer interfaces (BCIs) have garnered significant interest due to their potential to enable communication and control for individuals with limited or no ability to interact with technologies in a conventional way. By using electrical signals generated by the brain, BCIs eliminate the need for physical interaction with external devices. This study investigates the performance of traditional classifiers—specifically, linear discriminant analysis (LDA) and support vector machines (SVMs)—in comparison with a hybrid neural network model for EEG-based gesture classification. The dataset comprised EEG recordings of seven distinct gestures performed by 33 participants. Binary classification tasks were conducted using both raw windowed EEG signals and features extracted via bandpower and the empirical wavelet transform (EWT). The hybrid neural network architecture demonstrated higher classification accuracy compared to the standard classifiers. These findings suggest that combining feature extraction with deep learning models offers a promising approach for improving EEG gesture recognition in BCI systems. Full article
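The classical-baseline side of the comparison (bandpower features fed to LDA and SVM classifiers) can be sketched with SciPy and scikit-learn as below. This does not reproduce the EWT-based instantaneous frequency and amplitude features used by the hybrid model; the band limits, window handling, and sampling rate are assumptions.

```python
# Sketch: bandpower features from windowed EEG and LDA/SVM baseline classification.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

FS = 250  # Hz, assumed EEG sampling rate
BANDS = {"mu": (8, 13), "beta": (13, 30)}

def bandpower_features(window):
    """window: (n_channels, n_samples) EEG segment -> flat feature vector."""
    freqs, psd = welch(window, fs=FS, nperseg=min(window.shape[1], FS))
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))  # mean band power per channel
    return np.concatenate(feats)

def evaluate_baselines(X_windows, y):
    X = np.stack([bandpower_features(w) for w in X_windows])
    for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                      ("SVM", SVC(kernel="rbf", C=1.0))]:
        clf.fit(X, y)
        print(name, "training accuracy:", clf.score(X, y))
```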

15 pages, 4192 KiB  
Article
Enhancing Kazakh Sign Language Recognition with BiLSTM Using YOLO Keypoints and Optical Flow
by Zholdas Buribayev, Maria Aouani, Zhansaya Zhangabay, Ainur Yerkos, Zemfira Abdirazak and Mukhtar Zhassuzak
Appl. Sci. 2025, 15(10), 5685; https://doi.org/10.3390/app15105685 - 20 May 2025
Viewed by 680
Abstract
Sign languages are characterized by complex and subtle hand movements, which are challenging for computer vision systems to accurately recognize. This study proposes an innovative deep learning pipeline specifically designed for reliable gesture recognition of Kazakh Sign Language. The approach combines key point detection using the YOLO model, optical flow estimation, and a bidirectional long short-term memory (BiLSTM) network. At the initial stage, a dataset is generated using MediaPipe, which is then used to train the YOLO model to accurately identify key hand points. After training, the YOLO model extracts key points and bounding boxes from video recordings of gestures, creating consistent representations of movements. To improve the recognition of dynamic gestures, optical flow is calculated within a region covering 10% of the area around each key point, which captures movement dynamics and provides additional temporal features. The BiLSTM network is trained on multimodal input that combines keypoint, bounding-box, and optical-flow data, resulting in improved gesture classification accuracy. The experimental results demonstrate that the proposed approach is superior to traditional methods based solely on key points, especially in recognizing fast and complex gestures. The proposed structure promotes the development of sign language recognition technologies, especially for under-studied languages such as Kazakh, paving the way to more inclusive and effective communication tools. Full article
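The optical-flow step above can be illustrated with dense Farneback flow averaged in a small window around each detected key point. In this sketch the "10% of the area around key points" is approximated by a fixed patch size; the patch size, Farneback parameters, and names are assumptions.

```python
# Sketch: per-keypoint optical-flow features between consecutive frames.
import cv2
import numpy as np

def keypoint_flow(prev_gray, gray, keypoints_xy, patch=16):
    """Return mean (dx, dy) flow per key point.

    prev_gray, gray : consecutive grayscale frames
    keypoints_xy    : (K, 2) array of pixel coordinates from the YOLO keypoint model
    """
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = gray.shape
    feats = []
    for x, y in keypoints_xy.astype(int):
        x0, x1 = max(x - patch, 0), min(x + patch, w)
        y0, y1 = max(y - patch, 0), min(y + patch, h)
        feats.append(flow[y0:y1, x0:x1].reshape(-1, 2).mean(axis=0))
    # concatenated with keypoints and bounding boxes to form the BiLSTM input per frame
    return np.concatenate(feats)
```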

19 pages, 3444 KiB  
Article
Gesture Classification Using a Smartwatch: Focusing on Unseen Non-Target Gestures
by Jae-Hyuk Choi, Hyun-Tae Choi, Kyeong-Taek Kim, Jin-Sub Jung, Seok-Hyeon Lee and Won-Du Chang
Appl. Sci. 2025, 15(9), 4867; https://doi.org/10.3390/app15094867 - 27 Apr 2025
Viewed by 529
Abstract
Hand gestures serve as a fundamental means of communication, and extensive research has been conducted to develop automated recognition systems. These systems are expected to improve human–computer interaction, particularly in environments where verbal communication is limited. A key challenge in these systems is the classification of non-target actions: everyday movements are often not included in the training set, yet those that resemble target gestures can lead to misclassification. Unlike previous studies that primarily focused on target action recognition, this study explicitly addresses the unseen non-target classification problem through an experiment to distinguish target and non-target activities based on movement characteristics. This study examines the ability of deep learning models to generalize classification criteria beyond predefined training sets. The proposed method was validated with arm movement data from 20 target group participants and 11 non-target group participants, achieving an average F1-score of 84.23%, with a non-target classification score of 73.23%. Furthermore, we confirmed that data augmentation and incorporating a loss factor significantly improved the recognition of unseen non-target gestures. The results suggest that improving classification performance on untrained, non-target movements will enhance the applicability of gesture recognition systems in real-world environments. This is particularly relevant for wearable devices, assistive technologies, and human–computer interaction systems. Full article
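The abstract credits data augmentation (together with a loss factor) for part of the improvement on unseen non-target gestures. As a hedged illustration of the kind of time-series augmentation commonly applied to wrist-sensor sequences (jittering and per-channel scaling; magnitudes and names are assumptions, not the paper's settings):

```python
# Sketch: jitter-and-scale augmentation for smartwatch motion sequences.
import numpy as np

def augment_sequence(seq, rng, jitter_std=0.05, scale_range=(0.9, 1.1)):
    """seq: (timesteps, channels) wrist-motion sequence -> augmented copy."""
    scale = rng.uniform(*scale_range, size=(1, seq.shape[1]))   # per-channel scaling
    noise = rng.normal(0.0, jitter_std, size=seq.shape)         # additive jitter
    return seq * scale + noise

def augment_dataset(X, y, copies=2, seed=0):
    """Expand (N, T, C) data with `copies` augmented variants per sequence."""
    rng = np.random.default_rng(seed)
    X_aug = [X] + [np.stack([augment_sequence(s, rng) for s in X]) for _ in range(copies)]
    y_aug = np.concatenate([y] * (copies + 1))
    return np.concatenate(X_aug), y_aug
```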

23 pages, 10659 KiB  
Article
A Fast and Low-Impact Embedded Orientation Correction Algorithm for Hand Gesture Recognition Armbands
by Andrea Mongardi, Fabio Rossi, Andrea Prestia, Paolo Motto Ros and Danilo Demarchi
Sensors 2025, 25(7), 2188; https://doi.org/10.3390/s25072188 - 30 Mar 2025
Viewed by 577
Abstract
Hand gesture recognition is a prominent topic in the recent literature, with surface ElectroMyoGraphy (sEMG) recognized as a key method for wearable Human–Machine Interfaces (HMIs). However, sensor placement still significantly impacts system performance. This study addresses sensor displacement by introducing a fast and low-impact orientation correction algorithm for sEMG-based HMI armbands. The algorithm includes a calibration phase to estimate armband orientation and real-time data correction, requiring only two hand gestures with distinct sEMG activation patterns. This ensures hardware and database independence and eliminates the need for model retraining, as data correction occurs prior to classification or prediction. The algorithm was implemented in a hand gesture HMI system featuring a custom seven-channel sEMG armband with an Artificial Neural Network (ANN) capable of recognizing nine gestures. Validation demonstrated its effectiveness, achieving 93.36% average prediction accuracy with arbitrary armband wearing orientation. The algorithm also has minimal impact on power consumption and latency, requiring just an additional 500 μW and introducing a latency increase of 408 μs. These results highlight the algorithm's efficacy, general applicability, and efficiency, presenting it as a promising solution to the electrode-shift issue in sEMG-based HMI applications. Full article
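The general electrode-rotation idea, estimating how far the armband is rotated during a calibration gesture and then re-mapping channels before classification, can be sketched as below. This is only an illustration of the concept (circular re-alignment of a per-channel activation profile against a stored reference), not the authors' algorithm; all names are assumptions.

```python
# Sketch: channel-shift estimation and correction for a rotated sEMG armband.
import numpy as np

def estimate_channel_shift(reference_profile, calibration_profile):
    """Both inputs: (n_channels,) mean sEMG activation during a calibration gesture."""
    n = len(reference_profile)
    scores = [np.dot(reference_profile, np.roll(calibration_profile, s)) for s in range(n)]
    return int(np.argmax(scores))  # shift (in channels) that best re-aligns the armband

def correct_orientation(emg_frame, shift):
    """Re-map a (n_channels,) sample back to the reference channel order."""
    return np.roll(emg_frame, shift)

# Example: with a seven-channel armband, a shift of 2 means the band is rotated by
# two electrode positions; every incoming sample is rolled back by that shift before
# being passed to the trained ANN classifier, so no retraining is needed.
```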

47 pages, 2260 KiB  
Review
Hand Gesture Recognition on Edge Devices: Sensor Technologies, Algorithms, and Processing Hardware
by Elfi Fertl, Encarnación Castillo, Georg Stettinger, Manuel P. Cuéllar and Diego P. Morales
Sensors 2025, 25(6), 1687; https://doi.org/10.3390/s25061687 - 8 Mar 2025
Cited by 2 | Viewed by 2076
Abstract
Hand gesture recognition (HGR) is a convenient and natural form of human–computer interaction suitable for a wide range of applications. Much research has already focused on wearable device-based HGR; by contrast, this paper gives an overview of device-free HGR, i.e., systems that do not require the user to wear something like a data glove or hold a device. HGR systems are explored with regard to technology, hardware, and algorithms. The survey demonstrates how timing and power requirements interact with hardware, pre-processing algorithms, classification, and sensing technology, and how these choices determine the achievable granularity, accuracy, and number of gestures. Sensor modalities evaluated are Wi-Fi, vision, radar, mobile networks, and ultrasound. The pre-processing technologies stereo vision, multiple-input multiple-output (MIMO), spectrograms, phased arrays, range-Doppler maps, range-angle maps, Doppler-angle maps, and multilateration are explored. Classification approaches with and without ML are studied; among those with ML, the assessed algorithms range from simple tree structures to transformers. All applications are evaluated with respect to their level of integration, including whether the presented application is suitable for edge integration, its real-time capability, whether continuous learning is implemented, the robustness achieved, whether ML is applied, and the accuracy level. Our survey aims to provide a thorough understanding of the current state of the art in device-free HGR on edge devices and in general. Finally, on the basis of present-day challenges and opportunities in this field, we outline the further research we suggest for improving HGR. Our goal is to promote the development of efficient and accurate gesture recognition systems. Full article
(This article belongs to the Special Issue Multimodal Sensing Technologies for IoT and AI-Enabled Systems)

16 pages, 2911 KiB  
Article
A Bimodal EMG/FMG System Using Machine Learning Techniques for Gesture Recognition Optimization
by Nuno Pires and Milton P. Macedo
Signals 2025, 6(1), 8; https://doi.org/10.3390/signals6010008 - 20 Feb 2025
Viewed by 1321
Abstract
This study is part of a broader project, the Open Source Bionic Hand, which aims to develop and control, in real time, a low-cost 3D-printed bionic hand prototype using signals from the muscles of the forearm. This work implements a bimodal signal acquisition system that uses EMG signals and Force Myography (FMG) to optimize the recognition of gesture intention and, consequently, the control of the bionic hand. The implementation of this bimodal EMG-FMG system is described; it acquires two different signals from BITalino EMG modules and Flexiforce™ sensors from Tekscan™. The dataset was built from thirty-six features extracted from each acquisition, using two EMG and two FMG sensors placed simultaneously on the extensor and flexor muscle groups. Feature extraction is also described, as well as the subsequent use of these features to train and compare machine learning models for gesture recognition in MATLAB's Classification Learner tool (v2.2.5 software). Preliminary results obtained from a dataset of three healthy volunteers show the effectiveness of this bimodal EMG/FMG system in improving gesture recognition, as shown, for example, by the Quadratic SVM classifier, whose accuracy rises from 75.00% with EMG signals alone to 87.96% using both signals. Full article
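The quadratic SVM comparison mentioned above (EMG-only features versus the combined EMG + FMG feature set) maps naturally onto a degree-2 polynomial SVM. A small scikit-learn sketch, where the feature matrices are assumed to hold the thirty-six per-acquisition features described in the abstract:

```python
# Sketch: quadratic SVM accuracy on EMG-only vs. combined EMG + FMG features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def quadratic_svm_accuracy(X, y, folds=5):
    clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=2, C=1.0))
    return cross_val_score(clf, X, y, cv=folds).mean()

# X_emg: features from the EMG channels only; X_fmg: features from the FMG sensors.
# acc_emg = quadratic_svm_accuracy(X_emg, y)
# acc_all = quadratic_svm_accuracy(np.hstack([X_emg, X_fmg]), y)
```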

14 pages, 6544 KiB  
Article
Multi-Mode Hand Gesture-Based VR Locomotion Technique for Intuitive Telemanipulation Viewpoint Control in Tightly Arranged Logistic Environments
by Jaehoon Jeong, Haegyeom Choi and Donghun Lee
Sensors 2025, 25(4), 1181; https://doi.org/10.3390/s25041181 - 14 Feb 2025
Viewed by 892
Abstract
Telemanipulation-based object-side picking with a suction gripper often faces challenges such as occlusion of the target object or the gripper and the need for precise alignment between the suction cup and the object’s surface. These issues can significantly affect task success rates in logistics environments. To address these problems, this study proposes a multi-mode hand gesture-based virtual reality (VR) locomotion method to enable intuitive and precise viewpoint control. The system utilizes a head-mounted display (HMD) camera to capture hand skeleton data, which a multi-layer perceptron (MLP) model processes. The model classifies gestures into three modes: translation, rotation, and fixed, corresponding to fist, pointing, and unknown gestures, respectively. Translation mode moves the viewpoint based on the wrist’s displacement, rotation mode adjusts the viewpoint’s angle based on the wrist’s angular displacement, and fixed mode stabilizes the viewpoint when gestures are ambiguous. A dataset of 4312 frames was used for training and validation, with 666 frames for testing. The MLP model achieved a classification accuracy of 98.4%, with precision, recall, and F1-score exceeding 0.98. These results demonstrate the system’s ability to address the challenges of telemanipulation tasks by enabling accurate gesture recognition and seamless mode transitions. Full article
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
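The mode classifier above maps HMD hand-skeleton data to three locomotion modes (translation, rotation, fixed). The sketch below shows a generic MLP for that mapping; the feature construction (wrist-relative, scale-normalised joint positions) and layer sizes are assumptions rather than the authors' design.

```python
# Sketch: MLP mapping hand-skeleton frames to VR locomotion modes.
import numpy as np
from sklearn.neural_network import MLPClassifier

MODES = {0: "translation (fist)", 1: "rotation (pointing)", 2: "fixed (unknown)"}

def skeleton_to_features(joints_xyz):
    """joints_xyz: (n_joints, 3) hand-skeleton positions -> wrist-relative feature vector."""
    rel = joints_xyz - joints_xyz[0]              # translation-invariant (wrist as origin)
    scale = np.linalg.norm(rel, axis=1).max() or 1.0
    return (rel / scale).ravel()                  # roughly scale-invariant

def train_mode_classifier(skeleton_frames, labels):
    X = np.stack([skeleton_to_features(j) for j in skeleton_frames])
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
    clf.fit(X, labels)
    return clf
```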
