Search Results (47)

Search Parameters:
Keywords = hand posture recognition

9 pages, 1490 KB  
Case Report
Dynamic Cervical Myelopathy Misleading on Neutral Imaging: The Role of Flexion–Extension MRI
by Leonardo Anselmi, Donato Creatura, Mario De Robertis, Ali Baram, Emanuele Stucchi, Gabriele Capo, Jad El Choueiri, Federico Pessina, Maurizio Fornari and Carlo Brembilla
J. Clin. Med. 2026, 15(4), 1333; https://doi.org/10.3390/jcm15041333 - 8 Feb 2026
Viewed by 716
Abstract
Background/Objectives: Degenerative cervical myelopathy (DCM) may result from posture-dependent spinal cord compromise not detectable on neutral imaging. Dynamic MRI can uncover clinically relevant mechanisms underlying otherwise unexplained myelopathy and guide management. This report illustrates a dynamic cervical myelopathy phenotype revealed by flexion–extension imaging and its impact on surgical decision-making. Methods: A 49-year-old man presented with progressive bilateral upper-limb paresthesias, intrinsic hand atrophy, and distal weakness. Neutral cervical MRI, standard radiographs, and flexion–extension MRI were performed to investigate a suspected dynamic etiology, including differentiation from Hirayama disease. Surgical treatment consisted of anterior cervical discectomy and fusion (ACDF), with clinical and radiological follow-up. Results: Neutral MRI showed intramedullary T2 hyperintensity from C4 to C6 without static canal stenosis or frank compression, while radiographs demonstrated segmental kyphosis without instability. Flexion MRI revealed reproducible spinal cord contact with a small cranially located osteophyte at C5–C6, concordant with the myelopathic signal. ACDF at C4–C6 led to clinical improvement. One year later, recurrent symptoms from adjacent-segment pathology (C3–C4 myelopathic signal and C6–C7 foraminal disc herniation) required a second ACDF, resulting in durable neurological stability. Conclusions: This case demonstrates flexion-dependent cord–osteophyte conflict causing cervical myelomalacia in the absence of static stenosis. Dynamic MRI resolved a clinical–radiological mismatch and directly informed surgical planning. Recognition of dynamic myelopathy phenotypes and vigilance for adjacent-segment disease after fusion are essential for optimizing outcomes.

19 pages, 1787 KB  
Article
Event-Based Machine Vision for Edge AI Computing
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 935; https://doi.org/10.3390/s26030935 - 1 Feb 2026
Viewed by 947
Abstract
Event-based sensors provide sparse, motion-centric measurements that can reduce data bandwidth and enable always-on perception on resource-constrained edge devices. This paper presents an event-based machine vision framework for smart-home AIoT that couples a Dynamic Vision Sensor (DVS) with compute-efficient algorithms for (i) human/object detection, (ii) 2D human pose estimation, and (iii) hand posture recognition for human–machine interfaces. The main methodological contributions are timestamp-based, polarity-agnostic recency encoding that preserves moving-edge structure while suppressing static background, and task-specific network optimizations (architectural reduction and mixed-bit quantization) tailored to sparse event images. With a fixed downstream network, the recency encoding improves action recognition accuracy over temporal accumulation (0.908 vs. 0.896). In a 24 h indoor monitoring experiment (640 × 480), the raw DVS stream is about 30× smaller than conventional CMOS video and remains about 5× smaller after standard compression. For human detection, the optimized event processing reduces computation from 5.8 G to 81 M FLOPs and runtime from 172 ms to 15 ms (more than 11× speed-up). For pose estimation, a pruned HRNet reduces model size from 127 MB to 19 MB and inference time from 70 ms to 6 ms on an NVIDIA Titan X while maintaining comparable accuracy (mAP from 0.95 to 0.94) on MS COCO 2017 using synthetic event streams generated by an event simulator. For hand posture recognition, a compact CNN achieves 99.19% recall and 0.0926% FAR with 14.31 ms latency on a single i5-4590 CPU core using 10-frame sequence voting. These results indicate that event-based sensing combined with lightweight inference is a practical approach to privacy-friendly, real-time perception under strict edge constraints.
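The recency encoding described in this abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the exponential decay kernel, the time constant, and the event-tuple format are all assumptions.

```python
import numpy as np

def recency_encode(events, shape, t_now, tau=0.05):
    """Timestamp-based, polarity-agnostic recency image: each pixel keeps
    an exponentially decayed 'freshness' of its most recent event.
    The exponential kernel and tau are illustrative assumptions."""
    last = np.full(shape, -np.inf, dtype=np.float32)
    for x, y, t, _polarity in events:       # polarity is deliberately ignored
        last[y, x] = max(last[y, x], t)
    img = np.zeros(shape, dtype=np.float32)
    seen = np.isfinite(last)
    img[seen] = np.exp(-(t_now - last[seen]) / tau)
    return img                              # pixels with no events stay at 0

# Three synthetic events as (x, y, timestamp_s, polarity) tuples.
events = [(1, 0, 0.00, +1), (1, 0, 0.04, -1), (2, 3, 0.05, +1)]
frame = recency_encode(events, shape=(4, 4), t_now=0.05)
```

Pixels with recent motion approach 1, stale pixels decay toward 0, and pixels that never fired stay exactly 0, which is what suppresses the static background while preserving moving-edge structure.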
(This article belongs to the Special Issue Next-Generation Edge AI in Wearable Devices)

31 pages, 9712 KB  
Article
YOLO-HRNet with Attention Mechanism: For Automated Ergonomic Risk Assessment in Garment Manufacturing
by Yichen Tan, Ziqian Yang and Zhihui Wu
Appl. Sci. 2025, 15(24), 12950; https://doi.org/10.3390/app152412950 - 8 Dec 2025
Viewed by 852
Abstract
For garment manufacturing, an efficient and precise assessment of ergonomics is vital to prevent work-related musculoskeletal disorders. This study creates a computer vision-based algorithm for fast and accurate risk analysis. Specifically, we introduced SE and CBAM attention mechanisms into the YOLO network and integrated the optimized modules into the HRNet architecture to improve the accuracy of human pose recognition. This approach effectively addresses common interferences in garment production environments, such as fabric accumulation, equipment occlusion, and complex hand movements, while significantly enhancing the accuracy of human detection. On the COCO dataset, it increased mAP and recall by 4.43% and 5.99%, respectively, over YOLOv8. Furthermore, by analyzing key postural features from worker videos of cutting, sewing, and pressing, we achieved a quantified ergonomic risk assessment. Experimental results indicate that the RULA scores calculated using this algorithm are highly consistent with expert evaluations, remain stable, and accurately reflect the dynamic changes in ergonomic risk levels across different processes. It is important to note that the validation was based on a pilot study involving a limited number of workers and task types, meaning that the findings primarily demonstrate feasibility rather than full-scale generalizability. Even so, the algorithm outperforms existing lightweight solutions and can be deployed in real time on edge devices within factories, providing a low-cost ergonomic monitoring tool for the garment manufacturing industry. This helps prevent and reduce musculoskeletal injuries among workers.

47 pages, 15579 KB  
Article
Geometric Symmetry and Temporal Optimization in Human Pose and Hand Gesture Recognition for Intelligent Elderly Individual Monitoring
by Pongsarun Boonyopakorn and Mahasak Ketcham
Symmetry 2025, 17(9), 1423; https://doi.org/10.3390/sym17091423 - 1 Sep 2025
Cited by 3 | Viewed by 1557
Abstract
This study introduces a real-time, non-intrusive monitoring system designed to support elderly care through vision-based pose estimation and hand gesture recognition. The proposed framework integrates convolutional neural networks (CNNs), temporal modeling using LSTM networks, and symmetry-aware keypoint analysis to enhance the accuracy and reliability of behavior detection under varied real-world conditions. By leveraging the bilateral symmetry of human anatomy, the system improves the robustness of posture and gesture classification, even in the presence of partial occlusion or variable lighting. A total of 21 hand landmarks and 33 body pose points are used to recognize predefined actions and communication gestures, enabling seamless interaction without wearable devices. Experimental evaluations across four distinct lighting environments confirm a consistent accuracy above 90%, with real-time alerts triggered via IoT messaging platforms. The system’s modular architecture, interpretability, and adaptability make it a scalable solution for intelligent elderly individual monitoring, offering a novel application of spatial symmetry and optimized deep learning in healthcare technology.
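The symmetry-aware idea can be illustrated with a toy check: reflect left-side keypoints across the body midline and measure how far they land from their right-side counterparts. The keypoint values, the joint set, and the shoulder-midpoint midline below are hypothetical, not the paper's actual scheme.

```python
import numpy as np

# Hypothetical normalized (x, y) keypoints; the joint names and the
# shoulder-midpoint midline convention are illustrative assumptions.
pose = {
    "l_shoulder": (0.40, 0.30), "r_shoulder": (0.60, 0.30),
    "l_wrist":    (0.35, 0.55), "r_wrist":    (0.66, 0.55),
}

def symmetry_error(pose):
    """Reflect each left joint across the vertical midline x = mid_x and
    average its distance to the matching right joint; 0 = perfect symmetry."""
    mid_x = (pose["l_shoulder"][0] + pose["r_shoulder"][0]) / 2
    pairs = [("l_shoulder", "r_shoulder"), ("l_wrist", "r_wrist")]
    errs = []
    for left, right in pairs:
        lx, ly = pose[left]
        mx, my = 2 * mid_x - lx, ly        # mirror across the midline
        errs.append(np.hypot(mx - pose[right][0], my - pose[right][1]))
    return float(np.mean(errs))

err = symmetry_error(pose)  # small value -> nearly bilaterally symmetric pose
```

A large error can flag occluded or misdetected keypoints, which is one way bilateral symmetry makes posture classification more robust.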

22 pages, 8643 KB  
Article
A Comparison of Deep Learning Techniques for Pose Recognition in Up-and-Go Pole Walking Exercises Using Skeleton Images and Feature Data
by Wan-Chih Lin, Yu-Chen Tu, Hong-Yi Lin and Ming-Hseng Tseng
Electronics 2025, 14(6), 1075; https://doi.org/10.3390/electronics14061075 - 7 Mar 2025
Cited by 6 | Viewed by 5183
Abstract
This study evaluates the performance of seven deep learning methods for recognizing motion patterns in Up-and-Go pole walking exercises, aiming to improve rehabilitation technologies for the elderly population. For the ageing population, improving the accuracy of movement posture is crucial to obtaining better rehabilitation outcomes. Up-and-Go pole walking exercises offer significant health benefits, but attaining the correct pose in motion is essential for achieving these benefits. The dataset includes skeleton images generated by OpenPose 1.7.0 and 2D and 3D skeleton images extracted through MediaPipe 0.10.21. Two sets of feature data were developed for model evaluation: one that comprises 12 features representing the key coordinates of the hands and feet and another consisting of 30 features derived from subdivided full-body skeletons. The study compares the accuracy and performance of each method, examining the impact of different combinations and representations on motion patterns. The experimental results indicate that the Swin model based on MediaPipe 2D skeleton images achieved the highest accuracy (99.7%), demonstrating superior performance in recognizing motion patterns of Up-and-Go pole walking exercises. The study summarizes the advantages and limitations of each approach, highlighting the contributions of different features and data representations to recognition outcomes. This research provides scientific evidence to advance elderly rehabilitation technologies by accurately recognizing poses.
(This article belongs to the Special Issue Advances in Information, Intelligence, Systems and Applications)

17 pages, 9355 KB  
Article
Grasp Pattern Recognition Using Surface Electromyography Signals and Bayesian-Optimized Support Vector Machines for Low-Cost Hand Prostheses
by Alessandro Grattarola, Marta C. Mora, Joaquín Cerdá-Boluda and José V. García Ortiz
Appl. Sci. 2025, 15(3), 1062; https://doi.org/10.3390/app15031062 - 22 Jan 2025
Cited by 8 | Viewed by 3216
Abstract
Every year, thousands of people undergo amputations due to trauma or medical conditions. The loss of an upper limb, in particular, has profound physical and psychological consequences for patients. One potential solution is the use of externally powered prostheses equipped with motorized artificial hands. However, these commercially available prosthetic hands are prohibitively expensive for most users. In recent years, advancements in 3D printing and sensor technologies have enabled the design and production of low-cost, externally powered prostheses. This paper presents a pattern-recognition-based human–prosthesis interface that utilizes surface electromyography (sEMG) signals, captured by an affordable device, the Myo armband. A Support Vector Machine (SVM) algorithm, optimized using Bayesian techniques, is trained to classify the user’s intended grasp from among nine common grasping postures essential for daily life activities and functional prosthetic performance. The proposal is viable for real-time implementation on low-cost platforms, achieving 85% accuracy in grasping posture recognition.

15 pages, 7134 KB  
Article
Single-Handed Gesture Recognition with RGB Camera for Drone Motion Control
by Guhnoo Yun, Hwykuen Kwak and Dong Hwan Kim
Appl. Sci. 2024, 14(22), 10230; https://doi.org/10.3390/app142210230 - 7 Nov 2024
Cited by 6 | Viewed by 4956
Abstract
Recent progress in hand gesture recognition has introduced several natural and intuitive approaches to drone control. However, effectively maneuvering drones in complex environments remains challenging. Drone movements are governed by four independent factors: roll, yaw, pitch, and throttle. Each factor includes three distinct behaviors—increase, decrease, and neutral—necessitating hand gesture vocabularies capable of expressing at least 81 combinations for comprehensive drone control in diverse scenarios. In this paper, we introduce a new set of hand gestures for precise drone control, leveraging an RGB camera sensor. These gestures are categorized into motion-based and posture-based types for efficient management. Then, we develop a lightweight hand gesture recognition algorithm capable of real-time operation on even edge devices, ensuring accurate and timely recognition. Subsequently, we integrate hand gesture recognition into a drone simulator to execute 81 commands for drone flight. Overall, the proposed hand gestures and recognition system offer natural control for complex drone maneuvers.
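The 81-command vocabulary follows directly from four independent factors with three behaviors each (3^4 = 81). A quick enumeration, with the behavior ordering assumed for illustration:

```python
from itertools import product

# Four independent control factors, each with three behaviors, as in the paper.
factors = ["roll", "yaw", "pitch", "throttle"]
behaviors = ["decrease", "neutral", "increase"]

# One behavior choice per factor -> 3**4 = 81 distinct drone commands.
commands = [dict(zip(factors, combo)) for combo in product(behaviors, repeat=4)]
```

A recognized gesture would index into a table like this; the actual gesture-to-command mapping is the paper's contribution and is not reproduced here.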
(This article belongs to the Section Aerospace Science and Engineering)

15 pages, 6630 KB  
Article
An Actively Vision-Assisted Low-Load Wearable Hand Function Mirror Rehabilitation System
by Zheyu Chen, Huanjun Wang, Yubing Yang, Lichao Chen, Zhilong Yan, Guoli Xiao, Yi Sun, Songsheng Zhu, Bin Liu, Liang Li and Jianqing Li
Actuators 2024, 13(9), 368; https://doi.org/10.3390/act13090368 - 19 Sep 2024
Cited by 1 | Viewed by 2594
Abstract
The restoration of fine motor function in the hand is crucial for stroke survivors with hemiplegia to reintegrate into daily life and presents a significant challenge in post-stroke rehabilitation. Current mirror rehabilitation systems based on wearable devices require medical professionals or caregivers to assist patients in donning sensor gloves on the healthy side, thus hindering autonomous training, increasing labor costs, and imposing psychological burdens on patients. This study developed a low-load wearable hand function mirror rehabilitation robotic system based on visual gesture recognition. The system incorporates an active visual apparatus capable of adjusting its position and viewpoint autonomously, enabling unobtrusive monitoring of the healthy hand’s gestures throughout the rehabilitation process. Consequently, patients only need to wear the device on their impaired hand to complete the mirror training, facilitating independent rehabilitation exercises. An algorithm based on hand key point gesture recognition was developed, which is capable of automatically identifying eight distinct gestures. Additionally, the system supports remote audio–video interaction during training sessions, addressing the lack of professional guidance in independent rehabilitation. A prototype of the system was constructed, a dataset for hand gesture recognition was collected, and the system’s performance as well as functionality were rigorously tested. The results indicate that the gesture recognition accuracy exceeds 90% under ten-fold cross-validation. The system enables operators to independently complete hand rehabilitation training, while the active visual system accommodates a patient’s rehabilitation needs across different postures. This study explores methods for autonomous hand function rehabilitation training, thereby offering valuable insights for future research on hand function recovery.
(This article belongs to the Special Issue Actuators and Robotic Devices for Rehabilitation and Assistance)

17 pages, 5681 KB  
Article
Visual Perception and Multimodal Control: A Novel Approach to Designing an Intelligent Badminton Serving Device
by Fulai Jiang, Yuxuan Lin, Rui Ming, Chuan Qin, Yangjie Wu, Yuhui Liu and Haibo Luo
Machines 2024, 12(5), 331; https://doi.org/10.3390/machines12050331 - 13 May 2024
Cited by 3 | Viewed by 2473
Abstract
Addressing the current issue of limited control methods for badminton serving devices, this paper proposes a vision-based multimodal control system and method for badminton serving. The system integrates computer vision recognition technology with traditional control methods for badminton serving devices. By installing vision capture devices on the serving device, the system identifies various human body postures. Based on the content of posture information, corresponding control signals are sent to adjust parameters such as launch angle and speed, enabling multiple modes of serving. Firstly, the hardware design for the badminton serving device is presented, including the design of the actuator module through 3D modeling. Simultaneously, an embedded development board circuit is designed to meet the requirements of multimodal control. Secondly, in the aspect of visual perception for human body recognition, an improved BlazePose candidate region posture recognition algorithm is proposed based on existing posture recognition algorithms. Furthermore, mappings between posture information and hand information are established to facilitate parameter conversion for the serving device under different postures. Finally, extensive experiments validate the feasibility and stability of the developed system and method.
(This article belongs to the Special Issue Advanced Methodology of Intelligent Control and Measurement)

24 pages, 4796 KB  
Article
sEMG-Based Robust Recognition of Grasping Postures with a Machine Learning Approach for Low-Cost Hand Control
by Marta C. Mora, José V. García-Ortiz and Joaquín Cerdá-Boluda
Sensors 2024, 24(7), 2063; https://doi.org/10.3390/s24072063 - 23 Mar 2024
Cited by 11 | Viewed by 3743
Abstract
The design and control of artificial hands remains a challenge in engineering. Popular prostheses are bio-mechanically simple with restricted manipulation capabilities, as advanced devices are pricey or abandoned due to their difficult communication with the hand. For social robots, the interpretation of human intention is key to their integration into daily life. This can be achieved with machine learning (ML) algorithms, which have seldom been used for grasping posture recognition. This work proposes an ML approach to recognize, in real time, nine hand postures representing 90% of the activities of daily living, using an sEMG human–robot interface (HRI). Data from 20 subjects wearing a Myo armband (8 sEMG signals) were gathered from the NinaPro DS5 and from experimental tests with the YCB Object Set, and they were used jointly in the development of a simple multi-layer perceptron in MATLAB, with an overall success rate of 73% using only two features. GPU-based implementations were run to select the best architecture, with generalization capabilities, robustness to electrode shift, low memory expense, and real-time performance. This architecture enables the implementation of grasping posture recognition in low-cost devices, aimed at the development of affordable functional prostheses and HRI for social robots.
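As a rough structural sketch of the kind of network the abstract describes (two input features, nine posture classes), a forward pass of a small multi-layer perceptron looks like this. The layer sizes and random weights are placeholders, not the authors' trained MATLAB model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy MLP: 2 input features -> 16 hidden units -> 9 grasp classes.
# Weights are random stand-ins; the paper trains them on sEMG features.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 9)), np.zeros(9)

def predict(x):
    h = np.tanh(x @ W1 + b1)           # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())  # numerically stable softmax
    p /= p.sum()
    return int(np.argmax(p)), p

label, probs = predict(np.array([0.3, -0.7]))  # two sEMG-derived features
```

The small input dimension is what makes such a model cheap enough for real-time inference on low-cost hardware.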

19 pages, 5168 KB  
Article
Detection of Rehabilitation Training Effect of Upper Limb Movement Disorder Based on MPL-CNN
by Lijuan Shi, Runmin Wang, Jian Zhao, Jing Zhang and Zhejun Kuang
Sensors 2024, 24(4), 1105; https://doi.org/10.3390/s24041105 - 8 Feb 2024
Cited by 12 | Viewed by 4674
Abstract
Stroke represents a medical emergency and can lead to the development of movement disorders such as abnormal muscle tone, limited range of motion, or abnormalities in coordination and balance. To help stroke patients recover as soon as possible, rehabilitation training methods employ various movement modes such as ordinary movements and joint reactions to induce active reactions in the limbs and gradually restore normal functions. Rehabilitation effect evaluation can help physicians understand the rehabilitation needs of different patients, determine effective treatment methods and strategies, and improve treatment efficiency. To achieve real-time, accurate action detection, this article uses Mediapipe’s action detection algorithm and proposes a model based on MPL-CNN. Mediapipe can be used to identify key point features of the patient’s upper limbs and, simultaneously, key point features of the hand. To detect the effect of rehabilitation training for upper limb movement disorders, LSTM and CNN are combined to form a new LSTM-CNN model, which is used to identify the action features of upper limb rehabilitation training extracted by Mediapipe. The MPL-CNN model can effectively identify the accuracy of rehabilitation movements during upper limb rehabilitation training for stroke patients. To ensure the scientific validity and unified standards of rehabilitation training movements, this article employs the postures in the Fugl-Meyer Upper Limb Rehabilitation Training Functional Assessment Form (FMA) and establishes an FMA upper limb rehabilitation dataset for experimental verification. Experimental results show that in each stage of the Fugl-Meyer upper limb rehabilitation training evaluation effect detection, the MPL-CNN-based method’s recognition accuracy of upper limb rehabilitation training actions reached 95%, while the average accuracy across the various upper limb rehabilitation training actions reached 97.54%. This shows that the model is highly robust across different action categories and proves that the MPL-CNN model is an effective and feasible solution. This MPL-CNN-based method can provide high-precision detection for evaluating the rehabilitation of upper limb movement disorders after stroke, helping clinicians evaluate a patient’s rehabilitation progress and adjust the rehabilitation plan based on the evaluation results. This will help improve the personalization and precision of rehabilitation treatment and promote patient recovery.
(This article belongs to the Section Sensing and Imaging)

13 pages, 867 KB  
Article
A Four-Stage Mahalanobis-Distance-Based Method for Hand Posture Recognition
by Dawid Warchoł and Tomasz Kapuściński
Appl. Sci. 2023, 13(22), 12347; https://doi.org/10.3390/app132212347 - 15 Nov 2023
Cited by 2 | Viewed by 1699
Abstract
Automatic recognition of hand postures is an important research topic with many applications, e.g., communication support for deaf people. In this paper, we present a novel four-stage, Mahalanobis-distance-based method for hand posture recognition using skeletal data. The proposed method is based on a two-stage classification algorithm with two additional stages related to joint preprocessing (normalization) and a rule-based system, specific to hand shapes that the algorithm is meant to classify. The method achieves superior effectiveness on two benchmark datasets, the first of which was created by us for the purpose of this work, while the second is a well-known and publicly available dataset. The method’s recognition rate measured by leave-one-subject-out cross-validation tests is 94.69% on the first dataset and 97.44% on the second. Experiments, including comparison with other state-of-the-art methods and ablation studies related to classification accuracy and time, confirm the effectiveness of our approach.
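The core distance computation is standard: a sample is scored against each posture class by its Mahalanobis distance to the class mean under the class covariance. A numpy sketch on synthetic two-feature data follows; the normalization and rule-based stages of the paper's four-stage method are not reproduced, and the class names and data are illustrative.

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Distance of sample x from a class modeled by (mean, cov)."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Synthetic two-feature 'hand shape' samples for two hypothetical classes.
rng = np.random.default_rng(0)
class_a = rng.normal([0.0, 0.0], 0.5, size=(200, 2))
class_b = rng.normal([3.0, 3.0], 0.5, size=(200, 2))

stats = {name: (c.mean(axis=0), np.cov(c.T))
         for name, c in [("A", class_a), ("B", class_b)]}

# Classify a new sample by nearest Mahalanobis distance.
x = np.array([0.2, -0.1])
pred = min(stats, key=lambda k: mahalanobis(x, *stats[k]))
```

Unlike Euclidean distance, this metric accounts for per-class feature scale and correlation, which matters when skeletal joint coordinates vary differently across hand shapes.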
(This article belongs to the Topic Applications in Image Analysis and Pattern Recognition)

20 pages, 5660 KB  
Article
Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration
by Pedro Amaral, Filipe Silva and Vítor Santos
Sensors 2023, 23(21), 8989; https://doi.org/10.3390/s23218989 - 5 Nov 2023
Cited by 6 | Viewed by 4279
Abstract
Recent advances in the field of collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial to make predictions about the operator’s intentions. In this context, this paper proposes a novel learning-based framework to enable an assistive robot to recognize the object grasped by the human operator based on the pattern of the hand and finger joints. The framework combines the strengths of the commonly available software MediaPipe in detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on the comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models.
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)

13 pages, 4441 KB  
Article
Study on the Design and Performance of a Glove Based on the FBG Array for Hand Posture Sensing
by Hongcheng Rao, Binbin Luo, Decao Wu, Pan Yi, Fudan Chen, Shenghui Shi, Xue Zou, Yuliang Chen and Mingfu Zhao
Sensors 2023, 23(20), 8495; https://doi.org/10.3390/s23208495 - 16 Oct 2023
Cited by 16 | Viewed by 3621
Abstract
This study introduces a new wearable fiber-optic sensor glove. The glove utilizes a flexible material, polydimethylsiloxane (PDMS), and a silicone tube to encapsulate fiber Bragg gratings (FBGs). It is employed to enable the self-perception of hand posture, gesture recognition, and the prediction of grasping objects. The investigation employs the Support Vector Machine (SVM) approach for predicting grasping objects. The proposed fiber-optic sensor glove can concurrently monitor the motion of 14 hand joints comprising 5 metacarpophalangeal joints (MCP), 5 proximal interphalangeal joints (PIP), and 4 distal interphalangeal joints (DIP). To expand the measurement range of the sensors, a sinusoidal layout incorporates the FBG array into the glove. The experimental results indicate that the wearable sensing glove can track finger flexion within a range of 0° to 100°, with a modest minimum measurement error of 0.176° and a minimum standard deviation (SD) of 0.685°. Notably, the glove accurately detects hand gestures in real-time and even forecasts grasping actions. The fiber-optic smart glove technology proposed herein holds promising potential for industrial applications, including object grasping, 3D displays via virtual reality, and human–computer interaction.
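A minimal example of the kind of calibration such a glove relies on: mapping an FBG Bragg-wavelength shift to a joint flexion angle with a least-squares line. The wavelength and angle values below are illustrative placeholders, not the paper's measurements, and a real sensor may need a higher-order or per-joint fit.

```python
import numpy as np

# Hypothetical calibration data: Bragg-wavelength shift (nm) recorded while
# a finger joint is flexed to known angles (deg). Values are illustrative.
angles_deg = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
shift_nm = np.array([0.00, 0.24, 0.48, 0.72, 0.96, 1.20])

# Least-squares linear calibration: angle = a * shift + b.
a, b = np.polyfit(shift_nm, angles_deg, deg=1)

def shift_to_angle(s_nm):
    """Convert a measured wavelength shift to an estimated joint angle."""
    return a * s_nm + b

est = shift_to_angle(0.60)  # a mid-range reading
```

Per-joint calibrations like this would feed the 14-channel angle vector that the SVM consumes for grasp prediction.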
(This article belongs to the Special Issue Fiber Grating Sensors and Applications)

15 pages, 4788 KB  
Article
Saliency-Driven Hand Gesture Recognition Incorporating Histogram of Oriented Gradients (HOG) and Deep Learning
by Farzaneh Jafari and Anup Basu
Sensors 2023, 23(18), 7790; https://doi.org/10.3390/s23187790 - 11 Sep 2023
Cited by 8 | Viewed by 2933
Abstract
Hand gesture recognition is a vital means of communication to convey information between humans and machines. We propose a novel model for hand gesture recognition based on computer vision methods and compare results based on images with complex scenes. While extracting skin color information is an efficient method to determine hand regions, complicated image backgrounds adversely affect recognizing the exact area of the hand shape. Some valuable features like saliency maps, histogram of oriented gradients (HOG), Canny edge detection, and skin color help us maximize the accuracy of hand shape recognition. Considering these features, we propose an efficient hand posture detection model that improves the test accuracy results to over 99% on the NUS Hand Posture Dataset II and more than 97% on the hand gesture dataset with different challenging backgrounds. In addition, we added noise to around 60% of our datasets. Replicating our experiment, we achieved more than 98% and nearly 97% accuracy on NUS and hand gesture datasets, respectively. Experiments illustrate that the saliency method with HOG has stable performance for a wide range of images with complex backgrounds having varied hand colors and sizes.
