Search Results (26)

Search Parameters:
Keywords = hand grasps classification

35 pages, 12036 KiB  
Article
Transfer Learning and Deep Neural Networks for Robust Intersubject Hand Movement Detection from EEG Signals
by Chiang Liang Kok, Chee Kit Ho, Thein Htet Aung, Yit Yan Koh and Tee Hui Teo
Appl. Sci. 2024, 14(17), 8091; https://doi.org/10.3390/app14178091 - 9 Sep 2024
Cited by 8 | Viewed by 1485
Abstract
In this research, five systems were developed to classify four distinct motor functions—forward hand movement (FW), grasp (GP), release (RL), and reverse hand movement (RV)—from EEG signals, using the WAY-EEG-GAL dataset where participants performed a sequence of hand movements. During preprocessing, band-pass filtering was applied to remove artifacts and focus on the mu and beta frequency bands. The initial system, a preliminary study model, explored the overall framework of EEG signal processing and classification, utilizing time-domain features such as variance and frequency-domain features such as alpha and beta power, with a KNN model for classification. Insights from this study informed the development of a baseline system, which innovatively combined the common spatial patterns (CSP) method with continuous wavelet transform (CWT) for feature extraction and employed a GoogLeNet classifier with transfer learning. This system classified six unique pairs of events derived from the four motor functions, achieving remarkable accuracy, with the highest being 99.73% for the GP–RV pair and the lowest 80.87% for the FW–GP pair in intersubject classification. Building on this success, three additional systems were developed for four-way classification. The final model, ML-CSP-OVR, demonstrated the highest intersubject classification accuracy of 78.08% using all combined data and 76.39% for leave-one-out intersubject classification. This proposed model, featuring a novel combination of CSP-OVR, CWT, and GoogLeNet, represents a significant advancement in the field, showcasing strong potential as a general system for motor imagery (MI) tasks that is not dependent on the subject. This work highlights the prominence of the research contribution by demonstrating the effectiveness and robustness of the proposed approach in achieving high classification accuracy across different motor functions and subjects. Full article
(This article belongs to the Section Applied Biosciences and Bioengineering)
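The pipeline the abstract describes — band-pass filtering, common spatial patterns, then continuous wavelet scalograms fed to a CNN — can be sketched as follows. This is a minimal illustration assuming an 8–30 Hz mu/beta band, a 500 Hz sampling rate, a Morlet wavelet, and generic trial arrays, not the authors' implementation; the GoogLeNet transfer-learning stage is only indicated in the closing comment.

```python
import numpy as np
import pywt
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt

def bandpass(trials, lo=8.0, hi=30.0, fs=500.0, order=4):
    """Band-pass each trial to the mu/beta range (band edges assumed)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, trials, axis=-1)

def csp_filters(class_a, class_b, n_pairs=3):
    """class_*: (n_trials, n_channels, n_samples) arrays for one event pair."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = cov(class_a), cov(class_b)
    vals, vecs = eigh(Ca, Ca + Cb)                # generalized eigenproblem
    idx = np.argsort(vals)
    picks = np.r_[idx[:n_pairs], idx[-n_pairs:]]  # most discriminative filters
    return vecs[:, picks].T                       # (2 * n_pairs, n_channels)

def scalograms(trial, W, scales=np.arange(1, 64)):
    """CSP-project one trial, then build a Morlet scalogram per surrogate."""
    surrogates = W @ trial                        # (2 * n_pairs, n_samples)
    return np.stack([np.abs(pywt.cwt(s, scales, "morl")[0]) for s in surrogates])

# The stacked scalograms would be resized to images and fine-tuned on a
# pretrained CNN such as GoogLeNet for the pairwise classification step.
```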

16 pages, 3410 KiB  
Article
Feature Extraction Based on Sparse Coding Approach for Hand Grasp Type Classification
by Jirayu Samkunta, Patinya Ketthong, Nghia Thi Mai, Md Abdus Samad Kamal, Iwanori Murakami and Kou Yamada
Algorithms 2024, 17(6), 240; https://doi.org/10.3390/a17060240 - 3 Jun 2024
Viewed by 1136
Abstract
The kinematics of the human hand exhibit complex and diverse characteristics unique to each individual. Various techniques such as vision-based, ultrasonic-based, and data-glove-based approaches have been employed to analyze human hand movements. However, a critical challenge remains in efficiently analyzing and classifying hand grasp types based on time-series kinematic data. In this paper, we propose a novel sparse coding feature extraction technique based on dictionary learning to address this challenge. Our method enhances model accuracy, reduces training time, and minimizes overfitting risk. We benchmarked our approach against principal component analysis (PCA) and sparse coding based on a Gaussian random dictionary. Our results demonstrate a significant improvement in classification accuracy: achieving 81.78% with our method compared to 31.43% for PCA and 77.27% for the Gaussian random dictionary. Furthermore, our technique outperforms in terms of macro-average F1-score and average area under the curve (AUC) while also significantly reducing the number of features required. Full article
(This article belongs to the Special Issue Algorithms for Feature Selection (2nd Edition))
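A rough sketch of the comparison the abstract reports — sparse codes from a learned dictionary versus a unit-norm Gaussian random dictionary as classifier features — assuming scikit-learn's dictionary tools and an SVM; the atom count and solver are illustrative, not the authors' settings.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder
from sklearn.svm import SVC

def learned_codes(X_train, X_test, n_atoms=32):
    """Sparse codes from a dictionary learned on the training kinematics."""
    dico = DictionaryLearning(n_components=n_atoms, transform_algorithm="lasso_lars")
    return dico.fit_transform(X_train), dico.transform(X_test)

def random_codes(X_train, X_test, n_atoms=32, seed=0):
    """Baseline: sparse codes over a fixed Gaussian random dictionary."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((n_atoms, X_train.shape[1]))
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm atoms
    coder = SparseCoder(dictionary=D, transform_algorithm="lasso_lars")
    return coder.transform(X_train), coder.transform(X_test)

# Usage: Z_tr, Z_te = learned_codes(X_tr, X_te)
#        acc = SVC().fit(Z_tr, y_tr).score(Z_te, y_te)
```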

19 pages, 2760 KiB  
Article
Explainable Multimodal Graph Isomorphism Network for Interpreting Sex Differences in Adolescent Neurodevelopment
by Binish Patel, Anton Orlichenko, Adnan Patel, Gang Qu, Tony W. Wilson, Julia M. Stephen, Vince D. Calhoun and Yu-Ping Wang
Appl. Sci. 2024, 14(10), 4144; https://doi.org/10.3390/app14104144 - 14 May 2024
Cited by 2 | Viewed by 1746
Abstract
Background: A fundamental grasp of the variability observed in healthy individuals holds paramount importance in the investigation of neuropsychiatric conditions characterized by sex-related phenotypic distinctions. Functional magnetic resonance imaging (fMRI) serves as a meaningful tool for discerning these differences. Among deep learning models, graph neural networks (GNNs) are particularly well-suited for analyzing brain networks derived from fMRI blood oxygen level-dependent (BOLD) signals, enabling the effective exploration of sex differences during adolescence. Method: In the present study, we introduce a multi-modal graph isomorphism network (MGIN) designed to elucidate sex-based disparities using fMRI task-related data. Our approach amalgamates brain networks obtained from multiple scans of the same individual, thereby enhancing predictive capabilities and feature identification. The MGIN model adeptly pinpoints crucial subnetworks both within and between multi-task fMRI datasets. Moreover, it offers interpretability through the utilization of GNNExplainer, which identifies pivotal sub-network graph structures contributing significantly to sex group classification. Results: Our findings indicate that the MGIN model outperforms competing models in terms of classification accuracy, underscoring the benefits of combining two fMRI paradigms. Additionally, our model discerns the most significant sex-related functional networks, encompassing the default mode network (DMN), visual (VIS) network, cognitive (CNG) network, frontal (FRNT) network, salience (SAL) network, subcortical (SUB) network, and sensorimotor (SM) network associated with hand and mouth movements. Remarkably, the MGIN model achieves superior sex classification accuracy when juxtaposed with other state-of-the-art algorithms, yielding a noteworthy 81.67% improvement in classification accuracy. Conclusion: Our model’s superiority emanates from its capacity to consolidate data from multiple scans of subjects within a proven interpretable framework. Beyond its classification prowess, our model guides our comprehension of neurodevelopment during adolescence by identifying critical subnetworks of functional connectivity. Full article
(This article belongs to the Special Issue Applications of Artificial Intelligence in Biomedical Data Analysis)
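The core GIN aggregation underlying the MGIN model can be illustrated with a single dense-adjacency layer. The PyTorch sketch below assumes dense functional-connectivity matrices as graphs and leaves out the multimodal fusion and GNNExplainer stages.

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One graph isomorphism network layer over a dense adjacency matrix."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))        # learnable epsilon
        self.mlp = nn.Sequential(nn.Linear(dim_in, dim_out), nn.ReLU(),
                                 nn.Linear(dim_out, dim_out))

    def forward(self, H, A):
        # H: (n_nodes, dim_in) node features; A: (n_nodes, n_nodes) adjacency.
        return self.mlp((1 + self.eps) * H + A @ H)    # GIN aggregation rule

# Each fMRI paradigm could pass through its own GIN stack, with the pooled
# graph embeddings concatenated before a sex-classification head.
```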

17 pages, 3772 KiB  
Article
A Semiautonomous Control Strategy Based on Computer Vision for a Hand–Wrist Prosthesis
by Gianmarco Cirelli, Christian Tamantini, Luigi Pietro Cordella and Francesca Cordella
Robotics 2023, 12(6), 152; https://doi.org/10.3390/robotics12060152 - 13 Nov 2023
Cited by 8 | Viewed by 3203
Abstract
Alleviating the burden on amputees in terms of high-level control of their prosthetic devices is an open research challenge. EMG-based intention detection presents some limitations due to movement artifacts, fatigue, and signal stability issues. The integration of exteroceptive sensing can provide a valuable solution to overcome such limitations. In this paper, a novel semiautonomous control system (SCS) for wrist–hand prostheses using a computer vision system (CVS) is proposed and validated. The SCS integrates object detection, grasp selection, and wrist orientation estimation algorithms. By combining the CVS with a simulated EMG-based intention detection module, the SCS guarantees reliable prosthesis control. Results show high accuracy in grasping and object classification (≥97%) at a fast frame analysis frequency (2.07 FPS). The SCS achieves an average angular estimation error ≤18° and stability ≤0.8° for the proposed application. Operative tests demonstrate the capability of the proposed approach to handle complex real-world scenarios and pave the way for future implementation on a real prosthetic device. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses)
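The decision logic of such a semiautonomous controller — vision proposes, the user's EMG intent disposes — might look like the following sketch; the object-to-grasp table and the command format are illustrative assumptions, not the paper's mapping.

```python
GRASP_TABLE = {            # object class -> (grasp type, wrist angle in deg)
    "bottle": ("cylindrical", 0),
    "card":   ("lateral",    90),
    "ball":   ("spherical",   0),
}

def semiautonomous_command(detected_class: str, emg_intent_active: bool):
    """Return a prosthesis command only when the user signals intent."""
    if not emg_intent_active:
        return None                       # vision alone never triggers motion
    grasp, wrist = GRASP_TABLE.get(detected_class, ("power", 0))
    return {"grasp": grasp, "wrist_deg": wrist}
```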

20 pages, 5818 KiB  
Article
Recognition of Grasping Patterns Using Deep Learning for Human–Robot Collaboration
by Pedro Amaral, Filipe Silva and Vítor Santos
Sensors 2023, 23(21), 8989; https://doi.org/10.3390/s23218989 - 5 Nov 2023
Cited by 2 | Viewed by 2956
Abstract
Recent advances in the field of collaborative robotics aim to endow industrial robots with prediction and anticipation abilities. In many shared tasks, the robot’s ability to accurately perceive and recognize the objects being manipulated by the human operator is crucial to make predictions about the operator’s intentions. In this context, this paper proposes a novel learning-based framework to enable an assistive robot to recognize the object grasped by the human operator based on the pattern of the hand and finger joints. The framework combines the strengths of the commonly available software MediaPipe in detecting hand landmarks in an RGB image with a deep multi-class classifier that predicts the manipulated object from the extracted keypoints. This study focuses on the comparison between two deep architectures, a convolutional neural network and a transformer, in terms of prediction accuracy, precision, recall and F1-score. We test the performance of the recognition system on a new dataset collected with different users and in different sessions. The results demonstrate the effectiveness of the proposed methods, while providing valuable insights into the factors that limit the generalization ability of the models. Full article
(This article belongs to the Special Issue Smart Sensing Technology for Human Activity Recognition)
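A minimal sketch of the landmark-extraction front end described here, using MediaPipe's hand-tracking API to produce the keypoint vector a downstream CNN or transformer would classify; the image handling and classifier choice are assumptions.

```python
import cv2
import mediapipe as mp
import numpy as np

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def hand_keypoints(bgr_image):
    """Return a (63,) vector of x, y, z landmark coords, or None if no hand."""
    result = hands.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark   # 21 landmarks
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# The 63-dim vectors would then feed the deep multi-class classifier that
# predicts which object the operator is grasping.
```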

15 pages, 3971 KiB  
Article
Real-Time Classification of Motor Imagery Using Dynamic Window-Level Granger Causality Analysis of fMRI Data
by Tianyuan Liu, Bao Li, Chi Zhang, Panpan Chen, Weichen Zhao and Bin Yan
Brain Sci. 2023, 13(10), 1406; https://doi.org/10.3390/brainsci13101406 - 1 Oct 2023
Cited by 1 | Viewed by 2028
Abstract
This article presents a method for extracting neural signal features to identify the imagination of left- and right-hand grasping movements. A functional magnetic resonance imaging (fMRI) experiment was employed to identify four brain regions with significant activations during motor imagery (MI), and the effective connections between these regions of interest (ROIs) were calculated using Dynamic Window-level Granger Causality (DWGC). Then, a real-time fMRI (rt-fMRI) classification system for left- and right-hand MI was developed using the Open-NFT platform. We conducted data acquisition and processing on three subjects, all of whom were recruited from a local college. The maximum accuracy of a Support Vector Machine (SVM) classifier on real-time three-class classification (rest, left hand, and right hand) with effective connections was 69.3%, which is 3% higher on average than that of traditional multivoxel pattern classification analysis. Moreover, the method significantly improves classification accuracy during the initial stage of MI tasks while reducing latency effects in real-time decoding. The study suggests that the effective connections obtained through the DWGC method serve as valuable features for real-time decoding of MI using fMRI and exhibit higher sensitivity to changes in brain states. This research offers theoretical support and technical guidance for extracting neural signal features in fMRI-based studies. Full article
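A window-level bivariate Granger causality scan in the spirit of DWGC can be sketched as follows, assuming plain least-squares AR fits, a log residual-variance ratio, and illustrative window and order parameters rather than the authors' exact formulation.

```python
import numpy as np

def ar_residual_var(y, X):
    """Least-squares fit y ~ X; return the residual variance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger_xy(x, y, p=3):
    """How much the past of x improves prediction of y, for one window."""
    n = len(y)
    past_y = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    past_x = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    target = y[p:]
    v_restricted = ar_residual_var(target, past_y)
    v_full = ar_residual_var(target, np.hstack([past_y, past_x]))
    return np.log(v_restricted / v_full)    # > 0 suggests x -> y influence

def dwgc(x, y, win=40, step=5, p=3):
    """Slide a window along two ROI series; one causality value per window."""
    return np.array([granger_xy(x[s:s + win], y[s:s + win], p)
                     for s in range(0, len(x) - win + 1, step)])
```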

17 pages, 5844 KiB  
Article
Decoding Electroencephalography Underlying Natural Grasp Tasks across Multiple Dimensions
by Hao Gu, Jian Wang, Fengyuan Jiao, Yan Han, Wang Xu and Xin Zhao
Electronics 2023, 12(18), 3894; https://doi.org/10.3390/electronics12183894 - 15 Sep 2023
Cited by 1 | Viewed by 1434
Abstract
Individuals suffering from motor dysfunction due to various diseases often face challenges in performing essential activities such as grasping objects with their upper limbs, eating, and writing. This limitation significantly impacts their ability to live independently. Brain–computer interfaces offer a promising solution, enabling them to interact with the external environment in a meaningful way. This work decoded the electroencephalography of natural grasp tasks across three dimensions: movement-related cortical potentials, event-related desynchronization/synchronization, and brain functional connectivity, aiming to assist the development of intelligent assistive devices controlled by electroencephalography signals generated during natural movements. Furthermore, electrode selection was conducted using global coupling strength, and a random forest classification model was employed to decode three types of natural grasp tasks (palmar grasp, lateral grasp, and rest state). The results indicated a noteworthy lateralization of brain activity, closely associated with whether the executing hand was the left or the right. Reorganization of the frontal region is closely associated with external visual stimuli, while the central and parietal regions play a crucial role in motor execution. An overall average classification accuracy of 80.3% was achieved on a natural grasp task involving eight subjects. Full article
(This article belongs to the Special Issue Emerging Trends in Advanced Video and Sequence Technology)
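The decoding stage described here — electrode selection by coupling strength followed by a random forest over the three grasp conditions — might be sketched like this, with the feature shapes, number of retained electrodes, and forest size as assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def select_electrodes(coupling, keep=16):
    """coupling: (n_channels,) global coupling strengths; keep the strongest."""
    return np.argsort(coupling)[-keep:]

def decode(X, y, coupling):
    # X: (n_trials, n_channels, n_features); y in {palmar, lateral, rest}.
    chans = select_electrodes(coupling)
    feats = X[:, chans, :].reshape(len(X), -1)
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    return cross_val_score(clf, feats, y, cv=5).mean()
```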

22 pages, 2505 KiB  
Review
A Review of Myoelectric Control for Prosthetic Hand Manipulation
by Ziming Chen, Huasong Min, Dong Wang, Ziwei Xia, Fuchun Sun and Bin Fang
Biomimetics 2023, 8(3), 328; https://doi.org/10.3390/biomimetics8030328 - 24 Jul 2023
Cited by 49 | Viewed by 17755
Abstract
Myoelectric control for prosthetic hands is an important topic in the field of rehabilitation. Intuitive and intelligent myoelectric control can help amputees to regain upper limb function. However, current research efforts are primarily focused on developing rich myoelectric classifiers and biomimetic control methods, limiting prosthetic hand manipulation to simple grasping and releasing tasks, while rarely exploring complex daily tasks. In this article, we conduct a systematic review of recent achievements in two areas, namely, intention recognition research and control strategy research. Specifically, we focus on advanced methods for motion intention types, discrete motion classification, continuous motion estimation, unidirectional control, feedback control, and shared control. In addition, based on the above review, we analyze the challenges and opportunities for research directions of functionality-augmented prosthetic hands and user burden reduction, which can help overcome the limitations of current myoelectric control research and provide development prospects for future research. Full article
(This article belongs to the Special Issue Intelligent Human-Robot Interaction)

15 pages, 4475 KiB  
Article
Advanced Stiffness Sensing through the Pincer Grasping of Soft Pneumatic Grippers
by Chaiwuth Sithiwichankit and Ratchatin Chancharoen
Sensors 2023, 23(13), 6094; https://doi.org/10.3390/s23136094 - 2 Jul 2023
Viewed by 1801
Abstract
In this study, a comprehensive approach for sensing object stiffness through the pincer grasping of soft pneumatic grippers (SPGs) is presented. The study was inspired by the haptic sensing of human hands, which allows us to perceive object properties through grasping, a capability many researchers have tried to imitate in robotic grippers. Doing so requires determining the association between gripper performance and object reaction. However, soft pneumatic actuators (SPAs), the main components of SPGs, are extremely compliant, and this compliance makes determining the association challenging. Methodologically, the connection between the behaviors of grasped objects and those of SPAs was clarified, and a new concept of SPA modeling was introduced. Based on this connection, a method for stiffness sensing through SPG pincer grasping was developed and demonstrated on four samples, then validated through compression testing on the same samples. The results indicate that the proposed method yielded similar stiffness trends, with slight deviations from compression testing. A main limitation of this study is the occlusion effect, which leads to dramatic deviations when grasped objects deform greatly. This is the first study to enable stiffness sensing and SPG grasping to be carried out in the same attempt. It makes a major contribution to research on soft robotics by advancing the role of sensing in SPG grasping and object classification, offering an efficient method for acquiring another effective class of classification input. Ultimately, the proposed framework shows promise for future applications in inspecting and classifying visually indistinguishable objects. Full article
(This article belongs to the Section Physical Sensors)
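At its simplest, grasp-based stiffness sensing reduces to the slope of the force-deformation curve recorded during the pincer grasp; the linear fit below is a generic stand-in for the paper's SPA-aware model, not the authors' method.

```python
import numpy as np

def stiffness(force_n, deformation_mm):
    """Least-squares slope k (N/mm) of grasp force against deformation."""
    k, _offset = np.polyfit(deformation_mm, force_n, deg=1)
    return k

# Example: a sample deforming 2 mm under 4 N and 4 mm under 8 N gives k = 2 N/mm.
```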

24 pages, 49676 KiB  
Article
Soft-Sensor System for Grasp Type Recognition in Underactuated Hand Prostheses
by Laura De Arco, María José Pontes, Marcelo E. V. Segatto, Maxwell E. Monteiro, Carlos A. Cifuentes and Camilo A. R. Díaz
Sensors 2023, 23(7), 3364; https://doi.org/10.3390/s23073364 - 23 Mar 2023
Cited by 11 | Viewed by 3155
Abstract
This paper presents the development of an intelligent soft-sensor system to add haptic perception to the underactuated hand prosthesis PrHand. Two optical-fiber-based sensors were constructed, one for the finger joint angles and the other for the fingertips' contact force. For the angle sensor, three sensor fabrications were tested by axially rotating the sensors in four positions, and the configuration with the most similar response across the four rotations was chosen. The chosen sensors presented a polynomial response with R2 higher than 92%. The tactile force sensors tracked the force applied to the objects; almost all presented a polynomial response with R2 higher than 94%. The system monitored prosthesis activity by recognizing grasp types. Six machine learning algorithms were tested: linear regression, k-nearest neighbor, support vector machine, decision tree, k-means clustering, and hierarchical clustering. To validate the algorithms, a k-fold test was used with k = 10; the accuracy for k-nearest neighbor was 98.5% and that for decision tree was 93.3%, enabling the classification of the eight grip types. Full article
(This article belongs to the Section Sensors and Robotics)
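Two steps from this abstract lend themselves to a short sketch: the polynomial sensor calibration scored by R2, and the k-NN grip classification under 10-fold validation; the polynomial degree and k are assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def calibrate(raw, angle_deg, degree=3):
    """Fit angle = poly(raw); return the polynomial coefficients and R^2."""
    coeffs = np.polyfit(raw, angle_deg, degree)
    pred = np.polyval(coeffs, raw)
    ss_res = np.sum((angle_deg - pred) ** 2)
    ss_tot = np.sum((angle_deg - angle_deg.mean()) ** 2)
    return coeffs, 1 - ss_res / ss_tot

def grip_accuracy(features, labels, k=5):
    """features: per-grasp sensor vectors; labels: one of eight grip types."""
    clf = KNeighborsClassifier(n_neighbors=k)
    return cross_val_score(clf, features, labels, cv=10).mean()
```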

16 pages, 2669 KiB  
Article
Classification of Activities of Daily Living Based on Grasp Dynamics Obtained from a Leap Motion Controller
by Hajar Sharif, Ahmadreza Eslaminia, Pramod Chembrammel and Thenkurussi Kesavadas
Sensors 2022, 22(21), 8273; https://doi.org/10.3390/s22218273 - 28 Oct 2022
Cited by 7 | Viewed by 3135
Abstract
Stroke is one of the leading causes of mortality and disability worldwide. Several evaluation methods have been used to assess the effects of stroke on the performance of activities of daily living (ADL), but these methods are qualitative. A first step toward developing a quantitative evaluation method is to classify different ADL tasks based on the hand grasp. In this paper, a dataset is presented that includes data collected by a leap motion controller on the hand grasps of healthy adults performing eight common ADL tasks. Then, a set of time- and frequency-domain features is combined with two well-known classifiers, i.e., the support vector machine and the convolutional neural network, to classify the tasks, achieving a classification accuracy of over 99%. Full article
(This article belongs to the Section Physical Sensors)
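A plausible version of the feature-plus-classifier pipeline, combining time-domain statistics with low-frequency spectral energy and an RBF SVM; the exact feature list and kernel are assumptions, not the paper's specification.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def features(traj):
    """traj: (n_samples, n_joints) joint-angle time series for one task."""
    spectrum = np.abs(np.fft.rfft(traj, axis=0))
    return np.concatenate([
        traj.mean(axis=0), traj.std(axis=0),   # time-domain statistics
        np.ptp(traj, axis=0),                  # range of motion
        spectrum[1:6].ravel(),                 # low-frequency energy
    ])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
# clf.fit(np.stack([features(t) for t in train_trajs]), train_labels)
```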

16 pages, 3886 KiB  
Article
Research on Upper Limb Action Intention Recognition Method Based on Fusion of Posture Information and Visual Information
by Jian-Wei Cui, Han Du, Bing-Yan Yan and Xuan-Jie Wang
Electronics 2022, 11(19), 3078; https://doi.org/10.3390/electronics11193078 - 27 Sep 2022
Cited by 4 | Viewed by 2254
Abstract
A prosthetic hand is one of the main ways to help patients with upper limb disabilities regain their daily living abilities. Prosthetic hand manipulation must be coordinated with the user's action intention; therefore, the key to controlling the prosthetic hand is to recognize the action intention of the upper limb. At present, identifying action intention from EMG and EEG signals still suffers from difficult information decoding and low recognition rates. Inertial sensors, by contrast, are low-cost and accurate, and the posture information they provide characterizes the motion state of the upper limb; visual information is information-rich and can detect the type of target object, so the two can be fused complementarily to better capture the user's motion requirements. This paper therefore proposes an upper limb action intention recognition method based on the fusion of posture information and visual information. An inertial sensor collects attitude angle data during upper limb movement and, exploiting the structural similarity between the human upper limb and a linkage mechanism, a model of the upper limb is established using the forward kinematics of a robotic arm to solve for the end position of the limb. The end positions were classified into three categories: in front of the torso, near the upper body, and the initial position, and a multilayer perceptron was trained to learn this classification. In addition, a miniature camera installed on the hand captures visual images during upper limb movement. Target objects are detected using the YOLOv5 deep learning method and classified into two categories: wearable items and non-wearable items. Finally, the upper limb intention is jointly decided by the upper limb motion state, the target object type, and the upper limb end position to control the prosthetic hand. We applied this intention recognition method to an experimental mechanical prosthetic hand system and invited several volunteers to test it. The experimental results showed that the intention recognition success rate reached 92.4%, verifying the feasibility and practicality of the proposed method. Full article
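The final joint decision the abstract describes — motion state, end-position class, and object category combined into one intention — can be sketched as a rule table; the specific rules below are illustrative assumptions, not the authors' exact logic.

```python
def decide_intention(motion_state: str, end_position: str, object_type: str):
    """end_position: 'torso_front' | 'upper_body' | 'initial';
    object_type: 'wearable' | 'non_wearable'."""
    if motion_state != "moving":
        return "idle"
    if object_type == "wearable" and end_position == "upper_body":
        return "don_item"          # e.g., bring clothing toward the body
    if object_type == "non_wearable" and end_position == "torso_front":
        return "grasp_object"      # reach-and-grasp in front of the torso
    return "observe"               # ambiguous: keep the prosthesis passive
```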

16 pages, 3913 KiB  
Article
Deep-Learning-Based Accurate Identification of Warehouse Goods for Robot Picking Operations
by Huwei Liu, Li Zhou, Junhui Zhao, Fan Wang, Jianglong Yang, Kaibo Liang and Zhaochan Li
Sustainability 2022, 14(13), 7781; https://doi.org/10.3390/su14137781 - 26 Jun 2022
Cited by 15 | Viewed by 4984
Abstract
In order to explore the application of robots in intelligent supply chains and digital logistics, and to achieve efficient operation, energy conservation, and emission reduction in warehousing and sorting, we conducted research on unmanned sorting and automated warehousing. Under the guidance of sustainable development theory, the social goals of ESG (Environmental, Social, Governance) are pursued in the warehousing field through digital technology. In the warehouse picking process, efficient and accurate cargo identification is the premise for ensuring the accuracy and timeliness of intelligent robot operation. According to the driving and grasping methods of different robot arms, an image recognition model for arbitrarily shaped objects is established using a convolutional neural network (CNN), on the basis of simulating how a human hand grasps objects. The model updates the loss function value and global step size by exponential decay and moving average, realizes the identification and classification of goods, and uses visualization tools to monitor the program's behavior in real time. In addition, according to the characteristics of the items in the dataset, such as shape, size, surface material, brittleness, and weight, different intelligent grasping solutions are selected for different types of goods to realize the automatic picking of goods of any shape in the picking list. The application of intelligent item grasping in the warehousing field lays a foundation for the construction of an intelligent supply-chain system and provides a new research perspective for collaborative robots (cobots) in logistics warehousing. Full article
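The training details the abstract names — an exponentially decayed learning rate and a moving average over weights — map onto standard TensorFlow utilities as sketched below; the decay constants and optimizer choice are assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Exponentially decayed learning rate, as the abstract describes.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.96)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule)

# Moving average over trainable weights; applied after each gradient step
# via ema.apply(model.trainable_variables) and read back at evaluation time.
ema = tf.train.ExponentialMovingAverage(decay=0.999)
```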

22 pages, 4467 KiB  
Article
Recognition of Upper Limb Action Intention Based on IMU
by Jian-Wei Cui, Zhi-Gang Li, Han Du, Bing-Yan Yan and Pu-Dong Lu
Sensors 2022, 22(5), 1954; https://doi.org/10.3390/s22051954 - 2 Mar 2022
Cited by 22 | Viewed by 4761
Abstract
Using motion information of the upper limb to control a prosthetic hand has become a hotspot of current research. The operation of the prosthetic hand must also be coordinated with the user's intention; therefore, identifying the action intention of the upper limb from its motion information is key to controlling the prosthetic hand. Since wearable inertial sensors offer the advantages of small size, low cost, and little interference from the external environment, we employ an inertial sensor to collect angle and angular velocity data during upper limb movement. Targeting the classification of three actions (putting on socks, putting on shoes, and tying shoelaces), this paper proposes a recognition model based on the Dynamic Time Warping (DTW) algorithm over motion units. Based on whether the upper limb is moving, the complete motion data are divided into several motion units. Considering the delay associated with controlling the prosthetic hand, this paper performs feature extraction only on the first and second motion units and recognizes the action with different classifiers. The experimental results reveal that the motion-unit-based DTW algorithm achieves a higher recognition rate and a lower running time: the recognition rate reaches 99.46%, and the average running time is 8.027 ms. To enable the prosthetic hand to understand the grasping intention of the upper limb, this paper also proposes a Generalized Regression Neural Network (GRNN) model based on 10-fold cross-validation. The motion state of the upper limb is subdivided, and the static state is used as the signal for controlling the prosthetic hand. A 10-fold cross-validation method is applied to train the neural network model and find the optimal smoothing parameter, and the recognition performance of different neural networks is compared. The experimental results show that the GRNN model based on 10-fold cross-validation achieves a high accuracy rate of 98.28%. Finally, the two algorithms proposed in this paper are implemented in an experiment in which the prosthetic hand reproduces an action, verifying the feasibility and practicality of the algorithms. Full article
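The distance at the heart of the motion-unit recognizer is classic dynamic time warping; a compact NumPy version follows, with a nearest-template rule assumed for the final classification.

```python
import numpy as np

def dtw(a, b):
    """a: (n, d), b: (m, d) angle/angular-velocity sequences; returns cost."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # frame distance
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A nearest-neighbor rule over DTW distances to labeled template units then
# yields the action class (socks / shoes / shoelaces).
```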

10 pages, 1782 KiB  
Article
Phase-Based Grasp Classification for Prosthetic Hand Control Using sEMG
by Shuo Wang, Jingjing Zheng, Bin Zheng and Xianta Jiang
Biosensors 2022, 12(2), 57; https://doi.org/10.3390/bios12020057 - 21 Jan 2022
Cited by 11 | Viewed by 3711
Abstract
Pattern recognition using surface electromyography (sEMG) applied to prosthesis control has attracted much attention in recent years. In most existing methods, the sEMG signal during the firmly grasped period is used for grasp classification, because its relatively stable signal yields good performance. However, using only the firmly grasped period may delay control of the prosthetic hand gestures. Regarding this issue, we explored how grasp classification accuracy changes during the reaching and grasping process, and identified the period that balances grasp classification accuracy against earlier grasp detection. We found that grasp classification accuracy increased as the hand gradually closed on the object until it was firmly grasped, and that there is a sweet period before the firmly grasped period that is suitable for early grasp classification with reduced delay. On top of this, we also explored corresponding training strategies for better grasp classification in real-time applications. Full article
(This article belongs to the Special Issue Intelligent Biosignal Processing in Wearable and Implantable Sensors)
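Locating the "sweet period" amounts to scoring classification accuracy window by window across the reach-to-grasp process; the sketch below assumes RMS and waveform-length features with an LDA classifier, which are common sEMG choices rather than the paper's exact setup.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def accuracy_curve(emg, labels, fs=1000, win_ms=200, step_ms=50):
    """emg: (n_trials, n_channels, n_samples) aligned at movement onset."""
    win, step = int(fs * win_ms / 1e3), int(fs * step_ms / 1e3)
    scores = []
    for start in range(0, emg.shape[2] - win + 1, step):
        seg = emg[:, :, start:start + win]
        feats = np.concatenate([np.sqrt((seg ** 2).mean(-1)),   # RMS
                                np.abs(np.diff(seg)).mean(-1)], # waveform length
                               axis=1)
        scores.append(cross_val_score(LinearDiscriminantAnalysis(),
                                      feats, labels, cv=5).mean())
    return np.array(scores)   # expected to peak before the firmly grasped phase
```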
