Open Access Article

A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses

1 Moonshine Inc., London W12 0LN, UK
2 Department of Mechanical Engineering, UK Dementia Research Institute Care-Research and Technology Centre (DRI-CRT), Imperial College London, London SW7 2AZ, UK
3 Department of Bioengineering, Imperial College London, London SW7 2AZ, UK
4 Department of Mechanical Engineering, National University of Singapore, Singapore 119077, Singapore
5 Department of Electrical and Computer Engineering, New York University, New York, NY 11201, USA
6 Department of Mechanical and Aerospace Engineering, New York University, New York, NY 11201, USA
7 NYU WIRELESS, New York University, New York, NY 11201, USA
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(21), 6097; https://doi.org/10.3390/s20216097
Received: 8 August 2020 / Revised: 8 October 2020 / Accepted: 23 October 2020 / Published: 27 October 2020
(This article belongs to the Special Issue Wearable Sensor for Activity Analysis and Context Recognition)
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees of freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, restricting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multimodal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied participants and 1 amputee grasping common household objects with a robotic hand. Real-time grasp classification using visual and motion features achieved accuracies of 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, a lid, and a box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance using a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems through its intuitive control design.
Keywords: shared autonomy; prosthetic technology; mechanomyography
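
The abstract does not reproduce the authors' feature-extraction or classification code, so the following is only a minimal sketch of the fusion-and-classification step it describes: windowed MMG and IMU signals are reduced to simple time-domain features, concatenated with the vision-derived object class, and passed to a generic classifier. Every function name, feature choice, window convention, and the use of an SVM below are illustrative assumptions, not the published implementation.

```python
# Illustrative sketch only: the paper does not publish its pipeline, so the
# feature set, fusion scheme, and classifier below are assumptions made for
# demonstration purposes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Object-specific grasps evaluated in the study.
GRASP_LABELS = {0: "bottle", 1: "lid", 2: "box"}

def extract_mmg_features(mmg_window: np.ndarray) -> np.ndarray:
    """Hypothetical time-domain features per MMG channel (MAV, RMS, variance)."""
    mav = np.mean(np.abs(mmg_window), axis=0)
    rms = np.sqrt(np.mean(mmg_window ** 2, axis=0))
    var = np.var(mmg_window, axis=0)
    return np.concatenate([mav, rms, var])

def extract_imu_features(imu_window: np.ndarray) -> np.ndarray:
    """Hypothetical reaching-trajectory features (mean, peak, range per axis)."""
    return np.concatenate([imu_window.mean(axis=0),
                           imu_window.max(axis=0),
                           np.ptp(imu_window, axis=0)])

def fuse_features(mmg_window, imu_window, object_class_onehot):
    """Concatenate muscle, motion, and vision (recognised object) cues."""
    return np.concatenate([extract_mmg_features(mmg_window),
                           extract_imu_features(imu_window),
                           object_class_onehot])

def train_intent_classifier(X: np.ndarray, y: np.ndarray):
    """Offline training on pre-recorded grasp attempts (fused features X, labels y)."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X, y)
    return clf

def predict_grasp(clf, mmg_window, imu_window, object_class_onehot) -> str:
    """Online step: classify one reach-and-grasp window and return the grasp to trigger."""
    x = fuse_features(mmg_window, imu_window, object_class_onehot).reshape(1, -1)
    return GRASP_LABELS[int(clf.predict(x)[0])]
```

In the actual system the vision channel performs onboard object recognition; here it is reduced to a one-hot object class purely to keep the sketch self-contained.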
MDPI and ACS Style

Gardner, M.; Mancero Castillo, C.S.; Wilson, S.; Farina, D.; Burdet, E.; Khoo, B.C.; Atashzar, S.F.; Vaidyanathan, R. A Multimodal Intention Detection Sensor Suite for Shared Autonomy of Upper-Limb Robotic Prostheses. Sensors 2020, 20, 6097.

