Sensor Fusion in Assistive and Rehabilitation Robotics

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 March 2020) | Viewed by 39,086

Special Issue Editors


Prof. Domen Novak
Guest Editor
Assistant Professor, Department of Electrical & Computer Engineering, College of Engineering and Applied Science, University of Wyoming, Laramie, WY 82071, USA
Interests: rehabilitation engineering; virtual reality; human–machine interaction; sensor fusion

Prof. Robert Riener
Guest Editor
Sensory-Motor Systems Lab, Department of Health Sciences and Technology, ETH Zurich, 8092 Zurich, Switzerland
Spinal Cord Injury Center, University Hospital Balgrist, University of Zurich, 8008 Zurich, Switzerland
Interests: human motion synthesis; biomechanics; virtual reality; human–machine interaction; rehabilitation robotics

Special Issue Information

Dear Colleagues,

As the world’s population gradually grows older, more and more adults are experiencing sensory–motor disabilities due to stroke, traumatic brain injury, spinal cord injury and other diseases. People with such disabilities can greatly benefit from the help of robotic technologies: assistive robots can help people perform everyday activities (e.g., eating) despite a loss of motor function, while rehabilitation robots can help people perform intensive exercise to regain lost motor functions. However, while the effectiveness of assistive and rehabilitation robots has been demonstrated in several clinical trials (which have found, e.g., that rehabilitation robots are approximately as effective as human therapists), the technology has not yet been broadly adopted by hospitals and end-users.

This limited adoption of assistive and rehabilitation robots is not due to inappropriate mechanical design—state-of-the-art robots have many degrees of freedom and are mechanically robust to suboptimal operating conditions, allowing them to theoretically deliver support in a variety of real-world environments. Instead, the main limitation is currently the software: it is unclear how assistive and rehabilitation robots can appropriately react to users’ needs, intentions and mental states in order to provide the optimal amount of support. Thus, the limitations of state-of-the-art robots need to be addressed using novel sensor fusion algorithms that can tell the robot how to react in response to potentially unreliable multimodal information from the user and the environment.
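
To make this concrete, the sketch below shows one classic sensor fusion building block, inverse-variance weighting, in which less reliable sources contribute less to the fused estimate. It is a minimal illustration only; the signal names and noise values are hypothetical.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Inverse-variance fusion of scalar estimates.

    Sources with higher variance (less reliable) contribute less to
    the fused value; a standard building block of sensor fusion.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused, fused_variance

# Hypothetical example: EMG-based and force-based estimates of the
# user's intended elbow torque (Nm), with different reliabilities.
torque, var = fuse_estimates([4.2, 3.5], [0.8, 0.2])
print(f"fused intent: {torque:.2f} Nm (variance {var:.2f})")
```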

The aim of this Special Issue is to showcase advanced sensor fusion algorithms for assistive and rehabilitation robotics, ideally with end-user evaluations that demonstrate how these algorithms perform in realistic operating conditions. These advances in sensor fusion will lead to a new generation of intelligent, collaborative robots that enhance users’ quality of life by either replacing lost sensory–motor functions or helping regain these lost functions. Topics of interest include but are not limited to:

  • Affective computing (stress and workload recognition) for assistive and rehabilitation robots
  • Brain–computer interfaces for assistive and rehabilitation robots
  • Intention detection and gesture/gait recognition for intelligent orthoses and prostheses
  • Localization and mapping for robotic wheelchairs and other mobile assistive robots
  • Shared control algorithms for assistive and rehabilitation robots
  • Error augmentation and resistance training in rehabilitation robotics
  • Telerehabilitation and robot-aided telemonitoring
  • Human factor analyses of existing intelligent robots

Prof. Domen Novak
Prof. Robert Riener
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • assistive robotics
  • rehabilitation robotics
  • sensor fusion
  • robot control
  • shared control
  • intention detection
  • affective computing
  • simultaneous localization and mapping
  • human factors
  • brain–computer interfaces

Published Papers (9 papers)


Editorial

Jump to: Research, Review

3 pages, 161 KiB  
Editorial
Sensor Fusion in Assistive and Rehabilitation Robotics
by Domen Novak and Robert Riener
Sensors 2020, 20(18), 5235; https://doi.org/10.3390/s20185235 - 14 Sep 2020
Cited by 1 | Viewed by 1876
Abstract
As the world’s population gradually grows older, more and more adults are experiencing sensory–motor disabilities due to stroke, traumatic brain injury, spinal cord injury and other diseases [...] Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)

Research

Jump to: Editorial, Review

19 pages, 537 KiB  
Article
Characterizing Human Box-Lifting Behavior Using Wearable Inertial Motion Sensors
by Steven D. Hlucny and Domen Novak
Sensors 2020, 20(8), 2323; https://doi.org/10.3390/s20082323 - 18 Apr 2020
Cited by 16 | Viewed by 3462
Abstract
Although several studies have used wearable sensors to analyze human lifting, this has generally only been done in a limited manner. In this proof-of-concept study, we investigate multiple aspects of offline lift characterization using wearable inertial measurement sensors: detecting the start and end of the lift and classifying the vertical movement of the object, the posture used, the weight of the object, and the asymmetry involved. In addition, the lift duration, the horizontal distance from the lifter to the object, the vertical displacement of the object, and the asymmetry angle are computed as lift parameters. Twenty-four healthy participants performed two repetitions of 30 different main lifts each while wearing a commercial inertial measurement system. The data from these trials were used to develop, train, and evaluate the lift characterization algorithms presented. The lift detection algorithm had a start time error of 0.10 s ± 0.21 s and an end time error of 0.36 s ± 0.27 s across all 1489 lift trials, with no missed lifts. For posture, asymmetry, vertical movement, and weight, our classifiers achieved accuracies of 96.8%, 98.3%, 97.3%, and 64.2%, respectively, for automatically detected lifts. The vertical height and displacement estimates were, on average, within 25 cm of the reference values. The horizontal distances measured for some lifts were quite different from those expected (by up to 14.5 cm), but were very consistent. Estimated asymmetry angles were similarly precise. In the future, these proof-of-concept offline algorithms can be expanded and improved to work in real time. This would enable their use in applications such as real-time health monitoring and feedback for assistive devices. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
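
As a rough illustration of what such offline lift detection can look like, the sketch below flags a lift while the smoothed angular-velocity magnitude from a body-worn IMU exceeds a threshold. The window length, threshold, minimum duration, and sensor placement are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def detect_lift(gyro, fs, threshold=0.3, min_duration=0.5):
    """Return (start, end) sample indices of the first detected lift.

    gyro: (N, 3) angular velocity in rad/s from a body-worn IMU.
    A lift is flagged while the smoothed angular-velocity magnitude
    exceeds `threshold` for at least `min_duration` seconds.
    """
    mag = np.linalg.norm(gyro, axis=1)
    # Moving-average smoothing over an assumed 0.25 s window.
    win = max(1, int(0.25 * fs))
    mag = np.convolve(mag, np.ones(win) / win, mode="same")
    active = mag > threshold
    # Find contiguous active runs; keep the first sufficiently long one.
    edges = np.diff(active.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if active[0]:
        starts = np.insert(starts, 0, 0)
    if active[-1]:
        ends = np.append(ends, len(active))
    for s, e in zip(starts, ends):
        if (e - s) / fs >= min_duration:
            return s, e
    return None
```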

18 pages, 3048 KiB  
Article
Subject- and Environment-Based Sensor Variability for Wearable Lower-Limb Assistive Devices
by Nili E. Krausz, Blair H. Hu and Levi J. Hargrove
Sensors 2019, 19(22), 4887; https://doi.org/10.3390/s19224887 - 08 Nov 2019
Cited by 27 | Viewed by 3834
Abstract
Significant research effort has gone towards the development of powered lower-limb prostheses that control power during gait. These devices use forward prediction based on electromyography (EMG), kinetics and kinematics to determine which locomotion activity the user intends and command the prosthesis accordingly. Unfortunately, these predictions can have substantial errors, which can potentially lead to trips or falls. It is hypothesized that one reason for the significant prediction errors in current control systems for powered lower-limb prostheses is the inter- and intra-subject variability of the data sources used for prediction. Environmental data, recorded from a depth sensor worn on a belt, should have less variability across trials and subjects than kinetics, kinematics and EMG data, and thus its addition is proposed. The variability of each normalized data source was analyzed to determine the intra-activity and intra-subject variability of each sensor modality. Measures of separability, repeatability, clustering and overall desirability were then computed. Results showed that combining vision, EMG, IMU (inertial measurement unit) and goniometer features yielded the best separability, repeatability, clustering and desirability across subjects and activities. This will likely be useful for future work incorporating vision-based environmental data into forward predictors for powered lower-limb prostheses and exoskeletons. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
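
For readers unfamiliar with separability measures, the sketch below computes a Fisher-style ratio of between-class to within-class scatter over normalized features. This is one common proxy, not necessarily the exact measure used in the paper; the example data are synthetic.

```python
import numpy as np

def separability(features, labels):
    """Fisher-style separability: between-class over within-class scatter.

    features: (N, D) normalized feature matrix; labels: (N,) activity IDs.
    Higher values suggest the features separate activities better.
    """
    classes = np.unique(labels)
    overall_mean = features.mean(axis=0)
    between, within = 0.0, 0.0
    for c in classes:
        x = features[labels == c]
        between += len(x) * np.sum((x.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((x - x.mean(axis=0)) ** 2)
    return between / within

# Synthetic two-activity example with four features per sample.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 4)), rng.normal(3, 1, (50, 4))])
y = np.repeat([0, 1], 50)
print(f"separability: {separability(x, y):.2f}")
```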

17 pages, 7113 KiB  
Article
Bilateral Tactile Feedback-Enabled Training for Stroke Survivors Using Microsoft Kinect™
by Abbas Orand, Eren Erdal Aksoy, Hiroyuki Miyasaka, Carolyn Weeks Levy, Xin Zhang and Carlo Menon
Sensors 2019, 19(16), 3474; https://doi.org/10.3390/s19163474 - 08 Aug 2019
Cited by 13 | Viewed by 4874
Abstract
Rehabilitation and mobility training of post-stroke patients is crucial for their functional recovery. While traditional methods can still help patients, new rehabilitation and mobility training methods are necessary to facilitate better recovery at lower costs. In this work, our objective was to design and develop a rehabilitation training system that efficiently targets the functional recovery of post-stroke users. To accomplish this goal, we applied a bilateral training method, which has proven effective in enhancing motor recovery, and combined it with tactile feedback during training. One participant with hemiparesis underwent six weeks of training. Two protocols, “contralateral arm matching” and “both arms moving together”, were carried out by the participant. Each of the protocols consisted of “shoulder abduction” and “shoulder flexion” at angles close to 30 and 60 degrees. The participant carried out 15 repetitions at each angle for each task. For example, in the “contralateral arm matching” protocol, the unaffected arm of the participant was set to an angle close to 30 degrees. He was then requested to keep the unaffected arm at the specified angle while trying to match the position with the affected arm. Whenever the two arms matched, a vibration was given on both brachialis muscles. For the “both arms moving together” protocol, the two arms were first set to an angle of approximately 30 or 60 degrees. The participant was asked to return both arms to a relaxed position before moving both arms back to the remembered specified angle. The arm that was slower in moving to the specified angle received a vibration. We performed clinical assessments before, midway through, and after the training period using a Fugl-Meyer assessment (FMA), a Wolf motor function test (WMFT), and a proprioceptive assessment. For the assessments, two arm matching tasks, ipsilateral and contralateral, each consisting of three movements (shoulder abduction, shoulder flexion, and elbow flexion), were used. Movements were performed at two angles, 30 and 60 degrees. For both tasks, the same procedure was used. For example, in the case of the ipsilateral arm matching task, an experimenter positioned the affected arm of the participant at 30 degrees of shoulder abduction. The participant was requested to keep the arm in that position for ~5 s before returning to a relaxed initial position. Then, after another ~5-s delay, the participant moved the affected arm back to the remembered position. An experimenter measured this shoulder abduction angle manually using a goniometer. The same procedure was repeated for the 60-degree angle and for the other two movements. We used a low-cost Kinect to extract the participant's body joint position data. Tactile feedback was given based on the arm position detected by the Kinect sensor. By using a Kinect sensor, we demonstrated the feasibility of the system for the training of a post-stroke user. The proposed system can further be employed for self-training of patients at home. The results of the FMA, WMFT, and goniometer angle measurements showed improvements in several tasks, suggesting a positive effect of the training system and its feasibility for further application in stroke survivors' rehabilitation. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
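
As a minimal sketch of the kind of angle computation and feedback trigger such a system needs, the code below estimates a shoulder abduction angle from tracked 3-D joint positions and fires when the two arms match. The coordinate convention and the 5-degree tolerance are assumptions for illustration, not the authors' parameters.

```python
import numpy as np

def shoulder_abduction_angle(shoulder, elbow):
    """Shoulder abduction angle (degrees) from 3-D joint positions.

    Angle between the upper-arm vector (shoulder -> elbow) and the
    downward body axis, using joint positions as returned by a
    skeleton tracker such as the Kinect (y-up convention assumed).
    """
    arm = np.asarray(elbow) - np.asarray(shoulder)
    down = np.array([0.0, -1.0, 0.0])  # assumed: y axis points up
    cos_a = np.dot(arm, down) / np.linalg.norm(arm)
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def arms_match(angle_left, angle_right, tolerance_deg=5.0):
    """True when both arm angles agree, i.e., when the system would
    trigger the vibration; the tolerance is an illustrative value."""
    return abs(angle_left - angle_right) <= tolerance_deg
```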

15 pages, 309 KiB  
Article
Development of an EMG-Based Muscle Health Model for Elbow Trauma Patients
by Emma Farago, Shrikant Chinchalkar, Daniel J. Lizotte and Ana Luisa Trejos
Sensors 2019, 19(15), 3309; https://doi.org/10.3390/s19153309 - 27 Jul 2019
Cited by 9 | Viewed by 3829
Abstract
Wearable robotic braces have the potential to improve rehabilitative therapies for patients suffering from musculoskeletal (MSK) conditions. Ideally, a quantitative assessment of health would be incorporated into rehabilitative devices to monitor patient recovery. The purpose of this work is to develop a model to distinguish between the healthy and injured arms of elbow trauma patients based on electromyography (EMG) data. Surface EMG recordings were collected from the healthy and injured limbs of 30 elbow trauma patients while performing 10 upper-limb motions. Forty-two features and five feature sets were extracted from the data. Feature selection was performed to improve the class separation and to reduce the computational complexity of the feature sets. The following classifiers were tested: linear discriminant analysis (LDA), support vector machine (SVM), and random forest (RF). The classifiers were used to distinguish between two levels of health: healthy and injured (50% baseline accuracy rate). Maximum fractal length (MFL), myopulse percentage rate (MYOP), power spectrum ratio (PSR) and spike shape analysis features were identified as the best features for classifying elbow muscle health. A majority vote of the LDA classification models provided a cross-validation accuracy of 82.1%. The work described in this paper indicates that it is possible to discern between healthy and injured limbs of patients with MSK elbow injuries. Further assessment and optimization could improve the consistency and accuracy of the classification models. This work is the first of its kind to identify EMG metrics for muscle health assessment by wearable rehabilitative devices. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
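
Two of the best-performing features, MFL and MYOP, are simple to compute from a raw EMG window. The sketch below follows their standard definitions; the MYOP threshold is an illustrative assumption and is typically tuned per dataset.

```python
import numpy as np

def maximum_fractal_length(x):
    """MFL: log10 of the curve length of the EMG waveform.

    Standard definition: log10(sqrt(sum of squared successive
    differences)); robust at low contraction levels.
    """
    return np.log10(np.sqrt(np.sum(np.diff(x) ** 2)))

def myopulse_percentage_rate(x, threshold=0.016):
    """MYOP: fraction of samples whose absolute amplitude exceeds a
    threshold (in the signal's amplitude units; value assumed here).
    """
    return np.mean(np.abs(x) > threshold)
```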

24 pages, 657 KiB  
Article
Gait Phase Detection for Lower-Limb Exoskeletons using Foot Motion Data from a Single Inertial Measurement Unit in Hemiparetic Individuals
by Miguel D. Sánchez Manchola, María J. Pinto Bernal, Marcela Munera and Carlos A. Cifuentes
Sensors 2019, 19(13), 2988; https://doi.org/10.3390/s19132988 - 06 Jul 2019
Cited by 52 | Viewed by 7093
Abstract
Due to the recent rise in the use of lower-limb exoskeletons as an alternative for gait rehabilitation, gait phase detection has become an increasingly important feature in the control of these devices. In addition, highly functional, low-cost recovery devices are needed in developing countries, since limited budgets are allocated specifically for biomedical advances. To this end, this paper presents two gait phase partitioning algorithms that use motion data from a single inertial measurement unit (IMU) placed on the foot instep. Sagittal angular velocity and linear acceleration signals were extracted from nine healthy subjects and nine pathological subjects. Pressure patterns from force-sensitive resistors (FSRs) instrumented on a custom insole were used as reference values. The performance of a threshold-based (TB) algorithm and a hidden Markov model (HMM) based algorithm, each trained using both subject-specific and standardized parameters, was compared during treadmill walking tasks in terms of timing errors and the goodness index. The findings indicate that the HMM outperforms the TB algorithm for this hardware configuration. In addition, the HMM-based classifier trained with a subject-specific approach showed excellent reliability for the evaluation of mean time, i.e., its intra-class correlation coefficient (ICC) was greater than 0.75. In conclusion, the HMM-based method proposed here can be implemented for gait phase recognition, for example, to evaluate gait variability in patients and to control robotic orthoses for lower-limb rehabilitation. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
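
As a hedged sketch of HMM-based gait phase decoding, the code below fits a Gaussian HMM with one hidden state per gait phase to foot-IMU features using the hmmlearn library. This is an unsupervised illustration, not the authors' pipeline, which trains against FSR reference labels and compares subject-specific with standardized parameters.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # one possible HMM library

def fit_gait_hmm(X, n_phases=4):
    """Fit a Gaussian HMM with one state per gait phase (e.g., heel
    strike, flat foot, heel off, swing) and decode the most likely
    phase sequence.

    X: (n_samples, 2) array of sagittal angular velocity and linear
    acceleration from the foot-mounted IMU (column layout assumed).
    """
    model = GaussianHMM(n_components=n_phases, covariance_type="diag",
                        n_iter=50, random_state=0)
    model.fit(X)
    return model, model.predict(X)  # decoded phase per sample
```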

11 pages, 3467 KiB  
Article
Robot-Assisted Eccentric Contraction Training of the Tibialis Anterior Muscle Based on Position and Force Sensing
by Keisuke Kubota, Masashi Sekiya and Toshiaki Tsuji
Sensors 2019, 19(6), 1288; https://doi.org/10.3390/s19061288 - 14 Mar 2019
Cited by 4 | Viewed by 2974
Abstract
The purpose of this study was to determine the clinical effects of a training robot that induces eccentric tibialis anterior muscle contraction by controlling strength and speed. Speed and strength are controlled simultaneously through robot training with two different feedbacks: velocity feedback in the robot controller and force biofeedback based on force visualization. By performing quantitative eccentric contraction training, the fall risk is expected to decrease owing to improved muscle function. Eleven elderly participants were evaluated through a cross-over comparison test over a months-long training period. The results of timed up and go (TUG) tests and 5-m walking tests were compared, with the intergroup comparison performed using the Kruskal–Wallis test. The results of the cross-over test indicated no significant difference between the 5-m walking times measured after the training and control phases. However, there was a trend toward improvement, and a significant difference was observed between the training and control phases in all subjects. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
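
The intergroup comparison can in principle be reproduced with SciPy's Kruskal–Wallis implementation; the walking times below are made-up placeholders, not the study's data.

```python
from scipy.stats import kruskal

# Hypothetical 5-m walking times (s) measured after the training and
# control phases; the actual measurements are reported in the paper.
training = [4.8, 5.1, 4.6, 5.0, 4.9, 4.7]
control = [5.2, 5.4, 5.0, 5.3, 5.1, 5.5]

stat, p = kruskal(training, control)
print(f"H = {stat:.2f}, p = {p:.3f}")  # p < 0.05 -> significant difference
```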

17 pages, 6099 KiB  
Article
Design and Feasibility Study of a Leg-exoskeleton Assistive Wheelchair Robot with Tests on Gluteus Medius Muscles
by Gao Huang, Marco Ceccarelli, Qiang Huang, Weimin Zhang, Zhangguo Yu, Xuechao Chen and Jingeng Mai
Sensors 2019, 19(3), 548; https://doi.org/10.3390/s19030548 - 28 Jan 2019
Cited by 12 | Viewed by 5741
Abstract
The muscles of the lower limbs directly influence leg motion; therefore, lower-limb muscle exercise is important for persons living with lower-limb disabilities. This paper presents a medical assistive robot with leg exoskeletons for locomotion and leg muscle exercises. It also presents a novel pedal-cycling actuation method with a crank-rocker mechanism. The mechanism is driven by a single motor with a mechanical structure that ensures user safety. A control system is designed based on master–slave control with a sensor fusion method. Here, the intended motion of the user is detected by pedal-based force sensors and is then used in combination with joystick movements as control signals for leg-exoskeleton and wheelchair motions. Experimental data are presented and then analyzed to determine the robot's motion characteristics as well as the assistance efficiency, measured with attached electromyogram (EMG) sensors. An analysis of a typical muscle EMG signal shows that, when cycling in the wheelchair at different speeds (i.e., from 16 to 80 r/min), the EMG-activated amplitudes of the gluteus medius muscles approximate those of walking at a speed of 3 m/s. As such, the present wheelchair robot is a good candidate for enabling effective gluteus medius muscle exercises for persons living with gluteus medius muscle disabilities. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
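
As a minimal sketch of the described master-slave idea, the code below gates exoskeleton motion on pedal force while passing joystick input through to the wheelchair; the threshold and gain are illustrative assumptions, not the paper's controller.

```python
def fuse_commands(pedal_force, joystick, force_threshold=10.0):
    """Blend pedal-force intent with joystick commands.

    pedal_force: net force in N from the pedal-based force sensors;
    joystick: (forward, turn) each in [-1, 1] from the wheelchair
    joystick. The user's pedalling intent gates leg-exoskeleton
    motion, while the joystick commands wheelchair motion, mirroring
    the master-slave structure described above.
    """
    pedal_active = abs(pedal_force) > force_threshold
    # Assumed gain of 0.5 deg/s per N for the crank speed command.
    exo_speed = 0.5 * pedal_force if pedal_active else 0.0
    wheel_cmd = {"forward": joystick[0], "turn": joystick[1]}
    return exo_speed, wheel_cmd
```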

Review

Jump to: Editorial, Research

27 pages, 1097 KiB  
Review
A Survey of Teleceptive Sensing for Wearable Assistive Robotic Devices
by Nili E. Krausz and Levi J. Hargrove
Sensors 2019, 19(23), 5238; https://doi.org/10.3390/s19235238 - 28 Nov 2019
Cited by 22 | Viewed by 4678
Abstract
Teleception is defined as sensing that occurs remotely, with no physical contact with the object being sensed. To emulate innate control systems of the human body, a control system for a semi- or fully autonomous assistive device not only requires feedforward models of desired movement, but also the environmental or contextual awareness that could be provided by teleception. Several recent publications present teleception modalities integrated into control systems and provide preliminary results, for example, for performing hand grasp prediction or endpoint control of an arm assistive device; and gait segmentation, forward prediction of desired locomotion mode, and activity-specific control of a prosthetic leg or exoskeleton. Collectively, several different approaches to incorporating teleception have been used, including sensor fusion, geometric segmentation, and machine learning. In this paper, we summarize the recent and ongoing published work in this promising new area of research. Full article
(This article belongs to the Special Issue Sensor Fusion in Assistive and Rehabilitation Robotics)
