
Search Results (48)

Search Parameters:
Keywords = motion intention estimation

26 pages, 7159 KiB  
Article
Methodology for Human–Robot Collaborative Assembly Based on Human Skill Imitation and Learning
by Yixuan Zhou, Naisheng Tang, Ziyi Li and Hanlei Sun
Machines 2025, 13(5), 431; https://doi.org/10.3390/machines13050431 - 19 May 2025
Viewed by 705
Abstract
With the growing demand for personalized and flexible production, human–robot collaboration technology has received increasing attention. However, enabling robots to accurately perceive and align with human motion intentions remains a significant challenge. To address this, a novel human–robot collaborative control framework is proposed that uses electromyography (EMG) signals as an interaction interface and integrates human skill imitation with reinforcement learning. Specifically, to manage the dynamic variation in muscle coordination patterns induced by joint angle changes, a temporal graph neural network enhanced with an Angle-Guided Attention mechanism is developed. This method adaptively models the topological relationships among muscle groups, enabling high-precision three-dimensional dynamic arm force estimation. Furthermore, an expert reward function and a fuzzy experience replay mechanism are introduced into the reinforcement learning model to guide the skill learning process, thereby enhancing collaborative comfort and smoothness. The proposed approach is validated through a collaborative assembly task. Experimental results show that the proposed arm force estimation model reduces estimation errors by 10.38%, 8.33%, and 11.20% across the three spatial directions compared to a conventional deep long short-term memory (Deep-LSTM) network. Moreover, it significantly outperforms state-of-the-art methods, including traditional imitation learning and adaptive admittance control, in terms of collaborative comfort, smoothness, and assembly accuracy.

19 pages, 28576 KiB  
Article
Adaptive Admittance Control of Human–Spacesuit Interaction for Joint-Assisted Exoskeleton Robot in Active Spacesuit
by Xijun Liu, Hao Zhao, Heng Yang, Zhaoyang Li and Yuehong Dai
Electronics 2025, 14(10), 1969; https://doi.org/10.3390/electronics14101969 - 12 May 2025
Viewed by 327
Abstract
To accommodate the astronaut's motion intention, as well as uncertainties in robotic dynamics, a human–spacesuit interaction (HSI) model is presented for the development of a joint-assisted exoskeleton robot in an active spacesuit using adaptive admittance control. First, an adaptive RBF neural network controller was designed for different astronauts, or the same astronaut in different states, to approximate the variable HSI model as a whole. Second, based on robust fuzzy control, the position inner loop of the adaptive admittance controller was designed to enhance tracking of a given reference trajectory. When an interaction force arises between the active spacesuit and the wearer, the actual HSI force measured by the sensor is transformed into a correction of the desired trajectory, and the position inner loop tracks the corrected reference trajectory. Online stiffness estimation is employed to assess the variable impedance property of the joint-assisted exoskeleton robot. Oxygen consumption decreased by up to 15.88%, indicating that the proposed control method enables the wearer to effectively execute a simulated lunar sample collection mission with the joint-assisted exoskeleton robot.
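The admittance outer loop described above, in which the measured interaction force reshapes the desired trajectory that the position inner loop then tracks, can be sketched as a discrete mass-damper-spring filter. This is only an illustrative sketch; the gains `M`, `B`, `K` and the time step below are placeholders, not values from the paper:

```python
import numpy as np

def admittance_correction(force, x_ref, dt=0.01, M=1.0, B=8.0, K=40.0):
    """Outer admittance loop: filter the measured interaction force
    through M*ddx + B*dx' + K*dx = F and add the resulting offset dx
    to the desired trajectory, yielding the reference the position
    inner loop should track."""
    x_cmd = np.array(x_ref, dtype=float)
    dx, dv = 0.0, 0.0                            # offset and its velocity
    for k in range(1, len(force)):
        acc = (force[k] - B * dv - K * dx) / M   # semi-implicit Euler step
        dv += acc * dt
        dx += dv * dt
        x_cmd[k] = x_ref[k] + dx
    return x_cmd
```

With zero interaction force the corrected reference equals the desired one; a sustained force F settles the offset at F/K, which is the usual compliant behavior of an admittance filter.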

24 pages, 1540 KiB  
Review
Myoelectric Control in Rehabilitative and Assistive Soft Exoskeletons: A Comprehensive Review of Trends, Challenges, and Integration with Soft Robotic Devices
by Alejandro Toro-Ossaba, Juan C. Tejada and Daniel Sanin-Villa
Biomimetics 2025, 10(4), 214; https://doi.org/10.3390/biomimetics10040214 - 1 Apr 2025
Viewed by 1086
Abstract
Soft robotic exoskeletons have emerged as a transformative solution for rehabilitation and assistance, offering greater adaptability and comfort than rigid designs. Myoelectric control, based on electromyography (EMG) signals, plays a key role in enabling intuitive and adaptive interaction between the user and the exoskeleton. This review analyzes recent advancements in myoelectric control strategies, emphasizing their integration into soft robotic exoskeletons. Unlike previous studies, this work highlights the unique challenges posed by the deformability and compliance of soft structures, which require novel approaches to motion intention estimation and control. Key contributions include critical evaluations of machine learning-based motion prediction, model-free adaptive control methods, and real-time validation strategies to enhance rehabilitation outcomes. Additionally, we identify persistent challenges, such as EMG signal variability, computational complexity, and the real-time adaptability of control algorithms, that limit clinical implementation. Interpreting recent trends, this review highlights the need for improved EMG acquisition techniques, robust adaptive control frameworks, and enhanced real-time learning to optimize human–exoskeleton interaction. Beyond summarizing the state of the art, this work provides an in-depth discussion of how myoelectric control can advance rehabilitation by ensuring more responsive and personalized exoskeleton assistance. Future research should focus on refining control schemes tailored to soft robotic architectures, ensuring seamless integration into rehabilitation protocols. This review provides a foundation for developing intelligent soft exoskeletons that effectively support motor recovery and assistive applications.

18 pages, 4465 KiB  
Article
A Semi-Autonomous Telemanipulation Order-Picking Control Based on Estimating Operator Intent for Box-Stacking Storage Environments
by Donggyu Min, Hojin Yoon and Donghun Lee
Sensors 2025, 25(4), 1217; https://doi.org/10.3390/s25041217 - 17 Feb 2025
Viewed by 623
Abstract
Teleoperation-based order picking in logistics warehouse environments has been advancing steadily. However, the accuracy of such operations varies with the type of human–robot interface (HRI) employed. Immersive HRI, which uses a head-mounted display (HMD) and controllers, can significantly reduce task accuracy due to the limited field of view in virtual environments. To address this limitation, this study proposes a semi-autonomous telemanipulation order-picking control method based on operator intent estimation using intersection points between the end-effector and the target logistics plane in box-stacking storage environments. The proposed method consists of two stages. The first stage estimates operator intent: the target logistics plane is approximated from objects identified through camera vision, and intersection points are computed by intersecting the end-effector heading vector with the plane. These points are accumulated and modeled as a Gaussian distribution, with the probability density function (PDF) of each target object treated as its likelihood. Bayesian probability filtering is then applied to estimate target probabilities, and predefined conditions are used to switch control between autonomous and manual controllers. Results show that the proposed operator intent estimation method identified the correct target for 74.6% of the task duration. The proposed semi-autonomous control method successfully transferred control to the autonomous controller within 32.2% of the total task duration using a combination of three parameters. This approach infers operator intent solely from manipulator motion and reduces operator fatigue. It shows potential for broad application in teleoperation systems, offering high operational efficiency regardless of operator expertise or training level.
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
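The intent estimation stage described above lends itself to a compact sketch: each candidate box contributes a Gaussian likelihood for the current intersection point, and a Bayesian update refines the belief over targets. The box centers, variance, and switching threshold below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def update_target_beliefs(prior, hit_point, target_centers, sigma=0.05):
    """One Bayesian update: the Gaussian density of the current
    end-effector/plane intersection point around each candidate box
    serves as the likelihood that the box is the intended target."""
    d2 = np.sum((target_centers - hit_point) ** 2, axis=1)
    likelihood = np.exp(-d2 / (2 * sigma ** 2))
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Two candidate boxes; the heading keeps intersecting near box 0.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
belief = np.array([0.5, 0.5])
for _ in range(5):
    belief = update_target_beliefs(belief, np.array([0.05, 0.02]), centers)
```

Once `max(belief)` crosses a predefined threshold, control would switch from the manual to the autonomous controller, which mirrors the condition-based switching the abstract describes.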

15 pages, 4299 KiB  
Article
Research on Welfare Robots: A Multifunctional Assistive Robot and Human–Machine System
by Shuoyu Wang
Appl. Sci. 2025, 15(3), 1621; https://doi.org/10.3390/app15031621 - 5 Feb 2025
Cited by 1 | Viewed by 1128
Abstract
Welfare refers to the state of happiness and well-being experienced by a person, and welfare robots can directly contribute to that happiness and well-being. Specific welfare robots include health promotion robots, rehabilitation robots, assistive robots, and nursing care robots. Welfare robots operate in human living spaces and act on humans through force and information. Because industrial robots that handle objects prioritize high speed and efficiency, applying their control methods directly to welfare robots would yield unsatisfactory and extremely dangerous results. This paper proposes a method for constructing a human–machine system for welfare robots that includes estimation of the user's work intention, measurement of riding comfort, and motion generation. Furthermore, although various types of welfare equipment for people with walking disabilities have been developed, most have a single function; equipping small homes with many single-function devices is difficult, and their use is complicated and not standardized. Therefore, in this study, we developed a multifunctional assistive robot that integrates mobility, transfer, work support, and training. It is a representative welfare robot and is effective in preventing a user's minor disabilities from becoming more severe. In this paper, we discuss the research challenges of human–machine welfare robot systems and their current status, using the multifunctional assistive robot as a typical example.
(This article belongs to the Special Issue Rehabilitation and Assistive Robotics: Latest Advances and Prospects)

22 pages, 1481 KiB  
Article
Adaptive Impedance Control of a Human–Robotic System Based on Motion Intention Estimation and Output Constraints
by Junjie Ma, Hongjun Chen, Xinglan Liu, Yong Yang and Deqing Huang
Appl. Sci. 2025, 15(3), 1271; https://doi.org/10.3390/app15031271 - 26 Jan 2025
Cited by 2 | Viewed by 1168
Abstract
The rehabilitation exoskeleton represents a typical human–robot system featuring complex nonlinear dynamics. This paper proposes an adaptive impedance control strategy for a rehabilitation exoskeleton. The patient's motion intention is estimated online by a neural network (NN) to cope with the intervention of the patient's subjective motor awareness in the late stage of rehabilitation training. Because impedance parameters differ across individual patients and training periods, the least-squares method was used to learn the patient's impedance parameters. Considering the uncertainties of the exoskeleton and the safety of rehabilitation training, an adaptive NN impedance controller with output constraints was designed: the NN approximates the unknown dynamics, and a barrier Lyapunov function prevents the system from violating the output constraints. The feasibility and effectiveness of the proposed strategy were verified by simulation.
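The least-squares identification of impedance parameters mentioned in the abstract can be illustrated with a minimal sketch. Assuming a linear impedance model F = M*acc + B*vel + K*pos and noiseless synthetic data (the true values 2, 5, and 30 are arbitrary, not from the paper), the parameters are recovered with `numpy.linalg.lstsq`:

```python
import numpy as np

def identify_impedance(acc, vel, pos, force):
    """Fit F = M*acc + B*vel + K*pos in the least-squares sense,
    stacking the recorded kinematics as the regressor matrix."""
    A = np.column_stack([acc, vel, pos])
    params, *_ = np.linalg.lstsq(A, force, rcond=None)
    return params                      # [M, B, K]

# Synthetic check: data generated with M=2, B=5, K=30 is recovered.
rng = np.random.default_rng(0)
acc, vel, pos = rng.normal(size=(3, 200))
force = 2 * acc + 5 * vel + 30 * pos
M_hat, B_hat, K_hat = identify_impedance(acc, vel, pos, force)
```

In practice the same regression would run on windows of measured force and kinematic data, so the fitted parameters can track the patient across training periods.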

29 pages, 32678 KiB  
Article
An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition
by Zhuangqun Song, Peng Zhao, Xueji Wu, Rong Yang and Xueshan Gao
Sensors 2025, 25(3), 713; https://doi.org/10.3390/s25030713 - 24 Jan 2025
Cited by 3 | Viewed by 1590
Abstract
This study presents a method for the active control of a follow-up lower extremity exoskeleton rehabilitation robot (LEERR) based on human motion intention recognition. First, to effectively support body weight and compensate for the vertical movement of the human center of mass, a vision-driven follow-and-track control strategy is proposed. Next, a machine learning algorithm for recognizing human motion intentions is proposed for human–robot collaboration tasks. A muscle–machine interface is constructed using a bidirectional long short-term memory (BiLSTM) network, which decodes multichannel surface electromyography (sEMG) signals into flexion and extension angles of the hip and knee joints in the sagittal plane. The hyperparameters of the BiLSTM network are optimized using the quantum-behaved particle swarm optimization (QPSO) algorithm, resulting in a QPSO-BiLSTM hybrid model that enables continuous real-time estimation of human motion intentions. Further, to address the uncertain nonlinear dynamics of the wearer–exoskeleton system, a dual radial basis function neural network adaptive sliding mode controller (DRBFNNASMC) is designed to generate control torques, enabling precise tracking of the motion trajectories generated by the muscle–machine interface. Experimental results indicate that the follow-up assistive frame can accurately track human motion trajectories. The QPSO-BiLSTM network outperforms traditional BiLSTM and PSO-BiLSTM networks in predicting continuous lower limb motion, while the DRBFNNASMC controller demonstrates superior gait tracking performance compared to the fuzzy-compensated adaptive sliding mode control (FCASMC) algorithm and the traditional proportional–integral–derivative (PID) control algorithm.
(This article belongs to the Section Wearables)

44 pages, 4022 KiB  
Review
Neural Network for Enhancing Robot-Assisted Rehabilitation: A Systematic Review
by Nafizul Alam, Sk Hasan, Gazi Abdullah Mashud and Subodh Bhujel
Actuators 2025, 14(1), 16; https://doi.org/10.3390/act14010016 - 6 Jan 2025
Cited by 4 | Viewed by 2651
Abstract
The integration of neural networks into robotic exoskeletons for physical rehabilitation has become popular due to their ability to interpret complex physiological signals. Surface electromyography (sEMG), electromyography (EMG), electroencephalography (EEG), and other physiological signals enable communication between the human body and robotic systems, and utilizing them plays a crucial role in robot-assisted neurorehabilitation. This systematic review synthesizes 44 peer-reviewed studies, exploring how neural networks can improve exoskeleton robot-assisted rehabilitation for individuals with impaired upper limbs. By categorizing the studies based on robot-assisted joints, sensor systems, and control methodologies, we offer a comprehensive overview of neural network applications in this field. Our findings demonstrate that neural networks such as Convolutional Neural Networks (CNNs), Long Short-Term Memory (LSTM) networks, and Radial Basis Function Neural Networks (RBFNNs) significantly contribute to patient-specific rehabilitation by enabling adaptive learning and personalized therapy. CNNs improve motion intention estimation and control accuracy, LSTM networks capture temporal muscle activity patterns for real-time rehabilitation, and RBFNNs improve human–robot interaction by adapting to individual movement patterns, leading to more personalized and efficient therapy. This review highlights the potential of neural networks to revolutionize upper limb rehabilitation, improving motor recovery and patient outcomes in both clinical and home-based settings, and recommends customizing existing neural networks for robot-assisted rehabilitation applications as a future direction.

19 pages, 1971 KiB  
Article
A Hierarchical-Based Learning Approach for Multi-Action Intent Recognition
by David Hollinger, Ryan S. Pollard, Mark C. Schall, Howard Chen and Michael Zabala
Sensors 2024, 24(23), 7857; https://doi.org/10.3390/s24237857 - 9 Dec 2024
Viewed by 1126
Abstract
Recent applications of wearable inertial measurement units (IMUs) for predicting human movement have often entailed estimating action-level (e.g., walking, running, jumping) and joint-level (e.g., ankle plantarflexion angle) motion. Although action-level or joint-level information is frequently the focus of movement intent prediction, contextual information is necessary for a more thorough approach, so combining the two may offer a more comprehensive basis for predicting movement intent. In this study, we devised a novel hierarchical method combining action-level classification with subsequent joint-level regression to predict joint angles 100 ms into the future. K-nearest neighbors (KNN), bidirectional long short-term memory (BiLSTM), and temporal convolutional network (TCN) models were employed for action-level classification, and a random forest model trained on action-specific IMU data was used for joint-level prediction. A joint-level action-generic model trained on multiple actions (backward walking, kneeling down, kneeling up, running, and walking) was also used for predicting the joint angle. Compared with the hierarchical approach, the action-generic model had lower prediction error for backward walking, kneeling down, and kneeling up. Although the TCN and BiLSTM classifiers achieved classification accuracies of 89.87% and 89.30%, respectively, they did not surpass the performance of the action-generic random forest model when combined with an action-specific random forest model, possibly because the action-generic approach was trained on more data from multiple actions. This study demonstrates the advantage of leveraging large, disparate data sources over a hierarchical approach for joint-level prediction, and the efficacy of an IMU-driven, task-agnostic model in predicting future joint angles across multiple actions.
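The hierarchical idea, classify the action first and then regress the joint angle with an action-specific model, can be sketched in a few lines. Nearest-centroid classification and per-action linear least squares stand in here for the paper's KNN/BiLSTM/TCN classifiers and random forest regressors; the synthetic data and class structure are invented purely for illustration:

```python
import numpy as np

class HierarchicalPredictor:
    """Stage 1 picks the action (nearest centroid); stage 2 predicts
    the joint angle with a regressor fit only on that action's data."""

    def fit(self, X, y_angle, y_action):
        self.actions = np.unique(y_action)
        self.centroids = np.array([X[y_action == a].mean(axis=0)
                                   for a in self.actions])
        self.coefs = {}
        for a in self.actions:            # one linear model (with bias) per action
            mask = y_action == a
            Xa = np.column_stack([X[mask], np.ones(mask.sum())])
            self.coefs[a], *_ = np.linalg.lstsq(Xa, y_angle[mask], rcond=None)
        return self

    def predict(self, x):
        a = self.actions[np.argmin(np.linalg.norm(self.centroids - x, axis=1))]
        return float(np.append(x, 1.0) @ self.coefs[a])

# Two synthetic "actions" with different feature clusters and angle laws.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y_action = np.repeat([0, 1], 50)
y_angle = np.where(y_action == 0, 2 * X[:, 0], 3 - X[:, 1])
model = HierarchicalPredictor().fit(X, y_angle, y_action)
```

An action-generic baseline, as the paper compares against, would simply fit one regressor on all rows of `X` regardless of `y_action`.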

14 pages, 5641 KiB  
Article
Estimation of Lower Limb Joint Angles Using sEMG Signals and RGB-D Camera
by Guoming Du, Zhen Ding, Hao Guo, Meichao Song and Feng Jiang
Bioengineering 2024, 11(10), 1026; https://doi.org/10.3390/bioengineering11101026 - 15 Oct 2024
Cited by 4 | Viewed by 1876
Abstract
Estimating human joint angles is a crucial task in motion analysis, gesture recognition, and motion intention prediction. This paper presents a novel model-based approach for reliable and accurate human joint angle estimation using a dual-branch network that leverages combined features derived from encoded sEMG signals and RGB-D image data. To ensure accuracy and reliability, the network employs a convolutional autoencoder to generate a high-level compression of sEMG features aimed at motion prediction. To handle variability in the distribution of sEMG signals, it introduces a vision-based joint regression network to maintain the stability of the combined features. To account for latency, occlusion, and shading issues in vision data acquisition, the feature fusion network uses high-frequency sEMG features as weights for specific features extracted from the image data. By mitigating the effects of non-stationary sEMG signals, the proposed method achieves effective joint angle estimation for motion analysis and motion intention prediction.
(This article belongs to the Special Issue Bioengineering of the Motor System)

15 pages, 847 KiB  
Review
Movement Outcomes Acquired via Markerless Motion Capture Systems Compared with Marker-Based Systems for Adult Patient Populations: A Scoping Review
by Matthew Pardell, Naomi D. Dolgoy, Stéphanie Bernard, Kerry Bayless, Robert Hirsche, Liz Dennett and Puneeta Tandon
Biomechanics 2024, 4(4), 618-632; https://doi.org/10.3390/biomechanics4040044 - 8 Oct 2024
Cited by 3 | Viewed by 3396
Abstract
Mobile motion capture is a promising technology for assessing physical movement; markerless motion capture systems (MLSs) offer great potential in rehabilitation settings given their accessibility compared to marker-based motion capture systems (MBSs). This review explores the current rehabilitation literature that directly compares movement-related outcomes captured by MLSs and MBSs, and examines the application of MLSs in movement measurement. Following a scoping review methodology, nine databases were searched (May to August 2023). Eligible articles had to present at least one estimate of the mean difference between a physical movement measure assessed by an MLS and by an MBS. Sixteen studies met the selection criteria and were included. Mean joint range of motion (ROM) displacement was similar between MLSs and MBSs, while peak joint angle outcomes differed significantly. Upper body movement outcomes were comparable, while lower body movement outcomes differed substantially. Overall, nearly two-thirds of measurements showed statistical differences between MLS and MBS outcomes. Regarding application, no studies assessed the technology with patient populations. Further MLS-specific research involving patient populations (e.g., intentional error testing, testing in less-than-ideal settings) would benefit the use of motion capture in rehabilitation contexts.
(This article belongs to the Section Injury Biomechanics and Rehabilitation)

17 pages, 5998 KiB  
Article
A Novel TCN-LSTM Hybrid Model for sEMG-Based Continuous Estimation of Wrist Joint Angles
by Jiale Du, Zunyi Liu, Wenyuan Dong, Weifeng Zhang and Zhonghua Miao
Sensors 2024, 24(17), 5631; https://doi.org/10.3390/s24175631 - 30 Aug 2024
Cited by 7 | Viewed by 2391
Abstract
Surface electromyography (sEMG) offers a novel method for human–machine interactions (HMIs), since it is a distinct physiological electrical signal that encodes human movement intention and muscle information. Unfortunately, the nonlinear and non-smooth features of sEMG signals often make joint angle estimation difficult. This paper proposes a joint angle prediction model for continuous estimation of wrist motion angles from sEMG signals. The proposed model combines a temporal convolutional network (TCN) with a long short-term memory (LSTM) network: the TCN captures local information and mines deeper features of the sEMG signals, while the LSTM, with its excellent temporal memory, compensates for the TCN's limited ability to capture long-term dependencies, resulting in better predictions. We validated the proposed method on the publicly available Ninapro DB1 dataset, selecting the first eight subjects and three wrist movements: wrist flexion (WF), wrist ulnar deviation (WUD), and wrist extension with closed hand (WECH). The proposed TCN-LSTM model outperformed standalone TCN and LSTM models in terms of root mean square error (RMSE) and average coefficient of determination (R²). It achieved an average RMSE of 0.064, a 41% reduction compared to the TCN model and a 52% reduction compared to the LSTM model, and an average R² of 0.93, an 11% improvement over the TCN model and an 18% improvement over the LSTM model.
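The TCN branch rests on causal dilated convolutions, in which the output at time t sees only x[t], x[t-d], x[t-2d], and so on, never future samples. A minimal NumPy sketch of one such layer (kernel length and dilation values are arbitrary illustrations, not the paper's architecture):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """Causal dilated 1-D convolution, the TCN building block: the
    signal is left-padded with zeros so the output has the same
    length and never depends on future samples."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is how a TCN covers long stretches of sEMG context while the LSTM handles dependencies beyond that window.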
(This article belongs to the Section Wearables)

15 pages, 3914 KiB  
Article
Trial of Brain–Computer Interface for Continuous Motion Using Electroencephalography and Electromyography
by Norihiko Saga, Yukina Okawa, Takuma Saga, Toshiyuki Satoh and Naoki Saito
Electronics 2024, 13(14), 2770; https://doi.org/10.3390/electronics13142770 - 15 Jul 2024
Cited by 2 | Viewed by 2043
Abstract
Most BCI systems used in neurorehabilitation detect EEG features indicating motor intent via machine learning, focusing on repetitive movements such as limb flexion and extension. These machine learning methods require large datasets and are time-consuming, making them unsuitable for same-day rehabilitation training following EEG measurement. We therefore propose a BCI system based on fuzzy inference that bypasses the need for specific EEG features, with an algorithm that allows patients to progress from measurement to training within a few hours. Additionally, we explored integrating electromyography (EMG) with conventional EEG-based motor intention estimation to capture continuous movements, which is essential for advanced motor function training such as skill improvement. In this study, we developed an algorithm that detects the initial movement via EEG and switches to EMG for subsequent movements, ensuring real-time responsiveness and effective handling of continuous movements. Herein, we report the results of this study.
(This article belongs to the Special Issue Brain Computer Interface: Theory, Method, and Application)
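The EEG-to-EMG handoff described in the abstract amounts to a small state machine: EEG power triggers the initial detection, after which the EMG envelope tracks the continuous movement. The thresholds and signal names below are hypothetical, chosen only to illustrate the switching logic:

```python
def detect_and_track(eeg_power, emg_env, eeg_thresh=1.5, emg_thresh=0.2):
    """Label each sample with the modality driving the interface:
    EEG until the initial intent is detected, then EMG for the
    continuous movements that follow."""
    triggered = False
    source = []
    for p, e in zip(eeg_power, emg_env):
        if not triggered and p > eeg_thresh:
            triggered = True              # initial movement detected via EEG
        if not triggered:
            source.append("EEG")          # still monitoring EEG for intent
        elif e > emg_thresh:
            source.append("EMG")          # continuous movement tracked via EMG
        else:
            source.append("idle")         # handed off, but muscle is quiet
    return source
```

Because EMG responds faster and more continuously than EEG classification, this split preserves real-time responsiveness once the movement has begun.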

14 pages, 1175 KiB  
Article
Investigation of Motor Learning Effects Using a Hybrid Rehabilitation System Based on Motion Estimation
by Kensuke Takenaka, Keisuke Shima and Koji Shimatani
Sensors 2024, 24(11), 3496; https://doi.org/10.3390/s24113496 - 29 May 2024
Viewed by 1181
Abstract
Upper-limb paralysis requires extensive rehabilitation to recover functionality for everyday living, and such assistance can be supported with robot technology. Against this background, we have proposed an electromyography (EMG)-driven hybrid rehabilitation system based on motion estimation using a probabilistic neural network. The system controls a robot and functional electrical stimulation (FES) from movement estimates derived from the user's EMG signals, enabling intuitive learning of joint motion and muscle contraction capacity even for multiple motions. In this study, hybrid and visual-feedback training were conducted with pointing movements of the non-dominant wrist, and the motor learning effect was examined via quantitative evaluation of accuracy, stability, and smoothness. The results show that hybrid training was as effective as visual-feedback training in all respects. Accordingly, passive hybrid training using the proposed system can be considered effective in promoting motor learning and rehabilitation for patients with paralysis who cannot perform voluntary movements.

17 pages, 3964 KiB  
Article
A Wearable Bidirectional Human–Machine Interface: Merging Motion Capture and Vibrotactile Feedback in a Wireless Bracelet
by Julian Kindel, Daniel Andreas, Zhongshi Hou, Anany Dwivedi and Philipp Beckerle
Multimodal Technol. Interact. 2024, 8(6), 44; https://doi.org/10.3390/mti8060044 - 23 May 2024
Cited by 3 | Viewed by 2557
Abstract
Humans interact with the environment through a variety of senses. Touch in particular contributes to a sense of presence, enhancing perceptual experiences and establishing causal relations between events. Many human–machine interfaces allow only one-way communication, which does not do justice to the complexity of the interaction. To address this, we developed a bidirectional human–machine interface featuring a bracelet equipped with linear resonant actuators, controlled via a Robot Operating System (ROS) program, to provide haptic feedback. The wireless interface also includes a motion sensor and a sensor that quantifies the tightness of the bracelet. Our functional experiments, which compared stimulation with three and five intensity levels, were performed by four healthy participants in their twenties and thirties. Participants estimated three vibration intensity levels with an average accuracy of 88%. While estimation accuracy for five intensity levels was only 67%, participants perceived relative vibration changes well, with an accuracy of 82%. The proposed haptic feedback bracelet will facilitate research into the benefits of bidirectional human–machine interfaces and the perception of vibrotactile feedback in general by closing the gap for a versatile device that combines high-density user feedback with sensors for intent detection.
