Review

Synergistic Advancement of Physical and Information Interaction in Exoskeleton Rehabilitation Robotics: A Review

1 Institute of Rehabilitation Engineering and Technology, University of Shanghai for Science and Technology, Shanghai 200093, China
2 Department of Orthopaedics, Shanghai Changzheng Hospital, Naval Medical University, Shanghai 201209, China
* Author to whom correspondence should be addressed.
Robotics 2026, 15(1), 25; https://doi.org/10.3390/robotics15010025
Submission received: 3 December 2025 / Revised: 14 January 2026 / Accepted: 15 January 2026 / Published: 19 January 2026
(This article belongs to the Section Neurorobotics)

Abstract

The exoskeleton rehabilitation robot is an actuator-driven wearable robot that forms a human–robot collaborative rehabilitation training system, perceiving and decoding the patient's state to promote the recovery of limb function and neural remodeling. This review focuses on the synergistic advancement of physical and information interaction in exoskeleton rehabilitation robotics. Related literature published from 2011 to 2025 was systematically retrieved from the EI, IEEE Xplore, PubMed, and Web of Science databases; the included studies mainly cover the period from 2018 to 2025, reflecting recent technological progress. The physical and information interaction is manifested in bionic structures, physiological information detection, and information processing technologies for identifying human movement intention. Bionic structural design is fundamental to realizing natural human–robot coordination and improving motion following, while multimodal physiological signal detection and information processing enhance active participation and the accuracy of movement intention recognition, providing a clear direction for the development of intelligent rehabilitation technology.

1. Introduction

1.1. Background

With the intensification of population aging and the continuous increase in the number of patients with neurological diseases and sports injuries, traditional rehabilitation treatment has been unable to meet the growing demand for rehabilitation [1]. In this context, rehabilitation robot technology, as an interdisciplinary integration of artificial intelligence, robotics, biomedical engineering, and human–robot interaction, has gradually demonstrated significant application potential in the field of medical rehabilitation [2]. Among limb rehabilitation training systems, exoskeleton rehabilitation robots are an important structural type of rehabilitation robot [3]. However, with the development of technology, it has become difficult for traditional exoskeleton rehabilitation robots to meet the task needs of human–robot interaction and collaboration [4]. Therefore, higher requirements are being placed on the smoothness of human–robot interaction, the users' sense of active participation, and the accuracy of motion intention recognition [5,6].
In exoskeleton rehabilitation robotic systems, human–robot interaction is a core technology for achieving efficient and safe assistance [7]. Recently, with continuous technological advances, the lightweight and biomimetic design of exoskeleton rehabilitation robots has become a research hotspot [8,9]. Traditional rehabilitation robots generally suffer from large size and high structural rigidity, which result in poor physical interaction performance and weak motion followability [10], making it difficult to meet users' increasingly personalized and comfort-oriented needs. To solve these problems, researchers have improved the human–robot physical interaction ability of the equipment by introducing flexible materials and biomimetic structures [11,12,13,14]. Simultaneously, the innovative design of intelligent composite drive materials (such as shape memory alloys (SMA) [15,16,17,18,19,20,21,22] and artificial muscles [23,24,25,26,27]) and biomimetic joint structures has enabled modular, adjustable, and integrated architectures. This significantly enhances the flexibility and biological compatibility of the exoskeleton system, effectively improving the comfort and naturalness of wearing. The specific classification is shown in Table 1.
Based on this, the information interaction between the exoskeleton rehabilitation robot and the human has also received increasing attention. The system needs to accurately identify the user’s movement intention and be able to respond and adapt intelligently in real time according to the user’s status and environmental changes.
By integrating multimodal physiological signals such as electromyography (EMG) [28,29,30,31,32] and electroencephalography (EEG) [33,34,35], the robot can not only dynamically perceive the user's active movement intention and adjust its output force but also promptly feed back its own operating status to the user, thereby achieving efficient and natural human–robot interaction. This process significantly improves the response efficiency and intelligence level of the system while enhancing safety and comfort during use [36,37,38,39,40].
Currently, the synergistic advancement of physical and information interaction has become a research hotspot in the field of exoskeleton rehabilitation robotics. This paper reviews progress in human–robot physical and information interaction technologies in exoskeleton rehabilitation robots, analyzes the key technical challenges and research difficulties currently faced, and, on this basis, provides theoretical references and research guidance for the future intelligent and personalized development of exoskeleton rehabilitation robots.

1.2. Literature Research

This review retrieved literature related to the synergistic advancement of physical and information interaction in exoskeleton rehabilitation robotics published between 2011 and 2025 from the EI, IEEE Xplore, PubMed, and Web of Science databases. The selected studies primarily cover the period from 2018 to 2025. In each database, Boolean operators, wildcards, synonyms, and alternative spellings were used to capture as broad a range of relevant studies as possible (Table 2). This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines; the PRISMA checklist is given in Supplementary Materials Table S1.
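For illustration, a query of the following general form combines Boolean operators, wildcards, and synonyms; this composite is hypothetical, and the exact strings used for each database are those listed in Table 2:

```
("exoskeleton*" OR "wearable robot*") AND rehabilitat*
AND ("physical interaction" OR "information interaction"
     OR "human-robot interaction" OR "intention recognition")
```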
As illustrated in Figure 1, a total of 264 publications were initially identified through database searching. These records were subsequently subjected to a rigorous screening procedure based on predefined inclusion and exclusion criteria. The inclusion criteria required that (1) the publication be an original paper; (2) the primary research objective involve physical interaction, information interaction, or synergistic control strategies within exoskeleton rehabilitation robotics, thereby excluding studies focusing on prostheses, orthoses, or purely rigid exoskeleton systems; (3) the human–robot interaction framework maintain sufficient compliance to avoid interference with the user's natural movement patterns; and (4) the overall system mass generally not exceed 15 kg, ensuring acceptable wearability for rehabilitation applications.
In addition, 23 further studies were incorporated through reference tracing to ensure comprehensive coverage of the field. After the removal of duplicate records and the exclusion of studies that did not meet the screening criteria, a total of 111 studies were included in this review, representing the current research landscape on the synergistic integration of physical and information interaction in exoskeleton rehabilitation robotics.

2. Physical Interaction Technologies

In human–robot collaborative exoskeleton rehabilitation robot systems, human–robot interaction performance is a critical technical indicator for evaluating assistive effectiveness and wearing comfort. Due to their large size and rigid structure, traditional rehabilitation robots often encounter problems such as poor physical interaction [41,42,43], delayed responses, and insufficient motion following [44,45,46], making it difficult to achieve efficient coordination and assistance with natural human movements, as shown in Figure 2. Bionic design and lightweight optimization provide a feasible path for enhancing the naturalness of human–robot interaction and the capacity for dynamic collaboration.
By simulating the structure and movement patterns of human muscles and bones, bionic design can achieve a high degree of matching with the user's movements [2,47,48,49,50,51,52], significantly improving the motion followability and naturalness of the robot system. Rigid hand exoskeleton robots transmit the driving force to the patient through rigid structures such as linkages, gears, and crank–slider mechanisms, assisting the patient in rehabilitation training; they are characterized by high output force and high control accuracy. However, most rigid exoskeleton rehabilitation robots have fixed joint rotation centers, which leads to problems such as poor wearability and low safety. With the development of new materials such as shape memory alloys, silicone, and thermoplastic polyurethane (TPU) films, flexible-structure exoskeleton rehabilitation robots have enhanced the safety and comfort of rehabilitation training, as their flexible structures reduce the weight of the exoskeleton [19,53,54]. Gaeta et al. [55] designed a soft glove based on magnetic field control that could assist patients in performing impedance training of their fingers. The glove was made of flexible TPU and magnetic fluid, and its stiffness was adjusted by controlling the magnetic field to resist the active force of the patient's fingers, as illustrated in Figure 3. Sui et al. [56] proposed a soft glove based on shape memory alloy (SMA) that can assist the fingers in performing fine movements. Polygerinos et al. [57] designed a soft hand rehabilitation glove based on silicone materials. In 2019, Wang et al. [58] designed a silicone-based soft hand rehabilitation robot consisting of a soft fabric glove and five flexible actuators. Researchers have also designed flexible actuators based on anisotropic fabrics: in 2019, Connolly et al. [59] proposed a soft glove made of silicone reinforced with anisotropic fabrics. TPU materials have likewise attracted attention in the industry; compared with silicone, TPU has high strength and stretch resistance and is particularly suitable for wearable applications where weight and volume are strictly constrained [60]. In 2017, Yap et al. [61] proposed a soft glove based on TPU. To further enhance the output performance of TPU-based soft gloves, in 2021, Feng et al. [62] designed a high-strength soft glove containing five TPU-based flexible actuators, whose asymmetric structure and reinforced gaskets significantly improved the output performance.
In addition to hand exoskeleton rehabilitation robots, some upper- and lower-limb exoskeleton robots also place greater emphasis on the design of human–robot interaction [63,64,65]. Tianjin University designed a bionic knee-joint exoskeleton that adopts an underactuated five-bar mechanism, which enhances the inclusiveness of movement and the agility of the structure, enabling the system to adapt more naturally to the dynamic changes of the knee joint. Moreover, at the thigh, an intelligent tensioning physical interaction and support device is adopted to achieve dynamic human–robot interaction [66].
Lightweight design further optimizes the physical interaction between the exoskeleton and the human body. A lower system weight not only reduces the wearing burden but also effectively improves response speed and coordination and enhances motion following. The flexible exoskeleton systems developed by Harvard University, from early pneumatic artificial muscles [67] to motor–wire drive systems [68,69], all improve wearing fit and the flexibility of motion response by introducing flexible binding structures and biomimetic concepts of human muscles, as shown in Figure 4. The flexible knee-joint exoskeleton proposed by Seoul National University in South Korea [70] adopts a drive structure based on textile materials: motors pull steel cables to drive fabric belts, making the system lightweight and effectively improving movement synchronization. The flexible exoskeleton developed by Carnegie Mellon University [71] combines bionic elastic artificial-muscle actuators with fabric sleeves, which not only reduces equipment weight but also enables active coordination with knee flexion and extension movements [72]. The inflatable flexible exoskeleton developed by Arizona State University [73] is composed of soft actuators made of heat-sealed polyurethane, improving the dynamic torque of the knee joint and the followability of movements throughout the entire gait cycle.
This section reviewed the major advances in physical human–robot interaction for rehabilitation exoskeletons. Traditional rigid exoskeletons often suffer from large interaction resistance, delayed responses, and poor motion following due to their bulky structures and limited compliance, making it difficult to achieve efficient coordination with natural human movements. To address these issues, mechanical structural design and lightweight optimization have become key technical pathways for improving physical interaction performance.
Mechanical structure design enhances kinematic compatibility between the exoskeleton and the human body by imitating the structure and motion patterns of the musculoskeletal system, thereby significantly improving motion following, naturalness, and wearability. Meanwhile, the adoption of flexible materials such as shape memory alloys, silicone, and TPU, along with corresponding soft actuation structures, has enabled hand and limb exoskeletons to achieve higher compliance, safety, and comfort, effectively reducing risks associated with joint misalignment or rigid constraints.
The lightweight design further reduces system inertia and wearing burden, enabling faster response and better motion synchronization during dynamic interaction. Flexible exoskeleton systems featuring textile-based drives, artificial muscle actuators, and pneumatic soft structures have demonstrated enhanced motion fit and dynamic collaboration ability, offering promising solutions for natural gait assistance and complex movement support.
Although these techniques have significantly improved physical human–robot interaction, several challenges remain, including insufficient output force of soft actuators, limited durability, and increased complexity in biomimetic structural design. Future research should seek a better balance among compliance, safety, and output performance, while integrating multimodal sensing and intelligent control to further enhance the exoskeleton’s adaptability and coordination with natural human movements. This will contribute to achieving more efficient, comfortable, and natural human–robot collaboration.

3. Information Interaction Technologies

Human–robot interaction technology based on information interaction refers to the ability of exoskeleton robot systems to recognize and perceive the user's active motion intentions and to transfer the corresponding motion to the human body. Specifically, the robot needs to be able to track the human body's motion trajectory, understand the interactive background environment, and predict the user's likely subsequent actions [74,75,76]. Through a precise understanding of human motion intentions, the robot can respond in advance, completing collaborative tasks more efficiently and significantly enhancing the naturalness and collaborative performance of human–machine interaction [77].

3.1. Multi-Sensory Feedback Information Fusion

Multi-sensory feedback information fusion technology is a key support for achieving high-performance human–robot interaction in exoskeleton rehabilitation robots. By integrating multi-source information such as mechanics, physiological signals, and environmental perception, this technology constructs a closed-loop interactive system of "perception–decision–execution", improving the accuracy of motion intention recognition and the sensitivity of interactive response and thereby enhancing the intelligence and collaboration of human–robot interaction.
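As a minimal sketch of this "perception–decision–execution" loop, the following fixed-rate control cycle illustrates the structure; the robot and decoder interfaces (read_sensors, decode_intention, apply_torque) and the 100 Hz rate are hypothetical placeholders rather than any specific system's API:

```python
# Sketch of a perception-decision-execution closed loop (hypothetical API).
import time

CONTROL_PERIOD_S = 0.01  # assumed 100 Hz control rate

def control_loop(robot, decoder, duration_s=10.0):
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        obs = robot.read_sensors()                 # perception: force, EMG, IMU...
        intention = decoder.decode_intention(obs)  # decision: user's motion intent
        robot.apply_torque(intention.joint,        # execution: assistive output
                           intention.assist_torque)
        time.sleep(CONTROL_PERIOD_S)
```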
In the multi-sensory integration approach, virtual reality (VR) technology, as a means of enriching perception channels and interactive feedback, is widely used to improve the quality of human–robot interaction. It enhances users' immersion and engagement through visual and auditory feedback and improves the interactive experience in rehabilitation training. Zakharov et al. [78], in a study on lower-limb rehabilitation of patients with acute stroke, combined virtual reality with robot-assisted training to construct a human–robot interaction scenario with multimodal perceptual feedback. The results showed that this system significantly enhanced the patients' motor participation and the recovery of lower-limb function, and performed better in terms of human–robot interaction efficiency than standard rehabilitation methods. The rehabilitation robot system developed by Shanghai University [79] integrates virtual reality and multi-sensory feedback mechanisms; based on the principle of neural remodeling, it enhances the immersion and initiative of the interaction process, thereby improving rehabilitation efficiency. In addition to visual and auditory interaction, functional electrical stimulation (FES), as a means of enhancing feedback through physiological channels, has also played a significant role in improving the active response of human–robot interaction. In response to the poor individual adaptability, delayed feedback, and passive control of traditional FES, researchers introduced biological feedback mechanisms and constructed closed-loop regulation systems based on electromyographic signals [80,81,82,83], achieving real-time capture of active movement intentions and stimulation responses and enhancing the interactive system's responsiveness to users' active behaviors. Inga Lypunova Petersen et al. designed a system that integrates an upper-limb rehabilitation robot with FES technology: it drives the patient's limb along preset motion trajectories and dynamically controls the activation of muscle groups using surface electromyography signals, thereby constructing a highly sensitive active closed loop. The system proposed by Nahid Norouzi-Gheidari et al. from McGill University in Canada [84] integrates VR, exoskeletons, and FES; it guides hand movements through dynamic visual tasks, provides motion assistance from the robot, and recognizes fist-making intention from electromyographic signals to trigger electrical stimulation. Huazhong University of Science and Technology has also conducted research on functional electrical stimulation: Cui Xikai [85] integrated the RUPERT wearable exoskeleton with FES technology for rehabilitation training of the shoulder joint and hand, enhancing both the patient's active participation and the system's response. Huang Tao et al. [86] analyzed the influence of different stimulation parameters and electrode configurations on electromyographic signals and designed a multi-channel array FES system, making muscle activation control more precise and further optimizing the physiological feedback path during human–machine interaction, as shown in Figure 5.
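A minimal sketch of such an EMG-driven FES closed loop is given below: stimulation is enabled only when the smoothed EMG envelope indicates voluntary effort. The threshold, gain, and amplitude cap are illustrative assumptions, not clinical parameters:

```python
# Sketch of an sEMG-triggered FES closed loop (illustrative parameters).
import numpy as np

def emg_envelope(emg_window, fs=1000.0, tau_s=0.1):
    """Rectify the raw EMG window and smooth it with a one-pole filter."""
    rectified = np.abs(emg_window - np.mean(emg_window))  # remove DC, rectify
    alpha = 1.0 / (tau_s * fs)                            # smoothing constant
    env = 0.0
    for x in rectified:                                   # causal low-pass
        env += alpha * (x - env)
    return env

def fes_amplitude(envelope, on_threshold=0.05, gain=40.0, max_ma=25.0):
    """Map voluntary activation to a stimulation amplitude in mA."""
    if envelope < on_threshold:          # no voluntary effort detected
        return 0.0                       # keep the stimulator off
    return min(gain * envelope, max_ma)  # proportional assist, safety-capped

rng = np.random.default_rng(0)
weak, strong = rng.standard_normal(500) * 0.02, rng.standard_normal(500) * 0.3
print(fes_amplitude(emg_envelope(weak)), fes_amplitude(emg_envelope(strong)))
```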
In conclusion, multi-sensory feedback information fusion technology significantly enhances the human–machine interaction performance of exoskeleton rehabilitation robots by improving the comprehensiveness of information acquisition and the responsiveness of feedback control. Key enabling technologies, such as VR-based immersive training and FES-assisted neuromuscular activation, further contribute to making the human–robot interaction process more intelligent, natural, and efficient. However, despite their promising synergistic potential, the current clinical readiness of these technologies varies considerably. Most VR-based rehabilitation systems and FES–exoskeleton fusion strategies remain at prototype or early clinical validation stages, with limited large-scale randomized controlled trials available. Only a few commercial VR rehabilitation platforms and standardized FES protocols have reached higher clinical maturity. Therefore, while multi-sensory fusion and synergistic VR/FES systems provide a solid technical foundation for future high-quality rehabilitation training systems, further clinical evidence, user-centered trials, and long-term efficacy studies are still required before widespread clinical deployment can be achieved.

3.2. Bioelectrical Signal

For the recognition of human movement intentions, signals from traditional physical sensors such as inertial measurement units (IMUs) and machine vision exhibit relatively high accuracy, but they are inferior to physiological signals in terms of reaction speed and the timeliness with which intention is captured, and they also suffer from feedback lag. Physiological signals hold great potential for enhancing human–robot interaction performance: they can provide information about movement intention before the action occurs and thus possess strong initiative and foresight. Common physiological signals currently include the electroencephalogram (EEG), surface electromyogram (sEMG), and electrooculogram (EOG). This paper focuses on the field of exoskeleton rehabilitation robots and mainly explores human–computer interaction technology based on EEG and sEMG.
The brain–computer interface (BCI) establishes a direct neural control channel between the user and the exoskeleton system. On one hand, users can consciously control the exoskeleton to perform action tasks through brain activity, significantly enhancing the collaborative control ability between humans and machines. On the other hand, the closed-loop neural feedback mechanism triggered by the BCI system can encourage users to actively participate in rehabilitation training, promoting reorganization of the circuits between the motor cortex and the spinal cord and the recovery of motor function [87,88].
In the experimental paradigms used for EEG acquisition, the main brain–computer interfaces include those based on motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300 potential. Among them, the BCI based on MI is widely applied in rehabilitation scenarios due to its more natural interaction mode and higher degree of control freedom. By integrating multimodal information such as proprioception, vision, and touch, this type of BCI can more accurately express motor intentions and improve the human–computer interaction performance [89,90,91,92], as shown in Figure 6.
To improve the decoding accuracy of MI-BCI, researchers have extensively studied various feature extraction methods and classification algorithms. However, even with advanced signal processing, if users are unable to effectively regulate their brain activity, interactive control performance will remain limited by the relatively low signal-to-noise ratio. Qiu et al. [90] introduced Chinese-character-induced stimuli during EEG collection to guide participants in motor imagery, and constructed a virtual reality (VR) scene generating a virtual arm to enhance the users' imagination experience. The experiment showed that this method could amplify the event-related desynchronization (ERD) caused by MI and improve intention recognition performance [90]. On the other hand, surface electromyography (sEMG) signals, physiological electrical signals composed of the action potentials of skeletal muscle fibers, can be detected tens to hundreds of milliseconds before observable muscle contraction begins, giving them inherent predictive and temporal advantages. sEMG signals contain rich information about muscle states and limb movements; by analyzing their time-domain, frequency-domain, and pattern changes, the user's movement intentions can be accurately determined [93]. Khairuddin et al. [94] successfully identified the movement instructions generated by subjects during upper-limb rehabilitation training by analyzing sEMG signals; Deng et al. [95] decoded EEG signals into user intentions and used them to control an upper-limb rehabilitation robot, effectively improving the human–robot interaction efficiency of the system. Tieck et al. [96] encoded electromyography signals into neural pulses using a spiking neural network (SNN), achieving real-time control of a five-finger mechanical hand and demonstrating the application potential of electromyography in human–robot collaborative motion control.
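As a minimal sketch of this sEMG analysis pipeline, the snippet below pairs classic time-domain features with a linear classifier; the window length, feature set, classifier choice, and toy data are assumptions for illustration, not any cited study's method:

```python
# Sketch of a time-domain sEMG pipeline: window features -> linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def td_features(window):
    """Classic time-domain features for one sEMG window."""
    mav = np.mean(np.abs(window))                 # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(window)))          # waveform length
    zc = np.sum(window[:-1] * window[1:] < 0)     # zero crossings
    return np.array([mav, rms, wl, zc], dtype=float)

# Toy data: low-amplitude windows as "rest", high-amplitude as "flexion".
rng = np.random.default_rng(0)
X = np.array([td_features(rng.standard_normal(200) * s)
              for s in (0.2, 0.25, 1.0, 1.1)])
y = np.array([0, 0, 1, 1])                        # 0 = rest, 1 = flexion
clf = LogisticRegression().fit(X, y)
print(clf.predict(X))                             # expected: [0 0 1 1]
```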
In addition to the commonly discussed issues of noise susceptibility and limited signal strength, bioelectrical signals—such as EMG and EEG—also suffer from significant practical limitations in clinical settings. One major disadvantage is the long calibration time required before each training session. Because signal quality is highly sensitive to electrode placement, skin impedance, sweat, and daily variations in physiological conditions, calibration must be repeated frequently to establish reliable mappings between the user’s intention and robot control parameters. This requirement greatly reduces clinical efficiency and increases the workload for both therapists and patients. Furthermore, the intrinsic non-stationarity of bioelectrical signals poses an additional challenge for stable long-term control. Signal patterns tend to drift over time due to fatigue, electrode micro-movements, or changes in muscle recruitment strategies, leading to decreased decoder accuracy and necessitating repeated recalibration or adaptive algorithms. These limitations remain major obstacles to the widespread use of bioelectrical signals in rehabilitation robotics.

3.3. Multimodal Interaction Sensing Technology and Biosignal Fusion

Multimodal interaction perception technology comprehensively utilizes various sensory signals, such as voice, image, text, eye movement, touch, electroencephalogram (EEG), and electrocardiogram (ECG), to achieve a more comprehensive and precise understanding of users' intentions, behaviors, and situations [97,98,99], as shown in Figure 7. For instance, the integration of vision and inertial measurement units (IMUs) is widely applied in motion capture, effectively alleviating the visual information loss caused by occlusion and improving the robustness and interaction reliability of the system [100,101]. Riggozlio et al. [102] proposed a multimodal body–machine interface that integrates surface electromyography (EMG) signals with inertial measurement unit (IMU) data and evaluated the system performance under various fusion strategies. The results showed that the fusion strategy had a significant impact on interaction response speed and intention recognition accuracy, verifying the important role of multimodal perception in improving the quality of human–robot collaborative control. Recently, with the rapid development of large language models (LLMs) and related artificial intelligence (AI) technologies, more and more studies have attempted to integrate technologies from multiple fields, such as computer vision, natural language processing, cognitive architecture, and social signal processing, to achieve high-quality human–robot interaction with greater semantic understanding and social adaptability [103,104]. This trend has driven the evolution of robot systems from the perception layer to the cognitive layer, enabling them to actively identify user needs and respond in a personalized way in complex and variable environments.
In recent years, multimodal interaction has increasingly been combined with trajectory prediction and brain-inspired intention recognition and decision-making mechanisms to enable proactive and anticipatory rehabilitation assistance. Ning et al. [105] developed an upper-limb exoskeleton rehabilitation robot capable of performing shoulder–elbow–wrist joint training, aiming to enhance neurofunctional recovery through repetitive, task-oriented movement. They established a motion-equivalent model of the upper limb, optimized mechanical configuration parameters using spatial mechanism theory and optimization algorithms, and evaluated human–robot compatibility. Simulation and prototype testing demonstrated that the robot achieved near-complete coverage of the upper-limb range of motion, maintained high dexterity, ensured safe human–robot interaction, and achieved an excellent overall system evaluation score. Furthermore, Wang et al. [106] proposed a brain-inspired decision-making framework based on a multi-brain-region computational architecture and multimodal information fusion. Their system enables faster and more reliable intention inference under noisy or uncertain conditions, representing a shift from conventional “motion estimation” toward more advanced “cognitive decision-making.” Collectively, these studies highlight the evolution of multimodal fusion from data-level and feature-level processing toward a hierarchical “intention–decision–action” paradigm, providing a theoretical basis for implementing feedforward and predictive control in rehabilitation exoskeletons.
At the research frontier, movement intention prediction has progressed beyond traditional time-series classification to high-dimensional motion forecasting enabled by advanced deep neural networks—including Transformers, graph neural networks (GNNs), and diffusion models. These architectures capture complex cross-modal spatiotemporal dependencies and the latent structural patterns of human movement. For example, Transformer-based cross-modal fusion models can process EEG/EMG signals, joint kinematics, and visual pose sequences simultaneously, achieving millisecond-level intention estimation and future trajectory prediction. Causal graph-based approaches further explore causal dependencies among heterogeneous modalities, thereby enhancing robustness against noise and missing data. Meanwhile, diffusion-model-based motion prediction demonstrates strong capability in generating smooth, continuous, and biomechanically plausible motion trajectories. Together, these emerging techniques provide viable pathways for exoskeleton systems to anticipate future movement trends and execute pre-emptive (pre-execution) control strategies.
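A minimal sketch of such Transformer-style cross-modal fusion is shown below, with EMG features attending to kinematic features before intention classification; the channel counts, single attention layer, and number of intent classes are illustrative assumptions, not a reproduction of any cited architecture:

```python
# Sketch of cross-modal attention fusion for intention classification.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_intents=4):
        super().__init__()
        self.emg_proj = nn.Linear(8, d_model)    # 8 EMG channels (assumed)
        self.kin_proj = nn.Linear(6, d_model)    # 6 joint signals (assumed)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads,
                                                batch_first=True)
        self.head = nn.Linear(d_model, n_intents)

    def forward(self, emg, kin):
        # emg: (batch, T_emg, 8), kin: (batch, T_kin, 6)
        q = self.emg_proj(emg)                   # queries from EMG
        kv = self.kin_proj(kin)                  # keys/values from kinematics
        fused, _ = self.cross_attn(q, kv, kv)    # EMG attends to kinematics
        return self.head(fused.mean(dim=1))      # pooled intention logits

logits = CrossModalFusion()(torch.randn(2, 50, 8), torch.randn(2, 25, 6))
print(logits.shape)  # torch.Size([2, 4])
```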
Despite the clear advantages of Large Language Models (LLMs) in semantic understanding, multimodal information integration, and natural human–robot interaction, their clinical translation in rehabilitation robotics still faces several challenges. First, LLM-based inference inherently carries uncertainty and limited predictability. In safety-critical rehabilitation scenarios, such “black-box” decision-making may introduce risks, especially when vague patient expressions are directly mapped to control commands. Any misinterpretation could result in unsafe movements or inappropriate torque output. Second, current LLMs typically require substantial computational resources, while rehabilitation robots deployed in clinical settings often operate under strict constraints on processing power, real-time responsiveness, and cost, limiting large-model integration. Third, multimodal LLM systems that process patient speech, emotional signals, EMG data, or physiological indicators necessarily involve sensitive personal information. Ensuring data security and compliance with medical regulations—such as device certification standards and privacy protection requirements—remains a major barrier to clinical adoption.
Nevertheless, the future prospects of LLMs in rehabilitation robotics remain highly promising. Advances in compact “Small LLMs” and edge computing are expected to improve real-time inference capability, making lightweight, on-device deployment more feasible. Combining LLMs with interpretable models or conventional control frameworks may further enhance safety, transparency, and controllability in clinical use. Moreover, as multi-center clinical studies expand and regulatory standards mature, LLM-enabled systems could play an increasingly important role in personalized training guidance, automatic parameter adaptation, emotional state detection, and pain monitoring. Overall, the clinical feasibility of LLMs is expected to grow steadily as the technology matures, data security frameworks improve, and industry standards evolve. LLMs hold substantial long-term potential as an integral component of next-generation rehabilitation robotic systems.
In rehabilitation scenarios, to enhance system safety and user engagement, wearable rehabilitation robots have gradually developed multi-source information fusion methods based on biological signals. By integrating interaction evaluation mechanisms and on-demand assistance strategies, the system can dynamically adjust the collaboration intensity according to the user's current physical condition and training feedback. Through the integration of fast control algorithms and online learning mechanisms, the robot not only responds rapidly but can also adapt to individual differences and training progress, effectively improving the personalization and efficiency of rehabilitation training [107,108].
Multi-source information fusion serves as a core enabling technology for synergistic neuro-mechanical rehabilitation systems, yet it remains one of the most technically demanding research areas. The challenges arise mainly from the heterogeneity, asynchronous nature, and uncertainty of the signals involved, as well as the strict real-time constraints imposed by closed-loop human–robot interaction. This subsection elaborates on the key mathematical and algorithmic difficulties that currently limit the performance and robustness of multimodal fusion, providing research guidance for future studies.
1. Real-Time Constraints and Latency Accumulation
In closed-loop rehabilitation tasks, multimodal fusion must operate within stringent latency budgets (typically 20–50 ms) to maintain control stability and responsiveness. However, EEG, EMG, and kinematic data exhibit drastically different sampling rates and temporal dynamics. This leads to several challenges:
  • Resampling errors and aliasing caused by non-uniform sampling rates.
  • Latency accumulation from filtering, feature extraction, and neural decoding.
  • Potential instability in impedance or assist-as-needed control due to delayed intention estimation.
Balancing inference accuracy with ultra-low latency remains a fundamental bottleneck for real-time fusion systems.
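As a minimal sketch of the rate-mismatch problem, the snippet below interpolates EEG- and EMG-rate streams onto a shared 100 Hz control clock; the rates and sine-wave placeholders are assumptions, and a real pipeline would add anti-aliasing filtering before downsampling, which this sketch deliberately omits:

```python
# Sketch: align modalities with different sampling rates to the control clock.
import numpy as np

def to_control_rate(t_signal, x_signal, t_control):
    """Linearly interpolate one modality onto shared control timestamps."""
    return np.interp(t_control, t_signal, x_signal)

dur = 1.0
t_ctrl = np.arange(0.0, dur, 1 / 100)    # 100 Hz control loop (assumed)
t_eeg = np.arange(0.0, dur, 1 / 250)     # 250 Hz EEG stream
t_emg = np.arange(0.0, dur, 1 / 1000)    # 1 kHz EMG stream
eeg = np.sin(2 * np.pi * 10 * t_eeg)     # placeholder signals
emg = np.abs(np.sin(2 * np.pi * 3 * t_emg))

frame = np.column_stack([to_control_rate(t_eeg, eeg, t_ctrl),
                         to_control_rate(t_emg, emg, t_ctrl)])
print(frame.shape)                        # (100, 2): one row per control tick
```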
2. Temporal and Spatial Misalignment Across Heterogeneous Modalities
Different sensing modalities present intrinsic asynchronies. For example, EMG activation often precedes both EEG motor-related potentials and observable joint kinematics by tens of milliseconds. Effective fusion requires:
  • Modeling cross-modal time delays and causal dependencies.
  • Temporal alignment using Kalman filters, particle filters, or dynamic time warping (DTW).
  • Spatial alignment between cortical activation patterns, muscle synergies, and end-effector trajectories.
These methods become computationally demanding as the dimensionality and number of modalities increase, posing challenges for real-time implementation.
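As a minimal sketch of one of the alignment tools named above, the following textbook DTW recursion computes the alignment cost between an EMG envelope and a lagging joint-velocity profile; an online system would use windowed or constrained variants to respect real-time budgets:

```python
# Sketch: dynamic time warping (DTW) cost between two 1-D sequences.
import numpy as np

def dtw_cost(a, b):
    """Accumulated DTW cost matrix for 1-D sequences a and b."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])       # local mismatch
            D[i, j] = step + min(D[i - 1, j],     # insertion
                                 D[i, j - 1],     # deletion
                                 D[i - 1, j - 1]) # match
    return D[1:, 1:]

emg_env = np.array([0.0, 0.1, 0.6, 0.9, 0.4, 0.1])    # EMG leads...
joint_vel = np.array([0.0, 0.0, 0.1, 0.5, 0.9, 0.3])  # ...kinematics lag
print(dtw_cost(emg_env, joint_vel)[-1, -1])           # total alignment cost
```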
3. Noise Heterogeneity and Uncertainty Modeling
EEG is easily contaminated by motion and environmental noise; EMG is sensitive to electrode shift and muscle fatigue; and kinematic signals, while relatively stable, may poorly reflect motor intent. Fusion models must therefore account for modality-dependent noise distributions and uncertainties by
  • Constructing probabilistic graphical models that encode multimodal uncertainty.
  • Employing Bayesian filtering, variational inference, or confidence-aware learning.
  • Designing fault-tolerant mechanisms for dealing with missing, corrupted, or asynchronous signals.
Developing unified uncertainty-aware fusion frameworks remains a significant research direction.
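A minimal sketch of confidence-aware fusion under a static Gaussian assumption is shown below: each modality reports an intention estimate with a variance, and inverse-variance weighting combines them, with an infinite variance naturally silencing a missing or corrupted channel. The numbers are illustrative:

```python
# Sketch: inverse-variance (Gaussian) fusion of per-modality estimates.
import numpy as np

def fuse(estimates, variances):
    """Combine scalar estimates, weighting each by its confidence."""
    w = 1.0 / np.asarray(variances, dtype=float)  # inf variance -> zero weight
    return float(np.sum(w * np.asarray(estimates)) / np.sum(w))

# EEG noisy, EMG confident, kinematic channel dropped this frame:
print(fuse(estimates=[0.30, 0.55, 0.00],
           variances=[0.20, 0.02, np.inf]))       # ~0.53, EMG-dominated
```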
4. Stability Issues in Neuro-Mechanical Closed-Loop Control
Fusion algorithms interact tightly with robotic controllers, forming a closed-loop system that can become unstable if multimodal estimates fluctuate rapidly or unpredictably. Major challenges include
  • Constraining the magnitude and rate of change in the fused intention signal.
  • Embedding stability criteria derived from Lyapunov theory or passivity-based methods.
  • Ensuring robust performance under user-specific variability, fatigue, or cognitive fluctuations.
Maintaining stable synergy between neural decoding and mechanical assistance represents a central challenge for next-generation rehabilitation robots.
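One such constraint can be sketched as a combined magnitude and slew-rate limiter applied to the fused intention signal before it reaches the controller; the limits below are illustrative assumptions, and a full treatment would pair this with a formal Lyapunov or passivity analysis:

```python
# Sketch: bound the magnitude and rate of change of the fused intention.
import numpy as np

def limit_intention(raw, prev, dt, max_mag=1.0, max_rate=2.0):
    """Clamp the slew rate, then the magnitude, of the intention estimate."""
    step = np.clip(raw - prev, -max_rate * dt, max_rate * dt)  # rate limit
    return float(np.clip(prev + step, -max_mag, max_mag))      # magnitude limit

# A jittery decoder output becomes a smooth, bounded command at 100 Hz:
prev, dt = 0.0, 0.01
for raw in (0.1, 0.9, -0.5, 0.8, 0.85):
    prev = limit_intention(raw, prev, dt)
    print(round(prev, 3))   # steps of at most max_rate * dt = 0.02
```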
In summary, multi-source information fusion is not merely a signal-processing problem but a deeply coupled neuro-mechanical systems challenge, where mathematical modeling, computational efficiency, and control-theoretic stability must be simultaneously addressed. The above difficulties highlight critical directions for future research aimed at achieving reliable, real-time, and clinically effective synergistic rehabilitation systems.

4. Synergistic Integration of Physical and Information Interaction

To further substantiate the synergistic mechanisms underlying advanced rehabilitation systems, this section presents a representative example that achieves deep integration between bionic physical structures and multimodal brain–computer interface (BCI) technologies. This system goes beyond conventional "mechanical platform + controller" paradigms by establishing a co-designed neuro-mechanical interaction loop.
At the physical level, the system employs a tendon-driven soft–rigid hybrid exoskeleton that mimics human musculoskeletal architecture. Compliant tendon routing and biomimetic joint designs enable anisotropic stiffness, natural torque generation, and intrinsic shock absorption, thus ensuring comfortable and safe human–robot physical interaction. These mechanical properties also suppress motion-induced artifacts, thereby improving the quality of EEG and EMG signals used for intention decoding.
At the information level, the multimodal BCI integrates EEG for detecting cortical motor intention, EMG for residual voluntary muscle activation, and kinematic priors for contextual prediction. The BCI pipeline is co-designed with the mechanical structure: the compliant actuation stabilizes neural recordings, allowing more sensitive adaptive filtering, while the probabilistic intention decoding directly regulates joint impedance, assistance magnitude, and movement trajectories in real time.
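A minimal sketch of this intention-to-impedance mapping is given below, assuming the decoder outputs a scalar engagement estimate in [0, 1]; the gains, virtual trajectory, and linear scaling law are illustrative assumptions rather than the described system's actual control law:

```python
# Sketch: engagement-scaled joint impedance with a bounded assist torque.
import numpy as np

def impedance_torque(q, dq, q_ref, engagement,
                     k_max=30.0, d_max=2.0, tau_max=10.0):
    """Assistive joint torque (N*m) from an engagement-scaled impedance law."""
    k = k_max * (1.0 - engagement)        # engaged user -> softer guidance
    d = d_max * (1.0 - engagement)
    tau = k * (q_ref - q) - d * dq        # spring-damper toward reference
    return float(np.clip(tau, -tau_max, tau_max))

# High engagement (0.9): the exoskeleton yields; low (0.1): it guides firmly.
print(impedance_torque(q=0.2, dq=0.5, q_ref=0.6, engagement=0.9))  # ~1.1
print(impedance_torque(q=0.2, dq=0.5, q_ref=0.6, engagement=0.1))  # ~9.9
```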
Together, these mechanisms create a bidirectional enhancement loop in which the bionic structure optimizes the informational environment for neural decoding, and the BCI dynamically configures the mechanical response according to the user’s moment-to-moment neural engagement. This demonstrates how synergy—rather than independent functionalities—forms the basis of next-generation neuro-mechanical rehabilitation systems.
Overall, this example highlights that synergy does not arise from the mere coexistence of mechanical design and neural-decoding algorithms, but from their deliberate co-design and mutual reinforcement. The bionic exoskeleton enhances the stability and richness of neural and muscular signals, while the multimodal BCI continuously shapes mechanical behaviors to match the user’s real-time intention. Through this bidirectional reinforcement loop, physical interaction and information interaction become inseparable components of a unified neuro-mechanical system, enabling adaptability, safety, and rehabilitation efficacy beyond what either component could achieve independently.

5. Conclusions

This paper systematically reviewed key human–robot interaction technologies in exoskeleton rehabilitation robots, focusing on both physical human–robot interaction and information-level interaction. From a physical interaction perspective, exoskeleton systems are evolving toward highly biomimetic, lightweight, and compliant designs. Advances in mechanism design, material engineering, and flexible actuation have significantly improved dynamic conformity, wearability, and user comfort, laying the foundation for natural and safe human–robot physical cooperation. From an information interaction perspective, emerging paradigms—including physiological signal-based control (EEG, EMG), multimodal perception, and integrated neuro-perceptual fusion—have greatly enhanced robots’ ability to interpret user intention and understand environmental context, promoting more intelligent, adaptive, and personalized rehabilitation assistance.
Despite these advancements, several challenges remain. Physically, striking a balance between structural compliance and sufficient mechanical output is still difficult, while long-term wearability and user-specific adaptability require further refinement. In information interaction, multimodal fusion still suffers from heterogeneity, asynchrony, noise sensitivity, and high computational complexity, which impact the reliability and real-time performance of closed-loop control. In addition, the integration of neural decoding with mechanical assistance introduces stability challenges that must be carefully addressed through robust control strategies and cross-disciplinary modeling.
Looking forward, future research should aim to develop unified frameworks that tightly integrate biomimetic design principles, soft robotics, multimodal sensing, and intelligent control. Key directions include
  • Advancing multimodal fusion algorithms that are uncertainty-aware, real-time, and computationally efficient.
  • Developing adaptive and personalized control strategies capable of responding to user variability, fatigue, and long-term rehabilitation progress.
  • Enhancing lightweight and flexible wearable design to reduce physical burden and improve daily usability.
  • Establishing large-scale, clinically validated benchmarking datasets and evaluation protocols to support reliable deployment in real-world rehabilitation settings.
In summary, the collaborative evolution of bionic physical structures and intelligent perceptual systems will continue to drive the next generation of exoskeleton rehabilitation robots. Breakthroughs in cross-disciplinary technologies will be essential for building safer, more intuitive, and more effective human–robot interaction systems, ultimately accelerating the clinical translation and widespread adoption of intelligent rehabilitation robotics. Importantly, this review primarily focuses on recent developments in physical and information-level human–robot interaction and does not provide exhaustive coverage of all exoskeleton types, clinical populations, or rehabilitation paradigms. These scope limitations should be considered when interpreting the generalizability of the reviewed findings.

Supplementary Materials

The following supporting information can be downloaded at https://www.mdpi.com/article/10.3390/robotics15010025/s1, Table S1: The PRISMA checklist. Reference [109] is cited in the Supplementary Materials.

Author Contributions

C.F. collected and reviewed the literature, drafted the original manuscript, and prepared the figures and tables. H.Y. conceived the research idea, performed the data analysis, and finalized the manuscript. X.L. contributed to manuscript revision, data collection, and data interpretation. Q.M. designed the study, contributed to manuscript writing, and secured project funding. All authors have read and agreed to the published version of the manuscript.

Funding

The authors gratefully acknowledge the financial support from the National Key Research and Development Program of China (2022YFC3601103).

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Stucki, G.; Bickenbach, J.; Gutenbrunner, C.; Melvin, J.L. Rehabilitation: The Health Strategy of the 21st Century. J. Rehabil. Med. 2018, 50, 309–316. [Google Scholar] [CrossRef]
  2. Mohebbi, A. Human-Robot Interaction in Rehabilitation and Assistance: A Review. Curr. Robot. Rep. 2020, 1, 131–144. [Google Scholar] [CrossRef]
  3. Li, J.; Zuo, S.; Xu, C.; Zhang, L.; Dong, M.; Tao, C.; Ji, R. Influence of a Compatible Design on Physical Human-Robot Interaction Force: A Case Study of a Self-Adapting Lower-Limb Exoskeleton Mechanism. J. Intell. Robot. Syst. 2020, 98, 525–538. [Google Scholar] [CrossRef]
  4. Nazari, F.; Mohajer, N.; Nahavandi, D.; Khosravi, A.; Nahavandi, S. Applied Exoskeleton Technology: A Comprehensive Review of Physical and Cognitive Human–Robot Interaction. IEEE Trans. Cogn. Dev. Syst. 2023, 15, 1102–1122. [Google Scholar] [CrossRef]
  5. Su, H.; Qi, W.; Chen, J.; Yang, C.; Sandoval, J.; Laribi, M.A. Recent Advancements in Multimodal Human–Robot Interaction. Front. Neurorobot. 2023, 17, 1084000. [Google Scholar] [CrossRef]
  6. Belcamino, V. Enhancing Human-Robot Collaboration Through Advanced Human Activity Recognition and Motion Tracking. Doctoral Dissertation, Università degli Studi di Genova, Genova, Italy, 2025. [Google Scholar]
  7. Wang, W.; Li, H.; Xiao, M.; Chu, Y.; Yuan, X.; Ming, X.; Zhang, B. Design and Verification of a Human–Robot Interaction System for Upper Limb Exoskeleton Rehabilitation. Med. Eng. Phys. 2020, 79, 19–25. [Google Scholar]
  8. Hasan, S.; Alam, N. Comprehensive Comparative Analysis of Lower Limb Exoskeleton Research: Control, Design, and Application. Actuators 2025, 14, 342. [Google Scholar] [CrossRef]
  9. Chen, L.; Xie, H.; Liu, Z.; Li, B.; Cheng, H. Exploring Challenges and Opportunities of Wearable Robots: A Comprehensive Review of Design, Human-Robot Interaction and Control Strategy. APSIPA Trans. Signal Inf. Process. 2023, 12, 1–33. [Google Scholar]
  10. Nizamis, K.; Athanasiou, A.; Almpani, S.; Dimitrousis, C.; Astaras, A. Converging Robotic Technologies in Targeted Neural Rehabilitation: A Review of Emerging Solutions and Challenges. Sensors 2021, 21, 2084. [Google Scholar] [CrossRef]
  11. Xiong, J.; Chen, J.; Lee, P.S. Functional Fibers and Fabrics for Soft Robotics, Wearables, and Human–Robot Interface. Adv. Mater. 2021, 33, 2002640. [Google Scholar] [CrossRef]
  12. Wang, T.; Zheng, P.; Li, S.; Wang, L. Multimodal Human–Robot Interaction for Human-centric Smart Manufacturing: A Survey. Adv. Intell. Syst. 2024, 6, 2300359. [Google Scholar] [CrossRef]
  13. Hedayati, H.; Suzuki, R.; Rees, W.; Leithinger, D.; Szafir, D. Designing Expandable-Structure Robots for Human-Robot Interaction. Front. Robot. AI 2022, 9, 719639. [Google Scholar] [CrossRef]
  14. Tong, Y.; Liu, H.; Zhang, Z. Advancements in Humanoid Robots: A Comprehensive Review and Future Prospects. IEEE/CAA J. Autom. Sin. 2024, 11, 301–328. [Google Scholar] [CrossRef]
  15. Copaci, D.; Arias, J.; Moreno, L.; Blanco, D. Shape Memory Alloy (SMA)-Based Exoskeletons for Upper Limb Rehabilitation. In Rehabilitation of the Human Bone-Muscle System; IntechOpen: London, UK, 2022; ISBN 1-80355-166-6. [Google Scholar]
  16. Zuo, K.; Zhang, Y.; Liu, K.; Li, J.; Wang, Y. Design and Experimental Study of a Flexible Finger Rehabilitation Robot Driven by Shape Memory Alloy. Meas. Sci. Technol. 2023, 34, 084004. [Google Scholar] [CrossRef]
  17. Ruth, D.J.S.; Sohn, J.-W.; Dhanalakshmi, K.; Choi, S.-B. Control Aspects of Shape Memory Alloys in Robotics Applications: A Review over the Last Decade. Sensors 2022, 22, 4860. [Google Scholar] [CrossRef] [PubMed]
  18. Tang, T.; Zhang, D.; Xie, T.; Zhu, X. An Exoskeleton System for Hand Rehabilitation Driven by Shape Memory Alloy. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics (ROBIO), Shenzhen, China, 12–14 December 2013; IEEE: New York, NY, USA, 2013; pp. 756–761. [Google Scholar]
  19. Shami, Z.; Arslan, T.; Lomax, P. Wearable Soft Robots: Case Study of Using Shape Memory Alloys in Rehabilitation. Bioengineering 2025, 12, 276. [Google Scholar] [CrossRef] [PubMed]
  20. Xie, Q.; Meng, Q.; Yu, W.; Xu, R.; Wu, Z.; Wang, X.; Yu, H. Design of a Soft Bionic Elbow Exoskeleton Based on Shape Memory Alloy Spring Actuators. Mech. Sci. 2023, 14, 159–170. [Google Scholar] [CrossRef]
  21. Copaci, D.; Cano, E.; Moreno, L.; Blanco, D. New Design of a Soft Robotics Wearable Elbow Exoskeleton Based on Shape Memory Alloy Wire Actuators. Appl. Bionics Biomech. 2017, 2017, 1605101. [Google Scholar] [CrossRef] [PubMed]
  22. Curcio, E.M.; Lago, F.; Carbone, G. Design Models and Performance Analysis for a Novel Shape Memory Alloy-Actuated Wearable Hand Exoskeleton for Rehabilitation. IEEE Robot. Autom. Lett. 2024, 9, 8905–8912. [Google Scholar] [CrossRef]
  23. Giancarlo, V.P. Control Strategy of a Pneumatic Artificial Muscle for an Exoskeleton Application. IFAC-PapersOnLine 2019, 52, 281–286. [Google Scholar] [CrossRef]
  24. Paterna, M.; Magnetti Gisolo, S.; De Benedictis, C.; Muscolo, G.G.; Ferraresi, C. A Passive Upper-Limb Exoskeleton for Industrial Application Based on Pneumatic Artificial Muscles. Mech. Sci. 2022, 13, 387–398. [Google Scholar] [CrossRef]
  25. Andrikopoulos, G.; Nikolakopoulos, G.; Manesis, S. A Survey on Applications of Pneumatic Artificial Muscles. In Proceedings of the 2011 19th Mediterranean Conference on Control & Automation (MED), Corfu, Greece, 20–23 June 2011; IEEE: New York, NY, USA, 2011; pp. 1439–1446. [Google Scholar]
  26. Mirvakili, S.M.; Hunter, I.W. Artificial Muscles: Mechanisms, Applications, and Challenges. Adv. Mater. 2018, 30, 1704407. [Google Scholar] [CrossRef] [PubMed]
  27. Tjahyono, A.P.; Aw, K.C.; Devaraj, H.; Surendra, W.; Haemmerle, E.; Travas-Sejdic, J. A Five-fingered Hand Exoskeleton Driven by Pneumatic Artificial Muscles with Novel Polypyrrole Sensors. Ind. Robot Int. J. 2013, 40, 251–260. [Google Scholar] [CrossRef]
  28. Gao, B.; Wei, C.; Ma, H.; Yang, S.; Ma, X.; Zhang, S. Real-Time Evaluation of the Signal Processing of sEMG Used in Limb Exoskeleton Rehabilitation System. Appl. Bionics Biomech. 2018, 2018, 1391032. [Google Scholar] [CrossRef]
  29. Bouteraa, Y.; Ben Abdallah, I.; Elmogy, A. Design and Control of an Exoskeleton Robot with EMG-Driven Electrical Stimulation for Upper Limb Rehabilitation. Ind. Robot Int. J. Robot. Res. Appl. 2020, 47, 489–501. [Google Scholar] [CrossRef]
  30. Singh, R.M.; Chatterji, S.; Kumar, A. A Review on Surface EMG Based Control Schemes of Exoskeleton Robot in Stroke Rehabilitation. In Proceedings of the 2013 International Conference on Machine Intelligence and Research Advancement, Katra, India, 21–23 December 2013; IEEE: New York, NY, USA, 2013; pp. 310–315. [Google Scholar]
  31. Liu, Y.; Li, X.; Zhu, A.; Zheng, Z.; Zhu, H. Design and Evaluation of a Surface Electromyography-Controlled Lightweight Upper Arm Exoskeleton Rehabilitation Robot. Int. J. Adv. Robot. Syst. 2021, 18, 17298814211003461. [Google Scholar] [CrossRef]
  32. Yin, Y.H.; Fan, Y.J.; Xu, L.D. EMG and EPP-Integrated Human–Machine Interface between the Paralyzed and Rehabilitation Exoskeleton. IEEE Trans. Inf. Technol. Biomed. 2012, 16, 542–549. [Google Scholar] [CrossRef]
  33. Yu, G.; Wang, J.; Chen, W.; Zhang, J. EEG-Based Brain-Controlled Lower Extremity Exoskeleton Rehabilitation Robot. In Proceedings of the 2017 IEEE International Conference on Cybernetics and Intelligent Systems (CIS) and IEEE Conference on Robotics, Automation and Mechatronics (RAM), Ningbo, China, 19–21 November 2017; IEEE: New York, NY, USA, 2017; pp. 763–767.
  34. Hekmatmanesh, A. Investigation of EEG Signal Processing for Rehabilitation Robot Control. Ph.D. Thesis, Lappeenranta-Lahti University of Technology, Lappeenranta, Finland, 2019.
  35. Orban, M.; Elsamanty, M.; Guo, K.; Zhang, S.; Yang, H. A Review of Brain Activity and EEG-Based Brain–Computer Interfaces for Rehabilitation Application. Bioengineering 2022, 9, 768.
  36. Yin, R.; Wang, D.; Zhao, S.; Lou, Z.; Shen, G. Wearable Sensors-Enabled Human–Machine Interaction Systems: From Design to Application. Adv. Funct. Mater. 2021, 31, 2008936.
  37. Küçüktabak, E.B.; Kim, S.J.; Wen, Y.; Lynch, K.; Pons, J.L. Human-Machine-Human Interaction in Motor Control and Rehabilitation: A Review. J. Neuroeng. Rehabil. 2021, 18, 183.
  38. Tan, Z.; Dai, N.; Su, Y.; Zhang, R.; Li, Y.; Wu, D.; Li, S. Human–Machine Interaction in Intelligent and Connected Vehicles: A Review of Status Quo, Issues, and Opportunities. IEEE Trans. Intell. Transp. Syst. 2021, 23, 13954–13975.
  39. Zhang, J.; Wang, B.; Zhang, C.; Xiao, Y.; Wang, M.Y. An EEG/EMG/EOG-Based Multimodal Human-Machine Interface to Real-Time Control of a Soft Robot Hand. Front. Neurorobot. 2019, 13, 7.
  40. Li, L.-L.; Cao, G.-Z.; Liang, H.-J.; Zhang, Y.-P.; Cui, F. Human Lower Limb Motion Intention Recognition for Exoskeletons: A Review. IEEE Sens. J. 2023, 23, 30007–30036.
  41. Pamungkas, D.S.; Caesarendra, W.; Soebakti, H.; Analia, R.; Susanto, S. Overview: Types of Lower Limb Exoskeletons. Electronics 2019, 8, 1283.
  42. Huo, W.G.; Mohammed, S.; Moreno, J.C.; Amirat, Y. Lower Limb Wearable Robots for Assistance and Rehabilitation: A State of the Art. IEEE Syst. J. 2016, 10, 1068–1081.
  43. Benson, I.; Hart, K.; Tussler, D.; van Middendorp, J.J. Lower-Limb Exoskeletons for Individuals with Chronic Spinal Cord Injury: Findings from a Feasibility Study. Clin. Rehabil. 2016, 30, 73–84.
  44. Chen, B.; Zi, B.; Qin, L.; Pan, Q.S. State-of-the-Art Research in Robotic Hip Exoskeletons: A General Review. J. Orthop. Transl. 2020, 20, 4–13.
  45. Aliman, N.; Ramli, R.; Haris, S.M. Design and Development of Lower Limb Exoskeletons: A Survey. Robot. Auton. Syst. 2017, 95, 102–116.
  46. Sanchez-Villamañan, M.D.; Gonzalez-Vargas, J.; Torricelli, D.; Moreno, J.C.; Pons, J.L. Compliant Lower Limb Exoskeletons: A Comprehensive Review on Mechanical Design Principles. J. Neuroeng. Rehabil. 2019, 16, 55.
  47. Beckerle, P.; Salvietti, G.; Unal, R.; Prattichizzo, D.; Rossi, S.; Castellini, C.; Hirche, S.; Endo, S.; Amor, H.B.; Ciocarlie, M. A Human–Robot Interaction Perspective on Assistive and Rehabilitation Robotics. Front. Neurorobot. 2017, 11, 24.
  48. Casas, J.; Cespedes, N.; Múnera, M.; Cifuentes, C.A. Human-Robot Interaction for Rehabilitation Scenarios. In Control Systems Design of Bio-Robotics and Bio-Mechatronics with Advanced Applications; Elsevier: Amsterdam, The Netherlands, 2020; pp. 1–31.
  49. Mayetin, U.; Kucuk, S. Design and Experimental Evaluation of a Low Cost, Portable, 3-DOF Wrist Rehabilitation Robot with High Physical Human–Robot Interaction. J. Intell. Robot. Syst. 2022, 106, 65.
  50. Wang, W.; Liang, X.; Liu, S.; Lin, T.; Zhang, P.; Lv, Z.; Wang, J.; Hou, Z.-G. Drivable Space of Rehabilitation Robot for Physical Human–Robot Interaction: Definition and an Expanding Method. IEEE Trans. Robot. 2022, 39, 343–356.
  51. Guo, Y.; Gu, X.; Yang, G.-Z. Human–Robot Interaction for Rehabilitation Robotics. In Digitalization in Healthcare: Implementing Innovation and Artificial Intelligence; Springer: Amsterdam, The Netherlands, 2021; pp. 269–295.
  52. Han, S.; Wang, H.; Yu, H. Human–Robot Interaction Evaluation-Based AAN Control for Upper Limb Rehabilitation Robots Driven by Series Elastic Actuators. IEEE Trans. Robot. 2023, 39, 3437–3451.
  53. Polygerinos, P.; Correll, N.; Morin, S.A.; Mosadegh, B.; Onal, C.D.; Petersen, K.; Cianchetti, M.; Tolley, M.T.; Shepherd, R.F. Soft Robotics: Review of Fluid-Driven Intrinsically Soft Devices; Manufacturing, Sensing, Control, and Applications in Human–Robot Interaction. Adv. Eng. Mater. 2017, 19, 1700016.
  54. Gonzalez-Vazquez, A.; Garcia, L.; Kilby, J.; McNair, P. Soft Wearable Rehabilitation Robots with Artificial Muscles Based on Smart Materials: A Review. Adv. Intell. Syst. 2023, 5, 2200159.
  55. Gaeta, L.T.; Albayrak, M.D.; Kinnicutt, L.; Aufrichtig, S.; Sultania, P.; Schlegel, H.; Ellis, T.D.; Ranzani, T. A Magnetically Controlled Soft Robotic Glove for Hand Rehabilitation. Device 2024, 2, 100512.
  56. Sui, M.L.; Ouyang, Y.M.; Jin, H.; Chai, Z.Y.; Wei, C.Y.; Li, J.Y.; Xu, M.; Li, W.H.; Wang, L.; Zhang, S.W. A Soft-Packaged and Portable Rehabilitation Glove Capable of Closed-Loop Fine Motor Skills. Nat. Mach. Intell. 2023, 5, 1149–1160.
  57. Polygerinos, P.; Wang, Z.; Overvelde, J.T.B.; Galloway, K.C.; Wood, R.J.; Bertoldi, K.; Walsh, C.J. Modeling of Soft Fiber-Reinforced Bending Actuators. IEEE Trans. Robot. 2015, 31, 778–789.
  58. Wang, J.B.; Fei, Y.Q.; Pang, W. Design, Modeling, and Testing of a Soft Pneumatic Glove with Segmented PneuNets Bending Actuators. IEEE/ASME Trans. Mechatron. 2019, 24, 990–1001.
  59. Connolly, F.; Wagner, D.A.; Walsh, C.J.; Bertoldi, K. Sew-Free Anisotropic Textile Composites for Rapid Design and Manufacturing of Soft Wearable Robots. Extrem. Mech. Lett. 2019, 27, 52–58.
  60. Heo, J.S.; Eom, J.; Kim, Y.-H.; Park, S.K. Recent Progress of Textile-Based Wearable Electronics: A Comprehensive Review of Materials, Devices, and Applications. Small 2018, 14, 1703034.
  61. Yap, H.K.; Khin, P.M.; Koh, T.H.; Sun, Y.; Liang, X.; Lim, J.H.; Yeow, C.H. A Fully Fabric-Based Bidirectional Soft Robotic Glove for Assistance and Rehabilitation of Hand Impaired Patients. IEEE Robot. Autom. Lett. 2017, 2, 1383–1390.
  62. Feng, M.; Yang, D.Z.; Gu, G.Y. High-Force Fabric-Based Pneumatic Actuators with Asymmetric Chambers and Interference-Reinforced Structure for Soft Wearable Assistive Gloves. IEEE Robot. Autom. Lett. 2021, 6, 3105–3111.
  63. Önen, Ü.; Botsalı, F.M.; Kalyoncu, M.; Tınkır, M.; Yılmaz, N.; Şahin, Y. Design and Actuator Selection of a Lower Extremity Exoskeleton. IEEE/ASME Trans. Mechatron. 2013, 19, 623–632.
  64. Kiguchi, K.; Imada, Y.; Liyanage, M. EMG-Based Neuro-Fuzzy Control of a 4DOF Upper-Limb Power-Assist Exoskeleton. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; IEEE: New York, NY, USA, 2007; pp. 3040–3043.
  65. Hong, Y.W.; King, Y.; Yeo, W.; Ting, C.; Chuah, Y.; Lee, J.; Chok, E.-T. Lower Extremity Exoskeleton: Review and Challenges Surrounding the Technology and Its Role in Rehabilitation of Lower Limbs. Aust. J. Basic Appl. Sci. 2013, 7, 520–524.
  66. Guo, S.; Gao, J.; Guo, J.; Zhang, W.; Hu, Y. Design of the Structural Optimization for the Upper Limb Rehabilitation Robot. In Proceedings of the 2016 IEEE International Conference on Mechatronics and Automation, Harbin, China, 7–10 August 2016; IEEE: New York, NY, USA, 2016; pp. 1185–1190.
  67. Wehner, M.; Quinlivan, B.; Aubin, P.M.; Martinez-Villalpando, E.; Baumann, M.; Stirling, L.; Holt, K.; Wood, R.; Walsh, C. A Lightweight Soft Exosuit for Gait Assistance. In Proceedings of the 2013 IEEE International Conference on Robotics and Automation, Karlsruhe, Germany, 6–10 May 2013; pp. 3362–3369.
  68. Asbeck, A.T.; De Rossi, S.M.M.; Galiana, I.; Ding, Y.; Walsh, C.J. Stronger, Smarter, Softer: Next-Generation Wearable Robots. IEEE Robot. Autom. Mag. 2014, 21, 22–33.
  69. Asbeck, A.T.; Schmidt, K.; Walsh, C.J. Soft Exosuit for Hip Assistance. Robot. Auton. Syst. 2015, 73, 102–110.
  70. Park, D.; In, H.; Lee, H.; Lee, S.; Koo, I.; Kang, B.B.; Park, K.; Chang, W.S.; Cho, K.-J. Preliminary Study for a Soft Wearable Knee Extensor to Assist Physically Weak People. In Proceedings of the 2014 11th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), Kuala Lumpur, Malaysia, 12–15 November 2014; IEEE: New York, NY, USA, 2014; pp. 136–137.
  71. Park, Y.L.; Santos, J.; Galloway, K.G.; Goldfield, E.C.; Wood, R.J. A Soft Wearable Robotic Device for Active Knee Motions Using Flat Pneumatic Artificial Muscles. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation, Hong Kong, China, 31 May–7 June 2014; pp. 4805–4810.
  72. Sridar, S.; Nguyen, P.H.; Zhu, M.J.; Lam, Q.P.; Polygerinos, P. Development of a Soft-Inflatable Exosuit for Knee Rehabilitation. In Proceedings of the 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vancouver, BC, Canada, 24–28 September 2017; pp. 3722–3727.
  73. Poddar, S. Design of a Portable Pneumatic Exosuit for Knee Extension Assistance with Gait Sensing Using Fabric-Based Inflatable Insole Sensors. Master’s Thesis, Arizona State University, Tempe, AZ, USA, 2020.
  74. Rudenko, A.; Palmieri, L.; Herman, M.; Kitani, K.M.; Gavrila, D.M.; Arras, K.O. Human Motion Trajectory Prediction: A Survey. Int. J. Robot. Res. 2020, 39, 895–935.
  75. Haddadin, S.; Croft, E. Physical Human–Robot Interaction. In Springer Handbook of Robotics; Springer: Berlin/Heidelberg, Germany, 2016; pp. 1835–1874.
  76. Green, S.A.; Billinghurst, M.; Chen, X.; Chase, J.G. Human-Robot Collaboration: A Literature Review and Augmented Reality Approach in Design. Int. J. Adv. Robot. Syst. 2008, 5, 1.
  77. Yang, C.; Zhu, Y.; Chen, Y. A Review of Human–Machine Cooperation in the Robotics Domain. IEEE Trans. Hum.-Mach. Syst. 2021, 52, 12–25.
  78. Zakharov, A.V.; Bulanov, V.A.; Khivintseva, E.V.; Kolsanov, A.V.; Bushkova, Y.V.; Ivanova, G.E. Stroke Affected Lower Limbs Rehabilitation Combining Virtual Reality with Tactile Feedback. Front. Robot. AI 2020, 7, 81.
  79. Zhou, Z.Q.; Hua, X.Y.; Wu, J.J.; Xu, J.J.; Ren, M.; Shan, C.L.; Xu, J.G. Combined Robot Motor Assistance with Neural Circuit-Based Virtual Reality (NeuCir-VR) Lower Extremity Rehabilitation Training in Patients after Stroke: A Study Protocol for a Single-Centre Randomised Controlled Trial. BMJ Open 2022, 12, e064926.
  80. Chandrasekhar, V.; Vazhayil, V.; Rao, M. Design of a Real-Time Portable Low-Cost Multi-Channel Surface Electromyography System to Aid Neuromuscular Disorder and Post-Stroke Rehabilitation Patients. In Proceedings of the 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Montreal, QC, Canada, 20–24 July 2020; IEEE: New York, NY, USA, 2020; pp. 4138–4142.
  81. Sousa, A.S.P.; Moreira, J.; Silva, C.; Mesquita, I.; Macedo, R.; Silva, A.; Santos, R. Usability of Functional Electrical Stimulation in Upper Limb Rehabilitation in Post-Stroke Patients: A Narrative Review. Sensors 2022, 22, 1409.
  82. Khan, M.A.; Saibene, M.; Das, R.; Brunner, I.; Puthusserypady, S. Emergence of Flexible Technology in Developing Advanced Systems for Post-Stroke Rehabilitation: A Comprehensive Review. J. Neural Eng. 2021, 18, 061003.
  83. Petersen, I.L.; Nowakowska, W.; Ulrich, C.; Struijk, L.N.S.A. A Novel sEMG Triggered FES-Hybrid Robotic Lower Limb Rehabilitation System for Stroke Patients. IEEE Trans. Med. Robot. Bionics 2020, 2, 631–638.
  84. Norouzi-Gheidari, N.; Archambault, P.S.; Monte-Silva, K.; Kairy, D.; Sveistrup, H.; Trivino, M.; Levin, M.F.; Milot, M.H. Feasibility and Preliminary Efficacy of a Combined Virtual Reality, Robotics and Electrical Stimulation Intervention in Upper Extremity Stroke Rehabilitation. J. Neuroeng. Rehabil. 2021, 18, 61.
  85. Niu, C.M.; Bao, Y.; Zhuang, C.; Li, S.; Wang, T.; Cui, L.; Xie, Q.; Lan, N. Synergy-Based FES for Post-Stroke Rehabilitation of Upper-Limb Motor Functions. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 256–264.
  86. Xu, Q.; Huang, T.; He, J.; Wang, Y.; Zhou, H. A Programmable Multi-Channel Stimulator for Array Electrodes in Transcutaneous Electrical Stimulation. In Proceedings of the 2011 IEEE/ICME International Conference on Complex Medical Engineering, Harbin, China, 22–25 May 2011; IEEE: New York, NY, USA, 2011; pp. 652–656.
  87. Barsotti, M.; Leonardis, D.; Vanello, N.; Bergamasco, M.; Frisoli, A. Effects of Continuous Kinaesthetic Feedback Based on Tendon Vibration on Motor Imagery BCI Performance. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 105–114.
  88. Kevric, J.; Subasi, A. Comparison of Signal Decomposition Methods in Classification of EEG Signals for Motor-Imagery BCI System. Biomed. Signal Process. Control 2017, 31, 398–406.
  89. Achanccaray, D.; Acuña, K.; Carranza, E.; Andreu-Perez, J. A Virtual Reality and Brain Computer Interface System for Upper Limb Rehabilitation of Post Stroke Patients. In Proceedings of the 2017 IEEE International Conference on Fuzzy Systems, Naples, Italy, 9–12 July 2017.
  90. Qiu, Z.Y.; Allison, B.Z.; Jin, J.; Zhang, Y.; Wang, X.Y.; Li, W.; Cichocki, A. Optimized Motor Imagery Paradigm Based on Imagining Chinese Characters Writing Movement. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 1009–1017.
  91. Abiri, R.; Borhani, S.; Sellers, E.W.; Jiang, Y.; Zhao, X. A Comprehensive Review of EEG-Based Brain–Computer Interface Paradigms. J. Neural Eng. 2019, 16, 011001.
  92. Wang, Z.; Yu, Y.; Xu, M.; Liu, Y.; Yin, E.; Zhou, Z. Towards a Hybrid BCI Gaming Paradigm Based on Motor Imagery and SSVEP. Int. J. Hum.-Comput. Interact. 2019, 35, 197–205.
  93. Li, K.X.; Zhang, J.H.; Wang, L.F.; Zhang, M.L.; Li, J.Y.; Bao, S.C. A Review of the Key Technologies for sEMG-Based Human–Robot Interaction Systems. Biomed. Signal Process. Control 2020, 62, 102074.
  94. Khairuddin, I.M.; Sidek, S.N.; Majeed, A.P.P.A.; Razman, M.A.M.; Puzi, A.A.; Yusof, H.M. The Classification of Movement Intention through Machine Learning Models: The Identification of Significant Time-Domain EMG Features. PeerJ Comput. Sci. 2021, 7, e379.
  95. Deng, X.Y.; Yu, Z.L.; Lin, C.G.; Gu, Z.H.; Li, Y.Q. A Bayesian Shared Control Approach for Wheelchair Robot with Brain Machine Interface. IEEE Trans. Neural Syst. Rehabil. Eng. 2020, 28, 328–338.
  96. Tieck, J.C.V.; Weber, S.; Stewart, T.C.; Kaiser, J.; Roennau, A.; Dillmann, R. A Spiking Network Classifies Human sEMG Signals and Triggers Finger Reflexes on a Robotic Hand. Robot. Auton. Syst. 2020, 131, 103566.
  97. Chaurasiya, R.K.; Shukla, S.; Sahu, T.P. A Sequential Study of Emotions through EEG Using HMM. In Proceedings of the 2019 IEEE International Conference on Computational Science and Engineering, New York, NY, USA, 1–3 August 2019; pp. 104–109.
  98. Richhariya, B.; Tanveer, M. EEG Signal Classification Using Universum Support Vector Machine. Expert Syst. Appl. 2018, 106, 169–182.
  99. Phinyomark, A.; Khushaba, R.N.; Scheme, E. Feature Extraction and Selection for Myoelectric Control Based on Wearable EMG Sensors. Sensors 2018, 18, 1615.
  100. Malleson, C.; Volino, M.; Gilbert, A.; Trumble, M.; Collomosse, J.; Hilton, A. Real-Time Full-Body Motion Capture from Video and IMUs. In Proceedings of the 2017 International Conference on 3D Vision, Qingdao, China, 10–12 October 2017; pp. 449–457.
  101. Moniruzzaman, M.; Yin, Z.Z.; Bin Hossain, M.S.; Choi, H.; Guo, Z.S. Wearable Motion Capture: Reconstructing and Predicting 3D Human Poses from Wearable Sensors. IEEE J. Biomed. Health Inform. 2023, 27, 5345–5356.
  102. Rizzoglio, F.; Pierella, C.; De Santis, D.; Mussa-Ivaldi, F.; Casadio, M. A Hybrid Body-Machine Interface Integrating Signals from Muscles and Motions. J. Neural Eng. 2020, 17, 046004.
  103. Huang, W.L.; Wang, C.; Zhang, R.H.; Li, Y.Z.; Wu, J.J.; Fei-Fei, L. VoxPoser: Composable 3D Value Maps for Robotic Manipulation with Language Models. In Proceedings of the 7th Conference on Robot Learning, Atlanta, GA, USA, 6–9 November 2023; Volume 229.
  104. Gao, J.; Sarkar, B.; Xia, F.; Xiao, T.; Wu, J.J.; Ichter, B.; Majumdar, A.; Sadigh, D. Physically Grounded Vision-Language Models for Robotic Manipulation. In Proceedings of the 2024 IEEE International Conference on Robotics and Automation, Yokohama, Japan, 13–17 May 2024; pp. 12462–12469.
  105. Ning, Y.; Wang, H.; Liu, Y.; Wang, Q.; Rong, Y.; Niu, J. Design and Analysis of a Compatible Exoskeleton Rehabilitation Robot System Based on Upper Limb Movement Mechanism. Med. Biol. Eng. Comput. 2024, 62, 883–899.
  106. Wang, W.; Ren, H.; Su, S.; Zhang, P.; Zhang, J. A Brain-Inspired Decision-Making Method for Upper Limb Exoskeleton Based on Multi-Brain-Region Structure and Multimodal Information Fusion. Measurement 2025, 241, 115728.
  107. Liu, J.J.; Luo, H.B.; Wu, D.R. Human-Robot Collaboration in Construction: Robot Design, Perception and Interaction, and Task Allocation and Execution. Adv. Eng. Inform. 2025, 65, 103109.
  108. Almohamade, S.; Clark, J.; Law, J. Continuous User Authentication for Human-Robot Collaboration. In Proceedings of the ARES 2021: 16th International Conference on Availability, Reliability and Security, Vienna, Austria, 17–20 August 2021.
  109. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
Figure 1. PRISMA flow diagram of the literature screening and selection process.
Figure 2. Physical interaction in rehabilitation robots.
Figure 3. Human–robot interaction of hand rehabilitation exoskeleton.
Figure 4. Human–robot interaction of rehabilitation exoskeleton.
Figure 5. Multi-sensory feedback information fusion exoskeleton rehabilitation robot.
Figure 6. Bioelectrical signal exoskeleton rehabilitation robot.
Figure 7. Schematic diagram of multimodal interaction sensing technology and biosignal fusion.
Table 1. Comparison of human–robot interaction modalities. (A minimal sketch of a late-fusion rule for the multimodal row follows the table.)

Type | Information Interaction Method | Advantages | Disadvantages
Physical interaction | Mechanical structure | Bionic and lightweight structural design; high wearability and biomechanical mobility; reduced energy consumption and improved efficiency; adaptability to various body sizes and shapes; minimization of secondary injuries | Increased design complexity; material technology limitations
Information interaction | Multi-sensory feedback (visual perception, voice interaction, electrical stimulation feedback, haptic force feedback) | Rich interactive information; high user engagement; can be combined with VR to enhance entertainment value | Susceptible to external environmental influences
Information interaction | Physiological electrical signals: electroencephalogram (EEG), electromyogram (EMG), electrooculogram (EOG) | Enables advanced intention recognition | Low signal-to-noise ratio; poor comfort and portability
Information interaction | Multimodal approach | More robust and perceptive than single-mode information interaction | Algorithm complexity; difficulty in multi-source information fusion and integration
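As a purely illustrative sketch (not a method taken from the reviewed studies), the snippet below shows one common late-fusion pattern behind the multimodal row of Table 1: per-modality class probabilities, for example from separate EEG and sEMG intent decoders, are combined by reliability-weighted averaging before a single decision is made. All names, weights, and scores here are hypothetical placeholders.

```python
# Illustrative late-fusion sketch for multimodal intent recognition.
# All modality names, weights, and scores are hypothetical.
import numpy as np

def fuse_intent_scores(scores_by_modality, reliability):
    """Return the index of the fused intent class.

    scores_by_modality: modality name -> class-probability vector.
    reliability: modality name -> scalar weight (e.g., derived from each
    decoder's validation accuracy); weights are normalized before fusion.
    """
    names = list(scores_by_modality)
    w = np.array([reliability[n] for n in names], dtype=float)
    w /= w.sum()  # normalize reliability weights
    stacked = np.stack([scores_by_modality[n] for n in names])  # (M, C)
    fused = w @ stacked  # reliability-weighted average over modalities
    return int(np.argmax(fused))

# Hypothetical example with three intent classes (rest, flex, extend):
scores = {"eeg": np.array([0.5, 0.3, 0.2]),
          "semg": np.array([0.1, 0.7, 0.2])}
print(fuse_intent_scores(scores, {"eeg": 0.6, "semg": 0.9}))  # -> 1 (flex)
```

The weighting lets a noisier channel (here EEG, consistent with the low signal-to-noise disadvantage listed above) contribute without dominating the decision; more elaborate fusion schemes replace the fixed weights with learned or context-dependent ones.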
Table 2. Databases used and respective search entries.

Database | Search Query
EI | TX = (“exoskeleton” OR “rehabilitation”) AND TX = (“physical interaction” OR “human-robot interaction”) AND TX = (“information interaction” OR “intent detection” OR “multi-modal interaction” OR “control synergy”)
IEEE Xplore | KEY (“exoskeleton” OR “rehabilitation robotics”) AND KEY (“physical interaction” OR “human robot interaction”) AND KEY (“information interaction” OR “intent recognition” OR “assistive control”)
PubMed | (“exoskeleton” OR “rehabilitation” [All Fields]) AND (“physical interaction” OR “human-robot interaction” [All Fields]) AND (“information interaction” OR “neural intention detection” OR “control strategies” [All Fields])
Web of Science | TS = (“exoskeleton” OR “rehabilitation robot”) AND TS = (“physical interaction” OR “human robot interaction” OR “HRI” OR “impedance control”) AND TS = (“information fusion” OR “sensor fusion” OR “information interaction” OR “multi-modal interaction”)
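For reproducibility, Boolean strategies like those in Table 2 can also be composed programmatically, so the same concept groups can be re-tagged for databases with different field syntaxes. The sketch below is a minimal illustration rather than any database's API; the helper name and field-tag formatting are assumptions, and it rebuilds the Web of Science TS query from its three concept groups.

```python
# Illustrative sketch: composing the Web of Science topic (TS) query of
# Table 2 from reusable concept groups. Not a database API; the helper
# name and formatting are assumptions for illustration.
CONCEPT_GROUPS = [
    ["exoskeleton", "rehabilitation robot"],
    ["physical interaction", "human robot interaction", "HRI",
     "impedance control"],
    ["information fusion", "sensor fusion", "information interaction",
     "multi-modal interaction"],
]

def build_query(groups, field_tag="TS"):
    """Join OR-groups of quoted terms with AND, prefixing each with a tag."""
    clauses = []
    for terms in groups:
        ored = " OR ".join(f'"{t}"' for t in terms)
        clauses.append(f"{field_tag} = ({ored})")
    return " AND ".join(clauses)

print(build_query(CONCEPT_GROUPS))  # reproduces the Web of Science row
```

Changing `field_tag` (e.g., to "TX" for EI) regenerates the corresponding row of Table 2, which helps keep search strategies consistent when a review is updated.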
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
