Article

A Vector-Based Motion Retargeting Approach for Exoskeletons with Shoulder Girdle Mechanism

Jiajia Wang, Shuo Pei, Junlong Guo, Mingsong Bao and Yufeng Yao

1 State Key Laboratory of Robotics and System, Harbin Institute of Technology, Harbin 150001, China
2 Department of Mechanical Engineering, School of Naval Architecture and Ocean Engineering, Harbin Institute of Technology (Weihai), Weihai 264200, China
3 Shandong Guoxing Smartech Co., Ltd., Qingdao 365500, China
* Authors to whom correspondence should be addressed.
Biomimetics 2025, 10(5), 312; https://doi.org/10.3390/biomimetics10050312
Submission received: 20 March 2025 / Revised: 29 April 2025 / Accepted: 10 May 2025 / Published: 12 May 2025
(This article belongs to the Special Issue Bionic Wearable Robotics and Intelligent Assistive Technologies)

Abstract: The shoulder girdle plays a dominant role in coordinating the natural movements of the upper arm. Inverse kinematics, optimization, and data-driven approaches are usually used to conduct motion retargeting; however, these methods do not consider shoulder girdle movement. When the kinematic structures of the human and the exoskeleton share a similar joint configuration, analytical motion retargeting becomes feasible for exoskeletons with a shoulder girdle mechanism. This paper proposes a vector-based analytical motion retargeting approach for such exoskeletons, which maps the vectors of the upper limb segments to the joint space. Simulation results using four different motion descriptions confirm the method's accuracy and efficiency.

1. Introduction

Exoskeletons are rapidly advancing technologies with applications in rehabilitation [1,2,3,4], teleoperation [5,6,7,8], and human augmentation [9,10,11]. Generating natural, efficient, and dynamically stable human-like motions is crucial, but remains a fundamental challenge due to differences in kinematics, dynamics, and actuation constraints between the human and the robotic system.
Motion retargeting, the process of transferring human motion to a different embodiment, such as an exoskeleton, is an effective method for generating human-like motion. Traditionally, motion retargeting is performed by manually defining a mapping between two different morphologies (e.g., a human actor and an exoskeleton), using methods based on inverse kinematics (IK) and optimization techniques. IK methods can be categorized into differential IK and analytical IK. Riley et al. employed an iterative differential approach to map hierarchical human feature points to robot joints [12]. Dariush et al. extended this approach by incorporating joint limit constraints and self-collision avoidance penalties to generate physically feasible robot motion [13]. For Spherical-Rotational-Spherical (SRS) structured robots, analytical IK can provide efficient closed-form solutions for mapping the human end-effector pose to a robotic arm, enabling motion retargeting [14,15,16]. However, mapping only the end-effector pose may lead to significant differences in intermediate joints, such as the elbow. Yang et al. addressed this issue by proposing an anthropomorphic motion retargeting framework that introduces the normalized normal vector of the arm plane to determine the elbow position [17]. Optimization-based methods model motion retargeting as a constrained optimization problem to ensure the generated motion respects the robot's physical limits. Suleiman et al. formulated an optimization problem that considers joint limits, torque constraints, and dynamic feasibility, and solved it recursively using a dynamics-based optimization algorithm [18]. Similarly, He et al. minimized joint position differences between humans and humanoid robots using the Adam optimizer to implement motion retargeting [5]. However, these iterative methods suffer from high computational costs and are not suitable for real-time applications.
On the other hand, data-driven motion retargeting has been used to circumvent the manual mapping process by leveraging machine learning methods. Such learning-based methods enjoy flexibility and scalability, as they reduce the need for extensive domain knowledge and the tedious tuning required to define pose features properly. Owing to these merits, a human pose can be mapped to an exoskeleton pose by employing a statistical model called the shared Gaussian process latent variable model (GPLVM) [19]. Many data-driven techniques have relied on GPLVM to construct shared latent spaces between the two motion domains [20,21,22]. Yin et al. proposed associate latent encoding (ALE), which uses two different variational auto-encoders (VAEs) with a single shared latent space [23]. A similar idea was extended in [24] by incorporating negative examples to aid safety constraints. However, one clear drawback of data-driven motion retargeting methods is the need to gather a sufficiently large dataset beforehand.
Most existing IK-based, optimization-based, and data-driven methods focus on the human arm but do not account for shoulder girdle movement. The natural movements of the upper arm are strongly coordinated with those of the shoulder girdle, as captured by the scapulohumeral rhythm (SHR). However, few studies have incorporated shoulder girdle kinematics into their retargeting frameworks, limiting their ability to cover the full range of upper-limb movements. In addition, when the kinematic structures of the human and the exoskeleton share a similar joint configuration, analytical methods can provide computational efficiency and high accuracy. Moreover, different studies adopt specific motion representation methods, such as joint positions [25]; end-effector poses; and kinematic parameters including shoulder girdle angles, swivel angle, and wrist position [26]. Our research focuses on developing a unified approach to map these diverse representations into the joint space of exoskeletons.
In this paper, a vector-based analytical motion retargeting approach for exoskeletons with a shoulder girdle mechanism is proposed. Our method leverages vector-based kinematic transformations to achieve efficient and accurate motion retargeting, eliminating the computational burden of optimization-based methods and the heavy data dependency of data-driven approaches. Moreover, the proposed method supports multiple input motion representations, enabling its applicability to a wide range of tasks. The contributions of this paper are as follows:
  • A vector-based analytical motion retargeting approach is proposed for exoskeletons with a shoulder girdle mechanism, mapping the vectors of the upper limb segments to the joint space with high computational efficiency and precision.
  • The approach can accommodate four motion representation methods: (a) joint positions; (b) the end-effector (wrist) pose; (c) shoulder girdle angles, swivel angle, and wrist position (SGASAWP); and (d) polynomial descriptions of the SHR, swivel angle, and wrist position (SHRSAWP).
The remainder of this paper is organized as follows: Section 2 summarizes the kinematic structure of the upper limb exoskeleton. Section 3 presents the vector-based analytical motion retargeting approach. Section 4 discusses how different motion representation methods can be mapped into the joint space using the approach. Section 5 verifies the effectiveness and universality of the proposed approach through simulation experiments. Finally, Section 6 presents conclusions and outlines future work.

2. The Kinematic Structure of the Upper Limb Exoskeleton

The human upper limb consists of multiple joints with a wide range of motion and flexibility. However, multiple joints present challenges in determining the degrees of freedom (DOF) arrangement for exoskeletons. The shoulder girdle alone comprises four joints [27], making it inefficient and unnecessary to replicate each joint’s motion individually in an exoskeleton. The movement of the shoulder girdle contributes significantly to the translational motion of the glenohumeral (GH) joint, particularly during elevation–depression and protraction–retraction. Studies have shown that the trajectory of the GH joint follows two circular arcs, with the distance between the axes of rotation deviating by less than 3 mm [28]. Based on this observation, the exoskeleton can use two orthogonal rotational joints to support the movement of the shoulder girdle. For the GH joint, which is a ball-and-socket joint, the exoskeleton can employ three intersecting rotational joints to mimic its motion. The elbow joint, which supports forearm flexion and extension, can be modeled as a single revolute joint with its axis perpendicular to the plane defined by the upper arm and forearm.
The exoskeleton FREE II is a six-DOF exoskeleton with a shoulder girdle mechanism utilizing parallelograms [29]. The configuration of FREE II is illustrated in Figure 1. The axis $Z_{2'}$, which is parallel to $Z_2$ but rotates in the opposite direction, has been added to model the kinematics of the parallelogram mechanism. To express the Denavit–Hartenberg (DH) parameters $d$ and $a$ solely in terms of the lengths of the upper arm and forearm, axis $Z_{5'}$ is also added. The DH parameters of FREE II are listed in Table 1 [30].
The forward kinematics (FK) of FREE II can be derived as follows:

$$ {}^{0}_{\mathrm{end}}T = {}^{0}_{1}T \, {}^{1}_{2}T \, {}^{2}_{2'}T \, {}^{2'}_{3}T \, {}^{3}_{4}T \, {}^{4}_{5}T \, {}^{5}_{5'}T \, {}^{5'}_{6}T \, {}^{6}_{\mathrm{end}}T \tag{1} $$

where

$$ {}^{i-1}_{i}T = \begin{bmatrix} \cos\theta_i & -\sin\theta_i & 0 & a_{i-1} \\ \sin\theta_i \cos\alpha_{i-1} & \cos\theta_i \cos\alpha_{i-1} & -\sin\alpha_{i-1} & -\sin\alpha_{i-1}\, d_i \\ \sin\theta_i \sin\alpha_{i-1} & \cos\theta_i \sin\alpha_{i-1} & \cos\alpha_{i-1} & \cos\alpha_{i-1}\, d_i \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{2} $$

$$ {}^{5}_{5'}T = \begin{bmatrix} \cos\beta_2 & 0 & \sin\beta_2 & 0 \\ 0 & 1 & 0 & 0 \\ -\sin\beta_2 & 0 & \cos\beta_2 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \tag{3} $$
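For reference, the composition in Equations (1)-(3) can be prototyped directly. The following Python/NumPy sketch is an illustrative reimplementation, not the authors' MATLAB/C++ code; the row ordering and the index at which the fixed 5 to 5′ rotation is spliced in follow Table 1 but are assumptions here.

```python
import numpy as np

def dh_transform(alpha_prev, a_prev, d, theta):
    """Modified DH transform from frame i-1 to frame i (Equation (2)).
    Angles in radians, lengths in consistent units."""
    ca, sa = np.cos(alpha_prev), np.sin(alpha_prev)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([[ct,      -st,      0.0,  a_prev],
                     [st * ca,  ct * ca, -sa,  -sa * d],
                     [st * sa,  ct * sa,  ca,   ca * d],
                     [0.0,      0.0,      0.0,  1.0]])

def rot_y(beta):
    """Fixed rotation about y by beta: the 5 -> 5' step of Equation (3)."""
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([[ cb, 0.0,  sb, 0.0],
                     [0.0, 1.0, 0.0, 0.0],
                     [-sb, 0.0,  cb, 0.0],
                     [0.0, 0.0, 0.0, 1.0]])

def forward_kinematics(dh_rows, beta2):
    """Chain Equation (1). dh_rows holds (alpha_prev, a_prev, d, theta)
    for frames 1, 2, 2', 3, 4, 5, 6, end; rot_y(beta2) is spliced in
    between frames 5 and 6 (the 5' step, an assumed index here)."""
    T = np.eye(4)
    for k, row in enumerate(dh_rows):
        if k == 6:  # before the frame-6 row, i.e., the 5 -> 5' rotation
            T = T @ rot_y(beta2)
        T = T @ dh_transform(*row)
    return T
```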

3. The Vector-Based Analytical Motion Retargeting Approach

Owing to the alignment of the anatomical and robotic joint axes, along with the rigid human–robot connection, the orientations of the joints and links in the exoskeleton remain fixed relative to the vectors of the upper limb segments of the wearer. Consequently, given the vectors of the upper limb, the joint orientations of the exoskeleton can be determined, thereby defining the configuration of the upper limb exoskeleton. The vectors of the shoulder girdle, upper arm, and forearm are denoted $\mathbf{e}_{SG}$, $\mathbf{e}_{GE}$, and $\mathbf{e}_{EW}$, respectively. The relationships between the orientations of the joints and links in the exoskeleton and $\mathbf{e}_{SG}$, $\mathbf{e}_{GE}$, and $\mathbf{e}_{EW}$ are as follows:
The vectors of the upper arm and forearm links in the exoskeleton are parallel to the wearer's upper arm and forearm, respectively, i.e., $\mathbf{e}_{GE} = X_{5'}$ and $\mathbf{e}_{EW} = Y_6 = Z_{end}$.
The elbow joint axis must be perpendicular to the vectors of the upper arm and forearm. The orientation of the elbow joint is determined by:

$$ Z_6 = \frac{\mathbf{e}_{GE} \times \mathbf{e}_{EW}}{\lVert \mathbf{e}_{GE} \times \mathbf{e}_{EW} \rVert} \tag{4} $$
Given two non-parallel vectors located on the upper arm link of the exoskeleton, $Z_6$ and $X_{5'}$, the posture of the upper arm link is uniquely determined. As illustrated in Figure 1b, the vectors $Z_{5'}$ and $Z_5$, both located on the upper arm link, can be calculated using the following expressions:

$$ Z_{5'} = \mathrm{Rot}(\mathbf{e}_{GE}, \alpha_{5'})\, Z_6 \tag{5} $$

$$ Z_5 = \mathrm{Rot}(Y_{5'}, \beta_2)\, Z_{5'} \tag{6} $$

where $Y_{5'} = Z_{5'} \times X_{5'}$.
The axis of the second joint, $Z_2$, is perpendicular to the shoulder girdle vector $\mathbf{e}_{SG}$ and to $Z_1$. Thus, $Z_2$ is determined by:

$$ Z_{2'} = Z_2 = \frac{\mathbf{e}_{SG} \times Z_1}{\lVert \mathbf{e}_{SG} \times Z_1 \rVert} \tag{7} $$

where $Z_{2'}$ is parallel to $Z_2$ and $Z_0 = Z_1 = [0 \ 0 \ 1]^T$.
The axis of the third joint, $Z_3$, remains fixed relative to $\mathbf{e}_{SG}^{xy}$, the unit vector of $\mathbf{e}_{SG}$ projected onto the XY-plane, due to the presence of the parallelogram mechanism. Therefore, $Z_3$ can be determined by:

$$ Z_3 = \mathrm{Rot}(X_{2'}, \alpha_{2'})\, Z_{2'} \tag{8} $$

where $X_{2'} = \mathrm{Rot}(Z_{2'}, \beta_1)\, \mathbf{e}_{SG}^{xy}$.
Given that the links are rigid bodies, the angles between $Z_4$ and $Z_3$, as well as between $Z_4$ and $Z_5$, remain constant. Therefore, $Z_4$ can be determined using Equation (9):

$$ \begin{cases} Z_3 \cdot Z_4 = \cos\alpha_3 \\ Z_4 \cdot Z_5 = \cos\alpha_4 \\ \lVert Z_4 \rVert = 1 \end{cases} \tag{9} $$

By expressing $z_{4x}$ and $z_{4y}$ in terms of $z_{4z}$ using the first two equations of Equation (9) and substituting them into the third, the system reduces to a quadratic equation in $z_{4z}$, which can then be solved for $Z_4$. A diagram of the two possible solutions of $Z_4$ is shown in Figure 2.
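The reduction of Equation (9) to a quadratic in $z_{4z}$ can be written compactly. The sketch below is a Python/NumPy illustration that assumes the 2x2 sub-matrix built from the x- and y-components of $Z_3$ and $Z_5$ is invertible (a degenerate alignment would require eliminating a different component); it returns both candidates of Figure 2.

```python
import numpy as np

def solve_z4(z3, z5, alpha3, alpha4):
    """Solve Equation (9): Z3.Z4 = cos(alpha3), Z4.Z5 = cos(alpha4), |Z4| = 1.
    Returns the (up to) two unit-vector solutions shown in Figure 2."""
    A = np.array([[z3[0], z3[1]],
                  [z5[0], z5[1]]])
    rhs0 = np.array([np.cos(alpha3), np.cos(alpha4)])
    rhs1 = -np.array([z3[2], z5[2]])
    Ainv = np.linalg.inv(A)
    p = Ainv @ rhs0          # (z4x, z4y) = p + q * z4z
    q = Ainv @ rhs1
    # Unit-norm condition: (1 + |q|^2) z^2 + 2 (p.q) z + (|p|^2 - 1) = 0
    a = 1.0 + q @ q
    b = 2.0 * (p @ q)
    c = p @ p - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return []            # constraints are inconsistent
    sols = []
    for z in ((-b + np.sqrt(disc)) / (2 * a), (-b - np.sqrt(disc)) / (2 * a)):
        xy = p + q * z
        sols.append(np.array([xy[0], xy[1], z]))
    return sols
```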
According to the DH criterion, after obtaining $Z_i$, the axis $X_i$ can be obtained as the unit vector perpendicular to the plane formed by $Z_i$ and $Z_{i+1}$:

$$ X_i = \mathrm{sign}(\alpha_i)\, \frac{Z_i \times Z_{i+1}}{\lVert Z_i \times Z_{i+1} \rVert} \tag{10} $$

where $i = 2, 3, 4$.
To obtain $X_5$, $X_{5'}$ is rotated around $Y_5$ by an angle $\beta_2$, as follows:

$$ X_5 = \mathrm{Rot}(Y_5, \beta_2)\, X_{5'} \tag{11} $$
Then $\theta_i$ is the angle between $X_{i-1}$ and $X_i$:

$$ \theta_i = \arccos\left( X_{i-1} \cdot X_i \right) \tag{12} $$

where $i = 3, 4, 5, 6$.
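Equations (10) and (12) translate into a few lines. Note that $\arccos$ alone yields an unsigned angle, so in practice a sign convention (e.g., taken from the joint axis $Z_i$) would be applied; this illustrative sketch leaves that step out.

```python
import numpy as np

def x_axis(z_i, z_next, alpha_i):
    """Equation (10): X_i is the unit normal of the plane spanned by Z_i
    and Z_{i+1}, with its direction fixed by the sign of the twist alpha_i."""
    n = np.cross(z_i, z_next)
    return np.sign(alpha_i) * n / np.linalg.norm(n)

def joint_angle(x_prev, x_curr):
    """Equation (12): theta_i = acos(X_{i-1} . X_i). The clip guards the
    acos against dot products just outside [-1, 1] due to round-off."""
    return np.arccos(np.clip(x_prev @ x_curr, -1.0, 1.0))
```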

4. Mapping Different Motion Representation Methods into the Joint Space Using the Approach

4.1. Joint Positions

When the motion representation method is joint (GH, elbow, and wrist) positions, the vectors of the shoulder girdle $\mathbf{e}_{SG}$, upper arm $\mathbf{e}_{GE}$, and forearm $\mathbf{e}_{EW}$ can be obtained by:

$$ \mathbf{e}_{SG} = \frac{P_{GH}}{\lVert P_{GH} \rVert}, \quad \mathbf{e}_{GE} = \frac{P_{EB} - P_{GH}}{\lVert P_{EB} - P_{GH} \rVert}, \quad \mathbf{e}_{EW} = \frac{P_{W} - P_{EB}}{\lVert P_{W} - P_{EB} \rVert} \tag{13} $$

Then $\theta_i$ can be determined using the approach proposed in Section 3.
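Since Equation (13) amounts to three vector normalizations, a minimal sketch suffices (assuming, as Equation (13) implies, that the base frame origin sits at the proximal end of the shoulder girdle, so that $P_{GH}$ itself gives the girdle direction):

```python
import numpy as np

def unit(v):
    """Normalize a 3-vector."""
    return v / np.linalg.norm(v)

def segment_vectors(p_gh, p_eb, p_w):
    """Equation (13): unit vectors of the shoulder girdle, upper arm,
    and forearm from the GH, elbow, and wrist positions."""
    e_sg = unit(p_gh)          # base origin -> GH joint
    e_ge = unit(p_eb - p_gh)   # GH joint -> elbow
    e_ew = unit(p_w - p_eb)    # elbow -> wrist
    return e_sg, e_ge, e_ew
```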

4.2. End-Effector Pose

When the motion representation method is the end-effector pose, the vector of the forearm can be determined as follows:

$$ \mathbf{e}_{EW} = Z_{end} \tag{14} $$

where $Z_{end} = {}^{0}_{\mathrm{end}}T(1{:}3, 3)$.
The upper arm, the shoulder girdle, and $\mathbf{e}_{SE}$ form a triangle, and the angle between $\mathbf{e}_{GE}$ and $\mathbf{e}_{SE}$ is denoted $\gamma_{GH}$, as shown in Figure 3. $Y_{end}$ is parallel to $Z_6$ and perpendicular to the plane formed by $\mathbf{e}_{GE}$ and $\mathbf{e}_{EW}$. $\mathbf{e}_{GE}$ can be solved using three constraints:

$$ \begin{cases} \mathbf{e}_{GE} \cdot \mathbf{e}_{SE} = \cos\gamma_{GH} \\ \mathbf{e}_{GE} \cdot Y_{end} = 0 \\ \lVert \mathbf{e}_{GE} \rVert = 1 \end{cases} \tag{15} $$

where $\cos\gamma_{GH} = \frac{l_{SE}^2 + l_{GE}^2 - l_{SG}^2}{2\, l_{SE}\, l_{GE}}$, $l_{SE} = \lVert P_W - \mathbf{e}_{EW}\, l_{EW} \rVert$, $\mathbf{e}_{SE} = \frac{P_W - \mathbf{e}_{EW}\, l_{EW}}{l_{SE}}$, $P_W = {}^{0}_{\mathrm{end}}T(1{:}3, 4)$, and $Y_{end} = {}^{0}_{\mathrm{end}}T(1{:}3, 2)$. A diagram of the two solutions of $\mathbf{e}_{GE}$ is shown in Figure 4.
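One way to solve Equation (15) in closed form (an illustrative construction, not necessarily the authors' derivation) is to work in an orthonormal basis of the plane perpendicular to $Y_{end}$: project $\mathbf{e}_{SE}$ into that plane to obtain $\mathbf{u}$, complete the basis with $\mathbf{v} = Y_{end} \times \mathbf{u}$, and recover the two solutions of Figure 4:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def solve_e_ge(e_se, y_end, cos_gamma_gh):
    """Solve Equation (15): e_GE.e_SE = cos(gamma_GH), e_GE.Y_end = 0,
    |e_GE| = 1. Assumes e_SE is not parallel to Y_end. Returns the two
    cone-plane intersection solutions of Figure 4."""
    u = unit(e_se - (e_se @ y_end) * y_end)  # e_SE projected into the plane
    v = np.cross(y_end, u)                   # completes the in-plane basis
    c = cos_gamma_gh / (e_se @ u)            # in-plane cosine coordinate
    if abs(c) > 1.0:
        return []                            # cone and plane do not intersect
    s = np.sqrt(1.0 - c * c)
    return [c * u + s * v, c * u - s * v]
```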
The vector of the shoulder girdle can then be obtained using $\mathbf{e}_{GE}$ and $\mathbf{e}_{EW}$:

$$ \mathbf{e}_{SG} = \frac{P_W - \mathbf{e}_{GE}\, l_{GE} - \mathbf{e}_{EW}\, l_{EW}}{l_{SG}} \tag{16} $$

Finally, $\theta_i$ can be obtained using $\mathbf{e}_{SG}$, $\mathbf{e}_{GE}$, and $\mathbf{e}_{EW}$, as detailed in Section 3.

4.3. SGASAWP

When the motion representation method is SGASAWP, the vectors $\mathbf{e}_{SG}$, $\mathbf{e}_{GE}$, and $\mathbf{e}_{EW}$ can be determined by first calculating the joint positions.
Given the shoulder girdle angles $\theta_1$ and $\theta_2$, the position of the GH joint can be expressed as:

$$ P_{GH} = \begin{bmatrix} p_{GH}^{x} \\ p_{GH}^{y} \\ p_{GH}^{z} \end{bmatrix} = \begin{bmatrix} l_{SG}\, c_1 c_2 \\ l_{SG}\, s_1 c_2 \\ l_{SG}\, s_2 \end{bmatrix} \tag{17} $$

where $c_i = \cos\theta_i$ and $s_i = \sin\theta_i$.
The set of possible $P_{EB}$ lies on a circle, as shown in Figure 5. The circle radius is:

$$ r = l_{GE} \sin\gamma_{EB} = l_{GE} \sqrt{1 - \cos^2\gamma_{EB}} \tag{18} $$

where $\cos\gamma_{EB} = \frac{l_{GW}^2 + l_{GE}^2 - l_{EW}^2}{2\, l_{GW}\, l_{GE}}$ and $l_{GW} = \lVert P_W - P_{GH} \rVert$.
The circle center can be obtained using the following expression:

$$ O = P_{GH} + l_{GE} \cos(\gamma_{EB})\, \mathbf{e}_{GW} \tag{19} $$

where $\mathbf{e}_{GW} = \frac{P_W - P_{GH}}{\lVert P_W - P_{GH} \rVert}$.
Given the swivel angle $\phi_{EB}$, $P_{EB}$ can be solved by:

$$ P_{EB} = O + r\, \mathbf{e}_{EB} \tag{20} $$

where $\mathbf{e}_{EB} = \mathbf{a} \cos\phi_{EB} + \mathbf{b} \sin\phi_{EB}$, $\mathbf{b} = \frac{\mathbf{e}_{GW} \times (-Y)}{\lVert \mathbf{e}_{GW} \times (-Y) \rVert}$, and $\mathbf{a} = \mathbf{b} \times \mathbf{e}_{GW}$.
After obtaining $P_{GH}$, $P_{EB}$, and $P_W$, the vectors of the shoulder girdle $\mathbf{e}_{SG}$, upper arm $\mathbf{e}_{GE}$, and forearm $\mathbf{e}_{EW}$ can be solved by Equation (13). Finally, $\theta_i$ can be obtained as detailed in Section 3.
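Equations (17)-(20) chain together as follows. This Python sketch assumes angles in radians and the global $-Y$ reference vector reconstructed in Equation (20):

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def gh_position(theta1, theta2, l_sg):
    """Equation (17): GH joint position from the shoulder girdle angles."""
    return l_sg * np.array([np.cos(theta1) * np.cos(theta2),
                            np.sin(theta1) * np.cos(theta2),
                            np.sin(theta2)])

def elbow_from_swivel(p_gh, phi_eb, p_w, l_ge, l_ew):
    """Equations (18)-(20): elbow position on the swivel circle.
    Assumes e_GW is not parallel to the global -Y reference vector."""
    d = p_w - p_gh
    l_gw = np.linalg.norm(d)
    e_gw = d / l_gw
    cos_g = (l_gw**2 + l_ge**2 - l_ew**2) / (2.0 * l_gw * l_ge)
    r = l_ge * np.sqrt(max(0.0, 1.0 - cos_g**2))   # circle radius, Eq. (18)
    center = p_gh + l_ge * cos_g * e_gw            # circle center, Eq. (19)
    b = unit(np.cross(e_gw, np.array([0.0, -1.0, 0.0])))
    a = np.cross(b, e_gw)
    return center + r * (np.cos(phi_eb) * a + np.sin(phi_eb) * b)  # Eq. (20)
```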

4.4. SHRSAWP

When the motion representation method is SHRSAWP, both $\theta_1$ and $\theta_2$ need to be determined from the feasible solutions to best match the SHR.
The polynomial fitting method is commonly used to model the coupled motion of the shoulder girdle with the elevation of the upper arm. One such model was selected to validate our algorithm:

$$ \begin{cases} \bar{\theta}_1 = 4.34 \times 10^{-5}\, \theta_E^3 - 3.21 \times 10^{-3}\, \theta_E^2 + 0.1\, \theta_E - 0.06 \\ \bar{\theta}_2 = 5.28 \times 10^{-7}\, \theta_E^4 + 7 \times 10^{-5}\, \theta_E^3 - 3.92 \times 10^{-3}\, \theta_E^2 + 0.04\, \theta_E + 0.13 \end{cases} \tag{21} $$

where the elevation angle of the upper arm is $\theta_E = \arccos\left( \frac{p_{EB}^{y} - p_{GH}^{y}}{l_{GE}} \right)$.
Both $\theta_1$ and $\theta_2$ can be determined by minimizing the objective function:

$$ \mathrm{err}(\theta_1, \theta_2) = \left| \theta_1 - \bar{\theta}_1 \right| + \left| \theta_2 - \bar{\theta}_2 \right| \tag{22} $$

Then $\theta_i$ can be obtained as detailed in Section 4.3.
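Equations (21) and (22) can be prototyped with a derivative-free optimizer, since the absolute-value objective is non-smooth. The sketch below reuses gh_position and elbow_from_swivel from the Section 4.3 sketch; treating $\theta_E$ in degrees inside the polynomials is an assumption, as is the use of SciPy's Nelder-Mead method here.

```python
import numpy as np
from scipy.optimize import minimize

def shr_reference(theta_e_deg):
    """SHR polynomials of Equation (21); theta_E taken in degrees here."""
    t1 = (4.34e-5 * theta_e_deg**3 - 3.21e-3 * theta_e_deg**2
          + 0.1 * theta_e_deg - 0.06)
    t2 = (5.28e-7 * theta_e_deg**4 + 7e-5 * theta_e_deg**3
          - 3.92e-3 * theta_e_deg**2 + 0.04 * theta_e_deg + 0.13)
    return t1, t2

def shr_error(x, phi_eb, p_w, l_sg, l_ge, l_ew):
    """Objective of Equation (22), built on the Section 4.3 sketch."""
    theta1, theta2 = x
    p_gh = gh_position(theta1, theta2, l_sg)
    p_eb = elbow_from_swivel(p_gh, phi_eb, p_w, l_ge, l_ew)
    cos_e = np.clip((p_eb[1] - p_gh[1]) / l_ge, -1.0, 1.0)
    theta_e = np.degrees(np.arccos(cos_e))  # elevation angle of the upper arm
    t1_ref, t2_ref = shr_reference(theta_e)
    return abs(theta1 - t1_ref) + abs(theta2 - t2_ref)

# Nelder-Mead is chosen because the objective is non-smooth:
# res = minimize(shr_error, x0=np.zeros(2),
#                args=(phi_eb, p_w, l_sg, l_ge, l_ew), method="Nelder-Mead")
```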

5. Numerical Simulation

This section focuses on the validation of the proposed approach. FREE II was selected as the simulation object. The simulation results for joint positions, the end-effector pose, SGASAWP, and SHRSAWP are described in Section 5.1, Section 5.2, Section 5.3, and Section 5.4, respectively. The time consumption and sources of error are discussed in Section 5.5. The algorithm was run on a computer with an Intel i9-12900H processor (Intel, Santa Clara, CA, USA) and 16 GB RAM.

5.1. Joint Positions

Firstly, the expected trajectories of $P_{GH}$, $P_{EB}$, and $P_W$ were obtained using a motion capture system (Mars2H, Nokov, Nosálov, Czech Republic; sampling rate: 60 Hz). These expected trajectories were then used to determine the IK solutions $\theta_i$ using the method described in Section 4.1. The determined IK solutions $\theta_i$ were finally substituted into the FK to calculate the joint positions, which were compared with the expected ones. The resolution of the exoskeleton joints was considered to improve the realism of the simulation.
When the motion representation method is joint positions, there are two different sets of IK solutions. One set of results is displayed in Figure 6. The corresponding joint angles are shown in Figure 6a. Figure 6b illustrates the expected and calculated trajectories of the joint positions, while Figure 6c presents the associated errors. The link lengths of the exoskeleton were fixed, whereas the human upper limb segment lengths captured via the motion capture system varied with soft tissue artifacts and marker displacement. This discrepancy introduces joint position errors; nevertheless, the maximum mean position error remained within 4.7 mm.
The CLIK method employs a differential approach to map human feature points to robot joints, and introduces a closed-loop error term to minimize the Cartesian error. A comparison between our algorithm and the CLIK method is presented in Table 2. Compared to the CLIK method, our approach demonstrates superior accuracy and efficiency.

5.2. End-Effector Pose

Firstly, the expected end-effector pose trajectories were obtained by using a set of IK solutions from Section 5.1 as input to the FK. The expected trajectories were then used to obtain the IK solutions $\theta_i$ using the method described in Section 4.2. Finally, the calculated trajectories were determined by substituting the computed IK solutions $\theta_i$ into the FK and compared with the expected ones.
When the motion representation method is the end-effector pose, there are four different sets of IK solutions. A randomly selected set of results is displayed in Figure 7, where “EXP.” and “Cal.” in the legend stand for “expected” and “calculated”, respectively. The corresponding joint angles are shown in Figure 7a. Figure 7b illustrates the expected and calculated end-effector position and orientation, while Figure 7c presents the associated errors. The expected and calculated trajectories of the wrist position and orientation aligned closely with each other: the mean position error was approximately 0.018 mm, and the mean Euler angle error was approximately $9.07 \times 10^{-5}$ rad.
A comparison between our algorithm and the Jacobian-based method is shown in Table 3. Compared to the Jacobian-based method, the proposed vector-based approach demonstrated superior performance in terms of solving speed and accuracy. Although the Jacobian-based method can improve accuracy by lowering its error threshold, its efficiency tends to decrease as a result.

5.3. SGASAWP

Firstly, the expected trajectories of the shoulder girdle angles, swivel angle, and wrist positions were obtained by using a set of IK solutions from Section 5.1 as input to the FK. The expected trajectories were then used to determine the IK solutions $\theta_i$ using the method described in Section 4.3. Finally, the calculated trajectories of the shoulder girdle angles, swivel angle, and wrist positions were determined by substituting the computed IK solutions $\theta_i$ into the FK and compared with the expected ones.
When the motion representation method is SGASAWP, there are two different sets of IK solutions. A randomly selected set of results is shown in Figure 8. The corresponding joint angles are displayed in Figure 8a. Figure 8b illustrates the expected and calculated trajectories of the shoulder girdle angles, swivel angle, and wrist positions, while Figure 8c presents the associated errors. The expected and calculated trajectories aligned closely with each other: the mean position error was about 0.017 mm, and the mean swivel angle error was about $6.0 \times 10^{-5}$ rad.

5.4. SHRSAWP

Firstly, the expected trajectories of the swivel angle and wrist positions were obtained by using a set of IK solutions from Section 5.1 as input to the FK. The expected trajectories were then used to determine the IK solutions $\theta_i$ using the method described in Section 4.4. The IK solutions $\theta_i$ were subsequently used to calculate the elevation angle of the upper arm $\theta_E$ and the trajectories of the shoulder girdle angles, swivel angle, and wrist positions through the FK. The expected shoulder girdle angles $\theta_1$ and $\theta_2$ were obtained using the polynomial representation of the SHR. Finally, the calculated trajectories were compared with the expected ones.
When the motion representation method is SHRSAWP, there are two different sets of IK solutions. A randomly selected set is displayed in Figure 9. The corresponding joint angles are illustrated in Figure 9a. The expected and calculated trajectories of the shoulder girdle angles, swivel angle, and wrist positions are shown in Figure 9b, while the associated errors are depicted in Figure 9c. The trajectories aligned closely with one another: the mean position error was approximately 0.018 mm, and the mean errors in the shoulder girdle angles were approximately $4.50 \times 10^{-4}$ and $4.19 \times 10^{-5}$ rad. A comparison between our algorithm and the GEAA method is shown in Table 4. Upper arm movements inherently induce shoulder girdle motion, and shoulder girdle motion in turn affects the upper arm motion; the SHR therefore introduces an additional layer of complexity to motion retargeting. In the GEAA method, the relationship between the shoulder girdle angles and the upper arm elevation angle strictly follows predefined joint rhythms, and the upper arm configuration is solved through optimization. In our approach, the shoulder girdle angles are solved using an optimization process, while the upper arm configuration is computed analytically. Motion retargeting can thus be achieved while the SHR is maintained by minimizing the error of the shoulder girdle joint angles. Compared to the GEAA, our algorithm achieved faster solving and greater accuracy because it reduces the number of optimization parameters and algebraic equations.

5.5. Discussion

The primary source of error in the proposed approach was the accumulated numerical error in evaluating the inverse trigonometric functions. The approach was implemented in MATLAB R2023b and in C++ (2019). The calculation times of the four motion representation methods are listed in Table 5. The algorithm's computation time increased with the number of inverse solution sets. Compared to the joint position representation, when the motion representation method was the end-effector pose, both the number of inverse solutions and the computation time nearly doubled. In the SGASAWP method, since the shoulder girdle angles were given, the computation time was shortest. Conversely, in the SHRSAWP method, where an optimization algorithm was employed, the longest computation time was still limited to 15 ms. When the approach was implemented in C++ on the industrial PC CX6015-0100 (Beckhoff Automation Inc., Verl, Germany) under the TwinCAT 3 software environment, the time consumption ranged from 0.0014 ms to 0.04 ms. Although the m-file implementation ran slower by about one order of magnitude, it can still be used for real-time control.

6. Conclusions

A vector-based analytical motion retargeting approach for exoskeletons with a shoulder girdle mechanism was proposed. Given the vectors of the upper limb segments, the approach leveraged the alignment of the anatomical and robotic joint axes, along with the rigid human–robot connection, to determine the orientation of each exoskeleton joint. The approach can map four upper limb motion representation methods to the joint space of the exoskeleton. The simulation results validated its accuracy and efficiency: the computation times for the four motion representation methods were 0.0145 ms, 0.0236 ms, 0.0127 ms, and 13.5 ms, respectively. When the motion representation was joint positions, the fixed link lengths of the exoskeleton did not match the varying lengths of the human upper limb segments, so the positional error was relatively larger. For the other three representations, the maximum angular error and positional error remained within $4.5 \times 10^{-4}$ rad and 0.018 mm, respectively.
Addressing the challenges associated with upper limb rehabilitation with full wrist freedom is the focus of future research. In addition, a Kalman filter will be employed to mitigate the noise encountered during joint position acquisition.

Author Contributions

Conceptualization: Y.Y.; Methodology: J.W., S.P. and M.B.; Validation: J.G.; Writing, literature review and editing: J.W. and J.G. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Major Research Plan of the National Natural Science Foundation of China (Grant No. 91648106).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The research data are not publicly available as the study is ongoing. However, the data can be requested for academic reasons by contacting hitw0423@163.com.

Acknowledgments

The authors would like to thank all subjects who participated in experiments and the members of the exoskeleton team.

Conflicts of Interest

Mingsong Bao was employed by the Shandong Guoxing Smartech Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Mancisidor, A.; Zubizarreta, A.; Cabanes, I.; Bengoa, P.; Jung, J.H. Kinematical and dynamical modeling of a multipurpose upper limbs rehabilitation robot. Robot.-Comput.-Integr. Manuf. 2018, 49, 374–387. [Google Scholar] [CrossRef]
  2. Kim, B.; Deshpande, A.D. An upper-body rehabilitation exoskeleton Harmony with an anatomical shoulder mechanism: Design, modeling, control, and performance evaluation. Int. J. Robot. Res. 2017, 36, 414–435. [Google Scholar] [CrossRef]
  3. Zimmermann, Y.; Sommerhalder, M.; Wolf, P.; Riener, R.; Hutter, M. ANYexo 2.0: A Fully Actuated Upper-Limb Exoskeleton for Manipulation and Joint-Oriented Training in All Stages of Rehabilitation. IEEE Trans. Robot. 2023, 39, 2131–2150. [Google Scholar] [CrossRef]
  4. Feng, Y.; Wang, P.; Wu, C. Variable tensile stiffness pneumatic actuators with adjustable stick-slip friction of soft-tooth structures. Mater. Des. 2025, 253, 113860. [Google Scholar] [CrossRef]
  5. He, T.; Luo, Z.; Xiao, W.; Zhang, C.; Kitani, K.; Liu, C.; Shi, G. Learning Human-to-Humanoid Real-Time Whole-Body Teleoperation. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; pp. 8944–8951. [Google Scholar] [CrossRef]
  6. Toedtheide, A.; Chen, X.; Sadeghian, H.; Naceri, A.; Haddadin, S. A Force-Sensitive Exoskeleton for Teleoperation: An Application in Elderly Care Robotics. In Proceedings of the 2023 IEEE International Conference on Robotics and Automation (ICRA), London, UK, 29 May–2 June 2023; pp. 12624–12630. [Google Scholar] [CrossRef]
  7. Jo, I.; Park, Y.; Bae, J. A teleoperation system with an exoskeleton interface. In Proceedings of the 2013 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, Wollongong, Australia, 9–12 July 2013; pp. 1649–1654. [Google Scholar] [CrossRef]
  8. Cheng, C.; Dai, W.; Wu, T.; Chen, X.; Wu, M.; Yu, J.; Jiang, J.; Lu, H. Efficient and Precise Homo-Hetero Teleoperation Based on an Optimized Upper Limb Exoskeleton. IEEE/ASME Trans. Mechatronics 2024. [Google Scholar] [CrossRef]
  9. Kim, Y.G.; Little, K.; Noronha, B.; Xiloyannis, M.; Masia, L.; Accoto, D. A voice activated bi-articular exosuit for upper limb assistance during lifting tasks. Robot.-Comput.-Integr. Manuf. 2020, 66, 101995. [Google Scholar] [CrossRef]
  10. Liu, L.; Zhang, Y.; Liu, G.; Xu, W. Variable motion mapping to enhance stiffness discrimination and identification in robot hand teleoperation. Robot.-Comput.-Integr. Manuf. 2018, 51, 202–208. [Google Scholar] [CrossRef]
  11. Walsh, C.J.; Paluska, D.; Pasch, K.; Grand, W.; Herr, H.M. Development of a lightweight, underactuated exoskeleton for load-carrying augmentation. In Proceedings of the IEEE International Conference on Robotics & Automation, Orlando, FL, USA, 15–19 May 2006. [Google Scholar]
  12. Riley, M.; Ude, A.; Wade, K.; Atkeson, C. Enabling real-time full-body imitation: A natural way of transferring human movement to humanoids. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation (Cat. No.03CH37422), Taipei, Taiwan, 14–19 September 2003; Volume 2, pp. 2368–2374. [Google Scholar] [CrossRef]
  13. Dariush, B.; Gienger, M.; Arumbakkam, A.; Goerick, C.; Zhu, Y.; Fujimura, K. Online and markerless motion retargeting with kinematic constraints. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 191–198. [Google Scholar] [CrossRef]
  14. Asfour, T.; Dillmann, R. Human-like motion of a humanoid robot arm based on a closed-form solution of the inverse kinematics problem. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), Las Vegas, NV, USA, 27 October–1 November 2003; Volume 2, pp. 1407–1412. [Google Scholar] [CrossRef]
  15. Shimizu, M.; Kakuya, H.; Yoon, W.K.; Kitagaki, K.; Kosuge, K. Analytical Inverse Kinematic Computation for 7-DOF Redundant Manipulators With Joint Limits and Its Application to Redundancy Resolution. IEEE Trans. Robot. 2008, 24, 1131–1142. [Google Scholar] [CrossRef]
  16. Liu, W.; Chen, D.; Steil, J. Analytical inverse kinematics solver for anthropomorphic 7-DOF redundant manipulators with human-like configuration constraints. J. Intell. Robot. Syst. 2017, 86, 63–79. [Google Scholar] [CrossRef]
  17. Yang, Z.; Bien, S.; Nertinger, S.; Naceri, A.; Haddadin, S. An Optimization-based Scheme for Real-time Transfer of Human Arm Motion to Robot Arm. In Proceedings of the 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Abu Dhabi, United Arab Emirates, 14–18 October 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 12220–12225. [Google Scholar] [CrossRef]
  18. Suleiman, W.; Yoshida, E.; Kanehiro, F.; Laumond, J.P.; Monin, A. On human motion imitation by humanoid robot. In Proceedings of the 2008 IEEE International Conference on Robotics and Automation, Pasadena, CA, USA, 19–23 May 2008; IEEE: Piscataway, NJ, USA, 2008; pp. 2697–2704. [Google Scholar] [CrossRef]
  19. Yamane, K.; Ariki, Y.; Hodgins, J. Animating non-humanoid characters with human motion data. In Proceedings of the 2010 ACM SIGGRAPH/Eurographics Symposium on Computer Animation, Madrid, Spain, 2–4 July 2010; pp. 169–178. [Google Scholar] [CrossRef]
  20. Lawrence, N. Gaussian process latent variable models for visualisation of high dimensional data. Adv. Neural Inf. Process. Syst. 2003, 16. Available online: https://proceedings.neurips.cc/paper/2003/hash/9657c1fffd38824e5ab0472e022e577e-Abstract.html (accessed on 9 May 2024).
  21. Shon, A.; Grochow, K.; Hertzmann, A.; Rao, R.P. Learning shared latent structure for image synthesis and robotic imitation. Adv. Neural Inf. Process. Syst. 2005, 18. Available online: https://papers.nips.cc/paper_files/paper/2005/hash/030e65da2b1c944090548d36b244b28d-Abstract.html (accessed on 9 May 2024).
  22. Chen, Q.; Wang, T.; Yang, Z.; Li, H.; Lu, R.; Sun, Y.; Zheng, B.; Yan, C. SDPL: Shifting-Dense Partition Learning for UAV-view Geo-localization. IEEE Trans. Circuits Syst. Video Technol. 2024, 34, 11810–11824. [Google Scholar] [CrossRef]
  23. Yin, H.; Melo, F.; Billard, A.; Paiva, A. Associate latent encodings in learning from demonstrations. In Proceedings of the AAAI Conference on Artificial Intelligence, San Francisco, CA, USA, 4–9 February 2017; Volume 31. [Google Scholar] [CrossRef]
  24. Choi, S.; Kim, J. Cross-domain motion transfer via safety-aware shared latent space modeling. IEEE Robot. Autom. Lett. 2020, 5, 2634–2641. [Google Scholar] [CrossRef]
  25. Jarrassé, N.; Crocher, V.; Morel, G. A Method for measuring the upper limb motion and computing a compatible exoskeleton trajectory. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal, 7–12 October 2012; pp. 3461–3466. [Google Scholar]
  26. Zarrin, R.S.; Zeiaee, A.; Langari, R.; Buchanan, J.J.; Robson, N. Towards autonomous ergonomic upper-limb exoskeletons: A computational approach for planning a human-like path. Robot. Auton. Syst. 2021, 145, 103843. [Google Scholar] [CrossRef]
  27. Miniato, M.A.; Anand, P.; Varacallo, M. Anatomy, shoulder and upper limb, shoulder. In StatPearls; StatPearls Publishing: St. Petersburg, FL, USA, 2021. [Google Scholar]
  28. Zimmermann, Y.; Forino, A.; Riener, R.; Hutter, M. ANYexo: A Versatile and Dynamic Upper-Limb Rehabilitation Robot. IEEE Robot. Autom. Lett. 2019, 4, 3649–3656. [Google Scholar] [CrossRef]
  29. Pei, S.; Wang, J.; Yang, Y.; Dong, A.; Guo, B.; Guo, J.; Yao, Y. A Human-Centered Kinematics Design Optimization of Upper Limb Rehabilitation Exoskeleton Based on Configuration Manifold. IEEE Open J. Comput. Soc. 2025, 6, 282–293. [Google Scholar] [CrossRef]
  30. Denavit, J.; Hartenberg, R.S. A kinematic notation for lower-pair mechanisms based on matrices. J. Appl. Mech. 1955, 22, 215–221. [Google Scholar] [CrossRef]
  31. Siciliano, B. Robotics: Modelling, Planning and Control (G.-L. Zhang Trans.); Xi’an Jiaotong University Press: Xi’an, China, 2016. [Google Scholar]
Figure 1. The configuration of FREE II. (a) Mapping between the human and FREE II. (b) Detailed view of the upper arm link. (c) Top view of FREE II.
Figure 2. Diagram of the two solutions of $Z_4$. The angle between $Z_4$ and $Z_3$ is $\alpha_3$, and the angle between $Z_4$ and $Z_5$ is $\alpha_4$.
Figure 3. Constraint on the orientation of the upper arm $\mathbf{e}_{GE}$. The angle between $\mathbf{e}_{GE}$ and $\mathbf{e}_{SE}$ is $\gamma_{GH}$, and $Y_{end}$ is perpendicular to $\mathbf{e}_{GE}$.
Figure 4. Two possible configurations of the upper limb. The angle between $\mathbf{e}_{GE}$ and $\mathbf{e}_{SE}$ is $\gamma_{GH}$, indicating that $\mathbf{e}_{GE}$ lies on the generatrix of a cone with $\mathbf{e}_{SE}$ as its axis and a half-angle of $\gamma_{GH}$. Additionally, since $\mathbf{e}_{GE}$ must also lie within a plane perpendicular to $Y_{end}$ (the gray plane), its possible positions are confined to the intersection of the cone and the plane, i.e., $\mathbf{e}_{GE1}$ and $\mathbf{e}_{GE2}$.
Figure 5. Diagram of the swivel angle. Given the positions of the GH joint and the wrist, the set of possible $P_{EB}$ lies on the dashed circle. The swivel angle $\phi_{EB}$ is the angle between $\mathbf{a}$ and $\mathbf{e}_{EB}$; $\gamma_{EB}$ is the angle between $\mathbf{e}_{GW}$ and $\mathbf{e}_{GE}$.
Figure 6. Simulation results when the motion representation method is joint positions. (a) One randomly selected set of IK solutions. (b) Trajectories of the expected and calculated joint positions. (c) Errors of the joint positions.
Figure 7. Simulation results when the motion representation method is the end-effector pose. (a) One randomly selected set of IK solutions. (b) Trajectories of the expected and calculated end-effector pose. (c) Errors of the position and Euler angles.
Figure 8. Simulation results when the motion representation method is SGASAWP. (a) One randomly selected set of IK solutions. (b) Trajectories of the expected and calculated $\theta_1$, $\theta_2$, $\phi_{EB}$, and $P_W$. (c) Errors of $\theta_1$, $\theta_2$, $\phi_{EB}$, and $P_W$.
Figure 9. Simulation results when the motion representation method is SHRSAWP. (a) One randomly selected set of IK solutions. (b) Trajectories of the expected and calculated $\theta_1$, $\theta_2$, $\phi_{EB}$, and $P_W$. (c) Errors of $\theta_1$, $\theta_2$, $\phi_{EB}$, and $P_W$.
Table 1. DH parameters of FREE II.

i | $\alpha_{i-1}$ (°) | $a_{i-1}$ | $d_i$ | $\theta_i$
1 | 0 | 0 | 0 | $\theta_1$
2 | 90 | 0 | 0 | $\theta_2$
2′ | 0 | $l_{SG}$ | 0 | $\beta_1 - \theta_2$
3 | 86.47 | 0 | 0 | $\theta_3$
4 | −90 | 0 | 0 | $\theta_4$
5 | 85 | 0 | 0 | $\theta_5$
5′ | rotation with respect to $y_5$ by $\beta_2$
6 | −95 | $l_{GE}$ | 0 | $\theta_6$
End | −90 | 0 | $l_{EW}$ | 0
Table 2. Comparison with CLIK when the motion representation method is joint positions.

Method | Mean Error of $P_{GH}$ (mm) | Mean Error of $P_{EB}$ (mm) | Mean Error of $P_W$ (mm) | Time (ms)
Vector method | 1.44 | 3.94 | 4.73 | 0.0145
CLIK [13] | 3.86 | 4.49 | 6.90 | 0.1891
Table 3. Comparison with the Jacobian-based method when the motion representation method is the end-effector pose.

Method | Mean Position Error (mm) | Mean Euler Angle Error (rad) | Calculation Time (ms)
Vector method | 0.018 | $9.07 \times 10^{-5}$ | 0.0236
Jacobian-based method [31] | 0.022 | $1.11 \times 10^{-4}$ | 1.3
Table 4. Comparison with the GEAA method when the motion representation method is SHRSAWP.

Method | Mean Position Error (mm) | Mean Error of $\theta_1$ (rad) | Mean Error of $\theta_2$ (rad) | Calculation Time (ms)
Vector method | 0.018 | $4.50 \times 10^{-4}$ | $4.19 \times 10^{-5}$ | 13.5
GEAA [26] | 0.021 | 0.039 | $6.41 \times 10^{-5}$ | 27.5
Table 5. Calculation time of the four motion representation methods.

Representation Method | Joint Positions | End-Effector Pose | SGASAWP | SHRSAWP
m-file (ms) | 0.0145 | 0.0236 | 0.0127 | 13.5
C++ (ms) | 0.0015 | 0.0030 | 0.0014 | 0.04